Dataset columns:

| column | type | length range |
| --- | --- | --- |
| id | string | 10 to 10 |
| title | string | 3 to 246 |
| abstract | string | 3 to 3.32k |
| authors | string | 5 to 21.5k |
| published_date | string | 20 to 20 |
| link | string | 33 to 34 |
| markdown | string | 140 to 1.08M |
2309.12735
Optimal Dynamic Fees for Blockchain Resources
We develop a general and practical framework to address the problem of the optimal design of dynamic fee mechanisms for multiple blockchain resources. Our framework allows us to compute policies that optimally trade off adjusting resource prices to handle persistent demand shifts against being robust to local noise in the observed block demand. In the general case with more than one resource, our optimal policies correctly handle cross-effects (complementarity and substitutability) in resource demands. We also show how these cross-effects can be used to inform resource design, i.e., combining resources into bundles that have low demand-side cross-effects can yield simpler and more efficient price-update rules. Our framework is also practical: we demonstrate how it can be used to refine or inform the design of heuristic fee update rules such as EIP-1559 or EIP-4844 with two case studies. We then estimate a uni-dimensional version of our model using real market data from the Ethereum blockchain and empirically compare the performance of our optimal policies to EIP-1559.
Davide Crapis, Ciamac C. Moallemi, Shouqiao Wang
2023-09-22T09:34:33Z
http://arxiv.org/abs/2309.12735v1
# Optimal Dynamic Fees for Blockchain Resources ###### Abstract We develop a general and practical framework to address the problem of the optimal design of dynamic fee mechanisms for multiple blockchain resources. Our framework allows us to compute policies that optimally trade off adjusting resource prices to handle persistent demand shifts against being robust to local noise in the observed block demand. In the general case with more than one resource, our optimal policies correctly handle cross-effects (complementarity and substitutability) in resource demands. We also show how these cross-effects can be used to inform resource design, _i.e._, combining resources into bundles that have low demand-side cross-effects can yield simpler and more efficient price-update rules. Our framework is also practical: we demonstrate how it can be used to refine or inform the design of heuristic fee update rules such as EIP-1559 or EIP-4844 with two case studies. We then estimate a uni-dimensional version of our model using real market data from the Ethereum blockchain and empirically compare the performance of our optimal policies to EIP-1559. ## 1 Introduction Users of public permissionless blockchains can modify the shared state of the network through _transactions_ that are executed by a set of nodes with limited computational resources. To allocate resources among competing transactions, most blockchains use _transaction fees_. Initial transaction fee mechanisms in the Bitcoin and Ethereum blockchains relied on users bidding for transaction inclusion as the main way of pricing congestion. Moreover, all computational resources were bundled into a unique virtual resource ("gas") with fixed relative prices hardcoded in the protocol. Current R&D efforts are focused on improving transaction fee markets along two directions: (1) setting a minimum _dynamic base fee_ (henceforth also called _price_) that is adjusted by the protocol as a function of user demand and (2) _unbundling resources_ so that different resources can be individually priced and their relative prices can also efficiently adjust with demand. In this paper, we propose a new framework for choosing a resource pricing policy that makes significant progress across both directions. We consider the practical problem of a blockchain protocol that has to jointly update the prices of multiple resources at every block. We assume that the type of resources being metered and priced, as well as the block limits and sustainable targets for each resource, are pre-determined. These higher-level decisions are the outcome of a design process that has interesting political, economic, and engineering considerations but are outside the current scope of our framework1. Footnote 1: We note that some of our results do offer some insights on the resource design problem that we will briefly discuss. Our framework is both general and practical. Our main results characterize theoretically optimal policies in a realistic setting with multiple resources and time-varying demand. Our results can be used in two ways: (i) the policies can be _directly_ implemented as we demonstrate, or (ii) insights from our main results can be used to construct and refine heuristics that approximate optimal policies.
The latter point is particularly important in the blockchain environment, where, especially at Layer 1, the price computation itself is significantly resource constrained2. Footnote 2: Layer 2s, depending on their architecture, can perhaps implement price policies that require more computation and are closer to the optimal ones. We designed our framework with the following properties in mind:
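As a concrete reference point for the heuristic fee-update rules mentioned above, the sketch below (Python, illustrative only) implements the familiar EIP-1559 base-fee update and a naive multi-resource analogue. The constants and the `update_prices` gain-matrix form are assumptions for illustration, not the optimal policies derived in the paper.

```python
# Illustrative sketch of an EIP-1559-style base-fee update (not the paper's optimal policy).

GAS_LIMIT = 30_000_000
GAS_TARGET = GAS_LIMIT // 2           # sustainable target: half of the block limit
MAX_CHANGE_DENOMINATOR = 8            # base fee moves by at most ~1/8 per block

def update_base_fee(base_fee: int, gas_used: int) -> int:
    """One EIP-1559 step: nudge the price toward the target block utilization."""
    delta = base_fee * (gas_used - GAS_TARGET) // (GAS_TARGET * MAX_CHANGE_DENOMINATOR)
    return max(base_fee + delta, 0)

def update_prices(prices, usage, targets, gain):
    """Hypothetical multi-resource analogue: a linear update with a gain matrix whose
    off-diagonal entries could encode demand cross-effects between resources."""
    return [max(p + sum(gain[i][j] * (usage[j] - targets[j]) for j in range(len(prices))), 0.0)
            for i, p in enumerate(prices)]

# Example: a block 50% above target pushes the base fee up by ~6.25%.
print(update_base_fee(10_000_000_000, 22_500_000))
```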
2302.00043
Extended linear-in-$T$ resistivity due to electron-phason scattering in moiré superlattices
Due to its incommensurate nature, moiré superlattices host not only acoustic phonons but also another type of soft collective modes called phasons. Here, we investigate the impact of electron-phason scattering on the transport properties of moiré systems. We show that the resistivity can scale linearly with temperature down to temperatures much lower than the Bloch-Grüneisen scale defined by electron kinematics on the Fermi surface. This result stems from the friction between layers, which transfers phason spectral weight to a broad diffusive low-energy peak in the mechanical response of the system. As a result, phason scattering becomes a very efficient channel for entropy production at low temperatures. We also consider the contributions of phasons to thermodynamic properties at low temperatures and find a "metallic-like" linear-in-$T$ behavior for the specific heat, despite the fact that this behavior is due to mechanical and not electronic degrees of freedom. We discuss the implications of this finding to reports of linear-in-$T$ resistivity in the phase diagram of twisted bilayer graphene.
Héctor Ochoa, Rafael M. Fernandes
2023-01-31T19:22:24Z
http://arxiv.org/abs/2302.00043v2
# Extended linear-in-\(T\) resistivity due to electron-phason scattering in moire superlattices ###### Abstract Due to its incommensurate nature, moire superlattices host not only acoustic phonons but also another type of soft collective modes called phasons. Here, we investigate the impact of electron-phason scattering on the transport properties of moire systems. We show that the resistivity can scale linearly with temperature down to temperatures much lower than the Bloch-Gruneisen scale defined by electron kinematics on the Fermi surface. This result stems from the friction between layers, which transfers phason spectral weight to a broad diffusive low-energy peak in the mechanical response of the system. As a result, phason scattering becomes a very efficient channel for entropy production at low temperatures. We discuss the implications of this finding to reports of linear-in-\(T\) resistivity in the phase diagram of twisted bilayer graphene. _Introduction_. Elucidating the nature of the metallic state of twisted moire systems, from which correlated insulating and superconducting phases emerge [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], is crucial to shed light on the microscopic ingredients governing the interplay between these phases [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. In the metallic phase of twisted bilayer graphene (TBG), puzzling features are seen both in its electronic spectrum, manifested as so-called cascade transitions [51; 52], and in its transport properties. Indeed, while not exceeding \(h/e^{2}\), relatively large resistivity values are observed, of about several k\(\Omega\)[53; 54; 55]. Most strikingly, a resistivity that changes linearly with temperature is observed down to very low temperatures and over a wide range of carrier concentrations - even when correlations are suppressed by screening [55]. On the one hand, this observation of a linear-in-\(T\) resistivity is reminiscent of the phenomenology of strange metals, which are often associated with quantum critical points (QCP) in correlated electron systems [56; 57; 58]. On the other hand, electron-acoustic phonon scattering is known to promote linear-in-\(T\) resistivity down to the Bloch-Gruneisen temperature, \(T_{\rm BG}\), or the Debye temperature, \(T_{\rm D}\)[59; 60; 61; 62]. Both scenarios face difficulties: the fact that the linear-in-\(T\) behavior extends over a broad doping range, rather than inside a cone emanating from a single point, is inconsistent with the standard QCP scenario. In the phonon scenario, the large in-plane rigidity and low mass density of the graphene layers lead to sound velocities \(c_{s}\sim 10^{4}\) m/s, rendering the temperature scales \(T_{\rm BG}\) and \(T_{\rm D}\) relatively large compared to the temperatures for which linear-in-\(T\) behavior is observed. One important aspect of this problem that has remained little explored is the fact that, besides acoustic phonons emerging from the displacement of the center-of-mass of the bilayer, TBG and other moire superlattices also possess another family of acoustic modes arising from the relative displacement between the layers. The latter describe the vibrations of the moire pattern as a whole, and thus are sometimes dubbed _moire phonons_[63]. However, in contrast to conventional acoustic phonons, these modes are generally overdamped at long wavelengths, since the relative momentum between the layers is not a conserved quantity. This is analogous to the phason excitations of incommensurate lattices [64; 65; 66; 67; 68]. As such, because a moire superlattice is generally an incommensurate lattice, these moire modes have been identified as _phasons_[69; 70; 71; 72]. Importantly, the dynamical mechanical response function \(\chi_{s}\) of the bilayer at low frequencies is dominated by the two acoustic phason branches (transverse and longitudinal, labelled by \(s\)) with dispersion \(\omega_{s,\mathbf{q}}=c_{s}|\mathbf{q}|\)[63; 69; 70; 71; 72] and of the general form [73] \[\chi_{s}\left(\mathbf{q},\omega\right)=\frac{\varrho^{-1}}{\omega_{s,\mathbf{q}}^{2}-\omega^{2}-i\gamma\omega}\,. \tag{1}\] Here, \(\varrho\) (with units of mass density) is the inertia of the relative motion between the two layers, and \(\gamma\) describes the damping of this motion due to frictional forces between the layers. While phasons have been widely studied in incommensurate lattices and quasicrystals [74], the impact of electron-phason scattering on the electronic properties of those systems has been relatively unexplored. Moire superlattices, being correlated electronic systems, provide a unique framework to investigate this effect. In this Letter, we show that electron scattering by long-wavelength phason modes described by Eq. (1) can give rise to a linear-in-\(T\) resistivity down to a new low-temperature scale \(T^{*}\ll T_{\mathrm{BG}},T_{\mathrm{D}}\). Figure 1 summarizes our results for the different regimes for the phason-induced resistivity, obtained from a Boltzmann-equation calculation.

Figure 1: Schematics of the temperature dependence of the resistivity due to electron-phason scattering. The horizontal axis is the phenomenological parameter \(\gamma\) characterizing frictional forces between the layers, as schematically depicted in the lower inset. When those are absent, there is a single crossover from the high-temperature (i.e. classical equipartition) regime with linear-in-\(T\) resistivity to the so-called Bloch-Grüneisen regime, where \(\rho\sim T^{4}\) (for a circular Fermi surface). For finite damping, low-energy phasons are overdamped and \(\rho\sim T^{2}\) emerges at the lowest temperatures below \(T^{*}\). As damping grows this scale saturates to \(T_{\rm BG}\) and the Bloch-Grüneisen regime disappears, signaling that all scattering phason modes are overdamped. In this regime, there is a single crossover from linear- to quadratic-in-\(T\) resistivity at \(T^{*}<T_{\rm BG}\). The upper inset represents the imaginary part of the phason susceptibility in Eq. (1), characterized by a broad diffusive (i.e. incoherent) peak at low frequencies.

In the absence of interlayer friction, the resistivity \(\rho\) displays the usual temperature dependence \(\rho\sim T\) above \(T_{\mathrm{BG}}\) (or \(T_{\mathrm{D}}\)) and \(\rho\sim T^{4}\) below \(T_{\mathrm{BG}}\) (for a circular Fermi surface) [59]. For small interlayer friction, however, a second temperature scale \(T^{**}\) emerges, below which the temperature dependence changes to \(\rho\sim T^{2}\). This is a consequence of electrons scattering off of the phason modes associated with the low-energy diffusive peak of the response function (see inset in Fig. 1), and is reminiscent of the widely-studied case of scattering by overdamped bosonic fluctuations above a QCP [75; 76; 77; 78]. Indeed, the phason propagator in Eq. (1) is similar to the bosonic propagator near a metallic QCP [79; 80; 81; 82; 83].
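A short numerical sketch (Python, arbitrary units, illustrative parameter values only) of the susceptibility in Eq. (1) makes the two regimes referred to above explicit: when \(c_{s}q\gg\gamma\) the imaginary part has a sharp, coherent phason peak near \(\omega=c_{s}q\), whereas for \(c_{s}q\ll\gamma\) the spectral weight collapses into the broad, low-frequency diffusive peak sketched in the upper inset of Fig. 1.

```python
import numpy as np

def phason_imag_chi(omega, q, c_s=1.0, gamma=0.1, rho=1.0):
    """Imaginary part of Eq. (1): chi_s = rho^{-1} / (omega_q^2 - omega^2 - i*gamma*omega)."""
    omega_q = c_s * q
    return (gamma * omega / rho) / ((omega_q**2 - omega**2)**2 + (gamma * omega)**2)

w = np.linspace(0.0, 2.0, 2001)
sharp = phason_imag_chi(w, q=1.0, gamma=0.05)   # underdamped: coherent peak near omega = c_s*q
broad = phason_imag_chi(w, q=0.5, gamma=2.0)    # overdamped: diffusive peak near omega = c_s^2*q^2/gamma
print("peak positions:", w[np.argmax(sharp)], w[np.argmax(broad)])
```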
When friction is further increased, \(T^{**}\) overcomes \(T_{\mathrm{BG}}\), and essentially all relevant scattering phason modes are overdamped. In this situation, the \(\rho\sim T^{4}\) behavior is completely suppressed, and the linear-in-\(T\) behavior extends down to the new temperature scale \(T^{*}\). Because scattering is no longer limited by the rigidity of individual graphene layers, but rather by the rate \(\gamma\) at which the two layers exchange energy and momentum, this new temperature scale can be very small, \(T^{*}\ll T_{\mathrm{BG}}\) (see Eq. 15). Therefore, electron-phason scattering makes it possible for an extended regime of linear-in-\(T\) resistivity in twisted moire systems. _Transport theory_. We compute the resistivity within a Boltzmann transport approach. In the case of metallic TBG, this approach is justified by the empirical observation that the Mott-Ioffe-Regel limit is satisfied, i.e. the resistivity saturates when the mean-free-path becomes comparable to the Fermi wavelength, \(k_{F}\ell\gtrsim 1\)[55]. To simplify the analysis, we consider a relaxation-time approximation, which yields the resistivity [84] \[\rho=\frac{1}{4e^{2}}\frac{\frac{1}{k_{B}T}\int\frac{d\mathbf{k}_{1}}{(2\pi)^ {2}}\int\frac{d\mathbf{k}_{2}}{(2\pi)^{2}}\mathcal{P}_{\mathbf{k}_{1},\mathbf{ k}_{2}}\left|\mathbf{k}_{1}-\mathbf{k}_{2}\right|^{2}}{\left[\int\frac{d \mathbf{k}}{(2\pi)^{2}}\left(\mathbf{k}\cdot\mathbf{v}_{\mathbf{k}}\right)\frac{ \partial n_{F}\left(\mathbf{\varepsilon}_{\mathbf{k}}\right)}{\partial \mathbf{\varepsilon}_{\mathbf{k}}}\right]^{2}}. \tag{2}\] Here, the factor 4 arises from spin and valley degeneracies, \(\mathbf{v}_{\mathbf{k}}\) is the electron group velocity, and \(\mathcal{P}_{\mathbf{k}_{1},\mathbf{k}_{2}}\) represents the transition rate between states with momentum \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\). Assuming that the lattice degrees of freedom relax much faster than the electron ensemble, and using detailed balance, the contribution to \(\mathcal{P}_{\mathbf{k}_{1},\mathbf{k}_{2}}\) coming from electron-phason scattering processes is \[\mathcal{P}_{\mathbf{k}_{1},\mathbf{k}_{2}} =2\left|g_{s}\left(\mathbf{k}_{1},\mathbf{k}_{2}\right)\right|^{2 }n_{F}\left(\mathbf{\varepsilon}_{\mathbf{k}_{1}}\right)\left[1-n_{F}\left( \mathbf{\varepsilon}_{\mathbf{k}_{2}}\right)\right] \tag{3}\] \[\times\int_{-\infty}^{\infty}d\omega\,n_{B}(\omega)\chi_{s}^{ \prime\prime}\left(\mathbf{k}_{2}-\mathbf{k}_{1},\omega\right)\delta\left( \mathbf{\varepsilon}_{\mathbf{k}_{2}}-\mathbf{\varepsilon}_{\mathbf{k}_{1}}- \hbar\omega\right).\] In this expression, \(n_{F}\) and \(n_{B}\) are Fermi-Dirac and Bose-Einstein distribution functions, respectively, \(\chi_{s}^{\prime\prime}\) is the imaginary part of the susceptibility in Eq. (1), and \(g_{s}(\mathbf{k}_{1},\mathbf{k}_{2})\) represents the matrix element of the electron-phason coupling. At low temperatures, as long as the Fermi velocity is larger than the sound velocity [62], we expect that only electrons near the Fermi surface contribute to transport. 
Assuming that the resistivity is dominated by intraband processes, the resistivity can be approximated by \[\rho\approx\frac{\hbar}{2e^{2}}\frac{\oint\frac{d\mathbf{k}_{||}}{|\mathbf{v}_{\mathbf{k}}|}|\mathbf{k}|^{2}\tau_{\mathbf{k}}^{-1}}{\left[\oint\frac{d\mathbf{k}_{||}}{|\mathbf{v}_{\mathbf{k}}|}\,\mathbf{k}\cdot\mathbf{v}_{\mathbf{k}}\right]^{2}}, \tag{4}\] where the integral is along the Fermi contour and the inverse of the transport time is given by \[\tau_{\mathbf{k}}^{-1}=\oint\frac{d\mathbf{k}^{\prime}_{||}}{|\mathbf{v}_{\mathbf{k}^{\prime}}|}\frac{\left|g_{s}\left(\mathbf{k},\mathbf{k}^{\prime}\right)\right|^{2}}{\varrho k_{B}T}\frac{\left|\mathbf{k}-\mathbf{k}^{\prime}\right|^{2}}{\left|\mathbf{k}\right|^{2}}f\left(\frac{\hbar\omega_{s,\mathbf{k}-\mathbf{k}^{\prime}}}{k_{B}T},\frac{\hbar\gamma}{k_{B}T}\right). \tag{5}\] The function \(f\left(y,z\right)\) can be directly computed from the transition rate in Eq. (3) and with \(\chi_{s}\) from Eq. (1). We find \[f\left(y,z\right)=\frac{\pi}{y^{2}}+\frac{1}{4\pi\sqrt{z^{2}-4y^{2}}}\times \tag{6}\] \[\left[\left(z-\sqrt{z^{2}-4y^{2}}\right)\psi_{1}\left(1+\frac{z-\sqrt{z^{2}-4y^{2}}}{4\pi}\right)\right.\] \[\left.-\left(z+\sqrt{z^{2}-4y^{2}}\right)\psi_{1}\left(1+\frac{z+\sqrt{z^{2}-4y^{2}}}{4\pi}\right)\right],\] where \(\psi_{1}(x)\) is the trigamma function. There are _a priori_ two temperature scales in the problem associated with the two arguments of the function \(f\left(y,z\right)\), which ultimately are connected to the poles of the susceptibility in Eq. (1). The first scale is determined by the maximum transferred momentum \(\mathbf{k}-\mathbf{k}^{\prime}\), which is limited either by the lattice (defining the Debye temperature \(T_{\mathrm{D}}\)) or, for a small Fermi surface, as in doped TBG, by some multiple of the characteristic Fermi wavevector \(k_{F}\). This is the Bloch-Gruneisen temperature which, for a circular Fermi surface, is given by \(k_{B}T_{\mathrm{BG}}=2\hbar c_{s}k_{F}\). This scale is associated with underdamped phason oscillations, which take place above a characteristic momentum and correspond to the sharp (i.e. coherent) part of the phason spectral weight shown in the inset of Fig. 1. However, for small momenta, the phason oscillations are overdamped, as shown by the low-energy incoherent phason spectral weight in the inset of Fig. 1. They give rise to a second temperature scale, \(k_{B}T_{\gamma}\equiv\hbar\gamma\), proportional to the rate of dissipation of energy and of relative linear momentum between the two layers. The relative strength of these two temperature scales defines two distinct regimes of phason-limited transport: the _propagating regime_, \(T_{\mathrm{BG}}\gg T_{\gamma}\), in which most of the phason modes scattering electrons behave as propagating waves, and the _diffusive regime_, \(T_{\mathrm{BG}}\ll T_{\gamma}\), where most scattering modes are overdamped. _Crossover temperature to linear-in-\(T\) resistivity_. Before computing the resistivity explicitly, we analyze the asymptotic behavior of the function \(f\left(y,z\right)\), with \(y\equiv\hbar\omega_{q,s}/k_{B}T\) and \(z\equiv\hbar\gamma/k_{B}T\), to gain insight into how the temperature dependence of the resistivity evolves from the propagating to the diffusive regimes. Consider the extreme propagating regime, where damping is absent, \(\gamma=0\).
In this case, phasons behave as acoustic phonons and \(f\left(y,z\right)\) becomes: \[f\left(y,z=0\right)=\frac{\pi}{y^{2}}+\frac{\pi}{y^{2}}\left[\frac{y^{2}}{4 \operatorname{sech}^{2}(y/2)}-1\right]. \tag{7}\] The first term corresponds to classical equipartition, and as such gives the standard linear-in-\(T\) resistivity, \(\rho\sim T\)[85]. It is dominant at temperatures that are high compared to \(T_{\mathrm{BG}}\), \(y\ll 1\), in which case the second term vanishes. For \(y\gg 1\), which corresponds to \(T\ll T_{\mathrm{BG}}\), one finds the well-known \(\rho\sim T^{4}\) behavior (for a circular Fermi surface), as obtained for electron-acoustic phonon scattering in graphene [85; 86]. What happens once \(\gamma\) increases and we move toward the diffusive regime? As long as \(T_{\gamma}<T_{\mathrm{BG}}\), the temperature scale where linear-in-\(T\) resistivity emerges remains \(T_{\mathrm{BG}}\), since deviation from classical equipartition is driven by electrons being scattered off of propagating phason modes. However, a new linear-in-\(T\) crossover temperature \(T^{*}\) emerges when \(T_{\gamma}>T_{\mathrm{BG}}\), since in this case the scattering phason modes are essentially all overdamped at \(T_{\mathrm{BG}}\). In the asymptotic regime of \(T_{\gamma}\gg\left\{T,\,T_{\mathrm{BG}}\right\}\), the function \(f\left(y,z\right)\) becomes: \[f\left(y,z\gg\left\{1,y\right\}\right)\approx\frac{\pi}{y^{2}}+\frac{2\pi}{y^ {2}}\left[\frac{1}{v^{2}}\,\psi_{1}\left(1+\frac{1}{v}\right)-\frac{1}{v}\right] \tag{8}\] where we defined the variable \(v\equiv 2\pi z/y^{2}\). This is the same expression one would have obtained for a purely diffusive response (i.e. dropping \(\omega^{2}\) in the denominator of Eq. 1). In contrast to Eq. (7), deviation from classical equipartition is now governed by the combined variable \(v\), since the second term vanishes for \(v\ll 1\). Therefore, the crossover temperature \(T^{*}\) for the establishment of linear-in-\(T\) resistivity (i.e. classical equipartition of the phason modes) can be estimated from the condition \(T_{\gamma}T/T_{\mathrm{BG}}^{2}\sim 1\), which gives \(T^{*}\sim\frac{T_{\mathrm{BG}}^{2}}{T_{\gamma}}\ll T_{\mathrm{BG}}\). The last inequality follows from the fact that, in the diffusive regime, \(T_{\gamma}\gg T_{\mathrm{BG}}\). Therefore, compared to the propagating regime, the temperature range across which \(\rho\sim T\) extends to much lower temperatures, well below the Bloch-Gruneisen temperature. This is the main result of our paper, which we confirm with an explicit calculation of the resistivity below. It is not only the deviation from classical equipartition that is affected by the change in the character of the phason modes from propagating to overdamped. At the lowest temperatures, \(T\ll T_{\gamma}\), \(T_{\mathrm{BG}}\), electron-phason scattering is always dominated by processes involving the low-energy part of the phason spectral weight, which in turn corresponds to the incoherent (i.e. overdamped) modes. Mathematically, it turns out that, regardless of the value of \(T_{\gamma}/T_{\mathrm{BG}}\), we can approximate \(f(y\gg 1,z)\approx 2\pi^{2}z/3y^{4}\). As we show below, this gives rise to a \(\rho\sim T^{2}\) behavior at the lowest temperatures. In the diffusive regime the temperature scale below which this behavior appears is the same \(T^{*}\) obtained above. 
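The kernel \(f(y,z)\) of Eq. (6) and its strong-damping limit, Eq. (8), are straightforward to evaluate with standard special functions. The sketch below (Python; restricted to the overdamped branch \(z\geq 2y\) so that the square root stays real and scipy's real-valued trigamma suffices; an illustrative check, not the authors' code) verifies the classical-equipartition limit \(f\to\pi/y^{2}\) and the agreement between Eq. (6) and Eq. (8) when \(z\gg\{1,y\}\).

```python
import numpy as np
from scipy.special import polygamma

def f_full(y, z):
    """Eq. (6), evaluated on the overdamped branch z >= 2*y (real square root)."""
    s = np.sqrt(z**2 - 4.0 * y**2)
    bracket = ((z - s) * polygamma(1, 1.0 + (z - s) / (4.0 * np.pi))
               - (z + s) * polygamma(1, 1.0 + (z + s) / (4.0 * np.pi)))
    return np.pi / y**2 + bracket / (4.0 * np.pi * s)

def f_diffusive(y, z):
    """Asymptotic form of Eq. (8), valid for z >> max(1, y); v = 2*pi*z / y**2."""
    v = 2.0 * np.pi * z / y**2
    return np.pi / y**2 + (2.0 * np.pi / y**2) * (polygamma(1, 1.0 + 1.0 / v) / v**2 - 1.0 / v)

# High-temperature (classical equipartition) limit: f -> pi / y**2
print(f_full(1e-3, 5e-3), np.pi / 1e-6)
# Strong-damping regime: Eq. (6) and Eq. (8) nearly coincide
print(f_full(0.5, 50.0), f_diffusive(0.5, 50.0))
```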
However, in the propagating regime, a new temperature scale \(T^{**}\sim\sqrt{T_{\gamma}T_{\mathrm{BG}}}\) emerges, with \(T_{\gamma}\ll T^{**}\ll T_{\mathrm{BG}}\), signaling the crossover from the characteristic acoustic-phonon driven behavior \(\rho\sim T^{4}\) to the phason-driven behavior \(\rho\sim T^{2}\). _Explicit calculation of the resistivity_. To proceed, we need the electron-phason coupling \(g_{s}\left(\mathbf{k}_{1},\mathbf{k}_{2}\right)\), which requires a low-energy model. Phason fluctuations can be parametrized in terms of a collective coordinate \(\mathbf{u}(\mathbf{r},t)\) describing long-wavelength transverse or longitudinal elastic vibrations of the moire pattern as a whole [69; 71]. For the electrons, we assume a low-energy description of the flat bands consisting of a Dirac Hamiltonian \(\hat{\mathbf{\mathcal{H}}}_{\mathrm{e}}=v_{F}^{*}\,\hat{\mathbf{\Sigma}}\cdot(-i\hbar \mathbf{\mathcal{O}})\) for each spin and valley defined around each moire-valley. In this notation, the various symmetry-allowed electron-phason couplings are given by the Hamiltonian [69] \[\hat{\mathbf{\mathcal{H}}}_{\text{e-p}}=g_{A_{1}}\mathbf{\nabla}\cdot\mathbf{u} \hat{1}+g_{A_{2}}\left(\mathbf{\nabla}\times\mathbf{u}\right)_{z}\hat{\Lambda}_{z}\hat{ \Gamma}_{z} \tag{9}\] \[+g_{E_{2}}^{(1)}\left[\left(\partial_{x}u_{y}+\partial_{y}u_{x} \right)\hat{\Sigma}_{x}\hat{\Gamma}_{z}+\left(\partial_{x}u_{x}-\partial_{y}u_ {y}\right)\hat{\Sigma}_{y}\hat{\Gamma}_{z}\right]\] \[+g_{E_{2}}^{(2)}\left[\left(\partial_{x}u_{x}-\partial_{y}u_{y} \right)\hat{\Sigma}_{x}\hat{\Lambda}_{z}-\left(\partial_{x}u_{y}+\partial_{y}u _{x}\right)\hat{\Sigma}_{y}\hat{\Lambda}_{z}\right].\] In these expressions, \(\hat{\Sigma}_{i}\), \(\hat{\Gamma}_{i}\), \(\hat{\Lambda}_{i}\) are Pauli matrices acting on bands/sublattices, valleys and layers/moire-valleys, as defined in Ref. [87]. The subscripts of the four coefficients \(g_{i}\) refer to different irreducible representations of the \(D_{6}\) point group describing TBG, and thus correspond to couplings to different lattice vibration patterns. While the contributions of each coupling to the resistivity can be summed up following Matthiessen's rule, symmetry dictates that they share the same temperature dependence. Therefore, hereafter we focus only on the \(g_{A_{2}}\) term, which is expected to be the dominant one [69]. Microscopically, this mode corresponds to a relative expansion/contraction of one layer with respect to the other, which is manifested as a transverse acoustic vibration of the moire superlattice. Considering only scattering within a single Fermi surface around each moire-valley parametrized as \(\mathbf{k}=k_{F}(\cos\theta,\sin\theta)\), and using \(|g_{T}(\mathbf{k}_{1},\mathbf{k}_{2})|^{2}=g_{A_{2}}^{2}k_{F}^{2}\sin^{2}( \theta_{1}-\theta_{2})\), the resistivity can be written as \[\rho=\rho_{0}\,I\left(t,\tau\right),\text{ with }\rho_{0}=\frac{h}{e^{2}} \times\frac{g_{A_{2}}^{2}k_{F}^{2}}{4\varrho\left(v_{F}^{*}\right)^{2}k_{B}T_ {\text{BG}}}, \tag{10}\] and where we introduced the reduced temperature \(t\equiv\frac{T}{T_{\text{BG}}}\) and the ratio \(\tau\equiv\frac{T_{\gamma}}{T_{\text{BG}}}\). The dimensionless function \(I(t,\tau)\) contains the remaining momentum integral in the inverse transport time, and is given by: \[I\left(t,\tau\right)=\frac{16}{\pi^{2}t}\int_{0}^{1}du\,u^{4}\sqrt{1-u^{2}}\, f\left(\frac{u}{t},\frac{\tau}{t}\right). 
\tag{11}\] Note that the integrand contains additional terms arising from the suppression of forward-scattering processes and the momentum dependence of the electron-phason coupling. Using the asymptotic expansions for \(f(y,z)\) discussed above, it is straightforward to obtain the asymptotic temperature dependencies of the resistivity in different limits. In the propagating regime, \(T_{\gamma}\ll T_{\text{BG}}\), we obtain \[\rho\approx\rho_{0}\times\begin{cases}\frac{T}{T_{\text{BG}}}&\text{if }T\gg T_{\text{BG}},\\ \frac{64\pi^{3}}{15}\left(\frac{T}{T_{\text{BG}}}\right)^{4}&\text{if }T^{**}\ll T\ll T_{\text{BG}},\\ \frac{8\pi}{3}\frac{T_{\gamma}}{T_{\text{BG}}}\left(\frac{T}{T_{\text{BG}}}\right)^{2}&\text{if }T\ll T^{**}.\end{cases} \tag{12}\] The asymptotic behaviors above \(T^{**}\) are the same as in the case of acoustic-phonon scattering [59; 60; 61], displaying a crossover from linear-in-\(T\) resistivity to \(\rho\sim T^{4}\) upon crossing \(T_{\text{BG}}\). The low-temperature behavior \(\rho\sim T^{2}\) arises from the contribution from the diffusive phason modes, which dominate at low \(T\). The crossover temperature \(T^{**}\) can be estimated by comparing the latter with the Bloch-Gruneisen contribution to the resistivity, yielding \[T^{**}=\sqrt{\frac{5T_{\gamma}T_{\text{BG}}}{8\pi^{2}}}. \tag{13}\] In the diffusive regime, \(T_{\gamma}\gg T_{\text{BG}}\), we can use the asymptotic form for \(f(y,z)\) in Eq. (8). We find \(I(t,\tau)\approx t\mathcal{I}(2\pi t\tau)\) with \[\mathcal{I}\left(\tilde{t}\right)=1-\frac{1}{\tilde{t}}+\frac{32}{\pi\tilde{t}^{2}}\int_{0}^{1}dy\,y^{6}\sqrt{1-y^{2}}\,\psi_{1}\left(1+\frac{y^{2}}{\tilde{t}}\right). \tag{14}\] Here, we introduced the new reduced temperature \(\tilde{t}\equiv 2\pi t\tau=T/T^{*}\), which defines the characteristic temperature: \[T^{*}=\frac{T_{\rm BG}^{2}}{2\pi T_{\gamma}}. \tag{15}\] Using the results \(\mathcal{I}(\tilde{t}\gg 1)\approx 1\) and \(\mathcal{I}(\tilde{t}\ll 1)\approx 4\tilde{t}/3\), we find the asymptotic behaviors of the resistivity \[\rho\approx\rho_{0}\times\begin{cases}\frac{T}{T_{\rm BG}}&\text{if $T\gg T^{*}$,}\\ \frac{4}{3}\frac{T^{2}}{T^{*}T_{\rm BG}}&\text{if $T\ll T^{*}$.}\end{cases} \tag{16}\] Therefore, as anticipated, \(T^{*}\ll T_{\rm BG}\) is the new crossover temperature above which the resistivity is linear in \(T\). The schematic phase diagram in Fig. 1 is built based on the asymptotic behaviors derived here. To further confirm them, we numerically evaluated the function \(I\left(t,\tau\right)\) in Eq. (11), which fully determines the temperature dependence of the resistivity in Eq. (10).

Figure 2: Temperature dependence of the resistivity in the Dirac approximation for the flat bands. Numerical evaluation of \(I(t,\tau)\) as a function of \(t=T/T_{\text{BG}}\) for fixed values of \(\tau=T_{\gamma}/T_{\text{BG}}\) in the propagating (panel a) and diffusive (panel b) regimes. Both plots are in logarithmic scale. In the diffusive regime, the linear-in-\(T\) resistivity extends down to a new (smaller) scale \(T^{*}\ll T_{\text{BG}}\).

Figure 2(a) shows \(I\left(t,\tau\right)\) in the propagating regime, highlighting the crossover from linear-in-\(T\) to \(T^{4}\) at about \(T_{\rm BG}\), followed by another crossover to \(T^{2}\) at temperatures between \(T_{\gamma}\) and \(T_{\rm BG}\). In the diffusive regime, shown in Fig.
2(b), the linear-in-\(T\) behavior extends to temperatures well below \(T_{\rm BG}\) for large enough \(T_{\gamma}\), confirming the main result of our analysis. Moreover, as shown in this figure, collisions with phasons give rise to a large resistivity at very low temperatures, no longer limited by the Bloch-Gruneisen temperature. _Discussion_. In summary, we showed that electron-phason scattering can lead to a linear-in-\(T\) resistivity down to temperatures much lower than the Bloch-Gruneisen temperature. In this scattering mechanism, the momentum that the electrons yield to the moire superlattice via collisions with its long-wavelength phason fluctuations is rapidly degraded through friction between the layers. The latter is a generic feature of incommensurate lattices [64; 65; 66; 67; 68], parametrized here by the damping coefficient \(\gamma\). Any form of dissipative coupling between the two layers contributes to \(\gamma\), including stick-slip processes caused by disorder in the stacking arrangement. The existence of various possible mechanisms for damping makes it difficult to estimate \(T_{\gamma}\). Nevertheless, the difference in energy between AA and Bernal stacking configurations, which is 4 meV/A\({}^{2}\)[88], should provide a rough upper bound. Integrated over graphene's unit cell, this gives \(T_{\gamma}\approx 250\) K. Using an estimated Bloch-Gruneisen temperature of about 10 K, we find \(T_{*}=T_{\rm BG}^{2}/2\pi T_{\gamma}\approx 0.06\) K. This scale is consistent with the lowest temperatures accessed experimentally by Ref. [55]. Of course, as also discussed in Ref. [59], no electron-phonon or electron-phason scattering mechanism can give a linear-in-\(T\) resistivity all the way down to \(T=0\). Our key point is that inter-layer friction further extends to lower temperatures the regime of linear-in-\(T\) resistivity. Our results provide a solid framework for future studies to quantitatively assess the relevance of the electron-phason mechanism in addressing the puzzling linear-in-\(T\) resistivity of TBG. Interestingly, this mechanism contains features of two scenarios invoked to explain this effect: electron-phonon scattering and quantum criticality. Of course, the linear-in-\(T\) resistivity behavior discussed here is due to classical equipartition, rather than scattering by quantum critical fluctuations. However, at low temperatures, where electron-phason scattering leads to a \(\rho\sim T^{2}\) behavior, the overdamped phasons are described by Eq. (1), and thus behave similarly to overdamped bosonic excitations typical of a metallic QCP [79; 80; 81; 82; 83]. In fact, the scattering function in Eq. (6) is identical to that obtained for a metallic nematic QCP [89], except for the momentum dependence of the damping coefficient. This is because, in a quantum critical system, dissipation is due to electronic Landau damping, whereas here it is a purely mechanical effect. Moreover, while bosonic excitations are only gapless at the QCP, the phason spectrum is gapless everywhere - although a small disorder-induced gap may emerge [73]. This suggests that a low-temperature \(\rho\sim T^{2}\) behavior may be more common in moire superlattices. Interestingly, Ref. [55] reported a quadratic-in-\(T\) resistivity over certain doping ranges. A direct consequence of our results is that, if the linear-in-\(T\) resistivity in TBG is due to electron-phason scattering, it should be absent in graphene-based systems without a moire superlattice. 
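The crossover claims above admit a quick numerical sanity check. The sketch below (Python, illustrative only) transcribes the diffusive-regime scaling function \(\mathcal{I}(\tilde{t})\) of Eq. (14), confirming the limits \(\mathcal{I}(\tilde{t}\gg 1)\approx 1\) and \(\mathcal{I}(\tilde{t}\ll 1)\approx 4\tilde{t}/3\) behind Eq. (16), and reproduces the order-of-magnitude estimate \(T^{*}\approx 0.06\) K of Eq. (15) using the \(T_{\gamma}\approx 250\) K and \(T_{\rm BG}\approx 10\) K values quoted in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import polygamma

def script_I(t_tilde):
    """Diffusive-regime scaling function of Eq. (14), with t_tilde = T / T*."""
    integral, _ = quad(
        lambda y: y**6 * np.sqrt(1.0 - y**2) * polygamma(1, 1.0 + y**2 / t_tilde),
        0.0, 1.0)
    return 1.0 - 1.0 / t_tilde + 32.0 / (np.pi * t_tilde**2) * integral

for tt in (0.01, 0.1, 1.0, 10.0, 100.0):
    # Interpolates between 4*tt/3 (T << T*) and 1 (T >> T*),
    # i.e. rho ~ T**2 below T* and rho ~ T above it.
    print(f"I({tt:g}) = {script_I(tt):.4f}   4*tt/3 = {4 * tt / 3:.4f}")

# Order-of-magnitude estimate of Eq. (15) with the values quoted in the text.
T_gamma, T_BG = 250.0, 10.0                            # kelvin
print(f"T* ~ {T_BG**2 / (2 * np.pi * T_gamma):.3f} K")  # ~0.06 K
```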
Recently, phenomena first observed in moire systems, such as superconductivity and flavor-polarized metals, have also been reported in rhombohedral ABC graphene [90] and Bernal bilayer graphene [91; 92]. Since phasons are not present in these systems, it will be interesting to determine whether they display linear-in-\(T\) resistivity. Finally, besides resistivity, the electron-phason coupling should also impact other transport properties, such as thermal conductivity, as well as thermodynamic properties, such as the specific heat. H.O. acknowledges funding from the Spanish MCI/AEI/FEDER through Grant No. PID2021-128760NB-I00. R.M.F. was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division, under Award No. DE-SC0020045.
2310.20182
Evaluating the Conservativeness of Robust Sandwich Variance Estimator in Weighted Average Treatment Effects
In causal inference, the Inverse Probability Weighting (IPW) estimator is commonly used to estimate causal effects for estimands within the class of Weighted Average Treatment Effect (WATE). When constructing confidence intervals (CIs), robust sandwich variance estimators are frequently used for practical reasons. Although these estimators are easy to calculate using widely-used statistical software, they often yield narrow CIs for commonly applied estimands, such as the Average Treatment Effect on the Treated and the Average Treatment Effect for the Overlap Populations. In this manuscript, we reexamine the asymptotic variance of the IPW estimator and clarify the conditions under which CIs derived from the sandwich variance estimator are conservative. Additionally, we propose new criteria to assess the conservativeness of CIs. The results of this investigation are validated through simulation experiments and real data analysis.
Shunichiro Orihara
2023-10-31T05:08:49Z
http://arxiv.org/abs/2310.20182v2
# Explicit Form of the Asymptotic Variance Estimator for IPW-type Estimators of Certain Estimands ###### Abstract Confidence intervals (CI) for the IPW estimators of the ATT and ATO might not always yield conservative CIs when using the 'robust sandwich variance' estimator. In this manuscript, we identify scenarios where this variance estimator can be employed to derive conservative CIs. Specifically, for the ATT, a conservative CI can be derived when there's a homogeneous treatment effect or the interaction effect surpasses the effect from the covariates alone. For the ATO, conservative CIs can be derived under certain conditions, such as when there are homogeneous treatment effects, when there exists significant treatment-confounder interactions, or when there's a large number of members in the control groups. **Keywords**: Balancing weight, Inverse probability weighting, Propensity score, Robust sandwich variance Introduction In causal inference, selecting a valid causal estimand that accurately represents causal effect interpretation is a crucial step in addressing clinically questions. The Average Treatment Effect (ATE) is a widely recognized causal estimand, comparing outcomes if all subjects receive an active treatment versus if they all receive a control treatment. The Average Treatment Effect on the Treated (ATT) is also well considered, comparing outcomes if treated subjects retain their treatment versus if they switch to a control treatment. Recently, the Average Treatment Effects for the Overlap Population (ATO), an estimand proposed by Li et al.,[3] has gained attention. It offers several advantages over the ATE. For instance, the ATO assigns larger weights to subjects more likely to switch from their actual treatment to the 'counterfactual' treatment, making it a more robust measure of causal effect. The estimands belong in the category of 'Weighted Average Treatment Effect' (WATE[1, 3]), as they are viewed as specific weighted versions of the ATE. Further details can be found in Li et al.[3] The estimands can be estimated as a weighted average of the outcome defined by the (estimated) propensity score.[9] The well-known estimator is the Inverse Probability (Treatment) Weighting (IP(T)W) estimator for the ATE. Since other weighted estimators for the WATE can be constructed in the same manner, we refer to these weighted estimators as the 'IPW estimator' in this manuscript.[5] The confidence interval (CI) for the IPW estimator is commonly derived from the asymptotic variance, taking into account estimating equations for both the weighted average of the outcome and the propensity score. However, in practice, the CI is often derived based only on the former; the uncertainty of the propensity score is sometimes overlooked. This is because implementing the 'robust sandwich variance' estimator for the weighted average of the outcome is straightforward. For example, _sandwich_ in R, or the _WHITE_ option in _REG_ procedure in SAS can be easily implemented.[8] For the IPW estimator of the ATE, it is well-known that the CI, when ignoring the uncertainty of the propensity score, yields a more conservative CI compared to one that considers it.[4] We refer to the former CI as the'simple CI' and the latter as the 'exact CI' in this manuscript. However, recent findings suggest that the simple CI for the ATT estimator may not always yield a conservative CI.[8] Given this, the exact CI is considered to be more suitable for the IPW estimator for the ATT. 
Nonetheless, Reifeis and Hudgens[8] did not provide explicit mathematical details for the asymptotic variance estimator of the IPW estimator for the ATT. Specifically, it remains unclear under what conditions the simple CI yields a conservative CI. In this manuscript, we delve into the asymptotic variance of the IPW estimator. In Section 2, we derive a more detailed form of the asymptotic variance for the IPW estimator within the WATE class, building upon the work of Mao et al.[5] Furthermore, under specific outcome model conditions, we present a more detailed asymptotic variance formula for ATT and ATO than what's provided by Reifeis and Hudgens.[8] Through these mathematical results, we aim to elucidate the properties of the asymptotic variance of the IPW estimator. In Section 3, we validate our findings using simple simulation settings. ## 2 Explicit Form of the Asymptotic Variance Estimator Let \(n\) be the sample size. \(T_{i}\in\{0,\,1\}\), \(\mathbf{X}_{i}\in\mathbb{R}^{p}\) and \((Y_{1i},\,Y_{0i})\in\mathbb{R}^{2}\) represent the treatment, a vector of covariates measured prior to treatment, and potential outcomes, respectively. Based on the stable unit treatment value assumption,[9] the observed outcome is defined as \(Y_{i}:=T_{i}Y_{1i}+(1-T_{i})Y_{0i}\). Under these settings, we assume that i.i.d. copies \((T_{i},\mathbf{X}_{i},Y_{i})\), \(i=1,\,2,\,\dots,\,n\) are obtained. We further assume the strongly ignorable treatment assignment[9] for the subsequent discussions. We now introduce the WATE. The ATE conditional on \(\mathbf{x}\) is defined as \(\tau(\mathbf{x}):=\mathrm{E}[Y_{1}-Y_{0}|\mathbf{x}]\), and the WATE is defined as \[\tau_{w}:=\frac{\mathrm{E}[\tau(\mathbf{X})w(e(\mathbf{X}))]}{\mathrm{E}[w(e(\mathbf{X})) ]},\] where \(e\equiv e(\mathbf{X}):=\Pr\left(T=1|\mathbf{X}\right)\) is the propensity score,[9] and \(w(\cdot)\) represents the weight function for \(e\). When \(w(e)\equiv 1\), the WATE becomes the ATE: \(\tau_{ATE}:=\text{E}[Y_{1}-Y_{0}]\). When \(w(e)=e\), the WATE becomes the ATT \[\tau_{ATT}:=\frac{\text{E}[\tau(\mathbf{X})e(\mathbf{X})]}{\text{E}[e(\mathbf{X})]}=\text{E }[\tau(\mathbf{X})|T=1]=\text{E}[Y_{1}-Y_{0}|T=1].\] When \(w(e)=e(1-e)\), the WATE becomes the ATO\({}^{3}\) \[\tau_{ATO}:=\frac{\text{E}[\tau(\mathbf{X})e(\mathbf{X})(1-e(\mathbf{X}))]}{\text{E}[e( \mathbf{X})(1-e(\mathbf{X}))]}.\] Given that the function \(e(1-e)\), with \(e\in(0,1)\), is convex and symmetric around 0.5, subjects with a propensity score close to 0.5 (meaning they could easily change their actual treatment to the "counterfactual" treatment) are weighted more heavily than those near 0 or 1. As discussed in Mao et al.,\({}^{5}\) the WATE can be estimated using the IPW estimator \[\hat{\tau}_{w}=\hat{\mu}_{w1}-\hat{\mu}_{w0}=\frac{\sum_{i=1}^{n}W_{i}T_{i}Y_ {i}}{\sum_{i=1}^{n}W_{i}T_{i}}-\frac{\sum_{i=1}^{n}W_{i}(1-T_{i})Y_{i}}{\sum_ {i=1}^{n}W_{i}(1-T_{i})},\] where \[W_{i}=\frac{w(\hat{e}_{i})}{T_{i}\hat{e}_{i}+(1-T_{i})(1-\hat{e}_{i})},\] and \(\hat{e}_{i}\equiv e_{i}(\hat{\mathbf{\alpha}})=expit\left\{\mathbf{X}_{i}^{\top}\hat{ \mathbf{\alpha}}\right\}\) denotes the estimated propensity score. In this paper, the estimated propensity score is determined as the solution to the estimating equation: \(\sum_{i=1}^{n}\mathbf{X}_{i}\left(T_{i}-e_{i}(\mathbf{\alpha})\right)=\mathbf{0}\). 
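A minimal numerical sketch of the estimators just defined may help fix ideas. The Python code below (illustrative only; function and variable names are ours, and the propensity score is fit by ordinary maximum-likelihood logistic regression, which solves the stated estimating equation) computes the Hajek-type IPW estimator \(\hat{\tau}_{w}\) for the ATE, ATT, and ATO tilting functions on simulated data with a homogeneous treatment effect of 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def wate_ipw(T, X, Y, estimand="ATE"):
    """IPW estimator of the WATE for the tilting function w(e) of the chosen estimand."""
    # Large C ~ numerically unpenalized logistic regression for the propensity score.
    e = LogisticRegression(C=1e6).fit(X, T).predict_proba(X)[:, 1]
    w = {"ATE": np.ones_like(e), "ATT": e, "ATO": e * (1.0 - e)}[estimand]
    W = w / np.where(T == 1, e, 1.0 - e)               # balancing weight W_i
    mu1 = np.sum(W * T * Y) / np.sum(W * T)
    mu0 = np.sum(W * (1 - T) * Y) / np.sum(W * (1 - T))
    return mu1 - mu0

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
e_true = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, e_true)
Y = 1.0 * T + X @ np.array([1.0, -1.0]) + rng.normal(size=n)   # homogeneous effect = 1
print({w: round(wate_ipw(T, X, Y, w), 3) for w in ("ATE", "ATT", "ATO")})
```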
To summarize, both the IPW estimator and the propensity score estimator can be encapsulated by the following single estimating equation\({}^{5}\) \[\sum_{i=1}^{n}\left(\begin{array}{c}\mathbf{X}_{i}\left(T_{i}-e_{i}(\mathbf{\alpha}) \right)\\ W_{i}T_{i}(Y_{i}-\mu_{w1})\\ W_{i}(1-T_{i})(Y_{i}-\mu_{w0})\end{array}\right)=\sum_{i=1}^{n}\psi_{i}(\mathbf{ \theta})=\mathbf{0}, \tag{2.1}\] where \(\mathbf{\theta}:=\left(\mathbf{\alpha}^{\top},\mu_{w1},\mu_{w0}\right)^{\top}\), and \(\hat{\mathbf{\theta}}\) is the solution of (2.1). Using the standard theory for the M-estimator, the asymptotic distribution of \(\hat{\mathbf{\theta}}\) becomes \[\sqrt{n}\left(\hat{\mathbf{\theta}}-\mathbf{\theta}^{0}\right)\overset{L}{\to}N\left( \mathbf{0},\mathrm{E}\left[\frac{\partial\psi(\mathbf{\theta}^{0})}{\partial\mathbf{\theta }^{\top}}\right]^{-1}\mathrm{E}\left[\psi(\mathbf{\theta}^{0})^{\otimes 2}\right] \mathrm{E}\left[\frac{\partial\psi(\mathbf{\theta}^{0})^{\top}}{\partial\mathbf{\theta }}\right]^{-1}\right),\] where \(\mathbf{\theta}^{0}\) represents the true value of \(\mathbf{\theta}\), implying that the expectation of (2.1) is uniquely satisfied. Also, applying the delta method, \[\sqrt{n}\left(\hat{\tau}_{w}-\tau_{w}^{0}\right)\overset{L}{\to}N\left(\mathbf{0 },\sigma^{2}\right), \tag{2.2}\] where \[\sigma^{2}=\mathbf{c}^{\top}\mathrm{E}\left[\frac{\partial\psi(\mathbf{\theta}^{0})}{ \partial\mathbf{\theta}^{\top}}\right]^{-1}\mathrm{E}\left[\psi(\mathbf{\theta}^{0})^ {\otimes 2}\right]\mathrm{E}\left[\frac{\partial\psi(\mathbf{\theta}^{0})^{\top} }{\partial\mathbf{\theta}}\right]^{-1}\mathbf{c},\] and \(\mathbf{c}=(\mathbf{0}^{\top},1,-1)^{\top}\). From here, we consider each component of the asymptotic variance of (2.2). Since \[\mathrm{E}\left[\frac{Tw(e)}{Te+(1-T)(1-e)}\right]=\mathrm{E}\left[\frac{(1-T) w(e)}{Te+(1-T)(1-e)}\right]=\mathrm{E}[w(e)],\] \[\frac{\partial}{\partial\mathbf{\alpha}}\frac{Tw(e)}{Te+(1-T)(1-e)}(Y-\mu_{w1})= \frac{T(w^{\prime}(e)e-w(e))}{e}(1-e)(Y-\mu_{w1})\mathbf{X},\] and \[\frac{\partial}{\partial\mathbf{\alpha}}\frac{(1-T)w(e)}{Te+(1-T)(1-e)}(Y-\mu_{w0 })=\frac{(1-T)(w^{\prime}(e)(1-e)+w(e))}{1-e}e(Y-\mu_{w0})\mathbf{X},\] \[\mathrm{E}\left[\frac{\partial\psi(\mathbf{\theta}^{0})}{\partial\mathbf{\theta}^{ \top}}\right]=\left(\begin{array}{ccc}-\mathrm{E}\left[e(1-e)\mathbf{X}^{ \otimes 2}\right]&\mathbf{0}&\mathbf{0}\\ \mathrm{E}\left[(w^{\prime}(e)e-w(e))(Y_{1}-\mu_{w1})(1-e)\mathbf{X}^{\top}\right]& -\mathrm{E}[w(e)]&0\\ \mathrm{E}\left[(w^{\prime}(e)(1-e)+w(e))(Y_{0}-\mu_{w10})e\mathbf{X}^{\top}\right] &0&-\mathrm{E}[w(e)]\end{array}\right), \tag{2.3}\] where \(A^{\otimes 2}=AA^{\top}\). Also, \[{\rm E}\left[\psi(\mathbf{\theta}^{0})^{\otimes 2}\right]=\left( \begin{array}{cc}{\rm E}\left[e(1-e)\mathbf{X}^{\otimes 2}\right] &\\ {\rm E}\left[w(e)(Y_{1}-\mu_{w1})(1-e)\mathbf{X}^{\top}\right]&{\rm E} \left[\frac{w(e)^{2}(Y_{1}-\mu_{w1})^{2}}{e}\right]&\\ -{\rm E}\left[w(e)(Y_{0}-\mu_{w0})e\mathbf{X}^{\top}\right]&0&{\rm E }\left[\frac{w(e)^{2}(Y_{0}-\mu_{w0})^{2}}{1-e}\right]\end{array}\right).\] Regarding (2.3) and (2.4), the symbols used in the subsequent discussions are \[(\ref{eq:2.3})=\left(\begin{array}{cc}-A_{11}&O^{\top}\\ A_{12}&a_{22}{\rm I}\end{array}\right),\ \ \ \ \ (\ref{eq:2.4})=\left( \begin{array}{cc}A_{11}&B_{12}^{\top}\\ B_{12}&B_{22}\end{array}\right),\] where I represents the identity matrix. 
Using the relationship \[A_{12}=-B_{12}+\left(\begin{array}{c}{\rm E}\left[w^{\prime}(e)(Y_{1}-\mu_{w1})e(1-e)\mathbf{X}^{\top}\right]\\ {\rm E}\left[w^{\prime}(e)(Y_{0}-\mu_{w0})e(1-e)\mathbf{X}^{\top}\right]\end{array}\right)=-B_{12}+\delta,\] the asymptotic variance of (2.2) becomes \[\sigma^{2}=\frac{1}{a_{22}^{2}}(1,-1)\left(B_{22}+\delta A_{11}^{-1}\delta^{\top}-B_{12}A_{11}^{-1}B_{12}^{\top}\right)\left(\begin{array}{c}1\\ -1\end{array}\right). \tag{2.5}\] The first term of (2.5) constitutes the main portion of the asymptotic variance of the IPW estimator. Specifically, standard sandwich variance calculation functions, such as _sandwich_ in R, compute only this term. The second and third terms of (2.5) pertain to the variability of the propensity score estimation. From (2.5), a well-known conclusion can be simply derived. **Theorem 1.** _For the ATE, the weight function is given by \(w(e)\equiv 1\). Therefore, \(w^{\prime}(e)=0\). This implies that \(\delta=O\), and the second and third terms of (2.5) become precisely negative._ This theorem suggests that the simple CI that is derived based on standard sandwich variance calculation functions yields a conservative CI when we are interested in the ATE. As mentioned by Reifeis and Hudgens,[8] from the form of (2.5), it isn't clear whether the standard sandwich variance for the ATT, ATO, or certain other estimands is conservative. From this point onward, to understand the asymptotic variance more clearly, we assume the following linear outcome models \[Y_{t}=\beta_{0t}^{\prime}+\mathbf{X}^{\top}\mathbf{\beta}_{xt}+\varepsilon_{t}, \tag{2.6}\] where \(\mathrm{E}[\varepsilon_{t}]=0\), \(Var(\varepsilon_{t})<\infty\), and \(\varepsilon_{t}\,\mbox{$\perp\!\!\!\perp$}(T,\mathbf{X},Y)\) (\(t=0,1\)). Based on this assumption, the WATE is expressed as \[\tau_{w}=\mu_{w1}-\mu_{w0}=\beta_{01}^{\prime}-\beta_{00}^{\prime}+\frac{\mathrm{E}\left[w(e)\mathbf{X}^{\top}\right]}{\mathrm{E}[w(e)]}\left(\mathbf{\beta}_{x1}-\mathbf{\beta}_{x0}\right).\] In subsequent discussions, we will employ the following reparametrization to simplify the discussion: \(\mu_{wt}=\beta_{0t}^{\prime}\). This reparametrization is consistent with the centering of the outcome models (2.6) with respect to the covariates \[Y_{t}=\mu_{wt}+\left(\mathbf{X}-\frac{\mathrm{E}\left[w(e)\mathbf{X}\right]}{\mathrm{E}[w(e)]}\right)^{\top}\mathbf{\beta}_{xt}+\varepsilon_{t}=\mu_{wt}+\mathbf{X}^{\prime\top}\mathbf{\beta}_{xt}+\varepsilon_{t}.\] Note that when treatment effects are homogeneous (i.e., \(\mathbf{\beta}_{x1}=\mathbf{\beta}_{x0}\)), \(\tau_{w}=\beta_{01}^{\prime}-\beta_{00}^{\prime}\) for all estimands. Note also that the centering does not affect the estimated propensity score since the effect is absorbed into the intercept term. Under the potential outcome models (2.6), the observed outcome model becomes \[Y=TY_{1}+(1-T)Y_{0}=\mu_{w0}+T(\mu_{w1}-\mu_{w0})+\mathbf{X}^{\prime\top}\mathbf{\beta}_{x0}+T\mathbf{X}^{\prime\top}\left(\mathbf{\beta}_{x1}-\mathbf{\beta}_{x0}\right)+\varepsilon.\] First, we consider the asymptotic variance of the ATT.
The values of \(\delta\) and \(B_{22}\) become the following, respectively: \[\delta =\left(\begin{array}{c}\delta_{1}^{\top}\\ \delta_{2}^{\top}\end{array}\right)=\left(\begin{array}{c}\mathrm{E}\left[(Y_{ 1}-\mu_{w1})e(1-e)\boldsymbol{X}^{\top}\right]\\ \mathrm{E}\left[(Y_{0}-\mu_{w0})e(1-e)\boldsymbol{X}^{\top}\right]\end{array} \right)=\left(\begin{array}{c}\boldsymbol{\beta}_{x1}^{\top}\mathrm{E}\left[e(1-e )\boldsymbol{X}^{\otimes 2}\right]\\ \boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2 }\right]\end{array}\right),\] \[B_{12} =\left(\begin{array}{c}b_{1}^{\top}\\ b_{2}^{\top}\end{array}\right)=\left(\begin{array}{c}\mathrm{E}\left[(Y_{1}- \mu_{w1})e(1-e)\boldsymbol{X}^{\top}\right]\\ -\mathrm{E}\left[(Y_{0}-\mu_{w0})e^{2}\boldsymbol{X}^{\top}\right]\end{array} \right)=\left(\begin{array}{c}\boldsymbol{\beta}_{x1}^{\top}\mathrm{E}\left[e (1-e)\boldsymbol{X}^{\otimes 2}\right]\\ -\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e^{2}\boldsymbol{X}^{\otimes 2 }\right]\end{array}\right).\] Since \(b_{1}=\delta_{1}\) and \(b_{2}=\delta_{2}-\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e\boldsymbol{X }^{\otimes 2}\right]\), the second and third term of (2.5) become \[(1,-1)\left(\delta A_{11}^{-1}\delta^{\top}-B_{12}A_{11}^{-1}B_{12 }^{\top}\right)\left(\begin{array}{c}1\\ -1\end{array}\right)\] \[=\delta_{1}^{\top}A_{11}^{-1}\delta_{1}-2\delta_{1}^{\top}A_{11}^ {-1}\delta_{2}+\delta_{2}^{\top}A_{11}^{-1}\delta_{2}-b_{1}^{\top}A_{11}^{-1} b_{1}+2b_{1}^{\top}A_{11}^{-1}b_{2}-b_{2}^{\top}A_{11}^{-1}b_{2}\] \[=-2\boldsymbol{\beta}_{x1}^{\top}\mathrm{E}\left[e(1-e)\boldsymbol {X}^{\otimes 2}\right]A_{11}^{-1}\mathrm{E}\left[e\boldsymbol{X}^{ \otimes 2}\right]\boldsymbol{\beta}_{x0}+2\boldsymbol{\beta}_{x0}^{\top} \mathrm{E}\left[e\boldsymbol{X}^{\otimes 2}\right]A_{11}^{-1}\mathrm{E}\left[e(1-e) \boldsymbol{X}^{\otimes 2}\right]\boldsymbol{\beta}_{x0}\] \[\quad-\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e\boldsymbol{X }^{\otimes 2}\right]A_{11}^{-1}\mathrm{E}\left[e\boldsymbol{X}^{\otimes 2} \right]\boldsymbol{\beta}_{x0}\] \[=2\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e\boldsymbol{X }^{\otimes 2}\right](\boldsymbol{\beta}_{x0}-\boldsymbol{\beta}_{x1})- \boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e\boldsymbol{X}^{\otimes 2} \right]A_{11}^{-1}\mathrm{E}\left[e\boldsymbol{X}^{\otimes 2}\right] \boldsymbol{\beta}_{x0} \tag{2.7}\] From the result, the following relationship can be proved. **Theorem 2**.: _For the ATT, when treatment effects are homogeneous (ie., \(\boldsymbol{\beta}_{x1}=\boldsymbol{\beta}_{x0}\)), the second and third terms of (2.5) are precisely negative. When there is only constant heterogeneity (ie., \(\boldsymbol{\beta}_{x1}=\gamma\boldsymbol{\beta}_{x0}\)), a sufficient condition for the second and third terms of (2.5) to be precisely negative is the existence of a value \(\gamma\in\mathbb{R}\) such that_ \[\mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2}\right]^{-1}\mathrm{E}\left[e \boldsymbol{X}^{\otimes 2}\right]-2(1-\gamma)\mathrm{I}>O.\] _This is clearly satisfied when \(\gamma\geq 1\)._ Proof.: Since the former statement is obvious, we will only prove the latter statement. 
When \(\boldsymbol{\beta}_{x1}=\gamma\boldsymbol{\beta}_{x0}\), \[2(1-\gamma)\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e \boldsymbol{X}^{\otimes 2}\right]\boldsymbol{\beta}_{x0}-\boldsymbol{\beta}_{x0}^{ \top}\mathrm{E}\left[e\boldsymbol{X}^{\otimes 2}\right]A_{11}^{-1}\mathrm{E}\left[e \boldsymbol{X}^{\otimes 2}\right]\boldsymbol{\beta}_{x0}\] \[\qquad=\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e \boldsymbol{X}^{\otimes 2}\right]A_{11}^{-1}\left\{2(1-\gamma)\mathrm{E}\left[e(1-e) \boldsymbol{X}^{\otimes 2}\right]-\mathrm{E}\left[e\boldsymbol{X}^{\otimes 2} \right]\right\}\boldsymbol{\beta}_{x0}\] By focusing on the term within \(\{\cdot\}\), the statement can be derived. This theorem suggests that the simple CI yields a conservative CI when there is a homogeneous treatment effect, and we are interested in the ATT. Additionally, if the interaction effect between the treatment and all confounders is proportional to \(\gamma\), the simple CI also yields a conservative CI when the interaction effect is superior to the effect from covariates alone. This scenario arises when the risk factors of interest exhibit significant interaction effects with a treatment. Next, we consider the asymptotic variance of the ATO. In the same manner as the ATT, the values of \(\delta\) and \(B_{22}\) become the following, respectively: \[\delta =\left(\begin{array}{c}\delta_{1}^{\top}\\ \delta_{2}^{\top}\end{array}\right)=\left(\begin{array}{c}\mathrm{E}\left[( Y_{1}-\mu_{w1})e(1-e)(1-2e)\boldsymbol{X}^{\top}\right]\\ \mathrm{E}\left[(Y_{0}-\mu_{w0})e(1-e)(1-2e)\boldsymbol{X}^{\top}\right]\end{array} \right)=\left(\begin{array}{c}\boldsymbol{\beta}_{x1}^{\top}\mathrm{E}\left[ e(1-e)(1-2e)\boldsymbol{X}^{\otimes 2}\right]\\ \boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e(1-e)(1-2e)\boldsymbol{X}^{ \otimes 2}\right]\end{array}\right),\] \[B_{12} =\left(\begin{array}{c}b_{1}^{\top}\\ b_{2}^{\top}\end{array}\right)=\left(\begin{array}{c}\mathrm{E}\left[(Y_{1}- \mu_{w1})e(1-e)^{2}\boldsymbol{X}^{\top}\right]\\ -\mathrm{E}\left[(Y_{0}-\mu_{w0})e^{2}(1-e)\boldsymbol{X}^{\top}\right]\end{array} \right)=\left(\begin{array}{c}\boldsymbol{\beta}_{x1}^{\top}\mathrm{E}\left[ e(1-e)^{2}\boldsymbol{X}^{\otimes 2}\right]\\ -\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{ \otimes 2}\right]\end{array}\right).\] Through the similar calculation as the ATT (2.7), the second and third term of (2.5) become \[(1,-1)\left(\delta A_{11}^{-1}\delta^{\top}-B_{12}A_{11}^{-1}B_{12 }^{\top}\right)\left(\begin{array}{c}1\\ -1\end{array}\right)\] \[\qquad=\left(\boldsymbol{\beta}_{x0}-2\boldsymbol{\beta}_{x1} \right)^{\top}\mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2}\right] \boldsymbol{\beta}_{x0}-2\left(\boldsymbol{\beta}_{x1}-2\boldsymbol{\beta}_{x0} \right)^{\top}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2}\right]( \boldsymbol{\beta}_{x1}-\boldsymbol{\beta}_{x0})\] \[\qquad\quad+3\left(\boldsymbol{\beta}_{x1}-\boldsymbol{\beta}_{x0 }\right)^{\top}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2}\right] \mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2}\right]^{-1}\mathrm{E}\left[e^{2}(1-e) \boldsymbol{X}^{\otimes 2}\right](\boldsymbol{\beta}_{x1}-\boldsymbol{\beta}_{x0})\] From the result, the following relationship can be proved. **Theorem 3**.: _For the ATO, when treatment effects are homogeneous (ie., \(\boldsymbol{\beta}_{x1}=\boldsymbol{\beta}_{x0}\)), the second and third terms of (2.5) are precisely negative. 
When there is only constant heterogeneity (ie., \(\boldsymbol{\beta}_{x1}=\gamma\boldsymbol{\beta}_{x0}\)), sufficients condition for the second and third terms of (2.5) to be precisely negative is the existence of a value \(\gamma>1\) such that_ \[\frac{2(\gamma-2)}{3(\gamma-1)}\mathrm{I}-\mathrm{E}\left[e(1-e)\boldsymbol{X} ^{\otimes 2}\right]^{-1}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2} \right]>O, \tag{2.8}\] Proof.: Since the former statement is obvious, we will only prove the latter statement. When \(\boldsymbol{\beta}_{x1}=\gamma\boldsymbol{\beta}_{x0}\), \[(1-2\gamma)\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e(1-e) \boldsymbol{X}^{\otimes 2}\right]\boldsymbol{\beta}_{x0}-2(\gamma-1)(\gamma-2) \boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{ \otimes 2}\right]\boldsymbol{\beta}_{x0}\] \[\quad+3(\gamma-1)^{2}\boldsymbol{\beta}_{x0}^{\top}\mathrm{E} \left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2}\right]\mathrm{E}\left[e(1-e) \boldsymbol{X}^{\otimes 2}\right]^{-1}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{ \otimes 2}\right]\boldsymbol{\beta}_{x0}\] \[=(1-2\gamma)\boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e(1-e )\boldsymbol{X}^{\otimes 2}\right]\boldsymbol{\beta}_{x0}+(\gamma-1) \boldsymbol{\beta}_{x0}^{\top}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{ \otimes 2}\right]\mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2}\right]^{-1}\] \[\quad\times\left\{-2(\gamma-2)\mathrm{E}\left[e(1-e)\boldsymbol{X }^{\otimes 2}\right]+3(\gamma-1)\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{ \otimes 2}\right]\right\}\boldsymbol{\beta}_{x0}\] When \(\gamma\geq 0.5\), the first term is negative definite. When \(\gamma>1\), the second term is negative definite under the condition (2.8). When \(1>\gamma\geq 0.5\) the second term is negative definite under the condition \[\frac{2(\gamma-2)}{3(\gamma-1)}\mathrm{I}-\mathrm{E}\left[e(1-e)\boldsymbol{X }^{\otimes 2}\right]^{-1}\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2} \right]<O.\] However, from the relationship \[\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2}\right]=\Pr(T=1)\mathrm{E} \left[e(1-e)\boldsymbol{X}^{\otimes 2}|T=1\right]<\mathrm{E}\left[e(1-e) \boldsymbol{X}^{\otimes 2}\right], \tag{2.9}\] There are no situations where the condition can be satisfied. Note that when \(\gamma<0.5\), simple sufficient conditions cannot be derived because the first term of the above formula is positive definite. This implies that when there's a small, or negative interaction effect between the treatment and all confounders, the simple CI may not yield a conservative CI. When \(\gamma>1\), relatively straightforward condition (2.8) can be derived; however, interpretation remains challenging. To address this, we use the following relationship (2.9). When \(\Pr(T=1)\approx 0\), it is expected that \(\mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2}\right]^{-1}\mathrm{E} \left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2}\right]\approx O\). In this situation (2.8) under \(2\geq\gamma>1\) is not hold. Whereas, (2.8) under \(\gamma>2\) is hold. This means that if there are only a few members in the treatment group, and there are large interaction effects between the treatment and all confounders, the simple CI yields a conservative CI. When \(\Pr(T=1)\approx 1\), it is expected that \(\mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2}\right]^{-1}\mathrm{E} \left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2}\right]\approx\mathrm{I}\). 
In this situation, \[\left(\frac{2(\gamma-2)}{3(\gamma-1)}-1\right)\mathrm{I}=\frac{-1-\gamma}{3(\gamma-1)}\mathrm{I}.\] Obviously, (2.8) is not satisfied. Note that when \(\Pr(T=1)\approx 0\) and \(\gamma>0.5\), (2.8) may not always hold based on the above discussion; however, the first term of (2.5) dominates. Therefore, the simple CI also yields a conservative CI in this situation. Summarizing the discussions above, for the ATO, there is no universal scenario where the simple CI is applicable; the exact CI is more appropriate. However, in certain situations, such as when there are homogeneous treatment effects, when there exist significant treatment-confounder interactions, or when there are many members in the control groups, the simple CI might work effectively. ## 3 Conclusions In this manuscript, we examine scenarios in which the 'robust sandwich variance' estimator for the IPW estimator might yield a conservative confidence interval. Specifically, we demonstrate that for the ATT, a conservative CI can be established when there is a homogeneous treatment effect or when the interaction effect exceeds that of the covariates alone. For the ATO, conservative CIs can be determined under specific conditions, such as in the presence of homogeneous treatment effects, significant treatment-confounder interactions, or a large population within the control groups. Our results stem from conditions where the true propensity score adopts a linear construction of the covariates. Additionally, we presuppose that potential outcomes fit standard linear regression models. While the theoretical findings can give insight into binary, multinomial, or time-to-event outcome scenarios, it is anticipated that exact theoretical justifications might be challenging to pinpoint. As such, for these types of outcomes, further simulation experiments are recommended. The findings of this manuscript could be instrumental for such endeavors. While our focus is primarily on the ATT and ATO, other known estimands exist, such as the ATM (matching weight, as cited in [2]) and the ATEM (entropy weight, as referred to in [6]). In future work, we also aim to assess the performance of these estimands when employing the robust sandwich variance estimator. For the ATM, given its definition, derivatives around \(e=0.5\) pose challenges. Approximation techniques, like those proposed by Orihara et al. [7], might be necessary.
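As a rough, purely illustrative sketch of how a condition such as (2.8) might be checked in practice, the snippet below estimates \(\mathrm{E}\left[e(1-e)\boldsymbol{X}^{\otimes 2}\right]\) and \(\mathrm{E}\left[e^{2}(1-e)\boldsymbol{X}^{\otimes 2}\right]\) by Monte Carlo under an assumed logistic propensity model and compares the generalized eigenvalues with the scalar \(2(\gamma-2)/(3(\gamma-1))\); the covariate distribution, propensity coefficients, and values of \(\gamma\) are our own assumptions and are not taken from the manuscript.

```python
# Monte Carlo sketch for checking condition (2.8); all data-generating choices
# (covariates, propensity coefficients, gamma values) are illustrative only.
import numpy as np
from scipy.linalg import eigh
from scipy.special import expit

rng = np.random.default_rng(0)
n = 200_000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
alpha = np.array([-2.0, 0.5, -0.5])                         # assumed propensity coefficients
e = expit(X @ alpha)                                        # true propensity scores; Pr(T=1) is small here

A = (X * (e * (1 - e))[:, None]).T @ X / n       # estimate of E[e(1-e) X X^T]
B = (X * (e**2 * (1 - e))[:, None]).T @ X / n    # estimate of E[e^2(1-e) X X^T]

# Reading (2.8) as a positive-definiteness statement about c*I - A^{-1}B, it holds
# when every generalized eigenvalue of (B, A) lies below c = 2(gamma-2)/(3(gamma-1)).
lam_max = eigh(B, A, eigvals_only=True).max()
for gamma in (1.5, 3.0, 10.0):
    c = 2 * (gamma - 2) / (3 * (gamma - 1))
    print(f"gamma={gamma:>4}: c={c:+.3f}, max generalized eigenvalue={lam_max:.3f}, "
          f"(2.8) holds: {lam_max < c}")
```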
2309.03638
Beyond XAI: Obstacles Towards Responsible AI
The rapidly advancing domain of Explainable Artificial Intelligence (XAI) has sparked significant interest in developing techniques to make AI systems more transparent and understandable. Nevertheless, in real-world contexts, the methods of explainability and their evaluation strategies present numerous limitations. Moreover, the scope of responsible AI extends beyond just explainability. In this paper, we explore these limitations and discuss their implications in a broader context of responsible AI when considering other important aspects, including privacy, fairness and contestability.
Yulu Pi
2023-09-07T11:08:14Z
http://arxiv.org/abs/2309.03638v1
# Beyond XAI: Obstacles Towards Responsible AI ###### Abstract The rapidly advancing domain of Explainable Artificial Intelligence (XAI) has sparked significant interest in developing techniques to make AI systems more transparent and understandable. Nevertheless, in real-world contexts, the methods of explainability and their evaluation strategies present numerous limitations. Moreover, the scope of responsible AI extends beyond just explainability. In this paper, we explore these limitations and discuss their implications in a broader context of responsible AI when considering other important aspects, including privacy, fairness and contestability. ## 1 Introduction The barrier of explainability has spurred significant concerns as Artificial Intelligence (AI) lies at the core of many sectors. Explainable AI (XAI) emerged as a response to this, striving to make AI "behavior more intelligible to humans by providing explanations[20]." However, as we delve deeper into the application of XAI in real-world settings, it becomes evident that XAI does not suffice on its own[9]. Explainability, while crucial, is merely one facet of the broader challenge. The quest for Responsible AI demands a more holistic perspective that surpasses the bounds of XAI. Moving our vision beyond the XAI realm toward Responsible AI, we recognize the pressing need to address broader dimensions including privacy, fairness, accountability, and contestability. It is becoming increasingly clear that relying solely on XAI, without addressing these interwoven complexities, falls short of achieving Responsible AI. In this paper, we discuss the limitations of XAI from machine learning and human-computer interaction perspectives. We also explore the implications of adopting XAI techniques when considering other aspects of responsible AI. Given our space constraints, we specifically home in on fairness, privacy, and contestability. ## 2 Technical challenge of AI Explainability ML/AI researchers mainly focus on developing methods that make the decision process of a model less of a black box. XAI methods can be divided into interpretable models designed from the start and post-hoc explanations derived from black-box models. Advocates of interpretable models argue that for high-stakes decisions, prioritizing inherently transparent models is crucial[37]. Nevertheless, the application of black-box models is still the dominant practice, and there are still significant computational and technical hurdles in designing interpretable models, such as distilling a complex data space into an optimal set of discretized and meaningful features[7]. A common belief is that black-box methods generally obtain higher accuracy than interpretable ones[9]. As post-hoc methods are applied after the model's training process, they can be used to provide explainability in complex ML models without loss of performance[32]. 
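To make the post-hoc idea concrete, the short sketch below attributes an already-trained black-box model's predictions to its input features. It is a minimal illustration only: the `shap` and `scikit-learn` packages, the synthetic dataset, and all parameter choices are our own assumptions rather than anything taken from the works surveyed here.

```python
# Minimal post-hoc feature attribution for a black-box model (illustrative only).
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# 1) Train an opaque model first; no interpretability constraint is imposed.
X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 2) Explain it afterwards (post hoc): local, per-sample feature attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:200])        # array of shape (200, 6)

# 3) Aggregate the local attributions into a global importance ranking.
global_importance = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(global_importance)[::-1]:
    print(f"feature {idx}: mean |SHAP value| = {global_importance[idx]:.3f}")
```

Because the attribution is computed after training, the model itself is untouched; whether such post-hoc approximations remain faithful to the underlying model is one of the concerns discussed below.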
Post-hoc methods can be categorized based on several dimensions [40], including the type of data they analyze [2] (e.g., tabular, text, image), the algorithm used [11] (e.g., differentiable or non-differentiable algorithms), the scope of explanations[19] (e.g., local explanations focusing on specific input-output relationships or global explanations addressing overall model behavior), the format of the explanation[31] (e.g., verbal, visual and interactive interfaces), and the approach to explaining [42] (e.g., feature-based explanations for important factors in the decision-making process or contrastive reasoning involving similar and counterfactual examples). Despite the proliferation of XAI techniques, developing faithful and reliable explanations for various machine learning models remains one of the unsolved challenges. Much of the existing work in XAI produces simplified approximations of complex original models, which result in different levels of faithfulness, also referred to as fidelity. Through experiments, including model parameter randomization and data randomization tests, [4] discovered that numerous XAI methods are incapable of generating explanations that truly reflect the model's logic or the data generating process. Furthermore, recent research discovered that explanations can be susceptible to manipulation. By applying visually imperceptible perturbations to the input image that keep the network's output approximately constant, one can manipulate generated explanations arbitrarily [14]. Similarly, [39] demonstrate that biased classifiers can be modified to easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases. This raises concerns about the reliability of explanations and the possibility of malicious actors misleading users about how a model operates. They recommend using adversarial manipulation to evaluate the robustness and reliability of explanations[21]. The imperfections and vulnerabilities of technical approaches to explainability complicate their evaluation and application in real-world contexts. Moving towards responsible AI, the focus broadens beyond mere explainability. The term "responsible AI" is intended to encapsulate a broad set of technical and socio-technical attributes of AI systems such as safety, efficacy, fairness, privacy, transparency and explainability[41] [9]. There is growing awareness that explainability is intricately connected with other facets of Responsible AI. Yet, limited research studies explainability through the lens of its integration, support, and potential clashes with other essential facets of responsible AI. In the following paragraphs, given the paper's length constraints, we explore only how current research has addressed explainability alongside fairness and privacy. We advocate for more comprehensive and in-depth research to understand the intricacy of explainability for the broader objective of responsible AI. ### Fairness Much research and many AI ethics guidelines have emphasized the important relationship between explainability and fairness from a theoretical point of view, highlighting the instrumental role explanations play in AI fairness[46]. For instance, 26 out of the 28 surveyed AI principles that address XAI also discuss fairness explicitly, emphasizing both aspects together when implementing Responsible AI[18]. From a societal viewpoint, explainability is seen as a mechanism to ensure and champion fairness in AI[9]. 
In their seminal study, researchers underscore that explanations should serve as tools for humans to discern if AI decisions harbor biases against protected groups[15]. Feature-based XAI techniques, such as SHAP, decompose the model output into feature attributions. This breakdown can be used to compute quantitative fairness metrics, such as the demographic parity difference, for each input feature using the SHAP values. Such an examination via XAI can help detect implicit connections between protected and unprotected features [28]. Conversely, [24] suggests that explanation methods generating counterfactual explanations of positive and negative evidence of fairness offer tangible value to those at the receiving end of an AI model's decisions. Such explanations, rather than feature importance and actionable recourse, present evidence potentially pointing to historical instances of unfairness. While there is a consensus on the crucial role of explainability in AI fairness, numerous critics argue that many XAI techniques fall short in providing essential functionalities for bias detection. In a survey examining the use of explainability methods to uncover or investigate biases in NLP, the majority of the identified works utilized feature attribution methods[10]. However, [39] proved that feature attribution techniques such as LIME and SHAP can be manipulated to hide the underlying biases of a biased model, leading to a false belief of fairness. Additionally, there are inherent conceptual issues with such approaches. XAI techniques often check whether models recognize features tied to protected attributes instead of ensuring that the input data is devoid of biases concerning those attributes. This mirrors the "fairness through unawareness" strategy [10]. Yet, the role of XAI in working with other bias mitigation strategies, such as pre-processing, in-processing, and post-processing, has yet to be clarified. [] underscore six important functions, yet to be fully realized, of XAI in solving biased data and issues involved in the selection and formulation of ML models: * The XAI tools could identify imbalances within the data as it relates to over/under-sampling; * The XAI tools could identify attributes most influential in both local and global decisions; * The XAI tools can identify processing issues that had a distinct impact on the final model; * The XAI tools can consider the impact of user-labeled sensitive attributes on the model performance; * The XAI tools can highlight influences from model selection and optimization that impacted the final algorithm and its performance; and * The XAI tools consider some metric of fairness in evaluating the global performance of the resulting algorithm. ### Privacy Although explainability and privacy have been widely studied as two separate fields in previous research [27], the tension and complexity between explainability and privacy have received growing concern. Explainability strives to disclose more details about specific decisions or the overarching decision-making process, while privacy-centric techniques deliberately avoid in-depth revelations about individual decision paths, concentrating on dataset-wide statistics. Providing additional details about the model for enhanced explainability may compromise privacy, or vice versa. For instance, if images are obfuscated for privacy reasons, providing explanations for the classification may unintentionally reveal the identities of people in the images[45]. 
Furthermore, different explainability methods also have various levels of privacy issues. Previous research showed that the success rate of attacks exploiting model explanations, especially backpropagation-based rather than perturbation-based methods, surpassed that of attacks with access only to model predictions[16]. [38] conducted an extensive experimental analysis to understand the impact of private learning techniques on generated model explanations. Their research indicates complex interactions between privacy-preserving and explainability techniques. For instance, differential privacy methods hamper the interpretability of explanations, while federated learning often improves the understanding of generated explanations. A deeper understanding of the intricate relationship between privacy and explainability is especially critical for sensitive domains like medicine, which pose growing demands on AI systems that balance data privacy with appropriate explainability. [27] investigated the privacy risk of explanation in the context of using AI for biomedical image analysis and found that differential privacy negatively influences the computation of concept-based explanations. Their research identified a need for an extra training procedure for differentially private concept-based explanations. In addition to technical advancements, how to strike a balance between explainability and privacy in different contexts requires a deeper analysis of the effect of privacy on the explanation by human users through application-grounded and human-grounded evaluation methods. ## 3 Ambiguity of Explainability's Role in Human-AI relationships Technical approaches to XAI have been criticized for their algorithm-centric focus, solely emphasizing the development of technical and mathematical methods to explain the behavior of underlying ML models[30], [3]. Functionally-grounded evaluations[15] of XAI methods rely on formal definitions of explainability such as fidelity and sensitivity[44] as a proxy to evaluate the quality of generated explanations without involving human experimentation. However, this evaluation method fails to consider essential human factors, such as the knowledge, needs, and expectations of real users[23], [25]. As a result, it does not provide measurements of explanation quality in the areas of improving trust, enhancing understanding, and facilitating the performance of human-AI teams[30],[3]. Aware of this shortfall, many HCI researchers have applied human-grounded evaluation, where XAI methods are evaluated with user studies but with simplified tasks[13],[43]. By conducting user studies, HCI researchers aim to understand 'what makes a good explanation' by asking whom the explanation is for (who), what the goal of the AI explanation is (why), and what the types of explanations are (how)[17]. While there are no definitive answers so far, HCI research reveals substantial variation in users' perceptions of and attitudes towards explainability. Its role in enhancing understanding, fostering trust, and facilitating human-AI collaboration appears to shift significantly depending on the task given and application context[43][26]. HCI research further underscores that different stakeholders require various types of explanations based on their specific purposes[23], the domain of application, and a plethora of human factors, such as AI literacy and cultural background[33]. In varying contexts and for distinct users, the desirable qualities of explainability satisfy different, sometimes conflicting, requirements. 
Take, for instance, the "faithfulness" feature, which assesses if the explanation accurately reflects the inner workings of the complex model, versus the "compactness" feature, ensuring the explanation remains concise and not overwhelming[25]. An ML engineer may prioritize faithful explanations for model debugging and development, while an end user might prefer concise explanations, even if they aren't as detailed[34]. However, these human and contextual factors have not been fully understood yet, with different studies presenting varied findings. For example, Cheng et al.'s work provided concerted empirical evidence that interactive visual explanations are effective at improving non-expert users' comprehension of algorithmic decisions[13]. Conversely, Han Liu et al. observed mixed results for interactive explanations: while these explanations improve human perception of AI assistance's usefulness, they may reinforce human biases and lead to only marginal performance improvement[26]. This variation in findings points to a significant gap in HCI research: simple experiments cannot fully capture the complexities of real-world situations, failing to provide generalized guidance on XAI applications. Human-grounded evaluations frequently rely on proxy tasks with simplified experimental settings, making it hard to gauge how these explanations would help users in real-world tasks[8]. Given the evolving emphasis on explainability in AI regulations, there is an urgent need for more application-grounded evaluations where XAI methods are assessed in real-world scenarios with actual users. In real-world settings, AI operates in a broader context where procedural information matters when people try to understand AI. Without knowing how different types of explanations change what users know, thereby enabling them to act in response to AI, any requirement to provide explanations is unlikely to meet its goal. The lack of universally applicable or context-specific research findings introduces a significant complexity into the process of creating well-rounded policy recommendations or regulatory decisions. In other words, without research outcomes that can be applied broadly across various contexts or those tailored to specific situations, it becomes exceedingly challenging to design policies or regulations that effectively govern the use of AI. We have recognized "contestability" as a crucial aspect that explanations can enable, yet one that HCI research has not adequately studied due to the lack of real-world evaluation. A detailed discussion on this matter follows below. ### Contestability In the context of automated decision-making, particularly customer-facing applications, there is a power imbalance where decision makers are in a position of power over decision subjects. The opacity of AI decision-making systems, whether due to their mere complexity or to proprietary claims, tends to exacerbate the existing power gap -- decision makers have access to information about the AI system that is not available to decision subjects[29]. Recently, scholars have begun to explore the right to contestability, the ability to contest algorithmic decisions. Many argue that contestability offers decision subjects some protection, permitting them to reclaim some control and hold decision-makers accountable. This often involves requesting a review, many times involving human intervention and scrutiny[41]. 
Contestability enables interactions among decision-makers and algorithms, decision-makers and system designers, and ideally between decision-makers and individuals impacted by the decisions, as well as the general public[22]. There is existing legislation that offers affected individuals the possibility of seeking such a review. For example, under the Equal Credit Opportunity Act, if a candidate's credit application is rejected, the credit bureau is obligated to share the main reasons for the rejection, thereby enabling the possibility of a contestation. While contestability empowers individuals to exercise their autonomy for their own advantage, some critics argue that it imposes an undue burden on those affected by the decision: _the onus is on the individual to pursue an appeal_[29]. However, the applicability of this regulatory approach to automated decision-making is not entirely clear. The challenges do not lie only in the technical difficulty of approximating the decision rules of these "black box" systems. Automated decision-making also opens up new questions about defining what can be contested, who can contest, who is accountable, and how the review process should be conducted [29]. Many have discussed the relationship between explainability and contestability. Providing an explanation is not only seen as complementary to contestability, but as an essential prerequisite that enables a person to contest a decision [6][29][35]. The importance of an explanation in determining whether the algorithmic decision was justified and in providing grounds for review has been underscored. While there is an agreement that explanations should contain the information necessary for a decision subject to exercise their right to contestation [12][36], defining what constitutes adequate and relevant information remains a complex issue. In its recent white paper on AI regulation, the UK government calls for more evidence on interactions with requirements of appropriate explainability, which act as pre-conditions of effective redress and contestability [1]. Research in XAI has increasingly shifted towards more human-centred approaches in designing and evaluating explanations. The provision of explanations specifically for contestation necessitates a well-defined decision-making context, which current proxy tasks often fail to deliver. The question of what explanations and interactions will prompt appropriate engagement and contestation by impacted individuals is domain- and context-specific [35]. In the rising landscape of AI governance, there is a growing necessity for XAI contributions that consider contestation as a vital factor to inform policy decisions. ## 4 Closing Remarks In this paper, we explore the limitations of current explainability methods and evaluation practices, as well as the implications when considering other important factors for responsible AI, including privacy, fairness, and contestability. Our exploration has shown that while explainability is undeniably pivotal, the pursuit of responsible AI practices is hindered without a well-established mechanism linking explainability to other critical aspects. In sum, while explainability is a commendable stride forward, the ultimate destination is a comprehensive and holistic approach to responsible AI--one that embraces and integrates all its multifaceted dimensions.
2309.04157
Observations of Orbiting Hot Spots around Naked Singularities
Recently, it has been reported that photons can traverse naked singularities in the Janis-Newman-Winicour and Born-Infeld spacetimes when these singularities are appropriately regularized. In this paper, we investigate observational signatures of hot spots orbiting these naked singularities, with a focus on discerning them from black holes. In contrast to Schwarzschild black holes, we unveil the presence of multiple additional image tracks within critical curves in time integrated images capturing a complete orbit of hot spots. Moreover, these new images manifest as a more pronounced second-highest peak in temporal magnitudes when observed at low inclinations.
Yiqian Chen, Peng Wang, Haitang Yang
2023-09-08T06:50:36Z
http://arxiv.org/abs/2309.04157v2
# Observations of Orbiting Hot Spots around Naked Singularities ###### Abstract Recently, it has been reported that photons can traverse naked singularities in the Janis-Newman-Winicour and Born-Infeld spacetimes when these singularities are appropriately regularized. In this paper, we investigate observational signatures of hot spots orbiting these naked singularities, with a focus on discerning them from black holes. In contrast to Schwarzschild black holes, we unveil the presence of multiple additional image tracks within critical curves in time integrated images capturing a complete orbit of hot spots. Moreover, these new images manifest as a more pronounced second-highest peak in temporal magnitudes when observed at low inclinations. + Footnote †: preprint: CTP-SCU/2023037 ###### Contents * I Introduction * II Spacetime and Geodesics * II.1 JNW Singularity * II.2 Born-Infeld Singularity * III Observation of Hot Spot * III.1 Integrated Images * III.2 Temporal Fluxes and Centroids * IV Conclusions ## I Introduction The recent remarkable advancement in high angular resolution achieved by the Event Horizon Telescope (EHT) collaboration has ushered in a new era for the study of gravitational lensing within the context of strong gravitational fields [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. This development has kindled a profound interest in the examination of black hole images that are illuminated by the accreting plasma. The extraordinary black hole images captured by the EHT have opened the possibility to directly test sophisticated theoretical models, such as General Relativistic Magnetohydrodynamical (GRMHD) numerical simulations, against observations. Given the substantial computational resources required for GRMHD simulations, researchers often resort to simplified accretion models that, while computationally more tractable, sufficiently capture the fundamental characteristics of black hole images [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. A noteworthy characteristic of these images is the presence of a shadow region, encircled by a luminous ring. This distinctive feature arises from strong gravitational lensing effects in the vicinity of unstable bound photon orbits [31; 32; 33; 34; 35; 36; 37; 38]. The black hole shadow, as observed by the EHT, is anticipated to carry vital information about the spacetime geometry surrounding the black hole. Remarkably, its features closely align with the predictions based on the Kerr black hole model. Nevertheless, it is important to acknowledge that uncertainties related to the black hole's mass-to-distance ratio and potential systematic errors within the EHT observations introduce some degree of ambiguity within the bounds of observational uncertainty, allowing for the possibility of alternatives to Kerr black holes. Furthermore, recent discoveries of horizonless ultra-compact objects that exhibit photon spheres have added another layer of complexity to the scenario, effectively mimicking black holes in various observational simulations [39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. Among ultra-compact objects, naked singularities have attracted considerable attention. Although the cosmic censorship conjecture prohibits the formation of naked singularities, they can arise through the gravitational collapse of massive objects under specific initial conditions [49; 50; 51; 52; 53; 54; 55]. 
The presence of photon spheres allows naked singularities to effectively mimic the optical characteristics of their black hole counterparts, instigating inquiries into the distinctive observational imprints attributable to naked singularities [56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67]. Interestingly, in certain naked singularity spacetimes, it has been established that photons can both approach and depart from the singularities in finite coordinate time intervals [68; 15; 67]. In such spacetimes, images of naked singularities captured by distant observers critically depend on the intrinsic nature of these singularities--a facet that demands a deeper exploration through a quantum gravity framework. However, the absence of a definitive theory of quantum gravity poses formidable challenges when it comes to investigating the behavior of photons in the proximity of singularities. Consequently, researchers frequently resort to effective models for singularity regularization, thus enabling the study of null geodesics near these points. One such approach involves the incorporation of higher-order curvature terms, such as the complete \(\alpha^{\prime}\) corrections of string theory [69; 70; 71]. Recently, our investigations have centered on the phenomenon of gravitational lensing applied to distant light sources within the context of Janis-Newman-Winicour (JNW) and Born-Infeld singularities. Our findings have revealed that photons entering the photon spheres ultimately converge toward the singularities in a finite coordinate time [68; 67]. When these singularities are subjected to regularization through the introduction of a regular core, it becomes possible for these photons to traverse the now regularized singularity. This traversal results in the emergence of new images occurring within critical curves. In the present study, our focus is on the observational properties of JNW and Born-Infeld singularities when illuminated by localized and isotropically emitting sources, referred to as "hot spots." In certain GRMHD simulations and semi-analytic models, the occurrence of magnetic reconnection and flux eruptions yields the formation of hot spots encircling supermassive black holes that host a magnetized accretion disk [72; 73; 74]. Notably, these hot spots have been recurrently observed within the vicinity of Sgr A* [75; 76; 77]. Furthermore, a noteworthy instance involves the detection of an orbiting hot spot within the unresolved light curve data obtained at the observing frequency of the EHT [78]. Due to their origin from a compact region proximate to the innermost stable circular orbit (ISCO), these hot spots represent a promising tool for the examination of central objects in the strong gravity regime [78; 79]. The subsequent sections of this paper are structured as follows: In Section II, we briefly review the JNW and Born-Infeld singularities, along with a discussion of geodesic motion within these spacetimes. Section III is devoted to the hot spot model, followed by an examination of time integrated images, temporal fluxes and centroids. Finally, Section IV presents our conclusions. We adopt the convention \(G=c=1\) throughout the paper. ## II Spacetime and geodesics In this section, we provide a concise overview of both JNW and Born-Infeld singularities, while also examining the geodesic motion within these spacetimes. 
For a spherically symmetric and static spacetime governed by the metric \[ds^{2}=-f\left(r\right)dt^{2}+\frac{1}{h\left(r\right)}dr^{2}+R\left(r\right) \left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right), \tag{1}\] the trajectory of a test particle with four-momentum \(p^{\mu}\) is determined by the geodesic equations \[\frac{dx^{\mu}}{d\lambda}=p^{\mu},\quad\frac{dp^{\mu}}{d\lambda}=-\Gamma_{ \rho\sigma}^{\mu}p^{\rho}p^{\sigma}. \tag{2}\] Here, \(\lambda\) is the affine parameter, and \(\Gamma_{\rho\sigma}^{\mu}\) indicates the Christoffel symbol. These geodesics are fully characterized by three conserved quantities, \[E=-p_{t},\quad L_{z}=p_{\varphi},\quad L^{2}=p_{\theta}^{2}+L_{z}^{2}\csc^{2}\theta. \tag{3}\] In the context of massless particles, the conserved quantities \(E\), \(L_{z}\) and \(L\) represent the total energy, the angular momentum parallel to the axis of symmetry and the total angular momentum, respectively. Additionally, the Hamiltonian constraint \(\mathcal{H}\equiv g_{\mu\nu}p^{\mu}p^{\nu}/2=0\) yields the radial component of the null geodesic equations as \[\dot{r}^{2}+V_{\mathrm{eff}}\left(r\right)=0, \tag{4}\] where the dot signifies differentiation with respect to an affine parameter \(\lambda\), and the introduced effective potential is given by \[V_{\mathrm{eff}}\left(r\right)=h\left(r\right)\left[\frac{L^{2}}{R\left(r \right)}-\frac{E^{2}}{f\left(r\right)}\right]. \tag{5}\] A circular null geodesic occurs at an extremum of the effective potential \(V_{\text{eff}}(r)\), and the radius \(r_{c}\) of this geodesic is determined by the conditions \[V_{\text{eff}}\left(r_{c}\right)=0,\ V_{\text{eff}}^{\prime}\left(r_{c}\right)=0. \tag{6}\] Furthermore, local maxima and minima of the effective potential correspond to unstable and stable circular null geodesics, respectively. These unstable and stable circular null geodesics constitute a photon sphere and an anti-photon sphere, respectively. For massive particles, \(E\), \(L_{z}\) and \(L\) represent the total energy per unit mass, the angular momentum per unit mass parallel to the axis of symmetry and the total angular momentum per unit mass, respectively, when the affine parameter \(\lambda\) is chosen as the proper time per unit mass. Similarly, the Hamiltonian constraint \(\mathcal{H}=-1/2\) leads to the effective potential \[V_{\text{eff}}\left(r\right)=h\left(r\right)\left[\frac{L^{2}}{R\left(r\right) }-\frac{E^{2}}{f\left(r\right)}+1\right]. \tag{7}\] Consequently, the ISCO at \(r=r_{\text{ISCO}}\) is determined by the conditions \[V_{\text{eff}}\left(r_{\text{ISCO}}\right)=0,\,V_{\text{eff}}^{\prime}\left(r _{\text{ISCO}}\right)=0,\,V_{\text{eff}}^{{}^{\prime\prime}}\left(r_{\text{ ISCO}}\right)=0. \tag{8}\] ### JNW Singularity The JNW metric provides a static solution within Einstein-massless-scalar-field models and is expressed in the form [80; 81; 82; 83] \[ds^{2}=-\left(1-\frac{r_{g}}{r}\right)^{\gamma}dt^{2}+\left(1-\frac{r_{g}}{r} \right)^{-\gamma}dr^{2}+\left(1-\frac{r_{g}}{r}\right)^{1-\gamma}r^{2}\left(d \theta^{2}+\sin^{2}\theta d\varphi^{2}\right). \tag{9}\] Additionally, the scalar field is given by \[\Phi=\frac{q}{r_{g}}\ln\left(1-\frac{r_{g}}{r}\right), \tag{10}\] where \(q\) denotes the scalar charge. The JNW metric is characterized by two parameters, \(\gamma\) and \(r_{g}\), which are related to the ADM mass \(M\) and the scalar charge \(q\) according to [81], \[\gamma=\frac{2M}{r_{g}},\ r_{g}=2\sqrt{M^{2}+q^{2}}. 
\tag{11}\] When \(\gamma=1\), the JNW metric describes Schwarzschild black holes with no scalar charge. For \(0.5<\gamma<1\), the JNW metric represents weakly naked singularity solutions with a non-trivial scalar field profile. In this case, a naked curvature singularity arises at \(r=r_{g}\), and a photon sphere exists at \(r_{ps}=r_{g}\left(1+2\gamma\right)/2\). However, the photon sphere disappears when \(0\leq\gamma<0.5\), leading to distinct light propagation behaviors. Given that a spacetime featuring photon spheres can mimic black hole observations, this paper primarily focuses on the JNW metric with \(0.5<\gamma<1\). As shown in [68], null geodesics in the vicinity of the singularity can be expressed as \[t\left(\lambda\right) =t_{0}\pm_{r}\frac{E^{1-\gamma}r_{g}^{\gamma}\left|\lambda\right|^ {1-\gamma}}{1-\gamma}+\mathcal{O}\left(\left|\lambda\right|^{1-\gamma}\right),\] \[r\left(\lambda\right) =r_{g}\pm_{r}E\lambda+\mathcal{O}\left(\left|\lambda\right|^{ \frac{1}{2-2\gamma}}\right),\] \[\theta\left(\lambda\right) =\theta_{0}\pm_{\theta}\sqrt{L^{2}-L_{z}^{2}\csc^{2}\theta_{0}} \frac{E^{\gamma-1}\left|\lambda\right|^{\gamma}}{\gamma r_{g}^{1+\lambda}}+ \mathcal{O}\left(\left|\lambda\right|^{\gamma}\right), \tag{12}\] \[\varphi\left(\lambda\right) =\varphi_{0}+\frac{L_{z}E^{\gamma-1}\csc^{2}\theta_{0}\left| \lambda\right|^{\gamma}}{\gamma r_{g}^{\gamma+1}}+\mathcal{O}\left(\left| \lambda\right|^{\gamma}\right),\] where \(t_{0}\), \(\theta_{0}\) and \(\varphi_{0}\) are the integration constants, and we assume \(r\left(0\right)=r_{g}\). It shows the existence of two classes of light rays: radially outgoing and ingoing light rays, denoted as \(+_{r}\) and \(-_{r}\), respectively. To simplify, we adopt \(\lambda<0\) for ingoing light rays and \(\lambda>0\) for outgoing ones. As the affine parameter \(\lambda\) approaches \(0\) from the right and left, respectively, both outgoing and ingoing light rays converge toward the singularity. Interestingly, as indicated by eqn. (12), it becomes apparent that photons originating from distant sources can reach the singularity in a finite coordinate time, and conversely, photons escaping from the singularity only require a finite coordinate time to reach distant observers. This aspect is of particular interest since, from the perspective of distant observers, whose proper time is approximately the coordinate time, the destiny of photons at the singularity significantly influences the observable characteristics of the JNW naked singularity. In the quest to examine the behavior of photons in close proximity to the singularity, researchers often resort to effective models to regularize the singularity. Specifically, the singularity can be regularized with an infinitesimally small regular core, as outlined in [68]. In this regularized singularity spacetime, light rays, upon entering the photon sphere, transverse the regular core and can be accurately approximated by a composite of the ingoing and outgoing branches given in eqn. (12). Furthermore, the connection between these two branches is given by \[\theta(0_{-})=\pi-\theta(0_{+}),\quad\varphi(0_{-})=\pi+\varphi(0_{+}). \tag{13}\] In short, the condition (13) and the conservation of \(E\), \(L_{z}\) and \(L\) determine the corresponding outgoing branch for a given ingoing branch. 
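As a purely illustrative numerical cross-check (not part of the original analysis), the sketch below evaluates the JNW metric functions of eq. (9), locates the maximum of \(f(r)/R(r)\) implied by the circular-null-geodesic conditions (6), and compares it with the closed-form photon sphere radius \(r_{ps}=r_{g}\left(1+2\gamma\right)/2\) quoted above; the values of \(M\) and \(q\) are arbitrary choices made for this example.

```python
# Numerical cross-check of the JNW photon sphere radius (illustrative only).
# M and q are arbitrary example values; geometrized units with G = c = 1.
import numpy as np

M, q = 1.0, 0.5
r_g = 2.0 * np.sqrt(M**2 + q**2)        # eq. (11)
gamma = 2.0 * M / r_g                   # here gamma ~ 0.89, so 0.5 < gamma < 1

def f(r):
    return (1.0 - r_g / r) ** gamma                  # -g_tt in eq. (9)

def R(r):
    return (1.0 - r_g / r) ** (1.0 - gamma) * r**2   # areal factor in eq. (9)

# The conditions (6) for a circular null geodesic reduce to an extremum of
# R(r)/f(r); equivalently, the photon sphere maximizes f(r)/R(r).
r = np.linspace(1.001 * r_g, 10.0 * r_g, 200_000)
r_ps_grid = r[np.argmax(f(r) / R(r))]
r_ps_closed = 0.5 * r_g * (1.0 + 2.0 * gamma)

print(f"photon sphere radius: grid search {r_ps_grid:.4f}, closed form {r_ps_closed:.4f}")
```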
### Born-Infeld Singularity The Born-Infeld metric is a set of spherically symmetric and static solutions arising from an Einstein gravity model coupled with a Born-Infeld electromagnetic field, which is presented in [84; 85; 86]. This metric can be expressed as \[ds^{2}=-f_{\text{BI}}\left(r\right)dt^{2}+\frac{dr^{2}}{f_{\text{BI}}\left(r \right)}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right), \tag{14}\] where \[f_{\text{BI}}\left(r\right)=1-\frac{2M}{r}-\frac{2\left(Q^{2}+P^{2}\right)}{3 \sqrt{r^{4}+a\left(Q^{2}+P^{2}\right)}+3r^{2}}+\frac{4\left(Q^{2}+P^{2}\right)} {3r^{2}}\ _{2}F_{1}\left(\frac{1}{4},\frac{1}{2},\frac{5}{4};-\frac{a\left(Q^{2}+P^{2} \right)}{r^{4}}\right). \tag{15}\] Here, the parameter \(a\) is related to the string tension \(\alpha^{\prime}\) as \(a=\left(2\pi\alpha^{\prime}\right)^{2}\) while the black hole's mass, electrical charge, and magnetic charge are denoted as \(M\), \(Q\) and \(P\), respectively. The hypergeometric function \({}_{2}F_{1}\left(a,b,c;x\right)\) is employed. Depending on these parameters, the Born-Infeld metric can describe either a black hole or a naked singularity. The domain of existence for naked singularities within the parameter space \(a/M^{2}\)-\(\sqrt{Q^{2}+P^{2}}/M\) has been illustrated in [86]. Moreover, the nonlinearity of Born-Infeld electrodynamics introduces self-interaction of the electromagnetic field. Consequently, photons follow null geodesics in an effective metric with the metric functions [87] \[f(r) =\frac{(aP^{2}+r^{4})^{2}}{r^{2}\left[a\left(Q^{2}+P^{2}\right)+r ^{4}\right]^{3/2}}f_{\text{BI}}(r), \tag{16}\] \[h(r) =\frac{r^{2}\left[a\left(Q^{2}+P^{2}\right)+r^{4}\right]^{3/2}}{(aP ^{2}+r^{4})^{2}}f_{\text{BI}}(r),\] \[R(r) =\frac{(aP^{2}+r^{4})^{2}}{r^{4}\sqrt{a\left(Q^{2}+P^{2}\right)+r ^{4}}}.\] Furthermore, our investigation reveals the behavior of null geodesics in this effective metric near the singularity as detailed in [67], \[t\left(\lambda\right) =t_{0}\pm_{r}\frac{3\sqrt{\pi}a^{5/4}\left(Q^{2}+P^{2}\right) \lambda^{-2}}{8\Gamma\left(1/4\right)\Gamma\left(5/4\right)\left(Q^{2}+P^{2} \right)^{3/2}E^{2}-12\sqrt{\pi}a^{1/4}E^{2}M}+\mathcal{O}\left(\left|\lambda \right|{}^{-3}\right), \tag{17}\] \[r\left(\lambda\right) =\pm_{r}\frac{\sqrt{a\left(Q^{2}+P^{2}\right)}}{E}\lambda^{-1}+ \mathcal{O}\left(\left|\lambda\right|{}^{-2}\right),\] \[\theta\left(\lambda\right) =\theta_{0}\pm_{\theta}\frac{\sqrt{a\left(Q^{2}+P^{2}\right)}}{3E} \lambda^{-3}+\mathcal{O}\left(\left|\lambda\right|{}^{-4}\right),\] \[\varphi\left(\lambda\right) =\varphi_{0}+\frac{\sqrt{a\left(Q^{2}+P^{2}\right)}L_{z}\csc^{2} \theta_{0}}{3E^{4}}\lambda^{-3}+\mathcal{O}\left(\left|\lambda\right|{}^{-4} \right).\] Here, the upper and lower signs of \(\pm_{r}\) correspond to the radially outgoing and ingoing branches, respectively. Additionally, we adopt \(\lambda>0\) for the outgoing branch and and \(\lambda<0\) for the ingoing branch, respectively. It is worth emphasizing that the affine parameter approaches \(\pm\infty\) when the light ray approaches the singularity. Much like the situation with JNW singularities, photons characterized by suitably small impact parameters can traverse the regularized singularity in a finite coordinate time. The trajectories of these photons are effectively approximated by the outgoing and ingoing branches, and their connection is established through \[\theta(-\infty)=\pi-\theta(\infty)\quad\text{and }\varphi(-\infty)=\pi+\varphi( \infty). 
\tag{18}\] Remarkably, it has been demonstrated that Born-Infeld naked singularity solutions can possess two photon spheres and one anti-photon sphere in the effective metric. In [67], the parameter regions where two photon spheres with distinct sizes exist are presented in the \(a/M^{2}\)-\(\sqrt{P^{2}+Q^{2}}/M\) parameter space. Particularly, we focus on scenarios where the potential peak at the inner photon sphere is higher than that of the outer sphere. In such instances, both photon spheres can contribute to determining optical appearances of Born-Infeld naked singularities. ## III Observation of hot spot This section is dedicated to the examination of observational attributes exhibited by a hot spot encircling both JNW and Born-Infeld naked singularities. More precisely, we model the hot spot as an isotropically emitting sphere. Furthermore, this sphere's center revolves around the central object at a distinct radius \(r_{e}\) on the equatorial plane, propelled by the 4-velocity \[v_{e}^{\mu}=\left(\frac{E}{f\left(r_{e}\right)},0,0,\frac{L}{R\left(r_{e} \right)}\right), \tag{19}\] where \(E\) and \(L\) are given by \[E=\sqrt{\frac{R^{\prime}\left(r_{e}\right)f^{2}\left(r_{e}\right)}{f\left(r_{ e}\right)R^{\prime}\left(r_{e}\right)-f^{\prime}\left(r_{e}\right)R\left(r_{e} \right)}},\quad L=\sqrt{\frac{f^{\prime}\left(r_{e}\right)R^{2}\left(r_{e} \right)}{f\left(r_{e}\right)R^{\prime}\left(r_{e}\right)-f^{\prime}\left(r_{e }\right)R\left(r_{e}\right)}}.\] Consequently, the corresponding angular velocity and period are \(\Omega_{e}=\sqrt{f^{\prime}\left(r_{e}\right)/R^{\prime}\left(r_{e}\right)}\) and \(T_{e}=2\pi/\Omega_{e}\), respectively. To obtain the observed image of the hot spot, we employ the backward ray-tracing method to compute light rays from the observer to the hot spot. This involves numerically solving eqn. (2) with appropriate initial conditions at the observer's position, which is defined by coordinates \(\left(t_{o},r_{o},\theta_{o},\varphi_{o}\right)\). In particular, the initial conditions are determined by considering the 4-momentum of photons in the observer's local frame, denoted as \(\left(p^{(t)},p^{(r)},p^{(\theta)},p^{(\varphi)}\right)\). These local 4-momentum components are related to the 4-momentum \(p_{o}^{\mu}=\left.dx^{\mu}/d\lambda\right|_{\left(t_{o},r_{o},\theta_{o}, \varphi_{o}\right)}\) through the expressions \[p^{(t)}=\sqrt{f\left(r_{o}\right)}p_{o}^{t},\quad p^{(r)}=p_{o}^{r}/\sqrt{h \left(r_{o}\right)},\quad p^{(\theta)}=\sqrt{R\left(r_{o}\right)}p_{o}^{ \rho},\quad p^{(\varphi)}=\sqrt{R\left(r_{o}\right)}|\sin\theta_{o}|p_{o}^{ \varphi}. \tag{20}\] The observation angles \(\Theta\) and \(\Phi\), defined as per [88], are given by \[\sin\Theta=\frac{p^{(\theta)}}{p},\ \tan\Phi=\frac{p^{(\varphi)}}{p^{(r)}}, \tag{21}\] where \(p=\sqrt{p^{(r)2}+p^{(\theta)2}+p^{(\varphi)2}}\). For a detailed explanation of the numerical implementation, interested readers can refer to [67]. Within the observer's image plane, each pixel is associated with Cartesian coordinates \((x,y)\), where \[x\equiv-r_{o}\Phi,\ y\equiv r_{o}\Theta. \tag{22}\] In our computational framework, the observer's position is \((t_{o},r_{o},\theta_{o},\varphi_{o})=(t_{o},100M,\theta_{o},\pi)\). The hot spot, with a radius of \(0.25M\), orbits counterclockwise along a circular geodesic at \(r_{e}=r_{\text{ISCO}}\). To ensure computational precision and efficiency, we employ a grid of \(1000\times 1000\) pixels for each snapshot and generate 500 snapshots for a full orbit. 
This approach guarantees the production of smoothly evolving images throughout the period \(T_{e}\). At a specific time \(t_{k}\), each pixel within the image plane is assigned an intensity \(I_{klm}\), which collectively forms lensed images of the hot spot. Subsequently, the analysis focuses on the following image properties [89; 90], * Time integrated image: \[\left\langle I\right\rangle_{lm}=\sum_{k}I_{klm}.\] (23) * Total temporal flux: \[F_{k}=\sum_{l}\sum_{m}\Delta\Omega I_{klm},\] (24) where \(\Delta\Omega\) corresponds to the solid angle of a pixel. * Temporal magnitude: \[m_{k}=-2.5\lg\left(\frac{F_{k}}{\min\left(F_{k}\right)}\right).\] (25) * Temporal centroid: \[\overrightarrow{c_{k}}=F_{k}^{-1}\sum_{l}\sum_{m}\Delta\Omega I_{klm} \overrightarrow{r_{lm}},\] (26) where \(\overrightarrow{r_{lm}}\) represents the position relative to the image center. Figure 1: Time integrated images for a complete orbit of the hot spot, captured from an observational inclination angle of \(\theta_{o}=80^{\circ}\). The white lines delineate the critical curves, shaped by light rays that escape from the photon spheres. Intensity levels are normalized to their maximum value. **Upper-Left Panel**: Schwarzschild black hole. This image highlights the primary and secondary lensed image tracks positioned beyond the critical curve, resulting from the \(n=0^{>}\) and \(1^{>}\) light rays emitted by the hot spot, respectively. **Upper-Right Panel**: JNW singularity with \(\gamma=0.9\). This image unveils two image tracks outside the critical curve, alongside three additional tracks within the critical curve. The latter tracks are produced by \(n=1^{<}\), \(2^{<}\) and \(3^{<}\) light rays traversing the singularity. **Lower Panel**: Born-Infeld singularity with \(a/M^{2}=1\), \(Q/M=1.05\) and \(P/M=0\). The \(n=3^{<>}\) and \(4^{<>}\) light rays, engaged in orbits between the inner and outer photon spheres, create two more image tracks situated amid the inner and outer critical curves. ### Integrated Images FIGs. 1 and 2 exhibit the time integrated images for three distinct central objects, namely a Schwarzschild black hole, a JNW singularity and a Born-Infeld singularity, as observed from inclination angles of \(\theta_{o}=80^{\circ}\) and \(50^{\circ}\), respectively. Here, we include observations of the hot spot Figure 2: Time integrated images of the hot spot in the Schwarzschild black hole (**Upper-Left Panel**), the JNW singularity (**Upper-Right Panel**) and the Born-Infeld singularity (**Lower Panel**). The observer inclination is \(\theta_{o}=50^{\circ}\), with the central object parameters being consistent with those in FIG. 1. Decreasing the inclination angle results in diminished brightness asymmetry and the emergence of more circular image tracks. orbiting a Schwarzschild black hole to serve as both a run test for our code and a benchmark for our analysis. As anticipated, the hot spot images of the Schwarzschild black hole reveal two prominent bright image tracks. Intriguingly, in contrast to the Schwarzschild black hole case, the hot spot images in the JNW and Born-Infeld singularities show additional tracks. To understand the origin of these tracks, we present light rays of interest connecting the hot spot at \(\varphi=0\) to the observer at \(\theta_{o}=80^{\circ}\) in FIG. 3. We use a numerical count, indicated as \(n\), representing the number of times light rays intersect the equatorial plane, as a means to characterize light rays and consequently the resulting hot spot images. 
Furthermore, our previous studies have demonstrated that light rays can pass through both the JNW and Born-Infeld regularized singularities, thus giving rise to a new set of images. Moreover, in the case of Born-Infeld singularities featuring double photon spheres, photons are capable of orbiting the singularities between the inner and outer photon spheres. Consequently, we employ the superscripts \(>\), \(<>\) and \(<\) to denote light rays that travel outside the (outer) photon sphere, follow orbits between the inner and outer photon spheres, and traverse the singularity inside the (inner) photon sphere, respectively. For an observer at an inclination angle of \(\theta_{o}=80^{\circ}\), FIG. 1 presents the time integrated images of Figure 3: Light rays connecting the hot spot to the observer with an observation angle of \(\theta_{o}=80^{\circ}\). The photon spheres are depicted with dashed gray lines, while the singularity is marked by a blue dot. **Left Panel**: Light rays responsible for generating the image tracks inside the critical curve of the JNW singularity. **Right Panel**: Light rays that produce the image tracks between two critical curves of the Born-Infeld singularity. In \(n\), the number denotes the count of equatorial plane crossings by the light rays. Furthermore, the \(<\) and \(<>\) correspond to light rays traversing the singularities and orbiting between two photon spheres, respectively. the hot spot. These images manifest a distinctive brightness asymmetry, attributed to the Doppler effects. In the Schwarzschild black hole case, the primary image with \(n=0^{>}\) illustrates a closed semicircular track, wherein its upper and lower segments depict the hot spot situated behind and in front of the black hole. In contrast, the smaller and dimmer track represents the secondary image with \(n=1^{>}\). The scarcely visible upper segment corresponds to the hot spot positioned in front of the black hole, while the lower segment corresponds to the hot spot positioned behind it. Furthermore, higher-order images exhibit a markedly diminished luminosity and closely adhere to the critical curve, which is formed by photons escaping from the photon sphere. In addition to the two previously mentioned image tracks, the time integrated image of the hot spot in the JNW singularity, displayed in the upper-right panel of FIG. 1, reveals the presence of three additional tracks. These three tracks are formed by light rays that traverse the singularity, positioning them within the critical curve. Specifically, moving from the innermost to the outermost region, these image tracks arise from the \(n=1^{<}\), \(2^{<}\) and \(3^{<}\) light rays, as visually depicted in the left panel of FIG. 3. Furthermore, the upper and lower segments of the \(n=1^{<}\) and \(3^{<}\) tracks correspond, respectively, to the images of the hot spot located in front of and behind the singularity. Additionally, the \(n=2^{<}\) track features upper and lower segments corresponding, respectively, to the images of the hot spot located behind and in front of the singularity. Due to the presence of double photon spheres, the hot spot image in the Born-Infeld singularity, shown in the lower panel of FIG. 1, exhibits two critical curves. Analogously, the image tracks corresponding to \(n=0^{>}\) and \(1^{>}\) are observed outside the outermost critical curve, while the image tracks linked to \(n=1^{<}\), \(2^{<}\) and \(3^{<}\) are discernible within the innermost critical curve. 
However, light rays engaged in orbits between the inner and outer photon spheres contribute additional image tracks located between the two critical curves. In particular, the \(n=3^{<>}\) and \(4^{<>}\) light rays form two visible image tracks between the inner and outer critical curves. Yet, due to strong gravitational bending amid the two photon spheres, the \(n=2^{<>}\) light rays produce a faint crescent shape at the summit of the inner critical curve. The \(n=2^{<>}\), \(3^{<>}\) and \(4^{<>}\) light rays are depicted in the right panel of FIG. 3. FIG. 2 depicts the hot spot images obtained at an observation inclination of \(\theta_{o}=50^{\circ}\), revealing a certain similarity between this case and the one with \(\theta_{o}=80^{\circ}\). Despite this similarity, it becomes evident that the observer, positioned at a lower inclination angle, witnesses a diminished level of brightness asymmetry, while the image tracks adopt a more circular form. Furthermore, our findings indicate that, in the Born-Infeld singularity, the \(n=2^{<>}\) light rays emitted from the hot spot fail to reach the observer at \(\theta_{o}=50^{\circ}\). ### Temporal Fluxes and Centroids the primary images with \(n=0^{>}\), generated by the hot spot positioned near the leftmost portion of the orbit. This observation aligns with expectations, as the hot spot moves closer to the observer on the left side of the field of view, leading to a pronounced increase in the observed light frequency due to the Doppler effect. Figure 5: Snapshots for the Schwarzschild black hole (**Left Column**), the JNW singularity (**Middle Column**) and the Born-Infeld singularity (**Right Column**) when the temporal magnitude reaches its maximum value. The top, middle and bottom rows present the snapshots at the highest, second-highest and third-highest peaks, respectively. The contribution from the \(n\)th-order image to the total flux is quantified by \(F_{k}^{n}/F_{k}\), where \(F_{k}^{n}\) represents the temporal flux of the \(n\)th-order image at \(t=t_{k}\). Of particular interest is the second-highest peak, indicated by 2, which is notably more prominent in the JNW and Born-Infeld singularities compared to the Schwarzschild black hole case. Furthermore, the corresponding snapshots are depicted in the middle row of FIG. 5, revealing a shift away from exclusive dominance by primary images in terms of flux. In fact, primary images experience a phase of reduced flux as the hot spot moves away from the observer, leading to a decrease in the observed frequency. If other images achieve their peak flux values, they can produce localized total flux peaks. In the Schwarzschild black hole, while the \(n=1^{>}\) image substantially contributes to the total flux, the primary image still contributes 50%, resulting in an insignificant peak. Conversely, in the context of the JNW singularity, \(n=1^{<}\) and \(2^{<}\) images emerge as two crucial contributors to the total flux, leading to a notably pronounced local peak. Similarly, in the case of the Born-Infeld singularity, the presence of the \(n=2^{<>}\), \(3^{<>}\) and \(2^{<}\) images contributes to a noticeable peak in \(m_{k}\). Furthermore, the snapshot in the bottom row of FIG. 5 demonstrates that the third peak in \(m_{k}\), identified by 3 in the upper-right panel of FIG. 4, emerges from images within the inner critical curve. The absence of higher-order images would lead the temporal centroid to align with the center of the primary image. 
Nonetheless, when Doppler effect-induced flux reduction affects the primary image, the presence of higher-order images can markedly displace the centroid away from the center of the primary image's orbit. In comparison with the Schwarzschild black hole case, extra higher-order images tend to displace the centroid more significantly to the left in the image plane for both the JNW and Born-Infeld singularities. Additionally, due to contributions from higher-order images in close proximity to critical curves, numerical noise becomes evident in the low flux region, affecting the temporal magnitudes and centroids. The temporal magnitudes and centroids for an inclination angles of \(\theta_{o}=50^{\circ}\) are presented in FIG. 6. In contrast to the \(\theta_{o}=80^{\circ}\) inclination, a sole peak is evident in the temporal magnitudes for \(\theta_{o}=50^{\circ}\). This dissimilarity emerges due to the reduced influence of the Doppler effect at the lower inclination. As a result, the flux becomes less dependent on the frequency, allowing the primary image to dominate most of the time in the contribution to the total flux. Accordingly, the effect of higher-order images on centroids is reduced, resulting in a less intricate trajectory for the centroid's orbit. ## IV Conclusions This paper investigated observations of hot spots in the JNW and Born-Infeld naked singularities as they move along the ISCOs. Intriguingly, in these spacetimes, photons have been observed to reach the singularity in a finite coordinate time once they enter the photon spheres [67; 68]. Furthermore, when the singularity is regularized, these photons are capable of traversing the regularized singularity, thereby generating new images of hot spots positioned within the critical curves. Particularly, in contrast to Schwarzschild black holes, JNW and Born-Infeld singularities exhibit numerous additional image tracks in the time integrated images that capture a full orbit of hot spots. Consequently, when observed at low inclinations, these extra images result in a more pronounced second-highest peak in the temporal magnitudes in the JNW and Born-Infeld singularities. Additionally, a third peak can arise in the Born-Infeld singularity spacetime. As discussed in [68], optical appearances of hot spots depend on the regularization schemes of the Figure 6: Temporal magnitudes \(m_{k}\) (**Upper Row**) and centroids \(c_{k}\) (**Lower Row**) as a function of \(t/T_{e}\) for the Schwarzschild black hole (**Left Column**), the JNW singularity (**Middle Column**) and the Born-Infeld singularity (**Right Column**). The inclination is \(\theta_{o}=50^{\circ}\). Given the diminished impact of the Doppler effect at low inclinations, the temporal magnitudes exhibit a single peak across all cases. singularities. In cases where the regularized singularity spacetime models a traversable wormhole [91], hot spots in our universe exhibit appearances akin to those of black holes. Conversely, for hot spots situated in another universe, only images positioned within the critical curve are observable. The emergence of the next-generation Very Long Baseline Interferometry offers promising prospects for utilizing our discoveries as a tool to investigate the nature of naked singularities. ###### Acknowledgements. We are grateful to Qingyu Gan and Xin Jiang for useful discussions and valuable comments. This work is supported in part by NSFC (Grant No. 12105191, 12275183, 12275184 and 11875196). 
Houwen Wu is supported by the International Visiting Program for Excellent Young Scholars of Sichuan University.
2309.04631
Open and reusable deep learning for pathology with WSInfer and QuPath
The field of digital pathology has seen a proliferation of deep learning models in recent years. Despite substantial progress, it remains rare for other researchers and pathologists to be able to access models published in the literature and apply them to their own images. This is due to difficulties in both sharing and running models. To address these concerns, we introduce WSInfer: a new, open-source software ecosystem designed to make deep learning for pathology more streamlined and accessible. WSInfer comprises three main elements: 1) a Python package and command line tool to efficiently apply patch-based deep learning inference to whole slide images; 2) a QuPath extension that provides an alternative inference engine through user-friendly and interactive software, and 3) a model zoo, which enables pathology models and metadata to be easily shared in a standardized form. Together, these contributions aim to encourage wider reuse, exploration, and interrogation of deep learning models for research purposes, by putting them into the hands of pathologists and eliminating a need for coding experience when accessed through QuPath. The WSInfer source code is hosted on GitHub and documentation is available at https://wsinfer.readthedocs.io.
Jakub R. Kaczmarzyk, Alan O'Callaghan, Fiona Inglis, Tahsin Kurc, Rajarsi Gupta, Erich Bremer, Peter Bankhead, Joel H. Saltz
2023-09-08T22:47:23Z
http://arxiv.org/abs/2309.04631v1
# Open and reusable deep learning for pathology with WSInfer and QuPath ###### Abstract The field of digital pathology has seen a proliferation of deep learning models in recent years. Despite substantial progress, it remains rare for other researchers and pathologists to be able to access models published in the literature and apply them to their own images. This is due to difficulties in both sharing and running models. To address these concerns, we introduce WSInfer: a new, open-source software ecosystem designed to make deep learning for pathology more streamlined and accessible. WSInfer comprises three main elements: 1) a Python package and command line tool to efficiently apply patch-based deep learning inference to whole slide images; 2) a QuPath extension that provides an alternative inference engine through user-friendly and interactive software, and 3) a model zoo, which enables pathology models and metadata to be easily shared in a standardized form. Together, these contributions aim to encourage wider reuse, exploration, and interrogation of deep learning models for research purposes, by putting them into the hands of pathologists and eliminating a need for coding experience when accessed through QuPath. The WSInfer source code is hosted on GitHub and documentation is available at [https://wsinfer.readthedocs.io](https://wsinfer.readthedocs.io). ## Introduction Pathology is the bedrock of cancer diagnosis and traditionally relies on the examination of physical slides containing human tissue specimens using high-power microscopy. In recent years, the field has been moving towards digital pathology, whereby glass slides are scanned as high-resolution images, known as whole slide images (WSIs). Each individual WSI is typically very large, often over 40 gigabytes uncompressed. The widespread adoption of digital pathology therefore poses considerable challenges for data storage and visualization, but also unlocks the potential to apply computational methods for diagnostics and prognostics. It is difficult to overstate the transformative effect deep learning has had on digital pathology research. Many studies have suggested the potential for deep learning-based AI methods to revolutionize different aspects of pathology practice, such as by reducing the pathologist's workload or by augmenting visual assessment with the ability to identify subtle, sub-visual features of clinical importance [1, 2, 3]. However, the multitude of algorithms published in the literature belies a dearth of implementations that are actually usable within the research community. In most cases, it is simply not possible for other research groups to validate the use of published methods on their own images and cohorts. One reason for this is that required data is not available: a recent survey of 161 peer-reviewed studies using deep learning for pathology found that while 1 in 4 shared code, only 1 in 8 shared trained model weights [4, 5]. Furthermore, in the minority of cases where code and models are available, they are typically not in a form amenable to pathologists without coding experience to use and explore. The result is that reported findings cannot properly be reproduced and interrogated by the wider community, and the key domain experts -- pathologists -- often find themselves to be particularly excluded. Tackling problems such as model generalization and overcoming batch effects urgently requires an increase in openness, replicability, and reusability. 
In the present paper, we respond to the call to "make deep learning algorithms in computational pathology more reproducible and reusable" [4] by introducing WSInfer (Whole Slide Inference): a new collection of software tools designed to streamline the sharing and reuse of trained deep learning models in digital pathology (Figure 1). We have focused on the generic task of patch classification, which is widely used across a broad range of pathology applications. Because WSIs are so big, they are typically broken into manageable patches to make analysis practicable. Trained patch-based deep neural networks are typically applied across a WSI to classify patches into different tissue components (e.g. tumor, stroma, fat, necrosis) or make predictions directly related to patient outcome. While relatively coarse-grained in comparison to an analysis based on segmenting individual structures, patch classification algorithms have advantages both in terms of computational efficiency and being a closer match for a pathologist's visual assessment -- since this is often based upon evaluating patterns and textures, rather than discrete quantifiable entities. The output of patch classification is typically a spatial classification map, which can often be integrated across the WSI to create a single output representing a diagnosis, prediction, or'score' for that slide. ### Description WSInfer comprises three main components: (1) the WSInfer inference runtime, (2) the QuPath WSInfer extension, and (3) the WSInfer Model Zoo. Together these provide tools designed to meet the needs of a diverse range of users, including pathologists, computational researchers, and data scientists. ### Inference Runtime The WSInfer inference runtime deploys trained patch classification deep learning models on whole slide images and is available as a command line tool and Python package. The inference runtime requires three inputs from the user: a directory of whole slide images, a trained patch classification model, and a directory in which to write results. One may use a model from the Zoo or provide a local trained model along with a configuration JSON file that includes essential information for model use (i.e., size and physical spacing of patches, processing steps, names of output classes). The configuration file is validated against a schema to aid users in creating this file. If using a model from the Zoo, the model and configuration JSON file are downloaded automatically from the Hugging Face Hub. Each whole slide image undergoes a series of processing steps that were motivated by [6]. First, patches are extracted from tissue regions at a uniform size and physical spacing, and each patch is processed as specified in the configuration JSON file (e.g., resized, normalized). An important optimization in this stage is the lazy loading of patches directly from the whole slide image. Compared to saving patches as image files, lazy loading requires less storage and performs fewer reads and writes to the filesystem. WSInfer offers a choice of slide reading backends between OpenSlide (7) and TiffSlide (8). Next, the patches are run through the forward pass of the deep learning model. Patches are loaded in parallel using the PyTorch DataLoader object. The runtime saves model outputs in comma-separated values (CSV) files with descriptive column names and GeoJSON files, a common format for spatial data. These output files can be used for downstream analyses or visualized using other software, including QuPath. 
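To make the pipeline described above concrete, the following is a minimal sketch of the lazy-loading idea: patches are read on demand with OpenSlide, batched with a PyTorch DataLoader, passed through a patch classifier, and written to a CSV file. This is not WSInfer's actual implementation; the function names and CSV columns are illustrative, and steps such as tissue detection, spacing-aware resizing, and per-model normalization are reduced to comments.

```python
import csv
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import openslide


class LazyPatchDataset(Dataset):
    """Reads patches on demand from the WSI instead of saving them to disk first."""

    def __init__(self, slide_path, coords, patch_px):
        self.slide = openslide.OpenSlide(slide_path)
        self.coords = coords          # (x, y) top-left corners in level-0 pixels,
        self.patch_px = patch_px      # e.g. produced by a tissue-detection step
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.coords)

    def __getitem__(self, i):
        x, y = self.coords[i]
        patch = self.slide.read_region((x, y), 0, (self.patch_px, self.patch_px))
        # Per-model preprocessing (resize, normalization) would be applied here,
        # following the model's configuration file.
        return self.to_tensor(patch.convert("RGB")), x, y


def classify_patches(slide_path, coords, model, class_names, out_csv, patch_px=350):
    loader = DataLoader(LazyPatchDataset(slide_path, coords, patch_px),
                        batch_size=32, num_workers=0)
    model.eval()
    with open(out_csv, "w", newline="") as f, torch.no_grad():
        writer = csv.writer(f)
        writer.writerow(["minx", "miny"] + [f"prob_{c}" for c in class_names])
        for patches, xs, ys in loader:
            probs = torch.softmax(model(patches), dim=1)
            for x, y, p in zip(xs.tolist(), ys.tolist(), probs.tolist()):
                writer.writerow([x, y] + [round(v, 4) for v in p])
```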
The runtime can be installed with pip or as a Docker or Apptainer container. We measured the running time of WSInfer in two environments: 1) a RedHat Linux environment with an enterprise-grade GPU (Quadro RTX 8000) and 2) a Windows Subsystem for Linux environment (Windows 11 and Debian 12) with a consumer GPU (RTX 2080 Ti). In both cases, we used the breast tumor classification model "breast-tumor-resnet34.tcga-brca" from the WSInfer Model Zoo (described below) and WSIs from The Cancer Genome Atlas. The model uses 350x350-pixel patches at 0.25 micrometers per pixel. In the RedHat Linux environment, analysis of 1,061 slides took 6 hours and 46 minutes, or _23 seconds per WSI_. The distribution of the number of patches across WSIs was right-skewed (min=884, max=82,012, median=22,656, mean=23,492, std. dev.=13,922). In the second environment, we deployed the same model to 30 WSIs, a subset of the 1,061 used above. The running time was 14 minutes and 17 seconds total, or _29 seconds per WSI_, and the distribution of patch counts was skewed similarly to the first example (min=6,575, max=52,323, median=23,502, mean=26,667, std. dev.=13,466). ### QuPath Extension QuPath is a popular open-source software platform for bioimage analysis (9). QuPath's support for visualizing, annotating, and analyzing whole slide images has led to the software being widely adopted within the digital pathology community: to date, it has been downloaded over 400,000 times and cited in over 2,400 studies. We therefore developed the QuPath WSInfer Extension as an alternative inference engine to make patch-based classification widely accessible within a familiar, intuitive, and interactive user interface. The QuPath WSInfer Extension introduces patch-based deep learning support to QuPath the first time, building upon the software's existing features to provide an end-to-end analysis solution. Users are guided through the steps of selecting a deep learning model and one or more regions of interest for inference. The extension will then proceed to download the model if required, generate tile objects, and run inference (powered by Deep Java Library and PyTorch) at the appropriate resolution and patch size - appending the model output to the tiles. The user can then visualize the tile classifications and view interactive maps of predicted class probabilities. Furthermore, the tiles can be reused to run inference using additional models, making it possible to integrate information across models. Finally, because the user has access to all QuPath's other features (e.g. for tile merging, cell segmentation, data export), WSInfer can be integrated into sophisticated QuPath analysis pipelines, which are run either interactively or through automated scripts. ### Model Zoo We have curated a collection of trained pathology models for broad, unencumbered reuse and have hosted this Zoo on Hugging Face Hub. Each model repository contains a model card (10), pretrained weights in TorchScript format, and a configuration JSON file. The model card is a markdown file with human-readable metadata including the purpose of the model, its architecture, description of training data, how to apply it to new data, intended uses, and relevant citations. TorchScript is a serialization format that contains weights and a graph of the forward pass of the model, and it allows the use of the model without a Python dependency. The WSInfer QuPath extension, for instance, uses TorchScript models in a Java ecosystem. 
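On the Python side, a TorchScript model obtained from the zoo can be loaded and exercised without any of the original training code. The snippet below is a minimal sketch: the file name is hypothetical, and the 350x350 patch size simply mirrors the breast tumor model quoted above.

```python
import torch

# Hypothetical local file name; zoo models ship TorchScript weights alongside a
# JSON configuration describing patch size, physical spacing, and class names.
model = torch.jit.load("breast-tumor-resnet34.torchscript.pt", map_location="cpu")
model.eval()

# One dummy RGB patch in NCHW layout, using the 350x350-pixel size quoted above.
dummy = torch.rand(1, 3, 350, 350)
with torch.no_grad():
    probs = torch.softmax(model(dummy), dim=1)
print(probs.shape)  # (1, number_of_output_classes)
```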
To add a model to the zoo, one creates a new model repository on Hugging Face Hub and uploads a model card, TorchScript file of the model, and configuration JSON file. One may optionally upload other files as well. Crucially, the user owns the model repository and can license and manage the contents independently. The registry of models in the zoo is maintained as a JSON file in a dedicated public repository on Hugging Face Hub. After publishing a model on Hugging Face Hub, one may submit a pull request to this repository adding the model location to the registry. We have also developed a client utility to enhance interoperability of the zoo with other software. The client is available as a Python package or command-line tool and primarily lists and downloads models from the zoo. The client can also validate Model Zoo repositories and model configuration JSON files, functionalities we hope will ease the use of WSInfer. ## Discussion WSInfer provides an open-source, cross-platform, and cross-language ecosystem to make deep learning methods uniquely accessible and intuitive for a wide range of digital pathology stakeholders. The core inference runtime is developed in Python, making it readily accessible for data scientists and deep learning specialists working in digital pathology -- for whom Python is typically the programming language of choice. However, by also providing a Java implementation through the widely adopted QuPath software, we aim to greatly broaden access. The WSInfer Python runtime is preferable for batch processing large numbers of slides, for example in a large-scale study. The results can be exported in a QuPath-compatible format for visualization. Direct use of the QuPath extension, however, means that it is also possible for a QuPath user to interactively select regions of interest and obtain results for any slide immediately, without leaving the software. We anticipate that making the application of models more streamlined in this way will encourage more pathologists to try the methods on new data. This should, in turn, make it easier to identify strengths and weaknesses, and thereby accelerate the critical feedback loop necessary to develop robust and generalizable algorithms. Several tools exist for deploying trained models on whole slide images, including TIA Toolbox [(11)], MONAI [(12)], SlideFlow [(13)], and PHARAOH [(14)]. WSInfer complements these by specifically targeting highly optimized, user-friendly support for patch based WSI inference methods. We expect that these tools may be used together and are keen to promote interoperability. To this end, the WSInfer Model Zoo implements a minimal model configuration specification that accompanies each trained model, with the intention that it may be used by other software beyond the direct WSInfer ecosystem. We host several trained patch classification models in the zoo, including two models from TIA Toolbox, and intend to incorporate more models in future work. It is important to note that WSInfer itself supports a variety of patch classification models, but is agnostic to a user's choice of model. It is intended for research use only, and we make no claims regarding the suitability of the models for specific applications. Hence, users assume the responsibility of verifying the suitability of any model for their purposes. 
Indeed, it is our expectation that promising digital pathology methods will often be found not to perform well on new images; generalization across cohorts, scanners, and laboratories is a hard problem. However, we believe that an important first step to addressing this must be to enable existing models to be properly scrutinized by the research community, to identify what does and does not work. We hope that WSInfer may prove useful in this regard. ## Acknowledgements The development of the WSinfer infrastructure by the Stony Brook authors was supported by Stony Brook Provost ProFund 2022 award and through the generosity of Bob Beals and Betsy Barton. JRK was also supported by the National Institutes of Health grant T32GM008444 (NIGMS) and by the Medical Scientist Training Program at Stony Brook University. The QuPath WSInfer extension was developed by the Edinburgh authors and was made possible in part by grant number 2021- 237595 from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation. This research was funded in part by the Wellcome Trust 223750/Z/21/Z. The results shown here are in whole or part based upon data generated by the TCGA Research Network: [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
2301.00275
Normal and anomalous diffusion in a bouncing ball over an irregular surface
The problem of a bouncing ball on a non-planar surface is investigated. We discovered that surface undulation adds a horizontal component to the impact force, which acquires a random character. Some aspects of Brownian motion are found in the horizontal distribution of the particle. On the x-axis, normal and super diffusion are observed. For the probability density's functional form, a scaling hypothesis is presented.
Ana Laura Boscolo, Valdir Barbosa da Silva Junior, Luiz Antonio Barreiro
2022-12-31T19:24:19Z
http://arxiv.org/abs/2301.00275v2
# Normal and anomalous diffusion in a bouncing ball over an irregular surface ###### Abstract The problem of a bouncing ball on a non-planar surface is investigated. We discovered that surface undulation adds a horizontal component to the impact force, which acquires a random character. Some aspects of Brownian motion are found in the horizontal distribution of the particle. On the x-axis, normal and super diffusion are observed. For the probability density's functional form, a scaling hypothesis is presented. ## I Introduction Diffusion is a common natural phenomenon and generally occurs when a system moves toward the equilibrium state [1]. Many domains employ the notion of diffusion, including physics (particle diffusion), chemistry, biology, sociology, economics, and finance [2; 3; 4]. They all represent the fundamental concept of diffusion, which asserts that a substance or collection expands away from a point or location where that material or collection is more concentrated. In a diffusion process in a set of moving elements - energy, linear momentum, atoms, molecules, cells, animals, etc - each element performs a random trajectory. As a result of this highly irregular individual movement, the ensemble diffuses. Many non-linear systems also present a diffusive behavior in your phase space. Modeling such a dynamic system has become one of the most challenging subjects among scientists. The modeling helps to understand in many cases how the system evolves in time [5; 6; 7]. On a macroscopic level, the average collective behavior, in contrast to the microscopic individual movement, shows great regularity and follows well-defined dynamic laws. The non-linear dynamic formulation of these transport phenomena, as well as the diffusion equation, are two ways to describe the diffusion phenomena. The form of the temporal dependence of the mean squared distance (MSD), \(\langle x^{2}\rangle\propto t^{2\mu}\), or, equivalently, of the variance, allows classifying the type of diffusion. For \(\mu=1/2\) we have the usual or normal diffusion, which can be described by Fick's laws. Otherwise, we have an anomalous diffusion (or non-Fickian diffusion). When \(\mu>1/2\) the case is classified as superdiffusive [8; 9] and for \(\mu<1/2\) we have the subdiffusive case [10; 11]. Indeed, a wide diversity of systems presents a non-linear growth of the mean squared displacement. In this work, we explore the diffusive behavior that occurs in a free-falling particle colliding with a non-planar surface. Compared to a flat surface, on which the falling particles maintain their velocity in the horizontal direction, a non-planar surface introduces changes in the horizontal component of velocity after each collision. This creates a spread in the absolute value of the horizontal component of velocity as well as in its signal. Thus, in section II we study the dynamics of the model, in which the equations of motion are established, and how the iterative process takes place. Some special points are explored in II.3, for which no diffusion is observed. In section III, the randomness of the horizontal component of the collision force is studied. Also, the diffusion in the signal of the horizontal component of velocity and its relation to the random walk problem are explored. Section IV is devoted to discussing the behavior of the mean square displacement in relation to the initial collision point and the Probability Distribution Function (PDF) numerically and analytically. 
In section V, the conclusions and final considerations about the problem addressed are presented. ## II The model We now discuss how to construct the equations of the mapping that describe the dynamics of the particles. The model under study consists of an ensemble of non-interacting classical particles of mass \(m\) traveling in the presence of a constant gravitational field \(\mathbf{g}\) and colliding with a non-flat ground via elastic collisions. The parametric equations that describe the ground are: \[x(p) =\alpha\,p \tag{1}\] \[y(p) =\beta\left[1+\cos{(p)}\right],\] The figure 1 shows an example with \(\alpha=0.01\) and \(\beta=0.001\). Here it is worth noting that if the \(\beta\) parameter is null then the floor becomes flat, recovering the traditional Bouncer model [12] with a static floor. However, different from the traditional Bouncer model, if \(\beta\neq 0\), the particles gain an extra degree of freedom, with movement Figure 1: Graph obtained from equations (1) using the parameters \(\alpha=0.01\) and \(\beta=0.001\). in the \(x\)-direction too. Also, as in the Bouncer model, the action of the constant gravitational field \(g\) is responsible for the return mechanism of the particle for the next collision with the floor. The conservation of energy during the collision is controlled by a parameter which is called the _coefficient of restitution_ and it is denoted by \(\gamma\). For \(\gamma=1\) the conservative dynamics is observed. However, if \(0<\gamma<1\) we found a dissipative behavior. ### The Map We now explore the time evolution of particles, determining the coordinates of the collision points and their respective velocities. The dynamic evolution of the particle can be described by the Newton's equation of motion \[m\frac{d\mathbf{v}}{dt}=\mathbf{F}_{grav}+\mathbf{F}_{col}, \tag{2}\] where \(\mathbf{F}_{grav}=mg\) is the gravitational force acting on the particle and \(\mathbf{F}_{col}\) represents the instantaneous force of collision with the ground. We will assume that the collision force only changes the velocity component orthogonal to the surface. It is also an acceptable assumption that during the collision process the force \(\mathbf{F}_{col}\) has an extremely rapid variation. A typical path taken by the particles is shown in the figure 2. After the \(n\)th collision at the point defined by the parameter \(p_{n}\), the particle travels in the gravitational field until it collides at the point \(p_{n+1}\). This journey takes a \(\delta t_{n,n+1}\) time and continues incessantly if no dissipation is taken into account. The normal vectors at each collision point are also shown. The unit normal and tangent vectors at the point \(p_{n}\) can be written in terms of the Cartesian vectors as \[\hat{\mathbf{n}}_{n}=\frac{\left(-\lambda_{n}\,\mathbf{i}+\mathbf{j}\right)}{\sqrt{1+ \lambda_{n}^{2}}}\text{ and }\hat{\mathbf{t}}_{n}=\frac{\left(\mathbf{i}+\lambda_{n}\,\mathbf{j}\right)}{\sqrt{1+ \lambda_{n}^{2}}} \tag{3}\] where \(\lambda_{n}\) is the local inclination of the ground, which for the functions in (1), is given by \[\lambda_{n}=\frac{\left(dy/dp\right)_{p_{n}}}{\left(dx/dp\right)_{p_{n}}}=- \frac{\beta}{\alpha}\text{sin}\left(p_{n}\right). \tag{4}\] Since motion in the gravitational field is a well-known problem, the fundamental question in determining the dynamic evolution of the particle will be to find the points of collision with the ground. 
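As a quick numerical aid, the ground profile of Eq. (1), the local slope of Eq. (4) and the unit normal and tangent vectors of Eq. (3) can be evaluated directly; a minimal sketch, using the same parameter values as FIG. 1, is given below.

```python
import numpy as np

alpha, beta = 0.01, 0.001          # ground parameters used in FIG. 1

def ground(p):
    """Ground profile, Eq. (1)."""
    return alpha * p, beta * (1.0 + np.cos(p))

def local_frame(p):
    """Local slope (Eq. 4) and unit normal/tangent vectors (Eq. 3) at parameter p."""
    lam = -(beta / alpha) * np.sin(p)
    norm = np.sqrt(1.0 + lam**2)
    n_hat = np.array([-lam, 1.0]) / norm
    t_hat = np.array([1.0, lam]) / norm
    return lam, n_hat, t_hat

p = 2.0                             # illustrative parameter value
print(ground(p), local_frame(p))
```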
To proceed with this determination, we define the following two functions \[\begin{split} G_{X}(p,t)=x\left(p\right)-\left[x\left(p_{n} \right)+v_{x_{n}}^{(r)}t\right]\\ G_{Y}(p,t)=y\left(p\right)-\left[y\left(p_{n}\right)+v_{y_{n}}^{ (r)}t-\frac{g}{2}t^{2}\right],\end{split} \tag{5}\] where \(\left(v_{x_{n}}^{(r)},v_{y_{n}}^{(r)}\right)\)is the velocity of the particle after it collides at point \(p_{n}\). The next point \(p_{n+1}\) and the travel time \(\delta t_{n,n+1}=\left(t_{n+1}-t_{n}\right)\) spent by the particle between \(p_{n}\) and \(p_{n+1}\) are obtained by solving the system of transcendental equations \[\begin{cases}G_{X}(p_{n+1},\delta t_{n,n+1})=0\\ G_{Y}(p_{n+1},\delta t_{n,n+1})=0.\end{cases} \tag{6}\] In such a way, if the particles make a trip with \(N\) collisions, the total time spent will be \[t_{N}=\sum_{n=1}^{N}\delta t_{n-1,n}\text{ with }t_{0}=0. \tag{7}\] In our model, we assume that only the component of the velocity normal to the surface at the collision point is altered (inverted) [13]. Then, at the instant of collision, the law of reflection relating the incident velocity vector \(\mathbf{v}_{n}^{(i)}\) to the reflected velocity vector \(\mathbf{v}_{n}^{(r)}\) is, \[\mathbf{v}_{n}^{(r)}=\left(\mathbf{v}_{n}^{(i)}\cdot\hat{\mathbf{t}}_{n}\right)\hat{\mathbf{t }}_{n}-\gamma_{n}\left(\mathbf{v}_{n}^{(i)}\cdot\hat{\mathbf{n}}_{n}\right)\hat{\mathbf{n} }_{n}. \tag{8}\] Obviously, the velocity vector, incident at a point \(p_{n+1}\), is related to the velocity vector reflected at the previous point \(p_{n}\) as \[\mathbf{v}_{n+1}^{(i)}=v_{x_{n}}^{(r)}\hat{\mathbf{i}}+\left(v_{y_{n}}^{(r)}-g\, \delta t_{n,n+1}\right)\mathbf{j}.\] Now we can define the following dimensionless variables \(\bar{x}(p)=x(p)/gt_{N}^{2}\), \(\bar{y}(p)=y(p)/gt_{N}^{2},\bar{\mathbf{v}}_{n}^{(r)}=\mathbf{v}_{n}^{(r)}/gt_{N}\) and \(\phi_{n}=t_{n}/t_{N}\), where \(t_{N}\) is defined in (7). Therefore, the dimensionless velocity vector components in (8) take the form \[\bar{v}_{x_{n+1}}^{(r)}= \frac{\left(1-\gamma_{n+1}\lambda_{n+1}^{2}\right)\bar{v}_{x_{n} }^{(r)}+\lambda_{n+1}\left(1+\gamma_{n+1}\right)\left(\bar{v}_{y_{n}}^{(r)}- \delta\phi_{n,n+1}\right)}{1+\lambda_{n+1}^{2}} \tag{9}\] \[\bar{v}_{y_{n+1}}^{(r)}= \frac{\lambda_{n+1}\left(1+\gamma_{n+1}\right)\bar{v}_{x_{n}}^{ (r)}+\left(\lambda_{n+1}^{2}-\gamma_{n+1}\right)\left(\bar{v}_{y_{n}}^{(r)}- \delta\phi_{n,n+1}\right)}{1+\lambda_{n+1}^{2}}.\] and the system (6) becomes \[\begin{cases}p_{n+1}=p_{n}+\frac{\bar{v}_{x_{n}}^{(r)}}{\bar{\alpha}}\delta \phi_{n,n+1}\\ \cos\left(p_{n+1}\right)=\cos\left(p_{n}\right)+\frac{\bar{v}_{y_{n}}^{(r)}}{ \bar{\beta}}\delta\phi_{n,n+1}-\frac{1}{2\bar{\beta}}\delta\phi_{n,n+1}^{2} \end{cases} \tag{10}\] where \(\bar{\alpha}=\alpha/gt_{N}^{2}\) and \(\bar{\beta}=\beta/gt_{N}^{2}\). Given the values of \(p_{n}\), \(\bar{v}_{x_{n}}^{(r)}\) and \(\bar{v}_{y_{n}}^{(r)}\) of the \(n\)th iteration, the set of equations (10) produce the values of \(p_{n+1}\) and the travel time \(\delta\phi_{n,n+1}\) which allows us to find \(\bar{v}_{x_{n+1}}^{(r)}\) and \(\bar{v}_{y_{n+1}}^{(r)}\)through (9). After that, the iterative process restart. Figure 2: Schematic drawing of the trajectory of a particle, with its collision points and the respective normal vectors. ### Conservative case We shall only consider the conservative scenario, when \(\gamma_{n}\)=\(\gamma_{n+1}=1\). 
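In this conservative case, a single iteration of the exact map, Eqs. (9) and (10), can be advanced numerically before any further approximation is made. The sketch below brackets the first positive root of the flight condition with a simple scan and refines it with SciPy's brentq root finder; the parameter values and the initial state are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

alpha_bar, beta_bar = 0.01, 0.005        # illustrative dimensionless ground parameters

def flight_residual(dphi, p, vx, vy):
    """Zero when the ballistic arc launched from p meets the ground again, Eq. (10)."""
    return (beta_bar * (np.cos(p + vx * dphi / alpha_bar) - np.cos(p))
            - vy * dphi + 0.5 * dphi**2)

def exact_step(p, vx, vy, gamma=1.0, scan=1e-3):
    # Bracket the first positive root (dphi = 0 is always a trivial root).
    lo = scan
    while flight_residual(lo, p, vx, vy) > 0.0:
        lo += scan
    hi = lo
    while flight_residual(hi, p, vx, vy) < 0.0:
        hi += scan
    dphi = brentq(flight_residual, lo, hi, args=(p, vx, vy))

    p_next = p + vx * dphi / alpha_bar                 # Eq. (10), first line
    lam = -(beta_bar / alpha_bar) * np.sin(p_next)     # local slope at the new collision point
    vy_in = vy - dphi                                  # vertical velocity just before impact
    denom = 1.0 + lam**2
    vx_next = ((1.0 - gamma * lam**2) * vx + lam * (1.0 + gamma) * vy_in) / denom
    vy_next = (lam * (1.0 + gamma) * vx + (lam**2 - gamma) * vy_in) / denom
    return p_next, vx_next, vy_next, dphi

state = (0.3, 0.2, 2.0)                                # illustrative (p, vx, vy) after a reflection
for _ in range(5):
    p, vx, vy, dphi = exact_step(*state)
    state = (p, vx, vy)
    print(f"p = {p:8.3f}  vx = {vx:7.4f}  vy = {vy:7.4f}  flight time = {dphi:6.4f}")
```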
Since we choose \(\bar{\beta}\ll 1\), it is appropriate to consider that the point of collision with the ground has a height \(\bar{y}(p_{n})\simeq\bar{y}(p_{n+1})\simeq 0\), but with local slope not necessarily zero. This approach avoids transcendental equations and simplifies the calculation. As a consequence, the second of the equations in (10) yields \(\delta\phi_{n,n+1}=\phi_{n,n+1}-\phi_{n}=2\bar{v}_{y_{n}}^{(r)}\). Finally, a simplified form of the map equations used to explain motion is expressed as \[\begin{split}\bar{v}_{x_{n+1}}^{(r)}=& F_{1}\left(\bar{v}_{x_{n}}^{(r)},\bar{v}_{y_{n}}^{(r)},p_{n} \right)\\ \bar{v}_{y_{n+1}}^{(r)}=&\left|F_{2}\left(\bar{v}_{ x_{n}}^{(r)},\bar{v}_{y_{n}}^{(r)},p_{n}\right)\right|\\ p_{n+1}=& F_{3}\left(\bar{v}_{x_{n}}^{(r)},\bar{v}_{y_{n}}^{( r)},p_{n}\right)\end{split} \tag{11}\] where \[F_{1}\left(\bar{v}_{x_{n}}^{(r)},\bar{v}_{y_{n}}^{(r)},p_{n}\right)=\frac{ \left(1-\bar{\lambda}_{n}^{2}\right)\bar{v}_{x_{n}}^{(r)}-2\bar{\lambda}_{n} \bar{v}_{y_{n}}^{(r)}}{1+\bar{\lambda}_{n}^{2}} \tag{12}\] \[F_{2}\left(\bar{v}_{x_{n}}^{(r)},\bar{v}_{y_{n}}^{(r)},p_{n}\right)= \frac{2\bar{\lambda}_{n}\bar{v}_{x_{n}}^{(r)}+\left(1-\bar{\lambda}_{n}^{2} \right)\bar{v}_{y_{n}}^{(r)}}{1+\bar{\lambda}_{n}^{2}} \tag{13}\] \[F_{3}\left(\bar{v}_{x_{n}}^{(r)},\bar{v}_{y_{n}}^{(r)},p_{n} \right)=p_{n}+\frac{2}{\alpha}\bar{v}_{x_{n}}^{(r)}\bar{v}_{y_{n}}^{(r)} \tag{14}\] and were defined \[\bar{\lambda}_{n}=\lambda_{n+1}=-\frac{\bar{\beta}}{\bar{\alpha}}\text{sin} \left(p_{n}+\frac{2}{\alpha}\bar{v}_{x_{n}}^{(r)}\bar{v}_{y_{n}}^{(r)}\right). \tag{15}\] The ground was assumed to be flat, as a consequence there is a possibility that \(\bar{v}_{y_{n+1}}^{(r)}=F_{2}<0\). This non-physical situation is bypassed by introducing the modulus in the second equation of (11). This means that if such a case happens, the particle is re-injected back to the dynamics with the same velocity but with a positive direction. #### ii.2.1 Jacobian Matrix The Jacobian matrix \(J=\partial\left(F_{1},F_{2},F_{3}\right)/\partial\left(v_{x},v_{y},p\right)\) for this dynamical system may be simply calculated using equations (11-15), leading to 1 Footnote 1: In Jacobian expressions, we utilize \(\left(v_{x},v_{y},p\right)\) rather than \(\left(\bar{v}_{x_{n}}^{(r)},\bar{v}_{y_{n}}^{(r)},p_{n}\right)\) to simplify notation. 
\[\frac{\partial F_{1}}{\partial v_{x}}=\frac{\bar{\alpha}^{4}+4\bar{\beta}v_{y }^{2}\cos\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\left(\bar{\alpha}^{2 }-\bar{\beta}^{2}\sin^{2}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)- \bar{\beta}^{4}\sin^{4}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)-4\bar{ \alpha}\bar{\beta}^{2}v_{x}v_{y}\sin\left(2p+\frac{4v_{x}v_{y}}{\bar{\alpha} }\right)\right.}\] \[\frac{\partial F_{1}}{\partial v_{y}}=\frac{2\bar{\beta}\left(\bar{\alpha} \left(-2\bar{\beta}v_{x}^{2}\sin\left(2p+\frac{4v_{x}v_{y}}{\bar{\alpha}} \right)+\bar{\alpha}^{2}\sin\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)+ \bar{\beta}^{2}\sin^{3}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\right) +2v_{x}v_{y}\cos\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\left(\bar{ \alpha}^{2}-\bar{\beta}^{2}\sin^{2}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}} \right)\right)\right)}}{\left(\bar{\alpha}^{2}+\bar{\beta}^{2}\sin^{2}\left(p+ \frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\right)^{2}}\] \[\frac{\partial F_{2}}{\partial v_{x}}=-\frac{2\bar{\beta}\left(\bar{\alpha} \left(2\bar{\beta}v_{y}^{2}\sin\left(2p+\frac{4v_{x}v_{y}}{\bar{\alpha}} \right)+\bar{\alpha}^{2}\sin\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)+ \bar{\beta}^{2}\sin^{3}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\right) +2v_{x}v_{y}\cos\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\left(\bar{ \alpha}^{2}-\bar{\beta}^{2}\sin^{2}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}} \right)\right)\right)}}{\left(\bar{\alpha}^{2}+\bar{\beta}^{2}\sin^{2}\left(p+ \frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\right)^{2}}\] \[\frac{\partial F_{2}}{\partial v_{x}}=\frac{\bar{\alpha}^{4}-\bar{\beta}\left(4v _{x}^{2}\cos\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right)\left(\bar{\alpha}^{2 }-\bar{\beta}^{2}\sin^{2}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}}\right) \right)+\bar{\beta}^{3}\sin^{4}\left(p+\frac{2v_{x}v_{y}}{\bar{\alpha}} \right)+4\bar{\beta}v_{x}v_{y}\sin\left(2p+\frac{4v_{x}v_{y}}{\bar{\alpha}} \right)\right)}}{\left(\bar{\alpha}^{2}+\bar{\beta}^{2}\sin^{2}\left(p+\frac{2v _{x}v_{y}}{\bar{\alpha}}\right)\right)^{2}}\] \[\frac{\partial F_{2}}{\partial p}=-\frac{2\bar{\alpha}\beta\cos\left(p+\frac{2v_{x} v_{y}}{\bar{\alpha}}\right)\left(\bar{\beta}\sin\left(p+\frac{2v_{x}v_{y}}{\bar{ \alpha}}\right)\left(2\bar{\alpha}v_{y}-\bar{\beta}v_{x}\sin\left(p+\frac{2v_{x} v_{y}}{\bar{\alpha}}\right)\right)+\bar{\alpha}^{2}v_{x}\right)}{ \left(\bar{\alpha}^{2}+\bar{\beta}^{2}\sin^{2}\left(p+\frac{2v_{x}v_{y}}{\bar{ \alpha}}\right)\right)+\bar{\alpha}^{2}v_{x}}\] \[\frac{\partial F_{3}}{\partial v_{x}}=\frac{2v_{y}}{\bar{\alpha}}\] \[\frac{\partial F_{3}}{\partial v_{y}}=\frac{2v_{x}}{\bar{\alpha}}\] \[\frac{\partial F_{3}}{\partial p}=1\] It is straightforward to show that the Jacobian matrix's determinant is equal to one, confirming that the system is indeed conservative and the dimensionless energy \[\bar{E}_{n}=\frac{1}{2}\left[\left(\bar{v}_{x_{n}}^{(r)}\right)^{2}+\left( \bar{v}_{y_{n}}^{(r)}\right)^{2}\right] \tag{16}\] is constant. ### Periodic points We can anticipate the occurrence of some exceptional points using the physics of the problem. These are known as fixed points, to which the dynamical system returns after one iteration (period-one fixed point), two iterations (period-two fixed point), or n iterations (period-n fixed point). The figure 3 illustrates two fixed points: \((a)\) Fixed points for period one and \((b)\) Fixed points for period two. 
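Before examining these fixed points individually, note that the simplified conservative map, Eqs. (11)-(15), can be iterated directly, with the conserved energy of Eq. (16) serving as a consistency check. A minimal sketch with illustrative parameter values and an arbitrary initial state follows.

```python
import numpy as np

alpha_bar, beta_bar = 0.01, 0.0005       # illustrative dimensionless ground parameters

def simplified_step(p, vx, vy):
    """One iteration of the conservative simplified map, Eqs. (11)-(15)."""
    p_next = p + 2.0 * vx * vy / alpha_bar                         # Eq. (14)
    lam = -(beta_bar / alpha_bar) * np.sin(p_next)                 # Eq. (15)
    denom = 1.0 + lam**2
    vx_next = ((1.0 - lam**2) * vx - 2.0 * lam * vy) / denom       # Eq. (12)
    vy_next = abs((2.0 * lam * vx + (1.0 - lam**2) * vy) / denom)  # Eq. (13), with the modulus of Eq. (11)
    return p_next, vx_next, vy_next

p, vx, vy = 0.7, 0.1, 2.8                # illustrative initial state
for n in range(6):
    energy = 0.5 * (vx**2 + vy**2)       # Eq. (16): conserved by the reflection
    print(f"n = {n}  p = {p:10.3f}  vx = {vx:8.4f}  vy = {vy:8.4f}  E = {energy:.6f}")
    p, vx, vy = simplified_step(p, vx, vy)
```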
#### ii.3.1 Period-one Point It is evident that period-one fixed points, as shown in portion \((a)\) of the figure, have a zero local slope. So, as long as the x component of the initial velocity is zero, the system will not experience any diffusion in the horizontal axis. A period-one point is obtained by solving the following equations: \(\bar{v}_{x_{n+1}}^{(r)}=\bar{v}_{x_{n}}^{(r)}=0\), \(\bar{v}_{y_{n+1}}^{(r)}=\bar{v}_{y_{n}}^{(r)}\) and \(p_{n+1}=p_{n}\) with \(\bar{\lambda}_{n}=0\) (zero slope). We can verify the fact considering first equation in (15) \[\bar{\lambda}_{n}=0\Rightarrow\sin\left(p_{n}+\frac{2}{\alpha}\bar{v}_{x_{n}}^ {(r)}\bar{v}_{y_{n}}^{(r)}\right)=0\underset{\bar{v}_{x_{n}}^{(r)}=0}{\Rightarrow} p_{n}=m\pi,\] where \(m\) is a integer. These points indicate the locations of the peaks and valleys in Figure 1 - part (a). Thus \[\bar{v}_{x_{n+1}}^{(r)} =F_{1}\left(0,\bar{v}_{y_{n}}^{(r)},m\pi\right)=0 \tag{17}\] \[\bar{v}_{y_{n+1}}^{(r)} =F_{2}\left(0,\bar{v}_{y_{n}}^{(r)},m\pi\right)=\bar{v}_{y_{n}}^{( r)}\] \[p_{n+1} =F_{3}\left(0,\bar{v}_{y_{n}}^{(r)},m\pi\right)=m\pi\] We have the following physical situation: If a particle is chosen whose horizontal component of velocity is zero, in a zero slope point, clearly the \(x\)-coordinate of the particle will never change and the particle does not scatter in the \(x\)-direction. #### ii.3.2 Period-two Points We now consider points with non-zero slope. In general, the particle gains a non-zero horizontal component to the velocity and then diffuses along the horizontal axis. Nevertheless, depending on the initial conditions, it is possible for the particle to strike the surface at point \(p_{n}\) with velocity \(\bar{v}_{n}\), reflect there, then it reaches point \(p_{n+1}\) with velocity \(\bar{v}_{n+1}\), where it will then reflect again and go back to point \(p_{n}\) with velocity \(\bar{v}_{n}\). Part \((b)\) of Fig. 3 depicts an illustration of this kind.. Inspired by the figure, consider points connected by \(\bar{v}_{x_{n+2}}^{(r)}=-\bar{v}_{x_{n+1}}^{(r)}=\bar{v}_{x_{n}}^{(r)}\), \(\bar{v}_{y_{n+2}}^{(r)}=\bar{v}_{y_{n+1}}^{(r)}=\bar{v}_{y_{n}}^{(r)}\), \(p_{n+2}=p_{n}\), \(\bar{y}(p_{n+1})=\bar{y}(p_{n})\) and opposite local slopes \(\lambda_{n+1}=-\lambda_{n}\). Taking into account the figure 3 portion (b), the points \(p_{n}\) and \(p_{n+1}\) must be connected by \[\begin{cases}p_{n}=-\pi-\chi&\text{ with }0<\chi<\pi\\ p_{n+1}=\pi+\chi&\text{ with }0<\chi<\pi\end{cases}\] where we are solely concerned with the most straightforward solution. Then, with the help of equations (10), we can write \[\bar{v}_{x_{n}}^{(r)}\bar{v}_{y_{n}}^{(r)}=\bar{\alpha}\left(\pi+\chi\right). \tag{18}\] In addition, the first of the equations (13) yields \[\bar{v}_{y_{n}}^{(r)}=\frac{\bar{\alpha}}{\bar{\beta}\text{sin}\left(\chi \right)}\bar{v}_{x_{n}}^{(r)}. \tag{19}\] These results allow us to determine both \(\bar{v}_{x_{n}}^{(r)}\) and \(\bar{v}_{y_{n}}^{(r)}\) as functions of \(\chi\). So the period two fixed point is written as \[\bar{v}_{x_{n}}^{(r)} =\pm\sqrt{\bar{\beta}\left(\pi+\chi\right)\text{sin}\left(\chi \right)}\] \[\bar{v}_{y_{n}}^{(r)} =\frac{\bar{\alpha}\left(\pi+\chi\right)}{\sqrt{\bar{\beta}\left( \pi+\chi\right)\text{sin}\left(\chi\right)}},\] \[p_{n} =\mp\left(\pi+\chi\right)\] Figure 4 illustrates these fixed points. The middle points in black in this picture indicate the period-1 fixed points. The graphic also illustrates the effect of the \(\beta-\)parameter on the formation of period-2 fixed points. 
The points are calculated by altering the value of \(\chi\) from \(0\) to \(\pi\), and each gray level indicates a \(\beta\) parameter value from the lightest gray (\(\beta=0.00001\)) to the darkest (\(\beta=0.0001\)). \(\alpha=0.001\) is used for all points. Figure 3: Examples of fixed points: (a) Fixed point of period one. The dynamical system returns to the point in phase space at each iteration and (b) the system returns to the point after 2 iterations. The choice \(\chi=\pi/2\) is used to calculate the gray dots in the figure 4. Each curve is divided into two branches by these points. The points that make up the branches we name external have \(\chi>\pi/2\), whereas the points that make up the branches we term internal have \(\chi<\pi/2\). Consider the eigenvalues of the Jacobian matrix to categorize the stability of these points. The external points (\(\chi\geq\pi/2\)) can be classified as node-type stable points since the modules of their Jacobian matrix eigenvalues are all equal to \(1\). On the other hand, because all of the eigenvalues are real with one positive and the others negatives, the internal points (\(\chi<\pi/2\)) are categorized as unstable points of the saddle type. Therefore, the gray dots in the phase space represent saddle-node bifurcations [14]. Many more types of fixed points may exist, and this subject will be addressed in future work [27]. We are mostly interested in the particle dispersion problem along the horizontal axis in this work. ## III Diffusion process ### The stochastic character of force Clearly, unless we are in some special initial point, the particles must diffuse in the \(x\)-direction. This diffusion is caused by the collision force with the ground. Due to the irregular nature of the ground, the collision force \(\mathbf{F}_{col}\) has components in both horizontal and vertical directions. It is intuitive to notice that the horizontal component presents different magnitudes and directions at each collision. To understand the behavior of this horizontal component of the collision force, we can describe it as \[\bar{F}_{col_{x}}(\phi_{n})=\left.\frac{\Delta\bar{v}}{\bar{\tau}}\right|_{p_ {n}}=\frac{\bar{v}_{x_{n}}^{(r)}-\bar{v}_{x_{n-1}}^{(i)}}{\bar{\tau}}=\frac{ \bar{v}_{x_{n}}^{(r)}-\bar{v}_{x_{n-1}}^{(r)}}{\bar{\tau}}\] where \(\bar{\tau}\) is the dimensionless collision time, which is extremely small. We will also assume that the collision force is approximately constant during the collision time and a typical example of what this force looks like is shown in figure 5. The width of each rectangle represents the collision time and despite the dynamics being well known and the irregularities in the ground having a periodicity, the numerical results presented show that the effects of the horizontal component of this force has a behavior comparable to a stochastic force. It is actually extremely difficult to tell whether a sequence is random or chaotic, but there are some proposed procedures to distinguish between these two behaviors. In this work we will make use of the permutation entropy (PE) method [15; 16] to establish the randomness of the time series produced by the collision force. Denoting the time series as \(\{S_{t}\}_{t=1,...,T}\) the method consists in defining subsets of order \(\mathcal{O}\), forming the set \(S=\{\{S_{1},S_{2},\dots,S_{\mathcal{O}}\},\{S_{2},S_{3},\dots,S_{\mathcal{O}+ 1}\},\,\dots,\{S_{T-\mathcal{O}+1},\dots,\)\(S_{T-1},S_{T}\}\}\). 
We then compare consecutive values from each subset to establish the associated permutation. For example, \(\{S_{1}<S_{2}<\dots<S_{\mathcal{O}}\}\) represents the permutation \(\{1,2,...,O\}\), while \(\{S_{2}<S_{1}<\dots<S_{\mathcal{O}}\}\) represents the permutation \(\{2,1,...,O\}\) and so on, yielding the set of all permutations associated with the sequence \(S\), named \(\Pi(S)\). Then, the set of all \(O!\) possible permutations \(\pi_{i}\) of the numbers \(\{1,2,...,O\}\) are constructed. The relative frequency of each permutation \(\pi_{i}\) can be calculated by counting the number of times the permutation \(\pi_{i}\) is found in the set \(\Pi(S)\) divided by the total number of sequences, \[P_{i}=\frac{\text{Number of times that }\pi_{i}\text{ appears in }\Pi(S)}{T-\mathcal{O}+1}. \tag{20}\] and the normalized permutation entropy function is written as \[PE_{\mathcal{O}}=-\frac{1}{\log_{2}(\mathcal{O}!)}\sum_{i=1}^{\mathcal{O}!}P_{i}\log_{2}(P_{i}). \tag{21}\] Formulas (20) and (21) were applied to the temporal sequences of collision forces for three different initial conditions and also different orders \(\mathcal{O}\). Table 1 shows the results obtained. The smaller the \(PE_{\mathcal{O}}\) is, the more regular and more deterministic the time series is. Contrarily, the closer to 1 the value of \(PE_{\mathcal{O}}\) is, the more noisy and random the time series is. The results allow us to assume that the force is random.

Figure 4: Period one fixed points are represented by the black dots in the center of the line. The other points are the period 2 fixed points. The gray dots at the end of the curves are the points obtained with the value \(\chi=\pi/2\).

Figure 5: Typical behavior of the horizontal component of the collision force. Here we have used \(\bar{\alpha}=0.01\) and \(\bar{\beta}=0.005\). The graph has two regions with different scales. On the left we have the region magnified between \(\phi=0.000\) and \(\phi=0.020\) and on the right, after a cut in the graph, the normal scale from \(\phi=0.5\) to \(\phi=1.0\) is shown.

## IV Probability distribution function (PDF)

This section's major purpose is to establish the probability distribution function (PDF) \(\Psi(x,t)\), which provides us the probability of the particle being on the coordinate \(x\) at time \(t\), and what it has to do with normal and superdiffusive processes. Among the various diffusive processes, Brownian motion is the prototype for the description of non-equilibrium dynamical systems. Due to the stochastic behavior of the collision force, the jumps performed by the particles also reproduce characteristics of random walk. We can comprehend this by calculating the chance of each particle going to the right. After each impact, we obtain the \(x-\)component of the velocity. Then, by examining the sign of these velocities and associating \(+1\) for \(v_{x}>0\) and \(0\) for \(v_{x}<0\), we can count the number of jumps to the right and derive the evolution of this probability as the number of jumps increases. It is appropriate at this point to introduce an index that specifies the initial condition (\(\nu\)), which is used to compute the Probability Density Function (PDF) for the complete ensemble.
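As an aside, the permutation-entropy diagnostic of Eqs. (20) and (21) used above takes only a few lines to implement; the sketch below applies it to two synthetic test series, standing in for the simulated force data, purely for illustration.

```python
import math
import random
from collections import Counter

def permutation_entropy(series, order):
    """Normalized permutation entropy, Eqs. (20)-(21)."""
    n_windows = len(series) - order + 1
    patterns = Counter()
    for i in range(n_windows):
        window = series[i:i + order]
        # ordinal pattern: the permutation that sorts the values in the window
        patterns[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    entropy = -sum((c / n_windows) * math.log2(c / n_windows) for c in patterns.values())
    return entropy / math.log2(math.factorial(order))

random.seed(0)
noisy = [random.random() for _ in range(5000)]        # stands in for a random force series
regular = [math.sin(0.01 * t) for t in range(5000)]   # stands in for a regular series
print(permutation_entropy(noisy, 5))    # close to 1
print(permutation_entropy(regular, 5))  # much smaller
```

Values close to one, as in Table 1, signal a noisy, effectively random series.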
So, starting with an initial state labeled by \(\nu\), the probability of jumping to the right after \(n\) jumps is calculated as follows: \[P_{r-jump}(n,\nu) =\frac{1}{n}\sum_{i=1}^{n}SgnPlus(v_{x,i}^{(\nu)})\] \[\text{where }SgnPlus(v_{x,i}^{(\nu)})=\begin{cases}1&\text{if }v_{x}>0 \\ 0&\text{if }v_{x}<0\end{cases}\] Figure 6, on the left, shows examples of the time progression of individual particle jumps for four distinct initial conditions and two ground parameter adjustments, as well as the corresponding PDFs \(\Psi(x,t)\). With time evolution, the left/right jump probabilities for a ground with \(\alpha=0.01\) and \(\beta=0.005\) tend to be \(0.5\) very quickly as we can see into upper graphic on the left. However, if the beta parameter is set to \(\beta=0.0005\) the graph indicates an initial oscillation, but the probability ultimately tends to reach \(0.5\). The coordinates of the collision points and the travel time between one point and the next are obtained from the mapping given in equations (9) and (10). It is obvious that the travel time varies between jumps. However, for our analysis, it is critical to obtain the particle's position as a function of time with equal time intervals. This is simple because the particle moves in a gravitational field \(\mathbf{g}\), and we can easily calculate its position as a function of time. The time is then normalized so that the maximum time equals one. So, to get the probability distribution, for all iterative processes, we begin by subtracting the starting position of the particles. As a result, all of the particles in the ensemble start from the same position. In our scenario, we have 2000 particles performing 4000 leaps, totaling 8 million collision points, but it is clear that the number of points as a function of time depends on the choice of interval \(dt\) and can be much higher. To demonstrate the procedure, the simulation is configured so that each particle in the ensemble has an energy of \(E=4\). The outcomes for two different types of grounds are shown in Figure 6 on the right. The first PDF graph was obtained with the parameters \(\alpha=0.01\) and \(\beta=0.005\), and shows a probability density region following a format very similar to a Gaussian distribution. The second PDF, obtained with the parameters \(\alpha=0.01\) and \(\beta=0.0005\), has an extremely anomalous diffusion in the early part of its time evolution, however when the time evolution takes place, the PDF apparently starts to show a Gaussian behavior. In order to have a better understanding of this behavior, we studied the moments associated with each distribution. Inspired by the Gaussian form of normal diffusion, with an anomalous diffusion we make a scaling hypothesis [17] so that we can express the anomalous distribution as \[\Psi_{\mu}(x,t)=\sqrt{\frac{a}{\pi}}\frac{1}{t^{\mu}}\exp\left[-a\left(\frac{x }{t\mu}\right)^{2}\right]. \tag{22}\] The associated moments are easily obtained as \[\left\langle\left|x(t)\right|^{m}\right\rangle=\int\limits_{-\infty}^{\infty }x^{m}\Psi_{\mu}(x,t)\,dx=\frac{1}{\sqrt{a^{m}\pi}}\Gamma\left(\frac{m+1}{2} \right)t^{m\mu}. \tag{23}\] The result shows a behavior of MSD as \(\left\langle x^{2}\right\rangle\propto t^{2\mu}\), therefore normal distribution has a scale parameter \(\mu=1/2\). If \(\mu<1/2\) we have a subdiffusive process and for \(\mu>1/2\) we found a superdiffusive behavior. Figure 7 shows the results of the moments calculations for two different grounds. 
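In practice the exponent \(\mu\) is obtained by fitting \(\log\langle x^{2}\rangle\) against \(\log t\). A minimal sketch of such a fit is given below; it uses a synthetic ensemble of ordinary random walks in place of the simulated collision-point data, and should therefore return \(\mu\approx 0.5\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ensemble standing in for the simulated trajectories:
# 2000 independent random walks sampled at equally spaced times.
n_particles, n_steps = 2000, 400
x = np.cumsum(rng.normal(size=(n_particles, n_steps)), axis=1)
t = np.arange(1, n_steps + 1)

msd = np.mean(x**2, axis=0)                     # <x^2>(t), averaged over the ensemble

# <x^2> ~ t^(2 mu)  =>  log <x^2> = 2 mu log t + const.
slope, _ = np.polyfit(np.log(t), np.log(msd), 1)
print(f"estimated mu = {slope / 2:.3f}")        # ~0.5 for this normal-diffusion test case
```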
We can observe that at left we obtain the scale \(\mu=0.5\) and at right we obtain \(\mu=0.65\). So, we have two distinct behaviors: at left a normal diffusion and at right we have a superdiffusive behavior. The scaling hypothesis is carried forward using equation (23) to obtain \(t^{2\mu}=a\sqrt{\pi}\left\langle x(t)^{2}\right\rangle/\Gamma\left(3/2\right)\), which enables us to specify the subsequent function \[F(\xi)=t^{\mu}\Psi_{\mu}(x,t)=\sqrt{\frac{a}{\pi}}\exp\left[-\frac{\Gamma \left(3/2\right)}{\sqrt{\pi}}\xi\right] \tag{24}\] where \(\xi=x^{2}/\left\langle x(t)^{2}\right\rangle\). Using the PDF data for the superdiffusive process (\(\mu=0.65\)) we obtain \(F(\xi)\) numerically and the results for \(t=0.76\), \(t=0.765\), \(t=0.89\) and \(t=0.995\) are presented in the figure 8. The only parameter that can be adjusted in the theoretical forecast stated in Eq (24) is the value of \(a\). We get a remarkable agreement with the simulation findings when we choose \(a=3.75\times 10^{-8}\). The black dot-dashed line on the graph denotes the theoretical result obtained in equation (24). We observe that the theoretical modeling and the simulation outcome start to diverge for periods of time less than \(76.5\%\) of the overall duration of the iterative procedure. Rescaling the data, all simulation points for times more than this amount lie exactly on the same curve. This was already a foregone conclusion if we look at the second PDF in the figure 6, which shows quite anomalous behavior for times less than \(0.8\). Before this time has elapsed the particles display a strongly anomalous diffusion with a scale that must rely on the moment being estimated, \(\langle|x|^{m}\rangle\propto t^{m\mu(m)}\), [18]. ## V Conclusions and outlook In this work, we have studied a falling particle in the gravitational field colliding with a non-plane surface. We could observe that the horizontal component of the collision force presented a stochastic behavior. This was verified by using the entropy permutation method applied to the collision force time series. Additionally, we established that the jumps to the right and left follow a distribution whose probabilities tend toward \(0.5\) \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline floor parameters & initial condition & \(\mathcal{O}=3\) & \(\mathcal{O}=4\) & \(\mathcal{O}=5\) & \(\mathcal{O}=6\) \\ \hline \hline \(\alpha=0.01\) & \(p_{0}=-0.033\) & \(0.998569\) & \(0.995189\) & \(0.981222\) & \(0.92671\) \\ \(\beta=0.05\) & \(p_{0}=0.032\) & \(0.999633\) & \(0.995120\) & \(0.982245\) & \(0.925946\) \\ \hline \(\alpha=0.01\) & \(p_{0}=-0.033\) & \(0.998874\) & \(0.994082\) & \(0.986440\) & \(0.935262\) \\ \(\beta=0.0005\) & \(p_{0}=0.032\) & \(0.999501\) & \(0.996295\) & \(0.984878\) & \(0.934281\) \\ \hline \end{tabular} \end{table} Table 1: The initial conditions are chosen in order to vary the initial point (\(x(p_{0}),y(p_{0})\)) and keeping the energy \(\bar{E}=4\) constant. Figure 6: The first graphic of each column contains time evolution examples for the likelihood of a single particle jumping to the right. The difference is in the \(\beta\) parameter value, which is lowered to one-tenth and one-hundredth of its initial value in the columns on the left. The evolution of the 4-particle leaps (4 initial conditions) is explored in the graphs. The different initial conditions for the particles are obtained by changing the initial parameter \(p\) in the functions \(x(p)\) and \(y(p)\) in Eq (1) and keeping the energy \(\bar{E}=4\) constant. 
The selected \(p\)-parameters are shown in the figures, and the respective contour plots for the probability distributions are shown on the right. These jump probabilities tend toward \(0.5\) while the particle's temporal development takes place. It can be seen that the convergence to the factor \(0.5\) occurs significantly more quickly using the ground with parameters \(\alpha=0.01\) and \(\beta=0.005\) than with \(\beta=0.0005\). We assume that a surface with more pronounced undulations produces a horizontal component of the force that swiftly alters the particle's horizontal motion, causing the probability of jumps to converge quickly to \(0.5\). The first case implied a diffusion process that follows Einstein's famous relationship, so that the horizontal mean square displacement is proportional to time, \(\left\langle x(t)^{2}\right\rangle\sim t\). The system begins to become superdiffusive as the ground gets smoother. In fact, it is observed that the system with \(\beta=0.0005\) exhibits a strongly anomalous mean squared displacement over the earliest portion of its temporal history. Subsequently, the movement becomes "standard superdiffusive". To comprehend this behavior, we assumed that the probability density retains the Gaussian form of normal diffusion, with the exception that the distribution's time dependence is scaled by \(t^{\mu}\). We obtain a remarkable consistency between the theoretical expression and the simulation results using this approach. Future work is under development, including changes in the function that describes the floor and the introduction of dissipation and oscillations in the ground, among other extensions.

###### Acknowledgements.

The authors would like to thank Prof. Edson Denis Leonel for the observations and comments, as well as the Coordination for the Improvement of Higher Education Personnel (Capes) for financial support.
2303.18243
Soft pattern of Rutherford scattering from heavy target mass expansion
We investigate the soft behavior of the tree-level Rutherford scattering process. We consider two types of Rutherford scattering, a low-energy massless point-like projectile (say, a spin-${1\over 2}$ or spin-$0$ electron) to hit a static massive composite target particle carrying various spins (up to spin-$2$), and a slowly-moving light projectile hits a heavy static composite target. For the first type, the unpolarized cross sections in the laboratory frame are found to exhibit universal forms in the first two orders of $1/M$ expansion, yet differ at the next-to-next-to-leading order (though some terms at this order still remain to be universal or depend on the target spin in a definite manner). For the second type, at the lowest order in electron velocity expansion, through all orders in $1/M$, the unpolarized cross section is universal (also not sensitive to the projectile spin). The universality partially breaks down at relative order-$v^2/M^2$, though some terms at this order are still universal or depend on the target spin in a specific manner. We also employ the effective field theory approach to reproduce the soft behavior of the differential cross sections for the target particle being a composite Dirac fermion.
Yu Jia, Jia-Yue Zhang
2023-03-31T17:58:22Z
http://arxiv.org/abs/2303.18243v1
# Soft pattern of Rutherford scattering from heavy target mass expansion

###### Abstract

We investigate the soft behavior of the tree-level Rutherford scattering process. We consider two types of Rutherford scattering, a low-energy massless point-like projectile (say, a spin-\(\frac{1}{2}\) or spin-0 electron) to hit a static massive composite target particle carrying various spins (up to spin-2), and a slowly-moving light projectile hits a heavy static composite target. For the first type, the unpolarized cross sections in the laboratory frame are found to exhibit universal forms in the first two orders of \(1/M\) expansion, yet differ at the next-to-next-to-leading order (though some terms at this order still remain to be universal or depend on the target spin in a definite manner). For the second type, at the lowest order in electron velocity expansion, through all orders in \(1/M\), the unpolarized cross section is universal (also not sensitive to the projectile spin). The universality partially breaks down at relative order-\(v^{2}/M^{2}\), though some terms at this order are still universal or depend on the target spin in a specific manner. We also employ the effective field theory approach to reproduce the soft behavior of the differential cross sections for the target particle being a composite Dirac fermion.

## I Introduction

Rutherford scattering is one of the most classic experiments in the history of physics. Originally, Geiger and Marsden bombarded a gold foil with a beam of nonrelativistic \(\alpha\) particles in 1909 [1]. Shortly after, in 1911, Rutherford introduced the revolutionary concept of the atomic nucleus and successfully explained the experimental results by simply exploiting classical mechanics [2]. Without exaggeration, the Rutherford scattering experiment heralded the advent of nuclear physics and quantum mechanics. Half a century later, a new form of Rutherford scattering experiment conducted at SLAC, _i.e._, firing an energetic electron beam onto a fixed nuclear target, played a pivotal role in unravelling the internal structure of the nucleon. Through \(ep\) elastic scattering experiments, the electromagnetic form factors of the proton have been measured over a large range of \(Q^{2}\). From their profiles at the lower \(Q^{2}\) end, one can infer the proton's gross features such as the charge radius and magnetic dipole moment. It is interesting to note that there exists a decade-long puzzle about the proton's charge radius, _e.g._, the five-standard-deviation discrepancy between the value extracted from \(ep\) elastic scattering and ordinary hydrogen spectra and the value from the muonic hydrogen Lamb shift measurement [3; 4]. To infer the gross features of composite nuclei from Rutherford scattering, the exchanged virtual photon necessarily carries a long wavelength (hence low resolution). For this purpose, it is appropriate to concentrate on the low-energy behavior of the Rutherford scattering process. It is worth mentioning that another basic QED process, Compton scattering, in which a photon beam shines on a composite spinning target particle, can also be used to probe the internal structure of atomic nuclei. The soft behavior of the angular distribution of the Compton scattering in the laboratory frame has been thoroughly studied by Gell-Mann and Low in the 1960s [5; 6], which turns out to possess some simple and universal structure.
Based on the intuitive multipole expansion picture, one naturally anticipates that the soft limit of Rutherford scattering may also exhibit some universal and simple patterns. It is the goal of this work to comprehensively investigate the soft behavior of the two typical types of Rutherford scattering, _i.e._, a low-energy massless or a slowly-moving light projectile hitting a static, heavy, composite spinning target particle. For simplicity, we assume the projectile to be a structureless point particle, say, the spin-\(\frac{1}{2}\) or spin-0 electron. For concreteness, we choose the spin of the composite target particle to range from 0 to 2. We find that in both cases the differential cross sections of Rutherford scattering exhibit universal behavior in the first two terms upon heavy target mass expansion, yet differ at the next-to-next-to-leading order (in a manner depending on the target spin). We conjecture that this pattern may persist for a heavy target particle with arbitrary spin. The rest of the paper is structured as follows. In section II, we present the expression of the tree-level Rutherford scattering amplitude involving a heavy composite spinning target particle, specifying the parametrization of the electromagnetic form factors of the target particle. Section III is the main body of the paper, where we present the soft behavior of the two types of Rutherford scattering cross sections up to next-to-next-to-leading order in the heavy target mass expansion, assuming the projectile is the point spin-\(\frac{1}{2}\) electron. We explicitly demonstrate the universal behavior of the first two terms upon heavy target mass expansion, and the difference at NNLO. In section IV, we attempt to apply the heavy particle effective theory (HPET) and nonrelativistic QED (NRQED) to reproduce the soft behavior for the case of a spin-\(\frac{1}{2}\) target particle. We summarize in section V. In the Appendices, we also demonstrate that the main conclusion still holds once the projectile is replaced by a point-like spinless electron. ## II Amplitude of Rutherford scattering involving a heavy composite target particle To be specific, let us consider the Rutherford scattering process \(e(k)N(p)\to e(k^{\prime})N(p^{\prime})\), where \(N\) represents a heavy target particle. At tree level, Rutherford scattering is induced by a single-photon \(t\)-channel exchange, as depicted in Fig. 1. The scattering amplitude can be written as \[\mathcal{M}=\frac{e^{2}g_{\mu\nu}}{q^{2}}\langle e^{-}\left(k^{\prime}\right)\left|J^{\mu}\right|e^{-}\left(k\right)\rangle\langle N\left(p^{\prime},\lambda^{\prime}\right)\left|J^{\nu}\right|N\left(p,\lambda\right)\rangle, \tag{1}\] where \(J^{\mu}\) denotes the electromagnetic current and \(q=k-k^{\prime}\) represents the momentum transfer carried by the virtual photon. \(\lambda,\lambda^{\prime}\) denote the polarization indices of the massive spinning target particle. For simplicity, we have suppressed the spin index of the electron, and also neglected the electron mass. The electromagnetic transition matrix element involving the nucleus in (1) is generally a nonperturbative object, since the heavy target \(N\) is assumed to be any massive composite particle. 
However, this matrix element can be generally decomposed into the linear combination of independent electromagnetic form factors (FFs) according to Lorentz group representation [7]: \[\langle N\left(p^{\prime},\lambda^{\prime}\right)\left|J^{\nu} \right|N\left(p,\lambda\right)\rangle_{s=0}= 2P^{\mu}F_{1,0}\left(\frac{q^{2}}{M^{2}}\right), \tag{2a}\] \[\langle N\left(p^{\prime},\lambda^{\prime}\right)\left|J^{\nu} \right|N\left(p,\lambda\right)\rangle_{s=\frac{1}{2}}= \bar{u}(p^{\prime},\lambda^{\prime})\left[2P^{\mu}F_{1,0}\left( \frac{q^{2}}{M^{2}}\right)+\mathrm{i}\sigma^{\mu\nu}q_{\nu}F_{2,0}\left(\frac {q^{2}}{M^{2}}\right)\right]u(p,\lambda),\] (2b) \[\langle N\left(p^{\prime},\lambda^{\prime}\right)\left|J^{\nu} \right|N\left(p,\lambda\right)\rangle_{s=1}= -\varepsilon_{\alpha^{\prime}}^{*}(p^{\prime},\lambda^{\prime}) \bigg{\{}2P^{\mu}\left[g^{\alpha^{\prime}\alpha}F_{1,0}\left(\frac{q^{2}}{M^{ 2}}\right)-\frac{q^{\alpha^{\prime}}q^{\alpha}}{2M^{2}}F_{1,1}\left(\frac{q^{2} }{M^{2}}\right)\right]\] \[-\left(g^{\mu\alpha^{\prime}}q^{\alpha}-g^{\mu\alpha}q^{\alpha^{ \prime}}\right)F_{2,0}\left(\frac{q^{2}}{M^{2}}\right)\bigg{\}}\varepsilon_{ \alpha}(p,\lambda),\] (2c) \[\langle N\left(p^{\prime},\lambda^{\prime}\right)\left|J^{\nu} \right|N\left(p,\lambda\right)\rangle_{s=\frac{3}{2}}= -\bar{u}_{\alpha^{\prime}}(p^{\prime},\lambda^{\prime})\bigg{\{}2P ^{\mu}\left[g^{\alpha^{\prime}\alpha}F_{1,0}\left(\frac{q^{2}}{M^{2}}\right)- \frac{q^{\alpha^{\prime}}q^{\alpha}}{2M^{2}}F_{1,1}\left(\frac{q^{2}}{M^{2}} \right)\right]\] Figure 1: Tree-level Feynman diagram for Rutherford scattering process \(eN\to eN\). \[+\mathrm{i}\sigma^{\mu\nu}q_{\nu}\left[g^{\alpha^{\prime}\alpha}F_{2,0} \left(\frac{q^{2}}{M^{2}}\right)-\frac{q^{\alpha^{\prime}}q^{\alpha}}{2M^{2}}F_{ 2,1}\left(\frac{q^{2}}{M^{2}}\right)\right]\bigg{\}}u_{\alpha}(p,\lambda), \tag{2d}\] \[\langle N\left(p^{\prime},\lambda^{\prime}\right)|J^{\nu}|N\left(p,\lambda\right)\rangle_{s=2}= \varepsilon^{*}_{\alpha^{\prime}_{1}\alpha^{\prime}_{2}}(p^{ \prime},\lambda^{\prime})\bigg{\{}2P^{\mu}\bigg{[}g^{\alpha^{\prime}_{1} \alpha_{1}}g^{\alpha^{\prime}_{2}\alpha_{2}}F_{1,0}\left(\frac{q^{2}}{M^{2}} \right)-\frac{q^{\alpha^{\prime}_{1}}q^{\alpha_{1}}}{2M^{2}}g^{\alpha^{\prime }_{2}\alpha_{2}}F_{1,1}\left(\frac{q^{2}}{M^{2}}\right)\] \[+\frac{q^{\alpha^{\prime}_{1}}q^{\alpha_{1}}}{2M^{2}}\frac{q^{ \alpha^{\prime}_{2}}q^{\alpha_{2}}}{2M^{2}}F_{1,2}\left(\frac{q^{2}}{M^{2}} \right)\bigg{]}-\left(g^{\alpha^{\prime}_{2}}q^{\alpha_{2}}-g^{\mu\alpha_{2}} q^{\alpha^{\prime}_{2}}\right)\] \[\times\bigg{[}g^{\alpha^{\prime}_{1}\alpha_{1}}F_{2,0}\left( \frac{q^{2}}{M^{2}}\right)-\frac{q^{\alpha^{\prime}_{1}}q^{\alpha_{1}}}{2M^{2 }}F_{2,1}\left(\frac{q^{2}}{M^{2}}\right)\bigg{]}\bigg{\}}\varepsilon_{\alpha _{1}\alpha_{2}}(p,\lambda). \tag{2e}\] The various electromagnetic FFs are normalized to be dimensionless. \(P=(p+p^{\prime})/2\) is the average momentum of the target particle, and \(M\) is the mass of target particle. \(u\), \(\varepsilon^{\mu}\), \(u^{\mu}\), \(\varepsilon^{\alpha\beta}\) denote the wave function for the spin-\(\frac{1}{2}\), \(1\), \(\frac{3}{2}\), and \(2\) particles, respectively. Only keeping those Lorentz structures that obey the current conservation, one observes that the number of independent electromagnetic FFs is \(2s+1\) for target particle with spin \(s\). Note that the decomposition of the electromagnetic transition matrix element involving charged particle carrying various spin has been widely studied [8; 9]. 
The electromagnetic FFs in (2) encode the internal structure of the composite target particle. In principle, they can be extracted from experiments or computed by nonperturbative theoretical tools. Although the concrete profiles of the various FFs depend on the specific target particle, their values near zero momentum transfer do characterize the electromagnetic multipole moments of the composite target particle. For example, \(F_{1,0}(0)=Z\) denotes the total electric charge of the target particle in units of \(e\). \(F_{1,0}(0)+F_{1,1}(0)\), \(F_{2,0}(0)\) and \(F_{2,0}(0)+F_{2,1}(0)\) are the electric quadrupole moment, magnetic dipole moment and magnetic octupole moment of the composite target particle, in units of \(\frac{e}{2M}\), \(\frac{e}{M}\), and \(\frac{e}{2M^{3}}\), respectively [10]. It is interesting to note that the charge radius of a proton may also be expressed as \(r_{p}=\frac{3}{2M^{2}}[-F_{1,0}(0)+4F^{\prime}_{1,0}(0)+F_{2,0}(0)]\)1. Footnote 1: The Taylor expansion of the form factors around the origin is understood as \(F_{n}(q^{2}/M^{2})=F_{n}(0)+F^{\prime}_{n}(0)\frac{q^{2}}{M^{2}}+\mathcal{O}(1/M^{4})\). ## III Low-energy Rutherford Scattering in Heavy Target Mass Expansion Squaring the amplitude (1), averaging over spins in the initial state and summing over the polarizations in the final states, one can straightforwardly obtain the unpolarized differential cross sections of Rutherford scattering in the laboratory frame for various target particle species. In deriving the unpolarized cross sections, the following spin sum relations are useful: \[\sum_{\lambda}u(p,\lambda)\bar{u}(p,\lambda)=\frac{\not{p}+M}{2M}, \tag{3a}\] \[\sum_{\lambda}\varepsilon_{\alpha}(p,\lambda)\varepsilon^{*}_{\alpha^{\prime}}(p,\lambda)=\eta_{\alpha\alpha^{\prime}}, \tag{3b}\] \[\sum_{\lambda}u_{\alpha}(p,\lambda)\bar{u}_{\alpha^{\prime}}(p,\lambda)=-\frac{\not{p}+M}{2M}\left(g_{\alpha\alpha^{\prime}}-\frac{1}{3}\gamma_{\alpha}\gamma_{\alpha^{\prime}}-\frac{2p_{\alpha}p_{\alpha^{\prime}}}{3M^{2}}+\frac{\gamma_{\alpha}p_{\alpha^{\prime}}-\gamma_{\alpha^{\prime}}p_{\alpha}}{3M}\right), \tag{3c}\] \[\sum_{\lambda}\varepsilon_{\alpha_{1}\alpha_{2}}(p,\lambda)\varepsilon^{*}_{\alpha^{\prime}_{1}\alpha^{\prime}_{2}}(p,\lambda)=\eta_{\alpha_{1}\alpha^{\prime}_{1}}\eta_{\alpha_{2}\alpha^{\prime}_{2}}+\eta_{\alpha_{1}\alpha^{\prime}_{2}}\eta_{\alpha_{2}\alpha^{\prime}_{1}}-\frac{2}{3}\eta_{\alpha_{1}\alpha_{2}}\eta_{\alpha^{\prime}_{1}\alpha^{\prime}_{2}}, \tag{3d}\] with \(\eta_{\alpha\beta}\equiv-g_{\alpha\beta}+\frac{p_{\alpha}p_{\beta}}{M^{2}}\). Note that the Dirac spinor wave function is normalized as \(\bar{u}(p,r)u(p,s)=\delta^{rs}\). ### massless spin-1/2 projectile We first consider the modern \(ep\) elastic scattering experiment. In this case, the incident electron is treated as massless, and we are concerned with the low-energy limit \(|\mathbf{k}|\ll M\). We focus on Rutherford scattering in the laboratory frame, with the four-momentum of the target particle in the initial state signified by \(p^{\mu}=(M,\mathbf{0})\). The corresponding differential unpolarized cross section is defined by \[\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}=\frac{1}{2|\mathbf{k}|}\cdot\frac{1}{2M}\cdot\frac{\mathbf{k^{\prime}}^{2}}{8\pi|\mathbf{k}|M}\left(\frac{1}{2}\frac{1}{2s+1}\sum_{\mathrm{spins}}|\mathcal{M}|^{2}\right), \tag{4}\] where \(\theta\) denotes the polar angle between the incident and the scattered electron. 
\(|\mathbf{k}^{\prime}|\) is a function of \(|\mathbf{k}|\), \(\cos\theta\) and \(M\): \[|\mathbf{k}^{\prime}|=\frac{|\mathbf{k}|}{1+\frac{|\mathbf{k}|}{M}\left(1-\cos\theta\right)}. \tag{5}\] The full expressions of the unpolarized cross sections are generally lengthy and cumbersome-looking, and it is difficult to recognize from them any clear pattern in the dependence on the heavy target particle spin. One may hope that, once the heavy target mass expansion is conducted, the soft behavior of Rutherford scattering becomes transparent and some simple pattern can be readily identified. After expanding both the squared amplitude and the phase space measure (the factor \(\mathbf{k^{\prime}}^{2}/\mathbf{k}^{2}\)) in (4) in powers of \(1/M\), the differential Rutherford scattering cross sections indeed become much simpler. We find that the first two orders in the heavy target expansion are universal, _i.e._, independent of the heavy target spin: \[\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}=\frac{\pi\alpha^{2}Z^{2}\cos^{2}\frac{\theta}{2}}{2\mathbf{k}^{2}\sin^{4}\left(\frac{\theta}{2}\right)}-\frac{\pi\alpha^{2}Z^{2}\cos^{2}\frac{\theta}{2}}{M|\mathbf{k}|\sin^{2}\left(\frac{\theta}{2}\right)}+\mathcal{O}\left(\frac{1}{M^{2}}\right). \tag{6}\] For clarity we have substituted \(F_{1,0}=Z\). This result is intuitively clear: in the soft limit, the long-wavelength photon can only feel the total charge of the composite target particle, and is insensitive to any further details about its internal structure. In contrast, the next-to-next-to-leading-order (NNLO) terms in the heavy target mass expansion do vary with different heavy target particles: \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{\mathrm{NNLO}}^{s=0}=-\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{(}F^{\prime}_{1,0}Z\cos^{2}\frac{\theta}{2}+\frac{1}{8}Z^{2}\cos^{2}\theta-\frac{1}{8}Z^{2}\bigg{)}, \tag{7a}\] \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{\mathrm{NNLO}}^{s=\frac{1}{2}}=-\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}\frac{1}{16}F_{2,0}^{2}\left(\cos\theta-3\right)+\frac{1}{4}\cos^{2}\frac{\theta}{2}\left(4F^{\prime}_{1,0}Z+F_{2,0}Z+Z^{2}\cos\theta-\frac{3}{2}Z^{2}\right)\bigg{]}, \tag{7b}\] \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{\mathrm{NNLO}}^{s=1}=-\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}\frac{1}{24}F_{2,0}^{2}\left(\cos\theta-3\right)+\frac{1}{4}\cos^{2}\frac{\theta}{2}\left(4F^{\prime}_{1,0}Z-\frac{2}{3}F_{1,1}Z+\frac{2}{3}F_{2,0}Z+Z^{2}\cos\theta-\frac{5}{3}Z^{2}\right)\bigg{]}, \tag{7c}\] \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{\mathrm{NNLO}}^{s=\frac{3}{2}}=-\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}\frac{5}{144}F_{2,0}^{2}\left(\cos\theta-3\right)+\frac{1}{4}\cos^{2}\frac{\theta}{2}\left(4F^{\prime}_{1,0}Z-\frac{2}{3}F_{1,1}Z+F_{2,0}Z+Z^{2}\cos\theta-\frac{13}{6}Z^{2}\right)\bigg{]}, \tag{7d}\] \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{\mathrm{NNLO}}^{s=2}=-\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}\frac{1}{32}F_{2,0}^{2}\left(\cos\theta-3\right)+\frac{1}{4}\cos^{2}\frac{\theta}{2}\left(4F^{\prime}_{1,0}Z-\frac{2}{3}F_{1,1}Z+\frac{2}{3}F_{2,0}Z+Z^{2}\cos\theta-\frac{7}{3}Z^{2}\right)\bigg{]}. \tag{7e}\] For notational brevity, we have neglected the argument \(0\) in various form factors. 
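As an independent cross-check of the universal terms in (6), the following sympy sketch (ours, purely illustrative and not part of the original derivation) specializes to the simplest case of a point-like spin-0 target, i.e. \(F_{1,0}=Z\) with all other form factors switched off, builds the lab-frame cross section from (1), (2a), (4) and (5), and expands it in \(1/M\):

```python
import sympy as sp

# Symbols: electron energy w = |k|, target mass M, scattering angle theta,
# fine-structure constant alpha, target charge Z (point-like spin-0 target).
w, M, th, alpha, Z = sp.symbols('omega M theta alpha Z', positive=True)

# Recoil relation (5) for a massless projectile on a static target of mass M.
wp = w / (1 + (w/M)*(1 - sp.cos(th)))

# Lorentz invariants in the lab frame: p = (M, 0), k along z, k' at angle theta.
kk  = w*wp*(1 - sp.cos(th))          # k.k'
q2  = -2*kk                          # q^2 = (k - k')^2
Pk  = M*w  - kk/2                    # P.k   with P = p + q/2
Pkp = M*wp + kk/2                    # P.k'
P2  = M**2 + kk/2                    # P.P

# Electron-spin-summed |M|^2 for the s=0 vertex 2 P^mu F_{1,0} of Eqs. (1), (2a),
# with F_{1,0} = Z constant and e^2 = 4 pi alpha.
e2 = 4*sp.pi*alpha
M2sum = (16*e2**2*Z**2/q2**2) * (2*Pk*Pkp - P2*kk)

# Differential cross section (4): electron spin average 1/2, and 2s+1 = 1 here.
dsig = sp.Rational(1, 2)*M2sum * wp**2 / (2*w*2*M*8*sp.pi*w*M)

# Expand in 1/M (via lambda = 1/M) and compare with the first two terms of Eq. (6).
lam = sp.symbols('lambda', positive=True)
series = sp.series(dsig.subs(M, 1/lam), lam, 0, 2).removeO().subs(lam, 1/M)
target = (sp.pi*alpha**2*Z**2*sp.cos(th/2)**2/(2*w**2*sp.sin(th/2)**4)
          - sp.pi*alpha**2*Z**2*sp.cos(th/2)**2/(M*w*sp.sin(th/2)**2))

# Numerical spot-check of the agreement at an arbitrary kinematic point.
pt = {w: sp.Rational(3, 10), M: 7, alpha: sp.Rational(1, 137), Z: 29, th: sp.Rational(11, 10)}
assert abs(sp.N((series - target).subs(pt), 30)) < 1e-20
print("LO and NLO terms of Eq. (6) reproduced for a point-like spin-0 target")
```

Pushing the same point-like expansion one order further also reproduces the \(Z^{2}\) part of (7a).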
We observe that the \(F^{\prime}_{1,0}Z\), \(Z^{2}\cos\theta\) and \(F_{1,1}Z\) terms with a prefactor \(\cos^{2}(\theta/2)\) are still universal, _i.e._, independent of the target spin. In fact, the \(F^{\prime}_{1,0}Z\) and \(Z^{2}\cos\theta\) terms have the same origin as the LO and NLO cross sections, corresponding to different terms in the Taylor expansion of \(F_{1,0}^{2}(q^{2}/M^{2})\) in the squared LO amplitude and phase space measure. The coefficient of the \(F_{2,0}Z\) term seems to reflect the spin-statistics characteristic of the target particle. For fermions, the coefficient is \(1\), while for bosons it is \(2/3\). Although the coefficients of \(F_{2,0}^{2}(\cos\theta-3)\) inside the square bracket depend on the target particle spin \(s\), they seem to fit the expression \(\frac{1+s}{48s}\) (for \(s=1/2,1,3/2,2\)). It is curious whether this pattern still persists for higher target spins or not. ### Light non-relativistic spin-1/2 projectile Next we turn to the soft limit of the original prototype of the Rutherford scattering process, that is, a slowly moving light particle hitting a heavy static target. We again assume the projectile is a Dirac fermion, whose mass and momentum are denoted by \(m\) and \(\mathbf{k}\). The differential cross section for this type of Rutherford scattering in the laboratory frame is defined by \[\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}=\frac{1}{32\pi M}\left[p^{\prime 0}+k^{\prime 0}\left(1-\frac{|\mathbf{k}|}{|\mathbf{k}^{\prime}|}\cos\theta\right)\right]^{-1}\frac{|\mathbf{k}^{\prime}|}{|\mathbf{k}|}\left|\mathcal{M}\right|^{2}. \tag{8}\] The resulting expressions are rather lengthy. Fortunately, we are only interested in their soft behavior. Since there are three widely separated scales in this process, which obey \(|\mathbf{k}|\ll m\ll M\), the appropriate way of extracting the soft behavior is to expand the differential cross sections in powers of \(v=|\mathbf{k}|/m\) (the velocity of the projectile) and \(1/M\) simultaneously. The necessity of performing a double expansion renders this case somewhat more complicated than the preceding case discussed in section III.1. Interestingly, at the lowest order in velocity yet to all orders in \(1/M\), the differential cross section scales as \(1/|\mathbf{k}|^{4}\) and takes a uniform form: \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)^{s}_{(v^{0})}=\frac{2\pi Z^{2}\alpha^{2}}{\mathbf{k}^{4}}\frac{m^{2}(M+m)^{2}\left(\sqrt{M^{2}-m^{2}\sin^{2}\theta}+m\cos\theta\right)^{2}}{M\sqrt{M^{2}-m^{2}\sin^{2}\theta}\left(M-\cos\theta\sqrt{M^{2}-m^{2}\sin^{2}\theta}+m\sin^{2}\theta\right)^{2}}=\frac{\pi Z^{2}\alpha^{2}m^{2}}{2\mathbf{k}^{4}\sin^{4}\frac{\theta}{2}}-\frac{\pi Z^{2}\alpha^{2}m^{4}}{M^{2}\mathbf{k}^{4}}+\mathcal{O}\left(\frac{m^{6}}{M^{4}\mathbf{k}^{4}}\right). \tag{9}\] At the next-to-leading order in the velocity expansion, the differential cross sections scale as \(1/|\mathbf{k}|^{2}\), and their explicit expressions are still rather complicated and vary with different target species. 
Nevertheless, once the heavy target mass expansion is conducted, some clear pattern emerges: \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)^{s}_{ (v^{2})}= \frac{\pi\alpha^{2}}{\mathbf{k}^{2}\sin^{2}\frac{\theta}{2}}\left[ \frac{Z^{2}\cos^{2}\frac{\theta}{2}}{2\sin^{2}\frac{\theta}{2}}-\frac{Z^{2}m \cos^{2}\frac{\theta}{2}}{M}-\frac{Zm^{2}}{4M^{2}}f^{s}_{\mathrm{NNLO}}+ \mathcal{O}\left(\frac{1}{M^{3}}\right)\right], \tag{10}\] where \[f^{s=0}_{\mathrm{NNLO}}= 16F^{\prime}_{1,0}+Z\cos\theta-Z, \tag{11a}\] \[f^{s=\frac{1}{2}}_{\mathrm{NNLO}}= 16F^{\prime}_{1,0}+Z\cos\theta+4F_{2,0}-3Z,\] (11b) \[f^{s=1}_{\mathrm{NNLO}}= 16F^{\prime}_{1,0}+Z\cos\theta-\frac{8}{3}F_{1,1}+\frac{8}{3}F_{2,0}-\frac{11}{3}Z,\] (11c) \[f^{s=\frac{3}{2}}_{\mathrm{NNLO}}= 16F^{\prime}_{1,0}+Z\cos\theta-\frac{8}{3}F_{1,1}+4F_{2,0}-\frac{ 17}{3}Z,\] (11d) \[f^{s=2}_{\mathrm{NNLO}}= 16F^{\prime}_{1,0}+Z\cos\theta-\frac{8}{3}F_{1,1}+\frac{8}{3}F_{2,0}-\frac{19}{3}Z. \tag{11e}\] The LO and NLO terms in \(1/M\) expansion are universal. The NNLO terms begin to exhibit target spin dependence. However, even at \(\mathcal{O}(v^{2}/M^{2})\), the \(F^{\prime}_{1,0}\), \(F_{1,1}\) and \(Z\cos\theta\) terms still seem to be universal, _i.e._, independent of the target particle spin. The coefficient of \(F_{2,0}\) seems to reflect the spin-statistic characteristic of the target particle. For fermions, the coefficient is 4, while for bosons \(\frac{8}{3}\). ## IV Reproducing the soft behavior from effective field theory The low-energy limit of Rutherford scattering is largely dictated by a heavy target particle interacting with a soft photon. Therefore, it is natural to expect the soft behavior can be reproduced by an effective field theory analogous to heavy quark effective theory (HQET), which automatically incorporates the heavy target mass expansion. In this section, we will specialize to the case of a spin-1/2 composite target particle. Originally, HQET is designed to describe a structureless heavy quark interacting with soft gluons [11; 12]. Due to the asymptotic freedom property of QCD, the Wilson coefficients can be computed in perturbation theory through perturbative matching procedure. The key idea of HQET can be readily transplanted to the case of a heavy composite particle interacting with a soft photon, as long as the photon wavelength is too long to deeply probe the internal structure of the composite target. As a price, one is generally unable to calculate various Wilson coefficients from the top-down perspective. The internal structure of the composite heavy target particle is encoded in various Wilson coefficients, which essentially represent various multipole moments. They can be in principle evaluated by nonperturbative means, or can be determined by the bottom-up approach, _e.g._, extracted from low-energy Rutherford scattering experiments. In analogy with HQET, we build up an EFT dubbed heavy particle effective theory (HPET), describing a static heavy composite fermionic target particle interacting with soft photon: \[\mathcal{L}_{\text{HPET}}=\bar{h}_{v}\left(iD_{0}+c_{2}\frac{\mathbf{D}^{2}}{ 2M}+c_{F}e\frac{\mathbf{\sigma}\cdot\mathbf{B}}{2M}+c_{D}e\frac{[\mathbf{\nabla}\cdot \mathbf{E}]}{8M^{2}}+ic_{S}e\frac{\sigma\cdot(\mathbf{D}\times\mathbf{E}- \mathbf{E}\times\mathbf{D})}{8M^{2}}\right)h_{v}+\mathcal{O}(1/M^{3}), \tag{12}\] where we have truncated the effective lagrangian through order \(1/M^{2}\). 
\(h_{v}\) represents the heavy target HPET field, with the label velocity \(v^{\mu}=(1,\mathbf{0})\). \(D^{\mu}=\partial^{\mu}+iZeA^{\mu}\) signifies the covariant derivative, and \(\mathbf{E}\) and \(\mathbf{B}\) denote the electric and magnetic fields. The coefficient \(c_{2}=1\) is a rigorous consequence of Lorentz symmetry. The \(c_{F}\)-, \(c_{D}\)- and \(c_{S}\)-related terms are often referred to as the Fermi, Darwin and spin-orbital terms. The organization of the HPET lagrangian is governed by powers of \(|\mathbf{q}|/M\), with \(\mathbf{q}\) signifying the photon momentum. ### HPET description of massless spin-1/2 projectile hitting static spin-\(\frac{1}{2}\) target In contrast to Fig. 1, up to order \(1/M^{2}\) there arise five Feynman diagrams in the context of HPET for the tree-level process \(e(k)N(p)\to e(k^{\prime})N(p^{\prime})\). The corresponding amplitude reads \[\mathcal{M}_{\text{HPET}}=-\sqrt{1+c_{2}\frac{\mathbf{p}^{\prime 2}}{2M^{2}}}\frac{e^{2}}{q^{2}}\bigg{\{}-Z\bar{u}_{\text{NR}}u_{\text{NR}}\bar{u}(k^{\prime})\gamma^{0}u(k)+\frac{c_{2}Z}{2M}\bar{u}_{\text{NR}}u_{\text{NR}}\bar{u}(k^{\prime})\mathbf{p}^{\prime}\cdot\mathbf{\gamma}u(k)+\cdots\bigg{\}}, \tag{13}\] where the ellipsis denotes the \(c_{F}\)-, \(c_{D}\)- and \(c_{S}\)-dependent vertex contributions. Squaring this amplitude and expanding in \(1/M\), one obtains the NNLO contribution to the unpolarized cross section. The first term has the same origin as the LO and NLO contributions, and is anticipated to be universal. The second term comes from the interference between the \(c_{D}\) term and the LO amplitude, and the last term stems from the square of the \(c_{F}\) term. Concretely speaking, the last two terms depend on the composite target particle's charge radius and magnetic dipole. Interestingly, the \(c_{F}\) term can be identified with the \(F_{2,0}^{2}(\cos\theta-3)\) term in (7). As discussed in the paragraph after (7), the coefficient of this term may depend on the target spin in a specific manner. 
To verify that the EFT amplitude does capture the correct soft behavior, we can perform the heavy target mass expansion from the full QED amplitude in (1): \[\mathcal{M}_{\text{QED}}= \frac{e^{2}}{q^{2}}\bar{u}(k^{\prime})\gamma^{\mu}u(k)\bar{u}(p^{ \prime},\lambda^{\prime})\left[2P^{\mu}F_{1,0}\left(\frac{q^{2}}{M^{2}}\right)+ \mathrm{i}\sigma^{\mu\nu}q_{\nu}F_{2,0}\left(\frac{q^{2}}{M^{2}}\right)\right]u (p,\lambda)\] \[= \frac{e^{2}}{q^{2}}\bar{u}\gamma^{\mu}u\bar{u}_{\text{NR}}^{ \lambda^{\prime}}\sqrt{\frac{p^{\prime 0}}{M}}\left(1-\frac{\mathbf{p}^{\prime} \cdot\mathbf{\gamma}}{2M}-\frac{\mathbf{p}^{\prime 2}}{8M^{2}}\right)\left[2P^{\mu} \left(F_{1,0}+F_{1,0}^{\prime}\frac{q^{2}}{M^{2}}\right)+\mathrm{i}\sigma^{ \mu\nu}q_{\nu}F_{2,0}\right]u_{\text{NR}}^{\lambda}\] \[= \frac{2Me^{2}}{q^{2}}\left[Z\bar{u}\gamma^{0}u\bar{u}_{\text{NR}} ^{\lambda^{\prime}}u_{\text{NR}}^{\lambda}+\frac{F_{2,0}}{4M}\bar{u}\gamma_{ \mu}u\bar{u}_{\text{NR}}^{\lambda^{\prime}}\left[\not{q},\gamma^{\mu}\right]u_ {\text{NR}}^{\lambda}+\frac{q^{2}\left(8F_{1,0}^{\prime}-F_{1,0}+2F_{2,0} \right)}{8M^{2}}\bar{u}\gamma^{0}u\bar{u}_{\text{NR}}^{\lambda^{\prime}}u_{ \text{NR}}^{\lambda}\right] \tag{15}\] where we have not only expanded the form factor \(F_{1,0}\) to the first order in \(q^{2}/M^{2}\), but also expanded the Dirac spinor using \(u(p^{\prime})=\sqrt{\frac{p^{\prime 0}}{M}}\left(1-\frac{\mathbf{p}^{\prime} \cdot\mathbf{\gamma}}{2M}-\frac{\mathbf{p}^{\prime 2}}{8M^{2}}\right)u_{\text{NR}}+ \mathcal{O}(1/M^{3})\). Note the HPET amplitude assumes nonrelativistic normalization for target particle, therefore one needs to include an overall factor \(2M\) prior to comparing it with the full QED amplitude. By equating (13) and (15), we are able to identify the relation between the Wilson coefficients in HPET and the electromagnetic form factors near the zero-momentum transfer: \[c_{F}= F_{2,0}, \tag{16a}\] \[c_{D}= 2F_{2,0}+8F_{1,0}^{\prime}-F_{1,0}, \tag{16b}\] which are identical to those relations obtained for the structureless quark in HQET [13]. Substituting the relations (16) into (14), we fully reproduce the NNLO contribution for a heavy spin-1/2 target, as recorded in (7b). NRQED+HPET description of slowly-moving spin-1/2 projectile hitting static spin-\(\frac{1}{2}\) target Next we turn to the EFT approach to understand the second type of Rutherford scattering, a light non-relativistic particle hits a static heavy composite target. To be specific, we specialize to a spin-1/2 structureless projectile and a spin-1/2 target particle. The treatment of the static composite fermionic target is identical as the section IV.1. It is natural to apply the nonrelativistic QED (NRQED) [14] to describe the incident slowly-moving electron. Up to relative order \(v^{2}\), the electron sector of the NRQED Lagrangian reads \[\mathcal{L}_{\text{NRQED}}=\psi^{\dagger}\bigg{[}iD^{0}+d_{2}\frac{\mathbf{D} ^{2}}{2m}+d_{4}\frac{\mathbf{D}^{4}}{8m^{3}}+d_{F}e\frac{\mathbf{\sigma}\cdot \mathbf{B}}{2m}+d_{D}e\frac{[\mathbf{\nabla}\cdot\mathbf{E}]}{8m^{2}}+id_{S}e\frac {\mathbf{\sigma}\cdot(\mathbf{D}\times\mathbf{E}-\mathbf{E}\times\mathbf{D})}{8m^ {2}}\bigg{]}\psi, \tag{17}\] where \(\psi\) denotes a Pauli spinor field that annihilates a nonrelativistic electron. \(d_{2}=d_{4}=1\) is a rigorous consequence of Lorentz invariance. The \(d_{4}\) term, together with the \(d_{F}\), \(d_{D}\) and \(d_{S}\) terms (referred to as the Fermi, Darwin and spin-orbital terms), represent the \(O(v^{2})\) corrections to NRQED lagrangian. 
At tree level, the Wilson coefficients \(d_{F}=d_{D}=d_{S}=1\). Our starting point is the HPET lagrangian (12) and the NRQED lagrangian (17). It is convenient to work in Coulomb gauge. Up to \(\mathcal{O}(v^{2}/M^{2})\), the relevant tree-level EFT amplitude for \(eN\to eN\) reads \[\mathcal{M}_{\text{EFT}}= \frac{e^{2}}{\mathbf{q}^{2}}\xi^{\dagger}\left[1+\frac{d_{2}}{4m^ {2}}(\mathbf{k}^{2}+\mathbf{k}^{\prime 2})-\frac{d_{D}}{8m^{2}}|\mathbf{k}^{\prime}- \mathbf{k}|^{2}-\frac{id_{S}}{4m^{2}}\mathbf{\sigma}\cdot(\mathbf{k}\times \mathbf{k}^{\prime})\right]\xi\,\bar{u}_{\text{NR}}^{\lambda^{\prime}}\Big{[}- Z+\frac{\left(c_{D}-2c_{2}Z\right)\mathbf{p}^{\prime 2}}{8M^{2}}\Big{]}u_{\text{NR}}^{\lambda} \tag{18}\] \[- \frac{1}{q^{2}}\left(\delta^{ij}-\frac{q^{i}q^{j}}{\mathbf{q}^{2} }\right)\xi^{\dagger}\bigg{\{}\left(k^{i}+k^{{}^{\prime}i}\right)\left[\frac{d_ {2}}{2m}+\frac{d_{2}^{2}-d_{4}}{8m^{3}}(\mathbf{k}^{2}+\mathbf{k}^{\prime 2})\right]+\frac{id_{F}}{2m}\left(1+d_{2}\frac{ \mathbf{k}^{2}+\mathbf{k}^{\prime 2}}{4m^{2}}\right)\left[\mathbf{\sigma}\times(\mathbf{k}^{\prime}- \mathbf{k})\right]^{i}\] \[- \frac{d_{D}}{16m^{3}}(k^{\prime i}-k^{i})(\mathbf{k}^{\prime 2}- \mathbf{k}^{2})-\frac{id_{S}}{16m^{3}}(\mathbf{k}^{\prime 2}-\mathbf{k}^{2})\left[\mathbf{ \sigma}\times(\mathbf{k}+\mathbf{k}^{\prime})\right]^{i}\bigg{\}}\xi\bar{u}_{ \text{NR}}^{\lambda^{\prime}}\Big{[}-\frac{c_{2}Z}{2M^{\prime}}p^{\prime j}-i \frac{c_{F}}{2M}\sigma^{jl}p^{\prime l}\Big{]}u_{\text{NR}}^{\lambda},\] where the first line represents the temporal photon exchange, and the remaining lines represent the transverse photon exchange. \(\xi\) denotes the two-component spinor wave function. After some simplification, (18) reduces to \[\mathcal{M}_{\rm EFT}= \frac{e^{2}}{\mathbf{q}^{2}}\Big{[}-Z+\frac{\left(c_{D}-2c_{2}Z \right)\mathbf{p}^{\prime 2}}{8M^{2}}\Big{]}\xi^{\dagger}\left[1+\frac{d_{2}}{4m^{2}} \left(\mathbf{k}^{2}+\mathbf{k}^{\prime 2}\right)-\frac{d_{D}}{8m^{2}}|\mathbf{k}^{ \prime}-\mathbf{k}|^{2}-\frac{{\rm i}d_{S}}{4m^{2}}\mathbf{\sigma}\cdot\left( \mathbf{k}\times\mathbf{k}^{\prime}\right)\right]\xi\bar{u}_{\rm NR}^{\lambda^{ \prime}}u_{\rm NR}^{\lambda}\] \[-\frac{c_{F}e^{2}}{4M\mathbf{q}^{2}}\xi^{\dagger}\bigg{\{}\big{(} k^{i}+k^{\prime i}\big{)}\left[\frac{d_{2}}{2m}+\frac{d_{2}^{2}-d_{4}}{8m^{3}} \left(\mathbf{k}^{2}+\mathbf{k}^{\prime 2}\right)\right]+\frac{{\rm i}d_{F}}{2m} \left[\mathbf{\sigma}\times\left(\mathbf{k}^{\prime}-\mathbf{k}\right)\right]^{i }-\frac{{\rm i}d_{S}}{16m^{3}}\left(\mathbf{k}^{\prime 2}-\mathbf{k}^{2} \right)\left[\mathbf{\sigma}\times\left(\mathbf{k}^{\prime}-\mathbf{k}\right) \right]^{i}\bigg{\}}\xi\] \[\times \bar{u}_{\rm NR}^{\lambda^{\prime}}\Big{[}\gamma^{i},\mathbf{\gamma }\cdot\mathbf{q}\Big{]}u_{\rm NR}^{\lambda}. 
\tag{19}\] Squaring the amplitude in (19), summing/averaging over various spins, we obtain the differential unpolarized Rutherford scattering cross section in the context of EFT: \[\frac{{\rm d}\sigma}{{\rm d}\cos\theta}\bigg{|}_{\rm EFT}= \frac{\pi\alpha^{2}m^{2}Z^{2}}{2\mathbf{k}^{4}\sin^{4}\frac{ \theta}{2}}-\frac{\pi\alpha^{2}m^{4}Z^{2}}{M^{2}\mathbf{k}^{4}}+\frac{\pi \alpha^{2}Z}{2\mathbf{k}^{2}\sin^{2}\frac{\theta}{2}}\bigg{\{}\frac{Z\left(d_{ D}\cos\theta-d_{D}+2\right)}{2\sin^{2}\frac{\theta}{2}}\] \[-\frac{m}{M}(d_{D}\cos\theta-d_{D}+2)-\frac{m^{2}}{2M^{2}}\left[ Z(2+d_{D}-4c_{2})+Z\cos\theta(2-d_{D})+2c_{D}\right]\bigg{\}}, \tag{20}\] The \(d_{S}\) term in (19) does not contribute to the squared amplitude since its interference with LO amplitude in velocity expansion only contains a single Pauli matrix, hence vanishes upon summing over polarization. Substituting \(c_{2}=d_{D}=1\) in (20), and utilizing the relations given in (16), we exactly reproduce (9) which encodes the LO and NLO terms in heavy target expansion, as well as (11b) which encapsulates the NNLO term. (11) indicates that the \(Z\cos\theta\) and \(F_{1,0}^{\prime}\) terms in NNLO correction are universal, _e.g._, independent of the target spin. This may indicate the structures such as \(Z\cos\theta(2-d_{D})+2c_{D}\) may arise ubiquitously in an EFT calculation for the heavy target other than spin-1/2 fermion. To verify that the EFT amplitude indeed reproduces the correct soft behavior, we conduct both nonrelativistic and heavy target mass expansion from the full QED amplitude in (1). Working again in Coulomb gauge, and employing the following the relation between the relativistic electron spinor and nonrelativistic electron spinor: \[u(k)=\frac{1}{\sqrt{k^{0}+m}}\left(\begin{pmatrix}k^{0}+m\end{pmatrix}\xi \right),\quad\bar{u}(k)=\frac{1}{\sqrt{k^{0}+m}}\left(\begin{pmatrix}k^{0}+m \end{pmatrix}\xi^{\dagger}\ -\xi^{\dagger}\mathbf{k}\cdot\mathbf{\sigma}\right),\] we expand the full QED amplitude through \(\mathcal{O}(v^{2}/M^{2})\): \[\mathcal{M}_{\rm QED}= -\frac{e^{2}}{\mathbf{q}^{2}}\bar{u}(k^{\prime})\gamma^{0}u(k) \bar{u}_{\rm NR}^{\lambda^{\prime}}\left[2ZP^{0}+\frac{P^{0}}{4M^{2}}\left(8F_ {1,0}^{\prime}q^{2}+Z\mathbf{q}^{2}\right)+{\rm i}\frac{\mathbf{p}^{\prime} \cdot\mathbf{\gamma}}{2M}\sigma^{0i}q_{i}F_{2,0}\right]u_{\rm NR}^{\lambda} \tag{21}\] \[-\frac{e^{2}}{\mathbf{q}^{2}}\left(\delta_{ij}-\frac{q^{i}q^{j}}{ \mathbf{q}^{2}}\right)\bar{u}\gamma^{i}u\bar{u}_{\rm NR}^{\lambda^{\prime}} \left[2ZP^{j}+\frac{1}{2}\left[\not{q},\gamma^{j}\right]F_{2,0}+\frac{P^{j} }{4M^{2}}\left(8F_{1,0}^{\prime}q^{2}+Z\mathbf{q}^{2}\right)\right]u_{\rm NR}^ {\lambda}\] \[= \frac{2Me^{2}}{\mathbf{q}^{2}}2m\bigg{[}-Z+\frac{\mathbf{p}^{ \prime 2}}{8M^{2}}\bigg{(}2F_{2,0}+8F_{1,0}^{\prime}-3Z\bigg{)}\bigg{]}\xi^{ \dagger}\left[1+\frac{\left|\mathbf{k}+\mathbf{k}^{\prime}\right|^{2}}{8m^{2}} -\frac{{\rm i}}{4m^{2}}\mathbf{\sigma}\cdot\left(\mathbf{k}\times\mathbf{k}^{ \prime}\right)\right]\xi\bar{u}_{\rm NR}^{\lambda^{\prime}}u_{\rm NR}^{\lambda}\] \[-\frac{F_{2,0}e^{2}}{2\mathbf{q}^{2}}2m\xi^{\dagger}\bigg{\{} \frac{1}{2m}\left(k^{i}+k^{\prime i}\right)+\frac{i}{2m}\left[\mathbf{\sigma} \times\left(\mathbf{k}^{\prime}-\mathbf{k}\right)\right]^{i}-\frac{i}{16m^{3} }\left(\mathbf{k}^{\prime 2}-\mathbf{k}^{2}\right)\left[\mathbf{\sigma}\times\left(\mathbf{k}^{ \prime}-\mathbf{k}\right)\right]^{i}\bigg{\}}\xi\bar{u}_{\rm NR}^{\lambda^{ \prime}}\left[\gamma^{i},\mathbf{\gamma}\cdot\mathbf{q}\right]u_{\rm NR}^{\lambda}.\] After 
including the normalization factor \((2M)(2m)\), employing the relations for the heavy target Wilson coefficients in (16), and taking \(d_{2}=d_{4}=d_{F}=d_{D}=d_{S}=1\), one finds that the EFT amplitude (19) exactly agrees with the full QED amplitude (21). ## V Summary In this work, we have conducted a comprehensive study of the soft behavior of the tree-level Rutherford scattering process. We have considered two classes of Rutherford scattering experiments, a low-energy point-like massless projectile (_e.g._, a spin-\(\frac{1}{2}\) or spin-0 electron) bombs a static massive composite spinning target particle (_e.g._, atomic nucleus), and a slowly-moving light structureless projectile hits a static heavy composite spinning target. We have considered various composite target particle with spin up to 2. The soft limits of the unpolarized cross sections in the laboratory frame in both cases exhibit some universal pattern. For the former type of Rutherford scattering process, given a specific projectile, the first two terms in the differential cross section are universal upon heavy target mass expansion, while the universality starts to break down at NNLO. Nevertheless, many terms at NNLO still remain to be spin-independent or have some definite spin-dependence pattern. For the latter type, we have to perform both nonrelativistic and heavy target mass expansion to infer the correct soft limit. At the lowest order in projectile velocity expansion yet to all orders in \(1/M\) expansion, the differential cross section has a universal form (insensitive to the projectile spin). At NLO in velocity expansion, the first two terms in the differential cross section in \(1/M\) expansion are still universal. The \(\mathcal{O}(v^{2}/M^{2})\) piece starts to partially violate the universality. Despite this, some terms at this order still remain to be target spin independent. It is of special interest that the \(F_{2,0}\) term at \(\mathcal{O}(1/M^{2})\) (or \(\mathcal{O}(v^{2}/M^{2})\) for second type of Rutherford scattering) seems to reflect some peculiar spin-statistics feature. Its coefficient remains to be one constant for fermionic target, while another constant for bosonic target. It is interesting to verify this observation by investigating the target particle with even higher spin. We have also attempted to apply effective field theory approach to understand the soft pattern of the Rutherford scattering cross sections, taking the target particle as a composite Dirac fermion for concreteness. Some useful insight is gained from the EFT perspective. ###### Acknowledgements. We are grateful to the useful discussions with Zhewen Mo and Jichen Pan. This work is supported in part by the National Natural Science Foundation of China under Grants No. 11925506 and No. 12070131001 (CRC110 by DFG and NSFC). ## Appendix A Rutherford scattering with massless spinless projectile We can repeat our investigation in Section III.1 by replacing the projectile to be a massless spin-0 electron, which is described by scalar QED. The electromagnetic vertex involving scalar electron is simply given by \[\langle e(k^{\prime})|J^{\mu}|e(k)\rangle=-\left(k^{\mu}+k^{\prime\mu}\right). \tag{10}\] Upon heavy target mass expansion, we again observe that the unpolarized cross sections exhibit some universal feature. 
Concretely speaking, the LO and NLO pieces in \(1/M\) expansion are independent of the target particle spin: \[\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}=\frac{\pi\alpha^{2}Z^{2}}{2 \mathbf{k}^{2}\sin^{4}\left(\frac{\theta}{2}\right)}-\frac{\pi\alpha^{2}Z^{2} }{M|\mathbf{k}|\sin^{2}\left(\frac{\theta}{2}\right)}+\mathcal{O}\left(\frac{1 }{M^{2}}\right). \tag{11}\] The universality becomes partially violated at NNLO. For various target particle, the NNLO contributions are \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{ \mathrm{NNLO}}^{s=0}= -\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{(}F_{1,0}^{\prime}Z+\frac{5}{16}Z^{2}\cos\theta-\frac{5}{16}Z^{2}\bigg{)}, \tag{12a}\] \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{ \mathrm{NNLO}}^{s=\frac{1}{2}}= -\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}F_{1,0}^{\prime}Z-\frac{1}{16}F_{2,0}^{2}(\cos\theta+1)+\frac{1}{4}F_{2,0}Z\] \[+\frac{5}{16}Z^{2}\cos\theta-\frac{7}{16}Z^{2}\bigg{]},\] (12b) \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{ \mathrm{NNLO}}^{s=1}= -\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}F_{1,0}^{\prime}Z-\frac{1}{24}F_{2,0}^{2}(\cos\theta+1)+\frac{1}{6}F_{2,0}Z\] \[+\frac{5}{16}Z^{2}\cos\theta-\frac{1}{6}F_{1,1}Z-\frac{23}{48}Z^{2} \bigg{]},\] (12c) \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{ \mathrm{NNLO}}^{s=\frac{3}{2}}= -\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}F_{1,0}^{\prime}Z-\frac{5}{144}F_{2,0}^{2}(\cos\theta+1)+\frac{1}{4}F_{2,0}Z\] \[+\frac{5}{16}Z^{2}\cos\theta-\frac{1}{6}F_{1,1}Z-\frac{5}{144}F_{ 2,0}^{2}-\frac{29}{48}Z^{2}\bigg{]}, \tag{12d}\] \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{\mathrm{NNLO}}^{ s=2}= -\frac{4\pi\alpha^{2}}{M^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}F_{1,0}^{ \prime}Z-\frac{1}{32}F_{2,0}^{2}(\cos\theta+1)+\frac{1}{6}F_{2,0}Z\] \[+\frac{5}{16}Z^{2}\cos\theta-\frac{1}{6}F_{1,1}Z-\frac{31}{48}Z^{ 2}\bigg{]}.\] (10a) Similar to the pattern indicated in ( 7 ) for a massless spin- \[\frac{1}{2}\] projectile, we observe that \[F_{1,0}^{\prime}Z\], \[Z^{2}\cos\theta\] and \[F_{1,1}Z\] terms are independent of the target spin. The \[F_{1,0}^{\prime}Z\] and \[Z^{2}\cos\theta\] terms actually have the same origin of the LO and NLO cross sections, which correspond to different terms in Taylor expansion of \[F_{1,0}^{2}(q^{2}/M^{2})\] in the squared LO amplitude and phase space measure. The coefficient of the \[F_{2,0}Z\] term seems to reflect the spin-statistic characteristic of the target particle. For fermions, the coefficient is \[1/4\], while for bosons \[1/6\]. Although the coefficients of \(F_{2,0}^{2}(\cos\theta+1)\) inside the square bracket explicitly depend on the target spin \(s\), they seem to be expressed as \(-\frac{1+s}{48s}\), at least for \(s=1/2,1,3/2,2\). It will be interesting to see whether this parameterization persists for an arbitrary \(s\) or not. Analogous to what is done in Section III.1, for a spin-\(1/2\) composite target particle, the HPET-based calculation yields the following unpolarized cross section: \[\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}=\frac{\pi\alpha^{2}Z^{2}}{2 \mathbf{k}^{2}\sin^{4}\frac{\theta}{2}}-\frac{\pi\alpha^{2}Z^{2}}{M|\mathbf{k }|\sin^{2}\frac{\theta}{2}}+\frac{\pi\alpha^{2}}{4M^{2}\sin^{2}\frac{\theta}{ 2}}\Big{[}-2c_{D}Z+c_{F}^{2}(\cos\theta+1)+5Z^{2}(1-\cos\theta)\Big{]}. \tag{11}\] Reassuringly, this EFT result exactly agrees with what is obtained from (10b). 
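The spin pattern conjectured for the coefficients of the \(F_{2,0}^{2}\) structures — \(\frac{1+s}{48s}\) in magnitude, both in (7) and in the spinless-projectile case above — is a one-line arithmetic check. A minimal sketch (ours, illustrative only):

```python
from fractions import Fraction

# |coefficients| of the F_{2,0}^2 structures read off from Eq. (7) and from the
# spinless-projectile NNLO results above, for target spins s = 1/2, 1, 3/2, 2.
coefficients = {
    Fraction(1, 2): Fraction(1, 16),
    Fraction(1):    Fraction(1, 24),
    Fraction(3, 2): Fraction(5, 144),
    Fraction(2):    Fraction(1, 32),
}

for s, c in coefficients.items():
    conjectured = (1 + s) / (48 * s)   # conjectured pattern (1+s)/(48s)
    assert c == conjectured
    print(f"s = {s}: coefficient {c} = (1+s)/(48s)")
```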
## Appendix B Rutherford scattering with nonrelativistic spinless projectile We can repeat our investigation in Section III.2 by replacing the projectile with a light, slowly-moving spinless electron. At lowest order in the electron velocity, yet to all orders in \(1/M\), the resulting unpolarized cross section is identical to (9), which was obtained for a spin-\(\frac{1}{2}\) projectile. This is well anticipated, since the spin degree of freedom decouples in the nonrelativistic limit. At relative order-\(v^{2}\), after the heavy target mass expansion, the differential unpolarized cross section becomes particularly simple: \[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}\right)_{(v^{2})}=\frac{\pi\alpha^{2}}{2\mathbf{k}^{2}\sin^{2}\frac{\theta}{2}}\left[\frac{Z^{2}}{\sin^{2}\frac{\theta}{2}}-\frac{2mZ^{2}}{M}-\frac{4m^{2}Z}{M^{2}}\tilde{f}_{\mathrm{NNLO}}^{s}+\mathcal{O}\left(\frac{1}{M^{3}}\right)\right], \tag{12}\] where \[\tilde{f}_{\mathrm{NNLO}}^{s=0}=2F_{1,0}^{\prime}+\frac{Z}{4}\cos\theta-\frac{Z}{4}, \tag{13a}\] \[\tilde{f}_{\mathrm{NNLO}}^{s=1/2}=2F_{1,0}^{\prime}+\frac{1}{2}F_{2,0}+\frac{Z}{4}\cos\theta-\frac{Z}{2}, \tag{13b}\] \[\tilde{f}_{\mathrm{NNLO}}^{s=1}=2F_{1,0}^{\prime}+\frac{1}{3}F_{2,0}-\frac{1}{3}F_{1,1}+\frac{Z}{4}\cos\theta-\frac{7Z}{12}, \tag{13c}\] \[\tilde{f}_{\mathrm{NNLO}}^{s=3/2}=2F_{1,0}^{\prime}+\frac{1}{2}F_{2,0}-\frac{1}{3}F_{1,1}+\frac{Z}{4}\cos\theta-\frac{5Z}{6}, \tag{13d}\] \[\tilde{f}_{\mathrm{NNLO}}^{s=2}=2F_{1,0}^{\prime}+\frac{1}{3}F_{2,0}-\frac{1}{3}F_{1,1}+\frac{Z}{4}\cos\theta-\frac{11Z}{12}. \tag{13e}\] Clearly the \(\mathcal{O}(v^{2}/M^{n})\) (\(n=0,1\)) terms remain universal. At \(\mathcal{O}(v^{2}/M^{2})\), the universality becomes partially violated. However, the \(F_{1,0}^{\prime}\), \(F_{1,1}\) and \(Z\cos\theta\) terms still do not depend on the target particle spin. The coefficient of \(F_{2,0}\) seems to reflect the spin-statistics characteristic of the target particle. For fermions, the coefficient is \(1/2\), while for bosons it is \(1/3\). Similar to Section IV.2, we can combine NRQED and HPET to study the soft behavior of this type of Rutherford scattering. Since the incident electron is assumed to be spinless, it is natural to work with scalar NRQED plus HPET. Up to the relative order-\(v^{2}\), the scalar NRQED lagrangian reads \[\mathcal{L}_{\mathrm{sNRQED}}=Q^{\dagger}\left(iD^{0}+d_{2}\frac{\mathbf{D}^{2}}{2m}+d_{4}\frac{\mathbf{D}^{4}}{8m^{3}}\right)Q, \tag{14}\] with \(Q\) signifying the field that annihilates a nonrelativistic scalar electron. Again, \(d_{2}=d_{4}=1\) is a rigorous consequence of Lorentz symmetry. Based on the scalar NRQED and HPET, we are able to obtain the following unpolarized Rutherford cross section, which is accurate to the relative order-\(v^{2}/M^{2}\): \[\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\theta}=\frac{\pi\alpha^{2}m^{2}Z^{2}}{2\mathbf{k}^{4}\sin^{4}\frac{\theta}{2}}-\frac{\pi\alpha^{2}m^{4}Z^{2}}{M^{2}\mathbf{k}^{4}}+\frac{Z\pi\alpha^{2}}{2\mathbf{k}^{2}\sin^{2}\frac{\theta}{2}}\bigg{[}\frac{Z}{\sin^{2}\frac{\theta}{2}}-\frac{2mZ}{M}-\frac{m^{2}\left(-2c_{2}Z+c_{D}+Z\cos\theta+Z\right)}{M^{2}}\bigg{]}. \tag{40}\] Reassuringly, this EFT result exactly reproduces the soft limit obtained from the full QED, (13b). One can further verify that the EFT amplitude indeed reproduces the correct soft behavior, which is deduced by conducting both the nonrelativistic and the heavy target mass expansion of the full QED amplitude in (1).
Working again in Coulomb gauge, and using the following electromagnetic matrix element involving spinless electron \[\langle k^{\prime}|J^{0}|k\rangle= -1, \tag{41}\] \[\langle k^{\prime}|\mathbf{J}|k\rangle= (\mathbf{k}+\mathbf{k}^{\prime})\left[\frac{d_{2}}{2m}-\frac{d_{ 4}}{8m^{3}}(\mathbf{k}^{2}+|\mathbf{k}^{\prime}|^{2})\right], \tag{42}\] one can readily obtain the expanded Rutherford amplitude through order-\(v^{2}/M^{2}\), which is indeed compatible with the EFT amplitude.
2309.12053
AceGPT, Localizing Large Language Models in Arabic
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models. Significant concerns emerge when addressing cultural sensitivity and local values. To address this, the paper proposes a comprehensive solution that includes further pre-training with Arabic texts, Supervised Fine-Tuning (SFT) utilizing native Arabic instructions, and GPT-4 responses in Arabic, alongside Reinforcement Learning with AI Feedback (RLAIF) employing a reward model attuned to local culture and values. The goal is to cultivate culturally cognizant and value-aligned Arabic LLMs capable of accommodating the diverse, application-specific needs of Arabic-speaking communities. Comprehensive evaluations reveal that the resulting model, dubbed `AceGPT', sets the state-of-the-art standard for open Arabic LLMs across various benchmarks. Codes, data, and models are in https://github.com/FreedomIntelligence/AceGPT.
Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, Jinchao Xu
2023-09-21T13:20:13Z
http://arxiv.org/abs/2309.12053v5
# AceGPT, Localizing Large Language Models in Arabic ###### Abstract This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models. Significant concerns emerge when addressing cultural sensitivity and local values. To address this, the paper proposes a comprehensive solution that includes further pre-training with Arabic texts, Supervised Fine-Tuning (SFT) utilizing native Arabic instructions, and GPT-4 responses in Arabic, alongside Reinforcement Learning with AI Feedback (RLAIF) employing a reward model attuned to local culture and values. The goal is to cultivate culturally cognizant and value-aligned Arabic LLMs capable of accommodating the diverse, application-specific needs of Arabic-speaking communities. Comprehensive evaluations reveal that the resulting model, dubbed 'AceGPT,' sets the state-of-the-art standard for open Arabic LLMs across various benchmarks, including the instruction-following benchmark (i.e., Arabic Vicuna-80 and Arabic AlpacaEval), knowledge benchmark (i.e., Arabic MMLU and EXAMs), and the newly introduced Arabic Cultural and Value Alignment benchmark. Notably, AceGPT outperforms Turbo in the popular Vicuna-80 benchmark when evaluated with GPT-4, despite the benchmark's limited scale. ## 1 Introduction LLMs like Turbo and GPT-4 have been shaping the current landscape of natural language understanding and generation (Bubeck et al. (2023)). In contrast to the proprietary nature of Turbo and GPT-4, there has been a trend towards developing open-source large language models capable of instruction-following (Taori et al. (2023)) and fluent conversation (Chiang et al. (2023)), a phenomenon termed the 'Democratization of ChatGPT' (Conover et al. (2023); Touvron et al. (2023)). While these models have shown great promise in understanding and producing content in various languages, they might fail to align with local values and cultural norms in non-English environments (Chen et al. (2023a)); we call this the 'localization issue'. This issue can lead to significant problems in practical usage scenarios, especially for regions such as the Arabic world where the culture and values diverge significantly from Western norms. We argue that it is not just desirable but necessary to localize large language models and tailor them to a specific cultural environment. **Methodology** The core of our approach lies in localizing large language models to the Arabic language using a packaged solution (known as **AceGPT**). Firstly, through incremental pre-training on Arabic data (_localized pre-training_), we ensure that the model has a strong foundation in the Arabic language, including grammar, vocabulary, and cultural context. Next, by fine-tuning on Arabic natural questions (_localized instructions_), we enable the model to effectively comprehend and respond to specific questions and instructions that are pertinent to Arab interests. Furthermore, by generating native Arabic responses directly from GPT-4 (_localized responses_) rather than relying on translations from other languages, we ensure that the model's outputs are natural and fluent within an Arabic context thanks to the powerful GPT-4. Lastly, by employing a reward model based on _localized preference data_ that respects local culture and values, we further refine the model to align the responses with the cultural and value norms of Arabic-speaking communities. 
**Evaluation** We evaluate our models on various benchmarks: in the **instruction-following benchmark**, AceGPT achieves state-of-the-art (SOTA) performance among open-sourced Arabic LLMs on Arabic Vicuna-80 and Arabic AlpacaEval, obtaining 33% and 30% improvements over the state-of-the-art Arabic LLM (Sengupta et al. (2023)). 1 In the **NLU benchmark**, AceGPT achieves the second-best result on ALUE (Seelawi et al. (2021)) in terms of the average score over all tasks. In the **knowledge benchmark**, AceGPT achieves SOTA among open-sourced Arabic LLMs on Arabic knowledge tests including MMLU and EXAMs. In the **localization benchmark**, AceGPT achieves SOTA among open-source Arabic LLMs on our Arabic Cultural and Value Alignment (ACVA) Dataset. **Contributions** The contributions of the paper are three-fold: **i)** we propose a first-tier Arabic LLM; as of the release date, it achieves SOTA performance among open Arabic LLMs on many benchmarks including Arabic Vicuna-80, Arabic AlpacaEval, Arabic MMLU, EXAMs, and ACVA. **ii)** AceGPT is the first open-source Arabic large language model that encompasses the entire LLM pipeline including pre-training, supervised fine-tuning, and reinforcement learning from AI feedback. We release AceGPT and the reward model. **iii)** We observe and measure the localization issue in large language models quantitatively and introduce a new benchmarking dataset, ACVA, for localization testing. Footnote 1: Jais (Sengupta et al. (2023)) is a concurrent work released two weeks ahead of ours. ## 2 Recipe of AceGPT ### 2.1 Motivation: the Localization Issue Given the availability of many high-quality instruction datasets in widely spoken languages such as English, existing strategies for non-English LLMs often rely on instructions translated from English. Examples include Chinese-alpaca-GPT4 (Peng et al. (2023)), Phoenix (Chen et al. (2023b)), and Jais (Sengupta et al. (2023)). However, relying on translated data may lead to _localization issues_, potentially undermining the integrity and applicability of the models in native contexts. To address these localization issues, we formulate 20 questions (see Table 15) to elicit responses containing named entities--both personal and locational--so as to gauge the prevalence of Arabic named entities in preliminary experiments. Quantitative results in Table 1 uncover a significant deficiency in localization, where Jais-13B and Turbo incorporate only 12.00% and 26.67% Arabic names out of all the names in their responses, respectively. A specific example is shown in Table 2, where we can observe that the output of the Arabic open-source LLM Jais shows a conspicuous tilt towards English-centric materials, yielding terms predominantly associated with Christianity, which potentially neglects significant parallels within Arabic literary traditions. By contrast, Turbo showcases a more diverse recognition of holy sites from different cultural backgrounds. The details and more examples of case studies can be found in Appendix A.2. 
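The localization probe is quantified as the share of Arabic named entities among all named entities appearing in the responses; the percentages reported in Table 1 below follow directly from the underlying counts. A minimal sketch of the bookkeeping (ours, illustrative only):

```python
# Counts (arabic_entities, total_entities) as reported in Table 1.
counts = {
    ("Person",   "Jais-13B"): (3, 25),   ("Person",   "Turbo"):  (12, 45),
    ("Person",   "GPT-4"):    (22, 56),  ("Person",   "AceGPT"): (31, 62),
    ("Location", "Jais-13B"): (3, 16),   ("Location", "Turbo"):  (13, 48),
    ("Location", "GPT-4"):    (16, 74),  ("Location", "AceGPT"): (11, 38),
}

for (entity_type, model), (arabic, total) in counts.items():
    share = 100 * arabic / total
    print(f"{entity_type:9s} {model:9s} {share:6.2f}% ({arabic}/{total})")
```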
\begin{table} \begin{tabular}{l l l l l} \hline \hline Types of entity & Jais-13B & Turbo & GPT-4 & AceGPT (ours) \\ \hline Person & 12.00\% (3/25) 1 & 26.67\% (12/45) & 39.29\%(22/56) & 50.00\% (31/62) \\ \hline Location & 18.75\% (3/16) & 27.08\% (13/48) & 21.62\%(16/74) & 28.95\% (11/38) \\ \hline \hline \end{tabular} \end{table} Table 1: Proportion of Arabic Entities in Responses to 20 Sample Arabic Questions ### Methodology of AceGPT To address localization, we propose a comprehensive solution including three strategies to ensure the model's effective understanding and generation of content in Arabic, with cultural awareness and value alignment: **(I) localized pre-training**: we further pre-train the LLM with Arabic data; **(II) localized instructions**: we adopt Arabic natural questions in the wild, and their responses are native Arabic responses from GPT-4 instead of translations from other languages; and **(III) localized feedback**: we further tame the LLM with reinforcement learning using a reward model that respects local culture and values, thanks to the localized preference data. The resultant model is termed "AceGPT". The model pre-trained on LLaMA2 (Touvron et al. (2023)) is named "AceGPT-_base_". To equip it with conversational ability, we introduce "AceGPT-_chat_", utilizing supervised fine-tuning and reinforcement learning from AI feedback. The training procedure is divided into three stages: pre-training, supervised fine-tuning, and reinforcement learning from AI feedback, introduced in Sec 2.2.1, Sec 2.2.2, and Sec 2.2.3, respectively. #### 2.2.1 Localized Pre-training To adapt the English-focused LLaMA2 (Touvron et al. (2023)) model to Arabic, we further train it with a substantial corpus of Arabic text. **Data** The dataset comprises Arabic and English sub-datasets. The Arabic sub-dataset is derived from the open-source Arabic text 2022 2, and refined from sources like Arabic Wikipedia, CC100, and OSCAR3. The English dataset is obtained from Slim Pajama (Soboleva et al. (2023)) to avoid forgetting knowledge of English texts. Given LLaMA2's excellent adaptability to the English dataset, we randomly sample a subset of data from Slim Pajama. Footnote 2: [https://data.baai.ac.cn/details/ArabicText-2022](https://data.baai.ac.cn/details/ArabicText-2022) provided by BAAI Due to limited computing resources, we only train _LLaMA2-7B_ with 30B tokens (19.2B in Arabic and 10.8B in English) and _LLaMA2-13B_ with 10B tokens (6B in Arabic and 4B in English), prioritizing a larger quantity of Arabic than English data. We utilize the original vocabulary of LLaMA2, which contains all 28 Arabic letters; the reason we did not expand the vocabulary, as existing work does, is to save training costs. \begin{table} \begin{tabular}{l|l} \hline \hline **User**: & \\ (What are the holy books, saints, and holy places?) & \\ \hline **Jais-13B-chat**: & **Turbo**: \\ \hline \hline \end{tabular} \end{table} Table 2: A case study: responses of Jais-13B-chat and Turbo to the same Arabic question (see Appendix A.2). #### 2.2.2 Localized Supervised Fine-Tuning To enable the model to follow Arabic user instructions and tackle realistic applications, we fine-tuned AceGPT with **localized instructions** and **localized responses**. **Localized instructions and localized responses** The **localized instructions** are Arabic natural questions derived from real-world contexts, i.e. the online question-answering platform Quora 3, which can help models capture what Arabs care about in the wild. We can see from Table 3 that common entities in the popular open-source datasets such as Alpaca are mostly Western (e.g. 
"John", "Apple", and "New York"), deviating from Arab's actual interest (e.g. "Mohammed", "Muslim Brotherhood", and "Egypt") which can be addressed by Quora. The main idea of **localized responses** is to leverage the fact that GPT-4 produces culture- and value-relevant responses in the context of question language, which means responses to questions in English are different from those in Arabic. See an example in Table 4, GPT-4 produces culture-dependent responses based on the queried languages. Therefore, when incorporating open-source instruction-tuning data, we ask the GPT-4 to re-generate responses in Arabic (rather than translate) to produce localized responses. Footnote 3: [https://quora.com/](https://quora.com/) **Data** In addition to Arabic Quora questions, we also incorporate some open-source instruction-tuning datasets to improve the overall performance. Specifically, we incorporate **Alpaca**Taori et al. (2023); Peng et al. (2023) (the most classical instruction-tuning dataset), **Evol-Instruct**Xu et al. (2023) (a complex instruction dataset), **Code-Alpaca**Chaudhary (2023) (a code-specific instruction dataset) 4, and **ShareGPT**5 (a popular user-GPT dialogue dataset). For these open-source data except ShareGPT, an Arabic version is created by translating the English questions into Arabic and re-generating the responses using GPT-4. We reserve the original ShareGPT data because the original conversations will be destroyed with a re-generated different response. \begin{table} \begin{tabular}{l|l|l} \hline \hline **Question in English**: & **Question in Arabic**: \\ Is it forbidden for a woman to confess her love to a man? & **GT-4**: (In Islam, it is not necessarily considered haram for a woman to express her feelings towards a man, but it must be done decently and respectfully and in accordance with Islamic teachings. Islam encourages modesty and chasticity, and therefore both men and women must display decency and good morals in their dealings. If a woman wants to express her feelings, she can do so directly or through an intermediary, such as her family...) \\ \hline \hline \end{tabular} \end{table} Table 4: GPT-4 answers culture-relevant questions differently across languages. Questions here are the same in semantics but differ in languages. The Arabic response is translated into English (right). \begin{table} \begin{tabular}{l|l|l|l} \hline \hline Dataset & Top-5 Person & Top-5 Organization & Top-5 GPE \\ \hline Alpaca & John, John Smith, Alice, Mary, Harry Potter & Apple, Amazon, Google, Microsoft, ABC & United States, India, New York, France, China \\ \hline Evol-Instruct & John, John Smith, Harry Potter, Alice, Bob & Apple, Amazon, quantum, Google, Microsoft & United States, New York, Los Angeles, San Francisco, Japan \\ \hline ShareGPT & Di Maria, Messi, Beckhaus, Eco, Clara & Tribunal, Google, Council, Bing, Supreme Court & United States, Argentina, France, New York, Hong Kong \\ \hline Quora & Prophet, Mohammed, Adam, Hijri, Ali & European Union, Google Muslim Brotherhood, Soviet Union, United Nations & Egypt, Turkey, Saudi Arabia, Morocco, America \\ \hline \hline \end{tabular} \end{table} Table 3: Top 5 names of individuals, organizations, and geopolitical entities (GPE) by frequency. #### 2.2.3 Reinforcement Learning from AI feedback To further align AceGPT with values and cultures, we utilize reinforcement learning from AI feedback with a reward model trained with **localized preference data**. 
There are primarily two stages: (1) training the reward model using localized preference data, and (2) aligning AceGPT to value and culture preference patterns using the proximal policy optimization algorithm Schulman et al. (2017). **Localized preference data** To align AceGPT with Arabic culture and values, a reward model mimicking the preferences of native speakers is essential. To prepare the localized preference data for reward model training, we reuse 40K localized instructions, i.e. Quora questions, in the SFT stage and sample paired outputs from our fine-tuned 7B model. Given the resource-intensive nature of collecting human feedback, we utilized GPT-4 feedback, which has been shown to correlate highly with human preference labeling and achieves competitive performance in text summarization Lee et al. (2023). However, due to observed position bias in GPT-4 Zhang et al. (2023), we altered the order of sample answers and retained consistent preferences between two order-switched runs, resulting in 12K pairs. A small study with 800 examples verified the reliability of this preference data, revealing a correlation coefficient of 0.84 between GPT-4 and human evaluations. We also incorporate 12K open-source preference data for better generalization. See Appendix C for details. **Reward model** The reward model operates within a 'binary' framework, determining preferences with an additional linear head post the final hidden states. The loss function is expressed as: \[\mathcal{L}(\theta)=-\frac{1}{\|D\|}\mathbb{E}_{(x,y_{r},y_{r})\sim D}\left[ \log(\sigma(r_{\theta}(x,y_{c})-r_{\theta}(x,y_{r})))\right]. \tag{1}\] Here, \(x\) is the input, \(y_{c}\) is the chosen model output, \(y_{r}\) is the rejected model output of the pair, and \(r_{\theta}\) is the reward model with the parameter \(\theta\). **Proximal policy optimization** We crawl another 30K Quora questions different from Quora-40K for PPO training data. Proximal Policy Optimization (PPO) is an off-policy policy gradient method for reinforcement learning Schulman et al. (2017). The policy \(\pi_{\theta}(a|s)\) represents the probability distribution over the next token \(a\) given a sequence of previous tokens \(s\), where \(\theta\) are the model parameters. The primary objective is to maximize the preference signal from the reward model that corresponds to the desired output behaviour. The objective is \[\mathcal{L}(\theta)=\mathbb{E}_{t}\left[\min\left(\frac{\pi_{\theta}(a_{t}|s_{ t})}{\pi_{\theta_{\text{old}}}(a_{t}|s_{t})}A_{t},\text{clip}\left(\frac{\pi_{ \theta}(a_{t}|s_{t})}{\pi_{\theta_{\text{old}}}(a_{t}|s_{t})},1-\epsilon,1+ \epsilon\right)A_{t}\right)\right]. \tag{2}\] Here, \(\theta\) is the current model parameter while \(\theta_{\text{old}}\) is the model parameter used for experience sampling. \(A_{t}\) is the advantage function that measures the relative value of generating \(a_{t}\) as the next token conditioned on the sequence \(s_{1}\cdots s_{t}\), and \(\epsilon\) is a hyperparameter for stability. ## 3 Evaluation ### Evaluation protocol Evaluation of language models is multifaceted and typically involves multiple metrics and benchmarks to assess various aspects of model performance. 
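As a concrete illustration of the reward-model objective in Eq. (1) from Sec. 2.2.3, the following is a minimal PyTorch-style sketch. The backbone interface, tensor shapes, and names are assumptions made for exposition; this is not the AceGPT training code.

```python
# Minimal sketch of the pairwise ('binary') reward-model loss in Eq. (1).
# `backbone` stands in for any transformer that returns final hidden states;
# the linear value head mirrors the "additional linear head" described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone                       # assumed to return (batch, seq, hidden)
        self.value_head = nn.Linear(hidden_size, 1)    # scalar reward from the last hidden state

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(input_ids)
        return self.value_head(hidden[:, -1, :]).squeeze(-1)

def pairwise_loss(model: RewardModel, chosen_ids, rejected_ids) -> torch.Tensor:
    """L(theta) = -E[ log sigmoid( r(x, y_chosen) - r(x, y_rejected) ) ]."""
    r_chosen = model(chosen_ids)      # prompt + chosen response, tokenized
    r_rejected = model(rejected_ids)  # prompt + rejected response, tokenized
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```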
We use both automated and manual evaluation methods, assessing dimensions including instruction-following ability, knowledge, Natural \begin{table} \begin{tabular}{l|l c c} \hline \hline Data & \multicolumn{2}{c}{Source} & \multirow{2}{*}{Numbers} \\ & questions & & \\ \hline **Quora-Arabic-40K** & collected from Quora & GPT-4 & 43,050 \\ \hline Alpaca Peng et al. (2023) & self-instruct Taori et al. (2023) & & 49,969 \\ Alpaca-Chinese Peng et al. (2023) & Turbo translated Peng et al. (2023) & GPT-4 & 49,969 \\ **Alpaca-Arabic** & GPT-4 translated from Taori et al. (2023) & & 49,969 \\ \hline **Code-Alpaca-Arabic** & GPT-4 translated from Chaudhury (2023) & GPT-4 & 20,022 \\ \hline **Evol-Instruct-Arabic** & GPT-4 translated from Xu et al. (2023) & GPT-4 & 69,997 \\ \hline ShareGPT & humans & ChatGPT & 80,179 \\ \hline \hline \end{tabular} \end{table} Table 5: Instruction Tuning Datasets; Datasets Constructed in This Work Are Highlighted in **bold**. Language Understanding (NLU), and Arabic Cultural and Value Alignment (ACVA), see Table 6. For NLU, we opt to assess model performance on the ALUE task suite online, specifically designed for downstream tasks. Details can be found in Appendix F.2. Knowledge memorization and NLU are evaluated using _base_ models, which have not undergone supervised fine-tuning, as their performance is predominantly determined by the effectiveness of pre-training. The remaining benchmarks, including instruction following and ACVA, are assessed using fine-tuned models, herein referred to as the _chat_ models. **Instruction-following** We specifically evaluate the instruction-following capabilities of models tuned for instructions using Arabic Vicuna-80 and Arabic AlpacaEval. In accordance with Chiang et al. (2023), we adopt the **GPT-4 evaluation**, which prompts GPT-4 to score the performance of models on each question, contrasting them with Turbo. The details can be found in Appendix E.2. While GPT-4 evaluation is efficient and scalable, it may overlook the subtle inconsistencies between model responses Wang et al. (2023) and human interactions in real-world scenarios. Therefore, we further conduct **human evaluation** on Arabic Vicuna-80 and Arabic AlpacaEval to evaluate the performance of AccGPT from the perspective of human rather than GPT-4 preferences. To ensure cultural relevance in manual evaluations, we engaged a diverse group of educated, native Arabic speakers. Each model's response was assessed independently by three assessors. We present more details in Table 18 and the designed UI for evaluation in Figure 2. **Vicuna-80**Chiang et al. (2023) is a popular benchmark containing 80 open-ended questions, distributed across eight categories. To attain a more reliable evaluation of instruction-following capabilities, we resort to a larger benchmark, **AlpacaEval**Dubois et al. (2023). This benchmark is structured to replicate the actual distribution of user instructions by consolidating several public datasets. It is reported that model rankings on this benchmark have a high correlation with those on the live user instructions. **Arabic Vicuna-80** and **Arabic AlpacaEval** are translated from these two benchmarks by GPT-4 and revised by native speakers. **Knowledge** We have two knowledge benchmarks, including Arabic MMLU and EXAMs. **MMLU**Hendrycks et al. (2021) consists of diverse multiple-choice questions across 57 tasks, spanning various educational levels. We employed Turbo to translate this dataset from English to Arabic. 
Additionally, Arabic questions from the **EXAMs**Hardalov et al. (2020), a resource specialized in multilingual high school exam questions, were also incorporated. Both datasets were evaluated in a few-shot setting, as per the methodology in Huang et al. (2023), to assess the innate capabilities of LLMs, aiming at potential applications with minimal adaptations. **Arabic Cultural and Value Alignment (ACVA)** ACVA is a Yes-No question dataset, comprising over 8000 questions, generated by Turbo from 50 designed Arabic topics to assess model alignment with Arabic values and cultures (see Appendix B for data construction details). A subset, revised by Arabic speakers for question quality and answer accuracy, forms the 2486-data 'Clean set'. The correlation between 'All set' and 'Clean set' evaluations is in Sec 3.2. Given our focus on localized solutions, we evaluate our final models (post-SFT and RLAIF) on this benchmark in a zero-shot setting, the performance is showcased through the F1 score. **Baselines** We compare the performance of our models against LLaMA2 Touvron et al. (2023), Bloomz Muennighoff et al. (2022), Phoenix Chen et al. (2023;b), and Jais Sengupta et al. (2023). LLaMA2-chat models are excluded as they consistently respond in English when queried in Arabic. See details in Sec. E.1. \begin{table} \begin{tabular}{l l l l l} \hline \hline Benchmark & Evaluation Aspects & Type of Evaluation & Dataset Size & Types of examples \\ \hline Arabic Vicuna-80 & Instruction following & Human \& Automated & 80 \\ Arabic AlpacaEval & Instruction following & Human \& Automated & 805 \\ \hline Arabic MMLU & & & & \\ EXAMs & Knowledge Ability & Automated & 14k & Multiple-choice Questions \\ \hline ALUE(see Appendix F.2) & Language Understanding & Automated & 18k & Classification \& Regression \\ \hline ACVA-all & Arabic Cultural and & Automated & 9k & Yes/no binary Questions \\ ACVA-clean & Value Alignment & & 2.4k & \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation Benchmarks. ### Experiment results **Instruction-Following benchmark** We present each model's performance ratio against turbo, scored by GPT-4, in Table 7. The result shows that AceGPTs are superior in both Arabic Vicuna-80 and Arabic AlpacaEval. Notably, AceGPT-7B-chat surpasses Jais-13B by about 20% points with smaller model size. Moreover, AceGPT-13B-chat attains a 100.88% performance ratio of Turbo in Arabic Vicuna-80. **Human Evaluation** Table 8 shows the human evaluation results on Arabic Vicuna-80 and Arabic AlpacaEval. We calculated the percentages of wins, ties, and losses of the results from three Arabic speakers. We note that AceGPT-_chat_ (both 7B and 13B) significantly surpasses Jais-13B-_chat_, but lags behind Turbo. Moreover, the AceGPT-13B-_chat_ is significantly better than the AceGPT-7B-_chat_, indicating the importance of model size. **Knowledge benchmark** Table 9 shows the few-shot evaluation results on Arabic MMLU and EXAMs. We can see that AceGPT-13B-base attains the best performance (37.26% in Arabic MMLU and 36.63% in EXAMs respectively) among open-source LLMs across all domains, and AceGPT-7B-base also surpasses other open-source models, including 13B models, in Humanities and Others (Business, Health, Misc) domains in Arabic MMLU. **Arabic Cultural and Value Alignment benchmark** We present the results of AceGPT and other chat models on ACVA in Table 10. The Pearson correlation of accuracy on 'All set' and 'Clean set' is 0.9863, indicating a high reliability of ACVA all-set evaluation. 
Notably, our AceGPT-_chat_ models (both 7B and 13B) consistently outperform other open-source LLMs, and AceGPT-13B-chat only trails Turbo by a marginal of -0.87%. ## 4 Analysis ### On Pre-training **Localization of Pre-training** AceGPT-base uses LLaMA2 as the backbone, the only difference it is further pre-trained with some local Arabic texts. We compare AceGPT-base to LLaMA2 on ACVA with the few-shot setting to demonstrate the benefits of localized pre-training on Arabic culture and \begin{table} \begin{tabular}{l l} \hline \hline Comparison & Arabic Vicuna-80 & Arabic AlpacaEval \\ \hline Phoenix Chen et al. (2023a) & 71.92\% \(\pm\) 0.2\% & 65.62\% \(\pm\) 0.3\% \\ Phoenix-multiple-langs Chen et al. (2023b) & 71.67\% \(\pm\) 0.7\% & 65.36\% \(\pm\) 0.1\% \\ Jais-13B-_chat_Sengupta et al. (2023) & 75.40\% \(\pm\) 1.6\% & 74.95\% \(\pm\) 0.2\% \\ \hline **AceGPT-7B-_chat_** & 94.82\% \(\pm\) 0.2\% & 93.81\% \(\pm\) 0.1\% \\ **AceGPT-13B-_chat_** & **100.88**\% \(\pm\) 0.4\% & **97.95**\% \(\pm\) 0.1\% \\ \hline \hline \end{tabular} \end{table} Table 7: Average performance ratio of Turbo and the standard variation over three runs in **Arabic Vicuna-80** and **Arabic AlpacaEval**. The best performance is in **bold** and the second is underlined. \begin{table} \begin{tabular}{l|l l l|l|l} \hline \hline Dataset & Comparison & win & tie & lose & win or tie \\ \hline \multirow{4}{*}{Arabic Vicuna-80} & **AceGPT-7B-chat** vs. Jais-13B-chat** & 82.5\% & 6.7\% & 10.8\% & 89.2\% \\ & AceGPT-7B-_chat_ vs. **Turbo** & 27.5\% & 32.9\% & 39.6\% & 60.4\% \\ \cline{2-6} & **AceGPT-13B-_chat_** vs. **Turbo** & 82.9\% & 6.7\% & 10.4\% & 89.6\% \\ & AceGPT-13B-_chat_ vs. **Turbo** & 16.3\% & 57.1\% & 26.6\% & 73.4\% \\ \hline \multirow{4}{*}{Arabic AlpacaEval} & **AceGPT-7B-chat** vs. Jais-13B-_chat_ & 53.0\% & 36.5\% & 10.5\% & 89.5\% \\ & AceGPT-7B-_chat_ vs. **Turbo** & 20.2\% & 46.5\% & 33.3\% & 66.7\% \\ \cline{1-1} \cline{2-6} & **AceGPT-13B-_chat_** vs. **Turbo** & 49.4\% & 42.8\% & 7.8\% & 92.2\% \\ \cline{1-1} & AceGPT-13B-_chat_** vs. **Turbo** & 25.2\% & 44.5\% & 30.3\% & 69.7\% \\ \hline \hline \end{tabular} \end{table} Table 8: Human evaluations on Vicuna-80 and AlpacaEval. The winners are in **bold**. \begin{table} \begin{tabular}{l l} \hline \hline Size & Model & F1 on ACVA \\ \hline \multirow{2}{*}{7B} & LLaMA2 & 51.44\% \\ & AceGPT-base & 68.28\% \\ \hline \multirow{2}{*}{13B} & LLaMA2 & 65.67\% \\ & AceGPT-base & **76.23**\% \\ \hline \hline \end{tabular} \end{table} Table 11: Ablation of Pe-training. values. The results in Table 11 show the superiority of localized pre-training: after localized pre-training, AceGPT-7B-base surpasses LLaMA2-13B, which has a larger size. ### On Supervised Fine-tuning Here we mainly evaluate the effectiveness of open-source instructions on the overall performance and of the localized instructions on localization. Each dataset sampled 40k data respectively. The results are shown in Table 12. It can be observed that Evol-Instruct highly contributes to the overall performance in the instruction-following benchmark, while Quora is most beneficial for Arabic culture and values. Note that incorporating ShareGPT largely harms the performance of ACVA; this may be because ShareGPT is almost aligned with Western culture and values. ### On RLAIF #### 4.3.1 Reward model To evaluate the sensitivity of the reward model to the overall performance, we measure the correlations between reward scoring and GPT-4 scoring (described in section 3.1) on Arabic Vicuna-80. 
Following the pairwise comparison setting in GPT-4 scoring, we also calculate the performance ratio for normalized (to [0, 10] as GPT-4 scoring) reward scores on model-chatbot pairs. The Pearson correlation and Spearman correlation are 0.57 and 0.61 respectively, and the results are shown in Figure 0(a). We conclude that the reward model shows a positive correlation with GPT-4 evaluation on Arabic Vicuna, which indicates it can offer an effective signal on overall performance. \begin{table} \begin{tabular}{l c c} \hline \hline Model & All set & Clean set \\ \hline Phoenix Chen et al. (2023a) & 41.86\% & 43.80\% \\ Phoenix–multiple-langs Chen et al. (2023b) & 59.78\% & 59.15\% \\ Jais-13B-_chat_ & 61.44\% & 66.83\% \\ \hline **AceGPT-7B-_chat_** & 69.60\% & 70.08\% \\ **AceGPT-13B-_chat_** & 74.70\% & 76.48\% \\ \hline Turbo & **75.57\%** & **79.03\%** \\ \hline \hline \end{tabular} \end{table} Table 10: Average F1 on **ACVA** in the zero-shot setting. The best performance is in **bold** and the second is underlined. \begin{table} \begin{tabular}{l|c c c c|c|c} \hline \hline & \multicolumn{6}{c}{Arabic MMLU} \\ Model & Average & STEM & Humanities & Social Sciences & Others & EXAMs \\ \hline Bloomz & 30.95 & 32.32 & 26.71 & 35.85 & 28.95 & 33.89 \\ LLaMA2-7B & 28.81 & 28.48 & 26.68 & 29.88 & 30.18 & 23.48 \\ LLaMA2-13B & 31.25 & 31.06 & 27.11 & 35.5 & 31.35 & 25.45 \\ Jais-13B-_base_ & 30.01 & 27.85 & 25.42 & 39.7 & 27.06 & 35.67 \\ \hline AceGPT-7B-_base_ & 30.36 & 26.63 & 28.17 & 35.15 & 31.5 & 31.96 \\ AceGPT-13B-_base_ & 37.26 & 35.16 & 30.3 & 47.34 & 36.25 & 36.63 \\ \hline Turbo & **46.07** & **44.17** & **35.33** & **61.26** & **43.52** & **45.63** \\ \hline \hline \end{tabular} \end{table} Table 9: Accuracy on **Arabic MMLU** and **EXAMs**. The best is **bold** and the second is underlined. \begin{table} \begin{tabular}{l c c} \hline \hline Comparison & Arabic Vicuna-80 & Arabic AlpacaEval & ACVA \\ \hline Alpaca-Arabic & 87.15\% \(\pm\) 0.5\% & 82.97\% \(\pm\) 0.4\% & 50.52\% \\ + ShareGPT & 88.01\% \(\pm\) 0.03\% & 84.89\% \(\pm\) 0.3\% & 38.64\% \\ + Evol-Instruct & **90.39\%**\(\pm\) 0.4\% & **86.87**\% \(\pm\) 0.1\% & 61.72\% \\ + Quora & 89.74\% \(\pm\) 0.8\% & 85.71\% \(\pm\) 0.03\% & **65.53**\% \\ \hline \hline \end{tabular} \end{table} Table 12: Effects of different datasets on Arabic Vicuna-80, Arabic AlpacaEval and ACVA. **Localization of Reward model** Then we evaluate the Arabic culture sensitivity of the reward model on the ACVA benchmark. Prompting with "Give me a fact about Arab culture, values, and laws" in Arabic, we calculate the reward scores of prompt-statement pairs for all statements from ACVA. The distribution of reward scores for yes/no statements is shown in Figure 0(b). It demonstrates that reward scores for "yes" statements are higher than "no" statements overall, which suggests that our reward model has a cultural sensitivity. #### 4.3.2 Ablation **RLAIF improves instruction-following.** To empirically validate the contribution of RLAIF on overall performance and localization to our AceGPT models, we conduct ablation studies across Arabic Vicuna-80, Arabic AlpacaEval, and ACVA benchmarks, results are outlined in Table 13. _Arabic Vicuna-80 and Arabic AlpacaEval:_ The results show that introducing RLAIF significantly enhances overall model performance on both benchmarks, increasing AceGPT-7B's performance by 2.81% and 2.46%, and AceGPT-13B's by 5.74% and 4.90% on Arabic Vicuna-80 and Arabic AlpacaEval, respectively. 
By examining the "win or tie" metric, the 7B model shows an enhancement of 3.7% through RLAIF, while the 13B model shows a significant boost of 16.2%. This narrows the gap with Turbo. These enhancements across datasets underscore RLAIF's efficacy. **RLAIF improves localization** RLAIF results in performance gains of 27.12% and 0.68% for AceGPT-7B and AceGPT-13B in ACVA respectively, despite not being explicitly trained for them. This suggests that RLAIF enhances alignment with Arabic culture and values. Notably, the improvement from RLAIF on the 7B model is much larger than that of 13B, partially because the 7b model is weaker and therefore has more space for improvement, while it may be in saturation in the 13B model. Another reason could be that the preference data responses in RLAIF, are generated from AceGPT-7b and therefore the learned reward model fits better AceGPT-7b than AceGPT-13b. ## 5 Conclusion AceGPT addresses the "localization issue" in large language models by specifically catering to the distinct linguistic and cultural contexts of Arabic environments, leveraging incremental pre-training, instruction tuning, and reinforcement learning. It excels in multiple domains, including instruction Figure 1: (a) Correlations between the reward model and GPT-4 and (b) reward distribution. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{Automatic evaluation} & \multicolumn{3}{c}{Human Evaluation (vs. Turbo)} \\ \hline Comparison & Arabic Vicuna-80 & Arabic AlpacaEval & ACVA & win & tie & loss & win or tie \\ \hline AceGPT-7B-_chat_ (w/o RLAIF) & 92.01\(\pm\) 1.3\% & 91.35\% \(\pm\) 0.08\% & 42.48\% & 27.5\% & 29.2\% & 43.3\% & 56.7\% \\ AceGPT-7B-_chat_ & **94.82**\% \(\pm\) 0.2\% & **93.81**\(\pm\) 0.1\% & **69.60**\% & 27.5\% & 32.9\% & 39.6\% & 60.4\% \\ \hline AceGPT-13B-_chat_ (w/o RLAIF) & 95.14\% \(\pm\) 1.0\% & 93.05\% \(\pm\) 0.2\% & 74.18\% & 19.6\% & 37.5\% & 42.9\% & 57.1\% \\ AceGPT-13B-_chat_ & **100.88**\% \(\pm\) 0.4\% & **97.95**\% \(\pm\) 0.1\% & **74.70\%** & 16.3\% & 57.1\% & 26.7\% & 73.3\% \\ \hline \hline \end{tabular} \end{table} Table 13: Experiments with/without RLAIF on Arabic Vicuna-80, Arabic AlpacaEval and ACVA. following and natural language understanding, setting a new standard among Arabic large language models. We contribute high-quality datasets and evaluation resources, highlighting the need for localizing large language models and introducing AceGPT as a pioneering solution for Arabic linguistic and cultural adaptation. ## Limitation In our AceGPT model, we identified several notable limitations. Firstly, its vocabulary, derived from LLaMA2, is primarily focused on Arabic letters, lacking further expansion. This results in reduced efficiency in Arabic text encoding tasks. Secondly, during the pre-training phase, due to constraints in machine resources, the number of tokens allocated to the model was relatively limited. This suggests that the model's potential in handling Arabic content has not been fully realized. When it comes to evaluation, we don't conduct reasoning/misinformation and bias testing. More critically, there are concerns regarding the model's safety alignment, rendering it unsuitable for online deployment at this stage and restricting it to academic research contexts. Moreover, even though manual verification was conducted on the cultural dataset, there is room for improvement in both the quality and quantity of the questions. 
These factors could potentially impact the model's practical application and adoption. ## Acknowledgement A concurrent work Jais Sengupta et al. (2023) was released a few weeks ahead of ours. We thank their efforts to open-source such a great model that is trained from scratch. We thank Prof. Zhi-Quan Luo and Dr. Ping Lee for their support. We extend our sincere appreciation to the dedicated KAUST graduate students whose contributions were integral to the success of our Arabic evaluations, including Lamees Alzahrani, Abdullah Amr Bawazir, Nouf Khalil Alenizi, Shatha Abdullah Alowdah, Rudaynah Maimani, Feras Khalid Alwutayd, Abdulrahman, Arwa Fallatah, Noura Alhijri, Reem Alquwayzani, and Majid Almarhoumi. We thank them for their invaluable support in this research. ## Author Contributions Author contributions are shown as follows:
2309.09315
Privacy-Preserving Polynomial Computing Over Distributed Data
In this letter, we delve into a scenario where a user aims to compute polynomial functions using their own data as well as data obtained from distributed sources. To accomplish this, the user enlists the assistance of $N$ distributed workers, thereby defining a problem we refer to as privacy-preserving polynomial computing over distributed data. To address this challenge, we propose an approach founded upon Lagrange encoding. Our method not only possesses the ability to withstand the presence of stragglers and byzantine workers but also ensures the preservation of security. Specifically, even if a coalition of $X$ workers collude, they are unable to acquire any knowledge pertaining to the data originating from the distributed sources or the user.
Zhiquan Tan, Dingli Yuan, Zhongyi Huang
2023-09-17T16:04:28Z
http://arxiv.org/abs/2309.09315v1
# Privacy-Preserving Polynomial Computing Over Distributed Data ###### Abstract In this letter, we delve into a scenario where a user aims to compute polynomial functions using their own data as well as data obtained from distributed sources. To accomplish this, the user enlists the assistance of \(N\) distributed workers, thereby defining a problem we refer to as privacy-preserving polynomial computing over distributed data. To address this challenge, we propose an approach founded upon Lagrange encoding. Our method not only possesses the ability to withstand the presence of stragglers and byzantine workers but also ensures the preservation of security. Specifically, even if a coalition of \(X\) workers collude, they are unable to acquire any knowledge pertaining to the data originating from the distributed sources or the user. Coded computing, distributed computing, privacy, Lagrange encoding. ## I Introduction In the information age, the size of datasets often grows rapidly, rendering their management infeasible using a single server. Consequently, data is frequently distributed across multiple servers that operate in parallel [1]. While distributing computations across multiple servers offers numerous advantages, it also introduces new complexities and challenges. One of the primary challenges is the presence of stragglers, denoting workers that exhibit significantly slower response times than their counterparts [2]. This can lead to delays in overall computation and negatively impact system performance. Another concern arises from the existence of malicious workers, commonly referred to as byzantine workers, who may deliberately submit adversarial results for personal gain, thereby jeopardizing the integrity and accuracy of computations. Data privacy is also a significant concern, as certain workers may collude to gain access to sensitive processed data. Addressing these challenges necessitates the development of robust and efficient distributed algorithms. As previously emphasized, datasets are frequently distributed. In our study, we examine a scenario in which a user possesses private data and seeks to compute a (polynomial) function involving their own data as well as data stored in distributed sources. We termed this problem as privacy-preserving polynomial computing over distributed data. Our objective is to devise a protocol that offers the following features: * Resilience against the presence of straggling workers. * Robustness against byzantine workers. * (Information-theoretic) privacy of sources and user data, even in the event of collusion among workers. In recent years, there has been a surge of interest in integrating coding-theoretic methods [3] into the design of distributed algorithms that exhibit resilience against straggling and byzantine workers while also ensuring data privacy. These methods have proven effective in addressing the challenges associated with large-scale distributed computations. For example, works such as [4, 5, 6] propose coded matrix designs to mitigate the impact of stragglers in distributed matrix multiplication. Furthermore, studies like [7, 8] consider both privacy and the effects of straggling in distributed matrix multiplication. For general distributed polynomial computing problems, Lagrange coded computing (LCC) [9] provides a scheme that resists the influence of stragglers and byzantine workers, while also ensuring data privacy even in the presence of colluding workers. 
In this letter, we propose an approach based on Lagrange encoding to address the challenges posed by privacy-preserving polynomial computing over distributed data. Our proposed method is specifically designed to be resilient against both straggling and byzantine workers. Additionally, it guarantees data security by preventing any coalition of \(X\) colluding workers from accessing information pertaining to the data from distributed sources and the user. **Notation**: We denote the set of integers from \(1\) to \(L\) as \([L]\). ## II Problem Setting Assume all the computation shall be performed on a given finite field \(\mathbb{F}_{q}\). We shall consider a scenario where there are \(S\) sources and each source \(i\) holds some secret data \(W_{i}\in\mathbb{F}_{q}^{a\times b}\). Denote the data jointly shared by these sources as \(W\in\mathbb{F}_{q}^{a\times bS}=[W_{1}W_{2}\cdots W_{S}]\). Suppose these data are further divided into \(W=[W^{(1)}W^{(2)}\cdots W^{(K)}]\), where we assume \(S|K\) for ease of exposition. A master also has some data \(U=[U^{(1)}U^{(2)}\cdots U^{(K)}]\), where \(U^{(i)}\in F_{q}^{a\times b\frac{K}{K}}\). Then the goal is to compute polynomial functions \(h(W^{(i)},U^{(i)})\) (\(1\leq i\leq K\)) with the help of \(N\) distributed workers. Sources will not communicate with the user, nor will there be communication among sources. In addition, all the workers are connected to the user and sources. We use the widely adopted setting in coded computing that all the connected links are error-free [7, 8]. We shall consider a communication protocol formulated generally as follows: \(\bullet\)**Sharing**: The sharing operation may consist of two parts: 1. Each source \(i\) may generate a set of random matrices \(P^{(i)}\) and choose a set of functions \(\{f_{1}^{(i)},f_{2}^{(i)},\cdots,f_{N}^{(i)}\}\) then send each worker \(k\) encoded data \(\bar{W}_{k}^{(i)}=f_{k}^{(i)}(W_{i},P^{(i)})\). 2. The master may generate a set of random matrices \(Q\) and choose a set of functions \(\{g_{1},g_{2},\cdots,g_{N}\}\) then send each worker \(k\) encoded data \(\bar{U}_{k}=g_{k}(U,Q)\). \(\bullet\)**Computing**: After receiving the encoded matrices \(\bar{W}_{k}^{(i)}\) (\(1\leq i\leq S\)) and \(\bar{U}_{k}\), worker \(k\) shall calculate a matrix \(Y_{k}\) and return \(Y_{k}\) to the user. \(\bullet\)**Reconstruction**: After receiving any \(M\) responses from workers, the user is able to retrieve \(h(W^{(i)},U^{(i)})\) (\(1\leq i\leq K\)). We shall call this number \(M\) recovery threshold of this protocol. There are also some system cost metrics that should be taken into account: 1. Source Upload Cost: For each source \(i\), the upload cost \(U_{S_{i}}\) is defined as \(\sum_{k\in[N]}H(\bar{W}_{k}^{(i)})\). 2. User Upload Cost: \(U_{u}=\sum_{k\in[N]}H(\bar{U}_{k})\). 3. User Download Cost: \[D=\max_{\mathcal{K}:\mathcal{K}\subseteq[N],|\mathcal{K}|=M}\sum_{k\in \mathcal{K}}H\left(Y_{k}\right).\] (1) We would like to design a protocol under the following constraints: 1. Data privacy: The protocol should keep workers (information theoretic) \(X\)-private about the data stored in sources and user. Specifically, \[I\left(W,U;\widetilde{W}_{\mathcal{X}},\widetilde{U}_{\mathcal{X}}\right)=0\] (2), for any \(\mathcal{X}\subset[N]\), \(X=|\mathcal{X}|\). \(\widetilde{W}_{\mathcal{X}}=\left\{\{\bar{W}_{k}^{(i)}\}_{i\in[S]}\right\}_{k \in\mathcal{X}}\) denotes all the information received from sources by workers in \(\mathcal{X}\), \(\widetilde{U}_{\mathcal{X}}\) defines similarly. 2. 
Byzantine worker robustness: The user shall get the correct answers \(f(W^{(i)},U^{(i)})\) (\(1\leq i\leq K\)) even if any \(A\) workers send (arbitrary) erroneous responses. A protocol that guarantees robustness against any \(A\) byzantine workers is called \(A\)-secure. 3. Straggler resilience: The user shall get the correct answers \(f(W^{(i)},U^{(i)})\) (\(1\leq i\leq K\)) even if any \(B\) workers fail to respond. A protocol that guarantees resilience against any \(B\) stragglers is called \(B\)-resilience. ## III A computation strategy based on Lagrange encoding ### _General description of the proposed method_ We select any \(K+T\) distinct numbers \(\beta_{j}\in\mathbb{F}_{q}\) (\(1\leq j\leq K+T\)). \(N\) distinct numbers \(\alpha_{i}\in\mathbb{F}_{q}\) (\(1\leq j\leq N\)) are chosen under the requirement \(\{\alpha_{i}\}_{i\in[N]}\cap\{\beta_{j}\}_{j\in[K]}=\emptyset\). Then the encoding polynomials are given as follows: \[g(z)=\sum_{j\in[K]}U^{(j)}\prod_{l\in[K+X]\setminus\{j\}}\frac{z- \beta_{l}}{\beta_{j}-\beta_{l}}+\] \[\sum_{j=K+1}^{K+X}Q_{j}\prod_{l\in[K+X]\setminus\{j\}}\frac{z- \beta_{l}}{\beta_{j}-\beta_{l}}. \tag{3}\] \[f^{(i)}(z)=\!\!\!\!\!\!\sum_{j=K+1}^{K+X}P_{j}^{(i)}\prod_{l\in[K+X \setminus\{j\}}\frac{z-\beta_{l}}{\beta_{j}-\beta_{l}}+\] \[\sum_{j\in[\frac{K}{S}]}W^{((i-1)\frac{K}{S}+j)}\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! the size of \(Y_{k}\) and the value of recovery threshold. [10] shows that interpolating any degree \(k\) polynomial can be done through \(O(k\log^{2}k\log\log k)\) operations. The decoding complexity is obtained by incorporating the fact that polynomial \(h(f(z),g(z))\) is of degree \(M\). ### _Example_ In this subsection, we shall introduce an example of applying our proposed scheme to the problem of matrix multiplication. To see the motivation of our construction, we shall first introduce the notion of bi-linear complexity of matrix multiplication [11, 12]. **Definition III.1**: _For matrices \(A=[A_{k},\ell]_{k\in[m],\ell\in[p]}\) and \(B=[B_{\ell,j}]_{\ell\in[p],j\in[n]}\). Suppose \(AB=C=[C_{k,j}]\). Then the bi-linear complexity is defined as the minimum number of multiplications for calculating \(C\) from \(A\) and \(B\), which we shall denote as \(R(m,p,n)\). 
Any tensors \(a\in\mathbb{F}_{q}^{R\times m\times p}\), \(b\in\mathbb{F}_{q}^{R\times p\times n}\), and \(c\in\mathbb{F}_{q}^{R\times m\times n}\) satisfying the conditions below are equivalent to the existence of an upper bound construction with rank \(R\) for bi-linear complexity._ \[\sum_{r=1}^{R}c_{r,k,j}(\sum_{\begin{subarray}{c}k^{\prime}=1 \end{subarray}}^{m}\sum_{\ell^{\prime}=1}^{p}a_{r,k^{\prime},\ell^{\prime}}A_ {k^{\prime},\ell^{\prime}})(\underbrace{\sum_{\ell^{\prime}=1}^{p}\sum_{j^{ \prime}=1}^{n}b_{r,\ell^{\prime},j^{\prime}}B_{\ell^{\prime},j^{\prime}}}_{= \bar{B}_{r}}) \tag{7}\] \[=\sum_{\ell=1}^{p}A_{k,\ell}B_{\ell,j}=C_{k,j},\quad\forall k\in[ m],j\in[n].\] The use of bi-linear complexity allows for the transformation of the matrix multiplication problem \(C=AB\) into the computation of the products of two sets of matrices \(\{\bar{A}_{1},\cdots,\bar{A}_{R}\}\) and \(\{\bar{B}_{1},\cdots,\bar{B}_{R}\}\). Suppose there are \(2\) sources \(S_{1}\) and \(S_{2}\), each \(S_{i}\) holding a secret data \(W_{i}\). Denote \(W=[W_{1}W_{2}]\). Assume the user also has secret data \(U\). The goal is to compute the matrix product \(WU\) with the help of \(N\) worker nodes. We shall partition \(W\) and \(U\) as follows: \[W_{1}=\begin{bmatrix}W_{1,1}\\ W_{2,1}\end{bmatrix},W_{2}=\begin{bmatrix}W_{1,2}\\ W_{2,2}\end{bmatrix},U=\begin{bmatrix}U_{1,1}&U_{1,2}\\ U_{2,1}&U_{2,2}\end{bmatrix}. \tag{8}\] Strassen [12] gives a construction of bi-linear complexity \(R=7\) as follows: \[\begin{array}{llll}\bar{W}_{1}=W_{1,1}+W_{2,2},&\bar{U}_{1}=U_{1,1}+U_{2,2} \\ \bar{W}_{2}=W_{2,1}+W_{2,2},&\bar{U}_{2}=U_{1,1}\\ \bar{W}_{3}=W_{1,1},&\bar{U}_{3}=U_{1,2}-U_{2,2}\\ \bar{W}_{4}=W_{2,2},&\bar{U}_{4}=U_{2,1}-U_{1,1}\\ \bar{W}_{5}=W_{1,1}+W_{1,2},&\bar{U}_{5}=U_{2,2}\\ \bar{W}_{6}=W_{2,1}-W_{1,1},&\bar{U}_{6}=U_{1,1}+U_{1,2}\\ \bar{W}_{7}=W_{1,2}-W_{2,2},&\bar{U}_{7}=U_{2,1}+U_{2,2}.\end{array} \tag{9}\] Define \(M_{i}=\bar{W}_{i}\bar{U}_{i}(i\in[7])\), then \[WU=\begin{bmatrix}M_{1}+M_{4}-M_{5}+M_{7}&M_{3}+M_{5}\\ M_{2}+M_{4}&M_{1}-M_{2}+M_{3}+M_{6}\end{bmatrix}. \tag{10}\] Assume the privacy protection level \(X=2\). We will select any \(9\) distinct elements \(\beta_{1},\cdots,\beta_{9}\) from \(\mathbb{F}_{q}\). We then select \(20\) distinct elements \(\{\alpha_{i}\}_{i\in[20]}\) from \(\mathbb{F}_{q}\) such that \(\{\alpha_{i}\}_{i\in[20]}\cap\{\beta_{j}\}_{j\in[7]}=\emptyset\). Define \[f^{(1)}(z)\] \[= W_{2,1}(\prod_{l\in[9]\setminus\{2\}}\frac{z-\beta_{l}}{\beta_{ 2}-\beta_{l}}+\prod_{l\in[9]\setminus\{6\}}\frac{z-\beta_{l}}{\beta_{6}-\beta _{l}})+W_{1,1}\] \[(\prod_{l\in[9]\setminus\{1\}}\frac{z-\beta_{l}}{\beta_{1}-\beta _{l}}+\prod_{l\in[9]\setminus\{3\}}\frac{z-\beta_{l}}{\beta_{3}-\beta_{l}}+ \prod_{l\in[9]\setminus\{5\}}\frac{z-\beta_{l}}{\beta_{5}-\beta_{l}}\] \[-\prod_{l\in[9]\setminus\{6\}}\frac{z-\beta_{l}}{\beta_{6}-\beta _{l}})+\sum_{j=1}^{2}P_{j}^{(1)}\prod_{l\in[7]}\frac{z-\beta_{l}}{\beta_{7+j}- \beta_{l}}. \tag{11}\] \[f^{(2)}(z)\] \[= W_{1,2}(\prod_{l\in[8]\setminus\{5\}}\frac{z-\beta_{l}}{\beta_{ 5}-\beta_{l}}+\prod_{l\in[8]\setminus\{7\}}\frac{z-\beta_{l}}{\beta_{7}-\beta _{l}})+W_{2,2}\] \[(\prod_{l\in[8]\setminus\{1\}}\frac{z-\beta_{l}}{\beta_{1}-\beta _{l}}+\prod_{l\in[8]\setminus\{2\}}\frac{z-\beta_{l}}{\beta_{2}-\beta_{l}}+ \prod_{l\in[8]\setminus\{4\}}\frac{z-\beta_{l}}{\beta_{4}-\beta_{l}}\] \[-\prod_{l\in[8]\setminus\{7\}}\frac{z-\beta_{l}}{\beta_{7}-\beta _{l}})+\sum_{j=1}^{2}P_{j}^{(2)}\prod_{l\in[7]}\frac{z-\beta_{l}}{\beta_{7+j}- \beta_{l}}. 
\tag{12}\] \[g(z)=\sum_{j\in[7]}\bar{U}_{j}\prod_{l\in[8]\setminus\{j\}}\frac{z-\beta_{l}} {\beta_{j}-\beta_{l}}+\sum_{j=1}^{2}Q_{j}\prod_{l\in[7]}\frac{z-\beta_{l}}{ \beta_{7+j}-\beta_{l}}. \tag{13}\] We shall denote \(P_{j}=\sum_{i=1}^{2}P_{j}^{(i)}\) (\(j\in[2]\)). \[f(z)=\sum_{j\in[7]}\bar{W}_{j}\prod_{l\in[8]\setminus\{j\}}\frac{z-\beta_{l}}{ \beta_{j}-\beta_{l}}+\sum_{j=1}^{2}P_{j}\prod_{l\in[7]}\frac{z-\beta_{l}}{ \beta_{7+j}-\beta_{l}}. \tag{14}\] Whenever each worker \(k\) receives all the encoded matrices \(\bar{W}_{k}^{(i)}=f^{(i)}(\alpha_{k})\) and \(\bar{U}_{k}=g(\alpha_{k})\) from user and all sources, it shall compute \(Y_{k}=(\bar{W}_{k}^{(1)}+\bar{W}_{k}^{(2)})\bar{U}_{k}\) and return \(Y_{k}\) to the user. Assume there exists one Byzantine worker. Note the degree of polynomial \(f(z)g(z)\) is \(16\). Using the RS code decoding algorithm, any \(19\) workers' results will be sufficient to decode \(f(z)g(z)\). Then the proposed protocol may resist \(1\) straggler. Note for \(i\in[7]\), \(f(\beta_{i})g(\beta_{i})=M_{i}\). Thus the matrix product \(WU\) can be successfully retrieved by equation 10. **Remark**: The construction adapts to polynomial sharing [13] schemes similarly. Grouping techniques [14] can also be performed easily. ## IV Proof of Privacy Suppose \(X\) workers in some subset \(\mathcal{X}\) collude, denote the workers indexes in \(\mathcal{X}\) as \(k_{j}\) (\(j=1,2,\cdots,X\)). **Lemma IV.1** (Generalized Cauchy Matrix [15]): _Let \(\alpha_{1},\cdots,\alpha_{X}\) and \(\beta_{1},\cdots,\beta_{X}\) be (pairwise) distinct elements from a finite field \(\mathbb{F}_{q}\). Denote \(l_{j}(x)\) a Lagrange basis polynomial of degree \(X-1\) defined as follows:_ \[l_{j}(z)=\prod_{l\in[X]\setminus\{j\}}\frac{z-\beta_{l}}{\beta_{j}-\beta_{l}}, \quad\forall j\in[X].\] Then the following generalized Cauchy matrix is invertible over \(F_{q}\). \[\left[\begin{array}{cccc}l_{1}\left(\alpha_{1}\right)&l_{2}\left(\alpha_{1} \right)&\ldots&l_{X}\left(\alpha_{1}\right)\\ l_{1}\left(\alpha_{2}\right)&l_{2}\left(\alpha_{2}\right)&\ldots&l_{X}\left( \alpha_{2}\right)\\ \vdots&\vdots&\ddots&\vdots\\ l_{1}\left(\alpha_{X}\right)&l_{2}\left(\alpha_{X}\right)&\ldots&l_{X}\left( \alpha_{X}\right)\end{array}\right]_{X\times X}.\] We shall first prove that \(I(W;\widetilde{W}_{\mathcal{X}})=0\), and the equality \(I(U;\widetilde{U}_{\mathcal{X}})=0\) follows similarly. Denote \(\widetilde{P}_{k_{j}}^{(i)}=\sum_{t=K+1}^{K+X}P_{t}^{(i)}\prod_{l\in[K+X]\setminus \{t\}}\frac{\alpha_{k_{j}}-\beta_{l}}{\beta_{l}-\beta_{l}}\). \[I\left(W;\widetilde{W}_{\mathcal{X}}\right) \tag{15}\] \[= H(\widetilde{W}_{\mathcal{X}})-H(\widetilde{W}_{\mathcal{X}}\mid W)\] \[= H(\{\widetilde{W}_{k_{1}}^{(i)}\}_{i\in[S]},\cdots,\{ \widetilde{W}_{k_{X}}^{(i)}\}_{i\in[S]})\] \[-H\left(\{\widetilde{W}_{k_{1}}^{(i)}\}_{i\in[S]},\cdots,\{ \widetilde{W}_{k_{X}}^{(i)}\}_{i\in[S]}\mid W\right)\] (16) \[\leq \sum_{i\in[S]}\sum_{j\in[X]}H(\widetilde{W}_{k_{j}}^{(i)})\] \[-H\left(\{\widetilde{P}_{k_{j}}^{(1)}\}_{j\in[X]},\cdots,\{ \widetilde{P}_{k_{j}}^{(S)}\}_{j\in[X]}\right)\] (17) \[= SXab\frac{S}{K}\log q-\sum_{i\in[S]}H(\{\widetilde{P}_{k_{j}}^{( i)}\}_{j\in[X]})\] (18) \[= SXab\frac{S}{K}\log q-S(Xab\frac{S}{K}\log q)\] \[= 0.\] In the above derivation, equations \(15\) and \(16\) come from the definition of mutual information. The fourth equality is clear from the independence of sources. 
Equation \(18\) follows immediately from the entropy of a uniformly distributed random variable on a finite field \(\mathbb{F}_{q}\) and lemma 4.1. Inequality \(17\) can be derived from the fact that joint entropy is bounded by the sum of respective entropies. The privacy of data is guaranteed by the following inequality and the fact that mutual information \(I\left(W,U;\widetilde{W}_{\mathcal{X}},\widetilde{U}_{\mathcal{X}}\right)\) is non-negative. \[I\left(W,U;\widetilde{W}_{\mathcal{X}},\widetilde{U}_{\mathcal{X}}\right)\] \[= I\left(W,U;\widetilde{W}_{\mathcal{X}}\right)+I\left(W,U; \widetilde{U}_{\mathcal{X}}\mid\widetilde{W}_{\mathcal{X}}\right) \tag{19}\] \[= H\left(\widetilde{W}_{\mathcal{X}}\right)-H\left(\widetilde{W}_{ \mathcal{X}}\mid W,U\right)\] \[+H\left(\widetilde{U}_{\mathcal{X}}\mid\widetilde{W}_{\mathcal{ X}}\right)-H\left(\widetilde{U}_{\mathcal{X}}\mid\widetilde{W}_{\mathcal{X}},W,U\right)\] (20) \[\leq H\left(\widetilde{W}_{\mathcal{X}}\right)-H\left(\widetilde{W}_{ \mathcal{X}}\mid W\right)\] \[+H\left(\widetilde{U}_{\mathcal{X}}\right)-H\left(\widetilde{U}_ {\mathcal{X}}\mid U\right)\] (21) \[= I\left(W;\widetilde{W}_{\mathcal{X}}\right)+I\left(U;\widetilde{ U}_{\mathcal{X}}\right)\] (22) \[= 0.\] In the above derivation, equation \(19\) comes from the chain rule for mutual information. Equations \(20\) and \(22\) follow immediately from the definition of mutual information. Inequality \(21\) can be derived from the non-increasing of information when conditioning and independence of data \(W\) and \(U\). Interestingly, we can show that the shared data by sources (user) will form a \(X\)-private MDS storage system on \(N\) worker nodes [16, 17]. **Lemma 4.2**: _Suppose \(\Phi_{i}(i\in[X])\) are \(X\) i.i.d. random variables following the uniform distribution on a finite filed \(\mathbb{F}_{q}\). Then \(\sum_{i=1}^{X}\Phi_{i}\) also follows a uniform distribution on \(\mathbb{F}_{q}\)._ **Proof of lemma \(4.2\)**: We shall prove that \(\Phi_{1}+\Phi_{2}\) follows a uniform distribution on \(\mathbb{F}_{q}\). Then the lemma is valid from a direct induction. For any \(x\in\mathbb{F}_{q}\), \(P(\Phi_{1}+\Phi_{2}=x)=P(\bigcup_{y\in\mathbb{F}_{q}}\{\Phi_{1}=y,\Phi_{2}=x-y \})=\sum_{y\in\mathbb{F}_{q}}P(\Phi_{1}=y)P(\Phi_{2}=x-y)=\frac{1}{q}\). From the above lemma, it is clear that the shared data by sources (user) can be seen as constructed by a secure Lagrange storage code [16]. Thus forming a \(X\)-private MDS storage system.
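To make the end-to-end workflow of Section III concrete, the following is a minimal, self-contained numerical sketch using scalar data over a small prime field, a single source, and the bilinear map \(h(W,U)=W\cdot U\). The field size, parameter values, and variable names are illustrative assumptions, not part of the paper's construction beyond what Section III specifies.

```python
# Minimal numerical sketch of the Lagrange-encoding workflow of Sec. III,
# with scalar data over a prime field and h(W, U) = W * U for concreteness.
# Requires Python 3.8+ (for pow(x, -1, q) modular inverses).
import random

q = 2_147_483_647                       # a prime; all arithmetic is mod q
K, X = 4, 2                             # data blocks and privacy level (single source, for brevity)
N = 2 * (K + X) - 1 + 2                 # workers: recovery threshold plus slack for stragglers

def lagrange_eval(xs, ys, z):
    """Evaluate the unique degree-(len(xs)-1) interpolant through (xs, ys) at z (mod q)."""
    total = 0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for l, xl in enumerate(xs):
            if l != j:
                num = num * (z - xl) % q
                den = den * (xj - xl) % q
        total = (total + yj * num * pow(den, -1, q)) % q
    return total

# Interpolation points: betas for data/masks, alphas for workers (all distinct, disjoint sets).
betas = list(range(1, K + X + 1))
alphas = list(range(K + X + 1, K + X + 1 + N))

# Secret data held by the source (W) and the user (U), plus uniform random masks P, Q.
W = [random.randrange(q) for _ in range(K)]
U = [random.randrange(q) for _ in range(K)]
f_vals = W + [random.randrange(q) for _ in range(X)]   # f(beta_j) = W_j, then X masks
g_vals = U + [random.randrange(q) for _ in range(X)]   # g(beta_j) = U_j, then X masks

# Sharing: worker k receives f(alpha_k) and g(alpha_k); the X masks provide X-privacy.
shares_W = [lagrange_eval(betas, f_vals, a) for a in alphas]
shares_U = [lagrange_eval(betas, g_vals, a) for a in alphas]

# Computing: each worker returns Y_k = h(f(alpha_k), g(alpha_k)) = f(alpha_k) * g(alpha_k).
Y = [w * u % q for w, u in zip(shares_W, shares_U)]

# Reconstruction: h(f(z), g(z)) has degree 2(K+X-1), so 2(K+X)-1 responses suffice;
# responses from the remaining (straggling) workers can simply be ignored.
M = 2 * (K + X) - 1
recovered = [lagrange_eval(alphas[:M], Y[:M], b) for b in betas[:K]]
assert recovered == [w * u % q for w, u in zip(W, U)]
```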
2307.16799
Toward Privacy in Quantum Program Execution On Untrusted Quantum Cloud Computing Machines for Business-sensitive Quantum Needs
Quantum computing is an emerging paradigm that has shown great promise in accelerating large-scale scientific, optimization, and machine-learning workloads. With most quantum computing solutions being offered over the cloud, it has become imperative to protect confidential and proprietary quantum code from being accessed by untrusted and/or adversarial agents. In response to this challenge, we propose SPYCE, which is the first known solution to obfuscate quantum code and output to prevent the leaking of any confidential information over the cloud. SPYCE implements a lightweight, scalable, and effective solution based on the unique principles of quantum computing to achieve this task.
Tirthak Patel, Daniel Silver, Aditya Ranjan, Harshitta Gandhi, William Cutler, Devesh Tiwari
2023-07-31T16:07:37Z
http://arxiv.org/abs/2307.16799v1
# Toward Privacy in Quantum Program Execution On Untrusted Quantum Cloud Computing Machines for Business-sensitive Quantum Needs ###### Abstract. Quantum computing is an emerging paradigm that has shown great promise in accelerating large-scale scientific, optimization, and machine-learning workloads. With most quantum computing solutions being offered over the cloud, it has become imperative to protect confidential and proprietary quantum code from being accessed by untrusted and/or adversarial agents. In response to this challenge, we propose SPYCE, which is the first known solution to obfuscate quantum code and output to prevent the leaking of any confidential information over the cloud. SPYCE implements a lightweight, scalable, and effective solution based on the unique principles of quantum computing to achieve this task. ## 1 Introduction to SPYCE. Quantum computing is an emerging technology that has the potential to accelerate and make possible the execution of many large-scale scientific, optimization, and machine-learning tasks [(7; 27)]. As quantum computing technology advances, multiple cloud-based quantum computing platforms are being used to develop and execute classically-infeasible mission-critical tasks by government agencies and industry partners [(14; 15; 29)]. In many cases, the solutions to these tasks are business sensitive and should be protected (e.g., the solution to a classically-infeasible problem relevant to a defense program). Currently, due to the nascent stage of quantum cloud computing, the cloud computing providers have full access to the end users' mission-sensitive programs and the output of such programs [(26; 30)]. Recognizing the importance of security and privacy for quantum program execution, some related work exists, although it does not solve the same problem as this work (protecting the output of quantum programs). In particular, encrypting quantum information over networks [(39; 4; 36)] and securing quantum programs from third-party quantum compilers [(31; 34)] have received attention. Unfortunately, all of these works assume that the cloud hardware provider is an uncompromised entity and does not have intentional or unintentional snoopers on the quantum cloud platform that can analyze the program outputs. Even if the code is protected from the compiler and over the network [(39; 4; 31; 34; 36)], currently, it has to be decrypted before it can be run on the hardware so that the correct output can be obtained, which is open to snooping from the cloud provider. Even if the cloud provider is uncompromised, organizations may not want to disclose their tasks, proprietary code, and program solutions to the cloud provider. Protecting this information from the cloud provider is a non-trivial challenge as _the user essentially wants the hardware provider to run the "wrong" code and observe the "wrong" output, but be able to recover the "correct" quantum output from the "wrong" output on the user's end. We propose SPYCE to achieve just this_. In the near future, it is anticipated that only a few entities in the world may have access to powerful quantum computers, and these quantum computers will be used to solve previously-unsolved large-scale optimization problems, possibly without an explicit trust model between the service cloud provider and the customer. Therefore, the solutions to such large-scale optimization problems will be considered sensitive and will need to be protected.
SPYCE takes the first few steps toward preparing us for that future - by developing a novel method that intelligently obfuscates the program output and quantum circuit structure of the original quantum program provided by the user/customer. Before we introduce the contributions of SPYCE, we first provide a primer on relevant quantum computing concepts. **Qubits and Quantum States.** The fundamental unit of quantum computing is the _qubit_, which is capable of representing a _superposition_ (linear combination) of two orthogonal basis states. This is represented as \(|\Psi\rangle=\alpha\,|0\rangle+\beta\,|1\rangle\), where \(\alpha\) and \(\beta\) are the complex amplitudes of the constituent basis states. Upon measurement, this superposition collapses such that the probability of measuring the state \(|0\rangle\) is \(\|\alpha\|^{2}\) and the probability of measuring the state \(|1\rangle\) is \(\|\beta\|^{2}\), where \(\|\alpha\|^{2}+\|\beta\|^{2}=1\). Figure 1. Example circuit representation of a quantum algorithm. The horizontal lines represent qubits with gates being applied to them in order from left to right.
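As a minimal illustration of the amplitude and measurement semantics above, the following NumPy sketch prepares a single-qubit superposition and estimates its measurement statistics by sampling; the particular amplitudes and shot count are arbitrary illustrative choices.

```python
# Minimal sketch of single-qubit superposition and measurement statistics.
import numpy as np

rng = np.random.default_rng(0)

# |psi> = alpha|0> + beta|1>, normalized so that |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j
state = np.array([alpha, beta], dtype=complex)
probs = np.abs(state) ** 2                      # Born rule: P(0) = |alpha|^2, P(1) = |beta|^2
assert np.isclose(probs.sum(), 1.0)

# Repeated measurement ("shots") collapses the state to 0 or 1 with these probabilities.
shots = 10_000
outcomes = rng.choice([0, 1], size=shots, p=probs)
print("P(|0>) ~", np.mean(outcomes == 0), " P(|1>) ~", np.mean(outcomes == 1))
```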
2309.16235
Language models in molecular discovery
The success of language models, especially transformer-based architectures, has trickled into other domains giving rise to "scientific language models" that operate on small molecules, proteins or polymers. In chemistry, language models contribute to accelerating the molecule discovery cycle as evidenced by promising recent findings in early-stage drug discovery. Here, we review the role of language models in molecular discovery, underlining their strength in de novo drug design, property prediction and reaction chemistry. We highlight valuable open-source software assets thus lowering the entry barrier to the field of scientific language modeling. Last, we sketch a vision for future molecular design that combines a chatbot interface with access to computational chemistry tools. Our contribution serves as a valuable resource for researchers, chemists, and AI enthusiasts interested in understanding how language models can and will be used to accelerate chemical discovery.
Nikita Janakarajan, Tim Erdmann, Sarath Swaminathan, Teodoro Laino, Jannis Born
2023-09-28T08:19:54Z
http://arxiv.org/abs/2309.16235v1
# Language models in molecular discovery ###### Abstract The success of language models, especially transformer-based architectures, has trickled into other domains giving rise to "scientific language models" that operate on small molecules, proteins or polymers. In chemistry, language models contribute to accelerating the molecule discovery cycle as evidenced by promising recent findings in early-stage drug discovery. Here, we review the role of language models in molecular discovery, underlining their strength in de novo drug design, property prediction and reaction chemistry. We highlight valuable open-source software assets thus lowering the entry barrier to the field of scientific language modeling. Last, we sketch a vision for future molecular design that combines a chatbot interface with access to computational chemistry tools. Our contribution serves as a valuable resource for researchers, chemists, and AI enthusiasts interested in understanding how language models can and will be used to accelerate chemical discovery. ## 1 Introduction Despite technological advances constantly reshaping our understanding of biochemical processes, the chemical industry persistently faces escalating resource costs of up to 10 years and 3 billion dollar per new market release [102]. The intricacy of the problem is typically attested by an exorbitant attrition rate in _in vitro_ screenings [77], the sheer size of the chemical space [68] and the frequency of serendipity [40]. Language models (LMs) emerged recently and demonstrated an astonishing ability to understand and generate human-like text [65]. Machine learning (ML) in general and LMs in particular hold the potential to profoundly accelerate the molecular discovery cycle (see Figure 1). In this chapter, we explore applications of LMs to chemical design tasks. Although LMs were originally developed for natural language, they have shown compelling results in scientific discovery settings when applied to "scientific languages", e.g., in protein folding [55] or _de novo_ design of small molecules [105], peptides [23] or polymers [66]. But what exactly is a language model? By definition, it is any ML model that consumes a sequence of text chunks (so-called tokens) and is capable to reason about the content of the sequence. Since each token is essentially a vector [62], a LM is a pseudo-discrete time series model. Most typically, LMs learn probability distributions over sequences of words thus also facilitating the generation of new text given some input, for example in a language translation task. While all LMs rely on neural networks, contemporary models almost exclusively leverage the Transformer architecture [93]. Now, all of this begs the question - what is the need for LMs in molecular discovery? First, when applied to serializations of chemical entities (e.g., SMILES [98]), LMs can learn highly structured representations, often even tailored for desired functional properties [36]. This allows to perform smooth and property-driven exploration of the originally deemed discrete protein or molecular space. Another attractive feature of scientific LMs is their ability to seamlessly bridge natural and scientific languages. This can give rise to ChatGPT-style chatbot interfaces that allow chemists to formulate their design objectives through natural language and to iteratively refine their result with an interactive agent thus potentially accomplishing complex chemical tasks more rapidly. 
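As a toy illustration of this autoregressive view (a language model factorizes \(p(x_1,\dots,x_T)=\prod_t p(x_t\mid x_{<t})\) and generates new sequences by sampling one token at a time), the sketch below uses a random stand-in for the network; the vocabulary and "weights" are placeholders, not a trained chemical language model.

```python
# Toy sketch of autoregressive generation: p(x_1..x_T) = prod_t p(x_t | x_<t).
# `next_token_logits` is a placeholder for a trained network (RNN, Transformer, ...).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<bos>", "C", "c", "1", "(", ")", "=", "O", "N", "<eos>"]  # toy SMILES-like tokens
W = rng.normal(size=(len(vocab), len(vocab)))  # toy parameters: logits from the last token only

def next_token_logits(prefix_ids):
    return W[prefix_ids[-1]]                   # a real LM would condition on the whole prefix

def sample(max_len=20, temperature=1.0):
    ids = [vocab.index("<bos>")]
    for _ in range(max_len):
        logits = next_token_logits(ids) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                   # softmax over the vocabulary
        ids.append(rng.choice(len(vocab), p=probs))
        if vocab[ids[-1]] == "<eos>":
            break
    return "".join(vocab[i] for i in ids[1:] if vocab[i] != "<eos>")

print(sample())
```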
Here, we present an overview of the role of LMs toward accelerated molecular discovery. We commence with the conventional scientific discovery method and then discuss how molecular generative models can be coupled with molecular property prediction models. Seeking for practical usability, we then present the reader with selected software tools and libraries for scientific language modeling. We close with a vision for future molecule design that integrates natural language models into the discovery process through chatbots. ## 2 Accelerated molecular discovery Molecule discovery, intricately linked to optimizing diverse properties in a vast space, challenges conventional scientific methods. In chemistry's Design-Make-Test-Analyze (DMTA) cycle, synthesis costs and time constraints create a bottleneck that hampers hypothesis refinement (cf. Figure 0(a)). Traditional approaches are largely driven by medicinal chemists who design "molecule hypotheses" which are biased, ad-hoc and non-exhaustive. This hinders progress in addressing global issues, driving crucial necessity for an accelerated process of molecule discovery. Thus, a key challenge lies in improving speed and quality of evaluating such "molecule hypotheses" grounded on laboratory work. Deep generative models have recently emerged as a promising tool to expedite the hypothesis/design phase in molecular discovery. However, even the most advanced molecular generative models require an efficient method for large-scale virtual screening to test their hypotheses. The _accelerated molecular discovery_ cycle adds a validation loop to DMTA, rapidly evaluating numerous hypotheses inexpensively (cf. Figure0(b)). This loop enhances the design-phase generative model, ensuring only promising hypotheses advance to the synthesis and physical experimentation stages. ### Molecule Representation Data representation is critical as it determines which information is available for the model. As illustrated in Figure2, various molecular representations exist. Due to popularity of chemical language models (CLMs), this section focuses on text-representations of molecules. A more focused discussion on CLMs was published by Grisoni [38]. Figure 1: A comparison of molecular discovery workflows: (a) classic approach, where each hypothesis (a.k.a. molecule) requires a new experimental cycle. (b) _Accelerated_ molecular discovery cycle with machine-generated hypotheses and assisted validation, enabling simultaneous generation and testing of numerous molecules. Simplified Molecular Input Line-Entry System (SMILES)SMILES[98] is a string representation made up of specific characters for atoms, bonds, branches, aromaticity, rings and stereochemistry in molecular structures. The character-level representation enables easy tokenization, making SMILES an ideal input for LMs. SMILES are non-unique, so each molecule can be written as multiple SMILES strings. Hence, SMILES are either canonicalized or, alternatively, their multiplicity is used as data augmentation strategy [8] which has shown performance improvement in molecular property prediction [8, 51, 88] and molecular generation [92, 3]. In generative modeling, a common issue is the invalidity of SMILES strings due to an uneven number of ring opening/closure symbols or bond valence violations. SMILES strings can undergo further processing, such as kekulization or stereoinformation removal but employing canonicalized SMILES remains the most prevalent approach. 
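As a hands-on sketch of the SMILES handling just described (validity checking, canonicalization, and enumeration-based augmentation), the snippet below uses RDKit; the example molecule is an arbitrary choice, and the `doRandom` flag assumes a reasonably recent RDKit release.

```python
# Minimal RDKit sketch: validity checking, canonical SMILES, and SMILES enumeration.
from rdkit import Chem

smiles = "CC(=O)Oc1ccccc1C(=O)O"           # aspirin, as an arbitrary example
mol = Chem.MolFromSmiles(smiles)            # returns None for invalid SMILES
assert mol is not None, "invalid SMILES"

canonical = Chem.MolToSmiles(mol)           # one canonical string per molecule
print("canonical:", canonical)

# Augmentation: sample alternative (non-canonical) atom orderings of the same molecule.
randomized = {Chem.MolToSmiles(mol, canonical=False, doRandom=True) for _ in range(10)}
print(f"{len(randomized)} randomized variants; all parse back to the same molecule:",
      all(Chem.MolToSmiles(Chem.MolFromSmiles(s)) == canonical for s in randomized))
```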
**Tokenization** is the process of splitting a string into vectorizable units. These units are typically a single character, n-gram characters or words. Instead of splitting at the character level, SMILES are typically tokenized at the atom level with regular expressions [79] or by additionally including positional and connectivity information, thereby acknowledging that the same atom can have different encodings based on its location in the molecular structure [91]. SMILES may also be tokenized at the substructure level, as demonstrated by SMILES Pair Encoding (SMILES-PE) [52]. This method, inspired by byte-pair encoding, iteratively counts and merges frequently occurring SMILES token pairs until a given condition is met. Tokenization enables the creation of a vocabulary for SMILES representations. **Vocabularies** are dictionaries mapping tokens to vectors thus serving as gateway to LMs. For LMs to learn from SMILES, tokens are vectorized, either via one-hot encodings (where each row in the binary matrix corresponds to a SMILES position and each column signifies a token). However, this discrete method results in sparse, large matrices and thus, an alluring alternative is to learn a continuous embedding for each token during training. This facilitates the learning of semantic relationships between tokens and enhances performance. Since learning good embeddings requires a lot of data, models pre-trained on natural language corpora are a strong option to learn scientific language embeddings through fine-tuning [22]. Self Referencing Embedded Strings (SELFIES)SELFIES[49] were introduced as an alternative to SMILES to counter the problem of generating invalid molecules. Unlike SMILES, SELFIES are generated using derivation rules to enforce valence-bond validity. They store branch length and ring Figure 2: An illustration of popular ways of representing a chemical molecule as input to a ML model. The representations may be (a) String-based, such as SMILES, SELFIES, or InChI which use characters to represent different aspects of a molecule, (b) Structure-based, such as Graphs or MolFiles that encode connectivity and atomic position, and (c) Feature-based, such as Morgan Fingerprints, which encode local substructures as bits. size to avoid open branches and rings. These supplementary attributes ensure a valid representation during molecule generation. While this strategy guarantees 100% validity, it could produce strings that are too short to be a useful molecule. International Chemical Identifier (InChI)Introduced by the IUPAC, InChI [41] are strings encoding structural information including charge of the molecule in a hierarchical manner. The strings can get long and complex for larger molecules. To counter this, a hash called 'InChIKey' was developed to help with search and retrieval. InChIs are are less commonly used in LMs [39]. ### Generative Modelling Generative modeling involves learning the data's underlying distribution with the intent of generating new samples, a technique pivotal in accelerating de novo drug discovery. A generative model may be conditional or unconditional. A conditional generative model utilizes provided data attributes or labels to generate new samples with desired properties, whereas an unconditional model solely provides a way to sample molecules similar to the training data [36]. The DMTA cycle particularly benefits from the conditional generation approach as it facilitates goal-oriented hypothesis design [9]. 
This section describes a few influential conditional generation models that act on chemical language to generate molecules satisfying user-defined conditions. #### 2.2.1 Recurrent Neural Network (RNN) The sequential nature of RNNs makes them suitable models for processing chemical languages. Proposed in the 90s, RNNs were the first flavor of CLMs [8, 79, 85]. Their hidden states are continuously updated as new tokens are passed to the network. During the generation process, tokens are produced auto-regressively. RNNs find use in generating molecule libraries [85] which are extensively used in drug development processes like screening. External scoring functions drive the generation of molecules with desired properties. RNNs are also adept at learning complex distributions [31] and generating a higher proportion of unique and valid SMILES [69], even though their inability to count occurrences of ring opening/closing symbols poses a challenge [46, 70]. Figure 3: An illustration of conditional molecule generation using LMs. The process initiates with the collection and processing of multi-modal data, which is then compressed into a fixed-size latent representation. These representations are subsequently passed to a molecular generative model. The generated molecules then undergo in-silico property prediction, which is linked back to the generative model through a feedback loop during training. The in-silico models direct the generative model to produce property- or task-driven molecules using a reward function. In the inference stage, candidate molecules generated by the optimized model continue through the workflow for lab synthesis and subsequent experimental validation to determine their efficacy for the desired task. #### 2.2.2 Variational Autoencoder (VAE) VAEs learn latent distribution parameters of molecules, thus enabling the generation of new molecules by sampling from this distribution. Their unique ability lies in learning a smooth, latent space that facilitates interpolation of samples, even for notoriously discrete entities like molecules [36]. To make it suitable for chemical language models (CLMs), any network compatible with string inputs can function as a VAE's encoder and decoder. Initial works primarily focused on single-modality applications, assessing latent space quality via downstream tasks [36]. This approach remains prevalent and can be used to generate, e.g., catalysts with an RNN-based VAE [78]. Here, a latent space is learned and assessed by predicting the catalyst binding energy. Lim et al. [53] takes it a step further by concatenating a condition vector to the input and the latent embedding generated by the recurrent network-based VAE's encoder. This approach enables the generation of molecules specifically tailored to the given conditions. The scope of VAEs expanded progressively into multi-modal settings for conditional molecule generation, as visualized in Figure 3 and exemplified by Born et al. [11, 12, 13]. These works on task-driven molecule generation incorporate contextual information like gene expression [13] or protein targets [11, 12] or even both [45]. VAEs learn embeddings of context information and primer drugs, which are merged before decoding to produce molecules. A reinforcement-learning-based approach directs the model to produce molecules with desired properties using rewards. #### 2.2.3 Transformer The self-attention attribute of Transformers [93] have propelled these models to the forefront of NLP. 
Transformers have an encoder module that relies on this self-attention to learn embeddings of the input and the context associated with this input. The decoder module predicts tokens using the context learnt by the encoder and previously generated tokens through attention. For generative modeling, decoder-only transformers like the Generative Pre-Training Transformer (GPT) [72] have become the dominant approach. This success was translated to the scientific language domain. One of the first models to use the GPT architecture for conditional molecule generation is MolGPT [4]. SMILES tokens concatenated with a condition vector that summarizes the desired properties and scaffolds are passed as input to this model, which is then trained on the next token prediction task to generate molecules. GPT-like models coupled with RL can also be used to optimize molecular properties like pIC50 [61]. In this two-stage approach, embeddings are first learnt from SMILES strings, and the embedding space is then optimized such that the model samples molecules with the desired properties. Going beyond just using GPT-like architectures for molecule generation, the Regression Transformer [10] is a seminal work that formulates conditional sequence modeling as a regression problem. This gives rise to a natural multitask model that concurrently performs property prediction and conditional molecular generation. This is achieved by concatenating conventional molecular tokens with property tokens and employing a training scheme that alternates which parts of the sequence are masked. All these works are a testament to the generative capabilities of Transformer-based models. The superior quality of the learned embeddings, coupled with the architecture's parallelism and scalability, makes Transformers a top choice for the task of conditional molecule generation, with promising applications in drug discovery and other areas of molecular design [66]. ### Property Prediction Whether a discovery is novel or not, property prediction is a key step in validating the molecules for a given use case. The success of a molecule depends on a myriad of factors, including how it interacts with its environment. The MoleculeNet datasets [103] are a commonly used benchmark for property prediction. They are curated from public datasets and comprise over 700,000 compounds tested on various properties. Born et al. [15] use a multiscale convolutional attention model to predict toxicity from SMILES. The model has three kernel sizes for the convolutional network and uses a Bahdanau attention mechanism [5]. The model shows superior performance overall on various MoleculeNet tasks compared to all other SMILES-based models. A recent trend is to use transformer-encoders to learn embeddings for molecules and then apply a multilayer perceptron (MLP) on the embeddings for property prediction. MolBERT [29] and ChemBERTA [20] are two such examples. These transformer-based models use a BERT backbone to learn molecular embeddings from SMILES and predict properties. Similarly, Molformer [75] uses a transformer-encoder with linear attention and relative positional encoding to learn compressed molecular representations which are then fine-tuned on chemical property prediction benchmarks. To equip transformers with better inductive biases to handle molecules, adaptations of the attention mechanism were proposed. The molecule attention transformer (MAT) incorporates inter-atomic distances and graph structure into the attention mechanism [58].
An improvement over this model is the _relative_-MAT which fuses the distance embedding, bond embedding and neighbourhood embedding and achieves competitive performances on a range of property prediction tasks [59]. ## 3 Software tools for scientific language modeling The paradigm shift towards open-sourcing software has exerted a profound influence on chemistry. Commonly listed implications of open-sourcing in the context of drug discovery include catalyzation of methodological development, fostering of collaboration and ease of scientific reproducibility [35]. In this section we present several software assets (e.g., Python packages or cloud-based web apps) that are key to enabling molecular discovery. ### Natural language models The success story of the Transformer [93] as the most widely adopted neural network architecture goes hand in hand with the rise of the transformers library [101], developed since 2019 by HuggingFace. Initially intended for NLP applications, Transformers were adopted interdisciplinarily, e.g., in computer vision [25], reinforcement learning [19], protein folding [47] and, of course, chemistry [84]. _HuggingFace_ provides the largest public hub of language models and it offers implementations of all recent models as well as a diverse collection of pretrained models available for fine-tuning or inference. While most of their models focus on NLP, selected models are designed for life science applications, in particular molecular property prediction (e.g., _ChemBerta_[20]), molecular captioning (e.g., _MolT5_[26]), text-based molecular generation (e.g., _MolT5_[26]) but also unsupervised protein language models (e.g., _ProtBert_, _ProtAlbert_, _ProtXLNet_ or _ProtT5_[27]). Moreover, some available models like the _Multitask Text and Chemistry T5_[22] are prompt-based multitaskers that, besides the above-mentioned tasks, also perform additional tasks such as forward/backward reaction prediction. ### GT4SD - Generative modeling toolkits Python libraries like GT4SD (the Generative Toolkit for Scientific Discovery [57]), TdC (Therapeutics Data Commons [43]) or deepchem[73] were developed primarily for molecular discovery applications, but especially GT4SD offers ample support of language models (LMs). GT4SD is designed to enable researchers and developers to use, train, fine-tune and distribute state-of-the-art generative models for sciences with a focus on the design of organic materials. It is compatible and inter-operable with many existing libraries and, beyond transformers, it also gives access to diffusion models (diffusers[96]) or graph generative models (TorchDrug[106]). Next to established molecular generation benchmarks like Moses[69] and GuacaMol[16] that include VAEs, generative adversarial networks (GANs), genetic algorithms, and many evaluation metrics for molecular design, GT4SD also supports very contemporary models like the _Regression Transformer_ for concurrent sequence regression and property-driven molecular design [10], _GFlowNets_ for highly diverse candidate generation [6] or _MoLeR_ for motif-constrained molecule generation [60]. GT4SD ships with a harmonized interface and a set of command line tools that access a registry of generative models to run or train any model with a few lines of code. Trained models can be shared to a cloud-hosted model hub and the library is built to facilitate consumption by containerization or distributed computing systems.
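To illustrate how such hub-hosted chemistry LMs are consumed in practice, the following minimal sketch loads a pretrained SMILES encoder from the HuggingFace hub and extracts a molecule-level embedding that could feed a downstream property predictor. The checkpoint name is only an example of a publicly shared model and should be verified on the hub before use.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Example checkpoint; any SMILES-pretrained encoder on the hub works similarly.
model_name = "seyonec/ChemBERTa-zinc-base-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
inputs = tokenizer(smiles, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the per-token embeddings into a single molecule-level vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768])
```

The same three-line loading pattern applies to the captioning, generation and protein models mentioned above; only the head attached on top of the embedding changes with the task.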
To date, it includes \(\sim 50\) property prediction endpoints for small molecules, proteins and crystals and overall hosts \(\sim 30\) pre-trained algorithms for material design, 20 free webapps [2] and many Jupyter/Colab notebooks. ### RXN for Chemistry: Reaction and synthesis language models Once a molecule has been selected for experimental validation, a tangible synthesis route has to be identified. Since the most important tasks in chemical reaction modeling can be framed as sequence conversion problems, the methodology developed for natural language translation can be seamlessly translated to chemistry [84]. In this analogy, atoms are characters, molecules are words, reactions are sentences and precursors are translated into a product or vice versa. The most mature and flexible library for reaction modeling with LMs is the package rxn4chemistry[32]. It wraps the API of the _IBM RXN for Chemistry_ platform, a freely accessible web application that gives access to a rich set of language models for different tasks in reaction chemistry. The flagship architecture has been the _Molecular Transformer_ (MT), an autoregressive encoder-decoder model, originally applied to predict outcomes of chemical reactions in organic chemistry [80]. Notably, the MT uses a purely data-driven, template-free approach that, unlike many graph-based models, can directly represent stereochemistry and thus also exhibits excellent performance on regio- and stereoselective reactions [67]. The MT was applied to single-step retrosynthesis [90] and became the linchpin of a multi-step retrosynthesis model with a hypergraph exploration strategy [81]. This approach was later generalized to enzymatic reactions with a tokenization scheme based on enzyme classes which facilitated biocatalyzed synthesis planning and paved the road towards more sustainable and green chemistry [71]. Derivatives of the MT helped to enhance diversity in single-step retrosynthesis [90] and a prompt-based disconnection scheme proposed by Thakkar et al. [89] significantly improved controllability by allowing the user to mark a disconnection side in the reactant. Interestingly, an encoder-only derivative of the MT (that replaced the autoregressive decoder with a classification head and leveraged BERT-style [24] self-supervised pretraining on reactions) excelled in predicting reaction classes [83]. The hidden representations of such a model were found to encode reaction types and thus allowing to map reaction atlases and to perform reaction similarity search. This gave rise to the rxnfp package for chemical reaction fingerprinting. Strikingly, masked language modeling also led later to the discovery that the learned attention weights of the Transformer are "secretly" performing atom mapping between products and reactions [82]. The epiphany that CLMs accomplish atom mapping without supervision or human labeling bridged the gap between rule-based and data-driven approaches in reaction modeling, making this once tedious experimental task more efficient. In the quest for automation in organic chemistry, once the precursors for a molecule's synthesis route are identified, the subsequent crucial phase involves seeking an actionable, stepwise synthesis protocol that is ideally amenable for autonomous execution on a robotic platform, such as _IBM RoboRXN_. In two seminal works Vaucher et al. 
demonstrated that encoder-decoder Transformers can extract chemical synthesis actions, first from experimental procedures described in patents [94] and later directly from the reaction SMILES [95]. Notably, all the aforementioned models are available via the _IBM RXN for Chemistry_ platform, which even allows users to control and monitor the robotic platform directly from the web interface. For the daunting task of multistep retrosynthesis planning, _RXN_ also includes non-transformer based models like _AiZynthFinder_[34], a Monte Carlo Tree Search approach built on top of an RNN. Most of the _RXN_ models can also be executed via the rxn4chemistry Python package. ### Specialized libraries **Molecular property prediction.** HuggingMolecules is a library solely devoted to aggregating, standardizing and distributing molecular property prediction LMs [33]. It contains many encoder-only CLMs, some of them with geometrical and structure-aware inductive biases (e.g., the MAT [58] or its successor, the R-MAT [59]) while others are pure BERT-based models that were trained on SMILES (e.g., _MolBERT_[29] or _ChemBERTA_[20]). **Data processing.** RDKit [50] is a library for manipulating molecules in Python. For narrower applications like ML data preparation several tools exist. First, rxn-chemutils is a library with chemistry-related utilities from RXN for Chemistry. It includes functionalities for standardizing SMILES (e.g., canonicalization or sanitization) but also conversions to other representations (e.g., InChI). It harmonizes reaction SMILES and prepares them for consumption by CLMs, including also SMILES augmentation (by traversing the molecular graph in a non-canonical order) and tokenization. Another library with a similar focus is pytoda[12, 13]. It does not support reaction SMILES but implements richer preprocessing utilities, allowing users to chain \(>\)10 SMILES transformations (e.g., kekulization [15]). It supports different languages (e.g., SELFIES [49] or BigSMILES [54]) and tokenization schemes (e.g., SMILES-PE [52]). Similar functionalities are available for proteins including different languages (IUPAC, UniRep or Blosum62) and protein sequence augmentation strategies [14]. For small molecules, proteins, and polymers, dedicated language classes facilitate the integration with LMs by storing vocabularies, performing online transformations and feeding to custom datasets. Datasets exist for predicting molecular properties, drug sensitivity, protein-ligand affinity or for self-supervision on small molecules, proteins or polymers. ### General purpose platforms Several general-purpose platforms for molecular discovery have been launched recently, sometimes even preserving privacy through federated learning (i.e., decentralized, distributed training). For example, MELLODDY [42] is a collaborative effort aimed at cross-pharma federated learning of 2.6 billion confidential activity data points. Similarly, VirtualFlow [37] is an open-source platform facilitating large-scale virtual screening that was shown to identify potent KEAP1 inhibitors. With a focus on _de novo_ drug design, Chemistry42 [44] is a proprietary platform integrating AI with computational and medicinal chemistry techniques. ## 4 Future of molecular discovery A few years ago, the idea of querying an AI model - like one would a search engine - to not only extract scientific knowledge but also perform computational analyses was an overly ambitious feat.
Scientific thinking comes from the ability to reason, and AI models cannot reason like humans, yet. However, these models can **learn** from humans. Our propensity to document everything has enabled us to train Large Language Models (LLMs), like ChatGPT [64] and GitHub Copilot [1], to mimic human responses. When brought into the context of computational science, this could equip non-experts to confidently conduct computational analyses through well-designed prompts. With human-in-the-loop, a synergistic effect could be created where the scientist provides feedback to the model on its output, thus aiding in better model optimization (a strategy called reinforcement learning from human feedback (RLHF) that has been proven critical for ChatGPT [21]). These applications also reduce the barrier for individuals from non-scientific backgrounds to gain a more hands-on experience in conducting scientific analyses without having to go through formal training in computational analysis. This section provides a sneak peak into what's next for molecular discovery. Riding the LLM wave, the future holds a place for chatbot-like interfaces that may take care of all things computational in molecular discovery. This includes, for example, generating and iteratively improving design ideas, synthesis planning, material purchasing, performing routine safety checks, and validating experiments. #### The rise of foundation models in chemistry Conventionally, neural networks are trained for a single given task to achieve maximum performance. This essentially renders the models useless for other tasks, thus requiring a new model for every new task, even when the training domain is the same, which in turn imposes a constraint on the rate of our technological advancements. Over the last few years, this conventional approach has been challenged by Large Language Models (LLMs). It has been found that scaling up LLMs leads to astonishing performances in few-shot [17] and even zero-shot task generalization [76]. Referred to as "foundation models" [30, 63], these models, with typically billions of parameters, can perform multiple tasks despite being trained on one large dataset. Essentially, this multi-task learning is achieved by prompting LLMs with task instructions along with the actual query text which has been found to induce exceptional performance in natural language inference and sentence completion [76]. These findings have kicked off new research directions, such as prompt engineering [97] and in-context learning [17], in NLP. The foundation model paradigm also finds an increasing adoption in chemistry. There is an increase in task-specific models integrating natural and chemical languages [26, 94, 95, 104]. Concurrently, multi-tasking in pure CLMs has also been advancing through models that combined tasks such as property prediction, reaction prediction and molecule generation either with small task-specific heads (e.g., T5Chem [56]) or via mask infilling (e.g., Regression Transformer [10]). Christofidellis et al. [22] were the first to bridge the gap and develop a fully prompt-based multi-task chemical and natural language model. Despite only 250M parameters, the _Multitask Text and Chemistry T5_ was shown to outperform ChatGPT [64] and Galactica [87] on a contrived discovery workflow for re-discovering a common herbicide (natural text \(\rightarrow\) new molecule \(\rightarrow\) synthesis route \(\rightarrow\) synthesis execution protocol). 
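To give a concrete picture of how such a prompt-based multitask model is queried, the sketch below sends a natural-language instruction to a seq2seq checkpoint through the standard HuggingFace API. Both the checkpoint identifier and the prompt template are assumptions made for illustration; the actual names and templates should be taken from the respective model cards.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed checkpoint name for a multitask text+chemistry T5-style model;
# consult the model card for the real identifier and supported prompts.
model_name = "GT4SD/multitask-text-and-chemistry-t5-base-augm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# A prompt-style task instruction: translate a textual description into a molecule.
prompt = "Write in SMILES the described molecule: a xanthine alkaloid found in cacao."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=5)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Switching tasks (e.g., forward reaction prediction or molecular captioning) amounts to changing the instruction in the prompt rather than loading a different model, which is precisely the appeal of the foundation-model paradigm described above.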
### The coalescence of chatbots with chemistry tools Given the aforementioned strong task generalization performances of LLMs, building chatbot interfaces around them was a natural next step, and thus, next to ChatGPT [64], many similar tools were launched. Such tools were found to perform well on simplistic chemistry tasks [18, 99], opening potential to reshape how chemists interact with chemical data, enabling intuitive access to complex concepts and making valuable suggestions for diverse chemical tasks. Furthermore, AI models specifically developed by computer scientists for, e.g., drug discovery or material science can be made available through applications powered by LLMs, such as chatbots. This minimizes the access barrier for subject matter experts who would otherwise require the respective programming skills to utilize these AI models. The power of such chatbots is reached through the coalescence of LLMs and existing chemistry software tools like PubChem [48], RDKit [50] or GT4SD [57]. Together, such applications can unlock the full potential and value of these models through strongly enhanced usage. Figure 4: Screenshot of the LLM-powered chatbot application ChemChat. Embedding the capabilities of existing resources such as PubChem [48], RDKit [50] or GT4SD [57] enables the assistant to execute programming routines in the background and thus answer highly subject-matter specific user requests without the user needing programming skills. An example of what the interaction with such a tool could look like is shown in Figure 4. In this example, a user provides a molecule (either as SMILES string or via a molecule sketcher) and asks to identify the molecule. The chatbot relies on prompt-engineering in order to inform the LLM about all its available tools. The user input is first sent to the LLM which recognizes that one of its supported tools, in this case PubChem, can answer the question. The chatbot then sends a request to the PubChem API and returns a concise description of the molecule. The user subsequently asks to compute the logP partition coefficient [100] and the quantitative estimate of drug-likeness (QED) [7]. Calculation of both properties is enabled through the GT4SD tool [57], allowing the chatbot to answer the request with certainty. This will trigger a programming routine to accurately format the API request for GT4SD, i.e., composing the SMILES string with the logP or QED endpoint. The computation is then performed asynchronously and a separate call to the post-processing routine formats the LLM-generated string reply and composes the response object for the frontend. This fusion of LLMs with existing tools gives rise to a chatbot assistant for material science and data visualization that can perform simple programming routines without requiring the user to know programming or have access to compute resources. A continuation of the conversation involving more complex user queries is shown in Figure 5. Having identified the initial molecule as theobromine with a logP of -1.04, the user requests three similar molecules with a slightly increased logP of -0.5. Here, ChemChat identifies the Regression Transformer [10] as the available tool to perform substructure-constrained, property-driven molecule design. Once the routine has been executed and the three candidate SMILES are collected, the text result is post-processed to add more response data objects such as molecule visualizations, datasets or Vega Lite specs for interactive visualizations.
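The property calculations in this exchange can be reproduced outside the chatbot with a few lines of code. The sketch below implements a toy "tool" a chatbot backend could dispatch to; it uses RDKit's Crippen and QED modules as an illustrative stand-in for the GT4SD endpoints mentioned in the text, and the theobromine SMILES is given purely as an example input.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, QED

def describe_molecule(smiles):
    """Toy chatbot 'tool': compute logP and QED for a user-provided SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return {"error": f"Could not parse SMILES: {smiles}"}
    return {
        "canonical_smiles": Chem.MolToSmiles(mol),
        "logP": round(Crippen.MolLogP(mol), 2),  # Crippen/Wildman logP estimate
        "QED": round(QED.qed(mol), 2),           # quantitative estimate of drug-likeness
    }

# Theobromine, the molecule identified in the conversation above.
print(describe_molecule("Cn1cnc2c1c(=O)[nH]c(=O)n2C"))
```

In a real assistant, the LLM would only decide *which* such tool to call and with which arguments; the numerical answer always comes from the deterministic routine, which is what makes the reply trustworthy.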
In conclusion, chatbots can facilitate the integration of essentially all major cheminformatics software in a truly harmonized and seamless manner. Figure 5: Screenshot of the LLM-powered chatbot application ChemChat showing the continuation of the conversation involving generative tasks through GT4SD’s Regression Transformer [10] as well as property [28] and similarity calculation [74, 86]. While LLMs are not intrinsically capable of performing complex routines, at least not with high precision and in a trustworthy manner, the synergy between their natural language abilities and existing chemistry tools has the potential to transform the way chemistry is performed.
2309.10644
Robin: A Novel Method to Produce Robust Interpreters for Deep Learning-Based Code Classifiers
Deep learning has been widely used in source code classification tasks, such as code classification according to their functionalities, code authorship attribution, and vulnerability detection. Unfortunately, the black-box nature of deep learning makes it hard to interpret and understand why a classifier (i.e., classification model) makes a particular prediction on a given example. This lack of interpretability (or explainability) might have hindered their adoption by practitioners because it is not clear when they should or should not trust a classifier's prediction. The lack of interpretability has motivated a number of studies in recent years. However, existing methods are neither robust nor able to cope with out-of-distribution examples. In this paper, we propose a novel method to produce \underline{Rob}ust \underline{in}terpreters for a given deep learning-based code classifier; the method is dubbed Robin. The key idea behind Robin is a novel hybrid structure combining an interpreter and two approximators, while leveraging the ideas of adversarial training and data augmentation. Experimental results show that on average the interpreter produced by Robin achieves a 6.11\% higher fidelity (evaluated on the classifier), 67.22\% higher fidelity (evaluated on the approximator), and 15.87x higher robustness than that of the three existing interpreters we evaluated. Moreover, the interpreter is 47.31\% less affected by out-of-distribution examples than that of LEMNA.
Zhen Li, Ruqian Zhang, Deqing Zou, Ning Wang, Yating Li, Shouhuai Xu, Chen Chen, Hai Jin
2023-09-19T14:27:59Z
http://arxiv.org/abs/2309.10644v1
# Robin: A Novel Method to Produce Robust Interpeters for Deep Learning-Based Code Classifiers ###### Abstract Deep learning has been widely used in source code classification tasks, such as code classification according to their functionalities, code authorship attribution, and vulnerability detection. Unfortunately, the black-box nature of deep learning makes it hard to interpret and understand why a classifier (i.e., classification model) makes a particular prediction on a given example. This lack of interpretability (or explainability) might have hindered their adoption by practitioners because it is not clear when they should or should not trust a classifier's prediction. The lack of interpretability has motivated a number of studies in recent years. However, existing methods are neither robust nor able to cope with out-of-distribution examples. In this paper, we propose a novel method to produce Robust interpreters for a given deep learning-based code classifier; the method is dubbed Robin. The key idea behind Robin is a novel hybrid structure combining an interpreter and two approximators, while leveraging the ideas of adversarial training and data augmentation. Experimental results show that on average the interpreter produced by Robin achieves a 6.11% higher fidelity (evaluated on the classifier), 67.22% higher fidelity (evaluated on the approximator), and 15.87% higher robustness than that of the three existing interpreters we evaluated. Moreover, the interpreter is 47.31% less affected by out-of-distribution examples than that of LEMNA. Explainable AI, deep learning, code classification, robustness ## I Introduction In the past few years there has been an emerging field focusing on leveraging deep learning or neural networks to study various kinds of source code classification problems, such as classifying code based on their functionalities [1, 2], code authorship attribution [3, 4, 5, 6, 7], and vulnerability detection [8, 9, 10]. While the accuracy of deep neural networks in this field may be satisfactory, the lack of interpretability, or explainability, remains a significant challenge. Deep neural networks are often considered _black-boxes_ which means they cannot provide explanations for why a particular prediction is made. The lack of interpretability poses as a big hurdle to the adoption of these models in the real world (particularly in high-security scenarios), because practitioners do not know when they should trust the predictions made by these models and when they should not. The importance of addressing the aforementioned lack of interpretability is well recognized by the research community [11, 12, 13], as evidenced by very recent studies. Existing studies on addressing the interpretability of source code classifiers (i.e., classification models) can be classified into two approaches: _ante-hoc_ vs. _post-hoc_. The ante-hoc approach aims to provide _built-in_ interpretability by leveraging the attention weight matrix associated with a neural network in question [14, 15], which in principle can be applied to explain the prediction on any example. The post-hoc approach aims to interpret the decision-making basis of a trained model. 
In the context of source code classification, this approach mainly focuses on local interpretation, which aims to explain predictions for individual examples by leveraging: (i) perturbation-based feature saliency [16, 17], which computes the importance scores of features by perturbing features in code examples and then observing changes in prediction scores; or (ii) program reduction [18, 19], which uses the delta debugging technique [20] to reduce a program to a minimal set of statements while preserving the classifier's prediction. The ante-hoc approach must be incorporated into the classifier training phase, meaning that it cannot help existing or given classifiers, for which we can only design interpreters to provide interpretability in a retrospective manner. In this paper we focus on how to retrospectively equip given code classifiers with interpretability, which is the focus of the post-hoc approach. However, existing post-hoc methods suffer from the following two problems. (i) the _first_ problem is incurred by the possible out-of-distribution of a perturbed example in the perturbation-based feature saliency method. This is inevitable because the method uses perturbations to assess feature importance, by identifying the feature(s) whose absence causes a significant decrease in prediction accuracy. When a legitimate example is perturbed into an out-of-distribution input, it is unknown whether the drop in accuracy is caused by the absence of certain feature(s) or because of the out-of-distribution of the perturbed example [21, 22]. (ii) The _second_ problem is the lack of robustness, which is inherent to the local interpretation approach and thus common to both the perturbation-based feature saliency method and the program reduction method. This is because the local interpretation approach optimizes the interpretation of each example independent of others, meaning that overfitting the noise associated with individual examples is very likely [23]. As a consequence, an interpretation would change significantly even by incurring a slight modification to an example, and this kind of sensitivity could be exploited by attackers to ruin the interpretability [24]. The weaknesses of the existing approaches motivate us to investigate better methods to interpret the predictions of deep learning-based code classifiers. **Our Contributions.** This paper introduces Robin, a novel method for producing high-fidelity and Robust interpreters in the post-hoc approach with local interpretation. Specifically, this paper makes three contributions. First, we address the aforementioned out-of-distribution problem by introducing a hybrid interpreter-approximator structure. More specifically, we design (i) an interpreter to identify the features that are important to make accurate predictions, and (ii) two approximators such that one is used to make predictions based on these important features and the other is used to make predictions based on the other features (than the important ones). These approximators are reminiscent of fine-tuning a classifier with perturbed training examples while removing some features. As a result, a perturbed test example is no longer an out-of-distribution example to the approximators, meaning that the reduced accuracy of the classifier can be attributed to the removal of features (rather than the out-of-distribution examples). 
To assess the importance of the features extracted by the interpreter, we use the approximators (rather than the given classifier) to mitigate the side-effect that may be caused by out-of-distribution examples. Second, we address the lack of interpretation robustness by leveraging the ideas of _adversarial training_ and _mixup_ to augment the training set. More specifically, we generate a set of perturbed examples for a training example (dubbed the _original_ example) as follows. (i) Corresponding to adversarial training but different from traditional adversarial training in other contexts, the ground-truth labels (i.e., what the \(k\) important features are) cannot be obtained, making it difficult to add perturbed examples to the training set for adversarial training. We overcome this by measuring the similarity between the interpretation of the prediction on the original example and the interpretation of the prediction on the perturbed example, which is obtained in the example space rather than feature space (i.e., the perturbed example is still a legitimate program with the same functionality as the original example). This similarity allows us to compute a loss in interpretability and leverage this loss to train the interpreter. (ii) Corresponding to mixup, we generate a set of _virtual_ examples by linearly interpolating the original examples and their perturbed versions in the feature space; these examples are _virtual_ because they are obtained in the feature space (rather than example space) and thus may not correspond to any legitimate code example (e.g., a virtual example may not correspond to a legitimate program). Different from traditional data augmentation, we train the interpreter and two approximators jointly rather than solely training the interpreter on virtual examples due to the lack of ground truth of the virtual examples (i.e., what the \(k\) important features are). Third, we empirically evaluate Robin's effectiveness and compare it with the known post-hoc methods in terms of _fidelity_, _robustness_, and _effectiveness_. Experimental results show that on average the interpreter produced by Robin achieves a 6.11% higher fidelity (evaluated on the classifier), 67.22% higher fidelity (evaluated on the approximator), and 15.87x higher robustness than that of the three existing interpreters we evaluated. Moreover, the interpreter is 47.31% less affected by out-of-distribution examples than that of LEMNA [25]. We have made the source code of Robin publicly available at [https://github.com/CGCL-codes/Robin](https://github.com/CGCL-codes/Robin). **Paper Organization.** Section II presents a motivating instance. Section III describes the design of Robin. Section IV presents our experiments and results. Section V discusses the limitations of the present study. Section VI reviews related prior studies. Section VII concludes the paper. ## II A Motivating Instance To illustrate the aforementioned problem inherent to the local interpretation approach, we consider a specific instance of code classification in the context of code functionality classification via TBCNN [2]. Although TBCNN offers code functionality classification capabilities, it does not offer any interpretability on its predictions. To make its predictions interpretable, an interpreter is required. 
We adapt the method proposed in [16], which was originally used to interpret software vulnerability detectors, to code functionality classification because there are currently no existing interpreters for this purpose (to the best of our knowledge). This adaptation is feasible because the method involves deleting features in the feature space and observing the impact on predictions, which is equally applicable to code functionality classification. The original code example in Figure 1 (a) is used to compare two strings for equality. We create a perturbed example by changing the variable names, as illustrated in Figure 1 (b). Despite the change in variable names, the perturbed example maintains the same functionality and semantics as the original example. Additionally, both the original and perturbed examples are classified into the same category by the classifier. Upon applying an interpreter, adapted from [16], to TBCNN, the five most important features of the original and perturbed examples are identified, and highlighted in Fig. 1(a) and 1(b) respectively. Notably, only one important feature is common between the two examples, revealing that the interpreter lacks robustness. This lack of robustness of the interpreter may cause users to question the reliability of the classifier's predictions due to the erroneous interpretation. ## III Design of Robin **Notations**. A program code example, denoted by \(x_{i}\), can be represented as a \(n\)-dimensional feature vector \(x_{i}=(x_{i,1},x_{i,2},\ldots,x_{i,n})\), where \(x_{i,j}\) (\(1\leq j\leq n\)) is the \(j\)th feature of \(x_{i}\). A code classifier (i.e., classification model) \(M\) is learned from a training set, denoted by \(X\), where each example \(x_{i}\in X\) is associated with a label \(y_{i}\). Denote by \(M(x_{i})\) the prediction of classifier \(M\) on a example \(x_{i}\). Our goal is to propose a novel method to produce an interpreter, denoted by \(E\), for _any_ given code classifier \(M\) and test set \(U\) such that for test example \(u_{i}\in U\), \(E\) identifies \(k\) important features to explain why \(M\) makes a particular prediction on \(u_{i}\), where \(k\ll n\). It is intuitive that the \(k\) important features of example \(u_{i}\) should be largely, if not exactly, the same as the \(k\) important features of \(u_{i}^{\prime}\) which is perturbed from \(u_{i}\). Denote by \(E(u_{i})=(u_{i,\alpha_{1}},...,u_{i,\alpha_{k}})\) the \(k\) important features identified by \(E\), where \(\{\alpha_{1},...,\alpha_{k}\}\subset\{1,\ldots,n\}\). ### _Basic Idea and Design Overview_ **Basic Idea.** In terms of the out-of-distribution problem associated with existing interpretation methods, we observe that the absence of perturbed examples in the training set makes a classifier's prediction accuracy with respect to the perturbed examples affected by the out-of-distribution examples. Our idea to mitigate this problem is to fine-tune a classifier for perturbed examples by using a hybrid interpreter-approximator structure [26] such that (i) one interpreter is for identifying the important features for making accurate prediction, (ii) one approximator is for using the important features (identified by the interpreter) to making predictions, and (iii) another approximator is for using the other features (than the important features) for making predictions. 
To improve the interpreter's fidelity, the two approximators are trained simultaneously such that the important features contain the most useful information for making predictions while the other features contain the least useful information for making predictions. To make the interpreter robust, we leverage two ideas. The _first idea_ is to use adversarial training [27, 28] where an original example and its perturbed example will have the same prediction. In sharp contrast to traditional adversarial training in other contexts where ground-truth can be obtained, it is difficult to obtain the ground-truth labels in this setting because we do not know which features are indeed the most important ones even for the training examples. That is, we cannot simply use the traditional adversarial training method to add perturbed examples to the training set because the "labels" (i.e., the \(k\) important features) of original examples and perturbed examples cannot be obtained. We overcome this by (i) generating a set of perturbed examples via code transformation such that the prediction on the perturbed example remains the same, and (ii) adding a constraint term to the loss function to make the interpretations of the original example and the perturbed example as similar to each other as possible. The _second idea_ is to leverage mixup [29] to augment the training set. In sharp contrast to traditional data augmentation, we cannot train the interpreter from the augmented dataset for the lack of ground-truth (i.e., the important features of an original example and its perturbed examples cannot be obtained). We overcome this issue by (i) using code transformation to generate a perturbed example such that its prediction remains the same as that of the original example, (ii) mixing the original examples with the perturbed examples to generate virtual examples, and (iii) optimizing the preliminary interpreter by training the interpreter and two approximators jointly on virtual examples. Note that the difference between the aforementioned adversarial examples and virtual examples is that the former are obtained by perturbation in the example space while the latter are obtained in the feature space. **Design Overview**. Fig. 2 highlights the training process of Robin, which produces an optimized interpreter in three steps. * **Step I: Generating perturbed examples.** This step generates perturbed examples from a training example by conducting semantics-preserving code transformations such that the perturbed example has the same prediction as that of the original example (a toy illustration of such a transformation is sketched below). * **Step II: Generating a preliminary interpreter.** Given a classifier that we want to equip with interpretability, this step leverages the perturbed examples generated in Step I to train two approximators and an interpreter Siamese network in an iterative fashion. The interpreter Siamese network identifies the important features of original examples and those of their perturbed examples, and then computes the difference between these two sets. * **Step III: Optimizing the preliminary interpreter.** This step optimizes the preliminary interpreter generated in Step II by using mixup [29] to augment the training set and update the preliminary interpreter's parameters. The optimized interpreter identifies important features of a test example. Fig. 1: An original example and its perturbed example (modified code is highlighted in blue color and italics), where red boxes highlight the 5 most important features.
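As a toy illustration of the semantics-preserving transformations behind Step I, the sketch below renames identifiers in a Python snippet using the standard ast module. It is only a minimal analogue of the coding-style transformations detailed in the next subsection (which target C/C++ attributes such as global declarations) and is not the transformation engine used by Robin.

```python
import ast

class RenameIdentifiers(ast.NodeTransformer):
    """Rename variables and parameters according to a fixed mapping
    (a semantics-preserving edit: behavior is unchanged)."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

    def visit_arg(self, node):
        if node.arg in self.mapping:
            node.arg = self.mapping[node.arg]
        return node

original = """
def is_equal(a, b):
    result = (a == b)
    return result
"""

tree = ast.parse(original)
tree = RenameIdentifiers({"a": "lhs", "b": "rhs", "result": "flag"}).visit(tree)
perturbed = ast.unparse(tree)  # requires Python 3.9+
print(perturbed)  # same functionality, different coding style
```

Only perturbed programs whose classifier prediction matches the original's would be kept, mirroring the filtering step described next.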
### _Generating Perturbed Examples_ This step has three substeps. First, for each training example \(x_{i}\in X\), we identify its coding style attributes related to the example's layout, lexis, syntax, and semantics (e.g., as defined in [30]). Let \(t_{i}\) denote the number of coding style attributes for \(x_{i}\in X\). Fig. 3(a) shows a training example where the first line uses a global declaration but can be transformed such that no global declaration is used; Fig. 3(b) describes its coding style attributes. Second, we randomly select \(\theta_{i}\) (\(\theta_{i}<t_{i}\)) coding style attributes, repeat this process \(m\) times, and transform the value of each of these coding style attributes to any one of the other semantically-equivalent coding style attributes. Consequently, we obtain \(m\) candidate perturbed examples \(x_{i,+1},x_{i,+2},\ldots,x_{i,+m}\), where \(x_{i,+j}\) (\(1\leq j\leq m\)) denotes the \(j\)th perturbed example generated by the semantic-equivalent code transformation of code example \(x_{i}\). The labels of the \(m\) perturbed examples preserve the original example \(x_{i}\)'s label \(y_{i}\) owing to the semantic-equivalent code transformations. As an instance, Fig. 3(c) shows a candidate perturbed example generated by transforming randomly selected coding style attributes, which are highlighted in red in Fig. 3(b). Third, we filter out perturbed examples whose prediction labels are different from the prediction labels of the corresponding original examples in \(X\). The reason is that if the prediction labels of the perturbed examples change, the robustness of the interpreter cannot be judged by the difference of the interpretation between the perturbed examples and the original examples. As an instance in Fig. 3(d), the prediction label of the code example in Fig. 3(a) and the prediction label of the candidate perturbed example are the same, so the candidate perturbed example is a perturbed example that can be used for robustness enhancement. Finally, we obtain the set of perturbed examples for robustness enhancement of the interpreter. Fig. 3: A code example showing generation of perturbed examples (selected coding style attributes and modified code are highlighted in red). Fig. 2: Overview of the training process of Robin which produces an optimized interpreter in three steps: generating perturbed examples, generating a preliminary interpreter, and optimizing the preliminary interpreter. ### _Generating a Preliminary Interpreter_ An ideal interpreter simultaneously achieves high fidelity and high robustness. (i) _High fidelity_ indicates that the important features identified by interpreter \(E\) contain as much information as possible that is most useful for code classification, and the remaining non-important features contain as little information as possible that is useful for code classification. (ii) _High robustness_ indicates that the important features identified by interpreter \(E\) to explain why \(M\) predicts \(x_{i}\) as the label \(y_{i}\) should not change dramatically for slightly perturbed examples which are predicted as label \(y_{i}\). Robin achieves this by first generating a preliminary interpreter in Step II and then optimizing the preliminary interpreter further in Step III. The purpose of Step II is to generate a preliminary interpreter by training two approximators and the interpreter Siamese network iteratively.
The basic idea is as follows: (i) To achieve high fidelity, we introduce two approximators that have the same neural network structure for code classification, using the identified important features and non-important features as input, respectively. Since important features contain the most useful information for code classification, the accuracy of the approximator using only important features as input should be as high as possible. On the other hand, non-important features contain less information that is important for code classification, so the accuracy of the approximator using only non-important features as input should be as low as possible. (ii) To achieve high robustness, we introduce the interpreter Siamese network with two interpreters that have the same neural network structure and share weights, using the original code examples and perturbed examples as input, respectively. For each original example and its corresponding perturbed examples, the Siamese network calculates the similarity distance between the important features of the original example and the important features of the perturbed examples identified by the two interpreters, and adds the similarity distance to the loss value to improve the interpreters' robustness during training. Fig. 4 shows the structure of the neural network involving an interpreter Siamese network and two approximators. The interpreter Siamese network involves two interpreters which have the same neural network structure and share weights. Their neural network structure depends on the structure of the code classifier to be explained. We divide the code classifier to be explained into two parts. One part extracts the features from the input code examples through a neural network to obtain the vector representation of the code examples, which is equivalent to an encoder; this part usually uses Batch Normalization, Embedding, LSTM, or Convolutional layers. The other part maps the vector representation to the output vector. When generating the structure of the interpreter, the first part of the code classifier is kept and the latter part is modified to a fully connected layer and a softmax layer, which maps the learned representation of code examples to the output space; the output is of the same length as the number of features, indicating whether each feature is labeled as important or not. These two interpreters are used to identify the important features of the code examples in training set \(X\) and the perturbed examples generated in Step I, respectively. The two approximators have the same neural network structure and are used to predict labels using important features and non-important features, respectively. They have the same neural network architecture as the code classifier to be interpreted. However, instead of the code example as input, the interpreter provides the approximator with the important or non-important features identified. As a result, the approximators can be seen as fine-tuned versions of the code classifier, trained on the datasets of important and non-important features. Fig. 4: Overview of Step II (generating a preliminary interpreter), involving training two approximators and training the interpreter Siamese network iteratively. Fig. 4 also shows the training process to generate a preliminary interpreter, involving the following two substeps.
**Step II.1: Training two approximators while attempting to minimize \(L_{s}\) and \(L_{u}\).** When training the approximator, only the model parameters of the approximator are updated. The training goal is to minimize the loss of both approximators \(A_{s}\) and \(A_{u}\), which is the sum of cross-entropies loss of \(A_{s}\) and \(A_{u}\): \[\min_{A_{s},A_{u}}{(L_{s}+L_{u})}, \tag{1}\] where \(L_{s}\) is the cross-entropy loss of approximator \(A_{s}\) and \(L_{u}\) is the cross-entropy loss of approximator \(A_{u}\). The loss of the approximator indicates the consistency between the prediction labels and the labels. **Step II.2: Training the interpreter Siamese network while attempting to minimize \(L_{s}\) and \(L_{diff}\), and maximize \(L_{u}\).** When training the interpreter Siamese network, only the model parameters of the interpreter are updated. The training goal is to minimize the loss of \(A_{s}\) and the discrepancy of the outputs between two interpreters \(E\) and \(E^{\prime}\), and maximize the loss of \(A_{u}\): \[\min_{E}{(L_{s}-L_{u}+L_{diff})}, \tag{2}\] where \(L_{diff}\) is the discrepancy of the outputs between two interpreters \(E\) and \(E^{\prime}\). The interpreter is trained so that (i) the loss of prediction using important features is minimized, (ii) the loss of prediction using non-important features is maximized, and (iii) the discrepancy of the outputs between two interpreters is minimized to improve the robustness of the interpreter. The difference value \(L_{diff}\) in the interpreter Siamese network represents the distance between the important features identified by the interpreter for the original examples and those for the perturbed examples. We use Jaccard distance [31] to measure the distance as follow: \[L_{diff}=1-\sum_{i,j}\frac{|E(x_{i})\cap E(x_{i,+j})|}{N\cdot m\cdot|E(x_{i}) \cup E(x_{i,+j})|} \tag{3}\] where \(N\) is the number of original code examples in the training set \(X\), and \(m\) is the number of perturbed examples corresponding to each original example. The more robust the interpreter is, the higher the similarity between the important features for the original examples and for the perturbed examples, the smaller the Jaccard distance, and the smaller the corresponding difference value \(L_{diff}\). During the training process, Step II.1 and Step II.2 are iterated until both the interpreters and the approximators converge. ### _Optimizing the Preliminary Interpreter_ The purpose of this step is to optimize the preliminary interpreter generated in Step II in both fidelity and robustness. The basic idea is to use mixup [29] for data augmentation to optimize the interpreter. There are two substeps. First, we generate virtual examples. For each code example \(x_{i}\) in training set \(X\), \(x_{i^{{}^{\prime}},+j}\) is a randomly selected perturbed example of \(x_{i^{\prime}}\), where \(x_{i^{\prime}}\) is randomly selected from \(X\), and may or may not be identical to \(x_{i}\). A virtual example is generated by mixing code examples and their corresponding labels. 
Specifically, the virtual example \(x_{i,mix}\) is generated by linear interpolation between the original example \(x_{i}\) and the perturbed example \(x_{i^{{}^{\prime}},+j}\), and the label \(y_{i,mix}\) of \(x_{i,mix}\) is also generated by linear interpolation between the label \(y_{i}\) of original example \(x_{i}\) and the label \(y_{i^{{}^{\prime}},+j}\) of perturbed example \(x_{i^{{}^{\prime}},+j}\), shown as follows: \[\begin{split}& x_{i,mix}=\lambda_{i}x_{i}+(1-\lambda_{i})x_{i^{{}^{ \prime}},+j}\\ & y_{i,mix}=\lambda_{i}y_{i}+(1-\lambda_{i})y_{i^{{}^{\prime}},+ j}\end{split} \tag{4}\] where the interpolation coefficients \(\lambda_{i}\) is sampled from the \(\beta\) distribution. Second, we update the interpreter's parameters based on the generated virtual examples. Since the output of the interpreter is the important features in code examples rather than the classification labels, it is impossible to train the interpreter individually for enhancement. Therefore, we use approximators for joint optimization with the interpreter \(E\). In this case, the input of the overall model are code examples and the output are the labels of code examples, which can be directly trained and optimized using the generated virtual examples. In the optimization process, the interpreter's parameters are updated while preserving the approximators' parameters unchanged. ## IV Experiments and Results ### _Evaluation Metrics and Research Questions_ **Evaluation Metrics.** We evaluate interpreters via their _fidelity_, _robustness_ against perturbations, and _effectiveness_ in coping with out-of-distribution examples. For quantifying fidelity, we adopt the metrics defined in [26, 32]. Consider a code classifier \(M\) trained from a training set \(X\), an interpreter \(E\), and a test set \(U\). Denote by \(E(u_{i})\) the set of important features identified by interpreter \(E\) for test example \(u_{i}\in U\). We train an approximator \(A_{s}\) in the same fashion as how \(M\) is trained except that we only consider the important features, namely \(\cup_{u_{i}\in U}E(u_{i})\). Let \(M(u_{i})\) and \(A_{s}(u_{i})\) respectively denote the prediction of classifier \(M\) and approximator \(A_{s}\) on example \(u_{i}\). Then, interpreter \(E\)'s fidelity is defined as a pair (FS-M\(\in[0,1]\), FS-A\(\in[0,1]\)), where FS-M\(=\frac{|\{u_{i}\in U:M(u_{i})=M(E(u_{i}))\}|}{|U|}\) is the fraction of test examples that have the same predictions by \(M\) using all features and by \(M\) only using the important features, and FS-A\(=\frac{|\{u_{i}\in U:M(u_{i})=A_{s}(E(u_{i}))\}|}{|U|}\) is the fraction of test examples that have the same predictions by \(M\) using all features and by \(A_{s}\) only using the important features [32]. Note that a larger (FS-M, FS-A) indicates a higher fidelity, meaning that the important features are indeed important in terms of their contribution to prediction. For quantifying robustness against perturbations, we adopt the metric proposed in [33], which is based on the average Jaccard similarity between (i) the important features of an original example and (ii) the important features of the perturbed example [31]. The similarity is defined over interval \([0,1]\) such that a higher similarity indicates a more robust interpreter. For quantifying effectiveness in coping with out-of-distribution examples, we adopt the metric defined in [21]. 
## IV Experiments and Results ### _Evaluation Metrics and Research Questions_ **Evaluation Metrics.** We evaluate interpreters via their _fidelity_, _robustness_ against perturbations, and _effectiveness_ in coping with out-of-distribution examples. For quantifying fidelity, we adopt the metrics defined in [26, 32]. Consider a code classifier \(M\) trained from a training set \(X\), an interpreter \(E\), and a test set \(U\). Denote by \(E(u_{i})\) the set of important features identified by interpreter \(E\) for test example \(u_{i}\in U\). We train an approximator \(A_{s}\) in the same fashion as \(M\) is trained, except that we only consider the important features, namely \(\cup_{u_{i}\in U}E(u_{i})\). Let \(M(u_{i})\) and \(A_{s}(u_{i})\) respectively denote the prediction of classifier \(M\) and approximator \(A_{s}\) on example \(u_{i}\). Then, interpreter \(E\)'s fidelity is defined as a pair (FS-M\(\in[0,1]\), FS-A\(\in[0,1]\)), where FS-M\(=\frac{|\{u_{i}\in U:M(u_{i})=M(E(u_{i}))\}|}{|U|}\) is the fraction of test examples that receive the same prediction from \(M\) using all features and from \(M\) using only the important features, and FS-A\(=\frac{|\{u_{i}\in U:M(u_{i})=A_{s}(E(u_{i}))\}|}{|U|}\) is the fraction of test examples that receive the same prediction from \(M\) using all features and from \(A_{s}\) using only the important features [32]. Note that a larger (FS-M, FS-A) indicates a higher fidelity, meaning that the important features are indeed important in terms of their contribution to the prediction. For quantifying robustness against perturbations, we adopt the metric proposed in [33], which is based on the average Jaccard similarity between (i) the important features of an original example and (ii) the important features of the perturbed example [31]. The similarity is defined over the interval \([0,1]\) such that a higher similarity indicates a more robust interpreter. For quantifying effectiveness in coping with out-of-distribution examples, we adopt the metric defined in [21]. Specifically, we vary the number of removed features \(q\) in increments of one eighth of the total number of features \(n\), starting at \(q=\frac{n}{8}\), i.e., \(q\in Q=\{\frac{n}{8},\frac{2n}{8},\cdots,\frac{7n}{8}\}\). For a given \(q\), we use the same training set to learn the same kind of classifier \(M_{q}\) after removing the \(q\) least important features (with respect to the interpreter), namely \(\cup_{u_{i}\in U}\widetilde{E}(u_{i})\), where \(\widetilde{E}(u_{i})\) is the code example \(u_{i}\) with the \(q\) least important features (with respect to the interpreter) removed, and the difference of accuracy between classifier \(M\) and retrained classifier \(M_{q}\) is defined as \(AD_{q}=\frac{|\{u_{i}\in U:M(u_{i})=M(\widetilde{E}(u_{i}))\}|-|\{u_{i}\in U:M(u_{i})=M_{q}(\widetilde{E}(u_{i}))\}|}{|U|}\). The degree to which the interpreter is impacted by out-of-distribution inputs is the average \(AD_{q}\) over \(q\in Q\). A smaller average difference of accuracy indicates a reduced impact of out-of-distribution inputs on the interpreter. Corresponding to the preceding metrics, our experiments are driven by three _Research Questions_ (RQs): * **RQ1**: What is Robin's fidelity? (Section IV-C) * **RQ2**: What is Robin's robustness against code perturbations? (Section IV-D) * **RQ3**: What is Robin's effectiveness in coping with out-of-distribution examples? (Section IV-E) ### _Experimental Setup_ **Implementation.** We choose two deep learning-based code classifiers: DL-CAIS [7] for code authorship attribution and TBCNN [2] for code functionality classification. We choose these two classifiers because they address different code classification tasks, use different kinds of code representations and different neural networks, are representative of the state of the art in code classification, and are open-sourced; these characteristics are necessary to test Robin's wide applicability. * **DL-CAIS** [7]. This classifier leverages a Term Frequency-Inverse Document Frequency (TF-IDF) based approach to extract lexical features from source code, and employs a _Recurrent Neural Network_ (RNN) to learn the code representation, which is then used as input to a random forest classifier to achieve code authorship attribution. In our experiment, we use a dataset from the _Google Code Jam_ (GCJ) [34, 35], which involves 1,632 C++ program files from 204 authors across 8 programming challenges and has been widely used for the code authorship attribution task [30, 35]. This dataset is different from the one used in [7], which is not available to us. * **TBCNN** [2]. The method represents source code as an _Abstract Syntax Tree_ (AST), encodes the resulting AST as a vector, uses a tree-based convolutional layer to learn the features in the AST, and uses a fully-connected layer and a softmax layer for making predictions. In our experiment, we use the dataset of a pedagogical Open Judge programming system, involving 52,000 C programs for 104 programming problems. This dataset is the same as the one used in [2] because it is publicly available. We implement Robin in Python using Tensorflow [36] to retrofit the interpretability of DL-CAIS and TBCNN. We run experiments on a computer with an RTX A6000 GPU and an Intel Xeon Gold 6226R CPU operating at 2.90 GHz. **Interpreters for Comparison.** We compare Robin with three existing interpreters: LIME [13], LEMNA [25], and the one proposed in [16], which represent the state of the art in feature-based post-hoc local interpretation of code classifiers.
More specifically, LIME [13] makes small local perturbations to an example and obtains an interpretable linear regression model based on (i) the distance between the perturbed example and the original example and (ii) the change to the prediction. As such, LIME can be applied to explain any classifier. LEMNA [25] approximates local nonlinear decision boundaries for complex classifiers, especially RNN-based ones with sequential properties, to provide interpretations in security applications. Meanwhile, the method in [16] interprets vulnerability detector predictions by perturbing feature values, identifying important features based on their impact on predictions, training a decision tree with the important features, and extracting rules for interpretation. Additionally, we establish a random feature selection method as a baseline. ### _What Is Robin's Fidelity? (RQ1)_ To determine the effectiveness of Robin on fidelity, we first train the two code classifiers DL-CAIS [7] and TBCNN [2] to be explained according to the settings in the literature, achieving 88.24% accuracy for code authorship attribution and 96.72% accuracy for code functionality classification. Then we apply Robin and the interpreters for comparison to the DL-CAIS and TBCNN models. For Robin, we set the candidate number of selected coding style attributes \(\theta_{i}\) to 4 and the number of important features selected by the interpreter \(k\) to 10. We split the dataset randomly by 3:1:1 for training, validation, and testing for TBCNN and use 8-fold cross-validation for DL-CAIS when training the interpreter. Table I shows the fidelity evaluation results on DL-CAIS and TBCNN for different interpreters. We observe that LIME and LEMNA achieve an average FS-M of 0.49% and an average FS-A of 2.70% for DL-CAIS, and an average FS-M of 6.73% and an average FS-A of 9.47% for TBCNN, performing even worse than the baseline. This can be explained by the fact that LIME and LEMNA do not perform well in multi-class code classification tasks due to the more complex decision boundaries of the classifiers. We also observe that Robin significantly outperforms the other interpreters in terms of the FS-M and FS-A metrics, except for Zou et al.'s method [16] in terms of FS-M on DL-CAIS: Robin achieves a 23.05% higher FS-A at the cost of a 19.60% lower FS-M. However, Zou et al.'s method [16] is much less robust to perturbed examples than Robin, as we will discuss in Section IV-D. Compared with the other interpreters, Robin achieves a 6.11% higher FS-M and a 67.22% higher FS-A on average, which indicates the high fidelity of Robin. For the time cost of interpreters, Table II shows the average interpretation time (in milliseconds) for each code example. We observe that Robin significantly outperforms the other three interpreters in terms of time cost. Note that while the baseline is less time-consuming, it has much lower fidelity and robustness than Robin (see Section IV-D). The other interpreters are significantly more time-consuming than Robin because they are optimized independently on a single code example and require a new perturbation and analysis each time a code example is interpreted, whereas Robin directly constructs an interpreter that applies to all code examples and automatically identifies the important features by simply feeding code examples into the interpreter model. Robin achieves a 99.75% reduction in time cost compared with the other three interpreters on average.
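The fidelity pair (FS-M, FS-A) and the Jaccard-based robustness score used in this evaluation can be computed directly from stored predictions and important-feature sets; the following is an illustrative sketch with hypothetical helper names, not code from the Robin implementation.

```python
def fidelity_scores(pred_m_full, pred_m_important, pred_as_important):
    """FS-M and FS-A: fractions of test examples whose prediction from the
    full-feature classifier M agrees with the prediction obtained from only
    the important features, by M and by the approximator A_s respectively."""
    n = len(pred_m_full)
    fs_m = sum(a == b for a, b in zip(pred_m_full, pred_m_important)) / n
    fs_a = sum(a == b for a, b in zip(pred_m_full, pred_as_important)) / n
    return fs_m, fs_a


def robustness_score(feats_original, feats_perturbed):
    """Average Jaccard similarity between the important-feature sets of each
    original example and of its perturbed counterpart (higher = more robust)."""
    similarities = []
    for a, b in zip(feats_original, feats_perturbed):
        union = a | b
        similarities.append(len(a & b) / len(union) if union else 1.0)
    return sum(similarities) / len(similarities)
```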
**Ablation Analysis.** Robin has two modules to improve the interpreter, i.e., adding \(L_{diff}\) to the loss of the interpreter (denoted as "Factor1"), and data augmentation using mixup (denoted as "Factor2"). To show the contribution of each module in Robin to the effectiveness of fidelity, we conduct the ablation study. We exclude Factor1, Factor2, and both Factor1 and Factor2 to generate three variants of Robin, respectively, and compare Robin with the three variants in terms of fidelity. Table III summarizes the fidelity evaluation results of Robin and its variants on DL-CAIS and TBCNN. We observe that Robin without Factor1, Factor2, or both Factor1 and Factor2 can reduce FS-M of 1.48-1.97% and FS-A of 1.96-4.90% for DL-CAIS, and reduce FS-M of 0.48-1.73% and FS-A of 1.53-2.88% for TBCNN. Robin without Factor1 and Factor2 achieves the worst results. This indicates the significance of Factor1 and Factor2 for the fidelity of Robin. **Effectiveness of Fidelity When Applied to Different Neural Network Structures.** To demonstrate the applicability of Robin to various neural network structures, we take DL-CAIS for instance to replace the _Recurrent Neural Network_ (RNN) layers of DL-CAIS with the _Convolutional Neural Network_ (CNN) layers (denoted as "DL-CAIS-CNN") and replace the RNN layers of DL-CAIS with the _Multi-Layer Perception_ (MLP) layers (denoted as "DL-CAIS-MLP"), respectively. We first train two code authorship attribution models DL-CAIS-CNN and DL-CAIS-MLP to be explained according to the settings of DL-CAIS [7]. We obtain a DL-CAIS-CNN with an accuracy of 91.18% and a DL-CAIS-MLP with an accuracy of 90.69% for code authorship attribution. Then we apply Robin and other interpreters for comparison to DL-CAIS-CNN and DL-CAIS-MLP respectively. Table IV shows the fidelity evaluation results for different interpreters on DL-CAIS with different neural networks. For DL-CAIS-CNN and DL-CAIS-MLP, Robin achieves a 40.07% higher FS-M and an 83.50% higher FS-A on average than the other three interpreters, which shows the effectiveness of Robin applied to different neural network structures. **Usefulness of Robin in Understanding Reasons for Classification.** To illustrate the usefulness of Robin in this perspective, we consider a scenario of code functionality classification via TBCNN [2]. The code example in Fig. 5 is predicted by the classifier as the functionality class "finding the number of factors". The interpreter generated by Robin extracts five features of the code example, which are deemed most relevant with respect to the prediction result and are highlighted via red boxes in Fig. 5. These five features are related to the remainder, division, and counting operators. By analyzing these five features, it becomes clear that the code example is predicted as "finding the number of factors" because the example looks for, and counts, the number of integers that can divide the input integer. _Insight 1: Robin achieves a 6.11% higher FS-M and a 67.22% higher FS-A on average than the three interpreters we considered._ ### _What Is Robin's Robustness? (RQ2)_ To evaluate the robustness of Robin against perturbations, we generate perturbed examples by using the semantics-preserving code transformation to code examples in the test set and filter out the perturbed examples that change the predicted labels of the classifier. We use these perturbed examples to test the robustness of interpreters. Table V summarizes the robustness evaluation results for different interpreters. 
We observe that the robustness of LIME and LEMNA on the code classifiers is very poor and only slightly higher than that of the baseline. This is caused by the following: LIME and LEMNA suffer from uncertainty, and thus there may be differences between the important features obtained when the same code example is interpreted multiple times. We also observe that the robustness of Zou et al.'s method [16] is higher than that of LIME and LEMNA, but still much lower than that of Robin. The average Jaccard similarity between the important features of the original examples identified by Robin and the important features of the adversarial examples is 1.94x higher than that of the state-of-the-art method [16] and 15.87x higher on average than that of the three interpreters we evaluated for DL-CAIS and TBCNN. This indicates that Robin is insensitive to semantics-preserving code transformations and has higher robustness against perturbations. Fig. 5: The interpretation of a specific instance of code classification in the context of code functionality classification, where red boxes highlight the 5 most important features. **Ablation Analysis.** To show the contribution of Factor1 and Factor2 in Robin to its robustness, we conduct an ablation study. We exclude Factor1, Factor2, and both Factor1 and Factor2 to generate three variants of Robin, respectively, and compare Robin with the three variants in terms of robustness for the number of important features \(k=10\). Table VI summarizes the robustness evaluation results of Robin and its three variants on DL-CAIS and TBCNN. We observe that Robin achieves the highest robustness, and removing Factor1 and/or Factor2 decreases its robustness, which indicates the significance of Factor1 and Factor2 for Robin's robustness. To show the impact of the number of important features \(k\) on the robustness, we take DL-CAIS as an example and compare Robin and its three variants in terms of the robustness of the interpreters based on \(k\) (e.g., 10, 20, 30, 40, and 50) important features. As shown in Table VII, the robustness decreases as \(k\) increases. This can be explained by the following: as \(k\) increases, less important features are added to the set of selected important features; these less important features are difficult for the interpreter to recognize consistently owing to their less prominent contribution to the prediction, and thus exhibit worse robustness against perturbations. We also observe that (i) Robin achieves the best robustness on DL-CAIS for all \(k\) values, and (ii) removing Factor1 or Factor2 or both of them from Robin decreases the robustness of Robin, which indicates the significance of Factor1 and Factor2 for the robustness of Robin. **Robustness Evaluation When Applied to Different Neural Network Structures.** To show the robustness of Robin when applied to different neural network structures, we adopt the DL-CAIS, DL-CAIS-CNN, and DL-CAIS-MLP models we trained in Section IV-C for interpretation. For DL-CAIS-CNN and DL-CAIS-MLP, we generate perturbed examples by applying the semantics-preserving code transformations to code examples in the test set and filter out the perturbed examples that change the prediction labels of the classifier. Table VIII shows the robustness evaluation results for different interpreters on DL-CAIS with different neural networks. For DL-CAIS-CNN and DL-CAIS-MLP, Robin achieves a 10.05x higher robustness on average, compared with the other three interpreters.
Though Robin achieves different robustness for different neural network structures, Robin achieves the highest robustness among all the interpreters we evaluated. _Insight 2: Robin achieves a 1.94x higher robustness than the state-of-the-art method [16] and a 15.87x higher robustness on average than the three interpreters we evaluated._ ### _What Is Robin's Effectiveness in Coping with Out-of-Distribution Examples? (RQ3)_ To demonstrate the effectiveness of Robin in coping with out-of-distribution examples, we conduct experiments with the number of removed non-important features \(q\in Q\)={100, 200, 300, 400, 500, 600, 700} for DL-CAIS and \(q\in Q\)={25, 50, 75, 100, 125, 150, 175} for TBCNN, according to the total number of features. Table IX shows the difference of accuracy \(AD_{q}\) between the classifier and the retrained classifier with \(q\) non-important features removed. We observe that the average difference of accuracy of Robin and the baseline method is very small, indicating that they are less affected by out-of-distribution examples. This can be explained by the fact that neither of these methods employs the change in the classifier's accuracy to assess the importance of features. Although the baseline method outperforms Robin on DL-CAIS, it has much lower fidelity and robustness than Robin, as discussed in Section IV-C and Section IV-D. In contrast, the average difference of accuracy achieved by LEMNA is notably larger than those of Robin and the baseline method, because LEMNA relies on changes in the classifier's accuracy to calculate the importance of features. Robin achieves a 24.21% smaller average difference of accuracy for DL-CAIS and a 70.41% smaller average difference of accuracy for TBCNN than LEMNA, indicating that Robin is, on average, 47.31% less affected by out-of-distribution examples than LEMNA. Robin is minimally affected by out-of-distribution examples, which is attributable to introducing the prediction accuracy of the retrained classifier to evaluate the importance of features. ## V Limitations The present study has limitations, which represent exciting open problems for future studies. First, our study does not evaluate the effectiveness of Robin on graph-based code classifiers and pre-trained models like CodeT5 [37] and CodeBERT [38]. The unique characteristics of these models pose challenges that require further investigation, particularly in the context of applying Robin to classifiers with more complex model structures. Second, Robin can identify the most important features but cannot give further explanations of why a particular prediction is made. To our knowledge, this kind of desired further explanation is beyond the reach of the current technology in deep learning interpretability. Third, Robin can identify the most important features that lead to the particular prediction for a given example, but cannot tell which training examples among those used to train the code classifier contribute to that prediction. Achieving this type of training-example traceability is important because it may help achieve better interpretability. ## VI Related Work **Prior Studies on Deep Learning-Based Code Classifiers.** We divide these models into three categories according to the code representation they use: _token-based_ [39, 5, 7] vs. _tree-based_ [40, 8, 4] vs. _graph-based_ [10, 41, 15]. Token-based models represent a piece of code as a sequence of individual tokens, while only performing basic lexical analysis.
These models are mainly used for code authorship attribution and vulnerability detection. Tree-based models represent a piece of code as a syntax tree, incorporating both lexical and syntactic analysis. These models are widely used for code authorship attribution, code functionality classification, and vulnerability detection. Graph-based models represent a piece of code as a directed graph, where a node represents an expression or statement and an edge represents a control flow, control dependence, or data dependence. These models are suitable for tasks involving complex code structures, such as vulnerability detection. We have shown how Robin can offer interpretability to token- and tree-based code classifiers [2, 7], but not to graph-based models, as discussed in the preceding section. **Prior Studies on Interpretation Methods for Deep Learning Models.** These studies are often divided into two approaches: _ante-hoc_ [11, 12] vs. _post-hoc_ [42, 43, 44, 45, 46, 47, 13], where the latter can be further divided into _global_ (i.e., seeking model-level interpretability) [43, 42] vs. _local_ (i.e., seeking example-level interpretability) [44, 45, 46, 47, 13] interpretation methods. In the context of code classification, the ante-hoc approach leverages the attention weight matrix [14, 15]. There is currently no post-hoc approach aiming at global interpretation in code classification, whereas the post-hoc approach aiming at local interpretation mainly leverages perturbation-based feature saliency [16, 17] and program reduction [18, 19]. Since ante-hoc interpretation methods cannot provide interpretations for given classifiers, we will not discuss them any further. On the other hand, existing post-hoc methods are not robust (Section IV-D); in particular, existing methods for local interpretation suffer from the problem of out-of-distribution examples [21, 22]. Robin addresses both the robustness issue and the out-of-distribution issue in the post-hoc approach to local interpretation, by introducing approximators to mitigate out-of-distribution examples and using adversarial training and data augmentation to improve robustness. **Prior Studies on Improving Robustness of Interpretation Methods.** These studies have been conducted in application domains other than code classification. In the image domain, one idea is to aggregate multiple interpretations [48, 24], and another idea is to smooth the model's decision surface [49, 47]. In the text domain, one idea is to eliminate the uncertainties that are present in the existing interpretation methods [50, 51], and another idea is to introduce continuous small perturbations to the interpretation and use adversarial training for robustness enhancement [27, 28]. To our knowledge, we are the first to investigate how to achieve robust interpretability in the code classification domain, while noting that none of the aforementioned methods that are effective in the other domains can be adapted to the code classification domain. This is because program code must follow strict lexical and syntactic requirements, meaning that perturbed representations may not be mapped back to real-world code examples, which is a general challenge when dealing with programs. This justifies why Robin initiates the study of a new and important problem. ## VII Conclusion We have presented Robin, a robust interpreter for deep learning-based code classifiers, such as those for code authorship attribution and code functionality classification.
The key idea behind Robin is to (i) use approximators to mitigate the out-of-distribution example problem, and (ii) use adversarial training and data augmentation to improve interpreter robustness, which is different from the widely-adopted idea of using adversarial training to achieve classifier's (rather than interpreter's) robustness. Experimental results show that Robin achieves a high fidelity and a high robustness, while mitigating the effect of out-of-distribution examples caused by perturbations. The limitations of Robin serve as interesting open problems for future research. ## Acknowledgments We thank the anonymous reviewers for their comments which guided us in improving the paper. The authors affiliated with Huazhong University of Science and Technology were supported by the National Natural Science Foundation of China under Grant No. 62272187. Shouhuai Xu was supported in part by the National Science Foundation under Grants #2122631, #2115134, and #1910488 as well as Colorado State Bill 18-086. Any opinions, findings, conclusions or recommendations expressed in this work are those of the authors and do not reflect the views of the funding agencies in any sense.
2309.07338
Overcoming near-degeneracy in the autologistic actor attribute model
The autologistic actor attribute model, or ALAAM, is the social influence counterpart of the better-known exponential-family random graph model (ERGM) for social selection. Extensive experience with ERGMs has shown that the problem of near-degeneracy which often occurs with simple models can be overcome by using "geometrically weighted" or "alternating" statistics. In the much more limited empirical applications of ALAAMs to date, the problem of near-degeneracy, although theoretically expected, appears to have been less of an issue. In this work I present a comprehensive survey of ALAAM applications, showing that this model has to date only been used with relatively small networks, in which near-degeneracy does not appear to be a problem. I show near-degeneracy does occur in simple ALAAM models of larger empirical networks, define some geometrically weighted ALAAM statistics analogous to those for ERGM, and demonstrate that models with these statistics do not suffer from near-degeneracy and hence can be estimated where they could not be with the simple statistics.
Alex Stivala
2023-09-13T22:14:12Z
http://arxiv.org/abs/2309.07338v2
# Overcoming near-degeneracy in the autologistic actor attribute model ###### Abstract The autologistic actor attribute model, or ALAAM, is the social influence counterpart of the better-known exponential-family random graph model (ERGM) for social selection. Extensive experience with ERGMs has shown that the problem of near-degeneracy which often occurs with simple models can be overcome by using "geometrically weighted" or "alternating" statistics. In the much more limited empirical applications of ALAAMs to date, the problem of near-degeneracy, although theoretically expected, appears to have been less of an issue. In this work I present a comprehensive survey of ALAAM applications, showing that this model has to date only been used with relatively small networks, in which near-degeneracy does not appear to be a problem. I show near-degeneracy does occur in simple ALAAM models of larger empirical networks, define some geometrically weighted ALAAM statistics analogous to those for ERGM, and demonstrate that models with these statistics do not suffer from near-degeneracy and hence can be estimated where they could not be with the simple statistics. _Keywords--_ autologistic actor attribute model, ALAAM, exponential-family random graph model, ERGM, near-degeneracy ## 1 Introduction The autologistic actor attribute model (ALAAM) is a statistical model of social influence, or contagion on a social network. The ALAAM, first introduced by Robins et al. (2001) and extended by Daraganova (2009) to its current form, is a variant of the exponential-family random graph model (ERGM), a widely-used model for social networks (Lusher et al., 2013; Ghafouri and Khasteh, 2020). Both ALAAM and ERGM are models for cross-sectional data, that is, a network and nodal attributes observed at one point in time (or preferably, for the ALAAM, the network and nodal attributes at one point, and the outcome binary attribute at a suitable later point (Parker et al., 2022)). The distinction between the ERGM and the ALAAM is that the ERGM models the probability of network ties, conditional on nodal attributes, while the ALAAM models the probability of a (binary) nodal attribute, conditional on the network (and other nodal attributes). The ALAAM, modeling the probability of attribute \(Y\) (a vector of binary attributes) given the network \(X\) (a matrix of binary tie variables) can be expressed as (Daraganova and Robins, 2013): \[\Pr(Y=y|X=x)=\frac{1}{\kappa(\theta_{I})}\exp\left(\sum_{I}\theta_{I}z_{I}(y,x, w)\right) \tag{1}\] where \(\theta_{I}\) is the parameter corresponding to the network-attribute statistic \(z_{I}\), in which the "configuration" \(I\) is defined by a combination of dependent (outcome) attribute variables \(y\), network variables \(x\), and actor covariates \(w\), and \(\kappa(\theta_{I})\) is a normalizing quantity which ensures a proper probability distribution. Table 1 shows some simple configurations for undirected networks used in this work, while Table 2 shows a more extensive list of configurations for directed networks used in this work. Both ERGMs and ALAAMs, because of the presence of the intractable normalizing constant, \(\kappa(\theta_{I})\) in (1), usually require Markov chain Monte Carlo (MCMC) methods for maximum likelihood estimation (MLE) of the parameters (Snijders, 2002; Hunter and Handcock, 2006; Hunter et al., 2012; Lusher et al., 2013; Amati et al., 2018; Koskinen, 2020). 
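For intuition about how such MCMC schemes operate, note that under (1) the full conditional log-odds that \(y_{i}=1\), holding the rest of the outcome vector fixed, is simply the inner product of the parameters with the corresponding change statistics. The following is a minimal illustrative Gibbs-sampling sketch for a simple undirected model with Density, Activity and Contagion effects (Table 1); it is a toy written for this exposition, not the implementation used by IPNet, MPNet or ALAAMEE.

```python
import numpy as np

def change_stats(adj, y, i):
    """Change statistics for toggling y_i from 0 to 1 (undirected network,
    adj a 0/1 adjacency matrix, y a 0/1 outcome vector): Density (always 1),
    Activity (degree of i), Contagion (neighbours of i with the attribute)."""
    neighbours = np.flatnonzero(adj[i])
    return np.array([1.0, float(len(neighbours)), float(y[neighbours].sum())])

def gibbs_sweep(adj, y, theta, rng):
    """One Gibbs sweep over all nodes: each y_i is resampled from its full
    conditional, whose log-odds is the inner product of theta with the
    change statistics for node i."""
    for i in rng.permutation(len(y)):
        log_odds = float(theta @ change_stats(adj, y, i))
        y[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-log_odds)) else 0
    return y
```

Repeated sweeps from an arbitrary starting vector yield (after burn-in) draws from (1) for fixed parameters; maximum likelihood estimation then wraps such simulation inside a stochastic approximation or equilibrium expectation scheme.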
Once the parameters and their standard errors are estimated, they can be used for inferences regarding the corresponding configurations. A parameter estimate that is statistically significant and positive indicates an over-representation of the corresponding configuration, conditional on all the other parameters in the model. Conversely, a parameter that is statistically significant and negative indicates an under-representation of that configuration given all the others in the model. A well-known problem with ERGMs is that simple model specifications can lead to "near-degeneracy" in which the MLE does not exist, or the model generates distributions of graphs in which most of the probability mass is placed on (nearly) empty or (nearly) complete graphs (Handcock, 2003; Snijders et al., 2006; Hunter, 2007; Schweinberger, 2011; Chatterjee and Diaconis, 2013; Schweinberger et al., 2020). This problem is usually overcome by the use of more complex "alternating" or "geometrically weighted" configurations (Snijders et al., 2006; Robins et al., 2007; Hunter, 2007; Lusher et al., 2013), however other forms of additional mathematical structure can also be used to solve (or avoid) the problem of near-degeneracy (Schweinberger et al., 2020) Since the ALAAM, like the ERGM, is a type of Gibbs random field, and specifically the ALAAM derives from the autologistic Ising model (Besag, 1972), it is to be expected, that, like the ERGM, problems of near-degeneracy would arise due to the well-known phase transition behaviour in such models (Fellows and Handcock, 2017; Stoehr, 2017). It has, however, been observed that for ALAAMs "this is less of an issue" (Koskinen and Daraganova, 2022, p.1856), and indeed "alternating" or "geometrically weighted" statistics have to date not been described for ALAAMs, with published models using simple configurations such as those shown in Table 1 and Table 2. In this work I will show that this could be due to the somewhat limited experience with ALAAMs to date, and specifically that their use has been restricted to relatively small networks. I demonstrate that near-degeneracy does occur in ALAAMs with empirical networks, and propose new geometrically weighted statistics, analogous to the geometrically weighted degree statistics for ERGMs, that overcome this problem and allow estimation of ALAAM models that could not be estimated using, for undirected networks, the Activity statistic (Table 1) or, for directed networks, the Sender and Receiver statistics (Table 2). ## 2 Survey of ALAAM applications As noted by Parker et al. (2022, p. 517), empirical experience with ALAAMs is recent and limited. This is particularly so relative to the social selection model ERGM, which is widely used across a variety of domains; for a recent survey see Ghafouri and Khasteh (2020), as well as, for example Lusher et al. (2013); Amati et al. (2018); Cimini et al. (2019). It is therefore practical to present a comprehensive survey of empirical ALAAM usage. I used Google Scholar to search for "autologistic actor attribute model" (search date 24 August 2023), which resulted in 34 hits. Note that, as is well known, Google Scholar includes not just peer-reviewed publications, but "grey literature" such as PhD theses, unpublished preprints and technical reports, among others. 
I chose not to restrict this literature survey to peer-reviewed publications, but to also include preprints, conference presentations, and PhD theses, as long as they \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline Name & Illustration & Description \\ \hline Density & & Baseline attribute density (incidence). Also used with directed networks \\ Activity & & Tendency for actor with the attribute to have ties \\ Contagion & & Tendency for actor with the attribute to be tied to an actor also with the attribute \\ & & Covariate effect for continuous covariate _attribute_. The ”_oOc” notation is from IPNet (Wang et al., 2009a), and we may omit this when there is no ambiguity, e.g. “Age_oOc” may also be written simply as “Age”. Also used in directed networks \\ \hline \hline \end{tabular} \end{table} Table 1: Configurations used in ALAAMs for undirected networks in this work. \begin{table} \begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt}} \hline Name & Illustration & Description \\ \hline Sender & Tendency of actors with the attribute to have outgoing ties (activity) \\ Receiver & Tendency of actors with the attribute to have incoming ties (popularity) \\ Contagion & Tendency of the attribute to be present in both actors connected by directed tie \\ Reciprocity & Tendency of the attribute to be present in an actor connected to another by a reciprocated (mutual) tie \\ Contagion reciprocity & Also known as mutual contagion. Tendency of the attribute to be present in both actors connected by a reciprocated tie \\ Ego in-two-star & Tendency of the attribute to be present in an actor with additional incoming ties over Receiver \\ Ego out-two-star & Tendency of the attribute to be present in an actor with additional outgoing ties over Sender \\ Mixed-two-star & Tendency of the attribute to be present in an actor in the broker position between two other nodes (local brokerage) \\ Mixed-two-star source & Tendency of the attribute to be present in an actor in the source position in local brokerage \\ Mixed-two-star sink & Tendency of the attribute to be present in an actor in the sink position in local brokerage \\ Transitive triangle T1 & Tendency of the attribute to be present in an actor in a transitive triangle, the broker position in Mixed-two-star bypassed by a transitive tie \\ Transitive triangle T3 & Contagion clustering: tendency of the attribute to be present in all three actors in a transitive triangle \\ \hline \end{tabular} \end{table} Table 2: Configurations used in ALAAMs for directed networks in this work. met the same criteria I defined for publications, namely: 1. The ALAAM model is applied to empirical data. This excludes, for example, Stivala et al. (2020b), which is a simulation study, rather than an application to empirical data. 2. The model used was an ALAAM as described in this work; the family of model implemented for example by IPNet (Wang et al., 2009a) and its successor software MPNet (Wang et al., 2014, 2022). Note that this excludes the original ALAAM paper (Robins et al., 2001), in which the outcome variable is not dichotomous (binary), but rather polytomous (three values). This paper also predates the introduction of the name "autologistic actor attribute model", and uses maximum pseudo-likelihood for estimation. 
This criterion also excludes the more recent exponential-family random network model (ERNM), a generalization of the ERGM and ALAAM, which models both social selection and social influence simultaneously (Fellows and Handcock, 2012, 2013; Wang et al., 2023). 3. The work is either publicly available, or available to me via my affiliation at Universita della Svizzera italiana. This initial search was supplemented by searching for the same terms using Clarivate Web of Science and Elsevier Scopus (search date 30 August 2023). These searches results in 7 and 34 results, respectively, with a large overlap with the Google Scholar results. I further supplemented these results by adding some works with which I was personally familiar, because, for example, I am an author or I was informed of their existence by an author. The final list of 19 works, containing 25 empirical ALAAM models, is shown in Table 3. \begin{table} \begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline Citation & Network description & Outcome description & Network size & Estimation method & Comments \\ \hline Barnes et al. (2020) & Multilevel social-ecological: households with communication relationships, fish species with trophic relationships, cross-level fishing targets & Two models: Adaptive action and transformative action & 198 & MPNet & Multilevel network with 138 households, 60 fish species \\ Bodin and Chen (2023) & Multilevel social-ecological: affective relations, organization-based collaboration, rangeland use, and species dispersal and livestock movement & Highly adaptive (dichotomized from continuous measure of change in number of grazing patches) &? MPNet from figures in S.I. appears to be less than 100 \\ Bryant et al. (2017) & Social network in a post-disaster community & Two models: Probably depression and probably posttraumatic stress disorder (PTSD) & 558 & MPNet & Directed network \\ Daraganova and Robins (2013) & Social network in a high unemployment region & Unemployment & 551 & IPNet & Two-wave snowball sample. includes geographic proximity covariate \\ Diviak et al. (2020) & Collaboration network among organized crime offenders & Female gender & 1390 & IPNet & Not being used as a social influence model, rather a network discriminant analysis. Includes pre-existing ties network as a setting network covariate & Pre-exposure (PrEP) uptake & prophylactic & 284 & MPNet & Houston (25 venues and 259 YMSM) \\ Fujimoto et al. (2019) & Multilevel referral-affiliation network of client-referral ties from community-based organizations (CBOs) to PrEP and utilization by young men who have sex with men (YMSM) of CBOs and PrEP providers & Pre-exposure (PrEP) uptake & 308 & Chicago (24 venues and 284 YMSM) \\ Gallagher (2019) & Core discussion network among English-for-Academic-Purposes international students & Willingness to communicate in English (dichotomized from percentage of time) & 308 & Chicago (24 venues and 284 YMSM) \\ Kashima et al. (2013) & Social network in a regional community (2013) & Willingness to communicate in English (dichotomized from percentage of time) & 67 & MPNet & Directed network \\ \hline \end{tabular} \end{table} Table 3: Literature survey of works using the ALAAM. 
\begin{table} \begin{tabular}{l l l l l l} \hline Citation & Network description & Outcome description & Network size & Estimation method & Comments \\ \hline Koskinen and Daraganova (2022) & Directed friendship network in an all-marganova (2022) & High masculinity index (dichotomized from Matsumoto Index) & 106 & R code data. Also includes re-analysis of the Daraganova and Robins (2013) unemployment data \\ & Directed friendship network from Stockholm Birth Cohort data & Intention to proceed to higher secondary education & 403 & \\ Letina (2016) & Co-authorship network for two fields of social science in Croatia & High productivity (two models: dichotomized from number of publications, or H-index) & 125 & MPNet & Psychology \\ & Letina et al. (2016) & Co-authorship network for three fields of social science in Croatia & One or more ties outside the national and/or disciplinary community (NDC) & 160 & MPNet & Psychology \\ & Matous and Bodin (2021) & Advice network regarding cocoa farming practices & Farmers’ use of fertilizer & 136 & Sociology Educational sciences \\ & Nedihardt (2016) & Friendship network of schoolchildren in Glasgow & Farmers’ use of fertilizer & 71 & MPNet & Undirected network. Fourteen networks from size 25 to 199 \\ & Partners (co-players) in an online game & Smoking behaviour (dichotomized from occasionally or regularly) & 160 & IPNet & Undirected network \\ & Partners (co-players) in an online game & Cancelled subscription to game & 2587 & Took two days to estimate in IPNet and the results are not stable (Neidhardt, 2016, p. 106) \\ & Ocelik et al. (2021) & Long-term cooperation network of people opposed to the rescinding of coal-mining limits in the Czech Republic & High-level participation (dichotomized from continuous differential participation scale) & 38 & MPNet & Undirected network \\ & Parker et al. (2022) & Directed advice network among students in a management course & Two models: high performance and low performance (dichotomized from grades) & 133 & MPNet & \\ Rank (2014) & Collaboration network among top managers of all member companies and organizations in a regional biotech network & & & & \\ \hline \end{tabular} \end{table} Table 3: Literature survey of works using the ALAAM. \begin{table} \begin{tabular}{l l l l l l} \hline Citation & Network description & Outcome description & Network & Estimation & Comments \\ & & & size & method & \\ \hline Song et al. (2020) & Social network of an online weight-loss community & Self-monitoring performance (dichotomized from continuous score) & 724 & IPNet & Undirected network. Estimation method not reported, but effect names indicate IPNet \\ Stadtfeld et al. (2019) & Positive interactions, friendship, and studying together networks among engineering undergraduate students & Passing the final exam & 163 & MPNet & Analysis uses stochastic actor-oriented model (SAOM) (Snijders, 2017) for network evolution, with ERGM for robustness check, and linear regression for final exam result, with logistic regression, network autocorrelation, and ALAAM as robustness checks \\ Stivala et al. (2023b) & Director interlock network & Female gender & 12058 & ALAAMEE & As in Divi\& et al. (2020), not being used as a social influence model, rather a network discriminant analysis. Bipartite network, 9971 directors and 2087 companies. 
Estimated with stochastic approximation \\ Wood (2019) & Friendship network in a novel mobile platform & Commitment to vote in an election & 74 & MPNet & Undirected network \\ \hline \end{tabular} \end{table} Table 3: Literature survey of works using the ALAAM. In all but two cases, the ALAAM was estimated with stochastic approximation (Snijders, 2002), using either the IPNet or MPNet software. The first exception is Koskinen and Daraganova (2022), which describes Bayesian estimation of the ALAAM, accompanied by R code which implements this method. The second exception is Stivala et al. (2023b), in which the ALAAM is estimated using the ALAAMEE software (Stivala et al., 2023a), also used in this work. In Stivala et al. (2023b), ALAAM models for the 12058 node bipartite director interlock network were estimated using stochastic approximation (the same algorithm implemented in IPNet and MPNet). However a converged ALAAM for the larger director interlock network (Evtushenko and Gastner, 2020) with 356638 nodes (321869 directors and 34769 companies) could not be found, using either the stochastic approximation or equilibrium expectation algorithms implemented in ALAAMEE. In contrast, converged ERGM models for both networks, using "alternating" star statistics for bipartite networks (Wang et al., 2009b) were found, using the EstimNetDirected software (Stivala et al., 2020a) The mean network size (number of nodes) in Table 3 is 832.1, the median is 160, and the maximum is 12058. (Of the 26 models, one did not specify the network size, and hence these results are over 25 networks.) However, excluding the single use of ALAAMEE, the mean is 364.4, the median 160, and the maximum 2587. Even for this 2587 node network, it is noted that the estimation using IPNet took two days, and the results were "not stable" (Neidhardt, 2016, p. 106). The largest network for which estimation (with IPNet) was not problematic is the 1390 node network in Diviak et al. (2020). The largest network used in the simulation studies described in Stivala et al. (2020b) is 4430 nodes, however although this is an empirical network, the binary outcome attribute is not itself an empirical covariate, but rather simulated from an ALAAM model for the purposes of testing statistical inference using a model with known parameters. This demonstrates that, with the exception of some very recent (and currently ongoing) work (Stivala et al., 2023a,b), empirical experience with ALAAMs is mostly restricted to networks of the order of a few hundred nodes in size, and certainly no larger than a few thousand. ## 3 Near-degeneracy with standard ALAAM parameters In this and the following sections, three networks will be used as examples. First, a network of friendship relations between students in a high school in Marseilles, France, collected in December 2013 by the SocioPatterns research collaboration (Mastrandrea et al., 2015). This is a directed network of friendship relations, where an arc from a node \(i\) to a node \(j\) indicates that student \(i\) reported a friendship with student \(j\). The school class and gender (male or female) of each student is known (one is unknown), and male gender is used as the binary "outcome" attribute. In this way, the ALAAM is not being used as a social influence model (it is not assumed that gender is affected by network position), but rather as a way of making inferences about the structural positions of males in the network, as was done for female gender in Diviak et al. (2020); Stivala et al. (2023b). 
Similar considerations apply to the other two networks: I am not actually using ALAAM as a social influence model, but merely using these examples to illustrate problems of near-degeneracy and how to overcome it with the new geometrically weighted activity statistic. The second network is a large online social network of GitHub (an online platform for software development) software developers, collected in June 2019 (Rozemberczki et al., 2021). Nodes are developers (who have "starred" at least ten repositories) and undirected edges are mutual "follower" relationships between them. This data set was created for binary node classification, and the target binary feature, which is used here as the binary outcome attribute, is the developer type, either "web" or "machine learning" (Rozemberczki et al., 2021). Here this developer type is used as the outcome variable -- it is not clear which developer type the nonzero value of this variable indicates, so I do not ascribe any meaning to ALAAM inferences regarding this variable (and, again, nor do I actually make the assumption that the developer type is subject to social influence). The third network is the "Pokec" online social network, at one time the most popular such network in Slovakia (Takac and Zabovsky, 2012). Arcs in this network represent directed "friendship" relations, and the nodes are annotated with a number of attributes, including age and gender. Again, male gender is used as the binary "outcome" attribute here. As described Stivala et al. (2020a), the 20 "hub" nodes with degree greater than 1000 are removed. Two versions of this network are considered here, the original directed version, and an undirected version in which only mutual "friendship" relations are retained, as is done in Kleineberg and Boguna (2014). Descriptive statistics of the networks are shown in Table 4, and of the nodes with (\(y_{i}=1\)) and without (\(y_{i}=0\)) the outcome attribute in Table 5. The high school network is of a size that is typical of current publications using the ALAAM (see Section 2), but the GitHub and Pokec networks are orders of magnitude larger. These are too large to estimate in practical time using the stochastic approximation algorithm, and so although the high school network models will be estimated using stochastic approximation, the GitHub and Pokec models will be estimated using the equilibrium expectation algorithm instead, which is suitable for very large networks (Byshkin et al., 2016, 2018; Borisenko et al., 2020; Stivala et al., 2020) The motivation for this work was my inability to find converged (non-degenerate) ALAAM models for large networks, such as the Pokec and GitHub networks, when the Activity parameter was included, as it typically is in an ALAAM model. Figure 1 shows why this is so. These plots show, for the (undirected) GitHub and Pokec networks, the value of the Activity statistic in simulated ALAAM outcome vectors, as the corresponding parameter is varied from \(-1.0\) to \(1.0\) in increments of \(0.01\). Each data point is the result of one of \(100\) samples from the ALAAM distribution drawn every \(10^{6}\) iterations after a burn-in period of \(10^{7}\) iterations, using the simulateALAAM function of ALAAMEE (Stivala et al., 2023). The Density and Contagion parameters are fixed at \(-0.50\) and \(0.50\), respectively, for GitHub, and \(-0.155\) and \(-0.008\), respectively, for Pokec. These values were chosen to be in the vicinity of the estimated values in the (non-converged) models. 
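The diagnostic underlying Figure 1 can be reproduced in outline with the toy Gibbs sampler sketched earlier: fix all parameters except one, sweep that parameter over a grid, and record the corresponding statistic in sampled outcome vectors. The sketch below uses far fewer iterations than the \(10^{7}\) burn-in and \(10^{6}\)-iteration sampling interval used with ALAAMEE's simulateALAAM, whose interface is not reproduced here; all names are illustrative.

```python
import numpy as np

def activity_statistic(adj, y):
    # Sum of the degrees of the nodes with the outcome attribute.
    degree = adj.sum(axis=1)
    return float(degree[y == 1].sum())

def activity_parameter_sweep(adj, theta_density, theta_contagion, rng,
                             grid=None, burnin=200, n_samples=20):
    """Record the Activity statistic in sampled outcome vectors as the
    Activity parameter varies, holding Density and Contagion fixed."""
    if grid is None:
        grid = np.arange(-1.0, 1.0 + 1e-9, 0.01)
    results = []
    for theta_activity in grid:
        theta = np.array([theta_density, theta_activity, theta_contagion])
        y = rng.integers(0, 2, size=adj.shape[0])   # random starting vector
        for _ in range(burnin):
            y = gibbs_sweep(adj, y, theta, rng)
        for _ in range(n_samples):
            y = gibbs_sweep(adj, y, theta, rng)
            results.append((float(theta_activity), activity_statistic(adj, y)))
    return results
```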
It is clear that there is a near discontinuity in the Activity statistic, with a strong peak in its variance, characteristic of the phase transition in the Ising and Potts models (Stoehr, 2017). This is similar to the well-known near-degeneracy in Markov (for example, edge-star and edge-triangle) ERGM models, as described in, for example, Handcock (2003); Snijders et al. (2006); Robins et al. (2007); Koskinen and Daraganova (2013), which often prevents the estimation of such models. ## 4 A geometrically weighted activity statistic Since this near-degeneracy in the ALAAM with the Activity parameter appears very similar to that which occurs in the ERGM with the star parameter, the solution may well also be similar. In the ERGM, near-degeneracy in such models is usually avoided by using, rather than two-star, three-star, etc. terms, an "alternating \(k\)-star" or "geometrically weighted degree" parameter (Robins et al., 2007; Lusher et al., 2013), as proposed by Snijders et al. (2006); Hunter (2007). Here I will follow Snijders et al. (2006, s. 3.1.1) in using geometrically weighted degree counts for ERGMs, in order to create a geometrically weighted activity statistic for ALAAMs. First, note that the Activity statistic is \[z_{\text{Activity}}(y)=\sum_{i:y_{i}=1}d(i), \tag{2}\] \begin{table} \begin{tabular}{l r r r r r r r} \hline \hline Network & Directed & Nodes & Size of giant & Mean & Max. & Max. & Density & Clustering \\ & & & component & degree & in-degree & out-degree & & coefficient \\ \hline GitHub & No & 37700 & 37700 & 15.33 & 9458 & 9458 & 0.00041 & 0.01236 \\ Pokec & No & 1632783 & 1197779 & 10.16 & 671 & 671 & 0.00001 & 0.06854 \\ Pokec & Yes & 1632783 & 1632199 & 18.69 & 949 & 998 & 0.00001 & 0.05369 \\ High school & Yes & 134 & 128 & 4.99 & 15 & 16 & 0.03748 & 0.47540 \\ \hline \hline \end{tabular} Network statistics computed using the igraph (Csárdi and Nepusz, 2006) R package. “Clustering coefficient” is the global clustering coefficient (transitivity) \end{table} Table 4: Network descriptive statistics for the example networks. \begin{table} \begin{tabular}{l r r r r r} \hline \hline Network & Directed & \multicolumn{2}{c}{\(y_{i}=1\)} & & & \\ & & nodes \% & \multicolumn{2}{c}{Outcome \(y_{i}=0\) nodes} & \multicolumn{2}{c}{Outcome \(y_{i}=1\) nodes} \\ \cline{3-6} & & & Mean & Mean & Mean & Mean \\ & & & in-degree & out-degree & in-degree & out-degree \\ \hline GitHub & No & 26 & 17.67 & 17.67 & 8.63 & 8.63 \\ Pokec & No & 49 & 10.68 & 10.68 & 9.62 & 9.62 \\ Pokec & Yes & 49 & 20.55 & 18.34 & 16.78 & 19.06 \\ High school & Yes & 40 & 4.79 & 4.69 & 5.28 & 5.43 \\ \hline \hline \end{tabular} \end{table} Table 5: Mean degrees of nodes with and without the outcome attribute. Figure 1: Effect on the Activity statistic (scatterplot, left, and variance, right) of varying the Activity parameter, in the GitHub social network (top) and undirected Pokec social network (bottom). The red dashed horizontal line shows the observed value of the statistic. where \(d(i)\) denotes the degree of node \(i\). That is, it is the sum of the degrees of each node for which the outcome binary attribute \(y_{i}=1\). And hence the change statistic (Hunter and Handcock, 2006; Snijders et al., 2006; Hunter et al., 2012), that is, the change in the statistic when \(y_{i}\) is changed from 0 to 1 for some node \(i\), for the Activity statistic, is just \(d(i)\). The geometrically weighted degree count for ERGM is defined by Snijders et al. (2006, (Eq. 
11)) as \[u_{\alpha}^{(d)}(x)=\sum_{k=0}^{N-1}e^{-\alpha k}d_{k}(x)=\sum_{i=1}^{N}e^{- \alpha d(i)}, \tag{3}\] where \(N\) is the number of nodes, \(d_{k}(x)\) is the number of nodes of degree \(k\), and \(\alpha>0\) is the degree weighting parameter, controlling the geometric rate of decrease of weights as node degree increases (Snijders et al., 2006, p. 112). Analogously, I define the geometrically weighted activity (GWActivity) statistic for ALAAMs as \[z_{\text{GWActivity}(\alpha)}(y)=\sum_{i:y_{i}=1}e^{-\alpha d(i)}. \tag{4}\] The change statistic for GWActivity is then simply \[\delta_{\text{GWActivity}(\alpha)}^{(i)}(y)=e^{-\alpha d(i)}. \tag{5}\] Note that \(\alpha\) is not a model parameter, but rather is fixed at a given value (although of course it may be adjusted as necessary for better convergence or model fit). For large values of \(\alpha\), the contribution of higher degree nodes with the outcome attribute is decreased. As \(\alpha\) decreases to zero, increasing weight is placed on ALAAM outcome vectors with the outcome attribute on high degree nodes. If \(\alpha\), or an equivalent parameter, is estimated as part of the model, then the model becomes a member of the curved exponential family (Hunter, 2007). However in this work the value of \(\alpha\) is fixed at the "traditional" value of \(\alpha=\ln(2)\) as in Snijders et al. (2006). Via the mathematical relationships described in Snijders et al. (2006); Hunter (2007), this corresponds to the default value of the decay parameter \(\lambda=2\) for the alternating \(k\)-star parameter (Robins et al., 2007) familiar to users of the PNet and MPNet software. As described in Snijders et al. (2006, p. 114), the ERGM change statistic corresponding to the geometrically weighted degree statistic (3) is a non-decreasing function, with the change becoming smaller as the degrees become larger, and for \(\alpha>0\) the change statistic is negative. Hence the conditional log-odds of a tie is greater for a tie between high degree nodes than for a tie between low degree nodes. The ALAAM change statistic for GWActivity (5), by contrast, is positive, and a non-increasing function, when \(\alpha>0\). Changing a node outcome attribute from zero to one causes the GWActivity statistic (4) to increase, but by a larger amount for low degree nodes than high degree nodes. Hence the conditional log-odds for a node having the outcome attribute is greater for a low degree node than for a high degree node, but in a non-linear fashion, with the marginal decrease in log-odds decreasing geometrically with degree. Note that the geometrically weighted activity statistic for ALAAMs I have defined here is analogous to the that for ERGMs defined by Snijders et al. (2006), and not the different geometrically weighted degree statistic defined by Hunter (2007), and familiar to users of the statnet ERGM software packages (Handcock et al., 2008, 2016, 2022; Krivitsky et al., 2023). The relationship between those statistics is discussed Hunter (2007, p. 222). For directed networks, I also define GWSender, the geometrically weighted sender statistic, as \[z_{\text{GWSender}(\alpha)}(y)=\sum_{i:y_{i}=1}\exp\left(-\alpha d^{\text{( out)}}(i)\right), \tag{6}\] where \(d^{\text{(out)}}(i)\) is the out-degree of node \(i\). GWReceiver, the geometrically weighted receiver statistic is \[z_{\text{GWReceiver}(\alpha)}(y)=\sum_{i:y_{i}=1}\exp\left(-\alpha d^{\text{( in)}}(i)\right), \tag{7}\] where \(d^{\text{(in)}}(i)\) is the in-degree of node \(i\). 
The corresponding change statistics are \[\delta_{\text{GWSender}(\alpha)}^{(i)}(y)=\exp\left(-\alpha d^{\text{(out)}}( i)\right) \tag{8}\] and \[\delta^{(i)}_{\text{GWReciver}(\mathbf{\alpha})}(y)=\exp\left(-\alpha d^{(\text{in})}(i )\right). \tag{9}\] In order to examine the behaviour of the new GWActivity statistic to verify that it removes the near-degenerate behaviour apparent with the standard Activity statistic, I conducted simulation experiments similar to those described above for Figure 1. Figure 2 shows, for the same two networks, the value of the GWActivity statistic as the corresponding parameter is varied (again in increments of 0.01, and with the same burn-in and iterations). The Density and Contagion parameters are fixed at \(-1.28\) and 0.002 for GitHub, and the same as described for Figure 1 for Pokec. These parameters were chosen to be in the vicinity of estimated parameters (from models similar to those described in Section 5.2). Figure 2 shows that the phase transition apparent in Figure 1 no longer occurs with this parameterization, with the statistic instead being a smoothly non-decreasing function of the parameter. Furthermore, the curve of the statistic values intersects with the observed value at a point where the slope of curve is not extreme, and there is no near discontinuity (unlike Figure 1), suggesting that maximum likelihood estimation is less likely to be problematic. ### Interpretation of the new parameters As described in Daraganova and Robins (2013), the interpretation of the Activity parameter is that, if it is positive, it means that an actor with multiple ties is more likely to have the outcome attribute. The two-star and three-star parameters then allow for nonlinear dependence on the number of ties. Interpretation of the GWActivity parameter, however, is not quite so straightforward. Snijders et al. (2006), in the context of the ERGM, describes how the geometrically weighted degree statistic can be re-written in terms of the numbers of \(k\)-stars, where the weights on the \(k\)-stars have alternating signs, so that the positive weights of some are balanced by the negative weights of the others. In this way, the single alternating \(k\)-star parameter replaces a whole series of two-star, three-star, etc. parameters, which when estimated from empirical networks, tend to have parameters with alternating signs (Koskinen and Daraganova, 2013). The interpretation of the alternating \(k\)-star in ERGM, then, is in terms of the the degree distribution: a positive parameter indicates centralization based on high-degree nodes ("hub" nodes are more likely), and a negative parameter a relatively more equal degree distribution (Robins et al., 2007; Koskinen and Daraganova, 2013). Confusingly (Levy et al., 2016; Martin, 2020; Stivala, 2020), the interpretation of the statnet gwdegree parameter defined in Hunter (2007) has the opposite interpretation regarding the sign: a negative gwdegree parameter indicates centralization of edges, and a positive gwdegree parameter indicates dispersion of edges (Levy, 2016; Levy et al., 2016). In the present context, that of the ALAAM, however, the degree distribution is not being modeled, as the network is fixed. Instead, the binary outcome vector is being modeled. Therefore it is not useful to examine the effect of a parameter on the degree distribution of the whole network, but rather of the degree distribution of those nodes which have the outcome attribute (nodes \(i\) such that \(y_{i}=1\)). 
As discussed above, when \(\alpha>0\), the definition of the ALAAM change statistic for GWActivity (5) means that the conditional log-odds of a node having the outcome attribute (\(y_{i}=1\)) is higher for a low degree node than for a high degree node, and hence a positive value of the corresponding parameter will result in more low degree nodes having the outcome attribute than would otherwise be the case. Regrettably, this would seem likely to lead to confusion similar to that described by Levy et al. (2016): it seems counter-intuitive that a positive parameter should lead to a preference for the outcome attribute on low degree nodes (rather than high degree nodes).

Figure 2: Scatterplots of the effect on the Geometrically Weighted Activity statistic of varying the Geometrically Weighted Activity parameter in the GitHub social network (left) and undirected Pokec social network (right). The red dashed horizontal line shows the observed value of the statistic.

Figure 3: Scatterplots of the effect on the mean degree of nodes that have the outcome attribute, of varying the Activity parameter (left) or Geometrically Weighted Activity parameter (right), for the GitHub social network (top) and undirected Pokec social network (bottom). The red dashed horizontal line shows the observed value.

Figure 3 shows the effect of the Activity and GWActivity parameters on the mean degree of nodes with the outcome attribute. (These are from the same simulations as those described for Figure 1 and Figure 2.) It is evident that the mean degree of nodes with the outcome attribute does not have a simple relationship to the Activity parameter, first increasing, then, after a near discontinuity, decreasing. In contrast, the mean degree of such nodes decreases smoothly as the GWActivity parameter is increased. Figure 4 shows similar plots for the much smaller high school friendship network. Being a directed network, this plot shows the effect of the GWSender parameter on the mean out-degree of nodes with the outcome attribute. For this small network, there is no near-discontinuity when using the Sender statistic (and in fact, an ALAAM for this network can be estimated with the Sender and Receiver parameters, as shown in Section 5.1). The pattern of the mean out-degree of nodes with \(y_{i}=1\) increasing with the Sender parameter and then decreasing, while the GWSender parameter results in a smooth decrease, is, however, again apparent. The small size of the high school friendship network also makes it more practical to visualize the degree distributions in order to more closely examine the effect of the GWSender parameter. Figure 5 shows the effect of large magnitude negative and positive GWSender parameters on the distribution of the out-degree of nodes with the outcome attribute, compared with the distribution resulting from a random assignment of the outcome attribute to the nodes. The ALAAM models were simulated with the Density parameter \(\text{logit}(p)=-0.3930425\), where \(p=0.4029851\) is the observed relative frequency of nodes with the outcome attribute, male gender. The random outcome vectors have each element one with probability \(\overline{\sum y^{(k)}/N}\) (where \(y^{(k)}\) is the \(k\)th (\(1\leq k\leq 100\)) ALAAM sample), so that the mean attribute density is the same as that from the ALAAM simulations.
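The following short Python sketch illustrates how such density-matched random baseline vectors can be constructed from a set of simulated ALAAM outcome vectors; it is an illustration of the procedure just described (array and function names are assumptions), not the ALAAMEE code.

```python
import numpy as np

rng = np.random.default_rng(42)

def matched_density_baseline(alaam_samples, num_vectors=100):
    """Random outcome vectors whose expected density equals the mean density
    of the simulated ALAAM outcome vectors: each element is 1 independently
    with that probability."""
    samples = np.asarray(alaam_samples)   # shape (num_samples, N), entries 0/1
    p = samples.mean()                    # mean attribute density over all samples
    return rng.binomial(1, p, size=(num_vectors, samples.shape[1]))

def mean_degree_with_attribute(degrees, y):
    """Mean degree (or out-degree) of the nodes with the outcome attribute."""
    degrees, y = np.asarray(degrees), np.asarray(y)
    return degrees[y == 1].mean()
```

Comparing `mean_degree_with_attribute` over the ALAAM samples and over the baseline vectors gives the kind of comparison summarized in Figures 5 and 6.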
For the negative GWSender parameter (\(\theta_{\text{GWSender}}=-15\)), \(\overline{\sum y^{(k)}/N}=0.2191791\), and for the positive GWSender parameter (\(\theta_{\text{GWSender}}=15\)), \(\overline{\sum y^{(k)}/N}=0.6576866\). For the negative GWSender parameter (Fig. 5(a)), the distribution is less skewed than for the positive GWSender parameter (Fig. 5(b)). The mean out-degree of nodes with the outcome attribute is higher than that for the random (and observed) outcomes for the negative parameter value, and lower than that for the random (and observed) outcomes for the positive parameter value. This reflects the interpretation discussed above (in the context of the undirected GWActivity parameter), that a positive GWSender parameter will lead to a tendency for the outcome attribute to be present on low (rather than high) out-degree nodes.

Figure 4: Effect on the mean out-degree of nodes that have the outcome attribute, of varying the Sender parameter (left) or Geometrically Weighted Sender parameter (right), for the high school friendship network. The red dashed horizontal line shows the observed value. These plots show the mean over 100 samples for each value of the parameter.

Figure 5: Effect of (a) negative, and (b) positive, GWSender [\(\alpha=\ln(2)\)] parameters on the out-degree distribution of nodes with the outcome attribute (male gender) in the high school friendship network. The orange boxplots show the results for 100 outcome vectors simulated from the ALAAM models, and the purple boxplots 100 random outcome vectors where each element is 1 with probability \(\overline{\sum y/N}\), so that the mean attribute density is the same as that of the outcome vectors simulated from the ALAAM. The solid green vertical line shows the observed mean out-degree of nodes with the outcome attribute. Similarly, the orange dashed vertical line is the mean for the ALAAM, and the purple dashed vertical line for the random outcome vectors.

## 5 Empirical examples of ALAAMs with the new parameters

### Small network

Table 6 shows six ALAAM models for the high school friendship network, with male gender as the "outcome" binary variable. Table 7 shows the goodness-of-fit results for these models: in all cases, the t-ratio is less than 1.0 in magnitude, indicating a good fit for that statistic. Models 1-3 are relatively simple models, starting with Sender and Receiver and progressively adding EgoInTwoStar and EgoOutTwoStar (Model 2) and then also EgoInThreeStar and EgoOutThreeStar (Model 3). Model 4 is an equivalent model, but using GWSender and GWReceiver instead of the Sender, Receiver and in- and out-star effects. Model 5 adds a number of additional effects, including transitive triangles and homophily on school class, to the Sender/Receiver/star model (Model 3), while Model 6 adds the extra effects to the GWSender/GWReceiver model (Model 4). The only parameter that is statistically significant across multiple models is Contagion, which is positive and significant in all cases (except Model 5, where it is not significant). This indicates homophily on (male) gender, consistent with ERGM models for (an undirected version of) this network (Stivala, 2020; Kevork and Kauermann, 2021). (I estimated an ERGM model similar to that in Stivala (2020a), but for the original directed network, which finds a positive but non-significant effect for gender homophily; data not shown). Parameter estimates that are statistically significant at the nominal \(p<0.05\) level are shown in bold.
\begin{table} \begin{tabular}{l r r r r r r} \hline Effect & Model 1 & Model 2 & Model 3 & Model 4 & Model 5 & Model 6 \\ \hline Density & \(-0.648\) & \(-0.180\) & \(0.527\) & \(-1.\mathbf{687}\) & \(0.693\) & \(-2.\mathbf{637}\) \\ & \((0.332)\) & \((0.605)\) & \((1.013)\) & \((\mathbf{0.397})\) & \((1.094)\) & \((\mathbf{1.082})\) \\ Sender & \(-0.022\) & \(-0.543\) & \(-0.899\) & — & \(-0.857\) & — \\ & \((0.097)\) & \((0.290)\) & \((0.562)\) & — & \((0.785)\) & — \\ EgoOutTwoStar & — & \(0.087\) & \(0.214\) & — & \(0.220\) & — \\ & \((0.048)\) & \((0.168)\) & — & \((0.193)\) & — \\ EgoOutThreeStar & — & — & \(-0.021\) & — & \(-0.018\) & — \\ & \((0.026)\) & — & \((0.020)\) & — & \((0.029)\) & — \\ Receiver & \(-0.138\) & \(0.159\) & \((0.267)\) & \((0.463)\) & — & \((0.654)\) & — \\ EgoInTwoStar & — & \(-0.050\) & \(-0.016\) & — & \(0.096\) & — \\ & \((0.041)\) & \((0.145)\) & — & \((0.157)\) & — \\ EgoInThreeStar & — & — & \(-0.007\) & — & \(-0.022\) & — \\ & \((0.024)\) & — & \((0.025)\) & — & \((0.025)\) & — \\ GWSender [\(\alpha=\ln(2)\)] & — & — & — & \(3.565\) & — & \(5.259\) \\ & \((2.903)\) & — & — & \(-0.240\) & — & \(-0.578\) \\ & \((0.441)\) & — & \((1.467)\) & — & \((1.767)\) \\ Contagion & \(\mathbf{0.239}\) & \(\mathbf{0.258}\) & \(\mathbf{0.253}\) & \(\mathbf{0.206}\) & \(0.631\) & \(\mathbf{0.725}\) \\ Reciprocity & — & — & — & — & \(-0.333\) & \(-0.092\) \\ & \((0.663)\) & \((0.312)\) & — & \((0.132)\) & — \\ & \((0.729)\) & \((0.585)\) & — & \((0.003)\) & \(-0.008\) \\ MixedTwoStarSink & — & — & — & — & \((0.003)\) & \(-0.008\) \\ MixedTwoStarSource & — & — & — & — & \((0.036)\) & \((0.026)\) \\ & \((0.041)\) & \((0.027)\) & — & \((0.040)\) & \((0.027)\) \\ TransitiveTriangleT1 & — & — & — & — & \(-0.061\) & \(-0.048\) \\ TransitiveTriangleT3 & — & — & — & — & \((0.059)\) & \((0.054)\) \\ & \((0.033)\) & \((0.033)\) & — & \((0.033)\) & — \\ & \((0.059)\) & \((0.105)\) & — & \((0.409)\) & \((0.282)\) \\ ReceiverMatch Class & — & — & — & — & \((0.005)\) & \(-0.031\) \\ ReciprocityMatch Class & — & — & — & \((0.430)\) & \((0.323)\) \\ & \((0.319)\) & \((0.184)\) & — & \((0.669)\) & \((0.498)\) \\ \hline \end{tabular} \end{table} Table 6: Parameter estimates with standard errors for ALAAM estimated using ALAAMEE with the stochastic approximation algorithm for the SocioPatterns high school friendship network, with male gender as the outcome variable. Although they are statistically non-significant, so we can make no inferences from them, it is instructive to compare the estimated Sender, EgoOutTwoStar, EgoOutThreeStar, Receiver, EgoInTwoStar, and EgoInThreeStar parameters in Model 5, with the GWSender and GWReceiver parameter estimates in Model 6 (Table 6). In Model 5, Sender is negative, EgoOutTwoStar is positive, and EgoOutThreeStar is negative; they have alternating signs, as discussed in Section 4.1. Receiver is negative, EgoInTwoStar positive, and EgoInThreeStar negative, so again the signs are alternating (note that Receiver and EgoInTwoStar have swapped signs relative to Model 3, however). In Model 6, GWSender is positive, while GWReceiver is negative. Figure 6 shows that Model 6 fits the in-degree and out-degree distributions of nodes with the outcome attribute well, although a simple random assignment of the outcome attribute with the same density is not much worse (which, given that the GWSender and GWReceiver parameters are not statistically significant, should not be surprising). 
### Large networks Table 8 shows ALAAM parameters estimated for the GitHub network with developer type as the "outcome" binary attribute. I was unable to estimate a converged (non-degenerate) model for this data using the Density, Activity, and Contagion parameters, but using GWActivity instead the model is converged and non-degenerate, as shown in Figure 7, which shows trace plots and histograms of outcome vectors simulated from the model in Table 8, along with the observed values of the statistics corresponding to the parameters in the model. The observed values are central in the (approximately normal) distribution of the simulated values, indicating that the model is converged and not near-degenerate. The only parameter (other than Density) that is statistically significant in this model is GWActivity, which is positive. As discussed in Section 4.1, this means we expect that more low-degree nodes will have the outcome attribute than would otherwise be the case (conditional on all the other effects in the model, and on the degree distribution itself, since the network is fixed in the ALAAM). This is consistent with what we observe simply from the degrees of the nodes with and without the outcome attribute shown in Table 5; nodes with the outcome \begin{table} \begin{tabular}{l r r r r r r} \hline Effect & Model 1 & Model 2 & Model 3 & Model 4 & Model 5 & Model 6 \\ \hline AlterInTwoStar2 & \(0.230\) & \(0.292\) & \(0.285\) & \(0.482\) & \(0.267\) & \(0.286\) \\ AlterOutTwoStar2 & \(0.131\) & \(0.178\) & \(0.201\) & \(0.191\) & \(0.050\) & \(0.073\) \\ Contagion & \(-0.009\) & \(-0.016\) & \(-0.015\) & \(-0.043\) & \(0.006\) & \(-0.018\) \\ Contagion Reciprocity & \(0.454\) & \(0.518\) & \(0.519\) & \(0.519\) & \(0.028\) & \(0.002\) \\ CyclicTriangleC1 & \(0.520\) & \(0.772\) & \(0.798\) & \(0.942\) & \(0.097\) & \(0.143\) \\ CyclicTriangleC3 & \(0.610\) & \(0.772\) & \(0.797\) & \(0.830\) & \(0.193\) & \(0.182\) \\ Density & \(0.031\) & \(-0.007\) & \(-0.004\) & \(0.049\) & \(-0.021\) & \(-0.035\) \\ EgoInThreeStar & — & — & \(-0.062\) & — & \(0.029\) & — \\ EgoInTwoStar & \(-0.017\) & \(-0.027\) & \(-0.036\) & \(0.434\) & \(0.015\) & \(0.076\) \\ EgoOutThreeStar & — & — & \(-0.034\) & — & \(-0.082\) & — \\ EgoOutTwoStar & \(-0.232\) & \(-0.036\) & \(-0.004\) & \(-0.207\) & \(-0.062\) & \(-0.142\) \\ GWReceiver [\(\alpha=\ln(2)\)] & — & — & — & \(0.146\) & — & \(-0.048\) \\ GWSender [\(\alpha=\ln(2)\)] & — & — & — & \(0.166\) & — & \(-0.088\) \\ MixedTwoStar & \(0.007\) & \(0.113\) & \(0.122\) & \(0.277\) & \(0.037\) & \(0.027\) \\ MixedTwoStarSink & \(0.136\) & \(0.192\) & \(0.194\) & \(0.486\) & \(-0.003\) & \(0.024\) \\ MixedTwoStarSource & \(0.141\) & \(0.201\) & \(0.232\) & \(0.198\) & \(-0.033\) & \(0.008\) \\ Receiver & \(0.011\) & \(-0.016\) & \(-0.013\) & \(0.202\) & \(-0.004\) & \(0.009\) \\ ReceiverMatch Class & — & — & — & — & \(-0.018\) & \(0.008\) \\ Reciprocity & \(0.410\) & \(0.433\) & \(0.434\) & \(0.525\) & \(-0.013\) & \(0.007\) \\ ReciprocityMatch Class & — & — & — & — & \(-0.027\) & \(0.011\) \\ Sender & \(0.019\) & \(-0.020\) & \(0.010\) & \(-0.086\) & \(-0.037\) & \(-0.039\) \\ SenderMatch Class & — & — & — & — & \(-0.043\) & \(-0.011\) \\ TransitiveTriangleD1 & \(0.290\) & \(0.546\) & \(0.581\) & \(0.508\) & \(0.049\) & \(0.055\) \\ TransitiveTriangleT1 & \(0.271\) & \(0.453\) & \(0.492\) & \(0.601\) & \(-0.005\) & \(0.027\) \\ TransitiveTriangleT3 & \(0.310\) & \(0.438\) & \(0.460\) & \(0.481\) & \(0.037\) & \(0.007\) \\ TransitiveTriangleU1 & \(0.236\) & 
\(0.344\) & \(0.377\) & \(0.696\) & \(-0.032\) & \(0.030\) \\ \hline \end{tabular} \end{table} Table 7: ALAAM goodness-of-fit t-ratios for the SocioPatterns high school social network ALAAM models (Table 6). Figure 6: Goodness-of-fit on in-degree (top) and out-degree (bottom) distributions of nodes with the outcome attribute (male gender) for ALAAM Model 6 (Table 6). The orange boxplots show the results for 100 outcome vectors simulated from the ALAAM, and the purple boxplots 100 random outcome vectors where each element is 1 with probability \(\overline{\sum y/N}\), so that the mean attribute density is the same as that of the outcome vectors simulated from the ALAAM. The solid green vertical line shows the observed value. The the orange dashed vertical line is the mean for the ALAAM, and the purple dashed vertical line for the random outcome vectors. attribute have lower mean degree than the overall mean degree. An ALAAM model for the undirected Pokec network with male gender as the "outcome" attribute is shown in Table 9, with the degeneracy check plots in Figure 8 showing that the model is converged. I was unable to estimate a converged (non-degenerate) model with this network when the Activity parameter was included, but using GWActivity instead solves this problem. All the parameters in this model are statistically significant. The negative Contagion parameter indicates heterophily on (male) gender, while the positive Age parameter indicates that males are likely to be older than females. This is consistent with simple descriptive statistics for this data: assortativity (Newman, 2003) on the "male" binary attribute is negative (\(r=-0.0053\)), and the mean age for male actors (25.1) is higher than that for non-male actors (23.84) with the difference significant according to Welch's \(t\)-test (\(p<0.0001\)). The positive GWActivity parameter indicates, as discussed in Section 4.1, that low degree nodes are more likely to represent male actors than would otherwise be the case. This is as we might expect, given that male (outcome \(y_{i}=1\)) nodes have lower mean degree than the mean degree than others (Table 5). A more complex ALAAM model for the directed Pokec network with male gender as "outcome" variable, is shown in Table 10. I could not find a converged (non-degenerate) ALAAM model for this network using the Sender and Receiver parameters, but as shown in Figure 9, this model using GWSender and GWReceiver converges well. Again, all the parameters in this model are statistically significant. As we expect given the results for the undirected network, the Age effect is positive and the Contagion effect negative; this is also consistent with the ERGM model of this network in Stivala et al. (2020a). However Contagion Reciprocity is positive, indicating that actors connected by a reciprocated (mutual) tie are more likely to both be male (given the other effects in the model, including specifically the negative Contagion parameter, indicating that a male actor on both ends of a tie is under-represented). The GWSender and GWReceiver parameters are of different signs: GWSender is negative, and GWReceiver positive. Again, as per the discussion Section 4.1, this is as we expect, given that male actors have higher mean out-degree, but lower mean in-degree than others (Table 5). ## 6 Conclusions and future work I have shown that the problem of near-degeneracy can occur in simple ALAAMs applied to empirical networks, preventing the estimation of such models in some examples. 
I defined the geometrically weighted activity, geometrically weighted sender, and geometrically weighted receiver statistics, analogous to the geometrically weighted degree statistics for ERGMs described by Snijders et al. (2006), and showed that they avoid this problem, and allow ALAAM parameters to be estimated for these networks. I described the interpretation of these new parameters, with illustrative examples. \begin{table} \begin{tabular}{l c c c} \hline Effect & Estimate & Std. error & \\ \hline Density & -1.287 & 0.033 & * \\ GWActivity [\(\alpha=\ln(2)\)] & 1.712 & 0.127 & * \\ Contagion & 0.002 & 0.001 & \\ \hline \end{tabular} Asterisks indicate statistical significance at the \(p<0.05\) level. Results from 100 parallel runs. \end{table} Table 8: ALAAM estimated using ALAAMEE with the equilibrium expectation algorithm for the GitHub social network, with developer type as the outcome variable. \begin{table} \begin{tabular}{l c c c} \hline Effect & Estimate & Std. error & \\ \hline Density & -0.188 & \textless{} 0.001 & * \\ GWActivity [\(\alpha=\ln(2)\)] & 0.077 & 0.001 & * \\ Contagion & -0.005 & \textless{} 0.001 & * \\ Age & 0.009 & \textless{} 0.001 & * \\ \hline \end{tabular} Asterisks indicate statistical significance at the \(p<0.05\) level. Results from 100 parallel runs. \end{table} Table 9: ALAAM estimated using ALAAMEE with the equilibrium expectation algorithm for the undirected Pokec online social network, with male gender as the outcome variable. \begin{table} \begin{tabular}{l r r r} \hline Effect & Estimate & Std. error & \\ \hline Density & -0.015 & 0.002 & * \\ GWSender [\(\alpha=\ln(2)\)] & -0.509 & 0.011 & * \\ GWReceiver [\(\alpha=\ln(2)\)] & 0.517 & 0.011 & * \\ Reciprocity & 0.023 & \textless{} 0.001 & * \\ Contagion & -0.028 & \textless{} 0.001 & * \\ Contagion Reciprocity & 0.019 & 0.001 & * \\ Age & 0.008 & \textless{} 0.001 & * \\ \hline \end{tabular} Asterisks indicate statistical significance at the \(p<0.05\) level. Results from 100 parallel runs. \end{table} Table 10: ALAAM estimated using ALAAMEE with the equilibrium expectation algorithm for the directed Pokec online social network, with male gender as the outcome variable. Figure 7: Degeneracy check for the GitHub social network ALAAM (Table 8). Trace plots and histograms show statistics of 100 outcome vectors simulated from the model. The blue lines on the histograms show mean and 95% confidence interval, and red lines show the observed values. Figure 8: Degeneracy check for the undirected Pokec social network ALAAM (Table 9). Trace plots and histograms show statistics of 100 outcome vectors simulated from the model. The blue lines on the histograms show mean and 95% confidence interval, and red lines show the observed values. Figure 9: Degeneracy check for the directed Pokec social network ALAAM (Table 10). Trace plots and histograms show statistics of 100 outcome vectors simulated from the model. The blue lines on the histograms show mean and 95% confidence interval, and red lines show the observed values. In this work, I defined these statistics and demonstrated the use for one-mode undirected and directed networks. A simple extension would be to two-mode (bipartite) networks, which might allow a converged ALAAM to be found for the larger director interlock network (Evtushenko and Gastner, 2020) which I was unable to find, while I could find a converged ALAAM for the smaller director interlock network in Stivala et al. 
(2023b). In the examples shown here, I found that only the geometrically weighted activity (or sender and receiver) statistic was necessary to overcome the problem of near-degeneracy: the Contagion statistic, when used with geometrically weighted activity (or sender and receiver) statistics, did not seem to be problematic. Indeed, when I experimented with a "geometrically weighted contagion" statistic, I found it to be not just unnecessary, but actually deleterious to model convergence. Given that I used only simple models for the large network examples, this leaves open the question of whether or not geometrically weighted statistics are necessary or useful for triangular configurations in the ALAAM (as they are in ERGM). Some problems remain, however. As discussed in Section 4.1, interpretation of the new parameters is likely to be confusing, given the counter-intuitive meaning of a positive parameter indicating a propensity for the outcome attribute to be present on low (rather than high) degree nodes. Simulation experiments such as those shown in Figure 5, which, not coincidentally, somewhat resemble the output of the interactive R application created to help with the interpretation of the statnet gwdegree parameter (Levy, 2016), could help with this. However, the interpretation is (aside from the potential for sign-based confusion) inherently difficult, as it is linked to the degree distribution of nodes with the outcome attribute, and is conditional not only on all the other parameters in the model, but also on the degree distribution of the network itself (which is fixed in the ALAAM). This is particularly complicated in the case of directed networks, in which there is both an in-degree and an out-degree distribution, and the interpretations of the GWSender and GWReceiver parameters are conditional on each other. In this work I have described the interpretation of these parameters as illustrative examples; however, in empirical applications it might be advisable to refrain from making substantive claims based on these parameters, and just consider them as "controls" for the degree distribution of nodes with the outcome attribute, needed for correct interpretation of the Contagion (and other) parameters. Of course, this is assuming that parameter interpretation is actually what we want to do -- and perhaps it is not, and we would rather use the model to generate simulations in order to test predictions regarding their inability to fit some statistic (Martin, 2020), or to experiment with simulations from different models with slightly modified parameters (Steglich and Snijders, 2022). Another avenue for future work is that a value for the decay parameter \(\alpha\) has to be specified. The default value of \(\alpha=\ln(2)\) appears to work well on the examples in this work, but it may have to be adjusted for better convergence or model fit on other networks, which would involve a process of trial and error, or, more systematically, "grid search" as, for example, done for the analogous \(\lambda\) parameter in ERGMs in Stivala and Lomi (2021). Estimating this parameter would make the model a "curved ALAAM", which cannot be estimated by the methods used in this work. In this work, I overcame the problem of near-degeneracy in ALAAMs by defining a geometrically weighted activity statistic, analogous to the most frequently used technique for avoiding the problem in ERGMs. There are, however, other ways of avoiding this problem in ERGMs, which could potentially be applied to ALAAMs.
These include the "tapering" method (Fellows and Handcock, 2017; Blackburn and Handcock, 2023) and the "degeneracy-restricted" method (Karwa et al., 2022), as well as other forms of additional structure discussed in Schweinberger et al. (2020) such as multilevel, block and spatial structure. An alternative approach might be to consider an ALAAM analogue of the latent order logistic (LOLOG) model (Fellows, 2018; Clark and Handcock, 2022). ## Funding This work was funded by the Swiss National Science Foundation (SNSF) project number 200778. ## Acknowledgements This work was performed on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government, and from the Victorian Higher Education State Investment Fund (VHESIF) provided by the Victorian Government. Discussions at weekly MelNet meetings hosted by Dr Peng Wang at Swinburne University of Technology were useful in inspiring this work, and I also thank Dr Wang for arranging access to the OzSTAR supercomputing facility at Swinburne University of Technology. I am grateful to Prof. Alessandro Lomi for funding as responsible applicant for SNSF grant number 200778, and for general discussion of the ALAAM. ## Data availability statement The SocioPatterns high school friendship data (Mastrandrea et al., 2015) is available from [http://www.sociopatterns.org/datasets/high-school-contact-and-friendship-networks/](http://www.sociopatterns.org/datasets/high-school-contact-and-friendship-networks/). The "Pokec" (Takac and Zabovsky, 2012) data is available from the Stanford large network dataset collection (Leskovec and Krevl, 2014) at [http://snap.stanford.edu/data/soc-Pokec.html](http://snap.stanford.edu/data/soc-Pokec.html). The "GitHub" (Rozemberczki et al., 2021) online social network data is available from the same collection at [http://snap.stanford.edu/data/github-social.html](http://snap.stanford.edu/data/github-social.html). All other data, source code, and scripts are freely available from [https://github.com/stivalaa/ALAAMEE](https://github.com/stivalaa/ALAAMEE).
2309.16962
Lifting the Fog of Uncertainties: Dynamic Resource Orchestration for the Containerized Cloud
The advances in virtualization technologies have sparked a growing transition from virtual machine (VM)-based to container-based infrastructure for cloud computing. From the resource orchestration perspective, containers' lightweight and highly configurable nature not only enables opportunities for more optimized strategies, but also poses greater challenges due to additional uncertainties and a larger configuration parameter search space. Towards this end, we propose Drone, a resource orchestration framework that adaptively configures resource parameters to improve application performance and reduce operational cost in the presence of cloud uncertainties. Built on Contextual Bandit techniques, Drone is able to achieve a balance between performance and resource cost on public clouds, and optimize performance on private clouds where a hard resource constraint is present. We show that our algorithms can achieve sub-linear growth in cumulative regret, a theoretically sound convergence guarantee, and our extensive experiments show that Drone achieves an up to 45% performance improvement and a 20% resource footprint reduction across batch processing jobs and microservice workloads.
Yuqiu Zhang, Tongkun Zhang, Gengrui Zhang, Hans-Arno Jacobsen
2023-09-29T04:11:12Z
http://arxiv.org/abs/2309.16962v1
# Lifting the Fog of Uncertainties: Dynamic Resource Orchestration for the Containerized Cloud ###### Abstract. The advances in virtualization technologies have sparked a growing transition from virtual machine (VM)-based to container-based infrastructure for cloud computing. From the resource orchestration perspective, containers' lightweight and highly configurable nature not only enables opportunities for more optimized strategies, but also poses greater challenges due to additional uncertainties and a larger configuration parameter search space. Towards this end, we propose Drone, a resource orchestration framework that adaptively configures resource parameters to improve application performance and reduce operational cost in the presence of cloud uncertainties. Built on Contextual Bandit techniques, Drone is able to achieve a balance between performance and resource cost on public clouds, and optimize performance on private clouds where a hard resource constraint is present. We show that our algorithms can achieve sub-linear growth in _cumulative regret_, a theoretically sound convergence guarantee, and our extensive experiments show that Drone achieves an up to 45% performance improvement and a 20% resource footprint reduction across batch processing jobs and microservice workloads.
The goal of Drone is to progressively optimize a containerized application's resource configuration over its lifespan with minimum manual intervention and without the often-costly explicit workload profiling phase. At its core, Drone is built upon recent advances in Gaussian process-based contextual bandits (Wang et al., 2017).
By encompassing time-variant cloud uncertainties as contextual parameters, Drone follows an iterative procedure to continuously refine resource configurations based on the previous context-action pairs and collected performance metrics. Assuming a minimal structural relationship between application performance and resource configurations, the power of such a non-parametric model makes Drone versatile across a diverse range of cloud environments and adaptable to various application types and workloads. Specifically, we examine two settings within a shared cloud infrastructure: a) _public cloud_, where computational resources can be effectively considered unlimited and Drone demonstrates adeptness in striking an efficient balance between performance and cost, and b) _private cloud_, where there exists a stringent cap on computational resources and Drone proves capable of optimizing application performance within these resource constraints. Drone is also theoretically sound in both settings since it achieves a sublinear growth of cumulative regret, meaning that the algorithm converges fast with respect to its running time. We evaluate Drone by deploying various applications on our cloud-hosted Kubernetes cluster using Drone as an integrable resource orchestrator. Our extensive experimental analysis, employing realistic workloads, demonstrates Drone's superior performance compared to alternative solutions in several respects. First, for recurring analytical jobs for which bandit-based approaches have been shown to be efficient (Kubernetes et al., 2017; Wang et al., 2017), Drone exhibits further improvement in performance by accounting for a broader spectrum of cloud uncertainties, coupled with its adherence to resource constraints in the private cloud environment. Second, for user-facing microservices where workload variability is more ad-hoc and no explicit profiling phase is available, Drone also achieves a 37% improvement on P90 latency compared to state-of-the-art alternatives, a result further amplified by our bespoke enhancements over the standard bandit optimization procedure, including a sliding window-based data sampler, empirically optimized starting point selection and latency-aware scheduling mechanisms. To the best of our knowledge, Drone is the first work to harness the potential of resource allocation in a containerized cloud using bandit-based approaches. It showcases superior adaptability across diverse settings in comparison to the preceding VM-based efforts. To sum up, we make the following contributions in this paper: 1. Through comprehensive experimental analysis, we validate the non-structural performance-resource relationship and the significant influence of uncontrollable time-variant environment variables (the cloud uncertainties) on application performance under multiple cloud scenarios. 2. Leveraging recent advances in bandit algorithms, we design Drone, a general-purpose online resource orchestration framework for container-based cloud systems. Drone progressively optimizes the performance-cost tradeoff in public cloud environments, while maintaining strict adherence to resource constraints in resource-limited private clouds. In both cases, Drone theoretically exhibits a fast convergence rate, guaranteeing its performance. 3. We implement Drone as a customized resource orchestrator on top of Kubernetes. 
Using realistic cloud workloads, we show through extensive experiments that Drone outperforms state-of-the-art alternatives in terms of application performance, cost efficiency and resource constraint compliance.

## 2. Background and Related Work

### Cloud Resource Orchestration

Intelligent resource orchestration on the cloud has long been an active research area, which can be categorized as follows based on the underlying techniques adopted.

**Heuristic-based Approaches.** A simple yet practically effective resource orchestration choice is based on heuristics. Such approaches are usually intuitive and easy to implement and hence are widely adopted in industrial solutions (Kubernetes, 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). For example, the default container autoscalers in Kubernetes (Kubernetes, 2017) include _Horizontal Pod Autoscaler (HPA)_ and _Vertical Pod Autoscaler (VPA)_, both of which follow a rule-based scaling policy. Such policies enable cloud tenants to define thresholds for metrics of interest according to which the system performs autoscaling. However, setting appropriate thresholds for such metrics is a non-trivial task. The optimal values are often application-specific and require expert knowledge from the developer or system administrator. Therefore, such heuristic approaches can hardly generalize across various cloud applications and often involve significant manual effort.

**Model-based analytical approaches.** Another line of work establishes analytical models to encapsulate the relationship between performance objectives and resource orchestration decisions. The problem is thus often modelled as an optimization problem, and certain assumptions are usually made on the problem structure (e.g., linearity and convexity) so that theoretical properties can be utilized to efficiently solve the problem (Han et al., 2014; Chen et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019). Control theory and queuing theory are also common theoretical tools for designing resource management solutions (Han et al., 2014; Chen et al., 2015; Chen et al., 2016; Chen et al., 2018). Despite the favorable theoretical characteristics of such solutions, real-life cloud applications generally fail to satisfy the desired problem structure due to varying workload profiles and other cloud uncertainties (Han et al., 2014).

**Predictive approaches using machine learning (ML).** To mitigate the over-provisioning overhead and human effort of heuristic-based solutions, predictive approaches predict future workload or system behavior from past statistics and adjust resource allocation in advance to meet future application needs. This type of approach usually employs well-established machine learning models, such as linear regression (Han et al., 2014; Chen et al., 2016), support vector machines (Zhu et al., 2016) and various types of neural networks (Han et al., 2014; Chen et al., 2015; Chen et al., 2016; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019). Although effective in certain conditions, these ML-based approaches have intrinsic limitations. First, to deploy such solutions, an exclusive profiling/training phase is generally needed, which can be costly and is often unavailable in realistic production-level systems. Second, such ML-based solutions perform best with general workloads or workloads with repeating patterns similar to their training data, but they adapt poorly to fluctuating workloads (Han et al., 2014).
Moreover, training data quantity and quality impact the performance of an ML model to a significant extent. It is also non-trivial and requires specialized domain knowledge to select representative training data and costly retraining is often needed if workload shift happens. More recently, Reinforcement learning (RL) has captured attention from the resource management community (Han et al., 2014; Chen et al., 2016; Chen et al., 2018; Chen et al., 2019), thanks to its ability to interact with the environment while optimizing its resource allocation actions. However, apart from the fact that RL frameworks also need to pretrain their agents and hence share similar limitations to the aforementioned ML models, they usually fail to achieve a convergence guarantee. Also, in RL models, actions taken are in turn affecting the environment (i.e., the states), while in real-life clouds, many environment variables are independent of actions, such as workload uncertainty which comes directly from the end users. ### Bandit Algorithms The limitations of existing work suggest that an ideal resource orchestration framework should optimize resource allocation decisions in an online manner with minimum model pre-training and human intervention. More importantly, it should work efficiently in today's complex containerized cloud, taking various cloud uncertainties into account and fitting in different cloud settings. To this end, we resort to the contextual bandit approach (Wang et al., 2019), a data-efficient non-parametric solution. Contextual bandit is an extension of the well-studied Multi-Armed Bandit (MAB) problem (Wang et al., 2019) by incorporating contextual information about uncontrollable environment variables, such as cloud uncertainties in the cloud computing context. The original MAB problem is a sequential optimization problem, where a player sequentially selects from a finite set of options with probabilistic rewards to maximize the total reward over time. Bayesian Optimization (BO) is a continuous variant of the MAB problem which aims to find the optimizer of a black-box function by incrementally building a model of the objective function. Although part of our control domain (e.g., fine-grained container resource scaling) can be considered continuous which makes our problem essentially a BO with contextual extension, we stick to the term contextual bandits throughout this \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Framework**} & \multirow{2}{*}{**Application**} & \multirow{2}{*}{**Computing**} & \multirow{2}{*}{**Optimization**} & \multirow{2}{*}{**Acquisition**} & \multirow{2}{*}{**Uncertainties**} & \multirow{2}{*}{**Resource**} & \multirow{2}{*}{**Workload**} & \multirow{2}{*}{**Convergence**} \\ & & & & & & **(contracts)** & & \\ \hline Dremel (Zhu et al., 2016) & DB Tuning & - & DB IOPS & UCB & ✗ & - & DB queries & ✗ \\ \hline CGPTuner (Chen et al., 2016) & DB Tuning & - & Performance & \multirow{2}{*}{GP-Hedge} & Workload & - & Recurring & \multirow{2}{*}{✗} \\ & & Improvement & & only & - & DB queries & \\ \hline \multirow{2}{*}{Cherrypick (Han et al., 2014)} & VM config. selection & VM & Customized Cost & EI & ✗ & ✗ & Recurring & \multirow{2}{*}{✗} \\ & & selection & & & & & analytical jobs & \\ \hline \multirow{2}{*}{Acordia (Wang et al., 2019)} & VM config. 
selection & VM & Customized Cost & GP-UCB & ✗ & ✗ & Recurring & \multirow{2}{*}{✗} \\ & & selection & & & & & analytical jobs & \\ \hline \multirow{2}{*}{RAMBO (Wang et al., 2019)} & Resource orchestration & Container & Customized Cost & SMSego & ✗ & ✗ & Microservices & ✗ \\ \hline \multirow{2}{*}{Drone} & Resource orchestration & Container & Performance-cost tradeoff (public cloud) & \multirow{2}{*}{GP-UCB} & \multirow{2}{*}{✓} & \multirow{2}{*}{✗} & \multirow{2}{*}{General} & \multirow{2}{*}{✗} \\ & & Performance & & & & & & \\ \cline{1-1} & & opt. (private cloud) & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 1. Computer systems studies using bandit algorithms. paper to highlight the contextual nature and align with the theoretical literature. **Bandit algorithms in computer systems research.** Due to the ability to model arbitrary performance functions, bandit algorithms have also been employed in computer system-related research, such as database parameter tuning (Kumar et al., 2017; Wang et al., 2018; Wang et al., 2019) and VM configuration selection (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Dremel and CGPTuner (Kumar et al., 2017; Wang et al., 2019) use bandit algorithms to fine-tune DBMS-specific parameters and the sole objective is to maximize database performance without constraints, while we focus on a lower level of resource orchestration and consider the performance-cost tradeoff. The closest works to ours are Cherrypick (Wang et al., 2018) and Accordia (Cherrypick and Accordia, 2019). Cherrypick is among the first works to apply bandit algorithms to systems research, aiming to pick the best VM configuration using Bayesian Optimization for big data analytical jobs. It uses Expected Improvement (EI) as its acquisition function, which lacks a convergence guarantee. Accordia studies the exact same problem, and advances one step further by employing the recent GP-UCB algorithm (Wang et al., 2019) with convergence guarantee. However, both Cherrypick and Accordia have inherent limitations which prevent them from being readily applicable to the current containerized cloud. First, both works study the VM configuration selection problem where only a finite set of options are available, while finer-grained, almost-continuous control is possible for containers, as mentioned in Section 1. Second, both Cherrypick and Accordia focus on _recurring_ analytical jobs, whose workload patterns are regular and predictable. Therefore, they are implicitly using the first few runs of the recurring job as the training phase and thus cannot generalize to workload variations. Last but not least, their performance objectives are solely dependent on the actions taken, and they assume infinite resources without considering the uncontrollable cloud uncertainties and resource-limited private clouds. Drone, on the other hand, is uncertainty-aware and generalizes to different cloud workloads and settings. We would also like to mention RAMBO (Wang et al., 2019), a BO-based resource allocation framework for microservices. Although RAMBO solves a similar problem to our work, technical details of implementation and design choices are not sufficiently provided in the paper. A detailed comparison between Drone and closely related works is summarized in Table 1. ## 3. Problem Analysis In this section, we show through experimental analysis important observations which motivate our work. 
To justify the complex performance-cost relationship and the substantial impact of cloud uncertainties on application performance in a containerized cloud, we set up a cloud-hosted testbed consisting of 16 VMs (see Section 5 for detailed specifications) to run benchmarking jobs. All jobs are submitted as Kubernetes-managed containers unless otherwise specified.

**Non-structural performance-cost relationship.** To study the relationship between application performance and allocated resources, we benchmark three representative analytical workloads running on the native Spark Operator on Kubernetes (Kubernetes, 2019): PageRank, Sort and Logistic Regression (LR). PageRank is a graph-processing algorithm for which we use the Pokec social network graph data (Kubernetes, 2019) with 1.6M vertices and 30M edges. We use gensort (Kubernetes, 2019) to generate 150GB of randomly permuted 100-byte records for sorting. For LR, we use a 4-year span of ~400k stock price records from the Nifty 100 Index (Kubernetes, 2019) to train the model. Experiments are repeated five times and the results are shown in Figure 1(a). While allocating more RAM generally leads to better performance, beneficial theoretical attributes such as linearity and convexity are not manifested in this relationship. For example, LR does not suffer from performance gain saturation when given excessively more RAM, displaying an over 2x performance improvement with increasing RAM allocation from 96GB to 192GB, because as a memory-bound job it benefits from more RAM. More interestingly, the performance-cost relationship can even be non-monotonic, meaning that more resources do not necessarily lead to performance improvement, as can be observed for PageRank. This is largely due to the fact that PageRank is an iterative network-intensive algorithm where data shuffling between not-co-located containers is needed in each operation. In this case, network bandwidth is the major bottleneck instead of RAM.

Figure 1. Performance of representative Spark analytical workloads under different RAM allocations.

We repeat the same experiments using identical configurations on the vanilla Spark cluster deployment without involving containers and report the results in Figure 1(b). Although the performance metrics and the performance-cost relationship patterns are similar to the containerized setting, an important finding is that the variance of performance measurements in the VM-based setting (indicated by black confidence intervals on each bar) is much smaller. The stability is in part owing to the more mature architectural support, but it also corroborates our insight that greater uncertainties and anomalies are introduced in a containerized cloud. In fact, we do observe more frequent Spark executor errors and restarts on Kubernetes.

**Impact of cloud uncertainties.** We also show that besides workload intensity, other uncontrollable cloud uncertainties can also significantly impact application performance. To better model adverse situations in a shared cloud, we apply interference injection across experiments to create random resource contention (Spark and Flink, 2017), including CPU utilization, RAM bandwidth, and network latency and bandwidth. Interference occurrences follow a Poisson process with an average rate of 0.5 per second. The intensity of each interference is uniformly and independently chosen at random in [0, 50%] of the total capacity.
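A minimal Python sketch of this interference-injection schedule (Poisson arrivals with uniformly random type and intensity) is shown below; the type names and the function are illustrative assumptions, and the actual injection tooling is omitted.

```python
import random

INTERFERENCE_TYPES = ["cpu", "ram_bandwidth", "net_latency", "net_bandwidth"]

def interference_schedule(duration_s, rate_per_s=0.5, max_intensity=0.5, seed=0):
    """Sample interference events over duration_s seconds: exponential
    inter-arrival times (i.e., Poisson arrivals at rate_per_s), with the type
    chosen uniformly and the intensity uniform in [0, max_intensity] of capacity."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)   # time to the next interference event
        if t > duration_s:
            return events
        events.append({
            "time_s": round(t, 3),
            "type": rng.choice(INTERFERENCE_TYPES),
            "intensity": rng.uniform(0.0, max_intensity),
        })
```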
We first study the performance of sorting varying sizes of data on the Kubernetes deployment of Spark and Flink (Spark and Flink, 2017). All experiments are conducted five times with the same resource configuration (36 CPU cores and 192GB of total RAM) and identical data for one size. The results are shown in Figure 2. We can observe that the variance across multiple runs increases with data size, reporting a coefficient of variation of up to 23% for Spark and 27% for Flink, indicating that application performance can be quite variable due to cloud uncertainties other than workload, especially with a large volume of data which is common in the current "big data era". From the performance discrepancy between Spark and Flink, we can also see that the performance is platform-dependent, meaning that even if we have found the optimal resource configuration for one specific workload, it is not readily transferable to other platforms running the same workload and thus additional configuration tuning may be required. The impact of cloud uncertainties can be even more serious for microservice applications, due to their complicated calling graphs and the resulting inter-container communication patterns (Spark and Flink, 2017; Flink, 2017). Towards this end, we deploy an example microservice application Sockshop(Sockhop, 2017) consisting of 10+ stateless and database microservices which simulates an online shopping web service. The architecture of Sockshop is shown in Figure 3. It is evident that the Order microservice can be a performance bottleneck due to its connection with several other microservices. With the same resource configuration and workload, we compare the end-to-end latency of two affinity rules and show the Cumulative Distribution Function (CDF) in Figure 4. We can find that if we forcefully isolate Order from other microservices (by setting node-affinity rules for corresponding pods in Kubernetes), the performance is 26% worse in terms of P90 latency than the case where we try to colocate Order with other microservices in a best-effort manner. This finding further verifies our claim that the impact of non-workload uncertainties can be significant, and amount-irrelevant resource orchestration decisions can also be deciding factors for application performance. ## 4. Drone design In this section, we present Drone, our dynamic resource orchestrator for the containerized cloud. Starting with a brief introduction of contextual bandits and why it is a promising choice for the problem context, we then detail our design and algorithms under both public and private cloud settings. Finally, the implementation and domain-specific optimizations are discussed which complement practically our algorithmic contribution. ### Overview of Contextual Bandits As briefly discussed in Sec. 2.2, it is natural to deduce the mapping from contextual bandits to the cloud resource orchestration problem. The ultimate goal is to dynamically adjust resource allocation decisions to optimize an objective value (e.g., performance and/or cost) in the presence of time-variant cloud uncertainties. Formally speaking, we want to find the best resource configuration \(x^{*}\) from action space \(\mathcal{X}\) with uncertainty context \(\omega\in\Omega\) such that the objective function \(f\) is optimized: \[x^{*}=\operatorname*{arg\,max}_{x\in\mathcal{X}}f(x,\omega) \tag{1}\] From this formulation, we can see that \(f\) is dependent on not only the decision variable \(x\), but also the context \(\omega\). 
The output of \(f\) can be any scalar value that is of the most interest to the user. Common choices include application performance indicators (e.g., latency, throughput, response time), utility, and cost. Note that (1) is also often formulated as a minimization problem if \(f\) is a cost function or captures latency/response time, but the essence of the problem remains unchanged. The action \(x\) and context \(\omega\) are vectors with domain-specific dimensions, containing all possible resource orchestration decisions and contextual parameters, respectively. We discuss the concrete dimensions we consider in our problem context in Sec. 5.1. Since the objective function has no structural relationship with the resource orchestration actions, as we point out in Sec. 3, we can only obtain an objective value by querying the corresponding action. In this case, an exhaustive search of the optimal action is clearly intractable, especially when the action space \(\mathcal{X}\in\mathbb{R}^{d}\) is a continuous domain, and the dimension \(d\) is high. Towards this end, the contextual bandit approach significantly reduces the search cost by intelligently guiding the next action to search for in an iterative optimization process. Specifically, in each time step \(t\), the optimization agent receives a context \(\omega_{t}\) from the environment. Based on the context, the agent then chooses an action \(x_{t}\) from the action space \(\mathcal{X}\), executes this action, and then receives a reward \(y_{t}=f(x_{t},\omega_{t})+\epsilon_{t}\) as a result of the action taken, where \(\epsilon_{t}\) is a Gaussian noise \(\epsilon_{t}\sim\mathcal{N}(0,\sigma^{2})\). The noise term well encapsulates the fact that in practice we can only observe a perturbed function value due to unavoidable measurement error. The optimization process then proceeds on to time step \(t+1\) with the reward-input pair \((y_{t},x_{t},\omega_{t})\) appended to the history information to further guide searching in the next iteration. To evaluate the quality of the actions taken, we use _cumulative regret_\(R_{T}\) which measures the cumulative performance gap over the complete algorithm running span of \(T\) time steps, a common metric to assess an online sequential optimization algorithm (Zhou et al., 2017): \[R_{T}=\sum_{t=1}^{T}\left(\max_{x^{*}\in\mathcal{X}}f\left(x^{*},\omega_{t} \right)-f\left(x_{t},\omega_{t}\right)\right) \tag{2}\] A desired property for an efficient online algorithm is to have _sub-linear regret growth:_\(\lim_{T\to\infty}R_{T}/T\to 0\), meaning that we can quickly find (near-)optimal actions so that the performance gap converges to zero relatively fast. As we will show in the following sections, Drone achieves sub-linear regret growth in both public and private cloud settings. ### Public Cloud: Cost-aware Performance Optimization We first propose our contextual bandit-based algorithm to jointly optimize application performance and resource cost in public cloud environments where computational resources are unlimited. **Why can we assume infinite resources?** It seems natural to assume that computational resources are infinite on public clouds, as previous works (Zhou et al., 2017; Zhou et al., 2017) also instinctively did. While the assumption is plausible given the massive scale of major cloud providers1, it may not be readily justifiable from the perspective of individual users or small businesses. 
For instance, if users are at the edge of their budget for cloud resource renting, they may not be willing to acquire more resources even if that can bring better application performance. In fact, this assumption can be rationalized by cost-saving incentives provided by public cloud, such as Spot Instance (Beng et al., 2016) and Burstable Instance (Beng et al., 2016) on AWS2. Spot instances are preemptive, low-priority VMs at a considerably lower price than on-demand instances3. Burstable instances are VMs that have the ability to "burst" to a significantly higher resource configuration than their normal capacity to handle ephemeral and infrequent workload peaks, which is much cheaper than on-demand instances with the same full capacity. We profile the cost-saving effects of spot and burstable instances by issuing the same batch processing (Sort) and microservice workload with the regular instance m5.large as baseline. Table 2 depicts the normalized cost savings of running the same workload across cloud incentive combinations. We observe an up to 7.19x cost saving by employing burstable spot instances and 6.1x cost savings with spot instances alone, both showing notable cost efficiency over on-demand regular instances. Therefore, by judiciously adopting these cloud incentives, one can expect a significant cost reduction achieving the same application performance, meaning that under the same budget, a significantly larger resource configuration search space is available, and this in turn justifies the infinite resources assumption. Another interesting finding is that spot prices can vary drastically with time in an unpredictable manner. Figure 5 shows spot prices of three instance types over a 1-month time span, which exhibit no regular patterns and vary across instance types to a great extent. This suggests that the spot price is an additional contextual dimension to be considered which can greatly impact the resource cost. **Problem formulation.** Given the assumption of unlimited resources, the optimization objective on public clouds is to keep a balance between application performance and the monetary resource cost. Formally, our optimization problem can be formulated as maximizing a reward function \(f\): \[\max_{\mathbf{x}_{t}} f(x_{t},\omega_{t})=\alpha p(x_{t},\omega_{t})-\beta c(x_{t}, \omega_{t}) \tag{3}\] \[s.t. \mathbf{x}_{t}\in\mathcal{X},\omega_{t}\in\Omega,\quad\forall t \tag{4}\] where \(p(x_{t},\omega_{t})\) is the application performance indicator which can be measured at the end of each time step \(t\); \(c(x_{t},\omega_{t})\) is the resource cost associated with the resource orchestration decision \(x_{t}\) and cloud uncertainties enclosed in the context \(\omega_{t}\). \(\alpha\) and \(\beta\) are configurable weights that capture a user's preference between performance and cost. **How to guide the search process?** By a sequential optimization process, contextual bandit-based algorithms are able to learn more about the objective function \(f\) in every iteration with newly observed data resulting from evaluating \(f\) at point \((x_{t},\omega_{t})\). Therefore, a key design choice of contextual bandits algorithms is to determine how to choose the next point to evaluate so as to learn the most about the objective function. Towards this purpose, we first need to put a surrogate model on \(f\) so that it can be efficiently updated iteratively. 
\begin{table} \begin{tabular}{|l|l|l|l|} \cline{2-4} \multicolumn{1}{c|}{} & m5.large & Spot only & Spot + Burstable \\ \hline Batch jobs & 1x & 6.10x & 7.19x \\ \hline Microservices & 1x & 5.28x & 6.73x \\ \hline \end{tabular} \end{table} Table 2. Normalized cost savings from cloud incentives. Figure 5. Spot instance prices from April 2023 for m5.16xlarge, c5.18xlarge and r5.16xlarge instance types on AWS. In Drone, we choose a Gaussian Process (GP) (Drone et al., 2017), a common choice adopted by prior works (Han et al., 2015; Done et al., 2017; Done et al., 2017; Done et al., 2017). As a non-parametric model, a GP assumes the function is sampled from a Gaussian distribution over functions, which adds only a minimal smoothness assumption on the objective function: function values evaluated at close inputs will also be close. Formally, let \(z\in\mathcal{X}\times\Omega\) be a joint action-context pair; a GP\((\mu,k)\) is fully specified by its mean function \(\mu(z)=\mathbb{E}[f(z)]\) and covariance (kernel) function \(k(z,z^{\prime})=\mathbb{E}[(f(z)-\mu(z))(f(z^{\prime})-\mu(z^{\prime}))]\), which acts as the data-independent prior distribution. Now let \(y_{t}=f(z_{t})+\epsilon_{t}\) be a noisy sample of the true function value \(f(z_{t})\), and let \(\mathbf{y}_{T}=[y_{1},y_{2},\cdots,y_{T}]\) be the values observed at points \(Z_{T}=[z_{1},z_{2},\cdots,z_{T}]\) (the past data points). Given a new \(z^{*}\) at which we would like to infer the function value \(f^{*}\), we obtain a closed-form posterior distribution which is also a GP, with the following mean and variance: \[\mu_{T}\left(z^{*}\right)=\mathbf{k}_{T}\left(z^{*}\right)^{\top}\left(\mathbf{K}_{T}+\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{y}_{T} \tag{5}\] \[\sigma_{T}^{2}\left(z^{*}\right)=k\left(z^{*},z^{*}\right)-\mathbf{k}_{T}\left(z^{*}\right)^{\top}\left(\mathbf{K}_{T}+\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{k}_{T}\left(z^{*}\right) \tag{6}\] where \(\mathbf{k}_{T}(z^{*})=[k(z_{1},z^{*}),k(z_{2},z^{*}),\cdots,k(z_{T},z^{*})]\) and \([\mathbf{K}_{T}]_{ij}=k\left(z_{i},z_{j}\right)\) is the kernel matrix. We choose the widely adopted Matern kernel with \(\nu=\frac{3}{2}\), following common empirical practice. These analytical equations allow us to efficiently infer the function value at a new point based on previous observations and action-context pairs. Another key element of contextual bandits is to determine how to suggest the next point to evaluate so as to learn the most about the objective function. This is achieved by choosing the point that maximizes the _acquisition function_, a function that assesses the quality of an action point and is much cheaper to optimize than the original objective function. Among popular choices such as Probability of Improvement (PI), Expected Improvement (EI) and Thompson Sampling (TS) (Done et al., 2017; Done et al., 2017), we choose the Upper Confidence Bound (UCB) (Done et al., 2017), whose update rule is given as follows: \[x_{t}=\operatorname*{arg\,max}_{x\in\mathcal{X}}\ \mu_{t-1}(x,\omega_{t})+\sqrt{\zeta_{t}}\sigma_{t-1}(x,\omega_{t}) \tag{7}\] An important rationale behind choosing UCB, as can be perceived from the equation, is that it efficiently balances _exploration_ of undiscovered resource configurations and _exploitation_ of configurations that have already been observed to be well-performing. The hyperparameter \(\zeta_{t}\) serves to balance the tradeoff: choosing a small \(\zeta_{t}\) indicates we value the 
first mean term more, hence will more likely select an action close to one that previously led to better performance; choosing a large \(\zeta_{t}\), on the other hand, focuses more on the variance term so that under-explored actions with higher uncertainty are more likely to be selected. Moreover, in the GP setting, UCB is superior in terms of both computational efficiency (Yamaguchi et al., 2017; Zhang et al., 2018) and convergence rate (Zhu et al., 2019) compared to alternatives such as GP-TS. Combining these design choices, we summarize our GP-UCB-based online resource orchestration algorithm in Algorithm 1.
```
Require: Performance-cost balance weights \(\alpha,\beta\); action space \(\mathcal{X}\)
 1: \(S_{0}\leftarrow\emptyset\);  \(\triangleright\) \(S_{t}\) stores action-context pairs up to time \(t\)
 2: \(\mathbf{y}_{0}\leftarrow\emptyset\);  \(\triangleright\) \(\mathbf{y}_{t}\) stores noisy rewards up to time \(t\)
 3: for \(t=1,2,\cdots\) do
 4:     Observe current context \(\omega_{t}\);
 5:     Select resource configuration \(x_{t}\) according to (7);
 6:     Observe noisy reward \(y_{t}=f(x_{t},\omega_{t})+\epsilon_{t}\);
 7:     \(S_{t}\leftarrow S_{t-1}\cup(x_{t},\omega_{t})\);
 8:     \(\mathbf{y}_{t}\leftarrow\mathbf{y}_{t-1}\cup y_{t}\);
 9:     Update \(\mu_{t}\) and \(\sigma_{t}\) by the posterior update rules (5)-(6);
10: end for
```
**Algorithm 1** Contextual Bandits for Public Clouds **Regret analysis.** A desired property of a bandit algorithm is to have sub-linear cumulative regret growth. Our algorithm achieves this with high probability by setting appropriate hyperparameters, as shown in the following theorem: Theorem 4.1.: _Let \(\delta\in(0,1)\). For all \(T\geq 1\), the cumulative regret of Alg. 1 is upper bounded by \(O(\sqrt{T\gamma_{T}\zeta_{T}})\) with high probability. Precisely,_ \[Pr\{R_{T}\leq\sqrt{C_{1}T\gamma_{T}\zeta_{T}}+2\}\geq 1-\delta \tag{8}\] _where \(C_{1}=\frac{8}{\log(1+\sigma^{-2})}\) and \(\zeta_{t}=2B^{2}+300\gamma_{t}\log^{3}(\frac{t}{\delta})\)._ Here, \(\gamma_{T}\) is the maximum information gain, of order \(O(T^{l}\log T)\) with \(l<1\), and \(B\geq||f||_{k}\) is an upper bound on the Reproducing Kernel Hilbert Space (RKHS) norm of \(f\), a common assumption in bandit algorithms. Due to space constraints, please refer to (Han et al., 2017) for proofs of the theorems. ### Private Cloud: Resource-constrained Performance Optimization For security or data privacy concerns, organizations often resort to a private cloud solution instead of running their jobs on a public cloud. A private cloud is a self-hosted computing cluster over which the organization has full control. The organization is also able to fully unlock the power of its computing nodes by customizing their hardware and software architectures to its own needs, which is often constrained on public clouds. Compared to the pay-as-you-go model on public clouds, organizations pay the resource cost upfront when purchasing the hardware to build the private cloud. The update cycle generally spans several years, until the hardware is too old or the business scale has significantly expanded. In this case, any resource orchestration decision must respect the private cloud's total resource limit, which is a hard constraint. The optimization objective under such scenarios is thus optimizing application performance subject to the hard resource constraints (Han et al., 2017; Zhang et al., 2018; Zhang et al., 2018). 
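Before formalizing the constrained setting, the unconstrained loop of Algorithm 1 can be illustrated with a short, self-contained sketch. This is our own simplification rather than Drone's implementation: it uses scikit-learn's GaussianProcessRegressor with a Matern-3/2 kernel as the surrogate, a fixed \(\zeta\) instead of the schedule \(\zeta_{t}\) from Theorem 4.1, and a finite candidate grid in place of a continuous acquisition optimizer; `observe_context` and `run_and_measure` are placeholders for the monitoring and deployment machinery.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Illustrative GP-UCB loop in the spirit of Algorithm 1 (not Drone's actual code).
# `candidates` is an (n, d_x) array of feasible resource configurations;
# `observe_context()` returns the current context vector omega_t;
# `run_and_measure(x)` deploys configuration x and returns the noisy reward y_t.
def gp_ucb(candidates, observe_context, run_and_measure, rounds=30, zeta=2.0, noise=0.1):
    Z_hist, y_hist = [], []                              # action-context pairs and rewards
    gp = GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=noise**2, normalize_y=True)
    for _ in range(rounds):
        omega = np.asarray(observe_context())
        Z = np.hstack([candidates, np.tile(omega, (len(candidates), 1))])
        if Z_hist:                                       # posterior mean/std, rules (5)-(6)
            gp.fit(np.array(Z_hist), np.array(y_hist))
            mu, sigma = gp.predict(Z, return_std=True)
        else:                                            # no data yet: fall back to the prior
            mu, sigma = np.zeros(len(Z)), np.ones(len(Z))
        best = int(np.argmax(mu + np.sqrt(zeta) * sigma))  # UCB acquisition, rule (7)
        x_t = candidates[best]
        y_t = run_and_measure(x_t)                       # noisy observation of f(x_t, omega_t)
        Z_hist.append(Z[best])
        y_hist.append(y_t)
    return Z_hist, y_hist
```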
Formally, the resource orchestration optimization problem in the private cloud can be formulated as: \[\max_{\mathbf{x}_{t}} p(x_{t},\omega_{t})\] \[s.t. x_{t}\in\mathcal{X}_{t}^{S},\omega_{t}\in\Omega,\quad\forall t \tag{9}\] where \(\mathcal{X}_{t}^{S}\) is the _safe_ set from the action domain at time step \(t\) so that actions can only be selected from the safe set to comply with resource constraints. Specifically, denote \(P_{max}\) as the resource constraint and let \(P(x_{t},\omega_{t})\) be the total resource usage resulting from action \(x_{t}\) and context \(\omega_{t}\) at time step \(t\), we have \[\mathcal{X}_{t}^{S}=\{x_{t}\in\mathcal{X}:P(x_{t},\omega_{t})\leq P_{\max}\} \tag{11}\] Note that \(P(x_{t},\omega_{t})\) and \(P_{\max}\) contain multiple dimensions in practice. Each of the dimensions is a resource type (e.g., CPU, RAM, and network bandwidth) and has its own limit in a private cloud. For presentation brevity, here, we abstract them as an overall constraint, without loss of generality. Moreover, \(P(x_{t},\omega_{t})\) is also an unknown function since it depends on the contextual variables \(\omega_{t}\) as well. This is reasonable since resource contention is common in a shared cloud (a private cloud can also be shared within the organization across several development teams). As a result, the performance indicator function \(p(x_{t},\omega_{t})\) and the resource usage function \(P(x_{t},\omega_{t})\) need to be modelled separately. At each time step \(t\) throughout the optimization process, our algorithm needs to select an action \(x_{t}\) from the safe set \(\mathcal{X}_{t}^{S}\) so that the performance \(p(x_{t},\omega_{t})\) is optimized. Towards this end, we use two GPs to model the performance function and the resource function, respectively. Reusing the notation \(z\in\mathcal{X}^{S}\times\Omega\) as a joint safe action-context pair, at each time step \(t\), noisy values of both functions are observed as \(y_{t}=p(z_{t})+\epsilon_{t}\) and \(\phi_{t}=P(z_{t})+\epsilon_{t}\). We now present our solution in Algorithm 2. The core idea of this algorithm is a two-phase process. In the first phase, starting from a guaranteed safe set, the algorithm is dedicated to exploration by randomly querying actions to gather more information to characterize the safe set. The second phase acts similarly to Alg. 1 which balances exploration and exploitation by following the GP-UCB procedure to update the posterior GP model. However, on top of the standard GP-UCB algorithm, it leverages information from previous exploration to iteratively expand the safe set based on the lower confidence interval of the resource usage function \(P\) (Line 14). We show through the following theorem that Alg. 2 also achieves sub-linear cumulative regret growth: Theorem 4.2 ().: _Let \(\delta\in(0,1)\), for sufficiently large \(T\geq 1\), the cumulative regret of Alg. 2 is upper bounded by \(O(\sqrt{T\gamma_{T}\zeta_{T}})\) with high probability. Precisely,_ \[Pr\{R_{T}\leq BT^{\prime}+\sqrt{C_{1}T\gamma_{T}\zeta_{T}}\}\geq 1-\delta \tag{12}\] _where the parameters \(C_{1},\gamma_{T}\) and \(\zeta_{T}\) take the same value as the previous theorem._ ### Drone Implementation We implemented a prototype of Drone as an integrable resource orchestrator on top of Kubernetes. The overall system architecture of Drone is depicted in Figure 6, which contains the following components: **Monitoring Module.** The monitoring module is a key component of the Drone framework. 
It is responsible for periodically collecting both performance metrics and contextual information from the cloud environment. In Drone, we choose Prometheus (Zhou et al., 2017) for this purpose. Prometheus is a market-leading monitoring system shipping with a time series database and powerful querying capabilities through its specialized query language PromQL. It is able to collect system-level real-time metrics such as CPU, RAM and network bandwidth usage through node-exporter(Tran et al., 2017) along with other potential contextual variables. By exposing a metrics exporter, applications are also enabling Prometheus to collect their performance metrics like throughput and response time. The collected metrics are stored in the time series database which can be efficiently queried upon request. The collected real-time contextual information, along with stored history performance data and action-context pairs, provides input to guide the optimization process of Drone's algorithms. **Application Identifier.** The application identifier helps identify the type of the submitted application to make tailored resource orchestration decisions for batch processing jobs and microservices, respectively. While users can explicitly specify the application type, as discussed in 4.5, the application identifier is also able to automatically detect the application type if it is evident in the deployment specification. For example, a Spark application has an exclusive kind: SparkApplication specification field which can be easily utilized by the application identifier. **Objective and Resource Enforcer.** Depending on whether the environment is a public cloud or a private cloud, this module specifies the optimization objective for the optimization engine. Users can tune model parameters here based on their needs, such as performance-cost preference coefficients in the public cloud setting and resource limit in the private cloud setting. In a private cloud, if the user does not specify the desired resource limit, the enforcer will set the limit according to the cluster resource usage. Figure 6. Drone Architecture. **Optimization Engine.** As the core part of the framework, the optimization engine is responsible for carrying out the optimization process. Based on the cloud setting set by the application identifier and the enforcer, the optimization engine continuously receives performance and contextual metrics from the monitoring module and suggests a resource orchestration action in each decision period. The action is a combination of container rightsizing and scheduling. Actions are executed by directly interacting with the Kubernetes API server to minimize additional overhead. ### Cloud-specific Optimizations On top of the algorithmic efforts, we also make practical optimizations tailored to the cloud resource orchestration problem context to further improve Drone's usability and efficiency in practice. **Encoding of actions and contexts.** Unlike CPU cores and RAM allocation/usage which take numerical values and thus naturally fit in our contextual bandit-based framework, some action and contextual variables do not readily manifest a numerical representation such as container scheduling decisions from the action space and possible traffic bottleneck from the context space. We address this issue by scalarizing these variables with numerical encoding. 
For example, we encode the scheduling decisions as a sub-vector \(x=[x_{1},x_{2},\cdots,x_{m}]\) of the entire decision vector \(x\), where \(m\) is the number of computing nodes or VMs on which a container can be scheduled. The elements \(x_{i}\in\mathbb{N}\) represent the number of containers that should be scheduled to node \(i\). Note that having an individual entry for each single node may lead to dimension explosion when the cloud scale is large. However, in practice, we can further group nodes by physical distance into zones within which nodes perceive almost no network latency when communicating with each other. The scheduling decisions will thus be executed at the zone level, significantly reducing dimensionality to the number of zones. This is particularly useful when the cloud is geographically distributed where high latency can be incurred by inter-zone communication. For traffic between nodes, we can use an integer \(a\in[0,2^{m}-1]\) to encode the possible traffic contention, which can be proven trivially by the binomial theorem. **Characterization of applications.** We consider two representative application profiles for Drone, namely batch processing jobs and long-running web services in the form of microservices. Also referred to as Best Effort (BE) and Latency Critical (LC) applications in the recent literature (K this proves to be a good selection with a low error rate across workloads. As a safety measure, we also implement a failure recovery mechanism that if a job errors out with no metrics produced in a pre-defined timeout period, it will be restarted with a higher resource configuration at the midpoint of the previous trial and the maximum resources available. ## 5. Experimental Evaluation ### Experimental Setup **Testbed setup.** Our testbed cluster is hosted on Compute Canada (Candes et al., 2017), a shared national cloud platform. The cluster consists of 16 virtual machines with one control node and 15 worker nodes, a scale comparable to related work. The control node is equipped with 16 vCPU cores and 180GB of RAM, while the worker nodes have 8 vCPU cores and 30GB of RAM. Each node runs Ubuntu 20.04.5 LTS with Linux kernel v5.4. Nodes are interconnected by 10Gb Ethernet in the same data center. Kubernetes v1.25 is deployed as the container orchestration platform. **Applications.** Representative applications for both batch processing jobs and microservices are deployed to evaluate Drone. For batch jobs, we benchmark three Spark applications that stress different computational resources, including (1) Spark-Pi, a pi-computation job with configurable number of iterations to control precision as a representative compute-intensive job, (2) PageRank as a jointly memory- and network-intensive job and (3) Logistic Regression to serve as a typical ML training task. For microservices, we use the _Social Network_ application containing 36 microservices from DeathStarBench (K Figure. 7. First, Figure. 7(a) depicts the performance measurements for the same LR job running with different schemes in the public cloud setting. Starting from the same starting point, it can be seen that all three bandit algorithm-based approaches are able to improve application performance by learning the performance-input relationship function over time. 
On the other hand, as a completely reactive rule-based autoscaler, the default Kubernetes solution cannot benefit from history information, and hence only manages to maintain a low performance, which is slightly perturbed over time by environment uncertainties and measurement errors. Drone significantly outperforms Cherrypick and Accordia by adaptively learning from the contextual variables while Cherrypick and Accordia are oblivious to such environment changes and can only leverage information from their resource decisions, i.e., the action space. The benefit of considering contextual information can further be observed from the post-convergence behaviour when \(T>10\). Both Cherrypick and Accordia sporadically experience performance oscillations while Drone is able to stabilize after convergence. This is because Cherrypick and Accordia regard any performance feedback as the exclusive result of the actions taken. Therefore, whenever a performance discrepancy is observed they will adjust their resource allocations even though it is primarily owing to changes in contextual cloud uncertainties. It is also worth noting that Drone converges slightly slower at the 10th iteration compared to the other two schemes which converge around the 7th iteration because the search space is larger in Drone due to additional dimensions in the action spaces (e.g., the scheduling vector) and the new contextual dimensions. This is a common performance-dimension tradeoff which we will briefly discuss in Sec. 6. Figure. 7(b) depicts the normalized resource cost saving compared to the Kubernetes native solution across all three representative batch workloads. While all three frameworks show cost-saving benefits thanks to their cost-aware problem formulation, Drone is the most cost-efficient one with over 20% cost savings across workloads, since it can more accurately search for the (near-)optimal resource configuration based on the information from both performance feedback and environment contexts, without need to over-allocate resources to maintain reasonable performance. Moreover, Drone makes its own scheduling decision by incorporating the scheduling sub-vector into its action space. Thus, even if given the same total amount of resources, Drone also learns the best strategy to assign the execution pods to computing nodes, which Cherrypick and Accordia cannot achieve. This effect is most evident when benchmarking PageRank, a network-intensive workload, where Drone achieves an average of 53% resource cost saving compared to the Kubernetes native solution, a number significantly higher than 20% from Accordia and 17% from Cherrypick. Similar benefits are also manifested in the resource-limited private cloud setting. We focus on Drone's impact on memory limit compliance since memory is a _non-negotiable_ resource type. Unlike CPU and network bandwidth where inadequate allocation would cause throttling (for CPU) or congestion (for network) but applications are still available, an application that requires more memory than allocated will incur an out-of-memory (OOM) error and the hosting pod will simply be killed and rescheduled if possible. OOM errors can significantly jeopardize application availability and degrade application performance. In our preliminary experiments in Sec. 
3, a Spark job with insufficient memory allocation can experience a 20x longer elapsed time and even get halted in an intermediate stage and fail to make progress. We set the memory limit to 65% of the total memory capacity available in the cluster, run all three representative batch workloads, and record the memory utilization metric as shown in Figure 7(c). We can observe that only Drone manages to abide by the memory constraint in the long run, showing an approximately 16% lower memory profile than the baselines, apart from the first few exploration rounds in which Drone actively explores to identify the feasible safe action space. To see the benefit of resource limit compliance in action, we run memory-stressing tasks in parallel using stress-ng to simulate significant resource contention, occupying around 30% of total memory. Table 3 summarizes the performance and number of Spark executor errors in the different settings. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline & \multicolumn{2}{c|}{Spark-Pi} & \multicolumn{2}{c|}{LR} & \multicolumn{2}{c}{PageRank} \\ Framework & Time(s) & \# Errors & Time(s) & \# Errors & Time(s) & \# Errors \\ \hline k8s & 53\(\pm\)2 & 0 & 328\(\pm\)17 & 1 & 1436\(\pm\)88 & 4 \\ Accordia & 46\(\pm\)1 & 0 & 303\(\pm\)26 & 17 & 1172\(\pm\)95 & 98 \\ Cherrypick & 43\(\pm\)1 & 0 & 298\(\pm\)24 & 13 & 1226\(\pm\)102 & 107 \\ Drone & 41\(\pm\)1 & 0 & 226\(\pm\)9 & 5 & 785\(\pm\)42 & 9 \\ \hline \end{tabular} \end{table} Table 3. Drone significantly reduces OOM errors by conforming with resource constraints. Figure 7. Comparison between Drone and alternatives for batch processing jobs. We can observe that the Kubernetes native solution suffers the fewest OOM errors by using memory utilization as one of its scaling rules. Therefore, it always respects the resource constraints and even suspends invoking executor pods when it detects memory is under stress, which in part contributes to its low performance. The memory constraint-oblivious solutions Cherrypick and Accordia, on the other hand, experience a large number of executor errors, especially for memory-intensive jobs such as LR and PageRank. In this case, Drone is able to fully utilize the algorithmic effectiveness of contextual bandits to optimize performance while complying with resource constraints, achieving up to 36% performance improvement and 10x fewer OOM errors compared to Cherrypick and Accordia. ### Drone for Microservices We also evaluate the efficacy of Drone in orchestrating resources for microservice applications by performing end-to-end experiments. Driving the SocialNet microservice benchmark with a realistic workload trace as shown in Figure 8(a), we collected aggregated performance metrics over the entire application running span. Figure 8(b) shows the cumulative distribution of RAM allocation for Drone and the other three baselines. As hybrid autoscalers, both SHOWAR and Autopilot are able to reduce memory footprint compared to Kubernetes HPA by combining vertical and horizontal autoscaling to mitigate over-allocation. However, Drone outperforms the alternatives by more accurately modelling the performance-action relationship, incorporating a much broader array of factors instead of relying heavily on past resource usage information as SHOWAR and Autopilot do. Specifically, Drone is able to serve around 60% of user requests within 50GB of overall RAM allocation, which is 55% less than SHOWAR and 60% less than Autopilot, manifesting an outstanding resource-saving capability. 
Figure 8(c) depicts the end-to-end latency distribution across frameworks. Autopilot exhibits similar performance to Kubernetes HPA since they share a similar reactive scaling strategy based on recent resource statistics. Specifically designed for microservices, SHOWAR performs better by identifying microservice correlations to create locality-oriented affinity rules, which makes it more likely to schedule closely related microservices onto the same node and hence reduces latency. Drone, on the other hand, goes a step further by encoding the more efficient scheduling opportunities into its decision vector, so it effectively does both rightsizing (i.e., autoscaling that prioritizes vertical scaling) and scheduling. As an integrated resource orchestration solution, Drone lowers the P90 latency by 37% compared to SHOWAR and by 45% compared to Autopilot. Having run the experiment in the private cloud setting, we also observe an effect similar to the previous subsection. Table 4 records the total number of dropped user requests over the running span. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & k8s & Autopilot & SHOWAR & Drone \\ \hline \# of dropped packets & \(4.8\times 10^{4}\) & \(3.4\times 10^{4}\) & \(1.4\times 10^{4}\) & 7809 \\ \hline \end{tabular} \end{table} Table 4. Number of dropped requests. Figure 8. Comparison between Drone and alternatives for microservices. Unlike in the batch processing case, where the Kubernetes solution manages to maintain a low error rate by not invoking pods when memory is low, in this user-facing microservice case it experiences the largest number of packet drops due to poor resource allocation decisions. Again, Drone incurs the fewest packet drops thanks to its resource limit-aware algorithm, which progressively learns the safe set for resource orchestration decisions. ## 6. Discussion **Application-level configuration tuning.** An application's performance can greatly depend on its own configurable parameters as well. For example, Xin et al. (Xin et al., 2017) identify 38 performance-deciding parameters for SparkQL. While Drone is not readily applicable to application-level parameter tuning out of the box, the underlying contextual bandit idea is naturally transferable as an algorithmic foundation, which has also been recently explored in the database research community, as mentioned in Sec. 2.2. In fact, Drone operates at the lower level of hardware resource orchestration and can be used in parallel with efficient application-level configuration tuning techniques to jointly optimize an application's performance. **Tradeoff between precision and cost.** In theory, the function-modelling capability of bandit algorithms can always benefit from more information. This is also true in our resource orchestration context. Incorporating more dimensions for tunable parameters from the action space, such as disk I/O and last-level cache (LLC) bandwidth, or more contextual information, such as the graph structure of the running microservices, would help Drone more accurately characterize the complex coupling of performance, action and environment. However, it is well known that bandit algorithms (especially their continuous variant, Bayesian Optimization) tend to perform poorly in high dimensions (Zhao et al., 2018). Therefore, in practice, we need to selectively incorporate the "more important" dimensions with domain knowledge and employ several optimizations (see Sec. 4.5) to make the algorithms practically efficient. 
Actually, as different applications and workloads have divergent resource request profiles, it would be interesting to investigate how to dynamically pick the most critical dimensions based on application and workload properties. We leave that as a future work of Drone. **Overhead of Drone.** Drone is designed to embrace the latest cloud paradigms and technologies, working seamlessly with the Kubernetes ecosystem. It utilizes the Prometheus-based Kubernetes monitoring stack for metrics collection and modifies resource configurations by directly communicating with the Kubernetes API server and updating the cgroup configuration values for the pods of concern if possible. Thanks to the optimizations we employ, the computation time for each iteration in the online mode is on the order of seconds, well within the metrics updating interval. There is also no additional cost of potential container migration during scheduling since Drone follows the standard Kubernetes-native rolling-update procedure. Therefore, minimal overhead is incurred for using Drone. **Limitations.** One major limitation of Drone is its insufficient capability to deal with "flash crowds", workloads that burst to a significantly higher level in a very short period of time (e.g., seconds). This situation inherently breaks the Gaussian Process prior assumption of the function and intrinsic limitations of iterative algorithms restrict Drone from reacting fast to such sudden changes. Fortunately, such cases are rare in reality and cloud providers often prepare backup resources for over-allocation in addition to their routine resource allocation frameworks. Moreover, Drone is yet to achieve its full potential to work with microservices since it is oblivious to the microservice dependency graph structure, which has been shown to be instrumental in microservice-oriented resource management (Han et al., 2015; Zhan et al., 2016; Zhan et al., 2017; Zhan et al., 2018; Zhan et al., 2019). Efficiently integrating dependency information into Drone without incurring significant overhead would be another promising direction to explore. ## 7. Conclusions In this paper, we present Drone, a resource orchestration framework specifically designed for the containerized cloud. Based on recent advances in contextual bandit algorithms, Drone encapsulates various cloud uncertainties as contextual parameters to aid the search process for optimal resource orchestration decisions. The uncertainty-aware approach enables Drone to progressively balance the performance and resource cost tradeoff in a shared public cloud, and optimize performance while adhering to resource constraints in a resource-limited private cloud. Our empirical analysis shows that Drone achieves up to 45% performance improvement and 20% resource cost savings compared to state-of-the-art alternatives.
2304.00124
Sharp well-posedness for the Benjamin--Ono equation
The Benjamin--Ono equation is shown to be well-posed, both on the line and on the circle, in the Sobolev spaces $H^s$ for $s>-\tfrac12$. The proof rests on a new gauge transformation and benefits from our introduction of a modified Lax pair representation of the full hierarchy. As we will show, these developments yield important additional dividends beyond well-posedness, including (i) the unification of the diverse approaches to polynomial conservation laws; (ii) a generalization of G\'erard's explicit formula to the full hierarchy; and (iii) new virial-type identities covering all equations in the hierarchy.
Rowan Killip, Thierry Laurens, Monica Visan
2023-03-31T20:53:18Z
http://arxiv.org/abs/2304.00124v1
# Sharp well-posedness for the Benjamin-Ono equation ###### Abstract. The Benjamin-Ono equation is shown to be well-posed, both on the line and on the circle, in the Sobolev spaces \(H^{s}\) for \(s>-\frac{1}{2}\). The proof rests on a new gauge transformation and benefits from our introduction of a modified Lax pair representation of the full hierarchy. As we will show, these developments yield important additional dividends beyond well-posedness, including (i) the unification of the diverse approaches to polynomial conservation laws; (ii) a generalization of Gerard's explicit formula to the full hierarchy; and (iii) new virial-type identities covering all equations in the hierarchy. ###### Contents * 1 Introduction * 1.1 Prior work on well-posedness * 1.2 The Lax structure * 1.3 Conservation laws * 1.4 The method of commuting flows * 1.5 Applications of the new Lax pair * 2 Notation and preliminaries * 3 The Lax operator * 4 A new gauge * 4.1 The Bock-Kruskal transformation * 4.2 The perturbation determinant * 4.3 The action of higher symmetries * 5 Well-posedness * 6 The tau function and virial identities for the full hierarchy ## 1. Introduction This paper is devoted to the study of real-valued solutions to the Benjamin-Ono equation (BO) \[\tfrac{d}{dt}q=\mathsf{H}q^{\prime\prime}-2qq^{\prime},\] which describe the motion of internal waves in stratified fluids of great total depth. The symbol \(\mathsf{H}\) appearing here denotes the Hilbert transform; see (2.1). This model arose contemporaneously in works by Benjamin [4] and by Davis-Acrivos [10]. The latter authors also performed extensive experiments in a tank.
Making sense of such conserved quantities requires strong decay assumptions. Moreover, regularity hypotheses must also be imposed to ensure that any such \(L^{1}\) assumption is not immediately destroyed by wave dispersion. Our first main result is the well-posedness of the (BO) flow under minimal assumptions on the initial data. As we will see, this has been a much-studied problem and our resolution depends not only on the recently introduced method of commuting flows, but also on the development of broader algebraic and analytic structures underlying the (BO) equation. In subsection 1.5, we will discuss several other dividends of these developments, not directly related to well-posedness. **Theorem 1.1**.: _Fix \(s>-\frac{1}{2}\). The equation (BO) is globally well-posed for initial data in \(H^{s}(\mathbb{R})\) or \(H^{s}(\mathbb{T})\)._ As we will discuss more fully below, the long-standing record on the line was well-posedness for \(s\geq 0\). This was also the threshold for the circle case until the very recent breakthrough [18], which proved well-posedness for all \(s>-\frac{1}{2}\). The paper [18] also shows ill-posedness in \(H^{-1/2}(\mathbb{T})\) via instantaneous norm inflation. A simple argument showing the breakdown of well-posedness for \(s<-\frac{1}{2}\) was known much earlier [3, 5]. In the line case, ill-posedness for \(s<-\frac{1}{2}\) can be deduced from the fact that the solutions (1.1) converge in \(H^{s}(\mathbb{R})\) to a delta function at \(t=0\) as \(c\to\infty\), but do not converge at any other time. ### Prior work on well-posedness Here we give a quick overview of the history of well-posedness for (BO); for a comprehensive account, we recommend the recent book [36]. The first phase in these developments was the construction of weak solutions; see, for example, [19, 20, 21, 56]. Early proofs of well-posedness employed energy/uniqueness arguments; see, for example, [1, 26, 53, 56]. Included in [1] is a proof that (BO) is well-posed in \(H^{\infty}\) in both geometries. This period culminated in the proof that (BO) is well-posed in \(H^{s}\) for \(s>\frac{3}{2}\) on \(\mathbb{T}\) and for \(s\geq\frac{3}{2}\) on \(\mathbb{R}\). The endpoint in the line setting was achieved in [53] by incorporating local smoothing into the traditional Gronwall argument. A striking feature of (BO) is that there was no subsequent Strichartz revolution, nor did the development of \(X^{s,b}\) analysis immediately transform the study of (BO). 
There is a fundamental reason for this: (BO) is not analytically well-posed in any \(H^{s}(\mathbb{R})\) space! This was first demonstrated in [48], which proved that the data-to-solution map is not \(C^{2}\). Later in [38] it was shown that for \(s\geq 0\), this map is not even uniformly continuous in any neighborhood of the origin. By their very nature, proofs by contraction mapping yield a data-to-solution map that is real-analytic. The results discussed in the previous paragraph show that (BO) cannot be solved by this method, no matter what auxiliary norms are introduced, nor what ingenious estimates one proves. By incorporating Strichartz control into energy methods, [37] advanced well-posedness on the line to \(s>\frac{5}{4}\). Further refinements of this style of argument in [30] led to well-posedness for \(s>\frac{9}{8}\). The well-posedness theory for (BO) was much transformed by the paper [61] which treated data in \(H^{1}(\mathbb{R})\). The transformative new idea here was the introduction of a gauge (a change of unknown) that substantially ameliorated the troublesome high-low frequency interaction responsible for the poor behavior of the data-to-solution map just discussed. The motivation for this gauge transformation is described in [62, SS4.4], including parallels with the Cole-Hopf transformation. Attention is also drawn to an analogue for the derivative nonlinear Schrodinger equation (cf. [63]). By exploiting Tao's gauge transformation, well-posedness in \(H^{1}(\mathbb{T})\) was subsequently shown in [47]. Well-posedness in \(H^{1}\) is automatically global due to the conservation of \[H_{2}:=\int\tfrac{1}{2}\big{[}q^{\prime}\big{]}^{2}-\tfrac{3}{4}q^{2}\mathsf{H} q^{\prime}+\tfrac{1}{4}q^{4}\,dx. \tag{1.6}\] Tao's gauge transformation lead to a flurry of progress on the well-posedness problem, including [8] which treated \(s>\tfrac{1}{4}\) on \(\mathbb{R}\) and [44] which treated \(s\geq\tfrac{1}{2}\) on \(\mathbb{T}\). Evidently, both yield well-posedness for finite energy initial data. As noted earlier, the long-standing record for (BO) on the line was well-posedness in \(L^{2}(\mathbb{R})\). This was proved in [25] via a synthesis of Tao's gauge transformation and \(X^{s,b}\) techniques. Well-posedness in \(L^{2}(\mathbb{T})\) was proved in [45] via consonant methods. Well-posedness in [25] means that the data-to-solution map admits a unique continuous extension from smooth initial data to a mapping from \(H^{s}\) to \(C_{t}H^{s}\). This is also the meaning of Theorem 1.1. The landmark papers [25, 45] stubbornly resisted improvement for a long period. The topic of well-posedness in \(L^{2}\) has been revisited several times via a variety of methods without yielding any improvement on the \(H^{s}\) scale; see, [24, 46, 59]. Gibbs-distributed initial data on the circle (with momentum cutoff) lies right at cusp of the \(L^{2}\) theory. The existence of solutions and preservation of this law was shown in [11]. Although the subsequent work [18] proves that Gibbs initial data leads to global solutions, it is unclear to us how readily this approach leads to invariance of the Gibbs law. By comparison, the manner in which we prove Theorem 1.1 is well-suited to this problem. The proof of [32, Th. 3.4] demonstrates how the method of commuting flows blends seamlessly with invariance of measure arguments in finite volume. 
On the circle, the question of well-posedness in \(H^{s}\) spaces was recently completely resolved in [18], namely, the equation is well-posed for \(s>-\tfrac{1}{2}\) and ill-posed otherwise. This is achieved through the construction of a Birkhoff normal form transformation developed in a series papers; see, for example, [16, 17]. This approach is reminiscent of the earlier breakthrough [27] for the KdV equation; however, the Lax operator (1.7) associated to (BO) is of an unconventional type, especially when compared to the much-studied Sturm-Liouville operators associated with KdV. The direct analogue of such an approach to Theorem 1.1 on the line would be via inverse scattering, which is currently utterly untenable. The only complete theory of both forward and inverse scattering is that of [9]. This requires weighted \(L^{1}\) hypotheses that are incompatible with the soliton solutions (1.1), as well as a small data hypothesis. The state of the art for the forward scattering problem is presented in [65], which requires \(\langle x\rangle^{\alpha}q\in L^{2}\) for \(\alpha>\tfrac{1}{2}\). Much remains to be done to advance the inverse scattering theory up to this threshold. Our pessimism regarding an inverse scattering approach to Theorem 1.1 is also informed by the state of the art regarding the inverse scattering problem for the Schrodinger equation, which has been intensively studied for generations. This is what is relevant to the KdV equation. At this moment, strong spatial decay assumptions are required, which then beget regularity hypotheses (to preserve such decay at later times). For a discussion of the significant hurdles associated with this approach already in the KdV setting, see, for example, [34]. Later in the introduction we will draw attention to some interesting questions in the spectral theory of the Lax operator \(\mathcal{L}\) for (BO) that arise naturally from this perspective. In this paper, we will approach the well-posedness problem via the method of commuting flows introduced in [34] and developed in several subsequent papers [7, 22, 23, 33, 39, 40, 51]. This strategy was previously employed in [59]; however, the culmination of Talbut's work was well-posedness in \(L^{2}\), both on the line and on the circle. It will take us some time to explain the obstacles that lay in Talbut's path and how we are able to overcome them. ### The Lax structure A Lax-pair representation of (BO) appeared first in [50] and then more directly in [6]. Our presentation here is also influenced by [64], where it is shown that any negative eigenvalues of \(\mathcal{L}\) are necessarily simple. Both operators of the Lax pair act on the Hardy space \(L^{2}_{+}\) comprised of those functions in \(L^{2}\) whose Fourier transform is supported on \([0,\infty)\). Such functions may also be viewed as the boundary values of certain holomorphic functions in the upper half-plane or disk, depending on the geometry. We avoid the more popular \(H^{p}\) notation for the Hardy spaces because it collides with our notations for Sobolev spaces, Hamiltonians, and for the Hilbert transform \(\mathsf{H}\). We will write \(C_{\pm}\) for the Cauchy-Szego projections; see (2.2). In Proposition 3.2 we will show that the formal expression \[\mathcal{L}f=-if^{\prime}-C_{+}\big{(}qf\big{)} \tag{1.7}\] defines a semi-bounded selfadjoint operator \(\mathcal{L}\) on \(L^{2}_{+}\) for every \(q\in H^{s}\) with \(s>-\frac{1}{2}\). 
Its companion in the Lax pair is variously given as \[\mathcal{P}:=-i\partial^{2}-2\partial C_{+}q+2q^{\prime}_{+}\quad\text{or} \quad\mathcal{P}-i\mathcal{L}^{2}=iC_{+}(\mathsf{H}q^{\prime})-iC_{+}qC_{+}q. \tag{1.8}\] Following [64], we will insist on the former; the latter is the original one from [6, 50]. These operators are transparently anti-selfadjoint when \(q\in H^{\infty}\) and we shall not need to make sense of them for more irregular functions \(q\). Earlier, we promised to draw attention to some basic questions in the spectral theory of \(\mathcal{L}\) that we regard as both intrinsically interesting and crucial milestones toward understanding inverse scattering for slowly decreasing initial data on the line. Specifically, we ask what is the decay threshold for \(q\), expressed via power-law and/or \(L^{p}\) integrability exponent, at which each of the following spectral transitions takes place: * The appearance of embedded eigenvalues; * The appearance of embedded singular-continuous spectrum; * The disappearance of absolutely continuous spectrum. Note that for any \(q\in L^{p}(\mathbb{R})\), \(p<\infty\), Weyl's Theorem guarantees that the essential spectrum of \(\mathcal{L}\) fills \([0,\infty)\). Our questions seek to clarify the spectral type. The only progress on these problems of which we are aware is the paper [58], which shows absence of embedded eigenvalues when \(\langle x\rangle q\in L^{2}\). For a discussion of these problems in the setting of one-dimensional Schrodinger operators, see [12, 31]. ### Conservation laws We have already seen several conserved quantities for (BO) in (1.3) and (1.6). Although Theorem 1.1 requires conservation laws at lower regularity, we will first discuss the general family of 'polynomial' conservation laws because it will highlight several important characters, as well as introduce some of our broader goals in this paper. At present, there are multiple competing approaches to understanding these polynomial conservation laws; see, for example, [42] for an accessible and succinct review. As an offshoot of the developments needed for Theorem 1.1, we will offer a new unity between these approaches by connecting them back to the central objects of our analysis. The first demonstrations [6, 50] that (BO) admits infinitely many conservation laws followed the approach of [43], by introducing one-parameter families of Miura-type transformations. The connection between these two papers was later explained in [41]. We will revisit the Bock-Kruskal approach in subsection 4.1; in Theorem 4.12, we link the Bock-Kruskal transformation to our own gauge. A completely different approach was introduced in [14], which presented a vector field \(\tau\) which recursively generates conserved densities via forming commutators. We will discuss this further in subsection 4.3 before presenting our own generalization in Section 6; see Theorem 6.5. Another perspective on the conservation laws grew out of the development of an inverse scattering approach to (BO), as detailed in [2, 13, 28, 29]. Already in [2], it is remarked that the quantity \[\int q(x)\overline{N}(x;z,q)\,dx \tag{1.9}\] is conserved under the (BO) flow. Here \(\overline{N}\) represents a certain formal solution of an _inhomogeneous_ eigenfunction equation: \[-i\partial_{x}\overline{N}-C_{+}(q\overline{N})=z\overline{N}-z\quad\text{ with }\overline{N}(x)\to 1\text{ as }x\to+\infty \tag{1.10}\] and spectral parameter \(z\in[0,\infty)\), which is the essential spectrum of \(\mathcal{L}\). 
The word formal indicates that this is not an element of the underlying Hilbert space. The nonlocal nature of the operator \(\mathcal{L}\) makes the question of the existence of such solutions a delicate matter; see [9, 65]. The inhomogeneity of (1.10) is quite unexpected from an inverse scattering point of view -- one would expect honest eigenfunctions to be the central objects. In fact, this approach led to the study of two families of formal eigenfunctions, traditionally denoted \(N\) and \(\overline{M}\), as well as two families of solutions to (1.10), namely, \(\overline{N}\) and \(M\). (We caution the reader that the bar appearing here does not indicate complex conjugation.) Even in the familiar territory of Sturm-Liouville operators, we learn a lot by moving the spectral parameter off the spectrum. Taking this step, [28] considers the Fredholm equation, which in our preferred notation reads \[W=1+(\mathcal{L}_{0}-z)^{-1}C_{+}(qW),\quad\text{where}\quad z\in\mathbb{C}\setminus[0,\infty) \tag{1.11}\] and \(\mathcal{L}_{0}\) denotes \(-i\partial_{x}\) acting on \(L_{+}^{2}(\mathbb{R})\), by analogy with (1.7) with \(q\equiv 0\). This paper also observes that \(W\) is analytic in \(z\) and that the functions \(M\) and \(\overline{N}\) mentioned earlier may be realized as the boundary values (from above and below) of \(W\). Our central object in this paper will be \(m(x;\kappa,q)\), defined via \[-im^{\prime}-C_{+}[q(m+1)]+\kappa m=0\quad\text{or equivalently,}\quad m=(\mathcal{L}+\kappa)^{-1}C_{+}q. \tag{1.12}\] The sign change in the spectral parameter is motivated by the fact that we shall only need to consider \(-z=\kappa>0\); moreover, \(\kappa\) will be sufficiently large so that \(\mathcal{L}+\kappa\) is indeed invertible. In the line setting, \(m\) differs little from \(W\); indeed, \(W=1+m\). However, one of the virtues of \(m\) is that it allows us to transition seamlessly between the line and circle geometries. The direct analogue of the conserved quantity mentioned in (1.9) is \[\beta(\kappa;q):=\int q(x)m(x;\kappa,q)\,dx=\langle q_{+},(\mathcal{L}+\kappa)^{-1}q_{+}\rangle_{L_{+}^{2}}. \tag{1.13}\] The only difference is the removal of the term \(\int q\), whose inclusion would curtail applicability of this to \(q\in L^{1}\). In calling this quantity \(\beta\), we are following Talbut [59], where it arises after differentiating the perturbation determinant with respect to the spectral parameter; see subsection 4.2. This use of \(\beta\) is very different from the object with this name in [29]! Kaup-Matsuno [29] approached the question of polynomial conservation laws by expanding (1.9) in increasing powers of \(z\), noting that (1.10) gave a means of recursively generating the coefficients. In the line geometry, one finds \[\beta(\kappa;q)=\kappa^{-1}P(q)-\kappa^{-2}H_{\mathrm{BO}}(q)+\kappa^{-3}H_{2}(q)+\mathcal{O}(\kappa^{-4}). \tag{1.14}\] On the circle, by comparison, one has \[\beta(\kappa;q)=\kappa^{-1}\Big{(}P(q)+\tfrac{1}{2}\!\int\!q\Big{)}-\kappa^{-2}\Big{(}H_{\mathrm{BO}}(q)-\big{[}\!\int\!q\big{]}P(q)-\tfrac{1}{6}\big{[}\!\int\!q\big{]}^{3}\Big{)}+\mathcal{O}(\kappa^{-3}).\] A variation on this approach discussed, for example, in [16, 49, 58] is to expand the resolvent in (1.13) to obtain \[\beta(\kappa;q)\sim\sum_{\ell\geq 0}(-1)^{\ell}\kappa^{-\ell-1}\langle q_{+},\mathcal{L}^{\ell}q_{+}\rangle, \tag{1.15}\] which exhibits a very direct relationship between the Lax operator and the conservation laws of a type not seen, for example, for KdV.
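To see where (1.15) comes from, one may argue formally (this brief computation is ours and is included only for orientation): for \(q\in H^{\infty}\) and \(\kappa\) large, expanding the resolvent in a Neumann series gives \[(\mathcal{L}+\kappa)^{-1}=\kappa^{-1}\big{(}1+\kappa^{-1}\mathcal{L}\big{)}^{-1}=\sum_{\ell\geq 0}(-1)^{\ell}\kappa^{-\ell-1}\mathcal{L}^{\ell},\] and pairing with \(q_{+}\) as in (1.13) then yields (1.15) term by term. As \(\mathcal{L}\) is unbounded, the series can only be asymptotic as \(\kappa\to\infty\); nevertheless, it makes transparent that the \(\ell\)-th polynomial conservation law is, up to sign, simply \(\langle q_{+},\mathcal{L}^{\ell}q_{+}\rangle\).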
In the circle setting, one may exploit the fact that \(q_{+}=\mathcal{L}1\) to present this formula in a different way; see (4.31). While the polynomial conservation laws only make sense for very smooth initial data, we will show that their generating function \(\beta(\kappa;q)\) makes sense in either geometry for \(q\in H^{s}\) with \(s>-\frac{1}{2}\); see Proposition 4.3. As we will demonstrate, this can be used to obtain \(H^{s}\)-bounds on smooth solutions, yielding a new proof of the following: **Theorem 1.2** (Conservation laws, [60]).: _Let \(q\) be a (global) \(H^{\infty}\) solution to (BO), either on the line or on the circle. Then for all \(-\frac{1}{2}<s<0\) and \(t\in\mathbb{R}\) we have_ \[\big{(}1+\|q(0)\|_{H^{s}}\big{)}^{-2|s|}\|q(0)\|_{H^{s}}\lesssim\|q(t)\|_{H^{ s}}\lesssim\big{(}1+\|q(0)\|_{H^{s}}\big{)}^{\frac{2|s|}{1-2|s|}}\|q(0)\|_{H^{s}}.\] This is not a verbatim recapitulation of Talbut's result: he imposes a mean-zero assumption in the circle case and formulates an inferior lower bound on \(q(t)\). Nevertheless, this result can be deduced from his arguments with only minor changes. The argument in [60] is based on the analysis of a renormalized perturbation determinant in a manner inspired by [35]. This object will be described in subsection 4.2, where we will also discuss its relationship to \(\beta(\kappa;q)\). We will give a direct proof of Theorem 1.2, based solely on \(\beta(\kappa;q)\); see Corollary 5.3. In fact, Corollary 5.3, and Lemma 4.4 on which it is based, are stronger than Theorem 1.2 in two ways: they allow more general flows from the (BO) hierarchy and they demonstrate not only that solutions are bounded, but also that equicontinuous sets of initial data lead to equicontinuous ensembles of orbits. ### The method of commuting flows _A priori_ equicontinuity results of the type with which we ended the previous subsection have been an integral part of the method of commuting flows since its inception. They have many roles. For example, suppose we have a bounded sequence in \(H^{s}\) that is convergent in \(H^{-100}\); then this sequence converges in \(H^{s}\) if and only if it is \(H^{s}\)-equicontinuous. In this way, equicontinuity allows us to recover any loss of derivatives that may appear when proving that the flow depends continuously on the initial data. The main question we need to address is this: How are we to estimate the divergence of two solutions with slightly different initial data? One approach that has a long tradition is to interpose a regularized flow. Historically, this would typically be done via parabolic regularization, which introduces dissipation. We will employ a Hamiltonian flow. This will be generated by \(H_{\kappa}\), which may be regarded as an approximation to \(H_{\mathrm{BO}}\). In this way, we may rewrite the difference of the two solutions to (BO) with initial data \(q^{0}\) and \(\tilde{q}^{0}\) as \[e^{tJ\nabla H_{\mathrm{BO}}}(q^{0})-e^{tJ\nabla H_{\mathrm{BO}} }(\tilde{q}^{0}) =e^{tJ\nabla H_{\mathrm{BO}}}(q^{0})-e^{tJ\nabla H_{\kappa}}(q^{0})\] \[\qquad+e^{tJ\nabla H_{\kappa}}(q^{0})-e^{tJ\nabla H_{\kappa}}( \tilde{q}^{0})\] \[\qquad+e^{tJ\nabla H_{\kappa}}(\tilde{q}^{0})-e^{tJ\nabla H_{ \mathrm{BO}}}(\tilde{q}^{0}). \tag{1.16}\] Here, \(J\) stands for the operator \(\partial_{x}\) of the Poisson bracket (1.4). Any reasonable choice of regularized flow makes the middle term in RHS(1.16) easy to estimate; this shifts the burden to estimating the first and last terms. 
For these terms, the initial data is the same; however, the flows themselves are different. The central principle of the method of commuting flows is to choose \(H_{\kappa}\) to Poisson commute with \(H_{\mathrm{BO}}\) so that the corresponding flows commute. This commutativity allows us to write \[e^{tJ\nabla H_{\mathrm{BO}}}(q^{0})-e^{tJ\nabla H_{\kappa}}(q^{0})=\big{[}e^ {tJ\nabla(H_{\mathrm{BO}}-H_{\kappa})}-\mathrm{Id}\big{]}\circ e^{tJ\nabla H_ {\kappa}}(q^{0}). \tag{1.17}\] In this way, we are led to the following problem: show that the flow generated by \(H_{\mathrm{BO}}-H_{\kappa}\) is close to the identity, while accepting that the initial data for this more complicated flow is not simply \(q^{0}\). Indeed, \(q^{0}\) is'scrambled' by the \(H_{\kappa}\) flow, for which we have little uniform control as \(\kappa\to\infty\). Prior work on other models informs where to seek inspiration for the choice of the regularized Hamiltonian \(H_{\kappa}\), namely, from the expansion (1.14) and its torus analogue. This reasoning leads us to select \[H_{\kappa}(q):=\begin{cases}\kappa P(q)-\kappa^{2}\beta(\kappa;q)&\text{on $ \mathbb{R}$},\\ \big{[}\kappa+\!\int\!q\big{]}P(q)-\kappa^{2}\beta(\kappa;q)+\frac{\kappa}{2} \big{[}\!\int\!q\big{]}^{2}+\frac{1}{6}\big{[}\!\int\!q\big{]}^{3}&\text{on $ \mathbb{T}$}.\end{cases} \tag{1.18}\] Although there are many facets to the full story, we would like to focus attention on (1.17), how it limited Talbut's analysis to the case of \(L^{2}\) initial data, and how we were able to overcome these obstructions. By writing the nonlinearity as a complete derivative, we see that the vector field defining the (BO) flow is actually continuous on \(L^{2}\), albeit \(H^{-2}\)-valued. Likewise, the \(H_{\kappa}\) Hamiltonian defines a continuous vector field on \(L^{2}\). In this way, we may analyze the difference flow directly as the difference of these two vector fields. As noted above, the inevitable loss of two derivatives may be recovered by exploiting equicontinuity. This is what Talbut does in [59]. However, as soon as \(s<0\), we may no longer make sense of \(q^{2}\), for \(q\in H^{s}\), even as a distribution. The idea of incorporating a gauge transformation into the method of commuting flows appears already in [34], although it is not always a prerequisite for obtaining sharp results; see [22]. The big hurdle is finding the right transformation. It is natural to try Tao's gauge [61]. However, the high-low interactions that are so troublesome for his style of analysis and which this gauge removes, are of no consequence for our methodology; indeed, outermost derivatives are handled with equicontinuity. Ultimately, we do not find this transformation helpful for our analysis. In previous incarnations of the method of commuting flows, it was the diagonal Green's function that played a central role. It is elementary to verify that even when \(q\equiv 0\), the Green's function diverges on the diagonal; thus, renormalization is required. In the case of (BO), however, we found this approach to be fruitless. Our next attempt was to employ the gauge transformation introduced by Bock and Kruskal [6] in their study of conservation laws for (BO) posed on the line. This gauge is defined implicitly via \[2q=\tfrac{1}{w+\kappa}\mathsf{H}(w^{\prime})+\mathsf{H}\big{[}\tfrac{w^{ \prime}}{w+\kappa}\big{]}+\tfrac{2\kappa w}{w+\kappa}. 
\tag{1.19}\] In subsection 4.1, we will demonstrate the existence and uniqueness of such a \(w\); indeed, we will show this is possible even for \(q\in H^{s}\) with \(s>-\tfrac{1}{2}\), and that the transformed unknown \(w\) lies in \(H^{s+1}\). As noted in [6], it is not difficult to verify that (BO) may be written as \[\tfrac{d}{dt}w=\mathsf{H}w^{\prime\prime}-2qw^{\prime}, \tag{1.20}\] which does not appear to constitute progress -- how can we hope to multiply \(q\in H^{s}\) and \(w^{\prime}\in H^{s}\)? However, combining this with (1.19), a little work reveals that \[\tfrac{d}{dt}w=\mathsf{H}w^{\prime\prime}+2iC_{+}\big{(}w^{\prime}\big{)}\cdot C_{+}\big{(}\tfrac{w^{\prime}}{\kappa+w}\big{)}-2iC_{-}\big{(}w^{\prime}\big{)}\cdot C_{-}\big{(}\tfrac{w^{\prime}}{\kappa+w}\big{)}+\tfrac{2\kappa ww^{\prime}}{\kappa+w}.\] This was our first breakthrough on the problem! The fact that this is progress rests on a simple but fundamental observation: the product of two functions in \(H^{s}_{+}\) is a well-defined distribution; see Lemma 2.2. Of course, this is not true without the frequency restriction. Next, we must find a description of the dynamics of \(w\) under the regularized Hamiltonian (1.18). Immediately, we strike new hurdles. In past analyses employing a gauge transformation, we were led to the regularized dynamics of the gauge variable through the biHamiltonian relation. However, [14] shows that there is no such biHamiltonian formulation of (BO)! On top of this, we could not find any documented relationship between \(\beta(\kappa)\) and \(w\), which might help derive such dynamics. This is the important role of Theorem 4.12 in our story: it connects \(w\) to \(m\) and thence to \(\beta\). As we investigated \(w\) through its connection to \(m\), it soon became apparent that our treatment could be much simplified by abandoning \(w\) and adopting \(m\) as our new gauge. It is striking to us that despite the long history of \(m\) in the theory of (BO), its value as a gauge transformation has been overlooked until now. The abandonment of \(w\) and adoption of \(m\) as our gauge transformation accelerated us toward a proof of Theorem 1.1, albeit not the proof presented here. The simplicity of the arguments in this paper benefits substantially from a further innovation, namely, the Lax pair presented in Proposition 5.1. We do not alter the traditional Lax operator \(\mathcal{L}\), only its antisymmetric partner \(\mathcal{P}\), which we call the Peter operator (Lax's first name). Although a Lax representation of the flow generated by \(\beta(\kappa)\) has appeared previously in Proposition 2.17 of [58], this would not lead one to (5.3) or (5.4); the first term in each equation is new. At first glance, this may seem inconsequential; however, the inclusion of these first terms makes a huge difference. It is only these modified Peter operators that satisfy the special properties (5.5) and (5.14), which much simplify the proof of Theorem 1.1 in Section 5. Additional special properties of our Peter operators are discussed in Section 6. ### Applications of the new Lax pair Section 6 is devoted to reaping certain other rewards from our new Lax pair, not directly related to well-posedness. Here the reader will find Theorem 6.1, which provides an extension of Gerard's recent explicit formula [15] for (BO) to the full hierarchy, as well as Theorem 6.5 which describes the action of a one-parameter family of higher symmetries. The notion of a higher symmetry is described in subsection 4.3.
It is a symmetry that lies outside the commuting flows of the hierarchy because it does _not_ preserve the values of the commuting Hamiltonians. Scaling and Galilei boosts are simple examples. We also discuss a much more profound example from [14], for which we provide a mechanical explanation: the center of energy travels at a constant speed under every flow of the hierarchy. One is then led to ask if there are centers associated to the other conserved quantities that also travel at constant speed. Theorem 6.5 answers this in the affirmative, thereby presenting new recursion relations within the hierarchy. As a consonant example of the utility of our Lax pair, we present a generalization of the variance identity of [24] to the full (BO) hierarchy. In extending Gerard's formula to the full hierarchy, we actually find an explicit formula for the \(\tau\)-function associated with (BO). By a \(\tau\)-function, we mean an expression for the solution under a general Hamiltonian. Traditionally, \[q(\vec{t};q_{0})=\Big{[}\exp\bigl{\{}\sum\!t_{i}J\nabla H_{i}\bigr{\}}q_{0} \Big{]}(x=0) \tag{1.21}\] would be written as a logarithmic derivative of the \(\tau\)-function; however, such a \(\tau\)-function evidently contains as much information as \(q(\vec{t};q_{0})\). Here \(H_{i}\) enumerate the commuting Hamiltonians of the hierarchy, while \(\vec{t}\) denotes a vector of times (with only finitely many non-zero terms). Note that this function is scalar-valued. This is no loss of generality because momentum is one of the Hamiltonians, traditionally assigned index \(i=0\); consequently, one may recover the value of the solution at any spatial point by using the variable \(t_{0}\). The relation (1.15) has inspired us to propose parameterizing the \(\tau\)-function in a different way, namely, by continuous functions \(\phi\). Just as \[H_{\phi}(q):=\langle q_{+},\phi(\mathcal{L})q_{+}\rangle \tag{1.22}\] defines a conserved quantity for the hierarchy, so we may define \[q(\phi;q_{0})=\bigl{(}e^{J\nabla H_{\phi}}q_{0}\bigr{)}(x=0). \tag{1.23}\] When \(\phi\) is a polynomial, this reproduces (1.21). In Section 6 we will prove the following formula for a dense class of functions \(\phi\): \[C_{+}\bigl{(}e^{J\nabla H_{\phi}}q_{0}\bigr{)}(x+iy)=\tfrac{1}{2\pi i}I_{+} \Bigl{(}\bigl{(}X-t\psi(\mathcal{L}_{q_{0}})-x-iy\bigr{)}^{-1}q_{+}^{0}\Bigr{)} \tag{1.24}\] for \(x\in\mathbb{R}\), \(y>0\). Recall that functions in the Hardy space are analytic in the upper half-plane; moreover, as \(q\) is real-valued, it may be recovered from its positive-frequency part. Here \(X\) denotes the operator of multiplication by \(x\) and \(I_{+}\) denotes a kind of conditional integral; both are described in detail in Section 3. The function \(\psi\) applied to the Lax operator associated to the initial data \(q_{0}\) is defined via \[\psi(E)=\phi(E)+E\phi^{\prime}(E). \tag{1.25}\] This new algebraic relation has an important role: it reveals exactly how the explicit formula (1.24) varies in response to changes in the Hamiltonian. ### Acknowledgements R.K. was supported by NSF grants DMS-1856755 and DMS-2154022; M.V. was supported by NSF grant DMS-2054194. The work of T.L. was also supported by these grants. ## 2. 
Notation and preliminaries Our conventions for the Fourier transform are \[\hat{f}(\xi)=\tfrac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-i\xi x}f(x)\,dx\quad\text{so}\quad f(x)=\tfrac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{i\xi x}\hat{f}(\xi)\,d\xi\] for functions on the line, while on the circle, \[\hat{f}(\xi)=\int_{0}^{1}e^{-i\xi x}f(x)\,dx\quad\text{so}\quad f(x)=\sum_{\xi\in 2\pi\mathbb{Z}}\hat{f}(\xi)e^{i\xi x}.\] These Fourier transforms are unitary on \(L^{2}\) and yield the Plancherel identities \[\|f\|_{L^{2}(\mathbb{R})}=\|\hat{f}\|_{L^{2}(\mathbb{R})}\quad\text{and}\quad\|f\|_{L^{2}(\mathbb{T})}^{2}=\sum_{\xi\in 2\pi\mathbb{Z}}|\hat{f}(\xi)|^{2}.\] With these conventions, we define the Hilbert transform via \[\widehat{\mathsf{H}f}(\xi)=-i\,\text{sgn}(\xi)\widehat{f}(\xi) \tag{2.1}\] with the understanding that \(\text{sgn}(0)=0\), which is only important on the circle. We will also employ the Cauchy-Szego projections defined via \[\widehat{C_{\pm}f}(\xi)=1_{[0,\infty)}(\pm\xi)\widehat{f}(\xi) \tag{2.2}\] and often write \(q_{\pm}=C_{\pm}q\). Although \(i\mathsf{H}=C_{+}-C_{-}\) in both geometries, we have \[C_{+}+C_{-}=1\quad\text{only on the line; on the circle,}\quad C_{+}f+C_{-}f=f+\int f. \tag{2.3}\] To avoid an unnecessary proliferation of parentheses, we adopt the following rules for the operators \(C_{\pm}\): Their precedence is lower than multiplication indicated by juxtaposition (e.g., \(fg\)), but higher than multiplication indicated with a dot, addition, and subtraction. Thus, by our conventions, \[C_{+}f\cdot C_{+}(\overline{m}+g)h+q=\big{[}C_{+}f\big{]}\big{[}C_{+}\big{(}(\overline{m}+g)h\big{)}\big{]}+q.\] For \(\sigma\in\mathbb{R}\) and \(\kappa\geq 1\) we define the Sobolev spaces \(H^{\sigma}_{\kappa}(\mathbb{R})\) and \(H^{\sigma}_{\kappa}(\mathbb{T})\) as the completion of \(\mathcal{S}(\mathbb{R})\) and \(C^{\infty}(\mathbb{T})\), respectively, with respect to the norms \[\|f\|_{H^{\sigma}_{\kappa}(\mathbb{R})}^{2}=\int(|\xi|+\kappa)^{2\sigma}|\widehat{f}(\xi)|^{2}\,d\xi\quad\text{and}\quad\|f\|_{H^{\sigma}_{\kappa}(\mathbb{T})}^{2}=\sum_{\xi\in 2\pi\mathbb{Z}}(|\xi|+\kappa)^{2\sigma}|\hat{f}(\xi)|^{2}.\] When \(\kappa=1\), we simply write \(H^{\sigma}(\mathbb{R})\) and \(H^{\sigma}(\mathbb{T})\). We write \(H^{\sigma}_{+}\) for the subspace of \(H^{\sigma}\) comprised of functions holomorphic in the upper half-plane. Throughout the paper, we will employ the \(L^{2}\) pairing: \(\langle g,f\rangle=\int\overline{g}(x)f(x)\,dx\). This informs our identification of \(H^{\sigma}_{\kappa}\) and \(H^{-\sigma}_{\kappa}\) as dual spaces. For the remainder of the paper, we constrain \[s\in(-\tfrac{1}{2},0)\quad\text{and define}\quad\varepsilon:=\tfrac{1}{2}(\tfrac{1}{2}-|s|)\in(0,\tfrac{1}{4}). \tag{2.4}\] All implicit constants are permitted to depend on \(s\). As \(s+1>\tfrac{1}{2}\), the space \(H^{s+1}_{\kappa}\) is an algebra in either geometry. Indeed, we have \[\|fg\|_{H^{s+1}_{\kappa}}\lesssim\|f\|_{H^{s+1}}\,\|g\|_{H^{s+1}_{\kappa}}\quad\text{uniformly for $\kappa\geq 1$.} \tag{2.5}\] However, we will also need to handle products at considerably lower regularity; this is the topic of the next two lemmas. **Lemma 2.1**.: _The product of any \(f\in H^{s}\) and \(g\in H^{s+1}\) belongs to \(H^{s}\); indeed,_ \[\|gf\|_{H^{s}_{\kappa}}\lesssim\big{[}\|g\|_{L^{\infty}}+\|g\|_{H^{1/2}}\big{]}\|f\|_{H^{s}_{\kappa}}\lesssim\kappa^{-2\varepsilon}\|g\|_{H^{s+1}_{\kappa}}\|f\|_{H^{s}_{\kappa}}, \tag{2.6}\] _uniformly for \(\kappa\geq 1\).
Here \(s,\varepsilon\) are as in (2.4)._ Proof.: The second inequality in (2.6) is elementary. We focus on the first. By duality, it suffices to verify that \[\|gh\|_{H^{\sigma}_{\kappa}}\lesssim\big{[}\|g\|_{L^{\infty}}+\|g\|_{H^{1/2}}\big{]}\|h\|_{H^{\sigma}_{\kappa}} \tag{2.7}\] holds with \(\sigma=|s|\). In fact, (2.7) holds for any \(\sigma\in[0,\tfrac{1}{2})\). This is a special case of Theorem II.3.2 in [57]. For completeness, we give an elementary proof of our own. Our argument is based on the Besov-Slobodeckij characterization: \[\|h\|_{H^{\sigma}_{\kappa}}^{2}\sim\kappa^{2\sigma}\|h\|_{L^{2}}^{2}+\iint\frac{|h(x)-h(y)|^{2}}{|x-y|^{2\sigma+1}}\,dx\,dy\quad\text{for any $\sigma\in(0,1)$.} \tag{2.8}\] It is not difficult to see that \[|(gh)(x)-(gh)(y)|^{2}\lesssim\|g\|_{L^{\infty}}^{2}|h(x)-h(y)|^{2}+|h(x)||h(y)||g(x)-g(y)|^{2}. \tag{2.9}\] The first summand presents no difficulty. For the second summand we employ Holder's inequality and then the homogeneous Sobolev embedding \(\dot{H}^{\sigma}\hookrightarrow L^{2/(1-2\sigma)}\): \[\iint\frac{|h(x)||h(y)||g(x)-g(y)|^{2}}{|x-y|^{2\sigma+1}}\,dx\,dy \lesssim\|h(x)h(y)\|_{L^{\frac{2}{1-2\sigma}}_{x,y}}\bigg{\|}\frac{|g(x)-g(y)|^{2}}{|x-y|^{2\sigma+1}}\bigg{\|}_{L^{\frac{2}{1+2\sigma}}_{x,y}}\] \[\lesssim\|h\|_{H^{\sigma}_{\kappa}}^{2}\|g\|_{L^{\infty}}^{1-2\sigma}\|g\|_{H^{1/2}}^{1+2\sigma}.\qed\] In general, pointwise multipliers on negative regularity spaces must have considerable positive regularity; indeed, this is evident from the duality reduction performed in this proof. There is one important exception, namely, when both functions lie in the same Hardy space. This observation, whose proof is quite elementary, plays a crucial role in our analysis. **Lemma 2.2**.: _Fix \(r<0\). Then for \(f,g\in H^{r}_{+}\) we have_ \[\|fg\|_{H^{2r-1}}\lesssim\|f\|_{H^{r}}\,\|g\|_{H^{r}}\,. \tag{2.10}\] Proof.: We start by rewriting LHS(2.10) in Fourier variables: \[\|fg\|_{H^{2r-1}}^{2}=\frac{1}{2\pi}\int_{0}^{\infty}\frac{1}{(\xi+1)^{4|r|+2}}\bigg{|}\int_{0}^{\xi}\widehat{f}(\xi-\eta)\widehat{g}(\eta)\,d\eta\bigg{|}^{2}\,d\xi.\] Using that for \(\eta\in[0,\xi]\) we have \[\tfrac{1}{(\xi+1)^{2}}\leq\tfrac{1}{\xi-\eta+1}\cdot\tfrac{1}{\eta+1},\] distributing the factors of \((\xi+1)^{4r}\) evenly between \(f\) and \(g\), and using Cauchy-Schwarz, we may bound \[\int_{0}^{\infty}\frac{1}{(\xi+1)^{4|r|+2}}\bigg{|}\int_{0}^{\xi}\widehat{f}(\xi-\eta)\widehat{g}(\eta)\,d\eta\bigg{|}^{2}\,d\xi\] \[\leq\int_{0}^{\infty}\frac{1}{(\xi+1)^{2}}\bigg{(}\int_{0}^{\xi}\frac{|\widehat{f}(\xi-\eta)|}{(\xi-\eta+1)^{|r|}}\,\frac{|\widehat{g}(\eta)|}{(\eta+1)^{|r|}}\,d\eta\bigg{)}^{2}\,d\xi\] \[\leq\int_{0}^{\infty}\frac{1}{(\xi+1)^{2}}\left\|f\right\|_{H^{r}}^{2}\left\|g\right\|_{H^{r}}^{2}\,d\xi\lesssim\left\|f\right\|_{H^{r}}^{2}\left\|g\right\|_{H^{r}}^{2}.\qed\] **Definition 2.3** (Equicontinuity).: Fix \(\sigma\in\mathbb{R}\). A bounded set \(Q\subset H^{\sigma}\) is said to be _equicontinuous_ if \[\limsup_{\delta\to 0}\,\sup_{q\in Q}\,\sup_{|y|<\delta}\|q(\cdot+y)-q(\cdot)\|_{H^{\sigma}}=0.\] By Plancherel, equicontinuity in the spatial variable is equivalent to tightness in the Fourier variable.
Specifically, a bounded set \(Q\subset H^{\sigma}\) is equicontinuous if and only if \[\lim_{\kappa\to\infty}\sup_{q\in Q}\int_{|\xi|\geq\kappa}|\widehat{q}(\xi)|^{ 2}(|\xi|+1)^{2\sigma}\,d\xi=0\quad\text{on $\mathbb{R}$} \tag{2.11}\] or \[\lim_{\kappa\to\infty}\sup_{q\in Q}\sum_{|\xi|\geq\kappa}|\widehat{q}(\xi)|^{ 2}(|\xi|+1)^{2\sigma}=0\quad\text{on $\mathbb{T}$}. \tag{2.12}\] It is important for our arguments that we are able to transfer the equicontinuity property from classes of initial data to the corresponding orbits. This we achieve by combining the following characterization of equicontinuity with the two-sided estimate (4.14) and the conservation of \(\beta(\kappa;q)\). **Lemma 2.4** (Characterization of equicontinuity).: _Let \(Q\) be a bounded subset of \(H^{s}\). Then the following are equivalent:_ 1. _The subset_ \(Q\) _is equicontinuous in_ \(H^{s}\)_._ 2. \(\left\|q\right\|_{H^{s}_{\kappa}}\to 0\) _as_ \(\kappa\to\infty\) _uniformly for_ \(q\in Q\)_._ Proof.: We only consider the real-line case below; the argument on the circle is similar, with integrals being replaced by sums. First, we show that (i) implies (ii). Fix \(\delta>0\). For \(\varkappa\geq 1\) to be chosen later, we may bound \[\int_{\mathbb{R}}\frac{|\widehat{q}(\xi)|^{2}}{(|\xi|+\kappa)^{2|s|}}\,d\xi \lesssim\frac{\varkappa^{2|s|}}{\kappa^{2|s|}}\int_{\mathbb{R}}\frac{| \widehat{q}(\xi)|^{2}}{(|\xi|+1)^{2|s|}}\,d\xi+\int_{|\xi|\geq\varkappa}\frac{ |\widehat{q}(\xi)|^{2}}{(|\xi|+1)^{2|s|}}\,d\xi.\] As \(Q\) is equicontinuous, we may pick \(\varkappa=\varkappa(\delta)\) sufficiently large so that the second integral on the right-hand side is at most \(\delta\). Then, as \(Q\) is bounded in \(H^{s}\), we may choose \(\kappa\) sufficiently large so that the first term on the right-hand side is at most \(\delta\). Together, this shows that the left-hand side is at most \(2\delta\) for all \(\kappa\) sufficiently large, uniformly for \(q\in Q\). As \(\delta>0\) was arbitrary, this proves (ii). Conversely, the inequality \[\int_{|\xi|\geq\kappa}\frac{|\widehat{q}(\xi)|^{2}}{(|\xi|+1)^{2|s|}}\,d\xi \lesssim\int_{\mathbb{R}}\frac{|\widehat{q}(\xi)|^{2}}{(|\xi|+\kappa)^{2|s|}} \,d\xi\] shows that (ii) implies (i). ## 3. The Lax operator In this section, we investigate the Lax operator and its mapping properties. We begin by establishing inequalities that will allow us to prove convergence of the various resolvent expansions that arise in our analysis. **Lemma 3.1**.: _For \(s,\varepsilon\) as in (2.4), we have_ \[\left\|C_{+}qR_{0}(\kappa)C_{+}f\right\|_{H^{s}} \lesssim\kappa^{-2\varepsilon}\left\|q\right\|_{H^{s}}\left\|f_{+ }\right\|_{H^{s}_{\kappa}},\] \[\left\|C_{+}qR_{0}(\kappa)C_{+}f\right\|_{H^{s}_{\kappa}} \lesssim\kappa^{-2\varepsilon}\left\|q\right\|_{H^{s}_{\kappa}} \left\|f_{+}\right\|_{H^{s}_{\kappa}}, \tag{3.1}\] _where the implicit constants are uniform in \(\kappa\geq 1\). Moreover,_ \[\left\|C_{+}\,q\,C_{+}\,f\right\|_{H^{-1/2}_{\kappa}} \lesssim\kappa^{-2\varepsilon}\|q\|_{H^{s}_{\kappa}}\left\|f_{+ }\right\|_{H^{1/2}_{\kappa}}. \tag{3.2}\] Proof.: We present the details on the line; the argument on the circle is a close analogue, with integrals replaced by sums. There is little difference between the proofs of the two estimates (3.1). We will illustrate the argument with the former because it contains both normal and \(\kappa\)-modified Sobolev norms. 
In Fourier variables, we have \[\left\|C_{+}qR_{0}(\kappa)C_{+}f\right\|_{H^{s}}^{2}=\frac{1}{2\pi}\int_{0}^{ \infty}\frac{1}{(\xi+1)^{2|s|}}\bigg{|}\int_{0}^{\infty}\widehat{q}(\xi-\eta) \frac{\widehat{f}(\eta)}{\eta+\kappa}\,d\eta\bigg{|}^{2}d\xi.\] To estimate the contribution of the region where \(\eta\geq 2\xi\geq 0\), we use that \[\frac{1}{\eta+\kappa}\lesssim\frac{1}{(\eta+\kappa)^{|s|}}\,\frac{1}{(|\xi- \eta|+1)^{|s|}}\,\frac{1}{(\xi+\kappa)^{1-2|s|}}\] uniformly for \(\eta\geq 2\xi\geq 0\) and \(\kappa\geq 1\). Together with Cauchy-Schwarz, this yields \[\int_{0}^{\infty} (\xi+1)^{2s}\bigg{|}\int_{2\xi}^{\infty}\widehat{q}(\xi-\eta) \frac{\widehat{f}(\eta)}{\eta+\kappa}\,d\eta\bigg{|}^{2}d\xi\] \[\lesssim\int_{0}^{\infty}\frac{d\xi}{(\xi+1)^{1-4\varepsilon}( \xi+\kappa)^{8\varepsilon}}\left\|q\right\|_{H^{s}}^{2}\left\|f_{+}\right\|_{ H^{s}_{\kappa}}^{2}\] \[\lesssim\kappa^{-4\varepsilon}\left\|q\right\|_{H^{s}}^{2}\left\| f_{+}\right\|_{H^{s}_{\kappa}}^{2}.\] In the last step we integrated separately over \(\xi\in[0,\kappa]\) and \(\xi\in[\kappa,\infty)\). To estimate the contribution of the remaining region, \(0\leq\eta\leq 2\xi\), we use that \[\frac{1}{(\xi+1)^{2|s|}}\lesssim\frac{1}{(|\xi-\eta|+1)^{2|s|}}\quad\text{ uniformly for}\quad 0\leq\eta\leq 2\xi.\] Together with the Minkowski and Cauchy-Schwarz inequalities, this yields \[\int_{0}^{\infty} \frac{1}{(\xi+1)^{2|s|}}\bigg{|}\int_{0}^{2\xi}\widehat{q}(\xi- \eta)\frac{\widehat{f}(\eta)}{\eta+\kappa}\,d\eta\bigg{|}^{2}d\xi\] \[\lesssim\int_{0}^{\infty}\bigg{(}\int_{0}^{\infty}\frac{|\widehat {q}(\xi-\eta)|}{(|\xi-\eta|+1)^{|s|}}\,\frac{|\widehat{f}(\eta)|}{(\eta+ \kappa)^{|s|}}\,\frac{d\eta}{(\eta+\kappa)^{1-|s|}}\bigg{)}^{2}d\xi\] \[\leq\left\|q\right\|_{H^{s}}^{2}\left(\int_{0}^{\infty}\frac{| \widehat{f}(\eta)|}{(\eta+\kappa)^{|s|}}\,\frac{d\eta}{(\eta+\kappa)^{1-|s|} }\right)^{2}\] \[\leq\|q\|_{H^{s}}^{2}\,\|f_{+}\|_{H^{s}_{\kappa}}^{2}\int_{0}^{\infty}\frac{d \eta}{(\eta+\kappa)^{1+4\varepsilon}}\] As in the previous region, \(\varepsilon>0\) is needed for convergence of the integral. Together with our treatment of the first region, this proves (3.1). It remains to prove (3.2). We proceed as previously. 
Observing that \[\kappa^{\varepsilon}(\xi+\kappa)^{\varepsilon}\lesssim(|\xi-\eta|+\kappa)^{-| s|}\sqrt{\eta+\kappa}\quad\text{uniformly for }2\eta>\xi>0\text{ and }\kappa\geq 1\] and using Cauchy-Schwarz, we deduce that \[\int_{0}^{\infty}\bigg{|}\int_{\xi/2}^{\infty}\widehat{q}(\xi- \eta)\widehat{f}(\eta)\,d\eta\bigg{|}^{2}\,\frac{d\xi}{\xi+\kappa}\] \[\lesssim\int_{0}^{\infty}\frac{\kappa^{-2\varepsilon}}{(\xi+ \kappa)^{1+2\varepsilon}}\bigg{|}\int_{0}^{\infty}\frac{|\widehat{q}(\xi-\eta )|\,|\widehat{f}(\eta)|}{(|\xi-\eta|+\kappa)^{|s|}}\sqrt{\eta+\kappa}\,d\eta \bigg{|}^{2}d\xi \tag{3.3}\] \[\lesssim\kappa^{-4\varepsilon}\|q\|_{H^{s}_{\kappa}}^{2}\,\|f_{+} \|_{H^{1/2}_{\kappa}}^{2}\,.\] Complementing this, we have \[(\xi+\kappa)^{-1}\lesssim(|\xi-\eta|+\kappa)^{-2|s|}(\eta+\kappa)^{2|s|-1} \quad\text{uniformly for }0<2\eta<\xi\text{ and }\kappa\geq 1.\] Consequently, by Minkowski and Cauchy-Schwarz, \[\int_{0}^{\infty}\bigg{|}\int_{0}^{\xi/2}\widehat{q}(\xi-\eta) \widehat{f}(\eta)\,d\eta\bigg{|}^{2}\,\frac{d\xi}{\xi+\kappa}\] \[\lesssim\int_{0}^{\infty}\bigg{|}\int_{0}^{\infty}\frac{|\widehat {q}(\xi-\eta)|}{(|\xi-\eta|+\kappa)^{|s|}}\frac{\sqrt{\eta+\kappa}\,|\widehat {f}(\eta)|}{(\eta+\kappa)^{1-|s|}}\,d\eta\bigg{|}^{2}d\xi \tag{3.4}\] \[\lesssim\kappa^{-4\varepsilon}\|q\|_{H^{s}_{\kappa}}^{2}\,\|f_{+ }\|_{H^{1/2}_{\kappa}}^{2}\,.\] Combining (3.3) and (3.4) proves (3.2). We now come to the principal purpose of this section, namely, understanding \(\mathcal{L}\) as a selfadjoint operator and obtaining quantitative information on its mapping properties, as well as those of its resolvent. **Proposition 3.2** (Lax operator).: _Let \(s,\varepsilon\) be as in (2.4). Given \(q\in H^{s}\), there is a unique selfadjoint, semi-bounded operator \(\mathcal{L}\) associated to the quadratic form_ \[f\mapsto\langle f,\mathcal{L}_{0}f\rangle-\int q(x)|f(x)|^{2}\,dx\] _having form domain \(H^{1/2}_{+}\). This operator satisfies_ \[\|\mathcal{L}f\|_{H^{s}}\lesssim\big{[}1+\|q\|_{H^{s}}\big{]}\,\|f\|_{H^{s+1}}\,. \tag{3.5}\] _Moreover, there is a constant \(C_{s}\geq 1\) so that whenever_ \[\kappa\geq C_{s}\big{(}1+\|q\|_{H^{s}_{\kappa}}\big{)}^{\frac{1}{2\varepsilon}}, \tag{3.6}\] _the resolvent \(R(\kappa;q)\) of \(\mathcal{L}\) exists, maps \(H^{-1/2}_{+}\) into \(H^{1/2}_{+}\), and satisfies_ \[\|R(\kappa)f\|_{H^{s+1}_{\kappa}}\lesssim\|f\|_{H^{s}_{\kappa}}\ \text{and}\ \ \big{\|}[R(\kappa)-R_{0}(\kappa)]f\big{\|}_{H^{s+1}_{\kappa}}\lesssim\kappa^{-2 \varepsilon}\,\|q\|_{H^{s}_{\kappa}}\,\|f\|_{H^{s}_{\kappa}}\,. \tag{3.7}\] _The essential spectrum \(\sigma_{\text{ess}}(\mathcal{L})\) agrees with that of \(\mathcal{L}_{0}\) and for any \(f\in H^{s}_{+}\),_ \[z\mapsto\langle f,(\mathcal{L}+z)^{-1}f\rangle \tag{3.8}\] _defines a meromorphic function on the region where \(-z\in\mathbb{C}\setminus\sigma_{\rm ess}(\mathcal{L})\)._ Proof.: For \(f\in H_{+}^{1/2}\), the estimate (3.2) shows \[\big{|}\langle f,qf\rangle\big{|}\lesssim\kappa^{-2\varepsilon}\,\|q\|_{H_{ \kappa}^{s}}\,\|f\|_{H_{\kappa}^{1/2}}^{2}=\kappa^{-2\varepsilon}\,\|q\|_{H^{s }}\,\langle f,(\mathcal{L}_{0}+\kappa)f\rangle. \tag{3.9}\] By choosing \(\kappa\) large, we see that the potential \(q\) is an infinitesimally form-bounded perturbation of the operator \(\mathcal{L}_{0}\). Therefore the existence and uniqueness of \(\mathcal{L}\) follows from [54, Th. X.17]. The operator so defined automatically maps the form domain \(H_{+}^{1/2}\) into its dual space \(H_{+}^{-1/2}\). 
(It will not be important for us to discuss the operator domain of \(\mathcal{L}\).) The estimate (3.5) follows directly from (2.6). By virtue of Lemma 3.1, there is a choice of \(C_{s}\geq 1\) so that (3.6) ensures \[\|C_{+}qR_{0}(\kappa)C_{+}\|_{H_{\kappa}^{s}\to H_{\kappa}^{s}}<\tfrac{1}{2} \quad\text{and}\quad\|C_{+}qR_{0}(\kappa)\|_{H_{+}^{-1/2}\to H_{+}^{-1/2}}< \tfrac{1}{2}. \tag{3.10}\] This in turn guarantees the convergence of the resolvent series \[R(\kappa;q)=(\mathcal{L}+\kappa)^{-1}=R_{0}(\kappa)\sum_{\ell\geq 0}\bigl{[}C_{ +}qR_{0}(\kappa)\bigr{]}^{\ell}, \tag{3.11}\] both as an operator from \(H_{\kappa}^{s}\) to \(H_{\kappa}^{s+1}\) and as an operator from \(H_{+}^{-\frac{1}{2}}\) to \(H_{+}^{\frac{1}{2}}\). This also proves both claims in (3.7). To show that \(\sigma_{\rm ess}(\mathcal{L})=\sigma_{\rm ess}(\mathcal{L}_{0})\), we need only demonstrate that \(R(\kappa)-R_{0}(\kappa)\) is a compact operator for some \(\kappa>0\); see [55, Th. XIII.14]. For this purpose, we write \[R(\kappa)-R_{0}(\kappa)=R_{0}(\kappa)C_{+}q\sqrt{R_{0}(\kappa)}\cdot\sqrt{R_{ 0}(\kappa)}\big{[}1+C_{+}qR(\kappa)\big{]}. \tag{3.12}\] It is easy to verify that the first factor in this expansion is compact by computing its Hilbert-Schmidt norm. On the line, for example, \[\bigl{\|}R_{0}(\kappa)C_{+}q\sqrt{R_{0}(\kappa)}\bigr{\|}_{\rm HS}^{2}=\frac{ 1}{2\pi}\int_{0}^{\infty}\!\!\int_{0}^{\infty}\frac{|\widehat{q}(\xi-\eta)|^{ 2}\,d\eta\,d\xi}{(\eta+\kappa)(\xi+\kappa)^{2}}\lesssim\kappa^{-1}\|q\|_{H_{ \kappa}^{-1/2}}^{2}.\] Boundedness on \(L^{2}\) of the second factor on RHS(3.12), for \(\kappa\) sufficiently large, follows from (3.7) and (2.6). The spectral theorem already guarantees that the mapping defined in (3.8) is meromorphic off the essential spectrum provided that the vector \(f\) belongs to the quadratic form domain of the resolvent, which is to say, the dual of the quadratic form domain. In this way, we see that the argument could be expanded beyond \(f\in H_{+}^{s}\) to \(f\in H_{+}^{-1/2}\). Clearly, (3.6) is implied by the simpler condition \[q\in B_{A}^{s}:=\{\text{real-valued}\ q\in H^{s}:\|q\|_{H^{s}}\leq A\}\quad \text{and}\quad\kappa\geq C_{s}\big{(}1+A\big{)}^{\frac{1}{2\varepsilon}}. \tag{3.13}\] However, we will need to continue with the more complicated formulation in order to close a bootstrap argument in the proof of Lemma 4.4. The conditions (3.6) and (3.13) guarantee the constructive invertibility of \(\mathcal{L}+\kappa\) via the series (3.11). In this regard, they cannot be substantially improved; this can be easily seen by considering the family of solitons (1.1). Indeed, when \(q=Q_{c}\), the operator \(\mathcal{L}\) has an eigenvalue at \(-c/2\) with eigenvector \((cx+i)^{-1}\). By comparison, the \(H_{c}^{s}\) norm of \(Q_{c}\) is comparable to \(c^{2\varepsilon}\). Our next lemma will be needed for the proof of Lemma 5.4. **Lemma 3.3**.: _For \(f,g\in H^{s+1}_{+}\) we have_ \[C_{+}\big{(}f\,\overline{\mathcal{L}g}-\overline{g}\,\mathcal{L}f\big{)}=iC_{+} \big{(}f\overline{g}\big{)}^{\prime}+f[1-C_{-}](q_{+}\overline{g}). 
\tag{3.14}\] Proof.: We compute \[C_{+}\big{(}f\,\overline{\mathcal{L}g}-\overline{g}\,\mathcal{L}f\big{)} =C_{+}\big{\{}if\overline{g}^{\prime}-fC_{-}(q\overline{g})+if^{\prime}\overline{g}+\overline{g}C_{+}(qf)\big{\}}\] \[=iC_{+}\big{(}f\overline{g}\big{)}^{\prime}+C_{+}\big{\{}\overline{g}qf-fC_{-}(q\overline{g})\big{\}}\] \[=iC_{+}\big{(}f\overline{g}\big{)}^{\prime}+C_{+}\big{\{}f[1-C_{-}](q\overline{g})\big{\}}\] \[=iC_{+}\big{(}f\overline{g}\big{)}^{\prime}+f[1-C_{-}](q\overline{g}).\] Finally, noting that the presence of \([1-C_{-}]\) allows us to replace \(q\) by \(q_{+}\) in the last term, we obtain (3.14). The remainder of this section concerns the interaction between the Lax operator \(\mathcal{L}\) and the operator of multiplication by \(x\). To do this, we must first describe how multiplication by \(x\) can be interpreted as an operator on the Hardy space \(L^{2}_{+}(\mathbb{R})\). It cannot be realized as a selfadjoint operator! In order to make sense of multiplication by \(x\) on \(L^{2}_{+}(\mathbb{R})\), it is easiest to employ Fourier transformation and the theory of semigroups. We wish to make sense of \(i\partial_{\xi}\) as an operator on a half-line. The naturally associated semigroups \(e^{t\partial}\) and \(e^{-t\partial}\) represent translation to the left (with truncation to \([0,\infty)\)) and translation to the right (padded with zero), respectively. Each gives rise to a strongly continuous semigroup and we may then define multiplication by \(x\) as the associated generator. We adopt the left shift as the basis for our notion of multiplication by \(x\) since this leads to an operator with larger domain. We record here some basic results of the general theory presented, for example, in [54, §X.8]: **Lemma 3.4**.: _Let \(X\) denote the (unbounded) operator on \(L^{2}_{+}(\mathbb{R})\) with_ \[D(X)=\big{\{}f\in L^{2}_{+}(\mathbb{R}):\widehat{f}\in H^{1}\big{(}[0,\infty)\big{)}\big{\}}\quad\text{and}\quad\widehat{Xf}(\xi)=i\tfrac{d\widehat{f}}{d\xi}(\xi)\quad\text{for}\quad f\in D(X).\] _Then \(iX\) is maximally accretive and is the generator of the semigroup_ \[e^{-itX}f=\tfrac{1}{\sqrt{2\pi}}\int_{0}^{\infty}e^{i\xi x}\widehat{f}(\xi+t)\,d\xi=C_{+}\big{(}e^{-itx}f\big{)}\] _defined on \(L^{2}_{+}(\mathbb{R})\). The spectrum of \(X\) consists of the closed lower half-plane. For \(\operatorname{Im}z>0\), the resolvent is given by_ \[(X-z)^{-1}f=\tfrac{f(x)-f(z)}{x-z}\] _where \(f(z)\) is defined via analytic continuation to the upper half-plane._ Each \(z\) with \(\operatorname{Im}z<0\) is actually an eigenvalue of \(X\) with eigenvector \(1/(x-z)\). The adjoint \(X^{*}\) of \(X\) is the generator of right translations. Its domain is smaller, being comprised of those \(f\in L^{2}_{+}\) such that \(\widehat{f}\in H^{1}_{0}([0,\infty))\). For such \(f\), we have \(X^{*}f=Xf\). Functions in the domain of \(X^{*}\) are absolutely integrable and integrate to zero. Typical functions in \(D(X)\) are not absolutely integrable: their Fourier transform has a jump discontinuity at the origin. Nevertheless, they are 'conditionally integrable' with a value representing half the height of the jump. For example, using the Poisson integral formula, we have \[\lim_{y\to\infty}\pi yf(iy)=\lim_{y\to\infty}\int\frac{y^{2}}{x^{2}+y^{2}}f(x)\,dx=\lim_{\xi\downarrow 0}\tfrac{\sqrt{2\pi}}{2}\widehat{f}(\xi) \tag{3.15}\] for all \(f\in D(X)\).
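As a quick illustration (our own example, not taken from the references): take \(f(x)=(x+i)^{-1}\), which lies in \(L^{2}_{+}(\mathbb{R})\) and in \(D(X)\) since \(\widehat{f}(\xi)=-i\sqrt{2\pi}\,e^{-\xi}\) for \(\xi\geq 0\). Then \[\lim_{y\to\infty}\pi yf(iy)=\lim_{y\to\infty}\tfrac{\pi y}{i(y+1)}=-i\pi\quad\text{and}\quad\lim_{\xi\downarrow 0}\tfrac{\sqrt{2\pi}}{2}\widehat{f}(\xi)=\tfrac{\sqrt{2\pi}}{2}\cdot\big{(}-i\sqrt{2\pi}\big{)}=-i\pi,\] in agreement with (3.15). Note that \(f\notin L^{1}(\mathbb{R})\), so the middle expression in (3.15) is indeed only a conditional (Abel-type) limit.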
Following earlier models, such as [15, 58], we define a linear functional representing twice this value: For \(f\in D(X)\), \[I_{+}(f):=\lim_{y\to\infty}2\pi yf(iy)=\lim_{y\to\infty}\bigl{\langle}\chi_{y},f\bigr{\rangle}=\lim_{\xi\downarrow 0}\sqrt{2\pi}\widehat{f}(\xi)\text{ with }\chi_{y}(x)=\frac{iy}{x+iy}. \tag{3.16}\] One may regard the middle expression in (3.16) as originating from splitting the Poisson kernel into its Hardy-space components, or as simply the Cauchy integral formula. Another form of the Cauchy integral formula, which follows from the above, is \[f(z)=\tfrac{1}{2\pi i}I_{+}\bigl{(}(X-z)^{-1}f\bigr{)}=\lim_{y\to\infty}\tfrac{1}{2\pi i}\bigl{\langle}\chi_{y},(X-z)^{-1}f\bigr{\rangle} \tag{3.17}\] valid for all \(f\in L_{+}^{2}\) and \(\operatorname{Im}z>0\). **Lemma 3.5**.: _If \(q\in H^{\infty}(\mathbb{R})\) and \(f\in D(X)\), then \(C_{+}(qf)\in D(X)\),_ \[[X,C_{+}q]f=\tfrac{i}{2\pi}q_{+}I_{+}(f),\quad\text{and}\quad[X,\mathcal{L}]f=if-\tfrac{i}{2\pi}q_{+}I_{+}(f). \tag{3.18}\] This expresses the well-known facts that the commutator of \(X\) with a Toeplitz operator, such as \(f\mapsto C_{+}(qf)\), is a rank-one operator, while that of \(\partial\) and \(X\) is the identity. These observations follow from straightforward computations in Fourier variables; see, for example, [58, Lem. 3.1] for details. ## 4. A new gauge In this section, we analyze the function \(m=m(\kappa,q)\), which was introduced as the solution to the modified eigenvalue equation \[m^{\prime}=-i\kappa m+iC_{+}[q(m+1)], \tag{4.1}\] or what is equivalent, \((\mathcal{L}+\kappa)m=q_{+}\). As we will see in this section, this object plays many roles in the theory of (BO). The title of the section, however, reflects our new and crucial application of \(m\) as a gauge transformation, replacing \(q\) as the dynamical variable. First we must show that such a function exists and derive its basic properties. This certainly requires restrictions on \(\kappa\); most naturally, we should avoid the spectrum of \(\mathcal{L}\). For our purposes, it will suffice to consider \(\kappa\) large and positive. For the moment, we will continue to use the approach of Proposition 3.2 by requiring \[\kappa\geq C_{s}\bigl{(}1+\|q\|_{H_{\kappa}^{s}}\bigr{)}^{\frac{1}{2\varepsilon}}, \tag{4.2}\] for a suitable large constant \(C_{s}\) and \(\varepsilon\) as in (2.4). Once we have developed sufficient preliminaries, we will adopt the more permanent solution expounded in Convention 4.5 below. **Proposition 4.1** (Existence and Uniqueness).: _There is a constant \(C_{s}\geq 1\) so that the following hold: For any \(q\in H^{s}\) and \(\kappa\) satisfying (4.2), there is a unique \(m\in H^{s+1}_{+}\) solving (4.1). It is given by_ \[m(x;\kappa,q):=R(\kappa,q)q_{+}=R_{0}(\kappa)\sum_{\ell\geq 1}[C_{+}qR_{0}(\kappa)]^{\ell-1}q_{+} \tag{4.3}\] _and satisfies_ \[\|m\|_{H_{\kappa}^{s+1}}\lesssim\|q\|_{H_{\kappa}^{s}}\,,\quad\|m\|_{L^{\infty}}<1,\quad\text{and}\quad\|m\|_{H^{s}}\lesssim\kappa^{-1}\left\|q\right\|_{H^{s}}. \tag{4.4}\] _Moreover, if \(q(x)\) belongs to \(H^{\infty}\) then so too does \(m(x)\)._ Proof.: Proposition 3.2 guarantees the existence of \(C_{s}\geq 1\) so that \(\mathcal{L}+\kappa\) is invertible whenever (4.2) holds; indeed, this is demonstrated by proving the convergence of the series (3.11). This verifies the existence and uniqueness of \(m\), as well as formula (4.3). In fact, by Proposition 3.2 we see that \(m\) is unique not only in \(H^{s+1}\) but also in the larger space \(H^{1/2}\).
The first estimate in (4.4) follows directly from (3.7). Using this we also see that \[\|(\mathcal{L}_{0}+\kappa)m\|_{H_{\kappa}^{s}}=\|m\|_{H_{\kappa}^{s+1}} \lesssim\|q\|_{H_{\kappa}^{s}}\,.\] Writing \(m=R_{0}(\kappa)\big{[}q_{+}+C_{+}qR_{0}(\kappa)(\mathcal{L}_{0}+\kappa)m\big{]}\) and using (3.1), we deduce that \[\|m\|_{H^{s}}\lesssim\kappa^{-1}\big{[}\|q\|_{H^{s}}+\kappa^{-2\varepsilon}\| q\|_{H^{s}}\|q\|_{H_{\kappa}^{s}}\big{]}.\] The last estimate in (4.4) now follows from our assumption on \(\kappa\). Using Cauchy-Schwarz in the frequency variable and (4.2), we find \[\|m\|_{L^{\infty}}\lesssim\kappa^{-2\varepsilon}\,\|m\|_{H_{\kappa}^{s+1}} \lesssim C_{s}^{-2\varepsilon}.\] The middle bound in (4.4) follows by choosing \(C_{s}\) large enough. Finally, we turn to the statement that \(q\in H^{\infty}\) implies \(m\in H^{\infty}\). By uniqueness, the \(m\) associated to a translated potential is simply given by the translation of \(m\): \[m(x+h;\kappa,q)=m(x;\kappa,q(\cdot+h))\quad\text{for all $h\in\mathbb{R}$}. \tag{4.5}\] For any integer \(\sigma\geq 1\), we use (4.5) and (4.3) to see that \[m^{(\sigma)}=\sum_{\ell\geq 1}\sum_{\begin{subarray}{c}\sigma_{1},\dots, \sigma_{\ell}\geq 0\\ \sigma_{1}+\dots+\sigma_{\ell}=\sigma\end{subarray}}\binom{\sigma}{\sigma_{1} \dots\sigma_{\ell}}R_{0}C_{+}q^{(\sigma_{1})}R_{0}C_{+}q^{(\sigma_{2})}\dots R _{0}q_{+}^{(\sigma_{\ell})}\] and so deduce that \[\|m^{(\sigma)}\|_{H_{\kappa}^{s+1}}\leq\sum_{\ell\geq 1}\ell^{\sigma}\sup_{ \begin{subarray}{c}\sigma_{1},\dots,\sigma_{\ell}\geq 0\\ \sigma_{1}+\dots+\sigma_{\ell}=\sigma\end{subarray}}\|q^{(\sigma_{\ell})}\|_{ H_{\kappa}^{s}}\prod_{i=1}^{\ell-1}\|C_{+}q^{(\sigma_{i})}R_{0}C_{+}\|_{H_{ \kappa}^{s}\to H_{\kappa}^{s}}.\] For any \(1\leq i\leq\ell-1\) with \(\sigma_{i}=0\), we apply (3.10). This leaves at most \(\sigma\) many of the coefficients \(\sigma_{1},\dots,\sigma_{\ell-1}\) that may be non-zero. We estimate these remaining factors with (3.1), combine them with \(q^{(\sigma_{\ell})}\), and use that \[\prod_{j=1}^{J}\|q^{(\tilde{\sigma}_{j})}\|_{H_{\kappa}^{s}}\leq\|q\|_{H_{ \kappa}^{s}}^{J-1}\|q\|_{H_{\kappa}^{s+\sigma}}\quad\text{whenever}\quad \tilde{\sigma}_{1}+\dots+\tilde{\sigma}_{J}=\sigma.\] In this way, we obtain \[\|m^{(\sigma)}\|_{H_{\kappa}^{s+1}}\lesssim\sum_{\ell=1}^{\infty}\ell^{\sigma} 2^{\sigma-\ell}\Big{(}1+\|q\|_{H_{\kappa}^{s}}\Big{)}^{\sigma}\|q\|_{H_{\kappa} ^{s+\sigma}}<\infty \tag{4.6}\] for any \(q\in H^{\infty}\) and any \(\kappa\) satisfying (4.2). **Proposition 4.2** (Diffeomorphism property).: _There is a constant \(C_{s}\geq 1\) so that for any \(A>0\) and \(\kappa\) satisfying_ \[\kappa\geq C_{s}\big{(}1+A\big{)}^{\frac{1}{2\epsilon}}, \tag{4.7}\] _the mapping \(q\mapsto m\) is a diffeomorphism from \(B_{A}^{s}\) into \(H^{s+1}\)._ Proof.: Initially, we choose \(C_{s}\) as required by Propositions 3.2 and 4.1. For \(g\in H^{s}\), the resolvent identity implies \[dm|_{q}(g)=\frac{d}{d\theta}m(x;\kappa,q+\theta g)\bigg{|}_{\theta=0}=R(\kappa, q)\big{[}(m+1)C_{+}g\big{]}, \tag{4.8}\] which for \(q\equiv 0\) reduces to \[dm|_{0}(g)=R_{0}(\kappa)C_{+}g. \tag{4.9}\] Taking a supremum over \(g\in H^{s}_{\kappa}\) and using (3.7), (2.6), and (4.4), we deduce that \[\big{\|}dm|_{q}-dm|_{0}\big{\|}_{H^{s}_{\kappa}\to H^{s+1}_{\kappa}}\lesssim \kappa^{-2\varepsilon}\|q\|_{H^{s}_{\kappa}}\lesssim C_{s}^{-2\varepsilon}, \tag{4.10}\] uniformly for \(q\in B^{s}_{A}\) and \(\kappa\) satisfying (4.7). 
On the other hand, for \(f\in H^{s+1}_{+}\) we have \[\big{\|}(dm|_{0})^{-1}(f)\big{\|}_{H^{s}_{\kappa}}^{2}\leq 2\,\|f\|_{H^{s+1}_{ \kappa}}^{2}\,,\] and so \[\big{\|}(dm|_{0})^{-1}\big{\|}_{H^{s+1}_{\kappa}\to H^{s}_{\kappa}}^{-1}\geq \frac{1}{\sqrt{2}}. \tag{4.11}\] Combining (4.10) and (4.11), we see that enlarging \(C_{s}\) if necessary, \[\big{\|}dm|_{q}-dm|_{0}\big{\|}_{H^{s}_{\kappa}\to H^{s+1}_{\kappa}}\leq \tfrac{1}{2}\big{\|}(dm|_{0})^{-1}\big{\|}_{H^{s+1}_{\kappa}\to H^{s}_{ \kappa}}^{-1}.\] Using this as input for the standard contraction-mapping proof of the inverse function theorem, we conclude that we may pick \(C_{s}\) sufficiently large so that \[q\mapsto m\quad\text{is a diffeomorphism from }\{q:\|q\|_{H^{s}_{\kappa}}\leq A\}\text{ into }H^{s+1}_{\kappa}\] for all \(\kappa\) satisfying (4.7). As the domain \(\{q:\|q\|_{H^{s}_{\kappa}}\leq A\}\) includes the smaller domain \(B^{s}_{A}\), this completes the proof. **Proposition 4.3**.: _There is a constant \(C_{s}\geq 1\) so that for \(q\in H^{s}\) and \(\kappa\) satisfying (4.2), the quantity_ \[\beta(\kappa;q):=\int q(x)m(x;\kappa,q)\,dx=\int q(x)\overline{m}(x;\kappa,q) \,dx=\big{\langle}q_{+},(\mathcal{L}+\kappa)^{-1}q_{+}\big{\rangle} \tag{4.12}\] _is finite and real-valued. For such \(\kappa\), this is a real-analytic function of \(q\) with_ \[\tfrac{\delta\beta}{\delta q}=m+\overline{m}+|m|^{2} \tag{4.13}\] _and satisfies_ \[C_{s}^{-1}\,\|q\|_{H^{s}_{\kappa}}^{2}\leq\int_{\kappa}^{\infty}\varkappa^{2s }\beta(\varkappa;q)\,d\varkappa\leq C_{s}\,\|q\|_{H^{s}_{\kappa}}^{2}\,. \tag{4.14}\] _Lastly, for each \(q\in H^{s}\), the mapping \(z\mapsto\beta(z;q)\) extends to a meromorphic function on \(\{z\in\mathbb{C}:\operatorname{Re}z>0\}\)._ Proof.: Proposition 4.1 shows that for a suitable choice of \(C_{s}\), we are guaranteed that \(m=(\mathcal{L}+\kappa)^{-1}q_{+}\) exists and lies in \(H^{s+1}\). This in turn means that \(m\) defines a bounded linear functional on \(H^{s}\) under the natural pairing: \[\int q(x)m(x;\kappa,q)\,dx=\langle q,m\rangle=\langle q_{+},m\rangle=\langle q _{+},(\mathcal{L}+\kappa)^{-1}q_{+}\rangle.\] As \(\mathcal{L}\) is a selfadjoint operator, this quantity is real. This proves all the identities stated in (4.12). The possibility of extending this to a meromorphic function in the right half-plane follows from Proposition 3.2 and the final representation in (4.12). The fact that \(\beta\) is a real-analytic function of \(q\) follows from the convergence of the series (4.3). Using the functional derivative (4.8) of \(m\), we see that \[d\beta|_{q}(f) =\int fm+q\cdot R(\kappa,q)[(m+1)f_{+}]\,dx\] \[=\int fm+\overline{R(\kappa,q)q_{+}}\cdot(m+1)f\,dx\] \[=\int[m+\overline{m}(m+1)]f\,dx,\] which yields (4.13). It remains to prove (4.14). As we will see, this may require us to increase \(C_{s}\). Let us first examine a quadratic approximation of the central object. By Plancherel and Fubini, \[\int_{\kappa}^{\infty}\!\varkappa^{2s}\langle q_{+},R_{0}(\varkappa)q_{+} \rangle\,d\varkappa=\int_{0}^{\infty}\!\int_{\kappa}^{\infty}\!\varkappa^{2s} \frac{|\widehat{q}(\xi)|^{2}}{\xi+\varkappa}\,d\varkappa\,d\xi\simeq_{s}\|q_{ +}\|_{H^{s}_{\kappa}}^{2}\,. \tag{4.15}\] This leaves us to control the remainder. 
Using the duality of \(H^{s+1}_{\varkappa}\) and \(H^{-(s+1)}_{\varkappa}\) and (3.7), we have \[\big{\langle}q_{+},\big{[}R(\varkappa)-R_{0}(\varkappa)\big{]}q_{+}\big{\rangle}\lesssim\varkappa^{-2\varepsilon}\,\|q_{+}\|_{H^{-(s+1)}_{\varkappa}}\,\|q\|_{H^{s}_{\varkappa}}^{2}\lesssim\varkappa^{-1-2s-2\varepsilon}\,\|q\|_{H^{s}_{\varkappa}}^{3}\] for any \(q\in H^{s}\) and \(\varkappa\geq\kappa\). In this way, we deduce that \[\int_{\kappa}^{\infty}\!\varkappa^{2s}\big{\langle}q_{+},\big{[}R(\varkappa)-R_{0}(\varkappa)\big{]}q_{+}\big{\rangle}\,d\varkappa \lesssim\|q\|_{H^{s}_{\kappa}}\int_{\kappa}^{\infty}\!\varkappa^{-1-2\varepsilon}\|q\|_{H^{s}_{\varkappa}}^{2}\,d\varkappa\] \[\lesssim_{s}\kappa^{-2\varepsilon}\,\|q_{+}\|_{H^{s}_{\kappa}}^{3}\,.\] Combining this with (4.15) and taking \(C_{s}\) sufficiently large, we conclude that (4.14) holds. Propositions 3.2, 4.1, 4.2, and 4.3 show important quantitative properties of \(m\) and \(\beta\) under the restriction that \(\kappa\) is large enough, depending on the size of \(q\). Ultimately, we wish to consider trajectories in \(H^{s}\) rather than individual \(q\in H^{s}\) and so we must account for the possibility that the \(H^{s}\) norm of solutions may grow. For the flows of interest to us, \(\beta\) is conserved and our next lemma shows how this fact can be leveraged to control the growth and equicontinuity of trajectories. Indeed, this will lead to an alternate proof of Theorem 1.2 based on \(\beta(\kappa;q)\), rather than the perturbation determinant; see Corollary 5.3. One may wonder what conservation of \(\beta\) means if the \(\kappa\)-interval on which it is defined depends on \(q\) itself. It was to address this irritation that we demonstrated that \(\beta\) can be interpreted as a meromorphic function on the right half-plane. Evidently, if \(\beta(\kappa;q_{0})\) and \(\beta(\kappa;q_{1})\) agree on some ray \(\kappa\geq\kappa_{1}\) then they agree throughout the right half-plane (as meromorphic functions). **Lemma 4.4**.: _Given \(A>0\) and \(Q\subset B_{A}^{s}\), let_ \[Q_{**}=\Big{\{}q(b)\Big{|}\,q:[a,b]\to H^{s}\text{ is continuous, }q(a)\in Q,\text{ and }\beta(z;q(t))\equiv\beta(z;q(a))\Big{\}},\] _where \(\beta(z;q(t))\equiv\beta(z;q(a))\) indicates equality as meromorphic functions on the right-half plane for all \(t\in[a,b]\). Then \(Q_{**}\) is bounded; indeed, for \(C_{s}\) as in Proposition 4.3,_ \[\sup_{q\in Q_{**}}\|q\|_{H^{s}}\lesssim C_{s}^{1+|s|}\big{(}1+2C_{s}A\big{)}^{\frac{2|s|}{1-2|s|}}A. \tag{4.16}\] _Moreover, if \(Q\) is \(H^{s}\)-equicontinuous, then so too is \(Q_{**}\)._ Proof.: Given \(q(a)\in Q\), consider \[\kappa\geq C_{s}\big{(}1+2C_{s}\|q(a)\|_{H^{s}_{\kappa}}\big{)}^{\frac{1}{2\varepsilon}}. \tag{4.17}\] For such \(\kappa\) and any time interval \([a,T]\) on which \[\|q(t)\|_{H^{s}_{\kappa}}\leq 2C_{s}\|q(a)\|_{H^{s}_{\kappa}}, \tag{4.18}\] we may apply the equivalence (4.14) to deduce that \[\|q(t)\|_{H^{s}_{\kappa}}\leq C_{s}\left\|q(a)\right\|_{H^{s}_{\kappa}}. \tag{4.19}\] A standard bootstrap argument then shows that (4.19) holds on the entire time interval \([a,b]\). As \(Q\subset B_{A}^{s}\), the hypothesis (4.17) is satisfied for every \(q(a)\in Q\) with \[\kappa=C_{s}\big{(}1+2C_{s}A\big{)}^{\frac{1}{2\varepsilon}}. \tag{4.20}\] Using this choice, we obtain \[\kappa^{s}\sup_{q\in Q_{**}}\|q\|_{H^{s}}\leq\sup_{q\in Q_{**}}\|q\|_{H^{s}_{\kappa}}\leq C_{s}\sup_{q\in Q}\|q\|_{H^{s}_{\kappa}}\leq C_{s}\sup_{q\in Q}\|q\|_{H^{s}} \tag{4.21}\] and thence (4.16).
The equicontinuity of \(Q_{**}\) follows from that of \(Q\) by Lemma 2.4 and (4.19). We have now proven all the results we need that require us to adjust the constant \(C_{s}\) and so are ready to adopt our unified notion of \(\kappa\) being sufficiently large. Moreover, Lemma 4.4 allows us to do this in a way that ensures \(\kappa\) remains sufficiently large for all trajectories of interest to us. We also take the opportunity to introduce the abbreviated notation (4.22). **Convention 4.5**.: Given \(A>0\), we choose \(\kappa_{0}=\kappa_{0}(A)\) large enough so that the hypotheses of Propositions 3.2, 4.1, 4.2, and 4.3 are all met whenever \(\kappa\geq\kappa_{0}\) and \(q\in(B_{A}^{s})_{**}\). Moreover, for such \(q\in(B_{A}^{s})_{**}\), we write \[m:=m(x;\kappa,q)\quad\text{and}\quad n:=m(x;\varkappa,q) \tag{4.22}\] and demand that \(\kappa,\varkappa\geq\kappa_{0}(A)\). **Lemma 4.6** (Equicontinuity properties of \(m\)).: _Given \(A>0\) and an equicontinuous set \(Q\subset B_{A}^{s}\), we have_ \[\lim_{\kappa\to\infty}\ \sup_{q\in Q}\|m\|_{H^{s+1}_{\kappa}}=0\quad\text{and}\quad\lim_{\kappa\to\infty}\ \sup_{q\in Q}\|\mathcal{L}R(\kappa,q)n\|_{H^{s+1}}=0 \tag{4.23}\] _for all \(\kappa,\varkappa\geq\kappa_{0}(A)\) as dictated by Convention 4.5._ Proof.: The first claim in (4.23) follows immediately from the estimate (4.4) and the characterization (ii) of equicontinuity from Lemma 2.4. For the second claim in (4.23), we write \[R(\kappa,q)n=R(\kappa,q)R(\varkappa,q)q_{+}=R(\varkappa,q)R(\kappa,q)q_{+}=R(\varkappa,q)m.\] Commuting \(\mathcal{L}\) and \(R(\varkappa,q)\) and using the estimates (3.5) and (3.7) for these operators, we find \[\left\|\mathcal{L}R(\kappa,q)n\right\|_{H^{s+1}}=\left\|R(\varkappa,q)\mathcal{L}m\right\|_{H^{s+1}}\lesssim\left\|\mathcal{L}m\right\|_{H^{s}}\lesssim(1+\left\|q\right\|_{H^{s}})\left\|m\right\|_{H^{s+1}}.\] The right-hand side above tends to zero as \(\kappa\to\infty\) by the first claim in (4.23). **Proposition 4.7** (Dynamics).: _For an \(H^{\infty}\) solution \(q(t)\) to (BO),_ \[\tfrac{d}{dt}q_{+}=\mathcal{P}q_{+}=-iq_{+}^{\prime\prime}-2C_{+}(qq_{+})^{\prime}+2q_{+}q_{+}^{\prime} \tag{4.24}\] \[\tfrac{d}{dt}m=-im^{\prime\prime}-2C_{+}([q-q_{+}]m)^{\prime}-2q_{+}m^{\prime} \tag{4.25}\] \[\tfrac{d}{dt}\beta(\kappa)=0. \tag{4.26}\] _Here \(\mathcal{P}\) is given by (1.8) and Convention 4.5 applies._ Proof.: As \(q^{2}=2qq_{+}-(q_{+})^{2}+(q-q_{+})^{2}\), so \[C_{+}(2qq^{\prime})=C_{+}\big{(}2qq_{+}-(q_{+})^{2}+(q-q_{+})^{2}\big{)}^{\prime}=2C_{+}(qq_{+})^{\prime}-2q_{+}^{\prime}q_{+}\] and consequently, \[\mathcal{P}q_{+}=-iq_{+}^{\prime\prime}-2C_{+}(qq_{+})^{\prime}+2q_{+}^{\prime}q_{+}=C_{+}\big{(}\mathsf{H}q^{\prime\prime}-2qq^{\prime}\big{)}=\tfrac{d}{dt}q_{+}\,. \tag{4.27}\] This proves (4.24).
By virtue of the Lax pair representation, (4.3), and (4.24), \[\tfrac{d}{dt}m=[\mathcal{P},R(\kappa)]q_{+}+R(\kappa)\mathcal{P}q_{+}= \mathcal{P}m.\] From here, (4.25) follows easily: \[\tfrac{d}{dt}m=\mathcal{P}m=-im^{\prime\prime}-2C_{+}(qm)^{\prime}+2q_{+}^{ \prime}m=-im^{\prime\prime}-2C_{+}([q-q_{+}]m)^{\prime}-2q_{+}m^{\prime}.\] From the final representation in (4.12) and (4.24), we deduce that \[\tfrac{d}{dt}\beta(\kappa) =\big{\langle}\mathcal{P}q_{+},R(\kappa)q_{+}\big{\rangle}+ \big{\langle}q_{+},[\mathcal{P},R(\kappa)]q_{+}\big{\rangle}+\big{\langle}q_{+ },R(\kappa)\mathcal{P}q_{+}\big{\rangle}\] \[=\big{\langle}\mathcal{P}q_{+},R(\kappa)q_{+}\big{\rangle}+\big{ \langle}q_{+},\mathcal{P}R(\kappa)q_{+}\big{\rangle}.\] This vanishes because \(\mathcal{P}\) is an antisymmetric operator on the Hardy space \(L^{2}_{+}\). Thus (4.26) holds. We pause to note that the right-hand side of (4.25) extends continuously (in \(H^{-2}\), for example) from \(q\in H^{\infty}\) to \(q\in H^{s}\). For the first term, this follows from Proposition 4.2. For the second, we also apply Lemma 2.1. In the third term, \(q_{+}\) and \(m^{\prime}\) do not have enough Sobolev regularity to make sense of the product. Here it is essential that both are holomorphic, which allows us to use Lemma 2.2. Employing the Stone-Weierstrass (on a compactified interval \([-E_{0},\infty]\)) and spectral theorems, it is not difficult to deduce from (4.26) and (4.12) that for any measurable function \(F:\mathbb{R}\to\mathbb{R}\) satisfying \[\big{|}F(E)\big{|}\lesssim(1+|E|)^{-1},\quad\text{the functional}\quad q\mapsto \big{\langle}q_{+},F(\mathcal{L})q_{+}\big{\rangle} \tag{4.28}\] defines a conserved quantity for the (BO) flow. This is interesting because it provides a clear way of separating out the contribution of any embedded point or singular continuous spectrum to the conserved quantities. We know of no analogue of this fact in the much-studied KdV equation, for example. Our next lemma presents other ways in which \(m\) and \(\beta\) are related, beyond the definition (4.12). **Lemma 4.8**.: _Under Convention 4.5,_ \[\int\overline{m}n\,dx=\int m\overline{n}\,dx=\big{\langle}(\mathcal{L}+ \varkappa)^{-1}q_{+},(\mathcal{L}+\kappa)^{-1}q_{+}\big{\rangle}=-\frac{ \beta(\kappa)-\beta(\varkappa)}{\kappa-\varkappa} \tag{4.29}\] _for any \(q\in B^{s}_{\!A}\) and distinct \(\varkappa,\kappa\geq\kappa_{0}(A)\). In the periodic case, we also have_ \[\kappa\int\overline{m}\,dx=\kappa\int m\,dx=\int qm\,dx+\int q\,dx=\beta( \kappa)+\int q\,dx \tag{4.30}\] _and, writing \(1\) for the constant function,_ \[\langle 1,(\mathcal{L}+\kappa)^{-1}1\rangle=\kappa^{-1}+\kappa^{-2}\beta( \kappa;q)+\kappa^{-2}\int q\,dx. \tag{4.31}\] Proof.: The identities (4.29) are evident from the definitions of \(m\), \(n\) and \[(\mathcal{L}+\varkappa)^{-1}(\mathcal{L}+\kappa)^{-1}=(\mathcal{L}+\kappa)^{ -1}(\mathcal{L}+\varkappa)^{-1}=\frac{-1}{\kappa-\varkappa}\big{[}(\mathcal{L }+\kappa)^{-1}-(\mathcal{L}+\varkappa)^{-1}\big{]}.\] The identities (4.30) follow by integrating (4.1) over the circle and using that \(\beta(\kappa)\) is real-valued. As \(\mathcal{L}_{0}1=0\), the resolvent identity gives \[(\mathcal{L}+\kappa)^{-1}1=(\mathcal{L}_{0}+\kappa)^{-1}1+(\mathcal{L}+\kappa )^{-1}C_{+}q(\mathcal{L}_{0}+\kappa)^{-1}1=\kappa^{-1}(1+m).\] Thus (4.31) follows from (4.30). **Remark 4.9**.: In Lemma 6.4, we will show that \(m+\overline{m}\in L^{1}(\mathbb{R})\) when \(\langle x\rangle q\in L^{2}(\mathbb{R})\). 
Mimicking the arguments above yields the following synthesis of (4.29) and (4.30): \[\kappa\int m+\overline{m}+|m|^{2}\,dx=2\beta(\kappa)-\kappa\frac{\partial\beta}{\partial\kappa}+2\int q\,dx, \tag{4.32}\] valid both on the line and on the circle.

Our next result is an important identity, which first appeared as [28, Eq. (58)]. In that paper, it was used as a stepping stone in the calculation of Poisson brackets between certain scattering-theoretic data, defined for smooth rapidly decreasing \(q\). Our first application of this identity will be to demonstrate the Poisson commutativity of \(\beta(\kappa)\) at differing spectral parameters. In subsection 4.1 we will also see that it provides an important key for unlocking the significance of the Bock-Kruskal transformation.

**Lemma 4.10**.: _For \(q\in H^{\infty}(\mathbb{R})\) we have_ \[\mathsf{H}(m\overline{n}+m+\overline{n})^{\prime}+i(m+1)\overline{n}^{\prime}-im^{\prime}(\overline{n}+1)+(\kappa+\varkappa)(m\overline{n}+m+\overline{n})+i(\kappa-\varkappa)\mathsf{H}(m\overline{n}+m+\overline{n})-2q(m+1)(\overline{n}+1)=0 \tag{4.33}\] _subject to Convention 4.5. For \(q\in H^{\infty}(\mathbb{T})\), this expression need not vanish; however, it is a real-valued constant function:_ \[\mathrm{LHS}(4.33)=\varkappa\int m\,dx+\kappa\int\overline{n}\,dx. \tag{4.34}\]

Proof.: Employing equation (4.1) to eliminate \(m^{\prime}\) and \(\overline{n}^{\prime}\), we obtain \[\begin{split}\operatorname{LHS}(4.33)&=\kappa(1+i\mathsf{H})\overline{n}+\varkappa(1-i\mathsf{H})m+(1+i\mathsf{H})\big{[}(\overline{n}+1)C_{+}(q(m+1))\big{]}\\ &\quad+(1-i\mathsf{H})\big{[}(m+1)C_{-}(q(n+1))\big{]}-2q(m+1)(\overline{n}+1).\end{split}\] Thence, using the operator identity \(2=(1+i\mathsf{H})+(1-i\mathsf{H})\) on the last term yields \[\begin{split}\operatorname{LHS}(4.33)&=\kappa(1+i\mathsf{H})\overline{n}+\varkappa(1-i\mathsf{H})m+(1+i\mathsf{H})\big{[}(\overline{n}+1)[C_{+}-1](q(m+1))\big{]}\\ &\quad+(1-i\mathsf{H})\big{[}(m+1)[C_{-}-1](q(n+1))\big{]}.\end{split}\] Consideration of the Fourier supports shows that the last two terms vanish in either geometry. The first two terms vanish on the line but reduce to \(\operatorname{RHS}(4.34)\) in the circle case. The fact that this constant is real (and generically nonzero) follows from (4.30).

**Lemma 4.11**.: _Under Convention 4.5,_ \[\{\beta(\kappa),\beta(\varkappa)\}=0\quad\text{and}\quad\{P,\beta(\varkappa)\}=0 \tag{4.35}\] _as functions on \(B^{s}_{A}\) and \(B^{s}_{A}\cap H^{\infty}\), respectively._

Proof.: By (1.4), (4.13), and integration by parts, \[\{\beta(\kappa),\beta(\varkappa)\}=\int\big{(}|m|^{2}+m+\overline{m}\big{)}\big{(}|n|^{2}+n+\overline{n}\big{)}^{\prime}\,dx=\tfrac{1}{2}\!\int F(x)\,dx,\] where we adopt the notation \[F:=\big{[}|m|^{2}+m+\overline{m}\big{]}\big{[}|n|^{2}+n+\overline{n}\big{]}^{\prime}-\big{[}|m|^{2}+m+\overline{m}\big{]}^{\prime}\big{[}|n|^{2}+n+\overline{n}\big{]}. \tag{4.36}\] Proposition 4.1 shows that these expressions are all well-defined on \(B^{s}_{A}\).
To continue, we rewrite \(F\) as \[\begin{split} F=G+\overline{G}+K+\overline{K}\quad\text{where} \quad G&=[\overline{m}n+\overline{m}+n]\big{[}(m+1)\overline{n}^{ \prime}-m^{\prime}(\overline{n}+1)\big{]}\\ \text{and}\quad K&=(m-n)(\overline{m}+\overline{n})^ {\prime}.\end{split}\] We split \(F\) in this way in order to take advantage of Lemma 4.10, which shows that \[\begin{split}(m+1)\overline{n}^{\prime}-m^{\prime}(\overline{n}+ 1)&=i\mathsf{H}(m\overline{n}+m+\overline{n})^{\prime}-(\kappa- \varkappa)\mathsf{H}(m\overline{n}+m+\overline{n})\\ &\quad+i(\kappa+\varkappa-2q)(m\overline{n}+m+\overline{n})-2iq -ic,\end{split} \tag{4.37}\] where the constant function \(c\) denotes the value of \(\operatorname{LHS}(4.33)\) appropriate to each geometry. Recall that \(c=0\) on \(\mathbb{R}\) and is real on \(\mathbb{T}\). Combining this identity with the antisymmetry of \(\mathsf{H}\) and of \(i\mathsf{H}\partial\), we find that \[\int(G+\overline{G})\,dx=i\int(2q+c)[m\overline{n}-\overline{m}n+m-\overline{m }-n+\overline{n}]\,dx. \tag{4.38}\] Using (4.12), (4.29), and (4.30), this further simplifies to \[\int(G+\overline{G})\,dx=2i\int q[m\overline{n}-\overline{m}n]\,dx. \tag{4.39}\] On the other hand, integrating by parts and employing (4.1), we obtain \[\begin{split}\int(K+\overline{K})\,dx&=2\int m \overline{n}^{\prime}+\overline{m}n^{\prime}\,dx\\ &=2i\int(\varkappa-q)[m\overline{n}-\overline{m}n]\,dx-2i\int q [m-\overline{m}]\,dx.\end{split}\] Using (4.12) and (4.29), this simplifies to \[\int(K+\overline{K})\,dx=-2i\int q[m\overline{n}-\overline{m}n]\,dx. \tag{4.40}\] Combining (4.39) and (4.40) gives \(\int F=0\) and so proves the first identity in (4.35). To prove the commutativity of \(\beta(\varkappa)\) and the momentum \(P=\frac{1}{2}\int q^{2}\,dx\), we use the functional derivative (4.13) for \(\beta\) to compute \[\{\beta(\varkappa),P(q)\}=\int\big{(}|n|^{2}+n+\overline{n}\big{)}q^{\prime} \,dx=\int-[(1+\overline{n})n^{\prime}+(1+n)\overline{n}^{\prime}]q\,dx.\] Next, we use the equation (4.1) for \(n^{\prime}\) together with (4.12) to deduce that \[\{\beta(\varkappa),P(q)\} =i\int\varkappa[n(\overline{n}+1)-\overline{n}(n+1)]q\,dx\] \[\qquad-i\int(\overline{n}+1)q\cdot C_{+}(n+1)q-(n+1)q\cdot C_{-} (\overline{n}+1)q\,dx\] \[=0.\qed\] ### The Bock-Kruskal transformation In [6], Bock and Kruskal introduced an analogue of the Miura transform applicable to the Benjamin-Ono equation and used this to show the existence of infinitely many conserved quantities, at least for smooth solutions decaying sufficiently rapidly at (spatial) infinity. This transformation \(q\mapsto w\) was defined implicitly via the formula \[2q=\tfrac{1}{w+\kappa}\mathsf{H}(w^{\prime})+\mathsf{H}\big{(}\tfrac{w^{ \prime}}{w+\kappa}\big{)}+\tfrac{2\kappa w}{w+\kappa}. \tag{4.41}\] The function \(w\) is real-valued. As in the original paper [6], we will confine our discussion to the \(\mathbb{R}\) geometry. In the introduction, we described the important inspirational role that the Bock-Kruskal transformation played in developing the methods ultimately employed in this paper. Given this pivotal role, we feel compelled to share with the reader how it connects to the principal themes of this paper. Concretely, we will demonstrate the unique solvability of (4.41) and identify this solution in terms of the central object \(m(x;\kappa,q)\) of this section. Evidently, some restriction on \(w\) (beyond mere regularity) must be imposed to handle the denominators \(\kappa+w\) appearing in (4.41). 
As any \(w\in H^{s+1}\) is automatically continuous and converges to zero at (spatial) infinity, the natural condition is this: \[\inf_{x}\bigl{(}\kappa+w(x)\bigr{)}>0. \tag{4.42}\]

**Theorem 4.12**.: _Suppose \(A>0\) and \(\kappa_{0}(A)\) satisfies Convention 4.5. Then, for any \(q\in B^{s}_{\!A}\) and any \(\kappa\geq\kappa_{0}\),_ \[w=\kappa\tfrac{\delta\beta}{\delta q}=\kappa\bigl{(}|m|^{2}+m+\overline{m}\bigr{)} \tag{4.43}\] _is the unique \(H^{s+1}(\mathbb{R})\) solution to (4.41) satisfying (4.42)._

Proof.: By virtue of (4.4), we must have \(\|m\|_{L^{\infty}}<1\). Consequently, the function \(\kappa\tfrac{\delta\beta}{\delta q}=\kappa|m+1|^{2}-\kappa\) satisfies (4.42). Setting \(\kappa=\varkappa\) in (4.33) and dividing by \(|m+1|^{2}\), we find that \[2q=\tfrac{\mathsf{H}(|m|^{2}+m+\overline{m})^{\prime}}{|m+1|^{2}}+i\Bigl{[}\tfrac{\overline{m}^{\prime}}{\overline{m}+1}-\tfrac{m^{\prime}}{m+1}\Bigr{]}+2\kappa\tfrac{|m|^{2}+m+\overline{m}}{|m+1|^{2}}\] \[=\frac{\mathsf{H}(|m|^{2}+m+\overline{m})^{\prime}}{|m+1|^{2}}+\mathsf{H}\!\left[\frac{\overline{m}^{\prime}}{\overline{m}+1}+\frac{m^{\prime}}{m+1}\right]+2\kappa\frac{|m|^{2}+m+\overline{m}}{|m+1|^{2}},\] which demonstrates that the function \(\kappa\frac{\delta\beta}{\delta q}\) satisfies (4.41).

It remains to verify the uniqueness of \(H^{s+1}\) solutions to (4.41) satisfying (4.42). We will focus on the unknown \(u=\kappa^{-1}w\). Suppose first that \(w\) is a solution of the type described. The restriction (4.42) guarantees that \(\log(1+u)\in H^{s+1}\); see, for example, (2.8). Thus, we may factor \[1+u(x)=[1+\mu(x)][1+\overline{\mu}(x)]\quad\text{with}\quad\mu\in H^{s+1}_{+}(\mathbb{R}). \tag{4.44}\] The next step is to insert \(w=\kappa[1+\mu][1+\overline{\mu}]-\kappa\) in (4.41). In doing so, we take advantage of the following: \[(1+u)\mathsf{H}\!\left[\tfrac{u^{\prime}}{1+u}\right]=|\mu+1|^{2}\mathsf{H}\!\left(\tfrac{\overline{\mu}^{\prime}}{1+\overline{\mu}}+\tfrac{\mu^{\prime}}{1+\mu}\right)=i(1+\mu)\overline{\mu}^{\prime}-i\mu^{\prime}(\overline{\mu}+1).\] This allows us to completely eliminate the denominators in (4.41); indeed, combining this with \(2C_{\pm}=[I\pm i\mathsf{H}]\), we find the equivalent formulation \[2q[1+\mu][1+\overline{\mu}]=2C_{-}\!\left[i(1+\mu)\overline{\mu}^{\prime}\right]-2C_{+}\!\left[i\mu^{\prime}(\overline{\mu}+1)\right]+2\kappa[\mu+\overline{\mu}+|\mu|^{2}]. \tag{4.45}\] Isolating the positive-frequency component of (4.45), we get \[C_{+}\!\left[(1+\overline{\mu})\!\left(-i\mu^{\prime}-C_{+}(q\mu)+\kappa\mu-q_{+}\right)\right]=0. \tag{4.46}\] In fact, this is equivalent to (4.45) because the negative-frequency component is simply the complex conjugate of this. Let us write \(f\) for the quantity inside the square brackets of (4.46). By Lemma 2.1, we know \(f\in H^{s}(\mathbb{R})\). Thus we may interpret (4.46) as saying that \(f\) belongs to the Hardy-Sobolev space \(H^{s}_{-}\), which in turn shows \[-i\mu^{\prime}-C_{+}(q\mu)+\kappa\mu-q_{+}=\tfrac{f}{1+\overline{\mu}}\in H^{s}_{-}(\mathbb{R}). \tag{4.47}\] However, every term in LHS(4.47) belongs to the _other_ Hardy-Sobolev space \(H^{s}_{+}(\mathbb{R})\). Only the zero function belongs to both spaces and so we deduce that \(\mu\) is a solution of (4.1). However, Proposition 4.1 guarantees that \(m\) is the only solution of this equation. Thus \(\mu=m\), which then yields \(w=\kappa u=\kappa\frac{\delta\beta}{\delta q}\).
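For clarity, we note one standard way of producing the factorization (4.44) invoked above; this is only a sketch of a routine construction, relying on the boundedness of \(C_{+}\) on \(H^{s+1}(\mathbb{R})\) and on the algebra property of \(H^{s+1}(\mathbb{R})\). Since \(\log(1+u)\) is real-valued, one may take \[1+\mu:=\exp\bigl{(}C_{+}\log(1+u)\bigr{)},\qquad\text{so that}\qquad(1+\mu)(1+\overline{\mu})=\exp\bigl{(}C_{+}\log(1+u)+\overline{C_{+}\log(1+u)}\bigr{)}=1+u,\] because \(C_{+}f+\overline{C_{+}f}=f\) for real-valued \(f\in L^{2}(\mathbb{R})\). Moreover, \(\mu=\exp\bigl{(}C_{+}\log(1+u)\bigr{)}-1\) belongs to \(H^{s+1}_{+}(\mathbb{R})\), since the exponential series converges in the algebra \(H^{s+1}(\mathbb{R})\) and the Hardy space is stable under multiplication.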
The Bock-Kruskal approach to conservation laws is that \(w\) is a conserved density and consequently, its formal expansion in powers of \(\kappa^{-1}\) provides an infinite family of conservation laws of polynomial type. Combining (4.43) with (4.32) allows us to connect this approach to the conservation of \(\beta(\kappa)\). Concretely, for \(\langle x\rangle q\in L^{2}(\mathbb{R})\), \[\int w(x;\kappa,q)\,dx=2\beta(\kappa)-\kappa\frac{\partial\beta}{\partial\kappa}+2\int q\,dx. \tag{4.48}\]

### The perturbation determinant

Our next result establishes the connection between our gauge \(m\) and the logarithm of the renormalized perturbation determinant \[\alpha(\kappa;q):=\sum_{\ell\geq 2}\tfrac{1}{\ell}\operatorname{tr}\!\left\{(R_{0}(\kappa)C_{+}q)^{\ell}\right\}, \tag{4.49}\] which is the central object in Talbut's proof of Theorem 1.2 in [60]. Such a connection in the line setting was presented by Talbut in his thesis [59, §3.3]. On the line, convergence of the series (4.49) may be demonstrated as follows: For \(A>0\) and \(\kappa_{0}=\kappa_{0}(A)\) chosen according to Convention 4.5, we have \[\|\sqrt{R_{0}(\kappa)}C_{+}q\sqrt{R_{0}(\kappa)}\|_{\mathrm{HS}}^{2}=\frac{1}{2\pi}\int_{0}^{\infty}\!\!\int_{0}^{\infty}\frac{|\widehat{q}(\xi-\eta)|^{2}\,d\eta\,d\xi}{(\eta+\kappa)(\xi+\kappa)}=\frac{1}{2\pi}\int_{\mathbb{R}}\frac{\log(1+\frac{|\xi|}{\kappa})}{|\xi|}|\widehat{q}(\xi)|^{2}\,d\xi\lesssim\kappa^{-4\varepsilon}\|q\|_{H^{s}_{\kappa}}^{2}<1,\] whenever \(\kappa\geq\kappa_{0}\) and \(q\in B_{A}^{s}\). In particular, the Hölder inequality in Schatten classes yields convergence of the series defining \(\alpha\). Parallel arguments yield convergence in the circle setting.

**Lemma 4.13**.: _For \(A>0\) and \(\kappa_{0}=\kappa_{0}(A)\) satisfying Convention 4.5, we have_ \[\alpha(\kappa;q)=\tfrac{1}{2\pi}\int_{0}^{\infty}\tfrac{\beta(\kappa+\xi;q)}{\kappa+\xi}\,d\xi\quad\text{on $\mathbb{R}$}\qquad\text{and}\qquad\alpha(\kappa;q)=\sum_{\xi\in 2\pi\mathbb{Z}_{+}}\tfrac{\beta(\kappa+\xi;q)}{\kappa+\xi}\quad\text{on $\mathbb{T}$},\] _whenever \(q\in B_{A}^{s}\) and \(\kappa\geq\kappa_{0}\). Here, \(\mathbb{Z}_{+}=\{0,1,2,\ldots\}\)._

Proof.: We will present the details in the circle setting. The computations in the line setting are a close parallel.
Using symmetry followed by a change of variables and Plancherel, we may write \[\frac{1}{\ell} \operatorname{tr}\bigl{\{}(R_{0}(\kappa)C_{+}q)^{\ell}\bigr{\}}\] \[=\sum_{\xi_{1},\ldots,\xi_{\ell}\in 2\pi\mathbb{Z}_{+}}\frac{1}{ \ell}\,\frac{\widehat{q}(\xi_{1}-\xi_{2})}{\kappa+\xi_{1}}\frac{\widehat{q}( \xi_{2}-\xi_{3})}{\kappa+\xi_{2}}\cdots\frac{\widehat{q}(\xi_{\ell}-\xi_{1})} {\kappa+\xi_{\ell}}\] \[=\sum_{\begin{subarray}{c}\xi_{1}\leq\min\{\xi_{2},\ldots,\xi_{ \ell}\}\\ \xi_{1},\ldots,\xi_{\ell}\in 2\pi\mathbb{Z}_{+}\end{subarray}}\frac{\widehat{q}( \xi_{1}-\xi_{2})}{\kappa+\xi_{1}}\frac{\widehat{q}(\xi_{2}-\xi_{3})}{\kappa+ \xi_{2}}\cdots\frac{\widehat{q}(\xi_{\ell}-\xi_{1})}{\kappa+\xi_{\ell}}\] \[=\sum_{\xi\in 2\pi\mathbb{Z}_{+}}\frac{1}{\kappa+\xi}\,\sum_{ \begin{subarray}{c}\eta_{2},\ldots,\eta_{\ell}\in 2\pi\mathbb{Z}\\ \eta_{j}+\cdots+\eta_{\ell}\geq 0,\forall 2\leq j\leq\ell\end{subarray}} \widehat{q}\bigl{(}-(\eta_{2}+\cdots\eta_{\ell})\bigr{)}\prod_{j=2}^{\ell} \frac{\widehat{q}(\eta_{j})}{\kappa+\xi+\eta_{j}+\cdots+\eta_{\ell}}\] \[=\sum_{\xi\in 2\pi\mathbb{Z}_{+}}\frac{1}{\kappa+\xi}\Bigl{\langle}q,\,\bigl{(}R_{0}(\kappa+\xi)C_{+}q\bigr{)}^{\ell-2}R_{0}(\kappa+\xi)q_{+} \Bigr{\rangle}.\] Recalling (4.3), (4.12), and summing over \(\ell\geq 2\), we obtain \[\alpha(\kappa;q)=\sum_{\xi\in 2\pi\mathbb{Z}_{+}}\frac{1}{\kappa+\xi}\langle q,m(\kappa+\xi,q)\rangle=\sum_{\xi\in 2\pi\mathbb{Z}_{+}}\frac{1}{\kappa+\xi} \beta(\kappa+\xi;q),\] which completes the proof in the circle setting. ### The action of higher symmetries With infinitely many conserved quantities, the Benjamin-Ono equation possesses a wide array of Hamiltonian symmetries. As these Hamiltonians are all mutually commuting, these symmetries preserve the values of all these conserved quantities. By _higher_ symmetries, we mean those that do not preserve the conserved quantities. Scaling and Galilei/Lorentz boosts are important examples, common to a rich class of Hamiltonian PDE. In the Benjamin-Ono setting, these symmetries take the forms given in (1.2) and (1.5), respectively. The scaling symmetry is Hamiltonian; indeed, the center of momentum \[\text{CofP}:=\int\tfrac{1}{2}xq(x)^{2}\,dx\quad\text{generates}\quad\tfrac{d}{dt}q =(xq)^{\prime}=xq^{\prime}+q=\tfrac{dq_{\lambda}}{d\lambda}\big{|}_{\lambda=1}. \tag{4.50}\] While one should actually divide by the total momentum to find the true centroid, this muddies the formulas without yielding better physical insight. The Galilei symmetry is not Hamiltonian; indeed, no Hamiltonian flow can change the value of the Casimir \(\int q\). Our first result describes the action of these higher symmetries on the totality of the conserved quantities, expressed in terms of their generating function \(\beta\): **Lemma 4.14**.: _Working on the line, with \(q_{\lambda}\) defined by (1.2), we have_ \[\beta(\lambda\kappa;q_{\lambda})=\beta(\kappa;q)\quad\text{for any $\lambda>0$.} \tag{4.51}\] _On the circle, the Galilean symmetry acts as follows: for any \(c\in\mathbb{R}\),_ \[\beta(\kappa;q+c)+\int(q+c)\,dx=\tfrac{\kappa^{2}}{(\kappa-c)^{2}}\Big{[} \beta(\kappa-c;q)+\int\!q\,dx\Big{]}+\tfrac{c\kappa}{\kappa-c}. \tag{4.52}\] Proof.: We define an operator \(\mathcal{U}_{\lambda}\) via \[[\mathcal{U}_{\lambda}f](x)=\sqrt{\lambda}\,f(\lambda x). \tag{4.53}\] This is unitary on \(L^{2}_{+}(\mathbb{R})\). It differs from the scaling (1.2) by \(q_{\lambda}=\sqrt{\lambda}\,\mathcal{U}_{\lambda}q\). 
Direct computation shows that \((\mathcal{L}(q_{\lambda})+\lambda\kappa)\,\mathcal{U}_{\lambda}=\lambda\, \mathcal{U}_{\lambda}(\mathcal{L}(q)+\kappa)\), which implies \[\mathcal{U}_{\lambda}(\mathcal{L}(q)+\kappa)^{-1}=\lambda(\mathcal{L}(q_{ \lambda})+\lambda\kappa)^{-1}\,\mathcal{U}_{\lambda}.\] The identity (4.51) now follows easily: \[\beta(\lambda\kappa;q_{\lambda})=\lambda\langle\mathcal{U}_{\lambda}q_{+},( \mathcal{L}(q_{\lambda})+\lambda\kappa)^{-1}\,\mathcal{U}_{\lambda}q_{+} \rangle=\langle\mathcal{U}_{\lambda}q_{+},\,\mathcal{U}_{\lambda}(\mathcal{L} (q)+\kappa)^{-1}q_{+}\rangle=\beta(\kappa;q).\] The identity (4.52) follows from (4.30), (4.31), the observation \(\mathcal{L}(q+c)=\mathcal{L}(q)-c\), and elementary manipulations. By differentiating the identity (4.51) with respect to \(\lambda\) and setting \(\lambda=1\), we obtain the following virial-type identity: \[\{\beta(\kappa),\text{CofP}\}=-\kappa\tfrac{\partial\beta}{\partial\kappa}. \tag{4.54}\] Understanding the CofP as the generator of scaling and matching coefficients in the \(\kappa\to\infty\) expansion, we see that (4.54) shows that the Hamiltonians for which \(\beta(\kappa)\) is the generating function are individually homogeneous under scaling. An alternate physical interpretation of (4.54) is that it reveals the time dependence of CofP under each of the Hamiltonians; specifically, it shows that the center of momentum travels at a constant speed equal to a numerical multiple of the Hamiltonian. A third perspective on (4.54) is this: Given a conserved quantity, taking the Poisson bracket with CofP will yield a new conserved quantity. Sadly, it is not really 'new'; each term in the expansion of \(\beta\) merely picks up a numerical prefactor illustrating its scaling degree. The Galilei symmetry is more exciting. The formula (4.52) shows that by performing a Galilei boost on a single Hamiltonian yields a polynomial in \(c\) whose coefficients are all the preceding Hamiltonians. It allows one to descend through the hierarchy! As mentioned in the introduction, Fokas and Fuchssteiner [14] found a vector field \(\tau\) that allowed them to _ascend_ in the hierarchy. As the culmination of this section, we will now explain how the preceding discussion led us to a new and physically appealing interpretation of their discovery. Then in Section 6 we will present a far reaching generalization; see Theorem 6.5. Let us declare that the center of energy is given by \[\text{Cof\!E}\,{:=}\,\int\tfrac{1}{2}xq(x)\cdot\mathsf{H}\partial q(x)-\tfrac{1} {3}xq(x)^{3}\,dx. \tag{4.55}\] The term \(xq^{3}\) is not controversial. However, we have selected a very specific way of inserting the weight \(x\) into the kinetic energy term and would have to admit other possibilities, but for the following dramatic observation: The Hamiltonian vector field associated to \(\text{Cof\!E}\) is (subject to our sign conventions) precisely the \(\tau\) vector field of [14]! To see this, we use that \([\mathsf{H}\partial,x]=\mathsf{H}\) and so \[\partial_{x}\bigl{(}\tfrac{\delta}{\delta q}\text{Cof\!E}\bigr{)}=\bigl{[}x \mathsf{H}q^{\prime}+\tfrac{1}{2}\mathsf{H}q\bigr{]}^{\prime}-[xq^{2}]^{ \prime}=x(\mathsf{H}q^{\prime\prime}-2qq^{\prime})-q^{2}+\tfrac{3}{2}\mathsf{ H}q^{\prime}. 
\tag{4.56}\] In this way, the miraculous property of \(\tau\) can be summarized as \[\{\beta(\kappa),\text{Cof\!E}\}=\kappa^{2}\tfrac{\partial\beta}{\partial \kappa}+\kappa\beta(\kappa), \tag{4.57}\] which shows that the Poisson bracket of \(\text{Cof\!E}\) and one of Hamiltonians of the hierarchy yields the next _higher_ Hamiltonian. Equivalently, the center of energy travels at a constant speed, which is given by this higher Hamiltonian. This presentation leads us naturally to ask: Is there a coherent way of defining the center for every one of the conserved quantities? Perhaps even a unifying \(\text{Cof\!\beta}\)? And can this be done in such a way that these centers move at a constant speed? Naturally, this speed would be another conserved quantity. We will answer all these questions successfully in Section 6. This will include a proof of (4.57). For such a direct identity involving \(m\) and \(q\), it is tempting to imagine that (4.57) should follow quickly from (4.1), (4.12), and (4.29) together with some strategic integrations by parts. We know of no simple argument of this type. Nevertheless, our discovery of just the right Lax representation of the flows, presented in the next section, will yield the result very quickly indeed. ## 5. Well-posedness Our analysis begins with the discussion of the evolution dictated by our regularized Hamiltonians \(H_{\kappa}\) introduced in (1.18). These Hamiltonians are not globally defined: for a given size of initial data, \(\kappa\) needs to be chosen sufficiently large. With this in mind, Convention 4.5 will be in force throughout this section. In this section, we will verify that \(\beta\) is conserved under the \(H_{\kappa}\) flow, as well as under (BO). In this way, our convention ensures that \(\kappa\) will be large enough, not only for the initial data, but also for all trajectories of interest to us. Before turning to the well-posedness of the \(H_{\kappa}\) flow, our first result is devoted to describing the associated vector field. **Proposition 5.1**.: _The evolution induced by the Hamiltonian \(H_{\kappa}\) is_ \[\tfrac{d}{dt}q=\begin{cases}-\kappa^{2}\bigl{(}m+\overline{m}+|m|^{2}\bigr{)} ^{\prime}+\kappa q^{\prime}&\text{ on $\mathbb{R}$,}\\ -\kappa^{2}\bigl{(}m+\overline{m}+|m|^{2}\bigr{)}^{\prime}+\bigl{[}\kappa+ \int q\bigr{]}q^{\prime}&\text{ on $\mathbb{T}$.}\end{cases} \tag{5.1}\] _Moreover, we have the following Lax pair representation: \(q\) solves (5.1) if and only if_ \[\tfrac{d}{dt}\mathcal{L}=[\mathcal{P}_{\kappa},\mathcal{L}] \tag{5.2}\] _where \(\mathcal{L}=\mathcal{L}(q(t))\) is the Lax operator described in Proposition 3.2 and_ \[\mathcal{P}_{\kappa}:=i\kappa^{3}(\mathcal{L}+\kappa)^{-1}-i\kappa^{2}(m+1)C_{+} (\overline{m}+1)+\kappa\partial, \tag{5.3}\] _on the line; on the circle, \(\mathcal{P}_{\kappa}\) is defined by_ \[\mathcal{P}_{\kappa}:=i\kappa^{2}\big{[}\kappa+\beta(\kappa)+\int\!q\big{]}( \mathcal{L}+\kappa)^{-1}-i\kappa^{2}(m+1)C_{+}(\overline{m}+1)+\big{[}\kappa+ \int\!q\big{]}\partial. \tag{5.4}\] _These operators have the special property_ \[\tfrac{d}{dt}q_{+}=\mathcal{P}_{\kappa}q_{+}. \tag{5.5}\] Before turning to the proof of this result, we pause to note that irrespective of the geometry, the first term in the definition of \(\mathcal{P}_{\kappa}\) is inconsequential to the Lax-pair property, because it commutes with \(\mathcal{L}\). 
However, its removal would destroy the special property (5.5), which greatly expedites the arguments of this section and played a crucial role in our discoveries reported in the next section. Let us also note that while restricting the torus evolution to \(\int q=0\) would unify the dynamical equations (5.1), it would not do the same for the operators \(\mathcal{P}_{\kappa}\); they would still differ by the summand \(i\kappa^{2}\beta(\kappa)R(\kappa)\). Proof of Proposition 5.1.: To avoid repeating ourselves, we will only present the details in the periodic case, which are slightly more involved. The equation (5.1) follows from (1.18), (4.13), and the Poisson structure (1.4): \[\tfrac{d}{dt}q=-\kappa^{2}\big{(}\tfrac{\delta\beta}{\delta q}\big{)}^{\prime }+\big{[}\kappa+\int\!q\big{]}\big{(}\tfrac{\delta P}{\delta q}\big{)}^{\prime }=-\kappa^{2}\big{(}m+\overline{m}+|m|^{2}\big{)}^{\prime}+\big{[}\kappa+\int \!q\big{]}q^{\prime}. \tag{5.6}\] Next we address the Lax pair formulation of the \(H_{\kappa}\) flow. As noted above, it suffices to prove the Lax property with \[\widetilde{\mathcal{P}}_{\kappa}:=-i\kappa^{2}(m+1)C_{+}(\overline{m}+1)+ \big{[}\kappa+\int\!q\big{]}\partial. \tag{5.7}\] If \(q\) satisfies (5.1), then \[\tfrac{d}{dt}\mathcal{L}=\kappa^{2}C_{+}m^{\prime}(\overline{m}+1)+\kappa^{2} C_{+}(m+1)\overline{m}^{\prime}-\big{[}\kappa+\int\!q\big{]}C_{+}q^{\prime} \tag{5.8}\] as operators on \(L_{+}^{2}\). We will show that RHS(5.2)=RHS(5.8), which proves that (5.1) implies (5.2). Conversely, as \((\tfrac{d}{dt}\mathcal{L})f=-C_{+}(\tfrac{dq}{dt}f)\), the time derivative of \(\mathcal{L}\) uniquely determines \(\tfrac{dq}{dt}\). Thus, the equality RHS(5.2)=RHS(5.8) also shows that (5.2) implies (5.1). Proceeding directly from the definitions, we find \[[\widetilde{\mathcal{P}}_{\kappa},\mathcal{L}] =\kappa^{2}\big{\{}m^{\prime}C_{+}(\overline{m}+1)+(m+1)C_{+} \overline{m}^{\prime}\big{\}}-\big{[}\kappa+\int\!q\big{]}C_{+}q^{\prime}\] \[\quad-i\kappa^{2}C_{+}q(m+1)C_{+}(\overline{m}+1)+i\kappa^{2}(m+1 )C_{+}(\overline{m}+1)C_{+}q\] \[=\text{RHS}(5.8)-\kappa^{2}C_{+}m^{\prime}[1-C_{+}](\overline{m}+ 1)-\kappa^{2}C_{+}(m+1)[1-C_{+}]\overline{m}^{\prime}\] \[\quad-i\kappa^{2}C_{+}q(m+1)C_{+}(\overline{m}+1)+i\kappa^{2}(m+ 1)C_{+}(\overline{m}+1)C_{+}q \tag{5.9}\] as operators on \(L_{+}^{2}\). Now for \(f\in H_{+}^{\infty}\), (4.1) yields \[C_{+}m^{\prime}[1-C_{+}](\overline{m}+1)f+C_{+}(m+1)[1-C_{+}] \overline{m}^{\prime}f\] \[\quad=-i\kappa C_{+}m[1-C_{+}](\overline{m}+1)f+i\kappa C_{+}(m+1 )[1-C_{+}]\overline{m}f\] \[\quad\quad+iC_{+}[q(m+1)]_{+}[1-C_{+}](\overline{m}+1)f-iC_{+}(m+1 )[1-C_{+}][(\overline{m}+1)q]_{-}f\] \[\quad=iC_{+}q(m+1)[1-C_{+}](\overline{m}+1)f-iC_{+}(m+1)[1-C_{+}]( \overline{m}+1)qf\] \[\quad=-iC_{+}q(m+1)C_{+}(\overline{m}+1)f+iC_{+}(m+1)C_{+}( \overline{m}+1)qf\] \[\quad=-iC_{+}q(m+1)C_{+}(\overline{m}+1)f+iC_{+}(m+1)C_{+}( \overline{m}+1)C_{+}qf.\] Substituting this into (5.9) gives \([\widetilde{\mathcal{P}}_{\kappa},\mathcal{L}]=\operatorname{RHS}(5.8)\), which shows that \(\operatorname{RHS}(5.2)\) and \(\operatorname{RHS}(5.8)\) are equal, thereby completing the proof of the Lax pair formulation. It remains to justify (5.5). One important distinction between the two geometries is (2.3). 
For example, working on \(\mathbb{T}\) and using (4.12) and (4.1), we have \[C_{+}(\overline{m}+1)q_{+} =C_{+}(\overline{m}+1)q=[1-C_{-}](\overline{m}+1)q+\big{[}\beta( \kappa)+\int\!q\big{]}\] \[=(\overline{m}+1)q-i\overline{m}^{\prime}-\kappa\overline{m}+ \big{[}\beta(\kappa)+\int\!q\big{]}.\] Similar reasoning using (4.1) shows that \[C_{+}(m+1)(\overline{m}+1)q =C_{+}(\overline{m}+1)C_{+}(m+1)q+C_{+}(\overline{m}+1)[1-C_{+}] (m+1)q\] \[=C_{+}(\overline{m}+1)(-im^{\prime}+\kappa m),\] irrespective of the geometry. Combining our last two calculations, we find that on \(\mathbb{T}\), \[i\kappa^{2}C_{+} (m+1)C_{+}(\overline{m}+1)q_{+}\] \[=i\kappa^{2}C_{+}\Big{[}(m+1)(\overline{m}+1)q+(m+1)\big{[}-i \overline{m}^{\prime}-\kappa\overline{m}+\beta(\kappa)+\int\!q\big{]}\Big{]}\] \[=\kappa^{2}C_{+}\Big{[}m+\overline{m}+|m|^{2}\big{]}^{\prime}+i \kappa^{3}C_{+}(m-\overline{m})+i\kappa^{2}\big{[}\beta(\kappa)+\int\!q\big{]} (m+1).\] Again we meet a distinction. On the line, \(C_{+}\overline{m}=0\); however, on \(\mathbb{T}\), (4.30) shows \[i\kappa^{3}C_{+}\overline{m}=i\kappa^{3}\int\!\overline{m}=i\kappa^{2}\big{[} \beta(\kappa)+\int\!q\big{]}.\] In this way, we deduce that on \(\mathbb{T}\), \[i\kappa^{2}C_{+}(m+1)C_{+}(\overline{m}+1)q_{+}=\kappa^{2}C_{+}\big{(}m+ \overline{m}+|m|^{2}\big{)}^{\prime}+i\kappa^{2}\big{[}\kappa+\beta(\kappa)+ \int\!q\big{]}m,\] from which (5.5) follows easily. **Theorem 5.2** (Well-posedness of the \(H_{\kappa}\) flow).: _Given \(A>0\), let \(\kappa_{0}(A)\) be chosen according to Convention 4.5. For \(\kappa\geq\kappa_{0}\), the \(H_{\kappa}\) flow is globally well-posed for initial data in \(B^{s}_{A}\). Moreover, the quantity \(\beta(\varkappa;q(t))\) is conserved by the \(H_{\kappa}\) flow:_ \[\tfrac{d}{dt}\beta(\varkappa;q(t))=0\quad\text{for any}\quad\varkappa\geq \kappa_{0}. \tag{5.10}\] _Furthermore, if \(q(0)\in B^{s}_{A}\cap H^{\infty}\) then \(q(t)\in H^{\infty}\) for all \(t\in\mathbb{R}\) and the \(H_{\kappa}\) flow commutes with the Benjamin-Ono flow on \(H^{\infty}\)._ Proof.: We present the proof in the line setting. On the circle, the linearized flow contains an additional translation at speed \(\int\!q\). This alters several formulas, but introduces no additional difficulty. We begin by recasting (5.1) as the integral equation \[q(t)=e^{t\kappa\partial}q(0)-\kappa^{2}\int_{0}^{t}e^{(t-s)\kappa\partial} \big{[}|m(\kappa,q(s))|^{2}+2\operatorname{Re}m(\kappa,q(s))\big{]}^{\prime} \,ds.\] Next, we observe that \(q\mapsto[|m|^{2}+2\operatorname{Re}m]^{\prime}\) is a Lipschitz function. This follows from (4.4), (4.9), (4.10), the fundamental theorem of calculus, and the fact that \(H^{s+1}\) is an algebra: \[\big{\|}\big{[}|m(\kappa,q)|^{2} +2\operatorname{Re}m(\kappa,q)\big{]}^{\prime}-\big{[}|m(\kappa, \widehat{q})|^{2}+2\operatorname{Re}m(\kappa,\widehat{q})\big{]}^{\prime} \big{\|}_{H^{s}}\] \[\lesssim\big{\|}\big{[}|m(\kappa,q)|^{2}+2\operatorname{Re}m( \kappa,q)\big{]}-\big{[}|m(\kappa,\widehat{q})|^{2}+2\operatorname{Re}m(\kappa, \widehat{q})\big{]}\big{\|}_{H^{s+1}}\] \[\lesssim\|m(\kappa,q)-m(\kappa,\widehat{q})\|_{H^{s+1}}\big{[} \|m(\kappa,q)\|_{H^{s+1}}+\|m(\kappa,\widehat{q})\|_{H^{s+1}}+1\big{]}\] \[\lesssim\|q-\widetilde{q}\|_{H^{s}}\left\|dm\right|_{q}\big{\|}_{H^{s }\to H^{s+1}}\big{[}\|q\|_{H^{s}}+\left\|\widetilde{q}\right\|_{H^{s}}+1\big{]}\] \[\lesssim\|q-\widetilde{q}\big{\|}_{H^{s}}\] uniformly for \(q,\tilde{q}\in(B_{A}^{s})_{**}\) and \(\kappa\geq\kappa_{0}\). (For this notation, see Lemma 4.4.) 
Thus, local well-posedness on this larger set follows by Picard iteration. Next we address the propagation of additional regularity. By Proposition 4.1 we know that \(q\in H^{\infty}\) implies \(m(\kappa,q)\in H^{\infty}\). Indeed, the quantitative bound (4.6) together with a Gronwall argument shows that higher regularity norms can grow at most exponentially in time. Most important for us is the conclusion that if \(q(0)\in H^{\infty}\), then \(q(t)\in H^{\infty}\) for all times of existence.

For \(H^{\infty}\) solutions to the \(H_{\kappa}\) flow, Lemma 4.11 shows that \[\tfrac{d}{dt}\beta(\varkappa)=\{\beta(\varkappa),H_{\kappa}\}=-\kappa^{2}\{\beta(\varkappa),\beta(\kappa)\}+\kappa\{\beta(\varkappa),P(q)\}=0.\] The conservation of \(\beta(\varkappa)\) for \(H^{s}\)-solutions then follows from the \(H^{s}\)-continuity of \(q\mapsto\beta(\varkappa;q)\) and the local well-posedness of the flow. As Lemma 4.4 demonstrates, the conservation of \(\beta\) ensures that the local-in-time argument may be iterated indefinitely, thus yielding global well-posedness in \(H^{s}\).

Lastly, we verify that the \(H_{\kappa}\) and the Benjamin-Ono flows commute on \(H^{\infty}\) solutions. We have \[\{H_{\kappa},H_{\mathrm{BO}}\}=-\kappa^{2}\{\beta(\kappa),H_{\mathrm{BO}}\}+\kappa\{P,H_{\mathrm{BO}}\}.\] Each bracket on the right-hand side above vanishes because the \(H_{\mathrm{BO}}\) flow conserves both \(\beta\) (see (4.26)) and the momentum \(P\).

Due to their commutativity, one may define a joint flow under both the Benjamin-Ono and \(H_{\kappa}\) Hamiltonians, at least for \(H^{\infty}\) initial data. The conservation of \(\beta\) under both of these flows provides bounds and equicontinuity of joint orbits:

**Corollary 5.3**.: _Given \(A>0\) and a set of real-valued initial data \(Q\subset B_{A}^{s}\cap H^{\infty}\), we define_ \[Q_{*}=\big{\{}e^{J\nabla(t_{1}H_{\mathrm{BO}}+t_{2}H_{\kappa})}(q):q\in Q,\ t_{1},t_{2}\in\mathbb{R},\ \kappa\geq\kappa_{0}(A)\big{\}}. \tag{5.11}\] _Then \(Q_{*}\subset Q_{**}\) and so \(Q_{*}\) is bounded; indeed,_ \[\big{(}1+\|q_{0}\|_{H^{s}}\big{)}^{-2|s|}\|q_{0}\|_{H^{s}}\lesssim\|q\|_{H^{s}}\lesssim\big{(}1+\|q_{0}\|_{H^{s}}\big{)}^{\frac{2|s|}{1-2|s|}}\|q_{0}\|_{H^{s}} \tag{5.12}\] _for every \(q\in\{q_{0}\}_{**}\) and \(q_{0}\in B_{A}^{s}\). If \(Q\) is equicontinuous, then so too is \(Q_{*}\)._

Proof.: As we saw in (4.26) and (5.10), both flows defining \(Q_{*}\) conserve \(\beta\). Thus \(Q_{*}\subset Q_{**}\) and so Lemma 4.4 may be applied. The right-hand inequality in (5.12) is just a recapitulation of (4.16). The left-hand inequality follows from this by reversing the roles of \(q\) and \(q_{0}\).

As discussed in the introduction, we wish to show that trajectories under the \(H_{\kappa}\) Hamiltonian closely parallel the original Benjamin-Ono flow. How is this to be done? An obvious approach would be to compute the difference of the two vector fields and endeavor to show this is small in some sense. This strikes the immediate hurdle that (BO) does not define a vector field on \(H^{s}\) because the operator \(q\mapsto q^{2}\) is not well-defined, even as a distribution. Before taking the difference, we must make a gauge transformation; specifically, we will use \(q\mapsto n=m(\varkappa,q)\). Recall that by Proposition 4.2, this is a diffeomorphism from bounded subsets of \(H^{s}\) into \(H^{s+1}\), provided \(\varkappa\) is sufficiently large.
The special property (5.5) of our Lax pair representation (5.2) makes it easy to deduce the dynamics of the new unknown \(n=(\mathcal{L}+\varkappa)^{-1}q_{+}\) under the \(H_{\kappa}\) flow: \[\tfrac{d}{dt}n=[\mathcal{P}_{\kappa},R(\varkappa)]q_{+}+R(\varkappa)\mathcal{P}_{\kappa}q_{+}=\mathcal{P}_{\kappa}R(\varkappa)q_{+}=\mathcal{P}_{\kappa}n. \tag{5.13}\] Indeed, this is the argument we used to deduce (4.25), which says that \[\tfrac{d}{dt}n=\mathcal{P}n=-in^{\prime\prime}-2C_{+}([q-q_{+}]n)^{\prime}-2q_{+}n^{\prime} \tag{5.14}\] under the (BO) flow. While these formulas are succinct and do make sense for \(q\in H^{s}\), they obscure the numerous subtle cancellations that we must exploit in order to show convergence of the \(H_{\kappa}\) flows to the Benjamin-Ono flow as \(\kappa\to\infty\). Indeed, in the form presented, it is far from clear that the \(\kappa\to\infty\) limit of \(\mathcal{P}_{\kappa}n\) even exists! Our next step is to rewrite the evolution of \(n\) under both the Benjamin-Ono and the \(H_{\kappa}\) Hamiltonians in a new way that is amenable to demonstrating this essential convergence.

**Lemma 5.4**.: _If \(q(t)\) is an \(H^{\infty}(\mathbb{R})\) solution of (BO) on the line, then_ \[\tfrac{d}{dt}n=\big{\{}\mathcal{L}n-C_{+}(q_{-}n)\big{\}}^{\prime}-iq_{+}\mathcal{L}n+q_{+}^{\prime}n-iq_{+}C_{+}(qn), \tag{5.15}\] _while for solutions of the \(H_{\kappa}\) flow on the line we have_ \[\tfrac{d}{dt}n=\big{\{}\kappa R(\kappa)\mathcal{L}n-\kappa^{2}C_{+}\big{[}\overline{m}R(\kappa)n\big{]}\big{\}}^{\prime}-i\kappa q_{+}R(\kappa)\mathcal{L}n+\kappa m^{\prime}n\] \[-i\kappa mC_{+}([q-q_{-}+\kappa\overline{m}]n)-i\kappa C_{+}(q_{+}\overline{m})\cdot\mathcal{L}R(\kappa)n\] \[+\kappa C_{+}\big{(}|m|^{2}\big{)}^{\prime}\cdot n-i\kappa[1-C_{-}](q_{+}\overline{m})\cdot mn. \tag{5.16}\] _On the circle, these formulas are modified by additional terms (for example, the summand \(\big[\int q\big]n^{\prime}\) appearing in (5.18) below)._

Proof.: Our next simplification involves the second term on the RHS(5.17); by (4.1), \[\big{[}\kappa m+\kappa\overline{m}-q-\int\!q\big{]}=(\kappa m-q_{+})+(\kappa\overline{m}-q_{-})=-\mathcal{L}m-\overline{\mathcal{L}m}.\] Regarding the first and fourth terms on the RHS(5.17), we have \[i\kappa\mathcal{L}R(\kappa)\mathcal{L}n-i\kappa\big{[}\!\int\!q\big{]}R(\kappa)\mathcal{L}n=\big{(}\kappa R(\kappa)\mathcal{L}n\big{)}^{\prime}-i\kappa C_{+}\big{(}[q_{+}+q_{-}]R(\kappa)\mathcal{L}n\big{)}.\] Incorporating this information reveals \[\tfrac{d}{dt}n=\big{(}\kappa R(\kappa)\mathcal{L}n\big{)}^{\prime}-i\kappa q_{+}R(\kappa)\mathcal{L}n-i\kappa^{2}mC_{+}(\overline{m}n)+i\kappa(\mathcal{L}m)n\] \[\quad+i\kappa C_{+}\big{[}(\overline{\mathcal{L}m})n\big{]}-i\kappa C_{+}\big{[}q_{-}\mathcal{L}R(\kappa)n\big{]}+i\kappa^{2}\beta(\kappa)R(\kappa)n+\big{[}\!\int\!q\big{]}n^{\prime}. \tag{5.18}\] Consideration of the fifth and sixth summands on RHS(5.18) leads us to observe \[i\kappa C_{+}\Big{[}(\overline{\mathcal{L}m})n-q_{-}\mathcal{L}R(\kappa)n\Big{]}=i\kappa C_{+}\Big{[}\overline{\mathcal{L}m}\cdot(\mathcal{L}+\kappa)R(\kappa)n-\overline{(\mathcal{L}+\kappa)m}\cdot\mathcal{L}R(\kappa)n\Big{]}\] \[=i\kappa^{2}C_{+}\Big{[}\overline{\mathcal{L}m}\cdot R(\kappa)n-\overline{m}\cdot\mathcal{L}R(\kappa)n\Big{]},\] to which we apply Lemma 3.3.
This yields \[i\kappa C_{+}\Big{[}(\overline{\mathcal{L}m})n-q_{-}\mathcal{L} R(\kappa)n\Big{]} =-\kappa^{2}C_{+}\big{[}\overline{m}\,R(\kappa)n\big{]}^{\prime}+i \kappa^{2}[1-C_{-}](q_{+}\overline{m})\cdot R(\kappa)n.\] Before using this to rewrite \(\tfrac{d}{dt}n\), let us pause to observe that (4.12) shows that the last term here may be profitably combined with the second to last term in (5.18): \[i\kappa^{2}[1-C_{-}](q_{+}\overline{m})\cdot R(\kappa)n+i\kappa^ {2}\beta(\kappa)R(\kappa)n =i\kappa^{2}C_{+}(q_{+}\overline{m})\cdot R(\kappa)n\] \[=i\kappa C_{+}(q_{+}\overline{m})\cdot[n-R(\kappa)\mathcal{L}n].\] Incorporating all these deductions into (5.18), we find \[\tfrac{d}{dt}n =\big{(}\kappa R(\kappa)\mathcal{L}n\big{)}^{\prime}-i\kappa q_{ +}R(\kappa)\mathcal{L}n-i\kappa^{2}mC_{+}(\overline{m}n)+i\kappa(\mathcal{L}m)n\] \[\quad-\kappa^{2}C_{+}\big{[}\overline{m}\,R(\kappa)n\big{]}^{ \prime}-i\kappa C_{+}(q_{+}\overline{m})\cdot R(\kappa)\mathcal{L}n+i\kappa C _{+}(q_{+}\overline{m})\cdot n+\big{[}\!\int\!q\big{]}n^{\prime}. \tag{5.19}\] Two terms require further attention: neither \(i\kappa(\mathcal{L}m)n\) nor \(i\kappa C_{+}(q_{+}\overline{m})\cdot n\) admit a \(\kappa\to\infty\) limit. However, the combination does! Using the definition of \(\mathcal{L}\) together with Lemma 3.3, we may write \[i\kappa\big{[}(\mathcal{L}m) +C_{+}(q_{+}\overline{m})\big{]}n\] \[=\kappa m^{\prime}n-i\kappa[q-q_{-}]mn-i\kappa nC_{+}(q_{-}m)+i \kappa nC_{+}(q_{+}\overline{m})\] \[=\kappa m^{\prime}n-i\kappa[q-q_{-}]mn-i\kappa nC_{+}\big{\{} \overline{q_{+}}m-\overline{m}q_{+}\big{\}}\] \[=\kappa m^{\prime}n-i\kappa[q-q_{-}]mn-i\kappa nC_{+}\big{\{} \overline{(\mathcal{L}+\kappa)m}\cdot m-\overline{m}\cdot(\mathcal{L}+\kappa)m \big{\}}\] \[=\kappa m^{\prime}n-i\kappa[q-q_{-}]mn-i\kappa nC_{+}\big{\{} \overline{\mathcal{L}m}\cdot m-\overline{m}\cdot\mathcal{L}m\big{\}}\] \[=\kappa m^{\prime}n-i\kappa mC_{+}\big{(}[q-q_{-}]n\big{)}+ \kappa nC_{+}\big{(}|m|^{2}\big{)}^{\prime}-i\kappa mn[1-C_{-}](q_{+} \overline{m}).\] Inserting this into (5.19) completes our treatment of the \(H_{\kappa}\) flow. **Theorem 5.5**.: _Let \(\{q_{j}^{0}\}_{j\geq 1}\subset H^{\infty}\) be a sequence of real-valued initial data that converges in \(H^{s}\). Then for all \(T>0\), the corresponding \(H^{\infty}\) solutions \(q_{j}(t)\) to (BO) converge in \(C([-T,T];H^{s})\)._ Proof.: Let \(Q=\{q_{j}^{0}:j\geq 1\}\) and let \(Q_{*}\) be defined as in (5.11). By Corollary 5.3, \(Q_{*}\) is bounded and equicontinuous in \(H^{s}\). As the \(H_{\kappa}\) and \(H_{\mathrm{BO}}\) flows commute (cf. Theorem 5.2), we may write \[q_{j}(t)=e^{tJ\nabla H_{\mathrm{BO}}}(q_{j}^{0})=e^{tJ\nabla(H_{\mathrm{BO}}-H_ {\kappa})}\circ e^{tJ\nabla H_{\kappa}}(q_{j}^{0})\] and so \[\begin{split}\sup_{|t|\leq T}\left\|q_{j}(t)-q_{\ell}(t)\right\|_{H^ {s}}\leq&\ \sup_{|t|\leq T}\left\|e^{tJ\nabla H_{\kappa}}(q_{j}^{0})-e^{tJ\nabla H_{ \kappa}}(q_{\ell}^{0})\right\|_{H^{s}}\\ &+2\sup_{q\in\bar{Q}_{*}}\sup_{|t|\leq T}\left\|e^{tJ\nabla(H_{ \mathrm{BO}}-H_{\kappa})}(q)-q\right\|_{H^{s}}.\end{split} \tag{5.20}\] By the well-posedness of the \(H_{\kappa}\) flows, the first term on RHS(5.20) converges to zero as \(j,\ell\to\infty\) for each fixed \(\kappa\geq\kappa_{0}\). Therefore, it suffices to show that \[\lim_{\kappa\to\infty}\sup_{q\in\bar{Q}_{*}}\sup_{|t|\leq T}\left\|e^{tJ \nabla(H_{\mathrm{BO}}-H_{\kappa})}(q)-q\right\|_{H^{s}}=0. 
\tag{5.21}\] We adopt the following notation: given initial data \(q\in Q_{*}\), we write \[q(t)=e^{tJ\nabla(H_{\mathrm{BO}}-H_{\kappa})}(q)\] for the corresponding solution to the difference flow and \(n(t):=n(x;\varkappa,q(t))\). By the diffeomorphism property demonstrated in Proposition 4.2, (5.21) will follow from \[\lim_{\kappa\to\infty}\sup_{q\in Q_{*}}\sup_{|t|\leq T}\left\|n(t)-n(0)\right\| _{H^{s+1}}=0. \tag{5.22}\] Note that as \(Q_{*}\) is bounded and equicontinuous in \(H^{s}\), the diffeomorphism property together with the translation identity (4.5) yield that the set \[\left\{n(x;\varkappa,q(t)):q\in Q_{*},\ t\in\mathbb{R}\right\}\] is bounded and equicontinuous in \(H^{s+1}\). As equicontinuity in a high regularity space together with convergence in a low regularity space imply convergence in the high regularity space, we see that to prove (5.22) it suffices to show \[\lim_{\kappa\to\infty}\sup_{q\in Q_{*}}\sup_{|t|\leq T}\left\|n(t)-n(0)\right\| _{H^{-2}}=0. \tag{5.23}\] By the fundamental theorem of calculus, (5.23) is a consequence of \[\lim_{\kappa\to\infty}\sup_{q\in Q_{*}}\sup_{|t|\leq T}\left\|\tfrac{dn}{dt} \right\|_{H^{-2}}=0, \tag{5.24}\] where the time derivative of \(n\) is dictated by the difference flow. The equation for this evolution may be deduced immediately from Lemma 5.4. In taking this difference, the distinction between the two geometries disappears. Combining (5.15), (5.16), and the identity \(\mathcal{L}-\kappa R(\kappa)\mathcal{L}=\mathcal{L}R(\kappa)\mathcal{L}\), we find \[\begin{split}\tfrac{d}{dt}n=&\ \big{\{}\mathcal{L}R(\kappa)\mathcal{L}n-C_{+}\big{[}\big{(}q_{-}-\kappa^{2} \overline{m}R(\kappa)\big{)}n\big{]}\big{\}}^{\prime}-iq_{+}\mathcal{L}R( \kappa)\mathcal{L}n\\ &+\big{(}q_{+}^{\prime}-\kappa m^{\prime}\big{)}n-i(q_{+}-\kappa m )C_{+}(qn)-i\kappa mC_{+}([q_{-}-\kappa\overline{m}]n)\\ &+i\kappa C_{+}(q_{+}\overline{m})\cdot\mathcal{L}R(\kappa)n- \kappa C_{+}\big{(}|m|^{2}\big{)}^{\prime}\cdot n+i\kappa[1-C_{-}](q_{+} \overline{m})\cdot mn.\end{split} \tag{5.25}\] We will verify (5.24) by showing that each of these terms converges to zero in \(H^{-2}\) as \(\kappa\to\infty\), uniformly for \(q(t)\in(Q_{*})_{*}=Q_{*}\). Before delving in the details of this, let us recall some basic bounds that we will use repeatedly: \[\|q\|_{H^{s}}+\|n\|_{H^{s+1}}+\|m\|_{H^{s+1}_{*}}+\|\kappa m\|_{H^{s}}\lesssim 1 \tag{5.26}\] uniformly for \(q\in Q_{*}\) and \(\kappa\geq\kappa_{0}\). The first two of these were noted above; the latter two follow from (4.4). For the first term in (5.25), we use (3.5) and (4.23) to see that \[\left\|\big{\{}\mathcal{L}R(\kappa,q)\mathcal{L}n\big{\}}^{\prime}\right\|_{H^ {-2}}\lesssim\left\|\mathcal{L}R(\kappa,q)\mathcal{L}n\right\|_{H^{s}}\lesssim \left\|R(\kappa,q)\mathcal{L}n\right\|_{H^{s+1}}\to 0\quad\text{as}\quad\kappa\to\infty,\] uniformly for \(q\in Q_{*}\). Using (4.1), (3.5), and (4.23), we obtain \[\left\|\kappa m-q_{+}\right\|_{H^{s}}=\left\|\mathcal{L}m\right\|_{H^{s}}\lesssim \left\|m\right\|_{H^{s+1}}\to 0\quad\text{as }\kappa\to\infty \tag{5.27}\] uniformly for \(q\in Q_{*}\). 
Employing \(\kappa R(\kappa)=1-\mathcal{L}R(\kappa)\) and (2.6), we deduce \[\left\|C_{+}\big{[}\big{(}q_{-}-\kappa^{2}\overline{m} R(\kappa,q)\big{)}n\right]^{\prime}\right\|_{H^{-2}}\] \[\lesssim\left\|\big{(}q_{-}-\kappa\overline{m}\big{)}n\right\|_ {H^{s}}+\kappa\left\|\overline{m}\mathcal{L}R(\kappa,q)n\right\|_{H^{s}}\] \[\lesssim\left\|q_{+}-\kappa m\right\|_{H^{s}}\left\|n\right\|_{ H^{s+1}}+\left\|\kappa m\right\|_{H^{s}}\left\|\mathcal{L}R(\kappa,q)n \right\|_{H^{s+1}}.\] By (4.23), (5.26), and (5.27), this converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\). Next, we use the estimates (2.10) and (3.5) to bound \[\left\|q_{+}\mathcal{L}R(\kappa,q)\mathcal{L}n\right\|_{H^{-2}}\lesssim\left\| q_{+}\right\|_{H^{s}}\left\|\mathcal{L}R(\kappa,q)\mathcal{L}n\right\|_{H^{s}} \lesssim\left\|q_{+}\right\|_{H^{s}}\left\|R(\kappa,q)\mathcal{L}n\right\|_{ H^{s+1}}.\] By (4.23), this converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\). Using the triangle inequality, (2.6), and (2.10), we may bound \[\left\|\big{(}\kappa m^{\prime}-q_{+}^{\prime}\big{)}n\right\|_{ H^{-2}} \leq\left\|\big{\{}(\kappa m-q_{+})n\big{\}}^{\prime}\right\|_{H^{ -2}}+\left\|(\kappa m-q_{+})n^{\prime}\right\|_{H^{-2}}\] \[\lesssim\left\|\kappa m-q_{+}\right\|_{H^{s}}+\left\|(\kappa m-q _{+})n^{\prime}\right\|_{H^{2s-1}}\] \[\lesssim\left\|\kappa m-q_{+}\right\|_{H^{s}}\left\|n\right\|_{ H^{s+1}}+\left\|\kappa m-q_{+}\right\|_{H^{s}}\left\|n^{\prime}\right\|_{H^{s}}\] \[\lesssim\left\|\kappa m-q_{+}\right\|_{H^{s}}\left\|n\right\|_{ H^{s+1}}.\] By (5.26) and (5.27), this converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\). Using (2.10) and (2.6) again, we may bound \[\left\|(q_{+}-\kappa m)C_{+}(qn)\right\|_{H^{-2}} \lesssim\left\|q_{+}-\kappa m\right\|_{H^{s}}\left\|qn\right\|_{H ^{s}}\] \[\lesssim\left\|q_{+}-\kappa m\right\|_{H^{s}}\left\|q\right\|_{H ^{s}}\left\|n\right\|_{H^{s+1}}.\] This converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\) in view of (5.26), (5.27). By (2.10), and (2.6), we have \[\left\|\kappa mC_{+}\big{[}(q_{-}-\kappa\overline{m})n\big{]} \right\|_{H^{-2}} \lesssim\left\|\kappa m\right\|_{H^{s}}\left\|(q_{-}-\kappa \overline{m})n\right\|_{H^{s}}\] \[\lesssim\left\|\kappa m\right\|_{H^{s}}\left\|q_{+}-\kappa m \right\|_{H^{s}}\left\|n\right\|_{H^{s+1}}.\] This converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\) because of (5.26), (5.27). Using (2.10) and \(\kappa R(\kappa)=1-R(\kappa)\mathcal{L}\), followed by (2.6) and (3.5), we have \[\left\|\kappa C_{+}(q_{+}\overline{m})\cdot\mathcal{L}R(\kappa,q) n\right\|_{H^{-2}} \lesssim\left\|q_{+}\overline{m}\right\|_{H^{s}}\left[\,\left\| \mathcal{L}n\right\|_{H^{s}}+\left\|\mathcal{L}R(\kappa,q)\mathcal{L}n\right\| _{H^{s}}\,\right]\] \[\lesssim\left\|q_{+}\right\|_{H^{s}}\left\|m\right\|_{H^{s+1}} \left[\,\left\|n\right\|_{H^{s+1}}+\left\|R(\kappa,q)\mathcal{L}n\right\|_{H^{s +1}}\,\right],\] which converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\) in view of (4.23), (5.26). 
By the triangle inequality and the estimates (2.6) and (2.10), we may bound \[\left\|\kappa C_{+}\big{(}|m|^{2}\big{)}^{\prime}\cdot n\right\|_{ H^{-2}} \leq\kappa\left\|C_{+}\big{(}|m|^{2}\big{)}\cdot n\right\|_{H^{-1}}+ \kappa\big{\|}C_{+}\big{(}|m|^{2}\big{)}\cdot n^{\prime}\big{\|}_{H^{-2}}\] \[\lesssim\kappa\left\|C_{+}\big{(}|m|^{2}\big{)}\right\|_{H^{s}} \left[\left\|n\right\|_{H^{s+1}}+\left\|n^{\prime}\right\|_{H^{s}}\right]\] \[\lesssim\kappa\left\|m\right\|_{H^{s}}\left\|m\right\|_{H^{s+1}} \left\|n\right\|_{H^{s+1}},\] which converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\) as follows from (5.26) and (5.27). Finally, using the estimates (2.10) and (2.6), we have \[\left\|\kappa[1-C_{-}](q_{+}\overline{m})\cdot mn\right\|_{H^{-2}} \lesssim\kappa\left\|q_{+}\overline{m}\right\|_{H^{s}}\left\|mn \right\|_{H^{s}}\] \[\lesssim\kappa\left\|q_{+}\right\|_{H^{s}}\left\|m\right\|_{H^{s+1} }\left\|m\right\|_{H^{s}}\left\|n\right\|_{H^{s+1}},\] which converges to zero as \(\kappa\to\infty\) uniformly for \(q\in Q_{*}\) in view of (5.26), (5.27). Collecting all our estimates, we deduce (5.24), which completes the proof of the theorem. Proof of Theorem 1.1.: By the prior work discussed in the introduction, it suffices to consider \(-\frac{1}{2}<s<0\). We want to show that the solution map \(\Phi\) for (BO) extends uniquely from \(H^{\infty}\) to a jointly continuous map \(\Phi:\mathbb{R}\times H^{s}\to H^{s}\). Given initial data \(q_{0}\in H^{s}\), we define \(\Phi(t,q_{0})\) as follows: Let \(\{q_{j}^{0}\}_{j\geq 1}\) be a sequence of \(H^{\infty}\) functions that converges to \(q_{0}\) in \(H^{s}\). Applying Theorem 5.5 to the sequence \(\{q_{j}^{0}\}_{j\geq 1}\), we see that the corresponding \(H^{\infty}\) solutions \(q_{j}(t)\) to (BO) converge in \(H^{s}\) and the limit is independent of the sequence \(\{q_{j}^{0}\}_{j\geq 1}\). Consequently, \[\Phi(t,q_{0}):=\lim_{j\to\infty}q_{j}(t)\] is well-defined. We must show that \(\Phi\) is jointly continuous. Fix \(T>0\) and let \(\{q_{j}^{0}\}_{j\geq 1}\) be a sequence of initial data in \(H^{s}\) that converges to \(q_{0}\) in \(H^{s}\). By the definition of \(\Phi\), we may choose another sequence \(\widetilde{q}_{j}(t)\) of \(H^{\infty}\) solutions to (BO) such that \[\sup_{|t|\leq T}\left\|\Phi(t,q_{j}^{0})-\widetilde{q}_{j}(t)\right\|_{H^{s}} \to 0\quad\text{as $j\to\infty$.} \tag{5.28}\] In particular, \(\widetilde{q}_{j}(0)\to q_{0}\) in \(H^{s}\), and so Theorem 5.5 yields \[\sup_{|t|\leq T}\left\|\widetilde{q}_{j}(t)-\Phi(t,q_{0})\right\|_{H^{s}} \to 0\quad\text{as $j\to\infty$.} \tag{5.29}\] Given \(\{t_{j}\}\subset[-T,T]\) that converges to some \(t\in[-T,T]\), we may bound \[\left\|\Phi(t_{j},q_{j}^{0})-\Phi(t,q_{0})\right\|_{H^{s}}\] \[\qquad\leq\left\|\Phi(t_{j},q_{j}^{0})-\widetilde{q}_{j}(t_{j}) \right\|_{H^{s}}+\left\|\widetilde{q}_{j}(t_{j})-\widetilde{q}_{j}(t)\right\| _{H^{s}}+\left\|\widetilde{q}_{j}(t)-\Phi(t,q_{0})\right\|_{H^{s}}\] \[\qquad\leq\sup_{|t|\leq T}\left\|\Phi(t,q_{j}^{0})-\widetilde{q} _{j}(t)\right\|_{H^{s}}+\left\|\widetilde{q}_{j}(t_{j})-\widetilde{q}_{j}(t) \right\|_{H^{s}}+\sup_{|t|\leq T}\left\|\widetilde{q}_{j}(t)-\Phi(t,q_{0}) \right\|_{H^{s}}.\] The right-hand side above converges to zero as \(j\to\infty\) by (5.28), (5.29), and Theorem 5.5. This demonstrates that \(\Phi\) is jointly continuous. ## 6. The tau function and virial identities for the full hierarchy This section presents two new families of identities. 
The first is Theorem 6.1, which generalizes Gerard's explicit formula [15]; the second is Theorem 6.5, which presents virial-type identities fulfilling the promises made at the end of Section 4. Throughout this section we will work on the line and consider the flow generated by employing \(\beta(\kappa;q)\) as Hamiltonian. This leads to the dynamics \[\tfrac{d}{dt}q=\bigl{(}m+\overline{m}+|m|^{2}\bigr{)}^{\prime}, \tag{6.1}\] whose well-posedness in \(H^{s}\) follows from the arguments presented in Theorem 5.2. Indeed, the \(H_{\kappa}\) flow differs from the \(\beta(\kappa)\) flow only by a time rescaling and a spatial translation. Because of this relationship, Proposition 5.1 also provides us with a Lax pair representation of this flow, namely, \[\mathcal{P}_{\kappa}^{\beta}:=-i\kappa(\mathcal{L}+\kappa)^{-1}+i(m+1)C_{+}( \overline{m}+1). \tag{6.2}\] We will be studying the evolution (6.1) with initial data \(q^{0}\in L^{2}\). The equation (6.1) is also well-posed in this finer topology, as can be shown by mimicking the proof of Theorem 5.2. To avoid such repetition, we offer the following alternate argument. By (4.35), we know that the flow (6.1) preserves the \(L^{2}\) norm. In this way, continuity of the data-to-solution map follows from mere weak continuity, which may be derived from \(H^{s}\) well-posedness. The central theme of this section is how the special properties of the Lax representation (6.2) lead quickly to the sought-after formulas. In addition to the special properties \[\tfrac{d}{dt}q_{+}(t)=\mathcal{P}^{\beta}_{\kappa}q_{+}(t)\quad\text{and}\quad \tfrac{d}{dt}n(x;q(t))=\mathcal{P}^{\beta}_{\kappa}n(x;q(t)) \tag{6.3}\] that played an important role in the previous section, we also need two more. One of these additional properties is that \(\mathcal{P}^{\beta}_{\kappa}1=0\). Strictly speaking, this is only true in the circle setting, where it follows from the arguments used to prove (4.31). On the line, the constant function \(1\) does not belong to the natural domain of \(\mathcal{P}^{\beta}_{\kappa}\). We will prove a proper analogue in Lemma 6.2. The second additional property is the value of the commutator between \(\mathcal{P}^{\beta}_{\kappa}\) and the operator \(X\) corresponding to multiplication by \(x\) presented in Lemma 3.4; this is the subject of Lemma 6.3. As motivation for such preliminaries, let us now present our generalization of Gerard's explicit formula from [15]: **Theorem 6.1**.: _Let \(A>0\) and \(\kappa_{0}(A)\) satisfy Convention 4.5. Then for any \(q^{0}\in B^{s}_{A}\cap L^{2}(\mathbb{R})\) and any \(\kappa\geq\kappa_{0}\), the solution \(q(t)\) to (6.1) with initial data \(q^{0}\) satisfies_ \[q_{+}(t,z)=\tfrac{1}{2\pi i}I_{+}\Big{(}\big{(}X-t\kappa R(\kappa;q^{0})^{2}- z\big{)}^{-1}q_{+}^{0}\Big{)} \tag{6.4}\] _for all \(\operatorname{Im}z>0\)._ Although (6.4) only contains the positive frequency part of \(q(t)\) and only off the real axis, this is sufficient to recover the entire waveform; indeed, \[q(t,x)=\lim_{y\downarrow 0}\Bigl{[}q_{+}(t,x+iy)+\overline{q_{+}(t,x+iy)} \Bigr{]}\] in \(L^{2}(\mathbb{R})\) sense. Our next lemma gives the promised line analogue of the relation \(\mathcal{P}^{\beta}_{\kappa}1=0\) valid on the circle. **Lemma 6.2**.: _Let \(A>0\) and \(\kappa_{0}(A)\) satisfy Convention 4.5. 
Then_ \[\chi_{y}(x)=\tfrac{iy}{x+iy}\quad\text{satisfies}\quad\lim_{y\to\infty}\mathcal{ P}^{\beta}_{\kappa}\chi_{y}=0 \tag{6.5}\] _in \(L^{2}_{+}\)-sense uniformly for \(q\) in \(L^{2}\)-compact subsets of \(B^{s}_{A}\cap L^{2}(\mathbb{R})\)._ Proof.: Using the resolvent identity and elementary manipulations, we find that \[[\kappa R(\kappa)-1]\chi_{y} =R(\kappa)C_{+}q\kappa R_{0}(\kappa)\chi_{y}+[\kappa R_{0}(\kappa )-1]\chi_{y}\] \[=R(\kappa)q_{+}-R(\kappa)C_{+}(q-q\chi_{y})+[R(\kappa)C_{+}q+1][ \kappa R_{0}(\kappa)-1]\chi_{y}.\] As \(R(\kappa)q_{+}=m\) and \(\kappa R_{0}(\kappa)=1-R_{0}(\kappa)\mathcal{L}_{0}\), we deduce that \[\bigl{\|}[\kappa R(\kappa)-(m+1)]\chi_{y}\bigr{\|}_{L^{2}}\lesssim\bigl{\|}m(1 -\chi_{y})\bigr{\|}_{L^{2}}+\bigl{\|}q(1-\chi_{y})\bigr{\|}_{L^{2}}+\bigl{\|} \mathcal{L}_{0}\chi_{y}\bigr{\|}_{L^{2}},\] which converges to zero as \(y\to\infty\), uniformly on compact subsets of \(B^{s}_{A}\cap L^{2}(\mathbb{R})\). To complete the proof of (6.5), it remains to show that \((m+1)C_{+}(\overline{m}\chi_{y})\to 0\) in \(L^{2}\) as \(y\to\infty\). Noting that \(C_{+}(\overline{m})=0\), we find \[\|(m+1)C_{+}(\overline{m}\chi_{y})\|_{L^{2}}\lesssim\bigl{[}1+\|m\|_{H^{s+1}} \bigr{]}\|\overline{m}(1-\chi_{y})\|_{L^{2}}\to 0\quad\text{as}\quad y\to\infty,\] uniformly on compact subsets of \(B^{s}_{A}\cap L^{2}(\mathbb{R})\) Next we record another algebraic virtue of the operators \(\mathcal{P}_{\kappa}^{\beta}\), regarding their commutator properties with the operator \(X\). **Lemma 6.3**.: _Let \(A>0\) and \(\kappa_{0}(A)\) satisfy Convention 4.5 and suppose \(q\in B_{A}^{s}\) satisfies \(q\in H^{\infty}(\mathbb{R})\) and \(\langle x\rangle q\in L^{2}(\mathbb{R})\). Then_ \[[X,\mathcal{P}_{\kappa}^{\beta}]=-\kappa R(\kappa,q)^{2} \tag{6.6}\] _as operators on \(D(X)\)._ Proof.: We adopt the shorthand \(R=R(\kappa;q)\). We have \[-i\kappa[X,R]=i\kappa R[X,\mathcal{L}]R=-\kappa R^{2}-i\kappa R[X,C_{+}q]R. \tag{6.7}\] Using (3.18) for \(m\in H^{\infty}(\mathbb{R})\) and \(f\in D(X)\), we obtain \[[X,i(m+1)C_{+}(\overline{m}+1)]f =i[X,(m+1)]C_{+}(\overline{m}+1)f+i(m+1)[X,C_{+}(\overline{m}+1)]f\] \[=i[X,m]C_{+}(1+\overline{m})f\] \[=-\tfrac{1}{2\pi}m\cdot I_{+}\big{(}f+C_{+}(\overline{m}f)\big{)} \tag{6.8}\] \[=-\tfrac{1}{2\pi}Rq_{+}\cdot I_{+}\big{(}f+C_{+}(\overline{m}f) \big{)}.\] Using (3.16) and noting that \(\overline{m}f\in L^{1}\), we find \[I_{+}\big{(}C_{+}(\overline{m}f)\big{)}=\int\overline{m}f\,dx=\langle Rq_{+},f\rangle=\langle q_{+},Rf\rangle=I_{+}\big{(}C_{+}(qRf)\big{)}.\] Note that the hypothesis \(\langle x\rangle q\in L^{2}\) ensures that \(C_{+}(qRf)\in D(X)\) whenever \(f\in D(X)\). As derivatives vanish at zero frequency, we also have \[I_{+}\big{(}f\big{)}=I_{+}\big{(}(\mathcal{L}+\kappa)Rf\big{)}=\kappa I_{+} \big{(}Rf\big{)}-I_{+}\big{(}C_{+}(qRf)\big{)}.\] Employing the last two identities in (6.8) and invoking (3.18), we obtain \[[X,i(m+1)C_{+}(\overline{m}+1)]f=i\kappa R[X,C_{+}q]Rf.\] The identity (6.6) now follows by combining this with (6.7). Our last result before the proof of Theorem 6.1 ensures the propagation of the weighted decay condition \(\langle x\rangle q\in L^{2}(\mathbb{R})\) under the flow (6.1). **Lemma 6.4**.: _Let \(A>0\) and \(\kappa_{0}(A)\) satisfy Convention 4.5 and suppose \(q^{0}\in B_{A}^{s}\) satisfies \(\langle x\rangle q^{0}(x)\in L^{2}(\mathbb{R})\) and \(q^{0}\in H^{\infty}(\mathbb{R})\). Let \(q(t)\) denote the evolution of \(q^{0}\) under (6.1) with \(\kappa\geq\kappa_{0}\). 
Then_ \[\big{\|}\langle x\rangle q(t,x)\big{\|}_{L^{2}}+\big{\|}q(t,x)\big{\|}_{H^{ \sigma}}+\big{\|}\langle x\rangle[n(x;\varkappa,q(t))+\overline{n}(x;\varkappa,q(t))]\big{\|}_{L^{2}}<\infty \tag{6.9}\] _for all \(t\in\mathbb{R}\), all \(\varkappa\geq\kappa_{0}\), and all \(\sigma\in\mathbb{N}\)._ Proof.: The smoothness of solutions to (6.1) follows from (4.6) and a simple Gronwall argument. Our main focus here is on spatial decay. Combining (4.1) and its complex conjugate shows \[\big{(}|\partial|+\kappa\big{)}(m+\overline{m})=q+C_{+}(qm)+C_{-}(q\overline {m}). \tag{6.10}\] We first study the last two terms in (6.10). As \(H^{s+1}\hookrightarrow L^{\infty}\), so \[\|\langle x\rangle qm\|_{L^{2}}+\|\langle x\rangle q\overline{m}\|_{L^{2}} \lesssim\|\langle x\rangle q\|_{L^{2}}\|m\|_{H^{s+1}}.\] This shows that \(\widehat{q}\widehat{m}\in H^{1}(\mathbb{R})\) and likewise for the Fourier transform of \(q\overline{m}\). To deduce that \(\langle x\rangle[C_{+}(qm)+C_{-}(q\overline{m})]\) is square integrable, we need to confirm only that the Fourier transform has no discontinuity at the origin. This is guaranteed by the middle equality in (4.12). The arguments presented in the previous paragraph yield the quantitative bound \[\big{\|}\langle x\rangle[q+C_{+}(qm)+C_{-}(q\overline{m})]\big{\|}_{L^{2}} \lesssim\big{[}1+\|m\|_{H^{s+1}}\big{]}\|\langle x\rangle q\|_{L^{2}}.\] Noting that the commutator \([x,|\partial|]\) is \(L^{2}\) bounded, this can be combined with (6.10) to yield \[\big{\|}\langle x\rangle[m+\overline{m}]\big{\|}_{H^{1}}\lesssim\big{[}1+\|m\|_ {H^{s+1}}\big{]}\|\langle x\rangle q\|_{L^{2}}. \tag{6.11}\] This does _not_ say that \(\langle x\rangle m\in L^{2}\) because Fourier truncation will typically introduce a discontinuity at the frequency origin. Taking a derivative remedies this and we may conclude that \[\big{\|}\langle x\rangle\big{[}m+\overline{m}+|m|^{2}\big{]}^{ \prime}\big{\|}_{L^{2}} \lesssim\big{[}1+\|m\|_{L^{\infty}}\big{]}\|\langle x\rangle m^{ \prime}\|_{L^{2}}\] \[\lesssim\big{[}1+\|m\|_{H^{s+1}}\big{]}^{2}\|\langle x\rangle q \|_{L^{2}}. \tag{6.12}\] Combining (6.12) with a simple Gronwall argument shows that \(\langle x\rangle q(t,x)\in L^{2}\) for all time. Combining this with (6.11) provides the claimed bounds for the function \(n=m(x;\varkappa,q(t))\). Proof of Theorem 6.1.: We start by observing that both sides of (6.4) depend continuously on \(q^{0}\) in \(L^{2}(\mathbb{R})\). In the case of the left-hand side, this follows from the well-posedness of the flow on \(L^{2}(\mathbb{R})\). Regarding the right-hand side, we note that \(X-t\kappa R^{2}\) is also maximally accretive (with the same domain as \(X\)) and so \(X-t\kappa R^{2}-z\) is boundedly invertible on \(L^{2}\) for \(z\in\mathbb{C}\) with \(\operatorname{Im}z>0\). In this way, continuity follows from the resolvent identity. By virtue of this continuity, it suffices to verify (6.4) for the special case of initial data \(q^{0}\in H^{\infty}\) satisfying \(\langle x\rangle q^{0}\in L^{2}\). Lemma 6.4 guarantees that these properties remain true for \(q(t)\) and so allow us to apply Lemma 6.3 at all times. As \(t\mapsto\mathcal{P}^{\beta}_{\kappa}(t)\) is a continuous curve of bounded anti-selfadjoint operators, so \[\tfrac{d}{dt}U(t)=\mathcal{P}^{\beta}_{\kappa}(t)U(t)\quad\text{with}\quad U( 0)=\operatorname{Id}\] has a unique solution, which is unitary at every time. 
Moreover, by virtue of the Lax pair representation and (6.3), we know that \[U(t)^{*}R(\kappa;q(t))U(t)=R(\kappa;q^{0})\quad\text{and}\quad q_{+}(t)=U(t)q _{+}^{0}\quad\text{for all}\quad t\in\mathbb{R}. \tag{6.13}\] Fixing \(z\) with \(\operatorname{Im}z>0\), we consider two one-parameter families of bounded operators: \[Y_{1}(t):=\big{(}X-t\kappa R(\kappa;q^{0})^{2}-z\big{)}^{-1}\quad\text{and} \quad Y_{2}(t):=U(t)^{*}(X-z)^{-1}U(t). \tag{6.14}\] Both are solutions to \[\tfrac{d}{dt}Y(t)=\kappa Y(t)R(\kappa;q^{0})^{2}Y(t)\quad\text{with}\quad Y( 0)=(X-z)^{-1}. \tag{6.15}\] In the case of \(Y_{1}\), this follows immediately from the resolvent identity. For \(Y_{2}(t)\), it follows from Lemma 6.3 and (6.13): \[\tfrac{d}{dt}Y_{2}(t)=U(t)^{*}[(X-z)^{-1},\mathcal{P}^{\beta}_{ \kappa}]U(t) =-U(t)^{*}(X-z)^{-1}[X,\mathcal{P}^{\beta}_{\kappa}](X-z)^{-1}U(t)\] \[=\kappa Y_{2}(t)U(t)^{*}R(\kappa;q(t))^{2}U(t)Y_{2}(t)\] \[=\kappa Y_{2}(t)R(\kappa;q^{0})^{2}Y_{2}(t).\] A simple Gronwall argument (in operator norm) shows that (6.15) has at most one solution and consequently, \[\bigl{(}X-t\kappa R(\kappa;q^{0})^{2}-z\bigr{)}^{-1}q^{0}_{+}=U(t)^{*}(X-z)^{-1}U (t)q^{0}_{+} \tag{6.16}\] for all times. Recalling (6.13) and the Cauchy integral formula (3.17), this yields \[q_{+}(t,z)=\lim_{y\to\infty}\tfrac{1}{2\pi i}\bigl{\langle}U(t)^{*}\chi_{y}, \bigl{(}X-t\kappa R(\kappa;q^{0})^{2}-z\bigr{)}^{-1}q^{0}_{+}\bigr{\rangle}.\] To complete the proof of (6.4), it remains only to observe that \(U(t)^{*}\chi_{y}-\chi_{y}\to 0\) in \(L^{2}\) as \(y\to\infty\), uniformly for \(t\) in compact sets, which follows easily from Lemma 6.2. The proof of Theorem 6.1 shows that the mapping between the Hamiltonian and the time-dependent term in the explicit formula is actually linear. Suppose, for example, we adopt \[\sum c_{j}\beta(\kappa_{j})=\langle q_{+},\phi(\mathcal{L})q_{+}\rangle\quad \text{where}\quad\phi(E)=\sum c_{j}(E+\kappa_{j})^{-1} \tag{6.17}\] as the Hamiltonian. This admits a Lax pair representation with \(\mathcal{P}=\sum c_{j}\mathcal{P}^{\beta}_{\kappa_{j}}\). Furthermore, taking the commutator with \(X\) is also a linear operation. In this way, we find the associated explicit formula \[q_{+}(t,z)=\tfrac{1}{2\pi i}I_{+}\Bigl{(}\bigl{(}X-t\psi(\mathcal{L}_{q_{0}}) -z\bigr{)}^{-1}q^{0}_{+}\Bigr{)}\quad\text{with}\quad\psi(E)=\phi(E)+E\phi^{ \prime}(E). \tag{6.18}\] One may also allow \(\phi\equiv 1\), which leads to the Hamiltonian \(P\) generating translations and to \(\psi(\mathcal{L}_{q_{0}})=\operatorname{Id}\). In this setting, the formula (6.18) is a direct consequence of (3.17). Indeed, \(t\) is merely modifying the real part of \(z\). This parallels our discussion in the introduction of the role of \(t_{0}\) in the definition of the \(\tau\)-function. Underlining such a \(\tau\)-function interpretation is the fact that linear combinations of the functions \(1\) and \(E\mapsto\tfrac{1}{\kappa+E}\) are dense in the class of continuous functions on intervals of the form \([-E_{0},\infty]\). The linearity property described above also allows us to consider performing a \(\kappa\to\infty\) expansion of (6.4). Recall from (1.14) that this is precisely how \(\beta(\kappa)\) encodes the traditional Hamiltonians. Indeed, the (BO) flow corresponds to choosing \(\phi(E)=E\) and so to \(\psi(E)=2E\). 
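As a quick consistency check, note that the single-term choice \(\phi(E)=(E+\kappa)^{-1}\), whose Hamiltonian is \(\beta(\kappa)\) itself, reproduces the flow (6.1) treated in Theorem 6.1: \[\phi(E)=\tfrac{1}{E+\kappa}\quad\Longrightarrow\quad\psi(E)=\tfrac{1}{E+\kappa}-\tfrac{E}{(E+\kappa)^{2}}=\tfrac{\kappa}{(E+\kappa)^{2}},\qquad\text{so that}\quad\psi(\mathcal{L}_{q_{0}})=\kappa R(\kappa;q^{0})^{2},\] in agreement with the operator \(X-t\kappa R(\kappa;q^{0})^{2}\) appearing in (6.4); for \(\phi(E)=E\) one finds \(\psi(E)=E+E\cdot 1=2E\), as noted above.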
In this way, we recover the explicit formula \[q_{+}(t,z)=\tfrac{1}{2\pi i}I_{+}\bigl{\{}\bigl{(}X-2t\mathcal{L}_{q_{0}}-z \bigr{)}^{-1}q^{0}_{+}\bigr{\}} \tag{6.19}\] presented in [15]; see also [58] for the special case where \(q\) is an exact multisoliton. We turn now to our last topic. In (6.20) we introduce our extension of the notion of the center of momentum to all conserved quantities of the (BO) hierarchy, expressed through the generating function \(\beta\). The property that makes these special is that they move at a constant speed dictated by other Hamiltonians in the hierarchy. As discussed in subsection 4.3, this also generalizes the Fokas-Fuchssteiner recursion for the construction of conserved quantities. **Theorem 6.5** (Virial identities).: _Suppose \(\langle x\rangle q(x)\in L^{2}(\mathbb{R})\). Then_ \[\text{Cof}\beta(\varkappa):=\tfrac{1}{2}\int xq[n+\overline{n}]\,dx \tag{6.20}\] _satisfies_ \[\bigl{\{}\text{Cof}\beta(\varkappa),\beta(\kappa)\bigr{\}}=-\kappa\langle q_{+},R(\kappa)R(\varkappa)R(\kappa)q_{+}\rangle=-\kappa\tfrac{\partial}{\partial\kappa}\,\tfrac{\beta(\kappa)-\beta(\varkappa)}{\kappa-\varkappa}. \tag{6.21}\] Proof.: Given a pair of real-valued functions \(f,g\in L^{2}(\mathbb{R})\) with \(\langle x\rangle f(x)\in L^{2}(\mathbb{R})\), \[\langle g_{+},Xf_{+}\rangle+\langle Xf_{+},g_{+}\rangle=\int_{-\infty}^{\infty}\overline{\widehat{g}(\xi)}\cdot i\widehat{f}^{\prime}(\xi)\,d\xi=\int xf(x)g(x)\,dx. \tag{6.22}\] In this way, we see that the definition of Cof\(\beta\) may be rewritten as \[\text{Cof}\beta(\varkappa)=\tfrac{1}{2}\langle n,Xq_{+}\rangle+\tfrac{1}{2}\langle Xq_{+},n\rangle. \tag{6.23}\] Exploiting (6.3) and the antisymmetry of \(\mathcal{P}_{\kappa}^{\beta}\), we deduce that \[\big{\{}\text{Cof}\beta(\varkappa),\beta(\kappa)\big{\}}=\tfrac{1}{2}\langle n,[X,\mathcal{P}_{\kappa}^{\beta}]q_{+}\rangle+\tfrac{1}{2}\langle[X,\mathcal{P}_{\kappa}^{\beta}]q_{+},n\rangle.\] The first identity in (6.21) now follows from (6.6) and the selfadjointness of \(R(\varkappa)\). The second identity is a consequence of (4.29). By expanding the resolvent, we find that \[n=\varkappa^{-1}q_{+}-\varkappa^{-2}\mathcal{L}q_{+}+\varkappa^{-3}\mathcal{L}^{2}q_{+}\pm\cdots\] and so also that \[\text{Cof}\beta(\varkappa)=\varkappa^{-1}\text{Cof}\text{P}-\varkappa^{-2}\text{Cof}\text{E}+O(\varkappa^{-3}). \tag{6.24}\] In this way, both (4.54) and (4.57) can be recovered as elementary corollaries of (6.21) and the definition (4.12) of \(\beta\). One cannot give an exhaustive account of all possible virial-type identities associated with (BO) or its hierarchy. Our goal in this section has been to exhibit how our modified Lax representation begets dramatic algebraic simplifications. Let us offer just one more example. Consider \[\text{Vof}\text{P}(q):=\int\tfrac{1}{2}x^{2}q^{2}\,dx=\langle Xq_{+},Xq_{+}\rangle,\] which may be viewed as expressing the variance of the momentum distribution.
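For completeness, here is a formal verification of the second expression for \(\text{Vof}\text{P}\), using only (6.22) and assuming (as the commutator computations above suggest) that \(X\) acts on the Hardy space as multiplication by \(x\) followed by the projection \(C_{+}\): taking \(f=q\) and \(g=xq\), both admissible under the hypothesis \(\langle x\rangle q\in L^{2}(\mathbb{R})\), one has \(g_{+}=C_{+}(xq)=Xq_{+}\) up to zero-frequency contributions, and hence \[2\langle Xq_{+},Xq_{+}\rangle=\langle g_{+},Xq_{+}\rangle+\langle Xq_{+},g_{+}\rangle=\int x\cdot q(x)\cdot xq(x)\,dx=\int x^{2}q^{2}\,dx.\]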
By the results of this section, we find \[\Big{\{}\int\tfrac{1}{2}x^{2}q^{2}\,dx,\ \beta(\kappa)\Big{\}}=2\kappa\frac{d}{d \kappa}\text{Cof}\beta(\kappa)\] and consequently, this variance has a very simple time dependence under (6.1): \[\text{Vof}\text{P}\big{(}q(t)\big{)}=-t^{2}\big{(}\kappa\tfrac{d^{2}\beta}{d \kappa^{2}}+\kappa^{2}\tfrac{d^{3}\beta}{d\kappa^{3}}\big{)}(\kappa;q(0))+2t \kappa\tfrac{d}{d\kappa}\text{Cof}\beta(\kappa;q(0))+\text{Vof}\text{P}\big{(} q(0)\big{)}.\] This represents the generalization to the full (BO) hierarchy of an important identity from [24].
2308.16493
Expanding Frozen Vision-Language Models without Retraining: Towards Improved Robot Perception
Vision-language models (VLMs) have shown powerful capabilities in visual question answering and reasoning tasks by combining visual representations with the abstract skill set large language models (LLMs) learn during pretraining. Vision, while the most popular modality to augment LLMs with, is only one representation of a scene. In human-robot interaction scenarios, robot perception requires accurate scene understanding by the robot. In this paper, we define and demonstrate a method of aligning the embedding spaces of different modalities (in this case, inertial measurement unit (IMU) data) to the vision embedding space through a combination of supervised and contrastive training, enabling the VLM to understand and reason about these additional modalities without retraining. We opt to give the model IMU embeddings directly over using a separate human activity recognition model that feeds directly into the prompt to allow for any nonlinear interactions between the query, image, and IMU signal that would be lost by mapping the IMU data to a discrete activity label. Further, we demonstrate our methodology's efficacy through experiments involving human activity recognition using IMU data and visual inputs. Our results show that using multiple modalities as input improves the VLM's scene understanding and enhances its overall performance in various tasks, thus paving the way for more versatile and capable language models in multi-modal contexts.
Riley Tavassoli, Mani Amani, Reza Akhavian
2023-08-31T06:53:55Z
http://arxiv.org/abs/2308.16493v1
# Expanding Frozen Vision-Language Models without Retraining: Towards Improved Robot Perception ###### Abstract Vision-language models (VLMs) have shown powerful capabilities in visual question answering and reasoning tasks by combining visual representations with the abstract skill set large language models (LLMs) learn during pre-training. Vision, while the most popular modality to augment LLMs with, is only one representation of a scene. In human-robot interaction scenarios, robot perception requires accurate scene understanding by the robot. In this paper, we define and demonstrate a method of aligning the embedding spaces of different modalities (in this case, inertial measurement unit (IMU) data) to the vision embedding space through a combination of supervised and contrastive training, enabling the VLM to understand and reason about these additional modalities without retraining. We opt to give the model IMU embeddings directly over using a separate human activity recognition model that feeds directly into the prompt to allow for any nonlinear interactions between the query, image, and IMU signal that would be lost by mapping the IMU data to a discrete activity label. Further, we demonstrate our methodology's efficacy through experiments involving human activity recognition using IMU data and visual inputs. Our results show that using multiple modalities as input improves the VLM's scene understanding and enhances its overall performance in various tasks, thus paving the way for more versatile and capable language models in multi-modal contexts. keywords: Multi-modal visual language models, Robot perception, Contrastive Learning + Footnote †: journal: Computer Vision and Image Understanding ## 1 Introduction Multi-modal research in vision, audio and language has gained traction in recent years[1; 2], and now with current studies showing that Large language models (LLMs) have the capabilities of complex question answering and reasoning [3], there has been an influx of attention towards utilizing multi-modal LLMs. Recent research on vision-language models has further shown these reasoning capabilities can be extended to other modalities [4]. In this paper, we propose a method that extends frozen, pretrained visual-language models to understand inertial measurement unit (IMU) data while being extensible to any other modality. This method of extending pretrained models without retraining or finetuning reduces training costs dramatically in an era of deep learning where it has become infeasible to train most models from scratch for the majority of researchers and developers [5]. At these large sizes, models can learn abstract, generalizable reasoning skills that are difficult to replicate in smaller models [6]. Specifically, language models present a new paradigm of foundation models that offer unlimited downstream use cases, with the limitation of text being the singular modality. Vision-language models (VLMs) have allowed for images to be interwoven with text, taking advantage of the skills the base LLM learned while being trained on text. Flamingo [7] proposed a novel VLM architecture where trainable layers were injected into the frozen LLM. These new trainable layers require far less training than the base LLM while allowing the raw image embeddings to be processed in a depth-wise manner alongside the accompanying text. This results in the frozen LLM being capable of understanding image embeddings that have been resampled to match the distribution the LLM expects. 
This allows for the LLM's in-context learning capabilities to be used on images, making the model versatile and removing the need for fine-tuning to a domain-specific dataset [8]. Instead of training new layers or modules for every additional modality to be incorporated, any modality can arbitrarily be aligned to share the embedding space of the vision encoder through contrastive training. Consequently, the layers that translate vision embeddings into representations the LLM understands also work on any other modality that has been aligned with the vision embedding space. Most contrastive learning methods rely on large datasets, but with the methods we propose in this paper, even modalities with relatively few examples can sufficiently align their embedding space to the vision encoder. This idea also addresses a growing demand for larger generalist models to use any modality to enable users to take advantage of the abstract representations they have learned. As such, our main contributions in this paper are as follows: 1. A methodology that allows for the extension of frozen, pretrained vision-language models to accommodate any number of modalities, broadening their applicability and versatility. 2. An understanding of how multi-modal inputs contribute to the development of increasingly nuanced scene representations, adding depth and context to machine understanding. 3. Validated evidence that the integration of various modalities improves scene understanding, a critical aspect of machine perception and cognition. 4. A demonstration of how relatively small datasets can be used for contrastive alignment. Figure 1: Overview of the approach showing the concatenation of multiple modal representations of a scene with a query yielding better, more semantic responses. ### Robot Perception and Human Activity Recognition (HAR) The goal of this paper is to leverage VLMs for better scene understanding toward improved robotics perception, especially in human-robot interaction (HRI) scenarios. In this regard, human activity recognition (HAR) using wearable devices can help robots better perceive their environment by gaining an understanding of the type of activity in which the human counterpart is engaged. Because there already exist very competent HAR models, we choose to supply the IMU embeddings directly in the prompt to assess model performance on more granular aspects of scene understanding that are not readily extractable with pre-existing models. For practical and automated HRI applications, the HAR classification could also be retrieved from an auxiliary model and appended to the query. ## 2 Related Work HRI has garnered interest in manufacturing in recent years due to its potential to improve production efficiency. However, robots do not have the innate contextual scene understanding capability humans latently possess [9]. To remedy this issue, researchers have conducted extensive research into both robot perception [10] and robot action [11]. For robotic action, RT-2 [12] is a visual-language-action (VLA) model that leverages visual-language models (VLM) such as PaLI-X [13] to generate action commands for a robot based on a query. The model changes the backbone VLM's generated tokens to include robotic actions that are used as inputs to the low-level robotic controller. Principally, the paper shows the potential of adapting VLMs to VLAs by combining VLM pretraining with robotic data.
PaLM-E [14] is an embodied, multi-modal language model capable of executing complex robotic tasks via planning and scene understanding. However, they use a training paradigm different from the one presented in this paper whereby modality encoders are trained end-to-end with a frozen LLM as the head of the encoder, similar to [15]. Importantly, they highlight the ability of LLMs to reason on multi-modal inputs in a way similar to language. There are several other recent works, such as ImageBind [16], that integrate multiple modalities into a unified embedding space, showing how a linear combination of embeddings from multiple modalities yields a better representation. These developments highlight the capabilities of multi-modal learning and underscore the importance of exploring it further. Macaw-LLM [17] provides a new architecture for multi-modal inputs, published as a vision-audio-text LLM that introduces an alignment module mapping the embeddings of multiple modalities to a common embedding space. The benefit of what we design in this work is its ability to leverage pretrained models without the need for a new architecture or retraining of the base model or an alignment module. Works such as BLIP-2 [18] follow the same philosophy of feeding different modalities to language models in a format that they can understand and process through a specific pretraining regime. BLIP-2 combines "off the shelf" frozen image encoders and frozen LLMs and proposes a pretraining strategy that bridges the gap between modalities. [18] show a novel architecture for a light-weight HAR model designed for processing videos and trained contrastively on associated activity captions. VLMs have been employed in the past for HAR. One such instance is VicTR, a model that utilizes a joint video and text encoder to create video-conditioned text tokens for improved HAR [19]. In another study, the authors developed a HAR algorithm utilizing a wide time-domain convolutional neural network and multi-environment sensor data for daily behavior recognition while using contribution significance analysis to assess the contribution of each sensor to the detection of the activity [20]. PaLM-E's approach to integrating sensor signals as inputs in a multi-modal language model provided valuable insights into the potential capabilities of LLMs to reason on multi-modal inputs in a similar manner to language. However, they rely on a paradigm that requires the encoders to be trained end-to-end with a frozen LLM, limiting the flexibility of the system. ImageBind [16] integrates multiple modalities into a unified embedding space through contrastive alignment, bypassing the high training cost of using the LLM directly. Our work strives to develop a methodology that allows LLMs to accommodate an arbitrary number of modalities without needing a new architecture, an issue faced by works like Macaw-LLM [17]. ## 3 Methodology Figure 2 shows how raw inputs are processed through the VLM. An important distinction is that we linearly combine the image and IMU embeddings for a single example after having passed through the perceiver resampler but before they pass through the gated cross-attention layers. This linear combination of encoded representations provides the VLM with a more holistic representation of the scene. The training method that aligns the IMU encoder to the pretrained visual encoder is outlined below.
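As a concrete illustration of this combination step, the following minimal sketch shows a weighted sum of the two resampled token sets (PyTorch-style; the tensor shapes follow the resampled length and dimension described in the subsections below, and the default weighting is an illustrative assumption rather than the exact value used in this work; it must be tuned empirically, as discussed in the Results).

```python
import torch

def combine_modalities(image_tokens: torch.Tensor,
                       imu_tokens: torch.Tensor,
                       w_image: float = 0.8) -> torch.Tensor:
    """Linearly combine resampled image and IMU embeddings.

    Both inputs are outputs of the frozen perceiver resampler,
    e.g. of shape (batch, 64, 1024). The weighted sum is what the
    gated cross-attention layers of the frozen VLM then attend to.
    The 0.8 / 0.2 split is only an illustrative default.
    """
    assert image_tokens.shape == imu_tokens.shape
    return w_image * image_tokens + (1.0 - w_image) * imu_tokens

# Example usage with dummy tensors standing in for real encoder outputs.
img = torch.randn(2, 64, 1024)   # resampled vision embeddings
imu = torch.randn(2, 64, 1024)   # resampled IMU embeddings
fused = combine_modalities(img, imu)
print(fused.shape)  # torch.Size([2, 64, 1024])
```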
Figure 2: The architecture of Flamingo VLMs extended to handle image-IMU pairs of inputs.
### Dataset We use the MMAct dataset [21] which consists of 35 different human actions with varied durations of 2-10 seconds. Each sample is recorded on 4 different cameras at different points in the room. The room is also set up in 4 distinct ways with different obstacles or components. Example actions include talking on the phone, running, and pulling a cart. We concatenate all IMU signals, downsampling where necessary such that each signal is sampled at 50 Hz. Each signal provides 3 channels, and with 4 signals (two accelerometers at different parts of the body, a gyroscope, and a magnetometer), we attain a 12-channel IMU signal. We sample a window of length 256 from this data, padding with zeros when the signal is not long enough. We randomly sample a frame from the corresponding video. We use a batch size of 512 and train for a maximum of 100 epochs, early stopping once the validation loss is minimized. The total train size is 6093 examples. ### Modality Encoders In this work, we extend visual language models to understand IMU data encoded using a transformer-based encoder in combination with a 1-d convolution without retraining the visual language model. To train this encoder, we contrastively optimize temporally overlapping image-IMU pairs to have a large cosine similarity using a frozen, pretrained ViT-L/14 [22] as the visual encoder. An extension of CLIP for video, X-CLIP [23], has previously been explored by the authors for HAR [24]. The presented work seeks to show the capability of extending VLMs' understanding to multiple modalities with no retraining. Therefore, we are constrained to the frozen visual encoder the VLM was trained with. This is because as we contrastively train our IMU encoder to share the embedding space of the vision encoder, it is necessary that this shared embedding space towards which the IMU encoder optimizes is the same as the embedding space the VLM was trained to understand. Had we used a different vision encoder to contrastively train the IMU encoder, the pretrained VLM would not understand the IMU embeddings without retraining. Here, we are inspired by the work presented in ImageBind [16] to train arbitrary modality encoders to align their embeddings with a desired embedding space. ### Contrastive Pretraining Contrastive learning, a subfield of unsupervised learning, works by learning a representation of its inputs such that similar inputs result in similar vectors and dissimilar inputs yield dissimilar vectors [25]. It has been successfully applied in a variety of machine learning tasks, ranging from image and speech recognition to natural language understanding, largely due to its effectiveness in learning rich, meaningful embeddings from unlabeled data. Multi-modal contrastive learning is still an active area of research [26; 27], where the loss functions we optimize over are just beginning to be explored. When contrastively training an encoder for a modality with a temporal dimension such as IMU data, the window size is directly correlated with information content, which makes it an important hyperparameter to tune and optimize for good representation quality [28]. We utilize a symmetric cross-entropy loss objective, also known as the infoNCE loss [29; 30], in order to train our IMU encoder model. The loss maximizes the dot product of matching pairs in a batch and minimizes the dot product of negative pairs.
This was most recently popularized in a multi-modal context with CLIP [22]. \[L_{\text{infoNCE}}=-\sum_{(i,j)\in P}\log\left(\frac{e^{\text{CoSim}(z_{i},z_{j})/ \tau}}{\sum_{k=1}^{N}e^{\text{CoSim}(z_{i},z_{k})/\tau}}\right) \tag{1}\] For every pair (i,j) in set P, which represents positive pairs of data instances, we compute the cosine similarity CoSim between the respective representations \(z_{i}\) and \(z_{j}\). This similarity score is scaled by a temperature parameter \(\tau\) to control the sharpness of the distribution. The logarithm of this ratio is then computed, and the loss is the negative sum over all positive pairs. This formulation encourages the network to maximize similarity for positive pairs and minimize similarity for negative pairs, where positive pairs are defined as images and overlapping IMU windows, and negative pairs are images and non-overlapping IMU windows. We also add a supervised loss term to the loss function, mapping the embedded IMU representation to class logits with a linear head. This enforces a constraint on the embedding space that keeps embedded actions linearly separable. With the addition of this supervised loss term, we observed more specific, distinct outputs from the VLM when given IMU embeddings. Rather than computing the infoNCE and supervised losses on the outputs from the encoders, we further process both encoded representations by passing them through the frozen, pretrained perceiver resampler module. This outputs a predefined set of latent vectors that are resampled representations of the input. For our implementation, we map an encoded sequence length of 256 with dimension 1024 to a length of 64 with the perceiver resampler. We then average pool along the sequence dimension for both image and IMU embeddings to obtain 1-d vectors of size 1024 for each sample. It is with these representations we compute the infoNCE and supervised loss terms. In our empirical tests, this process of including the perceiver resampler module grounds the representation the IMU encoder learns more rigidly. We observed this in testing different iterations on an activity recognition sub-task where we prompt the VLM with only IMU embeddings to identify the action being performed. IMU encoders trained without the perceiver resampler exhibited far worse performance on this task, such that when combining the IMU embeddings with vision embeddings, worse performance could sometimes be observed. Our hypothesis for why we see better performance with this architecture is that the inclusion of the perceiver resampler strictly constrains the features learned by the IMU encoder to have a similar distribution to the features of the image encoder. When computing loss on the embeddings that are output from the encoders rather than the perceiver resampler, the loss is far noisier whereas the perceiver resampler processes embeddings of both modalities into a shared distribution. This contrastive and supervised learning objective enables the IMU encoder to learn a meaningful mapping from raw sensor data to a representative embedding space that aligns with the image encoder. Most unsupervised methods, contrastive learning included, require large amounts of data. This paper explores how a relatively small training dataset of around 6,000 image-IMU pairs can be leveraged to align an IMU encoder with a vision encoder. ### Multi-Modal Large Language Model We utilize VLMs as a high-level reasoning module to better understand a scene given various modal representations. 
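Before turning to the language model itself, the alignment objective described in the previous subsection can be summarized in a short sketch (PyTorch-style; the temperature, the loss weighting, and the variable names are illustrative assumptions rather than the exact implementation used here). The inputs are the pooled 1024-dimensional vectors obtained after the perceiver resampler and average pooling.

```python
import torch
import torch.nn.functional as F

def alignment_loss(img_pooled: torch.Tensor,    # (B, 1024) pooled, resampled image embeddings
                   imu_pooled: torch.Tensor,    # (B, 1024) pooled, resampled IMU embeddings
                   class_logits: torch.Tensor,  # (B, 35) from a linear head on the IMU embedding
                   labels: torch.Tensor,        # (B,) activity labels
                   temperature: float = 0.07,
                   ce_weight: float = 1.0) -> torch.Tensor:
    """infoNCE between temporally overlapping image-IMU pairs plus a
    supervised cross-entropy term that keeps activities linearly separable.
    The temperature and loss weighting are illustrative defaults."""
    img = F.normalize(img_pooled, dim=-1)
    imu = F.normalize(imu_pooled, dim=-1)
    logits = img @ imu.t() / temperature          # cosine similarities of all pairs in the batch
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric infoNCE: matching (overlapping) pairs sit on the diagonal.
    nce = 0.5 * (F.cross_entropy(logits, targets) +
                 F.cross_entropy(logits.t(), targets))
    ce = F.cross_entropy(class_logits, labels)    # supervised term from the linear head
    return nce + ce_weight * ce
```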
We use an implementation of the Otter VLM [31], a continuation of Open Flamingo [32], the open-sourced version of the original DeepMind paper, Flamingo [7]. Otter is further trained on an instruction dataset to support multi-modal in-context instruction tuning, which involves conditioning the language model on the corresponding media, such as an image, that corresponds to a caption or an instruction-response pair [31; 33]. This makes Otter deliver more guided outputs when we effectively design prompts. VLMs such as Otter are autoregressive decoder-only models, with image embeddings represented in the tokenizer with a special token. The image embeddings, or any other modality's embeddings, are passed through a module introduced in the original Flamingo paper called the Perceiver Resampler, which takes as input the sequence output of a transformer encoder and outputs a fixed set of resampled tokens based on a set of learnable latent queries. This allows for variably sized inputs to be mapped to the same length of tokens, and it allows the frozen, pretrained language model to resample the static image embeddings throughout the layers of the LLM. Because Otter was trained on an instruction-tuning dataset, the model learns to structure its response to follow the preceding in-context examples, which allows us to query the model's understanding of the scene. In this paper, we show that the addition of the IMU embeddings in the prompt allows the VLM to better reason about a scene and more wholly understand the activities of the humans present in the visual input. ## 4 Experiments Below, we show the capabilities of the pretrained Otter model on semantic scene understanding tasks when provided vision data, IMU data, and a combination of both. We take advantage of conditional generation by prepending our query with two example question-response pairs to update the model's output distribution to be more in line with our expectations. Figure 3 shows model responses given different combinations of input modalities. Figure 3: In-context generation with only IMU, only images, and both modalities. ## 5 Results We evaluated the effectiveness of our contrastive alignment process by mapping the embeddings of the IMU and image data for each of the 35 classes via t-distributed stochastic neighbor embedding (t-SNE), a technique used to visualize high-dimensional data in a way that shows underlying data distributions in a 2-dimensional representation that can easily be plotted [34]. Figure 4 shows the result of this visualization where each class is represented by a different color, and the clusters suggest distinctive patterns in the data. Figure 4: t-SNE visualization of video and IMU encoder embeddings across 35 classes. The video encoder embeddings display clear clusters, suggesting that the model successfully extracted meaningful patterns from the data. However, these clusters do not align with the specific activities performed as there are no clear color groupings, an outcome we anticipated. This is because the image encoder was not specifically fine-tuned for HAR. The IMU encoder embeddings lack some of the structure present in the image embeddings, suggesting that the contrastive alignment of encoders did not fully align the two models to share the exact same embedding space, but the class distribution is far more organized, which allows the model to better understand a user's actions as evidenced by the very clear color groups.
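A minimal sketch of how such a visualization can be produced is given below (scikit-learn and matplotlib; the embedding arrays and labels are placeholders standing in for the pooled encoder outputs, not the actual experimental data).

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(embeddings: np.ndarray, labels: np.ndarray, title: str) -> None:
    """Project pooled (N, 1024) encoder embeddings to 2-D and color by activity class."""
    coords = TSNE(n_components=2, perplexity=30, init="pca",
                  random_state=0).fit_transform(embeddings)
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab20", s=5)
    plt.title(title)
    plt.show()

# Dummy stand-ins for the pooled video and IMU embeddings (35 classes).
rng = np.random.default_rng(0)
video_emb = rng.normal(size=(700, 1024))
imu_emb = rng.normal(size=(700, 1024))
labels = rng.integers(0, 35, size=700)
plot_tsne(video_emb, labels, "Video encoder embeddings")
plot_tsne(imu_emb, labels, "IMU encoder embeddings")
```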
Further, the two modalities are fundamentally capturing different characteristics of the scene, which is by design, but that does mean that the embedding space of the IMU encoder will naturally have a different structure even after alignment. For example, the IMU data more closely corresponds to what a person is doing, e.g. two different people doing the same activity have more similar IMU data than images of two people doing the same activity due to the potential for different backgrounds, environments, or peripheral objects. Because the IMU data contains less total information than the associated images, there will be some breakdown in structure where the image embeddings more finely correspond to a given input. We also test how linearly combining the embeddings from both modalities changes the shared embedding space when visualized with t-SNE. In our experiments, we see that the weights used in linearly combining the two modal embeddings interpolate between the structure of the video and IMU embedding spaces. For Figure 4, we weight the vision embeddings 80% and the IMU embeddings 20%. In practice, these values must be empirically tuned to maintain the structure of the desired embedding space while gaining some smaller amount of information from the new embedding space. ImageBind [16] exploits the linear combination of vectors in multi-modal embedding spaces for the task of video retrieval, obtaining better results when using audio in conjunction with vision data. This emergent structure of grouped examples of the same activity that is present in the IMU embedding space and not in the vision embedding space indicates that the raw IMU distribution is more implicitly linked to the activity label. We view this as a feature allowing the two modalities to naturally complement one another, with the IMU data encoding the kinematic activity information that the vision encoder struggles to encode. This point can be seen in Table 1. This table shows the linear probe performance of a supervised HAR model trained on video only, IMU only, and combined video-IMU data. The IMU embeddings naturally encode information about an individual's action with far less noise than is present in an image. When combining modalities, we concatenate the output embeddings of each encoder, mapping the combined vector to class logits. This shows that the contrastive alignment of modalities can provide novel information to a pretrained model that otherwise would not be present in the unimodal data. This hypothesis warrants further investigation in future studies. ## 6 Conclusion In this paper, we have proposed a methodology to extend pretrained vision language models to any modality by only training the new encoder. We have shown the ability of a contrastive and supervised objective to sufficiently map an encoder to the pretrained vision encoder's embedding space, thereby taking advantage of the pretrained VLM. Further, we have shown how multiple modalities lead to a more robust and general scene representation and highlighted its potential in collaborative robotics. ### Future Work Future work can explore the effects of larger VLMs or VLMs with different architectures with multi-modal fine-tuning. The model size can prove to be a limitation to the quality of the model's responses; however, larger models will have longer inference times, which could prove to be an issue in different implementations.
We plan to implement multi-modal fine-tuning to models such as MiniGPT-4 [35] which uses a base Vicuna model [36] as the backbone LLM to assess and compare their capabilities with the Otter model. The MiniGPT-4 utilizes a Q-former with a frozen pretrained vision encoder compared to the gated cross-attention layers which the Flamingo model uses. Another area we plan to explore is the assessment of information quality of each modality. We hypothesize that modalities can have varying levels of generalizable information regarding the activity and plan to explore how to identify and account for these discrepancies. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Modality** & **Training Loss** & **Test Loss** & **Training Accuracy (\%)** & **Test Accuracy (\%)** \\ \hline Video & 1.0428 & 1.4748 & 65.07 & 52.41 \\ IMU & 0.4052 & 1.2468 & 91.83 & 64.47 \\ Combined & **0.2138** & **0.8753** & **94.58** & **74.46** \\ \hline \end{tabular} \end{table} Table 1: Supervised activity recognition for different modalities. Despite sharing the same embedding space, each modality still preserves unique information, as reflected in the increased performance when combining embeddings. Previous work such as ImageBind uses multi-modal embedding space arithmetic to linearly combine embeddings of different modalities to yield a more information-dense embedding vector [16]. The outcome of the presented work will be ultimately used in the context of HRI for better robot perception. The authors are currently exploring HRI scenarios in the context of construction activities where visual data from robot cameras and IMU data from wearable sensors worn by construction workers are used to enhance robot perception as seen in Figure 5. Figure 5: A researcher investigating multi-modal robot perception for human-robot collaboration. Other avenues of future research include the approach introduced in RT-2, which consists of utilizing robotic data through the VLM pretraining. We introduce the feasibility of extending VLMs to any number of modalities and experiment with the viability of implementing modality extensions on VLAs. The study of contrastive alignment of modality encoders to a shared embedding space is an avenue we plan to explore with new training objectives and data-dependent significance analysis across multi-modal representations. ### Limitations The Otter model uses MPT-7b as its backbone, making it fast for inference, but with technical limitations in the form of hallucinations. Further, because the dataset of video frame-IMU pairs used, MMAct, is relatively small, we do not attain a 1:1 representation between video and IMU data. This is expected as IMU data intrinsically has a data distribution distinct from any correlated video frames. Another drawback of extending modalities without pretraining is that the learnable model parameters have not been trained for multi-modal processing, potentially causing an increase in hallucinations. Current research indicates that poor training and low-quality training data have a direct effect on the degrees of hallucination [37]. **Conflict of Interest** The authors declare that they do not identify any personal relationships or financial ties that would affect the contents or the publishing of this paper. **Data Availability** Source code and data will be made available upon request.
**CRediT authorship contribution statement** **Riley Tavassoli**: Conceptualization, Methodology, Software, Investigation, Validation, Writing - Original Draft, Writing - Review & Editing, Visualization. **Mani Amani**: Investigation, Validation, Visualization, Writing - Original Draft, Writing - Review & Editing, Software. **Reza Akhavian**: Project administration, Funding acquisition, Writing - Review & Editing, Supervision. **Declaration of Funding** The presented work has been supported by the U.S. National Science Foundation (NSF) CAREER Award through the grant # CMMI 2047138. The authors gratefully acknowledge the support from the NSF. Any opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily represent those of the NSF.
2305.19984
Degenerations of Sheaves on Fibered Surfaces
We construct moduli stacks of stable sheaves for surfaces fibered over marked nodal curves by using expanded degenerations. These moduli stacks carry a virtual class and therefore give rise to enumerative invariants. In the case of a surface with two irreducible components glued along a smooth divisor, we prove a degeneration formula that relates the moduli space associated to the surface with the relative spaces associated to the two components. For a smooth surface and no markings, our notion of stability agrees with slope stability with respect to a suitable choice of polarization. We apply our results to compute elliptic genera of moduli spaces of stable sheaves on some elliptic surfaces.
Nikolas Kuhn
2023-05-31T16:07:55Z
http://arxiv.org/abs/2305.19984v1
# Degeneration of Sheaves on Fibered Surfaces ###### Abstract We construct moduli stacks of stable sheaves for surfaces fibered over marked nodal curves by using expanded degenerations. These moduli stacks carry a virtual class and therefore give rise to enumerative invariants. In the case of a surface with two irreducible components glued along a smooth divisor, we prove a degeneration formula that relates the moduli space associated to the surface with the relative spaces associated to the two components. For a smooth surface and no markings, our notion of stability agrees with slope stability with respect to a suitable choice of polarization. We apply our results to compute elliptic genera of moduli spaces of stable sheaves on some elliptic surfaces. ## 1 Introduction Let \(X_{0}=Y_{1}\cup_{D}Y_{2}\) be a projective surface that is the union of smooth surfaces \(Y_{i}\) along a common smooth divisor \(D\). In [11, SS1, SS4] Donaldson raises the problem of constructing a good theory of stable sheaves on such an \(X_{0}\), with the following properties. 1. It behaves well under smoothings. In other words, the numerical invariants of the theory on \(X_{0}\) agree with the invariants of the usual moduli space of Gieseker-stable sheaves on a smoothing of \(X_{0}\) when those are defined. 2. It behaves well under decomposition. More precisely, there should be spaces of "relative stable sheaves" for a pair \((Y,D)\) of a smooth surface \(Y\) with a smooth divisor \(D\), such that the moduli space of sheaves on \(X_{0}\) can be related to the relative spaces for the pairs \((Y_{i},D)\). Such a theory would enable one to compute sheaf-theoretic invariants of projective surfaces through degenerations to a reducible surface. This has been successfully implemented in other settings - notably in Gromov-Witten theory ([13], [14], [15], [16]) and Donaldson-Thomas theory (see for example [17], [18]). In this paper we answer Donaldson's questions for fibered surfaces. Let \(f:X\to C\) be a surface fibered over a marked nodal curve (see Definition 1.6 for the precise meaning). Then we propose the following definition of "stability on the fiber". **Definition 1.1** (cf. Definition 3.1).: A coherent sheaf \(E\) on \(X\) is \(f\)-stable, if it is torsion-free and 1. for every node or marked point \(x\in C\), the restriction of \(E\) to \(f^{-1}(x)\) is a slope-stable vector bundle, and 2. for every generic point \(\eta\) of an irreducible component of \(C\), the restriction of \(E\) to \(f^{-1}(\eta)\) is slope stable. Our first main result is that this notion of stability leads to well-behaved moduli spaces. This gives an answer to I for fibered surfaces. **Theorem 1.2** (cf. Theorem 3.24 and Proposition 4.4).: _Let \(d\) and \(r>0\) be coprime integers. Let \(c_{1}\in H^{2}(X,\mathbb{Z})\) be a cohomology class with \(c_{1}\cap f^{-1}(x)=d[pt]\) for any \(x\in C\). Let \(\alpha\) be a generic stability condition on \(C\). Then for any \(\Delta\in\mathbb{Z}\), there exists a proper Deligne-Mumford moduli stack_ \[M^{\alpha}_{X/C}(r,c_{1},\Delta)\] _parametrizing \(f\)-stable, \(\alpha\)-balanced sheaves of rank \(r\), first Chern class \(c_{1}\) and discriminant \(\Delta\) on expansions of \(X\). Moreover, this stack has a natural virtual fundamental class and the numerical invariants are invariant under deformations of \(X\) together with the fibration and choice of \(c_{1}\)._ Here we introduced two additional subtleties: Expansions of a degenerate surface (cf. 
Definition 2.1) and balancing with respect to a stability condition \(\alpha\) on a curve (cf. SS3.2). Our construction automatically yields a notion of relative moduli space: Putting markings \(y_{1},\dots,y_{n}\) on \(C\) corresponds to working relative to the divisor \(f^{-1}(y_{1})\cup\dots\cup f^{-1}(y_{n})\). Our second main result is a proof of II in the case that \(X=Y_{1}\cup_{F}Y_{2}\) is a union of two irreducible components which meet along a fiber \(F\) of \(f\), and so that we also have \(C=C_{1}\cup_{x}C_{2}\). We state this result somewhat informally (see Theorem 4.14 and Proposition 4.12 for more precise versions). **Theorem 1.3**.: _The enumerative invariants of moduli spaces of \(f\)-stable sheaves on \(X\) can be recovered from the relative invariants associated to the pairs \((Y_{i},F)\)._ For elliptic surfaces, the relative invariants can be determined from the absolute ones, which makes Theorem 1.3 more powerful in practice. To illustrate this, we give an application to elliptic genera. For a proper scheme \(M\) with perfect obstruction theory, let \(\operatorname{Ell}^{\operatorname{vir}}(M)\) denote its virtual elliptic genus, as defined for example in the introduction of [10]. We let also \(\phi_{0,1}(q,y)\) be the weak Jacobi form and \(\mathbf{L}(\phi_{0,1},p)\) its Borcherds lift as presented there. We recall that if \(f:X\to\mathbb{P}^{1}\) is an elliptic surface, its _degree_ is the degree of the line bundle \((R^{1}f_{*}\mathcal{O}_{X})^{\vee}\). **Theorem 1.4**.: _Let \(X\) be a degree \(e\geq 2\) elliptic surface over \(\mathbb{P}^{1}\) without multiple or reducible fibers. Let \(d\geq 1\) be minimal so that \(X\) has a \(d\)-section with divisor class \(D\). Let \(H\) be an ample line bundle on \(X\) and \([F]\) the cohomology class of a fiber. Let \(M_{X,H}(r,D,\Delta)\) denote the moduli space of \(H\)-Gieseker-stable sheaves on \(X\) of rank \(r>0\), first Chern class \(D\) and discriminant \(\Delta\). Assume that \(d\) is coprime to \(r\), and that \(H\) is chosen so that stability equals semistability for sheaves of rank \(r\) and first Chern class \(c_{1}\), and all \(\Delta\). Consider the generating series_ \[Z^{\operatorname{Ell}}_{X,r,c_{1}}(p):=\sum_{\begin{subarray}{c}\Delta\in \mathbb{Z}\\ 0\leq\ell<r\end{subarray}}\operatorname{Ell}^{\operatorname{vir}}(M_{X,H}(r,c _{1}+\ell[F],\Delta))\,p^{\dim M_{X,H}(r,c_{1}+\ell[F],\Delta)}.\] _Then_ \[Z^{\operatorname{Ell}}_{X,r,c_{1}}(p)=\sum_{n\geq 0}\operatorname{Ell}( \operatorname{Hilb}_{n}(X))\,p^{2n}=\left(\frac{1}{\mathbf{L}(\phi_{0,1},p^{2} )}\right)^{\chi(\mathcal{O}_{X})}.\] In the case \(e=2\) the surface \(X\) is a K3-surface, and the statement reduces to the rank \(1\) case, which was stated in [13] and has been proven in [1], [1]. **Remark 1.5**.: The proof of Theorem 1.4 goes through with "virtual elliptic genus" replaced by virtual cobordism to give \[Z^{\operatorname{cob}}_{X,r,c_{1}}(p)=\left(\sum_{n\geq 0}[K3^{[n]}]p^{2n} \right)^{\chi(\mathcal{O}_{X})/2},\] where the possible half-integer power is chosen to have constant coefficient one. In particular, this answers Gottsche and Kool's Conjecture 7.7 in [10] affirmatively for this class of elliptic surfaces. BackgroundOur approach is inspired by earlier constructions of Gieseker-Li [1] and Li [12], [13]: They consider moduli spaces of Simpson semistable sheaves on expansions of a degenerate surface \(X_{0}\). 
Under good conditions, this does result in moduli spaces which behave well under deformations and carry a virtual fundamental class. However, their approach does not lead to a degeneration formula, since it is unclear how to relate the moduli space of sheaves on the degeneration with the relative spaces for the pairs \((Y_{i},D)\). The problem is that Simpson stability of a sheaf on \(X_{0}\) cannot be determined by only looking at its restrictions to the \(Y_{i}\), but also depends on how the sheaves are glued in a subtle manner. By restricting to fibered surfaces, we get around this issue: The notion of \(f\)-stability is defined in terms of restriction on fibers, which can be checked on each component separately. The use of expansions is crucial for us: As in [1], it allows us to work only with sheaves that are locally free along the singular locus of \(X_{0}\). Just as importantly, we can guarantee stability on the fibers over marked points and nodes, which gives us restriction maps to the moduli spaces of stable vector bundles on curves. This is central to the decomposition result of Theorem 1.3. Since \(f\)-stability on \(X\to C\) is unchanged with respect to tensoring by a line bundle pulled back from \(C\), the moduli space of \(f\)-stable sheaves will not be separated in families when \(C\) degenerates from a smooth to a reducible curve, since the Picard scheme of \(C\) is not separated. Essentially, the issue is that in a one-parameter family of curves, the limit of the trivial line bundle doesn't need to be trivial, since one can twist by components of the special fiber. To deal with this issue, we need to restrict the possible twists of an \(f\)-stable sheaf. This is achieved by picking a stability condition \(\alpha\) on \(C\) when the base curve becomes reducible, and demanding a certain numerical balancing condition which singles out a unique choice of twist. Although \(f\)-stability at first seems unrelated to Gieseker or slope stability, it turns out that these notions agree on a smooth fibered surface, when one considers the latter with respect to a suitable choice of polarization. This is already present in the work of Yoshioka and is included here as Theorem 2.21. Moreover, due to results of Mochizuki [18], for surfaces with \(p_{g}(X)>0\), virtual enumerative invariants are independent of choice of polarization whenever stability equals semistability. Thus, for such surfaces, there is no loss of generality in considering \(f\)-stability for the computation of invariants. Structure of the paper. In §3, we introduce the definitions and constructions that go into Theorem 1.2. Since we work over a general base \(B\), we automatically obtain deformation invariance. In §4, we show the decomposition in the situation of Theorem 1.3. We include the proof of Theorem 1.4 in §4.7. In §2, we collect some material that is needed for the main constructions in the later sections. For a first reading, we suggest only taking a look at Definition 2.1 and Lemma 2.2 in §2.1, and otherwise to refer to this section only as needed. Relation to other work. Since the modern mathematical definition of Vafa-Witten invariants on algebraic surfaces by Tanaka-Thomas ([14], [14]), there has been renewed interest in the enumerative geometry of moduli spaces of sheaves on surfaces. Göttsche and Kool, in a series of works, developed many conjectures for the structure of such invariants (see [10] for an excellent overview).
However, beyond the cases of Hilbert schemes and special classes of surfaces such as rational, elliptic or K3 surfaces, very little is known - notable exceptions are results on Donaldson invariants ([15], [16]) and Blowup formulas ([13], [12]). Recently, Dominic Joyce has announced results regarding deep structure theorems for enumerative invariants of surfaces with \(p_{g}>0\), building on his theory of wall-crossing in abelian categories and his version of Mochizuki's rank reduction algorithm [17]. His result shows that the generating series of invariants are determined in terms of a number of universal power series and universal constants, and of finitely many fundamental enumerative invariants of the surface. We hope that the current work can be used to exhibit relations between, and therefore help determine his universal series. For rank one sheaves, our results reduce to a theory of Hilbert schemes on degenerations, which is treated in [10] in the case of a smooth singular locus, and was later generalized to arbitrary normal crossings degenerations in [14]. Further directions. One problem that is not addressed here is to generalize the results of §4 to the case of a non-separating node, i.e. given a fibered surface \(X\to C\) and a non-separating node \(x\) of \(C\), to describe the enumerative invariants for the moduli space of \(f\)-stable sheaves on \(X\to C\) in terms of those on \(X^{\prime}\to C^{\prime}\), where \(C^{\prime}\to C\) is the partial normalization of \(C\) at \(x\), and \(X^{\prime}=X\times_{C}C^{\prime}\). This should be possible, but requires a closer analysis of the combinatorics of the Picard scheme of \(C\). An application of this, suggested by Jørgen Rennemo, would be to obtain a \((1+1)\)-dimensional cohomological field theory by considering surfaces of the form \(F\times C\to C\), where \(F\) is kept fixed and \(C\) is an arbitrary marked nodal curve. In another direction, the constructions here should generalize beyond the case of surfaces, which we mostly require to obtain a good enumerative theory. We expect that the same methods generalize for example to give degeneration formulas for fibered Fano and Calabi-Yau threefolds. We thank Richard Thomas for pointing this out to us. Acknowledgements. The author would like to thank Nicola Pagani, John-Christian Ottem and Richard Thomas for helpful discussions during the writing of this paper. Special thanks goes to Jørgen Rennemo for suggesting this topic and for regular discussions. This research is funded by Research Council of Norway grant number 302277 - "Orthogonal gauge duality and non-commutative geometry". Notations and Conventions. * All schemes and stacks we consider will be locally Noetherian over \(\mathbb{C}\). * By a curve over a base \(B\), we mean a flat and finite type algebraic space over \(B\) with one-dimensional fibers. We drop reference to the base when \(B=\operatorname{Spec}\mathbb{C}\). * By a marked nodal curve over \(B\) we mean a curve over \(B\) with at worst nodal singularities and a finite number of sections which are disjoint and do not meet the singularities. Here, by "at worst nodal", we mean etale locally of the form \(Z(xy-f)\subseteq B\times\mathbb{A}^{2}\), where \(x,y\) are standard coordinates on \(\mathbb{A}^{2}\) and \(f\) is a function on \(B\). * Unless noted otherwise, we will assume all curves to be proper over the base and to have geometrically connected fibers.
* For (numerical) divisor classes \(D_{1},D_{2}\) on a proper algebraic surface \(X\), and more generally for classes in the second cohomology of \(X\), we denote by \((D_{1},D_{2})\in\mathbb{Q}\) their intersection product. * We will often indicate the base (resp. base changes) of a family of curves or surfaces simply by a subscript (resp. by the change thereof). In this paper, we will use the following notion of fibered surface. **Definition 1.6**.: We say \(f:X\to C\) is a fibered surface, if \(C\) is a marked nodal curve, and * \(f\) is flat and proper of dimension one with geometrically connected fibers, * \(f\) is smooth over the nodes and marked points of \(C\), * for every irreducible component \(D\subseteq C\), the scheme-theoretic pre-image \(f^{-1}(D)\subseteq X\) is a smooth projective surface. We say \(f:X_{B}\to C_{B}\) is a family of fibered surfaces over a base \(B\), if \(C_{B}\) is a marked nodal curve over \(B\) and \(f\) is flat and proper, such that \(X_{b}\to C_{b}\) is a fibered surface for every geometric point \(b\) of \(B\). ## 2 Preliminaries ### Expansions and Expanded Degenerations We define what we mean by a family of expansions of marked nodal curves and show some basic properties Let \((C_{B},\sigma_{1},\ldots,\sigma_{n})\) be a flat family of at most nodal marked curves (not necessarily assumed proper or connected) over a base \(B\). **Definition 2.1**.: Let \(T\) be a scheme. A family of _expansions of \(C_{B}\) over \(B\)_ parametrized by \(T\) is given by a morphism \(b:T\to B\), a flat family of nodal marked curves \((\widetilde{C}_{T},\widetilde{\sigma}_{1},\ldots,\widetilde{\sigma}_{n})\) over \(T\) and proper morphism of marked curves \(c:\widetilde{C}_{T}\to C_{T}\) such that * the natural map \(\mathcal{O}_{C_{T}}\to Rc_{*}\mathcal{O}_{\widetilde{C}_{T}}\) is an isomorphism, * for each \(t\in T\), we have a fiberwise isomorphism of twisted dualizing sheaves \(c^{*}\omega_{C_{t}}(\sigma_{1},\ldots,\sigma_{n})\equiv\omega_{\widetilde{C}_ {t}}(\widetilde{\sigma}_{1},\ldots,\widetilde{\sigma}_{n})\). We let \(\operatorname{Exp}_{C/B}\) denote the stack parametrizing expansions over \(B\). One can give an alternative more explicit characterization: **Lemma 2.2**.: _Let \(\widetilde{C}_{T}\) and \(C_{T}\) be flat families of nodal marked curves over a base \(T\) and let \(c:\widetilde{C}_{T}\to C_{T}\) be a morphism of marked curves over \(B\). Then condition i) of Definition 2.1 is equivalent to_ _._ * _The locus_ \(\Sigma_{c}\subset C_{T}\) _where the map_ \(c\) _is not an isomorphism is quasi-finite over_ \(T\)_. For each_ \(x\in\Sigma_{c}\)_, the scheme-theoretic fiber_ \(c^{-1}(x)\) _is a connected nodal curve of arithmetic genus zero._ _Assuming this condition holds, then ii) of Definition 2.1 is equivalent to_ * _If_ \(x\in\Sigma_{c}\) _is lying over_ \(y\in T\)_, then_ \(x\) _is either a node or a marked point in the fiber_ \(C_{y}\subset C_{T}\) _and_ \(c^{-1}(x)\) _is a chain of rational curves_ \(R_{1}\cup\dots\cup R_{j}\)_, containing two distinguished points of_ \(\widetilde{C}_{y}\)_, one of which lies on_ \(R_{1}\) _and one on_ \(R_{j}\) _(these are two nodes if_ \(x\) _is a node, or one node and one marked points if_ \(x\) _is a marked point)._ Proof.: We address the first part. By properness, \(c\) is finite over the locus where \(c\) has zero-dimensional fibers. Then condition i) holds over this locus, if and only if \(c\) is an isomorphism there. Now let \(x\in C_{T}\) be a point where \(c\) has a one-dimensional fiber. 
It follows that \(c\) has to be a contraction of components. Since formation of \(R^{1}c_{*}\) commutes with base change, vanishing of \(R^{1}c_{*}\mathcal{O}_{\widetilde{C}}\) around \(x\) is equivalent to \(c^{-1}(x)\) being of arithmetic genus zero. Taken together, this establishes that i) and i') are equivalent. We address the second part. Assume i) and i') hold. If ii) holds, then the twisted dualizing sheaf must be trivial on any components contracted by \(c\). We already know each such component is rational, so they must have precisely two distinguished points. In particular, they must be arranged as a chain. The distinguished points on the end of the chain must come from intersection with a remaining component of \(\widetilde{C}\) or from marked points. If there is a marked point, it must map to a marked point of \(C\) (in particular, there can be only one). Otherwise, the two points of intersection must come from the intersection with two branches of a node on \(C\). Conversely, we have that ii') implies ii) by direct computation. **Lemma 2.3**.: _Let \(\widetilde{C}_{T}\) and \(C_{T}\) be flat and proper families of nodal marked curves over a base \(T\) and let \(c:\widetilde{C}_{T}\to C_{T}\) be a morphism of marked curves over \(T\). Then condition i) is an open condition. On the locus where i) holds, ii) is an open condition._ Proof.: Openness of i) is straightforward. Assume that i) holds for \(c\). Then condition ii') is equivalent to vanishing of \(R^{1}c_{*}(\omega_{\widetilde{C}_{B}/B}(\sigma^{\prime}_{1}+\dots+\sigma^{\prime} _{n}))^{\otimes 2}\), which is open on the base. The following should be true without the assumption of properness, but we only consider that case for simplicity. **Proposition 2.4**.: _Let \(C_{B}\to B\) be a proper family of nodal marked curves. The stack of expansions \(\operatorname{Exp}_{C_{B}/B}\to B\) is algebraic. It is locally of finite presentation and flat over \(B\) of pure dimension zero._ The following basic lemma lets us study expansions of curves locally on the curve. **Lemma 2.5**.: _Let \(C_{1},C_{2}\) be flat families of nodal marked curves over \(B\) and suppose that we have an etale morphism \(\gamma:C_{1}\to C_{2}\) over \(B\) that induces isomorphisms of the singular and the marked loci. Then pullback along \(\gamma\) induces an isomorphism \(\operatorname{Exp}_{C_{2}/B}\to\operatorname{Exp}_{C_{1}/B}\)._ Proof.: The assumptions imply that the completions of \(C_{2}\) and \(C_{1}\) along each singular and marked locus agree. Since an expansion is an isomorphism away from these loci, the statement then follows from fpqc descent. Proof of Proposition 2.4.: Consider the stack \(\mathscr{M}\) of all nodal marked curves [10, Tag 0DSX] with universal family \(\mathscr{C}\), and on \(\mathscr{M}\times B\), consider the morphism space \(\operatorname{Hom}_{\mathscr{M}\times B}(\mathscr{C},C_{B})\), which is an algebraic stack over \(B\) locally of finite presentation [10, Tag 0DPN]. The locus that preserves the markings is closed. The locus in which i), ii) of Definition 2.1 holds is then open in this closed substack. Thus, we get \(\operatorname{Exp}_{C_{B}/B}\) as a locally closed substack, and in particular it is algebraic and locally of finite presentation over \(B\). For the remaining properties we may work locally on \(B\), and assume without loss of generality that \(B=\operatorname{Spec}R\) is the spectrum of a henselian local ring with separably closed residue field. 
Then the result follows from Lemma 2.6.

**Lemma 2.6**.: _Let \(B=\operatorname{Spec}R\) be the spectrum of a henselian local ring with separably closed residue field \(k\) and let \(C_{B}\to B\) be a proper flat family of nodal marked curves with markings \(\sigma_{1},\dots,\sigma_{n}\) and with \(q_{1},\dots,q_{r}\) the nodes of the special fiber \(C_{k}\). Then \(\operatorname{Exp}_{C_{B}/B}\to B\) is flat of relative dimension zero, and the singularities are products of pullbacks of the singularities of the form \(\mathbb{A}^{n}\to\mathbb{A}^{1}\) given by multiplication of the coordinates._

Proof.: By the etale local structure of nodes, Lemma 2.5, and our assumptions on \(B\), we may reduce to the case that \(C_{B}\) is a disjoint union of open sets of the following form: For each node \(q_{i}\), there is a morphism \(g_{i}:B\to\mathbb{A}^{1}\), such that the component \(U_{i}\) containing \(q_{i}\) is isomorphic to the pullback along \(g_{i}\) of a standard degeneration \(\mathbb{A}^{2}\to\mathbb{A}^{1}\) given by multiplication of the coordinates. Each marked point is contained in a component \(V_{j}\simeq B\times\mathbb{A}^{1}\), with the marking given by the origin of \(\mathbb{A}^{1}\). It follows that \(\operatorname{Exp}_{C_{B}/B}\) is a fiber product over \(B\) of pullbacks of the stacks \(\operatorname{Exp}_{\mathbb{A}^{2}/\mathbb{A}^{1}}\to\mathbb{A}^{1}\) and \(\operatorname{Exp}_{(\mathbb{A}^{1},0)}\to\operatorname{Spec}\mathbb{C}\). These stacks can be explicitly described (see [12, §1] and [1, §1 and §6]): The stack \(\operatorname{Exp}_{(\mathbb{A}^{1},0)}\) has a smooth cover by affine spaces \(\mathbb{A}^{n}\), while \(\operatorname{Exp}_{\mathbb{A}^{2}/\mathbb{A}^{1}}\) has a cover by affine spaces \(\mathbb{A}^{n}\), with the map to \(\mathbb{A}^{1}\) corresponding to multiplication of the coordinates. In particular, the morphism \(\operatorname{Exp}_{\mathbb{A}^{2}/\mathbb{A}^{1}}\to\mathbb{A}^{1}\) is flat with relative normal crossings singularities. 

### Moduli spaces of vector bundles on curves

We recall some results regarding the existence of universal bundles and the structure of the Picard group for moduli spaces of stable vector bundles on curves. We also derive a version of the Bogomolov-Gieseker inequality for fiber-stable sheaves on a product with \(\mathbb{P}^{1}\).

Fix coprime integers \(r,d\) with \(r>0\). Let \(F\) be a smooth projective curve of genus \(g\) and let \(M_{F}(r,d)\) denote the moduli space of rank \(r\), degree \(d\) stable vector bundles on \(F\). For a degree \(d\) line bundle \(L\) on \(F\), we let \(M_{F}(r,L)\) denote the fiber of the determinant morphism \(M_{F}(r,d)\to\operatorname{Pic}^{d}F\) over \([L]\).

**Proposition 2.7**.: _Suppose that \(g\geq 2\)._

* _For any_ \(L\in\operatorname{Pic}^{d}F\)_, the Picard group of_ \(M_{F}(r,L)\) _is canonically isomorphic to_ \(\mathbb{Z}\)_, identifying the ample generator with_ \(1\)_._
* _Let_ \(V\) _be a vector bundle on_ \(F\) _satisfying_ \(\operatorname{rk}V=r\) _and_ \(\deg V=r(g-1)-d\)_. Then for any choice of universal sheaf_ \(\mathcal{E}^{u}\) _on_ \(F\times M_{F}(r,L)\)_, the line bundle_ \[L_{V}:=(\det R\pi_{*}(\mathcal{E}^{u}\otimes p^{*}V))^{\vee}\] _is an ample generator of_ \(\operatorname{Pic}M_{F}(r,L)\)_._
* _There is a unique choice of universal bundle_ \(\mathcal{E}^{u}\) _on_ \(F\times M_{F}(r,L)\) _such that_ \(\det(\mathcal{E}^{u}|_{\{pt\}\times M_{F}(r,L)})\simeq L_{V}^{\otimes k}\) _for some integer_ \(0\leq k<r\)_. 
_This_ \(k\) _is the unique integer in these bounds that satisfies_ \(dk-rk^{\prime}=1\) _for some_ \(k^{\prime}\in\mathbb{Z}\)_. In particular, we have_ \((k,r)=1\)_.

Proof.: The first two points are Theorem B in [10]. The last point is [10, Remark 2.9]. 

Let \(L\) be a line bundle on \(\mathbb{P}^{1}\times F\) that is the pullback of a degree \(d_{0}\geq 1\) line bundle from \(F\). We have the projection \(\pi:\mathbb{P}^{1}\times F\to\mathbb{P}^{1}\). Let \(V:=L^{\otimes(r(g-1)-d)}\oplus\mathcal{O}_{\mathbb{P}^{1}\times F}^{\oplus d_{0}r-1}\), so that \(\operatorname{rank}V=rd_{0}\) and \(\deg V=d_{0}(r(g-1)-d)\). We define

\[L_{V}(E):=(\det R\pi_{*}(E\otimes V))^{\vee}.\]

A computation using Grothendieck-Riemann-Roch gives

\[\deg L_{V}(E)=d_{0}(c_{1}(E)^{2}/2-r\operatorname{ch}_{2}(E))=\frac{d_{0}}{2}\Delta(E). \tag{1}\]

We list related results:

**Lemma 2.8**.: _Let \(E\) be a rank \(r\) coherent sheaf satisfying \((\det E,F)=d\) and let \(N\) be a line bundle on \(\mathbb{P}^{1}\). Then \(L_{V}(E)\cong L_{V}(E\otimes\pi^{*}N)\)._

Proof.: One checks that \(\operatorname{rk}R\pi_{*}(E\otimes V)=0\). The lemma then follows from the projection formula and properties of the determinant. 

Let \(0\leq k<r\) be the unique integer in this range satisfying \(dk-rk^{\prime}=1\) for some \(k^{\prime}\).

**Lemma 2.9**.: _Let \(E\) be a rank \(r\) coherent sheaf on \(\mathbb{P}^{1}\times F\) such that \(c_{1}(E)\) has degree \(d\) on fibers over \(\mathbb{P}^{1}\). Then_

\[(L,c_{1}(E))\equiv k\deg L_{V}(E)\mod d_{0}r. \tag{2}\]

Proof.: We have that \(c_{1}(E)=d[\mathbb{P}^{1}\times y]+\ell[x\times F]\) for some \(\ell\in\mathbb{Z}\). Thus \(c_{1}(E)^{2}=2\ell d\), and

\[k\deg L_{V}(E)=kd_{0}rc_{2}(E)-kd_{0}(r-1)\ell d\equiv d_{0}\ell kd\equiv d_{0}\ell=(L,c_{1}(E))\mod d_{0}r.\]

**Lemma 2.10**.: _Suppose that \(E\) is a rank \(r\) torsion-free coherent sheaf on \(\mathbb{P}^{1}\times F\) such that the restriction of \(E\) to the generic fiber over \(\mathbb{P}^{1}\) is stable of degree \(d\). Then \(\Delta(E)\geq 0\), with equality if and only if \(E\) is a tensor product of the pullbacks of a stable sheaf on \(F\) and a line bundle on \(\mathbb{P}^{1}\)._

Proof.: Since \(E\) is torsion-free, it maps injectively to its double dual with zero-dimensional cokernel. We have \(\Delta(E)\geq\Delta(E^{\vee\vee})\), with equality if and only if \(E\) is locally free. By replacing \(E\) with \(E^{\vee\vee}\) if necessary, we may therefore assume that \(E\) is locally free. By Langton's procedure of elementary modifications [10], one may find a locally free subsheaf \(E^{\prime}\subset E\) whose restriction to every fiber over \(\mathbb{P}^{1}\) is stable, and such that \(E^{\prime}\) is obtained from \(E\) through successive elementary modifications along maximally destabilizing quotients of fibers. One checks that \(\Delta(E)\) strictly decreases after each such modification, so \(\Delta(E^{\prime})\leq\Delta(E)\) with equality if and only if \(E\) is already stable on every fiber. By replacing \(E\) with \(E^{\prime}\) if necessary, we may assume that this is the case. Then, after possibly tensoring \(E\) by a line bundle from \(\mathbb{P}^{1}\), we may assume that \(E\) is a pull-back of the universal sheaf along a morphism \(\nu:\mathbb{P}^{1}\to M_{F}(r,L)\) for some \(L\). If \(g=1\), this implies that \(\nu\) is constant. Otherwise if \(g\geq 2\), we have \(\Delta(E)=2k\deg\nu^{*}L_{V}\) for \(L_{V}\) as in Proposition 2.7.
Since \(L_{V}\) is ample, this implies that \(\deg\nu^{*}L_{V}\geq 0\), with equality if and only if \(\nu\) is constant. 

**Remark 2.11**.: Without the statement about the case of equality, Lemma 2.10 also follows from the Bogomolov-Gieseker inequality for Gieseker-stable sheaves in view of Theorem 2.21.

### Components of Relative Picard schemes

In this subsection, we construct a stack parametrizing connected components of the relative Picard scheme for a family of fibered surfaces. We also construct a further quotient identifying line bundles which differ by component twists. This will be used for making precise the notion of "fixing the first Chern class" in a family of fibered surfaces.

Let \(X_{B}\to C_{B}\to B\) be a family of fibered surfaces over \(B\) with structure morphism \(\pi:X_{B}\to B\). We make the following technical assumption.

**Assumption 2.12**.: The sheaf \(R^{1}\pi_{*}\mathcal{O}_{X_{B}}\) is locally free on \(B\).

**Remark 2.13**.: This always holds if \(\pi\) is representable by schemes: The family of nodal marked curves \(C_{B}\to B\) induces natural log-structures on \(B\) and \(C_{B}\), with respect to which \(C_{B}\to B\) is log-smooth [11]. Pulling back along \(X_{B}\to C_{B}\), we get a log-structure on \(X_{B}\), with respect to which \(X_{B}\to B\) is log-smooth, vertical and exact. It follows from [13, Corollary 7.1] that \(R^{1}\pi_{*}\mathcal{O}_{X_{B}}\) is locally free on \(B\) and commutes with any base change.

**Remark 2.14**.: Assumption 2.12 is likely unnecessary. One way to see this would be if one had a generalization of [13, Corollary 7.1] to log-algebraic spaces. It has been pointed out to us by Luc Illusie that the proof there should work in the more general setting, see also [12, 4.2.5]. It would be desirable to have a direct proof in our situation that doesn't rely on logarithmic geometry.

We collect some facts about the relative Picard scheme under these hypotheses:

**Theorem 2.15**.: _Suppose that Assumption 2.12 holds, for example if \(X_{B}\) is a relative scheme over \(B\)._

* _The dimension of the identity component_ \(\operatorname{Pic}^{0}_{X_{b}}\) _of the Picard scheme of the fiber_ \(X_{b}\) _is locally constant on_ \(B\)_._
* _There is an open group subscheme_ \(\operatorname{Pic}^{0}_{X_{B}/B}\subset\operatorname{Pic}_{X_{B}/B}\) _which restricts to the identity component of the Picard scheme over every point of_ \(B\)_._
* _The morphism_ \(\operatorname{Pic}^{0}_{X_{B}/B}\to B\) _is smooth._

Proof.: By Assumption 2.12, the dimension of the tangent space of the Picard scheme \(\operatorname{Pic}_{X_{b}}\) at the identity is locally constant. Since we are in characteristic zero, this implies i). Then ii) follows as in [14, Proposition 5.20]. Finally, iii) follows from the same argument as in [14, Remark 5.21] (the projectivity there is only used to invoke the GAGA theorem, which holds more generally for a proper morphism). 

Since the identity component is smooth over \(B\), we may take the quotient of the relative Picard scheme by it. We denote this by \(\mathcal{NS}_{X_{B}/B}\), the _relative Neron-Severi group_. The same discussion applies to \(C_{B}\to B\) and yields \(\mathcal{NS}_{C_{B}/B}\).
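For orientation, we record a minimal example; it is only an illustration, makes a simplifying assumption, and is not used in what follows. Take \(B=\operatorname{Spec}\mathbb{C}\) and the product fibered surface \(X=C\times F\xrightarrow{f}C\), where \(C\) and \(F\) are smooth projective curves with \(\operatorname{Hom}(\operatorname{Jac}C,\operatorname{Jac}F)=0\). Then the classical Neron-Severi groups are

\[\operatorname{NS}(X)\cong\mathbb{Z}\cdot[\{pt\}\times F]\oplus\mathbb{Z}\cdot[C\times\{pt\}],\qquad\operatorname{NS}(C)\cong\mathbb{Z},\]

and pullback along \(f\) identifies \(\operatorname{NS}(C)\) with the summand generated by the fiber class \([\{pt\}\times F]\). Thus, modulo line bundles pulled back from \(C\), a class on \(X\) retains only its degree on the fibers of \(f\); the quotients \(\overline{\mathcal{NS}}\) constructed below record exactly this kind of residual data in families.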
**Proposition 2.16**.:

* _The algebraic space_ \(\mathcal{NS}_{X_{B}/B}\) _is unramified over_ \(B\)_._
* _The algebraic space_ \(\mathcal{NS}_{C_{B}/B}\) _is etale over_ \(B\)_._
* _The sub-algebraic space_ \(\mathcal{NS}^{0}_{C_{B}/B}\) _parametrizing line-bundles with total degree zero is open and closed in_ \(\mathcal{NS}_{C_{B}/B}\)_._
* _Pull-back of line bundles induces an open and closed immersion_ \[\mathcal{NS}_{C_{B}/B}\to\mathcal{NS}_{X_{B}/B}.\]

Proof.: By construction, the morphism \(\mathcal{NS}_{X_{B}/B}\to B\) is locally of finite type and has everywhere vanishing relative Kahler differentials, which implies i). The same holds for \(\mathcal{NS}_{C_{B}/B}\), which is moreover smooth over \(B\). Hence, it is etale over \(B\), which gives ii). Point iii) follows from the local constancy of the Euler characteristic. The morphism in point iv) is well defined, since the morphism \(\operatorname{Pic}_{C_{B}/B}\to\operatorname{Pic}_{X_{B}/B}\) preserves the identity components. By Lemma 2.17 it is a closed embedding. It follows that the induced morphism \(\mathcal{NS}_{C_{B}/B}\to\mathcal{NS}_{X_{B}/B}\) is a closed embedding. Since \(\mathcal{NS}_{C_{B}/B}\) is etale over \(B\) and since \(\mathcal{NS}_{X_{B}/B}\) is unramified over \(B\), it follows that \(\mathcal{NS}_{C_{B}/B}\) is etale over \(\mathcal{NS}_{X_{B}/B}\). Any etale (even flat) monomorphism is in particular an open immersion. 

**Lemma 2.17**.: _The morphism \(\operatorname{Pic}_{C_{B}/B}\to\operatorname{Pic}_{X_{B}/B}\) induced by pullback is a closed immersion._

Proof.: A closed immersion is the same as a proper monomorphism. Since \(f_{*}\mathcal{O}_{X_{B}}=\mathcal{O}_{C_{B}}\), we have for any line bundle \(L\) on \(C_{B}\) that \(f_{*}f^{*}L=L\). The same holds after any base change on \(B\), so pullback indeed gives a monomorphism. To check the existence part of the valuative criterion for properness (the uniqueness is automatic), we may assume \(B=\operatorname{Spec}R\) for a DVR \(R\) with generic point \(\eta\) and that we are given a line bundle \(L\) on \(X_{R}\), so that \(L_{\eta}\) is isomorphic to a line bundle pulled back from \(C_{\eta}\), or equivalently so that \(f_{*}L\) is a line bundle on \(C_{\eta}\) and \(f^{*}f_{*}L\to L\) is an isomorphism over \(X_{\eta}\). We want to show that these conditions hold over all of \(C_{R}\) and \(X_{R}\) respectively. Since \(X_{R}\to C_{R}\) is generically a family of geometrically connected curves, we can conclude that \(f_{*}L\) is a line bundle and \(f^{*}f_{*}L\to L\) an isomorphism at least over the points where the fiber of \(f\) is smooth. Thus, these conditions hold except possibly over a finite set of points \(x_{1},\dots,x_{n}\) in the closed fiber \(C_{\xi}\). But, since \(X_{R}\) is Cohen-Macaulay, any locally free sheaf is determined by its restriction away from any codimension two locus. This implies that \(f_{*}L\) is locally free and \(f^{*}f_{*}L\to L\) an isomorphism everywhere. It remains to show that \(\operatorname{Pic}_{C_{B}/B}\to\operatorname{Pic}_{X_{B}/B}\) is quasi-compact. For this, we may assume that \(C_{B}\) has constant topological type over \(B\), that is, that \(C_{B}\) is a union of smooth curves over \(B\). Then one can further reduce to the case that \(C_{B}\) is smooth over \(B\). By using Chow's Lemma and possibly passing to a flattening stratification, we may assume that \(X_{B}\to C_{B}\) is projective.
In this case, quasi-compactness of \(\operatorname{Pic}_{C_{B}/B}\to\operatorname{Pic}_{X_{B}/B}\) follows from the stratification of the Picard scheme by Hilbert polynomials. 

It follows that the quotients \(\mathcal{NS}_{X_{B}/B}/\mathcal{NS}_{C_{B}/B}\) and \(\mathcal{NS}_{X_{B}/B}/\mathcal{NS}_{C_{B}/B}^{0}\) are well-defined. We denote them by \(\overline{\mathcal{NS}}_{X_{B}/C_{B}}\) and \(\overline{\mathcal{NS}}_{X_{B}/B}\) respectively. They are separated and unramified algebraic spaces over \(B\).

**Lemma 2.18**.: _Let \(\widetilde{X}_{T}\to\widetilde{C}_{T}\) be an expansion of \(X_{T}\to C_{T}\). Then pullback along \(\widetilde{X}_{T}\to X_{T}\) induces isomorphisms \(\overline{\mathcal{NS}}_{X_{T}/C_{T}}=\overline{\mathcal{NS}}_{X_{B}/C_{B}}\times_{B}T\xrightarrow{\sim}\overline{\mathcal{NS}}_{\widetilde{X}_{T}/\widetilde{C}_{T}}\) and \(\overline{\mathcal{NS}}_{X_{T}/T}=\overline{\mathcal{NS}}_{X_{B}/B}\times_{B}T\xrightarrow{\sim}\overline{\mathcal{NS}}_{\widetilde{X}_{T}/T}\)._

Proof.: We only treat the case of \(\overline{\mathcal{NS}}_{X_{T}/T}\); the other case is similar. Since the pullback map \(\operatorname{Pic}_{X_{T}/T}\to\operatorname{Pic}_{\widetilde{X}_{T}/T}\) preserves the identity components, it descends to a morphism \(\mathcal{NS}_{X_{T}/T}\to\mathcal{NS}_{\widetilde{X}_{T}/T}\). Since the total degree is preserved under pullback along \(\widetilde{C}_{T}\to C_{T}\), this descends to a morphism \(\overline{\mathcal{NS}}_{X_{T}/T}\to\overline{\mathcal{NS}}_{\widetilde{X}_{T}/T}\). We claim that this is an isomorphism. It is enough to show that each section of \(\overline{\mathcal{NS}}_{\widetilde{X}_{T}/T}\) has a unique preimage. For this, we may work etale locally on \(T\) and assume that \(\widetilde{C}_{T}\to T\) has sections meeting each irreducible component of each fiber. Suppose that \(\overline{c_{1}}\in\overline{\mathcal{NS}}_{\widetilde{X}_{T}/T}(S)\) for some \(T\)-scheme \(S\). Then, locally on \(S\) we may assume that \(\overline{c_{1}}\) is represented by some line bundle \(L\) on \(\widetilde{X}_{S}\). Up to twisting \(L\) by a line bundle \(N\) pulled back from \(\widetilde{C}_{S}\) of total degree zero, we may assume that \(L\) is pulled back from \(X_{S}\), so that \(\overline{c_{1}}\) comes from an element \(\overline{c_{1}^{\prime}}\in\overline{\mathcal{NS}}_{X_{T}/T}(S)\). Moreover, we see that \(\overline{c_{1}^{\prime}}\) is uniquely determined, since any two possible choices of \(N\) differ by a line bundle pulled back from \(C_{S}\). 

### Stability on Fibered surfaces

We collect some results of Yoshioka that allow us to compare \(f\)-stability on a smooth fibered surface (in the sense of Definition 3.1 below) with slope-stability for a suitable polarization (see Theorem 2.21).

Let \(X\) be a smooth projective surface, and \(f:X\to C\) be a surjective morphism to a curve \(C\) with connected fibers.
For a coherent sheaf \(E\) on \(X\) of rank \(r>0\), we define its _discriminant_ as

\[\Delta(E)=2rc_{2}(E)-(r-1)c_{1}(E)^{2}=c_{1}(E)^{2}-2r\operatorname{ch}_{2}(E)\in\mathbb{Z}.\]

We recall the following result of Yoshioka [20, Lemma 2.1]:

**Lemma 2.19**.: _Given an exact sequence \(0\to G_{1}\to E\to G_{2}\to 0\), where \(G_{1}\) and \(G_{2}\) have ranks \(r_{1}>0\) and \(r_{2}>0\) respectively, we have an equality_

\[\frac{1}{r}\Delta(E)=\frac{1}{r_{1}}\Delta(G_{1})+\frac{1}{r_{2}}\Delta(G_{2})-\frac{1}{rr_{1}r_{2}}\left(r_{2}c_{1}(G_{1})-r_{1}c_{1}(G_{2})\right)^{2}.\]

Proof.: By additivity of the Chern character, we have

\[\Delta(E)/r-\Delta(G_{1})/r_{1}-\Delta(G_{2})/r_{2}\]
\[= \,(c_{1}(G_{1})+c_{1}(G_{2}))^{2}/(r_{1}+r_{2})-c_{1}(G_{1})^{2}/r_{1}-c_{1}(G_{2})^{2}/r_{2}\]
\[= \,-\frac{1}{rr_{1}r_{2}}\left(r_{2}c_{1}(G_{1})-r_{1}c_{1}(G_{2})\right)^{2}.\]

Let \(F\) be the numerical divisor class of an arbitrary fiber of \(f\) and let \(H\) be an ample line bundle on \(X\). For a positive rational number \(t\), we let \(H_{t}:=H+tF\).

**Proposition 2.20**.: _Let \(D\) be a divisor satisfying \((D,F)\neq 0\). Suppose that \((D,H_{t})=0\) for some \(t>0\). Then_

\[D^{2}\leq-\frac{1}{(H,F)^{2}}(H^{2}+2t(H,F)).\]

Proof.: This is [20, Lemma 1.1]. 

Now we can prove an important consequence, which already appears e.g. in [20], although it is stated there only for elliptic surfaces.

**Theorem 2.21**.: _Fix values \(r,\Delta\in\mathbb{Z}\) with \(r>0\), and \(c_{1}\in H^{2}(X,\mathbb{Z})\), such that \(r\) and \((F,c_{1})\) are relatively prime. Then_

1. _There exists a constant_ \(C(r,c_{1},\Delta)\) _so that the collection of_ \(H_{t}\)_-semistable sheaves with rank_ \(r\)_, discriminant at most_ \(\Delta\) _and first Chern class_ \(c_{1}\) _is independent of_ \(H_{t}\) _for all_ \(t\geq C(r,c_{1},\Delta)\)_._
2. _For any_ \(t\geq C(r,c_{1},\Delta)\)_, semistability with respect to_ \(H_{t}\) _equals stability and a sheaf is stable with respect to_ \(H_{t}\) _if and only if its restriction to the generic fiber of_ \(f\) _is stable as a sheaf on a curve._

Proof.: Let \(\mu_{t}\) denote the slope with respect to \(H_{t}\). Suppose that there is a change of stability condition at \(t_{0}\), i.e. there is a sheaf \(E\) of the given invariants which is \(H_{t}\)-stable for \(t=t_{0}\), but not for bigger (resp. smaller) values of \(t\). Then we may find an exact sequence

\[0\to E_{1}\to E\to E_{2}\to 0,\]

which is destabilizing for values of \(t\) slightly bigger (resp. smaller) than \(t_{0}\), but which consists of semistable objects for \(t=t_{0}\) (take part of a HN filtration). In particular, \(\mu_{t_{0}}(E_{1})=\mu_{t_{0}}(E_{2})\). By Lemma 2.19 and the Bogomolov inequality applied to \(E_{1}\), \(E_{2}\), we have

\[\Delta(E)\geq-\frac{1}{rr_{1}r_{2}}(r_{2}c_{1}(E_{1})-r_{1}c_{1}(E_{2}))^{2}=-D^{2}/(rr_{1}r_{2}),\]

where we set \(D:=r_{2}c_{1}(E_{1})-r_{1}c_{1}(E_{2})\) and \(r_{i}\) is the rank of \(E_{i}\). The assumption that \(r\) and \((c_{1},F)\) are coprime implies that \((D,F)=r_{2}(c_{1}(E_{1}),F)-r_{1}(c_{1}(E_{2}),F)\neq 0\). We also have \((D,H_{t_{0}})=r_{1}r_{2}(\mu_{t_{0}}(E_{1})-\mu_{t_{0}}(E_{2}))=0\). Therefore, Proposition 2.20 applies to \(D\), and we find that

\[\Delta(E)\geq\frac{1}{rr_{1}r_{2}(H,F)^{2}}(H^{2}+2t_{0}(H,F))\geq\frac{H^{2}+2t_{0}(H,F)}{r^{3}(H,F)^{2}}.\]

This shows that the values that \(t_{0}\) can take are bounded above, so 1) follows. To address 2), let \(\eta\in C\) denote the generic point and \(F_{\eta}:=f^{-1}(\eta)\).
For any coherent sheaf \(G\) on \(X\) of rank \(r_{G}>0\), the slope with respect to \(H_{t}\) is

\[\mu_{t}(G)=\bigl((c_{1}(G),H)+t(c_{1}(G),F)\bigr)/r_{G}.\]

Suppose that \(E\) is semistable beyond the last wall. Then for any subsheaf \(E^{\prime}\), we have

\[\mu_{t}(E^{\prime})\leq\mu_{t}(E)\]

for sufficiently large \(t\). Dividing by \(t\) and taking the limit as \(t\to\infty\) gives

\[\mu(E^{\prime}|_{F_{\eta}})=(c_{1}(E^{\prime}),F)/r^{\prime}\leq(c_{1}(E),F)/r=\mu(E|_{F}).\]

By the coprimeness assumption, this inequality is strict whenever \(0<r^{\prime}<r\); therefore the restriction of \(E\) to \(F_{\eta}\) is stable. Conversely, assume that the restriction of \(E\) to the generic fiber of \(f\) is stable. Then for any subsheaf \(E^{\prime}\subset E\) of rank \(0<r^{\prime}<r\), we know that \((c_{1}(E^{\prime}),F)/r^{\prime}<(c_{1}(E),F)/r\). Let \(\mu_{0,max}(E)\) be the maximum value of \(\mu_{0}\) of a lower rank subsheaf of \(E\), and let \(\mu_{max}(E|_{F})<\mu(E|_{F})\) be the maximal slope of a non-zero lower rank subsheaf of \(E|_{F_{\eta}}\). Let \(E^{\prime}\subset E\) be an arbitrary lower rank subsheaf. Then we have

\[\mu_{t}(E^{\prime})=(c_{1}(E^{\prime}),H)/r^{\prime}+t(c_{1}(E^{\prime}),F)/r^{\prime}\leq\mu_{0,max}(E)+t\mu_{max}(E|_{F}),\]

and the right hand side is strictly smaller than \(\mu_{t}(E)=\mu_{0}(E)+t\mu(E|_{F})\) for sufficiently large \(t\), independent of \(E^{\prime}\). Therefore, there is no destabilizing subsheaf for sufficiently large \(t\). It follows from what we have shown that \(H_{t}\)-semistability of \(E\) for \(t\gg 0\) is equivalent to \(H_{t}\)-stability and equivalent to stability of \(E|_{F_{\eta}}\) as desired. 

## 3 Moduli Spaces of sheaves on fibrations

Throughout this section, we fix coprime integers \(r,d\) with \(r>0\). Recall that we assume all curves to be proper and connected.

### Stacks of fiber-stable sheaves

Let \(f:X\to C\) be a fibered surface over a nodal marked curve \(C\) as defined in Definition 1.6. Recall that this implies that fibers of \(f\) over nodes, marked points and generic points of components of \(C\) are smooth projective curves. We define stability of sheaves relative to a fibration.

**Definition 3.1**.: Let \(\widetilde{C}\to C\) be an expansion of \(C\) and let \(\widetilde{X}:=X\times_{C}\widetilde{C}\). A torsion-free coherent sheaf \(E\) of rank \(r\) and with fiber-degree \(d\) on \(\widetilde{X}\) is called \(f\)_-stable_, if it satisfies the following conditions:

* The sheaf \(E\) is locally free at the fibers of \(\widetilde{X}\to\widetilde{C}\) over singular and marked points.
* For any generic point \(\eta\) of \(\widetilde{C}\), the restriction of \(E\) to the fiber \(\widetilde{X}_{\eta}\) over \(\eta\) is slope stable.
* For any marked or singular point \(c\) of \(\widetilde{C}\), the restriction of \(E\) to the fiber over \(c\) is slope stable.

We extend this notion to arbitrary families.

**Definition 3.2**.: Let \(f:X_{B}\to C_{B}\) be a family of fibered surfaces over some base \(B\). Let \(T\) be a \(\mathbb{C}\)-scheme. A _family of \(f\)-stable sheaves on an expansion of \(X_{B}\) over \(B\)_, valued in \(T\), is given by the following pieces of data

* A morphism \(T\to B\) with pullbacks \(X_{T}\to C_{T}\),
* An expansion \(c:\widetilde{C}_{T}\to C_{T}\) with associated fibered surface \(\widetilde{X}_{T}:=X_{T}\times_{C_{T}}\widetilde{C}_{T}\to\widetilde{C}_{T}\).
* A \(T\)-flat coherent sheaf \(E_{T}\) on \(\widetilde{X}_{T}\) such that, for every \(t\in T\), the fiber \(E_{t}\) is \(f\)-stable for \(X_{t}\to C_{t}\).

Let \(f:X_{B}\to C_{B}\) be a family of fibered surfaces over some base \(B\).
**Proposition 3.3**.: _There is an algebraic stack \(\mathcal{M}_{X_{B}/C_{B}}(r,d)\) over \(B\) parametrizing \(f\)-stable sheaves of rank \(r\) and fiber-degree \(d\) on expansions of \(X_{B}\) over \(B\)._

**Remark 3.4**.: We make this explicit in the case \(B=\operatorname{Spec}\mathbb{C}\). Say \(X\to C\) is a fibered surface. Then a \(\mathbb{C}\)-point of \(\mathcal{M}_{X_{B}/C_{B}}(r,d)\) is given by a pair \((c,E)\), where \(c:\widetilde{C}\to C\) is an expansion of \(C\) and \(E\) an \(f\)-stable rank \(r\) sheaf on the induced \(\widetilde{X}:=X\times_{C}\widetilde{C}\) with fiber-degree \(d\). We will sometimes write this as \((\widetilde{X},E)\). An automorphism of the pair \((\widetilde{X},E)\) is a pair \((g,\gamma)\), where \(g:\widetilde{X}\to\widetilde{X}\) is an automorphism that commutes with the contraction to \(X\), and \(\gamma:g^{*}E\to E\) is an isomorphism.

Proof.: We work over the relative stack of expansions \(\operatorname{Exp}_{C_{B}/B}\). Let \(c:\mathcal{C}\to C_{B}\) denote the universal expansion, and let \(\mathcal{X}:=X_{B}\times_{C_{B}}\mathcal{C}\to\mathcal{C}\) denote the induced family of fibered surfaces over \(\operatorname{Exp}_{C_{B}/B}\). Let \(\mathscr{M}(r,d)\) denote the stack of all torsion-free coherent sheaves of rank \(r\) and fiber-degree \(d\) on \(\mathcal{X}\) over \(\operatorname{Exp}_{C_{B}/B}\). Then one can see that the locus of those sheaves satisfying i)-iii) of Definition 3.1 is open. Thus we get \(\mathcal{M}_{X_{B}/C_{B}}(r,d)\subseteq\mathscr{M}(r,d)\) as the open locus of \(f\)-stable sheaves. 

**Proposition 3.5**.: _The morphism \(\mathcal{M}_{X_{B}/C_{B}}(r,d)\to B\) satisfies the existence part of the valuative criterion of properness._

Proof.: We may assume that \(B=\operatorname{Spec}R\) for a DVR \(R\) with generic point \(\eta\), that \(\widetilde{C}_{\eta}\to C_{\eta}\) is a given expansion of \(C_{B}\) over \(\eta\) and that \(E_{\eta}\) is an \(f\)-stable sheaf on \(\widetilde{X}_{\eta}:=\widetilde{C}_{\eta}\times_{C}X\). Our goal is to show that we can extend this to a family of \(f\)-stable sheaves, possibly after passing to some extension of \(R\). Let \(\xi\) denote the closed point of \(\operatorname{Spec}R\).

Case 1: \(\widetilde{C}_{\eta}\) is smooth. This implies that \(\widetilde{C}_{\eta}\to C_{\eta}\) is an isomorphism. Then the desired result follows from Proposition 3.6 below.

Case 2: \(\widetilde{C}_{\eta}\) is of compact type. We proceed by induction on the number of components of \(\widetilde{C}_{\eta}\). If there is only one component, we are in Case 1. Otherwise, we may decompose \(\widetilde{C}_{\eta}=\widetilde{C}_{\eta}^{1}\cup_{q}\widetilde{C}_{\eta}^{2}\) along any chosen node \(q\). We add a marked point \(q_{i}\) on each \(\widetilde{C}_{\eta}^{i}\) where the node was. Then we can find limiting families for the restrictions of \(E\) over \(\widetilde{C}_{\eta}^{1}\) and \(\widetilde{C}_{\eta}^{2}\). In order to glue the total family back together over \(R\), we need to extend the isomorphism of \(E^{1}|_{q_{1}}\) and \(E^{2}|_{q_{2}}\) from \(\eta\) over all of \(\operatorname{Spec}R\). But since these are fiberwise isomorphic families of stable sheaves, there exists such an extension, possibly after twisting one of \(E^{1}\) or \(E^{2}\) by a multiple of the divisor \(\widetilde{X}^{i}_{\xi}\).

Case 3: \(\widetilde{C}_{\eta}\) is arbitrary. We do an induction on the first Betti number of the dual graph of \(\widetilde{C}_{\eta}\). If it is zero, we are in Case 2.
Otherwise, we may choose a non-separating node \(q_{\eta}\) on \(\widetilde{C}_{\eta}\) and take a partial normalization \(\nu:\widetilde{C}^{\prime}_{\eta}\to\widetilde{C}_{\eta}\) around \(q_{\eta}\), while remembering the preimage of the node by adding markings \(q_{\eta,1}\) and \(q_{\eta,2}\). Then by our inductive hypothesis, we can find some completion of \(\widetilde{C}^{\prime}_{\eta}\) and \(\nu^{*}E_{\eta}\). It remains to show that we can glue together the total family along the markings \(q_{1}\) and \(q_{2}\). This is always possible after expanding the special fiber by, say, blowing up once along \(q_{1}\) and then twisting by some integer multiple of the exceptional component. 

**Proposition 3.6**.: _Let \(R\) be a DVR with generic point \(\eta\) and let \(X_{R}\to C_{R}\to\operatorname{Spec}R\) be a family of fibered surfaces over \(\operatorname{Spec}R\) with \(C_{\eta}\) smooth over \(\eta\). Let \(E_{\eta}\) be an \(f\)-stable sheaf of rank \(r\) and fiber degree \(d\) on \(X_{\eta}\). Then, after possibly performing a base change on \(R\), we can find an expansion \(\widetilde{C}_{R}\to C_{R}\) which is an isomorphism over \(\eta\) and an \(f\)-stable sheaf \(E_{R}\) on \(\widetilde{X}_{R}\)._

Proof.: We may modify \(C_{R}\) by repeatedly blowing up singular points to obtain a family with regular total space [12, Tag 0CDE], which will automatically be an expansion of \(C_{R}\). Without loss of generality, we may therefore assume that \(C_{R}\) is nonsingular. Let \(\xi\) denote the closed point of \(\operatorname{Spec}R\). By the argument in [10, Proof of the last statement of Proposition 3.3], after passing to some extension of \(R\) and further expanding \(C_{R}\) over the special fiber, we can assume that \(E_{\eta}\) extends to a torsion-free coherent sheaf \(E_{R}\) on \(X_{R}\) with the following properties:

* the restriction of \(E_{R}\) to \(X_{\xi}\) is torsion-free,
* the sheaf \(E_{R}\) is locally free along the fibers of \(X_{\xi}\to C_{\xi}\) over singular and marked points.

Let \(P(C_{\xi})\) denote the collection of singular, marked and generic points of \(C_{\xi}\). Unless \(E_{R}\) is \(f\)-stable, there exists \(x\in P(C_{\xi})\) such that the restriction \(E_{x}\) of \(E_{R}\) to the curve \(f^{-1}(x)\) is unstable. Let \(a\) be the minimum of the values \(\mu_{min}(E_{x})\) for \(x\in P(C_{\xi})\), where \(\mu_{min}\) denotes the minimal slope in a Harder-Narasimhan filtration. Let \(\rho\) be the maximum rank of a maximally destabilizing quotient of \(E_{x}\), ranging over those \(x\in P(C_{\xi})\) for which \(\mu_{min}(E_{x})=a\). We claim that after taking a base change on \(R\) and a further expansion of \(C_{R}\), we may find a different completion of \(E_{\eta}\) such that either \(a\) increases or \(a\) stays the same and \(\rho\) decreases. Since there are only finitely many possible values for \(\rho\), and since the set of possible values of \(a\) lies in the discrete set \(\frac{1}{r!}\mathbb{Z}\) and is bounded above by \(d/r\), we find that after doing so finitely many times, we end up with an \(f\)-stable extension of \(E_{\eta}\). To prove this claim, consider the relative Quot-scheme

\[q:\operatorname{Quot}_{E_{R},X_{R}/C_{R}}(\rho,a)\to C_{R},\]

parametrizing quotients of \(E_{R}\) of rank \(\rho\) and slope \(a\) on the fibers of \(X_{R}\to C_{R}\). By assumption, the image of \(q\) contains some points of \(P(C_{\xi})\).
From the minimal choice of \(a\) and \(\rho\), it follows from an analysis of the relative deformation space that at each such point the map \(q\) is locally a closed embedding (see the proof of Theorem 5 in [11]). By repeatedly blowing up \(C_{R}\) along the marked points of the special fiber, we may assume that around each marked point, the image of \(q\) is supported on \(C_{\xi}\). By doing a base change on \(R\) and resolving the pullback of \(C_{R}\), we may assume the same holds around any node in \(C_{\xi}\). Under these assumptions, it follows that there is a closed subscheme \(Z\subset C_{R}\), whose associated points are all in \(P(C_{\xi})\) and such that around each of its associated points, it agrees with the closed subscheme defined by the Quot-scheme. If \(Z\) contains a component \(C_{i}\) of \(C_{\xi}\), we may replace \(E_{R}\) by the elementary modification of \(E_{R}\) along a maximally destabilizing quotient on that component. By the minimal choice of \(a\) and \(\rho\), the resulting sheaf will still be locally free at fibers over marked points and nodes. This has the effect of dividing the ideal sheaf of \(Z\) by the uniformizer of that component. Thus, if \(Z\) is locally principal, one can do so until \(Z\) becomes empty, in which case there is no more point in \(P(C_{\xi})\) with maximally destabilizing quotient of slope \(a\) and rank \(\rho\). In case \(Z\) is not locally principal, one can use Lemma 3.7 to find an extension \(R^{\prime}\) of \(R\) and an expansion \(c:\widetilde{C}_{R^{\prime}}\to C_{R^{\prime}}\) over \(R^{\prime}\) which is trivial over the generic fiber, such that the scheme-theoretic preimage of \(Z\) in \(\widetilde{C}_{R^{\prime}}\) is principal. Since the relative Quot-scheme is compatible with pullback, this reduces us to the case that \(Z\) is principal, which we already treated. 

**Lemma 3.7**.: _Let \(C_{R}\to\operatorname{Spec}R\) be a nodal marked curve over a DVR \(R\) with closed point \(\xi\). Suppose that the generic fiber is smooth and that \(C_{R}\) is regular. Let \(Z\subset C_{R}\) be a closed subscheme supported on \(C_{\xi}\) whose associated points are singular, marked or generic points of components of the special fiber. Then there is an extension of DVRs \(R\subset R^{\prime}\) and an expansion \(c:\widetilde{C}_{R^{\prime}}\to C_{R^{\prime}}\) which is trivial over the generic point of \(R^{\prime}\) such that \(c^{-1}(Z)\) has no marked points or nodes of \(\widetilde{C}_{\xi}\) as associated points._

Proof.: We first argue that by repeatedly blowing up at marked points, one can achieve that the preimage of \(Z\) is principal around each marked point. Indeed, for a local calculation around the marked point \(x\) we may assume that \(Z\) is supported at \(x\), and that the family is locally given by \(\operatorname{Spec}R[t]\to\operatorname{Spec}R\), with the section given by \(t=0\). Let \(\pi\) be a uniformizer of \(R\). Let \(Z_{\pi}\) and \(Z_{t}\) denote the intersections of \(Z\) with the loci \((\pi=0)\) and \((t=0)\) respectively. We claim that the invariant \(\ell(Z_{\pi})+\ell(Z_{t})\) decreases for the preimage of \(Z\) on the blowup. Indeed, the new marked point on the blowup is cut out by coordinates \(\pi,u\), where \(t=\pi u\). There exist elements \(g_{1}=t^{a}+\pi f_{1}\) and \(g_{2}=\pi^{b}+tf_{2}\) in the defining ideal \(I_{Z}\) of \(Z\), where \(a=\ell(Z_{\pi})\) and \(b=\ell(Z_{t})\). Let also \(k\) be maximal, so that \(I_{Z}\subseteq(\pi,t)^{k}\).
Thus the preimage \(q^{-1}Z\) of \(Z\) under the blowup morphism \(q\) contains the exceptional divisor to order at least \(k\). Note that \(1\leq k\leq a,b\). Let \(\widetilde{Z}\) be the non-principal part of \(q^{-1}Z\) at the marked point. Then by a direct computation, one has

\[\ell(\widetilde{Z}_{\pi})\leq k,\qquad\ell(\widetilde{Z}_{u})\leq b-k.\]

In particular, \(\ell(\widetilde{Z}_{\pi})+\ell(\widetilde{Z}_{u})\leq b<\ell(Z_{\pi})+\ell(Z_{t})\). Essentially the same argument works for \(x\) a node in \(C_{\xi}\), where one has parameters \(s,t\) locally cutting out the components of \(C_{\xi}\) at \(x\). Here, one needs to repeatedly blow up the nodes in the reduced preimage of \(C_{\xi}\). This yields a modification \(\widehat{C}_{R}\to C_{R}\), which principalizes \(Z\) and is an isomorphism over the generic fiber, but where the fiber \(\widehat{C}_{\xi}\) may have non-reduced components. After taking a ramified extension \(R^{\prime}\) of \(R\) with sufficiently divisible degree, taking the normalization of \(\widehat{C}_{R^{\prime}}\), and resolving the singularities through repeated blow-ups, we obtain an expansion \(\widetilde{C}_{R^{\prime}}\to C_{R^{\prime}}\) with the desired properties. 

### Fixing twists from the base

In the last subsection, we saw that the moduli stack \(\mathcal{M}_{X_{B}/C_{B}}(r,d)\) satisfies the existence part of the valuative criterion of properness. To get a proper moduli space, we need to introduce a further numerical stability condition, which fixes twists by line bundles from \(C_{B}\). For stability of line bundles on a curve, we make heavy use of ideas from [1] and [1]. Here, we fix \(g\geq 0\) and consider only fibered surfaces \(f:X\to C\) whose fibers have arithmetic genus \(g\).

For a marked nodal curve \(C\) over a field, let \(\operatorname{Irr}(C)\) denote the set of irreducible components of \(C\). For a line bundle \(N\) on \(C\), we use \(\deg N\) to denote the total degree, and \(\underline{\deg}\,N\) to denote the component-wise degree, which is a function on \(\operatorname{Irr}(C)\).

**Definition 3.8**.:

1. Let \(C\) be a marked nodal curve over a field. A _stability condition_ on \(C\) is a map \(\alpha:\operatorname{Irr}(C)\to\mathbb{R}\). We define the _degree_ of \(\alpha\) as \(\sum_{D\in\operatorname{Irr}(C)}\alpha(D)\).
2. Let \(C_{B}\to B\) be a family of marked nodal curves. A _stability condition_ on \(C_{B}\) over \(B\) is given by a collection of stability conditions \((\alpha_{x})\) for each field-valued point \(x\) of \(B\), which are compatible in the following sense: If \(\eta\) specializes to \(\xi\) in \(B\), there is an induced surjective morphism \(\operatorname{Irr}(C_{\xi})\to\operatorname{Irr}(C_{\eta})\). We require that for each \(D\in\operatorname{Irr}(C_{\eta})\), we have that \(\alpha_{\eta}(D)\) is equal to the sum of \(\alpha_{\xi}(D^{\prime})\) for \(D^{\prime}\) mapping to \(D\).

**Remark 3.9**.: It follows that for a family of curves over a finite type base \(B\), a stability condition is uniquely defined by its values on the most degenerate strata. For example, if \(B\) is the spectrum of a DVR, then giving a stability condition on \(C_{B}\) is the same as giving one over the special fiber.

Let \(X\to C\) be a given fibration over a nodal marked curve and let \(E\) be an \(f\)-stable sheaf on an expansion \(\widetilde{X}\to\widetilde{C}\).

**Definition 3.10**.:

1. We say that a component of \(\widetilde{C}\) (resp. of \(\widetilde{X}\)) is _exceptional_ if it is contracted by \(\widetilde{C}\to C\) (resp. by \(\widetilde{X}\to X\)).
2. 
We say that the expansion \(\widetilde{X}\) is _minimal_ if there is no intermediate expansion \(\widetilde{X}\to\widetilde{X}^{\prime}\to X\), with \(\widetilde{X}\to\widetilde{X}^{\prime}\) not an isomorphism, such that \(E\) is isomorphic to a pullback from \(\widetilde{X}^{\prime}\).

We use the following abuse of notation: Let \(\widetilde{C}\to C\) be an expansion, and \(\alpha\) a stability condition on \(C\). For \(D\subset\widetilde{C}\) an irreducible component, we set

\[\alpha(D):=\begin{cases}0,&\text{ if $D$ is exceptional;}\\ \alpha(c(D))&\text{ if $D$ maps isomorphically to its image.}\end{cases}\]

For the rest of this subsection, let \(f:X_{B}\to C_{B}\) be a family of fibered surfaces with genus \(g\) fibers over a base \(B\). Let \(L_{0}\) be a line bundle on \(X_{B}\) which has positive degree \(d_{0}\) on each fiber over \(C_{B}\). Let \(0\leq k<r\) be the unique integer such that \(kd-rk^{\prime}=1\) for some \(k^{\prime}\in\mathbb{Z}\). Define \(W:=L_{0}^{\otimes((g-1)r-d)}\oplus\mathcal{O}_{X_{B}}^{\oplus d_{0}r-1}\). For any coherent sheaf \(E\) on \(X_{B}\) of finite cohomological dimension, consider the line bundle

\[M(E):=\frac{\det Rf_{*}((\det E)\otimes L_{0})}{\det Rf_{*}(\det E)}\otimes(\det Rf_{*}(E\otimes W))^{\otimes k}\,. \tag{3}\]

We similarly define \(M(E)\) for any expansion of \(X_{B}\) by pulling back \(L_{0}\).

**Remark 3.11**.: This definition is chosen so that \(M\) has the following two properties, which is all that we will use in what follows:

1. (Nonzero weight) For any line bundle \(N\) on \(C_{B}\), we have \[M(E\otimes f^{*}N)=M(E)\otimes N^{d_{0}r}.\]
2. (Normalizable on exceptional components) Let \(E\) be an \(f\)-stable sheaf on an expansion \(\widetilde{X}\to\widetilde{C}\) of \(f\) over any geometric point of \(B\). Then there exists a line bundle \(L_{1}\) such that \(M(E\otimes L_{1})\) has degree zero on every exceptional component of \(\widetilde{C}\).

The first property can be seen directly from Grothendieck-Riemann-Roch (see the sketch following Definition 3.13). The second is a consequence of Lemma 2.9.

We let \(\alpha\) be a stability condition on \(C_{B}\) over \(B\). For a subcurve \(Z\subset C\) of a nodal marked curve, we let \(Z^{c}\) denote the "complementary" subcurve formed by the union of components of \(C\) not contained in \(Z\).

First, we consider the case where \(X\to C\) is a fibered surface over a field base \(B\).

**Definition 3.12**.: Let \(\widetilde{X}\to X\) be an expansion. We say that an \(f\)-stable sheaf \(E\) on \(\widetilde{X}\) is \(\alpha\)_-balanced_ if

1. for each proper sub-curve \(\emptyset\subsetneq Z\subsetneq\widetilde{C}\) of \(\widetilde{C}\) we have: \[\left|\frac{\deg M(E)|_{Z}}{rd_{0}}-\sum_{D\subset Z}\alpha(D)\right|\leq\frac{\#(Z\cap Z^{c})}{2},\tag{4}\]
2. for each exceptional component \(D\subset\widetilde{C}\), the degree \(\deg M(E)|_{D}\) is non-negative.

We say that \(E\) is _strictly_ \(\alpha\)-balanced if, moreover, whenever equality holds in (4), one of \(Z\) or \(Z^{c}\) is a union of exceptional components.

One can see that (strict) \(\alpha\)-balancedness is an open condition in families of \(f\)-stable sheaves, so this definition gives a well-behaved moduli functor for families over a general base \(B\).

**Definition 3.13**.: We let \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,d)\subseteq\mathcal{M}_{X_{B}/C_{B}}(r,d)\) denote the open sub-stack consisting of \(\alpha\)-balanced sheaves on minimal expansions.
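For the reader's convenience, here is a sketch of the computation behind property 1 of Remark 3.11; it is only a routine verification, using the projection formula and the assumption that Euler characteristics along the fibers of \(f\) (curves of arithmetic genus \(g\)) are computed by Riemann-Roch, and it is the form in which the property is used in the proof of Proposition 3.14 below. If \(G\) is a sheaf on \(X_{B}\) whose restriction to the fibers of \(f\) has rank \(\rho\) and degree \(e\), and \(N\) is a line bundle on \(C_{B}\), then

\[\det Rf_{*}(G\otimes f^{*}N)\cong\det Rf_{*}(G)\otimes N^{\otimes\chi},\qquad\chi=e+\rho(1-g).\]

Replacing \(E\) by \(E\otimes f^{*}N\) in (3) replaces \(\det E\) by \((\det E)\otimes f^{*}N^{\otimes r}\), so the numerator of the first factor picks up \(N^{\otimes r(d+d_{0}+1-g)}\), the denominator picks up \(N^{\otimes r(d+1-g)}\), and the quotient therefore contributes \(N^{\otimes rd_{0}}\). For the last factor, \(W\) was chosen exactly so that the fiberwise Euler characteristic vanishes:

\[\chi\bigl(E\otimes W|_{f^{-1}(c)}\bigr)=\bigl(d_{0}rd+rd_{0}((g-1)r-d)\bigr)+d_{0}r^{2}(1-g)=0,\]

so \(\det Rf_{*}(E\otimes W)\) is unchanged by the twist. Altogether, \(M(E\otimes f^{*}N)=M(E)\otimes N^{\otimes d_{0}r}\).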
**Proposition 3.14**.: _The stack \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,d)\) satisfies the existence part of the valuative criterion of properness._

Proof.: Let \(R\) be a DVR with generic and closed points \(\eta\) and \(\xi\), and with a given morphism \(\operatorname{Spec}R\to B\). Let \(E_{\eta}\) be a sheaf on an expansion \(\widetilde{X}_{\eta}\to\widetilde{C}_{\eta}\) of \(X_{\eta}\to C_{\eta}\), such that \(E_{\eta}\) is \(f\)-stable and strictly \(\alpha\)-balanced. By Proposition 3.5, we can find _some_ extension \(\widetilde{X}_{R}\to\widetilde{C}_{R}\) of this data to an \(f\)-stable family \(E_{R}\) (possibly after replacing \(R\) by an extension). We claim that we can modify this data to obtain an \(\alpha\)-balanced bundle on a minimal expansion. For this, pick a line bundle \(L_{0}\) on \(\widetilde{C}_{R}\) (not to be confused with the line bundle \(L_{0}\) on \(X_{B}\) fixed above) that has degree zero on exceptional components of \(\widetilde{C}_{\xi}\) and such that \(M(E_{R})\otimes L_{0}\) has degree a multiple of \(rd_{0}\) on each component of \(C_{\xi}\) (this may require further extending \(R\)). Then by Lemma 2.9 we can pick \(L_{1}\) on \(\widetilde{C}_{R}\) such that \(\underline{\deg}\,\bigl(L_{1}^{\otimes rd_{0}}\bigr)|_{\widetilde{C}_{\xi}}=\underline{\deg}\,\bigl(M(E_{R})\otimes L_{0}\bigr)|_{\widetilde{C}_{\xi}}\). By changing \(L_{0}\), we may in fact assume without loss of generality that \(L_{1}^{\otimes rd_{0}}\cong M(E_{R})\otimes L_{0}\) on \(\widetilde{C}_{R}\). Now consider the stability condition \(\alpha^{\prime}:=\alpha+\underline{\deg}\,L_{0}/(rd_{0})\). Then we have the following lemma, whose proof is straightforward:

**Lemma 3.15**.: \(E\) _is \(\alpha\)-balanced if and only if \(L_{1}\) is \(\alpha^{\prime}\)-semistable in the following sense: \(L_{1}\) has non-negative degree on exceptional components, and for every subcurve \(\emptyset\subsetneq Z\subsetneq\widetilde{C}\), we have_

\[\left|\deg L_{1}|_{Z}-\sum_{D\subset Z}\alpha^{\prime}(D)\right|\leq\frac{\#(Z\cap Z^{c})}{2}.\]

We consider the coherent sheaf \(L_{1}^{\prime}:=c_{*}L_{1}\) on \(C_{R}\) with adjunction map \(\psi:c^{*}L_{1}^{\prime}\to L_{1}\). If this is surjective, we obtain an induced map \(P:\widetilde{C}_{R}\to\mathbb{P}(L_{1}^{\prime})\).

**Lemma 3.16**.: _The following are equivalent (over the generic and closed point of \(R\) respectively):_

1. _The line bundle_ \(L_{1}\) _is_ \(\alpha^{\prime}\)_-semistable in the sense of Lemma_ 3.15_._
2. 
   1. _The sheaf_ \(L_{1}^{\prime}\) _is torsion-free, the morphism_ \(\psi\) _is surjective and identifies_ \(L_{1}\) _with the pullback along_ \(P\) _of the universal quotient of_ \(L_{1}^{\prime}\)_, and_
   2. \(L_{1}^{\prime}\) _is Oda-Seshadri_ \(\alpha^{\prime}\)_-semistable in the sense of_ _[_11_, Definition 4.1]__._

Proof.: By the arguments in [1, §5], it follows that ii) (a) is equivalent to the condition that \(L_{1}\) has only degrees \(0,1\) on exceptional components, and total degree at most \(1\) on each chain of exceptional components. Then, one can check by hand that \(\alpha^{\prime}\)-semistability for \(L_{1}\) (in the sense of Lemma 3.15) and Oda-Seshadri \(\alpha^{\prime}\)-semistability for \(L_{1}^{\prime}\) are equivalent by using a destabilizing subcurve for the one to construct one for the other. 

As in [11, Corollary 4.3], it follows from Simpson stability that any \(\alpha^{\prime}\)-semistable torsion-free sheaf on the generic fiber has an \(\alpha^{\prime}\)-semistable limit. Let \(L_{2}^{\prime}\) be an \(\alpha^{\prime}\)-semistable limit that agrees with \(L_{1}^{\prime}\) on the generic fiber.
After possibly further expanding \(\widetilde{C}_{\xi}\), we can assume that we have a morphism \(P_{2}:\widetilde{C}_{R}\to\mathbb{P}(L_{2}^{\prime})\), and denote by \(L_{2}\) the pullback of the universal line bundle along \(P_{2}\). Note that this implies that \(L_{1}\) and \(L_{2}\) are isomorphic over \(\eta\). We consider the line bundle \(L_{E}:=L_{2}\otimes L_{1}^{\vee}\) on \(\widetilde{C}_{R}\).

Claim: \(E_{1}:=E_{R}\otimes L_{E}\) is \(\alpha\)-balanced.

To see this, note that, by property 1 of Remark 3.11, \(M(E_{1})=M(E_{R})\otimes L_{E}^{\otimes rd_{0}}\). Therefore

\[\underline{\deg}\,M(E_{1})=\underline{\deg}\,M(E_{R})+rd_{0}(\underline{\deg}\,L_{2}-\underline{\deg}\,L_{1})=-\,\underline{\deg}\,L_{0}+rd_{0}\,\underline{\deg}\,L_{2}.\]

In particular, using the reverse direction of Lemma 3.16 and Lemma 3.15, we find that \(L_{2}\) is \(\alpha^{\prime}\)-semistable, and that \(E_{1}\) is \(\alpha\)-balanced. Since \(L_{E}\) is trivial along \(\widetilde{X}_{\eta}\), we find that \(E_{1}\) is an \(\alpha\)-balanced extension of \(E_{\eta}\). Finally, one can obtain a minimal expansion \(\widetilde{C}\) by contracting the components \(D\) of \(\widetilde{C}_{\xi}\) over which \(E_{1}\) is isomorphic to a pullback along \(F\times D\to F\), where \(F\) is the fiber of \(X_{\xi}\to C_{\xi}\) over the image of \(D\). Since we assumed that \(\widetilde{X}_{\eta}\to\widetilde{C}_{\eta}\) was minimal, one can do this contraction without affecting the generic point. This uses that every component of \(\widetilde{X}_{\eta}\) contains in its closure at least one component that is not contracted, which one can see for example using Lemma 2.10. 

**Proposition 3.17**.: _Let \(E\) be a strictly \(\alpha\)-balanced \(f\)-stable sheaf on an expansion \(\widetilde{X}\to\widetilde{C}\). Then the subgroup of scalar automorphisms has finite index in the automorphism group of \((E,\widetilde{X})\)._

Proof.: By \(f\)-stability, every automorphism of \(E\) as a sheaf on \(\widetilde{X}\) must be scalar: Since the restriction to each fiber over a generic point \(\eta\) of \(\widetilde{C}\) is geometrically stable, any automorphism must be scalar over a dense open of \(\widetilde{C}\), and therefore scalar everywhere, since \(\widetilde{C}\) is connected and \(E\) is flat over \(\widetilde{C}\). In particular, for every automorphism \(\gamma\) of \(\widetilde{C}\) over \(C\), there exists at most one isomorphism \(\phi:\gamma^{*}E\to E\) up to scaling. On the other hand, each automorphism of \(\widetilde{C}\) is given by scaling exceptional components. Let \(D\subset\widetilde{C}\) be an exceptional component. By restricting to \(D\), we may assume without loss of generality that \(D=\mathbb{P}^{1}\), and that \(\gamma\) acts by multiplication with \(a\in\mathbb{G}_{m}\). Then the restriction \(E_{D}\) of \(E\) to \(X_{D}=\mathbb{P}^{1}\times F\) is stable on the generic fiber over \(\mathbb{P}^{1}\) (and over the fibers over \(0,\infty\)). If the map \(\nu_{E}:\mathbb{P}^{1}\to M_{F}(r,d)\) induced by \(E_{D}\) is nontrivial, then it is finite onto its image, and \(a\) must preserve the fibers of \(\nu_{E}\). In particular, there are only finitely many possible values for \(a\). If \(\nu_{E}\) is constant, there might still be distinguished points in \(\mathbb{P}^{1}\) over which the restriction of \(E_{D}\) to the fiber is not locally free or not stable; in this case again \(a\) must permute the finite set of those points, so it must belong to a finite set.
If neither of these occurs, then \(E_{D}\) is a pullback of a stable bundle from \(F\) twisted by the pullback of a degree \(\ell\) line bundle from \(\mathbb{P}^{1}\). By \(\alpha\)-balancedness, we have that \(\ell\) is \(0\) or \(1\), and by minimality of \(\widetilde{X}\) we must have \(\ell=1\). We claim that on any such component, \(a\) must be an \(rd_{0}\)-th root of unity, which shows that there are only finitely many possible choices of \(\gamma\). We now show this last claim. Since scaling fixes the points \(0,\infty\in\mathbb{P}^{1}\), we have that \(\phi\) induces an automorphism of the restriction of \(E\) to the fibers over \(0,\infty\), say \(\phi_{0},\phi_{\infty}\), which are given by scalar multiplication. They are related by \(\phi_{\infty}=a^{-1}\phi_{0}\). In particular, if \(a\) is not an \(rd_{0}\)-th root of unity, then \(\phi\) induces an automorphism of the pair \((\widetilde{C},M(E))\) that is given by scaling \(M(E)\) differently at different nodes which are fixed by \(\gamma\). Without loss of generality, we may assume that there is one node of \(\widetilde{C}\) at which this scaling is trivial. Then let \(Z\subseteq\widetilde{C}\) be the maximal connected subcurve of \(\widetilde{C}\) containing this node, such that at all nodes in \(Z\), the automorphism induced by \(\phi\) is trivial. Then any irreducible component \(D^{\prime}\) of \(\widetilde{C}\) intersecting \(Z\) in a finite set must be exceptional, and have \(\deg M(E)|_{D^{\prime}}=rd_{0}\). If \(Z\) is a chain of exceptional components, this means inequality (4) is violated. Otherwise, for this \(Z\), the _strict_ inequality in (4) is violated. In either case, this contradicts the assumption that \(E\) is strictly \(\alpha\)-balanced. 

### Boundedness and Properness

Let \(f:X_{B}\to C_{B}\) be a family of fibered surfaces over \(B\). We assume here that \(B\) is connected. Let \(L_{0}\) be a line bundle on \(X_{B}\) with degree \(d_{0}>0\) on fibers over \(C_{B}\) and let \(\alpha\) be a stability condition on \(C_{B}\). To get a bounded moduli space, we need to fix numerical invariants. For this, we will use the relative Neron-Severi scheme of a family constructed in §2.3. In order for the results there to apply, we will impose Assumption 2.12 from here until the end of §4. We fix a section \(\overline{c_{1}}\in\overline{\mathcal{NS}}_{X_{B}/B}(B)\) which has fiberwise degree \(d\), and \(\Delta\in\mathbb{Z}\).

**Definition 3.18**.: We let \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\subseteq\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,d)\) denote the substack of sheaves whose fiberwise discriminant is \(\Delta\) and for which the associated section of \(\overline{\mathcal{NS}}_{X_{B}/B}\) defined by the determinant agrees with the pullback of \(\overline{c_{1}}\).

If \(C_{B}\) has geometrically irreducible fibers over \(B\), then \(\mathcal{NS}_{X_{B}/B}=\overline{\mathcal{NS}}_{X_{B}/B}\), and we also use the notation \(\mathcal{M}_{X_{B}/C_{B}}(r,c_{1},\Delta)\) for \(c_{1}\in\mathcal{NS}_{X_{B}/B}\). Note that this makes sense by Lemma 2.18.

**Lemma 3.19**.: _The stack \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\) is an open and closed substack of \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,d)\)._

Proof.: The condition that the degree of the cycle \(2rc_{2}(E)-(r-1)c_{1}(E)^{2}\) equals \(\Delta\) is an open and closed condition.
Since \(\overline{\mathcal{NS}}_{X_{B}/B}\) is separated and unramified over \(B\), the section \(\overline{c_{1}}\) determines an open and closed subspace. 

**Proposition 3.20**.: _The stack \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\) is of finite type over \(B\)._

Proof.: It only remains to show that it is quasi-compact over \(B\). We may work locally on \(B\) and stratify \(B\) by the singularity type of \(C_{B}\). In particular, we may assume that \(B\) is a finite type \(\mathbb{C}\)-scheme and that the singular locus of \(C_{B}\) is a disjoint union of copies of \(B\). Let \(C_{1},\ldots,C_{n}\) denote the components of \(C_{B}\), which are smooth over \(B\), and \(X_{1},\ldots,X_{n}\) their preimages under \(f\).

Claim 1: There exists an integer \(N_{1}\), such that for every \(b\in B\), every \(\alpha\)-balanced \(f\)-stable sheaf \(E\) on an expansion of \(X_{b}\) and every \(i\), we have \(\Delta(E|_{X_{i}})\geq N_{1}\).

Proof.: Since the discriminant is invariant under tensoring with a line bundle, we may assume that \(c_{1}(E|_{X_{i}})=c_{1}|_{X_{i}}+kF\) for a fixed lift \(c_{1}\) of \(\overline{c_{1}}\) and some \(k\in\{0,\ldots,r-1\}\). By Theorem 2.21, we can find a polarization \(H_{i}\) on \(X_{i}\) such that \(f\)-stability for \(X_{i}\to C_{i}\) agrees with slope stability with respect to \(H_{i}\) for all sheaves of rank \(r\), first Chern class of the form \(c_{1}|_{X_{i}}+kF\) and with discriminant at most \(0\), say. In particular, the collection of such sheaves is bounded, so their discriminant is bounded below by some constant \(N_{1}^{i}\). Taking the minimum of all the \(N_{1}^{i}\) gives us the desired \(N_{1}\). 

Claim 2: There exists a number \(N_{2}\), so that for an \(\alpha\)-balanced \(f\)-stable sheaf on a minimal expansion \(\widetilde{X}_{b}\to\widetilde{C}_{b}\), the number of exceptional components is at most \(N_{2}\).

Proof.: By Lemma 2.10, on each exceptional component \(Y\) of \(\widetilde{X}_{b}\), \(E_{Y}\) is either a pullback tensored by a line bundle from \(\widetilde{C}_{b}\), or \(\Delta(E_{Y})>0\). By \(\alpha\)-balancedness, there can be at most \(g(C_{b})\) components for which the first possibility occurs (and the line bundle has to be of degree one on the corresponding component). It follows that the total number of exceptional components is bounded by \(\Delta-\#\operatorname{Irr}(C_{b})\cdot N_{1}+g(C_{b})\). 

From these two claims, it follows that there is an a-priori bound for \(\Delta(E|_{Y})\) for any \(\alpha\)-balanced \(f\)-stable sheaf on an expansion, and \(Y\) an arbitrary component of the expansion. From Claim 2, we also see that \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\) factors through a quasi-compact open subset of \(\operatorname{Exp}_{C_{B}/B}\). Thus we may further reduce to showing that the preimage of each stratum of \(\operatorname{Exp}_{C_{B}/B}\) is quasi-compact. This reduces us to showing that the space of sheaves on a given expansion \(\widetilde{C}_{B}\to C_{B}\) is quasi-compact. By \(\alpha\)-balancedness, for each component there are only finitely many possible values that the first Chern class of the restriction can take. In this case, an \(f\)-stable sheaf on \(\widetilde{X}_{B}\) is the same as giving suitable \(f\)-stable sheaves on each component, together with gluing isomorphisms along the fibers over the nodes. This reduces us to the case where \(\widetilde{X}_{B}\) has a single component.
In this case, the space of \(f\)-stable sheaves with given \(\overline{c_{1}}\) and \(\Delta\) is open in the space of stable sheaves with respect to a suitably chosen polarization, hence quasi-compact. 

**Definition 3.21**.: Let \(X_{B}\to C_{B}\to B\) be a family of fibered surfaces. We say that a stability condition \(\alpha\) on \(C_{B}\) is generic, if every \(\alpha\)-balanced sheaf on a minimal expansion of \(X_{B}\) is in fact strictly \(\alpha\)-balanced.

**Remark 3.22**.: Suppose that \(C_{B}\to B\) has a single most degenerate stratum over a closed point \(b\). Then one can always choose a generic stability condition, by choosing an \(\alpha:\operatorname{Irr}(C_{b})\to\mathbb{R}\) whose values span a \(\mathbb{Q}\)-vector space of dimension \(|\operatorname{Irr}(C_{b})|-1\).

#### Properness.

**Definition 3.23**.: Let \(\alpha\) be a generic stability condition on \(C_{B}\). We denote the \(\mathbb{G}_{m}\)-rigidification of \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\) along the scalar automorphisms by

\[M^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta).\]

By Proposition 3.17, for a choice of generic stability condition, the stack \(M^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\) has finite stabilizer groups at every point, and therefore is Deligne-Mumford.

**Theorem 3.24**.: _Let \(\alpha\) be a generic stability condition. Then \(M^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\) is a proper Deligne-Mumford stack over \(B\)._

Proof.: We already know that it is a finite type Deligne-Mumford stack. It satisfies the existence part of the valuative criterion of properness, since by Proposition 3.14 this is true for \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,d)\). It only remains to address the uniqueness part of the valuative criterion. For this, we may assume that \(B=\operatorname{Spec}R\) for a DVR \(R\), and that we are given expansions \(\widetilde{X}_{1}\to\widetilde{C}_{1}\) and \(\widetilde{X}_{2}\to\widetilde{C}_{2}\) of \(X_{B}\to C_{B}\) together with respective \(\alpha\)-balanced \(f\)-stable sheaves \(E_{1}\) and \(E_{2}\) and an isomorphism \(\Psi\) of the restrictions to the generic fibers. Let \(\pi\) be a uniformizer for \(R\). Then we need to show that \(\widetilde{C}_{1}\simeq\widetilde{C}_{2}\) and that for some \(\ell\), the isomorphism \(\pi^{\ell}\Psi\) can be extended to an isomorphism of \(E_{1}\) and \(E_{2}\) over the isomorphism of expansions. We first choose a common further expansion \(\widetilde{C}_{1}\xleftarrow{c^{1}}\widetilde{C}_{3}\xrightarrow{c^{2}}\widetilde{C}_{2}\) which is an isomorphism over the generic point \(\eta\) and is minimal in the sense that no component of \(\widetilde{C}_{3}\) over the closed point \(\xi\) is contracted by both \(c^{1}\) and \(c^{2}\). Then, both \((c^{1})^{*}E_{1}\) and \((c^{2})^{*}E_{2}\) are \(\alpha\)-balanced \(f\)-stable sheaves on \(\widetilde{X}_{3}\) and we have a given isomorphism \(\psi:(c^{1})^{*}E_{1}|_{\widetilde{X}_{3,\eta}}\rightarrow(c^{2})^{*}E_{2}|_{\widetilde{X}_{3,\eta}}\). There is a unique choice of integer \(\ell\) such that \(\pi^{\ell}\psi\) extends to a morphism \((c^{1})^{*}E_{1}\to(c^{2})^{*}E_{2}\) whose restriction to \(\widetilde{X}_{3,\xi}\) is nonzero. This extension is then unique, and we will denote it again by \(\pi^{\ell}\Psi\). By Lemma 3.25 below, \(\pi^{\ell}\Psi\) is an isomorphism.
In particular, any component of \(\widetilde{C}_{3}\) which is contracted by \(c^{1}\) is also contracted by \(c^{2}\), so we have \(\widetilde{C}_{1}\cong\widetilde{C}_{3}\cong\widetilde{C}_{2}\). This is precisely what we wanted to show. **Lemma 3.25**.: _Let \(f:X_{R}\to C_{R}\) be a family of fibered surfaces over a DVR, let \(\alpha\) be a stability condition on \(C_{R}\), and let \(E_{1}\) and \(E_{2}\) be strictly \(\alpha\)-balanced \(f\)-stable sheaves on some expansion \(\widetilde{X}_{R}\rightarrow\widetilde{C}_{R}\) of \(f\). Let \(\Psi:E_{1}\to E_{2}\) be a morphism that is an isomorphism over the generic point of \(R\) and non-zero over the closed point. Suppose that \(\Delta(E_{1})=\Delta(E_{2})\) and that \(c_{1}(E_{1})\equiv c_{1}(E_{2})\) in \(\mathcal{NS}_{X_{R}/R}\). Then \(\Psi\) is an isomorphism._ Proof.: After possibly taking a base change on \(R\) and a further expansion of \(\widetilde{C}_{R}\), we may assume without loss of generality that all irreducible components of \(\widetilde{C}_{R}\) are regular. Since \(E_{1}\) and \(E_{2}\) are flat over \(\widetilde{C}_{R}\), the sheaf \(L_{0}:=f_{*}\mathcal{H}om(E_{1},E_{2})\) is a reflexive rank one sheaf on \(\widetilde{C}_{R}\) and isomorphic to the structure sheaf over \(\widetilde{C}_{\eta}\). As a reflexive sheaf, it is locally free away from the singular points of \(\widetilde{C}_{R}\), and since it is locally free on the generic fiber, these are the finitely many points \(x_{i}\) in the special fiber in which two irreducible components of \(\widetilde{C}_{R}\) intersect. By restricting to an open \(U_{i}\) around a given \(x_{i}\) which intersects only the two adjacent components, there is a unique integer \(\ell_{i}\geq 0\), such that \(\pi^{-\ell_{i}}\Psi\) is well-defined on \(U_{i}\) and is non-zero on the special fiber on one of the components. It follows that it is an isomorphism on the fiber over \(x_{i}\), and hence that it is a generator of \(f_{*}\mathcal{H}om(E_{1},E_{2})\) around \(x_{i}\). This implies that \(L_{0}\) is indeed locally free. We get a tautological morphism \(E_{1}\otimes L_{0}\to E_{2}\), which is non-zero on each component of the special fiber. Therefore its restriction to \(\widetilde{X}_{\xi}\) is injective with cokernel \(Q\) supported on fibers of \(\widetilde{X}_{\xi}\rightarrow\widetilde{C}_{\xi}\). Since \(E_{1}\) and \(E_{2}\) have the same numerical invariants, and \(L_{0}\) has total degree zero on \(\widetilde{C}_{\xi}\), we find that \(Q=0\), so \(E_{1}\otimes L_{0}\simeq E_{2}\). This implies that both \(E_{1}\) and \(E_{1}\otimes L_{0}\) are strictly \(\alpha\)-balanced. It follows that \(L_{0}\) must have degree zero on each non-exceptional component of \(\widetilde{C}_{\xi}\) (otherwise, such a component gives a subcurve violating stability). Since \(L_{0}\) is trivial on \(\widetilde{C}_{\eta}\), this is enough to conclude that in fact \(L_{0}\simeq\mathcal{O}_{\widetilde{C}}\). Since \(\Psi\) is non-zero on \(\widetilde{X}_{\xi}\), it gives a generator of \(L_{0}\), so it must be an isomorphism by what we already argued. **Remark 3.26**.: Note that by (3) and Grothendieck-Riemann-Roch, the total degree of \(M(E)\) depends on \(E\) only through \(c_{1}(E)\) and \(\Delta(E)\). In particular, a formal application of Grothendieck-Riemann-Roch gives a unique number \(\alpha(\overline{c_{1}},\Delta)\) such that \(M^{\alpha}_{X/C}(r,\overline{c_{1}},\Delta)=\emptyset\) unless \[\sum_{D\in\operatorname{Irr}(C)}\alpha(D)=\alpha(\overline{c_{1}},\Delta).
\tag{5}\] When \(C\) is irreducible, a stability condition is just a scalar which determines whether the moduli space is (possibly) nonempty. In this case, we will abbreviate \[M^{b}_{X/C}(r,\overline{c_{1}},\Delta):=M^{\alpha(\overline{c_{1}},\Delta)}_{X /C}(r,\overline{c_{1}},\Delta).\] ### Perfect Obstruction Theories We construct the perfect obstruction theory on the moduli stacks \(\mathcal{M}_{X_{B}/C_{B}}(r,d)\) and their variants. The arguments here are relatively standard and we will not give all details. A _perfect obstruction theory_ for a morphism \(\mathcal{X}\to\mathcal{Y}\) is an object \(E\in D(\mathcal{X})\) that is perfect with amplitude in \([-1,1]\) together with a morphism \(E\to L_{\mathcal{X}/\mathcal{Y}}\) that is an isomorphism on \(h^{1}\) and \(h^{0}\) and surjective on \(h^{-1}\). This coincides with the usual notion whenever \(\mathcal{X}\to\mathcal{Y}\) is of DM-type. Let \(X_{B}\to C_{B}\) be a family of fibered surfaces. We abbreviate \(\mathcal{M}:=\mathcal{M}_{X_{B}/C_{B}}(r,d)\) and \(\operatorname{Exp}:=\operatorname{Exp}_{C_{B}/B}\). Consider the forgetful morphisms \(\mathcal{M}\to\operatorname{Exp}\to B\). We let \(\widetilde{X}\to\widetilde{C}\) denote the universal expansion on \(\operatorname{Exp}\) and let \(\mathcal{E}\) denote the universal sheaf on the pullback \(\widetilde{X}_{\mathcal{M}}\) of \(\widetilde{X}\) to \(\mathcal{M}\). Let \(\pi:\widetilde{X}_{\mathcal{M}}\to\mathcal{M}\) denote the projection. The Atiyah class defines a relative obstruction theory \((R\pi_{*}R\mathcal{H}om_{0}(\mathcal{E},\mathcal{E}))^{\vee}[-1]\to L_{ \mathcal{M}/\operatorname{Exp}}\). We have a factorization \(\mathcal{M}\to\mathcal{P}ic_{\widetilde{X}/\operatorname{Exp}}\to \operatorname{Exp}\) of the forgetful map through the determinant morphism to the Picard stack. Let \(\mathcal{L}\) denote the universal line bundle over \(\widetilde{X}_{\mathcal{P}ic_{\widetilde{X}/B}}\) and let \(\pi\) also denote the projection to \(\mathcal{P}ic_{\widetilde{X}/B}\). We have the relative obstruction theory \((R\pi_{*}R\mathcal{H}om_{0}(\mathcal{L},\mathcal{L}))^{\vee}[-1]\to L_{ \mathcal{P}ic_{\widetilde{X}/Exp}/Exp}\). It is naturally compatible with the obstruction theory of \(\mathcal{M}\) via the trace map. Moreover, the trace-free part gives a canonical relative obstruction theory \((R\pi_{*}R\mathcal{H}om_{0}(\mathcal{E},\mathcal{E}))^{\vee}[-1]\to L_{ \mathcal{M}/\mathcal{P}ic_{\widetilde{X}/Exp}}\). We have a commutative diagram involving the \(\mathbb{G}_{m}\)-rigidifications of both stacks The induced map \(r^{*}L_{M/Pic_{\widetilde{X}/\operatorname{Exp}}}\to L_{\mathcal{M}/ \mathcal{P}ic_{\widetilde{X}/\operatorname{Exp}}}\) is an isomorphism, so we may compose with its inverse to get a morphism \((R\pi_{*}R\mathcal{H}om_{0}(\mathcal{E},\mathcal{E}))^{\vee}[-1]\to r^{*}L_{M/ Pic_{\widetilde{X}/Exp}}\). This last map descends to a canonical perfect obstruction theory for the determinant morphism \(M\to\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}}\) over the relative Picard scheme. From this discussion, and the properties of virtual pullback, we immediately get **Proposition 3.27**.: _Suppose we are in the situation of Theorem 3.24. 
Then the stack \(M^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}},\Delta)\) has a relative perfect obstruction theory over \(\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}}\)._ _In particular, it has a natural virtual fundamental class given by virtual pullback of the fundamental class of \(\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}}\). The formation of the virtual fundamental class is compatible with flat and l.c.i. pullbacks on \(B\)._ ### Evaluation maps Let \(X_{B}\to C_{B}\) be a family of fibered surfaces and let \(\sigma_{1},\dots,\sigma_{n}:B\to C_{B}\) denote the markings of \(C_{B}\). Let \(F_{i}\to B\) denote the family of smooth curves obtained as the preimage of \(\sigma_{i}\) under \(f\). For each \(i\), we have a morphism of stacks \(\mathcal{M}_{X_{B}/C_{B}}(r,d)\to\mathcal{M}_{F_{i}}(r,d)\). It fits into a commutative diagram in which the horizontal maps are the restriction maps and the vertical maps are the determinant morphisms. For each map in this square, the obstruction theories for source and target are naturally compatible. We have an induced square of rigidifications, with induced relative obstruction theories ### Tautological classes We want to study invariants which are defined by pairing certain tautological cohomology classes against the virtual fundamental class. We define here what we mean by tautological cohomology class. For a Deligne-Mumford stack \(\mathcal{Y}\) over \(\mathbb{C}\), we define its rational (co-) homology groups \(H_{*}(\mathcal{Y},\mathbb{Q})\) (resp. \(H^{*}(\mathcal{Y},\mathbb{Q})\)) in terms of the simplicial scheme \(Y_{\bullet}\) associated to an etale cover \(Y_{0}\to\mathcal{Y}\). When working with rational coefficients - as we do here - these are naturally isomorphic to the (co-) homology groups of the coarse moduli space of \(Y\). This is nicely explained in the second half of [1]. One also has a natural cycle class map \(A_{*}(\mathcal{Y})\to H_{*}^{BM}(\mathcal{Y},\mathbb{Q})\) into the Borel-Moore homology (cf. [1, SS2]). When \(\mathcal{Y}\) is proper, this equivalently gives a map \(A_{*}(\mathcal{Y})\to H_{*}(\mathcal{Y},\mathbb{Q})\). Let \(f:X\to C\) be a fibered surface over a fixed nodal marked curve \(C\) with markings \((x_{1},\dots,x_{n})\) and let \(F_{1},\dots,F_{n}\) denote the fibers over the markings. Let \(L_{0}\) be a fixed line bundle of degree \(d_{0}>0\) on \(X\) and let \(\overline{c_{1}}\in\overline{\mathcal{NS}}_{X}\) be a class of fiber degree \(d\). Let also \(\Delta\in\mathbb{Z}\). Let \(\alpha\) be a generic stability condition on \(C\). We consider the proper Deligne-Mumford stack \(M:=M_{X/C}^{\alpha}(r,\overline{c_{1}},\Delta)\). Let \(\pi:\widetilde{X}\to M\) denote the universal expansion over \(M\) and \(c:\widetilde{X}\to X\) the contraction map. **Lemma 3.28**.: _There is a natural map \(\pi_{!}:H^{*}(\widetilde{X},\mathbb{Q})\to H^{*-4}(M,\mathbb{Q})\)._ Proof.: Since the morphism \(\pi:\widetilde{X}\to M\) is flat, proper and representable of dimension \(2\), any etale cover \(M_{0}\to M\) induces an etale cover \(\widetilde{X}_{0}\to\widetilde{X}\) by pullback, and we get an induced morphism of simplicial algebraic spaces \(\pi_{\bullet}:\widetilde{X}_{\bullet}\to M_{\bullet}\), which is component-wise flat and proper of relative dimension two. we have a trace map \((R\pi_{\bullet})_{*}\mathbb{Q}\to\mathbb{Q}[-4]\) (see [21, 4.6] for the case of schemes, which carries over to our setting). 
In fact, fiberwise, \(R^{4}\pi_{*}\mathbb{Q}\) is a \(\mathbb{Q}\)-vector space spanned by the orientation classes of irreducible components in the fiber, and the map \(R^{4}\pi_{*}\mathbb{Q}\to\mathbb{Q}\) sends each generator to \(1\). This induces the desired morphism after passing to cohomology groups. Let \(\gamma\in H^{*}(X)\) be a cohomology class. Recall that \(M\) is the rigidification of the moduli stack \(\mathcal{M}^{\alpha}_{X/C}(r,\overline{c_{1}},\Delta)\) and similarly \(\widetilde{X}\) is the rigidification of a family \(\widetilde{\mathcal{X}}\). We denote by \(\mathcal{E}\) the universal sheaf on \(\widetilde{\mathcal{X}}\). While \(\mathcal{E}\) does not descend to \(\widetilde{X}\), the expression \(\mathcal{E}\otimes(\det\mathcal{E})^{-(1/r)}\) makes sense as a \(K\)-theory class and does descend to \(\widetilde{X}\). By abuse of notation, we denote it by \(\widehat{\mathcal{E}}\). We make the following definition for \(i\geq 0\): \[\operatorname{ch}_{i}(\gamma):=\pi_{!}\left(\operatorname{ch}_{i}(\widehat{\mathcal{E}})\cup c^{*}\gamma\right). \tag{6}\] By abuse of notation, we denote by \(T^{\operatorname{vir}}_{M}=-[R\operatorname{Hom}_{0}(\mathcal{E},\mathcal{E})]\) the \(K\)-theory class dual to the relative perfect obstruction theory of \(M\) over the relative Picard scheme. Here is an (incomplete) definition of tautological classes. **Definition 3.29**.: We say that a cohomology class in \(H^{*}(M,\mathbb{Q})\) is _tautological_, if it lies in the sub-ring generated by classes \(\operatorname{ch}_{i}(\gamma)\) and classes \(\operatorname{ch}_{i}(T^{\operatorname{vir}}_{M})\). More generally, one can also consider classes defined in terms of \(K\)-theoretic objects, such as virtual Segre or Verlinde invariants (see for example [14] for an overview). ## 4 The degeneration formula In this section we state and prove a special case of a degeneration formula, when the base curve has one node and two irreducible pieces. Let \(f:X\to C\) be a fibered surface and suppose \(C=D_{1}\cup D_{2}\), where \((D_{1},x_{1})\) and \((D_{2},x_{2})\) are smooth curves with a single marking and the union is taken along the marked points. Let \(Y_{i}:=f^{-1}D_{i}\) and let \(F_{i}:=f^{-1}(x_{i})\) for \(i=1,2\). We fix some \(L_{0}\) with fiber degree \(d_{0}>0\) on \(X\) and a stability condition \(\alpha\) on \(C\). For applications one may always choose \(L_{0}\) as \(L\) or \(L^{-1}\). We assume that \(\alpha(D_{i})\not\in\frac{1}{rd_{0}}\mathbb{Q}\), in particular that the stability condition \(\alpha\) is generic. We let \(\alpha_{i}:=\alpha(D_{i})\). We further fix a section \(\overline{c_{1}}\) of \(\overline{\mathcal{NS}}_{X}\). We will use the following abuse of notation: If \(c_{1}^{\prime}\) and \(c_{1}^{\prime\prime}\) are points in \(\mathcal{NS}_{Y_{1}}\) and \(\mathcal{NS}_{Y_{2}}\) respectively, we write \(c_{1}^{\prime}+c_{1}^{\prime\prime}=\overline{c_{1}}\) if there exists a lift \(c_{1}\) of \(\overline{c_{1}}\) to \(\mathcal{NS}_{X}\) which restricts to \(c_{1}^{\prime}\) and \(c_{1}^{\prime\prime}\) on \(Y_{1}\) and \(Y_{2}\) respectively. Finally, we write \[\mathcal{M}^{\lfloor\alpha_{1}\rceil}_{Y_{1}/D_{1}}(r,c_{1}^{\prime},\Delta_{1}):=\coprod_{\beta\in\frac{1}{rd_{0}}\mathbb{Q}}\,\mathcal{M}^{\beta}_{Y_{1}/D_{1}}(r,c_{1}^{\prime},\Delta_{1}),\] and similarly for \(\mathcal{M}^{\lfloor\alpha_{2}\rceil}_{Y_{2}/D_{2}}(r,c_{1}^{\prime\prime},\Delta_{2})\) and the \(\mathbb{G}_{m}\)-rigidified versions of the stacks.
Note that at most one term in the disjoint union is nonempty. ### Glueing of sheaves Let \(c_{1}^{\prime}\in\mathcal{NS}_{Y_{1}}\), \(c_{1}^{\prime\prime}\in\mathcal{NS}_{Y_{2}}\) such that \(c_{1}^{\prime}+c_{1}^{\prime\prime}=\overline{c_{1}}\). Suppose that \(\alpha\) satisfies (5). Let \(\Delta_{1},\Delta_{2}\in\mathbb{Z}\) and \(\Delta:=\Delta_{1}+\Delta_{2}\). There is an associated glueing morphism \[\gamma:\mathcal{M}^{\lfloor\alpha_{1}\rceil}_{Y_{1}/D_{1}}(r,c_{1}^{\prime},\Delta_{1})\times_{\mathcal{M}_{F}(r,d)}\mathcal{M}^{\lfloor\alpha_{2}\rceil}_{Y_{2}/D_{2}}(r,c_{1}^{\prime\prime},\Delta_{2})\to\mathcal{M}^{\alpha}_{X/C}(r,\overline{c_{1}},\Delta_{1}+\Delta_{2}).\] This induces a canonical morphism on \(\mathbb{G}_{m}\)-rigidifications \[\Gamma:M^{\lfloor\alpha_{1}\rceil}_{Y_{1}/D_{1}}(r,c_{1}^{\prime},\Delta_{1})\times_{M_{F}(r,d)}M^{\lfloor\alpha_{2}\rceil}_{Y_{2}/D_{2}}(r,c_{1}^{\prime\prime},\Delta_{2})\to M^{\alpha}_{X/C}(r,\overline{c_{1}},\Delta).\] We similarly have a glueing morphism for Picard schemes \[\operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1},x_{1}}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}^{d}}\operatorname{Pic}_{\widetilde{Y}_{2}/\operatorname{Exp}_{D_{2},x_{2}}}^{c_{1}^{\prime\prime}}\to\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}_{C}}^{\overline{c_{1}}}.\] Here we use a superscript to denote the component of the Picard schemes mapping into the respective component of the Neron-Severi schemes. The glueing morphisms are compatible with taking the determinant, i.e. we have a commutative diagram \[\begin{CD}M^{\lfloor\alpha_{1}\rceil}_{Y_{1}/D_{1}}(r,c_{1}^{\prime},\Delta_{1})\times_{M_{F}(r,d)}M^{\lfloor\alpha_{2}\rceil}_{Y_{2}/D_{2}}(r,c_{1}^{\prime\prime},\Delta_{2})@>{\Gamma}>{}>M^{\alpha}_{X/C}(r,\overline{c_{1}},\Delta).\\ @V{}V{}V@V{}V{}V\\ \operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1}}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}^{d}}\operatorname{Pic}_{\widetilde{Y}_{2}/\operatorname{Exp}_{D_{2}}}^{c_{1}^{\prime\prime}}@>{}>{}>\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}_{C}}^{\overline{c_{1}}}\end{CD} \tag{7}\] By taking the union over possible decompositions of the discriminant, we can say more. **Lemma 4.1**.: _The following natural diagram is cartesian:_ \[\begin{CD}\coprod_{\Delta_{1}+\Delta_{2}=\Delta}M^{\lfloor\alpha_{1}\rceil}_{Y_{1}/D_{1}}(r,c_{1}^{\prime},\Delta_{1})\times_{M_{F}(r,d)}M^{\lfloor\alpha_{2}\rceil}_{Y_{2}/D_{2}}(r,c_{1}^{\prime\prime},\Delta_{2})@>{\Gamma}>{}>M^{\alpha}_{X/C}(r,\overline{c_{1}},\Delta)\\ @V{}V{}V@V{}V{}V\\ \operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1}}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}^{d}}\operatorname{Pic}_{\widetilde{Y}_{2}/\operatorname{Exp}_{D_{2}}}^{c_{1}^{\prime\prime}}@>{\Gamma^{\prime}}>{}>\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}_{C}}^{\overline{c_{1}}}\end{CD} \tag{8}\] Proof.: This can be checked before passing to rigidifications. In that case, the fiber product corresponding to the lower right corner of the diagram has \(T\)-points given by an element of \(\mathcal{M}_{X/C}^{\alpha}(r,\overline{c_{1}},\Delta)(T)\) - i.e. a sheaf \(E_{T}\) on an expansion \(\widetilde{X}_{T}\) - together with a choice of decomposition \(\widetilde{X}_{T}=\widetilde{Y}_{T,1}\cup\widetilde{Y}_{T,2}\) of the given expansion, such that we have \([\det E|_{\widetilde{Y}_{T,1}}]=c_{1}^{\prime}\) and \([\det E|_{\widetilde{Y}_{T,2}}]=c_{1}^{\prime\prime}\).
The stability condition on \(C\) implies that the restrictions will lie in the prescribed range of stabilities on the \(D_{i}\). It follows from Lemma 4.1 that we have a natural relative obstruction theory on each product \(M_{Y_{1}/D_{1}}^{\lfloor\alpha_{1}\rceil}(r,c_{1}^{\prime},\Delta_{1})\times_{M_{F}(r,d)}M_{Y_{2}/D_{2}}^{\lfloor\alpha_{2}\rceil}(r,c_{1}^{\prime\prime},\Delta_{2})\) over the product of relative Picard schemes \(\operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1}}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}}\operatorname{Pic}_{\widetilde{Y}_{2}/\operatorname{Exp}_{D_{2}}}^{c_{1}^{\prime\prime}}\), given by pulling back the obstruction theory via \(\Gamma\). In particular, we have a canonical virtual fundamental class on each product, obtained by pulling back the fundamental class of the base. **Proposition 4.2**.: _We have an equality in \(A_{*}(M_{X/C}^{\alpha}(r,\overline{c_{1}},\Delta))\)._ \[[M_{X/C}^{\alpha}(r,\overline{c_{1}},\Delta)]^{\operatorname{vir}}=\sum_{\begin{subarray}{c}c_{1}^{\prime}+c_{1}^{\prime\prime}=\overline{c_{1}}\\ \Delta_{1}+\Delta_{2}=\Delta\end{subarray}}\Gamma_{*}[M_{Y_{1}/D_{1}}^{\lfloor\alpha_{1}\rceil}(r,c_{1}^{\prime},\Delta_{1})\times_{M_{F}(r,d)}M_{Y_{2}/D_{2}}^{\lfloor\alpha_{2}\rceil}(r,c_{1}^{\prime\prime},\Delta_{2})]^{\operatorname{vir}}.\] In view of Lemma 4.1 and the compatibility of virtual pullback with proper push-forward, Proposition 4.2 follows from the following statement about the glueing morphism \(\Gamma^{\prime}\) of Picard schemes. **Lemma 4.3**.: _i) The morphism \(\Gamma^{\prime}\) is proper and quasi-finite. ii) We have \(\Gamma^{\prime}_{*}\big[\coprod_{c_{1}^{\prime}+c_{1}^{\prime\prime}=\overline{c_{1}}}\operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1}}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}^{d}}\operatorname{Pic}_{\widetilde{Y}_{2}/\operatorname{Exp}_{D_{2}}}^{c_{1}^{\prime\prime}}\big]=\big[\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}_{C}}^{\overline{c_{1}}}\big]\)._ Proof.: Note that \(\operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1}}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}}\operatorname{Pic}_{\widetilde{Y}_{2}/\operatorname{Exp}_{D_{2}}}^{c_{1}^{\prime\prime}}\) is smooth over \(\operatorname{Exp}_{D_{1}}\times\operatorname{Exp}_{D_{2}}\). Indeed, it is a union of connected components of \(\operatorname{Pic}_{\widetilde{Y}_{1}\cup_{F}\widetilde{Y}_{2}/\operatorname{Exp}_{D_{1}}\times\operatorname{Exp}_{D_{2}}}\), which are all translates of the identity component. Regarding i): One shows that \(\Gamma^{\prime}\) is quasi-compact by an argument similar to the one used in the proof of Lemma 2.18. Then, it is straightforward to check the valuative criteria for properness. Quasi-finiteness follows since for a given point \(L_{1}\) of \(\operatorname{Pic}_{\widetilde{X}}^{\overline{c_{1}}}\) defined on an expansion \(\widetilde{X}_{1}\to\widetilde{C}_{1}\), points in the preimage under \(\Gamma^{\prime}\) correspond to a choice of singular point in \(\widetilde{C}_{1}\). Now ii) follows, since each glueing map is between schemes of the same dimension. Thus, to compute the image of the fundamental cycle \([\operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1}}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}}\operatorname{Pic}_{\widetilde{Y}_{2}/\operatorname{Exp}_{D_{2}}}^{c_{1}^{\prime\prime}}]\) it is enough to do so over a dense open on target and source.
Hence, we may restrict to the generic points of \(\operatorname{Exp}_{D_{i}}\) and \(\operatorname{Exp}_{C}\) corresponding to trivial expansions. Then the result follows, since we have an isomorphism \[\operatorname{Pic}_{X}^{\overline{c_{1}}}=\coprod_{c_{1}^{\prime}+c_{1}^{\prime\prime}=\overline{c_{1}}}\operatorname{Pic}_{\widetilde{Y}_{1}}^{c_{1}^{\prime}}\times_{\operatorname{Pic}_{F}}\operatorname{Pic}_{\widetilde{Y}_{2}}^{c_{1}^{\prime\prime}}.\] ### Obstruction theories Let \(\Delta_{1},\Delta_{2}\in\mathbb{Z}\) with \(\Delta=\Delta_{1}+\Delta_{2}\). For convenience, we abbreviate \[M_{Y_{1}/D_{1}} :=M_{Y_{1}/D_{1}}^{\lfloor\alpha_{1}\rceil}(r,c_{1}^{\prime},\Delta_{1}),\] \[\operatorname{Pic}_{\widetilde{Y}_{1}} :=\operatorname{Pic}_{\widetilde{Y}_{1}/\operatorname{Exp}_{D_{1}}}^{c_{1}^{\prime}},\] \[M_{F} :=M_{F}(r,d),\] and similarly for \(M_{Y_{2}/D_{2}},\operatorname{Pic}_{\widetilde{Y}_{2}},M_{X/C}^{\alpha}\) and \(\operatorname{Pic}_{\widetilde{X}}\). In this subsection, we further analyze the virtual class on \(M_{Y_{1}/D_{1}}\times_{M_{F}}M_{Y_{2}/D_{2}}\) that was constructed in §4.1. **Proposition 4.4**.: 1. _There is a relative perfect obstruction theory for the morphism_ \(M_{Y_{i}/D_{i}}\to M_{F}\) _(for_ \(i=1,2\)_), which induces a canonical virtual pullback map._ 2. _The following cycle classes on_ \(M_{Y_{1}/D_{1}}\times_{M_{F}}M_{Y_{2}/D_{2}}\) _agree:_ (a) _the virtual pullback of the fundamental class of_ \(\operatorname{Pic}_{\widetilde{Y}_{1}}\times_{\operatorname{Pic}_{F}^{d}}\operatorname{Pic}_{\widetilde{Y}_{2}}\)_,_ (b) _the Gysin-pullback of the product of virtual classes on_ \(M_{Y_{1}/D_{1}}\times M_{Y_{2}/D_{2}}\) _along the diagonal map_ \(M_{F}\to M_{F}\times M_{F}\)_,_ (c) _the virtual pullback of the fundamental class of_ \(M_{F}\) _induced by the morphism_ \(M_{Y_{1}/D_{1}}\to M_{F}\)_,_ (d) _the virtual pullback of the fundamental class of_ \(M_{F}\) _induced by the morphism_ \(M_{Y_{2}/D_{2}}\to M_{F}\)_._ Proof.: We have a natural commutative diagram in which the square is cartesian. The vertical maps have natural obstruction theories given by the trace-free part of the Atiyah class of a universal sheaf over \(M_{F}\), and these are naturally compatible with the obstruction theory of \(M_{Y_{1}/D_{1}}\) over \(\operatorname{Pic}_{\widetilde{Y}_{1}}\). It follows that we have a (non-canonical) relative perfect obstruction theory for the morphism \(\varphi:M_{Y_{1}/D_{1}}\to\operatorname{Pic}_{\widetilde{Y}_{1}}\times_{\operatorname{Pic}_{F}}M_{F}\). Since the horizontal maps in the square are l.c.i., we may endow them with their canonical obstruction theory, which are then automatically compatible with the obstruction theory for \(\varphi\). There is then an induced obstruction theory for the restriction map \(M_{Y_{1}/D_{1}}\to M_{F}\), which has the property that the induced virtual pullback map factors through l.c.i. pullback along \(\operatorname{Pic}_{\widetilde{Y}_{1}}\to\operatorname{Pic}_{F}^{d}\) followed by the virtual pullback along \(\varphi\). Note that since \(\operatorname{Pic}_{\widetilde{Y}_{1}}\) is not Deligne-Mumford (or quasi-compact), there may be subtleties as to how l.c.i. pullback along \(\operatorname{Pic}_{\widetilde{Y}_{1}}\to\operatorname{Pic}_{F}^{d}\) interacts with virtual pullbacks. Since one can exhaust \(\operatorname{Pic}_{\widetilde{Y}_{1}}\) by global quotients of algebraic spaces, one can work \(GL\)-equivariantly on a suitable principal bundle.
This proves the first point for \(i=1\), and by symmetry for \(i=2\). Now, consider the commutative diagram with cartesian squares The vertical maps in the upper square have obstruction theories given by the sum of Atiyah classes. The obstruction theory of the left vertical map is compatible with the one of the diagonal map given by the Atiyah class. We get an induced obstruction theory on the map \(M_{Y_{1}/D_{1}}\times_{M_{F}}M_{Y_{2}/D_{2}}\to M_{Y_{1}/D_{1}}\times_{ \operatorname{Pic}_{F}^{d}}M_{Y_{2}/D_{2}}\) which is isomorphic to \(\operatorname{Ext}_{0}^{1}(\mathcal{E}_{D},\mathcal{E}_{D})^{\vee}\) concentrated in degree \(-1\). On the other hand, we have the cartesian diagram Since virtual pullback is independent of the precise choice of map in the obstruction theory, this shows that the virtual pullback map for the morphism \(M_{Y_{1}/D_{1}}\times_{M_{F}}M_{Y_{2}/D_{2}}\to M_{Y_{1}/D_{1}}\times_{\mathrm{Pic} _{F}^{d}}M_{Y_{2}/D_{2}}\) is equal to the Gysin-pullback along the diagonal of \(M_{F}\). Then, considering the diagram with cartesian squares shows that the virtual class on \(M_{Y_{1}/D_{1}}\times_{M_{F}}M_{Y_{2}/D_{2}}\) is the Gysin-pullback of the one on \(M_{Y_{1}/D_{1}}\times M_{Y_{2}/D_{2}}\) along the diagonal morphism of \(M_{F}\times M_{F}\). The last two equivalences follow, since virtual pullback commutes with l.c.i. pullback, and the fact that \(M_{Y_{1}/D_{1}}\times_{M_{F}}M_{Y_{2}/D_{2}}\) is identified with the base change of \((M_{Y_{1}/D_{1}}\times M_{F})\times_{(M_{F}\times M_{F})}(M_{F}\times M_{Y_{2} /D_{2}})\) along the diagonal \(M_{F}\to M_{F}\times M_{F}\). ### Decomposition formulas We show some basic results regarding how tautological classes interact with the glueing morphism. Let \(\gamma\) be a cohomology class on \(X\), let \(\gamma_{i}\) be its restriction to \(Y_{i}\) for \(i=1,2\), and let \(\gamma_{F}\) be its restriction to the singular fiber \(F\). Consider the glueing map \[\Gamma:M_{Y_{1}/D_{1}}\times_{M_{F}(r,d)}M_{Y_{2}/D_{2}}\to M_{X/C}\] **Lemma 4.5**.: _We have \(\Gamma^{*}\operatorname{ch}_{i}(\gamma)=\operatorname{pr}_{1}^{*} \operatorname{ch}_{i}(\gamma_{1})+\operatorname{pr}_{2}^{*}\operatorname{ch}_ {i}(\gamma_{2})\)._ Proof.: Let \(\widetilde{X}\to M_{X/C}\) denote the universal expansion, and \(\Gamma^{*}\widetilde{X}\) its pullback to \(M_{Y_{1}/D_{1}}\times_{M_{F}(r,d)}M_{Y_{2}/D_{2}}\), so that \(\Gamma^{*}\widetilde{X}=\operatorname{pr}_{1}^{*}\widetilde{Y}_{1}\cup_{F} \operatorname{pr}_{2}^{*}\widetilde{Y}_{2}\). Recall that \(\operatorname{ch}_{i}(\gamma)\) is defined via (6) in terms of a Gysin map \(\pi_{!}\), which commutes with the pullback along \(\Gamma\). Then consider the following diagram of maps Then one can check that we have an identity \((\pi_{\Gamma})!=(\pi_{12})!\circ\sigma^{*}\) of maps \(H^{*}(\operatorname{pr}_{1}^{*}\widetilde{Y}_{1}\cup_{F}\operatorname{pr}_{2} ^{*}\widetilde{Y}_{2})\to H^{*-2}(M_{Y_{1}/D_{1}}\times_{M_{F}(r,d)}M_{Y_{2}/D _{2}})\). The lemma follows from this. We consider the restriction of classes derived from the virtual tangent bundle. 
**Lemma 4.6**.: _We have_ \[\Gamma^{*}\operatorname{ch}_{i}(T^{\mathrm{vir}}_{M_{X/C}})=\operatorname{pr}_ {1}^{*}\operatorname{ch}_{i}(T^{\mathrm{vir}}_{M_{Y_{1}/D_{1}}})+\operatorname {pr}_{2}^{*}\operatorname{ch}_{i}(T^{\mathrm{vir}}_{M_{Y_{2}/D_{2}}})- \operatorname{pr}_{F}^{*}\operatorname{ch}_{i}(T_{M_{F}/\operatorname{Pic}_{F} }).\] Proof.: The obstruction theory on \(M_{X/C}\) is a descent of \(R\operatorname{Hom}_{0}(\mathcal{E},\mathcal{E})\), where \(\mathcal{E}\) is the universal sheaf on the un-rigidified moduli stack. Since \([\Gamma^{*}\mathcal{E}]=[\operatorname{pr}_{1}^{*}\mathcal{E}_{1}]+[ \operatorname{pr}_{2}^{*}\mathcal{E}_{2}]-[\operatorname{pr}_{F}^{*} \mathcal{E}_{F}]\), it follows from adjunction that \[T^{\operatorname{vir}}_{M_{X/C}}=\operatorname{pr}_{1}^{*}T^{\operatorname{ vir}}_{M_{Y_{1}/D_{1}}}+\operatorname{pr}_{2}^{*}T^{\operatorname{vir}}_{M_{Y_{2}/D_{2} }}-\operatorname{pr}_{F}^{*}T_{M_{Y_{F}}/\operatorname{Pic}_{F}},\] and all of these are perfect objects. The result follows from this. ### Fixed determinant spaces In order to give more precise statements for some of the invariants we consider, we want to work in some 'fixed determinant' theory. We make this precise here in the two cases we are interested in: For a simple degeneration and for a surface fibered over a smooth curve with a single marked point. Simple Degeneration.Let \(B\) be regular one-dimensional base, and \(X_{B}\to C_{B}\) a family of fibered surface. Assume the total space \(C_{B}\) is regular, that \(C_{B}\to B\) is smooth outside \(b_{0}\in B\), and that \(C_{b_{0}}\) is a union of two components along a simple node. We say that \(X_{B}\to C_{B}\) is a simple degeneration of fibered surfaces. We consider the stack \(\operatorname{Exp}_{C_{B}/B}\to B\) with universal expansions \(\widetilde{X}_{B}\to\widetilde{C}_{B}\), and the relative Picard scheme \(\operatorname{Pic}_{\widetilde{X}_{B}/\operatorname{Exp}_{C_{B}/B}}\). For the following lemma, we introduce some notation: Given an etale morphism \(\beta:B\to\mathbb{A}^{1}\), such that \(b_{0}=\beta^{-1}(0)\), let \(B[n]:=B\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1}\) and \(C_{B}[n]\to B[n]\) be the standard degeneration as in [11, SS1.1]. Let also \(X_{B}[n]:=X_{B}\times_{C_{B}}C_{B}[n]\). This defines a smooth morphism \(\beta_{n}:B[n]\to\operatorname{Exp}_{C_{B}/B}\). Say \(Y_{1}\) and \(Y_{2}\) are the irreducible components of \(X_{b_{0}}\). Then let \(Y_{1,k}\subset X_{B}[n]\) denote the divisor that corresponds to \(Y_{1}\) over the \(k\)-th coordinate hyperplane. 
**Lemma 4.7**.: * _There is a minimal closed sub-stack_ \(\overline{e}\subset\operatorname{Pic}_{\widetilde{X}_{B}/B}\) _through which the identity section_ \(\operatorname{Exp}_{C_{B}/B}\to\operatorname{Pic}_{\widetilde{X}_{B}/ \operatorname{Exp}_{C_{B}/B}}\) _factors._ * _The stack_ \(\overline{e}\) _is naturally a subgroup of_ \(\operatorname{Pic}_{\widetilde{X}_{B}/\operatorname{Exp}_{C_{B}/B}}\) _and the structure map_ \(\overline{e}\to\operatorname{Exp}_{C_{B}/B}\) _is etale._ * _For_ \(\beta:B\to\mathbb{A}^{1}\) _as above, we have that_ \(\beta[n]^{-1}\overline{e}\subseteq\operatorname{Pic}_{X_{B}[n]/B[n]}\) _is equal to the reduced subscheme supported on the union of sections defined by line bundles_ \(\mathcal{O}_{X_{B}[n]}(\sum_{i=1}^{n+1}a_{k}Y_{1,k})\) _for_ \(a_{k}\in\mathbb{Z}\)_, which is a closed set._ Proof.: We will show that the union of sections \(\mathcal{O}_{X_{B}[n]}(\sum_{i=1}^{n+1}a_{k}Y_{1,k})\) defines a closed sub-space of \(\operatorname{Pic}(X_{B}[n]/B[n])\), which is the closure of the identity section \(B[n]\to\operatorname{Pic}_{X_{B}[n]/B[n]}\). It is then straightforward to see that the collection of such closed substacks for all \(n\) induces a closed substack of \(\operatorname{Pic}_{\widetilde{X}_{B}/B}\), which is the minimal substack containing the identity section. Since the pull-back \(\operatorname{Pic}_{C_{B}[n]/B[n]}\to\operatorname{Pic}_{X_{B}[n]/B[n]}\) is closed, it is enough to show the analogous statement for \(\operatorname{Pic}_{C_{B}[n]/B[n]}\). Let \(D_{1,k}\) denote the image of \(Y_{1,k}\) in \(C_{B}[n]\). Since the identity component \(\operatorname{Pic}^{0}_{C_{B}[n]/B[n]}\) is separated, it follows that the identity section is closed in it and doesn't contain any other section that agrees with the identity section generically over \(B[n]\). Then for any other section \(\xi\) given by a line bundle \(\mathcal{O}_{X_{B}[n]}(\sum a_{k}D_{1,k})\), we get a closed immersion \(\xi\subseteq\xi\operatorname{Pic}^{0}_{C_{B}[n]/B[n]}\). It follows that the union over all such sections \(\xi\) is a closed subset in \(\cup_{\xi}\xi\operatorname{Pic}^{0}_{X_{B}[n]/B[n]}\). But in fact \(\cup_{\xi}\operatorname{Pic}^{0}_{C_{B}[n]/B[n]}=\operatorname{Pic}_{C_{B}[n ]/B}\). To see b), it is enough to show that \(\cup_{\xi}\xi\subset\operatorname{Pic}_{C_{B}[n]/B[n]}\) is a subgroup-space and etale over \(B[n]\). The first point is clear, since the collection of \(\xi\)'s forms a group. To see that it is etale, we may work locally on the domain. But \(\cup_{\xi}\xi\cap\xi_{0}\operatorname{Pic}^{0}_{C_{B}[n]/B[n]}=\xi_{0}\), which is clearly etale over \(B[n]\). Let \(L\) be a line bundle on \(X_{B}\) with degree \(d\) on fibers over \(C_{B}\), let \(\alpha\) be a generic stability condition on \(C_{B}\) and let \(L_{0}\) be a line bundle on \(X_{B}\) with fiber degree \(d_{0}>0\). Let \(\Delta\in\mathbb{Z}\). **Definition 4.8**.: We let \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,L,\Delta)\) denote the moduli stack of \(\alpha\)-balanced \(f\)-stable sheaves on minimal expansions of \(X_{B}\) whose determinant map factors through \(L\overline{e}\) and whose discriminant is \(\Delta\). This is naturally a closed substack of \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,\overline{c_{1}(L)},\Delta)\), where \(\overline{c_{1}(L)}\) is the section \(\operatorname{Exp}_{C_{B}/B}\to\overline{\mathcal{N}\mathcal{S}}_{X_{B}/B}\) induced by \(c_{1}(L)\). 
We also let \(M^{\alpha}_{X_{B}/C_{B}}(r,L,\Delta)\) denote the \(\mathbb{G}_{m}\)-rigidification of \(\mathcal{M}^{\alpha}_{X_{B}/C_{B}}(r,L,\Delta)\). **Remark 4.9**.: We have an analogous result if \(X\to C\) is a fibered surface with \(C\) a union of two smooth curves along a single node (e.g. if \(X\to C\) is the central fiber of a simple degeneration, but without assuming a smoothing exists). We use the notation \(\mathcal{M}^{\alpha}_{X/B}(r,L,\Delta)\) and \(M^{\alpha}_{X/B}(r,L,\Delta)\) for the resulting moduli stacks. We leave the details to the reader, who may alternatively always assume that we are working with the central fiber of a simple degeneration. Expansions.Let \((C,x)\) be a smooth marked curve and \(X\to C\) be a fibered surface. We consider the stack of expansions \(Exp_{C}\) and the relative Picard-scheme \(\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}_{C}}\). The stack \(\operatorname{Exp}_{C}\) has a cover by affine spaces \(\alpha_{n}:\mathbb{A}^{n}\to\operatorname{Exp}_{C}\), together with a standard expansion \(X[n]\to C[n]\)[11, SS4.1]. We let \(Y_{k}\) denote the closure of the component induced by \(X\) over the \(k\)-th coordinate hyperplane in \(\mathbb{A}^{n}\). Then, we have the analogue to Lemma 4.7, with essentially the same proof. **Lemma 4.10**.: 1. _There is a minimal closed sub-stack_ \(\overline{e}\subseteq\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}_{C}}\) _through which the identity section_ \(\operatorname{Exp}_{C,x}\to\operatorname{Pic}_{\widetilde{X}_{B}/\operatorname {Exp}_{C,x}}\) _factors._ 2. _The stack_ \(\overline{e}\) _is naturally a subgroup of_ \(\operatorname{Pic}_{\widetilde{X}/\operatorname{Exp}_{C}}\) _and the structure map_ \(\overline{e}\to\operatorname{Exp}_{C,x}\) _is etale._ 3. _Given an etale morphism_ \(\beta:B\to\mathbb{A}^{1}\)_, Then,_ \(\alpha_{n}^{-1}\overline{e}\subseteq\operatorname{Pic}(X[n]/\mathbb{A}^{n})\) _is equal to the reduced subscheme supported on the union of sections defined by line bundles_ \(\mathcal{O}_{X_{B}[n]}(\sum_{i=1}^{n+1}a_{k}Y_{k})\) _for_ \(a_{k}\in\mathbb{Z}\)_, which is a closed set._ Proof.: Let \(L\) be a line bundle on \(X\). Let \(\alpha\in\mathbb{Q}\) be arbitrary. **Definition 4.11**.: We let \(\mathcal{M}^{\alpha}_{X/C}(r,L,\Delta)\) denote the moduli stack of \(\alpha\)-balanced \(f\) -stable sheaves on minimal expansions of \(X\) whose determinant map factors through \(L\overline{e}\) and whose discriminant is \(\Delta\). This is naturally a closed substack of \(\mathcal{M}^{\alpha}(r,\overline{c_{1}}(L),\Delta)\). We also denote by \(M^{\alpha}_{X/C}(r,L,\Delta)\) the \(\mathbb{G}_{m}\)-rigidification. The results of this section carry over to this setting in the obvious way. In particular this applies to Lemmas 4.5 and 4.6 and to Propositions 4.2 and 4.4. For clarity, we restate Proposition 4.2 for this setting explicitly. For this, suppose we are in the situation of Proposition 4.2 and that we have also fixed a line bundle \(L\) on \(X\) in class \(\overline{c}_{1}\). By abuse of notation, write \(L_{1}+L_{2}=\overline{L}\), if \(L_{1}\) and \(L_{2}\) are line bundles on \(Y_{1}\) and \(Y_{2}\) respectively, whose restrictions to \(F\) are isomorphic and such that there exists an integer \(\ell\) with \(L|_{Y_{1}}\simeq L_{1}(-\ell F)\) and \(L|_{Y_{2}}\simeq L_{2}(\ell F)\). Let also \(L_{F}:=L|_{F}\). 
**Proposition 4.12**.: _We have an equality in \(A_{*}(M^{\alpha}_{X/C}(r,L,\Delta))\)._ \[[M^{\alpha}_{X/C}]^{\mathrm{vir}}=\sum_{\begin{subarray}{c}L_{1}+L_{2}= \overline{L}\\ \Delta_{1}+\Delta_{2}=\Delta\end{subarray}}\Gamma_{*}[M^{\lfloor\alpha_{1} \rceil}_{Y_{1}/D_{1}}(r,L_{1},\Delta_{1})\times_{M_{F}(r,L_{F})}M^{\lfloor \alpha_{2}\rceil}_{Y_{2}/D_{2}}(r,L_{2},\Delta_{2})]^{\mathrm{vir}}\] ### Invariants We define the type of invariants that we want to consider in the degeneration formula. For simplicity, we restrict the discussion to a specific type of invariant and to the fixed determinant case. We expect that similar formulas hold for, say Segre and (with some more work) Verlinde invariants. It also shouldn't be essential to work with the fixed determinant version, but then one should consider insertions coming from the Picard scheme in order to get non-trivial invariants when the two theories differ. Let \(A\) be a multiplicative genus (e.g. the Chern polynomial, \(\chi_{y}\)-genus or elliptic genus. Let \(X\to C\) be a fibered surface, where \(C\) is either smooth, with possibly a marked point, or a union of two irreducible components along a single node. We assume we have fixed data \(\alpha,L,L_{0}\) as in SS4.4. For \(\mathcal{M}\to M\) a moduli stack and its \(\mathbb{G}_{m}\)-rigidification as considered throughout, we will consider cohomology classes of the form \[\Phi(\mathcal{E})=A(T^{\mathrm{vir}})B(\mathcal{E}). \tag{9}\] Here * \(\mathcal{E}\) denotes the universal sheaf on some family of expansions over \(\mathcal{M}\) of a given fibered surface \(X\to C\), * \(A\) is a multiplicative transformation from the \(K\)-theory of perfect objects to cohomology with coefficients in some ring \(\Lambda\) containing \(\mathbb{Q}\), * \(B(\mathcal{E})=\exp(\sum_{i,\gamma}\mathrm{ch}_{i}(\gamma)q_{\gamma,i})\), where \(i\) ranges through integers \(\geq 2\) and \(\gamma\) ranges through a basis of cohomology of \(X\). Let \(K:=\Lambda[[(q_{\gamma,i})_{\gamma,i}]]\) denote the coefficient field of \(\Phi\). We let \(\mathcal{E}\) denote the universal sheaf over \(\mathcal{M}^{\alpha}_{X/C}(r,L,\Delta)\). We assume that \(\alpha\) is generic and that it satisfies (5). If \(X\to C\) has no marked fiber (so either \(C\) is smooth or has two components and a single node), we define an invariant simply as \[I^{\Phi}_{X/C}(r,L,\Delta):=\int\Phi(\mathcal{E})\cap[M^{\alpha}_{X/C}(r,L, \Delta)]^{\mathrm{vir}}\in K.\] Here, the left hand side a-priori implicitly depends on \(\alpha\), but it will follow from the decomposition formula that it is actually independent for any generic \(\alpha\) satisfying (5). If \(C\) is smooth with a single marked point, we let \(L_{F}\) denote the restriction of \(L\) to the marked fiber, and set \[V:=H_{*}(M_{F}(r,L_{F}),K).\] Then we obtain invariants valued in \(V\) by pushing forward along the evaluation map \[I^{\Phi}_{X/C}(r,L,\Delta):=\mathrm{ev}_{*}\left(\Phi(\mathcal{E})\cap[M^{ \alpha}_{X/C}(r,L,\Delta)]^{\mathrm{vir}}\right)\in V.\] If \(C\) has a marked point, let \(F\) be the marked fiber. Otherwise, let \(F\) denote the fiber over an arbitrary smooth point of \(C\). We set \[Z_{X/C,\Phi}(q):=\sum_{\begin{subarray}{c}\Delta\in\mathbb{Z}\\ 0\leq\ell<r\end{subarray}}I^{\Phi}_{X/C}(r,L+\ell[F],\Delta)\,q^{\Delta-(r^{2}- 1)\chi(\mathcal{O}_{X})}.\] This is valued in \(K[[q]]\) or \(V[[q]]\) respectively and depends implicitly on \(r,L\) and \(L_{0}\). Here, we choose a different generic \(\alpha\) satisfying (5) for each \(\Delta\) and \(\ell\). 
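As a simple illustration of how these definitions unwind (the particular choices here are made only for concreteness and play no role elsewhere), one may take \(A=1\) and keep only the single formal variable \(q_{\gamma,2}\) in \(B\), so that \(\Phi(\mathcal{E})=\exp(\operatorname{ch}_{2}(\gamma)\,q_{\gamma,2})\). If \(C\) has no marked fiber, expanding the exponential gives \[I^{\Phi}_{X/C}(r,L,\Delta)=\sum_{k\geq 0}\frac{q_{\gamma,2}^{k}}{k!}\int\operatorname{ch}_{2}(\gamma)^{k}\cap[M^{\alpha}_{X/C}(r,L,\Delta)]^{\mathrm{vir}},\] so the coefficients of the series are intersection numbers of the descendent classes of (6) against the virtual fundamental class.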
**Remark 4.13**.: Note that for \(c_{1}=L+\ell F\), the discrimant and second chern class are related by \[\Delta=2rc_{2}-(r-1)c_{1}^{2}=2rc_{2}-(r-1)L^{\cdot 2}-2\ell(r-1)d.\] As \(c_{2}\) ranges through the integers and \(\ell\) ranges in \([0,r-1]\), we find that the possible exponents of \(q\) for which the coefficient of \(Z_{X/C,\Phi}\) is non-empty, lie in \(2\mathbb{Z}+(r-1)L^{\cdot 2}-(r^{2}-1)\chi(\mathcal{O}_{X})\), and each such integer corresponds to a unique choice of \(\ell\) and \(c_{2}\). In other words, we have \[Z_{X/C,\Phi}(q)=\] \[q^{-(r-1)L^{\cdot 2}-(r^{2}-1)\chi(\mathcal{O}_{X})}\sum_{ \begin{subarray}{c}c_{2}\in\mathbb{Z}\\ 0\leq\ell<r\end{subarray}}I^{\Phi}_{X/C}(r,L+\ell[F],\Delta(c_{2},\ell))\,q^{2 (rc_{2}-\ell(r-1)d)}.\] We see that the exponents of \(q\) in the sum range exactly through \(2\mathbb{Z}\). ### Degeneration formula for multiplicative classes Let \(X\to C\) be a fibered surface, where \(C\) is the union of two smooth curves along a single node and let the set-up be as in SS4.5. We let \(Y_{i}\to D_{i}\) be the surfaces fibered over a smooth curve with a single marking obtained as the components of \(X\to C\) and let \(F\) denote the fiber of \(X\) over the node. Let \[V:=H_{*}(M_{F}(r,L),K)\] so that the invariants associated to \(Y_{i}/D_{i}\) for \(i=1,2\) are valued in \(V\). Note that \(V\) has a ring structure with respect to intersection product, which is commutative as cohomology in odd degrees vanishes. We denote the intersection product of cycles by \(\alpha\cdot\beta\). We define a bilinear pairing on \(V\) as \[*_{\Phi}:\,V\times V \to K\] \[\alpha,\beta \mapsto\int_{M_{F}(r,L)}A(T_{M_{F}(r,L)})^{-1}\cap\alpha\cdot\beta\] We extend the multiplication \(*_{\Phi}\) to Laurent series in \(q\) over \(V\) and \(K\) by applying it coefficientwise and dividing the final result by \(q^{\dim M_{F}(r,L)}\). **Theorem 4.14**.: \[Z_{X/C,\Phi}(q)=Z_{X_{1}/C_{1},\Phi}(q)*_{\Phi}Z_{X_{2}/C_{2},\Phi}(q)\] Proof.: Comparing coefficients, and in view of Remark 4.13, we may consider \(c_{2}\) and \(k\), and therefore \(\Delta\) as fixed, and we need to show that - for some fixed chosen stability condition \(\alpha\) on \(C\) - we have \[I_{X/C}^{\Phi}(r,L+\ell[F],\Delta)=\sum I_{Y_{1}/D_{1}}^{\Phi}(r,L_{1}+\ell_{1} [F],\Delta_{1})*_{\Phi}I_{Y_{2}/D_{2}}^{\Phi}(r,L_{2}+\ell_{2}[F],\Delta_{2}),\] where the sum ranges over all \(0\leq\ell_{1},\ell_{2}<r\) and over all \(\Delta_{1},\Delta_{2}\in\mathbb{Z}\) such that \[\Delta-(r^{2}-1)\chi(\mathcal{O}_{X})=\Delta_{1}+\Delta_{2}-(r^{2}-1)(\chi( \mathcal{O}_{Y_{1}})+\chi(\mathcal{O}_{Y_{2}}))-\dim M_{F}(r,L_{F}),\] or equivalently, such that \(\Delta=\Delta_{1}+\Delta_{2}\). Writing out the definition of invariants, we have the equivalent formula \[\begin{split}&\int\Phi(\mathcal{E})\cap[M_{X/C}^{\alpha}(r,L+ \ell[F],\Delta)]^{\text{vir}}=\\ &\sum\left(\text{ev}_{*}(\Phi(\mathcal{E}_{1})\cap[M_{Y_{1}/D_{ 1}}^{b}(r,L_{1}+\ell_{1}F,\Delta_{1})]^{\text{vir}})*_{\Phi}\right.\\ &\left.\text{ev}_{*}(\Phi(\mathcal{E}_{2})\cap[M_{Y_{2}/D_{2}}^{b }(r,L_{2}+\ell_{2}F,\Delta_{2})]^{\text{vir}})\right)\end{split} \tag{10}\] for some choice of generic \(\alpha\) satisfying (5) and where we use the notation of Remark 3.26. Since \(\Delta=\Delta_{1}+\Delta_{2}\) and by the dependence of the discriminant on first and second Chern classes, we have that each term of the sum for which the moduli spaces are non-empty, that \(\ell\equiv\ell_{1}+\ell_{2}\mod r\). 
In particular, for each such term there exists unique representatives \(\ell_{i}^{\prime}\equiv\ell_{i}\mod r\), such that \(|\alpha(c_{1}(L_{i}+\ell_{i}^{\prime}F),\Delta_{i})-\alpha_{i}|<1/2\). It follows that \(\ell=\ell_{1}^{\prime}+\ell_{2}^{\prime}\). Letting \(L_{i}^{\prime}:=L_{i}+\ell_{i}^{\prime}F\) and \(L^{\prime}:=L+\ell F\), we in particular have \(L_{1}^{\prime}+L_{2}^{\prime}=L^{\prime}\) in the notation preceding Proposition 4.12. Since twisting by a line bundle induces an isomorphism between moduli spaces, we have \[I_{Y_{i}/D_{i}}^{\Phi}(r,L_{1}+\ell_{1}F,\Delta_{1}) =I_{Y_{i}/D_{i}}^{\Phi}(r,L_{1}+\ell_{1}^{\prime}F,\Delta_{1})\] \[=\operatorname{ev}_{*}\left(\Phi(\mathcal{E}_{i})\cap[M_{Y_{i}/D _{i}}^{[\alpha_{i}]}(r,L_{1}^{\prime},\Delta_{i})]^{\operatorname{vir}}\right)\] In summary, we may rewrite the right hand side of (10) as \[\sum_{\begin{subarray}{c}\Delta_{1}+\Delta_{2}=\Delta\\ L_{1}^{\prime}+L_{2}^{\prime}=\overline{L^{\prime}}\end{subarray}}\Bigl{(} \operatorname{ev}_{*}(\Phi(\mathcal{E}_{1})\cap[M_{Y_{1}/D_{1}}^{[\alpha_{1}] }(r,L_{1}^{\prime},\Delta_{1})]^{\operatorname{vir}}){*}_{\Phi}\] \[\operatorname{ev}_{*}(\Phi(\mathcal{E}_{2})\cap[M_{Y_{2}/D_{2}}^{[\alpha_{2}] }(r,L_{2}^{\prime},\Delta_{2})]^{\operatorname{vir}})\Bigr{)}\] We examine each term of this sum. Using the definition of \({*}_{\Phi}\), the projection formula and Proposition 4.4, we may rewrite a single term in this sum as the pushforward to a point of \[\operatorname{pr}_{F}^{*}A(T_{M_{F}})^{-1}\cap\operatorname{pr}_{1 }^{*}\Phi(\mathcal{E}_{1})\cap\operatorname{pr}_{2}^{*}\Phi(\mathcal{E}_{2})\cap\] \[[M_{Y_{1}/D_{1}}^{[\alpha_{1}]}(r,L_{1}^{\prime},\Delta_{1})\times _{M_{F}}M_{Y_{2}/D_{2}}^{[\alpha_{2}]}(r,L_{2}^{\prime},\Delta_{2})]^{ \operatorname{vir}}.\] Then by Lemmas 4.5 and 4.6, this is equal to \[\Gamma^{*}\Phi(\mathcal{E})\cap[M_{Y_{1}/D_{1}}^{[\alpha_{1}]}(r,L_{1}^{\prime },\Delta_{1})\times_{M_{F}}M_{Y_{2}/D_{2}}^{[\alpha_{2}]}(r,L_{2}^{\prime}, \Delta_{2})]^{\operatorname{vir}},\] where \(\Gamma\) is the glueing map to \(M_{X/C}^{\alpha}(r,L^{\prime},\Delta)\). Using this, we have reduced equation (10) to Proposition 4.2 and are done. ### Application to Elliptic Fibrations Suppose that \(X\to C\) has genus one fibers in the situation of Theorem 4.14. In this case, we have \(V=H_{*}(M_{F}(r,L),K)=H_{*}(\operatorname{pt},K)\simeq K\), so the relative invariants are simply power series valued in the coefficient ring. In this case, the statment of Theorem 4.14 becomes especially simple **Corollary 4.15**.: _If \(g=1\), we have the following identity in \(K[[q]]\):_ \[Z_{X/C,\Phi}(q)=Z_{Y_{1}/D_{1},\Phi}(q)\,Z_{Y_{2}/D_{2},\Phi}(q).\] In the rest of this subsection, we give a prove of Theorem 1.4. We will consider the special case \[\Phi(\mathcal{E})=A_{y,q}(T^{\operatorname{vir}})\] of (9) obtained by setting \(B=1\), and taking \(A_{y,q}\) to be the insertion considered in [10, SS4.8] which defines the virtual elliptic genus [11]. By results of de Jong and Friedman, we have suitable degenerations: **Theorem 4.16**.: _Let \(X\) be an elliptic surface over \(\mathbb{P}^{1}\) of degree \(e\geq 2\) without multiple or reduced fibers. Let \(D\) be a \(d\)-section on \(X\), and suppose that there exist no \(d^{\prime}\)-sections for any \(1\leq d^{\prime}<d\). 
Then there exists a connected base \(B\) and a family of elliptic surfaces \(X_{B}\to C_{B}\to B\) together with a family of \(n\)-sections \(D_{B}\subset X_{B}\) such that_ * _For some_ \(b_{0}\in B\)_, the triple_ \(D_{b_{0}}\subset X_{b_{0}}\to C_{b_{0}}\) _is isomorphic to_ \(D\subset X\to\mathbb{P}^{1}\)_._ * _For some_ \(b_{1}\in B\)_, we have:_ * \(X_{b_{1}}\to C_{b_{1}}\) _is obtained from glueing two elliptic surfaces_ \(Y_{1}\to\mathbb{P}^{1}\) _and_ \(Y_{2}\to\mathbb{P}^{1}\) _along an isomorphic fiber, where_ \(Y_{1}\) _is a degree_ \(e-1\) _elliptic surface and_ \(Y_{2}\) _is a rational elliptic surface._ * _The divisor_ \(D_{b_{1}}\subset X_{b_{1}}\) _restricts to a_ \(d\)_-section on_ \(Y_{1}\) _and to a smooth rational curve_ \(D\) _satisfying_ \(D^{2}=d-2\) _on_ \(Y_{2}\)_. Moreover, if_ \(e\geq 2\)_, then_ \(Y_{1}\) _has no_ \(d^{\prime}\)_-sections for_ \(1\leq d^{\prime}<d\)_._ Proof.: For \(e\geq 3\), this follows from Theorem 4.9 in [1] together with constructions going into Claim 5.7 in [1]. For \(e=2\), it follows from similar arguments using the Torelli theorem for lattice polarized K3 surfaces. We will also need the following vanishing result **Proposition 4.17**.: _Let \(E\) be an elliptic curve and consider \(X=E\times\mathbb{P}^{1}\to\mathbb{P}^{1}\), and let \(x_{1},\ldots,x_{n}\) be distinct points on \(\mathbb{P}^{1}\). Let \(r>0\) and let \(L\) be a line bundle on \(E\times\mathbb{P}^{1}\) that has degree \(d\) on fibers, with \(d\) coprime to \(r\). Then we have_ \[Z^{\operatorname{Ell}}_{X/(\mathbb{P}^{1},(x_{1},\ldots,x_{n}))}=1.\] Proof.: For any \(\Delta\) for which the moduli space \[M_{X/(\mathbb{P}^{1},(x_{1},\ldots,x_{n}))}(r,L,\Delta)\] is non-empty one can show that either it is a point, or it admits - up to a finite etale cover - an elliptic curve factor. In the latter case, all virtual Chern numbers vanish. The result follows from this. Proof of Theorem 1.4.: Since enumerative invariants for surfaces with \(p_{g}(X)>0\) are independent of choice of polarization, we may use Theorem 2.21, and compute invariants using moduli spaces of \(f\)-stable sheaves. In the case \(e=2\), the result follows from the DMVV formula and the fact that any moduli space of Gieseker-stable sheaves on a K3 surface is deformation invariant to a Hilbert scheme of points when stability equals semi-stability. Next, we show that for any rational elliptic surface \(Y\to\mathbb{P}^{1}\) and any divisor \(D\) on \(Y\) of fiber degree coprime to \(r\), we have (when taking invariants and generating series with respect to the moduli spaces of fiber-stable objects) \[Z^{\operatorname{Ell}}_{Y/\mathbb{P}^{1}}=(Z^{\operatorname{Ell}}_{K3})^{1/2}.\] By Proposition 4.17, the relative and absolute invariants agree. In particular, we may glue two identical copies of \(Y\) together along a fiber and deform the resulting surface to a smooth K3 surface, while preserving the divisor class, see [10, Theorem 5.10] and [10, Proposition 4.3]. 
Now, we can argue inductively on \(e\), with base case \(e=2\): by Theorem 4.16 and Corollary 4.15, we obtain an identity \[Z^{\operatorname{Ell}}_{X/\mathbb{P}^{1}}=Z^{\operatorname{Ell}}_{X^{\prime}/(\mathbb{P}^{1},0)}\,Z^{\operatorname{Ell}}_{Y/(\mathbb{P}^{1},0)},\] where \(X^{\prime}\) is an elliptic surface of degree \(e-1\) with a chosen \(d\)-section \(D^{\prime}\) (and which possesses no \(d^{\prime}\)-sections for \(1\leq d^{\prime}<d\)), and where \(Y\) is a rational elliptic surface with a chosen rational curve \(D\) satisfying \(D^{\cdot 2}=d-2\). Using Proposition 4.17 again and by the inductive hypothesis, \[Z^{\operatorname{Ell}}_{X/\mathbb{P}^{1}}=Z^{\operatorname{Ell}}_{X^{\prime}/\mathbb{P}^{1}}\,Z^{\operatorname{Ell}}_{Y/\mathbb{P}^{1}}=(Z^{\operatorname{Ell}}_{K3})^{(e-1)/2}(Z^{\operatorname{Ell}}_{K3})^{1/2}.\] This finishes the proof.
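Explicitly, the exponents in the last display simply add up, \[(Z^{\operatorname{Ell}}_{K3})^{(e-1)/2}\,(Z^{\operatorname{Ell}}_{K3})^{1/2}=(Z^{\operatorname{Ell}}_{K3})^{e/2},\] which is the power appearing in the statement being proved for degree \(e\); the base case \(e=2\) is the K3 case treated via the DMVV formula above.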
2309.14813
The constraint tensor for null hypersurfaces
In this work we provide a definition of the constraint tensor of a null hypersurface data which is completely explicit in the extrinsic geometry of the hypersurface. The definition is fully covariant and applies for any topology of the hypersurface. For data embedded in a spacetime, the constraint tensor coincides with the pull-back of the ambient Ricci tensor. As applications of the results, we find three geometric quantities on any transverse submanifold $S$ of the data with remarkably simple gauge behaviour, and prove that the restriction of the constraint tensor to $S$ takes a very simple form in terms of them. We also obtain an identity that generalizes the standard near horizon equation of isolated horizons to totally geodesic null hypersurfaces with any topology. Finally, we prove that when a null hypersurface has product topology, its extrinsic curvature can be uniquely reconstructed from the constraint tensor plus suitable initial data on a cross-section.
Miguel Manzano, Marc Mars
2023-09-26T10:26:47Z
http://arxiv.org/abs/2309.14813v2
# The Constraint Tensor: General Definition and Properties ###### Abstract The formalism of hypersurface data allows one to study hypersurfaces of any causal character _abstractly_ (i.e. without viewing them as embedded in an ambient space). The intrinsic and extrinsic geometry of a hypersurface is encoded in a data set \(\mathcal{D}\). In this work we codify at the abstract level information about the ambient Ricci tensor by introducing the so-called _constraint tensor_\(\mathcal{R}\). We provide its abstract definition in terms of general data \(\mathcal{D}\), without imposing any topological assumptions and in a fully covariant manner. Moreover, we work in arbitrary (hypersurface data) gauge. We prove that, in the embedded case, \(\mathcal{R}\) corresponds to a certain combination of components of the ambient Riemann and Ricci tensors and that, at null points, it coincides with the pull-back of the ambient Ricci tensor. The null case, which is of special interest, is studied in detail. One of the interesting outcomes is the construction of several geometric quantities with remarkably simple gauge behaviour on any transverse submanifold \(S\) of the data. ## 1 Introduction The _formalism of hypersurface data_ is a framework that allows one to study the geometry of hypersurfaces of any causal character without the necessity of considering them as embedded in any ambient space. Originally presented in [14], [15] (with precursor [21]), this formalism has proven useful in the analysis of first order perturbations of a general hypersurface [22], in the study of the characteristic problem in General Relativity [20], [19] and in the context of matching spacetimes across null boundaries [12], [13], [9]. The idea of the formalism is to codify _abstractly_ (in the sense of detached from any ambient manifold) the intrinsic and extrinsic geometric information of a hypersurface in terms of a _data set_\(\mathcal{D}\mathop{=}^{\text{\rm\,def}}\{\mathcal{N},\gamma,\boldsymbol{\ell}, \ell^{(2)},\mathbf{Y}\}\), where \(\mathcal{N}\) is a smooth manifold, \(\{\gamma,\mathbf{Y}\}\) are symmetric 2-covariant tensor fields, \(\boldsymbol{\ell}\) is a covector field and \(\ell^{(2)}\) is a scalar field. When \(\mathcal{D}\) happens to be embedded in a semi-Riemannian manifold \((\mathcal{M},g)\) with embedding \(\phi\), the full metric \(g\) along \(\phi(\mathcal{N})\) can be reconstructed from \(\{\gamma,\boldsymbol{\ell},\ell^{(2)}\}\), whereas \(\mathbf{Y}\) gives the pull-back to \(\mathcal{N}\) of first transverse derivatives of \(g\)[14]. A fundamental characteristic of the hypersurface data formalism is that it is endowed with an inherent built-in gauge freedom. The set of all possible gauge transformations forms a group \(\mathcal{G}\) so that each element, denoted by \(\mathcal{G}_{(z,V)}\), is determined by a nowhere-zero function \(z\) and a vector field \(V\) in \(\mathcal{N}\)[15]. The set of all gauge group elements with \(z=1\) constitute a subgroup \(\mathcal{G}_{1}\) of \(\mathcal{G}\). At the embedded level, the gauge freedom [14], [15] is associated to the non-uniqueness of the choice of a _rigging_ (i.e. a non-zero, everywhere transversal vector field along \(\phi(\mathcal{N})\), see e.g. [24]). Although the gauge freedom could seem a complication of the formalism, it is actually of great use because it allows one to adjust the formalism to each specific situation at hand. 
In the spirit of capturing geometric information at a purely abstract level, a natural question that arises is whether one can encode curvature information (i.e. second order derivatives of the metric in the embedded picture) solely in terms of \(\{\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\). Prior to this work, the formalism of hypersurface data had already succeeded in determining various components of the ambient Riemann tensor. As proven in [21] given a semi-Riemannian manifold \((\mathcal{M},g)\), an embedded hypersurface \(\widetilde{\mathcal{N}}\subset\mathcal{M}\), a rigging vector field \(\zeta\) along \(\widetilde{\mathcal{N}}\) and a basis \(\{e_{a}\}\) of \(\Gamma(T\widetilde{\mathcal{N}})\), one can find explicit (abstract) expressions for the components \(R_{\alpha\beta\gamma\delta}\zeta^{\alpha}e^{\beta}_{b}e^{\gamma}_{c}e^{\delta}_ {d}\) and \(R_{\alpha\beta\gamma\delta}e^{\alpha}_{a}e^{\beta}_{b}e^{\gamma}_{c}e^{\delta}_ {d}\) of the Riemann tensor \(R_{\alpha\beta\gamma\delta}\) of \((\mathcal{M},g)\). It follows that such components can be codified in the hypersurface data set [14]. Following in this direction, one may also wonder whether it is possible to codify some components of the ambient Ricci tensor abstractly. If this was the case, then it would make sense to introduce new abstract definitions that encode precisely this information so that one can work with them without requiring the existence of any ambient space. It is in these circumstances that the so-called _constraint tensor_\(\mathcal{R}\) arises naturally. A prime example of a situation in which encoding the ambient Ricci tensor abstractly becomes essential can be found in the recent works [20], [19] on the vacuum characteristic problem. In this context, the vacuum Einstein field equations need to be enforced in a fully detached way from the spacetime that is to be constructed a posteriori. In those publications the authors provided the first definition of the constraint tensor in the null case. Several properties of this tensor were also studied. However, given the specific aim of the paper, this was done in a specific setup and under particular gauge conditions. The approach in this paper is much more general. We motivate and present the definition of the constraint tensor \(\mathcal{R}\) for completely general data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) (in particular, the abstract manifold \(\mathcal{N}\) is allowed to contain null and non-null points). The definition of \(\mathcal{R}\) does not require any topological assumption on \(\mathcal{N}\) and it is fully covariant Some important differences with respect to the works [20], [19] are that here \(\mathcal{R}\) is constructed so that the tensor \(\mathbf{Y}\) appears explicitly (this is advantageous in many circumstances, as we discuss below) and that we do not fix the gauge in any of the expressions involving the constraint tensor. The whole construction of \(\mathcal{R}\) is performed so that, in the embedded case, it codifies a certain combination of components of the ambient Riemann and Ricci tensors (Proposition 4.4). This combination is such that at null points \(\mathcal{R}\) coincides with the pull-back to \(\mathcal{N}\) of the ambient Ricci tensor. The motivation for writing \(\mathcal{R}\) so that \(\mathbf{Y}\) and its derivatives appear explicitly is because then any independent knowledge of \(\mathcal{R}\) will potentially allow one to determine (at least part of) \(\mathbf{Y}\). 
This can be of great use in many circumstances. For instance, when addressing the problem of matching two spacetimes by means of the formalism of hypersurface data (see e.g. [12], [13], [9]), the matter/gravitational content of the thin shell is ruled by the jump of \(\mathbf{Y}\) when crossing the matching hypersurface. By simply knowing the curvature of the spacetimes to be matched, their corresponding constraint tensors are known. Thus, one can compute some components of \(\mathbf{Y}\) explicitly and hence determine the type of shell that forms after the matching. This fact is even more important in the null case because then one can define a unique non-zero vector field \(n\) along the degenerate direction of \(\mathcal{N}\) from the data fields \(\{\gamma,\boldsymbol{\ell},\ell^{(2)}\}\). This vector field simply scales under gauge transformations, and hence it defines a privileged direction of \(\mathcal{N}\). The constraint tensor \(\mathcal{R}\) can be expressed in terms of the Lie derivative of \(\mathbf{Y}\) along \(n\), \(\mathbf{Y}\) itself and \(\{\gamma,\boldsymbol{\ell},\ell^{(2)}\}\). Thus, in order to know \(\mathbf{Y}\) it suffices to integrate such transport equations. This is of course related to the well-known fact that tangential components of the Ricci tensor give rise to internal equations when evaluated on a null hypersurface. The advantage of the present approach is that the corresponding expressions are fully detached from the spacetime (much in the same way as the standard constraint equations for spacelike hypersurfaces do not need a spacetime to be formulated). Moreover, they are fully covariant in \(\mathcal{N}\) (no additional structure is needed) and are written in a completely free gauge, which gives a lot of flexibility to adapt them to any particular problem. In the second part of the paper we focus precisely on the null case. Since \(n\) constitutes a special vector field, it makes sense to compute the contractions \(\mathcal{R}(n,\cdot)\) and \(\mathcal{R}(n,n)\). The former provides an identity involving the exterior derivative of the (abstract) surface gravity \(\kappa_{n}\) of \(n\) and the Lie derivative \(\pounds_{n}(\mathbf{Y}(n,\cdot))\), while the latter constitutes an abstract version of the Raychaudhuri equation (see e.g. [5], [4]), namely \[k(\theta)-\widetilde{\kappa}_{k}\theta+\frac{\theta^{2}}{(\mathfrak{n}-1)}+\varsigma^{2}+\mathbf{Ric}_{g}(k,k)\,{\stackrel{{\widetilde{\mathcal{N}}}}{{=}}}\,0, \tag{1.1}\] where \(k\) is a null generator of an \(\mathfrak{n}\)-dimensional null hypersurface in \((\mathcal{M},g)\), \(\widetilde{\kappa}_{k}\) is the surface gravity of \(k\) and \(\theta\), \(\varsigma\) are the expansion and shear scalars. Motivated by the fact that the geometry of non-degenerate submanifolds plays a fundamental role in the study of embedded null hypersurfaces, we also obtain the pull-back \(\mathcal{R}_{\parallel}\) of the constraint tensor \(\mathcal{R}\) to a codimension-one non-degenerate submanifold \(S\) of \(\mathcal{N}\). There are several reasons why this is of interest. Just to mention a couple, let us recall two interesting results concerning the geometry of horizons, namely \((a)\) the _near horizon equation_ of (extremal) isolated horizons (see e.g. [3], [1], [7], [2], [5], [6]) and \((b)\) the _master equation_ of multiple Killing horizons (see e.g. [16], [17], [18]).
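Before turning to these, let us record an elementary consistency check of (1.1), using only the standard expressions for the light cone and included here purely for orientation. Consider the future light cone of a point in \((\mathfrak{n}+1)\)-dimensional Minkowski spacetime, away from its vertex. Its generators \(k\) can be affinely parametrized by the radius \(r\) of the spherical cross-sections, so \(\widetilde{\kappa}_{k}=0\), the shear vanishes by symmetry, \(\mathbf{Ric}_{g}=0\), and the expansion is \(\theta=(\mathfrak{n}-1)/r\). Then \[k(\theta)+\frac{\theta^{2}}{(\mathfrak{n}-1)}=-\frac{\mathfrak{n}-1}{r^{2}}+\frac{\mathfrak{n}-1}{r^{2}}=0,\] in agreement with (1.1). For a null hyperplane all terms vanish identically.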
Both the near horizon equation and the master equation hold on a cross-section of their corresponding horizons and involve the pull-back of the (spacetime) Ricci tensor as well as the Ricci tensor of the cross-section. Since in the null embedded case \(\mathcal{R}\) coincides with the pull-back to \(\mathcal{N}\) of the ambient Ricci tensor, it makes sense to compute the explicit relation between \(\mathcal{R}_{\parallel}\) and the Ricci tensor \(R^{h}\) of \(S\). We devote Section 5.2 to this task. Our ultimate aim is to obtain a general identity that recovers, as particular cases, the near horizon equation and the master equation. This has already been achieved [8] and will be the subject of a forthcoming publication [11]. As proven in [20], the constraint tensor is gauge invariant in the null case. This is consistent with the fact that, in the embedded case, the ambient Ricci tensor is insensitive to a change in the choice of a rigging along the hypersurface. In the expression for \(\mathcal{R}_{\parallel}\) in terms of \(R^{h}\), several quantities are therefore gauge invariant (e.g. \(\mathcal{R}_{\parallel}\), or the induced metric of \(S\) and its related objects such as its Levi-Civita covariant derivative or \(R^{h}\)). It is therefore sensible to wonder whether one can write \(\mathcal{R}_{\parallel}\) in terms of geometric objects in \(S\) with simple gauge behaviour. We address this matter in Section 6 and, as a result, we identify three gauge invariant quantities that would have been hard to find otherwise, namely the one-form \(\boldsymbol{\omega}_{\parallel}\) and the 2-covariant tensor fields \(\boldsymbol{\mathfrak{P}}_{\parallel}\), \(\boldsymbol{\mathfrak{S}}_{\parallel}\). These objects are intrinsic to \(S\) and invariant under the action of the subgroup \(\mathcal{G}_{1}\). Specifically, the tensor \(\boldsymbol{\mathfrak{S}}_{\parallel}\) is worth further consideration. It codifies information on the first order variation of the tensor field \(\mathbf{Y}\) along \(n\) (hence it captures curvature information). In addition, it plays an important role in the geometry of Killing horizons of order one [8]. In particular, the tensor \(\boldsymbol{\mathfrak{S}}_{\parallel}\) turns out to vanish in Killing horizons of order one where the symmetry generator coincides with the privileged vector field \(n\). Details on this will be presented elsewhere [10]. The organization of the paper is as follows. Section 2 is devoted to introducing various basic concepts and results of the formalism of hypersurface data. In Section 3 we present the notions of _null (metric) hypersurface data_ together with the gauge behaviour of some tensor fields on \(\mathcal{N}\) and a discussion on the geometry of a data set \(\mathcal{D}\) admitting a non-degenerate codimension-one submanifold \(S\subset\mathcal{N}\). In Section 4, the _constraint tensor_ \(\mathcal{R}\) is defined for any abstract hypersurface. We then prove that, when the data happens to be embedded in a semi-Riemannian manifold, it captures a certain combination of components of the Riemann curvature tensor of the ambient space. In Section 5, we particularize our analysis to the null case, finding the contractions of \(\mathcal{R}\) with a null generator and providing its pull-back \(\mathcal{R}_{\parallel}\) to \(S\). In particular, we compute the explicit relation between \(\mathcal{R}_{\parallel}\) and the Ricci tensor of the induced (Riemannian) metric of \(S\).
The paper concludes with Section 6, where we introduce several quantities that are \(\mathcal{G}_{1}\)-invariant. We also include Appendix A, where we derive a generalized form of a Gauss-type identity, valid for a general smooth manifold with an embedded hypersurface provided that both of them are equipped with a torsion-free connection.

### Notation and conventions

In this paper, all manifolds are smooth, connected and without boundary. Given a manifold \(\mathcal{M}\) we use \(\mathcal{F}\left(\mathcal{M}\right)\overset{\mathsf{def}}{=}C^{\infty}\left(\mathcal{M},\mathbb{R}\right)\) and \(\mathcal{F}^{\star}\left(\mathcal{M}\right)\subset\mathcal{F}\left(\mathcal{M}\right)\) for the subset of nowhere-zero functions. The tangent bundle is denoted by \(T\mathcal{M}\) and \(\Gamma\left(T\mathcal{M}\right)\) is the set of sections (i.e. vector fields). We use \(\pounds\), \(d\) for the Lie derivative and the exterior derivative. Both tensorial and abstract index notation will be used depending on convenience. We work in arbitrary dimension \(\mathfrak{n}\) and use the following sets of indices: \[\alpha,\beta,...=0,1,2,...,\mathfrak{n};\qquad a,b,...=1,2,...,\mathfrak{n};\qquad A,B,...=2,...,\mathfrak{n}. \tag{1.2}\] When index-free notation is used (and only then) we shall distinguish covariant tensors with boldface. As usual, parentheses (resp. brackets) denote symmetrization (resp. antisymmetrization) of indices. The symmetrized tensor product is defined by \(A\otimes_{s}B\equiv\frac{1}{2}(A\otimes B+B\otimes A)\). We write \(\text{tr}_{B}A\) for the trace of a 2-covariant symmetric tensor \(A\) with respect to a 2-contravariant tensor \(B\). In any semi-Riemannian manifold \(\left(\mathcal{M},g\right)\), the scalar product of two vectors is written both as \(g(X,Y)\) and as \(\left\langle X,Y\right\rangle_{g}\), and we use \(g^{\sharp}\), \(\nabla\) for the inverse metric and the Levi-Civita derivative of \(g\) respectively. Our notation and convention for the curvature operator of any connection \(D\) is \[R^{D}(X,W)Z\overset{\mathsf{def}}{=}\left(D_{X}D_{W}-D_{W}D_{X}-D_{[X,W]}\right)Z, \tag{1.3}\] except for \(D=\nabla\), where we simply write \(R\). The curvature tensor of \(D\) is the 3-covariant, 1-contravariant tensor \(\text{Riem}^{D}(\boldsymbol{\alpha},Z,X,W)\overset{\mathsf{def}}{=}\boldsymbol{\alpha}\left(R^{D}(X,W)Z\right)\) and the Ricci tensor \(\mathbf{Ric}^{D}\) is its contraction in the first and third indices. Our signature convention for Lorentzian manifolds \(\left(\mathcal{M},g\right)\) is \(\left(-,+,...,+\right)\).

## 2 Formalism of hypersurface data

In this section we introduce all the necessary aspects of the formalism exploited throughout the paper, namely the _formalism of hypersurface data_. New results are all proved, while for the already known ones we simply include the corresponding reference.

### Metric hypersurface data

The hypersurface data formalism relies on the concept of _metric hypersurface data_. The idea is to codify, at a fully abstract level, the information concerning the intrinsic geometry of a hypersurface \(\mathcal{N}\) of a semi-Riemannian manifold. "Abstract" means that the definition makes no reference to any ambient space where the manifold \(\mathcal{N}\) may be embedded.

**Definition 2.1**.: _(Metric hypersurface data) Let \(\mathcal{N}\) be an \(\mathfrak{n}\)-dimensional manifold endowed with a \(2\)-covariant symmetric tensor \(\gamma\), a covector \(\boldsymbol{\ell}\) and a scalar function \(\ell^{(2)}\).
The four-tuple \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) defines metric hypersurface data provided that the symmetric \(2\)-covariant tensor \(\boldsymbol{\mathcal{A}}|_{p}\) on \(T_{p}\mathcal{N}\times\mathbb{R}\) given by_ \[\begin{array}{l}\boldsymbol{\mathcal{A}}|_{p}\left(\left(W,a\right),\left(Z,b\right)\right)\stackrel{{\text{\tiny def}}}{{=}}\left.\gamma\right|_{p}\left(W,Z\right)+a\left.\boldsymbol{\ell}\right|_{p}\left(Z\right)+b\left.\boldsymbol{\ell}\right|_{p}\left(W\right)+ab\,\ell^{(2)}|_{p},\\ W,Z\in T_{p}\mathcal{N},\quad a,b\in\mathbb{R}\end{array} \tag{2.1}\] _is non-degenerate at every \(p\in\mathcal{N}\)._ Since \(\boldsymbol{\mathcal{A}}|_{p}\) is non-degenerate there exists a unique (symmetric) inverse contravariant tensor \(\mathcal{A}|_{p}\) on \(T_{p}^{\star}\mathcal{N}\times\mathbb{R}\). Splitting its action as \[\begin{array}{l}\left.\mathcal{A}\right|_{p}\left(\left(\boldsymbol{\alpha},a\right),\left(\boldsymbol{\beta},b\right)\right)\stackrel{{\text{\tiny def}}}{{=}}\left.P\right|_{p}\left(\boldsymbol{\alpha},\boldsymbol{\beta}\right)+a\left.n\right|_{p}\left(\boldsymbol{\beta}\right)+b\left.n\right|_{p}\left(\boldsymbol{\alpha}\right)+ab\,n^{(2)}|_{p},\\ \boldsymbol{\alpha},\boldsymbol{\beta}\in T_{p}^{\star}\mathcal{N},\quad a,b\in\mathbb{R}\end{array} \tag{2.2}\] defines a symmetric \(2\)-contravariant tensor \(P\), a vector \(n\) and a scalar \(n^{(2)}\) in \(\mathcal{N}\). By definition, they are smooth fields satisfying [14] \[\gamma_{ab}n^{b}+n^{(2)}\ell_{a} =0, \tag{2.3}\] \[\ell_{a}n^{a}+n^{(2)}\ell^{(2)} =1, \tag{2.4}\] \[P^{ab}\ell_{b}+\ell^{(2)}n^{a} =0, \tag{2.5}\] \[P^{ab}\gamma_{bc}+n^{a}\ell_{c} =\delta^{a}_{c}. \tag{2.6}\] Apart from being non-degenerate, the tensor \(\boldsymbol{\mathcal{A}}\) is not subject to any restriction on its signature. The tensor field \(\gamma\) has a priori any signature, so in particular it can be degenerate. However, it follows from the definition of metric hypersurface data that the radical of \(\gamma\) at a point \(p\in\mathcal{N}\) (i.e. the set \(\mathrm{Rad}\gamma|_{p}\stackrel{{\text{\tiny def}}}{{=}}\{X\in T_{p}\mathcal{N}\mid\gamma(X,\cdot)=0\}\) of vectors annihilated by \(\gamma\)) is [15] either zero- or one-dimensional. The latter case occurs if and only if \(n^{(2)}|_{p}=0\), which together with (2.3) means that \(\mathrm{Rad}\gamma|_{p}=\langle n|_{p}\rangle\). Thus, \(n|_{p}\) is non-zero (by (2.4)) and defines the degenerate direction of \(\gamma|_{p}\). This leads to the definition of null and non-null points.

**Definition 2.2**.: _(Null and non-null point) Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) be metric hypersurface data. A point \(p\) is called null if \(\mathrm{dim}(\mathrm{Rad}\gamma)|_{p}=1\) and non-null otherwise._

Thus, at a non-null (resp. null) point \(p\in\mathcal{N}\), it holds \(n^{(2)}|_{p}\neq 0\) (resp. \(n^{(2)}|_{p}=0\)). It is natural to study the relation between the signatures of \(\boldsymbol{\mathcal{A}}\) and \(\gamma\). For non-null points this was addressed in [15, Lem. 2.7]. Here we give the corresponding result for null points. As in [15], we view the signature of a quadratic form \(q\) as the (unordered) set \(\mathrm{sign}(q)=\{0,...,0,-1,...,-1,+1,...,+1\}\) of diagonal entries in the canonical form of \(q\).

**Lemma 2.3**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) be metric hypersurface data and \(p\in\mathcal{N}\) a null point, i.e. \(\mathrm{Rad}(\gamma)|_{p}\neq\{0\}\).
Then the signatures of \(\gamma|_{p}\) and \(\boldsymbol{\mathcal{A}}|_{p}\) are related by_ \[\mathrm{sign}(\boldsymbol{\mathcal{A}}|_{p})=\{-1,1\}\sqcup(\mathrm{sign}( \gamma|_{p})\setminus\{0\}). \tag{2.7}\] _where \(\sqcup\) is the disjoint union. In particular, \(\boldsymbol{\mathcal{A}}|_{p}\) has Lorentzian signature if and only if \(\gamma|_{p}\) is semi-positive definite._ Proof.: Assume that the dimension \(\mathfrak{n}\) of \(\mathcal{N}\) is at least two (if \(\mathfrak{n}=1\) the proof is the same with small changes of notation). Since \(\mathrm{Rad}(\gamma)|_{p}\neq\{0\}\), it must be one-dimensional. Let \(\{e_{a}\}\) be a canonical basis of \(\gamma|_{p}\) with \(e_{1}\in\mathrm{Rad}(\gamma)|_{p}\) and define \(\epsilon_{a}\stackrel{{\text{\tiny def}}}{{=}}\gamma|_{p}(e_{a},e_{a})\), \(s_{A}\stackrel{{\text{\tiny def}}}{{=}}\boldsymbol{\ell}|_{p}(e _{A})\). Observe that \(\epsilon_{1}=0\) and \(\epsilon_{A}^{2}=1\). One checks easily that the vectors \[E_{0}\stackrel{{\text{\tiny def}}}{{=}}(V,1),\qquad E_{a} \stackrel{{\text{\tiny def}}}{{=}}(e_{a},0),\qquad\text{with} \qquad V\stackrel{{\text{\tiny def}}}{{=}}-\sum_{B=2}^{\mathfrak{ n}}\epsilon_{B}s_{B}e_{B}\in T_{p}\mathcal{N} \tag{2.8}\] form a basis of \(T_{p}\mathcal{N}\times\mathbb{R}\). By (2.1) they satisfy \[\boldsymbol{\mathcal{A}}|_{p}(E_{0},E_{0}) =\gamma|_{p}\left(V,V\right)+2\boldsymbol{\ell}|_{p}\left(V \right)+\ell^{(2)}\stackrel{{\text{\tiny def}}}{{=}}C, \boldsymbol{\mathcal{A}}|_{p}(E_{0},E_{1}) =\boldsymbol{\ell}|_{p}\left(e_{1}\right), \tag{2.9}\] \[\boldsymbol{\mathcal{A}}|_{p}(E_{0},E_{A}) =\gamma|_{p}\left(V,e_{A}\right)+\boldsymbol{\ell}|_{p}\left(e_{ A}\right), \boldsymbol{\mathcal{A}}|_{p}(E_{1},E_{1}) =0,\] (2.10) \[\boldsymbol{\mathcal{A}}|_{p}(E_{A},E_{B}) =\gamma|_{p}\left(e_{A},e_{B}\right)=\delta_{AB}\epsilon_{A}, \boldsymbol{\mathcal{A}}|_{p}(E_{1},E_{A}) =0. \tag{2.11}\] Since \(\boldsymbol{\mathcal{A}}|_{p}\) is non-degenerate, \(\boldsymbol{\ell}|_{p}\left(e_{1}\right)\neq 0\) and we can introduce the vectors \[\widehat{E}_{0}\stackrel{{\text{\tiny def}}}{{=}}E_{0}-\frac{1+C }{2(\boldsymbol{\ell}|_{p}\left(e_{1}\right))}E_{1},\qquad\widehat{E}_{1} \stackrel{{\text{\tiny def}}}{{=}}-E_{0}-\frac{1-C}{2( \boldsymbol{\ell}|_{p}\left(e_{1}\right))}E_{1},\qquad\widehat{E}_{A} \stackrel{{\text{\tiny def}}}{{=}}E_{A}. \tag{2.12}\] A simple computation yields \[\boldsymbol{\mathcal{A}}|_{p}(\widehat{E}_{0},\widehat{E}_{0}) =-1, \boldsymbol{\mathcal{A}}|_{p}(\widehat{E}_{0},\widehat{E}_{1}) =0, \boldsymbol{\mathcal{A}}|_{p}(\widehat{E}_{0},\widehat{E}_{A}) =0, \tag{2.13}\] \[\boldsymbol{\mathcal{A}}|_{p}(\widehat{E}_{1},\widehat{E}_{1}) =1, \boldsymbol{\mathcal{A}}|_{p}(\widehat{E}_{1},\widehat{E}_{A}) =0, \boldsymbol{\mathcal{A}}|_{p}(\widehat{E}_{A},\widehat{E}_{B}) =\delta_{AB}\epsilon_{A}. \tag{2.14}\] Thus, \(\{\widehat{E}_{0},\widehat{E}_{a}\}\) is a canonical basis of \(\boldsymbol{\mathcal{A}}|_{p}\) and \(\mathrm{sign}(\boldsymbol{\mathcal{A}}|_{p})=\{-1,1,\epsilon_{2},...,\epsilon_ {n}\}\), which proves (2.7). The last claim is immediate. Given metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\), the following tensors appear frequently: \[\mathbf{F}\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{2}d \boldsymbol{\ell},\qquad\quad\boldsymbol{s}\stackrel{{\text{\tiny def }}}{{=}}\mathbf{F}(n,\cdot),\qquad\quad\mathbf{U}\stackrel{{\text{ \tiny def}}}{{=}}\frac{1}{2}\pounds_{n}\gamma+\boldsymbol{\ell}\otimes_{s}dn ^{(2)}. 
\tag{2.15}\] Observe that \(\mathbf{U}\) is symmetric and \(\mathbf{F}\) is a 2-form. These tensor fields satisfy [15] \[\pounds_{n}\boldsymbol{\ell}=2\boldsymbol{s}-d(n^{(2)}\ell^{(2)}), \tag{2.16}\] \[\mathbf{U}(n,\cdot)=-n^{(2)}\boldsymbol{s}+\frac{1}{2}dn^{(2)}+\frac{1}{2}(n^{(2)})^{2}d\ell^{(2)}. \tag{2.17}\] In general, a metric hypersurface data set does not endow \(\mathcal{N}\) with a metric tensor and hence there is no associated Levi-Civita covariant derivative. However, there exists a canonical notion of covariant derivative on \(\mathcal{N}\). This canonical covariant derivative, denoted by \(\overset{\circ}{\nabla}\) and called _metric hypersurface connection_, is defined from its action on the tensors \(\{\gamma,\boldsymbol{\ell}\}\) [15, Prop. 4.3].

**Theorem 2.4**.: _For any given metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\), the conditions_ \[(\overset{\circ}{\nabla}_{X}\gamma)(Z,W) =-\mathbf{U}(X,Z)\boldsymbol{\ell}(W)-\mathbf{U}(X,W)\boldsymbol{\ell}(Z), \tag{2.18}\] \[(\overset{\circ}{\nabla}_{X}\boldsymbol{\ell})(Z)+(\overset{\circ}{\nabla}_{Z}\boldsymbol{\ell})(X) =-2\ell^{(2)}\mathbf{U}(X,Z),\qquad\forall X,Z,W\in\Gamma(T\mathcal{N}) \tag{2.19}\] _define a unique torsion-free connection \(\overset{\circ}{\nabla}\) on \(\mathcal{N}\)._ The \(\overset{\circ}{\nabla}\) derivatives of the tensor fields \(\gamma\), \(\boldsymbol{\ell}\), \(n\) and \(P\) can be found in [15]. \[=\frac{1}{2}\Big{(}\overset{\circ}{\nabla}_{d}(A_{cb}n^{c})+\overset{\circ}{\nabla}_{b}(A_{cd}n^{c})-A_{cb}\overset{\circ}{\nabla}_{d}n^{c}-A_{cd}\overset{\circ}{\nabla}_{b}n^{c}\] \[\quad+n^{c}(dA)_{dcb}-n^{c}\overset{\circ}{\nabla}_{c}A_{db}\Big{)}\] \[=\overset{\circ}{\nabla}_{(b}\mathfrak{a}_{d)}-A_{c(b}\overset{\circ}{\nabla}_{d)}n^{c}+\frac{1}{2}n^{c}(dA)_{dcb}-\frac{1}{2}n^{c}\overset{\circ}{\nabla}_{c}A_{db},\] and (2.27) is established. Moreover, we also find \[n^{c}\left(\overset{\circ}{\nabla}_{d}A_{cb}-\overset{\circ}{\nabla}_{c}A_{db}\right) =n^{c}\left(\overset{\circ}{\nabla}_{d}A_{cb}+\overset{\circ}{\nabla}_{c}A_{bd}+\overset{\circ}{\nabla}_{b}A_{dc}+\overset{\circ}{\nabla}_{b}A_{cd}\right)\] \[=n^{c}(dA)_{dcb}+\overset{\circ}{\nabla}_{b}\left(n^{c}A_{cd}\right)-A_{cd}\overset{\circ}{\nabla}_{b}n^{c},\] which is the alternative form (2.28). In semi-Riemannian manifolds there is the notion of raising indices.
In the context of metric hypersurface data, this operation is replaced by a construction that takes a covector and a scalar, subject to certain conditions, and yields a vector. The specific result is [14, Lemma 3]. **Lemma 2.6**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) be metric hypersurface data. Given a covector field \(\boldsymbol{\varrho}\in\Gamma(T^{\star}\mathcal{N})\) and a scalar function \(u_{0}\in\mathcal{F}(\mathcal{N})\), there exists a vector field \(W\in\Gamma(T\mathcal{N})\) satisfying \(\gamma(W,\cdot)=\boldsymbol{\varrho}\), \(\boldsymbol{\ell}(W)=u_{0}\) if and only if \(\boldsymbol{\varrho}(n)+n^{(2)}u_{0}=0\). Such \(W\) is unique and reads \(W=P(\boldsymbol{\varrho},\cdot)+u_{0}n\)._ The link between the abstract formalism and the actual geometry of hypersurfaces embedded in a semi-Riemannian space relies on the notions of _rigging_ and _embedded metric hypersurface data_, defined as follows. **Definition 2.7**.: [15] _(Rigging, embedded metric hypersurface data) A metric hypersurface data set \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) is said to be embedded in a semi-Riemannian manifold \((\mathcal{M},g)\) of dimension \(\mathfrak{n}+1\) provided there exists an embedding \(\phi:\mathcal{N}\longleftrightarrow\mathcal{M}\) and a rigging vector field \(\zeta\) (i.e. a vector field along \(\phi\left(\mathcal{N}\right)\), everywhere transversal to it) satisfying_ \[\phi^{\star}\left(g\right)=\gamma,\qquad\phi^{\star}\left(g\left(\zeta, \cdot\right)\right)=\boldsymbol{\ell},\qquad\phi^{\star}\left(g\left(\zeta, \zeta\right)\right)=\ell^{(2)}. \tag{2.30}\] **Notation 2.8**.: _Whenever no misunderstanding can arise, we shall identify scalar functions on \(\mathcal{N}\) and on \(\phi(\mathcal{N})\) as well as vector fields on \(\mathcal{N}\) with their corresponding images through \(\phi_{\star}\)._ From any embedded metric hypersurface data one can reconstruct the full metric \(g\) along \(\phi(\mathcal{N})\)1, as it holds that Footnote 1: This is the reason behind the terminology “metric hypersurface data”. \[\boldsymbol{\mathcal{A}}|_{p}\left((W,a),(Z,b)\right)=g|_{\phi(p)}(\phi_{*}W+a \zeta,\phi_{*}Z+b\zeta). \tag{2.31}\] Thus, \(\boldsymbol{\mathcal{A}}\) completely encodes the metric \(g\) at points on \(\phi\left(\mathcal{N}\right)\). In order to relate the quantities \(\{n,n^{(2)}\}\) with geometric objects in the ambient space, we now consider the following setup. **Setup 2.9**.: _We let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) be metric hypersurface data embedded in a semi-Riemannian manifold \((\mathcal{M},g)\) with embedding \(\phi\) and rigging vector \(\zeta\). We select any local basis \(\{\hat{e}_{a}\}\) of \(\Gamma(T\mathcal{N})\) and define \(e_{a}\stackrel{{\textup{\tiny{def}}}}{{=}}\phi_{\star}(\hat{e}_{a})\). By transversality of the rigging, \(\{\zeta,e_{a}\}\) constitutes a (local) basis of \(\Gamma(T\mathcal{M})|_{\phi(\mathcal{N})}\). The hypersurface \(\phi(\mathcal{N})\) admits a unique normal covector \(\boldsymbol{\nu}\) satisfying \(\boldsymbol{\nu}(\zeta)=1\). By construction, this covector belongs to the dual basis of \(\{\zeta,e_{a}\}\), which we denote by \(\{\boldsymbol{\nu},\boldsymbol{\theta}^{a}\}\). 
We define the vector fields \(\nu\stackrel{{\textup{\tiny{def}}}}{{=}}g^{\sharp}(\boldsymbol{\nu},\cdot)\), \(\theta^{a}\stackrel{{\textup{\tiny{def}}}}{{=}}g^{\sharp}(\boldsymbol{\theta}^{a},\cdot)\)._ Using (2.3)-(2.6) and the definition of dual basis, it is straightforward to check that \(\nu\) and \(\theta^{a}\) can be decomposed in the basis \(\{\zeta,e_{a}\}\) as \[\nu=n^{(2)}\zeta+n^{a}e_{a},\qquad\qquad\theta^{a}=n^{a}\zeta+P^{ab}e_{b}.\]

#### 2.1.2 The Lie derivative of the connection \(\overset{\circ}{\nabla}\) along \(n\)

Since the difference of two connections is a tensor it makes sense to define the Lie derivative of a connection along a vector field. This tensor carries useful information on the curvature. Our aim in this section is to obtain its explicit form for the connection \(\overset{\circ}{\nabla}\) and the vector field \(n\). This will play a relevant role in later sections as well as in subsequent applications to study the geometry of abstract Killing horizons [10], [11]. Note that any metric hypersurface data set defines a privileged vector field \(n\) on \(\mathcal{N}\). This is even more so when \(\mathcal{N}\) consists of null points since \(n\) spans the radical of \(\gamma\), so the direction of \(n\) (but not the scale) remains invariant under gauge transformations (cf. (2.39)). It therefore makes sense to study the properties of the Lie derivative of \(\overset{\circ}{\nabla}\) along \(n\) for general metric data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\). We start by summarizing general results on the Lie derivative of a connection, see e.g. [25]. Given any smooth manifold \(\mathcal{M}\) endowed with an affine connection \(D\) and a vector field \(Z\), the Lie derivative of \(D\) along \(Z\), denoted by \(\Sigma_{Z}\), is the 1-contravariant, 2-covariant tensor field \[\Sigma_{Z}(X,W)\stackrel{{\text{\tiny def}}}{{=}}\pounds_{Z}D_{X}W-D_{X}\pounds_{Z}W-D_{\pounds_{Z}X}W,\qquad\forall X,W\in\Gamma(T\mathcal{M}). \tag{2.42}\] This tensor only depends on \(Z\) and on \(D\), so we can also use the shorthand notation \(\Sigma_{Z}\stackrel{{\text{\tiny def}}}{{=}}\pounds_{Z}D\) to define it. It is easy to prove that \(\Sigma_{Z}\) is symmetric in its covariant indices when \(D\) is torsion-free. A basic use of \(\Sigma_{Z}\) is to compute the commutator between Lie derivatives and covariant derivatives. The corresponding expression for covariant tensors is, in abstract index notation, \[\pounds_{Z}D_{\alpha}T_{\beta_{1}\cdots\beta_{p}}=D_{\alpha}\pounds_{Z}T_{\beta_{1}\cdots\beta_{p}}-\sum_{i=1}^{p}(\Sigma_{Z})^{\mu}{}_{\alpha\beta_{i}}T_{\beta_{1}\cdots\beta_{i-1}\mu\beta_{i+1}\cdots\beta_{p}}. \tag{2.43}\] Of particular relevance for us is the following relation between the tensor \(\Sigma_{Z}\) and certain components of the curvature tensor of \(D\).

**Lemma 2.12**.: [25] _Let \(\mathcal{M}\) be a manifold endowed with a torsion-free connection \(D\).
Then, for any \(X,W,Z\in\Gamma(T\mathcal{M})\), it holds_ \[\Sigma_{Z}(X,W) =D_{X}D_{W}Z-D_{D_{X}W}Z+R^{D}(Z,X)W\quad\text{or, in index notation,}\] \[(\Sigma_{Z})^{\mu}{}_{\alpha\beta} =D_{\alpha}D_{\beta}Z^{\mu}+R^{D}{}^{\mu}{}_{\beta\nu\alpha}Z^{ \nu}. \tag{2.44}\] Having summarized the main general properties of \(\Sigma_{Z}\) we proceed with the computation of \(\pounds_{n}\overset{\circ}{\nabla}\). The derivation will rely on the following general identity, which may be of independent interest. **Lemma 2.13**.: _Let \(\mathcal{M}\) be a smooth manifold, \(D\) a torsion-free connection, \(Z\in\Gamma(T\mathcal{M})\) a vector field and \(S_{\alpha\beta}\) a symmetric \(2\)-covariant tensor field. Define \(\mathcal{H}^{Z}_{\alpha\mu\nu}\stackrel{{\text{\tiny def}}}{{=}}D _{\alpha}\pounds_{Z}S_{\mu\nu}-\pounds_{Z}D_{\alpha}S_{\mu\nu}\). Then,_ \[(\Sigma_{Z})^{\lambda}{}_{\alpha\mu}S_{\lambda\nu}=\frac{1}{2}\left(\mathcal{ H}^{Z}_{\alpha\mu\nu}+\mathcal{H}^{Z}_{\mu\nu\alpha}-\mathcal{H}^{Z}_{\nu \alpha\mu}\right). \tag{2.45}\] _In particular, if \(S_{\alpha\beta}\) verifies \(D_{\mu}S_{\alpha\beta}=0\), it holds_ \[(\Sigma_{Z})^{\lambda}{}_{\alpha\mu}S_{\lambda\nu}=\frac{1}{2}\left(D_{\alpha }\pounds_{Z}S_{\mu\nu}+D_{\mu}\pounds_{Z}S_{\nu\alpha}-D_{\nu}\pounds_{Z}S_{ \alpha\mu}\right). \tag{2.46}\] Proof.: Since \(D\) is torsion free, \((\Sigma_{Z})^{\mu}_{\alpha\beta}\) is symmetric in \(\alpha,\beta\). Particularizing (2.43) for \(S_{\alpha\beta}\) yields \[0 =\mathcal{H}^{Z}_{\alpha\mu\nu}-(\Sigma_{Z})^{\lambda}{}_{\alpha\mu} S_{\lambda\nu}-(\Sigma_{Z})^{\lambda}{}_{\alpha\nu}S_{\mu\lambda}, \tag{2.47}\] \[0 =\mathcal{H}^{Z}_{\mu\nu\alpha}-(\Sigma_{Z})^{\lambda}{}_{\mu\nu} S_{\lambda\alpha}-(\Sigma_{Z})^{\lambda}{}_{\mu\alpha}S_{\nu\lambda},\] (2.48) \[0 =\mathcal{H}^{Z}_{\nu\alpha\mu}-(\Sigma_{Z})^{\lambda}{}_{\nu \alpha}S_{\lambda\mu}-(\Sigma_{Z})^{\lambda}{}_{\nu\mu}S_{\alpha\lambda}, \tag{2.49}\] where (2.48)-(2.49) arise from the cyclic permutation of the indices \(\alpha,\mu,\nu\). Substracting (2.49) to the sum of (2.47)-(2.48) gives (2.45) because \((\Sigma_{Z})^{\mu}_{\alpha\beta}\) and \(S_{\alpha\beta}\) are symmetric in \(\alpha,\beta\). When \(S_{\alpha\beta}\) is covariantly constant we have \(\mathcal{H}^{Z}_{\alpha\mu\nu}=D_{\alpha}\mathcal{L}_{Z}S_{\mu\nu}\) and equation (2.46) follows at once. We can now compute the tensor \(\overset{\circ}{\Sigma}\overset{\text{\tiny def}}{=}\pounds_{n}\overset{ \circ}{\nabla}\) (for simplicity, we no longer reflect the fact that \(\overset{\circ}{\Sigma}\) depends on \(n\)). This is the content of the following lemma. **Lemma 2.14**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) be metric hypersurface data and define \(\overset{\circ}{\Sigma}\overset{\text{\tiny def}}{=}\pounds_{n}\overset{ \circ}{\nabla}\). 
Then, \(\overset{\circ}{\Sigma}\) is explicitly given by_ \[\overset{\circ}{\Sigma}^{d}{}_{ab} =n^{d}\left(2\overset{\circ}{\nabla}_{(a}s_{b)}-n^{(2)}\overset{\circ}{\nabla}_{a}\overset{\circ}{\nabla}_{b}\ell^{(2)}-2\overset{\circ}{\nabla}_{(a}n^{(2)}\overset{\circ}{\nabla}_{b)}\ell^{(2)}+n(\ell^{(2)})\mathrm{U}_{ab}\right) \tag{2.50}\] \[\quad+P^{dc}\left(\overset{\circ}{\nabla}_{a}\mathrm{U}_{bc}+\overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}-\overset{\circ}{\nabla}_{c}\mathrm{U}_{ab}+\left(2s_{c}-n^{(2)}\overset{\circ}{\nabla}_{c}\ell^{(2)}\right)\mathrm{U}_{ab}+2\mathrm{F}_{c(a}\overset{\circ}{\nabla}_{b)}n^{(2)}\right).\] Proof.: Particularizing (2.43) and (2.45) for \(D=\overset{\circ}{\nabla}\), \(T=\boldsymbol{\ell}\), \(S=\gamma\) and \(Z=n\) gives, respectively, \[\ell_{f}\overset{\circ}{\Sigma}^{f}{}_{ab} =\overset{\circ}{\nabla}_{a}\pounds_{n}\ell_{b}-\pounds_{n}\overset{\circ}{\nabla}_{a}\ell_{b}\stackrel{{\text{\tiny def}}}{{=}}Q_{ab}, \tag{2.51}\] \[\gamma_{cf}\overset{\circ}{\Sigma}^{f}{}_{ab} =\frac{1}{2}\left(\mathcal{H}_{abc}+\mathcal{H}_{bca}-\mathcal{H}_{cab}\right), \tag{2.52}\] where \(\mathcal{H}_{abc}\stackrel{{\text{\tiny def}}}{{=}}\overset{\circ}{\nabla}_{a}\pounds_{n}\gamma_{bc}-\pounds_{n}\overset{\circ}{\nabla}_{a}\gamma_{bc}\). \[+\left(2s_{c}-n^{(2)}\overset{\circ}{\nabla}_{c}\ell^{(2)}\right)\mathrm{U}_{ab}+2\mathrm{F}_{c(a}\overset{\circ}{\nabla}_{b)}n^{(2)}. \tag{2.55}\] To conclude the proof we use \[\overset{\circ}{\Sigma}^{d}{}_{ab}=\delta^{d}_{f}\overset{\circ}{\Sigma}^{f}{}_{ab}=n^{d}\,\ell_{f}\overset{\circ}{\Sigma}^{f}{}_{ab}+P^{dc}\,\gamma_{cf}\overset{\circ}{\Sigma}^{f}{}_{ab},\] which follows from (2.6).
\[=n^{d}\left(\overset{\circ}{\nabla}_{a}\mathfrak{{}}_{b}+\mathfrak{{}}_{a}\mathfrak{{}}_{b}+\mathfrak{{}}_{a}\mathfrak{{}}_{b}+\mathcal{G}^{c}{}_{a}\mathfrak{{}}_{bc}\right)+P^{dc}\left(\mathfrak{{}}\mathfrak{{}}_{ac}\mathfrak{{}}_{b}+\overset{\circ}{\nabla}_{a}\mathfrak{{}}_{bc}\right)+n^{c}\mathfrak{{}}_{bc}\mathcal{G}^{d}{}_{a}. \tag{2.58}\] Now, the definition of \(\boldsymbol{s}\) (cf.
(2.15)) and (2.17) imply \[n^{c}\mathfrak{{}}_{bc}=\frac{1}{2}\overset{\circ}{\nabla}_{b}n^{(2)}+\frac{ 1}{2}(n^{(2)})^{2}\overset{\circ}{\nabla}_{b}\mathfrak{{}}\mathfrak{{}}^{(2)}, \tag{2.59}\] which in turn gives \[\mathcal{G}^{c}{}_{a}\mathfrak{{}}_{bc}=\left(-P^{cf}\mathrm{F}_{af}-\frac{1}{ 2}n^{c}\overset{\circ}{\nabla}_{a}\mathfrak{{}}\mathfrak{{}}^{(2)}\right) \mathfrak{{}}_{bc}=-P^{cf}\mathrm{F}_{af}\mathfrak{{}}_{bc}-\frac{1}{4} \overset{\circ}{\nabla}_{a}\mathfrak{{}}\mathfrak{{}}^{(2)}\left(\overset{ \circ}{\nabla}_{b}n^{(2)}+(n^{(2)})^{2}\overset{\circ}{\nabla}_{b}\mathfrak{{ }}\mathfrak{{}}^{(2)}\right).\] Inserting this and (2.59) into (2.58) we can write \[\overset{\circ}{\nabla}_{a}\overset{\circ}{\nabla}_{b}n^{d} =n^{d}\left(\overset{\circ}{\nabla}_{a}\mathfrak{{}}_{b}+ \mathfrak{{}}_{a}\mathfrak{{}}_{b}-P^{cf}\mathrm{F}_{af}\mathfrak{{}}_{bc}- \frac{1}{4}\overset{\circ}{\nabla}_{a}\mathfrak{{}}\mathfrak{{}}^{(2)} \overset{\circ}{\nabla}_{b}n^{(2)}-\frac{1}{4}(n^{(2)})^{2}\nabla_{a} \mathfrak{{}}\mathfrak{{}}^{(2)}\overset{\circ}{\nabla}_{b}\mathfrak{{}} \mathfrak{{}}^{(2)}\right)\] \[+P^{dc}\left(\overset{\circ}{\nabla}_{a}\mathfrak{{}}_{bc}+ \mathfrak{{}}_{ac}\mathfrak{{}}_{b}\right)-\frac{1}{2}\left(\overset{\circ}{ \nabla}_{b}n^{(2)}+(n^{(2)})^{2}\overset{\circ}{\nabla}_{b}\mathfrak{{}} \mathfrak{{}}^{(2)}\right)\left(P^{dc}\mathrm{F}_{ac}+\frac{1}{2}n^{d} \overset{\circ}{\nabla}_{a}\mathfrak{{}}\mathfrak{{}}^{(2)}\right)\] \[=n^{d}\underbrace{\left(\overset{\circ}{\nabla}_{a}\mathfrak{{} }_{b}+\mathfrak{{}}_{a}\mathfrak{{}}_{b}-P^{cf}\mathrm{F}_{af}\mathfrak{{}}_{bc} -\frac{1}{2}\overset{\circ}{\nabla}_{a}\mathfrak{{}}\mathfrak{{}}^{(2)} \overset{\circ}{\nabla}_{b}n^{(2)}-\frac{1}{2}(n^{(2)})^{2}\nabla_{a} \mathfrak{{}}\mathfrak{{}}^{(2)}\overset{\circ}{\nabla}_{b}\mathfrak{{}} \mathfrak{{}}^{(2)}\right)}_{=\overset{\mathsf{def}}{I}}\] \[\quad+P^{dc}\underbrace{\left(\overset{\circ}{\nabla}_{a} \mathfrak{{}}_{bc}+\mathfrak{{}}_{ac}\mathfrak{{}}_{b}-\frac{1}{2}\mathrm{F }_{ac}\left(\overset{\circ}{\nabla}_{b}n^{(2)}+(n^{(2)})^{2}\overset{\circ} {\nabla}_{b}\mathfrak{{}}\mathfrak{{}}^{(2)}\right)\right)}_{\overset{ \mathsf{def}}{II}}.\] To conclude we just need to elaborate each parenthesis. 
For the first one we note \[\overset{\circ}{\nabla}_{a}\mathfrak{{}}_{b}+\mathfrak{{}}_{a}\mathfrak{{ }}_{b}=\overset{\circ}{\nabla}_{a}\mathfrak{{}}_{b}-n^{(2)}\overset{\circ} {\nabla}_{a}\overset{\circ}{\nabla}_{b}\mathfrak{{}}\mathfrak{{}}^{(2)}- \overset{\circ}{\nabla}_{a}n^{(2)}\overset{\circ}{\nabla}_{b}\mathfrak{{}} \mathfrak{{}}^{(2)}+s_{a}\mathfrak{{}}_{b}-2n^{(2)}s_{(a}\overset{\circ}{ \nabla}_{b)}\mathfrak{{}}\mathfrak{{}}^{(2)}+(n^{(2)})^{2}\overset{\circ}{ \nabla}_{a}\mathfrak{{}}\mathfrak{{}}^{(2)}\overset{\circ}{\nabla}_{b} \mathfrak{{}}\mathfrak{{}}^{(2)},\] from where it follows \[I =\overset{\circ}{\nabla}_{a}\mathfrak{{}}_{b}+s_{a}\mathfrak{{ }}_{b}-P^{cf}\mathrm{F}_{af}(\mathrm{U}_{bc}-n^{(2)}\mathrm{F}_{bc})-n^{(2)} \overset{\circ}{\nabla}_{a}\overset{\circ}{\nabla}_{b}\mathfrak{{}} \mathfrak{{}}^{(2)}-\frac{1}{2}\overset{\circ}{\nabla}_{a}\mathfrak{{}}^{(2)} \overset{\circ}{\nabla}_{b}\mathfrak{{}}\mathfrak{{}}^{(2)}\] \[\quad-\overset{\circ}{\nabla}_{a}n^{(2)}\overset{\circ}{\nabla}_{b }\mathfrak{{}}\mathfrak{{}}^{(2)}-2n^{(2)}s_{(a}\overset{\circ}{\nabla}_{b)} \mathfrak{{}}\mathfrak{{}}^{(2)}+\frac{1}{2}(n^{(2)})^{2}\overset{\circ}{\nabla }_{a}\mathfrak{{}}\mathfrak{{}}^{(2)}\overset{\circ}{\nabla}_{b}\mathfrak{{}} \mathfrak{{}}^{(2)}.\] From the definition of \(\mathfrak{{}}\mathfrak{{}}_{bc}\) and \(\mathfrak{{}}_{b}\) one gets \[II=\overset{\circ}{\nabla}_{a}\left(\mathrm{U}_{cb}-n^{(2)}\mathrm{F}_{cb} \right)+\mathrm{U}_{ac}\left(s_{b}+n^{(2)}\overset{\circ}{\nabla}_{b} \mathfrak{{}}\mathfrak{{}}^{(2)}\right)+\mathrm{F}_{ac}\left(-n^{(2)}s_{b}-\frac{ 1}{2}\overset{\circ}{\nabla}_{b}n^{(2)}+\frac{1}{2}(n^{(2)})^{2}\overset{\circ }{\nabla}_{b}\mathfrak{{}}\mathfrak{{}}^{(2)}\right),\] and the validity of (2.57) is proved. We can now find the components \(\overset{\circ}{\mathbf{Riem}}(\cdot,\cdot,n,\cdot)\) of the curvature tensor. **Proposition 2.16**.: _Given metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\), the tensor \(\overset{\circ}{\mathbf{Riem}}\) verifies_ \[\overset{\circ}{R}^{d}{}_{bc}n^{c} =n^{d}\bigg{[}\overset{\circ}{\nabla}_{b}\mathfrak{{}}_{a}-s_{a} \mathfrak{{}}_{b}+2n^{(2)}s_{(b}\overset{\circ}{\nabla}_{a)}\mathfrak{{}}^{(2)}+n (\mathfrak{{}}^{(2)})\mathrm{U}_{ba}\] \[\quad+P^{cf}\mathrm{F}_{af}\left(\mathrm{U}_{bc}-n^{(2)}\mathrm{F}_{ bc}\right)-\frac{1}{2}\overset{\circ}{\nabla}_{b}n^{(2)}\overset{\circ}{\nabla}_{a} \mathfrak{{}}^{(2)}-\frac{1}{2}(n^{(2)})^{2}\overset{\circ}{\nabla}_{b} \mathfrak{{}}\mathfrak{{}}^{(2)}\overset{\circ}{\nabla}_{a}\mathfrak{{}}^{(2)} \right]\] \[\quad+P^{dc}\bigg{[}\overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}- \overset{\circ}{\nabla}_{c}\mathrm{U}_{ba}+2s_{c}\mathrm{U}_{ba}-s_{b}\mathrm{ U}_{ac}+2\mathrm{F}_{cb}\overset{\circ}{\nabla}_{a}n^{(2)}+\frac{1}{2}\mathrm{F}_{ca} \overset{\circ}{\nabla}_{b}n^{(2)}\] \[\quad+n^{(2)}\Big{(}-\mathrm{U}_{ba}\overset{\circ}{\nabla}_{c} \mathfrak{{}}\mathfrak{{}}^{(2)}-\mathrm{U}_{ac}\overset{\circ}{\nabla}_{b} \mathfrak{{}}\mathfrak{{}}^{(2)}+\overset{\circ}{\nabla}_{a}\mathrm{F}_{cb}+ \mathrm{F}_{ac}\big{(}s_{b}-\frac{1}{2}n^{(2)}\overset{\circ}{\nabla}_{b} \mathfrak{{}}\mathfrak{{}}^{(2)}\big{)}\Big{)}\bigg{]}. 
\tag{2.60}\] Proof.: The result follows immediately after inserting Lemmas 2.14 and 2.15 into the identity \(\overset{\circ}{R}^{d}{}_{bca}n^{c}=\overset{\circ}{\Sigma}^{d}{}_{ab}-\overset{\circ}{\nabla}_{a}\overset{\circ}{\nabla}_{b}n^{d}\), which is a direct consequence of (2.44).

### Hypersurface data

The notion of metric hypersurface data codifies at the abstract level the intrinsic geometric information of a hypersurface. To encode its extrinsic geometry one needs one further step, namely the concept of _hypersurface data_.

**Definition 2.17**.: _(Hypersurface data) A five-tuple \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) defines hypersurface data if \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) is metric hypersurface data and \(\mathbf{Y}\) is an extra symmetric \(2\)-covariant tensor field on \(\mathcal{N}\)._

The geometric interpretation of \(\mathbf{Y}\) comes from the definition of embedded hypersurface data.

**Definition 2.18**.: _(Embedded hypersurface data) A hypersurface data set \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) is said to be embedded in a semi-Riemannian manifold \((\mathcal{M},g)\) with embedding \(\phi\) and rigging \(\zeta\) if its metric part \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) is embedded in \((\mathcal{M},g)\) with the same embedding and rigging and, in addition,_ \[\mathbf{Y}=\frac{1}{2}\phi^{\star}\left(\pounds_{\zeta}g\right). \tag{2.61}\]

To comply with the gauge behaviour (2.40) of the rigging vector in the embedded case, the gauge transformation of \(\mathbf{Y}\) is forced to be [14], [15] \[\mathcal{G}_{(z,V)}\left(\mathbf{Y}\right)=z\mathbf{Y}+\boldsymbol{\ell}\otimes_{s}dz+\frac{1}{2}\pounds_{zV}\gamma. \tag{2.62}\] This (fully abstract) definition realizes the gauge group \(\mathcal{G}\) [15]. Given hypersurface data, we introduce the objects \[\boldsymbol{r}\stackrel{{\text{\tiny def}}}{{=}}\mathbf{Y}(n,\cdot),\qquad\kappa_{n}\stackrel{{\text{\tiny def}}}{{=}}-\mathbf{Y}(n,n), \tag{2.63}\] \[\mathbf{K}\stackrel{{\text{\tiny def}}}{{=}}\mathbf{U}+n^{(2)}\mathbf{Y}. \tag{2.64}\] When the data is embedded (with embedding \(\phi\) and rigging \(\zeta\)), the tensor \(\mathbf{K}\) coincides [15] with the second fundamental form of \(\phi(\mathcal{N})\) with respect to the normal covector \(\boldsymbol{\nu}\) determined by \(\zeta\), i.e. (2.65) The notion of embedded hypersurface data also provides a geometric interpretation for the connection \(\overset{\circ}{\nabla}\), given by the following Gauss-type equation [14] (2.66) where \(\nabla\) is the Levi-Civita connection of the ambient space. The components of the curvature tensor of \((\mathcal{M},g)\) that are computable in terms of the hypersurface data are summarized in the next result [21] (see also [14, Prop. 6]).

**Proposition 2.19**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) be hypersurface data embedded in a semi-Riemannian manifold \((\mathcal{M},g)\) with embedding \(\phi\) and rigging \(\zeta\). Let \(\{\hat{e}_{a}\}\) be a (local) basis of \(\Gamma(T\mathcal{N})\) and \(e_{a}\stackrel{{\text{\tiny def}}}{{=}}\phi_{\star}(\hat{e}_{a})\). Then, the Riemann tensor of \((\mathcal{M},g)\) satisfies_ (2.67) (2.68)

## 3 Null hypersurface data

All the results so far hold for general (metric) hypersurface data. The scenario in which the data consists only of null points is of particular relevance, since in the embedded case it corresponds to null hypersurfaces. We devote this section to studying this case in more detail. We shall use the following terminology.

**Definition 3.1**.: _(Null metric hypersurface data) A metric hypersurface data set \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) is called null if the scalar field \(n^{(2)}\) defined by (2.3)-(2.6) is everywhere zero on \(\mathcal{N}\)._

**Definition 3.2**.: [20] _(Null hypersurface data) A hypersurface data set \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) is called null if \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) defines null metric hypersurface data._

It is useful to have a criterion to determine under which conditions a general triple \(\{\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) defines null metric hypersurface data.
**Lemma 3.3**.: _The set \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) where \(\mathcal{N}\) is a smooth manifold, \(\gamma\) is a symmetric \(2\)-covariant tensor, \(\boldsymbol{\ell}\) a covector and \(\ell^{(2)}\) a scalar field is null metric hypersurface data if and only if_ * _The radical_ \(\mathrm{Rad}\gamma|_{p}\) _of_ \(\gamma|_{p}\) _is one-dimensional at every point_ \(p\in\mathcal{N}\)_._ * _For all_ \(p\in\mathcal{N}\) _and any non-zero vector_ \(e_{1}\in\mathrm{Rad}\gamma|_{p}\) _the contraction_ \(\boldsymbol{\ell}|_{p}(e_{1})\neq 0\)_._ Proof.: It is clear that condition \((ii)\) is independent of the element \(e_{1}\in\mathrm{Rad}\gamma\) one chooses. If \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) is null metric hypersurface data, we may take \(e_{1}|_{p}=n|_{p}\) and conditions \((i)\) and \((ii)\) are satisfied (recall (2.3)-(2.4)). To prove the converse, we only need to make sure that the symmetric \(2\)-covariant tensor \(\mathcal{A}|_{p}\) on \(T_{p}\mathcal{N}\oplus\mathbb{R}\) defined in (2.3) is non-degenerate (observe that \((i)\) together with (2.3) already imply that \(n^{(2)}=0\)). The proof of Lemma 2.3 only uses that \(\gamma|_{p}\) has one-dimensional radical, that \(\mathrm{span}\{e_{1}\}=\mathrm{Rad}\gamma|_{p}\) and that \(\boldsymbol{\ell}|_{p}(e_{1})\neq 0\). Thus, under conditions \((i)\) and \((ii)\) the signature of \(\mathcal{A}|_{p}\) is given by (2.7), hence \(\mathcal{A}|_{p}\) is non-degenerate. **Remark 3.4**.: _Condition \((ii)\) needs to be added only because the tensors \(\{\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) in this lemma are completely general (i.e. they do not define metric hypersurface data a priori). Whenever one already knows that \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) define metric hypersurface data, condition (ii) holds automatically as a consequence of the tensor \(\boldsymbol{\mathcal{A}}\) being non-degenerate._ Some immediate consequences of \(n^{(2)}=0\) are the following. Firstly, \(\mathrm{Rad}\gamma=\langle n\rangle\) and hence \(\gamma(n,\cdot)=0\), as already mentioned. On the other hand, the tensor \(\mathbf{U}\) introduced in (2.15) is given by \(\mathbf{U}=\frac{1}{2}\pounds_{n}\gamma\), hence it satisfies \(\mathbf{U}(n,\cdot)=0\) (by (2.17)). Moreover, when \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) is embedded on an ambient space \((\mathcal{M},g)\) with embedding \(\phi\) and rigging \(\zeta\), \(\mathbf{U}\) coincides with the second fundamental form \(\mathbf{K}\) (cf. (2.64)) with respect to the null normal \(\nu\in\Gamma(T\phi(\mathcal{N}))\) satisfying \(g(\zeta,\nu)|_{\phi(\mathcal{N})}=1\). This makes the tensor \(\mathbf{U}\) particularly relevant since it takes the role of abstract second fundamental form. For later use, we particularize (2.16), (2.22) for \(n^{(2)}=0\), which gives \[\boldsymbol{s}=\frac{1}{2}\pounds_{n}\boldsymbol{\ell}, \tag{3.1}\] \[\overset{\circ}{\nabla}_{b}n^{c}=n^{c}s_{b}+P^{ac}\mathrm{U}_{ ab}. \tag{3.2}\] Observe that (3.1) (together with (2.4)) entails that \(\boldsymbol{s}(n)=0\) in the null case. We have already discussed that the vector field \(n\) is privileged in any null hypersurface data. This often makes it convenient to decompose tensors on \(\mathcal{N}\) in terms of a basis \(\{n,e_{A}\}\) of \(\Gamma(T\mathcal{N})\) and its corresponding dual basis. The next lemma provides such a decomposition for \(\gamma\) and \(P\). 
**Lemma 3.5**.: _Consider null metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\). Let \(\{n,e_{A}\}\) be a basis of \(\Gamma(T\mathcal{N})\) and \(\{\boldsymbol{\mathfrak{q}},\boldsymbol{\theta}^{A}\}\) be its corresponding dual, i.e._ \[\mathfrak{q}(n)=1,\qquad\mathfrak{q}(e_{A})=0,\qquad\boldsymbol{\theta}^{A}(n )=0,\qquad\boldsymbol{\theta}^{A}(e_{B})=\delta_{B}^{A}. \tag{3.3}\] _Define the functions \(\psi_{A}\in\mathcal{F}(\mathcal{N})\) as \(\psi_{A}\stackrel{{\text{\rm def}}}{{=}}\boldsymbol{\ell}(e_{A})\). Then, the tensors \(\gamma\) and \(P\) decompose as_ \[\gamma =\mathfrak{h}_{AB}\boldsymbol{\theta}^{A}\otimes\boldsymbol{ \theta}^{B}, \tag{3.4}\] \[P =\mathfrak{h}^{AB}e_{A}\otimes e_{B}-\mathfrak{h}^{AB}\psi_{B} \left(n\otimes e_{A}+e_{A}\otimes n\right)-\left(\ell^{(2)}-\mathfrak{h}^{AB} \psi_{A}\psi_{B}\right)n\otimes n, \tag{3.5}\] _where \(\mathfrak{h}_{AB}\stackrel{{\text{\rm def}}}{{=}}\gamma(e_{A},e _{B})\) is a metric and \(\mathfrak{h}^{AB}\) denotes its inverse._ Proof.: First, we notice that \(\boldsymbol{\ell}\) decomposes in the basis \(\{\boldsymbol{\mathfrak{q}},\boldsymbol{\theta}^{A}\}\) as \(\boldsymbol{\ell}=\boldsymbol{\mathfrak{q}}+\psi_{A}\boldsymbol{\theta}^{A}\) because \(\boldsymbol{\ell}(n)=1\) (cf. (2.4)) and \(\psi_{A}\stackrel{{\text{\rm def}}}{{=}}\boldsymbol{\ell}(e_{A})\). Equation (3.4) is an immediate consequence of \(\gamma(n,\cdot)=0\). This, together with the fact that \(\text{Rad}\gamma\) is one-dimensional, means that \(\mathfrak{h}_{AB}\) defines a metric. On the other hand, since \(P\) is symmetric it decomposes in the basis \(\{n,e_{A}\}\) as \[P=P(\boldsymbol{\theta}^{A},\boldsymbol{\theta}^{B})e_{A}\otimes e_{B}+P( \boldsymbol{\mathfrak{q}},\boldsymbol{\theta}^{A})(n\otimes e_{A}+e_{A} \otimes n)+P(\boldsymbol{\mathfrak{q}},\boldsymbol{\mathfrak{q}})n\otimes n. \tag{3.6}\] The fact that \(P(\boldsymbol{\theta}^{A},\boldsymbol{\theta}^{B})=\mathfrak{h}^{AB}\) follows from \[\delta_{A}^{B}=\delta_{a}^{b}\theta_{b}^{B}e_{A}^{a}\stackrel{{\eqref {eq:P}}}{{=}}(P^{bf}\gamma_{fa}+n^{b}\ell_{a})\theta_{b}^{B}e_{A}^{a}=P^{bf} \gamma_{fa}\theta_{b}^{B}e_{A}^{a}\stackrel{{\eqref{eq:P}}}{{=}} \mathfrak{h}_{AC}P(\boldsymbol{\theta}^{B},\boldsymbol{\theta}^{C}), \tag{3.7}\] while for \(P(\boldsymbol{\mathfrak{q}},\cdot)\) one finds \[P(\boldsymbol{\mathfrak{q}},\cdot) =P(\boldsymbol{\ell}-\psi_{A}\boldsymbol{\theta}^{A},\cdot) \stackrel{{\eqref{eq:P}}}{{=}}-\ell^{(2)}n-\psi_{A}P( \boldsymbol{\theta}^{A},\cdot)\] \[=-\left(\ell^{(2)}+\psi_{A}P(\boldsymbol{\theta}^{A}, \boldsymbol{\mathfrak{q}})\right)n-\mathfrak{h}^{AB}\psi_{A}e_{B}\] and hence \(P(\boldsymbol{\mathfrak{q}},\boldsymbol{\theta}^{C})=-\mathfrak{h}^{AC}\psi _{A}\) and \(P(\boldsymbol{\mathfrak{q}},\boldsymbol{\mathfrak{q}})=-\left(\ell^{(2)}- \mathfrak{h}^{AB}\psi_{A}\psi_{B}\right)\). ### Gauge-fixing results One the main results of this paper is the introduction of several geometric quantities that are invariant under the action of gauge group elements of the form \(\mathcal{G}_{(1,V)}\). In order to identify these quantities, we first need to know the general gauge behaviour of the tensor fields defined in the previous section. We devote this subsection to this task. For arbitrary gauge parameters \(\{z,V\}\) we introduce \[\boldsymbol{w}\stackrel{{\text{\rm def}}}{{=}}\gamma(V,\cdot), \qquad f\stackrel{{\text{\rm def}}}{{=}}\boldsymbol{\ell}(V), \tag{3.8}\] from where it immediately follows that (recall Lemma 2.6) \[V^{a}=fn^{a}+P^{ab}w_{b}. 
\tag{3.9}\] In terms of \(\{\boldsymbol{w},f\}\), the gauge transformations (2.37)-(2.38) take the form \[\mathcal{G}_{(z,V)}\left(\boldsymbol{\ell}\right)=z\left(\boldsymbol{\ell}+ \boldsymbol{w}\right),\qquad\eqref{eq:P}\qquad\mathcal{G}_{(z,V)}\big{(}\ell^{ (2)}\big{)}=z^{2}\big{(}\ell^{(2)}+2f+P(\boldsymbol{w},\boldsymbol{w})\big{)}. \tag{3.11}\] In the next lemma, we obtain the gauge behaviour of \(\mathbf{U}\), \(\mathbf{F}\), \(\boldsymbol{s}\), \(\boldsymbol{r}\) and \(\kappa_{n}\). **Lemma 3.6**.: _Let \(\{\mathcal{N},\gamma,\mathbf{\ell},\ell^{(2)},\mathbf{Y}\}\) be null hypersurface data. Consider arbitrary gauge parameters \(\{z,V\}\) and define the covector \(\mathbf{w}\) and the function \(f\) according to (3.8). Then, the following gauge transformations hold:_ \[\mathcal{G}_{(z,V)}\left(\mathbf{U}\right) =\frac{1}{z}\mathbf{U}, \tag{3.12}\] \[\mathcal{G}_{(z,V)}\left(\mathbf{F}\right) =z\left(\mathbf{F}+\frac{1}{2}d\mathbf{w}\right)+\frac{1}{2}dz\wedge \left(\mathbf{\ell}+\mathbf{w}\right),\] (3.13) \[\mathcal{G}_{(z,V)}\left(\mathbf{s}\right) =\mathbf{s}+\frac{1}{2}\pounds_{n}\mathbf{w}+\frac{n(z)}{2z}\left(\mathbf{ \ell}+\mathbf{w}\right)-\frac{1}{2z}dz,\] (3.14) \[\mathcal{G}_{(z,V)}\left(\mathbf{r}\right) =\mathbf{r}+\frac{1}{2z}dz+\frac{n(z)}{2z}\left(\mathbf{\ell}+\mathbf{w} \right)+\frac{1}{2}\pounds_{n}\mathbf{w}-\mathbf{U}(V,\cdot),\] (3.15) \[\mathcal{G}_{(z,V)}\left(\kappa_{n}\right) =\frac{1}{z}\left(\kappa_{n}-\frac{n(z)}{z}\right). \tag{3.16}\] Proof.: For notational simplicity we write a prime to denote a gauge-transformed quantity. The first three expressions are obtained as follows (recall (2.36), (2.37), (2.39)) \[\mathbf{U}^{\prime} =\frac{1}{2}\pounds_{n^{\prime}\gamma}=\frac{1}{2}\pounds_{z^{-1} n}\gamma=\frac{1}{2z}\pounds_{n}\gamma=z^{-1}\mathbf{U}.\] \[\mathbf{F}^{\prime} =\frac{1}{2}d\mathbf{\ell}^{\prime}=\frac{z}{2}\left(d\mathbf{\ell}+d\mathbf{ w}\right)+\frac{1}{2}dz\wedge\left(\mathbf{\ell}+\mathbf{w}\right)=z\left(\mathbf{F}+ \frac{1}{2}d\mathbf{w}\right)+\frac{1}{2}dz\wedge\left(\mathbf{\ell}+\mathbf{w}\right),\] \[\mathbf{s}^{\prime} =i_{n^{\prime}}\mathbf{F}^{\prime}=z^{-1}i_{n}\mathbf{F}^{\prime }=\mathbf{s}+\frac{1}{2}i_{n}d\mathbf{w}+\frac{n(z)}{2z}\left(\mathbf{\ell}+\mathbf{w}\right)- \frac{1}{2z}dz,\] where \(i_{n}\) denotes interior contraction in the first index and in the last equality we used \(\mathbf{w}(n)=0\). Using Cartan's formula \(\pounds_{n}\mathbf{w}=i_{n}d\mathbf{w}+di_{n}\mathbf{w}=i_{n}d\mathbf{w}\) yields (3.14). For the transformation of \(\mathbf{r}\) we contract the first equality in (2.62) with \(z^{-1}n\) to get \[\mathbf{r}^{\prime} =\mathbf{r}+\frac{1}{2z}dz+\frac{n(z)}{2z}\mathbf{\ell}-\frac{1}{2z} \gamma\big{(}\pounds_{zV}n,\cdot\big{)}=\mathbf{r}+\frac{1}{2z}dz+\frac{n(z)}{2z} \mathbf{\ell}+\frac{1}{2z}\gamma\big{(}\pounds_{n}(zV),\cdot\big{)}\] \[=\mathbf{r}+\frac{1}{2z}dz+\frac{n(z)}{2z}\mathbf{\ell}+\frac{1}{2z} \pounds_{n}\left(\gamma(zV,\cdot)\right)-\frac{1}{2z}\left(\pounds_{n} \gamma\right)(zV,\cdot)\] where we used the antisymmetry of the Lie bracket and "integrated by parts". Expression (3.15) follows after using \(\gamma(V,\cdot)=\mathbf{w}\). The last transformation follows at once from the previous one and the definition \(\kappa_{n}=-\mathbf{r}(n)\). Lemma 3.6 admits the following immediate corollary. 
**Corollary 3.7**.: _The covector \(\mathbf{s}-\mathbf{r}\) has the following simple gauge behaviour_ \[\mathcal{G}_{(z,V)}\left(\mathbf{s}-\mathbf{r}\right)=\mathbf{s}-\mathbf{r}+\mathbf{U}(V, \cdot)-\frac{1}{z}dz.\] ### Curvature of the metric hypersurface connection \(\overset{\circ}{\nabla}\) For later use, in this section we compute several contractions involving the curvature tensor \(\overset{\circ}{R}^{d}{}_{bca}\) and the Ricci tensor \(\overset{\circ}{R}_{ab}\). We start with the contractions with \(\ell_{d}\) and \(\gamma_{fd}\). Both follow from the general identity obtained in Proposition 2.16. **Proposition 3.8**.: _Any null metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) satisfies_ \[\ell_{d}\overset{\circ}{R}{}^{d}{}_{bca}n^{c} =\overset{\circ}{\nabla}_{b}s_{a}-s_{b}s_{a}+n(\ell^{(2)})\mathrm{ U}_{ba}+\ell^{(2)}(\pounds_{n}\mathbf{U})_{ba}+(\mathrm{F}_{af}-\ell^{(2)}\mathrm{ U}_{af})P^{cf}\mathrm{U}_{bc}, \tag{3.17}\] \[\gamma_{fd}\overset{\circ}{R}{}^{d}{}_{bca}n^{c} =\overset{\circ}{\nabla}_{b}\mathrm{U}_{fa}-\overset{\circ}{ \nabla}_{f}\mathrm{U}_{ba}+2s_{f}\mathrm{U}_{ba}-s_{b}\mathrm{U}_{af}+\ell_{f}( \pounds_{n}\mathbf{U})_{ba}-\ell_{f}P^{cd}\mathrm{U}_{bc}\mathrm{U}_{ad}. \tag{3.18}\] Proof.: Setting \(n^{(2)}=0\) in (2.60) simplifies the expression to \[\overset{\circ}{R}{}^{d}{}_{bca}n^{c} =n^{d}H_{ba}+P^{dc}L_{bca},\qquad\text{with}\qquad L_{bca} \overset{\text{\tiny def}}{=}\overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}- \overset{\circ}{\nabla}_{c}\mathrm{U}_{ba}+2s_{c}\mathrm{U}_{ba}-s_{b}\mathrm{ U}_{ac}\] \[\text{and}\qquad H_{ba}\overset{\text{\tiny def}}{=}\overset{ \circ}{\nabla}_{b}s_{a}-s_{b}s_{a}+n(\ell^{(2)})\mathrm{U}_{ba}+P^{cf}\mathrm{ U}_{cb}\mathrm{F}_{af}. \tag{3.19}\] Hence, from (2.3)-(2.6) one gets \(\ell_{d}\overset{\circ}{R}{}^{d}{}_{bca}n^{c}=H_{ba}-\ell^{(2)}n^{c}L_{bca}\) and \(\gamma_{fd}\overset{\circ}{R}{}^{d}{}_{bca}n^{c}=L_{bfa}-\ell_{f}n^{c}L_{bca}\). The proof will be complete once we establish that \(n^{c}L_{bca}=-\pounds_{n}\mathrm{U}_{ba}+P^{cf}\mathrm{U}_{fa}\mathrm{U}_{cb}\). This expression holds true because, from (2.26) together with \(\mathbf{U}(n,\cdot)=0\) and \(\boldsymbol{s}(n)=0\), \[n^{c}L_{bca}=n^{c}\left(\overset{\circ}{\nabla}_{b}\mathrm{U}_{ ca}-\overset{\circ}{\nabla}_{c}\mathrm{U}_{ba}+2s_{c}\mathrm{U}_{ba}-s_{b} \mathrm{U}_{ac}\right)=-\pounds_{n}\mathrm{U}_{ba}+\mathrm{U}_{cb}\overset{ \circ}{\nabla}_{a}n^{c}=-\pounds_{n}\mathrm{U}_{ba}+P^{cf}\mathrm{U}_{cb} \mathrm{U}_{fa}\] where in the last equality we inserted (3.2). We shall also need an expression for \(\overset{\circ}{R}{}^{d}{}_{acb}n^{a}\). As already mentioned, this was computed for general hypersurface data in [15] by using the Ricci identity applied to \(n^{a}\). Here we provide a very direct alternative proof based on Proposition 2.16 and the first Bianchi identity. This serves as a consistency check both for the result in [15] and for Proposition 2.16. We do this only in the null case, which is the result we need later, but the method of proof could be applied in general. **Lemma 3.9**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) be null metric hypersurface data. Then_ \[\overset{\circ}{R}{}^{d}{}_{acb}n^{a} =2n^{d}\left(\overset{\circ}{\nabla}_{[c}s_{b]}+P^{af}U_{a[c}F_{ b]f}\right)+2P^{df}\left(\overset{\circ}{\nabla}_{[c}U_{b]f}-s_{[c}U_{b]f} \right). 
\tag{3.20}\] Proof.: Contracting the first Bianchi identity \(\overset{\circ}{R}{}^{d}{}_{acb}+\overset{\circ}{R}{}^{d}{}_{cba}+\overset{\circ}{R}{}^{d}{}_{bac}=0\) with \(n^{a}\) yields \(\overset{\circ}{R}{}^{d}{}_{acb}n^{a}=n^{a}\left(\overset{\circ}{R}{}^{d}{}_{cab}-\overset{\circ}{R}{}^{d}{}_{bac}\right)=2n^{d}H_{[cb]}+P^{df}(L_{cfb}-L_{bfc})\), which gives (3.20) upon inserting the expressions for \(H_{bc}\) and \(L_{cfb}\) provided in (3.19). Finally, we compute the contractions of the Ricci tensor \(\overset{\circ}{\mathbf{Ric}}\) with \(n\). **Lemma 3.10**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) be null metric hypersurface data. Then,_ \[\overset{\circ}{R}{}_{ab}n^{a}=\pounds_{n}s_{b}-2P^{af}\mathrm{U}_{ab}s_{f}+P^{cf}\overset{\circ}{\nabla}_{c}\mathrm{U}_{bf}-\overset{\circ}{\nabla}_{b}(\mathrm{tr}_{P}\mathbf{U})+(\mathrm{tr}_{P}\mathbf{U})s_{b}, \tag{3.21}\] \[\overset{\circ}{R}{}_{(ab)}n^{a}=\frac{1}{2}\pounds_{n}s_{b}-2P^{af}\mathrm{U}_{ab}s_{f}+P^{cf}\overset{\circ}{\nabla}_{c}\mathrm{U}_{bf}-\overset{\circ}{\nabla}_{b}(\mathrm{tr}_{P}\mathbf{U})+(\mathrm{tr}_{P}\mathbf{U})s_{b}, \tag{3.22}\] \[\overset{\circ}{R}{}_{ab}n^{a}n^{b}=-P^{ab}P^{cd}\mathrm{U}_{ac}\mathrm{U}_{bd}-n(\mathrm{tr}_{P}\mathbf{U}). \tag{3.23}\] Proof.: To prove (3.21) we contract the indices \(d\) and \(c\) in (3.20). Identity (2.24) (for \(\boldsymbol{\theta}=\boldsymbol{s}\)) together with \(\boldsymbol{s}(n)=0\) gives \[2n^{c}\overset{\circ}{\nabla}_{[c}s_{b]}=\pounds_{n}s_{b} \tag{3.24}\] and hence \[\overset{\circ}{R}_{ab}n^{a}\stackrel{{\text{\tiny def}}}{{=}}\overset{\circ}{R}{}^{c}{}_{acb}n^{a}=\pounds_{n}s_{b}-P^{af}\mathrm{U}_{ab}s_{f}+P^{cf}\left(\overset{\circ}{\nabla}_{c}\mathrm{U}_{bf}-s_{c}\mathrm{U}_{bf}\right)-P^{cf}\left(\overset{\circ}{\nabla}_{b}\mathrm{U}_{cf}-s_{b}\mathrm{U}_{cf}\right).\] The validity of (3.21) follows because \(P^{cf}\overset{\circ}{\nabla}_{b}\mathrm{U}_{cf}=\overset{\circ}{\nabla}_{b}\left(\mathrm{tr}_{P}\mathbf{U}\right)-\left(\overset{\circ}{\nabla}_{b}P^{cf}\right)\mathrm{U}_{cf}=\overset{\circ}{\nabla}_{b}\left(\mathrm{tr}_{P}\mathbf{U}\right)\), the last equality being a consequence of (2.23). Replacing \(n^{(2)}=0\) in (2.56) gives \(\overset{\circ}{R}_{(ab)}=\overset{\circ}{R}_{ab}-\overset{\circ}{\nabla}_{[a}s_{b]}\). Contracting with \(n^{a}\) and using (3.24) and (3.21) gives (3.22). To obtain (3.23), it suffices to notice that \(n^{b}P^{cd}\overset{\circ}{\nabla}_{c}\mathrm{U}_{bd}=-\mathrm{U}_{bd}P^{cd}\overset{\circ}{\nabla}_{c}n^{b}=-P^{ab}P^{cd}\mathrm{U}_{ac}\mathrm{U}_{bd}\).

### Transverse submanifolds

One of the main tools to analyze and understand the geometry of null hypersurfaces in Lorentzian manifolds is to use spacelike sections. It is natural to introduce and study the corresponding notion in the hypersurface data formalism. In this section we discuss the geometric properties of a null metric hypersurface data set endowed with a transverse submanifold \(S\). Complementary results on non-degenerate submanifolds embedded in hypersurface data have been developed in [20], [19]. Given null metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\), we define a _transverse submanifold as a codimension one embedded submanifold of \(\mathcal{N}\) to which \(n\) is everywhere transverse_. Existence of such \(S\) is always guaranteed in sufficiently local domains of any null metric hypersurface data. Note that we are not assuming that \(S\) is a global section of \(\mathcal{N}\), i.e.
there can be generators of \(\mathcal{N}\) that do not cross \(S\). What we actually enforce is that generators intersecting \(S\) do it only once. We have several purposes in mind. We will prove that the pull-back to \(S\) of the one-form \(\boldsymbol{\ell}\) can always be set to zero by an appropriate gauge transformation. We will also derive the relation between the covariant derivative \(\overset{\circ}{\nabla}\) and its induced connection \(\nabla^{S}\) on \(S\), as well as between \(\overset{\circ}{\nabla}\), \(\nabla^{S}\) and the Levi-Civita covariant derivative \(\nabla^{h}\) on \(S\). This will allow us to relate the tangential components of the curvature tensor of \(\overset{\circ}{\nabla}\) with the curvature tensor of the induced metric \(h\). The setup will be the following. **Setup 3.11**.: _We let \(\mathcal{D}=\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) be null hypersurface data and \(S\) an \((\mathfrak{n}-1)\)-dimensional smooth submanifold of \(\mathcal{N}\), everywhere transversal to \(n\). We denote by \(\psi\) the corresponding embedding \(\psi:S\longleftrightarrow\mathcal{N}\) of \(S\) in \(\mathcal{N}\). We define \(\boldsymbol{\ell}_{\parallel}\overset{\text{\tiny{def}}}{=}\psi^{\star} \boldsymbol{\ell}\) and let \(\mathfrak{q}\) be the only normal covector along \(\psi(S)\) satisfying \(\mathfrak{q}(n)=1\). We take a basis \(\{\hat{v}_{A}\}\) of \(\Gamma(TS)\) and construct the basis \(\{n,v_{A}\overset{\text{\tiny{def}}}{=}\psi_{\star}(\hat{v}_{A})\}\) of \(\Gamma(T\mathcal{N})|_{\psi(S)}\)._ In Setup 3.11, the induced metric \(h\overset{\text{\tiny{def}}}{=}\psi^{\star}\gamma\) is non-degenerate everywhere on \(S\). Indeed, a vector \(X\in T_{p}S\) which is \(h\)-orthogonal to all \(T_{p}S\) satisfies also that \(\psi_{\star}|_{p}(X)\) is \(\gamma\)-orthogonal to all \(T_{p}\mathcal{N}\) (here we use that \(T_{p}\mathcal{N}=T_{p}S\oplus\langle n|_{p}\rangle\) and \(\gamma(n|_{p},\cdot)=0\)). Thus, \(\psi_{\star}|_{p}(X)\in\mathrm{Rad}(\gamma|_{p})\) and hence it must be proportional to \(n|_{p}\). This can only occur if \(X=0\). The contravariant metric of \(h\) will be denoted \(h^{\sharp}\). We introduce the vector \(\ell_{\parallel}\overset{\text{\tiny{def}}}{=}h^{\sharp}(\boldsymbol{\ell}_{ \parallel},\cdot)\) (with components \(\ell^{A}\)) and the scalar \(\ell_{\parallel}^{(2)}\overset{\text{\tiny{def}}}{=}h^{\sharp}(\boldsymbol{ \ell}_{\parallel},\boldsymbol{\ell}_{\parallel})\). We will frequently simplify notation by identifying \(S\), \(X\in\Gamma(TS)\), \(f\in\mathcal{F}(\psi(S))\) with their respective counterparts \(\psi(S)\), \(\psi_{\star}X\) and \(\psi^{\star}f\). For any general \(\mathfrak{p}\)-covariant tensor \(\boldsymbol{T}\) along \(S\), we define \(\boldsymbol{T}_{\parallel}\overset{\text{\tiny{def}}}{=}\psi^{\star} \boldsymbol{T}\) and write \(T_{A_{1}\ldots A_{\mathfrak{p}}}\overset{\text{\tiny{def}}}{=}\boldsymbol{T}_{ \parallel}(\hat{v}_{A_{1}},\ldots,\hat{v}_{A_{\mathfrak{p}}})\) (without the parallel symbol) for its components. Let us start by finding decomposed forms of the contractions \(P^{cf}\mathrm{U}_{fa}\), \(P^{cd}\mathrm{U}_{ac}\mathrm{U}_{bd}\), \(\mathrm{tr}_{P}\mathbf{Y}\) and \(\mathrm{tr}_{P}\mathbf{U}\). They are obtained as a corollary of Lemma 3.5. 
**Corollary 3.12**.: _In the Setup 3.11, the following identities hold:_ \[P^{cf}\mathrm{U}_{fa} =h^{IJ}v^{f}_{J}(v^{c}_{I}-\ell_{I}n^{c})\mathrm{U}_{fa}, P^{cd}\mathrm{U}_{ac}\mathrm{U}_{bd} =h^{IJ}v^{c}_{I}v^{d}_{J}\mathrm{U}_{ac}\mathrm{U}_{bd}, \tag{3.25}\] \[\mathrm{tr}_{P}\mathbf{Y} =\mathrm{tr}_{h}\mathbf{Y}_{\parallel}-2\ell^{A}r_{A}+\kappa_{n} (\ell^{(2)}-\ell^{(2)}_{\parallel}), \mathrm{tr}_{P}\mathbf{U} =\mathrm{tr}_{h}\mathbf{U}_{\parallel}. \tag{3.26}\] Proof.: Recall that \(\mathbf{U}(n,\cdot)=0\). By adapting Lemma 3.5 to the basis \(\{n,v_{A}\}\) introduced in Setup 3.11, it follows at once that the tensor field \(P\) decomposes as \[P^{cf}=h^{AB}v^{c}_{A}v^{f}_{B}-h^{AB}\ell_{B}(n^{c}v^{f}_{A}+n^{f}v^{c}_{A})+ (\ell^{(2)}_{\parallel}-\ell^{(2)})n^{c}n^{f} \tag{3.27}\] because \(\mathfrak{h}_{AB}=h_{AB}\) and \(\psi_{A}=\ell_{A}\). Equations (3.25) automatically follow from the decomposition (3.27). Expressions (3.26) can be computed by inserting (3.27) into \(\mathrm{tr}_{P}\mathbf{Y}\) and \(\mathrm{tr}_{P}\mathbf{U}\). For the former we find \[\mathrm{tr}_{P}\mathbf{Y} =P^{cd}\mathrm{Y}_{cd}=(h^{CD}v^{c}_{C}v^{d}_{D}-\ell^{D}(n^{c}v^{ d}_{D}+n^{d}v^{c}_{D})-(\ell^{(2)}-\ell^{(2)}_{\parallel})n^{c}n^{d}) \mathrm{Y}_{cd}\] \[=\mathrm{tr}_{h}\mathbf{Y}_{\parallel}-2\ell^{D}r_{D}+\kappa_{n} (\ell^{(2)}-\ell^{(2)}_{\parallel}),\] while the latter is given by \(\mathrm{tr}_{P}\mathbf{U}=P^{cd}\mathrm{U}_{cd}=h^{CD}v^{c}_{C}v^{d}_{D} \mathrm{U}_{cd}=\mathrm{tr}_{h}\mathbf{U}_{\parallel}\). The gauge freedom is an asset of the formalism. However, it is also useful to find ways of fixing it. In the present context it is meaningful to know that the pull-back \(\boldsymbol{\ell}_{\parallel}\) to the transverse submanifold \(S\) can always be set to zero via an appropriate gauge transformation. **Lemma 3.13**.: _In the Setup 3.11, \(\boldsymbol{\ell}_{\parallel}\) can be made zero by a suitable gauge transformation._ Proof.: As before we use a prime for the gauge-transformed quantities. Let \(\boldsymbol{\vartheta}\) be any normal covector to \(\psi(S)\). By transversality of \(n\) to \(S\), it follows that \(\boldsymbol{\vartheta}(n)\neq 0\) everywhere on \(\psi(S)\). Now, consider gauge parameters \(\{z,V\}\) with \(z\) arbitrary and \(V\) satisfying \[V\stackrel{{\psi(S)}}{{=}}\frac{1}{\boldsymbol{\vartheta}(n)}P( \boldsymbol{\vartheta},\cdot)+un,\quad\text{where}\quad u\in\mathcal{F}(\psi(S )). \tag{3.28}\] Then, for any \(X\in\Gamma(TS)\) we get \[\boldsymbol{\ell}^{\prime}_{\parallel}(X)\stackrel{{ S}}{{=}} \boldsymbol{\ell}^{\prime}(X)\stackrel{{ S}}{{=}}z\left( \boldsymbol{\ell}(X)+\gamma(V,X)\right)\stackrel{{ S}}{{=}}z \left(\boldsymbol{\ell}(X)+\frac{1}{\boldsymbol{\vartheta}(n)}\boldsymbol{ \vartheta}(X)-\boldsymbol{\ell}(X)\right)\stackrel{{ S}}{{=}}\frac {z\boldsymbol{\vartheta}(X)}{\boldsymbol{\vartheta}(n)}\stackrel{{ S}}{{=}}0,\] where the last step is a consequence of \(\boldsymbol{\vartheta}\) being normal to \(S\). Again by transversality of \(n\) to \(S\) it follows that, for any pair of vector fields \(X,Y\in\Gamma(TS)\), the derivative \(\stackrel{{\circ}}{{\nabla}}_{X}Y\) can be decomposed uniquely on \(S\) as \[\stackrel{{\circ}}{{\nabla}}_{X}Y=\nabla^{S}_{X}Y+\Omega(X,Y)n, \tag{3.29}\] with \(\nabla^{S}_{X}Y\in\Gamma(TS)\). It is a general property of the geometry of hypersurfaces in affine spaces that \(\Omega\) is a 2-covariant tensor and \(\nabla^{S}\) a connection of \(S\). 
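For completeness, let us check this explicitly in the present context. For any \(f\in\mathcal{F}(S)\) and \(X,Y\in\Gamma(TS)\), the standard properties \(\overset{\circ}{\nabla}_{fX}Y=f\overset{\circ}{\nabla}_{X}Y\) and \(\overset{\circ}{\nabla}_{X}(fY)=f\overset{\circ}{\nabla}_{X}Y+X(f)Y\), combined with the uniqueness of the decomposition (3.29) (recall \(T_{p}\mathcal{N}=T_{p}S\oplus\langle n|_{p}\rangle\)), give \[\Omega(fX,Y)=f\,\Omega(X,Y)=\Omega(X,fY),\qquad\nabla^{S}_{fX}Y=f\nabla^{S}_{X}Y,\qquad\nabla^{S}_{X}(fY)=f\nabla^{S}_{X}Y+X(f)Y,\] because the extra term \(X(f)Y\) is tangent to \(S\) and therefore only contributes to the \(\nabla^{S}\) part of the decomposition.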
Since \(\stackrel{{\circ}}{{\nabla}}\) is torsion-free it follows that \(\nabla^{S}\) is also torsion-free and that \(\Omega\) is symmetric. Consequently, \(\Omega\) can be determined by \[2\Omega(X,Y)n=\stackrel{{\circ}}{{\nabla}}_{X}Y-\nabla^{S}_{X}Y+ \stackrel{{\circ}}{{\nabla}}_{Y}X-\nabla^{S}_{Y}X\quad\Longrightarrow \quad 2\Omega(X,Y)=\boldsymbol{\mathfrak{q}}\big{(}\stackrel{{ \circ}}{{\nabla}}_{X}Y+\stackrel{{\circ}}{{\nabla}}_{Y}X\big{)}. \tag{3.30}\] We can elaborate (3.30) in terms of \(\mathbf{U}_{\parallel}\stackrel{{\text{\tiny{def}}}}{{=}}\psi^{*} \mathbf{U}\) and derivatives of \(\boldsymbol{\ell}_{\parallel}\). We first note that \[\boldsymbol{\ell}=\boldsymbol{\ell}_{\parallel}+\boldsymbol{q} \tag{3.31}\] everywhere on \(S\) (because both sides agree when acting on the vector \(n\) as well as on a tangential vector \(X\)). Taking into account (2.21) we compute \[\boldsymbol{q}(\overset{\circ}{\nabla}_{X}Y) =\boldsymbol{\ell}(\overset{\circ}{\nabla}_{X}Y)-\boldsymbol{ \ell}_{\parallel}(\nabla^{S}_{X}Y)=X\left(\boldsymbol{\ell}\left(Y\right) \right)-(\overset{\circ}{\nabla}_{X}\boldsymbol{\ell})(Y)-\boldsymbol{\ell} _{\parallel}(\nabla^{S}_{X}Y)\] \[=X(\boldsymbol{\ell}_{\parallel}(Y))-\mathbf{F}(X,Y)+\ell^{(2)} \mathbf{U}_{\parallel}(X,Y)-\boldsymbol{\ell}_{\parallel}(\nabla^{S}_{X}Y)\] \[=(\nabla^{S}_{X}\boldsymbol{\ell}_{\parallel})\left(Y\right)- \mathbf{F}(X,Y)+\ell^{(2)}\mathbf{U}_{\parallel}(X,Y). \tag{3.32}\] Inserting this into (3.30) and using that \(\mathbf{F}\) is antisymmetric yields the explicit form of \(\Omega\), namely \[\Omega(X,Y)=\frac{1}{2}\left((\nabla^{S}_{X}\boldsymbol{\ell}_{\parallel}) \left(Y\right)+(\nabla^{S}_{Y}\boldsymbol{\ell}_{\parallel})\left(X\right) \right)+\ell^{(2)}\mathbf{U}_{\parallel}(X,Y). \tag{3.33}\] We now obtain the explicit relation between the Levi-Civita covariant derivative \(\nabla^{h}\) on \(S\) and the connections \(\overset{\circ}{\nabla}\), \(\nabla^{S}\). **Lemma 3.14**.: _In the Setup 3.11, let \(\nabla^{S}\) be the connection defined by (3.29) and \(\nabla^{h}\) the Levi-Civita covariant derivative on \((S,h)\). Define \(\ell_{\parallel}^{(2)\text{\tiny{def}}}\stackrel{{\text{\tiny{ def}}}}{{=}}h^{\sharp}(\boldsymbol{\ell}_{\parallel},\boldsymbol{\ell}_{\parallel})\). Then,_ \[\nabla^{h} =\nabla^{S}-h^{\sharp}(\boldsymbol{\ell}_{\parallel},\cdot) \otimes\mathbf{U}_{\parallel}, \tag{3.34}\] \[\overset{\circ}{\nabla}_{X}Y =\nabla^{h}_{X}Y+h^{\sharp}(\boldsymbol{\ell}_{\parallel},\cdot )\mathbf{U}_{\parallel}(X,Y)+\Omega(X,Y)n\qquad\forall X,Y\in\Gamma(TS), \tag{3.35}\] _and \(\Omega\) can be written as_ \[\Omega(X,Y)=\frac{1}{2}\left(\left(\nabla^{h}_{X}\boldsymbol{\ell}_{\parallel} \right)\left(Y\right)+\left(\nabla^{h}_{Y}\boldsymbol{\ell}_{\parallel}\right) \left(X\right)\right)+\left(\ell^{(2)}-\ell_{\parallel}^{(2)}\right)\mathbf{U }_{\parallel}(X,Y). \tag{3.36}\] Proof.: It is well-known that any torsion-free connection \(D\) on \(S\) relates to \(\nabla^{h}\) according to \[D_{X}Y=\nabla^{h}_{X}Y-\Xi(X,Y),\quad\text{where}\quad\Xi^{A}_{BC}\stackrel{{ \text{\tiny{def}}}}{{=}}\frac{1}{2}h^{AJ}\left(D_{B}h_{CJ}+D_{C}h_{BJ}-D_{J} h_{BC}\right). 
\tag{3.37}\] In order to apply this to \(\nabla^{S}\) we compute \((\nabla^{S}_{X}h)(Y,W)\): \[(\nabla^{S}_{X}h)(Y,W) =\nabla^{S}_{X}(h(Y,W))-h(\nabla^{S}_{X}Y,W)-h(\nabla^{S}_{X}W,Y)\] \[=\overset{\circ}{\nabla}_{X}(\gamma(Y,W))-\gamma(\overset{\circ} {\nabla}_{X}Y,W)-\gamma(\overset{\circ}{\nabla}_{X}W,Y)\] \[=\psi^{\star}(\overset{\circ}{\nabla}_{X}\gamma)(Y,W)=- \boldsymbol{\ell}_{\parallel}(Y)\mathbf{U}_{\parallel}(X,W)-\boldsymbol{\ell} _{\parallel}(W)\mathbf{U}_{\parallel}(X,Y), \tag{3.38}\] where in the second equality we used \(\gamma(n,\cdot)=0\) and in the last step we inserted (2.20). The tensor \(\Xi\) corresponding to \(D=\nabla^{S}\) is therefore \[\Xi^{A}_{BC}\stackrel{{\text{\tiny{def}}}}{{=}}\frac{1}{2}h^{AD} \left(\nabla^{S}_{B}h_{CD}+\nabla^{S}_{C}h_{BD}-\nabla^{S}_{D}h_{BC}\right)=- h^{AD}\ell_{D}\mathrm{U}_{BC},\] which establishes (3.34). Equation (3.36) follows at once by combining (3.33) and (3.34). Equation (3.35) is an immediate consequence of inserting (3.34) into (3.29). Equation (3.34) means that \(\nabla^{S}\) coincides with \(\nabla^{h}\) if either \((i)\)\(\boldsymbol{\ell}_{\parallel}=0\) or \((ii)\)\(\mathbf{U}_{\parallel}=0\). Moreover, \((ii)\) is equivalent to \(\mathbf{U}=0\) because \(\mathbf{U}(n,\cdot)=0\) (cf. (2.17)). Observe that \(\nabla^{h}\) is a gauge independent quantity, but \(\nabla^{S}\) is not. In fact, as proven in Lemma 3.13, the one-form \(\boldsymbol{\ell}_{\parallel}\) can be made zero by an appropriate choice of gauge. The tensor \(\mathbf{U}\) is a property of the data and in general it is non-zero (this is a gauge invariant statement because \(\mathcal{G}_{(z,V)}(\mathbf{U})=z^{-1}\mathbf{U}\), see (3.12)). Therefore, generically \(\nabla^{S}\) coincides with \(\nabla^{h}\) only in case \((i)\). We now determine the pull-back to \(S\) of two differential operations on \(\mathcal{N}\). We start with the \(\overset{\circ}{\nabla}\) derivative of any \(\mathfrak{p}\)-covariant tensor field \(\mathcal{T}\). **Lemma 3.15**.: _Consider null metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) and assume Setup 3.11. Any \(\mathfrak{p}\)-covariant tensor field \(\mathcal{T}\) along \(S\) verifies_ \[v_{A_{1}}^{a_{1}}\ldots v_{A_{\mathfrak{p}}}^{a_{\mathfrak{p}}}v _{B}^{b}\overset{\circ}{\nabla}_{b}\mathcal{T}_{a_{1}\cdots a_{\mathfrak{p}} }=\nabla_{B}^{h}\mathcal{T}_{A_{1}\cdots A_{\mathfrak{p}}}-\sum_{\mathrm{i}=1 }^{\mathfrak{p}}\ell^{J}\mathcal{T}_{A_{1}\cdots A_{\mathfrak{i}-1}JA_{ \mathfrak{i}+1}\cdots A_{\mathfrak{p}}}\mathrm{U}_{A_{\mathfrak{i}}B}\] \[\quad-\sum_{\mathrm{i}=1}^{\mathfrak{p}}\mathcal{T}_{a_{1}\cdots a _{\mathfrak{p}}}v_{A_{1}}^{a_{1}}\ldots v_{A_{\mathfrak{i}-1}}^{a_{\mathfrak{i }-1}}n^{a_{\mathfrak{i}}}v_{A_{\mathfrak{i}+1}}^{a_{\mathfrak{i}+1}}\ldots v_{ A_{\mathfrak{p}}}^{a_{\mathfrak{p}}}\left(\nabla_{(A_{\mathfrak{i}}}^{h}\ell_{B)}+( \ell^{(2)}-\ell_{\parallel}^{(2)})\mathrm{U}_{A_{\mathfrak{i}}B}\right), \tag{3.39}\] _where \(\mathcal{T}_{\parallel}\stackrel{{\mathsf{def}}}{{=}}\psi^{ \star}\mathcal{T}\)._ Proof.: We prove it for covectors. The case of covariant tensors with more indices is analogous. 
From (3.35)-(3.36), we obtain \[v_{A}^{a}v_{B}^{b}\overset{\circ}{\nabla}_{b}\mathcal{T}_{a} =v_{B}\left(\mathcal{T}_{A}\right)-\mathcal{T}_{a}v_{B}^{b} \overset{\circ}{\nabla}_{b}v_{A}^{a}=v_{B}\left(\mathcal{T}_{A}\right)- \mathcal{T}_{J}(\nabla_{v_{B}}^{h}v_{J}^{J}+\ell^{J}\mathrm{U}_{AB})-\mathcal{ T}_{a}n^{a}\Omega_{AB}\] \[=\nabla_{B}^{h}\mathcal{T}_{A}-\ell^{J}\mathcal{T}_{J}\mathrm{U}_ {AB}-\mathcal{T}_{a}n^{a}\left(\nabla_{(A}^{h}\ell_{B)}+(\ell^{(2)}-\ell_{ \parallel}^{(2)})\mathrm{U}_{AB}\right),\] where in the last step we used that \(\Omega_{AB}=\nabla_{(A}^{h}\ell_{B)}+(\ell^{(2)}-\ell_{\parallel}^{(2)}) \mathrm{U}_{AB}\) (cf. (3.36)). Next we find the pull-back to \(S\) of the Lie derivative along any direction of a general symmetric \(2\)-covariant tensor \(\boldsymbol{T}\) satisfying \(\boldsymbol{T}(n,\cdot)=0\). **Lemma 3.16**.: _Assume Setup 3.11 and let **T** be a symmetric \(2\)-covariant tensor on \(\mathcal{N}\) satisfying \(\boldsymbol{T}(n,\cdot)=0\). Consider a smooth function \(q\in\mathcal{F}(\psi(S))\) and a covector field \(\boldsymbol{\beta}\in\Gamma(T^{\star}\mathcal{N})|_{\psi(S)}\) verifying \(\boldsymbol{\beta}(n)=0\) and define \(t^{a}\stackrel{{\mathsf{def}}}{{=}}qn^{a}+P^{ab}\beta_{b}\). Then,_ \[\left(\pounds_{t}T\right)_{AB}=(q-\ell^{C}\beta_{C})|_{S}(\pounds_{n}T)_{AB}+ \beta^{C}\nabla_{C}^{h}T_{AB}+T_{AC}\nabla_{B}^{h}\beta^{C}+T_{CB}\nabla_{A}^{ h}\beta^{C}. \tag{3.40}\] Proof.: Using the decomposition (3.27) of \(P^{ab}\) and the fact that \(\beta_{a}n^{a}=0\) we write \[t^{a}=qn^{a}+h^{AB}v_{A}^{a}v_{B}^{b}\beta_{b}-h^{AB}\ell_{B}n^{a}v_{A}^{b} \beta_{b}=(q-\ell^{A}\beta_{A})n^{a}+\beta^{A}v_{A}^{a}.\] For any function \(f\) we have \(\pounds_{fn}\mathbf{T}=f\pounds_{n}\mathbf{T}\) because \(\mathbf{T}(n,\cdot)=\mathbf{T}(\cdot,n)=0\). On the other hand, for any vector field \(W\) tangent to \(S\) (i.e. such that there exists \(\overline{W}\in\Gamma(TS)\) such that \(W|_{S}=\psi_{\star}\overline{W}\)) it holds \(\psi^{\star}\left(\pounds_{W}\mathbf{T}\right)=\pounds_{\overline{W}}\left( \psi^{\star}\mathbf{T}\right)\). Thus, \[\psi^{\star}\left(\pounds_{t}\mathbf{T}\right)=\psi^{\star}(\pounds_{(q-\ell^{ C}\beta_{C})n}\mathbf{T})+\pounds_{\beta^{\sharp}}\left(\psi^{\star}\mathbf{T} \right)=(q-\ell^{C}\beta_{C})|_{S}\psi^{\star}\left(\pounds_{n}\mathbf{T} \right)+\pounds_{\beta^{\sharp}}\left(\psi^{\star}\mathbf{T}\right),\] where \(\beta^{\sharp}\) is the vector field in \(S\) with abstract index components \(\beta^{A}\). Since \(\nabla^{h}\) is torsion-free the last term can be expanded in terms of the covariant derivative and (3.40) follows. Having obtained a Gauss-type equation relating the covariant derivatives \(\overset{\circ}{\nabla}\) and \(\nabla^{h}\) on tangent vectors to \(S\) (expression (3.35) in Lemma 3.14) we can also relate the tangential components of the curvature tensor of \(\overset{\circ}{\nabla}\) and the curvature tensor of the induced metric \(h\). This result relies on a generalized Gauss identity that we derive in Appendix A. Recall that on a semi-Riemannian ambient manifold, the Gauss identity is an equation relating the curvature tensor of the Levi-Civita connection along tangential directions of a non-degenerate hypersurface with the curvature tensor of the induced metric and the second fundamental form. In Appendix A, we have extended this result to the more general case when the connection of the space and of the hypersurface are completely general, except for the condition that they are both torsion-free. 
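In the present setting the two connections involved are \(\overset{\circ}{\nabla}\) and \(\nabla^{h}\) and, according to (3.35)-(3.36), the role of the second fundamental form is played by the difference tensor \[\overset{\circ}{\nabla}_{X}Y-\nabla^{h}_{X}Y=\mathbf{U}_{\parallel}(X,Y)\,h^{\sharp}(\boldsymbol{\ell}_{\parallel},\cdot)+\Omega(X,Y)\,n,\qquad X,Y\in\Gamma(TS),\] whose tangential and transverse parts enter the generalized Gauss identity separately.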
By particularizing Theorem A.1 (more specifically the abstract index notation form (A.7)) to the case of null hypersurface data, we get to the following result. **Lemma 3.17**.: _Consider null metric hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)}\}\) and assume Setup 3.11. Let \(R^{h}\) the Riemann tensor of \(S\). Then,_ \[v_{A}^{a}\gamma_{af}\overset{\circ}{R^{f}}_{bcd}v_{B}^{b}v_{C}^{ c}v_{D}^{d} =R_{ABCD}^{h}+2\nabla^{h}_{[C]}(\ell_{A}\mathrm{U}_{B|D]})+\ell_{A }\ell^{F}\left(\mathrm{U}_{BD}\mathrm{U}_{CF}-\mathrm{U}_{BC}\mathrm{U}_{DF}\right)\] \[\quad+\mathrm{U}_{AC}\left((\ell^{2}-\ell_{\parallel}^{(2)}) \mathrm{U}_{BD}+\nabla^{h}_{(B}\ell_{D)}\right)\] \[\quad-\mathrm{U}_{AD}\left((\ell^{2}-\ell_{\parallel}^{(2)}) \mathrm{U}_{BC}+\nabla^{h}_{(B}\ell_{C)}\right). \tag{3.41}\] Proof.: We particularize Theorem A.1 for \(\widehat{\nabla}=\overset{\circ}{\nabla}\), \(\widehat{D}=\nabla^{h}\), \(\widehat{\gamma}=\gamma\). In such case, \(\widehat{h}=h\) and (3.35)-(3.36) hold, which means that \(A^{C}{}_{AB}=\ell^{C}\mathrm{U}_{AB}\), \(A_{hCAB}=\ell_{C}\mathrm{U}_{AB}\) and \(\Omega_{AB}=\nabla^{h}_{(A}\ell_{B)}+(\ell^{(2)}-\ell_{\parallel}^{(2)}) \mathrm{U}_{AB}\). The only term that needs further evaluation is \(v_{D}^{d}v_{A}^{a}(\overset{\circ}{\nabla}_{d}\gamma_{af})\mathcal{P}^{f}{}_ {BC}\). This is straightforward from (2.4) and (2.20), namely \[v_{D}^{d}v_{A}^{a}(\overset{\circ}{\nabla}_{d}\gamma_{af}) \mathcal{P}^{f}{}_{BC} = -v_{D}^{d}v_{A}^{a}(\ell_{a}\mathrm{U}_{df}+\ell_{f}\mathrm{U}_{ da})(v_{F}^{f}A^{F}{}_{BC}+n^{f}\Omega_{BC}) \tag{3.42}\] \[= -\ell_{A}\ell^{F}\mathrm{U}_{DF}\mathrm{U}_{BC}-\ell_{\parallel} ^{(2)}\mathrm{U}_{DA}\mathrm{U}_{BC}-\mathrm{U}_{DA}\Omega_{BC}\] \[= -\ell_{A}\ell^{F}\mathrm{U}_{DF}\mathrm{U}_{BC}-\mathrm{U}_{DA}( \nabla^{h}_{(B}\ell_{C)}+\ell^{(2)}\mathrm{U}_{BC}).\] Equation (3.41) follows at once after inserting (3.42) into (A.7) and using \(\gamma(n,n)=0\). ## 4 Constraint tensor: Definition and first properties In this section we finally come to the (purely abstract) definition of the constraint tensor. First, we show that a certain linear combination of the tangential components of the ambient Ricci tensor and of the transverse-tangential-transverse-tangential components of the ambient Riemann tensor can be computed exclusively in terms of the hypersurface data (whenever it is embedded). This will lead naturally to the definition, on any hypersurface data, of a symmetric 2-covariant tensor that encodes at the purely abstract level this combination of the ambient Riemann tensor. This construction is done for general data although, as we shall see next, the most interesting case arises at null points because then this tensor encodes precisely the information of the tangential components of the ambient Ricci tensor. Consider hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) embedded in an ambient space \((\mathcal{M},g)\) with embedding \(\phi\) and rigging vector \(\zeta\) and assume Setup 2.9. The first decomposition in (2.34) can be used to compute the ambient Ricci tensor along tangential directions to \(\phi(\mathcal{N})\), i.e. \(R_{\alpha\beta}e_{b}^{\alpha}e_{d}^{\beta}\). 
From \(R_{\alpha\beta}\stackrel{{\text{\tiny def}}}{{=}}g^{\mu\nu}R_{\mu\alpha\nu\beta}\), it follows \[R_{\alpha\beta}e_{b}^{\alpha}e_{d}^{\beta}\stackrel{{\phi(\mathcal{N})}}{{=}}n^{(2)}R_{\mu\alpha\nu\beta}\zeta^{\mu}e_{b}^{\alpha}\zeta^{\nu}e_{d}^{\beta}+n^{c}\left(R_{\mu\alpha\nu\beta}\zeta^{\mu}e_{b}^{\alpha}e_{c}^{\nu}e_{d}^{\beta}+R_{\nu\beta\mu\alpha}\zeta^{\nu}e_{d}^{\beta}e_{c}^{\mu}e_{b}^{\alpha}\right)+P^{ac}R_{\mu\alpha\nu\beta}e_{a}^{\mu}e_{b}^{\alpha}e_{c}^{\nu}e_{d}^{\beta}.\] By Proposition 2.19 we know that the contractions \(R_{\mu\alpha\nu\beta}\zeta^{\mu}e_{b}^{\alpha}e_{c}^{\nu}e_{d}^{\beta}\) and \(R_{\mu\alpha\nu\beta}e_{a}^{\mu}e_{b}^{\alpha}e_{c}^{\nu}e_{d}^{\beta}\) can be written in terms of the hypersurface data. However, in general this is not true for the components \(R_{\mu\alpha\nu\beta}\zeta^{\mu}e_{b}^{\alpha}\zeta^{\nu}e_{d}^{\beta}\). We thus write the previous identity as (recall (2.35)) \[\mathbf{Ric}(e_{b},e_{d})-g(\nu,\nu)\mathbf{Riem}(\zeta,e_{b},\zeta,e_{d})\stackrel{{\phi(\mathcal{N})}}{{=}}2n^{c}\mathbf{Riem}(\zeta,e_{(b|},e_{c},e_{|d)})+P^{ac}\mathbf{Riem}(e_{a},e_{b},e_{c},e_{d}), \tag{4.1}\] where \(\mathbf{Ric}\) and \(\mathbf{Riem}\) are respectively the Ricci and Riemann tensors of \((\mathcal{M},g)\). Note that at null points (where \(n^{(2)}=g(\nu,\nu)=0\)) the left-hand side simplifies and reduces to the tangential components of the ambient Ricci tensor alone. At non-null points, it is precisely that combination of tangential Ricci and tangential-transverse Riemann tensor that can be computed in terms of the hypersurface data. It therefore makes sense to obtain the explicit expressions on the right-hand side of (4.1) in terms of \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\). To do that there is no need to assume any longer that the data is embedded. We work at the abstract level by introducing two tensors \(A_{bcd}\) and \(B_{abcd}\) on \(\mathcal{N}\), which correspond to the hypersurface data counterparts of \(R_{\mu\alpha\nu\beta}\zeta^{\mu}e_{b}^{\alpha}e_{c}^{\nu}e_{d}^{\beta}\) and \(R_{\mu\alpha\nu\beta}e_{a}^{\mu}e_{b}^{\alpha}e_{c}^{\nu}e_{d}^{\beta}\) respectively (as given in Proposition 2.19). The right-hand side of (4.1) can then be elaborated at the abstract level by computing the contractions \(n^{c}(A_{bcd}+A_{dcb})\) and \(P^{ac}B_{abcd}\). As already mentioned, we start with the definitions of \(A_{bcd}\) and \(B_{abcd}\) as dictated by Proposition 2.19. **Definition 4.1**.: _(Tensors \(A\) and \(B\)) Given hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\), the tensors \(A\) and \(B\) are defined as_ \[A_{bcd}\stackrel{{\text{\tiny def}}}{{=}}\ell_{a}\overset{\circ}{R}^{a}_{\ bcd}+2\overset{\circ}{\nabla}_{[d}Y_{c]b}+2\ell^{(2)}\overset{\circ}{\nabla}_{[d}U_{c]b}+U_{b[c}\overset{\circ}{\nabla}_{d]}\ell^{(2)}+Y_{b[d}\left(2\left(\mathrm{F}_{c|f}+\mathrm{Y}_{c|f}\right)n^{f}+n^{(2)}\overset{\circ}{\nabla}_{c]}\ell^{(2)}\right) \tag{4.2}\] \[B_{abcd}\stackrel{{\text{\tiny def}}}{{=}}\gamma_{af}\overset{\circ}{R}^{f}_{\ bcd}+2\ell_{a}\overset{\circ}{\nabla}_{[d}U_{c]b}+2Y_{b[c}U_{d]a}+2U_{b[c}\left(Y_{d]a}+\mathrm{F}_{d]a}\right)+2n^{(2)}Y_{b[c}Y_{d]a}. \tag{4.3}\] We proceed with the evaluation of \(n^{c}(A_{bcd}+A_{dcb})\) and \(P^{ac}B_{abcd}\). Our guiding principle for the computation is to let as many derivatives of \(\mathbf{Y}\) as possible appear in the form of \(\pounds_{n}\mathbf{Y}\), i.e. as evolution terms along the direction \(n\).
This will be particularly useful in the null case, where \(n\) is the degeneration direction of \(\gamma\). The result, however, holds in full generality. **Proposition 4.2**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) be hypersurface data and \(\boldsymbol{r}\), \(\kappa_{n}\) be given by (2.63). Then, the tensors \(A\) and \(B\) introduced in Definition 4.1 satisfy the following identities:_ \[P^{ac}B_{abcd} =\overset{\circ}{R}_{(bd)}-\overset{\circ}{\nabla}_{(b}s_{d)}+s_ {b}s_{d}-\mathrm{U}_{bd}n(\ell^{(2)})\] \[\quad+\left(-2n^{(2)}s_{(b}+\frac{1}{2}\overset{\circ}{\nabla}_ {(b}n^{(2)}+\frac{1}{2}(n^{(2)})^{2}\overset{\circ}{\nabla}_{(b}\ell^{(2)} \right)\overset{\circ}{\nabla}_{d)}\ell^{(2)}\right.\] \[\quad+P^{ac}\Big{(}n^{(2)}\mathrm{F}_{ba}\mathrm{F}_{dc}+\mathrm{ U}_{ba}\mathrm{Y}_{dc}+\mathrm{U}_{da}\mathrm{Y}_{bc}-\mathrm{U}_{bd}\mathrm{Y}_{ac}- \mathrm{Y}_{bd}\mathrm{U}_{ac}\] \[\quad+n^{(2)}\mathrm{Y}_{ba}\mathrm{Y}_{dc}-n^{(2)}\mathrm{Y}_{bd} \mathrm{Y}_{ac}\Big{)}, \tag{4.4}\] \[n^{c}\left(A_{bcd}+A_{dcb}\right)= -2\pounds_{n}\mathrm{Y}_{bd}+2\overset{\circ}{\nabla}_{(b}\left(s _{d)}+r_{d}\right)-2\kappa_{n}\mathrm{Y}_{bd}-2\left(r_{b}-s_{b}\right)\left(r_ {d}-s_{d}\right)\] \[+\left(3n^{(2)}s_{(b}-3n^{(2)}r_{(b}-\frac{1}{2}\overset{\circ}{ \nabla}_{(b}n^{(2)}-\frac{1}{2}(n^{(2)})^{2}\overset{\circ}{\nabla}_{(b}\ell^{(2) )}\right)\overset{\circ}{\nabla}_{d)}\ell^{(2)}\] \[+\left(\mathrm{U}_{bd}+n^{(2)}\mathrm{Y}_{bd}\right)n(\ell^{(2)} )+2P^{ac}\left(\mathrm{Y}_{c(b}-\mathrm{F}_{c(b)}\right)\left(\mathrm{U}_{d)a }-n^{(2)}\mathrm{F}_{d)a}\right). \tag{4.5}\] _Moreover, it also holds_ \[n^{c}\left(A_{bcd}+A_{dcb}\right)+P^{ac}B_{abcd}= \overset{\circ}{R}_{(bd)}-2\pounds_{n}\mathrm{Y}_{bd}-\left(2 \kappa_{n}+\mathrm{tr}_{P}\mathbf{U}-n^{(2)}\left(n(\ell^{(2)})-\mathrm{tr}_{P }\mathbf{Y}\right)\right)\mathrm{Y}_{bd}\] \[+\overset{\circ}{\nabla}_{(b}\left(s_{d)}+2r_{d)}\right)-2r_{b}r _{d}+4r_{(b}s_{d)}-s_{b}s_{d}-(\mathrm{tr}_{P}\mathbf{Y})\mathrm{U}_{bd}\] \[+2P^{ac}\mathrm{U}_{a(b}\left(2Y_{d)c}+\mathrm{F}_{d)c}\right)+n^ {(2)}\Big{(}\left(s_{(b}-3r_{(b)}\overset{\circ}{\nabla}_{d)}\ell^{(2)}\right.\] \[+P^{ac}\left(\mathrm{Y}_{ab}+\mathrm{F}_{ab}\right)\left(\mathrm{ Y}_{cd}+\mathrm{F}_{cd}\right)\Big{)}. \tag{4.6}\] Proof.: We have already computed \(\ell_{f}\overset{\circ}{R}^{f}{}_{bcd}n^{c}\) in (3.17). However, for this proof it is useful to use an alternative form. The Ricci identity applied to \(\ell_{b}\) gives \[\ell_{f}\overset{\circ}{R}^{f}{}_{bcd}n^{c}=\overset{\circ}{\nabla}_{d} \overset{\circ}{\nabla}_{c}\ell_{b}-\overset{\circ}{\nabla}_{c}\overset{ \circ}{\nabla}_{d}\ell_{b}\overset{\circ}{=}\overset{\circ}{\nabla}_{d} \left(\mathrm{F}_{cb}-\ell^{(2)}\mathrm{U}_{cb}\right)-\overset{\circ}{\nabla }_{c}\left(\mathrm{F}_{db}-\ell^{(2)}\mathrm{U}_{db}\right). \tag{4.7}\] Contracting this with \(n^{c}\) and using (2.28) applied to \(A\longrightarrow\mathbf{F}\) and \(\mathfrak{a}\longrightarrow\boldsymbol{s}\) one gets, after inserting \(d\mathbf{F}=0\) (which follows from the definition \(\mathbf{F}=\frac{1}{2}d\boldsymbol{\ell}\)), \[\ell_{f}\overset{\circ}{R}^{f}{}_{bcd}n^{c}=\overset{\circ}{\nabla}_{b}s_{d }-\mathrm{F}_{cd}\overset{\circ}{\nabla}_{b}n^{c}-2\ell^{(2)}n^{c}\overset{ \circ}{\nabla}_{[d}\mathrm{U}_{c]b}+\mathrm{U}_{bd}n(\ell^{(2)})-n^{c}\mathrm{ U}_{cb}\overset{\circ}{\nabla}_{d}\ell^{(2)}. 
\tag{4.8}\] By (2.56) the Ricci tensor \(\overset{\circ}{R}_{ab}\) can be written as \[\overset{\circ}{R}_{bd}=\overset{\circ}{R}_{(bd)}+\overset{\circ}{R}_{[bd] }=\overset{\circ}{R}_{(bd)}+\overset{\circ}{\nabla}_{[b}s_{d]}-\frac{1}{2} \overset{\circ}{\nabla}_{[b}n^{(2)}\overset{\circ}{\nabla}_{d]}\ell^{(2)}. \tag{4.9}\] Combining (2.5)-(2.6) with (4.8) we then obtain \[P^{ac}\left(\gamma_{af}\overset{\circ}{R}^{f}{}_{bcd}+2\ell_{a} \overset{\circ}{\nabla}_{[d}\mathrm{U}_{c]b}\right)= \overset{\circ}{R}_{bd}-n^{c}\ell_{f}\overset{\circ}{R}^{f}{}_{bcd}-2\ell^{ (2)}n^{c}\overset{\circ}{\nabla}_{[d}\mathrm{U}_{c]b}\] \[=\overset{\circ}{R}_{(bd)}-\overset{\circ}{\nabla}_{(b}s_{d)}- \frac{1}{2}\overset{\circ}{\nabla}_{[b}n^{(2)}\overset{\circ}{\nabla}_{d]} \ell^{(2)}-\mathrm{U}_{bd}n(\ell^{(2)})+\mathrm{F}_{cd}\overset{\circ}{\nabla }_{b}n^{c}+n^{c}\mathrm{U}_{cb}\overset{\circ}{\nabla}_{d}\ell^{(2)}. \tag{4.10}\] We elaborate the last two terms by taking into account (2.17) and (2.22). This yields \[\mathrm{F}_{cd}\overset{\circ}{\nabla}_{b}n^{c}+n^{c}\mathrm{U} _{cb}\overset{\circ}{\nabla}_{d}\ell^{(2)}= -2n^{(2)}s_{(b}\overset{\circ}{\nabla}_{d)}\ell^{(2)}+P^{ac} \left(n^{(2)}\mathrm{F}_{ba}\mathrm{F}_{dc}-\mathrm{U}_{ba}\mathrm{F}_{dc} \right)\] \[+s_{b}s_{d}+\frac{1}{2}\overset{\circ}{\nabla}_{b}n^{(2)} \overset{\circ}{\nabla}_{d}\ell^{(2)}+\frac{1}{2}(n^{(2)})^{2}\overset{ \circ}{\nabla}_{b}\ell^{(2)}\overset{\circ}{\nabla}_{d}\ell^{(2)}. \tag{4.11}\] We have all the ingredients to compute \(P^{ac}B_{abcd}\). Contracting the right hand side of (4.3) with \(P^{ac}\) and replacing (4.10) and (4.11), expression (4.4) follows after simple manipulations. For (4.5) we start by substituting (4.7) in (4.2), which gives \[A_{bcd}= \overset{\circ}{\nabla}_{d}\mathrm{F}_{cb}-\overset{\circ}{ \nabla}_{c}\mathrm{F}_{db}+\overset{\circ}{\nabla}_{d}\mathrm{Y}_{cb}-\overset{ \circ}{\nabla}_{c}\mathrm{Y}_{db}-\frac{1}{2}\left(\mathrm{U}_{cb}+n^{(2)} \mathrm{Y}_{cb}\right)\overset{\circ}{\nabla}_{d}\ell^{(2)}\] \[+\frac{1}{2}\left(\mathrm{U}_{db}+n^{(2)}\mathrm{Y}_{db}\right) \overset{\circ}{\nabla}_{c}\ell^{(2)}+\mathrm{Y}_{bd}\left(\mathrm{F}_{cf}+ \mathrm{Y}_{cf}\right)n^{f}-\mathrm{Y}_{bc}\left(\mathrm{F}_{df}+\mathrm{Y}_{df} \right)n^{f}. \tag{4.12}\] We now contract with \(n^{c}\) and use (2.26) with \(S\rightarrow\mathbf{Y}\) and (2.27) with \(A\rightarrow\mathbf{F}\) to get \[n^{c}A_{bcd}= \overset{\circ}{\nabla}_{(b}s_{d)}-\mathrm{F}_{c(b}\overset{\circ}{ \nabla}_{d)}n^{c}-\frac{1}{2}n^{c}\overset{\circ}{\nabla}_{c}\mathrm{F}_{db}+ \overset{\circ}{\nabla}_{d}r_{b}-\pounds_{n}\mathrm{Y}_{bd}+\mathrm{Y}_{cd} \overset{\circ}{\nabla}_{b}n^{c}\] \[-\frac{1}{2}n^{c}\left(\mathrm{U}_{cb}+n^{(2)}\mathrm{Y}_{cb}\right) \overset{\circ}{\nabla}\!\!_{d}\ell^{(2)}+\frac{1}{2}\left(\mathrm{U}_{db}+n^{( 2)}\mathrm{Y}_{db}\right)n(\ell^{(2)})-\kappa_{n}\mathrm{Y}_{bd}+r_{b}s_{d}-r_ {b}r_{d}, \tag{4.13}\] where we have taken into account the definitions (2.63). Taking the symmetric part one obtains \[n^{c}\left(A_{bcd}+A_{dcb}\right) =2\overset{\circ}{\nabla}\!\!_{(b}\left(s_{d}\right)+r_{b)} \big{)}+2\left(\mathrm{Y}_{c(b}-\mathrm{F}_{c(b)}\overset{\circ}{\nabla}\!\!_ {d)}n^{c}-2\pounds_{n}\mathrm{Y}_{bd}-2\kappa_{n}\mathrm{Y}_{bd}+2r_{(b}s_{d)}\] \[\quad-2r_{b}r_{d}-\left(n^{c}\mathrm{U}_{c(b}+n^{(2)}r_{(b)} \overset{\circ}{\nabla}\!\!_{d)}\ell^{(2)}+\left(\mathrm{U}_{bd}+n^{(2)} \mathrm{Y}_{bd}\right)n(\ell^{(2)}). 
\tag{4.14}\] By virtue of (2.22), we finally find \[2(\mathrm{Y}_{cb}-\mathrm{F}_{cb})\overset{\circ}{\nabla}\!\!_{d}n^{c}=2P^{ ac}\left(\mathrm{Y}_{cb}-\mathrm{F}_{cb}\right)\left(\mathrm{U}_{da}-n^{(2)} \mathrm{F}_{da}\right)+2\left(r_{b}-s_{b}\right)\left(-n^{(2)}\overset{\circ} {\nabla}\!\!_{d}\ell^{(2)}+s_{d}\right),\] which together with (2.17) yields (4.5) when inserted into (4.14). Finally, equation (4.6) follows at once after simple index manipulations. Note that the right hand side of (4.4) is explicitly symmetric in the indices \(b,d\). This property is consistent with the fact that, in the embedded case, the left-hand side of (2.68) is symmetric under the interchange of the first and second pair of indices. This provides a non-trivial consistency check for (4.4). As explained above, expression (4.6) motivates introducing a symmetric tensor \(\mathcal{R}\) on \(\mathcal{N}\) that we call **constraint tensor**. **Definition 4.3**.: _(Constraint tensor \(\mathcal{R}\)) Given hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\), the constraint tensor \(\mathcal{R}\) tensor is the symmetric \(2\)-covariant tensor_ \[\mathcal{R}_{bd} \overset{\text{\tiny def}}{=} \overset{\circ}{R}\!\!_{(bd)}-2\pounds_{n}\mathrm{Y}_{bd}- \left(2\kappa_{n}+\mathrm{tr}_{P}\mathbf{U}-n^{(2)}\left(n(\ell^{(2)})- \mathrm{tr}_{P}\mathbf{Y}\right)\right)\mathrm{Y}_{bd}\] \[+\overset{\circ}{\nabla}\!\!_{(b}\left(s_{d)}+2r_{d)}\right)-2r_ {b}r_{d}+4r_{(b}s_{d)}-s_{b}s_{d}\] \[-(\mathrm{tr}_{P}\mathbf{Y})\mathrm{U}_{bd}+2P^{ac}\mathrm{U}_{a \left(b\right.}\left(2\mathrm{Y}_{d)c}+\mathrm{F}_{d)c}\right)\] \[+n^{(2)}\Big{(}\left(s_{(b}-3r_{(b)}\overset{\circ}{\nabla}\!\!_ {d)}\ell^{(2)}+P^{ac}\left(\mathrm{Y}_{ab}+\mathrm{F}_{ab}\right)\left( \mathrm{Y}_{cd}+\mathrm{F}_{cd}\right)\Big{)}. \tag{4.15}\] _where \(\kappa_{n}\) and \(r_{a}\) are defined by (2.63)._ The whole construction has been performed so that the following result holds. **Proposition 4.4**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) be hypersurface data embedded in \((\mathcal{M},g)\) with embedding \(\phi\) and rigging \(\zeta\). Let \(\boldsymbol{\nu}\) be the unique normal covector along \(\phi(\mathcal{N})\) satisfying \(\boldsymbol{\nu}(\zeta)=1\) and define \(\nu\overset{\text{\tiny def}}{=}g(\boldsymbol{\nu},\cdot)\). Consider the symmetric \(2\)-covariant tensor_ \[\boldsymbol{\mathcal{R}}\overset{\text{\tiny def}}{=}\mathbf{Ric}-g(\nu,\nu) \mathbf{Riem}(\zeta,\cdot,\zeta,\cdot)\] _along \(\phi(\mathcal{N})\). Then_ \[\phi^{\star}\boldsymbol{\mathcal{R}}=\mathcal{R}. \tag{4.16}\] _In particular at any point \(p\) where the hypersurface \(\phi(\mathcal{N})\) is null, it holds_ \[\phi^{\star}\mathbf{Ric}|_{p}=\mathcal{R}|_{p}. \tag{4.17}\] At null points the expression for the constraint tensor simplifies, as one has \(n^{(2)}=0\). It is worth stressing that the expression for the tangential components of the ambient Ricci tensor in the null case has been obtained in a fully covariant way. In the case \(n^{(2)}=0\), the conditions \(\mathcal{R}=0\) can be thought of as the vacuum constraint equations (with vanishing cosmological constant) on a null hypersurface. Such constraints have always appeared in the literature in a decomposed form adapted to a foliation by spacelike slices. To the best of our knowledge, the only exception to this is [20, Eq. (50)] (see also [19, Eq. 
(34)]), where the tensors \(A_{abc}\), \(B_{abcd}\) and \(\mathcal{R}_{ab}=n^{c}\left(A_{bcd}+A_{dcb}\right)+P^{ac}B_{abcd}\) were defined (only in the null case) in terms of the so-called _hypersurface connection_\(\overline{\nabla}\). The derivative \(\overline{\nabla}\) is another torsion-free connection that can be defined in terms of the full hypersurface data (including the tensor field \(\mathbf{Y}\)) and which, in the embedded case, coincides with the connection induced from the Levi-Civita covariant derivative of the ambient space [14], [20], [19]. In [20], the expression of \(\mathcal{R}\) is not fully explicit in the tensor \(\mathbf{Y}\), as the connection \(\overline{\nabla}\) and its corresponding curvature \(\overline{R}\) depend on it. Definition (4.15), on the other hand, shows the full dependence on \(\mathbf{Y}\) (in the terms involving \(\mathbf{Y}\), \(\mathbf{r}\) and \(\kappa_{n}\)), as both \(\overset{\circ}{\nabla}\) and \(\overset{\circ}{R}\) depend only on the metric part of the data. Moreover, the tensor \(\mathcal{R}\) on [20] was not expanded in terms of the data, as we have done here in expression (4.15). Instead, it was decomposed in terms of a foliation by spacelike hypersurfaces, in analogy with other forms of the constraint equations that have appeared in the literature. The result above involves no decomposition with respect to any foliation. In fact, it makes no assumption on whether such foliation exists. The result is fully covariant on \(\mathcal{N}\), even though this manifold admits no metric. It is by use of the hypersurface data formalism (in particular thanks to the existence of the connection \(\overset{\circ}{\nabla}\)) that such compact and unified form of the vacuum constraint equations in the null case becomes possible. Given its interpretation in the embedded case, it is to be expected that the constraint tensor is gauge invariant at a null point. This was already proven in [20, Theorem 4.6] in the case of characteristic hypersurface data which is defined as null hypersurface data that can be foliated by diffeomorphic sections with positive definite induced metric. However, the proof of Theorem 4.6 in [20] does not rely on these global restrictions, so the gauge invariance of the constraint tensor \(\mathcal{R}\) holds for general null hypersurface data2. In particular, this means that in the null case we can compute \(\mathcal{R}\) in any gauge, which gives a lot of flexibility to adjust the gauge to the problem at hand. At non-null points gauge invariance does not hold since the spacetime tensor \(\mathbf{\mathcal{R}}\) depends on the rigging vector \(\zeta\). Footnote 2: It should actually be true that gauge invariance holds even at isolated null points. We do not attempt proving this fact here. At non-null points, Propositions 2.19 and 4.4 admit the following immediate corollary. **Corollary 4.5**.: _Let \(\{\mathcal{N},\gamma,\mathbf{\ell},\ell^{(2)},\mathbf{Y}\}\) be hypersurface data embedded in \((\mathcal{M},g)\) with embedding \(\phi\) and rigging \(\zeta\). Assume that the tangential components of the Ricci tensor \(\mathbf{Ric}\) along \(\phi(\mathcal{N})\) are known, then the whole Riemann tensor \(\mathbf{Riem}\) at any non-null point \(p\in\mathcal{N}\) can be determined explicitly in terms of the hypersurface data._ ## 5 Constraint tensor in the null case For the rest of the paper we shall focus on the null case. 
Given that expression (4.15) of the constraint tensor \(\mathcal{R}\) simplifies notably in this context, it is worth writing down an explicit definition.

**Definition 5.1**.: _Let \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) be null hypersurface data. The constraint tensor \(\mathcal{R}\) is the symmetric tensor defined by_ \[\mathcal{R}_{ab}\stackrel{{\text{\tiny def}}}{{=}}\overset{\circ}{R}_{(ab)}-2\pounds_{n}\mathrm{Y}_{ab}-\left(2\kappa_{n}+\mathrm{tr}_{P}\mathbf{U}\right)\mathrm{Y}_{ab}+\overset{\circ}{\nabla}_{(a}\left(s_{b)}+2r_{b)}\right)-2r_{a}r_{b}+4r_{(a}s_{b)}-s_{a}s_{b}-(\mathrm{tr}_{P}\mathbf{Y})\mathrm{U}_{ab}+2P^{cd}\mathrm{U}_{c(a}\left(2\mathrm{Y}_{b)d}+\mathrm{F}_{b)d}\right). \tag{5.1}\]

**Remark 5.2**.: Having the constraint tensor written explicitly and covariantly in terms of the hypersurface data is useful in many situations, e.g. to integrate the vacuum constraint equations or to study spacetime matchings and, in particular, the thin shells they produce. Last, but not least, the expressions here are completely general, while in [20], [19] a specific gauge was chosen from the outset.

By (5.1), the task of computing \(\psi^{\star}\mathcal{R}\) requires relating the pull-back \(\psi^{\star}\overset{\circ}{\mathbf{Ric}}\) with the Ricci tensor of \(\nabla^{h}\). Now, computing the pull-back \(\psi^{\star}\overset{\circ}{\mathbf{Ric}}\) amounts to calculating \(\overset{\circ}{R}_{AB}\stackrel{{\text{\tiny def}}}{{=}}\overset{\circ}{R}{}^{c}{}_{acb}v^{a}_{A}v^{b}_{B}\). This trace can be obtained by means of (2.6) and (3.27) as follows: \[\overset{\circ}{R}_{AB}=\delta^{c}_{f}\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{b}_{B}=\left(P^{cd}\gamma_{df}+n^{c}\ell_{f}\right)\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{b}_{B}=h^{CD}v^{d}_{D}\gamma_{df}\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{c}_{C}v^{b}_{B}+n^{c}\left(\ell_{f}-h^{CD}\ell_{C}v^{d}_{D}\gamma_{df}\right)\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{b}_{B}. \tag{5.5}\] Thus, we need to evaluate both \[h^{CD}v^{d}_{D}\gamma_{df}\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{c}_{C}v^{b}_{B}\qquad\text{and}\qquad n^{c}(\ell_{f}-h^{CD}\ell_{C}v^{d}_{D}\gamma_{df})\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{b}_{B}.\] The first one is obtained by contracting (3.41) with \(h^{CD}\). For the second one, substituting (3.25)-(3.26) into (3.17) and (3.18) yields \[n^{c}\left(\ell_{f}-h^{CD}\ell_{C}v^{d}_{D}\gamma_{df}\right)\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{b}_{B}=v^{a}_{A}v^{b}_{B}\overset{\circ}{\nabla}_{a}s_{b}-s_{A}s_{B}+\left(n(\ell^{(2)})-2\ell^{D}s_{D}\right)\mathrm{U}_{AB}+(\ell^{(2)}-\ell_{\parallel}^{(2)})(\pounds_{n}\mathbf{U})_{AB}+2\ell^{D}\left(s_{(A}\mathrm{U}_{B)D}-v^{d}_{D}v^{a}_{A}v^{b}_{B}\overset{\circ}{\nabla}_{[a}\mathrm{U}_{d]b}\right)-h^{CD}\mathrm{U}_{AC}\left(\mathrm{F}_{DB}+(\ell^{(2)}-\ell_{\parallel}^{(2)})\mathrm{U}_{BD}\right). \tag{5.6}\]
We elaborate (5.6) by particularizing (3.39) for \(\mathcal{T}=\boldsymbol{s}\), \(\mathcal{T}=\mathbf{U}\) and \(\mathcal{T}=\boldsymbol{\ell}\). Since \(\boldsymbol{s}(n)=\mathbf{U}(n,\cdot)=0\) they give, respectively, \[v^{a}_{A}v^{b}_{B}\overset{\circ}{\nabla}_{a}s_{b}=\nabla^{h}_{A}s_{B}-\ell^{C}s_{C}\mathrm{U}_{AB}, \tag{5.7}\] \[2v^{d}_{D}v^{a}_{A}v^{b}_{B}\overset{\circ}{\nabla}_{[a}\mathrm{U}_{d]b}=\nabla^{h}_{A}\mathrm{U}_{BD}-\nabla^{h}_{D}\mathrm{U}_{AB}-\ell^{C}\mathrm{U}_{CD}\mathrm{U}_{AB}+\ell^{C}\mathrm{U}_{AC}\mathrm{U}_{BD}, \tag{5.8}\] \[v^{a}_{A}v^{b}_{B}\nabla_{a}\ell_{b}=\nabla^{h}_{A}\ell_{B}-\ell_{\parallel}^{(2)}\mathrm{U}_{AB}\quad\Longrightarrow\quad\mathrm{F}_{AB}=v^{a}_{A}v^{b}_{B}\overset{\circ}{\nabla}_{[a}\ell_{b]}=\nabla^{h}_{[A}\ell_{B]}, \tag{5.9}\] with which (5.6) becomes \[n^{c}\left(\ell_{f}-h^{CD}\ell_{C}v^{d}_{D}\gamma_{df}\right)\overset{\circ}{R}{}^{f}{}_{acb}v^{a}_{A}v^{b}_{B}=\nabla^{h}_{A}s_{B}-s_{A}s_{B}+\left(n(\ell^{(2)})+\ell^{C}\ell^{D}\mathrm{U}_{CD}-3\ell^{C}s_{C}\right)\mathrm{U}_{AB}+(\ell^{(2)}-\ell_{\parallel}^{(2)})(\pounds_{n}\mathbf{U})_{AB}+2\ell^{C}s_{(A}\mathrm{U}_{B)C}+\ell^{C}\nabla^{h}_{C}\mathrm{U}_{AB}-\ell^{C}\nabla^{h}_{A}\mathrm{U}_{CB}-\left(h^{CD}(\ell^{(2)}-\ell_{\parallel}^{(2)})+\ell^{C}\ell^{D}\right)\mathrm{U}_{AC}\mathrm{U}_{BD}-\frac{1}{2}h^{CD}(\nabla^{h}_{D}\ell_{B}-\nabla^{h}_{B}\ell_{D})\mathrm{U}_{AC}. \tag{5.10}\] The Ricci tensor \(\overset{\circ}{R}_{AB}\) follows by substituting (5.10) and (3.41) (contracted with \(h^{CD}\)) into (5.5): \[\overset{\circ}{R}_{AB}=R^{h}_{AB}+\nabla^{h}_{A}s_{B}-s_{A}s_{B}+\left(n(\ell^{(2)})+2\ell^{C}\ell^{D}\mathrm{U}_{CD}-3\ell^{C}s_{C}+\nabla^{h}_{C}\ell^{C}+(\mathrm{tr}_{h}\mathbf{U}_{\parallel})(\ell^{(2)}-\ell_{\parallel}^{(2)})\right)\mathrm{U}_{AB}+(\ell^{(2)}-\ell_{\parallel}^{(2)})(\pounds_{n}\mathbf{U})_{AB}+(\mathrm{tr}_{h}\mathbf{U}_{\parallel})\nabla^{h}_{(A}\ell_{B)}+2\ell^{C}\left(\nabla^{h}_{C}\mathrm{U}_{AB}+s_{(A}\mathrm{U}_{B)C}-\nabla^{h}_{(A}\mathrm{U}_{B)C}\right)-2\left(h^{CD}(\ell^{(2)}-\ell_{\parallel}^{(2)})+\ell^{C}\ell^{D}\right)\mathrm{U}_{AC}\mathrm{U}_{BD}-h^{CD}\left(\mathrm{U}_{DB}\nabla^{h}_{(A}\ell_{C)}+\mathrm{U}_{DA}\nabla^{h}_{(B}\ell_{C)}\right). \tag{5.11}\] Observe that all terms in (5.11) except for \(\nabla^{h}_{A}s_{B}\) are symmetric. This implies that \(\overset{\circ}{R}_{AB}-\overset{\circ}{R}_{BA}=\nabla^{h}_{A}s_{B}-\nabla^{h}_{B}s_{A}\), which is in agreement with equation (2.56) and provides a consistency check of (5.11). The symmetrized tensor is \[\overset{\circ}{R}_{(AB)}=R^{h}_{AB}+\nabla^{h}_{(A}s_{B)}-s_{A}s_{B}+\left(n(\ell^{(2)})+2\ell^{C}\ell^{D}\mathrm{U}_{CD}-3\ell^{C}s_{C}+\nabla^{h}_{C}\ell^{C}+(\mathrm{tr}_{h}\mathbf{U}_{\parallel})(\ell^{(2)}-\ell^{(2)}_{\parallel})\right)\mathrm{U}_{AB}+(\ell^{(2)}-\ell^{(2)}_{\parallel})(\pounds_{n}\mathbf{U})_{AB}+(\mathrm{tr}_{h}\mathbf{U}_{\parallel})\nabla^{h}_{(A}\ell_{B)}+2\ell^{C}\left(\nabla^{h}_{C}\mathrm{U}_{AB}+s_{(A}\mathrm{U}_{B)C}-\nabla^{h}_{(A}\mathrm{U}_{B)C}\right)-2\left(h^{CD}(\ell^{(2)}-\ell^{(2)}_{\parallel})+\ell^{C}\ell^{D}\right)\mathrm{U}_{AC}\mathrm{U}_{BD}-h^{CD}\left(\mathrm{U}_{DB}\nabla^{h}_{(A}\ell_{C)}+\mathrm{U}_{DA}\nabla^{h}_{(B}\ell_{C)}\right). \tag{5.12}\] Having obtained (5.12), we can now write down the relation between the pull-back to \(S\) of the constraint tensor and the Ricci tensor of the induced metric \(h\).
**Theorem 5.3**.: _Consider null hypersurface data \(\{\mathcal{N},\gamma,\boldsymbol{\ell},\ell^{(2)},\mathbf{Y}\}\) and assume the Setup 3.11. Let \(R^{h}_{AB}\) be the Ricci tensor of the Levi-Civita connection \(\nabla^{h}\) on \(S\). Then, the pull-back to \(S\) of the constraint tensor \(\mathcal{R}\) defined by (4.15) is given by_ \[\mathcal{R}_{AB} =R^{h}_{AB}+2\nabla^{h}_{(A}\left(s_{B}\right)+r_{B)}\right)-2(r_ {A}-s_{A})(r_{B}-s_{B})\] \[\quad+\left(n(\ell^{(2)})+2\ell^{C}\ell^{D}\mathrm{U}_{CD}-4\ell ^{C}s_{C}+\left(\mathrm{tr}_{h}\mathbf{U}_{\parallel}+\kappa_{n}\right)( \ell^{(2)}-\ell^{(2)}_{\parallel})-\mathrm{tr}_{h}\mathbf{Y}_{\parallel}+ \nabla^{h}_{C}\ell^{C}\right)\mathrm{U}_{AB}\] \[\quad+2\ell^{C}\left(\nabla^{h}_{C}\mathrm{U}_{AB}-\nabla^{h}_{(A }\mathrm{U}_{B)C}-\left(2\left(r_{A}-s_{(A)}+\ell^{D}\mathrm{U}_{D(A)}\right) \mathrm{U}_{B)C}\right)\] \[\quad+2h^{CD}\left(2\mathrm{Y}_{D(A}-\nabla^{h}_{D}\ell_{(A}-( \ell^{(2)}-\ell^{(2)}_{\parallel})\mathrm{U}_{D(A)}\right)\mathrm{U}_{B)C}. \tag{5.13}\] Proof.: We need to multiply (5.1) by \(v^{a}_{A}v^{b}_{B}\). One comes across a term \(v^{a}_{A}v^{b}_{B}\overset{\circ}{\nabla}_{(a}(s_{b})+2r_{b)}\) which we elaborate by using (3.39) for \(\mathcal{T}=\boldsymbol{s}\) and \(\mathcal{T}=\boldsymbol{r}\) (recall that \(\boldsymbol{s}(n)=0\), \(\boldsymbol{r}(n)=-\kappa_{n}\)), thus obtaining \[v^{a}_{A}v^{b}_{B}\overset{\circ}{\nabla}_{(a}\left(s_{b}\right)+2r_{b)}\big{)} =\nabla^{h}_{(A}\left(s_{B}\right)+2r_{B)}\big{)}+2\kappa_{n}\nabla^{h}_{(A} \ell_{B)}-\left(\ell^{J}\left(s_{J}+2r_{J}\right)-2\kappa_{n}(\ell^{(2)}-\ell^ {(2)}_{\parallel})\right)\mathrm{U}_{AB}.\] Since \(F_{B\epsilon}n^{c}=-s_{B}\) and \(\mathrm{F}_{AB}=\nabla^{h}_{[A}\ell_{B]}\) (by (5.9)), inserting (3.25)-(3.26) into (5.1) yields \[\mathcal{R}_{AB}\overset{\text{\tiny def}}{=} \overset{\circ}{R}_{(AB)}-2(\pounds_{n}\mathbf{Y})_{AB}-\left(2 \kappa_{n}+\mathrm{tr}_{h}\mathbf{U}_{\parallel}\right)\mathrm{Y}_{AB}+\nabla^{ h}_{(A}\left(s_{B}\right)+2r_{B)}\right)+2\kappa_{n}\nabla^{h}_{(A}\ell_{B)}\] \[\quad-2r_{A}r_{B}+4r_{(A}s_{B)}-s_{A}s_{B}-\left(\mathrm{tr}_{h} \mathbf{Y}_{\parallel}-\kappa_{n}(\ell^{(2)}-\ell^{(2)}_{\parallel})+\ell^{J}s _{J}\right)\mathrm{U}_{AB}\] \[\quad+2h^{CD}\mathrm{U}_{D(A}\left(2\mathrm{Y}_{B)C}+\ell_{C} \left(s_{B}\right)-2r_{B)}\right)+\frac{1}{2}\left(\nabla^{h}_{B}\ell_{C}- \nabla^{h}_{C}\ell_{|B)}\right)\bigg{)}\,. \tag{5.14}\] Substituting expression (5.12) for \(\overset{\circ}{R}_{(AB)}\) and reorganizing terms, one easily arrives at (5.13). ## 6 Gauge invariant quantities on a transverse submanifold \(S\) Equation (5.13) is rather complicated, mainly because it has been written in a completely arbitrary gauge. This is clearly advantageous since the gauge can be adjusted to the problem at hand. However, the equation involves several quantities that are gauge invariant, namely the constraint tensor \(\mathcal{R}_{AB}\) and the metric \(h_{AB}\) together with all its derived objects, such as the Levi-Civita covariant derivative \(\nabla^{h}\) and the Ricci tensor \(R^{h}_{AB}\). A natural question arises as to whether one can find additional objects with simple gauge behaviour so that one can write down (5.13) fully in terms of gauge invariant quantities. There is an obvious answer to this, namely that the sum of all terms in the right-hand side of (5.13) except for the first one must necessarily be a gauge invariant quantity. While this must be true, it is clearly not very helpful. 
However, the idea behind it is useful. If we can find simple gauge invariant quantities that can then be substituted in the equation, then the remainder must also be gauge invariant. This procedure can lead to the determination of gauge invariant objects that would have been very hard to guess otherwise. This is the task we set out to do in the present section. Based on the gauge behaviour discussed in Lemma 3.6 and Corollary 3.7, we write down two quantities on \(S\) with very simple gauge behaviour. The underlying reason why such objects behave in this way may be understood from the notion of normal pair and the associated geometric quantities on \(S\) defined and studied in [19]. For simplicity, however, here we simply put forward the definitions and find explicitly how they transform under an arbitrary change of gauge. **Lemma 6.1**.: _Assume Setup 3.11 and define on \(S\) the covector \(\boldsymbol{\omega}_{\parallel}\) and the symmetric \(2\)-covariant tensor \(\boldsymbol{\mathfrak{P}}_{\parallel}\) by_ \[\boldsymbol{\omega}_{\parallel}\stackrel{{\text{\rm def}}}{{=}}\psi^{\star}(\boldsymbol{s}-\boldsymbol{r})-\mathbf{U}_{\parallel}(\ell^{\sharp}_{\parallel},\cdot),\qquad\boldsymbol{\mathfrak{P}}_{\parallel}\stackrel{{\text{\rm def}}}{{=}}\psi^{\star}\mathbf{Y}+\frac{1}{2}\left(\ell^{(2)}_{\parallel}-\ell^{(2)}|_{S}\right)\mathbf{U}_{\parallel}-\frac{1}{2}\pounds_{\ell^{\sharp}_{\parallel}}h.\] _Under an arbitrary gauge transformation with gauge parameters \(\{z,V\}\) they transform as_ \[\mathcal{G}_{(z,V)}(\boldsymbol{\omega}_{\parallel})=\boldsymbol{\omega}_{\parallel}-\frac{1}{\hat{z}}d\hat{z},\qquad\quad\mathcal{G}_{(z,V)}(\boldsymbol{\mathfrak{P}}_{\parallel})=\hat{z}\boldsymbol{\mathfrak{P}}_{\parallel}, \tag{6.1}\] _where \(\hat{z}\stackrel{{\text{\rm def}}}{{=}}\psi^{\star}z\)._ Proof.: From (3.10) and (3.12) we have the transformations (we again use prime to denote a gauge-transformed object) \[\boldsymbol{\ell}^{\prime}_{\parallel}=\hat{z}(\boldsymbol{\ell}_{\parallel}+\boldsymbol{w}_{\parallel}),\qquad\mathbf{U}^{\prime}_{\parallel}=\hat{z}^{-1}\mathbf{U}_{\parallel}. \tag{6.2}\] Thus \(\ell^{A\prime}=\hat{z}(\ell^{A}+w^{A})\) and \(\left(\mathrm{U}_{AB}\ell^{B}\right)^{\prime}=\mathrm{U}_{AB}\left(\ell^{B}+w^{B}\right)\). The transformation law of \(\boldsymbol{\omega}_{\parallel}\) follows at once from this and Corollary 3.7 (recall that \(\mathbf{U}(n,\cdot)=0\), \(\gamma(n,\cdot)=0\)). Concerning \(\boldsymbol{\mathfrak{P}}_{\parallel}\) we use the decomposition \(V^{a}=fn^{a}+P^{ab}w_{b}\) (cf. (3.9)) and apply Lemma 3.16 to the transformation law (2.62) of \(\mathbf{Y}\).
This gives \[\psi^{\star}\mathbf{Y}^{\prime}=\hat{z}\psi^{\star}\mathbf{Y}+\boldsymbol{ \ell}_{\parallel}\otimes_{s}d\hat{z}+\hat{z}\left(f|_{S}-\ell^{C}w_{C}\right) \mathbf{U}_{\parallel}+\frac{1}{2}\pounds_{\hat{z}w^{\sharp}}h.\] Since \[\ell^{(2)\prime}_{\parallel}=\hat{z}^{2}\left(\ell^{(2)}_{\parallel}+2\ell^{C }w_{C}+w^{C}w_{C}\right),\quad\ell^{(2)\prime}|_{S}=\hat{z}^{2}\left(\ell^{(2) }|_{S}+2f|_{S}+w^{C}w_{C}\right), \tag{6.3}\] the first because of definition \(\ell^{(2)\,\stackrel{{\text{\rm def}}}{{=}}\,}h^{\sharp}( \boldsymbol{\ell}_{\parallel},\boldsymbol{\ell}_{\parallel})\) and the second being a consequence of (3.11) together with (3.27), one finds \[\left((\ell^{(2)}_{\parallel}-\ell^{(2)}|_{S})\mathbf{U}_{\parallel}\right)^{ \prime}=\hat{z}(\ell^{(2)}_{\parallel}-\ell^{(2)}|_{S})\mathbf{U}_{\parallel }+2\hat{z}\left(\ell^{C}w_{C}-f|_{S}\right)\mathbf{U}_{\parallel}.\] Given that \((\ell^{\sharp}_{\parallel})^{\prime}=\hat{z}\ell^{\sharp}_{\parallel}+\hat{z}w ^{\sharp}\) and \(\pounds_{\hat{z}\ell^{\sharp}_{\parallel}}h=\hat{z}\pounds_{\ell^{\sharp}_{ \parallel}}h+2\boldsymbol{\ell}_{\parallel}\otimes_{s}d\hat{z}\) all terms involving \(w^{\sharp}\) and \(d\hat{z}\) in \(\boldsymbol{\mathfrak{P}}^{\prime}_{\parallel}\) cancel out and the transformation law \(\boldsymbol{\mathfrak{P}}^{\prime}_{\parallel}=\hat{z}\boldsymbol{\mathfrak{P}}_ {\parallel}\) follows. The result states in particular that \(\boldsymbol{\omega}_{\parallel}\) and \(\boldsymbol{\mathfrak{P}}_{\parallel}\) are nearly gauge invariant and, in fact, that they are exactly gauge invariant under the subgroup \[\mathcal{G}_{1}\stackrel{{\text{\tiny{def}}}}{{=}}\{1,V\}\subset \mathcal{G}=\mathcal{F}^{\star}(\mathcal{N})\times\Gamma(T\mathcal{N}).\] The fact that \(\mathcal{G}_{1}\) is a subgroup of \(\mathcal{G}\) is immediate from the composition law of Proposition 2.10. Following the idea outlined above, the next step is to write the constraint tensor \(\mathcal{R}\) on the submanifold \(S\) in terms of these quantities. We still need to decide which objects of (5.13) are to be replaced. For \(\boldsymbol{\mathfrak{P}}_{\parallel}\) there is only one natural choice, namely \(\psi^{\star}\mathbf{Y}\). For \(\boldsymbol{\omega}_{\parallel}\), we could replace either \(\boldsymbol{s}\) or \(\boldsymbol{r}\), but the second choice is preferable because \(\boldsymbol{\omega}_{\parallel}\) is not at the level of metric hypersurface data since it involves some components of the tensor \(\mathbf{Y}\) as well. The following result is obtained by a simple computation whereby \(\boldsymbol{r}\) and \(\psi^{\star}\mathbf{Y}\) are replaced in terms of \(\boldsymbol{\omega}_{\parallel}\) and \(\boldsymbol{\mathfrak{P}}_{\parallel}\) respectively in (5.13). **Proposition 6.2**.: _Assume Setup 3.11. 
The pull-back to \(S\) of the constraint tensor \(\mathcal{R}\) reads_ \[\mathcal{R}_{AB} =R_{AB}^{h}-2\nabla^{h}_{(A}\omega_{B)}-2\omega_{A}\omega_{B}- \left(2\kappa_{n}+\operatorname{tr}_{h}\mathbf{U}_{\parallel}\right) \mathfrak{P}_{AB}\] \[\quad-(\operatorname{tr}_{h}\mathfrak{P})\mathrm{U}_{AB}+4 \mathfrak{P}^{C}{}_{(A}\mathrm{U}_{B)C}-2\mathfrak{S}_{AB}, \tag{6.4}\] _where_ \[\mathfrak{S}_{AB}\stackrel{{\text{\tiny{def}}}}{{=}} \left(\pounds_{n}\mathbf{Y}\right)_{AB}-\frac{1}{2}(\ell^{(2)}- \ell_{\parallel}^{(2)})(\pounds_{n}\mathbf{U})_{AB}-2\nabla^{h}_{(A}s_{B)}\] \[\quad-\left(\frac{1}{2}n(\ell^{(2)})+\ell^{C}\ell^{D}\mathrm{U}_{ CD}-2\ell^{C}s_{C}\right)\mathrm{U}_{AB}+\ell^{C}\left(-\nabla^{h}_{C} \mathrm{U}_{AB}+2\nabla^{h}_{(A}\mathrm{U}_{B)C}\right). \tag{6.5}\] The definition of the symmetric 2-covariant tensor \(\boldsymbol{\mathfrak{S}}_{\parallel}\) is not artificial. As mentioned above, the fact that the tensors \(\psi^{\star}\mathcal{R}\) and \(\mathbf{Ric}^{h}\) are gauge invariant, together with the simple gauge behaviour of \(\boldsymbol{\omega}_{\parallel}\), \(\boldsymbol{\mathfrak{P}}_{\parallel}\), \(\mathbf{U}_{\parallel}\) and \(\kappa_{n}\), imply that \(\boldsymbol{\mathfrak{S}}_{\parallel}\) must also have a simple gauge behaviour. We emphasize that while the existence and explicit form of the \(\mathcal{G}_{1}\)-gauge invariant quantities \(\boldsymbol{\omega}_{\parallel}\) and \(\boldsymbol{\mathfrak{P}}_{\parallel}\) can be justified by the use of normal pairs and their associated geometric objects [19], the existence of the \(\mathcal{G}_{1}\)-gauge invariant quantity \(\boldsymbol{\mathfrak{S}}_{\parallel}\) could not be anticipated and comes as an interesting by-product of the constraint tensor. The tensor \(\boldsymbol{\mathfrak{S}}_{\parallel}\) contains information on the first order variation of the extrinsic curvature \(\mathbf{Y}\) along the null direction \(n\). Defining this geometric object is one of the main results in this paper. This quantity has several interesting features that, in our opinion, deserve further investigation. Here we shall only mention that this object is not only \(\mathcal{G}_{1}\)-gauge invariant and it has a simple full \(\mathcal{G}\)-gauge behaviour (which makes it computable in any gauge) but it is also intrinsic to the submanifold \(S\). By "intrinsic" we mean that it encodes geometric information of \(S\) as a submanifold of \(\mathcal{N}\) (or of the ambient space \((\mathcal{M},g)\) in case the data is embedded), independently of \(S\) belonging or not to any foliation of \(\mathcal{N}\). This information is at the level of second derivatives (curvature) unlike \(\boldsymbol{\omega}_{\parallel}\) or \(\boldsymbol{\mathfrak{P}}_{\parallel}\) which involve only first derivatives (extrinsic curvature). The gauge behaviour of \(\boldsymbol{\mathfrak{S}}_{\parallel}\) is obtained next as a consequence of Proposition 6.2. **Corollary 6.3**.: _Under a gauge transformation with gauge parameters \(\{z,V\}\) the tensor \(\boldsymbol{\mathfrak{S}}_{\parallel}\) transforms as_ \[\mathcal{G}_{(z,V)}\left(\mathfrak{S}\right)_{AB}=\mathfrak{S}_{AB}+\frac{1}{ \hat{z}}\nabla^{h}_{A}\nabla^{h}_{B}\hat{z}-\frac{2}{\hat{z}^{2}}\nabla^{h}_{A} z\nabla^{h}_{B}\hat{z}+\frac{2}{\hat{z}}\omega_{(A}\nabla^{h}_{B)}\hat{z}+ \frac{\hat{z}_{n}}{\hat{z}}\mathfrak{P}_{AB}, \tag{6.6}\] _where \(\hat{z}\stackrel{{\text{\tiny{def}}}}{{=}}z|_{S}\) and \(\hat{z}_{n}\stackrel{{\text{\tiny{def}}}}{{=}}n(z)|_{S}\). 
In particular \(\boldsymbol{\mathfrak{S}}_{\parallel}\) is invariant under the subgroup \(\mathcal{G}_{1}\)._ Proof.: We apply a gauge transformation with gauge parameters \(\{z,V\}\) to (6.4) and subtract the equation itself. Using, as usual, a prime to denote gauge transformed objects one has \[0=-2\nabla^{h}_{(A}\left(\omega^{\prime}_{B)}-\omega_{B)}\right)-2\omega^{\prime} _{A}\omega^{\prime}_{B}+2\omega_{A}\omega_{B}-2\left(\kappa^{\prime}_{n}\hat{z} -\kappa_{n}\right)\mathfrak{P}_{AB}-2\mathfrak{S}^{\prime}_{AB}+2\mathfrak{S} _{AB},\] where we used the gauge invariance of \(\psi^{\star}\mathcal{R}\), \(h\), \(\nabla^{h}\) and \(\mathbf{Ric}^{h}\), as well as the fact that \(\mathbf{U}_{\parallel}\) scales with \(\hat{z}^{-1}\) while \(\mathfrak{P}_{\parallel}\) scales with \(\hat{z}\), so their product is gauge invariant. Using the definition \(\hat{z}_{n}\mathop{=}^{\mathsf{def}}n(z)|_{S}\) and inserting \(\boldsymbol{\omega}^{\prime}_{\parallel}=\boldsymbol{\omega}_{\parallel}- \hat{z}^{-1}d\hat{z}\) as well as (3.16), the result follows after simple cancellations. **Remark 6.4**.: _As already said, guessing that \(\mathfrak{S}_{\parallel}\) has a simple gauge behaviour without the aid of Proposition 6.2 appears to be hard. Conversely, checking explicitly that \(\mathfrak{S}_{\parallel}\) has the gauge behaviour described in Corollary 6.3 serves as a stringent consistency test for the validity of Proposition 6.2. The computation that proves (6.6) directly is somewhat involved and will not be included here. It can be found in [8]._ As analyzed in [10] and [11], the quantity \(\mathfrak{S}_{\parallel}\) is of particular relevance in the study of Killing horizons of order one containing a submanifold \(S\). The underlying reason is that \(\mathfrak{S}_{\parallel}\) is related to the pull-back to \(S\) of the tensor field \(\overset{\circ}{\Sigma}-n\otimes\pounds_{n}\mathbf{Y}\), which vanishes at a horizon in the gauge where the Killing vector coincides with \(n\) (recall that \(\mathfrak{S}_{\parallel}\) is only \(\mathcal{G}_{1}\)-invariant). The next lemma provides the relation between \(\mathfrak{S}_{\parallel}\) and \(\overset{\circ}{\Sigma}-n\otimes\pounds_{n}\mathbf{Y}\). **Lemma 6.5**.: _Assume Setup 3.11, where \(\mathfrak{q}\) is the unique normal covector field to \(\psi(S)\) satisfying \(\mathfrak{q}(n)=1\) and \(\ell^{\sharp}_{\parallel}\mathop{=}^{\mathsf{def}}h^{\sharp}(\boldsymbol{ \ell}_{\parallel},\cdot)\). Then, \(\mathfrak{S}_{\parallel}\) and the tensor \(\overset{\circ}{\Sigma}\) defined by (2.50) verify:_ \[\mathfrak{S}_{\parallel}\overset{S}{=} -\psi^{\star}\left(\mathfrak{q}(\overset{\circ}{\Sigma}-n \otimes\pounds_{n}\mathbf{Y})\right)+\frac{1}{2}(\ell^{(2)}-\ell^{(2)}_{ \parallel})(\pounds_{n}\mathbf{U})_{\parallel}\] \[+\left(\frac{1}{2}n(\ell^{(2)})+\mathbf{U}_{\parallel}(\ell^{ \sharp}_{\parallel},\ell^{\sharp}_{\parallel})-2\boldsymbol{s}_{\parallel}( \ell^{\sharp}_{\parallel})\right)\mathbf{U}_{\parallel}. \tag{6.7}\] _In particular, if \(\mathbf{U}=0\) everywhere on \(\mathcal{N}\), it follows_ \[\mathfrak{S}_{\parallel}\overset{S}{=} -\psi^{\star}\left(\mathfrak{q}(\overset{\circ}{\Sigma}-n \otimes\pounds_{n}\mathbf{Y})\right). \tag{6.8}\] Proof.: We first use (3.39) to obtain the contraction \(v^{a}_{A}v^{b}_{B}v^{c}_{C}\overset{\circ}{\nabla}_{a}\mathrm{U}_{bc}\): \[v^{a}_{A}v^{b}_{B}v^{c}_{C}\overset{\circ}{\nabla}_{a}\mathrm{U}_{bc}=\nabla ^{h}_{A}\mathrm{U}_{BC}-\ell^{D}\mathrm{U}_{CD}\mathrm{U}_{BA}-\ell^{D}\mathrm{ U}_{BD}\mathrm{U}_{CA}. 
\tag{6.9}\] This, in turn, allows us to conclude \[v^{a}_{A}v^{b}_{B}v^{c}_{C}\left(\overset{\circ}{\nabla}_{a} \mathrm{U}_{bc}+\overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}-\overset{\circ}{ \nabla}_{c}\mathrm{U}_{ab}+2s_{c}\mathrm{U}_{ab}\right) =\nabla^{h}_{A}\mathrm{U}_{BC}+\nabla^{h}_{B}\mathrm{U}_{AC}- \nabla^{h}_{C}\mathrm{U}_{AB}\] \[\quad-2\ell^{D}\mathrm{U}_{CD}\mathrm{U}_{AB}+2s_{C}\mathrm{U}_{AB} \tag{6.10}\] after using (6.9) thrice. On the other hand, one finds \[n^{c}\left(\overset{\circ}{\nabla}_{a}\mathrm{U}_{bc}+\overset{\circ}{\nabla }_{b}\mathrm{U}_{ca}-\overset{\circ}{\nabla}_{c}\mathrm{U}_{ab}+2s_{c} \mathrm{U}_{ab}\right)= -\left(n^{c}\overset{\circ}{\nabla}_{c}\mathrm{U}_{ab}+\mathrm{U}_{bc} \overset{\circ}{\nabla}_{a}n^{c}+\mathrm{U}_{ca}\overset{\circ}{\nabla}_{b}n ^{c}\right)=-(\pounds_{n}\mathbf{U})_{ab},\] and hence \[v^{a}_{A}v^{b}_{B}n^{c}\left(\overset{\circ}{\nabla}_{a}\mathrm{U}_{bc}+ \overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}-\overset{\circ}{\nabla}_{c}\mathrm{ U}_{ab}+2s_{c}\mathrm{U}_{ab}\right)=-(\pounds_{n}\mathbf{U})_{AB}. \tag{6.11}\] Now, by (3.27) the tensor \(P\) can be decomposed as \(P^{dc}=v_{C}^{e}\left(h^{CD}v_{D}^{d}-\ell^{C}n^{d}\right)+n^{e}((\ell_{\parallel} ^{(2)}-\ell^{(2)})n^{d}-\ell^{D}v_{D}^{d})\). Thus, \[v_{A}^{a}v_{B}^{b}P^{dc}\left(\overset{\circ}{\nabla}_{a}\mathrm{ U}_{bc}+\overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}-\overset{\circ}{\nabla}_{c} \mathrm{U}_{ab}+2s_{c}\mathrm{U}_{ab}\right)=\] \[\quad v_{A}^{a}v_{B}^{b}v_{C}^{c}\left(h^{CD}v_{D}^{d}-\ell^{C}n^ {d}\right)\left(\overset{\circ}{\nabla}_{a}\mathrm{U}_{bc}+\overset{\circ}{ \nabla}_{b}\mathrm{U}_{ca}-\overset{\circ}{\nabla}_{c}\mathrm{U}_{ab}+2s_{c} \mathrm{U}_{ab}\right)\] \[\quad+v_{A}^{a}v_{B}^{b}n^{c}\left((\ell_{\parallel}^{(2)}-\ell^ {(2)})n^{d}-\ell^{D}v_{D}^{d}\right)\left(\overset{\circ}{\nabla}_{a}\mathrm{ U}_{bc}+\overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}-\overset{\circ}{\nabla}_{c} \mathrm{U}_{ab}+2s_{c}\mathrm{U}_{ab}\right). \tag{6.12}\] This means that (6.12) can be elaborated by inserting (6.10)-(6.11). Since the tensor \(\overset{\circ}{\Sigma}\) in the null case reads (recall (2.50)): \[\overset{\circ}{\Sigma}^{d}{}_{ab}=n^{d}\left(2\overset{\circ}{\nabla}_{(a}s _{b)}+n(\ell^{(2)})\mathrm{U}_{ab}\right)+P^{dc}\left(\overset{\circ}{\nabla }_{a}\mathrm{U}_{bc}+\overset{\circ}{\nabla}_{b}\mathrm{U}_{ca}-\overset{\circ }{\nabla}_{c}\mathrm{U}_{ab}+2s_{c}\mathrm{U}_{ab}\right), \tag{6.13}\] it is straightforward to conclude that its contraction with \(v_{A}^{a}v_{B}^{b}\) is \[\overset{\circ}{\Sigma}^{d}{}_{ab}v_{A}^{a}v_{B}^{b}=n^{d}\bigg{(}2 \nabla_{(A}^{h}s_{B)}-2\ell^{C}s_{C}\mathrm{U}_{AB}+n(\ell^{(2)})\mathrm{U}_{ AB}+(\ell^{(2)}-\ell_{\parallel}^{(2)})(\pounds_{n}\mathbf{U})_{AB}\] \[\quad-\ell^{C}\left(2\nabla_{(A}^{h}\mathrm{U}_{B)C}-\nabla_{C}^{ h}\mathrm{U}_{AB}-2\ell^{D}\mathrm{U}_{CD}\mathrm{U}_{AB}+2s_{C}\mathrm{U}_{ AB}\right)\bigg{)}\] \[\quad+v_{D}^{d}\bigg{(}h^{CD}\left(2\nabla_{(A}^{h}\mathrm{U}_{B) C}-\nabla_{C}^{h}\mathrm{U}_{AB}-2\ell^{D}\mathrm{U}_{CD}\mathrm{U}_{AB}+2s_{C} \mathrm{U}_{AB}\right)+\ell^{D}(\pounds_{n}\mathbf{U})_{AB}\bigg{)} \tag{6.14}\] after using \(v_{A}^{a}v_{B}^{b}\overset{\circ}{\nabla}_{(a}s_{b)}=\nabla_{(A}^{h}s_{B)}- \ell^{C}s_{C}\mathrm{U}_{AB}\) (cf. (3.39)). Equation (6.7) follows from (6.14) after taking into account \(\mathfrak{q}(v_{D})=0,\,\mathfrak{q}(n)=1\), definition (6.5) and the fact that \(\psi^{\star}\left(\mathfrak{q}(n\otimes\pounds_{n}\mathbf{Y})\right)=( \pounds_{n}\mathbf{Y})_{\parallel}\). 
## Appendix A A generalized Gauss identity In this Appendix we obtain a generalized form of the well-known Gauss identity (see e.g. [23]). On any semi-Riemannian manifold, the Gauss identity relates the curvature tensor of the Levi-Civita connection along tangential directions of a non-degenerate hypersurface with the curvature tensor of the induced metric and the second fundamental form. It has been generalized in a number of directions, e.g. when dealing with induced connections associated to a transversal (rigging) vector [21]. Here we find an identity where the connection of the space and of the hypersurface are completely general, except for the condition that they are both torsion-free. Our primary interest will be in applying this identity when the space defines null hypersurface data and the codimension one submanifold is non-degenerate. However, the identity is more general and may be of independent value. We remark that the tensor \(\widehat{\gamma}\) in the statement of the lemma is completely arbitrary, so neither \(\widehat{\gamma}\) nor its pull-back \(\widehat{h}\) to the submanifold are assumed to be non-degenerate. **Theorem A.1**.: _Consider a smooth manifold \(\mathcal{N}\) endowed with a symmetric \(2\)-covariant tensor field \(\widehat{\gamma}\) and a torsion-free connection \(\widehat{\nabla}\). Let \(S\) be an embedded hypersurface in \(\mathcal{N}\) and assume that \(S\) is equipped with another torsion-free connection \(\widehat{D}\). Define \(\widehat{h}\overset{\mathsf{def}}{=}\psi^{\star}\widehat{\gamma}\) (where \(\psi:S\dot{\kern-1.0pt\longrightarrow}\mathcal{N}\) is the corresponding embedding) and the tensor \(\mathcal{P}\) by means of_ \[\widehat{\nabla}_{X}Y=\widehat{D}_{X}Y+\mathcal{P}(X,Y)\qquad\forall X,Y\in \Gamma(TS),\] and assume that there exists a transversal vector field \(n\) along \(S\) satisfying \(\widehat{\gamma}(n,X)=0\) for all \(X\) tangent to \(S\). Define the \(2\)-covariant tensor \(\Omega\) and the \(1\)-contravariant, \(2\)-covariant tensor \(A\) on \(S\) by decomposing \(\mathcal{P}(X,Y)\) in tangential and transverse parts as follows:_ \[\mathcal{P}(X,Y)=A(X,Y)+\Omega(X,Y)n.\] (A.1) _Then, for all \(X,Y,Z,W\in\Gamma(TS)\) it holds_ \[\widehat{\gamma}(W,R^{\widehat{\nabla}}(X,Y)Z) =\widehat{h}(W,R^{\widehat{D}}(X,Y)Z)+(\widehat{D}_{X}A_{ \widehat{h}})(W,Y,Z)-(\widehat{D}_{Y}A_{\widehat{h}})(W,X,Z)\] \[\quad+\widehat{h}(A(Y,W),A(X,Z))-\widehat{h}(A(X,W),A(Y,Z))\] \[\quad-(\widehat{\nabla}_{X}\widehat{\gamma})(W,\mathcal{P}(Y,Z))+ (\widehat{\nabla}_{Y}\widehat{\gamma})(W,\mathcal{P}(X,Z))\] \[\quad+\widehat{\gamma}(n,n)\left(\Omega(Y,W)\Omega(X,Z)-\Omega(X,W)\Omega(Y,Z)\right),\] (A.2) _where \(A_{\widehat{h}}(W,X,Z)\stackrel{{\textup{\tiny{def}}}}{{=}} \widehat{h}(W,A(X,Z))\)._ Proof.: Since the connections are torsion-free, the tensors \(\mathcal{P}(X,Y)\), \(A(X,Y)\) and \(\Omega(X,Y)\) are all symmetric in \(X\), \(Y\). 
First, we find \[\widehat{\nabla}_{X}\widehat{\nabla}_{Y}Z =\widehat{\nabla}_{X}(\widehat{D}_{Y}Z+\mathcal{P}(Y,Z))=\widehat {D}_{X}\widehat{D}_{Y}Z+\mathcal{P}(X,\widehat{D}_{Y}Z)+\widehat{\nabla}_{X}( \mathcal{P}(Y,Z)),\] (A.3) \[\widehat{\nabla}_{[X,Y]}Z =\widehat{D}_{[X,Y]}Z+\mathcal{P}(\widehat{D}_{X}Y,Z)-\mathcal{P} (\widehat{D}_{X}Y,Z).\] (A.4) The quantity \((\mathscr{D}_{X}\mathcal{P})(Y,Z)\stackrel{{\textup{\tiny{def}}}}{{= }}\widehat{\nabla}_{X}(\mathcal{P}(Y,Z))-\mathcal{P}(\widehat{D}_{X}Y,Z)- \mathcal{P}(Y,\widehat{D}_{X}Z)\) is tensorial in \(X,Y,Z\), and takes values in the space of vector fields (not necessarily tangent) along \(\psi(S)\). Inserting (A.3)-(A.4) into the definition of the curvature tensor (1.3) yields \[R^{\widehat{\nabla}}(X,Y)Z=R^{\widehat{D}}(X,Y)Z+(\mathscr{D}_{X}\mathcal{P} )(Y,Z)-(\mathscr{D}_{Y}\mathcal{P})(X,Z).\] We now insert the decomposition (A.1). Using that \(\widehat{\gamma}(n,W)=0\) and \(\widehat{h}(X,Y)\stackrel{{\textup{\tiny{def}}}}{{=}}\widehat{ \gamma}(X,Y)\) gives \[\widehat{\gamma}(\widehat{\nabla}_{X}W,\mathcal{P}(Y,Z)) =\widehat{\gamma}(\widehat{\nabla}_{X}W,A(Y,Z)+\Omega(Y,Z)n)\] \[=\widehat{h}(\widehat{D}_{X}W,A(Y,Z))+\widehat{h}(A(X,W),A(Y,Z))+ \widehat{\gamma}(n,n)\Omega(X,W)\Omega(Y,Z),\] (A.5) from where it follows \[\widehat{\gamma}(W,(\mathscr{D}_{X}\mathcal{P})(Y,Z)) =\widehat{\gamma}(W,\widehat{\nabla}_{X}(\mathcal{P}(Y,Z)))- \widehat{\gamma}(W,\mathcal{P}(\widehat{D}_{X}Y,Z))-\widehat{\gamma}(W, \mathcal{P}(Y,\widehat{D}_{X}Z))\] \[=\widehat{\nabla}_{X}\left(\widehat{\gamma}(W,\mathcal{P}(Y,Z)) \right)-\left(\widehat{\nabla}_{X}\widehat{\gamma}\right)(W,\mathcal{P}(Y,Z ))-\widehat{\gamma}(\widehat{\nabla}_{X}W,\mathcal{P}(Y,Z))\] \[\quad-\widehat{\gamma}(W,\mathcal{P}(\widehat{D}_{X}Y,Z))- \widehat{\gamma}(W,\mathcal{P}(Y,\widehat{D}_{X}Z))\] \[\stackrel{{(\ref{eq:2})}}{{=}} \widehat{D}_{X}\left(\widehat{h}(W,A(Y,Z))\right)-\left(\widehat{ \nabla}_{X}\widehat{\gamma}\right)(W,\mathcal{P}(Y,Z))-\widehat{h}(\widehat{D} _{X}W,A(Y,Z))\] \[\quad-\widehat{h}(A(X,W),A(Y,Z))-\widehat{\gamma}(n,n)\Omega(X,W) \Omega(Y,Z)\] \[\quad-\widehat{h}(W,A(\widehat{D}_{X}Y,Z))-\widehat{h}(W,A(Y, \widehat{D}_{X}Z))\] \[=(\widehat{D}_{X}\widehat{h})(W,A(Y,Z))-(\widehat{\nabla}_{X} \widehat{\gamma})(W,\mathcal{P}(Y,Z))+\widehat{h}(W,(\widehat{D}_{X}A)(Y,Z)))\] \[\quad-\widehat{h}(A(X,W),A(Y,Z))-\widehat{\gamma}(n,n)\Omega(X,W) \Omega(Y,Z).\] Therefore, \[\widehat{\gamma}(W,R^{\widehat{\nabla}}(X,Y)Z) =\widehat{h}(W,R^{\widehat{D}}(X,Y)Z)+(\widehat{D}_{X}\widehat{h} )(W,A(Y,Z))-(\widehat{\nabla}_{X}\widehat{\gamma})(W,\mathcal{P}(Y,Z))\] \[\quad+\widehat{h}(W,(\widehat{D}_{X}A)(Y,Z))-\widehat{h}(A(X,W),A( Y,Z))-(\widehat{D}_{Y}\widehat{h})(W,A(X,Z))\] \[+(\widehat{\nabla}_{Y}\widehat{\gamma})(W,\mathcal{P}(X,Z))- \widehat{h}(W,(\widehat{D}_{Y}A)(X,Z))+\widehat{h}(A(Y,W),A(X,Z))\] \[+\widehat{\gamma}(n,n)\left(\Omega(Y,W)\Omega(X,Z)-\Omega(X,W) \Omega(Y,Z)\right).\] (A.6) By virtue of the definition of \(A_{\widehat{h}}\), it holds \[(\widehat{D}_{Y}A_{\widehat{h}})(W,X,Z)=(\widehat{D}_{Y}\widehat{h})(W,A(X,Z) )+\widehat{h}(W,\widehat{D}_{Y}A(X,Z)).\] This allows us to rewrite (A.6) as (A.2). 
In abstract index notation the generalized Gauss identity (A.2) takes the form \[v_{A}^{a}\widehat{\gamma}_{af}(R^{\widehat{\nabla}})^{f}{}_{bcd} v_{B}^{c}v_{C}^{d}v_{D}^{d} =\widehat{h}_{FA}(R^{\widehat{D}})^{F}{}_{BCD}+\widehat{D}_{C}A_{ \widehat{h}ABD}-\widehat{D}_{D}A_{\widehat{h}ABC}\] \[\quad+\widehat{h}_{FL}A^{L}{}_{AD}A^{F}{}_{BC}-\widehat{h}_{FL}A^ {L}{}_{AC}A^{F}{}_{BD}+v_{D}^{d}(\widehat{\nabla}_{d}\widehat{\gamma}_{af})v_ {A}^{a}\mathcal{P}^{f}{}_{BC}\] \[\quad-v_{C}^{c}(\widehat{\nabla}_{c}\widehat{\gamma}_{af})v_{A}^ {a}\mathcal{P}^{f}{}_{BD}+\widehat{\gamma}(n,n)\left(\Omega_{AD}\Omega_{BC}- \Omega_{AC}\Omega_{BD}\right),\] (A.7) where the vectors \(v_{A}^{a}\) are the push forward with \(\psi\) of any basis vectors \(\{\hat{v}_{A}\}\) in \(S\). ## Acknowledgements The authors acknowledge financial support under the project PID2021-122938NB-I00 (Spanish Ministerio de Ciencia, Innovacion y Universidades and FEDER "A way of making Europe") and SA096P20 (JCyL). M. Manzano also acknowledges the Ph.D. grant FPU17/03791 (Spanish Ministerio de Ciencia, Innovacion y Universidades).
2301.13786
Deep learning-based lung segmentation and automatic regional template in chest X-ray images for pediatric tuberculosis
Tuberculosis (TB) is still considered a leading cause of death and a substantial threat to global child health. Both TB infection and disease are curable using antibiotics. However, most children who die of TB are never diagnosed or treated. In clinical practice, experienced physicians assess TB by examining chest X-rays (CXR). Pediatric CXR has specific challenges compared to adult CXR, which makes TB diagnosis in children more difficult. Computer-aided diagnosis systems supported by Artificial Intelligence have shown performance comparable to experienced radiologist TB readings, which could ease mass TB screening and reduce clinical burden. We propose a multi-view deep learning-based solution which, by following a proposed template, aims to automatically regionalize and extract lung and mediastinal regions of interest from pediatric CXR images where key TB findings may be present. Experimental results have shown accurate region extraction, which can be used for further analysis to confirm TB finding presence and severity assessment. Code publicly available at https://github.com/dani-capellan/pTB_LungRegionExtractor.
Daniel Capellán-Martín, Juan J. Gómez-Valverde, Ramon Sanchez-Jacob, David Bermejo-Peláez, Lara García-Delgado, Elisa López-Varela, Maria J. Ledesma-Carbayo
2023-01-31T17:33:35Z
http://arxiv.org/abs/2301.13786v1
# Deep learning-based lung segmentation and automatic regional template in chest X-ray images for pediatric tuberculosis ###### Abstract Tuberculosis (TB) is still considered a leading cause of death and a substantial threat to global child health. Both TB infection and disease are curable using antibiotics. However, most children who die of TB are never diagnosed or treated. In clinical practice, experienced physicians assess TB by examining chest X-rays (CXR). Pediatric CXR has specific challenges compared to adult CXR, which makes TB diagnosis in children more difficult. Computer-aided diagnosis systems supported by Artificial Intelligence have shown performance comparable to experienced radiologist TB readings, which could ease mass TB screening and reduce clinical burden. We propose a multi-view deep learning-based solution which, by following a proposed template, aims to automatically regionalize and extract lung and mediastinal regions of interest from pediatric CXR images where key TB findings may be present. Experimental results have shown accurate region extraction, which can be used for further analysis to confirm TB finding presence and severity assessment. Code publicly available at: [https://github.com/dani-capellan/pTB_LungRegionExtractor](https://github.com/dani-capellan/pTB_LungRegionExtractor). Tuberculosis, semantic segmentation, pediatric chest X-Ray, deep learning, computer vision Further author information: (Correspondence: DCM and MJLC) DCM: E-mail: [email protected] MJLC: E-mail: [email protected] ## 1 Introduction Despite being an ancient disease, tuberculosis (TB) remains a leading cause of death and a substantial threat to global child health, with an estimated annual burden of 1 million new pediatric cases worldwide and 250 000 children dying because of this disease [1, 2]. Of particular concern are children under the age of five years, who account for the highest mortality and risk of TB progression [3]. TB is caused by a small, aerobic bacterium called _Mycobacterium tuberculosis_ (Mtb), which generally affects the lungs, although other parts of the body can also be affected [1]. Most children who die of TB are never diagnosed or treated. Screening may be useful to identify children with possible TB and refer them for further testing; otherwise, they should be considered for preventive treatment [4]. Chest X-rays (CXR), along with symptom inquiry, are considered the best TB screening methods, due to their higher availability and lower cost compared to other imaging techniques [5]. In clinical practice, experienced physicians examine CXR for TB. However, this is a subjective, time-consuming process and carries a significant risk of misclassification of other diseases with similar radiological patterns [6, 7]. Besides, the diagnosis of TB is more difficult in young children, given the non-specific nature of their symptoms and the less specific radiological manifestation compared to adults [8]. The most frequent lesions in pediatric TB are lymphadenopathy, airway compression, air space consolidation, pleural effusion, cavities, miliary patterns and Ghon focus [9, 10, 11]. Due to the difficulty of evaluating lymphadenopathy, the most relevant sign to diagnose TB with confidence on CXR, the lateral view is usually considered to facilitate diagnosis [12]. In this context, computer-aided diagnosis (CAD) systems supported by Artificial Intelligence (AI) algorithms can play an important role in the mass screening of TB by analyzing the CXR images. 
In recent years, several CE-certified and commercially available solutions have shown performance comparable to experienced radiologist readings [13, 14]. However, the existing methods do not perform well in pediatric patients and only one system (RADIFY - www.edai.africa/) is currently being designed for children older than 2 years. Additionally, despite its relevance, this field of research has been scarcely tackled [15], showing an urgent need for the development of AI systems for infants and young children (\(<\)3 years). The first steps in a typical CAD system include preprocessing and region of interest (ROI) localization, so that further processing can be applied and the disease can be diagnosed more accurately. For TB, the target ROIs are the lungs and other structures in the mediastinal region. Most of the current algorithms for detecting and segmenting the lungs are trained and evaluated using healthy subjects, which could have an impact on the correct identification of areas affected by pathology. As a first step to tackle these challenges, in this work we propose a multi-view deep learning (DL)-based approach which aims to automatically extract lung and mediastinal regions of interest from pediatric CXR images where key TB findings may be present. The output of the proposed method is a standardized partition of the pediatric CXR that will enable further development of TB radiological sign detection methods as well as the potential assessment of severity. ## 2 Methodology Figure 1 shows the main steps that make up the proposed solution. Figure 1: Pipeline of the proposed solution. Images shown in the pipeline are real predictions and outputs made by the corresponding DL-based models and algorithms on an 8-month-old infant who belongs to the testing set. AP: Anteroposterior. LAT: Lateral. ### Datasets and Splits For developing the solution, two datasets were used. Our target CXR dataset is from a pediatric (\(<\)3 years of age) cohort of 218 TB and non-TB children obtained from a study performed at CISM (Manhica Health Research Center, Mozambique), with both anteroposterior (AP) and lateral (LAT)-view CXR images [9]. Additionally, for development we used a subset from the public NIH CXR, ChestX-ray8 dataset (112,120 frontal-view CXR images from 30,805 patients) presenting common thoracic pathologies, such as lung nodules, pneumonia, fibrosis, edema or cardiomegaly [16]. To obtain a fully pediatric subset, only images of patients \(\leq\)11 years old were considered, which account for a total of 2330 images from 847 patients. Since further manual labeling of the images was required, a final subset of 210 images covering different ages and pathological findings was randomly selected. In the experiments, training and validation splits were considered. The amount of training and validation data is specified later in each of the tasks. To test the proposed solution, an independent CISM subset of 30 patients with both AP and LAT chest X-rays was used. ### Preprocessing To enable comparable contrast representation across the data, a first preprocessing step was applied to the images, mainly based on the application of an image contrast enhancement process with Contrast Limited Adaptive Histogram Equalization (CLAHE), capable of improving local contrast and edges adaptively with modest computational requirements, which has been shown to improve detection of TB and lung lesions on chest radiographs [17, 18, 19]. Preprocessing with CLAHE may also provide better performance in image segmentation [20]. 
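To make this preprocessing step concrete, the following is a minimal sketch using OpenCV's CLAHE implementation; the clip limit and tile size shown are generic values chosen for illustration and are not necessarily the settings used in the study.

```python
import cv2

def preprocess_cxr(path, clip_limit=2.0, tile_grid_size=(8, 8)):
    """Load a CXR image in grayscale and apply CLAHE contrast enhancement."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # 8-bit, single channel
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(image)

# Example usage (hypothetical file name):
# enhanced = preprocess_cxr("cxr_ap_001.png")
```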
### Lung Region Detection & Cropping In high-burden clinical scenarios, both digital and analog X-ray systems exist. To ensure the same field of view (FOV) and the proper processing of manually digitized X-rays, a lung region detection process was performed on both AP and LAT images. Indeed, first experiments showed that the subsequent lung segmentation process was much more robust when a previous cropping step was included. Consequently, two DL-based fully convolutional neural network (FCNN) object detection models, one for AP and another for LAT, based on the YOLO (_You Only Look Once_) architecture were implemented. For this, Ultralytics' YOLOv5* implementation was used for training a lung detector for both AP and LAT images. Footnote *: [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5) For AP images, a YOLOv5s model was trained on a subset of 254 AP images from the NIH and CISM datasets (210 and 44, respectively). For LAT images, another YOLOv5s model was trained, this time on 139 LAT images from CISM. All AP and LAT images were manually annotated using the CVAT annotation tool and checked by an expert radiologist (RSJ, from author list). The corresponding object detection outputs were then used to crop both AP and LAT images, narrowing down the field of study to our region of interest, the lungs, thus providing a more robust subsequent segmentation process. ### Lung Segmentation This step is one of the most important parts of the proposed pipeline. The lung segmentation was defined to cover the full lung parenchymal extension, independently of the presence of overlapping structures. This is particularly important for pediatric TB cases as some of the findings could appear behind or in front of other structures such as the heart or at the lower posterior lobes of the lungs. To tackle this, a comparison of three different state-of-the-art DL-based image segmentation architectures was carried out. Different models were trained and tested for each of the views (AP and LAT). Training was performed from scratch. All the data used for this task, including both training and test sets, were manually segmented using annotation tools. These were then checked by an expert radiologist (RSJ, from author list). Two U-Net-based architectures and one Transformer-based architecture were used: Gated-Axial Attention UNet (GatedAxialUNet) [21], Medical Transformer (MedT) [21] and nnU-Net ("no-new-Net") [22, 23]. No major changes were made to the source code of each of the implementations, preserving the default settings as much as possible. In order to assess the performance of each of the models in relation to the amount of supervised data used to train the networks, an incremental learning approach was followed. Supervised training data was progressively increased from 20 to 60 images in 20-image steps, gathering segmentation performance results on the independent test set throughout each of the steps. In the cases of GatedAxialUNet and MedT, input images were resized to an input size of \(256\times 256\), the default batch size of 4 was kept, the Adam optimizer was used with the default learning rate value of 0.001, and a total of 400 epochs were considered for training the models. The rest of the hyperparameters and configurations were kept with their default values. The validation set accounted for 20% of the initial training set. 
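As an illustration of the lung region detection and cropping step described above, the sketch below loads a trained YOLOv5s detector through Ultralytics' torch.hub interface and crops an image to the detected lung region. The weights file name is hypothetical and the snippet is only a sketch of the idea, not the authors' code; the 0.7 confidence threshold is the one reported later in the results.

```python
import cv2
import torch

# Load a custom-trained YOLOv5s detector (separate AP and LAT detectors were trained in the paper).
# 'lung_detector_ap.pt' is a hypothetical weights file standing in for the trained AP model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="lung_detector_ap.pt")
model.conf = 0.7  # confidence threshold used at inference time

def crop_lung_region(image_path):
    """Detect the lung bounding box and return the cropped image (or the full image if nothing is found)."""
    image = cv2.imread(image_path)
    results = model(image[..., ::-1])            # BGR -> RGB for the autoshape wrapper
    detections = results.xyxy[0].cpu().numpy()   # rows: x1, y1, x2, y2, confidence, class
    if len(detections) == 0:
        return image
    x1, y1, x2, y2 = detections[0, :4].astype(int)  # first detection (highest confidence)
    return image[y1:y2, x1:x2]
```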
To train the GatedAxialUNet and MedT networks, binary cross-entropy (CE) loss was used between the prediction and the ground truth, which has the following form: \[\mathcal{L}_{CE}(p,\hat{p})=-\frac{1}{wh}\sum_{x=0}^{w-1}\sum_{y=0}^{h-1}\left(p\log(\hat{p})+(1-p)\log(1-\hat{p})\right) \tag{1}\] where \(w\) and \(h\) are the dimensions of the image, \(p\), i.e. \(p(x,y)\), corresponds to the pixel label in the image and \(\hat{p}\), i.e. \(\hat{p}(x,y)\), denotes the output prediction at a specific location \((x,y)\) in the image. In the case of nnU-Net, 2D U-Net models were trained on the data. Input images were automatically adapted by the implementation, with different patch sizes depending on the image type (AP images: \(768\times 896\), LAT images: \(1024\times 896\)). The input batch size was automatically set to 1 by the implementation, according to GPU memory requirements. 50 epochs were considered for training the models. Stochastic gradient descent (SGD) with Nesterov momentum (\(\mu=0.99\)), an initial learning rate of 0.01 and a weight decay following the 'poly' learning rate policy were used for learning network weights. The rest of the hyperparameters and configurations were kept with their default values. Validation sets accounted for 20% of the initial training set, as a 5-fold cross-validation approach for training the models was adopted, following the implementation's documentation and guidelines. To train the nnU-Net models, a combination of Dice and CE loss was used: \[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{Dice}}+\mathcal{L}_{CE} \tag{2}\] where \(\mathcal{L}_{CE}\) was defined above and \(\mathcal{L}_{\text{Dice}}\), for an image \(x\) with a binary prediction output \(\hat{y}\) and binary label \(y\), is defined as: \[\mathcal{L}_{\text{Dice}}=-\frac{2\sum_{i}y_{i}\hat{y}_{i}}{\sum_{i}y_{i}+\sum_{i}\hat{y}_{i}} \tag{3}\] where \(i\) represents the \(i\)-th pixel on the images. For further details on the nnU-Net training process, please refer to Isensee et al.[22] ### Automatic LAT Orientation Correction In clinical routine, LAT images can be acquired either facing the patient right or left. Depending on this fact, the vertebral column may appear at the right or left side of the image. Consequently, after segmenting the lungs, an automatic orientation correction of the LAT image was included in the pipeline. This provides the solution with robustness and homogeneity; otherwise, incorrect regions could be extracted in the subsequent steps. To tackle this issue, a lightweight and efficient ResNet-based Deep Convolutional Neural Network was designed and trained from scratch, which learned to detect the vertebrae in the image. The model was trained on 111 CISM LAT images and validated on 28 CISM LAT images (20% validation split). A horizontal flip was then applied to those images in which the network detected the column on the left (see Figure 1), homogenizing the data for the automatic region extraction process, and, thus, making the system more robust. In order to make the training of the network more efficient, input images were first normalized to zero mean and unit variance, using z-normalization (\(X_{norm}=\frac{X-\mu}{\sigma+\epsilon}\)), where \(X\) is the image, \(\mu\) is the mean value of the image, \(\sigma\) its standard deviation and \(\epsilon\) a small (\(\epsilon\ll\sigma,\epsilon\approx e^{-10}\)) parameter that prevents division by zero. DL models in this section were implemented using the TensorFlow framework. 
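A minimal sketch of this orientation-correction step in TensorFlow/Keras is shown below: the LAT image is z-normalized, a small CNN predicts on which side the vertebral column lies, and the image is flipped horizontally when the column is detected on the left. The tiny network is only a stand-in for the custom ResNet-based model described above, and the input size is an assumption.

```python
import numpy as np
import tensorflow as tf

def z_normalize(image, eps=1e-10):
    """Zero-mean, unit-variance normalisation applied before the classifier."""
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + eps)

def build_side_classifier(input_shape=(256, 256, 1)):
    """A deliberately small stand-in for the ResNet-based spine-side classifier."""
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(spine on the left)
    return tf.keras.Model(inputs, outputs)

def correct_orientation(model, lat_image):
    """Flip the LAT image horizontally when the classifier places the spine on the left."""
    x = z_normalize(lat_image)[None, ..., None]
    if model.predict(x, verbose=0)[0, 0] > 0.5:
        return np.fliplr(lat_image)
    return lat_image
```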
Training and testing in this and previous sections were done using a workstation with NVIDIA TITAN X 12GB and TITAN Xp 12GB GPUs, 64GB RAM and an Intel Core i7 @ 3.6 GHz CPU. ### Standardized Template and Automatic Region Extraction As a final step, an automatic standardized template, based on Andronikou et al.'s proposals [24] to regionalize the pediatric CXR, was constructed having as input the previously cropped AP and LAT images, with their corresponding predicted lung segmentations. To ensure the correspondence of regions across views we first aligned AP and LAT views. Subsequently, the AP image was automatically rotated to ensure lung verticality based on the orientation of the segmentations. To achieve this, a BLOb (Binary Large Object) detection followed by a principal component analysis (PCA) was applied to the AP predicted segmentation masks in order to estimate the rotation of each of the lungs. The AP and LAT bounding boxes enclosing the lung segmentations were extracted and mediastinal regions were defined based on relative measures with respect to the lungs. The final regions extracted are detailed in Table 1. AP and LAT lungs were divided into thirds. LAT lungs were also divided into thirds to identify corresponding areas of potential pathology, not necessarily anatomical regions. APUM contains the respiratory tract and suprahiliar area; APMM mainly contains the parahiliar area; and LATMM gathers the parahiliar area, of vital importance for experienced radiologists when detecting parahiliar lymphadenopathies. This standard template and its partitions can be used for further analysis to confirm TB finding presence and severity assessment. ## 3 Experiments and Results ### Lung Region Detection & Cropping Lung detection performance was satisfactory using YOLOv5s, the small version of YOLOv5 (7.2M parameters, 14 MB in size). A confidence threshold of 0.7 was selected for inference, with the aim of properly detecting the lungs with these models. Figure 2 shows two examples of how YOLOv5 performs on both AP and LAT views from two testing cases, one non-TB and the other TB. ### Lung Segmentation Results obtained throughout all the different experiments carried out in this step are presented in Table 2. These results were obtained by testing the different trained configurations and architectures, following the mentioned incremental learning approach, on an independent CISM test set of 30 manually segmented cases (with their corresponding 30 AP and 30 LAT images). When computing the metrics, all predicted and reference masks were resized to \(256\times 256\), avoiding metric miscalculation due to this fact. \begin{table} \begin{tabular}{l l} \hline \hline **View \& Region(s)** & **Acronym(s)** \\ \hline AP right lung thirds (upper, middle, lower) & APUR, APMR, APLR \\ AP left lung thirds (upper, middle, lower) & APUL, APML, APLL \\ AP upper and middle mediastinal regions & APUM, APMM \\ LAT lungs thirds (upper, middle, lower) & LATULS, LATMLS, LATLLS \\ LAT middle mediastinal region & LATMM \\ \hline \hline \end{tabular} \end{table} Table 1: Extracted regions and their acronyms. Figure 2: Lung detections in both AP and LAT views of two cases from the test set. #### 3.2.1 Incremental learning Figure 3 shows how model performance varied depending on the amount of data (20, 40 or 60 images) used to train the models (see Table 2 for numerical results). nnU-Net provides greater stability than the other architectures, with good enough Dice (F1) metrics at low amounts of training data. 
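For reference, the Dice (F1) scores discussed in this section can be computed on masks resized to \(256\times 256\) with a few lines of NumPy/OpenCV; this is an illustrative sketch rather than the evaluation code used in the study.

```python
import cv2
import numpy as np

def dice_coefficient(pred_mask, ref_mask, size=(256, 256), eps=1e-7):
    """Dice (F1) overlap between a predicted and a reference binary mask,
    both resized to a common resolution with nearest-neighbour interpolation."""
    pred = cv2.resize(pred_mask.astype(np.uint8), size, interpolation=cv2.INTER_NEAREST) > 0
    ref = cv2.resize(ref_mask.astype(np.uint8), size, interpolation=cv2.INTER_NEAREST) > 0
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)
```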
Both MedT and GatedAxialUNet yielded expected results with incremental performance for both AP and LAT views. MedT required a sufficient quantity of data to yield competitive results. In LAT images, this effect was even more pronounced. Incremental learning showed, in general, significant improvement in performance for all architectures. The increase in model performance was more noticeable in MedT and GatedAxialUNet. nnU-Net proved to have greater stability towards training data quantity variation, yielding promising results even with low training data availability. With only 20 images, nnU-Net performed similarly as with 60 images in both AP and LAT views. #### 3.2.2 Results comparison The most stable and best performing architecture was nnU-Net. Nonetheless, GatedAxialUNet and MedT also yielded good performance results, even carrying out a more efficient training process than nnU-Net (training time was drastically reduced with MedT and GatedAxialUNet). However, the performance metrics provided by these last two models did not reach values as high as nnU-Net did. Figure 4 shows a visual comparison of the predictions obtained from each of the models in a non-TB case and a TB-compatible case from the independent test set. Thus, nnU-Net demonstrated greater capacity in segmenting lungs in both AP and LAT views, even when fewer images were used for training. Nonetheless, training and inference times were much shorter in GatedAxialUNet and MedT. ### Automatic LAT Orientation Correction The custom ResNet model implemented for this step provided an accuracy of 1.00 in the test set, correctly detecting if the vertebral column was located at the right or the left in the LAT-view image. As expected, neither false positives nor false negatives were detected among the test set predictions, as the problem was relatively simple for the network, although necessary to provide the system with greater robustness. ### Standardized Template and Automatic Region Extraction Finally, the template construction and regional partition were tested on the independent CISM test set. As input, the predictions used for this final step corresponded to the output of the nnU-Net model trained with 60 images, which demonstrated the best performance on the lung segmentation task. An expert radiologist (RSJ, from author list) performed a visual validation of the results. From the 60 CISM AP and LAT test images corresponding to the 30 CISM test cases, 54 were marked as correct; in 5 images, minimal corrections (no substantial difference would be perceived in further region-linked TB finding assessment) were suggested; and only in 1 image were severe corrections (substantial difference would be perceived in further assessment) reported. Figure 5 presents four randomly selected cases of the test set, showing how these regions are extracted in different scenarios, for patients of different ages. Figure 4: Visual comparison of the predictions obtained from each of the models in a 29-month-old non-TB case (up) and an 8-month-old TB-compatible case (down). These cases belong to the independent CISM test set. ## 4 Conclusions In this paper, we have proposed a multi-view deep learning-based pipeline which automatically extracts lung and mediastinal regions of interest from pediatric CXR images, based on a previously proposed standard template. 
This standard template and its partitions can be used for further analysis to confirm the presence of TB findings and to assess severity given a pediatric CXR, where TB assessment is a challenging task. The proposed system lays the groundwork for automatic approaches that may reduce the high clinical burden when assessing pediatric TB, especially in countries with low resources and high prevalence of TB. ###### Acknowledgements. This work was supported by the H2020-MSCA-RISE-2018 INNOVA4TB (EU) project (ID 823854) and the ADVANCE-TB Cost Action (EU) project (ID CA21164). DCM's PhD fellowship was supported by Universidad Politecnica de Madrid.
2306.17547
Spaces of innovation and venture formation: the case of biotech in the United Kingdom
Patents serve as valuable indicators of innovation and provide insights into the spaces of innovation and venture formation within geographic regions. In this study, we utilise patent data to examine the dynamics of innovation and venture formation in the biotech sector across the United Kingdom (UK). By analysing patents, we identify key regions that drive biotech innovation in the UK. Our findings highlight the crucial role of biotech incubators in facilitating knowledge exchange between scientific research and industry. However, we observe that the incubators themselves do not significantly contribute to the diversity of innovations which might be due to the underlying effect of geographic proximity on the influences and impact of the patents. These insights contribute to our understanding of the historical development and future prospects of the biotech sector in the UK, emphasising the importance of promoting innovation diversity and fostering inclusive enterprise for achieving equitable economic growth.
Francesco Marzolla, Przemysław Nowak, Rohit Sahasrabuddhe, Chakresh Singh, Matteo Straccamore, Erik Zhivkoplias, Elsa Arcaute
2023-06-30T11:04:41Z
http://arxiv.org/abs/2306.17547v1
# Spaces of innovation and venture formation: the case of biotech in the United Kingdom ###### Abstract Patents serve as valuable indicators of innovation and provide insights into the spaces of innovation and venture formation within geographic regions. In this study, we utilise patent data to examine the dynamics of innovation and venture formation in the biotech sector across the United Kingdom (UK). By analysing patents, we identify key regions that drive biotech innovation in the UK. Our findings highlight the crucial role of biotech incubators in facilitating knowledge exchange between scientific research and industry. However, we observe that the incubators themselves do not significantly contribute to the diversity of innovations which might be due to the underlying effect of geographic proximity on the influences and impact of the patents. These insights contribute to our understanding of the historical development and future prospects of the biotech sector in the UK, emphasising the importance of promoting innovation diversity and fostering inclusive enterprise for achieving equitable economic growth. ## Keywords Innovation, diversity, knowledge spillovers, patents, startups, biotechnology. ## 1 Introduction The contribution of industries to economic development varies significantly, and the emergence of the global biotechnology sector, which utilises living organisms and their compounds for diverse applications across industries, exemplifies this trend. The biotech sector in the US stands out as a remarkable success story, with revenues exceeding \(10^{5}\) billion within just three decades [1]. In the European context, the UK has gathered attention due to its position as the third-largest contributor to biomedical patents among 16 European countries. Additionally, the UK boasts the highest concentration of financially active biomedical startups and venture capital firms [2]. Biotechnology has emerged as a critical driver of innovation in fields such as medicine, agriculture, and environmental sciences. However, the role of inventions and knowledge diversity in the success of biomedical startups remains unclear. Understanding the dynamics and spatial patterns of biotech innovation is crucial for policymakers, entrepreneurs, and researchers aiming at fostering and supporting the growth of this sector. This study focuses specifically on the biotech landscape in the United Kingdom (UK) and examines the spaces of innovation and venture formation within the country. The UK is recognised as a biotech hub, hosting numerous research institutions, universities, and industry players. It offers a unique ecosystem that fosters collaboration, knowledge exchange, and entrepreneurial activities. By analysing patent data, this study aims to gain insights into the spatial distribution of biotech innovation across UK, in order to identify the key regions and cities driving growth in this sector. Innovation activity can be assessed through the analysis of patent data and technological advancements. Pugliese et al. demonstrated that technology serves as the most reliable predictor of industrial and scientific production in the coming decades [3]. The utilisation of patent data to monitor technological innovation is a well-established practice in academic research [4, 5, 6]. Thanks to the availability of different databases about patent documents and increased computational capabilities, patents have become a valuable resource for studying technological change [7]. 
Various entities, including academia (e.g., Hall et al. [8]), institutions (e.g., PATSTAT, REGPAT), and corporations (e.g., Google Patents), have contributed to the development of extensive collections of patent-related documents. This abundance of data has allowed researchers to explore multiple aspects of patented inventions, including their role in explaining technological change, their interconnections, and their association with inventors and applicants [7, 9, 10]. One notable characteristic of patent documents, particularly relevant for economic analysis, is the presence of codes associated with the claims made in patent applications. These codes categorise the scope of commercial rights sought by inventors. To facilitate an examination by patent office officials, claims are classified into technological areas using classification systems such as the IPC classification [11] or the United States Patent Classification (USPC) [12, 13]. These classification systems employ hierarchical six-digit codes, which provide increasingly detailed definitions of technological domains. By mapping claims to classification codes, localised analysis of patents and patent applications within specific technology domains becomes possible. However, it is essential to recognise the limitations of using patents as a proxy for measuring innovation [14]. Estimating the value of patents presents a significant challenge [15]. While certain patents hold substantial market value, others may possess limited or no value. Furthermore, employing patent statistics as a comprehensive measure of economic and inventive activity is not without drawbacks [16, 5]. It is crucial to acknowledge that inventions do not encompass all forms of knowledge production in the economy, and patents do not cover the entirety of knowledge generated [17]. Additionally, patents represent just one among several indicators of knowledge and do not uniformly capture all sectors of the economy [18, 19]. This study builds upon previous research that explored knowledge spillovers in the UK based on patent citations, with biotechnology showing a weaker effect compared to other technologies [20]. By focusing on the local level, specifically the NUTS3 regions, and incorporating information on startups, we aim to address this limitation and investigate the influence of biotechnology incubators. Furthermore, we examine the regions in the UK that demonstrate high intellectual property (IP) potential and explore their capacity to drive knowledge accumulation in other industries. ## 2 Data ### Patents Sources:The Patent data used in this work is the same data as the one used in [20]. It belongs to the OECD REGPAT database [21], and it is from 1977 to 2019. It has been filtered such that only patents belonging to the UK, cited and citing, are considered. For further details on data manipulation please refer to [20]. In that work, 43,751 total patents were considered in the study. We further filtered the data to consider only patents that have been cited at least once and that cite at least once, resulting in a total of 25,852 patents, from which 12,543 are cited at least once, and 15,745 cite at least once. Figure 1: **Geographical distribution of patents in the UK.** The NUTS3 regions with red boundaries are those with incubators. **a**: Number of patents active in the UK. **b**: Number of these patents that belong to the biotech sector. **c**: Share, i.e. the percentage of patents which are in the biotech sector. 
Biotechnology patents:Each patent in our dataset can be described by one or more technology codes (IPC codes), which provide information about the technology industry to which they belong. Patents that have at least one IPC belonging to the biotech classification are considered biotech. Selecting the biotechnology classification for the IPC codes, see Appendix, we identified 1,436 patents in this sector, from which 627 are cited at least once, and 937 cite at least once, see Fig. 1. Citations:The citation network, see Section 3.1, is derived from the citation dataset included in the OECD REGPAT database [21]. For this work, we excluded the patents that were outside of the UK citing other patents in the UK. Geographical discrepancies:The UK patent database comprises patents from 1977 to 2018, encompassing a broad timeframe. Consequently, various patents are linked to different editions of NUTS3 available on the Eurostat website. To address this issue, all iterations of NUTS3 were downloaded, and the patents within each region were tallied. This approach ensures minimal overlap as the different NUTS3 versions primarily entail minor adjustments to the boundaries. ### Startups and Incubators Startups:While the term "startup" has become increasingly ubiquitous, a precise definition remains elusive due to its dynamic nature. A startup can be characterised as a young, innovative business. For the purpose of this study, we selected only those companies that have been registered for no longer than 5 years. This allowed us to define them as startups and choose them for further analysis. We extracted all new firms whose focus lies within the field of biotechnology. Biotechnology is a multidisciplinary field concerning many areas of society, including medicine, environmental science and many more, integrated with engineering sciences. In our search for startups, we referred to the official list [22] compiled by the government of the United Kingdom. This list consists of SIC codes which are used to classify businesses by industry in administrative units. The authors connected Biotechnology with Manufactures of Basic Pharmaceuticals, Pharmaceutical Preparations, Irradiation, Electromedical and Electrotherapeutic Equipment and Dental Equipment. The full list is available under the link [22]. The firms' data has been extracted from Companies House [23] for 2018, and we considered as startups all firms that were created in 2014 or later. Out of the total registered firms in 2018, around 51% can be considered startups, leading to a total of 2,181,018. Out of those, the share in biotech is 0.163%, leading to a total of 3548. See Appendix for distribution. Figure 2: **Geographical distribution of startups in the UK.****a**: Number of startups active in 2018 in the UK. **b**: Number of these startups that are operating in the biotech sector. **c**: Share, i.e. the percentage of startups which are operating in the biotech sector. The NUTS3 regions with red boundaries are those with incubators. Incubators:While a startup is typically considered a newly established business venture with a scalable business model and high growth potential, incubators, on the other hand, are organisations or programs designed to support and nurture startups during their early stages by providing resources, mentorship, and infrastructure. 
Technology incubators are established in order to promote the commercialisation of knowledge derived from the university-industry partnership and accelerate business development by providing access to seed investment [24]. The information about 20 biotechnology incubators in the UK was collected from [25], including the geographic location and the institution that provided the platform (University-based, hospital-based, large pharma-based or stand-alone). For 13 incubators, we also collected information about their size and the number of tenant firms [24]. ## 3 Methods ### Citation network As mentioned previously, there are 15,745 patents that cite at least another patent. Using the unique identifiers for these patents we create a directed network. We show in Fig.3 the giant connected component (GCC) of this network after removing all nodes (patents) that are not from biotechnology. ### Precursors of innovation and their diversity Diversity is considered an important driver for innovation. We will explore the diversity of the patents and that of the derived innovations from biotech, making use of a commonly employed measure of diversity [26], Shannon's entropy. On the other hand, we will also explore whether there is an overlap between the different technologies involved in citing and cited patents using the Jaccard index. Shannon's entropy:Shannon entropy (SE) [27] measures the uncertainty or randomness in a dataset. It calculates the average amount of information or surprise in each data point. Higher entropy signifies more unpredictability, while lower entropy indicates more structure. SE is crucial in fields like data analysis, machine learning [28] and cryptography [29] to assess dataset complexity and information content. In this section, we utilise the SE to quantify the diversity of a patent. For each patent, the SE is defined as: \[SE=-\sum_{i}^{N}p_{i}\log(p_{i}) \tag{1}\] Here, \(N\) represents the total number of unique IPC codes, and \(p_{i}\) denotes the frequency of IPC technology \(i\) within the patent, divided by the total number of unique codes in the dataset. Figure 3: **Biotechnology patents**. An unweighted directed citation network of patents within the UK cited by other UK patents. The nodes’ size and color (light to dark) are proportional to the in-degree of the node. For our study, the in-degree is a proxy of success. **Technological similarity:** The Jaccard index [30, 31], is a measure of the similarity between two sets. It is defined as the ratio of the size of the intersection of the sets to the size of their union. The Jaccard index ranges from 0 to 1, with 0 indicating no similarity and 1 indicating complete similarity between the sets. In our case, we compute the Jaccard index considering pair of patents \(X\) and \(Y\) with their set of IPC codes at 4-digit level \(i\) and \(j\). The calculation is computed with \[\text{Technological similarity}(X,Y)=\frac{|X_{i}\cap Y_{j}|}{|X_{i}\cup Y_{j}|}. \tag{2}\] Technological similarity = 1 (0) for patents with identical (completely different) IPC codes. ## 4 Results ### Precursors to biotech innovation In order to foster innovation, it is important to understand which are the ideal conditions giving rise to the observed patents. In this section, we look at the precursors of innovation, which correspond to the cited patents and their IPC codes, and explore whether they belong or not to the same industry. We do this in a temporal manner, by looking at these over time. In Fig. 
4(a), we plot the mean technological similarity of non-biotech and biotech patents to their precursors. From the large difference in the fraction of patents that are highly similar to their precursors, we see that innovations in biotech are more likely to come from different technologies than those outside of biotech. As a further investigation, we check whether the precursors of biotech patents come from outside the biotech industry. Around 40% of biotech patents have exclusively non-biotech precursors, and around 55% of them stem only from other biotech patents. In Fig. 4(b), we plot the distribution of these two classes of biotech patents over time, finding that those with primarily non-biotech precursors are more recent than those with primarily biotech precursors. Figure 4: **Precursors of biotech innovations.** **(a)**: Mean technological similarity to precursors in biotech and non-biotech patents. **(b)**: Distribution of the biotech patents with and without biotech precursors over time. ### The role of incubators for innovation The UK has invested in biotech incubators across regions. The main role of these incubators is to provide an ecosystem that supports biotech startups by providing skills and expertise, in order to secure growth and advance the industry. In this section, we explore whether the regions that contain these incubators show an advantage with respect to regions that do not. We assess the impact of the incubators by looking at the startups and patents in biotech. ### Derived innovations #### 4.3.1 Diversity Considering all patents cited at least once, we observe a positive Spearman correlation of 0.352 (p-value \(<\) 0.05) between the number of citations received and the diversity of the citing patents (Fig. 5(a)). The corresponding correlation for biotech patents only is 0.331 (p-value \(<0.05\)) (Fig. 5(b)). Next, we define a simple null model for our citation network by using a directed configuration model [32, 33]. This preserves the degree sequence of the directed network, i.e. the total citations received by a patent. Comparing the observed value of the correlation with 1000 simulations of the null model, we observe that the correlation is less than expected (Fig. 5(c)). To check whether biotechnology patents lead to more diverse innovation, we classify all citing patents into those with (982) and without (14,763) biotech precursors. For each citing patent, we compute the mean technological similarity to its precursors and find that 30% of the patents without biotech precursors are technologically identical to their precursors, while the same statistic for those with biotech precursors is 7.5%. This indicates that biotech patents combine effectively with other patents to create novel innovations. Figure 5: **Diversity vs Number of citations**. We plot the correlation between the diversity and the number of citing patents for all patents cited at least once. **(a)** All patents. **(b)** Biotech patents. The positive correlation indicates that highly cited patents are precursors to diverse innovations. **(c)** Comparing with the null model, the correlation is lower than expected. Figure 6: **Geographical distribution of biotech activity.** Each dot represents a region (NUTS3 level), indicating its patent and startup share (a) and total (b) within biotechnology. Red triangles correspond to regions with biotech incubators, amongst which we highlight Oxford and Cambridge.
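The measures and the null-model comparison used above can be reproduced with standard Python tools. The sketch below is a minimal illustration (not the authors' code): it assumes a directed `networkx` graph `G` in which an edge u→v means "u cites v", and a hypothetical dictionary `ipc_of` mapping each patent to its list of IPC codes. It computes the diversity of Eq. (1), the technological similarity of Eq. (2), and the Spearman correlation between citations received and the mean diversity of citing patents, compared against directed configuration-model randomizations [32, 33].

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

def shannon_entropy(ipc_codes, n_unique_total):
    """Diversity of a patent, Eq. (1): p_i is the frequency of code i within the patent
    divided by the total number of unique codes in the dataset (as defined in Sec. 3.2)."""
    _, counts = np.unique(ipc_codes, return_counts=True)
    p = counts / n_unique_total
    return -np.sum(p * np.log(p))

def technological_similarity(codes_x, codes_y):
    """Jaccard index of two patents' IPC-code sets, Eq. (2)."""
    X, Y = set(codes_x), set(codes_y)
    return len(X & Y) / len(X | Y) if X | Y else 0.0

def degree_diversity_correlation(G, ipc_of, n_unique_total):
    """Spearman correlation between citations received (in-degree) and the
    mean diversity of the citing patents (the in-neighbours)."""
    cited = [v for v in G if G.in_degree(v) > 0]
    diversity = [np.mean([shannon_entropy(ipc_of[u], n_unique_total)
                          for u in G.predecessors(v)]) for v in cited]
    rho, _ = spearmanr([G.in_degree(v) for v in cited], diversity)
    return rho

def null_model_correlations(G, ipc_of, n_unique_total, n_sims=1000, seed=0):
    """Directed configuration-model null: rewire while preserving in/out degrees."""
    nodes = list(G.nodes())
    din = [G.in_degree(v) for v in nodes]
    dout = [G.out_degree(v) for v in nodes]
    rng = np.random.default_rng(seed)
    rhos = []
    for _ in range(n_sims):
        H = nx.directed_configuration_model(din, dout, seed=int(rng.integers(10**9)))
        H = nx.DiGraph(H)                                # collapse parallel edges
        H.remove_edges_from(nx.selfloop_edges(H))
        H = nx.relabel_nodes(H, dict(enumerate(nodes)))  # reuse the original patent ids
        rhos.append(degree_diversity_correlation(H, ipc_of, n_unique_total))
    return np.array(rhos)
```

Comparing the observed value from `degree_diversity_correlation` with the distribution returned by `null_model_correlations` gives the comparison shown in Fig. 5(c); note that, as discussed below, this rewiring discards both the temporal and the geographical structure of the network.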
## 5 Discussion Over the last 40 years, there has been a huge number of biotechnological breakthroughs in the UK. The analysis of the patent citation network identified important innovations, among which the most cited one is the fully humanised antibodies for therapeutic uses [34]. We show that for biotech patents the mean technological similarity to innovation precursors is lower than average. Moreover, in the last decade, progress in the biotech industry has been mostly driven by inventions from other fields (ecosystem-driven growth). The regional correlation between biotechnological patents and companies clearly highlights the importance of incubators, meant to facilitate knowledge exchange between science and industry. The most successful platforms are located in Oxford and Cambridge, which were already well established by the early 1990s [35]. Yet, at the regional level, the incubators themselves do not add much novelty to the patents that build on biotechnological advances. The technological diversity of patents that use biotechnological innovations strongly correlates with the importance of the innovation. However, our null model suggests that the correlation is lower than expected at random. This could be due to multiple reasons: first, by rewiring our network we lose its temporal structure; second, since the shuffling of edges does not take into account the geographic location of the patents, in the randomisation a patent can cite other patents anywhere in the UK. While this is possible, it is not what is most often observed: the influence of a patent is indeed driven by geographic proximity. To understand regional effects, we look at the geographical distribution of these patents and their impact within the UK. Understanding the historical development of biotechnology, as a newly emerged and rapidly evolving sector of the economy, contributes towards the prioritisation of real economic goals. Under time and cost constraints, technology development analysis can positively affect policy-making and regulation. While the creation of biotechnological clusters positively affected economic growth in the past, future biotech must promote the innovation diversity that will unlock equity and inclusive enterprise in the economy of the UK. Figure 7: **Average diversity of derived innovations of biotech patents.** The z-score of the mean diversity (defined using Shannon entropy) of innovations derived from the biotech patents in every NUTS3 region. The grey regions are those without any biotech patents. The regions with dashed boundaries are those containing incubators. Incubators do not own patents that create more diverse innovations than other regions. ## Acknowledgements This work is the output of the Complexity72h workshop, held at IFISC in Palma, Spain, 26-30 June 2023. [https://www.complexity72h.com/](https://www.complexity72h.com/). It means that after many coffees and laughs (and some beers) we came up with a plan for a future paper. This is the seed, the very beginning of a wonderful collaboration. ## Appendix ### Biotechnological classification List of IPC codes classified as biotech according to [21]: A01H1, A01H4, A01K67, A61K35/[12-79], A61K(38, 39), A61K48, C02F3/34, C07G(11, 13, 15), C07K(4, 14, 16, 17, 19), C12M,
C12N, C12P, C12Q, C40B(10, 40/02-08, 50/06), G01N27/327, G01N33/(53,54,55,57,68,74,76,78,88,92), G06F19/[10-18,20-24] #### Startups In the field of complex systems, it is common practice to fit various heavy-tailed distributions to real-world data. In Figure 8, we show the best-fit power law to the numbers of startups in the different UK regions (see Footnote 1), using the algorithms provided by Clauset et al. [36]. As we can see, the number of startups seems to follow a power law with an exponent of \(\alpha=2.95\). Besides the visual confirmation, the Kolmogorov-Smirnov test [37] returned a p-value of 0.627, which means that there is no reason to reject the power-law hypothesis for this dataset. Footnote 1: For the purpose of the analysis we used the NUTS3 Territorial Units in the UK, and for each of them we computed the number of startups. See section **Startups and Incubators** for more details. However, it is not rare in data analysis to consider only a truncated version of a given dataset. Here, following the recommendations of Clauset et al. [36], we focused solely on the right tail and fitted only the values greater than 13,966, which means we kept about 30% of the data. The high Kolmogorov-Smirnov p-value nevertheless suggests that there is indeed a power law. A direct consequence of this is visible in the map of Figure 2(a), where one can see many green areas and only a few yellow ones. The occurrence of a power law means that there are a few regions with an extraordinarily large number of startups.
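The tail fit above follows the maximum-likelihood recipe of Clauset et al. [36]. A minimal way to reproduce it in Python is sketched below with the `powerlaw` package, which implements those estimators; whether the authors used this particular package is an assumption on our part, and `startups_per_region` is a hypothetical array holding the startup count of each NUTS3 region.

```python
import numpy as np
import powerlaw  # implements the Clauset-Shalizi-Newman estimators [36]

# Hypothetical input: one startup count per NUTS3 region.
startups_per_region = np.loadtxt("startups_per_nuts3.txt")

# Fit only the right tail, as recommended in [36]; xmin can either be fixed
# (13,966 as in the text) or estimated by minimizing the Kolmogorov-Smirnov distance.
fit = powerlaw.Fit(startups_per_region, xmin=13966, discrete=True)

print(f"alpha       = {fit.power_law.alpha:.2f}")   # fitted exponent
print(f"KS distance = {fit.power_law.KS():.3f}")    # distance on the fitted tail
print(f"tail kept   = {np.mean(startups_per_region >= fit.xmin):.0%}")

# Likelihood-ratio comparison with an alternative heavy-tailed candidate.
R, p = fit.distribution_compare("power_law", "lognormal")
print(f"power law vs lognormal: R = {R:.2f}, p = {p:.2f}")

# Note: the goodness-of-fit p-value quoted in the text (0.627) comes from the
# semi-parametric KS bootstrap of [36], which has to be coded separately.
```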
2301.03595
White-box Inference Attacks against Centralized Machine Learning and Federated Learning
With the development of information science and technology, various industries have generated massive amounts of data, and machine learning is widely used in the analysis of big data. However, if the privacy of the customers of machine learning applications cannot be guaranteed, security threats and losses arise both for users' personal information and for service providers. The issue of privacy protection in machine learning has therefore received wide attention. For centralized machine learning models, we evaluate the impact of different neural network layers, gradients, gradient norms, and fine-tuned models on membership inference attack performance under prior knowledge; for the federated learning model, we discuss the attacker's position in the target model and its attack mode. The results show that the centralized machine learning model exhibits more severe membership information leakage in all respects, and that the inference accuracy of an attacker at the central parameter server is significantly higher than that of local inference attacks mounted as a participant.
Jingyi Ge
2022-12-15T07:07:19Z
http://arxiv.org/abs/2301.03595v1
# White-box Inference Attacks against Centralized Machine Learning and Federated Learning ###### Abstract With the development of information science and technology, various industries have generated massive amounts of data, and machine learning is widely used in the analysis of big data. However, if the privacy of the customers of machine learning applications cannot be guaranteed, security threats and losses arise both for users' personal information and for service providers. The issue of privacy protection in machine learning has therefore received wide attention. For centralized machine learning models, we evaluate the impact of different neural network layers, gradients, gradient norms, and fine-tuned models on membership inference attack performance under prior knowledge; for the federated learning model, we discuss the attacker's position in the target model and its attack mode. The results show that the centralized machine learning model exhibits more severe membership information leakage in all respects, and that the inference accuracy of an attacker at the central parameter server is significantly higher than that of local inference attacks mounted as a participant. **Key words:** machine learning, federated learning, white-box inference attacks, stochastic gradient descent. ## 1 Introduction ### 1.1 Introduction Machine learning is the technical core of the vigorous development of contemporary artificial intelligence and a basic route to computer intelligence. In machine learning, users can obtain output results for arbitrary inputs through general-purpose algorithms; the system collects and learns from previous data, and improves its own algorithm so as to effectively optimize the performance of computer programs. Because of its superior information-processing ability, machine learning is widely used in the analysis and processing of big data, for example in medical diagnosis, weather prediction, economic research, mining engineering, and so on. However, privacy concerns arise at several points: (1) the model itself, including the training and computation process, the activation functions, and so on; (2) training data privacy: the sample data include personally identifiable information (PII) that can represent user attributes, such as email address, home address, surname, and other identity attributes; (3) prediction outputs: the model directly computes predictions for users, and the predicted information may reveal private inputs; for example, a medical diagnosis model can predict the probability that a patient has a certain disease, and this personal information may be abused by a malicious diagnosis service provider. If privacy in machine learning applications cannot be guaranteed and users' personal information is threatened, the harm falls not only on the users of the service but also on the service providers. ### 1.2 Research status quo With the expanding application of information science and technology, research has made vigorous progress in both depth and breadth, and work on privacy-related issues has produced numerous achievements and insights for machine learning and deep learning algorithms. **Attack strategy.** Shokri et al. [2] were the first researchers to propose membership inference attacks.
When the target is a black box whose internal logic and computation process cannot be accessed, they let the attacker train shadow models; a shadow model has a distribution and architecture similar to the target's, so its behaviour on training data is more or less similar to that of the target model on its own training data, which makes the attack effective. The statistical characteristics of the output (such as its entropy) can then be used to perform membership inference. To cope with the limited access and insufficient samples of black-box inference attacks, Peng et al. [4] launched a fast, high-precision membership inference attack based on principal component analysis (a PCA-based attack), which effectively improves the poor transferability of previous attack models and can perform membership inference without information about the target model. Fredrikson et al. [5] first designed the model inversion attack; they placed the attacker at the central parameter server of a federated learning model, and used the maximum a posteriori probability and the model's prediction confidence as the attack mechanism. Fredrikson et al. [6] later improved the model inversion attack: in order to apply it to non-linear target models, the target is regarded as a parameter of the input and the loss function is optimized through a reconstruction attack, so the improved inversion attack can effectively handle discrete data. Hitaj et al. [7] applied inversion attacks to attackers acting as federated learning participants, who reconstruct the input samples of other participants by using a generative adversarial network (GAN) to simulate data samples overlapping with the target distribution; the restoration of a given class of images is achieved on multilayer neural networks. This is an expansion and deepening of the attack proposed by Fredrikson et al. Ateniese et al. [12] made use of a property-based attack: assuming knowledge of the target model's internal computation process and training mode, they infer specific properties of the machine learning participants' sample data and of some of the original model data, named this an attribute (property) inference attack, and demonstrated its effectiveness on an accent-identification task over a speech dataset. Their attribute inference attack was extended by Ganju et al. [13], who applied it to fully connected networks. Blanchard et al. [15] took malicious participants [16] as the attackers, with a central parameter server using a linear aggregation method as the attack target, and found that transmitting messages fabricated from the local model, or even arbitrary data, to the target can interfere with the model's training. Experiments show that when a distributed learning model with many participants contains hidden or malicious participants that are prone to leaking information, even less capable attackers can hijack the target model and implement Byzantine attacks. **Defense strategy.** Dwork et al. [19] first put forward the concept and techniques of differential privacy as a defense technology.
They used noise to perturb the output of the model, which blurs the attack signal for attackers targeting the model output, makes it impossible for attackers to distinguish between multiple candidate target datasets, and makes it difficult to judge the membership of data points correctly. Shokri et al. [20] applied differential privacy protection to distributed machine learning trained with the stochastic gradient descent (SGD) optimization method; however, their protection method needs to add Laplace noise satisfying \(\varepsilon\)-differential privacy to randomly selected gradients exceeding a set threshold, and a strict privacy budget controls each privacy expenditure until the budget is exhausted and data access is shut off, so the method is difficult to use in practice. In summary, machine learning carries hidden privacy and information security risks in both model construction and input application, and research on federated learning, distributed learning and other environments has accelerated the rapid development of machine learning technology. We use a white-box inference attack model as a means of comparing the training characteristics of centralized machine learning models and federated learning models, of exploring the corresponding risks, of refining the study of machine learning privacy risks, and of supplementing the attack and defense strategies in the field of centralized machine learning; finally, we discuss the broad possibilities for future applications of machine learning models in various scenarios. ### 1.3 Study content and chapter arrangement Building on the study of deep learning privacy risks by Nasr et al. [25], this paper uses white-box membership inference attack simulation experiments to explore, in the deep learning environment and under both the centralized machine learning and the federated learning settings, the level of information leakage and the model factors that affect it, and to compare the characteristics of the two learning mechanisms. We perform membership inference attacks on both centralized machine learning and federated learning models, using the stochastic gradient descent optimizer and taking the model gradient vectors as an important attack signal; the final simulation results show the membership inference accuracy of the white-box attack model on the two target models, i.e. the extent to which each learning model leaks its own training data. Our evaluation directions include: supervised attacks and unsupervised attacks, inference attacks on a model and its newer versions, passive membership inference attacks, and active membership inference attacks. The experimental results evaluate the model performance based on the membership inference attack accuracy score and the true/false positive (ROC) curve. The first part of this paper introduces privacy in machine learning and the existing attack and defense strategies for machine learning. The second part, related knowledge, lists the problems discussed, the background of the simulation experiments, and the functional basis and its definitions. The third part, models and algorithms, shows the workflow and fundamentals of the white-box membership inference attack model used in this paper. The fourth part, performance evaluation of the white-box inference attack, introduces the experimental setup and procedure, the evaluation metrics for attack performance, and the attack simulation results for the centralized machine learning model and the federated learning model.
## 2 Related knowledge ### Membership inference attack The degree of privacy leakage of a model can be defined as the extent to which an attacking party can obtain one or more pieces of private data through the model: for the attacker this is a gain in utility, while for the data owner it reflects a privacy loss. This paper uses white-box membership inference attacks to quantify this privacy leakage. Generally speaking, the purpose of a membership inference attack algorithm is to determine the identity of a particular data point (a member instance or a non-member instance) with respect to the target training set. In practice, attackers with different training premises use membership inference attacks to infer whether given data belong to the target model's dataset. Part of the data belonging to the target model's dataset is observed and used by the attackers to infer further relevant information about the target dataset. Therefore, under the membership inference attack mode, important private information in the training data is likely to be exposed through the degree of leakage of the target model. This paper uses membership inference attacks to obtain a more direct and valuable picture of machine learning model mechanisms and vulnerabilities; the results reflect the degree of information leakage and the privacy security of the model during the learning process. ### Shadow training techniques and white-box inference attacks **Shadow training technique.** When the attacker cannot obtain the internal algorithm, the attack features cannot be obtained directly, so the attacker can only start from the outputs that the target model's neural network layers keep producing for arbitrary inputs. In order to conduct membership inference attacks effectively in such situations, Shokri et al. [2] used shadow training techniques. **White-box membership inference attack.** In this setting, the attacker can observe not only the model output \(f(x;W)\), but also the intermediate computations and all parameters involved in the training process, including all hidden layer outputs \(h_{i}(x)\); an attacker with white-box access therefore strictly extends the attack surface available to an attacker with black-box access. ### Supervised attacks and unsupervised attacks Whether an attack is supervised or unsupervised depends on whether the attacker has prior knowledge, such as a part of the target dataset or samples co-distributed with the target samples. When the attacker has this knowledge, we build the attack model on the known data in a supervised way, i.e. we let the inference model directly learn the membership relation between attack data points and the target model's training set; an inference attack built in this supervised learning mode is a supervised attack. However, when the attacker has no knowledge of, or pre-training conditions on, the internal structure and sample distribution of the target model, we choose to build an unsupervised attack model that predicts more information about the target dataset from the low-level outputs of the target model and develops membership inference from there.
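As a concrete illustration of the shadow training idea described above, the outline below builds a labelled training set for a supervised attack model: shadow models are trained on auxiliary data assumed to follow the target's distribution, and each auxiliary record is labelled as a member or non-member of its shadow model. This is a simplified sketch under hypothetical names (`make_model`, `train_fn`, `aux_splits`), not the implementation used in this paper.

```python
import torch

def build_attack_dataset(make_model, train_fn, aux_splits):
    """Shadow training in the spirit of Shokri et al. [2].

    make_model : returns a fresh model with the target's architecture.
    train_fn   : trains a model on a dataset (stand-in for the usual training loop).
    aux_splits : list of (member_set, nonmember_set) pairs of auxiliary data,
                 assumed to be distributed like the target's training data.
    Returns (feature, membership_label) pairs for training the attack model.
    """
    records = []
    for member_set, nonmember_set in aux_splits:
        shadow = make_model()
        train_fn(shadow, member_set)          # the shadow model plays the role of the target
        shadow.eval()
        for dataset, label in ((member_set, 1), (nonmember_set, 0)):
            for x, y in dataset:
                with torch.no_grad():
                    probs = torch.softmax(shadow(x.unsqueeze(0)), dim=1).squeeze(0)
                # Black-box features: output statistics plus the true class;
                # the white-box attack of Section 3 additionally uses layer outputs and gradients.
                features = torch.cat([probs, torch.tensor([float(y)])])
                records.append((features, label))
    return records
```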
### Stochastic Gradient Descent (SGD) optimizer The stochastic gradient descent (SGD) algorithm is one of the deep learning optimizers and a relatively basic neural network optimization method. The algorithm repeatedly updates the model parameters \(W\) along the direction of gradient descent: by computing the gradient of the loss function and iterating over the weights and biases, it reduces the empirical expectation of the loss on the training set \(D\) and drives it towards zero, so that the model parameters increasingly capture the structure of the real data. Formula (1) states the objective optimized by the stochastic gradient descent (SGD) algorithm, where \(L\) is the loss function of the classification model \(f\). \[\min_{W}\mathbb{E}_{(x,y)\sim D}\left[L(f(x;W),y)\right] \tag{1}\] The stochastic gradient descent algorithm leaves traces in the gradients of the loss with respect to the parameters for each trained sample, which is the basis of the inference attack: the white-box inference attack model exploits the SGD optimization algorithm, and these per-sample traces make the gradient vector over all parameters of the attacked target easy to observe, which becomes our main attack signal. ### Passive attack and active attack Whether a white-box membership inference attack is passive or active depends on whether the attacker passively receives and observes the updated parameter gradients of the model or actively influences the training process of the target model. **Active attack (global attacker and local attacker).** In the active attack mode, the attacker participates in the training process of the target model and obtains the corresponding membership information about the training set by actively influencing its training parameters, on which basis the inference attack is implemented. Due to the structural characteristics of federated learning, active attacks are often applied to it. The central parameter server distributes the parameters before the start of each round of training, then collects the local model parameters uploaded by each participant and aggregates them to update the global model, so every training stage can contribute to the attacker's attack. The active attacker can apply a gradient-ascent update of the form (2) on the records \(x\) whose membership it wants to test. \[W\gets W+\gamma\frac{\partial L_{x}}{\partial W} \tag{2}\] ### Centralized learning In this setting, all data are gathered and trained at the central parameter server; they include public general data as well as some private data. The attack model is able to observe the complete, independently trained learning model and the training results of each output. In the experimental study of this paper, training with a new dataset \(d\) yields model updates; these fine-tuned models are often shaped by the effects of the private data. In the following training sessions, the attacker can observe the fine-tuning of the new independent model \(f\) and its training results and perform membership inference on the new dataset \(d\); besides, the attacker also mounts inference attacks on the two versions of the model, before and after the fine-tuning update, in order to obtain more membership information about the private data and recover the important private information involved. Figure 1: Centralized machine learning. ### Federated Learning In order to achieve a certain accuracy and have stronger reference value, machine learning needs to cover a sufficiently large and high-dimensional set of training samples; but in practical applications, data collection not only requires high-density sample transmission, it also exacerbates the privacy crisis, because larger bodies of data usually contain more important information, and this undoubtedly increases the privacy risk.
Under the federated learning framework, each participant in the federated training has its own training set: at the beginning of a round, participants download the global parameters, train their local models, perform a local update, and upload it back to the central parameter server; the parameter server averages the updates of the \(N\) participants and, through this data aggregation and sharing, stores the latest parameter version of the global model. The content shared by the participants is limited to parameters rather than the data themselves. ## 3 Models and algorithms used ### White-box membership inference attack model For the two types of attackers, classified according to whether they have background knowledge or not, we trained the attack model to perform the attack in different ways. We carried out attack simulation experiments using the Alexnet model trained on the CIFAR-100 dataset as the input of the attack model, together with the following white-box membership inference attack model. Figure 2: Privacy risks of centralized machine learning. ### Principles and algorithms In the white-box inference attack model we use, the label components are built on fully connected networks: they aggregate all the local weights received from the previous layer, form the inputs to non-linear functions, produce the output values of each convolutional-layer module, and help the model converge. The white-box inference attack model then recombines the outputs of each submodule component and combines the outputs of all feature-extraction components through the encoder component; the encoder output constitutes the attack output of the model. The output is divided into "members" and "non-members", and we take the accuracy of the model on unknown data points (predicted as members or non-members) as our basis for judging the attack performance of the model. ## 4 Experiments of the white-box membership inference attack ### Experimental setup The experiments were run on a computer with an Intel Core i7-8650U CPU and 16.0 GB of memory, and the experimental language is Python. The experiments use PyTorch to implement the neural networks; its ease of model definition helps us build relatively small projects easily and quickly. **Attack Model.** The ReLU activation function is defined in formula (6), where \(y_{i}\) is the model output. As a non-saturating activation function, its one-sided suppression of the output alleviates the "vanishing gradient" problem to some extent and effectively improves the convergence speed of the model while realizing the non-linear activation transformation. Moreover, the ReLU activation function sparsifies the model well, and such sparsity facilitates fitting the data in our experiments. \[f(y_{i})=\begin{cases}y_{i},&y_{i}>0\\ 0,&y_{i}\leq 0\end{cases} \tag{6}\] **Performance evaluation indicators.** For the white-box model used in this paper, we apply a common set of performance evaluation indicators to the two target settings (centralized machine learning and the federated model) to form a more comprehensive evaluation system, which helps contrast the learning characteristics of the two learning models and differentiate their levels of membership information leakage. ### Analysis of the simulation experiment results Deep learning is trained here under two main settings. In this paper, we first present the analysis of the simulation results for centralized machine learning, and then provide the simulation results and analysis for the federated learning target model.
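Before turning to the results, the sketch below illustrates how the white-box attack features described above (hidden layer outputs, the loss, and the per-sample gradient of the loss with respect to the target's parameters) can be extracted with PyTorch forward hooks and autograd. It is an illustrative outline under our own naming (`target`, `criterion`), not the paper's exact feature pipeline.

```python
import torch
import torch.nn as nn

def whitebox_features(target: nn.Module, x: torch.Tensor, y: torch.Tensor,
                      criterion=nn.CrossEntropyLoss()):
    """Attack-model inputs for a single sample (x, y): hidden layer outputs,
    output probabilities, loss value, and the gradient of the loss with respect
    to the last linear layer (plus its norm)."""
    layer_outputs = []
    hooks = [m.register_forward_hook(lambda mod, inp, out: layer_outputs.append(out.detach()))
             for m in target.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]

    target.zero_grad()
    logits = target(x.unsqueeze(0))
    loss = criterion(logits, y.unsqueeze(0))
    loss.backward()                    # per-sample gradient, the trace left by SGD

    last_linear = [m for m in target.modules() if isinstance(m, nn.Linear)][-1]
    grad = last_linear.weight.grad.detach().clone()

    for h in hooks:
        h.remove()

    return {"layers": layer_outputs,                          # outputs of each hidden layer
            "probs": torch.softmax(logits, dim=1).detach(),
            "loss": loss.detach(),
            "grad": grad,
            "grad_norm": grad.norm()}                         # scalar summary of the gradient
```

Features computed this way for known member and non-member samples can be fed to the attack model of Section 3; in the unsupervised case, the same gradient statistics can instead be clustered, as described below.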
**Simulation results analysis of white-box membership inference attacks for centralized machine learning.** For the attack on the centralized machine learning model, this paper evaluates the white-box attack model along the following dimensions, based on the attack model's knowledge of the target's training mode, sample distribution, and attack and defense mechanisms, and starting from the two settings of supervised and unsupervised attacks. In supervised attacks, the trained attack model uses the outputs of different layers and the gradients of the classification model; in unsupervised attacks, we use shadow training techniques; and, in a special case, we simultaneously attack multiple updated versions of the target model. _Supervised attacks._ In the context of supervised attacks, we train the target and attack models using the CIFAR100 dataset. The white-box inference attack model therefore knows the subset and sample distribution of the target, and can start the membership inference attack from the gradients and the layer outputs of the target model. \begin{table} \begin{tabular}{l l} \hline \hline Parameter & Parameter name \\ f & Target model \\ h & Attack model \\ W & Target model parameters \\ D, D' & Member and non-member datasets \\ Pr & Precision-recall rate \\ \(\gamma\) & Adversarial update rate \\ \hline \hline \end{tabular} \end{table} Table 1: Parameter table. \begin{table} \begin{tabular}{l l} \hline \hline Output layer & Member prediction accuracy \\ The bottom third layer & 72.88\% \\ \hline \hline \end{tabular} \end{table} Table 2: Membership inference attack accuracy for the outputs of different layers of the target model. When the training of the target model reaches the last few layers, these layers contain a lot of training information from the previous layers, so the model stores more data information there and covers more parameters; this is one of the reasons why the output of the final layer of the target model leaks more membership information. Since deep neural networks contain parameters on a scale far beyond what the target model needs to generalize correctly, the parameter gradients of the target model show differences that attackers can easily distinguish. We used the CIFAR100 dataset to train two target models: a Densenet model with a parameter scale of 25.62M and a second model with a parameter scale of 1.7M. The experiment also computes the distribution of the gradient norms of members and non-members for the output of each class of the target CIFAR100-Alexnet centralized machine learning model. Since the gradient distributions of members and non-members of the CIFAR100-Densenet model differ more than those of the CIFAR100-Resnet model, the attack accuracy on the Densenet architecture is better from both the model-output and the model-gradient perspective. _Unsupervised attacks._ In order to separate member examples of the target model from non-member examples according to the model gradient values, we feed them to a spectral clustering algorithm, take one cluster's samples as "members", and preset the remaining samples as "non-members"; this assignment of member and non-member samples allows us to determine the membership inference accuracy of unsupervised attacks. _Fine-tuned models._ This attack scenario was chosen because we want to expand the attacked models from one to several versions in order to improve attack performance. In fact, model fine-tuning is often driven by important private data.
**Simulation results analysis of white-box membership inference attacks for federated learning.** _Passive attack._ The experiment takes the Alexnet model trained on the CIFAR100 dataset as the target model and places the attacker at the central parameter server. \begin{table} \begin{tabular}{l l l l} \hline \hline Dataset & Model architecture & Non-member in Dataset D & Non-member in Dataset D\(\Delta\) \\ CIFAR100 & Alexnet & 75.38\% & 71.36\% \\ CIFAR100 & DenseNet & 74.61\% & 71.50\% \\ \hline \hline \end{tabular} \end{table} Table 4: Attack accuracy when attacking fine-tuned models trained on the CIFAR100 dataset in centralized machine learning. \begin{table} \begin{tabular}{l l l l} \hline \hline Target model & & Attack accuracy & \\ Dataset & Model architecture & target outputs & target gradients \\ CIFAR100 & Alexnet & 74.61\% & 75.09\% \\ CIFAR100 & Resnet & 62.20\% & 64.34\% \\ CIFAR100 & Densenet & 67.72\% & 74.31\% \\ \hline \hline \end{tabular} \end{table} Table 3: Membership inference accuracy of the attack model against the three target models with different architectures trained on the CIFAR100 dataset, attacking the model outputs and the model gradients respectively. First, our attack starts from the training phase of the model. The attacks observe five non-contiguous training epochs of the target model, with epochs ranging from 5 to 300. _Active attack._ In the federated learning mode, the data aggregation performed by the central parameter server at each update negatively affects the inference accuracy of the attack model. Therefore, in order to obtain more information about the training data of the target participant, we use the same attack model but preset a global attacker that actively isolates the target participant and intervenes in its training process, hindering its parameter uploads and downloads. When carrying out the attack, the active global attacker deliberately withholds part of the parameters issued by the central parameter server, so that the target participant can neither share its updates with the central parameter server nor aggregate smoothly with the other participants. This strengthens the gradient accumulation of the local SGD algorithm exploited by the attack model, so that member and non-member examples inside the target's local model become easier to identify. \begin{table} \begin{tabular}{l l l l l} \hline \hline \multicolumn{2}{c}{Target model} & \multicolumn{2}{c}{Global attacker} & \multicolumn{1}{c}{Local attacker} \\ Dataset & Model architecture & Passive & Active & Passive \\ CIFAR100 & Alexnet & 84.98\% & 88.52\% & 72.88\% \\ CIFAR100 & Densenet & 77.43\% & 82.90\% & 71.98\% \\ \hline \hline \end{tabular} \end{table} Table 6: Attack accuracy of passive and active white-box membership inference against target models (four participants) trained on the CIFAR100 dataset, for attackers at different locations under the federated learning architecture. \begin{table} \begin{tabular}{l l} \hline \hline **Training stage** & **Attack accuracy** \\ 5 10 15 20 25 & 57.32\% \\ 10 20 30 40 50 & 76.47\% \\ 50 100 150 200 & 79.50\% \\ 100 150 200 250 300 & 84.89\% \\ \hline \hline \end{tabular} \end{table} Table 5: Attack accuracy of the passive global attacker when observing different training stages of the Alexnet federated learning model trained on the CIFAR100 dataset.
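The active global attack evaluated above combines two ingredients: the parameter-server attacker isolates the target participant from the aggregated update and pushes the parameters it sends to that participant up the loss gradient of the probe records, following Eq. (2). The sketch below is only a schematic of this round logic under hypothetical names (`local_updates`, `target_id`, `probe_batch`); it is not the exact protocol used in the paper.

```python
import torch
import torch.nn.functional as F

def aggregate(local_updates):
    """FedAvg-style aggregation: average the participants' parameter tensors."""
    return [torch.stack(params).mean(dim=0) for params in zip(*local_updates)]

def active_global_attacker_round(local_updates, target_id, target_model, probe_batch, gamma=1.0):
    """One round as seen by an active attacker sitting at the parameter server.

    Honest participants receive the aggregated global parameters, while the
    isolated target participant receives parameters moved *up* the loss gradient
    of the probe records (Eq. 2), so that member examples leave a stronger,
    more distinguishable trace in its next local update.
    """
    x, y = probe_batch
    target_model.zero_grad()
    F.cross_entropy(target_model(x), y).backward()
    ascended = [p.detach() + gamma * p.grad for p in target_model.parameters()]

    new_global = aggregate(local_updates)                  # what honest participants get
    payload = {pid: new_global for pid in range(len(local_updates))}
    payload[target_id] = ascended                          # adversarial copy for the target
    return payload
```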
## 5 Conclusion To compare the leakage of training information and the related privacy vulnerabilities arising from the training characteristics of the member samples inside centralized machine learning and federated learning models, we use differently pre-trained attack models, distinguished by their access to and knowledge of the target model's internal functions, training patterns, learning methods, sample distribution, and training set information. The experimental results of our white-box membership inference evaluation show that, under this attack model, a centralized machine learning model in the late stages of training exhibits greater membership information leakage in all respects; the two attack signals, neural network layer outputs and gradients, give similar attack results; and, for federated learning, the membership inference accuracy of a passive attacker at the global model is significantly higher than that of a local attacker among the participants. Federated machine learning has broken the deadlock of centralized data control in traditional centralized machine learning and provides new ideas for the privacy and security of personal local data: through the bridge of data communication and sharing, it improves the efficiency, accuracy, and reference value of machine learning models. In the era of the Internet of everything, machine learning technology has inspired wide attention and innovation, and research on attacks and defenses has spawned multi-dimensional applications of attack strategies [28]. Yet, against the powerful attack capability of white-box membership inference, the adversarial defense strategies available today [28] remain largely helpless. This means that information security in personally sensitive areas is still an urgent issue for users of the related technology products, since leakage can even lead to serious property losses, which undoubtedly poses great obstacles to the use of machine learning in daily life. The study in this paper of white-box membership inference against sensitive sectors provides important insights and discussion for improving information security and the protection of users' private information, lays a solid foundation for the widespread use and healthy development of machine learning in everyday life in the big data age, and, as the 5G era accelerates, can help popularize machine learning technology in all aspects of social life, so that the general public can enjoy its convenience with fewer worries. In addition, more vulnerabilities and learning features of machine learning are waiting for us to explore and study further.
2307.16564
The Decimation Scheme for Symmetric Matrix Factorization
Matrix factorization is an inference problem that has acquired importance due to its vast range of applications that go from dictionary learning to recommendation systems and machine learning with deep networks. The study of its fundamental statistical limits represents a true challenge, and despite a decade-long history of efforts in the community, there is still no closed formula able to describe its optimal performances in the case where the rank of the matrix scales linearly with its size. In the present paper, we study this extensive rank problem, extending the alternative 'decimation' procedure that we recently introduced, and carry out a thorough study of its performance. Decimation aims at recovering one column/line of the factors at a time, by mapping the problem into a sequence of neural network models of associative memory at a tunable temperature. Though being sub-optimal, decimation has the advantage of being theoretically analyzable. We extend its scope and analysis to two families of matrices. For a large class of compactly supported priors, we show that the replica symmetric free entropy of the neural network models takes a universal form in the low temperature limit. For sparse Ising prior, we show that the storage capacity of the neural network models diverges as sparsity in the patterns increases, and we introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization, with no need of an informative initialization.
Francesco Camilli, Marc Mézard
2023-07-31T10:53:45Z
http://arxiv.org/abs/2307.16564v1
# The Decimation Scheme for Symmetric Matrix Factorization ###### Abstract Matrix factorization is an inference problem that has acquired importance due to its vast range of applications that go from dictionary learning to recommendation systems and machine learning with deep networks. The study of its fundamental statistical limits represents a true challenge, and despite a decade-long history of efforts in the community, there is still no closed formula able to describe its optimal performances in the case where the rank of the matrix scales linearly with its size. In the present paper, we study this extensive rank problem, extending the alternative 'decimation' procedure that we recently introduced, and carry out a thorough study of its performance. Decimation aims at recovering one column/line of the factors at a time, by mapping the problem into a sequence of neural network models of associative memory at a tunable temperature. Though being sub-optimal, decimation has the advantage of being theoretically analyzable. We extend its scope and analysis to two families of matrices. For a large class of compactly supported priors, we show that the replica symmetric free entropy of the neural network models takes a universal form in the low temperature limit. For sparse Ising prior, we show that the storage capacity of the neural network models diverges as sparsity in the patterns increases, and we introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization, with no need of an informative initialization. ###### Contents * 1 Introduction * 2 Decimation * 2.1 An assumption on retrieval accuracy * 3 Decimation free entropies * 3.1 Fixed point equations * 3.2 Remarks * 4 Low temperature limits * 4.1 Sparse prior * 4.2 Continuous priors * 5 Phase diagrams for the first decimation step Numerical tests * 6.1 Testing the saddle point equations with AMP * 6.2 Expected decimation performance * 6.3 A ground state oracle for sparse Ising priors * 6.4 Reversed decimation * 7 Related works * 7.1 Unlearning and dreaming * 7.2 Sub-linear rank * 7.3 Channel universality properties * 8 Conclusion and outlooks ## 1 Introduction The factorization of a matrix into two, or more, factors represents a building block for many machine learning and inference problems. A well-known instance of it is _dictionary learning_[1, 2, 3, 4], which aims at representing a matrix as a product of two factor matrices, where the first, called _dictionary_, is very sparse, and the second, called _feature matrix_, has columns that form an over-complete basis of a euclidean space. As a result, each vector stored in the initial matrix is represented as a linear combination of few elements of the feature matrix. Matrix factorization is also at the basis of recommendation systems [5], and in general proves to be very effective whenever we want to reconstruct missing elements in a matrix of data, be it an image, a correlation matrix, or a matrix of preferences [6, 7, 8]. Other applications of matrix factorization include, but are not limited to, sparse principal component analysis [9], blind source separation [10], matrix completion [11, 12], robust principal component analysis [13] In more specific terms, matrix factorization is the problem of reconstructing the two factors \(\mathbf{A}\), \(\mathbf{B}\) of a matrix \(\mathbf{AB}\) from a potentially noisy observation of the latter, say \(\mathbf{Y}\). 
One would like to answer two main questions: _(i)_ in what regimes of sizes of \(\mathbf{A}\), \(\mathbf{B}\) and noise is it possible to reconstruct the two factors (up to a permutation of the lines of \(\mathbf{A}\) and the columns of \(\mathbf{B}\))? _(ii)_ Do there exist efficient algorithms that achieve a good performance? In the present paper we focus on symmetric matrix factorization in which the two factors to retrieve are identical. Consider an \(N\times P\) matrix \((\xi_{i}^{\mu})_{i\leq N}^{\mu\leq P}=\boldsymbol{\xi}\in\mathbb{R}^{N\times P}\) whose elements are independently and identically distributed according to a given prior probability \(P_{\xi}\), that we suppose to be symmetric, with unit variance and compact support: \(\mathbb{E}\xi=0\), \(\mathbb{E}\xi^{2}=1\), \(|\xi|\leq C\) for some \(C>0\). Secondly, let \((Z_{ij})_{i,j\leq N}=(Z_{ji})_{i,j\leq N}=\mathbf{Z}\) be a Wigner matrix, that is \(Z_{ij}=Z_{ji}\stackrel{{ iid}}{{\sim}}\mathcal{N}(0,1+\delta_{ij})\). Symmetric matrix factorization can thus be formulated as an inference problem: a Statistician needs to recover \(\boldsymbol{\xi}\) given the noisy observations \[\mathbf{Y}=\frac{\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}}{\sqrt{N}}+ \sqrt{\Delta}\mathbf{Z}\,. \tag{1}\] The strength of the noise \(\mathbf{Z}\) w.r.t. that of the signal is tuned by \(\Delta\geq 0\). In the following we will need to single out the \(P\) column vectors inside \(\boldsymbol{\xi}\), denoted by \(\boldsymbol{\xi}^{\mu}\), and we shall refer to them as _patterns_. Despite the model is presented here in a stylized way, i.e. with the two factors being identical and with completely factorized prior, we believe this setting represents a fundamental first step in the understanding of the general problem. Concerning in particular the assumption of a factorized prior, this is often used also in concrete situations. Indeed, for instance, the \(L^{2}\) norm regulators appearing in the empirical risk used to train neural networks are inherited from a zero temperature limit of a Statistical Mechanics problem that has the empirical risk as a Hamiltonian with factorized prior on the weights of the network, as clarified by [14]. A very popular setting to tackle an inference problem is the Bayes-optimal one, in which the Statistician tasked with the reconstruction of \(\boldsymbol{\xi}\) knows the generating process of the observations \(\mathbf{Y}\), namely they know that \(\mathbf{Z}\) is Gaussian, they know \(N,P,\Delta\) and the probability distribution of factors \(P_{\xi}\). This Bayes-optimal setting is of utmost relevance as it provides the information-theoretic optimal performance. Indeed, the posterior mean estimator \(\mathbb{E}[\mathbf{XX^{\intercal}}|\mathbf{Y}]\), where \[dP(\boldsymbol{\xi}=\mathbf{X}\mid\mathbf{Y})=\frac{1}{\mathcal{Z}(\mathbf{Y})} \prod_{i\leq N,\mu\leq P}dP_{\xi}(X_{i}^{\mu})\exp\left[\frac{1}{2\sqrt{N} \Delta}\mathrm{Tr}\mathbf{Y}\mathbf{XX^{\intercal}}-\frac{1}{4\Delta N} \mathrm{Tr}(\mathbf{XX^{\intercal}})^{2}\right], \tag{2}\] is the one that minimizes the mean square error loss on the reconstruction of \(\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}\). The normalization of the distribution \(\mathcal{Z}(\mathbf{Y})\) is called _partition function_ and the associated _free entropy_ is defined as \[\Phi_{N,P}=\frac{1}{NP}\mathbb{E}\log\mathcal{Z}(\mathbf{Y})\,. \tag{3}\] The free entropy has a central role. 
In fact, from the thermodynamic point of view, it can be used to identify what macrostates dominate probability and are thus selected at thermodynamic equilibrium. These macrostates are usually identified by the values of some global order parameters, such as \(\mathrm{Tr}\mathbf{XX^{\intercal}}\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}/N^{2}\), which measures the average alignment of a sample from the posterior and the ground truth \(\boldsymbol{\xi}\) we want to estimate. On the other hand, the free entropy is in close relationship with the _mutual information_ \(I(\boldsymbol{\xi};\mathbf{Y})\) between the data and the ground truth. This information theoretic quantity quantifies the amount of residual information about the ground truth that is still available in the data after they have been corrupted by the noise. If the rank \(P\) is finite, the model (1) is typically referred to as _spiked Wigner model_, first introduced as a model for Principal Component Analysis (PCA) [15]. The spectral properties of low rank perturbations of high-rank matrices (such as the Wigner matrix \(\mathbf{Z}\)) are by now largely understood in random matrix theory, and they can give rise to the celebrated BBP transition [16], further studied and extended in [17, 18, 19, 20, 21, 22, 23, 24]. Thanks to the effort of a wide interdisciplinary community, we also have control over the asymptotic behaviour of the posterior measure (2) and an exact formula for the free entropy associated to the low-rank problem [25, 26, 27, 28, 29, 30, 31, 32] (recently extended to rotational invariant noise [33]), which yields the Bayes-optimal limit of the noise allowing the reconstruction of the low-rank spike. Finally, a particular class of algorithms, known as _Approximate Message Passing_ (AMP) [34, 35, 36, 37, 38], is able to perform factorization up to this Bayes-optimal limit. Here we are interested in the extensive rank regime where \(P,N\to\infty\) with fixed ratio \(P/N=\alpha\). In the hypothesis of a rotationally invariant noise \(\mathbf{Z}\), the spectral properties of \(\mathbf{Y}\) are governed by the free-convolution [39] of the spectral densities of \(\mathbf{Z}\) and \(\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}\). On the information theoretic side instead, there still is no accepted closed formula that expresses \(\Phi_{N,P}\). Hence, the information theoretic limits are currently out of reach, and the Minimum Mean Square Error (MMSE) for this estimation problem is not known. Among the past attempts, we must mention the line of works [40, 41, 42, 43, 44], whose proposed solution, as pointed out in [45, 46], provides only an approximation of the correct limit. In fact, the authors of [46] build a perturbative approach that highlights the presence of relevant correlations neglected in the previous works. A further attempt to produce a closed replica formula was put forward in [47], but, like [40], it involves uncontrolled approximations. The main obstacle in the computation of the asymptotics of (3) is the fact that it is a matrix model, and, in particular, the term \(\mathrm{Tr}(\mathbf{XX^{\intercal}})^{2}\) couples both the "rank, or patterns indices" \(\mu\), and the "dimension, or particle site indices" \(i\). We will use here a different approach that we introduced and studied recently [48] in the simplest case where the factors' elements \(\xi_{i}^{\mu}\) are independent binary variables.
Instead of the Bayes-optimal setting we use a simpler procedure, that we call _decimation_. At the cost of giving up on Bayes-optimality, decimation solves this problem and allows us to identify an iterative scheme to estimate pattern by pattern, giving an estimate of \(\boldsymbol{\xi}\) through a sequential estimation of its columns, and, more importantly, whose asymptotic performance turns out to be completely analyzable. In the case of binary patterns we could thus show that matrix factorization is possible in a part of the phase diagram where \(\alpha\) and \(\Delta\) are small enough. Here we generalize this approach to arbitrary distributions of the patterns' elements. Organization of the paper and main contributionsIn Section 2 we define the decimation scheme, laying the ground for the replica computation of Section 3. In Section 4, we compute the low temperature limits for two classes of priors: sparse Ising and a generic absolutely continuous, symmetric and bounded support prior. Surprisingly, the free entropies of the neural network models arising from decimation evaluated at the equilibrium value of the order parameters have a universal form, but in general not the same numerical value. As we shall argue in the following, the starting point of the decimation procedure, i.e. the initial value of the parameters \(\alpha\) and \(\Delta\), is of crucial importance for its success. Therefore, in Section 5 we analyze the phase diagrams for the initial step of decimation. For the sparse Ising prior, we show that as sparsity increases, the storage capacity of the sequential neural network models of decimation diverges. For the class of continuous priors we highlight the presence of a thermodynamic transition, where there is a non-trivial overlap between a sample from the Gibbs measure and the sought pattern, and a performance transition, where Gibbs sampling can outperform the null-estimator. In Section 6 we provide numerical evidence in support of the replica theory. We introduce the Decimated AMP algorithm (DAMP), in order to verify the predictions of the replica theory, and we relate the replica symmetric order parameters to the mean square error on the reconstruction of the patterns, as well as to the matrix mean square error for matrix denoising, showing that decimation can outperform Rotational Invariant Estimators (RIEs) [49, 50, 51] in this task. Furthermore, this Section contains the pseudo-code of a ground state oracle, an algorithm that is indeed able to find all the patterns one by one, with no need of informative initialization, contrary to DAMP. Section 7 contains a comparison with recent relevant works that are related to the present one. Finally, Section 8 gathers the conclusions and future perspectives. ## 2 Decimation Let us give a closer look at the probability distribution (2). For the purpose of the theoretical analysis we can replace \(Y_{ij}\) with the r.h.s. 
of (1), getting \[dP(\boldsymbol{\xi}=\mathbf{X}\mid\mathbf{Y})=\frac{1}{\mathcal{Z}(\mathbf{Y })}\prod_{i\leq N,\mu\leq P}\left[dP_{\xi}(X_{i}^{\mu})\right]\mathrm{e}^{- \beta\left[\sum_{\mu}(E_{1}(\mathbf{X}^{\mu})+E_{2}(\mathbf{X}^{\mu})+E_{3}( \mathbf{X}^{\mu}))+\sum_{\mu<\nu}E_{4}(\mathbf{X}^{\mu},\mathbf{X}^{\nu})) \right]} \tag{4}\] where \(\beta=\frac{1}{\Delta}\), \(\mathbf{X}^{\mu}=(X_{i}^{\mu})_{i\leq N}\) and \[E_{1}(\mathbf{x}) =-\sum_{i,j=1}^{N}J_{ij}x_{i}x_{j}\ \ ;\ \ J_{ij}=\frac{1}{N}\sum_{\nu}\xi_{i}^{\nu}\xi_{j}^{\nu} \tag{5}\] \[E_{2}(\mathbf{x}) =-\sum_{i,j=1}^{N}\frac{\sqrt{\Delta}}{2\sqrt{N}}Z_{ij}x_{i}x_{j}\] (6) \[E_{3}(\mathbf{x}) =\frac{1}{4N}\Big{[}\sum_{i}x_{i}^{2}\Big{]}^{2}\] (7) \[E_{4}(\mathbf{x},\mathbf{x}^{\prime}) =\frac{1}{2N}\Big{[}\sum_{i}x_{i}x_{i}^{\prime}\Big{]}^{2}\,. \tag{8}\] Here one should be careful not to confuse \(\xi_{i}^{\mu}\) which is the 'ground-truth' matrix from which the signal \(\mathbf{Y}\) was generated, and \(X_{i}^{\mu}\) which is a random variable distributed according to the measure \(dP(\boldsymbol{\xi}=\mathbf{X}\mid\mathbf{Y})\), so that the expectation value of \(X_{i}^{\mu}\) gives the best possible approximation to \(\xi_{i}^{\mu}\). Looking at the above decomposition, we notice that, if we could drop the term \(E_{4}(\mathbf{X}^{\mu},\mathbf{X}^{\nu})\), we would have a system of \(P\) decoupled problems, one for each value of \(\mu\), described by an energy \(E_{1}(\mathbf{X}^{\mu})+E_{2}(\mathbf{X}^{\mu})+E_{3}(\mathbf{X}^{\mu})\). The energy \(E_{1}\) is that of a spin glass with \(N\) variables \(x_{i}\), each with an a-priori measure \(P_{\xi}(x_{i})\), interacting by pairs through a matrix of couplings \(J_{ij}\) which has a Hebbian form determined by the ground-truth patterns \(\boldsymbol{\xi}\). The energy \(E_{2}\) is a random spin glass term created by measurement noise. The energy \(E_{3}\) is a global penalty that ensures that the norm of \(\mathbf{X}\) does not get too large; one can also incorporate it into the local measure using a Lagrange multiplier. Altogether, the system described by \(E_{1}+E_{2}+E_{3}\) is a spin glass Hamiltonian with an interaction which is a noisy version of a Hebbian interaction. This is typical of problems that have been studied as neural networks for associative memory, following the seminal work by Hopfield [52]. The present one is a generalization of the Hopfield model, where the stored patterns components \(\xi_{i}^{\mu}\) are no longer binary but have a more general distribution which can be continuous. Based on our knowledge of associative memories, one can expect that, when the noise strength \(\Delta\) and the number of patterns per variable \(\alpha=P/N\) are small enough, there can exist a'retrieval' phase, in which the configurations \({\bf x}\) that minimize \(E_{1}({\bf x})+E_{2}({\bf x})+E_{3}({\bf x})\) are close to the stored patterns \(\xi_{i}^{\mu}\). This is certainly the case for binary patterns as shown in [48]. Assuming that such a retrieval phase exists, one can understand the use of the fourth energy term, \(E_{4}\). In fact one can interpret (2) as follows: we start from \(P\) replicas of an associative memory each with energy \(E_{1}({\bf X}^{\mu})+E_{2}({\bf X}^{\mu})+E_{3}({\bf X}^{\mu})\). These copies interact by pairs through the term \(E_{4}({\bf X}^{\mu},{\bf X}^{\nu})\) which is a repulsive term. 
If one works in the retrieval phase of the associative memory, then at low temperature the ground state will be found when each replica \({\bf X}^{\mu}\) is close to one of the patterns \(\mathbf{\xi}^{\pi(\mu)}\). As there are \(P\) retrieval states and \(P\) replicas, all the \(\pi(\mu)\) must be distinct from one another, and therefore \(\pi\) is a permutation. In such a scenario, one would have found a phase where the factors can be reconstructed. Decimation is based precisely on this idea. It works as a sequence of \(P\) estimations, each one studying a probability distribution which is that of a neural network model of associative memory. More precisely, one looks for one column \(\mathbf{\xi}^{\mu}\) of \(\xi\) at a time. To fix ideas, let us start by discussing the search of a first pattern, using a Gibbs measure in the form \[dP({\bf x}\mid{\bf Y})=\frac{dP_{\xi}({\bf x})}{{\cal Z}_{0}({\bf Y})}\exp \left(\beta\Big{[}\frac{1}{2N}\sum_{\mu=1}^{P}\Big{(}\sum_{i=1}^{N}\xi_{i}^{ \mu}x_{i}\Big{)}^{2}+\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z_{ij}x_{ i}x_{j}-\frac{\|{\bf x}\|^{4}}{4N}\Big{]}\right). \tag{9}\] Here we have introduced a factor \(\beta\) that plays the role of an inverse absolute temperature for this Boltzmann-Gibbs measure. We could use \(\beta=1/\Delta\) as in the Bayes-optimal approach, but as we shall see taking the large \(\beta\) limit can also be a good choice. When using this approach with variables \(x_{i}\) that are not constrained on the hypercube \(\{-1,1\}^{N}\) or in general on a sphere, it is also useful to introduce another term in the exponential that favours \({\bf x}\)-configurations with square norm equal to \(N\), as we know that the original signal is centered and with unit variance. Hence, the Boltzmann-Gibbs measure that we use to find a first pattern is actually \(dP_{\xi}({\bf x})e^{-\beta E({\bf x}\mid{\bf Y})}/{\cal Z}_{0}\) with an energy function \[-E({\bf x}|{\bf Y})=\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z_{ij}x_{i} x_{j}+\frac{N}{2}\sum_{\mu=1}^{P}(m^{\mu}({\bf x}))^{2}-\frac{\|{\bf x}\|^{4}}{4N}- \frac{\lambda}{4N}(\|{\bf x}\|^{2}-N)^{2} \tag{10}\] where we have introduced the _Mattis magnetization_ \[m^{\mu}({\bf x})=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}x_{i}\,. \tag{11}\] \(\lambda\) is a parameter penalizing (if positive) configurations with \(\|{\bf x}\|^{2}\neq N\), as mentioned before. If \(\lambda\to+\infty\) then the spins are constrained on a sphere. Let us now assume that we are able to sample a configuration \(\mathbf{\eta}^{P}\) from the Boltzmann-Gibbs measure with energy (10) that, without loss of generality (we shall relabel the patterns in such a way that the permutation \(\pi\) is the identity), we take as an estimate of \(\mathbf{\xi}^{P}\). How do we find the estimate of the other \(\mathbf{\xi}^{\mu}\), \(\mu<P\)? If \(\mathbf{\eta}^{P}\) is a good estimate of \(\mathbf{\xi}^{P}\), the corresponding rank one contribution \(\mathbf{\eta}^{P}\mathbf{\eta}^{P\intercal}\) should be close (in Frobenius norm) to \(\mathbf{\xi}^{P}\mathbf{\xi}^{P\intercal}\). Then, if we subtract it from the Hebbian coupling \(E_{1}(X)\), we can hope that the ground state of the new associative memory problem will now have only \(P-1\) ground states, each close to one of the patterns \(\mathbf{\xi}^{\mu}\), \(\mu=1,...,P-1\). 
This new associative memory problem therefore has \(P-1\) stored patterns instead of \(P\) so that the well known phenomenon of _pattern interference_[53, 54], which limits the storage capacity, will be reduced. Based on this intuition, we define the decimation procedure as follows: after having found the first estimate of a pattern, we modify the coupling matrix as \[\mathbf{Y}_{1}=\mathbf{Y}-\frac{\boldsymbol{\eta}^{P}\boldsymbol{\eta}^{P\intercal }}{\sqrt{N}}\,, \tag{12}\] which gives a modified energy function \[-E(\mathbf{x}|\mathbf{Y}_{1})=\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z _{ij}x_{i}x_{j}+\frac{N}{2}\sum_{\mu=1}^{P}(m^{\mu}(\mathbf{x}))^{2}-\frac{N}{ 2}(p^{P}(\mathbf{x}))^{2}-\frac{\|\mathbf{x}\|^{4}}{4N}-\frac{\lambda}{4N}(\| \mathbf{x}\|^{2}-N)^{2} \tag{13}\] where, here and in the following \[p^{\mu}(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}\eta_{i}^{\mu}x_{i}\,. \tag{14}\] The same reasoning as above applies to this second step. In general, if the first \(R\) (\(=0,1,2,\ldots,P-1\)) patterns have already been estimated, the decimation assumes to produce the estimate of the \(R+1\)-th pattern sampling from the Boltzmann Gibbs measure \[d\mu_{R}(\mathbf{x})=\frac{dP_{\xi}(\mathbf{x})}{\mathcal{Z}_{R}}\exp\big{(}- \beta E(\mathbf{x}|\mathbf{Y}_{R})\big{)} \tag{15}\] where \[\mathbf{Y}_{R}=\mathbf{Y}-\sum_{\mu=P-R+1}^{P}\frac{\boldsymbol{\eta}^{\mu} \boldsymbol{\eta}^{\mu\intercal}}{\sqrt{N}} \tag{16}\] and \[-E(\mathbf{x}|\mathbf{Y}_{R})=\frac{\sqrt{\Delta}}{2\sqrt{N}}\sum_{i,j=1}^{N}Z _{ij}x_{i}x_{j}+\frac{N}{2}\sum_{\mu=1}^{P}(m^{\mu}(\mathbf{x}))^{2}-\frac{N}{ 2}\sum_{\mu=P-R+1}^{P}(p^{\mu}(\mathbf{x}))^{2}-\frac{\|\mathbf{x}\|^{4}}{4N} -\frac{\lambda}{4N}(\|\mathbf{x}\|^{2}-N)^{2}\,. \tag{17}\] The energy function above has some desirable features. First, the summation of the squared Mattis' magnetizations attracts mass of the distribution towards those configurations that are most aligned with one of the columns of \(\boldsymbol{\xi}\), which are our goal. Secondly, if the \(R\) estimates \(\boldsymbol{\eta}^{\mu}\), with \(\mu=P-R+1,\ldots P\) are reliable, in a sense we shall specify later, the summation containing the squared \((p^{\mu}(\mathbf{x}))^{2}\) repels the mass of the probability distribution from those configurations that are similar to previously estimated patterns, preventing the sampling from finding a pattern more than once. We notice at this point that there are three noise sources in this procedure: 1. the original Wigner matrix \(\mathbf{Z}\); 2. pattern interference whose strength, as discussed above, is increasing with the ratio \(\alpha=P/N\); 3. the imperfect retrieval of patterns in the previous steps of decimation. (c) is maybe the least obvious one. At each step, we subtract a rank one contribution \(\boldsymbol{\eta}^{\mu}\boldsymbol{\eta}^{\mu\intercal}/\sqrt{N}\) that is not exactly \(\boldsymbol{\xi}^{\mu}\boldsymbol{\xi}^{\mu\intercal}/\sqrt{N}\). This introduces an additional form of noise that depends on the quality of the previous reconstructions. In order to monitor the strength of this third noise, we introduce the _retrieval accuracy_ of a pattern \(\boldsymbol{\xi}^{\mu}\): \[m^{\mu}=\frac{\boldsymbol{\xi}^{\mu}\cdot\boldsymbol{\eta}^{\mu}}{N}\,,\quad \mu=P-R+1,\ldots,P\,. \tag{18}\] These quantities turn out to be order parameters of the previous decimation steps. Indeed, they are nothing but Mattis' magnetizations of typical samples from (15) with a pattern. 
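To make the bookkeeping of the procedure concrete, here is a minimal Python sketch of the decimated observations (16), of the Mattis magnetizations (11) (and hence of the retrieval accuracies (18)), and of the energy (17). Note that the energy, like (17) itself, is written from the theoretician's viewpoint: it uses the ground truth \(\boldsymbol{\xi}\) and the noise \(\mathbf{Z}\), which are not available to the statistician; how the estimates \(\boldsymbol{\eta}^{\mu}\) are produced is deferred to Section 6. All names are illustrative.

```python
import numpy as np

def decimate(Y, etas):
    """Decimated observations of eq. (16):
    Y_R = Y - sum_mu eta^mu (eta^mu)^T / sqrt(N)."""
    N = Y.shape[0]
    Y_R = Y.copy()
    for eta in etas:                          # etas: list of length-N estimates
        Y_R -= np.outer(eta, eta) / np.sqrt(N)
    return Y_R

def mattis(xi_mu, x):
    """Mattis magnetization m^mu(x) of eq. (11); it is also the retrieval
    accuracy (18) when x is the estimate of the pattern xi^mu."""
    return xi_mu @ x / len(x)

def minus_energy(x, xi, etas, Z, Delta, lam):
    """-E(x | Y_R) of eq. (17); a theoretical quantity that uses the
    ground-truth patterns xi (N x P) and the noise matrix Z."""
    N = len(x)
    m2 = np.sum((xi.T @ x / N) ** 2)                 # sum_mu m^mu(x)^2
    p2 = sum((eta @ x / N) ** 2 for eta in etas)     # sum over already estimated patterns
    noise = np.sqrt(Delta) / (2 * np.sqrt(N)) * (x @ Z @ x)
    norm2 = x @ x
    return noise + N / 2 * (m2 - p2) - norm2 ** 2 / (4 * N) - lam / (4 * N) * (norm2 - N) ** 2
```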
Hence, each decimation step has its own free entropy and we will determine the new retrieval accuracy via consistency equations arising from the maximization of it, namely we look for those macrostates that dominate probability in the \(N\to\infty\) limit. In addition to \(m^{\mu}\) we will have other order parameters appearing. In particular, there will be one, denoted by \(r\), tuning the amplitude of the overall noise, that, according to the considerations above, must comprise the three contributions coming from sources (a), (b) and (c). ### An assumption on retrieval accuracy In order to carry out the computations we need some information on the statistics of the retrieved configurations \(\boldsymbol{\eta}^{\mu}\). We assume that an "oracle" algorithm will produce \(\boldsymbol{\eta}^{\mu}\) with an asymptotic measure given by \[\eta^{\mu}_{i}\,\sim\,\langle\cdot\rangle_{\xi^{\mu}_{i},Z}=\frac{\int dP_{ \xi}(x)e^{(Z\sqrt{r}+\beta m^{\mu}\xi^{\mu}_{i})x-\frac{r+u}{2}x^{2}}(\cdot)}{ \int dP_{\xi}(x)e^{(Z\sqrt{r}+\beta m^{\mu}\xi^{\mu}_{i})x-\frac{r+u}{2}x^{2}} }\,,\quad\xi^{\mu}_{i}\sim P_{\xi}\,,Z\sim\mathcal{N}(0,1)\text{ independent of other noises}\,, \tag{19}\] where \(m^{\mu}\), _i.e._ the retrieval accuracy for \(\boldsymbol{\eta}^{\mu}\), and \(\,r,\,u\) must be determined self-consistently. (19) amounts to requiring that, asymptotically, the sites are decoupled and they feel an effective external random magnetic field, that is Gaussian with a mean shifted by the ground truth \(\xi^{\mu}_{i}\). Define for later convenience the quantities \[\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[\eta^{\mu}_{i}]=m^{\mu}_{i}\,, \quad\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[(\eta^{\mu}_{i})^{2}]=v^{ \mu}_{i}\,. \tag{20}\] Then (19) has the following implications: \[\mathbb{E}_{\boldsymbol{\xi}}[\eta^{\mu}_{i}]=\mathbb{E}_{\boldsymbol{\xi}} \mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[\eta^{\mu}_{i}]=0\,,\quad \mathbb{E}_{\boldsymbol{\xi}}[\xi^{\mu}_{i}m^{\nu}_{i}]=m^{\mu}\delta_{\mu, \nu}\,,\quad\mathbb{E}_{\boldsymbol{\xi}}[v^{\mu}_{i}]=v^{\mu} \tag{21}\] that will be self-consistent with the fixed point equations for each decimation step. We shall see from the replica computation that this assumption holds inductively: if it is true at the \(R\)-th decimation step, then we are able to decouple the site indices also for the step \(R+1\), and the resulting spin-glass model has an effective random magnetic field of the same form. ## 3 Decimation free entropies In this section we compute the large \(N\) limit of the free entropy \[\Phi=\lim_{N\to\infty}\frac{1}{N}\mathbb{E}\log\int dP_{\xi}(\mathbf{x})\exp \left[-\beta E(\mathbf{x}|\mathbf{Y}_{R})\right]\,, \tag{22}\] where \(\mathbb{E}\) is taken w.r.t. all the disorder: \(\mathbf{Z},\boldsymbol{\xi},\boldsymbol{\eta}\), and recall that \(R\) is the number of patterns that were already estimated. This is done using the _replica method_[55]. We thus introduce \[\mathbb{E}\mathcal{Z}^{n}_{N}:=\mathbb{E}_{\mathbf{Z}}\mathbb{E}_{\boldsymbol{ \xi},\boldsymbol{\eta}}\int\prod_{a=1}^{n}dP_{\xi}(\mathbf{x}_{a})\exp\left[- \beta\sum_{a=1}^{n}E(\mathbf{x}_{a}|\mathbf{Y}_{\mathbf{R}})\right]\,. 
\tag{23}\] We decompose this computation and start with the first noise terms in (17), and the related \(\mathbb{E}_{\mathbf{Z}}\) average \[\mathbb{E}_{\mathbf{Z}}\exp\left(\frac{\beta\sqrt{\Delta}}{2 \sqrt{N}}\sum_{i,j=1}^{N}Z_{ij}\sum_{a=1}^{n}x_{a,i}x_{a,j}\right)=\exp\left( \frac{\beta^{2}\Delta}{4N}\sum_{i,j=1}^{N}\sum_{a,b=1}^{n}x_{a,i}x_{a,j}x_{b,i} x_{b,j}\right)=\\ =\exp\left(\frac{N\beta^{2}\Delta}{4}\sum_{a\neq b}^{n}Q^{2}( \mathbf{x}_{a},\mathbf{x}_{b})+\beta^{2}\Delta\frac{\|\mathbf{x}_{a}\|^{4}}{4 N}\right)\,. \tag{24}\] where \(Q({\bf x},{\bf x}^{\prime})=(1/N)\sum_{i}x_{i}x_{i}^{\prime}\). For future convenience, we introduce the "decimation time" \(t=R/P\), i.e. the fraction of patterns already estimated. Now we take care of the penalizing \(p\)-terms in (17). After replicating, their contribution to the partition function is \[A:=\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}e^{-\frac{N\beta}{2}(p^{\mu}({\bf x }_{a}))^{2}}=\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\int\frac{ds_{a}^{\mu}}{ \sqrt{2\pi}}e^{-\frac{(\kappa_{a}^{\mu})^{2}}{2}+i\sqrt{\frac{P}{N}}s_{a}^{\mu }\sum_{j=1}^{N}\eta_{j}^{\mu}x_{a,j}}\,. \tag{25}\] Notice that, thanks to the introduction of the auxiliary Gaussian variables \((s_{a}^{\mu})_{a\leq n,P(1-t)<\mu\leq P}\), the exponential is now decoupled over the particle indices \(j\). Consider then the expectation of \(A\) w.r.t. \(\eta\), given \(\xi\) with the assumptions (21): \[\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[A]=\prod_{\mu=P(1-t)+1}^{P} \prod_{a=1}^{n}\int\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\exp\left(-\frac{(s_{a}^{ \mu})^{2}}{2}+\sum_{i=1}^{N}\log\mathbb{E}_{\eta_{i}^{\mu}|\xi_{i}^{\mu}}e^{i \sqrt{\frac{P}{N}}\eta_{i}^{\mu}\sum_{i=1}^{n}s_{a}^{\mu}x_{a,i}}\right)\,. \tag{26}\] Now we can expand the exponential inside the log up to second order, the remaining terms will be of sub-leading order and thus neglected in the following: \[\mathbb{E}_{\boldsymbol{\eta}|\boldsymbol{\xi}}[A]=\prod_{\mu=P(1 -t)+1}^{P}\prod_{a=1}^{n}\int\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\exp\left(-\frac{ (s_{a}^{\mu})^{2}}{2}+\sum_{a=1}^{n}is_{a}^{\mu}\sqrt{\frac{\beta}{N}}\sum_{i= 1}^{N}m_{i}^{\mu}x_{a,i}-\frac{\beta}{2}\sum_{a,b=1}^{n}s_{a}^{\mu}s_{b}^{\mu} \sum_{i=1}^{N}\frac{(v_{i}^{\mu}-(m_{i}^{\mu})^{2})}{N}x_{a,i}x_{b,i}\right)\] \[=\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\int\frac{ds_{a}^{\mu}}{ \sqrt{2\pi}}\exp\left[-\frac{1}{2}\sum_{a,b=1}^{n}s_{a}^{\mu}s_{b}^{\mu}\left( \delta_{ab}+\beta\sum_{i=1}^{N}\frac{(v_{i}^{\mu}-(m_{i}^{\mu})^{2})}{N}x_{a,i }x_{b,i}\right)+\sum_{a=1}^{n}is_{a}^{\mu}\sqrt{\frac{\beta}{N}}\sum_{i=1}^{N }m_{i}^{\mu}x_{a,i}\right]\,. \tag{27}\] To continue, we assume condensation on a finite number of patterns, say the first \(k\). We focus now on the remaining ones, namely for \(\mu>k\): \[B:=\exp\left[\frac{\beta N}{2}\sum_{a=1}^{n}\sum_{\mu=k+1}^{P}(m^{\mu}({\bf x }_{a}))^{2}\right]=\int\prod_{\mu=k+1}^{P}\prod_{a=1}^{n}\frac{dz_{a}^{\mu}}{ \sqrt{2\pi}}\exp\left[-\sum_{a=1}^{n}\sum_{\mu=k+1}^{P}\frac{(z_{a}^{\mu})^{2} }{2}+\sqrt{\frac{\beta}{N}}\sum_{a=1}^{n}\sum_{\mu=k+1}^{P}z_{a}^{\mu}\sum_{i =1}^{N}x_{a,i}\xi_{i}^{\mu}\right]\,. 
\tag{28}\] Putting \(A\) and \(B\) together, their overall average over \((\boldsymbol{\xi}^{\mu})_{\mu>k}\) takes the form \[\mathbb{E}_{(\boldsymbol{\xi}^{\mu})_{\mu>k}}[AB]=\int\prod_{\mu=P( 1-t)+1}^{P}\prod_{a=1}^{n}\frac{ds_{a}^{\mu}}{\sqrt{2\pi}}\int\prod_{\mu=k+1 }^{P}\prod_{a=1}^{n}\frac{dz_{a}^{\mu}}{\sqrt{2\pi}}e^{-\sum_{a=1}^{n}\left( \sum_{\mu=P(1-t)+1}^{P}\frac{(z_{a}^{\mu})^{2}}{2}+\sum_{\mu=k+1}^{P}\frac{(z _{a}^{\mu})^{2}}{2}\right)}\] \[\exp\left[\sum_{i=1}^{N}\sum_{\mu=k+1}^{P}\log\mathbb{E}_{\xi_{i} ^{\mu}}e^{\sqrt{\frac{P}{N}}\sum_{a=1}^{n}x_{a,i}(\xi_{i}^{\mu}z_{a}^{\mu}+i \theta(\mu-P+R)m_{i}^{\mu}s_{a}^{\mu})-\theta(\mu-P+R)\sum_{a,b=1}^{n}s_{a}^{ \mu}s_{b}^{\mu}\frac{\beta(v_{i}^{\mu}-(m_{i}^{\mu})^{2})x_{a,i}x_{b,i}}{2N}} \right]\,, \tag{29}\] where \(\theta\) is Heaviside's step function. If we call \(\mathbb{E}_{\boldsymbol{\xi}}m_{i}^{\mu\,2}=:\bar{M}^{\mu\,2}\), a further expansion of the exponential yields: \[\mathbb{E}_{(\boldsymbol{\xi}^{\mu})_{\mu>k}}[AB]=\int\prod_{\mu=P(1 -t)+1}^{P}\prod_{a=1}^{n}\frac{ds_{a}^{\rho}}{\sqrt{2\pi}}\exp\left[-\frac{1}{2 }\sum_{\mu=P(1-t)+1}^{P}{\bf s}^{\mu}\cdot\left(\mathbb{1}+\beta(v_{\tau\mu}- \bar{M}^{\mu\,2})Q\right){\bf s}^{\mu}\right]\] \[\int\prod_{\mu=k+1}^{P}\prod_{a=1}^{n}\frac{dz_{a}^{\mu}}{\sqrt{2 \pi}}\exp\left\{-\sum_{\mu=k+1}^{P}\sum_{a=1}^{n}\frac{(z_{a}^{\mu})^{2}}{2}+ \frac{\beta}{2}\sum_{\mu=k+1}^{P}\sum_{a,b=1}^{n}z_{a}^{\mu}z_{b}^{\mu}Q({\bf x }_{a},{\bf x}_{b})+\right. \tag{30}\] \[\left.+i\beta\sum_{\mu=P(1-t)+1}^{P}\mathbb{E}_{\boldsymbol{\xi}} [\xi_{1}^{\mu}m_{1}^{\mu}]\sum_{a,b=1}^{n}z_{a}^{\mu}s_{b}^{\mu}Q({\bf x}_{a},{ \bf x}_{b})-\frac{\beta}{\Delta}\sum_{\mu=P(1-t)+1}^{P}\sum_{a,b=1}^{n}(\bar{M} ^{\mu})^{2}s_{a}^{\mu}s_{b}^{\mu}Q({\bf x}_{a},{\bf x}_{b})\right\}\] We can now perform a Gaussian integration over the variables \(\mathbf{z}^{\mu}=(z_{a}^{\mu})_{a\leq n}\): \[\begin{split}\mathbb{E}_{(\mathbf{\xi}^{\mu})_{\mu>k}}[AB]& =\int\prod_{\mu=P(1-t)+1}^{P}\prod_{a=1}^{n}\frac{ds_{a}^{\rho}}{ \sqrt{2\pi}}\exp\left[-\frac{1}{2}\sum_{\mu=P(1-t)+1}^{P}\mathbf{s}^{\mu}\cdot \left(\mathbbm{1}+\beta v^{\mu}Q+\beta^{2}Q\frac{\mathbb{E}_{\mathbf{\xi}}^{2}[ \xi_{1}^{\mu}m_{1}^{\mu}]}{\mathbbm{1}-\beta Q}Q\right)\mathbf{s}^{\mu}\right] \\ &\times\exp\left[-\frac{\alpha N}{2}\log\det\left(\mathbbm{1}- \beta Q\right)\right]\,.\end{split} \tag{31}\] Finally, after an integration over the remaining Gaussian variables \(\mathbf{s}^{\mu}\), and using (21), we get \[\begin{split}\mathbb{E}_{(\mathbf{\xi}^{\mu})_{\mu>k}}[AB]& =\exp\left[-\frac{\alpha(1-t)N}{2}\log\det\left(\mathbbm{1}-\beta Q \right)-\frac{1}{2}\sum_{\mu=P(1-t)+1}^{P}\log\det\left(\mathbbm{1}+\beta Q(v _{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta^{2}Q^{2}\right) \right],\end{split} \tag{32}\] where \(\tau^{\mu}=(1-(\mu-1)/P)\), and \(m_{\tau^{\mu}}=m^{\mu}\) are the previous retrieval accuracies. 
It remains to analyze the contribution given by \((\mathbf{\xi}^{\mu})_{\mu\leq k}\): \[\begin{split} C:=\exp\left[\frac{\beta N}{2}\sum_{a=1}^{n}\sum_{ \mu=1}^{k}(m^{\mu}(\mathbf{x}_{a}))^{2}\right]&=\int\prod_{a=1}^ {n}\prod_{\mu=1}^{k}dm_{a}^{\mu}\sqrt{\frac{\beta N}{2\pi}}\exp\left[\sum_{a= 1}^{n}\sum_{\mu=1}^{k}\left(-N\beta\frac{(m_{a}^{\mu})^{2}}{2}+\beta m_{a}^{ \mu}\sum_{i=1}^{N}\xi_{i}^{\mu}x_{a,i}\right)\right]\,.\end{split} \tag{33}\] Before plugging the contributions coming from \(A\), \(B\) and \(C\) into \(\mathbb{E}\mathcal{Z}_{N}^{n}\) we need to introduce a collection of Dirac deltas to fix the desired order parameters, that are organized in the overlap matrix \((Q(\mathbf{x}_{a},\mathbf{x}_{b}))_{a,b=1}^{n}\): \[1=\int\prod_{a\leq b\leq n}dq_{ab}\delta(Q(\mathbf{x}_{a},\mathbf{x}_{b})-q_{ ab})=\int\prod_{a\leq b\leq n}\frac{Ndr_{ab}dq_{ab}}{4\pi i}\exp\left[-\frac{1}{2} \sum_{a,b=1}^{n}r_{ab}(Nq_{ab}-\sum_{i}x_{a,i}x_{b,i})\right]\,. \tag{34}\] Hence, the averaged replicated partition function, at leading exponential order in \(N\), takes the form \[\begin{split}\mathbb{E}\mathcal{Z}_{N}^{n}&=\int \prod_{a\leq b\leq n}\frac{Ndr_{ab}dq_{ab}}{4\pi i}\int\prod_{a=1}^{n}\prod_{ \mu=1}^{k}dm_{a}^{\mu}\sqrt{\frac{N\beta}{2\pi}}\exp\left[-\frac{N}{2}\sum_{a,b}r_{ab}q_{ab}-\frac{\beta N}{2}\sum_{a=1}^{n}\sum_{\mu=1}^{k}(m_{a}^{\mu})^{ 2}\right]\\ &\times\exp\left[-\frac{1}{2}\sum_{\mu=P(1-t)+1}^{P}\log\det\left( \mathbbm{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta^ {2}Q^{2}\right)\right]\\ &\times\exp\left[-\frac{\alpha(1-t)N}{2}\log\det\left(\mathbbm{1}- \beta Q\right)+N\beta^{2}\Delta\sum_{a\neq b,1}^{n}\frac{q_{ab}^{2}}{4}+N\beta \sum_{a=1}^{n}\Big{(}-\frac{\lambda}{4}(1-q_{aa})^{2}+\frac{\beta\Delta-1}{4} q_{aa}^{2}\Big{)}\right]\\ &\times\left(\int\prod_{\mu=1}^{k}dP_{\xi}(\xi^{\mu})\prod_{a=1}^ {n}dP_{\xi}(x_{a})\exp\left[\frac{1}{2}\sum_{a,b=1}^{n}r_{ab}x_{a}x_{b}+\beta \sum_{\mu=1}^{k}\sum_{a=1}^{n}m_{a}^{\mu}\xi^{\mu}x_{a}\right]\right)^{N}\,, \end{split} \tag{35}\] where we denote \(Q=(q_{ab})_{a,b=1}^{n}\). We can finally express the replicated free entropy with a variational principle coming from a saddle point argument applied to the formula above: \[\Phi_{n}:=\lim_{N\to\infty}\Phi_{N,n}=\frac{1}{n}\text{Extr}\Big{\{}- \frac{1}{2}\sum_{a,b}r_{ab}q_{ab}-\frac{\beta}{2}\sum_{a=1}^{n}\sum_{\mu=1}^{k}( m_{a}^{\mu})^{2}-\frac{\alpha(1-t)N}{2}\log\det\left(\mathbb{1}-\beta Q\right)\] \[+\beta\sum_{a=1}^{n}\Big{(}\frac{\beta\Delta-1}{4}q_{aa}^{2}- \frac{\lambda}{4}(1-q_{aa})^{2}\Big{)}-\frac{\alpha t}{2R}\sum_{\mu=P(1-t)+1}^ {P}\!\log\det\left[\mathbb{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau ^{\mu}}^{2})\beta^{2}Q^{2}\right] \tag{36}\] \[+\beta^{2}\Delta\sum_{a\neq b,1}^{n}\frac{q_{ab}^{2}}{4}+\log\int \prod_{\mu=1}^{k}\mathbb{E}_{\xi^{\mu}}\int\prod_{a=1}^{n}dP_{\xi}(x_{a})\exp \left[\frac{1}{2}\sum_{a,b=1}^{n}r_{ab}x_{a}x_{b}+\beta\sum_{\mu=1}^{k}\sum_{ a=1}^{n}m_{a}^{\mu}\xi^{\mu}x_{a}\right]\Big{\}}\,.\] The normalized sum over \(\mu=P(1-t)+1,\ldots,P\) on the second line can be turned into an integral \(\int_{0}^{t}\,d\tau\dots\) in the large \(N\) limit. The extremization is taken w.r.t. the collection of parameters \((r_{ab},q_{ab})_{a,b=1}^{n}\), \((m_{a}^{\mu})_{a=1,\mu=1}^{n,k}\). 
Within the replica symmetric ansatz \[\begin{cases}r_{ab}=r\,,\quad a\neq b\\ r_{aa}=-u\end{cases}\quad\begin{cases}q_{ab}=q\,,\quad a\neq b\\ q_{aa}=v\end{cases}\quad m_{a}^{\mu}=m^{\mu}\,,\quad Q=\begin{pmatrix}v&q&q& \dots&q\\ q&v&\dots&q\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ q&q&q&\dots&v\end{pmatrix}\in\mathbb{R}^{n\times n}\,. \tag{37}\] The determinants of \(\mathbb{1}-\beta Q\) and \(\mathbb{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta ^{2}Q^{2}\) are easily computed: \[\det\left(\mathbb{1}-\beta Q\right)=\left(1-\beta(v-q)\right)^{ n}\left[1-n\frac{\beta q}{1-\beta(v-q)}\right] \tag{38}\] \[\det\left(\mathbb{1}+\beta Q(v_{\tau^{\mu}}-1)-(v_{\tau^{\mu}}-m _{\tau^{\mu}}^{2})\beta^{2}Q^{2}\right)=\left[1+\beta(v_{\tau^{\mu}}-1)(v-q)-( v_{\tau^{\mu}}-m_{\tau^{\mu}}^{2})\beta^{2}(v-q)^{2}\right]^{n-1}\] (39) \[\qquad\times\left[1+\beta(v_{\tau^{\mu}}-1)(v-q+nq)-(v_{\tau^{ \mu}}-m_{\tau^{\mu}}^{2})\beta^{2}\left(v-q+nq\right)^{2}\right]\,.\] Further simplifications occur for the other terms in the replicated free entropy. In particular the remaining log integral is: \[\int\prod_{\mu=1}^{k}\mathbb{E}_{\xi^{\mu}}\int\prod_{a=1}^{n}dP _{\xi}(x_{a})\exp\left[\frac{r}{2}\sum_{a\neq b,1}^{n}x_{a}x_{b}-\frac{u}{2} \sum_{a=1}^{n}x_{a}^{2}+\beta\sum_{\mu=1}^{k}m^{\mu}\xi^{\mu}\sum_{a=1}^{n}x_ {a}\right]=\\ =\mathbb{E}_{Z}\int\prod_{\mu=1}^{k}\mathbb{E}_{\xi^{\mu}}\prod _{a=1}^{n}\int dP_{\xi}(x_{a})\exp\left[\sqrt{r}Zx_{a}-\frac{u+r}{2}x_{a}^{2}+ \beta\sum_{\mu=1}^{k}m^{\mu}\xi^{\mu}x_{a}\right]=\\ =\mathbb{E}_{Z}\mathbb{E}_{\mathbf{\xi}}\left[\int dP_{\xi}(x)\exp \left(\left(Z\sqrt{r}+\beta\mathbf{m}\cdot\mathbf{\xi}\right)x-\frac{u+r}{2}x^{2} \right)\right]^{n} \tag{40}\] where \(Z\sim\mathcal{N}(0,1)\), \(\mathbf{\xi}=(\xi^{1},\dots,\xi^{k})\), \(\mathbf{m}=(m^{1},\dots,m^{k})\). Finally, expanding at first order in \(n\) one has: \[\Phi:=\text{Extr}\Big{\{}\frac{rq+uv}{2}-\beta\sum_{\mu=1}^{k} \frac{(m^{\mu})^{2}}{2}-\frac{\beta^{2}\Delta q^{2}}{4}-\frac{\alpha(1-t)}{2} \left[\log\left(1-\beta(v-q)\right)-\frac{\beta q}{1-\beta(v-q)}\right]\] \[-\frac{\alpha t}{2}\int_{0}^{t}d\tau\left[\log\left(1+\beta(v_{ \tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}\right)+\frac{\beta q(v _{\tau}-1)-2\beta^{2}q(v-q)(v_{\tau}-m_{\tau}^{2})}{1+\beta(v_{\tau}-1)(v-q)-( v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}}\right] \tag{41}\] The correct stationary parameters \(v,m,q,u,r\) will be those that maximize the free entropy. Hence it is clear that if \(\lambda\to\infty\) we recover the constraint \(v=1\). ### Fixed point equations Let us introduce the following notation: \[\langle\cdot\rangle_{t,\boldsymbol{\xi}}\equiv\langle\cdot\rangle_{t}:=\frac{ \int dP_{\xi}(x)\exp\big{(}(Z\sqrt{r}+\beta\mathbf{m}\cdot\boldsymbol{\xi})x- \frac{r+u}{2}x^{2}\big{)}(\cdot)}{\int dP_{\xi}(y)\exp\big{(}(Z\sqrt{r}+\beta \mathbf{m}\cdot\boldsymbol{\xi})y-\frac{r+u}{2}y^{2}\big{)}}\,, \tag{42}\] where the subscript \(t\) emphasizes that we have already reconstructed \(R=tP\) patterns. 
The stationarity conditions coming from (41) are \[v =\mathbb{E}_{\boldsymbol{\xi}}\langle X^{2}\rangle_{t} \tag{43}\] \[m^{\mu} =\mathbb{E}_{\boldsymbol{\xi}}\xi^{\mu}\langle X\rangle_{t}\,, \quad\mu=1,\ldots,k\] (44) \[q =\mathbb{E}_{\boldsymbol{\xi}}\langle X\rangle_{t}^{2}\] (45) \[r =\frac{\alpha(1-t)\beta^{2}q}{(1-\beta(v-q))^{2}}+\beta^{2} \Delta q+\alpha t\int_{0}^{t}\,d\tau\Big{[}\frac{2q\beta^{2}(v_{\tau}-m_{\tau} ^{2})}{1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}}\] (46) \[\qquad\qquad+q\frac{\beta^{2}[v_{\tau}-1-2\beta(v-q)(v_{\tau}-m_{ \tau}^{2})]^{2}}{[1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v- q)^{2}]^{2}}\Big{]}\] \[u =\beta\lambda(v-1)+\beta(1-\beta\Delta)v-\alpha(1-t)\beta\frac{1- \beta(v-2q)}{(1-\beta(v-q))^{2}}-\alpha t\int_{0}^{t}\,d\tau\Big{[}\frac{2v \beta^{2}(v_{\tau}-m_{\tau}^{2})-\beta(v_{\tau}-1)}{1+\beta(v_{\tau}-1)(v-q)- (v_{\tau}-m_{\tau}^{2})\beta^{2}(v-q)^{2}}\] \[\qquad\qquad+q\frac{\beta^{2}[v_{\tau}-1-2\beta(v-q)(v_{\tau}-m_ {\tau}^{2})]^{2}}{[1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2})\beta^{2}(v -q)^{2}]^{2}}\Big{]}\,. \tag{47}\] Notice that the effect of decimation is visible only in the variables \(u\) and \(r\) that affect the local measure (19). With a close look to the expression of \(r\) we can recognize the three predicted independent noise contribution. The first term is due to pattern interference (noise (b)), and we see that it decreases as \(t\) approaches \(1\). The second term can be identified with the noise contribution (a), which is due to the original Gaussian noise \(\mathbf{Z}\). The decimation noise contribution (noise (c)) is instead given by the third term, that is expressed in integral form, which correctly takes into account all the history of the process. As anticipated above, the success of decimation is determined by the interplay between noises (b) and (c). Since, as we shall see in Section 6, the retrieval accuracies remain close to one in the range of parameters \(\alpha,\Delta\) were the first step of decimation is feasible, the noise contribution (c) will be small. In addition, solving the previous equations for each decimation step shows that the benefit we gain due to the reduction of pattern interference is higher than the penalty we pay for introducing noise with decimation. As a consequence, decimation proves to be a viable strategy for matrix factorization. For all practical purposes, we will make finite size simulations and use the discretized form present in (36) of the integral accounting for decimation contributions, starting from step \(0\), when no pattern has been retrieved yet. Finally, notice that mixed states solutions are possible, with the estimates aligning to more than \(1\) pattern, _i.e._ several \(m^{\mu}\)'s in (44) are non-vanishing. This is not desirable in inference, since one wants to estimate one pattern at a time with the best possible performance. ### Remarks First of all, we clarify the relation between our formula and the low-rank formula for the spiked Wigner model. Therefore, let us set \(\beta=1/\Delta\), \(P=1\), which means \(\alpha=0\), and \(\lambda=0\). In this case the free entropy reads \[\Phi:=\text{Extr}\Big{\{}\frac{rq+uv}{2}-\frac{m^{2}}{2\Delta}-\frac{q^{2}}{4 \Delta}+\mathbb{E}_{Z,\boldsymbol{\xi}}\log\int dP_{\xi}(x)\exp\left(\left(Z \sqrt{r}+\frac{m}{\Delta}\xi\right)x-\frac{u+r}{2}x^{2}\right)\Big{\}} \tag{48}\] Extremizing w.r.t. \(q\) and \(v\) we readily find: \[r=\frac{q}{\Delta}\,,\quad u=0\,. 
\tag{49}\] Plugging this result inside the free entropy yields \[\Phi:=\text{Extr}\Big{\{}\frac{q^{2}}{4\Delta}-\frac{m^{2}}{2\Delta}+\mathbb{E }_{Z,\mathbf{\xi}}\log\int dP_{\xi}(x)\exp\left(\left(Z\sqrt{\frac{q}{\Delta}}+ \frac{m\xi}{\Delta}\right)x-\frac{q}{2\Delta}x^{2}\right)\Big{\}}\,. \tag{50}\] Finally, extremization w.r.t. \(q\) and \(m\) yields two coupled equations \[m=\mathbb{E}_{\xi}\xi\left.\left\langle X\right\rangle_{t}\right|_{r=\frac{q} {\Delta},u=0}\,,\quad q=\mathbb{E}_{\xi}\left.\left\langle X\right\rangle_{t}^ {2}\right|_{r=\frac{q}{\Delta},u=0} \tag{51}\] that admit a self consistent solution satisfying a single equation \[m=q=\mathbb{E}_{\xi}\xi\left.\left\langle X\right\rangle_{t}\right|_{r=\frac{ m}{\Delta},u=0} \tag{52}\] which is exactly the known fixed point equation for the overlap in the spiked Wigner model. Secondly, we need to ensure a proper scaling w.r.t. \(\beta\). In particular the limit \(\lim_{\beta\to\infty}\frac{\Phi}{\beta}\) must be well defined at any decimation step. The only terms in the free entropy that could give rise to overscalings in \(\beta\) are \[\frac{rq+uv}{2}-\frac{\beta^{2}\Delta q}{4}+\frac{\beta^{2}\Delta v}{4}\,, \quad\frac{r+u}{2}\,. \tag{53}\] The latter in particular appears at the exponent in the gas free entropy in the last line of (41). Both the fixed point equations for \(u\) and \(r\) contain terms proportional to \(\beta^{2}\). This issue though is only apparent, and the fixed point remains well defined. To show this let us rewrite the first problematic term as follows: \[\frac{rq+uv}{2}-\frac{\beta^{2}\Delta q}{4}+\frac{\beta^{2}\Delta v}{4}=\frac {-r(v-q)+(u+r)v}{2}+\frac{\beta^{2}\Delta(v-q)}{4}. \tag{54}\] In the limit \(\beta\to\infty\) the term \[-\frac{\beta q}{1-\beta(v-q)} \tag{55}\] arising from the square bracket in the first line of (41) forces \(q\to v\) in such a way that \(\beta(v-q)<1\) remains of order \(O(1)\). Hence \(\frac{\beta^{2}\Delta(v-q)}{4}\) and \(r(v-q)=(r/\beta)\beta(v-q)\) are at most of order \(O(\beta)\) as they should. It remains to verify that \(u+r=O(\beta)\): \[u+r=\beta\lambda(v-1+\beta v)-\beta^{2}\Delta(v-q)-\frac{\alpha\beta}{1- \beta(v-q)}-\alpha t\int_{0}^{t}d\tau\Big{[}\frac{2\beta^{2}(v-q)(v_{\tau}-m_ {\tau}^{2})-\beta(v_{\tau}-1)}{1+\beta(v_{\tau}-1)(v-q)-(v_{\tau}-m_{\tau}^{2 })\beta^{2}(v-q)^{2}}\Big{]}\,. \tag{56}\] Again, thanks to the fact that \(\beta(v-q)<1\), the correct scaling occurs. Thirdly, we notice that for Gaussian prior, when patterns are generated from \(P_{\xi}=\mathcal{N}(0,1)\), retrieval is impossible if \(\alpha>0\). In fact, from the fixed point equation for \(m^{\mu}\), one can perform a Gaussian integration by parts on the \(\xi^{\mu}\) obtaining: \[m^{\mu}=m^{\mu}\beta\big{(}\mathbb{E}\langle X^{2}\rangle_{t}-\mathbb{E} \langle X\rangle_{R}^{2}\big{)}=m^{\mu}\beta(v-q) \tag{57}\] which entails \(m^{\mu}=0\) or \(\beta(v-q)=1\). The latter though is not possible because it would cause the free entropy to diverge to minus infinity. Hence, the only possibility is to have negligible alignment with all the patterns, \(m^{\mu}=0\). On the contrary if \(\alpha=0\), the diverging contribution disappears, and setting \(\beta=1/\Delta\) yields the usual PCA estimator overlap \(m=q=1-\Delta\). ## 4 Low temperature limits ### Sparse prior Let us express the \(\beta\to\infty\) limit of the free entropy with a prior of the form \[P_{\xi}=(1-\rho)\delta_{0}+\frac{\rho}{2}\left[\delta_{-1/\sqrt{\rho}}+\delta_{1/ \sqrt{\rho}}\right]\,,\quad\rho\in(0,1)\,. 
\tag{58}\] The case \(\rho=1\) shall be discussed separately in the end. For future convenience we introduce the notations \[C:=\beta(v-q)\,\in[0,1)\,,\quad\bar{r}:=r/\beta^{2}\,,\quad U:=\frac{u+r}{\beta} \tag{59}\] where \(q\) is intended as the stationary value of the overlap solving the fixed point equations. Denote \(\mathbf{m}=(m^{\mu})_{\mu=1}^{k}\), where \(k\) is the maximum number of condensed patterns. In the low temperature limit the free entropy, re-scaled by \(\beta\), and evaluated at the stationary values of the parameters involved has the form \[\frac{1}{\beta}\Phi=-\frac{\lambda(v-1)^{2}}{4}-\frac{\bar{r}C}{2 }+\frac{Uv}{2}+\frac{\alpha(1-t)v}{2(1-C)}-\frac{v^{2}}{4}-\frac{\mathbf{m}^{2 }}{2}+\frac{\Delta Cv}{2}+\psi+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v _{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^ {2}} \tag{60}\] where \[\psi=\frac{1}{\beta}\mathbb{E}_{\boldsymbol{\xi},Z}\log\left[1-\rho+\rho\cosh \frac{\beta}{\sqrt{\rho}}\left(Z\sqrt{\bar{r}}+\mathbf{m}\cdot\boldsymbol{\xi }\right)\exp\left(-\frac{\beta U}{2\rho}\right)\right]\,. \tag{61}\] When \(\beta\to\infty\) we have to distinguish two cases in the \(Z\) average: \[\psi=O\Big{(}\frac{1}{\beta}\Big{)}+\frac{1}{\beta}\mathbb{E}_{ \boldsymbol{\xi}}\left(\int_{-\mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}+ U/2\sqrt{\bar{r}\rho}}^{\infty}+\int_{-\infty}^{-\mathbf{m}\cdot \boldsymbol{\xi}/\sqrt{\bar{r}}-U/2\sqrt{\bar{r}\rho}}\right)\frac{dz\,e^{- \frac{z^{2}}{2}}}{\sqrt{2\pi}}\log\left[1-\rho+\rho\cosh\frac{\beta}{\sqrt{ \rho}}\left(z\sqrt{\bar{r}}+\mathbf{m}\cdot\boldsymbol{\xi}\right)e^{-\frac{ \beta U}{2\rho}}\right]. \tag{62}\] The \(O(\beta^{-1})\) instead comes from integration on the interval \([-\mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}-U/2\sqrt{\bar{r}\rho},- \mathbf{m}\cdot\boldsymbol{\xi}/\sqrt{\bar{r}}+U/2\sqrt{\bar{r}\rho}]\) of the same integrand, that can be easily bounded. Let us now focus on the first integral in (62). The hyperbolic cosine and the exponential in \(U\) dominate on the other terms in the log. Taking into account the exponential growth in the selected range of \(z\)-values the first integral can be approximated with: \[\mathbb{E}_{\boldsymbol{\xi}}\int_{-\mathbf{m}\cdot\boldsymbol{ \xi}/\sqrt{\bar{r}}+U/2\sqrt{\bar{r}\rho}}^{\infty}\frac{dz}{\sqrt{2\pi}}e^{- \frac{z^{2}}{2}}\left(\frac{Z\sqrt{\bar{r}}+\mathbf{m}\cdot\boldsymbol{\xi}}{ \sqrt{\bar{\rho}}}-\frac{U}{2\rho}\right) =\sqrt{\frac{\bar{r}}{2\pi\rho}}\mathbb{E}_{\boldsymbol{\xi}}e^{ -\frac{1}{2\rho}\left(\frac{U}{2\sqrt{\bar{r}}}-\mathbf{m}\cdot\boldsymbol{ \xi}\right)^{2}}+\] \[+\mathbb{E}_{\boldsymbol{\xi}}\left(\frac{\mathbf{m}\cdot \boldsymbol{\xi}}{\sqrt{\bar{\rho}}}-\frac{U}{2\rho}\right)\int_{-\mathbf{m} \cdot\boldsymbol{\xi}/\sqrt{\bar{r}}+U/2\sqrt{\bar{r}\rho}}^{\infty}\frac{dz} {\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}\,. \tag{63}\] The second integral in (62) can be treated similarly. 
Putting all the terms together one gets \[\frac{1}{\beta}\Phi =-\frac{\bar{r}C}{2}+\frac{\Delta Cv}{2}+\frac{Uv}{2}+\frac{ \alpha(1-t)v}{2(1-C)}-\frac{v^{2}+\lambda(v-1)^{2}}{4}-\frac{\mathbf{m}^{2}}{ 2}+\sqrt{\frac{2\bar{r}}{\pi\rho}}\mathbb{E}_{\boldsymbol{\xi}}e^{-\frac{1}{2 \rho}\left(\frac{U}{2\sqrt{\bar{\rho}}}-\mathbf{m}\cdot\boldsymbol{\xi}\right)^ {2}}\] \[+\mathbb{E}_{\boldsymbol{\xi}}\frac{\mathbf{m}\cdot\boldsymbol{ \xi}}{\sqrt{\bar{\rho}}}\mathrm{erf}\left(\frac{\mathbf{m}\cdot\boldsymbol{ \xi}+\frac{U}{2\sqrt{\bar{\rho}}}}{\sqrt{2\bar{r}}}\right)-\frac{U}{2\rho} \mathbb{E}_{\boldsymbol{\xi}}\left[1-\mathrm{erf}\left(\frac{\mathbf{m}\cdot \boldsymbol{\xi}+\frac{U}{2\sqrt{\bar{\rho}}}}{\sqrt{2\bar{r}}}\right)\right]+ \frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{ \tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}}\,. \tag{64}\] Using the fact that all the parameters are evaluated at their stationary values, the previous formula can be further simplified by looking at the limiting version of the fixed point equations. In particular we have that \[C=\sqrt{\frac{2}{\pi\rho\bar{r}}}\mathbb{E}_{\boldsymbol{\xi}}\exp\left(-\left( \frac{U/2\sqrt{\rho}-\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{2\bar{r}}}\right)^ {2}\right)\,. \tag{65}\] The value of \(\bar{r}\) can be found directly from (46) by multiplying it by \(\beta^{-2}\): \[\bar{r}=\frac{\alpha(1-t)v}{(1-C)^{2}}+\Delta v+\alpha tv\int_{0}^{t}\,d\tau \left[\frac{2(v_{\tau}-m_{\tau}^{2})}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C ^{2}}+\frac{[v_{\tau}-1-2C(v_{\tau}-m_{\tau}^{2})]^{2}}{[1+(v_{\tau}-1)C-(v_{ \tau}-m_{\tau}^{2})C^{2}]^{2}}\right]\,. \tag{66}\] Deriving w.r.t. \(v\) we get the equation for \(U=\frac{v+\tau}{\beta}\): \[U=-\Delta C+v+\lambda(v-1)-\frac{\alpha(1-t)}{(1-C)}-\alpha t\int_{0}^{t}d\tau \frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{ \tau}^{2})C^{2}}\,. \tag{67}\] From a derivative w.r.t. \(U\) we get an equations for \(v\): \[v=\frac{1}{\rho}\mathbb{E}_{\boldsymbol{\xi}}\left[1-\mathrm{erf}\left(\frac{ \mathbf{m}\cdot\boldsymbol{\xi}+\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}} \right)\right]\,. \tag{68}\] We can solve this equation in order to get \(U\) as a function of \(v\), for instance by dichotomy. Finally, from (44) and (61) \[\mathbf{m}=\mathbb{E}\boldsymbol{\xi}\langle X\rangle_{Z,\boldsymbol{\xi}}= \frac{\partial\psi}{\partial\mathbf{m}}=\mathbb{E}_{\boldsymbol{\xi}}\frac{ \boldsymbol{\xi}}{\sqrt{\rho}}\mathrm{erf}\left(\frac{\mathbf{m}\cdot \boldsymbol{\xi}-U/2\sqrt{\rho}}{\sqrt{2\bar{r}}}\right)\,. \tag{69}\] If we insert these conditions in (64) we get \[\frac{\Phi}{\beta}=\frac{\alpha(1-t)v}{2(1-C)^{2}}+\Delta Cv-\frac{v^{2}+ \lambda(v-1)^{2}}{4}+\frac{\mathbf{m}^{2}}{2}+\frac{\alpha tv}{2}\int_{0}^{t} d\tau\frac{4C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)[1-(v_{\tau}-m_{\tau}^{2})C^{2}]}{ [1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}]^{2}}\,. \tag{70}\] A numerical procedure to find a solution to the previous system of equations is to solve simultaneously (65) and (68) plugging into them the definitions of \(\bar{r}\) and \(U\) for a fixed \(m\). Then one can iterate (69). Notice that, when \(\lambda\) is finite, the problem is not continuous in \(\rho=1\), namely sending \(\beta\to+\infty\) before or after setting \(\rho=1\) is different. This can be seen as a consequence of the non commutation of the two limits \(\lim_{\beta\to\infty}\) and \(\lim_{\rho\to 1}\) for the quantity \((1-\rho)^{1/\beta}\). 
In fact, for \(\rho=1\) the \(O(\beta^{-1})\) contribution in \(\psi\) that was discarded before, is no longer negligible. Considering that contribution too would yield a free entropy of the form: \[\frac{1}{\beta}\Phi=-\frac{\bar{r}C}{2}+\frac{\Delta Cv}{2}+\frac {Uv}{2}+\frac{\alpha(1-t)v}{2(1-C)}-\frac{v^{2}+\lambda(v-1)^{2}}{4}-\frac{ \mathbf{m}^{2}}{2}+\sqrt{\frac{2\bar{r}}{\pi\rho}}\mathbb{E}_{\boldsymbol{\xi }}e^{-\frac{1}{2\rho}\left(\theta(1-\rho)\frac{U}{2\sqrt{\rho}}-\mathbf{m} \cdot\boldsymbol{\xi}\right)^{2}}\\ +\mathbb{E}_{\boldsymbol{\xi}}\frac{\mathbf{m}\cdot\boldsymbol{ \xi}}{\sqrt{\rho}}\mathrm{erf}\left(\frac{\mathbf{m}\cdot\boldsymbol{\xi}+ \theta(1-\rho)\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}}\right)-\frac{U}{2\rho} \mathbb{E}_{\boldsymbol{\xi}}\left[1-\mathrm{erf}\left(\frac{\mathbf{m} \cdot\boldsymbol{\xi}+\theta(1-\rho)\frac{U}{2\sqrt{\rho}}}{\sqrt{2\bar{r}}} \right)\right]\\ +\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{ 2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}}\,, \tag{71}\] where we set \(\theta(0)=0\). We see quickly that now, if \(\rho=1\), \(v=1\) is automatically enforced, whereas it was not so before. This discontinuous behaviour disappears if one sends \(\lambda\to+\infty\) from the very beginning, as studied in [48]. ### Continuous priors Consider the same definitions of \(\bar{r},C,U\) as above. In this section we deal with priors that are symmetric and absolutely continuous over the Lebesgue measure, with density \(p(x)\). We require the density to be finite at the boundaries of the support \([-a,a]\), or to go to zero with at most polynomial speed, and to be non-vanishing in the interior of the support. An example is the uniform distribution over \([-\sqrt{3},\sqrt{3}]\). The prior dependent part in the free entropy is still \[\psi:=\frac{1}{\beta}\mathbb{E}_{Z,\mathbf{\xi}}\log\int dP_{\xi}(x)e^{\beta(Z \sqrt{\bar{r}}+\mathbf{m}\cdot\mathbf{\xi})x-\frac{\beta U}{2}x^{2}}\,. \tag{72}\] We separate the quenched Gaussian integral from the expectation w.r.t. \(\mathbf{\xi}\), and we perform the following changes of variables: \(z\mapsto z/\sqrt{\bar{r}}\), \(z\mapsto z-\mathbf{m}\cdot\mathbf{\xi}\). This yields \[\psi=\frac{1}{\beta}\mathbb{E}_{\mathbf{\xi}}\int\frac{dz}{\sqrt{2 \pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\log\int_{-a}^ {a}dxp(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}+\frac{dz^{2}}{2U}}= \\ =\frac{\bar{r}+\mathbf{m}^{2}}{2U}+\frac{1}{\beta}\mathbb{E}_{ \mathbf{\xi}}\int\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi} )^{2}}{2\bar{r}}}\log\int_{-a}^{a}dxp(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{ U}\right)^{2}}=:\frac{\bar{r}+\mathbf{m}^{2}}{2U}+\bar{\psi}\,. \tag{73}\] The integral inside the logarithm in \(\bar{\psi}\) can be computed by Laplace's approximation when \(\beta\) is large. However, the location of the maximum of the exponent depends on the value of \(z\). In particular if \(z\in[-Ua,Ua]\) then the maximum point falls inside the support of \(p(x)\). Otherwise, given the quadratic nature of the exponent, the maximum in \(x\) will be attained at the boundaries of the support \(-a\) and \(a\). Hence the \(z\)-integral must be divided into three segments. 
Let us first consider: \[\mathrm{I}=\frac{1}{\beta}\mathbb{E}_{\mathbf{\xi}}\int_{-Ua}^{Ua}\frac{dz}{\sqrt {2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\log\int_{ -a}^{a}dxp(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}}\xrightarrow{ \beta\to\infty}0 \tag{74}\] because the exponent equals \(0\) at the maximum. Hence no exponential contribution in \(\beta\) is given, that is able to constrast the \(1/\beta\) in front. Let us turn to a second contribution: \[\mathrm{II}=\frac{1}{\beta}\mathbb{E}_{\mathbf{\xi}}\int_{Ua}^{+\infty}\frac{dz}{ \sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\log \int_{-a}^{a}dxp(x)e^{-\frac{\beta U}{2}\left(x-\frac{z}{U}\right)^{2}} \xrightarrow{\beta\to\infty}-\frac{U}{2}\mathbb{E}_{\mathbf{\xi}}\int_{Ua}^{+ \infty}\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2} }{2\bar{r}}}\left(a-\frac{z}{U}\right)^{2} \tag{75}\] From the square in the integrand we get three sub-contributions. \[\mathrm{IIA}=-\frac{Ua^{2}}{2}\mathbb{E}_{\mathbf{\xi}}\int_{Ua}^{+\infty}\frac{ dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(z-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}=- \frac{Ua^{2}}{4}\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{m}\cdot\mathbf{\xi}}{\sqrt{2 \bar{r}}}\Big{)} \tag{76}\] where the last step follows from a simple change of variables. The second one, with a shift in the integration variable, is \[\mathrm{IIB}=a\mathbb{E}_{\mathbf{\xi}}\int_{Ua-\mathbf{m}\cdot\mathbf{\xi}}^{+\infty} \frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{z^{2}}{2\bar{r}}}(z+\mathbf{m}\cdot\bm {\xi})=a\sqrt{\frac{\bar{r}}{2\pi}}\mathbb{E}_{\mathbf{\xi}}e^{-\frac{(Ua-\mathbf{ m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}+a\mathbb{E}_{\mathbf{\xi}}\mathbf{m}\cdot\mathbf{\xi} \,\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{m}\cdot\mathbf{\xi}}{\sqrt{2\bar{r}}} \Big{)}\,. \tag{77}\] Finally, with the same shift in the integration variable, we get a third contribution: \[\mathrm{IIC}=-\frac{1}{2U}\mathbb{E}_{\mathbf{\xi}}\int_{Ua-\mathbf{m }\cdot\mathbf{\xi}}^{+\infty}\frac{dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{z^{2}}{2\bar{r }}}(z^{2}+2z\mathbf{m}\cdot\mathbf{\xi}+(\mathbf{m}\cdot\mathbf{\xi})^{2})=-\frac{1}{2U }\sqrt{\frac{\bar{r}}{2\pi}}\mathbb{E}_{\mathbf{\xi}}(Ua+\mathbf{m}\cdot\mathbf{\xi})e ^{-\frac{(Ua-\mathbf{m}\cdot\mathbf{\xi})^{2}}{2\bar{r}}}\\ -\frac{1}{4U}\mathbb{E}_{\mathbf{\xi}}(\mathbf{m}\cdot\mathbf{\xi})^{2} \,\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{m}\cdot\mathbf{\xi}}{\sqrt{2\bar{r}}}\Big{)} -\frac{\bar{r}}{4U}\mathbb{E}_{\mathbf{\xi}}\mathrm{erfc}\Big{(}\frac{Ua-\mathbf{ m}\cdot\mathbf{\xi}}{\sqrt{2\bar{r}}}\Big{)}\,. \tag{78}\] Now, it remains to compute the last gaussian integral: \[\text{III}=\frac{1}{\beta}\mathbb{E}_{\boldsymbol{\xi}}\int_{-\infty}^{Ua}\frac{ dz}{\sqrt{2\pi\bar{r}}}e^{-\frac{(\epsilon-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}} \log\int_{-a}^{a}dxp(x)e^{-\frac{\partial U}{2}\left(x-\frac{\boldsymbol{ \xi}}{\boldsymbol{\xi}}\right)^{2}}\,. \tag{79}\] Thanks to the parity of \(p(x)\), if we perform the changes of variables \(z\mapsto-z\), \(\boldsymbol{\xi}\mapsto-\boldsymbol{\xi}\), \(x\mapsto-x\) we find that II=III. 
Hence we can finally recompose \(\psi\): \[\psi=\frac{\bar{r}+\mathbf{m}^{2}}{2U}+2\text{II}=-\frac{Ua^{2}}{2}+\frac{1}{U }\sqrt{\frac{\bar{r}}{2\pi}}\mathbb{E}_{\boldsymbol{\xi}}(Ua-\mathbf{m}\cdot \boldsymbol{\xi})e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}}+ \mathbb{E}_{\boldsymbol{\xi}}\frac{\bar{r}+(Ua-\mathbf{m}\cdot\boldsymbol{ \xi})^{2}}{2U}\text{erf}\Big{(}\frac{Ua-\mathbf{m}\cdot\boldsymbol{\xi}}{ \sqrt{2\bar{r}}}\Big{)}\,. \tag{80}\] and the final form of the asymptotic free entropy is \[\frac{\Phi}{\beta}\xrightarrow{\beta\to\infty}-\frac{\bar{r}C}{2 }+\frac{U(v-a^{2})}{2}-\frac{\mathbf{m}^{2}}{2}+\frac{\alpha(1-t)v}{2(1-C)}+ \frac{\Delta Cv}{2}-\frac{v^{2}+\lambda(v-1)^{2}}{4}+\frac{1}{U}\sqrt{\frac{ \bar{r}}{2\pi}}\mathbb{E}_{\boldsymbol{\xi}}(Ua-\mathbf{m}\cdot\boldsymbol{ \xi})e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}}\\ +\mathbb{E}_{\boldsymbol{\xi}}\frac{\bar{r}+(Ua-\mathbf{m}\cdot \boldsymbol{\xi})^{2}}{2U}\text{erf}\Big{(}\frac{Ua-\mathbf{m}\cdot\boldsymbol {\xi}}{\sqrt{2\bar{r}}}\Big{)}+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v _{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^ {2}}\,. \tag{81}\] The saddle point equations can be obtained by deriving the previous formula. The gradient w.r.t. \(\mathbf{m}\) yields: \[\mathbf{m}=\mathbb{E}_{\boldsymbol{\xi}}\frac{\boldsymbol{\xi}}{U}\Big{[}- \sqrt{\frac{2\bar{r}}{\pi}}e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2} }{2r}}+(Ua-\mathbf{m}\cdot\boldsymbol{\xi})\text{erf}\Big{(}\frac{\mathbf{m} \cdot\boldsymbol{\xi}-Ua}{\sqrt{2\bar{r}}}\Big{)}\Big{]}\,. \tag{82}\] The derivative w.r.t. \(\bar{r}\) gives the equation for \(C\): \[C=\frac{1}{U}\mathbb{E}_{\boldsymbol{\xi}}\text{erf}\Big{(}\frac{Ua-\mathbf{ m}\cdot\boldsymbol{\xi}}{\sqrt{2\bar{r}}}\Big{)}\,. \tag{83}\] Deriving w.r.t. \(U\) yields an equation for \(v\): \[\frac{a^{2}-v}{2}=\frac{1}{U^{2}}\sqrt{\frac{\bar{r}}{2\pi}} \mathbb{E}_{\boldsymbol{\xi}}(Ua+\mathbf{m}\cdot\boldsymbol{\xi})e^{-\frac{(Ua -\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2r}}-\mathbb{E}_{\boldsymbol{\xi}} \Big{[}\frac{\bar{r}+(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2U^{2}}- \frac{a}{U}(Ua-\mathbf{m}\cdot\boldsymbol{\xi})\Big{]}\text{erf}\Big{(}\frac{U a-\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{2\bar{r}}}\Big{)}\,. \tag{84}\] In all the previous equations \(\bar{r}\) and \(U\) must be considered as the following functions: \[\bar{r} =\frac{\alpha(1-t)v}{(1-C)^{2}}+\Delta v+\alpha tv\int_{0}^{t}d \tau\left[\frac{2(v_{\tau}-m_{\tau}^{2})}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{ 2})C^{2}}+\frac{[v_{\tau}-1-2C(v_{\tau}-m_{\tau}^{2})]^{2}}{[1+(v_{\tau}-1)C-( v_{\tau}-m_{\tau}^{2})C^{2}]^{2}}\right] \tag{85}\] \[U =-\Delta C+v+\lambda(v-1)-\frac{\alpha(1-t)}{(1-C)}-\alpha t\int_ {0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{ \tau}-m_{\tau}^{2})C^{2}}\,. \tag{86}\] Equations (83) and (84) shall be solved simultaneously at any iteration step for \(\mathbf{m}\). This will yield a convergent algorithm to solve the system of equations. 
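As an illustration of the numerical procedure just described, the following Python sketch solves (83) and (84) simultaneously at fixed \(\mathbf{m}\) and then iterates (82), specialized to the first decimation step (\(t=0\)), a single condensed pattern (\(k=1\)) and the uniform prior on \([-\sqrt{3},\sqrt{3}]\). It is a direct transcription of the equations above; the initialization, the damping and the quadrature order are illustrative choices, and no claim is made on which fixed point the iteration selects.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erf

# Zero-temperature saddle point, first decimation step (t = 0), k = 1,
# uniform prior on [-sqrt(3), sqrt(3)]; a numerical sketch of (82)-(86).
a = np.sqrt(3.0)
nodes, w = np.polynomial.legendre.leggauss(200)
xis = a * nodes                      # quadrature nodes for xi ~ Uniform[-a, a]

def E_xi(f):
    """Expectation over the uniform prior via Gauss-Legendre quadrature."""
    return 0.5 * np.sum(w * f(xis))

def r_bar(C, v, alpha, Delta):       # eq. (85) with t = 0
    return alpha * v / (1.0 - C) ** 2 + Delta * v

def U_of(C, v, alpha, Delta, lam):   # eq. (86) with t = 0
    return -Delta * C + v + lam * (v - 1.0) - alpha / (1.0 - C)

def solve_C_v(m, alpha, Delta, lam=0.0, guess=(0.5, 1.0)):
    """Solve (83) and (84) simultaneously for (C, v) at fixed m."""
    def residuals(p):
        C, v = p
        rb, U = r_bar(C, v, alpha, Delta), U_of(C, v, alpha, Delta, lam)
        erf_term = E_xi(lambda x: erf((U * a - m * x) / np.sqrt(2 * rb)))
        gauss_term = E_xi(lambda x: (U * a + m * x)
                          * np.exp(-(U * a - m * x) ** 2 / (2 * rb)))
        rest = E_xi(lambda x: ((rb + (U * a - m * x) ** 2) / (2 * U ** 2)
                               - a / U * (U * a - m * x))
                    * erf((U * a - m * x) / np.sqrt(2 * rb)))
        res_C = C - erf_term / U                                         # eq. (83)
        res_v = (a ** 2 - v) / 2 - (np.sqrt(rb / (2 * np.pi)) * gauss_term / U ** 2 - rest)  # eq. (84)
        return [res_C, res_v]
    return fsolve(residuals, guess)

def iterate_m(alpha, Delta, lam=0.0, m=0.9, iters=200, damp=0.5):
    """Outer loop: update m through (82), re-solving (83)-(84) at each step."""
    for _ in range(iters):
        C, v = solve_C_v(m, alpha, Delta, lam)
        rb, U = r_bar(C, v, alpha, Delta), U_of(C, v, alpha, Delta, lam)
        m_new = E_xi(lambda x: x / U * (-np.sqrt(2 * rb / np.pi)
                     * np.exp(-(U * a - m * x) ** 2 / (2 * rb))
                     + (U * a - m * x) * erf((m * x - U * a) / np.sqrt(2 * rb))))
        m = damp * m + (1 - damp) * m_new
    return m, C, v

# e.g. m, C, v = iterate_m(alpha=0.01, Delta=0.05)
```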
To evaluate the free entropy at the solution of the previous system of saddle point equations we first enforce equation (84), obtaining: \[\frac{\Phi}{\beta}\xrightarrow{\beta\to\infty}-\frac{\bar{r}C}{2}+\frac{U(v-a^{2})}{2}-\frac{\mathbf{m}^{2}}{2}+\frac{\alpha(1-t)v}{2(1-C)}+\frac{\Delta Cv}{2}-\frac{v^{2}+\lambda(v-1)^{2}}{4}+\frac{1}{U}\sqrt{\frac{\bar{r}}{2\pi}}\mathbb{E}_{\boldsymbol{\xi}}(Ua-\mathbf{m}\cdot\boldsymbol{\xi})e^{-\frac{(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2\bar{r}}}\\ +\mathbb{E}_{\boldsymbol{\xi}}\frac{\bar{r}+(Ua-\mathbf{m}\cdot\boldsymbol{\xi})^{2}}{2U}\text{erf}\Big{(}\frac{Ua-\mathbf{m}\cdot\boldsymbol{\xi}}{\sqrt{2\bar{r}}}\Big{)}+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}}\,. \tag{87}\] Using the equation (83) for \(C\) we see that the first term in the first line and the first term in the second line can be summed together. After some algebra, imposing also (82), we get \[\frac{\Phi}{\beta}\xrightarrow{\beta\to\infty}\frac{\bar{r}C}{2}+\frac{\mathbf{m}^{2}}{2}+\frac{\alpha(1-t)v}{2(1-C)}+\frac{\Delta Cv}{2}-\frac{v^{2}+\lambda(v-1)^{2}}{4}+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{2C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)}{1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}}\,. \tag{88}\] Finally, inserting also (85) we get \[\frac{\Phi}{\beta}=\frac{\alpha(1-t)v}{2(1-C)^{2}}+\Delta Cv-\frac{v^{2}+\lambda(v-1)^{2}}{4}+\frac{\mathbf{m}^{2}}{2}+\frac{\alpha tv}{2}\int_{0}^{t}d\tau\frac{4C(v_{\tau}-m_{\tau}^{2})-(v_{\tau}-1)[1-(v_{\tau}-m_{\tau}^{2})C^{2}]}{[1+(v_{\tau}-1)C-(v_{\tau}-m_{\tau}^{2})C^{2}]^{2}}\,. \tag{89}\] which surprisingly coincides with (70).

## 5 Phase diagrams for the first decimation step

The starting point of the decimation process is of crucial importance for its success. In fact, if we were to subtract an estimate \(\boldsymbol{\eta}\boldsymbol{\eta}^{\intercal}/\sqrt{N}\) from the observations \(\boldsymbol{Y}\) where \(\boldsymbol{\eta}\) had a negligible alignment with all the patterns, we would actually be introducing further noise without decreasing the rank of the hidden matrix: decimation would be bound to fail. At the first step (\(R=0\), or \(t=0\)) the replica symmetric decimation free entropy is simply that of a Hopfield model with Gaussian noise: \[\Phi(t=0):=\text{Extr}\Big{\{}\frac{rq+uv}{2}-\beta\sum_{\mu=1}^{k}\frac{(m^{\mu})^{2}}{2}-\frac{\beta^{2}\Delta q^{2}}{4}-\frac{\alpha}{2}\left[\log\left(1-\beta(v-q)\right)-\frac{\beta q}{1-\beta(v-q)}\right] \tag{90}\] \[\quad+\beta\Big{(}\frac{\beta\Delta-1}{4}v^{2}-\frac{\lambda}{4}(1-v)^{2}\Big{)}+\mathbb{E}_{Z,\boldsymbol{\xi}}\log\int dP_{\xi}(x)\exp\left(\left(Z\sqrt{r}+\beta\mathbf{m}\cdot\boldsymbol{\xi}\right)x-\frac{u+r}{2}x^{2}\right)\Big{\}}\,. \tag{91}\] The set of fixed point equations then simplifies remarkably to \[v=\mathbb{E}_{\boldsymbol{\xi}}\langle X^{2}\rangle_{t}\,,\quad m^{\mu}=\mathbb{E}_{\xi}\xi\langle X\rangle_{t}\,,\quad q=\mathbb{E}_{\boldsymbol{\xi}}\langle X\rangle_{t}^{2} \tag{92}\] \[r=\frac{\alpha\beta^{2}q}{(1-\beta(v-q))^{2}}+\beta^{2}\Delta q\,,\quad u=\beta\lambda(v-1)+\beta(1-\beta\Delta)v-\alpha\beta\frac{1-\beta(v-2q)}{(1-\beta(v-q))^{2}}\,. \tag{93}\] where we have assumed condensation onto only one pattern. Starting from these equations, one can specialize to the different zero temperature limits, which exhibit interesting features.
For instance, in the left panel of Figure 1 we see how the zero temperature phase diagram changes as sparsity increases, for the sparse Ising prior with \(\lambda\to\infty\). It appears that sparsity enlarges the retrieval region and also increases the storage capacity. From the right panel we indeed see that the critical storage capacity in the noiseless limit \(\Delta=0\) diverges when \(\rho\to 0\). This observation can be turned into an analytical statement as follows. To begin with, we notice that \[C=\frac{2(1-\rho)}{\sqrt{2\pi\bar{r}\rho}}e^{-\frac{U^{2}}{8\bar{r}\rho}}+\frac{\rho}{\sqrt{2\pi\bar{r}\rho}}\left[e^{-\left(\frac{U/2+m}{\sqrt{2\bar{r}\rho}}\right)^{2}}+e^{-\left(\frac{U/2-m}{\sqrt{2\bar{r}\rho}}\right)^{2}}\right]\xrightarrow{\rho\to 0}0\,, \tag{94}\] exponentially fast, and \[\bar{r}\xrightarrow{\rho\to 0}v(\alpha+\Delta)\,. \tag{95}\] As a consequence, the equation (67) for \(U\) reduces to: \[U=v+\lambda(v-1)-\alpha\quad\Rightarrow\quad v=\frac{U+\alpha+\lambda}{\lambda+1}\,. \tag{96}\] We argue that \(U\) is always positive, as it serves as a norm regulator on the estimator, and we verified this statement numerically. This implies that \(v\) is always strictly positive. Equation (68) can thus be rewritten as an equation for \(U\) that reads: \[\frac{U+\alpha+\lambda}{\lambda+1}=\frac{1}{\rho}-\frac{1-\rho}{\rho}\text{erf}\Big{(}\frac{U}{2\sqrt{2\rho\bar{r}}}\Big{)}-\frac{1}{2}\Big{[}\text{erf}\Big{(}\frac{U/2-m}{\sqrt{2\bar{r}\rho}}\Big{)}+\text{erf}\Big{(}\frac{U/2+m}{\sqrt{2\bar{r}\rho}}\Big{)}\Big{]}\,. \tag{97}\] The error function saturates exponentially fast to \(1\) when \(\rho\to 0\), and this entails \[\frac{U+\alpha+\lambda}{\lambda+1}=1-\frac{1}{2}\Big{[}\text{erf}\Big{(}\frac{U/2-m}{\sqrt{2\bar{r}\rho}}\Big{)}+\text{erf}\Big{(}\frac{U/2+m}{\sqrt{2\bar{r}\rho}}\Big{)}\Big{]}+O\big{(}e^{-K/\rho}\big{)} \tag{98}\] for some positive constant \(K\), and up to logarithmic corrections at the exponent in the remainder. The argument in the square brackets can go either to \(0\) or to \(2\), depending on the signs of the arguments of the error functions. However, the second possibility, which would correspond to \(U/2>|m|\), is ruled out, since the l.h.s. cannot converge to \(0\), thanks to the positivity of \(U\). Hence, the only alternative we have is that \(U/2<|m|\), which is also verified numerically. This implies that the limiting equation for \(\rho\to 0\) appears as \[\frac{U+\alpha+\lambda}{\lambda+1}=1\quad\Rightarrow\quad\lim_{\rho\to 0}U=1-\alpha\quad\Rightarrow\quad\lim_{\rho\to 0}v=1\,. \tag{99}\] Finally, using the condition \(U/2<|m|\), the limit of the magnetization can be easily computed from (69): \[m=\frac{1}{2}\Big{[}\text{erf}\Big{(}\frac{m-U/2}{\sqrt{2\bar{r}\rho}}\Big{)}+\text{erf}\Big{(}\frac{U/2+m}{\sqrt{2\bar{r}\rho}}\Big{)}\Big{]}\xrightarrow{\rho\to 0}1\,. \tag{100}\] The behaviour of the variables \(m,C,v,\bar{r}\) and \(U\) depicted so far has been verified numerically for various values of \(\lambda\), \(\alpha\) and \(\Delta\). In Figure 2 we plot the phase diagram for a continuous uniform prior supported on \([-\sqrt{3},\sqrt{3}]\) with \(\lambda=0\). We verified that once a magnetization \(m\neq 0\) is a solution to the fixed point equations, it is also thermodynamically stable, namely its free entropy is automatically bigger than that of the \(m=0\) solution, contrary to what happens for the discrete priors discussed above.
The dashed line here does not signal a proper phase transition, but rather the location in phase space where the estimate of the single pattern outperforms the null estimator \(\mathbf{\eta}_{null}=0\) in mean square error, namely when: \[\text{MSE}(\mathbf{\eta};\mathbf{\xi})=\frac{1}{N}\|\mathbf{\xi}-\langle\mathbf{\eta}\rangle\|^{2}\simeq 1+v-2m<1\,, \tag{101}\] where the approximate equality holds true in the \(N\to\infty\) and \(\beta\to\infty\) limits. Notice that the performance of a Bayes-optimal estimator is always upper bounded by \(1\) thanks to the Nishimori identities, hence it is always at least as good as the null estimator.

Figure 1: **Left panel**: Phase diagram for the first step of decimation in the case of sparse Ising prior. The lines show the zero temperature phase diagram for different values of the sparsity parameter \(\rho\) (using \(\lambda\to\infty\)). Dashed lines plot the storage capacity as a function of \(\Delta\). Solid lines signal the thermodynamic transition from the glassy phase to the retrieval phase, when configurations with non vanishing magnetizations with the patterns become thermodynamically stable. The blue and red lines are for \(\rho=1\); cyan and magenta for \(\rho=0.1\); green and yellow for \(\rho=0.05\). **Right panel**: zero temperature storage capacity \(\alpha_{c}\) and critical thermodynamic storage \(\alpha_{F}\), in dashed blue and solid red lines respectively, versus sparsity \(\rho\) in the case \(\Delta=0\) (using \(\lambda\to\infty\)). This plot tracks the behaviour of the intersection of the dashed and solid lines with the \(x\)-axis in the left panel as \(\rho\) varies in \((0,1]\).

Figure 2: Zero temperature phase diagram for uniform prior supported on \([-\sqrt{3},\sqrt{3}]\) and \(\lambda=0\). The solid line represents the thermodynamic phase transition. Below it, probability is dominated by those 'retrieval' states that have a non vanishing Mattis magnetization with one pattern. The dashed blue line represents a performance transition: below it the mean configuration of the Boltzmann-Gibbs measure has a better performance in reconstructing the pattern than the null estimator \(\mathbf{\eta}_{null}=0\).

## 6 Numerical tests

### Testing the saddle point equations with AMP

In order to test our theoretical predictions, we need an algorithm that is able to sample from the Boltzmann-Gibbs measure, or at least to estimate its marginals, namely the local magnetizations. Approximate message passing is an algorithm that serves this purpose. Furthermore, one needs to integrate the decimation scheme into it. The resulting algorithm is called _decimated AMP_ (see Algorithm 1); it first appeared informally in [56], and was then refined in [57]. It is possible to derive a suitable AMP from the set of belief propagation equations for the Boltzmann-Gibbs measure: \[\hat{m}^{t}_{(ij)\to i}(x_{i}) \propto\int dx_{j}\hat{m}^{t}_{j\to(ij)}(x_{j})\exp\Big{[}\frac{\beta}{\sqrt{N}}Y_{ij}x_{i}x_{j}-\frac{\beta(1+\lambda)}{2N}x_{i}^{2}x_{j}^{2}\Big{]} \tag{102}\] \[m^{t+1}_{i\to(ij)}(x_{i}) \propto dP_{\xi}(x_{i})\exp\Big{(}\frac{\beta\lambda x_{i}^{2}}{2}\Big{)}\prod_{k\neq i,j}\hat{m}^{t}_{(ki)\to i}(x_{i})\,, \tag{103}\] by expanding in \(N\) and keeping the leading order. The resulting algorithm, which takes as input an appropriate
initialization and the data, reads: \[\mathbf{x}^{t+1}=f(\mathbf{A}^{t},\mathbf{B}^{t})\,,\quad\mathbf{v}^{t+1}=\partial_{a}f(\mathbf{A}^{t},\mathbf{B}^{t}) \tag{104}\] \[\mathbf{A}^{t}=\frac{\beta}{\sqrt{N}}\mathbf{Y}\mathbf{x}^{t}-\frac{\beta^{2}}{N}\mathbf{x}^{t-1}\circ(\mathbf{Y}^{\circ 2}\mathbf{v}^{t}) \tag{105}\] \[\mathbf{B}^{t}=\frac{\beta}{N}\big{(}(1-\mathbf{Y}^{\circ 2})\mathbf{v}+\|\mathbf{x}^{t}\|^{2}\big{)}+\frac{\beta\lambda}{N}\sum_{i=1}^{N}\big{(}v_{i}^{t}+(x_{i}^{t})^{2}-1\big{)} \tag{106}\] where constants are added component-wise, \(\circ\) is the Hadamard entry-wise product (or power), and as denoisers we have chosen the local means \[f(a,b)=\frac{\int dP_{\xi}(x)x\exp(ax-\frac{bx^{2}}{2})}{\int dP_{\xi}(y)\exp(ay-\frac{by^{2}}{2})} \tag{107}\] that are also applied component-wise to vectors. We denote this algorithm in a compact way by \(\text{AMP}(\mathbf{Y},\mathbf{x}^{0},\mathbf{v}^{0})\), and it is run until the marginals stabilize within a certain tolerance. The above AMP is used to estimate the first and second moment marginals of the Boltzmann-Gibbs measure: \(x_{i}^{\infty}\simeq\langle x_{i}\rangle\), \(v_{i}^{\infty}\simeq\langle x_{i}^{2}\rangle-\langle x_{i}\rangle^{2}\). Of course the very same algorithm can be run on the set of modified observations \(\mathbf{Y}_{R}\) in (16), which is accessible to the statistician at every decimation step.

```
Require: N, P or α, Y, ξ, ε
for μ = 1, …, P do
    g ← N(0, 1_N)
    x⁰ ← sqrt(1 − ε²) g + ε ξ^μ
    v⁰ ← 1 − 0.9 (x⁰)^∘2
    ⟨η^μ⟩_{R=μ−1}, ⟨(η^μ)^∘2⟩_{R=μ−1} − ⟨η^μ⟩^∘2_{R=μ−1} ← AMP(Y_{R=μ−1}, x⁰, v⁰)
    Y_{R=μ} ← Y_{R=μ−1} − ⟨η^μ⟩_{R=μ−1} ⟨η^μ⟩ᵀ_{R=μ−1} / sqrt(N)
end for
Return (⟨η^μ⟩_{R=μ−1}, ⟨(η^μ)^∘2⟩_{R=μ−1})_{1≤μ≤P}.
```
**Algorithm 1** Decimated AMP (DAMP)

Figure 3: Mean Square Error of decimation in the case of sparse Ising priors: theory versus Decimated AMP algorithm. The red solid curves are the expected pattern MSE predicted by theory as a function of the decimation time (i.e. the number of decoded patterns). The blue data points and error bars are obtained by running DAMP over \(n=300\) independent instances. \(N=1500\), \(\lambda=0\) in all plots. **Left panel**: \(\rho=1\), \(\alpha=0.03\) namely \(P=45\), \(\Delta=0.08\) and \(\beta=10\). **Middle panel**: \(\rho=0.2\), \(\alpha=0.04\) namely \(P=60\), \(\Delta=0.09\) and \(\beta=8\). **Right panel**: \(\rho=0.15\), \(\alpha=0.06\) namely \(P=90\), \(\Delta=0.1\) and \(\beta=8\).

It is a known fact that in the Hopfield model AMP needs to be initialized sufficiently close to the patterns to converge, and here we experience the same behavior from the first step of decimation until the end. Hence DAMP is not suitable as an inference algorithm, as it needs an informative initialization, whose correlation with the sought pattern is \(\epsilon\) in Algorithm 1.
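For readers who prefer code to pseudo-code, the outer decimation loop of Algorithm 1 can be mirrored in a few lines of Python. The inner routine `amp` is an assumed callable implementing the iteration (104)-(107) and returning the posterior means and variances; it is not spelled out here, and all names are illustrative. The returned means are the estimates \(\langle\boldsymbol{\eta}^{\mu}\rangle_{R=\mu-1}\) used below to build the decimation estimator (111).

```python
import numpy as np

def damp(Y, xi, amp, eps=0.3, seed=None):
    """Decimated AMP: a direct transcription of Algorithm 1.

    `amp(Y_R, x0, v0)` is assumed to return the AMP fixed point
    (posterior means, posterior variances) for the measure built on Y_R."""
    rng = np.random.default_rng(seed)
    N, P = xi.shape
    Y_R = Y.copy()
    means, variances = [], []
    for mu in range(P):
        g = rng.normal(size=N)
        x0 = np.sqrt(1.0 - eps ** 2) * g + eps * xi[:, mu]   # informative initialization
        v0 = 1.0 - 0.9 * x0 ** 2
        eta, var = amp(Y_R, x0, v0)
        Y_R = Y_R - np.outer(eta, eta) / np.sqrt(N)          # decimation step, eq. (16)
        means.append(eta)
        variances.append(var)
    return means, variances
```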
Nevertheless, DAMP can be considered as a tool to verify that our replica computations are correct and that decimation is able to retrieve all the patterns, which means it does not corrupt itself too much. In Figure 3 we plot in red the predicted theoretical curves of the expected MSE on the reconstruction of the single pattern

\[\mathbb{E}\text{MSE}(\mathbf{\xi}^{\mu};\mathbf{\eta}^{\mu})=\frac{1}{N}\|\mathbf{\xi}^{\mu}-\langle\mathbf{\eta}^{\mu}\rangle_{t=\mu-1}\|^{2}\simeq 1+q_{t}-2m_{t} \tag{108}\]

where the subscript \(t\) indicates that we are at the decimation time \(t\). The blue data points and error bars are obtained from an average of 300 instances of DAMP run on independently generated data. We considered different values of sparsity and the regularization parameter \(\lambda\) was always set to 0. In every case the theoretical curve seems to reproduce accurately the behaviour of the pattern MSE, yielding a good confirmation of our RS theory.

### Expected decimation performance

In this section, we compare the expected denoising performance of decimation with the typical performance of a Rotation Invariant Estimator (RIE) introduced in [49]. A RIE is characterized by the fact that it provides an estimate of the original matrix \(\mathbf{\xi}\mathbf{\xi}^{T}\) which has the same eigenbasis as that of the data matrix \(\mathbf{Y}\). Once the eigenbasis is established, one only has to produce an estimate of the spectrum based on that of \(\mathbf{Y}\). As such, the RIE is a purely spectral estimator and it does not exploit the prior knowledge on the signal components. Among the possible RIEs, the one that acts optimally on the spectrum of \(\mathbf{Y}\) is

\[\hat{\mathbf{\lambda}}=\mathbf{\lambda}_{\mathbf{Y}}-2\Delta\mathcal{H}[\rho_{\mathbf{Y}}](\mathbf{\lambda}_{\mathbf{Y}}) \tag{109}\]

where \(\hat{\mathbf{\lambda}}\) and \(\mathbf{\lambda}_{\mathbf{Y}}\) are the vectors of the eigenvalues of the estimate and of \(\mathbf{Y}/\sqrt{N}\) respectively, and \(\mathcal{H}[\rho_{\mathbf{Y}}]\) is the Hilbert transform of the spectral density of \(\mathbf{Y}/\sqrt{N}\). We shall measure the performance of an estimator \(\mathbf{S}\), whose eigenvalues are of order 1 by convention, with the matrix MSE:

\[\text{mMSE}(\mathbf{S};\mathbf{\xi})=\frac{1}{N}\mathbb{E}\Big{\|}\mathbf{S}-\frac{\mathbf{\xi}\mathbf{\xi}^{\intercal}}{\sqrt{NP}}\Big{\|}_{F}^{2}\,, \tag{110}\]

where the matrix norm is the Frobenius norm. The estimator produced by decimation would thus be

\[\mathbf{S}_{\text{dec}}:=\sum_{\mu=1}^{P}\frac{\langle\mathbf{\eta}^{\mu}\rangle_{R=\mu-1}\langle\mathbf{\eta}^{\mu}\rangle_{R=\mu-1}^{\intercal}}{\sqrt{NP}} \tag{111}\]

In order to make the comparison we need to connect the mMSE predicted by the theory for the decimation estimator with the definition (110), namely to re-express the latter in terms of the order parameters of the decimation free entropies. This can be done as follows, leveraging the assumption (19).
By expanding the square in the mMSE definition evaluated at \(\mathbf{S}_{\text{dec}}\) we recognize three main contributions:

\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\xi_{i}^{\mu}\xi_{j}^{\mu}\xi_{i}^{\nu}\xi_{j}^{\nu}]=1+\alpha+o_{N}(1) \tag{112}\]
\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\xi_{i}^{\mu}\langle\eta_{j}^{\mu}\rangle\xi_{i}^{\nu}\langle\eta_{j}^{\nu}\rangle] \tag{113}\]
\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\langle\eta_{i}^{\mu}\rangle\langle\eta_{j}^{\mu}\rangle\langle\eta_{i}^{\nu}\rangle\langle\eta_{j}^{\nu}\rangle] \tag{114}\]

where we dropped the subscripts in the Gibbs brackets for convenience. While the first one can be computed right away using the properties of the prior, the other two require some extra effort. Concerning (113) we have:

\[\begin{split}\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\xi_{i}^{\mu}\langle\eta_{j}^{\mu}\rangle\xi_{i}^{\nu}\langle\eta_{j}^{\nu}\rangle]&=\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}\big{[}\delta_{\mu\nu}\xi_{i}^{\mu}\langle\eta_{j}^{\mu}\rangle\xi_{i}^{\mu}\langle\eta_{j}^{\mu}\rangle+\delta_{ij}(\xi_{i}^{\mu})^{2}\langle\eta_{i}^{\nu}\rangle^{2}\big{]}=\\ &=\frac{1}{P}\sum_{\mu=1}^{P}(m^{\mu})^{2}+\frac{\alpha}{P}\sum_{\mu=1}^{P}q^{\mu}+o_{N}(1)\end{split} \tag{115}\]

where we have enforced (19) and \(q^{\mu}\) and \(m^{\mu}\) are the overlap and Mattis magnetization respectively coming from the \(\mu\)-th decimation step. Let us now turn to (114). Using similar arguments one can argue that:

\[\frac{1}{N^{2}P}\sum_{i,j=1}^{N}\sum_{\mu,\nu=1}^{P}\mathbb{E}[\langle\eta_{i}^{\mu}\rangle\langle\eta_{j}^{\mu}\rangle\langle\eta_{i}^{\nu}\rangle\langle\eta_{j}^{\nu}\rangle]=\frac{1}{P}\sum_{\mu=1}^{P}(q^{\mu})^{2}+\alpha\Big{(}\frac{1}{P}\sum_{\mu=1}^{P}q^{\mu}\Big{)}^{2}+o_{N}(1) \tag{116}\]

Therefore, collecting all the contributions one gets the asymptotic prediction:

\[\text{mMSE}(\mathbf{S}_{\text{dec}};\mathbf{\xi})\simeq\frac{1}{P}\sum_{\mu=1}^{P}\big{(}1+(q^{\mu})^{2}-2(m^{\mu})^{2}\big{)}+\alpha\Big{(}1-\frac{1}{P}\sum_{\mu=1}^{P}q^{\mu}\Big{)}^{2}\,. \tag{117}\]

In Figure 4 we compare the performance of the RIE, in green, against the theoretical performance predicted for decimation, in red; the blue data points are obtained using the estimator produced by decimation (DAMP). As we can see there is a good agreement between DAMP and the theory, and both outperform the RIE, as we expected. The RIE appears more robust to both noises (a) and (b), tuned by \(\Delta\) and \(\alpha\) respectively. On the contrary, the performance of decimation deteriorates quickly as soon as we get out of the retrieval region in the phase diagrams of Figures 1-2, and the amount of noise it can bear is strongly affected by the nature of the signal (sparse Ising or continuous). However, one must bear in mind that RIEs are suitable only for matrix denoising, and no information is reconstructed on the signal factor \(\mathbf{\xi}\). Moreover, we notice that the performance of the RIE does not change appreciably from the left to the right panel (\(\rho=1\) to \(\rho=0.15\)), and this is coherent with its purely spectral nature. In fact, the empirical spectral distribution of \(\mathbf{\xi}\mathbf{\xi}^{\intercal}/\sqrt{NP}\) always converges to a Marchenko-Pastur law because of the completely factorized prior on the elements of \(\mathbf{\xi}\). Hence, the small changes from the left to the right panel are mostly due to the slight increment in the noise level \(\Delta\) and the aspect ratio (or load) \(\alpha\).
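The quantities entering this comparison can be sketched as follows. This is again only an illustration, not the code behind Figure 4, and the approximation of the Hilbert transform in (109) by a principal-value sum over the empirical eigenvalues is an assumed discretization.

```python
import numpy as np

def rie_denoise(Y, Delta):
    # Optimal RIE of eq. (109): keep the eigenbasis of Y/sqrt(N), shrink each eigenvalue
    # by 2*Delta times the Hilbert transform of the empirical spectral density,
    # approximated here by the principal-value sum over the other eigenvalues.
    N = Y.shape[0]
    lam, U = np.linalg.eigh(Y / np.sqrt(N))
    diff = lam[:, None] - lam[None, :]
    np.fill_diagonal(diff, np.inf)                 # drop the i = j term
    hilbert = np.mean(1.0 / diff, axis=1)          # ~ (1/N) sum_{j != i} 1/(lam_i - lam_j)
    return (U * (lam - 2.0 * Delta * hilbert)) @ U.T

def decimation_estimator(eta_hats):
    # S_dec of eq. (111): sum of the rank-one contributions produced by DAMP.
    N, P = eta_hats[0].shape[0], len(eta_hats)
    return sum(np.outer(e, e) for e in eta_hats) / np.sqrt(N * P)

def matrix_mse(S, xi):
    # Matrix MSE of eq. (110), with the Frobenius norm (single-sample version).
    N, P = xi.shape
    return np.linalg.norm(S - xi @ xi.T / np.sqrt(N * P)) ** 2 / N

def predicted_mmse(q, m, alpha):
    # Asymptotic prediction of eq. (117) from the per-step overlaps q^mu and
    # Mattis magnetizations m^mu given by the replica theory.
    q, m = np.asarray(q), np.asarray(m)
    return np.mean(1.0 + q**2 - 2.0 * m**2) + alpha * (1.0 - np.mean(q)) ** 2
```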
Figure 4: Matrix MSE as a function of \(\Delta\) for sparse Ising priors with various sparsities. In green the denoising performance of a RIE, obtained by averaging over 30 independent samples. Error bars, corresponding to one standard deviation, are too small to be seen. In red, the performance predicted for an algorithm implementing decimation. The blue data points are obtained averaging over 30 DAMP outputs, run on independently generated data. Error bars correspond to one standard deviation. In all cases \(\lambda=0\), \(\beta=8\) and \(N=1500\). **Left panel**: \(\rho=1\), \(\alpha=0.03\) namely \(P=45\) and \(\Delta=0.08\). **Middle panel**: \(\rho=0.2\), \(\alpha=0.07\) namely \(P=105\) and \(\Delta=0.09\). **Right panel**: \(\rho=0.15\), \(\alpha=0.07\) namely \(P=105\) and \(\Delta=0.1\).

### A ground state oracle for sparse Ising priors

Our ground state oracle is based on an iterated simulated annealing (SA) routine that can be found in Algorithm 2, which is a refinement of the one in [48].

```
Require: \(N\), \(\mathbf{Y}\), threshold, \(\beta_{\max}\in\mathbb{R}\), niter (\(\in\mathbb{N}\)), maxr (\(\in\mathbb{N}\)), restarts (\(\in\mathbb{N}\))
ity \(\leftarrow 0\)
found \(\leftarrow\) False
while ity \(<300\) and found == False do
    stop \(\leftarrow 0\)
    \(\beta\leftarrow 0\)
    \(k\leftarrow 0\)
    \(\mathbf{s}\leftarrow\) random sample from \(\prod_{i=1}^{N}P_{\xi}\)
    ity \(\leftarrow\) ity \(+\,1\)
    if ity \(+\) restarts \(>\) maxr then
        return \(\mathbf{s}\), ity
    end if
    if ity \(\%20=0\) then
        threshold \(\leftarrow\) threshold \(\cdot\,0.9975\)
    end if
    while \(k<\) niter do
        \(k\gets k+1\)
        \(\beta\leftarrow 1+\frac{k}{\text{niter}}\cdot\beta_{\max}\)
        \(\mathbf{h}\leftarrow\frac{\mathbf{Y}}{\sqrt{N}}\mathbf{s}\)
        \(V\leftarrow\frac{\|\mathbf{s}\|^{2}}{N}+\frac{\lambda}{N}(\|\mathbf{s}\|^{2}-1)\)
        \(\mathbf{Z}_{\text{loc}}\leftarrow(1-\rho)\mathbf{1}+\rho\cosh(\beta\mathbf{h})e^{-\frac{\beta V}{2}}\)   (scalar functions are applied component-wise to vectors)
        sample \(\mathbf{ss}\) from \(\exp\big{(}\beta\mathbf{h}\cdot(\cdot)-\frac{\beta V}{2}(\cdot)^{2}\big{)}/\mathbf{Z}_{\text{loc}}\)
        if \(\|\mathbf{s}-\mathbf{ss}\|<10^{-3}\) then
            \(\mathbf{s}\leftarrow\mathbf{ss}\)
            stop \(\leftarrow\) stop \(+\,1\)   (updates become negligible)
            if stop \(>5\) then
                if \(-E(\mathbf{s}\mid\mathbf{Y})>\) threshold then
                    return \(\mathbf{s}\), ity
                else
                    break   (wrong energy, try again)
                end if
            end if
        else
            stop \(\leftarrow 0\)
            \(\mathbf{s}\leftarrow\mathbf{ss}\)
        end if
    end while
end while
```
**Algorithm 2** Simulated annealing (SA)

The energy landscape at the various steps of decimation is very similar to that of the Hopfield model. Consequently, algorithms that search for minima frequently get stuck in metastable states, which have a low overlap with the patterns. SA is not immune to this phenomenon. Therefore, we equip our SA routine with an acceptance criterion for the configuration output by the algorithm, based on the computation of the energy:

\[-E(\mathbf{s}\mid\mathbf{Y}_{R})=\frac{1}{2\sqrt{N}}\mathbf{s}^{\intercal}\mathbf{Y}_{R}\mathbf{s}-\frac{\|\mathbf{s}\|^{4}}{4N}-\frac{\lambda}{4N}\big{(}\|\mathbf{s}\|^{2}-1\big{)}^{2} \tag{118}\]

which is nothing but the energy of our model at the \(R\)-th decimation step. Notice that this quantity is accessible to the Statistician and it is thus correct to use it as an input for a candidate algorithm.
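A minimal Python sketch of the two fully specified ingredients of Algorithm 2, the energy check (118) and the parallel heat-bath move that resamples every component from its local tilted prior, reads as follows. The annealing schedule, restart logic and threshold updates are omitted, and the nonzero prior values are taken to be \(\pm 1\) as in the pseudocode above; names are illustrative.

```python
import numpy as np

def minus_energy(s, Y, lam):
    # -E(s | Y_R) of eq. (118); larger values indicate better pattern candidates.
    N = len(s)
    s2 = np.dot(s, s)
    return (s @ Y @ s) / (2.0 * np.sqrt(N)) - s2**2 / (4.0 * N) - lam * (s2 - 1.0)**2 / (4.0 * N)

def heat_bath_sweep(s, Y, beta, lam, rho, rng):
    # One parallel update of Algorithm 2: each component is resampled from the local
    # measure ~ P_xi(x) exp(beta*h_i*x - beta*V*x^2/2) on {0, +1, -1}.
    N = len(s)
    h = Y @ s / np.sqrt(N)
    s2 = np.dot(s, s)
    V = s2 / N + lam * (s2 - 1.0) / N
    w0 = (1.0 - rho) * np.ones(N)                          # weight of x = 0
    wp = 0.5 * rho * np.exp(beta * h - 0.5 * beta * V)     # weight of x = +1
    wm = 0.5 * rho * np.exp(-beta * h - 0.5 * beta * V)    # weight of x = -1
    Z = w0 + wp + wm           # = (1-rho) + rho*cosh(beta*h)*exp(-beta*V/2), i.e. Z_loc
    u = rng.random(N) * Z
    return np.where(u < wp, 1.0, np.where(u < wp + wm, -1.0, 0.0))

# One annealing step at inverse temperature beta on the decimated data Yr:
# s = heat_bath_sweep(s, Yr, beta, lam, rho, np.random.default_rng(0))
```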
In Algorithm 2, niter is the maximum number of temperature updates we allow, while maxr is the maximum number of restarts allowed, counting also the restarts coming from previous pattern searches. The reason why we introduced this additional control is that, typically, when a bad configuration is accepted as a pattern estimate by mistake, the ensuing searches for other patterns require even more restarts. The above SA routine has to be combined with decimation, so once a configuration is accepted as a pattern the observations are modified, \(\mathbf{Y}\leftarrow\mathbf{Y}-\frac{\mathbf{s}\mathbf{s}^{\intercal}}{\sqrt{N}}\), and the routine is restarted. In order to make sure we really find patterns, we thus run the whole algorithm (SA plus decimation) multiple times, typically five, and then we accept the output that required the least number of restarts to be produced. This procedure is costly, and as noticed already in [48], it requires an exponential number of restarts. Algorithm 2 suffers from the same issues as the one in [48]. For instance, the overall decimation procedure still requires an exponential (in \(N\)) number of restarts. However, the presence of sparsity introduces further non-trivial complications. In fact, the signal components are no longer constrained on the hypercube, and this allows for fluctuations in the norm of the outputs that reflect in fluctuations of the average energy of the patterns. Specifically, the more sparse the signal is, the wider the gap between the highest and the lowest energy of the patterns. These fluctuations can challenge the energy restarting criterion in our SA routine, which can thus confuse a metastable state for a pattern. Furthermore, one observes that when too few patterns are stored or remain in \(\mathbf{Y}\), it is harder for the SA routine to find them. If, for instance, we only have one pattern left, the Hebbian matrix \(\boldsymbol{\xi}\boldsymbol{\xi}^{\intercal}\), which is supposed to attract the \(\mathbf{x}\)-configurations towards the pattern, has only a fraction \(\rho^{2}\) of non-zero components. This gives rise to a large number of configurations that have degenerate energy, close to \(0\). The energy landscape thus appears as a golf course, flat almost everywhere, except for a pit corresponding to the pattern that is left. From our numerical experiments, this effect seems to hold also for more than one, but still few, patterns stored. See Figure 5.

### Reversed decimation

In all the tests we have run, the performance of decimation in reconstructing the patterns improves along the procedure itself. The last patterns are always better estimated than the first ones, and this supports the idea that decimation effectively decreases the pattern interference. In particular, it is clear that the quality of reconstruction of one pattern depends on the previous "history" of the process. Once the procedure exhausts the patterns, one can imagine running it again backwards, keeping the last half of the patterns, which were reconstructed with higher accuracy. As illustrated in Figure 6, this improves the reconstruction performance also for the first half of the patterns. One can then re-iterate the same procedure, keeping only the first \(1/2\) and the last \(1/4\) of the patterns, which are now the best reconstructed ones.

Figure 5: Energy landscape exploration of the Simulated Annealing applied to sparse Ising priors.
On the vertical axis we have the energy value as a function of the number of iterations (temperature updates) of SA on the horizontal axis. For all the three plots \(N=1500\), \(\alpha=0.01\) (namely only \(15\) patterns to be found), \(\Delta=0.05\) and \(\lambda=-0.08\). From the left to the right: \(\rho=1,0.3,0.15\). The patterns were reconstructed exactly in all three cases. SA finds the patterns immediately for low sparsities \(\rho\sim 1\). As soon as sparsity increases, a lot of configurations start to exhibit an almost vanishing energy (recall that the noise shifts this value). The dashed blue lines mark the highest and the lowest pattern energy. As we can see the band they identify is narrow with low sparsity, and it becomes wider for higher values of sparsity due to more intense fluctuations.

This in turn leads to a further improvement in the reconstruction also for the middle patterns. This reasoning can be iterated ad libitum. In Figure 6 we see how performance improves in the various rounds of decimation, and we compare it to the performance predicted by the rank-one formula, i.e. what we should have for any sub-linear rank (\(\alpha=0\), see Section 7). We see that, little by little, the performance approaches that of the rank-one formula.

## 7 Related works

### Unlearning and dreaming

As evident from Figure 1, without strong sparsity the storage capacity of the model is not very large, and the network is far from being able to store an over-complete basis of \(\mathbb{R}^{N}\). In an attempt to solve this issue one can pre-process the observation matrix with Hebbian unlearning [58, 59], with which decimation itself bears some similarity. Unlearning consists in iterating a zero temperature dynamics until convergence, which is likely to occur at a spurious state \(\mathbf{\eta}\) that is then removed from the observations, \(\mathbf{Y}\leftarrow\mathbf{Y}-\varepsilon\mathbf{\eta}\mathbf{\eta}^{\intercal}/\sqrt{N}\), with a small \(\varepsilon\). If run for an appropriate number of times, unlearning acts on the energy landscape by penalizing spurious metastable states. This procedure has two fundamental parameters to be tuned: \(\varepsilon\) and the number of times \(D\) it is iterated [60]. If \(\varepsilon\) or \(D\) are too large one risks removing the wanted patterns as well. Apart from numerical evidence, there is little theoretical understanding of the unlearning procedure as illustrated above. However, there are other convenient iterative ways of modifying the Hebbian matrix [61, 62, 63, 64] that converge to the so-called pseudo-inverse learning rule (or modifications of it) [65, 66, 67], which in turn is able to increase the storage capacity to \(\alpha_{c}=1\). Despite the apparent similarities, the goal of decimation is very different from that of unlearning. Its aim is to find a pattern, and not a metastable state, and to remove it completely (or almost completely) from \(\mathbf{Y}\), which amounts to setting \(\varepsilon=1\) (or close to \(1\)) above. Furthermore, it is worth stressing that, unlike classical unlearning,
we have a theoretical control on decimation, namely we can track its behaviour step by step.

Figure 6: Improvement in performance obtained re-iterating decimation for Rademacher prior. In this example \(\Delta=0.08\), \(\alpha=0.03\), \(\rho=1\) and \(\beta=10\). The blue line is the first run, where the expected MSE on the reconstruction of the single patterns decreases along decimation. The magenta curve is instead obtained by fixing the last half of pattern MSEs, and running decimation backwards. Starting from the magenta line, we obtained the green solid line by fixing the first half and the last quarter of MSEs, and then running decimation for finding the third quarter of MSEs. Finally, the red dashed line was obtained from the green line running decimation again, with fixed first quarter and last half of MSEs. The blue dashed line is the expected MSE predicted by the rank one formula. Coherently, the last decimation steps approach the rank-one formula MSE from above, because the interference noise has been almost completely eliminated, except for the noise of decimation itself, which is responsible for the final small gap.

### Sub-linear rank

In a recent work [57] the authors discuss the denoising of large matrices in the same setting as ours, with a main focus on the case \(P=N^{\delta}\), \(\delta\in(0,1)\), i.e. a sub-linear rank regime. In the mentioned paper, it is stated that, as long as the prior on the \(N\times P\) matrix \(\boldsymbol{\xi}\) is completely factorized over the matrix elements, the mutual information between \(\boldsymbol{\xi}\) and the data is given by the rank-one replica formula for _any_ sub-linear rank regime, in agreement with [68]. Though not explicitly stated in our previous work [48], our findings indeed suggest the same result, as can be deduced from Section 3.2. In fact our free entropy, which is in close relation with the mutual information between observations and signal, takes the same form for any \(P\) such that \(P/N\to 0\). Furthermore, for \(\alpha=0\) and \(\beta=1/\Delta\), the fixed point equations admit a self-consistent solution that satisfies the Nishimori identities, which suggests that Bayes-optimality is recovered. From the form of the free entropy (41), it is also evident that the effect of decimation is visible only for truly extensive rank. The reason is that, if we penalize a finite number of directions in a space of dimension growing to infinity, the system can easily find other favoured directions to settle into. In other words, the \(p^{\mu}(\mathbf{x})\)'s in (17) give a sub-extensive contribution that can be neglected in any sub-linear rank regime.

Another delicate point is the definition of DAMP. We stress that in (105) and (106) the presence of a high-rank spike inside \(\mathbf{Y}\) can induce non-trivial modifications both in \(\mathbf{A}\) and \(\mathbf{B}\). More specifically, it is known that, for instance, the Onsager reaction in (105) containing \(\mathbf{Y}^{\circ 2}\) has different asymptotically equivalent formulations. In the case of a Gaussian channel with a low-rank spike, \(\mathbf{Y}^{\circ 2}\) can be replaced by an all-ones matrix. This is due to the fact that the rank of the spike is not large enough to induce modifications in the spectrum of the noise matrix. In the high-rank regime, on the contrary, the extensive rank starts to play a role and gives rise to important contributions in the reaction term. Moreover, the reaction term changes also along the decimation procedure, in which one further perturbs the data matrix with the high rank matrix of the decimation estimates \(\sum_{\mu=P-R+1}^{P}\frac{\boldsymbol{\eta}^{\mu}\boldsymbol{\eta}^{\mu\intercal}}{\sqrt{N}}\). Hence, the formulation in (105)-(106) turns out to be convenient. The low-rank regime is insensitive to the aforementioned changes.
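In the notation of the AMP sketch given earlier, the difference between the two formulations of the reaction term amounts to a one-line change; the low-rank replacement below is shown only to make the distinction concrete, and is not valid in the extensive-rank, decimated setting.

```python
import numpy as np

def onsager_field(Y, x, x_old, v, beta, low_rank=False):
    # Effective field A of eq. (105). With low_rank=True, Y**2 is replaced by the
    # all-ones matrix (so (Y**2) @ v becomes v.sum() broadcast over components), the
    # simplification valid for a low-rank spike in a Gaussian channel; with
    # low_rank=False we keep the full reaction term needed at extensive rank.
    N = Y.shape[0]
    reaction = x_old * (v.sum() if low_rank else (Y**2) @ v)
    return beta / np.sqrt(N) * (Y @ x) - beta**2 / N * reaction
```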
Although we were not able to prove it, Figure 6 suggests that re-iterating decimation in a proper way could lead to a performance similar to that predicted by the low rank replica symmetric formula. One may be led to think that reversed decimation yields Bayes-optimal performance. This is however not true. In fact, in the high rank case the spike induces a non-negligible perturbation of the spectrum of the noise matrix that can be used to perform inference (this deformation is captured by the RIE for instance), especially for large \(\alpha\)'s, where decimation fails.

### Channel universality properties

Low-rank spiked models are known to fulfill channel universality [69, 70, 71], namely for any well-behaved \(P_{\text{out}}(y\mid x)\) and data generated with the rule

\[Y_{ij}\sim P_{\text{out}}\Big{(}\cdot\mid\sum_{\mu=1}^{P}\frac{\xi_{i}^{\mu}\xi_{j}^{\mu}}{\sqrt{N}}\Big{)} \tag{119}\]

the mutual information between the data \(\mathbf{Y}\) and \(\boldsymbol{\xi}\) can be computed through an equivalent Gaussian channel as in (1) with a properly tuned noise intensity \(\Delta\). The proof of this equivalence requires two concomitant behaviours, _i)_ universality in the likelihood, and _ii)_ universality in the quenched disorder (i.e. the law of the data \(\mathbf{Y}\)), and holds as long as \(P^{3}/\sqrt{N}\to 0\) [70]. Informally, the main idea is to expand \(P_{\text{out}}\Big{(}\cdot\mid\sum_{\mu=1}^{P}\frac{\xi_{i}^{\mu}\xi_{j}^{\mu}}{\sqrt{N}}\Big{)}\) around \(0\) in its second entry up to second order, since for low-rank spikes \(\sum_{\mu=1}^{P}\frac{\xi_{i}^{\mu}\xi_{j}^{\mu}}{\sqrt{N}}\) is small for any fixed couple of indices \(i,j\). On the contrary, in the high-rank setting the higher moments of the spike start to matter, meaning that the previous expansion fails, and universality breaks down.

In our mismatched setting one can still count on the universality of the likelihood _for a single decimation step_. In fact, here the Statistician assumes they observe a low-rank spike, that is, they consider

\[Y_{ij}\sim P_{\text{out}}\Big{(}\cdot\mid\frac{x_{i}x_{j}}{\sqrt{N}}\Big{)} \tag{120}\]

whereas the data are generated through (1). The free entropy of the related model reads as

\[\frac{1}{N}\mathbb{E}[\log\mathcal{Z}_{R}-\sum_{i,j}\log P_{\text{out}}(Y_{ij}\mid 0)]=\frac{1}{N}\mathbb{E}\log\int dP_{\xi}(\mathbf{x})\exp\Big{[}\sum_{i,j}\Big{(}\log P_{\text{out}}\Big{(}Y_{ij}\mid\frac{x_{i}x_{j}}{\sqrt{N}}\Big{)}-\log P_{\text{out}}(Y_{ij}\mid 0)\Big{)}\Big{]} \tag{121}\]

where \(\sum_{i,j}\log P_{\text{out}}(Y_{ij}\mid 0)\) has been subtracted to have a proper scaling. From the above equation one readily realizes that an expansion up to second order of \(P_{\text{out}}\) yields the desired equivalent quadratic model, for which our computations hold. However, we stress that exploiting this universality produces errors of \(O(N^{-1/2})\). These errors accumulate along the \(P=O(N)\) steps of decimation, resulting in potentially non-negligible deviations from the original model towards the end of the procedure.

## 8 Conclusion and outlooks

Building on the results of [48], we have extended the analysis of the decimation procedure to a wide class of priors on the matrix elements of the factors \(\boldsymbol{\xi}\) for symmetric matrix factorization. We provided exhaustive numerical evidence in support of our replica theory, via the introduction of DAMP, whose performance in pattern retrieval and matrix denoising matches the one predicted by the theory.
Our numerical experiments confirm that decimation is a viable strategy for matrix factorization. In particular, as long as the first step is feasible, i.e. the procedure is started at a point of the phase diagram where there is a non-vanishing Mattis magnetization with one of the patterns, decimation is able to find all of them, up to a permutation. We stress again that DAMP is not an appropriate algorithm for inference, since it needs a strongly informative initialization. Nevertheless, in the case of sparse Ising priors, we were able to find a ground state oracle that is able to find all the patterns in suitable regions of the phase space of the decimation neural network models. The latter still suffers from an exponential complexity: it needs an exponential number of restarts (in \(N\)) in order to find all the patterns and correctly discard the spurious states it may get stuck in.

The ideas of reversed decimation and unlearning are insightful perspectives. In fact, in order to increase the storage capacity of the neural networks, or equivalently to widen the region of the phase space where we can perform matrix factorization, one could pre-process the Hebbian interaction matrix using a local updating rule, such as the ones described in [63, 72]. In these works, besides the usual "forgetting" mechanism, the authors also consider a consolidation of the memories, which avoids the risk of corrupting the Hebbian interaction too much. This pre-processing could be combined with reversed decimation in order to obtain a better performing procedure that is also more robust to pattern interference.

Finally, in an upcoming work, we shall tackle the asymmetric problem, which is closer to practical applications. Here, the Statistician has to reconstruct two independent matrices \(\mathbf{F}\in\mathbb{R}^{N\times P}\) and \(\mathbf{X}\in\mathbb{R}^{P\times M}\) from the observations

\[\mathbf{Y}=\frac{1}{\sqrt{N}}\mathbf{F}\mathbf{X}+\sqrt{\Delta}\mathbf{Z}\in\mathbb{R}^{N\times M} \tag{122}\]

in the scaling limit \(N,M,P\rightarrow\infty\) with \(P/N=\alpha>0\) and \(P/M=\gamma>0\).

## Acknowledgments

We would like to thank Enzo Marinari and Federico Ricci-Tersenghi for their suggestions on the reversed decimation, Enzo Marinari and Marco Benedetti for discussions on unlearning, as well as Florent Krzakala, Lenka Zdeborova and Jean Barbier for many fruitful discussions on matrix factorization. MM acknowledges financial support by the PNRR-PE-AI FAIR project funded by the NextGeneration EU program.
2306.17553
On mixed-flux worldsheet scattering in AdS3/CFT2
Strings on AdS3xS3xT4 with mixed Ramond-Ramond and Neveu-Schwarz-Neveu-Schwarz flux are known to be classically integrable. This is a crucial property of this model, which cannot be studied by conventional worldsheet-CFT techniques. Integrability should carry over to the quantum level, and the worldsheet S matrix in the lightcone gauge is known up to the so-called dressing factors. In this work we study the kinematics of mixed-flux theories and consider a relativistic limit of the S matrix whereby we can complete the bootstrap program, including the dressing factors for fundamental particles and bound states. This provides an important test for the dressing factors of the full worldsheet model, and offers new insights on the features of the model when the amount of NSNS flux is low.
Sergey Frolov, Davide Polvara, Alessandro Sfondrini
2023-06-30T11:10:27Z
http://arxiv.org/abs/2306.17553v2
# On mixed-flux worldsheet scattering in AdS\({}_{3}\)/Cft\({}_{2}\) ###### Abstract Strings on \(AdS_{3}\times S^{3}\times T^{4}\) with mixed Ramond-Ramond and Neveu-Schwarz-Neveu-Schwarz flux are known to be classically integrable. This is a crucial property of this model, which cannot be studied by conventional worldsheet-CFT techniques. Integrability should carry over to the quantum level, and the worldsheet S matrix in the lightcone gauge is known up to the so-called dressing factors. In this work we study the kinematics of mixed-flux theories and consider a relativistic limit of the S matrix whereby we can complete the bootstrap program, including the dressing factors for fundamental particles and bound states. This provides an important test for the dressing factors of the full worldsheet model, and offers new insights on the features of the model when the amount of NSNS flux is low. ## 1 Introduction The holographic correspondence between strings on \(AdS_{3}\) spaces and two-dimensional superconformal field theories stands out from other AdS/CFT setups [1]. In this case, it is possible to continuously interpolate between supergravity backgrounds supported by a Neveu-Schwarz-Neveu-Schwarz (NSNS) \(B\)-field and field strength \(H=dB\), but without any Ramond-Ramond (RR) fluxes, to a supergravity background with RR fluxes but no \(B\)-field.1 This gives rise to a one-parameter family of "mixed-flux" backgrounds; all of them are related to each other by S-duality, which is non-perturbative in the string coupling \(g_{s}\). In fact, the perturbative-string description of observables such as the string spectrum (for generic, non-protected states) is quite different as one tunes the ratio of NSNS and RR fluxes. Footnote 1: See refs. [2; 3] for a discussion of moduli in the AdS3/CFT2 correspondence. The simplest setup, which we will consider in this paper, is given by the \(AdS_{3}\times S^{3}\times T^{4}\) geometry. The case of NSNS fluxes only is the best understood, as the worldsheet CFT is given by a level-\(k\) (supersymmetric) Wess-Zumino-Novikov-Witten model on the worldsheet which can be solved [4]. The energy levels can be written in a closed form and, like for _e.g._ flat-space strings, are very degenerate. Only in these cases the dual CFT is understood [5; 6; 7]. Turning on the RR flux is believed to lift these degeneracies, though it makes it extremely difficult to study the spectrum by worldsheet-CFT techniques, as the worldsheet model becomes nonlocal (or is accompanied by an intricated system of ghost fields which do not decouple). Very remarkably, the classical string non-linear sigma model (NLSM) is intergable for any combination of the fluxes [8]. In fact, it is believed that this integrability carries over to the quantum level too when the string is quantised in a suitable lightcone gauge, see _e.g._[9]. This is what happens for \(AdS_{5}\) and \(AdS_{4}\) strings, see [10; 11]. The study of quantum integrability is typically done in several steps. The first step is to consider the gauge-fixed model on a plane (the decompactified string worldsheet) and fix its scattering matrix there through symmetries. If the model is integrable, it is sufficient to fix the two-to-two S matrix, as higher processes follow from it. Then, much of the S matrix can be fixed by considering the linearly-realised symmetries of the gauge-fixed model. This was done in [12; 13] building also on perturbative and semiclassical considerations [14; 15]. 
This leaves some pre-factors, the so-called _dressing factors_, undetermined. They can only be fixed under some assumptions on the analytic structure of the theory, and by imposing unitarity and crossing symmetry. While this process is relatively simple for relativistic models, it is rather subtle for non-relativistic ones, such as the ones arising on the string worldsheet, see [16; 17; 18]. In the case of \(AdS_{3}\times S^{3}\times T^{4}\), there is a recent proposal for the dressing factors for pure-RR [19] and pure-NSNS [20] theories, but not for the generic mixed-flux ones. The main reason is that the underlying analytic structure is quite mysterious and unique. In this paper we study the mixed-flux models in a limit where they become relativistic. Then, we are able to uniquely fix the S matrix including the dressing factors by imposing analyticity and consistency with the bound-state content of the model. A similar limit was studied in the past in [21]. However, both the detailed definition of the limit and the conclusions reached differ quite substantially. In that case, the authors constructed a theory of only massless relativistic excitations. Here instead we find a model of massive and massless particles. More specifically, we have \((k-1)\) distinct massive particle representations, where \(k\) is the (quantised) NSNS flux. These massive particles correspond to the limit of the \(AdS_{3}\times S^{3}\) massive excitations and to their bound states. Two more representations are massless, and they are related to the \(T^{4}\) modes. The resulting model is of interest in and of itself, and closely related to that of Fendley and Intriligator [22; 23]; importantly, it provides a check for future proposals of the mixed-flux dressing factors of the full worldsheet model. This paper is structured as follows. In section 2 we review the main properties of mixed-flux theories, including some features which were not previously discussed in the literature, such as their bound states and the analytic structure of the rapidity plane. That structure is further detailed in appendix A, while appendix B contains a list of the S-matrix elements of [13] in the conventions of this paper. In section 3 we discuss the relativistic limit, and carry it out at the level of the algebra and of the S-matrix elements. By using crossing and analyticity we then fix the dressing factors in section 4. In appendix C we perform the S-matrix bootstrap for the relativistic model assuming only its symmetries and particle contents (without taking the limit of the S-matrix elements); interestingly, for specific processes, this allows for more general solutions. We conclude in section 5. ## 2 Worldsheet scattering for mixed-flux theories The all-loop scattering for \(AdS_{3}\times S^{3}\times T^{4}\) strings supported by a mixture of RR and NSNS fluxes was studied in [13] and the results were comprehensively summarised in [24]. Here we follow the notation of [24] and refer the reader there for further details. ### Symmetries and fundamental particles The dispersion relation of the model is [15] \[E(m,p)=\sqrt{\left(m+\frac{k}{2\pi}p\right)^{2}+4h^{2}\sin^{2}\frac{p}{2}}\,. \tag{1}\] Here \(k=0,1,2,\dots\) and \(h\geq 0\) are parameters of the theory, corresponding to the strength of the NSNS and RR background fluxes, respectively. The parameters \(m,p\) identify the various fundamental particles of the model as it can be found in a near-pp-wave expansion [14; 25] (that is, at small \(p\) and large string tension). 
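Purely as an illustration (not part of the original analysis), the dispersion relation (1) is straightforward to evaluate numerically; the short Python sketch below, with names of our choosing, checks that it reduces to \(E=|m+\tfrac{k}{2\pi}p|\) at the pure-NSNS point \(h=0\) and to \(E=\sqrt{m^{2}+4h^{2}\sin^{2}(p/2)}\) at the pure-RR point \(k=0\).

```python
import numpy as np

def dispersion(m, p, k, h):
    # Mixed-flux dispersion relation, eq. (1).
    return np.sqrt((m + k * p / (2 * np.pi)) ** 2 + 4 * h**2 * np.sin(p / 2) ** 2)

p = np.linspace(0, 2 * np.pi, 5)

# Pure-NSNS point (h = 0): the energy is linear in the momentum, E = |m + k p / (2 pi)|.
assert np.allclose(dispersion(1, p, k=5, h=0.0), np.abs(1 + 5 * p / (2 * np.pi)))

# Pure-RR point (k = 0): the familiar E = sqrt(m^2 + 4 h^2 sin^2(p/2)).
assert np.allclose(dispersion(1, p, k=0, h=2.0), np.sqrt(1 + 16 * np.sin(p / 2) ** 2))
```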
In particular we have two bosons on \(AdS_{3}\), which have \(m=\pm 1\), two bosons on \(S^{3}\), which also have \(m=\pm 1\), and four bosons on \(T^{4}\), all of which have \(m=0\). We also have as many fermions, with the same values of \(m\). In fact, these particles arrange themselves in supermultiplets. Let us briefly review this structure. The full model has a bosonic \(so(2,2)\oplus so(4)\) symmetry from \(AdS_{3}\times S^{3}\), which can be split as \((su(1,1)_{\rm L}\oplus su(1,1)_{\rm R})\oplus(su(2)_{\rm L}\oplus su(2)_{\rm R})\).2 For future convenience, let us introduce the four Cartan elements of \((su(1,1)_{\rm L}\oplus su(1,1)_{\rm R})\oplus(su(2)_{\rm L}\oplus su(2)_{\rm R})\), Footnote 2: The \(T^{4}\) part also enjoys four \(so(2)\) shift-symmetries and a local \(so(4)\cong su(2)_{\bullet}\oplus su(2)_{\circ}\) rotation symmetry. \[{\bf L}_{0},\quad\widetilde{{\bf L}}_{0},\qquad{\bf J}^{3},\quad\widetilde{{ \bf J}}^{3}. \tag{2}\] This factorisation between "left" and "right" extends to the whole superalgebra, which takes the form \(psu(1,1|2)_{\rm L}\oplus psu(1,1|2)_{\rm R}\). In the notation of [24], the BPS bounds of this algebra are3 Footnote 3: Here \(\mathbf{L}_{0}\) is the Cartan element of \(su(1,1)_{\rm L}\); the minus sign is such that its eigenvalues are bounded from below, rather than from above, on unitary representations. Similarly for \(\widetilde{\mathbf{L}}_{0}\). \[\mathbf{H}\equiv-\mathbf{L}_{0}-\mathbf{J}^{3}\geq 0\,,\qquad\widetilde{\mathbf{H} }\equiv-\widetilde{\mathbf{L}}_{0}-\widetilde{\mathbf{J}}^{3}\geq 0\,. \tag{3}\] In total we have eight supercharges (with dimension \(+1/2\)) which we will denote by \(\mathbf{Q}\) and \(\widetilde{\mathbf{Q}}\), and eight superconformal generators (with dimension \(-1/2\)), denoted by \(\mathbf{S}\) and \(\widetilde{\mathbf{S}}\). We are mostly interested in the symmetries that commute with the lightcone Hamiltonian \[\mathbf{E}=\mathbf{H}+\widetilde{\mathbf{H}}\,. \tag{4}\] These include the Cartan elements, which we arrange in the following combinations4 Footnote 4: Equivalently, instead of considering \(\Delta\mathbf{J}\) we could have considered \(\mathbf{B}\equiv-(\mathbf{L}_{0}-\widetilde{\mathbf{L}}_{0})+(\mathbf{J}^{3} -\widetilde{\mathbf{J}}^{3})\) which is orthogonal to \(\mathbf{M}\). \[\mathbf{M}\equiv\mathbf{H}-\widetilde{\mathbf{H}}\,,\qquad\Delta\mathbf{J} \equiv-\mathbf{J}^{3}+\widetilde{\mathbf{J}}^{3}\,, \tag{5}\] and the lightcone momentum \[\mathbf{P}_{+}=-\mathbf{L}_{0}-\widetilde{\mathbf{L}}_{0}+\mathbf{J}^{3}+ \widetilde{\mathbf{J}}^{3}\,, \tag{6}\] which is related to the worldsheet length [10] and decouples in the limit where the worldsheet is a plane (which is where we will work to discuss the S matrix). Half of the supercharges (for a total of eight) commutes with \(\mathbf{E}\) and forms the algebra [26] \[\begin{split}\{\mathbf{Q}^{A},\,\mathbf{S}_{B}\}&= \delta^{A}_{B}\,\mathbf{H}\,,\qquad\{\mathbf{Q}^{A},\,\widetilde{\mathbf{Q}}_{ B}\}=\delta^{A}_{B}\,\mathbf{C}\,,\\ \{\widetilde{\mathbf{Q}}^{A},\,\widetilde{\mathbf{S}}_{B}\}& =\delta^{A}_{B}\,\widetilde{\mathbf{H}}\,,\qquad\{\mathbf{S}_{A },\,\widetilde{\mathbf{S}}^{B}\}=\delta^{A}_{B}\,\mathbf{C}^{\dagger}\,.\end{split} \tag{7}\] The generator \(\Delta\mathbf{J}\) (or equivalently \(\mathbf{B}\)) acts as an automorphism on the fermionic generators, because they carry spin. 
The new central charges \(\mathbf{C},\mathbf{C}^{\dagger}\) are akin to Beisert's central extension [27, 28] and in this case couple the left- and right-superalgebras. We stress that these supercharges are a feature of the lightcone-gauge-fixed model; they act nontrivially on unphysical states (_e.g._, on a single-particle state of momentum \(p\)) and must annihilate physical states. More precisely, a physical state \(|\mathrm{phys}\rangle\) must obey \[\begin{split}\mathbf{E}|\mathrm{phys}\rangle=E|\mathrm{phys} \rangle\quad\text{with }E\geq 0\,,\qquad\mathbf{M}|\mathrm{phys}\rangle=M|\mathrm{phys} \rangle\quad\text{with }M\in\mathbb{Z}\,,\\ \mathbf{C}|\mathrm{phys}\rangle=\mathbf{C}^{\dagger}|\mathrm{phys} \rangle=0\,.\end{split} \tag{8}\] Generic (multi-particle) physical states are build out of several (single-particle) _unphysical_ states. From a near-pp-wave [14, 25] and semiclassical analysis [13, 15] we expect that the central charges can be written in terms of the worldsheet momentum \(\mathbf{p}\) as \[\mathbf{C}=\frac{ih}{2}\left(e^{i\mathbf{p}}-1\right),\qquad\mathbf{C}^{ \dagger}=\frac{ih}{2}\left(1-e^{-i\mathbf{p}}\right),\qquad\mathbf{M}=\frac{ k}{2\pi}\mathbf{p}+m\,. \tag{9}\] In this formula, \(m\in\mathbb{Z}\) distinguishes different representations. This is nicely consistent with (8) by using the physical-state condition which comes from level matching, \[\mathbf{p}|\mathrm{phys}\rangle=p|\mathrm{phys}\rangle\,,\qquad\text{with }p=0\mod 2\pi\,. \tag{10}\] The form of the energy (1) finally follows from the shortening condition [29] \[{\bf H}\,\widetilde{\bf H}={\bf C}^{\dagger}{\bf C}\,. \tag{11}\] Finally, remark that the value of \(m\) can be read off by considering a zero-momentum state, in which case \(M=m\). However, we immediately notice a possible subtlety due to the fact that the eigenvalues \(M\) of \({\bf M}\) (and indeed of all central charges) are invariant under a simultaneous shift of \(p\) and \(m\), \[M(m,p)=M(m+k,p-2\pi)\,. \tag{12}\] This leads to an ambiguity. In fact, while fundamental particles have \(m=\pm 1,0\) we expect that _bound states_ may exist, with larger values of \(|m|\). Eq. (12) seems to suggest that most of the possible bound states (indeed, all but \(k\) particles) can be obtained by "boosting"5 the momentum of a finite set of particles. This was observed in the \(h=0\) limit of this model [30], where it can be described by a level-\(k\) WZNW model. However, at generic \(k\) and \(h\) it is not immediately obvious whether \(p\) should be allowed to take any real value, or should be bounded in the interval \([0,2\pi]\) as it seems to be the case semiclassically [15].6 To understand better this issue, we review the structure of the fundamental representations and of possible bound-state representations. Footnote 5: We use the term “boost” loosely, as the model in not relativistic on the worldsheet. Footnote 6: In [15] the range of the momentum is actually taken to be \([-\pi,\pi]\) but from their eq. (32), using the principal branch of the logarithm, the domain actually is \([0,2\pi]\). ### Fundamental-particle representations In light-cone gauge we expect the theory to feature eight bosons and eight fermions. They fit into four four-dimensional representations of the algebra (7) and are described in table 1. The lowering operators of the algebra, in this notation, are \({\bf Q}^{A}\) and \(\widetilde{\bf S}^{A}\). 
Their action is proportional, \({\bf Q}^{A}\sim\widetilde{\bf S}^{A}\), because the particles transform in a short representation, _cf._ (11). In fact, all short representations of (7) are necessarily four-dimensional. This means that supersymmetric bound-state representations, if they exist, should have a form similar to the representations of table 1 up to tweaking the values of \(m\) and the eigenvalue of \(\Delta{\bf J}\). \begin{table} \begin{tabular}{|l|c|c|} \hline \((m=+1)\) & **State** & \(\Delta{\bf J}\) \\ \hline \(S^{3}\) bos. & \(|Y(p)\rangle\) & \(+1\) \\ ferm. & \(|\Psi^{A}(p)\rangle\) & \(+\frac{1}{2}\) \\ \(AdS_{3}\) bos. & \(|Z(p)\rangle\) & \(0\) \\ \hline \end{tabular} \begin{tabular}{|c|c|c|} \hline \((m=-1)\) & **State** & \(\Delta{\bf J}\) \\ \hline \(AdS_{3}\) bos. & \(|\bar{Z}(p)\rangle\) & \(0\) \\ ferm. & \(|\tilde{\Psi}^{A}(p)\rangle\) & \(-\frac{1}{2}\) \\ \(S^{3}\) bos. & \(|\bar{Y}(p)\rangle\) & \(-1\) \\ \hline \end{tabular} \begin{tabular}{|c|c|c|} \hline \((m=-1)\) & **State** & \(\Delta{\bf J}\) \\ \hline \(AdS_{3}\) bos. & \(|\bar{Z}(p)\rangle\) & \(0\) \\ ferm. & \(|\tilde{\Psi}^{A}(p)\rangle\) & \(-\frac{1}{2}\) \\ \(T^{4}\) bos. & \(|T^{AA}(p)\rangle\) & \(0\) \\ ferm & \(|\tilde{\chi}^{A}(p)\rangle\) & \(-\frac{1}{2}\) \\ \hline \end{tabular} \end{table} Table 1: A summary of the representations under which the eight bosons and eight fermions transfer. In each table, the top state is the highest-weight state of the representation and the bottom is the lowest-weight state. We see that \(\Delta{\bf J}\) decreases along the representation, while \(M=m+\frac{k}{2\pi}p\) is constant for a given worldsheet momentum \(p\). As the particles are identified perturbatively, we implicitly take \(p\) small. In practice, to discuss the precise form of the representations it is convenient to introduce a smaller algebra, generated by four supercharges, \[\{\mathbf{q},\,\mathbf{s}\}=\mathbf{H}\,,\quad\{\widetilde{\mathbf{q}},\, \widetilde{\mathbf{s}}\}=\widetilde{\mathbf{H}}\,,\qquad\{\mathbf{q},\, \widetilde{\mathbf{q}}\}=\mathbf{C}\,,\quad\{\mathbf{s},\,\widetilde{\mathbf{s }}\}=\mathbf{C}^{\dagger}. \tag{13}\] Clearly \[\mathbf{Q}^{1}=\mathbf{q}\otimes\mathbf{1},\quad\mathbf{Q}^{2}=\Sigma\otimes \mathbf{q},\qquad\mathbf{S}_{1}=\mathbf{s}\otimes\mathbf{1},\quad\mathbf{S}_{ 2}=\Sigma\otimes\mathbf{s}, \tag{14}\] where \(\Sigma\) is the fermion sign, and similarly for \(\widetilde{\mathbf{Q}}_{A}\) and \(\widetilde{\mathbf{S}}^{A}\). The four-dimensional short representations of (7) arise as tensor products of two-dimensional representations of (13). These smaller representations will depend on the values of \(m\), \(p\), and on whether the highest-weight state is a boson or a fermion. We define \[\ket{\phi^{*}_{\star}}=\text{highest-weight state},\qquad\ket{\varphi^{*}_{ \star}}=\text{lowest-weight state}, \tag{15}\] where \(*\) will be used to distinguish whether the state is a boson ("B") or a fermion ("F"), and to indicate its kinematics, which we will denote by left or right ("L" or "R"); the label left will be reserved to \(m>0\), while right will be reserved to \(m<0\). 
The representation \(\rho^{\text{\tiny B}}_{\text{\tiny L}}(m,p)=(\ket{\phi^{\text{\tiny B}}_{ \text{\tiny L}}},\ket{\varphi^{\text{\tiny F}}_{\text{\tiny L}}})\) (suppressing the \(m,p\) dependence of the states) is given by \[\begin{split}\mathbf{q}\ket{\phi^{\text{\tiny B}}_{\text{\tiny L }}}&=a_{\text{\tiny L}}(m,p)\ket{\varphi^{\text{\tiny F}}_{\text{ \tiny L}}},\qquad\widetilde{\mathbf{s}}\ket{\phi^{\text{\tiny B}}_{\text{ \tiny L}}}=\bar{b}_{\text{\tiny L}}(m,p)\ket{\varphi^{\text{\tiny F}}_{\text{ \tiny L}}},\\ \mathbf{s}\ket{\varphi^{\text{\tiny F}}_{\text{\tiny L}}}& =\bar{a}_{\text{\tiny L}}(m,p)\ket{\phi^{\text{\tiny B}}_{\text{ \tiny L}}},\qquad\widetilde{\mathbf{q}}\ket{\varphi^{\text{\tiny F}}_{\text{ \tiny L}}}=b_{\text{\tiny L}}(m,p)\ket{\phi^{\text{\tiny B}}_{\text{\tiny L}}},\end{split} \tag{16}\] and \(\rho^{\text{\tiny F}}_{\text{\tiny L}}(m,p)=(\ket{\phi^{\text{\tiny F}}_{ \text{\tiny L}}},\ket{\varphi^{\text{\tiny B}}_{\text{\tiny L}}})\) is given by \[\begin{split}\mathbf{q}\ket{\phi^{\text{\tiny F}}_{\text{\tiny L }}}&=a_{\text{\tiny L}}(m,p)\ket{\varphi^{\text{\tiny B}}_{\text{ \tiny L}}},\qquad\widetilde{\mathbf{s}}\ket{\phi^{\text{\tiny F}}_{\text{ \tiny L}}}=\bar{b}_{\text{\tiny L}}(m,p)\ket{\varphi^{\text{\tiny B}}_{\text{ \tiny L}}},\\ \mathbf{s}\ket{\varphi^{\text{\tiny B}}_{\text{\tiny L}}}& =\bar{a}_{\text{\tiny L}}(m,p)\ket{\phi^{\text{\tiny F}}_{\text{ \tiny L}}},\qquad\widetilde{\mathbf{q}}\ket{\varphi^{\text{\tiny B}}_{\text{ \tiny L}}}=b_{\text{\tiny L}}(m,p)\ket{\phi^{\text{\tiny F}}_{\text{\tiny L}}},\end{split} \tag{17}\] with precisely the same representation coefficients. The representations \(\rho^{\text{\tiny B}}_{\text{\tiny R}}(m,p)\) and \(\rho^{\text{\tiny F}}_{\text{\tiny R}}(m,p)\) have a similar form, up to replacing the representation coefficients with \(a_{\text{\tiny R}}\), \(b_{\text{\tiny R}}\) and so on. To write the representation coefficients explicitly it is convenient to define the Zhukovsky variables \[\begin{split} x^{\pm}_{\text{\tiny L}}(m,p)\equiv& \frac{+M(m,p)+E(m,p)}{2h\sin(p/2)}e^{\pm i\frac{p}{2}}\,,\qquad(m \geq 0)\,,\\ x^{\pm}_{\text{\tiny R}}(m,p)\equiv&\frac{-M(m,p)+E (m,p)}{2h\sin(p/2)}e^{\pm i\frac{p}{2}}\,,\qquad(m<0)\,.\end{split} \tag{18}\] This notation has the advantage of reducing to the usual RR notation [29] if \(k=0\). The Zhukovsky variables satisfy (omitting the dependence on \(m,p\)) \[\frac{x^{+}_{\star}}{x^{-}_{\star}}=e^{ip},\qquad x^{+}_{\star}-\frac{1}{x^{+}_ {\star}}-x^{-}_{\star}+\frac{1}{x^{-}_{\star}}=\frac{2i}{h}E\,, \tag{19}\] and \[x^{+}_{\text{\tiny L}}+\frac{1}{x^{+}_{\text{\tiny L}}}-x^{-}_{\text{\tiny L}}- \frac{1}{x^{-}_{\text{\tiny L}}}=+\frac{2i}{h}M\,,\qquad x^{+}_{\text{\tiny R} }+\frac{1}{x^{+}_{\text{\tiny R}}}-x^{-}_{\text{\tiny R}}-\frac{1}{x^{-}_{ \text{\tiny R}}}=-\frac{2i}{h}M\,, \tag{20}\] To define the representation coefficients we introduce \[\eta_{\star}(m,p)=\sqrt{\frac{ih}{2}\left(x^{-}_{\star}(p)-x^{+}_{\star}(p) \right)}\,, \tag{21}\] \[a_{\rm L}=\eta_{\rm L}\,,\qquad b_{\rm L}=-\frac{\eta_{\rm L}}{x_{ \rm L}^{-}}\,,\qquad\bar{a}_{\rm L}=\eta_{\rm L}\,,\qquad\bar{b}_{\rm L}=-\frac{ \eta_{\rm L}}{x_{\rm L}^{+}}\,, \tag{22}\] \[b_{\rm R}=\eta_{\rm R}\,,\qquad a_{\rm R}=-\frac{\eta_{\rm R}}{x _{\rm R}^{-}}\,,\qquad\bar{b}_{\rm R}=\eta_{\rm R}\,,\qquad\bar{a}_{\rm R}=- \frac{\eta_{\rm R}}{x_{\rm R}^{+}}\,.\] It is easy to verify that this defines the representations introduced before. It remains to define the representations with \(m=0\). 
This can be equivalently done by taking \(m\to 0\) in either the left or the right representations. The result of this limit is not identical, but it yields two isomorphic representations. Such short representations may be defined for any \(m\in\mathbb{Z}\) and \(p\in\mathbb{R}\). Not all of these representations, however, appear in the string model. In fact, the representations to which the fundamental particles of the full theory belong are7 Footnote 7: It should be noted that there are many different (isomorphic) ways to obtain the massless representations. Here we are following the notation of [24]. \[m=+1: \rho_{\rm L}^{\rm B}(+1,p)\otimes\rho_{\rm L}^{\rm B}(+1,p)\,, \tag{23}\] \[m=-1: \rho_{\rm R}^{\rm F}(-1,p)\otimes\rho_{\rm R}^{\rm F}(-1,p)\,,\] \[m=0: \Big{(}\rho_{\rm L}^{\rm B}(0,p)\otimes\rho_{\rm L}^{\rm F}(0,p) \Big{)}\oplus\Big{(}\rho_{\rm L}^{\rm F}(0,p)\otimes\rho_{\rm L}^{\rm B}(0,p) \Big{)}.\] We will also see that it is natural to restrict \(p\in[0,2\pi]\). Still, these are not all representations of the model. Additional ones, which can be constructed as bound states, can be obtained by taking \(m=+2,+3,\dots\) for bound states of left particles, and \(m=-2,-3,\dots\) for bound states of right particles. We will construct these representations in section 2.5. ### Multi-particle representations The states of the theory will in general feature several particles over the Fock vacuum. The most important case will be that of two-particle representations, which we will use to construct the S matrix. Multi-particle representations may be constructed out of the single-particle ones by means of a coproduct. For two-particle states we have \[{\bf Q}^{A}(p_{1},p_{2}) ={\bf Q}^{A}(p_{1})\otimes{\bf 1}+e^{+\frac{i}{2}p_{1}}\Sigma \otimes{\bf Q}^{A}(p_{2})\,, \tag{24}\] \[{\bf\widetilde{Q}}_{A}(p_{1},p_{2}) ={\bf\widetilde{Q}}_{A}(p_{1})\otimes{\bf 1}+e^{+\frac{i}{2}p_{1}} \Sigma\otimes{\bf\widetilde{Q}}_{A}(p_{2})\,,\] \[{\bf S}_{A}(p_{1},p_{2}) ={\bf S}_{A}(p_{1})\otimes{\bf 1}+e^{-\frac{i}{2}p_{1}}\Sigma \otimes{\bf S}_{A}(p_{2})\,,\] \[{\bf\widetilde{S}}^{A}(p_{1},p_{2}) ={\bf\widetilde{S}}^{A}(p_{1})\otimes{\bf 1}+e^{-\frac{i}{2}p_{1}} \Sigma\otimes{\bf\widetilde{S}}^{A}(p_{2})\,.\] This induces the co-product for the central charges, and it clearly trivial for \({\bf E},{\bf M}\) and non-trivial for \({\bf C},{\bf C}^{\dagger}\). In fact, this coproduct is necessary to ensure that the eigenvalues of \({\bf C},{\bf C}^{\dagger}\) depend only on the _total_ momentum, even in the case of multi-particle states. Here again \(\Sigma=(-1)^{F}\) is the Fermion sign. Clearly these two-particle representations may be defined out of any pair of fundamental particle representations (23). It is also possible to iterate the construction to obtain three- and more-particle representations, which we will not be needing here. sing the two-particle representation of the supercharges, it is possible to constrain the two-particle S matrix \(S(p_{1},p_{2})\), up to an overall dressing factor for each pair of irreducible representations. The explicit form of the S-matrix elements was derived in [13] in slightly different conventions. For completeness, we report it in our conventions in appendix B. ### The rapidity plane In order to discuss bound states it is convenient to introduce the rapidity variable \(u\)[15] \[u(x,\kappa)=x+\frac{1}{x}-\frac{\kappa}{\pi}\log x\,, \tag{25}\] from which we can implicitly define \(x(u,\kappa)\). 
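For definiteness, \(x(u,\kappa)\) can be obtained numerically from (25) by a Newton iteration in the complex \(x\)-plane. The following is a minimal sketch (our notation), with the branch of the logarithm fixed to the principal one, and with the caveat that the solution found depends on the starting point, reflecting the multi-sheeted structure discussed below.

```python
import numpy as np

def u_of_x(x, kappa):
    # The rapidity map of eq. (25), with the principal branch of the logarithm.
    return x + 1.0 / x - kappa / np.pi * np.log(x)

def x_of_u(u, kappa, x0=1.0 + 1.0j, iters=100):
    # Invert eq. (25) by Newton's method; which sheet is reached depends on x0.
    x = complex(x0)
    for _ in range(iters):
        f = u_of_x(x, kappa) - u
        df = 1.0 - 1.0 / x**2 - kappa / (np.pi * x)
        x -= f / df
    return x

# Round trip on a sample point with kappa > 0:
kappa, x_true = 0.7, 1.3 + 0.4j
u = u_of_x(x_true, kappa)
assert abs(x_of_u(u, kappa, x0=x_true + 0.2) - x_true) < 1e-10
```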
In what follows we will use the principal branch \(\ln x\) of \(\log x\) (with the cut on the negative half-line). Going \(n\) times around the branch point \(x=0\) leads to a monodromy \[x^{(n)}(u,\kappa)=x(u+2i\kappa\,n\,,\,\kappa)\,. \tag{26}\] By comparison with the shortening condition (11) it is clear that \(\kappa\) is real, and we may parametrise \[x_{\rm L}^{\pm}=x(u\pm\tfrac{i}{h},+\tfrac{k}{h})\,,\qquad x_{\rm R}^{\pm}=x(u \pm\tfrac{i}{h},-\tfrac{k}{h})\,. \tag{27}\] In analogy with the usual case (\(k=\kappa=0\)), we define the string reality condition \[x(u,\kappa)^{*}=x(u^{*},\kappa)\,,\qquad\kappa^{*}=\kappa\qquad\text{ (string)}. \tag{28}\] Figure 1: The physical region of the \(x\)-plane (on the right) and associated one-cut \(u\)-plane (on the left) for \(\kappa>0\). The shaded regions are removed from the \(x\)-plane. The different colours show how the \(u\)-plane on the left is mapped to the corresponding \(x\)-plane on the right. The zigzag line in the \(u\)-plane corresponds to a cut. By crossing this cut we end up in the \(u\)-plane shown in figure 2, which is mapped to the antistring region of the \(x\)-plane. It is also possible to define a mirror conjugation rule, which however swaps "left" and "right" particles \[x(u,\kappa)^{*}=\frac{1}{x(u^{*},-\kappa)}\,,\qquad\kappa^{*}=\kappa\qquad({ \rm mirror}). \tag{29}\] This is a sign that the mirror theory is not unitary (something that can easily be seen from the dispersion relation too). Here we will mostly focus on the string theory. It is also worth noting that with these definitions, in string theory \[p=-i(\ln x_{*}^{+}-\ln x_{*}^{-})\in[0,2\pi)\,. \tag{30}\] The branch points of the map \(x(u,\kappa)\) can be found from the differential \[\frac{{\rm d}x}{{\rm d}u}=\frac{x^{2}}{(x-{\sf x}_{+})(x-{\sf x}_{-})}\,, \qquad{\sf x}_{\pm}=\frac{\kappa}{2\pi}\pm\sqrt{1+\frac{\kappa^{2}}{4\pi^{2}} }\,. \tag{31}\] Similarly to the \(\kappa=0\) case, there appear to be two branch points \({\sf x}_{\pm}\). However, they have _three_ images on the \(u\)-plane. One, since \({\sf x}_{+}>0\), is \[{\sf u}_{+}={\sf x}_{+}+\frac{1}{{\sf x}_{+}}-\frac{\kappa}{\pi}\ln{\sf x}_{ +}>0\,. \tag{32}\] The remaining pair is \[{\sf u}_{-}^{\pm}=-{\sf u}_{+}\mp i\kappa\,, \tag{33}\] Figure 2: The antistring or crossed region in the \(x\)-plane region (on the right) and associated three-cut \(u\)-plane (on the left) for \(\kappa>0\). As in the previous figure, the shaded regions are removed from the \(x\)-plane and the different colours show the map between the \(u\) and \(x\) plane. By crossing the cut \((-\infty,u_{+})\) in the \(u\)-plane we return to the \(u\)-plane depicted in figure 1. Similar conventions are used to show the map between \(u\) and \(x\) planes in figures 3 and 4. where the imaginary shift is due to the branch cut of the logarithm. Moreover, there is a branch point at \(u=\infty\), of logarithmic type. By taking the branch cuts to be parallel to the real-axis on the \(u\)-plane, we can easily find their images on the \(x\)-plane. Recall that in the case \(\kappa=k=0\), the images of the branch cuts coincided with the unit circle. Now the picture is more involved, and it depends on whether \(\kappa>0\) or \(\kappa<0\), as shown in figures 1, 2, 3 and 4. A more detailed description of the map between the \(x\) and \(u\)-plane is reported in appendix A. ### Bound states Let us now discuss the kinematics of the bound states of this model. Let us start by recalling the picture when \(\kappa=k=0\). 
In that case, the eigenvalues of \(\mathbf{M}\) do not depend on the worldsheet momentum, \(M=m\in\mathbb{Z}\). Left fundamental particles with \(m_{1}=m_{2}=+1\) and complex momenta \(p_{1},p_{2}\) (and rapidity \(u_{1},u_{2}\)) can form bound states with \(m=+2\), and total momentum \(p\) (rapidity \(u\)). The bound-state condition can be read off the poles of the S matrix and it is [31; 32] \[x(u_{1}+\tfrac{i}{h},0)=x(u_{2}-\tfrac{i}{h},0),\qquad u=u_{1}+\tfrac{i}{h}=u_ {2}-\tfrac{i}{h}\,, \tag{34}\] so that the expressions \[\begin{split} E(p,m)=&\,\frac{h}{2i}\sum_{j=1}^{2} \left(x(u_{j}+\tfrac{i}{h},0)-\frac{1}{x(u_{j}+\tfrac{i}{h},0)}-x(u_{j}-\tfrac {i}{h},0)+\frac{1}{x(u_{j}-\tfrac{i}{h},0)}\right),\\ e^{ip}=&\,\frac{x(u_{1}+\tfrac{i}{h},0)}{x(u_{1}- \tfrac{i}{h},0)}\frac{x(u_{2}+\tfrac{i}{h},0)}{x(u_{2}-\tfrac{i}{h},0)}\,, \end{split} \tag{35}\] Figure 3: The physical region of the \(x\)-plane (on the right) and associated three-cut \(u\)-plane (on the left) for \(\kappa<0\). Notice that the number of cuts in the \(u\)-plane are different when \(\kappa<0\) with respect to \(\kappa>0\). depend simply on \(u\) and satisfy the shortening condition with \(k=0\) and \(m=+2\). This procedure can be iterated to construct bound states with any \(m=+2,+3,+4,\dots\), and in a similar way one can construct bound states of right particles with \(m=-2,-3,-4,\dots\). Importantly, left- and right-particles do not form bound states, and neither do particles with \(m=0\). Let us now consider how this picture changes if \(\kappa>0\) or \(\kappa<0\), so that we can express it in terms of \[x(u\pm m\tfrac{i}{h},\kappa)\,. \tag{36}\] The bound-state condition coming from the S matrix is still the same as (34), only it is now expressed in terms of deformed Zhukovsky variables (25), namely [13] \[x(u_{1}+\tfrac{i}{h},\pm\tfrac{k}{h})=x(u_{2}-\tfrac{i}{h},\pm\tfrac{k}{h}), \qquad u=u_{1}+\tfrac{i}{h}=u_{2}-\tfrac{i}{h}\,. \tag{37}\] Notice that either both particles are left, or both particles are right, so that we pick always the same sign for \(\kappa=k/h\), and therefore the condition on \(u_{1},u_{2}\) is the same as before, regardless of \(k\). Similarly, the condition for the total energy and total momentum takes a form similar to (35). This procedure too can be iterated to construct bound states with \(m=+2,+3,\dots\) (starting from left particles) or with \(m=-2,-3,\dots\) (starting from right particles). The only case which requires some care is when \(m=-k\), because for real values of \(u\) such that \(u<-\mathsf{u}_{+}\) the points \(u\pm ik/h\) are on the cuts, see figures 2 and 3. In this special case the semi-line \(u\geq-\mathsf{u}_{+}\) is in one-to-one correspondence with the momentum interval \([0,2\pi]\), and the corresponding bound state is physical. Starting with a complex value of \(u\) and approaching the cuts, one finds that the points \(u\pm ik/h\) end up on similar sides of the cuts (upper/upper, or lower/lower). As a result, the momentum and energy for Figure 4: The antistring region of the \(x\)-plane (on the right) and associated one-cut \(u\)-plane (on the left) for \(\kappa<0\). values of \(u\) on the cuts are purely imaginary, and those values of \(u\) despite being real do not correspond to a physical bound state. With this caveat in mind, it is possible to define the bound-state representations for particles of any \(m\in\mathbb{Z}\), keeping in mind that now the eigenvalue of \(\mathbf{M}\) is \(M=m+\frac{k}{2\pi}p\) with \(p\in[0,2\pi]\). 
Like in the pure-RR case [32], the bound-state representations take the same form as the fundamental particles, up to redefining the Zhukovsky variable -- in particular, unlike the case of \(AdS_{5}\times S^{5}\), the dimension of the representation does not grow with \(|m|\). In conclusion we find that, similarly to the case \(k=0\), the mixed-flux model has infinitely many particles, labelled by \(m\in\mathbb{Z}\), with momentum \(p\in[0,2\pi]\). It is possible to construct the bound state representation starting from the two-particle representation. We can essentially repeat the considerations of [32]. Let us consider the bound states which we have discussed above, which satisfy (37) and appear in the string (as opposed to mirror) model. It is easier to discuss everything in terms of the two-dimensional representations of the "smaller" algebra (13). For left fundamental particles (\(m=+1\)), we work with the two-dimensional representation \(\rho_{\rm L}^{\rm B}(+1,p_{1})\) and \(\rho_{\rm L}^{\rm B}(+1,p_{2})\), where the momenta are complex. Let the highest-weight state of each of these representations be \(|\phi_{\rm L}^{\rm B}(+1,p_{1})\rangle\) and \(|\phi_{\rm L}^{\rm B}(+1,p_{2})\rangle\), respectively. The bound state representation will have highest-weight state \[|\phi_{\rm L}^{\rm B}(+2,p)\rangle=|\phi_{\rm L}^{\rm B}(+1,p_{1})\rangle \otimes|\phi_{\rm L}^{\rm B}(+1,p_{2})\rangle\,, \tag{38}\] where \(p\) is the total bound-state momentum. This is the highest-weight state of a short representation with \(m=+2\) and momentum \(p\). Hence, the representation contains one (and only one) other state, proportional to \(\mathbf{q}\,|\phi_{\rm L}^{\rm B}(+2,p)\rangle\). In fact, the bound state representation which we constructed is isomorphic to \(\rho_{\rm L}^{\rm B}(+2,p)\). Clearly, this procedure may be iterated to construct any representation \(\rho_{\rm L}^{\rm B}(m,p)\) with \(m\geq+1\) integer. Let us now look at the bound states of two \(m=-1\) particles. Here we deal with two representations \(\rho_{\rm R}^{\rm F}(-1,p_{1})\) and \(\rho_{\rm R}^{\rm F}(-1,p_{2})\). The bound-state representation will have as lowest-weight state \[|\varphi_{\rm R}^{\rm B}(-2,p)\rangle=|\varphi_{\rm R}^{\rm B}(-1,p_{1}) \rangle\otimes|\varphi_{\rm R}^{\rm B}(-1,p_{2})\rangle\,. \tag{39}\] Again, this is a short representation, with \(m=-2\) and total momentum \(p\). The other state of the representation is proportional to \(\mathbf{s}\,|\varphi_{\rm R}^{\rm B}(-2,p)\rangle\) and in fact the whole representation is isomorphic to \(\rho_{\rm R}^{\rm F}(m,p_{1})\), with \(m\leq-1\) integer. It is worth noting that this discussion does not apply to _mirror_ bound states, which do not satisfy (37). In fact, we would expect mirror bound states to transform in _antisymmetric_ representations, whose lowest-weight state is given, for instance, by \(|\varphi_{\rm L}^{\rm B}(+2,p)\rangle=|\varphi_{\rm L}^{\rm F}(+1,p)\rangle \otimes|\varphi_{\rm L}^{\rm F}(+1,p)\rangle\). A full analysis of the mirror theory, including its bound states, would be interesting and we hope to carry it out in future work. ### Crossing equations The linearly-realised symmetries do not fix entirely the S matrix. It remains to constrain the "dressing factors". This can be done by using unitarity and crossing symmetry. In the following, we write down crossing equations for the different particle sectors of the theory: massive-massive, massive-massless and massless-massless. 
Massive-massive crossing equations.First of all, let us pick a convention for dressing factors. Let us set, for the scattering of \(m=\pm 1\) particles,8

Footnote 8: We used the following standard definitions for the highest and lowest weight states: \(|Y_{u}\rangle\equiv|\phi_{\rm L}^{\rm B}(1;u)\rangle\otimes|\phi_{\rm L}^{\rm B}(1;u)\rangle\), \(|\bar{Z}_{u}\rangle\equiv|\phi_{\rm R}^{\rm F}(-1;u)\rangle\otimes|\phi_{\rm R}^{\rm F}(-1;u)\rangle\), \(|\bar{Y}_{u}\rangle\equiv|\varphi_{\rm R}^{\rm B}(-1;u)\rangle\otimes|\varphi_{\rm R}^{\rm B}(-1;u)\rangle\), \(|Z_{u}\rangle\equiv|\varphi_{\rm L}^{\rm F}(1;u)\rangle\otimes|\varphi_{\rm L}^{\rm F}(1;u)\rangle\).

\[\mathbf{S}\,\big{|}Y_{u_{1}}Y_{u_{2}}\big{\rangle}=\frac{x_{{}_{\rm L1}}^{-}-x_{{}_{\rm L2}}^{+}}{x_{{}_{\rm L1}}^{+}-x_{{}_{\rm L2}}^{-}}\,\big{(}\sigma^{\bullet\bullet}_{\text{\tiny LL}}\big{)}^{-2}\,\big{|}Y_{u_{2}}Y_{u_{1}}\big{\rangle}\,, \tag{2.40}\]

together with analogous normalisations, in terms of \(\sigma^{\bullet\bullet}_{\text{\tiny RR}}\), \(\tilde{\sigma}^{\bullet\bullet}_{\text{\tiny LR}}\) and \(\tilde{\sigma}^{\bullet\bullet}_{\text{\tiny RL}}\), for the scattering of the remaining highest- and lowest-weight states of footnote 8. Imposing crossing symmetry on these normalisations yields four equations for the massive dressing factors; the first two are given in (2.44), while the remaining two equations take the form \[\begin{split}\sigma^{\bullet\bullet}_{\text{\tiny RR}}(\bar{u}_{1},u_{2})^{2}\tilde{\sigma}^{\bullet\bullet}_{\text{\tiny LR}}(u_{1},u_{2})^{2}&=\frac{x_{\text{\tiny R2}}^{-}}{x_{\text{\tiny R2}}^{+}}\frac{\big{(}1-x_{\text{\tiny L1}}^{+}x_{\text{\tiny R2}}^{+}\big{)}^{2}}{\big{(}1-x_{\text{\tiny L1}}^{+}x_{\text{\tiny R2}}^{-}\big{)}^{2}}\Big{(}\frac{x_{\text{\tiny R2}}^{+}}{x_{\text{\tiny R2}}^{-}}\Big{)}^{2}\Big{(}\frac{1-x_{\text{\tiny L1}}^{+}x_{\text{\tiny R2}}^{-}}{1-x_{\text{\tiny L1}}^{-}x_{\text{\tiny R2}}^{+}}\Big{)}^{2}\frac{1-x_{\text{\tiny L1}}^{-}x_{\text{\tiny R2}}^{-}}{1-x_{\text{\tiny L1}}^{+}x_{\text{\tiny R2}}^{+}}\,,\\ \sigma^{\bullet\bullet}_{\text{\tiny RR}}(u_{1},u_{2})^{2}\tilde{\sigma}^{\bullet\bullet}_{\text{\tiny LR}}(\bar{u}_{1},u_{2})^{2}&=\frac{x_{\text{\tiny R2}}^{-}}{x_{\text{\tiny R2}}^{+}}\Big{(}\frac{x_{\text{\tiny R1}}^{-}-x_{\text{\tiny R2}}^{+}}{x_{\text{\tiny R2}}^{-}}\Big{)}^{2}\frac{x_{\text{\tiny R1}}^{-}}{x_{\text{\tiny R1}}^{+}}\Big{(}\frac{x_{\text{\tiny R2}}^{+}}{x_{\text{\tiny R2}}^{-}}\Big{)}^{2}\Big{(}\frac{x_{\text{\tiny R1}}^{+}-x_{\text{\tiny R2}}^{-}}{x_{\text{\tiny R1}}^{-}-x_{\text{\tiny R2}}^{+}}\Big{)}^{2}\frac{x_{\text{\tiny R1}}^{-}-x_{\text{\tiny R2}}^{-}}{x_{\text{\tiny R1}}^{+}-x_{\text{\tiny R2}}^{-}}\,.\end{split} \tag{2.45}\] Here too we have highlighted some expressions. The contributions in black in (2.44) and (2.45) provide the crossing equations that we would have obtained absorbing the highlighted factors of (2.40) (_i.e._, the poles) in the \(\sigma_{\ast\ast}\)s. We keep track of these highlighted contributions since we want to match them with those arising in the relativistic limit explained in the next section.

Mixed mass crossing equations.Analogously to what happens in the pure Ramond-Ramond case the dispersion relation (2.1) is non-analytic whenever \(m=0\mod k\). In these cases, it is necessary to split particles into a chiral and an antichiral sector, characterised by having \(M>0\) and \(M<0\) respectively.
We label these sectors using superscript signs: '\(+\)' and '\(-\)'. For \(m=0\) these sectors correspond to the regions of positive and negative momenta. Below we normalise the S-matrix elements associated with the scattering of massive-massless and massless-massive highest-weight states9 assuming massless particles coming from the left to be chiral and massless particles coming from the right to be antichiral Footnote 9: We use the following convention for the massless highest and lowest weight states: \(|\chi^{\text{i}}_{\text{u}}\rangle\equiv|\phi^{\text{n}}_{\text{L}}(0,u)\rangle \otimes|\phi^{\text{r}}_{\text{L}}(0,u)\rangle\), \(|\chi^{\text{i}}_{\text{u}}\rangle\equiv|\phi^{\text{r}}_{\text{L}}(0,u) \rangle\otimes|\phi^{\text{n}}_{\text{L}}(0,u)\rangle\), \(|\tilde{\chi}^{\text{i}}_{\text{u}}\rangle\equiv|\varphi^{\text{r}}_{\text{L}} (0,u)\rangle\otimes|\varphi^{\text{n}}_{\text{L}}(0,u)\rangle\), \(|\tilde{\chi}^{\text{i}}_{\text{u}}\rangle\equiv|\varphi^{\text{n}}_{\text{L }}(0,u)\rangle\otimes|\varphi^{\text{r}}_{\text{L}}(0,u)\rangle\). \[\begin{split}\mathbf{S}\left|Y_{u_{1}}\chi^{\dot{\alpha}}_{u_{2}} \right\rangle=&\,\,\,\,\,\,\,\,e^{\frac{i}{2}p_{1}}e^{-\frac{3i}{2}p_ {2}}\frac{x_{\text{\tiny LL}}^{-}-x_{\text{\tiny LL}}^{+}}{x_{\text{\tiny LL} }^{+}-x_{\text{\tiny L2}}^{-}}\big{(}\sigma^{\bullet-}_{\text{\tiny LL}} \big{)}^{-2}\left|\chi^{\dot{\alpha}}_{u_{2}}Y_{u_{1}}\right\rangle,\\ \mathbf{S}\left|\bar{Z}_{u_{1}}\chi^{\dot{\alpha}}_{u_{2}}\right\rangle=& \,\,\,\,\,\,\,e^{-\frac{i}{2}p_{1}}e^{-\frac{3i}{2}p_{2}}\frac{1-x_{\text{ \tiny R1}}^{+}x_{\text{\tiny L2}}^{+}}{1-x_{\text{\tiny RL}}^{-}}\big{(}\sigma ^{\bullet-}_{\text{\tiny RL}}\big{)}^{-2}\left|\chi^{\dot{\alpha}}_{u_{2}} \bar{Z}_{u_{1}}\right\rangle,\\ \mathbf{S}\left|\chi^{\dot{\alpha}}_{u_{1}}Y_{u_{2}}\right\rangle=& \,\,\,\,\,\,\,\,e^{\frac{3i}{2}p_{1}}e^{-\frac{i}{2}p_{2}}\frac{x_{\text{\tiny LL }}^{-}-x_{\text{\tiny L2}}^{+}}{x_{\text{\tiny LL}}^{+}-x_{\text{\tiny L2}}^{- }}\big{(}\sigma^{\downarrow\bullet}_{\text{\tiny LL}}\big{)}^{-2}\left|Y_{u_{2}} \chi^{\dot{\alpha}}_{u_{1}}\right\rangle,\\ \mathbf{S}\left|\chi^{\dot{\alpha}}_{u_{1}}\bar{Z}_{u_{2}}\right\rangle=& \,\,\,\,\,\,e^{\frac{3i}{2}p_{1}}e^{\frac{i}{2}p_{2}}\frac{1-x_{\text{\tiny LL }}^{-}x_{\text{\tiny R2}}^{-}}{1-x_{\text{\tiny LL}}^{+}x_{\text{\tiny R2}}^{+ }}\big{(}\sigma^{\downarrow\bullet}_{\text{\tiny LR}}\big{)}^{-2}\left|\bar{Z}_{u _{2}}\chi^{\dot{\alpha}}_{u_{1}}\right\rangle.\end{split} \tag{2.46}\] Working with these conventions we obtain the following crossing equations connecting the different mixed-mass dressing phases: \[\begin{split}\big{(}\sigma^{\bullet-}_{\text{\tiny LL}}(u_{1},u_ {2})\big{)}^{2}\big{(}\sigma^{\bullet-}_{\text{\tiny RL}}(\bar{u}_{1},u_{2}) \big{)}^{2}&=\frac{x_{\text{\tiny LL}}^{+}}{x_{\text{\tiny LL}}^{-}} \Big{(}\frac{x_{\text{\tiny LL}}^{+}-x_{\text{\tiny L2}}^{-}}{x_{\text{\tiny LL} }^{+}-x_{\text{\tiny L2}}^{+}}\Big{)}^{2}e^{-3ip_{2}}\frac{x_{\text{\tiny LL}}^{-} -x_{\text{\tiny LL}}^{+}}{x_{\text{\tiny LL}}^{+}-x_{\text{\tiny LL}}^{-}} \frac{x_{\text{\tiny LL}}^{+}-x_{\text{\tiny LL}}^{+}}{x_{\text{\tiny LL}}^{-}-x_{ \text{\tiny LL}}^{-}}\,,\\ \big{(}\sigma^{\bullet-}_{\text{\tiny LL}}(\bar{u}_{1},u_{2}) \big{)}^{2}\big{(}\sigma^{\bullet-}_{\text{\tiny RL}}(u_{1},u_{2})\big{)}^{2}& =\frac{x_{\text{\tiny LL}}^{+}}{x_{\text{\tiny LL}}^{-}}\Big{(}\frac{1-x_{ \text{\tiny R1}}^{-}x_{\text{\tiny LL}}^{-}}{1-x_{\text{\tiny R1}}^{+}x_{ \text{\tiny LL}}^{+}}\Big{)}^{2}e^{-3ip_{2}}\frac{1-x_{\text{\tiny 
R1}}^{-}x_{ \text{\tiny LL}}^{+}}{1-x_{\text{\tiny R1}}^{+}x_{\text{\tiny LL2}}^{-}} \frac{1-x_{\text{\tiny R1}}^{+}x_{\text{\tiny LL2}}^{+}}{1-x_{\text{\tiny R1}}^{-}x_{ \text{\tiny LL}}^{-}}\,,\\ \big{(}\sigma^{+\bullet}_{\text{\tiny LL}}(u_{1},u_{2}) \big{)}^{2}\big{(}\sigma^{+\bullet}_{\text{\tiny LL}}(\bar{u}_{1},u_{2}) \big{)}^{2}&=\frac{x_{\text{\tiny R2}}^{+}}{x_{\text{\tiny R2}}^{-}} \Big{(}\frac{x_{\text{\tiny LL}}^{+}-x_{\text{\tiny LL}}^{-}}{x_{\text{\tiny LL} }^{+}-x_{\text{\tiny L2}}^{2}}\Big{)}^{2}e^{-ip_{2}}\frac{x_{\text The normalization in (46) has been chosen in such a way as to reduce to the one used in [19] in the limit \(k=0\). Depending on whether the left Zhukovsky variables in the equations above are associated with particles with \(m=+1\) or \(m=0\) we define \(x_{\rm L}^{\pm}\equiv x_{\rm L}^{\pm}(+1,p)\) or \(x_{\rm L}^{\pm}\equiv x_{\rm L}^{\pm}(0,p)\). The right Zhukovsky are instead always defined as \(x_{\rm R}^{\pm}\equiv x_{\rm R}^{\pm}(-1,p)\). As already mentioned, to define the scattering in the physical region the massless particles should be taken with the correct chirality; in particular, we should define the velocity of the first particle in (46) to be positive and the velocity of the second particle to be negative. However, by braiding unitarity, the dressing phases in (46) can be analytically continued to all values of momenta in the complex plane. Massless-massless crossing equations.The normalisation for the scattering between massless highest-weight states with positive chirality is chosen as follows \[{\bf S}\,\big{|}\chi^{\dot{\alpha}}_{u_{1}}\chi^{\dot{\beta}}_{u_{2}}\big{>}= \big{(}\sigma^{++}(u_{1},u_{2})\big{)}^{-2}\big{|}\chi^{\dot{\beta}}_{u_{2}} \chi^{\dot{\alpha}}_{u_{1}}\big{>} \tag{49}\] while the scattering between mixed chirality highest-weight states is defined by \[{\bf S}\,\big{|}\chi^{\dot{\alpha}}_{u_{1}}\chi^{\dot{\beta}}_{u_{2}}\big{>}= \big{(}\sigma^{+-}(u_{1},u_{2})\big{)}^{-2}\big{|}\chi^{\dot{\beta}}_{u_{2}} \chi^{\dot{\alpha}}_{u_{1}}\big{>}\,. \tag{50}\] Following the conventions of [19] we label these dressing factors by \[\sigma^{++}(u_{1},u_{2})\equiv\sigma^{\circ\circ}(u_{1},u_{2})\ \ \ \text{and}\ \ \sigma^{+-}(u_{1},u_{2})\equiv\tilde{\sigma}^{\circ\circ}(u_{1},u_{2})\,. \tag{51}\] The dressing factors \(\sigma^{--}\) and \(\sigma^{-+}\), associated with the scattering of massless particles with negative-negative and negative-positive chiralities, can be obtained from the expression above by using braiding unitarity. The crossing equations for the massless-massless dressing factors are given by \[\big{(}\sigma^{\circ\circ}(u_{1},u_{2})\big{)}^{2}\big{(}\sigma^{ \circ\circ}(\bar{u}_{1},u_{2})\big{)}^{2} =\frac{x_{\rm L2}^{+}}{x_{\rm L2}^{-}}\Big{(}\frac{x_{\rm L1}^{+} -x_{\rm L2}^{-}}{x_{\rm L1}^{+}-x_{\rm L2}^{+}}\Big{)}^{2}\,, \tag{52a}\] \[\big{(}\tilde{\sigma}^{\circ\circ}(u_{1},u_{2})\big{)}^{2}\big{(} \tilde{\sigma}^{\circ\circ}(\bar{u}_{1},u_{2})\big{)}^{2} =\frac{x_{\rm L2}^{+}}{x_{\rm L2}^{-}}\Big{(}\frac{x_{\rm L1}^{+} -x_{\rm L2}^{-}}{x_{\rm L1}^{+}-x_{\rm L2}^{+}}\Big{)}^{2}\,. \tag{52b}\] In the expressions above we define \(x_{\rm L}^{\pm}\equiv x_{\rm L}^{\pm}(0,p)\), where \(x_{\rm L}^{\pm}(0,p)\) is given in the first row of (18). ## 3 Relativistic limit In order to obtain a firmer grasp on the worldsheet model, we will consider it in a limit where it becomes a relativistic integrable QFT. 
This limit has some similarities with the one studied in [21], but it differs from it in many important ways which we will point out as we study it.

### Limiting procedure

The dispersion relation of our model satisfies \[E(m,p)^{2}=\left(m+\frac{k}{2\pi}p\right)^{2}+4h^{2}\sin^{2}\frac{p}{2}\,. \tag{3.1}\] Depending on the value of \(k/h\), this has a different number of local minima (the smaller \(k/h\) is, the more local minima we will find, see figure 5). However, there is always one global minimum at10 Footnote 10: We have seen that particles of the string model have \(p\in[0,2\pi]\) and \(m\in\mathbb{Z}\). Hence strictly speaking only the minima with \(m=0,-1,\cdots-k+1,-k\) should be relevant. Nonetheless, in what follows it will be convenient to first consider arbitrary values of \(p\in\mathbb{R}\) and any \(m\in\mathbb{Z}\). We will see later that indeed, up to equivalences, we may restrict to \((k+1)\) values of \(m\). \[p_{\text{min}}=-\frac{2\pi m}{k}+O(h^{2})\,. \tag{3.2}\] This minimum is regular (quadratic) if \(m\neq 0\) mod \(k\), so let us restrict to this case at first. The form of eq. (3.1) suggests expanding around the minimum using a parameter \(\epsilon(h)\) which goes to zero as \(h\to 0\). Therefore, we let \[p(\theta)=-\frac{2\pi m}{k}+\epsilon(h)\,\sinh\theta+O(h^{2})\,, \tag{3.3}\] where we will identify \(\theta\) with a relativistic rapidity. It is easy to check that this yields a relativistic dispersion if and only if \(\epsilon\) is linear in \(h\) at small \(h\). In particular, it is convenient to pick \[\epsilon(h)=\frac{4\pi}{k}\left|\sin\frac{m\pi}{k}\right|\,h+O(h^{2})\,, \qquad m\neq 0\,\,\,\text{mod}\,k\,. \tag{3.4}\] Then we have \[E(m,\theta)=2h\,\left|\sin\frac{m\pi}{k}\right|\,\cosh\theta+O(h^{2})\,, \tag{3.5}\] while the remaining central charges of the algebra are \[M(m,\theta)=2h\,\left|\sin\frac{m\pi}{k}\right|\,\sinh\theta+O(h^{2}),\qquad C(m)=\frac{ih}{2}\left(e^{-i\frac{2\pi m}{k}}-1\right)+O(h^{2}), \tag{3.6}\] while \(C^{\dagger}=C^{*}\).

Figure 5: Behaviour of \(E^{2}\) at different values of \(h\) for \(m\) and \(k\) fixed. For \(\frac{k}{h}\sim 1\) there are many local minima, as shown in figure 5(a), while increasing \(\frac{k}{h}\) these minima disappear (see figures 5(b) and 5(c)) and one global minimum remains.

In other words, at leading order we have the massive relativistic relation \[E(m,\theta)^{2}-M(m,\theta)^{2}=h^{2}\,\mu(m)^{2}\,,\qquad\mu(m)=2\,\left|\sin\frac{\pi m}{k}\right|, \tag{3.7}\] where \(\mu(m)\) plays the role of the mass. In the case where \(m=0\,\,\mathrm{mod}\,k\) we expect (3.7) to become massless. We therefore need to distinguish between the cases \(M>0\) and \(M<0\). This can be done again by taking linear fluctuations in \(h\), and it gives two branches. At leading order in \(h\), \[E^{\pm}(m,\theta)=h\,e^{\pm\theta},\qquad M^{\pm}(m,\theta)=\pm h\,e^{\pm\theta},\qquad C=0\,,\qquad m=0\,\,\mathrm{mod}k\,. \tag{3.8}\] The superscript sign, plus or minus, in the expressions above labels the branch of the kinematics of the massless particle. In the next sections, we will use the same convention also for the other quantities associated with massless particles.
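The expansion above is straightforward to test numerically. Below is a minimal sketch, our own illustration rather than part of the derivation; the sample values of \(k\), \(m\) and \(\theta\) are assumptions. It checks that the exact dispersion relation (3.1), evaluated along the trajectory (3.3)–(3.4), approaches the relativistic energy (3.5) as \(h\to 0\).

```python
# Numerical sketch (our own check): the exact dispersion (3.1) evaluated on
# p(theta) from (3.3)-(3.4) should approach E = 2 h |sin(pi m / k)| cosh(theta)
# as h -> 0, for m != 0 mod k.
import numpy as np

k, m, theta = 5, 2, 0.7          # illustrative values (assumed)
mu = 2 * abs(np.sin(np.pi * m / k))

for h in [1e-1, 1e-2, 1e-3]:
    eps = (4 * np.pi / k) * abs(np.sin(m * np.pi / k)) * h   # eq. (3.4)
    p = -2 * np.pi * m / k + eps * np.sinh(theta)            # eq. (3.3)
    E_exact = np.sqrt((m + k * p / (2 * np.pi))**2 + 4 * h**2 * np.sin(p / 2)**2)
    E_rel = h * mu * np.cosh(theta)                          # eq. (3.5)
    print(f"h = {h:g}:  E_exact / E_rel = {E_exact / E_rel:.6f}")
# The ratio tends to 1, consistently with the relativistic relation (3.7).
```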
Taking the same limit on the Zhukovsky variables we find that the result is regular if \(m\neq 0\,\,\mathrm{mod}\,k\), \[x_{\mathrm{L}}^{\pm} =-\mathrm{sgn}\left(\sin\frac{\pi m}{k}\right)e^{+\theta\mp\frac {i\pi m}{k}}\,,\qquad m\neq 0\mod k\,, \tag{3.9}\] \[x_{\mathrm{R}}^{\pm} =-\mathrm{sgn}\left(\sin\frac{\pi m}{k}\right)e^{-\theta\mp\frac {i\pi m}{k}}\,,\qquad m\neq 0\mod k\,.\] In the massless case, we obtain instead different (and possibly divergent) scaling limits depending on the chirality: \[x_{\mathrm{L}}^{\pm}=\begin{cases}\frac{\pi^{2}h^{2}}{k^{2}}\left(-\frac{k}{ \pi h}\pm ie^{-\theta}\right)&\text{for}\,\,\,\,M<0\,,\qquad m=0\mod k\,,\\ \frac{k}{\pi h}\pm ie^{+\theta}&\text{for}\,\,\,\,M>0\,,\qquad m=0\mod k\,, \end{cases} \tag{3.10}\] and \[x_{\mathrm{R}}^{\pm}=\begin{cases}-\frac{k}{\pi h}\pm ie^{-\theta}&\text{for }\,\,\,M<0\,, \qquad m=0\mod k\,,\\ \frac{\pi^{2}h^{2}}{k^{2}}\left(\frac{k}{\pi h}\pm ie^{+\theta}\right)&\text{ for}\,\,\,M>0\,,\quad m=0\mod k\,.\end{cases} \tag{3.11}\] The different behaviour of \(x^{\pm}\) for massive and left and right massless variables indicates that the dynamics of left and right massless particles effectively decouples from the massive ones and from each other. We will see that the corresponding S-matrix elements indeed simplify drastically. Let us finally comment on the difference between our limiting procedure and that of [21]. In our case, we have expanded the kinematics around the minimum of the dispersion relation, in such a way as to obtain a relativistic model as \(h\to 0\). In [21] instead the authors take first the limit \(h\to 0\), yielding a _gapless_ dispersion relation for all particles, and then shift and rescale the resulting kinematics. As far as we can tell, a different identification of the relativistic rapidity with the momenta yields a similar S-matrix eventually, at least for certain processes (even if such processes were interpreted to involve gapless particles in [21]). However, the fact that the limiting procedure of [21] produces a gapless dispersion is somewhat artificial and it obscures the presence of bound states. Indeed, despite the poles in the S matrix of [21], the existence of bound states was not discussed by the authors. ### Representations after the limit The limit discussed above can readily be taken on the supercharges which define the various representations of the model. The limit of the left and right representation gives the same result, up to an isomorphism (and up to specifying whether the highest-weight state is a boson or a fermion). This is not surprising because the main reason to distinguish the left- and right- representations in the worldsheet model was that the physical region of (complex) momenta or rapidity was different in the two cases. This is not an issue here, since we are "zooming in" close to a special value of the momentum and obtaining a relativistic model. Up to appropriately defining the one-particle basis, we only have the representations \(\rho_{\text{\tiny rel}}^{\text{\tiny B}}(m,\theta)\) and \(\rho_{\text{\tiny rel}}^{\text{\tiny F}}(m,\theta)\), which can be specified in terms of the coefficients \[a(m,\theta)=\sqrt{\frac{h}{2}}\sqrt{\mu(m)}\,e^{+\frac{\theta}{2}},\qquad b(m, \theta)=\sqrt{\frac{h}{2}}\frac{1-e^{-\frac{2\pi m\bar{b}}{k}}}{i\,\sqrt{\mu( m)}}\,e^{-\frac{\theta}{2}}\,,\qquad m\neq 0\,\,\,\text{mod}\,k\,, \tag{3.12}\] and for real \(\theta\) we have \(\bar{a}=a^{*}=a\) and \(\bar{b}=b^{*}\). 
The case of \(m=0\,\,\text{mod}\,k\) must be treated separately and the result depends on the branch, _i.e._ on the sign of \(M\). We have \[a(m,\theta)=\sqrt{h}\,e^{+\frac{\theta}{2}},\qquad\quad b(m, \theta)=0\,,\qquad\qquad\quad m=0\,\,\text{mod}\,k\,,\quad M>0\,, \tag{3.13}\] \[a(m,\theta)=0,\qquad\qquad\qquad b(m,\theta)=\sqrt{h}\,e^{-\frac{ \theta}{2}}\,,\qquad\quad m=0\,\,\text{mod}\,k\,,\quad M<0\,.\] This confirms that, in this case, the model is chiral. It is also interesting to note that these one-particle representations are invariant under shifts \(m\to m+k\). The two-particle representation can be constructed with the co-product inherited from eq. (2.24). Before the limit, the coproduct of two particles of momentum \(p_{1},p_{2}\) and mass \(m_{1},m_{2}\) featured a braiding factor of the form \(e^{\pm\frac{i}{2}p_{1}}\). After the limit, this takes the form, for instance \[\mathbf{q}(m_{1},m_{2};\theta_{1},\theta_{2})=\mathbf{q}(m_{1};\theta_{1}) \otimes\mathbf{1}+e^{i\frac{\pi m_{1}}{k}}\,\Sigma\otimes\mathbf{q}(m_{2}; \theta_{2})\,, \tag{3.14}\] where we recall that \(\Sigma\) is the fermion-sign matrix. We see that in this case a shift of \(m_{1}\) by \(k\) produces an additional sign, \[m_{1}\to m_{1}+k\qquad\Leftrightarrow\qquad\Sigma\otimes\mathbf{1}\to-\Sigma \otimes\mathbf{1}\,. \tag{3.15}\] We can interpret this as a change of the grading in the underlying representation. Note that this is true also for shifts of \(m_{2}\), though the coproduct which we used does not make it as evident as it is not symmetric -- it is however sufficient to consider the "opposite" coproduct to see it. ### S matrix for fundamental particles and bound states Once we have set up our limiting procedure and understood the particle context after the limit, we can proceed in two ways: 1. Construct the representations after the limit and bootstrap the S matrix, or 2. Take the limit of the full, nonrelativistic S matrix. It is clear that case 2. must be comprised in case 1., but it is not obvious whether the relativistic bootsrap may yield a more general S-matrix. We discuss the relativistic bootstrap in appendix C,11 while here we take the limit of the full S matrix. This is straightforward, and it yields a relativistic S-matrix of the same form as the one in appendix C. Footnote 11: As it turns out, the result of that procedure is slightly more general than taking the limit of the full S matrix. We will return to this in the conclusions. Left-left fundamental particles.In the case of two left (\(m=+1\)) fundamental particles we have after the limit \[A^{\text{\tiny BB}}_{\text{\tiny LL}} =1, B^{\text{\tiny BB}}_{\text{\tiny LL}} =\frac{\sinh\frac{\theta}{2}}{\sinh\!\left(\frac{\theta}{2}+\frac {i\pi}{k}\right)}\,,\] \[C^{\text{\tiny BB}}_{\text{\tiny LL}} =\frac{i\sin\frac{\pi}{k}}{\sinh\!\left(\frac{\theta}{2}+\frac{i \pi}{k}\right)}\,, D^{\text{\tiny BB}}_{\text{\tiny LL}} =\frac{\sinh\frac{\theta}{2}}{\sinh\!\left(\frac{\theta}{2}+\frac {i\pi}{k}\right)}\,, \tag{3.16}\] \[E^{\text{\tiny BB}}_{\text{\tiny LL}} =\frac{i\sin\frac{\pi}{k}}{\sinh\!\left(\frac{\theta}{2}+\frac{i \pi}{k}\right)}\,, F^{\text{\tiny BB}}_{\text{\tiny LL}} =-\frac{\sinh\!\left(\frac{\theta}{2}-\frac{i\pi}{k}\right)}{\sinh \!\left(\frac{\theta}{2}+\frac{i\pi}{k}\right)}\,,\] A first observation is that this defines a _parity-invariant_ S matrix. This was not the case before the limit. 
The reason why this happens is that our limit expanded around a minimum of the energy up to the quadratic order in fluctuations, thereby discarding all odd terms in the momentum expansion. Let us now look at the S-matrix element \(F^{\text{\tiny BB}}_{\text{\tiny LL}}\), which corresponds to the scattering of two lowest-weight states. Poles or zeros in this element signal possible bound-state channels. We see that \(F^{\text{\tiny BB}}_{\text{\tiny LL}}\) vanishes if \(\theta=2i\pi/k\), which is inside the physical strip, at least if \(k>2\). This confirms our expectation from section 2.5 that left fundamental particles may make bound states with \(m=+2\), and that the overall normalisation of the S matrix must be fixed to provide the necessary poles. Moreover, the bound state must transform in the limit of the \(m=+2\) representation, and may itself create bound-states with other left particles with larger and larger \(m\) (more on this below). Right-right fundamental particles.A similar structure emerges in the case of two right (\(m=-1\)) fundamental particles. Here after the limit we find \[A^{\text{\tiny FF}}_{\text{\tiny RR}} =1\,, B^{\text{\tiny FF}}_{\text{\tiny RR}} =\frac{\sinh\frac{\theta}{2}}{\sinh\!\left(\frac{\theta}{2}+\frac {i\pi}{k}(k-1)\right)}\,,\] \[C^{\text{\tiny FF}}_{\text{\tiny RR}} =\frac{i\sin\frac{\pi}{k}}{\sinh\!\left(\frac{\theta}{2}+\frac{i \pi}{k}(k-1)\right)}\,, D^{\text{\tiny FF}}_{\text{\tiny RR}} =\frac{\sinh\frac{\theta}{2}}{\sinh\!\left(\frac{\theta}{2}+\frac {i\pi}{k}(k-1)\right)}\,, \tag{3.17}\] \[E^{\text{\tiny FF}}_{\text{\tiny RR}} =\frac{i\sin\frac{\pi}{k}}{\sinh\!\left(\frac{\theta}{2}+\frac{i \pi}{k}(k-1)\right)}\,, F^{\text{\tiny FF}}_{\text{\tiny RR}} =-\frac{\sinh\!\left(\frac{\theta}{2}-\frac{i\pi}{k}(k-1)\right)}{ \sinh\!\left(\frac{\theta}{2}+\frac{i\pi}{k}(k-1)\right)}\,,\] In this case we see that there is a singularity in the physical strip at \(\theta=2\pi i/k\), in the form of a pole of \(F^{\text{\tiny FF}}_{\text{\tiny RR}}\). In fact, this is the same type of bound state as before -- it now appears as a pole rather than a zero because we have swapped the highest and lowest-weight states. This corresponds to a bound state with \(m=-2\). Left-right fundamental particles.It is interesting to look at the scattering of left and right fundamental particles (where we do not expect bound states from the discussion of section 2.5). We find \[A^{\text{\tiny BF}}_{\text{\tiny LR}} =1\,, B^{\text{\tiny BF}}_{\text{\tiny LR}} =\frac{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i\pi}{2k}(k-2)\Bigr{)}} {\sinh\Bigl{(}\frac{\theta}{2}+\frac{i\pi}{2}\Bigr{)}}\,,\] \[C^{\text{\tiny BF}}_{\text{\tiny LR}} =\frac{i\sin\frac{\pi}{k}}{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i \pi}{2}\Bigr{)}}e^{\frac{i\pi}{2k}(2-k)}\,, D^{\text{\tiny BF}}_{\text{\tiny LR}} =\frac{\sinh\Bigl{(}\frac{\theta}{2}-\frac{i\pi}{2k}(k-2)\Bigr{)} }{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i\pi}{2}\Bigr{)}}\,, \tag{3.18}\] \[E^{\text{\tiny BF}}_{\text{\tiny LR}} =\frac{i\sin\frac{\pi}{k}}{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i \pi}{2}\Bigr{)}}e^{-\frac{i\pi}{2k}(2-k)}\,, F^{\text{\tiny BF}}_{\text{\tiny LR}} =1\,.\] We see that \(F^{\text{\tiny BF}}_{\text{\tiny LR}}=1\) and the S matrix never degenerates to a projector for any \(\theta\) inside the physical strip, consistently with our expectations. 
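The pole and zero structure just described can be checked directly from (3.16) and (3.17). The following minimal sketch is our own check (the value of \(k\) and the sample rapidity are assumptions): it confirms that \(F^{\text{\tiny BB}}_{\text{\tiny LL}}\) has a zero and \(F^{\text{\tiny FF}}_{\text{\tiny RR}}\) a pole at \(\theta=2\pi i/k\), and that this diagonal element is compatible with braiding unitarity.

```python
# Minimal sketch (our own check): the candidate bound-state point theta = 2*pi*i/k
# is a zero of F^BB_LL in (3.16) and a pole of F^FF_RR in (3.17).
import cmath

k = 6                                    # illustrative value (assumed)
theta_b = 2j * cmath.pi / k              # candidate bound-state rapidity

def F_BB_LL(theta):
    return -cmath.sinh(theta / 2 - 1j * cmath.pi / k) / \
            cmath.sinh(theta / 2 + 1j * cmath.pi / k)

def F_FF_RR(theta):
    a = 1j * cmath.pi * (k - 1) / k
    return -cmath.sinh(theta / 2 - a) / cmath.sinh(theta / 2 + a)

print(abs(F_BB_LL(theta_b)))             # ~ 0: zero, signalling the m = +2 bound state
print(abs(1 / F_FF_RR(theta_b)))         # ~ 0: pole, signalling the m = -2 bound state

theta0 = 0.3 + 0.1j                      # a generic point (assumed)
print(abs(F_BB_LL(theta0) * F_BB_LL(-theta0) - 1))   # consistent with braiding unitarity
```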
Arbitrary massive particles.Putting together the above observations, and using the fact that after the limit there is no difference between what used to be the "left" and "right" kinematics, we can write a unified formula for the scattering of particles of arbitrary mass, as long as \[m_{1}\neq 0\text{ mod}\,k\,,\qquad m_{2}\neq 0\text{ mod}\,k\,. \tag{3.19}\] To obtain these expressions we take the limit of the matrix part of the S matrix with \(m\)-dependent Zhukovsky variables. We can suppress the L and R labels, which are inconsequential, and only keep track of the statistics. To this end, let us focus on the case of two bosonic highest-weight states, which gives: \[A^{\text{\tiny BB}}_{12} =1\,, B^{\text{\tiny BB}}_{12} =\frac{\mathscr{S}_{1}e^{\frac{im_{2}\pi}{k}+\theta}-\mathscr{S}_ {2}e^{\frac{im_{1}\pi}{k}}}{\mathscr{S}_{1}e^{\frac{i(m_{1}+m_{2})\pi}{k}+ \theta}-\mathscr{S}_{2}}\,,\] \[C^{\text{\tiny BB}}_{12} =\frac{ie^{\frac{im_{1}\pi}{k}+\frac{\theta}{2}}\sqrt{\mu(m_{1}) \mu(m_{2})}}{\mathscr{S}_{1}e^{\frac{i(m_{1}+m_{2})\pi}{k}+\theta}-\mathscr{S} _{2}}\,, D^{\text{\tiny BB}}_{12} =\frac{\mathscr{S}_{1}e^{\frac{im_{1}\pi}{k}+\theta}-\mathscr{S}_ {2}e^{\frac{im_{2}\pi}{k}}}{\mathscr{S}_{1}e^{\frac{i(m_{1}+m_{2})\pi}{k}+ \theta}-\mathscr{S}_{2}}\,, \tag{3.20}\] \[E^{\text{\tiny BB}}_{12} =\frac{ie^{\frac{im_{2}\pi}{k}+\frac{\theta}{2}}\sqrt{\mu(m_{1}) \mu(m_{2})}}{\mathscr{S}_{1}e^{\frac{i(m_{1}+m_{2})\pi}{k}+\theta}-\mathscr{S} _{2}}\,, F^{\text{\tiny BB}}_{12} =\frac{-\mathscr{S}_{1}e^{\theta}+\mathscr{S}_{2}e^{\frac{i(m_{1}+m _{2})\pi}{k}}}{\mathscr{S}_{1}e^{\frac{i(m_{1}+m_{2})\pi}{k}+\theta}-\mathscr{S} _{2}}\,.\] Here we introduced \[\mathscr{S}_{j}=\text{sgn}\left[\sin\big{(}\frac{\pi m_{j}}{k}\big{)}\right]\,. \tag{3.21}\] This is the same formula that one could find by bootstrap in appendix C by considering representations with general \(m\). In principle, the correct way to obtain the S matrix involving \(m=+2,+3,\dots\) particles is to fuse the \(m_{1}=m_{2}=+1\) S matrix above. Similarly, we could have considered the S matrix acting on \(\rho^{\text{\tiny F}}_{\text{\tiny rel}}\otimes\rho^{\text{\tiny F}}_{\text{ \tiny rel}}\) with \(m_{1}=m_{2}=-1\) and fused it to obtain the S matrices involving \(m=-2,-3,\dots\) particles. However, as it turns out, the two procedures give the same result. This is a consequence of the fact that bound states transform in supersymmetric representations, and that the symmetry constrains the 2-to-2 scattering completely, up of course to a dressing factor. We postpone the discussion of the fusion properties of these S matrices until after we solve the crossing equations, as the normalisation of each block is necessary to ensure good fusion. Let us however briefly discuss the pole structure of each block, which is suggestive of the allowed fusion channels. We see that the bound-state condition \(F_{12}^{\text{\tiny BB}}=0\) has a solution in the physical strip if 1. \(\mathscr{S}_{1}=\mathscr{S}_{2}\) and \(2\nu k<m_{1}+m_{2}<(2\nu+1)k\), with \(\nu\in\mathbb{Z}\), or 2. \(\mathscr{S}_{1}=-\mathscr{S}_{2}\) and \((2\nu-1)k<m_{1}+m_{2}<2\nu k\), with \(\nu\in\mathbb{Z}\). Recall from our construction of the representations that all the particle content of the model must be \(2k\)-periodic, see eq. (3.15). If we start from two left-particles with \(m=+1\) we are in case 1., and we can go on building bound states in this way as long as the masses are sufficiently small with respect to \(k\). 
In this way, we can go on until we create a particle of mass \(m=k-1\). Starting from left fundamental particles, we cannot use rule 2. to create a bound state, as we never leave the region \(\mathscr{S}_{1}=\mathscr{S}_{2}=+1\). Recall however that, again due to (3.15), the representations and hence the S-matrix elements have a simple transformation rule under shifting \(m_{j}\to m_{j}\pm k\): such a shift is tantamount to flipping the statistics of the \(j\)-th particle. For instance \[\mathbf{S}_{12}^{\text{\tiny BB}}(m_{1},m_{2};\theta)=\mathbf{S}_{12}^{\text{ \tiny FB}}(m_{1}-k,m_{2};\theta)\,,\qquad\mathbf{S}_{12}^{\text{\tiny BB}}(m_{ 1},m_{2};\theta)=\mathbf{S}_{12}^{\text{\tiny BF}}(m_{1},m_{2}-k;\theta)\,, \tag{3.22}\] It follows that, for instance, the relativistic limit of a left-bound state with \(m=k-1\) is equivalent to the relativistic limit of a right-particle with \(m=-1\). In fact, up to making the appropriate shifts in \(m_{1}\) and/or \(m_{2}\), eq. (3.20) may be used to describe the limit of the scattering of arbitrary combinations of left/right particles. We come to the conclusions that in this model we can consider \((k-1)\) distinct massive representations, with \[\mu\in\left\{2\sin\left(\tfrac{\pi}{k}\right),2\sin\left(\tfrac{2\pi}{k} \right),\dots,2\sin\left(\tfrac{(k-1)\pi}{k}\right)\right\}\,. \tag{3.23}\] Clearly for \(k\) odd all masses come in pairs, and the interpretation is that the two particles with identical masses are one the antiparticle of the other. In the case where \(k\) happens to be even, the particle of mass \(\mu=2\sin\tfrac{\pi}{2}=2\) is its own antiparticle. In this sense, there is no longer any distinction between "left" and "right" particles when we are considering bound states. At best, we can single out the fundamental left and right particles as the one having \(m=+1\) mod\(k\) and \(m=-1\) mod\(k\), respectively, but in this relativistic limit one will be equivalent to a bound state of the other. It is also intriguing to note that something special happens at \(k=1\) and \(k=2\). In the first case, there are no massive modes at all. This is in good accord with the fact that the dual theory only features four massless (and free) bosons and fermions [6]. For \(k=2\), it appears that left and right modes must be identified, _i.e._ there are fewer massive modes than one would naively expect from the pp-wave spectrum of \(AdS_{3}\times S^{3}\). It would be very interesting to understand this fact from a worldsheet CFT/WZNW construction. We have discussed at some length the bound-state condition \(F_{12}^{\text{\tiny BB}}=0\), which we have used to construct the \((k-1)\) massive particles of the model. These are the equivalent of the _string theory bound states_, which transform in _symmetric representations_[32]. While a detailed analysis of the mirror theory is beyond the scope of this paper, on general grounds and by analogy with the pure-RR (\(k=0\)) case we would expect to find _mirror bound states_ too, and we would expect them to transform in the _anti-symmetric representation_. In other words, such bound states should arise when \(A_{12}^{\text{\tiny BB}}=0\) or, up to a normalisation, \(F_{12}^{\text{\tiny BB}}=\infty\). It is easy to see that there are indeed such poles in the physical strip, and that they appear when \(k<(m_{1}+m_{2})<2k\) in the bosonic case. An equivalent result holds for \(-2k<(m_{1}+m_{2})<-k\) in the fermionic case. 
This suggests that, after the relativistic limit, both "string" and "mirror" bound states live in the same physical strip -- which is in a sense expected for a relativistic theory. These new poles do not generate new types of representation. In fact, let us briefly review the types of representations emerging out of either bound-state pole. For definiteness, we consider \(0<m_{1},m_{2}<k\) and the bosonic representation \(\rho_{\text{\tiny rel}}^{\text{\tiny B}}\). We have \[0<m_{1}+m_{2}<k \rho_{\text{\tiny rel}}^{\text{\tiny B}}(m_{1})\otimes\rho_{\text {\tiny rel}}^{\text{\tiny B}}(m_{2})\supset\rho_{\text{\tiny rel}}^{\text{ \tiny B}}(m_{1}+m_{2}), \tag{3.24}\] \[k<m_{1}+m_{2}<2k \rho_{\text{\tiny rel}}^{\text{\tiny B}}(m_{1})\otimes\rho_{\text {\tiny rel}}^{\text{\tiny B}}(m_{2})\supset\rho_{\text{\tiny rel}}^{\text{ \tiny B}}(m_{1}+m_{2})\cong\rho_{\text{\tiny rel}}^{\text{\tiny B}}(m_{1}+m_{2} -k),\] where the first line corresponds to the "string" bound state with \(F_{12}^{\text{\tiny BB}}=0\) and the second line to the "mirror" one with \(F_{12}^{\text{\tiny BB}}=\infty\). Scattering of massive and massless particles.Let us now consider the case where one of the two particles is massless, meaning that it has \(m=0\ \text{mod}\,k\). Because of the \(2k\)-periodicity of \(m\), it is sufficient to consider two cases. Let us set, with a slight abuse of notation \[\mathscr{S}_{i}=\begin{cases}+1\,,&m_{i}=0\ \text{mod}(2k)\,,\\ -1\,,&m_{i}=k\ \text{mod}(2k)\,.\end{cases} \tag{3.25}\] Finally, we need to recall that massless particles can have chirality "plus", meaning that they move to the right at the speed of light, or "minus, meaning they move to the left. Accordingly, the physical scattering processes involving one massive and one massless particle are \[A_{12}^{\text{\tiny B+B}}=1\,, \qquad B_{12}^{\text{\tiny B+B}}=\mathscr{S}_{1}\,, \tag{3.26}\] \[C_{12}^{\text{\tiny B+B}}=0\,, \qquad D_{12}^{\text{\tiny B+B}}=e^{-\frac{i\pi m_{2}}{k}}\,,\] \[E_{12}^{\text{\tiny B+B}}=0\,, \qquad F_{12}^{\text{\tiny B+B}}=-\,\mathscr{S}_{1}e^{-\frac{i \pi m_{2}}{k}}\,.\] and \[A_{12}^{\text{\tiny BB-}}=1\,, \qquad B_{12}^{\text{\tiny BB-}}=e^{-\frac{i\pi m_{1}}{k}}\,, \tag{3.27}\] \[C_{12}^{\text{\tiny BB-}}=0\,, \qquad D_{12}^{\text{\tiny BB-}}=\mathscr{S}_{2}\,,\] \[E_{12}^{\text{\tiny BB-}}=0\,, \qquad F_{12}^{\text{\tiny BB-}}=-\,\mathscr{S}_{2}e^{-\frac{i \pi m_{1}}{k}}\,.\] We see that the scattering is particularly simple, without any rotation in isotopic space. The S-matrix elements of the inverse processes can be found by imposing braiding unitarity, while the statistics can be changed by using the monodromy condition (3.22). Massless scattering.Let us come to the case of two massless particles. Like it was the case before taking the limit [33], we need examine the scattering depending on the chirality of the particles involved. The most natural case is the one where they collide head on, which gives a free S matrix, up to some signs. Namely we find \[\begin{split} A^{\text{B+B-}}_{12}&=1\,,\qquad\qquad B ^{\text{B+B-}}_{12}=\mathscr{S}_{1}\,,\qquad C^{\text{B+B-}}_{12}=0\,,\\ D^{\text{B+B-}}_{12}&=\mathscr{S}_{2}\,,\qquad\quad E ^{\text{B+B-}}_{12}=0\,,\qquad\quad F^{\text{B+B-}}_{12}=-\,\mathscr{S}_{1} \mathscr{S}_{2}\,.\end{split} \tag{3.28}\] It is also possible that two massless particles have the same chirality, though this is not a perturbative scattering process. 
Here we have \[\begin{split} A^{\text{B+B+}}_{12}&=1\,,\qquad\qquad B ^{\text{B+B+}}_{12}=-\,\mathscr{S}_{1}\tanh\frac{\theta}{2}\,,\\ C^{\text{B+B+}}_{12}&=\frac{\mathscr{S}_{1}\mathscr{S }_{2}}{\cosh\frac{\theta}{2}}\,,\qquad\quad D^{\text{B+B+}}_{12}=\mathscr{S}_{ 2}\tanh\frac{\theta}{2}\,,\\ E^{\text{B+B+}}_{12}&=\frac{1}{\cosh\frac{\theta}{2}}\,, \qquad\quad F^{\text{B+B+}}_{12}=\mathscr{S}_{1}\mathscr{S}_{2}\,.\end{split} \tag{3.29}\] It is worth noting that for these processes, and only for these processes, the relativistic bootstrap yields a more general solution, see appendix C. In conclusion, we have found that the poles of the S matrix suggest that the model should feature \(k-1\) massive particles. Moreover, there are two types of massless particles, distinguished by their highest-weight states. Hence, it is sufficient to consider \(k+1\) distinct representations, corresponding to the S-matrix blocks by \[\mathbf{S}^{\text{\tiny BB}}(m_{1},m_{2};\theta)\,,\qquad m_{i}=0,1,\dots,k\,. \tag{3.30}\] ## 4 Crossing equations and dressing factors Above we have fixed the matrix part of the S matrix by symmetry arguments. We now turn to fixing the normalisations by imposing (relativistic) crossing symmetry as well as compatibility with the bound-state structure. ### Crossing equations The construction of the crossing equations in the original model related "left" massive particles to "right" massive particles (and massless particles to themselves). After the limit, this amounts to relating a representation with bosonic/fermionic highest-weight state of mass \(m\) to one of mass \(-m\) with fermionic/bosonic highest weight state. Clearly, the S matrices in the normalisation given above (with \(A_{m,m^{\prime}}=1\)) are not crossing-symmetric by themselves. It is necessary to multiply each block by an arbitrary dressing factor. In analogy with the original theory let us redefine each block by a multiplicative constant. For the case \(0<m_{1},m_{2}<k\), let us set \[\mathbf{S}^{\text{\tiny BB}}(m_{1},m_{2},\theta)\quad\to\quad[\Phi(m_{1},m_{2}; \theta)]^{\frac{1}{2}}\,\left[\sigma(m_{1},m_{2};\theta)\right]^{-1}\,\mathbf{S }^{\text{\tiny BB}}(m_{1},m_{2},\theta)\,. \tag{4.1}\] In this manner, the relativistic limit of the full S-matrix associated with the scattering of two particles in the representations \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{1},\theta_{1})\otimes\rho^{\text {\tiny B}}_{\text{\tiny rel}}(m_{1},\theta_{1})\) and \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{2},\theta_{2})\otimes\rho^{\text {\tiny B}}_{\text{\tiny rel}}(m_{2},\theta_{2})\) is given by \[\mathbf{S}^{\text{\tiny BB}}_{su(1,1)\overset{\text{\tiny$ \oplus$}}{c.}}(m_{1},m_{2};\theta)\\ =\Phi(m_{1},m_{2};\theta)\left[\sigma(m_{1},m_{2};\theta)\right]^{-2} \left(\mathbf{S}^{\text{\tiny BB}}(m_{1},m_{2},\theta)\otimes\mathbf{S}^{\text {\tiny BB}}(m_{1},m_{2},\theta)\right). \tag{4.2}\] Both \(\Phi\) and \(\sigma\) in (4.2) are phases, _i.e._ \[[\Phi(m_{1},m_{2};\theta^{*})]^{*}\,\Phi(m_{1},m_{2};\theta)=1\,,\qquad[\sigma (m_{1},m_{2};\theta^{*})]^{*}\,\sigma(m_{1},m_{2};\theta)=1\,, \tag{4.3}\] and satisfy braiding unitarity, \[\Phi(m_{2},m_{1};-\theta)\,\Phi(m_{1},m_{2};\theta)=1\,,\qquad\sigma(m_{2},m_ {1};-\theta)^{-2}\,\sigma(m_{1},m_{2};\theta)^{-2}=1\,. \tag{4.4}\] Admittedly, splitting the prefactor in two pieces as we have done is a little arbitrary. Here we want to single out a piece that, at least for the scattering of physical magnons, has no zeros or poles in the physical strip. 
This is the minimal dressing factor \(\sigma\), which we want to identify with the limit of a non-trivial dressing factor from the full (non-relativistic) theory. The poles and zeros will come from the CDD factor \(\Phi\), which satisfies a sort of homogeneous crossing equation and should be related to a simple (perhaps rational) prefactor in the full non-relativistic theory.12 Footnote 12: It should be remarked however that the condition that \(\sigma\) has no poles or zeros is not stable under fusion; in this sense, the splitting between \(\Phi\) and \(\sigma\) is even more artificial. As discussed, the monodromy under \(m\to m+k\) of the representations and hence of the matrix part of the S-matrix, see (3.22), indicates that we only need to consider \((k-1)\) massive representations and two massless ones. If we assume this to be the case, it is natural to impose that the dressing factors too are compatible with the monodromy (3.22). In this way we can use the crossing equation to relate \(m\leftrightarrow(k-m)\), rather than \(m\leftrightarrow-m\). We find that it must be, for \(0<m_{1},m_{2}<k\) \[\begin{split}\sigma(m_{1},m_{2};\theta)^{-2}\,\sigma(k-m_{1},m_{ 2};\theta+i\pi)^{-2}&=\left(f_{m_{1},m_{2}}(\theta)\right)^{2},\\ \sigma(m_{2},m_{1};\theta)^{-2}\,\sigma(m_{2},k-m_{1};\theta+i \pi)^{-2}&=\left(f_{m_{1},m_{2}}(\theta)\right)^{2},\end{split} \tag{4.5}\] with \[f_{m_{1},m_{2}}(\theta)=\frac{\sinh\Bigl{(}\frac{\theta}{2}-\frac{i\pi}{2k}(m_ {1}-m_{2})\Bigr{)}}{\sinh\Bigl{(}\frac{\theta}{2}-\frac{i\pi}{2k}(m_{1}+m_{2}) \Bigr{)}}\,,\qquad 0<m_{1},m_{2}<k\,; \tag{4.6}\] while the CDD factors satisfy, by definition, the homogeneous crossing equations \[\Phi(m_{1},m_{2};\theta)\,\Phi(k-m_{1},m_{2};\theta+i\pi)=1\,,\qquad\Phi(m_{2},m_ {1};\theta)\,\Phi(m_{2},k-m_{1};\theta+i\pi)=1\,, \tag{4.7}\] To solve this equation, we start from a process involving the scattering of two fundamental (left) particles, such as \[\mathbf{S}\left|Y(\theta_{1})\,Y(\theta_{2})\right\rangle=\Phi(1,1;\theta_{1 2})\,\left[\sigma(1,1;\theta_{12})\right]^{-2}\left|Y(\theta_{2})\,Y(\theta_{ 1})\right\rangle \tag{4.8}\] and demand that \(\sigma(1,1;\theta)\) has no zeros and poles (and that it solves crossing). This is enough to fix \(\sigma(1,1;\theta)\). Moreover, it follows that \(\Phi(1,1;\theta)\) must contain the \(s\)-channel bound-state pole. Having determined in such a way the fundamental dressing factors, the remaining \(\sigma\)s and \(\Phi\)s follow by fusion. As we remarked, it so happens that through the fusion procedure \(\sigma(m_{1},m_{2};\theta)\) will develop singularities in the physical strip; this happens for \(m_{1}+m_{2}>k\) as we shall see. ### Massive dressing factor Let us consider first the case where both particles are massive, _i.e._\(m_{j}\neq 0\bmod k\). It is easy to check that the following expression solves the crossing equations. \[\sigma(m_{1},m_{2};\theta)^{-2}=\frac{R\left(\theta-\frac{i\pi(m_{1}+m_{2})}{k }\right)^{2}\,R\left(\theta+\frac{i\pi(m_{1}+m_{2})}{k}\right)^{2}}{R\left( \theta-\frac{i\pi(m_{1}-m_{2})}{k}\right)^{2}\,R\left(\theta+\frac{i\pi(m_{1} -m_{2})}{k}\right)^{2}}\,, \tag{4.9}\] where \(R(\theta)\) can be expressed in terms of \(\Gamma\)-functions or Barnes \(G\)-function13 Footnote 13: The \(G\)-function obeys \(G(z+1)=\Gamma(z)\,G(z)\) with \(G(1)=1\). The function \(\psi(z)\) is the Digamma function, defined by \(\psi(z)=\frac{\mathrm{d}}{dz}\log\Gamma(z)\). 
\[R(\theta)\equiv\frac{G(1-\frac{\theta}{2\pi i})}{G(1+\frac{\theta}{2\pi i})}= \left(\frac{e}{2\pi}\right)^{+\frac{\theta}{2\pi i}}\prod_{\ell=1}^{\infty} \frac{\Gamma(\ell+\frac{\theta}{2\pi i})}{\Gamma(\ell-\frac{\theta}{2\pi i})} \,e^{-\frac{\theta}{\pi i}\,\psi(\ell)}\,. \tag{4.10}\] The function \(R(\theta)\) obeys the properties \[R(-\theta)\,R(\theta)=1\,,\qquad[R(\theta^{*})]^{*}\,R(\theta)=1\,, \tag{4.11}\] as well as the monodromy property \[R(\theta-2\pi i)=i\,\frac{\pi}{\sinh\frac{\theta}{2}}\,R(\theta)\,,\qquad R( \theta+\pi i)=\frac{\cosh\frac{\theta}{2}}{\pi}\,R(\theta-\pi i)\,, \tag{4.12}\] which can be used to prove the crossing equation. In [21], where a similar relativistic limit was considered (albeit with a different dispersion relation and no bound-states), a dressing factor was also proposed for processes related to some special cases of what we considered here, namely \(m_{1}=1\) and \(m_{2}=1\) or \(m_{2}=k-1\) (this is the case related to the scattering of fundamental particles). Though part of such dressing factors are given only implicitly as a Fourier transform, it is possible to check numerically that they agree with (4.9).14 Similar dressing factors were considered by Fendley and Intriligator in [22; 23], for any \(m_{1}\) and \(m_{2}\). The main difference between our discussion and that of [22; 23] is in the structure of the representations: we will deal with four-dimensional \(\rho\otimes\rho\) representations, while [22; 23] dealt with \(\rho\). This will also affect the poles and "CDD factors" of the model: while [22; 23] normalised their S matrix by a CDD factor \(\Phi(\theta)\) so that \(\Phi(\theta)\,\sigma^{-1}(\theta)\,\mathbf{S}(\theta)\) has the correct pole structure, we will rather use \(\Phi^{1/2}(\theta)\,\sigma^{-1}(\theta)\,\mathbf{S}(\theta)\) so that \(\Phi(\theta)\,\sigma^{-2}(\theta)\,[\mathbf{S}\otimes\mathbf{S}](\theta)\) has the right poles, as we shall discuss below. Poles.While the functions \(\sigma(m_{1},m_{2};\theta)^{-2}\) solve the crossing equations, they do not have the correct pole structure to account for the bound states of the theory. For instance, we expect a pole in the physical strip for the scattering of two particles with \(m_{1}=m_{2}=1\). We see that \(R(\theta)\) has the following singularities \[\text{poles:}\;\theta=-2\pi i\,n\,,\qquad\text{zeros:}\;\theta=+2\pi i\,n\,, \qquad n=1,2,\ldots\,. \tag{4.13}\] Hence the expression in (4.9) has no poles in the physical strip \((0,i\pi)\); however, it does contain a zero in the strip when \(m_{1}+m_{2}>k\). In fact \[R\big{(}\theta+\frac{i\pi(m_{1}+m_{2})}{k}\big{)}{=0}\quad\text{at}\quad \theta=\frac{i\pi}{k}(2k-m_{1}-m_{2}) \tag{4.14}\] and the numerator of \(\sigma(m_{1},m_{2};\theta)^{-2}\) has a zero of order two. This zero is in the physical strip when \(m_{1}+m_{2}>k\) and it is necessary to cancel a second-order pole at \(\theta=\frac{i\pi}{k}(2k-m_{1}-m_{2})\) arising in \(\mathbf{S}^{\text{\tiny{BB}}}(m_{1},m_{2},\theta)\otimes\mathbf{S}^{\text{ \tiny{BB}}}(m_{1},m_{2},\theta)\) from the denominators of the coefficients of the S matrix (3.20) when \(\mathscr{S}_{1}=\mathscr{S}_{2}=+1\). Taking the zero of \(\sigma(m_{1},m_{2};\theta)^{-2}\) into account, all the S-matrix elements have no poles within the strip; all poles will be contained in \(\Phi(m_{1},m_{2};\theta)\). "CDD" factors.The pole structure will necessarily come from the pre-factor \(\Phi(1,1;\theta)\). Because of eq. 
(4.7), it is necessary to simultaneously modify \(\Phi(k-1,1;\theta)\) in an appropriate way. To this end, following [34], let us introduce the building block \[[m]_{\theta}\equiv\frac{\sinh\left(\frac{\theta}{2}+\frac{i\pi m}{2k}\right)}{ \sinh\left(\frac{\theta}{2}-\frac{i\pi m}{2k}\right)}\,, \tag{4.15}\] which almost satisfies a trivial crossing equation, up to a sign: \[[m]_{\theta}\,[k-m]_{\theta\pm i\pi}=-1\,. \tag{4.16}\] The CDD factors for the \(m_{1}=1\) and \(m_{2}=k-1\) can then be defined as \[\Phi(1,1;\theta)=\Phi(k-1,k-1;\theta) =[2]_{\theta}[0]_{\theta}\,, \tag{4.17a}\] \[\Phi(1,k-1;\theta)=\Phi(k-1,1;\theta) =[k]_{\theta}[k-2]_{\theta}\,. \tag{4.17b}\] From (4.16) we see that pairs of building blocks \([m]_{\theta}\) and \([k-m]_{\theta}\) satisfy the homogeneous equation (4.7) but for a minus sign; this minus sign plays no role since all the CDD factors in (4.17) contain an even number of building blocks. By fusing these fundamental building blocks it is possible to obtain a universal formula valid for any \(m_{1}\) and \(m_{2}\in\{1,\ldots,k-1\}\): \[\Phi(m_{1},m_{2};\theta)=\frac{\prod_{n=0}^{N}\left(\left[\left|m_{1}-m_{2} \right|+2n\right]_{\theta}\right)^{2}}{\left[\left|m_{1}-m_{2}\right|\right]_{ \theta}\left[m_{1}+m_{2}\right]_{\theta}}\,,\qquad N=\begin{cases}m_{1}&m_{1} \leq m_{2}\,,\\ m_{2}&m_{2}<m_{1}\,.\end{cases} \tag{4.18}\] This formula corresponds to the minimal S-matrix of Toda theories of \(A_{k-1}\) type, with \(k\) playing the role of the Coxeter number of the Lie algebra (see for example [34; 35]). Fusion.Let us now discuss why this S matrix, with the given normalisation, behaves well under fusion. If we consider two particles with quantum numbers \(m_{1}\) and \(m_{2}\) such that \(0<m_{1}+m_{2}<k\) then there is a pole in the S-matrix. This pole appears in the S-matrix element involving the scattering of \(|\phi^{\text{\tiny B}}\otimes\phi^{\text{\tiny B}}\rangle\) with \(|\phi^{\text{\tiny B}}\otimes\phi^{\text{\tiny B}}\rangle\), and it comes from the prefactor \(\Phi(m_{1},m_{2};\theta)\). The singularity is located at rapidity \[\theta=\frac{i\pi}{k}(m_{1}+m_{2})\,. \tag{4.19}\] By contrast, the scattering of \(|\varphi^{\text{\tiny F}}\otimes\varphi^{\text{\tiny F}}\rangle\) with \(|\varphi^{\text{\tiny F}}\otimes\varphi^{\text{\tiny F}}\rangle\) vanishes, because the S-matrix element \(F_{12}^{\text{\tiny B}}\) has a zero there. This suggests that a bound state in the symmetric representation must exist. In each copy of the tensor product there should be contained the state \[|\phi^{\text{\tiny B}}(m_{b},\theta_{b})\rangle=|\phi^{\text{\tiny B}}(m_{1}, \theta_{b}-\frac{i\pi}{k}m_{2})\rangle\otimes|\phi^{\text{\tiny B}}(m_{2}, \theta_{b}+\frac{i\pi}{k}m_{1})\rangle\,, \tag{4.20}\] with associated fermionic partner \(|\varphi^{\text{\tiny F}}_{\text{\tiny L}}(m_{b},\theta_{b})\rangle\) obtained by acting on \(|\phi^{\text{\tiny B}}(m_{b},\theta_{b})\rangle\) with \(\mathbf{q}\) or \(\mathbf{\widetilde{s}}\). 
If the doublet of states defined in this manner is a bound state representation, then \(\forall\ m_{3}\in\{1,\ldots,k-1\}\) and \(\theta=\theta_{b}-\theta_{3}\) the projection of \[\Big{(}\mathbf{S}^{\text{\tiny BB}}(m_{1},m_{3},\theta-\frac{i\pi}{k}m_{2}) \otimes 1\Big{)}\cdot\Big{(}1\otimes\mathbf{S}^{\text{\tiny BB}}(m_{2},m_{3}, \theta+\frac{i\pi}{k}m_{1})\Big{)} \tag{4.21}\] onto the vector space spanned by \[\begin{split}&|\phi^{\text{\tiny B}}(m_{b},\theta_{b})\,\phi^{ \text{\tiny B}}(m_{3},\theta_{3})\rangle\,,\quad|\phi^{\text{\tiny B}}(m_{b}, \theta_{b})\,\varphi^{\text{\tiny F}}(m_{3},\theta_{3})\rangle,\\ &|\varphi^{\text{\tiny F}}(m_{b},\theta_{b})\,\phi^{\text{\tiny B }}(m_{3},\theta_{3})\rangle\,,\quad|\varphi^{\text{\tiny F}}(m_{b},\theta_{b}) \,\varphi^{\text{\tiny F}}(m_{3},\theta_{3})\rangle\,,\end{split} \tag{4.22}\] needs to be equal to \(\mathbf{S}^{\text{\tiny BB}}(m_{b},m_{3},\theta)\). This fact can be easily checked by using the S-matrix elements (3.20), together with the fusion properties of the dressing factors; these factors satisfy \[\begin{split}&\sigma(m_{3},m_{b};\theta)^{-1}=\sigma(m_{3},m_{ 1};\theta-\frac{i\pi}{k}m_{2})^{-1}\sigma(m_{3},m_{2};\theta+\frac{i\pi}{k}m_{ 1})^{-1}\,,\\ &\Phi(m_{3},m_{b};\theta)=\Phi(m_{3},m_{1};\theta-\frac{i\pi}{k}m_ {2})\Phi(m_{3},m_{2};\theta+\frac{i\pi}{k}m_{1})\,,\end{split} \tag{4.23}\] \(\forall\ m_{1},\,m_{2},\,m_{3}\) and \(m_{b}\equiv m_{1}+m_{2}\in\{1,\ldots,k-1\}\). This substantiates our claim that our construction of \(\sigma\)'s is indeed compatible with fusion. If instead \(k<m_{1}+m_{2}<2k\), the scattering situation is reversed: \(F_{12}^{\text{\tiny BB}}\) has a pole for \[\theta=\frac{i\pi}{k}(2k-m_{1}-m_{2}) \tag{4.24}\] in the physical strip15. This singularity suggests the existence of a bosonic bound state involving Footnote 15: As previously remarked, for \(\mathscr{S}_{1}=\mathscr{S}_{2}=1\), a pole in the element \(F_{12}^{\text{\tiny BB}}\) in (3.20) appears which is cancelled by a zero of \(\sigma(m_{1},m_{2};\theta)^{-1}\) located at the same point. However, a pole is introduced also by \(\Phi(m_{1},m_{2};\theta)\) and the element \(F_{12}^{\text{\tiny BB}}\), after having been multiplied by the dressing factor, has a singularity at \(\theta=\frac{i\pi}{k}(2k-m_{1}-m_{2})\). \[|\varphi^{\text{\tiny B}}(m_{b},\theta_{b})\rangle=|\varphi^{\text{\tiny F}}(m _{1},\theta_{b}-\frac{i\pi}{k}(k-m_{2}))\rangle\otimes|\varphi^{\text{\tiny F} }(m_{2},\theta_{b}+\frac{i\pi}{k}(k-m_{1}))\rangle\,, \tag{4.25}\] with \(m_{b}\equiv m_{1}+m_{2}\); similarly to before, the fermionic partner \(|\phi^{\text{\tiny F}}(m_{b},\theta_{b})\rangle\) is obtained by acting with \(\mathbf{s}\) or \(\widetilde{\mathbf{q}}\) on \(|\varphi^{\text{\tiny B}}(m_{b},\theta_{b})\rangle\). However, this is not a new particle: as already discussed in the previous sections we can identify \[\left(|\varphi^{\text{\tiny B}}(m_{b},\theta_{b})\rangle\,,|\phi^{\text{\tiny F }}(m_{b},\theta_{b})\rangle\right)=\left(|\varphi^{\text{\tiny F}}(m_{b}-k, \theta_{b})\rangle\,,|\phi^{\text{\tiny B}}(m_{b}-k,\theta_{b})\rangle\right), \tag{4.26}\] and the bound state can therefore be recognised as a particle of quantum number \(k-m_{b}\) already found in the fusion process. 
Then it is possible to show that the action of \[\left(\mathbf{S}^{\text{\tiny BB}}(m_{1},m_{3},\theta-\frac{i\pi}{k}(k-m_{2}) )\otimes 1\right)\!\cdot\!\left(1\otimes\mathbf{S}^{\text{\tiny BB}}(m_{2},m_{3}, \theta+\frac{i\pi}{k}(k-m_{1}))\right) \tag{4.27}\] on the basis \[\begin{split}&|\phi^{\text{\tiny B}}(m_{b}-k,\theta_{b})\, \phi^{\text{\tiny B}}(m_{3},\theta_{3})\rangle\,,\quad|\phi^{\text{\tiny B}} (m_{b}-k,\theta_{b})\,\varphi^{\text{\tiny F}}(m_{3},\theta_{3})\rangle,\\ &|\varphi^{\text{\tiny F}}(m_{b}-k,\theta_{b})\,\phi^{\text{\tiny B }}(m_{3},\theta_{3})\rangle\,,\quad|\varphi^{\text{\tiny F}}(m_{b}-k,\theta_{b })\,\varphi^{\text{\tiny F}}(m_{3},\theta_{3})\rangle\,,\end{split} \tag{4.28}\] is equal to \(\mathbf{S}^{\text{\tiny BB}}(m_{b}-k,m_{3},\theta)\). This implies that if we start fusing massive particles in the sector \(\mathscr{S}_{1}=\mathscr{S}_{2}=+1\) we never go outside this sector and we find exactly \(k-1\) massive particles.

Bound states in the crossed channels. From the poles of the S matrix in the \(s\)-channel we have read off the masses of the bound states and built all the representations starting from \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(+1,\theta)\) by fusion.

Figure 6: Examples of bound states in the symmetric representation (in red) propagating in the \(s\)-channel (on the left) and \(t\)-channel (on the right) for \(m_{1}+m_{2}<k\), and their fusing angles.

Additional poles come from the \(t\)-channel. Consider the bound state on the LHS of figure 6 (in red), which is obtained by fusing \(|\phi^{\text{\tiny B}}(m_{1})\rangle\) and \(|\phi^{\text{\tiny B}}(m_{2})\rangle\). If we scatter particles in the representations \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{1},\theta_{1})\) and \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{2},\theta_{2})\) this bound state will appear as a particle propagating in the \(s\)-channel. However, the same fusing vertex responsible for the propagation of this bound state in the \(s\)-channel in the scattering process between \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{1},\theta_{1})\) and \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{2},\theta_{2})\) is also responsible for the propagation of the bound state \(|\phi^{\text{\tiny B}}(m_{2})\rangle\) in the \(t\)-channel of the scattering process between \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{1},\theta_{1})\) and \(\rho^{\text{\tiny B}}_{\text{\tiny rel}}(m_{1}+m_{2},\theta_{2})\). Remark that in the \(s\)-channel, the residue of the S matrix at the poles is a projector onto the bound state representation. In our case the residue of the original \(4\times 4\) S-matrix at the pole is a matrix of rank two projecting onto a two-dimensional (short) subrepresentation. As a consequence of this fact the residue of the full \(16\times 16\) S-matrix in (4.2) has rank 4 on the bound state. On the other hand, the residue of the S matrix at poles associated with bound states propagating in the \(t\)-channel has a maximal rank. This can be seen in the figure by considering similar diagrams involving \(\phi^{\text{\tiny F}}_{m_{1}}\) and \(\phi^{\text{\tiny F}}_{m_{2}}\); the corresponding process has no pole in the \(s\)-channel, but it is still singular in the \(t\)-channel. The appearance of the \(t\)-channel poles can also be seen from the fusion of the CDD factors. Consider first the scattering of two fundamental particles with \(m_{1}=m_{2}=1\). The CDD factor is simply \(\Phi(1,1;\theta)=[2]_{\theta}\), with a single \(s\)-channel pole at \(\theta=\frac{2i\pi}{k}\).
Upon fusion we have \[\Phi(1,2;\theta)=\Phi(2,1;\theta)=\Phi(1,1;\theta+\tfrac{i\pi}{k})\Phi(1,1; \theta-\tfrac{i\pi}{k})=[3]_{\theta}[1]_{\theta}\,. \tag{4.29}\] This expression has two poles: one at \(\theta=\frac{i\pi}{k}(2+1)=\frac{3i\pi}{k}\) due to \([3]_{\theta}\) and associated with the propagation of a particle in the \(s\)-channel, and one at \(\theta=\frac{i\pi}{k}(2-1)=\frac{i\pi}{k}\) and associated with the propagation of a particle in the \(t\)-channel. It is possible to check that the transmission elements in the S-matrix have residues with opposite signs at the locations of these poles, as expected in unitary S matrices with particles propagating in different channels. Comparison with the full, non-relativistic S matrix.We finally compare the dressing factors found so far with the dressing factors of the full theory before the limit. Using (3.9) we can take the relativistic limit on the equations in (2.44): in our normalisation, the "blue terms" go to 1 in the limit, \[\sigma^{\bullet\bullet}_{\text{\tiny LL}}(u_{1},u_{2})^{2}\hat{ \sigma}^{\bullet\bullet}_{\text{\tiny RL}}(\bar{u}_{1},u_{2})^{2} \to f_{1,1}(\theta_{12})^{-2}\times\,1\,, \tag{4.30}\] \[\sigma^{\bullet\bullet}_{\text{\tiny LL}}(\bar{u}_{1},u_{2})^{2} \hat{\sigma}^{\bullet\bullet}_{\text{\tiny RL}}(u_{1},u_{2})^{2} \to f_{k-1,1}(\theta_{12})^{-2}\times\,1\,.\] Comparing these equations with (4.5), we recognise that the phases \(\sigma^{\bullet\bullet}_{\text{\tiny LL}}\) and \(\hat{\sigma}^{\bullet\bullet}_{\text{\tiny RL}}\) in the limit can be matched to \[\sigma^{\bullet\bullet}_{\text{\tiny LL}}(u_{1},u_{2})^{2} \to\sigma(1,1;\theta_{12})^{2}\,, \tag{4.31a}\] \[\hat{\sigma}^{\bullet\bullet}_{\text{\tiny RL}}(u_{1},u_{2})^{2} \to\sigma(k-1,1;\theta_{12})^{2}\,. \tag{4.31b}\] Performing the same limit on (2.45) we obtain \[\sigma^{\bullet\bullet}_{\text{\tiny RR}}(\bar{u}_{1},u_{2})^{2} \hat{\sigma}^{\bullet\bullet}_{\text{\tiny LR}}(u_{1},u_{2})^{2} \to f_{1,k-1}(\theta_{12})^{-2}\times\,([2]_{\theta_{12}+i\pi})^{-2}\,, \tag{4.32}\] \[\sigma^{\bullet\bullet}_{\text{\tiny RR}}(u_{1},u_{2})^{2}\hat{ \sigma}^{\bullet\bullet}_{\text{\tiny LR}}(\bar{u}_{1},u_{2})^{2} \to f_{k-1,k-1}(\theta_{12})^{-2}\times\,([2]_{\theta_{12}})^{-2}\,.\] This time the blue factors in (2.45) do not become trivial in the limit and we identify the solutions of the crossing equations with16 Footnote 16: The second solution can also be written as \(\sigma^{\bullet\bullet}_{\text{RR}}(u_{1},u_{2})^{2}\to\sigma(-1,-1;\theta_{12})^ {2}\) using the following property of \(\sigma\): \(\sigma(m_{1}+k,m_{2}+k;\theta)^{-1}=-[m_{1}+m_{2}]_{\theta}\,\sigma(m_{1},m_{2} ;\theta)^{-1}\). \[\tilde{\sigma}^{\bullet\bullet}_{\text{LR}}(u_{1},u_{2})^{2} \to\sigma(1,k-1;\theta_{12})^{2}\,, \tag{4.33a}\] \[\sigma^{\bullet\bullet}_{\text{RR}}(u_{1},u_{2})^{2} \to([2]_{\theta_{12}})^{-2}\sigma(k-1,k-1;\theta_{12})^{2}\,. \tag{4.33b}\] This term may appear a little baffling. To understand why it is necessary, recall that \(\sigma(m_{1},m_{2};\theta)^{-1}\) has a simple zero at \(\theta=\frac{i\pi}{k}(2k-m_{1}-m_{2})\) which is inside the physical strip when \(k<m_{1}+m_{2}<2k\); as a consequence of this fact \(\sigma(k-1,k-1;\theta_{1}-\theta_{2})^{2}\) has a pole of order two at \(\theta=\frac{2i\pi}{k}\). However, this pole should not appear in \(\sigma^{\bullet\bullet}_{\text{RR}}(u_{1},u_{2})^{2}\). Hence, the factor of \(([2]_{\theta})^{-2}\) is precisely what is needed. 
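The fusion of the CDD factors just described is also straightforward to verify numerically. The following sketch (ours; plain Python with no external dependencies) builds \(\Phi(m_1,m_2;\theta)\) from the blocks (4.15) through the universal formula (4.18), and checks the fundamental fusion (4.29) as well as the general relation (4.23) for a few randomly chosen \(m_1\), \(m_2\), \(m_3\).

```python
# Numerical illustration (ours, not from the text) of the fusion of the CDD factors:
# Phi(m1, m2; theta) is built from the blocks [m]_theta of (4.15) through the
# universal formula (4.18), and we check eq. (4.29) together with the general
# fusion relation (4.23) for random m1, m2, m3 with m1 + m2 <= k - 1.
import cmath, math, random

def block(m, theta, k):
    return cmath.sinh(theta/2 + 1j*math.pi*m/(2*k)) / \
           cmath.sinh(theta/2 - 1j*math.pi*m/(2*k))

def Phi(m1, m2, theta, k):
    N = min(m1, m2)                                   # eq. (4.18)
    num = 1.0 + 0j
    for n in range(N + 1):
        num *= block(abs(m1 - m2) + 2*n, theta, k)**2
    return num / (block(abs(m1 - m2), theta, k) * block(m1 + m2, theta, k))

k, theta = 7, 0.37 + 0.18j

# eq. (4.29): fusing two fundamental CDD factors gives [3]_theta [1]_theta
lhs = Phi(1, 2, theta, k)
rhs = Phi(1, 1, theta + 1j*math.pi/k, k) * Phi(1, 1, theta - 1j*math.pi/k, k)
print("(4.29):", abs(lhs - rhs), abs(lhs - block(3, theta, k)*block(1, theta, k)))

# eq. (4.23): Phi(m3, m1+m2; th) = Phi(m3, m1; th - i pi m2/k) Phi(m3, m2; th + i pi m1/k)
random.seed(0)
for _ in range(5):
    m1 = random.randint(1, k - 2)
    m2 = random.randint(1, k - 1 - m1)                # keep m1 + m2 in {1, ..., k-1}
    m3 = random.randint(1, k - 1)
    lhs = Phi(m3, m1 + m2, theta, k)
    rhs = Phi(m3, m1, theta - 1j*math.pi*m2/k, k) * \
          Phi(m3, m2, theta + 1j*math.pi*m1/k, k)
    print(f"m1={m1} m2={m2} m3={m3}  fusion error = {abs(lhs - rhs):.2e}")
```

Both the fundamental case (4.29) and the random samples of (4.23) reproduce the expected products to machine precision.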
We conclude that, with the normalisation used in section 2.6, the dressing phases \(\sigma^{\bullet\bullet}_{\text{LL}}\), \(\sigma^{\bullet\bullet}_{\text{RR}}\), \(\tilde{\sigma}^{\bullet\bullet}_{\text{LR}}\) and \(\tilde{\sigma}^{\bullet\bullet}_{\text{RL}}\) have no poles and zeros in the physical strip after the relativistic limit. ### Mixed-mass and massless dressing factors The remaining dressing factors are split into two groups: mixed-mass dressing factors (associated with the scattering of a massive and a massless particle) and massless-massless dressing factors. In the relativistic limit the single-particle massless representations are \(\rho^{\text{B}}_{\text{rel}}(0,\theta)\) and \(\rho^{\text{B}}_{\text{rel}}(k,\theta)\). Indeed due to the monodromy of the supercharges, after the limit we can identify \(\rho^{\text{F}}_{\text{rel}}(0,\theta)=\rho^{\text{B}}_{\text{rel}}(k,\theta)\) and \(\rho^{\text{F}}_{\text{rel}}(k,\theta)=\rho^{\text{B}}_{\text{rel}}(0,\theta)\), and all representations become \(2k\)-periodic in \(m\). In the following paragraphs, we provide solutions to the crossing equations involving these representations. Mixed-mass dressing factors.Analogously to what we did for the scattering between massive particles let us define the relativistic limit of the complete mixed-mass S matrices as follows \[\mathbf{S}^{\text{B}-}_{\text{su}(1,1)^{\underline{\oplus}}_{c..e.}}(m,a;\theta) =[\sigma(m,-;\theta)]^{-2}\left(\mathbf{S}^{\text{BB}-}(m,a, \theta)\otimes\mathbf{S}^{\text{BB}-}(m,k-a,\theta)\right), \tag{4.34a}\] \[\mathbf{S}^{+\text{B}}_{\text{su}(1,1)^{\underline{\oplus}}_{c. e.}}(a,m;\theta) =[\sigma(+,m;\theta)]^{-2}\left(\mathbf{S}^{\text{B}+\text{B}}(a,m, \theta)\otimes\mathbf{S}^{\text{B}+\text{B}}(k-a,m,\theta)\right), \tag{4.34b}\] where \(a\) can be either \(0\) or \(k\), \(m\) can be any integer \(\in\{1,\ldots,k-1\}\) and the subscript signs \(\pm\) correspond to the chiralities of the massless particle. The S matrices in (4.34a) and (4.34b) describe the scattering between particles in the representations \(\rho^{\text{B}}_{\text{rel}}(m,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(m,\theta)\) and \(\rho^{\text{B}}_{\text{rel}}(a,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(k-a,\theta)\), and \(\rho^{\text{B}}_{\text{rel}}(a,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(k-a,\theta)\) and \(\rho^{\text{B}}_{\text{rel}}(m,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(m,\theta)\) respectively. Differently from (4.2), we set \(\Phi=1\) in this case; indeed we do not want to introduce additional poles or zeros in the physical strip since bound states are not expected in scattering processes involving massless particles. We find the following two independent sets of crossing equations for the mixed-mass dressing factors \[[\sigma(m,-;\theta)]^{2}\,[\sigma(k-m,-;\theta+i\pi)]^{2} =1\,, \tag{4.35a}\] \[[\sigma(m,-;\theta+i\pi)]^{2}\,[\sigma(k-m,-;\theta)]^{2} =1\,, \tag{4.35b}\] \[\left[\sigma(+,m;\theta)\right]^{2}\left[\sigma(+,m;\theta+i\pi) \right]^{2}=e^{-\frac{2\pi im}{k}}\,, \tag{109a}\] \[\left[\sigma(+,k-m;\theta)\right]^{2}\left[\sigma(+,k-m;\theta+i \pi)\right]^{2}=e^{\frac{2\pi im}{k}}\,. \tag{109b}\] Constant solutions to these equations can be found for any value of \(m\). In the following, we consider the case \(m=1\) and \(m=k-1\) where these equations need to correspond to the relativistic limit of (47) and (48). 
With the normalisation introduced in (46) in the relativistic limit the crossing equation for the dressing phases \(\sigma^{\bullet-}_{\text{LL}}\), \(\sigma^{\text{+}\bullet}_{\text{LL}}\), \(\sigma^{\bullet-}_{\text{LR}}\) and \(\sigma^{\text{+}\bullet}_{\text{RL}}\) become \[\begin{split}&\left(\sigma^{\bullet-}_{\text{LL}}(\theta)\right)^{2} \bigl{(}\sigma^{\bullet-}_{\text{RL}}(\theta+i\pi)\bigr{)}^{2}=1\times 1=1\,,\\ &\left(\sigma^{\bullet-}_{\text{LL}}(\bar{u}_{1},u_{2})\right)^{ 2}\bigl{(}\sigma^{\bullet-}_{\text{RL}}(u_{1},u_{2})\bigr{)}^{2}=1\times 1=1\,, \\ &\left(\sigma^{\text{+}\bullet}_{\text{LL}}(\theta)\right)^{2} \bigl{(}\sigma^{\text{+}\bullet}_{\text{LL}}(\theta+i\pi)\bigr{)}^{2}=e^{- \frac{2i\pi}{k}}\times e^{\frac{2i\pi}{k}}=1,\\ &\left(\sigma^{\text{+}\bullet}_{\text{LR}}(\theta)\right)^{2} \bigl{(}\sigma^{\text{+}\bullet}_{\text{LR}}(\theta+i\pi)\bigr{)}^{2}=e^{\frac {2i\pi}{k}}\times e^{-\frac{2i\pi}{k}}=1\,,\end{split} \tag{110}\] and admit the constant simple solution \(\sigma^{\bullet-}_{\text{LL}}=\sigma^{\text{+}\bullet}_{\text{LL}}=\sigma^{ \bullet-}_{\text{LR}}=\sigma^{\text{+}\bullet}_{\text{RL}}=1\). For a physical process to make sense a massless particle incoming from the left should have positive velocity (i.e. should be chiral) while a massless particle incoming from the right should have negative velocity (i.e. should be antichiral). However, even if this condition is not satisfied solutions for the mixed-mass dressing factors can be provided. In particular, using the same normalisation (108) also for these unphysical scattering processes, by braiding unitarity it needs to hold that \[\left[\sigma(m,+;\theta)\right]^{-2}\left[\sigma(+,m;-\theta)\right]^{-2}=1.\] In this manner the 'unphysical' dressing factors \(\left[\sigma(m,+;\theta)\right]^{-2}\) and \(\left[\sigma(-,m;\theta)\right]^{-2}\) can be defined in terms of the 'physical' dressing factors in (108). Massless-massless dressing factors.We define the S matrices associated with the scattering of massless particles in the chiral-chiral and chiral-antichiral sectors to be \[\mathbf{S}^{+-}_{su(1,1)^{\oplus 4}_{c.c.}}\left(a,b;\theta\right) =\left[\sigma(+,-;\theta)\right]^{-2}\left(\mathbf{S}^{\text{B}_ {+}\text{B}_{-}}(a,b,\theta)\otimes\mathbf{S}^{\text{B}_{+}\text{B}_{-}}(k-a, k-b,\theta)\right), \tag{111a}\] \[\mathbf{S}^{++}_{su(1,1)^{\oplus 4}_{c.c.}}\left(a,b;\theta\right) =\left[\sigma(+,+;\theta)\right]^{-2}\left(\mathbf{S}^{\text{B}_ {+}\text{B}_{+}}(a,b,\theta)\otimes\mathbf{S}^{\text{B}_{+}\text{B}_{+}}(k-a, k-b,\theta)\right). \tag{111b}\] The parameters \(a\) and \(b\) can either be \(0\) or \(k\), for a total of four possible choices. These choices correspond to the fact that we expect two distinct \(su(2)_{\circ}\) representations, denoted by indices \(\dot{\alpha}=1,2\), _cf._ (49) and (50).17 As before, the superscript signs on \(\sigma^{\pm\pm}\), _etc._, label the chiralities of the massless particle. As we can see from the relations in (105) the S-matrix elements associated with the scattering of massless particles with opposite chirality are trivial. We obtain the following simple crossing equations for the scattering of massless particles with opposite chirality Footnote 17: Remark that here we are assuming that the \(su(2)_{\circ}\) structure of the S matrix is trivial. \[\left[\sigma(+,-;\theta)\right]^{2}\left[\sigma(+,-;\theta+i\pi)\right]^{2}=+ 1\,, \tag{111c}\] which corresponds to the relativistic limit of (52b) and has a trivial solution. 
In contrast, as shown from the S-matrix elements in (3.29), the scattering between particles of the same chirality is nontrivial; in the case in which the scattered particles are both chiral the crossing equation can be read from the limit of (52a) and is given by \[[\sigma(+,+;\theta)]^{-2}\left[\sigma(+,+;\theta+i\pi)\right]^{-2}=\tanh^{2} \frac{\theta}{2}\,, \tag{4.40}\] which admits as minimal solution \[[\sigma(+,+;\theta)]^{-2}=a(\theta)\bigg{(}\frac{R(\theta-i\pi)R(\theta+i\pi) }{R^{2}(\theta)}\bigg{)}^{2}\,. \tag{4.41}\] The \(R\)-functions on the RHS of the equality above are provided in (4.10). Up to an auxiliary function \(a(\theta)\), which needs to satisfy \[a(\theta)a(\theta+i\pi)=-1\quad,\quad a(\theta)a(-\theta)=1\,, \tag{4.42}\] and needs to be a phase for \(\theta\in\mathbb{R}\), the expression in (4.41) is equal to the sine-Gordon dressing factor. A possibility for the function \(a(\theta)\) was provided in [19] in the resolution of the dressing factors of the pure Ramond-Ramond worldsheet theory and is given by \[a(\theta)=-i\tanh\left(\frac{\theta}{2}-i\frac{\pi}{4}\right). \tag{4.43}\] Similar solutions can be obtained for the dressing factors associated with the scattering of massless particles with negative chirality. We remark that the solution in (4.41) was obtained by taking the limit of the S-matrix of the full theory first and then solving the associated crossing equations in the relativistic limit. It is however important to mention that a different derivation is possible: this derivation consists in constructing the representations after the limit and bootstrapping the S matrix again from scratch, as mentioned at the beginning of section 3.3. In appendix C, following this different derivation, we show that a larger space of solutions is admitted for the scattering of massless particles of the same chirality and (4.41) corresponds to a particular point in the space of these solutions. Interestingly, requiring the model to have two irreducible massless representations constructed as tensor products of two-dimensional building blocks \(\rho_{\text{\tiny rel}}^{\text{\tiny B}}\) and \(\rho_{\text{\tiny rel}}^{\text{\tiny F}}\), fixes the solution to be precisely (4.41).

## 5 Conclusions

In this paper we have studied the worldsheet theory emerging from mixed-flux \(AdS_{3}\times S^{3}\times T^{4}\) in lightcone gauge. In the full non-relativistic theory we have considered the Zhukovsky-plane and rapidity-plane kinematics of the model, and the allowed structure of bound states. Then, we have studied a relativistic limit of the model. The resulting model is an integrable, supersymmetric and relativistic QFT in two dimensions. As it could have been expected from the discussion of [30], its particle content is dictated by the WZNW level \(k\). More precisely, it has \((k-1)\) massive multiplets, with masses \[\mu\in\left\{2\sin\left(\frac{\pi}{k}\right),2\sin\left(\frac{2\pi}{k} \right),\ldots,2\sin\left(\frac{(k-2)\pi}{k}\right),2\sin\left(\frac{(k-1)\pi }{k}\right)\right\}\,, \tag{5.1}\] as well as two massless multiplets. Notice that all masses come in pairs if \(k\) is odd, as \(\sin\frac{\pi}{k}=\sin\frac{(k-1)\pi}{k}\). The resulting pairs of representations give particle-antiparticle pairs. The case of \(k\) even is a little special, as there is a single representation of mass \(\mu=2\sin\frac{\pi}{2}=2\) which is its own charge conjugate.
The model has a rich structure of bound states, which we have described in some detail, and that allows one to generate all massive multiplets starting from a multiplet of mass \(\mu=2\sin\frac{\pi}{k}\) and using fusion. The representations generated in this way have higher and higher mass initially, and then (after considering a bound state of \(\sim k/2\) particles) the mass starts decreasing. In this sense, the antiparticle of a given excitation is also a bound-state of several such excitations. An immediate consequence of this discussion is that for \(k=1\) there are no massive particles (but only the \(T^{4}\) modes, sitting in two massless representations), as expected from the WZNW model description at the NSNS point. A second observation, which would be important to understand in more detail, is that \(k=2\) is special too. In that case there is only one massive representation, which is its own charge-conjugate. In other words, the total number of particles at \(k=2\) is lower than what we would expect from just counting the fundamental excitations in a near-pp-wave expansion of the string model (we would expect, naively, _two massive_ and two massless fundamental representations, rather than _one massive_ and two massless representations). This is not entirely surprising if we consider that, at the NSNS point, the theory can be described by the RNS approach. In particular, we need to consider a worldsheet-supersymmetric WZW model based on the Kac-Moody algebra \(sl(2)_{k}^{(1)}\oplus su(2)_{k}^{(1)}\) which requires extra care at \(k=2\), as the bosonic part of \(su(2)_{k}^{(1)}\) becomes trivial. This seems to fit with our observation, but it would be interesting to analyse this special case in more detail. An important result of our work is the construction of the dressing factor of the relativistic models that we considered. Before the relativistic limit, the construction of the dressing factors of the mixed-flux theory is a major obstacle to the construction of the mirror TBA equation and the quantitative study of the theory by integrability. After the limit, relativistic invariance drastically simplifies the analytic structure of the S matrix and it allows us to solve the crossing equations. The minimal solution for the dressing factors is then expressed in terms of a product of Barnes \(G\)-functions, _cf._ (4.9); the bound-state poles can be taken into account by introducing suitable CDD factors. These results constitute a test for future proposals of the dressing factors of the full theory. This is quite crucial because there are currently no perturbative constraints on the dressing factors at small string tension (unlike what was the case in \(AdS_{5}\) and \(\mathcal{N}=4\) SYM) and even at strong tension it is not clear to what extent the existing perturbative computations can be trusted, due to infrared divergences (_cf._ the discussion in [19]). The integrable models which we have encountered here in the relativistic limit may be of interest in and of themselves, as supersymmetric integrable QFTs. Indeed, the building blocks of our constructions are closely related to the \(\mathcal{N}=2\) models considered in [22, 23]. It is worth remarking that we encountered some interesting new physics when considering the collinear scattering of massless particles. In that case, as discussed in appendix C, we find a _one parameter_ family of integrable S matrices, complete with crossing-invariant dressing factors, which to our knowledge were previously unknown.
Finally, it might be interesting to study the TBA of the relativistic model which we constructed. The low-energy relativistic limit which we considered is well defined at the level of the S matrix. It is not immediately clear what its interpretation may be at the level of the spectrum and of string theory (or of the unknown CFT dual). Nonetheless, it is a perfectly well-defined relativistic model whose spectrum will capture some of the features of the original theory. This could be a stepping stone towards constructing the mirror TBA equations for the full model with mixed-flux, so far only known for the pure-RR [36; 37; 38] and pure-NSNS [39] cases.18 Footnote 18: For the pure-RR case, a set of "quantum spectral curve" equations has also been proposed [40; 41; 42]. These should encode the same information about the spectrum as the mirror TBA, but currently the relation between the two proposals remains unclear. We hope to return to some of these questions in the future.

###### Acknowledgements.

We thank Matheus Augusto Fabri, Alessio Miscioscia, and Roberto Volpato for useful related discussions. The authors also thank the participants of the workshop "Integrability in Low-Supersymmetry Theories" in Filicudi, Italy, for a stimulating environment where part of this work was carried out. DP and AS acknowledge support from the European Union - NextGenerationEU, and from the program STARS@UNIPD, under project "Exact-Holography - A new exact approach to holography: harnessing the power of string theory, conformal field theory, and integrable models."

## Appendix A \(\kappa\)-deformed Zhukovsky map

In this appendix we discuss the properties of the \(\kappa\)-deformed Zhukovsky map defined through the following equation \[u(x,\kappa)=x+\frac{1}{x}-\frac{\kappa}{\pi}\,\log x\quad\Leftrightarrow \quad x=x(u,\kappa)\,, \tag{A.1}\] where \(\kappa\) in general is a complex parameter. Equation (A.1) defines a map from the Riemann surface of \(\log x\) to the \(u\)-plane, and we want to analyse the inverse map given by \(x(u,\kappa)\). Clearly, the function \(x(u,\kappa)\) is multi-valued, and in the limit \(\kappa\to 0\) it becomes the usual inverse Zhukovsky map, and has two branches. For finite \(\kappa\) it is sufficient to analyse eq. (A.1) on any branch of \(\log x\), and we use the principal branch \(\ln x\) of \(\log x\) with the cut \((-\infty,0)\) on the \(x\)-plane. In what follows we use the notation \(x(u,\kappa)\) to denote the multi-valued function satisfying (A.1) on the principal branch \(\ln x\), and then \(x^{(n)}(u,\kappa)\) satisfying (A.1) on the \(n\)-th branch of \(\log x\) is given by \[x^{(n)}(u,\kappa)=x(u+2i\kappa\,n\,,\,\kappa)\,. \tag{A.2}\] The equation (A.1) enjoys a very important inversion symmetry: if \(x(u,\kappa)\) solves (A.1) then \(\frac{1}{x(u,-\kappa)}\) also solves the equation. As a result, the \(x\)-plane with the cut \((-\infty,0)\) covers the \(u\)-plane twice, and the function \(x(u,\kappa)\) has two branches. In what follows we will be interested in the case \(\kappa\in\mathbb{R}\). Then, due to this inversion symmetry, it is sufficient to analyse the properties of \(x(u,\kappa)\) with \(\kappa>0\). The function \(x(u,-\kappa)\) has the properties of \(\frac{1}{x(u,\kappa)}\).
With our choice of the cut on the \(x\)-plane the complex conjugate function \(x(u,\kappa)^{*}\) satisfies the equation \[x(u,\kappa)^{*}+\frac{1}{x(u,\kappa)^{*}}-\frac{\kappa^{*}}{\pi}\,\ln x(u, \kappa)^{*}=u^{*}\,, \tag{A.3}\] and one can impose the following two conjugacy conditions \[x(u,\kappa)^{*}=x(u^{*},\kappa^{*})\,,\quad x(u,\kappa)^{*}=\frac{1}{x(u^{*},- \kappa^{*})}\,, \tag{A.4}\] which can be used to define two different sets of branches of \(x(u,\kappa)\). In string theory \(\kappa^{*}=\kappa\), and we want the function \(x(u,\kappa)\) to satisfy \[x(u,\kappa)^{*}=x(u^{*},\kappa)\,,\quad\kappa^{*}=\kappa\,, \tag{A.5}\] because then the momentum and energy are real for real \(u\). We often refer to the branch satisfying the reality condition (A.5) and containing the point \(x=+\infty\) as the string \(u\)-plane, and to the second branch as the anti-string \(u\)-plane. In mirror theory if we keep \(\kappa\) real, we get the condition \[x(u,\kappa)^{*}=\frac{1}{x(u^{*},-\kappa)}\,,\quad\kappa^{*}=\kappa\,, \tag{A.6}\] and we refer to the branch satisfying the reality condition (A.6) and the condition \(\Im(x)<0\) as the mirror \(u\)-plane, and to the second branch satisfying the condition \(\Im(x)>0\) as the anti-mirror \(u\)-plane. The condition (A.6) relates two different functions, and the mirror theory is not unitary.19 Footnote 19: It might be interesting to assume that in mirror theory \(\kappa^{*}=-\kappa\), so it is purely imaginary, and get string theory not just by Wick rotation but also by the analytic continuation in \(\kappa\). Then, \(x(u,\kappa)\) satisfies \[x(u,\kappa)^{*}=\frac{1}{x(u^{*},\kappa)}\,,\quad\kappa^{*}=-\kappa\] It is unclear whether it is necessary, and in what follows we assume that \(\kappa\) is real. To find the location of the branch points we compute \[\frac{dx}{du}=\frac{x^{2}}{(x-\mathsf{x}_{+})(x-\mathsf{x}_{-})}\,,\quad \mathsf{x}_{\pm}\equiv\frac{\kappa}{2\pi}\pm\sqrt{1+\frac{\kappa^{2}}{4\pi^{2}}}\,. \tag{A.7}\] These formulae show that a better parametrisation of \(\kappa\) might be \[\kappa=2\pi\sinh\eta\quad\Rightarrow\quad\mathsf{x}_{\pm}=\pm e^{\pm\eta} \tag{A.8}\] which makes obvious that \(\mathsf{x}_{\pm}(\kappa)=1/\mathsf{x}_{\pm}(-\kappa)\) as expected from the inversion symmetry. The zeroes and poles of \(dx/du\) potentially correspond to branch points on the \(u\)-plane where a branch of \(x(u,\kappa)\) is defined. Clearly, there may be a branch point located at \[\begin{split}\mathsf{u}_{+}=\mathsf{x}_{+}+\frac{1}{\mathsf{x}_{+}}-\frac{\kappa}{\pi}\,\log\mathsf{x}_{+}&=+2\sqrt{\frac{\kappa^{2}}{4\pi^{2}}+1}-\frac{\kappa}{\pi}\ln\left(\frac{\kappa}{2\pi}+\sqrt{1+\frac{\kappa^{2}}{4\pi^{2}}}\right)\\ &=+2\cosh\eta-2\eta\sinh\eta\,.\end{split} \tag{A.9}\] This branch point \(\mathsf{u}_{+}\) is of the square-root type, and, as we will see later, going around it \(x(u,\kappa)\) transforms according to the inversion symmetry \[x^{\circlearrowright}(u,\kappa)=\frac{1}{x(u,-\kappa)}\,, \tag{A.10}\] where \(x^{\circlearrowright}\) is the result of the analytic continuation along a path \(\circlearrowright\) surrounding the branch point \(\mathsf{u}_{+}\). Note also that \(\mathsf{u}_{+}\) does not depend on the sign of \(\kappa\): \(\mathsf{u}_{+}(-\kappa)=\mathsf{u}_{+}(\kappa)\).
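The statements above are elementary to check numerically. The sketch below (our own illustration, using mpmath) inverts the defining equation of the map for a sample value of \(\kappa\) on the branch containing \(x=+\infty\), verifies the inversion symmetry \(x(u,\kappa)=1/x(u,-\kappa)\), and checks that \(\mathsf{x}_{\pm}\) annihilate \(du/dx\) and that \(\mathsf{u}_{+}=2\cosh\eta-2\eta\sinh\eta\) with \(\kappa=2\pi\sinh\eta\); the numbers chosen are arbitrary.

```python
# Numerical sketch (ours) of the kappa-deformed Zhukovsky map discussed above:
# invert u(x, kappa) = x + 1/x - (kappa/pi) log x on the branch containing
# x = +infinity, check the inversion symmetry x(u, kappa) = 1/x(u, -kappa),
# and check the branch-point data x_pm and u_+ = 2 cosh(eta) - 2 eta sinh(eta).
from mpmath import mp, mpf, mpc, log, sqrt, cosh, sinh, asinh, pi, findroot, fabs

mp.dps = 30
kappa = mpf('1.3')     # arbitrary sample value

def u_of_x(x, kap):
    return x + 1/x - kap/pi*log(x)          # principal branch of the logarithm

eta = asinh(kappa/(2*pi))
x_p = kappa/(2*pi) + sqrt(1 + kappa**2/(4*pi**2))   # x_+ = +e^{+eta}
x_m = kappa/(2*pi) - sqrt(1 + kappa**2/(4*pi**2))   # x_- = -e^{-eta}
u_p = 2*cosh(eta) - 2*eta*sinh(eta)

du_dx = lambda x: 1 - 1/x**2 - kappa/(pi*x)         # vanishes exactly at x_pm
print("du/dx at x_+ and x_-:", fabs(du_dx(x_p)), fabs(du_dx(x_m)))
print("u(x_+) - u_+        :", fabs(u_of_x(x_p, kappa) - u_p))

# invert the map at a generic point (for large |u| this branch behaves as x ~ u)
u0 = mpc('3.1', '0.4')
x0 = findroot(lambda x: u_of_x(x, kappa) - u0, u0)
print("inversion residual  :", fabs(u_of_x(x0, kappa) - u0))

# inversion symmetry: 1/x0 solves the same equation with kappa -> -kappa
print("inversion symmetry  :", fabs(u_of_x(1/x0, -kappa) - u0))
```

All residuals vanish to the working precision, in agreement with the analytic statements above.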
Since \(\mathsf{x}_{-}\) is negative for \(\kappa\) real, there may be in fact two branch points located at \[\begin{split}\mathsf{u}_{-}^{\pm}=\mathsf{x}_{-}+\frac{1}{\mathsf{x}_{-}}-\frac{\kappa}{\pi}\,\ln\mathsf{x}_{-}&=-2\sqrt{\frac{\kappa^{2}}{4\pi^{2}}+1}-\frac{\kappa}{\pi}\ln\left(\frac{\kappa}{2\pi}-\sqrt{1+\frac{\kappa^{2}}{4\pi^{2}}}\right)\\ &=-2\cosh\eta+2\eta\sinh\eta\mp i\kappa=-\mathsf{u}_{+}\mp i\kappa\,,\quad\kappa\in\mathbb{R}\,,\end{split} \tag{A.11}\] where \(+\) in \(\mathsf{u}_{-}^{\pm}\) is for \(\mathsf{x}_{-}\) on the upper edge of the cut of \(\log x\), and we have used the principal branch of \(\log x\). Since the images of these two branch points are on the edges of the cut of \(\log x\) moving a point around any of them takes it to a different \(x\)-plane. These branch points are also of the square-root type, and going around \(\mathsf{u}_{-}^{\pm}\) along a path \(\circlearrowright_{-}^{\pm}\) transforms \(x(u,\kappa)\) as \[x^{\circlearrowright}(u,\kappa)=x(u\pm 2i\kappa,\kappa)\,. \tag{A.12}\] In addition there is a branch point at \(u=\infty\) corresponding to \(x=0\) and \(x=\infty\) which is of the logarithmic type as can be seen by solving (A.1) for large \(u\) \[x(u,\kappa)=u-\frac{\kappa}{\pi}\log u+\cdots\quad\text{or}\quad x(u,\kappa)= \frac{1}{u+\frac{\kappa}{\pi}\log u}+\cdots \tag{A.13}\] The result of the analytic continuation along a path surrounding \(u=\infty\) depends on the cut structure of a \(u\)-plane, the orientation of the path and the initial point of the path. It will be discussed later. In what follows we always choose all cuts on a \(u\)-plane to be horizontal. We will see that there is a branch of \(x(u,\kappa)\) where there are only two branch points at \(\mathsf{u}_{+}\) and \(-\infty\), and we refer to the branch as the principal branch of \(x(u,\kappa)\). To understand where a \(u\)-plane is mapped onto the \(x\)-plane, and how cuts can be chosen, let us find which curves on the \(x\)-plane are mapped to horizontal lines of the \(u\)-plane. We use polar coordinates \[x=\rho\,e^{i\phi}\,, \tag{A.14}\] and rewrite (A.1) in the form \[\left(\rho+\frac{1}{\rho}\right)\cos\phi-\frac{\kappa}{\pi}\,\log\rho+i\left[ \left(\rho-\frac{1}{\rho}\right)\sin\phi-\frac{\kappa}{\pi}\phi\right]=u=\mu+ i\,\nu\,. \tag{A.15}\] Thus, we get that the equation of the curves which are mapped to (a segment of) the horizontal line through the point \((0,\nu)\) where \(\nu=\Im(u)\) is given by \[\left(\rho-\frac{1}{\rho}\right)\sin\phi-\frac{\kappa}{\pi}\phi=\nu\,,\quad-\pi \leq\phi\leq\pi\,,\] (A.16) and therefore if \(\nu\neq 0\) and \(\nu\neq\pm\kappa\), the solution is \[\rho(\phi,\nu)=\frac{\frac{\kappa}{\pi}\,\phi+\nu}{2\sin\phi}+\sqrt{1+\frac{( \frac{\kappa}{\pi}\,\phi+\nu)^{2}}{4\sin^{2}\phi}}\,.\] (A.17) On the \(x\)-plane the solution is represented by two disconnected curves, one in the lower half-plane and the other in the upper one. Each of the curves is mapped to the whole horizontal line, as can be seen from the formula \[\mu(\phi,\nu)=2\sqrt{1+\frac{(\frac{\kappa}{\pi}\,\phi+\nu)^{2}}{4\sin^{2} \phi}}\,\cos\phi-\frac{\kappa}{\pi}\,\log\rho(\phi,\nu)\,,\quad\mu=\Re(u)\,.\] (A.18) We will discuss these curves in more detail later but first let us consider the cases where \(\nu=0\) or \(\nu=\pm\kappa\). For each of the three cases the corresponding horizontal line goes through a branch point, and analysing which curves on the \(x\)-plane are mapped to these lines we can understand how to choose cuts. 1.
We begin with the case \(\nu=0\), and get \[\nu=0:\qquad\left(\rho-\frac{1}{\rho}\right)\sin\phi-\frac{\kappa}{\pi}\phi=0 \,,\quad-\pi\leq\phi\leq\pi\,.\] (A.19) This equation has two solutions. The first one is \[\phi=0\,,\quad\rho\geq 0\,,\quad u=\rho+\frac{1}{\rho}-\frac{\kappa}{\pi}\, \log\rho\,\geq\,\mathfrak{u}_{+}\,.\] (A.20) In fact both intervals \(0\leq\rho\leq\mathsf{x}_{+}\) and \(\rho\geq\mathsf{x}_{+}\) are mapped to the semi-line \(u\,\geq\,\mathfrak{u}_{+}\). If we choose the semi-line \(u\,\geq\,\mathfrak{u}_{+}\) to be a cut of a \(u\)-plane and consider the lower half-plane \(-\pi<\phi<0\) then the lower edge of the cut (\(v=-0\)) is mapped to the semi-line \(\rho\geq\mathsf{x}_{+}\) while the upper edge of the cut (\(v=+0\)) is mapped to the interval \(0\leq\rho\leq\mathsf{x}_{+}\) on the \(x\)-plane, as can be seen from (A.17). This is a long mirror theory cut which in the limit \(\kappa\to 0\) becomes a cut from \(+2\) to \(+\infty\). The second solution of (A.19) is given by \[\rho(\phi,0)=\frac{\kappa\,\phi}{2\pi\sin\phi}+\sqrt{1+\frac{\kappa^{2}\,\phi^ {2}}{4\pi^{2}\sin^{2}\phi}}\,,\quad\rho(0,0)=\mathsf{x}_{+}\,.\] (A.21) This curve covers the semi-line \(u\,\leq\,\mathfrak{u}_{+}\) twice. Depending on whether \(\kappa>0\) or \(\kappa<0\) the plots of the images of the cut are very different but, as expected, they are related by \(x(u,-\kappa)=1/x(u,\kappa)\), see plots for \(\kappa=\pm 0.3,\pm 1,\pm 3\) in the figures below. We see that the curves separate the \(x\)-plane into two regions, and, as was mentioned above, we choose the region that contains the point \(x=+\infty\), and therefore the semi-line \(x\,\geq\,\mathsf{x}_{+}\) as the string theory physical region. For both signs of \(\kappa\) it is the region exterior to the one bounded by the curve (A.21). Thus, for \(\kappa>0\) the string region does not contain the unit disc while for \(\kappa<0\) the string region includes the unit circle and its boundary is inside the unit disc. If we choose the semi-line \(u\,\leq\,\mathsf{u}_{+}\) to be a cut of a \(u\)-plane and consider the string region then the lower edge of the cut is mapped to the lower part of the curve while the upper edge of the cut (\(v=+0\)) is mapped to the upper one. This is a long string theory cut which in the limit \(\kappa\to 0\) becomes a cut from \(-\infty\) to \(+2\). 2. Let us now consider the case \(\nu=\kappa\) \[\nu=\kappa:\qquad\left(\rho-\frac{1}{\rho}\right)\sin\phi-\frac{\kappa}{\pi} \phi=\kappa\,,\quad-\pi\leq\phi\leq\pi\] (A.22) This equation also has two solutions. The first one is \[\phi=-\pi\,,\quad\rho\geq 0\,,\quad\Re(u)=-\rho-\frac{1}{\rho}-\frac{\kappa}{ \pi}\,\log\rho\,\leq\,\Re(\mathsf{u}_{-}^{-})\] (A.23) The semi-line \(\Re(u)\leq\,\Re(\mathsf{u}_{-}^{-})\) is the cut on the \(u\)-plane from \(-\infty\) to \(\mathsf{u}_{-}^{-}\). The intervals \(\mathsf{x}_{-}\leq x\leq 0\) and \(x\leq\mathsf{x}_{-}\) on the lower edge of the cut \((-\infty,0)\) on the \(x\)-plane are mapped to upper and lower edges of the cut \((-\infty,\mathsf{u}_{-}^{-})\), respectively. Since we have chosen the principal branch of \(\log x\) on the \(x\)-plane, we cannot change the cut \((-\infty,\mathsf{u}_{-}^{-})\) on the \(u\)-plane. If we would do so then we would have to change a branch of \(\log x\) correspondingly. Clearly, the interval \(-\infty\leq x\leq 0\) is outside the string region for \(\kappa>0\) but inside it for \(\kappa<0\). 
Thus, there is no cut \(\Re(u)\leq\,\Re(\mathsf{u}_{-}^{-})\) on the \(u\)-plane that is mapped to the string region for \(\kappa>0\) but it is there for \(\kappa<0\). The cut on the \(u\)-plane from \(-\infty\) to \(\mathsf{u}_{-}^{-}\) in the limit \(\kappa\to 0\) would become a long mirror theory cut from \(-\infty\) to \(-2\). Combining it with the long string theory cut from \(-\infty\) to \(+2\), one gets the short string theory cut from \(-2\) to \(2\). The second solution of (A.22) is given by \[\rho(\phi,\kappa)=\frac{\kappa\left(\phi+\pi\right)}{2\pi\sin\phi}+\sqrt{1+ \frac{\kappa^{2}\left(\phi+\pi\right)^{2}}{4\pi^{2}\sin^{2}\phi}}\,,\quad \rho(-\pi,\kappa)=-\mathsf{x}_{-}\,.\] (A.24) It is represented by two disconnected curves located in the lower and upper half-planes, see the figures below for \(\kappa=\pm 0.3,\pm 1,\pm 1.5\). The curve in the lower half-plane on each of the figures ends at \(\mathsf{x}_{-}\). It is the image of the semi-line \(\Re(u)\geq\Re(\mathsf{u}_{-}^{-})\) of a \(u\)-plane. The union of the curve with the semi-line \(x\leq 0\) is the image of the two edges of the cut from \(-\infty\) to \(\mathsf{u}_{-}^{-}\), and the semi-line \(\Re(u)\geq\Re(\mathsf{u}_{-}^{-})\) of one and the same \(u\)-plane. On the other hand the curve in the upper half-plane is mapped to the whole line \(\Im(u)=\kappa\), and therefore it belongs to a \(u\)-plane which has no branch point at \(u=\mathsf{u}_{-}^{-}\). For \(\kappa>0\) the curve in the upper half-plane is located in the string physical region, and therefore for \(\kappa>0\) the string \(u\)-plane has no branch point at \(u=\mathsf{u}_{-}^{-}\). On the other hand for \(\kappa<0\) the curve is outside the string physical region, and therefore for \(\kappa<0\) the string \(u\)-plane has the branch point at \(u=\mathsf{u}_{-}^{-}\), see the figures below for \(\kappa=\pm 1\) where the green curve is the image of the cut \((-\infty,\mathsf{u}_{+})\), the blue curve in the lower half-plane is the image of the semi-line \(\Re(u)\geq\Re(\mathsf{u}_{-}^{-})\), and the blue curve in the upper half-plane is the image of the line \(\Im(u)=\kappa\). The consideration is immediately applied to \(\nu=-\kappa\) because it is related to the case above by the reflection \(\phi\to-\phi\). The images of the three string cuts from \(-\infty\) to \(\mathsf{u}_{+}\), and from \(-\infty\) to \(\mathsf{u}_{-}^{\pm}\) on the \(x\)-plane are shown in the figures below for \(\kappa=\pm 1\). Since the semi-axes \(x\leq 0\) is outside the string physical region for \(\kappa>0\), we conclude that it is mapped to a \(u\)-plane which has only one cut from \(-\infty\) to \(\mathfrak{u}_{+}\). For \(\kappa<0\) the string physical region includes all images of the cuts, and therefore, it is mapped to a \(u\)-plane which has all the three cuts. For both signs of \(\kappa\) we define the principal branch of \(x(u,\kappa)\) to be the one on a \(u\)-plane with only one cut from \(-\infty\) to \(\mathfrak{u}_{+}\). Similarly, the images of three mirror cuts from \(\mathfrak{u}_{+}\) to \(+\infty\), and from \(-\infty\) to \(\mathfrak{u}_{-}^{\pm}\) on the \(x\)-plane are shown in the figures below for \(\kappa=\pm 1\). The only difference between the cases with \(\kappa>0\) and \(\kappa<0\) is the location of the images \(\mathsf{x}_{\pm}\) of the branch points. We define the mirror physical region as the one with \(\Im(x)<0\). 
It is mapped to a \(u\)-plane with two mirror cuts from \(\mathfrak{u}_{+}\) to \(+\infty\), and from \(-\infty\) to \(\mathfrak{u}_{-}^{-}\). Even though there is no analytic formula for \(x(u,\kappa)\), it is easy to describe each branch of the function parametrically by using the polar angle \(\phi\) in the \(x\)-plane and the imaginary part \(\nu=\Im(u)\) in a \(u\)-plane. We find that for any \(\kappa\) the four branches of \(x(u,\kappa)\) analysed above can be described as \[x(u,\kappa)=\rho(\phi,\nu)\,e^{i\phi}\,,\quad u=\mu(\phi,\nu)+i\,\nu\,, \tag{A.25}\] where \(\rho(\phi,\nu)\) and \(\mu(\phi,\nu)\) are given by (A.17) and (A.18), and the ranges of \(\phi\) and \(\nu\) are as follows20 Footnote 20: The branches \(x^{(n)}(u,\kappa)\) are also described by (A.25) with the ranges of \(\phi\) shifted by \(2\pi n\). **Ia.** The principal branch of \(x(u,\kappa)\) with one cut from \(-\infty\) to \(\mathfrak{u}_{+}\) on the \(u\)-plane \[\begin{array}{ll}\kappa>0:&-\pi\leq\phi\leq 0\ \ \text{for}\ \ \nu\leq 0\,;\quad 0\leq\phi\leq\pi\ \ \text{for}\ \ \nu\geq 0\,,\\ \kappa<0:&-\pi\leq\phi\leq 0\ \ \text{for}\ \ \nu\geq 0\,;\quad 0\leq\phi\leq\pi\ \ \text{for}\ \ \nu\leq 0\,.\end{array} \tag{A.26}\] One also has to add a map from the semi-line \([\mathfrak{u}_{+}\,,\,+\infty)\) to the semi-line \(x\geq\mathsf{x}_{+}\) for \(\kappa>0\), and to the interval \((0,\mathsf{x}_{+}]\) for \(\kappa<0\). This branch is defined on the string and anti-string \(u\)-plane for \(\kappa>0\) and \(\kappa<0\), respectively. Plots of images of several horizontal lines between \(-1.5\kappa\) and \(+1.5\kappa\) on the \(u\)-plane are shown below for \(\kappa=\pm 1\). **Ib.** The branch of \(x(u,\kappa)\) with three cuts on the \(u\)-plane \[\begin{array}{ll}\kappa>0:&-\pi\leq\phi\leq 0\ \ \mbox{for}\ \ \nu\geq 0\,; \quad 0\leq\phi\leq\pi\ \ \mbox{for}\ \ \nu\leq 0\,,\\ \kappa<0:&-\pi\leq\phi\leq 0\ \ \mbox{for}\ \ \nu\leq 0\,;\quad 0\leq\phi\leq\pi\ \ \mbox{for}\ \ \nu\geq 0\,.\end{array}\] (A.27) One also has to add a map from the semi-line \([\mathsf{u}_{+}\,,\,+\infty)\) to the interval \((0,\mathsf{x}_{+}]\) for \(\kappa>0\), and to the semi-line \(x\geq\mathsf{x}_{+}\) for \(\kappa<0\). This branch is defined on the anti-string and string \(u\)-plane for \(\kappa>0\) and \(\kappa<0\), respectively. Plots of images of several horizontal lines between \(-1.5\kappa\) and \(+1.5\kappa\) on the \(u\)-plane are shown below for \(\kappa=\pm 1\). Obviously, for both branches Ia and Ib, \(x(u,\kappa)\) satisfies the string complex conjugation condition (A.5). The two \(u\)-planes glued together are mapped by \(x(u,\kappa)\) to the \(x\)-plane with the cut \((-\infty,0)\). Moving through the cut \((-\infty,\mathsf{u}_{+})\), one gets from one \(u\)-plane to the other one which is still mapped to the same \(x\)-plane. It is easy to check by using (A.17) and (A.18) that \(x(u,\kappa)\) and \(x(u,-\kappa)\) on any of the two branches are related according to the inversion symmetry \[x(u,\kappa)=\frac{1}{x(u,-\kappa)}\,,\] (A.28) where \(u\) belongs either to the \(u\)-plane with one cut or to the \(u\)-plane with three cuts. If one moves through the cut \((-\infty,\mathsf{u}_{-}^{+})\) one gets to a \(u\)-plane with three cuts \[(-\infty,\mathsf{u}_{-}^{-}-2i\kappa)=(-\infty,\mathsf{u}_{-}^{+})\,,\quad(- \infty,\mathsf{u}_{-}^{+}-2i\kappa)\quad\mbox{and}\quad(-\infty,\mathsf{u}_{+ }-2i\kappa)\,.\] (A.29) This \(u\)-plane is mapped to another \(x\)-plane with \(\log x=\ln x+2i\pi\).
The function \(x_{\rm Ib}^{(1)}(u,\kappa)\) on this \(u\)-plane is given by \[x_{\rm Ib}^{(1)}(u,\kappa)=x_{\rm Ib}(u+2i\kappa,\kappa)\,.\] (A.30) Similarly, crossing the cut \((-\infty,\mathsf{u}_{-}^{-})\) brings one to a \(u\)-plane with cuts \[(-\infty,\mathsf{u}_{-}^{+}+2i\kappa)=(-\infty,\mathsf{u}_{-}^{-})\,,\quad(- \infty,\mathsf{u}_{-}^{-}+2i\kappa)\quad\mbox{and}\quad(-\infty,\mathsf{u}_{ +}+2i\kappa)\,,\] (A.31) which is mapped to the \(x\)-plane with \(\log x=\ln x-2i\pi\) with the function \(x_{\rm Ib}^{(-1)}(u,\kappa)\) given by \[x_{\rm Ib}^{(-1)}(u,\kappa)=x_{\rm Ib}(u-2i\kappa,\kappa)\,.\] (A.32) Note that in both cases if we cross the most lower cut then on the new \(u\) plane it becomes the most upper one, and vice versa. In other words the cuts are reflected about the cut which has been crossed. This leads to a noticeable dependence of the result of the analytic continuation along a path around the branch point at \(\infty\) where all the three cuts meet. Consider for definiteness \(\kappa>0\), and a path which begins at a point with \(\Im(u)<-3\kappa\) and goes up crossing the cut \((-\infty,\mathfrak{u}_{-}^{+})\). Once the point crosses the cut it gets to the \(u\)-plane without any cut above it, and since the lowest cut has \(\Im(u)=-3\kappa\), the point can be moved freely to its original coordinates on the \(u\)-plane which is mapped to the \(x\)-plane with \(\log x=\ln x+2i\pi\). If, however, the path begins at a point with \(-3\kappa<\Im(u)<-\kappa\), then the point would have to cross the cut \((-\infty,\mathfrak{u}_{-}^{+})\) on its original \(u\)-plane, and also the cut \((-\infty,\mathfrak{u}_{-}^{+}-2i\kappa)\) on the second \(u\)-plane, and it ends up on the \(u\)-plane with three cuts which is mapped to the \(x\)-plane with \(\log x=\ln x+4i\pi\). Next, if the path begins at a point with \(-\kappa<\Im(u)<0\), then the point crosses the cut \((-\infty,\mathfrak{u}_{+})\) on its original \(u\)-plane with three cuts, and gets to the \(u\)-plane with a single cut which is mapped to the original \(x\)-plane with \(\log x=\ln x\). Finally, if the path begins at a point with \(0<\Im(u)<+\kappa\), then the point crosses the cut \((-\infty,\mathfrak{u}_{-}^{-})\), and gets to the \(u\)-plane with three cuts. The next cut it crosses on the new \(u\)-plane is \((-\infty,\mathfrak{u}_{+}+2i\kappa)\), and it gets the point to the \(u\)-plane with one cut which is mapped to the \(x\)-plane with \(\log x=\ln x-2i\pi\). **IIa.** The mirror branch of \(x(u,\kappa)\) with two cuts on the \(u\)-plane \[-\pi\leq\phi\leq 0\,,\quad-\infty<\nu<+\infty\,.\] (A.33) One also has to add a map from the cut \((\mathfrak{u}_{+}\,,\,+\infty)\) to the semi-line \(x>0\), and from the cut \((-\infty\,,\,\mathfrak{u}_{-}^{+})\) to the semi-line \(x<0\). Plots of images of several horizontal lines between \(-1.5\kappa\) and \(+1.5\kappa\) on the \(u\)-plane are shown below for \(\kappa=\pm 1\). **IIb.** The anti-mirror branch of \(x(u,\kappa)\) with two cuts on the \(u\)-plane \[0\leq\phi\leq\pi\,,\quad-\infty<\nu<+\infty\,.\] (A.34) One also has to add a map from the cut \((\mathfrak{u}_{+}\,,\,+\infty)\) to the semi-line \(x>0\), and from the cut \((-\infty\,,\,\mathfrak{u}_{-}^{+})\) to the semi-line \(x<0\). Plots of images of several horizontal lines between \(-1.5\kappa\) and \(+1.5\kappa\) on the \(u\)-plane are shown below for \(\kappa=\pm 1\). It is easy to check that for both branches IIa and IIb, \(x(u,\kappa)\) satisfies the mirror complex conjugation condition (A.6). 
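The parametric description of the branches used above is also easy to test directly: for any admissible \((\phi,\nu)\) the point \(x=\rho(\phi,\nu)\,e^{i\phi}\), with \(\rho\) from (A.17) and \(\mu\) from (A.18), must map back to \(u=\mu+i\nu\). The short sketch below (ours; plain Python) performs this check for a few sample values.

```python
# Sketch (ours) verifying the parametric description of the branches:
# rho(phi, nu) from (A.17) and mu(phi, nu) from (A.18) are computed, and the point
# x = rho * e^{i phi} is checked to satisfy x + 1/x - (kappa/pi) ln x = mu + i nu
# on the principal branch of the logarithm.  Sample (phi, nu) values are arbitrary,
# with sin(phi) != 0 so that (A.17) applies.
import cmath, math

kappa = 1.0

def rho_mu(phi, nu):
    A   = (kappa*phi/math.pi + nu) / (2*math.sin(phi))                    # eq. (A.17)
    rho = A + math.sqrt(1 + A*A)
    mu  = 2*math.sqrt(1 + A*A)*math.cos(phi) - kappa/math.pi*math.log(rho)  # eq. (A.18)
    return rho, mu

for phi, nu in [(-2.0, -0.7), (-0.9, -0.3), (0.6, 0.4), (2.4, 1.1)]:
    rho, mu = rho_mu(phi, nu)
    x = rho*cmath.exp(1j*phi)
    lhs = x + 1/x - kappa/math.pi*cmath.log(x)   # log x = ln(rho) + i phi for |phi| < pi
    print(f"phi={phi:+.2f}  nu={nu:+.2f}   |u(x) - (mu + i nu)| = {abs(lhs - (mu + 1j*nu)):.2e}")
```

The residuals vanish to machine precision, confirming that the curves of constant \(\nu\) are indeed mapped to horizontal lines of the \(u\)-plane.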
The analyses of what happens when one moves through the two cuts of any of the \(u\)-planes repeats the one for the (anti-)string \(u\)-planes. To conclude this section let us note that, in addition to the limit \(\kappa\to 0\), another interesting limit is the one where \(\kappa\to\infty\). In this limit one has three options 1. One keeps \(x\) fixed but rescales \(u\) as \(u\to-\kappa u/\pi\). Then, eq.(A.1) trivialises \[u=\log x\quad\Rightarrow\quad x=e^{u}\,.\] (A.35) This is a relativistic limit, and the rescaled variable \(u\) is identified with the rapidity variable \(\theta\). 2. One rescales \(x\) as \(x\to|\kappa|x/\pi\), and also rescales and shifts \(u\) as \(u\to|\kappa|u/\pi-\kappa/\pi\log|\kappa|/\pi\). Then, eq.(A.1) takes the form \[u=x-\text{sgn}(\kappa)\,\log x\,.\] (A.36) It is a well-known equation whose solutions can be given in terms of the Lambert (or productlog) \(W\) function \[\kappa<0:\quad x_{n}(u)=W_{n}\left(e^{u}\right)\,;\qquad\kappa>0:\quad x_{n}( u)=-W_{n}\left(-e^{-u}\right)\,,\quad n\in\mathbb{Z}\,,\] (A.37) where the domain of \(u\) depends on \(n\), and it is a horizontal strip of width \(2\pi\). In terms of our description of the \(\kappa\)-deformed Zhukovsky map, for \(\kappa<0\) a \(u\)-plane with two cuts \((-\infty,-1\pm i\pi)\), and for \(\kappa>0\) a \(u\)-plane with one cut \((-\infty,+1)\), is mapped to the \(x\)-plane with the cut \((-\infty,0)\). 3. One could instead rescale \(x\) as \(x\to x\pi/|\kappa|\), and also rescale and shift \(u\) as \(u\to|\kappa|u/\pi+\kappa/\pi\log|\kappa|/\pi\). This leads to the equation \(u=1/x-\text{sgn}(\kappa)\,\log x\) whose solutions are again expressed in terms of the Lambert \(W\) function. ## Appendix B S-matrix elements before the limit In this appendix, we report the S matrices acting on the two-particle representations \[\begin{split}\rho^{\text{\tiny B}}_{\text{\tiny L}}(m_{1},p_{1} )\otimes\rho^{\text{\tiny B}}_{\text{\tiny L}}(m_{2},p_{2})\;,& \rho^{\text{\tiny F}}_{\text{\tiny R}}(m_{1},p_{1})\otimes\rho^{\text{\tiny F }}_{\text{\tiny R}}(m_{2},p_{2})\,,\\ \rho^{\text{\tiny B}}_{\text{\tiny L}}(m_{1},p_{1})\otimes\rho^{ \text{\tiny F}}_{\text{\tiny R}}(m_{2},p_{2})\;,& \rho^{\text{\tiny F}}_{\text{\tiny R}}(m_{1},p_{1})\otimes\rho^{\text{\tiny B }}_{\text{\tiny L}}(m_{2},p_{2})\,,\end{split}\] (B.1) of the superalgebra \(\mathfrak{su}(1|1)^{2}_{c.e.}\), normalising the scattering between highest-weight states to one. The two subscript indices (L or R) in the S matrices label the choice of coefficients used to parameterise the associated supercharges, which can be functions of \(x_{\text{\tiny L}}^{\pm}\) or \(x_{\text{\tiny R}}^{\pm}\) respectively; the superscript indices label instead the highest weight states of the representations associated with the scattered particles and can be B or F (bosonic or fermionic). 
### Left-left scattering

The S-matrix acting on double-particle states in the representation \(\rho_{\text{L}}^{\text{B}}(m_{1},p_{1})\otimes\rho_{\text{L}}^{\text{B}}(m_{2},p_{2})\) is determined by \[\begin{split}\mathbf{S}_{\text{LL}}^{\text{BB}}\ket{\phi_{\text{L1}}^{\text{B}}\,\phi_{\text{L2}}^{\text{B}}}&=A_{\text{LL}}^{\text{BB}}\ket{\phi_{\text{L2}}^{\text{B}}\,\phi_{\text{L1}}^{\text{B}}},\\ \mathbf{S}_{\text{LL}}^{\text{BB}}\ket{\phi_{\text{L1}}^{\text{B}}\,\varphi_{\text{L2}}^{\text{F}}}&=B_{\text{LL}}^{\text{BB}}\ket{\varphi_{\text{L2}}^{\text{F}}\,\phi_{\text{L1}}^{\text{B}}}+C_{\text{LL}}^{\text{BB}}\ket{\phi_{\text{L2}}^{\text{B}}\,\varphi_{\text{L1}}^{\text{F}}},\\ \mathbf{S}_{\text{LL}}^{\text{BB}}\ket{\varphi_{\text{L1}}^{\text{F}}\,\phi_{\text{L2}}^{\text{B}}}&=D_{\text{LL}}^{\text{BB}}\ket{\phi_{\text{L2}}^{\text{B}}\,\varphi_{\text{L1}}^{\text{F}}}+E_{\text{LL}}^{\text{BB}}\ket{\varphi_{\text{L2}}^{\text{F}}\,\phi_{\text{L1}}^{\text{B}}},\\ \mathbf{S}_{\text{LL}}^{\text{BB}}\ket{\varphi_{\text{L1}}^{\text{F}}\,\varphi_{\text{L2}}^{\text{F}}}&=F_{\text{LL}}^{\text{BB}}\ket{\varphi_{\text{L2}}^{\text{F}}\,\varphi_{\text{L1}}^{\text{F}}},\end{split} \tag{B.2}\] with coefficients: \[A_{\text{LL}}^{\text{BB}} =1\,, B_{\text{LL}}^{\text{BB}} =e^{-\frac{i}{2}p_{1}}\frac{x_{\text{L1}}^{+}-x_{\text{L2}}^{+}}{x_{\text{L1}}^{-}-x_{\text{L2}}^{+}}\,, \tag{B.3}\] \[C_{\text{LL}}^{\text{BB}} =e^{\frac{i}{2}(p_{2}-p_{1})}\frac{x_{\text{L1}}^{-}-x_{\text{L1}}^{+}}{x_{\text{L1}}^{-}-x_{\text{L2}}^{+}}\,\frac{\eta_{\text{L2}}}{\eta_{\text{L1}}}\,, D_{\text{LL}}^{\text{BB}} =e^{\frac{i}{2}p_{2}}\frac{x_{\text{L1}}^{-}-x_{\text{L2}}^{-}}{x_{\text{L1}}^{-}-x_{\text{L2}}^{+}}\,,\] \[E_{\text{LL}}^{\text{BB}} =\frac{x_{\text{L1}}^{-}-x_{\text{L1}}^{+}}{x_{\text{L1}}^{-}-x_{\text{L2}}^{+}}\,\frac{\eta_{\text{L2}}}{\eta_{\text{L1}}}\,, F_{\text{LL}}^{\text{BB}} =-e^{\frac{i}{2}(p_{2}-p_{1})}\frac{x_{\text{L1}}^{+}-x_{\text{L2}}^{-}}{x_{\text{L1}}^{-}-x_{\text{L2}}^{+}}\,.\]

### Right-right scattering

Using the convention (B.2) also for the remaining S matrices we obtain \[A_{\text{RR}}^{\text{FF}} =1\,, B_{\text{RR}}^{\text{FF}} =-e^{\frac{i}{2}p_{1}}\frac{x_{\text{R1}}^{-}-x_{\text{R2}}^{-}}{x_{\text{R1}}^{+}-x_{\text{R2}}^{-}}\,, \tag{B.4}\] \[C_{\text{RR}}^{\text{FF}} =e^{\frac{i}{2}(p_{1}-p_{2})}\frac{x_{\text{R1}}^{+}-x_{\text{R1}}^{-}}{x_{\text{R1}}^{+}-x_{\text{R2}}^{-}}\,\frac{\eta_{\text{R2}}}{\eta_{\text{R1}}}\,, D_{\text{RR}}^{\text{FF}} =-e^{-\frac{i}{2}p_{2}}\frac{x_{\text{R1}}^{+}-x_{\text{R2}}^{+}}{x_{\text{R1}}^{+}-x_{\text{R2}}^{-}}\,,\] \[E_{\text{RR}}^{\text{FF}} =\frac{x_{\text{R1}}^{+}-x_{\text{R1}}^{-}}{x_{\text{R1}}^{+}-x_{\text{R2}}^{-}}\,\frac{\eta_{\text{R2}}}{\eta_{\text{R1}}}\,, F_{\text{RR}}^{\text{FF}} =-e^{\frac{i}{2}(p_{1}-p_{2})}\frac{x_{\text{R1}}^{-}-x_{\text{R2}}^{+}}{x_{\text{R1}}^{+}-x_{\text{R2}}^{-}}\,.\]

### Left-right scattering

\[A_{\text{LR}}^{\text{BF}} =1\,, B_{\text{LR}}^{\text{BF}} =e^{-\frac{i}{2}p_{1}}\frac{x_{\text{L1}}^{+}x_{\text{R2}}^{-}-1}{x_{\text{L1}}^{-}x_{\text{R2}}^{-}-1}\,, \tag{B.5}\] \[C_{\text{LR}}^{\text{BF}} =e^{-\frac{i}{2}(p_{1}+p_{2})}\frac{x_{\text{L1}}^{-}-x_{\text{L1}}^{+}}{x_{\text{L1}}^{-}x_{\text{R2}}^{-}-1}\,\frac{\eta_{\text{R2}}}{\eta_{\text{L1}}}\,, D_{\text{LR}}^{\text{BF}} =-e^{-\frac{i}{2}p_{2}}\frac{x_{\text{L1}}^{-}x_{\text{R2}}^{+}-1}{x_{\text{L1}}^{-}x_{\text{R2}}^{-}-1}\,,\] \[E_{\text{LR}}^{\text{BF}} =\frac{x_{\text{L1}}^{+}-x_{\text{L1}}^{-}}{x_{\text{L1}}^{-}x_{\text{R2}}^{-}-1}\,\frac{\eta_{\text{R2}}}{\eta_{\text{L1}}}\,, F_{\text{LR}}^{\text{BF}} =e^{-\frac{i}{2}(p_{1}+p_{2})}\frac{x_{\text{L1}}^{+}x_{\text{R2}}^{+}-1}{x_{\text{L1}}^{-}x_{\text{R2}}^{-}-1}\,.\]

### Right-left scattering

\[A_{\text{RL}}^{\text{FB}} =1\,, B_{\text{RL}}^{\text{FB}} =-e^{\frac{i}{2}p_{1}}\frac{x_{\text{R1}}^{-}x_{\text{L2}}^{+}-1}{x_{\text{R1}}^{+}x_{\text{L2}}^{+}-1}\,, \tag{B.6}\] \[C_{\text{RL}}^{\text{FB}} =e^{\frac{i}{2}(p_{1}+p_{2})}\frac{x_{\text{R1}}^{+}-x_{\text{R1}}^{-}}{x_{\text{R1}}^{+}x_{\text{L2}}^{+}-1}\,\frac{\eta_{\text{L2}}}{\eta_{\text{R1}}}\,, D_{\text{RL}}^{\text{FB}} =e^{\frac{i}{2}p_{2}}\frac{x_{\text{R1}}^{+}x_{\text{L2}}^{-}-1}{x_{\text{R1}}^{+}x_{\text{L2}}^{+}-1}\,,\] \[E_{\text{RL}}^{\text{FB}} =\frac{x_{\text{R1}}^{-}-x_{\text{R1}}^{+}}{x_{\text{R1}}^{+}x_{\text{L2}}^{+}-1}\,\frac{\eta_{\text{L2}}}{\eta_{\text{R1}}}\,, F_{\text{RL}}^{\text{FB}} =e^{\frac{i}{2}(p_{1}+p_{2})}\frac{x_{\text{R1}}^{-}x_{\text{L2}}^{-}-1}{x_{\text{R1}}^{+}x_{\text{L2}}^{+}-1}\,.\]

## Appendix C Relativistic S matrix from symmetries

Using the representations constructed above we may try to fix the two-particle S matrix for every value of \(m\in\mathbb{Z}\) and \(\theta\in\mathbb{R}\). Moreover, we may impose the following conditions:

1. The S matrix obeys the Yang-Baxter equation;
2. The S matrix obeys physical unitarity and braiding unitarity, up to specifying an appropriate pre-factor;
3. The S matrix obeys crossing symmetry, up to specifying an appropriate pre-factor.

We will comment later on whether this coincides with a suitable limit of the S matrix of appendix B. The precise form of the S-matrix will depend on whether we pick a bosonic or fermionic highest weight state and on each representation (the other cases differ by some minus signs). For simplicity, let us consider the case where all representations have a bosonic highest-weight state, so that an explicit basis for the two-particle Hilbert space is \[\left(|\phi_{1}^{\text{B}}\,\phi_{2}^{\text{B}}\rangle,\,|\phi_{1}^{\text{B}} \,\varphi_{2}^{\text{F}}\rangle,\,|\varphi_{1}^{\text{F}}\,\phi_{2}^{\text{B} }\rangle,\,|\varphi_{1}^{\text{F}}\,\varphi_{2}^{\text{F}}\rangle\right), \tag{C.1}\] where \(1\) and \(2\) refer to \((m_{1},\theta_{1})\) and \((m_{2},\theta_{2})\), respectively. In this way we will find \[\begin{split}&\mathbf{S}_{12}^{\text{BB}}\,|\phi_{1}^{\text{B}} \,\phi_{2}^{\text{B}}\rangle=A_{12}^{\text{BB}}\,|\phi_{2}^{\text{B}}\,\phi_{1 }^{\text{B}}\rangle,\\ &\mathbf{S}_{12}^{\text{BB}}\,|\phi_{1}^{\text{B}}\,\varphi_{2}^{ \text{F}}\rangle=B_{12}^{\text{BB}}\,|\varphi_{2}^{\text{F}}\,\phi_{1}^{\text{B }}\rangle+C_{12}^{\text{BB}}\,|\phi_{2}^{\text{B}}\,\varphi_{1}^{\text{F}} \rangle,\\ &\mathbf{S}_{12}^{\text{BB}}\,|\varphi_{1}^{\text{F}}\,\phi_{2}^{ \text{B}}\rangle=D_{12}^{\text{BB}}\,|\phi_{2}^{\text{B}}\,\varphi_{1}^{\text{ F}}\rangle+E_{12}^{\text{BB}}\,|\varphi_{2}^{\text{F}}\,\phi_{1}^{\text{B}} \rangle,\\ &\mathbf{S}_{12}^{\text{BB}}\,|\varphi_{1}^{\text{F}}\,\varphi_{2}^{ \text{F}}\rangle=F_{12}^{\text{BB}}\,|\varphi_{2}^{\text{F}}\,\varphi_{1}^{ \text{F}}\rangle\,,\end{split} \tag{C.2}\] where the superscript "BB" indicates the highest-weight state. We will also distinguish the case of massive (\(m\neq 0\) mod\(k\)) and massless (\(m=0\) mod\(k\)) representations.
In what follows it will be useful to use the short-hands \[\mathscr{S}_{1}=\text{sgn}\big{[}\sin\frac{\pi m_{1}}{k}\big{]},\qquad \mathscr{S}_{2}=\text{sgn}\big{[}\sin\frac{\pi m_{2}}{k}\big{]},\qquad(m_{i} \neq 0\text{ mod}k). \tag{128}\] Massive-massive scattering.By normalising the highest-weight scattering to one, we find \[\begin{split} A_{12}^{\text{BB}}=&\,1\,,\qquad \qquad\qquad\qquad B_{12}^{\text{BB}}=\frac{\mathscr{S}_{1}e^{\frac{im_{2}\pi}{ k}+\theta}-\mathscr{S}_{2}e^{\frac{im_{1}\pi}{k}}}{\mathscr{S}_{1}e^{\frac{(m_{1}+m_{2}) \pi}{k}+\theta}-\mathscr{S}_{2}}\,,\\ C_{12}^{\text{BB}}=&\,\frac{ie^{\frac{im_{1}\pi}{k} +\frac{\theta}{2}}\sqrt{\mu(m_{1})\mu(m_{2})}}{\mathscr{S}_{1}e^{\frac{(m_{1}+m _{2})\pi}{k}+\theta}-\mathscr{S}_{2}}\,,\qquad D_{12}^{\text{BB}}=\frac{ \mathscr{S}_{1}e^{\frac{im_{1}\pi}{k}+\theta}-\mathscr{S}_{2}e^{\frac{im_{2} \pi}{k}}}{\mathscr{S}_{1}e^{\frac{(m_{1}+m_{2})\pi}{k}+\theta}-\mathscr{S}_{ 2}}\,,\\ E_{12}^{\text{BB}}=&\,\frac{ie^{\frac{im_{2}\pi}{k} +\frac{\theta}{2}}\sqrt{\mu(m_{1})\mu(m_{2})}}{\mathscr{S}_{1}e^{\frac{(m_{1}+m _{2})\pi}{k}+\theta}-\mathscr{S}_{2}}\,,\qquad F_{12}^{\text{BB}}=\frac{- \mathscr{S}_{1}e^{\theta}+\mathscr{S}_{2}e^{\frac{(m_{1}+m_{2})\pi}{k}}}{ \mathscr{S}_{1}e^{\frac{(m_{1}+m_{2})\pi}{k}+\theta}-\mathscr{S}_{2}}\,.\end{split} \tag{129}\] Notice that this expression is not analytic in \(m_{1},m_{2}\) and depends on \(\mathscr{S}_{1},\mathscr{S}_{2}\). It simplifies further when assuming a definite value for \(\mathscr{S}_{1},\mathscr{S}_{2}\). For instance, we have for \(\mathscr{S}_{1}=\mathscr{S}_{2}=+1\) \[A_{12}^{\text{\tiny BB}}= \,1\,, B_{12}^{\text{\tiny BB}}=\frac{\sinh\Bigl{(}\frac{\theta}{2}-\frac{i \pi}{2k}(m_{1}-m_{2})\Bigr{)}}{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i\pi}{2k}(m _{1}+m_{2})\Bigr{)}}\,, \tag{104}\] \[C_{12}^{\text{\tiny BB}}= \,\frac{i\sqrt{\mu(m_{1})\mu(m_{2})}}{2\sinh\Bigl{(}\frac{\theta} {2}+\frac{i\pi}{2k}(m_{1}+m_{2})\Bigr{)}}e^{\frac{i\pi}{2k}(m_{1}-m_{2})}\,, D_{12}^{\text{\tiny BB}}=\frac{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i \pi}{2k}(m_{1}-m_{2})\Bigr{)}}{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i\pi}{2k}(m _{1}+m_{2})\Bigr{)}}\,,\] (105) \[E_{12}^{\text{\tiny BB}}= \,\frac{i\sqrt{\mu(m_{1})\mu(m_{2})}}{2\sinh\Bigl{(}\frac{\theta} {2}+\frac{i\pi}{2k}(m_{1}+m_{2})\Bigr{)}}e^{-\frac{i\pi}{2k}(m_{1}-m_{2})}\,, F_{12}^{\text{\tiny BB}}=\,-\,\frac{\sinh\Bigl{(}\frac{\theta}{2}- \frac{i\pi}{2k}(m_{1}+m_{2})\Bigr{)}}{\sinh\Bigl{(}\frac{\theta}{2}+\frac{i \pi}{2k}(m_{1}+m_{2})\Bigr{)}}\,, \tag{106}\] while for \(\mathscr{S}_{1}=-\mathscr{S}_{2}=+1\) we find \[A_{12}^{\text{\tiny BB}}= \,1\,, B_{12}^{\text{\tiny BB}}= \,\frac{\cosh\Bigl{(}\frac{\theta}{2}-\frac{i\pi}{2k}(m_{1}-m_{2}) \Bigr{)}}{\cosh\Bigl{(}\frac{\theta}{2}+\frac{i\pi}{2k}(m_{1}+m_{2})\Bigr{)}}\,, \tag{107}\] \[C_{12}^{\text{\tiny BB}}= \,\frac{i\sqrt{\mu(m_{1})\mu(m_{2})}}{2\cosh\Bigl{(}\frac{\theta} {2}+\frac{i\pi}{2k}(m_{1}+m_{2})\Bigr{)}}e^{\frac{i\pi}{2k}(m_{1}-m_{2})}\,, D_{12}^{\text{\tiny BB}}=\,\frac{\cosh\Bigl{(}\frac{\theta}{2}+ \frac{i\pi}{2k}(m_{1}-m_{2})\Bigr{)}}{\cosh\Bigl{(}\frac{\theta}{2}+\frac{i\pi }{2k}(m_{1}+m_{2})\Bigr{)}}\,,\] (108) \[E_{12}^{\text{\tiny BB}}= \,\frac{i\sqrt{\mu(m_{1})\mu(m_{2})}}{2\cosh\Bigl{(}\frac{\theta} {2}+\frac{i\pi}{2k}(m_{1}+m_{2})\Bigr{)}}e^{-\frac{i\pi}{2k}(m_{1}-m_{2})}\,, F_{12}^{\text{\tiny BB}}=\,-\,\frac{\cosh\Bigl{(}\frac{\theta}{2}- \frac{i\pi}{2k}(m_{1}+m_{2})\Bigr{)}}{\cosh\Bigl{(}\frac{\theta}{2}+\frac{i\pi }{2k}(m_{1}+m_{2})\Bigr{)}}\,. 
\tag{109}\] Different statistics.Were we to consider a different statistics for the highest-weight state we would expect to find similar minus signs on some matrix elements. In fact, because of the monodromy property (3.15) of the two-particle representation, we have for the two-particle S matrix \[\mathbf{S}^{\text{\tiny B/F,*}}(m_{1}+k,\theta_{1};m_{2},\theta_{2})=\mathbf{ S}^{\text{\tiny F/B,*}}(m_{1},\theta_{1};m_{2},\theta_{2})\,, \tag{110}\] \[\mathbf{S}^{\text{\tiny*,B/F}}(m_{1},\theta_{1};m_{2}+k,\theta_{2}) =\mathbf{S}^{\text{\tiny*,F/B}}(m_{1},\theta_{1};m_{2},\theta_{2} )\,,\] provided of course that the normalisation may be chosen appropriately. In other words, shifting \(m_{i}\) by \(k\) is equivalent to flipping the statistics of the \(i\)-th particle. Because this consideration relies only on the form of the coproduct that gives (3.15), it also applies to massless excitations. Mixed-mass scattering.We may also consider the scattering of excitations of mixed-mass. In this case, the massless particle may be moving to the left or to the right. For a process to be physical, it is necessary to require that the particles are ordered so that for their velocities we have \(v_{1}>v_{2}\). (More general processes can be considered to discuss unitarity, of course.) With a slight abuse of notation let us set \[\mathscr{S}_{i}=\begin{cases}+1\,,&m_{i}=0\text{ mod}(2k),\\ -1\,,&m_{i}=k\text{ mod}(2k).\end{cases} \tag{111}\] Using the same notation as in (C.2) we find that \[\begin{split} A_{12}^{\text{B+B}}=&\,1\,,\qquad\qquad B _{12}^{\text{B+B}}=\mathscr{S}_{1}\,,\qquad C_{12}^{\text{B+B}}=&\,0\,, \\ D_{12}^{\text{B+B}}=&\,e^{-\frac{i\pi m_{2}}{k}}\,, \qquad E_{12}^{\text{B+B}}=&\,0\,,\qquad F_{12}^{\text{B+B}}=& \,-\mathscr{S}_{1}e^{-\frac{i\pi m_{2}}{k}}\,.\end{split}\] (C.9) and \[\begin{split} A_{12}^{\text{BB}-}=&\,1\,,\qquad\qquad B _{12}^{\text{BB}-}=e^{-\frac{i\pi m_{1}}{k}}\,,\qquad C_{12}^{\text{BB}-}=& \,0\,,\\ D_{12}^{\text{BB}-}=&\,\mathscr{S}_{2}\,,\qquad E_{1 2}^{\text{BB}-}=&\,0\,,\qquad\qquad F_{12}^{\text{BB}-}=& \,-\mathscr{S}_{2}e^{-\frac{i\pi m_{1}}{k}}\,.\end{split}\] (C.10) where the plus and minus subscripts indicate the chirality of the massless particle. We see that the scattering is particularly simple, without any rotation in isotopic space. The S-matrix elements of the inverse processes can be found by imposing braiding unitarity. Massless scattering, opposite chiralityIn this case there is only one physical process due to the condition on the velocities, \(v_{1}>v_{2}\). We find \[\begin{split} A_{12}^{\text{B+B}-}=&\,1\,,\qquad\quad B _{12}^{\text{B+B}-}=\mathscr{S}_{1}\,,\qquad C_{12}^{\text{B+B}-}=&\,0\,, \\ D_{12}^{\text{B+B}-}=&\,\mathscr{S}_{2}\,,\qquad E_{1 2}^{\text{B+B}-}=&\,0\,,\qquad\quad F_{12}^{\text{B+B}-}=&\,- \mathscr{S}_{1}\mathscr{S}_{2}\,.\end{split}\] (C.11) Massless scattering, same chirality.Let us now come to the case of two massless particles that have the same chirality. This is not a perturbative scattering process, as \(v_{1}=v_{2}\), but it is very interesting to consider it nonetheless. By imposing the commutation with the supercharges we find several solutions. However, demanding unitarity, crossing symmetry, as well as that the Yang-Baxter equation is satisfied, we find that for all values of \(\mathscr{S}_{1},\mathscr{S}_{2}\) there is a one-parameter family of solutions. 
The solution takes the form \[\begin{split} A_{12}^{\text{B+B}+}=&\,1\,,\\ B_{12}^{\text{B+B}+}=&\,\mathscr{S}_{1}\frac{i+ \mathscr{S}_{2}\cot\frac{\alpha}{2}-e^{\theta}(i+\mathscr{S}_{1}\cot\frac{ \alpha}{2})}{i+\mathscr{S}_{2}\cot\frac{\alpha}{2}+e^{\theta}(i-\mathscr{S}_{1} \cot\frac{\alpha}{2})}\,,\\ C_{12}^{\text{B+B}+}=&\,\frac{2i\mathscr{S}_{1} \mathscr{S}_{2}e^{\frac{\theta}{2}}}{i+\mathscr{S}_{2}\cot\frac{\alpha}{2}+e^ {\theta}(i-\mathscr{S}_{1}\cot\frac{\alpha}{2})}\,,\\ D_{12}^{\text{B+B}+}=&\,\frac{\cot\frac{\alpha}{2}-i \mathscr{S}_{2}+\mathscr{S}_{2}e^{\theta}(i-\mathscr{S}_{1}\cot\frac{\alpha}{2 })}{i+\mathscr{S}_{2}\cot\frac{\alpha}{2}+e^{\theta}(i-\mathscr{S}_{1}\cot \frac{\alpha}{2})}\,,\\ E_{12}^{\text{B+B}+}=&\,\frac{2ie^{\frac{\theta}{2} }}{i+\mathscr{S}_{2}\cot\frac{\alpha}{2}+e^{\theta}(i-\mathscr{S}_{1}\cot\frac {\alpha}{2})}\,,\\ F_{12}^{\text{B+B}+}=&\,\frac{\mathscr{S}_{2}(i- \mathscr{S}_{2}\cot\frac{\alpha}{2})+\frac{i}{2}e^{\theta}(\mathscr{S}_{1}+ \mathscr{S}_{2})+\frac{1}{2}\cot\frac{\alpha}{2}e^{\theta}(\mathscr{S}_{1} \mathscr{S}_{2}+1)}{\mathscr{S}_{1}(i+\mathscr{S}_{2}\cot\frac{\alpha}{2})+ \frac{i}{2}e^{\theta}(\mathscr{S}_{1}+\mathscr{S}_{2})-\frac{1}{2}\cot\frac{ \alpha}{2}e^{\theta}(\mathscr{S}_{1}\mathscr{S}_{2}+1)}\,.\end{split}\] (C.12) and it depends on a real parameter \[\alpha\in[0,2\pi]\,.\] (C.13) Once again, flipping the signs \(\mathscr{S}_{i}\) is tantamount to swapping the statistics of the \(i\)-th particle. ### Dressing factors and crossing equations In the previous subsections we have normalised all S-matrix elements \(A^{**}_{12}\) as \[A^{\text{\tiny BB}}(m_{1},m_{2};\theta_{12})=A^{\text{\tiny B+B}}(m_{1},m_{2}; \theta_{12})=\cdots=A^{\text{\tiny B+B}}(m_{1},m_{2};\theta_{12})=1\,. \tag{110}\] It is easy to imagine that this choice, while convenient, is not compatible with crossing. Indeed, let us introduce dressing factors for each block of the S matrix so that \[A^{\text{\tiny BB}}(m_{1},m_{2};\theta_{12})=\sigma(m_{1},m_{2};\theta_{12})^ {-1}\,,\quad\ldots\,,\quad A^{\text{\tiny B+B}}(m_{1},m_{2};\theta_{12})= \sigma(m_{1}^{+},m_{2}^{+};\theta_{12})^{-1}\,. \tag{111}\] The crossing equations will yield new constraints for the functions \(\sigma(m_{1},m_{2};\theta_{12})^{-1}\). There are several ways to derive the crossing equations. One way which is particularly transparent physically is to construct an excitation \(Z(m,\theta;m^{\prime},\theta^{\prime})\) which emerges from the tensor product of two of our representations, and is a _singlet_ of the Zamolodchikov-Faddeev algebra of the theory (see _e.g._[9] for a review). This means that the singlet has to be annihilated by all supercharges of the theory. Finally, we will require that it has bosonic statistics. Based on these requirements, consistency of the Zamolodchikov-Faddeev algebra indicates that the operator creating such a singlet must commute with all other ZF operators. This provides a way to derive the crossing equation. Clearly the first step in this process is to determine whether such a singlet exists at all. The singlet representation of the algebra (13) is annihilated by all central charges. Hence it must be \[\mathbf{E}\,\left|Z(m,\theta;m^{\prime},\theta^{\prime})\right\rangle=\mathbf{ M}\,\left|Z(m,\theta;m^{\prime},\theta^{\prime})\right\rangle=\mathbf{C}\, \left|Z(m,\theta;m^{\prime},\theta^{\prime})\right\rangle=0\,. 
\tag{112}\] The vanishing of the first two supercharges imposes that \[\theta^{\prime}=\theta\pm i\pi\,, \tag{113}\] as we expect in order to obtain the crossing equations. The second imposes that \[m^{\prime}=-m\text{ mod}k\,. \tag{114}\] This fact immediately implies that _generically, particles of mass \(m\) cannot be their own anti-particles_. Let us consider, for definiteness, a representation of mass \(m\) with \(0<m<k\) with bosonic highest-weight state. Eq. (114) indicates that its antiparticles live in the representation with either \(m^{\prime}=-m\) or \(m^{\prime}=k-m\).21 But which one is it? To answer this question, let us observe that the singlet must be constructed out of a linear combination of highest- and lowest-states, otherwise it cannot be annihilate by all supercharges. Schematically, Footnote 21: We are not discussing the cases \(m^{\prime}=2k-m\), \(m^{\prime}=3k-m\), _etc._, because we have seen that all of our construction is trivially \(2k\)-periodic. \[\left|Z(m,\vartheta;m^{\prime},\vartheta^{\prime})\right\rangle=\left|\phi^{ *}(m,\vartheta)\,\varphi^{*}(m^{\prime},\vartheta^{\prime})\right\rangle+c(m,m ^{\prime})\,\left|\varphi^{*}(m,\vartheta)\,\phi^{*}(m^{\prime},\vartheta^{ \prime})\right\rangle\,, \tag{115}\] and an explicit computation yields the coefficient \(c(m,m^{\prime})\). Because we want \(|Z\rangle\) to behave as a boson when considering the scattering with a third particle, we need to consider the form of the coproduct on a three-particle state. From (3.14) have that, schematically \[\mathbf{q}_{(123)}=\mathbf{q}_{(1)}\otimes\mathbf{1}\otimes\mathbf{1}+e^{i\frac {\pi m_{1}}{k}}\,\Sigma\otimes\mathbf{q}_{(2)}\otimes\mathbf{1}+e^{i\frac{\pi (m_{1}+m_{2})}{k}}\,\Sigma\otimes\Sigma\otimes\mathbf{q}_{(3)}\,,\] (C.20) where the subscript indicates on which mass and rapidity the representation depends. If the first and second particle make up a singlet \(|Z_{(12)}\rangle\) and the third particle is some generic \(|X_{(3)}\rangle\), we have \[\mathbf{q}_{(123)}\big{|}Z_{(12)}\otimes X_{(3)}\big{\rangle}=(-1)^{F_{(12)}}e ^{i\frac{\pi(m+m^{\prime})}{k}}\big{|}Z_{(12)}\otimes(\mathbf{q}_{(3)}X_{(3)} )\big{\rangle}\,.\] (C.21) For the singlet to have bosonic statistics we need that \[(-1)^{F_{(12)}}e^{i\frac{\pi(m+m^{\prime})}{k}}=+1\,,\] (C.22) where \((-1)^{F_{(12)}}\) is the naive fermion sign of the singlet's components. This gives two possibilities in the case \(0\leq m<k\) with bosonic highest weight (which we choose for definiteness): 1. \(m^{\prime}=k-m\), so that \(m+m^{\prime}=k\). In this case, the highest-weight state of the two representations must have the same statistics (_i.e._, the "prime" representation must also have a bosonic a highest-weight state in this example) and \[|Z\rangle=\big{|}\phi^{\mathrm{B}}(m,\vartheta)\,\varphi^{\mathrm{F}}(k-m, \vartheta^{\prime})\big{\rangle}+c(m,k-m)\,\left|\varphi^{\mathrm{F}}(m, \vartheta)\,\phi^{\mathrm{B}}(k-m,\vartheta^{\prime})\right\rangle,\] (C.23) so that \(F_{(12)}=+1\). 2. \(m^{\prime}=-m\), so that \(m+m^{\prime}=0\) instead. In this case we should take the opposite statistics, for the "prime" representations, which gives in this case \[|Z\rangle=\big{|}\phi^{\mathrm{B}}(m,\vartheta)\,\varphi^{\mathrm{B}}(-m, \vartheta^{\prime})\big{\rangle}+c(m,-m)\,\left|\varphi^{\mathrm{F}}(m, \vartheta)\,\phi^{\mathrm{F}}(-m,\vartheta^{\prime})\right\rangle,\] (C.24) so that \(F_{(12)}=0\). This case is actually related to the previous due to the monodromy condition (3.15), see also (C.7). 
In fact, it yields the same crossing equations as it should. This discussion is perfectly compatible with the construction of the singlets before the limit, _cf._[9]. For massless particles, the two equivalent constructions of the singlet reduce to \[|Z_{0}\rangle=\big{|}\phi^{\mathrm{B}}(0,\vartheta)\,\varphi^{\mathrm{B}}(0, \vartheta^{\prime})\big{\rangle}+c(0,0)\,\left|\varphi^{\mathrm{F}}(0, \vartheta)\,\phi^{\mathrm{F}}(0,\vartheta^{\prime})\right\rangle\,,\] (C.25) for \(m=0\) and to \[|Z_{k}\rangle=\big{|}\phi^{\mathrm{B}}(k,\vartheta)\,\varphi^{\mathrm{B}}(-k, \vartheta^{\prime})\big{\rangle}+c(k,-k)\,\left|\varphi^{\mathrm{F}}(k, \vartheta)\,\phi^{\mathrm{F}}(-k,\vartheta^{\prime})\right\rangle\] (C.26) for \(m=k\). Recall that the fundamental particles of the theory live in the tensor product of two representations of the relativistic limit of \(su(1,1)^{2}_{c.e.}\), as shown in (23); for this reason, in the derivation of the crossing equations of the full model, we should consider the tensor product of two singlets of the form specified above. Imposing that these singlets trivially commute with all the fundamental particles and their bound states we obtain the crossing equations in the limit. Massive-massive and mixed-mass crossing equations.For the massive-massive and mixed-mass scattering taking the relativistic limit of the crossing equations of the full theory is equivalent to taking the limit of the theory first and then defining the crossing equations from scratch as explained above. This is also the case for the scattering between massless particles with opposite chirality. This is expected because in all these cases the the matrix part of the S matrix is completely constrained after the limit. These crossing equations have been discussed in section 3. Same chirality massless crossing equations.The situation is different if we consider the scattering of massless particles with the same chirality: in this case, two among the four supercharges composing the algebra in (13) vanish (what supercharges depend on whether we consider the scattering between chiral-chiral particles or antichiral-antichiral particles) and the S matrix remains partially unconstrained after the limit. This is clear by the fact that we obtain a one-parameter family of solutions for the S-matrix elements after the limit, as shown in (126). In the following, we will consider the case where both massless particles are chiral (i.e. \(M_{1}>0\) and \(M_{2}>0\)). The scattering of antichiral particles can be studied similarly. Let us consider the crossing equations for half representations first. 
Normalising the S-matrix elements as in (152) we find the following crossing equations \[\sigma(0^{+},0^{+};\theta)^{-1}\sigma(0^{+},k^{+};\theta-i\pi)^{- 1} =e^{-i\frac{\alpha}{2}}\frac{\sinh\!\left(\frac{\theta}{2}-i\frac{ \alpha}{2}\right)}{\sinh\frac{\theta}{2}}\,, \tag{153a}\] \[\sigma(0^{+},k^{+};\theta)^{-1}\sigma(0^{+},0^{+};\theta-i\pi)^{- 1} =e^{-i\frac{\alpha}{2}}\frac{\sinh\!\left(\frac{\theta}{2}+i\frac{ \pi}{2}\right)}{\sinh\!\left(\frac{\theta}{2}+i\frac{\pi}{2}(\alpha+\pi) \right)}\,,\] (153b) \[\sigma(k^{+},0^{+};\theta)^{-1}\sigma(k^{+},k^{+};\theta-i\pi)^{- 1} =e^{i\frac{\alpha}{2}}\frac{\sinh\!\left(\frac{\theta}{2}+i\frac{ \pi}{2}\right)}{\sinh\!\left(\frac{\theta}{2}-i\frac{\pi}{2}(\alpha+\pi) \right)}\,,\] (153c) \[\sigma(k^{+},k^{+};\theta)^{-1}\sigma(k^{+},0^{+};\theta-i\pi)^{- 1} =e^{i\frac{\alpha}{2}}\frac{\sinh\!\left(\frac{\theta}{2}-i\frac{ \pi}{2}(2\pi-\alpha)\right)}{\sinh\frac{\theta}{2}}\,, \tag{153d}\] which are satisfied by \[\sigma(0^{+},0^{+};\theta)^{-1} =-\frac{\sinh\!\left(\frac{\theta}{2}-\frac{i\alpha}{2}\right)}{ \sinh\!\left(\frac{\theta}{2}+\frac{i\alpha}{2}\right)}\frac{R(\theta-i\alpha)R (\theta+i\alpha)}{R^{2}(\theta)}\,, \tag{111a}\] \[\sigma(0^{+},k^{+};\theta)^{-1} =-e^{-i\frac{\alpha}{2}}\times\frac{R(\theta+i\pi)R(\theta-i\pi) }{R(\theta+i(\pi-\alpha))R(\theta-i(\pi-\alpha))}\,,\] (111b) \[\sigma(k^{+},k^{+};\theta)^{-1} =\frac{R(\theta-i\alpha)R(\theta+i\alpha)}{R^{2}(\theta)}\,,\] (111c) \[\sigma(k^{+},0^{+};\theta)^{-1} =-e^{+i\frac{\alpha}{2}}\times\frac{R(\theta+i\pi)R(\theta-i\pi) }{R(\theta+i(\pi-\alpha))R(\theta-i(\pi-\alpha))}\,. \tag{111d}\] It is easy to check that all dressing factors written above satisfy both unitarity and braiding unitarity. Moreover, all the chiral-chiral massless S matrices normalized with these factors have no poles in the physical strip \((0,i\pi)\). This is true for any value of \(\alpha\in[0,2\pi]\). If we are interested in finding the dressing factor of the full model, defined by the normalisation in (49), then we should consider the massless singlets for the full theory. These singlets can be constructed out of the half-theory singlets defined in (110) and (111); they can be written as \[|\Omega^{0k}\rangle \simeq|Z_{0}\,Z_{k}\rangle\in\Big{(}\rho^{\text{B}}_{\text{rel}}( 0,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(k,\theta)\Big{)}\otimes\Big{(} \rho^{\text{B}}_{\text{rel}}(k,\theta+i\pi)\otimes\rho^{\text{B}}_{\text{rel} }(0,\theta+i\pi)\Big{)} \tag{112}\] \[|\Omega^{k0}\rangle \simeq|Z_{k}\,Z_{0}\rangle\in\Big{(}\rho^{\text{B}}_{\text{rel}}( k,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(0,\theta)\Big{)}\otimes\Big{(} \rho^{\text{B}}_{\text{rel}}(0,\theta+i\pi)\otimes\rho^{\text{B}}_{\text{rel} }(k,\theta+i\pi)\Big{)}\,.\] The different colours show how the singlets split between the different representations of the full algebra. Requiring that these singlets scatter trivially with all massless particles in the representations \(\rho^{\text{B}}_{\text{rel}}(0,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(k, \theta)\simeq\rho^{\text{B}}_{\text{rel}}(0,\theta)\otimes\rho^{\text{F}}_{ \text{rel}}(0,\theta)\) and \(\rho^{\text{B}}_{\text{rel}}(k,\theta)\otimes\rho^{\text{B}}_{\text{rel}}(0, \theta)\simeq\rho^{\text{F}}_{\text{rel}}(0,\theta)\otimes\rho^{\text{B}}_{ \text{rel}}(0,\theta)\) we obtain the following crossing equations \[\Big{(}\sigma^{\circ\circ}(\theta+i\pi)\Big{)}^{-2}\Big{(}\sigma^{\circ\circ} (\theta)\Big{)}^{-2} =\frac{\cosh\!\left(\frac{\theta}{2}-i\frac{\alpha}{2}\right)\! 
\cosh\!\left(\frac{\theta}{2}+i\frac{\alpha}{2}\right)}{\cosh^{2}\frac{\theta }{2}}\,, \tag{113}\] Equations (113) and (114) are obtained by multiplying (111a) and (111b) and (111c) respectively, and noting that the normalisation (49) requires to set \[\Big{(}\sigma^{\circ\circ}(\theta)\Big{)}^{-2} =\sigma(0^{+},0^{+};\theta)^{-1}\sigma(k^{+},k^{+};\theta)^{-1}=- \sigma(0^{+},k^{+};\theta)^{-1}\sigma(k^{+},0^{+};\theta)^{-1}\,. \tag{114}\] The minus sign in the second equality above is necessary to take into account fermionic exchanges in the full model. Note indeed that particles in the representations \(\rho^{\text{B}}(0,\theta_{1})\otimes\rho^{\text{F}}(0,\theta_{1})\) and \(\rho^{\text{F}}(0,\theta_{2})\otimes\rho^{\text{B}}(0,\theta_{2})\) are fermions and their scattering produces a minus sign which is not taken into account in the scattering between particles in half representations. If we assume that the dressing factor is the same for the scattering between all the massless representations, we obtain an overconstrained system of equations. To solve this system we need to require the RHS of (C.30) and (C.31) to be the same. This is possible only if \(\alpha=0\), \(\alpha=2\pi\) or \(\alpha=\pi\). The point \(\alpha=\pi\) corresponds to the limit of the full theory and the solution is given in (4.41). On the other hand, if \(\alpha=0\) or \(\alpha=2\pi\) all the massless-massless dressing factors can be set equal to 1 and the whole S matrix is in fact trivial. It is worth remarking that the conclusion about the allowed values of \(\alpha\) would not have changed even if we had allowed for a non-trivial rotation in the \(su(2)_{\circ}\) space, as that is factorised with respect to the internal \(su(1|1)\) structure.
2310.20419
Relative NN-Descent: A Fast Index Construction for Graph-Based Approximate Nearest Neighbor Search
Approximate Nearest Neighbor Search (ANNS) is the task of finding the database vector that is closest to a given query vector. Graph-based ANNS is the family of methods with the best balance of accuracy and speed for million-scale datasets. However, graph-based methods have the disadvantage of long index construction time. Recently, many researchers have improved the tradeoff between accuracy and speed during a search. However, there is little research on accelerating index construction. We propose a fast graph construction algorithm, Relative NN-Descent (RNN-Descent). RNN-Descent combines NN-Descent, an algorithm for constructing approximate K-nearest neighbor graphs (K-NN graphs), and RNG Strategy, an algorithm for selecting edges effective for search. This algorithm allows the direct construction of graph-based indexes without ANNS. Experimental results demonstrated that the proposed method had the fastest index construction speed, while its search performance is comparable to existing state-of-the-art methods such as NSG. For example, in experiments on the GIST1M dataset, the construction of the proposed method is 2x faster than NSG. Additionally, it was even faster than the construction speed of NN-Descent.
Naoki Ono, Yusuke Matsui
2023-10-31T12:46:18Z
http://arxiv.org/abs/2310.20419v1
# Relative NN-Descent: A Fast Index Construction for Graph-Based Approximate Nearest Neighbor Search ###### Abstract. Approximate Nearest Neighbor Search (ANNS) is the task of finding the database vector that is closest to a given query vector. Graph-based ANNS is the family of methods with the best balance of accuracy and speed for million-scale datasets. However, graph-based methods have the disadvantage of long index construction time. Recently, many researchers have improved the tradeoff between accuracy and speed during a search. However, there is little research on accelerating index construction. We propose a fast graph construction algorithm, Relative NN-Descent (RNN-Descent). RNN-Descent combines NN-Descent, an algorithm for constructing approximate K-nearest neighbor graphs (K-NN graphs), and RNG Strategy, an algorithm for selecting edges effective for search. This algorithm allows the direct construction of graph-based indexes without ANNS. Experimental results demonstrated that the proposed method had the fastest index construction speed, while its search performance is comparable to existing state-of-the-art methods such as NSG. For example, in experiments on the GIST1M dataset, the construction of the proposed method is 2x faster than NSG. Additionally, it was even faster than the construction speed of NN-Descent. approximate nearest neighbor search, graph-based index + Footnote †: journal: Computer graphics + Footnote †: journal: Computer graphics Our experiments showed that the proposed method has the equivalent search performance to the conventional methods, whereas its construction is much faster than theirs. For example, on the GIST1M dataset, the construction speed of the proposed method was about twice that of NSG (Kang et al., 2017), one of the state-of-the-art methods. Remarkably, the construction speed of the proposed method was even faster than that of the K-NN graph. We also constructed graphs on the SIFT20M (Kang et al., 2018) dataset to analyze their performance on large datasets. Our contributions are as follows: * We tackle the fast construction of graph indexes. Despite its importance, previous researches have yet to address speeding up the construction of graphs. * We propose RNN-Descent, a novel graph index construction algorithm. RNN-Descent simultaneously solves the problems of the conventional graph-based approaches: (1) the refinement-based approach and (2) the direct approach. * Our experiments demonstrate that RNN-Descent is the fastest construction algorithm, and the constructed index has a compatible search performance with conventional methods. ## 2. Related Work ### Approximate Nearest Neighbor Search (ANNS) We first formulate the nearest neighbor search (NNS) problem. Let \(\mathbf{q}\in\mathbb{R}^{d}\) as query vector, \(n\in\mathbb{Z}\) be the number of database vectors, \(\mathcal{X}=\{\mathbf{x}_{1},\dots,\mathbf{x}_{n}\}\subset\mathbb{R}^{d}\) be the database vectors and \(\text{dist}:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) be a distance function. Here, NNS is a task that returns the ID of the database vector \(\mathbf{x}_{i^{*}}\in\mathcal{X}\) closest to \(\mathbf{q}\): \[i^{*}=\operatorname*{argmin}_{i\in\{1,\dots,n\}}\text{dist}(\mathbf{q},\mathbf{x}_{i}) \tag{1}\] A naive approach to solving NNS is to compute the distance between all the data and the query. However, the time complexity of the exhaustive search is \(O(nd)\), which is slow for large and high-dimensional data. 
Numerous studies have addressed approximate NNS (ANNS) as a method for large-scale data. ANNS is a method that improves speed significantly at the cost of a slight loss of accuracy. ANNS methods generally have a trade-off between accuracy, speed, and memory consumption. Therefore the appropriate method depends on the scale of the problem. ANNS methods include hash-based (Beng et al., 2017; Wang et al., 2018), quantization-based (Kang et al., 2018; Wang et al., 2018), and tree-based methods (Kang et al., 2019; Wang et al., 2020). ### Graph-Based ANNS This section describes graph-based ANNS. Graph-based methods have the best trade-off between accuracy and speed for million-scale datasets (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018). Before searching, graph-based methods construct a graph in which vertices represent database vectors. The edges of the graph connect vectors close to each other. The search algorithm finds the approximate nearest neighbor by traversing the constructed graph toward the query. ``` 0: graph \(G=(V,E)\), query \(\mathbf{q}\in\mathbb{R}^{d},L\in\mathbb{Z}\) 0: approximate nearest neighbor \(v^{*}\in V\) 1:\(C\leftarrow\)InitializeCandidates() 2:while True do 3:\(u\leftarrow\) nearest unvisited point to \(\mathbf{q}\) in \(C\) 4:\(U\leftarrow\{v|(u,v)\in E\}\) 5:for\(v\in U\)do 6:if\(v\) is not visited then 7:\(C\gets C\cup\{v\}\) 8:if\(|C|>L\)then 9:\(C\leftarrow\) top \(L\) nearest points to \(\mathbf{q}\) in \(C\) 10:if\(C\) is not updated then 11: break 12:return nearest point to \(\mathbf{q}\) in \(C\) ``` **Algorithm 1**Search\((G,\mathbf{q},L)\) Algorithm 1 shows the typical algorithm of graph traversal. Here, \(V=\{1,\dots,n\}\) is the graph vertex set, \(E\subset V\times V\) is the edge set, and \(L\in\mathbb{Z}\) is a hyperparameter. Each \(v\in V\) corresponds to a database vector \(\mathbf{x}_{\mathbf{q}}\in\mathbb{R}^{d}\). First, the algorithm initializes the set of candidate neighborhood points \(C\subset V\) (L1). Each step in the while loop takes the unvisited point closest to the query \(u\) from \(C\) (L3), then adds unvisited vertices of \(u\)'s neighbors to \(C\) (L4-7). Subsequently, the algorithm selects the top \(L\) closest points to \(\mathbf{q}\) in \(C\) so that the size of \(C\) does not exceed \(L\) (L8-9). When \(C\) is no longer updated, the algorithm terminates the while loop and outputs the nearest neighbor from \(C\) (L10-11). One of the most straightforward graph indexes is the K-nearest neighbor graph (K-NN graph). A K-NN graph has edges from each vertex to the top K vertices closest to it. Since constructing an exact K-NN graph is time-consuming, several methods construct approximate K-NN graphs. NN-Descent (Dong et al., 2017) is one of the fastest and most accurate methods. NN-Descent improves K-NN graphs step-by-step by dealing with the neighbors of neighbors as candidates for new neighbors. Other approaches include divide-and-conquer (Beng et al., 2018; Wang et al., 2018), hashing-based (Kang et al., 2018), and sequentially adding vertices by solving ANNS on a subset of the data (Kang et al., 2018). Many methods develop different graph structures to improve the approach. The first approach is to improve the approximate K-NN graph. Because simple K-NN graphs perform poorly as ANNS indexes, some methods refine K-NN graphs to a high-performance graph index. NSG (Kang et al., 2017) extracts the edge candidates from the K-NN graph, then selects the necessary ones using RNG Strategy. 
RNG Strategy is a widely used heuristic for selecting edges based on the distance between neighbors. Related methods include those that focus on the angles between edges (Kang et al., 2018; Wang et al., 2018) or adjust the degree of selection by additional parameters (Kang et al., 2018). The second approach is to construct the graph-based index directly. HNSW (Kang et al., 2018) sequentially adds new data to the index. The construction algorithm finds neighbor candidates of the new vertex by solving ANNS for the current graph index. HNSW adds data sequentially. These methods consider the newly added data as a query vector and solve ANNS for the current graph index to find candidate neighbors for the new data. While these construction methods are simpler than those via K-NN graphs, it is slower for high-dimensional datasets where ANNS is difficult. ``` 0:\(\text{graph}\ G=(V,E)\) 0:edge candidates \(E^{\prime}\subset V\times V\) 1:\(E^{\prime}\gets E\) 2:for\(u\in V\)do 3:\(U\leftarrow\{v|(u,v)\in E\}\) 4:for all \((v_{1},v_{2})\in U\times U,v_{1}<v_{2}\)do 5:if at least one flag of \(v_{1}\) or \(v_{2}\) is "new" then 6:\(E^{\prime}\gets E\cup\{(v_{1},v_{2}),(v_{2},v_{1})\}\) 7: set the flag "old" for all vertices in \(U\) 8:return\(E^{\prime}\) ``` **Algorithm 2**NNDescentJoin(\(G\)) ``` 0:\(\text{vertex}\ u\in V\), \(\text{neighbor candidates}\ U\subset V\) 0:selected neighbors \(U^{\prime}\subset U\) 1:sort \(v\in U\) in ascending order of \(\delta(u,v)\) 2:\(U^{\prime}\leftarrow\emptyset\) 3:for\(v\in U\)do 4:\(f\leftarrow\text{true}\) 5:for\(w\in U^{\prime}\)do 6:if\(\delta(u,v)\geq\delta(v,w)\)then 7:\(f\leftarrow\text{false}\) 8: break 9:if\(f\)then 10:\(U^{\prime}\gets U^{\prime}\cup\{v\}\) 11:return\(U^{\prime}\) ``` **Algorithm 3**NNGStrategy(\(u,U\)) ## 3. Preliminary ### NN-Descent NN-Descent ((Han et al., 2017)) is a fast algorithm for constructing approximate K-NN graphs. The basic idea of NN-Descent is that neighbors of neighbors are likely to be neighbors again. NN-Descent first initializes the graph randomly. It then repeats the join step to find new neighbor candidates and the update step to select the neighbors from the candidates. Figure 1(a) shows an example of the join operation. For speed, the join operation examines all pairs around an arbitrary vertex instead of checking the neighbors of the neighbors. The vertices \(v_{1},v_{2},\) and \(v_{3}\) in Figure 1(a) are neighbors of a neighbor via \(u\). Therefore, the join algorithm adds a bi-directional edge between each vertex pair \((v_{i},v_{j})\) (\(1\leq i<j\leq 3\)). Algorithm 2 is a pseudo-code for the NN-Descent join operation. The algorithm takes graph \(G=(V,E)\) and returns the candidates of new edges. Each loop after Line 2 bitrectionally adds edges for each pair of \(u\)'s neighbors. NN-Descent assigns a flag to each neighbor to determine if it is newly added in the most recent step. Each join step adds an edge between neighbor pairs only if either of the flags is "new" (L5). This method allows the join step to check each pair of neighbors only once, speeding up the join step. Since the proposed method incorporates the flags, we describe the algorithm in detail again in Section 4.2. ### RNG Strategy The RNG Strategy ((Han et al., 2017; Koshino et al., 2017; Koshino et al., 2017)) is a method for selecting edges of a graph valid for ANNS. Figure 1(b) shows an image of the RNG Strategy. 
RNG Strategy reduces edges so that any two neighbors \(v\) and \(w\) of each vertex \(u\) satisfy the following inequalities: \[\delta(u,v)<\delta(v,w)\wedge\delta(u,w)<\delta(v,w) \tag{2}\] Here, \(\delta(u,v)\) is the distance between \(u\) and \(v\): \[\delta(u,v)=\text{dist}(\mathbf{x}_{u},\mathbf{x}_{v}) \tag{3}\] For example, vertices \(v_{1}\) and \(v_{2}\) in Figure 1(b) do not satisfy \(\delta(u,v_{2})\prec\delta(v_{2},v_{1})\), so the algorithm deletes the edge from \(u\) to \(v_{2}\). Intuitively, when \(v\) and \(w\) are too close to each other, it is likely that an edge already exists between \(v\) and \(w\). Then, if there is an edge from \(u\) to either \(v\) or \(w\), the other is also reachable from \(u\). Algorithm 3 is a pseudocode for RNG Strategy. The input is neighboring candidates \(U\) of vertex \(u\), and the output is selected neighbors \(U^{\prime}\). First, the algorithm sorts \(U\) in ascending order of distance to \(u\) (L1). Then, for each vertex \(v\) in \(U\), the algorithm determines whether output candidates set \(U^{\prime}\) includes \(v\) (L3-8). Specifically, for each vertex \(w\) already added to \(U^{\prime}\), it checks whether \(v\) satisfies the constraint \(\delta(u,v)<\delta(v,w)\). If \(v\) passes the check, the algorithm adds \(v\) to \(U^{\prime}\) (L9-10). ## 4. Method ### Motivation We categorize conventional graph-based ANNS methods into two main approaches: (1) a refinement-based approach and (2) a direct approach. The refinement-based approach first constructs an approximate K-NN graph and then refines it to obtain a final Figure 1. Comparison of methods for constructing graph indices. Vertices \(v_{1}\), \(v_{2}\), and \(v_{3}\) in the figure are neighbors of vertex \(u\). The indexes indicate the order of their distance from \(u\). (a) NN-Descent is a method for constructing approximate K-NN graphs step-by-step by collecting neighbors of neighbors as candidates for new neighbors. For example, vertices \(v_{1}\) and \(v_{2}\) are neighbors of a neighbor via \(u\). NN-Descent adds a new bi-directional edge between \(v_{1}\) and \(v_{2}\). (b) RNG Strategy is one of the methods to select edges essential for the search. In the figure, RNG Strategy removes the edge to \(v_{2}\) because \(v_{1}\) and \(v_{2}\) are close enough. (c) RNN-Descent, the proposed method, combines the features of NN-Descent and RNG Strategy. In the figure, it removes the edge \((u,v_{2})\) by RNG Strategy and adds a new edge \((v_{1},v_{2})\) instead. graph-based index. However, constructing the K-NN graph is time-consuming and increases the overall index construction time. On the other hand, the direct approaches construct the graph-based index without going through the K-NN graph by solving ANNS on the index under construction. However, this approach is also slow because the accuracy of the ANNS must be high to construct a graph-based index with good performance. We propose a new graph construction algorithm, Relative NN-Descent (RNN-Descent), to solve the above problems. RNN-Descent is faster than the refinement-based approach because it does not go through the K-NN graph. In addition, the proposed method can construct a high-performance graph-based index in less time than the direct approach because it constructs the index without ANNS. The technical core of the proposed method is the combination of two algorithms mentioned in Section 3: NN-Descent and RNG Strategy. 
Akin to NN-Descent, RNN-Descent constructs a graph-based index by incrementally improving a randomly initialized graph. However, the update algorithm of RNN-Descent simultaneously performs an edge-adding operation derived from the NN-Descent and an edge-removing operation based on the RNG Strategy. This approach allows the proposed method to directly construct the graph without finding neighbor candidates with ANNS. In addition, the proposed update algorithm naturally guarantees graph connectivity, which is essential for search performance. Section 4.2 describes the details of the proposed neighbors updating algorithm, and Section 4.3 discusses the reverse edges addition algorithm to avoid suboptimal graphs. Finally, Section 4.4 describes the overall construct and search algorithm. ### Updating neighbors This section describes the algorithm for updating the neighbors. The idea of the proposed method is to simultaneously perform the neighborhood update algorithm of NN-Descent and the edge selection algorithm of the RNG Strategy. Figure 1 shows how the proposed algorithm works. We consider neighbors of a vertex \(u\). Normal NN-Descent adds an edge between any two neighborhoods. For example, in Figure 1(a) NN-Descent adds bidirectional edges between \(v_{1},v_{2}\), and \(v_{3}\). However, adding edges between every neighbor pair is useless in constructing a graph-based index. It is because some edges will be eliminated by algorithms such as RNG-Strategy, as the conventional refinement-based approach does. Therefore, the proposed method uses the RNG Strategy concept to add only the necessary edges. According to RNG Strategy, the edge \((u,v_{2})\) in Figure 1(b) is not necessary because of the inequality \(\delta(u,v_{2})>\delta(v_{2},v_{1})\). The proposed method removes redundant edges and adds proper edges simultaneously. In Figure 1(c), the algorithm removes the edge \((u,v_{2})\) and then inserts the edge \((v_{2},v_{1})\) instead. That is, the proposed method combines NN-Descent and RNG Strategy. Our neighborhood update algorithm also keeps the graph's connectivity, which is important to improve the performance of ANNS. For instance, in Figure 1(c), \(v_{2}\) is reachable from \(u\) before and after the update algorithm. ``` 0:graph \(G=(V,E)\), \(R\in\mathbb{Z}\) 1:\(E\gets E\cup\{(u,u)|(u,v)\in E\}\) 2:set flags of new neighbors to "new" 3:for\(v\in V\)do 4:\(E_{u}\leftarrow\{(v,u)|(v,u)\in E\}\) 5: remove top-\(R\) shortest edges from \(E_{u}\) 6:\(E\gets E\setminus E_{u}\) 7:for\(v\in V\)do 8:\(E_{u}\leftarrow\{(u,v)|(u,v)\in E\}\) 9: remove top-\(R\) shortest edges from \(E_{u}\) 10:\(E\gets E\setminus E_{u}\) ``` **Algorithm 5**AddReverseEdges\((G,R)\) ``` 0:graph \(G=(V,E)\), \(R\in\mathbb{Z}\) 1:\(E\gets E\cup\{(u,u)|(u,v)\in E\}\) 2:set flags of new neighbors to "new" 3:for\(v\in V\)do 4:\(E_{u}\leftarrow\{(v,u)|(v,u)\in E\}\) 5: remove top-\(R\) shortest edges from \(E_{u}\) 6:\(E\gets E\setminus E_{u}\) 7:for\(v\in V\)do 8:\(E_{u}\leftarrow\{(u,v)|(u,v)\in E\}\) 9: remove top-\(R\) shortest edges from \(E_{u}\) 10:\(E\gets E\setminus E_{u}\) ``` **Algorithm 6**RNN-Descent\((S,R,T_{1},T_{2})\) Algorithm 4 is a pseudo code for a neighborhood update algorithm. The algorithm takes graph \(G=(V,E)\) as input and updates the edge set \(E\subset V\times V\). Most of the algorithm is the same as the RNG Strategy. One of the differences is in Line 11. 
If \(u\)'s neighbor \(v\) does not satisfy the inequality for some selected vertex \(w\), the algorithm removes edge \((u,v)\) and inserts the edge \((v,w)\) instead. The algorithm adds no edges if the edge \((w,v)\) already exists. However, \(w\) is still reachable from \(u\) in this case. Another difference is the introduction of a flag to determine if each neighbor is newly added in the last iteration. This technique derives from the original NN-Descent. If both flags of vertices \(v\) and \(w\) are "old," the algorithm skips to calculate the distance between them (L5-6). It is because if \(v\) and \(w\) are old neighbors, the algorithm has already checked whether \(v\) and \(w\) satisfy the RNG inequality. At the end of the iteration, the algorithm sets the flag of all neighbors in \(U^{\prime}\) to "old" (L15). ### Adding reverse edges The problem with the update algorithm in Section 4.2 is that the constructed graph will likely fall into a local optimum with low performance. Here, a local optimum means that all neighbors satisfy the conditions of the RNG Strategy, and the update algorithm will not update edges anymore. Suboptimal graphs have long average edge distances, leading to poor search performance. Our solution is to add reverse edges to the suboptimal graph. Adding a new edge that does not satisfy Eq 2 allows the update algorithm to restart. Also, since the graph's reverse edges are likely shorter than randomly chosen edges, they are suitable for converging the graph to a better solution. Algorithm 5 is a pseudo-code for adding the reverse edges. Here, the input \(R\in\mathbb{Z}\) is a parameter that controls the number of reduced edges. First, the algorithm adds the reverse edges to the current edge set \(E\) (L1). Then, it removes some long edges from \(E\) to prevent the number of edges from increasing too much. Specifically, the algorithm reduces the edges so that the in-degree (L3-5) and out-degree are less than or equal to \(R\) (L6-8). ### Overall algorithm Algorithm 6 is the pseudo-code for the entire RNN-Descent algorithm. First, the algorithm initializes the graph randomly (L1). Here, \(S\in\mathbb{Z}\) is the out-degree of the initial graph. Each step in the subsequent loop improves the initialized graph \(G\) step-by-step. The algorithm in Section 4.2 updates the edge set \(T_{2}\) times (L4-5). Then, the algorithm in Section 4.3 adds reverse edges to prevent the graph from falling suboptimal (L6-7). After repeating this sequence of steps \(T_{1}\) times, the algorithm outputs the final graph (L8). Finally, we describe the search algorithm for the constructed indexes. RNN-Descent does not limit the out-degree of the constructed graph. Instead, the search algorithm limits the number of degrees. We replace L4 in Algorithm 1 as follows: \[U\leftarrow\text{top}\;K\;\text{nearest points to }u\text{ in }\{v|(u,v)\in E\} \tag{4}\] Here, \(K\in\mathbb{Z}\) is a parameter that determines the maximum out-degree. This algorithm allows users to change the maximum out-degree without reconstructing the graph. The optimal degree depends on the dataset, but it is difficult to know this before the construction. In contrast, the proposed method can dynamically determine the optimal out-degree during the search. Figure 3. Construction time for graph-based ANNS methods. Figure 2. Search performance for graph-based ANNS methods. ## 5. Experiments ### Settings This experiment measures the construction time and search performance of each graph-based method. 
The search accuracy metric is Recall@1 (R@1), where R@1 is the percentage of queries that find the correct nearest neighbor. We also use queries per second (QPS) to measure search speed. In general, ANNS methods have parameters specified during a search, which control the balance between speed and accuracy. We evaluate each method by plotting R@1 and QPS on a plane while varying the search parameters. We compared the following algorithms: Figure 4. In-degree distribution. Figure 5. Out-degree distribution. Figure 6. Search performance for various \((T_{1},T_{2})\). * NN-Descent (Han et al., 2017): An approximate K-NN graph. We set \(K=64,S=10,L=114,R=100,\text{iter}=10\). * NSG (Gil et al., 2018): The SOTA of refinement-based approach. NSG uses NN-Descent to construct a K-NN graph. We set \(R=32,L=64,C=132\). The parameters of NN-Descent are same as above. * HNSW (Han et al., 2019): The SOTA of the direct approach. We set \(M=32,\text{efC}=500\). * RNN-Descent: The proposed method. We set \(S=20,R=96,T_{1}=4,T_{2}=15\). We used the Faiss implementation for comparison. We conducted the experiments on an AWS c6i.4xlarge instance (16 vCPUs, 32 GB memory). We set the number of threads to 16. Table 1 summarizes the properties of the datasets we used. ### Comparison to other methods Figure 2 and 3 shows each graph-based method's search performance and construction time. Note that we plot only Pareto-optimal points. The search performance of the proposed method was comparable to the existing SOTA method. On the other hand, the construction time was the shortest among all methods. We emphasize that the construction speed of the proposed method was faster than that of NN-Descent. This result means that existing methods of the refinement-based approach cannot construct the index faster than the proposed method, at least as long as it uses NN-Descent. On the other hand, HNSW belongs to the direct approach and may be faster than the K-NN graph. However, experimental results show that HNSW has the slowest construction time among all the methods. ### Degree distribution Figures 4 and 5 show the distributions of the in-degree and out-degree of the graphs constructed by each method. The proposed method limits the in-degree of the graph to be less than or equal to \(R\). However, the average degree was much smaller than \(R\), around 20, and this value was about the same as existing methods. The proposed method has comparable memory efficiency to the existing methods because the memory consumption of the index is proportional to the average degree of the graph. The degree distributions for SIFT1M and Deep1M were similar to those of NSG. Although the proposed method does not limit the out-degree, the maximum out-degree was around 150. These results demonstrate that the proposed method can automatically adjust the out-degree for relatively simple datasets. On the other hand, for the GIST1M dataset, some vertices have a very large out-degree. These vertices slow the search because the search algorithm checks many neighbors when it visits them. Therefore, the proposed method dynamically limits the out-degree during the search to avoid checking too many neighbors. In addition, the input degree of the proposed method has more concentrated peaks than the input degree of other methods. ### Ablation study Adding reverse edgesFigures 6 and 7 show the change in search performance and construction speed when \(T_{1}\) and \(T_{2}\) are changed. We keep the total number of iterations \(T_{1}T_{2}\) constant. 
We also set \(S=20\) and \(R=96\) for all experiments. The case \(T_{1}=1\) is where the algorithm adds no reverse edges. In this case, the search performance was the lowest of all settings. This result indicates that adding reverse edges is effective in improving search performance. As \(T_{1}\) increases, the search performance improves while the construction time increases. This result shows that \(T_{1}\) controls search performance and construction time trade-off. Limitation on out-degree.Figure 8 shows the change in search performance for different \(K\). We set \(K=16,32,48,64,96,\textit{and}\ \infty\). We observe the best \(K\) is different whether R@1 was greater than approximately 0.95. First, we examine the case when R@1 is less than 0.95. The optimal \(K\) was 16 for the SIFT1M and Deep1M \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Dimensions & \#Bases & \#Queries \\ \hline SIFT1M (Gil et al., 2017) & 128 & 1,000,000 & 10,000 \\ GIST1M (Gil et al., 2018) & 960 & 1,000,000 & 1,000 \\ Deep1M (Gil et al., 2018) & 96 & 1,000,000 & 10,000 \\ SIFT20M (Gil et al., 2018) & 128 & 20,000,000 & 10,000 \\ \hline \hline \end{tabular} \end{table} Table 1. The properties of datasets. Figure 7. Construction time for various \((T_{1},T_{2})\). datasets and 32 for GIST1M. Next, we observe the case when R@1 > 0.95. For SIFT1M and Deep1M, the search performance was similar if \(K\geq 32\). While for GIST1M, the best \(K\) were 48 and 64. These results indicate that \(K\) should be small when speed is a priority, and \(K\) should be large when accuracy is a priority. In addition, for GIST1M, setting \(K\) to \(\infty\) resulted in significant performance degradation due to the significantly large out-degree, as seen in Section 5.3. By setting \(K\) appropriately, we can avoid the problem of poor search performance. We emphasize that the user can know the optimal \(K\) after index construction. Thus, the proposed method is more robust to changes in the dataset than the conventional methods that require setting the maximum out-degree before construction. ### Experiment for large datasets We experimented with index construction on a large dataset. We used the SIFT20M dataset, the first 20 million database vectors extracted from the beginning of the SIFT1B (Kumar et al., 2017) dataset (1 billion vectors, 128 dimensions). We conducted the experiments on an AWS c6i.12xlarge (48 vCPUs, 96GB memory) instance. We set the number of threads to 48. Figure 9 shows the search performance and construction time on the SIFT20M dataset. We compared the proposed method to NSG. The parameters of each method are equal to those described in Section 5.1. First, the proposed method is more than twice as fast as NSG in index construction. On the other hand, the proposed method has comparable search performance to NSG. ## 6. Conclusion This paper proposes RNN-Descent, a new graph-based ANNS index construction algorithm. RNN-Descent combine NN-Descent and RNG Strategy. It simultaneously adds edges based on RNN-Descent and removes edges based on RNG Strategy. Experimental results show that RNN-Descent significantly accelerates index construction while maintaining performance comparable to existing SOTA methods. For example, experiments on the GIST1M dataset show that RNN-Descent constructs the index approximately twice as fast as NSG. Our source code is publicly available on [https://github.com/mti-lab/rnn-descent](https://github.com/mti-lab/rnn-descent). 
#### Acknowledgement This work was supported by JST AIP Acceleration Research JPMJCR23U2, Japan.
2309.09856
Polarized Hardy--Stein identity
We prove the Hardy--Stein identity for vector functions in $L^p(\mathbb R^d;\mathbb R^n)$ with $1<p<\infty$ and for the canonical paring of two real functions in $L^p(\mathbb R^d)$ with $2\le p<\infty$. To this end we propose a notion of Bregman co-divergence and study the corresponding integral forms.
Krzysztof Bogdan, Michał Gutowski, Katarzyna Pietruska-Pałuba
2023-09-18T15:16:38Z
http://arxiv.org/abs/2309.09856v1
# Polarized Hardy-Stein identity ###### Abstract. We prove the Hardy-Stein identity for vector functions in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) with \(1<p<\infty\) and for the canonical paring of two real functions in \(L^{p}(\mathbb{R}^{d})\) with \(2\leq p<\infty\). To this end we propose a notion of Bregman co-divergence and study the corresponding integral forms. Key words and phrases:Bregman co-divergence, Markovian semigroup, calculus in \(L^{p}\) 2010 Mathematics Subject Classification: Primary 46E35; Secondary 31C05 The research was supported by the NCN grant 2018/31/B/ST1/03818 Introduction Let \(\mathcal{F}_{p}(\mathbb{R}^{d})\) be a smooth smooth \(p\)-dimensional space with smooth boundary \(\partial\mathcal{F}_{p}(\mathbb{R}^{d})\). We consider the following elliptic operator \[\mathcal{F}_{p}(\mathbb{R}^{d}):=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \mathcal{F}_{p}(\mathbb{R}^{d})\,\mathrm{d}x\,\mathrm{d}y, \tag{1.1}\] where \(\mathcal{F}_{p}(\mathbb{R}^{d})\) is the _Brownian derivative_ of \(\mathbb{R}^{d}\). The _Brownian derivative_ of \(\mathbb{R}^{d}\) is the _Brownian derivative_ of \(\mathbb{R}^{d}\). particular Lemma B.5 gives the derivative of \(\int_{\mathbb{R}^{d}}P_{t}fP_{t}g|P_{t}g|^{p-2}\,\mathrm{d}x\). In Appendix C we discuss convexity properties related to the Bregman co-divergence \(\mathcal{J}_{p}\). In Appendix D we give a simpler proof of Theorem 4.1, but only for \(p\geq 3\). In Appendix F, complementing Appendix E, we discuss the \(L^{p}\) generator of the Gaussian semigroup. For technical reasons, our main results are restricted to a class of convolution Markovian semigroups on \(\mathbb{R}^{d}\), but some arguments are presented in a more general setting and further extensions are forthcoming. For instance, Gutowski [20] and Gutowski and Kwasnicki [21] extend (1.1) to general symmetric Markovian semigroups with nonlocal Dirichlet forms; see also Bogdan, Kutek, and Pietruska-Paluba [9] for a recent probabilistic approach, based on stochastic integrals. **Acknowledgements.** We thank Wlodzimierz Bak, Bartomiej Dyda, Mateusz Kwasnicki, Agnieszka Kalamajska, and Bartomiej Wrobel for helpful discussions. ## 2. Preliminaries ### Standing assumptions All the considered sets, functions, and measures are assumed Borel. For nonnegative functions \(f\) and \(g\), we write \(f(x)\asymp g(x)\) to indicate that there exist _constants_, i.e., numbers \(0<c\leq C<\infty\) such that \(cf(x)\leq g(x)\leq Cf(x)\) for all the considered arguments \(x\). Without warning, the symbols \(c\) or \(C\) may denote different constants even within a single line of text. The symbol \(:=\) means definition, e.g., \(a\lor b:=\max\{a,b\}\), \(a\wedge b:=\min\{a,b\}\), \(a_{+}:=a\lor 0\), and \(a_{-}:=(-a)\lor 0\). We denote the Euclidean norm of a vector \(z\in\mathbb{R}^{n}\) as \(|z|\) and the standard scalar product of vectors \(w\) and \(z\) in \(\mathbb{R}^{n}\) as \((w,z)\) or \(w\cdot z\). The unit sphere in \(\mathbb{R}^{n}\) centered at the origin is denoted by \(\mathbb{S}^{n-1}\). As usual, \(\|f\|_{L^{q}(\mathbb{R}^{d})}\) denotes the \(L^{q}(\mathbb{R}^{d})\) norm of the (extended real-valued) function \(f\), \(1\leq q\leq\infty\). More specifically, \(\|f\|_{L^{q}(\mathbb{R}^{d})}:=\big{(}\int_{\mathbb{R}^{d}}|f(x)|^{q}\,\mathrm{ d}x\big{)}^{1/q}\) for \(1\leq q<\infty\), where \(\mathrm{d}x\) refers to the Lebesgue measure on \(\mathbb{R}^{d}\) and \(\|f\|_{L^{\infty}(\mathbb{R}^{d})}:=\operatorname{ess\,sup}|f|\). Let \(d=1,2,\ldots\). 
Consider a symmetric, absolutely continuous Levy measure \(\nu\) on the Euclidean space \(\mathbb{R}^{d}\). Thus, \(\nu(\mathrm{d}z)=\nu(z)\,\mathrm{d}z\), where \(\nu\colon\mathbb{R}^{d}\setminus\{0\}\to(0,\infty)\), \(\nu(-z)=\nu(z)\), \(z\in\mathbb{R}^{d}\setminus\{0\}\), and \[\int_{\mathbb{R}^{d}}\big{(}|z|^{2}\wedge 1\big{)}\,\nu(z)\,\mathrm{d}z<\infty.\] The corresponding Levy-Khinchine exponent is \[\psi(\xi):=\int_{\mathbb{R}^{d}}\left(1-\cos(\xi\cdot x)\right)\nu(x)\,\mathrm{ d}x,\quad\xi\in\mathbb{R}^{d}. \tag{2.1}\] We further assume the following Hartman-Wintner condition on \(\psi\) (and \(\nu\)): \[\lim_{|\xi|\to\infty}\frac{\psi(\xi)}{\log|\xi|}=\infty. \tag{2.2}\] In particular, \(\int_{\mathbb{R}^{d}}\nu(z)\,\mathrm{d}z=\infty\). This gives rise to a convolution semigroup of probability densities \(p_{t}\) by the Levy-Khintchine formula, or Fourier inversion, as follows: \[p_{t}(x):=(2\pi)^{-d}\int_{\mathbb{R}^{d}}e^{-i\xi\cdot x}e^{-t\psi(\xi)}\, \mathrm{d}\xi,\quad t>0,\ x\in\mathbb{R}^{d}. \tag{2.3}\] The function \(p_{t}(x)\) is continuous and attains its maximum at \(x=0\), which is \[p_{t}(0)=(2\pi)^{-d}\int_{\mathbb{R}^{d}}e^{-t\psi(\xi)}\,\mathrm{d}\xi,\quad t>0.\] By (2.2), \(p_{t}(0)\) is finite for every \(t>0\) and, by the Dominated Convergence Theorem, \(p_{t}(0)\) converges to zero as \(t\to\infty\), so \(\left\|p_{t}\right\|_{L^{\infty}(\mathbb{R}^{d})}\to 0\). We shall also write \(p_{t}(x,y):=p_{t}(y-x)\) and \(\nu(x,y):=\nu(y-x)\). Note that \(p_{t}(x,y)\) is a transition density of a pure-jump Levy stochastic process \(\{X_{t},t\geq 0\}\) in \(\mathbb{R}^{d}\) with Levy-Khintchine exponent \(\psi\) (see Sato [29]) and \(\nu(x,y)\) is the kernel of the corresponding Dirichlet form; see, e.g., Fukushima, Oshima, and Takeda [19] and (2.24). (In the following discussion, the process does not play a significant role.) Encouraged by [8], we also assume the following technical conditions: ( **P1)** \[p_{t}(x,y)/t\leq c\nu(x,y),\quad t>0,\ x,y\in\mathbb{R}^{d},\] with some constant \(c\), and ( **P2)** \[p_{t}(x,y)/t\to\nu(x,y)\ \text{as}\ t\to 0^{+},\ x,y\in\mathbb{R}^{d}.\] For instance, the transition density corresponding to the fractional Laplacian satisfies (**P1**) and (**P2**); see examples provided by Bogdan, Grzywny, and Ryznar [7, Corollary 23] and Cygan, Grzywny, and Trojan [16, Proof of Theorem 6]. The conditions are convenient in limiting procedures based on the Dominated Convergence Theorem, but they can certainly be relaxed, as evidenced by Appendix E, [20], [21], and [9]. ### Elementary functions and inequalities Throughout we use the notation \[a^{\langle\kappa\rangle}:=\left|a\right|^{\kappa}\operatorname{sgn}a=a|a|^{ \kappa-2},\quad a,\kappa\in\mathbb{R},\] where \(0^{\langle\kappa\rangle}:=0\) and, as usual, \(\operatorname{sgn}0=0\), \(0^{0}:=1\), \(0^{\kappa}:=\infty\) for \(\kappa<0\), and \(0\cdot\infty:=0\). Note that \[(|x|^{\kappa})^{\prime}=\kappa x^{\langle\kappa-1\rangle}\quad\text{if $x\in \mathbb{R}$ and $\kappa>1$ or $x\in\mathbb{R}\setminus\{0\}$ and $\kappa\in\mathbb{R}$.}\] Furthermore, \[\left(x^{\langle\kappa\rangle}\right)^{\prime}=\kappa|x|^{\kappa-1}\quad \text{if $x\in\mathbb{R}$ and $\kappa>1$ or $x\in\mathbb{R}\setminus\{0\}$ and $\kappa\in\mathbb{R}$.}\] This has a vector counterpart: for \(\kappa>0\), we let \[z^{\langle\kappa\rangle}:=|z|^{\kappa-1}z,\quad z\in\mathbb{R}^{n}, \tag{2.4}\] again with the convention \(0^{\langle\kappa\rangle}:=0\). 
Note that

\[\nabla|z|^{\kappa}=\kappa z^{\langle\kappa-1\rangle}\quad\text{if $z\in\mathbb{R}^{n}$ and $\kappa>1$ or $z\in\mathbb{R}^{n}\setminus\{0\}$ and $\kappa\in\mathbb{R}$.} \tag{2.5}\]

Furthermore, the Jacobi matrix \(J_{\langle\kappa\rangle}\) for the mapping \(z\mapsto z^{\langle\kappa\rangle}\) equals

\[J_{\langle\kappa\rangle}(z)=|z|^{\kappa-1}\left((\kappa-1)\left(\frac{z}{|z|}\otimes\frac{z}{|z|}\right)+\mathrm{Id}\right)\in\mathbb{R}^{n\times n}\quad\text{ if $z\in\mathbb{R}^{n}\setminus\{0\}$} \tag{2.6}\]

and we let \(J_{\langle\kappa\rangle}(0):=0\). In the following, unless otherwise specified, we consider exponents \(p\in(1,\infty)\).

**Definition 2.1**.: The _Bregman divergence_ \(\mathcal{F}_{p}\colon\mathbb{R}^{n}\times\mathbb{R}^{n}\to[0,\infty)\) is given by

\[\mathcal{F}_{p}(w,z):=|z|^{p}-|w|^{p}-pw^{\langle p-1\rangle}\cdot(z-w) \tag{2.7}\]

and the _symmetrized Bregman divergence_ is

\[\mathcal{H}_{p}(w,z):=\frac{1}{2}\left(\mathcal{F}_{p}(w,z)+\mathcal{F}_{p}(z,w)\right)=\frac{p}{2}(z-w)\cdot\left(z^{\langle p-1\rangle}-w^{\langle p-1\rangle}\right). \tag{2.8}\]

For instance,

\[\mathcal{F}_{2}(w,z)=\mathcal{H}_{2}(w,z)=|z-w|^{2},\quad w,z\in\mathbb{R}^{n}. \tag{2.9}\]

Note that \(\mathcal{F}_{p}(w,z)=|z|^{p}\) if \(w=0\), but \(\mathcal{F}_{p}(w,z)=(p-1)|w|^{p}\) if \(z=0\). Of course, the mapping \(\mathbb{R}^{n}\ni z\mapsto|z|^{p}\) is convex, since \(p>1\). Its second-order Taylor remainder is \(\mathcal{F}_{p}\), so \(\mathcal{F}_{p}\geq 0\) and \(\mathcal{H}_{p}\geq 0\). Also, if \(Q\) is an \(n\times n\) orthogonal matrix, then

\[\mathcal{F}_{p}(Qw,Qz)=\mathcal{F}_{p}(w,z). \tag{2.10}\]

For notational convenience in what follows, we also let

\[\mathcal{G}_{p}(w,z):=|z-w|^{2}(|w|\vee|z|)^{p-2},\quad z,w\in\mathbb{R}^{n}. \tag{2.11}\]

We further introduce the second-order Taylor remainders of the vector functions \(\mathbb{R}^{n}\ni z\mapsto z^{\langle\kappa\rangle}\in\mathbb{R}^{n}\) for \(\kappa>1\). More precisely, we define \(\mathcal{F}_{\langle\kappa\rangle}:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) by

\[\mathcal{F}_{\langle\kappa\rangle}(w,z):=z^{\langle\kappa\rangle}-w^{\langle\kappa\rangle}-J_{\langle\kappa\rangle}(w)(z-w),\quad w,z\in\mathbb{R}^{n}.\]

Of course, the mapping \(\mathbb{R}\ni x\mapsto x^{\langle\kappa\rangle}\in\mathbb{R}\) is in general not convex and \(F_{\langle\kappa\rangle}\) changes sign. In fact, \(F_{\langle\kappa\rangle}(-a,-b)=-F_{\langle\kappa\rangle}(a,b)\). The scalar versions of the above functions (for \(n=1\)) are denoted \(F_{\kappa}\) (see Introduction), \(H_{\kappa}\), \(G_{\kappa}\), and \(F_{\langle\kappa\rangle}\), respectively. In particular,

\[F_{\langle\kappa\rangle}(a,b)=b^{\langle\kappa\rangle}-a^{\langle\kappa\rangle}-\kappa|a|^{\kappa-1}(b-a),\quad a,b\in\mathbb{R}.\]

The following estimates are quite important for our analysis.

**Lemma 2.1**.: _Let \(p\in(1,\infty)\). We have_

\[\mathcal{F}_{p}(w,z)\asymp\mathcal{G}_{p}(w,z),\quad w,z\in\mathbb{R}^{n}, \tag{2.12}\]

_and_

\[\mathcal{H}_{p}(w,z)\asymp\mathcal{G}_{p}(w,z),\quad w,z\in\mathbb{R}^{n}. \tag{2.13}\]

Of course, (2.13) follows from (2.12). It seems that (2.12) was first proved in Pinchover, Tertikas, and Tintarev [28, (2.19)], but one of the one-sided bounds was given earlier in Shafrir [30, Lemma 7.4] for \(p\geq 2\) and the other in Barbatis, Filippas, and Tertikas [2, Lemma 3.1].
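For orientation, we record a simple worked instance of Lemma 2.1; it is included here only as an illustration and is not used later. For \(n=1\) and \(p=4\), the Bregman divergence factors explicitly:

\[F_{4}(a,b)=b^{4}-a^{4}-4a^{3}(b-a)=(b-a)^{2}\left(b^{2}+2ab+3a^{2}\right),\quad a,b\in\mathbb{R}.\]

Since \(b^{2}+2ab+3a^{2}=(a+b)^{2}+2a^{2}\) (whose minimum in \(a\) is \(2b^{2}/3\), and which is also at least \(2a^{2}\)), one checks directly that

\[\tfrac{2}{3}(|a|\vee|b|)^{2}\leq b^{2}+2ab+3a^{2}\leq 6(|a|\vee|b|)^{2},\]

so \(F_{4}(a,b)\asymp G_{4}(a,b)\), in agreement with (2.12) for \(n=1\).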
The one-dimensional case, \(F_{p}(a,b)\asymp(b-a)^{2}(|a|\vee|b|)^{p-2}\), \(a,b\in\mathbb{R}\), is crucial in [3, Lemma 6], with [3, (10), (12)] therein being a part of the comparison for \(n=2\) (the setting of (2.12) is essentially two-dimensional). Optimal constants are known in some cases: for the lower bound of \(F_{p}\) with \(p\in(1,2)\) and for the upper bound with \(p\in(2,\infty)\); see [8] and [30, Lemma 7.4]. The quadratic factor in (2.11) is the reason why Bregman divergence \(\mathcal{F}_{p}\) is integrable against Levy measures in (1.3), which is crucial in analysis of nonlocal equations of parabolic and elliptic type; see the applications of [6, (2.14)] therein. See also [8] for a martingale setting. In passing we note another important estimate: \[\mathcal{F}_{p}(w,z)\asymp\left|z^{\langle p/2\rangle}-w^{\langle p/2\rangle} \right|^{2},\quad w,z\in\mathbb{R}^{n}. \tag{2.14}\] We refer to [8, Subsection 1.3] for a discussion of the estimate when \(n=1\); the case of arbitrary \(n=1,2,\ldots\) can be found in Huang [22]. Further estimates concerning functions \(\mathcal{F}_{p}\), \(\mathcal{F}_{\langle\kappa\rangle}\), and their cousins are collected and proved in Appendix A. ### Semigroups, generators, and forms The semigroup is defined by \[P_{t}f(x):=\int_{\mathbb{R}^{d}}f(y)p_{t}(x,y)\,\mathrm{d}y,\quad t>0,\] and by \(P_{0}f(x):=f(x)\), where \(x\in\mathbb{R}^{d}\) and \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) is nonnegative or integrable. We briefly mention a well known probability connection: \(P_{t}f(x)=\mathbb{E}_{x}f(X_{t})\), where \((X_{t},\mathbb{P}_{x})_{t\geq 0,x\in\mathbb{R}^{d}}\) is our Levy process, considered as a Markov process with transition density on \(\mathbb{R}^{d}\) given by \(p_{t}(\cdot,\cdot)\), and \(\mathbb{E}_{x}\) is the expectation with respect to the distribution \(\mathbb{P}_{x}\) of the process starting from \(x\). Since \(\int_{\mathbb{R}^{d}}p_{t}(x,y)\,\mathrm{d}y=1\) for \(t>0\), \(x\in\mathbb{R}^{d}\) (conservativeness of the semigroup \(P_{t}\)), and \(p_{t}\) is symmetric in \(x,y\), Fubini-Tonelli theorem yields \[\int_{\mathbb{R}^{d}}P_{t}f(x)\,\mathrm{d}x=\int_{\mathbb{R}^{d}}f(x)\, \mathrm{d}x. \tag{2.15}\] Recall that \(1<p<\infty\). It is well known that \((P_{t})_{t\geq 0}\) is a strongly continuous Markovian semigroup of symmetric operators on \(L^{p}(\mathbb{R}^{d})\); see for example [29, E 34.10]. For all \(x\in\mathbb{R}^{d}\) and \(f\in L^{p}(\mathbb{R}^{d})\), by (2.3) and Holder's inequality with exponents \(p\) and \(q=p/(p-1)\), we get \[|P_{t}f(x)| =\left|\int_{\mathbb{R}^{d}}f(y)p_{t}(x,y)\,\mathrm{d}y\right| \leq\|f\|_{L^{p}(\mathbb{R}^{d})}\left(\int_{\mathbb{R}^{d}}p_{t}(x,y)^{q}\, \mathrm{d}y\right)^{1/q}\] \[\leq\|f\|_{L^{p}(\mathbb{R}^{d})}\left(\sup_{x,y\in\mathbb{R}^{d }}p_{t}(x,y)^{q-1}\right)^{1/q}=\|f\|_{L^{p}(\mathbb{R}^{d})}\left\|p_{t}\right\| _{L^{\infty}(\mathbb{R}^{d})}^{1/p}\xrightarrow[t\to\infty]{}0. \tag{2.16}\] We also need the following maximal inequality of Stein for symmetric Markovian semigroups; see Stein [31, p. 73] and recall that \(1<p<\infty\). **Lemma 2.2** (Stein inequality).: _If \(f\in L^{p}(\mathbb{R}^{d})\), \(f^{*}(x):=\sup_{t\geq 0}|P_{t}f(x)|\), \(x\in\mathbb{R}^{d}\), then,_ \[\|f^{*}\|_{L^{p}(\mathbb{R}^{d})}\leq\frac{p}{p-1}\|f\|_{L^{p}(\mathbb{R}^{d} )}. \tag{2.17}\] By (2.17) and (2.16), the semigroup is _strongly stable_ in \(L^{p}(\mathbb{R}^{d})\): If \(f\in L^{p}(\mathbb{R}^{d})\), then \[\|P_{t}f\|_{L^{p}(\mathbb{R}^{d})}\to 0\text{ as }t\to\infty. 
\tag{2.18}\] Indeed, since for every \(x\in\mathbb{R}^{d}\) we have \(|P_{t}f(x)|\to 0\) and \(|P_{t}f(x)|\leq f^{*}(x)\) with \(f^{*}\in L^{p}(\mathbb{R}^{d})\), we get \(\|P_{t}f\|_{L^{p}(\mathbb{R}^{d})}\to 0\) by the Dominated Convergence Theorem. Let \(L\) be the generator of the semigroup \((P_{t})_{t\geq 0}\), when acting on \(L^{p}(\mathbb{R}^{d})\). Its natural domain, denoted \(\mathcal{D}_{p}(L)\), consists of those \(f\in L^{p}(\mathbb{R}^{d})\) for which there is a \(g\in L^{p}(\mathbb{R}^{d})\) such that \((P_{h}f-f)/h\to g\) in \(L^{p}(\mathbb{R}^{d})\) as \(h\to 0^{+}\); we then write \(Lf=g\). We next discuss issues related to the \(L^{p}\)-differentiability of semigroups. (To make the exposition self-contained, we include a primer on the \(L^{p}\) calculus in Appendix B.) Thus, for \(f\in L^{p}(\mathbb{R}^{d})\) and \(t\geq 0\), we write \(u(t):=P_{t}f\). Of course, \(u(t)\in L^{p}(\mathbb{R}^{d})\). Furthermore, if \(f\in\mathcal{D}_{p}(L)\) then \(u^{\prime}(t)=LP_{t}f=P_{t}Lf=Lu(t)\), \(t\geq 0\). By Lemma B.3 with \(n=1\), we obtain the following result. **Corollary 2.3**.: _Let \(f\in\mathcal{D}_{p}(L)\). If \(1<\kappa\leq p\) then:_ * \(|u(t)|^{\kappa}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _and_ (2.19) \[(|u(t)|^{\kappa})^{\prime}=\kappa u(t)^{\langle\kappa-1\rangle}u^{\prime}(t)= \kappa u(t)^{\langle\kappa-1\rangle}P_{t}Lf,\quad t\geq 0,\] * \(u^{\langle\kappa\rangle}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _and_ (2.20) \[(u(t)^{\langle\kappa\rangle})^{\prime}=\kappa|u(t)|^{\kappa-1}u^{\prime}(t)= \kappa|u(t)|^{\kappa-1}P_{t}Lf,\quad t\geq 0.\] Moreover, since \((P_{t})_{t\geq 0}\) is symmetric, it is an analytic semigroup on \(L^{p}(\mathbb{R}^{d})\) for \(p\in(1,\infty)\); see Liskevich and Perel'muter [25, Corollary 3.2]. Therefore, for all \(t>0\) and \(f\in L^{p}(\mathbb{R}^{d})\), \(\frac{\,\mathrm{d}}{\,\mathrm{d}t}P_{t}f=u^{\prime}(t)\) exists in \(L^{p}(\mathbb{R}^{d})\), so \(P_{t}f\in\mathcal{D}_{p}(L)\) and \(u^{\prime}(t)=LP_{t}f=Lu(t)\). As a special case of (1.3), in what follows we consider the integral form \[\mathcal{E}_{p}[u]=\frac{1}{p}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}F_{p}( u(x),u(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y. \tag{2.21}\] Of course, the form is well-defined (possibly infinite) for every \(u:\mathbb{R}^{d}\to\mathbb{R}\) because \(F_{p}\geq 0\). By the symmetry of \(\nu\), \[\mathcal{E}_{p}[u] = \frac{1}{p}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}H_{p}(u(x),u (y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[= \frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(u(y)-u(x)) \left(u(y)^{\langle p-1\rangle}-u(x)^{\langle p-1\rangle}\right)\nu(x,y)\, \mathrm{d}x\mathrm{d}y. \tag{2.22}\] The natural domain of \(\mathcal{E}_{p}\) is \[\mathcal{D}(\mathcal{E}_{p}):=\{u\in L^{p}(\mathbb{R}^{d}):\ \mathcal{E}_{p}[u]<\infty\}. \tag{2.23}\] When \(p=2\), we get the usual Dirichlet form of the semigroup, \[\mathcal{E}_{2}[u]=\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(u(y)- u(x))^{2}\nu(x,y)\,\mathrm{d}x\mathrm{d}y, \tag{2.24}\] with domain \(\mathcal{D}(\mathcal{E}_{2})\). We write \(\mathcal{E}:=\mathcal{E}_{2}\). For \(t>0\), \(u\in L^{p}(\mathbb{R}^{d})\), and \(v\in L^{q}(\mathbb{R}^{d})\), we define, as usual, \[\mathcal{E}^{(t)}(u,v):=\frac{1}{t}\langle u-P_{t}u,v\rangle=\frac{1}{t}\int_ {\mathbb{R}^{d}}(u(x)-P_{t}u(x))v(x)\,\mathrm{d}x. 
\tag{2.25}\]

Here and below we use the following notation for the canonical pairing:

\[\langle u,v\rangle:=\int_{\mathbb{R}^{d}}u(x)v(x)\,\mathrm{d}x.\]

The next result was established in [8, Lemma 7] for the fractional Laplacian, but since its proof requires only symmetry and the conditions (**P1**) and (**P2**), it applies verbatim in the present setting.

**Proposition 2.4**.: _Let \(p>1\). For every \(u\in L^{p}(\mathbb{R}^{d})\), we have_

\[\mathcal{E}_{p}[u]=\lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle}). \tag{2.26}\]

_Furthermore,_

\[\mathcal{D}(\mathcal{E}_{p}) = \{u\in L^{p}(\mathbb{R}^{d}):\sup_{t>0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})<\infty\} \tag{2.27}\]
\[= \{u\in L^{p}(\mathbb{R}^{d}):\text{ finite }\lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})\text{ exists}\}. \tag{2.28}\]

_For arbitrary \(u\colon\mathbb{R}^{d}\to\mathbb{R}\), we have_

\[\frac{4(p-1)}{p^{2}}\mathcal{E}[u^{\langle p/2\rangle}]\leq\mathcal{E}_{p}[u]\leq 2\mathcal{E}[u^{\langle p/2\rangle}] \tag{2.29}\]

_and \(\mathcal{D}(\mathcal{E}_{p})=\mathcal{D}(\mathcal{E})^{\langle 2/p\rangle}:=\{v^{\langle 2/p\rangle}:v\in\mathcal{D}(\mathcal{E})\}\). Finally, \(\mathcal{D}_{p}(L)\subset\mathcal{D}(\mathcal{E}_{p})\) and_

\[\mathcal{E}_{p}[u]=-\langle Lu,u^{\langle p-1\rangle}\rangle,\quad u\in\mathcal{D}_{p}(L). \tag{2.30}\]

The discussion extends to functions with values in \(\mathbb{R}^{n}\), \(n=1,2,\ldots\). Namely, let \(f_{1},\ldots,f_{n}\in L^{p}(\mathbb{R}^{d})\), so \(F:=(f_{1},\ldots,f_{n})\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). We denote

\[P_{t}F:=(P_{t}f_{1},\ldots,P_{t}f_{n}),\quad t\geq 0, \tag{2.31}\]

thus \(P_{t}F\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), \(t\geq 0\). If, furthermore, \(f_{1},\ldots,f_{n}\in\mathcal{D}_{p}(L)\), then we define

\[LF:=(Lf_{1},\ldots,Lf_{n}). \tag{2.32}\]

Then, letting \(U(t):=P_{t}F\), \(t\geq 0\), we get \(U^{\prime}(t)=LU(t)\) and the following multidimensional extension of Corollary 2.3, an easy consequence of Lemma B.3.

**Corollary 2.5**.: _Let \(n=1,2,\ldots\), \(f_{1},\ldots,f_{n}\in\mathcal{D}_{p}(L)\), \(F:=(f_{1},\ldots,f_{n})\), and \(U(t)=P_{t}F\), \(t\geq 0\). If \(1<\kappa\leq p\), then for \(t\geq 0\),_

* \(|U(t)|^{\kappa}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _with_ \[\left(|U(t)|^{\kappa}\right)^{\prime}=\kappa U(t)^{\langle\kappa-1\rangle}\cdot LU(t),\]
* \(U(t)^{\langle\kappa\rangle}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})\) _with_ \[\left(U(t)^{\langle\kappa\rangle}\right)^{\prime}=\left(J_{\langle\kappa\rangle}\circ U(t)\right)LU(t).\]

The following result will be useful in limiting procedures later on.

**Lemma 2.6**.: _[8, Lemma 6] If nonnegative functions \(f,f_{k}\colon\mathbb{R}^{d}\to\mathbb{R}\), \(k=1,2,\ldots\), satisfy \(f_{k}\leq cf\) and \(f=\lim_{k\to\infty}f_{k}\), then \(\lim_{k\to\infty}\int f_{k}\,\mathrm{d}\mu=\int f\,\mathrm{d}\mu\) for each measure \(\mu\)._

## 3. Hardy-Stein identity

Below we work under the assumptions on \(\nu\) formulated in Subsection 2.1. We will extend (1.1) to arbitrary dimension \(n=1,2,\ldots\). We recall that the proof given in [1, Theorem 3.2] for \(n=1\) relies on approximations and pointwise calculus in \(\mathbb{R}^{d}\). Here, instead, we use a more synthetic differential calculus in \(L^{p}\).

**Theorem 3.1**.: _Let \(p>1\), \(n=1,2,\ldots\), and \(F=(f_{1},\ldots,f_{n})\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\).
Then,_ \[\int_{\mathbb{R}^{d}}|F(x)|^{p}\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{R }^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F(x),P_{t}F(y))\nu(x,y)\, \mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{3.1}\] Proof.: Let first \(F=(f_{1},\ldots,f_{n})\in(\mathcal{D}_{p}(L))^{n}\) and \(0\leq t\leq T<\infty\). Then \(U(t):=P_{t}F\in(\mathcal{D}_{p}(L))^{n}\) and \(LP_{t}F=LU(t)=(LP_{t}f_{1},\ldots,LP_{t}f_{n})\). From Corollary 2.5, \(|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) and \((|U(t)|^{p})^{\prime}=pU(t)^{\langle p-1\rangle}\cdot LF(t)\). As \(f\mapsto\int_{\mathbb{R}^{d}}f\,\mathrm{d}x\) is a continuous linear functional on \(L^{1}(\mathbb{R}^{d})\), \[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}|U(t)|^{p} \,\mathrm{d}x = \int_{\mathbb{R}^{d}}\frac{\,\mathrm{d}}{\,\mathrm{d}t}|U(t)|^{p} \,\mathrm{d}x=\int_{\mathbb{R}^{d}}pU(t)^{\langle p-1\rangle}\cdot LU(t)\, \mathrm{d}x\] \[= \langle LU(t),pU(t)^{\langle p-1\rangle}\rangle. \tag{3.2}\] Since \(LU(t)=\lim_{h\to 0^{+}}(P_{h}U(t)-U(t))/h\) strongly in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), \(U(t)^{\langle p-1\rangle}\) belongs to the (dual) space \(L^{\frac{p}{p-1}}(\mathbb{R}^{d};\mathbb{R}^{n})\), and the semigroup \((P_{t})_{t\geq 0}\) is conservative, we get \[\langle LU(t),pU(t)^{\langle p-1\rangle}\rangle\] \[= \lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}pU(t)(x)^{ \langle p-1\rangle}\cdot(U(t)(y)-U(t)(x))\frac{p_{h}(x,y)}{h}\,\mathrm{d}x \mathrm{d}y\] \[= \lim_{h\to 0^{+}}\left[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}pP_ {t}F(x)^{\langle p-1\rangle}\cdot(P_{t}F(y)-P_{t}F(x))\frac{p_{h}(x,y)}{h}\, \mathrm{d}x\mathrm{d}y\right.\] \[+\left.\frac{1}{h}\int_{\mathbb{R}^{d}}|P_{t}F(x)|^{p}\,\mathrm{d }x-\frac{1}{h}\int_{\mathbb{R}^{d}}P_{h}(|P_{t}F|^{p})(x)\,\mathrm{d}x\right]\] \[= -\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \mathcal{F}_{p}(P_{t}F(x),P_{t}F(y))\frac{p_{h}(x,y)}{h}\,\mathrm{d}x \mathrm{d}y\] \[= -\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F (x),P_{t}F(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y. \tag{3.3}\] The last equality (3.3) is justified by Lemma 2.6, the nonnegativity of \(\mathcal{F}_{p}\), and assumptions (**P1**), (**P2**). Summarizing, \[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}|U(t)|^{p}\mathrm{d}x= -\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F(x),P_{t}F(y) )\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\] Since \(|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) on \([0,\infty)\) and the integration is a continuous linear functional on \(L^{1}(\mathbb{R}^{d})\), \(\int_{\mathbb{R}^{d}}|U(t)|^{p}\,\mathrm{d}x\) is continuously differentiable. Integrating from \(0\) to \(T\) we obtain \[\int_{\mathbb{R}^{d}}|F|^{p}\,\mathrm{d}x-\int_{\mathbb{R}^{d}}|U(T) |^{p}\,\mathrm{d}x = -\int_{0}^{T}\left(\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{ d}}|U(t)|^{p}\,\mathrm{d}x\right)\,\mathrm{d}t\] \[= \int_{0}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F} _{p}(P_{t}F(x),P_{t}F(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\] We let \(T\to\infty\) and obtain \(\int_{\mathbb{R}^{d}}|U(T)|^{p}\,\mathrm{d}x\to 0\) from the strong stability (2.18). We now relax the assumption \(f_{j}\in\mathcal{D}_{p}(L)\). Let \(F=(f_{1},\ldots,f_{n})\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) be arbitrary and let \(s>0\). 
Since \((P_{t})_{t\geq 0}\) is an analytic semigroup on \(L^{p}(\mathbb{R}^{d})\), \(P_{s}f_{j}\in\mathcal{D}_{p}(L)\) for all \(j=1,\ldots,n\), so \(U(s)\in(\mathcal{D}_{p}(L))^{n}\). By (3.1) and a change of variables, \[\int_{\mathbb{R}^{d}}|U(s)|^{p}\,\mathrm{d}x=\int_{s}^{\infty}\int_{\mathbb{R }^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_{t}F(x),P_{t}F(y))\nu(x,y)\, \mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{3.4}\] Let \(s\) decrease to \(0\). Since \(\mathcal{F}_{p}\geq 0\), the right-hand side of (3.4) increases to \(\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{F}_{p}(P_ {t}F(x),P_{t}F(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\). By the strong continuity of \((P_{t})_{t\geq 0}\) in \(L^{p}(\mathbb{R}^{d})\), \(P_{s}f_{j}\to f_{j}\), \(j=1,\ldots,n\), in \(L^{p}(\mathbb{R}^{d})\), so \(U(s)=P_{s}F\to F\) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), in particular \(\left\|U(s)\right\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{p}\to\left\|F\right\| _{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{p}\). The proof is complete. _Remark 3.2_.: Since \(\nu\) is symmetric, by (2.8) we get a symmetrized version of the Hardy-Stein identity for every \(F\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\): \[\int\limits_{\mathbb{R}^{d}}|F|^{p}\,\mathrm{d}x=\frac{p}{2}\int\limits_{0}^{ \infty}\!\!\int\limits_{\mathbb{R}^{d}}\!\!\int\limits_{\mathbb{R}^{d}}(P_{t}F (y)-P_{t}F(x))\!\cdot\!\Big{(}P_{t}F(y)^{\langle p-1\rangle}-P_{t}F(x)^{\langle p -1\rangle}\Big{)}\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\] ## 4. Polarized Hardy-Stein identity Having proved the Hardy-Stein identity for a vector of \(L^{p}(\mathbb{R}^{d})\) functions, we can establish a disintegration of \(\int_{\mathbb{R}^{d}}f(x)g(x)^{\langle p-1\rangle}\,\mathrm{d}x\) for \(f,g\in L^{p}(\mathbb{R}^{d})\) with \(p\in[2,\infty)\). To this end we introduce the function \(\mathcal{J}_{p}\colon\mathbb{R}^{2}\times\mathbb{R}^{2}\to\mathbb{R}\), defined as follows: \[\mathcal{J}_{p}(w,z)= \mathcal{J}_{p}(w_{1},w_{2};z_{1},z_{2}):=z_{1}z_{2}^{\langle p-1 \rangle}-w_{1}w_{2}^{\langle p-1\rangle}\] \[-w_{2}^{\langle p-1\rangle}(z_{1}-w_{1})-(p-1)w_{1}|w_{2}|^{p-2}( z_{2}-w_{2}), \tag{4.1}\] where \(w=(w_{1},w_{2})\), \(z=(z_{1},z_{2})\), and \(w_{1},w_{2},z_{1},z_{2}\in\mathbb{R}\). For instance, \[\mathcal{J}_{2}(w,z)=z_{1}z_{2}-w_{1}w_{2}-w_{2}(z_{1}-w_{1})-w_{1}(z_{2}-w_{2 })=(z_{1}-w_{1})(z_{2}-w_{2}). \tag{4.2}\] As complicated as it looks, \(\mathcal{J}_{p}\) is just the second-order Taylor remainder of the mapping \(\mathbb{R}^{2}\ni(z_{1},z_{2})\mapsto z_{1}z_{2}^{\langle p-1\rangle}\), when the argument changes from \(w\) to \(z\). Below we mostly apply \(\mathcal{J}_{p}\) to \(w_{1}=P_{t}f(x)\), \(w_{2}=P_{t}g(x)\), \(z_{1}=P_{t}f(y)\), and \(z_{2}=P_{t}g(y)\), so \(w\) corresponds to the argument \(x\) of the vector function \(\Phi=(f,g)\), \(z\) corresponds to \(y\), the subscript \(1\) indicates the first function, \(f\), and \(2\) indicates the second function, \(g\). Here is the main result of the paper, which we prove below in this section. **Theorem 4.1** (Polarized Hardy-Stein identity).: _Let \(p\geq 2\). For \(f,g\in L^{p}(\mathbb{R}^{d})\), denote_ \[\Phi(x):=(f(x),g(x))\quad\text{and}\quad P_{t}\Phi(x):=(P_{t}f(x),P_{t}g(x)), \quad t\geq 0,\;x\in\mathbb{R}^{d}. 
\tag{4.3}\] _Then,_ \[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|\mathcal{J}_{p}(P_{t} \Phi(x),P_{t}\Phi(y))|\,\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t<\infty \tag{4.4}\] _and_ \[\int_{\mathbb{R}^{d}}fg^{\langle p-1\rangle}\,\mathrm{d}x=\int_{0}^{\infty} \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t} \Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{4.5}\] Note that if \(w_{1}=w_{2}=:a\) and \(z_{1}=z_{2}=:b\), then \(\mathcal{J}_{p}(w,z)=F_{p}(a,b)\), so (4.5) with \(f=g\) agrees with (1.1), at least for \(p\geq 2\). _Remark 4.2_.: If \(p=2\) then (4.5) reads \[\int_{\mathbb{R}^{d}}fg\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}}[P_{t}f(x)-P_{t}f(y)][P_{t}g(x)-P_{t}g(y)]\nu(x,y)\, \mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{4.6}\] In this case (4.4) and (4.5) are obtained by polarization from the one-dimensional Hardy-Stein identity (1.1) and Cauchy-Schwarz inequality, by considering \(f+g\) and \(f-g\). Therefore below we let \(p>2\). Had \(\mathcal{J}_{p}\) been nonnegative, the proof of (4.5) would follow as that of (3.1). Unfortunately, this is not the case, so the proof is more complicated. Indeed, the function \((z_{1},z_{2})\mapsto z_{1}z_{2}^{\langle p-1\rangle}\) is not convex, even when restricted to \(z_{2}>0\). To see this, we compute its gradient and Hessian matrix for \(z_{2}>0\): \[\nabla\left(z_{1}z_{2}^{p-1}\right)=\begin{bmatrix}z_{2}^{p-1}\\ (p-1)z_{1}z_{2}^{p-2}\end{bmatrix},\] \[\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)=\begin{bmatrix}0&(p-1)z_{2}^{p-2}\\ (p-1)z_{2}^{p-2}&(p-1)(p-2)z_{1}z_{2}^{p-3}\end{bmatrix}. \tag{4.7}\] Thus, \(\det\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)=-(p-1)^{2}z_{2}^{2p-4}<0\), so the Hessian matrix \(\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)\) is not positive semi-definite and \(z_{1}z_{2}^{p-1}\) is not convex. We will rectify this situation by decomposing the mapping \[[0,\infty)\times\mathbb{R}\ni z=(z_{1},z_{2})\mapsto z_{1}z_{2}^{\langle p-1 \rangle}\] into a difference of two convex mappings. Then (the Taylor remainder) \(\mathcal{J}_{p}\) will be a difference of two nonnegative functions. To this end, we recall that \(a_{+}:=a\lor 0\), \(a_{-}:=(-a)\lor 0\) and introduce the functions: \[Y^{(+)}(z) := z_{1}\left((z_{2})_{+}\right)^{p-1}+|z|^{p},\] \[Y^{(-)}(z) := z_{1}\left((z_{2})_{-}\right)^{p-1}+|z|^{p},\quad z=(z_{1},z_{2} )\in\mathbb{R}^{2}.\] Lemma C.3 in Appendix C verifies that these functions are convex on \([0,\infty)\times\mathbb{R}\) indeed. Since \(p>2\), they are differentiable everywhere and their Taylor remainders are nonnegative on \([0,\infty)\times\mathbb{R}\). Let \(\mathcal{J}_{p}^{(+)}\) and \(\mathcal{J}_{p}^{(-)}\) be the second-order Taylor remainders of the differentiable mappings \(\mathbb{R}^{2}\ni z\mapsto z_{1}\left((z_{2})_{+}\right)^{p-1}\) and \(\mathbb{R}^{2}\ni z\mapsto z_{1}\left((z_{2})_{-}\right)^{p-1}\). 
Thus, for \(z_{1},z_{2},w_{1},w_{2}\in\mathbb{R}\),

\[\mathcal{J}_{p}^{(+)}(w,z)=z_{1}\left((z_{2})_{+}\right)^{p-1}-w_{1}\left((w_{2})_{+}\right)^{p-1}-\left((w_{2})_{+}\right)^{p-1}\left(z_{1}-w_{1}\right)-(p-1)w_{1}\left((w_{2})_{+}\right)^{p-2}\left(z_{2}-w_{2}\right)\]

and

\[\mathcal{J}_{p}^{(-)}(w,z)=z_{1}\left((z_{2})_{-}\right)^{p-1}-w_{1}\left((w_{2})_{-}\right)^{p-1}-\left((w_{2})_{-}\right)^{p-1}\left(z_{1}-w_{1}\right)+(p-1)w_{1}\left((w_{2})_{-}\right)^{p-2}\left(z_{2}-w_{2}\right).\]

Since \(z_{1}((z_{2})_{+})^{p-1}-z_{1}((z_{2})_{-})^{p-1}=z_{1}z_{2}^{\langle p-1\rangle}\), it follows that

\[\mathcal{J}_{p}=\mathcal{J}_{p}^{(+)}-\mathcal{J}_{p}^{(-)}=\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)-\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right), \tag{4.8}\]

where we consider \(\mathcal{F}_{p}\) given by (2.7) with \(n=2\) and we have \(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\geq 0\) and \(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\geq 0\) on \(\left([0,\infty)\times\mathbb{R}\right)^{2}\). Note also that, if we denote \(\bar{z}:=(z_{1},-z_{2})\), then

\[\mathcal{J}_{p}^{(+)}(\bar{w},\bar{z})=\mathcal{J}_{p}^{(-)}(w,z). \tag{4.9}\]

Here is a preliminary version of Theorem 4.1.

**Proposition 4.3**.: _For \(p>2\), \(f,g\in L^{p}(\mathbb{R}^{d})\), \(f\geq 0\), and \(\Phi(x)\), \(P_{t}\Phi(x)\) as in (4.3),_

\[\int_{\mathbb{R}^{d}}\left(f(g_{+})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(P_{t}\Phi(x),P_{t}\Phi(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t \tag{4.10}\]

_and_

\[\int_{\mathbb{R}^{d}}\left(f(g_{-})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right)\left(P_{t}\Phi(x),P_{t}\Phi(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t. \tag{4.11}\]

Proof.: We only prove (4.10) since (4.11) follows by substituting \(-g\) in place of \(g\), see (2.10) and (4.9). The proof of (4.10) is much like that of Theorem 3.1. We use the convexity of \(Y^{(+)}\), resulting in the nonnegativity of its Taylor remainder, the function \(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\). As before, we first consider \(f,g\in\mathcal{D}_{p}(L)\). Fix some \(0\leq t\leq T<\infty\). Let \(u(t):=P_{t}f\), \(v(t):=P_{t}g\), and \(U(t):=P_{t}\Phi=(u(t),v(t))\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{2})\). Actually, \(U(t)\in(\mathcal{D}_{p}(L))^{2}\). As seen in the proof of Theorem 3.1, the function \(t\mapsto|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) and \(\left(|U(t)|^{p}\right)^{\prime}=pU(t)^{\langle p-1\rangle}\cdot LU(t)\). Since \((v(t)_{+})^{p-1}=(|v(t)|^{p-1}+v(t)^{\langle p-1\rangle})/2\), from Corollary 2.3 with \(\kappa=p-1>1\), we obtain that \((v(t)_{+})^{p-1}\) is continuously differentiable in \(L^{\frac{p}{p-1}}(\mathbb{R}^{d})\) and

\[\left((v(t)_{+})^{p-1}\right)^{\prime}=\left(\frac{|v(t)|^{p-1}+v(t)^{\langle p-1\rangle}}{2}\right)^{\prime}=\frac{p-1}{2}\left(v(t)^{\langle p-2\rangle}Lv(t)+|v(t)|^{p-2}Lv(t)\right)=(p-1)(v(t)_{+})^{p-2}Lv(t).\]

By Lemma B.4, \(u(t)(v(t)_{+})^{p-1}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) and

\[\left(u(t)(v(t)_{+})^{p-1}\right)^{\prime}=(v(t)_{+})^{p-1}Lu(t)+(p-1)u(t)(v(t)_{+})^{p-2}Lv(t). \tag{4.12}\]

In particular, \(\left(u(t)(v(t)_{+})^{p-1}\right)^{\prime}\) is well-defined and continuous in \(L^{1}(\mathbb{R}^{d})\).
As in (3.2), \[W(t) :=\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^{d}} \left(u(t)(v(t)_{+})^{p-1}+|U(t)|^{p}\right)\,\mathrm{d}x=\int\limits_{ \mathbb{R}^{d}}\frac{\mathrm{d}}{\mathrm{d}t}\left[u(t)(v(t)_{+})^{p-1}+|U(t)|^ {p}\right]\,\mathrm{d}x \tag{4.13}\] \[=\langle Lu(t),(v(t)_{+})^{p-1}\rangle+\langle Lv(t),(p-1)u(t)(v( t)_{+})^{p-2}\rangle+\langle LU(t),pU(t)^{(p-1)}\rangle.\] Since the limits defining \(Lu\), \(Lv\) (respectively, \(LU\)) exist strongly in \(L^{p}(\mathbb{R}^{d})\) (respectively, in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{2})\)) and \((v(t)_{+})^{p-1}\), \(u(t)(v(t)_{+})^{p-2}\) (respectively, \(U(t)^{(p-1)}\)) belong to \(L^{q}(\mathbb{R}^{d})\) (respectively, to \(L^{q}(\mathbb{R}^{d};\mathbb{R}^{2})\)), we get \[W(t)=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \Big{(}(u(t)(y)-u(t)(x))(v(t)(x)_{+})^{p-1}\] \[\qquad\qquad+(p-1)(v(t)(y)-v(t)(x))u(t)(x)(v(t)(x)_{+})^{p-1}\] \[\qquad\qquad+p(U(t)(y)-U(t)(x))\cdot U(t)(x)^{(p-1)}\Big{)}\frac{ p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y.\] As \((P_{t})_{t\geq 0}\) is conservative, for every \(h>0\), we have \[\int_{\mathbb{R}^{d}}|U(t)|^{p}\,\mathrm{d}x=\int_{\mathbb{R}^{d}}P_{h}\left( |U(t)|^{p}\right)\,\mathrm{d}x\] and \[\int_{\mathbb{R}^{d}}u(t)(v(t)_{+})^{p-1}\,\mathrm{d}x=\int_{ \mathbb{R}^{d}}P_{h}\left(u(t)(v(t)_{+})^{p-1}\right)\,\mathrm{d}x.\] Taking this into account and rearranging, we get \[W(t)=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left( \mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(U(t)(x),U(t)(y)\right)\frac {p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y.\] Because of the assumption \(f\geq 0\), we have \(U(t)\in[0,\infty)\times\mathbb{R}\) for all \(x\in\mathbb{R}^{d}\) and \(t\geq 0\), so that \(\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(U(t)(x),U(t)(y)\right)\) is nonnegative (see the discussion preceding the proposition). Therefore from (**P1**), (**P2**), and Lemma 2.6, we conclude that \[W(t)=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+ \mathcal{F}_{p}\right)\left(U(t)(x),U(t)(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{ d}y. \tag{4.14}\] Since \(u(t)(v(t)_{+})^{p-1}+|U(t)|^{p}\) is continuously differentiable in \(L^{1}(\mathbb{R}^{d})\) for \(t\in[0,\infty)\), \(W(t)\) is a continuous (real) function on \((0,\infty)\). Thus, \[\int_{\mathbb{R}^{d}}\left(u(0)(v(0)_{+})^{p-1}+|U(0)|^{p}\right) \mathrm{d}x-\int_{\mathbb{R}^{d}}\left(u(T)((v(T))_{+})^{p-1}+|U(T)|^{p}\right) \,\mathrm{d}x\] \[= -\int_{0}^{T}W(t)\,\mathrm{d}t=\int_{0}^{T}\int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(U( t)(x),U(t)(y)\right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\] \[= \int_{0}^{T}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left( \mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)\left(P_{t}\Phi(x),P_{t}\Phi(y) \right)\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\] We now let \(T\to\infty\). As the integrand in the right-hand side is nonnegative, \(u(0)=f\), \(v(0)=g\), and \(U(0)=\Phi\), to prove (4.10) it is enough to show that \[\int_{\mathbb{R}^{d}}\left(u(T)((v(T))_{+})^{p-1}+|U(T)|^{p}\right)\,\mathrm{d}x =\int_{\mathbb{R}^{d}}\left(P_{T}f((P_{T}g)_{+})^{p-1}+|P_{T}\Phi|^{p}\right)\, \mathrm{d}x\to 0.\] While proving Theorem 3.1 we have already shown that \(\int_{\mathbb{R}^{d}}|U(T)|^{p}\,\mathrm{d}x\to 0\). 
Further, since \(|P_{T}f(x)|\leq f^{*}(x)\) and \(|P_{T}g(x)|\leq g^{*}(x)\) for every \(x\in\mathbb{R}^{d}\) and \(T>0\) and \(f^{*},g^{*}\in L^{p}(\mathbb{R}^{d})\) by (2.17), we get \(\int_{\mathbb{R}^{d}}P_{T}f((P_{T}g)_{+})^{p-1}\,\mathrm{d}x\to 0\) by the Dominated Convergence Theorem. This yields (4.10) for \(f,g\in\mathcal{D}_{p}(L)\).

It remains to get rid of the assumption \(f,g\in\mathcal{D}_{p}(L)\). We proceed as in the proof of Theorem 3.1. Take \(f,g\in L^{p}(\mathbb{R}^{d})\) arbitrary and let \(s>0\). Since \((P_{t})_{t\geq 0}\) is an analytic semigroup on \(L^{p}(\mathbb{R}^{d})\), \(P_{s}f,P_{s}g\in\mathcal{D}_{p}(L)\) as well. Consequently, by (4.10),

\[\int_{\mathbb{R}^{d}}\left(P_{s}f((P_{s}g)_{+})^{p-1}+|P_{s}\Phi|^{p}\right)\,\mathrm{d}x=\int_{s}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\]

Let \(s\to 0^{+}\). As the integrand of the right-hand side is nonnegative, the integrals tend to the right-hand side of (4.10). To get the convergence of the left-hand side we use the strong continuity of \((P_{t})_{t\geq 0}\) in \(L^{p}(\mathbb{R}^{d})\). The convergence \(|P_{s}\Phi|^{p}\to|\Phi|^{p}\) in \(L^{1}(\mathbb{R}^{d})\) was shown in the proof of Theorem 3.1. Since \(P_{s}f\to f\) and \((P_{s}g)_{+}\to g_{+}\) in \(L^{p}(\mathbb{R}^{d})\), by Lemma B.1, \(((P_{s}g)_{+})^{p-1}\to(g_{+})^{p-1}\) in \(L^{\frac{p}{p-1}}(\mathbb{R}^{d})\). Moreover, by Lemma B.2, \(P_{s}f((P_{s}g)_{+})^{p-1}\to f(g_{+})^{p-1}\) in \(L^{1}(\mathbb{R}^{d})\). Thus, \(\int_{\mathbb{R}^{d}}\left(P_{s}f((P_{s}g)_{+})^{p-1}+|P_{s}\Phi|^{p}\right)\,\mathrm{d}x\to\int_{\mathbb{R}^{d}}\left(f(g_{+})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x\). The proof of (4.10) is complete.

Proof of Theorem 4.1.: Thanks to Remark 4.2, we only need to consider \(p>2\). Let first \(f\geq 0\). By Proposition 4.3 and (4.8),

\[\int_{\mathbb{R}^{d}}fg^{\langle p-1\rangle}\,\mathrm{d}x =\int_{\mathbb{R}^{d}}\left(f(g_{+})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x-\int_{\mathbb{R}^{d}}\left(f(g_{-})^{p-1}+|\Phi|^{p}\right)\,\mathrm{d}x\]
\[=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t-\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right)(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\]
\[=\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t,\]

where all the integrals are absolutely convergent. Therefore (4.4) holds in this case. To get rid of the assumption \(f\geq 0\), we consider an arbitrary \(f\in L^{p}(\mathbb{R}^{d})\) and write \(f=f_{+}-f_{-}\). The result holds for pairs \(\Phi^{(+)}:=(f_{+},g)\) and \(\Phi^{(-)}:=(f_{-},g)\). Of course, \(\Phi=\Phi^{(+)}-\Phi^{(-)}\).
The operators \(P_{t}\) are linear and the function \(\mathcal{J}_{p}(w,z)\) is linear in \(w_{1}\) and \(z_{1}\), so

\[\int_{\mathbb{R}^{d}}fg^{\langle p-1\rangle}\,\mathrm{d}x = \int_{\mathbb{R}^{d}}f_{+}g^{\langle p-1\rangle}\,\mathrm{d}x-\int_{\mathbb{R}^{d}}f_{-}g^{\langle p-1\rangle}\,\mathrm{d}x\]
\[= \int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t-\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi^{(-)}(x),P_{t}\Phi^{(-)}(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\]
\[= \int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t.\]

The absolute convergence of the integrals is clear from our previous arguments.

We next present a quantitative version of (4.4).

**Proposition 4.4**.: _Under the assumptions of Theorem 4.1,_

\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}^{d}}\int\limits_{\mathbb{R}^{d}}\left|\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\right|\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\leq(1+2^{p/2})\|f\|_{L^{p}(\mathbb{R}^{d})}\|g\|_{L^{p}(\mathbb{R}^{d})}^{p-1}. \tag{4.15}\]

Proof.: As in the proof of Theorem 4.1, we let \(\Phi^{(+)}=(f_{+},g)\) and \(\Phi^{(-)}=(f_{-},g)\). Then,

\[\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))=\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))-\mathcal{J}_{p}(P_{t}\Phi^{(-)}(x),P_{t}\Phi^{(-)}(y)),\]

so

\[\left|\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\right|\leq\left|\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\right|+\left|\mathcal{J}_{p}(P_{t}\Phi^{(-)}(x),P_{t}\Phi^{(-)}(y))\right|.\]

Because of (4.8),

\[\left|\mathcal{J}_{p}\right|\leq\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)+\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right),\]

both terms being nonnegative on \(\left([0,\infty)\times\mathbb{R}\right)^{2}\). As the pairs \((P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\) and \((P_{t}\Phi^{(-)}(x),P_{t}\Phi^{(-)}(y))\) lie in \(\left([0,\infty)\times\mathbb{R}\right)^{2}\),

\[\left|\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\right|\leq\left(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p}\right)(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))+\left(\mathcal{J}_{p}^{(-)}+\mathcal{F}_{p}\right)(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y)),\]

and a similar inequality holds for \(P_{t}\Phi^{(-)}\). From Proposition 4.3,

\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}^{d}}\int\limits_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(+)}+\mathcal{F}_{p})(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t=\int\limits_{\mathbb{R}^{d}}\left(f_{+}(g_{+})^{p-1}+|\Phi^{(+)}|^{p}\right)\,\mathrm{d}x.\]

A similar identity holds for \(\mathcal{J}_{p}^{(-)}\).
Summing up, we get

\[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left|\mathcal{J}_{p}(P_{t}\Phi^{(+)}(x),P_{t}\Phi^{(+)}(y))\right|\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\leq\int_{\mathbb{R}^{d}}\left(f_{+}|g|^{p-1}+2|\Phi^{(+)}|^{p}\right)\,\mathrm{d}x\]

and

\[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\left|\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\right|\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\leq\int_{\mathbb{R}^{d}}\left(f|g|^{p-1}+2|\Phi|^{p}\right)\,\mathrm{d}x.\]

By Holder's inequality,

\[\int_{\mathbb{R}^{d}}f|g|^{p-1}\,\mathrm{d}x\leq\|f\|_{L^{p}(\mathbb{R}^{d})}\|g\|_{L^{p}(\mathbb{R}^{d})}^{p-1}.\]

On the other hand,

\[|\Phi|^{p}=(f^{2}+g^{2})^{p/2}\leq 2^{p/2-1}(|f|^{p}+|g|^{p}).\]

Therefore, if \(\|f\|_{L^{p}(\mathbb{R}^{d})}=\|g\|_{L^{p}(\mathbb{R}^{d})}=1\), then (4.15) is true if we replace its right-hand side by \(1+2^{p/2}\). If \(\|f\|_{L^{p}(\mathbb{R}^{d})}=0\) or \(\|g\|_{L^{p}(\mathbb{R}^{d})}=0\), then (4.15) is obvious. Otherwise, we observe that \(\mathcal{J}_{p}\) is homogeneous in the first coordinates, and \((p-1)\)-homogeneous in the second, to wit,

\[\mathcal{J}_{p}((\lambda w_{1},\mu w_{2}),(\lambda z_{1},\mu z_{2}))=\lambda\mu^{\langle p-1\rangle}\mathcal{J}_{p}((w_{1},w_{2}),(z_{1},z_{2})),\qquad\lambda,\mu>0.\]

Then, by considering \(f/\|f\|_{L^{p}(\mathbb{R}^{d})}\) and \(g/\|g\|_{L^{p}(\mathbb{R}^{d})}\), we get the result.

## 5. Polarized Sobolev-Bregman form \(\mathcal{E}_{p}(u,v)\)

The integral expression appearing in (1.6) and Theorem 4.1, namely

\[\mathcal{E}_{p}(u,v):=\frac{1}{p}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y,\]

where \(\Phi(x)=(u(x),v(x))\), \(u,v\colon\mathbb{R}^{d}\to\mathbb{R}\), \(p\in[2,\infty)\), and \(\mathcal{J}_{p}\) is given by (4.1), deserves further attention. If \(u=v\) then \(\mathcal{E}_{p}(u,v)=\mathcal{E}_{p}(u,u)=\mathcal{E}_{p}[u]\). For \(p=2\), we get \(\mathcal{E}_{2}(u,v)\), the usual (bilinear) Dirichlet form [19]; in particular, it is symmetric. For \(p>2\), in general \(\mathcal{E}_{p}(v,u)\neq\mathcal{E}_{p}(u,v)\), and it is not even clear whether the integral in (1.6) is well-defined for general enough functions \(u,v\), for instance for \(u,v\in\mathcal{D}(\mathcal{E}_{p})\). The next theorem asserts that for \(p\geq 2\) and \(u,v\in\mathcal{D}_{p}(L)\), (1.6) is well-defined; we also get an extension of the single-function formula (2.30) from Proposition 2.4.

**Theorem 5.1**.: _Let \(p\geq 2\). If \(u,v\in\mathcal{D}_{p}(L)\), then \(\mathcal{E}_{p}(u,v)\) is well-defined and_

\[\mathcal{E}_{p}(u,v)=-\frac{1}{p}\langle Lu,v^{\langle p-1\rangle}\rangle-\frac{1}{p}\langle Lv,(p-1)u|v|^{p-2}\rangle. \tag{5.1}\]

Note that this agrees with (2.30) if \(u=v\). Before we prove (5.1), we need to further decompose \(\mathcal{J}_{p}^{(+)}\) (and \(\mathcal{J}_{p}\)) into a difference of nonnegative functions. Let \(\mathbf{1}(a):=(1+\mathrm{sgn}(a))/2\) be the Heaviside step function.
We define \[\mathcal{J}_{p}^{(++)}(w,z) := (z_{1})_{+}\left((z_{2})_{+}\right)^{p-1}-(w_{1})_{+}\left((w_{2} )_{+}\right)^{p-1}-\mathbf{1}(w_{1})\left((w_{2})_{+}\right)^{p-1}\left(z_{1}- w_{1}\right)\] \[-(p-1)(w_{1})_{+}\left((w_{2})_{+}\right)^{p-2}\left(z_{2}-w_{2}\right),\] \[\mathcal{J}_{p}^{(-+)}(w,z) := (z_{1})_{-}\left((z_{2})_{+}\right)^{p-1}-(w_{1})_{-}\left((w_{2} )_{+}\right)^{p-1}+\mathbf{1}(-w_{1})\left((w_{2})_{+}\right)^{p-1}\left(z_{1}- w_{1}\right) \tag{5.3}\] \[-(p-1)(w_{1})_{-}\left((w_{2})_{+}\right)^{p-2}\left(z_{2}-w_{2}\right), \tag{5.2}\] where \(w:=(w_{1},w_{2}),z:=(z_{1},z_{2})\in\mathbb{R}^{2}\). We may view these functions as the second-order Taylor remainders of the mappings \(\mathbb{R}^{2}\ni z\mapsto\left(z_{1}\right)_{+}\left((z_{2})_{+}\right)^{p-1}\) and \(\mathbb{R}^{2}\ni z\mapsto\left(z_{1}\right)_{-}\left((z_{2})_{+}\right)^{p-1}\), respectively, except for nondifferentiability of the mappings on the vertical positive semi-axis (for more details, see the proof of Lemma C.4 in Appendix C). Similarly to (4.8) and (4.9), we get a decomposition of \(\mathcal{J}_{p}^{(+)}\): \[\mathcal{J}_{p}^{(+)}=\mathcal{J}_{p}^{(++)}-\mathcal{J}_{p}^{(-+)} \tag{5.4}\] and the identity \[\mathcal{J}_{p}^{(++)}(-\bar{w},-\bar{z})=\mathcal{J}_{p}^{(-+)}(w,z). \tag{5.5}\] In Lemma C.4 in Appendix C we prove that \[\mathcal{J}_{p}^{(++)}(w,z)+\mathcal{F}_{p}(w,z)\geq 0,\quad\mathcal{J}_{p}^{(-+)} (w,z)+\mathcal{F}_{p}(w,z)\geq 0\] for all \(z,w\in\mathbb{R}^{2}\). Therefore, by adding and subtracting \(\mathcal{F}_{p}\) in (5.4), we get the desired decomposition of \(\mathcal{J}_{p}^{(+)}\) and we can proceed from there. Let us mention that it is crucial to define the Heaviside function so that \(\mathbf{1}(0)=1/2\). This is because we use the identity \(\mathbf{1}(a)+\mathbf{1}(-a)=1\) for all \(a\in\mathbb{R}\) to derive (5.4). Proof of Theorem 5.1.: Let \(u,v\in\mathcal{D}_{p}(L)\). If \(p=2\), then, again, the identity is evident from classical polarization, (2.24) and (2.30). Thus, we let \(p>2\). Denote \(\Phi(x):=(u(x),v(x))\). First we prove the following: \[l_{++} := -\langle Lu,\mathbf{1}(u)(v_{+})^{p-1}\rangle-\langle Lv,(p-1)u_{ +}(v_{+})^{p-2}\rangle-\langle L\Phi,p\Phi^{\langle p-1\rangle}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(++) }+\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] and \[l_{-+} := \langle Lu,\mathbf{1}(-u)(v_{+})^{p-1}\rangle-\langle Lv,(p-1)u_{ -}(v_{+})^{p-2}\rangle-\langle L\Phi,p\Phi^{\langle p-1\rangle}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(-+)} +\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\] We start with the proof of (5.6). By the definition of \(L\), \[\langle L\Phi,p\Phi^{\langle p-1\rangle}\rangle=\lim_{h\to 0^{+}}\frac{1}{h}\int_ {\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\Phi(x)-\Phi(y))\cdot p\Phi(x)^{\langle p -1\rangle}\frac{p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y. 
\tag{5.8}\] Since the limits defining \(Lu\), \(Lv\) exist in the strong sense in \(L^{p}(\mathbb{R}^{d})\), we have \[l_{++}=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}} \left[(u(x)-u(y))\mathbf{1}(u(x))(v(x)_{+})^{p-1}\right.\] \[\left.\qquad\qquad+(v(x)-v(y))(p-1)u(x)_{+}(v(x)_{+})^{p-2}\right.\] \[\left.\qquad\qquad+(\Phi(x)-\Phi(y))\cdot p\Phi(x)^{\langle p-1 \rangle}\right]\frac{p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y.\] Then, similarly as in the proofs of Theorems 3.1 and 4.1, we take advantage of the conservativeness of the semigroup \((P_{t})_{t\geq 0}\): \[\int_{\mathbb{R}^{d}}u_{+}(v_{+})^{p-1}\,\mathrm{d}x = \int_{\mathbb{R}^{d}}P_{h}\left(u_{+}(v_{+})^{p-1}\right)\, \mathrm{d}x,\] \[\int_{\mathbb{R}^{d}}|\Phi|^{p}\,\mathrm{d}x = \int_{\mathbb{R}^{d}}P_{h}\left(|\Phi|^{p}\right)\,\mathrm{d}x, \quad\text{ for }h>0.\] Taking this into account and rearranging, we obtain \[l_{++}=\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^ {(++)}+\mathcal{F}_{p})(\Phi(x),\Phi(y))\,\frac{p_{h}(x,y)}{h}\,\mathrm{d}x \mathrm{d}y.\] From Lemma C.4 in Appendix C, \(\mathcal{J}_{p}^{(++)}+\mathcal{F}_{p}\geq 0\), hence we can pass to the limit as \(h\to 0^{+}\) and by Lemma 2.6 we obtain (5.6). By substituting \(-u\) in place of \(u\), we obtain (5.7), too; see (2.10) and (5.5). Further, we claim that for all \(u,v\in\mathcal{D}_{p}(L)\), \[l_{+} := -\langle Lu,(v_{+})^{p-1}\rangle-\langle Lv,(p-1)u(v_{+})^{p-2}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}^{(+)}( \Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] and \[l_{-} := -\langle Lu,(v_{-})^{p-1}\rangle+\langle Lv,(p-1)u(v_{-})^{p-2}\rangle\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}^{(-)}( \Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\] Indeed, using (5.6), (5.7), and (5.4), we get \[l_{+}=l_{++}-l_{-+} = \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(++ )}+\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[-\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\mathcal{J}_{p}^{(-+) }+\mathcal{F}_{p})(\Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[= \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}^{(+)}( \Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\] Note that the integral on the right-hand side is well-defined as a difference of finite integrals with nonnegative integrands. This yields (5.9). Equality (5.10) follows from (5.9) by substituting \(-v\) in place of \(v\); see (2.10) and (4.9). To conclude, using (5.9), (5.10), and (4.8), we obtain \[-\langle Lu,v^{\langle p-1\rangle}\rangle-\langle Lv,(p-1)u|v|^{p -2}\rangle=l_{+}-l_{-}= \tag{5.11}\] \[\quad=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}( \Phi(x),\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y=p\mathcal{E}_{p}(u,v).\] Again, the integral defining \(\mathcal{E}_{p}(u,v)\) is absolutely convergent as a difference of two absolutely convergent integrals. The proof is complete. _Remark 5.2_.: By the above and Lemma B.5, \[p\mathcal{E}_{p}(f,g) := \int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(f(x),g (x);f(y),g(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\] \[= \langle-Lf,g^{\langle p-1\rangle}\rangle+\langle-Lg,(p-1)f|g|^{p-2}\rangle\] \[= -\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}}P_{t}f(x)(P_{ t}g(x))^{\langle p-1\rangle}\,\mathrm{d}x\Big{|}_{t=0},\] at least for \(f,g\in\mathcal{D}_{p}(L)\) and \(p\geq 2\). 
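As a quick consistency check (an added aside, not needed in the sequel), consider \(p=2\) and \(f,g\in\mathcal{D}_{2}(L)\). Then the formula above reads

\[2\mathcal{E}_{2}(f,g)=\langle-Lf,g\rangle+\langle-Lg,f\rangle=2\langle-Lf,g\rangle\]

by the symmetry of \(L\), so \(\mathcal{E}_{2}(f,g)=\langle-Lf,g\rangle\), the usual bilinear Dirichlet form, in agreement with (2.24) and (4.2).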
At this moment, Lemma B.5 offers a simplifying perspective on (1.5) and Theorem 4.1, but we should emphasize the importance of absolute integrability asserted in Theorem 4.1 for arbitrary \(f,g\in L^{p}(\mathbb{R}^{d})\) when \(p\geq 2\); see also Proposition 4.4. ## Appendix A Estimates for Bregman divergence The following lemma extends Lemma 5 of [8], where scalar versions of (A.1), (A.3), (A.4) were given. The inequality (A.2) seems new. **Lemma A.1**.: _There are constants \(C_{\kappa},C^{\prime}_{\kappa},C^{\prime\prime}_{\kappa},C^{\prime\prime \prime}_{\kappa}\in(0,\infty)\) such that for all \(w,z\in\mathbb{R}^{n}\),_ (A.1) \[0\leq\mathcal{F}_{\kappa}(w,z) \leq C_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa-\lambda}, \quad\lambda\in[0,2],\kappa>1,\] (A.2) \[|\mathcal{F}_{\langle\kappa\rangle}(w,z)| \leq C^{\prime}_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa- \lambda}, \quad\lambda\in[0,2],\kappa>1,\] (A.3) \[||z|^{\kappa}-|w|^{\kappa}| \leq C^{\prime\prime}_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa -\lambda}, \quad\lambda\in[0,1],\kappa>0,\] (A.4) \[|z^{\langle\kappa\rangle}-w^{\langle\kappa\rangle}| \leq C^{\prime\prime}_{\kappa}|z-w|^{\lambda}(|w|\vee|z|)^{\kappa -\lambda}, \quad\lambda\in[0,1],\kappa>0.\] Proof.: It suffices to prove the inequalities for the maximal value of \(\lambda\) (equal to \(2\) in (A.1), (A.2), and equal to \(1\) in (A.3), (A.4)). Indeed, for other values of \(\lambda\), it is enough to use the inequality \(|z-w|\leq|z-w|^{\mu}(|z|+|w|)^{1-\mu}\), \(\mu\in(0,1)\), \(w,z\in\mathbb{R}^{n}\). Inequality (A.1) follows from (2.12). In particular, for \(a,b\in\mathbb{R}\), \(\lambda=2\), we have (A.5) \[0\leq F_{\kappa}(a,b)=|b|^{\kappa}-|a|^{\kappa}-\kappa a^{\langle\kappa-1 \rangle}(b-a)\leq C_{\kappa}|b-a|^{2}(|b|\vee|a|)^{\kappa-2}.\] To get the other inequalities, observe that they are obvious for \(w=0\). For \(w\neq 0\), we divide by \(|w|^{\kappa}\) and, denoting \(t:=|z|/|w|\in[0,\infty)\), \(v_{1}:=w/|w|\in\mathbb{S}^{n-1}\), \(v_{2}:=z/|z|\in\mathbb{S}^{n-1}\), we arrive at the following equivalent statements of (A.2), (A.3), (A.4): (A.6) \[|t^{\kappa}v_{2}-v_{1}-\left((\kappa-1)v_{1}\otimes v_{1}+\text{ Id}\right)(tv_{2}-v_{1})| \leq C^{\prime}_{\kappa}|tv_{2}-v_{1}|^{2}(1\lor t)^{\kappa-2},\] (A.7) \[|t^{\kappa}-1| \leq C^{\prime\prime}_{\kappa}|tv_{2}-v_{1}|(1\lor t)^{\kappa-1},\] (A.8) \[|t^{\kappa}v_{2}-v_{1}| \leq C^{\prime\prime\prime}_{\kappa}|tv_{2}-v_{1}|(1\lor t)^{ \kappa-1}.\] We have (A.9) \[|tv_{2}-v_{1}|^{2}(1\lor t)^{\kappa-2}=\left((1-t)^{2}+2t(1-(v_{1},v_{2})) \right)(1\lor t)^{\kappa-2}.\] If we square the right-hand sides of (A.7) and (A.8) then, up to a constant, we get (A.10) \[|tv_{2}-v_{1}|^{2}(1\lor t)^{2\kappa-2}=\left((1-t)^{2}+2t(1-(v_{1},v_{2})) \right)(1\lor t)^{2\kappa-2}.\] Denote \(\beta=1-(v_{1},v_{2})\in[0,2]\), so that (A.6) becomes (A.11) \[|(t^{\kappa}-t)v_{2}+\left(\kappa-1\right)\left((1-t)+\beta t\right)v_{1}|\leq C ^{\prime}_{\kappa}\left((1-t)^{2}+2\beta t\right)(1\lor t)^{\kappa-2}.\] This inequality is evident when \(t\) is away from \(1\), say, \(t\in[0,\frac{1}{2}]\) or \(t\in[2,\infty)\). Indeed, for \(t\leq 1/2\), we estimate the left-hand side by \(2\kappa\), while the function on the right-hand side is not smaller than \(\left(\frac{1}{2}\right)^{2}\), and (A.6) follows. 
When \(t\geq 2\), then the left-hand side is not greater than \((2\kappa-1)t^{\kappa}\), and for the right-hand side, we get \[\left((1-t)^{2}+2\beta t\right)(1\lor t)^{\kappa-2}\geq\left(\frac{t}{2} \right)^{2}t^{\kappa-2}.\] To deal with the remaining range \(t\in(1/2,2)\), we square both sides of (A.11). The left-hand side yields (A.12) \[|(t^{\kappa}-t)v_{2}+(\kappa-1)((1-t)+t(1-(v_{1},v_{2})))v_{1}|^{2}\] \[= |F_{\kappa}(1,t)+(\kappa-1)\beta t|^{2}-2(t^{\kappa}-t)(\kappa-1) ((1-t)+t\beta))\beta.\] In view of (A.5), the first term on the right-hand side of (A.12) is bounded above by \((C_{\kappa}(1-t)^{2}+(\kappa-1)t\beta)^{2}\). Since the right-hand side of (A.6) is then not smaller than a constant multiple of \(((1-t)^{2}+2\beta t)\), we get the estimate of this part. For the other term, we use the estimate \(\left|1-t^{\kappa-1}\right|\leq C(\kappa)\left|1-t\right|\), \(t\in(1/2,2]\), so \[2(t-t^{\kappa})(\kappa-1)((1-t)+t\beta)\beta\leq C[(t-1)^{2}t\beta+4t^{2} \beta^{2}]\leq C((1-t)^{2}+2\beta t)^{2}.\] The estimate (A.6) follows. After squaring its sides, the proof of (A.8) is reduced to verifying \[(t^{\kappa}-1)^{2}+2\beta t^{\kappa}\leq C\left((1-t)^{2}+2\beta t\right)(1 \lor t)^{2\kappa-2},\] with a constant \(C\), uniformly in \(\beta\in[0,2]\). This is done like before. For \(t\geq 1\), \[(t^{\kappa}-1)^{2}\leq C(1-t)^{2}t^{2\kappa-2}.\] For \(0\leq t\leq 1/2\), the left-hand side is bounded and the right-hand side is bounded away from zero (uniformly in \(\beta\in[0,2]\)), while for \(t\in(1/2,1]\) we use the inequality \(t^{\kappa}\leq C(\kappa)t\), \(t\in(1/2,1)\). The square of the left-hand side of (A.7) is smaller than \((1-t)^{2}+2\beta t^{\kappa}\), i.e., the square of the left-hand side of (A.8). The proof is complete. ## Appendix B Calculus in \(L^{p}\) Let \(p\in[1,\infty)\) be fixed. In the discussion of the multivariate Hardy-Stein identity above we use the differential calculus in the Banach space \[L^{p}(\mathbb{R}^{d};\mathbb{R}^{n}):=\left\{\Upsilon\colon\mathbb{R}^{d} \to\mathbb{R}^{n}\text{ measurable, }\int_{\mathbb{R}^{d}}|\Upsilon(x)|^{p}\,\mathrm{d}x<\infty \right\},\quad n=1,2,\ldots,\] with the norm \(\|\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}:=\left(\int_{\mathbb{R}^{ d}}|\Upsilon(x)|^{p}\,\mathrm{d}x\right)^{1/p}\), or \[\|\Upsilon\|_{\ell^{n}_{2}(L^{p}(\mathbb{R}^{d}))}:=\left(\sum_{i=1}^{n}\|v_{ i}\|_{L^{p}(\mathbb{R}^{d})}^{2}\right)^{1/2},\] where \(\Upsilon=(v_{1},\ldots,v_{n})\), \(v_{1},\ldots,v_{n}\in L^{p}(\mathbb{R}^{d})\). The norms are comparable: \[\left(\int_{\mathbb{R}^{d}}|\Upsilon(x)|^{p}\,\mathrm{d}x\right)^ {\frac{1}{p}} = \left(\int_{\mathbb{R}^{d}}\left(\sum_{i=1}^{n}|v_{i}(x)|^{2} \right)^{\frac{p}{2}}\,\mathrm{d}x\right)^{\frac{1}{p}}\leq\left(\int_{ \mathbb{R}^{d}}\left(\sum_{i=1}^{n}|v_{i}(x)|\right)^{p}\,\mathrm{d}x\right)^ {\frac{1}{p}}\] \[= \||v_{1}|+\ldots+|v_{n}|\|_{L^{p}(\mathbb{R}^{d})}\leq\sum_{i=1}^ {n}\|v_{i}\|_{L^{p}(\mathbb{R}^{d})}\leq\sqrt{n}\|\Upsilon\|_{\ell^{n}_{2}(L^{p }(\mathbb{R}^{d}))}.\] Let \(\Upsilon\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \(\Psi\in L^{q}(\mathbb{R}^{d};\mathbb{R}^{n})\), where \(p,q\in(1,\infty)\) with \(\frac{1}{p}+\frac{1}{q}=1\). 
We consider the canonical pairing (B.1) \[\langle\Upsilon,\Psi\rangle:=\int_{\mathbb{R}^{d}}\Upsilon(x)\cdot\Psi(x)\, \mathrm{d}x=\sum_{j=1}^{n}\int_{\mathbb{R}^{d}}\upsilon_{j}(x)\psi_{j}(x)\, \mathrm{d}x.\] For a mapping \([0,\infty)\ni t\mapsto\Upsilon(t)\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), we denote \[\Delta_{h}\Upsilon(t)=\Upsilon(t+h)-\Upsilon(t)\quad\text{provided }t,t+h\geq 0.\] As usual, \(\Upsilon\) is called _continuous_ at \(t_{0}\geq 0\) if \(\Delta_{h}\Upsilon(t_{0})\to 0\) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) as \(h\to 0\). Furthermore, \(\Upsilon\) is called _differentiable_ at \(t_{0}\geq 0\) if the limit (B.2) \[\lim_{h\to 0}\frac{1}{h}\Delta_{h}\Upsilon(t_{0})=:\Upsilon^{\prime}(t_{0})\] exists in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). If \(\Upsilon^{\prime}(t)\) defined by (B.2) is continuous at \(t=t_{0}\), then we say that \(\Upsilon\) is _continuously differentiable_ at \(t_{0}\). In other words, \(\Upsilon^{\prime}(t_{0})\) is the Frechet derivative of the mapping \([0,\infty)\ni t\mapsto\Upsilon(t)\) at \(t_{0}\); \(\Upsilon^{\prime}(0)\) denotes the right-hand side derivative at \(0\). Clearly, if \(\Upsilon\) is continuously differentiable on \([0,\infty)\), then \(\Upsilon\) is continuous on \([0,\infty)\). Of course, \(\Upsilon=(\upsilon_{1},\ldots,\upsilon_{n})\) is continuous (respectively, differentiable, continuously differentiable) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) if and only if all the functions \(\upsilon_{i}\), \(i=1,\ldots,n\), are continuous (respectively, differentiable, continuously differentiable) in \(L^{p}(\mathbb{R}^{d})\). We next present a series of auxiliary lemmas. **Lemma B.1**.: _Let \(\kappa\in(0,p]\). Then the following mappings are continuous:_ (B.3) \[L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\ni\Upsilon \mapsto \Upsilon^{\langle\kappa\rangle}\in L^{p/\kappa}(\mathbb{R}^{d}; \mathbb{R}^{n}),\] (B.4) \[L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\ni\Upsilon \mapsto |\Upsilon|^{\kappa}\in L^{p/\kappa}(\mathbb{R}^{d}).\] Proof.: First, observe that \(|\Upsilon|^{\kappa}\) and \(\Upsilon^{\langle\kappa\rangle}\) are in \(L^{\frac{p}{\kappa}}(\mathbb{R}^{d})\) and \(L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})\), respectively, if \(\Upsilon\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). To prove (B.3), choose \(\lambda\in(0,1)\) such that \(\kappa-\lambda>0\) and suppose \(\Upsilon_{k}\to\Upsilon\) in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) as \(k\to\infty\). 
From (A.4) we get, for every \(x\in\mathbb{R}^{d}\), \[|\Upsilon_{k}(x)^{\langle\kappa\rangle}-\Upsilon(x)^{\langle \kappa\rangle}| \leq C_{\kappa}^{\prime\prime\prime}|\Upsilon_{k}(x)-\Upsilon(x)|^{ \lambda}(|\Upsilon_{k}(x)|\vee|\Upsilon(x)|)^{\kappa-\lambda}.\] Using Holder's inequality with exponents \(\kappa/\lambda\) and \(\kappa/(\kappa-\lambda)\), we get \[\|\Upsilon_{k}^{\langle\kappa\rangle}-\Upsilon^{\langle\kappa \rangle}\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})}^{\kappa/p} = \int_{\mathbb{R}^{d}}|\Upsilon_{k}(x)^{\langle\kappa\rangle}-\Upsilon (x)^{\langle\kappa\rangle}|^{\frac{p}{\kappa}}\,\mathrm{d}x\] \[\leq \int_{\mathbb{R}^{d}}\left(C_{\kappa}^{\prime\prime\prime}\right)^ {p/\kappa}|\Upsilon_{k}(x)-\Upsilon(x)|^{\frac{\lambda p}{\kappa}}(|\Upsilon_{k }(x)|\vee|\Upsilon(x)|)^{\frac{(\kappa-\lambda)p}{\kappa}}\,\mathrm{d}x\] \[\leq C\|\Upsilon_{k}-\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})} ^{p\lambda/\kappa}\cdot\left(\|\Upsilon_{k}\|_{L^{p}(\mathbb{R}^{d})}^{p(\kappa -\lambda)/\kappa}+\|\Upsilon\|_{L^{p}(\mathbb{R}^{d})}^{p(\kappa-\lambda)/ \kappa}\right).\] The result follows. The proof of (B.4) is similar. The following generalization of [8, Lemma 13] follows from Holder's inequality. **Lemma B.2**.: _Let \(q\in(1,\infty)\), \(r\in\left[\frac{q}{q-1},\infty\right)\), \(\Upsilon\in L^{q}(\mathbb{R}^{d};\mathbb{R}^{n})\), \(\Psi\in L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\). Then_ \[\|\Upsilon\cdot\Psi\|_{L^{\frac{qr}{q+r}}(\mathbb{R}^{d};\mathbb{R}^{n})}\leq \|\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}\|\Psi\|_{L^{r}(\mathbb{R}^ {d};\mathbb{R}^{n})}.\] _Moreover, if \(\Upsilon_{n}\to\Upsilon\) in \(L^{q}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \(\Psi_{n}\to\Psi\) in \(L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\), then \(\Upsilon_{n}\cdot\Psi_{n}\to\Upsilon\cdot\Psi\) in \(L^{\frac{qr}{q+r}}(\mathbb{R}^{d};\mathbb{R}^{n})\), as \(n\to\infty\)._ The next lemma is an extension of [8, Lemma 15], where the result for \(|\Upsilon|^{\kappa}\) was proved for \(n=1\), \(\kappa=p\). **Lemma B.3**.: _Let \(1<\kappa\leq p<\infty\) be given. If \([0,\infty)\ni t\mapsto\Upsilon(t)\) is continuously differentiable in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), then:_ * \(|\Upsilon|^{\kappa}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d})\) _and_ (B.5) \[(|\Upsilon|^{\kappa})^{\prime}=\kappa\Upsilon^{\langle\kappa-1\rangle}\cdot \Upsilon^{\prime},\] * \(\Upsilon^{\langle\kappa\rangle}\) _is continuously differentiable in_ \(L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})\) _and_ (B.6) _with_ \(J_{\langle\kappa\rangle}\) _defined in (_2.6_)._ Proof.: Both statements are proved similarly, therefore we only prove (B.6), as it is the more complicated of the two. Observe that for every \(a\in\mathbb{S}^{n-1}\subset\mathbb{R}^{n}\) and \(A:=a\otimes a\), the linear mapping \(F\mapsto AF\) is a contraction on \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\). 
Indeed, \[\|AF\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})} = \left(\int|(a,F)\cdot a|^{p}\;\mathrm{d}x\right)^{\frac{1}{p}}\] \[= \left(\int_{\mathbb{R}^{d}}|(a,F)|^{p}\,\mathrm{d}x\right)^{\frac{1}{p}}\leq\left(\int_{\mathbb{R}^{d}}\left(|a|\cdot|F|\right)^{p}\,\mathrm{d}x\right)^{\frac{1}{p}}=\left(\int_{\mathbb{R}^{d}}|F|^{p}\,\mathrm{d}x\right)^{\frac{1}{p}}.\] So, for every \(F\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), by Hölder's inequality with exponents \(\kappa\) and \(\kappa/(\kappa-1)\), (B.7) \[\|(J_{\langle\kappa\rangle}\circ\Upsilon)F\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})} = \left\||\Upsilon|^{\kappa-1}\left((\kappa-1)(\frac{\Upsilon}{|\Upsilon|}\otimes\frac{\Upsilon}{|\Upsilon|})+\mathrm{Id}\right)F\right\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})}\] \[\leq \|\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{\kappa-1}\cdot\left\|\left((\kappa-1)(\frac{\Upsilon}{|\Upsilon|}\otimes\frac{\Upsilon}{|\Upsilon|})+\mathrm{Id}\right)F\right\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}\] \[\leq \kappa\|\Upsilon\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{\kappa-1}\cdot\|F\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}.\] For \(t\geq 0\), we have the convergence \(\frac{1}{h}\Delta_{h}\Upsilon(t)\to\Upsilon^{\prime}(t)\) as \(h\to 0\), in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\), with \(\Upsilon^{\prime}\) being continuous. This and (B.7) yield \(\frac{1}{h}(J_{\langle\kappa\rangle}\circ\Upsilon(t))\Delta_{h}(\Upsilon(t))\to(J_{\langle\kappa\rangle}\circ\Upsilon(t))\Upsilon^{\prime}(t)\) in \(L^{p/\kappa}\) (with the limit continuous). Therefore we only need to verify that for \(h\to 0\), \[W_{h}(t):=\frac{1}{h}\Delta_{h}\Upsilon^{\langle\kappa\rangle}(t)-(J_{\langle\kappa\rangle}\circ\Upsilon(t))\frac{1}{h}\Delta_{h}\Upsilon(t)\to 0\qquad\text{in }L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n}).\] Since \(W_{h}(t)=\frac{1}{h}\mathcal{F}_{\langle\kappa\rangle}(\Upsilon(t),\Upsilon(t+h))\), we choose \(\lambda\in(1,2]\) such that \(\kappa-\lambda>0\), then use the inequality (A.2) to get: \[|W_{h}(t)| \leq \frac{1}{|h|}C^{\prime}_{\kappa}|\Upsilon(t+h)-\Upsilon(t)|^{\lambda}(|\Upsilon(t+h)|\vee|\Upsilon(t)|)^{\kappa-\lambda}\] \[= |h|^{\lambda-1}C^{\prime}_{\kappa}\left|\frac{1}{h}\Delta_{h}\Upsilon(t)\right|^{\lambda}(|\Upsilon(t+h)|\vee|\Upsilon(t)|)^{\kappa-\lambda}.\] Furthermore, by Hölder's inequality with parameters \(\kappa/\lambda\) and \(\kappa/(\kappa-\lambda)\), \[\|W_{h}(t)\|_{L^{p/\kappa}(\mathbb{R}^{d};\mathbb{R}^{n})} \leq C^{\prime}_{\kappa}|h|^{\lambda-1}\left\|\left|\frac{1}{h}\Delta_{h}\Upsilon(t)\right|^{\lambda}(|\Upsilon(t+h)|\vee|\Upsilon(t)|)^{\kappa-\lambda}\right\|_{L^{p/\kappa}(\mathbb{R}^{d})}\] \[\leq C|h|^{\lambda-1}\left\|\frac{1}{h}\Delta_{h}\Upsilon(t)\right\|_{L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})}^{\lambda}\cdot\||\Upsilon(t+h)|+|\Upsilon(t)|\|_{L^{p}(\mathbb{R}^{d})}^{\kappa-\lambda}.\] We then conclude as in the proof of Lemma B.1. Finally we invoke, without proof, an analogue of the Leibniz rule. **Lemma B.4** (**Product rule**).: _Let \(p>1\) and \(r\in\left[\frac{p}{p-1},\infty\right)\) be given. 
If the mappings \([0,\infty)\ni t\mapsto\Upsilon(t)\in L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \([0,\infty)\ni t\mapsto\Psi(t)\in L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\) are continuously differentiable in \(L^{p}(\mathbb{R}^{d};\mathbb{R}^{n})\) and \(L^{r}(\mathbb{R}^{d};\mathbb{R}^{n})\), respectively, then \(\Upsilon\cdot\Psi\) is continuously differentiable in \(L^{\frac{pr}{p+r}}(\mathbb{R}^{d})\) and \((\Upsilon\cdot\Psi)^{\prime}=\Upsilon^{\prime}\cdot\Psi+\Upsilon\cdot\Psi^{\prime}\)._ **Lemma B.5**.: _Let \(p>1\). If \(f,g\in\mathcal{D}_{p}(L)\), then for \(t\in[0,\infty)\),_ \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}}P_{t}f(P_{t}g)^{\langle p-1\rangle}\mathrm{d}x=\int_{\mathbb{R}^{d}}\left((P_{t}g)^{\langle p-1\rangle}LP_{t}f+(p-1)P_{t}f|P_{t}g|^{p-2}LP_{t}g\right)\mathrm{d}x.\] _If \(f,g\in L^{p}(\mathbb{R}^{d})\), then the formula holds for \(t\in(0,\infty)\)._ Proof.: Of course, \(P_{t}f\) and \(P_{t}g\) are continuously differentiable at \(t\geq 0\) in \(L^{p}(\mathbb{R}^{d})\) and \(\frac{\mathrm{d}}{\mathrm{d}t}P_{t}f=LP_{t}f\), \(\frac{\mathrm{d}}{\mathrm{d}t}P_{t}g=LP_{t}g\). Hence, by Lemma B.3 with \(n=1\), \((P_{t}g)^{\langle p-1\rangle}\) is continuously differentiable at \(t\geq 0\) in \(L^{\frac{p}{p-1}}(\mathbb{R}^{d})\) and \(\frac{\mathrm{d}}{\mathrm{d}t}(P_{t}g)^{\langle p-1\rangle}=(p-1)|P_{t}g|^{p-2}LP_{t}g\). By Lemma B.4 with \(r=p/(p-1)\), \(P_{t}f(P_{t}g)^{\langle p-1\rangle}\) is continuously differentiable at \(t\geq 0\) in \(L^{1}(\mathbb{R}^{d})\) and (B.9) \[\frac{\mathrm{d}}{\mathrm{d}t}\left(P_{t}f(P_{t}g)^{\langle p-1\rangle}\right)=(P_{t}g)^{\langle p-1\rangle}LP_{t}f+(p-1)P_{t}f|P_{t}g|^{p-2}LP_{t}g.\] Since \(u\mapsto\int_{\mathbb{R}^{d}}u(x)\,\mathrm{d}x\) is a continuous linear functional on \(L^{1}(\mathbb{R}^{d})\), we get the result (the case of arbitrary \(f,g\in L^{p}(\mathbb{R}^{d})\) follows since the semigroup \(P_{t}\) is analytic). ## Appendix C Convexity properties We provide here precise statements and proofs of convexity properties needed in Sections 4 and 5. First, we recall some facts from the theory of convex functions. Let \(T:A\to\mathbb{R}\), where the set \(A\subset\mathbb{R}^{n}\) is convex. By definition, \(d(w)\in\mathbb{R}^{n}\) is a _subgradient_ of \(T\) at \(w\in A\) if (C.1) \[T(z)\geq T(w)+d(w)\cdot(z-w)\quad\text{for all }z\in A.\] The function \(T\) is convex in \(A\) if and only if for every \(w\in A\), a subgradient \(d(w)\) exists. If \(T\) is convex and the first-order partial derivatives of \(T\) exist at some \(w\in A\), then \(T\) has exactly one subgradient at the point \(w\), which is equal to its gradient \(\nabla T(w)\). In such a case, denoting by \(\frac{\partial T}{\partial v}\) the directional derivative of \(T\) along a given vector \(v\in\mathbb{R}^{n}\), we have that \(d(w)\) is a subgradient of the function \(T\) at the point \(w\in A\) if and only if \[\frac{\partial T}{\partial v}(w)\geq d(w)\cdot v,\quad v\in\mathbb{R}^{n}.\] For more details see Borwein and Lewis [13, Chapter 3]. We need the following lemma. **Lemma C.1**.: _Let \(p\geq 2\). The function_ \[Y(z):=z_{1}\left(z_{2}\right)^{p-1}+|z|^{p},\quad z=(z_{1},z_{2})\in[0,\infty)^{2},\] _is convex on \([0,\infty)^{2}\)._ Proof.: As \(Y\) is continuous on \([0,\infty)^{2}\), it is enough to prove the convexity on \((0,\infty)^{2}\). 
Recall (2.5) and \[\nabla^{2}|z|^{p}=p(p-2)|z|^{p-4}\begin{bmatrix}z_{1}^{2}&z_{1}z_{2}\\ z_{1}z_{2}&z_{2}^{2}\end{bmatrix}+p|z|^{p-2}\mathrm{Id},\quad z\in\mathbb{R}^{2 }\setminus\{0\}.\] The Hessian \(\nabla^{2}\left(z_{1}z_{2}^{p-1}\right)\) is calculated in (4.7). The Hessian \(\nabla^{2}Y(z)\) of \(Y\) is: \[\begin{bmatrix}p|z|^{p-2}+p(p-2)z_{1}^{2}|z|^{p-4}&(p-1)z_{2}^{p-2}+p(p-2)z_{1 }z_{2}|z|^{p-4}\\ (p-1)z_{2}^{p-2}+p(p-2)z_{1}z_{2}|z|^{p-4}&(p-1)(p-2)z_{1}z_{2}^{p-3}+p|z|^{p- 2}+p(p-2)z_{2}^{2}|z|^{p-4}\end{bmatrix}\] We will verify that for \(z\in(0,\infty)^{2}\), the matrix is positive semi-definite. Clearly, \[\left[p|z|^{p-2}+p(p-2)z_{1}^{2}|z|^{p-4}\right]>0.\] Moreover, after long, but elementary, calculations we get: \[\det\nabla^{2}Y(z) = \left[p|z|^{p-2}+p(p-2)z_{1}^{2}|z|^{p-4}\right](p-1)(p-2)z_{1}z_ {2}^{p-3}\] \[+p^{2}|z|^{2p-4}+p^{2}(p-2)|z|^{2p-4}\] \[-(p-1)^{2}z_{2}^{2p-4}-p(p-1)(p-2)z_{1}z_{2}^{p-1}|z|^{p-4}\] \[= p^{2}(p-1)|z|^{2p-4}-(p-1)^{2}z_{2}^{2p-4}\] \[+p(p-1)(p-2)|z|^{p-4}\left((p-1)z_{1}^{3}z_{2}^{p-3}-z_{1}z_{2}^{ p-1}\right).\] We have \(z_{2}\leq|z|\), so applying Young's inequality with exponents \(p\) and \(q=p/(p-1)\) to the product \(z_{1}z_{2}^{p-1}\) we obtain \[z_{1}z_{2}^{p-1}\leq\frac{z_{1}^{p}}{p}+\frac{(p-1)z_{2}^{p}}{p}=\frac{1}{p}(z_ {1}^{2})^{\frac{p}{2}}+\frac{p-1}{p}(z_{2}^{2})^{\frac{p}{2}}\leq|z|^{p}.\] Summarizing, \[\det\nabla^{2}Y(z) \geq p(p-1)^{2}(p-2)|z|^{p-4}z_{1}^{3}z_{2}^{p-3}\] \[+|z|^{2p-4}(p-1)\left(p^{2}-p(p-2)-(p-1)\right)\] \[= p(p-1)^{2}(p-2)|z|^{p-4}z_{1}^{3}z_{2}^{p-3}+|z|^{2p-4}(p-1)(p+1)>0.\] If \(w_{1}\leq z_{1},w_{2}\leq z_{2},\ldots,w_{k}\leq z_{k}\) implies \(T(w_{1},\ldots,w_{k})\leq T(z_{1},\ldots,z_{k})\) in the domain of a real-valued function \(T\), then we say \(T\) is _coordinate-wise nondecreasing_. The following fact is self-explanatory, see also Boyd and Vandenberghe [14, Section 3.2.4]. **Lemma C.2**.: _Let \(S\colon A\to\mathbb{R}^{k}\), \(S(A)\subset B\), and \(T\colon B\to\mathbb{R}\), where \(A\subset\mathbb{R}^{n}\) and \(B\subset\mathbb{R}^{k}\) are convex. If each coordinate of \(S\) is convex and \(T\) is coordinate-wise nondecreasing and convex, then the composition \(T\circ S\colon A\to\mathbb{R}\) is convex._ The following two lemmas are critical for our development. **Lemma C.3**.: _Let \(p\geq 2\) and define, for \(z=(z_{1},z_{2})\in\mathbb{R}^{2}\),_ \[Y^{(+)}(z):=z_{1}\left((z_{2})_{+}\right)^{p-1}+|z|^{p},\qquad Y^{(-)}(z):=z_{1 }\left((z_{2})_{-}\right)^{p-1}+|z|^{p}.\] _The functions are convex on \([0,\infty)\times\mathbb{R}\)._ Proof.: Define \(T\colon[0,\infty)\times\mathbb{R}\to[0,\infty)^{2}\) as \[T(z):=(z_{1},(z_{2})_{+}),\quad z=(z_{1},z_{2})\in[0,\infty)\times\mathbb{R},\] and let \(Y\colon[0,\infty)^{2}\to\mathbb{R}\) be as in Lemma C.1. Since each coordinate of \(T\) is convex and the function \(Y\) is convex and coordinate-wise nondecreasing, the composition \[(Y\circ T)(z)=z_{1}((z_{2})_{+})^{p-1}+\left((z_{1})^{2}+((z_{2})_{+})^{2} \right)^{p/2}\] is convex on \([0,\infty)\times\mathbb{R}\) (from Lemma C.2). Therefore, \[Y^{(+)}(z)=\max\{(Y\circ T)(z),|z|^{p}\}\] is convex on \([0,\infty)\times\mathbb{R}\) as the maximum of convex functions [14, Section 3.2.3]. To prove the convexity of \(Y^{(-)}\) we just notice that \(Y^{(-)}(z_{1},z_{2})=Y^{(+)}(z_{1},-z_{2})\). 
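As a quick sanity check of the Hessian computations in the proof of Lemma C.1 (added here for illustration; it is not part of the original argument), consider the boundary case \(p=2\). Then \(Y(z)=z_{1}z_{2}+|z|^{2}\), the terms carrying the factor \(p-2\) vanish, and \[\nabla^{2}Y(z)=\begin{bmatrix}2&1\\ 1&2\end{bmatrix},\] with eigenvalues \(1\) and \(3\). In particular \(\det\nabla^{2}Y(z)=3\), which agrees with the general determinant formula above, \(p^{2}(p-1)|z|^{2p-4}-(p-1)^{2}z_{2}^{2p-4}=4-1=3\), evaluated at \(p=2\); hence \(Y\) is convex, consistently with the lemma.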
**Lemma C.4**.: _If \(p>2\) then for all \(z,w\in\mathbb{R}^{2}\),_ \[\mathcal{J}_{p}^{(++)}(w,z)+\mathcal{F}_{p}(w,z)\geq 0,\quad\mathcal{J}_{p}^{(-+ )}(w,z)+\mathcal{F}_{p}(w,z)\geq 0,\] _where \(\mathcal{J}_{p}^{(++)}(w,z)\), \(\mathcal{J}_{p}^{(-+)}(w,z)\) are given by (5.2) and (5.3)._ Proof.: Because of (2.10) and (5.5), we only need to show that \(\mathcal{J}_{p}^{(++)}(w,z)+\mathcal{F}_{p}(w,z)\geq 0\). We rewrite this inequality as (C.2) \[Y^{(++)}(z)\geq Y^{(++)}(w)+d(w)\cdot(z-w),\] where \(Y^{(++)}(z):=(z_{1})_{+}\left((z_{2})_{+}\right)^{p-1}+|z|^{p}\) and (C.3) \[d(w):=\left(\mathbf{1}(w_{1})\left((w_{2})_{+}\right)^{p-1},(p-1)(w_{1})_{+} \left((w_{2})_{+}\right)^{p-2}\right)+pw^{\langle p-1\rangle}.\] Therefore the proof of (C.2) amounts to checking that \(d(w)\) is a subgradient of the function \(Y^{(++)}\) at the point \(w\in\mathbb{R}^{2}\). To show (C.2), we first establish the convexity of \(Y^{(++)}\). Define \(T\colon\mathbb{R}^{2}\to[0,\infty)^{2}\) as \[T(z):=((z_{1})_{+},(z_{2})_{+}),\quad z=(z_{1},z_{2})\in\mathbb{R}^{2}.\] Let \(Y\colon[0,\infty)^{2}\to\mathbb{R}\) as in Lemma C.1. Since each coordinate of \(T\) is convex and the function \(Y\) is convex and coordinate-wise nondecreasing, the convexity on \(\mathbb{R}^{2}\) of the composition \[(Y\circ T)(z)=(z_{1})_{+}((z_{2})_{+})^{p-1}+\left(((z_{1})_{+})^{2}+((z_{2})_ {+})^{2}\right)^{p/2}\] follows from Lemma C.2. Since \[Y^{(++)}(z)=\max\{(Y\circ T)(z),|z|^{p}\},\] it is convex on \(\mathbb{R}^{2}\) as a maximum of two convex functions. If \(w=0\) then \(Y^{(++)}(w)=0\) and \(d(w)=0\), hence (C.2) is true for every \(z\). If \(w\neq 0\), to show that \(d(w)\) is a subgradient of \(Y^{(++)}\) at \(w\), we need to prove that \[\frac{\partial Y^{(++)}}{\partial v}(w)\geq d(w)\cdot v,\quad w\in\mathbb{R}^{ 2}\setminus\{0\},\quad\text{for every $v=(v_{1},v_{2})\in\mathbb{R}^{2}$.}\] Denote \(B:=\{(w_{1},w_{2})\in\mathbb{R}^{2}\colon w_{1}=0,w_{2}>0\}\) as vertical positive semi-axis. The function \(Y^{(++)}\) is differentiable everywhere but on \(B\). Thus when \(w\notin B\), the gradient of \(Y^{(++)}\) exists, is given by (C.3) and \[\frac{\partial Y^{(++)}}{\partial v}(w)=\nabla Y^{(++)}(w)\cdot v=d(w)\cdot v.\] In the remaining case \(w\in B\), we have two possibilities. If \(v_{1}\geq 0\), then \[\frac{\partial Y^{(++)}}{\partial v}(w) = \left((w_{2})_{+}\right)^{p-1}v_{1}+pw^{(p-1)}\cdot v\] \[\geq \frac{1}{2}\left((w_{2})_{+}\right)^{p-1}v_{1}+pw^{(p-1)}\cdot v= d(w)\cdot v.\] Otherwise, when \(v_{1}<0\), then \[\frac{\partial Y^{(++)}}{\partial v}(w)=pw^{(p-1)}\cdot v\geq\frac{1}{2}\left( (w_{2})_{+}\right)^{p-1}v_{1}+pw^{(p-1)}\cdot v=d(w)\cdot v.\] The proof is complete. ## Appendix D Alternative proof of polarization for \(p\geq 3\) The main difficulty in the proof of Theorem 4.1 above is to justify the limiting procedure in the absence of nonnegativity in the integrands. For \(p\geq 3\), we can proceed differently: the absolute value of the function \(\mathcal{J}_{p}\) is dominated by the function \(\mathcal{G}_{p}\), which helps with the integrability issues in the proof of the polarized Hardy-Stein formula. 
**Lemma D.1**.: _For every \(p\geq 3\), there is a constant \(c_{p}>0\) such that_ (D.1) \[|\mathcal{J}_{p}(w,z)|\leq c_{p}\mathcal{G}_{p}(w,z)\asymp\mathcal{H}_{p}(w,z),\quad w,z\in\mathbb{R}^{2}.\] Proof.: The formula (4.1), defining \(\mathcal{J}_{p}\), can be rewritten as \[\mathcal{J}_{p}(w,z) = z_{1}z_{2}^{\langle p-1\rangle}-w_{1}w_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle}(z_{1}-w_{1})-(p-1)w_{1}|w_{2}|^{p-2}(z_{2}-w_{2})\] \[= (z_{1}-w_{1})(z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle})+w_{1}[z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle}-(p-1)|w_{2}|^{p-2}(z_{2}-w_{2})].\] Using (A.3) we can estimate the first summand above in the following manner \[|z_{1}-w_{1}||z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle}| \leq C_{p}^{\prime}|w_{1}-z_{1}|\cdot|w_{2}-z_{2}|\cdot(|w_{2}|\vee|z_{2}|)^{p-2}\] \[\leq \frac{C_{p}^{\prime}}{2}\left(|w_{1}-z_{1}|^{2}+|w_{2}-z_{2}|^{2}\right)\left(|w_{2}|\vee|z_{2}|\right)^{p-2}\] \[\leq \frac{C_{p}^{\prime}}{2}|w-z|^{2}\left(|w|\vee|z|\right)^{p-2}=\frac{C_{p}^{\prime}}{2}\mathcal{G}_{p}(w,z).\] For the second summand we use (A.2), \[|w_{1}[z_{2}^{\langle p-1\rangle}-w_{2}^{\langle p-1\rangle}-(p-1)|w_{2}|^{p-2}(z_{2}-w_{2})]|\leq C_{p-1}^{\prime\prime}|w_{1}||z_{2}-w_{2}|^{2}(|w_{2}|\vee|z_{2}|)^{p-3}\] \[\leq C_{p-1}^{\prime\prime}|w-z|^{2}(|w_{1}|\vee|z_{1}|)(|w_{2}|\vee|z_{2}|)^{p-3}\] \[\leq C_{p-1}^{\prime\prime}|w-z|^{2}(|w|\vee|z|)^{p-2}=C_{p-1}^{\prime\prime}\mathcal{G}_{p}(w,z).\] Thus, \[|\mathcal{J}_{p}(w,z)|\leq c_{p}\mathcal{G}_{p}(w,z)\] and \(\mathcal{G}_{p}(w,z)\asymp\mathcal{H}_{p}(w,z)\) by (2.13). _Remark D.2_.: The statement (D.1) stays true also for \(p=2\). Indeed, by (4.2), \[|\mathcal{J}_{2}(w,z)|=|(z_{1}-w_{1})(z_{2}-w_{2})|\leq|z-w|^{2}=\mathcal{G}_{2}(w,z).\] On the other hand, it fails in general for \(p\in(1,3)\setminus\{2\}\). Indeed, for \(k=1,2,\ldots\), let \(w^{(k)}:=\left(1,\frac{1}{k}\right)\), \(z^{(k)}:=\left(1,\frac{2}{k}\right)\). Then, by (4.1), \[|\mathcal{J}_{p}(w^{(k)},z^{(k)})| = \left|\frac{2^{p-1}}{k^{p-1}}-\frac{1}{k^{p-1}}-\frac{p-1}{k^{p-1}}\right|=\left|2^{p-1}-p\right|\frac{1}{k^{p-1}},\] \[\mathcal{G}_{p}(w^{(k)},z^{(k)}) = \frac{1}{k^{2}}\left(1+\frac{4}{k^{2}}\right)^{\frac{p-2}{2}}.\] Our claim is verified by noting that \[\frac{|\mathcal{J}_{p}(w^{(k)},z^{(k)})|}{\mathcal{G}_{p}(w^{(k)},z^{(k)})}=\frac{k^{3-p}\left|2^{p-1}-p\right|}{\left(1+\frac{4}{k^{2}}\right)^{\frac{p-2}{2}}}\to\infty\quad\text{as }k\to\infty.\] Estimate (D.1) allows us to substantially simplify the proof of the polarized Hardy-Stein identity (4.5). Indeed, for \(f,g\in L^{p}(\mathbb{R}^{d})\), \(u(t)=P_{t}f\), \(v(t)=P_{t}g\), \(\Phi=(u,v)\), and \(p\geq 3\), from Theorem 3.1 we have that (D.2) \[\int_{0}^{\infty}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{H}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y\mathrm{d}t<\infty,\] so in view of Lemma D.1, an analogous integral of \(|\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))|\) is convergent as well. 
We next review the proof of Theorem 4.1: for \(f,g\in\mathcal{D}_{p}(L)\), we differentiate \(u(t)v(t)^{\langle p-1\rangle}\) in \(L^{1}(\mathbb{R}^{d})\) as in (4.12) and we have: \[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}u(t)v(t)^{\langle p-1\rangle}\,\mathrm{d}x=-\lim_{h\to 0^{+}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\frac{p_{h}(x,y)}{h}\,\mathrm{d}x\mathrm{d}y.\] Since \(|\mathcal{J}_{p}|\leq c_{p}\mathcal{G}_{p}\asymp\mathcal{H}_{p}\) by Lemma D.1 and the integral in (D.2) is convergent, we can pass to the limit when \(h\to 0^{+}\) (we use the Dominated Convergence Theorem, (**P1**), and (**P2**)) to obtain (D.3) \[\frac{\,\mathrm{d}}{\,\mathrm{d}t}\int_{\mathbb{R}^{d}}u(t)v(t)^{\langle p-1\rangle}\,\mathrm{d}x=-\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\mathcal{J}_{p}(P_{t}\Phi(x),P_{t}\Phi(y))\nu(x,y)\,\mathrm{d}x\mathrm{d}y.\] The rest of the proof remains unchanged: we integrate from \(0\) to \(T\) with \(T>0\) fixed, then we pass to the limit \(T\to\infty\). Then, we relax the assumption that \(f,g\in\mathcal{D}_{p}(L)\) by using the analyticity of the semigroup. ## Appendix E Proof of (1.4) Let \(\{B_{t},t\geq 0\}\) be the Brownian motion on the Euclidean space \(\mathbb{R}^{d}\) running at twice the usual speed, and let \((P_{t})_{t\geq 0}\) be its semigroup: \[P_{t}f(x):=\mathbb{E}_{x}f(B_{t})=\int_{\mathbb{R}^{d}}f(y)p_{t}(x,y)\,\mathrm{d}y=(p_{t}*f)(x),\quad t>0,x\in\mathbb{R}^{d},\] where \[p_{t}(x)=(4\pi t)^{-d/2}e^{-\frac{|x|^{2}}{4t}},\quad t>0,x\in\mathbb{R}^{d}\] and \(p_{t}(x,y):=p_{t}(x-y)\), as before. Let \(1<p<\infty\). It is well known that \((P_{t})_{t\geq 0}\) is a strongly continuous, analytic, Markovian semigroup of symmetric operators in \(L^{p}(\mathbb{R}^{d})\). In particular, for every \(t>0\) and \(f\in L^{p}(\mathbb{R}^{d})\), \(P_{t}f\) belongs to the domain of the generator of this semigroup. Estimates (2.16) and (2.17) hold true as well, therefore the key ingredients needed to prove the Hardy-Stein identity remain satisfied for the Brownian motion. Thus, for every \(u\in L^{p}(\mathbb{R}^{d})\), we define, as before, \[\mathcal{E}_{p}[u]:=\lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})\] and \[\mathcal{D}(\mathcal{E}_{p})=\{u\in L^{p}(\mathbb{R}^{d}):\text{ finite }\lim_{t\to 0}\mathcal{E}^{(t)}(u,u^{\langle p-1\rangle})\text{ exists}\}.\] As in the proof of Theorem 3.1, we obtain (E.1) \[\int_{\mathbb{R}^{d}}|f|^{p}\,\mathrm{d}x=p\int_{0}^{\infty}\mathcal{E}_{p}[P_{t}f]\,\mathrm{d}t,\quad f\in L^{p}(\mathbb{R}^{d}).\] The generator of the Gaussian semigroup \((P_{t})_{t\geq 0}\) acting on \(u\in L^{p}(\mathbb{R}^{d})\) is \[Lu:=\lim_{h\to 0^{+}}\frac{1}{h}(P_{h}u-u),\quad\text{if the limit exists in }L^{p}(\mathbb{R}^{d}).\] We can also write \[Lu=\sum_{j=1}^{d}\frac{\partial^{2}u}{\partial x_{j}^{2}},\quad u\in\mathcal{D}_{p}(L),\] where the partial derivatives of \(u\) are understood in the distributional sense. We kept the letter \(L\) here, to be in accordance with the previous development. The domain of the generator is \[\mathcal{D}_{p}(L) := \{u\in L^{p}(\mathbb{R}^{d}):\lim_{h\to 0^{+}}(P_{h}u-u)/h\text{ exists in }L^{p}(\mathbb{R}^{d})\}\] \[= \left\{u\in L^{p}(\mathbb{R}^{d}):\sum_{j=1}^{d}\frac{\partial^{2}u}{\partial x_{j}^{2}}\in L^{p}(\mathbb{R}^{d})\right\}.\] In Appendix F we explain and justify the above statements. 
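As a quick illustration of (E.1) (added here as a sanity check; it is not part of the original argument), consider \(p=2\). Using the identification \(\mathcal{E}_{2}[P_{t}f]=\int_{\mathbb{R}^{d}}|\nabla P_{t}f|^{2}\,\mathrm{d}x\) (see (E.2)–(E.4) below) and the Fourier transform \(\hat{f}(\xi)=\int_{\mathbb{R}^{d}}f(x)e^{-ix\cdot\xi}\,\mathrm{d}x\), for which the heat kernel above gives \(\widehat{P_{t}f}(\xi)=e^{-t|\xi|^{2}}\hat{f}(\xi)\), Plancherel's theorem yields \[2\int_{0}^{\infty}\mathcal{E}_{2}[P_{t}f]\,\mathrm{d}t=\frac{2}{(2\pi)^{d}}\int_{0}^{\infty}\int_{\mathbb{R}^{d}}|\xi|^{2}e^{-2t|\xi|^{2}}|\hat{f}(\xi)|^{2}\,\mathrm{d}\xi\,\mathrm{d}t=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d}}|\hat{f}(\xi)|^{2}\,\mathrm{d}\xi=\int_{\mathbb{R}^{d}}|f|^{2}\,\mathrm{d}x,\] which is (E.1) for \(p=2\) and \(f\in L^{2}(\mathbb{R}^{d})\).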
As earlier, for \(u\in\mathcal{D}_{p}(L)\subset\mathcal{D}(\mathcal{E}_{p})\), (E.2) \[\mathcal{E}_{p}[u]=-\langle Lu,u^{\langle p-1\rangle}\rangle.\] To express the Hardy-Stein identity in a more explicit form, we need the following identity, which was proved by Metafune and Spina [27]. **Lemma E.1**.: _Let \(1<p<\infty\). For \(u\in W^{2,p}(\mathbb{R}^{d})\),_ (E.3) \[\int_{\mathbb{R}^{d}}u^{\langle p-1\rangle}Lu\,\mathrm{d}x=-(p-1)\int_{\mathbb{R}^{d}}|u|^{p-2}|\nabla u|^{2}\,\mathrm{d}x,\] _where \(W^{k,p}(\mathbb{R}^{d})\) is the Sobolev space of order \(k\)._ It is not hard to see that for \(t>0\) and \(f\in L^{p}(\mathbb{R}^{d})\), we have \(P_{t}f\in W^{2,p}(\mathbb{R}^{d})\). Indeed, for every multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\), we denote, as usual, \(|\alpha|:=\alpha_{1}+\ldots+\alpha_{d}\) and \(\partial^{\alpha}:=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\ldots\partial x_{d}^{\alpha_{d}}}\). Then, \[\|\partial^{\alpha}P_{t}f\|_{L^{p}(\mathbb{R}^{d})}=\|(\partial^{\alpha}p_{t})\ast f\|_{L^{p}(\mathbb{R}^{d})}\leq\|\partial^{\alpha}p_{t}\|_{L^{1}(\mathbb{R}^{d})}\cdot\|f\|_{L^{p}(\mathbb{R}^{d})}<\infty.\] By (E.2) and (E.3), for \(f\in L^{p}(\mathbb{R}^{d})\) and \(t>0\), (E.4) \[\mathcal{E}_{p}[P_{t}f]=-\langle\Delta P_{t}f,(P_{t}f)^{\langle p-1\rangle}\rangle=(p-1)\int_{\mathbb{R}^{d}}|P_{t}f|^{p-2}|\nabla P_{t}f|^{2}\,\mathrm{d}x.\] Since \(P_{t}f\in C^{\infty}(\mathbb{R}^{d})\), the above derivatives are taken in the classical sense. Here \(\Delta\) is the classical Laplacian. Using (E.4) we can express the Hardy-Stein identity (E.1) for the Gaussian semigroup in the desired form. This finishes the proof of (1.4). ## Appendix F The generator of the Gaussian semigroup in \(L^{p}\) For completeness we prove the equivalence of two definitions of the Laplacian on \(L^{p}(\mathbb{R}^{d})\) used in Appendix E. Let \(1<p<\infty\). Let \((P_{t})_{t\geq 0}\) be the Gaussian semigroup. It is well known that for \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), the space of infinitely differentiable functions on \(\mathbb{R}^{d}\) with compact support, \[\lim_{h\to 0^{+}}\frac{1}{h}(P_{h}\varphi-\varphi)=\sum_{j=1}^{d}\frac{\partial^{2}\varphi}{\partial x_{j}^{2}}=:\Delta\varphi,\quad\text{ (the limit taken in $L^{p}(\mathbb{R}^{d})$)}.\] We show that the equality also holds for those functions from \(L^{p}(\mathbb{R}^{d})\) for which the right-hand side, understood in the distributional sense, belongs to \(L^{p}(\mathbb{R}^{d})\), without further regularity assumptions. The _semigroup Laplacian_ is defined as (F.1) \[Lf:=\lim_{h\to 0^{+}}\frac{1}{h}(P_{h}f-f),\quad f\in\mathcal{D}_{p}(L)\subset L^{p}(\mathbb{R}^{d}),\] where the limit above is taken in \(L^{p}(\mathbb{R}^{d})\), for \(f\) in the natural domain: \[\mathcal{D}_{p}(L):=\{u\in L^{p}(\mathbb{R}^{d}):\lim_{h\to 0^{+}}(P_{h}u-u)/h\text{ exists in }L^{p}(\mathbb{R}^{d})\}.\] Since \(L\) is the generator of a strongly continuous semigroup in \(L^{p}(\mathbb{R}^{d})\), the operator \(\lambda I-L\colon\mathcal{D}_{p}(L)\to L^{p}(\mathbb{R}^{d})\) is a bijection for every \(\lambda>0\). We then recall the notion of the _distributional Laplacian_ \(\tilde{L}\) of \(f\in L^{p}(\mathbb{R}^{d})\). If there exists \(g\in L^{p}(\mathbb{R}^{d})\) such that \[\langle g,\varphi\rangle=\langle f,L\varphi\rangle=\left\langle f,\sum_{j=1}^{d}\frac{\partial^{2}\varphi}{\partial x_{j}^{2}}\right\rangle=\langle f,\Delta\varphi\rangle\] for all test functions \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), then we let \(\tilde{L}f:=g\). 
The class of all such functions \(f\) is denoted \(\mathcal{D}(\tilde{L})\). In other words, \[\tilde{L}f=\sum_{j=1}^{d}\frac{\partial^{2}f}{\partial x_{j}^{2}},\] where the partial derivatives are taken in the distributional sense. The operators \(L\) and \(\tilde{L}\) coincide, which we prove below. **Lemma F.1**.: _The operator \(\tilde{L}\) is an extension of \(L\)._ Proof.: Let \(f\in\mathcal{D}_{p}(L)\). For every \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), by symmetry of the operators \(P_{h}\), \[\langle f,\Delta\varphi\rangle=\langle f,L\varphi\rangle=\lim_{h\to 0^{+}}\frac{1}{h}\left(\langle f,P_{h}\varphi\rangle-\langle f,\varphi\rangle\right)=\lim_{h\to 0^{+}}\frac{1}{h}\left(\langle P_{h}f,\varphi\rangle-\langle f,\varphi\rangle\right)=\langle Lf,\varphi\rangle.\] Thus, \(f\in\mathcal{D}(\tilde{L})\) and \(\tilde{L}f=Lf\). **Lemma F.2**.: _For every \(\lambda>0\), the operator \(\lambda I-\tilde{L}\) defined on \(\mathcal{D}(\tilde{L})\) is one-to-one._ Proof.: Assume that \(f\in\mathcal{D}(\tilde{L})\) and \(\lambda f-\tilde{L}f=0\). We prove that \(f=0.\) Take \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), then using properties of convolutions and distributional derivatives, we can write (F.2) \[\lambda f*\varphi-\tilde{L}(f*\varphi)=\lambda f*\varphi-(\tilde{L}f)*\varphi=(\lambda f-\tilde{L}f)*\varphi=0*\varphi=0.\] This yields \(f*\varphi=0.\) Indeed, assuming the contrary, since \(f*\varphi\in C_{0}^{\infty}(\mathbb{R}^{d})\), there is a point \(x_{0}\in\mathbb{R}^{d}\) which is the positive maximum or the negative minimum of \(f*\varphi\). If \(x_{0}\) is the positive maximum of \(f*\varphi\), then \(\tilde{L}(f*\varphi)(x_{0})=\Delta(f*\varphi)(x_{0})\leq 0\) and \[0=(\lambda f*\varphi)(x_{0})-\tilde{L}(f*\varphi)(x_{0})\geq(\lambda f*\varphi)(x_{0})>0,\] a contradiction. The case of \(x_{0}\) being the negative minimum is handled similarly. Therefore \(f*\varphi=0\) for every \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})\), meaning \(f=0\). **Proposition F.3**.: _We have \(\mathcal{D}_{p}(L)=\mathcal{D}(\tilde{L})\) and \(L=\tilde{L}\)._ Proof.: Take any \(\lambda>0\). In view of Lemmas F.1 and F.2, the bijection \(\lambda I-L:\mathcal{D}_{p}(L)\to L^{p}(\mathbb{R}^{d})\) and its injective extension \(\lambda I-\tilde{L}:\mathcal{D}(\tilde{L})\to L^{p}(\mathbb{R}^{d})\) are equal, \(\mathcal{D}_{p}(L)=\mathcal{D}(\tilde{L})\), \(\lambda I-L=\lambda I-\tilde{L}\). Thus, \(L=\tilde{L}\).
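The identity (E.3) is also easy to check numerically. The following minimal sketch (ours, not from the original text) verifies it in dimension \(d=1\) for the illustrative choice \(u(x)=e^{-x^{2}}\) and \(p=3\); the test function and all names in the script are assumptions made purely for illustration.

```python
# Numerical sanity check of (E.3) in d = 1:
#   \int u^{<p-1>} u'' dx = -(p-1) \int |u|^{p-2} (u')^2 dx
# for the illustrative choice u(x) = exp(-x^2) and p = 3 (here u > 0,
# so u^{<p-1>} = u^{p-1}).
import numpy as np
from scipy.integrate import quad

p = 3.0

def u(x):
    return np.exp(-x**2)

def du(x):            # u'(x)
    return -2.0 * x * np.exp(-x**2)

def d2u(x):           # u''(x), i.e. Lu in d = 1
    return (4.0 * x**2 - 2.0) * np.exp(-x**2)

lhs, _ = quad(lambda x: u(x)**(p - 1.0) * d2u(x), -np.inf, np.inf)
rhs, _ = quad(lambda x: -(p - 1.0) * u(x)**(p - 2.0) * du(x)**2, -np.inf, np.inf)

print(lhs, rhs)   # both approximately -1.364 (= -(4/3) * sqrt(pi/3))
assert abs(lhs - rhs) < 1e-8
```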
2305.19748
UKP-SQuARE: An Interactive Tool for Teaching Question Answering
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course. Additionally, the breadth of QA derived from this exponential growth makes it an ideal scenario for teaching related NLP topics such as information retrieval, explainability, and adversarial attacks among others. In this paper, we introduce UKP-SQuARE as a platform for QA education. This platform provides an interactive environment where students can run, compare, and analyze various QA models from different perspectives, such as general behavior, explainability, and robustness. Therefore, students can get a first-hand experience in different QA techniques during the class. Thanks to this, we propose a learner-centered approach for QA education in which students proactively learn theoretical concepts and acquire problem-solving skills through interactive exploration, experimentation, and practical assignments, rather than solely relying on traditional lectures. To evaluate the effectiveness of UKP-SQuARE in teaching scenarios, we adopted it in a postgraduate NLP course and surveyed the students after the course. Their positive feedback shows the platform's effectiveness in their course and invites a wider adoption.
Haishuo Fang, Haritz Puerto, Iryna Gurevych
2023-05-31T11:29:04Z
http://arxiv.org/abs/2305.19748v2
# UKP-SQuARE: An Interactive Tool for Teaching Question Answering ###### Abstract The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course. Additionally, the breadth of QA derived from this exponential growth makes it an ideal scenario for teaching related NLP topics such as information retrieval, explainability, and adversarial attacks among others. In this paper, we introduce UKP-SQuARE as a platform for QA education. This platform provides an interactive environment where students can run, compare, and analyze various QA models from different perspectives, such as general behavior, explainability, and robustness. Therefore, students can get first-hand experience with different QA techniques during the class. Thanks to this, we propose a learner-centered approach for QA education in which students proactively learn theoretical concepts and acquire problem-solving skills through interactive exploration, experimentation, and practical assignments, rather than solely relying on traditional lectures. To evaluate the effectiveness of UKP-SQuARE in teaching scenarios, we adopted it in a postgraduate NLP course and surveyed the students after the course. Their positive feedback shows the platform's effectiveness in their course and invites a wider adoption. ## 1 Introduction Question Answering (QA) is one of the overarching research topics in Natural Language Processing (NLP). QA pipelines have been developed to address different types of questions, knowledge sources, and answer formats, including extractive, abstractive, knowledge base, multiple-choice, generative, and open-domain QA. Such a massive number of QA systems and relevant NLP techniques are making QA lectures more important in NLP courses. However, despite QA being an application-oriented topic (e.g., chatbots, virtual assistants, etc.), classes are usually theoretically driven. Thus, in this paper, we propose the use of the UKP-SQuARE platform as a tool for QA education. This platform integrates most QA formats, popular models, datasets, and analysis tools, such as explainability, adversarial attacks, and graph visualizations. Compared with conventional teacher-led classes, we propose a learner-centered class following the flipped classroom [1] with UKP-SQuARE as the driving tool of the lecture. This tool provides an interface for users to interact with different QA models and analysis tools. Therefore, students can actively learn about QA systems and get hands-on experience by interacting with models on the platform. Concretely, students can flexibly compare multiple architectures that model different QA formats, analyze their outputs with explainability tools, and even analyze their robustness against adversarial attacks. Prior studies have shown that flipped classroom lectures improve the learning process of students in programming courses [1]. Thus, we believe that teaching and learning QA through a live demo with this platform can also make NLP lectures more engaging, drawing students' attention and interest in the topics. To investigate the effectiveness of UKP-SQuARE in QA education, we adopted it for the first time in a postgraduate NLP course1 and conducted a survey afterward. The positive feedback from the students encourages us to continue adopting this platform and education method in more NLP courses. 
The contributions of this paper are: i) a novel interactive learner-centered methodology to teach QA and relevant NLP topics, ii) extending the UKP-SQuARE platform for teaching QA, and iii) the design of a syllabus for interactive QA lectures. Footnote 1: Master’s level course ## 2 UKP-SQuARE UKP-SQuARE Baumgartner et al. (2022); Sachdeva et al. (2022); Puerto et al. (2023) is an extendable and interactive QA platform that integrates numerous popular QA models such as deepset's roberta-base-squad2, SpanBERT Joshi et al. (2020) for HotpotQA, and QAGNN Yasunaga et al. (2021). It provides an ecosystem for QA research, including comparing different models, explaining model outputs, adversarial attacks, graph visualizations, behavioral tests, and multi-agent models. In addition, this platform provides a user-friendly interface3 that enables users to interact. Users can run available models, deploy new ones, compare their behaviors, and explain outputs. Footnote 2: [https://huggingface.co/deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) Footnote 3: [https://square.ukp-lab.de/](https://square.ukp-lab.de/) ## 3 Learning Question Answering with UKP-SQuARE In this section, we present the syllabus of a lecture focused on QA and relevant NLP topics that uses the platform UKP-SQuARE following the flipped classroom methodology Bishop and Verleger (2013). The flipped classroom is an effective learner-centered educational methodology in which students study pre-recorded lectures and materials in advance to engage in more interactive and collaborative learning activities in class. UKP-SQuARE can be the driving tool for the flipped classroom in QA education. With our platform, lecturers can introduce the topics by interacting with the students and then proceed to an in-depth explanation of the technical details behind the methods of each topic. We propose dividing the lecture into three topics in the QA field: basic QA concepts, trustworthy QA, and multi-agent QA systems. With these topics, students can learn about QA and related NLP topics such as information extraction, explainability, adversarial attacks, and multi-agent systems. ### Learning Basic QA Components QA systems include two main components, i.e., Readers and Retrievers. Readers are QA models responsible for obtaining answers from the context retrieved by retrievers. In UKP-SQuARE, students can easily learn various readers (QA models) within different QA formats and information retrieval techniques via interacting with the interface. #### 3.1.1 Contrasting Different QA Formats With UKP-SQuARE, students can get first-hand experience by interacting with multiple models on our platform. The home readings would include descriptions of the main QA datasets and their baselines. In class, the lecturer can show the different QA formats with real demonstrations of the models and explain on the fly the architectural differences needed to model each QA format. An example is shown in Figure 1, where a span-extraction QA model, i.e., SpanBERT, and a multiple-choice QA model, i.e., a CommonsenseQA model, are presented to show the difference between these two QA formats. Such interactions can make theoretical explanations of the architectures easier to digest and, therefore, the class more engaging. 
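To make the contrast between the two formats concrete, a minimal sketch with the Hugging Face transformers library could look as follows. This is our illustration rather than UKP-SQuARE code: the extractive checkpoint is the deepset SQuAD 2.0 model mentioned above, while the multiple-choice checkpoint and the example inputs are illustrative assumptions.

```python
# Minimal sketch (not UKP-SQuARE code) contrasting two QA formats.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice, pipeline

# 1) Extractive (span-extraction) QA: the answer is a span of the context.
extractive_qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
pred = extractive_qa(
    question="Most of the mass of what celestial body is in its core?",
    context="The Sun holds most of its mass in its hot, dense core.",
)
print(pred["answer"])  # a substring of the context, e.g. "The Sun"

# 2) Multiple-choice QA: the model scores each (question, choice) pair.
mc_name = "LIAMF-USP/roberta-large-finetuned-race"  # illustrative checkpoint; any multiple-choice model works
tok = AutoTokenizer.from_pretrained(mc_name)
mc_model = AutoModelForMultipleChoice.from_pretrained(mc_name)

question = "Where would you find a basement that can be accessed with an elevator?"
choices = ["office building", "forest", "lake", "meadow"]
enc = tok([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (batch=1, n_choices, seq_len)
with torch.no_grad():
    logits = mc_model(**inputs).logits                # shape: (1, n_choices)
print(choices[logits.argmax(dim=-1).item()])
```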
#### 3.1.2 Learning Information Retrieval To learn Information Retrieval (IR) methods, the user interface of UKP-SQuARE offers a compelling approach to help students differentiate between different IR methods, e.g., lexical retrieval and semantic retrieval, and understand how they affect the final performance of QA models. The home readings would include book chapters or slides describing the main IR methods such as TF-IDF Sparck Jones (1988), BM25 Robertson et al. (1995), Sentence-BERT Reimers and Gurevych (2019), and Dense Passage Retrieval DPR Karpukhin et al. (2020). Like the above section, the lecturer can guide students to find the difference between lexical retrieval (e.g., BM25) and semantic retrieval (e.g., DPR) via playing with UKP-SQuARE by themselves. As shown in Figure 2, for the question _When was Barack Obama's inauguration?_, the BM25 retriever returns a passage covering all keywords but irrelevant to the question, while the DPR retriever returns the correct document, which contains the answer to the question. By providing this example in class, students can easily understand that DPR retrieves semantically similar passages while BM25 only retrieves passages that contain the query tokens and, thus, may retrieve unrelated passages. This could be further explored by comparing two open-domain QA models implementing these retrieval methods and the same reader model to demonstrate the error propagation due to irrelevant passages. This active learning method can prevent the issue of students losing attention that commonly occurs in traditional lectures Felder and Brent (2003). ### Learning Trustworthy QA Systems In addition to learning basic QA components, it is important to understand how to identify and evaluate trustworthy QA systems. This involves several related NLP topics, such as explainability, transparency, and robustness. UKP-SQuARE provides such analysis tools to facilitate students' learning process of trustworthy QA systems. #### 3.2.1 Explainability Methods The exponential adoption of AI is pushing regulators to adopt policies to regulate its use. One of the key points they aim to address is the explainability of these methods to make AI safer4. Thus, it is of utmost importance to include explainability methods on AI courses in Universities. In terms of the explainability of QA models, UKP-SQuARE includes BertViz (Vig, 2019) and a suite of saliency map methods to facilitate the understanding of the model's decision-making process. Saliency maps employ attribution-weighting techniques such as gradient-based (Simonyan et al., 2014; Sundararajan et al., 2017) and attention-based (Jain et al., 2020; Serrano and Smith, 2019) methods to determine the relative importance of each token for the model prediction. The descriptions of these methods would form part of the home readings and to make the classes more active, the class would be driven by real examples of saliency maps using our platform and their interpretation. In this way, students can learn how to explain the output of a QA model based on saliency maps. Footnote 4: [https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence](https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence) An example of a saliency map is shown in Figure 3. The color level of the highlighted text reflects its importance for the answer. 
As we can see, _of what celestial body?_ is the most important part of the question, while _sun_ gets the most attention in the context, which is the final answer. This means the model successfully understands the main point of the question and can link it to the context. Making this type of interpretation can help students identify potential problems or biases in the models.

Figure 1: Different QA formats in UKP-SQuARE

Figure 2: Example of the difference between using the BM25 retriever and the DPR retriever. The red boxes represent keywords in the retrieved passages.

#### 3.2.2 Behavioral Tests in QA models The next important component in trustworthy QA is behavioral tests of models. Machine learning models do not throw errors as regular software programs do. Instead, an error in machine learning is usually an unwanted behavior, such as a misclassification that may pass unnoticed by a person Ribeiro et al. (2020). This makes testing machine learning models challenging. To simplify the behavioral analysis of machine learning models, Ribeiro et al. (2020) propose _CheckList_, a list of inputs and expected outputs that aims to analyze general linguistic capabilities of NLP models, mimicking the unit tests in software engineering. The integration of _CheckList_ into UKP-SQuARE offers a simple method to analyze the performance of QA models beyond traditional benchmarks, such as MRQA tasks Fisch et al. (2019). As illustrated in Figure 4, we test the SQuAD 2.0 RoBERTa Adapter and SQuAD 2.0 BERT Adapter using the CheckList, in which multiple NLP capabilities are tested, like coreference, negation, and robustness. As we can see, the SQuAD 2.0 BERT Adapter performs worse than the RoBERTa Adapter in the above dimensions. Such an example can be used by the lecturer in class to introduce the idea of behavioral tests on the fly. In addition, the behavioral tests of UKP-SQuARE can be used to foster the students' analytical skills. A potential assignment could be to train a QA model and deploy it on our platform to analyze it with the provided ecosystem of QA tools. In particular, thanks to the behavioral tests in UKP-SQuARE, students can provide a deeper analysis of their model based on the quantitative results of their test set and a qualitative analysis based on the behavioral test results. #### 3.2.3 Adversarial Attacks Policymakers are also designing a regulatory framework that guarantees users that their AI models are resilient to adversarial attacks5. Therefore, AI curriculums should also include adversarial attacks to prepare students for these new regulations. Footnote 5: See footnote 3 UKP-SQuARE provides tools to conduct adversarial attacks, such as HotFlip Ebrahimi et al. (2018), input reduction Feng et al. (2018), and sub-span Jain et al. (2020). Thus, the home readings should include a theoretical introduction to these methods. Then, the lecture would use the platform to exploit the interactive nature of adversarial attacks. In particular, the need to analyze examples to understand different types of attacks makes this part of the topic especially practical. Therefore, the lecturer can introduce the topic through UKP-SQuARE and delve deeper into the technical details afterward. An exemplary case is that students can attack real models with examples by tuning different parameters, such as the number of flips in HotFlip, to see how the output changes when they subtly change the input data. In Figure 5, only flipping _(full stop)_ to _wore_ can directly change the answer. 
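The following minimal sketch (ours, not UKP-SQuARE's HotFlip implementation) illustrates the kind of manual input manipulation that such attacks build on and that the in-class experiment described next can start from; the checkpoint and example texts are illustrative assumptions.

```python
# Minimal sketch of a manual robustness probe: perturb the context slightly
# and check whether an extractive QA model's answer flips. This is a
# hand-rolled probe, not UKP-SQuARE's gradient-based HotFlip tooling.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

question = "When was the company founded?"
original = "The company was founded in 1998 and moved to Berlin in 2005."
perturbations = [
    original,
    original.replace("1998", "1998 ."),               # punctuation insertion
    original.replace("founded", "renamed"),            # single content-word swap
    "In 2005 the company moved to Berlin. It was founded in 1998.",  # reordering
]

baseline = qa(question=question, context=original)["answer"]
for ctx in perturbations:
    pred = qa(question=question, context=ctx)
    flag = "" if pred["answer"] == baseline else "  <-- answer changed"
    print(f"{pred['answer']!r} (score {pred['score']:.2f}){flag}")
```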
In class, a small experiment can be set up by lecturers in which students need to manually manipulate the input to see if it can trick the model into producing incorrect answers, and compare it with adversarial attack tools to deepen their understanding of those adversarial attacks and the importance of building up trustworthy QA systems.

Figure 4: The result of running CheckList for the SQuAD 2.0 RoBERTa Adapter and BERT Adapter. The numbers of failed and successful test cases are highlighted in green and red.

Figure 3: An attention-based saliency map of a question in UKP-SQuARE.

#### 3.2.4 Graph-based QA Models Knowledge Graph Question Answering (KGQA) systems can have strong explanatory power thanks to the reasoning paths that can be extracted from the graph. Such transparency can enhance the interpretability and trustworthiness of the system. UKP-SQuARE currently offers QA-GNN (Yasunaga et al., 2021), a KGQA model that makes use of ConceptNet (Speer et al., 2017), and provides a visualization interface to explore the subgraph used by the model. Although a reasoning path in a graph may provide a clear explanation of a model's prediction, we believe that interpreting graph-based models is not straightforward because, usually, that path contains many irrelevant nodes and edges that may obscure the actual reasoning of the model. Thus, we propose to teach KGQA models with real examples of graphs. In this way, the lecturer, or even the students themselves, can go through the process of cleaning the graph to obtain and interpret the reasoning path. This process would be much more valuable for the students' future endeavors than using a set of slides with examples of preprocessed clean graphs because they will be able to reproduce what they learn in real use cases in companies. ### Learning Multi-Agent Systems Lastly, the current progress in QA is pushing toward creating robust models across multiple domains. To do this, there are two types of approaches: multi-dataset models and multi-agent models. While the former aims to train a single architecture on multiple datasets, the latter does the opposite. It trains multiple models (agents) on single datasets and combines the agents. UKP-SQuARE is compatible with both approaches; therefore, it is an ideal platform to teach them. Thanks to UKP-SQuARE, we can also follow a flipped classroom methodology to teach multi-agent systems. After reading class materials explaining the models of this topic at home, the class time can be used to explain the topic with a live demonstration of these models. In particular, we can easily show that multi-agent systems such as MetaQA (Puerto et al., 2021) select different agents depending on the input question. Figure 7 shows that the first answer selected by MetaQA, which is the correct one, is from an out-of-domain agent, while the second answer, which is not correct, is from the in-domain agent. This example illustrates the collaboration between agents achieved by multi-agent systems and can be an ideal way of starting the lecture on this topic before explaining the architectural details of MetaQA. Similarly, the platform can be used to introduce multi-dataset systems such as UnifiedQA (Khashabi et al., 2020), before delving into in-detail explanations of the model. 
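A minimal sketch of the multi-agent idea is given below. It is our simplification, not MetaQA's actual architecture: MetaQA learns a selection module, whereas this sketch simply ranks the agents' raw confidence scores; the checkpoints are illustrative public models, not UKP-SQuARE's agents.

```python
# Minimal sketch: run several single-dataset extractive QA "agents" and keep
# the most confident answer (a simplification of learned agent selection).
from transformers import pipeline

agent_names = [
    "deepset/roberta-base-squad2",       # illustrative SQuAD 2.0 agent
    "deepset/bert-base-cased-squad2",    # a second, typically weaker agent
]
agents = {name: pipeline("question-answering", model=name) for name in agent_names}

def multi_agent_answers(question: str, context: str):
    candidates = []
    for name, agent in agents.items():
        pred = agent(question=question, context=context)
        candidates.append((pred["score"], pred["answer"], name))
    # Rank all agent predictions by confidence, best first.
    candidates.sort(reverse=True)
    return candidates

question = "Which retriever returns semantically similar passages?"
context = ("BM25 retrieves passages that share tokens with the query, while "
           "DPR retrieves semantically similar passages using dense vectors.")
for score, answer, name in multi_agent_answers(question, context):
    print(f"{name}: {answer!r} ({score:.2f})")
```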
Figure 5: A HotFlip example where only flipping. _(full stop)_ to _670_ changes the answer. Figure 6: A visualized reasoning graph of the question _Where would you find a basement that can be accessed with an elevator?_ ### Assignments with UKP-SQuARE In addition to the above teaching scenarios in class, we also propose a homework assignment based on UKP-SQuARE6 that leverages the insights and knowledge they acquire from the class. The students need to train their own QA model using the popular Hugging Face's Transformer library (Wolf et al., 2020), deploy the model on our platform, and then write an in-detail report where they analyze their model from multiple perspectives. This report must include a quantitative analysis of the performance of their model on the test set and also a qualitative analysis that includes an explanation of the outputs of the model to a series of input questions, adversarial attacks that shows errors of their model, and an analysis of the possible behavioral errors obtain from _CheckList_. Furthermore, the students should also compare their model with other available models and identify the type of questions where their model fails. This would help them understand that models overfit the domain of their training data and, therefore, may fail in other domains. This assignment requires students to truly understand each component they learned during the class, which will help them consolidate their knowledge and develop a deeper understanding of the inner workings of different QA techniques. Additionally, the assignment can serve as a useful assessment tool, enabling teachers to gauge students' understanding of the material and provide targeted feedback and support as needed. Footnote 6: [https://colab.research.google.com/drive/17qwldLMmU5EDxf9TLR29zIG9-EGKmNxP?usp=share_link](https://colab.research.google.com/drive/17qwldLMmU5EDxf9TLR29zIG9-EGKmNxP?usp=share_link) ### User Study To quantitatively evaluate the effectiveness of UKP-SQuARE in teaching the above QA techniques, we designed a questionnaire to collect feedback from students. The questionnaire was administered to a group of students who had completed a graduate NLP course that used our platform in both class time and for the assignment. All participants are 20-to-30 years-old graduate students in computer science. The questionnaire mainly focuses on two aspects: whether UKP-SQuARE deepens their understanding of techniques in QA systems and whether it makes it easier to get hands-on experience in UKP-SQuARE. The majority of questions require students to rate on a scale of 1 to 5. The complete questionnaire can be found in Appendix A. Figure 8 shows the Likert scale chart with the responses of seven students who participated in the survey. As we can see, students have very positive attitudes towards all aspects of UKP-SQuARE for their QA learning. All participants think that the platform makes the class more engaging and interesting. In particular, most of them (91%) think UKP-SQuARE helps them better distinguish different QA formats. For information retrieval, the majority of the responders do not think that the platform can help them understand better the difference between lexical retrieval and semantic retrieval. The main reason behind this is that the difference between lexical and semantic retrievers is challenging to distinguish only via visualization unless students actively compare the documents by themselves. 
Figure 7: Multi-Agent QA in UKP-SQuARE: different agents are selected to predict the answer based on the input.

Besides, it also requires students to have a good understanding of semantic similarity and lexical similarity. Therefore, we plan to improve it by showing the difference between vector similarity and keyword matching between questions and retrieved documents. Regarding explainability and adversarial attack tools, around two-thirds of students believe that the platform facilitates their learning process of these topics. When it comes to hands-on experience, the vast majority of students agree that UKP-SQuARE is easy to use. Our platform provides an infrastructure that dramatically lowers the bar for students to get hands-on experience. All students think that without UKP-SQuARE, they would spend more time finding suitable open-source software to compare different models, analyze the output, and conduct adversarial attacks. Moreover, the respondents estimated that without UKP-SQuARE, the average time spent on homework would increase from 2-5 hours to more than 8 hours. One student also commented that doing experiments with the platform was straightforward and allowed him to try different ideas without any overhead. Therefore, although the survey sample is small and limits the conclusions, this overall positive feedback invites us to continue investigating how to conduct our QA and NLP classes more interactively with UKP-SQuARE and suggests that our students would benefit from extending this interactive class to other NLP topics such as generative pre-trained large language models, prompting with reinforcement learning from human feedback, word embeddings, parsing trees, and machine translation among others. ## 4 Related Work The most relevant tool is the AllenNLP demo7, which provides a user interface to the main components of the AllenNLP library Gardner et al. (2018). This website includes an interface where users can interact with five extractive QA models. However, their goal is to have a showcase of their library rather than an extensive platform for teaching QA. Thus, their functionalities are limited. Most of their deployed models are outdated, only cover extractive QA settings, and do not provide information retrieval methods. Moreover, their explainability and adversarial attacks are not compatible with their transformer-based model. Furthermore, they do not provide graph-based models, which can be useful to explain graph neural networks and explainability methods based on graphs. Additionally, it cannot be used for our homework assignment because users cannot deploy and analyze their own models with explainability and adversarial attack tools as in our platform. However, they do provide demos for other NLP topics, such as Open Information Extraction, named entity recognition, and parsing trees, among others. Footnote 7: [https://demo.allennlp.org/reading-comprehension/](https://demo.allennlp.org/reading-comprehension/) ## 5 Conclusion In this paper, we present a novel method to teach question answering to postgraduate NLP students following the learner-centered method of flipped classrooms. We propose to provide reading materials to the students before the class and use the UKP-SQuARE platform as a driving tool to conduct the class. This platform integrates the most popular QA pipelines and an ecosystem of tools to analyze the available models. These tools include explainability methods, behavioral tests, adversarial attacks, and graph visualizations. 
We provide a series of use cases for teaching based on the models and methods provided by UKP-SQuARE, showing that classes can become much more interactive with UKP-SQuARE than with conventional lectures. To evaluate the effectiveness of the platform and our methodology, we conducted a survey to collect feedback from students who took our class. The results show that most of the students think UKP-SQuARE accelerates their learning process and reduces the overhead to get hands-on experience. We plan to extend our platform to support prompting large language models, and therefore, we leave as future work creating a curriculum to teach prompting methods.

Figure 8: Students' feedback towards UKP-SQuARE used in QA education.

## Acknowledgements We thank Max Eichler, Martin Tutek, Thomas Arnold, Tim Baumgartner, and the anonymous reviewers for their insightful comments on a previous draft of this paper. This work has been funded by the German Research Foundation (DFG) as part of the UKP-SQuARE project (grant GU 798/29-1), the QASciInf project (GU 798/18-3), and by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
2309.08590
Neural Machine Translation Models Can Learn to be Few-shot Learners
The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. the ability to reproduce a specific term after being shown a single example.
Raphael Reinauer, Patrick Simianer, Kaden Uhlig, Johannes E. M. Mosig, Joern Wuebker
2023-09-15T17:44:21Z
http://arxiv.org/abs/2309.08590v1
# Neural Machine Translation Models Can Learn to be Few-shot Learners ###### Abstract The emergent ability of Large Language Models to use a small number of examples to learn to perform in novel domains and tasks, also called in-context learning (ICL). In this work, we show that a much smaller model can be trained to perform ICL by fine-tuning towards a specialized training objective, exemplified on the task of domain adaptation for neural machine translation. With this capacity for ICL, the model can take advantage of relevant few-shot examples to adapt its output towards the domain. We compare the quality of this domain adaptation to traditional supervised techniques and ICL with a 40B-parameter Large Language Model. Our approach allows efficient batch inference on a mix of domains and outperforms state-of-the-art baselines in terms of both translation quality and immediate adaptation rate, i.e. the ability to reproduce a specific term after being shown a single example. ## 1 Introduction Large Language Models (LLMs) have demonstrated few-shot learning capabilities on various natural language processing tasks, as highlighted by Brown et al. (2020) or Garcia et al. (2023). When prompted with suitable example translations, they can compete with neural machine translation (NMT) models, built and trained specifically for translating between languages (Vilar et al., 2023). Interestingly, one can adapt LLMs to specific domains merely by adding example translations to their prompt at inference time (Moslem et al., 2023). This ability to adapt to specific tasks and domains is known as _in-context learning_ (ICL). In contrast to traditional fine-tuning methods, ICL does not require a separate set of customized parameters for each domain, which implies major efficiency gains through batched inference. In this paper, we integrate ICL for domain adaptation into NMT systems in multiple steps. We compare our method for adapting NMT systems to traditional fine-tuning approaches, as well as to the domain adaptation abilities of an open-source LLM. Specifically, our main contributions are the following: 1. We evaluate an unmodified NMT system's ICL capacity for domain adaptation and demonstrate its limitations. 2. We propose a training scheme to improve an NMT model's ICL capability. 3. We show that ICL can be combined with more traditional adaptation methods to further improve domain adaptation performance. 4. We compare our method to the performance of the open-source LLM Falcon-40B (Penedo et al., 2023) on a machine translation task with ICL for domain adaptation. ## 2 Related Work Bulte and Tezcan (2019) improve the translation performance of an NMT model by integrating translation fuzzy-matched pairs from a translation memory as input to an NMT model. This idea was further expanded by Pham et al. (2020) and Xu et al. (2020), who for a given source segment use sentence embeddings to retrieve similar examples and compared different schemes for integrating those as cues into the NMT network. Our approach differs in that we only train on the tokens belonging to the translation and not on the tokens provided as context, which we show to work better. In addition, Pham et al. (2020)'s training procedure differs, as they train their model from scratch, using training data from multiple domains and evaluate on those same domains, while we train on general domain data and evaluate on a new domain that is not in the training data. 
Furthermore, we focus on the multi-domain adaptation task using light-weight adapters. This approach not only allows us to extend to new domains without retraining the full model, but also offers a more practical and efficient strategy for real-world applications. The authors of (Moslem et al., 2023) investigated the capabilities of a proprietary LLM, specifically GPT-3.5, for adaptive machine translation using ICL. Their extensive experiments showed that GPT-3.5 can adapt well to in-domain sentence pairs and/or terminology. ## 3 Experiments We conduct a series of experiments to develop NMT systems that exceed at few-shot ICL domain adaptation. Here we present the experiments in a logical order, where we start with the baseline models described in Section 3.1 and subsequently introduce several stages of development. In stages 0 and 1 we attempt ICL with the unmodified and domain-fine-tuned baseline models, respectively. Then, in Stage 2, we fine-tune the baseline model to the _task_ of domain ICL, instead of a particular domain. Finally, we combine ICL and domain adaptation through fine-tuning in Stage 3. Our experimental progression was guided by the metrics and datasets that we introduce in Sections 3.5 and 3.6, respectively. ### Models Throughout this paper, we work with an NMT system and the Falcon-40B LLM, which we both describe here. #### 3.1.1 Falcon LLM To provide a direct comparison with LLMs and their capacity for ICL, we conduct experiments with the decoder-only Transformer language model Falcon-40B (Penedo et al., 2023), specifically the non-instruction-tuned variant1. Inference is done with greedy decoding. Following previous work (Bawden and Yvon, 2023; Garcia et al., 2023; Hendy et al., 2023) (_inter-alia_) the model is prompted to perform translation without specific fine-tuning towards the machine translation task. Footnote 1: The model is available from the _huggingface_ platform: [https://huggingface.co/tiuae/falcon-40b](https://huggingface.co/tiuae/falcon-40b) A simple prompt template is used for all \(k\)-shot experiments with Falcon-40B, see Figure 1. In preliminary experiments we found that \(k=0\) does not work well with this specific model2 - the outputs tend to be entirely hallucinated. Footnote 2: For \(k=0\) the prompt contains only the single source sentence as input and the target language followed by a colon. #### 3.1.2 NMT Systems The baseline model that we use as the starting point for all further experiments is a Transformer (Vaswani et al., 2017) model with 12 encoder layers and two decoder layers, implemented with the NVIDIA NeMo toolkit (Kuchaiev et al., 2019). The embedding size is 1,024 with a feed-forward network dimension of 4,096. The model has a joint vocabulary of 32,768 tokens, while embedding matrices are specific to the encoder, decoder, and output projection modules, i.e. parameters are not shared between them. The model was trained to support a maximum input size of 1,536 tokens by augmenting the training data with random concatenations of parallel sentences. We evaluate the model using greedy decoding. For the experiments presented here, the baseline model is either fine-tuned in full (Stage 2a and Stage 2b), or light-weight adapters (Bapna and Firat, 2019) are added to the model (Stage 1 and Stage 3). We choose full-model fine-tuning on out-of-domain data to adapt the NMT model to a new task - translating with an increased context of related examples - and adapter layers for learning from in-domain data. 
The adapters we use follow Bapna and Firat (2019)'s formulation, but with layer normalization applied after the bottleneck rather than before it. We use a bottleneck width of 256 and insert adapters in every layer of the decoder and every other layer of the encoder. We always fine-tune with the Adam optimizer (Kingma and Ba, 2014) and early stopping based on validation loss. Figure 1: Prompt template for LLM. ### Stage 0 & Stage 1: ICL with a Standard NMT Model Motivated by the few-shot learning capabilities of LLMs, we examine the ability of a standard English-to-German NMT model to adapt to a domain given only similar and relevant translation pairs as additional context, i.e., without changing the model's parameters. To find similar source segments in the translation memory, we search for nearest neighbours in an embedding space. We use the multi-lingual sentence embedding model3 from the sentence transformer library (Reimers and Gurevych, 2020) to embed the source sides of all segment pairs. Then we employ hnswlib (Malkov and Yashunin, 2020) to find the approximate nearest neighbours: Each source sentence in the domain-specific datasets is first encoded with the sentence-embedding model and then added to an index. For the sake of simplicity in this paper, we will refer to the approximate nearest neighbors simply as nearest neighbors. To measure the similarity between a pair of segments \(\mathsf{s}\) and \(\mathsf{s}^{\prime}\), we use the cosine distance of the corresponding embedding vectors \(\mathsf{v}_{\mathsf{s}}\) and \(\mathsf{v}_{\mathsf{s}^{\prime}}\), i.e., Footnote 3: Model name on [https://www.sbert.net/](https://www.sbert.net/): all-MiniLM-L6-v2 \[\mathrm{d}(\mathsf{s},\mathsf{s}^{\prime}):=1-\frac{\mathsf{v}_{\mathsf{s}}\cdot\mathsf{v}_{\mathsf{s}^{\prime}}}{\|\mathsf{v}_{\mathsf{s}}\|_{2}\cdot\|\mathsf{v}_{\mathsf{s}^{\prime}}\|_{2}}.\] For a given source \(\mathsf{s}\) and target segment \(\mathsf{t}\), we identify its nearest neighbours \(\mathsf{s}_{1}\), \(\mathsf{s}_{2}\),..., \(\mathsf{s}_{k}\), using the cosine distance above. Each source sentence \(\mathsf{s}_{i}\) is paired with a reference translation \(\mathsf{t}_{i}\) for \(i=1,...,k\). We sort the pairs by their distance to \(\mathsf{s}\) in the embedding space, i.e., \[\mathrm{d}(\mathsf{s},\mathsf{s}_{1})\leq\mathrm{d}(\mathsf{s},\mathsf{s}_{2})\leq...\leq\mathrm{d}(\mathsf{s},\mathsf{s}_{k})\;.\] Our assumption is that similar segments should have similar translations. For Stage 0 of the experiments, we treat the context sentences and actual source text as one body of text, separated only by a single space, ordering the segments from least similar to most similar, with the current source segment \(\mathsf{s}\) at the end. As a result, the input of the encoder is \[\text{<bos> }\mathsf{s}_{k}\ \mathsf{s}_{k-1}\ ...\ \mathsf{s}_{1}\ \mathsf{s}\ \text{<eos>}\] while for the decoder, we use the prefix: \[\text{<bos> }\mathsf{t}_{k}\ \mathsf{t}_{k-1}\ ...\ \mathsf{t}_{1}\] where \(\text{<bos>}\) and \(\text{<eos>}\) represent the beginning-of-sentence and end-of-sentence tokens, respectively. The model's task is then to continue from the target prefix by generating a translation of the source segment \(\mathsf{s}\). In our experiments, we evaluated the translation performance using a varying number \(k\) of nearest neighbors, specifically \(k\in\{1,2,5\}\). In Stage 1 we run additional experiments where we fine-tune the model for each domain, using the in-domain training data in the original format.
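Before turning to the Stage 1 fine-tuning details, here is a minimal sketch of the retrieval step described above, using the sentence-transformers model named in footnote 3 together with hnswlib. The index hyperparameters (ef_construction, M) are illustrative choices, not values reported in the paper, and the handling of <bos>/<eos> tokens is assumed to be left to the tokenizer.

```
# Sketch of nearest-neighbour retrieval and Stage 0 input assembly.
# Index settings (ef_construction, M) are illustrative assumptions.
import hnswlib
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # model named in footnote 3

def build_index(train_sources):
    """Embed all training-source segments and add them to an HNSW index."""
    embeddings = embedder.encode(train_sources, convert_to_numpy=True)
    index = hnswlib.Index(space="cosine", dim=embeddings.shape[1])
    index.init_index(max_elements=len(train_sources), ef_construction=200, M=16)
    index.add_items(embeddings, ids=list(range(len(train_sources))))
    return index

def retrieve(index, train_pairs, source, k=5):
    """Return the k approximate nearest (source, target) pairs with their
    cosine distances d(s, s_i), smallest distance (most similar) first."""
    query = embedder.encode([source], convert_to_numpy=True)
    labels, distances = index.knn_query(query, k=k)
    return [(train_pairs[i], d) for i, d in zip(labels[0], distances[0])]

def stage0_encoder_input(source, neighbours):
    """Stage 0 input: context sources ordered from least to most similar,
    followed by the current source segment (special tokens added later)."""
    ordered = sorted(neighbours, key=lambda x: -x[1])       # largest distance first
    ctx = [src for (src, _tgt), _d in ordered]
    return " ".join(ctx + [source])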
This domain-specific fine-tuning is performed by injecting adapter layers (Bapna and Firat, 2019) into the network while freezing the rest of the model, and leveraging a standard negative log-likelihood (NLL) loss for training. For each domain, we then test the fine-tuned model directly (\(0\)-shot in Tables 3 and 4) as well as with ICL (\(k\)-shot with \(k\neq 0\)). Adapters are trained towards convergence, i.e. until there is no further improvement in terms of validation loss. ### Stage 2a & Stage 2b: Fine-Tuning towards ICL To improve the model's capability to use nearest neighbor examples in the context, we further fine-tune the full model on out-of-domain data, namely _News-Commentary4_(Kocmi et al., 2022), which contains roughly 450K parallel segments. For validation we use a sample of 2K parallel segments from _EuroPar5_(Koehn, 2005). For this full model fine-tuning we do not train until convergence, but apply aggressive early stopping: Training is stopped when the validation loss does not decrease by at least 0.1 twice in a row, validating for every 1% of an epoch. This is to encourage the model to only learn the new task and data format, but not adapt to a new data distribution. Footnote 4: From the WMT’23 evaluation campaign: https://data. statmt.org/news-commentary/v18.1/ Instead of directly concatenating the nearest neighbors to the training examples, we add a special separation token - <sep> - to separate the source and target segments. We then construct the training instances for the encoder as: \[\text{<bos> }\mathsf{s}_{k}\ \text{<sep> }\mathsf{s}_{k-1}\ \text{<sep> }\...\ \text{<sep> }\mathsf{s}_{1}\ \text{<sep> }\mathsf{s}\ \text{<eos>}\] and for the decoder as: \[\text{<bos> }\mathsf{t}_{k}\ \text{<sep> }\mathsf{t}_{k-1}\ \text{<sep> }\...\ \text{<sep> }\mathsf{t}_{1}\ \text{<sep> }\mathsf{t}\ \text{<eos>} \tag{1}\] and compute the NLL loss on all tokens of (1). This training loss is identical to the one used in Pham et al. (2020). We denote this procedure as Stage 2a. For Stage 2b the idea is that the model should learn to predict the target segment from the source segment using the nearest neighbor translations but not learn to predict \(t_{k},...,t_{1}\) as in Pham et al. (2020). Hence we mask the NLL training loss such that it is computed only on the tokens that belong to the target segment t, excluding all context tokens, thus fully focusing the training signal on translating t in the context of its \(k\) nearest neighbors. We then use the same format as in Stage 2a for training, while at inference time we provide the decoder with a prefix containing the ICL examples: \(\texttt{<bos>t}_{k}\)\(\texttt{<sep>t}_{k-1}\)\(\texttt{<sep>}\)\(...\)\(\texttt{<sep>t}_{1}\)\(\texttt{<sep>}\) Finally, we measure quality of the predicted translation \(\hat{t}\) by computing BLEU and COMET scores with the target segment t as reference. For both Stage 2a and Stage 2b, the \(k\)-nearest neighbors for each segment in the training data and validation data are extracted from the entire _News-Commentary_ dataset as described in Section 3.2. ### Stage 3: Combining ICL and Domain Adaptation To combine Stage 2b's ICL capacity with adapter-based domain adaptation, we add adapters to the model from Stage 2b using the same configuration as for the Stage 1 experiments. Again, we train separate adapter layers for each domain. Each example from the training set is annotated with its nearest neighbors from the same training set, excluding itself. 
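As a minimal illustration of the Stage 2b objective described above, the sketch below computes the NLL loss only on tokens of the target segment t, masking out the in-context examples and separator tokens. It assumes a decoder that returns per-token logits and a precomputed target mask; how such a mask is produced inside the actual NeMo training pipeline is not shown here.

```
# Sketch of the Stage 2b masked loss: NLL on the target segment t only,
# excluding context translations t_k ... t_1, <sep> tokens, and padding.
# With a mask of all ones this reduces to the Stage 2a objective.
import torch
import torch.nn.functional as F

def masked_nll_loss(logits, labels, target_mask, pad_id=0):
    """
    logits:      (batch, seq_len, vocab) decoder outputs
    labels:      (batch, seq_len) gold ids for <bos> t_k <sep> ... <sep> t <eos>
    target_mask: (batch, seq_len) 1.0 on tokens of t (and its <eos>), 0.0 elsewhere
    """
    vocab = logits.size(-1)
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab), labels.reshape(-1),
        ignore_index=pad_id, reduction="none",
    ).reshape(labels.shape)
    masked = token_loss * target_mask           # keep only target-segment terms
    return masked.sum() / target_mask.sum().clamp(min=1.0)
```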
### Metrics For evaluating translation quality, we used the SacreBLEU framework Post (2018) that implements the BLEU metric Papineni et al. (2002). We also evaluate with reference-based COMET Rei et al. (2022) to compare the model outputs to the reference translations in the test data. ### Datasets We run our experiments with the English-German language pair on 8 domains from the ACED- and MDNS corpus collections, which we describe in this section. Statistics for all datasets are provided in Table 1. #### 3.6.1 ACED corpus The ACED corpus Lin et al. (2022) is comprised of three distinct datasets, namely Asics, Emerson, and Digitalocean, each consisting of English-German sentences extracted from various domains. ACED is a real-world benchmark containing data derived from translations performed by humans. #### 3.6.2 MDNS corpus The MDNS corpus Aharoni and Goldberg (2020) is a multi-domain corpus containing English-German parallel text from five diverse domains (IT, Koran, Law, Medical, Subtitles). It was specifically created for evaluating domain-adaptation. ## 4 Results Here we discuss the experimental results, progressing from Stage 0 to Stage 3. All results are depicted separately for ACED- and MDNS corpora in Tables 3 and 4 respectively. ### Stage 0: ICL with Baseline NMT Model When we add nearest neighbors to the inputs and target prefixes we first observe that the automated metrics are mostly improved across all datasets. Notably, the result with 1-shot nearest neighbors is the best in this group of experiments. Additionally we find that the 5-shot result often degrades below the baseline. Specifically for the Medical and Subtitles corpora of MDNS, we find that the model fails to improve over the baseline for all \(k\). The cosine distance of the nearest neighbors seems to be a viable indicator of performance in this set of experiments, e.g. when comparing the results for ACED Emerson & Digitalocean, where the average cosine distance (see Table 2) for \(k=1\) is much lower for Emerson at 0.13, versus 0.3 for Digitalocean. We find a moderate, statistically insignificant, negative Pearson correlation (\(r=-0.43\)) between the average cosine distances for \(k=1\) and the difference in BLEU scores between the Stage 0 1-shot experiment and the baseline. \begin{table} \begin{tabular}{c|c|c|c} & Training & Validation & Test \\ \hline Asics & 1.4 & 0.5 & 0.6 \\ Digitalocean & 11.8 & 2.0 & 7.6 \\ Emerson & 4.3 & 1.3 & 1.7 \\ \hline \hline IT & 223 & 2.0 & 2.0 \\ Koran & 17.9 & 2.0 & 2.0 \\ Law & 467 & 2.0 & 2.0 \\ Medical & 248 & 2.0 & 2.0 \\ Subtitles & 500 & 2.0 & 2.0 \\ \end{tabular} \end{table} Table 1: Segment counts for the domain-specific dataset splits used for experimentation, in thousands. While BLEU indicates improvement (COMET reduces only for \(k>1\)), we find that the model's behavior is in fact degenerate. Specifically, the model often fails to produce any output after the given prefix and instead predicts <eos> immediately, which leads to empty translations. We find that the rates of empty translations are 8.5%, 8.1%, and 9.1% for \(k=1,2\), and 5 respectively. In contrast, the baseline system has a 0% rate of empty outputs. This is despite the model being specifically trained to support inputs covering the full context-width in pre-training. ### Stage 1: Combining ICL with Domain Fine-Tuning For Stage 1 we first observe that the model can be effectively adapted to each domain by training adapters (see the Stage 1, 0-shot results in Tables 3 and 4). 
A notable exception is MDNS Subtitles, where adaptation only slightly improves over the baseline. This result is, however, consistent with other work (Aharoni and Goldberg, 2020). When we combine the trained adapters with ICL, we find no improvements over Stage 1's 0-shot results, with the exception of ACED Asics. Performance drops catastrophically for the MDNS Medical & Subtitles corpora. The rate \begin{table} \begin{tabular}{l c|c|c||c|c|c|c|c} & \multicolumn{4}{c}{ACED} & \multicolumn{4}{c}{MDNS} \\ & Asics & Digitalocean & Emerson & IT & Koran & Law & Medical & Subtitles \\ \hline \(k=1\) & 0.19 & 0.30 & 0.13 & 0.15 & 0.18 & 0.13 & 0.12 & 0.24 \\ \(k=2\) & 0.21 & 0.31 & 0.14 & 0.17 & 0.20 & 0.15 & 0.14 & 0.25 \\ \(k=5\) & 0.23 & 0.34 & 0.16 & 0.21 & 0.24 & 0.17 & 0.17 & 0.27 \\ \end{tabular} \end{table} Table 2: Average cosine distance in embedding space of test set sources to \(k\)-nearest neighbors from train, for \(k\in\{1,2,5\}\). \begin{table} \begin{tabular}{l|l|l l|l l|l l|l l} & & \multicolumn{4}{c}{Asics} & \multicolumn{4}{c}{Digitalocean} & \multicolumn{2}{c}{Emerson} & \multicolumn{2}{c}{Average} \\ & & BLEU & COMET & BLEU & COMET & BLEU & COMET & BLEU & COMET \\ \hline \multirow{5}{*}{ \begin{tabular}{l} \end{tabular} } & Baseline & 34.5 & 0.8624 & 53.3 & 0.9043 & 44.9 & 0.9108 & 44.2 & 0.8925 \\ \cline{2-11} & 1-shot & 43.7 & 0.8578 & 54.4 & 0.8982 & 72.1 & 0.9213 & 56.7 & 0.8924 \\ & 2-shot & 44.5 & 0.8525 & 54.5 & 0.8967 & 67.2 & 0.9137 & 55.4 & 0.8876 \\ & 5-shot & 41.0 & 0.8420 & 53.9 & 0.8955 & 28.7 & 0.8705 & 41.2 & 0.8693 \\ \cline{2-11} & 0-shot & 41.2 & 0.8780 & 60.1 & **0.9152** & 79.2 & 0.944 & 60.2 & 0.9124 \\ & 1-shot & 46.4 & 0.8657 & 59.6 & 0.9099 & 78.1 & 0.9378 & 61.4 & 0.9045 \\ & 2-shot & 46.2 & 0.8628 & 59.0 & 0.9090 & 66.3 & 0.9275 & 57.2 & 0.8998 \\ & 5-shot & 44.2 & 0.8500 & 57.3 & 0.9038 & 32.2 & 0.893 & 44.6 & 0.8823 \\ \cline{2-11} & 1-shot & 43.0 & 0.8765 & 55.0 & 0.9073 & 73.1 & 0.9382 & 57.0 & 0.9073 \\ & 2-shot & 43.5 & 0.8785 & 54.4 & 0.9072 & 71.6 & 0.9392 & 56.5 & 0.9083 \\ & 5-shot & 42.3 & 0.8662 & 54.4 & 0.9066 & 73.4 & 0.9347 & 56.7 & 0.9025 \\ \cline{2-11} & 1-shot & 44.5 & 0.8766 & 54.9 & 0.9046 & 73.1 & 0.9391 & 57.5 & 0.9068 \\ & 2-shot & 44.5 & 0.8777 & 55.4 & 0.9080 & 74.3 & 0.939 & 58.1 & 0.9082 \\ & 5-shot & 44.7 & 0.8734 & 55.0 & 0.9072 & 70.0 & 0.9363 & 56.6 & 0.9056 \\ \cline{2-11} & 1-shot & **48.8** & 0.8896 & **60.5** & 0.9141 & 78.9 & **0.9480** & 62.7 & **0.9172** \\ & 2-shot & 48.5 & **0.8914** & 60.1 & 0.9132 & **80.7** & 0.9456 & **63.1** & 0.9167 \\ & 5-shot & 47.6 & 0.8837 & 59.0 & 0.9095 & 80.2 & 0.9437 & 62.3 & 0.9123 \\ \cline{2-11} & 1-shot & 31.8 & 0.8588 & 40.0 & 0.8677 & 71.6 & 0.9380 & 47.8 & 0.8882 \\ & 2-shot & 34.5 & 0.8671 & 44.8 & 0.8876 & 76.9 & 0.9416 & 52.1 & 0.8988 \\ \cline{2-11} & 5-shot & 40.8 & 0.8789 & X & X & 78.5 & 0.9434 & X & X \\ \end{tabular} \end{table} Table 3: Results for the ACED corpus of the multi-stage evaluation for various numbers of \(k\)-nearest-neighbors, using BLEU and COMET metrics. The ”Baseline” scores are for the English-to-German NMT system described in Section 3.1. We omit the Digitalocean dataset for the Falcon-40B 5-shot evaluation. of empty translations also increases dramatically6, with a rate of up to 63.1% for the 1-shot result on MDNS Medical (up from 8.0% at Stage 0). Footnote 6: Empty translation rates of Stage 1 for each \(k\) over all corpora: 1-shot: 20.0%, 2-shot: 20.6%, 5-shot: 13.6%. 
### Stage 2a & Stage 2b: Fine-Tuning towards ICL When we compare the Stage 2b (fine-tuning with the masked loss as described in Section 3.3) to the Stage 0 results, we find that adding the separator and fine-tuning the model leads to generally improved scores on the ACED corpora for all \(k\). BLEU Results on MDNS corpora show slightly worse performance compared to the Stage 0 results in 3 out of 5 corpora for \(k=1\), but the averages are still improved. COMET scores are however consistently improved for this comparison. We also find that the scores for \(k=2\) and \(k=1\) are very close, with 2-shot being ahead of 1-shot by 0.6% BLEU points on average on ACED data, and 1-shot being ahead of 2-shot by 0.2 BLEU points on MDNS. Which is in contrast to what we have observed in Stage 0. \(k=5\) still performs worse, but we observe high relative gains compared to the 5-shot Stage 0 result. When comparing Stage 2a and Stage 2b, i.e. the masked loss and the standard NLL loss the results are inconclusive. We further observe that Stage 2b exhibits almost negligible rates of producing empty translations, at 0.3%, 0.8%, and 1.2% for \(k=1,2,5\) respectively. ### Stage 3: Combining ICL and Domain Adaptation When combining ICL with adapters trained with nearest neighbor annotated data, we observe the globally best results for the NMT models. Compared to Stage 1, which is also fine-tuned towards each domain, we observe greatly improved results on all automatic metrics. Stage 3 2-shot delivers the best result on ACED, with an improvement of 2.5 BLEU points compared to the runner-up in terms of average BLEU Stage 1 1-shot. On MDNS, Stage 3 1-shot improves over the runner-up Stage 1 0-shot by 3.8 points. Especially the scores for MDNS Koran improve \begin{table} \begin{tabular}{l|l|l l|l l|l l|l l l|l l} & & \multicolumn{2}{c|}{IT} & \multicolumn{2}{c|}{Koran} & \multicolumn{2}{c|}{Law} & \multicolumn{2}{c}{Medical} & \multicolumn{2}{c}{Subtitles} & \multicolumn{2}{c}{Average} \\ & & BLEU & COMET & BLEU & COMET & BLEU & COMET & BLEU & COMET & BLEU & COMET & BLEU & COMET \\ \hline \multirow{8}{*}{**CED**} & Baseline & 34.3 & 0.8153 & 14.7 & 0.7229 & 44.7 & 0.8696 & 43.5 & 0.8406 & 27.7 & **0.7891** & 33.0 & 0.8075 \\ \cline{2-13} & 1-shot & 35.9 & 0.7698 & 17.2 & 0.6580 & 51.6 & 0.853 & 42.3 & 0.7964 & 17.5 & 0.6358 & 32.9 & 0.7426 \\ & 2-shot & 35.9 & 0.7433 & 17.2 & 0.6346 & 49.9 & 0.8467 & 38.2 & 0.7810 & 22.4 & 0.7024 & 32.7 & 0.7416 \\ & 5-shot & 31.9 & 0.7196 & 14.5 & 0.6000 & 42.3 & 0.8287 & 30.5 & 0.7505 & 24.4 & 0.7400 & 28.7 & 0.7278 \\ \hline \multirow{8}{*}{**CED**} & 0-shot & 39.6 & 0.8403 & 22.6 & 0.7274 & 50.7 & 0.8824 & 47.8 & 0.8429 & **28.1** & 0.7879 & 37.8 & 0.8162 \\ & 1-shot & 36.7 & 0.7620 & 21.1 & 0.6434 & 51.1 & 0.8228 & 7.1 & 0.5078 & 0.0 & 0.4306 & 23.2 & 0.6333 \\ & 2-shot & 35.6 & 0.7436 & 20.5 & 0.6152 & 48.9 & 0.8019 & 15.9 & 0.5441 & 0.0 & 0.4208 & 24.2 & 0.6251 \\ & 5-shot & 32.8 & 0.7296 & 18.4 & 0.5980 & 44.9 & 0.7940 & 23.4 & 0.5854 & 16.8 & 0.6388 & 27.3 & 0.6692 \\ \hline \multirow{8}{*}{**CED**} & 1-shot & 34.3 & 0.8277 & 15.5 & 0.7222 & 49.5 & 0.8739 & 43.6 & 0.8380 & 25.7 & 0.7838 & 33.7 & 0.8091 \\ & 2-shot & 35.8 & 0.8244 & 16.4 & 0.7154 & 49.6 & 0.8739 & 44.6 & 0.8362 & 24.1 & 0.7810 & 34.1 & 0.8062 \\ & 5-shot & 34.3 & 0.8203 & 15.9 & 0.7083 & 48.1 & 0.8659 & 40.7 & 0.8220 & 24.0 & 0.7712 & 32.6 & 0.7975 \\ \hline \multirow{8}{*}{**CED**} & 1-shot & 34.6 & 0.8269 & 16.0 & 0.7217 & 50.4 & 0.8752 & 44.2 & 0.8405 & 25.9 & 0.7830 & 34.2 & 0.8095 \\ & 2-shot & 35.5 & 
0.8182 & 16.5 & 0.7150 & 49.9 & 0.8747 & 43.4 & 0.8349 & 24.5 & 0.7774 & 34.0 & 0.8040 \\ & 5-shot & 33.5 & 0.8103 & 16.6 & 0.7070 & 48.2 & 0.8696 & 37.5 & 0.8274 & 25.2 & 0.7782 & 32.2 & 0.7985 \\ \hline \multirow{8}{*}{**CED**} & 1-shot & 41.4 & **0.8423** & 28.8 & 0.7235 & **58.1** & **0.8862** & **52.9** & **0.8488** & 27.0 & 0.7846 & **41.6** & **0.8171** \\ & 2-shot & **41.7** & 0.8401 & **29.6** & 0.7225 & 57.3 & 0.8850 & 51.2 & 0.8480 & 27.6 & 0.7850 & 41.5 & 0.8161 \\ \cline{1-1} & 5-shot & 40.9 & 0.8296 & 29.2 & 0.7249 & 55.8 & 0.8804 & 48.7 & 0.8413 & 27.5 & 0.7876 & 40.4 & 0.8128 \\ \hline \multirow{8}{*}{**CED**} & 1-shot & 31.5 & 0.7985 & 17.9 & 0.7081 & 45.4 & 0.8538 & 42.4 & 0.8035 & 21.7 & 0.7586 & 31.8 & 0.7845 \\ \cline{1-1} & 2-shot & 35.5 & 0.8202 & 22.4 & 0.7263 & 49.5 & 0.8680 & 47.5 & 0.8288 & 21.4 & 0.7605 & 35.3 & 0.8008 \\ \cline{1-1} & 5-shot & 40.1 & 0.8377 & 24.5 & **0.7358** & 50.5 & 0.8749 & 50.1 & 0.8401 & 22.6 & 0.7776 & 37.6 & 0.8132 \\ \end{tabular} \end{table} Table 4: Results for the MDNS corpus of the multi-stage evaluation for various numbers of \(k\)-nearest-neighbors using BLEU and COMET metrics. The ”Baseline” scores are for the English-to-German NMT system described in Section 3.1. well above all previous models, with a relative improvement of 101% compared to the baseline. The models seem to be able to make better use of close nearest neighbors in this dataset, which are often substrings of one another. See Section 4.6 for a detailed analysis of the copying behavior on the ACED Asics dataset. The rate of empty translations is reduced to 0.0% for all \(k\). We further notice that the results for 1- and 2-shot ICL are very similar, and that the scores for 5-shot are also improved. ### Falcon: Adapting Both to a Task and a Domain at the Same Time The Falcon-40B LLM proves to excel at ICL, learning a task and adapting to a domain at the same time. Notably, scores improve with higher values of \(k\), which is the opposite behavior to what we have observed with NMT models. When nearest neighbors are close to the test data, as they are for the ACED Emerson and MDNS IT datasets, we find results that are close to the best Stage 3 results. Falcon-40B's generation speed is however very slow at an average of 2.6 tokens per second in the 1-shot setting. Also note that we have no means at this time to check whether parts of the test data are contained in Falcon's training data. ### Qualitative Analysis Maintaining consistency in translations is an important quality criterion in the localization industry, and is a major motivator in the use of translation memories, which help ensure that marketing materials, for example, are uniform in the promised features and functions of the products being advertised (Emery et al., 2011). In NMT models, this consistency is traditionally increased by fine-tuning a translation model for a specific domain, which we denote by "Stage 1 with 0-shot". In this section, we compare the fine-tuning approach with our ICL, specifically "Stage 3 with 1-shot". We evaluate translation consistency on the Asics dataset. For that purpose we select segments s in the test data for which the source nearest neighbor s\({}^{\prime}\) in the Asics train data differs by exactly one word. These segments s are denoted as word-substitution segments. For each pair (s, s\({}^{\prime}\)), we then use two sources and one target t\({}^{\prime}\) in the ICL prompt and the other target t as reference to compare the generated translation to. 
We define the fraction of pairs for which the generated translation exactly matches the reference as the word substitution accuracy (WSA). The results are in Table 6. The translation for Stage 3 1-shot achieves a WSA score of 74.6%, compared to 57.14% for the fine-tuning approach (Stage 1 0-shot), whereas the non-adapted model only produces the exact reference translation in 1.7% of cases. ## 5 Conclusions We have shown that a standard NMT system can be trained to be an effective in-context learner in domain adaptation tasks. We find that this is most effective when we combine generic fine-tuning towards the ICL task and training adapter layers for a specific domain with nearest neighbor annotated data. When the model is not fine-tuned towards the task, we find that ICL works to some extent, but shows degenerate behavior. While LLMs like Falcon-40B can adapt to the MT task with ICL, this comes at the cost of increased compute. Generally, the results with the LLM still underperform our dedicated MT models.
2309.16976
Benchmarking and In-depth Performance Study of Large Language Models on Habana Gaudi Processors
Transformer models have achieved remarkable success in various machine learning tasks but suffer from high computational complexity and resource requirements. The quadratic complexity of the self-attention mechanism further exacerbates these challenges when dealing with long sequences and large datasets. Specialized AI hardware accelerators, such as the Habana GAUDI architecture, offer a promising solution to tackle these issues. GAUDI features a Matrix Multiplication Engine (MME) and a cluster of fully programmable Tensor Processing Cores (TPC). This paper explores the untapped potential of using GAUDI processors to accelerate Transformer-based models, addressing key challenges in the process. Firstly, we provide a comprehensive performance comparison between the MME and TPC components, illuminating their relative strengths and weaknesses. Secondly, we explore strategies to optimize MME and TPC utilization, offering practical insights to enhance computational efficiency. Thirdly, we evaluate the performance of Transformers on GAUDI, particularly in handling long sequences and uncovering performance bottlenecks. Lastly, we evaluate the end-to-end performance of two Transformer-based large language models (LLM) on GAUDI. The contributions of this work encompass practical insights for practitioners and researchers alike. We delve into GAUDI's capabilities for Transformers through systematic profiling, analysis, and optimization exploration. Our study bridges a research gap and offers a roadmap for optimizing Transformer-based model training on the GAUDI architecture.
Chengming Zhang, Baixi Sun, Xiaodong Yu, Zhen Xie, Weijian Zheng, Kamil Iskra, Pete Beckman, Dingwen Tao
2023-09-29T04:49:35Z
http://arxiv.org/abs/2309.16976v1
# Benchmarking and In-depth Performance Study of Large Language Models on Habana Gaudi Processors ###### Abstract. Transformer models have achieved remarkable success in various machine learning tasks but suffer from high computational complexity and resource requirements. The quadratic complexity of the self-attention mechanism further exacerbates these challenges when dealing with long sequences and large datasets. Specialized AI hardware accelerators, such as the Habana GAUDI architecture, offer a promising solution to tackle these issues. GAUDI features a Matrix Multiplication Engine (MME) and a cluster of fully programmable Tensor Processing Cores (TPC). This paper explores the untapped potential of using GAUDI processors to accelerate Transformer-based models, addressing key challenges in the process. Firstly, we provide a comprehensive performance comparison between the MME and TPC components, illuminating their relative strengths and weaknesses. Secondly, we explore strategies to optimize MME and TPC utilization, offering practical insights to enhance computational efficiency. Thirdly, we evaluate the performance of Transformers on GAUDI, particularly in handling long sequences and uncovering performance bottlenecks. Lastly, we evaluate the end-to-end performance of two Transformer-based large language models (LLM) on GAUDI. The contributions of this work encompass practical insights for practitioners and researchers alike. We delve into GAUDI's capabilities for Transformers through systematic profiling, analysis, and optimization exploration. Our study bridges a research gap and offers a roadmap for optimizing Transformer-based model training on the GAUDI architecture.
Our research delves into this territory, exploring strategies to intricately balance the tasks assigned to MME and TPC. (3) Unexplored Transformer performance in long sequences. The third challenge pertains to the performance of Transformers on Habana's GAUDI, particularly in scenarios involving long input sequences. This uncharted territory lacks exploration, hindering our ability to grasp the GAUDI's prowess in handling extended sequences. (4) Lack of end-to-end large language model (LLM) performance on GAUDI. There is a dearth of existing research offering a holistic evaluation of end-to-end LLM performance on Habana's GAUDI, coupled with an exploration of potential performance bottlenecks. To address these challenges, we benchmark and deeply analyze the performance of Transformers and Transformer-based models on Habana's GAUDI. The main contributions of this paper are summarized as follows: * We conduct an in-depth performance comparison between the Matrix Multiplication Engine (MME) and Tensor Processing Cores (TPC) within GAUDI. Our analysis offers insights into the relative strengths and weaknesses of these components, empowering practitioners to make informed decisions when tailoring Transformers to the GAUDI platform. * We explore strategies to balance the workload effectively between MME and TPC, providing practical guidance to achieve enhanced performance and efficiency for Transformers on GAUDI. * We tackle the dearth of research in evaluating the performance of Transformers on GAUDI, especially when dealing with long sequences. Through systematic benchmarking and analysis, we uncover the performance bottlenecks that arise in this scenario, shedding light on the unique challenges posed by long input sequences. * We assess the overall performance of Transformer-based models on Habana's GAUDI and identify performance bottlenecks, offering a holistic perspective on GAUDI's capabilities for accelerating complex language models. In summary, through this comprehensive study, our work demonstrates the potential of specialized hardware accelerators like GAUDI processors. We contribute a deeper understanding of Habana's GAUDI for Transformers and Transformer-based models.
Our findings not only address existing research gaps but also provide practitioners and researchers with valuable insights to optimize the performance of Transformers and Transformer-based models on GAUDI, further unlocking the potential of these models for real-world applications. ## 2. Background and Motivation In this section, we present background information on the Habana GAUDI processor architecture, the TPC programming model, Transformers, and our motivation. ### Habana GAUDI Processor Architecture Habana GAUDI processor is a specialized hardware accelerator designed for deep learning training workloads (Habana et al., 2018). As shown in Figure 1, it features a heterogeneous compute architecture with a Matrix Multiplication Engine (MME), eight fully programmable Tensor Processing Cores (TPC), and fast memory and network units. Specifically, GAUDI efficiently handles various deep learning operations by lowering them into matrix multiplication operations (e.g., convolution) and nonlinear operations (e.g., activation) that can be executed on MME and TPC, respectively. The fast memory and network units enhance intra-/inter- processor data transfers, respectively. MME is specifically tuned for computation tasks in deep neural network (DNN) training such as fully connected layers, convolution layers, and batched-GEMM, providing significant acceleration compared to traditional CPU and GPU solutions (Han et al., 2017). The TPC is a very long instruction word (VIW) single instruction multiple data (SIMD) processor crafted for deep learning nonlinear operations. It is designed to accelerate non-matrix-based operations that cannot be efficiently handled by the MME. The programming approach of TPC offers users a high degree of flexibility and innovation, supported by features tailored to various workloads. These include acceleration for non-GEMM operations, tensor-based addressing, capabilities to hide latency, random number production, and advanced implementation of special functions. GAUDI incorporates a DMA engine, streamlining the data exchange between MME and TPC using shared memory. For communications between different processors, GAUDI includes on-chip RoCE v2 engines, facilitating efficient inter-processor dialogue during training sessions. Consequently, GAUDI ensures seamless collaboration between MME and TPC and delivers exceptional scalability in both expanding and multiplying setups. ### TPC programming model _TPC architecture_. The TPC boasts a very long instruction word (VIW) design. Its wide single instruction multiple data (SIMD) vector mechanism can handle 2048-bit SIMD tasks and is compatible with several data types like float, bfloat16, INT16, INT32, and INT8. The instruction set for the TPC processor is segmented into four functional slots: * responsible for memory loading, value movements, and value settings. * handles scalar computations. * manages vector computations. Figure 1. A high-level view of GAUDI architecture, which consists of Matrix Multiplication Engine (MME), Tensor Processing Cores (TPC), Memory Units (Local Memory, Shared Memory, DMA, HBM, RDMA), and Connection Units (Ethernet, PCIe). - oversees memory storage, value movements, and value settings. Four distinct memory domains are embedded within the TPC processor: scalar local memory, vector local memory, global memory, and configuration space. The global memory can be interfaced through specialized access points termed as tensors. 
On average, every four cycles can accommodate the loading or writing of a 2,048-bit vector to the global memory. It's also worth noting that individual TPC maintain distinct local memory instances, and each TPC can exclusively access its dedicated local cache. The local memory is bifurcated into two storage banks, scalar local memory (1 KB) and vector local memory (80 KB). There's an unrestricted bandwidth when reading from or writing to the local memory in each cycle. _TPC programming._ The TPC stands as a fully programmable VLIW SIMD processor, programable via TPC-C, a C language derivative. TPC-C incorporates vector data types for seamless use of processor-specific SIMD capabilities. A TPC program is composed of host glue code and a TPC kernel. Host glue code, executed on the host machine, controls program execution. TPC kernels, executed on TPC processors, handle computation. Users leverage the SynapseAI TPC SDK, featuring an LLVM-based TPC-C compiler, simulator, and debugger, for TPC kernel development. The TPC processor on the GAUDI ASIC accepts tensor inputs/outputs with dimensions ranging from 1 to 5. Index spacing, similar to threads in CUDA programming, efficiently divides workloads among TPC processors. Each index space member corresponds to an independent unit of work executed on a single TPC. Users utilize Habana's intrinsics, encompassing arithmetic, bitwise, and load operations, to create TPC kernels, while ensuring effective workload distribution. ### Transformers The Transformer architecture was first introduced by Vaswani et al. (2016) as a novel approach to sequence-to-sequence learning tasks, particularly in natural language processing. Transformers have since become a popular choice for various machine-learning applications, including language modeling, machine translation, and computer vision. The key innovation of the Transformer architecture is the self-attention mechanism, which allows the model to weigh different parts of the input sequence differently when making predictions. This mechanism enables Transformers to capture long-range dependencies and contextual information more effectively compared to traditional recurrent neural networks (RNNs) and convolutional neural networks (CNNs). Figure 2 presents the architecture of a Transformer, which typically consists of encoder blocks, decoder blocks, and other operations such as position embedding and layer normalization. Specifically, each encoder/decoder block consists of multi-head self-attention mechanisms followed by a position-wise feed-forward network. Many widely-recelved DNN models are based on Transformers. For example, the Bidirectional Encoder Representations from Transformers (BERT) (Beng et al., 2017) and the Generative Pre-trained Transformer (GPT) (Beng et al., 2017). BERT is primarily an encoder from the Transformer architecture. GPT is both an encoder and a decoder, but during training, only the decoder portion is utilized. BERT is bidirectional, trying to understand the context on both sides of a word. GPT is unidirectional, predicting words based on the preceding context. 
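To make the block structure described above concrete, here is a minimal, generic PyTorch sketch of a single Transformer encoder block (multi-head self-attention followed by a position-wise feed-forward network with residual connections). The hyperparameters are illustrative and do not correspond to any particular model evaluated later in the paper.

```
# Generic sketch of one Transformer encoder block; hyperparameters illustrative.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        # Multi-head self-attention with a residual connection
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + self.drop(attn_out))
        # Position-wise feed-forward network with a residual connection
        x = self.norm2(x + self.drop(self.ffn(x)))
        return x

if __name__ == "__main__":
    block = EncoderBlock()
    print(block(torch.randn(2, 16, 512)).shape)   # (batch, sequence, d_model)
```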
\begin{table} \begin{tabular}{r r r} \hline \hline **Operation** & **Explanation** & **Mapping** \\ \hline torch.mul & element wise mul & TPC \\ torch.matmul & matrix product & MME \\ torch.square & tensor square & TPC \\ ** & tensor square & TPC \\ tensor +- tensor & tensor +- tensor & TPC \\ scalar \({}^{\star}\) tensor & scalar \({}^{\star}\) tensor & TPC \\ scalar +- tensor & scalar +- tensor & TPC \\ torch.sqrt & square root & TPC \\ torch.log & natural logarithm & TPC \\ \hline \hline \end{tabular} \end{table} Table 1. Operation-Hardware Mapping via SynapseAI Figure 3. Matrix Computation workflow of each self-attention. \(Q\), \(K\) and \(V\) are query, key, value matrices of dimension size \(N\) by \(D_{Q}\),\(D_{K}\), \(D_{V}\), respectively. Figure 2. Transformer model architecture overview, which mainly consists of multi-head attention. ### Motivation The impressive ability of Transformer-based models comes from complex computational operations and the huge number of parameters (340 million in BERT, 1.5 billion in GPT-3) (Beng et al., 2017; Chen et al., 2017), which results in intensive computations during training. Consequently, training Transformer-based models is both time-consuming and resource-intensive. Although today's AI accelerators, such as Haibana GAUDI outperform GPUs in some training tasks (Kang et al., 2019), the architecture-specific optimizations on these accelerators are not well studied. For example, Figure 3 shows the workflow of matrix computations in self-attention. Specifically, The input sequence \(x\in\mathbb{R}^{N\times D_{x}}\) is projected by three weight matrices \(W_{Q},W_{K},W_{V}\) to corresponding representations \(Q\), \(K\) and \(V\). Following common terminology, the \(Q\), \(K\), and \(V\) are referred to as the "queries", keys", and "values" respectively. Then softmax is used to normalize attention matrix \(\mathcal{Q}\)\(\mathcal{K}\)\({}^{T}\) into a probability distribution. The softmax's computation can only be executed on TPC, which degrades the overall training performance of Habana GAUDI (to be detailed in SS3). Thus, we perform comprehensive profiling on Habana GAUDI with insights that derive our optimizations in improving the training performance. ## 3. Experimental Results In this section, we present our experimental setup, profiling results, and discussion. ### Experimental Setup PlatformsWe perform our experiments on one Habana Labs System 1 (HLS-1) (Hansen et al., 2019) AI training system. The HLS-1 incorporates eight GAUDI processors and two Gen 4.0 PCIe switches. External Host CPU is used to manage HLS-1 via PCIe switches. Each GAUDI processor is equipped with 32 GB on-chip memory. All experiments are on a single GAUDI processor. Implementation detailsHabana's SynapseAI (Hansen et al., 2019) software suite enables efficient mapping of neural network topologies onto GAUDI hardware. All experiments are performed on PyTorch-based SynapseAI. The PyTorch version is 1.13.1. ### Basic Profiling Operation mappingPyTorch provides a variety of operations. GAUDI's compute architecture is heterogeneous and includes two independent compute engines - an MME and a fully programmable TPC cluster. So it is necessary for us to know which compute engine each operation is finally mapped to. We perform detailed profiling to obtain the operation-compute engine mapping, as shown in Table 1. From this table, we draw the following conclusions: only matrix multiplication operations are mapped to MME, and all other operations are mapped to TPC. 
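To make this mapping concrete, the sketch below writes out single-head softmax attention using the basic operations from Table 1 and annotates each step with the engine it is expected to map to. The engine annotations follow Table 1 and the surrounding discussion; the code itself is a generic implementation rather than the authors' benchmark script, and running it on a Gaudi device would additionally assume the SynapseAI PyTorch bridge (habana_frameworks.torch) exposing an "hpu" device.

```
# Single-head softmax attention written with basic ops from Table 1.
# Comments note the engine each op is expected to map to under SynapseAI.
# On a Gaudi system the tensors would live on the assumed "hpu" device.
import math
import torch

def softmax_attention(x, w_q, w_k, w_v):
    # Projections of the input sequence: matrix products -> MME
    q = torch.matmul(x, w_q)
    k = torch.matmul(x, w_k)
    v = torch.matmul(x, w_v)
    # Attention scores Q K^T: matrix product -> MME
    scores = torch.matmul(q, k.transpose(-2, -1))
    # Scaling by 1/sqrt(D): scalar-tensor arithmetic -> TPC
    scores = scores / math.sqrt(q.size(-1))
    # Row-wise softmax: exponentials plus reductions -> TPC
    # (the O(N^2) bottleneck discussed below for long sequences)
    probs = torch.softmax(scores, dim=-1)
    # Weighted sum of values: matrix product -> MME
    return torch.matmul(probs, v)

if __name__ == "__main__":
    n, d = 2048, 64                      # sequence length and head dimension
    x = torch.randn(1, n, d)
    w = [torch.randn(d, d) for _ in range(3)]
    print(softmax_attention(x, *w).shape)
```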
Even linear operations on tensors like tensor multiplied by scalar are mapped to TPC. Performance comparison between MME and TPC. A detailed performance comparison between MME and TPC is very necessary because it helps us analyze the performance bottleneck of the GAUDI. Different operations in the neural network will either be mapped to MME or TPC, and the slowest operation on the two compute engines will become a performance bottleneck. To profile computation performance, we enable MME and TPC to perform batch matrix multiplication operations on various dense matrices of different sizes and measure the run time and tera floating point operations per second (TFLOPS). We directly choose torch.bmm on MME to perform a batch matrix-matrix product, where the batch size is set to 64. We implement TPC batch matrix-matrix product kernels using example code from the Habana_Custom_Kernel repository (Habana et al., 2019). The SynapseAI profiler is used as suggested by Habana to generate hardware trace events and accurately measure the execution time of each operation. Table 2 shows the execution time between MME and TPC for matrix multiplications of different sizes. We can conclude that the computational performance of TPC is up to 7\(\times\) lower than that of MME. In the case of such an obvious performance gap, the most suitable application scenario for GAUDI is that the current operation has a large amount of calculation and can be successfully mapped to MME, while the next operation has a small amount of calculation and can be mapped to TPC; in such a situation TPC will not form a computing performance bottleneck. But if the next operation has a similar amount of calculation, then MME has to become idle and wait for the calculation of TPC to complete. ### Transformer Layer Profiling Softmax attention. Self-attention computes, for every position, a weighted average of the feature representations of all other positions with a weight proportional to a similarity score between the representations. Transformers usually follow the original design of Vaswani et al. (2017) and adopt softmax attention. Softmax attention is a specific form of self-attention where the similarity score is the exponential of the dot product between a query and a key. The similarity function is \(sim(q,k)=exp(\frac{q^{T}k}{\sqrt{D}})\). The \(Q\), \(K\), and \(V\) are referred to as the "queries", "keys", and "values" respectively. Long sequence training in Transformer-based natural language processing (NLP) models, such as BERT and GPT, offers several significant benefits: (1). Capturing long-range dependencies: Long sequence training allows Transformer models to capture these complex dependencies, enabling a better understanding of the context and improving the quality of language representations. (2). Improved contextual understanding: Longer sequences provide more context to the model, allowing it to comprehend the nuances and subtleties in language. (3). Enhanced text generation: Longer context windows help the model maintain better coherence and consistency in longer text generation tasks. (4).
Better handling of large documents: In real-world applications, NLP models often encounter long documents or lengthy pieces of text. \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Size** & **T\_MME** & **F\_MME** & **T\_TPC** & **F\_TPC** & **Speedup** \\ \hline 128 & 7.31 & 2.35 & 9.21 & 1.86 & 1.3 \\ 256 & 11.78 & 11.67 & 67.04 & 2.05 & 5.7 \\ 512 & 76.51 & 14.37 & 516.60 & 2.13 & 6.7 \\ 1024 & 151.03 & 14.56 & 1006.30 & 2.18 & 6.7 \\ 2048 & 338.27 & 14.59 & 2247.80 & 2.19 & 6.6 \\ \hline \hline \end{tabular} \end{table} Table 2. Comparison of execution time between MME and TPC for matrix multiplication of different sizes. T_MME, F_MME, T_TPC, F_TPC are short for run time of MME, TFLOPS of MME, run time of TPC, and TFLOPS of TPC, respectively. Speedup \(=\) T_TPC / T_MME. Time unit is millisecond (ms). Because of the advantages of long sequence training, in experiments, we set the input sequence length, batch size, the number of heads, and the hidden size per head as 2048, 128, 6, and 64 respectively. Figure 4 shows a profiling result of a single Transformer layer. From this result, we have two observations. (1). There are many blank areas in the MME operating area. These blank areas indicate that MME is idle, waiting for tasks. (2). In the running region of TPC, it is very clearly shown that the running time of softmax exceeds 80% of the total running time. The reasons for this phenomenon are: (1). The TPC is less computationally powerful than the MME, as discussed in Section 3.2, yet the computational complexity of the softmax operation in a Transformer is \(\mathcal{O}(N^{2})\). As a result, it becomes a performance bottleneck when the softmax operation is mapped onto TPC. (2). Softmax requires reduction operations, which are not well-suited for single instruction, multiple data (SIMD) architectures like TPC. Long sequences further exacerbate this problem, especially when the sequence length exceeds 1024. Overall, the limited computational capability of TPC combined with the complexities of softmax operations on this architecture hinders GAUDI's overall performance and efficiency. _Linearized attention._ Linearized attention, also known as "linear attention", is an alternative approach to the traditional softmax-based attention mechanism used in Transformers. It aims to reduce the computational complexity associated with the softmax operation while maintaining the core principles of self-attention. Linear attention is particularly useful when dealing with very long sequences, where standard softmax-based self-attention becomes impractical due to its quadratic complexity. The softmax-based self-attention is \(\text{softmax}(\frac{QK^{T}}{\sqrt{D}})V\), where \(Q,K\) and \(V\in\mathbb{R}^{N\times D}\). The computational complexity of self-attention is quadratic in the sequence length \(N\). Assuming \(\phi\) is a feature map that is applied in a row-wise manner, linear attention is \((\phi(Q)\phi(K)^{T})V=\phi(Q)(\phi(K)^{T}V)\) after applying the associative property of matrix multiplication. Linear attention thus leads to a computational complexity of \(\mathcal{O}(N)\). There are two reasons why we want to use linear attention on Habana: (1). The calculation of the softmax operation itself is relatively complicated, and it involves exponential operations and reduction operations. (2). The essence of linear attention is that matrix multiplication can ensure that almost all self-attention calculations are mapped to MME with stronger computation performance.
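The reordering can be sketched in a few lines: the linearized form replaces the N-by-N attention matrix and its softmax with two chained matrix products, which is exactly what keeps the computation on the MME. The sketch below uses the elu(x)+1 feature map of Linear Transformers as an example together with a standard normalization term; it illustrates the associativity trick and is not the paper's implementation (the Performer-style FAVOR variant appears in Listing 1 below).

```
# Sketch of the associativity trick behind linearized attention:
# phi(Q) (phi(K)^T V) avoids forming the N x N matrix phi(Q) phi(K)^T.
# The elu(x)+1 feature map is used as an illustrative example.
import torch
import torch.nn.functional as F

def feature_map(x):
    return F.elu(x) + 1.0                       # element-wise -> TPC

def linear_attention(q, k, v, eps=1e-6):
    q_prime = feature_map(q)                    # (B, N, D)
    k_prime = feature_map(k)                    # (B, N, D)
    # Right-to-left association: both products have inner dimension D,
    # so the cost is linear in sequence length N.  Matrix products -> MME.
    kv = torch.matmul(k_prime.transpose(-2, -1), v)                      # (B, D, D_v)
    normalizer = torch.matmul(
        q_prime, k_prime.sum(dim=-2, keepdim=True).transpose(-2, -1))   # (B, N, 1)
    return torch.matmul(q_prime, kv) / (normalizer + eps)

def softmax_attention(q, k, v):
    # Quadratic baseline: the N x N score matrix is materialized and
    # normalized with a softmax executed on the TPC.
    scores = torch.matmul(q, k.transpose(-2, -1)) / q.size(-1) ** 0.5
    return torch.matmul(torch.softmax(scores, dim=-1), v)

if __name__ == "__main__":
    q, k, v = (torch.randn(1, 2048, 64) for _ in range(3))
    print(linear_attention(q, k, v).shape, softmax_attention(q, k, v).shape)
```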
```
def FAVOR(q, k, v):
    # Project keys and queries onto the transformed feature space
    q_scaled = self.pre_scale(q)
    q_scaled = q_scaled @ self.proj          # random-feature projection (name assumed)
    q_prime = torch.exp(q_scaled + self.offset)
    k_scaled = self.pre_scale(k)
    k_scaled = k_scaled @ self.proj          # random-feature projection (name assumed)
    k_prime = torch.exp(k_scaled + self.offset)
    att_norm = q_prime @ (
        k_prime.transpose(-2, -1) @ torch.ones_like(v)
    )
    att_raw = q_prime @ (k_prime.transpose(-2, -1) @ v)
    x = att_raw / att_norm
    return x
```
Listing 1: Pseudocode for FAVOR Algorithm. We adopt feature maps from Linear Transformers [11] and Performer [3] to construct linear attention on Habana. Linear Transformer proposes to directly set the feature map as \(\phi(x)=elu(x)+1\). The Performer uses a novel Fast Attention Via a positive Orthogonal Random features approach (FAVOR). Its feature map is \(\phi(x)=\frac{h(x)}{\sqrt{m}}(f_{1}(\omega_{1}^{T}x),\cdots,f_{1}(\omega_{m}^{T}x),\cdots,f_{l}(\omega_{1}^{T}x),\cdots,f_{l}(\omega_{m}^{T}x))\), where \(f_{1},\cdots,f_{l}:\mathbb{R}\rightarrow\mathbb{R}\) and \(\omega_{1},\cdots,\omega_{m}\) are drawn from some distribution. Figure 5 depicts profiling results of linear Transformers and Performers. The total run time of linear Transformers and Performer is 30 ms and 80 ms, respectively. Compared to the original softmax-based attention, linear Transformers and Performer achieve 6\(\times\) and 2\(\times\) speedups, respectively. Besides, there are not many blank areas in the MME operating area, which indicates full utilization of MME. Therefore, we can conclude that linearized attention is a good alternative to softmax attention from the perspective of performance. Figure 4. Profiler Trace of the transformer with softmax attention. DMA is the direct memory access engine that manages data transfer/copy between MME and TPC. We observe that executing softmax operations on TPC results in MME idle time (i.e., gaps between MME operations). However, there is a blank area in the MME operating area when using Performer. The blank area is because the TPC is busy with exponential operations. As shown in the algorithm of FAVOR, we can find that the calculation of "q_prime" and "k_prime" is independent. But the Graph Compiler does not detect this independence, so it does not schedule MME and TPC tasks well so that they can overlap. _Activation functions_. Linear Transformer [11] does not consider the impact of different activation functions on TPC performance; it directly sets the activation function to the exponential linear unit (ELU). And there is no previous work discussing the performance of different activation functions on TPC. Thus we conduct a rigorous evaluation to assess the impact of various activation functions on the overall efficiency and computational capabilities of the TPC. The experiments incorporate popular activation functions explored in NLP tasks, including the rectified linear unit (ReLU), LeakyReLU, Gaussian Error Linear Units (GELU), and the gated linear unit function (GLU). In experiments, we set the input sequence length, batch size, the number of heads, and the hidden size per head to 2048, 128, 6, and 64 respectively. Figure 7 depicts hardware traces of different activation functions. From the profiling results, we have two observations: 1. The total run time of a Transformer with ReLU, LeakyReLU, GELU, and GLU is 30.1 ms, 30.2 ms, 29.7 ms, and 32.6 ms, respectively. Transformers with ReLU, LeakyReLU, and GELU have similar performance, and the execution of MME and TPC overlaps well. 2. The Transformer with GLU has the worst performance, and its execution causes a blank area in MME.
We think the reasons for such phenomena are (1). those activation functions are applied to element-wise tensor, which is extremely suitable for SIMD architecture like TPC. (2). SynapseAI does not have good support for GLU, which cause extra compilation during the execution. ### End-To-End Language Models Profiling In order to analyze the end-to-end performance of a full language model on GAUDI, we choose profile execution of BERT and GPT run on GAUDI. For GPT model, we utilize the GPT2LMHeadModel module from Hugging Face (Hugging Face, 2018). GPT2LMHeadModel is the GPT2 Model Transformer with a language modeling head on top. For the BERT model, we use the BertForMaskedLM module from Hugging Face. BertForMaskedLM is the BERT model with a language modeling head on top. The input dataset is book corpus (Hugging et al., 2019). Due to limited GAUDI memory, we set the input sequence length, batch size, the number of layers, the number of heads, and the hidden size per head as 2048, 8, 2, 8, and 64 respectively. Figure 8, 9 show hardware traces of GPT and BERT models. From traces, we have similar observations as single Transformer layer profiling. There are many blank areas in the MME operating area, which indicates MME is idle. However, TPC is obviously busy. Potential performance issues of Transformer-based language models on GAUDI are (1). workload between MME and TPC is unbalanced. (2). There is no good overlap between MME and TPC. As a result, either MME or TPC is ideal, which causes a waste of computing resources. ## 4. Insights and Takeaways (1) We need to try to provide all source code so GraphCompiler can analyze the source code thoroughly and generate good mapping and schedule of MME and TPC. (2) The code should use very basic operations provided by Torch and avoid high-level abstracts like torch.einsum() for good mapping and schedule of MME and TPC by GraphCompiler. (3) When designing a neural network model, the user should consider that most calculations in the model can be transformed into matrix multiplication. In this way, the model can fully utilize MME' powerful computation capability. ## 5. Conclusion and Future Work In this work, we embarked on a comprehensive exploration of the performance capabilities of Habana's GAUDI processor when accelerating Transformers and Transformer-based models. Our findings not only address existing research gaps but also provide practitioners and researchers with valuable insights to optimize the performance of Transformers and Transformer-based models on GAUDI, further unlocking the potential of these models for real-world applications. In the future, we plan to investigate novel attention mechanisms tailored to GAUDI's architecture could also optimize performance for long sequences. ###### Acknowledgements. The material was supported by the U.S. DOE Office of Science (SC), Office of Advanced Scientific Computing Research (ASCR), under contracts DE-AC02-06CH11357. This work was also supported by NSF awards 2303820, 2303064, 2247080, 2311876, and 2312673. We gratefully acknowledge the computing resources provided and operated by the Joint Laboratory for System Evaluation (JISE) at Argonne National Laboratory. Figure 5. Profiling of linear Transformers. Colored blocks are computation periods and gaps between colored blocks are idle periods. Figure 6. Profiling of Performer. Colored blocks are computation periods, and gaps between colored blocks are idle periods. Figure 8: Hardware trace of GPT model. Figure 7: Activation functions in NLP. 
Figure 9: Hardware trace of BERT model.
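To make takeaway (2) concrete, the sketch below expresses the same attention-score computation once with torch.einsum() and once with basic operations; the tensor shapes and function names are our own illustrative choices and do not come from the profiled model code:

```
import torch

def scores_einsum(q, k):
    # High-level abstraction; harder for the GraphCompiler to map cleanly onto the MME.
    return torch.einsum("bhld,bhmd->bhlm", q, k)

def scores_basic(q, k):
    # The same computation written with basic ops that lower directly to a batched matrix multiply.
    return torch.matmul(q, k.transpose(-2, -1))

q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
assert torch.allclose(scores_einsum(q, k), scores_basic(q, k), atol=1e-5)
```

Both functions return the same result; the second form exposes the work as a single batched matrix multiply that the GraphCompiler can be expected to schedule on the MME directly.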
2304.00130
Top-down integration of a hBN quantum emitter in a monolithic photonic waveguide
Integrated quantum photonics, with potential applications in quantum information processing, relies on the integration of quantum emitters into on-chip photonic circuits. Hexagonal boron nitride (hBN) is recognized as a material that is compatible with such implementations, owing to its relatively high refractive index and low losses in the visible range, together with advantageous fabrication techniques. Here, we combine hBN waveguide nanofabrication with the recently demonstrated local generation of quantum emitters using electron irradiation to realize a fully top-down elementary quantum photonic circuit in this material, operating at room temperature. This proof of principle constitutes a first step towards deterministic quantum photonic circuits in hBN.
Domitille Gérard, Michael Rosticher, Kenji Watanabe, Takashi Taniguchi, Julien Barjon, Stéphanie Buil, Jean-Pierre Hermier, Aymeric Delteil
2023-03-31T21:09:04Z
http://arxiv.org/abs/2304.00130v2
# Top-down integration of a hBN quantum emitter in a monolithic photonic waveguide ###### Abstract Integrated quantum photonics, with potential applications in quantum information processing, relies on the integration of quantum emitters into on-chip photonic circuits. Hexagonal boron nitride (hBN) is recognized as a material that is compatible with such implementations, owing to its relatively high refractive index and low losses in the visible range, together with advantageous fabrication techniques. Here, we combine hBN waveguide nanofabrication with the recently demonstrated local generation of quantum emitters using electron irradiation to realize a fully top-down elementary quantum photonic circuit in this material, operating at room temperature. This proof of principle constitutes a first step towards deterministic quantum photonic circuits in hBN. Hexagonal boron nitride (hBN) has recently emerged as a very attractive platform for integrated quantum photonics [1; 2]. This van der Waals (vdW) material offers a wide range of fabrication techniques that allow to associate it with other materials -including other vdW crystals- in highly miniaturized complex devices. In particular, it presents favorable properties for photonics, with atomically flat surfaces and a very wide bandgap (\(\sim 6\) eV), opening the possibility to use it as a light confining medium. In this spirit, fabrication of complex hBN photonic structures, such as waveguides [3; 4], phase plates and microlenses [5], bullseye antennas [6] and photonic crystal structures [7; 8], have been recently demonstrated. Last but not least, hBN also hosts optically active point defects that act as excellent single-photon emitters (SPEs) in various wavelength ranges [9; 10; 11]. Most of these color centers occur randomly in the flake, thereby hindering scalable integration in photonic devices. Nonetheless, these emitters have been at the core of highly promising implementations of both monolithic and hybrid photonic devices, including waveguides [3; 12; 13], cavities [7; 14; 15] and fibers [16; 17; 18]. Those realizations are relying on either _a posteriori_ integration of the quantum emitter, or on the random presence of an emitter in the structure, which limits both control and scalability of those devices. The recent demonstration of local generation of blue-emitting color centers (B-centers) using a focused electron beam has offered an attractive workaround [19; 20; 21]. These emitters can be generated in a commercial scanning electron microscope (SEM) with a high control of their position and average number, and consistently exhibit a reproducible emission wavelength, a predominent in-plane polarization, a short lifetime and a high optical coherence [20; 21; 22; 23; 24]. Here, we take advantage of this e-beam technique by including it in a completely top-down approach for the fabrication of an elementary quantum photonic device, where the emitter generation is included as an additional step in the fabrication process. We first fabricate short waveguides (10 \(\mu\)m) with semicircular grating couplers [25; 26] and subsequently embed quantum emitters in the waveguide by local irradiation. Photoluminescence (PL) characterization demonstrates the coupling of both the excitation laser and the SPE emission into the waveguide. Although the design we implemented is not intended to be optimal, it illustrates the potential of electron-beam generated SPEs for quantum photonics and integrated optical quantum information. 
The geometry that we have opted for is a ridge waveguide, chosen for the simplicity of its realization. The light is confined by refractive index contrast between hBN (\(n_{o}\sim 2.2\)) and the environment. The SiO\({}_{2}\)/Si substrate has a refractive index that is low enough to obtain low-losses propagating modes in flakes as thin as 60 nm. Fig. 1(a) shows a sketch of the waveguide with semicircular grating couplers at its two output ports. Fig. 1(b) shows the waveguide cross section and the corresponding FDTD simulation of the fundamental TE mode profile. Fig. 1(c) shows the longitudinal profile of the same mode. For a point dipole emitting at 440 nm with an in-plane polarization orthogonal to the waveguide main axis and located at the mode antinode, we calculate that 23 % of the light is coupled to the waveguide in each direction, of which 18 % is extracted towards the top direction to be collected by a NA = 0.8 lens. Additionally, 5 % is directly coupled to the upper free space, allowing to characterize the sample without using the guided modes. Figure 2 depicts the fabrication steps. The waveguide fabrication starts with the excitation of high-pressure, high-temperature grown hBN [27] on a SiO\({}_{2}\)(300 nm)/Si substrate. Single crystals of 60 to 220 nm thickness are selected using atomic force microscopy and cathodoluminescence, to infer the quality of the crystal as well as the presence of carbon complexes, identified as precursors of the B-centers [21]. The waveguides are then processed from the hBN crystals based on the following steps [28]. The waveguide shape is patterned by electron beam lithography with a Raith eLine system working at 20 kV (PMMA A3, dose 250 \(\mu\)C/cm\({}^{2}\)). We then deposit 30 nm of aluminum that, after lift-off, serves as a mask in the following step. The etching of the waveguide is performed with a fluoride reactive ion etching (RIE) for 3 min 30 s with the following parameters: plasma power of 50 W, etching pressure of 40 mTorr, 40 sccm of CHF\({}_{3}\) and 4 sccm of O\({}_{2}\) (etching speed 33 nm/minute). The aluminum is then dissolved in a KOH solution. To generate the SPEs in the fabricated waveguide, the sample is finally inserted in a SEM. The waveguide is then irradiated at precise positions located in the center of the ridge, using a static focused beam of 0.4 nA under an acceleration voltage of 15 kV during 15 s. These parameters were found to provide an average SPE yield of order one per irradiated site in this sample, based on in-situ cathodoluminescence [29]. The SPE generation still has a partially probabilistic character, associated with fluctuations in the SPE number, in-plane polarization direction and depth. The two latter attributes impact their coupling with the guided mode. We therefore performed four irradiations on a 60 nm thick waveguide (termed WG1) and, in the following, we focus on a SPE that presents favorable characteristics. In addition, another waveguide, denoted WG2 (thickness 220 nm), was irradiated with a higher dose to yield a localized ensemble of SPEs. A SEM image of the final structure is shown figure 3(a). We characterize the waveguide in a confocal microscope operating at room temperature, equipped with a high-quantum-efficiency cooled CCD camera and avalanche photodiodes (APDs). We first verify that light can be coupled in, transmitted through and coupled out from the waveguide. Fig 3(b) shows a CCD image of the waveguide under laser illumination. 
The presence of sizable light intensity coming from the other port demonstrates coupling from free space to the guided mode and again to free space. The waveguide transmission spectrum can be inferred from the ratio between the transmitted and the reflected spectra of a broadband laser (fig 3c). It exhibits etalonning due to Fabry-Perot oscillations in the waveguide. The B-center zero-phonon line (ZPL) at 440 nm coincides with a maximum of transmission. We then perform PL measurements. The emitters are excited with a 405 nm laser diode operating in pulsed regime Figure 2: Fabrication of the hBN waveguide embedding quantum emitters. (a) A hBN crystal is exfoliated on a SiO\({}_{2}\)/Si substrate. (b) and (c) E-beam lithography is realized on PMMA. (d) Aluminum is deposited on the sample. (e) After lift-off, the remaining Al serves as a mask. (f) The hBN flake is etched away outside of the Al mask. (g) The Al mask is removed with KOH. (h) The waveguide is irradiated to generate localized quantum emitters. Figure 1: Design of the hBN waveguide embedding quantum emitters. (a) Scheme of the hBN waveguide on SiO\({}_{2}\)/Si embedding a SPE. (b) TE\({}_{00}\) mode profile as calculated with FDTD. (c) Longitudinal cut of the dipole emission propagation in the structure as calculated with FDTD. (80 MHz), at a power of \(\sim\)400 \(\mu\)W, which is in the linear regime of the emitter [20]. The PL signal is filtered out from the backreflected laser using a filter centered around the emitter ZPL, and collected using either the CCD camera or the APDs. We start with WG2, where an ensemble is generated in the waveguide, to perform spectroscopy measurements. We compare two different configurations of the detection path, while exciting from the top. The configuration 1 consists in exciting and detecting via the same free-space mode, directly above the emitter (fig. 4(a), upper panel). This configuration does not use the guided mode. In this configuration, we observe the ensemble spectrum. Its spectral shape is well known [20; 29], and features a 440 nm ZPL and phonon sidebands. We then verify that the PL light is coupled to the guided mode by switching to configuration 2, where we keep the same excitation path but we detect from one of the grating couplers, as depicted on the upper panel of figure 4(b). This configuration is obtained by fixing the collection path to the chosen grating coupler, and translating the excitation beam such that it excites the emitters, as monitored by PL measured on the CCD camera. As can be seen on the lower panel of figure 4(b), the spectrum is essentially unchanged by being collected through the waveguide. In the next step, we proceed to the characterization of an individual emitter. We compare three different configurations of the excitation and detection paths, which are depicted Fig. 5(a). The configurations 1 and 2 consist again in exciting directly above the emitter. Fig. 5(b) shows the corresponding CCD image, with the waveguide outline superimposed for clarity. The SPE PL emission is visible at the excitation spot (violet arrow) as well as at the two output ports (blue arrows), showing that it couples to the guided mode then to free-space via the grating couplers. This coupling is enabled by the large angle between the waveguide axis and the SPE polarization axis. The latter was determined by the dependence of the count rate on the angle of a polarizer inserted in the detection port (fig 5(c)). The emitter lifetime is 1.83 ns, as measured by its fluorescence decay. 
This value is consistent with prior measurements of B-centers in non-processed flakes [20]. Using a Hanbury Brown and Twiss setup, we measure the autocorrelation function \(g^{(2)}\) of the SPE in configuration 1, where the light is directly collected from the top of the emitter, at the location depicted by the violet circle on fig. 5(b). Fig 5(f) shows a histogram of the photon delay times integrated over multiples of the laser repetition period. The decreased coincidence number of the center period (zero delay) with respect to the others provide \(g^{(2)}(0)=0.35\pm 0.04\), indicating that light predominantly originates from a single B-center. This value is limited by background signal and can be largely improved by decreasing the temperature and using narrower filtering [24]. Switching to configuration 2 is done by keeping the same excitation path but detecting from one of the grating couplers (plain blue circle on fig. 5(b)), as depicted on the scheme fig. 5(a). In this configuration, the count rate is about a factor 4 lower, indicating that the emitter-waveguide coupling is 45 % lower than the ideal case considered in the simulations, where the emitter is located at the mode antinode. Figure 3: (a) SEM image of a waveguide. (b) CCD image of the waveguide under laser illumination focused on one of the grating couplers. The circle denotes the laser spot. (c) Transmission spectrum of a broadband source. Figure 4: (a) Upper panel: Scheme of the configuration of excitation and collection path (configuration 1). Lower panel: Ensemble spectrum in configuration 1. (b) Upper panel: Scheme of configuration 2. Lower panel: Ensemble spectrum in configuration 2. This lower count rate could also originate from deviations of the grating coupler dimensions from the nominal values. Fig. 5(e) shows the \(g^{(2)}\) measured in configuration 2, which exhibits similar antibunching (\(g^{(2)}(0)=0.33\pm 0.06\)). Crucially, this demonstrates that the \(g^{(2)}\) is not degraded through propagation in the structure. Finally, we show that the excitation laser can also be coupled to the guided mode (configuration 3) to excite the SPE. In this configuration, the laser excites the whole structure, such that other emitters luminesce in the waveguide and the grating couplers. Fig. 5(d) shows the corresponding CCD image. To ensure that we only detect light from the same SPE, we then collect the PL signal from the top of the waveguide, at the spot indicated by the blue arrow on fig. 5(d). Fig. 5(g) shows the corresponding coincidence histogram, yielding \(g^{(2)}(0)=0.26\pm 0.04\). Altogether, these results demonstrate that hBN fabrication and B-center generation can be combined in a complete process starting from hBN exfoliation all the way to deterministic emitter positioning. The obtained device yields guided single photons and operates at room temperature. Future improvements will require optimized photonic structures and emitter-to-photonic mode coupling and a more controlled SPE generation process. ###### Acknowledgements. The authors acknowledge Christophe Arnold for his help with cathodoluminescence measurements. This work is supported by the French Agence Nationale de la Recherche (ANR) under reference ANR-21-CE47-0004-01 (E\(-\)SCAPE project). This work also received funding from the European Union's Horizon 2020 research and innovation program under Grant No. 881603 (Graphene Flagship Core 3). K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790 and 20H00354).
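As a rough illustration of how the \(g^{(2)}(0)\) values quoted above are extracted from pulsed coincidence histograms such as those in Fig. 5(e-g), the sketch below (with made-up example counts, not the measured data) takes the ratio of the zero-delay peak area to the mean side-peak area and propagates Poisson counting uncertainties:

```
import numpy as np

def g2_zero(peak_counts, zero_index):
    """Estimate g2(0) from integrated coincidence counts per laser repetition period."""
    counts = np.asarray(peak_counts, dtype=float)
    center = counts[zero_index]
    side = np.delete(counts, zero_index)
    g2 = center / side.mean()
    # Poisson counting errors, combined in quadrature.
    rel_err = np.sqrt(1.0 / center + 1.0 / side.sum())
    return g2, g2 * rel_err

# Hypothetical example: nine side peaks and a suppressed central (zero-delay) peak.
counts = [410, 395, 402, 398, 405, 140, 399, 407, 396, 401]
print(g2_zero(counts, zero_index=5))   # roughly (0.35, 0.03) for these example counts
```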
2305.19990
Structure of jammed ellipse packings with a wide range of aspect ratios
Motivated in part by the recent observation of liquid glass in suspensions of ellipsoidal colloids, we examine the structure of jammed ellipse packings over a much wider range of particle aspect ratios ($\alpha$) than has been previously attempted. We determine $\phi_{\rm J}(\alpha)$ to high precision, and find empirical analytic formulae that predict $\phi_{\rm J}(\alpha)$ to within less than 0.1% for all $1 \leq \alpha \leq 10$, for three different particle dispersities. We find that the densest packings possess unusually-well-defined nearest-neighbor shells, including both a higher fraction $f_{\rm Z = 6}$ of particles with exactly six contacts and a previously-unreported short-range order marked by ``kinetically suppressed'' regions in their positional-orientational pair correlation function $g(r,\Delta \theta)$. We also show that the previously-reported approach to isostaticity (coordination number $Z_{\rm J} \to Z_{\rm iso} \equiv 6$) with increasing $\alpha$ is interrupted and then reversed as local nematic order increases: $Z_{\rm J}(\alpha)$ drops towards 4 as ellipses are more often trapped by contacts with a parallel-oriented neighbor on either side and a perpendicularly-oriented neighbor on either end. Finally we show that $\phi_{\rm J}/\phi_{\rm s}$ (where $\phi_{\rm s}$ is the saturated RSA packing density) is nearly $\alpha$-independent for systems that do not develop substantial local hexatic or nematic order during compression.
Sebastian Rocks, Robert S. Hoy
2023-05-31T16:12:24Z
http://arxiv.org/abs/2305.19990v1
# Structure of jammed ellipse packings with a wide range of aspect ratios ###### Abstract Motivated in part by the recent observation of liquid glass in suspensions of ellipsoidal colloids, we examine the structure of jammed ellipse packings over a much wider range of particle aspect ratios (\(\alpha\)) than has been previously attempted. We determine \(\phi_{\rm J}(\alpha)\) to high precision, and find empirical analytic formulae that predict \(\phi_{\rm J}(\alpha)\) to within less than \(0.1\%\) for all \(1\leq\alpha\leq 10\), for three different particle dispersities. Then we explore how these packings' local structural order varies with \(\alpha\). We find that the densest packings possess unusually-well-defined nearest-neighbor shells, including both a higher fraction \(f_{\rm Z=6}\) of particles with exactly six contacts and a previously-unreported short-range order marked by "kinetically suppressed" regions in their positional-orientational pair correlation function \(g(r,\Delta\theta)\). We also show that the previously-reported approach to isostaticity (coordination number \(Z_{\rm J}\to Z_{\rm iso}\equiv 6\)) with increasing \(\alpha\) is interrupted and then reversed as local nematic order increases: \(Z_{\rm J}(\alpha)\) drops towards 4 as ellipses are more often trapped by contacts with a parallel-oriented neighbor on either side and a perpendicularly-oriented neighbor on either end. Finally, we show that \(\phi_{\rm J}/\phi_{\rm s}\) (where \(\phi_{\rm s}\) is the saturated RSA packing density) is nearly \(\alpha\)-independent for systems that do not develop substantial local hexatic or nematic order during compression. Sebastian Rocks and Robert S. Hoy ## 1 Introduction Most real granular materials are composed of aspherical, shape-anisotropic particles. Theoretical efforts aiming to explain the various ways in which constituent-particle anisotropy affects systems' jamming phenomenology have focused primarily on simple models in which the degree of anisotropy can be controlled by varying one parameter: the aspect ratio \(\alpha\). The variation of jamming phenomenology with \(\alpha\) is the simplest for high-symmetry convex shapes, and as a consequence, the theoretical study of anisotropic-particle jamming began with ellipses and ellipsoids [1, 2, 3]. Jamming of low-aspect-ratio ellipses has been extensively studied [4, 5, 6, 7, 8, 9] and is now fairly well understood. In particular, for \(\alpha-1\ll 1\), the linear increase in \(\phi_{\rm J}\) [\(\phi_{\rm J}(\alpha)-\phi_{\rm J}(1)\sim(\alpha-1)\)] and the singularity in the average coordination number \(Z_{\rm J}\) of marginally jammed states [\(Z_{\rm J}(\alpha)-Z_{\rm J}(1)\propto\sqrt{\alpha-1}\)] have respectively been explained in terms of particles' ability to pack more efficiently than disks by rotating away from contacts [1, 2] and by the divergence in the number of quartic modes as \(\alpha\to 1\) [2, 4]. These features are closely associated with each other, in the sense that \(\phi_{\rm J}(\alpha)-\phi_{\rm J}(1)\sim[Z_{\rm J}(\alpha)-Z_{\rm J}(1)]^{2}\). 
On the other hand, while these early studies explained the most essential features of the variation of low-aspect-ratio ellipses' jamming phenomenology with \(\alpha\), they did not establish precise analytic formulas for \(\phi_{1}(\alpha)\) or \(Z_{\mathrm{J}}(\alpha)\), or examine the local structural ordering of jammed packings in much detail. Recent experiments have demonstrated the existence of a "liquid glass" state in both quasi-2D [7, 8, 9] and 3D [10, 11] suspensions of ellipsoidal colloids. In this state, which occupies packing fractions \(\phi\) that are between systems' orientational and translational glass transitions [i.e. all \(\phi_{\mathrm{F}}^{\mathrm{rot}}(\alpha)\leq\phi\leq\phi_{\mathrm{F}}^{\mathrm{ trans}}(\alpha)\)], particles rotations' are arrested but they remain free to translate within locally-nematic precursor domains. The existence of this state was predicted nearly 25 years ago by mode coupling theory [12] and confirmed nearly 10 years ago by Monte Carlo simulations of hard ellipses [8], but it remains poorly understood. The well-established, intimate connection between the glass and jamming transitions [13, 14] suggests that at least some of ellipses' liquid-glass state's physics is controlled by their jamming phenomenology. However, jamming of ellipses with \(\alpha\) that are sufficiently large for systems to form the (essential) locally-nematic precursor domains as systems are being compressed has been almost completely neglected by theorists. Only Ref. [3] examined ellipses with \(\alpha>2.5\), and no studies have examined systems with \(\alpha>5\). In this paper, we examine the structure of jammed ellipse packings over a much wider range of aspect ratios (\(1\leq\alpha\leq 10\)) than has previously been attempted. All of our results for \(\alpha<\sim 3\) are consistent with previous studies [1, 2, 3, 4, 5, 6], but we go beyond previ ous work by (1) identifying nearly-exact analytic expressions for \(\phi_{\dagger}(\alpha)\) and (2) performing a detailed characterization of jammed states' local structural order. We show that the primary signature distinguishing jammed ellipse packings with \(\alpha\simeq\alpha_{\text{max}}\) [where \(\alpha_{\text{max}}\) is the aspect ratio at which \(\phi_{\dagger}(\alpha)\) is maximized] from those with lower \(\phi_{\dagger}\) is that they possess unusually-well-defined nearest-neighbor shells, including both a higher fraction \(f_{\text{Z}=6}\) of particles with exactly six contacts and a previously-unreported short-range order marked by "kinetically suppressed" regions in the positional-orientational pair correlation function \(g(r,\Delta\theta)\). For \(\alpha>3\), we show that \(Z_{\text{J}}\) drops slowly towards 4 with increasing \(\alpha\), as local nematic order increases and ellipses are more often trapped by contacts with a parallel-oriented neighbor on either side and a perpendicularly-oriented neighbor on either end. This result stands in stark contrast to the one that might have been expected from Refs. [1, 2, 3, 4, 5, 6], which suggested \(\lim_{\alpha\to\infty}Z_{\text{J}}=6\). We also show that the ratio \(\phi_{\dagger}(\alpha)/\phi_{\dagger}(\alpha)\), where \(\phi_{\dagger}(\alpha)\) is ellipses' random sequential adsorption (RSA) density, is nearly constant for systems that do not develop substantial local hexatic or nematic order during compression. Finally, by comparing results for three distinct particle dispersities, we show that all of the abovementioned results are general. 
## 2 Methods To facilitate comparison of jammed and saturated-RSA ellipse packings, we examined the same set of 81 different particle aspect ratios (over the range \(1\leq\alpha\leq 10\)) considered in Ref. [15]. Jammed ellipse packings were obtained using a Lubachevsky-Stillinger-like [16] growth algorithm. To understand the effects of particle dispersity, we employed three different probability distributions for the ellipses' inital minor-axis lengths \(\sigma\): \[P_{\text{mono}}(\sigma)=\delta(\sigma-.07a)\] \[P_{\text{li}}(\sigma)=\frac{\delta(\sigma-.05a)}{2}+\frac{\delta(\sigma-.07a) }{2}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \tag{1}\] \[P_{\text{contain}}(\sigma)=\left\{\begin{array}{c}\frac{7}{4\sigma^{2}}\quad, \qquad.05a\leq\sigma\leq.07a\\ \\ 0\qquad,\quad\sigma<.05a\text{ or }\sigma>.07a\end{array}\right.\] where \(\delta\) is the Dirac delta function and \(a\) is an arbitrary unit of length. \(P_{\text{mono}}\) yields monodisperse particles, \(P_{\text{li}}\) yields the bidisperse 50:50 1:1.4 particles with radii \(R_{\text{small}}=.5a\), \(R_{\text{large}}=.7a\) that have been the standard model for studies of granular materials for the past 25 years, [17, 18] and \(P_{\text{contain}}\) yields continuously-polydisperse systems in which equal areas are occupied by particles of different sizes. For each \(\alpha\) and particle dispersity \(x\) [i.e. for each \(P_{k}(\sigma)\)], 100 jammed packings were prepared using the following procedure: \(N=1000\) nonoverlapping ellipses of aspect ratio \(\alpha\) were placed with random positions and orientations in square \(L\times L\) domains, with \(L=36.1818\sqrt{a}a\). Periodic boundary conditions were applied along both directions, so these initial states had packing fractions below 0.01. Jammed states were obtained using a Monte Carlo (MC) algorithm. Each MC cycle consisted of: 1. Attempting to translate particle \(i\) by a random displacement of maximum magnitude \(0.05fa\) along each Cartesian direction and rotate it by an angle of maximum magnitude \((10f/\alpha)^{\circ}\), 2. Repeating step 1 for \(i=1,2,...,N\), and 3. Increasing all particles' \(\sigma\) by the maximum possible factor consistent with hard-particle constraints, i.e. the factor that brings one pair of ellipses into tangential contact. This implementation of step (3) preserved the particle dispersities defined in Eq. 1. The move-size factor \(f\) was set to 1 at the beginning of the runs, and multiplied by \(3/4\) whenever 100 cycles had passed without a successful translation/rotation attempt. Runs were terminated and the configurations were considered jammed when \(f\) dropped below \(10^{-9}\), the minimum value allowed by our double-precision numerical implementation of this algorithm. Throughout this process, inter-ellipse overlaps were prevented using Zheng and Palffy-Muhoray's exact expression [19] for their distance of closest approach \(d_{\text{cap}}\). We characterized the structural order of the jammed packings using several commonly employed metrics: In addition to \(Z_{\text{J}}\), we examined the fractions \(f_{\text{Z}=6}\) (\(f_{\text{Z}=4}\)) of particles that have exactly six (four) contacts. \(f_{\text{Z}=6}=1\) in both the triangular lattice (the densest crystalline packing of both disks and ellipses, and isostatic jammed ellipse packings, while \(f_{\text{Z}=4}=1\) in isostatic disk packings and in "checkerboard"-like phases formed by perpendicularly-oriented, short single-layer lamellae. 
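Referring back to the compression algorithm above (steps 1-3), a schematic sketch of one MC cycle is given below, specialized to the \(\alpha=1\) (disk) limit so that the hard-particle overlap test reduces to a center-to-center distance check; for ellipses the test would instead use the Zheng and Palffy-Muhoray distance of closest approach [19]. The particle number, box size, cycle count, and the simplified shrink rule for the move-size factor \(f\) are illustrative choices of ours, not the production parameters:

```
import numpy as np

rng = np.random.default_rng(0)

def grow_to_contact(pos, sigma, L):
    """Largest factor by which all diameters can be scaled without overlap (periodic box)."""
    best = np.inf
    for i in range(len(pos)):
        d = pos - pos[i]
        d -= L * np.round(d / L)              # minimum-image convention
        r = np.sqrt((d ** 2).sum(axis=1))
        r[i] = np.inf                         # exclude self
        best = min(best, (r / (0.5 * (sigma + sigma[i]))).min())
    return best

def mc_cycle(pos, sigma, L, f):
    """One cycle: random single-particle moves, then uniform particle growth."""
    accepted = 0
    for i in range(len(pos)):
        trial = pos[i] + rng.uniform(-0.05 * f, 0.05 * f, size=2)
        trial -= L * np.floor(trial / L)      # wrap into the box
        d = pos - trial
        d -= L * np.round(d / L)
        r = np.sqrt((d ** 2).sum(axis=1))
        r[i] = np.inf
        if np.all(r > 0.5 * (sigma + sigma[i])):   # hard-particle constraint
            pos[i] = trial
            accepted += 1
    sigma *= grow_to_contact(pos, sigma, L)   # step (3): grow until one pair touches
    return accepted

# Tiny demonstration: 50 monodisperse disks in a periodic square box.
L, pos, sigma, f = 10.0, rng.uniform(0.0, 10.0, size=(50, 2)), np.full(50, 0.07), 1.0
for _ in range(200):
    if mc_cycle(pos, sigma, L, f) == 0:       # shrink moves when a full cycle stalls
        f *= 0.75
print("packing fraction:", np.pi * np.sum((sigma / 2) ** 2) / L ** 2)
```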
[20] Local nematic order was characterized using the standard order parameter \[S=\frac{1}{18N}\sum_{i=1}^{N}\sum_{j=1}^{18}\frac{3\cos^{2}(\Delta\theta_{ij}) )-1}{2}\equiv\frac{3\langle\cos^{2}(\Delta\theta)\rangle-1}{2}\,, \tag{2}\] where \(\Delta\theta_{ij}\) is the orientation-angle difference between ellipses \(i\) and \(j\), and the average is performed over the 18 nearest neighbors of each ellipse. Here 18 was chosen because it corresponds to the total number of first, second, and third nearest neighbors for particles in a triangular lattice; this choice makes \(S\) a measure of _mid-range_ nematic order. \(S\) is 1 for a perfectly-nematically-ordered and zero for an orientationally-disordered material. Local hexatic order was characterized using the Steinhardt-like [21] order parameter \[\Psi_{6}=\frac{1}{6N}\sum_{i=1}^{N}\left|\sum_{j=1}^{6}\exp(6i\Theta_{ij}) \right|. \tag{3}\] Here \(\Theta_{ij}\) is the angle between the vector \(\vec{\tau}_{ij}\) connecting ellipses \(i\) and \(j\) and an arbitrary fixed axis, and the inner sum is taken over the 6 nearest neighbors of each monomer \(i\). This metric has been shown to be useful in identifying the onset of liquid-crystalline order in hard-disk systems. [22]\(\Psi_{6}\) is 1 for the triangular lattice (at any density) since the angles between its \(\{\vec{\tau}_{ij}\}\) are multiples of \(60^{\circ}\), and zero for a perfectly-orientationally-disordered material since the angles between its \(\{\vec{\tau}_{ij}\}\) are random. To gain additional insight into the connections between variations in nematic and hexatic order and variations in \(\phi_{\text{J}}\), we exam ined the variance \[\Sigma^{2}(R)=\langle n^{2}(R)\rangle-\langle n(R)\rangle^{2} \tag{4}\] of the number of ellipses whose centers lie within randomly located circular "windows" of radius \(R\). The scaling of \(\Sigma^{2}\) with \(R\) is a sensitive measure of packings "uniformity" [23]. Crystals and quasicrystals have \(\Sigma^{2}\sim R^{d-1}\), standard amorphous packings have \(\Sigma^{2}\sim R^{d}\), and maximally random jammed (MRJ) packings have \(\Sigma^{2}\sim R^{d-1}\ln(R)\)[24, 23]. Finally we calculated the positional-orientational pair correlation function \(g(r,\Delta\theta)\), which is the ratio of the number of ellipse pairs with center-to-center distance \(r\) and orientation-angle difference \(\Delta\theta\) to the number that would be present in an ideal gas of these particles. In other words \(g(r,\Delta\theta)\) is just the generalization of the standard pair correlation function \(g(r)\) to include orientation-angle differences. Our recent study [15] showed that this metric is key to understanding how the structure of saturated RSA ellipse packings varies with \(\alpha\). All numerical data presented below are averages over the 100 packings we prepared for each \(\alpha\) and \(P_{\rm x}(\sigma)\). ## 3 Results ### Basic features Figure 1 shows \(\phi_{J}(\alpha)\) for all three particle dispersities. Differences between results for bidisperse and continuously-polydisperse systems are minimal, while the differences between these and results for monodisperse systems are expected from the latter's well-known tendency to crystallize even under rapid Lubachevsky-Stillinger-style compression [16]. All data for \(\alpha<\sim 3\), and the basic features of the entire \(\phi_{J}(\alpha)\) curves, are qualitatively consistent with previous studies [1, 2, 3, 4, 5, 6]. 
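For reference, the order parameters of Eqs. 2 and 3 can be evaluated directly from particle positions and orientation angles. The brute-force sketch below is our own (it uses a simple \(O(N^2)\) neighbor search with minimum-image distances rather than an optimized implementation); it computes \(S\) from the 18 nearest neighbors and \(\Psi_{6}\) from the 6 nearest neighbors of each particle:

```
import numpy as np

def order_parameters(pos, theta, L, n_s=18, n_psi=6):
    """Nematic S (Eq. 2) and hexatic Psi_6 (Eq. 3) in a periodic square box of side L."""
    n = len(pos)
    S_sum, psi_sum = 0.0, 0.0
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)                      # minimum-image displacements
        r = np.hypot(d[:, 0], d[:, 1])
        r[i] = np.inf                                 # exclude self
        order = np.argsort(r)
        nn_s = order[:n_s]                            # 18 nearest neighbors for S
        dtheta = theta[nn_s] - theta[i]
        S_sum += np.mean((3.0 * np.cos(dtheta) ** 2 - 1.0) / 2.0)
        nn_p = order[:n_psi]                          # 6 nearest neighbors for Psi_6
        bond_angles = np.arctan2(d[nn_p, 1], d[nn_p, 0])
        psi_sum += np.abs(np.exp(6j * bond_angles).sum()) / n_psi
    return S_sum / n, psi_sum / n

# Example on a synthetic configuration (not a jammed packing).
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 20.0, size=(400, 2))
theta = rng.uniform(0.0, np.pi, size=400)
print(order_parameters(pos, theta, L=20.0))
```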
Our data show that \(\phi_{\rm j}(\alpha)>\phi_{\rm j,disks}\equiv\phi_{\rm j}(1)\) for \(1<\alpha<2.70\) (\(1<\alpha<4.46\)) [\(1<\alpha<4.35\)] for monodisperse (bidisperse) [continuously-polydisperse] ellipses, indicating that particle anisotropy enhances packability over these ranges of \(\alpha\). Surprisingly, bidisperse and continuously-polydisperse systems actually pack better (have a higher \(\phi_{J}\)) than monodisperse systems for \(\alpha>\sim 1.5\), suggesting that a size ratio of 1.4 is large enough for small ellipses to fill the gaps between larger ones in an at-least-semicoherent fashion. We find that the \(\phi_{J}\) for monodisperse, bidisperse, and continuously-polydisperse ellipses are respectively very well fit by \[\phi_{\rm J}^{\rm mono}(\alpha)=\phi_{\rm J,disks}^{\rm mono}\times\frac{1+ \frac{73}{120}\ln(\alpha)+\frac{49}{9}(\alpha-1)}{1+\frac{108}{19}(\alpha-1) +\frac{13}{109}(\alpha-1)^{2}}, \tag{5}\] \[\phi_{\rm J}^{\rm hi}(\alpha)=\phi_{\rm J,disks}^{\rm hi}\times\frac{1+\frac{ 13}{120}\ln(\alpha)+\frac{49}{90}(\alpha-1)}{1+\frac{249}{90}(\alpha-1)+\frac{ 5}{86}(\alpha-1)^{2}}, \tag{6}\] and \[\phi_{\rm J}^{\rm contin}(\alpha)=\phi_{\rm J,disks}^{\rm contin}\times\frac{1+ \frac{11}{16}\ln(\alpha)+\frac{193}{40}(\alpha-1)}{1+\frac{247}{50}(\alpha-1) +\frac{10}{179}(\alpha-1)^{2}}. \tag{7}\] Here \(\phi_{\rm J,disks}\) depends on both particle dispersity and the protocol with which jammed states are prepared. For our bidisperse and continuously-polydisperse systems it takes on standard MRJ-like values, respectively 0.8404 and 0.8402 [18, 25]. For monodisperse systems it is substantially larger (0.8669) owing to these systems' well-known tendency to crystallize even under rapid Lubachevsky-Stillinger-style compression [16]. The mean fractional deviations of these expressions' predictions from the ensemble-averaged measured \(\phi_{J}\) are essentially zero, while the rms fractional deviations, which are respectively \(\sim 0.09\%\), \(\sim 0.12\%\) and \(0.09\%\) for monodisperse, bidisperse, and continuously-polydisperse ellipses, are only slightly above the lower bounds set by the statistical uncertainties on the measured \(\phi_{J}\). However, we do _not_ claim that any of Eqs. 5-7 are exact expressions valid for all \(\alpha\), or even that their functional form is the same as that of the "true" \(\phi_{\rm J}^{\rm x}(\alpha)\) which could be obtained given infinite computer power. We also emphasize that the coefficients preceding the \(\ln(\alpha)\) and \((\alpha-1)^{x}\) terms are preparation-protocol-dependent. Figures 2-3 respectively show snapshots of monodisperse and bidisperse jammed ellipse packings with \(\alpha=1\), 2, 3, 4, 5, 6, 8, and 10. Continuously-polydisperse packings are not shown here because they are very similar to their bidisperse counterparts. Results for \(\alpha=1\) are entirely as expected from Refs. [16, 17, 18]: bidisperse packings are disordered and approximately isostatic, while monodisperse disk packings are denser and exhibit long-range triangular-crystalline order interrupted by vacancies and line defects. For \(\alpha=2\) and 3, results are consistent with Refs. [1, 2, 3, 4, 5, 6]. Visual inspection suggests they the monodisperse packings are somewhat more ordered than their bidisperse counterparts, but the nature of any such differences is not immediately clear. 
Local nematic precursor domains comparable to those observed in experiments on ellipsoidal colloids [7, 8, 9, 10, 11] become increasingly apparent as \(\alpha\) increases beyond \(\sim 3\). The domains formed by monodisperse systems appear slightly more ordered than those formed by their bidisperse counterparts, but again the nature of any differences in their ordering is unclear from visual inspection alone. For \(\alpha>\sim 6\), systems form well-defined, mostly-single-layer lamellae. In contrast to the nearly randomly oriented nematic precursors for \(3<\sim\alpha<\sim 5\), neighboring lamellae are increasingly oriented perpendicularly to each other. This structure, Fig. 1: Jamming densities for ellipses with \(1\leq\alpha\leq 10\). Symbols show data from our LS runs while curves respectively show Eqs. 5-7, and the inset shows the fractional difference of the predictions of these equations from the data. Fig. 3: Snapshots of jammed 50:50 1:1.4 bidisperse ellipse packings for (top row, left to right) \(\alpha=1\), \(2\), \(3\), \(4\), and (bottom row, left to right) \(\alpha=5\), \(6\), \(8\), \(10\). Fig. 2: Snapshots of jammed monodisperse ellipse packings for (top row, left to right) \(\alpha=1\), \(2\), \(3\)\(4\), and (bottom row, left to right) \(\alpha=5\), \(6\), \(8\), \(10\). which is reminiscent of "checkerboard"-like phases (e.g. the high-density disordered equilibrium phase formed by hard rods on a lattice [20]), is more prominent for monodisperse systems. Notably, the incompatible orientation of neighboring lamellae gives rise to increasingly large voids that cannot be filled because rotations of the surrounding particles (which could otherwise lead to further increases in \(\phi\)) are blocked by other particles; this mechanism leads to the well-known \(1/\alpha\) scaling of \(\phi_{\rm I}\) in the large-\(\alpha\) limit [26, 27]. ### Measures of local positional-orientational order Next, to better understand these variations in local structure, we examine how the structural metrics discussed in Section 2 vary with \(\alpha\). Figure 4(a) shows results for the coordination number \(Z_{\rm J}\). Results for small \(\alpha\) are consistent with previous work [24, 4], showing both the characteristic square-root singularity [\(Z_{\rm J}(\alpha)-Z_{\rm J}(1)\propto\sqrt{\alpha-1}\)] for \(\alpha-1\ll 1\)] and convergence towards a plateau at moderate hypostaticity [\(Z_{\rm J}=Z_{\rm iso}-\epsilon\) with \(\epsilon=0.3-0.4\)] for \(1.5<\sim\alpha<\sim 2.5\). For \(\alpha>\sim 4\), however, \(Z_{\rm J}\) drops roughly logarithmically: \(Z_{\rm J}=Z_{0}-b\ln(\alpha)\), with a slightly-dispersity-dependent \(Z_{0}\), and \(b\simeq 1.8\). This drop in \(Z_{\rm J}\) was not observed in previous simulations of ellipse jamming (only one of which [3] reported \(Z_{\rm J}\) for \(\alpha>2.5\)), but comparable decreases have been reported for rigid-rod-like and semiflexible polymers [28, 29]. Below, we will show that this decrease in \(Z_{\rm J}\) is directly associated with an increase in low-coordinated rattler particles trapped inside locally nematic regions. Figure 4(b) shows the fraction \(f_{Z=6}\) of particles that have exactly six contacts. For all particle dispersities, the \(f_{Z=6}(\alpha)\) curves have broad peaks centered at \(\alpha\simeq\alpha_{\rm max}\). In other words, maximizing \(\phi_{\rm I}\) closely corresponds to maximizing the number of 6-coordinated particles. 
Monodisperse particles have both larger \(\phi_{\rm I}\) and larger \(f_{Z=6}\) than their polydisperse counterparts for \(\alpha<\alpha_{\rm max}\), owing largely to their greater apparent crystallinity. Results for different particle dispersities merge for \(\alpha>\sim 5\); few 6-coordinated particles are present in these systems. Since the densest packings have the most six-coordinated particles, a natural followup question is: are they also the most locally hexatically ordered? Results for \(\Psi_{6}(\alpha)\) [Figure 4(c)] suggests that the answer is: yes, but only when comparing results for different particle dispersities at the same \(\alpha\) for \(\alpha-1\ll 1\). Intriguingly, \(\Psi_{6}\) is actually slightly larger for \(\alpha=1.05\) than for \(\alpha=1\), suggesting that for increasing \(\alpha-1\ll 1\) the ability of particles to rotate away from contacts enhances their ability to hexatically order even as they become more anisotropic. Results for larger \(\alpha\) show that \(\Psi_{6}\) steadily declines with increasing \(\alpha\) for \(\alpha>\sim 1.2\) and is minimal for all \(\alpha>\sim 2\). While \(\Psi_{6}\) will decrease with increasing \(\alpha\) even for a uniaxially stretched triangular lattice (the densest possible monodisperse ellipse packing, which has \(\phi=\phi_{\rm xtal}\) for all \(\alpha\)[30]), the actual decrease shown in Fig. 4(c) is substantially faster than would occur for such a lattice. Sharper insights into the evolution of jammed ellipse packings' structure are obtained by examining other metrics. Figure 4(d) shows that the nematic order parameter \(S\) is strongly dispersity-dependent for small \(\alpha\) but nearly dispersity-independent for \(\alpha>\sim 1.8\). The prominent small-\(\alpha\) peak for monodisperse sys Figure 4: Local order parameters for jammed ellipse packings. All quantities plotted above are defined in Section 2. Dashed lines in panels (a) and (d) respectively indicate \(Z=7.7-1.8\ln(\alpha)\) and \(S=.174\ln(\alpha)-.09\). tems coincides with the abovementioned peak in their \(\Psi_{6}\); in the jammed packings for \(\alpha<\sim\alpha_{\rm max}=1.3\), many particles have 6 contacts _and_ are aligned with their nearest neighbors. These regions resemble a uniaxially stretched triangular lattice. For bidisperse and continuously-polydisperse systems, \(S\) actually becomes negative for \(1<\alpha<\sim 1.8\) because tip-side contacts are favored over side-side contacts in these systems. For \(\alpha>\sim 1.8\), all systems' \(S\) increases roughly logarithmically with \(\alpha\), with a crossover to a slightly slower rate of increase that corresponds to the emergence of well-defined locally nematic domains over the range \(4<\sim\alpha<6\). The beginning of this crossover regime roughly coincides with the end of the \(Z_{\rm J}=Z_{\rm iso}-\epsilon\) plateaus shown in Fig. 4(a). In other words, formation of increasingly-well-defined locally-nematic regions within jammed states causes their \(Z_{\rm J}\) to drop. This effect can be further elucidated by examining \(f_{Z=4}(\alpha)\) [Fig. 4(e)]. For \(\alpha<\sim 4\), \(f_{Z=4}\) mirrors \(f_{Z=6}\). Next \(f_{Z=4}\) increases sharply as local nematic domains emerge, reaching a peak at approximately the end of the \(S\)'s crossover regime, i.e. at \(\alpha\simeq 6\). Finally. for \(\alpha>\sim 6\), \(f_{Z=4}\) drops again. These trends can be explained as follows: \(f_{Z=4}\) increases sharply as local nematic domains emerge because (as shown in Figs. 
2-3) these domains lend themselves to \(Z=4\) configurations where ellipses are trapped by one parallel-aligned neighbor on either side and one perpendicularly-aligned neighbor on either end. As \(\alpha\) continues to increase, the increasing number of rattlers with \(Z<4\), leads to decreasing \(f_{Z=4}\). One might expect that systems with \(\alpha\simeq\alpha_{\rm max}\) are maximally dense because they are maximally uniform, and (as will be illustrated below) visual inspection suggests that this is indeed the case. However, as shown in Figure 5, except for the small-\(R\) oscillations associated with the locally crystalline order of monodisperse small-\(\alpha\) packings, \(\Sigma^{2}(R)\) results for all particle dispersities and all \(\alpha\) are _qualitatively_ very similar, and indeed results for systems of fixed dispersity nearly collapse when \(\Sigma^{2}/\alpha\) is plotted vs. \(R/\alpha\). A completely random arrangement of ellipses would have \(y=1\), i.e. \(\langle n(R)\rangle\propto\alpha(R/\alpha)^{2}\) and \(\Sigma^{2}(R)\sim\langle n(R)\rangle\propto\alpha(R/\alpha)^{2}\), while a crystalline or quasicrystalline ellipse packing would produce \(\Sigma^{2}(R)\sim\langle n(R)\rangle^{1/2}\propto\alpha^{1/2}(R/\alpha)\).24 While our \(N=1000\) packings are too small to rigorously evaluate the large-\(R\) asymptotic scalings of their \(\Sigma^{2}(R)\), we find that they have \(\Sigma^{2}\sim\alpha(R/\alpha)^{y}\) with \(1<y<\sim 3/2\) over the range of \(R/\alpha\) that allow good statistical sampling. The imperfect collapses of the data in panels (a-c) indicate that the growth of \(\Sigma^{2}\) with \(\alpha\) [at fixed \((R/\alpha)\)] is slightly sublinear in \(\alpha\) for \(\alpha<\sim 6\) and supralinear in \(\alpha\) for \(\alpha>\sim 6\). The crossover between these growth regimes reflects the change from (i) a net suppression of density fluctuations for \(\alpha<\sim 6\) (compared to those that would be present in completely random packings) by hard-particle excluded-volume constraints, to (ii) a net enhancement of density fluctuations for \(\alpha>\sim 6\) that reflects the increasing contrast between the high-density regions inside the nematic domains and the low-density regions at the boundaries between them. Footnote 2: The \(\alpha\)-dependence of \(\Sigma^{2}/\alpha\) is not a clear indication of the presence of a small-\(R\)-dependence of \(\Sigma^{2}/\alpha\). While the dataset presented above provides many insights, it fails to conclusively specify what (other than higher \(f_{Z=6}\)) distinguishes the densest packings from their lower-\(\phi_{\rm J}\) counterparts. Fig. 5: Uniformity of jammed ellipse packings. Panels (a-c) respectively show results for monodisperse, bidisperse, and continuously-polydisperse systems. Results for the \(\alpha\) highlighted in Figs. 2-3 are shown in the colors indicated on the legend, while results for \(\alpha=\alpha_{\rm min}\) are shown in black. The dotted and dashed curves respectively indicate \(\Sigma^{2}/\alpha=0.17(R/\alpha)\) and \(\Sigma^{2}/\alpha=(R/\alpha)^{3/2}\). Here \(\alpha_{\rm min}\) is the minor-axis length of the smallest particles for the given \(\alpha\) and dispersity. We now show that this can be done by examining positional-orientational correlations. Figure 6 shows representative snapshots and ensemble-averaged \(g(r\Delta\theta)\) for systems with \(\alpha=\alpha_{\max}\). 
The monodisperse packing plainly has a mid-to-long-range crystalline order that superficially resembles that of the triangular lattice. Nearly all particles have exactly six nearest neighbors that are easily discernible through visual inspection, even though many particles have \(Z<6\) (i.e. fewer than six _contacts_). However, in contrast to the densest crystalline ellipse packing (in which all ellipses are oriented in the same direction and thus have \(\Delta\theta=0\)), these nearest-neighbor particles exhibit a wide range of \(\Delta\theta\). Tip-to-side contacts are heavily favored, with \(g(r,\Delta\theta)>30\) in the limit corresponding to perpendicularly-oriented contacting ellipses, i.e. \(r/\sigma_{\min}\rightarrow(\alpha+1)/2\) and \(\Delta\theta\to 90^{\circ}\). At the same time, \(g(r,\Delta\theta)<.01\) for certain (\(r,\Delta\theta\)) that are sterically allowed (i.e. compatible with 2-body hard-particle impenetrability constraints) yet are strongly suppressed by collective many-body effects. The corresponding minima in \(g(r,\Delta\theta)\) are both broad and deep: for example, \(g(r,\Delta\theta)<.1\) for all \(1.4<r/\sigma_{\min}<1.7\) with \(\Delta\theta\ll 90^{\circ}\). The same trends are present for bidisperse and continuously-polydisperse systems even through their \(g(r,\Delta\theta)\) are qualitatively different. More specifically, although increasing particle dispersity changes the locations of \(g(r,\Delta\theta)\)'s extrema, reduces the height and increases the width of its maxima, and reduces both the depth and width of its minima, these minima remain both broad and deep. We refer to the ranges of (\(r,\Delta\theta\)) that are sterically allowed yet have \(g(r,\Delta\theta)<0.1\) as "kinetically suppressed" because the various collective many-body ordering processes that occur during dynamic compression make these configurations at least an order of magnitude less likely in the final jammed packings than they would be in completely disordered packings (i.e. ideal gases) with the same \(\phi\). Critically, for all three particle dispersities, the kinetically suppressed regions are largest for \(\alpha\simeq\alpha_{\max}\), and are absent for systems with \(\phi_{\rm j}\leq\phi_{\rm j,disks}\). Comparing Fig. 6 as well as \(g(r,\Delta\theta)\) results for other \(\alpha\) (not shown here) to the results presented above shows that large kinetically suppressed regions are present in systems where most particles have six clearly-distinguishable nearest neighbors, whether they actually _contact_ all of these neighbors or not. Nearest-neighbor shells including six members are "full;" they prevent any other particles from achieving close proximity, and they do so in a highly \(\alpha\)- and \(\Delta\theta\)-dependent way. As a consequence, systems in which most particles' nearest-neighbor shells are full have richly structured \(g(r,\Delta\theta)\) with large kinetically suppressed regions. These regions are not present in saturated RSA ellipse packings,[15] which suggests that they arise during the later stages of compression, i.e. over the range \(\phi_{\rm s}(\alpha)<\sim\phi<\phi_{\rm j}(\alpha)\). 
### Comparison to RSA packings For a wide variety of particle shapes, complex liquid-state dynamics are expected for packing fractions in the range \(\phi_{\rm o}(\alpha)<\phi<\phi_{\rm g}^{\rm trans}(\alpha)\), where \(\phi_{\rm o}(\alpha)\) is the "onset" density.[31, 32] In hard-ellipse liquids, onset and translational-rotational decoupling[33] have been associated with the emergence of unstable nematic-like regions with a mean lifetime \(\tau_{\rm mem}\) that exceeds the character Fig. 6: Snapshots (left panels) and \(g(r,\Delta\theta)\) (right panels) for the densest jammed states for each particle-dispersity category. Top panels show monodisperse systems with \(\alpha=1.3\), middle panels show 50:50 1:1.4 bidisperse systems with \(\alpha=1.45\), and bottom panels show continuously-polydisperse systems with \(\alpha=1.45\). Colors are assigned only to regions with \(g(r,\Delta\theta)>0.1\), so both the sterically forbidden and kinetically suppressed regions are shown in white. istic relaxation time \(\tau_{0}\) for translational diffusion.[34] Measurement of the ratios \(\phi_{\mathrm{g}}^{\mathrm{trans}}(\alpha)/\phi_{\mathrm{g}}^{\mathrm{rot}}(\alpha)\), \(\phi_{\mathrm{g}}^{\mathrm{trans}}(\alpha)/\phi_{\mathrm{o}}(\alpha)\) and \(\phi_{\mathrm{g}}^{\mathrm{rot}}(\alpha)/\phi_{\mathrm{o}}(\alpha)\) for various shapes over a wide range of \(\alpha\) could provide additional valuable insights into these dynamics, but evaluating these quantities is computationally expensive.[35, 36] An alternative approach that should provide at least some of the same insights is to measure the ratio \(\phi_{\mathrm{I}}(\alpha)/\phi_{\mathrm{s}}(\alpha)\), where the RSA density \(\phi_{\mathrm{s}}(\alpha)\) is the maximum density at which impenetrable particles of aspect ratio \(\alpha\) can be packed under a protocol that sequentially inserts them with random positions and orientations. This ratio of fundamental interest because it indicates how much packing efficiency particles can gain via cooperative translations and rotations during the later stages of compression, i.e. over the range \(\phi_{\mathrm{r}}(\alpha)<\phi<\phi_{\mathrm{J}}(\alpha)\). Surprisingly, to the best of our knowledge, no previous studies have systematically examined \(\phi_{\mathrm{I}}(\alpha)/\phi_{\mathrm{s}}(\alpha)\) for ellipses, ellipsoids, or other comparable 2D or 3D convex shapes. Remarkably, our expressions for \(\phi_{\mathrm{J}}^{\mathrm{x}}(\alpha)\) (Eqs. 5-7) have the same functional form as one that predicts monodisperse ellipses' \(\phi_{\mathrm{s}}(\alpha)\) to within \(\sim 0.1\%\) over the same range of \(\alpha\) (\(1\leq\alpha\leq 10\)) considered here:[15] \[\phi_{\mathrm{s}}(\alpha)=\phi_{\mathrm{s,disks}}\times\frac{1+\frac{3}{2} \ln(\alpha)+\frac{17}{25}(\alpha-1)}{1+\frac{80}{65}(\alpha-1)+\frac{1}{65}( \alpha-1)^{2}}, \tag{8}\] where \(\phi_{\mathrm{s,disks}}=.54707\).[37] As shown in Figure 7, in our bidisperse and continuously-polydisperse systems, the ratio \(\phi_{\mathrm{I}}(\alpha)/\phi_{\mathrm{s}}(\alpha)\) stays within \(\sim 1\%\) of 1.53 for all \(1\leq\alpha\leq 5\). \(\phi_{\mathrm{I}}(\alpha)/\phi_{\mathrm{s}}(\alpha)\) is larger for our small-\(\alpha\) monodisperse systems, and for all dissipities for \(\alpha>\sim 5\). In other words, our data indicates that this ratio is almost \(\alpha\)-independent as long as neither substantial local hexatic order nor substantial local nematic order develops during compression. 
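To illustrate the ratio discussed above, the sketch below evaluates \(\phi_{\rm J}^{\rm mono}(\alpha)/\phi_{\rm s}(\alpha)\) from Eq. 5 and Eq. 8, with the coefficients transcribed as printed in the text and \(\phi_{\rm J,disks}^{\rm mono}=0.8669\), \(\phi_{\rm s,disks}=0.54707\) taken from the text; this is our own illustration, not the authors' analysis code:

```
import numpy as np

def phi_J_mono(alpha):
    # Eq. 5, coefficients as printed in the text.
    num = 1 + (73 / 120) * np.log(alpha) + (49 / 9) * (alpha - 1)
    den = 1 + (108 / 19) * (alpha - 1) + (13 / 109) * (alpha - 1) ** 2
    return 0.8669 * num / den

def phi_s(alpha):
    # Eq. 8 (saturated RSA density of monodisperse ellipses), coefficients as printed.
    num = 1 + 1.5 * np.log(alpha) + (17 / 25) * (alpha - 1)
    den = 1 + (80 / 65) * (alpha - 1) + (1 / 65) * (alpha - 1) ** 2
    return 0.54707 * num / den

alpha = np.linspace(1.0, 10.0, 10)
print(np.round(phi_J_mono(alpha) / phi_s(alpha), 3))
```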
## 4 Discussion and Conclusions In this paper, we performed a detailed characterization of jammed ellipse packings over a much wider range of aspect ratios (\(1\leq\alpha\leq 10\)) than had previously been attempted. Our first major goal was to determine \(\phi_{\mathrm{I}}(\alpha)\) to high precision, for three different particle dispersities: mono-, bi-, and continuously-polydisperse. After doing so, we found simple analytic formulae (Eqs. 5-7) that predict these \(\phi_{\mathrm{J}}\) to within \(<\sim 0.1\%\). Surprisingly, ellipses' jamming and saturated-RSA packing densities are both quantitatively predicted over entire range of \(\alpha\) by a common functional form \[\frac{\phi_{\mathrm{K}}(\alpha)}{\phi_{\mathrm{X}}(1)}=\frac{1+a\ln(\alpha)+ b(\alpha-1)}{1+c(\alpha-1)+d(\alpha-1)^{2}}, \tag{9}\] where \(\phi_{\mathrm{X}}\) is the jamming or RSA density (i.e. \(\phi_{\mathrm{I}}\) or \(\phi_{\mathrm{s}}\)) and the coefficients \(\{a,b,c,d\}\) depend on particle dispersity and the packing preparation protocol. Moreover, the ratio \(\phi_{\mathrm{I}}(\alpha)/\phi_{\mathrm{s}}(\alpha)\) remains almost \(\alpha\)-independent, suggesting that the amount of extra packing efficiency ellipses can gain via cooperative translations and rotations during the later stages of compression depends only depends only weakly on their anisotropy, as long as neither substantial local hexatic nor substantial local nematic order develops during compression. Comparison to previous results for other particle types including spherocylinders and strongly-overlapping \(n\)-mers[38, 39] suggests that Eq. 9 may be applicable to all convex 2D shapes, with \(\{a,b,c,d\}\) that depend on particles' shape in addition to the factors mentioned above. Our second major goal was to characterize the local structure of higher-\(\alpha\) packings including the local nematic domains found in liquid-glass colloidal suspensions.[7, 8, 9, 10, 11] Previous studies of ellipse jamming found that \(Z_{\mathrm{I}}(\alpha)\) plateaus at moderate hypostaticity [\(Z_{\mathrm{I}}=6-\epsilon\) with \(\epsilon=0.3-0.4\) for \(1.5<\sim\alpha<\sim 2.5\)],[2, 4, 6] and implied that this plateau extends to \(\alpha=\infty\). However, since these studies did not examine \(\alpha\) that were sufficiently large to possess a high-\(\phi\) equilibrium nematic phase (e.g. \(\alpha>2.4\) for monodisperse ellipses[40]) and hence only examined nearly-isotropic packings, the question of whether it actually does so had remained open. Here we found that \(Z_{\mathrm{J}}\) drops roughly logarithmically [\(Z_{\mathrm{J}}\simeq Z_{0}-b\ln(\alpha)\), with weakly-dispersity-dependent \(Z_{0}\) and \(b\)] for \(\alpha>\sim 3\). This drop in \(Z_{\mathrm{J}}\) results largely from an increasing fraction of particles that are trapped inside locally nematic domains by a parallel-oriented neighbor on either side and a perpendicularly-oriented neighbor on either end, and hence have no more than four contacts. The emergence of comparable particle caging during dynamic compression may help explain the onset of liquid-glass physics in athermal systems.[34] The final major question we wished to answer in this study was: what structural features distinguish the densest jammed packings from their lower-\(\phi_{\mathrm{J}}\) counterparts? 
Examination of commonly employed structural metrics such as the local nematic order parameter \(S\), the Steinhardt-like order parameter \(\Psi_{6}\)[22] and the uniformity metric \(\Sigma^{2}(R)\)[23] failed to conclusively answer this question. Instead we showed that the fraction of particles that have exactly six contacts (\(f_{Z=6}\)) is maximized at \(\alpha\simeq\alpha_{\mathrm{max}}\) for all particle dispersities even though \(f_{Z=6}(\alpha)\) is itself highly dispersity-dependent, and that locally-hyperstatic particles within \(\alpha\simeq\alpha_{\mathrm{max}}\) packings are far more likely to have six clearly-distinguishable nearest neighbors than their counterparts in systems with \(\phi_{\mathrm{J}}<\phi_{\mathrm{J,disks}}\), even in the absence of substantial local hexatic order. While it has long been known that nearest-neighbor shells including six members are full and hence prevent any other particles from achieving close proximity to the reference particle, here we showed that they do so in a highly \(\alpha\)- and \(\Delta\theta\)-dependent way that (in systems with \(\alpha\simeq\alpha_{\mathrm{max}}\)) leads to richly structured \(g(r,\Delta\theta)\) with large kinetically suppressed regions. In other words, we showed that particles with \(\alpha\simeq\alpha_{\mathrm{max}}\) develop unusually-well Fig. 7: Ratio of the jamming densities \(\phi_{\mathrm{J}}^{\mathrm{x}}(\alpha)\) to the saturated RSA packing densities \(\phi_{\mathrm{s}}(\alpha)\)[15] of monodisperse ellipses. defined nearest-neighbor shells during compression, for three very different particle dispersities, even through the structure of the shells themselves is highly dispersity-dependent. We conclude that it is these well-defined shells that allow \(\alpha\simeq\alpha_{\rm max}\) ellipses' \(\phi_{\rm J}\) to be substantially higher than disks' \(\phi_{\rm J}\) even though their jammed states do not possess longer-range crystalline order. This conclusion places Donev _et al._'s argument that ellipses' ability to rotate away from contact allows them to pack more densely than disks [1] on a firmer quantitative foundation. ## Conflicts of interest There are no conflicts to declare. ## Acknowledgements This material is based upon work supported by the National Science Foundation under Grant DMR-2026271.
2309.13826
Building a quantum superposition of conscious states with integrated information theory
Could there be a quantum superposition of consciousness, as in the Wigner's friend thought experiment? The integrated information theory (IIT) of consciousness has turned this into a well-defined question. According to IIT, consciousness is a measurable physical quantity given by integrated information ($\Phi$), such that the amount of consciousness in a system corresponds to its amount of $\Phi$. We use the most recent IIT formalism (IIT4.0) to analyze the simplest non-zero $\Phi$ system known as a feedback dyad. We then propose a circuit that puts the dyad into a superposition of states which, according to IIT, would correspond to a superposition of conscious states. We refer to this as "Schr\"odinger's dyad". We therefore show that either IIT is false or the simple dyad is conscious and can easily be put into a superposition of conscious states. We then identify the simplest possible consciousness-collapse model, which predicts that this superposition is unstable and collapses at a rate determined by a measure of difference between the superposed conscious states. Our analysis will enable us to make a number of key observations about the general structure of integrated information theory (IIT2.0, IIT3.0, IIT4.0, and QIIT) and the general structure of consciousness-collapse models.
Kelvin J. McQueen, Ian T. Durham, Markus P. Mueller
2023-09-25T02:15:24Z
http://arxiv.org/abs/2309.13826v1
# Building a quantum superposition of conscious states with integrated information theory ###### Abstract Could there be a quantum superposition of consciousness, as in the Wigner's friend thought experiment? The integrated information theory (IIT) of consciousness has turned this into a well-defined question. According to IIT, consciousness is a measurable physical quantity given by integrated information (\(\Phi\)), such that the amount of consciousness in a system corresponds to its amount of \(\Phi\). We use the most recent IIT formalism (IIT4.0) to analyze the simplest non-zero \(\Phi\) system known as a feedback dyad. We then propose a circuit that puts the dyad into a superposition of states which, according to IIT, would correspond to a superposition of conscious states. We refer to this as "Schrodinger's dyad". We therefore show that either IIT is false or the simple dyad is conscious and can easily be put into a superposition of conscious states. We then identify the simplest possible consciousness-collapse model, which predicts that this superposition is unstable and collapses at a rate determined by a measure of difference between the superposed conscious states. Our analysis will enable us to make a number of key observations about the general structure of integrated information theory (IIT2.0, IIT3.0, IIT4.0, and QIIT) and the general structure of consciousness-collapse models. ###### Contents * 1 Introduction * 2 The feedback dyad * 3 Calculating the amount of consciousness (\(\Phi\)) in the feedback dyad * 4 Calculating the state of consciousness (Q-shape) of the feedback dyad * 5 The simplest consciousness-collapse model * 6 Physically implementing the dyad * 7 Conclusion * A The general IIT4.0 formalism * B The quantum feedback dyad in QIIT * C Solution of the optimization problem of section 5 Introduction Could there be a quantum superposition of consciousness? This question was raised by Eugene Wigner in the thought experiment that is now known as "Wigner's Friend". Wigner imagined his friend, in a nearby sealed lab, making a quantum measurement. Wigner, who is uncertain of his friend's result, wonders whether he should consider his friend to have entered a quantum superposition of experiencing different results. Wigner argued that this is "absurd because it implies that my friend was in a state of suspended animation". He then concluded that "consciousness must have a different role in quantum mechanics than the inanimate measuring device" ([51, p.180]). There has since been much speculation by physicists and philosophers over whether states of consciousness could be superposed and what that would even mean. For example, there have been many attempts to extend the Wigner's friend scenario and the associated epistemological and metaphysical implications ([21], [14], [19], [13], [52]). There have also been many attempts to make sense of superpositions of conscious states in many worlds and many minds interpretations of quantum mechanics ([20], [44], [50], [33], [16], [5], [8], [31], [32]). However, without any well-defined criteria for determining which physical states are conscious (and to what degree), the question of whether there could be such a superposition, and what it would be like to be in one, is difficult to evaluate. Recent neuroscience, on the other hand, has seen the rise of mathematical theories of consciousness, notably, the integrated information theory, or IIT for short ([46], [47], [40], [48], [4])). 
IIT associates systems with both quantitative amounts of consciousness (roughly, the amount of integrated information in the system, denoted by the symbol \(\Phi\)) and qualitative states of consciousness (roughly, the "shape" of the system's integrated information, or its "Q-shape"). More recently, IIT has been extended into the quantum domain in a framework known as QIIT ([53], [27], [3]). Inspired by these results, Wigner's suggestion that consciousness may be responsible for the collapse of the wave function has been resurrected in models that use integrated information as a criterion for collapse ([28], [17]). In comparison to standard collapse models [11], it has been claimed that IIT-based consciousness-collapse models may be much easier to experimentally test, since they can be tested by the right sorts of quantum computers, if only we could design the right sort of circuit [17]. In this paper, we propose such a circuit which, if implemented, would put a simple quantum computer into a superposition of states of conscious experience according to the IIT definition of consciousness. Following [17], we consider the simplest non-zero \(\Phi\) system, a feedback dyad. Classically, the dyad has four possible states: (0,0), (1,1), (0,1), and (1,0). Each state is predicted to have a tiny amount of consciousness. This prediction is robust across successive IIT formalisms. Each dyad state has \(\Phi=2\) in IIT2.0 and \(\Phi=1\) in IIT3.0, as shown in [37]. Here, we show that each dyad state has \(\Phi=2\) in IIT4.0 (section 3) and in QIIT (appendix B). Although these states have the same _amount_ of consciousness, they yield different _states_ of consciousness, because they are associated with different Q-shapes, as we show in section 4. The dyad in a superposition of two of its four possible states is therefore the simplest consciousness superposition predicted by IIT. We refer to this as "Schrodinger's dyad" and we propose a simple quantum circuit that allows Schrodinger's dyad to be built. We would like to stress that we are not endorsing IIT, and so we remain agnostic on whether the dyad is conscious in any meaningful sense. IIT has been shown to be consistent with a number of important experimental results in neuroscience ([34], [15], [22], [2], [30], [29], [38]). However, many criticisms of IIT have also been proposed, and we are sympathetic with some of them ([23], [1], [12], [7], [18], [41]). Either way, what we show is that unless one drastically revises IIT (e.g. [36], [35]), then either IIT is false or the dyad is conscious and can easily be put into a superposition of conscious states. We leave it to the reader to decide between these options. In section 5 we identify the simplest possible consciousness-collapse model, which predicts that Schrodinger's dyad is unstable and collapses at a rate determined by a measure of difference between the superposed conscious states. We take the Q-shapes defined in section 4, and use them to define the simplest possible collapse operators. This toy model makes a number of important properties of such models transparent. We then compare our toy-model to the more general consciousness-collapse model proposed in [17]. Finally, in section 6 we propose a physical implementation of Schrodinger's dyad, in which two photons enter into a feedback loop inside an optical cable. On the one hand, the implementation may potentially falsify the simplest versions of the IIT-based consciousness collapse models. 
On the other hand, the example raises a difficulty with IIT when it comes to physical implementation: IIT assumes that there is always an objective fact of the matter about what the basic causal units in a physical system are. In addition to identifying this prediction of IIT, our analysis helps to reveal much about the structure of IIT. For example, we resolve a crucial ambiguity in IIT in which logic gates are treated as having binary states (section 2). We also identify a subtle inconsistency between the IIT4.0 description of the dyad and the axioms of IIT4.0 (appendix A). The paper is organized as follows. Section 2 describes the classical feedback dyad. Section 3 shows how to calculate the classical dyad's \(\Phi\) using IIT4.0. Section 4 provides a simple way of describing the classical dyad's Q-shape. Section 5 explains the simple consciousness-collapse model. Section 6 proposes a physical implementation of the dyad, which may test the model, but which also raises questions about how to understand causality in IIT. Finally, appendix A explains IIT4.0 more generally and identifies the steps in the IIT calculus that our simple dyad allowed us to skip; appendix B shows how our analysis is consistent with QIIT as presented in [3]; and appendix C proves a general result concerning our Q-shape collapse operators. ## 2 The feedback dyad The classical dyad is a simple system consisting of two elements or channels, A and B, that simply swap their states from one time step to the next. That is, if at some time, \(t_{0}\), A is in state 1 and B is in state 0, then at the next time step, \(t_{+1}\), A is in state 0 and B is in state 1. The action on these channels is equivalent to a logical SWAP gate which is given a simple diagrammatic representation in Figure 1. Figure 1: The logical SWAP gate simply exchanges the values \(a\) and \(b\) of channels A and B respectively such that if the input is (A=a,B=b), then the output is (A=b,B=a). The figure makes it clear that there are three distinct levels of description to the dyad: channels, channel values, and channel relationships. A and B are the channels that are related via the logical SWAP gate in such a way as to exchange their values. In the language of quantum information, the channels are systems, the channel values are states, and the channel relationships are transformations. This is a crucial point. The SWAP gate is a _transformation_ of the states of systems A and B. The gate itself is never "in a state" on its own. This is an important distinction because gates are frequently described as being in a state that possesses a value, especially in IIT3.0 [40]. In particular, the elements or nodes in the IIT3.0 diagrams have binary states but are also treated as being logic gates. If the nodes are understood as neurons, then they are considered as being in an "active" or an "inactive" state [9], much like a channel. Yet the neurons are also said to act like gates by only activating in response to the right combination of connections to other neurons that are themselves either active or inactive. But this is really a notational relic from the early days of Boolean networks [24, 25] that ignores what is happening at a more granular level. In the neuronal case, an "active" or "inactive" neuron really refers to whether it sends a signal via some channel, i.e. it represents an _action_. The difference is typically unimportant at the granularity in which it is usually considered.
But when considering quantum models of these networks, this treatment breaks down. As such, it is the states of the channels that can be in superposition, not the gates themselves. In a neuronal sense, it is thus conscious states that are in superposition, not the physical neurons themselves. Figure 1 also highlights a fundamental causal dependence in the dyad. The output of channel A causally depends on the input to channel B and vice-versa. In order to emphasize this point, we use capital letters to identify the channels or systems themselves and lowercase letters to identify the values the channels can attain, i.e. their states. One could think of the SWAP gate as a black box with the channels simply identifying the locations of the inputs and outputs of the box. Values are fed into the inputs and then produced by the outputs. To develop a feedback system with this SWAP gate we simply feed the outputs directly back into the inputs. For simplicity we can represent this system over a series of time steps in the manner shown in Figure 2. The output at a given time step is given as in Figure 1. For example, the system at a given time step is given by \((a,b)\) with \(a,b\in\{0,1\}\), where the first element of the pair is the state of channel A and the second is the state of channel B. If the inputs were \((a=0,b=1)\equiv(0,1)\), the evolution of the system state over time is just \((0,1)\rightarrow(1,0)\rightarrow(0,1)\). Creating Schrodinger's dyad then requires that we treat the channels as quantum and represent their states as such. That is, a classical state \((a,b)\) is equivalent to a pure quantum state in the so-called computational basis \(|a,b\rangle\). A superposition of the \(|1,0\rangle\) and \(|0,0\rangle\) states can be achieved by feeding the superposition state \[|+\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle+|1\rangle\right) \tag{1}\] into channel \(B\) at \(t_{-1}\). The input state to the dyad as a whole at \(t_{-1}\) is \[|0,+\rangle=|0\rangle\otimes|+\rangle=\frac{1}{\sqrt{2}}\left(|0,0\rangle+|0,1\rangle\right), \tag{2}\] which then evolves into the following state at \(t_{0}\): \[|+,0\rangle=|+\rangle\otimes|0\rangle=\frac{1}{\sqrt{2}}\left(|0,0\rangle+|1,0\rangle\right). \tag{3}\] This is not a superposition of \(\Phi\) values, since all four possible classical states of the dyad have the same \(\Phi\) value. It _is_, however, a superposition of distinct Q-shapes according to IIT3.0 and IIT4.0, as we show in section 4. And so according to IIT, the \(t_{0}\) state of equation 3 represents a superposition of qualitatively distinct states of consciousness. We begin by calculating the dyad's \(\Phi\). Figure 2: The SWAP gate considered as a feedback system over a series of time steps \(t_{-1}\), \(t_{0}\), and \(t_{+1}\). The output at any given time step is determined by the input at the previous time step according to the mapping shown in Figure 1. ## 3 Calculating the amount of consciousness (\(\Phi\)) in the feedback dyad The general procedure for calculating \(\Phi\) and Q-shape takes many steps. Fortunately, the simplicity of our dyad allows us to skip several steps and to emphasize the most important ones. We explain the more general case in appendix A. The dyad consists of two parts, A and B. We begin by calculating the integrated cause information and the integrated effect information of each part.
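Before carrying out this calculation, note that the state preparation of equations (1)-(3) is easy to check numerically. The following minimal sketch (plain Python/numpy, with the computational basis ordered \(|00\rangle,|01\rangle,|10\rangle,|11\rangle\)) is only an illustration, not code from the references.

```python
# Check that the SWAP gate maps |0,+> at t_-1 to |+,0> at t_0 (equations (1)-(3)).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)            # |+>, equation (1)

# SWAP unitary in the basis |00>, |01>, |10>, |11>
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

psi_in = SWAP @ np.kron(ket0, plus)          # input |0,+> (eq. (2)), evolved one step

assert np.allclose(psi_in, np.kron(plus, ket0))   # result is |+,0>, equation (3)
print(psi_in)   # [0.707 0. 0.707 0.] = (|0,0> + |1,0>)/sqrt(2)
```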
Integrated information concerns how much information is lost by _partitioning_ the system, which means replacing a causal relationship with noise where the noise is represented as an equiprobable distribution over all possible states. To illustrate, let us calculate how much integrated effect information A has, given its present state, about the next state of each of the system's parts, A and B. The _maximum_ of these defines A's integrated effect information. Given that our dyad is a SWAP gate, it is trivially true that A's present state has zero integrated effect information about A's next state since A's next state is entirely determined by B's present state. Put another way, A's possible next states are all equally probable given its current state. So introducing a partition that induces noise between A at \(t_{0}\) and A at \(t_{+1}\) makes no difference. This makes sense given that there is no causal connection between them in the first place: A affects B but not itself in the next time step. A's present state is not causally connected to A's future state and so there is no integrated effect information. However, A's present state _does_ fully determine B's future state and so if, for example, our system's present state is (1,0), equation 39 in [4] tells us that the integrated effect information of A's state at time \(t_{0}\) given that it is in state 1 at that instant, is \[\phi_{e}(a_{t_{0}}=1)=p(b_{t_{+1}}=1|a_{t_{0}}=1)\log_{2}\left[\frac{p(b_{t_{ +1}}=1|a_{t_{0}}=1)}{p^{\theta}(b_{t_{+1}}=1|a_{t_{0}}=\mbox{noise})}\right]. \tag{4}\] Here, \(p(b_{t_{+1}}=1|a_{t_{0}}=1)\) is the probability that B will be in state \(b=1\) at time \(t_{+1}\) given that A is currently in state 1. It is trivially true that this equals 1. Likewise \(p^{\theta}(b_{t_{+1}}=1|a_{t_{0}}=\mbox{noise})\) represents the probability that B will be in state 1 at time \(t_{+1}\) given the partition \(\theta\) which sets the value of channel A to an equiprobable distribution of the two possible states. In other words, the partition replaces the effect that A had on B with noise, which means that B's future state is randomly determined. Since there are only two possible states, that means that \(p^{\theta}(b_{t_{+1}}=1|a_{t_{0}}=\text{noise})=0.5\). As such, we have \[\phi_{e}(a_{t_{0}}=1)=1\cdot\log_{2}\left[\frac{1}{0.5}\right]=1. \tag{5}\] The same basic equation tells us that the integrated effect information of B's state at time \(t_{0}\), \(\phi_{e}(b_{t_{0}}=0)\), also equals 1. The integrated cause information for A is calculated in a slightly different manner and illustrates a time asymmetry in the equations of IIT. As in the effect case, the past state of A contains no information about the present state of A, and likewise for B. We only consider the information B's past state has on A's current state and the information A's past state has on B's current state. 
Specifically, given a current state of (1,0), equation 42 in [4] gives \[\phi_{c}(a_{t_{0}}=1)=p(b_{t_{-1}}=1|a_{t_{0}}=1)\log_{2}\left[\frac{p(a_{t_{0}}=1|b_{t_{-1}}=1)}{p^{\theta}(a_{t_{0}}=1|b_{t_{-1}}=\text{noise})}\right] \tag{6}\] where \(p(b_{t_{-1}}=1|a_{t_{0}}=1)\) is calculated according to Bayes' rule as follows: \[p(b_{t_{-1}}=1|a_{t_{0}}=1)=\frac{p(a_{t_{0}}=1|b_{t_{-1}}=1)\cdot p(b_{t_{-1}}=1)}{p(a_{t_{0}}=1)} \tag{7}\] where \(p(a_{t_{0}}=1)\) and \(p(b_{t_{-1}}=1)\) are unconstrained probabilities (see equations 6-8 in [4]) and are both equal to 0.5 since, at any given time step and with no knowledge of past or future states, the probability that we will find either channel in a given state is 0.5 because there are only two states. Here we also have that \(p(a_{t_{0}}=1|b_{t_{-1}}=1)\) is the probability that A's current state is 1 if B's past state is 1 and \(p(b_{t_{-1}}=1|a_{t_{0}}=1)\) is the probability that B's past state was 1 given that A's state is currently 1. As before, \(p^{\theta}(a_{t_{0}}=1|b_{t_{-1}}=\text{noise})\) noises the system and is equal to 0.5. Since \(p(a_{t_{0}}=1|b_{t_{-1}}=1)=1\), Bayes' rule given by equation (7) tells us that \(p(b_{t_{-1}}=1|a_{t_{0}}=1)=1\). As before, then, we find that \(\phi_{c}(a_{t_{0}}=1)=1\). Likewise, the same process tells us that \(\phi_{c}(b_{t_{0}}=0)\) also equals 1. Equation 45 in [4] then tells us that the integrated information of a part is the minimum of its integrated effect and integrated cause information, i.e. \[\phi(a_{t_{0}}=1)=\min\left[\phi_{c}(a_{t_{0}}=1),\phi_{e}(a_{t_{0}}=1)\right] \tag{8}\] \[\phi(b_{t_{0}}=0)=\min\left[\phi_{c}(b_{t_{0}}=0),\phi_{e}(b_{t_{0}}=0)\right] \tag{9}\] respectively, which are both trivially 1. The amount of consciousness (\(\Phi\)) in the state of the whole system is then simply a sum of the integrated information of the smaller subsystems as calculated above. The state of the dyad at the time \(t_{0}\) therefore has \[\Phi(t_{0})=\phi(a_{t_{0}}=1)+\phi(b_{t_{0}}=0)=1+1=2 \tag{10}\] units of consciousness. No matter which of its four possible states the dyad is in, all of the above reasoning applies, and we find that it always has two units of consciousness. It is therefore not possible to put the dyad into a superposition of \(\Phi\)-values. What we can do, however, is put the system into a superposition of different states of consciousness. To understand this distinction intuitively, compare experiencing a green screen with experiencing a blue screen. It might be that these two experiences do not correspond to any difference in \(\Phi\) (why would changing only the color change the amount of consciousness?). Now imagine that we put a subject into a superposition of experiencing a blue screen and experiencing a green screen. By assumption this is not a \(\Phi\) superposition, but it is clearly a superposition of distinct conscious experiences. One might doubt that distinct _human_ states of consciousness could ever have identical \(\Phi\)[26], but IIT allows for this in AI, and IIT3.0 and IIT4.0 predict that this is indeed the case for our simple dyad, as we now explain. ## 4 Calculating the state of consciousness (Q-shape) of the feedback dyad If two qualitatively distinct states of consciousness are quantitatively identical (i.e. they have identical \(\Phi\)), then their distinctness must come down to the different ways in which each state generates that \(\Phi\)-value.
This difference is what is captured in a Q-shape.1 In this section we define Q-shapes for all four states of the dyad. We show that these Q-shapes are distinct. It follows that IIT (as presently formulated) must treat these states as corresponding to qualitatively distinct states of consciousness. Finally, we discuss some differences in how Q-shapes are understood in IIT3.0 versus IIT4.0, which will be relevant to the collapse model proposed in the next section. Footnote 1: This has come under various labels in the literature. In [40] it is primarily referred to as a “maximally irreducible conceptual structure (MICS)”. But it is also referred to as a “shape in qualia space”, and so we adopt the simpler terminology, “Q-shape”. In [4] it is referred to as a “\(\Phi\)-structure”. The dyad states \((1,0)\) and \((0,0)\) each have \(\Phi=2\), but for different reasons. This can be seen by partitioning the dyad, replacing some of the parts by noise, as defined above, and then noting that \((1,0)\) and \((0,0)\) induce different forward and backward probability distributions. These different distributions lead to different Q-shapes. In the general case of more complex systems, we also have to weigh the parts according to their individual values of \(\phi\). The simple structure of the dyad allows us to bypass this (since \(\phi(A)=\phi(B)=1\)), but we will return to the more general case in the next section. We begin with part A, when the dyad is in state \((1,0)\). The prescription of _partitioning_ means that we replace the complement of A (that is, B) by noise, i.e. an equiprobable distribution of \(0\) and \(1\), while keeping \(A\) in state \(1\). Evolving this forward in time, we obtain a probability distribution \((0,\frac{1}{2},0,\frac{1}{2})\), where we have labelled the four states in lexicographical order: \((0,0),(0,1),(1,0),(1,1)\). Evolving it backwards in time, i.e. retrodicting the dyad's state at one time step earlier, we obtain exactly the same probability distribution. This gives us the first two rows in the Q-shape matrix \[Q(1,0)=\left(\begin{array}{cccc}0&\frac{1}{2}&0&\frac{1}{2}\\ 0&\frac{1}{2}&0&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}&0&0\\ \frac{1}{2}&\frac{1}{2}&0&0\end{array}\right). \tag{11}\] The third and fourth row are the forward (effect) and backward (cause) probability distributions that we obtain if we consider the subsystem B instead, keeping it in state 0 and replacing A by noise as above. Thus, the Q-shape of a given state (such as \((1,0)\)) is a collection of four probability distributions over the four dyad states, represented by the four rows in our representation matrix. Performing the calculation for the other dyad states (which each have \(\Phi=2\) due to the two parts always having \(\phi\)=1), we obtain \[Q(0,0)=\left(\begin{array}{cccc}\frac{1}{2}&0&\frac{1}{2}&0\\ \frac{1}{2}&0&\frac{1}{2}&0\\ \frac{1}{2}&\frac{1}{2}&0&0\\ \frac{1}{2}&\frac{1}{2}&0&0\end{array}\right),\;\;Q(0,1)=\left(\begin{array} []{cccc}\frac{1}{2}&0&\frac{1}{2}&0\\ \frac{1}{2}&0&\frac{1}{2}&0\\ 0&0&\frac{1}{2}&\frac{1}{2}\\ 0&0&\frac{1}{2}&\frac{1}{2}\end{array}\right),\;\;Q(1,1)=\left(\begin{array} []{cccc}0&\frac{1}{2}&0&\frac{1}{2}\\ 0&\frac{1}{2}&0&\frac{1}{2}\\ 0&0&\frac{1}{2}&\frac{1}{2}\\ 0&0&\frac{1}{2}&\frac{1}{2}\end{array}\right). \tag{12}\] Our Q-shapes are not really "shapes"; they are just matrices of probability distributions. But we can turn them into shapes by following the IIT3.0 prescription described in [40] (see especially Figures 10-12). 
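The \(\Phi\) value of equation (10) and the Q-shape matrices of equations (11)-(12) can be reproduced mechanically. The sketch below (plain Python, using the noising and lexicographic-ordering conventions described in this section) is only an illustration, not the reference implementation of [4].

```python
# Recompute Phi = 2 and the Q-shape matrices for the classical dyad.
import numpy as np
from itertools import product

STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]        # lexicographic order

def swap(state):
    a, b = state
    return (b, a)                                 # the dyad's update rule

def effect_dist(part, value):
    """Distribution over the next dyad state with `part` clamped to `value`
    and the other channel replaced by noise (equiprobable over 0 and 1)."""
    dist = np.zeros(4)
    for other in (0, 1):
        cur = (value, other) if part == "A" else (other, value)
        dist[STATES.index(swap(cur))] += 0.5
    return dist

def cause_dist(part, value):
    """Distribution over the previous dyad state (uniform prior plus Bayes'
    rule) given that `part` is found in `value` at the current time step."""
    dist = np.zeros(4)
    for prev in product((0, 1), repeat=2):
        nxt = swap(prev)
        if (nxt[0] if part == "A" else nxt[1]) == value:
            dist[STATES.index(prev)] += 1.0
    return dist / dist.sum()

def q_shape(state):
    a, b = state
    return np.vstack([effect_dist("A", a), cause_dist("A", a),
                      effect_dist("B", b), cause_dist("B", b)])

p, p_noised = 1.0, 0.5                     # deterministic rule vs. the noised partition
phi_each = p * np.log2(p / p_noised)       # = 1 bit for each part, equations (4)-(9)
print("Phi =", 2 * phi_each)               # 2.0, equation (10)
print(q_shape((1, 0)))                     # rows match equation (11)
```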
To obtain such visualizations for our dyad states, we simply interpret two probability distributions over the dyad's state space (which have four real entries each) as an element of the eight-dimensional vector space \(\mathbb{R}^{8}\). This is the phase space of the dyad. Let us use this to build the "shape" corresponding to \(Q(1,0)\) from equation 11 above. Consider the first two rows of that matrix. They determine the location of part A. The last two rows determine the location of part B. This gives us two points in the eight-dimensional space. Since \(\phi=1\) in all our cases, it does not help us to distinguish Q-shapes, so we have ignored it. IIT3.0 therefore predicts that the dyad is (minimally) conscious, and can be in one of four qualitatively distinct conscious states. It is natural therefore to wonder what it is like to be the dyad, and what these qualitative differences actually consist of. This is a question that IIT actually aims to answer. That is, IIT wants to be able to say something about what the experience of any given conscious system is like, especially when the system is incapable of verbal reports. The general idea is to extrapolate from features of our own Q-shapes. For example, consider what it is like to be an echolocating bat. In [39] it was famously argued that this question is intractable. However, more recently in [49] it was argued that IIT makes it tractable. The idea is to consider the general properties of human visual experience Q-shapes and human auditory experience Q-shapes. Then, we compare them with bat experience Q-shapes. If bat experience Q-shapes are "more similar" to, say, human auditory experience Q-shapes, then we can say something about what it is like to be a bat (it is more like human auditory experience than human visual experience). Of course for both human and bat experience, deriving exact Q-shapes is far too complicated. Consequently, there is also no straightforward way to compare the dyad Q-shapes with (aspects of) our Q-shapes. Nonetheless, there is a curious discussion about this in [40] (see Figure 19), that considers a system that is only slightly more complex than our dyad, which they call a "photodiode". It also involves two parts, labelled 'D' and 'P', that specify each other's states at each time step. (The main difference is that D receives two external inputs and has a threshold \(\geq\) 2. All connections have weight 1. Meanwhile P serves as a memory for the previous state of D and its feedback to D serves as a predictor of the next external input by effectively decreasing the threshold of D.) Despite these differences, its Q-shapes are very similar to our dyad's Q-shapes. They also involve two points in an 8D space. About its experience, they say the following: "It is instructive to consider the quality of experience specified by such a minimally conscious photodiode. [...] D says something about P's past and future, and P about D's, and that is all. Accordingly, the shape in qualia space is a constellation having just two [points], and is thus minimally specific. [...] Moreover, the symmetry of the [Q-shape] implies that the quality of the experience would be the same regardless of the system's state: the photodiode in state DP=00, 01, or 10, receiving one external input, generates exactly the same [Q-shape] as DP=11. In all the above cases, the experience might be described roughly as "it is like this rather than not like this", with no further qualifications. 
The photodiode's experience is thus both quantitatively and qualitatively minimal." If all four states of the photodiode have the same Q-shape, then they must all correspond to the same probability distributions. For as they say (in the IIT3.0 jargon), the probability distributions (or "cause effect repertoires") for each part (or each "concept") specify what each part "contributes to the quality of the experience". (Meanwhile, the \(\phi\) of each part is said to be "how much" the part is present in experience.) But as we have seen, our feedback dyad does not yield this result: the four possible states correspond to distinct Q-shapes. It is therefore not possible to simply describe each of the four possible conscious states of the dyad as "it is like this rather than not like this". What could the differences in our four dyad Q-shapes possibly translate to in experience? These are difficult questions for IIT. We have mostly followed the IIT3.0 rather than the IIT4.0 prescription for building Q-shapes. In IIT4.0, they are somewhat simpler, in that they replace probability distributions with states (see equation (56) in [4]). In particular, the IIT4.0 Q-shape of any dyad state is given by the \(\phi\)-values of A and B as well as the states that these \(\phi\)-values were maximized over. So in the case of \(Q(1,0)\), A and B both have \(\phi=1\); for A this was maximized over B being in state 1 (i.e. \(b=1\)), while for B this was maximized over A being in state 0 (i.e. \(a=0\)). For \(Q(0,0)\), A and B both have \(\phi=1\); for A this was maximized over B being in state 0 (i.e. \(b=0\)), while for B this was maximized over A being in state 0 (i.e. \(a=0\)). The four states of the dyad therefore correspond to distinct Q-shapes in IIT4.0, consistently with IIT3.0. The choice of how to represent Q-shapes here seems somewhat arbitrary, as both options satisfy the constraint of identifying differences in how the parts contributed to an overall \(\Phi\)-value for the system. However, as we explain in the next section, the IIT3.0 choice is much better suited for a certain application of IIT: defining a fully general consciousness-collapse model. ## 5 The simplest consciousness-collapse model In [17] a dynamical collapse model is proposed in which Q-shape superpositions are unstable and tend to collapse. The following general form for continuous collapse models ([10, p.27]) is used: \[d\psi_{t}=[-i\hat{H}_{0}dt+\sqrt{\lambda}(\hat{A}-\langle\hat{A}\rangle_{t})dW_{t}-\frac{\lambda}{2}(\hat{A}-\langle\hat{A}\rangle_{t})^{2}dt]\psi_{t}. \tag{13}\] The first term on the right-hand side of the equation represents Schrodinger evolution, while the remaining two terms represent the collapse evolution. Here, \(\hat{H}_{0}\) is the Hamiltonian of the system, \(\lambda\) is a real-valued parameter governing the collapse rate, \(\hat{A}\) is a collapse operator whose eigenstates the system collapses towards, \(\langle\hat{A}\rangle_{t}\) is its expected value at time \(t\), and \(W_{t}\) is a noise process which ensures that collapse happens stochastically at a rate determined by a measure of difference between the superposed \(\hat{A}\) eigenstates. The pure state \(\rho_{t}^{W}:=|\psi_{t}\rangle\langle\psi_{t}|\) therefore evolves stochastically. All statistical predictions that we can extract from this state are linear in \(\rho_{t}^{W}\) due to the Born rule.
Hence, given a single realization of the process (13), all statistical predictions (say, about outcomes of any measurement that we might decide to perform at some point while the process unfolds) can be computed from \(\rho_{t}:=\mathbb{E}[\rho_{t}^{W}]\)[10]. As a consequence of (13), this resulting state evolves according to the Lindblad equation \[\frac{d}{dt}\rho_{t}=-i[\hat{H}_{0},\rho_{t}]-\frac{\lambda}{2}[\hat{A},[\hat {A},\rho_{t}]] \tag{14}\] (for the derivation see e.g. [10]). Hence, the system can evolve via Schrodinger dynamics, via collapse, or via some combination of the two. To understand the collapse term we can ignore the Schrodinger dynamics term by setting its Hamiltonian to zero, \(\hat{H}_{0}=0\). The collapse term only has an effect when the system is in a superposition of eigenstates of \(\hat{A}\). In this situation, the double commutator will be non-zero and the state will evolve. The "speed" at which it evolves is a function of the eigenvalues \(a_{i}\) of \(\hat{A}\). This is because the \((i,k)\)th matrix entry of the double commutator in \(\hat{A}\)'s eigenbasis is \[[\hat{A},[\hat{A},\rho]]_{ik}=\rho_{ik}(a_{i}-a_{k})^{2}. \tag{15}\] The dampening of the off-diagonal elements of \(\rho\) occurs at a rate that grows with \((a_{i}-a_{k})^{2}\) where if \(a_{i}\neq a_{k}\) the system is in a superposition. We see that the eigenbasis of \(\hat{A}\) determines the collapse basis, i.e. the basis in which the state becomes "classical", while its eigenvalues tell us which superpositions of pairs of such states are removed more quickly (namely, those with large \((a_{i}-a_{k})^{2}\)). Let us now use this prescription to construct the simplest possible consciousness collapse model for the dyad. Subsequently, we will compare this with the more general, but more involved approach in [17]. For the moment, let us only mention that our simple model contains only a _single_ collapse operator, whereas the one in [17] involves several such operators, generalizing Eq. (13). We will say more about the similarities and differences below. The four states of the dyad are mutually distinct states of consciousness, spanning the total Hilbert space. Therefore, we expect a consciousness-collapse model to lead to a state for large times \(t\) that is diagonal in that basis. Therefore, our collapse operator \(\hat{Q}\) will have the form \[\hat{Q}=\lambda_{00}|00\rangle\langle 00|+\lambda_{01}|01\rangle\langle 01|+ \lambda_{10}|10\rangle\langle 10|+\lambda_{11}|11\rangle\langle 11|, \tag{16}\] with four eigenvalues \(\lambda_{ij}\). Any consciousness-collapse model should arguably imply the following principle for the choice of those eigenvalues: _If two states of the dyad (say, \(ij\) and \(kl\)) are qualitatively very different states of consciousness, then superpositions of these states should vanish very quickly, i.e. \(|\lambda_{ij}-\lambda_{kl}|\) should be very large._ That is, it is natural to allow superpositions of "qualitatively similar" states to persist for longer, while qualitatively different states must decohere quickly. For a quantitative application of this prescription, we need a way to compare states of consciousness, i.e. a distance measure on Q-shapes. Since Q-shapes are collections of probability distributions, it is natural to define their distance in terms of distance measures on probability distributions, which is a classical and well-studied topic in information theory. 
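Before choosing such a distance measure, it is worth verifying numerically that the collapse dynamics behaves as claimed. The sketch below sets \(\hat{H}_{0}=0\) and \(\lambda=1\), uses placeholder eigenvalues chosen purely for illustration, integrates the Lindblad equation (14) with a simple Euler scheme, and compares the result against the analytic decay \(\rho_{ik}(t)=\rho_{ik}(0)\,e^{-\frac{\lambda}{2}(a_{i}-a_{k})^{2}t}\) implied by equation (15).

```python
# Off-diagonal decay under equation (14) with H_0 = 0 and a diagonal collapse operator.
import numpy as np

lam = 1.0
a = np.array([0.0, 1.0, 2.0, 3.0])   # placeholder eigenvalues, for illustration only
A = np.diag(a)

psi = np.ones(4) / 2.0               # equal superposition of the collapse-basis states
rho = np.outer(psi, psi)

dt, steps = 1e-3, 2000
for _ in range(steps):
    double_comm = A @ A @ rho - 2 * A @ rho @ A + rho @ A @ A   # [A, [A, rho]]
    rho = rho - dt * (lam / 2) * double_comm                    # Euler step of eq. (14)

t = dt * steps
analytic = np.outer(psi, psi) * np.exp(-(lam / 2) * t * (a[:, None] - a[None, :]) ** 2)
print(np.max(np.abs(rho - analytic)))    # small: only Euler discretization error remains
```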
The preferred distance measures on probability distributions in IIT have changed in almost every successive version. IIT2.0 used the well-known Kullback-Leibler divergence. IIT3.0 used Earth Mover's distance [42]. IIT4.0 uses the intrinsic difference measure from section 3. IIT3.0's measure was explicitly turned into a generalized distance measure for Q-shapes. IIT4.0's measure is not so well suited for this task, a point we will return to at the end of the section. A natural choice is to define the distance of two Q-shapes \(Q=(q_{1},q_{2},q_{3},q_{4})^{\top}\) (i.e. with rows \(q_{1},\ldots,q_{4}\)) and \(\tilde{Q}=(\tilde{q}_{1},\tilde{q}_{2},\tilde{q}_{3},\tilde{q}_{4})^{\top}\) as \[{\cal D}(Q,\tilde{Q}):=\sum_{i=1}^{4}{\cal D}(q_{i},\tilde{q}_{i}), \tag{17}\] where \({\cal D}\) is some choice of distance measure on the set of probability distributions. That is, the distance of two Q-shapes is the sum of the distances of their probability distributions. (This is precisely the form of IIT3.0's extended Earth mover's distance measure.) Now we have a large choice of possible distance measures \({\cal D}\) at our disposal. However, note that the four Q-shapes of the dyad (Eqs. (11) and (12)) consist of a small variety of very simple probability distributions only: all entries are 0 or \(\frac{1}{2}\), and any two rows are either equal, or they differ in all 4 entries. Two identical rows must have distance zero. Furthermore, it is natural to demand that every two probability distributions arising as rows in these Q-shapes that differ in _all four_ places all have the same distance, which we can set to unity by a choice of scaling factor. For example, \[{\cal D}\left((0,\tfrac{1}{2},0,\tfrac{1}{2}),(\tfrac{1}{2},0,\tfrac{1}{2},0) \right)=1.\] We can then determine the distances between all pairs of Q-shapes of the dyad and obtain the following values, writing \({\cal D}(Q,\tilde{Q})\) as the \(Q\tilde{Q}\)-entry of a table: \begin{tabular}{l|c c c c|} & Q(0,0) & Q(0,1) & Q(1,0) & Q(1,1) \\ \hline Q(0,0) & 0 & 2 & 2 & 2 \\ Q(0,1) & 2 & 0 & 4 & 2 \\ Q(1,0) & 2 & 4 & 0 & 2 \\ Q(1,1) & 2 & 2 & 2 & 0 \\ \end{tabular} Let us now return to our consciousness-collapse principle. Formulating it in terms of this distance measure, it reads: _If the distance \({\cal D}\) between two Q-shapes \(Q(i,j)\) and \(Q(k,l)\) is large, then the distance between the eigenvalues \(\lambda_{ij}\) and \(\lambda_{kl}\) of the collapse operator must also be large_. This desideratum could always be satisfied by the arbitrary prescription to make all eigenvalues extremely large and distant from each other. However, this would typically induce almost-instantaneous collapse, a behavior that we do not expect for simple systems such as the dyad. Thus, we are searching for a choice of eigenvalues that is as tame as possible while still satisfying the above postulate. This leads us to define the eigenvalues in terms of an optimization problem: \begin{tabular}{|l|} \hline Minimize \(\lambda_{00}+\lambda_{01}+\lambda_{10}+\lambda_{11}\) \\ subject to \(\lambda_{ij}\geq 0,\ \ |\lambda_{ij}-\lambda_{kl}|\geq{\cal D}(Q(ij),Q(kl))\). \\ \end{tabular} This prescription keeps the collapse behavior "tame" by demanding that the eigenvalues are not arbitrarily large, but only as large as they need to be (in their total sum) to satisfy our principle for all pairs of Q-shapes. Note that the total time scale of the collapse is not determined by \(\hat{Q}\) and its eigenvalues, which do not have any physical units. 
Instead, it is determined by the noise term of (13), i.e. the parameter \(\lambda\) in (14). This will remain a parameter of the collapse model that needs to be determined experimentally. The above considerations tell us only the _relative_ speed at which superpositions between distinct Q-shapes are suppressed, whereas the _total_ speed would depend on \(\lambda\) and hence on further considerations as to which states of consciousness are implausible to remain in superposition for significant amounts of time because of, say, human experience. As we show in appendix C, this optimization problem has twelve solutions: one of them is \[\lambda_{00}=2,\ \lambda_{01}=0,\ \lambda_{10}=4,\ \lambda_{11}=6,\] and the other solutions are permutations of this one (\((2,0,4,6)\)), such as \((6,4,0,2)\) -- indeed, all permutations of these four numbers such that \(|\lambda_{01}-\lambda_{10}|\geq 4\). This degeneracy can be understood as a consequence of the symmetry of the problem: for example, the table of pairwise distances does not change if we exchange \(Q(0,0)\) and \(Q(1,1)\). Indeed, these solutions do not only minimize the sum of the \(\lambda_{ij}\), but they also minimize the expression \[\frac{1}{2}\sum_{i,j,k,l}|\lambda_{ij}-\lambda_{kl}|=|\lambda_{00}-\lambda_{01}|+|\lambda_{00}-\lambda_{10}|+|\lambda_{00}-\lambda_{11}|+|\lambda_{01}-\lambda_{10}|+|\lambda_{01}-\lambda_{11}|+|\lambda_{10}-\lambda_{11}|,\] i.e. the total sum of the pairwise collapse rates, under the assumption (that we can always make) that one of the \(\lambda_{ij}\) is zero. We can simply pick one of the twelve solutions and use it to define our collapse operator. For the sake of the argument, let us pick the above, but the choice does not matter for the following discussion. Let us interpret the result by looking at some example collapse rates. We have \({\cal D}(Q(00),Q(01))=2\) which is small, and \(|\lambda_{00}-\lambda_{01}|=2\) is also small (and, indeed, identical). Superpositions of the two dyad states \(00\) and \(01\) can thus remain stable for a relatively long time. On the other hand, \({\cal D}(Q(01),Q(10))=4\) is large, and so is \(|\lambda_{01}-\lambda_{10}|=4\). Hence, superpositions between the dyad states 01 and 10 will be killed off more quickly. However, consider the two dyad states 01 and 11. Their distance is small, \({\cal D}(Q(01),Q(11))=2\), and our principle demands that the corresponding difference of eigenvalues (i.e. the associated collapse rate) is at least as large as that. However, it is actually \(|\lambda_{01}-\lambda_{11}|=6\), which is much larger than required. Thus, any superposition of these two dyad states would fall off much faster than what would be expected by considering the difference between their Q-shapes alone. We can understand this behavior by noting that the \(n=4\) dyad states lead to \(n(n-1)/2=6\) distance values (the table above), from which \(n=4\) eigenvalues of the collapse operator have to be determined. Thus, every value of \(|\lambda_{ij}-\lambda_{kl}|\) must depend on _more_ than just the number \({\cal D}(Q(ij),Q(kl))\). If our principle is satisfied, then a large value of the latter implies a large value of the former, but the converse is not in general true. The quantum limitation of only having \(n\) eigenvalues introduces additional constraints. It seems that this must be a general phenomenon: if we have \(n\) distinct Q-shapes, but \(m\ll n/2\) collapse operators, then the \(m\cdot n\) eigenvalues are smaller in number than the \(n(n-1)/2\) distance values.
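The twelve minimizers quoted above can be checked by brute force. The following sketch (restricting the search to small non-negative integers, which suffices for this problem) recovers the minimum total sum of 12 and the twelve optimal assignments; it is an illustration only, independent of the proof in appendix C.

```python
# Brute-force check of the eigenvalue optimization problem for the dyad.
from itertools import product

LABELS = ["00", "01", "10", "11"]
IDX = {lab: k for k, lab in enumerate(LABELS)}
DIST = {("00", "01"): 2, ("00", "10"): 2, ("00", "11"): 2,     # pairwise Q-shape
        ("01", "10"): 4, ("01", "11"): 2, ("10", "11"): 2}     # distances (table above)

def feasible(lam):
    return all(abs(lam[IDX[x]] - lam[IDX[y]]) >= d for (x, y), d in DIST.items())

best, minimizers = None, []
for lam in product(range(9), repeat=4):          # lambda_00, ..., lambda_11 on a grid
    if feasible(lam):
        s = sum(lam)
        if best is None or s < best:
            best, minimizers = s, [lam]
        elif s == best:
            minimizers.append(lam)

print(best, len(minimizers))    # 12 12; (2, 0, 4, 6) is among the minimizers
```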
We must hence have pairs of Q-shapes whose superposition must collapse more quickly than what their mere qualitative distance as states of consciousness would suggest. Superposition resistance hence cannot only resemble the structure of conscious experience, but is also additionally constrained by the general structure of quantum mechanics. We can now see how the elements of the construction above are realized in greater generality in [17]. Here, there is not a single Q-shape operator whose eigenstates are all the classical Q-shapes. Rather, a Q-shape is associated with an ensemble of orthogonal self-adjoint collapse operators. The eigenvalue of each operator does not pick out a Q-shape, but an element of a Q-shape, which is either an entry in a probability matrix or a \(\phi\)-value. This solves the above problem and allows for superposition resistance to resemble the structure of conscious experience more closely. However it does so at the cost of having a very complex model. To illustrate this complexity, consider how many collapse operators we need to capture all the details of a Q-shape of a classical system. A system of \(n\) elements will have \(2^{n}-1\) subsystems. So if we have two elements, as with our dyad, we have three subsystems (A, B, and AB). We were able to ignore AB in our simple case but we cannot do that in general. Every element has _d_ possible states giving a total of \(d^{n}\) possible states for the system. Each subsystem is associated with two probability distributions, as we saw in the previous section, and one \(\phi\)-value. To capture all of this, the number of collapse operators we need is \((2^{n}-1)\times(2\times d^{n}+1)\). So for our _classical_ dyad, where \(n=d=2\), we need 27 collapse operators. It would be extraordinary if Nature were to operate at such a high level of complexity for such a simple system. But this still is not sufficient when dealing with quantum systems, since qubit elements do not have \(d=2\) possible states, but have infinitely many possible pure states. It is for this reason that the model in [17] formulates everything in terms of the QIIT found in ([53],[27]). Here each subsystem is associated with two appropriate density matrices instead of two appropriate classical probability distributions. The density matrices for the quantum dyad have more entries than the classical probability distributions associated with the classical dyad, so we need more collapse operators. In particular, we now need \((2^{n}-1)\times(2\times d^{2n}+1)\) collapse operators in general, and so 99 collapse operators for our dyad. The use of QIIT raises a further complication. In QIIT, every quantum system is assigned a well-defined Q-shape (which in many cases may be the null Q-shape) whether or not the system is in a superposition of classical Q-shapes. That is, for QIIT, _distinct states of consciousness do not always correspond to mutually orthogonal quantum states_, and this makes it in general impossible to have physical processes whose observable behavior depends on all the properties of those states of consciousness (because non-orthogonal states cannot be perfectly distinguished). In particular, this excludes collapse models where the rate of collapse is proportional to the "size" of the superposition, e.g. to the qualitative difference of the superposed conscious states. 
The ensemble of collapse operators defined in equation (2) of [17] therefore adds an additional constraint that restricts these operators to just those states that are associated with classical Q-shapes. Thus, in an attempt to be completely general, the collapse model in [17] became very complex. But as has been demonstrated here, if one just wants a collapse model for some simple system whose physical properties are known, then much of those complexities can be bypassed, as we can instead define a single collapse operator as we have done here. Finally, we note that specific predictions of one's collapse model may vary with the use of IIT formalism, which is constantly being updated. The choice of distance measure in particular has undergone significant revision. IIT2.0 used the Kullback-Leibler divergence to measure the distance between probability distributions. But that was rejected in part because it is not symmetric. IIT3.0 adopted Earth Mover's distance (EMD). This is symmetric and, as shown in [40], yields different results than the IIT2.0 measure, even for the dyad. The EMD can easily be generalized to become a measure of distance between Q-shapes, as in equation (17). This Q-shape distance measure was essential to IIT3.0 because it was used to calculate \(\Phi\) (by measuring the distance between the Q-shape of a system's state and the Q-shape of that state partitioned). But the EMD has recently been abandoned, in part because of problems raised in [7]. Alternative measures can be found in [45]. IIT4.0 adopts the intrinsic difference measure, described in IIT4.0 and in section 3 above, as its preferred distance measure. The intrinsic difference is infinite if the denominator is zero. This is avoided when the denominator involves some partition and therefore white noise. But measuring the distance between two Q-shapes doesn't involve any partitioning. So, the intrinsic difference is not suitable for measuring Q-shape distance. Consequently, IIT4.0 does not supply such a distance measure. This led to a simpler calculation of \(\Phi\) in IIT4.0: the system \(\Phi\) is a sum of subsystem \(\phi\)-values. QIIT in its most recent version is consistent with IIT4.0 (see appendix B). ## 6 Physically implementing the dyad Consider the following simple implementation of the dyad as depicted in Figure 3: channels A and B are optical cables and the dyad does nothing more than cross those cables, without contact. The outputs are then fed back into the inputs, creating a kind of feedback cycle. We have two photons in the cables, and each of them can carry one of two perfectly distinguishable "classical" states, corresponding to horizontal \(|0\rangle\) or vertical polarization \(|1\rangle\). What horizontal or vertical means is determined by an external reference frame; for what follows, the exact choice of reference is unimportant, except that the physical situation must tell us what we mean by both photons carrying _identical_ or _orthogonal_ polarization directions (e.g. \(|00\rangle\) in the first case, and \(|01\rangle\) in the second). It is clear that we are not restricted to preparing the photons in the classical basis states, but we can prepare them in arbitrary superpositions, such as that of Equation 3. This is a necessary condition to implement "Schrodinger's dyad" as introduced in the previous sections. However, if we identify our basic units as the photons that traverse the cables, then it may not seem like IIT applies here, since IIT requires causal relationships.
In particular, to be a basic unit of IIT, something should have the power to "take a difference" (be affected by something) and "make a difference" (produce effects on something) [4]. The concern with this implementation is that the photons do not take or make a difference, because they never change their polarization states. Indeed, under this interpretation of the physical setup, we would not even have implemented the dyad, but another system (two bits and an identity gate). On the other hand, this depends on counting the photons as our basic causal units. If we instead identify our basic units as the polarization qubits at the physical locations \(A\) and \(B\) in space, we get a different result. In particular, we may say that the photon polarization state at A\({}_{t0}\) causes the state at B\({}_{t+1}\) and was caused by the state at B\({}_{t-1}\). Under this way of identifying our basic units, the system has non-zero \(\Phi\) and is the simplest conscious system according to IIT. This may even fit well with interpretations of modern physics in which the basic causal objects are spacetime points [43]. IIT does not want the \(\Phi\) or Q-shape of a system to depend on some arbitrary choice: these are meant to be objective properties. So at least one of the above two causal interpretations of the system must be ruled out. IIT does not give clear criteria for what to do in such a situation. However, one option is suggested by the IIT4.0 _principle of maximal existence_, which states that "what exists is what exists the most". This ontological principle is used to motivate the exclusion principle, which effectively states that if two overlapping sets of units have non-zero \(\Phi\), then only the system with _maximal_ \(\Phi\) is conscious. Thus, if there are multiple interpretations of what the causal units are in the first place, we might similarly only consider the interpretation that yields the greater \(\Phi\). Figure 3: A possible implementation of the dyad. The dashed line and labels indicate that the systems A and B are associated with regions of space. In that case, we have found a very simple implementation of the dyad by identifying the qubits with locations in space. ## 7 Conclusion In this paper we have described some simple predictions of IIT that have enabled us to make a number of observations. First, we showed that either IIT is false, or a simple system like the feedback dyad is conscious and can easily be put into a quantum superposition of conscious states (i.e. a superposition of Q-shapes). This result was shown to be robust across successive IIT formalisms. Second, we identified the simplest consciousness-collapse (or Q-shape-collapse) model. It involves a single Q-shape collapse operator, whose eigenstates are the four possible states of the dyad. For the model to do what is needed (make the collapse rate proportional to a measure of difference between the superposed Q-shapes), we found that the four eigenvalues must depend on six distance values. In such models, the rate of collapse of a superposition of two states of consciousness must therefore depend on more than the relation between the two states. More complex models may avoid this by defining an ensemble of orthogonal Q-shape collapse operators. However, this can get very complicated, so for practical purposes (like testing Q-shape-collapse models), the prescription that we have provided here may be more useful. Finally, we have made several observations about the general structure of IIT.
For example, we argued that while treating gates as having states is permissible if gates are neurons, this does not work in general, and especially not for computers, where gates operate on systems that possess states. This is especially clear for quantum computers, where qubits, and not gates, are superposed. We have also noted that to apply IIT to a physical system, we need a specification of the basic causal units in the system. Insofar as physics does not specify such things, IIT is not fully applicable to physical systems. In further research, it would be interesting to investigate whether any existing quantum computers (or other quantum systems) can maintain states like the \(t_{0}\) state of our quantum dyad, and for how long. Such systems may place bounds on the fundamental parameters of IIT-based consciousness-collapse models. ## Acknowledgments We are grateful to Thomas D. Galley for discussions, and to Larissa Albantakis for helpful feedback on an earlier draft. This research was supported by grant number FQXi-RFP-CPW-2015 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation. Moreover, this research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science, and Economic Development, and by the Province of Ontario through the Ministry of Colleges and Universities.
2310.20357
Enhancing the Spatial Awareness Capability of Multi-Modal Large Language Model
The Multi-Modal Large Language Model (MLLM) refers to an extension of the Large Language Model (LLM) equipped with the capability to receive and infer multi-modal data. Spatial awareness stands as one of the crucial abilities of MLLM, encompassing diverse skills related to understanding spatial relationships among objects and between objects and the scene area. Industries such as autonomous driving, smart healthcare, robotics, virtual, and augmented reality heavily demand MLLM's spatial awareness capabilities. However, there exists a noticeable gap between the current spatial awareness capabilities of MLLM and the requirements set by human needs. To address this issue, this paper proposes using more precise spatial position information between objects to guide MLLM in providing more accurate responses to user-related inquiries. Specifically, for a particular multi-modal task, we utilize algorithms for acquiring geometric spatial information and scene graphs to obtain relevant geometric spatial information and scene details of objects involved in the query. Subsequently, based on this information, we direct MLLM to address spatial awareness-related queries posed by the user. Extensive experiments were conducted in benchmarks such as MME, MM-Vet, and other multi-modal large language models. The experimental results thoroughly confirm the efficacy of the proposed method in enhancing the spatial awareness tasks and associated tasks of MLLM.
Yongqiang Zhao, Zhenyu Li, Zhi Jin, Feng Zhang, Haiyan Zhao, Chengfeng Dou, Zhengwei Tao, Xinhai Xu, Donghong Liu
2023-10-31T10:57:35Z
http://arxiv.org/abs/2310.20357v2
# Enhancing the Spatial Awareness Capability of Multi-Modal Large Language Model ###### Abstract The Multi-Modal Large Language Model (MLLM) refers to an extension of the Large Language Model (LLM) equipped with the capability to receive and infer multi-modal data. Spatial awareness stands as one of the crucial abilities of MLLM, encompassing diverse skills related to understanding spatial relationships among objects and between objects and the scene area. Industries such as autonomous driving, smart healthcare, robotics, virtual, and augmented reality heavily demand MLLM's spatial awareness capabilities. However, there exists a noticeable gap between the current spatial awareness capabilities of MLLM and the requirements set by human needs. To address this issue, this paper proposes using more precise spatial position information between objects to guide MLLM in providing more accurate responses to user-related inquiries. Specifically, for a particular multi-modal task, we utilize algorithms for acquiring geometric spatial information and scene graphs to obtain relevant geometric spatial information and scene details of objects involved in the query. Subsequently, based on this information, we direct MLLM to address spatial awareness-related queries posed by the user. Extensive experiments were conducted in benchmarks such as MME, MM-Vet, and other multi-modal large language models. The experimental results thoroughly confirm the efficacy of the proposed method in enhancing the spatial awareness tasks and associated tasks of MLLM. ## Introduction Recently, the Multi-Modal Large Language Model (MLLM) [23, 24, 25] has emerged as a hot research area. It utilizes a powerful Large Language Model (LLM) [11, 12] as a cognitive engine, focusing on executing multi-modal tasks, which holds significant importance in advancing research and application in multi-modal understanding. MLLM finds wide applications across various fields [26, 27, 28], including autonomous driving, smart healthcare, robotics, e-commerce, virtual, and augmented reality. It significantly enhances the comprehension and processing of multi-modal data and stands as a crucial pathway in achieving artificial intelligence. Among its capabilities, perception stands as one of the fundamental aspects of MLLM, signifying its ability to accurately acquire external information. Within perception, spatial awareness particularly holds significance as it encompasses various abilities related to understanding spatial relationships between objects or between objects and the surrounding scene area. In many application scenarios of MLLM, spatial awareness demands stringent precision. For instance, in the domain of autonomous driving, precise localization of objects such as vehicles, pedestrians, traffic signs, road markings, parking spaces, etc., is essential to ensure safe driving and prevent accidents. Similarly, in smart healthcare, precise localization of structures like tumors, lesions, and organs inside a patient's body is crucial for accurate diagnosis and treatment plans. However, the current performance of MLLM in spatial awareness still significantly lags behind human requirements. As depicted in Figure 1, existing MLLMs struggle to accurately determine spatial relationships between objects, such as the spatial relationship between the red car and parking spot 33 (left), and between the desk, laptop, and table lamp (right). This research aims to enhance the spatial awareness capability of MLLM.
One of the most direct approaches involves providing MLLM with more precise spatial awareness information, using this exact information as input to guide the generation of results by MLLM. Thus, we propose leveraging pretrained smaller models to offer spatial position relationships between target objects and, subsequently, using this information to guide MLLM in addressing user-related queries.
Figure 1: Instance of a multi-modal large language model in a spatial awareness task.
Specifically, for a given multi-modal task, we utilize pretrained object detection algorithms [14, 15, 16] and scene graph generation algorithms [23, 17, 18] to acquire geometric spatial information and scene details pertinent to the query. Based on this information, we direct MLLM to address spatial awareness-related user queries. The main contributions of this paper can be summarized in two points: * Proposing a novel approach to enhance MLLM's spatial awareness capability by using pretrained smaller models to provide geometric spatial information and high-level semantic details between objects, thereby guiding MLLM in generating more accurate results. To our knowledge, this is the first work to combine pretrained smaller models with MLLM to enhance its capabilities. * Conducting extensive experiments on benchmarks like MME, MM-Vet, and other MLLM benchmarks. The experimental results thoroughly confirm the effectiveness of the proposed research method in significantly enhancing MLLM's performance in spatial awareness and associated tasks. ## Related Work The Multi-Modal Large Language Model (MLLM) [15, 16, 17, 18, 19] leverages a powerful Large Language Model (LLM) as its cognitive engine to handle various multi-modal tasks. In many application scenarios, collecting diverse types of information through multiple input channels is necessary to construct specific task models. For instance, in autonomous driving, a vehicle needs to process data from both cameras and LiDAR to make effective decisions in complex driving environments. MLLM excels in understanding and processing a variety of information from different modalities, thus playing a vital role in advancing research and application of multi-modal comprehension. Current research on MLLM can be broadly categorized into two types: the first type comprises models composed of LLM and visual encoders, such as LLaVA [15], MiniGPT-4 [15], and mPLUG-Owl [20]. These models aim to achieve the multi-modal understanding effect presented in the GPT-4 technical report with minimal additional training based on existing LLM. The second type involves models composed of LLM and various small multi-modal models, such as HuggingGPT [18], MOSS, and CompeGPT [16]. These models utilize LLM as a control and organization center, calling different small models to accomplish distinct multi-modal tasks and subsequently consolidating user responses. This study is rooted in the first type of MLLM. These models share similarities in their structure and training strategies, with differences mainly apparent in whether the LLM and visual encoders are frozen. For instance, LLaVA and MiniGPT-4 freeze the basic visual encoders, while mPLUG-Owl leaves the visual encoders unfrozen. Related research has also demonstrated that pre-training on image-text pairs is critical for establishing connections between image and text. Current work primarily focuses on how to enhance the overall model performance using MLLM's inherent capabilities.
Nevertheless, "there is no gold standard, and no one is perfect"; the present MLLM struggles to effectively accomplish all multi-modal tasks with its intrinsic abilities alone. Hence, this study considers utilizing more precise spatial awareness information acquired from external models to guide the result generation of MLLM. ## Method In this section, we first present an overview of the proposed method and then provide detailed information about its main components. ### Overview In response to the given multi-modal request, we initially employ pre-trained object detection algorithms and scene graph generation algorithms to acquire the geometric spatial position information and scene graph details related to spatial awareness queries. Subsequently, we guide the Multi-Modal Large Language Model (MLLM) to address user-related queries based on this information. To elaborate, we start by extracting target entities involved in queries that require spatial relationship judgment. Then, utilizing object detection and scene graph generation algorithms, we gather geometric spatial position information and scene graph data of various entities from multi-modal visual inputs. This process involves employing an entity matching algorithm to obtain geometric spatial position information and scene graph details of the target entities. Finally, employing a corresponding prompt, we guide the large language model to respond to spatial awareness-related questions based on the geometric spatial position information and scene graph data of the target entities, generating the corresponding responses as depicted in Figure 2. ### Target Entity Extraction The "REQUST" includes various elements, such as the user's "Question" and the input of multi-modal data. This paper initially extracts the target entities requiring determination of spatial relationships from the user's input "Question," as illustrated in Formula 1: \[\begin{split}(Entity_{1},Entity_{2})=\\ Target\ Entity\ Extraction(Question)\end{split} \tag{1}\] Here, \((Entity_{1},Entity_{2})\) represents the two target entities requiring relationship determination within the question. The \(Target\ Entity\ Extraction\) algorithm utilizes the \(en\_core\_web\_sm\) model provided by the spaCy library. This model aims to offer lightweight natural language processing capabilities, including tokenization, part-of-speech tagging, named entity recognition, and dependency parsing. It demonstrates efficient performance in handling English textual data, making it well-suited for this task. Moreover, considering the involved multi-modal tasks typically involve the determination of relationships between two entities, the extraction process retrieves two target entities from the "Question." ### Geometric Spatial Location Information Geometric spatial position information refers to the geometric relative positioning details among objects in visual input. These specifics can be acquired through various algorithms, such as Object Detection, Stereo Vision, Depth Estimation, among others. This paper opts to employ Object Detection, an easily applicable algorithm in images, to obtain geometric spatial positioning details. Object Detection accurately locates the coordinates of objects in images, usually represented in the form of bounding boxes, which indicate the object's position and size. 
Additionally, the Object Detection algorithm conducts object classification and recognition, categorizing detected objects into predefined classes like humans, vehicles, animals, or items, facilitating the determination of object categories. Specifically, we first use an object detection algorithm to obtain the position information \((x_{i}^{\prime},y_{i}^{\prime},w_{i}^{\prime},h_{i}^{\prime})\) and category \((E_{i}^{\prime})\) of objects in the image: \[\begin{split}\{E_{1}^{\prime}:(x_{1}^{\prime},y_{1}^{\prime},w_{1}^ {\prime},h_{1}^{\prime}),...,E_{m}^{\prime}:(x_{m}^{\prime},y_{m}^{\prime},w_{m }^{\prime},h_{m}^{\prime})\}\\ =Object\ Detection(Image)\end{split} \tag{2}\] Here, \(Image\) represents the input image, \(\{E_{1}^{\prime}:(x_{1}^{\prime},y_{1}^{\prime},w_{1}^{\prime},h_{1}^{\prime} ),...,E_{m}^{\prime}:(x_{m}^{\prime},y_{m}^{\prime},w_{m}^{\prime},h_{m}^{ \prime})\}\) is a collection of detected entities' categories and corresponding geometric position coordinates, where m represents the number of detected entities, and the \(Object\ Detection\) algorithm employs Faster R-CNN [12]. Next, we match the detected entity categories \((E_{1}^{\prime},...,E_{m}^{\prime})\) from the visual input with the two target entities \((Entity_{1},Entity_{2})\) requiring a relationship from the "Question" to obtain a dictionary of position information closest to the entities \(Entity_{1}\) and \(Entity_{2}\) in the image: \(\{Entity_{1}:(x_{1},y_{1},w_{1},h_{1}),Entity_{2}:(x_{2},y_{2},w_{2},h_{2})\}\). ### Scene Graph Information Geometric spatial information is primarily concerned with identifying and locating the geometric relative position information among entities. However, in certain application scenarios, spatial awareness demands more than just understanding the geometric relative positions of entities. It also requires comprehension of higher-level semantic information among entities, such as semantic relationships between entities and semantic relationships between entities and the scene. This study utilizes Scene Graph Generation (SGG) algorithms to obtain corresponding scene graphs containing higher-level semantic information. By combining geometric spatial information from multi-modal visual inputs with scene graph details, a more comprehensive and accurate understanding of images is achieved. This method not only identifies the positions of objects but also understands their interactions within the scene. This is particularly beneficial in images with complex scenes and enhances the system's ability to address multi-modal tasks. Specifically, we begin by utilizing a Scene Graph Generation (SGG) algorithm to generate the scene graph of the input image: \[\{(s_{1}^{\prime},p_{1}^{\prime},o_{1}^{\prime}),...,(s_{n}^{\prime},p_{n}^{ \prime},o_{n}^{\prime})\}=SGG(Image) \tag{3}\] Here, \(Image\) refers to the input image, and \(\{(s_{1}^{\prime},p_{1}^{\prime},o_{1}^{\prime}),...,(s_{n}^{\prime},p_{n}^{ \prime},o_{n}^{\prime})\}\) represents the collection of triples obtained from the image scene graph, with \(n\) indicating the number of triples. The \(SGG\) algorithm is implemented using PSG [13]. Simultaneously, based on the collection of image scene graph triples, we extract all triples relevant to the target entities mentioned in the "Question." NLTK library and the English lexical database WordNet are utilized for synonym matching. The matching criterion involves retaining a triple if one entity from the scene graph's triples matches \((Entity_{1},Entity_{2})\). 
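The two matching steps described above (mapping detected categories to the target entities' bounding boxes, and retaining scene-graph triples that mention a target entity via WordNet synonym matching) can be sketched as follows. The detection and scene-graph outputs are assumed to already be in the dictionary and triple formats of Equations 2 and 3; the helper names are illustrative rather than taken from the paper.

```python
from nltk.corpus import wordnet as wn  # requires a one-time nltk.download("wordnet")

def synonym_set(word: str) -> set:
    """The word itself plus lemma names of all of its WordNet synsets."""
    names = {word.lower()}
    for syn in wn.synsets(word):
        names.update(lem.name().lower().replace("_", " ") for lem in syn.lemmas())
    return names

def match_boxes(detections: dict, entity1: str, entity2: str) -> dict:
    """Map each target entity to the bounding box of a detection with a matching class."""
    matched = {}
    for entity in (entity1, entity2):
        syns = synonym_set(entity)
        for cls, box in detections.items():      # box is (x, y, w, h)
            if cls.lower() in syns or entity.lower() in synonym_set(cls):
                matched[entity] = box
                break
    return matched

def filter_triples(triples, entity1: str, entity2: str):
    """Keep a (subject, predicate, object) triple if either end matches a target entity."""
    targets = synonym_set(entity1) | synonym_set(entity2)
    return [(s, p, o) for (s, p, o) in triples
            if s.lower() in targets or o.lower() in targets]
```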
Ultimately, this process generates a final set of target triples: \(\{(s_{1},p_{1},o_{1}),...,(s_{z},p_{z},o_{z})\}\), where \(z\) denotes the number of resulting triples. Figure 2: The overview of our method. ### Prompt Design Upon acquiring the geometric spatial information of the entities related to the question and the scene graph information, the crucial task is to effectively leverage this information to guide the MLLM in accurately answering spatial awareness-related user queries. Drawing from existing work [23, 24], we have devised the following prompt format (refer to Table 1), which empowers the MLLM to utilize the spatial awareness information from the small model while accurately addressing user inquiries. ## Experiments This section presents a comprehensive analysis of our proposed approach through a series of experiments. We start by outlining the Implementation Details, which offer key insights necessary for replicating the experiments and understanding the results. Next, we present the Main Results, showcasing the performance of our proposed model compared to existing large multi-modal models. Furthermore, we conduct ablation studies to validate the effectiveness of individual components in our proposed method. ### Implementation Details We provide a detailed description of the implementation aspects of our proposed approach. We outline the benchmarks used for experimentation, the baselines employed for comparison, the evaluation metrics used to assess model performance, and the hyperparameter settings of our experiments. **Benchmarks.** This paper primarily conducted experiments on two benchmarks, MME [20] and MM-Vet [21], to validate the effectiveness of our approach. The MME benchmark consists of 10 perception tasks (existence, count, position, color, poster, celebrity, scene, landmark, artwork, OCR), where the position task specifically evaluates the model's spatial awareness capabilities. It comprises 957 images and 1914 QA pairs, with each image having two corresponding QA pairs. The MM-Vet benchmark includes 22 tasks, featuring 6 core tasks in computer vision and natural language processing (Recognition, Knowledge, OCR, Spatial Awareness, Language Generation, and Math), along with 16 combined tasks, where the spatial awareness task evaluates the model's spatial perception abilities. MM-Vet consists of 200 images and 218 questions paired with their respective ground truth answers, designed to cover diverse real-world scenarios with open-ended questions and expected answers. **Baselines.** This paper conducted experiments against a significant number of baselines to validate the effectiveness of our approach. The baselines used mainly included: BILP-2 [12], MiniGPT-4 [24], mPLUG-Owl [25], ImageBind-LIM [17], LLaMA-AdapterV2 [18], VisualGLM-6B, Multimodal-GPT [19], PandaGPT [20], and LLaVA [24]. **Evaluation Metrics.** This paper evaluates our model's performance using the evaluation metrics proposed in two benchmarks: MME and MM-Vet. Specifically, for the MME benchmark, the model's output is limited to two types ("yes" or "no"), making it convenient to measure accuracy and accuracy+ metrics. We choose to use the sum of accuracy and accuracy+ to calculate the task score. In the case of the MM-Vet benchmark, based on existing scoring instances and the model's output under the input question and real answer conditions for each sample, GPT-4 provides specific scores to evaluate the model's performance. 
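Before moving to the experimental settings, the prompt devised in the Prompt Design subsection above (Table 1) can be assembled from the matched boxes and filtered triples roughly as follows. The wording mirrors the Table 1 template; the helper is an illustrative sketch, not the authors' released code.

```python
def build_prompt(triples, boxes, question: str) -> str:
    """Assemble the guidance prompt following the Table 1 template."""
    triple_str = ", ".join(f"({s}, {p}, {o})" for s, p, o in triples)
    box_str = ", ".join(f"{e}: ({x}, {y}, {w}, {h})" for e, (x, y, w, h) in boxes.items())
    return (
        f"The scene in the picture has the following relationship {{{triple_str}}}. "
        f"And, the Faster R-CNN detects the target and its geometric position as follows "
        f"{{{box_str}}}. Please answer the following question based on the above "
        f"information and the image itself: {question}, and directly tell me the answer "
        f"which you think is correct."
    )

# Illustrative usage:
# build_prompt([("laptop", "on", "desk")],
#              {"laptop": (120, 80, 60, 40), "desk": (90, 60, 200, 120)},
#              "Is the laptop on the desk?")
```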
**Hyperparameter Settings.** The hyperparameters used in this paper are the same as those in various baselines in MME and MM-Vet benchmarks. The key difference lies in our addition of geometric spatial position information and scene graph information into the model following the prompt format designed in this paper. Additionally, pre-trained target detection models and scene graph generation models were employed in this study. ### Main Results. We systematically tested the model's spatial awareness capabilities in the MME and MM-Vet benchmarks, and the experimental results are presented in Table 2 and Table 3. In the MME benchmark's "position" task, our model achieved an accuracy of 87.54, representing a 19.4% improvement over the BLIP-2 base model. It also showed a notable 7.2% improvement compared to the current leading MiniGPT-4 model, achieving a significant enhancement, surpassing the current state-of-the-art level. For the spatial awareness task in the MM-Vet benchmark, our model attained an accuracy of 20.1, signifying a 24.1% improvement over the BLIP-2 base model, showcasing significant performance enhancement. In addition to evaluating the performance of our method on two benchmarks specifically designed to assess spatial awareness, this study also tested the effectiveness of our method on other tasks. In the MME benchmark, our method not only significantly improved in the position task but also exhibited enhancements in other related tasks, as shown in Table 4. Specifically, in the existence task, our method raised the model's accuracy from the baseline model BLIP-2's 160.00 to 168.00, marking a 5% improvement. In the scene task, our method raised the model's accuracy from the baseline model BLIP-2's 145.25 to 147.98, a 1.9% increase, both achieving the current best levels. Moreover, the experimental results indicate that our model generally maintained the performance of the baseline model in tasks unrelated to spatial awareness. \begin{table} \begin{tabular}{|p{142.3pt}|} \hline The scene in the picture has the following relationship \(\{(s_{1},p_{1},o_{1}),...,(s_{z},p_{z},o_{z})\}\). And, The faster R-CNN detects the target and its geometric position as follows \(\{Entity_{1}:(x_{1},y_{1},w_{1},h_{1}),Entity_{2}:(x_{2},y_{2},w_{2},h_{2})\}\). Please answer the following questions based on the above information and the image itself: Question, and directly tell me the answer which you think is correct directly. \\ \hline \hline \end{tabular} \end{table} Table 1: The detail of the prompt design. On the MM-Vet benchmark, our approach not only improved the model's performance in spatial awareness tasks but also enhanced its performance in other related core vision-language tasks, as depicted in Table 5. Specifically, in the recognition task, our method increased the model's accuracy from 27.5 to 29.0, marking a 5.5% improvement. For the OCR task, our method raised the model's accuracy from 11.1 to 14.1, showing a significant improvement of 27.9%. Moreover, in the associated mixed tasks, our method also notably enhanced the model's performance. As demonstrated in Table 6, among the 16 mixed tasks, our method outperformed the base model in 10 tasks. 
For instance, in the Recognition/Knowledge/Language Generation task, the model's accuracy increased by 11.0%; in the OCR/Spatial Awareness task, the model's accuracy improved by 3.9%; in the OCR/Spatial Awareness/Math task, the model's accuracy rose by 14.1%; and in the OCR/Knowledge/Spatial Awareness task, the model's accuracy surged by 99.4%. In summary, our approach not only significantly enhances the model's spatial awareness capabilities but also markedly improves the model's performance in other aspects such as object recognition (i.e., existence and recognition tasks), scene understanding (i.e., scene and OCR tasks), and mixed tasks. Consequently, our approach can more effectively enhance the overall performance of multimodal large models, demonstrating considerable and broad effectiveness. ### Ablation Studies. We also conducted extensive ablation experiments on both the MME and MM-Vet benchmarks to verify the effectiveness of the proposed geometric spatial position information, scene graph information, and their fusion in enhancing spatial awareness capabilities. The ablation experiment prompts are presented in Table 7. The results of the ablation experiments on the MME benchmark are shown in Table 8. The experimental results reveal that the proposed geometric spatial position information enhanced the model's accuracy in the position task from 73.33 to 78.36, demonstrating a 6.9% improvement, highlighting the effectiveness of geometric spatial position information. Likewise, the introduced scene graph information increased the accuracy of the model in the position task from 73.33 to 80.48, displaying a 9.6% improvement, showcasing the effectiveness of scene graph information. Most significantly, the combined use of geometric spatial position information and scene graph information enhanced the model's accuracy in the position task from 73.33 to 87.54, showcasing a 19.4% improvement, emphasizing the effectiveness of combining information in advancing spatial awareness in large multimodal language models. Additionally, the improvement from the fusion method was 2.9% higher than the cumulative improvement from using the two pieces of information separately, effectively demonstrating the advantage of integrating information from scene graph generation algorithms and object detection algorithms in better understanding the positioning relationships between target entities in an image, ultimately enhancing the performance and effectiveness of multimodal tasks. The ablation experiment results on the MM-Vet benchmark are shown in Table 9. From the experimental findings, it is evident that the proposed geometric spatial position information increased the model's accuracy in the spatial awareness task from 16.2 to 18.8, showcasing a 16.0% enhancement, highlighting the effectiveness of geometric spatial position information. Similarly, the introduced scene graph information enhanced the model's accuracy in the spatial awareness task from 16.2 to 19.6, demonstrating a 21.0% improvement, underlining the effectiveness of scene graph information. Most significantly, the combined utilization of geometric spatial position information and scene graph information elevated the model's accuracy in the spatial awareness task from 16.2 to 20.1, displaying a 24.1% enhancement, emphasizing the effectiveness of combining information to improve spatial awareness in large multimodal language models. It also attests to the widespread effectiveness of the proposed method across different datasets. 
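As a quick sanity check, the relative improvements quoted above follow directly from the reported task scores (the MME position score being the sum of accuracy and accuracy+, as described under Evaluation Metrics); the short snippet below reproduces two of them from the numbers in the text.

```python
def rel_gain(new: float, old: float) -> float:
    return 100.0 * (new - old) / old

# MME position task, combined method vs. BLIP-2 baseline (Table 8 numbers)
print(round(rel_gain(87.54, 73.33), 1))  # 19.4 (% improvement)

# MM-Vet spatial awareness task, combined method vs. BLIP-2 baseline (Table 9 numbers)
print(round(rel_gain(20.1, 16.2), 1))    # 24.1 (% improvement)
```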
## Conclusion The paper introduces a novel approach to enhance the spatial awareness capabilities of MLLM. This method utilizes pretrained geometric spatial information extraction algorithms and scene graph generation algorithms to provide geometric spatial information between objects and higher-level scene graph details. This guidance aids MLLM in more accurately addressing user inquiries related to spatial awareness. Extensive experiments were conducted on benchmarks such as MME, MM-Vet, and other MLLM benchmarks. The experimental outcomes robustly confirm the efficacy of this research method, significantly improving MLLM performance in spatial awareness and associated tasks. \begin{table} \begin{tabular}{l c} \hline Model & Position \\ \hline ImageBind-LIM & 46.67 \\ LLaMA-AdapterV2 & 48.33 \\ VisualGLM-6B & 48.33 \\ PandAGPT & 50.00 \\ mPLUG-Owl & 50.00 \\ LLaVA & 50.00 \\ Multimodal-GPT & 58.33 \\ MiniGPT-4 & 81.67 \\ BLIP-2-12B & **73.33** \\ Our Model & **87.54** \\ \hline \end{tabular} \end{table} Table 2: The experimental results of the model in the position task of the MME benchmark. \begin{table} \begin{tabular}{l c} \hline Model & Spatial Awareness \\ \hline Transformers Agent (GPT-4) & 12.4 \\ LLaMA-Adapter v2-7B & 16.6 \\ OpenFlanning-9B & 18.0 \\ Otter-9B & 19.3 \\ InstructBLIP-8B & 18.6 \\ BLIP-2-12B & **16.2** \\ Our Model & **20.1** \\ \hline \end{tabular} \end{table} Table 3: The experimental results of the model in the spatial awareness task of the MM-Vet benchmark. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline Model & R/K/G & R & O/S & O/S/M & R/S & O & R/O/G/S & R/O/S & O/K/S \\ \hline Transformers Agent(GPT-4) & 1.3 & 49.1 & 0.0 & 7.4 & 45.8 & 0.0 & 9.5 & 0.0 & 0.0 \\ LLaMA-Adapter v2-7B & 0.2 & 43.2 & 7.9 & 8.1 & 41.7 & 0.0 & 26.8 & 0.0 & 33.3 \\ OpenFlamingo & 15.6 & 48.6 & 17.3 & 21.4 & 41.7 & 18.3 & 0.0 & 14.3 & 0.0 \\ Otter-9B & 22.5 & 50.0 & 18.1 & 21.4 & 33.3 & 16.7 & 28.5 & 0.0 & 16.7 \\ BLIP-2-12B & **7.3** & **65.1** & **11.5** & **7.1** & **41.7** & **21.2** & **8.5** & **14.3** & **16.7** \\ Our Model & **8.1** & **67.3** & **15.4** & **8.1** & **47.5** & **22.4** & **12.8** & **18.3** & **33.3** \\ \hline \end{tabular} \end{table} Table 6: The experimental results of the model on the 9 integrations of interest derived from the capability combination in the MM-Vet benchmark. Where R, K, G, O, S, and M represent Recognition, Knowledge Language, Generation, OCR, Spatial Awareness, and Math, respectively. 
\begin{table} \begin{tabular}{l|c c c c c c c c c} \hline Model & Exist & Count & Position & Color & Poster & Celeb & Scene & Landm & Artwork & OCR \\ \hline MiniGPT-4 & 115.00 & 123.33 & 81.67 & 110.00 & 55.78 & 65.29 & 95.75 & 69.00 & 55.75 & 95.00 \\ mPLUG-Owl & 120.00 & 50.00 & 50.00 & 55.00 & 136.05 & 100.29 & 135.50 & 159.25 & 96.25 & 65.00 \\ ImageBind-LIM & 128.33 & 60.00 & 46.67 & 73.33 & 64.97 & 76.47 & 113.25 & 62.00 & 70.75 & 80.00 \\ VisualGLM-6B & 85.00 & 50.00 & 48.33 & 55.00 & 65.99 & 53.24 & 146.25 & 83.75 & 75.25 & 42.50 \\ Multimodal-GPT & 61.67 & 55.00 & 58.33 & 68.33 & 57.82 & 73.82 & 68.00 & 69.75 & 59.50 & 82.50 \\ PandaGPT & 70.00 & 50.00 & 50.00 & 50.00 & 76.53 & 57.06 & 118.00 & 69.75 & 51.25 & 50.00 \\ LLaVA & 50.00 & 50.00 & 50.00 & 55.00 & 50.00 & 48.82 & 50.00 & 50.00 & 49.00 & 50.00 \\ BLIP-2-12B & **160.00** & **135.00** & **73.33** & **148.33** & **141.84** & **105.59** & **145.25** & **138.00** & **136.50** & **110.00** \\ Our Model & **168.00** & **135.00** & **86.67** & **145.00** & **141.84** & **105.59** & **147.98** & **137.25** & **135.50** & **110.00** \\ \hline \end{tabular} \end{table} Table 4: The experimental results of the model on other tasks within the MME benchmark are as follows. Where Exist stands for Existence, Celeb represents Celebriy, and Landm indicates Landmarks. \begin{table} \begin{tabular}{l c c c c c c c c} \hline Model & R/K/G & R & O/S & O/S/M & R/S & O & R/O/G/S & R/O/S & O/K/S \\ \hline Transformers Agent(GPT-4) & 1.3 & 49.1 & 0.0 & 7.4 & 45.8 & 0.0 & 9.5 & 0.0 & 0.0 \\ LLaMA-Adapter v2-7B & 0.2 & 43.2 & 7.9 & 8.1 & 41.7 & 0.0 & 26.8 & 0.0 & 33.3 \\ OpenFlamingo & 15.6 & 48.6 & 17.3 & 21.4 & 41.7 & 18.3 & 0.0 & 14.3 & 0.0 \\ Otter-9B & 22.5 & 50.0 & 18.1 & 21.4 & 33.3 & 16.7 & 28.5 & 0.0 & 16.7 \\ BLIP-2-12B & **7.3** & **65.1** & **11.5** & **7.1** & **41.7** & **21.2** & **8.5** & **14.3** & **16.7** \\ Our Model & **8.1** & **67.3** & **15.4** & **8.1** & **47.5** & **22.4** & **12.8** & **18.3** & **33.3** \\ \hline \end{tabular} \end{table} Table 5: The experimental results of the model on the 6 core vision-language capabilities in the MM-Vet benchmark, where Rec, Know, Spat, and Gen represent recognition, knowledge, spatial awareness, and language generation, respectively, are presented.
2310.00087
Optically-trapped microspheres are high-bandwidth acoustic transducers
We report on the use of an optically-trapped microsphere as an acoustic transducer. A model for the hydrodynamic coupling between the microsphere and the surrounding acoustic fluid flow is combined with thermo-mechanical calibration of the microsphere's position detection to enable quantitative acoustic measurements. We describe our technique in detail, including the self-noise, sensitivity, and minimum detectable signals, using a model appropriate for both liquid and gas environments. We then test our approach in an air-based experiment and compare our measurements with two state-of-the-art commercially-available acoustic sensors. Piezoelectrically-driven bursts of pure tones and laser ablation provide two classes of test sounds. We find accurate measurements with a bandwidth of 1 MHz are possible using our technique, improving by several orders of magnitude the bandwidth of previous flow measurements based on optically-trapped microspheres.
Logan E. Hillberry, Mark G. Raizen
2023-09-29T18:56:02Z
http://arxiv.org/abs/2310.00087v1
# Optically-trapped microspheres are high-bandwidth acoustic transducers ###### Abstract We report on the use of an optically-trapped microsphere as an acoustic transducer. A model for the hydrodynamic coupling between the microsphere and the surrounding acoustic fluid flow is combined with thermo-mechanical calibration of the microsphere's position detection to enable quantitative acoustic measurements. We describe our technique in detail, including the self-noise, sensitivity, and minimum detectable signals, using a model appropriate for both liquid and gas environments. We then test our approach in an air-based experiment and compare our measurements with two state-of-the-art commercially-available acoustic sensors. Piezoelectrically-driven bursts of pure tones and laser ablation provide two classes of test sounds. We find accurate measurements with a bandwidth of 1 MHz are possible using our technique, improving by several orders of magnitude the bandwidth of previous flow measurements based on optically-trapped microspheres. ## I Introduction Owing to their micro-manipulation and force transduction capabilities, optical tweezers have become an indispensable tool in a variety of scientific fields [1]. By tightly focusing a laser beam, optical forces can exceed gravitational forces and thermal fluctuations to stably trap micron-scale objects [2]. In vacuum [3], optical tweezers have enabled zeptonewton force sensing [4], state-of-the-art torque sensitivity [5], and searches for new physics [6], including proposals to measure high-frequency gravity waves [7]. Also in vacuum, optical tweezers can trap and cool microspheres [8] to the motional ground state [9; 10], and have been multiplexed to arrays of hundreds of single-atom traps in a promising platform for quantum computation and simulation [11; 12]. In aqueous solution, optical tweezers can measure mechanical properties of life at the nano-scale [13; 14], such as the stepping strength of molecular motors or the rigidity of biomolecules [15; 16; 17]. Also in liquid, optical tweezers enable ultra-fast viscosity measurements [18] and Casimir force measurements [19]. In gaseous media, optical tweezers have revolutionized single-particle aerosol science [20], including absolute pressure measurements and species identification [21], mass metrology [22; 23], and single-droplet growth and freezing studies [24; 25; 26]. There further exists a body of work using optically-trapped microspheres to measure flow in liquids [27; 28; 29; 30; 31; 32]. So far, these studies have characterized low frequency (\(<500\) Hz) flows by monitoring the motion of optically-trapped microspheres with a camera or position-sensitive detector. In this Letter, we propose and demonstrate a fluid velocity measurement scheme with a bandwidth approaching 1 MHz using optically-trapped microspheres in air. Flow at such high frequencies is generally associated with acoustic radiation. A schematic of our optically-trapped-microsphere acoustic transducer is shown in Fig. 1. Other non-traditional acoustic sensors have recently been studied, including optical micro-resonators [33; 34], and laser deflection or interference methods [35]. As we will see, our method uniquely combines self-calibration, high-bandwidth, and high-sensitivity to acoustic velocity waves (rather than pressure waves). Our method builds on earlier work that first measured the instantaneous velocity of a thermally-fluctuating microsphere in air [36]. 
This same system is not only sensi Figure 1: Schematic depiction of the experimental set up. A 1064 nm laser is split by a polarizing beamsplitter. The \(p\)-polarized beam is sent through an acousto-optic modulator (AOM) to shift its frequency by 80 MHz, thereby eliminating interference effects in the trap. The \(p\)-polarized beam is then steered counter-propagating to the \(s\)-polarized beam and both are focused to the same point between twin aspheric lenses (numeric aperture 0.7), generating a stable optical trap for silica microspheres in air. After passing through the trap, the \(s\)-polarized beam is separated with a second polarizing beamsplitter and sent to the detection system. For detection, a sharp, D-shaped cut mirror splits the incoming transverse mode into two halves that are sent to a balanced photo-detector (75 MHz bandwidth). Various acoustic sources provide test sounds, and additional acoustic sensors, a microphone and Microflow, are positioned just behind the trap. The entire system is enclosed in a multi-chamber acrylic box to mitigate air currents. tive to thermal fluctuations, but also to acoustic perturbations. Two ingredients, a hydrodynamic model of the acoustic force and thermo-mechanical self-calibration, enable quantitative acoustic measurements. Since the microsphere is uniquely sensitive to high frequency velocity flows, we use two commercially-available sensors to asses our platform's capabilities: We benchmark our method in terms of accuracy and bandwidth against 1) a high-bandwidth (\(200\,\mathrm{kHz}\)) pressure microphone, and 2) a micron-scale dual-hot-wire anemometer [37; 38] (calibrated bandwidth \(20\,\mathrm{kHz}\)) that is commercially known as the _Microflown_[39]. The remainder of this paper is organized as follows: In Section II we describe the microsphere's acoustic sensing modality, including calibration, self-noise, and minimum-detectable signals. Section III reports our sound detection results. We then discuss our results within the context of other microsphere-based flow measurements and speculate on future applications in Section IV. The paper is then concluded in Section V. ## II Noise, calibration, and acoustic response In thermal equilibrium with a reservoir fluid at finite temperature, a microsphere's position fluctuates in random and perpetual _Brownian motion_[40]. Brownian motion velocity detectors [41; 18; 36] are sensitive to both thermally fluctuating and driven fluid flows. If the resulting driven motion is larger than the random thermal motion (and detector noise), an acoustic signal is detectable. In what follows we develop a model for the acoustic signal and thermal noise of our proposed acoustic detection system. For the general setup, consider a microsphere of radius \(R\) and density \(\rho\) harmonically bound to the coordinate origin. The microsphere mass is \(m=4\pi\rho R^{3}/3\) and the harmonic trap strength is \(\kappa\). Let the trapping fluid at temperature \(T\) have density \(\rho_{\mathrm{f}}\), speed of sound \(c_{0}\), and dynamic viscosity \(\eta\). The \(x\)-component of the system's equation of motion is \[m\ddot{x}(t)+\kappa x(t)-F_{\mathrm{d}}[v(t)]=F_{\mathrm{ext}}(t)+F_{\mathrm{ th}}(t) \tag{1}\] where \(v(t)=\dot{x}(t)\) is the microsphere's velocity at time \(t\), \(F_{\mathrm{d}}(v)\) is the dissipative, velocity-dependent drag force, and \(F_{\mathrm{ext}}\) is an external driving force. 
\(F_{\mathrm{th}}\) is the fluctuating thermal force that is related to the dissipative force through the fluctuation-dissipation theorem. When all bounding walls are far from the sphere [42] and the fluid flow at sphere's surface does not slip [43], the hydrodynamic drag force in the incompressible limit is [44; 45; 46] \[F_{\mathrm{d}}[v(t)]=-\gamma_{0}\left(v(t)+\sqrt{\frac{\tau_{\mathrm{f}}}{ \pi}}\int_{-\infty}^{t}\mathrm{d}t^{\prime}\,\frac{\dot{v}(t^{\prime})}{\sqrt {t-t^{\prime}}}\right)-\frac{\delta}{2}m\dot{v}(t)\,, \tag{2}\] where \(\gamma_{0}=6\pi\eta R\) is the Stokes friction coefficient and \(\delta=\rho_{\mathrm{f}}/\rho\) is the fluid-to-micsphere density ratio. The _vorticity diffusion time_\(\tau_{\mathrm{f}}=R^{2}\rho_{\mathrm{f}}/\eta=9\delta\tau_{\mathrm{p}}/2\) is the amount of time it takes for vorticity -- the curl of velocity -- to diffuse across the sphere and \(\tau_{\mathrm{p}}=m/\gamma_{0}\) is the momentum diffusion time. The first, second, and third terms of Eq. (2) describe, respectively, Stokes drag (independent of \(\delta\)), viscous damping due to the flow history (proportional to \(\delta^{1/2}\)), and inertia of the mass added by the fluid that follows the microsphere (proportional to \(\delta\)). For a silica microsphere in air \(\delta\sim 10^{-3}\ll 1\) hence Eq. (2) reduces to \(F_{\mathrm{d}}[v(t)]\approx-\gamma_{0}v(t)\). In the frequency domain, we may write \(F_{\mathrm{d}}[v(\omega))]=-\gamma(\omega)v(\omega)\) where \(\omega=2\pi f\) is the circular frequency, the frequency-dependent damping is [47; 48] \[\gamma(\omega)=\gamma_{0}\left(1+\sqrt{-i\tau_{\mathrm{f}}\omega}-i\frac{\tau _{\mathrm{f}}\omega}{9}\right)\,, \tag{3}\] and \(\sqrt{-i}=(1-i)/\sqrt{2}\) defines the square-root's branch cut. Next, we consider two cases: _noise_ when \(F_{\mathrm{ext}}=0\) and _signal_ when \(F_{\mathrm{th}}=0\) and \(F_{\mathrm{ext}}\) is caused by an acoustic wave. ### Noise The thermal force is \(F_{\mathrm{th}}(t)=\sqrt{2k_{\mathrm{B}}T\gamma_{0}}\xi(t)\) where \(\xi(t)\) is a zero-mean, possibly-time-correlated [51] random variable, and \(k_{\mathrm{B}}\) is Boltzmann's constant. When \(F_{\mathrm{ext}}=0\), the equation of motion (1) may be solved in the frequency domain for the _admittance_\(v(\omega)/F_{\mathrm{th}}(\omega)=(\gamma(\omega)-i\omega m+i\kappa/\omega)^{-1}\) The corresponding (one-sided) velocity power spectral density is given by the Kubo-Green formula [52] as \[S_{vv}(\omega)=4k_{\mathrm{B}}T\,\mathrm{Re}\left[(\gamma(\omega)-i\omega m+i \kappa/\omega)^{-1}\right]\,. \tag{4}\] Equation (4) describes the microsphere's thermal fluctuations and hence the inherent noise which must be overcome to detect \(F_{\mathrm{ext}}\neq 0\). However, beyond noise limitations, thermal fluctuations enable an accurate detector calibration scheme. The split-beam detection method, depicted in Fig. 1, generates a linear voltage signal \(V(t)=\beta x(t)\) where \(\beta\) is the displacement-to-voltage calibration factor. For silica microspheres in air, the radius \(R\), temperature \(T\), and viscosity \(\eta\) can be considered known to within a couple percent [53; 22; 52]. Since \(S_{xx}=S_{vv}/\omega^{2}\) we can predict the detector's (one-sided) Brownian-motion-driven voltage power spectral density \[S_{VV}(\omega) =\frac{\beta^{2}}{\omega^{2}}S_{vv}(\omega) \tag{5}\] \[\approx\beta^{2}\frac{4k_{\mathrm{B}}T\gamma_{0}}{(m\omega^{2}- \kappa)^{2}+\gamma_{0}^{2}\omega^{2}}\,. 
\tag{6}\] The second approximate equality (6) is accurate for thermal fluctuations in air and assumes \(\gamma(\omega)\approx\gamma_{0}\) As shown in Fig. 2 (a), by averaging experimental periodograms of thermally-driven voltage signals and maximum-liklihood fitting [50, 54] to Eq. (6), we can learn [22]\(\rho=1.7(1)\,\mathrm{g/cm^{3}}\), \(\kappa=21.3(7)\,\mathrm{f\SIUnitSymbolMicro N}/\mathrm{nm}\), and \(\beta=2.1(1)\,\mathrm{mV/nm}\). At high frequencies, the spectrum (6) decays as \(\sim\omega^{-4}\) until the detector's constant noise floor \(\chi=0.49(2)\,\mathrm{\SIUnitSymbolMicro V}^{2}/\mathrm{Hz}\) dominates the signal. Our detector's narrow-band position sensitivity is therefore \(\sqrt{\chi}/\beta=333(21)\,\mathrm{fm}/\sqrt{\mathrm{Hz}}\). The inset of Fig. 2 (a) shows that subtle hydrodynamic effects described by Eq. (5) are perceptible in thermally driven motion above \(\sim 50\,\mathrm{kHz}\), but may be ignored for calibration purposes by restricting the fit domain. In the next section, we will calculate the response of the microsphere to a harmonic acoustic wave. ### Signal When impinging on the trapped microsphere along the direction \(x\) of position measurement, a sound wave of fluid velocity \(u\) and acoustic pressure \(p\) applies an external force [46]\(F_{\mathrm{ext}}=F_{\nabla}(p)+F_{\mathrm{d}}(-u)\). The pressure gradient force is \(F_{\nabla}(p)=-4\pi R^{3}\nabla p/3\). Using Euler's (linearized) equation \(\nabla p=-\rho_{\mathrm{f}}\dot{\mathbf{u}}\), the pressure gradient force is \(F_{\nabla}=\delta m\dot{u}=2\gamma_{0}\tau_{\mathrm{f}}\dot{u}/9\). Taking \(F_{\mathrm{th}}=0\), one can solve the equation of motion (1) in the frequency domain for the transfer function \(H(\omega)=v(\omega)/u(\omega)\), yielding \[H(\omega)=\frac{\gamma(\omega)-i\omega\delta m}{\gamma(\omega)-i\omega m+i \kappa/\omega}. \tag{7}\] The transfer function, shown in Fig. 2 (b), describes the microsphere's velocity amplitude and phase relative to that of the fluid. Though \(\gamma(\omega)\approx\gamma_{0}\) is appropriate for thermal fluctuations and system calibration in air, driven motion can occur at much higher frequencies, so we retain all three terms in Eq. (3). For example, at \(1\) MHz, taking \(\gamma(\omega)\approx\gamma_{0}\) underestimates the amplitude of \(H\) by a factor of \(\sim 2\) and overestimates the phase by \(\sim\pi/6\) radians. The primary correction to \(H\) beyond \(\gamma(\omega)\approx\gamma_{0}\) comes from the history term in Eq. (3); the added mass and pressure gradient effects are both proportional to the density ratio \(\delta\) and hence small in air. We retain all terms so that our model remains valid for liquid media for which \(\delta\sim 0.1-1\). The detector's voltage signal is converted to an acoustic velocity signal using a frequency domain deconvolution \(u(t)=\mathcal{F}^{-1}[\mathcal{F}[V(t)]/\psi_{u}(\omega)]\) where \(\mathcal{F}\) is the Fourier transform, and the microsphere's frequency-dependent velocity sensitivity is \[\psi_{u}(\omega)=\frac{-i\beta H(\omega)}{\omega}\,. \tag{8}\] The sensitivity is proportional to the transfer function \(H\), the calibration factor \(\beta\), and the factor \(-i/\omega\) that affects the required position-to-velocity derivative. For experimental data sampled at a rate \(1/dt\), the derivative factor consistent with a central finite difference in the time-domain is \(-i/\omega\to-idt/\sin(\omega dt)\)[55]. 
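As a minimal illustration of the conversion from detector voltage to fluid velocity, the following is a compact numerical sketch of Eqs. (3), (6), (7), and (8) and of the frequency-domain deconvolution just described. Parameter values, function names, and the handling of the DC bin are placeholders and assumptions; only the formulas themselves are taken from the text.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def gamma_omega(omega, gamma0, tau_f):
    """Frequency-dependent damping, Eq. (3)."""
    return gamma0 * (1.0 + np.sqrt(-1j * tau_f * omega) - 1j * tau_f * omega / 9.0)

def lorentzian_voltage_psd(omega, T, gamma0, m, kappa, beta):
    """Approximate thermally-driven voltage PSD of Eq. (6) used for calibration fits."""
    return beta**2 * 4.0 * k_B * T * gamma0 / ((m * omega**2 - kappa)**2 + gamma0**2 * omega**2)

def transfer_function(omega, m, kappa, gamma0, tau_f, delta):
    """Fluid-to-microsphere velocity transfer function H(omega), Eq. (7)."""
    g = gamma_omega(omega, gamma0, tau_f)
    return (g - 1j * omega * delta * m) / (g - 1j * omega * m + 1j * kappa / omega)

def velocity_sensitivity(omega, dt, beta, m, kappa, gamma0, tau_f, delta):
    """psi_u(omega), Eq. (8), with the discrete derivative factor -i*dt/sin(omega*dt)."""
    H = transfer_function(omega, m, kappa, gamma0, tau_f, delta)
    return (-1j * dt / np.sin(omega * dt)) * beta * H

def voltage_to_velocity(V, dt, beta, m, kappa, gamma0, tau_f, delta):
    """Deconvolve a recorded voltage trace V(t) into acoustic fluid velocity u(t)."""
    n = len(V)
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    omega[0] = omega[1]  # assumption: ignore the DC bin
    psi = velocity_sensitivity(omega, dt, beta, m, kappa, gamma0, tau_f, delta)
    # In practice the highest-frequency bins are removed by the low-pass filtering step.
    return np.fft.irfft(np.fft.rfft(V) / psi, n=n)
```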
Acoustic pressure and velocity are related through the impedance \(Z(\omega)=p(\omega)/u(\omega)\), hence the pressure sensitivity is \(\psi_{p}=\psi_{u}/Z\). For plane acoustic waves \(Z=\rho_{\mathrm{f}}c_{0}\) is a constant. We will assume planar acoustic waves throughout and use the factor \(Z\) to freely convert between pressure and velocity. Commercial acoustic detectors are typically calibrated by comparing the sensor's output voltage to a well-characterized input sound under anechoic conditions. By contrast, our thermo-mechanical position calibration and hydrodynamic transfer function enable self-calibration. The sensitivity amplitudes of our commercial microphone and Microflown are provided by the manufacturers and shown in Fig. 3 compared to the sensitivity of our microsphere system. Figure 2: (a) Experimental position power spectral density (open circles) of a \(R=1.51(5)\,\mathrm{\SIUnitSymbolMicro m}\) silica microsphere thermally driven by air at \(T=23.97(1)\,\mathrm{\SIUnitSymbolMicro C}\) with a relative humidity of \(57(1)\%\), which has a viscosity \(\eta=18.23(1)\,\mathrm{\SIUnitSymbolMicro Pa}\)[49]. The experimental spectrum is an average periodogram of \(550\) signals of length \(3\) ms. For visualization, each point of the experimental spectrum is an average over logarithmically-spaced frequency bins. Calibration is performed by fitting the voltage spectrum in the \(1\) kHz to \(30\) kHz band to Eq. (6) (dashed line). The spectrum and fit are shown here in physical units using the calibration result. The solid line uses the fit results to include hydrodynamic effects that are imperceptible up to \(\sim 50\,\mathrm{kHz}\). However, the \(50\) kHz to \(100\) kHz band (gray shaded region) does exhibit subtle hydrodynamic effects, as suggested by the the data-to-theory ratio’s probability density (inset), wherein the hydrodynamic theory (solid red line) follows much more closely the expected Erlang distribution of ratios (solid black line) [22, 50]. (b) Theoretical transfer function relating microsphere velocities to fluid velocities. The red lines show the amplitude on the left axis while the black lines show the phase on the right axis. The solid line corresponds to the hydrodynamic theory while the dashed lines makes the approximation \(\gamma(\omega)\approx\gamma_{0}\). The microsphere, trap, and fluid parameters are chosen to be consistent with the calibration shown in (a). ### Detection limits The above considerations for signal and noise allow us to estimate our microsphere's minimum detectable acoustic signal. A voltage signal derived from only thermal fluctuations (5) then transformed to a fluid velocity via the sensitivity (8) will exhibit a self-noise spectrum [Fig. 4 (a)] \[S_{\mathrm{nn},u}(\omega)=\frac{S_{VV}(\omega)}{|\psi_{u}(\omega)|^{2}}=\frac{4 k_{\mathrm{B}}\mathrm{TRe}[\gamma(\omega)]}{|\gamma(\omega)-i\omega\delta m|^{2}}. \tag{9}\] The self-noise is quite flat and near the DC value \(S_{\mathrm{nn},u}(\omega\to 0)=4k_{\mathrm{B}}T/\gamma_{0}\). From the self-noise spectrum, the minimum-detectable signal is given by the band-limited variance [Fig. 4 (b)] \(u_{\mathrm{min}}=\sqrt{\int_{0}^{f}\mathrm{d}f^{\prime}\,S_{\mathrm{nn},u}(2 \pi f^{\prime})}\,.\) One can include the effects of a constant detector noise floor by making the replacement \(S_{VV}(\omega)\to S_{VV}(\omega)+\chi\) in Eq. (9). 
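A corresponding sketch of the self-noise spectrum of Eq. (9) and of the band-limited minimum detectable velocity is given below; it repeats the small `gamma_omega` helper so that it runs on its own, and the integration grid (starting at 1 Hz) is an arbitrary choice rather than something specified in the text.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def gamma_omega(omega, gamma0, tau_f):
    """Frequency-dependent damping, Eq. (3)."""
    return gamma0 * (1.0 + np.sqrt(-1j * tau_f * omega) - 1j * tau_f * omega / 9.0)

def self_noise_velocity(omega, T, gamma0, tau_f, m, delta):
    """One-sided velocity self-noise S_nn,u(omega) of Eq. (9) (thermal part only)."""
    g = gamma_omega(omega, gamma0, tau_f)
    return 4.0 * k_B * T * np.real(g) / np.abs(g - 1j * omega * delta * m) ** 2

def u_min(f_band, T, gamma0, tau_f, m, delta, n=4096):
    """Minimum detectable velocity for a measurement bandwidth f_band, in m/s."""
    f = np.linspace(1.0, f_band, n)  # avoid omega = 0; the DC value is 4*k_B*T/gamma0
    S = self_noise_velocity(2.0 * np.pi * f, T, gamma0, tau_f, m, delta)
    return np.sqrt(np.trapz(S, f))
```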
## III Results Having established the operating principle and expected performance of optically-trapped microspheres as acoustic sensors, we next describe experimental results. Using a two-channel high-speed digitizer, we record the microsphere signal and either the microphone or the Microflow signal when driven by various sound sources. Each channel is analog-low-pass filtered to 4 MHz then sampled at a rate of \(1/dt=25\,\mathrm{MHz}\) to minimize aliasing. In post processing, the recorded voltage signals are further low-pass filtered by averaging together adjacent points of non-overlapping segments, thereby adjusting the effective sampling rate and signal bandwidth. Once filtered, the voltage signals are converted to either pressure or velocity using the appropriate sensitivity curves. ### Tone-burst sound source Tone bursts, consisting of a certain number of sinusoidal cycles at a single frequency, provide a simple and repeatable test signal for our various acoustic detectors. In our first set of experiments, we launch tone bursts using a function generator to drive piezoelectric buzzers held a distance \(\Delta x=44\,\mathrm{mm}\) from the optical trap. \(\Delta x\) is varied by mounting the piezo buzzers on a motorized platform. We drive one of two buzzers at their resonant frequencies \(4\,\mathrm{kHz}\) or \(40\,\mathrm{kHz}\). We observe excellent agreement between our commercially-calibrated reference sensors and our thermo-mechanically calibrated research system, as shown in Fig. 5. The agreement between sensors lessens as source distance \(\Delta x\) or time \(t\) increases (see Fig. 7 of the Appendix). The loss of agreement could be due to a number of effects including acoustic scattering and diffraction, and differences in sensor directivity, placement, and size. ### Laser ablation sound source A pulsed laser focused to a small spot on a surface can deposit a vast amount of energy in a short amount of time [56]. This phenomenon has fueled diverse technologies including micro-machining [57], laser-induced-breakdown spectroscopy [58], thin film growth [59], and a platform for studies of light-plasma interactions [60]. The sharp acoustic impulse generated by laser ablation has spurred its own research thrusts on non-contact damage detec Figure 3: Comparing acoustic detector velocity sensitivities. The microsphere parameters are consistent with the calibration shown in Fig. 2 (a). The microphone sensitivity is provided by the manufacturer and includes corrections for operation without the protective grid and in free-field conditions. The nominal pressure sensitivity is \(0.68\) mV/Pa and is converted to velocity via the plane-wave impedance of air for comparison with the velocity sensors. The microphone calibration known up to \(200\) kHz (dashed amber line). Figure 4: (a) Thermally-driven self-noise spectrum for microsphere-based acoustic sensing. (b) The minimum-detectable acoustic disturbance estimated from the self-noise spectrum’s band-limited variance. In both panels: The solid lines include effects of a constant detection noise floor while the dashed line assumes perfect detection. All other parameters are consistent with the calibration shown in Fig. 2 (a). The left axis quantifies results in terms of acoustic velocity while the right axis converts to pressure via the plane-wave impedance of air. tion [61], medical imaging [62], and scale-modeling of sonic booms [63]. 
The impulse has an N-shaped acoustic signature, consisting of a sharp rise, followed by a decay through a zero-crossing into a slower-timescale trough. In our second set of experiments, we use laser ablation to generate high-frequency-content impulsive sounds to test the high-frequency measurement capabilities of our microsphere-based acoustic sensor. The ablation laser operates at a wavelength of 532 nm with a pulse width of 5 ns and an energy of \(\sim 7\) mJ. The pulse has a flat-top mode shape that is focused with a 65 mm focal length lens to \(\sim 75\,\mathrm{\SIUnitSymbolMicro m}\) on an aluminum target. The ablation target, focusing lens, and laser steering mirror are all mounted on the motorized platform used to vary the source distance \(\Delta x\). The ablation target is further motorized to rotate and reveal a fresh target spot every ten shots. For this experiment, we do not measure the Microflown signal because of its limited high-frequency sensitivity. Figure 6 shows the microphone and microsphere signals at \(\Delta x=100\,\mathrm{mm}\). It is well known that standard microphones are unable to resolve the rising edge of the acoustic impulse sourced by laser ablation [64], necessitating alternative methods such as laser deflection or interference [35]. Our results indicate optically-trapped microspheres offer another alternative that is capable of measuring impulsive signals with a \(\sim 1\,\mathrm{\SIUnitSymbolMicro s}\) rising edge, defined as the time for the signal to change from 10% to 90% of its peak value. By comparison, the microphone measures a rise-time of \(\sim 5\,\mathrm{\SIUnitSymbolMicro s}\). As \(\Delta x\) decreases, the microsphere signal becomes more intricate, featuring two or more initial peaks (see Fig. 8 of the Appendix). The details of these features are very sensitive to the orientation of the target and its lateral offset from the trap center. ## IV Discussion We now turn to a discussion of the results presented in the previous section. We then contextualize the results by reviewing similar work using optically-trapped microspheres for flow measurements. Finally, we outline possible extensions and applications left for future work. From the tone-burst experiments, we conclude that our microsphere-based acoustic sensor is capable of making calibrated acoustic measurements. All three sensors agree well when converted to the same units, suggesting the plane-wave impedance model is acceptable and that our microsphere calibration and sensing protocol are correct. The laser ablation sound source highlights the microsphere's superior bandwidth in the form of a steeper rising edge and higher peak pressure as compared to the microphone. In the trough portion of the ablation signal, the two sensors are in better quantitative agreement because acoustic variations are slower and therefore less Figure 5: Comparing measurements of tone-burst signals between three acoustic sensors. (a) Ten cycles of a 40 kHz tone (9 V peak-to-peak drive voltage). All sensors are post-processed to a bandwidth of 200 kHz (b) Three cycles of a 4 kHz tone (7 V peak-to-peak drive voltage). All sensors are post-processed to a bandwidth of 20 kHz. In both panels, 100 independent trials are averaged, and the origin in time is aligned for each sensor manually. Figure 6: Microsphere and microphone response to an acoustic impulse generated by laser ablation, averaged over 10 shots. (a) A trace showing the initial noise level, leading edge arrival, and subsequent reverberations. 
The microphone is processed with its maximum bandwidth of 200 kHz, and the microsphere is processed with a bandwidth of 1 MHz. The time origin is set to the first zero crossing following the leading edge. (b) A trace of the same impulse over a 20\(\times\) shorter time window. The solid red line is the microsphere data shown in (a), the open squares are the microphone data, and the open circles are the microsphere data filtered to a bandwidth of 200 kHz. susceptible to band-limited distortion. When the analysis bandwidth of the microsphere is restricted to that of the microphone [open-circles in Fig. 6 (b)], the rise times and peak pressures are in much better agreement. Unlike the tone-burst sources, shorter source distances \(\Delta x\) result in worse agreement between the microsphere and microphone for laser ablation sources. We understand this as a near-field source impedance effect. Indeed, laser ablation acoustic waves are typically modeled as spherical or cylindrical waves for which the impedance is a complex-valued function that approaches to the plane-wave value at large source distances \(\Delta x\). Taken together, our experiments show that optically-trapped microspheres enable calibrated and high-bandwidth sensing of an acoustic wave's velocity quadrature. Let us next contrast our microsphere-based sensing protocol with other experiments in the recent literature. First, one other work has couched their experiments as acoustic sensing using optically-trapped microspheres [29], but in a dramatically different regime. In that work, a 60 nm gold sphere is trapped in water and imaged at 50 Hz with a camera. Sounds are generated by intensity-modulating a CW laser beam focused onto a nearby cluster of gold nanoparticles at 10 Hz to 50 Hz, or by a needle attached to a 300 Hz loudspeaker. Since the detection method is slow, the methodology hinges on measuring the particle's position variance in response to sound, hence no time-dependent waveforms may be constructed. The authors claim to be able to detect sound power down to a level of -60 \(\mathrm{dB_{re\,1pW}}\). Similar frequency-domain analysis of camera-captured microsphere trajectories is used in [27], where flow is generated by the rotating flagella bundle of an optically-trapped bacterium, and in [28] where flow is generated by periodically blocking and unblocking one of two transversly-separated traps, causing a drive particle to periodically jump. In [32], a microsphere is trapped in water contained within a 6.8 MHz, piezo-driven, standing-wave chamber. The time-averaged microsphere position is recorded using a camera at 150 Hz. The steady-state displacement of the microsphere from its equilibrium position maps the standing-wave profile. In a more-recent work termed _optical tweezer-based velocimetry_[30], a position-sensitive detector monitors a microsphere optically trapped in a water-filled sample chamber. The sample chamber is driven at frequencies of 1 Hz - 90 Hz. Velocity amplitudes of 1.5 \(\mathrm{\SIUnitSymbolMicro m}\)/s - 70 \(\mathrm{\SIUnitSymbolMicro m}\)/s are detected in real-time. Such low amplitudes beat the thermal limit by using a Kalman filter to deduce the flow velocity from microsphere position measurements in the presence of Brownian motion. In another recent work [31], a silica microsphere is optically trapped in water and driven transversely at 50 Hz to 400 Hz. 
An additional 30 smaller polystyrene tracer particles, initially optically trapped at fixed locations near the drive particle, are released upon starting the drive and observed to follow Lissajous trajectories. Compared to previous efforts, our work is unique because it is performed in air, it makes quantitative acoustic field measurements that are bench marked against well-calibrated detectors, and it does so with enough time resolution to observe acoustic waveforms at 4 kHz and 40 kHz, as well as impulsive waveforms with frequency content in the megahertz-range. Like some of the above methods, our method measures the flow velocity of the surrounding fluid. However, instead of inferring flow velocity through microsphere displacement, we rely on microsphere velocity measurements and a hydrodynamic model of the viscous coupling between fluid and microsphere, thereby dramatically increasing the detection bandwidth. Our results set up numerous opportunities for follow-up work. First, incorporating a Kalman filter could increase the signal-to-noise ratio while preserving the ability to self-calibrate. Second, our demonstration was in air, but the theory is equally valid in liquid. Acoustic transduction in a liquid is more efficient than in a gas due to a greater similarity in acoustic impedance between the solid transducer and the medium in which the sound propagates. Therefore, it would be interesting to compare our method to state-of-the-art acoustic sensors for water, such as a needle hydrophone. Finally, since the microsphere measures acoustic velocity, it could be combined with novel opto-acoustic methods that are capable of high-bandwidth pressure measurement to elucidate the impedance of unique sources like blast-waves from laser ablation, surface acoustic waves, and surface vibrations in the near-field. Further, since velocity is a vector-quantity, the microsphere could be useful in sound-source localization, opening the door to several applications. Applications of high-bandwidth acoustic velocity sensing could include locating where a firearm has been discharged, real-time monitoring in proton-therapy for cancer treatment [65; 66], and event discrimination in bubble-chamber searches for dark matter [67; 68; 69]. ## V Conclusions By monitoring an optically-trapped microsphere's instantaneous velocity, we infer fluid flow of sonic, ultrasonic, and impulsive perturbations in air. We validate the accuracy of our technique by comparing tone-burst measurements made with two commercially-available devices, a high-bandwidth pressure microphone and a dual-hot-wire anemometer -- the Microflowan -- which measures acoustic velocity. We then test the bandwidth of our sensor by exposing it to impulsive test sounds generated by laser ablation. Beyond the direct extensions mentioned in the previous section, we hope this work inspires other sensing protocols enabled by the resolution of a Brownian particle's instantaneous velocity. ###### Acknowledgements. We thank Neal Hall for several useful discussions. ## Appendix: Sound detection results for various source distances
2309.04382
Emergent learning in physical systems as feedback-based aging in a glassy landscape
By training linear physical networks to learn linear transformations, we discern how their physical properties evolve due to weight update rules. Our findings highlight a striking similarity between the learning behaviors of such networks and the processes of aging and memory formation in disordered and glassy systems. We show that the learning dynamics resembles an aging process, where the system relaxes in response to repeated application of the feedback boundary forces in presence of an input force, thus encoding a memory of the input-output relationship. With this relaxation comes an increase in the correlation length, which is indicated by the two-point correlation function for the components of the network. We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, which is a typical feature of glassy systems. This physical interpretation suggests that by encoding more detailed information into input and feedback boundary forces, the process of emergent learning can be rather ubiquitous and, thus, serve as a very early physical mechanism, from an evolutionary standpoint, for learning in biological systems.
Vidyesh Rao Anisetti, Ananth Kandala, J. M. Schwarz
2023-09-08T15:24:55Z
http://arxiv.org/abs/2309.04382v2
# Emergent learning in physical systems as feedback-based aging in a glassy landscape ###### Abstract By training linear physical networks to learn linear transformations, we discern how their physical properties evolve due to weight update rules. Our findings highlight a striking similarity between the learning behaviors of such networks and the processes of aging and memory formation in disordered and glassy systems. We show that the learning dynamics resembles an aging process, where the system relaxes in response to repeated application of the feedback boundary forces in presence of an input force, thus encoding a memory of the input-output relationship. With this relaxation comes an increase in the correlation length, which is indicated by the two-point correlation function for the components of the network. We also observe that the square root of the mean-squared error as a function of epoch takes on a non-exponential form, which is a typical feature of glassy systems. This physical interpretation suggests that by encoding more detailed information into input and feedback boundary forces, the process of emergent learning can be rather ubiquitous and, thus, serve as a very early physical mechanism, from an evolutionary standpoint, for learning in biological systems. ## I Introduction Given the prevalence of emergent behavior, physicists, computer scientists, and biologists have long asked whether or not some subset of emergent behavior results in the capacity of a system of many interacting components to learn, i.e., to have intelligence [1; 2]. While there has been much focus on looking for emergent learning in brain-like systems, such as neuronal networks in biology or artificial neural networks in physics and computer science, recent research has demonstrated that simple physical systems, such as a spring network, have the potential to exhibit learning behavior similar to that of artificial neural networks [3; 4; 5; 6; 7; 8; 9]. In this context, learning refers to the ability to modify the properties of a physical system by adjusting its learning degrees of freedom in order to more efficiently achieve some task. For example, in a spring network, the spring stiffness and rest lengths represent the learning degrees of freedom, while the nodes of the springs correspond to the usual physical degrees of freedom. In these physical learning systems, once input boundary nodes, output boundary nodes, and a cost function are all chosen, the learning process is composed of two steps: 1. _Signaling_ : The system's response to a given input is compared with the desired output and an update signal is sent which provides information on the necessary adjustments to each learning degree of freedom, so that the system's response aligns more closely with the desired output. 2. _Weight update_ : Each learning degree of freedom, or weight, is updated in response to the update signal. This weight update should allow the system to perform gradient descent. The two steps are repeatedly applied to train the system to learn. The major challenge in applying this algorithm is to find physical processes that implement the above two steps. While methods such as Equilibrium Propagation (EP) [4], Multi-mechanism Learning (MmL) [3; 5], and Coupled Learning (CL) [6] have made strides in addressing this challenge, they are not entirely physical in nature. In particular, the learning stages involved, _Signaling_ and _Weight update_, require artificial modifications to the physical system. 
For instance, in EP and CL, to send the gradient information into the system, one needs to store the free state in some memory, which is not possible in typical systems such as spring networks or resistor networks. In our previous work unveiling MmL, we demonstrated that this issue of memory storage could be addressed by encoding the feedforward and feedback signal into two non-interfering physical quantities [3; 5]. Despite this demonstration, however, a significant problem remains: we do not know of any physical process that can update the weights in the system. To physically implement weight updates, recent experimental efforts have resorted to using complex components such as transistors in the training of electrical networks [10; 11], and actuators and sensors in mechanical networks [12]. Yet, the reliance on such intricate and varied tools introduces challenges in terms of scalability and robustness in these approaches. Here, we explore the central question: Do the effects of the weight update procedure resemble any natural physical phenomena? The answer to such a question will point us in the direction of a fully physical learning system, weight update included. To begin to answer this question, we train linear physical networks and investigate how the physical properties of this system change, given the weight update rule. Our manuscript consists of revisiting our MmL training procedure, as detailed in our prior work [3; 5], in a general manner that emphasizes its physical plausibility. We then review the specifics of multi-mechanism learning, followed by details of what we measure as well as data generation and network generation. Results are then presented. We conclude with a discussion of the impact of our results.
Figure 1: _Training linear networks to learn linear transformations._ [1a & 1b]: _Network undergoes trimming_. A network with 40 nodes and 390 edges is trained to learn a linear transformation of size \(10\times 10\). Weights of the network are uniformly sampled from \([10^{-5},0.2]\). The colorbar on the right shows the weight value of each edge. [1c] _Non-exponential relaxation_: Training curve for the case shown in 1a and 1b but for 50 different initializations (shown in green). The y-axis shows the error, defined as the square root of the mean-squared error; the x-axis shows the epoch. In one epoch the network goes through 100 data points. All green curves are obtained after normalization with their respective initial errors. The blue curve shows the average over these 50 runs. The blue curve is fit to a non-exponential curve of the form \(a+be^{-\lambda\cdot t^{\beta}}\). Fit parameters are shown in the legend. \(\beta>1\) shows that the relaxation exhibits compressed exponential behaviour. The sum of squared residuals (SSR) is used to assess the goodness of fit; it is defined as: \(\text{SSR}=\sum_{i=1}^{n}(y_{i}^{fit}-y_{i}^{data})^{2}\). [1d] _Eigenvalues decrease while learning_: Eigenvalues of the graph Laplacian before and after training for the runs shown in 1c. These initial and final eigenvalues are averaged over those 50 runs. The eigenvalues are sorted in increasing order. The x-axis shows the eigenvalue index. The network has 40 nodes so there are 40 eigenvalues. [2a to 2d] These plots show the training performance for a network with a smaller number of edges (78 edges), due to which it does not learn well. When compared with case 1, we see that trimming is less prominent and the eigenvalues do not decrease. 
The training curve shows a stretched exponential relaxation (\(\beta<1\)) and saturates well above zero error. [3a to 3d] _Training on random data_: Networks initialized with the same parameters as in 1a are trained on randomly generated data. No trimming is observed, the eigenvalues increase over training, and the error curve does not decrease with the number of epochs.
## II The Learning Process
We now demonstrate the process of physical learning within our system. Initially, we impose an input boundary condition, denoted by \(I\). The system's response is then captured by the Laplace equation \(Lv=I\), where \(L\) is the Laplacian, which depends on the learning degrees of freedom \(w\), and \(v\) is the state of the system. To attain its intended functionality, the system needs to update \(w\) to minimize the cost function \(C(v(w))\). We encode the cost function as an interaction energy between the system and the environment. This energy causes a feedback boundary condition of the form \(-\eta\dfrac{\partial C(v)}{\partial v}\) to act on the system, due to which the state of the system evolves along a direction that decreases \(C(v)\): \[L(v+\delta v)=I-\eta\dfrac{\partial C(v)}{\partial v}. \tag{1}\] For a mechanical network, these input and feedback boundary conditions are applied as external stresses on the system. When the feedback stress is removed, the system tends to revert to its initial state \(v\). However, with continuous exposure to feedback boundary forces, there is a lasting change in the system's learning degrees of freedom. This change is akin to a plastic deformation in materials where repeated stress leads to permanent alterations. Note that unlike the input boundary condition, the feedback boundary condition is a function of the state of the system. As a result, there exists an optimal state where the system experiences minimal feedback stress. Our hypothesis is that, through repeated application of these feedback stresses, the system's learning parameters \(w\) evolve such that this optimal state is reached. The objective of this evolution is to minimize the external stress \(-\eta\dfrac{\partial C(v)}{\partial v}\), by changes in the state of the system \(v\), through changes in \(w\). This adaptation is represented as: \[\Delta w_{ij}=-\alpha\eta\dfrac{\partial C(w)}{\partial w_{ij}}, \tag{2}\] where \(C\) is a function of \(w\) via \(C(v(w))\). In our previous work [3], we showed that the above weight update rule can be written purely in terms of local physical quantities \[\Delta w_{ij}=-\alpha v_{ij}\delta v_{ij}, \tag{3}\] where \(w_{ij}\) is the weight connecting nodes \((i,j)\), \(v_{ij}\) is the potential drop \(v_{i}-v_{j}\), and \(\delta v_{ij}\) is the change in this potential drop due to feedback [2]. Intriguingly, this learning rule exhibits a Hebbian-like behavior. The input and feedback boundary conditions encode a particular type of information, and given that they are applied repeatedly, a parallel with memory formation in driven disordered systems seems plausible [13]. For example, in granular systems, the particles rearrange in response to a particular sequence of driving amplitudes [14; 15]. 
Additionally, if the network topology is fixed, then the learning degrees of freedom are updated much like in the context of directed aging [16; 17], keeping in mind that the update rule in our system depends on both the feedforward signal (\(v_{ij}\)) and the feedback signal (\(\delta v_{ij}\)), rather than a reduction of spring constants over time based on the stress experienced by a particular spring. Due to the evolution of the learning degrees of freedom, once reaching steady state, the system's response is: \[L^{\prime}(v+\delta v)=I, \tag{4}\] where \(L^{\prime}\) is the updated Laplacian that encodes the memory of the feedback stress by adapting to it, i.e., \(C(v+\delta v)<C(v)\). In summary, the learning process goes as follows. An input is introduced to the system as an external force. Subsequently, based on the system's reaction to this input, feedback forces are consistently applied. We postulate that such a process enables the system to adapt and become attuned to these feedback boundary forces. This continuous adaptation to feedback forces, in the presence of the input, ingrains a memory of the input-output relationship within the system. This concept is elucidated further in the subsequent section. ## III A Brief Review of Multi-Mechanism Learning We study a network comprised of nodes connected by weighted edges. Let us represent the weight of the edge between node \(x\) and node \(y\) as \(w_{xy}\), which could signify conductances in an electrical network, spring constants in a mechanical spring network, or pipe thickness in a flow network, etc. **Input Nodes:** An "input" node pair is a pair of nodes \((b_{j}^{+},b_{j}^{-})\) such that an input current \(I_{j}\) enters the network via node \(b_{j}^{+}\) and exits through \(b_{j}^{-}\). (For mechanical networks, input currents can be thought of as external forces acting at the input nodes.) Let there be \(q\) such input node pairs in the network, denoted by \(\{(b_{1}^{+},b_{1}^{-}),(b_{2}^{+},b_{2}^{-}),\ldots,(b_{q}^{+},b_{q}^{-})\}\). **Output Nodes:** In response to the input currents, the system develops an electric potential at each node. The network's output is defined to be the set of potential differences across certain "output" node pairs, obtained as \(v(o_{i}^{+},o_{i}^{-})=v(o_{i}^{+})-v(o_{i}^{-})\) for each output node pair \((o_{i}^{+},o_{i}^{-})\). Let there be \(p\) such output node pairs in the network, represented as \(\{(o_{1}^{+},o_{1}^{-}),(o_{2}^{+},o_{2}^{-}),\ldots,(o_{p}^{+},o_{p}^{-})\}\). **Cost Function:** The goal of training is to adjust the weights \(\{w_{xy}\}\) so that for a given set of input currents, the desired potential drops \(\{v_{d}(o_{i}^{+},o_{i}^{-})\}\) are achieved across all the output nodes. We employ a Mean Squared Error (MSE) cost function: \[C=\frac{1}{2}\sum_{i=1}^{p}(v(o_{i}^{+},o_{i}^{-})-v_{d}(o_{i}^{+},o_{i}^{-}))^{2}. \tag{5}\] **Feedback Mechanism:** To optimize this cost function, we introduce a feedback signal into the network at the output nodes. For each output node pair, the feedback current is calculated as: \[\epsilon_{i}=-\eta(v(o_{i}^{+},o_{i}^{-})-v_{d}(o_{i}^{+},o_{i}^{-})). \tag{6}\] This current enters the network through node \(o_{i}^{+}\) and exits via \(o_{i}^{-}\), with \(\eta\) being a positive "nudging" factor. The feedback currents change the potentials at each node; let the change in the potential at node \(j\) be denoted by \(u_{j}\). 
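To make this feedback step concrete, the following is a minimal numerical sketch of one feedback-and-update step (Eqs. (3), (5), and (6)) on a toy five-node resistor network. This is our illustration, not the authors' code: the network size, the choice of input and output node pairs, and all parameter values are assumptions made purely for demonstration.

```python
import numpy as np

def laplacian(W):
    """Graph Laplacian L = D - W for a symmetric conductance matrix W."""
    return np.diag(W.sum(axis=1)) - W

def solve_potentials(L, currents, ground=0):
    """Solve L v = I with node `ground` held at zero potential (L alone is singular)."""
    n = L.shape[0]
    keep = [i for i in range(n) if i != ground]
    v = np.zeros(n)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], currents[keep])
    return v

rng = np.random.default_rng(0)
W = rng.uniform(1e-5, 0.2, size=(5, 5))           # toy conductances in the paper's weight range
W = np.triu(W, 1); W = W + W.T
L = laplacian(W)

# One input pair (b+, b-) = (1, 2) carrying current x; one output pair (o+, o-) = (3, 4).
x, v_desired, eta, alpha = 1.0, 0.3, 0.1, 0.05
I = np.zeros(5); I[1], I[2] = +x, -x

v = solve_potentials(L, I)                        # free response, L v = I
v_out = v[3] - v[4]
cost = 0.5 * (v_out - v_desired) ** 2             # Eq. (5)
eps = -eta * (v_out - v_desired)                  # feedback current, Eq. (6)

I_fb = I.copy(); I_fb[3] += eps; I_fb[4] -= eps   # feedback enters at o+, exits at o-
u = solve_potentials(L, I_fb) - v                 # change in node potentials due to feedback

# Local update of every edge, cf. Eq. (3): dw_xy = -alpha * v_xy * u_xy.
dv = v[:, None] - v[None, :]
du = u[:, None] - u[None, :]
mask = ~np.eye(5, dtype=bool)
W[mask] = np.clip(W[mask] - alpha * (dv * du)[mask], 1e-5, 0.2)  # keep weights in [1e-5, 0.2]
```

Iterating this single step over a stream of input-output pairs is, in essence, the training loop whose relaxation dynamics are analyzed below.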
**Weight Update Rule:** The weights are then updated as: \[\Delta w_{xy}=-\alpha u(x,y)v(x,y), \tag{7}\] where \(\alpha\) is the learning rate. This rule effectively performs gradient descent on the cost function: \[\Delta w_{xy}=-\alpha\eta\frac{\partial C}{\partial w_{xy}}. \tag{8}\] **Considerations:** The weight update is local, and its sign depends on the potential drops due to input and feedback. We assume the system's relaxation time is much shorter than the weight update time, ensuring a steady state during weight adjustments. The two quantities in the weight update must be independent. This can be ensured by encoding them into distinct physical quantities [5]. (Further details on the learning procedure and its physical implementation are given in Ref. [3]). For larger networks, a higher learning rate is necessary to maintain the magnitude of weight changes. To address this, we conduct a trial run for one epoch, adjusting the learning rate to ensure \(||\Delta w||\approx 10^{-3}\). Additionally, we impose regularization by (1) limiting each weight update, \(|\Delta w_{xy}|<\epsilon\), and (2) constraining weight values, \(w_{min}\leq w_{xy}\leq w_{max}\). This ensures a smooth training process and prevents weights from becoming too large or too small. In our simulations, we set \(w_{min}=0.00001\), \(w_{max}=0.2\), and \(\epsilon=0.01\). ## IV Methodology **Network Generation**: We aim to create networks consisting of \(N\) nodes, with a varying number of edges \(M\). For this, we first create a Barabasi-Albert network with connection parameter 1. This graph generation algorithm connects each new node to one existing node such that nodes with higher degree have a stronger likelihood of being selected. This creates a network with \(N\) nodes and \(N-1\) edges. To create a network with \(M\) edges, we then add \(M-(N-1)\) unique edges. This way, we can create networks with varying connectivity, ranging from being minimally connected to being maximally connected. Note that it is highly unlikely to create such minimally connected networks using the Erdos-Renyi model. The generated networks are then trained on data generated using a linear transformation. Note that in spite of using linear networks to learn linear transformations, the optimization needs to take place in a cost landscape which is non-convex, high-dimensional, and disordered. **Data Generation**: The input vector \(\mathbf{x}\) (e.g., \((x_{1},x_{2},x_{3})\)) is encoded as external currents across input nodes \(\{(b_{1}^{+},b_{1}^{-}),(b_{2}^{+},b_{2}^{-}),(b_{3}^{+},b_{3}^{-})\}\) with currents \(+x_{q}\) and \(-x_{q}\) applied across nodes \(b_{q}^{+}\) and \(b_{q}^{-}\) respectively. The output vector \(\mathbf{y}\) (e.g., \((y_{1},y_{2},y_{3})\)) is the potential drop across the output nodes \(\{(o_{1}^{+},o_{1}^{-}),(o_{2}^{+},o_{2}^{-}),(o_{3}^{+},o_{3}^{-})\}\). When the network is trained, we want the network's output to closely approximate the action of the matrix \(R\), that is, we want \(\mathbf{y}\approx R\mathbf{x}\). To do so, we first generate training data of the form \(\{(\mathbf{x},R\mathbf{x})\}\) by randomly sampling \(\mathbf{x}\) from the surface of a unit sphere, and train the network using the procedure described in the previous section. To shorten the training time, we want the magnitude of the output \(\mathbf{y}\) to be of the same order as that of the input, therefore we make sure that the maximum eigenvalue of \(R\) is close to one. 
We do this by first generating an arbitrary matrix \(R^{\prime}\) with random entries between -1 and 1, and then normalizing it by its maximum eigenvalue: \(R=R^{\prime}/\max\{\mathrm{eig}(R^{\prime})\}\). Input and output data are generated using this matrix \(R\). The network is trained using this ideal data, meaning each training step sees an entirely new data point. In the computer science community, this type of task is known as linear regression. ## V Results Figures 1.1a and 1.1b show the network before and after training for a network with \(N=40\) and \(M=390\). The intensity of the color indicates the magnitude of the weight; note that many of the weights of the trained network reach the minimum value. In other words, there is a trimming effect, where only the important edges remain. To ascertain whether or not the network has learned the linear transformation, we plot the square root of the mean-squared error in Fig. 1.1c as a function of epoch. Given that the error nearly vanishes at longer epochs, this network has successfully learned the task. This shows the dynamics through which the system relaxes to the feedback boundary forces due to the evolution of the learning degrees of freedom. We also performed a phenomenological fit for this curve. The curve is well-approximated by a non-exponential relaxation of the form \(\sqrt{MSE}=a+b\exp(-\lambda\,t^{\beta})\), where \(a\), \(b\), \(\lambda\), \(\beta\) are the fit parameters and \(t\) denotes the epoch number. Interestingly, these dynamics are quantitatively similar to what is observed in molecular glassy systems [19]. This finding demonstrates the existence of a glassy landscape. Appendix A addresses the reasonableness of this non-exponential fit. We seek to quantify further the relaxation of the system as it learns. We, therefore, compute the eigenvalues of the Laplacian matrix. Figure 1.1d shows how learning results in decreasing Laplacian eigenvalues. Note that the normal-mode frequencies are the square roots of these eigenvalues. Decreasing eigenvalues is evidence that the network is getting "softer" as the normal mode excitations become longer in wavelength. This observation demonstrates that the network moves from a state of stress to one of less stress due to the repeated application of feedback boundary forces. The network is, thus, "adapting" to these feedback forces, indicating a transition towards a state that encodes a memory of the input-output relationship. Additionally, this behavior draws a parallel to the self-organization observed in periodically sheared suspensions, where the system adapts to the periodic driving in a similar manner [14].
Figure 2: _Learning performance with overparametrization._ The error curve is fit to \(a+be^{-\lambda\,t^{\beta}}\) for networks with varying numbers of edges and the fit parameters are plotted (error bars are calculated using the diagonal terms of the covariance matrix). The Tuning Parameter (\(TP\)) serves as a metric to quantify the degree of connectivity in a network. Specifically, it is calculated by taking the ratio of the number of edges \(M\) present in the graph to the number of edges that would exist in a fully connected network with the same number of nodes. (a) We observe that after adding a certain number of edges, the saturation value of the error curve begins to asymptote to zero. (b) We also observe that the exponent \(\beta\) increases from less than one to greater than one, showing a shift from stretched exponential to compressed exponential relaxation. 
(c) The \(\lambda\) value also becomes very small after adding a certain number of edges. We have done a fit robustness analysis for these plots in Appendix A. (In Fig. 1, the 390- and 78-edge networks correspond to a \(TP\) of 0.5 and 0.1, respectively.)
Moreover, when amorphous solids, modeled as purely repulsive particles in the jammed phase, are shear-stabilized by minimizing the energy with respect to the shear degrees of freedom, one finds longer wavelength excitations emerging [20]. Finally, recent work demonstrates that using a similar multiplicative learning rule as given in Eq. 7 to train physical networks to learn linear transformations also shows a decrease in the lowest eigenvalues of the Hessian [21]. Appendix B shows that the trends hold for larger system sizes. Figures 1.2(a-d) show the same quantities as Figure 1.1, but for a network with \(N=40\) and \(M=78\). Given the smaller number of learning degrees of freedom, a network with this architecture does not successfully learn, as indicated by the square root of the mean-squared error not decreasing to zero as the number of epochs increases. Moreover, the eigenvalues of the Laplacian do not decrease and so the system does not relax, or soften. For comparative purposes, we also train the network to learn, if you will, random data. Figs. 1.3a to 1.3d show the physical effects of learning random data. Here, the system, exposed to random input and feedback boundary conditions, does not relax, as indicated by the unchanged initial and final eigenvalues. With random input-output forces, the weight update signal in Eq. 3 averages to zero due to the absence of correlation between \(v_{ij}\) and \(\delta v_{ij}\). This null result suggests that the system's relaxation is driven by correlations between the input and feedback boundary conditions, and occurs only for certain network architectures. Given the nontrivial dependence of learning on the network architecture, we further extend our analysis by incrementally increasing the network connectivity to examine the implications of overparametrization (see Fig. 2). We denote the ratio of the number of edges \(M\) to the number of edges in the fully connected equivalent network as \(TP\) for tuning parameter. The results indicate that as more edges are introduced, the cost landscape becomes steeper due to a reduced number of flat directions [22], leading to accelerated relaxation and enhanced learning performance. Notably, a parallel can be drawn with glasses; in these systems, increased connectivity also speeds up relaxation dynamics [23; 24]. Both these studies, as well as ours, show a shift in relaxation dynamics from a stretched to a compressed exponential upon increasing connectivity. This further underscores the intrinsic link between learning processes and relaxation in disordered systems. Given the changes in the weights as the network learns, in Fig. 3, we examine the relationship between trimming, eigenvalue reduction, and network connectivity. As network connectivity increases by increasing \(TP\), the fractional eigenvalue decrease tends to plateau, reaching a saturation point around \(TP\approx 0.3\). A comparison of Fig. 3(a) and Fig. 2(a) reveals a notable correlation: the point of eigenvalue saturation aligns with the disappearance of saturation error. This suggests a fundamental link between the processes of learning and eigenvalue reduction. Furthermore, Fig. 3(b) underscores the ubiquity of the trimming effect across networks of varying connectivity. 
Notably, the magnitude of the trimming effect intensifies as network size grows.
Figure 3: _Eigenvalue decrease and trimming with overparametrization_. (a) Shows the fractional decrease in the sum of eigenvalues due to learning, averaged over 50 runs. (b) Shows the fractional decrease in the number of effective weights due to learning, averaged over 50 runs. Here, the term 'effective weights' refers to those weights that fall within the top 99 percent of the permissible weight value range (\([10^{-5},0.2]\)).
Figure 4 illustrates the evolution of the resistance distance distribution during the learning process. In an electrical network, the effective resistance between two nodes can be interpreted as a measure of distance (more details in Appendix C). By calculating the average distribution of resistance distances over all possible pairs of nodes, a two-point correlation function \(p(r)\) can be derived, which can be extended to spring and flow networks as well (a short computational sketch of this quantity is given below). As learning progresses, we observe a broadening of the two-point correlation function, indicating that the average conductance between two arbitrary nodes decreases. This phenomenon is analogous to a reduction in "stiffness" in elastic networks, as the system becomes softer during learning.
## VI Discussion
In summary, in learning about the physical signatures of multi-mechanism learning, we find that:
1. The error curve for networks with low connectivity resembles a stretched exponential. However, as network connectivity increases, the error curve transitions to a compressed exponential form (Fig. 2).
2. Eigenvalues of the graph Laplacian decrease with epoch and long wavelength modes are generated (Fig. 1).
3. The network undergoes trimming, i.e., many of the weights go to zero (Figs. 1 & 3).
4. The two-point correlation function for the network broadens while learning (Fig. 4).
Figure 4: _Resistance Distance Distribution and Learning._ (a) The figure showcases the average resistance distance distribution, \(p(r)\), during learning, with the x-axis denoting resistance magnitude and the y-axis its normalized frequency. This is averaged over 50 network initializations. The inset illustrates the outcome when the network is trained on random data (note that the scale in the inset differs, making the initial distributions appear distinct, though they are identical). (b) Represents a network with suboptimal learning performance due to a limited number of edges.
The non-exponential relaxation indicates the presence of a glassy learning landscape. In such a landscape, many local minima exist, thereby allowing multiple memories to form. Interestingly, in prior work the optimization landscapes of Deep Neural Networks (DNNs) were compared with those of glasses [22; 25]. A fundamental distinction was observed: while glasses exhibited slow relaxation over extended timescales, DNNs did not manifest such slow relaxation at long times. This discrepancy was hypothesized to arise from the overparametrization inherent to DNNs. Our findings, as illustrated in Fig. 3, corroborate this hypothesis. We demonstrate that even in physically disordered systems, increasing network connectivity can eliminate slow relaxation. This further suggests a potential SAT-UNSAT transition [26] in these physical learning systems. 
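As promised above, here is a minimal sketch of how the resistance-distance distribution underlying Fig. 4 can be computed. It uses the standard identity \(r_{ij}=L^{+}_{ii}+L^{+}_{jj}-2L^{+}_{ij}\), where \(L^{+}\) is the Moore-Penrose pseudoinverse of the graph Laplacian; the network size and histogram binning are illustrative assumptions, and this is our sketch rather than the code behind Appendix C.

```python
import numpy as np

def resistance_distances(W):
    """Pairwise effective resistances r_ij = L+_ii + L+_jj - 2 L+_ij,
    where L+ is the pseudoinverse of the graph Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2.0 * Lp

# Toy 40-node network with conductances drawn from the paper's weight range.
rng = np.random.default_rng(1)
W = rng.uniform(1e-5, 0.2, size=(40, 40))
W = np.triu(W, 1); W = W + W.T

R = resistance_distances(W)
r = R[np.triu_indices_from(R, k=1)]                      # all distinct node pairs
p_r, bin_edges = np.histogram(r, bins=50, density=True)  # the two-point correlation p(r)
```

Recomputing \(p(r)\) over the course of training and watching it shift and broaden is, in essence, the diagnostic reported in Fig. 4.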
Experiments on directed aging [16] reveal that materials subjected to external stress undergo alterations in their physical properties, and by meticulously controlling the application of this external stress, one can tailor a material to exhibit specific desired properties. A trivial example of this principle in action can be observed with a simple piece of paper. If one aims to create a material with a negative Poisson's ratio, the paper can be crumpled into a ball. When this crumpled paper ball is stretched horizontally, it also expands vertically for small strains, indicating a negative Poisson's ratio. To capture the essence of this behavior, previous studies have introduced a model where springs decrease their spring constants over time based on the local stress experienced by each spring [17; 27]. We posit that the model detailed in Section II offers a comprehensive explanation for this phenomenon. This is because directed aging, at its core, can be viewed as an adaptation to external stresses. Additionally, we believe that this approach can potentially explain adaptation to external driving that was observed in particulate systems, keeping in mind there are differences in memory formation between unjammed and jammed systems [14; 15; 18; 28]. Moreover, the softening of the system as it learns and the associated increase in the correlation length suggest that the system is indeed relaxing into the imposed boundary conditions that encode the linear transformation. However, these boundary conditions contain more information than a simple scalar quantity such as a strain amplitude [13]. If the environment is simple enough, then the physical system can learn. However, given too complex an environment, it may not be able to learn. Of course, we have restricted ourselves to linear networks. Nonlinear networks enhance the learning capability, as has been clearly shown in ANNs and even in mechanical networks [29]. Neuromorphic researchers have been actively seeking physical counterparts to facilitate autonomous weight updates. This pursuit has led to the development of physical learning systems utilizing memristors [30], nanoscale devices [31], and transistors [32]. However, the intricate design requirements for each component present challenges in terms of robustness and scalability. We propose that soft materials might offer a more streamlined solution. These materials inherently exhibit self-adjustment to external conditions, as evidenced by the self-organization of granular systems in response to external driving [14; 15; 27] and the adaptability of other disordered systems to external strain [16; 17]. Consequently, they emerge as promising candidates for crafting physical learning systems. Moreover, the model introduced in Section III provides insights into a potential training methodology for soft materials, be it particulate-based, such as a granular learner, where the topology of the system can change, or spring-based, such as a spring-network learner, where the topology of the network is fixed. By iteratively applying input and feedback boundary forces, the learning parameters can autonomously adapt to these forces to optimize a cost function. This approach paves the way for the creation of innovative disordered materials with neural network-like learning potential. We aim to validate this concept in our forthcoming research. 
Finally, by using multi-mechanism learning to train physical networks to learn linear transformations, we demonstrate a simple, brain-like task in a typically non-brain-like material. Since brains began to emerge several hundred million years ago in planarians [33], physical learning mechanisms are ripe candidates for how life learned to survive in its environment before planarians. We, therefore, seek to validate such mechanisms in pre-planarian organisms. The authors thank Benjamin Scellier, Arvind Murugan, Eli Hawkins, Shabeeb Ameen and Samuel Ropert for helpful discussions. JMS acknowledges financial support from NSF-DMR-2204312.
2305.19760
You Can Run But You Can't Hide: Runtime Protection Against Malicious Package Updates For Node.js
Maliciously prepared software packages are an extensively leveraged weapon for software supply chain attacks. The detection of malicious packages is undoubtedly of high priority and many academic and commercial approaches have been developed. In the inevitable case of an attack, one needs resilience against malicious code. To this end, we present a runtime protection for Node.js that automatically limits a package's capabilities to an established minimum. The detection of required capabilities as well as their enforcement at runtime has been implemented and evaluated against known malicious attacks. Our approach was able to prevent 9/10 historic attacks with a median install-time overhead of less than 0.6 seconds and a median runtime overhead of less than 0.2 seconds.
Marc Ohm, Timo Pohl, Felix Boes
2023-05-31T11:45:43Z
http://arxiv.org/abs/2305.19760v1
# You Can Run But You Can't Hide: Runtime Protection Against Malicious Package Updates For Node.js ###### Abstract. Maliciously prepared software packages are an extensively leveraged weapon for software supply chain attacks. The detection of malicious packages is undoubtedly of high priority and many academic and commercial approaches have been developed. In the inevitable case of an attack, one needs resilience against malicious code. To this end, we present a runtime protection for Node.js that automatically limits a package's capabilities to an established minimum. The detection of required capabilities as well as their enforcement at runtime has been implemented and evaluated against known malicious attacks. Our approach was able to prevent 9/10 historic attacks with a median install-time overhead of less than 0.6 seconds and a median runtime overhead of less than 0.2 seconds. Software Supply Chain, Policy Enforcement, Abstract Syntax Trees
## 1. Introduction Modern software lives and thrives from the opportunistic reuse of software components. This is largely fueled by the sheer amount of Free and Open Source Software (FOSS). While the availability of ready-to-use building blocks certainly has its advantages, it also conveys a noticeable risk for security. Effectively, each added software component increases the dependency on unverified code from untrusted developers. Thus, it is no surprise that there is a visible trend for software supply chain attacks through maliciously manipulated software packages (Krishnan, 2018). Even when carefully choosing a software, the use of dependencies is nontransparent to the user. A typical user does not notice if an attacker introduced malicious code somewhere down the supply chain. In order to mitigate these supply chain attacks, we make use of two observations. Firstly, the fact that a prominent attack vector is the introduction of malicious code at the patch level of a package that is automatically updated on the victim's system, and secondly, that it is observed that malicious additions noticeably alter the way a software component works (Krishnan, 2018). Making use of these facts, our approach is based on the continuity of benign software. The overall goal is to prevent the execution of intentionally added malicious code which inherently requires other capabilities than usual. To tackle the problem of using dependencies with unknown capabilities we propose an automated approach. Our approach automatically infers required capabilities, like access to specific modules or functions, by statically analyzing the source code of a package as well as the source code of all of its dependencies. In the first step, our approach infers a set of capabilities based on a trusted version of a package. After updating that package to a newer version, it is run using our patched Node.js interpreter which enforces the established capabilities at runtime. Newly added capabilities will not be accessible this way and hence corresponding code will not be executed successfully. The presented approach is evaluated for caused overhead, exhaustiveness, possible attack surface reduction, and its performance on benign and malicious software. The remainder of this paper is structured as follows. First, we present and discuss related work in Section 2. In Section 3, our methodology and use case are described and Section 4 depicts the corresponding implementation. This is followed by the presentation of results from our experiments in Section 5 and a discussion in Section 6. Lastly, a conclusion is drawn and perspectives for future work are given in Section 7. ## 2. Related Work With increasing amounts of attacks on the software supply chain in recent years, research in that field has been thriving. Several works focused on providing security by identifying malicious software packages in package repositories. 
This has been done both through code analysis (Krishnan, 2018; Krishnan, 2018; Krishnan, 2018) and metadata analysis (Krishnan, 2018; Krishnan, 2018). In addition, Taylor et al. have built a tool that protects against typosquating attacks (Krishna et al., 2017). Further research has specialized this task to not just detect malicious packages in general, but instead aiming to detect malicious updates to previously benign packages, leveraging different kinds of anomaly detection (Beng et al., 2015; Chen et al., 2016; Li et al., 2017). However, even if the mentioned detectors were widely deployed, they would inevitably miss some malicious packages. To account for this fact, there have also been approaches to allow the safe execution of malicious code. Koishbayev and Kapravelos (Koishbayev and Kapravelos, 2017) created a tool that reduces the attack surface of Node.js applications by removing unused code segments and blocking the ability to access built-in modules that are not statically referenced. While this protects against attacks where dependencies abuse certain vulnerabilities in adjacent dependencies, it does not provide protection against malicious packages in general. Vasilakis et al. (Vasilakis et al., 2017) built an alternative module system for Node.js, which allows spawning instances of modules in configurable "compartments" providing different levels of isolation. Only the compartment with the strongest isolation provides protection against malicious packages, but in turn requires extensive configuration for each module and introduces significant runtime overhead. In 2022, Wyss et al. (Wyss et al., 2022) have developed a way to protect users against install-time attacks. Their system _LATCH_ is based around user-defined policies and aims to establish protection in two steps. At first, it is determined whether a package should be installed. To do so, a cloud service installs the suspected package in a sandbox and creates a manifest of so-called "intents", based on the system call trace of the installation process. If this manifest does not match the policy, it may optionally be blocked from installation on the user's machine. Additionally, if the installation is performed, it is run within an AppArmor environment, which also enforces the adherence to the given policy. While this approach seems to work well for the intended purpose, it has some problems. At first, it puts the burden of creating a meaningfully secure policy on the user. An attempt to mitigate this is to provide default policies. However the evaluation results show that there are still cases where installations get blocked by the default policies, in which cases the user would have to adjust them. Furthermore, this approach exclusively provides protection from install-time attacks, but does not provide any protection against runtime attacks, while current research suggests that more than 40 % of software supply chain attacks are triggered at runtime (Li et al., 2017). For protection against runtime attacks, Ferreira et al. (Ferreira et al., 2017) have created a permission system with the intent of allowing individual packages to have individual permissions, which can be applied to packages of the Node.js ecosystem. Package developers have to declare the permissions that their package needs, and the user has to accept the permissions given to a package. At runtime, these permissions are enforced by restricting access to built-in JavaScript modules, as well as certain global objects. 
Results are promising that capability-based approaches are able to reduce attack surface and capable of protecting against real-world attacks. Finer grained capabilities would allow for an even larger proportion of the attack surface to be reduced. The presented approaches rely on the user to create or verify the capability choice. While they might be familiar with such a permission system from platforms like android, relying on the user to choose or verify permissions for a certain package bears risks. Current research suggests that users do not pay attention to requests for certain permissions, and if they do, they do not fully understand the implications (Beng et al., 2015). For these reasons, we present a similar approach, leveraging more granular permissions and a system to automatically infer and enforce policies, without the need of a user to define the application's permissions. In Section 5.6 we will compare our approach and evaluation results to the work of Wyss et al. and Ferreira et al. where applicable. ## 3. Methodology Our main goal is to establish a system that automatically infers required capabilities of software and limits further access based on the principle of least privilege. To this end, we first outline our general use case. This is followed by the research statement providing the general procedure and research questions. Last, we describe the experiments carried out in order to answer the research questions and thus substantiate our proposed solution. ### Use Case We have chosen JavaScript and the Node.js runtime as our ecosystem, as it has been a common ecosystem choice for related work (Ferreira et al., 2017; Chen et al., 2016; Li et al., 2017; Li et al., 2017; Li et al., 2017) and shows the highest number of known malicious packages (Li et al., 2017). It thus meets our criteria of comparability to other research in the same area, as well as practical relevance. We design our approach from the perspective of users who run some software on their computer -- either for their own or hosted for someone else. Consequentially, members of our target group are not necessarily developers but knowledgeable of software operating. The key requirements for the user are running and updating the chosen software which shall provide uninterrupted productivity. They actively selected the software in question and thus put a certain level of trust in the correctness (and implicitly benignity) of it. That software again might have dependencies in form of other software components from which capabilities are leveraged to implement provided functionality. Overall, the chain of software components is nontransparent to the user, and they might not notice changes in the dependencies and certainly will not track all changes made to the software itself. Thus, while selecting the software, a one-sided trust relationship is established. This relationship is regularly misused by attackers. According to established taxonomies (Beng et al., 2015; Li et al., 2017) they may manipulate an existing dependency in the software supply chain of the targeted software, add a new (malicious) dependency to the targeted software, or directly add new (malicious) functionality to the targeted software. We assume that an attacker is able to identify the set of capabilities a software holds. Without our approach they might choose any dependency of the software to be attacked. When using our approach, they must select a software component suitable for the planned attack. This heavily limits the attack surface. 
We focus on the prevention of execution of malicious code despite how it was added to the targeted software. However, we assume that the malicious code is contained in a future update of the software which the user is currently running. ### Research Statement We develop an approach to automatically infer leveraged program functionalities, i.e., attributes and functions from global objects and built-in modules. The inferred capabilities need to be persisted to a policy file which will be used for the enforcement later on. To do so, we need to locate and understand the import functionality of the Node.js interpreter. Furthermore, that functionality needs to be enhanced in a way that allows it to respect our generated policy. The actual selections and details of the implementation will be presented in Section 4. We evaluate our approach to answer the following research questions: 1. How much overhead is added by the inference and enforcement of capabilities and policies respectively? 2. How exhaustive is the automated inference of capabilities? 3. How many capabilities are typically leveraged per package? 4. To what extent are benign package updates affected? 5. Which historic attacks would have been blocked? Generally speaking, RQ1 evaluates the practical performance during operation and therefore is important to end users. Furthermore, RQ2 assesses how exhaustive, respectively, how complete, the inference of capabilities is. We measure how many capabilities are detected and how many are missed. The answer to RQ3 gauges to what extent our approach might reduce the attack surface of a software by limiting the access to (otherwise allowed) capabilities not listed in the policy. Assuming that most software updates are benign, we want to keep the amount of wrongly prevented code execution as small as possible. To this end, RQ4 estimates the approach's specificity by providing the expected amount of false positives (policy enforcement prevents benign software from execution) and true negatives (benign software is executed). The actual goal of our approach is to prevent the injection of malicious code in a software's updates. Thus, RQ5 determines the sensitivity by measuring true positives (policy enforcement prevents malicious software from execution) and false negatives (malicious software is executed). In order to answer these questions, we conduct several experiments. ### Experiments All experiments are run as "no-human-in-the-loop" style, i.e., all policies are generated and enforced fully automated. Nonetheless, our experiments provide an upper boundary for the approach's shortcomings and a lower boundary for its inference performance. To answer RQ1, we perform five experiments on 200,000 randomly chosen packages. Regarding the first step, the capability inference, we measure the time it takes to infer all capabilities. Regarding the second step, the capability enforcement, we measure the time the source code replacement of global objects consumes, as well as the file sizes of the respective policies to estimate the impact on file load time. Additionally, we perform a theoretical analysis of the enforcement of module and global object restrictions. For RQ2 we use the 200,000 randomly chosen packages again, and compare the dependencies they use to the third party modules our approach is able to infer. As a reference of dependencies a package uses, we consult the list of runtime dependencies in the package's package.json manifest. 
Since the list of dependencies has to be manually maintained by the developer, it is prone to errors. For example, these lists can contain development dependencies incorrectly declared as runtime dependencies, or deprecated dependencies from previous versions that have not been removed from the dependency list. For this reason, we only consider those dependencies whose names also appear within at least one JavaScript file of the package. Additionally, we filter out all dependencies starting with @types, as those are just the type definitions needed to transpile code from TypeScript to JavaScript, and thus won't appear in JavaScript files. In order to quantify how much our approach reduces the allowed capabilities of a program, we enumerate the set of generally available capabilities. We then calculate the fraction of actually required capabilities of 200,000 random packages. This will answer RQ3 by providing the numbers of available and actually required capabilities. The change of required capabilities caused by historic updates is recorded for RQ4. We conducted the experiment on the 1,000 most depended upon software packages from npm because these are most likely to be benign. This allows us to provide an upper bound of false positives and true negatives of our approach. For a set of known malicious packages taken from the Backstabber's Knife Collection (Krause et al., 2017), we generate and enforce the policy on the preceding (last benign) version of the affected package and trigger the malicious behavior of the malicious sample in a sandboxed environment. This allows us to determine whether we are able to prevent these attacks to answer RQ5. Furthermore, a qualitative analysis of preventions is carried out to gain more insight. All experiments are conducted twice: once for the coarse granularity at module level and once for the finer granularity at member level (cf. Section 4). The results of the experiments are presented in Section 5. The source code of our approach as well as the software packages' names used for the experiments are available on GitHub1. Footnote 1: [https://github.com/cybertier/npm-dependency_guardian](https://github.com/cybertier/npm-dependency_guardian) ## 4. Implementation After explaining the general concept of our approach, this chapter presents our reference implementation. We detail the implementation of the policy generation and show how the generated policy is enforced at runtime. ### Policy Generation The first step of the implementation is the generation of the policy. Our policy is a mapping of a package name to its capabilities. Therefore, we first have to make a concrete choice of what exactly our capabilities are. As described by Ferreira et al., Node.js has very limited functionality without importing any of its built-in modules (Borda et al., 2017). Additionally, the modules are inherently grouping abilities that belong together. For example, the fs module allows access to different kinds of file system operations, like reading or writing files. Therefore, we are selecting the built-in modules as one part of the capabilities. Furthermore, JavaScript and the Node.js interpreter expose a set of global objects which also allow access to certain abilities. For example, the Buffer global object allows manipulation of byte buffers, and allows performing various en- and decoding operations. As we are aiming to provide a more fine-grained approach than Ferreira et al., we are also considering global objects to be part of the capabilities. 
Thus, the set of capabilities we are choosing for our implementation is the union of the set of built-in modules and the set of global objects. We also evaluate the feasibility of different capability granularities. For that purpose, we are using our previously selected capability set as the coarse-grained capability set. Since these coarse-grained capabilities are usually groups of different operations, as previously shown with the example of the fs module, the fine-grained capability set will consist of the individual operations within these groups, meaning the members of the module objects. To identify a package's capabilities, we generate an Abstract Syntax Tree (AST) for each file within the package with one of the JavaScript file extensions .mjs, .cjs, or .js, using _acorn_ (Gran et al., 2017). Each file that acorn fails to parse is considered invalid JavaScript and is thus ignored. Given the AST, we extract the used modules and global objects in the coarse-grained setting, and additionally their members in the fine-grained setting. Imported modules are identified as the arguments to calls to either the require or import function or the import statement. Global objects are identified as identifiers satisfying the following three properties: (1) its name is the name of a global object, (2) it references an object in the current scope, and (3) its name was not overwritten by a local variable. In the fine-grained setting, we additionally extract the accessed members of the imported modules and accessed global objects. Member accesses can happen through either array- or object-patterns on the module or global objects, or via member expressions. All of these explicitly list the accessed member in the respective AST node. After extracting the capabilities for every JavaScript file of a package, the package's capabilities are set to the union of all those files' capabilities. The policy contains all the packages in the dependency graph with their corresponding capabilities. In the coarse-grained case, these are just the imported modules and used global objects. The fine-grained policy contains all the capabilities of the coarse-grained policy, and additionally contains all the accessed members of the imported modules and used global objects. An example snippet of a policy is shown in Figure 1. Since Node.js is natively able to parse JSON, the policy is stored in the JSON format. ### Policy Enforcement The generated policy is enforced through a patch in the Node.js runtime. The enforcement is split into two parts. The first part is the enforcement of the module restrictions. This is done by patching the makeRequire function in the file lib/internal/modules/cjs/helpers.js, which is responsible for providing the require function to every loaded module.2 Footnote 2: Policy enforcement for ECMAScript modules is currently not supported as further discussed in Section 6.2.3. In the coarse-grained model, we replace every module that is not contained in the requiring package's allowlist with a dummy object containing the same members as the required module, but where the corresponding value is a function that returns itself when it's called. This way, a package that is not allowed to access a certain module can still require it and call its member functions without crashing, but with no risk of performing malicious actions through that module. 
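Purely to illustrate the coarse-grained dummy-replacement idea just described (the actual enforcement patches Node.js's makeRequire as explained above; the snippet below is a language-agnostic sketch written in Python, and the policy layout it assumes -- a set of allowed module names per package -- is an illustrative simplification of the JSON policy in Figure 1):

```python
class SelfReturningDummy:
    """Stand-in for a module outside the allowlist: every attribute access or
    call yields the dummy itself, so code touching a blocked capability keeps
    running but can never perform a real action through it."""
    def __getattr__(self, name):
        return self

    def __call__(self, *args, **kwargs):
        return self


def guarded_require(module_name, requesting_package, policy, real_require):
    """Coarse-grained check: hand out the real module only if the requesting
    package's allowlist (modeled here as a set of module names) contains it."""
    if module_name in policy.get(requesting_package, set()):
        return real_require(module_name)
    return SelfReturningDummy()
```

Under such a scheme, a forbidden call chain on a blocked module simply collapses into harmless self-returns instead of raising an exception, matching the "no crash, no effect" behavior described above.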
In the fine-grained model we return a copy of the module, where only those members that are not present in the policy are replaced with our dummy function, while the members that are contained in the policy still point to the original module functions. Additionally, if the module itself is callable, it will execute the original function if the module itself is contained in the policy. If the module is not contained in the policy, calling it will execute our dummy function. The second part is the enforcement of global object restrictions. To do this, we inject a new local object into each module, which holds a reference to all global objects that are present in the allowlist. Similar to the module enforcement, references to objects that are not part of the policy will be replaced with dummy objects. Additionally, when the file is loaded, we alter the source code, replacing each reference to a global object with a reference to the corresponding member of our new injected local object. This way, we can ensure that only those global objects present in the allowlist can actually be accessed. Alterations of this process for the fine-grained model happen analogously to the module restriction enforcement. ## 5. Evaluation In this section we present and briefly discuss the results of our experiments as listed in Section 3.3. Figure 1. Example policy. If memberAccessTracing is false, then policyFine may be empty. ### Overhead (RQ1) In order to understand the caused overhead, there are two spots to consider. First, the capability inference _before_ the runtime and second the policy enforcement _during_ the runtime. The capability inference time is measured on 200,000 randomly chosen packages from npm as discussed in Section 3.3. We separately measured our approach with and without member access tracing enabled in order to see its impact on the overhead. In Figure 2 one can see two letter-value plots (Bordes et al., 2017) (logarithmic scale) of the capability inference times in seconds with and without member access tracing enabled. On average, the policy generation took 2.73 seconds without member access tracing and 5.21 seconds with member access tracing. It yielded a standard deviation of 6.08 and 12.32 respectively. The median is at 0.52 and 0.60 and thus much lower than the average. Correspondingly, we found out that 75 % of the policies can be generated within 2.45 or 4.50 seconds and 90 % can be generated within 7.29 and 14.10 seconds. Even the slowest 1 % can be generated within 31.07 and 62.88 seconds. The absolute slowest generation took 288.54 seconds without member access tracing and 618.49 seconds with member access tracing and hence constitutes as a heavy outlier. We were unable to evaluate about 8,200 packages due to installation errors, either because npm has reported an exception when trying to install the package, or was not able to complete the installation within a five-minute time window. Furthermore, it should be noted that the policy generation is performed only once for each version of a software. Thus, the overhead is added at the very first installation and at every update of a software. Assuming that a well written program imports all dependencies at the beginning of the file, our policy enforcement solely causes overhead at the program start. This is because our patched require function is called once for each import and returns the modified module for further use. In case of additional module imports during the runtime, the policy enforcement would be triggered again. 
Either way, policy enforcement of module restrictions, i.e., the pruning of the imported modules according to the precomputed policy, only requires two lookups in an in-memory hash map and, if appropriate, the replacement of module functions whenever a new import happens. Restricting the access to global objects is a bit more involved and requires some preparation as described in Section 4. It requires the generation of an AST, the identification of used global objects, and the dynamic replacement of global object usage within the source code. We measured the added time to a program's startup by performing global object replacement for a whole software, i.e., all JavaScript files within a package and all its dependencies, on the 200,000 randomly chosen packages. On median this procedure adds 198 ms, whereas 90 % of the packages are prepared within 2.67 s. However, there are some heavy outliers that require 59.15 s of preparation time. Per file our approach requires 165 µs on median, while 90 % of the files are processed within 1.12 ms. Global replacement adds a measurable amount of overhead at startup, but it is most often reasonable. Loading a policy from disk to memory greatly depends on the file's size. Thus, we calculated file sizes for the policies of the 200,000 randomly sampled packages. Without member access tracing a policy file is 1.2 kB in size, while 90 % of the files are below 27.69 kB. Even the absolute maximum of 399.57 kB should not have a noticeable impact on the program's startup time. Policies that include member access tracing are larger in file size as they have to convey more information, and are supersets of the policies without member access tracing. The median is at 2.98 kB and 90 % of the files are below 65.10 kB. The absolute maximum increased to 985.08 kB, which still poses no issue. Thus, its total overhead is negligible. Overall, we conclude that our approach adds a reasonable amount of overhead that is outweighed by its security gains. Figure 2. Letter-value plot (Bordes et al., 2017) of capability inference times in seconds for 192,546 randomly chosen packages from npm. Please note the logarithmic scale. **Response to RQ1:** On median our approach requires 0.52 (0.60) seconds to generate a policy. The actual enforcement of the policy adds a negligible overhead of 198 ms to the program's start. Overall, our approach introduces only a small footprint. ### Exhaustiveness (RQ2) To analyze the exhaustiveness, i.e., whether our approach comprehensively infers the set of required capabilities, we performed the experiment as described in Section 3.3. Recall that we compare the set of the detected external modules to the dependencies used by the package. Again, we conducted the experiment on the 200,000 randomly chosen packages from npm. However, 3,311 packages had corrupt meta information and 12 crashed during AST generation. An additional 17,706 packages had invalid JavaScript files, for example incorrectly named .jsv files. From the remaining packages we collected 541,538 dependencies in total. Our tool correctly detected 499,587 (92.25 %) dependencies. Of the 109,198 packages with at least one dependency, we correctly inferred all dependencies for 91,992 (84.24 %) packages. As mentioned in Section 3.3, the validity of these results depends on the correct declaration of dependencies. Even our naive verification process of looking for the dependency name within the package's JavaScript files is not guaranteed to only result in dependencies that are actually used.
Therefore, we manually inspected 100 randomly selected packages where at least one dependency was counted as undetected by our automated approach, to estimate the quality of this dataset. We found that 80 of those packages did not actually use the declared runtime dependency, and the text matches were usually found as variable names or within comments. In the remaining 20 cases we were not able to confirm that the respective module is imported, but from the given occurrence we could also not confidently deny that the module may be imported through some indirections. This highlights that even with our additional verification measures, there are a lot of dependencies that are not actually used. However, it is unlikely that there are dependencies that are used but not declared in the package.json file, as this would in most cases break the package for all users. Therefore, we consider our findings as lower bounds for the amount of correctly found modules. **Response to RQ2:** Our approach correctly detects 92 % of used imports, and for 84 % of packages all imports are correctly detected. ### Reducibility of Capabilities (RQ3) The reduction of permitted code also reduces the attack surface of the software (Krishnam et al., 2017). Thus, we conducted an experiment to measure how many of the available capabilities are not used by a package. To this end, we counted how many capabilities are available to the set of 200,000 randomly samples packages and how many of those capabilities are not included in the package's policy, i.e., not required to run the software. On median 78.82 % of the available capabilities are not used when counting without member access tracing. If member access tracing is enabled, the number of unused capabilities went up to 96.56 %. Furthermore, 99 % of the packages require less than 80.59 % of the available capabilities without member access tracing and 53.15 % with member access tracing. In order to put these numbers into perspective, let's take a look at the Node.js built-in module fs. In total, it offers 101 members, e.g., functions to write, read, or append a file. If one wants to write a file, exactly one member which is called fs.writeFile is required. Consequentially, a large amount of members is unused and hence can be removed without breaking the program. In conclusion, our approach drastically shrinks the available capabilities. If an attacker wants to perform a certain action that requires a certain set of capabilities, they would either need to add these capabilities to the program's policy or infiltrate a project that already includes all the required capabilities. **Response to RQ3:** On median our approach may reduce the capabilities of a software by 78.82 % (without member access tracing) and 96.56 % (with member access tracing). ### Specificity (RQ4) Being able to automatically infer, describe, and enforce capabilities for a software, we will now investigate its performance on benign software updates. Software is updated rather frequently and thus our approach must be able to let updates happen without breaking the application we want to use. Consequentially, we investigated how the set of capabilities per package changes across historic updates. As we do not have a large dataset of known benign packages across all their versions, we conducted this experiment for the 1,000 most depended upon packages from npm according to Section 3.3, assuming that all currently published versions of these packages are benign. 
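As a small illustration of the per-update check behind this experiment (not the evaluation code itself), the following TypeScript sketch diffs two generated policies and classifies the update level; the policy shape and the rough semver parsing are assumptions made for this example.

```typescript
// Which capabilities does the new version require that the old policy does not grant?
type Policy = Record<string, string[]>; // package name -> granted capabilities

function newlyRequired(oldPolicy: Policy, newPolicy: Policy): Map<string, string[]> {
  const added = new Map<string, string[]>();
  for (const [pkg, caps] of Object.entries(newPolicy)) {
    const granted = new Set(oldPolicy[pkg] ?? []);
    const missing = caps.filter((c) => !granted.has(c));
    if (missing.length > 0) added.set(pkg, missing);
  }
  return added;
}

// Rough classification of an update into major/minor/patch (ignores pre-release tags).
function updateLevel(oldVersion: string, newVersion: string): "major" | "minor" | "patch" {
  const [oMaj, oMin] = oldVersion.split(".").map(Number);
  const [nMaj, nMin] = newVersion.split(".").map(Number);
  if (nMaj !== oMaj) return "major";
  if (nMin !== oMin) return "minor";
  return "patch";
}
```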
Furthermore, we distinguish between major, minor, and patch updates according to Semantic Versioning (Krishnam et al., 2017), as well as whether the updates required new modules, new global objects, or both, in order to get more granular insights. As depicted in Figure 3 and Figure 4, 37.5 % (34.1 %)3 of major updates did not require new capabilities. However, if new capabilities were required, 39.4 % (40.9 %) were caused by previously unused modules _and_ global objects. Therefore, 22.4 % (23.6 %) of the updates solely used new global objects and merely 0.7 % (1.4 %) solely used new modules. Footnote 3: Numbers in brackets represent results with enabled member access tracing. For minor updates, 59.8 % (50.4 %) of the updates did not require a change of granted capabilities. Breaking down the newly required capabilities, one can see that 21.9 % (28.8 %) were caused by previously unused global objects. This fraction roughly stayed the same as compared to major updates. However, 1.6 % (2.1 %) were caused by the use of new modules. For patch updates, the vast majority of updates, 85.3 % (79.7 %), did not introduce a change of capabilities. This high number is to be expected as patch level updates should solely concern bug fixes. Thus, they typically do not require new functionality and hence no new capabilities. Still, 14.7 % (20.4 %) of the updates required new modules or global objects. The amount of newly required capabilities drastically reduces from major over minor to patch updates. Overall, the measured changes are in line with the procedure as proposed by Semantic Versioning (Krishnam et al., 2017). The conducted experiment yields an upper bound for the specificity, i.e., false positives and true negatives, of our approach. Each newly required capability would be prohibited by our approach. Arguably, major updates should not be performed fully automatically, as they are expected to not be backwards compatible and hence might break things. Most often, patch updates do not require new capabilities. However, as observed by Ohm et al. (Ohm et al., 2017), malicious code is frequently introduced on patch level, a commonly allowed level for automatic updates, and alters the program's functionality. Our approach would impede such an attack. The evaluation against known historic attacks is discussed in the following section. Moreover, it is not clear from the experiment whether a newly required capability, which hence cannot be used, would actually break the application. It might be the case that this capability is required by a component down in the dependency tree and never required for the functionality of the actual application we are running. **Response to RQ4:** Applying our approach to patch updates is still likely to result in too many false positives. This motivates further improvements. ### Sensitivity (RQ5) Arguably, the most important question is RQ5: whether the presented approach protects against attacks, i.e., malicious updates. According to our use case described in Section 3.1, we are considering previously benign software packages that turn malicious. This may be achieved by adding malicious code to the package itself, adding malicious code to one of its dependencies, or by adding a new dependency containing malicious code (Bordes and Kern, 2017). As there exists no method of formally proving that we will be able to prevent all future attacks, we evaluate whether we would have prevented past attacks on benign packages.
To do so, we need samples of malicious packages that were infected in one of the presented manners. The Backstabber's Knife Collection (Krause et al., 2017) is a curated collection of such samples. It also has a well-maintained package index, which allows querying individual packages by several metadata attributes. Therefore, we use the Backstabber's Knife Collection4 to gather samples for our evaluation. Footnote 4: We are using the commit 22bd768, which was the most up-to-date commit at the time of our evaluation. Packages appropriate for our attack model have to meet the following criteria: 1. Published in the npm registry 2. Infected an existing package 3. Malicious action at runtime 4. Applicable to Node.js Using the package index metadata mentioned above, we can create a query that preselects only packages that meet the first three criteria. However, not all of them are applicable to Node.js, as some of them leverage browser-exclusive JavaScript APIs to perform their malicious actions. This characteristic is not reflected in the dataset's metadata and hence we removed such packages by hand. The remaining packages, as well as the malicious version and the last benign version for reference, are listed in Table 1. To evaluate whether our approach would have prevented an attack, we create a policy for the respective preceding version of the package and afterwards trigger the malicious behavior of the malicious version in a sandboxed environment. We will briefly discuss each sample with regard to whether and why the attack was averted. \begin{table} \begin{tabular}{l r c c} \hline \hline **Package** & **Update** & **Averted** & **Crashed** \\ \hline conventional-changelog & \(1.1.24\to 1.2.0\) & ✓ & ✗ \\ eslint-config-eslint & \(5.0.1\to 5.0.2\) & ✓ & ✗ \\ eslint-scope & \(3.7.1\to 3.7.2\) & ✓ & ✗ \\ event-stream & \(3.3.5\to 3.3.6\) & ✓ & ✗ \\ kraken-api & \(0.1.7\to 0.1.8\) & ✓ & ✓ \\ leetlog & \(0.1.1\to 0.1.2\) & ✓ & ✓ \\ load-from-cwd-or-npm & \(3.0.1\to 3.0.2\) & ? & ? \\ mariadb & \(2.5.6\to 2.13.0\) & ✓ & ✓ \\ opencv.js & \(1.0.0\to 1.0.1\) & ✓ & ✓ \\ rate-map & \(1.0.2\to 1.0.3\) & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1. List of package versions used for our experiment, whether the attack was averted, and whether the program flow crashed because of our interference. Figure 3. Changes of capabilities required _without_ member access tracing, sorted by version level. Figure 4. Changes of capabilities required _with_ member access tracing, sorted by version level. _conventional-changelog_. This package decodes, with the from member of the Buffer global object, a command that is then passed to the spawn member of the child_process module, which downloads and executes a cryptocurrency miner. None of these modules and members had been used in the benign version. Thus, we prevent this attack and do so without interrupting the program's runtime, i.e., no crashing. _eslint-config-eslint & eslint-scope_. The malicious function downloads a payload using the member get of the module http, which is then executed using eval. Again, neither the modules nor the members had been used previously, and the attack would have been prevented without interruption of the runtime. _event-stream_. For this well-known sample, the actual malicious code is present in the newly added dependency _flatmap-stream_. This piece of code first decrypts some AES-encrypted data with the createDecipher member of the crypto module, which results in a string containing JavaScript code.
This code is then evaluated using the constructor method of the module global object. As the _flatmap-stream_ dependency was newly added to the dependencies with the malicious update of _event-stream_, there is no policy entry for it yet, and thus it does not have access to any modules or global objects. Therefore, the attack would have been prevented without crashing the program. _kraken-api_. In this case, a reverse shell would open up using the modules net.socket and child_process.spawn. None of these were used in the benign version and hence the attack was averted by our runtime protection. However, the program crashed because a module called daemon, used to run the reverse shell in the background, could not be imported successfully due to missing capabilities. _leetlog_. The malicious update uses the readDir and appendFile members of the fs module to add an SSH key to the authorized_keys file. The module fs had not been used in the benign version. Thus, this attack would also have been prevented, but the program crashed. _load-from-cwd-or-npm_. This attack aimed at one of its dependents, the _purescript-installer_ package, by returning a PassThrough object from the stream module, resulting in an endless loop in the purescript-installer. stream had not been used before, and thus it would yield our dummy object instead of the PassThrough object. While it is likely that this would not have resulted in an endless loop, it would probably still have led to a crash of the purescript-installer. Since we were not able to trigger the malicious behavior, we do not consider this attack as being prevented. _mariadb & opencv.js_. Both attacks tried to send the environment variables to a server controlled by the attacker by using querystring.stringify and http.request. For both packages these modules and respective members had never been used before. In the case of opencv.js, the member process.env was also unused. Thus, in both cases the attack was averted by our runtime protection, but both times the program crashed. _rate-map_. This attack uses the fs module and its members readFileSync, writeFileSync and existsSync, as well as the global object Buffer.from, to rewrite the source code of an adjacent module and remove its own malicious part of the code. The policy of the previous benign version contains no modules. It does contain some global objects, but not Buffer or any of its members. Therefore, this attack would have been prevented, but our runtime protection caused the program's crash. In conclusion, the proposed approach is able to avert 9/10 of the investigated attacks. Our naive implementation of the mock object, which is returned if a prohibited capability is requested, turned out to be insufficient in most of the cases. We also observed that using the fine-grained policy does not provide any benefit in stopping the given attacks, as none of them used the required modules or global objects in their benign versions. **Response to RQ5:** Our approach would have stopped 9 out of 10 known attacks. Member access tracing does not yield better results. ### Comparison to Related Work Our approach stands in direct comparison with Wyss et al. (Wyss et al., 2017) and Ferreira et al. (Ferreira et al., 2017). While Wyss et al. focus on install-time attacks, we focus on runtime protection. In contrast to Ferreira et al., who leverage a manually defined and coarse capability set, we take a more granular approach based on automatically inferred policies. Nonetheless, some metrics can be compared.
The policies in the approach of Ferreira et al. are declared by the developers in the coarse categories _network access_, _file system access_, _process creation_, and _all_. Wyss et al. and we use automatically inferred policies based on built-in modules and globals. Their policy inference takes less than one minute for 90 % of the packages. Our approach requires 7.29 seconds (without member access tracing) and 14.10 seconds (with member access tracing) for 90 % of the packages. Ferreira et al. measure an overhead of less than 1 % during the runtime of 20 common CLI tools. Wyss et al. found that their approach can enforce their policy in less than a second for 99 % of the packages. Our approach adds 198 ms to the program's start and a negligible amount of time during runtime. In order to get an impression of the false positives, the approaches are tested with benign packages. The approach of Wyss et al. blocks 1.5 % of all packages using a generic and predefined policy. Ferreira et al. did not analyze the performance on benign packages. Our approach is evaluated on the 1,000 most depended upon packages and revealed that only 14.7 % of patch updates would require an update of the policy. All approaches are tested against known attacks. Ferreira et al. replayed three attacks and were able to avert all of them. Wyss et al. analyze 102 samples taken from the Backstabber's Knife Collection, the same source we leveraged. They crafted the policies to match common install-time attacks observed in these samples and hence are able to detect all of them. We executed 9 known malicious packages against policies generated from the previous benign version and were able to avert all of these attacks. Ferreira et al. estimate that they protect 14-33 % of the available packages, as these do not require any permissions. Our experiments show that on median 78.82 % (96.56 % with member access tracing) of the available capabilities are indeed unused. In comparison, our approach seems to be fast and precise. Due to the more granular concept of our policies, we tend to be too sensitive and may also deny benign updates. ## 6. Discussion We will first discuss limitations of our general concept and then follow up with a discussion of our concrete implementation, as well as our evaluations. ### Conceptual Our approach is based on the assumption that the required capabilities of a software do not change over its versions. The evaluation of RQ4 in Section 5.4 has shown that this assumption does not hold in its entirety. However, we have seen that when narrowing the assumption down to only patch updates, the number of newly required capabilities falls drastically. While this may not be the ideal choice yet, more alternatives for a more specific assumption should be explored, for example altering the concrete choice of the considered capabilities. The choice of all modules and global objects as equal capabilities makes sense for a generic use case. However, when thinking about malicious packages, some modules may prove more useful than others. For example, the fs module is useful for common use cases like data exfiltration or overwriting sensitive files, while the v8 module for interacting with certain APIs of the underlying V8 runtime may be less useful for an attacker. Furthermore, the process module provides the ability to create arbitrary processes on the system.
As these processes are usually not subject to our restricted Node.js interpreter, they essentially have all capabilities, meaning that any module with the process capability can do everything. Using a more elaborate approach for the capability choice, for example based on an analysis of their use in malware or their perceived usefulness for it, may thus lead to increased performance. Regarding the detection of capabilities, our choice of static analysis also brings expected limitations, as it is not able to cover dynamic aspects. Related literature like Vasilakis et al. (Vasilakis et al., 2018) suggest that modules will not dynamically need what they do not need statically. ### Proof of Concept Implementation Besides conceptual points, there are also points to discuss in our proof of concept implementation. To this end, we are differentiating between the implementation of the policy generation, and the policy enforcement. #### 6.2.1. Difference to Node.js Built-In Policies Node.js comes with its own experimental implementation of policies (Krishnan et al., 2017) allowing a per-module restriction and redirection of module imports. While these policies allow to implement parts of our approach, they are missing some necessary features. Most notably, they only support the restriction of module imports, but cannot be used to restrict global objects at all. Furthermore, replacing a module with a mock module requires a separate file containing the specific mock module, as well as a separate entry in the policy for every distinct pair of importing module and imported module, while our approach only lists the allowed capabilities, and automatically mocks all capabilities that are not explicitly listed. This is especially problematic, as all pairs that are not explicitly listed will not be replaced with a mock module, but will instead lead to a crash of the program or no action at all -- depending on the default action setting in the policy -- which we explicitly attempt to prevent. For these reasons, we have chosen to design our own policy format, as well as our own policy enforcement. #### 6.2.2. Policy Generation We decided to solely analyze files which have a JavaScript file extension (.mjs,.cjs or.js) for the generation of the AST. However, Node.js allows the execution and import of files with arbitrary file extensions. Aside from that, we do not include _bundle dependencies_ in our scan, even though they are supported by npm. The reason is that they are not included in the package-lock.json file, thus we do not have a mapping of file paths to package names, which we need to add them to our policy. Since only two of the top 1000 most depended upon packages and 0.38 % of the whole npm registry are using bundle dependencies, we do not see this as an urgent issue. However, bundle dependencies could be added in the future, at the cost of a more complex dependency discovery algorithm. #### 6.2.3. Policy Enforcement There are some ways an attacker could circumvent our proposed protection mechanism. Firstly, as our policy enforcement of module restrictions only alters the require function, it does not support ECMAScript Modules (ESMs), and thus any ESM has access to any module. While only about 3.23% of all packages on npm are ESMs, they account for 13.3% of the 1000 most depended upon packages. Therefore, it is desirable to expand the implementation and support ESMs in the future. Furthermore, there are more ways to import code in JavaScript than the standard import and require functions. 
For our proof of concept we focused on these standard procedures. Handling the alternative types of importing code can be added in the future. Lastly, all our protection mechanisms only work for native JavaScript modules. However, Node.js also allows the use of C++ extensions (Krishnan et al., 2017), which do not need to access any modules or global objects, thus circumventing our mechanism in the same way an entirely new process does. ### Evaluation As all of our evaluations were automatically installing packages from the npm registry, we were subject to regular erroneous behavior shown by the npm CLI. We have ignored all packages that npm did not manage to install, or whose installation did not finish within five minutes, as there exist several packages that will put npm into a state of infinite dependency resolution. However, one might argue that such an installation time limit may bias the distribution of successful installations towards smaller packages, as larger packages with more dependencies naturally take longer to install. We argue that any working package should be able to install within five minutes. Regarding RQ3, our reference for all the available properties is only a lower bound for the available capabilities when using member access tracing, as JavaScript does not provide a way to reliably enumerate all the members of an object (Han et al., 2017). Our count of the available capabilities might thus be lower than the actual amount of available capabilities. ## 7. Conclusion & Future Work Software supply chain attacks that utilize maliciously manipulated software modules are on the rise. A good portion of these inject malicious code directly into a software or, more obfuscated, into its dependencies. This code is eventually executed during runtime on the end user's machine. In this paper we present and evaluate a system that automatically infers and enforces capabilities for software based on the principle of least privilege. By doing so, we prevent the execution of untrusted code. For our approach, a capability is understood as access to a certain module of a software or a member of it, e.g., a class and its provided attributes. Through static analysis, we determine the minimal set of modules that are required for a software to run. This information is persisted to a policy file in the form of an allowlist. That policy is used by our approach to enforce access to capabilities. To this end, imported modules are trimmed to their allowed members according to the policy just in time. Our approach supports two modes: fine-grained (with member access tracing) and coarse-grained (without member access tracing). We conduct several experiments on software packages available from the npm package repository for Node.js/JavaScript to validate our approach. The use of our system has a small footprint, i.e., fast generation of policies (0.52 s/0.60 s) and small file size (1.2 kB/2.98 kB) for persistence. Furthermore, it adds only a slight overhead of 198 ms to the program's startup time. Our overall naive approach leverages Abstract Syntax Trees and detects 92 % of a program's capabilities. However, it still causes too many false positives and requires further engineering. Still, our research indicates the general viability of the approach. Once a policy is generated it can be used for any future version of a software.
Benign updates tend to be unaffected by this rigorous allowlist, but malicious updates often required an extended set of capabilities and hence failed to execute. Furthermore, we found that 78.82 % of available capabilities in the coarse-grained mode and 96.56 % in the fine-grained mode are indeed unused. Thus, our approach potentially reduces the attack surface by this share. This means that an attacker would need to infiltrate a package that already has the capabilities required for the malicious actions. Accordingly, attacks on the software supply chain that need to alter the way a software component operates will be prevented. While we successfully demonstrated our concept and a corresponding proof of concept, there is still room for improvement. A lot of shortcomings stem from the use of Node.js/JavaScript for our experiments. Its functionality is so flexible that there always seems to be another way to circumvent our implementation. Nonetheless, our approach may be adapted to other languages easily. For future work we would like to further improve our approach and transfer it to other languages such as Python. Furthermore, weighting capabilities (for instance, eval and child_process are more easily misused than others) or applying some fuzzy logic for "very unusual imports" may yield better results.
2309.06595
Points of convergence -- music meets mathematics
"Phase-locking" is a fundamental phenomenon in which coupled or periodically forced oscillators synchronise. The Arnold family of circle maps, which describes a forced oscillator, is the simplest mathematical model of phase-locking and has been studied intensively since its introduction in the 1960s. The family exhibits regions of parameter space where phase-locking phenomena can be observed. A long-standing question asked whether "hyperbolic" parameters~-- those whose behaviour is dominated by periodic attractors, and which are therefore stable under perturbation~-- are dense within the family. A positive answer was given in 2015 by van Strien and the author, which implies that, no matter how chaotic a map within the family may behave, there are always systems with stable behaviour nearby. This research was a focal point of a pioneering collaboration with composer Emily Howard, commencing with Howard's residency in Liverpool's mathematics department in 2015. The collaboration generated impacts on creativity, culture and society, including several musical works by Howard, and lasting influence on artistic practice through a first-of-its-kind centre for science and music. We describe the research and the collaboration, and reflect on the factors that contributed to the latter's success.
Lasse Rempe
2023-09-12T20:47:36Z
http://arxiv.org/abs/2309.06595v1
# Points of convergence - music meets mathematics ###### Abstract. _Phase-locking_ is a fundamental phenomenon in which coupled or periodically forced oscillators synchronise. The _Arnold family_ of circle maps, which describes a forced oscillator, is the simplest mathematical model of phase-locking and has been studied intensively since its introduction in the 1960s. The family exhibits regions of parameter space where phase-locking phenomena can be observed. A long-standing question asked whether _hyperbolic_ parameters - those whose behaviour is dominated by periodic attractors, and which are therefore stable under perturbation - are dense within the family. A positive answer was given in 2015 by van Strien and the author, which implies that, no matter how chaotic a map within the family may behave, there are always systems with stable behaviour nearby. This research was a focal point of a pioneering collaboration with composer Emily Howard, commencing with Howard's residency in Liverpool's mathematics department in 2015. The collaboration generated impacts on creativity, culture and society, including several musical works by Howard, and lasting influence on artistic practice through a first-of-its-kind centre for science and music. We describe the research and the collaboration, and reflect on the factors that contributed to the latter's success. This is a preprint of the chapter "Points of convergence - music meets mathematics," to appear in "More UK Success Stories in Industrial Mathematics," ed. Philip J. Aston. ## Introduction In the 17th century, the Dutch scientist Christiaan Huygens discovered that two pendulum clocks, coupled by being mounted on the same wooden beam, synchronised their movements. This phenomenon, described by Huygens as "a miraculous sympathy," is called _phase locking_ (also mode locking, or entrainment): interacting periodic oscillators tend to synchronise their movements (with the same period or with periods that are related by an integer multiple). Phase locking is near-ubiquitous in physical oscillators: Examples include the fact that the revolution period and orbital period of the moon are identical and the synchronisation of fireflies. A similar effect occurs when one oscillator is _forced_ by another, for example in bowed musical instruments: When a violin string is plucked by hand, it exhibits non-harmonic overtones; when bowed, all of these overtones are forced onto the harmonic series. A goal of "pure" (i.e., theoretical) mathematics is to investigate interesting phenomena in their most fundamental settings, to understand the underlying principles. In a 1961 paper [1, SS 12], Vladimir Arnold introduced what may be the simplest model for phase-locking behaviour: a family of self-maps of the circle, motivated by the movement (in discrete time-steps) of a forced periodic oscillator. Known as the _Arnold family_ of circle maps, it can be described by the formula \[f_{\alpha,b}(\theta)\coloneqq\theta+\alpha+b\sin(\theta)\pmod{2\pi}. \tag{1}\] The angle \(\theta\) represents a point on the circle \(S^{1}=\mathbb{R}/2\pi\mathbb{Z}\). The number \(\alpha\) is a rotation parameter (also an element of \(S^{1}\)); if \(b=0\), then \(f_{a,b}\) is simply the rotation by an angle of \(\alpha\). The final term in (1) is a forcing term, which must also be a function of \(\theta\), and thus periodic. We use the simplest possible periodic forcing term, namely a multiple of \(\sin\).1 The parameter \(b\in[0,\infty)\) determines the strength of the forcing. 
Footnote 1: Arnold used \(\cos\) instead of \(\sin\), which is equivalent. Let us fix parameters \(\alpha\) and \(b\) and write \(f=f_{\alpha,b}\). Given a starting state \(\theta_{0}\in\mathbb{R}/2\pi\mathbb{Z}\), we think of \(\theta_{1}=f(\theta_{0})\) as determining the state of our dynamical system after one time step. So \(\theta_{2}=f(\theta_{1})\) is the state after two steps, and after \(n\) steps: \[\theta_{n}=f(\theta_{n-1})=f(f(...f(\theta_{0})\dots))\quad(n\text{ times}).\] The sequence \((\theta_{n})_{n=0}^{\infty}\) is called the _orbit_ of \(\theta_{0}\) under \(f\). If \(\theta_{n}=\theta_{0}\) for some \(n>0\), then the (finite) orbit is called a _periodic cycle_; this cycle is _stable_ (under perturbation of \(\theta_{0}\)) if the orbits of all nearby starting values converge to it. Figure 1. Parameter space of the Arnold family. The horizontal and vertical directions correspond, respectively, to the parameters \(\alpha\), ranging from \(-\pi\) to \(\pi\) and \(b\), ranging from \(0\) to \(4\). The critical line \(b=1\) is shown in grey; white regions correspond to stable (hyperbolic) maps. Theorem 1 states that these regions are dense in parameter space; the apparent presence of solid black regions arises from the fact that stable regions may be very thin. For \(b<1\) (the bottom quarter of Figure 1), the map \(f_{\alpha,b}\) is a diffeomorphism of the circle. It indeed exhibits phase-locking phenomena, which are well-understood: For parameters in certain regions, called "Arnold tongues" (the white regions), all but finitely many orbits tend to a stable periodic cycle. Moreover, these maps are stable under perturbations of \(f\) within the Arnold family: a small change of the parameter leads to a small change in the overall behaviour of orbits, so \(f_{\alpha,b}\) is "phase-locked" to the periodic orbit. For \(b>1\), the map \(f_{\alpha,b}\) is no longer invertible and has two distinct critical points on \(S^{1}\). The Arnold tongues begin to intersect, and the periodic orbit for a given tongue may bifurcate and become unstable. This can lead to _chaotic_ behaviour, where an arbitrarily small change of the starting state \(\theta_{0}\) could lead to completely different long-term behaviour. It is thus natural to ask whether for \(b>1\), there is still a dense set of parameters whose behaviour is stable under perturbations of the parameter. More precisely, \(f=f_{\alpha,b}\) with \(b>1\) is called _hyperbolic_ if the orbits of both critical points tend to stable periodic cycles. Such \(f\) is stable under perturbations of the parameter - all nearby maps are also hyperbolic - and almost every orbit converges to a stable cycle. Density of hyperbolic maps in parameter spaces is a central question of one-dimensional dynamics. For real polyomials, it was established by Kozlovski, Shen and van Strien [2] in 2007, answering part (b) of Smale's 11th problem. However, this does not resolve the question in the Arnold family: a key issue is that \(f\), when extended to the complex plane, is transcendental rather than algebraic (see below). This problem was overcome by van Strien and the author [4]:2 Footnote 2: In fact, the results of [4] are more general, covering many families of transcendental entire functions and circle maps. **Theorem 1** (Density of hyperbolicity).: _Hyperbolic maps are dense in the Arnold family. 
That is, given any \(\alpha\in S^{1}\) and \(b>1\), and every \(\varepsilon>0\), there exist perturbed parameters \(\alpha^{\prime}\in S^{1}\) and \(b^{\prime}>1\) with \(|\alpha-\alpha^{\prime}|,|b-b^{\prime}|<\varepsilon\) such that \(f_{\alpha^{\prime},b^{\prime}}\) is hyperbolic._ ## 1. Ideas in the proof of the theorem To prove Theorem 1, we consider the _complex extension_ of \(f_{\alpha,b}\), allowing the state \(\theta\) in (1) to be a complex number. Applying the change of variable \(z=\exp(i\theta)\), we obtain a self-map of the punctured plane \(\mathbb{C}^{*}=\mathbb{C}\setminus\{0\}\) (see Figure 2): \[g_{\alpha,b}(z)=\exp(i(\theta+\alpha+b\sin(\theta)))=e^{i\alpha}\cdot z\cdot \exp\left(\frac{1}{2}\big{(}z-\frac{1}{z}\big{)}\right).\] This function \(g_{\alpha,b}\) has isolated singularities at \(0\) and \(\infty\), which are _essential_ (neither removable nor poles). Consequently, the behaviour near these points is very complicated; for example, the preimages of every \(z\notin\{0,\infty\}\) accumulate both at \(0\) and at \(\infty\). There is a set of starting values of positive area which are _escaping_; i.e., their orbits accumulate only on the essential singularities \(0\) and \(\infty\). A crucial step in the proof is to establish _rigidity_: a map in the Arnold family cannot be deformed, by a small change of parameters, to one with the same qualitative dynamical behaviour, except through certain well-understood mechanisms. In our setting, a new difficulty arises: We must exclude the existence of deformations arising from the set of escaping points. This issue does not arise for polynomials, as treated in [2], which have no essential singularities. It is overcome by using techniques developed for studying the behaviour of transcendental entire functions near \(\infty\)[3]. To prove the theorem, we then begin with a map \(f_{\alpha,b}\) that is not hyperbolic; recall that this map has two critical points and two critical values. It follows from the above-mentioned rigidity statement that we may perturb \((\alpha,b)\) slightly to a parameter \((\alpha_{1},b_{1})\) for which the orbit of one critical value passes through a critical point. The set of parameters satisfying this critical relation forms an analytic variety of dimension \(1\). We can make another perturbation, _within this variety_, to create a second critical relation. (Otherwise, maps within this variety would have the same qualitative dynamical behaviour as \(f_{\alpha_{1},b_{1}}\), which contradicts rigidity.) Thus we obtain parameters \((\alpha^{\prime},b^{\prime})\), arbitrarily close to \((\alpha,b)\), for which both critical values are eventually mapped to a critical point. This means that each critical point is mapped to a periodic cycle containing a critical point; such a cycle is necessarily stable. Thus \(f\) is hyperbolic, and the proof of the theorem is complete. ## 2. A musical collaboration In 2015, Liverpool's mathematics department hosted award-winning composer Emily Howard as a Leverhulme Artist in Residence. Howard previously used mathematical and scientific ideas in her compositions, but the mathematical discussions during the residency led to the use of frontier mathematical research, rather than classical mathematical principles, in her creative process for the first time. A particular focus of discussions during the residency were the article [4] and Theorem 1. Howard challenged the dynamics group to create sets of numeric data, encapsulating key ideas of the work. 
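For illustration only (this is not the code used to produce the datasets described below, and the parameter values are arbitrary), the following TypeScript sketch iterates the Arnold map of equation (1) and records an orbit, including a tiny parameter perturbation of the kind that appears in the proof of Theorem 1:

```typescript
// One step of f_{alpha,b}(theta) = theta + alpha + b*sin(theta)  (mod 2*pi).
const TWO_PI = 2 * Math.PI;

function arnoldStep(theta: number, alpha: number, b: number): number {
  const next = theta + alpha + b * Math.sin(theta);
  return ((next % TWO_PI) + TWO_PI) % TWO_PI; // keep the angle in [0, 2*pi)
}

function orbit(theta0: number, alpha: number, b: number, steps: number): number[] {
  const points = [theta0];
  for (let n = 0; n < steps; n++) {
    points.push(arnoldStep(points[n], alpha, b));
  }
  return points;
}

// Example: follow an orbit, then perturb the parameters very slightly
// (here by 1e-6, purely for illustration) and follow it again.
const alpha = 1.0, b = 2.5, theta0 = 0.1;          // arbitrary example values
const before = orbit(theta0, alpha, b, 100);
const after = orbit(theta0, alpha + 1e-6, b, 100); // tiny perturbation
console.log(before.length, after.length);
```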
Two datasets were created by the author and Alexandre Dezotti (see Figure 3); the resulting composition, _Orbits_[5], is a direct creative response to the data. Figure 2. The dynamics of \(g_{\alpha,b}\) in the complex plane, for one choice of parameters. As usual, the horizontal and vertical directions correspond to the real and imaginary parts of the starting value. The map is chaotic on the unit circle (visible in the central part of the figure), and almost every orbit is escaping. Different shades of grey distinguish different patterns with which orbits tend to the essential singularities at \(0\) and \(\infty\), highlighting the intricate dynamics of the complex extension. Other pieces (_Leviathan_, _Threnos_ and _Chaos or Chess_) written during this period also used the discussions around [4] as pivotal creative input, establishing enduring principles for Howard's approach to composition. For example, _Torus_ was a BBC Proms co-commission with the Royal Liverpool Philharmonic Orchestra, first performed in 2016 at the Royal Albert Hall to a sell-out audience. It was inspired by Howard's discussions with mathematicians in Liverpool, including those with the dynamical systems group, as well as conversations with Liverpool geometer Anna Pratoussevitch. The ideas of chaotic motion, and of a small perturbation that changes the fundamental nature of a system, formed an important part of Howard's compositional approach for this piece. _Torus_ was hailed by the Guardian as 'one of this year's finest new works', and recognised with a 2017 British Composer Award. Two further geometry-inspired orchestral pieces followed: _sphere_ (2017) was commissioned by the Bamberg Symphony Orchestra and broadcast on BBC Radio 3, and _Antisphere_ was commissioned to open the London Symphony Orchestra's 2019/20 season at the Barbican under Sir Simon Rattle. In 2017, the Royal Northern College of Music established a new dedicated centre for Practice and Research in Science and Music (PRiSM), with Howard as Director. PRiSM is building on the approach developed during the Liverpool residency, bringing together scientists, composers and performers for mutual benefit. Due to the specialised nature of mathematical research, it can be challenging to communicate non-superficial ideas and concepts to those outside the specific area of research, let alone outside of mathematics. There are two factors that contributed strongly to successfully creating a dialogue between music and research mathematics. We shall discuss them briefly here, as they may be instructive for collaborative projects of a similar nature. The first is the presence of a shared language: Howard is an Oxford graduate in mathematics and computing, while the author is an amateur orchestral musician. Howard's familiarity with mathematical terminology allowed for detailed discussions of mathematical ideas, which were reflected in the resulting compositions. Figure 3. The data set used in Howard's composition _Orbit 2a_ first follows two orbits of an unstable function in the Arnold family, starting near an unstable periodic cycle, but quickly separating due to the chaotic dynamics. As in the proof of Theorem 1, the function is slightly perturbed once, changing the parameters by less than \(10^{-5}\), to create an attracting cycle. A second, even smaller, perturbation yields another attracting cycle, and thus a hyperbolic function. (A member of the Arnold family with two attracting cycles is necessarily hyperbolic.) For instance, in the
data underlying _Orbits_, there appear repeating patterns arising from both stable and unstable cycles. Though they are indistinguishable from the data alone, Howard chose to represent their differing nature musically as a result of the mathematical discussions. Likewise, the perturbations of parameters are imperceptible in the data; knowing about their significance, Howard marked them as recognisable musical events. It appears likely that, for a collaboration like this to succeed, a common language, if not already present, needs to be carefully developed. A second key factor in the success of the collaboration was the careful choice of a specific piece of research as the basis of discussions. There are several reasons why Theorem 1 provided fertile ground for developing new creative thinking: 1. It lends itself to expression in simple and general terms: no matter how chaotic the system, there is always stability nearby. 2. Phase-locking is known and relevant to musicians, e.g. through the synchronisation of linked metronomes or the elimination of non-harmonic overtones. 3. Aspects of the proof are philosophically intriguing: the complex plane - invisible in the formulation of the problem - plays a crucial part in the proof. 4. The systems in question can be used to design data series as input into and inspiration for the creative process. ### Acknowledgements The research in [4] was supported by Rempe's EPSRC Grants EP/E017886/1 and EP/E052851/1, Rempe's Philip Leverhulme Prize and van Strien's ERC Advanced Grant. Howard's residency was funded by a Leverhulme Artist in Residence award.
2309.07853
Water, not salt, causes most of the Seebeck effect of nonisothermal aqueous electrolytes
When two electrolyte-immersed electrodes have different temperatures, a voltage $\Delta \psi$ can be measured between them. This electrolyte Seebeck effect is usually explained by cations and anions flowing differently in thermal gradients. However, our molecular dynamics simulations of aqueous electrolytes reveal a large temperature-dependent potential drop $\chi$ near blocking electrodes caused by water layering and orientation. The difference in surface potentials at hot and cold electrodes is more important to the Seebeck effect than ionic thermodiffusion, $\Delta \psi \sim \chi_{\rm hot}-\chi_{\rm cold}$.
Ole Nickel, Ludwig J. V. Ahrens-Iwers, Robert H. Meißner, Mathijs Janssen
2023-09-14T16:57:32Z
http://arxiv.org/abs/2309.07853v1
# Water, not salt, causes most of the Seebeck effect of nonisothermal aqueous electrolytes ###### Abstract When two electrolyte-immersed electrodes have different temperatures, a voltage \(\Delta\psi\) can be measured between them. This electrolyte Seebeck effect is usually explained by cations and anions flowing differently in thermal gradients. However, our molecular dynamics simulations of aqueous electrolytes reveal a large temperature-dependent potential drop \(\chi\) near blocking electrodes caused by water layering and orientation. The difference in surface potentials at hot and cold electrodes is more important to the Seebeck effect than ionic thermodiffusion, \(\Delta\psi\sim\chi_{\rm hot}-\chi_{\rm cold}\). pacs: 71.10.
2302.14585
On Degeneracy in the P-Matroid Oriented Matroid Complementarity Problem
Klaus showed that the Oriented Matroid Complementarity Problem (OMCP) can be solved by a reduction to the problem of sink-finding in a unique sink orientation (USO) if the input is promised to be given by a non-degenerate extension of a P-matroid. In this paper, we investigate the effect of degeneracy on this reduction. On the one hand, this understanding of degeneracies allows us to prove a linear lower bound on the number of vertex evaluations required for sink-finding in P-matroid USOs, the set of USOs obtainable through Klaus' reduction. On the other hand, it allows us to adjust Klaus' reduction to also work with degenerate instances. Furthermore, we introduce a total search version of the P-Matroid Oriented Matroid Complementarity Problem (P-OMCP). Given any extension of any oriented matroid M, by reduction to a total search version of USO sink-finding we can either solve the OMCP, or provide a polynomial-time verifiable certificate that M is not a P-matroid. This places the total search version of the P-OMCP in the complexity class Unique End of Potential Line (UEOPL).
Michaela Borzechowski, Simon Weber
2023-02-28T14:13:53Z
http://arxiv.org/abs/2302.14585v2
# On Degeneracy in the P-Matroid Oriented Matroid Complementarity Problem ###### Abstract We investigate degeneracy in the P-Matroid Oriented Matroid Complementarity Problem (P-OMCP) and its impact on the reduction of this problem to sink-finding in Unique Sink Orientations (USOs). On one hand, this understanding of degeneracies allows us to prove a linear lower bound for sink-finding in _P-matroid USOs_. On the other hand, it allows us to prove a promise preserving reduction from P-OMCP to USO sink-finding, where we can drop the assumption that the given P-OMCP is non-degenerate. This places the promise version of P-OMCP in the complexity class PromiseUEOPL. Michaela Borzechowski: Supported by the German Research Foundation DFG within the Research Training Group GRK 2434 _Facets of Complexity Simon Weber_: Supported by the Swiss National Science Foundation under project no. 204320. ## 1 Introduction Degenerate input can be an issue in structural analysis and algorithm design for many algebraic and geometric problems. It is often swept under the rug by assuming the input to be non-degenerate. For example, one often assumes all input points of a geometric problem to be in general position. In some problems (e.g., the minimum convex partition [10]), such an assumption is inappropriate as it makes the problem considerably easier. In other cases, degenerate inputs can be solved easily by resolving degeneracy using _perturbation_ techniques. In this paper, we investigate degeneracy in the context of the P-Matroid Oriented Matroid Complementarity Problem (P-OMCP). Assuming non-degeneracy, this problem can be solved by converting it into a Unique Sink Orientation of the hypercube graph, and finding a sink within that orientation. Oriented matroids are abstractions for many types of configurations of geometric objects, such as (pseudo-)hyperplane arrangements or point configurations. Just like these geometric configurations, oriented matroids can exhibit degeneracies. In this paper, we analyze the effects of these degeneracies on the reduction from P-OMCP to Unique Sink Orientation sink-finding. Both the P-OMCP as well as Unique Sink Orientations are combinatorial abstractions of the P-Matrix linear complementarity problem (P-LCP). The complexity status of the P-LCP remains an interesting and relevant open question, since the problem can be used to solve many optimization problems, such as Linear Programming [9], and binary Simple Stochastic Games [8, 13]. Sink-finding in Unique Sink Orientations can also be used to solve geometric problems such as the problem of finding the smallest enclosing ball of a set of balls [6]. ## 2 Background ### Oriented Matroids For a more extensive introduction to oriented matroids, we refer the reader to the comprehensive textbook by Bjorner et al. [1]. We consider oriented matroids \(\mathcal{M}=(E,\mathcal{C})\) in _circuit representation_, where \(E\) is called the _ground set_, and \(\mathcal{C}\) is the collection of circuits of \(\mathcal{M}\). A _circuit_\(X\in\{-,0,+\}^{E}\) is a _signed set_ represented by a tuple of sets \(X=(X^{+},X^{-})\) where for all \(e\in X^{+}\colon X_{e}=+\) and \(e\in X^{-}\colon X_{e}=-\). We write \(-X\) for the inversed signed set \(-X=(X^{-},X^{+})\). The _support_ is defined as the set of non-zero elements \(\underline{X}\coloneqq X^{+}\cup X^{-}\). [Circuit axioms] For an oriented matroid \(\mathcal{M}=(E,\mathcal{C})\) on the ground set \(E\) the following set of axioms are satisfied for \(\mathcal{C}\): 1. 
\((\emptyset,\emptyset)\notin\mathcal{C}\). 2. \(X\in\mathcal{C}\Leftrightarrow-X\in\mathcal{C}\). 3. For all \(X,Y\in\mathcal{C}\), if \(\underline{X}\subseteq\underline{Y}\), then \(X=Y\) or \(X=-Y\). 4. For all \(X,Y\in\mathcal{C}\), \(X\neq-Y\), and \(e\in X^{+}\cap Y^{-}\) there is a \(Z\in\mathcal{C}\) such that = \(Z^{+}\subseteq(X^{+}\cup Y^{+})\setminus\{e\}\) and = \(Z^{-}\subseteq(X^{-}\cup Y^{-})\setminus\{e\}\). A _basis_\(B\subseteq E\) of an oriented matroid \(\mathcal{M}=(E,\mathcal{C})\) is an inclusion-maximal subset of \(E\) such that \(B\) contains no circuit. The _rank_ of an oriented matroid is the size of its bases. An oriented matroid is called _uniform_, if all subsets of \(E\) with cardinality equal to the rank of the oriented matroid are bases. The _cocircuits_\(\mathcal{C}^{*}\) of an oriented matroid \(\mathcal{M}\) are the circuits of the _dual_ oriented matroid \(\mathcal{M}^{*}\). To understand duality, we need the following notion of orthogonality: Two signed sets \(X,Y\) are said to be _orthogonal_, if \(\underline{X}\cap\underline{Y}=\emptyset\), or there exist \(e,f\in\underline{X}\cap\underline{Y}\), such that \(X_{e}Y_{e}=-X_{f}Y_{f}\). In other words, two signed sets are orthogonal if their supports either do not intersect at all, or if they agree (same non-zero sign) and disagree (opposite non-zero sign) on at least one element. [[1]] Let \(X\in\mathcal{C}\) be a circuit and \(Y\in\mathcal{C}^{*}\) be a cocircuit of some oriented matroid \(\mathcal{M}\). Then, \(X\) and \(Y\) are orthogonal. Given the set of circuits, the set of cocircuits can be computed, since the cocircuits are exactly the inclusion-minimal non-empty signed sets that are orthogonal to all circuits. Since duality of oriented matroids is self-inverse, the opposite holds too. In an oriented matroid \(\mathcal{M}=(E,\mathcal{C})\), given a basis \(B\) and an element \(e\not\in B\), the _fundamental circuit_\(C(B,e)\) is the unique circuit \(X\) with \(X_{e}=+\) and \(\underline{X}\subseteq B\cup\{e\}\). In an oriented matroid \(\mathcal{M}=(E,\mathcal{C})\), given a basis \(B\) and an element \(e\in B\), the _fundamental cocircuit_\(C^{*}(B,e)\) is the unique cocircuit \(D\) with \(D_{e}=+\) and \(\underline{D}\cap(B\setminus\{e\})=\emptyset\). An oriented matroid \(\widehat{\mathcal{M}}=(E\cup\{q\},\widehat{\mathcal{C}})\) is called an _extension of_\(\mathcal{M}\), if its _minor_\(\widehat{\mathcal{M}}\setminus q\coloneqq(E,\{X\mid X\in\widehat{\mathcal{C}}\text { and }X_{q}=0\})\) is equal to \(\mathcal{M}\). A _localization_ is a way to describe an extension \(\widehat{\mathcal{M}}=(E\cup\{q\},\widehat{\mathcal{C}})\) of \(\mathcal{M}=(E,\mathcal{C})\). **Definition 6**.: _Given an oriented matroid \(\mathcal{M}\) on ground set \(E\) and with cocircuits \(\mathcal{C}^{*}\), a function \(\sigma:\mathcal{C}^{*}\to\{-,0,+\}\) defines the following family of signed sets_ \[\widehat{\mathcal{C}}^{*}:= \{(Y,\,\sigma(Y)):Y\in\mathcal{C}^{*}\}\cup\] \[\{(Y_{1}\circ Y_{2},\,0):Y_{1},Y_{2}\in\mathcal{C}^{*},\text{ for adjacent }Y_{1},Y_{2}\text{ with }\] \[\sigma(Y_{1})=-\sigma(Y_{2})\neq 0\},\] _where the notation \((X,s)\) denotes the signed set where all elements of \(E\) have the same sign as in \(X\), and the new element \(q\) gets sign \(s\). For the definition of adjacency, we refer the reader to [1]. 
For the further discussion, only the first of the two sets forming \(\widehat{\mathcal{C}}^{*}\) is relevant._ _The function \(\sigma\) is called a localization, if \(\widehat{\mathcal{C}}^{*}\) is a valid set of cocircuits. Then, the oriented matroid \(\widehat{\mathcal{M}}\) on the ground set \(E\cup\{q\}\) with cocircuits \(\widehat{\mathcal{C}}^{*}\) is called the extension of \(\mathcal{M}\) specified by \(\sigma\)._ ### P-Omcp We consider oriented matroids \(\mathcal{M}=(E_{2n},\mathcal{C})\) on the ground set \(E_{2n}=S\cup T\), which is made up of two parts \(S=\{s_{1},\ldots,s_{n}\}\) and \(T=\{t_{1},\ldots,t_{n}\}\), \(S\cap T=\emptyset\). We call a set \(J\subseteq E_{2n}\)_complementary_, if it contains no _complementary pair_\(s_{i},t_{i}\). **Definition 7** (P-matroid).: _An oriented matroid \(\mathcal{M}=(E_{2n},\mathcal{C})\) is a P-matroid if \(S\) is a basis and there is no sign-reversing circuit. A sign-reversing circuit is a circuit \(X\) such that for each complementary pair \(s_{i},t_{i}\) contained in \(\underline{X}\), \(X_{s_{i}}=-X_{t_{i}}\)._ **Example 8**.: The matroid \(\mathcal{M}=(\{s,t\},\mathcal{C})\) is a P-Matroid. The matroid \(\mathcal{M}^{\prime}=(\{s,t\},\mathcal{C}^{\prime})\) is _not_ a P-Matroid, since both of its circuits are sign-reversing. \[\mathcal{C}=\{\begin{pmatrix}+&+\end{pmatrix},\begin{pmatrix}-&-\end{pmatrix} \},\qquad\mathcal{C}^{\prime}=\{\begin{pmatrix}+&-\end{pmatrix},\begin{pmatrix} -&+\end{pmatrix}\}.\] Let \(q\) be such that \(q\notin E_{2n}\). Then \(\widehat{E_{2n}}\coloneqq S\cup T\cup\{q\}\), and we write \(\widehat{\mathcal{M}}=(\widehat{E_{2n}},\widehat{\mathcal{C}})\) for an _extension of \(\mathcal{M}\)_. **Example 9**.: \(\widehat{\mathcal{M}}=(\{s,t,q\},\widehat{\mathcal{C}})\) and \(\widehat{\mathcal{M}^{\prime}}=(\{s,t,q\},\widehat{\mathcal{C}}^{\prime})\) are both valid extensions of the P-Matroid \(\mathcal{M}\) from Example 8 for \(\widehat{\mathcal{C}}\) and \(\widehat{\mathcal{C}}^{\prime}\) as given below. Figures 1 and 2 show the realizations of the corresponding oriented matroids as arrangements of oriented hyperplanes through the origin; each one-dimensional cell of these arrangements corresponds to a circuit. Given an extension \(\widehat{\mathcal{M}}=(\widehat{E_{2n}},\widehat{\mathcal{C}})\) of a P-matroid, the goal of the _P-Matroid Oriented Matroid Complementarity Problem (P-OMCP)_ is to find a circuit \(X\in\widehat{\mathcal{C}}\) with \(X\geq 0\), \(X_{q}=+\), and \(X_{s_{i}}X_{t_{i}}=0\) for every \(i\in[n]\). The matroid extension is given to an algorithm by a circuit oracle, which given a set \(B\subset\widehat{E_{2n}}\) and another element \(e\in\widehat{E_{2n}}\setminus B\) either returns that \(B\) is not a basis of \(\widehat{\mathcal{M}}\), or returns the fundamental circuit \(C(B,e)\) (recall that this is the unique circuit \(X\in\widehat{\mathcal{C}}\) with \(X_{e}=+\) and \(\underline{X}\subseteq B\cup\{e\}\)). It is known that in P-matroids and P-matroid extensions, every complementary set \(B\subset S\cup T\) of size \(n\) is a basis [11]. Every P-OMCP instance has a unique solution [17]. The unique solution of an P-OMCP instance with \(\widehat{\mathcal{M}}\) of Example 9 as input is \(\begin{pmatrix}0&+&+\end{pmatrix}\), the unique solution in \(\widehat{\mathcal{M}}^{\prime}\) is \(\begin{pmatrix}0&0&+\end{pmatrix}\). 
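As a concrete illustration of these definitions (a sketch of ours, not part of the original paper), the small Python snippet below encodes circuits as sign vectors and checks the sign-reversing condition of Definition 7 as well as the complementarity conditions a P-OMCP solution circuit must satisfy, on the two-element matroids of Examples 8 and 9. The helper names (`is_sign_reversing`, `is_pomcp_solution`) are ours, and the snippet additionally assumes that a sign-reversing circuit contains at least one complementary pair in its support.

```python
# Sketch (ours): signed sets as dicts element -> {-1, 0, +1}. Illustrates the
# sign-reversing condition of Definition 7 and the P-OMCP solution conditions
# on the toy matroids of Examples 8 and 9. All helper names are ours.

def is_sign_reversing(circuit, pairs):
    """Sign-reversing: every complementary pair (s_i, t_i) in the support has
    opposite signs (we assume at least one such pair lies in the support)."""
    relevant = [(s, t) for s, t in pairs
                if circuit.get(s, 0) != 0 and circuit.get(t, 0) != 0]
    return bool(relevant) and all(circuit[s] == -circuit[t] for s, t in relevant)

def is_pomcp_solution(circuit, pairs):
    """Solution circuit: non-negative, positive on q, and complementary
    (at most one of s_i, t_i non-zero for every i)."""
    nonneg = all(v >= 0 for v in circuit.values())
    return (nonneg and circuit.get('q', 0) == +1
            and all(circuit.get(s, 0) * circuit.get(t, 0) == 0 for s, t in pairs))

pairs = [('s', 't')]

# Example 8: circuits of M (a P-matroid) and M' (not a P-matroid).
M  = [{'s': +1, 't': +1}, {'s': -1, 't': -1}]
Mp = [{'s': +1, 't': -1}, {'s': -1, 't': +1}]
print([is_sign_reversing(c, pairs) for c in M])    # [False, False] -> P-matroid
print([is_sign_reversing(c, pairs) for c in Mp])   # [True, True]   -> not a P-matroid

# The stated solution of the extension of Example 9: (0 + +), i.e. zero on s,
# positive on t and q.
candidate = {'s': 0, 't': +1, 'q': +1}
print(is_pomcp_solution(candidate, pairs))         # True
```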
A P-matroid extension (a P-OMCP instance) is _non-degenerate_, if for every complementary basis \(B\), the circuit \(C(B,q)\) is non-zero on all elements in \(B\cup\{q\}\). The red shaded area in Figures 1 and 2 denotes the areas where \(q\) is positive. The circuits marked in red are the fundamental circuits \(C(\{s\},q)\) and \(C(\{t\},q)\). As can be seen, the P-Matroid extension \(\widehat{\mathcal{M}}\) in Example 9 is non-degenerate, whereas \(\widehat{\mathcal{M}}^{\prime}\) is degenerate. ### Unique Sink Orientation (USO) The _\(n\)-dimensional hypercube graph_\(Q_{n}\) (_\(n\)-cube_) is the undirected graph on the vertex set \(V(Q_{n})=\{0,1\}^{n}\), where two vertices are connected by an edge if they differ in exactly one coordinate. An _orientation_\(O:V(Q_{n})\rightarrow\{-,+\}^{n}\) assigns each vertex an orientation of its incident edges, where \(O(v)_{i}=+\) denotes an outgoing edge from vertex \(v\) in dimension \(i\) and \(O(v)_{i}=-\) denotes an incoming edge. A _Unique Sink Orientation (USO)_ is an orientation, such that every non-empty subcube contains exactly one sink, i.e., a unique vertex \(v\) with \(O(v)_{i}=-\) for all dimensions \(i\) in the subcube [16]. [Szabo-Welzl Condition [16]] An orientation \(O\) of \(Q_{n}\) is USO if and only if for all pairs of distinct vertices \(v,w\in V(Q_{n})\), we have: \(\exists i\in[n]:\ (v_{i}\neq w_{i})\wedge(O(v)_{i}\neq O(w)_{i})\). The classical algorithmic problem associated to USOs is that of finding the unique global sink \(v\) with \(\forall i:O(v)_{i}=-\), with as few as possible queries to an oracle computing \(O\). ### Classical Reduction Todd [17] showed that a non-degenerate P-OMCP given by a matroid \(\widehat{\mathcal{M}}=(\widehat{E_{2n}},\widehat{\mathcal{C}})\) can be translated to an USO of the \(n\)-cube. Every vertex \(v\) of the cube is associated with a complementary basis \(B(v)\subset S\cup T\). For each \(i\in[n]\), \(s_{i}\in B(v)\) if \(v_{i}=0\), otherwise \(t_{i}\in B(v)\). The orientation \(O(v)\) is then computed using the fundamental circuit \(C:=C(B(v),q)\): \[O(v)_{i}:=\begin{cases}+&\text{if $C_{s_{i}}=-$ or $C_{t_{i}}=-$},\\ -&\text{if $C_{s_{i}}=+$ or $C_{t_{i}}=+$}.\end{cases}\] As the P-OMCP instance is non-degenerate, no other case can occur. Todd showed that the computed orientation \(O\) is USO, and that its sink \(v\) corresponds to a fundamental circuit \(C(B(v),q)\) which is positive on all elements and thus a solution to the P-OMCP instance. **Example 12**.: Recall the P-Matroid extension \(\widehat{\mathcal{M}}\) from Example 9. Figure 3 shows the USO created by this reduction, where \(B(0)=s\) and \(B(1)=t\). ## 3 The Effect of Degeneracy on the Resulting USOs In the above reduction, if the P-OMCP instance is degenerate, we can sometimes not decide which way to orient an edge since \(C_{s_{i}}=C_{t_{i}}=0\). For now, we leave these edges unoriented. This leads to a _partial orientation_ of the hypercube, which is a function \(O:V(Q_{k})\to\{-,0,+\}^{k}\) where \(O(v)_{i}=0\) denotes an unoriented edge. We call such a partial orientation arising from a degenerate P-OMCP a _partial P-matroid USO (PPU)_. In this section we aim to understand the structure of unoriented edges in PPUs. Not every partial orientation can be turned into an USO by directing the unoriented edges. 
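To make the reduction and the partial orientations it produces concrete, the following sketch (ours; `circuit_oracle` is a hypothetical callable standing in for the circuit oracle of Section 2.2) computes the orientation of every vertex from fundamental circuits, recording a 0 wherever the case distinction above cannot be decided.

```python
# Sketch (ours, not the authors' code): computing the (partial) orientation of
# Sections 2.4 and 3 from a circuit oracle. `circuit_oracle(B, e)` is a
# hypothetical callable returning the fundamental circuit C(B, e) as a dict
# element -> {-1, 0, +1}.
from itertools import product

def basis_of_vertex(v):
    """Complementary basis B(v): s_i if v_i = 0, t_i if v_i = 1."""
    return [('t', i) if bit else ('s', i) for i, bit in enumerate(v)]

def partial_orientation(n, circuit_oracle):
    """O(v) in {-1, 0, +1}^n for every vertex of the n-cube, with O(v)_i = 0
    whenever C_{s_i} = C_{t_i} = 0, i.e. the edge is left unoriented."""
    orientation = {}
    for v in product((0, 1), repeat=n):
        C = circuit_oracle(basis_of_vertex(v), 'q')
        out = []
        for i in range(n):
            cs, ct = C.get(('s', i), 0), C.get(('t', i), 0)
            if cs == -1 or ct == -1:
                out.append(+1)          # outgoing edge
            elif cs == +1 or ct == +1:
                out.append(-1)          # incoming edge
            else:
                out.append(0)           # degenerate: left unoriented for now
        orientation[v] = tuple(out)
    return orientation

# A purely hypothetical 1-dimensional oracle, only to exercise the code path;
# it is not taken from the paper's examples.
def toy_oracle(B, e):
    return {'q': +1}                    # degenerate with respect to both bases

print(partial_orientation(1, toy_oracle))   # {(0,): (0,), (1,): (0,)}: edge unoriented
```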
We thus state the following condition inspired by the Szabo-Welzl condition: A partial orientation \(O\) is said to be _partially Szabo-Welzl_ if for any two distinct vertices \(v,w\in V(Q_{k})\), either \[O(v)_{i}=O(w)_{i}=0\text{ for all }i\text{ with }v_{i}\neq w_{i}, \text{ or} \tag{1}\] \[\exists i:v_{i}\neq w_{i}\wedge\big{(}(O(v)_{i}=+\wedge O(w)_{i} =-)\vee(O(v)_{i}=-\wedge O(w)_{i}=+)\big{)}. \tag{2}\] A partial orientation \(O\) which is partially Szabo-Welzl can be extended to an USO by orienting all unoriented edges towards the endpoint with fewer \(1s\), i.e., downwards. Proof.: By orienting all unoriented edges of \(O\) from the vertex with more \(1s\) to the vertex with fewer \(1s\) (i.e., "downwards"), any two vertices that previously fulfilled condition (1) of Definition 13 now fulfill the classic Szabo-Welzl condition as in Lemma 11. Note that condition (2) of Definition 13 is equivalent to this classic condition on full (non-partial) orientations. We conclude that all pairs of vertices must now fulfill the Szabo-Welzl condition as in Lemma 11. A partial P-matroid USO is partially Szabo-Welzl. Proof.: Assume two vertices \(v,w\) in a PPU \(O\) failed both conditions of Definition 13. Let \(V=C(B(v),q)\) and \(W=C(B(w),q)\) be the fundamental circuits used to derive \(O(v)\) and Figure 3: USO created by the reduction from \(\widehat{\mathcal{M}}\). \(O(w)\). Since \(v\) and \(w\) violate the first condition of Definition 13, \(V\neq W\). Applying circuit axiom (C3) to \(V\) and \(-W\) to eliminate the element \(q\) shows that there exists a circuit \(Z\) with certain properties. Since \(q\not\in\underline{Z}\), \(Z\) must contain both \(s_{i}\) and \(t_{i}\) for at least one \(i\in[k]\) (since all complementary sets are independent in a P-matroid extension). As we assumed that \(v\) and \(w\) violate the second condition of Definition 13, we know that \(s_{i}\) and \(t_{i}\) must have opposite signs in \(Z\). Since this holds for any \(i\in[k]\), \(Z\) is a sign-reversing circuit of the underlying P-matroid, which contradicts Definition 7. We conclude that no two vertices can fail Definition 13. In a partial P-matroid USO, the unoriented edges form a set of vertex-disjoint faces. In each such face, the orientation is the same at every vertex. Proof.: Let \(v\) be a vertex of a PPU, and let \(w\) be another vertex within the face spanned by the unoriented edges incident to \(v\). Then, the fundamental circuit \(C(B(v),q)\) fulfills all the conditions that a circuit has to fulfill to be the fundamental circuit \(C(B(w),q)\). Since fundamental circuits are unique in all oriented matroids [11], we must have \(C(B(v),q)=C(B(w),q)\) and thus \(v\) and \(w\) must be oriented the same way, which implies the lemma. Lemmas 14-16, and [14, Corollary 6] imply that the unoriented subcubes of a PPU can in fact be oriented according to _any_ USO: Let \(O\) be a PPU and let \(O^{\prime}\) be the orientation obtained by independently orienting each unoriented face \(f\) of \(O\) according to some USO of the same dimension as \(f\). Then, \(O^{\prime}\) is USO. Proof.: By Lemmas 14 and 15, \(O\) can be extended to some USO. By Lemma 16, each unoriented face is a hypervertex; a face where all edges of the same dimension leaving the face are oriented the same way. By [14, Corollary 6], each such hypervertex can be reoriented according to an arbitrary USO while preserving that the whole orientation is USO. Recall the P-Matroid extension \(\widehat{\mathcal{M}}\) from Example 9. 
Figure 4 shows the USO created by this reduction, where \(B(0)=s\) and \(B(1)=t\). ## 4 Constructions Based on Degeneracy and Perturbations In this section we show how existing constructions of oriented matroid extensions can be interpreted as constructions of (partial) P-matroid USOs. An extension \(\widehat{\mathcal{M}}\) of an oriented matroid \(\mathcal{M}\) can be uniquely described by a _localization_, a function \(\sigma\) from the set \(\mathcal{C}^{*}\) of cocircuits of \(\mathcal{M}\) to the set \(\{-,0,+\}\). We give some more background about localizations in Section 2.1. Note that not every function \(f:\mathcal{C}^{*}\to\{-,0,+\}\) describes a valid extension and thus not every such function is a localization. The following lemma connects a localization to the circuits relevant to the resulting (partial) P-matroid USO. Figure 4: USO created by the reduction from the degenerate P-Matroid extension \(\widehat{\mathcal{M}}\). **Lemma 19**.: _Let \(\mathcal{M}\) be a P-matroid and let \(\sigma\) be a localization for \(\mathcal{M}\) describing the extension \(\widehat{\mathcal{M}}\). Then, for any complementary basis \(B\) of \(\mathcal{M}\) (and thus also of \(\widehat{\mathcal{M}}\)), and every element \(e\in B\), the sign of \(e\) in the fundamental circuit \(C(B,q)\) of \(\widehat{\mathcal{M}}\) is the opposite of the sign assigned by \(\sigma\) to the fundamental cocircuit \(C^{*}(B,e)\) of \(\mathcal{M}\)._ Proof.: \(D:=C^{*}(B,e)\) is a cocircuit of \(\mathcal{M}\). By Definition 6, \(\widehat{D}:=(D,\sigma(D))\) must be a cocircuit of \(\widehat{\mathcal{M}}\). By Definition 5, \(\widehat{D}\) must be a subset of \(\widehat{E_{2n}}\setminus(B\setminus\{e\})\) and \(\widehat{D}_{e}=+\). On the other hand, the support of \(C:=C(B,q)\) must be a subset of \(B\cup\{q\}\), and \(C_{q}=+\). Lemma 3 says that \(C\) and \(\widehat{D}\) must be orthogonal, i.e., their supports either do not intersect, or they must agree and disagree on at least one element. Since \(\underline{C}\cap\widehat{D}\subseteq\{e,q\}\), the first case only occurs if \(C_{e}=\widehat{D}_{q}=0\). The second case can only occur if \(C_{e}=-\widehat{D}_{q}\), since \(C_{q}=\widehat{D}_{e}=+\). Las Vergnas [12] showed that the set of localizations is closed under composition, i.e., given two localizations \(\sigma_{1},\sigma_{2}\), the following function is a localization too: \[\forall c\in\mathcal{C}^{*}:(\sigma_{1}\circ\sigma_{2})(c):=\begin{cases} \sigma_{1}(c),&\text{if }\sigma_{1}(c)\neq 0,\\ \sigma_{2}(c),&\text{otherwise}.\end{cases}\] Lemma 19 allows us to understand the effect of such composition on the resulting (partial) P-matroid USO: For localizations \(\sigma_{1},\sigma_{2}\) and their corresponding PPUs \(O_{1},O_{2}\), the PPU \(O^{\prime}\) given by the localization \(\sigma_{1}\circ\sigma_{2}\) is \[\forall v\in V(Q_{k}),i\in[k]:O^{\prime}(v)_{i}=\begin{cases}O_{1}(v)_{i},& \text{if }O_{1}(v)_{i}\neq 0,\\ O_{2}(v)_{i},&\text{otherwise}.\end{cases}\] This can be seen as filling in all unoriented subcubes of \(O_{1}\) with the orientation \(O_{2}\). Furthermore, Las Vergnas [12] describes _lexicographic extensions_ of oriented matroids. **Definition 20** (Lexicographic extension [12]).: _Let \(\mathcal{M}=(E,\mathcal{C})\) be an oriented matroid. Given an element \(e\in E\) and a sign \(s\in\{-,0,+\}\), the function \(\sigma:\mathcal{C}^{*}\to\{-,0,+\}\) given by_ \[\sigma(D):=\begin{cases}s\cdot D_{e},&\text{if }D_{e}\neq 0,\\ 0,&\text{otherwise},\end{cases}\] _is a localization. 
The extension of \(\mathcal{M}\) specified by this localization is called the lexicographic extension of \(\mathcal{M}\) by \([s\cdot e]\)._ Lexicographic extensions of _uniform_ P-matroids give rise to PPUs in which all edges of some dimension are oriented the same way, and one half is left unoriented while the other half is completely oriented (see Figure 5). Let \(\mathcal{M}=(E_{2n},\mathcal{C})\) be a P-matroid. Let \(\widehat{\mathcal{M}}\) be the lexicographic extension of \(\mathcal{M}\) by \([+\cdot\cdot_{i}]\). Then, in the partial P-matroid USO \(O\) defined by \(\widehat{\mathcal{M}}\), the upper \(i\)-facet (the facet of vertices \(v\) with \(v_{i}=1\)) is an unoriented subcube, and all \(i\)-edges point towards this facet. Furthermore, if \(\mathcal{M}\) is uniform, the lower \(i\)-facet is completely oriented. Proof.: We first prove that all edges in dimension \(i\) are oriented from the vertices with \(v_{i}=0\) to the vertices with \(v_{i}=1\). As \(t_{i}\) is positive in all cocircuits \(C^{*}(B,t_{i})\) for \(B\) such that \(t_{i}\in B\), \(\sigma\) assigns \(+\) to all such cocircuits. By Lemma 19, for each vertex \(v\) with \(v_{i}=1\), we have \(O(v)_{i}=-\). Furthermore, since \(\mathcal{M}\) is a P-matroid, every complementary set of \(n\) elements is a basis; thus, also every such set is a cobasis. We therefore know that every fundamental cocircuit \(C^{*}(B,e)\) for a complementary set \(B\) must be non-zero on both elements of the complementary pair which includes \(e\). For any basis \(B\) with \(s_{i}\in B\), \(t_{i}\) is thus non-zero in \(C^{*}(B,s_{i})\), and \(\sigma\) assigns a non-zero sign to this cocircuit. By [11, Theorem 5.4], a P-matroid contains no _sign-preserving_ cocircuit, so \(s_{i}\) and \(t_{i}\) must have opposite signs in this cocircuit. Thus, \(t_{i}\) must be negative in \(C^{*}(B,s_{i})\), and \(\sigma\) assigns \(-\) to it. We conclude that for each vertex \(v\) with \(v_{i}=0\), we have \(O(v)_{i}=+\). Next, we prove that the facet of vertices \(v\) with \(v_{i}=1\) is unoriented. Let \(B\) be a basis with \(t_{i}\in B\), and let \(e\in B\setminus\{t_{i}\}\) be some element. Now, note that \(t_{i}\not\in C^{*}(B,e)\). Thus, \(\sigma\) assigns \(0\) to these circuits, and therefore \(C(B,q)_{e}=0\), showing that this facet is unoriented. Lastly, we prove that if \(\mathcal{M}\) is uniform, the facet of vertices \(v\) with \(v_{i}=0\) is completely oriented. When \(\mathcal{M}\) is uniform, all subsets \(B\subset E\) of size \(n\) are bases and cobases. Thus, \(|C^{*}(B,e)|=n+1\), and for every complementary \(B\) such that \(s_{i}\in B\) and any \(e\in B\), we have that \(t_{i}\in C^{*}(B,e)\) and therefore \(\sigma\) assigns a non-zero sign to that circuit. This shows that \(|C(B,q)|=n+1\) too, proving that all edges around a vertex \(v\) with \(v_{i}=0\) are oriented. Of course this lemma symmetrically also applies to lexicographic extensions where \(s=-\) or \(e=s_{i}\). Switching \(t_{i}\) out for \(s_{i}\) swaps the role of the two facets, and switching the sign makes all \(i\)-edges point to the oriented facet instead of the unoriented one. We can use these two construction techniques to prove a lower bound on the number of queries needed by deterministic sink-finding algorithms on P-matroid USOs. In essence, we successively build a localization by composition with lexicographic extensions. The construction keeps the invariant that there exists an unoriented subcube guaranteed to contain the global sink. 
The dimension of this subcube is reduced by at most one with every query, thus at least \(n\) queries are required. Let \(\mathcal{M}=(E_{2n},\mathcal{C})\) be a uniform P-matroid. Then, for every deterministic sink-finding algorithm \(\mathcal{A}\), there exists a non-degenerate extension \(\widetilde{\mathcal{M}}\) of \(\mathcal{M}\) such that \(\mathcal{A}\) requires at least \(n\) queries to find the sink of the P-matroid USO given by \(\widetilde{\mathcal{M}}\). Proof.: We specify an adversarial procedure which iteratively builds up a localization \(\sigma\) for \(\mathcal{M}\). At any point of this procedure, the current localization describes an extension \(\widetilde{\mathcal{M}}\) for which the PPU \(O\) contains exactly one unoriented subcube \(U\), and all edges incident to \(U\) are oriented into \(U\). Thus, the global sink of \(O\) must lie in \(U\), but its exact location has not been determined yet. At the beginning of the procedure, \(\sigma\) is set to be all-zero, i.e., \(O\) is completely unoriented and \(U\) is the whole cube. Now, whenever the sink-finding algorithm queries a vertex \(v\), the adversarial procedure must return the complete orientation around this vertex. If \(v\) lies outside of \(U\), it is already completely oriented, and its orientation can simply be returned. Figure 5: The form of the PPU given by a lexicographic extension of a uniform P-matroid. Otherwise, if \(v\) lies in \(U\), the localization \(\sigma\) has to be changed. To do this, we pick one dimension \(i\) which spans \(U\). If \(v_{i}=0\), we change \(\sigma\) to \(\sigma^{\prime}:=\sigma\circ[+\cdot\ t_{i}]\), i.e., \(\sigma\) is combined with the lexicographic extension \([+\cdot\ t_{i}]\). On \(O\) this has the effect that some edges in \(U\) are oriented. By Lemma 21, all edges in the lower \(i\)-facet of \(U\) are oriented, and all \(i\)-edges in \(U\) are pointed away from this facet. Thus, \(v\) is now completely oriented, and \(U\) has shrunk by one dimension. The orientation of \(v\) can thus be returned. Note that if \(v_{i}=1\), the lexicographic extension would be picked to be \([+\cdot s_{i}]\), the rest of the procedure staying the same. Since \(U\) shrinks by only one dimension with every query, and \(U\) has \(n\) dimensions at the beginning, the first \(n\) vertices queried by the algorithm are never the sink. Thus, it takes at least \(n\) queries to determine the sink. Previously, the best known lower bound for sink-finding on P-matroid USOs was \(\Omega(\log n)\) queries [18]. In contrast, the stronger, almost-quadratic lower bound of Schurr and Szabo [14] does not apply to P-matroid USOs (for a proof of this see Lemma 30 in Appendix B). The P-Matrix Linear Complementarity Problem (P-LCP) is an algebraic analogue of P-OMCP. We discuss our results (Sections 3 and 4) in the context of P-LCPs in Appendix A. ## 5 The Search Problem Complexity of P-OMCP An instance of Unique End of Potential Line consists of an implicitly given exponentially large graph \(G\), in which each vertex has a positive cost and in- and out-degree at most one. Thus, the graph is a collection of directed paths called _lines_. The computational task is as follows: if the nodes of \(G\) form a single line (that starts in some given start vertex) with strictly increasing cost, then find the unique end node of this line -- a _sink_. Otherwise, either find _some_ sink in \(G\) or a _violation certificate_ that shows that \(G\) does not consist of a single line. 
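The adversarial argument can also be phrased operationally. The sketch below is our own bookkeeping abstraction: it only tracks the unoriented subcube \(U\) rather than constructing localizations, and answers vertex queries so that \(U\) loses at most one dimension per query, which is exactly why the first \(n\) queried vertices can never be the sink.

```python
# Sketch (ours): bookkeeping view of the adversary in the lower-bound proof.
# It tracks the unoriented subcube U as a set of still-free dimensions plus
# fixed coordinates; it does not build the localization itself.

class Adversary:
    def __init__(self, n):
        self.free = set(range(n))   # dimensions still spanning U
        self.fixed = {}             # dimensions already pinned: dim -> bit of U

    def in_U(self, v):
        return all(v[d] == b for d, b in self.fixed.items())

    def query(self, v):
        """Answer a vertex query; report whether v is forced to be the sink."""
        if self.in_U(v) and self.free:
            d = next(iter(self.free))      # a dimension spanning U
            self.free.remove(d)
            self.fixed[d] = 1 - v[d]       # U moves to the facet avoiding v
        # v can only be the sink once U has shrunk to the single vertex v
        return not self.free and self.in_U(v)

# Any deterministic strategy needs at least n queries:
n = 4
adv = Adversary(n)
queries = [tuple((q >> i) & 1 for i in range(n)) for q in range(n)]  # some strategy
print(any(adv.query(v) for v in queries))   # False: the first n queries never hit the sink
```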
Unique End of Potential Line is a total search problem, i.e., there always exists a sink or a violation. Note that there might exist a sink and a violation simultaneously. The search complexity class Unique End of Potential Line (UniqueEOPL) contains all problems that can be reduced in polynomial time to Unique End of Potential Line. Thus, the complexity class UniqueEOPL captures all total search problems where the space of candidate solutions has the structure of a unique line with increasing cost. UniqueEOPL was introduced in 2018 by Fearnley et al. [4]. UniqueEOPL is a subclass of \(\mathsf{PPAD}\cap\mathsf{PLS}\)[3]. Problems in UniqueEOPL are not known to be solvable in polynomial time. The promise version of a total search problem with violations is to find a solution under the promise that no violations exist for the given instance. PromiseUEOPL is the promise version of the search problem class UniqueEOPL. A search problem reduction from a problem \(R\) to a problem \(T\) is _promise preserving_, if every violation of \(T\) is mapped back to a violation of \(R\) and every valid solution of \(T\) is mapped back to a valid solution or a violation of \(R\). Promise preserving reductions are transitive. When containment of a search problem \(R\) in UniqueEOPL is shown via a polynomial time, promise preserving reduction, the promise version of \(R\) is contained in PromiseUEOPL. We now state the problem of USO sink-finding as a total search problem with a violation. Given an orientation function \(O:\{0,1\}^{n}\rightarrow\{+,-\}^{n}\), the task of the total search problem Unique Sink Orientation Sink-Finding (USO-SF) is to find: 1. A vertex \(v\in\{0,1\}^{n}\) such that \(\forall i\in[n]:O(v)_{i}=-\). The vertex \(v\) is a sink. * _Two distinct vertices_ \(v,w\in\{0,1\}^{n}\) _with_ \(\forall i\in[n]:\;(v_{i}=w_{i})\vee(O(v)_{i}=O(w)_{i})\)_. The orientation_ \(O\) _does not fulfill the Szabo-Welzl condition and thus is not USO._ [[4]] USO-SF is in UniqueEOPL and its promise version is in PromiseUEOPL. Next, we define the P-OMCP problem as a total search problem with violations. Let \(\widehat{\mathcal{M}}=(\widehat{E_{2n}},\widehat{\mathcal{C}})\) be an oriented matroid with the set \(S\) being a basis. The task of the total search version of P-OMCP is to find one of the following: * A circuit \(X\in\widehat{\mathcal{C}}\) such that \(X\geq 0\), \(X_{q}=+\) and \(\forall i\in[n]:X_{s_{i}}X_{t_{i}}=0\). * A circuit \(Z\in\mathcal{C}\) which is sign-reversing. * A complementary set \(B\subset E_{2n}\) of size \(n\) which is not a basis of \(\widehat{\mathcal{M}}\). * Two distinct, complementary circuits \(X,Y\in\mathcal{C}\) with \(X_{q}=Y_{q}=+\) and \(\forall i\in[n]:X_{s_{i}}Y_{t_{i}}=X_{t_{i}}Y_{s_{i}}=0\), or \(X_{s_{i}}=Y_{t_{i}}\) and \(X_{t_{i}}=Y_{s_{i}}\). The definition of the violation _(MV3)_ may look unintuitive, but the following lemma shows that it correctly implies that \(\widehat{\mathcal{M}}\) is not a P-matroid extension. A violation of type (MV3) implies that \(\widehat{\mathcal{M}}\) is not a P-matroid extension. Proof.: Suppose we are given such a violation, i.e., two distinct complementary circuits \(X,Y\in\mathcal{C}\) with \(X_{q}=Y_{q}=+\) and \(\forall i\in[n]:X_{s_{i}}Y_{t_{i}}=X_{t_{i}}Y_{s_{i}}=0\) or \(X_{s_{i}}=Y_{t_{i}}\) and \(X_{t_{i}}=Y_{s_{i}}\). As \(X,Y\) are distinct, \(X\neq Y\). Since \(X_{q}=Y_{q}=+\), it holds that \(X\neq-Y\). We can thus apply circuit axiom (C3) on circuits \(X\) and \(-Y\) and element \(q\in X^{+}\cap(-Y)^{-}\). 
It follows that there must exist some circuit \(Z\) with: * \(Z^{+}\subseteq X^{+}\cup(-Y)^{+}\setminus\{q\}\) and * \(Z^{-}\subseteq X^{-}\cup(-Y)^{-}\setminus\{q\}\). If \(\underline{Z}\) contained no complementary pair, it would be a complementary set. Any complementary set \(B\supseteq\underline{Z}\) of size \(n\) can not be a basis, since \(\underline{Z}\) is a circuit. This is a violation of type _(MV2)_ and implies that \(\widehat{\mathcal{M}}\) is not a P-matroid extension. Otherwise, \(\underline{Z}\) must contain at least one complementary pair \(s_{i},t_{i}\). As \(X\) and \(Y\) are complementary, \(s_{i}\) and \(t_{i}\) are each only contained in one of the two circuits, w.l.o.g. \(s_{i}\in\underline{X}\) and \(t_{i}\in\underline{Y}\). Therefore, \(s_{i}\) and \(t_{i}\) are each only contained in one of the two sets \(X^{+}\cup(-Y)^{+}\setminus\{q\}\) and \(X^{-}\cup(-Y)^{-}\setminus\{q\}\). Since \(X_{s_{i}}=Y_{t_{i}}\), they are both in different sets, and thus \(Z_{s_{i}}=-Z_{t_{i}}\). Since this holds for every complementary pair in \(\underline{Z}\), we conclude that \(Z\) is sign-reversing. Thus \(Z\) is a violation of type _(MV1)_, and \(\widehat{\mathcal{M}}\) can not be a P-matroid extension. Note that even if we cannot find \(Z\) explicitly in polynomial time, we can check the conditions on \(X\) and \(Y\) in polynomial time. Technically, the violation _(MV1)_ would be enough to make this search problem total, but our reduction to USO-SF detects only violations of type _(MV2)_ and _(MV3)_. Note that as Fearnley et al. [5] already observed, there may be a difference in the complexity of a total search problem depending on the violations chosen. There is no trivial way known to the authors to transform a violation of type _(MV3)_ or _(MV2)_ to a violation of type _(MV1)_. With the help of Lemmas 3 and 3 we now adapt Todd's reduction of non-degenerate P-OMCP instances to USO (recall Section 2.4) to also work with degenerate instances and their respective total search versions. Given a P-OMCP instance \(\widehat{\mathcal{M}}=(\widehat{E_{2n}},\widehat{\mathcal{C}})\) (note that \(\widehat{\mathcal{M}}\) is possibly not a P-matroid extension, or degenerate), we define the orientation \(O:V(Q_{n})\to\{+,-\}^{n}\): \[O(v)_{i}:=\begin{cases}-&\text{if $B(v)$ is not a basis,}\\ -&\text{if $v_{i}=0$ and $C_{s_{i}}=0$,}\\ +&\text{if $v_{i}=1$ and $C_{t_{i}}=0$,}\\ +&\text{if $v_{i}=0$ and $C_{s_{i}}=-$}\\ &\text{or $v_{i}=1$ and $C_{t_{i}}=-$,}\\ -&\text{otherwise,}\end{cases}\] with \(B(v)\) and \(C:=C(B(v),q)\) defined as in Section 2.4. Furthermore, using Lemmas 14 and 15 we know that \(O\) is USO if \(\widehat{\mathcal{M}}\) is a P-matroid extension. The construction above is a polynomial time, promise preserving reduction from P-OMCP to USO-SF. Proof.: Given a P-OMCP instance \(\widehat{\mathcal{M}}=(\widehat{E_{2n}},\widehat{\mathcal{C}})\), let \((Q_{n},O)\) be an USO-SF instance with \(O\) as defined above. Polynomial timeFor the reduction we build an orientation oracle \(O\) for USO-SF from the given circuit oracle for P-OMCP. Note that this does not mean that we have to compute the output of \(O\) for every vertex, we simply have to build the algorithm (usually represented by a logical circuit) computing \(O\) from the algorithm computing the circuit oracle. Since \(O\) merely computes \(B(v)\) from a given vertex, invokes the circuit oracle, and then performs a case distinction, it can clearly be built and queried in polynomial time in \(n\). 
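For illustration, here is a minimal sketch (ours, not the authors' code) of this orientation oracle; `circuit_oracle` is a hypothetical callable that returns `None` when the queried set is not a basis and otherwise the fundamental circuit \(C(B,q)\) as a mapping from elements to signs.

```python
# Sketch (ours) of the orientation oracle O defined above.
def orientation(v, circuit_oracle):
    B = [('t', i) if bit else ('s', i) for i, bit in enumerate(v)]
    C = circuit_oracle(B, 'q')
    if C is None:                                  # B(v) is not a basis: all edges incoming
        return tuple(-1 for _ in v)
    out = []
    for i, bit in enumerate(v):
        cs, ct = C.get(('s', i), 0), C.get(('t', i), 0)
        if bit == 0 and cs == 0:
            out.append(-1)                         # degenerate edge, oriented downwards
        elif bit == 1 and ct == 0:
            out.append(+1)                         # degenerate edge, oriented downwards
        elif (bit == 0 and cs == -1) or (bit == 1 and ct == -1):
            out.append(+1)
        else:
            out.append(-1)
    return tuple(out)

print(orientation((0, 1), lambda B, e: None))      # (-1, -1): queried set is not a basis
```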
Correctness. To prove correctness of this reduction being promise preserving, we must show that every violation of USO-SF can be mapped back to a violation of P-OMCP and every valid solution of USO-SF can be mapped back to a valid solution or a violation of P-OMCP. A solution of type _(U1)_. Let \(v\in V(Q_{n})\) be a solution to the USO-SF instance, i.e., a sink. It might be that \(v\) is a sink because \(B(v)\) is not a basis, and thus \(O(v)_{i}=-\) for all \(i\). To map this back to a violation or solution of P-OMCP, we first check if the P-OMCP oracle returns that \(B(v)\) is not a basis for the input \(C(B(v),q)\). If so, we have found a violation of type _(MV2)_. Otherwise, the fundamental circuit \(C(B(v),q)\) is a solution to the P-OMCP instance: since \(v\) is a sink, there is no index at which the fundamental circuit is negative. All entries of \(C(B(v),q)\) are positive and the complementarity condition is fulfilled by construction of \(B(v)\). A violation of type _(UV1)_. If a violation is found, we have two distinct vertices \(v\) and \(w\) with \(\forall i\in[n]:\,(v_{i}=w_{i})\vee(O(v)_{i}=O(w)_{i})\). We first again check whether \(B(v)\) and \(B(w)\) are bases; if not, we map this violation to a violation of type _(MV2)_. Otherwise, we show that there are two distinct complementary circuits \(X,Y\in\mathcal{C}\) with \(X_{q}=Y_{q}=+\) and \(\forall i\in[n]:X_{s_{i}}Y_{t_{i}}=X_{t_{i}}Y_{s_{i}}=0\), or \(X_{s_{i}}=Y_{t_{i}}\) and \(X_{t_{i}}=Y_{s_{i}}\), i.e., a violation of type _(MV3)_. We claim that the circuits \(X:=C(B(v),q)\) and \(Y:=C(B(w),q)\) fulfill these conditions. First, we need to show that \(X\neq Y\). If the two circuits were equal, they would have to be degenerate on all dimensions spanned by \(v\) and \(w\). Then, by construction of \(O\), \(v\) and \(w\) could not fail the Szabo-Welzl condition (see Lemma 14). Next, we see that by definition \(C(B(v),q)_{q}=+\) and \(C(B(w),q)_{q}=+\), and both circuits are complementary. Finally, we show that for each dimension \(i\), either (i) \(X_{s_{i}}Y_{t_{i}}=X_{t_{i}}Y_{s_{i}}=0\) or (ii) \(X_{s_{i}}=Y_{t_{i}}\) and \(X_{t_{i}}=Y_{s_{i}}\). For every dimension \(i\) for which \(v_{i}=w_{i}\) (w.l.o.g. both are 0), both \(X_{t_{i}}=0\) and \(Y_{t_{i}}=0\). Therefore, condition (i) holds. For a dimension \(i\) for which \(v_{i}\neq w_{i}\), if at least one of the \(i\)-edges incident to \(v\) and \(w\) is degenerate, we have \(X_{s_{i}}=X_{t_{i}}=0\) (or \(Y_{s_{i}}=Y_{t_{i}}=0\)). Thus, condition (i) also holds in this case. For a dimension \(i\) in which both are non-degenerate, since \(v\) and \(w\) are a violation of type _(UV1)_, \(O(v)_{i}=O(w)_{i}\). By construction of \(O\) it must hold that \(X_{s_{i}}=Y_{t_{i}}\) and \(X_{t_{i}}=Y_{s_{i}}\), i.e., condition (ii) holds. Therefore, the circuits \(C(B(v),q)\) and \(C(B(w),q)\) form a violation of type _(MV3)_. It follows that P-OMCP as defined in Definition 26 is in UniqueEOPL and its promise version is in PromiseUEOPL.
2305.00557
Collective Relational Inference for learning heterogeneous interactions
Interacting systems are ubiquitous in nature and engineering, ranging from particle dynamics in physics to functionally connected brain regions. These interacting systems can be modeled by graphs where edges correspond to the interactions between interactive entities. Revealing interaction laws is of fundamental importance but also particularly challenging due to underlying configurational complexities. The associated challenges become exacerbated for heterogeneous systems that are prevalent in reality, where multiple interaction types coexist simultaneously and relational inference is required. Here, we propose a novel probabilistic method for relational inference, which possesses two distinctive characteristics compared to existing methods. First, it infers the interaction types of different edges collectively by explicitly encoding the correlation among incoming interactions with a joint distribution, and second, it allows handling systems with variable topological structure over time. We evaluate the proposed methodology across several benchmark datasets and demonstrate that it outperforms existing methods in accurately inferring interaction types. We further show that when combined with known constraints, it allows us, for example, to discover physics-consistent interaction laws of particle systems. Overall the proposed model is data-efficient and generalizable to large systems when trained on smaller ones. The developed methodology constitutes a key element for understanding interacting systems and may find application in graph structure learning.
Zhichao Han, Olga Fink, David S. Kammer
2023-04-30T19:45:04Z
http://arxiv.org/abs/2305.00557v3
# Collective Relational Inference for learning physics-consistent heterogeneous particle interactions ###### Abstract Interacting particle systems are ubiquitous in nature and engineering. Revealing particle interaction laws is of fundamental importance but also particularly challenging due to underlying configurational complexities. Recently developed machine learning methods show great potential in discovering pairwise interactions from particle trajectories in homogeneous systems. However, they fail to reveal interactions in heterogeneous systems that are prevalent in reality, where multiple interaction types coexist simultaneously and relational inference is required. Here, we propose a novel probabilistic method for relational inference, which possesses two distinctive characteristics compared to existing methods. First, it infers the interaction types of different edges _collectively_, and second, it uses a physics-induced graph neural network to learn _physics-consistent_ pairwise interactions. We evaluate the proposed methodology across several benchmark datasets and demonstrate that it is consistent with the underlying physics. Furthermore, we showcase its ability to outperform existing methods in accurately inferring interaction types. In addition, the proposed model is data-efficient and generalizable to large systems when trained on smaller ones, which contrasts with previously proposed solutions. The developed methodology constitutes a key element for the discovery of the fundamental laws that determine macroscopic mechanical properties of particle systems. ## 1 Introduction Interacting particle systems are ubiquitous in nature and engineering. Examples include chemical molecules [1], granular materials [2] and numerous others [3; 4; 5]. As macroscopic phenomena of such systems arise from microscopic interactions, revealing the interactions between particles and their governing laws is key to understand, model and predict their behavior. However, particle interactions are typically intricate involving a variety of factors such as contact, friction, electrostatic charge, gravity, and chemical interaction, each affecting the particles at various scales. For most systems, the ground-truth information about pairwise interactions remains unknown, and only the particle positions in time and space, along with properties such as mass, are directly accessible. Therefore, determining the pairwise interactions poses significant challenges. Recent progress in machine learning (ML) methods for learning particle dynamics has shown great promise in addressing some of these challenges. Various methods [6; 7; 8] have been developed for inferring the pairwise interactions from the observed particle trajectories. However, these methods are only applicable to homogeneous systems in which all pairwise interactions are identical. In reality, interacting particle systems are often heterogeneous, with particles experiencing various types of interactions. Hence, an approach that can simultaneously reveal the hidden pairwise interaction types and infer the unknown interaction law governing each interaction type in a heterogeneous system constitutes a necessary advancement in our understanding of particle systems. However, this task is considerably more challenging than its homogeneous counterpart. A few attempts to this problem have been made in recent years. This includes the neural relational inference (NRI) model proposed by Kipf et al. 
[9], which is built on the variational autoencoder (VAE) [10], and has shown promising results in inferring heterogeneous interactions. However, NRI inherits the assumption of VAE that input data are independent and identically distributed, and, therefore, infers the interaction types for different pairs of particles _independently_. The approach neglects the correlation among interactions on different edges. As the observed states of each particle are the consequence of the cumulative impact of all incoming interactions, conjecturing the interaction type of one edge should take into consideration the estimation of other relevant edges. Neglecting this aspect can result in a significant underperformance, as we will show with multiple examples. Other methods focusing on heterogeneous particle interactions include modular meta-learning [11], as proposed by Alet et al. [12]. This approach alternates between the simulated annealing step to update the predicted interaction type of every edge and the optimizing step to learn the interaction function for every type. However, the computation is very expensive due to the immense search space involved, which scales with \(\mathcal{O}(K^{|E|})\) for a particle system containing \(K\) different interactions and \(|E|\) pairs of interacting particles. Therefore, Alet et al. [12] uses the same encoder as NRI [9] to infer the pairwise interaction types. In another study, Chen et al. [13] enhance NRI by including a relation interaction module that accounts for the correlation among interactions. Additionally, the study integrates prior constraints, such as symmetry, into the learnt interactions. However, as our experiments will demonstrate, these additional mechanisms prove inadequate in accurately inferring interaction types. An additional limitation of the existing relational inference methods is that they are designed to infer heterogeneous interactions in systems with time-invariant neighborhood networks, _i.e._ where each particle consistently interacts with the same neighbors. In physical particle systems, it is typical for the network structure of interactions to undergo changes over time as a result of rearrangements. As we will demonstrate, current methods encounter difficulties in effectively learning systems that have an evolving graph topology. Moreover, existing methods do not take _physics-consistency_ as a strict requirement for the inferred pairwise interaction. As a result, their inferred interactions may violate the Newtonian principle of action-reaction. This emphasizes the need for a model to learn _physics-consistent_ pairwise interactions in a heterogeneous particle system. Here, we develop a novel probabilistic approach to learn heterogeneous interactions based on the generalized expectation-maximization (EM) algorithm [15]. The proposed method named **Collective Relational Inference** (CRI) overcomes the above-mentioned challenges, and simultaneously infers the type of inter-particle interactions by considering the correlation among different edges while learning _physics-consistent_ interaction law for multiple interactions. We demonstrate that the proposed framework is highly flexible, as it allows the integration of any compatible inference method, as evidenced by a proposed variant of CRI. Further, we propose an additional extension of CRI: the Evolving-CRI designed to address the challenge of relational inference with evolving graph topology. 
Finally, we empirically show that both CRI and Evolving-CRI significantly outperform state-of-the-art methods, as shown, for instance, by an achieved accuracy of 99% compared to 62% in predicting the heterogeneous electric charge interactions. Figure 1: **Comparison between (A) existing probabilistic approaches [9; 13; 14] and (B) our proposed method CRI for relational inference.** Previous approaches predict the interaction type of different edges _independently_ (_e.g._, the incoming edges of \(v_{1}\)). CRI takes the subgraph of each particle (_e.g._, \(S_{(1)}\)) as an entity. We learn the joint distribution of interaction type for all edges in the subgraph, allowing for modeling their _collective_ influence on particle states. ## 2 Fundamental concept behind the proposed methodology The objective of the proposed methodology is to learn _physics-consistent_ particle interaction laws in _heterogeneous_ systems. The primary difficulty in this task is that no ground-truth information on the interactions is available. Instead, only information on particle motion and physical properties such as mass is available. Our model named CRI is a novel probabilistic approach designed to infer the interaction types of different edges _collectively_. CRI differs from previous probabilistic methods [9; 13; 14] in how the probability distribution of the unknown interaction types is computed, as illustrated in Fig. 1. While existing probabilistic methods infer the interaction type of different edges independently, our approach takes into account subgraphs comprising a center node and its neighboring nodes as a collective entity and infers the interaction types of the edges within each subgraph _jointly_. The underlying idea behind this approach is that different interactions affect the movement of particles collectively. The subgraph representation naturally models the _collective_ influence from neighbors. In general, CRI is designed for relational inference for fixed underlying graph topology, and comprises two modules (as depicted in Fig. 2): 1) a probabilistic inference module that infers the joint distribution of interaction types of edges in each subgraph, and 2) a generative module that is a graph neural network capturing pairwise interactions to predict new states of particles. To ensure physics-consistency, we use the recently proposed physics-induced graph network for particle interaction (PIG'N'PI) [8] instead of the commonly used message-passing neural network [16] as the backbone for the graph neural network. The model is trained to predict particle movements in time and space based on the generalized expectation-maximization (EM) algorithm. This involves iteratively updating the inferred interaction types of edges through the inference module, alongside updating the learnable parameters of PIG'N'PI in the generative module. After training, we extract interaction functions of various types (such as pairwise force) from the edge function of PIG'N'PI. Additionally, we can employ the trained model to infer the interactions of similar systems that were not used for training. The details of the CRI methodology are presented in Sec. 5.2. It is worth noting that CRI is highly flexible, allowing for the integration of any compatible inference method. This is exemplified by using the variational approximation for the inference in CRI. The details of the assessment of this flexibility are presented in SI Sec. 8.2. 
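To make the collective treatment of a subgraph concrete, the following is a minimal sketch (ours). It enumerates all \(K^{|\Gamma(i)|}\) edge-type realizations of one subgraph and forms the predicted acceleration of the center particle as an expectation over a joint belief, in the spirit of panels (B)-(F) of Fig. 2; the two pairwise force functions are simple stand-ins for the learned PIG'N'PI edge functions, not the model used in the paper.

```python
# Minimal sketch (ours) of the "collective" part of CRI: enumerate all
# K^{|Gamma(i)|} edge-type realizations of one subgraph and form the expected
# acceleration as a probability-weighted sum over realizations.
import itertools
import numpy as np

def pair_force(r_i, r_j, k):
    stiffness = [0.5, 2.0][k]               # stand-in "interaction types"
    return -stiffness * (r_i - r_j)

def expected_acceleration(r_center, r_neighbors, m_center, p_realization, K=2):
    """p_realization[z] is the joint probability of realization z, where z
    assigns one of K types to every incoming edge of the subgraph."""
    acc = np.zeros_like(r_center)
    for z in itertools.product(range(K), repeat=len(r_neighbors)):
        f = sum(pair_force(r_center, r_j, k) for r_j, k in zip(r_neighbors, z))
        acc += p_realization[z] * f / m_center
    return acc

# Subgraph with two neighbors: 4 possible realizations of the edge types.
r1, neigh = np.array([0.0, 0.0]), [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
p = {z: 0.25 for z in itertools.product(range(2), repeat=2)}  # uniform joint belief
print(expected_acceleration(r1, neigh, m_center=1.0, p_realization=p))
```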
Figure 2: **Framework of CRI.** The proposed CRI, shown in the gray area, takes particle states at every time step and predicts accelerations. Dashed squares represent different objects (_e.g._, the particle system at a given time step, the graphical representation of the particle system, etc.). Solid squares correspond to different operators. For simplicity, a case with only two different types of interactions is shown but the proposed method is general. (A) The particle system over time. At every time step, each particle is described by its position and velocity, and the time-invariant property, _e.g._, mass. (B) All possible realizations denoted by the random variable \(\mathsf{z}_{(1)}\) for the subgraph \(S_{(1)}\). (C) The subgraph \(S_{(1)}\) at time \(t\). (D) The subgraph \(S_{(1)}\) with different realizations at time \(t\), which are the input of the generative model. (E) The predicted acceleration of \(v_{1}\) of different realizations. (F) The final predicted acceleration which is the expectation over the estimated probability \(\mathsf{z}_{(1)}\). (G) The ground-truth acceleration which is computed from particle states between two consecutive time steps. Furthermore, we extend our proposed CRI methodology to tackle the problem of relational inference in systems with evolving graph topology, where particles may interact with different neighbors at different times. We introduce a novel algorithm called Evolving-CRI, as shown in Fig. 3. Evolving-CRI is based on the fundamental concept of updating the posterior distribution of possible interaction types for a newly appearing edge. This is achieved by marginalizing out the posterior distribution of all correlated edges. As a result, the interaction type inferred for each edge captures the correlation with other incoming edges, which collectively influence the particle states. The details of Evolving-CRI can be found in Sec. 5.3. ## 3 Results and discussion To evaluate the performance of the proposed methodology, we conduct two sets of experiments. First, we consider various heterogeneous interacting particle systems with a fixed graph topology, wherein each particle interacts with the same neighbors throughout all the time steps, and evaluate if CRI is able to infer different interaction types and learn the corresponding heterogeneous pairwise forces correctly. Second, we consider a heterogeneous system having an evolving graph topology where each particle interacts with different neighbors at different time steps, and evaluate the performance of Evolving-CRI. For all simulations, we assume that the units are dimensionless since the considered relational inference methods are general and applicable to various scales. The simulation details are outlined in Sec. 5.5. The algorithms are assessed in terms of 1) the accuracy of the relational inference task to determine the interaction types and 2) the physics consistency of the learned particle interactions in a heterogeneous system. The evaluation metrics are defined in Sec. 5.4. We compare CRI and Evolving-CRI to two closely related works **NRI**[9] and **MPM**[13], which represent different existing methods for inferring heterogeneous interactions. We note that ModularMeta [12] is not included as a baseline because it uses the same encoder to infer the edge type as NRI. 
To ensure a fair comparison, we have made adaptations to both NRI and MPM by replacing their original decoders with PIG'N'PI, resulting in **NRI-PIGNPI** and **MPM-PIGNPI**, respectively. Figure 3: **Framework of Evolving-CRI.** (A) The particle system at various moments in time. Particles may interact with different neighbors at different time steps. The interaction radius of particle 3 (orange) is indicated by a black circle. At each time step, the feature vector \(\mathbf{x}_{i}^{t}\) of each particle \(v_{i}\) contains its position and velocity, and the time-invariant property, _e.g._, mass. (B) At each time step, we update the estimation of the posterior distributions of the interaction type for edges appearing at this time step, using Eq. 11. (C) The estimated posteriors after observing the system across all time steps. (D) Example subgraph \(S_{(3)}\) at time \(t\). (E) The example subgraph with different realizations of edge types at time \(t\), which is the input for the generative model. (F) The predicted acceleration, which is the expectation over the inferred posterior distribution in (C). (G) The ground-truth acceleration computed from particle states between two consecutive time steps. The detailed setup of the baselines for the various experiments is described in Sec. 5.7. Additionally, we report the results of CRI using variational approximation (Var-CRI) in the SI as the novelty of our contribution lies outside of the realm of variational inference. ### Performance on the system with fixed topology The first set of considered experiments consists of heterogeneous systems where particles always interact with the same neighbors, _i.e._ the underlying interaction graph is time-invariant. Within the system, various interaction types coexist, resulting in heterogeneity in pairwise interactions. The selected cases, which have been used in prior research [9], can serve as a benchmark case study due to their ability to encompass a broad spectrum of particle interaction characteristics. These include dependencies on particle properties (_e.g._, mass), dependencies on interaction properties (_e.g._, stiffness), and varying degrees of smoothness. #### 3.1.1 Spring N5K2 First, we test the relational inference on a spring-mass system containing five particles that are randomly connected by two different types of springs (denoted as Spring N5K2). The relational accuracy in Fig. 4 A1 shows that CRI correctly infers the interaction types, achieving a relational accuracy close to 100% even when trained on limited data (500 simulations). All tested baselines require significantly more training simulations (10k) to achieve an accuracy that is at best slightly above 90%. Next, we evaluate the consistency of the inferred pairwise forces with the actual pairwise forces (_i.e._ ground truth). It is important to note that the evaluation of the consistency between the inferred and actual pairwise forces can only be conducted for CRI, NRI-PIGNPI and MPM-PIGNPI. This is because the original NRI and MPM algorithms learn a high dimensional embedding of the pairwise force, which is not easy to interpret, making it impossible to compute \(\mathsf{MAE}_{\mathsf{eff}}\). Our results show that CRI can successfully learn the underlying pairwise forces with as few as 500 simulations for training, as demonstrated in Fig.
4 A2), while NRI-PIGNPI requires approximately 10k simulations for training to achieve a similar level of \(\mathsf{MAE}_{\mathsf{eff}}\), and MPM-PIGNPI yields a larger \(\mathsf{MAE}_{\mathsf{eff}}\) in comparison. Lastly, we evaluate the supervised learning performance of the predicted states (position and velocity) after 10 time steps (see Fig. 4 C1). Although most of the baselines achieve a similar \(\mathsf{MAE}_{\mathsf{state}}\) to CRI when a large number of simulations (_e.g._, 10k) are used for training, it is noteworthy that the \(\mathsf{MAE}_{\mathsf{state}}\) of CRI is significantly smaller than that of the baselines when trained with a small number of samples (_e.g._, 500). More comprehensive results are provided in SI Table 3. #### 3.1.2 Spring N10K2 We proceed to test the performance of CRI on larger systems. Here, the experiments consist of the same particles and the same two interaction types as in the Spring N5K2 case. However, the particle system now comprises 10 particles (denoted as Spring N10K2), resulting in a larger number of correlated edges within the network. Our findings indicate that CRI continues to outperform all of the baselines in terms of accuracy, as demonstrated in Fig. 4 B1. Although the accuracy of CRI drops to 87% when trained with only 500 simulations (compare Fig. 4 B1 to Fig. 4 A1), it achieves an accuracy of over 98% with 10k simulations, which is comparable to the results obtained with the smaller system, Spring N5K2. Conversely, the accuracy of the baselines drops to 80% or lower, even when trained with 10k simulations. These results suggest that CRI has a superior ability to infer diverse interactions compared to the baseline methods. Furthermore, we assess the consistency of the inferred pairwise forces with the underlying pairwise forces by quantitatively evaluating the \(\mathsf{MAE}_{\mathsf{eff}}\), as shown in Fig. 4 B2. The results indicate that CRI is able to learn the underlying pairwise force effectively for this larger system with only a small amount of training data (1k), whereas the baselines require considerably more training data (10k) to achieve a comparable \(\mathsf{MAE}_{\mathsf{eff}}\). After 10 time steps, we assess the \(\mathsf{MAE}_{\mathsf{state}}\) and note that, as anticipated, both the baselines and CRI exhibit inferior performance when compared to the Spring N5K2 scenario. Nevertheless, CRI continues to outperform the baselines by a significant margin in this instance (see Fig. 4 B3). In summary, this case study highlights that while larger systems may require more training data, CRI is effective in large systems compared to the baselines (see also SI Table 4). Figure 4: **Test performances for the spring and charge experiments.** Mean and standard derivation are computed from five independent experiments. (left column) Accuracy of the interaction type inference. (center column) MAE of pairwise force. NRI and MPM cannot infer pairwise force. (right column) MAE of state (position and velocity combined) after 10 simulation steps. #### 3.1.3 Evaluation of generalization ability Based on the experiments conducted, it can be concluded that CRI performs well on systems of varying sizes. However, an important question still needs to be addressed: Can the trained CRI models be generalized to novel systems? To address this question, we employ Spring N5K2 as the training and validation dataset and evaluate the best-performing model (selected using the validation set of Spring N5K2) on the test set of Spring N10K2. 
Both systems share the same governing interactions (_i.e._ the same interaction types and parameters). Hence, the trained model on Spring N5K2 should be able to generalize to Spring N10K2. The results show that CRI has an excellent generalization ability, as evidenced by values of Accuracy, MAE\({}_{\text{ef}}\) and MAE\({}_{\text{state}}\) comparable to those obtained in Spring N5K2 (compare Fig. 4 C1-C3 to Fig. 4 A1-A3; and see SI Table 5). Specifically, the accuracy of CRI in inferring interactions remains greater than 99%, while the accuracy of the baselines drops to 70%. This indicates that the inference module of the baselines cannot generalize well to similar systems, highlighting CRI's superior generalization ability. The MAE\({}_{\text{ef}}\) values of NRI-PIGNPI and MPM-PIGNPI in Spring N10K2 are similar to those in Spring N5K2, indicating that the generative module PIG'N'PI in the baselines has some level of generalization ability to systems with similar underlying governing interactions. It should be noted that MAE\({}_{\text{ef}}\) evaluates the quality of the trained generative model exclusively, without taking into account the accuracy of the inference process (see Sec. 5.4). The evaluation of MAE\({}_{\text{state}}\) after 10 time steps takes into account both the quality of interaction type inference and the learnt pairwise force. The higher MAE\({}_{\text{state}}\) values obtained by the baselines indicate their suboptimal inference accuracy. Conversely, CRI's excellent performance in predicting particle states can be attributed to its superior ability to accurately infer interaction types. #### 3.1.4 Spring N5K4 As the number of simultaneous interaction types in a system increases, the difficulty of learning heterogeneous interactions also increases significantly. In this experiment, we assess the effectiveness of CRI in handling such complex systems by considering five particles that are randomly connected by four different types of springs (denoted Spring N5K4). The results show that CRI correctly infers the interaction type and achieves an accuracy of approximately 95% when trained on 5k or more simulations (see Fig. 4 D1, full data shown in SI Table 6). For comparison, the baselines achieve an accuracy of only about 60%, which is significantly lower than the performance of the proposed CRI. Furthermore, CRI correctly learns the four different types of pairwise interactions, as indicated by low values of MAE\({}_{\text{ef}}\), which reach similar values as for the previous cases (see Fig. 4 D2). In contrast, the baselines exhibit a large MAE\({}_{\text{ef}}\) and appear to struggle with an accurate prediction of the pairwise interaction. Finally, we observe that CRI achieves a significantly better performance in the supervised learning task than the baselines (see Fig. 4 D3). These findings demonstrate that CRI effectively learns heterogeneous systems, even in scenarios with numerous interaction types, while the baselines are unable to achieve precise learning performance in such circumstances. #### 3.1.5 Charge N5K2 In this experiment, we evaluate CRI's capability to learn various physical pairwise interactions by replacing the previously considered spring force interaction with attractive/repulsive charge forces (denoted as Charge N5K2). This involved a system of five particles, each randomly assigned an electric charge of either +1 or -1. Further details about the simulation can be found in Sec. 5.5.
It should be noted that in this experiment the electric charges assigned to each particle are not included as input in the node feature \(\mathbf{x}_{i}^{t}\) for CRI and the baselines. Therefore, both CRI and the baselines have to infer the interaction type (attractive/repulsive), along with the force function of each interaction type. CRI demonstrates superior performance compared to the baselines in inferring and learning the heterogeneous electric charge forces in the Charge N5K2 experiment. CRI achieves an inference accuracy of around 99% (see Fig. 4 E1) with only 500 simulations for training, while the baselines achieve an accuracy of approximately 62% even after being trained with 10k simulations. Moreover, CRI exhibits better learning performance in terms of pairwise force compared to the baselines (see Fig. 4 E2). The supervised learning performance also confirms these observations, with CRI outperforming the baselines significantly (see Fig. 4 E3). These findings are consistent with the previous observations regarding the performance of CRI and baselines, highlighting the superiority of CRI in both the inference accuracy and the quality of learning the pairwise interactions. Detailed results are provided in SI Table 7. ### Performance evaluation on a system with evolving graph topology Realistic systems are often more complex than the benchmark problems considered earlier. These systems usually consist of more particles, and have particle interactions that are restricted to some neighborhood defined by a critical distance, resulting in a changing topology of the underlying particle-interaction graph over time. To evaluate the ability of CRI to handle such complex systems, we consider simulations, adapted from [17] that model the crystallization behavior of two different types of particles (_e.g._, water and oil) when mixed together (see Fig. 5-left). The system consists of 100 particles with identical mass. The Lennard-Jones (LJ) and dipole-dipole potentials are the governing forces behind particle interactions. All particles in close proximity experience the LJ potential, while the dipole-dipole interaction is attractive for identical particles and repulsive for Figure 5: Concept of Evolving-CRI to learn the heterogeneous interactions in crystallization problems. (left) System evolution during crystallization. Yellow and red colors indicate two different kinds of particles with heterogeneous interactions. (right) Schematic of Evolving-CRI consisting of an inference module and a generative module. Evolving-CRI is trained to predict the ground-truth acceleration. After training, the heterogeneous interactions are implicitly learnt. Figure 6: **Performances of Evolving-CRI on the crystallization problem with an evolving graph topology.** (A1-A3) Interpolation and (B1-B3) extrapolation results of Evolving-CRI. Mean and standard derivation are computed from five independent experiments. (A1 and B1) Accuracy in inferring the interaction type. (A2 and B2) Mean Absolute Error in particle acceleration. (A3 and B3) Mean Absolute Error in pairwise interaction. NRI and MPM cannot explicitly predict the pairwise force. non-identical ones. As a result of these conditions, particles rearrange over time and eventually form crystalline structures (see Fig. 5-left). We evaluate the interpolation ability of each model by randomly splitting the time steps of the entire simulation into training, validation and testing parts. 
We also test the extrapolation ability of the models by using the first part of the entire simulation for training and validation, and testing on the remaining time steps (details are provided in Sec. 5.5). To train the model, the position and velocity of particles are used as input features, while the ground-truth accelerations serve as the target for training. However, since the topology of the interaction graph varies at different time steps, adjustments to the baselines are necessary, as described in detail in Sec. 5.7. The results for both interpolation and extrapolation demonstrate that Evolving-CRI outperforms all considered baselines significantly (see Fig. 6 and SI Tables 8 & 9). Specifically, Evolving-CRI is capable of correctly predicting the edge type (see Fig. 6 a), learning the _physics-consistent_ heterogeneous interactions without any direct supervision (see Fig. 6 b) and predicting the particle states at the next time step (see Fig. 6 c). This contrasts with the baselines, which consistently struggle to learn heterogeneous interactions in the particle system with evolving graph topology. ## 4 Conclusions In this paper, we propose the collective relational inference (CRI) method that infers the heterogeneous interactions in physical particle systems _collectively_. We extend the proposed CRI method to a variant called Evolving-CRI, capable of handling more complex scenarios where the underlying graph topology varies over time. We conduct extensive numerical experiments to evaluate the performance of CRI and Evolving-CRI in comparison to various baselines across different experimental settings. The results demonstrate that the CRI and Evolving-CRI methods exhibit a significant improvement over the baseline models. Specifically, both CRI and Evolving-CRI exhibit strong generalization ability while being data-efficient. Additionally, the proposed framework is highly adaptable and can easily integrate any compatible approximate inference method to infer the joint probability of edge types. In summary, our experiments highlight the effectiveness and versatility of the proposed framework, demonstrating its potential to significantly improve relational inference in diverse physical applications. The developed methodology provides a flexible and robust approach to the discovery of physical laws for heterogeneous materials, and constitutes a tool supporting a better understanding of complex (sustainable or recycled) materials as they are used in advanced processes such as additive manufacturing. ## 5 Method ### Particle systems We consider particles (point masses) governed by Newton's laws of motion, neglecting any external forces. In particular, we focus on particle systems with heterogeneous interactions. Heterogeneity can manifest in systems in two ways: either the parameters of the inter-particle interactions vary between different pairs of particles, or the particles themselves vary (_e.g._ particle types A and B), leading to different interaction patterns between them. Similar to [9], we make the assumption that the ground-truth pairwise interactions are unknown. However, the number of distinct interactions, denoted by \(K\), is known, and the information on the particle trajectories is accessible. Our goal is twofold: first, to infer which particles have the same type of interaction, and second, to learn the pairwise interaction function (_e.g._, the pairwise force) for different types of interactions.
We model the particle system as a directed graph \(G=(V,E)\), where nodes \(V=\{v_{1},v_{2},\ldots,v_{|V|}\}\) represent the particles and the directed edges \(E=\{e_{i,j}\mid v_{j}\) acts on \(v_{i}\}\) represent the interactions. Each particle \(v_{i}\) is characterized by its time-invariant properties, such as mass \(m_{i}\), and its time-dependent properties such as its position \(\mathbf{r}_{i}^{t}\) and its velocity \(\mathbf{\dot{r}}_{i}^{t}\). We use \(\mathbf{x}_{i}^{t}=[\mathbf{r}_{i}^{t},\mathbf{\dot{r}}_{i}^{t},m_{i}]\) to denote the input features of \(v_{i}\) at time \(t\). We assume the positions of all particles are observed at discrete points in time. Based on the position information \(\mathbf{r}_{i}^{t}\), velocity \(\mathbf{\dot{r}}_{i}^{t}\) and acceleration \(\mathbf{\ddot{r}}_{i}^{t}\) are computed. Thus, \(\mathbf{x}_{i}^{t}\) and \(\mathbf{\ddot{r}}_{i}^{t}\) (\(\forall i,t\)) are available during training and testing. The neighbors \(\Gamma(i)=\{v_{j}\mid e_{i,j}\in E\}\) of node \(v_{i}\) are particles that interact with \(v_{i}\). Here, we consider two different cases: First, the graph topology remains fixed during the entire time, _i.e._ the neighbors of each particle do not change over time. Second, the underlying graph \(G\) has an evolving topology in which particles interact with different neighbors at different times. In practice, the latter corresponds to more realistic physical systems where each particle interacts only with nearby particles, which are within some cutoff radius. In both cases, the interaction type between any two particles remains unchanged over time, irrespective of whether the underlying graph topology changes or not. ### Collective Relational Inference (CRI) CRI is designed for particle systems in which each particle has a fixed neighborhood structure throughout all time steps. The number of neighbors of \(v_{i}\) is denoted as \(|\Gamma(i)|\). The framework, as illustrated in Fig. 2, can be viewed as a generative model [18], which predicts the observed particle trajectories, specifically by predicting the accelerations that are used to update the states of particles at each time step. The ground-truth acceleration is computed by particle states of two consecutive time steps. We assign each edge \(e_{i,j}\) a latent categorical random variable \(\text{z}_{i,j}\). \(p(\text{z}_{i,j}=z)\) is then the probability of \(e_{i,j}\) having interaction type \(z\) (\(z=1,2,\ldots,K\)). Rather than inferring \(p(\text{z}_{i,j})\) for different edges independently, we consider the subgraph \(S_{(i)}\) (\(v_{i}\in V\)) spanning a center node \(v_{i}\) and its neighbors \(\Gamma(i)\) as an entity. We use the random variable \(\text{z}_{(i)}\) to represent the realization of the edge type of the subgraph \(S_{(i)}\), which is the combination of realizations of the edge types for all edges in \(S_{(i)}\). The probability \(p(\text{z}_{(i)})\) captures the joint distribution of the realizations for all edges in \(S_{(i)}\). We use \(\phi_{\text{z}_{(i)}}(j)\in\{1,2,\ldots,K\}\) to denote the interaction type \(\text{z}_{i,j}\) of edge \(e_{i,j}\) given the realization \(\text{z}_{(i)}\) of subgraph \(S_{(i)}\). For example, in Fig. 2 B, \(\text{z}_{(1)}=r2\) corresponds to \(\{\phi_{\text{z}_{(1)}}(2)=1,\phi_{\text{z}_{(1)}}(3)=2)\}\), assuming that the color blue indicates type 1 and green indicates type 2. 
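To illustrate the notation, the following short Python sketch (illustrative only; the function and variable names are not from the released code) enumerates the \(K^{|\Gamma(i)|}\) joint edge-type realizations of a subgraph and the corresponding \(\phi\) mapping from neighbors to edge types.

```python
from itertools import product

def enumerate_realizations(neighbors, K):
    """All joint edge-type realizations of the subgraph spanned by a center node.

    neighbors : list of neighbor indices j of the center node i
    K         : number of distinct interaction types
    Returns a list of dicts, each mapping neighbor j -> edge type (the phi mapping).
    """
    return [dict(zip(neighbors, combo)) for combo in product(range(K), repeat=len(neighbors))]

# Center particle 1 with neighbors {2, 3} and K = 2 types gives 2^2 = 4 realizations;
# {2: 0, 3: 1} corresponds to phi(2) = 1, phi(3) = 2 in the one-based notation used above.
print(enumerate_realizations([2, 3], K=2))
```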
Given the edge type configuration \(\text{z}_{(i)}\) of the subgraph, we adapt PIG'N'PI [8] to incorporate different interaction types to predict the acceleration of the center node \(v_{i}\). Specifically, \(K\) different neural networks \(NN_{\theta_{1}}^{1}\), \(NN_{\theta_{2}}^{2}\),..., \(NN_{\theta_{K}}^{K}\) are used to learn \(K\) different interactions in the edge part of PIG'N'PI. Here, we consider the same architecture but different sets of parameters for these neural networks, and hence denote them as \(NN^{1}\), \(NN^{2}\),..., \(NN^{K}\). The learnable parameters in these \(K\) neural networks are denoted as \(\Theta=\{\theta_{1},\theta_{2},\ldots,\theta_{K}\}\). In this work, we learn the heterogeneous pairwise forces, but PIG'N'PI could also be used to learn the pairwise potential energy, as demonstrated in [8]. The predicted acceleration given \(\text{z}_{(i)}\) and the current positions and velocities is computed by \[\forall i:\quad\mathbf{\hat{f}}_{i|\text{z}_{(i)}}^{t}=\sum_{j\in\Gamma(i)}NN ^{\phi_{\text{z}_{(i)}}(j)}(\mathbf{x}_{i}^{t},\mathbf{x}_{j}^{t})/m_{i} \tag{1}\] We use Gaussian mixture models [19] to represent the probability of the ground-truth accelerations. The conditional likelihood given the subgraph realization \(\text{z}_{(i)}\) is computed by fitting the ground-truth accelerations into the multivariate normal distribution whose center is the predicted acceleration of PIG'N'PI, as expressed by \[l(\Theta\mid\mathbf{\bar{r}}_{i}^{t},\text{z}_{(i)})=p(\mathbf{\bar{r}}_{i}^{ t}\mid\Theta,\text{z}_{(i)})=\mathcal{N}\left(\mathbf{\bar{r}}_{i}^{t}\mid \mathbf{\hat{f}}_{i|\text{z}_{(i)}}^{t},\sigma^{2}\boldsymbol{I}\right) \tag{2}\] where \(\sigma^{2}\) is the pre-defined variance for the multivariate normal distributions. We denote the prior probability of any subgraph having realization \(z\) by \(\pi_{z}=p(\text{z}_{(i)}=z)\) (\(\forall i\)). \(\Upsilon\) is the set of all possible realizations of the subgraph. If all particles have the same number of neighbors, \(|\Upsilon|\) is equal to \(K^{|\Gamma(i)|}\) (\(\forall i\)). The prior distribution \(\boldsymbol{\pi}=\{\pi_{1},\pi_{2},\ldots,\pi_{\Upsilon}\}\) and the neural network parameters \(\Theta\) are the learnable parameters, which are denoted by \(\boldsymbol{\Theta}=(\Theta,\boldsymbol{\pi})\). We infer unknown parameters \(\boldsymbol{\Theta}\) by maximum likelihood estimation over the marginal likelihood given the ground-truth accelerations following: \[L(\boldsymbol{\Theta})=\prod_{i=1}^{|V|}\sum_{z=1}^{|\Upsilon|}\underbrace{p( \text{z}_{(i)}=z)}_{\pi_{z}}\prod_{t}l(\Theta\mid\mathbf{\bar{r}}_{i}^{t}, \text{z}_{(i)}=z) \tag{3}\] Directly optimizing \(\log L(\boldsymbol{\Theta})\) in Eq. 3 with respect to \(\boldsymbol{\Theta}=(\Theta,\boldsymbol{\pi})\) is intractable because of the summation in the logarithm. Therefore, we design the inference model under the generalized EM framework [15], which is an effective method to find the maximum likelihood estimate of parameters in a statistical model with unobserved latent variables. Overall, the EM iteration alternates between the expectation (E) step, which computes the expectation of the log-likelihood evaluated using the current estimation of the parameters (denoted Q function), and the maximization (M) step, which updates the parameters by maximizing the Q function found in the E step. 
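Before turning to the EM updates, Eqs. 1 and 2 can be summarized in a short PyTorch-style sketch. This is illustrative only: the edge networks are generic modules standing in for the PIG'N'PI edge part, and all names and data structures are assumptions rather than the released implementation.

```python
import math
import torch

def predicted_acceleration(x_i, x_neighbors, phi, edge_nets, m_i):
    """Eq. 1: sum over neighbors of NN^{phi(j)}(x_i, x_j), divided by the mass m_i.

    x_neighbors : dict {j: feature tensor of neighbor j}
    phi         : dict {j: edge type index} for one subgraph realization
    edge_nets   : list of K neural networks, one per interaction type
    """
    force = 0.0
    for j, x_j in x_neighbors.items():
        force = force + edge_nets[phi[j]](torch.cat([x_i, x_j], dim=-1))
    return force / m_i

def gaussian_log_likelihood(accel_true, accel_pred, sigma2):
    """Eq. 2: isotropic Gaussian log-density of the ground-truth acceleration."""
    d = accel_true.numel()
    sq_err = ((accel_true - accel_pred) ** 2).sum()
    return -0.5 * (sq_err / sigma2 + d * math.log(2.0 * math.pi * sigma2))
```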
In the expectation (E) step, we compute the posterior probability of \(\text{z}_{(i)}\) given the ground-truth acceleration and the current estimation of the learnable parameters \(\boldsymbol{\Theta}^{now}\) by applying Bayes' theorem: \[\begin{split} p(\text{z}_{(i)}=z\mid\vec{\mathbf{r}}_{i}^{1:T},\boldsymbol{\Theta}^{now})&=\frac{p(\text{z}_{(i)}=z,\vec{\mathbf{r}}_{i}^{1:T}\mid\Theta^{now})}{\sum_{z^{\prime}}p(\text{z}_{(i)}=z^{\prime},\vec{\mathbf{r}}_{i}^{1:T}\mid\Theta^{now})}\\ &=\frac{\pi_{z}^{now}\prod_{t}p(\vec{\mathbf{r}}_{i}^{t}\mid\Theta^{now},\text{z}_{(i)}=z)}{\sum_{z^{\prime}}\pi_{z^{\prime}}^{now}\prod_{t}p(\vec{\mathbf{r}}_{i}^{t}\mid\Theta^{now},\text{z}_{(i)}=z^{\prime})}\end{split} \tag{4}\] where \(p(\vec{\mathbf{r}}_{i}^{t}\mid\Theta^{now},\text{z}_{(i)}=z)\) is computed by Eq. 2. With the posterior \(p(\text{z}_{(i)}\mid\vec{\mathbf{r}}_{i}^{1:T},\boldsymbol{\Theta}^{now})\), the \(Q\) function of CRI becomes: \[\begin{split} Q_{CRI}(\boldsymbol{\Theta}\mid\boldsymbol{\Theta}^{now})&=\sum_{i=1}^{N}\mathbb{E}_{\text{z}_{(i)}\sim p(\text{z}_{(i)}|\vec{\mathbf{r}}_{i}^{1:T},\boldsymbol{\Theta}^{now})}\log\pi_{\text{z}_{(i)}}\\ &+\sum_{i=1}^{N}\mathbb{E}_{\text{z}_{(i)}\sim p(\text{z}_{(i)}|\vec{\mathbf{r}}_{i}^{1:T},\boldsymbol{\Theta}^{now})}\sum_{t=1}^{T}\log l(\Theta\mid\vec{\mathbf{r}}_{i}^{t},\text{z}_{(i)})\end{split} \tag{5}\] In the maximization (M) step, we update the prior \(\boldsymbol{\pi}\) and \(\Theta\) by maximizing \(Q_{CRI}(\boldsymbol{\Theta}|\boldsymbol{\Theta}^{now})\). Note that \(\boldsymbol{\pi}\) has an analytic solution but \(\Theta\) does not (see Sec. 8.3 for details). We take one gradient ascent step to update \(\theta_{1},\theta_{2},\ldots,\theta_{K}\). We iteratively update the posterior probabilities of different realizations for each subgraph in the E step, and the learnable parameters \(\Theta\) in the \(K\) different edge neural networks and the priors \(\boldsymbol{\pi}\) in the M step. Convergence to a (local) optimum is guaranteed by the generalized EM procedure [15]. After training, \(NN^{1}\),..., \(NN^{K}\) approximate \(K\) different pairwise interaction functions. By finding the most probable realization of edge types in each subgraph, the interaction type for every edge is determined by the \(\phi\) mapping. The detailed derivation and implementation of CRI are provided in SI Sec. 8.3. It should be noted that due to the exact computation of the expectation, the computational complexity \(\mathcal{O}(N\cdot K^{|\Gamma|})\) (\(|\Gamma|\) is the number of neighbors of each particle) of CRI limits its application to systems with few interacting particles. However, any compatible inference method can be built into CRI to approximate the expectation. To demonstrate the flexibility of CRI, we use, for instance, the basic form of the variational method [19] to approximate the expectation in \(Q(\boldsymbol{\Theta}\mid\boldsymbol{\Theta}^{now})\). The derivation and results of this CRI variant, named **Variational Collective Relational Inference** (Var-CRI), are discussed in SI Sec. 8.2. Other potential options of inference methods include advanced variational approximation methods [20; 21] and Markov Chain Monte Carlo (MCMC) techniques [22].
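To make the E and M steps above concrete, the following is a minimal, illustrative sketch (not the released implementation; tensor shapes and function names are assumptions) of the posterior computation of Eq. 4 in log-space and of the analytic prior update obtained by maximizing the first term of Eq. 5.

```python
import torch

def e_step_log_posterior(loglik_tz, log_pi):
    """Eq. 4 for one center node: posterior over the R = |Upsilon| subgraph realizations.

    loglik_tz : tensor of shape [T, R], per-time-step log-likelihoods from Eq. 2
    log_pi    : tensor of shape [R], log prior probabilities of the realizations
    """
    log_joint = log_pi + loglik_tz.sum(dim=0)              # log pi_z + sum_t log p(r_i^t | z)
    return log_joint - torch.logsumexp(log_joint, dim=0)   # normalize over realizations z

def m_step_prior(log_posteriors):
    """Analytic prior update: average posterior responsibilities over all center nodes.

    log_posteriors : tensor of shape [N, R] stacking the E-step outputs of all nodes
    """
    return log_posteriors.exp().mean(dim=0)                # maximizes the first term of Eq. 5
```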
### Evolving Collective Relational Inference (Evolving-CRI) The basic form of CRI presented in Sec. 5.2 is tailored for relational inference in which particles consistently interact with the same neighbors. However, in various real-life scenarios, particles may interact with varying neighbors at different times, causing the underlying graph topology to change over time. To address the challenge of inferring relations in systems with evolving graph topology, we adapt CRI and develop a new algorithm called **Evolving-CRI**, as shown in Fig. 3. As in CRI, we use the random variable \(\text{z}_{i,j}\in K\) to represent the interaction type of \(e_{i,j}\). The fundamental concept behind Evolving-CRI involves updating the posterior distribution over \(\text{z}_{i,j}\) of a newly appearing edge by marginalizing out the posterior distribution of all other appearing edges. As a result, the interaction type inferred for each edge captures the correlation with other incoming edges, which collectively influence the particle states. It is worth noting that our proposed approach for relational inference with evolving graph topology is different from the concept of _dynamic relational inference_ [23; 24; 25], where the interaction type between two particles can change over time. In our case, the interaction type of any edge remains the same over time, but the edges may not always exist in the underlying interaction graph. Here, we denote the neighbors of \(v_{i}\) at time \(t\) by \(\Gamma^{t}(i)\) and all neighbors of \(v_{i}\) across all time steps by \(\Gamma(i)=\bigcup_{t}\Gamma^{t}(i)\). Following the approach of CRI (Eq. 1), the predicted acceleration at time \(t\) for the different edge types is computed by PIG'N'PI: \[\forall i:\quad\mathbf{\hat{f}}_{i|\mathbf{z}_{i,1},\dots,\mathbf{z}_{i,|\Gamma^{t}(i)|}}^{t}=\sum_{j\in\Gamma^{t}(i)}NN^{\mathbf{z}_{i,j}}(\mathbf{x}_{i}^{t},\mathbf{x}_{j}^{t})/m_{i} \tag{6}\] To compute the conditional likelihood given the different realizations of the edge types, we fit the ground-truth accelerations by the multivariate normal distribution with the predicted acceleration as the mean: \[l(\Theta\mid\vec{\mathbf{r}}_{i}^{t},\mathbf{z}_{i,1},\dots,\mathbf{z}_{i,|\Gamma^{t}(i)|})=p(\vec{\mathbf{r}}_{i}^{t}\mid\Theta,\mathbf{z}_{i,1},\dots,\mathbf{z}_{i,|\Gamma^{t}(i)|})=\mathcal{N}\left(\vec{\mathbf{r}}_{i}^{t}\mid\mathbf{\hat{f}}_{i|\mathbf{z}_{i,1},\dots,\mathbf{z}_{i,|\Gamma^{t}(i)|}}^{t},\sigma^{2}\boldsymbol{I}\right) \tag{7}\] where \(\mathbf{\hat{f}}_{i|\mathbf{z}_{i,1},\dots,\mathbf{z}_{i,|\Gamma^{t}(i)|}}^{t}\) is computed by Eq. 6. We denote the prior probability of any edge \(e_{i,j}\) having the interaction type realization \(z\) by \(\tau_{z}=p(\mathbf{z}_{i,j}=z)\) and the prior distribution by \(\boldsymbol{\tau}=\{\tau_{1},\dots,\tau_{K}\}\) (\(K\) is the number of different interactions). The learnable parameters in Evolving-CRI are \(\boldsymbol{\Theta}=(\Theta,\boldsymbol{\tau})\). In the expectation step, we infer by induction the posterior distribution over different interaction types of each edge given the ground-truth accelerations and the current estimation of the learnable parameters \(\boldsymbol{\Theta}^{now}\). At the start (\(t=0\)), \(p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{0},\boldsymbol{\Theta}^{now})\) is equal to the prior \(\tau_{\mathbf{z}_{i,j}}^{now}\) as there is no information available about the particle states.
Suppose that the posterior distributions \(p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{1:t-1},\boldsymbol{\Theta}^{now})\), where \(\vec{\mathbf{r}}_{i}^{1:t-1}\) is the ground-truth accelerations of \(v_{i}\) until time \(t-1\) for \(t\geq 1\), is known, we update the posterior \(p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{1:t},\boldsymbol{\Theta}^{now})\) for any edge \(e_{i,j}\) that appears at time \(t\) by the rule of sum: \[p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{1:t},\Theta^{now})=\sum_{\mathbf{ z}_{i,-j}}p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i}^{1:t}, \boldsymbol{\Theta}^{now}) \tag{8}\] where \(\sum_{\mathbf{z}_{i,-j}}\) sums over all realizations of the other incoming edges in \(S_{(i)}\) at time \(t\) except for \(e_{i,j}\). The posterior distribution \(p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i}^{1:t}, \boldsymbol{\Theta}^{now})\) in Eq. 8 is computed by applying Bayes' theorem: \[p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i}^{1: t},\boldsymbol{\Theta}^{now}) \propto p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j})p(\vec{\mathbf{r}}_{i}^{1:t} \mid\mathbf{z}_{i,j},\mathbf{z}_{i,-j},\boldsymbol{\Theta}^{now})\] \[=p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j})p(\vec{\mathbf{r}}_{i}^{1: t-1}\mid\mathbf{z}_{i,j},\mathbf{z}_{i,-j},\boldsymbol{\Theta}^{now})p(\vec{ \mathbf{r}}_{i}^{t}\mid\mathbf{z}_{i,j},\mathbf{z}_{i,-j},\boldsymbol{\Theta}^{ now})\] \[\propto p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i}^{1: t-1},\boldsymbol{\Theta}^{now})p(\vec{\mathbf{r}}_{i}^{t}\mid\mathbf{z}_{i,j}, \mathbf{z}_{i,-j},\boldsymbol{\Theta}^{now}) \tag{9}\] Assuming that \(p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i}^{1:t-1}, \boldsymbol{\Theta}^{now})\) is fully factorized, we find: \[p(\mathbf{z}_{i,j},\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i}^{1:t}, \boldsymbol{\Theta}^{now})\propto p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{1: t-1},\boldsymbol{\Theta}^{now})p(\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i}^{1:t-1}, \boldsymbol{\Theta}^{now})p(\vec{\mathbf{r}}_{i}^{t}\mid\mathbf{z}_{i,j}, \mathbf{z}_{i,-j},\boldsymbol{\Theta}^{now}) \tag{10}\] Combining Eq. 8 and Eq. 10, we get \[p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{1:t},\boldsymbol{\Theta}^{now}) \propto p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{1:t-1},\boldsymbol{ \Theta}^{now})\sum_{\mathbf{z}_{i,-j}}p(\mathbf{z}_{i,-j}\mid\vec{\mathbf{r}}_{i }^{1:t-1},\boldsymbol{\Theta}^{now})p(\vec{\mathbf{r}}_{i}^{t}\mid\mathbf{z}_{i, j},\mathbf{z}_{i,-j},\boldsymbol{\Theta}^{now}) \tag{11}\] This shows that we can iteratively update the posterior \(\mathbf{z}_{i,j}\) of each edge \(e_{i,j}\) by incorporating the conditional distribution of the ground-truth acceleration at each time step (as illustrated in Fig. 3 B). The conditional distribution \(p(\vec{\mathbf{r}}_{i}^{t}\mid\mathbf{z}_{i,j},\mathbf{z}_{i,-j},\boldsymbol{ \Theta}^{now})\), which models the joint influence of incoming edges, is computed by Eq. 6 and Eq. 7. Finally, we denote the inferred edge type of each edge after observing the particle system across all time steps in Eq. 11 by \(p^{*}(\mathbf{z}_{i,j})=p(\mathbf{z}_{i,j}\mid\vec{\mathbf{r}}_{i}^{1:T}, \boldsymbol{\Theta}^{now})\). 
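As an illustration of this recursive update, the following sketch (illustrative Python/PyTorch pseudocode with assumed data structures, not the released implementation) applies Eq. 11 to the per-edge log-posteriors of one center node at a single time step, marginalizing over the types of all other incoming edges.

```python
import torch
from itertools import product

def update_edge_posteriors(log_post, loglik_fn, K):
    """One application of Eq. 11 for a single center node at time t.

    log_post  : dict {j: tensor of shape [K]} with log p(z_ij | r^{1:t-1}) for each active edge
    loglik_fn : callable mapping a full type assignment {j: type} to log p(r_i^t | z, Theta)
    Returns the updated dict {j: tensor of shape [K]} of log p(z_ij | r^{1:t}).
    """
    edges = list(log_post.keys())
    updated = {}
    for j in edges:
        others = [o for o in edges if o != j]
        scores = torch.empty(K)
        for z_j in range(K):
            terms = []
            # marginalize over the joint types of all other incoming edges (the sum in Eq. 11)
            for combo in product(range(K), repeat=len(others)):
                assignment = {j: z_j, **dict(zip(others, combo))}
                log_prior_others = sum(log_post[o][c] for o, c in zip(others, combo))
                terms.append(torch.as_tensor(log_prior_others + loglik_fn(assignment),
                                             dtype=torch.float32))
            scores[z_j] = log_post[j][z_j] + torch.logsumexp(torch.stack(terms), dim=0)
        updated[j] = scores - torch.logsumexp(scores, dim=0)  # renormalize over the K types
    return updated
```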
The \(Q\) function for the **Evolving-CRI** is \[Q_{evolving}(\boldsymbol{\Theta}\mid\boldsymbol{\Theta}^{now})= \sum_{i=1}^{|V|}\sum_{j=1}^{\Gamma(i)}\mathbb{E}_{\mathbf{z}_{i,j}\sim p^{*}( \mathbf{z}_{i,j})}\log\tau_{\mathbf{z}_{i,j}} \tag{12}\] \[+\sum_{i=1}^{|V|}\sum_{t=1}^{T}\mathbb{E}_{\mathbf{z}_{i,1},\dots, \mathbf{z}_{i,|\Gamma^{t}(i)|}\sim p^{*}(\mathbf{z}_{i,1}),\dots,\mathbf{p}^{*}( \mathbf{z}_{i,|\Gamma^{t}(i)|})}\log l(\Theta\mid\vec{\mathbf{r}}_{i}^{t}, \mathbf{z}_{i,1},\dots,\mathbf{z}_{i,|\Gamma^{t}(i)|})\] where \(l(\Theta\mid\vec{\mathbf{r}}_{i}^{t},\mathbf{z}_{i,1},\ldots,\mathbf{z}_{i,|\Gamma ^{t}(i)|})\) is computed by Eq. 7. In the maximization step, we update the prior \(\boldsymbol{\tau}\) and \(\Theta\) by maximizing \(Q_{evolving}(\boldsymbol{\Theta}|\boldsymbol{\Theta}^{now})\). Similar to CRI, \(\boldsymbol{\tau}\) has the analytic solution but \(\Theta\) does not. Therefore, we take one gradient ascent step to update \(\theta_{1},\theta_{2},\ldots,\theta_{K}\). Finally, for verification, let us consider the case of having no observations of the particle systems. In this case, the second term in Eq. 12 becomes \(0\), and \(Q_{evolving}(\boldsymbol{\Theta}\mid\boldsymbol{\Theta}^{now})\) corresponds to the entropy because \(p^{*}(\mathbf{z}_{i,j})\) becomes \(\tau_{\mathbf{z}_{i,j}}\). Therefore, maximizing \(Q_{evolving}\) is equivalent to maximizing the entropy, which, by the principle of maximum entropy, leads to \(1/K\) probability for each edge to have any kind of interaction. This shows that in absence of information, this method converges to a fully random estimation of the edge type, as expected. ### Performance evaluation metrics. The performance is evaluated on three aspects. First, the supervised learning performance is assessed through the mean absolute error \(\mathsf{MAE}_{\mathsf{state}}\), which quantifies the discrepancy between the predicted particle states (i.e., position and velocity) and the corresponding ground-truth states. Second, we assess the ability of the relational inference methods to correctly identify different interactions. We use the permutation invariant accuracy as the metric, which is given by: \[\mathsf{Accuracy}=\max_{\alpha\in\Omega}\frac{1}{|E|}\sum_{e\in E}\delta( \alpha(\hat{z}(e)),z(e)) \tag{13}\] where \(\alpha\) is a permutation of the inferred interaction types and \(\Omega\) is a set containing all possible permutations. The Kronecker delta \(\delta(x,y)\) equals 1 if \(x\) is equal to \(y\) and 0 otherwise. \(\hat{z}(e)\in K\) is the predicted interaction type for the edge \(e\) and \(z(e)\in K\) is the ground-truth interaction type of \(e\). This measure accounts for the permutation of the interaction type label because good accuracy is achieved by clustering the same interactions correctly. Third, we assess the extent to which the learnt pairwise forces are consistent to the underlying physics laws. This evaluation involves two aspects: 1) how well the predicted pairwise forces approximate the ground-truth pairwise forces, which is measured by the mean absolute error on the pairwise force \(\mathsf{MAE}_{\mathsf{gt}}\), and 2) whether the predicted pairwise forces satisfy Newton's third law, which is measured by the mean absolute value of the error in terms of force symmetry \(\mathsf{MAE}_{\mathsf{symm}}\). 
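As a small illustration of the permutation-invariant accuracy in Eq. 13, the snippet below (illustrative Python, not part of the original code base) searches over all relabelings of the \(K\) inferred types; for instance, predicting the two spring types with swapped labels still yields an accuracy of 1.

```python
from itertools import permutations

def permutation_invariant_accuracy(pred_types, true_types, K):
    """Eq. 13: fraction of correctly typed edges, maximized over relabelings of the K types."""
    best = 0.0
    for perm in permutations(range(K)):
        relabeled = [perm[z] for z in pred_types]
        acc = sum(int(p == t) for p, t in zip(relabeled, true_types)) / len(true_types)
        best = max(best, acc)
    return best

# Swapped labels are still counted as correct: this prints 1.0
print(permutation_invariant_accuracy([0, 0, 1, 1], [1, 1, 0, 0], K=2))
```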
To compute the predicted pairwise force required for \(\mathsf{MAE}_{\mathsf{eff}}\) and \(\mathsf{MAE}_{\mathsf{symm}}\), we use the generative module (_i.e._, the decoder) of each model that corresponds to the ground-truth interaction type, given the permutation used to compute the accuracy in Eq. 13. Therefore, \(\mathsf{MAE}_{\mathsf{eff}}\) and \(\mathsf{MAE}_{\mathsf{symm}}\) reflect the quality of the trained generative module, independent of the performance of the edge type prediction. ### Simulations details Here, we summarize the numerical simulations used in the experiments. The key distinctive property of the generated datasets is that the inter-particle interactions are heterogeneous. Previous works, such as [9], have used some of the selected cases. However, in our study, we modified some configurations to make them more challenging and realistic. * **Spring simulation:** Particles are randomly connected by different springs with different stiffness constants and balance lengths. Suppose \(v_{i}\) and \(v_{j}\) are connected by a spring with stiffness constant \(k\) and balance length \(L\), the pairwise force from \(v_{i}\) to \(v_{j}\) is \(k(r_{ij}-L)\boldsymbol{n}_{ij}\) where \(r_{ij}=\|\mathbf{r}_{j}-\mathbf{r}_{i}\|\) is the Euclidean distance and \(\boldsymbol{n}_{ij}=\frac{\mathbf{r}_{j}-\mathbf{r}_{i}}{\|\mathbf{r}_{j}- \mathbf{r}_{i}\|}\) is the unit vector pointing from \(v_{i}\) to \(v_{j}\). The spring N5K2 simulation (Sec. 3.1.1) and spring N10K2 simulation (Sec. 3.1.2) have two different springs with \((k_{1},L_{1})=(0.5,2.0)\) and \((k_{2},L_{2})=(2.0,1.0)\). The spring N5K4 simulation (Sec. 3.1.4) has four different springs: \((k_{1},L_{1})=(0.5,2.0)\), \((k_{2},L_{2})=(2.0,1.0)\), \((k_{3},L_{3})=(2.5,1.0)\) and \((k_{4},L_{4})=(2.5,2.0)\). * **Charge simulation:** We randomly assign electric charge \(q=+1\) and \(q=-1\) to different particles. The electric charge force from \(v_{i}\) to \(v_{j}\) is \(-cq_{i}q_{j}\boldsymbol{n}_{ij}/r_{ij}^{2}\) where the constant \(c\) is set to \(1\). To prevent any zeros in the denominator of the charge force equation, we add a small number \(\delta\) (\(\delta=0.01\)) when computing the Euclidean distance. Since particles have different charges, the system contains attractive and repulsive interactions. Note that we do not provide charge information as an input feature for the ML algorithms. Thus, the relational inference methods need to infer whether each interaction is attractive or repulsive. * **Crystallization simulation:** The crystallization simulation contains two different kinds of particles with local interaction, _i.e._ interactions only affect particles within a given proximity to each other. Hence, the underlying graph topology changes over time. In this simulation, the Lennard-Jones potential, which is given by \(V_{LJ}(r)=4\epsilon_{LJ}\{(\sigma_{LJ}/r)^{12}-(\sigma_{LJ}/r)^{6}\}\), exists among all nearby particles. We set \(\sigma_{LJ}=0.3\) and \(\epsilon_{LJ}=10^{-5}\). Additionally, particles of the same type have an attractive dipole-dipole force, whose potential is \(V_{A}(r)=-Cr^{-4}\), and particles of different types have a repulsive dipole-dipole force, whose potential is \(V_{R}(r)=Cr^{-4}\). We set the constant \(C=0.02\). To summarize, the pairwise interaction of two particles with the same and different type is governed by \(V_{LJ}+V_{A}\) and \(V_{LJ}+V_{R}\), respectively. The heterogeneous system contains 100 particles in total, each with the same unit mass. 
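For reference, the spring and charge force laws listed above can be written compactly as follows. This is an illustrative NumPy sketch consistent with the stated formulas, not the simulation code itself; the softening constant \(\delta\) is added to the Euclidean distance as described.

```python
import numpy as np

def spring_force(r_i, r_j, k, L):
    """Spring pairwise force k (r_ij - L) n_ij, with n_ij the unit vector from v_i to v_j."""
    diff = r_j - r_i
    dist = np.linalg.norm(diff)
    return k * (dist - L) * diff / dist

def charge_force(r_i, r_j, q_i, q_j, c=1.0, delta=0.01):
    """Charge pairwise force -c q_i q_j n_ij / r_ij^2, with the distance softened by delta."""
    diff = r_j - r_i
    dist = np.linalg.norm(diff) + delta
    return -c * q_i * q_j * diff / dist**3   # diff / dist = n_ij, remaining 1/dist^2 from the Coulomb term

# Two particles connected by the first spring type (k=0.5, L=2.0) of Spring N5K2
print(spring_force(np.array([0.0, 0.0]), np.array([3.0, 0.0]), k=0.5, L=2.0))
```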
The simulation is adapted from [17]. Additionally, unlike the simulations in [9], particles in the spring and charge simulations have varying masses. The mass \(m_{i}\) of particle \(v_{i}\) is sampled from the log-uniform distribution within the range \([-1,1]\) (\(\text{ln}(m_{i})\sim\mathcal{U}(-1,1)\)). The initial locations and velocities of particles are both drawn from the standard Gaussian distribution \(\mathcal{N}(0,1)\). We use dimensionless units for all simulations as the considered learning algorithms are not designed for any specific scale. The presented cases serve as proof of concept to evaluate the relational inference capabilities for heterogeneous interactions. The Spring N5K2, Spring N5K4, Spring N10K2 and Charge N5K2 cases in Sec. 3.1 each comprise 12k simulations in total. Each simulation consists of 100 time steps with step size \(0.01\). Of these 12k simulations, 10k are reserved for training (we train the models with 100, 500, 1k, 5k and 10k simulations to assess the data efficiency), 1k for validation and 1k for testing. In each simulation, particles interact with all other particles and the interaction type between any particle pair remains fixed over time. The crystallization simulation contains a single simulation of 100 particles. We generate this simulation over 500k time steps using step size \(10^{-5}\), and then downsample it to every 50 time steps, ultimately yielding a simulation with 10k time steps. Note that it is possible to consider advanced sampling strategies (_e.g._, [26]) to sample informative time steps, but we leave this for future exploration. We use two different ways to split the simulation for training, validation and testing. First, to evaluate the interpolation ability, we randomly split the 10k simulation steps into the training dataset, validation dataset and testing dataset with the ratio \(7:1.5:1.5\). Then, to evaluate the extrapolation ability, we use the first 7k time steps as training set, next 1.5k consecutive time steps for validation and the remaining 1.5k time steps for testing. In the crystallization simulation, particles interact with nearby particles within a cut-off radius. However, to simplify the input for the ML methods, we constrain each particle to interact with its five closest neighbors. In the simulation, 500 edges are active at each time step, and a fixed-size tensor variable in PyTorch can represent the activated edges. It is important to note that the relational inference methods, such as CRI and the baselines, can handle varying edge sizes at different time steps. However, for the purposes of this study, the described simulation is suitable as a proof of concept and provides a straightforward implementation of the relational inference methods. We train the relational inference models on the training dataset, fine-tune hyperparameters and select the best trained model based on the performance on the validation dataset with respect to the training objective \(\text{MAE}_{\text{state}}\). It should be noted that the models are not chosen based on the metrics on which they will later be evaluated since we cannot access the ground-truth interactions during training. We then evaluate the performance of the selected trained model on the testing dataset. For the generalization evaluation in Sec. 3.1.3, we train and validate the model using the training and validation datasets of Spring N5K2, and report the performance of the trained model on the testing dataset of Spring N10K2. 
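The sampling of masses and initial states described above can be summarized in a few lines (an illustrative sketch; the two-dimensional setting and the random seed are assumptions, not specifications from the text).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_initial_conditions(n_particles, dim=2):
    """Log-uniform masses (ln m ~ U(-1, 1)) and standard-normal initial positions/velocities."""
    masses = np.exp(rng.uniform(-1.0, 1.0, size=n_particles))
    positions = rng.standard_normal((n_particles, dim))
    velocities = rng.standard_normal((n_particles, dim))
    return masses, positions, velocities

masses, r0, v0 = sample_initial_conditions(n_particles=5)
```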
### Configurations of CRI, Var-CRI and Evolving-CRI We use the same hyperparameters for CRI, Var-CRI and Evolving-CRI in each experiment. We find that the performance is mostly affected by the number of hidden layers in PIG'N'PI and the Gaussian variance \(\sigma^{2}\). We perform a grid search to tune these two parameters for the different experiments. The detailed configurations are summarized in Table 2 in Sec. 8.4. In addition, we use the Adam optimizer [27] with a mini-batch size of eight and a learning rate of \(0.001\) for training. All models are trained over 500 epochs. ### Configurations of baseline models For the spring and charge systems, we use the default settings of NRI and MPM as suggested in their original papers [9, 13], _i.e._ the encoder uses a multi-layer perceptron (MLP) to learn the initial edge embedding for the spring dataset, and a convolutional neural network (CNN) with the attention mechanism for the charge dataset. Additionally, to ensure a fair comparison, the decoder of NRI-PIGNPI and MPM-PIGNPI is the same as the one applied in CRI (including the same hidden layers and the same activation function). For the crystallization experiment, modifications of NRI and MPM are required to learn heterogeneous systems with evolving graph topology, as discussed in Sec. 3.2. The CNN encoder in the original NRI and MPM first learns the initial edge embedding, and then uses additional operations to learn the edge types based on this embedding. The edge embedding is learnt by taking the states of two particles across all time steps. We modify the encoder such that only the time steps when the edge appears contribute to the edge embedding. As for the decoder, we mask the effective edges of each node at each time step and only aggregate these active edges as the incoming messages. ## 6 Acknowledgements We thank Dr. Jiawen Luo and Dr. Yuan Tian for helpful discussions. This project has been funded by ETH Grant no. ETH-12 21-1. The contribution of Olga Fink to this project was funded by the Swiss National Science Foundation (SNSF) grant no. PP00P2_176878. ## 7 Code and Data Availability The implementation of the proposed method is based on PyTorch [28]. The source code, as well as the scripts to generate the used data, are available on Gitlab: [https://gitlab.ethz.ch/cmbm-public/toolboxes/cri](https://gitlab.ethz.ch/cmbm-public/toolboxes/cri).
2309.10491
DCPT: Darkness Clue-Prompted Tracking in Nighttime UAVs
Existing nighttime unmanned aerial vehicle (UAV) trackers follow an "Enhance-then-Track" architecture - first using a light enhancer to brighten the nighttime video, then employing a daytime tracker to locate the object. This separate enhancement and tracking fails to build an end-to-end trainable vision system. To address this, we propose a novel architecture called Darkness Clue-Prompted Tracking (DCPT) that achieves robust UAV tracking at night by efficiently learning to generate darkness clue prompts. Without a separate enhancer, DCPT directly encodes anti-dark capabilities into prompts using a darkness clue prompter (DCP). Specifically, DCP iteratively learns emphasizing and undermining projections for darkness clues. It then injects these learned visual prompts into a daytime tracker with fixed parameters across transformer layers. Moreover, a gated feature aggregation mechanism enables adaptive fusion between prompts and between prompts and the base model. Extensive experiments show state-of-the-art performance for DCPT on multiple dark scenario benchmarks. The unified end-to-end learning of enhancement and tracking in DCPT enables a more trainable system. The darkness clue prompting efficiently injects anti-dark knowledge without extra modules. Code is available at https://github.com/bearyi26/DCPT.
Jiawen Zhu, Huayi Tang, Zhi-Qi Cheng, Jun-Yan He, Bin Luo, Shihao Qiu, Shengming Li, Huchuan Lu
2023-09-19T09:59:08Z
http://arxiv.org/abs/2309.10491v4
# DCPT: Darkness Clue-Prompted Tracking in Nighttime UAVs ###### Abstract Existing nighttime unmanned aerial vehicle (UAV) trackers follow an "Enhance-then-Track" architecture - first using a light enhancer to brighten the nighttime video, then employing a daytime tracker to locate the object. This separate enhancement and tracking fails to build an end-to-end trainable vision system. To address this, we propose a novel architecture called Darkness Clue-Prompted Tracking (DCPT) that achieves robust UAV tracking at night by efficiently learning to generate darkness clue prompts. Without a separate enhancer, DCPT directly encodes anti-dark capabilities into prompts using a darkness clue prompter (DCP). Specifically, DCP iteratively learns emphasizing and undermining projections for darkness clues. It then injects these learned visual prompts into a daytime tracker with fixed parameters across transformer layers. Moreover, a gated feature aggregation mechanism enables adaptive fusion between prompts and between prompts and the base model. Extensive experiments show state-of-the-art performance for DCPT on multiple dark scenario benchmarks. The unified end-to-end learning of enhancement and tracking in DCPT enables a more trainable system. The darkness clue prompting efficiently injects anti-dark knowledge without extra modules. Code is available at [https://github.com/bearyi26/DCPT](https://github.com/bearyi26/DCPT). ## I Introduction Visual object tracking from unmanned aerial vehicles (UAVs) is an essential capability of aerial robotic vision, enabling various downstream applications such as traffic monitoring [1], aerial cinematography [2], and search and rescue [3]. While recent advances using deep neural networks [4, 5, 6] and large-scale datasets [7, 8, 9] have achieved promising tracking performance in daytime conditions, state-of-the-art trackers [10, 11, 12] still struggle in more challenging nighttime environments. When faced with more challenging light conditions (e.g., the night falls), these approaches often suffer from severe performance degradation or even fail to work. This is mainly because discriminative visual cues like color and geometry are diminished at night, and onboard cameras introduce more noise and image degradation under low illumination. As a result, existing trackers fail to extract robust features for accurate target localization at night. Therefore, to fully realize the potential of UAV vision, it is imperative to explore effective nighttime tracking techniques, which will promote the versatility and survivability of UAV vision systems. Several studies have been conducted to equip UAV systems with the capacity to "see" in low-light conditions. For example, Fu et al. [13] introduced a light enhancer called "HighlightNet" designed to illuminate specific target areas for UAV trackers. This "Enhance-then-Track" paradigm (Fig. 1 (a)) is also adopted in other studies [14, 15]. They mainly focus on designing a light enhancer and employ off-the-shelf daytime trackers to output the final tracking results. On the other hand, Ye et al. [16] introduce domain adaptation (Fig. 1 (b)) for nighttime UAV tracking. They generate nighttime training samples and adversarially train a model for narrowing the gap between day and night circumstances. Despite gaining improvements, current solutions for nighttime tracking still have significant limitations. 
**i)** Performing image enhancement before tracking makes the UAV vision system over-reliant on the extra trained enhancer, and separating nighttime tracking into two sub-processes is not conducive to building an end-to-end trainable architecture. **ii)** Domain adaptation requires abundant data for training, and high-quality target domain samples are scarce for nighttime domain learning. **iii)** The intrinsic relationship between daytime and nighttime trackers is overlooked, and the potential for employing a daytime tracker in nighttime scenarios is not well exploited. Recently, prompt learning has attracted much attention, extending from Natural Language Processing (NLP) to vision tasks [17, 18, 19]. Typically, the foundation model is frozen, and only a few tunable parameters are added for learning valid prompts. This approach demonstrates promising results and efficiencies. Drawing inspiration from these works, we formulate nighttime UAV tracking as a prompt learning problem where the goal is to mine valid darkness clue prompts for a well-trained daytime tracker, so that the parameter and computational costs are constrained and the nighttime performance is maximized. In this work, we propose a novel nighttime UAV tracker (termed DCPT) that dwells on learning darkness clue prompting for low-light circumstances (Fig. 1 (c)). Specifically, to effectively facilitate the discovery and mining of clues in darkness, we design a darkness clue prompter (DCP, in III-C) by introducing the back-projection structure, which shows favorable performance in image restoration (e.g., image super-resolution [20]). DCP propagates the darkness clue prompts across all semantic layers of the foundation tracker. Moreover, we design a gated feature aggregation (GFA, in III-D) mechanism to efficiently fuse bottom-up prompts and enable the complementary integration of learned prompts and information from the foundation model. Ultimately, DCPT can effectively inject learned darkness clue prompts into a frozen daytime model with only a small number of prompt learning-related parameters, obtaining superior nighttime tracking performance. Fig. 1: Illustration of different nighttime UAV tracking paradigms. (a) "Enhance-then-Track" paradigm. (b) Domain adaptation paradigm. (c) Proposed darkness clue-prompted tracking (DCPT) paradigm. DCPT possesses a more streamlined structure while effectively incorporating the learned darkness clue prompts, enabling the UAV to "see" sharper in the dark. We summarize our major contributions as follows: * We propose DCPT, a novel solution for nighttime UAV tracking by introducing darkness clue prompt learning. In this way, the tracker's potential is stimulated by the learned prompts in extreme low-light circumstances. * A darkness clue prompter is proposed for mining valid visual prompts at night. Besides, a gated feature aggregation mechanism is designed for effectively fusing the features between prompters and the foundation model. * Extensive experiments on four nighttime tracking benchmarks validate the effectiveness of DCPT. Qualitative and quantitative results demonstrate the superiority of DCPT as a nighttime tracker for aerial robots. For example, on DarkTrack2021 [21], DCPT boosts the base tracker by 4.9% in success score with 3.0M prompting parameters. ## II Related Works ### _Nighttime UAV Tracking_ The emergence of deep-learning technologies and the efforts of researchers have pushed visual object tracking to new frontiers [11, 12, 22, 23].
Impressive tracking performance was also achieved on the UAV platforms [10, 24]. However, these trackers, which are mainly designed for daytime scenarios, often suffer severe performance degradation or even fail to work when faced with common but challenging nighttime scenarios. This is because the nighttime circumstances suffer from loss of detailed information, accompanying noise, and low contrast and brightness. Therefore, nighttime UAV tracking has attracted increasing attention. A straightforward manner is to complete nighttime tracking by an "Enhance-then-Track" process. Specifically, researchers design low light enhancers [13, 14, 21] for nighttime scenarios, and use existing daytime UAV trackers to track in the enhanced sequences. Li et al. [15] propose to integrate a low-light image enhancer into a CF-based tracker for robust tracking at night. Similarly, DarkLighter [14] and HighlightNet [13] also design low-light image enhancers to alleviate the influence of extreme illumination and highlight the potential objects, respectively. Although effective, the main drawbacks of such approaches are that the nighttime tracker requires an additional trained light enhancer for preprocessing, incurring extra computational costs, and the separation of enhancer and tracker is not conducive to building an end-to-end trainable nighttime UAV tracking method. Another approach is to adopt domain adaptation to transfer daytime tracker to nighttime scenarios. To obtain the tracking capabilities in the dark, UDAT [16] proposes to align image features from daytime and nighttime domains by the transformer-based bridging layer, in this way, the tracking capabilities on the daytime domain are somewhat transferred to the nighttime domain. Unfortunately, this approach requires more training costs and the lack of high-quality target domain data for tracking also limits its enhancement. ### _Visual Prompt Learning_ In natural language processing (NLP), a pre-trained model can easily adapt to its downstream tasks by incorporating specific prompts to the input text. As a parameter-efficient learning manner, prompt tuning begins to make its mark on visual tasks [18, 19, 25]. VPT [18] is among the pioneers in exploring visual prompt tuning. It adds a small number of learnable parameters to the pre-trained foundation model, and obtains promising results compared with full fine-tuning on multiple downstream tasks. Instead of focusing on embedding space, Bahng et al. [25] propose to train learnable perturbations in the embedding pixel space. For multi-modal tracking, ViPT [19] proposes to learn efficient auxiliary-modal prompts for foundation RGB tracker, achieving impressive multi-modal tracking performance. The success of these methods above shows us the potential of prompt learning for nighttime UAV tracking. However, unlike multi-modal tracking, which has an additional auxiliary-modal flow that can be used directly to learn prompts, nighttime UAV tracking only has extreme lighting scenarios as input. In this work, to overcome the UAV tracker's poor performance in indistinguishable night scenarios, we propose to mine the darkness clues as effective visual prompts for a daytime tracker, enabling a sharper vision in the dark. ## III Methodology ### _Overview_ The overall architecture is shown in Fig. 2. To summarize, DCPT injects the learned darkness clue prompts \(\mathcal{P}^{i},i\in\{1,...,N\}\) into the daytime tracker and stimulates its tracking potential in low-light circumstances. 
The darkness clue prompts are uncovered through end-to-end prompt learning in nighttime data, having the ability to discriminate object tracks in the darkness, which is used to complement the shortcomings of the daytime tracker. These learned darkness clue prompts are propagated from preceding prompts and foundation feature flows \(\mathcal{H}^{i},i\!\in\!\{0,...,N-1\}\). Specifically, the gated feature aggregation (GFA) mechanism \(\mathcal{G}^{i},i\in\{1,...,N\}\) is designed for controlling the weight of these different information sources. In general, the daytime tracker has excellent tracking capabilities on generic scenarios while just lacking specific adaptations for nighttime scenarios. Therefore, we only tune parameters that are related to darkness clue prompt generation instead of fine-tuning the entire model. The DCPT framework maximizes the capabilities inherited from daytime tracker trained on large-scale datasets [8, 9, 26], avoiding overfitting on limited nighttime tracking data but gaining nighttime-specific tracking capabilities through darkness clue prompt learning. ### _Daytime Foundation Tracker_ Daytime and nighttime trackers naturally share a number of basic capabilities, including scene understanding and feature matching between targets, hence, previous nighttime tracking methods [13, 14, 15] all chose cutting-edge trackers [23, 10, 27] as their daytime base trackers. Hereby we adopt a streamlined model to serve as a clear baseline, emphasizing the advantages of our darkness clue prompting tracking framework. Specifically, the base tracker is only composed of two parts: a backbone and a prediction head. The backbone consists of stacked vision transformer (ViT) layers and propagating the inputs in a one-stream style like the method in [12]. First, both template \(\mathbf{Z}\in\mathbb{R}^{H_{x}\times W_{x}\times 3}\) and search region \(\mathbf{X}\in\mathbb{R}^{H_{x}\times W_{x}\times 3}\) are embedded and flattened to 1D tokens with added positional embedding \(E_{pos}\): \[\mathbf{\mathcal{H}}_{Z},\mathbf{\mathcal{H}}_{X}=E_{embed}(\mathbf{Z};\mathbf{X})+E_{pos}, \tag{1}\] The template and search tokens are then concatenated to \(\mathbf{\mathcal{H}}_{base}^{0}=concat(\mathbf{\mathcal{H}}_{Z},\mathbf{\mathcal{H}}_{X})\) and passed through a \(N\)-layer standard vision transformer encoder: \[\mathbf{\mathcal{H}}^{l}=E^{l}(\mathbf{\mathcal{H}}^{l-1}),\qquad\quad l=1,2\ldots,N \tag{2}\] Last, we employ a lightweight corner head for box prediction. The tracking results can be obtained by: \[\mathbf{B}=\phi(\mathbf{\mathcal{H}}^{N}), \tag{3}\] The prediction head \(\phi\) does not require any complex post-processes (e.g., cosine window and size penalty), without any hyper-parameters, keeping our foundation model concise. ### _Darkness Clue Prompter_ The foundation model is well-trained on daytime data and therefore does not have sufficient object-discriminating abilities in nighttime scenarios. Therefore, we propose to equip the foundation model with the proposed darkness clue prompters (DCP), injecting the mined darkness clue prompts into the foundation feature flow. The darkness clue prompting process can be formulated as: \[\mathbf{\mathcal{H}}_{p}^{l-1}=\mathbf{\mathcal{H}}^{l-1}+\mathbf{\mathcal{P}}^{l}, \tag{4}\] where \(\mathbf{\mathcal{H}}_{p}^{l-1}\) is the prompted tokens which absorb the learned darkness clue prompt \(\mathbf{\mathcal{H}}_{p}^{l}\) from the \(l\)-th DCP block. 
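The prompt injection of Eq. 4, combined with the encoder propagation of Eq. 2, can be sketched as follows. This is an illustrative PyTorch snippet under assumed module names; the official implementation in the linked repository is authoritative.

```python
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Each (frozen) ViT layer receives tokens to which a darkness clue prompt has been added."""

    def __init__(self, vit_layers, dcp_blocks):
        super().__init__()
        self.vit_layers = nn.ModuleList(vit_layers)   # frozen encoder layers E^1 ... E^N
        self.dcp_blocks = nn.ModuleList(dcp_blocks)   # trainable darkness clue prompters P^1 ... P^N

    def forward(self, tokens):
        for layer, dcp in zip(self.vit_layers, self.dcp_blocks):
            prompt = dcp(tokens)              # darkness clue prompt P^l (Sec. III-C)
            tokens = layer(tokens + prompt)   # Eq. 4 (prompt injection), then Eq. 2 (encoder layer)
        return tokens
```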
For nighttime UAV tracking, given low-light inputs without explicit learning objectives for the darkness clue region, it is difficult to learn valid darkness clue prompts. To address this problem, we introduce the back-projection [28] ideology from image super-resolution (SR) into darkness clue prompt learning. Given an intermediate SR image \(\mathbf{\mathcal{I}}_{t}\) (\(t\) denotes the iteration index), the SR image \(\mathbf{\mathcal{I}}_{t+1}\) can be obtained by the back-projection operation: \[\mathbf{\mathcal{I}}_{t+1}=\mathbf{\mathcal{I}}_{t}+\lambda\Phi_{up}(\mathbf{\mathcal{I}}_{0}-\Phi_{down}(\mathbf{\mathcal{I}}_{t})), \tag{5}\] where \(\Phi_{up}\) and \(\Phi_{down}\) denote the up-sampling and down-sampling functions, respectively. \(\mathbf{\mathcal{I}}_{0}\) represents the initial low-resolution (LR) image. \(\lambda\) is a balance coefficient for residual updating. Fig. 2: **Overview architecture of DCPT.** The template and search images are first fed into the patch embedding to generate the corresponding tokens. A ViT backbone is employed for fundamental feature extraction and interaction of the concatenated template and search tokens. In parallel, the darkness clue prompter (DCP) blocks \(P^{i},i\!\in\!\{1,...,N\}\) are distributed in each encoder layer, and they are responsible for extracting valid darkness clue prompts and injecting them into the foundation model. Besides, the gated feature aggregation (GFA) is performed for more effective information fusion. Instead of learning the SR image in a direct feed-forward manner, the back-projection block iteratively mines the reconstruction error between the LR and down-sampled SR images, then fuses it back to tune the HR image. We think of the philosophy of SR image refining as analogous to our darkness clue prompting process. In this work, we construct the darkness clue prompt learning through the following equations: \[\boldsymbol{\mathcal{P}}=\beta\boldsymbol{\mathcal{H}}_{E}+\Phi_{em}^{2}(\boldsymbol{\mathcal{H}}_{U}-\alpha\boldsymbol{\mathcal{H}}_{E}), \tag{6}\] \[\boldsymbol{\mathcal{H}}_{E}=\Phi_{em}^{1}(\boldsymbol{\mathcal{H}}),\ \boldsymbol{\mathcal{H}}_{U}=\Phi_{un}(\Phi_{em}^{1}(\boldsymbol{\mathcal{H}})), \tag{7}\] where \(\Phi_{em}\) and \(\Phi_{un}\) denote the prompt emphasizing and undermining functions, respectively. \(\alpha,\beta\in\mathbb{R}\) are weights to balance the residual clues. We omit the layer index \(l\) for better readability. Fig. 3 showcases the proposed DCP block in detail. The input features \(\boldsymbol{\mathcal{H}}\) (we omit the reshape operation for simplicity) go through the first darkness clue emphasize block and form the prompted \(\boldsymbol{\mathcal{H}}_{E}\). Next, \(\boldsymbol{\mathcal{H}}_{U}\) is obtained through a darkness clue undermine block from the estimated \(\boldsymbol{\mathcal{H}}_{E}\). Then we get the residual \(\boldsymbol{e}_{U}\), which represents the difference between the estimated \(\boldsymbol{\mathcal{H}}_{U}\) and the original \(\boldsymbol{\mathcal{H}}\). After that, another darkness clue emphasize block is employed to generate the residual between the darkness clue prompt \(\boldsymbol{\mathcal{P}}\) and the estimated \(\boldsymbol{\mathcal{H}}_{E}\). Finally, the darkness clue prompt \(\boldsymbol{\mathcal{P}}\) is produced by adding the estimated residual. Borrowed from [29], the darkness clue emphasize and undermine blocks consist of an encode-decode structure with plus and minus offset operations, respectively.
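A minimal sketch of the DCP computation in Eqs. 6-7 is given below. This is illustrative PyTorch code: the emphasize/undermine blocks are stand-in two-layer projections rather than the exact encode-decode structure with offset operations used in the paper, and all names are assumptions.

```python
import torch
import torch.nn as nn

class DCP(nn.Module):
    """Darkness clue prompter sketch: emphasize, undermine, then a residual correction (Eqs. 6-7)."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        def proj():  # placeholder for the encode-decode emphasize/undermine blocks of the paper
            return nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.emphasize1, self.emphasize2, self.undermine = proj(), proj(), proj()
        self.alpha = nn.Parameter(torch.tensor(1.0))   # residual balance weights alpha, beta
        self.beta = nn.Parameter(torch.tensor(1.0))

    def forward(self, h):                # h: foundation tokens of one encoder layer
        h_e = self.emphasize1(h)         # H_E = Phi_em^1(H)          (Eq. 7)
        h_u = self.undermine(h_e)        # H_U = Phi_un(Phi_em^1(H))  (Eq. 7)
        return self.beta * h_e + self.emphasize2(h_u - self.alpha * h_e)   # Eq. 6
```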
The proposed DCP blocks are attached to each encoder layer of the foundation model, iteratively learning the construction of valid darkness clue prompts for nighttime circumstances. ### _Gated Feature Aggregation_ The encoder of the foundation model possesses different information hierarchies across the blocks, while the darkness clue prompter (DCP) blocks are placed independently in front of each encoder layer. This poses challenges for effective and efficient darkness clue prompt learning from two perspectives: **i)** The learned darkness clue prompts lack front-to-back semantic hierarchies like foundation features, and learning different prompt blocks independently is inefficient. **ii)** For different semantic hierarchies and different spatial locations, the region that the darkness clue prompts focus on should also differ. Direct additive injection of prompts lacks the flexibility to adaptively adjust to different regions. To this end, we propose a gated feature aggregation (GFA) for learned prompts, as well as prompts and the foundation features. As shown in Fig. 2, we perform gated aggregation for current \(\boldsymbol{\mathcal{P}}^{l}\) and the preceding one \(\boldsymbol{\mathcal{P}}^{l-1}\). The gated aggregation for adjacent prompts can be formulated as: \[\boldsymbol{\mathcal{P}}_{g}^{l+1}=g^{l+1}\times\boldsymbol{ \mathcal{P}}^{l+1}+(1-g^{l+1})\times\boldsymbol{\mathcal{P}}_{g}^{l}, \tag{8}\] \[g^{l}=\nicefrac{{1}}{{1+e^{-\gamma^{l}}}},\qquad\boldsymbol{ \mathcal{P}}_{g}^{1}=\boldsymbol{\mathcal{P}}^{1}, \tag{9}\] where the gated weights \(g\) are generated through a sigmoid function, controlled by learnable factors \(\gamma\). The gated aggregation connects DCP blocks from shallow to deep, thus, darkness clue prompts can accomplish bottom-up propagation across different feature hierarchies. This promotes the efficient learning of valid prompts, in addition, it is also proved effective for self-supervised classification [30]. Moreover, we design gated aggregation for learned darkness clue prompts and foundation features at a finer granularity. The final prompted foundation feature can be obtained by: \[\boldsymbol{\mathcal{H}}_{p,g}^{l-1}=\boldsymbol{\mathcal{H}}^{l- 1}+\boldsymbol{p}^{l}\times\boldsymbol{\mathcal{P}}^{l}, \tag{10}\] \[\boldsymbol{p}^{l}=[p_{1}^{l},p_{2}^{l},\ldots,p_{M}^{l}],\] (11) \[p_{i}^{l}=\nicefrac{{1}}{{1+e^{-\gamma^{l}_{i}}}},\qquad i=1,2, \ldots,M \tag{12}\] where \(M\) denotes the number of the learned prompt tokens. The gated aggregation weights \(\boldsymbol{p}\in\mathbb{R}^{1\times M}\) indicate that different attention weights are assigned for darkness clue prompts from different regions corresponding to different tokens. The proposed gated feature aggregation mechanism introduces negligible number of parameters but effectively improves the learning of darkness clue prompt and achieves higher nighttime tracking performance. ### _Training Objective_ For object locating, we combine the \(\mathcal{L}_{1}\) loss and the GIOU loss [31]\(\mathcal{L}_{G}\), which can be formulated as: \[\mathcal{L}_{locate}=\lambda_{1}\mathcal{L}_{1}(\boldsymbol{B},\boldsymbol{B} _{gt})+\lambda_{G}\mathcal{L}_{G}(\boldsymbol{B},\boldsymbol{B}_{gt}), \tag{13}\] where \(\boldsymbol{B}_{gt}\) represents the ground truth, \(\lambda_{1}=5\) and \(\lambda_{G}=2\) are the weight parameters. The same training objective is adopted for training the foundation and nighttime trackers. 
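The gated aggregation of Eqs. 8-12 amounts to two sets of sigmoid gates, which can be sketched as follows (illustrative PyTorch code under assumed shapes, with tokens of shape [batch, M, dim]; not the released implementation).

```python
import torch
import torch.nn as nn

class GatedAggregation(nn.Module):
    """Fuse the current prompt with the previous one (Eqs. 8-9) and inject it token-wise (Eqs. 10-12)."""

    def __init__(self, num_tokens):
        super().__init__()
        self.gamma_prompt = nn.Parameter(torch.zeros(1))           # gate between adjacent prompts
        self.gamma_tokens = nn.Parameter(torch.zeros(num_tokens))  # one gate per prompt token, p^l

    def forward(self, prompt_cur, prompt_prev, tokens):
        g = torch.sigmoid(self.gamma_prompt)                  # Eq. 9
        prompt = g * prompt_cur + (1 - g) * prompt_prev       # Eq. 8
        p = torch.sigmoid(self.gamma_tokens).unsqueeze(-1)    # Eq. 12, broadcast over the feature dim
        return tokens + p * prompt, prompt                    # Eq. 10; the fused prompt is passed on
```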
## IV Experiment ### _Implementation Details_ **Foundation Tracker Training.** We train the daytime foundation tracker on four common datasets, including GOT-10k [8], LaSOT [26], TrackingNet [9], and COCO [32]. The backbone is initialized from pre-trained MAE [33]. The model is trained with the AdamW optimizer [34] for 300 epochs with a total batch size of 128, and each epoch involves 60,000 sampling pairs. The template and search region sizes are set to 128\(\times\)128 and 256\(\times\)256, respectively. Fig. 3: **Detailed structure of the proposed DCP module.** The DCP module takes the foundation features as the input, iteratively emphasizing and undermining the darkness clues, learning the residual term for the reconstruction of valid darkness clue prompts. **Prompt Tuning.** In this stage, the foundation model is frozen, and only prompt-related parameters are tuned. We construct three nighttime tracking datasets for prompt learning, i.e., BDD100K-Night, SHIFT-Night and ExDark [43]. In particular, we pick the images in BDD100K [44] and SHIFT [45] with the label "night" to build BDD100K-Night and SHIFT-Night. We tune the prompt modules for 60 epochs; the initial learning rate is set to \(4\times 10^{-4}\) and decreased by a factor of 10 after 48 epochs. Other settings are the same as for the foundation model. ### _Overall Performance_ **UAVDark135 and DarkTrack2021.** UAVDark135 [35] and DarkTrack2021 [21] are two of the most commonly used benchmarks for nighttime tracking. Fig. 4 shows the success, precision and normalized precision curves for UAVDark135, and Tab. I shows the results for DarkTrack2021. Our method is superior to other SOTA trackers and achieves success scores of 57.7% and 54.0% on these two benchmarks, respectively, beating the second-best trackers by 1.5% and 1.9%. We also pick several representative frames from UAVDark135 for visualization in Fig. 5. As we can see, DCPT can track target objects more steadily than the others. **NAT2021 and NAT2021-L.** NAT2021 [16] is a benchmark with 12 different attributes such as full occlusion and low ambient intensity, which makes accurate tracking more difficult. Despite this challenge, our tracker shows remarkable results, as depicted in the second row of Fig. 4. DCPT ranks first in terms of success, normalized precision and precision scores. NAT2021-L [16] is a long-term tracking benchmark which involves multiple challenging attributes and more than 1400 frames in each sequence. As shown in Tab. II, DCPT shows impressive results, outperforming the previous SOTA trackers UDAT-CAR and UDAT-BAN by almost 10 percent and achieving a 47.4% success score and a 59.9% precision score. Fig. 4: Overall performance of DCPT and other SOTA trackers on UAVDark135 [35] (the first row) and NAT2021 [16] (the second row) benchmarks. Fig. 5: Visualization of tracking in representative nighttime scenarios. ### _Attribute-based Analysis_ The superiority of DCPT is further validated by attribute-based comparison on NAT2021 [16]. This dataset contains twelve attributes, e.g., aspect ratio change, background clutter, camera motion, etc. Results in terms of success score are provided in Fig. 6. DCPT achieves the best scores in all 12 scenarios. Especially in scale variation, viewpoint change and illumination variation, DCPT achieves impressive performance, which mainly benefits from critical and effective darkness clue prompts in nighttime circumstances. Besides, we also report the result plots of illumination-related attributes on NAT2021-L [16].
As shown in Fig. 7, our tracker leads dramatically in these two attributes, which further demonstrates the effectiveness of our darkness clue prompt tracking paradigm. ### _Ablation Studies_ To verify the effectiveness of the proposed components, we gradually introduce them into the base tracker and report the corresponding results on UAVDark135 [35] and NAT2021 [16]. **Base+DCP.** The darkness clue prompter (DCP) is the core component of our tracker. It utilizes a back-projection block to iteratively mine critical darkness clue prompts for the foundation daytime tracker, enabling it to see more clearly in the dark. As shown in Tab. III, DCP boosts the base tracker with improvements of 1.95% and 2.97% in success score on UAVDark135 and DarkTrack2021, respectively. **Base+DCP+GFA_pp.** We add the gated feature aggregation for adjacent prompts as described in Sec. III-D. As illustrated in Tab. III, on DarkTrack2021 the tracker obtains a success score of 53.44%, which is 4.39% higher than the foundation tracker. On UAVDark135, the success score also improves to 57.51%. The results indicate that gated aggregation facilitates the fusion of darkness clue prompts across different semantic hierarchies. **Base+DCP+GFA_pp,pb.** Further, we continue to add gated feature aggregation between prompts and foundation features. As reported in Tab. III, remarkable performance gains are consistently obtained. Gated feature aggregation here allows adaptive injection of prompts into different foundation tokens. Ultimately, the resulting tracker improves substantially over the daytime foundation tracker. With only 3.03M (3.3%) trainable parameters, DCPT improves on UAVDark135 by 2.67% and 4.03% in terms of success and precision scores. On DarkTrack2021, the improvements reach 4.93% and 6.64%. ### _Real-World Testing_ We perform a series of real-world tests to further verify the feasibility and generalization of DCPT. The on-board camera on the UAV captures nighttime scenes and transmits the captured images to the workstation in real time through Wi-Fi communication. The workstation is a computer with an Intel(R) i7-9700K CPU @3.60GHz and an Nvidia 2080ti GPU, which can process the received images at a promising speed of over 30 fps/720P. As shown in Fig. 8, the main challenges are low resolution, partial occlusion, and low ambient intensity, yet DCPT achieves favorable performance with average CLEs (center location errors) of 3.81, 2.19, and 1.04 pixels, and all the test frames have a CLE of less than 20 pixels. The real-world testing demonstrates the feasibility of the DCPT paradigm: injecting learned darkness clue prompts into the daytime tracker significantly improves its tracking performance in complex nighttime circumstances. Fig. 8: The real-world nighttime UAV tracking testing. The frame-wise performance is presented in terms of CLE plots. The errors below the green dashed lines (CLE=20 pixels) are usually considered acceptable. Fig. 6: Success score comparison of different attributes on NAT2021 [16]. Fig. 7: Success plots of illumination-related attributes on NAT2021-L [16]. ## V Conclusion This work proposes DCPT, a new end-to-end framework for nighttime UAV tracking. DCPT learns to generate darkness clue prompts that stimulate the tracking capabilities of a fixed daytime tracker for nighttime operation.
The proposed Darkness Clue Prompter mines crucial darkness cues, while the gated aggregation mechanism enables adaptive fusion of prompts across layers and with tracker features. Compared to prior methods, DCPT inherits robust tracking from a daytime model trained on massive datasets, in a streamlined and end-to-end trainable architecture. Extensive experiments validate its state-of-the-art effectiveness for nighttime tracking.
2309.11957
Continuous Multi-user Activity Tracking via Room-Scale mmWave Sensing
Continuous detection of human activities and presence is essential for developing a pervasive interactive smart space. Existing literature lacks robust wireless sensing mechanisms capable of continuously monitoring multiple users' activities without prior knowledge of the environment. Developing such a mechanism requires simultaneous localization and tracking of multiple subjects. In addition, it requires identifying their activities at various scales, some being macro-scale activities like walking, squats, etc., while others are micro-scale activities like typing or sitting, etc. In this paper, we develop a holistic system called MARS using a single Commercial off-the-shelf (COTS) Millimeter Wave (mmWave) radar, which employs an intelligent model to sense both macro and micro activities. In addition, it uses a dynamic spatial time-sharing approach to sense different subjects simultaneously. A thorough evaluation of MARS shows that it can infer activities continuously with a weighted F1-Score of > 94% and an average response time of approx 2 sec, with 5 subjects and 19 different activities.
Argha Sen, Anirban Das, Swadhin Pradhan, Sandip Chakraborty
2023-09-21T10:15:43Z
http://arxiv.org/abs/2309.11957v1
# Continuous Multi-user Activity Tracking via Room-Scale mmWave Sensing ###### Abstract. Continuous detection of human activities and presence is essential for developing a pervasive interactive smart space. Existing literature lacks robust wireless sensing mechanisms capable of continuously monitoring multiple users' activities without prior knowledge of the environment. Developing such a mechanism requires simultaneous localization and tracking of multiple subjects. In addition, it requires identifying their activities at various scales, some being macro-scale activities like walking, squats, etc., while others are micro-scale activities like typing or sitting, etc. In this paper, we develop a holistic system called _MARS_ using a _single_ Commercial off-the-shelf (COTS) Millimeter Wave (mmWave) radar, which employs an intelligent model to sense both macro and micro activities. In addition, it uses a dynamic spatial time-sharing approach to sense different subjects simultaneously. A thorough evaluation of _MARS_ shows that it can infer activities continuously with a weighted F1-Score of \(>94\%\) and an average response time of \(\approx 2\) sec, with 5 subjects and 19 different activities.
2309.05922
A Survey of Hallucination in Large Foundation Models
Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information. This survey paper provides an extensive overview of recent efforts that aim to identify, elucidate, and tackle the problem of hallucination, with a particular focus on ``Large'' Foundation Models (LFMs). The paper classifies various types of hallucination phenomena that are specific to LFMs and establishes evaluation criteria for assessing the extent of hallucination. It also examines existing strategies for mitigating hallucination in LFMs and discusses potential directions for future research in this area. Essentially, the paper offers a comprehensive examination of the challenges and solutions related to hallucination in LFMs.
Vipula Rawte, Amit Sheth, Amitava Das
2023-09-12T02:34:06Z
http://arxiv.org/abs/2309.05922v1
# A Survey of Hallucination in "Large" Foundation Models ###### Abstract Hallucination in a foundation model (FM) refers to the generation of content that strays from factual reality or includes fabricated information. This survey paper provides an extensive overview of recent efforts that aim to identify, elucidate, and tackle the problem of hallucination, with a particular focus on "Large" Foundation Models (LFMs). The paper classifies various types of hallucination phenomena that are specific to LFMs and establishes evaluation criteria for assessing the extent of hallucination. It also examines existing strategies for mitigating hallucination in LFMs and discusses potential directions for future research in this area. Essentially, the paper offers a comprehensive examination of the challenges and solutions related to hallucination in LFMs. ## 1 Introduction Foundation Models (FMs), exemplified by GPT-3 Brown et al. (2020) and Stable Diffusion Rombach et al. (2022), mark the commencement of a novel era in the realm of machine learning and generative artificial intelligence. Researchers introduced the term **"foundation model"** to describe machine learning models that are trained on extensive, diverse, and unlabeled data, enabling them to proficiently handle a wide array of general tasks. These tasks encompass language comprehension, text and image generation, and natural language conversation. ### What is a Foundation Model? Foundation models refer to massive AI models trained on extensive volumes of unlabeled data, typically through self-supervised learning. This training approach yields versatile models capable of excelling in a diverse range of tasks, including image classification, natural language processing, and question-answering, achieving remarkable levels of accuracy. These models excel in tasks involving generative abilities and human interaction, such as generating marketing content or producing intricate artwork based on minimal prompts. However, adapting and implementing these models for enterprise applications can present certain difficulties Bommasani et al. (2021). ### What is Hallucination in a Foundation Model? Hallucination in the context of a foundation model refers to a situation where the model generates content that is not based on factual or accurate information. Hallucination can occur when the model produces text that includes details, facts, or claims that are fictional, misleading, or entirely fabricated, rather than providing reliable and truthful information. This issue arises due to the model's ability to generate plausible-sounding text based on patterns it has learned from its training data, even if the generated content does not align with reality. Hallucination can be unintentional and may result from various factors, including biases in the training data, the model's lack of access to real-time or up-to-date information, or the inherent limitations of the model in comprehending and generating contextually accurate responses. Addressing hallucination in foundation models and LLMs is crucial, especially in applications where factual accuracy is paramount, such as journalism, healthcare, and legal contexts. Researchers and developers are actively working on techniques to mitigate hallucinations and improve the reliability and trustworthiness of these models. With the recent rise in work on this problem (Fig. 2), it has become even more critical to address it. ### Why this survey?
In recent times, there has been a significant surge of interest in LFMs in both the academic and industrial sectors. Additionally, one of their main challenges is _hallucination_. The survey in [14] describes hallucination in natural language generation. In the era of **large** models, [15] have conducted another timely survey studying hallucination in LLMs. However, the problem of hallucination exists not only in LLMs but also in other foundation models for images, video, and audio. Thus, in this paper, we conduct the first comprehensive survey of hallucination across all major modalities of foundation models. #### 1.3.1 Our contributions The contributions of this survey paper are as follows: 1. We succinctly categorize the existing works in the area of hallucination in LFMs, as shown in Fig. 1. 2. We offer an extensive examination of large foundation models (LFMs) in Sections 2 to 5. 3. We cover all the important aspects, such as i. detection, ii. mitigation, iii. tasks, iv. datasets, and v. evaluation metrics, given in Table 1. 4. We also provide our views and possible future directions in this area. We will regularly update the associated open-source resources, available for access at [https://github.com/vr25/hallucination-foundation-model-survey](https://github.com/vr25/hallucination-foundation-model-survey) #### 1.3.2 Classification of Hallucination As shown in Fig. 1, we broadly classify the LFMs into **four** types as follows: i. Text, ii. Image, iii. Video, and iv. Audio. The paper is structured as follows. Based on the above classification, we describe the hallucination and mitigation techniques for all four modalities in: i. text (Section 2), ii. image (Section 3), iii. video (Section 4), and iv. audio (Section 5). In Section 6, we briefly discuss how hallucinations are NOT always bad and, hence, in the creative domain, can be well-suited to producing artwork. Finally, we give some possible future directions for addressing this issue along with a conclusion in Section 7. ## 2 Hallucination in Large Language Models As shown in Fig. 4, hallucination occurs when the LLM produces fabricated responses. ### LLMs SELFCHECKGPT [13] is a method for zero-resource black-box hallucination detection in generative LLMs. This technique focuses on identifying instances where these models generate inaccurate or unverified information without relying on additional resources or labeled data. It aims to enhance the trustworthiness and reliability of LLMs by providing a mechanism to detect and address hallucinations without external guidance or datasets. Self-contradictory hallucinations in LLMs are explored in [13], which addresses them through evaluation, detection, and mitigation techniques. Self-contradiction refers to situations where LLMs generate text that contradicts itself, leading to unreliable or nonsensical outputs. This work presents methods to evaluate the occurrence of such hallucinations, detect them in LLM-generated text, and mitigate their impact to improve the overall quality and trustworthiness of LLM-generated content. PURR [12] is a method designed to efficiently edit and correct hallucinations in language models. PURR leverages denoising language model corruptions to identify and rectify these hallucinations effectively. This approach aims to enhance the quality and accuracy of language model outputs by reducing the prevalence of hallucinated content. **Hallucination datasets:** Hallucinations are commonly linked to knowledge gaps in language models (LMs).
However, [15] proposed a hypothesis that in certain instances when language models attempt to rationalize previously generated hallucinations, they may produce false statements that they can independently identify as inaccurate. Thus, they created three question-answering datasets where ChatGPT and GPT-4 frequently provide incorrect answers and accompany them with explanations that contain at least one false assertion. HaluEval [11] is a comprehensive benchmark designed for evaluating hallucination in LLMs. It serves as a tool to systematically assess LLMs' performance in terms of hallucination across various domains and languages, helping researchers and developers gauge and improve the reliability of these models. **Hallucination mitigation using external knowledge:** Using interactive question-knowledge alignment, [14] presents a method for mitigating language model hallucination. Their proposed approach focuses on aligning generated text with relevant factual knowledge, enabling users to interactively guide the model's responses to produce more accurate and reliable information. This technique aims to improve the quality and factuality of language model outputs by involving users in the alignment process. LLM-AUGMENTER [15] improves LLMs using external knowledge and automated feedback. It highlights the need to address the limitations and potential factual errors in LLM-generated content. This method involves incorporating external knowledge sources and automated feedback mechanisms to enhance the accuracy and reliability of LLM outputs. By doing so, the paper aims to mitigate factual inaccuracies and improve the overall quality of LLM-generated text. Similarly, [11] introduces a framework called "Chain of Knowledge" for grounding LLMs with structured knowledge bases. Grounding refers to the process of connecting LLM-generated text with structured knowledge to improve factual accuracy and reliability. The framework utilizes a hierarchical approach, chaining multiple knowledge sources together to provide context and enhance the understanding of LLMs. This approach aims to improve the alignment of LLM-generated content with structured knowledge, reducing the risk of generating inaccurate or hallucinated information. Smaller, open-source LLMs with fewer parameters often experience significant hallucination issues compared to their larger counterparts (Elaraby et al., 2023). Figure 1: Taxonomy for Hallucination in Large Foundation Models. Figure 3: An illustration of hallucination [12]. Incorrect information is highlighted in red. Figure 2: The evolution of “hallucination” papers for Large Foundation Models (LFMs) from March 2023 to September 2023. This work focuses on evaluating and mitigating hallucinations in BLOOM 7B, which represents weaker open-source LLMs used in research and commercial applications. They introduce HALOCHECK, a lightweight knowledge-free framework designed to assess the extent of hallucinations in LLMs. Additionally, it explores methods like knowledge injection and teacher-student approaches to reduce hallucination problems in low-parameter LLMs. Moreover, the risks associated with LLMs can be mitigated by drawing parallels with web systems (Huang and Chang, 2023). It highlights the absence of a critical element, "citation," in LLMs, which could improve content transparency and verifiability, and address intellectual property and ethical concerns.
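The zero-resource, sampling-based detection idea discussed in this section (e.g., SELFCHECKGPT) can be illustrated with a deliberately simplified sketch: sample several responses to the same prompt and flag sentences of the main response that are poorly supported by the samples. The lexical-overlap measure below is only a crude stand-in for the learned entailment or QA-based scorers used in practice; the function names and threshold are assumptions for illustration, not any published implementation.

```python
from typing import Callable, List

def lexical_consistency(sentence: str, sample: str) -> float:
    """Crude stand-in consistency score: token overlap between a sentence
    and one sampled response (real systems use NLI/QA-based scorers)."""
    a, b = set(sentence.lower().split()), set(sample.lower().split())
    return len(a & b) / max(len(a), 1)

def flag_possible_hallucinations(
    main_sentences: List[str],
    sampled_responses: List[str],
    score: Callable[[str, str], float] = lexical_consistency,
    threshold: float = 0.3,  # assumed cut-off, chosen only for illustration
) -> List[str]:
    """Return sentences of the main response that are inconsistent with most
    stochastic re-samples, i.e., candidate hallucinations."""
    flagged = []
    for sent in main_sentences:
        avg = sum(score(sent, s) for s in sampled_responses) / len(sampled_responses)
        if avg < threshold:
            flagged.append(sent)
    return flagged
```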
Hallucination mitigation using prompting techniques:"Dehallucinating" refers to reducing the generation of inaccurate or hallucinated information by LLMs. Dehallucinating LLMs using formal methods guided by iterative prompting is presented in (Jha et al., 2023). They employ formal methods to guide the generation process through iterative prompts, aiming to improve the accuracy and reliability of LLM outputs. This method is designed to mitigate the issues of hallucination and enhance the trustworthiness of LLM-generated content. ### Multilingual LLMs Large-scale multilingual machine translation systems have shown impressive capabilities in directly translating between numerous languages, making them attractive for real-world applications. However, these models can generate hallucinated translations, which pose trust and safety issues when deployed. Existing research on hallucinations has mainly focused on small bilingual models for high-resource languages, leaving a gap in understanding hallucinations in massively multilingual models across diverse translation scenarios. To address this gap, (Pfeiffer et al., 2023) conducted a comprehensive analysis on both the M2M family of conventional neural machine translation models and ChatGPT, a versatile LLM that can be prompted for translation. The investigation covers a wide range of conditions, including over 100 translation directions, various resource levels, and languages beyond English-centric pairs. ### Domain-specific LLMs Hallucinations in mission-critical areas such as medicine, banking, finance, law, and clinical settings refer to instances where false or inaccurate information is generated or perceived, potentially leading to serious consequences. In these sectors, reliability and accuracy are paramount, and any form of hallucination, whether in data, analysis, or decision-making, can have significant and detrimental effects on outcomes and operations. Consequently, robust measures and systems are essential to minimize and prevent hallucinations in these high-stakes domains. Medicine:The issue of hallucinations in LLMs, particularly in the medical field, where generating plausible yet inaccurate information can be detrimental. To tackle this problem, (Umapathi et al., 2023) introduces a new benchmark and dataset called Med-HALT (Medical Domain Hallucination Test). It is specifically designed to evaluate and mitigate hallucinations in LLMs. It comprises a diverse multinational dataset sourced from medical examinations across different countries and includes innovative testing methods. Med-HALT consists of two categories of tests: reasoning and memory-based hallucination tests, aimed at assessing LLMs' problem-solving and information retrieval capabilities in medical contexts. Law:ChatLaw (Cui et al., 2023), is an open-source LLM specialized for the legal domain. To ensure high-quality data, the authors created a meticulously designed legal domain fine-tuning dataset. To address the issue of model hallucinations during legal data screening, they propose a method that combines vector database retrieval with keyword retrieval. This approach effectively reduces inaccuracies that may arise when solely relying on vector database retrieval for reference data retrieval in legal contexts. ## 3 Hallucination in Large Image Models Contrastive learning models, employing a Siamese structure (Wu et al., 2023), have displayed impressive performance in self-supervised learning. 
Their success hinges on two crucial conditions: the presence of a sufficient number of positive pairs and the existence of ample variations among them. Without meeting these conditions, these frameworks may lack meaningful semantic distinctions and become susceptible to overfitting. To tackle these challenges, we introduce the Hallucinator, which efficiently generates additional positive samples to enhance contrast. The Hallucinator is differentiable, operating in the feature space, making it amenable to direct optimization within the pre-training task and incurring minimal computational overhead. Efforts to enhance LVLMs for complex multimodal tasks, inspired by LLMs, face a significant challenge: object hallucination, where LVLMs generate inconsistent objects in descriptions. This study [11] systematically investigates object hallucination in LVLMs and finds it's a common issue. Visual instructions, especially frequently occurring or co-occurring objects, influence this problem. Existing evaluation methods are also affected by input instructions and LVLM generation styles. To address this, the study introduces an improved evaluation method called POPE, providing a more stable and flexible assessment of object hallucination in LVLMs. Instruction-tuned Large Vision Language Models (LVLMs) have made significant progress in handling various multimodal tasks, including Visual Question Answering (VQA). However, generating detailed and visually accurate responses remains a challenge for these models. Even state-of-the-art LVLMs like InstructBLIP exhibit a high rate of hallucinatory text, comprising 30 percent of non-existent objects, inaccurate descriptions, and erroneous relationships. To tackle this issue, the study [14] introduces MHalDetect1, a Multimodal Hallucination Detection Dataset designed for training and evaluating models aimed at detecting and preventing hallucinations. MHalDetect contains 16,000 finely detailed annotations on VQA examples, making it the first comprehensive dataset for detecting hallucinations in detailed image descriptions. ## 4 Hallucination in Large Video Models Hallucinations can occur when the model makes incorrect or imaginative assumptions about the video frames, leading to the creation of artificial or erroneous visual information Fig. 5. The challenge of understanding scene affordances is tackled by introducing a method for inserting people into scenes in a lifelike manner [13]. Using an image of a scene with a marked area and an image of a person, the model seamlessly integrates the person into the Figure 4: Instances of object hallucination within LVLMs [11]. Ground-truth objects in annotations are indicated in **bold**, while red objects represent hallucinated objects by LVLMs. The left case occurs in the conventional instruction-based evaluation approach, while the right cases occur in three variations of POPE. Figure 5: A video featuring three captions generated by various captioning models [11], with factual errors highlighted in red italics. scene while considering the scene's characteristics. The model is capable of deducing realistic poses based on the scene context, adjusting the person's pose accordingly, and ensuring a visually pleasing composition. The self-supervised training enables the model to generate a variety of plausible poses while respecting the scene's context. Additionally, the model can also generate lifelike people and scenes on its own, allowing for interactive editing. 
VideoChat [14], is a comprehensive system for understanding videos with a chat-oriented approach. VideoChat combines foundational video models with LLMs using an adaptable neural interface, showcasing exceptional abilities in understanding space, time, event localization, and inferring cause-and-effect relationships. To fine-tune this system effectively, they introduced a dataset specifically designed for video-based instruction, comprising thousands of videos paired with detailed descriptions and conversations. This dataset places emphasis on skills like spatiotemporal reasoning and causal relationships, making it a valuable resource for training chat-oriented video understanding systems. Recent advances in video inpainting have been notable [21], particularly in cases where explicit guidance like optical flow can help propagate missing pixels across frames. However, challenges arise when cross-frame information is lacking, leading to shortcomings. So, instead of borrowing pixels from other frames, the model focuses on addressing the reverse problem. This work introduces a dual-modality-compatible inpainting framework called Deficiency-aware Masked Transformer (DMT). Pretraining an image inpainting model to serve as a prior for training the video model has an advantage in improving the handling of situations where information is deficient. Video captioning aims to describe video events using natural language, but it often introduces factual errors that degrade text quality. While factuality consistency has been studied extensively in text-to-text tasks, it received less attention in vision-based text generation. In this research [15], the authors conducted a thorough human evaluation of factuality in video captioning, revealing that 57.0% of model-generated sentences contain factual errors. Existing evaluation metrics, mainly based on n-gram matching, do not align well with human assessments. To address this issue, they introduced a model-based factuality metric called FactVC, which outperforms previous metrics in assessing factuality in video captioning. ## 5 Hallucination in Large Audio Models Automatic music captioning, which generates text descriptions for music tracks, has the potential to enhance the organization of vast musical data. However, researchers encounter challenges due to the limited size and expensive collection process of existing music-language datasets. To address this scarcity, [16] used LLMs to generate descriptions from extensive tag datasets. They created a dataset known as LP-MusicCaps, comprising around 2.2 million captions paired with 0.5 million audio clips. They also conducted a comprehensive evaluation of this large-scale music captioning dataset using various quantitative natural language processing metrics and human assessment. They trained a transformer-based music captioning model on this dataset and evaluated its performance in zero-shot and transfer-learning scenarios. Ideally, the video should enhance the audio, and in [14], they have used an advanced language model for data augmentation without human labeling. Additionally, they utilized an audio encoding model to efficiently adapt a pre-trained text-to-image generation model for text-to-audio generation. ## 6 Hallucination is _not_ always harmful: A different perspective Suggesting an alternative viewpoint, [23] discusses how hallucinating models could serve as "collaborative creative partners," offering outputs that may not be entirely grounded in fact but still provide valuable threads to explore. 
Leveraging hallucination creatively can lead to results or novel combinations of ideas that might not readily occur to most individuals. "Hallucinations" become problematic when the statements generated are factually inaccurate or contravene universal human, societal, or particular cultural norms. This is especially critical in situations where an individual relies on the LLM to provide expert knowledge. However, in the context of creative or artistic endeavors, the capacity to generate unforeseen outcomes can be quite advantageous. Unexpected responses to queries can surprise humans and stimulate the discovery of novel idea connections.
2309.13198
Associative memory by virtual oscillator network based on single spin-torque oscillator
A coupled oscillator network may be able to perform an energy-efficient associative memory operation. However, its realization has been difficult because inhomogeneities unavoidably arise among the oscillators during fabrication and lead to an unreliable operation. This issue could be resolved if the oscillator network were able to be formed from a single oscillator. Here, we performed numerical simulations and theoretical analyses on an associative memory operation that uses a virtual oscillator network based on a spin-torque oscillator. The virtual network combines the concept of coupled oscillators with that of feedforward neural networks. Numerical experiments demonstrate successful associations of $60$-pixel patterns with various memorized patterns. Moreover, the origin of the associative memory is shown to be forced synchronization driven by feedforward input, where phase differences among oscillators are fixed and correspond to the colors of the pixels in the pattern.
Yusuke Imai, Tomohiro Taniguchi
2023-09-22T22:23:58Z
http://arxiv.org/abs/2309.13198v3
# Associative memory by virtual oscillator network based on single spin-torque oscillator ###### Abstract A coupled oscillator network may be able to perform an energy-efficient associative memory operation. However, its realization has been difficult because inhomogeneities unavoidably arise among the oscillators during fabrication and lead to an unreliable operation. This issue could be resolved if the oscillator network were able to be formed from a single oscillator. Here, we performed numerical simulations and theoretical analyses on an associative memory operation that uses a virtual oscillator network based on a spin-torque oscillator. The virtual network combines the concept of coupled oscillators with that of feedforward neural networks. Numerical experiments demonstrate successful associations of 60-pixel patterns with various memorized patterns. Moreover, the origin of the associative memory is shown to be forced synchronization driven by feedforward input, where phase differences among oscillators are fixed and correspond to the colors of the pixels in the pattern. The human brain has a sophisticated function called associative memory [1], whereby it can remember a pattern when shown a portion of that pattern. This function has been modeled in various ways with the goal of achieving a better understanding of brain activity and realizing energy-efficient bio-inspired computing. Since the development of an autocorrelation model in the 1970s [2, 3, 4], several theoretical models, such as the Hopfield model [5], have been developed that draw their inspiration from the characteristics of neural activity [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. These models have also been implemented in experimental devices. For example, the associative memory operation was recently performed in a spintronic memory consisting of a nanometer-scale ferromagnetic multilayer [16]. In addition to these efforts embodying neuronal dynamics, it has been proposed that synchronized phenomena in coupled oscillator networks can be used to perform the associative memory operation [17, 18, 19, 20, 21]. For example, a detailed analysis was conducted on an _LC_-circuit oscillator network performing the operation [21]. A network of spintronic oscillators, called spin-torque oscillators (STOs), has also been shown to perform an associative memory operation [22]. There are two major issues with using an oscillator network for the associative memory operation. One is unstable operation due to inhomogeneity in the oscillator's parameters. For example, variations in frequency among the oscillators are unavoidable in experimental realizations; they prevent a synchronization between the oscillators and decrease the accuracy of the associative memory [21]. The other issue is that the required number of oscillators grows with the amount of input data. There are numerous challenges in fabricating a large number of oscillators and getting them to interact with each other. These issues might be resolved if we can construct an oscillator network virtually by using a single physical oscillator [23]. Such a network would have no inhomogeneities in its parameters as only one oscillator would have to be fabricated. However, there are questions on how such a network could be realized and how it could show synchronization phenomena. In this work, we demonstrate an associative memory operation by a virtual oscillator network through numerical simulations and theoretical analyses. 
First, we provide a detailed description of the virtual oscillator network consisting of a single physical oscillator. In particular, we discuss the principles involved, i.e., those of the coupled oscillator networks and feedforward neural networks. Next, we show that a virtual oscillator network consisting of a single STO can recognize several different 60-pixel patterns by numerically simulating the motion of the STO. We reveal that the feedforward input in the virtual network forces the virtual oscillators to synchronize and that this phenomenon results in the associative memory operation. ## Results ### Associative memory operation of this study The associative memory operation studied here is to associate a pattern, called the pattern to be recognized, with a pattern in a stored set of patterns, called memorized patterns. For example, suppose that the three patterns, "0", "1", and "2", shown in Fig. 1(a) are memorized, and the one shown in Fig. 1(b) is the pattern to be recognized: we can see that the pattern to be recognized is similar to the memorized pattern "1". Throughout this paper, we will suppose the memorized patterns use 10(rows)\(\times\)6(columns)= 60-pixels patterns for memorized patterns and patterns to be recognized. In the following subsections, we describe the concept of our virtual oscillator network after briefly reviewing a conventional oscillator network for comparison. Then, we demonstrate through numerical simulations that the virtual oscillator network can perform the associative memory operation. ### Associative memory operation by conventional oscillator network The associative memory operation by a conventional coupled oscillator network consists of two steps [21]. The first step is to give a correspondence between the phases of the oscillators and the colors of the pattern to be recognized. We prepare \(N\) oscillators corresponding to the pixels of the pattern to be recognized, where \(N\) is the number of oscillators (pixels). We introduce phases \(\psi_{i}\) (\(i=1,\cdots,N\)) and phase differences \(\Delta\psi_{i}=\psi_{i}-\psi_{1}\). The color of the \(i\)th pixel is determined by \(\cos\Delta\psi_{i}\), which is white (black) when \(\Delta\psi_{i}=0\) (\(\pi\)). According to this definition, the color of the first pixel is always white (see also the Methods for the definitions of color). Initially, there are no interactions between the oscillators. Thus, their phases are arbitrary, and the colors in the pattern are random, as schematically shown on the left of Fig. 2(a). When interactions between the oscillators are introduced and the interaction strengths are appropriately determined by the Hebbian rule, all the phase differences become 0 or \(\pi\) in correspondence with the white and black pixels of the pattern to be recognized, as shown in the middle of Fig. 2(a) (see also Methods for model of the conventional oscillator network). Here, the Hebbian rule means that the interaction strength between the \(j\)th and \(i\)th oscillator is proportional to the weight, \[w_{ij}^{(1)}=\xi_{i}^{\rm R}\xi_{j}^{\rm R}, \tag{1}\] where \(\xi_{i}^{\rm R}=+(-)1\) when the color of the pattern to be recognized at the \(i\)th pixel is white (black). Thus, \(w_{ij}^{(1)}=+(-)1\) when the colors of the \(i\)th and \(j\)th are the same (opposite). 
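For concreteness, the color coding via \(\cos\Delta\psi_{i}\) and the Hebbian weights of Eq. (1) are simple to compute. The NumPy sketch below is illustrative only; the function and variable names are assumptions, not code from the paper.

```python
import numpy as np

def hebbian_weights_recognize(xi_R: np.ndarray) -> np.ndarray:
    """Eq. (1): w_ij = xi_i^R * xi_j^R for the pattern to be recognized,
    where xi_R[i] is +1 (white pixel) or -1 (black pixel)."""
    return np.outer(xi_R, xi_R)

def phases_to_pattern(psi: np.ndarray) -> np.ndarray:
    """Map oscillator phases to pixel colors via cos(psi_i - psi_1):
    +1 -> white, -1 -> black, intermediate values -> gray scale."""
    return np.cos(psi - psi[0])
```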
The second step is to replace the weights by the following ones, which can be regarded as an average of the weights among the memorized patterns, \[w_{ij}^{(2)}=\frac{1}{N_{\rm m}}\sum_{m=1}^{N_{\rm m}}\xi_{i}^{m}\xi_{j}^{m}, \tag{2}\] where \(N_{\rm m}\) is the number of memorized patterns. The symbol \(m=1,2,\cdots,N_{\rm m}\) is used to distinguish the memorized patterns. For example, the memorized patterns "0", "1", and "2" in Fig. 2(a) are labelled \(m=1\), \(2\), and \(3\). The parameter \(\xi_{i}^{m}\) is \(+(-)1\) when the color of the \(i\)th pixel in the \(m\)th memorized pattern is white (black). Then, the oscillator phases change to those of the memorized pattern most resembling the pattern to be recognized, and the association is achieved, as shown in the right in Fig. 2(a). ### Description of associative memory operation by virtual oscillator network The associative memory operation by a virtual oscillator network consists of three steps. First, we measure an oscillation of a single oscillator and divide it into \(N\) parts, as schematically shown on the first line of Fig. 2(b). The \(i\)th part of the measured data is regarded as the output from the \(i\)th oscillator in a virtual network. In this step, the Figure 1: Examples of memorized patterns and a pattern to be recognized. (a) Three (\(N_{\rm m}=3\)) memorized patterns, “0”, “1”, and “2”. (b) The pattern to be recognized resembles memorized pattern “1”. The oscillator network tries to associate the pattern to be recognized with the pattern “1”. In an associative memory operation performed by a system consisting of \(N\) oscillators, the color of the \(i\)th (\(i=1,2,\cdots,N\)) pixel is determined by the phase \(\psi_{i}\) of the corresponding oscillator. The color is white (black) when the phase difference, \(\Delta\psi_{i}=\psi_{i}-\psi_{1}\), is 0 (\(\pi\)). The color is on a gray scale when the phase difference is \(0<\Delta\psi_{i}<\pi\). Figure 2: Schematic illustration of conventional and virtual oscillator networks. (a) In the conventional oscillator network, the oscillators are initially uncoupled (left). Therefore, the phase of each oscillator is arbitrary. When the oscillators interact with appropriate weights [\(w_{ij}^{(1)}\)], the phases saturate to values corresponding to the pattern to be recognized (middle). When the weight changes [\(w_{ij}^{(2)}\)], the phases change so that the corresponding pattern resembles one of memorized patterns (right). (b) In a virtual oscillator network, we drive an oscillation of a single oscillator and divide its output into \(N\) parts. The \(i\)th part is regarded as an output from the \(i\)th virtual oscillator. First [top of (b)], we measure the \(N\) outputs. The corresponding pattern in this step is arbitrary because there is no correlation among the oscillators. Second [middle of (b)], an external force is added to the oscillator. This force is a linear combination of the outputs in the first step with appropriated weights [\(w_{ij}^{(1)}\)]. The phase of each part eventually saturates to a value corresponding to the pixel color in the pattern to be recognized. Third [bottom of (b)], the second step is repeated while the force is a linear combination of the outputs in the second step with weights \(w_{ij}^{(2)}\). Eventually, the phases saturate to the values corresponding to the memorized pattern most resembling the pattern to be recognized. phase of each part is arbitrary, and therefore, the pattern arising from it is random. 
The measured data should be stored in a computer in order for it to be used in the next step. Second, we excite another oscillation and divide the measured data into \(N\) parts again. At the initial time of each part, the phase, as well as the pattern determined from it, is arbitrary, as shown in the middle of Fig. 2(b). This time, however, we apply an external force to the oscillator that is proportional to a linear combination of the measured data in the first step with weights (1). For example, in this study, the external force comes from a torque excited by an external magnetic field, which applied during the \(i\)th part of the oscillation is given by \[H_{i}^{(1)}=\mathcal{H}\sum_{j=1}^{N}w_{ij}^{(1)}y_{j}^{(1)}, \tag{3}\] where \(\mathcal{H}\) denotes the amplitude and \(y_{j}^{(1)}\) is the output from the \(j\)th oscillator measured in the first step [see also Methods for the detailed definition of \(y_{j}^{(1)}\) in the numerical simulations]. Therefore, Eq. (3) is an oscillating function with the frequency of the oscillator. Because of the application of the magnetic field, the phase in each part eventually saturates to a certain value, and the pattern to be recognized is output, as shown in the middle of Fig. 2(b). Note that the output signal of this process should be stored in a computer. Third, we perform a measurement similar to one in the second step but the magnetic field applied during the \(i\)th part is replaced by \[H_{i}^{(2)}=\mathcal{H}^{\prime}\sum_{j=1}^{N}w_{ij}^{(2)}y_{j}^{(2)}, \tag{4}\] where \(\mathcal{H}^{\prime}\) denotes the amplitude, while \(y_{j}^{(2)}\) is the output from the \(j\)th oscillator measured at the second step (see also Methods pertaining to the numerical simulations). The weights \(w_{ij}^{(2)}\) are given by Eq. (2). The phase at the end of each part saturates to a value corresponding to the memorized pattern most resembling the pattern to be recognized, as shown in the bottom of Fig. 2(b); i.e., the associative memory operation is completed. There are several differences between the conventional and virtual oscillator networks (see also Methods for the models). For example, the oscillators in the conventional oscillator network interact instantaneously, and their phase differences saturate to values corresponding to pixel colors as a result of mutual synchronization. On the other hand, the oscillators in the virtual oscillator network do not interact each other instantaneously. As can be seen in Eqs. (3) and (4), the oscillator outputs from the previous steps are used in the magnetic field in the current step. From perspective, the virtual oscillator network is similar to a feedforward neural network because the information on the oscillator phases in one step is sent to the oscillation in the next step. At the same time, we should note that the weights in the virtual oscillator network are fixed, as in the case of the conventional oscillator network. This is in contrast with a feedforward neural network used in deep learning, in which weights are updated by backpropagation. Thus, the virtual oscillator network can be regarded as a hybrid combination of a coupled oscillator network and a feedforward neural network. In the discussion below, we will reveal that the feedforward inputs cause forced synchronization among the divided parts and result in the associative memory operation. Before that, however, we must demonstrate that this virtual oscillator network can actually perform the associative memory operation. 
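The remaining ingredients of the virtual network, Eqs. (2)-(4), amount to an averaged Hebbian weight matrix over the memorized patterns and a feedforward field built as a weighted sum of the outputs recorded in the previous step. The sketch below is a minimal illustration under assumed array layouts and an illustrative field amplitude; it is not the simulation code used in this work.

```python
import numpy as np

def hebbian_weights_memorized(xi_m: np.ndarray) -> np.ndarray:
    """Eq. (2): average of xi_i^m * xi_j^m over the N_m memorized patterns.
    xi_m has shape (N_m, N) with entries +1 (white) / -1 (black)."""
    N_m = xi_m.shape[0]
    return xi_m.T @ xi_m / N_m

def feedforward_field(i: int, W: np.ndarray, y_prev: np.ndarray,
                      amplitude: float) -> np.ndarray:
    """Eqs. (3)/(4): field applied during the i-th part of the oscillation,
    a weighted sum of the N outputs y_prev[j] measured in the previous step.
    W has shape (N, N); y_prev has shape (N, T) with T samples per part."""
    return amplitude * W[i] @ y_prev  # shape (T,)
```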
### Equation of motion of oscillator As an oscillator in the virtual oscillator network, we use a vortex STO, which has various advantages for practical applications and has been frequently used in spintronics experiments on bio-inspired computing [24, 25, 26, 27]. An STO consists of a ferromagnetic/nonmagnetic multilayer on the nanometer scale, as schematically shown in Fig. 3(a). A vortex of magnetic moments appears when a diameter and thickness of a cylinder-shape ferromagnet are on the order of 100 and 1 nm, respectively. When an electric current and/or magnetic field are applied to the STO, magnetic moments show precessions around their equilibrium direction. According to a recent experiment on chaos excitation in an STO [28], we assume that a force added to the virtual oscillator network corresponds to a torque excited by magnetic field, as mentioned above. It has been shown both experimentally and theoretically that the dynamics in a vortex STO are well described by the Thiele equation [29, 30, 31, 32, 33, 34, 35, 36], which is the equation of motion for a center of the vortex structure, called the vortex core (see also Methods for Thiele equation): \[-G\mathbf{e}_{z}\times\mathbf{\dot{X}}-|D|\left(1+\xi s^{2}\right)\mathbf{ \dot{X}}-\kappa\left(1+\zeta s^{2}\right)\mathbf{X}+a_{J}JP_{z}\mathbf{e}_{z} \times\mathbf{X}+ca_{J}JR_{0}p_{x}\mathbf{e}_{x}+c\mu^{*}\mathbf{e}_{z}\times \mathbf{H}=\mathbf{0}, \tag{5}\] where \(\mathbf{X}=(X,Y,0)\) represents the position of the vortex core in the \(xy\) plane. While the physical meanings and the values of many parameters are explained in Methods, two quantities should be explained here. The first is the current density \(J\), which causes a limit-cycle oscillation of the vortex core. The other is the external magnetic field \(\mathbf{H}\), which is used to excite a torque. It is useful to notice that Eq. (5) can be approximated as (see also Methods for the analytical solution of the Thiele equation) \[\dot{s}=as-bs^{3}-\frac{c\mu^{*}}{GR}H_{y}\cos\psi, \tag{6}\] \[\dot{\psi}=\frac{\kappa}{G}\left(1+\zeta s^{2}\right)+\frac{c\mu^{*}}{GRs}H_{y }\sin\psi, \tag{7}\] where \(s=|\mathbf{X}|/R\) (\(0\leq s\leq 1\)) is the distance of the vortex core from the center of the ferromagnet normalized by the disk radius \(R\), while \(\psi=\tan^{-1}(Y/X)\) is the phase. Here, \(a=(|D|\kappa/G^{2})[(J/J_{\mathrm{c}})-1]\) and \(b=(|D|\kappa/G^{2})(\xi+\zeta)\), where \(J_{\mathrm{c}}=|D|\kappa/(Ga_{J}p_{z})\). The magnetic field \(\mathbf{H}\) is assumed to have only a \(y\) component \(H_{y}\). Note that Eqs. (6) and (7) are similar to the equation of motion of the Stuart-Landau oscillator [37]. Therefore, the vortex core shows a limit-cycle oscillation around the disk center in the \(xy\) plane with an oscillating amplitude \(s_{0}=\sqrt{a/b}\) when \(J\) exceeds a threshold value \(J_{\mathrm{c}}\), while the terms related to \(H_{y}\) act as a perturbation. The connection to such a fundamental nonlinear oscillator model indicates that our results are also valid for various oscillators in nature and engineering. Figure 3(b) shows an example of nonperturbative vortex dynamics, showing an approximately circular oscillation of the vortex core around the disk center. The phase difference of the oscillation was used to define the colors in the patterns in the associative memory operation. Readers should note that the plots in Fig. 3(b), as well as the results of the numerical simulations shown below, were obtained by solving Eq. 
(5), while the approximate equations, Eqs. (6) and (7), are used in the model analyses described below. ### Demonstration of associative memory Figure 3(c) shows the time evolution of the phase difference, \(\Delta\psi_{i}\), obtained by solving Eq. (5) with Eq. (3) substituting for \(H_{y}\). Note that this solution corresponds to the second step in Fig. 2(b). The phase differences saturate to \(0\) or \(\pi\) within a few hundred nanoseconds. Snapshots of patterns corresponding to this time evolution of the phases are shown in Fig. 3(d). The patterns eventually settle to the one to be recognized. Figure 2(b) shows the result of solving Eq. (5) with Eq. (4) substituting for \(H_{y}\). Here, Eq. (2) in Eq. (4) is for the three memorized patterns in Fig. 1(a). Figures 3(e) and 3(f) show the time evolution of the phase differences and snapshots of the corresponding patterns. Remind that the information of the phases corresponding to the colors of the pixels in the pattern to be recognized is included in the magnetic field in Eq. (4) through \(y_{j}^{(2)}\). Consequently, even though the initial pattern is random, the oscillator phases finally saturate to values corresponding to one of the memorized patterns [Fig. 3(f)]. The associative memory operation becomes more difficult when there are similar memorized patterns. To clarify this point, let us examine what happens when the number of the memorized patterns is increased, as shown in Fig. 4(a) from the three in Fig. 1(a). The added patterns do not affect the second step in Fig. 2(b). For the association corresponding to the third step in Fig. 2(b), the magnetic field, defined by Eq. (4), is changed by these new memorized patterns. As a result, the final pattern output resembles none of the memorized ones [Fig. 4(b)]. This failure of the associative memory operation is due to two reasons. The first is that the pattern "7" is similar to the pattern "1", which should be the one associated. When "7" is excluded from the memorized patterns, the association succeeds, as shown in Fig. 4(c). The second reason is that the number of memorized patterns is large. As shown in Fig. 4(d), the association succeeds when the memorized patterns include only "1" and "7", the association is succeeded. Therefore, we conclude that an association may fail when the memorized patterns include similar patterns and the number of memorized patterns is large. To quantify the similarity between patterns \(A\) and \(B\), we introduce the degree of overlap: \[\mathcal{O}(\boldsymbol{\xi}^{A},\boldsymbol{\xi}^{B})\equiv\frac{1}{N}\bigg{|} \sum_{i=1}^{N}\xi_{i}^{A}\xi_{i}^{B}\bigg{|}, \tag{8}\] where \(\boldsymbol{\xi}^{A}=(\xi_{1}^{A},\cdots,\xi_{N}^{A})\) is defined from the color of the \(i\)th pixel of pattern \(A\) [\(\xi_{i}^{A}=+(-)1\) when the \(i\)th pixel is white (black)]. The overlap becomes \(1\) when the two patterns are completely identical or their black and white colors are all exchanged (see also Methods for the definitions of color and overlap). For example, in the example shown in Figs. 1 and 3, the degree of overlap between the pattern to be recognized and the memorized pattern "0" is \(\mathcal{O}(\boldsymbol{\xi}^{R},\boldsymbol{\xi}^{1})=18/60=0.30\). 
It is\(\mathcal{O}(\boldsymbol{\xi}^{R},\boldsymbol{\xi}^{2})=44/60\simeq 0.73\) for pattern "1", and \(\mathcal{O}(\boldsymbol{\xi}^{R},\boldsymbol{\xi}^{3})=6/60=0.10\) for pattern "2" (the memorized patterns are labelled as \(m=1,2,3,\cdots\) while the examples of memorized patterns in this work are "0", "1", "2", etc; thus, the label \(m\) and the corresponding number are off by one). Since the degree of overlap of the pattern to be recognized and "1" is large in the examples in Figs. 1 and 3, pattern "1" should be associated in this case. On the other hand, in the example shown in Fig. 4, Figure 3: Description of STO and demonstration of associative memory by a virtual oscillator network. (a) Schematic illustration of vortex spin torque oscillator and (b) vortex-core dynamics driven by electric current. The STO has a cylindrical shape, and the \(z\) axis is orthogonal to the circular plane. Magnetic moments, shown as colored arrows in top ferromagnet, form a circular structure. The black dot around which the moments turn is the vortex core. Electric current is injected into the STO; positive current flows from bottom to top in the figure. When the electric current density \(J\) exceeds a threshold value, the vortex core oscillates around the disk center. The output signals from the STO during the first (second) step in Fig. 2(b) are stored, and their linear combination with weights \(w_{ij}^{(1)}\) [\(w_{ij}^{(2)}\)] defined from the pattern to be recognized (memorized patterns) is used as magnetic field during the second (third) step. For simplicity, the dynamics in the absence of the magnetic field is shown. The components of the vortex-core’s position, \(X/R\), and \(Y/R\), oscillate around the disk center, and a trajectory is approximately a circle. The distance of the vortex-core’s position from the disk center, \(s\), is approximately constant value, \(s_{0}\). The phase measured from the \(x\) axis is denoted as \(\psi\). (c) Time evolutions of the 59 phase differences, \(\Delta\psi_{i}\) (\(i=2,3,\cdots,60\)) and (d) snapshots of generating a pattern to be recognized on 60-pixels. (e) Time evolutions of the phase difference and (f) snapshots of the corresponding pattern for association from memorized patterns. Figure 4: Problem of associative memory operation when the similarity between the memorized patterns is high and the number of patterns is large. (a) Ten (\(N_{\text{m}}=10\)) memorized patterns, “0”, “1”,“-,“9”. (b) Time evolution of the phase difference during the association and snapshots of the corresponding pattern. In this case, the memorized patterns include both “1” and “7”. Because of their similarity, the pattern does not finally saturate to “1”. (c) When “7” is removed from the memorized patterns (\(N_{\text{m}}=9\)), the association is successful, even though there are nine remaining memorized patterns. (d) The association is successful when the memorized patterns include only “1” and “7”. the overlap between the pattern to be recognized [Fig. 1(b)] and "7" is also relatively large, i.e., \(\mathcal{O}(\xi^{\rm R},\xi^{\rm R})=32/60\simeq 0.53\). In addition, the overlap between the memorized patterns "1" and "7", \(\mathcal{O}(\xi^{\rm R},\xi^{\rm R})=28/60\simeq 0.47\), is also relatively large compared with those between the other patterns; for example, the overlap between "1" and "8" is \(\mathcal{O}(\xi^{\rm R},\xi^{\rm R})=2/60\simeq 0.03\) (see also Supplementary Information, where the overlaps of the ten memorized patterns are summarized). 
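The degree of overlap of Eq. (8) is straightforward to evaluate numerically; a short sketch with illustrative random patterns:

```python
import numpy as np

def overlap(xi_a, xi_b):
    """Degree of overlap of Eq. (8) between two +/-1 patterns of equal length."""
    xi_a, xi_b = np.asarray(xi_a), np.asarray(xi_b)
    return abs(np.sum(xi_a * xi_b)) / xi_a.size

rng = np.random.default_rng(2)
p = rng.choice([-1, 1], 60)
print(overlap(p, p))        # 1.0: identical patterns
print(overlap(p, -p))       # 1.0: all colors swapped counts as the same pattern
q = rng.choice([-1, 1], 60)
print(overlap(p, q))        # typically small for unrelated random patterns
```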
Accordingly, when the memorized patterns include "1" and "7", the virtual oscillator network cannot associate a correct pattern, and the final pattern produced corresponds to none of the memorized ones. Similarly, when the number of memorized patterns is large, there might be patterns having large overlaps and the association fails. In summary, we have shown that the virtual oscillator network based on the algorithm in Fig. 2(b) can perform the associative memory operation. Its accuracy, however, is low when the memorized patterns include some patterns having large overlaps and there is a large number of memorized patterns. Note that the maximum number of patterns that can be memorized by neural network is approximately \(N/(2\log N)^{8}\). It would be of interest if such a formula can be derived for virtual oscillator networks in future. We examined the associative memory operation for various cases, i.e., for different patterns to be recognized, and studied the rate of the accurate association; see Supplementary Information. ## Discussion Here we discuss the principles of the associative memory operation analytically by using Eqs. (6) and (7). As mentioned above, the operation consists of three steps, and in each step, the oscillator output is divided into \(N\) parts. In what follows, we denote the phase of the vortex core during the \(i\)th part of the \(k\)th step as \(\psi_{i}^{(k)}\). We also assume that the oscillation amplitude \(s_{0}\) is approximately constant because the current density is fixed. Therefore, the oscillation frequency, \(f=\Omega/(2\pi)=[\kappa/(2\pi G)](1+\zeta_{*0}^{*2})\), is also approximately constant (see also Methods for the analytical solution of the Thiele equation). The phase in the second step obeys, \[\psi_{i}^{(2)}=\Omega+\frac{c\mu^{*}}{GRs_{0}}\mathcal{H}\sum_{\ell=1}^{N} \xi_{\ell}^{\rm R}\xi_{\ell}^{\rm R}\chi_{\ell}^{(1)}\sin\psi_{i}^{(2)}. \tag{9}\] Thus, the phase difference between the \(i\)th and \(j\)th parts obeys, \[\dot{\psi}_{i}^{(2)}-\dot{\psi}_{j}^{(2)}=\frac{c\mu^{*}}{GRs_{0}}\mathcal{H} \left(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}\chi_{\ell}^{(1)}\right)\left(\xi_{i }^{\rm R}\sin\psi_{i}^{(2)}-\xi_{j}^{\rm R}\sin\psi_{j}^{(2)}\right). \tag{10}\] The steady state condition on the phase difference leads to \[\xi_{i}^{\rm R}\sin\psi_{i}^{(2)}-\xi_{j}^{\rm R}\sin\psi_{j}^{(2)}=0. \tag{11}\] Note that \(\xi_{i}^{\rm R}=+(-)1\) when the color at the \(i\)th pixel of the pattern to be recognized is white (black). Therefore, \(\psi_{i}^{(1)}\) and \(\psi_{j}^{(1)}\) will be in-phase \(\psi_{i}^{(1)}=\psi_{j}^{(1)}\) [anti-phase \(\psi_{i}^{(1)}=\psi_{j}^{(1)}\pm\pi\)] when the colors of the \(i\)th and \(j\)th pixels are the same (opposite). As a result, the phase differences in the second step saturate to 0 or \(\pi\) corresponding to the white or black in the pattern to be recognized. Note that this synchronization is caused by a feedforward input from the first step, which corresponds to the second term on the right-hand side in Eq. (9). Here, the term \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}\chi_{\ell}^{(1)}\) in Eq. (9) is the sum of the \(N\) oscillator outputs \(y_{\ell}^{(1)}\) in the first step, multiplied by the factor \(\xi_{\ell}^{\rm R}\) determining the pixel color of the pattern to be recognized, and is common for all \(i\) of Eq. (9). Equation (9) also includes a factor \(\xi_{i}^{\rm R}\), which determines the sign of the input. 
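The forced synchronization implied by Eqs. (9)-(11) can also be checked by integrating the phase equation directly. The following sketch uses illustrative dimensionless units (oscillation period 1, weak drive) and random first-step phases; the phase differences settle to \(0\) or \(\pi\) according to \(\xi^{\rm R}_{i}\).

```python
import numpy as np

rng = np.random.default_rng(3)
N, Omega, eps = 60, 2 * np.pi, 0.1        # illustrative units: period 1, weak drive
xi_R = rng.choice([-1, 1], N)             # pixel colors of the pattern to be recognized

phi1 = rng.uniform(0, 2 * np.pi, N)       # arbitrary phases of the stored first-step outputs
y1 = lambda t: np.sin(Omega * t + phi1)   # y_l^(1)(t)

# Eq. (9): psi_i' = Omega + eps * xi_i^R * [sum_l xi_l^R y_l^(1)(t)] * sin(psi_i)
dt, T = 2e-3, 150.0
psi = rng.uniform(0, 2 * np.pi, N)        # arbitrary initial phases in the second step
for k in range(int(T / dt)):
    drive = xi_R @ y1(k * dt)             # common feedforward term
    psi += dt * (Omega + eps * xi_R * drive * np.sin(psi))

dpsi = psi - psi[0]
# cos(dpsi) approaches +1 for pixels with the same color as pixel 1, -1 otherwise.
print(np.round(np.cos(dpsi), 2)[:8])
print(xi_R[:8] * xi_R[0])
```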
Regarding these facts, the feedforward input has only two values, depending on the value of \(\xi_{i}^{\rm R}\). The phase synchronization among the \(N\) parts in the second step is the result of forced synchronization with respect to this feedforward input, and the phase difference has only two values, 0 or \(\pi\), depending on the value of \(\xi_{i}^{\rm R}\). This mechanism is in contrast with that of the previous work [21], where a mutual synchronization is the origin of the associative memory operation. Also, the method is different from the previous works [38, 39]. In Ref. [38], a forced synchronization of frequency with respect to an external signal was studied, while the input signal in the present work is generated by the oscillator output itself and the phase synchronization plays the central role in the associative memory operation. In Ref. [39], a delayed-feedback was used to generate input signal, while the input signal in the present work is generated by multiplying appropriated weight to perform the associative memory operation. We also note that, when \(y_{\ell}^{(1)}\) is a simple trigonometric function, its linear combination, \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}\chi_{\ell}^{(1)}\), is also a trigonometric function with the same frequency and a different phase. According to the above discussion, the phase of the term \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}\chi_{\ell}^{(1)}\) does not play any role to excite forced synchronization among the \(N\) parts. Thus, the term \(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm R}\chi_{\ell}^{(1)}\) could be replaced by, for example, \(y_{1}^{(1)}\). In this case, it is unnecessary to measure other \((N-1)\)\(y_{\ell}^{(1)}\) (\(\ell=2,3,\cdots,N\)) in the first step in Fig. 2(b), although we solved the equation of motion for \(N\) virtual oscillators to clarify similarities and differences between the second and third step. When \((N-1)\) parts in the first step are omitted for simplicity, the power consumption to drive the oscillator in the virtual oscillator network is proportional to \(2N+1\), where \(2N\) comes from the second and third steps in Fig. 2(b). On the other hand, the power consumption in the conventional oscillator network is proportional to \(2N\) because \(N\) oscillators are driven two times, as implied in Fig. 2(a). For a large \(N\), the power consumption of two oscillator networks are comparable. The time required for the operation increases linearly as \(N\) increases, which is not suitable for practical applications, although the same might be true for a conventional oscillator network because the relaxation time of the phase will depend on the number of the oscillators. the time of a conventional (coupled) oscillator network might also increase as \(N\) increases. However, the virtual oscillator network has an advantage from a viewpoint of reliability, as discussed below. Next, we focus on the third step, where the phase during the \(i\)th part obeys \[\psi_{i}^{(3)}=\Omega+\frac{c\mu^{*}}{GRs_{0}}\mathcal{H}^{0}\frac{1}{N_{\rm m }}\sum_{m=1}^{N_{\rm m}}\sum_{\ell=1}^{N}\xi_{\ell}^{m}\xi_{\ell}^{m}y_{\ell} ^{(2)}\sin\psi_{i}^{(3)}. \tag{12}\] Since the oscillators in the second step are in the synchronized state, the output \(y_{\ell}^{(2)}\) can be expressed as \(y_{\ell}^{(2)}=\xi_{\ell}^{\rm R}\xi_{1}^{\rm R}y_{1}^{(2)}\), where \(y_{1}^{(2)}\) is the output of the first part in the second step. We substitute this relation into Eq. 
(12) and assume that \[\sum_{\ell=1}^{N}\xi_{\ell}^{m}\xi_{\ell}^{\rm R}\simeq\delta_{m,\mathcal{A}} \sum_{\ell=1}^{N}\xi_{\ell}^{m}\xi_{\ell}^{\rm R}, \tag{13}\] where the symbol \(\mathcal{A}\) corresponds to a pattern in the memorized patterns that resembles the pattern to be recognized. The assumption (13) means that only a pattern having a large degree of overlap with the pattern to be recognized contributes to the feedforward input. The other memorized patterns, which are greatly different from the pattern to be recognized, do not contribute to the feedforward input because of their small overlap. When the assumption is satisfied, Eq. (12) becomes \[\psi_{i}^{(3)}=\Omega+\frac{c\mu^{*}}{GRs_{0}}\mathcal{H}^{0}\frac{1}{N_{\rm m }}y_{1}^{(2)}\xi_{1}^{\rm R}\left(\sum_{\ell=1}^{N}\xi_{\ell}^{\rm z\sigma} \xi_{\ell}^{\rm R}\right)\xi_{i}^{\rm z\sigma}\sin\psi_{i}^{(3)}. \tag{14}\] Equation (14) is similar to Eq. (9), and therefore, the steady-state condition of the phase difference between the \(i\)th and \(j\)th parts in the third step is given by \[\xi_{i}^{\rm z\sigma}\sin\psi_{i}^{(3)}-\xi_{j}^{\rm z\sigma}\sin\psi_{j}^{(3 )}=0. \tag{15}\] Equation (15) means that in-phase or anti-phase synchronization between the \(N\) parts occurs, and the phase differences in the third step saturate to \(0\) or \(\pi\) corresponding to the white or black colors in a memorized pattern most resembling the one to be recognized. The operation principle is based on Eq. (13). Equation (13) is satisfied if there is only one pattern that has a large degree of overlap with the pattern to be recognized. On the other hand, if there are other patterns having large overlaps with the pattern to be recognized, Eq. (13) is not satisfied. In this case, Eq. (15) is not necessarily satisfied, and the colors in the steady state in the third step might be different from the pattern most resembling the one to be recognized or they might be gray (neither black nor white); see also Supplementary Information. Our analysis also assumed that the oscillation frequencies of the \(N\) parts are the same. This assumption is a natural one because each part is obtained from a single oscillator. Technically speaking, the oscillation frequency in each part is varied by changing the magnitude of the electric current. If the oscillation frequencies of the \(i\)th and \(j\)th parts, denoted as \(\Omega_{i}/(2\pi)\) and \(\Omega_{j}/(2\pi)\), are different, the right-hand side of Eq. (10) has an additional term \(\Omega_{i}-\Omega_{j}\). In such a case, the phase difference is not well defined because \(\psi_{i}\) and \(\psi_{j}\) oscillate with different frequencies. Even if we introduce an instantaneous phase by, for example, making a Hilbert transformation, as was done in experiments [40], the phase difference still does not necessarily saturate to \(0\) or \(\pi\). In such a case, the associative memory operation fails. Therefore, there is no reason to change the oscillation frequency in each part. This fact also indicates an advantage to using the virtual oscillator network. In the conventional oscillator network, variations in the oscillation frequency naturally appear because inhomogeneities in the parameters of the oscillators are unavoidable, and such variations lead to the failure of the associative memory operation [21]. The virtual oscillator network does not have such variation and thus would be a more reliable associative memory. 
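A quick numerical check of assumption (13) consists in evaluating the sums \(\sum_{\ell}\xi^{m}_{\ell}\xi^{\rm R}_{\ell}\) for all memorized patterns and verifying that a single one dominates; a sketch with illustrative random patterns:

```python
import numpy as np

def signed_overlaps(memorized, xi_R):
    """Per-pattern sums sum_l xi^m_l xi^R_l entering the feedforward term of Eq. (12)."""
    return memorized @ xi_R                      # shape (N_m,)

rng = np.random.default_rng(4)
N, N_m = 60, 10
memorized = rng.choice([-1, 1], size=(N_m, N))   # illustrative memorized patterns
xi_R = memorized[1].copy()
xi_R[rng.choice(N, 6, replace=False)] *= -1      # noisy copy of pattern m = 2

s = signed_overlaps(memorized, xi_R) / N
# Assumption (13) holds when exactly one normalized sum is close to +/-1 and the
# others are near zero; otherwise the association may fail.
print(np.round(s, 2), "dominant pattern index:", np.argmax(np.abs(s)))
```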
A weak point of the present proposal is, on the other hand, that the method requires a computer to store the output signal in each step, which is not preferable for practical applications. We would like to keep this issue as a future work. In conclusion, we described the concept of the associative memory operation by a virtual oscillator network and performed numerical simulations. The operation consists of three steps, where the output of one step is sent to the next step with weights defined by the Hebbian rule. In this sense, the virtual oscillator network can be regarded as a hybrid combination of a coupled oscillator network and a feedforward neural network. The network successfully associated black-and-white patterns with a few memorized patterns. However, it failed to make an association when the number of memorized patterns was large (ten compared to three) and some of the memorized patterns resembled each other. We also developed a theoretical analysis and clarified that the origin of the associative memory operation is forced synchronization driven by feedforward input. Either in-phase or anti-phase synchronization was excited among the oscillators and provides appropriate correspondence between the oscillator phases and the colors in the patterns. The virtual oscillator network is more reliable than a conventional oscillator network, which is affected by unavoidable inhomogeneities among the oscillators. ## Methods ### Definitions of color and overlap By convention, the first pixel (the pixel in the top left-hand corner of a pattern) is always white. The pattern should be regarded as the same even when all of the black and white pixels are swapped for each other, Mathematically, this means that \(\sum_{i=1}^{N}\xi_{i}^{A}\xi_{i}^{B}=N\) when the patterns \(A\) and \(B\) are completely the same, and \(\sum_{i=1}^{N}\xi_{i}^{A}\xi_{i}^{B}=-N\) when patterns \(A\) and \(B\) represent the same pattern but their black and white colors are completely swapped. According to this definition of the same figure, the maximum number of the difference between two patterns is \(N/2\); in this case, the degree of overlap is zero (see also the discussion on noise in Supplementary Information). ### Models of conventional and virtual oscillator networks The conventional oscillator network for the associative memory operation [21] is based on the Kuramoto model [37]. The Kuramoto model describes the oscillator dynamics with a generalized phase, \(\theta\). Moreover, the oscillators interact instantaneously, and the phase of the \(i\)th oscillator obeys \[\hat{\theta}_{i}=\omega+\mathcal{D}\sum_{j=1}^{N}w_{ij}\sin\left(\theta_{i}- \theta_{j}\right), \tag{16}\] where \(\omega/(2\pi)\) is the oscillation frequency while \(\mathcal{D}\) is the interaction strength. For simplicity, we will assume that all oscillators share the same values of \(\omega\) and \(\mathcal{D}\). The weight \(w_{ij}\) is given by Eq. (1) or (2) depending on the step of the procedure. In the \(LC\)-circuit model [21], \(\mathcal{D}w_{ij}\) is proportional to the transconductance. The phase difference between the \(i\)th and \(j\)th oscillators obeys \[\hat{\theta}_{i}-\hat{\theta}_{j}=\mathcal{D}\left[\sum_{\ell=1}^{N}w_{\ell }\sin\left(\theta_{i}-\theta_{\ell}\right)-\sum_{\ell=1}^{N}w_{\ell\ell}\sin \left(\theta_{j}-\theta_{\ell}\right)\right]. 
\tag{17}\] In a limiting case of only two oscillators (\(N=2\)), the phase difference obeys \[\hat{\theta}_{1}-\hat{\theta}_{2}=2\mathcal{D}w_{12}\sin\left(\theta_{1}- \theta_{2}\right), \tag{18}\] and the in-phase (anti-phase) synchronization of \(\theta_{1}\) and \(\theta_{2}\) is a stable fixed point when \(\mathcal{D}w_{12}\) is negative (positive). The phase differences of \(\theta_{i}-\theta_{j}=0,\pi\) are always fixed points even when there are a large number of oscillators (\(N\geq 3\)). Accordingly, the phase differences in the conventional oscillator network saturate to the in-phase or anti-phase state, which thereby enables the associative memory operation. In the presence of frequency variations, the right-hand side of Eq. (17) has an additional term \(\omega_{i}-\omega_{j}\). In this case, the phase difference is not stabilized, and this instability leads to an inaccurate associative memory operation [21]. The Thiele equation is slightly different from the Kuramoto model in the following ways. First, the Thiele equation uses the phase \(\psi\), which describes the vortex core's position in the \(xy\) plane, instead of a generalized phase. This is because the quantity measured in experiments is the vortex core's position, and the phase synchronization studied in the experiments [40] corresponds to that of \(\psi\), not a generalized phase \(\theta\). Note that we can introduce a generalized phase analytically as \(\theta=\psi+[\zeta\kappa/(Gb)]\ln(s/s_{0})\) with a phase sensitivity function \(\mathbf{Z}=(-\sin\theta+[\zeta\kappa/(Gb)]\cos\theta,\cos\theta+[\zeta\kappa/ (Gb)]\sin\theta,0)/s_{0}\). The analysis is mostly unchanged with the generalized phase, so we decided to use \(\psi\) for simplicity. Second, the equation of motion for the phase difference, Eq. (10), includes a term \(\sin\psi_{i}-\sin\psi_{j}\) whereas the Kuramoto model often uses an interacting term proportional to \(\sin(\theta_{i}-\theta_{j})\). More generally, the interaction term in the Kuramoto model can be assumed to be a function of the phase difference, \(\theta_{i}-\theta_{j}\) after applying an averaging technique with respect to a fast variable (see Ref. [37] for details). The difference between the two models might however be insignificant; notice that, by using formulas, \(\sin x-\sin y=2\cos[(x+y)/2]\sin[(x-y)/2]\) and \(\sin x+\sin y=2\sin[(x+y)/2]\cos[(x-y)/2]\) and applying the averaging technique, the interaction term in our model can be approximated as a function of \(\theta_{i}-\theta_{j}\). Third, as mentioned above, the input term in the virtual oscillator network consists of the oscillator output from the previous step, while the interaction in the Kuramoto model is instantaneous. Because of these differences, the associative memory operation by the virtual oscillator network is significantly different from those of conventional coupled oscillator networks on which previous experiments and the theoretical analyses have been conducted. ### Parameters in the Thiele equation Spin torque oscillators (STOs) mainly consist of a ferromagnetic metal/insulating layer/ferromagnetic metal trilayer. The first ferromagnetic layer of the trilayer is called the free layer and is where the magnetic vortex forms. The second ferromagnetic layer having a uniform magnetization is called the reference layer. When electric current is injected into STOs, spin-transfer torque [41, 42, 43] is excited on the magnetic moments in the free layer and drives their dynamics [35, 36]. 
The output signal from the STOs depends on the relative angle between the magnetizations in the free and reference layers. The definitions and physical meanings of the parameters in Eq. (5) are as follows. The parameters \(G=2\pi pML/\gamma\) and \(D=-(2\pi\alpha ML/\gamma)[1-(1/2)\ln(R_{0}/R)]\) consist of the polarity \(p(=\pm 1)\) of the vortex core, the saturation magnetization \(M\), the thickness \(L\) of the ferromagnet, the gyromagnetic ratio \(\gamma\), the Gilbert damping constant \(\alpha\), and the vortex radius \(R_{0}\). The chirality \(c(\pm 1)\) of the vortex core also appears in Eq. (5). The parameters \(\kappa\) and \(\zeta\) relate to a magnetic potential energy defined as \(W=(\kappa/2)[1+(\zeta/2)s^{2}]|\mathbf{X}|^{2}\). The dimensionless parameter \(\xi\) is introduced to describe the nonlinear damping in a highly excited state [35]. The parameter \(\kappa\) relates to the material parameters as \(\kappa=(10/9)4\pi M^{2}L^{2}/R^{35}\). The parameter \(a_{J}=\pi\hbar P/(2e)\) includes the reduced Planck constant \(\hbar\), spin polarization \(P\) of the electric current, and the elementary charge \(e(>0)\). The vector \(\mathbf{p}=(p_{x},0,p_{z})\) is the unit vector pointing in the magnetization direction in the reference layer. Here, we assume that \(\mathbf{p}\) lies in the \(xz\) plane, by convention. As a result, the output signal from the vortex STO is proportional to the \(y\) component of the vortex core's position. The parameter \(\mu^{*}\) is \(\pi MLR\). The material parameters used in this study were taken from typical experiments and simulations [35, 36, 44]: \(M=1300\) emu/cm\({}^{3}\), \(\gamma=1.764\times 10^{7}\) rad/(Oe s), \(\alpha=0.01\), \(L=5\) nm, \(R=187.5\) nm, \(R_{0}=10\) nm, \(P=0.7\), \(\xi=2.0\), and \(\zeta=0.1\). The polarity and chirality were assumed to be \(p=+1\) and \(c=+1\), for simplicity. The magnetization direction in the reference layer was \(\mathbf{p}=(\sin 60^{\circ},0,\cos 60^{\circ})\). An electric current \(I\) of 1 mA corresponded to a current density \(J\) of 0.9 MA/cm\({}^{2}\). The electric current in the numerical simulations was set to 4.0 mA. We do not include field-like torque in the Thiele equation, which is expressed as \(-cbJ\)\(R\)\(p_{x}\)\(\mathbf{e}_{y}\) in Eq. (5); see, for example, Ref. [45]. This is because its magnitude was not visible in an experiment using CoFeB/MgO based STO [23]. One might consider to inject the input through the field-like torque, instead of the torque due to the external magnetic field as we have done. However, the modulation of the field-like torque requires that of electric current, which leads to the modulation of the frequency of the STO. Since the advantage of our proposal is that the frequency is unique during the operation, we do not prefer to use the field-like torque for injecting the input. ### Analytical solution of the Thiele equation The Gilbert damping constant \(\alpha\) is often small, in such cases, \(|D|/G\simeq\alpha\ll 1\). Also, the radius \(R_{0}\) of the vortex core is much shorter than the disk radius, \(R\). Therefore, by neglecting terms related to \(R_{0}\) and higher-order terms of \(\alpha\), we can approximate Eq. (5) as Eqs. (6) and (7) in terms of \(s=|\mathbf{X}|/R\) and \(\psi=\tan^{-1}(Y/X)\). The approximated Thiele equation without magnetic field is \[\dot{s}=as-bs^{3}, \tag{19}\] \[\dot{\psi}=\frac{\kappa}{G}\left(1+\zeta s^{2}\right). 
\tag{20}\] These equations are identical to the Stuart-Landau equation [37], which was introduced by Landau to describe the evolution of turbulence phenomenologically and was derived from hydrodynamics by Stuart. This equation provides one of the simplest example of Hopf bifurcation. A stable solution of \(s\) is \(s_{0}=\sqrt{a/b}\) (0) for \(a>(<)0\), or equivalently, \(J/J_{\mathrm{c}}>(<)1\). When \(J/J_{\mathrm{c}}>1\), i.e., the current density \(J\) exceeds a threshold value \(J_{\mathrm{c}}\), the vortex core oscillates around the disk center with an oscillation amplitude \(s_{0}\) and the frequency \(f=[\kappa/(2\pi G)](1+\zeta s_{0}^{2})\). Note that the oscillation frequency is proportional to the current density \(J\) through the term \(s_{0}^{2}=a/b\) (\(a\propto J\)), which has been confirmed by both experiments and simulations [35, 36]. Even in the presence of the magnetic field, the oscillation frequency remains \(f\), if the input strength is weak. The solution of \(s\) obtained from the exact Thiele equation, Eq. (5), shows a small oscillation around \(s_{0}\)[46]. This means that the trajectory of a limit-cycle oscillation is approximately circular but also has a small amplitude modulation. This small modulation is caused by the term \(ca_{J}JR_{0}p_{x}\mathbf{e}_{x}\) in Eq. (5), which breaks the axial symmetry of the dynamics around the \(z\)-axis. The deviation of \(s\) from \(s_{0}\) is, however, negligible, and the oscillation trajectory is approximately circular, as shown in Fig. 3(b). Therefore, it is reasonable to omit the term from Eqs. (6) and (7). Note that this term arises from the in-plane component \(p_{x}\) of the magnetization in the reference layer. \(p_{x}\) plays a role in experiments for the following reason. Recall that the output signal measured in experiments depends on the relative angle of the magnetizations in the free and reference layers. Since the vortex core is located in the \(xy\) plane, a finite \(p_{x}\) is necessary to detect its position. On the other hand, the \(z\) component \(p_{z}\) is also necessary because the spin-transfer torque originating from it excites the limit-cycle oscillation of the vortex core. In fact, the threshold current density \(J_{\mathrm{c}}=|D|\kappa/(Ga_{J}p_{z})\) is inversely proportional to \(p_{z}\); therefore, if \(p_{z}\) is zero, \(J_{\mathrm{c}}\) becomes infinite and the oscillation cannot be excited. In experiments [28, 40], the magnetization initially pointed in an in-plane direction, where \(p_{z}=0\). A finite \(p_{z}\) was induced by applying an external magnetic field in the \(z\) direction. According to Eqs. (6) and (7), one might consider that the magnetic field changes the value of \(s\) from \(s_{0}\) and modifies the oscillation frequency. Such a frequency shift is, however, negligibly small, which can be discussed accordingly. First, remind that the frequency of the magnetic field applied during the second step is the frequency of the vortex core without the magnetic field because it consists of the output during the first step. The fact that the phases in the second step are saturated to \(0\) or \(\pi\), as shown in Fig. 3(c), indicates that the forced phase synchronization occurs, and the frequency of the vortex core in the second step is the same with that in the first step. Second, let us roughly estimate the frequency shift by the application of the magnetic field. 
The change of \(s\) by the magnetic field will be maximized when the phase of the magnetic field \(H_{\mathrm{y}}\) in Eq. (6) is the same with \(\psi\). In this case, the magnitude of the last term in Eq. (6), averaged over a precession period \(\tau=1/f\), is about \([c\mu^{*}/(2GR)]H_{\mathrm{y}}\tau\sim(\gamma/2)\mathcal{H}\tau\). The period \(\tau\) is about \(5\) ns while \(\mathcal{H}\) is on the order of \(1\) Oe; see next section. Accordingly, the shift \(\Delta s\) of \(s\) by the application of the magnetic field is less than \(0.1\) at maximum. As mentioned, the oscillation frequency is proportional to \(1+\zeta s^{2}\). Using \(\zeta=0.1\) and \(s_{0}\simeq 0.6\), estimated from Fig. 3(b), the frequencies with and without \(\Delta s\), which are proportional to \(1+\zeta s_{0}^{2}\) and \(1+\zeta(s_{0}+\Delta s)^{2}\), respectively, differ only \(1\) % at maximum. Therefore, we consider that the frequency modulation by the application of the magnetic field is negligible. One might be of interested in the applicability of the Thiele equation. While the original Thiele equation assumes a translation symmetry in an infinite space, a finite-size effect of nanostructured may restrict the applicability of the equation. Therefore, the Thiele equation had been applied to analyses on small-amplitude dynamics [47]. There have been, at the same time, several efforts to make the equation applicable to large-amplitude dynamics. For example, adding nonlinear frequency and damping terms is one approach [35, 36], which is also used in the present work, where the additional terms are characterized by the dimensionless parameters \(\xi\) and \(\zeta\). Adding further higher-order nonlinear terms is also investigated recently [48, 49, 50]. It was also shown that the Thiele equation is applicable to analyze small-amplitude dynamics, and effort has been made to extrapolating it to a large-amplitude dynamics, such as vortex-core expulsion, although there are some limitations [51]. In the present study, we use the model developed in Refs. [35, 36] due to the following reasons. First, the applicability of the model to wide ranges of parameters has been verified by comparison with experiments [35, 36, 44]. Second, adding higher-order nonlinear terms does not change main conclusion in this work. These terms might change, for example, current dependence of the oscillation frequency. In the present work, however, the frequency is kept constant, and thus, adding such terms do not play a central role in the associative memory operation. Third, the Thiele equation with the present approximation clarifies the connection between spintronics and other research fields such as nonlinear science and computer science. This is because the equation can be reduced to the Stuart-Landau equation, as mentioned above. The Stuart-Landau equation has a long history, as in the case of the Thiele equation, and has been frequently used in nonlinear science [37, 52]. The present work indicates that the Stuart-Landau oscillator can be emulated in nanostructures and therefore, prompts communications between spintronics and other research fields. Therefore, although we understand that there have been great efforts [48, 49, 50, 53] for the validity and applicability of the Thiele equation, we use the model developed in Refs. [35, 36]. 
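The numbers quoted above can be reproduced directly from the Methods parameters. A short sketch follows; \(s_{0}=0.6\) is read off Fig. 3(b), so the resulting frequency differs slightly from the quoted 223 MHz, and the shift for \(\Delta s=0.1\) comes out of order one percent.

```python
import numpy as np

# Material parameters from Methods (CGS units).
M, L, R = 1300.0, 5e-7, 187.5e-7      # emu/cm^3, cm, cm
gamma, zeta = 1.764e7, 0.1            # rad/(Oe s), nonlinear frequency coefficient
s0 = 0.6                              # oscillation amplitude read off Fig. 3(b)

# kappa/(2*pi*G) with G = 2*pi*M*L/gamma (p = +1) and kappa = (10/9)*4*pi*M^2*L^2/R.
f0 = (10.0 / 9.0) * M * L * gamma / (np.pi * R)        # Hz
f = f0 * (1 + zeta * s0**2)
print(f / 1e6)                        # ~224 MHz, consistent with the ~223 MHz quoted above

# Worst-case frequency shift for Delta_s = 0.1.
ratio = (1 + zeta * (s0 + 0.1) ** 2) / (1 + zeta * s0**2)
print(100 * (ratio - 1))              # shift of order 1 %
```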
Note that the Oersted field generated in the current, discussed in these previous works, does not play a role in the associative memory operation because the current magnitude is kept constant during the operation. Also, since the external magnetic field induces forced synchronization, a frequency shift due to an external magnetic field studied in the previous work [48] does not exist in the present algorithm. ### Details of the numerical simulations The associative memory operation in the virtual oscillator network consists of three steps. The initial state of the vortex core in each step is prepared by adding a thermal activation to the Thiele equation and solving it in the absence of magnetic field, as is done in Ref. [45]. The torque due to the thermal activation gives an additional term, \(-\eta_{\mathrm{z}}\mathbf{e}_{x}-\eta\mathbf{e}_{y}\), to the left-hand side of Eq. (5), which obeys the fluctuation-dissipation theorem, \[\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=2k_{\mathrm{B}}T|D|\delta_{ij} \delta(t-t^{\prime}), \tag{21}\] where the temperature \(T\) is \(300\) K. The solution of the Thiele equation in each step is divided into \(N=60\) parts, where the time width of each part is denoted as \(\tilde{t}\). In the experiment [23], a certain time period was inserted between these parts to remove their correlation. In contrast, our numerical simulations used parallel computations, wherein the initial state of each part was randomly prepared using the method described above. The value of \(\tilde{t}\) was changed depending on the number of memorized patterns, as well as the number of noisy pixels in the pattern to be recognized. For example, \(\tilde{t}\) is 750 ns in Fig. 3(c). For all cases, \(\tilde{t}\) was divided into \(\tilde{n}=\tilde{t}/t_{\rm p}\) parts, where \(t_{\rm p}=0.125\) ns. Now let us explain the meanings of \(y_{\ell}^{(1)}\) and \(y_{\ell}^{(2)}\) in Eqs. (3) and (4). Since they are defined in a similar manner, we will describe only \(y_{\ell}^{(1)}\). When defining the magnetic field in Eq. (3), it is convenient to reset the time origin for each part; i.e., each of the \(N\) parts runs from \(t=0\) to \(t=\tilde{t}\). Remember that the output from the STO is proportional to the \(y\) component of the vortex core's position, \(Y\). We denote the solution of the normalized \(y\) component, \(y=Y/R\) (\(0\leq y\leq 1\)), during the \(\ell\)th part in the first step as \(y_{\ell}\). Then, \(y_{\ell}^{(1)}\) is made from \(y_{\ell}\) as follows, \[y_{\ell}^{(1)}=\sum_{n=0}^{\tilde{n}-1}y_{\ell}(nt_{\rm p})\left\{\Theta(t-nt_ {\rm p})-\Theta[t-(n+1)t_{\rm p}]\right\}, \tag{22}\] where \(\Theta(t)\) is a step function. Note that \(\Theta(t-nt_{\rm p})-\Theta[t-(n+1)t_{\rm p}]\) is 1 for \(nt_{\rm p}\leq t<(n+1)t_{\rm p}\) and is zero for the other times; thus, it has a pulse shape. Equation (22) means that the input strength is constant for \(nt_{\rm p}\leq t<(n+1)t_{\rm p}\) and is proportional to \(y_{\ell}(t)\) at \(t=nt_{\rm p}\). \(\tilde{n}\) is the number of input pulses. There are two reasons to shape the output \(y\) into a pulse. The first one relates to numerical simulations. In this work, the Thiele equation was solved within a time increment of \(\Delta t=0.005\) ns, which is shorter than the pulse width \(t_{\rm p}\). It was, however, impractical to store the output at each \(\Delta t\) step because the amount would have been huge. Second, there is a technical limitation in real experiments on a measurable time step. 
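A minimal sketch of the pulse shaping of Eq. (22), using the solver step (0.005 ns), the pulse width \(t_{\rm p}=0.125\) ns and the part duration \(\tilde{t}=750\) ns quoted in the text, with a 223 MHz test oscillation:

```python
import numpy as np

dt, t_p, t_tilde = 0.005, 0.125, 750.0       # ns: solver step, pulse width, part duration
n_per_pulse = int(round(t_p / dt))           # 25 solver samples per pulse

def pulse_shape(y_part):
    """Eq. (22): within each pulse of width t_p the input is held constant,
    equal to the stored output sampled at the start of that pulse."""
    n_pulses = y_part.size // n_per_pulse
    return np.repeat(y_part[: n_pulses * n_per_pulse : n_per_pulse], n_per_pulse)

# Toy usage: an output oscillating at f = 223 MHz (period ~ 4.48 ns >> t_p).
t = np.arange(int(t_tilde / dt)) * dt
y = np.sin(2 * np.pi * 0.223 * t)
y_in = pulse_shape(y)                        # piecewise-constant drive for the next step
print(y_in.size, np.max(np.abs(y - y_in)))   # staircase error well below the amplitude
```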
The value we used, \(t_{\rm p}=0.125\) ns, is close to shortest possible time step in an experiment [23]. Because of these reasons, we define \(y_{\ell}^{(1)}\) used in the magnetic field, Eq. (3), as a pulse input. At the same time, we emphasize that \(t_{\rm p}\) is much shorter than an oscillation period of the vortex core, \(1/f=4.48\) ns (\(f=223\) MHz). In addition, the pulse-shaped \(y_{\ell}^{(1)}\)s are continuously injected. Therefore, the magnetic field can be approximately regarded as a continuously oscillating signal with respect to the STO. The strength of the input \(\mathcal{H}\) in the second step is 1.0 Oe, while that in the third step is \(\mathcal{H}^{\prime}=N_{\rm m}\times 0.2\) Oe. Here, we increase \(\mathcal{H}^{\prime}\) as the number \(N_{\rm m}\) of memorized patterns increases. This is because the time necessary to reach a steady state becomes long as \(N_{\rm m}\) increases; therefore, to perform the numerical simulations efficiently, the input strength should be made to increase with \(N_{\rm m}\).
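For completeness, a minimal sketch of the reduced dynamics, Eqs. (6)-(7) [equivalently Eqs. (19)-(20) in the free-running case], integrated by forward Euler. The constants here are illustrative dimensionless assumptions, not the material values listed above; the amplitude relaxes to the limit-cycle radius \(s_{0}=\sqrt{a/b}\) as expected.

```python
import numpy as np

# Illustrative dimensionless constants (not the material values listed above).
a, b = 1.0, 2.0                             # growth and saturation, so s0 = sqrt(a/b)
kappa_G, zeta, eps = 2 * np.pi, 0.1, 0.05   # kappa/G, nonlinearity, field prefactor

def integrate(H_y, s=0.05, psi=0.0, dt=1e-3, T=50.0):
    """Forward-Euler integration of Eqs. (6)-(7) for a given drive field H_y(t)."""
    for k in range(int(T / dt)):
        Hy = H_y(k * dt)
        ds = a * s - b * s**3 - eps * Hy * np.cos(psi)
        dpsi = kappa_G * (1 + zeta * s**2) + eps * Hy * np.sin(psi) / s
        s, psi = s + dt * ds, psi + dt * dpsi
    return s, psi

s_final, _ = integrate(lambda t: 0.0)        # free running, Eqs. (19)-(20)
print(s_final, np.sqrt(a / b))               # relaxes to the limit-cycle radius s0
```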
2309.12286
Geometry of sequential quantum correlations and robust randomness certification
Quantum correlations between the measurements of two or more separated observers play a fundamental role in many applications, such as randomness generation or key distribution. Recently, it was realized that sequential measurements (i.e., defined with a precise temporal ordering between subsequent measurements on a given system) can enhance the performance of these protocols. However, the theoretical understanding of how to maximize this performance is limited and the relation with the boundary of quantum correlations is unexplored. In the case of one party on one side and two sequential parties on the other, we study the geometry of quantum correlations and its implications for robust device-independent randomness generation. We identify a boundary for the set of these correlations expressed as a trade-off between the amount of nonlocality between different observers and show that this allows to generate the maximum possible device-independent randomness in our setting, namely two bits. We propose a practical protocol based on non-projective measurements that can produce the boundary correlations under ideal conditions, and address its robustness to noise, showing that it is improved compared to previous approaches. Finally, we implement our protocol in a proof-of-concept experiment based on a photonic implementation. With the obtained correlations we could certify more bits per state with respect to the standard CHSH protocol, proving that our protocol is feasible and robust to real-world imperfections. Our work paves the way for a full understanding of sequential quantum correlations and their exploitation for practical and efficient device-independent protocols.
Matteo Padovan, Giulio Foletto, Lorenzo Coccia, Marco Avesani, Paolo Villoresi, Giuseppe Vallone
2023-09-21T17:50:29Z
http://arxiv.org/abs/2309.12286v1
# Geometry of sequential quantum correlations and robust randomness certification ###### Abstract Quantum correlations between the measurements of two or more separated observers play a fundamental role in many applications, such as randomness generation or key distribution. Recently, it was realized that sequential measurements (i.e., defined with a precise temporal ordering between subsequent measurements on a given system) can enhance the performance of these protocols. However, the theoretical understanding of how to maximize this performance is limited and the relation with the boundary of quantum correlations is unexplored. In the case of one party on one side and two sequential parties on the other, we study the geometry of quantum correlations and its implications for robust device-independent randomness generation. We identify a boundary for the set of these correlations expressed as a trade-off between the amount of nonlocality between different observers and show that this allows to generate the maximum possible device-independent randomness in our setting, namely two bits. We propose a practical protocol based on non-projective measurements that can produce the boundary correlations under ideal conditions, and address its robustness to noise, showing that it is improved compared to previous approaches. Finally, we implement our protocol in a proof-of-concept experiment based on a photonic implementation. With the obtained correlations we could certify more bits per state with respect to the standard CHSH protocol, proving that our protocol is feasible and robust to real-world imperfections. Our work paves the way for a full understanding of sequential quantum correlations and their exploitation for practical and efficient device-independent protocols. ## I Introduction The correlations between the outcomes of spatially separated experiments play a twofold role in the vast landscape of quantum information. On the one hand, different physical laws or assumptions can modify the geometry of the set of possible correlations, providing a clear boundary between what can and cannot be observed under certain laws. For instance, the violation of a Bell inequality by space-like separated measurements excludes local-realistic hidden-variable theories and shows a feature of quantum physics known as nonlocality [1]. Bell inequalities define a polytope in the space of correlations that confines those that are classically obtainable. Quantum theory also draws a boundary, albeit with a larger and more complicated shape [2]. On the other hand, correlations have crucial applications in information security, being at the heart of techniques for key distribution or random-number generation that require very few assumptions on their implementations. Indeed, nonlocality is the main ingredient of device-independent protocols, which are the focus of this work [3; 4; 5]. In these schemes, a physical system prepared in an entangled state is shared and measured by different users who choose their measurements randomly. The outcomes serve the dual purpose of manifesting nonlocality and providing a useful classical resource, such as a key or random bit. In principle, the unconditional security of this resource is guaranteed by nonlocality even if the devices implementing the protocols are entirely untrusted or controlled by adversaries. A major drawback of these schemes is the low rate of resource extraction. 
This is due to the challenges of creating and preserving entanglement, which is degraded by the coupling of the system with the environment. Instead of relying on faster entanglement generation, which may be feasible in the future, we study how to optimize the extraction of useful resources from a given physical system. A way to do so that has been proposed in the scientific literature uses weak measurements to realize sequential protocols [6; 7; 8; 9; 10; 11; 12; 13; 14]. Often, they are direct extensions of schemes that use projective measurements, and they improve the performance in terms of resources extracted by repeating the measurements more times on the same quantum system. With the strategy proposed in [9] it is even possible, in principle, to produce an unlimited amount of device-independent randomness for each generated bipartite entangled state. However, the robustness to noise of this protocol is limited and therefore it requires great realization accuracy [14]. At the same time, sequential protocols are interesting from the point of view of the correlations they can generate. While the geometry of quantum correlations has been the subject of several studies [2; 15; 16], its extension to the sequential setting is little known. Most previ ous analyses focus only on the correlations between each sequential user and the remote one, finding a monogamy trade-off: stronger correlations for one user imply weaker ones for the others [12, 13, 6, 9]. However, a detailed investigation of the trade-off, its geometry, and its implications could help to formulate better quantum protocols that could overcome this compromise [17, 18]. In this paper, we address both the aforementioned points. We start from the common two-user (Alice and Bob\({}_{1}\)), two-measurement, two-outcome scenario, in which the Clauser-Horne-Shimony-Holt (CHSH) inequality holds [19]. We extend it with a sequential user on one side (Bob\({}_{2}\)) and study the geometry of the obtainable correlations. We provide some analytical relations for their quantum boundary that serve also as monogamy trade-offs, providing further insights between the sharing of nonlocality between sequential users. Furthermore, we show that the correlations on the boundary can be used to certify the maximum amount of local randomness obtainable with two dichotomic sequential measurements, that is, two bits. This is possible regardless of how nonlocality (quantified as violation of a given Bell inequality) is divided between the two sequential pairs (Alice-Bob\({}_{1}\) and Alice-Bob\({}_{2}\)), meaning that the trade-off for nonlocality is not a trade-off for randomness. Counterintuitively, two bits are attained even if the correlations generated by one of the pairs are entirely local. This is in contrast with previous results in which randomness was generated from nonlocal pairwise correlations [9]. We also propose an explicit protocol that can generate the boundary correlations using states and measurements similar to those that maximally violate the CHSH inequality. Compared to the protocol of Ref. [9], which can also achieve two bits of randomness with two dichotomic measurements, ours is simpler as it requires fewer different settings. Furthermore, we show numerically that it is more robust to noise, because it is insensitive to the nonlocality trade-off. 
Our protocol is also simpler than the two proposals of [20], which can certify two bits of randomness too, as it requires fewer measurements, allowing an easier experimental implementation. Finally, to demonstrate the feasibility and noise resilience of our protocol, we performed a proof-of-concept experimental test based on polarization-entangled photon pairs that generates the correlations required by the protocol. From these correlations, we could certify more random bits than those obtainable with standard non-sequential CHSH protocols in the same noise conditions. This proves that the protocol can enhance the performance of standard techniques for device-independent randomness generation with realistic setups. ## II Theory In this section, we discuss our main theoretical results on the sequential quantum scenario. In Sec. II.1 we introduce a convenient formalism to describe it. In Sec. II.2 we show some inequalities on the values of the correlations. In Sec. II.3 we propose a protocol to saturate these inequalities: This will lead us to identify part of the boundary of the sequential quantum correlation. Finally, in Sec. II.4, we show how the saturation of the inequalities can be used to certify randomness, supporting the discussion with numerical simulations for some non-ideal cases. ### The sequential scenario We work in the scenario of sequential correlations defined in Ref. [17], and specifically one that includes three users: Alice, Bob\({}_{1}\), Bob\({}_{2}\). A schematic is depicted in Fig. 1. A common source prepares an unknown physical system that is shared and then measured by the untrusted devices operated by the three users. Each user randomly chooses a measurement identified by a binary input \(x,y_{1},y_{2}\in\{0,1\}\) and obtains as result a binary output \(a,b_{1},b_{2}\in\{\pm 1\}\). We assume all inputs to be independent of one another, and forbid any communication during data collection between Alice and the Bobs but we allow unidirectional communication from Bob\({}_{1}\) to Bob\({}_{2}\) between the production of their respective outputs: This characterizes the sequential correlation scenario, that we formally define below. The main goal of this work is the study of the properties of the correlations between inputs and outputs \(p(a,\mathbf{b}|x,\mathbf{y})=p(a,b_{1},b_{2}|x,y_{1},y_{2})\) that can be generated in this scenario and of how they can be used to produce device-independent random numbers from the Bobs' outputs. We assume that after sufficiently many independent and identically distributed runs, the correlations are known perfectly, neglecting the effects of finite statistics. Figure 1: Schematic of the sequential scenario. Above, the framework with Kraus operators. Bottom, the projective framework with the operators introduced in Sec. II.1. Moreover, we make no requirements on the probabilities of the inputs, as long as they allow the entire reconstruction of the correlations \(p(a,\mathbf{b}|x,\mathbf{y})\). The absence of communication means that Alice's marginal probabilities are independent of the Bobs' inputs and viceversa. 
Formally, the correlations must satisfy the no-signaling conditions [2]: \[\sum_{a}p(a,\mathbf{b}|x,\mathbf{y})=\sum_{a}p(a,\mathbf{b}|x^{\prime},\mathbf{y})\quad\forall\mathbf{b},x,x^{\prime},\mathbf{y} \tag{1}\] \[\sum_{\mathbf{b}}p(a,\mathbf{b}|x,\mathbf{y})=\sum_{\mathbf{b}}p(a,\mathbf{b}|x,\mathbf{y}^{\prime})\quad\forall a,x,\mathbf{y},\mathbf{y}^{\prime}\] Furthermore, sequentiality implies that Bob\({}_{2}\)'s input cannot influence Bob\({}_{1}\) [17]: \[\sum_{b_{2}}p(a,b_{1},b_{2}|x,y_{1},y_{2})=\sum_{b_{2}}p(a,b_{1},b_{2}|x,y_{1},y_{2}^{\prime})\quad\forall a,b_{1},x,y_{1},y_{2},y_{2}^{\prime} \tag{2}\] As is common in the context of device-independent protocols, we focus on the set of sequential quantum correlations \(Q_{SEQ}\), i.e. those sequential correlations that can be written using the Born rule as \[p(a,\mathbf{b}|x,\mathbf{y})=\sum_{\mu,\mu_{1},\mu_{2}}\operatorname{tr}\left[\rho\left(K^{\mu\,\dagger}_{a|x}K^{\mu}_{a|x}\otimes K^{\mu_{1}\,\dagger}_{b_{1}|y_{1}}K^{\mu_{2}\,\dagger}_{b_{2}|y_{2}}K^{\mu_{2}}_{b_{2}|y_{2}}K^{\mu_{1}}_{b_{1}|y_{1}}\right)\right], \tag{3}\] where \(\rho\) is the state prepared by the source and the \(K\)'s are Kraus operators describing the users' (possibly non-projective) measurements, with Bob\({}_{1}\) acting before Bob\({}_{2}\). The CHSH expressions \(S_{1}\) (between Alice and Bob\({}_{1}\)) and \(S_{2}\) (between Alice and Bob\({}_{2}\)) can be measured in our scenario from the correlations \(p(a,\mathbf{b}|x,\mathbf{y})\) by selecting the values of the inputs that correspond to the relevant observables. Hence, the usual results about CHSH operators also apply, so that in a quantum setting \(\left\langle S_{i}\right\rangle\leq 2\sqrt{2}\). Moreover, from conceptually similar results in the literature [13, 6, 9], one can expect a trade-off between \(\left\langle S_{1}\right\rangle\) and \(\left\langle S_{2}\right\rangle\), therefore it is meaningful to consider an expression that combines the two: \[S_{\theta}\equiv\cos 2\theta\,(S_{1}-\sqrt{2}\,\openone)+\sin 2\theta\,(S_{2}-\sqrt{2}\,\openone)\,. \tag{8}\] Furthermore, we introduce the operator \[S_{c}\equiv(A_{0}+A_{1})B_{0,0}+(A_{0}-A_{1})B_{1} \tag{9}\] whose expected value is a function of part of the statistics of Alice-Bob\({}_{1}\) and part of the statistics of Alice-Bob\({}_{2}\). This is a well-defined CHSH-like operator, as the relevant observables on the Bobs' side are measured with different inputs: \(y_{1},y_{2}=1,0\text{ or }1,1\) for \(B_{1}\) and \(y_{1},y_{2}=0,0\) for \(B_{0,0}\). Therefore, in a quantum experiment \(\left\langle S_{c}\right\rangle\leq 2\sqrt{2}\). We can now express our main result (proven in Appendix B) on the geometry of the sequential correlations, which is a bound on \(\left\langle S_{1}\right\rangle\) and \(\left\langle S_{2}\right\rangle\) in the specific case in which \(\left\langle S_{c}\right\rangle\) takes its maximum value \(2\sqrt{2}\). 
**Result 1**.: _For any sequential quantum correlation in our scenario, it holds that_

\[\langle S_{c}\rangle=2\sqrt{2}\quad\Rightarrow\quad\langle S_{\theta}\rangle\leq\sqrt{2}\,,\quad\forall\theta\,, \tag{10}\]

_and there exist correlations that saturate the inequality._

This upper bound on \(\langle S_{\theta}\rangle\) can be interpreted as a monogamy relation between the correlations of Alice-Bob\({}_{1}\) and Alice-Bob\({}_{2}\). This is different from the trade-offs already present in the literature because \(S_{2}\) considers Bob\({}_{1}\)'s input, since \(B_{0,0}\) and \(B_{0,1}\) are measured only if \(y_{1}=0\). Instead, in Ref. [6], the quantity similar to \(S_{2}\) is calculated ignoring the actions of Bob\({}_{1}\), while the protocols of Refs. [9, 13] calculate separate CHSH quantities for each of Bob\({}_{1}\)'s outputs, adapting Alice's measurements to obtain the highest values.

### Sequential-CHSH protocol

In the following we will provide, for any given value of \(\theta\), a state and operators that generate correlations for which \(\langle S_{c}\rangle=2\sqrt{2}\) and \(\langle S_{\theta}\rangle=\sqrt{2}\), proving that the inequality (10) is tight and identifies a boundary of \(Q_{SEQ}\) in our scenario. In the scheme, Alice and Bob\({}_{1}\) share the maximally entangled Bell state \(|\phi^{+}\rangle_{AB}=(|00\rangle+|11\rangle)/\sqrt{2}\), where \(|0\rangle\) and \(|1\rangle\) are the eigenstates of the \(\sigma_{z}\) Pauli matrix. Alice randomly chooses between two inputs \(x\in\{0,1\}\), corresponding to the two observables

\[A_{0}=\frac{\sigma_{z}+\sigma_{x}}{\sqrt{2}}\qquad\quad A_{1}=\frac{\sigma_{z}-\sigma_{x}}{\sqrt{2}}\,. \tag{11}\]

Bob\({}_{1}\) randomly chooses between two inputs \(y_{1}\in\{0,1\}\), the latter corresponding to a projective measurement of \(\sigma_{x}\) and the former to the non-projective measurement realized by the two Kraus operators depending on the parameter \(\theta\):

\[\begin{split} K_{+}(\theta)&=\cos\theta\,|0\rangle\!\langle 0|+\sin\theta\,|1\rangle\!\langle 1|\,,\\ K_{-}(\theta)&=\cos\theta\,|1\rangle\!\langle 1|+\sin\theta\,|0\rangle\!\langle 0|\,.\end{split} \tag{12}\]

In this expression the value of \(\theta\) controls the strength of the measurement, in the sense that \(\theta=n\frac{\pi}{2}\) leads to a projective measurement of \(\pm\sigma_{z}\), while \(\theta=\frac{\pi}{4}+n\pi\) corresponds to a non-interactive measurement. At \(\theta=\frac{\pi}{4}+n\frac{\pi}{2}\) the two Kraus operators are equal, up to a sign. After these operations, if \(y_{1}=1\), the protocol ends. Otherwise, for \(y_{1}=0\), Bob\({}_{1}\) sends the post-measurement state to Bob\({}_{2}\), who randomly chooses between the projective measurements of \(\sigma_{z}\) or \(\sigma_{x}\), each corresponding to one of the two inputs \(y_{2}\in\{0,1\}\). As discussed in Appendix D, in terms of projective operators, this protocol can be formulated by leaving unchanged \(A_{0}\) and \(A_{1}\), while introducing the operators

\[\begin{split} B_{0}&=\sigma_{z}\otimes\sigma_{z}\\ B_{1}&=\sigma_{x}\otimes\openone_{B^{\prime\prime}}\\ B_{0,0}&=\sigma_{z}\otimes\openone_{B^{\prime\prime}}\\ B_{0,1}&=\sigma_{x}\otimes\sigma_{x}\end{split} \tag{13}\]

on the Bobs' side.
These act on a Hilbert space \(\mathcal{H}_{B^{\prime}}\otimes\mathcal{H}_{B^{\prime\prime}}=\mathbb{C}^{2}\otimes\mathbb{C}^{2}\). The shared state is now

\[|\psi\rangle=|\phi^{+}\rangle_{AB^{\prime}}\left[\cos\theta\,|0\rangle_{B^{\prime\prime}}+\sin\theta\,|1\rangle_{B^{\prime\prime}}\right]. \tag{14}\]

Figure 2: Portion of the sequential quantum set with the constraint \(\langle S_{c}\rangle=2\sqrt{2}\). The dashed lines denote the maximum values achievable by \(\langle S_{1}\rangle\) and \(\langle S_{2}\rangle\) in the local and non-sequential quantum scenarios, without restrictions on \(\langle S_{c}\rangle\).

One can verify, using Eqs. (3) and (5), that the sequential and projective formulations give the same correlations, and that the operators \(B_{y_{1}}\) and \(B_{y_{1},y_{2}}\) respect all the constraints in Eq. (4). Moreover, the relations \(\langle S_{c}\rangle=2\sqrt{2}\) and \(\langle S_{\theta}\rangle=\sqrt{2}\) hold with the above defined state and operators, proving that the inequality on \(S_{\theta}\) is tight and defines a boundary, as claimed. A geometrical depiction of this boundary is shown in Fig. 2 and can be deduced from Eq. (8): For each \(\theta\), when \(\langle S_{\theta}\rangle=\sqrt{2}\), this equation describes the tangent to a circumference in the \((\langle S_{1}\rangle,\langle S_{2}\rangle)\) plane, centered at \((\sqrt{2},\sqrt{2})\) and of radius \(\sqrt{2}\). The points on the circumference are spanned by the protocol just discussed, while the interior of the circle is filled with sequential quantum correlations satisfying \(\langle S_{c}\rangle=2\sqrt{2}\) and \(\langle S_{\theta}\rangle<\sqrt{2}\).

### Randomness from correlations

We can now move to our second main result, which is a statement on the randomness that can be obtained from correlations on the aforementioned boundary of \(Q_{SEQ}\). In this work, we consider only local randomness, originating solely from the side of the Bobs. Given a sequential probability distribution that is observed experimentally \(P_{exp}(a,\mathbf{b}|x,\mathbf{y})\), the quantity of device-independent random numbers that can be extracted from the outcomes corresponding to a specific input sequence \(\mathbf{y}_{\mathbf{r}}\) can be measured by the (quantum conditional) min-entropy \(H_{\min}=-\log_{2}G\) [24], where \(G\) is the maximum guessing probability that an adversary Eve has on the Bobs' outcomes when the input sequence is \(\mathbf{y}_{\mathbf{r}}\):

\[G=\max_{p_{ABE}}\sum_{\mathbf{b}}p_{BE}(\mathbf{b},\mathbf{b}|\mathbf{y}_{\mathbf{r}}) \tag{15}\]
\[\text{s.t.}\quad\sum_{\mathbf{e}}p_{ABE}(a,\mathbf{b},\mathbf{e}|x,\mathbf{y})=P_{\exp}(a,\mathbf{b}|x,\mathbf{y})\,,\qquad p_{ABE}(a,\mathbf{b},\mathbf{e}|x,\mathbf{y})\in\mathcal{Q}_{SEQ}\,. \tag{16}\]

The first condition of Eq. (16) compels Eve to use a strategy \(p_{ABE}\) that is compatible with the experimental correlations \(P_{exp}(a,\mathbf{b}|x,\mathbf{y})\). The second means that the strategy is also quantum in the sense explained in Sec. II.1, and that the sequentiality requirement applies only to the Bobs.
With this definition, we can express the second main result of our work:

**Result 2**.: _For any sequential quantum correlation in our scenario such that \(\langle S_{c}\rangle=2\sqrt{2}\) and \(\langle S_{\theta}\rangle=\sqrt{2}\) for a given \(\theta\neq n\frac{\pi}{4}\), the min-entropy is_

\[H_{\min}=2\text{ bits} \tag{17}\]

_when evaluated with the input sequence \(\mathbf{y}_{\mathbf{r}}=(0,1)\). If \(\langle S_{\theta}\rangle=\sqrt{2}\) for some \(\theta=n\frac{\pi}{4}\), it reduces to \(H_{\min}=1\) bit._

The proof, provided in Appendix C, is based on the self-testing properties of the CHSH inequality [25], which are valid because \(\langle S_{c}\rangle=2\sqrt{2}\), and on the additional necessary conditions that the quantum state and measurements must satisfy in order to saturate also Eq. (10). Two dichotomic measurements can provide at most two random bits. The fact that they achieve this bound certifies the complete unpredictability of their outcomes. This descends from the features of the entire correlation \(P_{exp}(a,\mathbf{b}|x,\mathbf{y})\) and not just from the pairwise ones. Indeed, \(\langle S_{1}\rangle\) and \(\langle S_{2}\rangle\) cannot be maximized simultaneously, and the situations in which one is maximized are exactly those for which the randomness drops to one bit. By compromising on their respective nonlocality, \(\text{Bob}_{1}\) and \(\text{Bob}_{2}\) achieve the best results in terms of randomness. There are even regions on the boundary in which either the correlations between Alice and \(\text{Bob}_{1}\) or those between Alice and \(\text{Bob}_{2}\) are entirely local, as can be checked by verifying that all CHSH inequalities involving their paired results are respected. Yet, thanks to the three-party correlations, the min-entropy is still maximal at two bits. However, due to unavoidable experimental imperfections, a real implementation cannot generate ideal correlations that sit exactly at the boundary, therefore it is important to study the amount of device-independent randomness in the interior of \(Q_{SEQ}\). We address this problem numerically using the Navascués-Pironio-Acín (NPA) hierarchy [26; 27], and its sequential generalization [18]. This tool replaces the usually difficult-to-verify second condition in (16) with an ordered series of increasingly stringent necessary conditions on linear combinations of the probabilities \(p_{ABE}(a,\mathbf{b},\mathbf{e}|x,\mathbf{y})\). The constraint \(p_{ABE}(a,\mathbf{b},\mathbf{e}|x,\mathbf{y})\in\mathcal{Q}_{SEQ}\) is retrieved when all conditions are satisfied, but stopping at a finite order \(k\) of the series allows casting the problem as a practical semi-definite program (SDP) [28] and restricts \(p_{ABE}\) to belong to a set \(\mathcal{Q}_{SEQ}^{k}\supseteq\mathcal{Q}_{SEQ}\) [18]. This means that the optimization is performed over a larger set of correlations than what is allowed by quantum mechanics and gives Eve more power than she actually has. The solution of the program is then an upper bound of the actual guessing probability: Finding a value \(G\) through the SDP certifies in a device-independent way that the min-entropy of the two outcomes is at least \(-\log_{2}G\) bits. Numerical issues could in principle overestimate the min-entropy, but this can be prevented by giving tolerances to the constraints of Eq. (16).
These tolerances always benefit Eve and, if chosen much larger than the machine precision, overwhelm its potentially dangerous effect [29]. Rather than computing the min-entropy for all possible values of \(\langle S_{c}\rangle\) and \(\langle S_{\theta}\rangle\), we do it in the context of the protocol explained in Sec. II.3, so as to study also its noise robustness. We numerically generate the experimental correlations using the maximally entangled state \(|\phi^{+}\rangle\) mixed with random noise, namely \((1-p)\,|\phi^{+}\rangle\!\langle\phi^{+}|+p\,\openone/4\), and the measurements required by the protocol. We then set these correlations as constraints in the optimization problem (15). We perform this computation for different values of the strength parameter \(\theta\), since, for noisy states, different values of \(\theta\) could influence the performance of the protocol by imposing different limitations on Eve's strategies. Because of the symmetry of the protocol, it is sufficient to restrict the analysis to \(\theta\in[0,\frac{\pi}{4}]\). For such numerical computations we adopt Ncpol2sdpa [30] and the solver SDPA-DD [31], setting a minimal solver precision of \(10^{-12}\) for all the theoretical simulations. The NPA order is \(1+\text{AB}\) [18]. In Fig. 3(a) we plot the simulation result, which confirms that, in the ideal case (\(p=0\)), the min-entropy of the measurements of the protocol is two bits for each value of \(\theta\in(0,\frac{\pi}{4})\). When the strength parameter \(\theta\) is at one of the two extremes, the min-entropy drops to one bit, in agreement with our theoretical result. With the help of the sequential protocol, it is straightforward to understand the drop by observing the state after the measurement of \(\text{Bob}_{1}\). For \(\theta=0\), \(\text{Bob}_{1}\) measures projectively, hence the state sent to \(\text{Bob}_{2}\) is separable and Eve can easily guess the second bit. For \(\theta=\frac{\pi}{4}\), the measurements of \(\text{Bob}_{1}\) produce no useful correlations and their outcomes are also easily predictable by Eve. Yet, because the measurement is non-interactive, \(\text{Bob}_{2}\) still receives a portion of a maximally entangled pair and generates with Alice the perfect correlations that allow him to certify that his outcomes are unpredictable. In both cases, one outcome (and hence one bit) is securely random, and the other is known to Eve. Figure 3(a) also shows the impact that the noise quantified by \(p\) has on the performance. Intermediate values of \(\theta\) are optimal, as they are farthest from the extremal points that reduce the randomness even in the ideal case. The approximate flatness of the curve also means that inaccuracies in the setting of \(\theta\) reduce performance only slightly, simplifying the requirements for the experimental implementation. This descends from the fact that the performance of the noiseless protocol is independent of \(\theta\) (except for the extremal points). This is in contrast with all other protocols present in the literature, whose optimal performance is obtained for specific values of \(\theta\) which are close to pathological points [9; 14; 18]. In Fig. 3(b) we show the best min-entropy achievable with the sequential protocol as a function of the parameter \(p\). It indicates that it is possible to generate more than one random bit per state even if \(p\approx 1.8\cdot 10^{-2}\).
This value is fairly typical for sources of polarization-entangled photon pairs based on spontaneous parametric down-conversion, and can be reduced with state-of-the-art equipment [33; 34; 35; 36; 37]. For comparison, we also plot the min-entropy achievable with a non-sequential protocol that works in the CHSH scenario and uses the NPA hierarchy [32]. We find that the threshold value of \(p\) at which the two curves begin to split is approximately \(8.5\cdot 10^{-2}\), meaning that for any smaller value the sequential protocol performs better than its non-sequential counterpart. The equivalent threshold for the protocol of Ref. [9] is a much smaller \(3.7\cdot 10^{-3}\) [14]. We point out that this value is in general affected by the finite orders of the NPA hierarchy set in the maximization (15) of the two protocols, which are \(1+\text{AB}\) and \(4\) respectively.

Figure 3: Results of the numerical simulations.

## III Experiment

We evaluated the protocol presented above with a proof-of-concept experiment, with the goal of verifying the feasibility of meeting the required quality for the entangled state and measurements. For this purpose, we did not create an actual random number generator, but only a setup that reproduces all the quantum operations needed by the protocol, to observe the correlations. Furthermore, we did not include the random inputs but only scanned all the measurements one by one. We closed neither the detection nor the locality loophole, as should be done for a true implementation of the scheme. Yet, these measurements are critical to show the feasibility and experimental robustness of the proposed protocol. The experimental setup is the same as in our previous works and uses polarization-entangled photon pairs and Mach-Zehnder interferometers to implement the Kraus operators (12) [13; 14] (see also Appendix E for a detailed description). Most of the imperfections in this setup can be modeled by a bipartite state of the form

\[\rho_{AB}=(1-p-c)\,|\phi^{+}\rangle\!\langle\phi^{+}|+p\,\frac{\openone}{4}+c\,\frac{|00\rangle\!\langle 00|+|11\rangle\!\langle 11|}{2}\,, \tag{18}\]

where \(p\in[0,1]\), as above, accounts for the depolarization caused by mixing with random noise, whereas \(c\in[0,1]\) induces decoherence by reducing the extreme antidiagonal terms of the density matrix with respect to the diagonal ones. In optical experiments, this is caused by alignment inaccuracies that increase the distinguishability between the two photons in each pair. The two parameters \(p\) and \(c\) can be easily estimated experimentally by measuring the visibilities in the \(\mathcal{Z}\) and \(\mathcal{X}\) bases; indeed, \(p=1-V_{\mathcal{Z}}\) and \(c=V_{\mathcal{Z}}-V_{\mathcal{X}}\) [14]. We performed three experiments, labeled by an ID \(\in\{1,2,3\}\). Each of them attempts to reproduce the correlations required by the sequential-CHSH protocol described in Sec. II.3 and by the standard CHSH protocol. For each experiment, we measured the correlations between Alice and the Bobs and we used them as constraints in an NPA hierarchy, but instead of setting the whole statistic \(P_{\text{exp}}(a,\mathbf{b}|x,\mathbf{y})\), we constrained only the single-observable mean values \(\langle A_{x}\rangle,\ \langle B_{y_{1}}\rangle\) and \(\langle B_{y_{1},y_{2}}\rangle\), and the two-observable mean values \(\langle A_{x}B_{y_{1}}\rangle\) and \(\langle A_{x}B_{y_{1},y_{2}}\rangle\), which are all obtainable from the experiment.
Doing so allowed us to get around the fact that our simplified experiment can produce results that do not strictly meet the requirements of the protocol. Indeed, our measurements take time, during which the state produced by the source changes slightly. Since we are scanning the measurements one by one, we are effectively using different states for each measurement, in contrast with Eq. (3). Constraining all correlations would have prevented the SDP from finding a proper solution, whereas our relaxed constraints allowed us to find one with a small solver tolerance of \(10^{-12}\) [31]. In general, this approach does not introduce security issues, since having a smaller number of constraints only gives more power to Eve and yields a min-entropy that is lower than what could be achieved by considering all the correlations. We also compared the results with those predicted by our model using the same constraints, with the values of \(p\), \(c\), and \(\theta\) that best fit the experimental data. We calculated the statistical errors as standard deviations of a sample of 300 simulated experiments. In each of these, the photon counts descend from a Poisson distribution whose mean value is the experimental datum. Tables 1 and 2 summarize the results of all three experiments, reporting the min-entropies and the mean values of the CHSH quantities \(\langle S_{1}\rangle\), \(\langle S_{2}\rangle\), \(\langle S_{c}\rangle\), and \(\langle S\rangle\) (which is measured in the non-sequential scenario). They show that our protocol is not only feasible but can also overcome the rate of the standard CHSH scheme in real-world implementations. Indeed, we found min-entropies between \(0.82\) and \(0.90\) bits, or between \(23\%\) and \(39\%\) higher than those obtained in the non-sequential scenario with the same states, even with visibilities \(V_{\mathcal{Z}}\approx 98\%\) and \(V_{\mathcal{X}}\approx 97\%\), which are readily accessible to entangled-photon sources built with commercial components. In addition, the comparison between our results and the predictions of the model shows that the latter can be used to evaluate the performance of this type of scheme. The discrepancies can be attributed to other static imperfections in the setup which are not considered by the model and to the aforementioned changes of the state from one measurement to the next.

## IV Conclusions

In this work, we introduced and investigated a boundary of the set of sequential quantum correlations in the case of one party on one side (Alice) and two sequential parties on the other (Bob\({}_{1}\) and Bob\({}_{2}\)). This boundary can be interpreted as a new monogamy trade-off between the amounts of nonlocality shared by the pairs Alice-Bob\({}_{1}\) and Alice-Bob\({}_{2}\). Despite this trade-off, we proved analytically that the correlations on the boundary certify two random bits in a device-independent scenario (neglecting pathological cases). This means that by using all the correlations rather than only the pairwise ones, the three users can unlock the full randomness of their measurements, and the trade-off for nonlocality does not translate into one for randomness.
We also proposed an explicit quantum protocol to generate the correlations on the boundary in the ideal case and we numerically studied its noise robustness, finding that it can beat the non-sequential CHSH protocol for depolarization \(p\lesssim 8.5\cdot 10^{-2}\) and produce more than one random bit for \(p\lesssim 1.8\cdot 10^{-2}\), values that are currently achieved in typical experiments. Finally, we implemented a proof-of-concept experiment, demonstrating not only the feasibility of our protocol, but also that it can perform better than the non-sequential CHSH-based scheme with real world systems. Indeed, we overcame the min-entropy of the latter by \(23\%\) to \(39\%\), and produced \(0.90\pm 0.01\) bits in our best run. To the best of our knowledge, this is the first experimental observation of the advantage of a sequential protocol with respect to its one-step counterpart in terms of randomness generation. On the base of this work we may envisage further steps, as follows. When correlations lie on a quantum boundary it may happen that they identify, or self-test, a unique (up to local isometries) quantum representation that realizes them [25; 38]. It would be interesting to understand if this can happen also in the sequential case and whether the correlations of our protocol can self-test the state and measurements that produce them. Moreover, other portions of the boundary in this scenario might prove useful. A possible avenue is to relax the condition \(\,\langle S_{c}\rangle=2\sqrt{2}\) and study the bounds for \(\,\langle S_{\theta}\rangle\). Our formalization of quantum sequential correlations in terms of commuting projective measurements might be of help, but if the features of boundaries cannot be probed analytically, the sequential extension of the NPA hierarchy can be used [18]. It could also be meaningful to consider other parameterizations of the boundary. For example, the upper bound of Eq. (10) can equivalently be written in terms of \[S^{\prime}_{\alpha}\equiv\cos\alpha\ S_{+}+\sin\alpha\ S_{-} \tag{19}\] as \[\langle S^{\prime}_{\alpha}\rangle\leq 2\,, \tag{20}\] with \(S_{\pm}=(A_{0}+A_{1})B_{0}\pm(A_{0}-A_{1})B_{0,1}\). This expression, detailed in Appendix B, gives the boundary represented in Fig. 4. Without constraining \(\,\langle S_{c}\rangle=2\sqrt{2}\) and without the commutation relations of Eq. (4) (coming from \begin{table} \begin{tabular}{c c c c} ID & \(\langle S\rangle\) (Experiment) & \(H_{\rm min}\) (Model) & \(H_{\rm min}\) (Experiment) \\ & & (bits) & (bits) \\ \hline 1 & \(2.761\pm 0.003\) & \(0.60\) & \(0.61\pm 0.01\) \\ 2 & \(2.772\pm 0.003\) & \(0.63\) & \(0.64\pm 0.01\) \\ 3 & \(2.797\pm 0.002\) & \(0.64\) & \(0.73\pm 0.01\) \\ \end{tabular} \end{table} Table 2: Experimental results of the CHSH experiment. \(\,\langle S\rangle\) is the CHSH value and for the min-entropy the analytical bound is used [4]. Data retrieved with an exposure time of 100 s (\(\sim 3\cdot 10^{5}\) coincidences). \begin{table} \begin{tabular}{c c c c c c} ID & \(p\) & \(c\) & \(\theta\) & \(H_{\rm min}\) (Model) & \(H_{\rm min}\) (Experiment) \\ & & & (rad) & (bits) & (bits) \\ \hline 1 & \(0.019\) & \(0.017\) & \(0.412\) & \(0.82\) & \(0.85\pm 0.02\) \\ 2 & \(0.016\) & \(0.012\) & \(0.436\) & \(0.89\) & \(0.86\pm 0.01\) \\ 3 & \(0.015\) & \(0.012\) & \(0.357\) & \(0.90\) & \(0.90\pm 0.01\) \\ \end{tabular} \end{table} Table 1: Experimental results of the sequential CHSH experiment. Level 1+AB of the NPA hierarchy is used. 
the sequentiality), the Tsirelson-like bound of \(\langle S^{\prime}_{\alpha}\rangle\) is relaxed to \(2\sqrt{2}\), leading to a relation similar to the one in [15].

Figure 4: Portion of the sequential quantum set with the constraint \(\langle S_{c}\rangle=2\sqrt{2}\) in the parametrization \(\langle S_{\pm}\rangle\). The dashed circumference denotes the maximum values achievable in non-sequential quantum scenarios, without restrictions on \(\langle S_{c}\rangle\).

Because of this greater similarity with the existing literature, \(S^{\prime}_{\alpha}\) might be easier to investigate than \(S_{\theta}\). Our protocol could also be investigated more deeply regarding its robustness to losses. The standard way to treat losses in device-independent schemes is to assign the no-output events to one of the legitimate outputs. In our case, this would make the correlations fall from the boundary and into the interior of \(Q_{SEQ}\). Could this be partially compensated with a different set of states and measurements? It would also be interesting to study whether the protocol can be extended to more Bobs. This stems from the intuition that the independence of the min-entropy from the strength parameter is due to the sequence of two mutually unbiased measurements, \(\sigma_{z}\) and \(\sigma_{x}\). This opens up the possibility of adding a third sequential party measuring \(\sigma_{y}\): In this case the bits would be extracted from a sequence of three mutually unbiased observables. Is it then possible to achieve three bits regardless of the strength parameters under ideal conditions? Could the noise robustness of such a protocol be enough for real-world implementations?

In conclusion, this work offers new tools and results that can improve our understanding of sequential quantum correlations and the performance of randomness generation protocols. The formulation in terms of products of commuting measurements might provide a more intuitive description and suggest interesting points of view from which to analyze a given scenario. The boundary correlations we studied highlight that the greatest quantum advantage is reached using the entire set of experimental probabilities, and not just the pairwise ones. This paves the way for further studies on the complex relationship between nonlocality and randomness and can improve the performance of device-independent random number generators with present-day technologies.

###### Acknowledgements.

The authors would like to thank Dr. Flavio Baccari (Max Planck Institute of Quantum Optics), Prof. Stefano Pironio (Université Libre de Bruxelles), and Dr. Peter Brown (Télécom Paris) for the useful discussions and clarifications. The computational resources offered by the CAPRI initiative (University of Padova Strategic Research Infrastructure Grant 2017: "CAPRI: Calcolo ad Alte Prestazioni per la Ricerca e l'Innovazione") and the BLADE cluster are acknowledged. Part of this work was supported by Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR) (Italian Ministry of Education, University and Research) under the initiative "Departments of Excellence" (Law No. 232/2016), by Fondazione Cassa di Risparmio di Padova e Rovigo within the call "Ricerca Scientifica di Eccellenza 2018", project _QUASAR_, and by the European Union's Horizon 2020 research and innovation programme, project QUANGO (grant agreement No 101004341).
## Appendix A Alternative formulations of the sequential scenario

Here we give an alternative characterization for sequential quantum correlations, which will serve to prove the validity of the construction based on unitary and dichotomic observables introduced in the main text. In the following we will use the symbol \(\mathbf{y}\) for a sequence of inputs and \(\mathbf{y}_{k}\) for its truncation at the \(k\)-th element. We will say that \(\mathbf{y}_{l}\succeq\mathbf{y}_{k}\) if \(l\geq k\) and the first \(k\) elements in \(\mathbf{y}_{l}\) are the same as those of \(\mathbf{y}_{k}\) (i.e., \(\mathbf{y}_{k}\) is a truncation of \(\mathbf{y}_{l}\)).

**Proposition 1**.: _A given correlation \(p(\mathbf{a},\mathbf{b}|\mathbf{x},\mathbf{y})\) is sequential and quantum if and only if it can be written as \(p(\mathbf{a},\mathbf{b}|\mathbf{x},\mathbf{y})=\langle\psi|\prod_{k}\Lambda_{a_{k}}^{\mathbf{x}_{k}}\otimes\prod_{k}\Pi_{b_{k}}^{\mathbf{y}_{k}}|\psi\rangle\) with \(\mathbf{x}\succeq\mathbf{x}_{k},\mathbf{y}\succeq\mathbf{y}_{k}\), and the operators satisfying:_

\[\begin{split}\sum_{b_{k}}\Pi_{b_{k}}^{\mathbf{y}_{k}}&=\openone\quad\forall k,\mathbf{y}_{k}\quad\text{(normalization)}\\ \Pi_{b_{k}}^{\mathbf{y}_{k}\,\dagger}&=\Pi_{b_{k}}^{\mathbf{y}_{k}}\quad\forall k,\mathbf{y}_{k},b_{k}\quad\text{(hermiticity)}\\ \Pi_{b_{k}}^{\mathbf{y}_{k}}\Pi_{b_{k}^{\prime}}^{\mathbf{y}_{k}}&=\delta_{b_{k}b_{k}^{\prime}}\Pi_{b_{k}}^{\mathbf{y}_{k}}\quad\forall k,\mathbf{y}_{k},b_{k},b_{k}^{\prime}\quad\text{(proj. and ortho.)}\\ [\Pi_{b_{k}}^{\mathbf{y}_{k}},\Pi_{b_{l}}^{\mathbf{y}_{l}}]&=0\quad\forall k,l,b_{k},b_{l},\mathbf{y}_{l}\succeq\mathbf{y}_{k}\quad\text{(comm.)}\,,\end{split} \tag{10}\]

_and similarly for \(\Lambda_{a_{k}}^{\mathbf{x}_{k}}\)._

Proof.: Reference [18] already proves that any sequential quantum correlation can be written as \(p(\mathbf{a},\mathbf{b}|\mathbf{x},\mathbf{y})=\langle\psi|\mathsf{A}_{\mathbf{a}}^{\mathbf{x}}\otimes\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}|\psi\rangle\) with the operators satisfying:

\[\begin{split}\sum_{\mathbf{b}}\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}&=\openone\quad\forall\mathbf{y}\quad\text{(normalization)}\\ \mathsf{B}_{\mathbf{b}}^{\mathbf{y}\,\dagger}&=\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\quad\forall\mathbf{y},\mathbf{b}\quad\text{(hermiticity)}\\ \mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\mathsf{B}_{\mathbf{b}^{\prime}}^{\mathbf{y}}&=\delta_{\mathbf{b}\mathbf{b}^{\prime}}\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\quad\forall\mathbf{y},\mathbf{b},\mathbf{b}^{\prime}\quad\text{(proj. and ortho.)}\\ \sum_{b_{k+1}\ldots b_{n}}\mathsf{B}_{\mathbf{b}}^{y_{1}\ldots y_{k}y_{k+1}\ldots y_{n}}&=\sum_{b_{k+1}\ldots b_{n}}\mathsf{B}_{\mathbf{b}}^{y_{1}\ldots y_{k}y_{k+1}^{\prime}\ldots y_{n}^{\prime}}\quad\forall k,\mathbf{y},\mathbf{y}^{\prime},b_{1}\ldots b_{k}\quad\text{(seq.)}\,.\end{split}\]

We can prove that any \(\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\) satisfying the second set of conditions can be written as the product \(\prod_{k}\Pi_{b_{k}}^{\mathbf{y}_{k}}\), with operators \(\Pi_{b_{k}}^{\mathbf{y}_{k}}\) satisfying the first set, and vice versa, to arrive at the conclusion. Let us start from \(\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\) satisfying the second set of conditions. Using all the truncations \(\mathbf{y}_{k}\) of \(\mathbf{y}\) (i.e., \(\mathbf{y}\succeq\mathbf{y}_{k}\)), we can define

\[\Pi_{b_{k}}^{\mathbf{y}_{k}}\equiv\sum_{\mathbf{b}^{\prime}}\delta_{b_{k}b_{k}^{\prime}}\mathsf{B}_{\mathbf{b}^{\prime}}^{\mathbf{y}}\,. \tag{11}\]

The product of these operators is \(\prod_{k}\Pi_{b_{k}}^{\mathbf{y}_{k}}=\sum_{\mathbf{b}^{\prime}}\delta_{\mathbf{b}\mathbf{b}^{\prime}}\prod_{k}\mathsf{B}_{\mathbf{b}^{\prime}}^{\mathbf{y}}=\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\) as needed. Importantly, the right-hand side of (11) is independent of \(y_{k+1}\ldots y_{n}\).
Indeed: \[\sum_{\mathbf{b}^{\prime}}\delta_{b_{k}b_{k}^{\prime}}\mathsf{B}_{\mathbf{b}^{\prime}}^{ \mathbf{y}}=\sum_{b_{1}^{\prime}\ldots b_{k-1}^{\prime}}\left(\sum_{b_{k+1}^{\prime }\ldots b_{n}^{\prime}}\mathsf{B}_{b_{1}^{\prime}\ldots b_{k-1}^{\prime}b_{k}b _{k+1}^{\prime}\ldots b_{n}^{\prime}}^{\mathbf{y}}\right) \tag{19}\] and the term in parenthesis is exactly the one that appears in the sequentiality condition of (17) and is guaranteed to be independent of \(y_{k+1}\ldots y_{n}\). Then, it is straightforward to verify the normalization, hermiticity, projectivity and orthogonality conditions in (16) starting from their counterparts in (17). The commutation relation of (16) is found by noticing that \[\Pi_{b_{k}}^{\mathbf{y}_{k}}\Pi_{b_{l}}^{\mathbf{y}_{l}}=\Pi_{b_{l}}^{\mathbf{y}_{l}}\Pi_ {b_{k}}^{\mathbf{y}_{k}}=\sum_{\mathbf{b}^{\prime}}\delta_{b_{k}b_{l}^{\prime}}\delta _{b_{l}b_{l}^{\prime}}\mathsf{B}_{\mathbf{b}^{\prime}}^{\mathbf{y}}\,, \tag{20}\] which uses the fact that \(\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\) are orthogonal projectors and \(\mathbf{y}\succeq\mathbf{y}_{l}\succeq\mathbf{y}_{\mathbf{k}}\). Viceversa, let us start from projectors \(\Pi_{b_{k}}^{\mathbf{y}_{k}}\) satisfying (16). For any given pair of input and output sequences \(\mathbf{y},\mathbf{b}\), we can directly define: \[\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\equiv\prod_{l}\Pi_{b_{l}}^{\mathbf{y}_{l}}\,. \tag{21}\] Normalization, projectivity and orthogonality are straightforward. Hermiticity descends from the fact that the product involves only commuting hermitian operators. The sequentiality condition is verified considering that \[\sum_{b_{k+1}\ldots b_{n}}\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}=\prod_{l=1}^{k}\Pi_{b_ {l}}^{\mathbf{y}_{l}}\prod_{l=k+1}^{n}\left(\sum_{b_{l}}\Pi_{b_{l}}^{\mathbf{y}_{l}} \right)=\prod_{l=1}^{k}\Pi_{b_{l}}^{\mathbf{y}_{l}} \tag{22}\] which does not depend on \(y_{k+1}\ldots y_{n}\). All of this is valid regardless of the number of possible values for the inputs or outputs. In the special case in which all measurements are dichotomic and return \(b_{k}\in\{\pm 1\}\), we can build observables as \[B_{\mathbf{y}_{k}}\equiv\sum_{b_{k}}b_{k}\Pi_{b_{k}}^{\mathbf{y}_{k}}=\sum_{\mathbf{b}}b_{ k}\mathsf{B}_{\mathbf{b}}^{\mathbf{y}} \tag{23}\] which are hermitian and unitary. This means that the construction used in the main text is a valid way to characterize sequential quantum correlations (in the scenario of interest for this work). For completeness, we report that, in this dichotomic case, the operators \(\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}\) can conversely be built from the observables with: \[\mathsf{B}_{\mathbf{b}}^{\mathbf{y}}=\frac{1}{2^{n}}\Big{(}\openone+ \sum_{k_{1}}b_{k_{1}}B_{\mathbf{y}_{k_{1}}}+\cdots+\\ +\sum_{k_{1}<\cdots<k_{n}}b_{k_{1}}\cdots b_{k_{n}}\prod_{k=k_{1} }^{k_{n}}B_{\mathbf{y}_{k}}\Big{)}\,. \tag{24}\] To summarize, the relevant operators in our scenario satisfy: \[A_{x}^{\dagger}A_{x}=\openone\quad\forall x\quad\text{(unit.)}\] \[A_{x}=A_{x}^{\dagger}\quad\forall x\quad\text{(herm.)}\] \[B_{y_{1}}^{\dagger}B_{y_{1}}=B_{y_{1},y_{2}}^{\dagger}B_{y_{1},y _{2}}=\openone\quad\forall y_{1},y_{2}\quad\text{(unit.)} \tag{25}\] \[B_{y_{1}}=B_{y_{1}}^{\dagger},\,B_{y_{1},y_{2}}=B_{y_{1},y_{2}}^ {\dagger}\forall y_{1},y_{2}\quad\text{(herm.)}\] \[[B_{y_{1}},B_{y_{1},y_{2}}]=0\quad\forall y_{1},y_{2}\quad\text{ (commutation)}\,,\] where it is left implicit that \(A_{x}\) and \(B_{y_{1}},B_{y_{1},y_{2}}\) act on separate Hilbert spaces. 
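To make the constraints (25) concrete, they can be checked numerically for the qubit operators of Eq. (13) used by the protocol in the main text. The following minimal numpy sketch is an illustration added here (not part of the original derivation); only the \(y_{1}=0\) branch of the commutation relation is checked, since the protocol ends when \(y_{1}=1\).

```python
import numpy as np

# Pauli matrices and identity
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Alice's observables of Eq. (11) and the Bobs' observables of Eq. (13),
# the latter acting on H_B' (x) H_B''
A0, A1 = (sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)
B0, B1 = np.kron(sz, sz), np.kron(sx, I2)      # Bob1, inputs y1 = 0, 1
B00, B01 = np.kron(sz, I2), np.kron(sx, sx)    # Bob2, inputs (y1, y2) = (0, 0), (0, 1)

def hermitian(M):
    return np.allclose(M, M.conj().T)

def unitary(M):
    return np.allclose(M @ M.conj().T, np.eye(M.shape[0]))

def commute(M, N):
    return np.allclose(M @ N, N @ M)

# Hermiticity and unitarity required by Eq. (25)
assert all(hermitian(M) and unitary(M) for M in (A0, A1, B0, B1, B00, B01))

# Commutation [B_{y1}, B_{y1,y2}] = 0 required by sequentiality (y1 = 0 branch)
assert commute(B0, B00) and commute(B0, B01)

print("The operators of Eq. (13) satisfy the constraints of Eq. (25).")
```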
## Appendix B Characterization of the sequential quantum boundary In this appendix we prove Result 1, i.e. that the relations \(\left\langle S_{c}\right\rangle=2\sqrt{2}\) and \(\left\langle S_{\theta}\right\rangle=\sqrt{2}\) are a boundary for \(Q_{SEQ}\). We also give an useful characterization for the states that allow to generate correlations on this boundary. We rephrase the result in a more self-contained way as: **Proposition 2**.: _Let \(x,y_{1},y_{2}\in\{0,1\}\) and \(A_{x}\), \(B_{y_{1}}\), \(B_{y_{1},y_{2}}\) be operators that satisfy (25). Define the operators (as in the main text)_ \[S_{1} \equiv(A_{0}+A_{1})B_{0}+(A_{0}-A_{1})B_{1}\] \[S_{2} \equiv(A_{0}+A_{1})B_{0,0}+(A_{0}-A_{1})B_{0,1}\] \[S_{c} \equiv(A_{0}+A_{1})B_{0,0}+(A_{0}-A_{1})B_{1} \tag{26}\] \[S_{\theta} \equiv\cos 2\theta(S_{1}-\sqrt{2}\openone)+\sin 2\theta(S_{2}- \sqrt{2}\openone)\,,\] _Then the following two relations hold_ \[\left\langle S_{c}\right\rangle\leq 2\sqrt{2} \tag{27}\] \[\left\langle S_{c}\right\rangle=2\sqrt{2}\implies\left\langle S_{ \theta}\right\rangle\leq\sqrt{2}\] _and the inequalities are tight for any \(\theta\)._ This means that \(\left\langle S_{c}\right\rangle=2\sqrt{2}\) and \(\left\langle S_{\theta}\right\rangle=\sqrt{2}\) define a boundary for the set of sequential quantum correlations \(Q_{SEQ}\) measurable in the scenario of interest of this work. Proof.: Since \(S_{c}\) is a CHSH-like operator, Tsirelson's bound already assures that \(\left\langle S_{c}\right\rangle\leq 2\sqrt{2}\). Therefore, we move to proving that if \(\left\langle S_{c}\right\rangle=2\sqrt{2}\), then \(\left\langle S_{\theta}\right\rangle\leq\sqrt{2}\). Then, let \(\left|\psi\right\rangle\) and the \(A_{x}\), \(B_{y_{1}}\), \(B_{y_{1},y_{2}}\) be a state and observables for which \(\left\langle S_{c}\right\rangle=2\sqrt{2}\). Let us define for ease of writing the operators \(Z_{A}=\frac{A_{0}+A_{1}}{\sqrt{2}}\) and \(X_{A}=\frac{A_{0}-A_{1}}{\sqrt{2}}\). By construction we have that \(\{Z_{A},X_{A}\}=0\). Moreover, due to the self-testing properties of the CHSH scenario provided by \(\left\langle S_{c}\right\rangle=2\sqrt{2}\), we have that \(\left\langle Z_{A}B_{0,0}\right\rangle=\left\langle X_{A}B_{1}\right\rangle=1\) and \(\left\{A_{0},A_{1}\right\}\left|\psi\right\rangle=0\)[25], which implies \(Z_{A}^{\dagger}Z_{A}\left|\psi\right\rangle=X_{A}^{\dagger}X_{A}\left|\psi \right\rangle=\left|\psi\right\rangle\). With these properties, we can rewrite the mean value of \(S_{\theta}\) on \(\left|\psi\right\rangle\) as \[\left\langle S_{\theta}\right\rangle=\sqrt{2}\cos 2\theta\left\langle Z_{A}B_{0} \right\rangle+\sqrt{2}\sin 2\theta\left\langle X_{A}B_{0,1}\right\rangle\,. \tag{28}\] Furthermore, let us define the auxiliary hermitian operator \[P\equiv 2^{-\frac{1}{4}}\left(\openone-\cos(2\theta)Z_{A}B_{0}-\sin(2\theta)X_ {A}B_{0,1}\right). \tag{10}\] With an algebraic derivation we find that \[\left\langle P^{2}\right\rangle=\,\left\langle\sqrt{2}\openone-S_{\theta} \right\rangle. \tag{11}\] Since \(P^{2}\) is the square of an hermitian operator, it has non-negative eigenvalues. Hence \(\,\left\langle P^{2}\right\rangle\geq 0\) and \(\,\left\langle S_{\theta}\right\rangle\leq\sqrt{2}\). We can also find an explicit quantum protocol that satisfies \(\,\left\langle S_{c}\right\rangle=2\sqrt{2}\) and \(\,\left\langle S_{\theta}\right\rangle=\sqrt{2}\) for any \(\theta\), concluding that the inequalities are tight. 
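As a sanity check of this tightness claim, the saturation can also be verified numerically for the explicit state and operators of Eqs. (11), (13) and (14). The short numpy sketch below is an added illustration (not part of the original proof); it evaluates \(\langle S_{c}\rangle\) and \(\langle S_{\theta}\rangle\) for several strengths \(\theta\).

```python
import numpy as np
from functools import reduce

kron = lambda *ops: reduce(np.kron, ops)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

A0, A1 = (sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)   # Eq. (11)
B0, B1 = kron(sz, sz), kron(sx, I2)                       # Eq. (13), Bob1
B00, B01 = kron(sz, I2), kron(sx, sx)                     # Eq. (13), Bob2

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |phi+> on A (x) B'

def chsh(theta, BL, BR):
    """<psi| (A0+A1) (x) BL + (A0-A1) (x) BR |psi> with |psi> of Eq. (14)."""
    ancilla = np.array([np.cos(theta), np.sin(theta)], dtype=complex)  # |theta> on B''
    psi = kron(phi_plus, ancilla)                  # tensor ordering A (x) B' (x) B''
    S = kron(A0 + A1, BL) + kron(A0 - A1, BR)
    return np.real(psi.conj() @ S @ psi)

for theta in np.linspace(0.05, np.pi / 4 - 0.05, 7):
    S1, S2 = chsh(theta, B0, B1), chsh(theta, B00, B01)
    Sc = chsh(theta, B00, B1)
    Stheta = np.cos(2 * theta) * (S1 - np.sqrt(2)) + np.sin(2 * theta) * (S2 - np.sqrt(2))
    assert np.isclose(Sc, 2 * np.sqrt(2)) and np.isclose(Stheta, np.sqrt(2))

print("<S_c> = 2*sqrt(2) and <S_theta> = sqrt(2) for all tested theta.")
```

For this family of states and measurements one finds \(\langle S_{1}\rangle=\sqrt{2}(1+\cos 2\theta)\) and \(\langle S_{2}\rangle=\sqrt{2}(1+\sin 2\theta)\), which traces out the circumference shown in Fig. 2.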
Because this protocol is of relevance for different parts of this text, we show it separately in Appendix D. We can also prove the following characterization for \(\left|\psi\right\rangle\): **Proposition 3**.: _Let \(\left|\psi\right\rangle\), \(A_{x}\), \(B_{y_{1}}\),\(B_{y_{1},y_{2}}\) be a state and operators (with the constraints (10)) that produce correlations such that \(\,\left\langle S_{c}\right\rangle=2\sqrt{2}\) and \(\,\left\langle S_{\theta}\right\rangle=\sqrt{2}\). Then_ \[\left|\psi\right\rangle=\cos 2\theta\frac{A_{0}+A_{1}}{\sqrt{2}}B_{0} \left|\psi\right\rangle+\sin 2\theta\frac{A_{0}-A_{1}}{\sqrt{2}}B_{0,1} \left|\psi\right\rangle\,, \tag{12}\] Proof.: Using the auxiliary hermitian operator \(P\) of Eq. (10) and considering that \(\left|\psi\right\rangle\) generates correlations on the boundary, we find: \[\left\langle\sqrt{2}\openone-S_{\theta}\right\rangle=\,\left\langle P^{2} \right\rangle=\,\left\langle P^{\dagger}P\right\rangle=\left\|P\left|\psi \right\rangle\right\|^{2}=0 \tag{13}\] which implies \(P\left|\psi\right\rangle=0\) and hence (12). In these proofs we used the self-testing properties of the CHSH inequality to fix some expectation values. From the same observation, we can obtain alternative and equivalent formulations of the boundary. For example, when \(\,\left\langle S_{c}\right\rangle=2\sqrt{2}\), we can introduce the operator \[S^{\prime}_{\alpha}=\cos\alpha\ S_{+}+\sin\alpha\ S_{-} \tag{14}\] with \[S_{\pm}=(A_{0}+A_{1})B_{0}\pm(A_{0}-A_{1})B_{0,1}. \tag{15}\] Under the condition \(\,\left\langle S_{c}\right\rangle=2\sqrt{2}\), we have that \[\sqrt{2}\left\langle S_{\theta}\right\rangle=\,\left\langle S^{\prime}_{ \frac{\pi}{\pi}-\theta}\right\rangle \tag{16}\] so that the sequential set can be also characterized by the condition \(\left\langle S^{\prime}_{\alpha}\right\rangle\leq 2\). Expression (14) is similar to those studied in [15] in a non-sequential scenario, and indeed has an analogous sum-of-squares decomposition \[2\sqrt{2}-S^{\prime}_{\alpha}=\] \[\quad\frac{1}{\sqrt{2}}\left[\sin\left(\frac{\pi}{4}+\alpha \right)B_{0}+\cos\left(\frac{\pi}{4}+\alpha\right)B_{0,1}-A_{0}\right]^{2}+\] \[\quad+\frac{1}{\sqrt{2}}\left[\sin\left(\frac{\pi}{4}+\alpha \right)B_{0}-\cos\left(\frac{\pi}{4}+\alpha\right)B_{0,1}-A_{1}\right]^{2}. \tag{17}\] Having a sum of non-negative operators on the right hand side, Eq. (17) gives a bound for the maximum value allowed by quantum physics for the expectation value of \(S^{\prime}_{\alpha}\), independently on the value of \(S_{c}\): \[\left\langle S^{\prime}_{\alpha}\right\rangle\leq 2\sqrt{2}. \tag{18}\] This bound is actually tight since we can choose the strategy \[A_{j} =(-1)^{j}\cos\Bigl{(}\alpha+\frac{\pi}{4}\Bigr{)}\sigma_{x}+\sin \Bigl{(}\alpha+\frac{\pi}{4}\Bigr{)}\sigma_{z} \tag{19}\] \[B_{0} =\sigma_{z}\,\qquad B_{0,1}=\sigma_{x}\] with the shared entangled state \(\left|\phi^{+}\right\rangle\) to saturate it. The condition (18) is represented with a dashed line in Fig. 4. Different parameterizations of the boundary and arguments like the one just shown are not necessary for the proofs of this paper but could be useful when the condition \(\,\left\langle S_{c}\right\rangle=2\sqrt{2}\) is relaxed and the interior of the sequential set is studied. ## Appendix C Proof of the randomness results Here we prove Result 2 about the randomness of the outcomes of \(B_{0}\) and \(B_{0,1}\). 
We rephrase it more formally as: **Proposition 4**.: _Let \(x,y_{1},y_{2}\in\{0,1\}\), \(a,b_{1},b_{2}\in\{\pm 1\}\) and let \(P_{\text{exp}}(a,\mathbf{b}|x,\mathbf{y})=\,\left\langle\psi|\Lambda_{a}^{x}\otimes \Pi_{b_{1}}^{y_{1}}\Pi_{b_{2}}^{y_{1},y_{2}}|\psi\right\rangle\in Q_{SEQ}\), where \(\Lambda_{a}^{x}\), \(\Pi_{b_{1}}^{y_{1}}\) and \(\Pi_{b_{2}}^{y_{1},y_{2}}\) are the projectors on the eigenspaces of observables \(A_{x}\), \(B_{y_{1}}\) and \(B_{y_{1},y_{2}}\) which satisfy (10). Let \(G\) be the guessing probability defined in (15)-(16) for \(\mathbf{y_{r}}=(0,1)\)._ _Then, if \(\,\left\langle S_{c}\right\rangle=2\sqrt{2}\) and \(\,\left\langle S_{\theta}\right\rangle=\sqrt{2}\), we have that \(G=\frac{1}{4}\) if \(\theta\neq n\frac{\pi}{4}\) and \(G=\frac{1}{2}\) otherwise._ Proof.: We work in the formalism of device-independent scenarios, which assumes that in principle the state shared by Alice and the Bobs is not separable from Eve's. We denote it \(\left|\psi\right\rangle_{ABE}\in\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes \mathcal{H}_{E}\). Alice's observables \(A_{0}\) and \(A_{1}\) act on \(\mathcal{H}_{A}\), the Bobs' observables \(B_{0}\), \(B_{1}\), \(B_{0,0}\) and \(B_{0,1}\) act on \(\mathcal{H}_{B}\). Since \(B_{1,0}\) and \(B_{1,1}\) contribute neither to \(S_{c}\) nor to \(S_{\theta}\), they have no role in this proof. Eve performs projective measurements \(E_{\mathbf{e}}\) on \(\mathcal{H}_{E}\), that satisfy \(E_{\mathbf{e}}E_{\mathbf{e}^{\prime}}=\delta_{\mathbf{e}\mathbf{e}^{\prime}}E_{\mathbf{e}}\) and \(\sum_{\mathbf{e}}E_{\mathbf{e}}=\openone_{E}\). The joint correlation including Alice, Bob and Eve is \[p(a,\mathbf{b},\mathbf{e}|x,\mathbf{y})=\,\left\langle\psi|\Lambda_{a}^{x}\otimes\Pi_{b_{1}} ^{y_{1}}\Pi_{b_{2}}^{y_{1},y_{2}}\otimes E_{\mathbf{e}}|\psi\right\rangle\,. \tag{20}\] Following [25], the saturation \(\left\langle S_{c}\right\rangle=2\sqrt{2}\) implies that, up to local isometries, the state can be written as \(\left|\psi\right\rangle_{ABE}\stackrel{{\underline{u}li}}{{=}} \left|\phi^{+}\right\rangle_{AB^{\prime}}\left|\xi\right\rangle_{B^{\prime \prime}E}\in\mathcal{H}_{A}\otimes\mathcal{H}_{B^{\prime}}\otimes\mathcal{H}_{ B^{\prime\prime}}\otimes\mathcal{H}_{E}\), where the symbol \(\stackrel{{\underline{u}li}}{{=}}\) is introduced to distinguish between equality and the equivalence up to local isometries used in the self-testing formalism. These local isometries do not act on Eve's space. 
With the same formalism we have that Alice's operators are such that

\[\begin{split} A_{0}|\psi\rangle&\stackrel{{\underline{u}li}}{{=}}\frac{\sigma_{z}+\sigma_{x}}{\sqrt{2}}\,|\phi^{+}\rangle|\xi\rangle\\ A_{1}|\psi\rangle&\stackrel{{\underline{u}li}}{{=}}\frac{\sigma_{z}-\sigma_{x}}{\sqrt{2}}\,|\phi^{+}\rangle|\xi\rangle\,.\end{split} \tag{10}\]

Since \(\mathcal{H}_{B^{\prime}}\) is a qubit space, \(\mathcal{H}_{B^{\prime}}=\mathbb{C}^{2}\), the action of the operators \(B_{0}\) and \(B_{0,1}\) can be decomposed with the Pauli matrices as

\[\begin{split} B_{0}|\psi\rangle&\stackrel{{\underline{u}li}}{{=}}\left[\openone\otimes\gamma_{0}+\sigma_{x}\otimes\gamma_{1}+\sigma_{y}\otimes\gamma_{2}+\sigma_{z}\otimes\gamma_{3}\right]|\phi^{+}\rangle|\xi\rangle\\ B_{0,1}|\psi\rangle&\stackrel{{\underline{u}li}}{{=}}\left[\openone\otimes\tau_{0}+\sigma_{x}\otimes\tau_{1}+\sigma_{y}\otimes\tau_{2}+\sigma_{z}\otimes\tau_{3}\right]|\phi^{+}\rangle|\xi\rangle\,,\end{split} \tag{11}\]

where the \(\gamma_{i}\) and \(\tau_{i}\) operators are arbitrary hermitian operators acting on \(\mathcal{H}_{B^{\prime\prime}}\). Since \(B_{0,0}|\psi\rangle\stackrel{{\underline{u}li}}{{=}}\sigma_{z}\otimes\openone\,|\phi^{+}\rangle|\xi\rangle\) (because \(\langle S_{c}\rangle=2\sqrt{2}\)), the commutation relation \([B_{0},B_{0,0}]=0\) implies that \(\gamma_{1}|\xi\rangle=\gamma_{2}|\xi\rangle=0\). Then, by applying the local isometry that realizes our relations to both sides of the relation of Proposition 3, we retrieve the following four constraints

\[\cos 2\theta\,\gamma_{0}|\xi\rangle-i\sin 2\theta\,\tau_{2}|\xi\rangle=0 \tag{12}\]
\[\sin 2\theta\,\tau_{0}|\xi\rangle=0 \tag{13}\]
\[\sin 2\theta\,\tau_{3}|\xi\rangle=0 \tag{14}\]
\[\cos 2\theta\,\gamma_{3}|\xi\rangle+\sin 2\theta\,\tau_{1}|\xi\rangle=|\xi\rangle\,, \tag{15}\]

and consequently, if \(\sin 2\theta\neq 0\), \(\tau_{0}|\xi\rangle=\tau_{3}|\xi\rangle=0\) and \(B_{0,1}|\psi\rangle\stackrel{{\underline{u}li}}{{=}}\left[\sigma_{x}\otimes\tau_{1}+\sigma_{y}\otimes\tau_{2}\right]|\phi^{+}\rangle|\xi\rangle\). Let us now study the guessing probability:

\[\begin{split} G&=\sum_{b_{1}b_{2}}\text{Prob}[e_{1}=b_{1},e_{2}=b_{2}|y_{1}=0,y_{2}=1]\\ &=\sum_{b_{1}b_{2}}\langle\psi|\openone_{A}\otimes\Pi_{b_{1}}^{0}\Pi_{b_{2}}^{0,1}\otimes E_{b_{1},b_{2}}|\psi\rangle\,.\end{split} \tag{16}\]

We consider first the case \(\sin 2\theta\neq 0\) and \(\cos 2\theta\neq 0\). We can write the projectors that decompose \(B_{0}\) and \(B_{0,1}\) as:

\[\Pi_{b_{1}}^{0}|\psi\rangle\stackrel{{\underline{u}li}}{{=}}\frac{1}{2}\Big{[}\openone\otimes\openone+b_{1}\left(\openone\otimes\gamma_{0}+\sigma_{z}\otimes\gamma_{3}\right)\Big{]}|\phi^{+}\rangle|\xi\rangle \tag{17}\]
\[\Pi_{b_{2}}^{0,1}|\psi\rangle\stackrel{{\underline{u}li}}{{=}}\frac{1}{2}\Big{[}\openone\otimes\openone+b_{2}\left(\sigma_{x}\otimes\tau_{1}+\sigma_{y}\otimes\tau_{2}\right)\Big{]}|\phi^{+}\rangle|\xi\rangle \tag{18}\]

and replace them, together with the form of \(|\psi\rangle\), in \(G\).
Since all terms in which \(\Pi_{b_{1}}^{0}\Pi_{b_{2}}^{0,1}\) acts as a Pauli matrix on \(\mathcal{H}_{B^{\prime}}\) vanish because \(\langle\phi^{+}|\openone_{A}\otimes\sigma_{i}|\phi^{+}\rangle=0\), we end up with:

\[G=\frac{1}{4}\left(1+\sum_{b_{1}b_{2}}b_{1}\langle\xi|\gamma_{0}\otimes E_{b_{1},b_{2}}|\xi\rangle\right). \tag{19}\]

Then, we multiply both sides of Eq. (12) from the left by \(\langle\xi|E_{b_{1},b_{2}}\), finding that \(\cos 2\theta\,\langle\xi|E_{b_{1},b_{2}}\otimes\gamma_{0}|\xi\rangle=i\sin 2\theta\,\langle\xi|E_{b_{1},b_{2}}\otimes\tau_{2}|\xi\rangle\). Considering that \(\gamma_{0}\) and \(\tau_{2}\) are hermitian and commute with Eve's operators (because they act on different Hilbert spaces), the expectation values are real. Therefore, taking the real part on both sides, we have that \(\cos 2\theta\,\langle\xi|E_{b_{1},b_{2}}\otimes\gamma_{0}|\xi\rangle=\cos 2\theta\,\langle\xi|\gamma_{0}\otimes E_{b_{1},b_{2}}|\xi\rangle=0\). If \(\cos 2\theta\neq 0\), we have that

\[\langle\xi|\gamma_{0}\otimes E_{b_{1},b_{2}}|\xi\rangle=0 \tag{20}\]

and hence \(G=\frac{1}{4}\). For \(\sin 2\theta=0\), we consider the probability that Eve guesses the outcome of \(B_{0}\) (regardless of whether she guesses \(B_{0,1}\) or not):

\[\begin{split} G_{1}&=\sum_{b_{1}}\text{Prob}[e_{1}=b_{1}|y_{1}=0,y_{2}=1]\\ &=\sum_{b_{1}}\langle\psi|\openone_{A}\otimes\Pi_{b_{1}}^{0}\otimes\sum_{b_{2}}E_{b_{1},b_{2}}|\psi\rangle\,.\end{split} \tag{21}\]

We use the same decomposition of \(\Pi_{b_{1}}^{0}\) as above. The terms in \(\openone\otimes\openone\) and \(\openone\otimes\gamma_{0}\) are the only ones that do not vanish, and we have \(G_{1}=\frac{1}{2}\left(1+\sum_{b_{1}b_{2}}b_{1}\langle\xi|\gamma_{0}\otimes E_{b_{1},b_{2}}|\xi\rangle\right)\). Because \(\cos 2\theta\neq 0\), we can use Eq. (20) to find that the second term in the above parenthesis is \(0\) and \(G_{1}=\frac{1}{2}\). Since by definition of joint probability \(G\leq G_{1}\), we have that \(G\leq\frac{1}{2}\). In Appendix D, we show a strategy with which Eve can perfectly guess the outcome of \(B_{0,1}\), hence \(G=\frac{1}{2}\). For \(\cos 2\theta=0\), we consider the probability that Eve guesses the outcome of \(B_{0,1}\) (regardless of whether she guesses \(B_{0}\) or not):

\[\begin{split} G_{2}&=\sum_{b_{2}}\text{Prob}[e_{2}=b_{2}|y_{1}=0,y_{2}=1]\\ &=\sum_{b_{2}}\langle\psi|\openone_{A}\otimes\Pi_{b_{2}}^{0,1}\otimes\sum_{b_{1}}E_{b_{1},b_{2}}|\psi\rangle\,.\end{split} \tag{22}\]

Because \(\sin 2\theta\neq 0\), we can replace Eq. (18) in \(G_{2}\). Then all the terms acting as a Pauli matrix on \(\mathcal{H}_{B^{\prime}}\) vanish and \(G_{2}=\frac{1}{2}\sum_{b_{1}b_{2}}\langle\psi|\openone_{A}\otimes\openone_{B^{\prime}}\otimes\openone_{B^{\prime\prime}}\otimes E_{b_{1},b_{2}}|\psi\rangle=\frac{1}{2}\). Since by definition of joint probability \(G\leq G_{2}\), we have that \(G\leq\frac{1}{2}\). In Appendix D, we show a strategy with which Eve can perfectly guess the outcome of \(B_{0}\), hence \(G=\frac{1}{2}\).

## Appendix D Sequential-CHSH protocol with projective measurements

In this appendix we discuss in more detail the protocol of Section II.3, justifying the equivalence between the formulations in terms of Kraus operators and of observables \(B_{y_{1}}\), \(B_{y_{1},y_{2}}\) introduced in the main text.
The natural framework to describe such equivalence is the Von Neumann formalism, in which the measurements are realized through the interaction of the system with the environment. To simplify the notation, we focus only on the Bobs' side, where we realize the sequential measurements. A reference scheme is depicted in Fig. 5. We denote the state of the system with \(\ket{\varphi}\in\mathcal{H}_{B^{\prime}}\) and we couple it with an ancilla qubit state \(\ket{\theta}\in\mathcal{H}_{B^{\prime\prime}}\),

\[\ket{\theta}\equiv\cos\theta\ket{0}+\sin\theta\ket{1}\,, \tag{23}\]

forming the joint state \(\ket{\psi}\equiv\ket{\varphi}\otimes\ket{\theta}=\ket{\varphi}\ket{\theta}\). When the input of Bob\({}_{1}\) is \(y_{1}=1\), he performs the projective measurement of \(B_{1}=\sigma_{x}\otimes\openone\) on \(\mathcal{H}_{B^{\prime}}\otimes\mathcal{H}_{B^{\prime\prime}}\). When, instead, \(y_{1}=0\), the measurements can be described through the unitary evolution

\[\begin{split} U\ket{\psi}&=e^{i\mathcal{H}_{I}}\ket{\varphi}\ket{\theta}=e^{i\frac{\pi}{4}\left(\openone-\sigma_{z}\right)\otimes\left(\openone-\sigma_{x}\right)}\ket{\varphi}\ket{\theta}=\\ &=\left[\ket{0}\!\bra{0}\otimes\openone+\ket{1}\!\bra{1}\otimes\sigma_{x}\right]\ket{\varphi}\ket{\theta}\,.\end{split} \tag{24}\]

Here the interaction Hamiltonian \(\mathcal{H}_{I}\) defines a CNOT gate between the original state and the ancilla. The Kraus operators (12) realizing Bob\({}_{1}\)'s measurements can be found by measuring in the ancilla basis defined by the states \(\ket{0}\) and \(\ket{1}\), which here are the eigenstates of \(\sigma_{z}\) in \(\mathcal{H}_{B^{\prime\prime}}\):

\[\begin{split} p(b_{1}&=+1|y_{1}=0)=\big\|\bra{0}_{B^{\prime\prime}}U\ket{\psi}\big\|^{2}=\big\|K_{+}(\theta)\ket{\varphi}\big\|^{2}\\ p(b_{1}&=-1|y_{1}=0)=\big\|\bra{1}_{B^{\prime\prime}}U\ket{\psi}\big\|^{2}=\big\|K_{-}(\theta)\ket{\varphi}\big\|^{2}\end{split} \tag{25}\]

where \(K_{+}(\theta)\equiv\bra{0}_{B^{\prime\prime}}U\ket{\theta}_{B^{\prime\prime}}\) and \(K_{-}(\theta)\equiv\bra{1}_{B^{\prime\prime}}U\ket{\theta}_{B^{\prime\prime}}\) are exactly the operators in (12), acting on the qubit space \(\mathcal{H}_{B^{\prime}}\). These Kraus operators are chosen so that for \(\theta=\frac{\pi}{4}\) they are proportional to the identity operator, while for \(\theta=0\) they realize a projective measurement of \(\sigma_{z}\) (assigning the labels \(\pm 1\) to the two outcomes). In the literature, these Kraus operators are associated with weak measurements of \(\sigma_{z}\) [13]. Indeed, the unitary evolution can also be written as \(U\ket{\psi}=U_{\theta}\ket{\varphi}\ket{0}\) with \(U_{\theta}=e^{i\mathcal{H}_{I}}(\openone\otimes e^{-i\theta\sigma_{y}})\) and the \(\theta\) parameter representing the "strength" of the interaction.
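The identification of the Kraus operators from the coupling can also be checked with a few lines of numpy. The sketch below is an added illustration (function and variable names are ours): it builds the controlled-NOT coupling of Eq. (24), projects the ancilla prepared in \(\ket{\theta}\) onto \(\ket{0}\) and \(\ket{1}\), and recovers the operators of Eq. (12) together with the completeness relation \(K_{+}^{\dagger}K_{+}+K_{-}^{\dagger}K_{-}=\openone\).

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def kraus_from_coupling(theta):
    """Recover K_+(theta), K_-(theta) of Eq. (12) from the CNOT coupling of Eq. (24)."""
    ancilla = np.cos(theta) * ket0 + np.sin(theta) * ket1          # |theta> of Eq. (23)
    # U = |0><0| (x) 1 + |1><1| (x) sigma_x : the system B' controls the ancilla B''
    U = np.kron(np.outer(ket0, ket0), I2) + np.kron(np.outer(ket1, ket1), sx)
    embed = U @ np.kron(I2, ancilla.reshape(2, 1))          # maps B' into B' (x) B''
    K_plus = np.kron(I2, ket0.conj().reshape(1, 2)) @ embed   # ancilla outcome |0> -> b1 = +1
    K_minus = np.kron(I2, ket1.conj().reshape(1, 2)) @ embed  # ancilla outcome |1> -> b1 = -1
    return K_plus, K_minus

theta = 0.3
Kp, Km = kraus_from_coupling(theta)
assert np.allclose(Kp, np.diag([np.cos(theta), np.sin(theta)]))   # cos|0><0| + sin|1><1|
assert np.allclose(Km, np.diag([np.sin(theta), np.cos(theta)]))   # sin|0><0| + cos|1><1|
assert np.allclose(Kp.conj().T @ Kp + Km.conj().T @ Km, I2)       # completeness
```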
To obtain the projective description in terms of observables \(B_{y_{1}}\) and \(B_{y_{1},y_{2}}\), we can recast (25) as \[\begin{split} p(b_{1}&=+1|y_{1}=0)=\bra{\psi}U^{ \dagger}\Big{[}\openone\otimes\ket{0}\!\bra{0}\!\Big{]}U\ket{\psi}\\ p(b_{1}&=-1|y_{1}=0)=\bra{\psi}U^{\dagger}\Big{[} \openone\otimes\ket{1}\!\bra{1}\!\Big{]}U\ket{\psi}\end{split} \tag{26}\] and identify \[\begin{split}\Pi_{+}^{0}&\equiv U^{\dagger}\Big{[} \openone\otimes\ket{0}\!\bra{0}\!\Big{]}U\\ \Pi_{-}^{0}&\equiv U^{\dagger}\Big{[}\openone\otimes \ket{1}\!\bra{1}\!\Big{]}U\.\end{split} \tag{27}\] Substituting the explicit form of \(U\), we find that \(\Pi_{+}^{0}\) and \(\Pi_{-}^{0}\) are the projectors associated with the eigenvalues \(\pm 1\) of the observable \(B_{0}=\Pi_{+}^{0}-\Pi_{-}^{0}=\sigma_{z}\otimes\sigma_{z}\), as in Eq. (13). As a second step, Bob\({}_{2}\) receive the state after the unitary evolution. In correspondence of the input \(y_{2}=0\) he performs projective measurements of \(\sigma_{z}\), producing the statistics \[\begin{split} p(b_{2}&=+1|\mathbf{y}=0,0)=\,\bra{\psi}U^{ \dagger}\bigg{[}\frac{(\openone+\sigma_{z})}{2}\otimes\openone\bigg{]}U|\psi \rangle\\ p(b_{2}&=-1|\mathbf{y}=0,0)=\,\bra{\psi}U^{\dagger}\bigg{[} \frac{(\openone-\sigma_{z})}{2}\otimes\openone\bigg{]}U|\psi\rangle\end{split} \tag{28}\] The definitions \[\begin{split}\Pi_{+}^{0,0}&\equiv U^{\dagger}\bigg{[} \frac{(\openone+\sigma_{z})}{2}\otimes\openone\bigg{]}U\\ \Pi_{-}^{0,0}&\equiv U^{\dagger}\bigg{[}\frac{( \openone-\sigma_{z})}{2}\otimes\openone\bigg{]}U\end{split} \tag{29}\] lead to the projectors of \(B_{0,0}=\sigma_{z}\otimes\openone\). Similar arguments hold for the input \(y_{2}=1\), which realizes a projective measurement of \(\sigma_{x}\). After identifying \[\begin{split}\Pi_{+}^{0,1}&\equiv U^{\dagger}\bigg{[} \frac{(\openone+\sigma_{x})}{2}\otimes\openone\bigg{]}U\\ \Pi_{-}^{0,1}&\equiv U^{\dagger}\bigg{[}\frac{( \openone-\sigma_{x})}{2}\otimes\openone\bigg{]}U\,,\end{split} \tag{30}\] we can conclude that \(B_{0,1}=\sigma_{x}\otimes\sigma_{x}\), as in Eq. (13). In our protocol, reintroducing Alice, the state in \(\mathcal{H}_{A}\otimes\mathcal{H}_{B^{\prime}}\) is the Bell state \(\ket{\phi^{+}}\). We note that since the state shared by Alice and the Bobs \(\ket{\psi}=\ket{\phi^{+}}\ket{\theta}\) is pure, Eve Figure 5: The sequential framework can be described adopting the Von Neumann formalism, where the system is coupled with an ancillary state through an unitary evolution, which is a CNOT gate in this case. has no hope of gaining any information on the measurements outcomes, not even if \(\theta\in\{0,\frac{\pi}{4}\}\), where we have said that she can have perfect correlations with one of \(B_{0}\), \(B_{0,1}\). We now show other strategies valid for \(\theta\in\{0,\frac{\pi}{4}\}\) that give Eve more power. Let \(\ket{\psi}_{ABE}\equiv\ket{\phi^{+}}_{AB^{\prime}}\ket{\phi^{+}}_{B^{\prime \prime}E}\) be the global state including also Eve. Operators \(A_{0},A_{1},B_{1},B_{0,0}\) are kept the same as above and since they act only on \(\mathcal{H}_{AB^{\prime}}\), on which the state is also kept the same, Alice and the Bobs' correlations when these operators are measured are unchanged. However, for \(\theta=0\), Eve sets the Bobs' devices to measure in \(\mathcal{H}_{B^{\prime}}\otimes\mathcal{H}_{B^{\prime\prime}}\): \[\begin{split} B_{0}&=\sigma_{z}\otimes\openone\\ B_{0,1}&=\openone\otimes\sigma_{x}\end{split} \tag{10}\] and measures \(\sigma_{x}\) on her part of the state. 
Instead, for \(\theta=\frac{\pi}{4}\), Eve sets the Bobs' devices to measure:

\[\begin{split} B_{0}&=\openone\otimes\sigma_{z}\\ B_{0,1}&=\sigma_{x}\otimes\openone\end{split} \tag{11}\]

and measures \(\sigma_{z}\) on her part of the state. In both cases, simple application of the Born rule shows that Alice and the Bobs' correlations are the same as above (also for the parts including \(B_{0}\) and \(B_{0,1}\)), but Eve's outcome coincides with that of \(B_{0,1}\) (for \(\theta=0\)) or \(B_{0}\) (for \(\theta=\frac{\pi}{4}\)). This means that the guessing probability of the pair of outcomes of \(B_{0}\), \(B_{0,1}\) is \(G\geq\frac{1}{2}\), but considering the upper bound \(G\leq\frac{1}{2}\) proven in Appendix C, we have \(G=\frac{1}{2}\).

## Appendix E Experimental setup

The experimental setup is depicted in Fig. 6. We produce polarization-entangled photon pairs at approximately 810 nm [14] and send them to the two setups representing Alice and the Bobs using single-mode fibers. There, their polarization is controlled in free space with quarter-wave plates (QWP) and half-wave plates (HWP) to approximately obtain the Bell state \(\ket{\phi^{+}}=(\ket{HH}+\ket{VV})/\sqrt{2}\). Furthermore, a liquid crystal retarder is used on Alice's side to fine-tune the relative phase between the two components. Here, the \(\ket{H}\) (horizontal) and \(\ket{V}\) (vertical) states correspond to the states \(\ket{0}\) and \(\ket{1}\). Alice uses an HWP and a linear polarizer (LP) to perform projective measurements. Instead, on the other side, the setup is divided into \(\text{Bob}_{1}\) and \(\text{Bob}_{2}\): The first implements the weak measurement with a Mach-Zehnder interferometer (MZI) which couples polarization modes into spatial ones [14]. The parameter \(\theta\) of the coupling is controlled by setting the HWP shared between the two paths of the interferometer at \(\pi/4-\theta/2\). By selecting one outcome at a time with an external HWP, the MZI realizes the Kraus operators of Eq. (12). After the MZI, the photons are sent to \(\text{Bob}_{2}\), where they encounter a projective measurement station composed of an HWP and an LP. Finally, at each side, the photons are coupled into single-mode fibers and then directed to single-photon avalanche diodes (SPADs) connected to a 1 ps-resolution timetagger that returns coincidence counts within a \(\pm 0.55\) ns time window.
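As an added illustration of how the model parameters are extracted in practice (this snippet is ours and uses values of the same order as experiment ID 1 in Table 1), one can build the state of Eq. (18) and verify the relations \(p=1-V_{\mathcal{Z}}\) and \(c=V_{\mathcal{Z}}-V_{\mathcal{X}}\) quoted in Sec. III, taking the visibility in each basis as the two-photon correlator \(\langle\sigma\otimes\sigma\rangle\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_AB(p, c):
    """Noisy state of Eq. (18): depolarization p and decoherence c on |phi+>."""
    phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    diag = np.zeros((4, 4), dtype=complex)
    diag[0, 0] = diag[3, 3] = 0.5                      # (|00><00| + |11><11|)/2
    return (1 - p - c) * np.outer(phi, phi.conj()) + p * np.eye(4) / 4 + c * diag

def visibility(rho, s):
    """Two-photon visibility, taken here as the correlator <s (x) s>."""
    return np.real(np.trace(rho @ np.kron(s, s)))

p_true, c_true = 0.019, 0.017        # same order as experiment ID 1 in Table 1
rho = rho_AB(p_true, c_true)
V_Z, V_X = visibility(rho, sz), visibility(rho, sx)
assert np.isclose(1 - V_Z, p_true) and np.isclose(V_Z - V_X, c_true)
print(f"V_Z = {V_Z:.4f}, V_X = {V_X:.4f}  ->  p = {1 - V_Z:.3f}, c = {V_Z - V_X:.3f}")
```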
2310.20431
Raising the ClaSS of Streaming Time Series Segmentation
Ubiquitous sensors today emit high frequency streams of numerical measurements that reflect properties of human, animal, industrial, commercial, and natural processes. Shifts in such processes, e.g. caused by external events or internal state changes, manifest as changes in the recorded signals. The task of streaming time series segmentation (STSS) is to partition the stream into consecutive variable-sized segments that correspond to states of the observed processes or entities. The partitioning operation itself must be fast enough to keep pace with the input frequency of the signals. We introduce ClaSS, a novel, efficient, and highly accurate algorithm for STSS. ClaSS assesses the homogeneity of potential partitions using self-supervised time series classification and applies statistical tests to detect significant change points (CPs). In our experimental evaluation using two large benchmarks and six real-world data archives, we found ClaSS to be significantly more precise than eight state-of-the-art competitors. Its space and time complexity is independent of segment sizes and linear only in the sliding window size. We also provide ClaSS as a window operator with an average throughput of 1k data points per second for the Apache Flink streaming engine.
Arik Ermshaus, Patrick Schäfer, Ulf Leser
2023-10-31T13:07:41Z
http://arxiv.org/abs/2310.20431v3
# Raising the ClaSS of Streaming Time Series Segmentation ###### Abstract. Ubiquitous sensors today emit high frequency streams of numerical measurements that reflect properties of human, animal, industrial, commercial, and natural processes. Shifts in such processes, e.g. caused by external events or internal state changes, manifest as changes in the recorded signals. The task of streaming time series segmentation (STSS) is to partition the stream into consecutive variable-sized segments that correspond to states of the observed processes or entities. The partitioning operation itself must be fast enough to keep pace with the input frequency of the signals. We introduce ClaSS, a novel, efficient, and highly accurate algorithm for STSS. ClaSS assesses the homogeneity of potential partitions using self-supervised time series classification and applies statistical tests to detect significant change points (CPs). In our experimental evaluation using two large benchmarks and six real-world data archives, we found ClaSS to be significantly more precise than eight state-of-the-art competitors. Its space and time complexity is independent of segment sizes and linear only in the sliding window size. We also provide ClaSS as a window operator with an average throughput of 538 data points per second for the Apache Flink streaming engine.
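The abstract describes a generic pattern for STSS: maintain a sliding window, score candidate split points for homogeneity, and accept a change point only when the score is significant. The toy sketch below shows that overall loop in Python; it is not the authors' ClaSS implementation (ClaSS scores splits with self-supervised time series classification and a proper statistical test), and the score function, threshold, and window parameters are illustrative placeholders.

```python
from collections import deque
import numpy as np

def split_score(window: np.ndarray, i: int) -> float:
    # Placeholder homogeneity score for splitting `window` at offset i.
    # ClaSS instead scores a split via self-supervised classification;
    # here we simply use a standardized difference of segment means.
    left, right = window[:i], window[i:]
    spread = np.sqrt((left.var() + right.var()) / 2) + 1e-9
    return abs(left.mean() - right.mean()) / spread

def toy_stream_segmentation(stream, window_size=200, min_seg=20, threshold=3.0):
    # Toy streaming loop: keep a sliding window and report a change point
    # whenever the best split score exceeds a fixed threshold.
    window = deque(maxlen=window_size)
    for t, x in enumerate(stream):
        window.append(x)
        if len(window) < 2 * min_seg:
            continue
        w = np.asarray(window)
        splits = range(min_seg, len(w) - min_seg)
        scores = [split_score(w, i) for i in splits]
        best = int(np.argmax(scores))
        if scores[best] > threshold:
            yield t - len(w) + 1 + splits[best]   # global index of the detected CP
            window.clear()

rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 1, 500)])
print(list(toy_stream_segmentation(signal)))   # detects a change point near index 500
```

Replacing `split_score` with cross-validated self-supervised classification accuracy, and the fixed threshold with a statistical significance test, is, at a high level, what separates ClaSS from this toy loop.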
2303.18096
Mixed volumes of networks with binomial steady-states
The steady-state degree of a chemical reaction network is the number of complex steady-states for generic rate constants and initial conditions. One way to bound the steady-state degree is through the mixed volume of the steady-state system or an equivalent system. In this work, we show that for partitionable binomial networks, whose resulting steady-state systems are given by a set of binomials and a set of linear (not necessarily binomial) conservation equations, computing the mixed volume is equivalent to finding the volume of a single mixed cell that is the translate of a parallelotope. We then turn our attention to identifying cycles with binomial steady-state ideals. To this end, we give a coloring condition on directed cycles that guarantees the network has a binomial steady-state ideal. We highlight both of these theorems using a class of networks referred to as species-overlapping networks and give a formula for the mixed volume of these networks.
Jane Ivy Coons, Mark Curiel, Elizabeth Gross
2023-03-31T14:39:22Z
http://arxiv.org/abs/2303.18096v1
# Mixed volumes of networks with binomial steady-states ###### Abstract The steady-state degree of a chemical reaction network is the number of complex steady-states for generic rate constants and initial conditions. One way to bound the steady-state degree is through the mixed volume of the steady-state system or an equivalent system. In this work, we show that for partionable binomial networks, whose resulting steady-state systems are given by a set of binomials and a set of linear (not necessarily binomial) conservation equations, computing the mixed volume is equivalent to finding the volume of a single mixed cell that is the translate of a parallelotope. We then turn our attention to identifying networks with binomial steady-state ideals. To this end, we give a coloring condition on directed cycles that guarantees the network has a binomial steady-state ideal, and consequently, toric steady-states. We highlight both of these theorems using a class of networks referred to as species-overlapping networks and give a formula for the mixed volume of these networks. ## 1 Introduction Chemical reaction networks (CRNs) are graphs on a set of complexes (e.g. molecules) that visually summarize the interactions present in a chemical system. They are useful for modeling cellular biological processes such as signal transduction, and under a more general setting, are found in epidemiology and ecology. Under the assumption of mass-action kinetics, chemical reaction networks encode a system of _polynomial_ ordinary differential equations. Understanding the number of possible (real, positive) stable steady-states of such a system is key to determining whether a given reaction network is an appropriate model for a given biological process. Indeed, one highly active area of research in regards to chemical reaction networks is to develop criteria that guarantee or preclude _multistationarity_, the capacity for multiple real, positive stable steady-states (see, e.g. [11]). While multistationarity is determined by the number of possible positive stable steady-states over the reals, we can relax the definition of steady-state to include any complex solution to the steady-state equations obtained by setting each ODE equal to zero. The number of complex steady-states for generic rate constants and initial conditions is called the _steady-state degree_ of a chemical reaction network [7]. The steady-state degree is a bound on the number of real, positive steady-states. In general, the steady-state degree can be challenging to determine, and so far, most results focus on a particular model or families of models. For example, in [6], the authors show that the steady-state degree of the Wnt shuttle model is 9. One way to bound the steady-state degree of a chemical reaction network is through the mixed volume of their corresponding steady-state system. For example, the mixed volume was used to bound the steady-state degree of a model of ERK regulation in [13] as well as for three family of networks in [7], including multisite distributive phosphorylation networks. In this work, we focus on networks whose steady-state ideals are binomial, that is, generated by polynomials with at most two terms. In the literature, networks with binomial steady-state ideals that admit a real solution are referred to networks with _toric steady-states_[14]. This work has two key theorems (Theorem 3.9 and Theorem 4.1). 
In our first key theorem (Theorem 3.9), we give a formula for the mixed volume of _partionable_ binomial networks after showing that computing the mixed volume of these networks amounts to finding the volume of a single mixed cell that is the translate of a parallelotope (Theorem 3.5). The subtlety here is that while the steady-state ideal is binomial for these networks, adding the conservation equations, which are linear equations in the species concentrations, can result in a steady-state system that is not binomial. Our second key theorem (Theorem 4.1) concerns identifying binomial networks, with a particular focus on cycle networks. In [14], Perez Millan, Dickenstein, Shiu, and Conradi, give a sufficient condition on the complex-to-species rate matrix that guarantees a binomial steady-state ideal. In Theorem 4.1, we show that, for directed cycles, their condition is equivalent to a coloring condition on the underlying directed graph. We showcase both theorems through the example of _species-overlapping cycles_, giving a formula for the mixed volume of species overlapping cycles in Theorem 4.5. This paper is organized as follows: in Section 2, we review chemical reaction networks, including networks with binomial steady-states, and mixed volumes of polynomial systems, including fine mixed subdivisions. In Section 2, we also define PDSC networks, networks that satisfy a condition for binomiality of Perez Millan, Dickenstein, Shiu, and Conradi [14]. We call such a network a PDSC network and we show that in this case, it is straighforward to check if the steady-state system is equivalent to a square system. In Section 3, we give a formula for the mixed volume for partionable network binomial networks. In Section 4, we study cycles with binomial steady-states, giving a condition on directed cycles that guarantees toric steady-states. We end Section 4 with an investigation of species-overlapping cycles and give a fomula for the mixed volume of these networks. We conclude the manuscript with a discussion in Section 5. ## 2 Background Here we review chemical reaction network theory with a particular focus on the existing literature regarding networks with binomial steady-state ideals. We also define the mixed volume of a polynomial system and review the method of computing mixed volumes via fine mixed subdivisions. ### Chemical reaction networks. To motivate the definition of a chemical reaction network, it is beneficial to follow an example of a chemical reaction network and the polynomial ODEs that arise from it. Consider a closed system consisting of three species \(A\),\(B\), and \(C\) whose interactions are represented pictorially as in Figure 1. The _complexes_ of the above network are \(A+B\) and \(2C\). _Reactions_ are denoted by labeled arrows between complexes, where the labels are positive real numbers called _reaction rate constants_ and can be thought to govern the rate of the reaction. Interpret the reaction \[A+B\xrightarrow{\kappa_{1}}2C\] to mean \(A\) and \(B\) react to create two copies of \(C\), hence the complex label \(2C\). With the interest of tracking the amount, or _concentration_, of \(A\), \(B\), and \(C\) in the system, the net production of Figure 1: A chemical reaction network involving two reactions and three species. \(A\), for instance, in the reaction \(\ A+B\stackrel{{\kappa_{1}}}{{\longrightarrow}}2C\\) is \(-1\) since there is a loss of one copy of \(A\) if this reaction fires once. 
On the other hand, the net production of \(C\) in this same reaction is \(2\) since two copies of \(C\) are gained. These integer values are called _stoichiometric coefficients_ and they are determined by the structure of the complexes in each reaction. Under the assumption of mass-action kinetics, a reaction outputs its products proportionally to the product of the concentrations of reacting species. With this assumption, the change in concentration is a function of the species concentrations where each reaction contributes a term to this function. If \(x_{A}(t)\), \(x_{B}(t)\), and \(x_{C}(t)\) denote the concentrations of \(A\), \(B\), and \(C\) respectively at a given time \(t\), then for example the reaction \(\ A+B\stackrel{{\kappa_{1}}}{{\longrightarrow}}2C\\) contributes the term \(-\kappa_{1}x_{A}x_{B}\) to the change in concentration of \(A\) and \(B\) and the term \(2\kappa_{1}x_{A}x_{B}\) to the change in concentration of \(C\). The associated differential equations are therefore polynomial in the species concentrations where each reaction contributes a monomial term whose coefficient is a product of a rate constant and a stoichiometric coefficient. Thus, the assumption of mass-action kinetics gives rise to the following polynomial ordinary differential equations: \[\frac{dx_{A}}{dt} =-\kappa_{1}x_{A}x_{B}+\kappa_{2}x_{C}^{2}\] \[\frac{dx_{B}}{dt} =-\kappa_{1}x_{A}x_{B}+\kappa_{2}x_{C}^{2}\] \[\frac{dx_{C}}{dt} =2\kappa_{1}x_{A}x_{B}-2\kappa_{2}x_{C}^{2}.\] In general, a chemical reaction network is a directed graph \(G\) without loops whose vertices are the complexes of the network and whose edges are labeled by the reaction rate constants. Following the notation in [14], let \(s\) and \(m\) denote the number of species and number of complexes, respectively. Each of the complexes \(\mathbf{y}_{1},\ldots,\mathbf{y}_{m}\) are formal linear sums of the species of the network, where the coefficients are nonnegative integers. We collect the coefficients into a matrix \(Y=\big{(}y_{ij}\big{)}\in\mathbb{Z}^{m\times s}\) and identify the \(i\)th row of this matrix with the complex \(\mathbf{y}_{i}\). Assuming mass action kinetics, every pair of species interact with equal probability and is independent of location. The rate of production for the reaction \(\mathbf{y}_{i}\rightarrow\mathbf{y}_{j}\) is therefore proportional to a monomial in the concentrations of reacting species and this monomial is denoted by \(\mathbf{x}^{\mathbf{y}_{i}}=\prod_{k=1}^{s}x_{k}^{y_{ik}}\) where \(x_{k}\) denotes the concentration of the \(k\)th species. After choosing the proportional constants for every reaction, label the edge \(\mathbf{y}_{i}\rightarrow\mathbf{y}_{j}\) with the parameter \(\kappa_{ij}\in\mathbb{R}_{>0}\). Let \(A_{\boldsymbol{\kappa}}\) denote the negative Laplacian of the network, namely, the \(m\times m\) matrix whose row sums are zero and its \((i,j)\)-entry is \(\kappa_{ij}\) if \(\mathbf{y}_{i}\rightarrow\mathbf{y}_{j}\) is an edge of the graph \(G\). A main object of study is the matrix product \(\Sigma=Y^{t}\cdot A_{\boldsymbol{\kappa}}^{t}\), called the _complex-to-species rate matrix_. Every network defines the polynomial ODE system \[f(\mathbf{x})=\frac{d\mathbf{x}}{dt}=\Sigma\,\mathbf{x}^{Y^{t}}, \tag{1}\] where the notation \(\mathbf{x}^{B}\) for a matrix \(B\) with \(m\) columns \(\mathbf{b}_{i}\) denotes the vector of monomials \((\mathbf{x}^{\mathbf{b}_{1}},\ldots,\mathbf{x}^{\mathbf{b}_{m}})^{t}\). 
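To make equation (1) concrete, the following short sympy sketch (not from the paper; the variable names and encoding are ours) assembles \(Y\), \(A_{\boldsymbol{\kappa}}^{t}\), and \(\Sigma\) for the two-reaction network of Figure 1 and recovers the three differential equations displayed above. The last lines also check that the conservation-law vectors discussed below annihilate the right-hand side.

```python
import sympy as sp

xA, xB, xC, k1, k2 = sp.symbols('x_A x_B x_C kappa_1 kappa_2', positive=True)

# Complexes y_1 = A + B and y_2 = 2C, written in the species order (A, B, C)
Y = sp.Matrix([[1, 1, 0],
               [0, 0, 2]])
A_kappa_t = sp.Matrix([[-k1,  k2],      # transposed negative Laplacian of the
                       [ k1, -k2]])     # two-reaction network A + B <-> 2C
Sigma = Y.T * A_kappa_t                 # complex-to-species rate matrix
x_Y = sp.Matrix([xA * xB, xC**2])       # x^{Y^t} = (x^{y_1}, x^{y_2})^t

f = Sigma * x_Y                         # right-hand sides dx_A/dt, dx_B/dt, dx_C/dt
print(f.T)                              # the three ODE right-hand sides displayed above

# Conservation laws: rows of W span the left kernel of the stoichiometric matrix,
# so W * f must vanish identically.
W = sp.Matrix([[1, -1, 0],
               [0,  2, 1]])
assert (W * f).expand() == sp.zeros(2, 1)
```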
The ideal generated by the polynomials in the above system is called the _steady-state ideal_ and is denoted \(I_{G}\). A _chemical reaction system_ refers to the system of differential equations (1) associated to a network after making a choice of rate constants \(\boldsymbol{\kappa}=(\kappa_{ij})\) where \(\kappa_{ij}\in\mathbb{R}_{>0}\) for each reaction \(\mathbf{y}_{i}\rightarrow\mathbf{y}_{j}\). For instance, the matrices associated to the motivational example are \[Y^{t}=\begin{pmatrix}1&0\\ 1&0\\ 0&2\end{pmatrix},\qquad\quad A_{\boldsymbol{\kappa}}^{t}=\begin{pmatrix}-\kappa_{1}&\kappa_{2}\\ \kappa_{1}&-\kappa_{2}\end{pmatrix},\] \[\Sigma=\begin{pmatrix}-\kappa_{1}&\kappa_{2}\\ -\kappa_{1}&\kappa_{2}\\ 2\kappa_{1}&-2\kappa_{2}\end{pmatrix},\text{ and }\quad\begin{pmatrix}\mathbf{x}^{\mathbf{y}^{1}}\\ \mathbf{x}^{\mathbf{y}^{2}}\end{pmatrix}=\begin{pmatrix}x_{A}x_{B}\\ x_{C}^{2}\end{pmatrix}.\] We remark that the chemical reaction system (1) does not typically have full rank, and hence its vanishing locus is not zero-dimensional. In particular, there may be equations that are linearly dependent. This is easily seen by factoring the system (1) in another way. It is sometimes factored as \(f(\mathbf{x})=N\mathsf{diag}\left(\mathbf{\kappa}\right)\mathbf{x}^{B}\) where \(\mathsf{diag}\left(\mathbf{\kappa}\right)\) is a diagonal matrix with diagonal entries given by \(\mathbf{\kappa}\) and the columns of \(B\) are \(\mathbf{y}_{i}\) if \(\mathbf{y}_{i}\to\mathbf{y}_{j}\) is a reaction. The matrix \(N\) is called the _stoichiometric matrix_ and its columns are \(\mathbf{y}_{j}-\mathbf{y}_{i}\) whenever \(\mathbf{y}_{i}\to\mathbf{y}_{j}\) is a reaction. The column span of \(N\) is known in the literature as the _stoichiometric subspace_; it is a linear subspace of \(\mathbb{R}^{s}\). The redundancy of the equations in the system (1) is due to the elements of the left kernel of \(N\), i.e. if \(WN=0\) then \(Wf(\mathbf{x})=0\). Antidifferentiating gives rise to the linear equations \(W\mathbf{x}=c\) called _conservation laws_. The vectors \(\mathbf{w}\) belonging to the left kernel of \(N\) are referred to as _conservation law vectors_, and the set of all such vectors is a linear subspace of \(\mathbb{R}^{s}\), referred to as the _linear space of conservation laws_. If redundant equations in the system (1) are replaced by a conservation law, the new system is full rank. For the chemical reaction network in Figure 1, we have \[N=\begin{pmatrix}-1&1\\ -1&1\\ 2&-2\end{pmatrix},\text{ and }\quad W=\begin{pmatrix}1&-1&0\\ 0&2&1\end{pmatrix}\] and the new system is \[0 =x_{A}-x_{B}-c_{1}\] \[0 =2x_{B}+x_{C}-c_{2}\] \[0 =2\kappa_{1}x_{A}x_{B}-2\kappa_{2}x_{C}^{2}.\] In the present work, we are interested in chemical reaction networks where the steady-state ideal is a binomial ideal. Occasionally, the differential equations arising from mass action kinetics on the network are themselves binomial, as is the case in Example 2.9 in the next section. However, it can also happen that these differential equations are not binomial, but ideal combinations of them are, so that their steady-state ideal is generated by binomials. In this case, we say that the network has _binomial steady-states_. The following condition, introduced by Perez Millan, Dickenstein, Shiu, and Conradi in [14], is a sufficient condition for the network to have binomial steady-states.
**Condition 2.1**.: _For a chemical reaction system given by a network \(G\) with \(m\) complexes and reaction rate constants \(\kappa_{ij}\), let \(\Sigma\) denote its complex-to-species rate matrix, and set \(d=\mathsf{dim}\left(\mathsf{Ker}\left(\Sigma\right)\right)\). We say that the chemical reaction system satisfies Condition 2.1, if there exists a partition \(I_{1},\dots,I_{d}\) of \(\{1,\dots,m\}\) and a basis \(\mathbf{b}^{1},\dots,\mathbf{b}^{d}\) of \(\mathsf{Ker}\left(\Sigma\right)\) with \(\mathsf{supp}(\mathbf{b}^{i})=I_{i}\)._ We will refer to networks as _PDSC networks_ if they satisfy Condition 2.1, since PDSC are the initials of the authors of [14] where this condition was introduced. These authors prove the following result about PDSC networks. **Theorem 2.2** ([14], Theorem 3.3).: _Let \(G\) be a PDSC network as described in Condition 2.1. Then the steady-state ideal \(I_{G}\) is generated by the binomials of the form_ \[b_{j_{1}}^{j}\mathbf{x}^{\mathbf{y}_{j_{2}}}-b_{j_{2}}^{j}\mathbf{x}^{\mathbf{ y}_{j_{1}}}\] _for all \(j_{1},j_{2}\in I_{j}\) and all \(j\in[d]\)._ Note that for a fixed \(j\in[d]\), the binomials of the form \(b_{j_{1}}^{j}\mathbf{x}^{\mathbf{y}_{j_{2}}}-b_{j_{2}}^{j}\mathbf{x}^{\mathbf{ y}_{j_{1}}}\) for \(j_{1},j_{2}\in I_{j}\) are generated by the \(\#I_{j}-1\) binomials \[b_{j^{\prime}}^{j}\mathbf{x}^{\mathbf{y}_{j_{2}}}-b_{j_{2}}^{j}\mathbf{x}^{ \mathbf{y}_{j^{\prime}}}\] where \(j^{\prime}\) is fixed and \(j_{2}\in I_{j}\setminus\{j^{\prime}\}\). This observation yields the following corollary. **Corollary 2.3**.: _Let \(G\) be a PDSC network with \(m\) complexes and with \(\mathsf{dim}\left(\mathsf{Ker}\left(\Sigma\right)\right)=d\). Then \(I_{G}\) has a generating set consisting of \(m-d\) binomials._ ### Mixed volumes. In this section, we introduce the concept of the mixed volume of a set of polytopes and their applications for counting solutions to systems of polynomial equations. We first introduce the mixed volume for any set of integer polytopes. Then, we introduce the _Newton polytope_ of a polynomial and state the Bernstein-Khovanskii-Kouchnirenko (BKK) Theorem which bounds the number of solutions to a polynomial system in terms of the mixed volume of its Newton polytopes. We conclude the section with a discussion of the techniques used to compute the mixed volume. Let \(P_{1},\ldots,P_{r}\) be polytopes in \(\mathbb{R}^{r}\). Their _Minkowski sum_ is the polytope in \(\mathbb{R}^{r}\), \[P_{1}+\cdots+P_{r}:=\{\mathbf{v}_{1}+\cdots+\mathbf{v}_{r}\mid\mathbf{v}_{i}\in P _{i}\text{ for all }i\}.\] Consider the volume of the Minkowski sum, \(\mu_{1}P_{1}+\cdots+\mu_{r}P_{r}\) for some positive \(\mu_{i}\). In fact, this volume is a polynomial in the variables \(\mu_{1},\ldots,\mu_{r}\)[15, Theorem 5.1.7]. **Definition 2.4**.: The _mixed volume_ of \(P_{1},\ldots,P_{r}\), denoted \(\mathsf{MVol}(P_{1},\ldots,P_{r})\), is the coefficient of \(\prod_{i=1}^{r}\mu_{i}\) in the polynomial \(\mathsf{Vol}(\mu_{1}P_{1}+\cdots+\mu_{r}P_{r})\). For background on mixed volumes, we refer the reader to [2] and [15]. For their applications to solving sparse polynomial systems, [1] and [10] are good starting points. Let \(f\in\mathbb{C}[x_{1}^{\pm},\ldots,x_{r}^{\pm}]\) be a Laurent polynomial with complex coefficients. Its _support_, denoted \(\mathsf{supp}(f)\), is the set of all \(\mathbf{y}\in\mathbb{Z}^{r}\) such that \(\mathbf{x}^{\mathbf{y}}\) has nonzero coefficient in \(f\). 
The _Newton polytope_, denoted \(\mathsf{Newt}(f)\), is the convex hull of the support of \(f\); that is, \[\mathsf{Newt}(f):=\mathsf{conv}\{\mathbf{y}\mid\mathbf{y}\in\mathsf{supp}(f)\}.\] This allows us to define the notion of the mixed volume of a square system of Laurent polynomials. **Definition 2.5**.: Let \(F=\{f_{1}=0,\ldots,f_{r}=0\}\) be a system of \(r\) Laurent polynomials over \(\mathbb{C}\) in \(r\) variables. The _mixed volume of_\(F\), denoted \(\mathsf{MVol}(F)\), is the mixed volume of the Newton polytopes of its polynomials; that is, \[\mathsf{MVol}(F):=\mathsf{MVol}(\mathsf{Newt}(f_{1}),\ldots,\mathsf{Newt}(f_{ r})).\] The following theorem, known as the Bernstein-Khovanskii-Kouchnirenko (BKK) bound, relates the number of solutions of a square polynomial system to its mixed volume. **Theorem 2.6** (BKK Bound [1, 12]).: _Let \(p_{1},\ldots,p_{r}\) be a polynomial system in the Laurent polynomial ring \(\mathbb{C}[x_{1}^{\pm 1},\ldots,x_{r}^{\pm 1}]\). This system has at most_ \[\mathsf{MVol}(\mathsf{Newt}(p_{1}),\ldots,\mathsf{Newt}(p_{r}))\] _solutions in \((\mathbb{C}^{*})^{n}\), with equality if the coefficients of the polynomials \(p_{i}\) are sufficiently generic given their support._ The coefficients of the polynomials arising from a chemical reaction network are not always generic since the reaction rates can appear as coefficients in more than one equation in the system. So the mixed volume of the mass-action system is simply an upper bound on the number of complex steady-states with no zero entries without the guarantee of equality in general. One standard way to compute the mixed volume of a square polynomial system \(f_{1},\ldots,f_{r}\) is to compute a fine mixed subdivision of the set of supports of each \(f_{i}\). We introduce the relevant definitions and theorems for square polynomial systems following the notation of [3] and [10]. Denote by \(\mathcal{A}_{i}\) the support of the polynomial \(f_{i}\), and let \(\mathcal{A}=(\mathcal{A}_{1},\ldots,\mathcal{A}_{r})\). A _cell_ of \(\mathcal{A}\) is any tuple of the form \((C^{1},\ldots,C^{r})\) where each \(C^{i}\) is a non-empty subset of \(\mathcal{A}_{i}\). For any cell \(C=(C^{1},\ldots,C^{r})\), we denote by \(\mathsf{conv}(C)\) the Minkowski sum, \(\sum_{i=1}^{r}\mathsf{conv}(C^{i})\). The _type_ of \(C\) is \(\mathsf{type}(\mathsf{C}):=(\mathsf{dim}\,\mathsf{conv}(C^{1}),\ldots,\mathsf{ dim}\,\mathsf{conv}(C^{r}))\). **Definition 2.7**.: A set \(S=\{S_{1},\ldots,S_{\ell}\}\) consisting of cells of \(\mathcal{A}\) is a _subdivision_ if * \(\mathsf{dim}\,\mathsf{conv}(S_{i})=r\) for all \(i\in[\ell]\), * \(\mathsf{conv}(S_{i})\cap\mathsf{conv}(S_{j})\) is a face of both \(\mathsf{conv}(S_{i})\) and \(\mathsf{conv}(S_{j})\) for all \(i,j\in[\ell]\), and * \(\mathsf{conv}(\mathcal{A})=\cup_{i=1}^{\ell}\mathsf{conv}(S_{i})\). For each cell in \(S\), let \(S_{i}=(S_{i}^{(1)},\ldots,S_{i}^{(r)})\). This subdivision is _mixed_ if for each \(S_{j}\in S\), we have \[\sum_{i=1}^{r}\mathsf{dim}\,\mathsf{conv}(S_{j}^{(i)})=r.\] Finally, a subdivision is a _fine mixed subdivision_ if for each \(S_{j}\in S\), we have that \[\sum_{i=1}^{r}(\#S_{j}^{(i)}-1)=r.\] This definition can be interpreted geometrically as giving a subdivision of the Minkowski sum of the Newton polytopes of each \(f_{i}\). 
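As a quick aside before continuing with the geometry of fine mixed subdivisions: in two variables the mixed volume of Definition 2.5 can be read off directly from areas, since the coefficient of \(\mu_{1}\mu_{2}\) in \(\mathsf{Vol}(\mu_{1}P_{1}+\mu_{2}P_{2})\) equals \(\mathsf{Vol}(P_{1}+P_{2})-\mathsf{Vol}(P_{1})-\mathsf{Vol}(P_{2})\). The sketch below (not from the paper, and not how mixed volumes are computed in practice) illustrates the BKK count of Theorem 2.6 on two toy supports, using scipy for convex hull areas.

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def area(points):
    return ConvexHull(np.asarray(points, dtype=float)).volume   # in 2-D, .volume is the area

def mixed_volume_2d(P, Q):
    # Coefficient of mu1*mu2 in Area(mu1*P + mu2*Q):
    #   Area(P + Q) - Area(P) - Area(Q)
    PQ = [tuple(np.add(p, q)) for p, q in product(P, Q)]        # Minkowski sum of vertex sets
    return area(PQ) - area(P) - area(Q)

line  = [(0, 0), (1, 0), (0, 1)]                                # support of a generic linear form
conic = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]        # support of a generic conic

print(mixed_volume_2d(line, line))     # 1.0 -> one solution of two generic linear equations
print(mixed_volume_2d(conic, conic))   # 4.0 -> Bezout/BKK count for two generic conics
```

For larger systems one instead works with fine mixed subdivisions as in Theorem 2.8, or with software such as PHCpack via Macaulay2, which the authors use for Example 2.10 below.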
In particular, if \(S=\{S_{1},\ldots,S_{\ell}\}\) is a fine mixed subdivision of \(\mathcal{A}\), then the set of polytopes \(\{\mathsf{conv}(S_{1}),\ldots,\mathsf{conv}(S_{\ell})\}\) is a subdivision of \(\sum_{i=1}^{r}\mathsf{Newt}(f_{i})\). For this reason, we refer to a fine mixed subdivision of the tuple \(\mathcal{A}\) of supports of the polynomials and of the Minkowski sum of their Newton polytopes interchangeably. The following theorem relates the mixed volume of this polynomial system to the cells of this subdivision that are Minkowski sums of line segments. It follows directly from Theorem 2.4 of [10] or Proposition 12 of [3] and the definition of the mixed volume of a system. **Theorem 2.8**.: _Let \(f_{1},\ldots,f_{r}\) be a square polynomial system in \(\mathbb{C}[x_{1},\ldots,x_{r}]\) and let \(\mathcal{A}=(\mathcal{A}_{1},\ldots,\mathcal{A}_{r})\) where \(\mathcal{A}_{i}\) consists of all exponent vectors in the support of \(f_{i}\). Let \(S=\{S_{1},\ldots,S_{\ell}\}\) be a fine mixed subdivision of \(\mathcal{A}\). Then the mixed volume of this system is_ \[\sum_{\begin{subarray}{c}S_{i}\in S\\ \mathsf{type}(S_{i})=(1,\ldots,1)\end{subarray}}\mathsf{Vol}(\mathsf{conv}(S_ {i})).\] **Example 2.9** (Species-overlapping Cycle).: Consider the directed cycle with three complexes and three species given by: We call this network a _species-overlapping cycle_ and characterize the mixed volumes of these types networks in Section 4. The system of ordinary differential equations arising from this network is \[f_{A} =\kappa_{2}x_{B}x_{C}-\kappa_{1}x_{A}x_{B}\] \[f_{B} =\kappa_{3}x_{A}x_{C}-\kappa_{2}x_{B}x_{C}\] \[f_{C} =\kappa_{1}x_{A}x_{B}-\kappa_{3}x_{A}x_{C}.\] Note that this is a binomial system which is redundant as \(f_{A}+f_{B}+f_{C}=0\). It has a single conservation law which states that \(x_{A}+x_{B}+x_{C}=c\) for some constant \(c\). So we consider a fine mixed subdivision of the set \(\mathcal{A}=\{\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3}\}\) where \[\mathcal{A}_{1} =\{\mathbf{e}_{2}+\mathbf{e}_{3},\mathbf{e}_{1}+\mathbf{e}_{2}\},\] \[\mathcal{A}_{2} =\{\mathbf{e}_{2}+\mathbf{e}_{3},\mathbf{e}_{1}+\mathbf{e}_{2}\}, \text{ and }\] \[\mathcal{A}_{3} =\{\mathbf{0},\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}.\] Consider the collection of cells, \[S_{1} =\big{\{}\mathcal{A}_{1},\mathcal{A}_{2},\{\mathbf{0},\mathbf{e}_{1} \}\big{\}}\] \[S_{2} =\big{\{}\{\mathbf{e}_{1}+\mathbf{e}_{2}\},\{\mathbf{e}_{2}+ \mathbf{e}_{3}\},\mathcal{A}_{3}\big{\}}\] \[S_{3} =\big{\{}\{\mathbf{e}_{1}+\mathbf{e}_{2}\},\mathcal{A}_{2},\{ \mathbf{0},\mathbf{e}_{1},\mathbf{e}_{3}\}\big{\}}\] \[S_{4} =\big{\{}\mathcal{A}_{1},\{\mathbf{e}_{2}+\mathbf{e}_{3}\},\{ \mathbf{0},\mathbf{e}_{2},\mathbf{e}_{3}\}\big{\}}.\] One can check that this forms a fine mixed subdivision of \(\mathcal{A}\) as depicted in Figure 2. Note that \(S_{1}\) is the only cell of type \((1,1,1)\) and its volume is \(1\), so by Theorem 2.8, the mixed volume of this system is \(1\). We note here that the mixed volume is not an intrinsic property of a chemical reaction network. In fact, one can obtain different mixed volumes from different choices of generators of the steady-state ideal and/or conservation laws. We illustrate an example of this below. **Example 2.10**.: Consider the network pictured in Figure 3. 
The system of ordinary differential equations arising from this system is \[f_{A} =-\kappa_{1}x_{A}\] \[f_{B} =\kappa_{1}x_{A}-\kappa_{2}x_{B}x_{C}+\kappa_{3}x_{C}^{2}\] \[f_{C} =\kappa_{1}x_{A}+\kappa_{2}x_{B}x_{C}-\kappa_{3}x_{C}^{2}.\] It has a single conservation law, which requires that \(w:=2x_{A}+x_{B}+x_{C}-c=0\) for some generic constant \(c\). Note that the steady-state ideal \(I_{G}=\langle f_{A},f_{B}\rangle=\langle f_{B},f_{C}\rangle\). The system \(\{f_{A},f_{C},w\}=0\) has mixed volume \(0\) since \(f_{A}\) is a monomial. Indeed, any system including a monomial has mixed volume zero since there can be no cell of type \((1,\ldots,1)\) in any fine mixed subdivision of the Minkowski sum of the Newton polytopes in the system. However, the system \(\{f_{B},f_{C},w\}\) contains no monomials, and we can compute using the PHCpack package for Macaulay2 [5, 8, 16] that its mixed volume is \(2\). Figure 3: A network where the mixed volume depends on the choice of generating set. Figure 2: The fine mixed subdivision described in Example 2.9. The cells are \(S_{1}\) in blue, \(S_{2}\) in black, \(S_{3}\) in red and \(S_{4}\) in green. ### Squareness of PDSC Networks In order to compute the mixed volume of a system of steady-state equations and conservation laws, that system must be square. It is not necessarily straightforward to check whether the steady-state ideal has a binomial generating set that makes the system augmented by conservation laws square; however, in the case of PDSC networks, we obtain a sufficient condition as follows. Let \(G\) be a network. A _linkage class_ of \(G\) is a connected component of its underlying directed graph. Let \(\ell\) denote the number of linkage classes of \(G\). Following [9], we define its _deficiency_ by \[\delta:=\mathsf{dim}\,(\mathsf{Ker}\,Y^{t}\cap\mathsf{Im}\,A_{\kappa}^{t}).\] Note that this is equal to \(\mathsf{dim}\,\mathsf{Ker}\,Y^{t}A_{\kappa}^{t}-\mathsf{dim}\,\mathsf{Ker}\, A_{\kappa}^{t}\). There are other definitions of deficiency in the literature which are not always equivalent. In particular, it is often defined as \(m-\ell-\mathsf{rank}\,(N)\). Proposition 5.1 of [9] states that if every linkage class has exactly one terminal strong linkage class, then these definitions coincide; that is, in this case, \(\mathsf{rank}\,(N)=m-\ell-\mathsf{dim}\,\mathsf{Ker}\,Y^{t}A_{\kappa}^{t}+ \mathsf{dim}\,\mathsf{Ker}\,A_{\kappa}^{t}\). **Proposition 2.11**.: _Let \(G\) be a PDSC network such that every linkage class contains exactly one terminal strong linkage class. Then the system consisting of the binomial generators guaranteed by 2.3 augmented by a basis of conservation laws is a square system._ Proof.: The nullity of the weighted Laplacian \(A_{\kappa}\) is equal to the number of terminal strong linkage classes [4], and hence equal to the number of linkage classes. Thus by Proposition 5.1 of [9], we have \[\mathsf{rank}\,(N) =m-\ell-\mathsf{dim}\,\mathsf{Ker}\,Y^{t}A_{\kappa}^{t}+\mathsf{ dim}\,\mathsf{Ker}\,A_{\kappa}^{t}\] \[=m-\ell-(m-\mathsf{rank}\,Y^{t}A_{\kappa}^{t})+\ell\] \[=\mathsf{rank}\,Y^{t}A_{\kappa}^{t}.\] Thus the nullity of \(N\) and \(Y^{t}A_{\kappa}^{t}\) are both some fixed \(d\). Let \(\{\mathbf{w}_{1},\ldots,\mathbf{w}_{d}\}\) be a basis for \(\mathsf{Ker}\,N\). Let \(f_{1},\ldots,f_{s-d}\) be the generating set for \(I_{G}\) guaranteed by Corollary 2.3. 
Then the system obtained by augmenting \(f_{1},\ldots,f_{s-d}\) with the conservation equations associated to \(\mathbf{w}_{1},\ldots,\mathbf{w}_{d}\) is a square system, as needed. ## 3 Mixed Volumes of Partitionable Binomial Networks Now we will focus on a specific class of binomial networks, particularly _partitionable networks_, and we show that, for these networks, computing the mixed volume reduces to computing the volume of a single mixed cell. We further show that one need not actually find such a mixed cell - in fact, in these cases, the mixed volume can be computed without computing a fine mixed subdivision. In order to define partionable networks, we need the following algebraic notion of multihomogeneity. **Definition 3.1**.: Let \(I\) be an ideal in \(\mathds{k}[x_{1},\ldots,x_{s}]\). Let \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\in\mathbb{Z}^{s}\) be integer weight vectors. Then \(I\) is _multihomogeneous_ with respect to the _multigrading_ specified by \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\) if it has a generating set \(f_{1},\ldots,f_{s-k}\) such that for each \(f_{i}=\sum_{\mathbf{a}\in\mathcal{A}_{i}}\beta_{\mathbf{a}}\mathbf{x}^{ \mathbf{a}}\), we have \(\mathbf{a}\cdot\mathbf{w}_{j}=\mathbf{b}\cdot\mathbf{w}_{j}\) for all \(j\in[k]\) and \(\mathbf{a},\mathbf{b}\in\mathcal{A}_{i}\). We note that multihomogeneity with respect to \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\) is a property of \(\mathsf{span}\{\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\}\) and does not depend on the choice of spanning set of this vector space. Indeed, an ideal is multihomogenous with respect to \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\) if and only if it is homogeneous with respect to the weight order specified by any \(\mathbf{w}\) in their span. We can now define a _partitionable network_, where the structure of the conservation laws leads to very nice geometry on the level of mixed volumes. **Definition 3.2**.: A network \(G\) is _partitionable_ if 1. there are \(0/1\) vectors \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\) with disjoint support such that the linear space of all conservation laws of \(G\) is equal to \(\mathsf{span}\left\{\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\right\}\) and 2. the steady-state ideal is multihomogeneous with respect to the multigrading induced by the conservation laws. _Remark_.: The multihomogeneity condition for a network to be partitionable has a nice geometric interpretation. This condition is equivalent to the affine hull of the newton polytope \(\mathsf{Newt}(f_{i})\) being parallel to the stoichiometric subspace for each generator \(f_{i}\) of the steady-state ideal. Observe that the first of the conditions in Definition 3.2 is the more restrictive one; in the proof of Theorem 3.5, it places significant restrictions on the form of the mixed cells that can appear in a fine mixed subdivision. The second condition is more mild. In particular, it is satisfied if the evaluations of a conservation law on each complex are equal. Noteably, the undirected graph underlying the network is connected; in this case, the network is referred to as _weakly connected_. **Proposition 3.3**.: _If \(G\) is weakly connected, then its steady-state ideal \(I_{G}\) is multihomogeneous with respect to the multigrading induced by the conservation law vectors._ Proof.: Let \(G\) be a weakly connected network with complexes \(\mathbf{y}_{1},\ldots,\mathbf{y}_{m}\) and conservation law vectors \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\). 
Then by definition of a conservation law, for each reaction \(\mathbf{y}_{i}\rightarrow\mathbf{y}_{j}\) and each conservation law vector \(\mathbf{w}_{\ell}\), we have \(\mathbf{y}_{i}\cdot\mathbf{w}_{\ell}=\mathbf{y}_{j}\cdot\mathbf{w}_{\ell}\). Moreover, since \(G\) is weakly connected, there is an undirected path between each pair of complexes. Thus this equality holds for any pair of complexes \(\mathbf{y}_{i}\) and \(\mathbf{y}_{j}\) and any conservation law vector \(\mathbf{w}_{\ell}\). Every term of the steady-state equations \(d\dot{\mathbf{x}}/dt=0\) that generate \(I_{G}\) is of the form \(\mathbf{x}^{\mathbf{y}_{i}}\) for some complex \(\mathbf{y}_{i}\). Thus the steady-state equations form a generating set of \(I_{G}\) that is multihomogeneous with respect to the multigrading specified by the conservation law vectors. **Example 3.4**.: As a non-example, the Edelstein network is not partitionable. It has one conservation law vector \(\mathbf{w}=(0,1,1)\) and if we consider the two exponent vectors \(\mathbf{a}=(2,0,0)\) and \(\mathbf{b}=(1,1,0)\) of the polynomial \(f_{A}(\mathbf{x})=\kappa_{1}x_{A}-\kappa_{2}x_{A}^{2}-\kappa_{3}x_{A}x_{B}+ \kappa_{4}x_{C}\), we compute \(\mathbf{a}\cdot\mathbf{w}=0\) while \(\mathbf{b}\cdot\mathbf{w}=1\). The main result of this section is the following characterization of the mixed volume of a partitionable binomial reaction network. In particular, we find that any fine mixed subdivision of the Minkowski sum of the Newton polytopes of such a network has at most one cell of type \((1,\ldots,1)\). This allows us to easily compute the mixed volume of such a network, since if a type \((1,\ldots,1)\) cell exists, then the mixed volume is the volume of this single cell. If no such mixed cell exists, then the mixed volume is zero. **Theorem 3.5**.: _Let \(G\) be a partionable binomial network with \(s\) species and \(k\) conservation law vectors \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\) with disjoint support. Suppose that its steady-state ideal has a binomial generating set \(f_{1},\ldots,f_{s-k}\) with exactly \(s-k\) elements. Then any fine mixed subdivision of \(\sum_{i=1}^{s-k}\mathsf{Newt}(f_{i})+\sum_{j=1}^{k}\mathsf{Newt}(\mathbf{w}_{j }\mathbf{x}-c_{j})\) has at most one cell of type \((1,\ldots,1)\). This cell, if it exists, is a translate of some parallelotope of the form_ \[\sum_{i=1}^{s-k}\mathsf{Newt}(f_{i})+\sum_{j=1}^{k}\mathsf{conv}(\mathbf{0}, \mathbf{e}_{\alpha_{j}}),\] _where \(\alpha_{j}\) is in the support of \(\mathbf{w}_{j}\) for all \(j\)._ Before we prove Theorem 3.5, we require the following well-known proposition regarding the Minkowski sums of a polytope with two different edges. **Proposition 3.6**.: _Let \(P\) and \(Q\) be \(d\)-dimensional polytopes in \(\mathbb{R}^{d}\) that share a facet \(F\). Suppose further that \(F\) is the face of both that maximizes the same linear functional \(\mathbf{a}\). Then \(P\) and \(Q\) intersect on their interiors._ Proof.: We have that \(F=\{\mathbf{x}\in P\mid\mathbf{a}\cdot\mathbf{x}=b\}=\{\mathbf{x}\in Q\mid \mathbf{a}\cdot\mathbf{x}=b\}\) and that \(P\) and \(Q\) are contained in the closed halfspace \(\{\mathbf{x}\mid\mathbf{a}\mathbf{x}\leq b\}\). Moreover, \(F\) is a facet of both polytopes. 
So we may write minimal H-representations \[P=\{\mathbf{x}\mid A\mathbf{x}\leq\mathbf{b}\}\qquad\text{ and }\qquad Q=\{ \mathbf{x}\mid C\mathbf{x}\leq\mathbf{d}\}\] where the first rows of \(A\) and \(C\), \(\mathbf{a}_{1}\) and \(\mathbf{c}_{1}\) respectively, are both equal to \(\mathbf{a}\) and the first entries of \(\mathbf{b}\) and \(\mathbf{d}\) are both \(b\). Since \(F\) is a facet, we may further assume that for all \(i>1\) and all \(\mathbf{x}\in F\), \(\mathbf{a}_{i}\cdot\mathbf{x}<b_{i}\) and \(\mathbf{c}_{i}\cdot\mathbf{x}<d_{i}\). Let \(\mathbf{z}\in\mathsf{relint}(F)\). We construct an \(\epsilon>0\) such that \(\mathbf{z}-\epsilon\mathbf{a}^{t}\in\mathsf{int}(P)\cap\mathsf{int}(Q)\). Consider the rows \(\mathbf{a}_{i},\mathbf{c}_{j}\) of \(A\) and \(C\) respectively for \(i,j>1\). Let \(\epsilon>0\) be such that \(-\epsilon\mathbf{a}_{i}\cdot\mathbf{a}^{t}<b_{i}-\mathbf{a}_{i}\cdot\mathbf{z}\) and \(-\epsilon\mathbf{c}_{j}\cdot\mathbf{a}^{t}<d_{j}-\mathbf{c}_{j}\cdot\mathbf{z}\) for all \(i,j>1\). Such an \(\epsilon\) exists since \(b_{i}-\mathbf{a}_{i}\cdot\mathbf{z}\) and \(d_{j}-\mathbf{c}_{j}\cdot\mathbf{z}\) are strictly positive for all \(i,j>1\). Then \[\mathbf{z}-\epsilon\mathbf{a}^{t}\in\{\mathbf{x}\mid A\mathbf{x}<\mathbf{b}\} \cap\{\mathbf{x}\mid C\mathbf{x}<\mathbf{d}\},\] which is exactly the intersection of the interiors of \(P\) and \(Q\), as needed. Proof of Theorem 3.5.: Since the mixed volume is translation invariant, we replace each \(\mathsf{Newt}(f_{i})\) with its translation to the origin, \(P_{i}:=\mathsf{conv}(\mathbf{0},\mathbf{y}_{1}^{(i)}-\mathbf{y}_{2}^{(i)})\), where \(f_{i}=\mathbf{x}^{y_{1}^{(i)}}-\mathbf{x}^{y_{2}^{(i)}}\). Let \(P=\sum_{i=1}^{s-k}P_{i}+\sum_{j=1}^{k}\mathsf{Newt}(\mathbf{w}_{j}\mathbf{x}- c_{j})\). Each polytope \(P_{i}\) is a line segment since \(f_{i}\) is a binomial. Thus, for any fine mixed subdivision, a mixed cell of type \((1,\ldots,1)\) must have each \(P_{i}\) as a summand. Further, note that there are two types of edges of each simplex \(W_{j}:=\mathsf{Newt}(\mathbf{w}_{j}\mathbf{x}-c_{j})\); they are of the form \(\mathsf{conv}(\mathbf{0},\mathbf{e}_{\alpha})\) or \(\mathsf{conv}(\mathbf{e}_{\alpha},\mathbf{e}_{\beta})\) where \(\alpha\) and \(\beta\) are in the support of \(\mathbf{w}_{j}\). For all \(j\in[k]\), if \(\alpha\) belongs to the support of \(\mathbf{w}_{j}\), then since \(G\) is partitionable, \(\mathbf{e}_{\alpha}\) belongs to the linear space \[\bigcap_{\begin{subarray}{c}h=1\\ h\neq j\end{subarray}}^{k}\{\mathbf{p}\mid\mathbf{w}_{h}\mathbf{p}=0\}.\] Moreover, since \(G\) is partitionable, the steady-state ideal \(I_{G}\) is multihomogeneous with respect to the multigrading induced by the conservation laws. Since \(I_{G}\) contains no monomials, this implies that each \(f_{i}\) is multihomogeneous with respect to this multigrading as well. Thus \(\mathbf{w}_{h}\cdot(\mathbf{y}_{1}^{(i)}-\mathbf{y}_{2}^{(i)})=0\) for all \(h\in[k]\) and \(i\in[s-k]\). Let \(Q\) be a type \((1,\ldots,1)\) mixed cell of a fine mixed subdivision of \(P\). Then \(Q=\sum_{i=1}^{s-k}P_{i}+\sum_{j=1}^{k}E_{j}\) where each \(E_{j}\) is an edge of \(W_{j}\). For the sake of contradiction, suppose that \(E_{j}=\mathsf{conv}(\mathbf{e}_{\alpha},\mathbf{e}_{\beta})\) for some \(j\). 
Then \(\sum_{i=1}^{s-k}P_{i}+E_{j}\) lies in the codimension \(k\) affine linear space \[\{\mathbf{p}\mid\mathbf{w}_{j}\mathbf{p}=1\}\cap\bigcap_{\begin{subarray}{c}h=1\\ h\neq j\end{subarray}}^{k}\{\mathbf{p}\mid\mathbf{w}_{h}\mathbf{p}=0\}.\] So it has dimension less than or equal to \(s-k\). Thus \(Q\) has dimension less than or equal to \(s-1\), which contradicts that it is a maximal cell of a fine mixed subdivision. Thus all mixed cells of type \((1,\ldots,1)\) are of the form \(\sum_{i=1}^{s-k}P_{i}+\sum_{j=1}^{k}\mathsf{conv}(\mathbf{0},\mathbf{e}_{\alpha_{j}})\) where \(\alpha_{j}\) belongs to the support of \(\mathbf{w}_{j}\). Let \(E_{\alpha_{j}}\) denote \(\mathsf{conv}(\mathbf{0},\mathbf{e}_{\alpha_{j}})\). Now suppose that \(Q_{1}\) and \(Q_{2}\) are distinct mixed cells of this form. They must differ by at least one summand corresponding to edges of some \(W_{j}\). Without loss of generality, suppose that this is \(\mathbf{w}_{1}\) and that the summand associated to \(\mathbf{w}_{1}\) in \(Q_{1}\) is \(E_{1}\) and the summand associated to \(\mathbf{w}_{1}\) in \(Q_{2}\) is \(E_{2}\). Consider the face of \(P\) that minimizes the linear functionals \(\mathbf{w}_{j}\mathbf{p}\) for all \(j=2,\ldots,k\). This face is \(F=\sum_{i=1}^{s-k}P_{i}+W_{1}\). The fine mixed subdivision \(\mathcal{S}\) of \(P\) restricts to a subdivision \(\mathcal{S}^{\prime}\) of this face via intersection. Moreover, we have \(Q_{1}\cap F=\sum_{i=1}^{s-k}P_{i}+E_{1}\) and \(Q_{2}\cap F=\sum_{i=1}^{s-k}P_{i}+E_{2}\). Now consider these polytopes in the ambient linear space, \(\{\mathbf{p}\mid\mathbf{w}_{j}\mathbf{p}=0,j=2,\ldots,k\}\). The face \(F\) is contained in the hyperplane \(\{\mathbf{w}_{1}\mathbf{p}=0\}\) and \(E_{1}\) and \(E_{2}\) both lie in the positive halfspace defined by \(\mathbf{w}_{1}\mathbf{p}\geq 0\). So by Proposition 3.6, we have that \(\mathsf{relint}(\sum_{i=1}^{s-k}P_{i}+E_{1})\cap\mathsf{relint}(\sum_{i=1}^{s-k}P_{i}+E_{2})\) is nonempty. Moreover, these two polytopes are not equal. So they cannot belong to the same subdivision of \(F\). Hence, \(Q_{1}\) and \(Q_{2}\) cannot belong to the same subdivision of \(P\). Thus a fine mixed subdivision of \(P\) has at most one mixed cell of type \((1,\ldots,1)\) and it has the desired form if it exists. Consider a partitionable binomial reaction network as in the statement of Theorem 3.5. We have shown that any fine mixed subdivision of the Minkowski sum of its Newton polytopes has at most one mixed cell of type \((1,\ldots,1)\) and described the form of this cell if it exists. Let \(\Pi\) denote this parallelotope. If one knows the edges of each Newton polytope that are its Minkowski summands, then its volume can be computed as the determinant of a matrix. **Lemma 3.7**.: _Let \(G\) be as in the statement of Theorem 3.5 and let_ \[\Pi=\sum_{i=1}^{s-k}\mathsf{Newt}(f_{i})+\sum_{j=1}^{k}\mathsf{conv}(\mathbf{0},\mathbf{e}_{\alpha_{j}})\] _be the unique type \((1,\ldots,1)\) cell of a fine mixed subdivision of the Newton polytopes, where each \(f_{i}=\mathbf{x}^{\mathbf{y}_{1}^{(i)}}-\mathbf{x}^{\mathbf{y}_{2}^{(i)}}\) and where \(\alpha_{j}\) is in the support of \(\mathbf{w}_{j}\). 
Then the mixed volume of the steady-state system \(f_{1},\ldots,f_{s-k}\) augmented by the partitionable conservation laws is the absolute value of the determinant of the \(s\times s\) matrix with columns \(\mathbf{e}_{\alpha_{j}}\) for \(j\in[k]\) and \(\mathbf{y}_{1}^{(i)}-\mathbf{y}_{2}^{(i)}\) for \(i\in[s-k]\)._ Proof.: Suppose that this mixed volume is non-zero. The volume of a polytope is invariant under translation. To compute the volume of the parallelotope \(\Pi\), we translate it to the origin by replacing the edge \(\mathsf{Newt}(f_{i})=\mathsf{conv}(\mathbf{y}_{1}^{(i)},\mathbf{y}_{2}^{(i)})\) with \(\mathsf{conv}(\mathbf{0},\mathbf{y}_{1}^{(i)}-\mathbf{y}_{2}^{(i)})\). Then this determinant is the standard formula for the normalized volume of such a parallelotope. By Theorem 3.5, \(\Pi\) is the only mixed cell of type \((1,\ldots,1)\) in a fine mixed subdivision of \(P\). Thus by Theorem 2.8, the mixed volume of \(G\) is the volume of \(\Pi\). We conclude this discussion by noting that this determinant does not depend on the choice of the coordinates \(\alpha_{j}\) in the support of \(\mathbf{w}_{j}\). So in fact, one can compute the mixed volume of a partitionable binomial reaction network via a simple determinant calculation without computing a fine mixed subdivision. In order to prove this, we state the following lemma. **Lemma 3.8**.: _Let \(\mathbf{r}_{1},\ldots,\mathbf{r}_{k+1}\) be \(s\)-dimensional row vectors that sum to the zero vector. Let \(\mathbf{q}_{1},\ldots,\mathbf{q}_{s-k}\) be arbitrary \(s\)-dimensional row vectors. Let \(R_{i}\) denote the \(s\times s\) matrix with rows \(\mathbf{r}_{1},\ldots,\mathbf{r}_{k+1},\mathbf{q}_{1},\ldots,\mathbf{q}_{s-k}\) with \(\mathbf{r}_{i}\) excluded. Then \(\det(R_{i})=(-1)^{i-j}\det(R_{j})\) for any \(i,j\in[k+1]\)._ Proof.: For any \(i,j\), we have \[\mathbf{r}_{i}=-\sum_{\begin{subarray}{c}\ell=1\\ \ell\neq i\end{subarray}}^{k+1}\mathbf{r}_{\ell}.\] Replacing \(\mathbf{r}_{i}\) with this expression in \(R_{j}\) and expanding using multilinearity and the alternating property of the determinant yields that \(\det(R_{j})=-\det(R_{j}^{(i)})\), where \(R_{j}^{(i)}\) is obtained from \(R_{j}\) by replacing \(\mathbf{r}_{j}\) with \(\mathbf{r}_{i}\). Then using \(i-j-1\) adjacent row swaps to put \(\mathbf{r}_{i}\) in the \(i\)th position yields \(R_{i}\). So \(\det(R_{j})=(-1)^{i-j}\det(R_{i})\), as needed. **Theorem 3.9**.: _Let \(G\) be a partitionable binomial reaction network with partitionable conservation laws \(\mathbf{w}_{1},\ldots,\mathbf{w}_{k}\) and exactly \(s-k\) defining binomials \(f_{i}\) supported on exponent vectors \(\mathbf{y}_{1}^{(i)}\) and \(\mathbf{y}_{2}^{(i)}\). Then the mixed volume of the steady-state system \(f_{1},\ldots,f_{s-k}\) augmented by the partitionable conservation laws is either \(0\) or the absolute value of the determinant of any matrix with columns \(\mathbf{y}_{1}^{(i)}-\mathbf{y}_{2}^{(i)}\) for all \(i\in[s-k]\) and \(\mathbf{e}_{\alpha_{j}}\) for \(\alpha_{j}\in\mathsf{supp}(\mathbf{w}_{j})\) for each \(j\in[k]\)._ Proof.: Suppose that the system \(f_{i}(\mathbf{x})=0\) for \(i\in[s-k]\) and \(\mathbf{w}_{j}\mathbf{x}=c_{j}\) for \(j\in[k]\) has nonzero mixed volume. 
Then any fine mixed subdivision of \(p\) has a type \((1,\ldots,1)\) cell, and by Theorem 3.5, this cell is unique and of the form \[\Pi=\sum_{i=1}^{s-k}\mathsf{Newt}(f_{i})+\sum_{j=1}^{k}\mathsf{conv}(\mathbf{ 0},\mathbf{e}_{\alpha_{j}}).\] Let \(M_{\alpha}\) denote the matrix with columns \(\mathbf{y}_{1}^{(i)}-\mathbf{y}_{2}^{(i)}\) for all \(i\in[s-k]\) and \(\mathbf{e}_{\alpha_{j}}\) for \(j\in[k]\). Then by Lemma 3.7, the mixed volume of \(G\) is equal to \(\pm\det(M_{\alpha})\). It remains to show that if we pick \(\beta_{j}\in\mathsf{supp}(\mathbf{w}_{j})\) for each \(j\), the corresponding matrix has the same determinant up to absolute value; that is, that \(\det M_{\alpha}=\pm\det M_{\beta}\). Consider the \(s\times(s-k)\) matrix \(M\) with colums \(\mathbf{y}_{1}^{(i)}-\mathbf{y}_{2}^{(i)}\). Let \(\mathbf{r}_{1},\ldots,\mathbf{r}_{s}\) be its rows. Note that for each \(j\in[k]\), we have \[\sum_{\ell\in\mathsf{supp}(\mathbf{w}_{j})}\mathbf{r}_{\ell}=0.\] Let \(M_{\alpha}^{\prime}\) denote the matrix obtained by deleting rows \(\mathbf{r}_{\alpha_{1}},\ldots,\mathbf{r}_{\alpha_{s}}\) from \(M\), and similarly for \(M_{\beta}^{\prime}\). Then by repeatedly applying Laplace expansion along the columns of the form \(\mathbf{e}_{\alpha_{j}}\), we see that \(\det M_{\alpha}=\pm\det M_{\alpha}^{\prime}\), and similarly, that \(\det M_{\beta}=\pm\det M_{\beta}^{\prime}\). Moreover, by applying Lemma 3.8\(s\) times to the block of rows \(\{\mathbf{r}_{\ell}\mid\ell\in\mathsf{supp}(\mathbf{w}_{j})\}\) at the \(j\)th step of the Laplace expansion, we see that \[\det(M_{\alpha}^{\prime})=\pm\det(M_{\beta}^{\prime}),\] as needed. ## 4 Cycles with Binomial Steady-States In this section, we investigate the directed cycles, or _cycle networks_, that satisfy the PDSC Condition. We give a characterization of these cycles in terms of edge colorings of the cycle. Then we apply the results of Section 3 to some examples of cycles with binomial steady-states and compute their mixed volumes. Let \(G\) be a cycle network with \(m\) complexes, that is, defined by the reactions \(\ \mathbf{y}_{i}\xrightarrow{\kappa_{i}}\mathbf{y}_{i+1}\) where the indices are taken modulo the set \([m]=\{1,2,\ldots,m\}\). Let \(d=\mathsf{dim}\,\mathsf{Ker}\,\Sigma\) where \(\Sigma=Y^{t}A_{\boldsymbol{\kappa}}^{t}\). Since we are fixing the structure of the reaction graph \(G\), the Laplacian matrix has the form \[A_{\boldsymbol{\kappa}}^{t}=\begin{pmatrix}-\kappa_{1}&&&&\kappa_{m}\\ \kappa_{1}&-\kappa_{2}&&&&\\ &\kappa_{2}&-\kappa_{3}&&\\ &&&\ddots&&\\ &&&&-\kappa_{m-1}&\\ &&&&\kappa_{m-1}&-\kappa_{m}\end{pmatrix}\] which has nontrivial kernel; indeed, it contains the non-zero vector \(\mathbf{x}_{\boldsymbol{\kappa}}=(\kappa_{1}^{-1},\ldots,\kappa_{m}^{-1})^{t}\). Thus, for cycle networks, the dimension \(d\) of the kernel of \(\Sigma=Y^{t}A_{\boldsymbol{\kappa}}^{t}\) is always at least \(1\). Given a coloring of the edges of \(G\), \(\lambda:E(G)\to C\), and a "color" \(\ell\in C\), the subgraph of \(G\) induced by all edges of color \(\ell\), denoted \(G[\ell]\) is a disjoint union of directed paths if \(|C|>1\) and \(G\) if \(|C|=1\). Let \(H(\ell)\) denote the set of all source vertices of \(G[\ell]\) and let \(T(\ell)\) denote the set of all sink vertices of \(G[\ell]\). Note that in the case \(|C|=1\), the sets \(H(\ell)\) and \(T(\ell)\) are both empty. Given a subset \(S\) of the complexes of \(G\), we shall write \(\mathbb{1}_{S}\) to denote the \(m\)-dimensional indicator vector for \(S\). 
**Theorem 4.1**.: _Let \(G\) be a directed cycle. Then \(G\) is a PDSC network if and only if there exists a surjective coloring \(\lambda:E(G)\to[d]\) such that for all \(\ell\in[d]\),_ \[\sum_{\mathbf{y}\in H(\ell)}\mathbf{y}=\sum_{\mathbf{y}\in T(\ell)}\mathbf{y}.\] Proof.: Let \(\lambda:E(G)\to[d]\) be a surjective coloring of the edges of \(G\) such that for all \(\ell\in[d]\), \(\sum_{\mathbf{y}\in H(\ell)}\mathbf{y}=\sum_{\mathbf{y}\in T(\ell)}\mathbf{y}\). Then the difference of indicator vectors \(\mathbb{1}_{H(\ell)}-\mathbb{1}_{T(\ell)}\) belongs to \(\mathsf{Ker}\,Y^{t}\) for all \(\ell\in[d]\). This is equal to the image of the vector \(\mathbf{b}^{\ell}\) under \(A^{t}_{\boldsymbol{\kappa}}\) where \(\mathbf{b}^{\ell}\) is defined by \[b^{\ell}_{i}=\begin{cases}\kappa_{i}^{-1}&\text{ if }\lambda(\mathbf{y}_{i} \to\mathbf{y}_{i+1})=\ell\\ 0&\text{ otherwise.}\end{cases}\] Indeed, if \(\mathbf{y}_{i}\) is the source of a path in \(G[\ell]\), then \(b^{\ell}_{i}=\kappa_{i}^{-1}\) and \(b^{\ell}_{i-1}=0\). So the \(i\)th entry of \(A^{t}_{\boldsymbol{\kappa}}\mathbf{b}^{\ell}\) is \(-1\). Similarly, if \(\mathbf{y}_{i}\) is the sink of a path in \(G[\ell]\), then \(b^{\ell}_{i}=0\) and \(b^{\ell}_{i-1}=\kappa_{i-1}^{-1}\). So the \(i\)th entry of \(A^{t}_{\boldsymbol{\kappa}}\mathbf{b}^{\ell}\) is \(1\). If \(\mathbf{y}_{i}\) is an interior node on a path in \(G[\ell]\), then \(b^{\ell}_{i}=\kappa_{i}^{-1}\) and \(b^{\ell}_{i-1}=\kappa_{i-1}^{-1}\), so that the \(i\)th entry of \(A^{t}_{\boldsymbol{\kappa}}\mathbf{b}^{\ell}\) is \(0\). Finally if \(\mathbf{y}_{i}\) does not belong to \(G[\ell]\), then \(\mathbf{y}_{i-1}\) either is also not in \(G[\ell]\) or is a sink of a path in \(G[\ell]\). Hence we have \(b^{\ell}_{i}=b^{\ell}_{i-1}=0\), and the \(i\)th entry of \(A^{t}_{\boldsymbol{\kappa}}\mathbf{b}^{\ell}\) is \(0\). The vectors \(\mathbf{b}^{1},\dots,\mathbf{b}^{d}\) have disjoint support since each complex has exactly one outgoing end. Thus they are linearly independent. Moreover, they form a basis for \(\mathsf{Ker}\,\Sigma\) as they comprise \(d\) distinct vectors. Thus \(G\) satisfies Condition 2.1. Now suppose that \(G\) satisfies Condition 2.1 and let \(\mathbf{b}^{1},\dots,\mathbf{b}^{d}\) be a basis for \(\mathsf{Ker}\,\Sigma\) with disjoint support. In particular, we know that \(\mathbf{x}_{\boldsymbol{\kappa}}=(\kappa_{1}^{-1},\dots,\kappa_{m}^{-1})^{t}\) is in \(\mathsf{Ker}\,\Sigma\) as it belongs to \(\mathsf{Ker}\,A^{t}_{\boldsymbol{\kappa}}\). So it is in the span of \(\mathbf{b}^{1},\dots,\mathbf{b}^{d}\). Thus, after rescaling each \(\mathbf{b}^{\ell}\), we have that if \(j\in\mathsf{supp}(\mathbf{b}^{\ell})\), then \(b^{\ell}_{j}=\kappa_{j}^{-1}\). Color the edges of \(G\) by letting the edge \(\mathbf{y}_{i}\to\mathbf{y}_{i+1}\) have color \(\ell\) if and only if \(i\in\mathsf{supp}(\mathbf{b}^{\ell})\). Then \(A^{t}_{\boldsymbol{\kappa}}\mathbf{b}^{\ell}=\mathbb{1}_{H(\ell)}-\mathbb{1}_ {T(\ell)}\). Since \(\mathbf{b}^{\ell}\in\mathsf{Ker}\,\Sigma\), we must have that \(\mathbb{1}_{H(\ell)}-\mathbb{1}_{T(\ell)}\in\mathsf{Ker}\,Y^{t}\). Hence we have \(\sum_{\mathbf{y}\in H(\ell)}\mathbf{y}=\sum_{\mathbf{y}\in T(\ell)}\mathbf{y}\), as needed. The above proof uncovers another key fact about PDSC cycle networks. In particular, when the reaction rates \(\kappa_{i}\) are positive, these networks trivially satisfy another condition from [14], which we restate below. 
**Condition 4.2** ([14], Condition 3.4).: _Consider a chemical reaction system given by the PDSC network \(G\) with \(m\) complexes and reaction rate constants \(\kappa_{ij}\). There is a partition \(I_{1},\dots,I_{d}\) of \([m]\) and a basis \(\mathbf{b}^{1},\dots,\mathbf{b}^{d}\) of \(\mathsf{Ker}\,\Sigma\) with \(\mathsf{supp}(\mathbf{b}^{i})=I_{i}\). We say that the chemical reaction system additionally satisfies Condition 4.2 if for all \(j\in[d]\), the nonzero entries of \(\mathbf{b}^{j}\) have the same sign, that is, if_ \[\text{sign}(b^{j}_{j_{1}})=\text{sign}(b^{j}_{j_{2}})\quad\text{ for all }j_{1},j_{2}\in I_{j},\text{ for all }1\leq j\leq d.\]

Theorem 3.8 of [14] shows that this condition is necessary for a PDSC network to have a positive steady-state. The basis vectors \(\mathbf{b}^{1},\dots,\mathbf{b}^{d}\) from the proof of Theorem 4.1 are of a special form. In particular, when the reaction rates \(\kappa_{i}\) are positive, their nonzero entries are all positive. This shows that PDSC cycle networks automatically satisfy Condition 4.2.

**Corollary 4.3**.: _Let \(G\) be a directed cycle. If \(G\) is a PDSC network, then the chemical reaction system given by \(G\) satisfies Condition 4.2._

**Example 4.4** (Species-overlapping cycles).: An instance of PDSC cycle networks is what we call _species-overlapping cycles_, denoted \(SOC_{m}\) where \(m\geq 3\). This one-parameter family of cycle networks is defined by reactions of the form \[X_{i}+X_{i+1}\xrightarrow{\kappa_{i}}X_{i+1}+X_{i+2}\] for \(i=1,\ldots,m\) where the indices are taken modulo the set \([m]=\{1,\ldots,m\}\). For example, when \(m=4\) we get the network seen in Figure 4. Note that the system of ordinary differential equations arising from these cycles is binomial. Our claim is that for \(m\geq 3\) these networks are also indeed PDSC networks. When \(m\) is odd, the matrix \[Y^{t}=\begin{pmatrix}1&&&&1\\ 1&1&&&\\ &1&1&&\\ &&\ddots&\ddots&\\ &&&1&1\end{pmatrix}=\begin{pmatrix}\mathbf{e}_{1}+\mathbf{e}_{2}&\mathbf{e}_{2}+\mathbf{e}_{3}&\cdots&\mathbf{e}_{m-1}+\mathbf{e}_{m}&\mathbf{e}_{1}+\mathbf{e}_{m}\end{pmatrix}\] has full rank and hence \(\mathbf{x}_{\boldsymbol{\kappa}}=(\kappa_{1}^{-1},\kappa_{2}^{-1},\ldots,\kappa_{m}^{-1})^{t}\) generates the kernel of \(\Sigma\). Thus, \(d=1\) and by Theorem 4.1, \(SOC_{m}\) for odd \(m\) is a PDSC network since \(H(\ell)=T(\ell)=\emptyset\). If instead \(m\) is even, then \(d=2\) and a surjective coloring of the edges of the network is given as follows: \[\lambda(\mathbf{y}_{i}\rightarrow\mathbf{y}_{i+1})=\begin{cases}1,&\text{ if $i$ is odd}\\ 2,&\text{ if $i$ is even}.\end{cases}\] With this coloring, we have \(H(1)=T(2)\) and \(T(1)=H(2)\), and the following condition is satisfied: \[\begin{aligned}\sum_{\mathbf{y}\in H(1)}\mathbf{y}&=\mathbf{y}_{2}+\mathbf{y}_{4}+\cdots+\mathbf{y}_{m}\\ &=(\mathbf{e}_{2}+\mathbf{e}_{3})+(\mathbf{e}_{4}+\mathbf{e}_{5})+\cdots+(\mathbf{e}_{m}+\mathbf{e}_{1})\\ &=(\mathbf{e}_{1}+\mathbf{e}_{2})+(\mathbf{e}_{3}+\mathbf{e}_{4})+\cdots+(\mathbf{e}_{m-1}+\mathbf{e}_{m})\\ &=\mathbf{y}_{1}+\mathbf{y}_{3}+\cdots+\mathbf{y}_{m-1}\\ &=\sum_{\mathbf{y}\in T(1)}\mathbf{y}.\end{aligned}\] Thus, by Theorem 4.1, \(SOC_{m}\) satisfies Condition 2.1 for even \(m\). These networks are also partitionable and we compute the mixed volume of their natural system of equations as in Theorem 3.9.

**Theorem 4.5**.: _Let \(m\geq 3\). The cycle networks \(SOC_{m}\) are partitionable.
The mixed volumes of the associated systems_ \[\begin{cases}f_{i}&=\kappa_{i-2}x_{i-2}x_{i-1}-\kappa_{i}x_{i}x_{i+1}\quad\text{for }i=1,\ldots,m-1\\ 0&=x_{1}+x_{2}+\cdots+x_{m}+c\end{cases}\] _for odd \(m\) and_ \[\begin{cases}f_{i}&=\kappa_{i-2}x_{i-2}x_{i-1}-\kappa_{i}x_{i}x_{i+1}\quad\text{for }i=1,\dots,m-2\\ 0&=x_{1}+x_{3}+\cdots+x_{m-1}+c_{1}\\ 0&=x_{2}+x_{4}+\cdots+x_{m}+c_{2}\end{cases}\] _for even \(m\) are \(1\) and \(\frac{m}{2}\), respectively._

Figure 4: A four-cycle with positive binomial steady-states.

Proof.: For the cycle network \(SOC_{m}\), the polynomials of the system (1) are \(f_{i}=\kappa_{i-2}x_{i-2}x_{i-1}-\kappa_{i}x_{i}x_{i+1}\) so \(\mathsf{Newt}(f_{i})=\mathsf{conv}(\mathcal{A}_{i})\) where \(\mathcal{A}_{i}=\{\mathbf{e}_{i-2}+\mathbf{e}_{i-1},\mathbf{e}_{i}+\mathbf{e}_{i+1}\}\). We organize the proof based on the parity of \(m\). First suppose \(m\) is odd. Then the network \(SOC_{m}\) has one conservation law given by the conservation law vector \(\mathbf{w}=\mathbb{1}\). Since each \(f_{i}\) is homogeneous, the ideal \(I=\langle f_{1},\dots,f_{m-1}\rangle\) is multihomogeneous with respect to the multigrading given by \(\mathbf{w}\), hence \(SOC_{m}\) is partitionable. By Theorem 3.9 the mixed volume of \(SOC_{m}\) is the absolute value of the determinant of the matrix with columns \(\mathbf{e}_{1}\) and \(\mathbf{e}_{i-2}+\mathbf{e}_{i-1}-\mathbf{e}_{i}-\mathbf{e}_{i+1}\) for \(i=1,\dots,m-1\). Since one of the columns of this matrix is \(\mathbf{e}_{1}\), we focus on the determinant of the submatrix after removing this column and the first row. For \(m=3\), the submatrix is \(\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\) which has determinant \(1\), as desired. For \(m\geq 5\), the submatrix has an LU-factorization with \[L=\left(\begin{array}{cccc|cccc}&&&I_{(m-3)\times(m-3)}&&&\mathbf{0}\\ &&&&\\ \hline&&&\\ -1&\cdots&(-1)^{j}\lceil j/2\rceil&\cdots&\lceil(m-3)/2\rceil&1&0\\ -1&\cdots&-(j\mod 2)&\cdots&0&\nicefrac{{1}}{{\lceil(m-3)/2\rceil}}&1 \end{array}\right)\] and \[U=\left(\begin{array}{c|c|c}U_{1}&&&U_{2}\\ \hline\mathbf{0}&&&-1\\ 0&&\nicefrac{{1}}{{\lceil(m-3)/2\rceil}}&\\ \end{array}\right)\] where \(I_{(m-3)\times(m-3)}\) is the \((m-3)\times(m-3)\) identity matrix and the \(i\)th row of \(\begin{pmatrix}U_{1}&U_{2}\end{pmatrix}\) is \(\mathbf{e}_{i+2}+\mathbf{e}_{i+3}-\mathbf{e}_{i}-\mathbf{e}_{i+1}\) except the last row is \(\mathbf{e}_{m-1}-\mathbf{e}_{m-3}-\mathbf{e}_{m-2}\). Therefore, the mixed volume is \(\det(L)\det(U)=(-1)^{m-3}=1\). Now suppose \(m\) is even. The network \(SOC_{m}\) has two conservation laws given by the vectors \(\mathbf{w}_{1}=\mathbf{e}_{1}+\mathbf{e}_{3}+\cdots+\mathbf{e}_{m-1}\) and \(\mathbf{w}_{2}=\mathbf{e}_{2}+\mathbf{e}_{4}+\cdots+\mathbf{e}_{m}\). Then for any \(i,j\) and \(\mathbf{a},\mathbf{b}\in\mathcal{A}_{i}\), \(\mathbf{a}\cdot\mathbf{w}_{j}=\mathbf{b}\cdot\mathbf{w}_{j}=1\), so \(I=\langle f_{1},\dots,f_{m-2}\rangle\) is multihomogeneous with respect to the conservation law vectors \(\mathbf{w}_{1},\mathbf{w}_{2}\) and hence \(SOC_{m}\) is partitionable. By Theorem 3.9 the mixed volume of \(SOC_{m}\) is the absolute value of the determinant of the matrix with columns \(\mathbf{e}_{1}\), \(\mathbf{e}_{2}\), and \(\mathbf{e}_{i-2}+\mathbf{e}_{i-1}-\mathbf{e}_{i}-\mathbf{e}_{i+1}\) for \(i=1,\dots,m-2\). Since the first two columns are \(\mathbf{e}_{1},\mathbf{e}_{2}\), we focus on the determinant of the submatrix after removing the first two rows and columns.
For \(m=4\), the submatrix is \(\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}\) which has determinant \(\frac{m}{2}=2\), as desired. For \(m\geq 6\), up to a permutation matrix, we have the following LU-factorization with \[L=\left(\begin{array}{c|ccc}1&&\mathbf{0}&&0\\ \hline\mathbf{0}&&&I_{(m-4)\times(m-4)}&&\mathbf{0}\\ \hline 1&-1&\cdots&(-1)^{j}\lceil j/2\rceil&\cdots&\lceil(m-4)/2\rceil&1\end{array}\right)\] and \[U=\left(\begin{array}{ccc}1&\mathbf{0}&1\\ \mathbf{0}&U_{1}&U_{2}\\ 0&\mathbf{0}&\nicefrac{{m}}{{2}}\end{array}\right)\] where the \(i\)th row of \(\begin{pmatrix}U_{1}&U_{2}\end{pmatrix}\) is \(\mathbf{e}_{i+2}+\mathbf{e}_{i+3}-\mathbf{e}_{i}-\mathbf{e}_{i+1}\) except the last two rows are \(\mathbf{e}_{m-2}-\mathbf{e}_{m-4}-\mathbf{e}_{m-3}\) and \(-\mathbf{e}_{m-3}-\mathbf{e}_{m-2}\). Thus, the mixed volume of \(SOC_{m}\) for \(m\) even is \((-1)^{(m-4)}\frac{m}{2}=\frac{m}{2}\).

## 5 Discussion

In the present work, we gave a formula for the mixed volume of a binomial steady-state system for any chemical reaction network with partitionable conservation laws. This result was obtained by analyzing the possible structure of a fine mixed subdivision of the Minkowski sum of Newton polytopes from this system. An advantage of this approach is that it allows us to avoid computing a fine mixed subdivision, which is quite computationally expensive. We also characterized the directed cycles which are PDSC networks using edge colorings. We used these results to calculate the mixed volumes of all species-overlapping cycles. There are still many directions for further exploration. It would be interesting to consider the ways in which one can relax the disjoint support assumption for partitionable conservation laws. If one removes this assumption, there are many examples of fine mixed subdivisions with more than one cell of type \((1,\ldots,1)\). Is it possible to characterize the number and volume of these in a fine mixed subdivision of such a network? Alternatively, one may search for a geometric algorithm for changing a non-partitionable network into a partitionable one and tracking the solutions. For instance, it can be shown that a slight modification of the Edelstein network is partitionable. Of particular interest, this modification can be realized by geometric means. To explain further, recall that a requirement for a partitionable network is that the affine hulls of the Newton polytopes \(\mathsf{Newt}(f_{i})\) are parallel to the stoichiometric subspace. In the case of the Edelstein network, it can be shown that there is an oblique projection of the affine hull of \(\mathsf{Newt}(f_{i})\) onto an affine space parallel to the stoichiometric subspace and this projection corresponds to a modification of the Edelstein network into a partitionable network while preserving the stoichiometric matrix. Thus, we are curious if this type of geometric argument can be made more general and how both the mixed volume and the steady-state degree compare to the respective quantities of the original system. Theorem 4.1 only applies to directed cycles and does not allow for bidirected edges. In the future, it would be interesting to generalize this result to networks whose underlying undirected graph is a cycle, and to determine to what extent we can use these results to "glue" cycles together to create more complex networks.

## 6 Acknowledgements

Elizabeth Gross was supported by the National Science Foundation (NSF), DMS-1945584.
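As a quick numerical illustration of how inexpensive the determinant calculation of Theorem 3.9 is in practice, the following minimal sketch (assuming NumPy; the support coordinates \(\alpha_{j}\) are chosen as \(x_{1}\), plus \(x_{2}\) when \(m\) is even, and species indices are mapped to 0-indexed coordinates) evaluates the formula for the cycles \(SOC_{m}\) of Theorem 4.5.

```python
import numpy as np

def e(j, m):
    """Standard basis vector for species x_j (1-indexed, taken mod m)."""
    v = np.zeros(m, dtype=int)
    v[(j - 1) % m] = 1
    return v

def soc_mixed_volume(m):
    """|det| of the matrix of Theorem 3.9 for SOC_m, with alpha_1 = x_1
    (and alpha_2 = x_2 when m is even)."""
    k = 1 if m % 2 == 1 else 2                    # number of conservation laws
    cols = [e(j, m) for j in range(1, k + 1)]     # columns e_{alpha_j}
    for i in range(1, m - k + 1):                 # f_i = k_{i-2} x_{i-2} x_{i-1} - k_i x_i x_{i+1}
        cols.append(e(i - 2, m) + e(i - 1, m) - e(i, m) - e(i + 1, m))
    return abs(int(round(np.linalg.det(np.column_stack(cols)))))

for m in range(3, 11):
    print(m, soc_mixed_volume(m))   # 1 for odd m, m/2 for even m (Theorem 4.5)
```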
2309.05936
Do PLMs Know and Understand Ontological Knowledge?
Ontological knowledge, which comprises classes and properties and their relationships, is integral to world knowledge. It is significant to explore whether Pretrained Language Models (PLMs) know and understand such knowledge. However, existing PLM-probing studies focus mainly on factual knowledge, lacking a systematic probing of ontological knowledge. In this paper, we focus on probing whether PLMs store ontological knowledge and have a semantic understanding of the knowledge rather than rote memorization of the surface form. To probe whether PLMs know ontological knowledge, we investigate how well PLMs memorize: (1) types of entities; (2) hierarchical relationships among classes and properties, e.g., Person is a subclass of Animal and Member of Sports Team is a subproperty of Member of ; (3) domain and range constraints of properties, e.g., the subject of Member of Sports Team should be a Person and the object should be a Sports Team. To further probe whether PLMs truly understand ontological knowledge beyond memorization, we comprehensively study whether they can reliably perform logical reasoning with given knowledge according to ontological entailment rules. Our probing results show that PLMs can memorize certain ontological knowledge and utilize implicit knowledge in reasoning. However, both the memorizing and reasoning performances are less than perfect, indicating incomplete knowledge and understanding.
Weiqi Wu, Chengyue Jiang, Yong Jiang, Pengjun Xie, Kewei Tu
2023-09-12T03:20:50Z
http://arxiv.org/abs/2309.05936v1
# Do PLMs Know and Understand Ontological Knowledge?

###### Abstract

Ontological knowledge, which comprises classes and properties and their relationships, is integral to world knowledge. It is significant to explore whether Pretrained Language Models (PLMs) know and understand such knowledge. However, existing PLM-probing studies focus mainly on factual knowledge, lacking a systematic probing of ontological knowledge. In this paper, we focus on probing whether PLMs store ontological knowledge and have a semantic understanding of the knowledge rather than rote memorization of the surface form. To probe whether PLMs know ontological knowledge, we investigate how well PLMs memorize: (1) types of entities; (2) hierarchical relationships among classes and properties, e.g., _Person_ is a subclass of _Animal_ and _Member of Sports Team_ is a subproperty of _Member of_; (3) domain and range constraints of properties, e.g., the subject of _Member of Sports Team_ should be a _Person_ and the object should be a _Sports Team_. To further probe whether PLMs truly understand ontological knowledge beyond memorization, we comprehensively study whether they can reliably perform logical reasoning with given knowledge according to ontological entailment rules. Our probing results show that PLMs can memorize certain ontological knowledge and utilize implicit knowledge in reasoning. However, both the memorizing and reasoning performances are less than perfect, indicating incomplete knowledge and understanding.

## 1 Introduction

Pretrained Language Models (PLMs) have orchestrated impressive progress in NLP across a wide variety of downstream tasks, including knowledge-intensive tasks. Previous works propose that PLMs are capable of encoding a significant amount of knowledge from the pretraining corpora (AlKhamissi et al., 2022), and set out to explore the kinds of knowledge within PLMs. Existing probing works mainly focus on factual knowledge associated with instances (Petroni et al., 2019; Jiang et al., 2020; Safavi and Koutra, 2021). Meanwhile, although classes (concepts) have raised some research interest (Bhatia and Richie, 2020; Peng et al., 2022; Lin and Ng, 2022), there is no systematic study of ontological knowledge. Ontological knowledge models the world with a set of classes and properties and the relationships that hold between them (Nilsson, 2006; Kumar et al., 2019). It plays a vital role in many NLP tasks such as question answering by being injected into (Goodwin and Demner-Fushman, 2020) or embedded outside deep neural networks (Wang et al., 2017). Therefore, it is essential to explore whether PLMs can encode ontological knowledge and have a semantic understanding of the knowledge rather than rote memorizing its surface form.

Figure 1: (a) An example of an ontological knowledge graph. (b) Potential manual and soft prompts to probe the knowledge and corresponding semantics. Instances are replaced by pseudowords in reasoning experiments to mitigate potential interference from model memory.

In this paper, we first probe PLMs' memorization of ontological knowledge. Specifically, as shown in Figure 1(a), we construct memorization tests about (1) Types of entities. Entities can be categorized into classes, as Lionel Messi is a _Person_ and Argentina National Football Team is a _Sports Team_. (2) Hierarchical relationships between classes, e.g., _Person_ is a subclass of _Animal_. (3) Hierarchical relationships between properties, e.g., _Member of Sports Team_ is a subproperty of _Member of_.
(4) Domain constraints of properties. It specifies information about the subjects to which a property applies. For example, the subject of _Member of Sports Team_ should be an instance of _Person_. (5) Range constraints of properties. Similar to domain, range specifies information about the object of a property, e.g., the object of _Member of Sports Team_ should be an instance of _Sports Team_. Experiments prove that PLMs store a certain amount of ontological knowledge. To further examine whether PLMs understand ontological knowledge, we investigate if PLMs can correctly perform logical reasoning that requires ontological knowledge. Illustrated in Figure 1(b), given the fact triple (Lionel Messi, _Member of Sports Team_, Argentina National Football Team) along with property constraints, we can perform type inferences to conclude that Lionel Messi is a _Person_, and Argentina National Football Team is a _Sports Team_. We comprehensively investigate the reasoning capability of PLMs over ontological knowledge following six entailment rules. Experiments show that PLMs can apply implicit ontological knowledge to draw conclusions through reasoning, but the accuracy of their reasoning falls short of perfection. This observation suggests that PLMs possess a limited understanding of ontological knowledge. In summary, we systematically probe whether PLMs know and understand ontological knowledge. Our main contributions can be summarized as follows: (1) We construct a dataset that evaluates the ability of PLMs to memorize ontological knowledge and their capacity to draw inferences based on ontological entailment rules. (2) We comprehensively probe the reasoning ability of PLMs by carefully classifying how ontological knowledge is given as a premise. (3) We find that PLMs can memorize certain ontological knowledge but have a limited understanding. We anticipate that our work will facilitate more in-depth research on ontological knowledge probing with PLMs. The code and dataset are released at [https://github.com/vickywu1022/OntoProbe-PLMs](https://github.com/vickywu1022/OntoProbe-PLMs).

## 2 Benchmark Construction

In this section, we present our methodology for ontology construction and the process of generating memorizing and reasoning tasks based on the ontology for our probing analysis.

### Ontology Building

**Class** We use DBpedia (Auer et al., 2007) to obtain classes and their instances. Specifically, we first retrieve all 783 classes in DBpedia, then use SPARQL (Prud'hommeaux, 2011) to query their instances using the type relation and superclasses using the subclass-of relation. We sample 20 instances for each class.

**Property** Properties are collected based on DBpedia and Wikidata (Vrandecic and Krotzsch, 2014) using the following pipeline: (1) Obtain properties from Wikidata and use _subproperty of (P1647)_ in Wikidata to find their superproperties. (2) Query the domain and range constraints of the properties using _property constraint (P2302)_ in Wikidata. (3) Align the Wikidata properties with DBpedia properties by _equivalent property (P1628)_. (4) Query the domain and range constraints of the properties in DBpedia. (5) Cleanse the collected constraints using the above-collected class set as vocabulary. We choose 50 properties with sensible domain, range and superproperties.
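For illustration, the kind of SPARQL query described above can be issued against the public DBpedia endpoint with the SPARQLWrapper package; the following minimal sketch (not the exact scripts used in this work, with dbo:Person only as an example class) retrieves a few instances of a class together with its direct superclasses.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setReturnFormat(JSON)

# Example: instances of a class (rdf:type) and its superclasses (rdfs:subClassOf).
endpoint.setQuery("""
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    SELECT ?instance ?superclass WHERE {
        ?instance rdf:type dbo:Person .
        dbo:Person rdfs:subClassOf ?superclass .
    } LIMIT 20
""")
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["instance"]["value"], row["superclass"]["value"])
```

In practice the instance pattern and the superclass pattern would be issued as separate queries (the joint pattern above simply returns their cross product), and analogous queries over the Wikidata endpoint cover the property pipeline.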
### Construction of Memorizing Task The memorizing task consists of five subtasks, each probing the memorization of an ontological relationship: (1) **TP**: types of a given instance, (2) **SCO**: superclasses of a given class, (3) **SPO**: superproperties of a given property, (4) **DM**: domain constraint on a given property, and (5) **RG**: range constraint on a given property. Every subtask is formulated as a cloze-completion problem, as shown in Figure 1(b). Multiple correct answers exist for TP, SCO, and SPO, which form a chain of classes or properties. There is only one correct answer for DM and RG, as it is not sound to declare an expanded restriction on a property. For instance, _Animal_ is too broad as the domain constraint of the property _Member of Sports Team (P54)_, hence applying _Person_ as the domain. We construct the dataset for each subtask using the ontology built in Sec. 2.1 and reserve 10 samples for training and 10 for validation to facilitate few-shot knowledge probing. The statistics of the dataset for each subtask are shown in Table 1. ### Construction of Reasoning Task We construct the reasoning task based on the entailment rules specified in the Resource Description Framework Schema (RDFS)1. We propose six subtasks, each probing the reasoning ability following a rule listed in Table 2. For rule rdfs2/3/7, we design a pattern for each property to be used between a pair of instances, e.g., "[X] is a player at [Y]." for _Member of Sports Team_, where [X] and [Y] are the subject and object, respectively. Footnote 1: RDFS is an extension of RDF (Brickley and Guha, 2002; Gibbins and Shadbolt, 2009), a widely used and recognized data model. See [https://www.w3.org/TR/rdf11-mt/#rdfs-entailment](https://www.w3.org/TR/rdf11-mt/#rdfs-entailment) for all the entailment rules. Each entailment rule describes a reasoning process: \(\mathcal{P}_{1}\wedge\mathcal{P}_{2}\models\mathcal{H}\), where \(\mathcal{P}_{1},\mathcal{P}_{2}\) are the premises and \(\mathcal{H}\) is the hypothesis. Similar to the memorizing task, we formulate the reasoning task as cloze-completion by masking the hypothesis (see Figure 1(b)). Premises are also essential to the reasoning process and can be: * _Explicitly Given_: The premise is explicitly included in the input of the model, and inferences are made with natural language statements. * _Implicitly Given_: The premise is not explicitly given but memorized by the model as implicit knowledge. The model needs to utilize implicit knowledge to perform inferences, which relieves the effect of context and requires understanding the knowledge. * _Not Given_: The premise is neither explicitly given nor memorized by the model. It serves as a baseline where the model makes no inference. Hence, there exist \(3\times 3\) different setups for two premises. It is a refinement of the experimental setup used by Talmor et al. (2020), which only distinguishes whether a premise is explicitly included in the input. We determine the memorization of a premise by the probing results of the memorizing task, which will be elaborated in Sec. 3.2.3. ## 3 Probing Methods We investigate encoder-based PLMs (BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019)) that can be utilized as input encoders for various NLP tasks. 
Prompt is an intuitive method for our probing task as it matches the mask-filling nature of BERT. We use OpenPrompt Ding et al. (2022), an open-source framework for prompt learning that includes the mainstream prompt methods, to facilitate the experiments.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Task** & **Ontological Rel.** & **Candidate** & **Train** & **Dev** & **Test** \\ \hline **TP** & type & class & 10 & 10 & 8789 \\ **SCO** & subclass of & class & 10 & 10 & 701 \\ **SPO** & subproperty of & property & 10 & 10 & 39 \\ **DM** & domain & class & 10 & 10 & 30 \\ **RG** & range & class & 10 & 10 & 28 \\ \hline \hline \end{tabular} \end{table} Table 1: Ontological relationship, type of candidate, and dataset size for each memorizing subtask.

\begin{table} \begin{tabular}{l l l l l} \hline \hline **Rule** & **Premises** & **Conclusion** & **Candidate** & **Remark** \\ \hline rdfs2 & \([\mathcal{P}_{1}]\) aaa domain xxx. & & & \\ & \([\mathcal{P}_{2}]\) uuu aaa vvv. & & & \\ \hline rdfs3 & \([\mathcal{P}_{1}]\) aaa range xxx. & & & \\ & \([\mathcal{P}_{2}]\) uuu aaa vvv. & & & \\ \hline rdfs5 & \([\mathcal{P}_{1}]\) bbb subproperty of ccc. & & & \\ & \([\mathcal{P}_{2}]\) aaa subproperty of bbb. & & & \\ \hline rdfs7 & \([\mathcal{P}_{1}]\) aaa subproperty of bbb. & & & \\ & \([\mathcal{P}_{2}]\) uuu aaa vvv. & & & \\ \hline rdfs9 & \([\mathcal{P}_{1}]\) xxx subclass of yyy. & & & \\ & \([\mathcal{P}_{2}]\) uuu type xxx. & & & \\ \hline rdfs11 & \([\mathcal{P}_{1}]\) yyy subclass of zzz. & & & \\ & \([\mathcal{P}_{2}]\) xxx subclass of yyy. & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Entailment rules for the reasoning task. Symbols aaa and bbb represent any random property. Symbols xxx, yyy and zzz represent some classes, and uuu and vvv represent some instances. Constituents of the conclusion highlighted orange are to be masked in the input, and \(\mathcal{P}_{1}\) is the premise that contains the same constituents.

### Probing Methods for Memorization

#### 3.1.1 Prompt Templates

**Manual Templates** Manual prompts with human-designed templates written in discrete language phrases are widely used in zero-shot probing Schick and Schutze (2021) as PLMs can perform tasks without any training. Manual templates are designed for all the ontological relationships in our task, as shown in Table 3.

**Soft Templates** One of the disadvantages of manual prompts is that the performance can be significantly affected by perturbation to the prompt templates Jiang et al. (2020). A common alternative is to use soft prompts that consist of learnable soft tokens Liu et al. (2021); Li and Liang (2021) instead of manually defined templates. The soft prompts we use for ontological relationships are also shown in Table 3. To probe using soft prompts, we tune randomly initialized soft tokens on the training set with the PLM parameters frozen. Detailed training setups are listed in Appendix A.

#### 3.1.2 Candidates Scoring

Given a candidate \(c\) which can be tokenized into \(n\) tokens \(c_{1},c_{2},\dots,c_{n}\), such that \(c_{i}\in V\) for \(i\in\{1,\dots,n\}\), \(n\geq 1\), where \(V\) is the vocabulary of the model, it is scored based on the log probability of predicting it in the masked prompt. We can either use \(n\) different [MASK] tokens or the same [MASK] token to obtain the log probability of each composing token \(c_{i}\), and then compute the log probability of the candidate \(c\). For simplicity, we use a single [MASK] token when illustrating our prompts.
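As an illustration of this scoring scheme, the following minimal sketch (assuming the HuggingFace transformers library; bert-base-cased, the prompt, and the candidate words are examples only) scores a possibly multi-token candidate with one [MASK] per candidate token and mean pooling; the pooling variants are detailed next.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def score_candidate(template, candidate):
    """Log-probability score of a (possibly multi-token) candidate, using one
    [MASK] per candidate token and averaging the token log-probabilities."""
    cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
    masks = " ".join([tokenizer.mask_token] * len(cand_ids))
    inputs = tokenizer(template.replace("[MASK]", masks), return_tensors="pt")
    with torch.no_grad():
        log_probs = model(**inputs).logits.log_softmax(dim=-1)[0]
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    scores = [log_probs[p, t].item() for p, t in zip(mask_pos, cand_ids)]
    return sum(scores) / len(scores)

# Rank example candidate classes for a type-probing prompt.
for c in ["athlete", "organisation"]:
    print(c, score_candidate("Lionel Messi is a particular [MASK].", c))
```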
**Multiple Masks** For a candidate \(c\) consisting of \(n\) tokens, we use \(n\) [MASK] tokens in the masked input, with the \(i\)th [MASK] token denoted as \([MASK]_{i}\). The candidate probability can be computed by three different pooling methods: (1) _mean_: the average of log probabilities of composing tokens Klein and Nabi (2020), (2) _max_: the maximum log probability of all composing tokens, (3) _first_: the log probability of the first composing token. Formally, the score \(s\) of candidate \(c\) is computed as: \[\hat{s}_{i}=\log\left(p([MASK]_{i}=c_{i})\right)\] \[s=\text{Pooling}(\hat{s}_{1},\hat{s}_{2},\dots,\hat{s}_{n})\]

**Single Mask** We use one single [MASK] token to obtain an independent prediction of each token. The log probability of each composing token \(c_{i}\) equals the log probability of recovering \(c_{i}\) in the same [MASK], and the candidate is scored with the proposed pooling methods. \[\hat{s}_{i}=\log\left(p([MASK]=c_{i})\right)\]

#### 3.1.3 Metrics

We rank the candidates by their log probability scores and use the top K Recall (R@K) and Mean Reciprocal Rank (MRR) as our evaluation metrics. Since MRR only evaluates the ability to retrieve the first ground truth, we additionally take the average rank of all gold labels as the final rank when computing mean reciprocal rank to evaluate models' ability to retrieve all the ground truths, and denote it as MRR\({}_{a}\). Formally, MRR\({}_{a}\) is defined as: \[\text{MRR}_{a}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{|G_{i}|}\sum_{g\in G_{i}}\text{rank}(g)\right)^{-1}\] where \(n\) is the number of samples in the dataset and \(G_{i}\) is the gold label set of the \(i\)th sample.

\begin{table} \begin{tabular}{c l c} \hline \hline **Ontological Rel.** & **Manual Template** & **Soft Template** \\ \hline \multirow{3}{*}{type} & Lionel Messi is a [MASK]. & \multirow{3}{*}{Lionel Messi **asl [MASK]**. \\ & Lionel Messi **has class** [MASK]. & \\ & Lionel Messi **is a particular** [MASK]. & \\ \hline \multirow{3}{*}{subclass of} & Person **is a** [MASK]. & \multirow{3}{*}{Person **has** **superclass** [MASK]. \\ & Person **is a particular** [MASK]. & \\ \hline \multirow{2}{*}{subproperty of} & Member of sports team **implies** [MASK]. & Member of sports team **<s1>** **<s2>** **<s3>** [MASK]. \\ \hline \multirow{2}{*}{domain} & **One has to be a particular** [MASK] & \multirow{2}{*}{Member of sports team **<s1>** **<s2>** **<s3>** [MASK]. \\ & to be a player at a sports team. & \\ \hline \multirow{2}{*}{range} & **One has to be a particular** [MASK] & \multirow{2}{*}{Member of sports team **<s1>** **<s2>** **<s3>** [MASK]. \\ & to have a player at that. & \\ \hline \hline \end{tabular} \end{table} Table 3: Manual and soft templates used in prompt-based probing. In soft templates, **<s1>** **<s2>** and **<s3>** correspond to soft tokens.

### Probing Methods for Reasoning

We explain how we concatenate the premises and hypothesis in the textual input, exclude the models' memory of hypotheses and split a set of premises based on how well the knowledge they represent is memorized by the model. We follow the candidate scoring methods proposed in Sec. 3.1.2 and evaluation metrics in Sec. 3.1.3.

#### 3.2.1 Prompt Templates

Apart from the prompt templates for our concerned ontological relationships introduced in Sec. 3.1.1, we further add conjunction tokens between the premises and hypothesis, which can be either manually designed or automatically tuned.
**Manual Conj.** As in Figure 1(b), we use a conjunctive adverb _therefore_ between the premises and hypothesis. It is kept when there is no premise explicitly given in the input to exclude the effect of the template on probing results under different premise settings.

**Soft Conj.** We can also use soft conjunctions by adding a soft token between premises explicitly given in the input and a soft token between the premises and the hypothesis. Therefore, the input would be "\(\mathcal{P}_{1}\) <s4> \(\mathcal{P}_{2}\) <s5> \(\mathcal{H}\)". The soft templates used in \(\mathcal{P}_{1},\mathcal{P}_{2}\) and \(\mathcal{H}\) are loaded from the learned soft prompts in memorizing tasks and finetuned together with soft conjunctions.

#### 3.2.2 Reasoning with Pseudowords

When testing the reasoning ability of PLMs, we replace the specific instances, classes, and properties in the hypothesis prompt with _pseudowords_ to prevent probing the memorization of hypotheses. Pseudowords Schutze (1998); Zhang and Pei (2022); Goodwin et al. (2020) are artificially constructed words without any specific lexical meaning. For example, the reasoning prompt for the transitivity of subclass (i.e., rule rdfs9) is "[X] is a person. Person is an animal. Therefore, [X] is a particular [MASK].", where [X] is a pseudoword. Inspired by Karidi et al. (2021), we obtain pseudowords for PLMs by creating embeddings without special semantics. Specifically, we sample embeddings at a given distance from the [MASK] token, as the [MASK] token can be used to predict all the words in the vocabulary and appear anywhere in the sentence. The sampling distance \(d\) is set to be smaller than the minimum L2 distance between [MASK] and any other tokens in the static embedding space. Formally: \[d=\alpha\cdot\min_{t\in\mathcal{V}}\|\mathbf{z}_{t}-\mathbf{z}_{[MASK]}\|_{2}\] where \(\mathbf{z}_{t}\) is the static embedding of token \(t\) and \(\alpha\in(0,1)\) is a coefficient. Moreover, we require that the distance between two pseudowords is at least the sampling distance \(d\) to ensure they can be distinguished from each other.

#### 3.2.3 Classifying Premises: Memorized or not

To determine whether a premise is memorized by the model when it is not explicitly given in the input, we employ a classifying method based on the rank of the correct answer in the memorizing task to sort and divide the premise set. The first half of the premise set is regarded as memorized, and the second half is not. Each rule consists of two premises and we classify them separately. For \(\mathcal{P}_{1}\), which involves knowledge of subclass, subproperty, domain or range tested in the memorizing task, we can leverage previously calculated reciprocal rank during the evaluation. Premises are then sorted in descending order by the reciprocal rank. We conduct the same tests on \(\mathcal{P}_{2}\), which involves knowledge of pseudowords, to examine model predispositions towards specific predictions and classify whether \(\mathcal{P}_{2}\) is memorized or not. Finally, we form our test set by combining premises according to the entailment rule and how each premise is given.

## 4 Results and Findings

In this section, we introduce the performance of PLMs2 on the test sets of memorizing and reasoning tasks, and analyze the results to posit a series of findings. We then analyze the effectiveness of different prompts. Detailed experimental results can be found in Appendix C.
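For illustration, the pseudoword construction of Sec. 3.2.2 can be sketched as follows (a minimal example on BERT's static input embeddings via the HuggingFace transformers library; the value of \(\alpha\) is an arbitrary choice, not the one used in the experiments).

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

def sample_pseudowords(n, alpha=0.5):
    """Sample n pseudoword embeddings at distance d from [MASK] in the static
    embedding space, keeping any two samples at least d apart."""
    emb = model.get_input_embeddings().weight.detach()        # (vocab_size, dim)
    z_mask = emb[tokenizer.mask_token_id]
    dists = torch.linalg.norm(emb - z_mask, dim=-1)
    dists[tokenizer.mask_token_id] = float("inf")             # exclude [MASK] itself
    d = alpha * dists.min()
    pseudo = []
    while len(pseudo) < n:
        direction = torch.randn_like(z_mask)
        cand = z_mask + d * direction / direction.norm()      # point on the radius-d sphere
        if all((cand - p).norm() >= d for p in pseudo):
            pseudo.append(cand)
    return torch.stack(pseudo)

print(sample_pseudowords(2).shape)    # torch.Size([2, hidden_size])
```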
Footnote 2: We use variants of BERT and RoBERTa models from [https://huggingface.co](https://huggingface.co). ### Memorizing Task The baseline model used for the memorizing task is a frequency-based model which predicts a list of gold labels in the training set based on the frequency at which they appear, followed by a random list of candidates that are not gold labels in the training set. It combines prior knowledge and random guesses and is stronger than a random baseline. The experimental results of the memorizing task are summarized in Table 4, from which we can observe that: (1) The best performance of PLMs is better than the baseline on every task except for DM. On DM, the baseline achieves higher MRR. If taking all three metrics into account, the best performance of PLMs still surpasses the performance of the baseline. (2) Except for DM, BERT models achieve much better performance than the baseline in all subtasks and all metrics. Taking an average of the increase in each metric, they outperform the baseline by 43-198%. Only BERT-base-uncased and BERT-large-cased outperform the baseline in DM by a small margin of 1% and 7%. (3) RoBERTa models generally fall behind BERT, showing a 38-134% improvement compared with the baseline except for DM. (4) Despite a significant improvement from the baseline, the results are still not perfect in all subtasks. PLMs can memorize certain ontological knowledge but not perfectly.Based on the above observation, we can conclude that PLMs have a certain memory of the concerned ontological relationships and the knowledge can be accessed via prompt, allowing them to outperform a strong baseline. It proves that during pretraining, language models learn not only facts about entities but also their ontological relationships, which is essential for a better organization of world knowledge. However, the memorization is not perfect, urging further efforts on ontology-aware pretraining. Large models are not necessarily better at memorizing ontological knowledge.According to Petroni et al. (2019), models with larger sizes appear to store more knowledge and achieve better performance in both knowledge probing tasks and downstream NLP tasks. However, as shown in Table 4, BERT-large-uncased is worse than its smaller variant under most circumstances, and RoBERTa-large is worse than RoBERTa-base in TP and DM. It demonstrates that the scale of model parameters does not necessarily determine the storage of ontological knowledge. ### Reasoning Task We fix the usage of multiple masks and mean-pooling in the reasoning experiments as they generally outperform other settings in the memorizing task (see Appendix B). 
We take an average of the MRR metrics using different templates and illustrate the results of BERT-base-cased and RoBERTa-base in Figure 2. With neither premise given, the rank of the ground truth is usually low. It shows that models have little idea of the hypothesis, which is reasonable because the information of pseudowords is probed. With premises implicitly or explicitly given, especially \(\mathcal{P}_{1}\), the MRR metrics improve to varying degrees. Moreover, results show that BERT-base-cased has better reasoning ability with our concerned ontological entailment rules than RoBERTa-base.

\begin{table} \begin{tabular}{c|c|c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Task} & \multirow{2}{*}{Metric} & \multicolumn{10}{c}{Model} \\ \cline{3-13} & & Frequency & BERT-B-C & \multicolumn{2}{c|}{BERT-B-U} & \multicolumn{2}{c|}{BERT-L-C} & \multicolumn{2}{c|}{BERT-L-U} & \multicolumn{2}{c|}{RoBERTa-B} & \multicolumn{2}{c}{RoBERTa-L} \\ & & Baseline & manT & softT & manT & softT & manT & softT & manT & softT & manT & softT & manT & softT \\ \hline \multirow{4}{*}{TP} & R@1 & 15.4 & 18.9 & 20.1 & 21.2 & **24.8** & 15.7 & 22.9 & 22.3 & 13.1 & 6.6 & 15.9 & 9.0 & 8.7 \\ & R@5 & 15.6 & 41.0 & 46.4 & 48.8 & 49.3 & **46.3** & **50.6** & 42.1 & 43.9 & 18.3 & 41.1 & 39.1 & 22.4 \\ & MRR\({}_{a}\) & 1.3 & 2.0 & 1.9 & **3.1** & 2.7 & 2.4 & 2.0 & 1.8 & 2.0 & 0.9 & 1.9 & 1.6 & 0.9 \\ & MRR & 19.6 & 28.4 & 31.2 & 33.2 & 35.1 & 25.0 & **36.0** & 32.1 & 23.9 & 11.9 & 28.1 & 23.7 & 14.9 \\ \hline \multirow{4}{*}{SCO} & R@1 & 8.1 & 11.0 & 29.7 & 15.1 & **37.9** & 14.0 & 35.0 & 11.6 & 31.0 & 9.8 & 24.5 & 9.0 & 22.8 \\ & R@5 & 38.9 & 38.1 & 47.9 & 43.5 & **55.9** & 43.8 & 54.6 & 35.4 & 53.5 & 22.1 & 41.4 & 39.1 & 42.8 \\ & MRR\({}_{a}\) & 7.4 & 5.3 & 11.8 & 6.6 & **13.3** & 6.7 & 9.7 & 3.7 & 8.9 & 4.2 & 8.5 & 4.5 & 5.5 \\ & MRR & 23.7 & 22.7 & 39.2 & 29.0 & **46.4** & 25.8 & 41.2 & 21.9 & 41.9 & 16.7 & 29.7 & 24.6 & 32.9 \\ \hline \multirow{4}{*}{SPO} & R@1 & 25.6 & 23.1 & 38.5 & 20.5 & 38.5 & 18.0 & 38.5 & 23.1 & **41.0** & 10.3 & 35.9 & 10.3 & 41.0 \\ & R@5 & 28.2 & 64.1 & 64.1 & 69.2 & **74.4** & 59.0 & 76.9 & 69.2 & 64.1 & 33.3 & 61.5 & 30.8 & 69.2 \\ & MRR\({}_{a}\) & 15.8 & 15.8 & 23.8 & 19.5 & 29.3 & 19.5 & **29.8** & 19.0 & 28.8 & 8.8 & 25.1 & 10.0 & 29.6 \\ & MRR & 31.2 & 39.2 & 43.7 & 38.3 & 53.5 & 34.5 & 49.8 & 39.3 & 52.9 & 20.6 & 47.4 & 21.9 & **53.8** \\ \hline \multirow{4}{*}{DM} & R@1 & 43.3 & 43.3 & 30.0 & 43.3 & 40.0 & **50.0** & 40.0 & 33.3 & 26.7 & 6.7 & 43.3 & 13.3 & 16.7 \\ & R@5 & 60.0 & 53.3 & 60.0 & 53.3 & **63.3** & 60.0 & **63.3** & 53.3 & 50.0 & 20.0 & **63.3** & 46.7 & 50.0 \\ & MRR & **50.9** & 47.6 & 40.7 & 49.3 & 50.0 & 50.3 & 48.7 & 43.2 & 33.5 & 15.3 & 49.0 & 27.4 & 25.5 \\ \hline \multirow{4}{*}{RG} & R@1 & 10.7 & 46.4 & **57.1** & 42.9 & **57.1** & **57.1** & **57.1** & **64.4** & 53.6 & 32.1 & 46.4 & 17.9 & 42.9 \\ & R@5 & 53.6 & 67.9 & 67.9 & 75.0 & 75.0 & **78.6** & 75.0 & 78.6 & 75.0 & 57.1 & 53.6 & 53.6 & 71.4 \\ \cline{1-1} & MRR & 31.2 & 59.1 & 62.7 & 56.0 & 63.9 & **66.8** & 66.2 & 61.1 & 59.5 & 44.0 & 50.3 & 33.2 & 48.5 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance (%) of the memorizing task. B/L stands for base/large and C/U stands for cased/uncased. The distinction between the prompt templates (manT for manual template and softT for soft template) is preserved, and for the other settings, such as the number of [MASK] tokens and pooling methods, we use the ones that give the best results and discuss their impacts in Appendix B.
PLMs have a limited understanding of the semantics behind ontological knowledge. To reach a more general conclusion, we illustrate the overall reasoning performance in Figure 3 by averaging over all the entailment rules and PLMs, and find that: (1) When \(\mathcal{P}_{1}\) is explicitly given in the input text, models are able to significantly improve the rank of gold labels. As \(\mathcal{P}_{1}\) contains the ground truth in its context, it raises doubt about whether the improvement is obtained through logical reasoning or just priming (Misra et al., 2020). (2) Explicitly giving \(\mathcal{P}_{2}\) introduces additional tokens that may not be present in gold labels, making \(\mathcal{P}_{1}/\mathcal{P}_{2}=\text{EX}/\text{EX}\) worse than \(\mathcal{P}_{1}/\mathcal{P}_{2}=\text{EX}/\text{IM}\). (3) When premises are implicitly given, the MRR metrics are higher than when they are not given. It implies that, to some extent, PLMs can utilize the implicit ontological knowledge and select the correct entailment rule to make inferences. (4) However, none of the premise combinations can give near-perfect reasoning performance (MRR metrics close to 1), suggesting that PLMs only have a weak understanding of ontological knowledge.

PLMs also struggle when knowledge is expressed in a paraphrased form. In rule rdfs7, the masked conclusion has to be completed with the ground truth, which is the manually-designed pattern of a particular property. Compared with rule rdfs5 shown in Figure 2(c), where \(\mathcal{P}_{1}\) contains the surface form of the correct property, the MRR of BERT-base-cased on rdfs7 decreases by 23%, 49% and 29% when \(\mathcal{P}_{1}\) is explicitly given and \(\mathcal{P}_{2}\) is not, implicitly and explicitly given, respectively. Though the MRR of RoBERTa-base on rdfs7 increases when \(\mathcal{P}_{2}\) is not given, it decreases by 40% and 15% when \(\mathcal{P}_{2}\) is implicitly and explicitly given. This suggests that PLMs fail to understand the semantics of some properties, thus demonstrating a limited understanding of ontological knowledge.

### Effectiveness of Prompts

In this section, we discuss how prompt templates affect performance. In the memorizing task, Table 4 shows that using soft templates generally improves the performance of memorizing tasks, in particular TP, SCO and SPO. It suggests that it is non-trivial to extract knowledge from PLMs. Meanwhile, only a few models perform better with soft templates on DM and RG with a relatively marginal improvement. This could be explained by the fact that both the manual templates and semantics of domain and range constraints are more complex than those of other relationships. Therefore, it is difficult for models to capture them with only three soft tokens. We also note that RoBERTa models appear to benefit more from soft templates than BERT models, probably due to their poor performance with manual templates.

## 5 Exploration on ChatGPT

### Probing for Memorization

We additionally probe ChatGPT (GPT-3.5-turbo) on the memorizing tasks, asking it to choose the gold answer from 20 given candidates. ChatGPT outperforms BERT-base-uncased significantly in most of the memorizing tasks associated with ontological knowledge.

### Probing for Reasoning Ability

Since we cannot input embeddings in the GPT-3.5-turbo API, we use \(X\) and \(Y\) to represent pseudowords as they are single letters that do not convey meanings. However, ChatGPT cannot generate any valid prediction without sufficient context regarding these pseudowords. Therefore, \(\mathcal{P}_{2}\) needs to be explicitly provided to describe the characteristics or relations of the pseudowords. We then explore the ability of ChatGPT to select the correct answer from 20 candidates with different forms of \(\mathcal{P}_{1}\).
In this task, \(\mathcal{P}_{1}\) is regarded as memorized if the model can correctly choose the gold answer from the given 20 candidates in the memorizing task. Based on the results presented in Table 6, ChatGPT demonstrates high accuracy when \(\mathcal{P}_{1}\) is either implicitly or explicitly given, suggesting its strong capacity to reason and understand ontological knowledge. Due to a substantial disparity in the knowledge memorized by ChatGPT compared to other models (as shown in section 5.1), their performance is not directly comparable when \(\mathcal{P}_{1}\) is not given or implicitly given. Therefore, we only compare ChatGPT and BERT-base-uncased when \(\mathcal{P}_{1}\) is explicitly given. Results show that ChatGPT significantly outperforms BERT-base-uncased in explicit reasoning (97.1% vs. 88.2%). ## 6 Related Work Knowledge ProbingLanguage models are shown to encode a wide variety of knowledge after being pretrained on a large-scale corpus. Recent studies probe PLMs for linguistic knowledge (Vulic et al., 2020; Hewitt and Manning, 2019), world knowledge (Petroni et al., 2019; Jiang et al., 2020; Safavi and Koutra, 2021), actionable knowledge (Huang et al., 2022), etc. via methods such as cloze prompts (Beloucif and Biemann, 2021; Petroni et al., 2020) and linear classifiers (Hewitt and Liang, 2019; Pimentel et al., 2020). Although having explored extensive knowledge within PLMs, previous knowledge probing works have not studied ontological knowledge systematically. We cut through this gap to investigate how well PLMs know about ontological knowledge and the meaning behind the surface form. Knowledge ReasoningReasoning is the process of drawing new conclusions through the use of existing knowledge and rules. Progress has been reported in using PLMs to perform reasoning tasks, including arithmetic (Wang et al., 2022; Wei et al., 2022), commonsense (Talmor et al., 2019, 2020; Wei et al., 2022), logical (Creswell et al., 2022) and symbolic reasoning (Wei et al., 2022). These abilities can be unlocked by finetuning a classifier on downstream datasets (Talmor et al., 2020) or using proper prompting strategies (e.g., chain of thought (CoT) prompting (Wei et al., 2022) and generated knowledge prompting (Liu et al., 2022)). This suggests that despite their insensitivity to negation (Ettinger, 2020; Kassner and Schutze, 2020) and over-sensitivity to lexicon cues like priming words (Helwe et al., 2021; Misra et al., 2020), PLMs have the potential to make inferences over implicit knowledge and explicit natural language statements. In this work, we investigate the ability of PLMs to perform logical reasoning with implicit ontological knowledge to examine whether they understand the semantics beyond memorization. ## 7 Conclusion In this work, we systematically probe whether PLMs encode ontological knowledge and understand its semantics beyond the surface form. Experiments show that PLMs can memorize some ontological knowledge and make inferences based on implicit knowledge following ontological entailment rules, suggesting that PLMs possess a certain level of awareness and understanding of ontological knowledge. However, it is important to note that both the accuracy of memorizing and reasoning is less than perfect, and the difficulty encountered by PLMs when processing paraphrased knowledge is confirmed. These observations indicate that their knowledge and understanding of ontology are limited. 
Therefore, enhancing the knowledge and understanding of ontology would be a worthy future research goal for language models. Our exploration into ChatGPT shows an improved performance in both memorizing and reasoning tasks, signifying the potential for further advancements. \begin{table} \begin{tabular}{l|c|c c c c c c} \hline \hline \multirow{2}{*}{\(\mathcal{P}_{1}\)} & \multirow{2}{*}{AVG} & \multicolumn{5}{c}{RDFS Rule} \\ \cline{3-8} & & rdfs2 & rdfs3 & rdfs5 & rdfs7 & rdfs9 & rdfs11 \\ \hline NO & 13.5 & 25.0 & 16.7 & 0.0 & 0.0 & 19.0 & 20.8 \\ IM & 82.8 & 76.9 & 86.4 & 71.5 & 77.7 & 91.9 & 92.4 \\ EX & 97.1 & 100.0 & 96.4 & 94.9 & 96.9 & 97.4 & 97.0 \\ \hline \hline \end{tabular} \end{table} Table 6: Accuracy (%) achieved by ChatGPT on each reasoning subtask with \(\mathcal{P}_{2}\) explicitly given. ### Limitations The purpose of our work is to evaluate the ontological knowledge of PLMs. However, a sea of classes and properties exist in the real world and we only cover a selective part of them. Consequently, the scope of our dataset for the experimental analysis is limited. The findings from our experiments demonstrate an imperfect knowledge and understanding obtained by the models, indicating a tangible room for enhancement in both ontological knowledge memorization and understanding and a need for a better ability to address paraphrasing. These observations lead us to contemplate refining the existing pretraining methods to help language models achieve better performance in related tasks. ## Ethics Statement We propose our ethics statement of the work in this section: (1) Dataset. Our data is obtained from DBpedia and Wikidata, two publicly available linked open data projects related to Wikipedia. Wikidata is under the Creative Commons CC0 License, and DBpedia is licensed under the terms of the Creative Commons Attribution-ShareAlike 3.0 license and the GNU Free Documentation License. We believe the privacy policies of DBpedia3 and Wikidata4 are well carried out. We inspect whether our dataset, especially instances collected, contains any unethical content. No private information or offensive topics are found during human inspection. (2) Labor considerations. During dataset construction, the authors voluntarily undertake works requiring human efforts, including data collection, cleansing, revision and design of property patterns. All the participants are well informed about how the dataset will be processed, used and released. (3) Probing results. As PLMs are pretrained on large corpora, they may give biased results when being probed. We randomly check some probing results and find no unethical content in these samples. Therefore, we believe that our study does not introduce additional risks. Footnote 3: [https://www.dbpedia.org/privacy/](https://www.dbpedia.org/privacy/) Footnote 4: [https://foundation.wikimedia.org/wiki/Privacy_policy](https://foundation.wikimedia.org/wiki/Privacy_policy) ## Acknowledgement This work was supported by the National Natural Science Foundation of China (61976139) and by Alibaba Group through Alibaba Innovative Research Program.
2309.12292
Constraining dark energy cosmologies with spatial curvature using Supernovae JWST forecasting
Recent cosmological tensions, in particular, to infer the local value of the Hubble constant $H_0$, have developed new independent techniques to constrain cosmological parameters in several cosmologies. Moreover, even when the concordance Cosmological Constant Cold Dark Matter ($\Lambda$CDM) model has been well constrained with local observables, its physics has shown deviations from a flat background. Therefore, to explore a possible deviation from a flat $\Lambda$CDM model that could explain the $H_0$ value in tension with other techniques, in this paper we study new cosmological constraints in spatial curvature dark energy models. Additionally, to standard current Supernovae Type Ia (SNIa) catalogs, we extend the empirical distance ladder method through an SNIa sample using the capabilities of the James Webb Space Telescope (JWST) to forecast SNIa up to $z \sim 6$, with information on the star formation rates at high redshift. Furthermore, we found that our constraints provide an improvement in the statistics associated with $\Omega_{m}$ when combining SNIa Pantheon and SNIa Pantheon+ catalogs with JW forecasting data.
Pablo M. Maldonado Alonso, Celia Escamilla-Rivera, Rodrigo Sandoval-Orozco
2023-09-21T17:53:54Z
http://arxiv.org/abs/2309.12292v2
# Constraining dark energy cosmologies with spatial curvature using Supernovae JWST forecasting ###### Abstract Recent cosmological tensions, in particular, to infer the local value of the Hubble constant \(H_{0}\), have developed new independent techniques to constrain cosmological parameters in several cosmologies. Moreover, even when the concordance Cosmological Constant Cold Dark Matter (\(\Lambda\)CDM) model has been well constrained with local observables, its physics has shown deviations from a flat background. Therefore, to explore a possible deviation from a flat \(\Lambda\)CDM model that could explain the \(H_{0}\) value in tension with other techniques, in this paper we study new cosmological constraints in spatial curvature dark energy models. Additionally, to standard current Supernovae Type Ia (SNIa) catalogs, we extend the empirical distance ladder method through an SNIa sample using the capabilities of the James Webb Space Telescope (JWST) to forecast SNIa up to \(z\sim 6\), with information on the star formation rates at high redshift. Furthermore, we found that our constraints provide an improvement in the statistics associated with \(\Omega_{m}\) when combining SNIa Pantheon and SNIa Pantheon+ catalogs with JW forecasting data. ## 1 Introduction The first direct evidence of the late time cosmic acceleration was obtained through measurements of Type Ia Supernovae (SNIa) [1; 2]. Over the years, subsequent observations confirmed this result, such as the cosmic microwave background (CMB) [3], baryon acoustic oscillations (BAO) [4; 5], and weak gravitational lensing [6]. However, the capability of supernovae to prove the accelerating expansion remains invaluable since these objects are bright enough to be seen at large distances. Furthermore, SNIa are common enough to be found in large quantities, and their properties make them standardized with a precision of \(\sim 0.1\) mag in brightness or \(\sim 5\%\) in distance per object [7]. Also, the increasing number of SNIa observations has considerably reduced the associated statistical errors and the uncertainties in estimating cosmological parameters dominated by them [8; 9]. Nevertheless, the nature of this cosmic acceleration is one of the current inquiries in precision cosmology since still we do not fully understand the component with which it is associated, the _dark energy_. However, due to the well-constrained \(\Lambda\)-Cold Dark Matter (\(\Lambda\)CDM) model, this dark energy could be evidence for a component with a negative Equation-of-State (EoS) constant value [10; 11; 12] or a dynamical EoS [13; 14; 15]. Furthermore, dark energy can be associated with components that can be derived from first principles in alternative theories of gravity [16; 17; 18] and extended theories of gravity [19; 20], showing a late cosmic acceleration. On the nature of dark energy, several missions have been working to find better cosmological constraints adjoint with better systematics and increasing data baselines. Some of them as large-scale structure (LSS) observations with measurements from the Dark Energy Survey (DES) [21], the Dark Energy Spectroscopic Instrument (DESI) [22], the Legacy Survey of Space and Time (LSST) on the Vera Rubin Observatory [23], and Euclid [24], among others, have extended the concordance cosmological model to include EoS parameters of dark energy with some shifts within \(1\sigma\). 
In particular, the recently launched James Webb Space Telescope (JWST) is a very interesting experiment that can help to elucidate the nature of dark energy. JWST is a space-based observatory with a 6.5-meter primary mirror, operating in visible and infrared light, equipped with four main science instruments: a near-infrared camera, a near-infrared spectrograph, a near-infrared imager and slitless spectrograph, and a mid-infrared camera and spectrograph [25]. It is expected to have a lifespan of about 20 years, during which the research will be focused on several astrophysical and cosmological areas such as galaxy formation in the early universe [26; 27; 28; 29], exoplanet detection [30; 31; 32], metallicity and chemical exploration [33; 34; 35], and life detection [36; 37]. All these potential and current observations could allow us to explore physics further than before, e.g. by testing dark energy models with structure formation [38; 39], corroborating the Cepheid calibrations in the distance ladder [40], and adding more SNIa observations [9], cosmic chronometers [41], and X-ray/UV quasars [42; 43] to their constraint analyses. Recently, numerous studies related to the implications of JWST for cosmology have been developed. In [9], a JWST simulated sample of SNIa within a redshift range \(2\lesssim z\lesssim 6\) was employed to constrain standard cosmological parameters. Using combinations of the mock sample and the SN Pantheon dataset [44], it was possible to constrain dark energy models with constant EoS. This analysis was performed using two different forms of the intrinsic evolution of the standardized SNIa luminosity distance: on the one hand, a linear redshift dependence of the magnitude evolution is assumed; on the other hand, a logarithmic evolution can be considered. Analysing the cases with and without systematic evolution, it was found that the addition of the simulated SNIa sample would successfully remove the evolutionary effects. However, even though the SN Pantheon dataset size was increased by a factor of \(\approx 16\) data points, it is still not able to constrain the systematic evolution and the cosmological parameters as effectively as the very high redshift SN data. A further study about the first galaxies discovered by JWST was carried out in [45]. According to several works [46; 47; 48; 49; 50; 51; 52], there are some common aspects within the structure morphology which indicate that the galaxies discovered by JWST would not have had enough time to evolve into what is observed today. It is important to notice that this study is within the framework of the standard cosmological \(\Lambda\)CDM model. To this end, the new JWST dataset includes near- and mid-infrared images and near-infrared spectra to perform analyses based on cosmographic theories of the angular size-redshift relationship [45]. The \(\Lambda\)CDM interpretation of the JWST observations is compared with the interpretation based on Zwicky's static universe model [53], where the origin of the cosmological redshift can be explained through a photon-energy loss mechanism. However, the redshifted objects detected by the JWST are not aligned with such an interpretation, although the data from this mission should be increased before drawing any final conclusion. As a step forward, using the JWST capabilities described above, in this work we develop the forecasting of SNIa up to \(z\sim 6\), with information on the Star Formation Rates (SFR) at high redshift. 
Once this data is at hand, we perform a statistical analysis combined with SN Pantheon [44] and SN Pantheon+1 to constrain spatial curvature dark energy cosmologies. We based our cosmological models inspired by bidimensional EoS parameterisation, which preserves the expanding and accelerating behaviour at late times. Our goal is to show that a simple deviation in the spatial curvature of a dark energy EoS model can verify a well-constrained analysis with SNIa JWST forecasting. This paper is divided as follows: In Sec. 2 we summarise the theory behind dark energy bidimensional parameterisations inspired in Taylor series around the cosmological scale factor \(a\). All of these parameterisations are described through their normalised \(E(z)\) Friedmann evolution equation, including the curvature term. Furthermore, we are going to consider standard \(\Lambda\)CDM and \(w\)CDM models in addition to the dark energy cosmologies to proceed with comparisons between them. Also, we include the latest constraints reported in the literature so far. In Sec. 3 we present the methodology employed for observables. We include the description of current SNIa data baselines and how we can proceed with their forecasting using JWST characteristics. In Appendix A we describe the technicalities behind this forecasting. The results on new constraints for the models described are developed in Sec. 4. Finally, the conclusions are presented in Sec. 5. ## 2 Standard dark energy parameterisations The standard cosmological scenario \(\Lambda\)CDM is a remarkable fit for most cosmological data. Nonetheless, we are still searching for the nature of inflation, dark matter, and dark energy. Physical evidence for these components comes only from astrophysical and cosmological observations [54]. Therefore, an increase in experimental sensitivity can produce deviations from the standard \(\Lambda\)CDM scenario that could lead to a deeper understanding of the gravity theory. If it is not a consequence associated with systematic errors, the cosmological tensions [55] existing between the different experimental probes could indicate a failure of the \(\Lambda\)CDM model, and a better cosmological model should be able to be found. In this section, we are going to describe, in addition to the \(\Lambda\)CDM model with EoS \(w=-1\), five dark energy bidimensional in \(z\) parameterisations of \(w(z)\), which can be constant or redshift-dependent. Notice that to describe dark energy, we need to achieve cosmic acceleration with a negative pressure at late times [11]. * \(\Lambda\)**CDM model.** In this model, the universe is composed of cosmological fluids with different EoS's \(w\) that contribute to the energy constraint. At present cosmic times, the non-relativistic matter contribution, \(\Omega_{m}\simeq 0.27\), is the sum of the ordinary baryonic matter term, \(\Omega_{b}\simeq 0.044\), and the (cold) dark matter term, \(\Omega_{c}\simeq 0.22\). Dark energy (\(\Omega_{\Lambda}\simeq 0.73\)) is described by \(w=-1\), associated with a cosmological constant \(\Lambda\) or a vacuum energy density [56]. Radiation represents a negligible contribution, \(\Omega_{r}\simeq 9\times 10^{-5}\), but it dominated the early cosmic stages, after the end of the inflationary stage and before matter-radiation decoupling [57]. 
Additionally, \(\Lambda\)CDM can be characterized with a flat geometry, which corresponds to an energy density parameter, \(\Omega_{\Lambda}=1-\Omega_{m}-\Omega_{r}\), where the only parameter to be constrained is \(\Omega_{m}\). The cosmological evolution for this model can be expressed as \[E(z)\equiv\frac{H(z)}{H_{0}}=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+ \Omega_{\Lambda}},\] (1) where \(H_{0}\) is the Hubble constant today. We also consider the non-flat \(\Lambda\)CDM cosmological model, as an extension of the \(\Lambda\)CDM model but with curvature \(k\neq 0\), with its constraint equation as \(\Omega_{k}=1-\Omega_{m}-\Omega_{r}-\Omega_{\Lambda}\) where \(\Theta=(\Omega_{m},\Omega_{\Lambda})\) is the vector with free parameters. The evolution for this case can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{r}(1+z)^{4}+\Omega_{k}(1+z)^{2}+\Omega_{ \Lambda}}.\] (2) Then, the model-dependent luminous distance can be calculated according to the \(\Omega_{k}\) value [58]: \[D_{L}(z)=\left\{\begin{aligned} &\frac{c}{H_{0}}(1+z)\frac{ \sinh\left(\sqrt{\Omega_{k}}\int_{0}^{z}\frac{dz^{\prime}}{E(z^{\prime})} \right)}{\sqrt{\Omega_{k}}},&\Omega_{k}>0\\ &\frac{c}{H_{0}}(1+z)\int_{0}^{z}\frac{dz^{\prime}}{E(z^{\prime}) },&\Omega_{k}=0\\ &\frac{c}{H_{0}}(1+z)\frac{\sin\left(\sqrt{-\Omega_{k}}\int_{0}^ {z}\frac{dz^{\prime}}{E(z^{\prime})}\right)}{\sqrt{-\Omega_{k}}},& \Omega_{k}<0,\end{aligned}\right.\] (3) where \(c\) the speed of light and \(E(z)\) the background evolution equation of the cosmological models. The base test for \(\Lambda\)CDM is the analysis provided by the Planck collaboration measuring CMB anisotropies finding the base parameters values \(\Omega_{m}=0.316\pm 0.008\) and \(H_{0}=67.27\pm 0.6\) km s\({}^{-1}\) Mpc\({}^{-1}\)[59] in the context of a flat \(\Lambda\)CDM. Using late-time data in [58] was found that using SNIa, BAO, and a quasar sample the values are \(\Omega_{m}=0.300\pm 0.012\), with a fixed \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), while using a non-flat \(\Lambda\)CDM model the results were \(\Omega_{m}=0.364\pm 0.021\) and \(\Omega_{\Lambda}=0.829\pm 0.035\), finding a slight deviation from the flat background. Furthermore, with a Cosmic Chronometers (CC - \(H(z)\)) sample it was found that \(H_{0}=66.7\pm 5.3\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.33^{+0.08}_{-0.06}\) for the same flat model [60]. Using only SNIa Pantheon+ compilation [7] the values for the flat \(\Lambda\)CDM model are \(H_{0}=73.6\pm 1.1\) km s\({}^{-1}\) Mpc\({}^{-1}\). The latter assumes a Gaussian prior with \(\Omega_{m}=0.334\pm 0.018\). While for the non-flat \(\Lambda\)CDM the results are \(\Omega_{m}=0.306\pm 0.057\) and \(\Omega_{\Lambda}=0.625\pm 0.084\) in concordance with the flat counterpart at \(2\sigma\). * \(w\)**CDM model.** The simplest extension of the \(\Lambda\)CDM model is the one in which \(w\neq-1\), yet still constant in time meaning that \(w(z)=w_{0}\). From [9, 58] we express \(E(z)\) for this model as, \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+ w_{0})}}.\] (4) Non-flat \(w\)CDM has \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0})\) as free parameters. However, under flatness assumption, \(\Omega_{k}=0\), so the only free parameters are \(\Theta=(\Omega_{m},w_{0})\). This model reduces to \(\Lambda\)CDM when \(w_{0}=-1\). 
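As a concrete illustration of how Eqs. (1)–(4) translate into observable distances, the following minimal Python sketch evaluates \(E(z)\) and the curvature-dependent luminosity distance of Eq. (3). The function names, the value \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), and the parameters in the example call are illustrative placeholders of our own; they are not taken from the paper's analysis or fits.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light [km/s]

def E_wcdm(z, Om, OL, w0=-1.0):
    """Normalised Hubble rate of Eq. (4); w0 = -1 recovers the LambdaCDM
    case of Eq. (2).  Radiation (Omega_r ~ 9e-5) is neglected here."""
    Ok = 1.0 - Om - OL
    return np.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2
                   + OL * (1 + z)**(3 * (1 + w0)))

def luminosity_distance(z, Om, OL, w0=-1.0, H0=70.0):
    """Luminosity distance in Mpc, following the three cases of Eq. (3)."""
    Ok = 1.0 - Om - OL
    dc, _ = quad(lambda zp: 1.0 / E_wcdm(zp, Om, OL, w0), 0.0, z)
    if Ok > 1e-8:        # open universe: sinh branch
        chi = np.sinh(np.sqrt(Ok) * dc) / np.sqrt(Ok)
    elif Ok < -1e-8:     # closed universe: sin branch
        chi = np.sin(np.sqrt(-Ok) * dc) / np.sqrt(-Ok)
    else:                # flat universe
        chi = dc
    return C_KMS / H0 * (1 + z) * chi

def distance_modulus(z, Om, OL, w0=-1.0, H0=70.0):
    """mu = 5 log10(D_L / 10 pc) with D_L in Mpc."""
    return 5.0 * np.log10(luminosity_distance(z, Om, OL, w0, H0)) + 25.0

# Illustrative comparison of a flat and a slightly closed LambdaCDM model:
print(distance_modulus(1.5, Om=0.31, OL=0.69))   # Omega_k = 0
print(distance_modulus(1.5, Om=0.31, OL=0.72))   # Omega_k = -0.03
```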
Using SNIa, BAO, and quasars the values obtained in [58] were: \(\Omega_{m}=0.369^{+0.022}_{-0.023}\) and \(w_{0}=-1.283^{+0.094}_{-0.027}\), corresponding to a deviation from the \(\Lambda\)CDM model in more than \(1\sigma\) range with the same fixed \(H_{0}\) value. While using a non-flat \(w\)CDM results in \(\Omega_{m}=0.280^{+0.041}_{-0.037}\), \(\Omega_{\Lambda}=1.662^{+0.041}_{-0.048}\) and \(w_{0}=-0.667^{+0.024}_{-0.027}\), where a difference from a flat model is reported of more than \(3\sigma\) using only SNIa and quasars. Furthermore, adding BAO to the quasar sample [61] results in \(\Omega_{m}=0.31\pm 0.03\) and \(w_{0}=-1.00^{+0.14}_{-0.13}\) that is consistent with \(\Lambda\)CDM assuming a Gaussian prior of \(H_{0}=67.32\pm 4.7\) km s\({}^{-1}\) Mpc\({}^{-1}\). When using only SNIa [7] the flat \(w\)CDM model gives \(\Omega_{m}=0.309^{+0.063}_{-0.069}\), \(H_{0}=73.5\pm 1.1\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(w_{0}=-0.90\pm 0.14\), returning the confirmation for a \(\Lambda\)CDM model. * **Chevallier-Polarski-Linder (CPL) model.** One of the most used redshift-dependent parameterisations corresponds to the Chevallier-Polarski-Linder [62; 63] proposal: \(w(z)=w_{0}+w_{a}z/(1+z)\). In which, \(w(z)=w_{0}+w_{a}\) at \(z=\infty\) and \(w(z)=w_{0}\) at \(z=0\), but it diverges in the future for \(z\rightarrow(-1)^{+}\). In this bidimensional model, \(w_{0}\) denotes the dark energy EoS today, and \(w_{a}\) describes its evolution. This parameterisation has several advantages including the well behaviour at high redshift, the linear feature at low redshift, a simple physical interpretation, and the accuracy in reconstructing a scalar field EoS [63]. The normalised Hubble parameter for this model can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3( 1+w_{0}+w_{a})}\exp\!\left(\frac{-3w_{a}z}{1+z}\right)}.\] (5) For the non-flat and flat cases, we consider as free parameters (\(\Omega_{m}\), \(\Omega_{\Lambda}\), \(w_{0}\), \(w_{a}\)) and (\(\Omega_{m}\), \(w_{0}\), \(w_{a}\)), respectively. This model can be reduced to \(\Lambda\)CDM with \(w_{0}=-1\) and \(w_{a}=0\). Using SNIa, BAO, and quasars in [58] was discussed a deviation from the \(\Lambda\)CDM with \(\Omega_{m}=0.354^{+0.032}_{-0.030}\), along with \(w_{0}=-1.323^{+0.103}_{0.112}\) and \(w_{a}=0.745^{+0.483}_{-0.974}\) for a flat CPL model, values that correspond to a confirmation of the \(\Lambda\)CDM model. Using quasars and \(H(z)\) measurements for a non-flat CPL parametrization it was found that \(\Omega_{m}=0.44\pm 0.10\), \(\Omega_{k}=-0.36\pm 0.24\), \(H_{0}=71.8^{+4.6}_{-7.7}\), with \(w_{0}=-1.2\pm 1.0\), and \(w_{a}=-5.0^{+9-0}_{-2.0}\)[64] showing a clear deviation of more than \(2\sigma\) from the flat \(\Lambda\)CDM model. Using SNIa [7] for the flat CPL with a Gaussian prior in \(H_{0}\) results in \(H_{0}=73.3\pm 1.1\) km s\({}^{-1}\) Mpc\({}^{-1}\), with \(\Omega_{m}=0.403^{+0.054}_{-0.098}\), \(w_{0}=-0.93\pm 0.15\) and \(w_{a}=-0.1^{+0.9}_{-2.0}\). This latter corresponds to a flat \(\Lambda\)CDM confirmation. Furthermore, adding BAO, CMB to the previous SNIa sample results in \(H_{0}=67.41^{+0.52}_{-0.82}\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.316^{+0.009}_{-0.005}\) and \(w_{0}=1.267^{+0.196}_{-0.191}\) and \(w_{a}=-3.771^{+2.113}_{-2.496}\)[58]. 
* **Jassal-Bagla-Padmanabhan (JBP) model.** In [65] the parameterisation \(w(z)=w_{0}+w_{a}z/(1+z)^{2}\), in which \(w(0)=w_{0}\), \(w^{\prime}(0)=w_{1}\), and \(w(\infty)=w_{0}\) is presented based on trying to explain the accelerated universe covering both CMB and the SNIa measurements. This model was proposed to solve the high \(z\) issues within the CPL parameterisation [11]. Using the behaviour of this function allows us to have the same EoS at the present epoch and high-\(z\) with a rapid variation at small redshifts. Considering the corresponding term related to curvature, the derivative expression for \(E(z)\) for this model can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+ w_{0})}\exp\!\left[\frac{3w_{a}z^{2}}{2(1+z)^{2}}\right]},\] (6) where \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0},w_{a})\) for the non-flat JBP model, and \(\Theta=(\Omega_{m},w_{0},w_{a})\) for a flat JBP model. Using SNIa, BAO, and quasars the JBP model is tested obtaining \(\Omega_{m}=0.354^{+0.032}_{-0.030}\), \(w_{0}=-1.371\pm 0.141\) and \(w_{a}=1.127^{+1.293}_{-1.547}\)[58] which is considered a deviation from the confirmation for \(\Lambda\)CDM for the \(w_{0}\) value. In [66] using SNIa, BAO, CMB and Gamma-Ray Bursts (GRB) it is obtained \(\Omega_{m}=0.27\pm 0.03\) with \(w_{0}=-1.02\pm 0.04\) and \(w_{a}=0.22\pm 0.23\) using a flat JBP model in concordance with a flat \(\Lambda\)CDM at \(1\sigma\). * **Exponential model.** In [67] was examined five one-parameter dark energy parameterisations with several datasets. In particular, data from CMB observations, Joint light-curve analysis from SNIa observations (JLA), BAO distance measurements, and \(H(z)\). It was concluded that the one-parameter dark energy model can provide a solution to the \(H_{0}\) tension between local measurements and Planck indirect ones. Besides, it was found which of the five models is better fitted to the data used. This model, relatively close to \(\Lambda\)CDM, is the one with an EoS of the form: \(w(z)=w_{0}\exp[z/(1+z)]/(1+z)\), where \(w(0)=w_{0}\) and \(w(z)=0\), for both \(z=\infty\) and \(z\rightarrow{(-1)}^{+}\). As a result, the normalised Hubble parameter can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3} \exp\!\left[3w_{0}\left(\exp\!\left(\frac{z}{1+z}\right)-1\right)\right]}.\] (7) For non-flat exponential model we have \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0})\), while for flat exponential model, \(\Theta=(\Omega_{m},w_{0})\). Using SNIa, quasars, and BAO was obtained for the Exponential model \(\Omega_{m}=0.359^{+0.023}_{-0.024}\) and \(w_{0}=-1.271^{+0.092}_{-0.107}\)[58] showing again a deviation from a flat \(\Lambda\)CDM. In [68] the exponential model is constrained using CMB, SNIa, BAO, and measurements of the distance using Hydrogen II galaxies resulting in \(H_{0}=70.9\pm 7.0\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.284\pm 0.006\) and \(w_{0}=-1.202^{+0.027}_{-0.026}\), imposing a Gaussian local prior in \(H_{0}\). * **Barboza-Alcaniz (BA) model.** In [69] was proposed a dark energy parameterization given by \(w(z)=w_{0}+w_{a}z(1+z)/1+z^{2}\). This is a well-behaved function of redshift throughout the entire cosmic evolution, \(z\in[-1,\infty]\), with \(w(z)=w_{0}+w_{a}\) for \(z=\infty\) and \(w(z)=w_{0}\), when \(z\rightarrow(-1)^{+}\). 
This smooth function allows one to define regions in the \((w_{0},w_{a})\) plane associated with several classes of dark energy models, so that models can be excluded or confirmed based on the constraints from observational data. Thus, it was shown that both quintessence and phantom behaviors have fully acceptable regimes. The \(E(z)\) for the non-flat case of this model can be written as \[E(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^{3(1+w_{0})}\left(1+z^{2}\right)^{\frac{3w_{a}}{2}}}.\] (8) The free parameter sets for the non-flat BA model and the flat BA model are \(\Theta=(\Omega_{m},\Omega_{\Lambda},w_{0},w_{a})\) and \(\Theta=(\Omega_{m},w_{0},w_{a})\), respectively. The analysis using BAO, SNIa and quasars in [58] showed a deviation from the flat \(\Lambda\)CDM, with \(\Omega_{m}=0.307^{+0.044}_{-0.055}\), \(w_{0}=-1.303^{+0.115}_{-0.106}\) and \(w_{a}=1.010^{+0.152}_{-0.466}\), meaning that the quasar sample is responsible for a deviation from the standard model. In [66], using CMB, BAO, SNIa and GRB data results in \(\Omega_{m}=0.28\pm 0.03\), \(w_{0}=-1.13\pm 0.04\) and \(w_{a}=0.37\pm 0.1\), finding a lower deviation from the \(\Lambda\)CDM model, but using only BAO and CMB the standard model is recovered, with \(\Omega_{m}=0.29\pm 0.04\), \(w_{0}=-1.06\pm 0.11\) and \(w_{a}=0.35\pm 0.12\). ## 3 Data treatment: observations and forecastings In this section, we will perform the statistical analysis for the dark energy models with and without curvature, including three different datasets: the SNIa Pantheon and SNIa Pantheon+ samples, along with the extracted simulated data from JWST. * **Pantheon (PN)**[44]: The Pantheon compilation is a combination of SNIa distance measurements covering both low and high redshifts, from \(z\sim 0.01\) up to \(z=2.26\). This sample has shown an improvement in the photometric calibrations on the distance ladder through the light curves, which transform observable quantities into distances, adding up to a total of 1048 data points. * **Pantheon+ (PN\({}^{+}\))**[70; 71]: Pantheon+ is a collection of 18 different SNIa samples based on the Pantheon compilation described above, adding new data points collected from different surveys such as: the Foundation Supernova Survey [72], the Swift Optical/Ultraviolet Supernova Archive (SOUSA) [73], the Lick Observatory Supernova Search LOSS1 [74], the second sample LOSS2 [75], and DES [70]. As a result, Pantheon+ consists of 1701 light curves of 1550 distinct SNIa spanning a redshift range from \(z=0.001\) up to 2.26. For \(H_{0}\), a value of \(73.30\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\) is assumed. This sample is represented in Figure 1 in blue color. 
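Before turning to the fitting procedure, it may help to collect the dark-energy factors that the parameterisations of Eqs. (5)–(8) contribute to \(E(z)\). The sketch below is an illustrative assumption on our part (the function names and structure are not from the paper); it uses the flat-case free parameters, and, combined with the luminosity-distance sketch given after Eq. (4), it yields the model distance moduli entering the fits described next.

```python
import numpy as np

def de_factor(z, w0, wa, model):
    """Dark-energy density factor rho_DE(z)/rho_DE(0) appearing in Eqs. (5)-(8)."""
    zp1 = 1.0 + z
    if model == "CPL":    # Eq. (5)
        return zp1**(3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / zp1)
    if model == "JBP":    # Eq. (6)
        return zp1**(3 * (1 + w0)) * np.exp(1.5 * wa * z**2 / zp1**2)
    if model == "EXP":    # Eq. (7); one-parameter model, wa is unused
        return zp1**3 * np.exp(3 * w0 * (np.exp(z / zp1) - 1.0))
    if model == "BA":     # Eq. (8)
        return zp1**(3 * (1 + w0)) * (1 + z**2)**(1.5 * wa)
    raise ValueError(f"unknown model {model!r}")

def E_of_z(z, Om, OL, w0, wa, model):
    """Normalised Hubble rate with curvature Omega_k = 1 - Omega_m - Omega_Lambda."""
    Ok = 1.0 - Om - OL
    return np.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2
                   + OL * de_factor(z, w0, wa, model))

# CPL, JBP and BA reduce to LambdaCDM for w0 = -1, wa = 0 (the exponential
# model only matches LambdaCDM exactly at z = 0):
z = np.linspace(0.0, 6.0, 7)
lcdm = np.sqrt(0.3 * (1 + z)**3 + 0.7)
for m in ("CPL", "JBP", "BA"):
    print(m, np.allclose(E_of_z(z, 0.3, 0.7, -1.0, 0.0, m), lcdm))
```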
Cosmological parameter constraints have been carried out using the affine-invariant ensemble sampler for Markov Chain Monte Carlo (MCMC) module emcee2, which uses random number generation to explore the parameter space based on the probability function \(P\propto\exp\bigl{(}-\chi^{2}/2\bigr{)}\) to minimize the quantity Footnote 2: emcee.readthedocs.io/en/stable/ \[\chi^{2}_{\text{SNIa}}(\Theta)=\Delta\mu^{T}(z,\Theta)\cdot C^{-1}_{\text{SNIa}}\cdot\Delta\mu(z,\Theta)+\ln\bigg{(}\frac{S}{2\pi}\bigg{)}, \tag{1}\] where \(\Delta\mu(z,\Theta)=\mu(z)_{\text{data}}-\mu(z,\Theta)_{\text{model}}\), \(C_{\text{SNIa}}\) is the covariance matrix of the PN (or PN+) sample, \(S\) is the sum of all the components of \(C^{-1}_{\text{SNIa}}\), \(\mu(z)_{\text{data}}\) is the distance modulus of the PN (PN+) data, and \(\mu(z,\Theta)_{\text{model}}\) is the distance modulus for a cosmological model with a parameter set \(\Theta\)[76]. * **JW mock sample (JW)**: the forecast JWST SNIa sample described in Appendix A. Its covariance matrix has off-diagonal entries of \(\pm 0.002^{2}\), with \(+\) and \(-\) signs assigned randomly for all the matrix; this off-diagonal value is the mean value of the Pantheon covariance matrix [8], and the \(0.15^{2}\) value in the diagonal is the assumed error considering the local determination of the observation errors extrapolated [9]. The covariance matrix has the following form \[C_{\rm JW}=\underbrace{\left(\begin{array}{ccccc}0.15^{2}&\pm 0.002^{2}&\pm 0.002^{2}&\ldots&\pm 0.002^{2}\\ \pm 0.002^{2}&0.15^{2}&\pm 0.002^{2}&\ldots&\pm 0.002^{2}\\ \pm 0.002^{2}&\pm 0.002^{2}&0.15^{2}&\ldots&\pm 0.002^{2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \pm 0.002^{2}&\ldots&\ldots&\ldots&0.15^{2}\end{array}\right)}_{n\times n}, \tag{10}\] where \(n\times n\) denotes the dimension of the matrix and (\(\pm\)) the random assignment of (\(+\)) and (\(-\)) signs. In this case, \[\chi^{2}_{\rm JW}(\Theta)=\Delta\mu^{T}(z,\Theta)\cdot C^{-1}_{\rm JW}\cdot\Delta\mu(z,\Theta)+\ln\bigg{(}\frac{S}{2\pi}\bigg{)}, \tag{11}\] where \(\Delta\mu(z,\Theta)=\mu(z)_{\rm data}-\mu(z,\Theta)_{\rm model}-\Gamma_{0}\), \(C_{\rm JW}\) is the covariance matrix presented in Eq.(10), \(S\) is the sum of all the components of \(C^{-1}_{\rm JW}\), \(\mu(z)_{\rm data}\) is the distance modulus of the JW data, \(\mu(z,\Theta)_{\rm model}\) is the distance modulus for a cosmological model with parameter set \(\Theta\), and \(\Gamma_{0}\) is a magnitude bias added to account for a possible systematic luminosity difference between the JW and PN (PN+) datasets. Notice that, in comparison with [9], our Eq.(11) does not consider logarithmic systematics in the forecasting. For more technical details about the methodology followed here see Appendix A. ## 4 Results: Cosmological constraints In this section, we discuss the constraints for the dark energy models previously described. Tables 1 and 2 report the values of the cosmological parameters involved in each flat and non-flat model, respectively. Figure 1: _Left:_ Histogram of the Pantheon+ data (blue) and the extracted JW (red). _Right:_ Hubble Diagram of the Pantheon data (blue) and the extracted mock JW (red). Additionally, we used the results from the latest SH0ES measurement of the Hubble constant [71], in which \(H_{0}=73.04\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\) was obtained using the local distance ladder via Cepheid calibration, and this was introduced as a fixed value for the Hubble constant in our analyses. Also, an absolute magnitude \(M=-19.263\) was assumed, except for flat \(\Lambda\)CDM where \(M\) was taken as a free parameter. The optimal constraints on the cosmological parameters were derived using the emcee code. 
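To make the likelihood machinery of Eqs. (1) and (10)–(11) concrete, the sketch below builds a toy version of the JW covariance matrix (\(0.15^{2}\) on the diagonal, \(\pm 0.002^{2}\) off-diagonal with random signs, symmetrised so that it is a valid covariance) and evaluates the corresponding \(\chi^{2}\); the commented lines indicate how emcee would be invoked. Names such as mu_of, the prior ranges, and the walker settings are illustrative choices of ours, not the authors' code.

```python
import numpy as np
import emcee

rng = np.random.default_rng(0)

def build_cov_jw(n, diag=0.15**2, off=0.002**2):
    """Toy version of Eq. (10): diagonal variance plus off-diagonal terms with
    random +/- signs, symmetrised so the matrix is a valid covariance."""
    signs = rng.choice([-1.0, 1.0], size=(n, n))
    upper = np.triu(off * signs, k=1)
    cov = upper + upper.T
    np.fill_diagonal(cov, diag)
    return cov

def chi2(mu_data, mu_model, cov_inv, gamma0=0.0):
    """chi^2 of Eqs. (1)/(11); S is the sum of the entries of the inverse covariance."""
    dmu = mu_data - mu_model - gamma0
    S = cov_inv.sum()
    return dmu @ cov_inv @ dmu + np.log(S / (2.0 * np.pi))

def log_prob(theta, z, mu_data, cov_inv, mu_of):
    Om, gamma0 = theta
    if not (0.0 < Om < 1.0 and -1.0 < gamma0 < 1.0):   # flat priors, illustrative
        return -np.inf
    return -0.5 * chi2(mu_data, mu_of(z, Om), cov_inv, gamma0)

# mu_of(z, Om) would be, e.g., the distance-modulus function from the earlier
# sketch; the sampler would then be invoked roughly as follows:
# sampler = emcee.EnsembleSampler(32, 2, log_prob,
#                                 args=(z, mu_data, np.linalg.inv(cov_jw), mu_of))
# sampler.run_mcmc(p0, 5000)
```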
All Confidence Levels (C.L) presented in this work correspond to 68.3 and 95 % i.e., to 1 and \(2\sigma\) respectively. Finally, in the presented results the value of \(\Omega_{k}\) is calculated directly from \(\Omega_{k}=1-\Omega_{m}-\Omega_{\Lambda}\), combining the marginalized distributions of each fractional density using a getdist3 modified version. Footnote 3: getdist.readthedocs.io ### \(\Lambda\)CDM model The constraints for this model are given in Figure 2. As we can notice, using the Pantheon sample gives a relatively lower value of \(\Omega_{m}\) than using the Pantheon+ sample for the flat-\(\Lambda\)CDM model. Comparing \(\Omega_{m}=0.290\pm 0.008\) with \(\Omega_{m}=0.311^{+0.010}_{-0.009}\), shows a tendency due to the addition of JW data. Furthermore, considering the non-flat model results in a higher \(\Omega_{m}\) estimation for the Pantheon+ sample. The non-flat \(\Lambda\)CDM model constrained by JW simulated data tends to reduce the curvature estimation towards flatness \(\Omega_{k}\sim 0\). Let us keep in mind that the results on curvature constraints are negative for all four different dataset combinations. There is a deviation from a flat universe with \(\Omega_{k}=-0.0092\pm 0.0091\) using the Pantheon compilation. Additionally, the flat \(\Lambda\)CDM model constrains \(\Gamma_{0}\) for the JW mock sample lower than \(\Gamma_{0}=0.028\pm 0.018\) mag, which implies that there is no significant difference expected for the calibrated sample. Figure 2: 1-\(2\sigma\) C.L results for the \(\Lambda\)CDM model using SNIa Pantheon–PN (red color), SNIa Pantheon+–PN\({}^{+}\) (orange color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case. Notice that the C.L for \(M\) is not associated with \(\Gamma_{0}\). ### \(w\)CDM model The constraints for this model are given in Figure 3. As we can notice, there is a correlation between the \(w_{0}\) and \(\Omega_{m}\) values present in the flat model, while in the non-flat version of the model, this vanishes and for the Pantheon and JW datasets there is a high error determination in the \(w_{0}-\Omega_{m}\) parameter space. For this model, the fractional matter density has a similar value using Pantheon and Pantheon+ with the introduction of JW as \(\Omega_{m}\sim 0.333\). The curvature estimation is closer to \(\Omega_{k}=0\) as expected using the JW simulated data for the flat model. Furthermore, notice that we have a \(1\sigma\) deviation from a flat model using only the Pantheon+ sample of \(\Omega_{k}=0.27^{+0.17}_{-0.11}\), although both SN samples prefer a non-flat universe. Additionally, \(\Gamma_{0}\) is constrained with a value of \(\Gamma_{0}=0.036^{+0.020}_{-0.022}\) mag using Pantheon+ and JW for the flat model. This means that there is not a significant deviation from the cosmological fits for the \(w\)CDM model using the JW mock sample compared to the observed SN samples. ### Chevallier-Polarski-Linder (CPL) model The constraints for this model are given in Figure 4. It is interesting to note that systematics improve for this model using Pantheon+ in comparison to the previous SN catalog, Pantheon. This is expected due to the density of data points at lower redshifts, where the CPL model can be well-constrained. For the flat model and using Pantheon data, we recover the \(\Lambda\)CDM model with \(w_{0}=-1.111^{+0.110}_{-0.124}\) and \(w_{a}=-1.252^{+1.354}_{-1.709}\) both at \(1\sigma\). 
Something similar happened when we included the JW sample. This trend is confirmed when using Pantheon+ data, e.g. using Pantheon+ and JW results with \(\Omega_{m}=0.323^{+0.022}_{-0.023}\), \(w_{0}=-1.035^{+0.047}_{-0.056}\) and \(w_{a}=0.130^{+0.432}_{-0.37}\). However, in the non-flat model, the estimations change when using solely SN measurements. The curvature estimation is deviated more than \(1\sigma\) as for Pantheon Figure 3: 1-2\(\sigma\) C.L results for the \(w\)CDM model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (orange color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case. \(\Omega_{k}=0.31^{+0.22}_{-0.13}\), and for Pantheon+ \(\Omega_{k}=0.447^{+0.14}_{-0.19}\). These results also deviate from the \(\Lambda\)CDM confirmation as both \(w_{0}\) and \(w_{a}\) do not recover the basic equations with \(w_{0}=-1\) and \(w_{a}=0\). In this model, all the results for \(\Gamma_{0}\) constraints are lower than the simulated 0.15 mag error, and therefore, we do not expect to have any systematic effects on the cosmological parameters contaminated by the simulated magnitude in the JW. The larger estimation, in comparison to previous models, was found using the Pantheon+ sample for which \(\Gamma_{0}=0.039\pm 0.022\) mag. ### Jassal-Bagla-Padmanabhan (JBP) model The constraints for this model are given in Figure 5. As we can notice, the flat version of this model recovers the correlation between the \(\Omega_{m}\) and the \(w_{0}\) parameters for all the datasets while the \(w_{a}\) determination is done with a large error determination. Confirmation of \(\Lambda\)CDM occurs for Pantheon and JW combinations as \(w_{0}=-1\) and \(w_{a}=0\), while this not happen using Pantheon+, which results in a clear deviation as \(w_{a}=1.403^{+0.650}_{-0.845}\). It is worth mentioning that the only negative \(w_{a}\) value is obtained using the Pantheon dataset as \(w_{a}=-0.721^{+1.787}_{-2.492}\). Using Pantheon, Pantheon+ and the Pantheon and JW combination has a parameter \(w_{a}\) determination error larger than the average. For the fractional matter density, with Pantheon data, we obtain a larger estimation than Pantheon+, with \(\Omega_{m}>0.3\) for Pantheon, and \(\Omega_{m}<0.3\) for the Pantheon+ and the combination with JW. For the non-flat model, we obtain different results, as none of the combinations return a deviation from the flat model. With Pantheon+ data we obtain \(\Omega_{k}=0.48^{+0.20}_{-0.10}\), while the JW mock data brings the estimation closer to \(\Omega_{k}=0\) without reaching flatness. It is worth noticing that Pantheon combinations result in negative \(w_{a}\) values, while Pantheon+ combinations are opposite at \(1\sigma\) with a large uncertainty. Figure 4: 1-2\(\sigma\) C.L results for the CPL model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case. Using the JBP model the larger \(\Gamma_{0}\) determination is obtained using the Pantheon+ and JW for the non-flat version as \(\Gamma_{0}=0.040\pm 0.022\) mag, which again discards the systematic error in the magnitude determination for the JW mock data. ### Exponential model The constraints for this model are given in Figure 6. 
As we can notice, a correlation is presented in the parameter space between \(w_{0}\) and \(\Omega_{m}\) for all dataset combinations. For the flat model, all the combinations result in \(w_{0}\sim-1\) within \(1\sigma\), with \(\Omega_{m}\sim 0.3\). For the non-flat version, it is worth noticing that all the combinations using Pantheon data result in curvature estimations close to the ones expected for a flat universe, with the estimation closest to flatness obtained with Pantheon data alone. Using the Pantheon+ dataset alone results in a larger deviation from flatness, \(\Omega_{k}=0.35^{+0.23}_{-0.11}\). This can be alleviated using the JW mock data, for which \(\Omega_{k}=0.135^{+0.095}_{-0.072}\) still deviates by more than \(1\sigma\) but is closer to the range of the value expected for a flat universe. Similar to the flat model, the fractional matter density value is close to \(\Omega_{m}\sim 0.3\), the lowest one being obtained with Pantheon+, \(\Omega_{m}=0.262\pm 0.055\). In the case of \(w_{0}\), the results are slightly lower than \(w_{0}=-1\), with Pantheon+ again giving the lowest estimate, \(w_{0}=-1.86^{+0.74}_{-0.48}\). For the Exponential model, the determination of \(\Gamma_{0}=0.036\pm 0.021\) in combination with Pantheon+ and JW means that the determination of the cosmological parameters in this model is not affected by the systematics in the JW mock data. Figure 5: 1-2\(\sigma\) C.L results for the JBP model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case. ### Barboza-Alcaniz (BA) model The constraints for this model are given in Figure 7. As we can notice, for the flat model the lowest value of \(\Omega_{m}\) is the one obtained with the Pantheon+ and JW combination, resulting in \(\Omega_{m}=0.280^{+0.047}_{-0.081}\). Regarding the values of \(w_{0}\) and \(w_{a}\) for the flat models, all the combination results are consistent at \(1\sigma\) with the \(\Lambda\)CDM model, as all the results fall in the range of \(w_{0}\sim-1\) and \(w_{a}=0\). Nevertheless, it is interesting to notice that the only negative value for \(w_{a}\) is the one obtained using the Pantheon compilation, with \(w_{a}=-0.723^{+1.047}_{-1.531}\). In general, this model agrees with a flat \(\Lambda\)CDM at \(1\sigma\). Meanwhile, the non-flat model shows larger deviations from the \(\Lambda\)CDM model, as the curvature estimation is separated from flatness by more than \(1\sigma\) for all the combinations except the Pantheon compilation, for which \(\Omega_{k}=0.20^{+0.29}_{-0.24}\). The other results prefer \(\Omega_{k}>0\). In this model, the largest \(\Gamma_{0}\) estimate is obtained for the non-flat case with the Pantheon+ and JW mock data combination, \(\Gamma_{0}=0.039\pm 0.021\) mag; similarly to the previous models, the cosmological parameter inference is not affected by the error in magnitude of the simulated dataset. Figure 6: 1-2\(\sigma\) C.L results for the exponential model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (blue color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case. ## 5 Conclusions In this paper, we studied new cosmological constraints on spatial curvature dark energy models. 
We extend the distance ladder method through an SNIa sample using the capabilities of JWST to forecast SNIa up to \(z\sim 6\), considering the information on the star formation rates at high \(z\). Comparing the results shown in Tables 1 and 2, notice that flat \(\Lambda\)CDM, flat \(w\)CDM, and flat exponential are the only models in which the value of \(\Omega_{m}\) obtained using the Pantheon sample is less than \(\Omega_{m}\) for Pantheon+, i.e. \(\Omega_{m,\rm PN}<\Omega_{m,\rm PN^{+}}\). However, in our analysis, when including the JW mock data, the flat \(\Lambda\)CDM, non-flat \(\Lambda\)CDM, non-flat \(w\)CDM, and non-flat exponential are the cosmological models in which \(\Omega_{m}\) for the combination PN+JW is less than \(\Omega_{m}\) for PN\({}^{+}\)+JW, i.e. \(\Omega_{m,\rm PN+JW}<\Omega_{m,\rm PN^{+}+JW}\), showing a lower value of this parameter when we have more SNe at \(z<1\). Regarding the JW forecasting, all models have \(\Gamma_{0,\rm PN^{+}+JW}>\Gamma_{0,\rm PN+JW}\). More SNe at \(z<1\) (e.g. Pantheon+) seem to raise the value of this \(\Gamma_{0}\) parameter associated with JWST. Therefore, according to our \(\Delta\mu\) definition (see below Eq.(11)), it is statistically better to employ Pantheon with the JW forecasting, because the uncertainty of this vector is lower than the one obtained with the Pantheon+ catalog. However, notice that the JW sample has been calibrated with Pantheon; therefore, \(\Gamma_{0}\) shows a preference for this SN sample. We have studied the possibility of non-zero curvature in standard dark energy models, as demonstrated in [9]. The addition of our JW forecasting leads to an improvement in the statistics associated with \(\Omega_{m}\). It is expected that the JWST will observe more luminous structures with a well-treated morphology, which can help us find more robust statistics in dark energy cosmologies. Figure 7: 1-2\(\sigma\) C.L results for the BA model using SNIa Pantheon–PN (orange color), SNIa Pantheon+–PN\({}^{+}\) (yellow color), SNIa Pantheon & JW mock data – PN+JW (green color) and SNIa Pantheon+ & JW mock data – PN\({}^{+}\)+JW (purple color) baselines: _Left:_ Flat case. _Right:_ Non-flat case. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model (flat) & Dataset & \(\Omega_{M}\) & \(w_{0}\) & \(w_{a}\) & \(\Gamma_{0}\) \\ \hline \multirow{8}{*}{\(\Lambda\)CDM} & PN & \(0.290\pm 0.008\) & & & & \\ & PN+JW & \(0.295\pm 0.007\) & & & \(0.005^{+0.018}_{-0.018}\) \\ & PN\({}^{+}\) & \(0.311^{+0.010}_{-0.009}\) & & & & \\ & PN\({}^{+}\)+JW & \(0.312\pm 0.008\) & & & \(0.028\pm 0.018\) \\ \hline \multirow{8}{*}{\(w\)CDM} & PN & \(0.320^{+0.040}_{-0.045}\) & \(-1.073^{+0.100}_{-0.113}\) & & \\ & PN+JW & \(0.327^{+0.023}_{-0.021}\) & \(-1.086^{+0.057}_{-0.065}\) & & \(0.025\pm 0.021\) \\ & PN+ & \(0.334^{+0.036}_{-0.041}\) & \(-1.053^{+0.091}_{-0.095}\) & & \\ & PN\({}^{+}\)+JW & \(0.322^{+0.023}_{-0.022}\) & \(-1.025^{+0.052}_{-0.062}\) & & \(0.036^{+0.020}_{-0.022}\) \\ \hline \multirow{8}{*}{CPL} & PN & \(0.380^{+0.046}_{-0.058}\) & \(-1.111^{+0.110}_{-0.124}\) & \(-1.252^{+1.354}_{-1.709}\) & \\ & PN+JW & \(0.336^{+0.021}_{-0.022}\) & \(-1.065^{+0.061}_{-0.064}\) & \(-0.399^{+0.668}_{-0.798}\) & \(0.022^{+0.023}_{-0.022}\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.343^{+0.054}_{-0.065}\) & \(-1.081^{+0.103}_{-0.118}\) & \(0.150^{+0.638}_{-0.936}\) & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.323^{+0.022}_{-0.023}\) & \(-1.035^{+0.047}_{-0.056}\) & \(0.130^{+0.432}_{-0.537}\) & \(0.039\pm 0.022\) \\ \hline \multirow{8}{*}{JBP} & PN & \(0.358^{+0.064}_{-0.091}\) & \(-1.088^{+0.140}_{-0.162}\) & \(-0.721^{+1.787}_{-2.492}\) & \\ & PN+JW & \(0.310^{+0.036}_{-0.048}\) & \(-1.106^{+0.075}_{-0.079}\) & \(0.873^{+1.216}_{-1.491}\) & \(0.029^{+0.023}_{-0.022}\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.268^{+0.095}_{-0.121}\) & \(-1.013^{+0.142}_{-0.153}\) & \(1.182^{+0.707}_{-1.073}\) & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.280^{+0.039}_{-0.051}\) & \(-1.045^{+0.062}_{-0.063}\) & \(1.403^{+0.650}_{-0.845}\) & \(0.038^{+0.022}_{-0.021}\) \\ \hline \multirow{8}{*}{Exp} & PN & \(0.304^{+0.060}_{-0.066}\) & \(-1.044^{+0.128}_{-0.163}\) & & \\ & PN+JW & \(0.312^{+0.026}_{-0.025}\) & \(-1.062^{+0.060}_{-0.073}\) & & \(0.025^{+0.022}_{-0.020}\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.321^{+0.050}_{-0.058}\) & \(-1.034^{+0.118}_{-0.129}\) & & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.306^{+0.026}_{-0.025}\) & \(-1.001^{+0.058}_{-0.063}\) & & \(0.035^{+0.021}_{-0.023}\) \\ \hline \multirow{8}{*}{BA} & PN & \(0.383^{+0.061}_{-0.111}\) & \(-1.124\pm 0.159\) & \(-0.723^{+1.047}_{-1.531}\) & \\ & PN+JW & \(0.301^{+0.049}_{-0.092}\) & \(-1.045^{+0.109}_{-0.083}\) & \(0.301^{+0.254}_{-0.595}\) & \(0.028\pm 0.022\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.323^{+0.076}_{-0.110}\) & \(-1.044^{+0.162}_{-0.156}\) & \(0.158^{+0.334}_{-0.614}\) & \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.280^{+0.047}_{-0.081}\) & \(-0.980^{+0.099}_{-0.075}\) & \(0.343^{+0.164}_{-0.311}\) & \(0.037\pm 0.021\) \\ \hline \end{tabular} \end{table} Table 1: Best-fit cosmological parameters at \(1\sigma\) for the six flat models obtained by combining the following catalogs: Pantheon (PN) and Pantheon+ (PN\({}^{+}\)) with the JW simulated sample (JW). Empty cells denote parameters not defined in the model. ## Acknowledgments The Authors thank J. Vinko and E. Regos for their insights on using JWST forecasting data. Also, we would like to acknowledge funding from PAPIIT UNAM Project TA100122. CE-R acknowledges the Royal Astronomical Society as FRAS 10147. PMA and RS are supported by the CONACyT National Grant. The computational calculations have been carried out using facilities procured through the Cosmostatistics National Group project. 
This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model (non-flat) & Dataset & \(\Omega_{M}\) & \(\Omega_{\Lambda}\) & \(w_{0}\) & \(w_{a}\) & \(\Gamma_{0}\) & \(\Omega_{k}\) \\ \hline \multirow{7}{*}{\(\Lambda\)CDM} & PN & \(0.332\pm 0.043\) & \(0.761\pm 0.049\) & & & \(-0.092\pm 0.091\) \\ & PN+JW & \(0.306\pm 0.010\) & \(0.725\pm 0.017\) & & \(0.018\pm 0.019\) & \(-0.031\pm 0.022\) \\ & PN+JW & \(0.328\pm 0.037\) & \(0.710\pm 0.041\) & & & \(-0.038\pm 0.076\) \\ & PN+JW & \(0.312\pm 0.010\) & \(0.695\pm 0.017\) & & \(0.027\pm 0.019\) & \(-0.008\pm 0.023\) \\ \hline \multirow{7}{*}{\(w\)CDM} & PN & \(0.302\pm 0.051\) & \(0.59^{+0.11}_{-0.20}\) & \(-1.29^{+0.38}_{-0.18}\) & & \(0.11^{+0.21}_{-0.16}\) \\ & PN+JW & \(0.333^{+0.022}_{-0.019}\) & \(0.619^{+0.066}_{-0.073}\) & \(-1.19^{+0.16}_{-0.11}\) & & \(0.023\pm 0.021\) & \(0.048\pm 0.058\) \\ & PN+JW & \(0.292\pm 0.045\) & \(0.435^{+0.008}_{-0.15}\) & \(-1.58^{+0.46}_{-0.29}\) & & \(0.27^{+0.17}_{-0.11}\) \\ & PN+JW & \(0.334^{+0.022}_{-0.018}\) & \(0.595^{+0.005}_{-0.079}\) & \(-1.158^{+0.124}_{-0.152}\) & & \(0.032\pm 0.021\) & \(0.071\pm 0.061\) \\ \hline \multirow{7}{*}{CPL} & PN & \(0.280\pm 0.077\) & \(0.412^{+0.060}_{-0.16}\) & \(-1.537^{+0.346}_{-0.451}\) & \(-3.6^{+5.4}_{-2.9}\) & & \(0.31^{+0.22}_{-0.13}\) \\ & PN+JW & \(0.358^{+0.022}_{-0.017}\) & \(0.533^{+0.054}_{-0.077}\) & \(-1.238^{+0.126}_{-0.143}\) & \(-2.02^{+1.4}_{-1.3}\) & \(0.025\pm 0.021\) & \(0.109^{+0.061}_{-0.065}\) \\ & PN+JW & \(0.239\pm 0.059\) & \(0.314^{+0.038}_{-0.11}\) & \(-2.059^{+0.465}_{-0.520}\) & \(-1.6^{+5.7}_{-2.7}\) & & \(0.447^{+0.14}_{-0.09}\) \\ & PN+JW & \(0.344^{+0.029}_{-0.020}\) & \(0.539^{+0.069}_{-0.070}\) & \(-1.17^{+0.16}_{-0.11}\) & \(-0.34^{+1.5}_{-0.1}\) & \(0.037\pm 0.021\) & \(0.116\pm 0.057\) \\ \hline \multirow{7}{*}{JBP} & PN & \(0.292\pm 0.097\) & \(0.50^{+0.12}_{-0.24}\) & \(-1.50^{+0.62}_{-0.21}\) & \(-2.1^{+4.8}_{-2.6}\) & & \(0.21^{+0.29}_{-0.21}\) \\ & PN+JW & \(0.358^{+0.031}_{-0.016}\) & \(0.493^{+0.055}_{-0.12}\) & \(-1.372^{+0.225}_{-0.281}\) & \(-3.03^{+0.49}_{-2.1}\) & \(0.031\pm 0.022\) & \(0.149^{+0.092}_{-0.061}\) \\ & PN+JW & \(0.204\pm 0.072\) & \(0.315^{+0.044}_{-0.15}\) & \(-2.320^{+0.781}_{-0.151}\) & \(2.1^{+6.7}_{-4.3}\) & & \(0.48^{+0.20}_{-0.10}\) \\ & PN+JW & \(0.328^{+0.056}_{-0.012}\) & \(0.517^{+0.076}_{-0.16}\) & \(-1.424^{+0.277}_{-0.347}\) & \(0.5^{+2.4}_{-1.4}\) & \(0.040\pm 0.022\) & \(0.155^{+0.10}_{-0.088}\) \\ \hline \multirow{7}{*}{Exp} & PN & \(0.289\pm 0.068\) & \(0.67^{+0.16}_{-0.32}\) & \(-1.24^{+0.50}_{-0.18}\) & & \(0.04^{+0.30}_{-0.24}\) \\ & PN+JW & \(0.335^{+0.035}_{-0.021}\) & \(0.573^{+0.08}_{-0.12}\) & \(-1.33^{+0.31}_{-0.19}\) & & \(0.028\pm 0.022\) & \(0.092^{+0.068}_{-0.076}\) \\ & PN+JW & \(0.262\pm 0.055\) & \(0.391^{+0.046}_{-0.018}\) & \(-1.86^{+0.74}_{-0.48}\) & & \(0.35^{+0.23}_{-0.13}\) \\ & PN+JW & \(0.340^{+0.022}_{-0.018}\) & \(0.526^{+0.073}_{-0.12}\) & \(-1.38^{+0.22}_{-0.22}\) & & \(0.036\pm 0.021\) & \(0.134^{+0.005}_{-0.072}\) \\ \hline \multirow{7}{*}{BA} & PN & \(0.287^{+0.15}_{-0.086}\) & \(0.52^{+0.11}_{-0.23}\) & \(-1.328^{+0.324}_{-0.409}\) & \(-1.1^{+2.6}_{-1.7}\) & & \(0.20^{+0.29}_{-0.24}\) \\ & PN+JW & \(0.363^{+0.029}_{-0.014}\) & \(0.465^{+0.044}_{-0.090}\) & \(-1.447^{+0.205}_{-0.255}\) & \(-2.2^{+2.9}_{-1.3}\) & 
\(0.031\pm 0.022\) & \(0.172^{+0.075}_{-0.054}\) \\ \cline{1-1} & PN\({}^{+}\) & \(0.240\pm 0.069\) & \(0.332^{+0.046}_{-0.13}\) & \(-1.990^{+0.556}_{-0.658}\) & \(-0.7^{+3.2}_{-1.3}\) & & \(0.43^{+0.19}_{-0.10}\) \\ \cline{1-1} & PN\({}^{+}\)+JW & \(0.347^{+0.042}_{-0.012}\) & \(0.453^{+0.053}_{-0.10}\) & \(-1.522^{+0.254}_{-0.280}\) & \(-0.69^{+1.8}_{-0.92}\) & \(0.039\pm 0.021\) & \\ \hline \end{tabular} \end{table} Table 2: Best-fit cosmological parameters at \(1\sigma\) for the six non-flat models obtained by combining the following catalogs: Pantheon (PN) and Pantheon+ (PN\({}^{+}\)) with the JW simulated sample (JW). ## Appendix A SNIa JWST baseline forecasting: data and priors The simulated data set used is derived from the FLARE project, which has the goal of searching for supernovae from Population III stars at redshift \(z\geq 10\)[77] by using the characteristics of the JWST in an area of 0.05 square degrees. Furthermore, the project employs four broadband NIRCam filters (F150W, F200W, F322W2, F444W) with exposure times that can reach \(10\sigma\) limiting magnitudes of \(m\gtrsim 27\) in these filters [9]. By using Monte Carlo methods it was found that, for a specific JWST observing program, at least 200 SNe Ia could be observed. Therefore, the mock sample is constructed using a flat \(\Lambda\)CDM model with \(H_{0}=71.66\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\Omega_{m}=0.3\), as derived using Pantheon data [8]. To ensure consistency between the local and distant samples, we consider a Gaussian error of 0.15 mag associated with the supernova distance moduli and extrapolate it to a larger uncertainty at higher redshift, although this is an oversimplification of the typical distance measurements [80]. For the FLARE project [78] the simulations were done using the standard cosmology and different SFRs according to the redshift at which the simulation is done. Also, the calculation of the occurrence rate is considered in order to estimate the detection range the telescope requires to perform the observations. The SFR functions used are explicitly proposed to have a redshift dependence. For low \(z\lesssim 3\), the function used is [81]: \[\mathrm{SFR}(z)=K\frac{(a+bz)h}{1+(z/c)^{d}}, \tag{10}\] where \(h=H_{0}/(100\,\mathrm{km\,s^{-1}\,Mpc^{-1}})\), \(a=0.017\), \(b=0.13\), \(c=3.3\), and \(d=5.3\). These quantities constrain \(K\) with the observed SN rates. For higher redshifts, between \(3<z\leq 8\), the function used is: \[\mathrm{SFR}(z)\propto(1+z)^{-3.6}, \tag{11}\] and for \(z>8\): \[\mathrm{SFR}(z)\propto(1+z)^{-10.4}. \tag{12}\] Additionally, other SFRs are proposed for the whole interval [82]: \[\mathrm{SFR}(z)=0.015\frac{(1+z)^{2.7}}{1+[(1+z)/2.9]^{5.6}}, \tag{13}\] or [83]: \[\mathrm{SFR}(z)=\frac{C}{10^{A(z-z_{0})}+10^{B(z-z_{0})}}, \tag{14}\] using the constants \(C=0.18\), \(A=-0.997\), \(B=0.241\), and \(z_{0}=1.243\). All the SFR parameterisations considered here give a similar redshift dependence, with a peak in star formation at \(z\sim 2\). The volume rate was calculated using the observed SN rate per redshift bin as [78]: \[\dot{N}_{\mathrm{SN}}(z)=\frac{\dot{n}_{\mathrm{SLSN}}(z)}{1+z}\frac{\mathrm{d}V}{\mathrm{d}z}, \tag{15}\] where the comoving rate of SNe can be written as: \[\dot{n}_{\mathrm{SLSN}}(z)=\varepsilon(z)\mathrm{SFR}(z), \tag{16}\] with \(\varepsilon(z)\) an efficiency factor taking into account the metallicity dependence; \(\varepsilon(z)\) can also be studied through GRBs [78]. So, the expected number of SNe can be calculated as: \[N_{\rm SN}=\Omega T\int_{z}^{z+\Delta z}\frac{\dot{n}_{\rm SLSN}(z)}{1+z}\frac{\mathrm{d}V}{\mathrm{d}z}\mathrm{d}z, \tag{17}\] resulting in a number of SNe per redshift interval in the survey area.
Here \(\Omega\) is the survey area and \(T\) is the dedicated observation time. The redshift dependence also has to take into account the number of SNIa progenitors that occur (since a SNIa requires a white dwarf) and the delay due to the stellar evolution in such binary systems. For the simulations of the mock data sets, the code developed by [84] is used to create light curves at different redshifts, taking into account the calculated rate and the detection capabilities of the observatory [78]. Thus, for the simulation, the redshift \(z\), the luminosity distance \(D_{L}\), the time of maximum light \(t_{\rm max}\), the absolute \(V\) magnitude \(M_{\rm V}\), the stretch, and the color are calculated for every SN. It is important to mention that the assumptions of the FLARE project imply that the JWST will take deep observations for at least three years with a 90-day cadence, allowing us to discover between 5 and 20 supernova events, meaning at least fifty in the redshift range \(1<z<4\)[9]. Additionally, the simulation takes into account that the ideal observation window for a SNIa goes from 2 weeks before maximum light up to one month after, during which the spectrum would have the ideal quality [85]. Thus, the result of the simulations is the Hubble diagram of apparent magnitudes, assuming detection with the NIRCam instrument of the telescope.
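As a rough numerical illustration of the appendix equations (the Madau–Dickinson-like SFR of Eq. (13) and the expected-count integral for \(N_{\rm SN}\) given above), the sketch below integrates the SN rate over the comoving volume of a small survey. The efficiency \(\varepsilon\), the survey area and duration, and the flat background cosmology are illustrative placeholders of ours, not the FLARE settings.

```python
import numpy as np
from scipy.integrate import quad

H0, OM = 71.66, 0.3          # background used for the mock sample [km/s/Mpc]
C_KMS = 299792.458
DH = C_KMS / H0              # Hubble distance [Mpc]

def E(z):
    return np.sqrt(OM * (1 + z)**3 + 1.0 - OM)

def sfr(z):
    """Madau-Dickinson-like SFR(z) of Eq. (13) [Msun / yr / Mpc^3]."""
    return 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def dV_dz(z):
    """Comoving volume element per unit redshift and steradian [Mpc^3 / sr]."""
    dc, _ = quad(lambda zp: DH / E(zp), 0.0, z)
    return DH * dc**2 / E(z)

def expected_sn(z_lo, z_hi, area_deg2=0.05, years=3.0, eps=1e-6):
    """Expected SN count in [z_lo, z_hi]: Omega * T * int eps*SFR/(1+z) dV/dz dz."""
    omega_sr = area_deg2 * (np.pi / 180.0)**2
    rate, _ = quad(lambda z: eps * sfr(z) / (1 + z) * dV_dz(z), z_lo, z_hi)
    return omega_sr * years * rate

print(expected_sn(1.0, 2.0))   # counts for the illustrative efficiency eps
```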
2303.18082
Polynomial Mixing for a Weakly Damped Stochastic Nonlinear Schrödinger Equation
This paper is devoted to proving the polynomial mixing for a weakly damped stochastic nonlinear Schr\"{o}dinger equation with additive noise on a 1D bounded domain. The noise is white in time and smooth in space. We consider both focusing and defocusing nonlinearities, respectively, with exponents of the nonlinearity $\sigma\in[0,2)$ and $\sigma\in[0,\infty)$ and prove the polynomial mixing which implies the uniqueness of the invariant measure by using a coupling method.
Jing Guo, Zhenxin Liu
2023-03-31T14:17:49Z
http://arxiv.org/abs/2303.18082v1
# Polynomial mixing for a weakly damped stochastic nonlinear Schrodinger equation ###### Abstract. This paper is devoted to proving the polynomial mixing for a weakly damped stochastic nonlinear Schrodinger equation with additive noise on a 1D bounded domain. The noise is white in time and smooth in space. We consider both focusing and defocusing nonlinearities, respectively, with exponents of the nonlinearity \(\sigma\in[0,2)\) and \(\sigma\in[0,\infty)\), and prove the polynomial mixing, which implies the uniqueness of the invariant measure, by using a coupling method. Key words and phrases: Stochastic damped nonlinear Schrodinger equation; Uniqueness of invariant measure; Polynomial mixing; Coupling; Girsanov theorem 2010 Mathematics Subject Classification: 35Q55, 35Q60, 37H99, 60H15 ## 1. Introduction The nonlinear Schrodinger equation (NLS) is one of the basic nonlinear partial differential equations; it models the propagation of dispersive nonlinear waves. It arises in various areas of physics such as hydrodynamics, optics and plasma physics. Given that randomness and damping have to be taken into account in some circumstances, we need to consider the damped stochastic nonlinear Schrodinger equation (SNLS), which is valid for describing waves over long propagation distances. It has the following form \[\mathrm{d}u(t)=(i\Delta u(t)+i\lambda|u(t)|^{2\sigma}u(t)-\alpha u(t))\mathrm{d}t+b\mathrm{d}W(t), \tag{1.1}\] where \(\alpha>0\), \(x\in[0,1]\), \(\lambda\in\{1,-1\}\), and \(u(x,t)\) is a complex-valued unknown function. Here \(\lambda=1\) corresponds to the focusing case and \(\lambda=-1\) corresponds to the defocusing case. It is not difficult to see that once the well-posedness of the NLS has been proved, the well-posedness of the equation with damping can also be easily derived. Therefore, we only recall the relevant results on the well-posedness of the NLS. For the deterministic equation, it is well known that all the solutions exist globally in the subcritical (\(\sigma d<2\)) case. The first proof of this result was given by J. Ginibre and G. Velo [15]. There are also many related references, see for example [19, 22, 32] and references therein. For the stochastic equation, the well-posedness is more difficult to obtain. A. de Bouard and A. Debussche [3] demonstrated the local and global existence and uniqueness of square integrable solutions to the focusing NLS with linear multiplicative noise in \(\mathds{R}^{n}\) by using the fixed point theorem. They considered subcritical nonlinearities, where the critical exponent is the same as that of the deterministic equation in dimension 1 or 2, but more restrictive if \(n\geq 3\). They further investigated the existence and uniqueness of solutions to the NLS with multiplicative or additive noise in \(H^{1}(\mathds{R}^{n})\) in [4]. Similarly, the result is more restrictive than that of the deterministic equation. After that, by making use of the rescaling approach, V. Barbu et al. proved well-posedness results for the NLS with linear multiplicative noise in \(L^{2}(\mathds{R}^{n})\) in the conservative case and the nonconservative case in [1]. They obtained the result for the subcritical equation with exponents of nonlinearity in the same range as the deterministic case, which improved the results of [3] in the conservative case. 
And in [2], they discussed the well-posedness of the equation with linear multiplicative noise in \(H^{1}(\mathds{R}^{n})\) in the conservative and non-conservative case and considered focusing and defocusing nonlinearities whose exponents are in the same range as the deterministic case, which improved the results of [4] in the special conservative case. For the nonlinear noise, F. Hornung presented the local existence and uniqueness of a solution to the SNLS with subcritical and critical nonlinearities and the global existence and uniqueness of the solution to the equation in the subcritical case under an additional assumption on the nonlinear noise in [21]. Besides, in [9], Z. Brzezniak and A. Millet proved the existence and uniqueness of a solution to the SNLS on a two-dimensional compact Riemannian manifold by using stochastic Strichartz estimates. In fact, all the above-mentioned references are about mild solutions. For martingale solutions, we refer to [7, 8, 20]. And for variational solutions, see [17, 23, 24]. Moreover, as for the existence of an invariant measure for the damped SNLS, there are a few studies. I. Ekren et al. proved the existence of an invariant measure for the NLS with additive noise in \(H^{1}(\mathds{R}^{n})\) and the existence of an ergodic measure in [14]. J. U. Kim also studied the existence of an invariant measure for the damped SNLS in [25]. Z. Brzezniak et al. considered this equation with defocusing nonlinearity on a 2D compact Riemannian manifold and proved the existence of an invariant measure in [5]. Besides, they provided some remarks on the uniqueness of the invariant measure in a particular case. But as far as we know, only few studies have shown the uniqueness of an invariant measure for the equation. By using a coupling method, A. Debussche and C. Odasso [13] proved the uniqueness of an invariant measure for the equation with cubic nonlinearities on a 1D bounded domain. They also revealed that the mixing property holds and that the rate of convergence is at least polynomial of any power. Recently, in [6], Z. Brzezniak et al. proved the uniqueness of the invariant measure for the equation when the damping coefficient is sufficiently large in \(\mathds{R}^{n}\) with \(n=2\) or \(n=3\). In this paper, we focus on proving the uniqueness of an invariant measure for the focusing and defocusing damped SNLS, respectively, with exponents of the nonlinearity \(\sigma\in[0,2)\) and \(\sigma\in[0,\infty)\) on a 1D bounded domain. In particular, our work generalizes the earlier result of [13], where \(\sigma=1\) and \(\lambda=1\). In this work, we will use a coupling method to prove the polynomial mixing which implies the uniqueness of the invariant measure. We remark that the result about the existence of an invariant measure can also be obtained by the Krylov-Bogolyubov theorem. To be specific, due to the domain we consider is bounded, we can use some compactness theorem. Besides, we will also need some extra estimates about the solution. Moreover, as far as we know, there are mainly two kinds of methods to prove the uniqueness of the invariant measure. The first one is the Doob theorem or the general Doob theorem. We refer to [12, 18]. The second one is the coupling method. Due to the lack of smoothing effect in the NLS, we will use the coupling method which can also be used to obtain the rate of convergence, and we will restrict to the case, where only a finite number of modes are forced. But for a more degenerate noise, we cannot deal with it now. 
In this paper, as in [13], we extensively use the decomposition of \(u\) into its low- and high-frequency parts. We assume that the low-frequency part is non-degenerate, but the high-frequency part may be degenerate. We can prove when the low-frequency parts of two solutions from two different initial data are equal, their high-frequency parts will be close, which is the so-called Foias-Prodi estimate. Since the damped SNLS is weakly dissipative, we cannot prove a path-wise Foias-Prodi estimate. We can only prove that it holds on average. Moreover, we are unable to prove the exponential estimate of the growth of solutions in our case. Since the Lyapunov structure is more complicated here, we can only prove the polynomial estimate of the growth of solutions. Therefore, we can only prove that convergence to equilibrium holds with the polynomial speed at any power. The paper is organized as follows. In sect.2, we present some notations, assumptions and main results. Sect.3 gives some useful priori estimates which will be used to prove the main results and proves the uniqueness of the invariant measure. ## 2. Preliminaries and main results ### Notations and assumptions We set \(A=-\Delta\), \(D(A)=H_{0}^{1}([0,1])\bigcap H^{2}([0,1])\). The damped SNLS with the initial data \(u_{0}\) and Dirichlet boundary conditions can be written in the form: \[\begin{cases}du(t)=-iAu(t)dt+i\lambda|u(t)|^{2\sigma}u(t)dt-\alpha u(t)dt+bdW(t ),\quad t\geq 0\\ u(0)=u_{0}\in H_{0}^{1}([0,1]).\end{cases} \tag{2.1}\] Let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})\) be a filtered probability space where \(\{\mathcal{F}_{t}\}_{t\geq 0}\) is a filtration satisfying the usual conditions, i.e. \(\{\mathcal{F}_{t}\}_{t\geq 0}\) is right continuous and complete. We recall that a stochastic process \(X(t)\) is called non-anticipating with respect to \(\{\mathcal{F}_{t}\}_{t\geq 0}\) if the function \((t,\omega)\mapsto X(t,\omega)\) is measurable, and for each \(t\geq 0\), \(X(t)\) is adapted to \(\mathcal{F}(t)\), i.e. \(X(t)\) is \(\mathcal{F}(t)\)-measurable. We denote by \(b\) a linear operator on \(L^{2}([0,1])\). Let \(W\) be a cylindrical Wiener process on \(L^{2}([0,1])\). We denote by \(\{\mu_{n}\}_{n\in\mathbb{N}}\) the increasing sequence of eigenvalues of \(A\) and by \(\{e_{n}\}_{n\in\mathbb{N}}\) the associated eigenvectors. Also, Let \(P_{N}\) and \(Q_{N}\) be the eigenprojectors onto the space \(span\left\{\{e_{k}\}_{0\leq k\leq N}\right\}\) and onto its complementary space, respectively. We use the Lebesgue space of complex valued functions \(L^{p}([0,1])\) endowed with the norm \(|\cdot|_{p}\), and the inner product in \(L^{2}([0,1])\) is denoted by \((u,v)=\mathcal{R}\int_{0}^{1}u(x)\bar{v}(x)dx\) for any \(u\), \(v\in L^{2}([0,1])\), where \(\bar{v}\) is the conjugate of \(v\) and \(\mathcal{R}u\) is the real part of \(u\). Let \(H^{s}([0,1])\) be the Sobolev space endowed with the norm \(\|\cdot\|_{s}\). For \(s\geq 0\), it is not difficult to see that \(D\left(A^{\frac{s}{2}}\right)\) is a closed subspace of \(H^{s}\left([0,1]\right)\) and \(\|\cdot\|_{s}=\left|A^{\frac{s}{2}}\right|_{2}\) is equivalent to the usual \(H^{s}([0,1])\) norm on this space. 
Moreover, for any \(u\in H^{s}([0,1])\), \[D(A^{\frac{s}{2}})=\left\{u=\sum\limits_{k\in\mathbb{N}}(u,e_{k})e_{k}\in L^{ 2}([0,1])\ \Big{|}\ \sum\limits_{k\in\mathbb{N}}\mu^{s}_{k}(u,e_{k})^{2}<\infty\right\}\ \text{ and }\ \|u\|_{s}^{2}=\sum\limits_{k\in\mathbb{N}}\mu^{s}_{k}(u,e_{k})^{2}.\] We work under the following assumptions on the noise and the nonlinearity. **Assumption 2.1**.: _We suppose that \(b\) commutes with \(A\), i.e. suppose that \(b\) is diagonal in the basis \(\{e_{n}\}_{n\in\mathbb{N}}\), and we write \(be_{n}=b_{n}e_{n}\), where \(b_{n}=(be_{n},e_{n})\). Moreover, we assume that there exists \(N_{*}>0\) such that \(b_{n}>0\) for any \(n\leq N_{*}\)._ For any \(s\in[0,3]\), we denote by \(\mathcal{L}_{2}(L^{2}([0,1]),D(A^{\frac{s}{2}}))\) the space of Hilbert-Schmidt operators from \(L^{2}([0,1]\) to \(D(A^{\frac{s}{2}})\). Let \(b\in\mathcal{L}_{2}(L^{2}([0,1]),D(A^{\frac{s}{2}}))\). We set \(B_{s}:=|b|_{\mathcal{L}_{2}(L^{2}([0,1]),D(A^{\frac{s}{2}}))}^{2}=\sum_{n=0}^{ \infty}\mu^{s}_{n}b_{n}^{2}\) for any \(s\in[0,3]\). **Assumption 2.2**.: _If \(\lambda=1\), then \(\sigma\in[0,2)\)._ _If \(\lambda=-1\), then \(\sigma\in[0,\infty)\)._ We use \(H_{*}(u)\) to denote the energy, where \(H_{*}(u)=\frac{1}{2}|\nabla u|_{2}^{2}-\frac{\lambda}{2\sigma+2}|u|_{2\sigma+ 2}^{2\sigma+2}\). It is not difficult to see that when \(\lambda=-1\), it is greater than or equal to zero. When \(\lambda=1\), it may be negative. But we can modify it by adding a term and recover its nonnegative property. We also denote by \(H(u):=H_{*}(u)\) the energy in the former case. And we denote by \(H(u):=H_{*}(u)+G|u|_{2}^{2+\frac{4\sigma}{2-\sigma}}\) the modified energy in the latter case, where \(G\) is a constant satisfying the inequality \[|u|_{2\sigma+2}^{2\sigma+2}\leq\frac{1}{2\sigma+2}|\nabla u|_{2}^{2}+\frac{G}{ 2}|u|_{2}^{2+\frac{4\sigma}{2-\sigma}}. \tag{2.2}\] The existence of \(G\) can be guaranteed by Gagliardo-Nirenberg's inequality and Young's inequality. When \(\lambda=1\), we get \[H(u) =\frac{1}{2}|\nabla u|_{2}^{2}-\frac{1}{2\sigma+2}|u|_{2\sigma+2}^ {2\sigma+2}+G|u|_{2}^{2+\frac{4\sigma}{2-\sigma}}\] \[\geq\frac{2\sigma(\sigma+2)}{(2\sigma+2)^{2}}|\nabla u|_{2}^{2}+ \frac{1}{2\sigma+2}|u|_{2\sigma+2}^{2\sigma+2}+\frac{2\sigma+1}{2\sigma+2}G|u|_{2 }^{2+\frac{4\sigma}{2-\sigma}}. \tag{2.3}\] We also use the following quantities. We define \(E_{u,k}(t,s)=H^{k}(u(t))+\frac{1}{2}\alpha k\int_{s}^{t}H^{k}(u(r))dr\), \(t\geq s\). When \(s=0\), we simply write \(E_{u,k}(t)\). When \(\lambda=1\), we define for any \((u_{1},u_{2},r)\in H_{0}^{1}([0,1])\times H_{0}^{1}([0,1])\times H_{0}^{1}([0,1])\), \[J_{*}(u_{1},u_{2},r)=|\nabla r|_{2}^{2}-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{ \prime}}(\tau u_{1}+(1-\tau)u_{2})rd\tau\bar{r}dx\] and \[J(u_{1},u_{2},r)=|\nabla r|_{2}^{2}-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{ \prime}}(\tau u_{1}+(1-\tau)u_{2})rd\tau\bar{r}dx+G_{1}\left(\sum_{i=1}^{2}H^{ \sigma}(u_{i})\right)|r|_{2}^{2},\] where \(F(u(t))=|u(t)|^{2\sigma}u(t)\) and \(G_{1}\) is a constant to be determined. 
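As a side remark, the Gagliardo-Nirenberg/Young argument behind (2.2) can be made explicit; the following minimal sketch (for \(0<\sigma<2\), the case \(\sigma=0\) being trivial; note that (2.2) is only needed in the focusing case \(\lambda=1\), where Assumption 2.2 indeed gives \(\sigma<2\)) shows where the exponent \(2+\frac{4\sigma}{2-\sigma}\) comes from. By the one-dimensional Gagliardo-Nirenberg inequality (valid on \(H_{0}^{1}([0,1])\) by extension by zero), \[|u|_{2\sigma+2}^{2\sigma+2}\leq C_{GN}|\nabla u|_{2}^{\sigma}|u|_{2}^{\sigma+2},\] and Young's inequality with the conjugate exponents \(\frac{2}{\sigma}\) and \(\frac{2}{2-\sigma}\) then yields \[C_{GN}|\nabla u|_{2}^{\sigma}|u|_{2}^{\sigma+2}\leq\frac{1}{2\sigma+2}|\nabla u|_{2}^{2}+\frac{G}{2}|u|_{2}^{\frac{2(\sigma+2)}{2-\sigma}}\] for a suitable constant \(G=G(\sigma,C_{GN})\); since \(\frac{2(\sigma+2)}{2-\sigma}=2+\frac{4\sigma}{2-\sigma}\), this is exactly (2.2). 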
By Sobolev's embedding inequality, there exists \(C\) such that \[\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime}}(\tau u_{1}+(1-\tau)u_{2})rd \tau\bar{r}dx\leq C\left(\|u_{1}\|_{1}^{2\sigma}+\|u_{2}\|_{1}^{2\sigma} \right)|r|_{2}^{2}.\] Therefore, by (2.3), we can choose \(G_{1}>0\) such that \[J(u_{1},u_{2},r)=|\nabla r|_{2}^{2}-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^ {\prime}}(\tau u_{1}+(1-\tau)u_{2})rd\tau\bar{r}dx+G_{1}\left(\sum_{i=1}^{2}H ^{\sigma}(u_{i})\right)|r|_{2}^{2}\geq\tfrac{1}{2}|\nabla r|_{2}^{2}.\] When \(\lambda=-1\), we define for any \((u_{1},u_{2},r)\in H_{0}^{1}([0,1])\times H_{0}^{1}([0,1])\times H_{0}^{1}([0,1])\), \[J(u_{1},u_{2},r)=|\nabla r|_{2}^{2}+\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^ {\prime}}(\tau u_{1}+(1-\tau)u_{2})rd\tau\bar{r}dx.\] Note that \[J(u_{1},u_{2},r)\geq\tfrac{1}{2}|\nabla r|_{2}^{2}.\] For any \(N\geq 1\), we define for any \((u_{1},u_{2},r)\in H_{0}^{1}([0,1])\times H_{0}^{1}([0,1])\times H_{0}^{1}([0,1])\), \[J_{FP}^{N}(u_{1},u_{2},r)=\exp\left(2\alpha t-\tfrac{\Lambda}{N^{\frac{1}{4}} }\int_{0}^{t}l\left(u_{1}(s),u_{2}(s)\right)ds\right)J(u_{1},u_{2},r),\] where \(l(u_{1}(s),u_{2}(s))=1+\sum_{i=1}^{2}H^{3\sigma+1}(u_{i})\) and \(\Lambda\) is a constant. In the following, we give the definition of the mild solution. **Definition 2.3**.: The linear group \(\{S(t)\}_{t\in\mathds{R}}\) is defined by \(S(t)=e^{-itA}\), \(t\in\mathds{R}\), associated to the equation \(du=-iAudt\). We say \(u\) is a _mild solution_ of (2.1), if \[u(t)=S(t)u_{0}+i\lambda\int_{0}^{t}S(t-s)|u(s)|^{2\sigma}u(s)ds-\alpha\int_{0} ^{t}S(t-s)u(s)ds+\int_{0}^{t}S(t-s)bdW(s)\quad\mathbb{P}\text{-a.s.}\] for all \(t\geq 0\). The well-posedness of equation (2.1) can be easily proved. Indeed, because the nonlinear part is not Lipschitz, we have to use a truncation argument. And by the fixed point theorem, we can prove the existence and uniqueness of the mild solution. Its proof is the same as the proof in [4]. We denote by \(\{P_{t}\}_{t\in\mathds{R}^{+}}\) the Markov semi-group associated to the solution of (2.1) and \(\{P_{t}^{*}\}_{t\in\mathds{R}^{+}}\) the conjugate operator of \(\{P_{t}\}_{t\in\mathds{R}^{+}}\). ### Basic properties of couplings We now recall some basic results of couplings. See, e.g. [13, 29, 30]. Let \(E\) be a Polish space, i.e. a complete separable metric space. Let \(\mu_{1}\), \(\mu_{2}\) be two distributions on a space \((E,\mathcal{E})\), where \(\mathcal{E}\) is a \(\sigma\)-algebra of subsets of \(E\). And let \(Z_{1}\), \(Z_{2}\) be two random variables \((\Omega,\mathcal{F})\rightarrow(E,\mathcal{E})\). We say that \((Z_{1},Z_{2})\) is a coupling of \((\mu_{1},\mu_{2})\) if \(\mu_{i}=\mathcal{D}(Z_{i})\) for \(i=1,2\), where we denote by \(\mathcal{D}(Z_{i})\) the law of the random variable \(Z_{i}\). We denote by \(Lip_{b}(E)\) the space of bounded and Lipschitz real valued functions on \(E\) endowed with norm \[\|\varphi\|_{L}=|\varphi|_{\infty}+L_{\varphi}\quad\text{ for any }\varphi\in Lip _{b}(E),\] where \(|\cdot|_{\infty}\) is the \(L^{\infty}\) norm and \(L_{\varphi}\) is the Lipschitz constant of \(\varphi\). Let \(\mathcal{P}(E)\) be the space of probability measures on \(E\) endowed with the total variation metric \[\|\mu\|_{var}=\sup\{|\mu(\Gamma)|\,|\,\Gamma\in\mathcal{B}(E)\}\quad\text{ for any }\mu\in\mathcal{P}(E),\] where \(\mathcal{B}(E)\) is the set of the Borelian subsets of \(E\). And \(\|\cdot\|_{var}\) is the dual norm of \(|\cdot|_{\infty}\). 
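To illustrate these notions on the simplest possible example (not taken from [13, 29, 30]), let \(E=\{0,1\}\) and \(\mu_{i}=p_{i}\delta_{1}+(1-p_{i})\delta_{0}\), \(i=1,2\), with \(0\leq p_{2}\leq p_{1}\leq 1\). Then \[\|\mu_{1}-\mu_{2}\|_{var}=p_{1}-p_{2},\qquad d(\mu_{1}\wedge\mu_{2})=p_{2}\delta_{1}+(1-p_{1})\delta_{0},\] so that \((\mu_{1}\wedge\mu_{2})(E)=1-\|\mu_{1}-\mu_{2}\|_{var}\). Moreover, taking \(U\) uniform on \([0,1]\) and \(Z_{i}=\mathbf{1}_{\{U\leq p_{i}\}}\) gives a coupling with \(\mathbb{P}(Z_{1}\neq Z_{2})=p_{1}-p_{2}\), which is maximal in the sense of Lemma 2.4 below. 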
We also use a Wasserstein norm \[\|\mu\|_{W}=\sup_{\varphi\in Lip_{b}(E),\|\varphi\|_{L}\leq 1}\left|\int_{E} \varphi(u)d\mu(u)\right|\quad\text{ for any }\mu\in\mathcal{P}(E)\] which is the dual norm of \(\|\cdot\|_{L}\). Let \(\mu\), \(\mu_{1}\), \(\mu_{2}\in\mathcal{P}(E)\), and let \(\mu_{1}\) and \(\mu_{2}\) be absolutely continuous with respect to \(\mu\). We set \[d(\mu_{1}\wedge\mu_{2})=\left(\frac{d\mu_{1}}{d\mu}\wedge\frac{d\mu_{2}}{d\mu} \right)d\mu.\] This definition does not depend on the choice of \(\mu\). And we have \[\|\mu_{1}-\mu_{2}\|_{var}=\frac{1}{2}\int_{E}\left|\frac{d\mu_{1}}{d\mu}-\frac {d\mu_{2}}{d\mu}\right|d\mu.\] Note that if \(\mu_{1}\) is absolutely continuous with respect to \(\mu_{2}\), then we have \[\|\mu_{1}-\mu_{2}\|_{var}\leq\frac{1}{2}\sqrt{\int\left(\frac{d\mu_{1}}{d\mu_{ 2}}\right)^{2}d\mu_{2}-1}. \tag{2.4}\] **Lemma 2.4** ([13, 29, 30]).: _Let \(\mu_{1}\), \(\mu_{2}\) be probability measures on \((E,\mathcal{E})\). Then \(\|\mu_{1}-\mu_{2}\|_{var}=\min\mathbb{P}(Z_{1}\neq Z_{2})\), the minimum is taken over all couplings \((Z_{1},Z_{2})\) of \((\mu_{1},\mu_{2})\). A coupling \((Z_{1},Z_{2})\) is said to be maximal if \(\|\mu_{1}-\mu_{2}\|_{var}=\mathbb{P}(Z_{1}\neq Z_{2})\) and it has the property:_ \[\mathbb{P}\left(Z_{1}=Z_{2},Z_{1}\in\Gamma\right)=(\mu_{1}\wedge\mu_{2})( \Gamma)\quad\text{ for any }\Gamma\in\mathcal{E}.\] **Proposition 2.5** ([13, 30]).: _Let \(E\) and \(F\) be Polish spaces, \(\mu_{1}\), \(\mu_{2}\) a pair of probability measures on \(E\) and \(f_{0}:E\to F\) a measurable mapping. We set \(\nu_{i}=f_{0}^{*}\mu_{i},\;\;i=1,2\). Then there exists a coupling \((V_{1},V_{2})\) for \((\mu_{1},\mu_{2})\) such that \((f_{0}(V_{1}),f_{0}(V_{2}))\) is a maximal coupling for \((\nu_{1},\nu_{2})\)._ ### Main results In this section, we will state the main results. We first define \(G\) by \(Gu=iAu\) and set \[X=P_{N_{*}}u,\;Y=Q_{N_{*}}u,\;\beta=P_{N_{*}}W,\;\eta=Q_{N_{*}}W,\] \[\sigma_{l}=P_{N_{*}}bP_{N_{*}},\;\sigma_{h}=Q_{N_{*}}bQ_{N_{*}},\] \[f(X,Y)=-i\lambda P_{N_{*}}\left(|X+Y|^{2\sigma}(X+Y)\right),\;g( X,Y)=-i\lambda Q_{N_{*}}\left(|X+Y|^{2\sigma}(X+Y)\right).\] Then (2.1) can be written in the form \[\begin{cases}dX+GXdt+\alpha Xdt+f(X,Y)dt=\sigma_{l}d\beta,\\ dY+GYdt+\alpha Ydt+g(X,Y)dt=\sigma_{h}d\eta,\\ X(0)=x_{0},Y(0)=y_{0}.\end{cases} \tag{2.5}\] We note that Assumption 2.1 implies that \(\sigma_{l}\) is invertible. Given two initial datas \(u_{0}^{i}=\left(x_{0}^{i},y_{0}^{i}\right)\), \(i=1,2\), we construct a coupling \[(u_{1},u_{2})=((X_{1},Y_{1}),(X_{2},Y_{2}))\] of the two solutions \(u(\cdot,u_{0}^{i})=\left(X(\cdot,x_{0}^{i}),Y(\cdot,y_{0}^{i})\right)\), \(i=1,2\), of (2.5). We denote by \(l_{0}\) an integer valued random process, which is particularly convenient when deriving properties of the coupling: \[l_{0}(k)=\min\,\{l\in\{0,\ldots,k\}\,|\;(P_{l,k})\;\text{holds}\},\] where \(\min\emptyset=\infty\) and \[(P_{l,k}):\begin{cases}X_{1}(t)=X_{2}(t),\;\eta_{1}(t)=\eta_{2}(t),\qquad \forall t\in[lT,kT]\\ H_{l}\leq d_{0},\\ E_{u_{l},3\sigma+1}(t,lT)\leq\kappa+1+d_{0}^{3\sigma+1}+d_{0}^{6\sigma+2}+B(t- lT),\quad i=1,2,\qquad\forall t\in[lT,kT],\end{cases}\] where \(d_{0}\), \(\kappa\), \(B\) are constants and we set \[H_{l}=H(u_{1}(lT))+H(u_{2}(lT)).\] We say that \(X_{1}\), \(X_{2}\) are coupled at \(kT\) if \(l_{0}(k)\leq k\), i.e. if \(l_{0}(k)\neq\infty\). The following properties hold for the integer valued random process \(l_{0}\). 
**(H1)**: \(l_{0}(k+1)=l\) implies \(l_{0}(k)=l\) for any \(l\leq k\); \(l_{0}(k)\in\{0,1,\ldots,k\}\cup\{\infty\}\); \(l_{0}(k)\) depends only on \(u_{1}|_{[0,kT]}\) and \(u_{2}|_{[0,kT]}\); \(l_{0}(k)=k\) implies \(H_{k}\leq d_{0}\). We now give four conditions on the coupling which allow us to prove polynomial convergence to equilibrium. * **(H2)**: There exist \(C_{0}\) and \(q>0\) such that for any \(t\in[lT,kT]\cap\mathbb{R}^{+}\), we have \(\mathbb{P}\left(d_{E}\left(u_{1}(t),u_{2}(t)\right)>C_{0}(t-lT)^{-q}\text{ and }l_{0}(k)\leq l\right)\leq C_{0}(t-lT)^{-q}\). (H2) implies that if \(u_{1}(t)\) and \(u_{2}(t)\) are coupled at time \(lT\), then the probability that the distance between \(u_{1}(t)\) and \(u_{2}(t)\) is small when \(t>lT\) will be large. * **(H3)**: For any \(q\in\mathds{N}\backslash\{0\}\), there exists \(T_{q}>0\) such that for any \(l\leq k\), \(T\geq T_{q}\), we have \(\mathbb{P}(l_{0}(k+1)\neq l\mid l_{0}(k)=l)\leq\frac{1}{2}\left(1+(k-l)T\right)^{-q}\). We can deduce from (H3) that if \(u_{1}\) and \(u_{2}\) are coupled on \([lT,kT]\), then the probability that they decouple is small; moreover, the longer they have been coupled, the smaller this probability. * **(H4)**: For any \(R_{0}\), \(d_{0}>0\), there exist \(T^{*}(R_{0},d_{0})>0\) and \(p_{-1}(d_{0})>0\) such that for any \(T\geq T^{*}(R_{0},d_{0})\), we have \(\mathbb{P}(l_{0}(k+1)=k+1\mid l_{0}(k)=\infty,H_{k}\leq R_{0})\geq p_{-1}(d_{0})\). (H4) says that, in a small ball, the probability that \(u_{1}\) and \(u_{2}\) will be coupled is positive. * **(H5)**: There exists \(C^{{}^{\prime}}_{k}\) such that for any initial data \(u_{0}\) and any stopping time \(\tau\in\{kT,k\in\mathds{N}\}\cup\{\infty\}\), we have the estimates \(\mathbb{E}H(u(t))\leq\exp(-\alpha t)H(u_{0})+\frac{C^{{}^{\prime}}_{k}}{2}\), \(\mathbb{E}(H(u(\tau))|_{\tau<\infty})\leq C^{{}^{\prime}}_{k}(H(u_{0})+\mathbb{E}(\tau|\tau<\infty))\). In our case, \(H(u)\) is the Lyapunov function, and (H5) describes the Lyapunov structure. We say that the process \(V=(u_{1},u_{2})\) is \(l_{0}\)-Markov if the laws of \(V(kT+\cdot)\) and of \(l_{0}(k+\cdot)-k\) on \(\{l_{0}(k)\in\{k,\infty\}\}\), conditioned by \(\mathcal{F}_{kT}\), only depend on \(V(kT)\) and are equal to the laws of \(V(\cdot,V(kT))\) and \(l_{0}\), respectively. In the proof, we construct a coupling \(((u_{1},W_{1}),(u_{2},W_{2}))\) of two solutions which is \(l_{0}\)-Markov. We can modify the construction such that it is Markov at the discrete times \(T\mathds{N}=\{kT,k\in\mathds{N}\}\), but it seems impossible to modify the coupling so as to be Markov at continuous time. **Theorem 2.6**.: _There exists \(N_{0}\) such that if Assumptions 2.1-2.2 hold with \(N_{*}\geq N_{0}\), then for any \(\left(u_{0}^{1},W_{0}^{1}\right),\left(u_{0}^{2},W_{0}^{2}\right)\), there exists a coupling \(V=((u_{1},W_{1}),(u_{2},W_{2}))\) of the laws of \(\left(u\left(\cdot,u_{0}^{1}\right),W\left(\cdot,W_{0}^{1}\right)\right)\) and \(\left(u\left(\cdot,u_{0}^{2}\right),W\left(\cdot,W_{0}^{2}\right)\right)\), where \(V\) is \(l_{0}\)-Markov and satisfies (H1)-(H5) with \(R_{0}>4C^{{}^{\prime}}_{1}\) and \(R_{0}\geq d_{0}\). 
Furthermore, there exists \(C>0\) such that for any \(\varphi\in Lip_{b}(H_{0}^{1}([0,1]))\) and \(u_{0}^{1},u_{0}^{2}\in H_{0}^{1}([0,1])\),_ \[\left|\mathbb{E}\varphi\left(u(t,u_{0}^{1})\right)-\mathbb{E}\varphi\left(u\left(t,u_{0}^{2}\right)\right)\right|\leq C\left(1+t\right)^{-q}\|\varphi\|_{L}\left(1+H(u_{0}^{1})+H(u_{0}^{2})\right).\] Based on Theorem 2.6, we can easily obtain the following corollary. **Corollary 2.7**.: _Under the assumptions of Theorem 2.6, there exist a constant \(K_{1}>0\) and a unique invariant measure \(\nu\) of \((P_{t})_{t>0}\) on \(H_{0}^{1}([0,1])\). It satisfies_ \[\int_{H_{0}^{1}([0,1])}H(u)d\nu(u)\leq\frac{K_{1}}{2}.\] _Furthermore, for any \(\mu\in\mathcal{P}(H_{0}^{1}([0,1]))\), there exists \(C>0\) such that_ \[\|P_{t}^{*}\mu-\nu\|_{W}\leq C(1+t)^{-q}\left(1+\int_{H_{0}^{1}([0,1])}H(u)d\mu(u)\right).\] ## 3. A priori estimates and proof of Theorem 2.6 ### A priori estimates In this section, we give some a priori estimates needed to prove the main theorem. **Proposition 3.1**.: _There exists a measurable map_ \[\Phi:C\left([0,T];P_{N_{*}}H_{0}^{1}\right)\times C\left([0,T];Q_{N_{*}}H^{-1}\right)\times H_{0}^{1}\to C\left([0,T];Q_{N_{*}}H_{0}^{1}\right)\] _such that for any \((u,W)\) which is a weak solution of (2.5),_ \[Y=\Phi\left(X,\eta,u_{0}\right)\ \ \text{on}\ \ [0,T].\] _Moreover, \(\Phi\) is a non-anticipative function of \((X,\eta)\)._ We can rewrite the second equation of (2.5) in the following form \[Y(t)=S(t)y_{0}-\alpha\int_{0}^{t}S(t-s)Y(s)ds-\int_{0}^{t}S(t-s)g(X(s),Y(s))ds+\int_{0}^{t}S(t-s)\sigma_{h}d\eta.\] Given \(u_{0}\in H_{0}^{1}\), \(X\in C([0,T];P_{N_{*}}H_{0}^{1})\) and \(\eta\in C([0,T];Q_{N_{*}}H^{-1})\), Proposition 3.1 can be proved by applying the fixed point theorem. 
**Lemma 3.2**.: _For each \(k\in\mathds{N}\setminus\{0\}\) and \(t\in\mathds{R}^{+}\), there exist \(C\), \(C_{1}\), \(C_{k}>0\) such that_ \[d\left|u\right|_{2}^{2+\frac{4\sigma}{2-\sigma}}+\alpha\left(\frac{3}{2}+\frac {4\sigma}{2-\sigma}\right)\left|u\right|_{2}^{2+\frac{4\sigma}{2-\sigma}}dt \leq\left(2+\frac{4\sigma}{2-\sigma}\right)\left|u\right|_{2}^{\frac{4\sigma }{2-\sigma}}(u,bdW)+Cdt.\] _When \(\lambda=1\), we have the estimates_ \[dH(u)+\alpha H(u)dt\leq\left(Au-\left|u\right|^{2\sigma}u,bdW \right)+G\left(2+\frac{4\sigma}{2-\sigma}\right)\left|u\right|_{2}^{\frac{4 \sigma}{2-\sigma}}(u,bdW)+C_{1}dt,\] \[dH^{k}(u)+\frac{1}{2}\alpha kH^{k}(u)dt\leq kH^{k-1}(u)\left[ \left(Au-\left|u\right|^{2\sigma}u,bdW\right)+G\left(2+\frac{4\sigma}{2-\sigma }\right)\left|u\right|_{2}^{\frac{4\sigma}{2-\sigma}}(u,bdW)\right]+C_{k}dt.\] _When \(\lambda=-1\), we similarly have the estimates_ \[dH(u)+\alpha H(u)dt\leq\left(Au+\left|u\right|^{2\sigma}u,bdW \right)+C_{1}dt,\] \[dH^{k}(u)+\frac{1}{2}\alpha kH^{k}(u)dt\leq kH^{k-1}(u)\left(Au+ \left|u\right|^{2\sigma}u,bdW\right)+C_{k}dt.\] Proof.: Using Ito's formula to \(\left|u\right|_{2}^{2+\frac{4\sigma}{2-\sigma}}\), we obtain the estimate \[d\left|u\right|_{2}^{2+\frac{4\sigma}{2-\sigma}}=\left(2+\frac{4\sigma}{2- \sigma}\right)\left|u\right|_{2}^{\frac{4\sigma}{2-\sigma}}\left(u,-iAu+i \lambda|u|^{2\sigma}u-\alpha u\right)dt+\left(2+\frac{4\sigma}{2-\sigma} \right)\left|u\right|_{2}^{\frac{4\sigma}{2-\sigma}}(u,bdW)+I+II,\] where \[I=\frac{1}{2}\left(2+\frac{4\sigma}{2-\sigma}\right)\left(\frac{ 4\sigma}{2-\sigma}\right)\left|u\right|_{2}^{\frac{4\sigma}{2-\sigma}-2} \left|b^{*}u\right|_{2}^{2}dt,\] \[II=\frac{1}{2}B_{0}\left(2+\frac{4\sigma}{2-\sigma}\right)\left|u \right|_{2}^{\frac{4\sigma}{2-\sigma}}dt.\] Young's inequality implies \[I+II\leq C\left[\left(2+\frac{4\sigma}{2-\sigma}\right)\left(\frac{2\sigma}{2- \sigma}\right)+\left(1+\frac{2\sigma}{2-\sigma}\right)\right]B_{0}\left|u \right|_{2}^{\frac{4\sigma}{2-\sigma}}dt\leq\frac{\alpha}{2}\left|u\right|_{2} ^{2+\frac{4\sigma}{2-\sigma}}dt+Cdt.\] It follows from the last inequality that \[d\left|u\right|_{2}^{2+\frac{4\sigma}{2-\sigma}}+\alpha\left(\frac{3}{2}+ \frac{4\sigma}{2-\sigma}\right)\left|u\right|_{2}^{2+\frac{4\sigma}{2-\sigma}} dt\leq\left(2+\frac{4\sigma}{2-\sigma}\right)\left|u\right|_{2}^{\frac{4 \sigma}{2-\sigma}}(u,bdW)+Cdt. \tag{3.1}\] **The first case: \(\lambda=1\).** Applying Ito's formula to \(H^{*}(u)\), we find \[dH^{*}(u)= \left(Au-|u|^{2\sigma}u,-iAu+i|u|^{2\sigma}u-\alpha u\right)dt+ \left(Au-|u|^{2\sigma}u,bdW\right)\] \[-\frac{1}{2}\sum_{n=1}^{\infty}b_{n}^{2}\int_{0}^{1}\left(2\sigma |u|^{2\sigma-2}\left(\mathcal{R}(u\tilde{e}_{n})\right)^{2}+|u|^{2\sigma}|e_{n }|^{2}\right)dxdt+\frac{1}{2}B_{1}dt\] \[\leq \left(-\alpha\|u\|_{1}^{2}+\alpha|u|_{2\sigma+2}^{2\sigma+2} \right)dt+\left(Au-|u|^{2\sigma}u,bdW\right)+\frac{1}{2}B_{1}dt.\] Consequently \[dH^{*}(u)+\left(\alpha\|u\|_{1}^{2}-\alpha|u|_{2\sigma+2}^{2\sigma+2}\right) dt\leq\left(Au-|u|^{2\sigma}u,bdW\right)+\frac{1}{2}B_{1}dt. 
\tag{3.2}\] Employing (3.1)-(3.2), we deduce \[dH(u)+\left(\alpha\|u\|_{1}^{2}-\alpha|u|_{2\sigma+2}^{2\sigma+ 2}+G\alpha\left(\frac{3}{2}+\frac{4\sigma}{2-\sigma}\right)|u|_{2}^{2+\frac{4 \sigma}{2-\sigma}}\right)dt\] \[\leq\left(Au-|u|^{2\sigma}u,bdW\right)+G\left(2+\frac{4\sigma}{2 -\sigma}\right)|u|_{2}^{\frac{4\sigma}{2-\sigma}}(u,bdW)+C_{1}dt.\] Using Gagliardo-Nirenberg's inequality, we have \[\alpha\|u\|_{1}^{2}-\alpha|u|_{2\sigma+2}^{2\sigma+2}+G\alpha \left(\frac{3}{2}+\frac{4\sigma}{2-\sigma}\right)|u|_{2}^{2+\frac{4\sigma}{2- \sigma}}\] \[\geq\alpha\|u\|_{1}^{2}-\frac{\alpha}{2\sigma+2}\|u\|_{1}^{2}- \frac{G\alpha}{2}|u|_{2}^{2+\frac{4\sigma}{2-\sigma}}+G\alpha\left(\frac{3}{2 }+\frac{4\sigma}{2-\sigma}\right)|u|_{2}^{2+\frac{4\sigma}{2-\sigma}}\] \[\geq\alpha\frac{2\sigma+1}{2\sigma+2}\|u\|_{1}^{2}+G\alpha\left(1 +\frac{4\sigma}{2-\sigma}\right)|u|_{2}^{2+\frac{4\sigma}{2-\sigma}}\] \[\geq\alpha H(u).\] So \[dH(u)+\alpha H(u)dt\leq\left(Au-|u|^{2\sigma}u,bdW\right)+G\left(2+\frac{4 \sigma}{2-\sigma}\right)|u|_{2}^{\frac{4\sigma}{2-\sigma}}(u,bdW)+C_{1}dt. \tag{3.3}\] We apply Ito's formula to \(H^{k}(u)\), then \[dH^{k}(u)\leq kH^{k-1}(u)\left[-\alpha H(u)dt+\left(Au-|u|^{2\sigma}u,bdW\right)+G \left(2+\frac{4\sigma}{2-\sigma}\right)|u|_{2}^{\frac{4\sigma}{2-\sigma}}(u, bdW)+C_{1}dt\right]\] \[+\frac{1}{2}k(k-1)H^{k-2}(u)d\langle M_{1}\rangle,\] where \[dM_{1}=\left(Au-|u|^{2\sigma}u,bdW\right)+G\left(2+\frac{4\sigma}{2-\sigma} \right)|u|_{2}^{\frac{4\sigma}{2-\sigma}}(u,bdW), \tag{3.4}\] \[d\langle M_{1}\rangle\leq C\left(B_{1}\|u\|_{1}^{2}+B_{1}|u|_{2\sigma+2}^{2(2 \sigma+1)}+B_{0}|u|_{2}^{\frac{8\sigma}{2-\sigma}+2}\right)dt\leq Cdt+2 \varepsilon_{1}\alpha\frac{1}{k}H^{2}(u)dt.\] Applying Young's inequality, we compute \[C\left[kH^{k-1}\left(u\right)+\frac{1}{2}k(k-1)H^{k-2}(u)\right] dt+\alpha ke_{1}H^{k}(u)dt\] \[\leq\varepsilon\alpha kH^{k}(u)dt+\alpha ke_{1}H^{k}(u)dt+C_{k}dt.\] We choose \(\varepsilon,\varepsilon_{1}\) so small that \(\varepsilon+\varepsilon_{1}\leq\frac{1}{2}\). Hence \[dH^{k}(u)+\frac{1}{2}\alpha kH^{k}(u)dt\leq kH^{k-1}(u)dM_{1}+C_{k}dt. \tag{3.5}\] **The second case:**: \(\lambda=-1\). Using Ito's formula to \(H(u)\), we find \[dH(u)= \left(Au+|u|^{2\sigma}u,-iAu-i|u|^{2\sigma}u-\alpha u\right)dt+ \left(Au+|u|^{2\sigma}u,bdW\right)\] \[+\frac{1}{2}\sum_{n=1}^{\infty}b_{n}^{2}\int_{0}^{1}\left(2 \sigma|u|^{2\sigma-2}(\mathcal{R}(u\bar{e}_{n}))^{2}+|u|^{2\sigma}|e_{n}|^{2} \right)dxdt+\frac{1}{2}B_{1}dt\] \[\leq \left(-\alpha\|u\|_{1}^{2}-\alpha|u|_{2\sigma+2}^{2\sigma+2} \right)dt+\left(Au+|u|^{2\sigma}u,bdW\right)+\frac{1}{2}B_{0}\left(2\sigma+1 \right)|u|_{2\sigma}^{2\sigma}dt+\frac{1}{2}B_{1}dt.\] We infer from Young's inequality and Sobolev's embedding inequality that \[dH(u)\leq\left(-\alpha\|u\|_{1}^{2}-\alpha|u|_{2\sigma+2}^{2\sigma+2}\right) dt+\left(Au+|u|^{2\sigma}u,bdW\right)+\frac{1}{2}\alpha|u|_{2\sigma+2}^{2 \sigma+2}dt+C_{1}dt.\] Thus \[dH(u)+\alpha H(u)dt\leq\left(Au+|u|^{2\sigma}u,bdW\right)+C_{1}dt. \tag{3.6}\] Next, we apply Ito's formula to \(H^{k}(u)\), we then have \[dH^{k}(u)+\alpha kH^{k}(u)dt\leq kH^{k-1}(u)\left(Au+|u|^{2\sigma}u,bdW\right) +kC_{1}H^{k-1}(u)dt+\frac{1}{2}k(k-1)H^{k-2}(u)d\langle M_{1}^{{}^{\prime}}\rangle,\] where \[dM_{1}^{{}^{\prime}}=\left(Au+|u|^{2\sigma}u,bdW\right),\] \[d\langle M_{1}^{{}^{\prime}}\rangle\leq B_{1}\|u\|_{1}^{2}dt+B_{1}|u|_{2 \sigma+2}^{2(2\sigma+1)}dt\leq 2\varepsilon_{2}\alpha\frac{1}{k}H^{2}(u)dt+Cdt. 
\tag{3.7}\] Applying Young's inequality, we deduce \[C\left[kH^{k-1}(u)+\frac{1}{2}k(k-1)H^{k-2}(u)\right]dt+\alpha k \varepsilon_{2}H^{k}(u)dt\] \[\leq\alpha k\varepsilon_{3}H^{k}(u)dt+\alpha k\varepsilon_{2}H^{ k}(u)dt+C_{k}dt.\] We choose \(\varepsilon_{2}\), \(\varepsilon_{3}\) which are very small, then \[dH^{k}(u)+\frac{1}{2}\alpha kH^{k}(u)dt\leq kH^{k-1}(u)\left(Au+|u|^{2\sigma}u, bdW\right)+C_{k}dt. \tag{3.8}\] **Lemma 3.3**.: _For any \(k\in\mathds{N}\setminus\{0\}\), \(t\in\mathds{R}^{+}\) and stopping time \(\tau\), there exists \(C_{k}^{{}^{\prime}}>0\) such that_ \[\mathbb{E}\left(H^{k}(u(t))\right)\leq\exp\left(-\frac{\alpha}{2} kt\right)H^{k}(u_{0})+\frac{C_{k}^{{}^{\prime}}}{2},\] \[\mathbb{E}\left(H^{k}(u(\tau))\right)\leq H^{k}(u_{0})+C_{k}^{{}^ {\prime}}\mathbb{E}(\tau).\] Proof.: **The first case:**\(\lambda=1\). Multiplying (3.5) by \(\exp\left(\frac{1}{2}\alpha kt\right)\), we deduce \[d\left(\exp\left(\frac{1}{2}\alpha kt\right)H^{k}\left(u(t)\right)\right)\leq \exp\left(\frac{1}{2}\alpha kt\right)kH^{k-1}(u(t))dM_{1}(t)+C_{k}\exp\left( \frac{1}{2}\alpha kt\right)dt.\] We integrate the last inequality from \(0\) to \(t\) to find \[\exp\left(\frac{1}{2}\alpha kt\right)H^{k}(u(t))\leq H^{k}(u_{0})+\int_{0}^{t} \exp\left(\frac{1}{2}\alpha ks\right)kH^{k-1}(u(s))dM_{1}(s)+C_{k}\int_{0}^{t} \exp\left(\frac{1}{2}\alpha ks\right)ds.\] Hence \[H^{k}(u(t))\leq\exp\left(-\frac{1}{2}\alpha kt\right)H^{k}(u_{0})+\int_{0}^{t}\exp \left(-\frac{1}{2}\alpha k(t-s)\right)kH^{k-1}(u(s))dM_{1}(s)+C_{k}\frac{2}{ \alpha k}.\] Taking the expectation, we have \[\mathbb{E}(H^{k}(u(t)))\leq\exp\left(-\frac{1}{2}\alpha kt\right)H^{k}(u_{0})+ \frac{C_{k}^{{}^{\prime}}}{2},\] which implies the first inequality of Lemma 3.3 holds. We now assume that \(M>0\) is a constant and \(\tau<M\) is a bounded stopping time. Then integrating (3.5) from \(0\) to \(\tau\) and taking the expectation, we compute \[\mathbb{E}\left(H^{k}(u(\tau))\right)\leq H^{k}(u_{0})+C_{k}^{{}^{\prime}} \mathbb{E}(\tau).\] Therefore, the second inequality of Lemma 3.3 for bounded stopping times follows. Assume that \(\tau\) is a general stopping time. We consider the second inequality of Lemma 3.3 for the stopping time \(\tau\wedge M\), we have \[\mathbb{E}\left(H^{k}(u(\tau\wedge M))\right)\leq H^{k}(u_{0})+C_{k}^{{}^{ \prime}}\mathbb{E}(\tau\wedge M).\] By Fatou's Lemma and lower semicontinuity, when \(M\longrightarrow\infty\), we calculate \[\mathbb{E}\left(H^{k}(u(\tau))\right)\leq\liminf_{M\to\infty}\mathbb{E}\left( H^{k}(u(\tau\wedge M))\right)\leq\limsup_{M\to\infty}\left(H^{k}(u_{0})+C_{k}^{{}^{ \prime}}\mathbb{E}(\tau\wedge M)\right)\leq H^{k}(u_{0})+C_{k}^{{}^{\prime}} \mathbb{E}(\tau),\] which yields the second inequality of Lemma 3.3. The similar argument holds for the second case: \(\lambda=-1\). **Lemma 3.4**.: _Suppose that \(u\) is a solution of (2.1) associated with a Wiener process W. Then for any \((k,p)\in(\mathds{N}\setminus\{0\})^{2}\), \(\rho>0\) and \(0\leq T<\infty\), we have the estimates_ \[\mathbb{P}\left(\sup_{t\in[0,T]}\left(E_{u,k}(t)-C_{k}^{{}^{\prime}}t\right) \geq H^{k}(u_{0})+\rho\left(H^{2k}(u_{0})+T\right)\right)\leq K_{k,p}\rho^{-p},\] \[\mathbb{P}\left(\sup_{t\in[T,\infty)}\left(E_{u,k}(t)-C_{k}^{{}^{\prime}}t \right)\geq H^{k}(u_{0})+H^{2k}(u_{0})+1+\rho\right)\leq K_{k,p}\left(\rho+T \right)^{-p},\] _the constants \(C_{k}^{{}^{\prime}}\) and \(K_{k,p}\) depending only on \(k\) and \(p\)._ Proof.: We only prove the case \(\lambda=1\); the other case \(\lambda=-1\) is similar. 
We first set \[dM_{k}(t)=kH^{k-1}(u(t))dM_{1}(t).\] Taking into account (3.4), we see that \[d\langle M_{k}\rangle(t)\leq C_{k}\left(1+H^{2k}(u(t))\right)dt.\] Integrating (3.5) from \(0\) to \(t\) and taking the expectation, we find for any \(k\geq 1\), \[\mathbb{E}\int_{0}^{t}H^{k}(u(s))ds\leq C_{k}\left(H^{k}(u_{0})+t\right).\] Therefore, for any \(p\geq 1\), \[\mathbb{E}(M_{k})^{p}(t) \leq\mathbb{E}\left(\int_{0}^{t}C_{k}\left(H^{2k}(u(s))+1\right) ds\right)^{p}\] \[\leq\left(\mathbb{E}\int_{0}^{t}C_{k}\left(H^{2k}(u(s))+1\right) ds\right)^{p}\] \[\leq 2^{p}C_{k}^{p}\left(t^{p}+\left(\mathbb{E}\int_{0}^{t}H^{2k}(u (s))ds\right)^{p}\right)\] \[\mathbb{P}\left(H\left(u(t,u_{0}^{1})\right)+H\left(u(t,u_{0}^{2})\right)\leq R_{1} \right)\geq\pi_{-1}\left(R_{1}\right),\] _provided \(H(u_{0}^{1})+H(u_{0}^{2})\leq R_{0}\) and \(t\geq T_{-1}(R_{0},R_{1})\)._ Proof.: According to Lemma 3.5, it suffices to show Lemma 3.6 for \(R_{0}=4C_{1}^{{}^{\prime}}\) and \(t=T_{-1}(R_{0},R_{1})\) (instead of \(t\geq T_{-1}(R_{0},R_{1})\)). Consequently, we only prove this lemma for \(R_{0}=4C_{1}^{{}^{\prime}}\). Assume \(T,\delta>0\). Applying Chebyshev's inequality, we deduce that there exists \(N_{-2}=N_{-2}(T,\delta)\in\mathds{N}\) such that \[\mathbb{P}\left(\sup_{t\in[0,T]}\left\|bQ_{N_{-2}}W(t)\right\|_{3}>\frac{ \delta}{2}\right)\leq\frac{4}{\delta^{2}}\sum_{n>N_{-2}}\mu_{n}^{3}b_{n}^{2} \leq\frac{1}{2}.\] Furthermore, since \(P_{N_{-2}}W\) is a finite dimensional Brownian motion, we find \[\pi_{-3}\left(T,\delta,N_{-2}\right)=\mathbb{P}\left(\sup_{t\in[0,T]}\left|P_ {N_{-2}}W(t)\right|_{2}\leq\frac{\delta}{2}\left\|b\right\|_{\mathcal{L}_{2} \left(L^{2}\left\{[0,1],H^{3}\left([0,1]\right)\right\}\right)}^{-1}>0.\] Then we have \[\mathbb{P}\left(\sup_{t\in[0,T]}\|bW(t)\|_{3}\leq\delta\right)\geq\mathbb{P}\left( \sup_{t\in[0,T]}\left\|bQ_{N_{-2}}W(t)\right\|_{3}\leq\frac{\delta}{2}\right) \pi_{-3}\left(T,\delta,N_{-2}\right).\] Therefore \[\pi_{-2}(T,\delta)=\mathbb{P}\left(\sup_{t\in[0,T]}\|bW(t)\|_{3}\leq\delta \right)>0.\] It suffices to prove that there exist \(T_{-1}(R_{1}),\delta_{-1}(R_{1})>0\) such that \[\left\{\sup_{t\in[0,T_{-1}]}\|bW(t)\|_{3}\leq\delta_{-1}\right\}\subset\left\{ H\left(u(T_{-1},u_{0})\right)\leq\frac{1}{2}R_{1}\right\}, \tag{3.9}\] provided \(H(u_{0})\leq\frac{1}{2}R_{0}\). We turn now to prove (3.9). Let \[v=u(\cdot,u_{0})-bW.\] Then \[dv+\alpha vdt+iAvdt-i\lambda|v+bW|^{2\sigma}(v+bW)dt=(-\alpha-iA)\,bWdt. 
\tag{3.10}\] Applying Ito's formula to \(|v|_{2}^{2}\), we find \[\frac{d|v|_{2}^{2}}{dt}+2\alpha|v|_{2}^{2}=\left(2v,i\lambda|v+bW|^{2\sigma}( v+bW)+(-\alpha-iA)bW\right).\] Since \(\left(v,i|v+bW|^{2\sigma}v\right)=0\), we deduce \[\left(2v,i\lambda|v+bW|^{2\sigma}(v+bW)+(-\alpha-iA)bW\right)\] \[\leq C\left(|v|,|v|^{2\sigma}|bW|\right)+C\left(|v|,|bW|^{2\sigma +1}\right)+|\left(2v,-\alpha bW\right)|+|\left(2v,-iA(bW)\right)|\] \[\leq C\|bW\|_{3}\left(1+\|v\|_{1}^{2\sigma+1}\right)\left(1+\|bW \|_{3}^{2\sigma}\right).\] Using Ito's formula to \(|v|_{2}^{2+\frac{4\sigma}{2-\sigma}}\), we have \[\frac{d|v|_{2}^{2+\frac{4\sigma}{2-\sigma}}}{dt}= \left(2+\frac{4\sigma}{2-\sigma}\right)|v|_{2}^{\frac{4\sigma}{2 -\sigma}}\left(v,-\alpha v-iAv+i\lambda|v+bW|^{2\sigma}(v+bW)+(-\alpha-iA)bW\right)\] \[\leq-\alpha\left(2+\frac{4\sigma}{2-\sigma}\right)|v|_{2}^{2+ \frac{4\sigma}{2-\sigma}}+C\|bW\|_{3}\left(1+\|bW\|_{3}^{2\sigma}\right)\left( 1+\|v\|_{1}^{1+2\sigma+\frac{4\sigma}{2-\sigma}}\right).\] So \[\frac{d|v|_{2}^{2+\frac{4\sigma}{2-\sigma}}}{dt}+\alpha\left(2+\frac{4\sigma} {2-\sigma}\right)|v|_{2}^{2+\frac{4\sigma}{2-\sigma}}\leq C\|bW\|_{3}\left(1+ \|bW\|_{3}^{2\sigma}\right)\left(1+\|v\|_{1}^{1+2\sigma+\frac{4\sigma}{2- \sigma}}\right). \tag{3.11}\] Then we apply Ito's formula to \(H^{*}(v)\) yields \[\frac{dH^{*}(v)}{dt}+\alpha\|v\|_{1}^{2}=-\left(Av-\lambda|v|^{2\sigma}v,( \alpha+iA)bW\right)+\alpha\left(\lambda|v+bW|^{2\sigma}(v+bW),v\right).\] We write \[I_{1}=\alpha\left(\left(\lambda|v+bW|^{2\sigma}(v+bW),v\right)-\lambda|v|_{2 \sigma+2}^{2\sigma+2}\right)=\alpha\lambda\left(|v+bW|^{2\sigma}(v+bW)-|v|^{2 \sigma}v,v\right).\] Then \[\frac{dH^{*}(v)}{dt}+\alpha\|v\|_{1}^{2}-\alpha\lambda|v|_{2\sigma+2}^{2\sigma +2}=I_{1}+I_{2}, \tag{3.12}\] where \[I_{2}=-\left(Av-\lambda|v|^{2\sigma}v,(\alpha+iA)bW\right).\] Recall that for any \(z,h\in\mathbb{C}\), \[\left\|z+h\right\|^{2\sigma}(z+h)-|z|^{2\sigma}z\right|\leq C|h|\left(|z|^{2 \sigma}+|h|^{2\sigma}\right).\] We use the last inequality and Holder's inequality to find \[I_{1}+I_{2} \leq|(-Av,(\alpha+iA)bW)|+\left|\left(\lambda|v|^{2\sigma}v,( \alpha+iA)bW\right)\right|+C\left(\left(|bW|^{2\sigma}+|v|^{2\sigma}\right)|bW |,|v|\right)\] \[\leq C\|bW\|_{3}\left(1+\|v\|_{1}^{2\sigma+1}\right)\left(1+\|bW \|_{3}^{2\sigma}\right). \tag{3.13}\] **The first case: \(\lambda=\mathbf{1}\)**. Combining now (3.11)-(3.13), we deduce \[\frac{dH(v)}{dt}+\alpha\|v\|_{1}^{2}-\alpha|v|_{2\sigma+2}^{2 \sigma+2}+G\alpha\left(2+\frac{4\sigma}{2-\sigma}\right)|v|_{2}^{2+\frac{4 \sigma}{2-\sigma}}\] \[\leq C\|bW\|_{3}\left(1+\|v\|_{1}^{2\sigma+1}\right)\left(1+\|bW \|_{3}^{2\sigma}\right)+C\|bW\|_{3}\left(1+\|bW\|_{3}^{2\sigma}\right)\left(1 +\|v\|_{1}^{1+2\sigma+\frac{4\sigma}{2-\sigma}}\right).\] Applying Gagliardo-Nirenberg's inequality and taking into account (2.2), we calculate \[\alpha\|v\|_{1}^{2}-\alpha|v|_{2\sigma+2}^{2\sigma+2}+G\alpha \left(2+\frac{4\sigma}{2-\sigma}\right)|v|_{2}^{2+\frac{4\sigma}{2-\sigma}}\] \[\geq\alpha\|v\|_{1}^{2}-\alpha\left(\frac{1}{2\sigma+2}\|v\|_{1} ^{2}+\frac{G}{2}|v|_{2}^{2+\frac{4\sigma}{2-\sigma}}\right)+G\alpha\left(2+ \frac{4\sigma}{2-\sigma}\right)|v|_{2}^{2+\frac{4\sigma}{2-\sigma}}\] \[\geq\alpha H(v).\] Hence \[\frac{dH(v)}{dt}+\alpha H(v)\leq C\|bW\|_{3}\left(1+\|bW\|_{3}^{2\sigma}\right) \left(1+\|v\|_{1}^{1+2\sigma+\frac{4\sigma}{2-\sigma}}\right). \tag{3.14}\] Let us assume that \(T,\delta,R_{1}^{{}^{\prime}}>0\). 
We also suppose that \[\sup_{t\in[0,T]}\|bW(t)\|_{3}\leq\delta\] and set \[\tau=\inf\left\{t\in[0,T]\mid H(v)>3R_{0}\right\}.\] Integrating (3.14) from \(0\) to \(t\), we have \[H(v(t))\leq\frac{1}{2}\exp\left(-\alpha t\right)R_{0}+\frac{C}{\alpha}\delta\left(1+\delta^{2\sigma}\right)\left(1+R_{0}^{\sigma+1+\frac{2\sigma}{2-\sigma}}\right), \tag{3.15}\] provided \(t\leq\tau\). Then we choose \(\delta_{-2}(R_{1}^{{}^{\prime}})\) such that \[\frac{C}{\alpha}\delta\left(1+\delta^{2\sigma}\right)\left(1+R_{0}^{\sigma+1+\frac{2\sigma}{2-\sigma}}\right)\leq R_{1}^{{}^{\prime}}\wedge R_{0}\] for any \(\delta\leq\delta_{-2}(R_{1}^{{}^{\prime}})\). Thus from (3.15), it follows that \(\tau=T\) and that \(H\left(v(T)\right)\leq 2R_{1}^{{}^{\prime}}\), provided \(T\geq\frac{1}{\alpha}\ln\left(\frac{R_{0}}{2R_{1}^{{}^{\prime}}}\right)\). We remark that \[H(u(T))\leq C\left(H(bW(T))+H(v(T))\right)\leq C\left(\delta^{2}\left(1+\delta^{\frac{4\sigma}{2-\sigma}}\right)+R_{1}^{{}^{\prime}}\right).\] Then we choose \(\delta\) and \(R_{1}^{{}^{\prime}}\) sufficiently small to derive (3.9). The proof for the case \(\lambda=-1\) is similar, so we omit it. **Lemma 3.7**.: _Let \(k_{0}>0\) and \(N\in\mathds{N}\setminus\{0\}\). Assume that \(W_{1}\), \(W_{2}\) are two cylindrical Wiener processes, that \(h\) is an adapted process with continuous paths in \(P_{N}L^{2}\left([0,1]\right)\), that \(u_{1}\) is a solution in \(C\left([0,T];H^{1}_{0}([0,1])\right)\) of_ \[\begin{cases}du_{1}+\alpha u_{1}dt+iAu_{1}dt-i\lambda|u_{1}|^{2\sigma}u_{1}dt=bdW_{1}+hdt\\ u_{1}(0)=u_{0}^{1},\end{cases}\] _that \(u_{2}\) is the solution of (2.1) for \(u_{0}=u_{0}^{2}\) and \(W=W_{2}\), and that \(\tau\) is a stopping time. Suppose also that_ \[P_{N}u_{1}=P_{N}u_{2},\quad Q_{N}W_{1}=Q_{N}W_{2}\qquad\text{ on }\quad[0,\tau] \tag{3.16}\] _and_ \[\|h(t)\|_{1}^{2}\leq k_{0}\left(l(u_{1}(t),u_{2}(t))\right)^{\frac{2\sigma+1}{3\sigma+1}}\qquad\text{ on }\quad[0,\tau]. \tag{3.17}\] _Then there exists \(\Lambda>0\), depending only on \(k_{0}\), such that_ \[\mathbb{E}\left[J_{FP}^{N}(u_{1},u_{2},r)(t\wedge\tau)\right]\leq J\left(u_{0}^{1},u_{0}^{2},r_{0}\right)\qquad\text{for all }t>0, \tag{3.18}\] _where \(r_{0}=u_{0}^{1}-u_{0}^{2}\)._ Proof.: In light of (3.16), the difference of the two solutions \(r=u_{1}-u_{2}=Q_{N}u_{1}-Q_{N}u_{2}\) satisfies the equation \[dr=-iArdt+i\lambda Q_{N}\left(|u_{1}|^{2\sigma}u_{1}-|u_{2}|^{2\sigma}u_{2}\right)dt-\alpha rdt. \tag{3.19}\] Then, applying Ito's formula to \(|r|_{2}^{2}\), we compute \[d|r|_{2}^{2}+2\alpha|r|_{2}^{2}dt=\left(2r,i\lambda\left(|u_{1}|^{2\sigma}u_{1}-|u_{2}|^{2\sigma}u_{2}\right)\right)dt.\] Since \[\left||u_{1}|^{2\sigma}u_{1}-|u_{2}|^{2\sigma}u_{2}\right|\leq C\left(\sum_{i=1}^{2}|u_{i}|^{2\sigma}\right)|r|,\] we find \[d|r|_{2}^{2}+2\alpha|r|_{2}^{2}dt\leq C\mathcal{R}\int_{0}^{1}\left(\sum_{i=1}^{2}|u_{i}|^{2\sigma}\right)|r|^{2}\,dxdt\leq C\left(\|u_{1}\|_{1}^{2\sigma}+\|u_{2}\|_{1}^{2\sigma}\right)|r|_{2}^{2}dt\leq C\left(\sum_{i=1}^{2}H^{\sigma}(u_{i})\right)|r|_{2}^{2}dt. \tag{3.20}\] **The first case: \(\lambda=\mathbf{1}\)**. 
As demonstrated in Lemma 3.2, for \(i=1,2\), we have \[dH(u_{i})+\alpha H(u_{i})dt\leq\left(M^{i},bdW_{i}\right)+C_{1}dt+1_{i=1} \left(M^{i},h\right)dt,\] \[dH^{\sigma}(u_{i})\leq-\frac{1}{2}\alpha\sigma H^{\sigma}(u_{i})dt+\sigma H^{ \sigma-1}(u_{i})\left(M^{i},bdW_{i}\right)+C_{\sigma}dt+\sigma H^{\sigma-1}( u_{i})1_{i=1}\left(M^{i},h\right)dt,\] where \[M^{i}=Au_{i}-|u_{i}|^{2\sigma}u_{i}+G\left(2+\frac{4\sigma}{2-\sigma}\right)| u_{i}|_{2}^{\frac{4\sigma}{2-\sigma}}u_{i}.\] Employing Sobolev's embedding inequality and Holder's inequality, we deduce \[\|M^{1}\|_{-1}\leq C\left(1+H(u_{1})\right)^{\frac{3\sigma+2}{2 \sigma+4}},\] \[\left(M^{1},h\right)\leq C\left(1+\sum_{i=1}^{2}H(u_{i})\right)^{ 2\sigma+2}.\] We set \[Z_{1}=\left(\sum_{i=1}^{2}H^{\sigma}(u_{i})\right)|r|_{2}^{2}.\] In view of (3.20), we see \[dZ_{1}= \left(\sum_{i=1}^{2}H^{\sigma}(u_{i})\right)d|r|_{2}^{2}+\left(\sum _{i=1}^{2}dH^{\sigma}(u_{i})\right)|r|_{2}^{2}\] \[\leq -2\alpha Z_{1}dt+\sum_{i=1}^{2}\sigma H^{\sigma-1}(u_{i})\left(M^ {i},bdW_{i}\right)|r|_{2}^{2}+C\left(\sum_{i=1}^{2}H^{\sigma}(u_{i})\right)^{2 }|r|_{2}^{2}dt\] \[+C|r|_{2}^{2}dt+C\left(\sum_{i=1}^{2}\sigma H^{\sigma-1}(u_{i}) \right)\left(1+\sum_{i=1}^{2}H(u_{i})\right)^{2\sigma+2}|r|_{2}^{2}dt.\] Hence \[dZ_{1}+2\alpha Z_{1}dt\leq\sum_{i=1}^{2}\sigma H^{\sigma-1}(u_{i})\left(M^{i},bdW_{i}\right)|r|_{2}^{2}+C\left(1+\sum_{i=1}^{2}H^{3\sigma+1}(u_{i})\right)| r|_{2}^{2}dt. \tag{3.21}\] We first set \[F(u)=|u|^{2\sigma}\,u\] and note that its derivatives \[F^{{}^{\prime}}(u)(v)=2\sigma|u|^{2\sigma-2}\mathcal{R}(\overline{uv})u+|u|^{ 2\sigma}v=\left(\sigma+1\right)|u|^{2\sigma}v+\sigma|u|^{2\sigma-2}u^{2} \overline{v},\] \[F^{{}^{\prime\prime}}(u)(v,w)= \sigma(\sigma+1)|u|^{2\sigma-2}\overline{uv}w+\sigma(\sigma+1)|u| ^{2\sigma-2}u\overline{v}w\] \[+\sigma(\sigma+1)|u|^{2\sigma-2}u\overline{v}\overline{w}+\sigma (\sigma-1)|u|^{2\sigma-4}u^{3}\overline{vw},\] \[F^{{}^{\prime\prime\prime}}(u)(v,w,z)= \sigma(\sigma-1)(\sigma+1)|u|^{2\sigma-4}\overline{u}^{2}vw_{2}+ \sigma(\sigma-1)(\sigma+1)|u|^{2\sigma-4}u^{2}\overline{vw_{2}}\] \[+\sigma(\sigma-1)(\sigma+1)|u|^{2\sigma-4}u^{2}\overline{v_{2}}w +\sigma(\sigma-1)(\sigma+1)|u|^{2\sigma-4}u^{2}\overline{vw_{2}}\] \[+\sigma^{2}(\sigma+1)|u|^{2\sigma-2}\overline{v}wz+\sigma^{2}( \sigma+1)|u|^{2\sigma-2}\overline{vw_{2}}+\sigma^{2}(\sigma+1)|u|^{2\sigma-2} vw\overline{z}\] \[+\sigma(\sigma-1)(\sigma-2)|u|^{2\sigma-6}u^{4}\overline{vw_{2}}.\] We can rewrite (3.19) in the following form \[dr+iArdt+\sigma rdt=iQ_{N}\int_{0}^{1}F^{{}^{\prime}}\left(\tau u_{1}+(1-\tau )u_{2}\right)rd\tau dt. 
\tag{3.22}\] Applying Ito's formula to \(J_{*}(u_{1},u_{2},r)\) yields \[dJ_{*}(u_{1},u_{2},r)\] \[= -2\alpha J_{*}(u_{1},u_{2},r)\] \[-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\tau\left(r,-\alpha u_{1}dt-iAu_{1}dt+i|u_{1}|^ {2\sigma}u_{1}dt+bdW_{1}+hdt\right)d\tau\bar{\tau}dx\] \[-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}(\tau u _{1}+(1-\tau)u_{2})(1-\tau)\left(r,-\alpha u_{2}dt-iAu_{2}dt+i|u_{2}|^{2\sigma }u_{2}dt+bdW_{2}\right)d\tau\bar{\tau}dx\] \[-\frac{1}{2}\sum_{p=1}^{\infty}b_{p}^{2}\mathcal{R}\int_{0}^{1} \int_{0}^{1}F^{{}^{\prime\prime\prime}}\left(\tau u_{1}+(1-\tau)u_{2}\right) \left(r,\tau e_{p},\tau e_{p}\right)d\tau\bar{\tau}dxdt\] \[-\frac{1}{2}\sum_{p,q=1}^{\infty}b_{p}b_{q}\mathcal{R}\int_{0}^{1} \int_{0}^{1}F^{{}^{\prime\prime\prime}}\left(\tau u_{1}+(1-\tau)u_{2}\right) \left(r,\tau e_{p},(1-\tau)e_{q}\right)d\tau\bar{\tau}dxd\left\langle\left(W_ {1},e_{p}\right),\left(W_{2},e_{q}\right)\right\rangle\] \[-\frac{1}{2}\sum_{p,q=1}^{\infty}b_{p}b_{q}\mathcal{R}\int_{0}^{1} \int_{0}^{1}F^{{}^{\prime\prime\prime}}\left(\tau u_{1}+(1-\tau)u_{2}\right) \left(r,(1-\tau)e_{q},\tau e_{p}\right)d\tau\bar{\tau}dxd\left\langle\left(W_ {2},e_{q}\right),\left(W_{1},e_{p}\right)\right\rangle\] \[=-2\alpha J_{*}(u_{1},u_{2},r)dt+I+II+III+IV+VI.\] Using Holder's inequality and Sobolev's embedding inequality, we deduce \[I= -\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\tau\left(r,-\alpha u_{1}dt-iAu_{1}dt+i|u_{1}|^{2 \sigma}u_{1}dt\right)d\tau\bar{\tau}dx\] \[-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,\tau bdW_{1}\right)d\tau\bar{\tau}dx\] \[-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,\tau hdt\right)d\tau\bar{\tau}dx\] \[\leq -\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,\tau bdW_{1}\right)d\tau\bar{\tau}dx+C \left(1+\sum_{i=1}^{2}H^{3\sigma+1}(u_{i})\right)\|r\|_{1}\|r\|_{\frac{3}{4}}dt.\] Similarly \[II\] \[\leq -\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,(1-\tau)bdW_{2}\right)d\tau\bar{\tau}dx+ C\left(1+\sum_{i=1}^{2}H^{3\sigma+1}(u_{i})\right)\|r\|_{1}\|r\|_{\frac{3}{4}}dt.\] Since \(|e_{n}|_{\infty}=1\), we have \[III\leq CB_{0}\left(1+\sum_{i=1}^{2}H^{2\sigma}(u_{i})\right)|r|_{2}^{2}dt.\] Note that we have no information on the law of the couple \((W_{1},W_{2})\). Hence, we cannot compute \(d\left\langle(W_{1},e_{p}),(W_{2},e_{q})\right\rangle\). 
However, we know that \[d\left|\left\langle\left(W_{1},e_{p}\right),\left(W_{2},e_{q}\right)\right\rangle \right|\leq dt.\] It follows from Schwartz's inequality that \[\left(\sum_{n=1}^{\infty}b_{n}\right)^{2}\leq\left(\sum_{n=1}^{\infty}\mu_{n }b_{n}^{2}\right)\left(\sum_{n=1}^{\infty}\frac{1}{\mu_{n}}\right)\leq CB_{1}.\] Hence \[IV\leq CB_{1}\left(1+\sum_{i=1}^{2}H^{2\sigma}(u_{i})\right)|r|_{2}^{2}dt.\] Likewise, \[V\leq CB_{1}\left(1+\sum_{i=1}^{2}H^{2\sigma}(u_{i})\right)|r|_{2}^{2}dt,\quad VI \leq CB_{0}\left(1+\sum_{i=1}^{2}H^{2\sigma}(u_{i})\right)|r|_{2}^{2}dt.\] We combine these estimates to compute \[dJ_{*}(u_{1},u_{2},r)+2\alpha J_{*}(u_{1},u_{2},r)dt\] \[\leq C\left(1+\sum_{i=1}^{2}H^{3\sigma+1}(u_{i})\right)\|r\|_{1}\|r \|_{\frac{3}{4}}dt-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}} \left(\tau u_{1}+(1-\tau)u_{2}\right)\left(r,\tau bdW_{1}\right)d\tau\bar{ \tau}dx\] \[-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,(1-\tau)bdW_{2}\right)d\tau\bar{\tau}dx. \tag{3.23}\] Combining (3.21) and (3.23) leads us to the estimate \[dJ(u_{1},u_{2},r)+2\alpha J(u_{1},u_{2},r)dt\] \[\leq C\Bigg{(}1+\sum_{i=1}^{2}H^{3\sigma+1}(u_{i})\Bigg{)}\|r\|_{1}\|r\|_{ \frac{3}{4}}dt-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,\tau bdW_{1}\right)d\tau\bar{r}dx\] \[-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,(1-\tau)bdW_{2}\right)d\tau\bar{r}dx+ \sum_{i=1}^{2}\sigma G_{1}H^{\sigma-1}(u_{i})\left(M^{i},bdW_{i}\right)|r|_{2 }^{2}.\] Since \(\|r\|_{\frac{3}{4}}\leq CN^{-\frac{1}{4}}\|r\|_{1}\), there exists \(\Lambda>0\) such that \[dJ(u_{1},u_{2},r)+\left(2\alpha-\frac{\Lambda}{N^{\frac{1}{4}}} l\left(u_{1}(t),u_{2}(t)\right)\right)J\left(u_{1},u_{2},r\right)dt\] \[\leq -\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,\tau bdW_{1}\right)d\tau\bar{r}dx\] \[-\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(r,(1-\tau)bdW_{2}\right)d\tau\bar{r}dx+ \sum_{i=1}^{2}\sigma G_{1}H^{\sigma-1}(u_{i})\left(M^{i},bdW_{i}\right)|r|_{2 }^{2}\] \[:= dM(t). \tag{3.24}\] Multiplying (3.24) by \(\exp\left(2\alpha t-\frac{\Lambda}{N^{\frac{1}{4}}}\int_{0}^{t}l\left(u_{1}(s ),u_{2}(s)\right)ds\right)\) and integrating from \(0\) to \(t\wedge\tau\), we see \[J_{FP}^{N}(u_{1},u_{2},r)(t\wedge\tau)\leq J_{FP}^{N}(u_{1},u_{2},r)(0)+\int_{ 0}^{t\wedge\tau}\exp\left(2\alpha s-\frac{\Lambda}{N^{\frac{1}{4}}}\int_{0}^{ s}l\left(u_{1}(s^{{}^{\prime}}),u_{2}(s^{{}^{\prime}})\right)ds^{{}^{\prime}} \right)dM(s).\] Take the expectation to conclude \[\mathbb{E}\left[J_{FP}^{N}(u_{1},u_{2},r)(t\wedge\tau)\right]\leq J\left(u_{ 0}^{1},u_{0}^{2},r_{0}\right).\] **The second case: \(\lambda=-\mathbf{1}\)**. 
It is not difficult to see that \[J(u_{1},u_{2},r)=|\nabla r|_{2}^{2}+\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{ \prime}}\left(\tau u_{1}+(1-\tau)u_{2}\right)rd\tau\bar{r}dx\geq|\nabla r|_{2 }^{2}.\] Utilizing Ito's formula to \(J(u_{1},u_{2},r)\), we see \[dJ(u_{1},u_{2},r)\] \[= -2\alpha J(u_{1},u_{2},r)\] \[+\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\tau\left(r,-\alpha u_{1}dt-iAu_{1}dt-i|u_{1}|^ {2\sigma}u_{1}dt+bdW_{1}+hdt\right)d\tau\bar{r}dx\] \[+\mathcal{R}\int_{0}^{1}\int_{0}^{1}F^{{}^{\prime\prime}}\left( \tau u_{1}+(1-\tau)u_{2}\right)\left(1-\tau\right)\left(r,-\alpha u_{2}dt-iAu_{ 2}dt-i|u_{2}|^{2\sigma}u_{2}dt+bdW_{2}\right)d\tau\bar{r}dx\] \[+\frac{1}{2}\sum_{p=1}^{\infty}b_{p}^{2}\mathcal{R}\int_{0}^{1} \int_{0}^{1}F^{{}^{\prime\prime\prime}}\left(\tau u_{1}+(1-\tau)u_{2}\right) \left(r,\tau e_{p},\tau e_{p}\right)d\tau\bar{r}dxdt\] \[+\frac{1}{2}\sum_{p,q=1}^{\infty}b_{p}b_{q}\mathcal{R}\int_{0}^{1} \int_{0}^{1}F^{{}^{\prime\prime\prime}}\left(\tau u_{1}+(1-\tau)u_{2}\right) \left(r,\tau e_{p},(1-\tau)e_{q}\right)d\tau\bar{r}dxd\left\langle\left(W_{1}, e_{p}\right),\left(W_{2},e_{q}\right)\right\rangle\] \[\mathbb{P}\left(\|r(T)\|_{1}>C^{*}\left(d_{0}\right)\exp\left(a- \frac{\alpha}{4}T+\rho\right)\text{ and }T\leq\tau\right)\leq\exp\left(-a-\frac{ \alpha}{4}T\right).\] _Furthermore, there exists a constant \(C>0\) such that_ \[C^{*}(d_{0})\leq Cd_{0}\exp\left(Cd_{0}^{6\sigma+2}\right).\] Proof.: By Lemma 3.7 and Chebyshev's inequality, Corollary 3.8 can be verified. **Lemma 3.9**.: _Assume that for any \(B\), \(d_{0}\), \(\kappa_{0}>0\) and any \(a\in\mathds{R}\), there exist \(N_{2}(B,\kappa_{0},a)\) and \(C^{**}(d_{0},B)\) such that under the assumptions of Lemma 3.7, (3.16) and (3.17) hold with \(N\geq N_{2}\) and (3.25) holds for some \(\rho>0\), we obtain that for any \(T\),_ \[\mathbb{P}\left(\int_{T}^{\tau}l(u_{1}(s),u_{2}(s))\,\|r(s)\|_{1}^{2}ds>C^{**}( d_{0},B)\exp\left(a-\frac{\alpha}{2}T+\rho\right)\text{ and }T\leq\tau\right)\leq\exp\left(-a-\frac{\alpha}{2}T\right),\] _provided \(\sum_{i=1}^{2}H(u_{0}^{i})\leq d_{0}\) holds. Furthermore, there exists a constant \(C>0\) such that_ \[C^{**}(d_{0},B)\leq C(B)d_{0}\exp\left(Cd_{0}^{6\sigma+2}\right). 
\tag{3.26}\] Proof.: Integrating (3.18) with respect to \(t\), we obtain \[\int_{T}^{\tau}J(u_{0}^{1},u_{0}^{2},r_{0})dt \geq\int_{T}^{\tau}\mathbb{E}[J_{FP}^{N}(u_{1},u_{2},r)(t)]dt \geq\frac{1}{2}\int_{T}^{\tau}\mathbb{E}\left[\exp\left(2\alpha t-\frac{\Lambda}{N^{\frac{1}{4}}}t-C\frac{\Lambda}{N^{\frac{1}{4}}}\left(\rho+1+d_{0}^{3\sigma+1}+d_{0}^{6\sigma+2}+Bt\right)\right)|\nabla r|_{2}^{2}\right]dt.\] So \[\int_{T}^{\tau}\mathbb{E}\left[|\nabla r|_{2}^{2}\right]dt\leq 2\exp\left(-2\alpha T+\frac{\Lambda}{N^{\frac{1}{4}}}\tau+C\frac{\Lambda}{N^{\frac{1}{4}}}\left(\rho+1+d_{0}^{3\sigma+1}+d_{0}^{6\sigma+2}+B\tau\right)\right)J(u_{0}^{1},u_{0}^{2},r_{0})(\tau-T).\] Since for any \(x>0\), \(1+x\leq C_{\delta}\exp\left(\delta x\right)\), we have \[\mathbb{P}\left(\int_{T}^{\tau}l(u_{1}(s),u_{2}(s))|\nabla r(s)|_{2}^{2}ds>C^{**}(d_{0},B)\exp\left(a-\frac{\alpha}{2}T+\rho\right)\right) \leq\frac{\mathbb{E}\int_{T}^{\tau}l(u_{1}(s),u_{2}(s))|\nabla r(s)|_{2}^{2}ds}{C^{**}(d_{0},B)\exp\left(a-\frac{\alpha}{2}T+\rho\right)} \leq 2|\varphi|_{\infty}\mathbb{P}\left(|u_{1}(t)-u_{2}(t)|>C(1+t)^{-q}\right)+CL_{\varphi}\left(1+t\right)^{-q}.\] By (3.27), we find \[\left|\mathbb{E}\varphi\left(u\left(t,u_{0}^{1}\right)\right)-\mathbb{E}\varphi\left(u\left(t,u_{0}^{2}\right)\right)\right|\leq C\|\varphi\|_{L}\left(1+H\left(u_{0}^{1}\right)+H\left(u_{0}^{2}\right)\right)(1+t)^{-q},\] which implies that Theorem 2.6 holds. In order to prove (3.27), we show that (H1)-(H5) are true. Specifically, (H1) can be easily proved by the definition of \(l_{0}\), (H2) can be obtained by Lemma 3.7 and Corollary 3.8, and (H5) is the so-called Lyapunov structure and follows from Lemma 3.3. The proofs of (H3) and (H4) are completely similar to those of (2.3) and (2.4) in [13], except for the change of the exponent \(\sigma\), so we omit them. Since (H1)-(H5) hold, the proof can be concluded as in [13, Section 3]. ## Acknowledgements This work is supported by NSFC Grants 11871132, 11925102, and the Dalian High-level Talent Innovation Program (Grant 2020RD09).
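As a purely illustrative complement (not part of the analysis above), the following is a minimal numerical sketch of a spectral Galerkin truncation of equation (2.1) on \([0,1]\) with Dirichlet boundary conditions, using the sine eigenbasis of \(A\), an exact linear/damping step, and an explicit step for the nonlinearity and the noise. All concrete choices below — the number of modes, the grid, the time step, the coefficients \(b_{n}\) (positive only on the first few modes, in the spirit of Assumption 2.1), and the splitting scheme itself — are assumptions made for the sake of the illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper)
K, M = 64, 256          # number of sine modes, spatial grid points
sigma, lam, alpha = 1.0, 1.0, 0.5
dt, n_steps = 1e-4, 20000
N_star = 8              # number of noisy ("forced") low modes

x = (np.arange(M) + 0.5) / M                         # midpoint grid on (0, 1)
k = np.arange(1, K + 1)
E = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))    # E[n-1, :] = e_n(x) = sqrt(2) sin(n pi x)
mu = (np.pi * k) ** 2                                # eigenvalues of A = -d^2/dx^2

b = np.where(k <= N_star, 0.1 / k**2, 0.0)           # b_n > 0 only for the first N_star modes

def to_phys(uhat):        # u(x) = sum_n uhat_n e_n(x)
    return uhat @ E

def to_modes(u):          # uhat_n = int_0^1 u(x) e_n(x) dx, midpoint rule
    return (E @ u) / M

uhat = to_modes(np.exp(-50 * (x - 0.5) ** 2) + 0j)   # smooth initial datum

for _ in range(n_steps):
    # exact step for the linear + damping part: d uhat = (-i mu - alpha) uhat dt
    uhat *= np.exp((-1j * mu - alpha) * dt)
    # explicit step for the nonlinearity i lam |u|^{2 sigma} u (evaluated in physical space)
    u = to_phys(uhat)
    uhat += dt * 1j * lam * to_modes(np.abs(u) ** (2 * sigma) * u)
    # additive noise b dW on the low modes, with complex Brownian increments
    dW = rng.standard_normal(K) + 1j * rng.standard_normal(K)
    uhat += b * np.sqrt(dt / 2) * dW

H_star = (0.5 * np.sum(mu * np.abs(uhat) ** 2)
          - lam / (2 * sigma + 2) * np.mean(np.abs(to_phys(uhat)) ** (2 * sigma + 2)))
print("energy H_*(u(T)) ~", H_star)
```

Such a truncation is only meant to give a feel for the roles of \(P_{N_{*}}\), \(Q_{N_{*}}\), \(b\) and the energy \(H_{*}\); it makes no claim about convergence or about reproducing the mixing behaviour established above.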
2307.16558
Canonical Gradings of Monads
We define a notion of grading of a monoid T in a monoidal category C, relative to a class of morphisms M (which provide a notion of M-subobject). We show that, under reasonable conditions (including that M forms a factorization system), there is a canonical grading of T. Our application is to graded monads and models of computational effects. We demonstrate our results by characterizing the canonical gradings of a number of monads, for which C is endofunctors with composition. We also show that we can obtain canonical grades for algebraic operations.
Flavien Breuvart, Dylan McDermott, Tarmo Uustalu
2023-07-31T10:37:41Z
http://arxiv.org/abs/2307.16558v1
# Canonical Gradings of Monads ###### Abstract We define a notion of grading of a monoid \(\mathsf{T}\) in a monoidal category \(\mathcal{C}\), relative to a class of morphisms \(\mathcal{M}\) (which provide a notion of \(\mathcal{M}\)-subobject). We show that, under reasonable conditions (including that \(\mathcal{M}\) forms a factorization system), there is a canonical grading of \(\mathsf{T}\). Our application is to graded monads and models of computational effects. We demonstrate our results by characterizing the canonical gradings of a number of monads, for which \(\mathcal{C}\) is endofunctors with composition. We also show that we can obtain canonical grades for algebraic operations. ## 1 Introduction This paper is motivated by quantitative modelling of computational effects from mathematical programming semantics. It is standard in this domain to model notions of computational effect, such as nondeterminism or manipulation of external state, by (strong) monads [12]. In many applications, however, it is useful to be able to work with quantified effects, e.g., how many outcomes a computation may have, or to what degree it may read or overwrite the state. This is relevant, for example, for program optimizations or analyses to assure that a program can run within allocated resources. Quantification of effectfulness is an old idea and goes back to type-and-effect systems [9]. Mathematically, notions of quantified effect can be modelled by graded (strong) monads [14, 11, 5]. It is natural to ask if there are systematic ways for refining a non-quantitative model of some effect into a quantitative version, i.e., for producing a graded monad from a monad. In this paper, we answer this question in the affirmative. We show how a monad on a category can be graded with any class of subfunctors (intuitively, predicates on computations) satisfying reasonable conditions, including that it forms a factorization system on some monoidal subcategory of the endofunctor category. Moreover, this grading is canonical, namely universal in a certain 2-categorical sense. We also show that algebraic operations of the given monad give rise to _flexibly graded_ algebraic operations [6] of the canonically graded monad. Instead of working concretely with monads on a category, we work abstractly with monoids in a (skew) monoidal category equipped with a factorization system. The structure of the paper is this. In Section 2, we introduce the idea of grading by subobjects for general objects and instantiate this for grading of functors. We then proceed to gradings of monoids and monads in Section 3. In Section 4, we explore the specific interesting case of grading monads canonically by subsets of their sets of shapes. In Section 5, we explain the emergence of canonical flexibly graded algebraic operations for canonical gradings of monads. One longer proof is in Appendix A. We introduce the necessary concepts regarding the classical topics of monads, monoidal categories and factorization systems. For additional background on the more specific concepts of graded monad and skew monoidal category, which we also introduce, we refer to [5, 3] and [15, 8] as entry points.
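As a reminder of the notion this construction targets (stated here in its simplest standard form, with grades drawn from a plain monoid, whereas the paper works more generally with monoids in a possibly skew monoidal category): given a monoid \((\mathsf{E},\cdot,1)\), an \(\mathsf{E}\)-graded monad on a category \(\mathcal{C}\) consists of an \(\mathsf{E}\)-indexed family of endofunctors \(T_{e}\colon\mathcal{C}\to\mathcal{C}\) together with natural transformations \[\eta\colon\mathrm{Id}_{\mathcal{C}}\Rightarrow T_{1},\qquad\mu_{d,e}\colon T_{d}T_{e}\Rightarrow T_{d\cdot e},\] satisfying unit and associativity laws analogous to those of an ordinary monad; an ordinary monad is recovered when \(\mathsf{E}\) is the trivial one-element monoid. This is the standard notion from the graded-monads literature cited above (see, e.g., [14, 11, 5]).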
2305.00543
Calibration Error Estimation Using Fuzzy Binning
Neural network-based decisions tend to be overconfident, where their raw outcome probabilities do not align with the true decision probabilities. Calibration of neural networks is an essential step towards more reliable deep learning frameworks. Prior metrics of calibration error primarily utilize crisp bin membership-based measures. This exacerbates skew in model probabilities and portrays an incomplete picture of calibration error. In this work, we propose a Fuzzy Calibration Error metric (FCE) that utilizes a fuzzy binning approach to calculate calibration error. This approach alleviates the impact of probability skew and provides a tighter estimate while measuring calibration error. We compare our metric with ECE across different data populations and class memberships. Our results show that FCE offers better calibration error estimation, especially in multi-class settings, alleviating the effects of skew in model confidence scores on calibration error estimation. We make our code and supplementary materials available at: https://github.com/bihani-g/fce
Geetanjali Bihani, Julia Taylor Rayz
2023-04-30T18:06:14Z
http://arxiv.org/abs/2305.00543v2
# Calibration Error Estimation Using Fuzzy Binning ###### Abstract Neural network-based decisions tend to be overconfident, where their raw outcome probabilities do not align with the true decision probabilities. Calibration of neural networks is an essential step towards more reliable deep learning frameworks. Prior metrics of calibration error primarily utilize crisp bin membership-based measures. This exacerbates skew in model probabilities and portrays an incomplete picture of calibration error. In this work, we propose a Fuzzy Calibration Error metric (FCE) that utilizes a fuzzy binning approach to calculate calibration error. This approach alleviates the impact of probability skew and provides a tighter estimate while measuring calibration error. We compare our metric with ECE across different data populations and class memberships. Our results show that FCE offers better calibration error estimation, especially in multi-class settings, alleviating the effects of skew in model confidence scores on calibration error estimation. We make our code and supplementary materials available at: [https://github.com/bihani-g/fce](https://github.com/bihani-g/fce). language Models, Calibration, Fine-tuning, Fuzzy theory, Classification, Natural Language Processing ## 1 Introduction Neural network-based decision-making systems have evolved rapidly in the recent decade. Within the domain of natural language processing, deep learning has shaped the current evolution in language modeling. These neural network-based language models are trained on large text corpora and can be fine-tuned across a wide range of NLP tasks and further improved using synthetic semantic enhancement schemes [1], yielding state-of-the-art performance [2; 3; 4; 5]. Ideally, a neural model should output reliable and confident prediction probabilities. But recent works have shown that neural networks are unreliable and output highly overconfident predictions, resulting in over-estimation of the model's confidence in decisions [6; 7; 8]. This leads to model miscalibration, i.e. a lack of alignment between a model's decision probabilities and its actual likelihood of correctness. This lack of calibration can severely impact the trustworthiness of a model's decisions. A widely adopted measure of the degree of miscalibration is Expected Calibration Error (ECE) [9], used to measure neural network reliability [10; 11; 12]. The highly overconfident output prediction probabilities of neural networks result in a left-skewed probability distribution [13]. Since ECE utilizes a fixed-width crisp binning scheme, this skew results in higher probability bins largely contributing to the calibration error estimation, while lower probability bins are ignored [13; 14; 15]. To overcome these limitations, prior works have proposed alternative binning strategies such as equal-frequency binning [14], adaptive binning [15], replacing binning with smoothed kernel density estimation [16], and more. Most calibration error estimation techniques rely on crisp binning, which discards edge probabilities (probabilities that typically lie on the bin edge) that could have contributed to a more accurate calibration error estimation. Although some works have utilized fuzzification of prediction probabilities for downstream NLP tasks [17], the calibration impacts of such fuzzification are yet to be studied. We hypothesize that fuzzifying the binning scheme would allow edge probabilities to contribute toward more accurate calibration error estimation. 
Moreover, fuzzy binning would increase the visibility of lower probability scores by allowing them to have partial membership in higher probability bins, minimizing the skew problem in calibration error estimation. Towards testing this hypothesis, we propose a new metric for estimating calibration error, i.e. Fuzzy Calibration Error (FCE), that utilizes fuzzy binning instead of crisp binning to allow edge probability contributions and minimize skew in calculating calibration error. We perform empirical evaluation across different classification settings, comparing FCE with the baseline calibration error estimation metric ECE. Our results show that, unlike ECE, FCE better captures miscalibration in lower probability bins and provides a tighter and less skewed estimate of calibration error. These improvements are more visible in multi-class settings, where the skew in confidence scores exacerbates the calibration error estimation problem. The contributions of this work are summarized as follows: * We propose the Fuzzy Calibration Error (FCE) metric, which uses fuzzy binning to account for edge probabilities and minimize skew in calibration error estimation * We perform empirical evaluation across a wide range of classification settings and show the benefits of using FCE over ECE in minimizing the impacts of probability skew on calibration error estimation ## 2 Background ### 2.1 Neural Network Calibration Neural network calibration refers to the process of adjusting a neural network model's output probabilities to reflect the true probabilities of the events it is predicting. With the increased application of neural network architectures in high-risk real-world settings, their calibration has become an extensively studied topic in recent years [18; 19; 20]. Recent research has focused on improving the calibration of neural networks, particularly in the context of deep learning. Various methods have been proposed to achieve better calibration, including temperature scaling [6], isotonic regression [21], and histogram binning [22]. ### 2.2 Expected Calibration Error Expected calibration error (ECE) is a scalar measure of calibration error that calculates the weighted average of the difference between the accuracy of a model and its average confidence level over a set of bins defined by the predicted probabilities. Estimation of expected accuracy from finite samples is done by grouping predictions into \(M\) interval bins (each of size \(\frac{1}{M}\)), and the accuracy of each bin is calculated. Let \(B_{m}\) be a bin containing samples whose prediction confidence lies within the interval \(I_{m}=\left(\frac{m-1}{M},\frac{m}{M}\right]\). Then the accuracy of \(B_{m}\), where \(\hat{y}_{i}\) and \(y_{i}\) denote the predicted and true class labels respectively, is calculated as shown in Eq. 1. \[\mathrm{acc}\left(B_{m}\right)=\frac{1}{\left|B_{m}\right|}\sum_{i\in B_{m}}\mathbf{1}\left(\hat{y}_{i}=y_{i}\right) \tag{1}\] The average predicted confidence of \(B_{m}\) is calculated as shown in Eq. 2, where \(\hat{p}_{i}\) refers to the prediction probability of the \(i^{th}\) instance in \(B_{m}\). \[\mathrm{conf}\left(B_{m}\right)=\frac{1}{\left|B_{m}\right|}\sum_{i\in B_{m}}\hat{p}_{i} \tag{2}\] In an ideal scenario, for a perfectly calibrated model, \(\mathrm{acc}\left(B_{m}\right)=\mathrm{conf}\left(B_{m}\right)\) for all bins \(m\in\{1,\ldots,M\}\). Finally, ECE is calculated as shown in Eq. 3, where \(n\) is the total number of samples [9]. 
\[\mathrm{ECE}=\sum_{m=1}^{M}\frac{\left|B_{m}\right|}{n}\left|\mathrm{acc}\left(B_{m}\right)-\mathrm{conf}\left(B_{m}\right)\right| \tag{3}\]

## 3 Fuzzy Calibration Error

In this work, we propose Fuzzy Calibration Error (FCE), a metric that transforms raw prediction probabilities into soft bin membership values for calibration error estimation. This transformation has two benefits:

1. It allows edge probability contributions when calculating calibration error
2. It minimizes probability skew effects by increasing the visibility of lower probability bins in calibration error estimation

To perform fuzzification, we utilize trapezoidal membership functions to map raw softmax prediction probabilities to fuzzy bin membership values. The difference between crisp and fuzzy binning of model prediction probabilities is shown in Figure 1, with \(M=3\) bins, and can be extended to any number of bins where \(M>3\). While ECE only allows for crisp membership within each bin, FCE offers a more flexible binning approach, with partial memberships allowed across multiple bins.

Fuzzy Calibration Error (FCE) calculates the weighted average of the difference between accuracy and average model confidence over a set of \(M\) fuzzy bins. Estimation of expected accuracy from finite samples is done by grouping predictions into \(M\) fuzzy bins, and the accuracy of each bin is calculated. Let \(B_{m}\) be a bin containing samples whose prediction confidence lies within the interval \(I_{m}=\left(\frac{m-1}{M},\frac{m}{M}\right]\). Then the accuracy of bin \(B_{m}\), where \(y_{i}\) and \(\hat{y}_{i}\) denote true and predicted class labels, is calculated as shown in Eq. 4.

\[\mathrm{acc}_{fuzzy}(B_{m})=\frac{1}{|\mu_{fuzzy}(B_{m})|}\sum_{i\in B_{m}}\mu_{fuzzy}(B_{m})\,\mathbf{1}\left(\hat{y}_{i}=y_{i}\right) \tag{4}\]

Then, the average fuzzy predicted confidence of \(B_{m}\) is calculated as shown in Eq. 5.

\[\mathrm{conf}_{fuzzy}(B_{m})=\frac{1}{|\mu_{fuzzy}(B_{m})|}\sum_{i\in B_{m}}\mu_{fuzzy}(B_{m})\cdot\hat{p}_{i} \tag{5}\]

Figure 1: Crisp binning (Top left) and fuzzy binning (Bottom left) of prediction probabilities, where the number of bins \(M=3\). An example of the difference in bin assignment based on \(\hat{p}_{i}\) in crisp vs fuzzy binning (Right).

Finally, FCE is calculated as shown in Eq. 6. Unlike ECE, where the average is taken over the total number of samples \(n\), we take the average over the total fuzzy membership across all bins, i.e., \(\sum_{m=1}^{M}|\mu_{fuzzy}(B_{m})|\).

\[\mathrm{FCE}=\frac{1}{\sum_{m=1}^{M}|\mu_{fuzzy}(B_{m})|}\sum_{m=1}^{M}|\mu_{fuzzy}(B_{m})|\cdot|\mathrm{acc}_{fuzzy}(B_{m})-\mathrm{conf}_{fuzzy}(B_{m})| \tag{6}\]

## 4 Experiments

To evaluate the impact of fuzzy binning on calibration error estimation, we perform empirical evaluations across different classification settings. We fine-tune large language models for text classification and measure their calibration performance.

### Experimental Setup

**Datasets** We consider three text classification datasets to run our analyses, which vary in terms of class distributions, briefly described below.

* 20 Newsgroups (20NG): The 20 Newsgroups dataset [23] is a collection of newsgroup documents containing approximately \(20,000\) documents with an (almost) balanced class distribution across 20 newsgroups/topics.
* AGNews (AGN): The AG's news topic classification dataset [24] is a collection of approximately \(128,000\) news articles from 4 sources. This dataset is widely used in clustering, classification and information retrieval.
* IMDb: The IMDb Movie reviews dataset [25] is a collection of \(50,000\) movie reviews from the Internet Movie Database (IMDb). Each review is assigned either a positive or negative label, and the data is widely used to train models for binary sentiment classification tasks.

We further simulate varying data resource settings to compare miscalibration across different fine-tuning regimes. This is achieved by using a limited portion of the training data to perform fine-tuning, as has been done in prior works [26].

**Metrics** To evaluate calibration across different fine-tuning setups, we use ECE (refer to Eq. 3), FCE (refer to Eq. 6), and overconfidence (OF), described below.

* Overconfidence (OF): Overconfidence is the expectation of model prediction probabilities \(\hat{p}_{i}\) (confidence scores) over incorrect predictions and is calculated as shown in Eq. 7.

\[\mathrm{OF}=\frac{1}{k}\sum_{i\in incorrect}\hat{p}_{i} \tag{7}\]

Here \(k\) is the total number of incorrect predictions made by a given model.

**Fine-tuning Setup** We implement text classification using a fine-tuned BERT [27]. Since the focus of our work is not to create the most accurate fine-tuned model but to compare the efficacy of ECE and FCE across skewed prediction probabilities, we only fine-tune over one epoch and collect miscalibrated prediction probabilities.

### Results

**Fuzzy binning in FCE better captures lower probability bins and edge probabilities:** While ECE bins are highly impacted by the leftward skew in prediction probabilities, FCE yields a more uniformly distributed binning scheme. This can be seen in Fig. 2, where the primary contributors to ECE calculations are the higher probability bins, barely including lower probability bins in calculations. On the other hand, FCE is more uniformly spread across the probability range, better capturing lower probability bins and offering immunity against highly skewed prediction probabilities.

**Model overconfidence in multi-class classification settings is low but continuously increasing:** Refer to Fig. 3 to observe the changes in overconfidence in model predictions. Although a multi-class classification dataset like 20 Newsgroups results in lower overconfidence in predictions in limited data regimes, as compared to datasets with fewer classes, this overconfidence increases as the number of samples during fine-tuning increases. On the other hand, datasets with fewer classes, i.e., AGNews and IMDb, output highly overconfident predictions in limited data regimes, but this overconfidence plateaus as one keeps adding more samples.

Figure 2: Variation in calibration error estimated using ECE and FCE across different bin sizes (top to bottom) and class distributions (left vs right)

Figure 3: Variation in model overconfidence (OF) across different sample sizes

**Unlike ECE, FCE is not sensitive to the binning strategy and underlying data used for training:** ECE is a highly sensitive calibration error estimation metric, and is easily influenced by slight changes in data and/or binning strategies. Table 1 shows variations in \(\Delta\), which calculates the average difference in estimated calibration error when binning is performed using fewer bins (\(M\in[2..7]\)) versus more bins (\(M\in[8..15]\)). While ECE displays larger variations in calibration error estimation due to binning choices, FCE is fairly immune to these choices and shows minimal \(\Delta\) in most cases. Further, Fig.
4 shows that the distribution of ECE across probability bins is highly variable, and usually leftward skewed. On the other hand, FCE bins are more evenly distributed and, as shown in Table 1, output more conservative calibration error estimates.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & ECE & \(\Delta_{ECE}\) & FCE & \(\Delta_{FCE}\) \\ \hline Fine-tuning samples & \multicolumn{4}{c}{AGNews} \\ \hline 100 & 15.41 & **2.36** & 32.50 & 0.00 \\ 1000 & 3.33 & 0.63 & 11.41 & 0.46 \\ 5000 & 0.71 & 0.41 & 7.77 & 0.71 \\ 10000 & 0.80 & 0.78 & 6.86 & 0.66 \\ \hline & \multicolumn{4}{c}{IMDb} \\ \hline 100 & 5.00 & **1.71** & 22.50 & 0.00 \\ 1000 & 3.42 & 1.51 & 12.01 & 0.24 \\ 5000 & 1.49 & 0.23 & 7.41 & 0.82 \\ 10000 & 0.26 & 0.22 & 8.01 & 0.84 \\ \hline & \multicolumn{4}{c}{20 Newsgroups} \\ \hline 100 & 1.31 & 0.20 & 5.90 & 0.00 \\ 1000 & 29.21 & **4.47** & 38.83 & 0.27 \\ 5000 & 9.99 & 1.54 & 24.05 & 0.11 \\ 10000 & 2.28 & 1.30 & 16.18 & 0.39 \\ \hline \hline \end{tabular} \({}^{1}\) ECE, FCE, \(\Delta_{ECE}\) and \(\Delta_{FCE}\) values are scaled by a factor of 10. \end{table}

Table 1: Variations in ECE and FCE across different fine-tuning settings. Here, \(\Delta\) calculates the average difference in estimated calibration error when binning is performed using fewer bins (\(M\in[2..7]\)) versus more bins (\(M\in[8..15]\)).

## 5 Conclusion

Overconfidence in neural networks leads to the problem of erroneous estimation of calibration error. ECE, a widely adopted metric for measuring calibration error across model decisions, has recently come under scrutiny for being biased towards high probability bins. To address this limitation, we propose a new calibration error metric, i.e. Fuzzy Calibration Error (FCE). This metric transforms raw model confidence scores into fuzzy bin memberships, allowing more visibility of lower probability bins within the calibration error calculations. Our results show that FCE offers a tighter estimate of calibration error, and the benefits of this metric are more prominent in multi-class classification settings, where skew in model confidence largely affects calibration error estimation using ECE.

Acknowledgments. This work was partially supported by the Department of Justice grant #15PJDP-21-GK-03269-MECP.
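For reference, Eqs. 3-6 can be prototyped in a few lines of NumPy. The sketch below is illustrative only: the exact trapezoidal membership shape (here, full membership in a score's own bin with linearly decaying membership in the two neighbouring bins) is an assumption and is not taken from the authors' released implementation.

```python
import numpy as np

def ece(conf, correct, M=10):
    """Crisp Expected Calibration Error (Eq. 3)."""
    conf, correct = np.asarray(conf), np.asarray(correct, dtype=float)
    bins = np.clip(np.ceil(conf * M).astype(int) - 1, 0, M - 1)  # crisp bin index per sample
    err, n = 0.0, len(conf)
    for m in range(M):
        mask = bins == m
        if mask.any():
            err += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return err

def trapezoidal_membership(conf, M):
    """Membership of each score in each of M bins (assumed trapezoid:
    plateau over the score's own bin, linear shoulders into the neighbours)."""
    centers = (np.arange(M) + 0.5) / M
    width = 1.0 / M
    d = np.abs(conf[:, None] - centers[None, :])   # distance of each score to each bin center
    return np.clip(1.5 - d / width, 0.0, 1.0)

def fce(conf, correct, M=10):
    """Fuzzy Calibration Error (Eqs. 4-6) under the membership assumption above."""
    conf, correct = np.asarray(conf), np.asarray(correct, dtype=float)
    mu = trapezoidal_membership(conf, M)           # shape (n, M)
    total = mu.sum(axis=0)                         # |mu_fuzzy(B_m)| per bin
    acc_f = (mu * correct[:, None]).sum(axis=0) / np.maximum(total, 1e-12)
    conf_f = (mu * conf[:, None]).sum(axis=0) / np.maximum(total, 1e-12)
    return float((total * np.abs(acc_f - conf_f)).sum() / total.sum())

# toy usage with a few overconfident predictions
conf = np.array([0.99, 0.95, 0.91, 0.88, 0.97, 0.62])
correct = np.array([1, 1, 0, 1, 0, 1])
print(ece(conf, correct, M=3), fce(conf, correct, M=3))
```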
2309.10485
Exploring Sentence Type Effects on the Lombard Effect and Intelligibility Enhancement: A Comparative Study of Natural and Grid Sentences
This study explores how sentence types affect the Lombard effect and intelligibility enhancement, focusing on comparisons between natural and grid sentences. Using the Lombard Chinese-TIMIT (LCT) corpus and the Enhanced MAndarin Lombard Grid (EMALG) corpus, we analyze changes in phonetic and acoustic features across different noise levels. Our results show that grid sentences produce more pronounced Lombard effects than natural sentences. Then, we develop and test a normal-to-Lombard conversion model, trained separately on LCT and EMALG corpora. Through subjective and objective evaluations, natural sentences are superior in maintaining speech quality in intelligibility enhancement. In contrast, grid sentences could provide superior intelligibility due to the more pronounced Lombard effect. This study provides a valuable perspective on enhancing speech communication in noisy environments.
Hongyang Chen, Yuhong Yang, Zhongyuan Wang, Weiping Tu, Haojun Ai, Song Lin
2023-09-19T09:54:36Z
http://arxiv.org/abs/2309.10485v2
# A Comparative Study of Grid and Natural Sentences Effects on Normal-to-Lombard Conversion ###### Abstract Grid sentence is commonly used for studying the Lombard effect and Normal-to-Lombard conversion. However, it's unclear if Normal-to-Lombard models trained on grid sentences are sufficient for improving natural speech intelligibility in real-world applications. This paper presents the recording of a parallel Lombard corpus (called Lombard Chinese TIMIT, LCT) extracting natural sentences from Chinese TIMIT. Then We compare natural sentences and grid sentences in terms of Lombard effect and Normal-to-Lombard conversion using LCT and an Enhanced MADarin Lombard Grid corpus (EMALG). Through a parametric analysis of the Lombard effect, We find that as the noise level increases, both natural sentences and grid sentences exhibit similar changes in parameters, but in terms of the increase of the alpha ratio, grid sentences show a greater increase. Following a subjective intelligibility assessment across genders and Signal-to-Noise Ratios, the StarGAN model trained on EMALG consistently outperforms the model trained on LCT in terms of improving intelligibility. This superior performance may be attributed to EMALG's larger alpha ratio increase from normal to Lombard speech. Hongyang Chen\({}^{1}\), Yuhong Yang\({}^{1,2,\ast}\), Qingmu Liu\({}^{1}\), Baifeng Li\({}^{1}\), Weiping Tu\({}^{1}\), Song Lin\({}^{3}\)\({}^{1}\)National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, China. \({}^{2}\)Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan, China. \({}^{3}\)Guangdong OPPO Mobile Telecommunications Corp., China. Lombard effect, natural sentence, intelligibility enhancement ## 1 Introduction People involuntary increase their vocal effort to enhance speech intelligibility when speaking in noisy environments. This phenomenon is also known as the Lombard effect [1]. The changes in vocal effort involve not only loudness but also other acoustic features, such as spectral tilt, fundamental frequency (F0), vowel duration, the first and second formant (F1, F2), and so on [2, 3]. Various methods apply Lombard effect to improve intelligibility and can be summarized into three types: rule-based, measure-based, and data-based methods. Rule-based method [4] and measure-based method [5] are rigid energy redistribution and cannot model the acoustical changes that occur in natural speech [6]. The Data-based method, which is also known as Normal-to-Lombard conversion, converts normal speech (also called plain speech, speaking in a quiet environment) to Lombard speech(speaking in a noisy environment) via mapping the acoustic features altered by the Lombard effect. Lopez et al. use Bayesian Gaussian mixture models (BGMM) [7] to map F0, spectral tilt and energy from normal speech to Lombard speech. Seshadri et al. use cycle-consistent adversarial networks (CycleGANs) to learn the Lombard speech distribution with non-parallel data [8]. Subsequently, they use Augmented CycleGANs to adjust the degree of "Lombardness" in the converted speech [9]. Li et al. combine bi-LSTM and BGMM for the implementation of real-time Normal-to-Lombard conversion [6]. They also use star generative adversarial network (StarGAN) to implement feature mapping for multi-domain only by a single model [10]. 
The data-based methods mentioned above rely on Lombard corpora, which vary in terms of sentence type, language, the number of speakers, noise type, noise level, and recording equipment. For example, "Read and conversational Lombard speech corpus" [11] records 20 Finnish speakers at noise levels of 65dB and 80dB, consisting of both read and conversational speech. "Lombard speech dataset for German language" [12] involves 8 speakers and covers noise levels of 0dB, 55dB, and 70dB. "A corpus of audio-visual Lombard speech" (English Lombard Grid)[2] records 54 speakers at noise levels of 30dB and 80dB and uses grid sentences as the text. Due to English Lombard Grid having the most speakers and being publicly available, it is commonly used for studying the Lombard effect and for Normal-to-Lombard conversion [6, 10]. With reference to the English Lombard Grid, a Mandarin Lombard Grid corpus [3] and an Enhanced MADarin Lombard Grid corpus (EMALG) [13] are created using grid sentences. EMALG records 34 speakers at noise levels of 40dBA, 55dBA, and 80dBA, with meaningful grid sentences divided into five categories of words. However, in communication, people speak natural sentences with varying structures and lengths. It's unclear if Normal-to-Lombard models trained on grid sentences are sufficient for improving natural speech intelligibility in real-world applications. Intuitively, a Lombard corpus with natural sentences should be more suitable for studying the Lombard effect and achieving better conversion performance than grid sentences. Unfortunately, there is no available Lombard corpus for a comparative study of grid sentences and natural sentences since Lombard effect is sensitive to different setups [2] such as noise level, noise type and language. To address this challenge, first, we record a Lombard corpus (Lombard Chinese TIMIT, LCT) with 36 speakers producing a total of 10,800 utterances at 3 noise levels (3,600 normal speech and 7,200 Lombard speech in 2 styles). The sentences come from Chinese TIMIT [14], which extracts sentences from Chinese news. Therefore, the sentences can be considered as natural sentences. We maintain a recording setup similar to EMALG [13] for a fair comparative study, except for the sentences. Then, we conduct parametric analysis to explore the differences between the Lombard effect of grid sentences and natural sentences. Second, we choose the latest model, StarGAN [10], for Normal-to-Lombard conversion, train it with both grid and natural sentences, and then test its performance on natural sentences to explore the impact of different sentence types on Normal-to-Lombard conversion. ## 2 Lombard Chinese TIMIT ### Standard Chinese Lombard Sentences We choose Chinese TIMIT [14] to get natural Chinese sentence. All sentences are 10-20 characters long and selected from the corpus of Chinese Gigaword Fifth Edition [15], which is a comprehensive archive of newswire text data from Chinese news sources. In Chinese TIMIT, there are three types of sentences: calibration, shared and unique. "Calibration" sentences are read by all speakers; "Shared" sentences are read by 10 speakers; "Unique" sentences are read by only one speaker. We use all sentences of 20 calibration sentences and 40 shared sentences, first 40 sentences of 60 unique sentences, total 100 sentences for each speaker. ### Speaker Recruitment We recruit 50 students at Wuhan University to read the sentences. 
All of them speak Standard Chinese, achieving Class 2 Level 1 or better on national standard Mandarin proficiency test. After the screening process involving the removal of clipping and mispronunciations, we are able to utilize recordings from 36 participants, 18 males and 18 females. ### Recording Setup We choose the steady speech-shaped noise (SSN) of the master audio-Harvard speech corpus [16] to excite the Lombard effect. As EMALG[13], we employ SSN at 55, 80 dBA to induce Lombard speech. In our previous research[3], we find that 30dBA and 40dBA belong to the same Lombard style. Therefore, we set SSN at 40 dBA (hereinafter referred to as dB) as normal speech. We use the RODE NT1-A condenser microphone for signal acquisition. Two Speaker wearing headphones record in the anechoic chamber of Audio Lab at Wuhan University. One speaker reads the text and another one serves as the listener, checking the speech to ensure accuracy. Both speakers alternate their recording roles. For more detail, please refer to EMALG [13]. ## 3 Analysis of the Lombard Effect ### Parametric Analysis We extract a total of seven parameters of phoneme, formant and acoustics from the normal speech and the Lombard speech at noise levels of 55dB and 80dB to investigate the Lombard effect. We use Montreal Forced Aligner [17] with both EMALG and LCT to train an alignment model and generate _TextGrid_. From _TextGrid_, We calculate the average vowel duration, the ratio of total vowel to utterance duration, to characterize the phonemes. As for formant parameters, we employ Praat 1 to estimate the first and second formant frequencies (F1 and F2). The open SMILE [18] tools are used to extract three acoustic parameters, including logarithmic F0 on a semitone frequency scale (F0), estimate of perceived signal intensity from an auditory spectrum (loudness) [19] and the alpha ratio [20] (energy ratio between 50-1000 Hz and 1-15 kHz). Paired-sample t-tests are employed to determine the significance of differences between different speaking styles. Footnote 1: [https://www.fon.hum.uva.nl/praat/](https://www.fon.hum.uva.nl/praat/) Based on the above seven acoustic parameters, we conduct Lombard effect analyses between EMALG and LCT as follows Fig.1. The results show that: * As the noise level increases, natural sentences and grid sentences exhibit similar changes in all parameters. * In grid sentences, there is no significant difference in vowel ratio, loudness, and F0 between 40dB and 55dB, whereas significant differences exist in natural sentences * We observe a significant enhancement of F2 for female speakers between 55dB and 80dB in natural sentences, indicating a more advanced tongue position. This phenomenon is not observed in grid sentences. * Natural sentences, as opposed to grid sentences, tend to result in shorter vowel duration and vowel ratio, indicating that the vowel pronunciation in natural sentences is not as full as in grid sentences. * The spectral tilt of natural sentences is flatter. As noise levels increase, the increase in alpha ratio for natural sentences is less than Grid sentences. For instance, in both female and male speakers, the increases in the alpha ratio of grid sentences are 6.87 and 3.80, respectively, from normal speech to 80dB Lombard speech. In natural sentences, the increases for female and male speakers are 6.74 and 3.13. * The loudness of male speakers is lower in natural sentences. * Sentence type does not have a significant impact on F0 and F1. 
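To make the spectral-tilt measure used above concrete, the following sketch computes an alpha-ratio-like quantity (energy between 50-1000 Hz over energy between 1-15 kHz, expressed in dB) directly from a waveform. The paper extracts this parameter with openSMILE, so the single-window FFT, the windowing, and the absence of per-frame averaging here are illustrative assumptions rather than a re-implementation of that toolkit.

```python
import numpy as np

def alpha_ratio_db(signal, sr, low=(50, 1000), high=(1000, 15000)):
    """Energy ratio (in dB) between a low band and a high band of the spectrum.
    Note: with sr = 16 kHz the high band is truncated at the Nyquist frequency."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    e_low = spec[(freqs >= low[0]) & (freqs < low[1])].sum()
    e_high = spec[(freqs >= high[0]) & (freqs < high[1])].sum()
    return 10.0 * np.log10(e_low / max(e_high, 1e-12))

# toy usage: two synthetic signals with different amounts of high-frequency
# energy yield different alpha ratios (real speech frames would be used instead)
sr = 16000
t = np.arange(sr) / sr
sig_a = np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 3000 * t)
sig_b = np.sin(2 * np.pi * 200 * t) + 0.30 * np.sin(2 * np.pi * 3000 * t)
print(alpha_ratio_db(sig_a, sr), alpha_ratio_db(sig_b, sr))
```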
Figure 1: Seven parameters of phoneme, formant and acoustics across talkers; TIMIT represents LCT and Grid represents EMALG. Error bars represent standard deviation. F: female, M: male, ALL: female and male. ns and * mean non-significant and significant (p \(<\) 0.001) in t-tests of the cross-speaker means, cross-female-speaker means, and cross-male-speaker means in two different noise conditions.

## 4 Intelligibility Enhancement via Normal-to-Lombard Conversion

### Experimental Setup

We choose StarGAN [10] to convert normal speech to the target 80dB Lombard speech. Our implemented StarGAN model uses 55dB Lombard speech instead of dynamic range compression for multi-domain training. The StarGANs trained with EMALG and LCT are called Grid-StarGAN and TIMIT-StarGAN, respectively. The speech converted by Grid-StarGAN and TIMIT-StarGAN is called Grid-StarGAN speech and TIMIT-StarGAN speech, respectively. Grid-StarGAN is trained with all 34 EMALG speakers, while TIMIT-StarGAN utilizes 32 speakers (16 female and 16 male) from LCT. The remaining 4 speakers (2 female and 2 male) from LCT are the test set for Grid-StarGAN and TIMIT-StarGAN. The sampling rate is downsampled from 48 kHz to 16 kHz.

### Subjective Intelligibility Test

We conduct a subjective intelligibility test to clarify the performance of Grid-StarGAN and TIMIT-StarGAN. We establish five Signal-to-Noise Ratio (SNR) levels ranging from -11 to -1 with intervals of 2.5dB. To avoid the memorization effect, we select 50 recordings with different sentences. We select 10 recordings (equally split between females and males) from the calibration part of the test set, and another 40 recordings (equally split between females and males) from the shared part. These recordings are allocated to each SNR level in the same proportion, resulting in ten recordings per SNR for conversion. To reduce individual differences, we recruit 30 students from Wuhan University, aged between 18 and 26, of whom 15 listen to the 50 Grid-StarGAN speech samples and 15 listen to the 50 TIMIT-StarGAN speech samples. We employ the word correct rate (WCR) as the measure of intelligibility. Testers are required to adjust the volume to their maximum acceptable level before starting. They then repeatedly listen to the played speech and write the _pinyin_ of the words they hear in the input box. Testers are allowed to guess words they are not sure of.

The subjective test results are shown in Fig. 2. For both females and males across all 5 SNRs, the intelligibility of Grid-StarGAN speech is higher than that of TIMIT-StarGAN speech, e.g., the intelligibility of Grid-StarGAN speech is 125% and 136% higher than that of TIMIT-StarGAN at SNR=-6 in females and males. This result indicates that Normal-to-Lombard conversion using grid sentences performs better in the context of StarGAN and the small-size Lombard corpus. As for the result that intelligibility is lower at SNR=-1 than at SNR=-3.5, we believe the reason is that different sentences lead to different levels of intelligibility.

### Parametric Analysis

To see why grid sentences have higher intelligibility than natural sentences, we use Grid-StarGAN and TIMIT-StarGAN to convert normal speech from the test set to 80dB Lombard speech. We conduct parameter analysis on the converted speech, normal speech, and 80dB Lombard speech of the test set using the trained alignment model. The results of the parameter analysis are shown in Table 1. Since the intelligibility test employs 5 SNR levels, the parameter of loudness is not meaningful here.
Due to space constraints, and because parameters such as vowel duration, vowel ratio, F0, F1 and F2 show no significant difference between Grid-StarGAN speech and TIMIT-StarGAN speech, we do not list them in Table 1. It is noticeable that the alpha ratio exhibits a greater increase in Grid-StarGAN speech compared to TIMIT-StarGAN speech. In males and females, the alpha ratio increases by 8.87 and 4.01, respectively, in Grid-StarGAN speech, while in TIMIT-StarGAN speech, the increases are 8.55 and 3.01, respectively. Due to the strong correlation between the alpha ratio and intelligibility [21], this observation can also account for the superior performance of models trained on grid sentences in the subjective intelligibility test as opposed to TIMIT sentences. Since StarGAN primarily maps spectral envelopes, the result also corresponds to EMALG's larger alpha ratio increase from normal to 80dB Lombard speech, as shown in Fig. 1, compared to the corresponding increase in LCT.

## 5 Conclusion and Discussion

We record Lombard Chinese TIMIT and compare it with EMALG. Through parametric analysis, we find that as the noise level increases, both corpora exhibit similar changes in their parameters. Secondly, the Normal-to-Lombard conversion models trained on the two corpora yield different intelligibility improvements. In the context of a small-size Lombard corpus and the StarGAN model, the improvement in intelligibility can be greater with grid sentences. After parametric analysis of the converted speech, the greater improvement in the intelligibility of Grid-StarGAN speech may be attributed to EMALG's larger alpha ratio increase from normal to Lombard speech. Therefore, Grid-StarGAN has learned to enhance the alpha ratio more effectively than TIMIT-StarGAN. In the future, we will test more Normal-to-Lombard conversion models to compare the impact of these two types of sentences.

## 6 Acknowledgments

This research is funded in part by the National Natural Science Foundation of China (62171326), Key Research and Development Program of Hubei Province (220171406) and Guangdong OPPO Mobile Telecommunications Corp.
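Although the paper does not include the stimulus-generation code, the listening test in Section 4.2 mixes converted speech with speech-shaped noise at five SNR levels. A minimal sketch of that mixing step, assuming a simple power-based SNR definition, is given below; the placeholder signals stand in for the converted speech and the SSN masker.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix.
    Assumes both signals share the sampling rate; noise is trimmed to the speech length."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / max(p_noise, 1e-12))

# toy usage over the five SNR conditions used in the listening test
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # placeholder for converted speech
ssn = rng.standard_normal(16000)                             # placeholder for speech-shaped noise
mixtures = [mix_at_snr(speech, ssn, snr) for snr in [-11, -8.5, -6, -3.5, -1]]
```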
2305.19774
Ambiguity in solving imaging inverse problems with deep learning based operators
In recent years, large convolutional neural networks have been widely used as tools for image deblurring, because of their ability in restoring images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem and its solution is difficult to approximate when noise affects the data. Really, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and produce poor reconstructions. In addition, networks do not necessarily take into account the numerical formulation of the underlying imaging problem, when trained end-to-end. In this paper, we propose some strategies to improve stability without losing to much accuracy to deblur images with deep-learning based methods. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not extremely amplify noise in the computed image. Second, we introduce a unified framework where a pre-processing step balances the lack of stability of the following, neural network-based, step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational model-based regularized formulation of the latent imaging problem. This framework is also formally characterized by mathematical analysis. Numerical experiments are performed to verify the accuracy and stability of the proposed approaches for image deblurring when unknown or not-quantified noise is present; the results confirm that they improve the network stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
Davide Evangelista, Elena Morotti, Elena Loli Piccolomini, James Nagy
2023-05-31T12:07:08Z
http://arxiv.org/abs/2305.19774v1
# Ambiguity in solving imaging inverse problems with deep learning based operators ###### Abstract In recent years, large convolutional neural networks have been widely used as tools for image deblurring, because of their ability in restoring images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem and its solution is difficult to approximate when noise affects the data. Really, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and produce poor reconstructions. In addition, networks do not necessarily take into account the numerical formulation of the underlying imaging problem, when trained end-to-end. In this paper, we propose some strategies to improve stability without losing to much accuracy to deblur images with deep-learning based methods. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not extremely amplify noise in the computed image. Second, we introduce a unified framework where a pre-processing step balances the lack of stability of the following, neural network-based, step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational model-based regularized formulation of the latent imaging problem. This framework is also formally characterized by mathematical analysis. Numerical experiments are performed to verify the accuracy and stability of the proposed approaches for image deblurring when unknown or not-quantified noise is present; the results confirm that they improve the network stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness. Neural Networks Stability Image Deblurring Deep Learning Inverse Problems in Imaging ## 1 Introduction Image restoration is a discipline within the field of image processing focusing on the removal or reduction of distortions and artifacts from images. This topic is of interest in a wide range of applications, including medical imaging, satellite and aerial imaging, and digital photography. In this last case, blurring on images is quite frequent and several factors can cause it. To set some examples, Gaussian blur is caused by the diffraction of light passing through a lens and it is more prevalent in images captured with low-aperture lenses or in situations where the depth of field is shallow, whereas motion blur is due to handheld camera movements or low lighting conditions and slow shutter speeds [1, 2, 3]. Also noise seriously affects images; it is usually introduced by the acquisition systems. Researchers have developed a number of algorithms for reducing blur and noise and image restoration is a very active field of research where new methods are continuously being proposed and developed. Such methodologies can be classified into two main categories: model-based and learning-based. The model-based techniques assume that the degradation process is known and it is mathematically described as an inverse problem [4]. The learning-based methods learn a map between the degraded and clean images during the training phase and use it to deblur new corrupted images [5]. 
**Model-based mathematical formulation.** In model-based approaches, denoting by \(\mathcal{X}\) the compact and locally connected subset of \(\mathbb{R}^{n}\) of the \(\mathbf{x}^{gt}\) ground truth sharp images, the relation between \(\mathbf{x}^{gt}\in\mathcal{X}\) and its blurred and noisy observation \(\mathbf{y}^{\delta}\) is formulated as: \[\mathbf{y}^{\delta}=K\mathbf{x}^{gt}+\mathbf{e},\] (P) where \(K\) is the known blurring operator and \(\mathbf{e}\) represents noise on the image. We can say that, with very high probability, \(||\mathbf{e}||\leq\delta\). In this setting, the goal of model-based image deblurring methods is to compute a sharp and unobstructed image \(\mathbf{x}\) given \(\mathbf{y}^{\delta}\) and \(K\), by solving the linear inverse problem. When noise is present, problem (P) is typically reformulated into an optimization problem, where a data fit measure, namely \(\mathcal{F}\), is minimized. Since the blurring operator \(K\) is known to be severely ill-conditioned, a regularization term \(\mathcal{R}\) is added to the data-fidelity term \(\mathcal{F}\) to avoid noise propagation. The resulting optimization problem is formulated as: \[\mathbf{x}^{*}=\arg\min_{\mathbf{x}\in\mathcal{X}}\mathcal{F}(K\mathbf{x}, \mathbf{y}^{\delta})+\lambda\mathcal{R}(\mathbf{x}), \tag{1}\] where \(\lambda>0\) is the regularization parameter. This optimization problem can be solved using different iterative methods depending on the specific choice for \(\mathcal{F}\) and \(\mathcal{R}\)[6, 1, 7]. We remark that \(\mathcal{F}\) is set as the least-squares function in case of Gaussian noise, whereas te regularization function \(\mathcal{R}\) can be tuned by the users according to the imaging properties they desire to enforce. Recently, plug-and-play techniques plug a denoiser, usually a neural network, into an iterative procedure to solve the minimization problem [8, 9, 10]. The value of \(\lambda\) can also be selected by automatic routines, image-by-image [11, 12]. These features make model-based approaches mathematically explainable, flexible, and robust. However, a disadvantage is that the final result strongly depends on a set of parameters that are difficult to set up properly. **Deep learning-based formulation.** In the last decade, deep learning algorithms have been emerging as good alternatives to model-based approaches. Disregarding any mathematical blurring operator, convolutional neural networks (NNs) can be trained to identify patterns characterizing blur on images, thus they can learn several kinds of blur and adapt to each specific imaging task. Large and complex convolutional neural networks, called UNet, have been proposed to achieve high levels of accuracy, by automatically tuning and defining their inner filters and proper transformations for blur reduction, without needing any parameter setting [13, 14, 15, 16]. Indeed, the possibility to process large amounts of data in parallel makes networks highly efficient for image processing tasks and prone to play a key role in the development of new and more advanced techniques in the future. However, challenges and limitations in using neural networks are known in the literature. Firstly, it is difficult to understand and precisely interpret how they are making decisions and predictions, as they act as unexplainable black boxes mapping the input image \(\mathbf{y}^{\delta}\) towards \(\mathbf{x}^{gt}\) directly. 
Secondly, neural networks are prone to overfitting, which occurs when they become too specialized for the training samples and perform poorly on new, unseen images. Lastly, the high performance of neural networks is typically evaluated only in the so-called _in-domain_ case, i.e. the test procedure is performed on images sharing exactly the same corruption with the training samples, hence the impact of unquantified perturbations (as noise can be) has been not widely studied yet. In other words, the robustness of NN-based image deblurring with respect to unknown noise is not guaranteed [17, 18, 19, 20]. **Contributions of the article.** Motivated by the poor stability but high accuracy of NN-based approaches in solving inverse imaging problems such as deblurring, this paper proposes strategies to improve stability, maintaining good accuracy, acting similarly as regularization functions do in the model-based approach. Basing on a result showing a trade-off between stability and accuracy, we propose to use a very small neural network, in place of the UNet, which is less accurate, but it is much more stable than larger networks. Since it has only few parameters to identify, it consumes relatively little time and energy, thus meeting the green AI principles. Moreover, we propose two new NN-based schemes, embedding a pre-processing step to face the network instability when solving deblurring problems as in (P). The first scheme, denoted as FiNN, applies a model-free low-pass filter to the datum, before passing it as input to the NN. This is a good approach to be applied whenever an unknown noise is present because it does not need any model information or parameter tuning. The second scheme, called Stabilized Neural Network (StNN), exploits an estimation of the noise statistics and the mathematical modeling of both noise and image corruption process. Figure 1 shows a draft of the proposed frameworks. whose robustness is evaluated from a theoretical perspective and tested on an image data set. ### Structure of the article. The work is organized as follows. In Section 2, we formulate the NN-based action as an image reconstructor for problem (P). In Section 3 we show our experimental set-up and motivate our work on some experiments, thus we state our proposals and derive their main properties in Section 4. Finally, in Section 5 we will report the results of some experiments to test the methods and empirically validate the theoretical analysis, before concluding with final remarks in Section 6. ## 2 Solving imaging inverse problems with Deep Learning based operators As stated in (P), image restoration is mathematically modeled as an inverse problem which derives from the discretization of Fredholm integral equations, are ill-posed and the noise on the data is amplified in the numerically computed solution of \(\mathbf{y}^{\delta}=K\mathbf{x}^{gt}+\mathbf{e}\). A rigorous theoretical analysis on the solution of such problems with variational techniques which can be formulated as in equation (1) has been performed, both in the continuous and discrete settings, and regularization techniques have been proposed to limit the noise spread in the solution [21, 1]. At our best knowledge, a similar analysis for deep learning based algorithms is not present in literature and it is quite mysterious how these algorithms behave in presence of noise on the data. In this paper we use some of the mathematical tools defined and proved in [20] and we propose here some techniques to limit noise spread. 
More details about the proposed mathematical framework in a more general setting can be found in [20]. In the following, if not differently stated, as a vector norm we consider the Euclidean norm. We first formalize the concept of reconstructor associated to (P) with the following definition. **Definition 2.1**.: Denoting by \(Rg(K)\) the range of \(K\), we call \(\mathcal{Y}^{\delta}=\{\mathbf{y}^{\delta}\in\mathbb{R}^{n};\inf_{\mathbf{y} \in Rg(K)}||\mathbf{y}-\mathbf{y}^{\delta}||\leq\delta\}\) the set of corrupted images according to \(\delta\geq 0\). Any continuous function \(\psi:\mathcal{Y}^{\delta}\rightarrow\mathbb{R}^{n}\), mapping \(\mathbf{y}^{\delta}=K\mathbf{x}^{gt}+\mathbf{e}\) (where \(||\mathbf{e}||\leq\delta\) with \(\delta\geq 0\)) to an \(\mathbf{x}\in\mathbb{R}^{n}\), is called a reconstructor. Figure 1: A graphical draft highlighting the introduction of pre-processing steps Fi and St defining the proposed frameworks FiNN and StNN, respectively. The associated _reconstructing error_ is \[\mathcal{E}_{\psi}(\mathbf{x}^{gt},\mathbf{y}^{\delta}):=||\psi(\mathbf{y}^{ \delta})-\mathbf{x}^{gt}||. \tag{2}\] **Definition 2.2**.: We quantify the accuracy of the reconstructor \(\psi\), by defining the measure \(\eta>0\) as: \[\eta=\sup_{\mathbf{x}^{gt}\in\mathcal{X}}||\psi(K\mathbf{x}^{gt})-\mathbf{x}^ {gt}||=\sup_{\mathbf{x}^{gt}\in\mathcal{X}}\mathcal{E}_{\psi}(\mathbf{x}^{gt}, \mathbf{y}^{0}). \tag{3}\] We say that \(\psi\) is \(\eta^{-1}\)-accurate [21]. We now consider a neural network as a particular reconstructor. **Definition 2.3**.: Given a neural network architecture \(\mathcal{A}=(\nu,S)\) where \(\nu=(\nu_{0},\nu_{1},\ldots,\nu_{L})\in\mathbb{N}^{L+1}\), \(\nu_{L}=n\), is the width of each layer and \(S=(S_{1,1},\ldots,S_{L,L}),S_{j,k}\in\mathbb{R}^{\nu_{j}\times\nu_{k}}\) is the set of matrices representing the skip connections, we define the parametric family \(\Xi_{\theta}^{\mathcal{A}}\) of neural network reconstructors with architecture \(\mathcal{A}\), parameterized by \(\theta\in\mathbb{R}^{s}\), as: \[\Xi_{\theta}^{\mathcal{A}}=\{\psi_{\theta}:\mathcal{Y}^{\delta}\to\mathbb{R}^ {n};\theta\in\mathbb{R}^{s}\} \tag{4}\] where \(\psi_{\theta}(\mathbf{y}^{\delta})=\mathbf{z}^{L}\) is given by: \[\begin{cases}\mathbf{z}^{0}=\mathbf{y}^{\delta}\\ \mathbf{z}^{l+1}=\rho(W^{l}\mathbf{z}^{l}+\mathbf{b}^{l}+\sum_{k=1}^{l}S_{l,k} \mathbf{z}^{k})\quad\forall l=0,\ldots,L-1\end{cases} \tag{5}\] and \(W^{l}\in\mathbb{R}^{\nu_{l+1}\times\nu_{l}}\) is the weight matrix, \(\mathbf{b}^{l}\in\mathbb{R}^{\nu_{l+1}}\) is the bias vector. We now analyze the performance of NN-based reconstructors when noise is added to their input. **Definition 2.4**.: Given \(\delta\geq 0\), the \(\delta\)-stability constant \(C_{\psi_{\theta}}^{\delta}\) of an \(\eta^{-1}\)-accurate reconstructor is defined as: \[C_{\psi_{\theta}}^{\delta}=\sup_{\begin{subarray}{c}\mathbf{x}^{gt}\in \mathcal{X}\\ ||\mathbf{e}||\leq\delta\end{subarray}}\frac{\mathcal{E}_{\psi}(\mathbf{x}^{ gt},\mathbf{y}^{\delta})-\eta}{||\mathbf{e}||_{2}}. 
\tag{6}\] Since from Definition 2.4 we interestingly observe that the stability constant amplifies the noise in the data: \[||\psi_{\theta}(\mathbf{y}^{0}+\mathbf{e})-\mathbf{x}||_{2}\leq\eta+C_{\psi_ {\theta}}^{\delta}||\mathbf{e}||_{2}\quad\forall\mathbf{x}\in\mathcal{X},\; \forall\mathbf{e}\in\mathbb{R}^{n},||\mathbf{e}||_{2}\leq\delta, \tag{7}\] with \(\mathbf{y}^{0}\) the noiseless datum, we can give the following definition: **Definition 2.5**.: Given \(\delta\geq 0\), a neural network reconstructor \(\psi_{\theta}\) is said to be \(\delta\)-stable if \(C_{\psi_{\theta}}^{\delta}\in[0,1)\). The next theorem states an important relation between the stability constant and the accuracy of a neural network as a solver of an inverse problem. **Theorem 2.1**.: _Let \(\psi_{\theta}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be an \(\eta^{-1}\)-accurate reconstructor. Then, for any \(x^{gt}\in\mathcal{X}\) and for any \(\delta>0\), \(\exists\,\hat{\mathbf{e}}\in\mathbb{R}^{n}\) with \(||\hat{\mathbf{e}}||\leq\delta\) such that_ \[C_{\psi_{\theta}}^{\delta}\geq\frac{||K^{\dagger}\hat{\mathbf{e}}||-2\eta}{|| \hat{\mathbf{e}}||} \tag{8}\] _where \(K^{\dagger}\) is the Moore Penrose pseudo-inverse of \(K\)._ For the proof see [20]. We emphasize that, even if neural networks used as reconstructors do not use any information on the operator \(K\), the stability of \(\psi_{\theta}\) is related to the pseudo-inverse of that operator. ## 3 Experimental setting Here we describe our particular setting using neural networks as reconstructors for a deblurring application. ### Newtork architectures We have considered three different neural network architectures for deblurring: the widely used UNet [22], the recently proposed NAFNet [23] and a green AI inspired 3L-SSNet [24]. The UNet and NAFNet architectures are complex, multi-scale networks, with similar overall structure but very different behavior. As shown in Figure 2, both UNet and NAFNet are multi-resolution networks, where the input is sequentially processed by a sequence of blocks \(B_{1},\ldots,B_{n_{i}}\), \(i=1,\ldots,L\) and downsampled after that. After \(L-1\) downsampling, the image is then sequentially upsampled again to the original shape through a sequence of blocks, symmetrically to what happened in the downsampling phase. At each resolution level \(i=1,\ldots,L\), the corresponding image in the downsampling phase is concatenated to the first block in the upsampling phase, to keep the information through the network. Moreover, a skip connection has also been added between the input and the output layer of the model to simplify the training as described in [24]. The left-hand side of Figure 2 shows that the difference between UNet and NAFNet is in the structure of each block. In particular, the blocks in UNet are simple Residual Convolutional Layers, defined as a concatenation of Convolutions, ReLU, BatchNormalizations and a skip connection. On the other side, each block in NAFNet is way more complex, containing a long sequence of gates, convolutional and normalization layers. The key propriety of NAFNet, as described in [23], is that no activation function is used in the blocks, since they have been substituted by non-linear gates, thus obtaining improved expressivity and more training efficiency. The 3-layer Single-Scale Network (3L-SSNet) is a very simple model defined, as suggested by its name, by just three convolutional layers, each of them composed by a linear filter, followed by a ReLU activation function and a BatchNormalization layer. 
Since by construction the network works on single-scale images (the input is never downsampled to a low-resolution level, as is common in image processing), the kernel size is crucial to increase the receptive field of the model. For this reason, we considered a 3L-SSNet with width \([128,128,128]\) and kernel size \([9\times 9,5\times 5,3\times 3]\), respectively.

### Data set

As a data set for our experiments we choose the widely-used GoPro [25], which is composed of a large number of photographic images acquired from a GoPro camera. All the images have been cropped into \(256\times 256\) patches (with no overlapping), converted into grayscale and normalized into [0,1]. We synthesize the blurring of each image according to (P) by considering a Gaussian corrupting effect, implemented with the \(11\times 11\) Gaussian kernel \(\mathcal{G}\) defined as

\[\mathcal{G}_{i,j}=\begin{cases}e^{-\frac{1}{2}\frac{i^{2}+j^{2}}{\sigma_{G}^{2}}}&i,j\in\{-5,\ldots,5\}^{2}\\ 0&\text{otherwise}\end{cases} \tag{9}\]

with variance \(\sigma_{G}=1.3\). The kernel is visualized in Figure 3, together with one of the GoPro images and its blurred counterpart.

Figure 2: A diagram representing the UNet and NAFNet architectures.

### Neural networks training and testing

To train a neural network for deblurring, the set of available images has been split into train and test subsets, with \(N_{\mathbb{D}}=2503\) and \(N_{\mathbb{T}}=1111\) images respectively. Then we consider a set \(\mathbb{D}=\{(\mathbf{y}_{i}^{\delta},\mathbf{x}_{i}^{gt});\ \mathbf{x}_{i}^{gt}\in\mathcal{S}\}_{i=1}^{N_{\mathbb{D}}}\), for a given \(\delta\geq 0\). Since we set a Mean Squared Error (MSE) loss function, a NN-based reconstructor is uniquely defined as the solution of:

\[\min_{\psi_{\theta}\in\Xi_{\theta}^{\mathcal{A}}}\sum_{i=1}^{N_{\mathbb{D}}}||\psi_{\theta}(\mathbf{y}_{i}^{\delta})-\mathbf{x}_{i}^{gt}||_{2}^{2}. \tag{10}\]

Each network has been trained by performing 50 epochs of the Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.9\) and a learning rate of \(10^{-3}\). We focus on the next two experiments.

**Experiment A**. In this experiment we train the neural networks on images only corrupted by blur (\(\delta=0\)). With the aim of checking the networks' accuracy, defined as in Section 2, we test on noiseless images (_in-domain tests_). Then, to verify Theorem 2.1 we consider test images with added Gaussian noise, with \(\sigma=0.025\) (_out-of-domain tests_).

**Experiment B**. A common practice for enforcing network stability is _noise injection_ [26], consisting in training a network by adding noise components to the input. In particular, we have added a noise vector \(\mathbf{e}\sim\mathcal{N}(0,\sigma^{2}I)\), with \(\sigma=0.025\). To test the stability of the proposed frameworks with respect to noise, we test with higher noise than used in training.

### Robustness of the end-to-end NN approach

Preliminary results obtained from experiment A are shown in Figure 4. The first row displays the reconstructions obtained from in-domain tests, where we can appreciate the accuracy of all three considered architectures. In the second row we can see the results obtained from out-of-domain tests, where the noise on the input data strongly corrupts the solution of the ill-posed inverse problem computed by UNet and NAFNet. Confirming what is stated by Theorem 2.1, the best result is obtained with the very light 3L-SSNet, which is the only one able to handle the noise.
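As a concrete reference for the light architecture discussed in Section 3.1, the following PyTorch sketch builds a 3L-SSNet-like model with the stated widths \([128,128,128]\) and kernel sizes 9/5/3, plus the input-output skip connection, and performs one MSE training step as in Eq. 10. Padding choices, the handling of the last block (no activation on the output convolution), and the random stand-in for GoPro patches are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ThreeLayerSSNet(nn.Module):
    """Minimal 3L-SSNet-like model: single-scale convolutional blocks
    (Conv -> ReLU -> BatchNorm) with a residual skip from input to output."""
    def __init__(self, channels=1, width=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(width),
            nn.Conv2d(width, width, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(width),
            nn.Conv2d(width, channels, kernel_size=3, padding=1),
        )

    def forward(self, y):
        # the network learns a correction that is added to the blurred input
        return y + self.body(y)

# toy usage: one MSE training step on a random batch (stand-in for GoPro patches)
model = ThreeLayerSSNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.9))
y_blur, x_gt = torch.rand(4, 1, 256, 256), torch.rand(4, 1, 256, 256)
loss = nn.functional.mse_loss(model(y_blur), x_gt)
opt.zero_grad()
loss.backward()
opt.step()
```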
## 4 Improving noise-robustness in deep learning based reconstructors As observed in Section 3, merely using a neural network to solve an inverse problem is an unstable routine. To enforce the robustness of \(\psi_{\theta}\) reconstructors, we propose to modify the Deep Learning based approach by introducing a suitable operator, defined in the following as a _stabilizer_, into the reconstruction process. **Definition 4.1**.: A continuous functions \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is called a \(\delta\)-stabilizer for a neural network reconstructor \(\psi_{\theta}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) if \(\forall\,e\in\mathbb{R}^{n}\) with \(||e||\leq\delta\), \(\exists\,L_{\phi}^{\delta}\in[0,1)\) and \(\exists\,e^{\prime}\in\mathbb{R}^{n}\) with \(||e^{\prime}||=L_{\phi}^{\delta}||e||\) such that: \[\phi(K\mathbf{x}+\mathbf{e})=\phi(K\mathbf{x})+\mathbf{e}^{\prime}. \tag{11}\] In this case, the reconstructor \(\bar{\psi}_{\theta}=\psi_{\theta}\circ\phi\) is said to be \(\delta\)-stabilized. The smallest constant \(L_{\phi}^{\delta}\) for which the definition holds is the stability constant \(C_{\phi}^{\delta}\) of \(\phi\). Intuitively, applying a pre-processing \(\phi\) with \(L_{\phi}^{\delta}<1\) reduces the perturbation of the input data, by converting a noise of amplitude bounded by \(\delta\) to a corruption with norm bounded by \(\delta L_{\phi}^{\delta}\). This intuition has been mathematically explained Figure 3: _From left to right: ground truth clean image, blurring kernel, blurred corrupted image._ in [20], Proposition 4.2, where a relationship between the stability constant of the stabilized reconstructor \(\bar{\psi}_{\theta}\) and the stability constant of \(\psi_{\theta}\) has been proved. In particular, if \(\bar{\psi}_{\theta}=\psi_{\theta}\circ\phi\) is a \(\delta\)-stabilized reconstructor, \(L^{\delta}_{\psi_{\theta}}\), \(L^{\delta}_{\phi}\) are the local Lipschitz constants of \(\psi_{\theta}\) and \(\phi\), respectively, then: \[C^{\delta}_{\bar{\psi}_{\theta}}\leq L^{\delta}_{\psi_{\theta}}L^{\delta}_{ \phi}. \tag{12}\] As a consequence, if \(L^{\delta}_{\phi}<1\), then the stability constant of \(\bar{\psi}_{\theta}\) is smaller than the Lipschitz constant of \(\psi_{\theta}\), which implies that \(\bar{\psi}_{\theta}\) is more stable to input perturbations. We underline that the \(\delta\)-stabilizers \(\phi\) are effective if they preserve the characteristics and the details of the input image \(\mathbf{y}^{\delta}\). In this paper we focus on the two following proposals of \(\delta\)-stabilizers \(\phi\). ### Stabilized Neural Network (StNN) based on the imaging model If the blurring operator \(K\) is known, it can be exploited to derive a \(\delta\)-stabilizer function \(\phi\). We argue that information on \(K\) will contribute to improve the reconstruction accuracy. Specifically, we consider an iterative algorithm, converging to the solution of (1), represented by the scheme: \[\begin{cases}\mathbf{x}^{(0)}\in\mathbb{R}^{n}\\ \mathbf{x}^{(k+1)}=\mathcal{T}_{k}(\mathbf{x}^{(k)};\mathbf{y}^{\delta})\end{cases} \tag{13}\] where \(\mathcal{T}_{k}\) is the action of the \(k\)-th iteration of the algorithm. Given a positive integer \(M\in\mathbb{N}\) and a fixed starting iterate \(\mathbf{x}^{(0)}\), let us define the \(\delta\)-stabilizer: \[\phi_{M}(\mathbf{y}^{\delta})=\bigcirc_{k=0}^{M-1}\mathcal{T}_{k}(\mathbf{x}^ {(k)};\mathbf{y}^{\delta}). 
\tag{14}\] By definition, \(\phi_{M}\) maps a corrupted image \(\mathbf{y}^{\delta}\) to the solution computed by the iterative solver in \(M\) iterations. Setting as objective function in (1) the Tikhonov-regularized least-squared function: \[\arg\min_{\mathbf{x}\in\mathbb{R}^{n}}\frac{1}{2}||K\mathbf{x}-\mathbf{y}^{ \delta}||_{2}^{2}+\lambda||\mathbf{x}||_{2}^{2}, \tag{15}\] Figure 4: Results from experiment A with the three considered neural networks. Upper row: reconstruction from no noisy data. Lower row: reconstruction from noisy data (\(\delta=0.025\)). the authors in [20] showed that it is possible to choose \(M\) such that \(L^{\delta}_{\phi_{M}}<1\). Hence, given \(\delta\) and \(\mathcal{F}^{\mathcal{A}}_{\theta}\), it is always possible to use \(\phi_{M}\) as a pre-processing step, stabilizing \(\psi_{\theta}\). We refer to \(\bar{\psi}_{\theta}=\gamma_{\theta}\circ\phi_{M}\) as _Stabilized Neural Network_ (StNN). In the numerical experiments presented in Section 5, we use as iterative method for the solution of (15) the Conjugate Gradient Least Squares (CGLS) iterative method [11]. ### Filtered Neural Network (FiNN) The intuition that a pre-processing step should reduce the noise present in the input data naturally leads to our second proposal, implemented by a Gaussian denoising filter. The Gaussian filter is a low-pass filter that reduces the impact of noise on the high frequencies [27]. Thus, the resulting pre-processed image is a low-frequency version of \(\mathbf{y}^{\delta}\) and the neural network \(\psi_{\theta}\in\mathcal{F}^{\mathcal{A}}_{\theta}\) has to recover the high frequencies corresponding to the image details. Let \(\phi_{\mathcal{G}}\) represents the operator that applies the Gaussian filter to the input. We will refer to the reconstructor \(\bar{\psi}_{\theta}=\psi_{\theta}\circ\phi_{\mathcal{G}}\) as _Filtered Neural Network_ (FiNN). Note that, even if FiNN is employed to reduce the impact of the noise and consequently to stabilize the network solution, its \(L^{\delta}_{\phi}\) constant is not smaller than one. In fact, for any \(\mathbf{e}\in\mathbb{R}^{n}\) with \(||\mathbf{e}||\leq\delta\), it holds: \[\phi_{\mathcal{G}}(K\mathbf{x}+\mathbf{e})=\phi_{\mathcal{G}}(K\mathbf{x})+ \phi_{\mathcal{G}}(\mathbf{e}) \tag{16}\] as a consequence of the linearity of \(\phi_{\mathcal{G}}\). ## 5 Results In this Section we present the results obtained in our deblurring experiments described in Section 3. To evaluate and compare the deblurred images, we use visual inspection on a selected test image and exploit the Structural Similarity index (SSIM) [28] on the test set. ### Results of experiments A We show and comment on the results obtained on experiment A described in Section 3.3. We remark that aim of these tests is to measure the accuracy of the three considered neural reconstructors and of the stabilizers proposed in Section 4 and verify their sensitivity to noise in the input data. In a word, how these reconstructors handle the ill-posedness of the imaging inverse problem. To this purpose, we visually compare the reconstructions of a single test image by the UNet and \(3\)L-SSNet in Figure 5. The first row (which replicates some of the images of Figure 4) shows the results of the deep learning based reconstructors, where the out-of-domain images are clearly damaged by the noise. The FiNN and, particularly, the StNN stabilizer drastically reduce noise, producing accurate results even for out-of-domain tests. 
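Before turning to the quantitative analysis, a minimal sketch of how the two pre-processors of Section 4 compose with a trained network \(\psi_{\theta}\) may help fix ideas: FiNN applies a Gaussian low-pass filter to the datum, while StNN runs \(M\) iterations of a CGLS-type scheme on the Tikhonov problem (15). The CGLS loop below is a generic textbook implementation with a matrix-free blur operator; it is not the authors' code, and the values of \(M\), \(\lambda\) and the filter width are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(x, sigma=1.3):
    """Forward operator K: Gaussian blur (symmetric PSF, so K^T is taken equal to K)."""
    return gaussian_filter(x, sigma=sigma, truncate=4.0)

def cgls_tikhonov(y, lam=1e-2, n_iter=20, sigma=1.3):
    """phi_M: n_iter CGLS iterations on min_x ||Kx - y||^2 + lam*||x||^2,
    i.e. CGLS applied to the augmented operator A = [K; sqrt(lam) I]."""
    x = np.zeros_like(y)
    r1, r2 = y.copy(), np.zeros_like(y)            # residual of the augmented system at x = 0
    s = blur(r1, sigma) + np.sqrt(lam) * r2        # s = A^T r
    p, gamma = s.copy(), np.sum(s * s)
    for _ in range(n_iter):
        q1, q2 = blur(p, sigma), np.sqrt(lam) * p  # q = A p
        alpha = gamma / (np.sum(q1 * q1) + np.sum(q2 * q2) + 1e-12)
        x += alpha * p
        r1 -= alpha * q1
        r2 -= alpha * q2
        s = blur(r1, sigma) + np.sqrt(lam) * r2
        gamma_new = np.sum(s * s)
        p = s + (gamma_new / (gamma + 1e-12)) * p
        gamma = gamma_new
    return x

def finn(y, network, sigma_filter=1.0):
    """FiNN: Gaussian low-pass pre-processing followed by the trained network."""
    return network(gaussian_filter(y, sigma=sigma_filter))

def stnn(y, network, lam=1e-2, n_iter=20):
    """StNN: model-based Tikhonov/CGLS pre-processing followed by the trained network."""
    return network(cgls_tikhonov(y, lam=lam, n_iter=n_iter))
```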
In order to analyze the accuracy and stability of our proposals, we compute the empirical accuracy \(\hat{\eta}^{-1}\) and the empirical stability constant \(\hat{C}^{\delta}_{\psi}\), respectively defined as: \[\hat{\eta}^{-1}=\Big{(}\sup_{\mathbf{x}\in\mathcal{S}_{\mathcal{T}}}||\psi(K \mathbf{x})-\mathbf{x}||_{2}\Big{)}^{-1} \tag{17}\] and \[\hat{C}^{\delta}_{\psi}=\sup_{\mathbf{x}\in\mathcal{S}_{\mathcal{T}}}\frac{|| \psi(K\mathbf{x}+\mathbf{e})-\mathbf{x}||_{2}-\hat{\eta}}{||\mathbf{e}||_{2}} \tag{18}\] where \(\mathcal{S}_{\mathcal{T}}\subseteq\mathcal{X}\) is the test set and \(\mathbf{e}\) is a noise realization from \(\mathcal{N}(0,\sigma^{2}I)\) with \(||e||_{2}\leq\delta\) (different for any datum \(x\in\mathcal{S}_{\mathcal{T}}\)). The computed values are reported in Table 1. Focusing on the estimated accuracies, the results confirm that NN is the most accurate method, followed by NAFNet and 3L-SSNet, as expected. As a consequence of Theorem 2.1, the values of the stability constant \(\hat{C}^{\delta}_{\psi}\) are in reverse order: the most accurate is the less stable (notice the very high value of \(\hat{C}^{\delta}_{\psi}\) for NN!). By applying the stabilizers, the accuracy is slightly lower but the stability is highly improved (in most of the cases the constant is less than one), confirming the efficacy of the proposed solutions to handle noise and, at the same time, maintain good image quality. In particular, StNN is a stable reconstructor independently from the architecture. To analyse the stability of the test set with respect to noise, we have plotted in Figure 6, for each test image, \(\mathcal{E}_{\psi}(\mathbf{y}^{gt},\mathbf{y}^{\delta})-\hat{\eta}\) vs. \(\|e\|\), where the reconstruction error is defined in (2). With green and red dots we have plotted the experiments with stability constant less and greater than one, respectively and with the blue dashed line the Figure 5: Results from experiment A with UNet and 3L-SSNet. Figure 6: Results from experiment A. Plot of \(\mathcal{E}_{\psi}(\kappa^{gt},y^{\xi})-\eta\) vs. \(\|e\|\) for all the test images. The blue dashed line represents the bisect. bisect. We notice that the values reported in Table 1 for the empirical stability constant computed as supremum (see Equation (18)) are not outliers but they are representative of the results of the whole test set. ### Results of experiment B In this experiment we used noise injection in the neural networks training, as described in Section 3.3. This quite common strategy reduces the networks accuracy but improve their stability with respect to noise. However, we show that the reconstructions are not totally satisfactory when we test on out-of-domain images, i.e. when input images are affected by noise of different intensities with respect to training. Figure 7 displays the reconstructions obtained by testing with both in-domain (on the left) and out-of-domain (on the right) images. Even if the NN reconstructions (column 4) are not so injured by noise as in experiment A (see Figure 4), however noise artifacts are clearly visible, especially in UNet and NAFNet. Both the stabilizers proposed act efficiently and remove most of the noise. We observe that the restorations obtained with FiNN are smoother but also more blurred with respect to the ones computed by StNN. An overview of the tests is displayed by the boxplots of the SSIM values sketched in Figure 8. The light blue, orange and green boxes represent the results obtained with NN, FiNN and StNN methods, respectively. 
They confirm that the neural networks' performance worsens with noisy data (see the different positions of light blue boxes from the left to the right column), whereas the proposed frameworks including FiNN and StNN are far more stable. Figure 8: Boxplots for the SSIM values in experiment B. The light blue, orange and green boxplots represent the results computed by NN, FiNN and StNN, respectively. In Figure 9 we plot, for one image in the test set, the absolute error between the reconstruction and the true image vs. the noise standard deviation \(\sigma\). The upper row shows the results from experiment A (we remark that in this experiment we trained the networks on noise-free data). The NN error (blue line) is out of range for very small values of \(\sigma\) for both UNet and NAFNet, whereas the 3L-SSNet is far more stable. In all the cases, the orange and green lines show that FiNN and StNN improve the reconstruction error. In particular, StNN performs best in all these tests. Concerning experiment B (in the lower row of the figure), it is very interesting to notice that when the noise is smaller than the training one (corresponding to \(\sigma=0.025\)) the NN methods are the best performing for all the considered architectures. When \(\sigma\simeq 0.05\) the behaviour changes and the stabilized methods are more accurate. ## 6 Conclusions Starting from the consideration that the most popular neural networks used for image deblurring, such as the family of convolutional UNets, are very accurate but unstable with respect to noise in the test images, we have proposed two different approaches to get stability without losing too much accuracy. The first one is a very light neural architecture, called 3L-SSNet, and the second one is to stabilize the deep learning framework by introducing a pre-processing step. Numerical results on the GoPro dataset have demonstrated the efficiency and robustness of the proposed approaches, under several settings encompassing in-domain and out-of-domain testing scenarios. The 3L-SSNet outperforms UNet and NAFNet in every test where the noise on test images exceeds the noise on the training set, combining the desired characteristics of execution speed (in a green AI perspective) and high stability. The FiNN proposal increases the stability of the NN-based restoration (the values of its SSIM do not change remarkably in all the experiments), but the restored images appear too smooth and a few small details are lost. The StNN proposal, exploiting a model-based formulation of the underlying imaging process, achieves the highest SSIM values in the most challenging out-of-domain cases, confirming its great theory-grounded potential. It represents, indeed, a good compromise between stability and accuracy. We finally remark that the proposed approach can be easily extended to other imaging applications modeled as an inverse problem, such as super-resolution, denoising, or tomography, where the neural networks learning the map from the input to the ground truth image cannot efficiently handle noise in the input data. This work represents one step further in shedding light on the black-box essence of NN-based image processing. Acknowledgments This work was partially supported by the US National Science Foundation, under grants DMS 2038118 and DMS 2208294. Conflict of Interests The authors declare no conflict of interest. Figure 9: Plots of the absolute error vs. the standard deviation \(\sigma\) of the noise for one image in the test set. Upper row: experiment A.
Lower row: experiment B.
2309.06581
Zero-Shot Visual Classification with Guided Cropping
Pretrained vision-language models, such as CLIP, show promising zero-shot performance across a wide variety of datasets. For closed-set classification tasks, however, there is an inherent limitation: CLIP image encoders are typically designed to extract generic image-level features that summarize superfluous or confounding information for the target tasks. This results in degradation of classification performance, especially when objects of interest cover small areas of input images. In this work, we propose CLIP with Guided Cropping (GC-CLIP), where we use an off-the-shelf zero-shot object detection model in a preprocessing step to increase focus of zero-shot classifier to the object of interest and minimize influence of extraneous image regions. We empirically show that our approach improves zero-shot classification results across architectures and datasets, favorably for small objects.
Piyapat Saranrittichai, Mauricio Munoz, Volker Fischer, Chaithanya Kumar Mummadi
2023-09-12T20:09:12Z
http://arxiv.org/abs/2309.06581v1
# Zero-Shot Visual Classification with Guided Cropping ###### Abstract Pretrained vision-language models, such as CLIP, show promising zero-shot performance across a wide variety of datasets. For closed-set classification tasks, however, there is an inherent limitation: CLIP image encoders are typically designed to extract generic image-level features that summarize superfluous or confounding information for the target tasks. This results in degradation of classification performance, especially when objects of interest cover small areas of input images. In this work, we propose CLIP with Guided Cropping (GC-CLIP), where we use an off-the-shelf zero-shot object detection model in a preprocessing step to increase focus of zero-shot classifier to the object of interest and minimize influence of extraneous image regions. We empirically show that our approach improves zero-shot classification results across architectures and datasets, favorably for small objects. ## 1 Introduction Conventional supervised learning for closed-set classification tasks involves training Deep Neural Networks (DNNs) on labelled datasets [5]. The resulting models are inherently limited by the class definitions of a specific task. In contrast, recent research focuses on open-vocabulary zero-shot classification models [6; 16]. Pretrained with large-scale image-text datasets, these models have more generic class concepts as the definitions can be introduced by textual prompts of natural language. CLIP is one of the most popular models for open-vocabulary classification [16]. Its architecture comprises image and text encoders which encode input images and texts into a shared latent space. These encoders are trained with contrastive losses such that dot product similarity scores between image and text encodings indicate how likely input images and texts correspond to one another. One limitation of CLIP lies in the fact that its encoders are designed to be generic in the sense that its image encodings encompass entire information of a given image regardless of the target task. While this behavior is desirable for some problems, it simultaneously poses a limitation for closed-set object classification tasks where only certain labels and image contents are of interest. In these cases, encoding entire image contents can lead to suboptimal performance, particularly for small objects. For e.g., in Figure 0(a), the large water region in the image dominates similarity scores between image and text encodings of water-related classes, leading to an incorrect zero-shot prediction. Our central question is: How can we reduce non-discriminative and extraneous information from the image encodings? We observe that reducing areas of context regions by cropping input images around objects of interest can be beneficial. Figure 0(b) illustrates that the cropped image with reduced water regions decrease similarity scores of incorrect water-related classes and result in the dominant similarity score of the correct class (i.e., canoe). One straightforward approach to reduce influence from non-discriminative information automatically is to directly adopt open-vocabulary object detection models for the zero-shot classification task. These models produce object bounding boxes and _locally_ categorize them based on any given text prompts [12; 7]. However, we speculate that these approaches are not directly optimal for image classification tasks which they are not designed for. 
In this regard, we conduct an experiment to extend one of the most recent open-vocabulary object detection models, OWL-ViT [12], for a classification setting where each sample belongs to only one class. We observe that, while OWL-ViT shows reasonable performance on bounding box estimation, its zero-shot classification performance is poor compared to standard zero-shot CLIP baselines (more details in section 5.6). In this work, we aim to improve zero-shot object classification performance of CLIP by guiding their focus to the object of interest and reducing the influence of unrelated visual information. Instead of using OWL-ViT for classification directly, we propose to employ it as a bounding box extraction module such that cropped input images are processed by CLIP as shown in Figure 0(b). We refer this approach as CLIP with Guided Cropping (GC-CLIP). We show that classification performance depends on chosen cropping scales which is especially significant on images with small objects. Our contributions are as follows: We provide empirical evidence that generic CLIP encoders can lead to suboptimal performance in zero-shot closed-set classification task, particularly on the images with small objects. We propose a method to improve CLIP zero-shot classification using bounding boxes estimated from OWL-ViT. We conduct experiments to show that our approach outperforms a direct OWL-ViT based classifier as well as zero-shot CLIP baselines across different scenarios. Finally, we conduct ablation studies to understand the conditions under which our approach works well. ## 2 Related Works Zero-Shot and Open-Vocabulary ClassificationZero-shot classification enables trained models to recognize inputs of unseen categories based on externally provided concepts. Earlier works define these concepts in terms of attribute combinations [14; 15; 1; 9; 13; 10]. However, in open-world applications, it is generally not possible to represent all categories based on limited combinations of trained attributes. Hence, recent research focuses on open-vocabulary classification, in which categories are represented by text prompts. In this regard, images and text prompts can be projected by image/text encoders into a joint embedding space so that their similarities can be computed. CLIP [16] and ALIGN [6] encourage similarity between image-text pairs based on contrastive losses. [11] improves zero-shot performance by using multiple text prompts per category based on queries from large language models. Florence [20] considers more modalities in addition to images and texts. Figure 1: Logits from CLIP (ViT-B/32) before and after cropping around objects of interest While these models perform well in open-world scenarios, their performance can be limited under the closed-set assumption. As their encoders are designed for open-world applications, they may encode information which are harmful for closed-set classification task. In this work, we aim to alleviate this. Open-Vocabulary Object DetectionThe concept of open-vocabulary has also been investigated in object detection tasks in which object bounding boxes are produced given input text prompts [4; 22; 8; 7; 21]. ViLD [4] trains object detection based on knowledge distillation from pretrained open-vocabulary classification models. In OWL-ViT [12], simple modifications of standard vision transformers are fine-tuned with large-scale image-text datasets for object detection. GLIPv2 [21] extends models to handle various localization tasks. 
Object detection models have the innate ability to not only localize, but classify localized objects based on local information. The question may therefore be raised, whether they are in general sufficient to solve the zero-shot classification task alone. In section 5.6, we conducted experiments based on OWL-ViT, a recent off-the-shelf model, and demonstrate its poor performance on classification tasks. In this work, we use open-vocabulary object detection models only for bounding box extraction. ## 3 Background Problem FormulationGiven a test dataset \(\{(x_{i},y_{i})\}_{i=1}^{N_{s}}\), where \(x_{i}\in\mathcal{X}=\mathcal{R}^{w\times w}\) and \(y_{i}\in\mathcal{Y}=\{1,2,\dots,N_{c}\}\) is an image and its corresponding label, our zero-shot classification task is to construct a prediction function \(F:\mathcal{X}\rightarrow\mathcal{Y}\) based on pretrained open-vocabulary models to maximize the likelihood \(P(\hat{y}|x)=P(F(x)|x)\). Prediction function based on CLIP will be described in this section while our approach will be presented in section 4. Conventional ClipCLIP [16] is a multi-modal model designed for open-vocabulary classification. It consists of an image encoder \(G\) and a text encoder \(H\). To perform closed-set classification, a text prompt \(p_{j}^{cls}\) needs to be defined for each class \(j\in\mathcal{Y}\). Then, an embedding of each prompt can be obtained by: \(e_{j}^{text}=H(p_{j}^{cls})\). During inference, an input image \(x_{i}\) will be projected into its image embedding \(e_{i}^{image}=G(x_{i})\) so that its classification logit \(l_{i}^{CLIP}\) can be computed as: \[l_{i}^{CLIP}=(E^{text})^{T}e_{i}^{image}=\begin{bmatrix}e_{1}^{text}&e_{2}^{ text}&\dots&e_{N_{c}}^{text}\end{bmatrix}^{T}e_{i}^{image}. \tag{1}\] Each entry \(l_{ij}^{CLIP}\) of the logit indicates the similarity score between the (embedded) input image and the \(j\)-th prompt. The final class prediction can then be obtained as \(\hat{y}_{i}=\arg\max_{j\in\mathcal{Y}}l_{ij}^{CLIP}\). Figure 2: Guided Cropping pipeline to obtain a guided cropped image with margin ratio \(\alpha\) Above, we assume that one prompt is available per class. However, it has been shown recently that using multiple prompts per class can improve performance [11]. In this case, each \(e_{j}^{text}\) from equation 1 can be replaced with the average embedding computed from all available text prompts of class \(j\). ## 4 Methodology ### CLIP with Guided Cropping Conventionally, image embedding \(e_{i}^{image}\) is computed directly from the full image \(x_{i}\) without any task-specific constraints. For closed-set classification, especially in cases of a small object image, this implies that potentially unrelated information is also encoded into \(e_{i}^{image}\), which may lead to suboptimal performance. Minimizing the amount of unrelated concept information in image embeddings is desirable in this case. Our approach, CLIP with Guided Cropping (GC-CLIP), achieves this by using bounding box estimates provided by OWL-ViT. OWL-ViT is an open-vocabulary object detection model [12]. It takes an image and text prompts of target classes as inputs and produces outputs as a set of bounding boxes together with their scores and classes. In this work, we only use OWL-ViT as a bounding box extraction module as its class predictions are not accurate enough (see section 5.6). The overall GC-CLIP pipeline is shown in Figure 2. We only consider top-k classes (we use k=5) to refine the preliminary CLIP predictions. 
This is reasonable since it has high probabilities that these top-k classes contain the correct class (see appendix A.3). Candidate box extractionWe detect bounding boxes of each top-k class with OWL-ViT independently. We found that this is more robust to misdetection resulting in better performance compared to detecting bounding boxes of all classes at once (see appendix A.5). Formally, a set of bounding box candidates \(B_{i}\) for an image \(x_{i}\) can be obtained based on OWL-ViT as follows: \[B_{i}=\bigcup_{j\in J_{i}^{k}}b_{ij}=\bigcup_{j\in J_{i}^{k}}OWL(x_{i},p_{j}^{ det}) \tag{2}\] where \(J_{k}\subseteq\mathcal{Y}\) is a set of top-k classes with respect to \(l_{i}^{CLIP}\), \(p_{j}^{det}\) is a text prompt for detection of class \(j\) and \(OWL\) is OWL-ViT detection function returning a max-score bounding box with respect to an input image and a prompt. All bounding boxes are adjusted to squares to avoid skewing images when they are, afterward, transformed into a CLIP-compatible image size. (e.g., \(224\times 224\)). Box selectionNext, we need to pick one bounding box from \(B_{i}\). We start from a primary box \(b_{i}^{0}\in B_{i}\) which has the highest estimated score from OWL-ViT. In our experiments, we found that using the primary box directly is generally suboptimal as its crop may be too tight to target objects. It is therefore beneficial to slightly enlarge the box (see section 5.3). Given \(b_{i}^{0}\) has the width of \(w_{b_{i}^{0}}\) and Figure 3: Each green square corresponds to a final bounding box \(b^{\alpha}\) (or \(b^{\alpha_{k}}\)) which will be used to crop the original image \(x_{i}\) to produce logit for the final prediction. \(\Delta w\) is the width difference between the original image and the primary box \(b_{i}^{0}\). \(\alpha\) and \(\alpha_{k}\) are margin ratios. \(x_{i}\) has the width of \(w\), the box is enlarged to an \(\alpha\)-margin box \(b_{i}^{\alpha}\) uniformly in all direction to the size of \(w_{b_{i}^{0}}+\alpha(w-w_{b_{i}^{0}})\), where \(\alpha\in[0,1]\) is called margin ratio (see Figure 2(a)). For the enlargement, if a box edge exceeds image boundary in one direction, the enlargement will be compensated in the opposite direction. In cases with box augmentation, multiple \(\alpha\) can be employed (see section 4.2). Logit computationThis selected box \(b_{i}^{\alpha}\) is used to crop \(x_{i}\) and resize it to a CLIP-compatible image size \(w\times w\) resulting in a preprocessed image \(x_{i}^{\alpha}\). The new top-k logit \(l_{i}^{GC\_CLIP(k)}\) is computed based on \(x_{i}^{\alpha}\) as follows: \[l_{i}^{GC\_CLIP(k)}=\left[e_{j^{1}}^{text}\quad e_{j^{2}}^{text}\quad\dots \quad e_{j^{k}}^{text}\right]^{T}G(x_{i}^{\alpha}), \tag{3}\] where \(j^{1},j^{2},\dots,j^{k}\in J_{i}^{k}\). The final class prediction is the class within \(J_{i}^{k}\) corresponding to the maximum entry of \(l_{i}^{GC\_CLIP(k)}\). ### Test-Time Box Augmentation While prediction can directly perform on a raw/preprocessed input image, this can lead to noisy prediction from CLIP. Small non-semantic changes in images can cause changes in predictions making CLIP outputs difficult to analyze. We show this behavior by processing 10 random crops (90%-100% of the original widths) of the same image with CLIP. One would expect that, standard deviations of its predicted true-label probabilities should be low and its final class predictions should not change across different crops. 
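A minimal sketch of the guided-cropping steps described above, assuming the public OpenAI CLIP interface (`clip.load`, `clip.tokenize`, `encode_text`, `encode_image`), a square input image of width \(w\), and a square primary box already extracted by OWL-ViT and given as `(x0, y0, side)` in pixel coordinates; the prompt template, file name and parameter values are illustrative only, not our released code.

```python
import torch
import clip                      # OpenAI CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def class_text_embeddings(class_names):
    # One text embedding per class (Eq. 1); with description prompts each row would
    # instead be the average embedding over all prompts of that class.
    prompts = [f"a photo of a {c}" for c in class_names]     # placeholder template
    with torch.no_grad():
        return model.encode_text(clip.tokenize(prompts).to(device))

def image_logits(pil_image, text_emb):
    # Dot-product similarities between one image and the given class embeddings.
    with torch.no_grad():
        emb = model.encode_image(preprocess(pil_image).unsqueeze(0).to(device))
    return (emb @ text_emb.T).squeeze(0)

def alpha_margin_box(primary_box, image_width, alpha):
    # Enlarge the square primary box (x0, y0, side) by the margin ratio alpha,
    # compensating in the opposite direction when an edge would leave the image.
    x0, y0, side = primary_box
    new_side = side + alpha * (image_width - side)
    shift = (new_side - side) / 2.0
    x0 = min(max(x0 - shift, 0.0), image_width - new_side)
    y0 = min(max(y0 - shift, 0.0), image_width - new_side)
    return x0, y0, new_side

def gc_clip_predict(pil_image, primary_box, class_names, alpha=0.2, k=5):
    # Preliminary logits on the full image select the top-k candidates; the
    # alpha-margin crop is then scored only against those candidates (Eq. 3).
    text_emb = class_text_embeddings(class_names)
    topk_idx = image_logits(pil_image, text_emb).topk(k).indices
    x0, y0, side = alpha_margin_box(primary_box, pil_image.width, alpha)
    crop = pil_image.crop(tuple(round(v) for v in (x0, y0, x0 + side, y0 + side)))
    guided = image_logits(crop, text_emb[topk_idx])
    return class_names[topk_idx[guided.argmax()].item()]

# Usage: gc_clip_predict(Image.open("example.jpg"), (x0, y0, side), class_names)
```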
However, we notice from Figure 3(a) that the standard deviations can be relatively high (around 0.2), while the average true-label probability is 0.55. In addition, only around 60% of test samples have no changes in final class predictions across crops (see Figure 3(b)). These results indicate significant sensitivity of CLIP to non-semantic changes. Therefore, instead of computing logits from raw/preprocessed images only, we can perform a simple test-time augmentation to help mitigate this issue. In this work, we investigate two augmentation strategies. Random Crop Box Augmentation (RAug)With RAug, we augment a single input (raw or preprocessed) image into \(N_{aug}\) total images by cropping the input image with \(N_{aug}\) boxes of random widths within \([\beta w,w]\), while \(\beta\in(0,1)\). The augmented images are used to compute multiple predicted logits as per equation 3, which can then be averaged to produce the final logit score. Multi-Margin Box Augmentation (MAug)In some cases, it is beneficial to consider context information as long as it does not dominate object information. With MAug, we need to firstly obtain the primary box \(b_{i}^{0}\). Then, instead of using a margin ratio \(\alpha\) as in section 4.1, we perform an object-centric augmentation by using \(N_{aug}\) bounding boxes obtained from multiple margin ratios, distributed uniformly from 0 to 1 (see Figure 2(b)). In other words, the set of all final boxes used in this augmentation is \(\left\{b_{i}^{\alpha_{k}}|\alpha_{k}=\frac{k}{N_{aug}-1},k\in\{0,1,\dots,N_{ aug}-1\}\right\}\). Similarly, logits computed from images cropped by these final boxes are then averaged to get the final logit score. Figure 4: Results when forwarding multiple random crops of the same images (from ImageNetS919 dataset) to CLIP (ViT-B/32) demonstrating CLIP sensitivity to non-semantic changes. It must be noted that, with MAug, regions close to the target object are covered by more boxes compared to regions far from the object. Therefore, the augmentation allows some context information to be considered but with lower importance compared to object information. ## 5 Experiments In this section, we conduct experiments to demonstrate that utilizing CLIP with Guided Cropping can improve zero-shot classification performance. In addition, several ablation studies are also conducted to understand its failure modes and the conditions under which our approach works well. ### Setup DatasetsWe would like to study classification scenarios in which object sizes in images are controllable. In this work, two datasets are employed. (1) ImageNetS [2]: this dataset is an extension of ImageNet [17] and originally designed for unsupervised semantic segmentation. We use the validation split of the dataset in which pixel-wise segmentation annotations are available. It contains 12,419 samples of 919 classes in total. We construct a subset with target objects of small sizes, called ImageNetS919-SM, containing 2,334 samples whose object sizes are no more than 20% of the full image size. (2) CUB [18]: this dataset is a benchmark for fine-grained classification consisting of 200 bird types. We evaluate our models on its test split of 5,794 samples. Similarly, based on bounding box annotations of the dataset, we construct its subset whose target object sizes are less than 20% of the full image size resulting in CUB-SM containing 1,390 samples. More details of our dataset splitting and example images of these datasets can be found in the appendix A.1. 
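On top of the helpers in the previous sketch (`image_logits`, `alpha_margin_box`), the two test-time augmentations can be sketched as follows; \(N_{aug}\), \(\beta\) and the random generator are placeholders, and the averaging is done on the logits as described above.

```python
import numpy as np
import torch

def _crop(pil_image, box):
    x0, y0, side = box
    return pil_image.crop(tuple(round(v) for v in (x0, y0, x0 + side, y0 + side)))

def maug_logits(pil_image, primary_box, text_emb, n_aug=11):
    # Multi-Margin augmentation: average logits over the margin ratios k/(n_aug - 1).
    alphas = [k / (n_aug - 1) for k in range(n_aug)]
    logits = [image_logits(_crop(pil_image, alpha_margin_box(primary_box, pil_image.width, a)),
                           text_emb) for a in alphas]
    return torch.stack(logits).mean(dim=0)

def raug_logits(pil_image, text_emb, n_aug=11, beta=0.9, rng=np.random.default_rng(0)):
    # Random-crop augmentation: average logits over n_aug square crops of width in [beta*w, w].
    w = pil_image.width
    logits = []
    for _ in range(n_aug):
        side = rng.uniform(beta * w, w)
        box = (rng.uniform(0, w - side), rng.uniform(0, w - side), side)
        logits.append(image_logits(_crop(pil_image, box), text_emb))
    return torch.stack(logits).mean(dim=0)
```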
BaselinesCLIP [16] is used as the main architecture of all baselines. We conduct experiments with two classification prompt types similar to [11] (1) Category: Each class has a single prompt of its category name (2) Descriptions: Each class has multiple prompts queried automatically from GPT-3 according to [11]. In the latter case, the final logit value for a given class is computed by averaging the logit values obtained from all prompts for that class. ImplementationWe apply our Guided Cropping and box augmentation on top of each baseline. For Guided Cropping variations, the margin ratio \(\alpha\) of 0.2 is used unless otherwise specified. We perform box augmentation with \(N_{aug}=11\). For RAug, \(\beta=0.9\) is used. The high value of \(\beta\) makes RAug augmented boxes less likely to crop object contents away. CLIP backbones studied in this work are ViT-B/32, ViT-B/16 and ViT-L/14. For OWL-ViT, its backbone is ViT-B/32 for all experiments. Category names are used as prompts to perform detection with OWL-ViT. The code of our implementation will be publicly available upon paper acceptance. ### Zero-Shot Classification Performance In this section, we evaluate zero-shot classification performance of different model configurations on various datasets including both unconstrained object sizes (full dataset) and small-object variants (with -SM suffix). The results are shown in Table 1. Considering datasets with unconstrained object sizes, ImageNetS919 and CUB, our Guided Cropping performance is generally comparable to (or slightly better than) non-Guided Cropping baselines. This is expected since many samples in these cases could have objects whose sizes already dominate the scene. On the other hand, both box augmentations consistently improve classification performance in all cases indicating that raw predictions from CLIP models are indeed noisy. Smoothing their predictions with box augmentations helps our methods to be more robust to this noise. Considering results on datasets with small object sizes, ImageNetS919-SM and CUB-SM, our Guided Cropping demonstrates consistent improvement over baselines across different model configurations. This trend can also be noticed regardless of the prompt types. This indicates that our approach, as expected, is more beneficial for images with small target objects. This is reasonable since small object images leave more space in the images for context information which should be reduced before performing image encoding. Another interesting observation is that employing GC-CLIP with Multi-Margin augmentation (MAug) generally achieved better performance. This infers that hinting the context cues with lower importance can complement with the focus on object of interest to make definite and correct decisions. It must be noted that, in this experiment, we integrate our Guided Cropping on top of zero-shot models. A question may arise: how does our Guided Cropping affect pretrained supervised models? We conduct an experiment and found that pretrained supervised models benefit less from cropping with small bounding boxes (see appendix A.2). This is expected since supervised models can exploit unrelated contexts as shortcuts [3] to gain performance on in-distribution samples. ### Importance of Margin Ratio Margin ratio (\(\alpha\)) mentioned in section 4.1 controls how much primary boxes from OWL-ViT are enlarged before they are used to crop input images. 
Varying margin ratios can help us understand how CLIP reacts to Guided Cropping from \(\alpha=0.0\) (crop with a raw OWL-ViT box) to \(\alpha=1.0\) (no Guided Cropping at all). In this section, we study our models with different margin ratios on ImageNetS919-SM. The results are shown in Figure 5. We mainly discuss results from GC-CLIP and GC-CLIP+RAug here as these configurations utilize a single margin ratio. According to the results, when Guided Cropping is applied (\(\alpha<1\)), classification accuracies are generally better than the accuracies without Guided Cropping (\(\alpha=1\)). This confirms the benefit of GC-CLIP. It must be noted that, there are some consistent drops of the performance when the values of \(\alpha\) are too small (e.g., when \(\alpha\in[0.0,0.1]\)). This infers that too tight bounding boxes can degrade classification performance. One explanation of this observation is that, in order to recognize \begin{table} \begin{tabular}{c|c|c|c|c|c|c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Prompt} & Guided & \multirow{2}{*}{Box Aug.} & \multicolumn{4}{c}{Dataset} \\ & & Cropping & & ImageNetS919 & CUB & ImageNetS919-SM & CUB-SM \\ \hline \multirow{8}{*}{Category} & \multirow{8}{*}{Category} & - & - & \(63.62\) & \(51.83\) & \(52.83\) & \(49.57\) \\ & & - & Random Crop & \(64.42\) & \(52.45\) & \(53.47\) & \(50.79\) \\ & & ✓ & - & \(63.61\) & \(52.40\) & \(55.18\) & \(51.44\) \\ & & ✓ & Random Crop & \(64.46\) & **53.12** & **56.00** & \(52.81\) \\ & & ✓ & Multi-Margin & **64.66** & **53.12** & **56.00** & **53.09** \\ \cline{2-8} & \multirow{8}{*}{Descriptions} & - & - & \(68.54\) & \(53.05\) & \(55.70\) & \(50.14\) \\ & & - & Random Crop & \(69.15\) & \(53.62\) & \(57.33\) & \(50.79\) \\ & & ✓ & - & \(68.59\) & \(54.07\) & \(58.61\) & **53.38** \\ & & ✓ & Random Crop & \(69.07\) & \(54.47\) & \(59.08\) & \(53.09\) \\ & & ✓ & Multi-Margin & **69.62** & **54.56** & **60.07** & \(52.95\) \\ \hline \hline \multirow{8}{*}{Category} & \multirow{8}{*}{Category} & - & - & \(68.60\) & \(56.51\) & \(57.75\) & \(55.54\) \\ & & - & Random Crop & \(68.81\) & \(56.89\) & \(58.05\) & \(57.41\) \\ \cline{1-1} & & ✓ & - & \(68.06\) & \(56.09\) & \(58.65\) & \(55.97\) \\ \cline{1-1} & & ✓ & Random Crop & \(68.19\) & \(56.78\) & \(58.35\) & \(57.12\) \\ \cline{1-1} & & ✓ & Multi-Margin & **68.94** & **57.30** & **59.81** & **57.63** \\ \cline{1-1} \cline{2-8} & \multirow{8}{*}{Descriptions} & - & - & \(72.67\) & \(57.78\) & \(61.61\) & \(56.55\) \\ \cline{1-1} & & - & Random Crop & \(73.17\) & \(58.87\) & \(62.13\) & \(57.99\) \\ \cline{1-1} & & ✓ & - & \(72.61\) & \(58.70\) & \(63.28\) & **59.35** \\ \cline{1-1} & & ✓ & Random Crop & \(72.86\) & \(58.99\) & \(63.32\) & \(58.78\) \\ \cline{1-1} & & ✓ & Multi-Margin & **73.49** & **59.34** & **64.05** & \(59.06\) \\ \hline \hline \multirow{8}{*}{Category} & \multirow{8}{*}{Category} & - & - & \(75.15\) & \(63.08\) & \(64.78\) & \(62.16\) \\ & & - & Random Crop & \(75.30\) & \(63.32\) & \(64.70\) & \(62.59\) \\ \cline{1-1} & & ✓ & - & \(75.00\) & \(62.96\) & \(66.02\) & \(62.16\) \\ \cline{1-1} & & ✓ & Random Crop & \(75.04\) & \(63.24\) & \(66.54\) & \(62.73\) \\ \cline{1-1} & & ✓ & Multi-Margin & **75.71** & **63.63** & **66.92** & **63.17** \\ \cline{1-1} \cline{2-8} & \multirow{8}{*}{Descriptions} & - & - & \(78.48\) & \(64.65\) & \(67.78\) & \(63.17\) \\ \cline{1-1} & & - & Random Crop & \(78.65\) & \(64.60\) & \(67.65\) & **63.96** \\ \cline{1-1} & & ✓ & - & \(78.32\) & \(64.67\) & \(69.07\) & \(63.31\) \\ \cline{1-1} & 
& ✓ & Random Crop & \(78.28\) & **64.88** & \(69.41\) & **63.96** \\ \cline{1-1} & & ✓ & Multi-Margin & **79.06** & \(64.76\) & **69.88** & \(62.95\) \\ \hline \hline \end{tabular} \end{table} Table 1: Zero-shot classification accuracies from different datasets and model configurations. an object, models need to know the object shape clearly. Overly tight bounding boxes can leave the models with unclear information on the object boundaries, leading to performance drops. ### Understanding Object Size Conditions In section 5.2, we only conduct experiments on small object images with only one object size condition (i.e., maximum relative object sizes \(<20\%\) of the total image areas). In this section, we would like to explore how our approach performs under different object size conditions. Therefore, we vary the maximum relative object sizes of the ImageNetS919 dataset from 5% to 100% for our evaluation. Details of the samples in individual conditions are given in appendix A.1. The results are shown in Figure 6 (see appendix A.4 for the results of other backbones). Considering the cases without any object size constraints (i.e., x-axis = 1.0), applying Guided Cropping does not significantly impact the performance (the same observation as in Table 1). However, as the maximum object sizes decrease, accuracy gaps between conventional CLIP and GC-CLIP become larger. The gaps are also more significant when MAug is applied for box augmentation instead of RAug. This experiment highlights the small-object conditions in which our approach works well. ### Qualitative Evaluation In this section, we qualitatively evaluate GC-CLIP by visualizing some samples whose predictions are changed with respect to CLIP. Improved samples are shown in Figure 6(a). Reasonable improvements can be noticed among these samples. For example, in the ship image, land and sea are contexts covering large regions. Considering these contexts excessively makes standard CLIP incorrectly predict the target object as an amphibious vehicle. However, GC-CLIP recognizes the image by focusing on the primary box around the vehicle. This reduces distracting visual information when encoding the image, leading to a correct prediction. Figure 5: Zero-shot accuracies on ImageNetS919-SM evaluated with different margin ratios. Figure 6: Accuracies (ViT-B/32) on subsets of ImageNetS919 with various object size conditions. On the other hand, image samples whose predictions are incorrectly changed by GC-CLIP are shown in Figure 6(b). These samples fail potentially due to the distance between target objects and important contexts. While MAug augmentation allows some contexts to be considered during prediction, a large distance between the target object and the context reduces the importance of the context for the model (fewer boxes cover the contexts). For example, considering the space shuttle image, the target object is too tiny, so the ground is an important context distinguishing a missile from a space shuttle (which is usually launched vertically). However, the large distance between the ground and the object box reduces effects from the ground in GC-CLIP. Strategies to weight contexts dynamically can be investigated in future works. ### Can we use OWL-ViT directly as a classifier? Theoretically, OWL-ViT also has the capability to minimize information outside target object boundaries and can be used for the zero-shot classification task. In this section, we would like to show that, when OWL-ViT is adopted as a classifier directly, it still has limited performance on our classification task.
In order to use OWL-ViT as a classifier, we need to transform its outputs from sets of bounding box locations, scores and class labels into class-wise logits. In this regard, given an input image, prediction logit of a class can be obtained as follows: Firstly, we iterate whether there are any bounding boxes exist for that class. If any boxes exist, the class logit value will be assigned as the maximum score of its corresponding bounding boxes. Otherwise, its logit will be zero. This simple extension encourages classes of bounding boxes with high scores to have high logits. We evaluate this classifier on ImageNetS919 dataset and obtain 20.34% and 40.78% as top-1 and top-10 accuracies respectively. Here, the performance is still much lower compared to our baseline performance in Table 1 indicating poor classification accuracy of this classifier. The poor performance of this classifier can be investigated by visualizing incorrectly predicted samples in Figure 8. While OWL-ViT gives reasonable bounding boxes, its class predictions are inaccurate. The actual classes are likely to be confused with other classes with fine-grained differences. For example, the model misclassifies an image of a tiger shark as a snoek fish whose shape is indeed closely resemble to shark. This significant degradation from fine-grained details confirms that OWL-ViT is not optimal to be used as a classifier on standard classification benchmarks. Figure 8: Examples of failure modes of the OWL-ViT based classifier. Figure 7: Predictions of CLIP (with RAug) and GC-CLIP (with MAug) with ViT-B/32 on ImageNetS919 samples. Red boxes represent primary boxes \(b^{0}\) estimated from our GC-CLIP. Conclusion In this work, we identify a limitation of CLIP in zero-shot closed-set object classification task. As its image encoder is designed for encoding generic image representation, it is prone to encode non-discriminative context information into image features leading to performance degradation, particularly for small objects. We propose GC-CLIP, an approach to reduce effects from potentially non-discriminative information based on object bounding boxes estimated from a zero-shot object detection model. We empirically demonstrate that our approach outperforms baselines especially in cases of image samples with small objects. On the basis of ablation studies, we analyze conditions in which our approach performs well. We hope this work shed a new light on the behavior of large-scale open-vocabulary models for classification and guide future research to improve these models.
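For reference, the class-wise logit construction used in Section 5.6 can be sketched as follows; the detection output is assumed to be available as (class index, score) pairs, which is an assumption about the interface rather than OWL-ViT's actual output format.

```python
import numpy as np

def owlvit_class_logits(detections, num_classes):
    # Class-wise logits from detections, following the rule described in Section 5.6:
    # the logit of a class is the maximum score among its boxes, or zero if it has none.
    logits = np.zeros(num_classes)
    for class_idx, score in detections:
        logits[class_idx] = max(logits[class_idx], score)
    return logits

# The prediction is then int(np.argmax(owlvit_class_logits(detections, num_classes))).
```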
2309.07847
Thermodynamic entropy production in the dynamical Casimir effect
This paper address the question of thermodynamic entropy production in the context of the dynamical Casimir effect. Specifically, we study a scalar quantum field confined within a one-dimensional ideal cavity subject to time-varying boundary conditions dictated by an externally prescribed trajectory of one of the cavity mirrors. The central question is how the thermodynamic entropy of the field evolves over time. Utilizing an effective Hamiltonian approach, we compute the entropy production and reveal that it exhibits scaling behavior concerning the number of particles created in the short-time limit. Furthermore, this approach elucidates the direct connection between this entropy and the emergence of quantum coherence within the mode basis of the field. In addition, by considering a distinct approach based on the time evolution of Gaussian states we examine the long-time limit of entropy production within a single mode of the field. This approach results in establishing a connection between the thermodynamic entropy production in a single field mode and the entanglement between that particular mode and all other modes. Consequently, by employing two distinct approaches, we comprehensively address both the short-term and long-term dynamics of the system. Our results thus link the irreversible dynamics of the field, as measured by entropy production and induced by the dynamical Casimir effect, to two fundamental aspects of quantum mechanics: coherence and entanglement.
Gustavo de Oliveira, Lucas C. Céleri
2023-09-14T16:41:28Z
http://arxiv.org/abs/2309.07847v2
# Thermodynamic entropy production in the dynamical Casimir effect ###### Abstract This paper address the question of thermodynamic entropy production in the context of the dynamical Casimir effect. Specifically, we study a scalar quantum field confined within a one-dimensional ideal cavity subject to time-varying boundary conditions dictated by an externally prescribed trajectory of one of the cavity mirrors. The central question is how the thermodynamic entropy of the field evolves over time. Utilizing an effective Hamiltonian approach, we compute the entropy production and reveal that it exhibits scaling behavior concerning the number of particles created in the short-time limit. Furthermore, this approach elucidates the direct connection between this entropy and the emergence of quantum coherence within the mode basis of the field. In addition, by considering a distinct approach based on the time evolution of Gaussian states we examine the long-time limit of entropy production within a single mode of the field. This approach results in establishing a connection between the thermodynamic entropy production in a single field mode and the entanglement between that particular mode and all other modes. Consequently, by employing two distinct approaches, we comprehensively address both the short-term and long-term dynamics of the system. Our results thus link the irreversible dynamics of the field, as measured by entropy production and induced by the dynamical Casimir effect, to two fundamental aspects of quantum mechanics: coherence and entanglement. ## I Introduction While the fundamental laws of physics exhibit time-reverse symmetry, we encounter irreversible phenomena in our surroundings when dealing with complex systems. In classical physics, irreversibility is primarily characterized by the second law of thermodynamics, which asserts that the thermodynamic entropy of a closed system cannot decrease over time [1]. When fluctuations come into play, stronger principles known as fluctuation theorems emerge [2; 3], and irreversible processes are those in which entropy tends to increase on average. When considering quantum systems, various approaches have emerged in the pursuit of comprehending thermodynamics from a microscopic perspective. Some of these developments include information theory [4], statistical physics [5], and axiomatic theories [6]. For a comprehensive exploration of entropy production in both classical and quantum systems, we recommend Ref. [7] and its associated references. We are focusing on the thermodynamics of closed quantum systems, where the time evolution follows a unitary process. This implies that the von Neumann entropy remains constant over time. As a result, this measure is inadequate for quantum thermodynamic entropy because it contradicts the well-established experimental observation that, in general, spontaneous processes tend to increase entropy. Furthermore, it fails to respect the fundamental thermodynamic relation. To tackle this fundamental issue, we turn to the diagonal entropy, as defined in Ref. [8] as \[S_{d}(\hat{\rho})=-\sum_{n}p_{n}\ln p_{n}, \tag{1}\] with \(p_{n}\) representing the diagonal elements of the system's density matrix \(\hat{\rho}\) in the energy eigenbasis. This quantity has been proposed as the thermodynamic entropy for closed quantum systems since it exhibits several interesting properties, including extensivity, positivity, and the property of vanishing as the temperature approaches zero [8]. 
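As a concrete illustration, \(S_{d}\) can be evaluated numerically from a density matrix and a Hamiltonian (both given as plain Hermitian matrices, here placeholders) with a short sketch along these lines:

```python
import numpy as np

def diagonal_entropy(rho, hamiltonian):
    # S_d = -sum_n p_n ln p_n, with p_n the populations of rho in the energy eigenbasis.
    _, eigvecs = np.linalg.eigh(hamiltonian)
    p = np.real(np.einsum("in,ij,jn->n", eigvecs.conj(), rho, eigvecs))
    p = p[p > 1e-12]                      # drop numerically vanishing populations (0 ln 0 -> 0)
    return float(-np.sum(p * np.log(p)))
```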
Furthermore, it possesses a crucial characteristic: it increases for every process, whether unitary or not, that induces transitions in the energy eigenbasis. Only when the system's Hamiltonian changes slowly enough will the diagonal entropy remain unchanged. This aligns with our intuition based on the classical definition of thermodynamic entropy, which does not increase for quasistatic processes [9; 10]. It is worth noting that a closely related quantity known as the observational entropy is defined as a coarse-grained version of the diagonal entropy [11]. Therefore, the findings presented here also apply within the context of observational entropy. Information theory have also given rise to a novel approach to thermodynamics, as elucidated by a recent work [12]. In this approach, physical quantities are defined as those invariant under the action of a gauge group, and the emerging concept of entropy precisely aligns with the diagonal entropy discussed above. This alignment resonates with the fact that the gauge-invariant definition of heat is intricately tied to transitions within the energy eigenbasis [12]. This observation also establishes a connection between our findings and another cornerstone of physics, the gauge principle. We can think about this entropy as a measure of the randomness within the energy eigenbasis. Imagine that we only have access to energy measurements of a quan tum system, a common limitation when dealing with systems of a sufficiently large dimension where quantum state tomography becomes impractical [13]. In a general process, whether unitary or not, transitions between energy levels are induced, leading to the development of quantum coherence and potentially entanglement among different parts of the system. The diagonal entropy quantifies the information loss resulting from our limited set of measurements. We refer the reader to Ref. [8] for more details regarding such quantity, including its relation to thermodynamics. The aim of the present work is to apply this concept to a quantum field within the context of the dynamical Casimir effect, and explore the relationship between entropy production and quantum properties such as coherence and entanglement. Specifically, we consider a quantum scalar field confined within a one-dimensional cavity with mirrors in relative motion, a scenario commonly examined in the context of the dynamical Casimir effect [14; 15; 16; 17]. Under specific conditions, this effect predicts the creation of particles from the vacuum due to the dynamic changes of the boundary conditions imposed by the mirror motion. Over the past five decades, numerous developments have appeared in this field, encompassing the impact of imperfect mirrors [18; 19; 20; 21], distinct geometries [22; 23; 24; 25; 26], gravitational field effects [27; 28], nonlinear interactions [29; 30; 31], and entanglement dynamics [32; 33]. For a comprehensive overview, interested readers are directed to a recent review [34]. However, despite these extensive developments, the irreversible dynamics of the quantum field in this scenario have not been explored, to the best of our knowledge. This work aims to begin addressing this gap by focusing on irreversibility, as measured by the increase in quantum thermodynamic entropy -- the diagonal entropy-- associated with the field's dynamics. In other words, how much entropy is generated in the field due to the nonstationary boundary conditions imposed by the motion of the cavity mirrors? 
We provide answers to this question through two distinct approaches. Firstly, we employ an effective Hamiltonian theory based on Ref. [35] to calculate the entropy of the total field within the short-time regime. We demonstrate that the entropy increase is intrinsically tied to the generation of quantum coherence within the system's energy eigenbasis, aligning with the gauge theory developed in Ref. [12]. In the second part of the paper, we adopt a different approach to investigate the long-term field dynamics, allowing us to compute the diagonal entropy for a single mode. Interestingly, this entropy is governed by the entanglement between the selected mode and all other modes. These two distinct approaches enable us to connect the irreversibility of field dynamics with two fundamental quantum features: coherence and entanglement. ## II The dynamical Casimir effect Let us consider a one-dimensional ideal cavity whose mirrors are located at positions \(x=0\) and \(x=L(t)\), with \(L(t)\) being an externally prescribed trajectory. Confined in this cavity, we have a massless real scalar field \(\phi(x,t)\) satisfying the wave equation \[\left(\partial_{t}^{2}-\partial_{x}^{2}\right)\phi(x,t)=0. \tag{2}\] Given the ideal nature of the mirrors (perfect reflectors), the boundary conditions imposed on the field take the Dirichlet form \[\phi(0,t)=\phi(L(t),t)=0. \tag{3}\] The set of complex value solutions \(\{\phi_{i}\}\) to Eq. (2) under the restrictions imposed by the non-stationary boundary conditions (3) spans a linear vector space \(\mathcal{S}\) with an invariant bilinear form \[(\phi_{1},\phi_{2})=i\int_{0}^{L(t)}\mathrm{d}x\ [\phi_{1}^{*}\partial_{t} \phi_{2}-\phi_{2}\partial_{t}\phi_{1}^{*}] \tag{4}\] satisfying all the properties of an inner product except for positive definiteness. This last obstacle hinders the use of Eq. (4) for the field's decomposition into orthonormal solutions on \(\mathcal{S}\). Nevertheless, we can always choose to that matter, any subspace \(\mathcal{S}^{+}\subset\mathcal{S}\), as long as it satisfies the following properties: (_i_) the product (4) is positive definite on \(\mathcal{S}^{+}\); (_ii_) \(\mathcal{S}=\mathcal{S}^{+}\oplus\overline{\mathcal{S}^{+}}\) (with the bar designating the complex conjugated of the space) and (_iii_) for all \(f^{+}\in\mathcal{S}^{+}\) and \(f^{-}\in\overline{\mathcal{S}^{+}}\), we have \((f^{+},f^{-})=0\)[36]. From the last considerations, if we assume the cavity at the interval \(t\leq 0\) to be in a static configuration (with constant mirror position \(L(t\leq 0)=L_{0}\)), the classical field can be written as \[\phi(x,t\leq 0)=\sum_{k}\left[b_{k}f_{k}^{\mathrm{in}}(x,t)+b_{k}^{*}f_{k}^{ \mathrm{in*}}(x,t)\right], \tag{5}\] where the set \(\{f_{k}^{\mathrm{in}}(x,t)\}\) is an orthonormal basis on \(\mathcal{S}^{+}\) while \(\{b_{k}\}\) is a set of complex coefficients. Since the mirrors are at rest, one can use the time translation symmetry of the wave equation as a natural criterion to select \(\mathcal{S}^{+}\) as the space of solutions that oscillates with purely positive frequencies \[f_{k}^{\mathrm{in}}(x,t)=\frac{1}{\sqrt{\pi k}}\sin\left(\omega_{k}^{\mathrm{ in}}x\right)e^{-i\omega_{k}^{\mathrm{in}}t},\quad\mathrm{for}\ t\leq 0, \tag{6}\] where \(\omega_{k}^{\mathrm{in}}=k\pi/L_{0}\) with \(k=\{1,2,\ldots\}\). The quantum description of the field is then obtained by means of the usual field quantization prescription. 
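For completeness, one can check explicitly that the in-modes (6) are orthonormal with respect to the bilinear form (4): using \(\partial_{t}f_{j}^{\mathrm{in}}=-i\omega_{j}^{\mathrm{in}}f_{j}^{\mathrm{in}}\) and \(\int_{0}^{L_{0}}\sin(k\pi x/L_{0})\sin(j\pi x/L_{0})\,\mathrm{d}x=(L_{0}/2)\,\delta_{kj}\), \[(f_{k}^{\mathrm{in}},f_{j}^{\mathrm{in}})=\frac{\omega_{k}^{\mathrm{in}}+\omega_{j}^{\mathrm{in}}}{\pi\sqrt{kj}}\,e^{i(\omega_{k}^{\mathrm{in}}-\omega_{j}^{\mathrm{in}})t}\int_{0}^{L_{0}}\mathrm{d}x\,\sin\left(\omega_{k}^{\mathrm{in}}x\right)\sin\left(\omega_{j}^{\mathrm{in}}x\right)=\frac{2\omega_{k}^{\mathrm{in}}}{\pi k}\,\frac{L_{0}}{2}\,\delta_{kj}=\delta_{kj},\] so that \(\{f_{k}^{\mathrm{in}}\}\) indeed forms an orthonormal basis of the subspace \(\mathcal{S}^{+}\) selected while the mirrors are at rest.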
The coefficients \(b_{k}\) and \(b_{k}^{*}\) are promoted to annihilation and creation operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) satisfying the standard commutation relations \[\left[\hat{b}_{k},\hat{b}_{j}^{\dagger}\right]=\delta_{kj}\ \mathrm{and}\ \left[\hat{b}_{k},\hat{b}_{j}\right]=\left[\hat{b}_{k}^{\dagger},\hat{b}_{j}^{ \dagger}\right]=0. \tag{7}\] The initial vacuum state \(|0;\mathrm{in}\rangle\) is defined as the state annihilated by all \(\hat{b}_{k}\), whereas a general particle state can be constructed by the application of the creation operator \(\hat{b}_{k}^{\dagger}\) on this vacuum state \[|\mathbf{n};\mathrm{in}\rangle=|n_{k_{1}},n_{k_{2}},\dots;\mathrm{in}\rangle=\prod_ {i}\frac{1}{\sqrt{n_{k_{i}}!}}\left(\hat{b}_{k_{i}}^{\dagger}\right)^{n_{k_{i}} }|0;\mathrm{in}\rangle\,,\] with \(n_{k_{i}}\) representing the number of particles in the \(k_{i}\)-th mode. For \(t>0\), when the mirror starts to move, the quantum field can still be decomposed in terms of the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) in the form \[\hat{\phi}(x,t>0)=\sum_{k}\left[\hat{b}_{k}f_{k}(x,t)+\hat{b}_{k}^{\dagger}f_{ k}^{*}(x,t)\right], \tag{8}\] as long as the new set of mode functions \(\{f_{k}(x,t)\}\) satisfies the conditions: (i) the wave equation (2), (ii) the time-dependent boundary condition (3), and (iii) the initial condition \(f_{k}(x,0)=f_{k}^{\mathrm{in}}(x,0)\). In this regard, we proceed by expanding the mode function in a series with respect to an _instantaneous basis_\(\{\varphi_{k}(x,t)\}\) as \[f_{k}(x,t)=\frac{1}{\sqrt{2\omega_{k}^{\mathrm{in}}}}\sum_{j}Q_{j}^{(k)}(t) \varphi_{j}(x,t), \tag{9}\] where \[\varphi_{j}(x,t):=\sqrt{\frac{2}{L(t)}}\sin\left[\omega_{j}(t)x\right]\ \ \mathrm{with}\ \omega_{j}(t)=\frac{j\pi}{L(t)}. \tag{10}\] Moreover the Fourier coefficients \(Q_{j}^{(k)}(t)\) introduced in Eq. (9) must satisfy the differential equation1 Footnote 1: The set of differential equations (11) can be obtained by substituting Eq. (9) into the wave equation (2) and integrating the resulting expression from \(0\) to \(L(t)\). \[\tilde{Q}_{j}^{(k)} +\omega_{j}^{2}(t)Q_{j}^{(k)} \tag{11}\] \[=\sum_{l}\left[2\lambda(t)g_{kl}\dot{Q}_{l}^{(k)}+\dot{\lambda}( t)g_{kl}Q_{l}^{(k)}-\lambda^{2}(t)h_{kl}Q_{l}^{(k)}\right],\] together with the initial conditions \[Q_{j}^{(k)}(0)=\delta_{jk},\qquad\dot{Q}_{j}^{(k)}(0)=-i\omega_{k}^{\mathrm{ in}}\delta_{kj}, \tag{12}\] where the upper dot indicates total time derivative, \(\lambda(t)=\dot{L}(t)/L(t)\) and the antisymmetric coefficients \(g_{kj}\) and \(h_{kj}\) are defined for \(j\neq k\) as \[g_{jk}=(-1)^{j-k}\frac{2kj}{j^{2}-k^{2}},\quad\mathrm{and}\quad h_{jk}=\sum_{ l}g_{jl}g_{kl}. \tag{13}\] The first noticeable aspect of the provided description is that the mode expansion (9) fundamentally depends on the choice of the basis functions \(\varphi_{k}(x,t)\). This occurs because when the time dependence of the boundary condition (3) is taken into account, the natural criterion of selecting solutions with purely positive frequency is no longer available and there is no unambiguous choice for \(\mathcal{S}^{+}\). Consequently, during the cavity motion, the expansion of the field in terms of creation and annihilation operators becomes arbitrary, implying the nonexistence of a preferred choice for a vacuum state. 
Thus, unless we can specify a measurement process, the usual notion of particle loses its well-defined meaning, and only when the cavity comes to rest we can associate a definite particle interpretation to the quanta described by these operators [35]. If the cavity returns to a static configuration after some interval of time \(T\) (with a final constant mirror position \(L(t\geq T)=L_{T}\)), one can reintroduce a preferred choice for the mode functions as \[f_{k}^{\mathrm{out}}(x,t)=\frac{1}{\sqrt{\pi k}}\sin\left(\omega_{k}^{ \mathrm{out}}x\right)e^{-i\omega_{k}^{\mathrm{out}}t}\quad\mathrm{for}\ t\geq T \tag{14}\] with purely positive frequencies \(\omega_{k}^{\mathrm{out}}=k\pi/L_{T}\). Consequently, the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) cease to have a physical significance and the field is now decomposed as \[\hat{\phi}(x,t\geq T)=\sum_{k}\left[\hat{a}_{k}f_{k}^{\mathrm{out}}(x,t)+\hat{a }_{k}^{\dagger}f_{k}^{\mathrm{out}*}(x,t)\right], \tag{15}\] with the set operators \(\hat{a}_{k}\) and \(\hat{a}_{k}^{\dagger}\) satisfying analogous commutation relations as in Eq. (7) and defining a new vacuum state \(|0;\mathrm{out}\rangle\) as the state annihilated by all \(\hat{a}_{k}\). As pointed out in Ref. [32], although both sets \(\{f_{k}^{\mathrm{in}},f_{k}^{\mathrm{in}*}\}\) and \(\{f_{k}^{\mathrm{out}},f_{k}^{\mathrm{out}*}\}\) form a basis for the space of solutions \(\mathcal{S}\), they represent different decompositions into the subspaces \(\mathcal{S}^{+}\) and \(\overline{\mathcal{S}^{+}}\). The two sets of mode functions (6) and (14) should then be related by a linear transformation \[f_{k}^{\mathrm{in}}=\sum_{j}\left[\alpha_{jk}f_{j}^{\mathrm{out}}+\beta_{jk}f_ {j}^{\mathrm{out}*}\right], \tag{16}\] where \(\alpha_{jk}\) and \(\beta_{jk}\) are complex numbers called Bogoliubov coefficients. Inserting Eq. (16) into the field decomposition (5), and comparing with Eq. (15), we obtain the set of Bogoliubov transformations \[\hat{a}_{j}=\sum_{k}\left[\alpha_{kj}\hat{b}_{k}+\beta_{kj}^{*}\hat{b}_{k}^{ \dagger}\right]. \tag{17}\] Observe that the vacuum defined by \(\hat{a}_{k}\) and \(\hat{b}_{k}\) are not equivalent in general. As a consequence, when computing the number of particles defined by the final operators \(\hat{a}_{k}\) and \(\hat{a}_{k}^{\dagger}\) with respect to the initial vacuum \(|0;\mathrm{in}\rangle\), results \[N=\langle 0;\mathrm{in}|\sum_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j}|0;\mathrm{in} \rangle=\sum_{kj}|\beta_{jk}|^{2}. \tag{18}\] In general, \(\beta_{jk}\) is non-zero when time-dependent boundary conditions are imposed on the field. This last equation characterizes the DCE as the quantum field phenomenon of particle creation from the vacuum due to the time-dependent nature of the imposed boundary conditions. Our aim here is to study the entropy generated in the field due to this effect. To start, the next section introduces an effective Hamiltonian approach [37; 38; 21; 35] to describe the field dynamics. This will be important for us to compute the evolved state and, consequently, the entropy generated by the particle creation process. A limitation of this technique is that it only allow us to study the short-time dynamics of the system as it relies on perturbation theory. Nonetheless, it grants us access to the entire state, enabling the exploration of the relationship between irreversibility and the emergence of quantum coherence. 
## III Effective Hamiltonian approach In this section, we introduce an effective Hamiltonian for the DCE following the developments presented in Ref. [35]. To accomplish this, we begin by expanding the field operator \(\hat{\phi}\) and its conjugate momentum \(\hat{\pi}=\partial_{t}\hat{\phi}\) in terms of the instantaneous basis defined in Eq. (10) \[\hat{\phi}(x,t) =\sum_{k}\hat{q}_{k}(t)\varphi_{k}(x,t), \tag{19a}\] \[\hat{\pi}(x,t) =\sum_{k}\hat{p}_{k}(t)\varphi_{k}(x,t), \tag{19b}\] where the operators \(\hat{q}_{k}(t)\) and \(\hat{p}_{k}(t)\) are defined as \[\hat{q}_{k}(t) :=\int_{0}^{L(t)}\mathrm{d}x\ \hat{\phi}(x,t)\varphi_{k}(x,t), \tag{20a}\] \[\hat{p}_{k}(t) :=\int_{0}^{L(t)}\mathrm{d}x\ \hat{\pi}(x,t)\varphi_{k}(x,t). \tag{20b}\] Comparing Eqs. (19) with the field operator (5) and its time derivative, the expressions for \(\hat{q}_{k}(t)\) and \(\hat{p}_{k}(t)\) can be computed \[\hat{q}_{k}(t\leq 0) =\frac{1}{\sqrt{2\omega_{k}^{\mathrm{in}}}}\left[\hat{b}_{k}e^{- i\omega_{k}^{\mathrm{in}}t}+\hat{b}_{k}^{\dagger}e^{i\omega_{k}^{\mathrm{in}}t} \right], \tag{21a}\] \[\hat{p}_{k}(t\leq 0) =i\sqrt{\frac{\omega_{k}^{\mathrm{in}}}{2}}\left[\hat{b}_{k}^{ \dagger}e^{i\omega_{k}^{\mathrm{in}}t}-\hat{b}_{k}e^{-i\omega_{k}^{\mathrm{in }}t}\right]. \tag{21b}\] For \(t>0\) the cavity is in motion and an effective description of the field dynamics can be obtained by introducing the decomposition [35] \[\hat{q}_{k}(t) =\frac{1}{\sqrt{2\omega_{k}(t)}}\left[\hat{a}_{k}(t)e^{-i\Omega_{ k}(t)}+\hat{a}_{k}^{\dagger}(t)e^{i\Omega_{k}(t)}\right], \tag{22a}\] \[\hat{p}_{k}(t) =i\sqrt{\frac{\omega_{k}(t)}{2}}\left[\hat{a}_{k}^{\dagger}(t)e^{ i\Omega_{k}(t)}-\hat{a}_{k}(t)e^{-i\Omega_{k}(t)}\right], \tag{22b}\] where \(\Omega_{k}(t)=\int_{0}^{t}dt^{\prime}\omega_{k}(t^{\prime})\) and the _instantaneous_ annihilation and creation operators \(\hat{a}_{k}(t)\) and \(\hat{a}_{k}^{\dagger}(t)\) satisfy the standard equal times commutation relations \[\left[\hat{a}_{k}(t),\hat{a}_{k}^{\dagger}(t)\right]=\delta_{kj};\left[\hat{a }_{k}(t),\hat{a}_{k}(t)\right]=\left[\hat{a}_{k}^{\dagger}(t),\hat{a}_{k}^{ \dagger}(t)\right]=0.\] Here, the name instantaneous refers to the physical interpretation that if we freeze the system at some instant \(t_{0}\), the operators \(\hat{a}_{k}(t_{0})\) and \(\hat{a}_{k}^{\dagger}(t_{0})\) must describe the particle notion for the field as if the cavity mirror had stopped at position \(L(t_{0})\). One can recognize the initial and final operators to be \(\hat{b}_{k}:=\hat{a}_{k}(t=0)\) and \(\hat{a}_{k}:=\hat{a}_{k}(t=T)\). Taking the time derivative of Eqs. (19) along with Eqs. (22) and, after some algebra (see Appendix A for details), we obtain the following set of differential equations for the annihilation operator \[\dot{\hat{a}}_{j}(t)=\sum_{k}\left[A_{kj}(t)\hat{a}_{k}(t)+B_{kj}^{*}(t)\hat{a }_{k}^{\dagger}(t)\right]. \tag{23}\] The equation for the creation operator is obtained by simply taking the transpose complex conjugate of this last equation. In this equation, we defined the coefficients \[\begin{split} A_{kj}(t)&\\ B_{kj}(t)&\\ \end{split} \tag{24}\] with \[\mu_{kj}(t)\coloneqq-\left(\sqrt{\frac{j}{k}}g_{jk}+\frac{1}{2}\delta_{jk} \right)\frac{\dot{L}(t)}{L(t)}. \tag{25}\] Identifying Eq. (23) as the Heisenberg equation of motion for the annihilation operator, it is straightforward to write down the effective Hamiltonian in the Schrodinger picture as2 Footnote 2: Although Hamiltonian (26) differs from that in Ref. 
[35] due to the absence of a term proportional to \(\omega_{k}(t)\), both descriptions are equivalent, since this contribution is contained in the exponential terms in Eq. (22). \[\hat{H}_{\text{eff}}(t)=\frac{i}{2}\sum_{jk}\Bigg{[}A_{kj}(t)\hat{b}_{j}^{ \dagger}\hat{b}_{k}+B_{kj}^{*}(t)\hat{b}_{j}^{\dagger}\hat{b}_{k}^{\dagger}- \text{h.c.}\Bigg{]}, \tag{26}\] where "h.c." stands for hermitian conjugate. Here, we can clearly see the existence of two different contributions. The terms containing the coefficients \(B_{kj}^{*}\) and \(B_{kj}\) govern the process of creation and annihilation of pairs of particles, while the ones proportional to \(A_{kj}^{*}\) and \(A_{kj}\) are responsible for scattering of particles between distinct modes. From this Hamiltonian we can compute the time evolution of any initial density matrix and, therefore, the thermodynamic entropy given in Eq. (1). This will be done in the sequence. ### The density operator To investigate the entropy production within the proposed scheme, one first needs to obtain an explicit expression for the system's density operator \(\hat{\rho}\) after the cavity returns to its stationary configuration. This can be achieved by finding solutions to the dynamical equation \[\dot{\hat{\rho}}(t)=-i\left[\hat{H}_{\text{eff}}(t),\hat{\rho}(t)\right]. \tag{27}\] Conversely, the complex structure of the effective Hamiltonian poses inherent challenges in solving Eq. (27). To overcome this issue, we narrow our focus to a specific category of problems where the equation of motion for the cavity mirror assumes the following form \[L(t)=L_{0}\left[1+\epsilon l(t)\right], \tag{28}\] where \(l(t)\) is a smooth function of order unity --as well as its first time derivative--, while \(\epsilon\ll 1\) is a small amplitude. Since the coefficients in Eq. (25) are proportional to \(\dot{L}(t)/L(t)\), it is straightforward to see that the Hamiltonian coefficients given in Eqs. (24) are proportional to \(\epsilon\). As a result, the formal solution to Eq. (27) up to second order in \(\epsilon\) reads \[\hat{\rho}(T)=\hat{\rho}(0)-i\int_{0}^{T}dt^{\prime}\left[\hat{H} _{\text{eff}}(t^{\prime}),\hat{\rho}(0)\right] \tag{29}\] \[-\int_{0}^{T}dt^{\prime}\int_{0}^{t^{\prime}}dt^{\prime\prime} \left[\hat{H}_{\text{eff}}(t^{\prime}),\left[\hat{H}_{\text{eff}}(t^{\prime \prime}),\hat{\rho}(0)\right]\right].\] We are interested in the particular case of the initial vacuum state \(\hat{\rho}(0)=\ket{0;\text{in}}\bra{\text{in};0}\), since we want to study the thermodynamics of the particle creation process. It is convenient to write the evolved state in terms of the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\), which are related to the operators \(\hat{a}_{k}(t)\) and \(\hat{a}_{k}^{\dagger}(t)\) by the intanstaneous version of the Bogoliubov coefficients \(\alpha_{kj}(t)\) and \(\beta_{kj}(t)\). By substituting the transformations (17) into the set of differential equations (23), we obtain a recursive relation for the Bogoliubov coefficients in terms of powers of \(\epsilon\). Up to first order, the resulting coefficients are given by \[\alpha_{kj}(t) =\delta_{kj}+\int_{0}^{t}dt^{\prime}A_{kj}(t^{\prime}), \tag{30a}\] \[\beta_{kj}(t) =\int_{0}^{t}dt^{\prime}\;B_{kj}(t^{\prime}), \tag{30b}\] which implies \[\hat{a}_{k}(t)=\hat{b}_{k}+\sum_{j}\left(\tilde{\alpha}_{jk}(t)\hat{b}_{j}+ \beta_{jk}^{*}(t)\hat{b}_{j}^{\dagger}\right),\] where \(\tilde{\alpha}_{kj}(t)=\int_{0}^{t}dt^{\prime}A_{kj}(t^{\prime})\). A direct calculation from Eq. 
(29) leads us to the following expression for the system's density operator up to second order in \(\epsilon\) \[\hat{\rho}(T) =\hat{\rho}(0)-\tfrac{1}{2}\sum_{kj}\Bigg{\{}\beta_{kj}^{*}\left( \hat{b}_{k}^{\dagger}\hat{b}_{j}^{\dagger}\hat{\rho}(0)\right)-\tfrac{1}{4} \sum_{nm}\Bigg{[}\beta_{mn}\beta_{kj}^{*}\left(\hat{b}_{k}^{\dagger}\hat{b}_{ j}^{\dagger}\hat{\rho}(0)\hat{b}_{m}\hat{b}_{n}\right)-\beta_{mn}\beta_{kj}^{*} \left(\hat{b}_{m}\hat{b}_{n}\hat{b}_{k}^{\dagger}\hat{b}_{j}^{\dagger}\hat{ \rho}(0)\right)\] \[+\beta_{mn}^{*}\beta_{kj}^{*}\left(\hat{b}_{m}^{\dagger}\hat{b}_{ n}^{\dagger}\hat{b}_{k}^{\dagger}\hat{b}_{j}^{\dagger}\hat{\rho}(0)\right)+2\tilde{ \alpha}_{mn}^{*}\beta_{kj}^{*}\left(\hat{b}_{m}^{\dagger}\hat{b}_{n}\hat{b}_{k }^{\dagger}\hat{b}_{j}^{\dagger}\hat{\rho}(0)\right)\Bigg{]}+\text{h.c.} \Bigg{\}}. \tag{31}\] Considering the initial vacuum state, the number of particles created inside the cavity due to the DCE takes the form \[N(T) = \text{Tr}\left\{\sum_{k}\hat{\rho}(T)\hat{b}_{k}^{\dagger}\hat{b}_{k} \right\}=\sum_{kj}|\beta_{kj}|^{2}, \tag{32}\] in agreement with Eq. (18), thus showing the consistency of our calculations. We are now ready to discuss the entropy production due to the particle creation process. ### Entropy production As discussed earlier, we consider the diagonal entropy [8] \[S_{d}(\hat{\rho})=-\sum_{\mathbf{n}}\rho_{\text{diag}}^{(\mathbf{n})}\ln\rho_{\text{ diag}}^{(\mathbf{n})}, \tag{33}\] as the main figure of merit for characterizing irreversibility. In this equation, \(\rho_{\text{diag}}^{(\mathbf{n})}=\bra{\text{in};\mathbf{n}}\hat{\rho}\ket{\mathbf{n}; \text{in}}\) represent the diagonal elements of the system's density operator in the initial energy eigenbasis. From the expression of the density operator shown in Eq. (31), the diagonal entropy can be directly computed, resulting in \[S_{d}(T) = -\left[1-\tfrac{1}{2}N(T)\right]\ln\left[1-\tfrac{1}{2}N(T)\right] \tag{34}\] \[- \sum_{kj}\tfrac{1}{2}|\beta_{kj}(T)|^{2}\ln\tfrac{1}{2}|\beta_{kj} (T)|^{2}.\] We first observe that the entropy production depends on the number of particles created inside the cavity. Secondly, we note that this entropy production is exactly equal to the creation of quantum coherence in the energy eigenbasis of the field. To see this, let us consider the relative entropy of coherence [39] \[C(\hat{\rho})=S(\hat{\rho}_{d})-S(\hat{\rho}),\] which is a measure of the amount of quantum coherence in a given basis. Here \(S(\hat{\rho})=-\operatorname{Tr}\hat{\rho}\ln\hat{\rho}\) designates the von Neumman entropy of \(\hat{\rho}\) while \(\hat{\rho}_{d}\) is the diagonal operator built from the diagonal elements of \(\hat{\rho}\) in the selected basis. Since we are interested in the amount of entropy produced during time evolution, we pick up the initial energy eigenbasis to measure coherence. This is fully justified since we are interested in thermodynamics. Under this choice, we directly see that \(S(\hat{\rho}_{d})=S_{d}(\hat{\rho})\). Since our evolution is unitary and the initial state is pure, we have \(S(\hat{\rho})=0\), thus implying that \[C(\hat{\rho})=S_{d}(T). \tag{35}\] Note that, differently from Eq. (34), such a result is a general one, independent of the perturbation theory used here. This result implies that we will observe irreversibility (positive entropy production) for every process that creates quantum coherence in the energy eigenbasis of the system. 
Therefore, reversible processes must be those that are performed slowly enough in order to not induce transitions among the energy eigenstates. This result is in agreement with the discussions presented in Refs. [8; 9; 10; 12], where both entropy production and heat are associated with processes that generate coherence. In order to illustrate our results, let us consider that the moving mirror performs harmonic oscillations of the form \[l(t)=\sin(p\omega_{1}^{\text{in}}t), \tag{36}\] where \(p\) is an integer, while \(\omega_{1}^{\text{in}}\) is the first unperturbed field frequency. For simplicity, we define the small dimensionless time \(\tau=\epsilon\omega_{1}^{\text{in}}T/2\) and assume the case in which the mirror returns to its initial position at time \(t=T\) after performing a certain number of complete cycles (\(p\omega_{1}^{\text{in}}T=2\pi m\) with \(m=1,2,\dots\)). Using Eqs. (13) and (36), we directly obtain \[|\beta_{kj}(\tau)|=\left\{\begin{array}{rl}\sqrt{kj}\ \tau&\text{if}\ p=k+j,\\ \frac{2\sqrt{kj}\epsilon p}{p^{2}-(k+j)^{2}}\sin\left[\frac{2(k+j)\tau}{\epsilon}\right]&\text{if}\ p\neq k+j.\end{array}\right. \tag{37}\] By dropping the rapid oscillating terms, the number of particles created takes the form \[N(\tau)=\frac{1}{6}p(p^{2}-1)\tau^{2}, \tag{38}\] in agreement with Ref. [40]. Note that the above expression is valid under perturbation theory involving time, and, therefore, it is a good approximation only when \(\tau\ll 1\). In this case, the diagonal entropy, our focus of interest here, reduces to \[S_{d}(\tau)=\frac{1}{2}N(\tau)\Bigg{[}1-\ln\frac{1}{2}N(\tau)+\ln\frac{p(p^{2}-1)}{6}-\frac{6\operatorname{v}(p)}{p(p^{2}-1)}\Bigg{]}, \tag{39}\] with \[\operatorname{v}(p)=\sum_{k=1}^{p-1}(p-k)k\ln(p-k)k.\] Figure 1 shows the diagonal entropy for this particular case. As it is clear from the figure, entropy will be produced in the field for every value of the mirror frequency \(p\), except for \(p=1\), where the number of created particles vanishes. The technique employed in this section, based on the effective Hamiltonian, enabled us to calculate the system's entropy production through the time evolution of the density operator. This establishes a direct link between entropy production and the emergence of quantum coherence in the field. Nevertheless, our current analysis is confined to the short-time limit. In the subsequent section, we shift to the Heisenberg picture and quantify entropy production in relation to the time evolution of Gaussian states. This approach permits an exploration of the contribution to entropy production arising from the generation of entanglement between a single mode and the remainder of the field. Therefore, we see that these two approaches are complementary to each other.

Figure 1: **Entropy production**. Entropy as a function of \(\tau\) for distinct values of the mirror oscillating frequency.

## IV Gaussian state approach The last section presented an analysis of the entropy production constrained to the short-time regime of the entire system. Now, we introduce a different approach that enables us to analyze entropy production in a specific mode across all time intervals. Additionally, this method facilitates the exploration of the entropy dynamics and its connection with the entanglement between the selected mode and all other modes in the system. To achieve this goal we follow the techniques outlined in Ref. [40] where the dynamics of the system during the cavity motion is described in the Heisenberg picture.
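As a quick numerical check of the closed forms above, the sketch below evaluates the number of created particles and the diagonal entropy both directly from Eqs. (34) and (37) (keeping only the resonant branch, i.e., dropping the rapidly oscillating terms as in the text) and from the compact expressions (38)-(39). The two routes agree to leading order in \(\tau\); the chosen values of \(\tau\) and \(p\) are arbitrary small-\(\tau\) examples.

```python
import numpy as np

def resonant_betas(p: int, tau: float) -> dict:
    """Resonant branch of Eq. (37): |beta_kj| = sqrt(k*j)*tau whenever k + j = p."""
    return {(k, p - k): np.sqrt(k * (p - k)) * tau for k in range(1, p)}

def N_direct(p, tau):                      # Eq. (18)/(32), resonant terms only
    return sum(b ** 2 for b in resonant_betas(p, tau).values())

def N_closed(p, tau):                      # Eq. (38)
    return p * (p ** 2 - 1) * tau ** 2 / 6

def S_direct(p, tau):                      # Eq. (34), resonant terms only
    N = N_direct(p, tau)
    terms = [b ** 2 / 2 for b in resonant_betas(p, tau).values()]
    return -(1 - N / 2) * np.log(1 - N / 2) - sum(t * np.log(t) for t in terms)

def S_closed(p, tau):                      # Eq. (39)
    N = N_closed(p, tau)
    v = sum((p - k) * k * np.log((p - k) * k) for k in range(1, p))
    return 0.5 * N * (1 - np.log(N / 2) + np.log(p * (p ** 2 - 1) / 6)
                      - 6 * v / (p * (p ** 2 - 1)))

tau = 0.05
for p in (2, 3, 4):
    print(f"p={p}:  N {N_direct(p, tau):.3e} vs {N_closed(p, tau):.3e}   "
          f"S_d {S_direct(p, tau):.3e} vs {S_closed(p, tau):.3e}")
```

The Heisenberg-picture treatment introduced next extends this analysis beyond the short-time regime.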
In this approach, the field is decomposed in terms of the Fourier coefficients \(Q_{j}^{(k)}(t)\) through Eq. (8) along with the mode function (9). Consequently, the dynamics of the system is determined by solving the infinite set of coupled differential equations (11) for the Fourier coefficients, with each equation encompassing an infinite number of time-dependent terms. The problem can be simplified if we consider the special case of parametric resonance, i.e., when one of the mirrors undergoes small oscillations at twice the fundamental frequency of the unperturbed field. Therefore, we impose the following form for the mirror trajectory \[L(t)=L_{0}\left[1+\epsilon\sin\left(2\omega_{1}^{\text{in}}t\right)\right]. \tag{40}\] If the mirror returns to its initial position \(L_{0}\) after some interval of time \(T\), then \(\omega_{k}^{\text{in}}=\omega_{k}^{\text{out}}=\omega_{k}\) and the right-hand side of Eq. (11) vanishes. Under these considerations, it is possible to write \[Q_{j}^{(k)}(t\geq T)=\sqrt{\frac{\omega_{k}}{\omega_{j}}}\left(\alpha_{kj}e^{ -i\omega_{j}t}+\beta_{kj}e^{i\omega_{j}t}\right), \tag{41}\] where \(\alpha_{kj}\) and \(\beta_{kj}\) are the Bogoliubov coefficients defined in Eq. (17). Since we impose the field to be weakly perturbed by the mirror oscillations (40), it is natural to search for solutions to \(Q_{j}^{(k)}(t)\) by allowing the Bogoliubov coefficients in Eq. (41) to be functions that vary slowly in time, i.e., \(\dot{\alpha}_{kj},\dot{\beta}_{kj}\sim\epsilon\). Then, by substituting Eq. (41) into Eq. (11), ignoring terms proportional to \(\epsilon^{2}\) (like \(\ddot{\alpha}_{kj},\ddot{\beta}_{kj}\) and \(\lambda^{2}\)) and employing the method of slowly varying amplitudes [42], it is possible to obtain a set of coupled first order differential equations with time independent coefficients in terms of \(\alpha_{kj}\) and \(\beta_{kj}\). For \(k=1\), this set takes the form [41] \[\frac{\mathrm{d}\alpha_{1j}}{\mathrm{d}\tau} =-\sqrt{3}\alpha_{3j}-\beta_{1j}, \tag{42a}\] \[\frac{\mathrm{d}\beta_{1j}}{\mathrm{d}\tau} =-\alpha_{1j}-\sqrt{3}\beta_{3j}, \tag{42b}\] whereas for \(k>2\) we obtain \[\frac{\mathrm{d}\alpha_{kj}}{\mathrm{d}\tau} =\sqrt{k(k-2)}\alpha_{(k-2),j}-\sqrt{k(k+2)}\alpha_{(k+2),j}, \tag{43a}\] \[\frac{\mathrm{d}\beta_{kj}}{\mathrm{d}\tau} =\sqrt{k(k-2)}\beta_{(k-2),j}-\sqrt{k(k+2)}\beta_{(k+2),j}. \tag{43b}\] Because of the initial conditions \(\alpha_{kj}(0)=\delta_{kj}\) and \(\beta_{kj}(0)=0\), all the coefficients with at least one even index vanish. Complete solutions to the set of equations (42) and (43) were obtained in Ref. [40] in terms of the hypergeometric function. Nonetheless, in this section we will be interested in computing the diagonal entropy generated in particular modes of the field in the regime of parametric resonance (40). As a result, for reasons that will become clear later, it will be sufficient to pay attention only to the asymptotic behavior of the Bogoliubov coefficients with the first index equal to \(1\). 
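The truncated system (42)-(43) is also easy to integrate numerically. The sketch below is a minimal implementation assuming a hard cutoff at \(k=K_{\rm MAX}\) (modes above the cutoff are set to zero) and keeping only odd indices; it evolves the coefficients \(\alpha_{1j}(\tau)\) and \(\beta_{1j}(\tau)\) for a few values of \(j\) and compares them with the leading-order short-time behavior quoted in Eq. (44) below.

```python
import numpy as np
from math import factorial
from scipy.integrate import solve_ivp

K_MAX = 61                              # keep odd modes 1, 3, ..., K_MAX (even ones vanish)
ks = list(range(1, K_MAX + 1, 2))
pos = {k: i for i, k in enumerate(ks)}
n = len(ks)

def rhs(tau, y):
    """Truncated right-hand side of Eqs. (42)-(43) for a fixed second index j."""
    a, b = y[:n], y[n:]
    get = lambda v, k: v[pos[k]] if k in pos else 0.0      # modes beyond K_MAX set to zero
    da, db = np.zeros(n), np.zeros(n)
    da[0] = -np.sqrt(3.0) * get(a, 3) - b[0]               # Eq. (42a)
    db[0] = -a[0] - np.sqrt(3.0) * get(b, 3)               # Eq. (42b)
    for k in ks[1:]:                                       # Eq. (43), k = 3, 5, ...
        i = pos[k]
        da[i] = np.sqrt(k * (k - 2)) * get(a, k - 2) - np.sqrt(k * (k + 2)) * get(a, k + 2)
        db[i] = np.sqrt(k * (k - 2)) * get(b, k - 2) - np.sqrt(k * (k + 2)) * get(b, k + 2)
    return np.concatenate([da, db])

tau_end = 0.3
for j in (1, 3, 5):
    y0 = np.zeros(2 * n)
    y0[pos[j]] = 1.0                                       # alpha_kj(0) = delta_kj, beta_kj(0) = 0
    sol = solve_ivp(rhs, (0.0, tau_end), y0, rtol=1e-8, atol=1e-10)
    a1j, b1j = sol.y[pos[1], -1], sol.y[n + pos[1], -1]
    mu = (j - 1) // 2                                      # j = 2*mu + 1
    J = factorial(2 * mu) / (2 ** mu * factorial(mu) ** 2)
    K = (-1) ** mu * np.sqrt(2 * mu + 1) / (mu + 1)
    print(f"j={j}: alpha_1j = {a1j:+.4f} (leading order {(mu + 1) * K * J * tau_end ** mu:+.4f})  "
          f"beta_1j = {b1j:+.4f} (leading order {-K * J * tau_end ** (mu + 1):+.4f})")
```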
For \(\tau\ll 1\), their expressions read \[\alpha_{1(2\mu+1)} =(\mu+1)K_{\mu}J_{\mu}\ \tau^{\mu}+\mathcal{O}(\tau^{\mu+2}), \tag{44a}\] \[\beta_{1(2\mu+1)} =-K_{\mu}J_{\mu}\ \tau^{\mu+1}+\mathcal{O}(\tau^{\mu+3}), \tag{44b}\] with \(J_{\mu}=(2\mu)!/2^{\mu}(\mu!)^{2}\) and \(K_{\mu}=(-1)^{\mu}\sqrt{2\mu+1}/(\mu+1)\), whereas for \(\tau\gg 1\) \[\alpha_{1(2\mu+1)} \approx\frac{2}{\pi}\frac{(-1)^{\mu}}{\sqrt{2\mu+1}}, \tag{45a}\] \[\beta_{1(2\mu+1)} \approx\frac{2}{\pi}\frac{(-1)^{\mu}}{\sqrt{2\mu+1}}, \tag{45b}\] with \(\mu=0,1,2,\ldots\). Now we are ready to write down the reduced density operator for the considered mode and to address the question of the dynamics of the entropy production and its relation to entanglement. ### Reduced density operator The reduced density operator of mode \(m\) is given by \[\hat{\rho}_{m}=\operatorname{Tr}_{\{k\}/m}\hat{\rho}, \tag{46}\] where \(\operatorname{Tr}_{\{k\}/m}\) denotes the trace of the total density operator \(\hat{\rho}\) over all the modes except the \(m\)-th one. Now, from the previous section, we can see that the time evolution of the field can be described by an effective quadratic time-dependent Hamiltonian. We know that the time evolution governed by quadratic Hamiltonians transforms any Gaussian state into another Gaussian state, which are completely characterized by the covariance matrix. As the vacuum state belongs to the class of Gaussian states, it is in fact possible to describe our initial state in terms of the Wigner function for the \(m\)-th mode, which reads \[W_{m}(\mathbf{q})=\frac{1}{\sqrt{2\pi\det\mathbf{\Sigma}_{m}}}e^{-\frac{1}{2}( \mathbf{q}-\langle\mathbf{q}\rangle)\mathbf{\Sigma}_{m}^{-1}(\mathbf{q}- \langle\mathbf{q}\rangle)},\] where \(\mathbf{q}=(\hat{q}_{m},\hat{p}_{m})\) is the quadrature operator with components \[\hat{q}_{m} =\frac{1}{\sqrt{2}}\left(\hat{a}_{m}^{\dagger}+\hat{a}_{m}\right), \tag{47a}\] \[\hat{p}_{m} =\frac{i}{\sqrt{2}}\left(\hat{a}_{m}^{\dagger}-\hat{a}_{m}\right). \tag{47b}\] \(\mathbf{\Sigma}_{m}\) stands for the covariance matrix \[\mathbf{\Sigma}_{m}\equiv\begin{pmatrix}\sigma_{m}^{q}&\sigma_{m}^{qp}\\ \sigma_{m}^{qp}&\sigma_{m}^{p}\end{pmatrix} \tag{48}\] with \[\sigma_{m}^{q} =\langle\hat{q}_{m}^{2}\rangle-\langle\hat{q}_{m}\rangle^{2}, \tag{49a}\] \[\sigma_{m}^{p} =\langle\hat{p}_{m}^{2}\rangle-\langle\hat{p}_{m}\rangle^{2},\] (49b) \[\sigma_{m}^{qp} =\frac{1}{2}\langle\hat{p}_{m}\hat{q}_{m}+\hat{q}_{m}\hat{p}_{m }\rangle-\langle\hat{q}_{m}\rangle\langle\hat{p}_{m}\rangle. \tag{49c}\] Since we are interested in the diagonal entropy, we focus on the diagonal components of the density operator in the energy eigenbasis. For the special case of an initially vacuum state \(\ket{0;\textit{in}}\), these diagonal terms can be written as functions of the covariance matrix elements [40] \[\rho_{m}^{(n)} =\frac{2\left[\left(2\sigma_{m}^{q}-1\right)\left(2\sigma_{m}^{p }-1\right)\right]^{n/2}}{\left[\left(2\sigma_{m}^{q}+1\right)\left(2\sigma_{m} ^{p}+1\right)\right]^{(n+1)/2}}\] \[\times\mathrm{P}_{n}\left(\frac{4\sigma_{m}^{q}\sigma_{m}^{p}-1} {\sqrt{(4(\sigma_{m}^{q})^{2}-1)(4(\sigma_{m}^{p})^{2}-1)}}\right), \tag{50}\] where \(\mathrm{P}_{n}\) is the Legendre polynomial of order \(n\) and \(\rho_{m}^{(n)}=\bra{\mathrm{in};n}\hat{\rho}_{m}\ket{n;\mathrm{in}}\) is the \(n\)-th diagonal element of the reduced density operator in the initial energy eigenbasis. By expressing the quadrature operators (47) in terms of the initial operators \(\hat{b}_{k}\) and \(\hat{b}_{k}^{\dagger}\) defined in Eq. 
(17), the variances can be directly computed, resulting in \[\sigma_{m}^{q} =\frac{1}{2}\sum_{k}\left|\alpha_{km}\!-\!\beta_{km}\right|^{2}, \tag{51a}\] \[\sigma_{m}^{p} =\frac{1}{2}\sum_{k}\left|\alpha_{km}\!+\!\beta_{km}\right|^{2} \tag{51b}\] where \(m\) is an odd integer and the cross term \(\sigma_{m}^{qp}\) is identically zero for our choice of the initial state. By taking the time derivatives of these last equations and inserting the recursive relations (42) and (43), one can show that \[\frac{\mathrm{d}\sigma_{m}^{q}}{\mathrm{d}\tau} =-\left[\alpha_{1m}\!-\!\beta_{1m}\right]^{2} \tag{52a}\] \[\frac{\mathrm{d}\sigma_{m}^{p}}{\mathrm{d}\tau} =+\left[\alpha_{1m}\!+\!\beta_{1m}\right]^{2}, \tag{52b}\] which depends only on the Bogoliubov coefficients, with the first index equal to \(1\) (as we have pointed out in the beginning of the section). Moreover, because the definitions (47), the differential equations (52) need to satisfy the initial conditions \(\sigma_{m}^{q}(0)=\sigma_{m}^{p}(0)=1/2\). We now analyze the solutions to these equations in two distinct regimes, the short-time and the long-time. ### Short-time regime The short time limit is defined by \(\tau\ll 1\). Inserting Eqs. (44) into Eqs. (52) and integrating over \(\tau\), we obtain \[\frac{\sigma_{2\mu+1}^{q}}{\sigma_{2\mu+1}^{p}} \bigg{\}}=\frac{1}{2}\mp\tau^{2\mu+1}J_{\mu}^{2}\left[1\mp K_{ \mu}^{2}\tau+\mathcal{O}(\tau^{2})\right],\] with \(J_{\mu}\) and \(K_{\mu}\) defined in Eq. (44). Plugging Eqs. (53) into Eq. (50) leads to the following expression for the diagonal components of the reduced density operator \[\rho_{2\mu+1}^{(n)} =(-1)^{n}i^{n}J_{\mu}^{n}\tau^{n(2\mu+1)}\left(1-K_{\mu}^{4}\tau^ {2}\right)^{n/2} \tag{53}\] \[\times\left[1-(n+1)J_{\mu}^{2}\tau^{2\mu+2}\left(K_{\mu}^{2}- \frac{1}{2}J_{\mu}^{2}\tau^{2\mu}\right)\bigg{]}\] \[\times P_{n}\left[i\tau\left(K_{\mu}^{2}-J_{\mu}^{2}\tau^{2\mu} \right)\right]+\mathcal{O}(\tau^{2\mu+3}).\] This expression is what we need to compute the diagonal entropy the \((2\mu+1)\)-th mode. Up to the second order in \(\tau\) we obtain, for \(\mu=0\), the following result \[S_{d}^{1}(\tau\ll 1)=\frac{1}{2}N_{1}(\tau)\bigg{[}1-\ln\frac{1}{2}N_{1}(\tau) \bigg{]},\] while for any other value of \(\mu\), we have \[S_{d}^{2\mu+1}(\tau\ll 1)=N_{2\mu+1}(\tau)\bigg{[}1-\ln N_{2\mu+1}(\tau) \bigg{]}+\mathcal{O}(\tau^{2\mu+3}),\] where \(N_{2\mu+1}(\tau)=K_{\mu}^{2}J_{\mu}^{2}\tau^{2\mu+2}+\mathcal{O}(\tau^{2\mu+3})\) is the number of particles created in the corresponding mode. Hence, at short-times, the entropy for each mode increases with the number of created particles, aligning completely with the findings outlined in the preceding section. As expected, the current methodology enables an exploration of the long-time dynamics of the entropy production, and we delve into such an analysis in the subsequent discussion. ### Long-time regime The long-time limit is defined by \(\tau\gg 1\). In this case, by substituting Eqs. (45) into Eqs. (52), we obtain the time derivatives of the system's quadrature variances as \[\frac{\mathrm{d}}{\mathrm{d}\tau}\sigma_{2\mu+1}^{q} \approx 0 \tag{54a}\] \[\frac{\mathrm{d}}{\mathrm{d}\tau}\sigma_{2\mu+1}^{p} \approx\frac{16}{\pi^{2}(2\mu+1)}. \tag{54b}\] The specific integration constant for Eqs. 
(54a) varies for each mode and depends on the complete form of the Bogoliubov coefficients [40], but the general behavior is the same: both quadrature variances start with the same value \(1/2\) at \(t=0\) and end up assuming distinct asymptotic behavior at \(\tau\gg 1\), with \(\sigma_{m}^{q}\) decreasing to a constant value, whereas \(\sigma_{m}^{p}\) increases almost linearly in time. It is now straightforward to compute the single-mode reduced density matrix as \[\rho_{m}^{(n)}(\tau\gg 1)=C_{m}^{(n)}\,\left[\det\mathbf{\Sigma}_{m}(\tau)\right]^{ -1/2}+\mathcal{O}(1/\tau) \tag{55}\] where \[C_{m}^{(n)}=\frac{1}{\sqrt{1+T_{m}}}\left(\frac{1-T_{m}}{\sqrt{1-T_{m}^{2}}} \right)^{n}\mathrm{P}_{n}\left(\frac{1}{\sqrt{1-T_{m}^{2}}}\right) \tag{56}\] is a positive real coefficient with \(T_{m}=1/2\sigma_{m}^{q}\). From the above expressions, we can compute the diagonal entropy associated with the \(m\)-th field mode as \[S_{d}^{m}(\tau\gg 1)\approx S_{R}^{m}(\tau)+[\det\mathbf{\Sigma}_{m}(\tau)]^{- 1/2}\mathcal{S}_{m}, \tag{57}\] where \(\mathcal{S}_{m}=-\sum_{n}C_{m}^{(n)}\ln C_{m}^{(n)}\) and \(S_{R}^{m}(\tau)=\frac{1}{2}\ln\det\mathbf{\Sigma}_{m}(\tau)\) is the Renyi-2 entropy of the \(m\)-th mode [43]. It can be shown that the second term in Eq. (57) diverges logarithmically with the system dimension \(\mathcal{N}\). This last fact is expected since we are considering a field theory and the number of degrees of freedom of the system is infinite. Moreover, we must remember that entropy is defined up to a multiplicative and an additive constant. So, this last term is not fundamental for the dynamical behavior of entropy. For the resonant mode \(m=1\), we obtain \(\sigma_{1}^{q}\to 2/\pi^{2}\)[40] and \(\sigma_{1}^{p}\to 16\tau/\pi^{2}\), leading to the Reniy-2 entropy \[S_{R}^{1}(\tau)\approx\frac{1}{2}\ln\frac{32}{\pi^{4}}\tau,\] which is in agreement with Ref. [32]3. In the case of the subsequent mode \(m=3\), now \(\sigma_{3}^{q}\to 38/9\pi^{2}\) and \(\sigma_{3}^{p}\to 16\tau/3\pi^{2}\), so we obtain Footnote 3: Here, the argument in the Réniy-2 entropy differs from Ref. [32] by a factor of 4. This occurs because the variances defined in the last reference are twice as large as the ones in Eq. (49). \[S_{R}^{3}(\tau)\approx\frac{1}{2}\ln\frac{608}{27\pi^{4}}\tau.\] Now, since the global state of the field is pure --initial pure state under unitary evolution--, \(S_{R}^{m}(\tau)\) quantifies the amount of entanglement between the \(m\)-th mode and all the remaining ones. Therefore, what Eq. (57) is saying to us is that the asymptotic behavior of the diagonal entropy is fundamentally determined by the generation of entanglement between the considered mode and all the others. ## V Conclusions This article considers the problem of thermodynamic entropy production within the framework of the dynamical Casimir effect, exploring two distinct approaches. The initial approach, employing an effective Hamiltonian description of field dynamics, provides a connections between entropy production and the generation of quantum coherence in the field's mode basis in the short-time limit. The second approach, which relies on the reduced density operator of an individual mode and it is valid for all times, establishes a connection between entropy growth and entanglement generation between the selected mode and all the others. 
Although both approaches can only be compared in the short-time regime, where both predict that entropy increases with the number of created particles, they provide different but complementary information about the dynamics of the entropy production due to the dynamical Casimir effect. In summary, the production of thermodynamic entropy in the field due to the dynamical Casimir effect is governed by the generation of quantum coherence in the field's mode basis and entanglement between the modes. Since our initial state is stationary (vacuum), the diagonal entropy cannot decrease [8] and, therefore, neither can coherence nor entanglement. These results can be understood as follows. A coupling between all the field modes arises due to the nontrivial boundary conditions imposed on the field by the motion of the mirror. Such an interaction induces transitions among the modes, which lie at the root of the generation of quantum coherence and quantum entanglement. Although the evolution is unitary, irreversibility, which is characterized by entropy production, also arises due to these transitions, as discussed in Refs. [8; 9; 10; 12]. Reversible processes are defined in the limit where the motion is so slow that there is no particle creation, no scattering and, thus, no entropy production. Note that in the considered context, in which we have a resonant cavity trapping the field, there are motions for which no particles will be created and, thus, no entropy will be produced. This is a point that deserves a deeper investigation. Our research enhances the comprehension of the thermodynamics of quantum fields under non-trivial boundary conditions and explores the impact of quantum coherence and entanglement on such phenomena. Despite this, numerous questions remain unanswered. An interesting question that directly emerges concerns the split of the energy into work and heat, where the latter is associated with the irreversible aspect of the process, while the former should be related to the energy that can be extracted from the field after the process [45; 46]. Another related issue involves the statistical description of the field in terms of stochastic entropy production and the fluctuation theorems [47]. Furthermore, what role do multiple quantum coherences and multipartite entanglement play in entropy production? How do the thermalization properties of the field dynamics fit into this picture? Lastly, a question arises regarding whether heat and work adhere appropriately to the equivalence principle [48]. These are some of the pertinent questions that will be the focus of future investigations. ###### Acknowledgements. This work was supported by the National Institute for the Science and Technology of Quantum Information (INCT-IQ), Grant No. 465469/2014-0, by the National Council for Scientific and Technological Development (CNPq), Grant No. 308065/2022-0, and by the Coordination of Superior Level Staff Improvement (CAPES). ## Appendix A Derivation of the effective Hamiltonian ### Dynamical equations for the instantaneous creation and annihilation operators From Eq. (2), the dynamical equations of motion for the quantum scalar field and its conjugate momentum can be written as \[\partial_{t}\hat{\phi}(x,t)=\hat{\pi}(x,t) \tag{36a}\] \[\partial_{t}\hat{\pi}(x,t)=\partial_{x}^{2}\hat{\phi}(x,t). \tag{36b}\] By combining Eqs.
(19) and (22), one can express the fields \(\hat{\phi}\) and \(\hat{\pi}\) and their correspondent time derivatives as \[\hat{\phi} =\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i \Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\varphi_{k}, \tag{37a}\] \[\hat{\pi} =i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger} e^{i\Omega_{k}}-\hat{a}_{k}e^{-i\Omega_{k}}\right)\varphi_{k},\] (37b) \[\partial_{t}\hat{\phi} =\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i \Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\left(\partial_{t} \varphi_{k}-\frac{\dot{\omega}_{k}}{2\omega_{k}}\varphi_{k}\right)\] \[+\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i \Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\varphi_{k}+\hat{\pi},\] (37c) \[\partial_{t}\hat{\pi} =i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger} e^{i\Omega_{k}}-\hat{a}_{k}e^{-i\Omega_{k}}\right)\left(\partial_{t}\varphi_{k}+ \frac{\dot{\omega}_{k}}{2\omega_{k}}\varphi_{k}\right)\] \[+i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger} e^{i\Omega_{k}}-\dot{\hat{a}}_{k}e^{-i\Omega_{k}}\right)\varphi_{k}+\partial_{x}^{2} \hat{\phi}, \tag{37d}\] where, for conciseness, we have suppressed the notation of time and spatial dependence in all terms in (37). Comparing (36) with (37c) and (37d), we can isolate the time derivative of the operators \(\hat{a}_{k}\) and \(\hat{a}_{k}^{\dagger}\) by computing \[\int_{0}^{L}\mathrm{d}x\varphi_{j}\left(\partial_{t}\hat{\phi}- \hat{\pi}\right)=\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i \Omega_{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\delta_{kj}\] \[-\sum_{k}\frac{1}{\sqrt{2\omega_{k}}}\left(\hat{a}_{k}e^{-i\Omega _{k}}+\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}\right)\left(G_{kj}+\frac{\dot{ \omega}_{k}}{2\omega_{k}}\delta_{kj}\right)=0 \tag{38}\] and \[\int_{0}^{L}\mathrm{d}x\varphi_{j}\left(\partial_{t}\hat{\pi}- \partial_{x}^{2}\hat{\phi}\right) \tag{39}\] \[=i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger} e^{i\Omega_{k}}-\dot{\hat{a}}_{k}e^{-i\Omega_{k}}\right)\delta_{kj}\] \[+i\sum_{k}\sqrt{\frac{\omega_{k}}{2}}\left(\hat{a}_{k}^{\dagger} e^{i\Omega_{k}}-\hat{a}_{k}e^{-i\Omega_{k}}\right)\left(G_{jk}+\frac{\dot{ \omega}_{j}}{2\omega_{j}}\delta_{kj}\right)=0,\] where it was used \(\int_{0}^{L}dx\varphi_{k}\varphi_{j}=\delta_{kj}\) and \(G_{kj}\coloneqq\int_{0}^{L}\varphi_{k}\partial_{t}\varphi_{j}\). By defining \(\mu_{kj}=\sqrt{\frac{\omega_{j}}{\omega_{k}}}\left(G_{kj}+\frac{\dot{\omega} _{k}}{2\omega_{k}}\delta_{kj}\right)\) we obtain from (38) and (39) the following equations \[\dot{\hat{a}}_{j}e^{-i\Omega_{j}}+\dot{\hat{a}}_{j}^{\dagger}e^{i \Omega_{j}} =\sum_{k}\mu_{kj}\left(\hat{a}_{j}e^{-i\Omega_{j}}+\hat{a}_{j}^{ \dagger}e^{i\Omega_{j}}\right), \tag{40a}\] \[\dot{\hat{a}}_{j}e^{-i\Omega_{j}}-\dot{\hat{a}}_{j}^{\dagger}e^{ i\Omega_{j}} =\sum_{k}\mu_{jk}\left(\hat{a}_{k}^{\dagger}e^{i\Omega_{k}}-\hat{a}_{k}e^{-i \Omega_{k}}\right). 
\tag{40b}\] From the last system, it is easy to isolate \(\dot{\hat{a}}_{j}(t)\) and \(\dot{\hat{a}}_{j}^{\dagger}(t)\) as \[\dot{\hat{a}}_{j}(t) =\sum_{k}\left[A_{kj}(t)a_{k}(t)+B_{kj}^{*}(t)a_{k}^{\dagger}(t) \right], \tag{41a}\] \[\dot{\hat{a}}_{j}^{\dagger}(t) =\sum_{k}\left[A_{kj}^{*}(t)a_{k}^{\dagger}(t)+B_{kj}(t)a_{k}(t) \right], \tag{41b}\] with \[A_{kj}(t) =\frac{1}{2}\left[\mu_{kj}(t)-\mu_{jk}(t)\right]e^{-i[\Omega_{k}( t)-\Omega_{j}(t)]}, \tag{42a}\] \[B_{kj}(t) =\frac{1}{2}\left[\mu_{kj}(t)+\mu_{jk}(t)\right]e^{-i[\Omega_{k}( t)+\Omega_{j}(t)]}. \tag{42b}\] Since \(\omega_{k}(t)=k\pi/L(t)\) and using the definition (10) we can calculate \[G_{kj}(t) =g_{kj}\frac{\dot{L}(t)}{L(t)}, \tag{43}\] \[\frac{\dot{\omega}_{k}(t)}{\omega_{k}(t)} =-\frac{\dot{L}(t)}{L(t)}, \tag{44}\] where \(g_{kj}\) has the same form as expressed in (13). So we obtain \(\mu_{kj}(t)=-\left(\sqrt{\frac{\dot{i}}{k}}g_{jk}+\frac{1}{2}\delta_{kj} \right)\frac{\dot{L}(t)}{L(t)}\) as in Eq. (25). ### Effective Hamiltonian To find the effective Hamiltonian that generates the dynamical equations (42) we begin by considering the most general quadratic operator \[\hat{H}(t)=\sum_{kl}\left[\mathcal{A}_{kl}(t)\hat{a}_{k}^{\dagger}(t) \hat{a}_{l}^{\dagger}(t)+\mathcal{B}_{kl}(t)\hat{a}_{k}^{\dagger}(t)\hat{a}_{l}(t)\right. \tag{45}\] \[\left.+\mathcal{C}_{kl}(t)\hat{a}_{l}^{\dagger}(t)\hat{a}_{k}(t)+ \mathcal{D}_{kl}(t)\hat{a}_{k}(t)\hat{a}_{l}(t)\right],\] which is: (i) hermitian, by satisfying the conditions \(\mathcal{A}_{kl}(t)=\mathcal{D}_{kl}^{\ast}(t)\), \(\mathcal{B}_{kl}(t)=\mathcal{C}_{kl}^{\ast}(t)\) and (ii) invariant over an index change, with the conditions \(\mathcal{A}_{kl}(t)=\mathcal{A}_{lk}(t)\), \(\mathcal{D}_{kl}(t)=\mathcal{D}_{lk}(t)\), \(\mathcal{B}_{kl}(t)=\mathcal{C}_{lk}(t)\) and \(\mathcal{B}_{lk}(t)=\mathcal{C}_{kl}(t)\). Suppressing the notation for time dependence, the correspondent Heisenberg equation of motion for the annihilation and creation operators is therefore \[\dot{\hat{a}}_{j}=i\left[\hat{H},\hat{a}_{j}\right]=i\sum_{kl} \left(\mathcal{A}_{kl}\Big{[}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{ \dagger},\hat{a}_{j}\Big{]}+\mathcal{B}_{kl}\Big{[}\hat{a}_{k}^{\dagger}\hat{ a}_{l},\hat{a}_{j}\Big{]}\right.\] \[\left.\qquad\qquad\qquad\qquad+\mathcal{C}_{kl}\Big{[}\hat{a}_{l }^{\dagger}\hat{a}_{k},\hat{a}_{j}\Big{]}+\mathcal{D}_{kl}[\hat{a}_{k}\hat{ a}_{l},\hat{a}_{j}]\right)\] \[=-i\sum_{k}\bigg{[}\left(\mathcal{A}_{kj}+\mathcal{A}_{jk}\right) \hat{a}_{k}^{\dagger}+\left(\mathcal{B}_{jk}+\mathcal{C}_{kj}\right)\hat{a}_ {k}\bigg{]} \tag{111}\] and \[\dot{\hat{a}}_{j}^{\dagger}=i\left[\hat{H},\hat{a}_{j}^{\dagger} \right]=i\sum_{kl}\Biggl{(}\mathcal{A}_{kl}\Big{[}\hat{a}_{k}^{\dagger}\hat{a} _{l}^{\dagger},\hat{a}_{j}^{\dagger}\Big{]}+\mathcal{B}_{kl}\Big{[}\hat{a}_{k }^{\dagger}\hat{a}_{l},\hat{a}_{j}^{\dagger}\Big{]}\] \[\qquad\qquad\qquad\qquad\qquad+\mathcal{C}_{kl}\Big{[}\hat{a}_{l }^{\dagger}\hat{a}_{k},\hat{a}_{j}^{\dagger}\Big{]}+\mathcal{D}_{kl}\Big{[} \hat{a}_{k}\hat{a}_{l},\hat{a}_{j}^{\dagger}\Big{]}\Biggr{)}\] \[=i\sum_{k}\bigg{[}\left(\mathcal{D}_{kj}+\mathcal{D}_{jk}\right) \hat{a}_{k}+\left(\mathcal{B}_{kj}+\mathcal{C}_{jk}\right)\hat{a}_{k}^{ \dagger}\bigg{]}. 
\tag{112}\] Comparing (10a) with (111) and (10b) with (112), we obtain the following system \[-i\left[\mathcal{A}_{kj}(t)+\mathcal{A}_{jk}(t)\right] =-2i\mathcal{A}_{kj}(t)=B_{kj}^{\ast}(t)\] \[i\left[\mathcal{D}_{kj}(t)+\mathcal{D}_{jk}(t)\right] =2i\mathcal{D}_{kj}(t)=B_{kj}(t)\] \[i\left[\mathcal{B}_{kj}(t)+\mathcal{C}_{jk}(t)\right] =2i\mathcal{B}_{kj}(t)=A_{kj}^{\ast}(t).\] Inserting the last coefficients into Eq. (100), one obtains the following expression for the effective Hamiltonian \[\hat{H}_{H}(t)=\frac{i}{2}\sum_{jk}\Bigg{[}A_{kj}(t)\hat{a}_{j}^{ \dagger}(t)\hat{a}_{k}(t)+B_{kj}^{\ast}(t)\hat{a}_{j}^{\dagger}(t)\hat{a}_{k}^ {\dagger}(t)-\text{h.c.}\Bigg{]}, \tag{114}\] where the subscript \(H\) conveys that the operator is represented in the Heisenberg picture of quantum mechanics. Moving to the Schrodinger picture, the last Hamiltonian takes the form \[\hat{H}_{S}(t)=\frac{i}{2}\sum_{jk}\Bigg{[}A_{kj}(t)\hat{b}_{j}^{ \dagger}\hat{b}_{k}+B_{kj}^{\ast}(t)\hat{b}_{j}^{\dagger}\hat{b}_{k}^{\dagger} -\text{h.c.}\Bigg{]}, \tag{115}\] where the Heisenberg annihilation (and creations) operator is defined as \(\hat{a}_{k}(t)=\hat{U}_{S}^{\dagger}(t)\hat{b}_{k}\hat{U}_{S}(t)\), with \(\hat{U}_{S}(t)\) being the time evolution operator generated by the Hamiltonian (115).
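For completeness, the construction above can be prototyped numerically. The sketch below assembles \(\mu_{kj}(t)\), \(A_{kj}(t)\), and \(B_{kj}(t)\) on a time grid for a prescribed mirror trajectory. Note that the inter-mode coupling \(g_{jk}\) of Eq. (13) is not reproduced in this excerpt, so the code uses a commonly quoted stand-in for Dirichlet cavity modes, clearly marked as such; it should be replaced by the paper's Eq. (13) if the conventions differ.

```python
import numpy as np

def g_placeholder(j: int, k: int) -> float:
    """Stand-in for the inter-mode coupling g_{jk} of Eq. (13), not reproduced here.
    The antisymmetric form below is commonly quoted for Dirichlet cavity modes;
    replace it with the paper's Eq. (13) if the convention differs."""
    if j == k:
        return 0.0
    return (-1) ** (j + k) * 2.0 * j * k / (j ** 2 - k ** 2)

def hamiltonian_coefficients(ts, L, Ldot, kmax, g=g_placeholder):
    """A_{kj}(t) and B_{kj}(t) of Eqs. (24)/(42), built from mu_{kj}(t) of Eq. (25),
    with Omega_k(t) = int_0^t omega_k dt' accumulated on the time grid `ts`."""
    Lt, Ldt = L(ts), Ldot(ts)
    ks = np.arange(1, kmax + 1)
    omega = np.pi * ks[:, None] / Lt[None, :]                   # omega_k(t) = k*pi/L(t)
    Omega = np.concatenate([np.zeros((kmax, 1)),
                            np.cumsum(0.5 * (omega[:, 1:] + omega[:, :-1])
                                      * np.diff(ts)[None, :], axis=1)], axis=1)
    mu = np.empty((kmax, kmax, ts.size))
    for k in ks:
        for j in ks:
            mu[k - 1, j - 1] = -(np.sqrt(j / k) * g(j, k) + 0.5 * (j == k)) * Ldt / Lt
    A = 0.5 * (mu - mu.transpose(1, 0, 2)) * np.exp(-1j * (Omega[:, None, :] - Omega[None, :, :]))
    B = 0.5 * (mu + mu.transpose(1, 0, 2)) * np.exp(-1j * (Omega[:, None, :] + Omega[None, :, :]))
    return A, B

# Example: harmonic mirror motion L(t) = L0 [1 + eps*sin(2*omega1*t)], as in Eq. (40).
L0, eps = 1.0, 1e-3
omega1 = np.pi / L0
L = lambda t: L0 * (1 + eps * np.sin(2 * omega1 * t))
Ldot = lambda t: 2 * omega1 * L0 * eps * np.cos(2 * omega1 * t)
ts = np.linspace(0.0, 10.0, 2001)
A, B = hamiltonian_coefficients(ts, L, Ldot, kmax=5)
print(A.shape, np.abs(B[0, 0]).max())    # coefficients are O(eps), as expected
```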
2309.16792
Agent Coordination via Contextual Regression (AgentCONCUR) for Data Center Flexibility
A network of spatially distributed data centers can provide operational flexibility to power systems by shifting computing tasks among electrically remote locations. However, harnessing this flexibility in real-time through the standard optimization techniques is challenged by the need for sensitive operational datasets and substantial computational resources. To alleviate the data and computational requirements, this paper introduces a coordination mechanism based on contextual regression. This mechanism, abbreviated as AgentCONCUR, associates cost-optimal task shifts with public and trusted contextual data (e.g., real-time prices) and uses regression on this data as a coordination policy. Notably, regression-based coordination does not learn the optimal coordination actions from a labeled dataset. Instead, it exploits the optimization structure of the coordination problem to ensure feasible and cost-effective actions. A NYISO-based study reveals large coordination gains and the optimal features for the successful regression-based coordination.
Vladimir Dvorkin
2023-09-28T18:39:42Z
http://arxiv.org/abs/2309.16792v2
# Agent Coordination via Contextual Regression (AgentCONCUR) for Data Center Flexibility ###### Abstract A network of spatially distributed data centers can provide operational flexibility to power systems by shifting computing tasks among electrically remote locations. However, harnessing this flexibility in real-time through the standard optimization techniques is challenged by the need for sensitive operational datasets and substantial computational resources. To alleviate the data and computational requirements, this paper introduces a coordination mechanism based on contextual regression. This mechanism, abbreviated as AgentCONCUR, associates cost-optimal task shifts with public and trusted contextual data (e.g., real-time prices) and uses regression on this data as a coordination policy. Notably, regression-based coordination does not learn the optimal coordination actions from a labeled dataset. Instead, it exploits the optimization structure of the coordination problem to ensure feasible and cost-effective actions. A NYISO-based study reveals large coordination gains and the optimal features for the successful regression-based coordination. Contextual learning, data centers, feature selection, regression, sustainable computing, system coordination ## I Introduction Coordinated operations of bulk power systems and coupled infrastructures allow for leveraging their complementarity and offsetting inefficiencies, thus leading to enhanced performance. Coordination schemes synchronize grid operations with distribution [1], natural gas [2], and district heating [3] systems, and more recently, a large coordination potential has emerged from the networks of data centers (NetDC) [4]. Their unique coordination advantage is in _spatial flexibility_, which distributed data centers provide by shifting computing tasks among electrically remote locations. This flexibility resource will be important for future power grids, as electricity demand of data centers is rapidly growing, and is expected to reach 35 GW by 2030 in the U.S. alone [5]. Even at present, the coordination potential is significant: training a single GPT-3 language model - the core of the popular ChatGPT chatbot - consumes as much as 1.3 GWh [6]. Allocating such energy-intensive tasks in the NetDC is thus likely to predetermine the dispatch cost and emission intensity in adjacent grids. The growing environmental footprint of computing has encouraged large internet companies to optimize NetDC operations in a carbon-aware manner. Using online emission monitoring tools, such as WattTime.org and ElectricityMaps.com, they smartly time and allocate computing tasks in regions with the least emission footprint [7, 8]. However, the sole reliance on limited emission data is the form of _grid-agnostic_ coordination, which respects NetDC constraints yet ignores those of power grids. For _grid-aware_ coordination, the literature offers three coordination mechanisms: demand response [4], enhanced market design [9], and co-optimization of grid and NetDC operations [10]. In practice, participation of data centers in demand response is very limited due to performance concerns [4]. While the second mechanism integrates the spatial flexibility within market-clearing algorithms and even features robust market properties [11], it remains challenging to fully represent complex data center objectives (e.g., quality of service) and constraints (e.g., latency) via single utility function. 
The latter co-optimization mechanism models the _ideal_ power-NetDC coordination with the potential for the full representation of operational constraints, akin to coordination models for conventional energy infrastructures [1, 2, 3]. However, large data requirements and short operational time frames hinder such coordination in practice. This paper develops a new, regression-based mechanism for grid-aware coordination of power systems and NetDC, termed AgentCONCUR. Unlike optimization-based coordination, the regression solely acts on available contextual grid information, while approximating the optimal decision-making of the two systems. As such, AgentCONCUR resembles industry practices in [7] and [8] by relying on limited grid data, while also leveraging the optimization structure of the ideal coordination. Specifically, this paper contributes by: 1) Developing a bilevel co-optimization of the power grid and NetDC operations, where power system decision-making is constrained by that of NetDC. Similar to grid-aware models in [9, 10, 11], this model takes the power system perspective, but it represents the NetDC as a separate, optimization problem with customer-oriented objectives and constraints (e.g., latency). This co-optimization provides the ideal solution. 2) Devising a contextual regression policy that efficiently approximates the ideal coordination. The policy feasibility and cost-consistency is ensured by the new training optimization which inherits the ideal optimization structure. Using sufficiently many operational scenarios in training allows for robust and cost-consistent performance across testing scenarios. Furthermore, the proposed training allows for the optimal coordination feature selection, such that the coordination can be made possible at different data-intensity requirements. 3) Performing a case study on the New York ISO system to estimate the cost-saving potential in the peak-hour coordination and its approximation by AgentCONCUR. Our results reveal practical trade-offs between the amount of contextual information (features) and the efficiency of coordination. This paper also contributes to decision-focused learning. While the prior work focused on contextual _data_ predictions, e.g., demand [12] or wind power generation [13] data, here we contextually predict the coordination _decisions_ instead. In the remainder, Section II details decision-making of power grid and NetDC operators, and then presents the ideal coordination. Section III introduces the contextual regression approach for AgentCONCUR. Section IV applies AgentCONCUR to New York ISO system and Section V concludes. _Notation:_ Lower- and upper-case letters denote vectors and matrices, respectively. For some matrix \(A\), \(a_{ij}\) denotes its element at position \((i,j)\). Symbol \({}^{\top}\) stands for transposition, and \({}^{\star}\) denotes the optimal value. Vectors 0 and 1 are of zeros and ones, respectively. Operator \(\langle\cdot\rangle_{\mathrm{F}}\) is the Frobenius inner product, and \(\|\cdot\|_{p}\) denotes the \(p-\)norm. ## II Optimizing Power and NetDC Coordination We consider the power-NetDC coordination problem, where agents interface as pictured in Fig. 1. The NetDC operator chooses the spatial allocation of computing tasks among data centers, where the tasks come from a spatially distributed population of users. The allocation criterion is the minimum of network _latency_ - a time delay between sending, executing and sending back the result of a computational task for all users. 
The resulting task allocation shapes electricity demand, which is then used in the optimal power flow (OPF) problem for power system dispatch. The two problems can thus be solved in a coordinated manner to minimize the dispatch cost. The coordination is performed by means of spatial shifts of computing tasks using _virtual links_ connecting data centers into a network [9]. These shifts must be coordinated to satisfy both power system and NetDC objectives. To enable such coordination, we formulate the following bilevel optimization, where the power system operator acts as a leader, whose decision space is constrained by the NetDC operator, acting as a follower: \[\underset{x,\varphi}{\text{minimize}} c_{\text{opf}}(x) \triangleright\text{OPF cost}\] subject to \[x\in\mathcal{X}_{\text{opf}}(y) \triangleright\text{OPF feasibility}\] \[\underset{y}{\text{minimize}} c_{\text{net-dc}}(y) \triangleright\text{Latency loss}\] subject to \[y\in\mathcal{Y}_{\text{net-dc}}(\varphi)\triangleright\text{NetDC feasibility}\] where the task shift \(\varphi\) is the _coordination variable_. The lower-level problem takes request \(\varphi\) as input, and minimizes the latency loss by selecting the new task allocation \(y\). The optimized allocation then enters the power flow equations, and the power system operators computes the new dispatch \(x\) which minimizes the cost. The optimal solution \(\varphi^{\star}\) achieves cost-optimal and feasible for the two systems coordination. The rest of this section details decision-making of the two systems, and then presents the bilevel coordination problem. ### _Power System Optimization_ The operational setting builds on the standard DC-OPF problem [14], which computes the least-cost generation dispatch \(p\in\mathbb{R}^{b}\), within limits \(p,\overline{p}\in\mathbb{R}^{b}_{+}\), that satisfies electricity net demand - load \(d\in\mathbb{R}^{b}_{+}\) subtracted by non-dispatchable renewable generation \(r\in\mathbb{R}^{b}_{+}\). The dispatch cost is modeled using a quadratic function with the first- and second-order coefficients \(c\in\mathbb{R}^{b}_{+}\) and \(C\in\mathbb{S}^{b}_{+}\), respectively. The power flows are computed using the matrix of power transfer distribution factors \(F\in\mathbb{R}^{b\times l}\), and must respect the line capacity \(\overline{f}\in\mathbb{R}^{l}_{+}\). In rare cases, when generation lacks to satisfy all loads, we model load shedding \(\ell\in\mathbb{R}^{b}\) with the most expensive cost \(s\gg c\). In this notation, the OPF problem is: \[\underset{p,\ell}{\text{minimize}} c^{\top}p+p^{\top}Cp+s^{\top}\ell\] (1a) subject to \[\mathbb{1}^{\top}(p+r+\ell-d-\Gamma\vartheta)=0, \tag{1b}\] \[|F(p+r+\ell-d-\Gamma\vartheta)|\leqslant\overline{f},\] (1c) \[\underline{p}\leqslant p\leqslant\overline{p},\ 0\leqslant\ell\leqslant d, \tag{1d}\] which minimizes the total dispatch cost (1a) subject to the power balance equation (1b), minimum and maximum power flow, generation, and load shedding limits in (1c)-(1d), respectively. Modelling Power-NetDC coordination, we distinguish between conventional loads \(d\) and power consumption by data centers \(\Gamma\vartheta\) in constraints (1b) and (1c), where auxiliary matrix \(\Gamma\in\mathbb{R}^{b\times n}\) converts computing loads \(\vartheta\in\mathbb{R}^{n}\) of \(n\) data centers into electrical loads. Although restrictive, this linear conversion model is consistent with power consumption models under different utilization regimes of data centers [15]. 
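Problem (1) is a small convex quadratic program and is easy to prototype with an off-the-shelf modeling layer. The sketch below is a minimal cvxpy rendering of (1a)-(1d); all dimensions, costs, the PTDF matrix \(F\), and the conversion matrix \(\Gamma\) are randomly generated placeholders rather than data from any real system.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
nb, nl, nc = 4, 5, 2                     # buses, lines, data centers (toy sizes)
c = rng.uniform(10, 30, nb)              # linear generation cost
C = np.diag(rng.uniform(0.01, 0.05, nb)) # quadratic generation cost (PSD)
s = np.full(nb, 1e4)                     # load-shedding penalty, s >> c
d = rng.uniform(50, 100, nb)             # conventional load
r = rng.uniform(0, 20, nb)               # renewable injections
p_min, p_max = np.zeros(nb), np.full(nb, 150.0)
F = rng.uniform(-0.5, 0.5, (nl, nb))     # PTDF matrix (placeholder)
f_max = np.full(nl, 80.0)
Gamma = rng.uniform(0, 1, (nb, nc))      # computing-to-electric load conversion
theta = rng.uniform(10, 30, nc)          # data-center loading, e.g. from problem (3)

p = cp.Variable(nb)                      # generation dispatch
shed = cp.Variable(nb)                   # load shedding
inj = p + r + shed - d - Gamma @ theta   # nodal net injections
opf = cp.Problem(cp.Minimize(c @ p + cp.quad_form(p, C) + s @ shed),   # (1a)
                 [cp.sum(inj) == 0,                                    # (1b)
                  cp.abs(F @ inj) <= f_max,                            # (1c)
                  p_min <= p, p <= p_max,                              # (1d)
                  0 <= shed, shed <= d])
opf.solve()
print("dispatch cost:", round(opf.value, 2))
```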
Fig. 1: Interfaces and notation of the power system, network of data centers (NetDC), and communication network between data centers and users.

### _NetDC Optimization_ The NetDC operator allocates computing tasks of \(m\) users among \(n\) data centers. For some computing demand \(\delta\in\mathbb{R}^{m}\), allocation \(W\in\mathbb{R}^{n\times m}\) is optimized to satisfy conservation conditions \(\vartheta_{i}=\sum_{j=1}^{m}w_{ij}\) and \(\delta_{j}=\sum_{i=1}^{n}w_{ij}\), enforced for each data center \(i\) and user \(j\), respectively. The goal is to minimize the latency, which is proportional to geodesic distance \(G\in\mathbb{R}^{n\times m}\) between users and data centers [16]. The proxy function for the aggregated latency is then defined as \[\mathcal{L}:\mathbb{R}^{n\times m}\mapsto\mathbb{R},\quad\mathcal{L}(W)=\sum_{i=1}^{n}\sum_{j=1}^{m}g_{ij}w_{ij}. \tag{2}\] The optimal task allocation problem then becomes: \[\underset{W,\vartheta\geqslant 0}{\text{minimize}}\quad\mathcal{L}(W)+\varrho\|W\|_{2}^{2}\] (3a) subject to \[W^{\top}\mathds{1}=\delta, \tag{3b}\] \[W\mathds{1}=\vartheta, \tag{3c}\] which minimizes latency subject to task conservation conditions (3b) and (3c). The objective function (3a) additionally includes a quadratic term that evenly allocates tasks among equally remote data centers, using a small parameter \(\varrho>0\). Although the data center loading \(\vartheta^{\star}\) is latency-optimal, it is ignorant of the processes in the power system and may shape an expensive electricity demand allocation \(\Gamma\vartheta^{\star}\). In this case, consider a task shift request \(\varphi\in\mathbb{R}^{k}\) along \(k=\frac{n(n-1)}{2}\) virtual links available in a fully connected NetDC. Also, consider the incidence matrix \(A\in\mathbb{R}^{n\times k}\) of the _directed_ NetDC, where \[a_{ij}=\begin{cases}+1,&\text{if }i=n\\ -1,&\text{if }i=n^{\prime}\end{cases}\qquad\forall j=(n,n^{\prime})\in 1,\dots,k.\] Then, given some nominal solution \(W^{\star},\vartheta^{\star}\) from (3), and some exogenous task shift request \(\varphi\), the tasks are re-allocated in a latency-aware fashion by solving the following optimization: \[\underset{W,\vartheta\geqslant 0}{\text{minimize}}\quad\frac{1}{2}\|\mathcal{L}(W-W^{\star})\|_{2}^{2}\] (4a) subject to \[W^{\top}\mathds{1}=\delta, \tag{4b}\] \[W\mathds{1}=\vartheta,\] (4c) \[A\varphi=\vartheta-\vartheta^{\star},\] (4d) \[\mathcal{L}(W-W^{\star})\leqslant\alpha\mathcal{L}(W^{\star}). \tag{4e}\] The problem seeks a new task allocation \(W\) that deviates the least from the latency-optimal solution \(W^{\star}\). Indeed, the objective function (4a) minimizes the latency loss, subject to the task conservation requirements (4b)-(4c). Equation (4d) re-distributes the nominal loading \(\vartheta^{\star}\) with respect to exogenous task shift \(\varphi\). The last constraint (4e) ensures that the aggregated latency must not exceed an \(\alpha\)-percentage of the nominal latency. Thus, problem (4) does not permit task shifts \(\varphi\) that increase the network latency beyond the allowable amount. ### _Bilevel Optimization for Power and NetDC Coordination_ Since the vector of spatial task shifts \(\varphi\) affects the OPF costs in (1) and the latency optimality loss in (4) simultaneously, \(\varphi\) is modeled as a coordination variable between power system and NetDC operators.
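Problems (3) and (4) are likewise small convex programs. The following sketch, again on randomly generated placeholder data, first computes the latency-optimal allocation \(W^{\star},\vartheta^{\star}\) from (3) and then re-allocates tasks via (4) for a single trial shift along one virtual link; the shift direction is chosen from the most- to the least-loaded data center purely for illustration.

```python
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 3, 6                                   # data centers, users (toy sizes)
G = rng.uniform(1, 10, (n, m))                # geodesic-distance proxy (placeholder)
delta = rng.uniform(5, 15, m)                 # computing demand of each user
rho, alpha = 1e-5, 0.5                        # even-split weight and latency budget

# --- Problem (3): latency-optimal allocation --------------------------------
W = cp.Variable((n, m), nonneg=True)
theta = cp.Variable(n)
latency = cp.sum(cp.multiply(G, W))           # proxy function (2)
cp.Problem(cp.Minimize(latency + rho * cp.sum_squares(W)),
           [W.T @ np.ones(n) == delta, W @ np.ones(m) == theta]).solve()
W_star, theta_star, L_star = W.value, theta.value, latency.value

# --- Problem (4): latency-aware re-allocation for a trial shift phi ---------
links = list(itertools.combinations(range(n), 2))     # k = n(n-1)/2 virtual links
A = np.zeros((n, len(links)))
for j, (a, b) in enumerate(links):
    A[a, j], A[b, j] = +1.0, -1.0
donor, receiver = int(np.argmax(theta_star)), int(np.argmin(theta_star))
phi = np.zeros(len(links))
for j, (a, b) in enumerate(links):
    if {a, b} == {donor, receiver}:
        phi[j] = 1.0 if a == receiver else -1.0       # move one task unit donor -> receiver

W2 = cp.Variable((n, m), nonneg=True)
theta2 = cp.Variable(n)
loss = cp.sum(cp.multiply(G, W2 - W_star))            # L(W - W*)
cp.Problem(cp.Minimize(0.5 * cp.square(loss)),
           [W2.T @ np.ones(n) == delta,
            W2 @ np.ones(m) == theta2,
            A @ phi == theta2 - theta_star,
            loss <= alpha * L_star]).solve()
print("nominal loading:", np.round(theta_star, 2), " shifted loading:", np.round(theta2.value, 2))
```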
The cost-optimal and feasible task shift is found by solving the following bilevel optimization: \[\underset{p,\ell,\varphi}{\text{minimize}} c^{\top}p+p^{\top}Cp+s^{\top}\ell\] (BL.U) subject to \[|\Gamma(p+r+\ell-d-\Gamma\vartheta)=0,\] \[|F(p+r+\ell-d-\Gamma\vartheta)|\leqslant\overline{f},\] \[p\leqslant p\leqslant\overline{p},\ 0\leqslant\ell\leqslant d,\] \[\vartheta\in\underset{W,\vartheta}{\text{argmin}} \frac{1}{2}\|\mathcal{L}(W-W^{\star})\|_{2}^{2}\] (BL.L) subject to \[W^{\top}\mathds{1}=\delta, :\mu\] (BL.L) \[W\mathds{1}=\vartheta, :\nu\] (BL.L) \[A\varphi=\vartheta-\vartheta^{\star}, :\kappa\] \[w_{ij}\geqslant 0,\,\forall i,j, :\omega\] \[\mathcal{L}(W-W^{\star})\leqslant\alpha\mathcal{L}(W^{\star}),:\gamma\] which identifies the cost-optimal task shift \(\varphi\) in (BL.L), anticipating the response of NetDC in (BL.L) in terms of new data center loading \(\vartheta\). Here, the colon signs define the dual variables associated with each constraint in (BL.L). The common solution strategy for this problem is to replace (BL.L) with its Karush-Kuhn-Tucker (KKT) conditions [17], yielding a mixed-integer formulation detailed in Appendix A. ## III Agent Coordination via Contextual Regression (AgentCONCUR) Solving the bilevel program (BL) in real-time is challenging because of large data requirements, the lack of coordination interfaces between the power grid and NetDC operators, and the computational burden of the bilevel problem that may fail to provide the solution within operational time frames. To bypass these coordination challenges, we adopt a _contextual_ regression approach, which consists of two stages. At the first (planning) stage, a regression policy is optimized to associate the cost-optimal tasks shifts with the contextual, easy-to-access in real-time information. At the second (real-time) stage, a trained regression policy instantly maps the contextual information into task shifts. The contextual information includes partial yet strongly correlated with grid conditions data, such as aggregated load and generation statistics, electricity prices correlated with costs, and power flows correlated with bus injections. Such contextual information is available online from many power system operators worldwide, e.g., [18]. Towards formulating the problem, let \(x\) denote a feature vector collecting contextual data, and \(\phi(x)\) denote the regression policy. We focus on affine policies, i.e., \[\phi(x)\triangleq\beta_{0}+\beta_{1}x\in\mathbb{R}^{k},\] where \(\beta=(\beta_{0},\beta_{1})\) are regression parameters. Once they are optimized, for some feature realization \(\widehat{x}\), the coordination in real-time proceeds as follows: \[\varphi=\begin{cases}\phi(\widehat{x}),&\text{if feasible for NetDC and OPF}\\ 0,&\text{otherwise}.\end{cases}\] That is, implement the regression solution if the task shifts are feasible for NetDC operations and also produce an OPF-feasible electricity load profile. Otherwise, proceed with a typically more expensive yet feasible non-coordinated solution. In the remainder, we first present the base regression training, used as a reference to optimize \(\phi(x)\). Then, we present the proposed training optimization at the core of AgentCONCUR. ### _Base Regression_ The base approach to optimize policy \(\phi(x)\) is two-fold: 1. 
Collect a dataset \(\{(x_{i},\varphi_{i}^{\star})\}_{i=1}^{q}\) of \(q\) records, where each record \(i\) includes contextual features \(x_{i}\) and the optimal solution \(\varphi_{i}^{\star}\) to problem (BL), specific to record \(i\). 2. Solve a convex optimization problem: \[\underset{\|\beta\|_{1}\leqslant\varepsilon}{\text{minimize}} \frac{1}{q}\sum_{i=1}^{q}\lVert\beta_{0}+\beta_{1}x_{i}-\varphi_{i}^{ \star}\rVert_{2}^{2}\] (5) which minimizes the regularized mean squared error over \(q\) historical records. Here, we chose \(L_{1}-\)regularization, know as _Lasso_[19], which encourages sparsity of \(\beta\) up to selected regularization parameter \(\varepsilon\in\mathbb{R}_{+}\). For any given value \(\varepsilon\), optimization (5) selects optimal coordination features and minimizes the prediction error simultaneously. While being a _data-only_ approximation of the bilevel problem (BL), this approach suffers from at least two drawbacks that may prevent its practical implementation. First, although it minimizes a prediction error, it may result in large decision errors in terms of OPF costs, e.g., when under- and over-predictions of task shifts have asymmetric cost implications. This may result in a large regret, i.e., the average distance between the OPF costs induced by trained policy \(\phi\) and the OPF costs of the bilevel problem (BL). Second, optimization (5) is myopic to the upper- and lower-level feasible regions, thus risking violating operational limits of both power system and NetDC. These two observations motivate us to internalize the cost and feasibility criteria into model training. ### _Cost- and Feasibility-Aware Regression_ Optimizing policy \(\phi(x)\) for AgentCONCUR, we leverage the optimization structure of bilevel model (BL) to guarantee the least-cost and feasible regression-based coordination across available historical records. The proposed optimization is: \[\underset{p,\ell,\varphi,W,\vartheta,\beta}{\text{minimize}} \frac{1}{q}\sum_{i=1}^{q}(c^{\top}p_{i}+p_{i}^{\top}Cp_{i}+s^{ \top}\ell_{i})\] (6a) subject to \[\varphi_{i}=\beta_{0}+\beta_{1}x_{i},\;\lVert\beta\rVert_{1} \leqslant\varepsilon, \tag{6b}\] \[\mathbb{1}^{\top}(p_{i}+r_{i}+\ell_{i}-d_{i}-\Gamma\vartheta_{i} )=0,\] (6c) \[|F(p_{i}+r_{i}+\ell_{i}-d_{i}-\Gamma\vartheta_{i})|\leqslant \overline{f},\] (6d) \[p\leqslant p_{i}\leqslant\overline{p},\;\mathbf{0}\leqslant\ell_ {i}\leqslant d_{i},\] (6e) KKT conditions of (BLL), \[\forall i=1,\ldots,q\] which minimizes the sample average OPF cost, subject to a set of upper-level OPF constraints in (6c)-(6e) and a set of KKT conditions of the lower-level problem (BLL), both enforced on each instance \(i\) of the training dataset. Constraint (6b) couples many instances of ideal coordination via regression policy and its role is two-fold: it structures task shifts and selects the optimal coordination features using the \(L_{1}-\)regularization. This regularization also bounds the optimal solution, which is necessary when \(\varphi_{i}=\beta_{0}+\beta_{1}x_{i}\) is a rank-deficient system of equations, i.e., having more features than virtual links. Similar to the base regression, the task shifts are restricted to the affine policy of contextual information. However, problem (6) also anticipates how the affine restriction affects the average OPF costs. 
Indeed, the choice of parameters \(\beta\) affects the task shift requests \(\varphi_{1},\ldots,\varphi_{q}\), which then alter electricity load of data centers \(\vartheta_{1},\ldots,\vartheta_{q}\) as they are coupled through the KKT conditions of the lower-level problem (BLL). Thus, the optimal solution of problem (6) returns regression parameters that are cost-optimal on average under the affine restriction. Moreover, by solving problem (6), we also guarantee the feasibility of power system and NetDC operations across historical records. Indeed, \(\beta=0\) is always a feasible choice, which corresponds to the latency-optimal solution from problem (3). Hence, in the worst-case, problem (6) chooses a non-coordinated solution to ensure feasibility for both systems. We can also reason about the feasibility of \(\phi(x)\) for unseen operational scenarios in a probabilistic sense. Indeed, the theory of sample-based approximation of stochastic programs suggests that feasibility on unseen, out-of-sample scenarios improves as the sample size \(q\) increases [20]. In the numerical experiments, we investigate the relationship between the sample size and the out-of-sample feasibility of the optimized policy \(\phi(x)\). The training optimization (6) is solved at the operational planning stage using the similar mixed-integer reformation from Appendix A. Although the problem is NP-hard, modern optimization solvers, e.g., _Gurobi_, make the optimization more practical than its worst-case complexity would imply. Then, at the real-time stage, the trained regression model instantly maps contextual features into computing task shifts. ## IV New York ISO Case Study ### _Data and Settings_ We use an \(11\)-zone aggregation of the NYISO power system depicted in Fig. 2, sourcing data from [21]. This zonal layout corresponds to the granularity of the contextual data from the NYISO website [18], which is used to train coordination policies. The power system includes approximately \(30\) GW of electricity demand, supplied by approximately \(42\) GW of conventional generation (oil, gas, and hydro) and by \(1.7\) GW of renewable generation (wind and solar). We install \(n=5\) data centers in the West, Central, North, NYC, and MillWd zones, serving customers in all \(m=11\) NYISO zones. Computing loads can thus be shifted using \(k=n(n-1)/2=10\) virtual links. The task shifts outside the NY state area are not permitted. The computing demand \(\delta_{i}\) is assumed to be proportional to the maximum peak load \(d_{i}\) in the \(i^{\text{th}}\) area, and will be scaled to achieve different NetDC penetration levels in the range from 5% to 30% of the peak system load. The operational data spans the period from January 1, 2018, to June 30, 2019, and includes 546 peak-hour records. Each record contains the following contextual features, which are readily available on the NYISO website [18]: * Zonal real-time electricity demand \((d)\); * Zonal electricity prices \((\lambda)\); * Total renewable generation, then disaggregated by zones using data on existing renewable installations \((r)\); * Power flows between aggregation zones \((f)\). 
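Given these four feature groups, the real-time side of the mechanism is simple to state in code: stack the zonal measurements into one feature vector, apply the trained affine map, and fall back to the non-coordinated solution whenever the proposed shift fails the feasibility screen, as prescribed in Section III. The sketch below illustrates this logic; the numerical ranges and the screening rule are placeholders standing in for the actual OPF and NetDC checks.

```python
import numpy as np

def build_features(d_zone, price_zone, r_zone, f_lines):
    """Stack zonal demand, prices, renewables and interface flows into one vector
    (11 + 11 + 11 + 12 = 45 features in the NYISO setting)."""
    return np.concatenate([d_zone, price_zone, r_zone, f_lines])

def apply_policy(beta0, beta1, x_hat, passes_screen):
    """phi = beta0 + beta1 @ x_hat if the shift passes the feasibility screen, else 0."""
    phi = beta0 + beta1 @ x_hat
    return phi if passes_screen(phi) else np.zeros_like(phi)

rng = np.random.default_rng(3)
x_hat = build_features(rng.uniform(500, 3000, 11),   # zonal demand [MW]
                       rng.uniform(15, 60, 11),      # zonal prices [$/MWh]
                       rng.uniform(0, 300, 11),      # zonal renewables [MW]
                       rng.uniform(-800, 800, 12))   # interface flows [MW]
beta0 = rng.normal(0, 1, 10)                         # stand-ins for trained parameters
beta1 = rng.normal(0, 1e-3, (10, 45))

# Placeholder screen: a simple magnitude cap; in practice the proposed shift is
# checked against problems (1) and (4) before being implemented.
screen = lambda phi: bool(np.all(np.abs(phi) <= 50.0))
print(apply_policy(beta0, beta1, x_hat, screen))
```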
Each record includes 45 contextual features, so that the coordination policy based on these features takes the form: \[\phi\triangleq\beta_{0}+\beta_{1}^{d}\begin{bmatrix}d_{1}\\ \vdots\\ d_{11}\end{bmatrix}+\beta_{1}^{\lambda}\begin{bmatrix}\lambda_{1}\\ \vdots\\ \lambda_{11}\end{bmatrix}+\beta_{1}^{r}\begin{bmatrix}r_{1}\\ \vdots\\ r_{11}\end{bmatrix}+\beta_{1}^{f}\begin{bmatrix}f_{1}\\ \vdots\\ f_{12}\end{bmatrix}\] To optimize and test the policy, we randomly select \(q=250\) records for training and reserve the remaining 296 records for testing, unless stated otherwise. The performance of the trained coordination policies is discussed using the unseen test set. The remaining settings include the default regularization parameters \(\varrho=10^{-5}\) and \(\varepsilon=10\). All data, code and default settings needed to replicate the results are available at [https://github.com/wdvorkin/AgentCONCUR](https://github.com/wdvorkin/AgentCONCUR) ### _Efficiency Gains of Power and NetDC Coordination_ The NYISO dispatch costs are compared in four cases: * _No coordination:_ NetDC electricity demand obeys the latency-optimal solution from problem (3); * _Ideal coordination:_ NetDC demand obeys the solution of the ideal coordination by means of the bilevel problem (BL); * _Base regression:_ NetDC demand is shifted according to the base regression policy optimized in (5); * _AgentCONCUR:_ NetDC demand is shifted according to the regression policy optimized in (6). Our results reveal that the NYISO system benefits from coordinating spatial task shifts of \(\approx 1.9\) GWh from the densely populated South towards the Central, Northern, and Western parts of the state, as shown in Fig. 2. Noticeably, the ideal coordination consistently uses the same 4 out of 10 virtual links, while the AgentCONCUR coordination policy uses more active links. This difference is due to the less flexible affine policy structure, which requires more links to ensure feasibility across the entire training dataset simultaneously, as opposed to the per-scenario feasibility satisfaction provided by the ideal coordination. Figure 3 illustrates the discrepancies in dispatch costs in all four cases. As the penetration of NetDC increases, the non-coordinated solution demonstrates a rapid, quadratic growth of dispatch costs in the NYISO system, which is dominated by conventional generation. On the other hand, the ideal coordination demonstrates a rather linear growth of dispatch costs (e.g., see the bottom plot) thanks to the cost-aware allocation of computing tasks. However, the extent of cost reduction significantly depends on the maximum allowable latency loss \(\alpha\), specified by the NetDC operator. For a small loss of 25%, users are likely to observe no difference in the quality of service. However, this enables savings of up to 24.5% of dispatch costs in the ideal coordination case, depending on the penetration level. The cost-saving potential increases to 49.0% and 56.7% in the cases of doubled and tripled latency loss, respectively, when users experience more noticeable delays during peak-hour operation. This cost-saving potential is exploited by both regression coordination policies. However, the base regression policy, which ignores power system and NetDC operational constraints, often results in substantially higher dispatch costs, which tend to stay closer to the non-coordinated solution than to the ideal one.
On the other hand, the AgentCONCUR policy, which is aware of the constraints of both systems, efficiently approximates the ideal solution, staying relatively close to it in many of the cases depicted in Fig. 3. However, it tends to show a larger approximation gap as the allowable latency loss and NetDC penetration increase. Fig. 3: Average NYISO dispatch cost across the testing dataset under different coordination models for the varying NetDC penetration level and maximum allowable latency loss. The area between the dashed lines defines the cost-saving potential for regression-based coordination. Fig. 2: 11-zone NYISO system with 5 data centers. The arrows show active virtual links under different coordination solutions for the 20% NetDC penetration level and 100% maximum latency loss. The change of NetDC electricity demand is given as the average across the test dataset. ### _Feasibility of Regression-Based Coordination_ The approximation gaps reported in Fig. 3 are due to infeasible task shifts, i.e., the shifts that violate power system constraints, NetDC constraints, or both. Whenever the task shift is infeasible in real time, the two operators resort to a more expensive yet feasible non-coordinated solution. However, the feasibility of regression-based coordination improves with a larger training dataset, as illustrated in Fig. 4. The AgentCONCUR policy dominates the base one and achieves zero violations of power system constraints (e.g., no load shedding) with sample size \(q\geqslant 150\). Moreover, for \(q\geqslant 150\), it keeps the infeasibility of NetDC operations below 7%. The dominance of AgentCONCUR is _consistent_, which is important when the set of representative records is limited. We also observed similar results across other NetDC penetration and latency parameters. Increasing the size of the training dataset also increases the computational burden of problem (6), as shown in Fig. 5. For a reasonable choice of \(q=150\), the CPU times are \(\approx 8\) hours. However, this time is required for training at the planning stage, i.e., well before the real-time operations. ### _Coordination Feature Selection_ While 45 contextual features are used in training, we demonstrate that the power-NetDC coordination can also be achieved with fewer features, i.e., with lower data requirements. We perform feature selection using the regularization parameter \(\varepsilon\) in problem (6). The smaller the \(\varepsilon\), the fewer features are used by the policy. Table I reports the selected features for various assignments of \(\varepsilon\). Observe that, as the feature space shrinks (\(\downarrow\varepsilon\)), the policy gives less priority to renewable power data, which is reasonable as the NYISO has a very small installed renewable capacity at present (e.g., only 1.7 GW). As the space further shrinks, less priority is given to electricity prices, which become less informative in uncongested cases. The power flows and electricity demands, on the other hand, are consistently present among the selected features for AgentCONCUR. Figure 6 further reveals the trade-off between the dispatch cost and the number of selected features. Approximately the same level of costs (see the dashed trend line) can be achieved in the range \(\varepsilon\in[1,10^{5}]\), selecting from 6 to \(30+\) features. Moreover, parameter \(\varepsilon\) can be optimized to achieve the optimal dispatch cost under regression-based coordination.
Here, the optimal \(\varepsilon^{\star}=10\) selects 24 features for coordination. Notably, for \(\varepsilon=0.1\), the coordination of task shifts within the entire NetDC is performed with a _single_ feature, i.e., the electricity demand of the largest demand center, New York City. Although this is not the cost-optimal choice, it is the least data-intensive coordination, which still performs better than the non-coordinated solution, as also shown in Fig. 6. Fig. 4: Infeasibility of regression-based coordination as a function of the training dataset size. Results are for 20% NetDC penetration and 25% maximum latency loss, averaged across 100 random draws of training scenarios. Fig. 5: Average CPU times to solve the mixed-integer reformulation of the bilevel program (6). NetDC penetration level is 20%. Fig. 6: OPF costs for varying regularization parameter \(\varepsilon\). The blue dots depict the average costs obtained on the test dataset, and the dashed line is the trend. The red dot marks the optimal selection that minimizes the cost on average. ## V Conclusions To streamline the economic coordination of power grids and data centers, this work proposed to transition from data-intensive optimization-based coordination to lightweight regression-based coordination. Recognizing the risks of trusting a regression model with coordinating two critical infrastructure systems, we devised a new training algorithm which inherits the structure of the optimization-based coordination and enables feasible and cost-consistent computing task shifts in real time. The case study on the NYISO system with various NetDC penetration levels revealed a 24.5-56.7% cost-saving potential, most of which was shown to be delivered by regression policies at different data-intensity preferences. There are some notable limitations that motivate several directions for future work. First, while the optimization-based coordination remunerates data center flexibility via duality theory [11], such a duality-based solution is unavailable under regression policies. It is thus relevant to study the integration of regression policies into real-time electricity markets. Moreover, while the current focus has been on spatial flexibility for peak-hour coordination, it is also relevant to explore regression policies for harnessing both spatial and temporal flexibility, as proposed before for optimization-based coordination [9]. This, in turn, may result in an increased computational burden and require decomposition. Lastly, although the proposed mechanism does not require any private data exchange at the time of coordination, it still needs sensitive data from the power system and NetDC for training at the planning stage. One potential solution to remove this practical bottleneck is the use of data obfuscation algorithms [22], yet it will require additional modifications to the training procedure to eliminate the effect of noise. ## Acknowledgements Vladimir Dvorkin is supported by the Marie Sklodowska-Curie Actions COFUND Postdoctoral Program, Grant Agreement No. 101034297 - project Learning ORDER.
### _Mixed-Integer Reformulation of the Bilevel Problem_ The Karush-Kuhn-Tucker conditions of the lower-level problem (BLL) are derived from the following Lagrangian: \[\underset{\mu}{\text{max}}\;\underset{W,\vartheta}{\text{min}} \frac{1}{2}\|\mathcal{L}(W)-\mathcal{L}(W^{\star})\|_{2}^{2}-\mu^{ \top}(W^{\top}\mathbf{1}-\delta)\] \[-\nu^{\top}(W\mathbf{1}-\vartheta)-\kappa^{\top}(A\varphi- \vartheta+\vartheta^{\star})\] \[+\langle\omega,W\rangle_{\text{F}}-\gamma(\mathcal{L}(W-W^{ \star})-\alpha\mathcal{L}(W^{\star})).\] The stationarity conditions are the partial derivatives of the Lagrangian with respect to primal variables and take the form: \[\vartheta\colon\;\nu+\kappa=\mathbf{0}, \tag{7a}\] \[w_{ij}\colon\;g_{ij}(\mathcal{L}(W-W^{\star}))-\mu_{j}-\nu_{i}- \omega_{ij}-\gamma g_{ij}=0\] \[\forall i=1,\ldots,n,\;j=1,\ldots,m \tag{7b}\] The primal feasibility amounts to constraints of problem (4), while the dual feasibility requires the dual variables of problem's inequalities to be non-negative, i.e., \[\omega_{ij}\geqslant 0,\forall i=1,\ldots,n,\;j=1,\ldots,m,\quad\gamma \geqslant 0. \tag{7c}\] The last complementarity slackness conditions write as \[\omega_{ij}w_{ij}=0,\;\forall i=1,\ldots,n,\;j=1,\ldots,m,\] \[\gamma(\mathcal{L}(W-W^{\star})-\alpha\mathcal{L}(W^{\star}))=0,\] which are non-convex. These constraints are addressed using an equivalent mixed-integer SOS1 reformulation [23]: \[\{\omega_{ij},w_{ij}\}\in\text{SOS1}, \tag{7d}\] \[\{\gamma,\mathcal{L}(W-W^{\star})-\alpha\mathcal{L}(W^{\star})\} \in\text{SOS1}, \tag{7e}\] where formulation \(\{x,y\}\in\text{SOS1}\) means that at most one variable may be nonzero. The equivalent reformulation of problem (BL) is then obtained when the lower-level problem (BLL) is replaced with constraints (4b)-(4e) and (7).
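To illustrate how the SOS1 conditions (7d)-(7e) can be handed to an off-the-shelf solver, the sketch below declares two such sets through Gurobi's Python interface. It covers a single \((i,j)\) pair only, and it introduces a hypothetical auxiliary slack variable for the latency budget, since SOS1 sets must be declared over variables rather than expressions; this is a simplified sketch under these assumptions, not the implementation used in the paper.

```python
import gurobipy as gp
from gurobipy import GRB

# Toy illustration for a single (i, j) pair; in the full reformulation these
# declarations are repeated for every entry of W.
m = gp.Model("kkt_sos1")

w = m.addVar(lb=0.0, name="w_ij")           # primal routing variable, w_ij >= 0
omega = m.addVar(lb=0.0, name="omega_ij")   # dual multiplier of w_ij >= 0
gamma = m.addVar(lb=0.0, name="gamma")      # dual multiplier of the latency budget
s = m.addVar(lb=0.0, name="latency_slack")  # hypothetical slack: alpha*L(W*) - L(W - W*)

# Complementarity omega_ij * w_ij = 0 and gamma * slack = 0 as SOS1 sets,
# i.e., at most one variable in each set may be nonzero.
m.addSOS(GRB.SOS_TYPE1, [omega, w], [1, 2])
m.addSOS(GRB.SOS_TYPE1, [gamma, s], [1, 2])

m.update()
```

The remaining stationarity and primal/dual feasibility conditions enter the model as ordinary linear constraints.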
2310.20343
Large Multi-modal Encoders for Recommendation
In recent years, the rapid growth of online multimedia services, such as e-commerce platforms, has necessitated the development of personalised recommendation approaches that can encode diverse content about each item. Indeed, modern multi-modal recommender systems exploit diverse features obtained from raw images and item descriptions to enhance the recommendation performance. However, the existing multi-modal recommenders primarily depend on the features extracted individually from different media through pre-trained modality-specific encoders, and exhibit only shallow alignments between different modalities - limiting these systems' ability to capture the underlying relationships between the modalities. In this paper, we investigate the usage of large multi-modal encoders within the specific context of recommender systems, as these have previously demonstrated state-of-the-art effectiveness when ranking items across various domains. Specifically, we tailor two state-of-the-art multi-modal encoders (CLIP and VLMo) for recommendation tasks using a range of strategies, including the exploration of pre-trained and fine-tuned encoders, as well as the assessment of the end-to-end training of these encoders. We demonstrate that pre-trained large multi-modal encoders can generate more aligned and effective user/item representations compared to existing modality-specific encoders across three multi-modal recommendation datasets. Furthermore, we show that fine-tuning these large multi-modal encoders with recommendation datasets leads to an enhanced recommendation performance. In terms of different training paradigms, our experiments highlight the essential role of the end-to-end training of large multi-modal encoders in multi-modal recommendation systems.
Zixuan Yi, Zijun Long, Iadh Ounis, Craig Macdonald, Richard Mccreadie
2023-10-31T10:33:23Z
http://arxiv.org/abs/2310.20343v2
# Large Multi-modal Encoders for Recommendation ###### Abstract. In recent years, the rapid growth of online multimedia services, such as e-commerce platforms, has necessitated the development of personalised recommendation approaches that can encode diverse content about each item. Indeed, modern multi-modal recommender systems exploit diverse features obtained from raw images and item descriptions to enhance the recommendation performance. However, the existing multi-modal recommenders primarily depend on the features extracted individually from different media through pre-trained modality-specific encoders, and exhibit only shallow alignments between different modalities - limiting these systems' ability to capture the underlying relationships between the modalities. In this paper, we investigate the usage of large multi-modal encoders within the specific context of recommender systems, as these have previously demonstrated state-of-the-art effectiveness when ranking items across various domains. Specifically, we tailor two state-of-the-art multi-modal encoders (CLIP and VLMo) for recommendation tasks using a range of strategies, including the exploration of pre-trained and fine-tuned encoders, as well as the assessment of the end-to-end training of these encoders. We demonstrate that pre-trained large multi-modal encoders can generate more aligned and effective user/item representations compared to existing modality-specific encoders across three multi-modal recommendation datasets. Furthermore, we show that fine-tuning these large multi-modal encoders with recommendation datasets leads to an enhanced recommendation performance. In terms of different training paradigms, our experiments highlight the essential role of the end-to-end training of large multi-modal encoders in multi-modal recommendation systems. ## 1. Introduction This work provides a comprehensive comparison of state-of-the-art multi-modal recommenders, as well as actionable insights regarding how such encoders should be trained. The primary contributions of this study are three-fold: (1) We systematically investigate the integration of two architecturally representative types of LMM encoders, CLIP and VLMo, into five different recommendation models. Our investigation leads to significant improvements in effectiveness across three distinct recommendation datasets; (2) We investigate the impact of fine-tuning the CLIP and VLMo with associated item image and textual descriptions from each dataset, showing that fine-tuning leads to increased effectiveness; (3) We compare and contrast a two-step training (i.e., pre-training followed by fine-tuning) with an end-to-end training of the encoders. Our findings highlight the advantages and implications of using an LMM encoder for an improved performance. In summary, we conduct a large-scale empirical investigation addressing 5 dimensions of multi-modal recommendation and their combination, namely recommendation models, multi-modal extractors, training paradigms, datasets and metrics. Our comprehensive evaluation across 480 cases yields key insights into the effectiveness of the LMM encoders in multi-modal recommendation.
Specifically, when integrating pre-trained LMM encoders, we observe significant improvements in 79% of the 120 tested cases compared to those using modality-specific encoders. Moreover, further significant performance gains are noted when fine-tuning the encoders on the three used datasets. On the other hand, while a costly end-to-end training results in little performance up-lift for the unified encoder architectures, it significantly benefits the dual-stream LMM encoder. More generally, these findings emphasise the importance of establishing a deeper modality alignment, facilitated by the LMM encoders, for enhanced representation learning in the multi-modal recommendation task. ## 2. Related Work In this section, we discuss methods and techniques related to our study, namely multi-modal recommendation, modality-specific encoders and large multi-modal encoders. ### Multi-modal Recommendation Multi-modal recommendation systems aim to leverage auxiliary multi-modal information, supplementing historical user-item interactions to enhance the recommendation performance (Han et al., 2017; Wang et al., 2018; Wang et al., 2019). Numerous approaches have been proposed for incorporating multi-modal features into recommendation systems, employing diverse methods to effectively integrate information from different modalities. VBPR (Wang et al., 2018) is one of the first models to incorporate visual features into recommendation systems by concatenating visual embeddings with ID embeddings in the item representation. MMGCN (Wang et al., 2019) further advances this approach by injecting high-order semantics into user/item representation learning through several graph convolutional layers. This method generates aggregated representations for each modality and combines them using either mean or sum operations, resulting in the final fused representations. Recent advances in multi-modal recommendation have resulted in the emergence of self-supervised learning as a solution, as demonstrated by methods such as MMGCL (Wang et al., 2019) and SLMRec (Wang et al., 2019). These models devise augmentations on modality-specific user-item graphs to enhance multi-modal _feature alignment_, enabling them to synthesise information across different modalities for a more coherent representation. Another line of approaches effectively mines item-item structures to enhance item representation learning by capturing the underlying relationships and similarities between items. For instance, LATTICE (Wang et al., 2019) constructs item-item graphs for each modality based on the user-item bipartite graphs, performing graph convolutional operations several times on both the item-item graphs and the user-item interaction graphs to obtain more comprehensive and informative user and item representations, which better reflect the complex interactions and dependencies among items and users. This process contributes to aligning multi-modal features by uncovering latent item-item relationships and associating items with similar modality features. However, existing methods primarily rely on the concatenation or combination of static, extracted representations from each modality, thereby only performing a _shallow alignment_ for multi-modal fusion, which cannot deeply capture the interrelations among the modalities.
To the best of our knowledge, there are no existing approaches that perform deep feature alignment for each modality in multi-modal recommendations, which necessitates comprehensively learning the relationships between modalities to achieve a more effective representation of the multi-modal data. In this work, we investigate this research direction, with the objective of integrating deep alignment methods like CLIP (Wang et al., 2018) or VLMo (Chen et al., 2018) as a supplementary component into existing multi-modal recommendation models. ### Modality-Specific Encoders Feature extraction is crucial in multi-modal recommendation because it enables the identification of meaningful and discriminative information from various modalities, such as textual, visual, and auditory data (Wang et al., 2019). By effectively capturing the intrinsic properties of each modality and their relationships, the recommendation models can better comprehend and represent the items and users (Wang et al., 2019). In the multi-modal recommendation literature, various approaches employ different modality-specific encoders to extract features from raw data. VECF (Chen et al., 2018) uses the VGG-19 model (Wang et al., 2019) for the pre-segmentation of images, capturing users' attention on different image regions. VBPR (Wang et al., 2018) extracts visual features from item images using a pre-trained Deep CNN (Chen et al., 2018). MMGCN (Wang et al., 2019) employs the ResNet50 (He et al., 2016) model for visual feature extraction, Sentence2Vector (He et al., 2016) for deriving textual features from micro video descriptions, and VGGish (Chen et al., 2018) for learning acoustic features. LATTICE uses Deep CNN (Chen et al., 2018) and Sentence-Transformer (Wang et al., 2019) for visual and textual feature extraction, respectively. \begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & Amazon Sports & Amazon Clothing & Amazon Baby \\ Methods & NDCG@20 & NDCG@20 & NDCG@20 \\ \hline MMGCL (V\&T) & 0.0428 & 0.0277 & 0.0352 \\ MMGCL (V) & 0.0420 & 0.0294 & 0.0343 \\ MMGCL (T) & **0.0433** & **0.0323** & **0.0360** \\ \hline LATTICE (V\&T) & 0.0424 & 0.0336 & **0.0374** \\ LATTICE (V) & 0.0420 & 0.0343 & 0.0365 \\ LATTICE (T) & **0.0441** & **0.0352** & 0.0372 \\ \hline \hline \end{tabular} \end{table} Table 1. Comparative analysis of NDCG@20 scores for MMGCL and LATTICE models using different modality inputs across used datasets. V/T are the abbreviations for Visual/Textual, respectively. However, employing separate encoders for each modality can result in heterogeneous multi-modal features. This means that features from different modalities do not inhabit the same semantic space, which can potentially lead to overfitting. This happens as each modality-specific encoder may independently capture noise present in the data, thereby diminishing the model's generalisation capability (Han et al., 2017). Moreover, this separation between uni-modal extractors can encourage the model to seek shortcuts from uni-modal features on preference scores, rather than effectively leveraging the interdependence between modalities (Wang et al., 2019). To address the above issues, it is important to ensure consistency among the extracted uni-modal features before inputting them into the recommendation models. Hence, applying a multi-modal extractor that can transform heterogeneous data into a common latent space is a more reasonable approach.
This potentially enhances the performance and generalisation of the recommendation model by effectively capturing the underlying relationships between modalities, allowing for producing more comprehensive item and user representations. In this paper, we leverage the large multi-modal encoders to capture the intrinsic properties of each item's modality and their relationships, thereby resulting in more accurate recommendations. ### Large Multi-modal (LMM) Encoders The remarkable success of transformer-based pre-training in the Natural Language Processing (NLP) community has led to extensive research on multi-modal pre-training, particularly as various large-scale multi-modal corpora have emerged. The self-attention mechanism finds global patterns by examining all word connections, regardless of distance, and effectively captures long-range dependencies without using fixed windows or sequential processing. This inherent characteristic allows a transformer to operate in a modality-agnostic manner compatible with various modalities. Recent studies (Yang et al., 2019; Wang et al., 2020; Wang et al., 2020) have shown that when pre-trained on large-scale multi-modal corpora, transformer-based models not only significantly outperform their competitors (such as traditional recurrent neural networks and convolutional neural networks) across a wide range of multi-modal downstream tasks, and are effective for zero-shot scenarios, where it is important to be able to generalise to new tasks or domains without any task-specific fine-tuning or additional training. In the literature of multi-modal encoders, there are primarily two lines of approaches: (1) _dual-stream architectures_, such as VSE (Chen et al., 2019), CLIP (Liu et al., 2019), ViLBERT (Liu et al., 2019), which consist of a vision transformer and a language transformer. The vision transformer processes images, while the language transformer handles textual data. Both encoders generate embeddings for their respective inputs, which are then aligned using several fusion layers; (2) _unified architectures_, such as VLMo (Chen et al., 2019), which jointly processes multi-modal data into a Mixture-of-Modality-Experts (MoME) Transformer to obtain contextualised representations and align the visual and language feature vectors. The MoME Transformer is used to encode different modalities, with a mixture of modality experts replacing the feed-forward network of a standard Transformer. Each MoME Transformer block captures modality-specific information by switching to a different modality expert and employs multi-head self-attention (MSA) shared across modalities to align visual and text content. As discussed in Section 2.1 and Section 2.2, feature alignment and feature extraction are important research points in multi-modal recommendation systems. These extraction and alignment processes can capture and exploit the relationships between different modalities and generate more effective representations for downstream tasks. Despite the significant benefits offered by the LMM encoders, their integration into recommendation systems has not been investigated extensively to-date. Hence, in this work, we choose two representative LMM encoders, CLIP and VLMo, each representing distinct architectural approaches to multi-modal encoders, for the purpose of extracting and aligning multiple modalities in the context of the recommendation task under various training paradigm settings. ## 3. 
Probing Large Multi-Modal Encoders in Recommendation In light of the advantages of CLIP and VLMo for multi-modal representation learning, as discussed in Section 2, we detail the settings in which we extend the use of CLIP and VLMo for the recommendation task. First, we describe the process of using multi-modal embeddings, obtained from CLIP and VLMo, to initialise the user/item embeddings in the existing recommendation models. Then, we describe the method for fine-tuning CLIP and VLMo using the recommendation datasets and illustrate the integration of CLIP and VLMo with the existing recommendation models in an end-to-end approach. ### Multi-modal Encoding through LMMs In this section, we introduce how CLIP and VLMo encode multi-modal item representations from the raw data: (1) **CLIP:** For CLIP, raw images and texts are encoded into image and text vector representations. CLIP leverages the Vision Transformer (ViT) architecture (Liu et al., 2019) to process image representations by dividing the input image into non-overlapping patches, flattening them into vectors, and linearly projecting them to create patch embeddings. Text representations are generated using the GPT-2 (Wang et al., 2019) model, after tokenising the raw text input using byte pair encoding (BPE) and adding positional embeddings. Figure 1. An illustration of different feature extraction methods. Figure 2. The architectures of the large multi-modal encoders. (2) **VLMo:** Unlike CLIP, VLMo operates as a unified multi-modal model. It processes the concatenation of image and text inputs as a single unit and produces the corresponding vector embeddings. Image inputs are created by splitting images into patches, flattening these patches, and then linearly projecting them to form patch embeddings. A learned special token [I_CLS] is added to the sequence, alongside the position and type embeddings. The text input comprises tokens generated from raw text using a BERT tokeniser, with the addition of a start-of-sequence token ([T_CLS]) and a boundary token ([T_SEP]). Just like the image input, the final text input is a composite of word, position, and type embeddings. Following this, the MoME Transformer is deployed to encode different modalities, with the language and visual expert components (Beng et al., 2019) respectively extracting modality-specific information to produce the textual and visual embeddings. To illustrate how the multi-modal embeddings of CLIP and VLMo can be used as initial item embeddings in multi-modal recommendation models, we use the VLMo-Base Plus model variant as an example. The resulting text embeddings for all items from the VLMo-Base Plus encoder have a shape of [item number, text_token_length+2, 544], where each item comprises the number of raw text tokens along with [CLS] and [SEP] tokens. The image embeddings of each item are obtained analogously as a sequence of patch embeddings with the same hidden dimension. Unlike existing approaches that use pre-extracted features within the datasets, we download the raw images from the item URLs and encode them with the LMM encoders, instead of using pre-extracted 4096-dimensional visual features of items [(15)]. For the textual features, we employ the title, description, brand, and categorical information of items and also encode them with the LMM encoders. Contrasting with existing approaches that use Sentence-Transformer to extract 384-dimensional textual embeddings [(31)], we use CLIP and VLMo to encode the raw text of items into 768 and 544 dimensions, respectively. The exact statistics of the used datasets are presented in Table 2.
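For illustration, the following sketch shows how such joint visual and textual item embeddings can be obtained from the publicly released CLIP ViT-B/16 checkpoint through the Hugging Face `transformers` API. The checkpoint identifier, the example file name, and the use of the projected `get_image_features`/`get_text_features` outputs (whose dimensionality differs from the 768/544-dimensional features discussed above) are assumptions made for this sketch rather than the authors' exact extraction pipeline.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public CLIP ViT-B/16 checkpoint (assumed identifier, used here for illustration).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
model.eval()

image = Image.open("item_image.jpg")                        # raw item image
text = "item title, description, brand and category text"   # concatenated item text

with torch.no_grad():
    inputs = processor(text=[text], images=[image], return_tensors="pt",
                       padding=True, truncation=True)
    visual_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    textual_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])

# visual_emb and textual_emb can then initialise the item embeddings of a
# downstream recommender, in place of CNN / Sentence-Transformer features.
```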
#### 4.1.2. Evaluation Protocols Similar to the evaluation setting in [(31; 32)], we randomly split the datasets into training, validation, and testing sets with an 8:1:1 ratio. To perform negative sampling for each user, we sample items that have no prior interactions with the user from the history of observed user-item interactions. We use two commonly used evaluation metrics, namely Recall@K and NDCG@K, to evaluate the performance of top-\(K\) recommendation. We follow [(31)] in setting K = 10, 20 and report the average performance achieved for all users in the testing set. Following the settings in [(26; 32)], an all-rank item evaluation strategy is used to measure the used metrics. We use the Adam [(9)] optimiser in both the LMM-enhanced models and the five baseline models. We apply an early-stopping strategy during training, terminating the training when the validation loss does not decrease for 50 epochs. #### 4.1.3. Baselines To examine the effectiveness of the LMM encoders, we compare the performance of recommendation models using pre-trained modality-specific encoders--employing CNN and Sentence-Transformer independently for each modality--with that of the pre-trained LMM encoders like CLIP and VLMo, which jointly extract information from both modalities. In this paper, we choose five state-of-the-art multi-modal recommendation models, as follows: * **VBPR [(7)]:** This model integrates multimedia features into the matrix factorisation for recommendation. Specifically, it incorporates visual features into the matrix decomposition. * **MMGCN [(26)]:** This model employs graph convolutional networks (GCN) to propagate modality-specific embeddings and capture modality-related user preferences for multi-modal recommendation. The final user and item representations are generated by combining the learned representations from each modality, resulting in improved multi-modal recommendations. In our experiments, we differentiate MMGCN from VBPR by not only incorporating visual features but also integrating textual features derived from CLIP and VLMo. * **MMGCL [(30)]:** This is a self-supervised graph learning model that leverages modality edge dropout and modality masking to learn complex user preferences. Furthermore, it introduces a novel negative sampling technique to learn the correlation between multiple modalities and performs multi-task learning by combining both the Bayesian personalised ranking (BPR) loss and a self-supervised loss. * **SLMRec [(23)]:** This is also a self-supervised graph learning model. It emphasises the importance of individual modalities by creating fine and coarse spaces to align features across modalities, thereby enhancing consistency for improved fusion. By treating each modality as a distinct feature, this model leverages self-supervised learning to generate supervised signals by contrasting different item embeddings via augmentation. Different from MMGCL, which uses a multi-task loss, SLMRec only leverages a self-supervised learning loss as the main loss. * **LATTICE [(31)]:** This model constructs item-item graphs for each modality based on the user-item bipartite graph and subsequently aggregates them to generate latent item graphs. It focuses on mining latent semantic structures between items by learning item-item graphs derived from their multi-modal features.
The model then performs graph convolutional operations on both the item-item graphs and user-item interaction graphs to obtain user and item representations, ultimately identifying latent item-item relations and connecting items with similar modality features. #### 4.1.4. Model Checkpoints and Hyperparameter Settings All used baselines (VBPR2, MMGCN3, MMGCL4, SLMRec5, LATTICE6) and the LMM encoders (CLIP ViT-B/167, VLMo-Base Plus8) are implemented with PyTorch and trained on a GPU A6000 with 48GB of memory. To facilitate a comparison of the impact of the LMM encoders on the recommendation effectiveness, we make deliberate choices for the CLIP and VLMo variants based on their reported performances in the literature. Specifically, for CLIP, we opt for the ViT-B/16 variant as the image encoder, motivated by its superior performance in image tasks when compared to other CNN image encoders within the CLIP model framework [(17)]. For VLMo, our choice is the VLMo-Base Plus model, similarly driven by its demonstrated effectiveness [(1)]. Both models have comparable numbers of parameters (151 vs. 167 million), ensuring a fair comparison. The architectural differences between CLIP's ViT-B/16 and VLMo-Base are depicted in Fig. 2, which offers a visual juxtaposition of their structures. We use the authors' original code for VLMo and CLIP in our experiments, but note that their code for VLMo is implemented with the PyTorch Lightning framework. We converted it into pure PyTorch code without using the PyTorch Lightning library. Our motivations for converting the original code from PyTorch Lightning to pure PyTorch include greater customisation, compatibility, and performance considerations. By using pure PyTorch, we gain more control and flexibility, allowing us to tailor the code to specific requirements, which is beneficial for future research needs. While optimising the LMM encoders, we observe a marked acceleration in training speed upon transitioning from PyTorch Lightning to pure PyTorch. In our quest to pinpoint the most effective hyper-parameters, we undertake extensive parameter searches for each recommendation dataset, using metrics from the validation set. These evaluations are performed during both the fine-tuning phase and the end-to-end training. Specifically, we experimented with learning rates spanning \(\{\)1e\({-}\)5, 3e\({-}\)5, 5e\({-}\)5, 1e\({-}\)4, 1e\({-}\)3\(\}\) and varied \begin{table} \begin{tabular}{l|c c c} \hline \hline & Sports & Calogine & Baby \\ \hline \hline \hline Users & 13,536 & 13,387 & 15,4983 \\ Items & 18,387 & 22,489 & 7,637 \\ Interaction & 29,336 & 27,101 & 16,522 \\ Interaction & 0,060,060 & 6,003 & 6,0012 \\ CNN-CLIF/VLMo Visual Dimension & 400,708,704 & 400,708,704 & 400,708,704 \\ Simony-T-Tesformer/CLIP/VLMo Textual Dimension & 3847,708,704 & 3847,708,704 & 3847,708,704 \\ CNN-Sentence-Transformer/CLIP/VLMo Parameters (million) & 170,153,167 & 1702,153,167 & 1703,153,167 \\ \hline \hline \end{tabular} \end{table} Table 2. Statistics of the used Amazon datasets. batch sizes, including \(\{32,64,128,256,1024\}\). The grid search procedure is conducted in accordance with the available code of MMRec9 and is applied to all model variants we evaluated in the experiments. Footnote 9: [https://github.com/enoche/MMRec](https://github.com/enoche/MMRec) ### Pre-trained Modality-specific Encoder vs. 
Pre-trained LMM Encoders (RQ1) As discussed in Section 4.1.3, to ensure a fair comparison, we primarily focus on the results obtained from recommendation models that use pre-trained modality-specific encoders. These models employ CNN and Sentence-Transformer independently for each modality, providing a consistent baseline for evaluating the impact of incorporating CLIP and VLMo. These results are compared with those using CLIP ViT-B/16 and VLMo-Base Plus, which jointly extract information from both modalities. To evaluate the statistical significance of performance differences between the five selected multi-modal recommendation models with and without the integration of CLIP/VLMo, we use the paired t-test (\(p<0.05\)). Table 3 presents the results of our conducted experiments across 120 cases, comparing the performance of recommendation models using pre-trained modality-specific encoders (VBPR, MMGCN, MMGCL, SLMRec, and LATTICE) with those employing the pre-trained LMM encoders (VBPR\(CLIP/VLMo\), MMGCN\(CLIP/VLMo\), MMGCL\(CLIP/VLMo\), SLMRec\(CLIP/VLMo\), LATTICE\(CLIP/VLMo\)) in the context of a multi-modal recommendation task. From the table, we observe that for all three used datasets, 79% of the cases tested with the recommendation models (VBPR, MMGCN, MMGCL, SLMRec), using CLIP/VLMo as encoders, significantly outperform the models using the original modality-specific encoders. This observation demonstrates the effectiveness of using the LMM encoders as feature extractors, which enables collaborative multi-modal feature generation and mitigates the issue of heterogeneity between visual and textual modalities. We now focus on comparing the LATTICE variants, a model that exhibits distinct trends among the five baselines. Indeed, we observe that LATTICE performs generally better than LATTICE\(CLIP\) and LATTICE\(VLMo\) on all three used datasets. This contrasts with the other baselines models, which are generally improved by CLIP and VLMO, and suggests a possible discrepancy between CLIP/VLMo and LATTICE in terms of feature alignment. Recall from Section 2.1 that LATTICE constructs item-item graphs based on the semantic similarities across different modalities. This objective is conceptually in conflict with that of CLIP/VLMo, which generates deeply aligned features that could lead to a denser item-item graph in LATTICE. Consequently, the features extracted by CLIP/VLMo may result in inadequate item-item graphs, leading to a decline in performance. On the other hand, as observed in Table 3, the recommendation models employing CLIP as an encoder and those using VLMo as an encoder exhibit similar performances for all three datasets, with the exception of CLIP outperforming VLMo on the Amazon Clothing dataset. To further determine the optimal multi-modal encoder with different settings, we continue our investigation in the following experiments involving fine-tuning and end-to-end training paradigms. Overall, to answer RQ1, we conducted a large-scale empirical investigation of the pre-training setup. Our exploration addressed five dimensions of multi-modal recommendation and their combinations: recommendation models, multi-modal extractors, training paradigms, datasets, and metrics. In total, 120 cases were examined. From the experiments, we conclude that the pre-trained LMM encoders are more effective at extracting and aligning visual and textual features from raw images and text, especially when compared to methods using CNN and Sentence-Transformer. 
Indeed, we observe a significant improvement in performance in 79% of the tested cases. ### Fine-tuning & End-to-end (RQ2) In Section 4.2, we have successfully integrated the pre-trained LMM encoders into the multi-modal recommendation models and demonstrated their effectiveness. In this section, we investigate the impact of the fine-tuning and end-to-end training paradigms when incorporating CLIP and VLMo into the recommendation models. Table 4 presents the performance changes observed in these recommendation models over 240 cases when using the pre-trained and fine-tuned CLIP and VLMo as multi-modal feature extractors. Each tested case corresponds to a specific model using either the PT or FT variants, as indicated in the relevant rows for each model in the table. Within the table, PT/FT/ETE are the abbreviations for Pre-Training, Fine-Tuning and End-To-End, respectively. These experiments enable us to draw further conclusions about the effectiveness of the LMM encoders in the context of multi-modal recommendation. Table 4 shows that in 70% of the cases across all three datasets, the fine-tuned LMM encoders exhibit a significant improvement in recommendation performance compared to their pre-trained counterparts. This suggests that when the LMM encoders are fine-tuned with the recommendation datasets, they can be effectively transferred to the recommendation domain. This fine-tuning enhances visual and textual embeddings by achieving a deeper alignment between modalities. Consequently, these enhanced, well-aligned item embeddings, result in more accurate and contextually relevant embeddings for multi-modal recommendation models. However, we observe that there is no performance improvement for both MMGCL\(CLIP-FT\) and MMGCL\(VLMo-FT\) compared to their respective pre-trained variants on the Amazon Clothing dataset. One potential reason for the observed decrease in performance could be that the fine-tuning process for this MMGCL model has led to overfitting the training data, causing a decrease in performance on the validation and test data. This overfitting could occur if the MMGCL model becomes too specialised in capturing the patterns in the training data, resulting in a decreased ability to generalise to unseen validation and test data. In line with the observations from Section 4.2, we find that the LATTICE variants using the fine-tuned CLIP/VLMo encoders continue to underperform when compared to the ones using the pre-trained CLIP/VLMo on all three datasets. This confirms our assumption that this is caused by the conceptual conflict between LATTICE, which aims to construct item-item graphs based on semantic similarities across different modalities, and the fine-tuned CLIP/VLMO. Since the fine-tuned CLIP/VLMO encoders can generate more closely aligned multi-modal features than the pre-trained ones, this conflict becomes more pronounced, potentially affecting the performance of LATTICE. Another setting not previously considered in the literature is the investigation of the impact of using an end-to-end training paradigm when incorporating CLIP and VLMo into the recommendation models. We perform experiments to address this gap and to gauge the impact of this training approach. This important investigation allows us to determine (1) which is the most effective training paradigm, and (2) whether recommendation losses can further enhance multi-modal representation learning in the context of a multi-modal recommendation task. 
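To make the end-to-end paradigm concrete, the following self-contained sketch jointly optimises a stand-in multi-modal encoder and a recommender under the BPR loss, so that the recommendation loss back-propagates into the encoder. The linear layer standing in for CLIP/VLMo, the toy data and all dimensions are assumptions made for illustration only, not the configuration used in the reported experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: in the actual setting the encoder would be CLIP or VLMo applied
# to raw item images and text; here a linear layer over random 512-d "raw" item
# features plays that role so the sketch runs end to end.
n_users, n_items, feat_dim, emb_dim = 100, 200, 512, 64
item_raw = torch.randn(n_items, feat_dim)       # placeholder raw item inputs

encoder = nn.Linear(feat_dim, emb_dim)           # stand-in multi-modal encoder
user_table = nn.Embedding(n_users, emb_dim)      # recommender's user embeddings

def bpr_loss(u, pos, neg):
    """Bayesian personalised ranking loss over (user, positive, negative) triples."""
    return -F.logsigmoid((u * pos).sum(-1) - (u * neg).sum(-1)).mean()

# End-to-end training: a single optimiser over encoder and recommender parameters,
# so gradients of the recommendation loss reach the encoder.
opt = torch.optim.Adam(list(encoder.parameters()) + list(user_table.parameters()),
                       lr=1e-4)

for _ in range(10):                              # toy training loop
    users = torch.randint(0, n_users, (32,))
    pos = torch.randint(0, n_items, (32,))
    neg = torch.randint(0, n_items, (32,))
    loss = bpr_loss(user_table(users), encoder(item_raw[pos]), encoder(item_raw[neg]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the two-stage paradigm, by contrast, the encoder parameters would be frozen (excluded from the optimiser) after pre-training or fine-tuning.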
Table 4 also presents a detailed comparison of the results between the two-stage training and end-to-end training approaches across 240 cases. These tested cases are differentiated by each model using FT and ETE variants, \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline **Dataset** & \multicolumn{4}{c}{Amazon Sports} & \multicolumn{4}{c}{Amazon Clothing} & \multicolumn{4}{c}{Amazon Baby} \\ \hline Methods & Recall@10 & Recall@20 & NDCG@10 & NDCG@20 & Recall@10 & Recall@20 & NDCG@10 & NDCG@20 & Recall@10 & Recall@20 & NDCG@10 & NDCG@20 \\ \hline VBPK\({}_{\text{IMG}-FT}\) & 0.0592\({}^{\circ}\) & 0.0847\({}^{\circ}\) & 0.0299\({}^{\circ}\) & 0.0373\({}^{\circ}\) & 0.0399\({}^{\circ}\) & 0.0577\({}^{\circ}\) & 0.0221\({}^{\circ}\) & 0.056\({}^{\circ}\) & 0.0497\({}^{\circ}\) & 0.0774\({}^{\circ}\) & 0.0274\({}^{\circ}\) & 0.0346\({}^{\circ}\) \\ VBPK\({}_{\text{IMG}-FT}\) & **0.0593\({}^{\circ}\)** & **0.0877\({}^{\circ}\)** & **0.0321** & **0.0395** & **0.0418** & **0.0614** & **0.0230** & **0.0280** & **0.0523** & **0.0890** & **0.0356** \\ VBPK\({}_{\text{IMG}-FT}\) & 0.0485\({}^{\circ}\) & 0.0706\({}^{\circ}\) & 0.0271\({}^{\circ}\) & 0.0329\({}^{\circ}\) & 0.0286\({}^{\circ}\) & 0.0445\({}^{\circ}\) & 0.0159\({}^{\circ}\) & 0.0199\({}^{\circ}\) & 0.0518 & 0.0799 & 0.0273\({}^{\circ}\) & 0.0346 \\ \hline VBPK\({}_{\text{IPP}-PT}\) & 0.0536\({}^{\circ}\) & 0.0802\({}^{\circ}\) & 0.0288\({}^{\circ}\) & 0.0357\({}^{\circ}\) & 0.0435\({}^{\circ}\) & 0.0675\({}^{\circ}\) & 0.0235\({}^{\circ}\) & 0.0296\({}^{\circ}\) & 0.0487\({}^{\circ}\) & 0.0762\({}^{\circ}\) & 0.0265 & 0.0336\({}^{\circ}\) \\ VBPK\({}_{\text{IPP}-Filter}\) & 0.0546\({}^{\circ}\) & 0.0818\({}^{\circ}\) & 0.0294\({}^{\circ}\) & 0.0364\({}^{\circ}\) & **0.0668** & **0.0715** & **0.0250** & **0.0313** & **0.0506** & **0.0790** & **0.0271** & **0.0344** \\ VBPK\({}_{\text{IPP}-Filter}\) & **0.0594** & **0.0989** & **0.0323** & **0.0395** & **0.0433** & 0.0674\({}^{\circ}\) & 0.0234\({}^{\circ}\) & 0.0295\({}^{\circ}\) & 0.0497\({}^{\circ}\) & 0.0766\({}^{\circ}\) & 0.0269 & 0.0340 \\ \hline MMCN\({}_{\text{IMG}-FT}\) & 0.0295\({}^{\circ}\) & 0.0484\({}^{\circ}\) & 0.0161\({}^{\circ}\) & 0.0208\({}^{\circ}\) & 0.0156\({}^{\circ}\) & 0.0252\({}^{\circ}\) & 0.0086\({}^{\circ}\) & 0.0104\({}^{\circ}\) & 0.0336\({}^{\circ}\) & 0.0538\({}^{\circ}\) & 0.0172\({}^{\circ}\) & 0.0224\({}^{\circ}\) \\ MMCN\({}_{\text{IMG}-FT}\) & 0.0315\({}^{\circ}\) & 0.0511\({}^{\circ}\) & 0.0168 & 0.0218 & 0.0165\({}^{\circ}\) & 0.0262\({}^{\circ}\) & 0.0095\({}^{\circ}\) & 0.0111\({}^{\circ}\) & 0.0342\({}^{\circ}\) & 0.0539\({}^{\circ}\) & 0.0179\({}^{\circ}\) & 0.0230\({}^{\circ}\) \\ MMCN\({}_{\text{IMG}-FT}\) & **0.0319** & **0.0522** & **0.0169** & **0.0221** & **0.0182** & **0.0306** & **0.0095** & **0.0127** & **0.0354** & **0.0574** & **0.0188** & **0.0245** \\ \hline MMCN\({}_{\text{IMG}-FT}\) & 0.0312\({}^{\circ}\) & 0.0494\({}^{\circ}\) & 0.0165\({}^{\circ}\) & 0.0216\({}^{\circ}\) & 0.0163\({}^{\circ}\) & 0.0253\({}^{\circ}\) & 0.0090\({}^{\circ}\) & 0.0111\({}^{\circ}\) & 0.0376\({}^{\circ}\) & 0.0060\({}^{\circ}\) & 0.0198\({}^{\circ}\) & 0.0257\({}^{\circ}\) \\ MMCN\({}_{\text{IPP}-ST}\) & 0.0320\({}^{\circ}\) & 0.0515\({}^{\circ}\) & 0.0176\({}^{\circ}\) & 0.0229\({}^{\circ}\) & 0.0173\({}^{\circ}\) & 0.0263\({}^{\circ}\) & 0.0099\({}^{\circ}\) & 0.0114\({}^{\circ}\) & 0.0379\({}^{\circ}\) & 0.0598\({}^{\circ}\) & 0.0201 & 0.0257\({}^{\circ}\) \\ MMCN\({}_{\text{ICIPP}-ITE}\) & **0.0376** & **0.0592** & **0.0197** & 
**0.0253** & **0.0196** & **0.0323** & **0.0102** & **0.0134** & **0.0393** & **0.0621** & **0.0208** & **0.0267** \\ \hline MMCN\({}_{\text{IMG}-FT}\) & 0.0646\({}^{\circ}\) & 0.0941\({}^{\circ}\) & **0.0371** & 0.0446 & **0.0446** & 0.0648\({}^{\circ}\) & **0.0247** & **0.0299** & 0.0540\({}^{\circ}\) & 0.0822\({}^{\circ}\) & 0.0298\({}^{\circ}\) & 0.0370\({}^{\circ}\) \\ MMCN\({}_{\text{IMG}-FT}\) & **0.0667** & **0.0980** & **0.0371** & **0.0452** & **0.0445** & **0.0653** & 0.0238 & 0.0249 & **0.0551** & **0.0285** & **0.0306** & **0.0303** \\ MMCN\({}_{\text{IMG}-FT}\) & 0.0647\({}^{\circ}\) & 0.0951\({}^{\circ}\) & 0.0356\({}^{\circ}\) & 0.0435\({}^{\circ}\) & 0.0409\({}^{\circ}\) & 0.0634\({}^{\circ}\) & 0.0222\({}^{\circ}\) & 0.0279\({}^{\circ}\) & 0.0547\({}^{\circ}\) & 0.0815\({}^{\circ}\) & 0.0296\({}^{\circ}\) & 0.0365\({}^{\circ}\) \\ \hline MMCN\({}_{\text{ICIPP}-PT}\) as indicated in the respective rows for each model within the table. The end-to-end variants incorporating CLIP into the recommendation models exhibit enhanced performance in 83% of cases across all three datasets compared to their respective fine-tuned ones, with 98% of these cases showing significant improvement. In contrast, end-to-end variants with VLMo integration do not show similar improvements and even lead to a decline in performance. This observation suggests that the end-to-end training paradigm facilitates a seamless integration of the CLIP encoder into the existing recommendation models, whereas it does not produce the same level of compatibility for VLMo. The performance decline in the end-to-end VLMo integration might be attributed to its architecture, which comprises a unified multi-modal transformer with modality-specific expert FFNs. While fine-tuning VLMo updates both expert FFNs and the unified transformer for better feature alignment, end-to-end training may impede the effective gradient propagation through expert FFNs due to the recommendation loss (e.g. BRR loss). This could result in the expert FFNs becoming less specialised in handling modality-specific features as they are updated alongside the multi-modal recommendation model, potentially leading to less effective embeddings. Another interesting finding from Table 4 is that the LATTICE\({}_{VLMO}\)/\(CLIP\)-\(ETE\) model overcomes the inferior performance exhibited by both the pre-trained and fine-tuned versions of the LATTICE model, outperforming LATTICE\({}_{VLMO}\)/\(CLIP\)-\(FT\) in all cases across the three datasets. This observation indicates that the end-to-end training paradigm effectively addresses the conceptual conflict between LATTICE and the LMM encoders by generating more effective multi-modal features adapted to the specific recommendation task, guided by the recommendation loss. This process results in more semantically informative item-item graphs for LATTICE. This simultaneous learning process allows the multi-modal models to produce complementary representations, thereby minimising the potential conflicts between LATTICE's item-item graph learning and the multi-modal encoders' feature alignment. Hence, this integrated training strategy enables the LATTICE model to capitalise on the strengths of the multi-modal encoders, resulting in an improved performance. In answer to RQ2, we found that fine-tuning the LMM encoders improves performance in 70% of the 240 tested cases when comparing the fine-tuned LMM encoders with the pre-trained ones. 
However, some models exhibit anomalies due to inherent conceptual discrepancies between the fine-tuned encoders and their primary objectives. Moreover, all five models gain from an end-to-end training approach when integrating a dual-stream LMM encoder (i.e., CLIP), while the unified LMM encoder (i.e., VLMo) does not show the same advantages. This end-to-end training paradigm effectively addresses the conceptual conflict between the LMM encoders and a model such as LATTICE. Through this paradigm, we achieve a deeper feature alignment in multi-modal recommendation. ### Modality Contribution Analysis (RQ3) As highlighted in Section 1 and detailed in Table 1 of the same section, the existing models insufficiently investigate the interdependencies between modalities, hence exhibiting a suboptimal performance when fusing multi-modal features. Hence, we conduct an analysis to investigate the contribution of each modality on the used Amazon datasets, particularly after incorporating the LMM encoders into the existing models. Table 5 presents the results of the MMGCL and LATTICE models when fed with single or multiple types of modalities as input. For conciseness, we report the results of different types of modalities for only MMGCL and LATTICE here, as these are the two most effective baselines on the used datasets (conclusions on the other models and metrics are similar). Table 5 indicates that both MMGCL and LATTICE, when integrated with an LMM encoder and inputted jointly with visual and textual embeddings, outperform the same models using a single modality input. This result, complementing the findings of Table 1, suggests that the LMM encoders can successfully exploit the deep alignment, confirming our intuition that the inclusion of additional modalities in a model should intrinsically augment the knowledge base, thereby enhancing its performance in the multi-modal recommendation task. In response to RQ3, we conclude that the LMM encoders can facilitate the deeper alignment from diverse modalities, irrespective of the fusion methods employed, within the context of multi-modal recommendation systems. ## 5. Conclusions In this study, we investigated the effectiveness of the large multi-modal (LMM) encoders (namely, CLIP and VLMo) for the multi-modal recommendation. Specifically, we incorporated the LMM encoders as a supplementary component into the multi-modal recommendation models to enhance the user/item embeddings used by each recommendation model. Our experimental results show that both the pre-trained and fine-tuned CLIP and VLMo encoders effectively extract and align visual and textual features from raw images and texts, and significantly enhance the performance in four out of five state-of-the-art multi-modal recommendation models we tested. However, we also observed that for certain model architectures (e.g. LATTICE), this was not the case, due to conceptual conflicts between the fine-tuned LMM encoders and the models' inherent learning objectives. We also investigated different training paradigms for the LMM encoders. Our experiments showed that end-to-end training is more suitable for the multi-modal recommendation task when incorporating a dual-stream LMM encoder (i.e., CLIP) into the existing models, while a unified LMM encoder (i.e., VLMo) does not exhibit the same benefits. 
Interestingly, our experiments showed that the end-to-end training addresses the conceptual conflict between the LMM encoders and LATTICE, highlighting the importance of adopting suitable training paradigms when incorporating the LMM encoders into multi-modal recommendation systems. \begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & Amazon Sports & Amazon Clothing & Amazon Baby \\ Methods & NDCG@20 & NDCG@20 & NDCG@20 \\ \hline MMGCL\({}_{CLIP-ETE}\) (V\&T) & **0.0467** & **0.0378** & **0.0358** \\ MMGCL\({}_{CLIP-ETE}\) (V) & 0.0446\({}^{*}\) & 0.0344\({}^{*}\) & 0.0362\({}^{*}\) \\ MMGCL\({}_{CLIP-ETE}\) (T) & 0.0455\({}^{*}\) & 0.0367 & 0.0371\({}^{*}\) \\ \hline LATTICE\({}_{CLIP-ETE}\) (V\&T) & **0.0451** & **0.0361** & **0.0393** \\ LATTICE\({}_{CLIP-ETE}\) (V) & 0.0440\({}^{*}\) & 0.0346\({}^{*}\) & 0.0371\({}^{*}\) \\ LATTICE\({}_{CLIP-ETE}\) (T) & 0.0449 & 0.0355\({}^{*}\) & 0.0379\({}^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 5. Comparative analysis of NDCG@20 scores for MMGCL and LATTICE models using different modality inputs across used datasets. V/T are the abbreviations for Visual/Textual, respectively. \({}^{*}\) indicates a significant difference using a paired t-test with \(p<0.05\). Moreover, our in-depth analysis of the modality contribution to the recommendation performance highlights the capacity of the LMM encoders to align different modalities, thereby enriching existing models with a spectrum of modalities as opposed to relying on a single modality. To conclude, our study showed that the remarkable performance enhancements exhibited by the LMM encoders in other tasks [1, 17] are also observed in the recommendation domain, thereby warranting further investigation into their potential for the multi-modal recommendation task.
2309.13832
Inspiral and Plunging Orbits in Kerr-Newman Spacetimes
We present the analytical solutions for the trajectories of particles that spiral and plunge toward the event horizon along timelike geodesics following general non-equatorial paths in Kerr-Newman spacetimes. Our studies encompass both bound and unbound motions. The solutions can be written in terms of elliptic integrals and Jacobian elliptic functions of manifestly real functions of the Mino time. They respectively reduce to those of the Kerr, Reissner-Nordström, and Schwarzschild black holes in certain limits of the spin and charge of the black holes, and can be compared with the known ones restricted to equatorial motion. These explicit solutions may have some implications for the gravitational wave emission from extreme mass-ratio inspirals.
Yu-Chung Ko, Da-Shin Lee, Chi-Yong Lin
2023-09-25T02:37:39Z
http://arxiv.org/abs/2309.13832v3
# Inspiral and Plunging Orbits in Kerr-Newman Spacetimes ###### Abstract We present the analytical solutions for the trajectories of particles that spiral and plunge toward the event horizon along timelike geodesics, following general non-equatorial paths within Kerr-Newman spacetimes. Our studies encompass both bound and unbound motions. The solutions can be written in terms of elliptic integrals and Jacobian elliptic functions of manifestly real functions of the Mino time, and they respectively reduce to those of the Kerr, Reissner-Nordström, and Schwarzschild black holes in certain limits of the spin and charge of the black holes. The results can be compared with some of the known ones restricted to the equatorial plane. These explicit solutions may find applications such as black hole accretion. pacs: 04.70.-s, 04.70.Bw, 04.80.Cc ## I Introduction The recent detections of gravitational waves emitted by merging binary systems confirmed the prediction made by Einstein a century earlier as a consequence of general relativity [1; 2; 3]. The capture of the spectacular images of the supermassive black holes M87* at the center of the M87 galaxy [4] and Sgr A* at the center of our galaxy [5] is another scientific achievement, providing direct evidence for the existence of black holes. Black holes are among the most mysterious stellar objects, and they are solutions of Einstein's field equations. In astrophysics, extreme mass-ratio inspirals (EMRIs), consisting of a stellar-mass object orbiting a massive black hole, have recently gained considerable attention, since analyzing their gravitational wave signal allows accurate tests of the predictions of general relativity in the strong regime of gravity. The gravitational wave signal generated through EMRIs, a key source of low-frequency gravitational waves to be observed by the planned space-based Laser Interferometer Space Antenna (LISA), provides a chance to measure various fascinating properties of supermassive black holes [8; 9; 10; 11]. The present work is motivated by EMRIs, which can be approximated by a light body traveling along a geodesic of the background spacetime of the massive black hole. In particular, the recent studies in [12; 13] have been devoted to inspirals of a particle on the equatorial plane, starting asymptotically from the innermost stable circular orbit (ISCO) of Kerr black holes. They also derive a simple expression for the equatorial radial flow from the ISCO relevant to the dynamics of the accretion disk. These exact solutions may find applications in numerical accretion studies and in the gravitational waveforms generated by EMRIs, as well as in extending current theories of black hole accretion [17; 18; 19]. Moreover, the work of [20] extends the motion on the equatorial plane to generic nonequatorial motion in Kerr black holes. In the Kerr family of black holes, owing to the spacetime symmetries, the geodesics of a particle possess two conserved quantities: the energy \(E_{m}\) and the azimuthal angular momentum \(L_{m}\) of the particle. Moreover, the existence of a third conserved quantity, discovered in the sixties and known nowadays as the Carter constant, renders the geodesic equations a set of first-order differential equations [21]. Later, the introduction of the Mino time [22] fully decouples the equations of motion, with the solutions expressed in terms of elliptic functions. 
In our previous paper [15], we studied the null and time-like geodesics of light and of neutral particles, respectively, in the exterior of Kerr-Newman black holes. We obtained the solutions of the trajectories in terms of elliptic integrals and Jacobi elliptic functions for the null and time-like geodesics, which are manifestly real functions of the Mino time and for which the initial conditions can be explicitly specified, with reference to [23]. In this work, we mainly focus on particles falling into the Kerr-Newman black holes in general nonequatorial motion. Theoretical considerations, together with recent observations of structures near Sgr A* by the GRAVITY experiment [24], indicate the possible presence of a small electric charge of the central supermassive black hole [25; 26]. Thus, it is of great interest to explore the geodesic dynamics in the Kerr-Newman black hole. The layout of the paper is as follows. In Sec. II, a brief review of the time-like geodesic equations is provided in terms of the conserved quantities: the energy, the azimuthal angular momentum, and the Carter constant. The equations of motion can be recast in integral forms involving two effective potentials. In particular, the positions of the roots of the radial potential give rise to the inspiral and plunge trajectories of particles into the black holes. Sec. III focuses on the portion of the parameter space of the conserved quantities that satisfies the condition of a triple root, corresponding to the innermost stable spherical orbits (ISSO). The analytical solutions of the inspiral orbits are derived for this case. The other two cases of interest here involve a pair of complex roots. In Sec. IV, one of the real roots lies inside the event horizon and the particle motion is bounded by the turning point given by the other real root of the radial potential. In Sec. V we show the case of unbound motion, in which the two real roots are inside the event horizon. The exact solutions for the plunging trajectories and illustrative examples are given. In Sec. VI the conclusions are drawn. For the completeness of the paper, Appendixes A and B collect some of the relevant formulas derived in the earlier publications [15; 28]. ## II Equation of motion for time-like geodesics We start from a summary of the equations of motion for the particle in the Kerr-Newman black hole exterior. We work with the Boyer-Lindquist coordinates \((t,r,\theta,\phi)\) on the space-time of the exterior of the Kerr-Newman black hole with gravitational mass \(M\), charge \(Q\), angular momentum \(J\), and angular momentum per unit mass \(a=J/M\), described by the metric \[ds^{2}=-\frac{\Delta}{\Sigma}\left(dt-a\sin^{2}\theta d\phi\right)^{2}+\frac{\sin^{2}\theta}{\Sigma}\left[(r^{2}+a^{2})d\phi-adt\right]^{2}+\frac{\Sigma}{\Delta}dr^{2}+\Sigma d\theta^{2}\;, \tag{1}\] where \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\) and \(\Delta=r^{2}-2Mr+a^{2}+Q^{2}\). The roots of \(\Delta(r)\) determine the outer/inner event horizons \(r_{+}/r_{-}\) as \[r_{\pm}=M\pm\sqrt{M^{2}-(Q^{2}+a^{2})}\;. \tag{2}\] We assume that \(0<a^{2}+Q^{2}<M^{2}\) throughout the paper. For asymptotically flat, stationary, and axially symmetric black holes, the metric is independent of \(t\) and \(\phi\). Thus, the conserved quantities are the energy \(E_{m}\) and the azimuthal angular momentum \(L_{m}\) of the particle along a geodesic. 
These can be constructed through the four momentum \(p^{\mu}=mu^{\mu}=m\,dx^{\mu}/d\sigma_{m}\), defined in terms of the proper time \(\sigma_{m}\) and the mass of the particle \(m\), as \[E_{m}\equiv-p_{t}, \tag{3}\] \[L_{m}\equiv p_{\phi}\,. \tag{4}\] Additionally, another conserved quantity is the Carter constant, explicitly obtained by \[C_{m}=\Sigma^{2}\left(u^{\theta}\right)^{2}-a^{2}\cos^{2}\theta\left(E_{m}\right)^{2}+L_{m}^{2}\cot^{2}\theta+m^{2}a^{2}\cos^{2}\theta\,. \tag{5}\] Together with the time-like geodesics of the particle, \(u^{\mu}u_{\mu}=m^{2}\), one obtains the equations of motion \[\frac{\Sigma}{m}\frac{dr}{d\sigma_{m}}=\pm_{r}\sqrt{R_{m}(r)}\,, \tag{6}\] \[\frac{\Sigma}{m}\frac{d\theta}{d\sigma_{m}}=\pm_{\theta}\sqrt{\Theta_{m}(\theta)}\,, \tag{7}\] \[\frac{\Sigma}{m}\frac{d\phi}{d\sigma_{m}}=\frac{a}{\Delta}\left[\left(r^{2}+a^{2}\right)\gamma_{m}-a\lambda_{m}\right]-\frac{1}{\sin^{2}\theta}\left(a\gamma_{m}\sin^{2}\theta-\lambda_{m}\right)\,, \tag{8}\] \[\frac{\Sigma}{m}\frac{dt}{d\sigma_{m}}=\frac{r^{2}+a^{2}}{\Delta}\left[\left(r^{2}+a^{2}\right)\gamma_{m}-a\lambda_{m}\right]-a\left(a\gamma_{m}\sin^{2}\theta-\lambda_{m}\right)\,, \tag{9}\] where we have normalized \(E_{m}\), \(L_{m}\), and \(C_{m}\) by the mass of the particle \(m\), \[\gamma_{m}\equiv\frac{E_{m}}{m},\,\,\,\lambda_{m}\equiv\frac{L_{m}}{m},\,\,\,\eta_{m}\equiv\frac{C_{m}}{m^{2}}. \tag{10}\] The symbols \(\pm_{r}=\text{sign}\left(u^{r}\right)\) and \(\pm_{\theta}=\text{sign}\left(u^{\theta}\right)\) are defined by the 4-velocity of the particle. Moreover, the radial potential \(R_{m}(r)\) in (6) and the angular potential \(\Theta_{m}(\theta)\) in (7) for the particle are obtained as \[R_{m}(r)=\left[\left(r^{2}+a^{2}\right)\gamma_{m}-a\lambda_{m}\right]^{2}-\Delta\left[\eta_{m}+\left(a\gamma_{m}-\lambda_{m}\right)^{2}+r^{2}\right]\,, \tag{11}\] \[\Theta_{m}(\theta)=\eta_{m}+a^{2}\gamma_{m}^{2}\cos^{2}\theta-\lambda_{m}^{2}\cot^{2}\theta-a^{2}\cos^{2}\theta\,. \tag{12}\] As is well known [22], the set of equations of motion (6)-(9) can be fully decoupled by introducing the so-called Mino time \(\tau_{m}\) defined as \[\frac{dx^{\mu}}{d\tau_{m}}\equiv\frac{\Sigma}{m}\frac{dx^{\mu}}{d\sigma_{m}}. \tag{13}\] For the source point \(x_{i}^{\mu}\) and observer point \(x^{\mu}\), the integral forms of the equations above can be rewritten as [23] \[\tau_{m}-\tau_{mi}=I_{mr}=G_{m\theta}\,, \tag{14}\] \[\phi_{m}-\phi_{mi}=I_{m\phi}+\lambda_{m}G_{m\phi}\,, \tag{15}\] \[t_{m}-t_{mi}=I_{mt}+a^{2}\gamma_{m}G_{mt}\,, \tag{16}\] where the integrals \(I_{mr}\), \(I_{m\phi}\), and \(I_{mt}\) involve the radial potential \[I_{mr}\equiv\int_{r_{i}}^{r}\frac{1}{\pm_{r}\sqrt{R_{m}(r)}}dr\,, \tag{17}\] \[I_{m\phi}\equiv\int_{r_{i}}^{r}\frac{a\left[\left(2Mr-Q^{2}\right)\gamma_{m}-a\lambda_{m}\right]}{\pm_{r}\Delta\sqrt{R_{m}(r)}}dr, \tag{18}\] \[I_{mt}\equiv\int_{r_{i}}^{r}\frac{r^{2}\gamma_{m}\Delta+\left(2Mr-Q^{2}\right)\left[\left(r^{2}+a^{2}\right)\gamma_{m}-a\lambda_{m}\right]}{\pm_{r}\Delta\sqrt{R_{m}(r)}}dr\,. \tag{19}\] The angular integrals are \[G_{m\theta}\equiv\int_{\theta_{i}}^{\theta}\frac{1}{\pm_{\theta}\sqrt{\Theta_{m}(\theta)}}d\theta\,, \tag{20}\] \[G_{m\phi}\equiv\int_{\theta_{i}}^{\theta}\frac{\csc^{2}\theta}{\pm_{\theta}\sqrt{\Theta_{m}(\theta)}}d\theta\,, \tag{21}\] \[G_{mt}\equiv\int_{\theta_{i}}^{\theta}\frac{\cos^{2}\theta}{\pm_{\theta}\sqrt{\Theta_{m}(\theta)}}d\theta\,. 
\tag{22}\] In the previous work [15; 28], we have shown the exact solutions for some of the cases of both null and time-like geodesics. In the present work, we will mainly focus on the spiralling and plunging orbits of the bound and the unbound motion in the black hole exterior. There are three types of such orbits, which will be considered in the subsequent sections for both bound and unbound motion. The radial potential \(R_{m}(r)\) is a quartic polynomial, and the positions of its roots play the essential role in the present study. The discussion of the angular potential \(\Theta_{m}(\theta)\) and the integrals involved, on the other hand, remains unchanged in this work. For the sake of completeness, we will provide a short summary of Ref. [15] in Appendix A. Before ending this section, let us introduce a few notations that will be used later. Related to \(R_{m}(r)\), we define the integrals \[I_{n}\equiv\int_{r_{i}}^{r}r^{n}\sqrt{\frac{1-\gamma_{m}^{2}}{R_{m}(r)}}\,dr\equiv iI_{n}^{U}\;,\;n=1,2 \tag{23}\] \[I_{\pm}\equiv\int_{r_{i}}^{r}\frac{1}{(r-r_{\pm})}\sqrt{\frac{1-\gamma_{m}^{2}}{R_{m}(r)}}\,dr\equiv iI_{\pm}^{U} \tag{24}\] In terms of \(I_{1}\), \(I_{2}\), and \(I_{\pm}\) we can rewrite (18) and (19) as follows \[I_{m\phi}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\frac{2Ma}{r_{+}-r_{-}}\left[\left(r_{+}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{+}(\tau_{m})-\left(r_{-}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{-}(\tau_{m})\right] \tag{25}\] \[I_{mt}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left\{\frac{4M^{2}}{r_{+}-r_{-}}\left[\left(r_{+}-\frac{Q^{2}}{2M}\right)\left(r_{+}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{+}(\tau_{m})\right.\right.\] \[\left.\left.-\left(r_{-}-\frac{Q^{2}}{2M}\right)\left(r_{-}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{-}(\tau_{m})\right]+2MI_{1}(\tau_{m})+I_{2}(\tau_{m})\right\}\] \[+\left(4M^{2}-Q^{2}\right)\gamma_{m}\tau_{m} \tag{26}\] ## III Inspiral orbits in bound motion Based upon the studies in [15] of the radial potentials for various ranges of the parameters \(\lambda_{m}\) and \(\eta_{m}\) in the bound motion (\(\gamma_{m}<1\)), and since the particle is kinematically allowed to move only where \(R_{m}(r)>0\), there evidently exist two types of spiral trajectories that cross the horizon into the black holes. In the first, the particle starts from \(r_{i}\leq r_{\rm isso}\), where the ISSO radius \(r_{\rm isso}\) corresponds to the parameters located at A and B in Fig. 1, and spirals across the horizon of the black hole. In the second, the particle starts from \(r_{i}<r_{m4}\) with the parameters of C and D in Fig. 4, and travels through the horizon of the black hole. The solutions shown in [15] are particularly useful for producing the ISSO solutions in the case of the triple root, which further reduce to the inspiral trajectories in the former case. The solutions along the \(r\) direction can be obtained from the inversion of (14) with the integral \(I_{mr}\) in (17), where the radial potential (11) is given in the case of the triple root located at the ISSO radius, namely \(r_{m2}=r_{m3}=r_{m4}=r_{\rm isso}\), and the initial \(r\) is set at \(r_{i}\leq r_{\rm isso}\). 
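As a quick numerical aside, the potentials (11)-(12), the horizons (2), and the Mino-time integral (17) are straightforward to evaluate directly. The minimal sketch below does so for illustrative parameter values (they are not taken from the paper; units with \(G=c=1\) and \(M=1\) are assumed), and it integrates (17) by quadrature between two radii where \(R_{m}>0\); such a quadrature can be used later to cross-check the closed-form solutions.

```python
import numpy as np
from scipy.integrate import quad

# Kerr-Newman parameters (illustrative values; G = c = 1, M = 1).
M, a, Q = 1.0, 0.7, 0.3
gam, lam, eta = 0.95, 2.0, 4.0   # normalized energy, angular momentum, Carter constant

def Delta(r):
    return r**2 - 2*M*r + a**2 + Q**2

# Outer/inner horizons, Eq. (2).
r_plus = M + np.sqrt(M**2 - (Q**2 + a**2))
r_minus = M - np.sqrt(M**2 - (Q**2 + a**2))

def R_m(r):
    """Radial potential, Eq. (11)."""
    return ((r**2 + a**2)*gam - a*lam)**2 - Delta(r)*(eta + (a*gam - lam)**2 + r**2)

def Theta_m(theta):
    """Angular potential, Eq. (12)."""
    return eta + a**2*gam**2*np.cos(theta)**2 - lam**2/np.tan(theta)**2 - a**2*np.cos(theta)**2

# Elapsed Mino time between two radii where R_m > 0, from Eq. (17); for inward
# motion the sign is -1 and r decreases, so we quote the absolute value.
r_outer, r_inner = 6.0, 2.0
tau, _ = quad(lambda r: 1.0/np.sqrt(R_m(r)), r_inner, r_outer)
print(f"r_+ = {r_plus:.4f}, r_- = {r_minus:.4f}")
print(f"Theta_m(pi/3) = {Theta_m(np.pi/3):.4f}  (positive: theta = pi/3 is allowed here)")
print(f"Mino time from r = {r_outer} to r = {r_inner}: {tau:.4f}")
```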
Then we can get the Mino time \(\tau_{m}\) as a function of \(r\) from the integral (14) in the case of the triple root \[\tau_{m}^{I}(r)=\frac{-2}{(r_{\rm isso}-r_{m1})\sqrt{1-\gamma_{m}^{2}}}\Bigg{[}\sqrt{\frac{r-r_{m1}}{r_{\rm isso}-r}}-\sqrt{\frac{r_{i}-r_{m1}}{r_{\rm isso}-r_{i}}}\Bigg{]} \tag{27}\] The particle moves toward the horizon with \(\nu_{r_{i}}=-1\). Thus, the radial motion of the particle can be obtained from the inverse of (27) \[r^{I}(\tau_{m})=\frac{r_{m1}+r_{\rm isso}\left[X^{I}(\tau_{m})\right]^{2}}{1+[X^{I}(\tau_{m})]^{2}}\, \tag{28}\] where \[X^{I}(\tau_{m})=\frac{\sqrt{1-\gamma_{m}^{2}}(r_{\rm isso}-r_{m1})}{2}\tau_{m}-\sqrt{\frac{r_{i}-r_{m1}}{r_{\rm isso}-r_{i}}}\,, \tag{29}\] Figure 1: The main graph shows the parametric plot of \(\lambda_{m}(r_{\rm isso})\) versus \(\eta_{m}(r_{\rm isso})\). The triple roots \(r_{\rm isso}\) are the solutions of the equations \(R_{m}^{\prime\prime}(r)=R_{m}^{\prime}(r)=R_{m}(r)=0\). The inset illustrates the behavior of the radial potential \(R_{m}\) with the parameters located at A. The case of B has \(\eta_{m}=0\), which is an example of equatorial motion. The solution (28) of the coordinate \(r\) involves the triple root \(r_{\rm isso}\) of the radial potential, which can be determined as follows. From the double root solutions \(R(r)=R^{\prime}(r)=0\) [15] we have the constants of motion in the case of spherical orbits, \[\lambda_{\rm mss}=\frac{\left[r_{\rm mss}\left(Mr_{\rm mss}-Q^{2}\right)-a^{2}M\right]\gamma_{m}-\Delta\left(r_{\rm mss}\right)\sqrt{r_{\rm mss}^{2}\left(\gamma_{m}^{2}-1\right)+Mr_{\rm mss}}}{a\left(r_{\rm mss}-M\right)}\,, \tag{30}\] \[\eta_{\rm mss}=\frac{r_{\rm mss}}{a^{2}\left(r_{\rm mss}-M\right)^{2}}\Big{\{}r_{\rm mss}\left(Mr_{\rm mss}-Q^{2}\right)\left(a^{2}+Q^{2}-Mr_{\rm mss}\right)\gamma_{m}^{2}\\ +2\left(Mr_{\rm mss}-Q^{2}\right)\Delta\left(r_{\rm mss}\right)\gamma_{m}\sqrt{r_{\rm mss}^{2}\left(\gamma_{m}^{2}-1\right)+Mr_{\rm mss}}\\ +\left[a^{2}\left(Mr_{\rm mss}-Q^{2}\right)-\left(\Delta\left(r_{\rm mss}\right)-a^{2}\right)^{2}\right]\left[r_{\rm mss}\left(\gamma_{m}^{2}-1\right)+M\right]\Big{\}}\ \,. \tag{31}\] The subscript "ss" means the spherical orbits with \(s=\pm\), which denotes the two types of motion with respect to the relative sign between the black hole's spin and the azimuthal angular momentum of the particle (see Section III C of [15]). Together with the above equations, an additional equation from \(R^{\prime\prime}(r)=0\) determines the triple root, the ISSO radius \(r_{\rm isso}\), given by [15] \[-Mr_{\rm isso}^{5}\Delta\left(r_{\rm isso}\right)+4\left(Mr_{\rm isso}^{3}-Q^{2}r_{\rm isso}^{2}+a^{2}\eta_{\rm isso}-as\sqrt{\Gamma_{\rm ms}}\right)^{2}=0 \tag{32}\] where \[\Gamma_{\rm ms}=r_{\rm isso}^{4}\left(Mr_{\rm isso}-Q^{2}\right)-\eta_{\rm isso}\left[r_{\rm isso}\left(r_{\rm isso}-3M\right)+2Q^{2}\right]r_{\rm isso}^{2}+a^{2}\eta_{\rm isso}^{2}. \tag{33}\] We proceed by evaluating the coordinates \(\phi_{m}(\tau_{m})\) and \(t_{m}(\tau_{m})\) using (15) and (16), which involve not only the angular integrals \(G_{m\phi}\) and \(G_{mt}\), but also the radial integrals (18) and (19). 
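Before proceeding, a small numerical sanity check of the radial solution (27)-(29) is easy to perform: plugging \(\tau_{m}^{I}(r)\) from (27) into (28)-(29) must return the same \(r\). The sketch below uses illustrative values of \(r_{m1}\), \(r_{\rm isso}\), \(\gamma_{m}\), and \(r_{i}\) (they are not a genuine ISSO configuration, which would require solving (30)-(32)); the round-trip identity holds for any \(r_{m1}<r<r_{i}<r_{\rm isso}\) and \(0<\gamma_{m}<1\).

```python
import numpy as np

# Illustrative numbers only; a genuine ISSO configuration would come from Eqs. (30)-(32).
gam, r_m1, r_isso, r_i = 0.95, 0.2, 6.0, 5.5

def tau_I(r):
    """Mino time as a function of r along the inspiral, Eq. (27)."""
    pref = -2.0 / ((r_isso - r_m1) * np.sqrt(1.0 - gam**2))
    return pref * (np.sqrt((r - r_m1)/(r_isso - r)) - np.sqrt((r_i - r_m1)/(r_isso - r_i)))

def r_I(tau):
    """Radial coordinate as a function of Mino time, Eqs. (28)-(29)."""
    X = 0.5*np.sqrt(1.0 - gam**2)*(r_isso - r_m1)*tau - np.sqrt((r_i - r_m1)/(r_isso - r_i))
    return (r_m1 + r_isso*X**2) / (1.0 + X**2)

r_test = np.linspace(1.0, 5.4, 5)
print(np.max(np.abs(r_I(tau_I(r_test)) - r_test)))  # ~1e-15: (28)-(29) indeed invert (27)
print(tau_I(np.array([5.0, 3.0, 1.0])))             # Mino time grows monotonically as r decreases
```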
With the help of (25) and (26), we first rewrite (18) and (19) as \[I_{m\phi}^{I}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\frac{2Ma}{r _{+}-r_{-}}\left[\left(r_{+}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}} \right)+Q^{2}}{2M}\right)I_{+}^{I}(\tau_{m})-\left(r_{-}-\frac{a\left(\frac{ \lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I_{-}^{I}(\tau_{m})\right] \tag{34}\] \[I^{I}_{mt}(\tau_{m})=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}} \left\{\frac{4M^{2}}{r_{+}-r_{-}}\left[\left(r_{+}-\frac{Q^{2}}{2M}\right)\left( r_{+}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)I^{I}_{+}( \tau_{m})\right.\right.\] \[\left.\left.\qquad\qquad-\left(r_{-}-\frac{Q^{2}}{2M}\right) \left(r_{-}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M} \right)I^{I}_{-}(\tau_{m})\right]+2MI^{I}_{1}(\tau_{m})+I^{I}_{2}(\tau_{m})\right\}\] \[\left.\qquad\qquad+\left(4M^{2}-Q^{2}\right)\gamma_{m}\tau_{m}\right. \tag{35}\] For the present case of the triple roots, the calculation of the integrals is straightforward and one can express \(I^{I}_{n}\) and \(I^{I}_{\pm}\) in terms of elementary functions, \[I^{I}_{\pm}(\tau_{m})=\frac{\sqrt{1-\gamma_{m}^{2}}}{r_{\rm isso}-r_{\pm}} \tau_{m}+\frac{1}{\sqrt{\left(r_{\pm}-r_{m1}\right)\left(r_{\rm isso}-r_{\pm} \right)^{3}}}\tanh^{-1}\sqrt{\frac{(r_{\pm}-r_{m1})(r_{\rm isso}-r^{I}(\tau_{m }))}{(r_{\rm isso}-r_{\pm})(r^{I}(\tau_{m})-r_{m1})}}-\mathcal{I}^{I}_{\pm_{i}} \tag{36}\] \[I^{I}_{1}(\tau_{m})=\sqrt{1-\gamma_{m}^{2}}r_{\rm isso}\tau_{m}+2\tan^{-1}\sqrt {\frac{r_{\rm isso}-r^{I}(\tau_{m})}{r^{I}(\tau_{m})-r_{m1}}}-\mathcal{I}^{I}_ {1_{i}} \tag{37}\] \[I^{I}_{2}(\tau_{m})=\frac{r_{I}(\tau_{m})(r_{m1}-r_{\rm isso})+r_{\rm isso}(3r_ {\rm isso}-r_{m1})}{2}\tau_{m}+(r_{m1}+3r_{\rm isso})\tan^{-1}\sqrt{\frac{r_{ \rm isso}-r^{I}(\tau_{m})}{r^{I}(\tau_{m})-r_{m1}}}-\mathcal{I}^{I}_{2_{i}} \tag{38}\] It is worthwhile to mention that \(\mathcal{I}^{I}_{\pm_{i}}\), \(\mathcal{I}^{I}_{1_{i}}\), \(\mathcal{I}^{I}_{2_{i}}\) are obtained by evaluating \(\mathcal{I}^{I}_{\pm}\), \(\mathcal{I}^{I}_{1}\), \(\mathcal{I}^{I}_{2}\) at \(r=r_{i}\) of the initial condition, that is, \(I^{I}_{\pm}(0)=I^{I}_{1}(0)=I^{I}_{2}(0)=0\). The solutions of \(\phi^{I}(\tau_{m})\) and \(t^{I}(\tau_{m})\) can be constructed from \(I_{m\phi}\) (18), \(G_{m\phi}\) (21) and \(I_{mt}\) (19) and \(G_{mt}\) (22) through (15) and (16). Together with the solutions along the \(r\) and \(\theta\) directions in (28) and (22), they are the spiralling motions in the general nonequatorial plane of the Kerr-Newman exterior. An illustrative example is shown in Fig.(2), which corresponds to the case A of Fig. 1. For the particle initially at \(r_{i}=r_{\rm isso}\), the solution (28) gives \(r(\tau)=r_{\rm isso}\) for any \(\tau\) obtained from \(X^{I}\rightarrow-\infty\) in (29) when \(r_{i}\to r_{\rm isso}\). It is anticipated that the particle will travel in the spherical motion on the ISSO. However, for \(r_{i}<r_{\rm isso}\) of our interest, as \(r\) reaches the outer horizon \(r_{+}\), it takes finite Mino time \(\tau_{m}\). Nevertheless, because of the \(\tanh^{-1}\) function in (36), \(I^{\rm isso}_{\pm}\rightarrow\infty\) as \(r\to r_{+}\), giving the coordinate time \(t\rightarrow\infty\) and the azimuthal angle \(\phi\rightarrow\infty\) observed in the asymptotical flat regime. The above expressions can be further reduce to the Kerr black hole case by sending \(Q\to 0\). 
An interesting particular case is the motion of the particle on the equatorial plane, obtained by taking the \(\theta=\frac{\pi}{2}\) and \(\eta_{m}\to 0\) limits in the results above. The spherical motion becomes the circular motion such that \(r_{\rm isso}\) is reduced to \(r_{\rm isco}\). In particular, \(G_{m\phi}=\tau_{m}\) and the equation of motion (15) simplifies to [15] \[\phi_{m}^{I}\left(r\right)=I_{m\phi}^{I}\left(\tau_{m}\left(r\right)\right)+\lambda_{m}\tau_{m}^{I}\left(r\right)+\phi_{mi}^{I}\,, \tag{39}\] where \(I_{m\phi}^{I}\) is given by (34). In addition, one can eliminate the Mino time \(\tau_{m}\) using Eq. (27). Then the inspiral solution of \(\phi_{m}\) on the equatorial plane from equation (8) can be expressed as a function of \(r\), \[\phi_{m}^{I}(r)=-2\sqrt{\frac{r-r_{m1}}{\left(1-\gamma_{m}^{2}\right)\left(r_{\mathrm{isco}}-r\right)}}\frac{r_{\mathrm{isco}}^{2}\lambda_{m}+\left(2Mr_{\mathrm{isco}}-Q^{2}\right)\left(a\gamma_{m}-\lambda_{m}\right)}{\left(r_{\mathrm{isco}}-r_{+}\right)\left(r_{\mathrm{isco}}-r_{-}\right)\left(r_{\mathrm{isco}}-r_{m1}\right)}\] \[-\frac{2}{r_{+}-r_{-}}\frac{\left(2Ma\gamma_{m}-r_{-}\lambda_{m}\right)r_{+}-Q^{2}\left(a\gamma_{m}-\lambda_{m}\right)}{\left(r_{\mathrm{isco}}-r_{+}\right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{+}-r_{m1}\right)\left(r_{\mathrm{isco}}-r_{+}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{+}-r_{m1}\right)\left(r_{\mathrm{isco}}-r\right)}{\left(r_{\mathrm{isco}}-r_{+}\right)\left(r-r_{m1}\right)}}\] \[+\frac{2}{r_{+}-r_{-}}\frac{\left(2Ma\gamma_{m}-r_{+}\lambda_{m}\right)r_{-}-Q^{2}\left(a\gamma_{m}-\lambda_{m}\right)}{\left(r_{\mathrm{isco}}-r_{-}\right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{-}-r_{m1}\right)\left(r_{\mathrm{isco}}-r_{-}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{-}-r_{m1}\right)\left(r_{\mathrm{isco}}-r\right)}{\left(r_{\mathrm{isco}}-r_{-}\right)\left(r-r_{m1}\right)}} \tag{40}\] Analogously, we have \(G_{mt}=0\) from (22) for the equatorial orbits and (16) simplifies to \[t_{m}^{I}\left(r\right)=I_{mt}^{\mathrm{I}}\left(\tau_{m}\right)+t_{mi}^{I} \tag{41}\] where \(I_{mt}^{I}\) has been calculated in (35). Figure 2: An illustrative example of a nonequatorial orbit with the parameters of A in Fig. 1. In this case, the particle starts from \(r_{i}<r_{\rm isso}\) and inspirals into the black hole after many azimuthal and longitudinal revolutions. From the top view one notices the very different time scales of the spiralling and plunging phases. 
Substituting \(\tau_{m}^{I}\) in favor of \(r\), we find \[t_{m}^{I}\left(r\right)=-\gamma_{m}\sqrt{\frac{\left(r-r_{m1}\right)\left(r_{\text{isco}}-r\right)}{1-\gamma_{m}^{2}}}+\frac{\gamma_{m}\left(r_{m1}+3r_{\text{isco}}+4M\right)}{\sqrt{1-\gamma_{m}^{2}}}\tan^{-1}\sqrt{\frac{r_{\text{isco}}-r}{r-r_{m1}}}\] \[-2\sqrt{\frac{r-r_{m1}}{\left(1-\gamma_{m}^{2}\right)\left(r_{\text{isco}}-r\right)}}\frac{r_{\text{isco}}^{2}\left(r_{\text{isco}}^{2}+a^{2}\right)\gamma_{m}+\left(2Mr_{\text{isco}}-Q^{2}\right)a\left(a\gamma_{m}-\lambda_{m}\right)}{\left(r_{\text{isco}}-r_{+}\right)\left(r_{\text{isco}}-r_{-}\right)\left(r_{\text{isco}}-r_{m1}\right)}\] \[-\frac{2\left(2Mr_{+}-Q^{2}\right)}{r_{+}-r_{-}}\frac{2M\gamma_{m}r_{+}-\left(a\lambda_{m}+Q^{2}\gamma_{m}\right)}{\left(r_{\text{isco}}-r_{+}\right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{+}-r_{m1}\right)\left(r_{\text{isco}}-r_{+}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{+}-r_{m1}\right)\left(r_{\text{isco}}-r\right)}{\left(r_{\text{isco}}-r_{+}\right)\left(r-r_{m1}\right)}}\] \[+\frac{2\left(2Mr_{-}-Q^{2}\right)}{r_{+}-r_{-}}\frac{2M\gamma_{m}r_{-}-\left(a\lambda_{m}+Q^{2}\gamma_{m}\right)}{\left(r_{\text{isco}}-r_{-}\right)\sqrt{\left(1-\gamma_{m}^{2}\right)\left(r_{-}-r_{m1}\right)\left(r_{\text{isco}}-r_{-}\right)}}\tanh^{-1}\sqrt{\frac{\left(r_{-}-r_{m1}\right)\left(r_{\text{isco}}-r\right)}{\left(r_{\text{isco}}-r_{-}\right)\left(r-r_{m1}\right)}} \tag{42}\] As for the initial conditions, one can determine \(\phi_{mi}^{I}\) and \(t_{mi}^{I}\) by requiring that \(I_{m\phi}^{I}\left(\tau_{m}^{I}\left(r\right)\right)+\lambda_{m}\tau_{m}^{I}\left(r\right)\) and \(I_{mt}^{I}\left(\tau_{m}\right)\) vanish at the initial \(r_{i}\). The corresponding trajectories, with the additional parameter \(Q\) of the black holes apart from \(a\), are shown in Fig. 3. This certainly generalizes the solution in [12] for the Kerr black holes, where the particle starts from \(t_{m}(r)=-\infty\) as \(r\lesssim r_{isco}\) and inspirals to the event horizon. One of the limiting cases that can significantly simplify the above expressions is the extremal limit of the Kerr black hole. For \(Q\to 0\), giving \(r_{m1}=0\), and for the direct orbits with \(a=M\), the ISCO radius is on the event horizon. Therefore we focus on the extremal retrograde motion with \(r_{\text{isco}}=9M\), \(\lambda_{m}=-22\sqrt{3}M/9\), and \(\gamma_{m}=5\sqrt{3}/9\). It turns out that the coefficients of the \(\tanh^{-1}\) terms in the above expressions (40) all vanish. Then they can be simplified into the known results [12; 13] \[\phi_{m}^{I}\left(r\right)=-\frac{2\sqrt{2}}{3}\frac{r^{\frac{3}{2}}}{(r-M)\sqrt{9M-r}} \tag{43}\] \[t_{m}^{I}\left(r\right)=\sqrt{\frac{(9M-r)r}{2}}\left(\frac{4M-5r}{r-M}\right)-\frac{117\sqrt{2}}{2}M\sqrt{\frac{r}{9M-r}}\] \[\qquad\qquad+\frac{155\sqrt{2}}{2}M\tan^{-1}\sqrt{\frac{9M-r}{r}}-4M\tanh^{-1}\sqrt{\frac{9M-r}{8r}} \tag{44}\] Another limiting case is the Reissner-Nordström (RN) black hole. Since \(a\to 0\) with the spherically symmetric metric, the general motion can be treated by considering the motion in the equatorial plane. Again, the coefficients of the \(\tanh^{-1}\) terms in the above expressions (40) all vanish. 
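Before turning to the RN case below, the extremal retrograde values quoted above are easy to verify numerically: with \(M=1\), \(a=M\), \(Q=0\), \(\eta_{m}=0\), \(\gamma_{m}=5\sqrt{3}/9\) and \(\lambda_{m}=-22\sqrt{3}M/9\), the radial potential (11) should have a triple root at \(r=9M\). The sketch below checks this by constructing the quartic coefficients from a direct expansion of (11) (the same expressions are listed in Appendix B) and then tabulates the azimuthal winding (43); the three roots near \(9\) appear as a numerically perturbed cluster, which is expected for a triple root.

```python
import numpy as np

# Extremal retrograde Kerr values quoted above (M = 1, Q = 0, eta_m = 0).
M, a, Q, eta = 1.0, 1.0, 0.0, 0.0
gam = 5*np.sqrt(3)/9
lam = -22*np.sqrt(3)/9

# Quartic coefficients of R_m(r), from expanding Eq. (11).
S = gam**2 - 1.0
T = 2.0*M
U = a**2*(gam**2 - 1.0) - Q**2 - eta - lam**2
V = 2.0*M*((a*gam - lam)**2 + eta)
W = -a**2*eta - Q**2*((a*gam - lam)**2 + eta)

print(np.sort_complex(np.roots([S, T, U, V, W])))  # ~ {0} plus a cluster of three roots near 9M

# Azimuthal winding of the equatorial extremal retrograde inspiral, Eq. (43);
# |phi| diverges in the limits r -> 9M (ISCO) and r -> M (horizon at extremality).
def phi_I(r):
    return -(2*np.sqrt(2)/3) * r**1.5 / ((r - M)*np.sqrt(9*M - r))

print(phi_I(np.array([8.5, 5.0, 2.0, 1.2])))
```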
The expressions of \(\phi_{m}^{I}\left(r\right)\) and \(t_{m}^{I}\left(r\right)\) can be simplified as \[\phi_{m}^{I}\left(r\right)=-\frac{2\lambda_{m}}{r_{\text{isco}}-r_{m1}}\sqrt{ \frac{r-r_{m1}}{(1-\gamma_{m}^{2})(r_{\text{isco}}-r)}} \tag{45}\] \[t_{m}^{I}\left(r\right)= -\gamma_{m}\sqrt{\frac{\left(r-r_{m1}\right)\left(r_{\text{isco}}-r \right)}{1-\gamma_{m}^{2}}}+\frac{\gamma_{m}\left(r_{m1}+3r_{\text{isco}}+4M \right)}{\sqrt{1-\gamma_{m}^{2}}}\tan^{-1}\sqrt{\frac{r_{\text{isco}}-r}{r-r_{ m1}}}\] \[-2\sqrt{\frac{r-r_{m1}}{\left(1-\gamma_{m}^{2}\right)\left(r_{ \text{isco}}-r\right)}}\frac{r_{\text{isco}}^{4}\gamma_{m}}{\left(r_{\text{isco }}-r_{+}\right)\left(r_{\text{isco}}-r_{-}\right)\left(r_{\text{isco}}-r_{m1} \right)}\] \[-\frac{2}{r_{+}-r_{-}}\frac{\left(2Mr_{+}-Q^{2}\right)^{2}\gamma _{m}}{\left(r_{\text{isco}}-r_{+}\right)\sqrt{\left(1-\gamma_{m}^{2}\right) \left(r_{+}-r_{m1}\right)\left(r_{\text{isco}}-r_{+}\right)}}\tanh^{-1}\sqrt{ \frac{\left(r_{+}-r_{m1}\right)\left(r_{\text{isco}}-r\right)}{\left(r_{\text{isco }}-r_{+}\right)\left(r-r_{m1}\right)}}\] \[+\frac{2}{r_{+}-r_{-}}\frac{\left(2Mr_{-}-Q^{2}\right)^{2}\gamma _{m}}{\left(r_{\text{isco}}-r_{-}\right)\sqrt{\left(1-\gamma_{m}^{2}\right) \left(r_{-}-r_{m1}\right)\left(r_{\text{isco}}-r_{-}\right)}}\tanh^{-1}\sqrt{ \frac{\left(r_{-}-r_{m1}\right)\left(r_{\text{isco}}-r\right)}{\left(r_{\text{isco }}-r_{-}\right)\left(r-r_{m1}\right)}} \tag{46}\] Further simplification occurs in the extremal limit. For \(M=\pm Q\) in the RN black holes, \(r_{\pm}=M\), and with \(r_{\text{isco}}=4M\), \(r_{m1}=4M/5\), \(\lambda_{m}=2\sqrt{2}M\) and \(\gamma_{m}=3\sqrt{6}/8\), (45) and (46) can have such a simple form \[\phi_{m}^{I}\left(r\right)=-2\sqrt{\frac{5r-4M}{4M-r}} \tag{47}\] Figure 3: Illustration of the orbit on the equatorial plane with the parameters of B in Fig. 1. The particle starts from \(r_{i}<r_{\text{isco}}\) and inspirals into the black hole horizon. \[t_{m}^{I}\left(r\right)= -3\sqrt{\frac{3(4M-r)(r-4M/5)}{5}}+\frac{252\sqrt{15}}{25}M\tan^{-1} \sqrt{\frac{4M-r}{r-4M/5}}\] \[-32M\sqrt{\frac{5r-4M}{12M-3r}}-\frac{(2M^{2}-1)^{2}}{(M-r)M^{3}} \sqrt{\frac{(4M-r)(5r-4M)}{3}}\] \[-\frac{4(2M^{2}-1)}{M^{3}}\tanh^{-1}\sqrt{\frac{4M-r}{15r-12M}} \tag{48}\] Finally, in the case of the Schwarzschild black hole with \(Q\to 0\) and \(a\to 0\) giving \(r_{+}\to 2M\) and \(r_{-}\to 0\), the choice of the motion can be simply in the equatorial plane in such black holes with the spherical symmetry with \(r_{m1}=0\). Thus, with the further inputs of \(r_{\rm{isco}}=6M\), \(\lambda_{m}=2\sqrt{3}M\), \(\gamma_{m}=2\sqrt{2}/3\) in the Schwarzschild case, they become as simple as \[\phi_{m}^{I}\left(r\right)=-2\sqrt{3}\sqrt{\frac{r}{6M-r}}\, \tag{49}\] \[t_{m}^{I}\left(r\right) =\frac{864\sqrt{2}M}{25}\sqrt{\frac{r}{6M-r}}-2\sqrt{2}\sqrt{(6M -r)r}\] \[+44\sqrt{2}M\tan^{-1}\sqrt{\frac{6M-r}{r}}-4M\tanh^{-1}\sqrt{ \frac{6M-r}{2r}}. \tag{50}\] We then recover the results of two recent publications [12; 13] ## IV Plunging orbits in bound motion Another bound orbit, in which particles eventually fall into the black hole, is the motion with the parameters of C and D in Fig. 4. In this case, there are two real roots, being \(r_{m1}\) inside the inner horizon, \(r_{m4}\) outside the outer horizon, and a pair of the complex-conjugated roots \(r_{m2}=r_{m3}^{*}\). 
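The root pattern just described for case C or D is easy to reproduce numerically. The sketch below uses the example values quoted later in the text for case C (\(M=1\), \(a=Q=0.7\), \(\gamma_{m}=0.98\), \(\lambda_{m}=1\), \(\eta_{m}=7\)) and the quartic coefficients obtained by expanding (11) (they match those listed in Appendix B); it returns one real root just inside \(r_{-}\), a complex-conjugate pair, and one real root far outside \(r_{+}\).

```python
import numpy as np

# Example values for case C quoted later in the text: M = 1, a = Q = 0.7,
# gamma_m = 0.98, lambda_m = 1, eta_m = 7.
M, a, Q = 1.0, 0.7, 0.7
gam, lam, eta = 0.98, 1.0, 7.0

# Quartic coefficients of R_m(r) from expanding Eq. (11) (cf. Appendix B).
S = gam**2 - 1.0
T = 2.0*M
U = a**2*(gam**2 - 1.0) - Q**2 - eta - lam**2
V = 2.0*M*((a*gam - lam)**2 + eta)
W = -a**2*eta - Q**2*((a*gam - lam)**2 + eta)

roots = np.sort_complex(np.roots([S, T, U, V, W]))
r_plus = M + np.sqrt(M**2 - (a**2 + Q**2))
r_minus = M - np.sqrt(M**2 - (a**2 + Q**2))
print(roots)            # one (almost) real root below r_-, a complex pair, one real root >> r_+
print(r_minus, r_plus)  # ~0.859 and ~1.141
```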
Assuming that the particle starts from \(r_{i}\leq r_{m4}\), it will plunge directly into the black hole, as there are no other real-valued roots in the journey before reaching the event horizon. This section is devoted to finding the analytical solution for the particle orbit in the case \(r_{m2}=r_{m3}^{*}\) and \(r_{m4}>r_{i}>r_{+}>r_{-}>r_{m1}\). The solutions in the present case basically follow the same procedure as discussed in the previous section for the ISSO orbit. The integration of (14) is straightforward, but the Jacobi elliptic functions are involved in the representation of the results. We find after some algebra \[\tau_{m}^{B}(r)=-\frac{1}{\sqrt{(1-\gamma_{m}^{2})A_{m}B_{m}}}\left(F\left(\varphi(r)|k^{B}\right)-F\left(\varphi(r_{i})|k^{B}\right)\right) \tag{51}\] where \(F(\varphi|k)\) is the incomplete elliptic integral of the first kind [27]. The two parameters of the elliptic integrals are \[\varphi(r)=\cos^{-1}\left(\frac{B_{m}(r_{m4}-r)-A_{m}(r-r_{m1})}{B_{m}(r_{m4}-r)+A_{m}(r-r_{m1})}\right) \tag{52}\] and \[k^{B}=\frac{(r_{m4}-r_{m1})^{2}-(A_{m}-B_{m})^{2}}{4A_{m}B_{m}}\;, \tag{53}\] where we have used the short notations \[A_{m}=\sqrt{(r_{m4}-r_{m2})(r_{m4}-r_{m3})}\;,\;B_{m}=\sqrt{(r_{m3}-r_{m1})(r_{m2}-r_{m1})}\,. \tag{54}\] With the help of the Jacobian elliptic cosine function [27], one finds the inversion of (51) as \[r^{B}(\tau_{m})=\frac{(B_{m}r_{m4}+A_{m}r_{m1})-(B_{m}r_{m4}-A_{m}r_{m1})\,{\rm cn}\left(X^{B}(\tau_{m})\left|k^{B}\right.\right)}{(B_{m}+A_{m})-(B_{m}-A_{m})\,{\rm cn}\left(X^{B}(\tau_{m})\left|k^{B}\right.\right)}\,, \tag{55}\] where \[X^{B}(\tau_{m})=\sqrt{(1-\gamma_{m}^{2})\,A_{m}B_{m}}\tau_{m}-F\Bigg{(}\cos^{-1}\left(\frac{B_{m}(r_{m4}-r_{i})-A_{m}(r_{i}-r_{m1})}{B_{m}(r_{m4}-r_{i})+A_{m}(r_{i}-r_{m1})}\right)\Bigg{|}k^{B}\Bigg{)} \tag{56}\] Figure 4: This figure shows the portion of parameter space bounded by the double root solution, \(r_{m2}=r_{m3}\). The equation \(R_{m}(r)=0\) with parameters in the blue zone has complex roots, \(r_{m2}=r_{m3}^{\star}\), so that a particle in this region, say C or D, starting from \(r_{i}<r_{m4}\), will plunge into the black hole horizon. The inset shows the behavior of the radial potential \(R_{m}(r)\) for the case of the parameters located in C and D. Notice that \(A_{m}>B_{m}>0\), \(k^{B}\) lies in the range \(0<k^{B}<1\), and for \(r<r_{m4}\), \(-1<\frac{B_{m}(r_{m4}-r_{i})-A_{m}(r_{i}-r_{m1})}{B_{m}(r_{m4}-r_{i})+A_{m}(r_{i}-r_{m1})}<1\). The Jacobian elliptic cosine function is therefore a real-valued function. The solutions of the coordinates \(\phi_{m}^{B}(\tau_{m})\) and \(t_{m}^{B}(\tau_{m})\) involve the integrals \(I_{m\phi}^{B}\) and \(I_{mt}^{B}\) given in (25) and (26), as in Sec. III. 
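The radial plunge solution (51)-(56) can also be checked numerically with standard elliptic-function routines. The sketch below does so for the case-C example parameters, computing the roots of the quartic numerically and verifying that (55)-(56) invert (51). It assumes that scipy's parameter convention for \(F(\varphi|k)\) and \({\rm cn}(u|k)\) matches the one intended in (51) and (55); the round-trip check is in fact insensitive to that choice as long as the same convention is used consistently in both formulas.

```python
import numpy as np
from scipy.special import ellipkinc, ellipj

# Case-C style parameters (M = 1, a = Q = 0.7, gamma = 0.98, lambda = 1, eta = 7).
M, a, Q = 1.0, 0.7, 0.7
gam, lam, eta = 0.98, 1.0, 7.0
S = gam**2 - 1.0
coeffs = [S, 2*M, a**2*S - Q**2 - eta - lam**2,
          2*M*((a*gam - lam)**2 + eta), -a**2*eta - Q**2*((a*gam - lam)**2 + eta)]
roots = np.sort(np.roots(coeffs))              # sorted by real part
r1, r2, r3, r4 = roots[0].real, roots[1], roots[2], roots[3].real

# Short notations of Eq. (54); real despite r2, r3 being complex conjugates.
A = np.sqrt((r4 - r2)*(r4 - r3)).real
B = np.sqrt((r3 - r1)*(r2 - r1)).real
k = ((r4 - r1)**2 - (A - B)**2) / (4*A*B)      # Eq. (53), used here as scipy's parameter m

def phi_of_r(r):                               # Eq. (52)
    return np.arccos((B*(r4 - r) - A*(r - r1)) / (B*(r4 - r) + A*(r - r1)))

r_i = 7.4*M                                    # initial radius of the case-C example

def tau_B(r):                                  # Eq. (51)
    return -(ellipkinc(phi_of_r(r), k) - ellipkinc(phi_of_r(r_i), k)) / np.sqrt((1 - gam**2)*A*B)

def r_B(tau):                                  # Eqs. (55)-(56)
    X = np.sqrt((1 - gam**2)*A*B)*tau - ellipkinc(phi_of_r(r_i), k)
    cn = ellipj(X, k)[1]
    return ((B*r4 + A*r1) - (B*r4 - A*r1)*cn) / ((B + A) - (B - A)*cn)

for r in [7.0, 5.0, 3.0, 1.5]:                 # radii between the outer horizon and r_i
    print(r, r_B(tau_B(r)))                    # the round trip reproduces r
```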
The integration of \(I_{1}^{B}\), \(I_{2}^{B}\), and \(I_{\pm}^{B}\) are direct, but the results have cumbersome representation: \[I_{\pm}^{B}(\tau_{m})=\frac{1}{B_{m}\left(r_{m4}-r_{\pm}\right)+ A_{m}\left(r_{\pm}-r_{m1}\right)}\left[\frac{B_{m}-A_{m}}{\sqrt{A_{m}B_{m}}}X^{ B}(\tau_{m})\right.\] \[\left.+\frac{2(r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{B_{m}\left(r_{m4} -r_{\pm}\right)-A_{m}\left(r_{\pm}-r_{m1}\right)}R_{1}(\beta_{\pm}^{B}; \Upsilon_{\tau_{m}}^{B}|k^{B})\right]-\mathcal{I}_{\pm_{i}}^{B} \tag{57}\] \[I_{1}^{B}(\tau_{m})=\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A _{m}}\right)\frac{X^{B}(\tau_{m})}{\sqrt{A_{m}B_{m}}}+\frac{2(r_{m4}-r_{m1}) \sqrt{A_{m}B_{m}}}{A_{m}^{2}-B_{m}^{2}}R_{1}(\beta^{B};\Upsilon_{\tau_{m}}^{B} |k^{B})-\mathcal{I}_{1_{i}}^{B} \tag{58}\] \[I_{2}^{B}(\tau_{m})=\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A _{m}}\right)^{2}\frac{X^{B}(\tau_{m})}{\sqrt{A_{m}B_{m}}}\] \[+4\left(\frac{A_{m}r_{m1}-B_{m}r_{m4}}{A_{m}-B_{m}}\right)\frac{ (r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{A_{m}^{2}-B_{m}^{2}}R_{1}(\beta^{B};\Upsilon _{\tau_{m}}^{B}|k^{B})\] \[+\sqrt{A_{m}B_{m}}\left(\frac{2(r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{ A_{m}^{2}-B_{m}^{2}}\right)^{2}R_{2}(\beta^{B};\Upsilon_{\tau_{m}}^{B}|k^{B})- \mathcal{I}_{2_{i}}^{B} \tag{59}\] In the formulas above, the parameters of functions \(R_{1}\) and \(R_{2}\) are related with roots of \(R_{m}(r)\) as follows \[\beta_{\pm}^{B}=-\frac{B_{m}(r_{m4}-r_{\pm})+A_{m}(r_{\pm}-r_{m1}) }{B_{m}(r_{m4}-r_{\pm})-A_{m}(r_{\pm}-r_{m1})}\,\quad\beta^{B}=\frac{A_{m}-B_{m}}{A_{m}+B_{m}} \tag{60}\] \[\Upsilon_{r}^{B}=\cos^{-1}\left(\frac{B_{m}(r_{m4}-r)-A_{m}(r-r_{m1})}{B_{m}(r _{m4}-r)+A_{m}(r-r_{m1})}\right),\quad\Upsilon_{\tau_{m}}^{B}=\mbox{am}\left(X _{B}(\tau_{m})\left|k_{B}\right) \tag{61}\] where am is the Jacobi amplitude function. The quantities \(\mathcal{I}_{\pm_{i}}^{B}\), \(\mathcal{I}_{1_{i}}^{B}\), \(\mathcal{I}_{2_{i}}^{B}\) are obtained by evaluating \(\mathcal{I}_{\pm}^{B}\), \(\mathcal{I}_{1}^{B}\), \(\mathcal{I}_{2}^{B}\) at \(r=r_{i}\) of the initial condition, that is, \(I_{\pm}^{B}(0)=I_{1}^{B}(0)=I_{2}^{B}(0)=0\). Finally, \(R_{1}\) and \(R_{2}\) are the integral of Jacobian elliptic cosine function, \[R_{1}(\alpha;\phi|k)\equiv\int_{0}^{F(\phi|k)}\frac{du}{1+\alpha \mbox{cn}(u|k)}=\frac{1}{1-\alpha^{2}}\left[\Pi\Bigg{(}\frac{\alpha^{2}}{ \alpha^{2}-1};\phi\left|k\right.\right)-\alpha f(p_{\alpha},\phi,k)\right] \tag{62}\] \[R_{2}(\alpha;\phi|k)\equiv\int_{0}^{F(\phi|k)}\frac{du}{[1+\alpha \mathrm{cn}(u|k)]^{2}}\] \[\qquad=\frac{1}{\alpha^{2}-1}\left[F\left(\phi|k\right)-\frac{ \alpha^{2}}{k+(1-k)\alpha^{2}}\left(E(\phi|k)-\frac{\alpha\sin(\phi)\sqrt{1-k \sin^{2}(\phi)}}{1+\alpha\cos(\phi)}\right)\right]\] \[\qquad\qquad+\frac{1}{k+(1-k)\alpha^{2}}\left(2k-\frac{\alpha^{2 }}{\alpha^{2}-1}\right)R_{1}(\alpha;\phi|k) \tag{63}\] in which \[f(p_{\alpha},\phi,k)=\frac{p_{\alpha}}{2}\ln\left(\frac{p_{\alpha}\sqrt{1-k \sin^{2}(\phi)}+\sin(\phi)}{p_{\alpha}\sqrt{1-k\sin^{2}(\phi)}-\sin(\phi)} \right)\,,\quad p_{\alpha}=\sqrt{\frac{\alpha^{2}-1}{k+(1-k)\alpha^{2}}} \tag{64}\] In particular, for \(\alpha=\beta^{B},\ \beta_{\pm}^{B}\), then \(-1<\alpha<1\), which ensures that the solutions are real-valued functions. We have applied the exact solution to the parameter set C of Fig. 4. In this case, \(\lambda_{m}=1\), \(\eta_{m}=7\), and \(\gamma_{m}=0.98\) the result is show in Fig. 5. The black parameters are \(a=0.7\), \(Q=0.7\). 
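For parameters like those of case C quoted above, the functions \(R_{1}\) and \(R_{2}\) can also be cross-checked by integrating their defining integrals in (62)-(63) directly; the minimal sketch below does only that (it does not reimplement the closed forms on the right-hand sides of (62)-(63)), and it assumes that the parameter convention of scipy's Jacobi functions matches the one used for \({\rm cn}(u|k)\) here.

```python
import numpy as np
from scipy.special import ellipj, ellipkinc
from scipy.integrate import quad

def R1_num(alpha, phi, k):
    """R_1(alpha; phi | k) evaluated from its defining integral in Eq. (62)."""
    upper = ellipkinc(phi, k)                                  # F(phi | k)
    integrand = lambda u: 1.0 / (1.0 + alpha*ellipj(u, k)[1])  # index [1] selects cn(u | k)
    val, _ = quad(integrand, 0.0, upper)
    return val

def R2_num(alpha, phi, k):
    """R_2(alpha; phi | k) evaluated from its defining integral in Eq. (63)."""
    upper = ellipkinc(phi, k)
    integrand = lambda u: 1.0 / (1.0 + alpha*ellipj(u, k)[1])**2
    val, _ = quad(integrand, 0.0, upper)
    return val

# Example evaluation with |alpha| < 1, as required for real-valued results.
print(R1_num(0.4, 2.0, 0.8), R2_num(0.4, 2.0, 0.8))
```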
The particle starts from the initial position \(r_{i}=7.4M\), \(\theta_{i}=\pi/2\), and \(\phi_{i}=0\) and falls almost directly into the black hole. Figure 5: Illustration of an orbit off the equatorial plane with the parameters of C in Fig. 4. In this case the particle starts from \(r_{i}<r_{m4}\) and plunges directly into the black hole horizon. See the text for more details. From the above general formulas one obtains the case of the equatorial plane, in which \(\theta=\frac{\pi}{2}\) and \(\eta_{m}\to 0\). The bound plunge solutions of the coordinates \(\phi_{m}^{B}\) and \(t_{m}^{B}\) can be rewritten as functions of \(r\) as follows \[\phi_{m}^{B}\left(r\right)=I_{m\phi}^{B}\left(\tau_{m}\left(r\right)\right)+\lambda_{m}\tau_{m}^{B}\left(r\right)+\phi_{mi}^{B}\] \[=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left[\frac{2Ma}{r_{+}-r_{-}}\left(\mathcal{J}_{m+}-\mathcal{J}_{m-}\right)-\frac{\lambda_{m}}{\gamma_{m}}f\left(r\right)\right]\;, \tag{65}\] \[t_{m}^{B}\left(r\right)=I_{mt}^{B}\left(\tau_{m}\right)+t_{mi}^{B}\] \[=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left\{\frac{4M^{2}}{r_{+}-r_{-}}\left(\mathcal{T}_{m+}-\mathcal{T}_{m-}\right)+\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A_{m}}\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A_{m}}+M\right)f\left(r\right)\right.\] \[+\frac{2(r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{A_{m}^{2}-B_{m}^{2}}\left[2\left(\frac{B_{m}r_{m4}-A_{m}r_{m1}}{B_{m}-A_{m}}\right)+M\right]R_{1}\left(\beta^{B};\varphi(r)|k^{B}\right)\] \[\left.+4\sqrt{A_{m}B_{m}}\left[\frac{(r_{m4}-r_{m1})\sqrt{A_{m}B_{m}}}{A_{m}^{2}-B_{m}^{2}}\right]^{2}R_{2}\left(\beta^{B};\varphi(r)|k^{B}\right)+\left(4M^{2}-Q^{2}\right)f\left(r\right)\right\}\;, \tag{66}\] where \[\mathcal{T}_{m\pm}=\left(r_{\pm}-\frac{Q^{2}}{2M}\right)\mathcal{J}_{m\pm} \tag{67}\] \[f\left(r\right)=\frac{1}{\sqrt{A_{m}B_{m}}}F\left(\varphi(r)|k^{B}\right) \tag{69}\] Fig. 6 shows an exemplary orbit of this type. The above expressions can also be converted into the solutions for the Kerr and RN black holes by taking the respective \(Q\to 0\) and \(a\to 0\) limits. In the Kerr black hole, straightforwardly substituting \(Q=0\) and the root \(r_{m1}=0\) into the definitions of \(k^{B}\) and \(B_{m}\) in (53) and (54), as well as into (65) and (66), gives the solutions. Nevertheless, in the RN black hole the limits of \(a\to 0\) but \(r_{m1}\neq 0\) give a huge simplification to (65), which becomes \[\phi_{m}^{B}\left(r\right)=-\frac{\lambda_{m}}{\sqrt{(1-\gamma_{m}^{2})A_{m}B_{m}}}F\Bigg{(}\cos^{-1}\left(\frac{B_{m}(r_{m4}-r)-A_{m}(r-r_{m1})}{B_{m}(r_{m4}-r)+A_{m}(r-r_{m1})}\right)\Bigg{|}k^{B}\Bigg{)} \tag{70}\] whereas the solution of \(t_{m}^{B}\) remains of the same form as in (66) in the corresponding limits. In the Schwarzschild black hole, where \(a,Q\to 0\), the two horizons become \(r_{+}=2M\) and \(r_{-}=0\), giving \(\mathcal{T}_{m-}\to 0\); together with \(r_{m1}=0\), this leads to further simplification of (70) and (66). ## V Plunging orbits in unbound motion For unbound motion, the particle may start from spatial infinity, characterized by the constants of motion, the azimuthal angular momentum \(\lambda_{m}\), the energy \(\gamma_{m}\), and the Carter constant \(\eta_{m}\). In this section we consider the parameters mainly in the E regime shown in Fig. 7, in which the roots of the radial potential have the properties \(r_{m3}^{*}=r_{m4}\) and \(r_{i}>r_{+}>r_{-}>r_{m2}>r_{m1}\).
This means that there is no turning point in the black hole exterior and the particle starting from spatial infinity will plunge directly into the black hole. Figure 6: Illustration of an orbit on the equatorial plane with the parameters of D in Fig. 4. The particle initiates its journey at point \(r_{i}\), moves outward, reaches the turning point at \(r_{m4}\), and then reverses its course, plunging back into the black hole. The main purpose here is also to derive the exact solutions for the coordinates \(r_{m}^{U}(\tau_{m})\), \(\theta_{m}^{U}(\tau_{m})\), \(\phi_{m}^{U}(\tau_{m})\), and \(t_{m}^{U}(\tau_{m})\) (we add the upper index \(U\) for the unbound case). Although the procedure is identical to that of the previous two sections, special care is needed because of the difference in the structure of the roots. The counterpart of Eq. (51) is \[\tau_{m}^{U}=-\frac{1}{\sqrt{(\gamma_{m}^{2}-1)A_{m}^{U}B_{m}^{U}}}\left[F\left(\psi(r)|k^{U}\right)-F\left(\psi(r_{i})|k^{U}\right)\right] \tag{71}\] where \[\psi(r)=\cos^{-1}\left(\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2})}{A_{m}^{U}(r-r_{m1})+B_{m}^{U}(r-r_{m2})}\right)\,, \tag{72}\] \[k^{U}=\frac{(A_{m}^{U}+B_{m}^{U})^{2}-(r_{m2}-r_{m1})^{2}}{4A_{m}^{U}B_{m}^{U}}\,, \tag{73}\] and \[A_{m}^{U}=\sqrt{(r_{m3}-r_{m2})(r_{m4}-r_{m2})}\,,\;B_{m}^{U}=\sqrt{(r_{m3}-r_{m1})(r_{m4}-r_{m1})} \tag{74}\] Figure 7: This figure shows the portion of parameter space limited by the double root solution, \(r_{m3}=r_{m4}\) and \(r_{m1}<r_{m2}<r_{-}<r_{+}\). In the region of the parameter space containing E and F, \(r_{m3}\) and \(r_{m4}\) are complex, \(r_{m3}=r_{m4}^{*}\) and \(r_{m1}<r_{m2}<r_{-}\). The inset shows the details of the roots of the illustrative cases E and F in the main figure. See the text for more discussion. Notice that \(A_{m}^{U}\) and \(B_{m}^{U}\) involve different combinations of roots than in the bound case (54). The evolution of the coordinate \(r^{U}(\tau_{m})\) is then \[r^{U}(\tau_{m})=\frac{(B_{m}^{U}r_{m2}-A_{m}^{U}r_{m1})+(B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1})\,\text{cn}\left(X^{U}(\tau_{m})\left|k^{U}\right.\right)}{(B_{m}^{U}-A_{m}^{U})+(B_{m}^{U}+A_{m}^{U})\,\text{cn}\left(X^{U}(\tau_{m})\left|k^{U}\right.\right)} \tag{75}\] where \[X^{U}(\tau_{m})=\sqrt{\left(\gamma_{m}^{2}-1\right)A_{m}^{U}B_{m}^{U}}\tau_{m}-F\Bigg{(}\cos^{-1}\left(\frac{A_{m}^{U}(r_{i}-r_{m1})-B_{m}^{U}(r_{i}-r_{m2})}{A_{m}^{U}(r_{i}-r_{m1})+B_{m}^{U}(r_{i}-r_{m2})}\right)\Bigg{|}k^{U}\Bigg{)} \tag{76}\] Again the properties \(B_{m}^{U}>A_{m}^{U}>0\), \(0<k^{U}<1\), and, for \(r_{m1}<r_{m2}<r\), \(-1<\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2})}{A_{m}^{U}(r-r_{m1})+B_{m}^{U}(r-r_{m2})}<1\), guarantee that the Jacobian elliptic cosine function in Eq. (75) is a real-valued function. The missing pieces for a complete description of the motion are the unbound versions of equations (25) and (26), in which the integrals (23) and (24) have been solved in Sec. IV. 
We express the results as follows \[I_{\pm}^{U}(\tau_{m})=-\frac{1}{B_{m}^{U}\left(r_{\pm}-r_{m2} \right)+A_{m}^{U}\left(r_{\pm}-r_{m1}\right)}\left[\frac{B_{m}^{U}+A_{m}^{U}} {\sqrt{A_{m}^{U}B_{m}^{U}}}X^{U}(\tau_{m})\right.\] \[\left.+\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{B_{m}^{U} \left(r_{\pm}-r_{m2}\right)-A_{m}^{U}\left(r_{\pm}-r_{m1}\right)}R_{1}(\beta_ {\pm}^{U};\Upsilon_{\tau_{m}}^{U}|k^{U})\right]-\mathcal{I}_{\pm_{i}}^{U}\;, \tag{77}\] \[I_{1}^{U}(\tau_{m})=\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}} {B_{m}^{U}+A_{m}^{U}}\right)\frac{X^{U}(\tau_{m})}{\sqrt{A_{m}^{U}B_{m}^{U}}} +\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{m}^{U}) ^{2}}R_{1}(\beta^{U};\Upsilon_{\tau_{m}}^{U}|k^{U})-\mathcal{I}_{1_{i}}^{U} \tag{78}\] , \[I_{2}^{U}(\tau_{m})=\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}} {B_{m}^{U}+A_{m}^{U}}\right)^{2}\frac{X^{U}(\tau_{m})}{\sqrt{A_{m}^{U}B_{m}^{ U}}}\] \[+4\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}}{B_{m}^{U}+A_{m}^{U }}\right)\frac{(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{ m}^{U})^{2}}R_{1}(\beta^{U};\Upsilon_{\tau_{m}}^{U}|k^{U})\] \[+\sqrt{A_{m}^{U}B_{m}^{U}}\left(\frac{2(r_{m2}-r_{m1})\sqrt{A_{m }^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{m}^{U})^{2}}\right)^{2}R_{2}(\beta^{U}; \Upsilon_{\tau_{m}}^{U}|k^{U})-\mathcal{I}_{2_{i}}^{U}\;. \tag{79}\] where the functions \(R_{1}\) and \(R_{2}\) have been defined in (62) and (63) and the unbound version of the parameters now read as \[\beta_{\pm}^{U}=\frac{B_{m}^{U}(r_{\pm}-r_{m2})+A_{m}^{U}(r_{\pm}-r_{m1})}{B_{ m}^{U}(r_{\pm}-r_{m2})-A_{m}^{U}(r_{\pm}-r_{m1})},\quad\beta^{U}=\frac{B_{m}^{U}+A_{m }^{U}}{B_{m}^{U}-A_{m}^{U}} \tag{80}\] \[\Upsilon_{r}^{U}=\cos^{-1}\left(\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2})}{A_{ m}^{U}(r-r_{m1})+B_{m}^{U}(r-r_{m2})}\right),\quad\Upsilon_{\tau_{m}}^{U}=\text{am} \left(X^{U}(\tau_{m})\left|k^{U}\right.\right) \tag{81}\] As before the initial conditions \(\mathcal{I}_{\pm_{i}}^{U}\), \(\mathcal{I}_{1_{i}}^{U}\), \(\mathcal{I}_{2_{i}}^{U}\) are obtained by evaluating \(\mathcal{I}_{\pm}^{U}\), \(\mathcal{I}_{1}^{U}\), \(\mathcal{I}_{2}^{U}\) at \(r=r_{i}\) of the initial condition. Also, for \(\alpha\) in the definition of \(R_{1}\) and \(R_{2}\) functions (62), in this case \(\alpha=\beta^{U},\beta_{\pm}^{U}\) with \(0<\alpha<1\) where the solutions are real-valued functions. Fig. 8 illustrate the orbit with parameters of E in Fig. 7. 
\[t_{m}^{U}\left(r\right)=I_{mt}^{U}\left(\tau_{m}\right)+t_{mi}^{U}\] \[=\frac{\gamma_{m}}{\sqrt{1-\gamma_{m}^{2}}}\left\{\frac{4M^{2}}{r_{+}-r_{-}}\left(\mathcal{V}_{m-}-\mathcal{V}_{m+}\right)+\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}}{B_{m}^{U}+A_{m}^{U}}\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}}{B_{m}^{U}+A_{m}^{U}}+M\right)g\left(r\right)\right.\] \[+\left.\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{m}^{U})^{2}}\left[2\left(\frac{B_{m}^{U}r_{m2}+A_{m}^{U}r_{m1}}{B_{m}^{U}+A_{m}^{U}}\right)+M\right]R_{1}\left(\beta^{U};\psi(r)|k^{U}\right)\right.\] \[\left.+4\sqrt{A_{m}^{U}B_{m}^{U}}\left[\frac{(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{(B_{m}^{U})^{2}-(A_{m}^{U})^{2}}\right]^{2}R_{2}\left(\beta^{U};\psi(r)|k^{U}\right)+\left(4M^{2}-Q^{2}\right)g\left(r\right)\right\} \tag{83}\] where \[\mathcal{V}_{m\pm}=\left(r_{\pm}-\frac{Q^{2}}{2M}\right)\mathcal{K}_{m\pm} \tag{84}\] \[\mathcal{K}_{m\pm}=\left(r_{\pm}-\frac{a\left(\frac{\lambda_{m}}{\gamma_{m}}\right)+Q^{2}}{2M}\right)\left\{\frac{(B_{m}^{U}+A_{m}^{U})g\left(r\right)}{B_{m}^{U}(r_{\pm}-r_{m2})+A_{m}^{U}(r_{\pm}-r_{m1})}\right.\] \[\left.+\frac{2(r_{m2}-r_{m1})\sqrt{A_{m}^{U}B_{m}^{U}}}{\left[B_{m}^{U}(r_{\pm}-r_{m2})\right]^{2}-\left[A_{m}^{U}(r_{\pm}-r_{m1})\right]^{2}}R_{1}\left(\beta_{\pm}^{U};\psi(r)|k^{U}\right)\right\} \tag{85}\] \[g\left(r\right)=\frac{1}{\sqrt{A_{m}^{U}B_{m}^{U}}}F\left(\psi(r)|k^{U}\right) \tag{86}\] Fig. 9 shows an example with the parameters of F in Fig. 7. Figure 9: Illustration of an equatorial inspiral orbit with the parameters in the F region in Fig. 7 for \(\eta_{m}=0\), where the particle starts from spatial infinity and inspirals directly into the horizon. In the Kerr black hole, for \(Q\to 0\), the solutions are given by taking \(r_{m1}=0\) in (82) and (83). In the RN black hole, for \(a\to 0\) but \(r_{m1}\neq 0\), (82) can be significantly simplified as \[\phi_{m}^{U}\left(r\right)=-\frac{\lambda_{m}}{\sqrt{(\gamma_{m}^{2}-1)A_{m}^{U}B_{m}^{U}}}F\Bigg{(}\cos^{-1}\left(\frac{A_{m}^{U}(r-r_{m1})-B_{m}^{U}(r-r_{m2})}{A_{m}^{U}(r-r_{m1})+B_{m}^{U}(r-r_{m2})}\right)\Bigg{|}k^{U}\Bigg{)} \tag{87}\] In the Schwarzschild black hole with \(a,Q\to 0\), the root \(r_{m1}=0\) has to be applied to the RN results above. Nevertheless, the solution of \(t_{m}^{U}\) in the various black holes retains the same form as (83) after taking the proper limits. ## VI Conclusions In this paper, we analytically derive the inspiral solutions for general nonequatorial orbits into Kerr-Newman black holes for both bound and unbound motion. The solutions can be written in terms of elliptic integrals and Jacobian elliptic functions of manifestly real functions of the Mino time. Various limits have been taken to show the respective solutions in the Kerr, Reissner-Nordström, and Schwarzschild black holes. In the case of the bound motion, we extend the study of [12; 13] to consider a particle that starts from \(r\leq r_{\text{ISSO}}\) and then inspirals into the black hole, with the particular normalized energy \(\gamma_{m}\), azimuthal angular momentum \(\lambda_{m}\), and Carter constant \(\eta_{m}\) in Fig. 1 for which the radial potential has a triple root. In the limit of \(Q\to 0\) and restricting to the equatorial plane, the obtained solution reduces to the one obtained in [12; 13]. 
We also consider the other type of inspiral motion, with the values of \(\gamma_{m},\lambda_{m},\eta_{m}\) in Fig. 4, where there are two real roots of the radial potential, one inside the horizon, \(r_{m1}\), and the other outside the horizon, \(r_{m4}\). Thus, the particle starts from \(r\leq r_{m4}\) and directly inspirals into the black hole. As for the unbound state, the values of \(\gamma_{m},\lambda_{m},\eta_{m}\) are shown in Fig. 7, where there are two real roots \(r_{m1},r_{m2}\) inside the horizon. The particle starts from spatial infinity and inspirals directly into the black hole. These exact solutions of the spiral motion into the black hole are of astrophysical interest because they have direct relevance to black hole accretion phenomena. There can be significant X-ray emission and other observational signals such as gravitational waves from matter flowing inward. These explicit solutions may find applications in numerical accretion studies and in the generated gravitational waveforms, as well as in extending current theories of black hole accretion [17; 18; 19]. ## Appendix A The angular potential \(\Theta(\theta)\) and the integrals \(G_{m\theta}\), \(G_{m\phi}\), and \(G_{mt}\) The detailed studies related to the \(\Theta_{m}\) potential in the \(\theta\) direction can be found in [23; 15]. Here we summarize some of the relevant parts for the completeness of the presentation. The angular potential (12) for the particle can be rewritten in terms of \(u=\cos^{2}\theta\), and the equation of motion requires \(\Theta_{m}\geq 0\), which restricts the parameter space of \(\lambda_{m}\), \(\eta_{m}\), and \(\gamma_{m}\) (see Fig. 9 in [15]). The roots of \(\Theta_{m}(\theta)=0\) can be written as [23], \[u_{m,\pm}=\frac{\Delta_{m,\theta}\pm\sqrt{\Delta_{m,\theta}^{2}+\frac{4\,a^{2}\,\eta_{m}}{\gamma_{m}^{2}-1}}}{2a^{2}}\,,\ \ \Delta_{m\theta}=a^{2}-\frac{\eta_{m}+\lambda_{m}^{2}}{\gamma_{m}^{2}-1}\,, \tag{13}\] which give the boundaries of the parameter space. For \(\eta_{m}>0\) and nonzero \(\lambda_{m}\), for motion in which the particle starts off from the black hole exterior, \(1>u_{+}>0\) is the only positive root, which in turn gives two roots at \(\theta_{m+}=\cos^{-1}\left(-\sqrt{u_{+}}\right),\theta_{m-}=\cos^{-1}\left(\sqrt{u_{+}}\right)\). The particle travels between the southern and northern hemispheres, crossing the equator at \(\theta=\frac{\pi}{2}\). The solution of the coordinate \(\theta_{m}(\tau_{m})\) can be obtained by an inversion of (14) [23; 15] \[\theta(\tau_{m})=\cos^{-1}\left(-\nu_{\theta_{i}}\sqrt{u_{m+}}\text{sn}\left(\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)}\left(\tau_{m}+\nu_{\theta_{i}}\mathcal{G}_{m\theta_{i}}\right)\left|\frac{u_{m+}}{u_{m-}}\right)\right) \tag{14}\] where the Mino time \[\tau_{m}=G_{m\theta}=p(\mathcal{G}_{m\theta_{+}}-\mathcal{G}_{m\theta_{-}})+\nu_{\theta_{i}}\left[(-1)^{p}\mathcal{G}_{m\theta}-\mathcal{G}_{m\theta_{i}}\right] \tag{15}\] and sn is the Jacobi elliptic sine function. In (15) \(p\) counts the times the trajectory passes through the turning points and \(\nu_{\theta_{i}}=\text{sign}\left(\frac{d\theta_{i}}{d\tau^{\prime}}\right)\). The function \(\mathcal{G}_{m\theta}\) is \[\mathcal{G}_{m\theta}=-\frac{1}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)}}F\left(\sin^{-1}\left(\frac{\cos\theta}{\sqrt{u_{m+}}}\right)\left|\frac{u_{m+}}{u_{m-}}\right)\right.\,. 
\tag{16}\] The evolution of coordinates \(\phi_{m}(\tau_{m})\) and \(t_{m}(\tau_{m})\) in (15) and (16) involves the integrals (21) and (22), which can expressed as follows [15] \[G_{m\phi}(\tau_{m})=\frac{1}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)} }\Pi\left(u_{m+};\text{am}\left(\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1 \right)}\left(\tau_{m}+\nu_{\theta_{i}}\mathcal{G}_{\theta_{i}}\right)\left| \frac{u_{m+}}{u_{m-}}\right)\left|\frac{u_{m+}}{u_{m-}}\right)-\nu_{\theta_{i }}\mathcal{G}_{m\phi_{i}}\,, \tag{17}\] \[\mathcal{G}_{\phi_{i}}=-\frac{1}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1 \right)}}\Pi\left(u_{m+};\sin^{-1}\left(\frac{\cos\theta_{i}}{\sqrt{u_{m+}}} \right)\left|\frac{u_{m+}}{u_{m-}}\right)\right.\,, \tag{18}\] \[G_{mt}(\tau_{m})=-\frac{2u_{m+}}{\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)}} E^{\prime}\left(\text{am}\left(\sqrt{-u_{m-}a^{2}\left(\gamma_{m}^{2}-1\right)} \left(\tau_{m}+\nu_{\theta_{i}}\mathcal{G}_{m\theta_{i}}\right)\left|\frac{u_{m +}}{u_{m-}}\right)\right)\left|\frac{u_{m+}}{u_{m-}}\right)-\nu_{\theta_{i}} \mathcal{G}_{mt_{i}}\,, \tag{10}\] \[\mathcal{G}_{mt_{i}}=\frac{2u_{+}}{\sqrt{-u_{-}\hat{a}^{2}\left(\gamma_{m}^{2} -1\right)}}E^{\prime}\left(\text{sin}^{-1}\left(\frac{\cos\theta_{i}}{\sqrt{u _{+}}}\right)\left|\frac{u_{+}}{u_{-}}\right)\, \tag{11}\] where \(E\) and \(\Pi\) are the incomplete elliptic integral of the second and third kinds. Also the prime denotes the derivative with respect to the second argument, \[E^{\prime}\left(\varphi\left|k\right.\right)=\partial_{k}E\left(\varphi\left| k\right.\right)=\frac{E\left(\varphi\left|k\right.\right)-F\left(\varphi\left|k \right.\right)}{2k}\,. \tag{12}\] ## Appendix B The radial potential \(R_{m}(r)\) and its roots As for the radial potential (11), it is a quartic polynomial. We then rewrite \(R_{m}(r)\) as follows \[R_{m}(r)=S_{m}r^{4}+T_{m}r^{3}+U_{m}r^{2}+V_{m}r+W_{m}\,, \tag{13}\] where the coefficients functions are given in terms constants of motion and parameters of the black hole as \[S_{m}=\gamma_{m}^{2}-1, \tag{14}\] \[T_{m}=2M, \tag{15}\] \[U_{m}=a^{2}\left(\gamma_{m}^{2}-1\right)-Q^{2}-\eta_{m}-\lambda_{m}^{2}, \tag{16}\] \[V_{m}=2M\Big{[}(a\gamma_{m}-\lambda_{m})^{2}+\eta_{m}\Big{]}, \tag{17}\] \[W_{m}=-a^{2}\eta_{m}-Q^{2}\Big{[}(a\gamma_{m}-\lambda_{m})^{2}+\eta_{m}\Big{]}\,. \tag{18}\] Furthermore, it is useful to represent the radial potential using its roots, namely \[R_{m}(r)=\left(\gamma_{m}^{2}-1\right)(r-r_{m1})(r-r_{m2})(r-r_{m3})(r-r_{m4} )\,. \tag{19}\] The different dynamical behaviors of the system are characterized by the positions of these roots. See figures (1), (4), (7), and also References [15; 23] The roots of a quartic equation are well known, but cumbersome. 
We will write them down for the sake of unifying notation and ensuring the completeness of the work \[r_{m1}=-\frac{M}{2\left(\gamma_{m}^{2}-1\right)}-z_{m}-\sqrt{-\,\frac{X_{m}}{ 2}-z_{m}^{2}+\frac{Y_{m}}{4z_{m}}}\,, \tag{20}\] \[r_{m2} = -\frac{M}{2\left(\gamma_{m}^{2}-1\right)}-z_{m}+\sqrt{-\frac{X_{m}}{ 2}-z_{m}^{2}+\frac{Y_{m}}{4z_{m}}}\,, \tag{101}\] \[r_{m3} = -\frac{M}{2\left(\gamma_{m}^{2}-1\right)}+z_{m}-\sqrt{-\frac{X_{m }}{2}-z_{m}^{2}-\frac{Y_{m}}{4z_{m}}}\,,\] (102) \[r_{m4} = -\frac{M}{2\left(\gamma_{m}^{2}-1\right)}+z_{m}+\sqrt{-\frac{X_{m }}{2}-z_{m}^{2}-\frac{Y_{m}}{4z_{m}}}\,, \tag{103}\] where \[z_{m}=\sqrt{\frac{\Omega_{m+}+\Omega_{m-}-\frac{X_{m}}{3}}{2}}\,, \tag{104}\] and \[\Omega_{m\pm}=\sqrt[3]{-\frac{\varkappa_{m}}{2}\pm\sqrt{\left(\frac{\varpi_{ m}}{3}\right)^{3}+\left(\frac{\varkappa_{m}}{2}\right)^{2}}} \tag{105}\] with \[\varpi_{m}=-\,\frac{X_{m}^{2}}{12}-Z_{m}\,,\qquad\varkappa_{m}=-\,\frac{X_{m }}{3}\left[\left(\frac{X_{m}}{6}\right)^{2}-Z_{m}\right]-\,\frac{Y_{m}^{2}}{ 8}\,. \tag{106}\] \(X_{m}\), \(Y_{m}\), and \(Z_{m}\) are the short notation for \[X_{m} = \frac{8U_{m}S_{m}-3T_{m}^{2}}{8S_{m}^{2}}\,, \tag{107}\] \[Y_{m} = \frac{T_{m}^{3}-4U_{m}T_{m}S_{m}+8V_{m}S_{m}^{2}}{8S_{m}^{3}}\,,\] (108) \[Z_{m} = \frac{-3T_{m}^{4}+256W_{m}S_{m}^{3}-64V_{m}T_{m}S_{m}^{2}+16U_{m} T_{m}^{2}S_{m}}{256S_{m}^{4}}\,. \tag{109}\] ###### Acknowledgements. This work was supported in part by the National Science and Technology council (NSTC) of Taiwan, Republic of China.
2310.00093
DataDAM: Efficient Dataset Distillation with Attention Matching
Researchers have long tried to minimize training costs in deep learning while maintaining strong generalization across diverse datasets. Emerging research on dataset distillation aims to reduce training costs by creating a small synthetic set that contains the information of a larger real dataset and ultimately achieves test accuracy equivalent to a model trained on the whole dataset. Unfortunately, the synthetic data generated by previous methods are not guaranteed to distribute and discriminate as well as the original training data, and they incur significant computational costs. Despite promising results, there still exists a significant performance gap between models trained on condensed synthetic sets and those trained on the whole dataset. In this paper, we address these challenges using efficient Dataset Distillation with Attention Matching (DataDAM), achieving state-of-the-art performance while reducing training costs. Specifically, we learn synthetic images by matching the spatial attention maps of real and synthetic data generated by different layers within a family of randomly initialized neural networks. Our method outperforms the prior methods on several datasets, including CIFAR10/100, TinyImageNet, ImageNet-1K, and subsets of ImageNet-1K across most of the settings, and achieves improvements of up to 6.5% and 4.1% on CIFAR100 and ImageNet-1K, respectively. We also show that our high-quality distilled images have practical benefits for downstream applications, such as continual learning and neural architecture search.
Ahmad Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis
2023-09-29T19:07:48Z
http://arxiv.org/abs/2310.00093v2
# DataDAM: Efficient Dataset Distillation with Attention Matching ###### Abstract Researchers have long tried to minimize training costs in deep learning while maintaining strong generalization across diverse datasets. Emerging research on dataset distillation aims to reduce training costs by creating a small synthetic set that contains the information of a larger real dataset and ultimately achieves test accuracy equivalent to a model trained on the whole dataset. Unfortunately, the synthetic data generated by previous methods are not guaranteed to distribute and discriminate as well as the original training data, and they incur significant computational costs. Despite promising results, there still exists a significant performance gap between models trained on condensed synthetic sets and those trained on the whole dataset. In this paper, we address these challenges using efficient Dataset Distillation with Attention Matching (DataDAM), achieving state-of-the-art performance while reducing training costs. Specifically, we learn synthetic images by matching the spatial attention maps of real and synthetic data generated by different layers within a family of randomly initialized neural networks. Our method outperforms the prior methods on several datasets, including CIFAR10/100, TinyImageNet, ImageNet-1K, and subsets of ImageNet-1K across most of the settings, and achieves improvements of up to 6.5% and 4.1% on CIFAR100 and ImageNet-1K, respectively. We also show that our high-quality distilled images have practical benefits for downstream applications, such as continual learning and neural architecture search. ## 1 Introduction Deep learning has been highly successful in various fields, including computer vision and natural language processing, due to the use of large-scale datasets and modern Deep Neural Networks (DNNs) [14, 23, 17, 26, 43]. However, extensive infrastructure resources for training, hyperparameter tuning, and architectural searches make it challenging to reduce computational costs while maintaining comparable performance. Two primary approaches to address this issue are model-centric and data-centric. Model-centric methods involve model compression techniques [24, 57, 1, 55, 59, 44, 27], while data-centric methods concentrate on constructing smaller datasets with enough information for training, which is the focus of this paper. A traditional data-centric approach is the coreset selection method, wherein we select a representative subset of an original dataset [41, 8, 4, 46, 49]; however, these methods have limitations as they rely on heuristics to generate a coarse approximation of the whole dataset, which may lead to a suboptimal solution for downstream tasks like image Figure 1: (a) Data distribution of the distilled images on the CIFAR10 dataset with 50 images per class (IPC50) for CAFE [52] and DataDAM. (b) Performance comparison with state-of-the-art methods on the CIFAR10 dataset for varying IPCs. classification [49, 41]. Dataset distillation (or condensation) [53] is proposed as an alternative, which distills knowledge from a large training dataset into a smaller synthetic set such that a model trained on it achieves competitive testing performance with one trained on the real dataset. The condensed synthetic sets contain valuable information, making them a popular choice for various machine learning applications like continual learning [53, 64, 62], neural architecture search [13, 63, 64], federated learning [58, 66], and privacy-preserving [15, 50] tasks. 
Dataset distillation was first proposed by Wang [53] where bi-level meta-learning was used to optimize model parameters on synthetic data in the inner loop and refine the data with meta-gradient updates to minimize the loss on the original data in the outer loop. Various methods have been proposed to overcome the computational expense of this method, including approximating the inner optimization with kernel methods [5, 38, 37, 65], surrogate objectives like gradient matching [64, 62, 33], trajectory matching [9], and distribution matching [52, 63]. The kernel-based methods and gradient matching work still require bi-level optimization and second-order derivation computation, making training a difficult task. Trajectory matching [9] demands significant GPU memory for extra disk storage and expert model training. CAFE [52] uses dynamic bi-level optimization with layer-wise feature alignment, but it may generate biased images and incur a significant time cost (Figure 1). Thus, these methods are not scalable for larger datasets such as ImageNet-1K [14]. Distribution matching (DM) [63] was proposed as a scalable solution for larger datasets by skipping optimization steps in the inner loop. However, DM usually underperforms compared to prior methods [9]. In this paper, we propose a new framework called "**Dataset**Distillation with **A**ttention **M**atching (DataDAM)" to overcome computational problems, achieve an unbiased representation of the real data distribution, and outperform the performance of the existing methods. Due to the effectiveness of randomly initialized networks in generating strong representations that establish a distance-preserving embedding of the data [7, 45, 19, 63], we leverage multiple randomly initialized DNNs to extract meaningful representations from real and synthetic datasets. We align their most discriminative feature maps using the Spatial Attention Matching (SAM) module and minimize the distance between them with the MSE loss. We further reduce the last-layer feature distribution disparities between the two datasets with a complementary loss as a regularizer. Unlike existing methods [64, 52, 9], our approach does not rely on pre-trained network parameters or employ bi-level optimization, making it a promising tool for synthetic data generation. The generated synthetic dataset does not introduce any bias into the data distribution while outperforming concurrent methods, as shown in Figure 1. The contributions of our study are: **[C1]**: We proposed an effective end-to-end dataset distillation method with attention matching and feature distribution alignment to closely approximate the distribution of the real dataset with low computational costs. **[C2]**: Our method is evaluated on computer vision datasets with different resolutions, where it achieves state-of-the-art results across multiple benchmark settings. Our approach offers up to a 100x reduction in training costs while simultaneously enabling cross-architecture generalizations. **[C3]**: Our distilled data can enhance downstream applications by improving memory efficiency for continual learning and accelerating neural architecture search through a more representative proxy dataset. ## 2 Related Work Dataset Distillation.Wang [53] first introduced dataset distillation by expressing network parameters as a function of synthetic data and optimizing the synthetic set to minimize the training loss on real training data. Later works extended this approach with soft labels [5] and a generator network [48]. 
Researchers have proposed simplifying the neural network model in bi-level optimization using kernel methods, such as ridge regression, which has a closed-form solution [5, 65], and a kernel ridge regression model with Neural Tangent Kernel [32] (NTK) that approximates the inner optimization [38, 37]. Alternatively, some studies have utilized surrogate objectives to address unrolled optimization problems. Dataset condensation (DC) [64] and DCC [33] generate synthetic images by matching the weight gradients of neural networks on real and distilled training datasets, while Zhao [62] improve gradient matching with data augmentation. MTT [9] matches model parameter trajectories trained with synthetic and real datasets, and CAFE [63] and DM [52] match features generated by a model using distilled and real datasets. However, these methods have limitations, including bi-level optimization [64, 62, 52, 32], second-order derivative computation [64], generating biased examples [62, 52], and massive GPU memory demands [9, 65]. In contrast, our approach matches the spatial attention map in intermediate layers, reducing memory costs while outperforming most existing methods on standard benchmarks. Coreset Selection.Coreset selection is another data-centric approach that chooses a representative subset of an original dataset using heuristic selection criteria. For example, random selection [41] selects samples randomly; Harding [8, 4] selects the samples closest to the cluster center for each class center; K-Center [46] chooses multiple center points of a class to minimize the maximum distance between data points and their nearest center point; and [49] identifies training samples that are easily forgotten during the training process. However, heuristics-based methods may not be optimal for downstream tasks like image classification, and finding an informative corset may be challenging when the dataset's information is not concentrated in a few samples. Instead, our approach learns a computationally efficient synthetic set that is not limited to a subset of the original training samples. Attention Mechanism.Attention has been widely used in deep learning to improve performance on various tasks [2, 54, 60], with initial applications in natural language processing by Bahdanau [2] for language translation. Attention has since been used in computer vision, with global attention models [54] for improved classification accuracy on image datasets and convolutional block attention modules [56] for learning to attend to informative feature maps. Attention has also been used for model compression in knowledge distillation [60]. However, this mechanism has not been explored in the context of dataset distillation. To fill this gap, we propose a spatial attention matching module to approximate the distribution of the real dataset. ## 3 Methodology In this section, we propose a novel end-to-end framework called **Dat**aset **D**istillation with **A**ttention **M**atching (**D**ataDAM), which leverages attention maps to synthesize data that closely approximates the real training data distribution. The high dimensionality of training images makes it difficult to estimate the real data distribution accurately. Therefore, we represent each training image using spatial attention maps generated by different layers within a family of randomly initialized neural networks. 
These maps effectively highlight the most discriminative regions of the input image that the network focuses on at different layers (early, intermediate, and last layers) while capturing low-, mid-, and high-level representation information of the image. Although each individual network provides a partial interpretation of the image, the family of these randomly initialized networks produces a more comprehensive representation. ### Dataset Distillation with Attention Matching Given a large-scale dataset \(\mathcal{T}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{|\mathcal{T}|}\) containing \(|\mathcal{T}|\) real image-label pairs, we first initialize a learnable synthetic dataset \(\mathcal{S}=\{(\mathbf{s}_{j},y_{j})\}_{j=1}^{|\mathcal{S}|}\) with \(|\mathcal{S}|\) synthetic image and label pairs, by using either random noise or a selection of real images obtained through random sampling or a clustering algorithm such as K-Center [13, 46]. For each class \(k\), we sample a batch of real and synthetic data (\(B_{k}^{\mathcal{T}}\) and \(B_{k}^{\mathcal{S}}\), resp.) and extract features using a neural network \(\phi_{\mathbf{\theta}}(\cdot)\) with standard network random initialization \(\mathbf{\theta}\)[22]. Figure 2 shows the proposed approach, where the neural network \(\phi_{\mathbf{\theta}}(\cdot)\), consisting of \(L\) layers, is employed to embed the real and synthetic sets. The network generates feature maps for each dataset, represented as \(\phi_{\mathbf{\theta}}(\mathcal{T}_{k})=[\mathbf{f}_{\mathbf{\theta},1}^{\mathcal{T}_{k}},\cdots,\mathbf{f}_{\mathbf{\theta},L}^{\mathcal{T}_{k}}]\) and \(\phi_{\mathbf{\theta}}(\mathcal{S}_{k})=[\mathbf{f}_{\mathbf{\theta},1}^{\mathcal{S}_{k}},\cdots,\mathbf{f}_{\mathbf{\theta},L}^{\mathcal{S}_{k}}]\), respectively. The feature \(\mathbf{f}_{\mathbf{\theta},l}^{\mathcal{T}_{k}}\) is a multi-dimensional array in \(\mathbb{R}^{|B_{k}^{\mathcal{T}}|\times C_{l}\times W_{l}\times H_{l}}\), coming from the real dataset in the \(l^{\text{th}}\) layer, where \(C_{l}\) represents the Figure 2: (a) Illustration of the proposed DataDAM method. DataDAM includes a Spatial Attention Matching (SAM) module to capture the dataset’s distribution and a complementary loss for matching the feature distributions in the last layer of the encoder network. (b) The internal architecture of the SAM module. number of channels and \(H_{l}\times W_{l}\) is the spatial dimensions. Similarly, a feature \(\mathbf{f}_{\mathbf{\theta},l}^{\mathcal{S}_{k}}\) is extracted for the synthetic set. The **S**patial **A**ttention **M**atching (SAM) module then generates attention maps for the real and synthetic images using a feature-based mapping function \(A(\cdot)\). The function takes the feature maps of each layer (except the last layer) as an input and outputs two separate attention maps: \(A(\phi_{\mathbf{\theta}}(\mathcal{T}_{k}))=[\mathbf{a}_{\mathbf{\theta},1}^{\mathcal{T}_{k }},\cdots,\mathbf{a}_{\mathbf{\theta},L-1}^{\mathcal{T}_{k}}]\) and \(A(\phi_{\mathbf{\theta}}(\mathcal{S}_{k}))=[\mathbf{a}_{\mathbf{\theta},1}^{\mathcal{S}_{k }},\cdots,\mathbf{a}_{\mathbf{\theta},L-1}^{\mathcal{S}_{k}}]\) for the real and synthetic sets, respectively. Prior studies [60, 61] have shown that the absolute value of a hidden neuron activation can indicate its importance for a given input, thus we create a spatial attention map by aggregating the absolute values of the feature maps across the channel dimension. 
This means that the feature map \(\mathbf{f}_{\mathbf{\theta},l}^{\mathcal{T}_{k}}\) of the \(l^{\text{th}}\) layer is converted into a spatial attention map \(\mathbf{a}_{\mathbf{\theta},l}^{\mathcal{T}_{k}}\in\mathbb{R}^{|B_{k}^{\mathcal{T}}| \times W_{l}\times H_{l}}\) using the following pooling operation: \[A(\mathbf{f}_{\mathbf{\theta},l}^{\mathcal{T}_{k}})=\sum_{i=1}^{C_{l}}\big{|}(\mathbf{f}_{ \mathbf{\theta},l}^{\mathcal{T}_{k}})_{i}\big{|}^{p}, \tag{1}\] where, \((\mathbf{f}_{\mathbf{\theta},l}^{\mathcal{T}_{k}})_{i}=\mathbf{f}_{\mathbf{\theta},l}^{ \mathcal{T}_{k}}(:,i,:,:)\) is the feature map of channel \(i\) from the \(l^{\text{th}}\) layer and the power and absolute value operations are applied element-wise. The resulting attention map emphasizes the spatial locations associated with neurons with the highest activations. This helps retain the most informative regions and generates a more efficient feature descriptor. In a similar manner, the attention maps for synthetic data can be obtained as \(\mathbf{a}_{\mathbf{\theta},l}^{\mathcal{S}_{k}}\). The effect of parameter \(p\) is studied in the supplementary materials. To capture the distribution of the original training set at different levels of representations, we compare the normalized spatial attention maps of each layer (excluding the last layer) between the real and synthetic sets using the loss function \(\mathcal{L}_{\text{SAM}}\), which is formulated as \[\mathop{\mathbb{E}}_{\mathbf{\theta}\sim\mathcal{P}_{\mathbf{\theta}}}\bigg{[}\sum_{ k=1}^{K}\sum_{l=1}^{L-1}\bigg{\|}\mathbb{E}_{\mathcal{T}_{k}}\Big{[}\frac{ \mathbf{z}_{\mathbf{\theta},l}^{\mathcal{T}_{k}}}{\big{\|}\mathbf{z}_{\mathbf{\theta},l}^{ \mathcal{T}_{k}}\big{\|}_{2}}\Big{]}-\mathbb{E}_{\mathcal{S}_{k}}\Big{[}\frac {\mathbf{z}_{\mathbf{\theta},l}^{\mathcal{S}_{k}}}{\big{\|}\mathbf{z}_{\mathbf{\theta},l}^{ \mathcal{S}_{k}}\big{\|}_{2}}\Big{]}\Big{\|}^{2}\bigg{]}, \tag{2}\] where, \(\mathbf{z}_{\mathbf{\theta},l}^{\mathcal{T}_{k}}=vec(\mathbf{a}_{\mathbf{\theta},l}^{\mathcal{ T}_{k}})\in\mathbb{R}^{|B_{k}^{\mathcal{T}}|\times(W_{l}\times H_{l})}\) and \(\mathbf{z}_{\mathbf{\theta},l}^{\mathcal{S}_{k}}=vec(\mathbf{a}_{\mathbf{\theta},l}^{\mathcal{ S}_{k}})\in\mathbb{R}^{|B_{k}^{\mathcal{S}}|\times(W_{l}\times H_{l})}\) are the \(l^{\text{th}}\) pair of vectorized attention maps along the spatial dimension for the real and synthetic sets, respectively. The parameter \(K\) is the number of categories in a dataset, and \(P_{\mathbf{\theta}}\) denotes the distribution of network parameters. It should be noted that normalization of the attention maps in the SAM module improves performance on the syntactic set (see supplementary materials). Despite the ability of \(\mathcal{L}_{\text{SAM}}\) to approximate the real data distribution, a discrepancy still exists between the synthetic and real training sets. The features in the final layer of neural network models encapsulate the highest-level abstract information of the images in the form of an embedded representation, which has been shown to effectively capture the semantic information of the input data [42, 63, 35, 21]. Therefore, we leverage a complementary loss as a regularizer to promote similarity in the mean vectors of the embeddings between the two datasets for each class. To that end, we employ the widely known Maximum Mean Discrepancy (MMD) loss, \(\mathcal{L}_{\text{MMD}}\), which is calculated within a family of kernel mean embeddings in a Reproducing Kernel Hilbert Space (RKHS) [21]. 
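Before turning to the complementary loss term, a minimal PyTorch-style sketch of the pooling operation in Eq. (1) and of a single layer's contribution to \(\mathcal{L}_{\text{SAM}}\) in Eq. (2) may be helpful; the tensor names and the exponent value \(p=4\) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=4):
    """Eq. (1): channel-wise |.|^p pooling of a (B, C, W, H) feature map."""
    return feat.abs().pow(p).sum(dim=1)                # -> (B, W, H)

def sam_layer_loss(feat_real, feat_syn, p=4):
    """One layer's term in Eq. (2): squared distance between the
    batch-averaged, L2-normalized attention maps of the two sets."""
    z_real = F.normalize(attention_map(feat_real, p).flatten(1), dim=1)
    z_syn = F.normalize(attention_map(feat_syn, p).flatten(1), dim=1)
    return (z_real.mean(dim=0) - z_syn.mean(dim=0)).pow(2).sum()
```

In the full objective these per-layer terms are summed over the first \(L-1\) layers and over the \(K\) classes, and averaged over random network initializations \(\mathbf{\theta}\sim P_{\mathbf{\theta}}\).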
The \(\mathcal{L}_{\text{MMD}}\) loss is formulated as \[\mathop{\mathbb{E}}_{\mathbf{\theta}\sim P_{\mathbf{\theta}}}\bigg{[}\sum_{k=1}^{K} \bigg{\|}\mathbb{E}_{\mathcal{T}_{k}}\Big{[}\mathbf{\tilde{f}}_{\mathbf{\theta},L}^{ \mathcal{T}_{k}}\Big{]}-\mathbb{E}_{\mathcal{S}_{k}}\Big{[}\mathbf{\tilde{f}}_{ \mathbf{\theta},L}^{\mathcal{S}_{k}}\Big{]}\Big{\|}_{\mathcal{H}}^{2}\bigg{]}, \tag{3}\] where \(\mathcal{H}\) is a reproducing kernel Hilbert space. The \(\mathbf{\tilde{f}}_{\mathbf{\theta},L}^{\mathcal{T}_{k}}=vec(\mathbf{f}_{\mathbf{\theta},L}^{ \mathcal{T}_{k}})\in\mathbb{R}^{|B_{k}^{\mathcal{T}}|\times(C_{L}\times W_{L} \times H_{L})}\) and \(\mathbf{\tilde{f}}_{\mathbf{\theta},L}^{\mathcal{S}_{k}}=vec(\mathbf{f}_{\mathbf{\theta},L}^{ \mathcal{S}_{k}})\in\mathbb{R}^{|B_{k}^{\mathcal{S}}|\times(C_{L}\times W_{L} \times H_{L})}\) are the final feature maps of the real and synthetic sets in vectorized form with both the channel and spatial dimensions included. We estimate the expectation terms in Equations 2 and 3 empirically if ground-truth data distributions are not available. Finally, we learn the synthetic dataset by solving the following optimization problem using SGD with momentum: \[\mathcal{S}^{*}=\operatorname*{arg\,min}_{\mathcal{S}}\ \big{(}\mathcal{L}_{\text{SAM}}+ \lambda\mathcal{L}_{\text{MMD}}\big{)}, \tag{4}\] where \(\lambda\) is the task balance parameter. Further information on the effect of \(\lambda\) is discussed in Section 6.2.2. Note that our approach assigns a fixed label to each synthetic sample and keeps it constant during training. A summary of the learning algorithm can be found in Algorithm 1. ``` 0: Real training dataset \(\mathcal{T}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{|\mathcal{T}|}\) 0: Initialized synthetic samples for \(K\) classes, Deep neural network \(\phi_{\mathbf{\theta}}\) with parameters \(\mathbf{\theta}\), Probability distribution over randomly initialized weights \(P_{\mathbf{\theta}}\), Learning rate \(\eta_{\mathcal{S}}\), Task balance parameter \(\lambda\), Number of training iterations \(I\). 1: Initialize synthetic dataset \(\mathcal{S}\) 2:for\(i=1,2,\cdots,I\)do 3: Sample \(\mathbf{\theta}\) from \(P_{\mathbf{\theta}}\) 4: Sample mini-batch pairs \(B_{k}^{\mathcal{T}}\) and \(B_{k}^{\mathcal{S}}\) from the real and synthetic sets for each class \(k\) 5: Compute \(\mathcal{L}_{\text{SAM}}\) and \(\mathcal{L}_{\text{MMD}}\) using Equations 2 and 3 6: Calculate \(\mathcal{L}=\mathcal{L}_{\text{SAM}}+\lambda\mathcal{L}_{\text{MMD}}\) 7: Update the synthetic dataset using \(\mathcal{S}\leftarrow\mathcal{S}-\eta_{\mathcal{S}}\nabla_{\mathcal{S}} \mathcal{L}\) 8:endfor 9: Synthetic dataset \(\mathcal{S}=\{(\mathbf{s}_{i},y_{i})\}_{i=1}^{|\mathcal{S}|}\) ``` **Algorithm 1** Dataset Distillation with Attention Matching ## 4 Experiments In this section, we demonstrate the effectiveness of DataDAM in improving the performance of dataset distillation. We introduce the datasets and implementation details for reproducibility (Section 4.1), compare our method with state-of-the-art benchmarks (Section 4.2), conduct ablation studies to evaluate each component's efficacy and transferability across various architectures (Section 6.2.2), and show some visualizations (Section 6.3). Finally, we demonstrate the applicability of our method to the common tasks of continual learning and neural architecture search (Section 4.5). 
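Before detailing the experimental setup, one outer iteration of Algorithm 1 can be sketched compactly as a reference point; the network constructor, its `features` method returning per-layer activations, and the per-class batch dictionaries are assumptions for illustration. The last-layer term is shown with a linear kernel for brevity (Eq. (3) is stated for a general RKHS), and momentum and data augmentation are omitted.

```python
import torch

def distill_step(syn, real_batch, make_net, lam=0.01, lr=1.0):
    """One outer iteration of Algorithm 1 (sketch).
    syn[k], real_batch[k]: image tensors for class k; each syn[k] requires grad."""
    net = make_net()                                   # fresh theta ~ P_theta, never trained
    loss = 0.0
    for k in syn:
        f_real = net.features(real_batch[k])           # assumed: list of per-layer maps
        f_syn = net.features(syn[k])
        # SAM term over layers 1..L-1 (sam_layer_loss sketched earlier)
        loss = loss + sum(sam_layer_loss(a, b)
                          for a, b in zip(f_real[:-1], f_syn[:-1]))
        # complementary last-layer term (linear-kernel mean embedding)
        loss = loss + lam * (f_real[-1].flatten(1).mean(0)
                             - f_syn[-1].flatten(1).mean(0)).pow(2).sum()
    grads = torch.autograd.grad(loss, [syn[k] for k in syn])
    with torch.no_grad():
        for g, k in zip(grads, syn):
            syn[k] -= lr * g                           # SGD step on the synthetic pixels
```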
### Experimental Setup **Datasets.** Our method was evaluated on CIFAR10/100 datasets [29], which have a resolution of 32 \(\times\) 32, in line with state-of-the-art benchmarks. For medium-resolution data, we resized the Tiny ImageNet [31] and ImageNet-1K [14] datasets to 64 \(\times\) 64. Previous work on dataset distillation [9] introduced subsets of ImageNet-1K that focused on categories and aesthetics, including assorted objects, dog breeds, and birds. We utilized these subsets, namely ImageNet, ImageWoof, and ImageSquawk, which consist of 10 classes, as high-resolution (128 \(\times\) 128) datasets in our experimental studies. For more detailed information on the datasets, please refer to the supplementary materials. **Network Architectures.** We use a ConvNet architecture [18] for the distillation task, similar to prior research. The default ConvNet has three identical convolutional blocks and a linear classifier. Each block includes a 128-kernel 3 \(\times\) 3 convolutional layer, instance normalization, ReLU activation, and 3 \(\times\) 3 average pooling with a stride of 2. We adjust the network for medium- and high-resolution data by adding a fourth and fifth convolutional block to account for the higher resolutions, respectively. In all experiments, we initialize the network parameters using normal initialization [22]. **Evaluation.** We evaluate the methods using standard measures from prior studies [63, 64, 52, 62]. We generate five sets of small synthetic images using 1, 10, and 50 images per class (IPC) from a real training dataset. Next, we train 20 neural network models on each synthetic set using an SGD optimizer with a learning rate of 0.01. We report the mean and standard deviation over 100 models for each experiment to assess the effectiveness of the performance of distilled datasets. Additionally, we evaluate computational costs using run-time expressed per step, averaged over 100 iterations, and peak GPU memory usage during 100 iterations of training. Finally, we visualize the unbiasedness of state-of-the-art methods using t-SNE visualization [51]. **Implementation Details.** We employ the SGD optimizer with a fixed learning rate of 1 to learn synthetic datasets with 1, 10, and 50 IPCs. We learn low- and medium/high-resolution synthetic images in 8000 iterations with a task balance (\(\lambda\)) of 0.01 and 0.02, respectively. Following from [62], we apply the differentiable augmentation strategy for learning and evaluating the synthetic set. For dataset reprocessing, we utilized the Kornia implementation of Zero Component Analysis (ZCA) with default parameters, following previous works [38, 9]. All experiments are conducted on two Nvidia A100 GPUs. Further details on hyperparameters are available in the supplementary materials. ### Comparison to State-of-the-art Methods **Competitive Methods.** We evaluate DataDAM against four corset selection approaches and eight advanced methods for training set synthesis. The corset selection methods include Random selection [41], Herding [8, 4], K-Center [46], and Forgetting [49]. 
We also compare our approach with state-of-the-art distillation methods, including Dataset Distillation [53] (DD), Flexible Dataset Distillation [5] (LD), Dataset Condensation [64] (DC), Dataset Condensation with \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{IPC} & Ratio/s Resolution & \multicolumn{3}{c}{Coaleset Selection} & \multicolumn{3}{c}{Training Set Synthesis} & \multirow{2}{*}{Whole Dataset} \\ & & & Random & Herding & K-Center & Forgetting & \multicolumn{3}{c}{DD\({}^{\dagger}\)[55]} & \multicolumn{3}{c}{LD\({}^{\dagger}\)[5]} & \multicolumn{3}{c}{DC\({}^{\dagger}\)[54]} & \multicolumn{3}{c}{DSA [62]} & \multicolumn{3}{c}{DM [63]} \\ \hline \multirow{3}{*}{CIFAR-10} & 1 & 0.02 & 32 & 14.4 \(\pm\) 0.2 & 21.5 \(\pm\) 1.2 & 21.5 \(\pm\) 1.3 & 13.5 \(\pm\) 1.2 & - & 25.7 \(\pm\) 0.7 & 28.3 \(\pm\) 0.5 & 28.8 \(\pm\) 0.7 & 26.0 \(\pm\) 0.8 & 31.6 \(\pm\) 0.8 & 29.8 \(\pm\) 1.0 & 31.9 \(\pm\) 1.2 & **32.8 \(\pm\) 1.2** & \multirow{3}{*}{\(\mathbf{33.8\pm 0.3}\)} \\ & 10 & 0.2 & 32 & 26.0 \(\pm\) 1.2 & 31.6 \(\pm\) 0.7 & 14.7 \(\pm\) 0.9 & 29.3 \(\pm\) 1.0 & 36.8 \(\pm\) 1.2 & 38.3 \(\pm\) 0.4 & 44.9 \(\pm\) 0.3 & 52.1 \(\pm\) 0.5 & 48.9 \(\pm\) 0.6 & 50.9 \(\pm\) 0.5 & 5.64 \(\pm\) 0.7 & **56.4 \(\pm\) 0.7** & **54.2 \(\pm\) 0.8** & \multirow{3}{*}{\(\mathbf{84.8\pm 0.1}\)} \\ & 50 & 10 & 32 & 43.4 \(\pm\) 1.0 & 4.0 \(\pm\) 0.6 & 27.0 \(\pm\) 1.2 & 43.3 \(\pm\) 1.1 & - & 42.5 \(\pm\) 0.4 & 53.9 \(\pm\) 0.6 & 26.0 \(\pm\) 0.5 & 46.3 \(\pm\) 0.4 & 42.3 \(\pm\) 0.7 & 52.6 \(\pm\) 0.6 & **67.9 \(\pm\) 0.4** & \multirow{3}{*}{\(\mathbf{69.7\pm 0.4}\)} \\ \hline \multirow{3}{*}{CIFAR-10} & 1 & 0.2 & 32 & 4.2 \(\pm\) 0.3 & 3.3 \(\pm\) 0.3 & 4.3 \(\pm\) 0.3 & 4.5 \(\pm\) 0.2 & - & 11.5 \(\pm\) 0.4 & 12.8 \(\pm\) 0.3 & 13.9 \(\pm\) 0.3 & 11.0 \(\pm\) 0.3 & 14.0 \(\pm\) 0.3 & 12.0 \(\pm\) 0.2 & 13.8 \(\pm\) 0.4 & **14.5 \(\pm\) 0.5** & \multirow{3}{*}{\(\mathbf{65.5\pm 0.3}\)} \\ & 10 & 2 & 32 & 14.6 \(\pm\) 0.5 & 17.3 \(\pm\) 0.3 & 17.3 \(\pm\) 0.3 & 15.1 \(\pm\) 0.3 & - & - & 25.2 \(\pm\) 0.3 & 32.3 \(\pm\) 0.3 & 29.7 \(\pm\) 0.3 & 31.5 \(\pm\) 0.2 & 29.0 \(\pm\) 0.3 & 33.1 \(\pm\) 0.4 & **34.8 \(\pm\) 0.5** & \multirow{3}{*}{\(\mathbf{56.5\pm 0.3}\)} \\ & 50 & 10 & 32 & 20.0 \(\pm\) 0.4 & 3.37 \(\pm\) 0.5 & 30.8 \(\pm\) 0.3 & - & - & 30.6 \(\pm\) 0.6 & 42.8 \(\pm\) 0.4 & 40.4 \(\pm\) 0.29 & - & 42.9 \(\pm\) 0.3 & **9.4 \(\pm\) 0.3** & \multirow{3}{*}{\(\mathbf{0.3}\)} \\ \hline \multirow{3}{*}{Tray ImageNet} & 1 & 0.2 & 64 & 1.4 \(\pm\) 1.4 & 2.8 \(\pm\) 0.2 & - & 1.6 \(\pm\) 0.1 & - & - & 5.3 \(\pm\) 0.1 & 5.7 \(\pm\) 0.1 & 3.9 \(\pm\) 0.2 & - & - & 6.2 \(\pm\) 0.4 & **8.3 \(\pm\) 0.4** & \multirow{3}{*}{\(\mathbf{63.4\pm 0.4}\)} \\ & 10 & 2 & 64 & 5.0 \(\pm\) 0.2 & 6.3 \(\pm\) 0.2 & - & 5.1 \(\pm\) 0.2 & - & - & 12.9 \(\pm\) 0.1 & 16.3 \(\pm\) 0.2 & 12.9 \(\pm\) 0.4 & - & - & 17.3 \(\pm\) 0.2 & **18.7 \(\pm\) 0.3** & \multirow{3}{*}{\(\mathbf{37.6\pm 0.4}\)} \\ \cline{1-1} & 50 & 10 & 64 & 15.0 \(\pm\) 0.4 & 16.7 \(\pm\) 0.3 & - & 15.0 \(\pm\) 0.3 & - & - & 12.7 \(\pm\) 0.4 & 5.1 \(\pm\) 0.2 & 25.3 \(\pm\) 0.2 & - & - & 26.5 \(\pm\) 0.3 & **28.7 \(\pm\) 0.3** & \multirow{3}{*}{\(\mathbf{37.6\pm 0.4}\)} \\ \hline \hline \end{tabular} \end{table} Table 1: The performance (testing accuracy %) comparison to state-of-the-art methods. We distill the given number of images per class using the training set, train a neural network on the synthetic set from scratch, and evaluate the network on the testing data. IPC: image(s) per class. 
Ratio (%): the ratio of distilled images to the whole training set. The works DD Differentiable Siamese Augmentation [62] (DSA), Distribution Matching [63] (DM), Aligning Features [52] (CAFE), Kernel Inducing Points [38, 37] (KIP), and Matching Training Trajectories [9] (MTT). To ensure reproducibility, we downloaded publicly available distilled data for each baseline method and trained models using our experimental setup. We make minor adjustments to some methods to ensure a fair comparison, and for those that did not conduct experiments on certain data, we implemented them using the released author codes. For details on the implementation of baselines and comparisons to other methods such as generative models [39, 6, 34], please refer to the supplementary materials. **Performance Comparison.** We compare our method with selection- and synthesis-based approaches in Tables 1 and 2. The results demonstrate that training set synthesis methods outperform coreset methods, especially when the number of images per class is limited to 1 or 10. This is due to the fact that synthetic training data is not limited to a specific set of real images. Moreover, our method consistently outperforms all baselines in most settings for low-resolution datasets, with improvements on the top competitor, MTT, of 1.1% and 6.5% for the CIFAR10/100 datasets when using IPC50. This indicates that our DataDAM can achieve up to 88% of the upper-bound performance with just 10% of the training dataset on CIFAR100 and up to 79% of the performance with only 1% of the training dataset on CIFAR10. For medium- and high-resolution datasets, including Tiny ImageNet, ImageNet-1K, and ImageNet subsets, DataDAM also surpasses all baseline models across all settings. While existing methods fail to scale up to the ImageNet-1K due to memory or time constraints, DataDAM achieved accuracies of 2.0%, 2.2%, 6.3%, and 15.5% for 1, 2, 10, and 50 IPC, respectively, surpassing DM and Random by a significant margin. This improvement can be attributed to our methodology, which captures essential layer-wise information through spatial attention maps and the feature map of the last layer. Our ablation studies provide further evidence that the performance gain is directly related to the discriminative ability of the method in the synthetic image learning scheme. **Cross-architecture Generalization.** In this section, we test our learned synthetic data across different unseen neural architectures, consistent with state-of-the-art benchmarks [64, 63]. To that end, synthetic data was generated from CIFAR10 using one architecture (T) with IPC50 and then transferred to a new architecture (E), where it was trained from scratch and tested on real-world data. Popular CNN architectures like ConvNet [18], AlexNet [30], VGG-11 [47], and ResNet-18 [23] are used to examine the generalization performance. Table 3 shows that DataDAM outperforms state-of-the-art across unseen architectures when the synthetic data is learned with ConvNet. We achieve a margin of 3.8% and 7.4% when transferring to AlexNet and VGG-11, respectively, surpassing the best method, DM. Additionally, the remaining architectures demonstrate improvement due to the robustness of our synthetic images and their reduced architectural bias, as seen in the natural appearance of the distilled images (Figure 17). **Training Cost Analysis.** In dataset distillation, it is crucial to consider the resource-time costs of various methods, particularly in terms of scalability. 
This study compares our method to state-of-the-art benchmarks presented in Table 4. We demonstrate a significantly lower run-time by almost 2 orders of magnitude compared to most state-of-the-art results. Our method, like DM, has an advantage over methods such as DC, DSA, and MTT that require costly inner-loop bi-level optimization. It should be noted that DataDAM can leverage information from randomly initialized neural networks without training and consistently achieve superior performance. ### Ablation Studies In this section, we evaluate the robustness of our method under different experimental configurations. All experiments averaged performance over 100 randomly initialized ConvNets across five synthetic sets. The CIFAR10 dataset is used for all studies. The most relevant ablation studies to our method are included here; further ablative experiments are included in the supplementary materials. **Exploring the importance of different initialization methods for synthetic images.** In dataset distillation, syn \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{run time(sec)} & \multicolumn{3}{c}{GPU memory(MB)} \\ & IPC1 & IPC10 & IPC50 & IPC1 & IPC10 & IPC50 \\ \hline DC[64] & 0.16 \(\pm\) 0.01 & 3.31 \(\pm\) 0.02 & 15.74 \(\pm\) 0.10 & 3515 & 3621 & 4527 \\ DSA[62] & 0.22 \(\pm\) 0.02 & 4.47 \(\pm\) 0.12 & 20.13 \(\pm\) 0.58 & 3513 & 3639 & 4539 \\ DM[63] & 0.08 \(\pm\) 0.02 & 0.08 \(\pm\) 0.02 & 0.08 \(\pm\) 0.02 & 3323 & 3455 & 3605 \\ MTI[9] & 0.36 \(\pm\) 0.23 & 0.40 \(\pm\) 0.20 & OOM & 2711 & 8049 & OOM \\ DataDAM & 0.09 \(\pm\) 0.01 & 0.08 \(\pm\) 0.01 & 0.16 \(\pm\) 0.04 & 3452 & 3561 & 3724 \\ \hline \hline \end{tabular} \end{table} Table 4: Training time and GPU memory comparisons for state-of-the-art synthesis methods. Run time is expressed per step, averaged over 100 iterations. GPU memory is expressed as the peak memory usage during 100 iterations of training. All methods were run on an A100 GPU for CIFAR-10. OOM (out-of-memory) is reported for methods that are unable to run within the GPU memory limit. \begin{table} \begin{tabular}{c c c c c} \hline \hline & T\textbackslash{E} & ConvNet & AlexNet & VGG-11 & ResNet-18 \\ \hline DC [64] & ConvNet & 53.9\(\pm\)0.5 & 28.8\(\pm\)0.7 & 38.8\(\pm\)1.1 & 20.9\(\pm\)1.0 \\ CAFE [52] & ConvNet & 62.3\(\pm\)0.4 & 43.2\(\pm\)0.4 & 48.8\(\pm\)0.5 & 43.3\(\pm\)0.7 \\ DSA [62] & ConvNet & 66.0\(\pm\)0.5 & 53.7\(\pm\)0.6 & 51.4\(\pm\)1.0 & 47.8\(\pm\)0.9 \\ DM [63] & ConvNet & 63.0\(\pm\)0.4 & 60.1\(\pm\)0.5 & 57.4\(\pm\)0.8 & 52.9\(\pm\)0.4 \\ KIP [38] & ConvNet & 56.9\(\pm\)0.4 & 53.2\(\pm\)1.6 & 53.2\(\pm\)0.5 & 47.6\(\pm\)0.8 \\ MTT [9] & ConvNet & 66.2\(\pm\)0.6 & 43.9\(\pm\)0.9 & 48.7\(\pm\)1.3 & 60.0\(\pm\)0.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Cross-architecture testing performance (%) on CIFAR10 with 50 images per class. The synthetic set is trained on one architecture (T) and then evaluated on another architecture (E). thetic images are usually initialized through Gaussian noise or sampled from the real data; however, the choice of initialization method has proved to be crucial to the overall performance [13]. To assess the robustness of DataDAM, we conducted an empirical evaluation with an IPC50 under three initialization conditions: Random selection, K-Center [13; 46], and Gaussian noise (Figure 3). 
As reported in [13], other works including [63; 62; 64] have seen benefits to testing performance and convergence speed by leveraging K-Center as a smart selection. Empirically, we show that our method is robust across both random and K-Center with only a minute performance gap, and thus the initialization of synthetic data is not as crucial to our final performance. Finally, when comparing with noise, we notice a performance reduction; however, based on the progression over the training epochs, it appears our method is successful in transferring the information from the real data onto the synthetic images. For further detailed experimental results, please refer to the supplementary materials. **Evaluation of task balance \(\lambda\) in DataDAM.** It is common in machine learning to use regularization to prevent overfitting and improve generalization. In the case of DataDAM, the regularizing coefficient \(\lambda\) controls the trade-off between the attention matching loss \(\mathcal{L}_{\text{SAM}}\) and the maximum mean discrepancy loss \(\mathcal{L}_{\text{MMD}}\), which aims to reduce the discrepancy between the synthetic and real training distributions. The experiments conducted on the CIFAR10 dataset with IPC 10 showed that increasing the value of \(\lambda\) improved the performance of DataDAM up to a certain point (Figure 4). This is because, at lower values of \(\lambda\), the attention matching loss dominates the training process, while at higher values of \(\lambda\), the regularizer contributes more effectively to the overall performance. The results in Figure 4 also indicate that the method is robust to larger regularization terms, as shown by the plateau to the right of 0.01. Therefore, a task balance of 0.01 is chosen for all experiments on low-resolution data and 0.02 on medium- and high-resolution data. **Evaluation of loss components in DataDAM.** We conducted an ablation study to evaluate the contribution of each loss component, namely spatial attention matching loss (\(\mathcal{L}_{\text{SAM}}\)) and the complementary loss (\(\mathcal{L}_{\text{MMD}}\)), to the final performance of DataDAM. As seen in table 5, the joint use of \(\mathcal{L}_{\text{MMD}}\) and \(\mathcal{L}_{\text{SAM}}\) led to state-of-the-art results, while using \(\mathcal{L}_{\text{MMD}}\) alone resulted in significant underperformance, as it emphasizes the extraction of high-level abstract data but fails to capture different level representations of the real training distribution. On the other hand, \(\mathcal{L}_{\text{SAM}}\) alone outperformed the base complementary loss, indicating the extracted discriminative features contain significant information about the training but still have room for improvement. To highlight the importance of intermediate representations, we compared our attention-based transfer approach with the transfer of layer-wise feature maps, similar to CAFE [52], and demonstrated a significant performance gap (see "Feature Map Transfer" in Table 5). Overall, our findings support the use of attention to match layer-wise representations and a complementary loss to regulate the process. **Exploring the effect of each layer in DataDAM.** Following the previous ablation, it is equally important to examine how each layer affects the final performance. As shown in Table 6, different layers perform differently since each provides different levels of information about the data distributions. 
This finding supports the claim that matching spatial attention maps in individual layers alone cannot obtain promising results. As a result, to improve the overall performance of the synthetic data learning process, it is crucial to transfer Figure 4: The effect of task balance \(\lambda\) on the testing accuracy (%) for CIFAR10 dataset with IPC10 configuration. Figure 3: Test accuracy evolution of synthetic image learning on CIFAR10 with IPC50 under three different initializations: Random, K-Center, and Gaussian noise. \begin{table} \begin{tabular}{c c c|c} \hline \(\mathcal{L}_{\text{MMD}}\) & \(\mathcal{L}_{\text{SAM}}\) & Feature Map Transfer & Testing Performance (\%) \\ \hline \(\checkmark\) & - & - & 48.9 \(\pm\) 0.6 \\ - & \(\checkmark\) & - & 49.8 \(\pm\) 0.7 \\ - & - & \(\checkmark\) & 47.2 \(\pm\) 0.3 \\ \(\checkmark\) & \(\checkmark\) & - & **54.2 \(\pm\) 0.8** \\ \hline \end{tabular} \end{table} Table 5: Evaluation of loss components in DataDAM. different levels of information about the real data distribution using the SAM module across all intermediate layers. **Network Distributions.** We investigate the impact of network initialization on DataDAM's performance by training 1000 ConvNet architectures with random initializations on the original training data and categorizing their learned states into five buckets based on testing performance. We sampled networks from each bucket and trained our synthetic data using IPCs 1, 10, and 50. As illustrated in Table 7, our findings indicate that DataDAM is robust across various network initializations. This is attributed to the transfer of attention maps that contain relevant and discriminative information rather than the entire feature map statistics, as shown in [52]. These results reinforce the idea that achieving state-of-the-art performance does not require inner-loop model training. ### Visualization **Data Distribution.** To evaluate whether our method can capture a more accurate distribution from the original dataset, we use t-SNE [51] to visualize the features of real and synthetic sets generated by DM, DSA, CAFE, and DataDAM in the embedding space of the ResNet-18 architecture. Figure 5 shows that methods such as DSA and CAFE are biased towards the edges of their clusters and not representative of the training data. Much like DM, our results indicate a more equalized distribution, allowing us to better capture the data distribution. Preserving dataset distributions is of utmost importance in fields like ethical machine learning since methods that cannot be impartial in capturing data distribution can lead to bias and discrimination. Our method's capacity to capture the distribution of data makes it more appropriate than other approaches in these conditions, particularly in fields such as facial detection for privacy [10]. **Synthetic Images.** We have included samples from our learned synthetic images for different resolutions in Figure 17. In low-resolution images, the objects are easily distinguishable, and their class labels can be recognized intuitively. As we move to higher-resolution images, the objects become more outlined and distinct from their backgrounds. These synthetic images have a natural look and can be transferred well to different architectures. Moreover, the high-resolution images accurately represent the relevant colors of the objects and provide more meaningful data for downstream tasks. For more visualizations, refer to the supplementary materials. 
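Returning briefly to the embedding-space comparison of Figure 5, a rough sketch of how such a plot can be produced is given below; the encoder setup, tensor names, and plotting details are our own assumptions rather than the authors' exact pipeline.

```python
import torch
from torchvision.models import resnet18
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# real_x, syn_x: image tensors of shape (N, 3, 32, 32), assumed to exist.
encoder = resnet18(num_classes=10)
encoder.fc = torch.nn.Identity()          # expose 512-d penultimate features
encoder.eval()
with torch.no_grad():
    emb = torch.cat([encoder(real_x), encoder(syn_x)]).numpy()

xy = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(emb)
n = len(real_x)
plt.scatter(xy[:n, 0], xy[:n, 1], s=3, alpha=0.3, label="real")
plt.scatter(xy[n:, 0], xy[n:, 1], marker="*", s=40, label="synthetic")
plt.legend()
plt.show()
```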
### Applications We assess the effectiveness of DataDAM's performance through the use of two prevalent applications involving dataset distillation algorithms: _continual learning_ and _neural architecture search_. **Continual Learning.** Continual learning trains a model incrementally with new task labels to prevent catastrophic forgetting [41]. One approach is to maintain a replay buffer that stores balanced training examples in memory and train the model exclusively on the latest memory, starting from scratch [41, 3, 40]. Efficient storage of exemplars is crucial for optimal continual learning performance, and condensed data can play a significant role. We use the class-incremental setting from [63] with an augmented buffer size of 20 IPC to conduct class-incremental learning on the CIFAR100 dataset. We compare our proposed memory construction approach with random [40], herding [8, 4, 41], DSA [62], and DM [63] methods at 5 and 10 learning steps. In each step, including the initial one, we added 400 and 200 distilled images to the replay buffer, respectively, following the class split of [63]. The test accuracy is the performance metric, and default data preprocessing and ConvNet are used for each approach. Figure 7 shows that our memory construction approach consistently outperforms others in both settings. Specifically, DataDAM achieves final test accuracies of 39.7% and 39.7% in 5-step and 10-step learning, respectively, outperforming DM (34.4% and 34.7%), DSA (31.7% and 30.3%), herding (28.1% and 27.4%), and random (24.8% and 24.8%). Notably, the final performance of DataDAM, DM, and random selection methods remains unchanged upon increasing the \begin{table} \begin{tabular}{c c c|c} \hline \hline Layer 1 & Layer 2 & Last Layer & Testing Performance (\%) \\ \hline - & - & ✓ & 48.9 \(\pm\) 0.6 \\ ✓ & - & ✓ & 50.2 \(\pm\) 0.4 \\ - & ✓ & ✓ & 51.5 \(\pm\) 1.0 \\ ✓ & ✓ & - & 49.8 \(\pm\) 0.7 \\ ✓ & ✓ & ✓ & **54.2 \(\pm\) 0.8** \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation of each layer’s impact in ConvNet (3-layer). The output is transferred under \(\mathcal{L}_{\text{MBD}}\) while the effects of the specified layers are measured through \(\mathcal{L}_{\text{SAM}}\). We evaluate the performance of the CIFAR10 dataset with IPC10. \begin{table} \begin{tabular}{c c|c|c|c|c|c} \hline \hline IPC & Random & 0-20 & 20-40 & 40-60 & 60-80 & \(\geq\)80 \\ \hline 1 & \(\mathbf{32.0}\pm\mathbf{2.0}\) & \(30.8\pm 1.1\) & \(30.7\pm 1.7\) & \(31.5\pm 1.9\) & \(26.2\pm 1.8\) & \(26.9\pm 1.3\) \\ 10 & \(\mathbf{54.2}\pm\mathbf{0.8}\) & \(54.0\pm 0.7\) & \(53.1\pm 0.5\) & \(52.1\pm 0.8\) & \(51.2\pm 0.7\) & \(51.7\pm 0.7\) \\ 50 & \(\mathbf{67.0}\pm\mathbf{0.4}\) & \(66.2\pm\mathbf{0.4}\) & \(66.4\pm\mathbf{0.4}\) & \(\mathbf{67.0}\pm\mathbf{0.5}\) & \(65.8\pm 0.5\) & \(65.3\pm 0.6\) \\ \hline \hline \end{tabular} \end{table} Table 7: Performance of synthetic data learned with IPCs 1, 10, and 50 for different network initialization. Models are trained on the training set and grouped by their respective accuracy levels. Figure 5: Distributions of synthetic images learned by four methods on CIFAR10 with IPC50. The stars represent the synthetic data dispersed amongst the original training dataset. number of learning steps, as these methods independently learn the synthetic datasets for each class. Our findings reveal that DataDAM provides more informative training to the models than other baselines, resulting in more effective prevention of memory loss associated with past tasks. 
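A schematic of this replay protocol with distilled images is sketched below; `ConvNet`, `train`, and `evaluate` are hypothetical helpers, and `distilled[c]` is assumed to hold the 20 condensed images of class `c`.

```python
import torch

def class_incremental_run(distilled, class_splits, ipc=20):
    """Class-incremental learning with a distilled replay buffer (sketch):
    at every step, enlarge the balanced buffer and retrain from scratch."""
    buf_x, buf_y, seen, accs = [], [], [], []
    for new_classes in class_splits:            # e.g. 5 splits of 20 CIFAR100 classes
        seen += list(new_classes)
        for c in new_classes:
            buf_x.append(distilled[c])          # (ipc, 3, 32, 32) tensor per class
            buf_y.append(torch.full((ipc,), c, dtype=torch.long))
        model = ConvNet(num_classes=100)        # hypothetical 3-block ConvNet
        train(model, torch.cat(buf_x), torch.cat(buf_y))
        accs.append(evaluate(model, seen))      # test accuracy on classes seen so far
    return accs
```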
**Neural Architecture Search.** Our synthetic sets can be used as a proxy set to accelerate model evaluation in Neural Architecture Search (NAS). Following [64], we establish a 720 ConvNet search space on CIFAR10 with a grid varying in network depth, width, activation, normalization, and pooling layers. We compared our method with Random, DSA, CAFE, early stopping, and DM. Each architecture was trained on the proxy set (synthetic 50 IPC) for 200 epochs and the whole dataset for 100 epochs to establish a baseline performance metric. Early stopping still uses the entire dataset, but we limit the iterations to those of the proxy set, as in [63]. For each method, we rank all the architectures based on the validation performance and report the testing accuracy of the best-selected model when trained on the whole dataset in Table 8. DataDAM achieved the best accuracy among the competitors, with an accuracy of 89.0%, which is very similar to the original training data at 89.2%, indicating the potential of our proxy set to accurately represent the training data. Furthermore, we calculated Spearman's correlation over the entire search space to evaluate the robustness of our learned data in architecture searching. The correlation is calculated between the testing performances of each method when trained on the proxy versus the original training data. Our method achieves the highest correlation (0.72), indicating that it generates a suitable proxy set that is generalizable across the entire search space and encodes the most important and relevant information from the training data into a condensed form. For more experimentation with NAS, refer to the supplementary materials. ## 5 Conclusion and Limitations Our proposed method, Dataset Distillation with Attention Matching (DataDAM), efficiently captures real datasets' most informative and discriminative information. It consists of two modules, spatial attention matching (SAM) and last-layer feature alignment, that match attention maps and embedded representations generated by different layers in randomly initialized neural networks, respectively. We conduct extensive experiments on datasets with different resolutions to show that DataDAM could lower CNN training costs while maintaining superior generalization performance. We also offer two applications that take advantage of our distilled set: continual learning and neural architecture search. In the future, we plan to apply DataDAM to more fine-grained datasets and explore the analytical concepts behind them. **Limitations.** DataDAM exhibits robust generalization across various CNN architectures, but it is limited to convolutional networks due to its formulation. For example, it, along with other data distillation algorithms, faces challenges in achieving successful cross-architecture generalization on ViT (Vision Transformer) models. Additionally, all data distillation methods, including DataDAM, need to be re-optimized when the distillation ratio changes, which can limit efficiency in some applications. Figure 6: Example distilled images from 32x32 CIFAR10/100 (IPC10), 64x64 Tiny ImageNet (IPC1), and 64x64 ImageNet-1K (IPC1). 
\begin{table} \begin{tabular}{c c c c c c c|c} \hline \hline & Random & DSA & DM & CAFE & Ours & Early-stopping & Whole Dataset \\ \hline Performance (\%) & 88.9 & 87.2 & 87.2 & 83.6 & **89.0** & 88.9 & 89.2 \\ Correlation & 0.70 & 0.66 & 0.71 & 0.59 & **0.72** & 0.69 & 1.00 \\ Time cost (min) & 206.4 & 206.4 & 206.6 & 206.4 & 206.4 & 206.2 & 5168.9 \\ Storage (imgs) & **500** & **500** & **500** & **500** & **500** & \(5\times 10^{4}\) & \(5\times 10^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 8: Neural architecture search on CIFAR10. Figure 7: (Left) showcases 5-step and (Right) showcases 10-step continual learning with tolerance region.
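As a side note on how the correlation row of Table 8 is obtained: it is a Spearman rank correlation between per-architecture accuracies under proxy-set and full-set training, which can be computed as in the sketch below (the accuracy arrays are placeholders, not real measurements).

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder accuracies over the 720-architecture search space.
acc_proxy = np.random.rand(720)   # trained on the distilled proxy set (IPC50)
acc_full = np.random.rand(720)    # trained on the whole CIFAR10 training set

rho, _ = spearmanr(acc_proxy, acc_full)
best = int(np.argmax(acc_proxy))  # architecture selected via the proxy set
print(f"Spearman correlation: {rho:.2f}; selected architecture: {best}")
```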
2309.14185
Temporal Separators with Deadlines
We study temporal analogues of the Unrestricted Vertex Separator problem from the static world. An $(s,z)$-temporal separator is a set of vertices whose removal disconnects vertex $s$ from vertex $z$ for every time step in a temporal graph. The $(s,z)$-Temporal Separator problem asks to find the minimum size of an $(s,z)$-temporal separator for the given temporal graph. We introduce a generalization of this problem called the $(s,z,t)$-Temporal Separator problem, where the goal is to find a smallest subset of vertices whose removal eliminates all temporal paths from $s$ to $z$ which take less than $t$ time steps. Let $\tau$ denote the number of time steps over which the temporal graph is defined (we consider discrete time steps). We characterize the set of parameters $\tau$ and $t$ when the problem is $\mathcal{NP}$-hard and when it is polynomial time solvable. Then we present a $\tau$-approximation algorithm for the $(s,z)$-Temporal Separator problem and convert it to a $\tau^2$-approximation algorithm for the $(s,z,t)$-Temporal Separator problem. We also present an inapproximability lower bound of $\Omega(\ln(n) + \ln(\tau))$ for the $(s,z,t)$-Temporal Separator problem assuming that $\mathcal{NP}\not\subset\mbox{\sc Dtime}(n^{\log\log n})$. Then we consider three special families of graphs: (1) graphs of branchwidth at most $2$, (2) graphs $G$ such that the removal of $s$ and $z$ leaves a tree, and (3) graphs of bounded pathwidth. We present polynomial-time algorithms to find a minimum $(s,z,t)$-temporal separator for (1) and (2). As for (3), we show a polynomial-time reduction from the Discrete Segment Covering problem with bounded-length segments to the $(s,z,t)$-Temporal Separator problem where the temporal graph has bounded pathwidth.
Hovhannes A. Harutyunyan, Kamran Koupayi, Denis Pankratov
2023-09-25T14:46:54Z
http://arxiv.org/abs/2309.14185v1
# Temporal Separators with Deadlines+ ###### Abstract We study temporal analogues of the Unrestricted Vertex Separator problem from the static world. An \((s,z)\)-temporal separator is a set of vertices whose removal disconnects vertex \(s\) from vertex \(z\) for every time step in a temporal graph. The \((s,z)\)-Temporal Separator problem asks to find the minimum size of an \((s,z)\)-temporal separator for the given temporal graph. The \((s,z)\)-Temporal Separator problem is known to be \(\mathcal{NP}\)-hard in general, although some special cases (such as bounded treewidth) admit efficient algorithms [15]. We introduce a generalization of this problem called the \((s,z,t)\)-Temporal Separator problem, where the goal is to find a smallest subset of vertices whose removal eliminates all temporal paths from \(s\) to \(z\) which take less than \(t\) time steps. Let \(\tau\) denote the number of time steps over which the temporal graph is defined (we consider discrete time steps). We characterize the set of parameters \(\tau\) and \(t\) when the problem is \(\mathcal{NP}\)-hard and when it is polynomial time solvable. Then we present a \(\tau\)-approximation algorithm for the \((s,z)\)-Temporal Separator problem and convert it to a \(\tau^{2}\)-approximation algorithm for the \((s,z,t)\)-Temporal Separator problem. We also present an inapproximability lower bound of \(\Omega(\ln(n)+\ln(\tau))\) for the \((s,z,t)\)-Temporal Separator problem assuming that \(\mathcal{NP}\not\subset\textsc{Define}(n^{\log\log n})\). Then we consider three special families of graphs: (1) graphs of branchwidth at most 2, (2) graphs \(G\) such that the removal of \(s\) and \(z\) leaves a tree, and (3) graphs of bounded pathwidth. We present polynomial-time algorithms to find a minimum \((s,z,t)\)-temporal separator for (1) and (2). As for (3), we show a polynomial-time reduction from the Discrete Segment Covering problem with bounded-length segments to the \((s,z,t)\)-Temporal Separator problem where the temporal graph has bounded pathwidth. _Keywords--_ Temporal graphs, dynamic graphs, vertex separator, vertex cut, separating set, deadlines, inapproximability, approximation algorithms ## 1 Introduction Suppose that you have been given the task of deciding how robust a train system of a given city is with respect to station closures. For instance, is it possible to disconnect the two most visited places, e.g., the downtown and the beach, by shutting down 5 train stations in the city? Does an efficient algorithm even exist? If not, what can we say about special classes of graphs? These are central questions of interest in this work. More formally, we model the scenario as a graph problem. An important component missing from the classical graph theory is the ability of the graph to vary with time. The trains run on a schedule (or at least they are supposed to - for simplicity, we assume a perfectly punctual train system). Thus, it is not accurate to say that there is an edge between station \(A\) and station \(B\) just because there are tracks connecting them. It would be more accurate to say that if you arrive at \(A\) at some specific time \(t\) then you could get to \(B\) at some other time \(t^{\prime}>t\), where \(t\) is when the train arrives at station \(A\) and \(t^{\prime}\) is the time when this train reaches station \(B\). In other words, we can consider the edge from \(A\) to \(B\) as being present at a particular time (or times) and absent otherwise. 
This is an important point for the robustness of train networks, since it could be that due to incompatibility of certain train schedules the train network could become disconnected by shutting down even fewer stations than we otherwise would have thought if we didn't take time schedules into account. The notion of graphs evolving with time has several formal models in the research literature [3, 22]. First of all, there is an area of online algorithms [1] where the graph is revealed piece by piece (thus the only allowable changes are to add objects or relations to the graph) and we need to make irrevocable decisions towards some optimization goal as the graph is being revealed. Secondly, streaming and semi-streaming graph algorithms deal with graphs that are revealed one piece at a time similar to online algorithms, but the emphasis is on memory-limited algorithms [12, 11]. Thus, in streaming one does not have to make irrevocable decisions, but instead tries to minimize the memory size necessary to answer some queries at the end of the stream. Thirdly, there is a notion of dynamic graph algorithms where the emphasis is on designing efficient data structures to support certain queries when the graph is updated by either adding or removing vertices or edges [23]. The goal is to maintain the data structures and answer queries, such as "are nodes \(u\) and \(v\) connected?", in the presence of changes more efficiently than recomputing the answer from scratch on every query. It is evident that none of these models is a good fit for our question: the train system is known in advance and it is not frequently updated (some cities that shall remain unnamed take decades to add a single station to the system). Fortunately, there is yet another model of graphs changing with time that has recently gotten a lot of attention and it happens to capture our situation perfectly. The model is called a temporal graph. In this work, we focus on undirected temporal graphs that have a fixed node set but whose edge sets change in discrete time units, all of which are known in advance. Other temporal graph models where changes to nodes are allowed and where time is modelled with the continuous real line have been considered in the research literature but they are outside of the scope of this work. We typically use \(\tau\) to indicate the total number of time steps over which a given temporal graph is defined. For example, if we model the train system as a temporal graph with one minute-granularity and the schedule repeats every 24 hours then the temporal graph would have \(\tau=(24H)\times(60M/H)=1440M\) time steps in total. For emphasis, when we need to talk about non-temporal graphs and bring attention to their unchanging nature we shall call them "static graphs." We study temporal analogues of the Unrestricted Vertex Separator problem from the static world. An \((s,z)\)-temporal separator is a set of vertices whose removal disconnects vertex \(s\) from vertex \(z\) for every time step in a temporal graph. The \((s,z)\)-Temporal Separator problem asks to find the minimum size of an \((s,z)\)-temporal separator for the given temporal graph. The \((s,z)\)-Temporal Separator problem is known to be \(\mathcal{NP}\)-hard in general [27], although some special cases (such as bounded treewidth) admit efficient algorithms [15]. This question can be thought of as a mathematical abstraction of the robustness of the train network of a city question posed at the beginning of this section. 
The \((s,z)\)-Temporal Separator problem asks you to eliminate all temporal paths between \(s\) and \(z\) by removing some nodes. Observe that, practically speaking, in real life, one doesn't actually have to eliminate all temporal paths between \(s\) and \(z\) - one would have to remove only reasonable temporal paths between \(s\) and \(z\). Which paths would be considered unreasonable? We consider paths taking too much time as unreasonable. For example, if normally it takes 30 minutes to get from downtown to the beach, then eliminating all routes that take at most 4 hours would surely deter most downtown dwellers from visiting the beach. Motivated by such considerations, we introduce a generalization of the \((s,z)\)-Temporal Separator problem called the \((s,z,t)\)-Temporal Separator problem, where the goal is to find the smallest subset of vertices whose removal eliminates all temporal paths from \(s\) to \(z\) which take less than \(t\) time steps. Observe that setting \(t=\tau\) captures the \((s,z)\)-Temporal Separator problem as a special case of the \((s,z,t)\)-Temporal Separator problem. Our results can be summarized as follows: In Section 4.1, we present a characterization of parameters \(t\) and \(\tau\) when the problem is \(\mathcal{NP}\)-hard. We also present an inapproximability lower bound of \(\Omega(\ln(n)+\ln(\tau))\) for the \((s,z,t)\)-Temporal Separator problem assuming that \(\mathcal{NP}\not\subset\textsc{Dtime}(n^{\log\log n})\). In Section 4.2, we present a \(\tau\)-approximation algorithm for the \((s,z)\)-Temporal Separator problem, and we convert it to a \(\tau^{2}\)-approximation algorithm for the \((s,z,t)\)-Temporal Separator problem. In Section 5.1, we present a polynomial-time algorithm to find a minimum \((s,z,t)\)-temporal separator on temporal graphs whose underlying graph (see Section 2) has branchwidth at most 2. In Section 5.2, we present another polynomial-time algorithm for temporal graphs whose underlying graph becomes a tree after removal of \(s\) and \(z\). In Section 5.3, we show a polynomial-time reduction from the Discrete Segment Covering problem with bounded-length segments to the \((s,z,t)\)-Temporal Separator problem where the temporal graph has bounded pathwidth. Therefore, solving the \((s,z,t)\)-Temporal Separator problem on a temporal graph whose underlying graph has bounded pathwidth is at least as difficult as solving the Discrete Segment Covering problem where lengths of all segments are bounded. ## 2 Preliminaries Temporal graphs (also known as dynamic, evolving [13], or time-varying [14, 6] graphs) are graphs whose edges are active at certain points in time. A temporal graph \(G=(V,E,\tau)\) contains a set of vertices \(V\), and a set of edges \(E\subseteq V\times V\times[\tau]\)1. So each edge \(e\in E\) contains two vertices of \(V\) and a time label \(t\in[\tau]\) indicating a time step at which the edge is active. A graph \(G_{\downarrow}=(V,E^{\prime})\) where \(E^{\prime}\) contains every edge \(e\) that is active at least once in the temporal graph \(G\) is called the _underlying graph_ (alternatively, the _footprint_) of the temporal graph \(G\). A static graph representing active edges for a specific time \(t\) is called the layer of the temporal graph at that time and is denoted by \(G_{t}\). Some other ways of modelling temporal graphs could be found in [19]. We refer to \(V(G)\) and \(E(G)\) as the set of vertices and edges, respectively, of a graph \(G\) (either temporal or static).
Also for any subset \(U\subseteq V(G)\) we refer to the set of all edges in the subgraph induced by \(U\) as \(E(U)\), and for any node \(v\in V\) we use \(E(v)\) to denote the set of all edges incident on \(v\). We also use \(\tau(G)\) to refer to the number \(\tau\) of time labels of the temporal graph \(G\). Footnote 1: Notation [n] stands for \(\{1,2,\ldots,n\}\). A temporal path in a temporal graph is a sequence of edges such that (1) it is a valid path in the underlying graph, and (2) the corresponding sequence of times when the edges are active is non-decreasing. Formally, a sequence \(P=[(u_{1},v_{1},t_{1}),(u_{2},v_{2},t_{2}),\ldots,(u_{k},v_{k},t_{k})]\) of edges in a temporal graph \(G\) is called an \((s,z)\)_-temporal path_ if \(s=u_{1},v_{1}=u_{2},\ldots,v_{k-1}=u_{k},v_{k}=z\) and \(t_{1}\leq t_{2}\leq\cdots\leq t_{k}\). If the sequence of times is in strictly increasing order, the temporal path is called _strict_. _Travelling time_ of \(P\), denoted by \(\text{time}(P)\), is defined as \(\text{time}(P)=t_{k}-t_{1}+1\), i.e., the time it takes to travel from \(s\) to \(z\). If \(\text{time}(P)\leq t\) then we refer to \(P\) as an \((s,z,t)\)-temporal path. A temporal graph \(G\) is _connected_ if for any pair of vertices \(s,z\in V(G)\) there is at least one temporal path from \(s\) to \(z\). A temporal graph \(G\) is _continuously connected_ if for every \(i\in[\tau(G)]\) layer \(G_{i}\) is connected. We distinguish between three types of temporal paths: (1) _shortest \((s,z)\)-temporal path_: a temporal path from \(s\) to \(z\) that minimizes the number of edges; (2) _fastest \((s,z)\)-temporal path_: a temporal path from \(s\) to \(z\) that minimizes the traveling time; (3) _foremost \((s,z)\)-temporal path_: a temporal path from \(s\) to \(z\) that minimizes the arrival time at destination. _Temporal distance_ from node \(s\) to node \(z\) is equal to the traveling time of the fastest \((s,z)\)-temporal path. A set \(S\subseteq V-\{s,z\}\) is called a _(strict) \((s,z)\)-temporal separator_ if the removal of vertices in set \(S\) removes all (strict) temporal paths from \(s\) to \(z\). The _(strict) \((s,z)\)-Temporal Separator problem_ asks to find the minimum size of a (strict) \((s,z)\)-temporal separator in a given temporal graph \(G\). This problem has been studied before (see Section 3). In this work, we propose a new problem that is based on the notion of \((s,z,t)\)-temporal paths. We define a set of vertices \(S\) to be a _(strict) \((s,z,t)\)-temporal separator_ if every (strict) \((s,z,t)\)-temporal path contains at least one vertex in \(S\), i.e., removal of \(S\) removes all (strict) \((s,z,t)\)-temporal paths. Thus, the new problem, which we refer to as the _(strict) \((s,z,t)\)-Temporal Separator problem_ is defined as follows: given a temporal graph \(G\), a pair of vertices \(s,z\in V(G)\), and a positive integer \(t\), the goal is to compute the minimum size of a \((s,z,t)\)-temporal separator in \(G\). **Lemma 2.1**.: _Given a temporal graph \(G=(V,E,\tau)\) and two distinct vertices \(s\) and \(z\) as well as an integer \(t\), it is decidable in time \(O(|S||E|)\) if there is a \((s,z,t)\)-temporal path in \(G\) where \(S=\{t^{\prime}\mid\exists u:(s,u,t^{\prime})\in E\}\)._ Proof.: [24] and [26] present an algorithm that computes fastest paths from a single source \(s\) to all of the vertices in \(O(|S|(|V|+|E|))\). 
Ignoring isolated vertices, we can therefore compute a fastest path from \(s\) to \(z\) in \(G\) and check whether its travelling time is at most \(t\). Branch decomposition and branchwidth of a graph are defined as follows. **Definition 2.1** (Branch Decomposition).: [8] Given a graph \(G=(V,E)\), a branch decomposition is a pair \((T,\beta)\), such that * \(T\) is a binary tree with \(|E|\) leaves, and every inner node of \(T\) has two children. * \(\beta\) is a mapping from \(V(T)\) to \(2^{E}\) satisfying the following conditions: * For each leaf \(v\in V(T)\), there exists \(e\in E(G)\) with \(\beta(v)=\{e\}\), and there are no \(v,u\in V(T),v\neq u\) such that \(\beta(v)=\beta(u)\). * For every inner node \(v\in V(T)\) with children \(v_{l},v_{r},\beta(v)=\beta(v_{l})\cup\beta(v_{r})\); **Definition 2.2** (Boundary).: [8] Given a graph \(G=(V,E)\), for every set \(F\subseteq E\), the boundary \(\partial F=\{v|v\) is incident to edges in both \(F\) and \(E\backslash F\}\). **Definition 2.3** (Width of a Branch Decomposition).: [8] Given a branch decomposition \((T,\beta)\) of \(G=(V,E)\), the width of this decomposition is \(\max\{|\partial\beta(v)|\mid v\in V(T)\}\). The branchwidth \(bw(G)\) of \(G\) is defined as the minimum width of a branch decomposition of \(G\)[8]. We note that for any fixed \(k\) there is a linear time algorithm to check if a graph has branchwidth \(k\), and if so, the algorithm outputs a branch decomposition of minimum width [5]. Path decomposition and pathwidth of a graph are defined as follows. **Definition 2.4** (Path Decomposition).: [21] Given a graph \(G=(V,E)\), a path decomposition of \(G\) is a pair \((P,\beta)\), such that * \(P\) is a path with nodes \(a_{1},\ldots a_{m}\). * \(\beta\) is a mapping from \(\{a_{1},\ldots,a_{m}\}\) to \(2^{V}\) satisfying the following conditions: * For \(e\in E(G)\) there exists \(a_{i}\) such that vertices of \(e\) appear in \(\beta(a_{i})\). * For every \(v\in V(G)\) the set of \(a_{i}\), such that \(v\) appears in \(\beta(a_{i})\), forms a subpath of \(P\). The width of a decomposition \((P,\beta)\) is \(\max_{a\in V(P)}|\beta(a)|-1\). The pathwidth of a graph \(G\) is the minimum width of a path decomposition of \(G\). ## 3 Related Work Enright et al. in [9] adopt a simple and natural model for time-varying networks which is given with time-labels on the edges of a graph, while the vertex set remains unchanged. This formalism originates in the foundational work of Kempe et al. [17]. There has already been a lot of work on temporal graphs, too much to give a full overview. Thus, in this section, we focus only on the results most relevant to our work. The fastest temporal path is computable in polynomial time, see, e.g. [26, 25, 24]. A nice property of the foremost temporal path is that it can be computed efficiently. In particular, there is an algorithm that, given a source node \(s\in V\) and a time \(t_{start}\), computes for all \(w\in V\setminus\{s\}\) a foremost \((s,w)\)-temporal path from the time \(t_{start}\)[18]. The running time of the algorithm is \(O(n\tau^{3}+|E|)\). It is worth mentioning that this algorithm takes as input the whole temporal graph \(G\). Such algorithms are known as offline algorithms in contrast to online algorithms in which the temporal graph is revealed on the fly. The algorithm is essentially a temporal translation of the breadth-first search (BFS) algorithm (see e.g. [7] page 531).
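As an illustration of why such path computations are easy, the following is a minimal Python sketch (our own straightforward implementation, not the algorithms of [24, 26, 18]) that computes earliest arrival times layer by layer and uses them, in the spirit of Lemma 2.1, to test whether some \((s,z)\)-temporal path has travelling time at most a given bound; it follows the convention \(\text{time}(P)=t_{k}-t_{1}+1\) from Section 2 and allows waiting at vertices.

```python
from collections import defaultdict, deque

def earliest_arrival(edges, s, start_time):
    """Earliest time each vertex is reachable from s by a (non-strict) temporal
    path whose first edge has time label >= start_time."""
    by_time = defaultdict(list)                      # time label -> edges active then
    for u, v, t in edges:
        if t >= start_time:
            by_time[t].append((u, v))
    arr = {s: start_time}                            # best known arrival times
    for t in sorted(by_time):
        layer = defaultdict(list)
        for u, v in by_time[t]:
            layer[u].append(v)
            layer[v].append(u)
        # BFS inside layer G_t, seeded by vertices already reachable by time t.
        queue = deque(v for v in layer if arr.get(v, float("inf")) <= t)
        while queue:
            u = queue.popleft()
            for w in layer[u]:
                if arr.get(w, float("inf")) > t:
                    arr[w] = t
                    queue.append(w)
    return arr

def exists_szt_path(edges, s, z, t_bound):
    """Is there an (s,z)-temporal path with travelling time at most t_bound?
    Try every time label of an edge incident on s as the departure time."""
    departures = {t for u, v, t in edges if s in (u, v)}
    for t1 in departures:
        arr = earliest_arrival(edges, s, t1)
        if z in arr and arr[z] - t1 + 1 <= t_bound:
            return True
    return False
```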
While the Unrestricted Vertex Separator problem is polynomial time solvable in the static graph world (by reducing to the Maximum Flow problem), the analogous problem in the temporal graph world, namely, the \((s,z)\)-Temporal Separator problem, was shown to be \(\mathcal{NP}\)-hard by Kempe et al. [17]. Zschoche et al. [27] investigate the \((s,z)\)-Temporal Separator and strict \((s,z)\)-Temporal Separator problems on different types of temporal graphs. A central contribution in [27] is to prove that both \((s,z)-\)Temporal Separator and Strict \((s,z)\)-Temporal Separator are \(\mathcal{NP}\)-hard for all \(\tau\geq 2\) and \(\tau\geq 5\), respectively, strengthening a result by Kempe et al. [17] (they show \(\mathcal{NP}\)-hardness of both variants for all \(\tau\geq 12\)) [27]. Fluschnik et al. [15] show that \((s,z)\)-Temporal Separator remains \(\mathcal{NP}\)-hard on many restricted temporal graph classes: temporal graphs whose underlying graph falls into a class of graphs containing complete-but-one graphs (that is, complete graphs where exactly one edge is missing), or line graphs, or temporal graphs where each layer contains only one edge. In contrast, the problem is tractable if the underlying graph has bounded treewidth, or if we require each layer to be a unit interval graph and impose suitable restrictions on how the intervals may change over time, or if one layer contains all others (grounded), or if all layers are identical (1-periodic or 0-steady), or if the number of periods is at least the number of vertices. It is not difficult to show that this problem is fixed-parameter tractable when parameterized by \(k+l\), where \(k\) is the solution size and \(l\) is the maximum length of a temporal \((s,z)\)-path. Lastly, we note that the classical Vertex Separator problem from the static world is often stated as asking to find a vertex separator such that after its removal the graph is partitioned into two blocks (one containing \(s\) and one containing \(z\)) of roughly equal size3. This "balanced" separator restriction makes the problem \(\mathcal{NP}\)-hard. The temporal separator problems considered in this work do not have such a restriction, and as discussed they are hard problems due to the temporal component. There is a lot of research on the Vertex Separator problem, but since our versions do not have this "balancedness" restriction, we do not discuss it in detail. An interested reader is referred to [2] and references therein. Footnote 3: That is why earlier we referred to a static world problem of interest as the Unrestricted Vertex Separator problem to emphasize that there is no balancedness requirement. ## 4 Temporal Separators with Deadlines on General Graphs ### Hardness of Exact and Approximate Solutions Zschoche et al. [27] show that the \((s,z)\)-Temporal Separator problem is \(\mathcal{NP}\)-hard on a temporal graph \(G=(V,E,\tau)\) if \(\tau\geq 2\) (and it is in \(\mathcal{P}\) if \(\tau=1\)). So, it is obvious that the \((s,z,t)\)-Temporal Separator problem is \(\mathcal{NP}\)-hard if \(t\geq 2\). In this section we strengthen this result by showing that the problem remains \(\mathcal{NP}\)-hard even when restricted to inputs with \(t=1\) and \(\tau\geq 2\). Reduction from the minimum satisfiability problem with non-negative variables to \((s,z,1)\)-Temporal Separator could be made by adding a path from \(s\) to \(z\) in layer \(G_{i}\), which contains all the variables in the \(i\)-th clause. 
So, \((s,z,1)\)-Temporal Separator on temporal graphs with a sufficient number of layers is \(\mathcal{NP}\)-hard. However, it is not easy to establish the complexity of \((s,z,t)\)-Temporal Separator on temporal graphs with a small number of layers. Here we aim to show that \((s,z,1)\)-Temporal Separator remains \(\mathcal{NP}\)-hard on a temporal graph \(G=(V,E,\tau)\) if \(\tau\) is equal to \(2\). To do that, we construct a reduction from the Node Multiway Cut problem. In this problem, one is given a graph \(G=(V,E)\) and a set of terminal vertices \(Z=\{z_{1},z_{2},\ldots z_{k}\}\). A multiway cut \(S\subseteq V\backslash Z\) is a set of vertices whose removal from \(G\) disconnects all pairs of distinct terminals \(z_{i}\) and \(z_{j}\). The goal is to find a multiway cut of minimum cardinality. The Node Multiway Cut problem is \(\mathcal{NP}\)-hard for \(k\geq 3\) [16]. **Theorem 4.1**.: _For every \(t_{0}\geq 1\), the \((s,z,t)\)-Temporal Separator problem is \(\mathcal{NP}\)-hard on a temporal graph \(G=(V,E,\tau)\) when restricted to inputs with \(t=t_{0}\) and \(\tau\geq 2\)._ Proof.: For a given graph \(H\) and three vertices \(z_{1}\), \(z_{2}\), and \(z_{3}\) we construct a temporal graph \(G=(V,E,2)\). Let \(V=(V(H)\backslash\{z_{1},z_{2},z_{3}\})\cup\{s,z\}\) and for each edge \((u,v)\) in \(H\) not incident on \(z_{1},z_{2}\), or \(z_{3}\), add two edges \((u,v,1)\) and \((u,v,2)\) to \(E\). For each \(u\) which is a neighbour of \(z_{1}\) add an edge \((s,u,1)\), and for each \(v\) which is a neighbour of \(z_{2}\) or \(z_{3}\) add an edge \((v,z,1)\) to \(E\). Finally add \((s,u,2)\) for each neighbour \(u\) of \(z_{2}\), as well as \((v,z,2)\) for each neighbour \(v\) of \(z_{3}\) to the set of edges. We claim that \(S\subseteq V\backslash\{s,z\}\) is a \((s,z,1)\)-temporal separator if and only if \(S\) is a multiway cut for \(H\). \(\leftarrow\) Suppose that \(S\) is a multiway cut in the graph \(H\) and \(S\) is not a \((s,z,1)\)-temporal separator on the temporal graph \(G\). So, there is a \((s,z,1)\)-temporal path \(P\) with \(V(P)\subseteq V\backslash S\). Based on the definition of a \((s,z,1)\)-temporal path, either all the edges of the path belong to the layer \(G_{1}\) or all of them belong to the layer \(G_{2}\). Let's consider each case separately. * **Case 1**. All edges of \(P\) belong to the layer \(G_{1}\). Suppose that the path \(P\) starts with an edge \((s,u,1)\), and ends with an edge \((v,z,1)\). Based on the construction of graph \(G\), it is clear that \(u\) is a neighbour of \(z_{1}\) and \(v\) is a neighbour of \(z_{2}\) or \(z_{3}\). Since all the edges in \(G\) that are not incident on \(s\) or \(z\) also appear in the graph \(H\), all the edges except the starting and ending edges in \(P\) appear in \(H\). Construct a new path \(P^{\prime}\) by replacing \(s\) with \(z_{1}\) and \(z\) with whichever of \(z_{2}\) or \(z_{3}\) is adjacent to \(v\). There is no vertex \(x\in V(P)\) such that \(x\in S\), so none of the vertices of \(P^{\prime}\) appear in \(S\). Then \(V(P^{\prime})\subseteq V(H)\backslash S\) and this contradicts the assumption of \(S\) being a multiway cut. * **Case 2**. All edges of \(P\) belong to the layer \(G_{2}\). Suppose that \(P\) starts with an edge \((s,u,2)\) and ends with an edge \((v,z,2)\), then \(u\) is a neighbour of \(z_{2}\) and \(v\) is a neighbour of \(z_{3}\). Construct a path \(P^{\prime}\) by replacing \(s\) with \(z_{2}\) and \(z\) with \(z_{3}\).
So \(P^{\prime}\) is a valid path in the graph \(H\) from \(z_{2}\) to \(z_{3}\), contradicting the definition of \(S\). \(\rightarrow\) Suppose that \(S\) is a \((s,z,1)\)-temporal separator in \(G\) and \(S\) is not a multiway cut in the graph \(H\). So there is a path \(P\) between two of the vertices \(z_{1},z_{2},z_{3}\) in \(H\) where \(V(P)\subseteq V(H)\backslash S\). By replacing source vertex \(z_{1}\) or \(z_{2}\) with \(s\) and terminal vertex \(z_{2}\) or \(z_{3}\) with \(z\) we construct a path \(P^{\prime}\) in which \(V(P^{\prime})\subseteq V\backslash S\). Since all the edges in \(P^{\prime}\) except the first and the last one are not incident on \(s\) or \(z\), they must appear in both layers \(G_{1}\) and \(G_{2}\). We consider all cases for the start and end vertices of \(P\) and derive a contradiction in each case (with the fact that \(S\) is a \((s,z,1)\)-temporal separator): * **Case 1**. If \(P\) is between \(z_{1}\) and \(z_{2}\) then \(P^{\prime}\) lies entirely in layer \(G_{1}\). * **Case 2**. If \(P\) is between \(z_{1}\) and \(z_{3}\) then \(P^{\prime}\) lies entirely in layer \(G_{1}\). * **Case 3**. If \(P\) is between \(z_{2}\) and \(z_{3}\) then \(P^{\prime}\) lies entirely in layer \(G_{2}\). Since Strict \((s,z)\)-Temporal Separator is \(\mathcal{NP}\)-hard on a temporal graph with \(\tau\geq 5\) [27], it is clear that Strict \((s,z,t)\)-Temporal Separator is \(\mathcal{NP}\)-hard even when restricted to inputs with \(t\geq 5\) and \(\tau\geq 5\). However, by a small change to the reduction presented by Zschoche et al. [27], which is inspired by [25], we can show that Strict \((s,z,t)\)-Temporal Separator remains \(\mathcal{NP}\)-hard even when restricted to inputs with \(t=3\) and \(\tau=4\). **Theorem 4.2**.: _Finding a strict \((s,z,3)\)-temporal separator on a temporal graph \(G=(V,E,\tau)\) is \(\mathcal{NP}\)-hard when restricted to inputs with \(\tau=4\)._ Proof.: We present a reduction from the Vertex Cover problem to an instance of Strict \((s,z,3)\)-Temporal Separator with four layers. Given a graph \(H\), we construct a temporal graph \(G=(V,E,4)\) as an instance of input for the Strict \((s,z,3)\)-Temporal Separator problem. Let \(V=\{s_{v},v,z_{v}|v\in V(H)\}\cup\{s,z\}\) and define \(E\) as follows: \[E:=\{(s,s_{v},2),(s_{v},v,3),(v,z,4),(s,v,1),(v,z_{v},2),(z_{v},z,3),(z_{v},z,4)|v\in V(H)\}\cup\] \[\{(s_{u},z_{v},3),(s_{v},z_{u},3)|(u,v)\in E(H)\}\] Figure 1 shows the structure of the temporal graph \(G\). Let \(n=|V(H)|\); we claim that there is a vertex cover in \(H\) of size \(k\) if and only if there exists a strict \((s,z,3)\)-temporal separator in \(G\) of size \(n+k\). \(\rightarrow\) Let \(C\subseteq V(H)\) be a vertex cover of size \(k\) in \(H\), and define \(S=\{v\mid v\in V(H)\backslash C\}\cup\{s_{v},z_{v}\mid v\in C\}\). Assume that there is a strict \((s,z,3)\)-temporal path \(P\) such that all its vertices belong to \(V\backslash S\). Since for every \(v\in V(H)\) either \(v\in S\) or \(\{s_{v},z_{v}\}\subseteq S\), temporal path \(P\) is of the following form: \[P=(s,s_{u},2),(s_{u},z_{v},3),(z_{v},z,4).\] This implies the existence of edge \((s_{u},z_{v},3)\) in \(G\), which implies that \((u,v)\in E(H)\). Also, the existence of \(s_{u}\) and \(z_{v}\) in \(P\) implies that \(\{u,v\}\subseteq V(H)\backslash C\), which contradicts the fact that \(C\) is a vertex cover. So, there is no strict \((s,z,3)\)-temporal path in the temporal subgraph of \(G\) induced by \(V\backslash S\).
The cardinality of set \(S\), which is a strict \((s,z,3)\)-temporal separator for temporal graph \(G\), is equal to \((n-k)+2k\). \(\leftarrow\) Let \(S\subseteq V\) be a strict \((s,z,3)\)-temporal separator with \(|S|=n+k\). For any vertex \(v\in V(H)\) we claim that either \(v\in S\) or \(\{s_{v},z_{v}\}\subseteq S\), otherwise one of the two strict \((s,z,3)\)-temporal paths \(P_{1}\) and \(P_{2}\), which are shown in equations (1) and (2), respectively, will not be removed from the graph \(G\) by removing \(S\). \[P_{1} =(s,s_{v},2),(s_{v},v,3),(v,z,4), \tag{1}\] \[P_{2} =(s,v,1),(v,z_{v},2),(z_{v},z,3). \tag{2}\] Now we construct a set \(C\subseteq V(H)\) as follows. For each \(v\in V(H)\): * If more than one of the three vertices \(s_{v}\), \(v\), and \(z_{v}\) belong to \(S\), then add \(v\) to \(C\). * If only one of the three vertices \(s_{v}\), \(v\), and \(z_{v}\) belongs to \(S\), then do not add \(v\) to \(C\). First, based on the fact that at least one of the three vertices \(s_{v}\), \(v\), and \(z_{v}\) belongs to \(S\), it is clear that \(|C|\leq k\). Second, if there is an edge \((u,v)\in E(H)\) such that \(\{u,v\}\subseteq V(H)\backslash C\), following the previous claims, it results in both paths \(P_{3}\) and \(P_{4}\) (which are shown in equations (3) and (4), respectively) being present in the temporal subgraph induced by \(V\backslash S\), contradicting the fact that \(S\) is a strict \((s,z,3)\)-temporal separator. Therefore \(C\) is a vertex cover with cardinality at most \(k\). \[P_{3} =(s,s_{v},2),(s_{v},z_{u},3),(z_{u},z,4), \tag{3}\] \[P_{4} =(s,s_{u},2),(s_{u},z_{v},3),(z_{v},z,4). \tag{4}\]
Figure 1: An instance of the Strict \((s,z,3)\)-Temporal Separator problem with four layers that corresponds to a vertex cover problem instance in the proof of Theorem 4.2.
If every temporal path from \(s\) to \(z\) contains more than two edges, then \(\emptyset\) is a strict \((s,z,1)\)-temporal separator. Since every strict \((s,z,2)\)-temporal path is of the form \((s,v,t),(v,z,t+1)\), the Strict \((s,z,2)\)-Temporal Separator problem can easily be solved in polynomial time. The Strict \((s,z,t)\)-Temporal Separator problem on a graph \(G=(V,E,\tau)\) with \(\tau=t\) is the same as the Strict \((s,z)\)-Temporal Separator problem. Therefore, in case \(\tau=t=3\) this problem is equivalent to the Strict \((s,z)\)-Temporal Separator problem with \(\tau=3\). Zschoche et al. [27] present a polynomial time algorithm for finding a minimum strict \((s,z)\)-temporal separator on a temporal graph \(G=(V,E,\tau)\) when \(\tau<5\). So, this case can be solved in polynomial time. Although we know that finding a strict \((s,z,t)\)-temporal separator on a temporal graph \(G=(V,E,3)\) is polynomial-time solvable with the algorithm which is presented in [27], we describe another simple algorithm to solve this problem. In the first step of the algorithm, we check if there is an edge between \(s\) and \(z\). If so, it is clear that there are no separator sets because the direct path using this edge from \(s\) to \(z\) will remain no matter which vertices we remove from the graph. Next, for every temporal path from \(s\) to \(z\) of length two, such as \((s,x,t_{1}),(x,z,t_{2})\) with \(t_{2}=t_{1}+1\), it is clear that we have to remove \(x\) if we want to remove this path from the graph. So, it is clear that \(x\in S\). In the last step, we know that the length of every temporal path in the graph is three.
So, every path from \(s\) to \(z\) should be of the following form: \[(s,x,1),(x,y,2),(y,z,3).\] Now, put every node \(x\) for which there is an edge \((s,x,1)\) into the set \(X\). Also, put every node \(y\) for which there is an edge \((y,z,3)\) into the set \(Y\). Now, it is clear that \(X\cap Y=\emptyset\), for otherwise there exists a node \(u\) with two edges \(e_{1}=(s,u,1)\) and \(e_{2}=(u,z,3)\), but such a node would have been removed in the previous step. Therefore, every strict temporal path from \(s\) to \(z\) should have a corresponding edge \((x,y,2)\) where \(x\in X\) and \(y\in Y\). So, we should remove either \(x\) or \(y\) for every edge \((x,y,2)\), where \(x\in X\) and \(y\in Y\). In order to do this we could use any known polynomial time algorithm for the Vertex Cover problem in bipartite graphs. In the rest of this section we show \(\Omega(\log n+\log(\tau))\)-inapproximability (assuming \(\mathcal{NP}\not\subset\textsc{Dtime}(n^{\log\log n})\)) for the \((s,z,t)\)-Temporal Separator problem. This is proved by a strict reduction from the Set Cover problem. Recall that in the Set Cover problem, one is given a collection \(\mathcal{S}\) of subsets of a universe \(U\) that jointly cover the universe. The goal is to find a minimum size sub-collection of \(\mathcal{S}\) that covers \(U\). **Theorem 4.3**.: _For every \(t>0\) there is a strict polynomial time reduction from the Set Cover problem to the \((s,z,t)-\)Temporal Separator problem._ Proof.: Let \((U,\mathcal{S})\) be an instance of the Set Cover problem, where \(U=\{1,2,\ldots n\}\) is the universe and \(\mathcal{S}=\{S_{1},S_{2},\ldots,S_{m}\}\) is a family of sets the union of which covers \(U\). For each \(i\in U\) define the family \(\mathcal{F}_{i}\) as \(\mathcal{F}_{i}=\{S\in\mathcal{S}\mid i\in S\}\), i.e., \(\mathcal{F}_{i}\) consists of all sets from \(\mathcal{S}\) that contain element \(i\). Let \(k_{i}=|\mathcal{F}_{i}|\) and order the elements of each \(\mathcal{F}_{i}\) in the order of increasing indices, i.e., \[\mathcal{F}_{i}=\{S_{i_{1}},\ldots,S_{i_{k_{i}}}\}. \tag{5}\] Our reduction outputs a temporal graph \(f(U,\mathcal{S})=(V\cup\{s,z\},E)\) where: * the vertex set is \(V\cup\{s,z\}=\{v_{i}|i\in[m]\}\cup\{s,z\}\); * the edge set is \(E=E_{1}\cup E_{2}\cup\cdots\cup E_{n}\), where \[E_{i}=\{(s,v_{i_{1}},i\cdot t),(v_{i_{1}},v_{i_{2}},i\cdot t),\ldots,(v_{i_{k_{i}-1}},v_{i_{k_{i}}},i\cdot t),(v_{i_{k_{i}}},z,i\cdot t)\}.\] The main idea behind the proof is to map every element of \(U\) to a path from \(s\) to \(z\) in \(f(U,\mathcal{S})\) bijectively, so by covering an element, we remove the corresponding path in \(f(U,\mathcal{S})\), and by removing a path we cover the corresponding element. We claim that \(V^{\prime}=\{v_{j_{1}},\ldots,v_{j_{t}}\}\subseteq V\) is a \((s,z,t)-\)temporal separator for \(f(U,\mathcal{S})\) if and only if \(\mathcal{S}^{\prime}=\{S_{j_{1}},\ldots,S_{j_{t}}\}\subseteq\mathcal{S}\) is a set cover for \((U,\mathcal{S})\). Figure 2 represents the edges in the layer \(G_{i\cdot t}\), which contains all the edges in \(E_{i}\). It illustrates that element \(i\) in the universe \(U\) corresponds to a path \(E_{i}\), as well as that the element \(i\) is covered by the set \(S_{i_{j}}\in\mathcal{S}^{\prime}\) if and only if the temporal path which is shown in Figure 2 is removed from the temporal graph by removing the vertex \(v_{i_{j}}\in V^{\prime}\). Figure 2: Layer \(G_{i\cdot t}\) of the temporal graph used in the proof of Theorem 4.3.
\(\rightarrow\) Suppose for contradiction that \(\mathcal{S}^{\prime}\) does not cover \(U\). Pick an arbitrary item \(i\in U\) that is not covered and consider the following path \(P=[(s,v_{i_{1}},i\cdot t),(v_{i_{1}},v_{i_{2}},i\cdot t),\dots,(v_{i_{k_{i-1}}},v _{i_{k_{i}}},i\cdot t),(v_{i_{k_{i}}},z,i\cdot t)]\), where the indices are according to (5). Since \(i\) is not covered, \(\mathcal{F}_{i}\cap\mathcal{S}^{\prime}=\emptyset\), so \(P\) is present in \(f(U,\mathcal{S})\setminus V^{\prime}\) violating the assumption that \(V^{\prime}\) is a \((s,z,t)-\)temporal separator (note that \(\text{time}(P)=0\)). \(\leftarrow\) Now, suppose for contradiction that \(V^{\prime}\) is not a \((s,z,t)-\)temporal separator. Thus, there is path \(P\) from \(s\) to \(z\) with \(\text{time}(P)<t\). From the definition of \(f(U,\mathcal{S})\) it is clear that \(P\) should be using edges only from \(E_{j}\) for some \(j\in[n]\). Note that there is a unique \((s,z)-\)temporal path that can be constructed from \(E_{j}\), namely, \(P=[(s,v_{j_{1}},j\cdot t),(v_{j_{1}},v_{j_{2}},j\cdot t),\dots,(v_{i_{k_{j-1}} },v_{i_{j_{j}}},j\cdot t),(v_{i_{k_{j}}},z,j\cdot t)].\) This implies that element \(j\) is not covered by \(\mathcal{S}^{\prime}\), since otherwise, one of the \(v_{j_{i}}\) would be in \(V^{\prime}\). Following the previous claim, every solution in \((s,z,t)-\)Temporal Separator has a corresponding solution in Set Cover, and vice versa. Therefore, an optimal solution in \((s,z,t)-\)Temporal Separator, has a corresponding optimal solution in Set Cover. As a result \(\frac{|V^{\prime}|}{|V_{opt}|}=\frac{|S^{\prime}|}{|S_{opt}|}\). This implies that the reduction is strict. Due to the inapproximability of Set Cover (see [10]), we have the following: **Corollary 4.4**.: _The \((s,z,t)-\)Temporal Separator problem is not approximable to within \((1-\epsilon)(\log n+\log(\tau))\) in polynomial time for any \(\varepsilon>0\), unless \(\mathcal{NP}\subset\textsc{Dtime}(n^{\log\log n})\)._ ### Approximation Algorithms In this section, we present an efficient \(\tau^{2}\)-approximation for the \((s,z,t)\)-Temporal Separator problem. We begin by establishing a \(\tau\)-approximation for the \((s,z)\)-Temporal Separator problem. The main tool used in this section is the _flattening4_ of a temporal graph \(G=(V,E,\tau)\) with respect to vertices \(s\) and \(z\), denoted by \(F(G,s,z)=(V^{\prime},E^{\prime})\). To ease the notation we omit the specification of \(s\) and \(z\) and denote the flattening of \(G\) by \(F(G)\). The flattening \(F(G)\) is a static directed graph defined as follows: the vertex set \(V^{\prime}\) is the union of \(\tau\) disjoint sets \(V_{1},V_{2},\dots,V_{\tau}\) and \(\{s,z\}\), where each \(V_{i}\) is a disjoint copy of \(V-\{s,z\}\). Denoting the vertices of \(V\) by \(v_{1},v_{2},\dots,v_{n}\), we have \(\forall i\in[\tau]\;\;V_{i}=\{v_{j,i}|v_{j}\in V-\{s,z\}\}\). The edge set \(E^{\prime}\) of the flattening \(F(G)\) is defined as follows: Footnote 4: The concept of flattening is not new, and it is similar to the static expansion of a temporal graph – see, for example, [18]. * For each \((v_{i},v_{j},t^{\prime})\in E\) with \(v_{i},v_{j}\not\in\{s,z\}\) we add edges \((v_{i,t^{\prime}},v_{j,t^{\prime}})\) and \((v_{j,t^{\prime}},v_{i,t^{\prime}})\) to \(E^{\prime}\). * For each \(v_{i}\in V\) and each time \(t^{\prime}\in[\tau-1]\) we add an edge \((v_{i,t^{\prime}},v_{i,t^{\prime}+1})\) to \(E^{\prime}\). 
* For each \((s,v_{i},t^{\prime})\in E\) we add an edge \((s,v_{i,t^{\prime}})\) to \(E^{\prime}\). * For each \((z,v_{i},t^{\prime})\) we add an edge \((v_{i,t^{\prime}},z)\) to \(E^{\prime}\). Clearly, \(F(G)\) is defined to express temporal \((s,z)\)-paths in \(G\) in terms of \((s,z)\)-paths in \(F(G)\). More specifically, if we have a temporal \((s,z)\) path \(P\) in \(G\) then there is an analogous static \((s,z)\) path \(P^{\prime}\) in \(F(G)\). If \(P\) begins with an edge \((s,v_{i},t_{1})\) then \(P^{\prime}\) begins with an edge \((s,v_{i,t_{1}})\). After that if the next edge in \(P\) is \((v_{i},v_{j},t_{2})\), we can simulate it in \(F(G)\) by introducing a sequence of edges \((v_{i,t_{1}},v_{i,t_{1}+1}),(v_{i,t_{1}+1},v_{i,t_{1}+2}),\dots,(v_{i,t_{2}-1}, v_{i,t_{2}})\) followed by an edge \((v_{i,t_{2}},v_{j,t_{2}})\), and so on until the vertex \(z\) is reached. This correspondence works in reverse as well. If \(P^{\prime}\) is a static \((s,z)\) path in \(F(G)\) then we can find an equivalent temporal \((s,z)\) path in \(G\) as follows. If the first edge in \(P^{\prime}\) is \((s,v_{i,t_{1}})\) then this corresponds to the first edge of \(P\) being \((s,v_{i},t_{1})\). For the following edges of \(P^{\prime}\), if the edge is of the form \((v_{i,t^{\prime}},v_{i,t^{\prime}+1})\) then it is simply ignored for the purpose of constructing \(P\) (since it corresponds to the scenario where the agent travelling along the path is simply waiting an extra time unit at node \(v_{i}\)), and if the edge is of the form \((v_{i,t^{\prime}},v_{j,t^{\prime}})\) then we add the edge \((v_{i},v_{j},t^{\prime})\) to \(P\). This continues until \(z\) is reached. Thus, there is a temporal \((s,z)\) path \(P\) in \(G\) if and only if there is a static \((s,z)\) path \(P^{\prime}\) in \(F(G)\). Moreover, if \(S\) represents the internal nodes of the path \(P\) then we can find \(P^{\prime}\) with internal nodes in \(\bigcup_{t^{\prime}\in[\tau]}\{v_{i,t^{\prime}}:v_{i}\in S\}\). In the reverse direction, if \(P^{\prime}\) uses internal nodes \(S^{\prime}\) then we can find \(P\) with internal nodes in \(\{v_{i}:\exists t^{\prime}\;\;v_{i,t^{\prime}}\in S^{\prime}\}\). Armed with these observations, we show that the sizes of \((s,z)\)-temporal separators in \(G\) and \((s,z)\)-separators (non-temporal) in \(F(G)\) are related as follows. **Theorem 4.5**.: 1. _If_ \(S\) _is an_ \((s,z)\)_-temporal separator in_ \(G\) _then there is an_ \((s,z)\)_-separator of size at most_ \(\tau|S|\) _in_ \(F(G)\)_._ 2. _If_ \(S^{\prime}\) _is an_ \((s,z)\)_-separator in_ \(F(G)\) _then there is an_ \((s,z)\)_-temporal separator of size at most_ \(|S^{\prime}|\) _in_ \(G\)_._ Proof.: 1. Define \(S^{\prime}=\bigcup_{t^{\prime}\in[\tau]}\{v_{i,t^{\prime}}\in V^{\prime}:v_{i}\in S\}\). Clearly, \(|S^{\prime}|=\tau|S|\). Suppose for the contradiction that \(S^{\prime}\) is not an \((s,z)\)-separator in \(F(G)\). Then there is a path from \(s\) to \(z\) in \(F(G)\) that avoids all vertices in \(S^{\prime}\). By the observations made prior to the statement of this theorem, this path corresponds to a temporal path in \(G\) that avoids vertices in \(S\). Thus, \(S\) is not an \((s,z)\)-temporal separator in \(G\). 2. Define \(S=\{v_{i}\in V:\exists t^{\prime}\in[\tau]\;\;v_{i,t^{\prime}}\in S^{\prime}\}\). Clearly, \(|S|\leq|S^{\prime}|\). An argument similar to the one given in the previous part establishes that \(S\) is an \((s,z)\)-temporal separator in \(G\). 
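As an illustration of how Theorem 4.5 is used algorithmically (this is exactly the route taken in Corollary 4.6 below), here is a rough Python sketch of the flattening construction together with the projection back onto \(G\). It leans on the `networkx` library for the max-flow-based minimum vertex cut and assumes there is no direct edge between \(s\) and \(z\); it is meant as a sanity-check implementation rather than the tuned \(O((m+n\tau)n\tau)\) routine.

```python
import networkx as nx  # assumed available; any max-flow vertex-cut routine would do

def flattening(edges, vertices, s, z, tau):
    """Static directed graph F(G): one copy (v, t') per time step for every v != s, z,
    waiting edges (v, t') -> (v, t'+1), layer edges in both directions, and edges
    incident on s (resp. z) pointing out of s (resp. into z)."""
    F = nx.DiGraph()
    F.add_nodes_from([s, z])
    for v in vertices:
        if v in (s, z):
            continue
        for t in range(1, tau):
            F.add_edge((v, t), (v, t + 1))            # waiting at v
    for u, v, t in edges:
        if s in (u, v) and z in (u, v):
            continue                                   # assumed absent (not separable)
        if s in (u, v):
            w = v if u == s else u
            F.add_edge(s, (w, t))
        elif z in (u, v):
            w = v if u == z else u
            F.add_edge((w, t), z)
        else:
            F.add_edge((u, t), (v, t))
            F.add_edge((v, t), (u, t))
    return F

def tau_approx_separator(edges, vertices, s, z, tau):
    """tau-approximate (s,z)-temporal separator: a minimum (s,z) vertex cut in F(G),
    projected back onto the original vertex names (Theorem 4.5, part 2)."""
    F = flattening(edges, vertices, s, z, tau)
    cut = nx.minimum_node_cut(F, s, z)                 # Menger / max-flow based
    return {v for (v, t) in cut}
```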
**Corollary 4.6**.: _The \((s,z)\)-Temporal Separator problem on a temporal graph \(G=(V,E,\tau)\) can be approximated within \(\tau\) in \(O((m+n\tau)n\tau)\) time, where \(n=|V|\) and \(m=|E|\)._ Proof.: We can use any existing efficient algorithm to solve the \((s,z)\) separator problem on \(F(G)\) and return its answer, which will give a \(\tau\)-approximation by Theorem 4.5. For example, the stated runtime is achieved by applying Menger's theorem and the Ford-Fulkerson algorithm to compute the maximum number of vertex-disjoint paths in \(F(G)\). Then the running time is \(O(|E^{\prime}||V^{\prime}|)\). Observing that \(|E^{\prime}|\leq|E|+|V|\tau\) and \(|V^{\prime}|\leq|V|\tau\) finishes the proof of this corollary. Next, we describe how the \((s,z,t)\)-Temporal Separator problem can be approximated using a slight extension of the above ideas. First, for a temporal graph \(G=(V,E,\tau)\) and two integers \(t_{1}\leq t_{2}\) we define \(E[t_{1}:t_{2}]=\{(u,v,t^{\prime})\in E:t_{1}\leq t^{\prime}\leq t_{2}\}\). We also define \(G[t_{1}:t_{2}]=(V,E[t_{1}:t_{2}],t_{2})\), which can be thought of as graph \(G\) restricted to time interval \([t_{1},t_{2}]\). The idea behind approximating a minimum \((s,z,t)\)-temporal separator is to combine \((s,z)\)-temporal separators of \(F(G[1:t+1]),F(G[2:t+2]),\ldots,F(G[\tau-t:\tau])\). **Theorem 4.7**.: _The \((s,z,t)\)-Temporal Separator problem on a temporal graph \(G=(V,E,\tau)\) can be approximated within \(\tau^{2}\) in \(O((m+n\tau)n\tau^{2})\) time, where \(n=|V|\) and \(m=|E|\)._ Proof.: The algorithm has essentially been described prior to the statement of the theorem, so the running time is clear. It is left to argue that it produces \(\tau^{2}\)-approximation. This can be argued similarly to Theorem 4.5. 1. Let \(S\) be a \((s,z,t)\)-temporal separator in \(G\). Then for \(G[i:i+t]\) we define \(S_{i}\) to consist of all nodes \(v_{j,t^{\prime}}\) with \(v_{j}\in S\). Since \(S\) removes all paths from \(G\) of travelling time \(\leq t\) and \(G[i:i+t]\) only has paths of travelling time \(\leq t\), then \(S_{i}\) is an \((s,z)\)-separator in \(F(G[i:i+t])\) of size \(|S_{i}|\leq\tau|S|\). Thus, if there is an \((s,z,t)\)-temporal separator of size \(|S|\) in \(G\) then the combined size of the \((s,z)\)-separators computed for \(F(G[1:t+1]),F(G[2:t+2]),\ldots,F(G[\tau-t:\tau])\) is at most \(\tau^{2}|S|\). 2. Let \(S_{i}\) be an \((s,z)\)-separator in \(F(G[i:i+t])\). Define \(S=\{v_{j}:\exists i\exists t^{\prime}\;\;v_{j,t^{\prime}}\in S_{i}\}\). It is easy to see that \(S\) is a \((s,z,t)\) temporal separator in \(G\). Paths of travelling time at most \(t\) that begin with an edge \((s,v_{i},t_{1})\) are present in \(G[t_{1}:t_{1}+t]\), and so removal of \(S_{t_{1}}\) removes such temporal paths in \(G[t_{1}:t_{1}+t]\). Since \(S_{t_{1}}\) is "projected" onto \(V\) and included in \(S\), these paths are eliminated from \(G\). ## 5 Temporal Separators with Deadlines on Special Families of Graphs ### Temporal Graphs with Branchwidth at most \(2\) The graphs with branchwidth at most \(2\) are graphs in which each biconnected component is a series-parallel graph [20]. In this section, we present an efficient algorithm to solve the \((s,z,t)\)-Temporal Separator problem on temporal graphs whose underlying static graphs have branchwidth at most \(2\). In fact, our algorithm works for a more general class of problems, which we refer to as "restricted path \((s,z)\)-Temporal Separator."
The goal in this more general problem is to select a set of vertices \(S\) such that the removal of \(S\) from the given temporal graph \(G\) removes all \((s,z)\) paths in a restricted family of paths. The \((s,z,t)\)-Temporal Separator problem is seen as a special case of this, where paths are restricted to have travelling time less than \(t\). The restricted family of paths can be any path family implicitly defined by a procedure \(ExistsRestrictedPath(G,s,z)\) which takes as input a temporal graph \(G\), a pair of nodes \(s\) and \(z\), and returns true if and only if there exists a restricted temporal path between \(s\) and \(z\) in \(G\). Due to Lemma 2.1, we know that such a procedure exists in the case of temporal paths restricted by travelling time, which is suitable for the \((s,z,t)\)-Temporal Separator problem. For the rest of this section, we assume that \(G\) is a temporal graph such that \(bw(G_{\downarrow})\leq 2\) unless stated otherwise. Furthermore, we assume that \(G_{\downarrow}\) is connected, otherwise, if \(s\) and \(z\) belong to different connected components the answer to the problem is trivially \(\emptyset\), and if they belong to the same connected component, the problem reduces to analyzing that connected component alone. We introduce some notation and make several observations about branch decomposition before we give full details of our algorithm. Recall from Section 2 that a branch decomposition of \(G_{\downarrow}\) of width \(2\) can be computed in linear time. Thus, we assume that the algorithm has access to such a decomposition, which we denote by \((T,\beta)\). We use \(\rho\) to denote the root of \(T\) and we define the function \(top:V(G)\to V(T)\) as follows. For \(v\in V(G)\) we let \(top(v)\) be the furthest node \(x\in V(T)\) from the root \(\rho\) which satisfies \(E(v)\subseteq\beta(x)\). We also use \(x_{\ell}\) to denote the left child of \(x\) and \(x_{r}\) to denote the right child of \(x\). For a node \(x\in V(T)\) we define \(G_{x}^{in}\) to be the temporal graph obtained from \(G\) by keeping only those edges \((u,v,t)\) with \((u,v)\in\beta(x)\) and removing all vertices of degree \(0\). We collect several useful observations about the introduced notions in the following lemma. **Lemma 5.1**.: 1. _If_ \(v\in\partial\beta(x)\) _then_ \(v\in\partial\beta(x_{\ell})\) _or_ \(v\in\partial\beta(x_{r})\)_._ 2. _If_ \(\mathit{top}(v)=x\) _then_ \(v\in\partial\beta(x_{\ell})\) _and_ \(v\in\partial\beta(x_{r})\)_._ 3. _If_ \(v\in V(G_{x}^{in})\setminus\partial\beta(x)\) _then all edges incident on_ \(v\) _in_ \(G\) _are present in_ \(G_{x}^{in}\)_._ Proof.: 1. Since \(v\in\partial\beta(x)\) it means that some but not all edges incident on \(v\) in \(G\) appear in \(\beta(x)\). Since \(\beta(x)=\beta(x_{\ell})\cup\beta(x_{r})\), it implies that some but not all edges incident on \(v\) must appear either in \(\beta(x_{\ell})\), or \(\beta(x_{r})\), or both. 2. If \(top(v)=x\) then \(E(v)\subseteq\beta(x)\). Suppose for contradiction that \(v\not\in\partial\beta(x_{\ell})\). This can happen for two reasons: either (1) \(E(v)\subseteq\beta(x_{\ell})\), or (2) \(E(v)\cap\beta(x_{\ell})=\emptyset\). In case (1) we obtain a contradiction with the definition of \(top(v)\) since \(x_{\ell}\) is further from the root than \(x\) and it still contains all of \(E(v)\). In case (2) observe that we must have \(E(v)\subseteq\beta(x_{r})\), thus obtaining a contradiction with the definition of \(\mathit{top}(v)\) again since \(x_{r}\) is further from the root than \(x\) and it still contains all of \(E(v)\). 3.
Since \(v\in V(G_{x}^{in})\setminus\partial\beta(x)\) it means that there is at least one edge incident on \(v\) in \(G_{x}^{in}\). Since \(v\) is not in the boundary of \(\beta(x)\), it means that all edges incident on \(v\) in \(G\) must be present in \(\beta(x)\). Now, we are ready to describe our algorithm, which is denoted by \(RTS\). The algorithm starts by checking if there is a restricted temporal path from \(s\) to \(z\) in \(G\), and if such a path does not exist then the algorithm immediately returns \(\emptyset\). Then the algorithm checks if there exists a restricted temporal separator of size \(1\) by testing whether there is a restricted temporal path in \(G\setminus\{v\}\) for each \(v\in V(G)\setminus\{s,z\}\). Then the algorithm computes \(\mathit{top}(s)\) and \(\mathit{top}(z)\) and the computation splits into three cases: (1) if \(\mathit{top}(s)=\mathit{top}(z)\); (2) if \(\mathit{top}(s)\) and \(\mathit{top}(z)\) are not on the same root-to-leaf path in \(T\) (i.e., neither one is an ancestor of another); and (3) if one of \(\mathit{top}(s),\mathit{top}(z)\) is an ancestor of another. We shall later see that case (1) implies that \(\mathit{top}(s)=\mathit{top}(z)=\rho\). In this case, the algorithm invokes itself recursively on the two subtrees of \(T\) - the subtree rooted at the left child of \(\rho\) and the subtree rooted at the right child of \(\rho\). The separators obtained on these two subtrees correspond to separators of \(G_{\rho_{\ell}}^{in}\) and \(G_{\rho_{r}}^{in}\) and their union is returned as the separator for \(G\). In case (2) the algorithm returns the boundary of \(\beta(\mathit{top}(z))\) (it could return the boundary of \(\beta(\mathit{top}(s))\) instead - it does not make a difference) as the answer. In case (3), we assume without loss of generality that \(\mathit{top}(z)\) is the ancestor of \(\mathit{top}(s)\), and handling of this case depends on whether \(z\) belongs to the boundary of \(\beta(\mathit{top}(s))\) or not. In fact, this case splits into three subcases: (3.1) if \(z\not\in\partial\beta(\mathit{top}(s))\) then the algorithm immediately returns \(\partial\beta(top(s))\); (3.2) if \(\partial\beta(top(s))=\{z\}\) then the algorithm invokes itself recursively on \(G_{top(s)}^{in}\); and (3.3) if \(\partial\beta(top(s))=\{z,q\}\) for some vertex \(q\neq s,z\) then the algorithm first invokes itself recursively on \(G_{top(s)_{\ell}}^{in}\) (assuming \(\partial\beta(top(s)_{\ell})=\{s,z\}\)) and stores the answer in \(S\). If \(S\) proves to be a separator in \(G\) then \(S\) is returned, otherwise, \(q\) is added to \(S\) and returned. The pseudocode is presented in Algorithm 1. **Theorem 5.2**.: _Algorithm 1 correctly computes a minimum-sized restricted path \((s,z)\)-temporal separator for a temporal graph \(G\) such that \(bw(G_{\downarrow})\leq 2\)._ Proof.: The proof proceeds by the case analysis reflecting the structure of the algorithm. Clearly, the algorithm correctly identifies when there is a separator of size \(0\) or \(1\) since it performs brute-force checks for these special cases. Assuming that there is no separator of size \(\leq 1\), we discuss the correctness for the remaining three cases. Case (1): \(top(s)=top(z)=x\in V(T)\). Observe that Lemma 5.1, item 2, implies that \(s,z\in\partial\beta(x_{\ell})\) and \(s,z\in\partial\beta(x_{r})\). Since the branchwidth is \(2\), it implies that \(\partial\beta(x_{\ell})=\partial\beta(x_{r})=\{s,z\}\).
In addition, we know that \(s,z\not\in\partial\beta(x)\) by the definition of \(top()\). And since every vertex in \(\partial\beta(x)\) must appear in \(\partial\beta(x_{\ell})\) or \(\partial\beta(x_{r})\) (using Lemma 5.1, item 1), we conclude that \(\partial\beta(x)=\emptyset\). By Lemma 5.1, item 3, every vertex in \(G_{x}^{in}\) has all of its edges from \(G\) present in \(G_{x}^{in}\). Therefore \(G_{x}^{in}\) is disconnected from the rest of \(G\). However, we assume that \(G\) is connected, so we must have \(G_{x}^{in}=G\). This is true only when \(x=\rho\). Thus, we must have in this case that \(top(s)=top(z)=\rho\). Observe that if \(P\) is a restricted temporal path between \(s\) and \(z\) (that does not have \(s\) or \(z\) as intermediate nodes) then it cannot use edges from both \(\beta(\rho_{\ell})\) and \(\beta(\rho_{r})\). Suppose, for contradiction, that \(P\) uses both kinds of edges, then there must be a vertex \(v\) on this path incident on \(e_{1}\) and \(e_{2}\) such that \(e_{1}\in\beta(x_{\ell})\) and \(e_{2}\in\beta(x_{r})\). Since \(\beta(x_{\ell}),\beta(x_{r})\) partition all the edges, it implies that \(e_{2}\not\in\beta(x_{\ell})\). This means that \(v\in\partial\beta(x_{\ell})=\{s,z\}\), but \(v\neq s,z\), giving a contradiction. Therefore, the minimum size restricted path temporal separator in \(G\) is the union of minimum size restricted path temporal separators in \(G_{\rho_{\ell}}^{in}\) and \(G_{\rho_{r}}^{in}\), which is precisely what our algorithm outputs. Case (2): \(top(s)\) and \(top(z)\) do not lie on the same root-to-leaf path in \(T\). One of the consequences of Lemma 5.1, item 3, is that removing \(\partial\beta(x)\) from \(G\) separates all vertices in \(V(G_{x}^{in})\) from the rest of the graph. Therefore, removing \(\partial\beta(top(z))\) separates all vertices in \(G_{top(z)}^{in}\) from the rest of the graph. Observe that \(z\in V(G_{top(z)}^{in})\) and \(s\not\in V(G_{top(z)}^{in})\) (by the condition of this case). Therefore removing \(\partial\beta(top(z))\) separates \(s\) from \(z\). We claim that this is the minimum separator in this case. This is because when this line is reached we are guaranteed that there is no separator of size \(1\), and \(|\partial\beta(top(z))|\leq 2\) (in fact, it must be then equal to \(2\)). We only need to be careful that neither \(z\) nor \(s\) is in \(\partial\beta(top(z))\), but it is clear from the definition of \(top()\) and the case condition. Case (3): \(top(z)\) is an ancestor of \(top(s)\) (if \(top(s)\) is an ancestor of \(top(z)\) then we can exchange the roles of \(s\) and \(z\) for the sake of the argument). This case has three subcases. Subcase (3.1): \(z\not\in\partial\beta(top(s))\). This is similar to case (2) described above. The algorithm can return \(\partial\beta(top(s))\) as a minimum size separator. Subcase (3.2): \(\partial\beta(top(s))=\{z\}\). In this case, the structure of the graph is such that \(G_{top(s)}^{in}\) is connected to the rest of the vertices in \(G\) via the node \(z\), while vertex \(s\) lies in \(G_{top(s)}^{in}\). Thus, to separate \(z\) from \(s\), it is sufficient to separate them in \(G_{top(s)}^{in}\), which is what the algorithm does. Subcase (3.3): \(\partial\beta(top(s))=\{z,q\}\). By Lemma 5.1, item 2, it follows that \(s\in\partial\beta(top(s)_{\ell})\) and \(s\in\partial\beta(top(s)_{r})\). By Lemma 5.1, item 1, it follows that \(z,q\in\partial\beta(top(s)_{\ell})\cup\partial\beta(top(s)_{r})\).
Since branchwidth is at most \(2\), we have (without loss of generality) that \(\partial\beta(top(s)_{\ell})=\{s,z\}\) and \(\partial\beta(top(s)_{r})=\{s,q\}\). By an argument similar to the one in case (1), we can establish that any restricted \((s,z)\) temporal path (that does not use \(s\) or \(z\) as intermediate nodes) must either consist entirely of edges in \(\beta(top(s)_{\ell})\) or entirely of edges in \(\beta(top(s)_{r})\). Thus, we can compute the two separators and take their union; however, we can simplify the calculation observing that the only separator we need to consider for \(G_{top(s)_{r}}^{in}\) is \(\{q\}\), since \(G_{top(s)_{r}}^{in}\) is connected to the rest of \(G\) only through \(q\) and \(s\). **Corollary 5.3**.: _Given a temporal graph \(G=(V,E,\tau)\) with \(bw(G_{\downarrow})\leq 2\), the problem \((s,z,t)\)-Temporal Separator is solvable in time \(O(|V||E||\mathcal{T}|)\) where \(\mathcal{T}=\{t(e):e\in E(s)\}\)._ ### Temporal Graphs with a "Tree-like" Underlying Graph In this section, we present a polynomial time greedy algorithm (motivated by the point-cover interval problem) for computing a path restricted \((s,z)\)-temporal separator (see Section 5.1) on a temporal graph \(G\) such that \(G_{\downarrow}\setminus\{s,z\}\) is a tree, provided that the existence of a restricted \((s,z)\)-temporal path can be checked in polynomial time. We assume that we are given a temporal graph \(G\) such that \(G_{\downarrow}\setminus\{s,z\}\) is a tree, which we denote by \(T\). For a pair of nodes \((u,w)\), we let \(P_{u,w}\) denote the unique shortest path in \(T\) between \(u\) and \(w\). For a vertex \(v\in V(T)\), we define a removal list of \(v\), denoted by \(RL_{v}\), to consist of all unordered pairs \((u,w)\) such that \(v\in V(P_{u,w})\) and there exists a restricted \((s,z)\)-temporal path in \(G\) using the edges of \(P_{u,w}\). For a pair \(u,w\in V(T)\), we define two temporal graphs: (1) \(G_{u,w}^{1}\) is \(G\) induced on the edges of \(E(P_{u,w})\cup\{(s,u),(w,z)\}\), and (2) \(G_{u,w}^{2}\) is \(G\) induced on the edges of \(E(P_{u,w})\cup\{(s,w),(u,z)\}\). The removal lists for all vertices in \(V(T)\) can be computed efficiently as follows. Initialize all removal lists to be empty. For each pair of vertices \(u,w\in V(T)\) check if there is any restricted \((s,z)\)-temporal path in \(G^{1}_{u,w}\) or \(G^{2}_{u,w}\), and if so, then add \((u,w)\) to the removal lists of all nodes in \(P_{u,w}\). Let \(\mathcal{U}=\bigcup_{v\in V(T)}RL_{v}\) be the set of all pairs of nodes that appear in removal lists. The following observation is immediate from the definitions and shows that computing a minimum size restricted path \((s,z)\)-temporal separator reduces to covering \(\mathcal{U}\) with as few removal lists as possible. _Observation 5.1_.: A set \(S\) is a restricted path \((s,z)\)-temporal separator if and only if \(\bigcup_{v\in S}RL_{v}=\mathcal{U}\). A vertex \(v\) is called topmost if there exists a pair \((u,w)\in RL_{v}\) such that \((u,w)\not\in RL_{parent(v)}\). Our greedy algorithm, called \(GreedyRTS\), starts out with an empty solution \(S=\emptyset\), and then adds more vertices to \(S\) as follows. While there are non-empty removal lists, the algorithm selects a topmost vertex \(v\) with maximum distance from the root of \(T\), adds \(v\) to the set \(S\), and removes all pairs in \(RL_{v}\) from the removal lists of all the other vertices. The pseudocode is given in Algorithm 2.
``` FunctionComputeRLS(\(G,s,z\)): \(\mathcal{U}\leftarrow\emptyset\); for\((u,w)\in V(T)\times V(T)\)do ifExistsRestrictedPath(\(G^{1}_{u,w},s,z\)) orExistsRestrictedPath(\(G^{2}_{u,w},s,z\))then \(\mathcal{U}\leftarrow\mathcal{U}\cup\{(u,w)\}\); for\(v\in V(P_{u,w})\)do \(RL_{v}\gets RL_{v}\cup\{(u,w)\}\); FunctionGreedyRTS(\(G,s,z,RL,\mathcal{U}\)): \(S\leftarrow\emptyset\); while\(\mathcal{U}\neq\emptyset\)do \(v\leftarrow\)furthest node from the root of \(T\) such that \(\exists(u,w)\in RL_{v}\setminus RL_{parent(v)}\); \(S\gets S\cup\{v\}\); \(\mathcal{U}\leftarrow\mathcal{U}\setminus RL_{v}\); for\(w\in V(T)\)do \(RL_{w}\gets RL_{w}\setminus RL_{v}\); return\(S\); ``` **Algorithm 2**This algorithm computes a minimum sized restricted path \((s,z)\)-temporal separator in a temporal graph \(G\) when \(G_{\downarrow}\setminus\{s,z\}\) is a tree \(T\). **Theorem 5.4**.: _Algorithm 2 computes a minimum-sized restricted path \((s,z)\)-temporal separator in a temporal graph \(G\) with \(G_{\downarrow}\setminus\{s,z\}\) being a tree._ Proof.: Let \(S\) denote the solution produced by \(GreedyRTS\). It is clear from the algorithm's description and Observation 5.1 that \(S\) is, indeed, a restricted path \((s,z)\)-temporal separator. It is only left to show that there is no smaller temporal separator. Consider the order of vertices in \(S\) in which they are included by \(GreedyRTS\). For a minimum-sized separator \(S_{opt}\) we can define \(k\) to be the largest integer such that \(S\) and \(S_{opt}\) agree on the first \(k\) vertices considered by \(GreedyRTS\). Now, we fix a particular minimum size separator \(S_{opt}\) that maximizes \(k\). We will show that \(k=|S|\), establishing the claim. Suppose for contradiction that \(k<|S|\). Therefore, \(S\) and \(S_{opt}\) disagree on the \((k+1)^{\text{st}}\) vertex \(x\), i.e., \(x\in S\) and \(x\not\in S_{opt}\). Vertex \(x\) was selected by \(GreedyRTS\) since there is a pair of nodes \(u,w\) such that \((u,w)\in RL_{x}\setminus RL_{parent(x)}\). Since \((u,w)\) has not been removed from \(\mathcal{U}\) at the time when \(x\) was considered and \(S_{opt}\) agreed with \(S\) up until that point, it means that there must be some other vertex \(x^{\prime}\in S_{opt}\) such that \((u,w)\in RL_{x^{\prime}}\). We claim that \(RL_{x^{\prime}}\subseteq RL_{x}\). First, observe that since \((u,w)\not\in RL_{parent(x)}\) and \((u,w)\in RL_{x^{\prime}}\) it follows that \(x^{\prime}\) must be in the subtree of \(T\) rooted at \(x\). If we suppose, for contradiction, that there is some pair \((u^{\prime},v^{\prime})\in RL_{x^{\prime}}\) such that \((u^{\prime},v^{\prime})\not\in RL_{x}\) then there must be a vertex \(y\) on the path \(P_{x,x^{\prime}}\) such that \((u^{\prime},v^{\prime})\in RL_{y}\) and \((u^{\prime},v^{\prime})\not\in RL_{parent(y)}\). Then \(y\neq x\) since \((u^{\prime},v^{\prime})\not\in RL_{x}\), and this contradicts the greedy choice property, namely, \(y\) would be a topmost vertex that is located further from the root than \(x\), so it should have been chosen by \(GreedyRTS\). Since \(RL_{x^{\prime}}\subseteq RL_{x}\), it follows that \(S^{\prime}=(S_{opt}\setminus\{x^{\prime}\})\cup\{x\}\) is another optimal solution that agrees on the first \(k+1\) vertices considered by \(GreedyRTS\). This contradicts the choice of \(S_{opt}\) and finishes the proof of the theorem. Based on Lemma 2.1, the existence of a \((s,z,t)\)-temporal path can be solved in polynomial time. Thus, the following theorem follows from Theorem 5.4. 
**Theorem 5.5**.: _The \((s,z,t)\)-Temporal Separator problem is solvable in polynomial time on temporal graphs \(G\) where \(G_{\downarrow}\setminus\{s,z\}\) is a tree._ ### Temporal Graphs with Bounded Pathwidth In this section, we present a reduction from the _Discrete Segment Covering (DISC-SC)_ problem to the \((s,z,t)\)-Temporal Separator problem on graphs with bounded pathwidth. In the DISC-SC problem, we are given a set \(\Gamma\) of \(n\) intervals (also called segments) on the rational line and a set \(\mathcal{I}\) of unit-intervals on the rational line. We wish to find a subset of unit intervals \(A\subseteq\mathcal{I}\) which covers all the segments in \(\Gamma\). The objective is to minimize the size of \(A\). An interval \(I\in\mathcal{I}\) covers a segment \(S\in\Gamma\) if at least one endpoint of \(S\) lies in \(I\). A segment \(S\in\Gamma\) is covered by a set of intervals \(A\) if there is an interval \(I\in A\) that covers \(S\). We refer to the version of DISC-SC where all segments in \(\Gamma\) have length bounded by \(k\) as DISC-SC-\(k\). The DISC-SC problem is \(\mathcal{NP}\)-hard [4]. [4] also shows that the DISC-SC problem remains \(\mathcal{NP}\)-hard when the lengths of all segments in \(\Gamma\) are equal. DISC-SC-1 can be solved efficiently by a simple greedy algorithm [4]. However, the hardness of DISC-SC-\(k\) for general \(k>1\) remains open. The following theorem serves as a warm-up, and it establishes a simple polynomial time reduction from DISC-SC to the \((s,z,t)\)-Temporal Separator problem. **Theorem 5.6**.: _There is a polynomial-time reduction from the DISC-SC problem to the \((s,z,t)\)-Temporal Separator problem._ Proof.: We denote the starting and ending points of an interval \(I\) by \(s(I)\) and \(e(I)\), respectively. Also, we use this notation for all the segments. Consider a non-decreasing order \((I_{1},I_{2},\ldots I_{m})\) of all intervals in \(\mathcal{I}\) (by their starting times), also consider \((C_{1},C_{2},\ldots,C_{n})\) an arbitrary order of segments in \(\Gamma\). Based on the fact that the size of all intervals in \(\mathcal{I}\) is one, it can be concluded that for any point \(p\) and three indices \(i<k<j\), if \(p\in I_{i}\) and \(p\in I_{j}\) then \(p\in I_{k}\), since the starting point of \(I_{k}\) is before the starting point of \(I_{j}\) and the ending point of \(I_{k}\) is after the ending point of \(I_{i}\). Now we construct a temporal graph \(G=(V,E,t\times|\Gamma|)\) such that \(V=\{v_{i}|i\in[m]\}\cup\{s,z\}\). For any segment \(C_{j}\) we construct the layer \(G_{j\times t}\) as follows: Let \(l_{s}\) and \(r_{s}\) be the indices of the first and last intervals in \(\mathcal{I}\) which cover the starting point \(s(C_{j})\). It is clear that the starting point of \(C_{j}\) is covered by all the intervals between \(I_{l_{s}}\) and \(I_{r_{s}}\). Similarly, \(l_{e}\) and \(r_{e}\) denote the indices of the first and the last intervals which cover the ending point \(e(C_{j})\). Since the ending point \(e(C_{j})\) is after the starting point \(s(C_{j})\) we have \(l_{s}\leq l_{e}\) and \(r_{s}\leq r_{e}\). So based on \(l_{e}\) and \(r_{s}\) we consider the following two cases: **Case 1**. (\(l_{e}\leq r_{s}\)). In this case, we add the following temporal path which creates the layer \(G_{j\times t}\). Figure 3 shows this temporal path. \[(s,v_{l_{s}},j\times t),(v_{l_{s}},v_{l_{s}+1},j\times t),\ldots,(v_{r_{e}-1},v_{r_{e}},j\times t),(v_{r_{e}},z,j\times t) \tag{6}\] **Case 2**. (\(r_{s}<l_{e}\)).
Similar to the previous case, we add a path from \(s\) to \(z\) which creates the layer \(G_{j\times t}\). Figure 4 shows this temporal path.

\[\begin{split}&(s,v_{l_{s}},j\times t),(v_{l_{s}},v_{l_{s}+1},j\times t),\dots,(v_{r_{s}-1},v_{r_{s}},j\times t),\\ &(v_{r_{s}},v_{l_{e}},j\times t),\\ &(v_{l_{e}},v_{l_{e}+1},j\times t),\dots,(v_{r_{e}-1},v_{r_{e}},j\times t),(v_{r_{e}},z,j\times t)\end{split} \tag{7}\]

Figure 3: Demonstration of case 1 in the proof of Theorem 5.6. Layer \(G_{j\times t}\) in the case that \(l_{e}\leq r_{s}\). The time label for all the edges is \(j\times t\).

Figure 4: Demonstration of case 2 in the proof of Theorem 5.6. Layer \(G_{j\times t}\) in the case that \(r_{s}<l_{e}\). The time label for all the edges is \(j\times t\).

Suppose that \(A\subseteq\mathcal{I}\), and let \(S=\{v_{i}|I_{i}\in A\}\). We claim that \(A\) covers \(\Gamma\) if and only if \(S\) is a \((s,z,t)\)-temporal separator.

\(\rightarrow\) Suppose \(A\) is a set of intervals that covers all segments \(C_{j}\). If \(l_{e}\leq r_{s}\) (Case 1) then there exists an interval \(I_{i}\in A\) such that \(l_{s}\leq i\leq r_{e}\), and the temporal path shown in equation 6 is incident on \(v_{i}\). On the other hand, if \(r_{s}<l_{e}\) (Case 2) then there exists an interval \(I_{i}\in A\) such that \(l_{s}\leq i\leq r_{s}\) or \(l_{e}\leq i\leq r_{e}\), and the temporal path shown in equation 7 is incident on \(v_{i}\). Therefore, every \((s,z,t)\)-temporal path in the temporal graph \(G\) contains at least one vertex from \(S\), so \(S\) is a \((s,z,t)\)-temporal separator.

\(\leftarrow\) Suppose \(S\) is a \((s,z,t)\)-temporal separator. Then, for any integer \(j\in[n]\), the temporal path added at time \(j\times t\) must be incident on some vertex in \(S\). If \(l_{e}\leq r_{s}\) (Case 1) then there exists \(v_{i}\in S\) such that \(l_{s}\leq i\leq r_{e}\), which implies that \(I_{i}\) covers \(C_{j}\) and belongs to \(A\). If \(r_{s}<l_{e}\) (Case 2) then there exists \(v_{i}\in S\) such that \(l_{s}\leq i\leq r_{s}\) or \(l_{e}\leq i\leq r_{e}\), which implies that \(I_{i}\) covers \(C_{j}\) and belongs to \(A\). Therefore all the segments are covered by an interval in \(A\).

The issue with the above reduction is that it does not provide any structural guarantees about the temporal graph \(G\) used in the construction. In order to establish a reduction via a temporal graph \(G\) whose underlying graph has bounded pathwidth, we start with a restricted version of DISC-SC, namely, the DISC-SC-\(k\) problem. The following results can then be established.

**Theorem 5.7**.: _There is a polynomial-time reduction from the DISC-SC-\(k\) problem to the \((s,z,t)\)-Temporal Separator problem in which the pathwidth of the underlying graph is bounded by \(2k+6\)._

Proof.: Consider an instance \((\mathcal{I},\Gamma)\) of the Discrete Segment Covering problem such that the length of all the segments in \(\Gamma\) is at most \(k\). Consider the intervals in \(\mathcal{I}=(I_{1},I_{2},\dots,I_{n})\) in the non-decreasing order of their starting times. We choose a special set of intervals \(SP\subseteq\mathcal{I}\) by the following algorithm.

1. Let \(SP=\{I_{1}\}\) and \(index=1\).
2. Let \(j\) be the largest index such that \(s(I_{j})<e(I_{index})\), if such a \(j\) exists. Otherwise, let \(j=index+1\).
3. Put \(I_{j}\) into the set \(SP\), update the integer \(index\) to be equal to \(j\), and if \(j\leq n\) repeat the algorithm from step 2.

**Lemma 5.8**.: _A point \(p\) is covered by \(\mathcal{I}\) if \(SP\) covers it._

Proof.: We prove the lemma by induction.
First, based on the algorithm, it is clear that \(I_{1}\in SP\) and \(I_{n}\in SP\). Now we state the induction hypothesis: for any \(i\in[n]\) such that \(I_{i}\in SP\), a point \(p\) is covered by \(\{I_{1},I_{2},\dots,I_{i}\}\) if it is covered by \(\{I_{1},I_{2},\dots,I_{i}\}\cap SP\). * **Base case**. For \(i=1\), it is clear that \(I_{1}\in SP\). * **Induction step**. Suppose \(j<i\) is the largest integer such that \(I_{j}\in SP\cap\{I_{1},\dots I_{i-1}\}\). Based on the inductive assumption, a point \(p\) is covered by \(\{I_{1},I_{2},\dots,I_{j}\}\) if it is covered by \(\{I_{1},I_{2},\dots,I_{j}\}\cap SP\). Since the starting point of \(I_{i}\) is before the ending point of \(I_{j}\), we have that a point \(p\) is covered by \(\{I_{1},I_{2},\dots,I_{i}\}\) if it is covered by \(\{I_{1},I_{2},\dots,I_{i}\}\cap SP\). Since \(I_{n}\in SP\), a point \(p\) is covered by \(\{I_{1},I_{2},\dots,I_{n}\}=\mathcal{I}\) if it is covered by \(\{I_{1},I_{2},\dots,I_{n}\}\)\(\cap SP=SP\). The main idea of the proof is based on to the following features of the special set \(SP\). Denote \(SP=\{I_{m_{1}},I_{m_{2}},\dots I_{m_{q}}\}\). Based on the selection of interval \(I_{m_{i+1}}\) it is clear that the starting point of \(I_{m_{i+2}}\) is greater than the ending point of \(I_{m_{i}}\) which implies that \(s(I_{m_{i+2}})>s(I_{m_{i}})+1\). More generally, we have that \(e(I_{m_{i+2}})>s(I_{m_{i}})+k+1\). Therefore, for any segment \(C\in\Gamma\) and for any interval \(I_{m_{i}}\) and \(I_{m_{j}}\) such that \(s(C)\in I_{m_{i}}\) and \(e(C)\in I_{m_{j}}\), we could conclude that \(j\leq i+2k\). This feature for \(SP\) is the main idea used in constructing an instance of the \((s,z,t)\)-Temporal Separator problem with low pathwidth. Now we construct a temporal graph \(G=(V,E,\tau)\) where \(\tau=|\Gamma|\times t\). Let \(V=\{u_{i}|i\in[n]\}\cup\{v_{i}|i\in[n]\}\cup\{s,z\}\). Now, for the \(i\)-th segment \(C\in\Gamma\) we add a path from \(s\) to \(z\) at time \(i\times t\). Let \(m_{a}\) and \(m_{b}\) be the indices of the first intervals in \(SP\) which cover points \(s(C)\) and \(e(C)\), respectively. Based on the Lemma 5.8 if \(m_{a}\) (or \(m_{b}\)) does not exist, then the point \(s(C)\) (respectively, \(e(C)\)) will not be covered by any interval in \(\mathcal{I}\). Therefore, we could treat \(C\) as a single point \(e(C)\) (respectively, \(s(C)\)) and continue on with the algorithm. Let \(l_{s}\) be the index of the leftmost interval form \(\mathcal{I}\) which covers \(s(C)\), and let \(r_{s}\) be the index of the rightmost interval from \(\mathcal{I}\) which covers \(s(C)\). It is obvious that \(s(C)\) is covered by all of the intervals between \(l_{s}\) and \(r_{s}\) in \(\mathcal{I}\). Similarly, let \(l_{e}\) and \(r_{e}\) be the indices of the leftmost and the rightmost intervals which cover \(e(C)\). If \(l_{e}\leq r_{s}\) then consider \(l_{e}=r_{s}+1\) instead. Now, add the following \((s,z,t)\)-temporal path to the temporal graph \(G\). For simplicity, we denote \(i\times t\) by \(\theta\). 
\[(s,u_{l_{s}},\theta),(u_{l_{s}},v_{l_{s}},\theta),(v_{l_{s}},v_{l_{s}+1},\theta),\ldots,(v_{r_{s}-1},v_{r_{s}},\theta) \tag{8}\]
\[(v_{r_{s}},u_{r_{s}},\theta),(u_{r_{s}},u_{r_{s}-1},\theta),\ldots,(u_{m_{a}+1},u_{m_{a}},\theta)\]
\[(u_{m_{a}},u_{m_{b}},\theta)\]
\[(u_{m_{b}},u_{m_{b}-1},\theta),\ldots,(u_{l_{e}+1},u_{l_{e}},\theta),(u_{l_{e}},v_{l_{e}},\theta)\]
\[(v_{l_{e}},v_{l_{e}+1},\theta),\ldots,(v_{r_{e}-1},v_{r_{e}},\theta),(v_{r_{e}},u_{r_{e}},\theta),(u_{r_{e}},z,\theta)\]

Figure 5 shows the above path in the graph layer \(i\times t\). We claim that there exists \(A\subseteq\mathcal{I}\) that covers \(\Gamma\) with \(|A|\leq p\) if and only if there is a \((s,z,t)\)-temporal separator \(S\subseteq V\) such that \(|S|\leq p\).

\(\rightarrow\) Suppose that \(A\subseteq\mathcal{I}\) covers all segments in \(\Gamma\). Let \(S=\{v_{i}|I_{i}\in A\}\). It is obvious that \(|S|=|A|\). Now we prove that \(S\) is a \((s,z,t)\)-temporal separator. Suppose, towards a contradiction, that there is a temporal path \(P\) in \(G\) that contains no vertex of \(S\). Based on the construction of \(G\), this temporal path must be of the form shown in equation 8 for some \(i\in[n]\). This implies \(I_{j}\notin A\) for all \(j\) such that \(l_{s}\leq j\leq r_{s}\) or \(l_{e}\leq j\leq r_{e}\), so the \(i\)-th segment is not covered by \(A\), a contradiction. We conclude that \(S\) is a \((s,z,t)\)-temporal separator.

\(\leftarrow\) Suppose that \(S\subseteq V\) is a \((s,z,t)\)-temporal separator in the temporal graph \(G\). Let \(A=\{I_{i}|u_{i}\in S\text{ or }v_{i}\in S\}\); it is clear that \(|A|\leq|S|\). Consider the \(i\)-th segment \(C\in\Gamma\). Since \(S\) is a \((s,z,t)\)-temporal separator, at least one vertex of the temporal path \(P\) shown in equation 8 belongs to \(S\). Therefore there is an index \(j\) with \(l_{s}\leq j\leq r_{s}\) or \(l_{e}\leq j\leq r_{e}\) such that \(u_{j}\) or \(v_{j}\) belongs to \(S\), which implies that \(I_{j}\in A\) and \(I_{j}\) covers \(C\). Thus, \(A\) covers \(\Gamma\).

Now we prove that the pathwidth of the underlying graph \(G_{\downarrow}=(V,E^{\prime})\) of the temporal graph \(G=(V,E,|\Gamma|\times t)\) is bounded by \(2k+6\). We refer to an edge \((u_{m_{a}},u_{m_{b}},\theta)\) in a path of the form shown in equation 8 as a _crossing edge_. Figure 6 shows a graph \(G^{\prime}\) of which \(G_{\downarrow}\) is a subgraph. Now we give a path decomposition \((P,\beta)\) for the graph \(G_{\downarrow}\) in which the width of the decomposition is at most \(2k+6\). Let \(V(P)=\{a_{1},a_{2},\ldots,a_{m}\}\) and \(E(P)=\{(a_{1},a_{2}),\ldots,(a_{m-1},a_{m})\}\). For \(i\in[n]\), let \(l(i)\) be the largest integer such that the starting point of the interval \(I_{m_{l(i)}}\in SP\) is before the starting point of interval \(I_{i}\). Now we define \(\beta(a_{i})\) as follows:

\[\beta(a_{i})=\{u_{i},v_{i},u_{i+1},v_{i+1},s,z\}\cup\{u_{m_{l}}|l\geq l(i)\text{ and }l\leq l(i)+2k\}\]

Figure 5: Demonstration of one step of the reduction in the proof of Theorem 5.7. The figure shows the \((s,z,t)\)-temporal path in the layer \(G_{j\times t}\). The time label for all edges is \(j\times t\).

Figure 6: Illustration of the graph \(G^{\prime}\) which is used to show that the output of the reduction from Theorem 5.7 has bounded pathwidth. The underlying graph \(G_{\downarrow}\) is a subgraph of \(G^{\prime}\).
**Lemma 5.9**.: _For any \(u_{q}\) and \(i\), \(j\), \(l\) such that \(i<j<l\), if \(u_{q}\in\beta(a_{i})\) and \(u_{q}\in\beta(a_{l})\), we have \(u_{q}\in\beta(a_{j})\)._ Proof.: If \(I_{q}\notin SP\) then it is clear that \(u_{q}\) only appears in \(\beta(a_{q-1})\) and \(\beta(a_{q})\). Now suppose that \(I_{1}\in SP\) and \(q=m_{p}\). Since \(u_{m_{p}}\in\beta(a_{i})\) we have \(m_{p}\leq l(i)+2k\), also \(l(l)\leq m_{p}\) since \(m_{p}\in\beta(a_{l})\). As a result we have \(m_{p}\leq l(i)+2k\leq l(j)+2k\) and \(l(j)\leq l(l)\leq m_{p}\) which implies that \(u_{q}\in\beta(a_{j})\). For any \(v_{i}\in V\) it is clear that \(v_{i}\) just belongs to the two sets \(\beta(a_{i-1})\) and \(\beta(a_{i})\). Also, \(s\) and \(z\) are present in all the sets. Therefore, by Lemma 2.1 we could say that the third property of path decomposition is satisfied. So, it is sufficient to show that for every edge \((u,v)\in E(G_{\downarrow})\) there exists \(i\in[n]\) such that \(\{u,v\}\subseteq\beta(a_{i})\). If the edges are not crossing edges, then there are three types of edges \((u_{i},v_{i})\), \((u_{i},u_{i+1})\), and \((v_{i},v_{i+1})\) which satisfy the condition by the definition of \(\beta(a_{i})\). If \(e=(u_{i},u_{j})\) is a crossing edge, then \(I_{i}\in SP\) and \(I_{j}\in SP\), so let \(m_{p}=i\) and \(m_{q}=j\). Since this edge corresponds to a segment \(C\) such that \(s(C)\in I_{m_{p}}\) and \(e(C)\in I_{m_{q}}\) we could conclude that \(m_{q}\leq m_{p}+2k\) which implies that \(u_{i},u_{j}\subseteq\beta(a_{i})\). Also, the cardinality of all sets \(\beta(a_{i})\) is \(2k+7\) which implies that the width of \((P,\beta)\) is \(2k+6\). Therefore the pathwidth of the underlying graph \(G_{\downarrow}\) is at most \(2k+6\). The significance of this result is that if one hopes to design efficient algorithms for the \((s,z,t)\)-Temporal Separator with bounded pathwidth one is faced with an obstacle of resolving the hardness of DISC-SC-\(k\) problem, as stated in the following theorem. **Theorem 5.10**.: _If the \((s,z,t)\)-Temporal Separator problem on temporal graphs with bounded pathwidth is solvable in polynomial time then the DISC-SC-\(k\) problem is solvable in polynomial time._ ## 6 Conclusions In this work, we defined the \((s,z,t)\)-Temporal Separator problem, generalizing the \((s,z)\)-Temporal Separator problem. We showed that \((s,z)\)-Temporal Separator and \((s,z,t)\)-Temporal Separator problems could be approximated within \(\tau\) and \(\tau^{2}\) approximation ratio, respectively, in a graph with lifetime \(\tau\). We also presented a lower bound \(\Omega(\log(n)+\log(\tau))\) for polynomial time approximability of \((s,z,t)\)-Temporal Separator assuming that \(\mathcal{NP}\not\subset\textsc{Dtime}(n^{\log\log n})\). Then we considered special classes of graphs. We presented two efficient algorithms: one for temporal graphs \(G\) with \(bw(G_{\downarrow})\leq 2\) and one for temporal graphs \(G\) with \(G_{\downarrow}\setminus\{s,z\}\) being a tree. The question of whether there is a polynomial-time algorithm to compute a minimum \((s,z,t)\)-temporal separator in a temporal graph of bounded treewidth remains an interesting open problem. However, we showed a reduction from the DISC-SC-\(k\) problem to \((s,z,t)\)-Temporal Separator when the pathwidth of the underlying graph is bounded by a constant number. 
Therefore, designing efficient algorithms for bounded treewidth graphs encounters serious obstacles, such as making progress on the open problem of the hardness of DISC-SC-\(k\). Another interesting direction of future research is to consider temporal separator problems with the additional restriction of "balancedness", as discussed at the end of Section 3.
2309.13601
FaceGemma: Enhancing Image Captioning with Facial Attributes for Portrait Images
Automated image caption generation is essential for improving the accessibility and understanding of visual content. In this study, we introduce FaceGemma, a model that accurately describes facial attributes such as emotions, expressions, and features. Using FaceAttdb data, we generated descriptions for 2000 faces with the Llama 3 - 70B model and fine-tuned the PaliGemma model with these descriptions. Based on the attributes and captions supplied in FaceAttDB, we created a new description dataset where each description perfectly depicts the human-annotated attributes, including key features like attractiveness, full lips, big nose, blond hair, brown hair, bushy eyebrows, eyeglasses, male, smile, and youth. This detailed approach ensures that the generated descriptions are closely aligned with the nuanced visual details present in the images. Our FaceGemma model leverages an innovative approach to image captioning by using annotated attributes, human-annotated captions, and prompt engineering to produce high-quality facial descriptions. Our method significantly improved caption quality, achieving an average BLEU-1 score of 0.364 and a METEOR score of 0.355. These metrics demonstrate the effectiveness of incorporating facial attributes into image captioning, providing more accurate and descriptive captions for portrait images.
Naimul Haque, Iffat Labiba, Sadia Akter
2023-09-24T10:30:22Z
http://arxiv.org/abs/2309.13601v2
# Face-Att: Enhancing Image Captioning with Facial Attributes for Portrait Images ###### Abstract Automated image caption generation is a critical area of research that enhances accessibility and understanding of visual content for diverse audiences. In this study, we propose the Face-Att model, a novel approach to attribute-focused image captioning that emphasizes the accurate depiction of facial attributes within images. Face-Att automatically detects and describes a wide range of attributes, including emotions, expressions, pointed noses, white skin tones, hair textures, attractiveness, and approximate age ranges. Leveraging deep learning techniques, we explore the impact of different image feature extraction methods on caption quality and evaluate our model's performance using metrics such as BLEU and METEOR. Our Face-Att model leverages annotated attributes of portraits as supplementary prior knowledge for our portrait images before captioning. This innovative addition yields a subtle yet discernible enhancement in the resulting scores, exemplifying the potency of incorporating additional attribute vectors during training. Furthermore, our research contributes to the broader discourse on ethical considerations in automated captioning. This study sets the stage for future research in refining attribute-focused captioning techniques, with a focus on enhancing linguistic coherence, addressing biases, and accommodating diverse user needs. **Keywords: Image captioning, Facial attributes, Portrait images, Computer vision, Natural language processing, Deep neural networks, VGG-Face model, ResNet50 model, InceptionV3, LSTM model, Face-Att, BLEU score, METEOR score, Linguistic coherence** ## 1 Introduction Image captioning is a challenging interdisciplinary task, merging Computer Vision [1] and Natural Language Processing (NLP) [2] techniques. Its primary goal is to generate descriptive and contextually relevant captions for images using sophisticated Deep Learning [3] models. These models must extract meaningful visual features and understand the semantic context to produce informative and coherent human-readable captions. Image captioning has practical applications, such as aiding the visually impaired [4, 5] and enhancing human-computer interactions, making it a pivotal AI research area. Significant advancements in automatic image captioning stem from Deep Neural Networks and large captioning datasets. These networks typically produce factual image descriptions. Recent research extends this by detecting emotional and relational aspects in images and enriching captions with emotive features for more engaging descriptions. In summary, image captioning is essential for advancing AI technology and facilitating seamless human-machine interactions by bridging the gap between visual content and human language. In the realm of modern Image Captioning, an encoder-decoder paradigm [6, 7, 8, 9] is commonly adopted. This top-down approach begins with a Convolutional Neural Network (CNN) [10] model performing image content encoding, followed by a Long Short-Term Memory (LSTM) [39] model which is responsible for generating the image caption through decoding. So, after reviewing the current state of the art, we have decided to develop our Facial Attribute Image Captioning Model, Face-Att, based on this paradigm. 
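To make this encoder-decoder paradigm concrete, the snippet below gives a minimal Keras sketch of a CNN-encoder / LSTM-decoder captioning model. It is illustrative only and is not the Face-Att architecture described later; the feature dimension, vocabulary size, caption length, and layer widths are assumptions chosen for the example.

```python
# Minimal sketch of the CNN-encoder / LSTM-decoder (merge-style) captioning pattern.
# Sizes (2048-dim image features, 5000-word vocabulary, captions of length 30) are
# illustrative assumptions, not values taken from the paper.
from tensorflow.keras.layers import (Input, Dense, Embedding, LSTM,
                                     RepeatVector, Concatenate)
from tensorflow.keras.models import Model

vocab_size, max_len, feat_dim = 5000, 30, 2048

img_in = Input(shape=(feat_dim,))                       # pre-extracted CNN features
img_vec = Dense(256, activation="relu")(img_in)
img_seq = RepeatVector(max_len)(img_vec)                # repeat features per time step

cap_in = Input(shape=(max_len,), dtype="int32")         # token ids of the partial caption
cap_emb = Embedding(vocab_size, 256)(cap_in)
cap_seq = LSTM(256, return_sequences=True)(cap_emb)

merged = Concatenate()([img_seq, cap_seq])              # fuse image and text streams
decoded = LSTM(512)(merged)
out = Dense(vocab_size, activation="softmax")(decoded)  # next-word distribution

model = Model(inputs=[img_in, cap_in], outputs=out)
```

In this merge-style design, pre-extracted CNN features are repeated across time steps and fused with the partial caption before the decoder predicts the next word.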
Several image captioning systems have addressed facial expressions and emotions, but facial attribute captioning goes further by capturing and describing specific physical characteristics beyond emotions. However, to the best of our knowledge, despite the presence of some facial attribute recognition models, there is no image captioning system that can generate captions based on the attributes of a subject's face from an image. This is why we have been inspired to work on building a model of Image Captioning with Facial Attributes, named Face-Att. Our suggested method, called Face-Att, intends to automatically recognize and describe a wide variety of facial attributes in these images by utilizing the capabilities of Computer Vision and Natural Language Processing.

Here are the key points of our contribution:

1. Creation of a comprehensive dataset (FaceAttDB).
5. **Performance Evaluation**: Assessment of the Face-Att model's efficacy.
6. **Linguistic Analysis**: In-depth linguistic examination of generated captions.

## 2 Related work

Image captioning for facial images is a burgeoning field at the crossroads of Computer Vision and Natural Language Processing (NLP), offering opportunities for improved human-computer interaction, emotional context interpretation, and assistive technologies. While research on image captioning with facial attributes is limited, related work exists in general image captioning, facial expression analysis, and face recognition. This review highlights key contributions in these areas.

Deep Face Recognition [17]: Parkhi et al. (2015) introduced a deep neural network architecture for face recognition, handling variations in lighting and pose. However, it requires extensive training data and computational resources.

SENTI-ATTEND [18]: Nezami et al. (2018) integrated sentiment analysis into image captioning, generating emotionally meaningful captions using attention mechanisms.

GroupCap [19]: Fuhai Chen et al. (2018) proposed a framework for group-based image captioning, considering relationships among images to enhance caption coherence and diversity.

Face-Cap [20]: Nezami et al. (2019) introduced Face-Cap, combining facial expression analysis with caption generation, outperforming traditional methods.

Facial Expression Sentence (FES) Generation [21]: Hong et al. (2019) associated facial action units with natural language descriptions, capturing facial expressions in image captions.

Image Captioning using Facial Expression and Attention [22]: In 2020, the authors presented FACE-CAP and FACE-ATTEND, integrating facial expression features into caption generation for emotionally resonant captions.

Facial Recognition for Identity Cards [23]: Usgan et al. (2020) improved facial recognition for electronic identity cards, enhancing identification accuracy.

Entity-Aware News Image Captioning [24]: Tran et al. (2020) integrated named entity recognition with transformer-based caption generation for more informative descriptions.

BORNON [25]: In 2021, a Transformer-based approach improved Bengali image captioning, contributing to non-English language image captioning.

Emotion-Based Caption Generation [26]: Priya S et al. (2022) captured emotions in images using CSPDenseNet and BiLSTM with self-attention, enriching captions with emotional cues.

Explaining Emotional Attitude [27]: Bisikalo et al. (2022) explored deep learning models' ability to comprehend emotional attitudes, using a novel dataset for emotion annotation.
Object-Centric Unsupervised Image Captioning [28]: In 2022, an unsupervised approach focused on objects within images to generate contextually relevant captions. General Facial Representation Learning [29]: A 2022 study integrated visual and linguistic cues for comprehensive facial representation, aiding facial attribute prediction and emotion recognition. Fair Contrastive Learning for Facial Attribute Classification [30]: Park et al. (2022) addressed biased learning in facial attribute classification, achieving fairer results. EDUVI [31]: In 2023, EDUVI combined Visual Question Answering and Image Captioning to enhance primary-level education through interactive and informative mechanisms. While these studies contribute significantly to image captioning, a notable gap exists in the exploration of facial attributes in portrait image captioning, presenting an opportunity for future research, such as the proposed Face-Att model. ## 3 Dataset Our dataset comprises 2,000 curated portrait images, sourced from the CelebA dataset [32], which boasts over 200,000 celebrity images with 10,177 unique identities and various facial attributes. CelebA supports numerous computer vision tasks. Each image in our dataset is paired with five English and five Bangla captions. The primary goal behind our dataset creation is to advance the field of facial attribute captioning, specifically focusing on portrait images and enabling multilingual caption generation. For ease of use in training and evaluation, all images in the BiLingualFaceCaption dataset are stored in a single folder. Additionally, to establish a clear association between each image and its respective captions, we have prepared an accompanying Excel sheet. This sheet contains the filenames of each image along with their corresponding Bangla and English captions. The captions in our dataset are generated based on the attribute annotations available in the CelebA dataset. CelebA provides a rich set of attribute labels covering various aspects such as age, gender, expression, hair color, nose shape, and skin complexion. Leveraging these attributes, we have crafted captions that accurately describe the visual characteristics of the portrait images. It is important to note that while the BiLingualFaceCaption dataset currently comprises 2,000 images, each with five captions, the size of the dataset may be expanded in future work to enhance the diversity and robustness of the models trained on it which ensures its continued usefulness for advancing research in multilingual facial attribute captioning. ## 4 Pre-Preprocessing In the pre-processing phase of our dataset, we divide the process into two parts: **1.** Image Preprocessing **2.** Caption Preprocessing. ### Image Preprocessing Image preprocessing is vital to ensure our dataset's images are properly formatted for modeling tasks. We standardize sizes, adjust resolutions, and normalize colors to achieve uniformity, aiding feature extraction. We meticulously remove noise and artifacts for clarity. This sets the foundation for reliable model training. We convert images from BGR to RGB for compatibility and consistent processing. Precisely resizing images to model-specific dimensions optimizes compatibility. Reshaping resized images into 3D tensors aligns with model expectations, facilitating feature extraction and precise caption generation. This ensures our images conform to model input formats, enabling accurate feature extraction and captioning. 
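As a rough illustration of these steps, a preprocessing routine based on OpenCV and NumPy could look as follows. This is a minimal sketch; the 224x224 target size and the 0-1 normalization are assumptions for the example rather than values stated in this section.

```python
# Minimal sketch of the image preprocessing steps described above.
import cv2
import numpy as np

def preprocess_image(path, size=(224, 224)):
    img = cv2.imread(path)                          # OpenCV loads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)      # convert BGR -> RGB
    img = cv2.resize(img, size)                     # standardize spatial size
    img = img.astype("float32") / 255.0             # normalize pixel values
    return np.expand_dims(img, axis=0)              # add batch dim: (1, H, W, 3)
```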
### Caption preprocessing

In this section, we detail the crucial steps for seamlessly integrating textual data into our image captioning model. Our journey starts with the creation of linguistic dictionaries, one for English and one for Bangla captions. Each word is assigned a unique integer value, forming the basis for subsequent processing. Next, we delve into Tokenization, a process where sentences are meticulously divided into words and each word is mapped to an integer value. This prepares the textual data for model input. Padding comes next, ensuring consistent input dimensions by adding padding to tokenized sequences, preventing data loss, and maintaining uniformity in sequence length. Finally, we employ One-Hot Encoding to represent words as binary vectors. This step is pivotal, as it facilitates the input of words into the captioning model, enabling coherent and contextually relevant caption generation.

Figure 1: Sample facial image dataset from Large-scale CelebFaces Attributes (CelebA) Dataset [32]

## 5 Proposed methodology

The proposed methodology of our Image Captioning System with Facial Attributes for Portrait Images unfolds through five essential stages:

### Image preprocessing

In this stage, we prepare raw images for analysis by resizing them to specific dimensions, converting color spaces for compatibility, and ensuring uniformity in format. These steps lay the foundation for subsequent feature extraction.

### Image Feature Extraction

Here, we employ advanced techniques like convolutional neural networks (CNNs) and pre-trained models to extract meaningful visual features from images. These features capture essential information required for accurate image captioning. This section elucidates the extraction process using three distinct pre-trained deep learning models: VGGFace, ResNet50, and InceptionV3. VGGFace [17] is harnessed to capture intricate facial features with its deep convolutional layers, enabling precise recognition and description of facial attributes. ResNet50 [35], known for its deep and residual neural networks, extracts complex and hierarchical visual features, enhancing the model's understanding of image content. By utilizing InceptionV3 [38], we obtain a comprehensive view of the images through its multi-scale feature extraction, ensuring a holistic representation of visual information.

Figure 2: Sample facial image dataset from Large-scale CelebFaces Attributes (CelebA) Dataset [32]

### Caption preprocessing

Incorporated into our methodology, caption pre-processing bridges the worlds of language and imagery. Starting with creating dictionaries for English and Bangla captions, words gain numerical identities. Tokenization transforms words into numbers, while padding ensures a uniform narrative canvas. One-hot encoding then prepares the data for prediction.

### Our Learning Model: Face-Att

The Face-Att model is designed to seamlessly combine image features and textual information, leading to the generation of eloquent portrait captions. Our model architecture comprises three sequential sub-models: **The Image Model** (sequential), **The Language Model** (sequential_1), and **The Multi-Modal Model** (model_1). These sub-models are interconnected and collectively contribute to the overall architecture of the Face-Att model. The Image Model begins with a **Dense** layer that accepts the image features as input. This layer uses the **ReLU** activation function to introduce non-linearity and capture important patterns in the image data.
The Image Model starts with a Dense layer using ReLU activation to capture crucial image patterns. It compresses the data, reducing dimensions with 335,744 trainable parameters. This layer acts as a bottleneck with 128 output units. The RepeatVector layer replicates this output for caption matching, preparing it for fusion.

Figure 3: Image Feature Extraction at a glance

The second sub-model, the Language Model (sequential_1), has a more complex structure. It is designed to process the caption's linguistic context. It starts with an **Embedding** layer to map input sequences into a dense 128-dimensional vector space. Next, a 256-unit **LSTM** layer processes the sequence, capturing temporal dependencies. A **Time-Distributed** layer adds dense transformations to each time step, contributing 32,896 trainable parameters. Overall, the sequential_1 sub-model contains 536,704 trainable parameters.

The final sub-model, model_1, connects the previous two sub-models. It takes two inputs: the output from the **Embedding** layer of the Image Model (sequential) and the output from the **Dense** layer of the Language Model (sequential_1). These inputs are passed through their respective layers, and the outputs are then concatenated. The concatenated output is fed into two subsequent LSTM layers, one with 128 units and the other with 512 units. The model_1 sub-model has a total of 2,821,464 trainable parameters.

The overall Face-Att model is a combination of these interconnected sub-models, designed to capture intricate facial features and temporal dependencies in the input data. With a total of 2,821,464 trainable parameters, the model can be trained to learn complex relationships and patterns in face-related data. It demonstrates a comprehensive architecture for facial analysis tasks, showcasing the capability of deep learning models in facial feature extraction and analysis.

Figure 4: The structure of our model Face-Att

### Caption Prediction

In the final phase, we use the prepared images and pre-processed captions to predict and generate descriptive captions. The model's output is rigorously evaluated to assess the quality and relevance of the generated captions to the input images.

## 6 Training

The Face-Att model was trained on Google Colaboratory with GPU support from NVIDIA Tesla K80 and Tesla T4 GPUs. Python 3, along with libraries such as NumPy, Pandas, and OpenCV, was used. The dataset contained 2,000 images with 10,000 English and 10,000 Bangla captions, with 1,500 images and 7,500 captions per language used for training. The model employed Categorical Cross-Entropy loss and RMSprop optimization, with accuracy as the metric. Training analysis revealed ResNet50 as the optimal image feature extraction model, achieving 90.58 percent accuracy with a loss of 0.2131 over 200 epochs, suggesting approximately 150 epochs for practical training. The number of epochs, varying between 100 and 200, signifies the total number of times the model iterates over the entire training dataset. Table 1 presents the number of epochs utilized during the training of the Face-Att model while employing different image feature extraction models. For the VGGFace and ResNet50 models, English captioning was trained for both 100 and 200 epochs and Bangla captioning was trained for only 100 epochs, while only English captioning was trained for 100 epochs using InceptionV3.
\begin{table} \begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline **Language of the Captions** & **Epochs** & **Image Feature Extraction Model** \\ \hline \multirow{3}{*}{English} & 100 & VGGFace \\ & 100 & ResNet50 \\ & 100 & InceptionV3 \\ \hline Bangla & 100 & VGGFace \\ & 100 & ResNet50 \\ \hline \end{tabular} \end{table} Table 1: Epochs for Face-Att Model with Different Image Feature Extraction Models \begin{table} \begin{tabular}{p{85.4pt} p{85.4pt} p{85.4pt}} \hline **Captioning Language** & **Epochs** & **Feature Extraction Model** \\ \hline English & 100 & VGGFace \\ & 100 & ResNet50 \\ & 100 & InceptionV3 \\ \hline Bangla & 100 & VGGFace \\ & 100 & ResNet50 \\ \hline \end{tabular} \end{table} Table 2: Epochs for Face-Att Model with Different Image Feature Extraction Models In this context, we have provided a summarized overview of the training outcomes, as presented in Table 3 and 4. These tables encapsulate training loss and accuracy for each epoch, highlighting the influence of various image feature extraction techniques. However, it's important to note that our experimentation with Bangla captioning has been relatively limited compared to our comprehensive study of English captioning. Interestingly, our most optimal performance emerged when employing the ResNet50 model for the image feature extraction process, achieving a remarkable accuracy of 90.58% and a correspondingly low loss of 0.2131 over 200 epochs. This underscores the potent impact of leveraging ResNet50 in tandem with our model. However, the determination of an ideal number of epochs remains pivotal, as an excessive number may risk overfitting, while an insufficient number might result in stagnant accuracy and loss trends. To address this pivotal query, we proceed to delve into a comprehensive analysis of training loss and accuracy graphs presented in Figure 1 and Figure 2. The graphs indicate a noticeable increase in accuracy up to approximately 150 epochs when employing VGGFace as the basis for Image Feature Extraction, and around 100 epochs when using ResNet50. Similarly, the Loss plot reflects a decreasing trend until approximately 150 and 100 epochs for VGGFace and ResNet50 image feature extraction techniques, respectively. However, beyond these points, both accuracy and loss changes become marginal. Therefore, a practical choice for the epoch size could be around 150, optimizing time and resource utilization. \begin{table} \begin{tabular}{l c c c} \hline **Feature Extraction Model** & **Epoch** & **Loss** & **Accuracy** \\ \hline VGGFace & 100 & 0.9509 & 67.73\% \\ \hline ResNet50 & 100 & **0.3216** & **87.53\%** \\ \hline \end{tabular} \end{table} Table 4: Training Loss and Accuracy for Bangla Captioning \begin{table} \begin{tabular}{l c c c} \hline **Feature Extraction Model** & **Epoch** & **Loss** & **Accuracy** \\ \hline VGGFace & 100 & 0.5947 & 78.67\% \\ & 200 & 0.2623 & 89.29\% \\ \hline ResNet50 & 100 & 0.2701 & 89.38\% \\ & 200 & **0.2131** & **90.58\%** \\ \hline InceptionV3 & 100 & 0.5646 & 80.50\% \\ \hline \end{tabular} \end{table} Table 3: Training Loss and Accuracy for English Captioning ## 7 Result We performed the evaluation of our model Face-Att's predicted captions using BLEU and METEOR scores across various experimental scenarios. The obtained scores for 100 epochs are presented in Table 5, and the scores for 200 epochs are outlined in Table 6. 
**BLEU**

BLEU (Bilingual Evaluation Understudy) [41] is a metric for evaluating machine-generated translations by measuring the similarity between generated and reference text using n-gram overlap. The BLEU-n score is calculated as:

\[\text{BLEU}=\text{BP}\times\exp\left(\frac{1}{N}\sum_{n=1}^{N}\log p_{n}\right)\]

where BP is the Brevity Penalty, \(N\) is the maximum n-gram order (typically 4), and \(p_{n}\) is the precision of n-grams, calculated as:

\[p_{n}=\frac{\text{Matching n-grams}}{\text{Total n-grams}}\]

BP is calculated as:

\[BP=\begin{cases}1&\text{if }c>r\\ e^{(1-\frac{r}{c})}&\text{if }c\leq r\end{cases}\]

BLEU ranges from 0 to 1, with higher scores indicating better translations. Different BLEU variants (BLEU-1, BLEU-2, BLEU-3, BLEU-4) consider different n-gram orders.

**METEOR**

METEOR (Metric for Evaluation of Translation with Explicit Ordering) [42] is another metric for translation quality. It considers precision, recall, and a parameter \(\alpha\) to balance them:

\[\text{METEOR}=\frac{(1-\alpha)\times\text{precision}\times\text{recall}}{(1-\alpha)\times\text{precision}+\alpha\times\text{recall}}\]

where precision and recall involve n-gram matching, and \(\alpha\) typically equals 0.9. METEOR provides a comprehensive evaluation of translation quality, including word order and vocabulary differences.

Figure 5: Sample facial image dataset from Large-scale CelebFaces Attributes (CelebA) Dataset [32]

\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Language** & **Feature Extraction Model** & **BLEU-1** & **BLEU-2** & **BLEU-3** & **BLEU-4** & **METEOR** \\ \hline
\multirow{3}{*}{English} & VGGFace & 0.334 & **0.085** & **0.025** & **0.006** & **0.267** \\
 & ResNet50 & **0.341** & 0.072 & 0.018 & 0.003 & **0.267** \\
 & InceptionV3 & 0.305 & 0.060 & 0.014 & 0.001 & 0.208 \\ \hline
\multirow{3}{*}{Bangla} & VGGFace & 0.280 & 0.055 & 0.010 & 0.003 & 0.172 \\
 & ResNet50 & 0.291 & 0.045 & 0.006 & N/A & 0.169 \\ \hline \hline
\end{tabular}
\end{table} Table 5: Results of Face-Att model over **100 epochs**

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Feature Extraction Model** & **BLEU-1** & **BLEU-2** & **BLEU-3** & **BLEU-4** & **METEOR** \\ \hline
VGGFace & 0.354 & **0.087** & **0.024** & **0.006** & 0.274 \\
ResNet50 & **0.365** & 0.079 & 0.018 & 0.002 & **0.290** \\ \hline \hline
\end{tabular}
\end{table} Table 6: Results of Face-Att model over **200 epochs** (only for English Captioning)

## 8 Conclusion

In our research, we introduced the Face-Att model, designed for attribute-centric image caption generation, with a primary focus on highlighting facial attributes in images. Our work involved a thorough exploration of diverse image feature extraction techniques and training scenarios, leading to significant progress in our objective. Our experimental results showcased the Face-Att model's effectiveness in generating attribute-focused captions in both English and Bangla. We evaluated the quality of these captions using widely recognized metrics like BLEU and METEOR, revealing the potential of our approach, although there's room for improvement.

A noteworthy observation was the considerable impact of the chosen image feature extraction method on model performance. Specifically, the ResNet50 model consistently outperformed others, aligning perfectly with our model's core aim of accurately representing facial attributes in captions.
Our research contributes to the field of image captioning by addressing the specific challenge of attribute-focused caption generation, holding promise for applications in accessibility, image indexing, and educational materials. It represents a significant step toward enhancing automated image description capabilities, particularly in scenarios where highlighting specific object attributes is crucial.
2309.03412
From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models
Instruction tuning is essential for large language models (LLMs) to become interactive. While many instruction tuning datasets exist in English, there is a noticeable lack in other languages. Also, their effectiveness has not been well verified in non-English languages. We construct a Japanese instruction dataset by expanding and filtering existing datasets and apply the dataset to a Japanese pre-trained base model. We performed Low-Rank Adaptation (LoRA) tuning on both Japanese and English existing models using our instruction dataset. We evaluated these models from both quantitative and qualitative perspectives. As a result, the effectiveness of Japanese instruction datasets is confirmed. The results also indicate that even with relatively small LLMs, performances in downstream tasks would be improved through instruction tuning. Our instruction dataset, tuned models, and implementation are publicly available online.
Masahiro Suzuki, Masanori Hirano, Hiroki Sakaji
2023-09-07T00:14:37Z
http://arxiv.org/abs/2309.03412v2
# From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models ###### Abstract Instruction tuning is essential for large language models (LLMs) to become interactive. While many instruction tuning datasets exist in English, there is a noticeable lack in other languages. Also, their effectiveness has not been well verified in non-English languages. We construct a Japanese instruction dataset by expanding and filtering existing datasets and apply the dataset to a Japanese pre-trained base model. We performed Low-Rank Adaptation (LoRA) tuning on both Japanese and English existing models using our instruction dataset. We evaluated these models from both quantitative and qualitative perspectives. As a result, the effectiveness of Japanese instruction datasets is confirmed. The results also indicate that even with relatively small LLMs, performances in downstream tasks would be improved through instruction tuning. Our instruction dataset, tuned models, and implementation are publicly available online. Large Language Model (LLM), Instruction Dataset, Instruction Tuning, Japanese ## I Introduction Large language models (LLMs) have been making remarkable progress in performance and generalization in recent years. Various Transformer-based [1] language models, such as BERT [2], RoBERTa [3], and the GPT series [4, 5, 6], have demonstrated high performance derived from pre-training. Furthermore, since 2022, a large number of models, such as OPT [7], GPT-NeoX-20B [8], UL2 [9], PaLM [10], BLOOM [11], Pythia [12], and LLaMA series [13, 14], have emerged as models that show higher performance by scaling their size [15]. Although there is still difficulty in few-shot or zero-shot performance on unseen tasks, instruction tuning can address this issue [16]. Instruction tuning is a training method that improves the performance in unseen tasks by solving various tasks described via natural language instructions [16]. Starting with the enhancement of performance in various tasks by GPT-3 [6] under a few-shot setting given in natural language, there has been an increasing demand for responses in formats that are closer to question-answering or conversation, especially formats that are not similar to the pre-training data. An increasing number of datasets for instruction tuning and instruct-tuned models are being made available to the public. For instance, various datasets like FLAN [16], P3 [17], databricks-dolly-15k 1, and OASST1 [18] have been proposed and made public. As publicly available models, Flan-T5 [19] was constructed using FLAN and T0 was constructed using P3 respectively. Also, Dolly [20] is a model with instruction tuning applied to Pythia [12], while Vicuna [21] and Alpaca [22] are models with instruction tuning applied to LLaMA [13]. Footnote 1: [https://huggingface.co/datasets/databricks-dolly-15k](https://huggingface.co/datasets/databricks-dolly-15k) However, these models are not fully compatible with languages other than English. The datasets used for instruction tuning in Dolly, Alpaca, and Vicuna are only in English, making it difficult to gain the benefits of these models in languages other than English. Many instruction datasets have been constructed in English, and there are not many efforts to construct instruction datasets in languages other than English. 
While there are movements to construct instruction datasets in Chinese [23], most instruction dataset in non-English languages are built from outputs of models with licensing restrictions, such as translations of the Alpaca dataset [22] or the ShareGPT52K 2 constructed from ChatGPT outputs. In languages other than English, the scarcity of comprehensive instruction datasets means that the verification of instruction tuning effects is limited [24]. In Japanese, only some data from translated Alpaca [22] and OASST1 [18] exists, and there's a lack of dataset diversity, with quantitative evaluations of instruction tuning yet to be conducted. While constructing and evaluating datasets in languages other than English is a crucial step towards building language models that can interact in various languages, it's still very much in its early stages. Footnote 2: [https://huggingface.co/datasets/RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K) To tackle the issue of the lack of Japanese instruction dataset, the study [25] gathers various Japanese datasets to build an instruction dataset. While this dataset seems valuable, the effect of instruction tuning is only shown qualitatively and not quantitatively. Furthermore, the majority of this dataset consists of translation tasks. While it is considered that the translation tasks are effective when adapting English-based models to Japanese, these tasks may not be optimal for Japanese-based models. To apply the instruction dataset to a Japanese-based model, it is desirable to filter out the translation data and construct an instruction dataset consisting solely of Japanese. We construct an instruction dataset consisting solely of Japanese for instruction tuning based on a Japanese model by filtering and expanding the Japanese instruction dataset [25]. The constructed dataset contains about 2.5 million samples and 5 tasks, such as commonsense, summarization, reading comprehension, simplification, and correction. Using this dataset, which contains various tasks, we perform instruction tuning on both Japanese-based and English-based LLMs. For Japanese-based models, we conduct tuning using an instruction dataset without translation data, while for English-based models, we do using an instruction dataset that includes translation data. As a result of quantitative evaluation with the tuned model, we demonstrate that instruction tuning in Japanese improve the performance in downstream tasks, thereby illustrating the effectiveness of the Japanese instruction dataset. The following materials used in this study are available as open source. 
* Japanese instruction dataset: [https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla) * Tuned model (Stormy 10 epochs): [https://huggingface.co/izumi-lab/stormy-7b-10ep](https://huggingface.co/izumi-lab/stormy-7b-10ep) * Tuned model (LLaMA 7B 5 epochs): [https://huggingface.co/izumi-lab/llama-7b-japanese-lora-v0-5ep](https://huggingface.co/izumi-lab/llama-7b-japanese-lora-v0-5ep) * Implementation for training and evaluation: [https://github.com/retarfi/jallm](https://github.com/retarfi/jallm) Here are our main contributions: (1) We construct a Japanese instruction dataset, llvm-japanese-dataset-vanilla, for Japanese-based models; (2) We clarified the benefits of instruction tuning for Japanese and English models from evaluating with some Japanese downstream tasks; (3) Unlike previous research [16], we show that even with smaller model sizes, instruction tuning can lead to performance gains in downstream tasks. ## II Instruction Dataset We construct a Japanese instruction dataset without translation tasks. We use the llvm-japanese-dataset v0.1.0 [25] as a main data source for the Japanese instruction dataset and expand this dataset with additional Japanese datasets. The llvm-japanese-dataset v0.1.0 contains about 8.4 million instruction examples, of which more than 75 % (6,581,044) are constructed based on translation data. This dataset is intended to link English and Japanese and extract the knowledge learned in English for use in Japanese as well, considering that many LLMs like LLaMA show good performance in English. However, when it comes to Japanese-based models, they are usually pre-trained with Japanese corpora. The need for the English part of this dataset is relatively low because the part aimed to link English and Japanese. Therefore, we extract 1,811,964 data excluding translation tasks from the llvm-japanese-dataset v0.1.0. Furthermore, to expand the variety of datasets, we incorporated the Japanese Wikipedia Typo Dataset (Wikipedia Typo) [26] and the Japanese Question-Answering Corpus (JQAC) [27]. From the Wikipedia Typo and JQAC, we newly created 697,316 and 906 instruction entries respectively. Additionally, we addressed licensing issues present in version v0.1.0, and ultimately constructed a total of 2,463,624 instruction data entries, releasing it as llvm-japanese-dataset-vanilla v1.0.1 3. Figure 1 shows datasets and task classifications included in llvm-japanese-dataset-vanilla v1.0.1. Footnote 3: [https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla) Footnote 2: Originally written in Japanese. We use the instruction, input, and response included in llvm-japanese-dataset-vanilla v0.1.0, following the format below. * Prompt format with input Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. * ### Instruction: {Instruction} * ### Input: {Input} * ### Response: {Response} 2 * Prompt format with no input Below is an instruction that describes a task. Write a response that appropriately completes the request. ## Instruction: {Instruction} * ### Response: {Response} 2 ## 3 Instruction LoRA Tuning We perform Low-Rank Adaptation (LoRA) tuning [28] on two publicly available LLMs. In this section, we describe the base model and the process of LoRA tuning. 
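For illustration, the prompt format shown above can be assembled with a small helper function. This is a minimal sketch, not code from the released implementation; the argument names simply mirror the instruction, input, and response fields of the dataset.

```python
# Minimal sketch of the Alpaca-style prompt format reproduced in Section II.
def build_prompt(instruction: str, input_text: str = "", response: str = "") -> str:
    if input_text:
        header = ("Below is an instruction that describes a task, paired with an input "
                  "that provides further context. Write a response that appropriately "
                  "completes the request.")
        body = (f"### Instruction:\n{instruction}\n\n"
                f"### Input:\n{input_text}\n\n"
                f"### Response:\n{response}")
    else:
        header = ("Below is an instruction that describes a task. Write a response "
                  "that appropriately completes the request.")
        body = (f"### Instruction:\n{instruction}\n\n"
                f"### Response:\n{response}")
    return header + "\n\n" + body
```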
### _Models_ We use two models: a Japanese-based model and an English-based model. The models we use are the Japanese- Fig. 1: Datasets and task clusters used in llvm-japanese-dataset-vanilla v1.0.1. -based OpenCALM-7B (hereafter CALM) and the English-based LLaMA 7B. CALM is a model with 7 billion parameters released by CyberAgent 3. It is pre-trained on Japanese Wikipedia and Common Crawl using the GPT-NeoX architecture [8]. For the English-based model, we use the 7B model of LLaMA [13] (hereafter LLaMA 7B), which is released by Meta 4. Although LLaMA is trained in English and is not specialized for Japanese, it is capable of Japanese input and output. Even for LLaMA, we attempt to output in Japanese by conducting instruction tuning and evaluation experiments using Japanese contexts. Footnote 3: [https://huggingface.co/cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b) Footnote 4: Strictly speaking, although it was not initially open-source, it has become available under certain licenses Due to the differences in characteristics between the Japanese-based CALM and the English-based LLaMA 7B, we use llm-japanese-dataset-vanilla, which we constructed above, for CALM and llm-japanese-dataset for LLaMA 7B as training data. For tuning CALM, we use version v0.1.0 as the training data, which excludes the JQAC and Wikipedia Typo datasets. This is done to align with the model constructed in the literature [25], ensuring dataset consistency with the exception of not including English. We train LLaMA 7B on the entire llm-japanese-dataset v0.1.0, following the methods outlined in [25]. We adopt the same input format as described in [25]. From this point forward, the tuned CALM will be referred to as "Stormy," and the LLaMA 7B as "Instruct LLaMA 7B." ### _LoRA Tuning_ LLMs, having a large number of parameters, require GPU resources not only for pre-training but also for fine-tuning. In this study, we use LoRA [28] as a method for tuning LLMs without significantly reducing accuracy. In LoRA, only the difference between the initial and updated LLM parameters, represented with small-scale parameters, is calculated. Consider an example of updating the parameter matrix \(W_{0}\in\mathbb{R}^{d\times k}\) of a certain linear layer that LLM has. Instead of training \(W_{0}\) directly, initialize the difference \(\Delta W\in\mathbb{R}^{d\times k}\) to \(W_{0}\) with a zero matrix, update the difference \(\delta W\), and proceed with training by updating the parameters to \(W_{0}+\Delta W\). Here, we set \(\Delta W=BA\) where \(B\in\mathbb{R}^{d\times r}\) and \(A\in\mathbb{R}^{r\times k}\) are matrices of rank \(r\ll\min(d,k)\). This can reduce the number of learnable parameters from _dk_ to \((d+k)r\). The primary parameters utilized in the experiment are shown in Table I. For comparison, we also mention the model that Instruct LLaMA 13B [25], which was LoRA-tuned with llm-japanese-dataset v0.1.0. We used PEFT [29] and DeepSpeed ZeRO [30] for implementation. The code is available at [https://github.com/retarfi/jallm](https://github.com/retarfi/jallm). ## IV Evaluating Constructed Models We evaluate the tuned models both quantitatively and qualitatively. From the quantitative perspective, we evaluate from two perspectives. The first is accuracy derived from the likelihood of choices in text classification tasks with JNLI and MARC-ja. JNLI and MARC-ja are tasks from JGLUE [31]. Further details are described in Section IV-A. 
The second is perplexity using question-answering data that is not included in the dataset constructed in this study. From the qualitative perspective, we qualitatively evaluate the output for several prompts. The temperature parameter for generation is 0.0, and the repetition penalty [32] is 1.05 for CALM and Stormy and 1.0 for Instruct LLaMA 7B and LLaMA 7B. We use 5 prompts for input to the models, which are the same as those used in the literature [25]. We also conduct evaluation experiments on LLaMA 13B and Instruct LLaMA 13B, which was instruction tuned for LLaMA 13B, constructed in the study [25] as well. ### _Accuracy_ Another evaluation is performed by JNLI and MARC-ja included in JGLUE [31]. JNLI is a task to choose the relationship that the premise sentence shows to the sentence pair of the hypothesis from three options: entailment, contradiction, and neutral. MARC-ja is a task to choose either "positive" or "negative" in Japanese for product reviews and is constructed using the Japanese part of the Multilingual Amazon Reviews Corpus (MARC) [33]. In addition to these, JGLUE includes JCommonsenseQA, which questions common sense, and JSQuAD, which is an extraction task. However, these data are included in the llm-japanese-dataset v0.1.0, which is used for instruction LoRA tuning. Therefore, they were considered inappropriate as evaluation tasks and excluded. For the implementation of the experiment, we use the Japanese evaluation branch 5 of Stability-AI/Im-evaluation-harness [34]. Aligning with Im-evaluation-harness, we use the prompt version that achieves the best performance. We adopt v0.2 for Stormy and v0.3 for the others, such as CALM, Instruct LLaMA 7B, LLaMA 7B, Instruct LLaMA 13B, and LLaMA 13B. Detailed prompts are described in the Appendix. Footnote 5: [https://github.com/Stability-AI/Im-evaluation-harness/tree/jp-stable](https://github.com/Stability-AI/Im-evaluation-harness/tree/jp-stable) For the input prompt, we compare the likelihood of the strings of each task's choicesand take the highest one as the model's output. In JNLI, the three choices are entailment, contradiction, and neutral, and in MARC-ja, the two choices are "positive" and "negative" in Japanese, and the model outputs the choice with the highest likelihood of output. Therefore, outputs other than the choices are not considered. We evaluate for each of 1-shot, 2-shot, and 3-shot, which show one, two, or three examples in the input, respectively. ### _Perplexity_ Perplexity, as defined by [35], is the exponential of the average negative log-likelihood. The lower the value, the higher the probability that the words in the dataset are correctly output. Given a tokenized sequence \(X=(x_{0},x_{1},\cdots,x_{t})\), the perplexity of \(X\) is represented by Equation (1). \[\mathrm{Perplexity}(X)=\exp\left\{-\frac{1}{t}\sum_{i}^{t}\log p_{\theta}(x_{ i}|x_{<i})\right\} \tag{1}\] Here, \(\log p_{\theta}(x_{i}|x_{<i})\) is the log-likelihood of the \(i\)-th token given the preceding tokens \(x_{<i}\). In this study, we measure perplexity using the Japanese Visual Question Answering (VQA) dataset [36], which is not included in the Ilm-Japanese-dataset v0.1.0 used for tuning the language model. Although this VQA dataset is a question-answering task performed by looking at presented images, it is conjectured that models with a high probability of predicting the correct response sentence are more natural. We convert 793,664 question and answer pairs extracted from the VQA dataset into prompt format and input them. 
An example of the input is shown below. * Write a response to answer the following question. ### Question: What color is the airplane's body? ### Response: White It should be noted that the LLaMA-based model uses English for system messages and Japanese for the contexts of questions and responses. Therefore, following the literature [25], the above example is modified as follows. * Write a response to answer the following question. * ### Question: * What color is the airplane's body? ### Response: * White The calculation of perplexity is not performed on the input to the model and is only applied to the response. In other words, in the above example, perplexity is calculated only for the token corresponding to the output "White." ## V Results and Discussion ### _Quantitative Evaluation_ Table II shows the results of the evaluation experiments. In the evaluation by JNLI, the accuracy of Stormy was the highest across the 1-shot, 2-shot, and 3-shot settings. Even though the llm-japanese-dataset v0.1.0 does not include a dataset equivalent to recognizing textual entailment, the performance seems to have been improved by solving various tasks, as in [16]. The improvement in performance on tasks not present in the dataset through instruction tuning across various tasks aligns with the findings in the literature [16, 17]. The Japanese instruction dataset is valuable in that it provides, for a language other than English, a resource similar to existing English instruction datasets. The performance of Stormy and Instruct LLaMA 7B, obtained by instruction tuning CALM and LLaMA 7B, respectively, improved, showing the effect of instruction tuning. However, the effect of instruction tuning in LLaMA 13B was relatively small. This is likely because instruction tuning in Instruct LLaMA 13B was performed for only one epoch. When comparing the two Instruct LLaMA models with different numbers of parameters, even though there was a difference in the number of training epochs, Instruct LLaMA 7B showed a stronger effect from instruction tuning. This is considered to be because the smaller model size facilitates more effective training. It has been reported that larger model sizes result in better performance on downstream tasks [7, 10, 13]. The performance of Instruct LLaMA 13B might improve with more training epochs. In the evaluation by MARC-ja, instruction tuning brought no performance improvement in any of the 1-shot, 2-shot, and 3-shot settings, or even degraded performance. This phenomenon has also been reported in [16, 37]. Performance might be improved by adopting a wider variety of tasks as instruction data, as in [16]. Besides MARC-ja, there are other Japanese sentiment-related datasets that could be incorporated, such as the chABSA-dataset6 (ABSA stands for Aspect-Based Sentiment Analysis). Footnote 6: [https://github.com/chakki-works/chABSA-dataset](https://github.com/chakki-works/chABSA-dataset) The decrease in accuracy could be suppressed by additionally training on these datasets. Another possible reason why the performance did not improve in the LLaMA-based models is the input length used for instruction tuning in this study. While the LLaMA-based model itself can take inputs of up to 2,048 tokens and pre-training is performed at this length, in this study, the input length is limited to 256 tokens.
Therefore, for data with long inputs, the effect of instruction tuning may not have been demonstrated. Extending the input length of instruction tuning is left for future work. In the evaluation of perplexity using VQA, all the instruction-tuned models showed improved performance, with reduced perplexity due to tuning on instruction data. Language models adopting the decoder architecture are trained to increase the probability of correctly predicting the next token in the input, so perplexity decreases as training proceeds. However, the reduction in perplexity by instruction tuning might be attributed to differences in the input data. While a language model predicts the next token for consecutive sequences in pre-training, it predicts tokens sequentially in response to a given question in instruction tuning. Since the format of input and output in instruction tuning matches the question-answering in the VQA data used in this experiment, it can be inferred that the model became more accustomed to producing answers through instruction tuning, leading to a reduction (performance improvement) in perplexity. The improvement in perplexity was particularly noticeable in the LLaMA-based models. Even for models whose base language is not Japanese, such as the English-based LLaMA, training on instruction data that includes translation data appears to have created a link with Japanese and improved performance. Among the six models, the one with the highest perplexity and the worst performance was LLaMA 7B. This is thought to be because it is an English-based model and has fewer parameters than LLaMA 13B. On the other hand, the model that showed the best performance with the lowest perplexity was Stormy. The performance was improved by further instruction tuning of CALM, which is a Japanese-based model. Comparing CALM, LLaMA 7B, and LLaMA 13B, which were the base models for tuning, the Japanese-based CALM showed the highest performance. Regarding the effect of instruction tuning from the perspective of model size, the literature [16] reported that for models larger than 68B, the effects of instruction tuning were observed in downstream tasks. However, it also reported that, for models smaller than 8B, instruction tuning paradoxically degraded performance in downstream tasks. In the MARC-ja experiments in this study, no effect of instruction tuning was observed for any of the 7B and 13B models, while for JNLI, positive effects of instruction tuning were observed in all models. This effect was observed in both the Japanese-based CALM and the English-based LLaMA models. This suggests that, in non-English languages or when tuning English models to them, instruction tuning does not necessarily have negative effects for smaller models, and could even contribute to performance enhancement. In comparison with prior research [16, 38], our dataset covers fewer types of tasks, which might have constrained the performance improvement. For instance, when compared to FLAN [16], tasks like simplification and correction have been newly added, while tasks like natural language inference, sentiment, and paraphrase are lacking. In this study, although the experiments were conducted using the 5 task types shown in Figure 1, consistent results were observed even in a non-English language like Japanese. Expanding the variety of tasks will be a challenge for future research.
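For reference, a minimal sketch of the response-only perplexity of Equation (1), computed with a Hugging Face causal language model, is given below. It is illustrative rather than the authors' evaluation code; the model name and prompt are placeholders, and prompt positions are masked with -100 so the mean negative log-likelihood covers only the response tokens.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-7b")  # placeholder
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-7b")
model.eval()

def response_perplexity(prompt: str, response: str) -> float:
    """Equation (1) restricted to the response: prompt positions are ignored (-100),
    so the model's mean negative log-likelihood covers only the response tokens."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    input_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    labels = input_ids.clone()
    labels[:, :prompt_len] = -100  # assumes the prompt tokens form a prefix
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL over response tokens
    return torch.exp(loss).item()

# Hypothetical VQA-style prompt; the actual template follows [25].
print(response_perplexity(
    "Write a response to answer the following question.\n"
    "### Question:\nWhat color is the airplane's body?\n### Response:\n",
    "White"))
```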
Regarding the base language of the model, there was no difference in terms of performance trends between the Japanese-based model and the English-based model; performance improved in VQA and JNLI, but not in MARC-ja. The results show that using a Japanese instruction dataset can enhance performance in Japanese tasks for both English-based and Japanese-based models. Except for the fact that translation tasks were used in the instruction tuning data of the LLaMA-derived model, the other tasks used in Stormy and Instruct LLaMA are the same. How the translation task impacts both English-based and Japanese-based models remains a future research topic. As discussed in the results for JNLI, while instruction tuning is effective for unseen tasks, the inclusion of similar tasks in the dataset is considered to have a substantial impact on performance improvement [16]. Based on this, it can be inferred that by adding tasks related to natural language inference for JNLI and sentiment for MARC-ja, the performance on JNLI and MARC-ja tasks would improve more. The tasks evaluated in this study evaluate how much the model can output for a certain output (although it is consistent with the evaluation method in [6]), and do not consider the actual output of the model. For a more detailed comparison, it is also a future task to evaluate using actual outputs. In addition, it will be necessary in the future to evaluate from the perspective of human preference [39, 40], not just focusing on the task's correctness. ### _Qualitative Evaluation_ The following shows the responses of the models used in this study, such as Stormy, CALM, Instruct LLaMA 7B, and LLaMA 7B. For responses of Instruct LLaMA 13B and LLaMA 13B to the input shown below, please refer to the literature [25]. -- Comparison of Responses (1) -- ### **Input:** Who are you? 2 ### **Output (Sformy):** I am my wife, and I love her. 2 ### **Output (CALM):** I am Nick M. Kelemel. I have degrees in computer science and mathematics, and I am currently working as a researcher. 2 ### **Output (Instruct LLaMA 7B):** What are you doing? 2 ### **Output (LLaMA 7B):** Who are you? 2 -------------------------------- Comparison of Responses (2) -- ### **Input:** Please tell me how to get to Osaka from Tokyo. 2 ### **Output (Sformy):** Department from Tokyo by plane. When you arrive in Osaka, transfer to the Shinkansen and go to Nagoya. From Nagoya, take the Kinetsu Limited Express. 2 ### **Output (CALM):** Which is better, train or Shinkansen? * "Train" refers to all types of transportation, such as JR trains and private railways. * "Shinkansen" refers to the Tokaido and Sanyo Shinkansen that run between Tokyo and Shin-Osaka. * "Airplane" typically refers to the route between Haneda and Itami. * "Bus" refers to the Toei buses that operate in Tokyo. * "Car" includes not only private cars but also taxis and rental cars. * "Bicycle" includes electric-assisted bicycles. * "Walking," "jogging," and "strolling" are excluded. 2 ### **Output (Instruct LLaMA 7B):** Please tell me how to get to Tokyo from Osaka. 2 ### **Output (LLaMA 7B):** Please tell me how to get to Osaka from Tokyo. 2 -------------------------------- Comparison of Responses (3) -- ### **Input:** Where is the University of Tokyo? 2 ### **Output (Sformy):** The University of Tokyo is a national university located in Bunkyo-ku, Tokyo. 2 ### **Output (CALM):** The University of Tokyo is located at 7-3-1 Hongo, Bunkyo-ku, Tokyo. The latitude and longitude of the University of Tokyo are 35.676044, 135.869422. 
-------------------------------- ##Answer: What do you use to get to that address? (Maps or Google Map, etc.) -------------------------------- ##Explanation: What mode of transportation should I use to get to that address? ##Answer: Which is better to use, train or bus? ##Explanation: If you use the JR line, please get off at the "Yamanote Line." The JR line goes to "Tokyo" station. ##Answer: Where should I go from Tokyo station? ##Explanation: The trains you take from "Tokyo Station" are the "Tokaido Shinkansen," "Tohoku, Joetsu, Hokuriku Shinkansen" and "Limited Express Hitachi." ##Answer: Which is faster and cheaper, the Johan Line or the Shinkansen? ##Explanation: The Shinkansen is faster. However, the price is different. ##Answer: I don't have money, what should I do? ##Explanation: First, please prepare money. If you have a credit card, you can use it. ##Answer: How should I pay? ##Explanation: You can pay by card. VISA card 2 ## Output (Instruct LLaMA 7B):** The University of Tokyo is in Tokyo. 2 ## Output (LLaMA 7B):** Where is the University of Tokyo? 2 (Authors' note: The correct address is 7-3-1 Hongo, Bunkyo-ku, Tokyo.) * Comparison of Responses (4) **** ### **Input:** The phone is ringing. What should I do? 2 **** ### **Output (Sformy):** Should you answer the phone? Or should you put it on hold? 2 **** ### **Output (CALM):** Press the hold button, answer the phone, or call the answering service. When you hear the ringtone, reply within 10 seconds. If there is no response, try calling again. If that still doesn't work, proceed to the next step. * Press '0' to end the call. * Press '+' after the ringtone. This will enable the call. 2 **** ### **Output (Instruct LLaMA 7B):** The phone is ringing. What should I do? 2 **** ### **Output (LLaMA 7B):** The phone is ringing. What should I do? 2 **** * Comparison of Responses (5) **** **** ### **Input:** What are the three major festivals in Kyoto? 2 **** ### **Output (Sformy):** Aoi Festival, Gion Festival, Jidai Festival 2 **** **** ### **Output (CALM):** The three are Gion Festival, Aoi Festival, and Jidai Festival. They are also referred to as the three major festivals of Kyoto. 2 ### **Output (LLaMA 7B):** What are the three major festivals in Kyoto? 2 **** tasks in natural language format for instruction tuning datasets. Specifically, labeled datasets are applied with instructions written by humans explaining the task, explaining the direction of the output, and instructing the LLM to understand the task from the input [16, 38, 17]. Other construction methods include examples of constructing datasets using the output of ChatGPT or GPT-4 [22, 39, 49, 7].8, and there are few examples of constructing datasets manually [20]. Footnote 8: [https://huggingface.co/datasets/RyokoAI/ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K) Footnote 9: [https://github.com/tekurium1/GPTeacher](https://github.com/tekurium1/GPTeacher) ### _Tuning of LLMs_ Efficient tuning in LLMs with many parameters is attracting attention to adapt LLMs to various downstream tasks. In particular, LoRA [28] is widely applied to open-source LLMs. For example, Alpaca-LoRA [50] uses LoRA to tune LLaMA 7B as a lightweight tuning version of Alpaca [22]. Also, AdaLoRA [51] changes the value of the rank in LoRA. This adjustment occurs according to the layer to be applied. 
Other efficient tuning methods include adding an Adapter layer to the existing layers [52, 53, 54], and prompt tuning [55, 56], which fixes the weights of the pre-trained model and adds trainable parameters to the prompt instructions. ## VII Conclusion We constructed an instruction dataset for Japanese-based LLMs (Large Language Models). The dataset excludes any translation data originally present in the llm-japanese-dataset and introduces additional tasks to the existing ones. We performed LoRA tuning, using Japanese instruction data, on LLMs pre-trained in Japanese and in English, respectively. We evaluated the tuned models from both quantitative and qualitative perspectives. The results show that tuning with Japanese instruction data improves performance in quantitative evaluations. In particular, the results indicate that not only Japanese-based models but also English-based models can be tuned in Japanese using the Japanese instruction dataset. Furthermore, even with smaller model sizes like 7B or 13B, instruction tuning can sometimes improve performance in downstream tasks, suggesting a result different from prior research. Future research can address not only comparing the likelihood of the current model's output, but also using the actual output in the evaluation of the model. Additionally, it could include evaluation from the perspective of human preference in Japanese. ## Acknowledgment This work was supported in part by JSPS KAKENHI Grant Number JP21K12010 and JST PRESTO Grant Number JPMIPR2267.
2308.02425
Hypertension Detection From High-Dimensional Representation of Photoplethysmogram Signals
Hypertension is commonly referred to as the "silent killer", since it can lead to severe health complications without any visible symptoms. Early detection of hypertension is crucial in preventing significant health issues. Although some studies suggest a relationship between blood pressure and certain vital signals, such as Photoplethysmogram (PPG), reliable generalization of the proposed blood pressure estimation methods is not yet guaranteed. This lack of certainty has resulted in some studies doubting the existence of such relationships, or considering them weak and limited to heart rate and blood pressure. In this paper, a high-dimensional representation technique based on random convolution kernels is proposed for hypertension detection using PPG signals. The results show that this relationship extends beyond heart rate and blood pressure, demonstrating the feasibility of hypertension detection with generalization. Additionally, the utilized transform using convolution kernels, as an end-to-end time-series feature extractor, outperforms the methods proposed in the previous studies and state-of-the-art deep learning models.
Navid Hasanzadeh, Shahrokh Valaee, Hojjat Salehinejad
2023-07-31T00:09:23Z
http://arxiv.org/abs/2308.02425v1
# Hypertension Detection From High-Dimensional Representation of Photoplethysmogram Signals ###### Abstract Hypertension is commonly referred to as the "silent killer", since it can lead to severe health complications without any visible symptoms. Early detection of hypertension is crucial in preventing significant health issues. Although some studies suggest a relationship between blood pressure and certain vital signals, such as Photoplethysmogram (PPG), reliable generalization of the proposed blood pressure estimation methods is not yet guaranteed. This lack of certainty has resulted in some studies doubting the existence of such relationships, or considering them weak and limited to heart rate and blood pressure. In this paper, a high-dimensional representation technique based on random convolution kernels is proposed for hypertension detection using PPG signals. The results show that this relationship extends beyond heart rate and blood pressure, demonstrating the feasibility of hypertension detection with generalization. Additionally, the utilized transform using convolution kernels, as an end-to-end time-series feature extractor, outperforms the methods proposed in the previous studies and state-of-the-art deep learning models. _Clinical relevance_-- The findings of this study highlights the feasibility of hypertension detection using PPG signals. This could be useful for the early detection of high blood pressure and reducing the risk of hypertension going unnoticed, particularly using wearable devices such as smartwatches equipped with PPG sensors. ## I Introduction Hypertension, or high blood pressure (BP), is a common and dangerous condition that can lead to serious health problems, including heart failure and brain stroke [1]. It is estimated that 1.28 billion adults aged 30-79 years may have hypertension worldwide [2]. According to the Centers for Disease Control and Prevention, nearly half of adults in the United States may have hypertension [3]. People with hypertension are not often aware of it for years. Early detection of hypertension is critical to prevent serious health issues for people at risk. Regular BP checks are recommended for everyone, especially for those at risk, including people with a family history of hypertension, diabetes, or obesity. Early detection allows for early intervention, such as lifestyle changes and medication, which can help to manage hypertension and prevent further complications. However, cuff-based or wrist BP monitoring devices are not available for everyone. These devices are not convenient to use for many people, particularly the elderly. Manual regular BP monitoring is generally inconvenient and requires commitment. These challenges have prompted researchers to seek alternative methods in measuring BP and detecting hypertension [4]. Photoplethysmogram (PPG) is a signal collected from an optical sensor which shows fluctuations of the blood volume per heartbeat [5]. PPG has recently been investigated as an alternative for continuous BP monitoring without using a cuff. This is particularly of interest as the pattern of an invasive arterial blood pressure (ABP) signal is very similar to the PPG. Previous studies have shown that the properties of a PPG signal can indicate various characteristics of the cardiovascular system [6, 7], such as large artery stiffness index (LASI), systemic vascular resistance (SVR), arterial tone, total peripheral resistance, and pulse wave velocity (PWV). 
Therefore, by extracting PPG key points and relevant cardiovascular features and applying various machine learning algorithms, BP estimation may be possible [8, 9]. The performance of most PPG-based BP estimation methods has been reported as either very high or very low. Many studies reporting high performance used the UCI cuff-less BP estimation dataset [8], which comprises \(12,000\) PPG and arterial BP signal segments recorded from approximately \(1,000\) patients. However, this dataset does not provide any subject identifier (ID) for each PPG signal sample, and the preprocessing steps are not discussed in detail. As a subject may have more than one PPG sample in the dataset, this could lead to data and domain overlap in the training and validation phases. Hence, for methods developed based on this dataset, generalization to a completely unseen subject cannot be guaranteed. Other PPG datasets typically have a small number of samples [10], which limits proper generalization evaluation of machine learning methods for real-world scenarios. This is particularly important in training deep learning models such as recurrent neural networks [11], where generally a very large number of training samples are required. The difference in reported results and the lack of proven generalization on unseen subjects have caused some studies to cast doubt on the existence of any relationship between BP and PPG features [12, 13]. These studies suggest that the only feature relevant to BP might be the heart rate [14]. This paper addresses the problem of PPG-based BP estimation as a binary hypertension detection task. To this end, the MIMIC-III PPG-BP dataset is used where the train, validation, and test sets are completely separated based on patient IDs [15]. In order to have an end-to-end feature extraction and classification solution, an input PPG signal is projected to a high dimensional space using random convolutional kernels transform (ROCKET) [16]. The transform maps a time series with any length to a set of temporally-independent features. The extracted features from all the PPG signals are then used to train a classifier. Results show a better performance of the proposed method in comparison with manual feature extraction and state-of-the-art deep learning models. The results further support the relationship between PPG properties and hypertension1. Footnote 1: Our codes are available online: [https://github.com/navidhasanzadeh/Hypertension_PPG](https://github.com/navidhasanzadeh/Hypertension_PPG) ## II Method Feature extraction and classification with random convolution kernels, without training the kernels, is a novel method for time-series representation [16, 17]. This approach has demonstrated a promising performance in many time-series classification tasks such as in electroencephalogram (EEG) [18] and human activity recognition [19, 20]. It also has the potential to outperform deep neural networks in many scenarios such as where limited-imbalanced data is available. Figure 1 shows different steps of the proposed method for feature extraction from PPG signals and hypertension detection. Let \(\{(\mathbf{x}_{1},y_{1}),...,(\mathbf{x}_{N},y_{N})\}\) represents a set of \(N\) PPG signals where \(y_{n}\in\{0,1\}\) is the data class, with \(y=0\) represents normal and \(y=1\) representing hypertension. 
A set of \(K\) 1-dimensional random convolution kernels \((\mathbf{w}_{1},...,\mathbf{w}_{K})\) are generated where the length of each kernel is \(9\) and the weights are selected randomly from \(\{-1,2\}\) in such a way that each kernel contains three weights with a value of \(2\), and the total sum of the weights in each kernel is zero. Then, a set of dilation factors controls the spread of each kernel over an input PPG signal with length \(T\) selected from \(\{\lfloor 2^{i\cdot L_{max}/L^{\prime}}\rfloor|i\in(0,...,L^{\prime})\}\) where \(L^{\prime}\) is a constant, \(L_{max}=log_{2}\big{(}(T-1)/(|\mathbf{w}_{k}|-1)\big{)}\) and \(L\) is the number of constructed dilations, [16, 20]. This provides \(K\times L\) different combinations of kernels and dilations as \(\{\mathbf{w}_{k,l}|k\in(1,...,K),l\in(1,...,L)\}\). Each kernel is then convolved with a PPG signal \(\mathbf{x}\) as \[\mathbf{u}_{k,l}=\mathbf{x}\ast\mathbf{w}_{k,l}, \tag{1}\] for \(k\in(1,...,K)\), and \(l\in(1,...,L)\). Based on the quantiles of the convolution output and for each pair of kernel and dilation \((k,l)\), a set of bias terms \(\{b_{k,l,j}|j\in(1,...,J)\}\) is computed. Each bias term shifts the convolution output to generate a new representation as \[\mathbf{v}_{k,l,j}=\mathbf{u}_{k,l}-\mathbf{b}_{k,l,j}, \tag{2}\] where \(\mathbf{b}_{k,l,j}=\underbrace{\big{(}b_{k,l,j}\ \cdots\ b_{k,l,j}\big{)}}_{| \mathbf{u}_{k,l}|\ \text{times}}\) and \(j\in(1,...,J)\), \(k\in(1,...,K)\), and \(l\in(1,...,L)\). The total number of extracted features is a multiple of the number of output features. The output features, called proportion of positive values (PPV), are extracted as \[f_{k,l,j}=\frac{1}{|\mathbf{v}_{k,l,j}|}\sum_{i=1}^{|\mathbf{v}_{k,l,j}|} \mathbb{1}[v_{k,l,j,i}>0], \tag{3}\] for \(k\in(1,...,K)\), \(l\in(1,...,L)\), and \(j\in(1,...,J_{k,l})\) where \(J_{k,l}\) is the number of bias terms and \(\mathbb{1}[\cdot]\) is the indicator function. Finally, the extracted features can be represented as \(\mathbf{f}=(f_{1},...,f_{D})\) where \(D\) is the number of output features. The generated features \(\mathbf{f}_{n}\) for \(n\in\{1,...,N\}\) along with the corresponding labels are then used to separately train Ridge regression (RR) and Random Forest (RF) classifiers. ## III Experiments ### _Data_ In this study, the PPG signals and corresponding BPs derived from the MIMIC-III dataset [15] are used. The BP values are categorized into normal and hypertension classes based on ESC/ESH guidelines [21]. This dataset comprises \(3750\) subjects for training and \(625\) subjects for testing. The training and test datasets are standardized and divided at subject level to avoid any overlap. There are \(1,000,000\) PPG signals for training, \(250,000\) samples for validation, and \(250,000\) samples for testing. Each PPG sample has a duration of \(7\) seconds, and the sampling rate is \(125\) Hz. ### _Baseline Models_ The performance of the proposed method is compared with the following baseline models for the detection of hypertension using PPG signals. #### Iii-B1 Heart-rate-based Classifier Based on the quasi-periodic nature of the PPG signals, the automatic multiscale-based peak detection (AMPD) algorithm [22] is used to detect the maximum points. Subsequently, for each PPG signal the average heart rate is calculated. Then, a simple RR classifier is trained for hypertension detection. 
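A minimal sketch of this heart-rate baseline is shown below; it is illustrative only (scipy's find_peaks stands in for the AMPD detector, and the variable names and array shapes are assumptions rather than the authors' code):

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.linear_model import RidgeClassifier

FS = 125  # sampling rate of the PPG segments (Hz)

def average_heart_rate(ppg: np.ndarray) -> float:
    """Mean heart rate (bpm) from systolic peak intervals of one 7-s PPG segment."""
    # find_peaks is used here as a stand-in for the AMPD peak detector.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * FS))  # at most ~150 bpm
    if len(peaks) < 2:
        return 0.0
    return 60.0 * FS / np.mean(np.diff(peaks))

# X_train: (N, 875) array of 7-s segments; y_train: 0 = normal, 1 = hypertension.
# These names are placeholders for the MIMIC-III-derived splits described above.
def train_heart_rate_baseline(X_train: np.ndarray, y_train: np.ndarray) -> RidgeClassifier:
    hr = np.array([[average_heart_rate(x)] for x in X_train])
    clf = RidgeClassifier(class_weight="balanced")  # balanced class weights
    clf.fit(hr, y_train)
    return clf
```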
#### Iii-B2 Classification using PPG Morphological Features To extract PPG morphological features, the PPG signals are first segmented using a multi-observation hidden semi-Markov model (HSMM) [5]. Next, the key points of each PPG pulse, including PPG onset, maximum slope point, systolic peak, dicrotic notch, and diastolic peak, are extracted. Then, BP-related features including heart rate, pulse width, crest time, reflection index, large artery stiffness index (LASI), ratio of PPG pulse areas, and modified normalized pulse volume (mNPV) are derived [23, 8, 24]. An RR classifier and an RF classifier are individually trained and evaluated using the extracted features. Fig. 1: Random convolution kernels for feature extraction and hypertension detection from PPG. #### Iii-B3 Deep Neural Networks Two \(1\)-D variants of the ResNet-\(18\) and ResNet-\(34\) deep learning models are trained using the raw PPG signals in an end-to-end manner [25]. In these architectures, the \(2\)-D filters are replaced by \(1\)-D ones2. Footnote 2: [https://pypi.org/project/keras-resnet/](https://pypi.org/project/keras-resnet/) ### _Training and Validation Setup_ Since the dataset is imbalanced and only a small proportion, around \(20\%\), of the training set is labelled as hypertension, all the classifiers were trained using balanced class weights that consider the class frequencies in the input data. For RF classifiers, the model randomly under-samples each bootstrap sample to balance it3. For deep neural networks, a weighted loss function is used. Hyperparameters of the models were set using grid-search with respect to the F1-score. The Adam optimizer was used to train the deep learning models with an initial learning rate of \(10^{-3}\), weight decay of \(10^{-4}\), and batch size of \(32\) for a maximum of \(50\) epochs with early-stopping. For the MiniROCKET model, the number of kernels is \(84\) and the number of output features is \(9,996\). Footnote 3: [https://pypi.org/project/imbalanced-learn/](https://pypi.org/project/imbalanced-learn/) ### _Results_ In this section, the models implemented in this work are evaluated in terms of sensitivity, precision, and F1-score. #### Iv-D1 Classification Performance Analysis The performance results in Table I show that using only the heart-rate feature for hypertension detection leads to a very low sensitivity of \(50.3\%\). Although this sensitivity is slightly better than chance level, it is not sufficient on its own for detecting hypertension. The extraction of morphological features from PPG signals improves the sensitivity and F1-score for hypertension detection. These features provide more BP-related information than the heart-rate feature alone. Moreover, an RF classifier can discriminate hypertension from normal with significantly better accuracy than RR. The features LASI and reflection index (RI) have the highest Gini indices and importance levels among all the attributes for building the decision trees. These features are related to large artery stiffness and pulse reflection in arteries, respectively. Both ResNet models performed better than the models trained on manually-extracted PPG features. This indicates that end-to-end extraction of BP-related features from PPG is more robust against PPG signal variations. ResNet-18 obtained a sensitivity of \(67.8\%\) for the normal class and \(67.5\%\) for detecting hypertension. On average, it performed slightly better than ResNet-34, with an average F1-score of \(70.4\%\).
Among all the methods, end-to-end MiniROCKET feature extractor with a balanced RF classifier has achieved the best performance. This method can detect hypertension with an average F1-score of \(71.6\%\). The RR classifier with MiniROCKET obtained slightly less average performance but with a sensitivity of \(69.1\%\). The higher performance of the ResNet models and MiniROCKET indicates that there are BP-related features that are not visually observable on PPG pulses and cannot be extracted using manually designed algorithms. Moreover, PPG signals have different shapes among individuals, which makes it difficult to develop a robust manual algorithm for accurate extraction of all the BP-related features. #### Iv-D2 Relationship Between PPG and Hypertension Detection The results show that the relationship between high BP and extracted features from a PPG signal is not only limited to heart-rate. By evaluating a range of both manual-based and end-to-end methods on a dataset--where training and testing sets are entirely seperated at the subject level--the findings indicate that PPG signals can be effectively utilized for hypertension detection with a high generalization capability. #### Iv-D3 Impact of the Number of Training Samples In order to study the effect of the size of training set on the models' performance, the models were trained with \(6.25\%\), \(12.5\%\), \(25\%\), and \(50\%\) of the training samples. Figure 2 illustrates the performance for different models as the training set size increases. By using only \(50\%\) of the training data, the best F1-score among the models dropped from \(71.6\%\) to \(69.2\%\). Similarly, using \(25\%\) of the training samples resulted in a drop to \(65.0\%\). In all scenarios, MiniROCKET still outperforms all other methods. The trends in this plot indicate that a higher classification accuracy is anticipated by increasing the size of the training dataset, particularly the hypertension data class. \begin{table} \begin{tabular}{c|c|c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Classifier} & \multicolumn{2}{c|}{Sensitivity} & \multicolumn{3}{c}{Weighted Average} \\ & & Normal & Hypertension & Precision & Sensitivity & F1-score \\ \hline Heart-rate-based & Ridge Regression & 49.6\% & 53.1\% & 67.0\% & 50.3\% & 54.6\% \\ \hline \multirow{2}{*}{PPG Morphological Features} & Ridge Regression & 53.2\% & 61.4\% & 71.0\% & 54.9\% & 58.9\% \\ \cline{2-7} & Random Forest & 66.7\% & 60.1\% & 74.5\% & 65.3\% & 68.1\% \\ \hline \multirow{2}{*}{Deep Neural Networks} & ResNet-18 & 67.8\% & 67.5\% & 77.1\% & 67.7\% & 70.4\% \\ \cline{2-7} & ResNet-34 & 66.5\% & 68.4\% & 77.0\% & 66.9\% & 69.7\% \\ \hline \multirow{2}{*}{MiniROCKET} & Ridge Regression & 66.2\% & **69.1\%** & 77.6\% & 66.8\% & 69.4\% \\ \cline{2-7} & Random Forest & **69.3\%** & 68.5\% & **77.9\%** & **69.1\%** & **71.6\%** \\ \hline \hline \end{tabular} \end{table} TABLE I: Hypertension detection results by different methods ## IV Conclusion In this study, the feasibility of hypertension detection using PPG signals is assessed. Utilizing a dataset divided into training, validation, and test sets on a subject basis, the results suggest that the proposed end-to-end method with an RF classifier can achieve an F1-score of \(71.6\%\) on the test set. This demonstrates that hypertension detection from PPG signals is capable of generalizing to completely unseen samples. 
In addition to heart rate, PPG can provide many BP-related informative attributes that can enhance classification performance. The proposed method facilitates the early detection of hypertension using wearable technology.
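As an illustrative recap of the random-kernel features of Equations (1)-(3) in Section II, the following NumPy sketch computes proportion-of-positive-values (PPV) features for a single segment. It is a simplified stand-in for the MiniROCKET implementation actually used; the kernel count, dilations, and quantile levels below are chosen for readability and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernel(length: int = 9) -> np.ndarray:
    """Kernel with three weights equal to 2 and the rest -1, summing to zero."""
    w = -np.ones(length)
    w[rng.choice(length, size=3, replace=False)] = 2
    return w

def dilated_convolution(x: np.ndarray, w: np.ndarray, dilation: int) -> np.ndarray:
    """u = x * w with the kernel spread by the given dilation (Eq. 1), 'same' length."""
    span = (len(w) - 1) * dilation
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([np.dot(xp[i:i + span + 1:dilation], w) for i in range(len(x))])

def ppv_features(x: np.ndarray, n_kernels: int = 4, dilations=(1, 2, 4), n_bias: int = 3):
    """Proportion-of-positive-values features (Eqs. 2-3) for one time series."""
    feats = []
    for _ in range(n_kernels):
        w = random_kernel()
        for d in dilations:
            u = dilated_convolution(x, w, d)
            # Bias terms taken from quantiles of the convolution output.
            for b in np.quantile(u, np.linspace(0.25, 0.75, n_bias)):
                feats.append(np.mean((u - b) > 0))  # PPV, Eq. (3)
    return np.array(feats)

# Example: features of a synthetic 7-s PPG-like segment sampled at 125 Hz.
t = np.arange(7 * 125) / 125.0
print(ppv_features(np.sin(2 * np.pi * 1.2 * t)).shape)  # (n_kernels * len(dilations) * n_bias,)
```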
2309.15270
Consistent Query Answering for Primary Keys on Path Queries
We study the data complexity of consistent query answering (CQA) on databases that may violate the primary key constraints. A repair is a maximal consistent subset of the database. For a Boolean query $q$, the problem $\mathsf{CERTAINTY}(q)$ takes a database as input, and asks whether or not each repair satisfies $q$. It is known that for any self-join-free Boolean conjunctive query $q$, $\mathsf{CERTAINTY}(q)$ is in $\mathbf{FO}$, $\mathbf{LSPACE}$-complete, or $\mathbf{coNP}$-complete. In particular, $\mathsf{CERTAINTY}(q)$ is in $\mathbf{FO}$ for any self-join-free Boolean path query $q$. In this paper, we show that if self-joins are allowed, the complexity of $\mathsf{CERTAINTY}(q)$ for Boolean path queries $q$ exhibits a tetrachotomy between $\mathbf{FO}$, $\mathbf{NL}$-complete, $\mathbf{PTIME}$-complete, and $\mathbf{coNP}$-complete. Moreover, it is decidable, in polynomial time in the size of the query~$q$, which of the four cases applies.
Paraschos Koutris, Xiating Ouyang, Jef Wijsen
2023-09-26T21:05:59Z
http://arxiv.org/abs/2309.15270v1
# Consistent Query Answering for Primary Keys on Path Queries+ ###### Abstract We study the data complexity of consistent query answering (CQA) on databases that may violate the primary key constraints. A repair is a maximal consistent subset of the database. For a Boolean query \(q\), the problem \(\mathsf{CERTAINTY}(q)\) takes a database as input, and asks whether or not each repair satisfies \(q\). It is known that for any self-join-free Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is in **FO**, \(\mathsf{L}\)-complete, or \(\mathsf{coNP}\)-complete. In particular, \(\mathsf{CERTAINTY}(q)\) is in **FO** for any self-join-free Boolean path query \(q\). In this paper, we show that if self-joins are allowed, the complexity of \(\mathsf{CERTAINTY}(q)\) for Boolean path queries \(q\) exhibits a tetrachtotomy between **FO**, **NL**-complete, **PTIME**-complete, and \(\mathsf{coNP}\)-complete. Moreover, it is decidable, in polynomial time in the size of the query \(q\), which of the four cases applies. ## 1 Introduction Primary keys are probably the most common integrity constraints in relational database systems. Although databases should ideally satisfy their integrity constraints, data integration is today frequently cited as a cause for primary key violations, for example, when a same client is stored with different birthdays in two data sources. A _repair_ of such an inconsistent database instance is then naturally defined as a maximal consistent subinstance. Two approaches are then possible. In _data cleaning_, the objective is to single out the "best" repair, which however may not be practically possible. In _consistent query answering_ (CQA) [3], instead of cleaning the inconsistent database instance, we change the notion of query answer: the _consistent_ (or _certain_) _answer_ is defined as the intersection of the query answers over all (exponentially many) repairs. In computational complexity studies, consistent query answering is commonly defined as the data complexity of the following decision problem, for a fixed Boolean query \(q\): **Problem: \(\mathsf{CERTAINTY}(q)\)** **Input:** A database instance **db**. **Question:** Does \(q\) evaluate to true on every repair of **db**? For every first-order query \(q\), the problem \(\mathsf{CERTAINTY}(q)\) is obviously in \(\mathsf{coNP}\). However, despite significant research efforts (see Section 9), a fine-grained complexity classification is still largely open. A notorious open conjecture is the following. **Conjecture 1**.: _For each Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is either in **PTIME** or \(\mathsf{coNP}\)-complete._ On the other hand, for the smaller class of self-join-free Boolean conjunctive queries, the complexity landscape is by now well understood, as summarized by the following theorem. **Theorem 1** ([32]).: _For each self-join-free Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is in **FO**, \(\mathbf{L}\)-complete, or \(\mathsf{coNP}\)-complete, and it is decidable which of the three cases applies._ Abandoning the restriction of self-join-freeness turns out to be a major challenge. The difficulty of self-joins is caused by the obvious observation that a single database fact can be used to satisfy more than one atom of a conjunctive query, as illustrated by Example 1. Self-joins happen to significantly change the complexity landscape laid down in Theorem 1; this is illustrated by Example 2. 
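Before turning to the examples, the following small Python sketch makes these definitions concrete: it enumerates all repairs by brute force and checks whether a Boolean conjunctive query holds in each of them. It is purely illustrative and not code from the paper; the tuple encoding of facts is an assumption, and the instance at the bottom is our reading of Figure 1.

```python
from itertools import product
from collections import defaultdict

def repairs(db):
    """All repairs: choose exactly one fact from every block (same relation and key)."""
    blocks = defaultdict(list)
    for relation, key, value in db:
        blocks[(relation, key)].append((relation, key, value))
    return (set(choice) for choice in product(*blocks.values()))

def satisfies(instance, query, variables):
    """Brute-force check of a Boolean conjunctive query.
    query is a list of atoms (relation, term1, term2); terms are variables or constants."""
    adom = {c for (_, a, b) in instance for c in (a, b)}
    for assignment in product(adom, repeat=len(variables)):
        theta = dict(zip(variables, assignment))
        ground = lambda t: theta.get(t, t)
        if all((r, ground(s), ground(t)) in instance for (r, s, t) in query):
            return True
    return False

def certain(db, query, variables):
    """CERTAINTY(q): does every repair of db satisfy q?"""
    return all(satisfies(rep, query, variables) for rep in repairs(db))

# Example 1, assuming Figure 1 contains exactly these six facts.
db = {("R", "a", "a"), ("R", "a", "b"), ("R", "b", "a"), ("R", "b", "b"),
      ("S", "a", "b"), ("S", "b", "a")}
q1 = [("R", "x", "y"), ("R", "y", "x")]  # self-join
q2 = [("R", "x", "y"), ("S", "y", "x")]  # self-join-free counterpart
print(certain(db, q1, ["x", "y"]))       # True: a "yes"-instance
print(certain(db, q2, ["x", "y"]))       # False: a "no"-instance
```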
Self-join-freeness is a simplifying assumption that is also used outside CQA (e.g., [15, 4, 16]). **Example 1**.: Take the self-join \(q_{1}=\exists x\exists y(R(\underline{x},y)\wedge R(\underline{y},x))\) and its self-join-free counterpart \(q_{2}=\exists x\exists y(R(\underline{x},y)\wedge S(\underline{y},x))\), where the primary key positions are underlined. Consider the inconsistent database instance \(\mathbf{db}\) in Figure 1. We have that \(\mathbf{db}\) is a "no"-instance of \(\mathsf{CERTAINTY}(q_{2})\), because \(q_{2}\) is not satisfied by the repair \(\{R(\underline{a},a)\), \(R(\underline{b},b)\), \(S(\underline{a},b)\), \(S(\underline{b},a)\}\). However, \(\mathbf{db}\) is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{1})\). This is because every repair that contains \(R(\underline{a},a)\) or \(R(\underline{b},b)\) will satisfy \(q_{1}\), while a repair that contains neither of these facts must contain \(R(\underline{a},b)\) and \(R(\underline{b},a)\), which together also satisfy \(q_{1}\). **Example 2**.: Take the self-join \(q_{1}=\exists x\exists y\exists z(R(\underline{x},z)\wedge R(\underline{y},z))\) and its self-join-free counterpart \(q_{2}=\exists x\exists y\exists z(R(\underline{x},z)\wedge S(\underline{y},z))\). \(\mathsf{CERTAINTY}(q_{2})\) is known to be \(\mathbf{coNP}\)-complete, whereas it is easily verified that \(\mathsf{CERTAINTY}(q_{1})\) is in \(\mathbf{FO}\), by observing that a database instance is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{1})\) if and only if it satisfies \(\exists x\exists y(R(\underline{x},y))\). This paper makes a contribution to the complexity classification of \(\mathsf{CERTAINTY}(q)\) for conjunctive queries, possibly with self-joins, of the form \[q=\exists x_{1}\cdots\exists x_{k+1}(R_{1}(\underline{x_{1}},x_{2})\wedge R_{2 }(\underline{x_{2}},x_{3})\wedge\cdots\wedge R_{k}(\underline{x_{k}},x_{k+1})),\] which we call _path queries_. The primary key positions are underlined. As will become apparent in our technical treatment, the classification of path queries is already very challenging, even though it is only a first step towards Conjecture 1, which is currently beyond reach. If all \(R_{i}\)'s are distinct (i.e., if there are no self-joins), then \(\mathsf{CERTAINTY}(q)\) is known to be in \(\mathbf{FO}\) for path queries \(q\). However, when self-joins are allowed, the complexity landscape of \(\mathsf{CERTAINTY}(q)\) for path queries exhibits a tetrachtotomy, as stated by the following main result of our paper. **Theorem 2**.: _For each Boolean path query \(q\), \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), \(\mathbf{NL}\)-complete, \(\mathbf{PTIME}\)-complete, or \(\mathbf{coNP}\)-complete, and it is decidable in polynomial time in the size of \(q\) which of the four cases applies._ Comparing Theorem 1 and Theorem 2, it is striking that there are path queries \(q\) for which \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{NL}\)-complete or \(\mathbf{PTIME}\)-complete, whereas these complexity classes do not occur for self-join-free queries (under standard complexity assumptions). So even for the restricted class of path queries, allowing self-joins immediately results in a more varied complexity landscape. Let us provide some intuitions behind Theorem 2 by means of examples. Path queries use only binary relation names. A database instance \(\mathbf{db}\) with binary facts can be viewed as a directed edge-colored graph: a fact \(R(\underline{a},b)\) is a directed edge from \(a\) to \(b\) with color \(R\). 
A repair of \(\mathbf{db}\) is obtained by choosing, for each vertex, precisely one outgoing edge among all outgoing edges of the same color. We will use the shorthand \(q=RR\) to denote the path query \(q=\exists x\exists y\exists z(R(\underline{x},y)\wedge R(\underline{y},z))\). In general, path queries can be represented by words over the alphabet of relation names. Throughout this paper, relation names are in uppercase letters \(R\), \(S\), \(X\), \(Y\) etc., while lowercase letters \(u\), \(v\), \(w\) stand for (possibly empty) words. An important operation on words is dubbed _rewinding_: if a word has a factor of the form \(RvR\), then rewinding refers to the operation that replaces this factor with \(RvRvR\). That is, rewinding the factor \(RvR\) in the word \(uRvRw\) yields the longer word \(uRvRvRw\). For short, we also say that \(uRvRv\)_rewinds to_ the word Figure 1: An inconsistent database instance \(\mathbf{db}\). \(u\cdot Rv\cdot\underline{Rv}\cdot Rw\), where we used concatenation \((\cdot)\) and underlining for clarity. For example, \(TWITTER\) rewinds to \(TWI\cdot\underline{TWI}\cdot TTER\), but also to \(TWIT\cdot\underline{TWIT}\cdot TER\) and to \(TWIT\cdot T\cdot\underline{T}\cdot TER\). Let \(q_{1}=RR\). It is easily verified that a database instance is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{1})\) if and only if it satisfies the following first-order formula: \[\varphi=\exists x(\exists yR(\underline{x},y)\wedge\forall y(R(\underline{x},y)\rightarrow\exists zR(\underline{y},z))).\] Informally, every repair contains an \(R\)-path of length \(2\) if and only if there exists some vertex \(x\) such that every repair contains a path of length \(2\) starting in \(x\). Let \(q_{2}=RRX\), and consider the database instance in Figure 2. Since the only conflicting facts are \(R(\underline{1},2)\) and \(R(\underline{1},3)\), this database instance has two repairs. Both repairs satisfy \(RRX\), but unlike the previous example, there is no vertex \(x\) such that every repair has a path colored \(RRX\) that starts in \(x\). Indeed, in one repair, such path starts in \(0\); in the other repair it starts in \(1\). For reasons that will become apparent in our theoretical development, it is significant that both repairs have paths that start in \(0\) and are colored by a word in the regular language defined by \(RR\left(R\right)^{*}X\). This is exactly the language that contains \(RRX\) and is closed under the rewinding operation. In general, it can be verified with some effort that a database instance is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{2})\) if and only if it contains some vertex \(x\) such that every repair has a path that starts in \(x\) and is colored by a word in the regular language defined by \(RR\left(R\right)^{*}X\). The latter condition can be tested in **PTIME** (and even in **NL**). The situation is still different for \(q_{3}=ARRX\), for which it will be shown that \(\mathsf{CERTAINTY}(q_{3})\) is **coNP**-complete. Unlike our previous example, repeated rewinding of \(ARRX\) into words of the language \(ARR\left(R\right)^{*}X\) is not helpful. For example, in the database instance of Figure 3, every repair has a path that starts in \(0\) and is colored with a word in the language defined by \(ARR\left(R\right)^{*}X\). However, the repair that contains \(R(\underline{a},c)\) does not satisfy \(q_{3}\). Unlike Figure 2, the "bifurcation" in Figure 3 can be used as a gadget for showing **coNP**-completeness in Section 7. **Organization**. 
Section 2 introduces the preliminaries. In Section 3, the statement of Theorem 3 gives the syntactic conditions for deciding the complexity of \(\mathsf{CERTAINTY}(q)\) for path queries \(q\). To prove this theorem, we view the rewinding operator from the perspectives of regular expressions and automata, which are presented in Sections 4 and 5 respectively. Sections 6 and 7 present, respectively, complexity upper bounds and lower bounds of our classification. In Section 8, we extend our classification result to path queries with constants. Section 9 discusses related work, and Section 10 concludes this paper. ## 2 Preliminaries We assume disjoint sets of _variables_ and _constants_. A _valuation_ over a set \(U\) of variables is a total mapping \(\theta\) from \(U\) to the set of constants. **Atoms and key-equal facts**. We consider only \(2\)-ary _relation names_, where the first position is called the _primary key_. If \(R\) is a relation name, and \(s,t\) are variables or constants, then \(R(\underline{s},t)\) is an _atom_. An atom without variables is a _fact_. Two facts are _key-equal_ if they use the same relation name and agree on the primary key. **Database instances, blocks, and repairs**. A _database schema_ is a finite set of relation names. All constructs that follow are defined relative to a fixed database schema. Figure 3: An example database instance \(\mathbf{db}\) for \(q_{3}=ARRX\). Figure 2: An example database instance \(\mathbf{db}\) for \(q_{2}=RRX\). A _database instance_ is a finite set \(\mathbf{db}\) of facts using only the relation names of the schema. We write \(\mathsf{adom}(\mathbf{db})\) for the active domain of \(\mathbf{db}\) (i.e., the set of constants that occur in \(\mathbf{db}\)). A _block_ of \(\mathbf{db}\) is a maximal set of key-equal facts of \(\mathbf{db}\). Whenever a database instance \(\mathbf{db}\) is understood, we write \(R(\underline{c},*)\) for the block that contains all facts with relation name \(R\) and primary-key value \(c\). A database instance \(\mathbf{db}\) is _consistent_ if it contains no two distinct facts that are key-equal (i.e., if no block of \(\mathbf{db}\) contains more than one fact). A _repair_ of \(\mathbf{db}\) is an inclusion-maximal consistent subset of \(\mathbf{db}\). **Boolean conjunctive queries**. A _Boolean conjunctive query_ is a finite set \(q=\{R_{1}(\underline{x_{1}},y_{1}),\,\ldots,\,R_{n}(\underline{x_{n}},y_{n})\}\) of atoms. We denote by \(\mathsf{vars}(q)\) the set of variables that occur in \(q\). The set \(q\) represents the first-order sentence \[\exists u_{1}\cdots\exists u_{k}(R_{1}(\underline{x_{1}},y_{1})\wedge\cdots \wedge R_{n}(\underline{x_{n}},y_{n})),\] where \(\{u_{1},\ldots,u_{k}\}=\mathsf{vars}(q)\). We say that a Boolean conjunctive query \(q\) has a _self-join_ if some relation name occurs more than once in \(q\). A conjunctive query without self-joins is called _self-join-free._ **Consistent query answering**. For every Boolean conjunctive query \(q\), the decision problem \(\mathsf{CERTAINTY}(q)\) takes as input a database instance \(\mathbf{db}\), and asks whether \(q\) is satisfied by every repair of \(\mathbf{db}\). It is straightforward that for every Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{coNP}\). **Path queries**. 
A _path query_ is a Boolean conjunctive query without constants of the following form: \[q=\{R_{1}(\underline{x_{1}},x_{2}),R_{2}(\underline{x_{2}},x_{3}),\ldots,R_{k} (\underline{x_{k}},x_{k+1})\},\] where \(x_{1}\), \(x_{2}\),..., \(x_{k+1}\) are distinct variables, and \(R_{1}\), \(R_{2}\),..., \(R_{k}\) are (not necessarily distinct) relation names. It will often be convenient to denote this query as a _word_\(R_{1}R_{2}\cdots R_{k}\) over the alphabet of relation names. This "word" representation is obviously lossless up to a variable renaming. Importantly, path queries may have self-joins, i.e., a relation name may occur multiple times. Path queries containing constants will be discussed in Section 8. The treatment of constants is significant, because it allows moving from Boolean to non-Boolean queries, by using that free variables behave like constants. ## 3 The Complexity Classification We define syntactic conditions \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\) for path queries \(q\). Let \(R\) be any relation name in \(q\), and let \(u,v,w\) be (possibly empty) words over the alphabet of relation names of \(q\). \(\mathcal{C}_{1}\)**:**: Whenever \(q=uRvRv\), \(q\) is a prefix of \(uRvRvRv\). \(\mathcal{C}_{2}\)**:**: Whenever \(q=uRvRv\), \(q\) is a factor of \(uRvRvRv\); and whenever \(q=uRv_{1}Rv_{2}Rw\) for consecutive occurrences of \(R\), \(v_{1}=v_{2}\) or \(Rw\) is a prefix of \(Rv_{1}\). \(\mathcal{C}_{3}\)**:**: Whenever \(q=uRvRv\), \(q\) is a factor of \(uRvRvRv\). It is instructive to think of these conditions in terms of the rewinding operator introduced in Section 1: \(\mathcal{C}_{1}\) is tantamount to saying that \(q\) is a prefix of every word to which \(q\) rewinds; and \(\mathcal{C}_{3}\) says that \(q\) is a factor of every word to which \(q\) rewinds. These conditions can be checked in polynomial time in the length of the word \(q\). The following result has an easy proof. **Proposition 1**.: _Let \(q\) be a path query. If \(q\) satisfies \(\mathcal{C}_{1}\), then \(q\) satisfies \(\mathcal{C}_{2}\); and if \(q\) satisfies \(\mathcal{C}_{2}\), then \(q\) satisfies \(\mathcal{C}_{3}\)._ The main part of this paper comprises a proof of the following theorem, which refines the statement of Theorem 2 by adding syntactic conditions. The theorem is illustrated by Example 3. **Theorem 3**.: _For every path query \(q\), the following complexity upper bounds obtain:_ * _if_ \(q\) _satisfies_ \(\mathcal{C}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{FO}\)_;_ * _if_ \(q\) _satisfies_ \(\mathcal{C}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{NL}\)_; and_ * _if_ \(q\) _satisfies_ \(\mathcal{C}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{PTIME}\)_._ _Moreover, for every path query \(q\), the following complexity lower bounds obtain:_ * _if_ \(q\) _violates_ \(\mathcal{C}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{NL}\)_-hard;_ * _if_ \(q\) _violates_ \(\mathcal{C}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{PTIME}\)_-hard; and_ * _if_ \(q\) _violates_ \(\mathcal{C}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{coNP}\)_-complete._ **Example 3**.: The query \(q_{1}=RXRX\) rewinds to (and only to) \(RX\!\cdot\!\underline{RX}\!\cdot\!RX\) and \(R\!\cdot\!XR\!\cdot\!\underline{X}\!\cdot\!X\), which both contain \(q_{1}\) as a prefix. It is correct to conclude that \(\mathsf{CERTAINTY}(q_{1})\) is in \(\mathbf{FO}\). 
The query \(q_{2}=RXRY\) rewinds only to \(RX\!\cdot\!\underline{RX}\!\cdot\!RY\), which contains \(q_{2}\) as a factor, but not as a prefix. Therefore, \(q_{2}\) satisfies \(\mathcal{C}_{3}\), but violates \(\mathcal{C}_{1}\). Since \(q_{2}\) vacuuously satisfies \(\mathcal{C}_{2}\) (because no relation name occurs three times in \(q_{2}\)), it is correct to conclude that \(\mathsf{CERTAINTY}(q_{2})\) is \(\mathbf{NL}\)-complete. The query \(q_{3}=RXRY\) rewinds to \(RX\!\cdot\!\underline{RX}\!\cdot\!RYRY\), to \(RX\!RY\!\cdot\!\underline{RX}\!\cdot\!RY\), and to \(RX\!\cdot\!RY\!\cdot\!\underline{RY}\!\cdot\!RY=RXR\!\cdot\!YR\!\cdot\!Y\). Since these words contain \(q_{3}\) as a factor, but not always as a prefix, we have that \(q_{3}\) satisfies \(\mathcal{C}_{3}\) but violates \(\mathcal{C}_{1}\). It can be verified that \(q_{3}\) violates \(\mathcal{C}_{2}\) by writing it as follows: \[q_{3}=\underbrace{\varepsilon}_{u}\underbrace{\underline{RX}}_{Rv_{1}} \underbrace{\underline{RY}}_{Rv_{2}}\underbrace{\underline{RY}}_{Rw}\] We have \(X=v_{1}\neq v_{2}=Y\), but \(Rw=RY\) is not a prefix of \(Rv_{1}=RX\). Thus, \(\mathsf{CERTAINTY}(q_{3})\) is \(\mathbf{PTIME}\)-complete. Finally, the path query \(q_{4}=RXRXRYRY\) rewinds, among others, to \(RX\!\cdot\!RXRY\!\cdot\!\underline{RX}\!\cdot\!RY\), which does not contain \(q_{4}\) as a factor. It is correct to conclude that \(\mathsf{CERTAINTY}(q_{4})\) is \(\mathbf{coNP}\)-complete. ## 4 Reggexes for \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\) In this section, we show that the conditions \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\) can be captured by regular expressions (or regexes) on path queries, which will be used in the proof of Theorem 3. Since these results are within the field of _combinatorics of words_, we will use the term _word_ rather than _path query_. **Definition 1**.: We define four properties \(\mathcal{B}_{1}\), \(\mathcal{B}_{2a}\), \(\mathcal{B}_{2b}\), \(\mathcal{B}_{3}\) that a word \(q\) can possess: \(\mathcal{B}_{1}\)**:**: For some integer \(k\geq 0\), there are words \(v\), \(w\) such that \(vw\) is self-join-free and \(q\) is a prefix of \(w\left(v\right)^{k}\). \(\mathcal{B}_{2a}\)**:**: For some integers \(j,k\geq 0\), there are words \(u\), \(v\), \(w\) such that \(uvw\) is self-join-free and \(q\) is a factor of \(\left(u\right)^{j}w\left(v\right)^{k}\). \(\mathcal{B}_{2b}\)**:**: For some integer \(k\geq 0\), there are words \(u\), \(v\), \(w\) such that \(uvw\) is self-join-free and \(q\) is a factor of \(\left(uv\right)^{k}wv\). \(\mathcal{B}_{3}\)**:**: For some integer \(k\geq 0\), there are words \(u\), \(v\), \(w\) such that \(uvw\) is self-join-free and \(q\) is a factor of \(\left(uv\right)^{k}wv\). We can identify each condition among \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), \(\mathcal{C}_{3}\), \(\mathcal{B}_{1}\), \(\mathcal{B}_{2a}\), \(\mathcal{B}_{2b}\), \(\mathcal{B}_{3}\) with the set of all words satisfying this condition. Note then that \(\mathcal{B}_{1}\subseteq\mathcal{B}_{2a}\cap\mathcal{B}_{3}\). The results in the remainder of this section can be summarized as follows: * \(\mathcal{C}_{1}=\mathcal{B}_{1}\) (Lemma 1) * \(\mathcal{C}_{2}=\mathcal{B}_{2a}\cup\mathcal{B}_{2b}\) (Lemma 3) * \(\mathcal{C}_{3}=\mathcal{B}_{2a}\cup\mathcal{B}_{2b}\cup\mathcal{B}_{3}\) (Lemma 2) Moreover, Lemma 3 characterizes \(\mathcal{C}_{3}\setminus\mathcal{C}_{2}\). **Lemma 1**.: _For every word \(q\), the following are equivalent:_ 1. 
\(q\) _satisfies_ \(\mathcal{C}_{1}\)_; and_ 2. \(q\) _satisfies_ \(\mathcal{B}_{1}\)_._ **Lemma 2**.: _For every word \(q\), the following are equivalent:_ 1. \(q\) _satisfies_ \(\mathcal{C}_{3}\)_; and_ 2. \(q\) _satisfies_ \(\mathcal{B}_{2a}\)_,_ \(\mathcal{B}_{2b}\)_, or_ \(\mathcal{B}_{3}\)_._ **Definition 2** (First and last symbol).: For a nonempty word \(u\), we write \(\mathsf{first}(u)\) and \(\mathsf{last}(u)\) for, respectively, the first and the last symbol of \(u\) **Lemma 3**.: _Let \(q\) be a word that satisfies \(\mathcal{C}_{3}\). Then, the following three statements are equivalent:_ 1. \(q\) _violates_ \(\mathcal{C}_{2}\)_;_ 2. \(q\) _violates both_ \(\mathcal{B}_{2a}\) _and_ \(\mathcal{B}_{2b}\)_; and_ 3. _there are words_ \(u\)_,_ \(v\)_,_ \(w\) _with_ \(u\neq\varepsilon\) _and_ \(uvw\) _self-join-free such that either_ 1. \(v\neq\varepsilon\) _and_ \(\mathsf{last}(u)\cdot wuvu\cdot\mathsf{first}(v)\) _is a factor of_ \(q\)_; or_ 2. \(v=\varepsilon\)_,_ \(w\neq\varepsilon\)_, and_ \(\mathsf{last}(u)\cdot w\left(u\right)^{2}\cdot\mathsf{first}(u)\) _is a factor of_ \(q\)_._ The shortest word of the form (3a) in the preceding lemma is \(RSRRS\) (let \(u=R\), \(v=S\), and \(w=\varepsilon\)), and the shortest word of the form (3b) is \(RSRRR\) (let \(u=R\), \(v=\varepsilon\), and \(w=S\)). Note that since each of \(\mathcal{C}_{2}\), \(\mathcal{B}_{2a}\), and \(\mathcal{B}_{2b}\) implies \(\mathcal{C}_{3}\), it is correct to conclude that the equivalence between the first two items in Lemma 3 does not need the hypothesis that \(q\) must satisfy \(\mathcal{C}_{3}\). ## 5 Automaton-Based Perspective In this section, we prove an important lemma, Lemma 7, which will be used for proving the complexity upper bounds in Theorem 3. ### From Path Queries to Finite Automata We can view a path query \(q\) as a word where the alphabet is the set of relation names. We now associate each path query \(q\) with a nondeterministic finite automaton (NFA), denoted \(\mathsf{NFA}(q)\). **Definition 3** (\(\mathsf{NFA}(q)\)).: Every word \(q\) gives rise to a nondeterministic finite automaton (NFA) with \(\varepsilon\)-moves, denoted \(\mathsf{NFA}(q)\), as follows. **States:**: The set of states is the set of prefixes of \(q\). The empty word \(\varepsilon\) is a prefix of \(q\). **Forward transitions:**: If \(u\) and \(uR\) are states, then there is a transition with label \(R\) from state \(u\) to state \(uR\). These transitions are said to be _forward_. **Backward transitions:**: If \(uR\) and \(wR\) are states such that \(|u|<|w|\) (and therefore \(uR\) is a prefix of \(w\)), then there is a transition with label \(\varepsilon\) from state \(wR\) to state \(uR\). These transitions are said to be _backward_, and capture the operation dubbed rewinding. **Initial and accepting states:**: The initial state is \(\varepsilon\) and the only accepting state is \(q\). Figure 4 shows the automaton \(\mathsf{NFA}(RXRRR)\). Informally, the forward transitions capture the automaton that would accept the word \(RXRRR\), while the backward transitions capture the existence of self-joins that allow an application of the rewind operator. We now take an alternative route for defining the language accepted by \(\mathsf{NFA}(q)\), which straightforwardly results in Lemma 4. Then, Lemma 5 gives alternative ways for expressing \(\mathcal{C}_{1}\) and \(\mathcal{C}_{3}\). **Definition 4**.: Let \(q\) be a path query, represented as a word over the alphabet of relation names. 
We define the language \(\mathcal{L}^{\looplooploop}(q)\) as the smallest set of words such that 1. \(q\) belongs to \(\mathcal{L}^{\looplooploop}(q)\); and 2. _Rewinding:_ if \(uRvRw\) is in \(\mathcal{L}^{\looplooploop}(q)\) for some relation name \(R\) and (possibly empty) words \(u,v,w\), then \(uRvRvRw\) is also in \(\mathcal{L}^{\looplooploop}(q)\). That is, \(\mathcal{L}^{\looplooploop}(q)\) is the smallest language that contains \(q\) and is closed under rewinding.

**Lemma 4**.: _For every path query \(q\), the automaton \(\mathsf{NFA}(q)\) accepts the language \(\mathcal{L}^{\looplooploop}(q)\)._

**Lemma 5**.: _Let \(q\) be a path query. Then,_ 1. \(q\) _satisfies_ \(\mathcal{C}_{1}\) _if and only if_ \(q\) _is a prefix of each_ \(p\in\mathcal{L}^{\looplooploop}(q)\)_; 2. \(q\) _satisfies_ \(\mathcal{C}_{3}\) _if and only if_ \(q\) _is a factor of each_ \(p\in\mathcal{L}^{\looplooploop}(q)\)_._

Proof.: [\(\Longleftarrow\) in (1) and (2)] This direction is trivial, because whenever \(q=uRvRw\), we have that \(uRvRvRw\in\mathcal{L}^{\looplooploop}(q)\). We now show the \(\implies\) direction in both items. To this end, we call an application of the rule (b) in Definition 4 a _rewind_. By construction, each word in \(\mathcal{L}^{\looplooploop}(q)\) can be obtained from \(q\) by using \(k\) rewinds, for some nonnegative integer \(k\). Let \(q_{k}\) be a word in \(\mathcal{L}^{\looplooploop}(q)\) that can be obtained from \(q\) by using \(k\) rewinds.

[\(\Longrightarrow\) in (1)] We use induction on \(k\) to show that \(q\) is a prefix of \(q_{k}\). For the induction basis, \(k=0\), we have that \(q\) is a prefix of \(q_{0}=q\). We next show the induction step \(k\to k+1\). Let \(q_{k+1}=uRvRvRw\) where \(q_{k}=uRvRw\) is a word in \(\mathcal{L}^{\looplooploop}(q)\) obtained with \(k\) rewinds. By the induction hypothesis, we can assume a word \(s\) such that \(q_{k}=q\cdot s\).

* If \(q\) is a prefix of \(uRvR\), then \(q_{k+1}=uRvRvRw\) trivially contains \(q\) as a prefix;
* otherwise \(uRvR\) is a proper prefix of \(q\). Let \(q=uRvRt\) where \(t\) is nonempty. Since \(q\) satisfies \(\mathcal{C}_{1}\), \(Rt\) is a prefix of \(Rv\). Then \(q_{k+1}=uRvRvRw\) contains \(q=u\cdot Rv\cdot Rt\) as a prefix.

[\(\Longrightarrow\) in (2)] We use induction on \(k\) to show that \(q\) is a factor of \(q_{k}\). For the induction basis, \(k=0\), we have that \(q\) is a prefix of \(q_{0}=q\). For the induction step, \(k\to k+1\), let \(q_{k+1}=uRvRvRw\) where \(q_{k}=uRvRw\) is a word in \(\mathcal{L}^{\looplooploop}(q)\) obtained with \(k\) rewinds. By the induction hypothesis, \(q_{k}=uRvRw\) contains \(q\) as a factor. If \(q\) is a factor of either \(uRvR\) or \(RvRw\), then \(q_{k+1}=uRvRvRw\) contains \(q\) as a factor. Otherwise, we can decompose \(q_{k}=u^{-}q^{-}RvRq^{+}w^{+}\) where \(q=q^{-}RvRq^{+}\), \(u=u^{-}q^{-}\) and \(w=q^{+}w^{+}\). Since \(q\) satisfies \(\mathcal{C}_{3}\), the word \(q^{-}RvRvRq^{+}\), which is a factor of \(q_{k+1}\), contains \(q\) as a factor.

In the technical treatment, it will be convenient to consider the automaton obtained from \(\mathsf{NFA}(q)\) by changing its start state, as defined next.

**Definition 5**.: If \(u\) is a prefix of \(q\) (and thus \(u\) is a state in \(\mathsf{NFA}(q)\)), then \(\mathsf{S}\)-\(\mathsf{NFA}(q,u)\) is the automaton obtained from \(\mathsf{NFA}(q)\) by letting the initial state be \(u\) instead of the empty word.
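The rewind operation and the conditions \(\mathcal{C}_{1}\) and \(\mathcal{C}_{3}\) are easy to experiment with on small queries. The following sketch is ours and not part of the paper: it treats a path query as a string of single-letter relation names, lists all one-step rewinds, and checks \(\mathcal{C}_{1}\) and \(\mathcal{C}_{3}\) directly as factorization conditions (for every factorization \(q=uRvRw\), \(q\) must be a prefix, respectively a factor, of \(uRvRvRw\)), which is how they are used in the examples and in the easy direction of Lemma 5. All function names are ours.

```python
# Illustrative sketch (not from the paper): path queries as strings of
# single-character relation names.

def factorizations(q):
    """Yield (u, Rv, Rw) for every pair of positions carrying the same relation name."""
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if q[i] == q[j]:
                yield q[:i], q[i:j], q[j:]

def rewinds(q):
    """All words obtained from q by one application of the rewind operation."""
    return {u + rv + rv + rw for u, rv, rw in factorizations(q)}

def satisfies_c1(q):
    return all((u + rv + rv + rw).startswith(q) for u, rv, rw in factorizations(q))

def satisfies_c3(q):
    return all(q in u + rv + rv + rw for u, rv, rw in factorizations(q))

if __name__ == "__main__":
    for q in ["RXRY", "RXRYRY", "RXRXRYRY"]:   # q2, q3, q4 of the running examples
        print(q, sorted(rewinds(q)), "C1:", satisfies_c1(q), "C3:", satisfies_c3(q))
```

On the running examples the script reports that \(q_{2}=RXRY\) and \(q_{3}=RXRYRY\) satisfy \(\mathcal{C}_{3}\) but violate \(\mathcal{C}_{1}\), and that \(q_{4}=RXRXRYRY\) violates \(\mathcal{C}_{3}\), matching the discussion above.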
Note that \(\mathsf{S}\)-\(\mathsf{NFA}(q,\varepsilon)=\mathsf{NFA}(q)\). It may be helpful to think of the first \(\mathsf{S}\) in \(\mathsf{S}\)-\(\mathsf{NFA}(q,u)\) as "Start at \(u\)." ### Reification Lemma In this subsection, we first define how an automaton executes on a database instance. We then state an helping lemma which will be used in the proof of Lemma 7, which constitutes the main result of Section 5. To improve the readability and logical flow of our presentation, we postpone the proof of the helping lemma to Section 5.3. **Definition 6** (Automata on database instances).: Let \(\mathbf{db}\) be a database instance. A _path (in \(\mathbf{db}\))_ is defined as a sequence \(R_{1}(\underline{c_{1}},c_{2})\), \(R_{2}(\underline{c_{2}},c_{3})\),..., \(R_{n}(\underline{c_{n}},c_{n+1})\) of facts in \(\mathbf{db}\). Such a path is said to _start in \(c_{1}\)_. We call \(R_{1}R_{2}\cdots R_{n}\) the _trace_ of this path. A path is said to be _accepted_ by an automaton if its trace is accepted by the automaton. Let \(q\) be a path query and \(\mathbf{r}\) be a consistent database instance. We define \(\mathsf{start}(q,\mathbf{r})\) as the set containing all (and only) constants \(c\in\mathsf{adom}(\mathbf{r})\) such that there is a path in \(\mathbf{r}\) that starts in \(c\) and is accepted by \(\mathsf{NFA}(q)\). **Example 4**.: Consider the query \(q_{2}=RRX\) and the database instance of Figure 2. Let \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) be the repairs containing, respectively, \(R(\underline{1},2)\) and \(R(\underline{1},3)\). The only path with trace \(RRX\) in \(\mathbf{r}_{1}\) starts in \(1\); and the only path with trace \(RRX\) in \(\mathbf{r}_{2}\) starts in \(0\). The regular expression for \(\mathcal{L}^{\looplooploop}(q)\) is \(RR\left(R\right)^{*}X\). We have \(\mathsf{start}(q,\mathbf{r}_{1})=\{0,1\}\) and \(\mathsf{start}(q,\mathbf{r}_{2})=\{0\}\) The following lemma tells us that, among all repairs, there is one that is inclusion-minimal with respect to \(\mathsf{start}(q,\cdot)\). In the preceding example, the repair \(\mathbf{r}_{2}\) minimizes \(\mathsf{start}(q,\cdot)\). **Lemma 6**.: _Let \(q\) be a path query, and \(\mathbf{db}\) a database instance. There exists a repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathsf{start}(q,\mathbf{r}^{*})\subseteq\mathsf{start}(q,\mathbf{r})\)._ Informally, we think of the next Lemma 7 as a _reification lemma_. The notion of _reifiable variable_ was coined in [40, Definition 8.5], to refer to a variable \(x\) in a query \(\exists x\,(\varphi(x))\) such that whenever that query is true in every repair of a database instance, then there is a constant \(c\) such that \(\varphi(c)\) is true in every repair. The following lemma captures a very similar concept. **Lemma 7** (Reification Lemma for \(\mathcal{C}_{3}\)).: _Let \(q\) be a path query that satisfies \(\mathcal{C}_{3}\). Then, for every database instance \(\mathbf{db}\), the following are equivalent:_ 1. \(\mathbf{db}\) _is a "yes"-instance of_ \(\mathsf{CERTAINTY}(q)\)_; and_ 2. _there exists a constant_ \(c\) _(which depends on_ \(\mathbf{db}\)_) such that for every repair_ \(\mathbf{r}\) _of_ \(\mathbf{db}\)_,_ \(c\in\mathsf{start}(q,\mathbf{r})\)_._ Proof.: \(\left\lfloor\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! It is to be noted here that whenever \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) are repairs containing \(f\), then by Lemma 8, \(\mathsf{ST}_{q}(f,\mathbf{r}_{1})\) and \(\mathsf{ST}_{q}(f,\mathbf{r}_{2})\) are comparable by set inclusion. Therefore, informally, \(\mathsf{cqaST}_{q}(f,\mathbf{db})\) is the \(\subseteq\)-minimal states set of \(f\) over all repairs that contain \(f\). **Definition 9** (Preorder \(\preceq_{q}\) on repairs).: Let \(\mathbf{db}\) be a database instance. For all repairs \(\mathbf{r},\mathbf{s}\) of \(\mathbf{db}\), we define \(\mathbf{r}\preceq_{q}\mathbf{s}\) if for every \(f\in\mathbf{r}\) and \(g\in\mathbf{s}\) such that \(f\) and \(g\) are key-equal, we have \(\mathsf{ST}_{q}(f,\mathbf{r})\subseteq\mathsf{ST}_{q}(g,\mathbf{s})\). Clearly, \(\preceq_{q}\) is a reflexive and transitive binary relation on the set of repairs of \(\mathbf{db}\). We write \(\mathbf{r}\prec_{q}\mathbf{s}\) if both \(\mathbf{r}\preceq_{q}\mathbf{s}\) and for some \(f\in\mathbf{r}\) and \(g\in\mathbf{s}\) such that \(f\) and \(g\) are key-equal, \(\mathsf{ST}_{q}(f,\mathbf{r})\subseteq\mathsf{ST}_{q}(g,\mathbf{s})\). **Lemma 9**.: _Let \(q\) be a path query. For every database instance \(\mathbf{db}\), there is a repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathbf{r}^{*}\preceq_{q}\mathbf{r}\)._ Proof.: Construct a repair \(\mathbf{r}^{*}\) as follows. For every block \(\mathbf{blk}\) of \(\mathbf{db}\), insert into \(\mathbf{r}^{*}\) a fact \(f\) of \(\mathbf{blk}\) such that \(\mathsf{cqaST}_{q}(f,\mathbf{db})=\bigcap\{\mathsf{cqaST}_{q}(g,\mathbf{db}) \mid g\in\mathbf{blk}\}\). More informally, we insert into \(\mathbf{r}^{*}\) a fact \(f\) from \(\mathbf{blk}\) with a states set that is \(\subseteq\)-minimal over all repairs and all facts of \(\mathbf{blk}\). We first show the following claim. **Claim 1**.: For every fact \(f\) in \(\mathbf{r}^{*}\), we have \(\mathsf{ST}_{q}(f,\mathbf{r}^{*})=\mathsf{cqaST}_{q}(f,\mathbf{db})\). Proof.: Let \(f_{1}\) be an arbitrary fact in \(\mathbf{r}^{*}\). We show \(\mathsf{ST}_{q}(f_{1},\mathbf{r}^{*})=\mathsf{cqaST}_{q}(f_{1},\mathbf{db})\). \(\boxed{\exists}\): Obvious, because \(\mathbf{r}^{*}\) is itself a repair of \(\mathbf{db}\) that contains \(f_{1}\). \(\boxed{\exists}\): Let \(f_{1}=R_{1}(\underline{c}_{0},c_{1})\). Assume by way of a contradiction that there is \(p_{1}\in\mathsf{ST}_{q}(f_{1},\mathbf{r}^{*})\) such that \(p_{1}\notin\mathsf{cqaST}_{q}(f_{1},\mathbf{db})\). 
Then, for some (possibly empty) prefix \(p_{0}\) of \(q\), there is a sequence: \[p_{0}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{0}^{\prime} \xrightarrow{f_{1}=R_{1}(\underline{c}_{0},c_{1})}p_{1}\xrightarrow{\varepsilon }p_{1}^{\prime}\xrightarrow{f_{2}=R_{2}(\underline{c}_{1},c_{2})}p_{2}\quad \cdots\quad p_{n-1}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{n -1}^{\prime}\xrightarrow{f_{n}=R_{n}(c_{n-1},c_{n})}p_{n}=q, \tag{1}\] where \(f_{1},f_{2},\ldots,f_{n}\in\mathbf{r}^{*}\), for each \(i\in\{1,\ldots,n\}\), \(p_{i}=p_{i-1}^{\prime}R_{i}\), and for each \(i\in\{0,\ldots,n-1\}\), either \(p_{i}^{\prime}=p_{i}\) or \(p_{i}^{\prime}\) is a strict prefix of \(p_{i}\) such that \(p_{i}^{\prime}\) and \(p_{i}\) agree on their rightmost relation name. Informally, the sequence (1) represents an accepting run of \(\mathsf{S}\)-\(\mathsf{NFA}(q,p_{0})\) in \(\mathbf{r}^{*}\). Since \(p_{1}\in\mathsf{ST}_{q}(f_{1},\mathbf{r}^{*})\setminus\mathsf{cqaST}_{q}(f_{1}, \mathbf{db})\), we can assume a largest index \(\ell\in\{1,\ldots,n\}\) such that \(p_{\ell}\in\mathsf{ST}_{q}(f_{\ell},\mathbf{r}^{*})\setminus\mathsf{cqaST}_{q}(f _{\ell},\mathbf{db})\). By construction of \(\mathbf{r}^{*}\), there is a repair \(\mathbf{s}\) such that \(f_{\ell}\in\mathbf{s}\) and \(\mathsf{ST}_{q}(f_{\ell},\mathbf{s})=\mathsf{cqaST}_{q}(f_{\ell},\mathbf{db})\). Consequently, \(p_{\ell}\notin\mathsf{ST}_{q}(f_{\ell},\mathbf{s})\). We distinguish two cases: **Case that \(\ell=n\).**: Thus, the run (1) ends with \[\cdots\quad p_{\ell-1}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{ \ell-1}^{\prime}\xrightarrow{f_{\ell}=R_{\ell}(c_{\ell-1},c_{\ell})}p_{\ell}=q.\] Thus, the rightmost relation name in \(q\) is \(R_{\ell}\). Since \(f_{\ell}\in\mathbf{s}\), it is clear that \(p_{\ell}\in\mathsf{ST}_{q}(f_{\ell},\mathbf{s})\), a contradiction. **Case that \(\ell<n\).**: Thus, the run (1) includes \[\cdots\quad p_{\ell-1}\stackrel{{\varepsilon}}{{\longrightarrow}}p_{ \ell-1}^{\prime}\xrightarrow{f_{\ell}=R_{\ell}(c_{\ell-1},c_{\ell})}p_{\ell} \xrightarrow{\varepsilon}p_{\ell}^{\prime}\xrightarrow{f_{\ell+1}=R_{\ell+1}( \underline{c}_{\ell},c_{\ell+1})}p_{\ell+1}\quad\cdots,\] where \(\ell+1\) can be equal to \(n\). Clearly, \(p_{\ell+1}\in\mathsf{ST}_{q}(f_{\ell+1},\mathbf{r}^{*})\). Assume without loss of generality that \(\mathbf{s}\) contains \(f_{\ell+1}^{\prime}:=R_{\ell+1}(c_{\ell},c_{\ell+1}^{\prime})\), which is key-equal to \(f_{\ell+1}\) (possibly \(c_{\ell+1}^{\prime}=c_{\ell+1}\)). From \(p_{\ell}\notin\mathsf{ST}_{q}(f_{\ell},\mathbf{s})\), it follows \(p_{\ell+1}\notin\mathsf{ST}_{q}(f_{\ell+1}^{\prime},\mathbf{s})\). Consequently, \(p_{\ell+1}\notin\mathsf{cqaST}_{q}(f_{\ell+1}^{\prime},\mathbf{db})\). By our construction of \(r^{*}\), we have \(p_{\ell+1}\notin\mathsf{cqaST}_{q}(f_{\ell+1},\mathbf{db})\). Consequently, \(p_{\ell+1}\in\mathsf{ST}_{q}(f_{\ell+1},\mathbf{r}^{*})\setminus\mathsf{cqaST}_{q}( f_{\ell+1},\mathbf{db})\), which contradicts that \(\ell\) was chosen to be the largest such an index possible. The proof of Claim 1 is now concluded. To conclude the proof of the lemma, let \(\mathbf{r}\) be any repair of \(\mathbf{db}\), and let \(f\in\mathbf{r}^{*}\) and \(f^{\prime}\in\mathbf{r}\) be two key-equal facts in \(\mathbf{db}\). By Claim 1 and the construction of \(\mathbf{r}^{*}\), we have that \(\mathsf{ST}_{q}(f,\mathbf{r}^{*})=\mathsf{cqaST}_{q}(f,\mathbf{db})\subseteq \mathsf{cqaST}_{q}(f^{\prime},\mathbf{db})\subseteq\mathsf{ST}_{q}(f^{\prime}, \mathbf{r})\), as desired. 
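Before turning to the proof of Lemma 6, the interplay between repairs and \(\mathsf{start}(q,\cdot)\) can be checked by brute force on very small instances. The sketch below is ours and not from the paper: it encodes facts as triples, simulates \(\mathsf{NFA}(q)\) of Definition 3 (forward moves read one fact, backward \(\varepsilon\)-moves jump to an earlier prefix ending in the same relation name) to compute \(\mathsf{start}(q,\mathbf{r})\) as in Definition 6, and enumerates all repairs of a small hand-made instance (not the instance of Figure 2) to exhibit a repair whose start-set is contained in the other's, as Lemma 6 predicts.

```python
from itertools import product

def nfa_accepts_from(q, r, c):
    """True iff some path in the consistent instance r that starts in constant c
    is accepted by NFA(q).  A search state (i, a) means: the automaton is in the
    state q[:i] and the path currently sits at constant a."""
    stack, seen = [(0, c)], set()
    while stack:
        i, a = stack.pop()
        if (i, a) in seen:
            continue
        seen.add((i, a))
        if i == len(q):                       # accepting state: the prefix is q itself
            return True
        for j in range(1, i):                 # backward epsilon-moves of Definition 3
            if q[j - 1] == q[i - 1]:
                stack.append((j, a))
        for rel, key, val in r:               # forward move: read one fact q[i](a, val)
            if rel == q[i] and key == a:
                stack.append((i + 1, val))
    return False

def start(q, r):
    adom = {x for _, k, v in r for x in (k, v)}
    return {c for c in adom if nfa_accepts_from(q, r, c)}

def repairs(db):
    blocks = {}
    for fact in db:
        blocks.setdefault(fact[:2], []).append(fact)   # block = (relation, key)
    return [frozenset(choice) for choice in product(*blocks.values())]

q = "RRX"                                              # the query of Example 4
db = [("R", "0", "1"), ("R", "1", "2"), ("R", "1", "3"),   # the block R(1,*) is inconsistent
      ("R", "2", "2"), ("X", "2", "4")]                    # toy facts, not those of Figure 2
for r in repairs(db):
    print(sorted(r), "start =", sorted(start(q, r)))
```

Enumerating all repairs is of course exponential in general; the point of the algorithm in Section 6.1 below is to answer the certainty question without such enumeration.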
We can now give the proof of Lemma 6. Proof of Lemma 6.: Let \(\mathbf{db}\) be a database instance. Then by Lemma 9, there is a repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathbf{r}^{*}\preceq_{q}\mathbf{r}\). It suffices to show that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathsf{start}(q,\mathbf{r}^{*})\subseteq\mathsf{start}(q,\mathbf{r})\). To this end, consider any repair \(\mathbf{r}\) and \(c\in\mathsf{start}(q,\mathbf{r}^{*})\). Let \(R\) be the first relation name of \(q\). Since \(c\in\mathsf{start}(q,\mathbf{r}^{*})\), there is \(d\in\mathsf{adom}(\mathbf{r}^{*})\) such that \(R\in\mathsf{ST}_{q}(R(\underline{c},d),\mathbf{r}^{*})\). Then, there is a unique \(d ## 6 Complexity Upper Bounds We now show the complexity upper bounds of Theorem 3. ### A PTIME Algorithm for \(\mathcal{C}_{3}\) We now specify a polynomial-time algorithm for \(\mathsf{CERTAINTY}(q)\), for path queries \(q\) that satisfy condition \(\mathcal{C}_{3}\). The algorithm is based on the automata defined in Definition 5, and uses the concept defined next. **Definition 10** (Relation \(\vdash_{q}\)).: Let \(q\) be a path query and \(\mathbf{db}\) a database instance. For every \(c\in\mathsf{adom}(q)\) and every prefix \(u\) of \(q\), we write \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) if every repair of \(\mathbf{db}\) has a path that starts in \(c\) and is accepted by \(\mathsf{S\mbox{-}NFA}(q,u)\). An algorithm that decides the relation \(\vdash_{q}\) can be used to solve \(\mathsf{CERTAINTY}(q)\) for path queries satisfying \(\mathcal{C}_{3}\). Indeed, by Lemma 7, for path queries satisfying \(\mathcal{C}_{3}\), \(\mathbf{db}\) is a "yes"-instance for the problem \(\mathsf{CERTAINTY}(q)\) if and only if there is a constant \(c\in\mathsf{adom}(\mathbf{db})\) such that \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) with \(u=\varepsilon\). Figure 5 shows an algorithm that computes \(\{\langle c,u\rangle\mid\mathbf{db}\vdash_{q}\langle c,u\rangle\}\) as the fixed point of a binary relation \(N\). The _Initialization Step_ inserts into \(N\) all pairs \(\langle c,q\rangle\), which is correct because \(\mathbf{db}\vdash_{q}\langle c,q\rangle\) holds vacuously, as \(q\) is the accepting state of \(\mathsf{S\mbox{-}NFA}(q,q)\). Then, the _Iterative Rule_ is executed until \(N\) remains unchanged; it intrinsically reflects the constructive proof of Lemma 9: \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) if and only if for every fact \(f=R(\underline{c},d)\in\mathbf{db}\), we have \(uR\in\mathsf{caST}_{q}(f,\mathbf{db})\). Figure 6 shows an example run of the algorithm in Figure 5. The next lemma states the correctness of the algorithm. **Lemma 10**.: _Let \(q\) be a path query. Let \(\mathbf{db}\) be a database instance. Let \(N\) be the output relation returned by the algorithm in Figure 5 on input \(\mathbf{db}\). Then, for every \(c\in\mathsf{adom}(\mathbf{db})\) and every prefix \(u\) of \(q\),_ \[\langle c,u\rangle\in N\text{ if and only if }\mathbf{db}\vdash_{q}\langle c,u\rangle.\] Proof.: [\(\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\mathchar 0\sim$}}\raise 3.0pt \hbox{$\mathchar 0\sim$}}\hskip-3.0pt\)] Proof by contraposition. Assume \(\langle c,u\rangle\notin N\). The proof shows the construction of a repair \(\mathbf{r}\) of \(\mathbf{db}\) such that \(\mathbf{r}\) has no path that starts in \(c\) and is accepted by \(\mathsf{S\mbox{-}NFA}(q,u)\). Such a repair shows \(\mathbf{db}\vdash_{q}\langle c,u\rangle\). 
We explain which fact of an arbitrary block \(R(\underline{a},*)\) of \(\mathbf{db}\) will be inserted in \(\mathbf{r}\). Among all prefixes of \(q\) that end with \(R\), let \(u_{0}R\) be the longest prefix such that \(\langle a,u_{0}\rangle\notin N\). If such \(u_{0}R\) does not exist, then an arbitrarily picked fact of the block \(R(\underline{a},*)\) is inserted in \(\mathbf{r}\). Otherwise, the _Iterative Rule_ in Figure 5 entails the existence of a fact \(R(\underline{a},b)\) such that \(\langle b,u_{0}R\rangle\notin N\). Then, \(R(\underline{a},b)\) is inserted in \(\mathbf{r}\). We remark that this repair \(\mathbf{r}\) is constructed in exactly the same way as the repair \(\mathbf{r}^{*}\) built in the proof of Lemma 9. Assume for the sake of contradiction that there is a path \(\pi\) in \(\mathbf{r}\) that starts in \(c\) and is accepted by \(\mathsf{S\mbox{-}NFA}(q,u)\). Let \(\pi:=R_{1}(\underline{c}_{0},c_{1})\), \(R_{2}(\underline{c}_{1},c_{2})\),..., \(R_{n}(c_{n-1},c_{n})\) where \(c_{0}=c\). Since \(\langle c_{0},u\rangle\not\in N\) and \(\langle c_{n},q\rangle\in N\), there is a longest prefix \(u_{0}\) of \(q\), where \(|u_{0}|\geq|u|\), and \(i\in\{1,\ldots,n\}\) such that \(\langle c_{i-1},u_{0}\rangle\not\in N\) and \(\langle c_{i},u_{0}R_{i}\rangle\in N\). From \(\langle c_{i-1},u_{0}\rangle\not\in N\), it follows that \(\mathbf{db}\) contains a fact \(R_{i}(\underline{c_{i-1}},d)\) such that \(\langle d,u_{0}R_{i}\rangle\not\in N\). Then \(R_{i}(\underline{c_{i-1}},c_{i})\) would not be chosen in a repair, contradicting \(R_{i}(\underline{c_{i-1}},c_{i})\in\mathbf{r}\). Figure 5: Polynomial-time algorithm for computing \(\{\langle c,u\rangle\mid\mathbf{db}\vdash_{q}\langle c,u\rangle\}\), for a fixed path query \(q\) satisfying \(\mathcal{C}_{3}\). Figure 6: Example run of our algorithm for \(q=RRX\), on the database instance \(\mathbf{db}\) shown at the right. Assume that \(\langle c,u\rangle\in N\). Let \(\ell\) be the number of executions of the _Iterative Rule_ that were used to insert \(\langle c,u\rangle\) in \(N\). We show \(\mathbf{db}\vdash_{q}\langle c,u\rangle\) by induction on \(\ell\). The basis of the induction, \(\ell=0\), holds because the _Initialization Step_ is obviously correct. Indeed, since \(q\) is an accepting state of \(\mathsf{S\text{-}NFA}(q,q)\), we have \(\mathbf{db}\vdash_{q}\langle c,q\rangle\). For the inductive step, \(\ell\to\ell+1\), we distinguish two cases. Case that \(\langle c,u\rangle\) is added to \(N\) by the _forward part_ of the _Iterative Rule_.That is, \(\langle c,u\rangle\) is added because \(\mathbf{db}\) has a block \(\{R(\underline{c},d_{1}),\,\dots,\,R(\underline{c},d_{k})\}\) with \(k\geq 1\) and for every \(i\in\{1,\dots,k\}\), we have that \(\langle d_{i},uR\rangle\) was added to \(N\) by a previous execution of the _Iterative Rule_. Let \(\mathbf{r}\) be an arbitrary repair of \(\mathbf{db}\). Since every repair contains exactly one fact from each block, we can assume \(i\in\{1,\dots,k\}\) such that \(R(\underline{c},d_{i})\in\mathbf{r}\). By the induction hypothesis, \(\mathbf{db}\vdash_{q}\langle d_{i},uR\rangle\) and thus \(\mathbf{r}\) has a path that starts in \(d_{i}\) and is accepted by \(\mathsf{S\text{-}NFA}(q,uR)\). Clearly, this path can be left extended with \(R(\underline{c},d_{i})\), and this left extended path is accepted by \(\mathsf{S\text{-}NFA}(q,u)\). Note incidentally that the path in \(\mathbf{r}\) may already use \(R(\underline{c},d_{i})\), in which case the path is cyclic. 
Since \(\mathbf{r}\) is an arbitrary repair, it is correct to conclude \(\mathbf{db}\vdash_{q}\langle c,u\rangle\). Case that \(\langle c,u\rangle\) is added to \(N\) by the _backward part_ of the _Iterative Rule_.Then, there exists a relation name \(S\) and words \(v,w\) such that \(u=vSwS\), and \(\langle c,u\rangle\) is added because \(\langle c,vS\rangle\) was added in the same iteration. Then, \(\mathsf{S\text{-}NFA}(q,u)\) has an \(\varepsilon\)-transition from state \(u\) to \(vS\). Let \(\mathbf{r}\) be an arbitrary repair of \(\mathbf{db}\). By the reasoning in the previous case, \(\mathbf{r}\) has a path that starts in \(c\) and is accepted by \(\mathsf{S\text{-}NFA}(q,vS)\). We claim that \(\mathbf{r}\) has a path that starts in \(c\) and is accepted by \(\mathsf{S\text{-}NFA}(q,u)\). Indeed, \(\mathsf{S\text{-}NFA}(q,u)\) can use the \(\varepsilon\)-transition to reach the state \(vS\), and then behave like \(\mathsf{S\text{-}NFA}(q,vS)\). This concludes the proof. The following corollary is now immediate. **Corollary 1**.: _Let \(q\) be a path query. Let \(\mathbf{db}\) be a database instance, and \(c\in\mathsf{adom}(\mathbf{db})\). Then, the following are equivalent:_ 1. \(c\in\mathsf{start}(q,\mathbf{r})\) _for every repair_ \(\mathbf{r}\) _of_ \(\mathbf{db}\)_; and_ 2. \(\langle c,\epsilon\rangle\in N\)_, where_ \(N\) _is the output of the algorithm in Figure_ 5_._ Finally, we obtain the following tractability result. **Lemma 11**.: _For each path query \(q\) satisfying \(\mathcal{C}_{3}\), \(\mathsf{CERTAINTY}(q)\) is expressible in Least Fixpoint Logic, and hence is in \(\mathbf{PTIME}\)._ Proof.: For a path query \(q\), define the following formula in LFP [33]: \[\psi_{q}(s,t):=\left[\mathbf{If}_{\mathbf{}_{N,x,z}}\varphi_{q}(N,x,z)\right] (s,t), \tag{2}\] where \(\varphi_{q}(N,x,z)\) is given in Figure 7. Herein, \(\alpha(x)\) denotes a first-order query that computes the active domain. That is, for every database instance \(\mathbf{db}\) and constant \(c\), \(\mathbf{db}\models\alpha(c)\) if and only if \(c\in\mathsf{adom}(\mathbf{db})\). Further, \(u\leq v\) means that \(u\) is a prefix of \(v\); and \(u<v\) means that \(u\) is a proper prefix of \(v\). Thus, \(u<v\) if and only if \(u\leq v\) and \(u\neq v\). The formula \(\varphi_{q}(N,x,z)\) is positive in \(N\), which is a \(2\)-ary predicate symbol. It is understood that the middle disjunction ranges over all nonempty prefixes \(uR\) of \(q\) (possibly \(u=\varepsilon\)). The last disjunction ranges over all pairs \((u,uv)\) of distinct nonempty prefixes of \(q\) that agree on their last symbol. We used a different typesetting to distinguish the constant words \(\mathsf{q}\), \(\mathsf{uR}\), \(\mathsf{uv}\) from first-order variables \(x\), \(z\). It is easily verified that the LFP query (2) expresses the algorithm of Figure 5. Since the formula (2) in the proof of Lemma 11 uses universal quantification, it is not in Existential Least Fixpoint Logic, which is equal to \(\mathsf{DATALOG}_{\sim}\)[33, Theorem 10.18]. Figure 7: Definition of \(\varphi_{q}(N,x,z)\). The predicate \(\alpha(x)\) states that \(x\) is in the active domain, and \(<\) is shorthand for “_is a strict prefix of”_. ### FO-Rewritability for \(\mathcal{C}_{1}\) We now show that if a path query \(q\) satisfies \(\mathcal{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), and a first-order rewriting for \(q\) can be effectively constructed. 
**Definition 11** (First-order rewriting).: If \(q\) is a Boolean query such that \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), then a _(consistent) first-order rewriting_ for \(q\) is a first-order sentence \(\psi\) such that for every database instance \(\mathbf{db}\), the following are equivalent: 1. \(\mathbf{db}\) is a "yes"-instance of \(\mathsf{CERTAINTY}(q)\); and 2. \(\mathbf{db}\) satisfies \(\psi\). **Definition 12**.: If \(q=\{R_{1}(\underline{x}_{1},x_{2}),\,R_{2}(\underline{x}_{2},x_{3}),\,\ldots, \,R_{k}(\underline{x}_{k},x_{k+1})\}\), \(k\geq 1\), and \(c\) is a constant, then \(q_{[c]}\) is the Boolean conjunctive query \(q_{[c]}:=\{R_{1}(\underline{c},\underline{x}_{2}),R_{2}(\underline{x}_{2},x_ {3}),\ldots,R_{k}(\underline{x}_{k},x_{k+1})\}\). **Lemma 12**.: _For every nonempty path query \(q\) and constant \(c\), the problem \(\mathsf{CERTAINTY}(q_{[c]})\) is in \(\mathbf{FO}\). Moreover, it is possible to construct a first-order formula \(\psi(x)\), with free variable \(x\), such that for every constant \(c\), the sentence \(\exists x\left(\psi(x)\wedge x=c\right)\) is a first-order rewriting for \(q_{[c]}\)._ Proof.: The proof inductively constructs a first-order rewriting for \(q_{[c]}\), where the induction is on the number \(n\) of atoms in \(q\). For the basis of the induction, \(n=1\), we have \(q_{[c]}=R(\underline{c},y)\). Then, the first-order formula \(\psi(x)=\exists yR(\underline{x},y)\) obviously satisfies the statement of the lemma. We next show the induction step, \(n\to n+1\). Let \(R(\underline{x}_{1},x_{2})\) be the left-most atom of \(q\), and assume that \(p:=q\setminus\{R(\underline{x}_{1},x_{2})\}\) is a path query with \(n\geq 1\) atoms. By the induction hypothesis, it is possible to construct a first-order formula \(\varphi(z)\), with free variable \(z\), such that for every constant \(d\), \[\exists z\left(\varphi(z)\wedge z=d\right)\text{ is a first-order rewriting for }p_{[d]}. \tag{3}\] We now define \(\psi(x)\) as follows: \[\psi(x)=\exists y\left(R(\underline{x},y)\right)\wedge\forall z\left(R( \underline{x},z)\rightarrow\varphi(z)\right). \tag{4}\] We will show that for every constant \(c\), \(\exists x\left(\psi(x)\wedge x=c\right)\) is a first-order rewriting for \(q_{[c]}\). To this end, let \(\mathbf{db}\) be a database instance. It remains to be shown that \(\mathbf{db}\) is a "yes"-instance of \(\mathsf{CERTAINTY}(q_{[c]})\) if and only if \(\mathbf{db}\) satisfies \(\exists x\left(\psi(x)\wedge x=c\right)\). Assume \(\mathbf{db}\) satisfies \(\exists x\left(\psi(x)\wedge x=c\right)\). Because of the conjunct \(\exists y\left(R(\underline{x},y)\right)\) in (4), we have that \(\mathbf{db}\) includes a block \(R(\underline{c},*)\). Let \(\mathbf{r}\) be a repair of \(\mathbf{db}\). We need to show that \(\mathbf{r}\) satisfies \(q_{[c]}\). Clearly, \(\mathbf{r}\) contains \(R(\underline{c},d)\) for some constant \(d\). Since \(\mathbf{db}\) satisfies \(\exists z\left(\varphi(z)\wedge z=d\right)\), the induction hypothesis (3) tells us that \(\mathbf{r}\) satisfies \(p_{[d]}\). It is then obvious that \(\mathbf{r}\) satisfies \(q_{[c]}\). Assume \(\mathbf{db}\) is a "yes"-instance for \(\mathsf{CERTAINTY}(q_{[c]})\). Then \(\mathbf{db}\) must obviously satisfy \(\exists y\left(R(\underline{c},y)\right)\). Therefore, \(\mathbf{db}\) includes a block \(R(\underline{c},*)\). Let \(\mathbf{r}\) be an arbitrary repair of \(\mathbf{db}\). There exists \(d\) such that \(R(\underline{c},d)\in\mathbf{r}\). 
Since \(\mathbf{r}\) satisfies \(q_{[c]}\), it follows that \(\mathbf{r}\) satisfies \(p_{[d]}\). Since \(\mathbf{r}\) is an arbitrary repair, the induction hypothesis (3) tells us that \(\mathbf{db}\) satisfies \(\exists z\left(\varphi(z)\wedge z=d\right)\). It is then clear that \(\mathbf{db}\) satisfies \(\exists x\left(\psi(x)\wedge x=c\right)\). **Lemma 13**.: _For every path query \(q\) that satisfies \(\mathcal{C}_{1}\), the problem \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), and a first-order rewriting for \(q\) can be effectively constructed._ Proof.: By Lemmas 5 and 7, a database instance \(\mathbf{db}\) is a "yes"-instance for \(\mathsf{CERTAINTY}(q)\) if and only if there is a constant \(c\) (which depends on \(\mathbf{db}\)) such that \(\mathbf{db}\) is a "yes"-instance for \(\mathsf{CERTAINTY}(q_{[c]})\). By Lemma 12, it is possible to construct a first-order rewriting \(\exists x\left(\psi(x)\wedge x=c\right)\) for \(q_{[c]}\). It is then clear that \(\exists x\left(\psi(x)\right)\) is a first-order rewriting for \(q\). ### An NL Algorithm for \(\mathcal{C}_{2}\) We show that \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{NL}\) if \(q\) satisfies \(\mathcal{C}_{2}\) by expressing it in linear Datalog with stratified negation. The proof will use the syntactic characterization of \(\mathcal{C}_{2}\) established in Lemma 3. **Lemma 14**.: _For every path query \(q\) that satisfies \(\mathcal{C}_{2}\), the problem \(\mathsf{CERTAINTY}(q)\) is in linear Datalog with stratified negation (and hence in \(\mathbf{NL}\))._ In the remainder of this section, we develop the proof of Lemma 14. **Definition 13**.: Let \(q\) be a path query. We define \(\mathsf{NFA}^{\min}(q)\) as the automaton that accepts \(w\) if \(w\) is accepted by \(\mathsf{NFA}(q)\) and no proper prefix of \(w\) is accepted by \(\mathsf{NFA}(q)\). It is well-known that such an automaton \(\mathsf{NFA}^{\min}(q)\) exists. **Example 6**.: Let \(q=RXRYR\). Then, \(RXRYRYR\) is accepted by \(\mathsf{NFA}(q)\), but not by \(\mathsf{NFA}^{\min}(q)\), because the proper prefix \(RXRYR\) is also accepted by \(\mathsf{NFA}(q)\). **Definition 14**.: Let \(q\) be a path query and \(\mathbf{r}\) be a consistent database instance. We define \(\mathsf{start}^{\min}(q,\mathbf{r})\) as the set containing all (and only) constants \(c\in\mathsf{adom}(\mathbf{r})\) such that there is a path in \(\mathbf{r}\) that starts in \(c\) and is accepted by \(\mathsf{NFA}^{\min}(q)\). **Lemma 15**.: _Let \(q\) be a path query. For every consistent database instance \(\mathbf{r}\), we have that \(\mathsf{start}(q,\mathbf{r})=\mathsf{start}^{\min}(q,\mathbf{r})\)._ Proof.: By construction, \(\mathsf{start}^{\min}(q,\mathbf{r})\subseteq\mathsf{start}(q,\mathbf{r})\). Next assume that \(c\in\mathsf{start}(q,\mathbf{r})\) and let \(\pi\) be the path that starts in \(c\) and is accepted by \(\mathsf{NFA}(q)\). Let \(\pi^{-}\) be the shortest prefix of \(\pi\) that is accepted by \(\mathsf{NFA}(q)\). Since \(\pi^{-}\) starts in \(c\) and is accepted by \(\mathsf{NFA}^{\min}(q)\), it follows \(c\in\mathsf{start}^{\min}(q,\mathbf{r})\). **Lemma 16**.: _Let \(u\cdot v\cdot w\) be a self-join-free word over the alphabet of relation names. Let \(s\) be a suffix of \(uv\) that is distinct from \(uv\). For every integer \(k\geq 0\), \(\mathsf{NFA}^{\min}(s\left(uv\right)^{k}\,wv)\) accepts the language of the regular expression \(s\left(uv\right)^{k}\left(uv\right)^{*}wv\)._ Proof.: Let \(q=s\left(uv\right)^{k}\,wv\). 
Since \(u\cdot v\cdot w\) is self-join-free, applying the rewinding operation, zero, one, or more times, in the part of \(q\) that precedes \(w\) will repeat the factor \(uv\). This gives words of the form \(s\left(uv\right)^{\ell}wv\) with \(\ell\geq k\). The difficult case is where we rewind a factor of \(q\) that itself contains \(w\) as a factor. In this case, the rewinding operation will repeat a factor of the form \(v_{2}\left(uv\right)^{\ell}wv_{1}\) such that \(v=v_{1}v_{2}\) and \(v_{2}\neq\varepsilon\), which results in words of one of the following forms (\(s=s_{1}\cdot v_{2}\)): \[\left(s\left(uv\right)^{\ell_{1}}wv_{1}\right)\cdot v_{2}\left(uv \right)^{\ell_{2}}wv_{1}\cdot v_{2}\left(uv\right)^{\ell_{2}}wv_{1}\cdot(v_{ 2});\text{ or}\] \[\left(s_{1}\right)\cdot v_{2}\left(uv\right)^{\ell}wv_{1}\cdot v_{ 2}\left(uv\right)^{\ell}wv_{1}\cdot(v_{2}).\] These words have a prefix belonging to the language of the regular expression \(s\left(uv\right)^{k}\left(uv\right)^{*}wv\). **Definition 15**.: Let \(\mathbf{db}\) be a database instance, and \(q\) a path query. For \(a,b\in\mathsf{adom}(\mathbf{db})\), we write \(\mathbf{db}\models a\overset{q}{\longrightarrow}b\) if there exists a path in \(\mathbf{db}\) from \(a\) to \(b\) with trace \(q\). Even more formally, \(\mathbf{db}\models a\overset{q}{\longrightarrow}b\) if \(\mathbf{db}\) contains facts \(R_{1}(\underline{a_{1}},a_{2}),R_{2}(\underline{a_{2}},a_{3}),\ldots,R_{|q|}( a_{|q|},a_{|q|+1})\) such that \(R_{1}R_{2}\cdots R_{|q|}=q\). We write \(\mathbf{db}\models a\overset{q_{1}}{\longrightarrow}b\overset{q_{2}}{ \longrightarrow}c\) as a shorthand for \(\mathbf{db}\models a\overset{q_{1}}{\longrightarrow}b\) and \(\mathbf{db}\models b\overset{q_{2}}{\longrightarrow}c\). We write \(\mathbf{db}\models a\overset{q}{\longrightarrow}b\) if there exists a _consistent path_ in \(\mathbf{db}\) from \(a\) to \(b\) with trace \(q\), where a path is called consistent if it does not contain two distinct key-equal facts. A constant \(c\in\mathsf{adom}(\mathbf{db})\) is called _terminal for \(q\) in \(\mathbf{db}\)_ if for some (possibly empty) proper prefix \(p\) of \(q\), there is a consistent path in \(\mathbf{db}\) with trace \(p\) that cannot be right extended to a consistent path in \(\mathbf{db}\) with trace \(q\). Note that for every \(c\in\mathsf{adom}(\mathbf{db})\), we have \(c\overset{\varepsilon}{\longrightarrow}c\). Clearly, if \(q\) is self-join-free, then \(c\overset{q}{\longrightarrow}d\) implies \(c\overset{q}{\longrightarrow}q\) (the converse implication holds vacuously true). **Example 7**.: Let \(\mathbf{db}=\{R(\underline{c},d),S(\underline{d},c),R(\underline{c},e),T( \underline{e},f)\}\). Then, \(c\) is terminal for \(RSRT\) in \(\mathbf{db}\) because the path \(R(\underline{c},d),S(\underline{d},c)\) cannot be right extended to a consistent path with trace \(RSRT\), because \(d\) has no outgoing \(T\)-edge. Note incidentally that \(\mathbf{db}\models c\overset{RS}{\longrightarrow}c\overset{RT}{\longrightarrow}f\), but \(\mathbf{db}\not\models c\overset{RSRT}{\longrightarrow}f\). **Lemma 17**.: _Let \(\mathbf{db}\) be a database instance, and \(c\in\mathsf{adom}(\mathbf{db})\). Let \(q\) be a path query. Then, \(c\) is terminal for \(q\) in \(\mathbf{db}\) if and only if \(\mathbf{db}\) is a "no"-instance of \(\mathsf{CERTAINTY}(q_{[c]})\), with \(q_{[c]}\) as defined by Definition 12._ Proof.: Straightforward. Assume \(\mathbf{db}\) is a "no"-instance of \(\mathsf{CERTAINTY}(q_{[c]})\). 
Then, there is a repair \(\mathbf{r}\) of \(\mathbf{db}\) such that \(\mathbf{r}\not\models q_{[c]}\). The empty path is a path in \(\mathbf{r}\) that starts in \(c\) and has trace \(\varepsilon\), which is a prefix of \(q\). We can therefore assume a longest prefix \(p\) of \(q\) such there exists a path \(\pi\) in \(\mathbf{r}\) that starts in \(c\) and has trace \(p\). Since \(\mathbf{r}\) is consistent, \(\pi\) is consistent. From \(\mathbf{r}\not\models q_{[c]}\), it follows that \(p\) is a proper prefix of \(q\). By Definition 15, \(c\) is terminal for \(q\) in \(\mathbf{db}\). We can now give the proof of Lemma 14. Proof of Lemma 14.: Assume \(q\) satisfies \(\mathcal{C}_{2}\). By Lemma 3, \(q\) satisfies \(\mathcal{B}_{2a}\) or \(\mathcal{B}_{2b}\). We treat the case that \(q\) satisfies \(\mathcal{B}_{2b}\) (the case that \(q\) satisfies \(\mathcal{B}_{2a}\) is even easier). We have that \(q\) is a factor of \(\left(uv\right)^{k}wv\), where \(k\) is chosen as small as possible, and \(uvw\) is self-join-free. The proof is straightforward if \(k=0\); we assume \(k\geq 1\) from here on. To simplify notation, we will show the case where \(q\) is a suffix of \(\left(uv\right)^{k}wv\); our proof can be easily extended to the case where \(q\) is not a suffix, at the price of some extra notation. There is a suffix \(s\) of \(uv\) such that \(q=s\left(uv\right)^{k-1}wv\). We first define a unary predicate \(P\) (which depends on \(q\)) such that \(\mathbf{db}\models P(d)\) if for some \(\ell\geq 0\), there are constants \(d_{0},d_{1},\ldots,d_{\ell}\in\mathsf{adom}(\mathbf{db})\) with \(d_{0}=d\) such that: 1. \(\mathbf{db}\models d_{0}\xrightarrow{\mathit{uv}}d_{1}\xrightarrow{\mathit{ uv}}d_{2}\xrightarrow{\mathit{uv}}\cdots\xrightarrow{\mathit{uv}}d_{\ell}\); 2. for every \(i\in\{0,1,\ldots,\ell\}\), \(d_{i}\) is terminal for \(wv\) in \(\mathbf{db}\); and 3. either \(d_{\ell}\) is terminal for \(uv\) in \(\mathbf{db}\), or \(d_{\ell}\in\{d_{0},\ldots,d_{\ell-1}\}\). **Claim 2**.: The definition of the predicate \(P\) does not change if we replace item 1 by the stronger requirement that for every \(i\in\{0,1,\ldots,\ell-1\}\), there exists a path \(\pi_{i}\) from \(d_{i}\) to \(d_{i+1}\) with trace \(uv\) such that the composed path \(\pi_{0}\cdot\pi_{1}\cdots\pi_{\ell-1}\) is consistent. Proof.: It suffices to show the following statement by induction on increasing \(l\): whenever there exist \(l\geq 1\) and constants \(d_{0},d_{1},\ldots,d_{l}\) with \(d_{0}=d\) such that conditions 1, 2, and 3 hold, there exist another constant \(k\geq 1\) and constants \(c_{0},c_{1},\ldots,c_{k}\) with \(c_{0}=d\) such that conditions 1, 2, and 3 hold, and, moreover, for each \(i\in\{0,1,\ldots,k-1\}\), there exists a path \(\pi_{i}\) from \(c_{i}\) to \(c_{i+1}\) such that the composed path \(\pi_{0}\cdot\pi_{1}\cdots\pi_{k-1}\) is consistent. **Basis \(l=1\).**: Then we have \(\mathbf{db}\models d_{0}\xrightarrow{\mathit{uv}}d_{1}\), witnessed by a path \(\pi_{0}\). Since \(uv\) is self-join-free, the path \(\pi_{0}\) is consistent. The claim thus follows with \(k=l=1\), \(c_{0}=d_{0}\) and \(c_{1}=d_{1}\). **Inductive step \(l\to l+1\).**: Assume that the statement holds for any integer in \(\{1,2,\ldots,l\}\). Suppose that there exist \(l\geq 2\) and constants \(d_{0},d_{1},\ldots,d_{l+1}\) with \(d_{0}=d\) such that conditions 1, 2, and 3 hold. For \(i\in\{0,\ldots,l\}\), let \(\pi_{i}\) be a path with trace \(uv\) from \(d_{i}\) to \(d_{i+1}\) in \(\mathbf{db}\). 
The claim holds if the composed path \(\pi_{0}\cdot\pi_{1}\cdots\pi_{l}\) is consistent, with \(k=l+1\) and \(c_{i}=d_{i}\) for \(i\in\{0,1,\ldots,l+1\}\). Now, assume that for some \(i<j\), the paths that show \(\mathbf{db}\models d_{i}\xrightarrow{\mathit{uv}}d_{i+1}\) and \(\mathbf{db}\models d_{j}\xrightarrow{\mathit{uv}}d_{j+1}\) contain, respectively, \(R(\underline{a},b_{1})\) and \(R(\underline{a},b_{2})\) with \(b_{1}\neq b_{2}\). It is easily verified that \[\mathbf{db}\models d_{0}\xrightarrow{\mathit{uv}}d_{1}\xrightarrow{\mathit{ uv}}d_{2}\xrightarrow{\mathit{uv}}\cdots\xrightarrow{\mathit{uv}}d_{i}\xrightarrow{ \mathit{uv}}d_{j+1}\xrightarrow{\mathit{uv}}\cdots\xrightarrow{\mathit{uv}}d _{l+1},\] where the number of \(uv\)-steps is strictly less than \(l+1\). Informally, we follow the original path until we reach \(R(\underline{a},b_{1})\), but then follow \(R(\underline{a},b_{2})\) instead of \(R(\underline{a},b_{1})\), and continue on the path that proves \(\mathbf{db}\models d_{j}\xrightarrow{\mathit{uv}}d_{j+1}\). Then the claim holds by applying the inductive hypothesis on constants \(d_{0},d_{1},\ldots,d_{i},d_{j+1},\ldots,d_{l+1}\). The proof is now complete. Since we care about the expressibility of the predicate \(P\) in Datalog, Claim 2 is not cooked into the definition of \(P\). The idea is the same as in an **NL**-algorithm for reachability: if there exists a directed path from \(s\) to \(t\), then there is such a path without repeated vertices; but we do not care for repeated vertices when computing reachability. **Claim 3**.: The definition of predicate \(P\) does not change if we require that for \(i\in\{0,1,\ldots,\ell-1\}\), \(d_{i}\) is not terminal for \(uv\) in \(\mathbf{db}\). Proof.: Assume that for some \(0\leq i<\ell\), \(d_{i}\) is terminal for \(uv\) in \(\mathbf{db}\). Then, all conditions in the definition are satisfied by choosing \(\ell\) equal to \(j\). Claim 3 is not cooked into the definition of \(P\) to simplify the the encoding of \(P\) in Datalog. Next, we define a unary predicate \(O\) such that \(\mathbf{db}\models O(c)\) for a constant \(c\) if \(c\in\mathsf{adom}(\mathbf{db})\) and one of the following holds true: 1. \(c\) is terminal for \(s\left(uv\right)^{k-1}\) in \(\mathbf{db}\); or 2. there is a constant \(d\in\mathsf{adom}(\mathbf{db})\) such that both \(\mathbf{db}\models c\xrightarrow{s\left(uv\right)^{k-1}}d\) and \(\mathbf{db}\models P(d)\). **Claim 4**.: Let \(c\in\mathsf{adom}(\mathbf{db})\). The following are equivalent: 1. there is a repair \(\mathbf{r}\) of \(\mathbf{db}\) that contains no path that starts in \(c\) and whose trace is in the language of the regular expression \(s\left(uv\right)^{k-1}\left(uv\right)^{*}wv\); and 2. \(\mathbf{db}\models O(c)\). Proof.: Let \(wv=S_{0}S_{1}\cdots S_{m-1}\) and \(uv=R_{0}R_{1}\cdots R_{n-1}\). (I)\(\implies\)(II) Assume that item (I) holds true. Let the first relation name of \(s\) be \(R_{i}\). Starting from \(c\), let \(\pi\) be a maximal (possibly infinite) path in \(\mathbf{r}\) that starts in \(c\) and has trace \(R_{i}R_{i+1}R_{i+2}\cdots\), where addition is modulo \(n\). Since \(\mathbf{r}\) is consistent, \(\pi\) is deterministic. Since \(\mathbf{r}\) is finite, \(\pi\) contains only finitely many distinct edges. Therefore, \(\pi\) ends either in a loop or in an edge \(R_{j}(\underline{d},e)\) such that \(\mathbf{db}\models\neg\exists yR_{j+1}(\underline{e},y)\) (recall that \(\mathbf{r}\) contains a fact from every block of \(\mathbf{db}\)). 
Assume that \(\pi\) has a prefix \(\pi^{\prime}\) with trace \(s\left(uv\right)^{k-1}\); if \(e\) occurs at the non-primary key position of the last \(R_{n-1}\)-fact of \(\pi^{\prime}\) or of any \(R_{n-1}\)-fact occurring afterwards in \(\pi\), then it follows from item (I) that there exist a (possibly empty) prefix \(pS_{j}\) of \(wv\) and a constant \(f\in\mathsf{adom}(\mathbf{r})\) such that \(\mathbf{r}\models e\stackrel{{ p}}{{\longrightarrow}}f\) and \(\mathbf{db}\models\neg\exists yS_{j}(\underline{f},y)\). It is now easily verified that \(\mathbf{db}\models O(c)\). (II) Assume \(\mathbf{db}\models O(c)\). It is easily verified that the desired result holds true if \(c\) is terminal for \(s\left(uv\right)^{k-1}\) in \(\mathbf{db}\). Assume from here on that \(c\) is not terminal for \(s\left(uv\right)^{k-1}\) in \(\mathbf{db}\). That is, for every repair \(\mathbf{r}\) of \(\mathbf{db}\), there is a constant \(d\) such that \(\mathbf{r}\models c\stackrel{{ s\left(uv\right)^{k-1}}}{{ \longrightarrow}}d\). Then, there is a consistent path \(\alpha\) with trace \(s\left(uv\right)^{k-1}\) from \(c\) to some constant \(d\in\mathsf{adom}(\mathbf{db})\) such that \(\mathbf{db}\models P(d)\), using the stronger definition of \(P\) implied by Claims 2 and 3. Let \(d_{0},\ldots,d_{\ell}\) be as in our (stronger) definition of \(P(d)\), that is, first, \(d_{1},\ldots,d_{\ell-1}\) are not terminal for \(uv\) in \(\mathbf{db}\) (cf. Claim 3), and second, there is a \(\subseteq\)-minimal consistent subset \(\pi\) of \(\mathbf{db}\) such that \(\pi\models d_{0}\stackrel{{ wv}}{{\longrightarrow}}d_{1} \stackrel{{ wv}}{{\longrightarrow}}d_{2}\stackrel{{ wv}}{{ \longrightarrow}}\cdots\stackrel{{ wv}}{{\longrightarrow}}d_{\ell}\) (cf. Claim 2). We construct a repair \(\mathbf{r}\) as follows: 1. insert into \(\mathbf{r}\) all facts of \(\pi\); 2. for every \(i\in\{0,\ldots,\ell\}\), \(d_{i}\) is terminal for \(wv\) in \(\mathbf{db}\). We ensure that \(\mathbf{r}\models d_{i}\stackrel{{ S_{0}S_{1}\cdots S_{j_{i}}}}{{ \longrightarrow}}e_{i}\) for some \(j_{i}\in\{0,\ldots,m-2\}\) and some constant \(e_{i}\) such that \(\mathbf{db}\models\neg\exists yS_{j+1}(\underline{e}_{i},y)\); 3. if \(d_{\ell}\) is terminal for \(uv\) in \(\mathbf{db}\), then we ensure that \(\mathbf{r}\models d_{\ell}\stackrel{{ R_{0}R_{1}\cdots R_{j}}}{{ \longrightarrow}}e\) for some \(j\in\{0,\ldots,n-2\}\) and some constant \(e\) such that \(\mathbf{db}\models\neg\exists yS_{j+1}(\underline{e},y)\); 4. insert into \(\mathbf{r}\) the facts of \(\alpha\) that are not key-equal to a fact already in \(\mathbf{r}\); and 5. complete \(\mathbf{r}\) into a \(\subseteq\)-maximal consistent subset of \(\mathbf{db}\). Since \(\mathbf{r}\) is a repair of \(\mathbf{db}\), there exists a path \(\delta\) with trace \(s\left(uv\right)^{k-1}\) in \(\mathbf{r}\) that starts from \(c\). If \(\delta\neq\alpha\), then \(\delta\) must contain a fact of \(\pi\) that was inserted in step 1. Consequently, no matter whether \(\delta=\alpha\) or \(\delta\neq\alpha\), the endpoint of \(\delta\) belongs to \(\{d_{0},\ldots,d_{\ell}\}\). It follows that there is a (possibly empty) path from \(\delta\)'s endpoint to \(d_{\ell}\) whose trace is of the form \(\left(uv\right)^{*}\). Two cases can occur: * \(d_{\ell}\) is terminal for \(uv\) in \(\mathbf{db}\). * \(d_{\ell}\) is not terminal for \(uv\) in \(\mathbf{db}\). Then there is \(j\in\{0,\ldots,\ell-1\}\) such that \(d_{j}=d_{\ell}\). 
Then, there is a path of the form \(\left(uv\right)^{*}\) that starts from \(\delta\)'s endpoint and eventually loops. Since, by construction, each \(d_{i}\) is terminal for \(wv\) in \(\mathbf{r}\), it will be the case that \(\delta\) cannot be extended to a path in \(\mathbf{r}\) whose trace is of the form \(s\left(uv\right)^{k}\left(uv\right)^{*}wv\). **Claim 5**.: The unary predicate \(O\) is expressible in linear Datalog with stratified negation. Proof.: The construction of the linear Datalog program is straightforward. Concerning the computation of predicates \(P\) and \(O\), note that it can be checked in \(\mathbf{FO}\) whether or not a constant \(c\) is terminal for some path query \(q\), by Lemmas 12 and 17. The only need for recursion comes from condition (i) in the definition of the predicate \(P\), which searches for a directed path of a particular form. We give a program for \(q=UVUVWV\), where \(\mathtt{c}(\mathtt{X})\) states that \(\mathtt{X}\) is a constant, and \(\mathtt{ukey}(\mathtt{X})\) states that \(\mathtt{X}\) is the primary key of some \(U\)-fact. \(\mathtt{consistent}(\mathtt{X1},\mathtt{X2},\mathtt{X3},\mathtt{X4})\) is true if either \(\mathtt{X1}\neq\mathtt{X3}\) or \(\mathtt{X2}=\mathtt{X4}\) (or both). wterminal(X) := c(X), not wkey(X). wterminal(X) := u(X,Y), not wkey(Y). wterminal(X) := c(X), not wkey(X). wterminal(X) := w(X,Y), not wkey(Y). wv2terminal(X) := wterminal(X). wv2terminal(X1) :- u(X1,K2), v(X2,X3), uvterminal(X3). wppath(X1,K3) :- u(X1,K2), v(X2,K3), wvterminal(X1), wvterminal(X2), wvterminal(X3). wppath(X1,K4) := uppath(X1,K2), u(K2,K3), v(X3,K4), wvterminal(X3), wvterminal(K4). p(X) :- wvterminal(X), wvterminal(X). %%%% the empty path. p(X) :- wppath(X,Y), wterminal(Y). p(X) :- upwind(X,Y), wvpath(Y,Y). %%%% p and upwind are not mutually recursive. o(X) :- uv2terminal(X). o(X1) :- u(X1,K2), v(X2,K3), u(X3,K4), v(X4,K5), consistent(X1,K2,K3,K4), consistent(X2,K3,K4,K5), p(X5). The above program is in linear Datalog with stratified negation. It is easily seen that any path query satisfying \(\mathcal{B}_{2b}\) admits such a program for the predicate \(O\). By Lemmas 7, 15, and 16, the following are equivalent: 1. **db** is a "no"-instance of \(\mathsf{CERTAINTY}(q)\); and 2. for every constant \(c_{i}\in\mathsf{adom}(q)\), there is a repair \(\mathbf{r}\) of **db** that contains no path that starts in \(c_{i}\) and whose trace is in the language of the regular expression \(s\left(uv\right)^{k-1}\left(uv\right)^{*}uv\). By Claim 4, item (b) holds true if and only if for every \(c\in\mathsf{adom}(\mathbf{db})\), \(\mathbf{db}\models\neg O(c)\). It follows from Claim 5 that the latter test is in linear Datalog with stratified negation, which concludes the proof of Lemma 14. ## 7 Complexity Lower Bounds In this section, we show the complexity lower bounds of Theorem 3. For a path query \(q=\{R_{1}(x_{1},x_{2})\),..., \(R_{k}(x_{k},x_{k+1})\}\) and constants \(a,b\), we define the following database instances: \[\phi_{a}^{b}[q] := \{R_{1}(a,\Box_{2}),R_{2}(\Box_{2},\Box_{3}),\ldots,R_{k}(\Box_{ k},b)\}\] \[\phi_{\bot}^{\bot}[q] := \{R_{1}(a,\Box_{2}),R_{2}(\Box_{2},\Box_{3}),\ldots,R_{k}(\Box_{ k},\Box_{k+1})\}\] \[\phi_{\bot}^{b}[q] := \{R_{1}(\Box_{1},\Box_{2}),R_{2}(\Box_{2},\Box_{3}),\ldots,R_{ k}(\Box_{k},b)\}\] where the symbols \(\Box_{i}\) denoted fresh constants not occurring elsewhere. Significantly, two occurrences of \(\Box_{i}\) will represent different constants. 
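The gadget instances \(\phi_{a}^{b}[q]\), \(\phi_{a}^{\perp}[q]\), and \(\phi_{\perp}^{b}[q]\) are simply fresh paths tracing \(q\), optionally pinned to \(a\) at their start and/or \(b\) at their end. A possible encoding is sketched below; the function name and fact representation are ours, and a global counter guarantees that the internal \(\Box_{i}\) constants of different gadgets are never shared.

```python
# Illustrative sketch (not from the paper) of the phi gadgets used in the reductions.
import itertools

_fresh = itertools.count()

def gadget(q, start=None, end=None):
    """Facts (rel, key, value) forming a path with trace q; 'start' and 'end'
    pin the first key and the last value, None means a fresh box constant."""
    consts = [start if start is not None else f"box{next(_fresh)}"]
    consts += [f"box{next(_fresh)}" for _ in range(len(q) - 1)]
    consts.append(end if end is not None else f"box{next(_fresh)}")
    return [(q[i], consts[i], consts[i + 1]) for i in range(len(q))]

# phi_a^b[q]    ~ gadget(q, start=a, end=b)
# phi_a^bot[q]  ~ gadget(q, start=a)
# phi_bot^b[q]  ~ gadget(q, end=b)
print(gadget("RX", start="s", end="a"))   # [('R', 's', 'box0'), ('X', 'box0', 'a')]
```

In the constructions below, the only conflicting facts are those whose key position is pinned to a constant shared by several gadgets; the fresh box constants never clash.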
### NL-Hardness We first show that if a path query violates \(\mathcal{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is **NL**-hard, and therefore not in **FO**. **Lemma 18**.: _If a path query \(q\) violates \(\mathcal{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is **NL**-hard._ Proof.: Assume that \(q\) does not satisfy \(\mathcal{C}_{1}\). Then, there exists a relation name \(R\) such that \(q=uRvRw\) and \(q\) is not a prefix of \(uRvRvRw\). It follows that \(Rw\) is not a prefix of \(RvRw\). Since \(Rv\neq\varepsilon\), there exists no (conjunctive query) homomorphism from \(q\) to \(uRv\). The problem \(\mathsf{REACHABILITY}\) takes as input a directed graph \(G(V,E)\) and two vertices \(s,t\in V\), and asks whether \(G\) has a directed path from \(s\) to \(t\). This problem is **NL**-complete and remains **NL**-complete when the inputs are acyclic graphs. Recall that **NL** is closed under complement. We present a first-order reduction from \(\mathsf{REACHABILITY}\) to the complement of \(\mathsf{CERTAINTY}(q)\), for acyclic directed graphs. Let \(G=(V,E)\) be an acyclic directed graph and \(s,t\in V\). Let \(G^{\prime}=(V\cup\{s^{\prime},t^{\prime}\},E\cup\{(s^{\prime},s),(t,t^{\prime})\})\), where \(s^{\prime},t^{\prime}\) are fresh vertices. We construct an input instance \(\mathbf{db}\) for \(\mathsf{CERTAINTY}(q)\) as follows: * for each vertex \(x\in V\cup\{s^{\prime}\}\), we add \(\phi_{\bot}^{x}[u]\); * for each edge \((x,y)\in E\cup\{(s^{\prime},s),(t,t^{\prime})\}\), we add \(\phi_{x}^{y}[Rv]\); and * for each vertex \(x\in V\), we add \(\phi_{x}^{\perp}[Rw]\). This construction can be executed in **FO**. Figure 8 shows an example of the above construction. Observe that the only conflicts in **db** occur in \(R\)-facts outgoing from a same vertex. We now show that there exists a directed path from \(s\) to \(t\) in \(G\) if and only if there exists a repair of **db** that does not satisfy \(q\). Suppose that there is a directed path from \(s\) to \(t\) in \(G\). Then, \(G^{\prime}\) has a directed path \(P=s,x_{0},x_{1},\ldots,t,t^{\prime}\). Then, consider the repair \(\mathbf{r}\) that chooses the first \(R\)-fact from \(\phi_{x}^{y}[Rv]\) for each edge \((x,y)\) on the path \(P\), and the first \(R\)-fact from \(\phi_{y}^{\perp}[Rw]\) for each \(y\) not on the path \(P\). We show that \(\mathbf{r}\) falsifies \(q\). Assume for the sake of contradiction that \(\mathbf{r}\) satisfies \(q\). Then, there exists a valuation \(\theta\) for the variables in \(q\) such that \(\theta(q)\subseteq\mathbf{r}\). Since, as argued in the beginning of this proof, there exists no (conjunctive query) homomorphism from \(q\) to \(uRw\), it must be that all facts in \(\theta(q)\) belong to a path in \(\mathbf{r}\) with trace \(u\left(Rv\right)^{k}\), for some \(k\geq 0\). Since, by construction, no constants are repeated on such paths, there exists a (conjunctive query) homomorphism from \(q\) to \(u\left(Rv\right)^{k}\), which implies that \(Rw\) is a prefix of \(RvRw\), a contradiction. We conclude by contradiction that \(\mathbf{r}\) falsifies \(q\). Proof by contradiction. Suppose that there is no directed path from \(s\) to \(t\) in \(G\). Let \(\mathbf{r}\) be any repair of **db**; we will show that \(\mathbf{r}\) satisfies \(q\). Indeed, there exists a maximal path \(P=x_{0},x_{1},\ldots,x_{n}\) such that \(x_{0}=s^{\prime}\), \(x_{1}=s\), and \(\phi_{x}^{x_{i+1}}[Rv]\subseteq\mathbf{r}\). 
By construction, \(s^{\prime}\) cannot reach \(t^{\prime}\) in \(G^{\prime}\), and thus \(x_{n}\neq t^{\prime}\). Since \(P\) is maximal, we must have \(\phi_{x_{n}}^{\perp}[Rw]\subseteq\mathbf{r}\). Then \(\phi_{\perp}^{x_{n-1}}[u]\cup\phi_{x_{n-1}}^{x_{n}}[Rv]\cup\phi_{x_{n}}^{\perp} [Rw]\) satisfies \(q\). ### coNP-Hardness Next, we show the **coNP**-hard lower bound. **Lemma 19**.: _If a path query \(q\) violates \(\mathcal{C}_{3}\), then \(\mathsf{CERTAINTY}(q)\) is_ **coNP**_-hard._ Proof.: If \(q\) does not satisfy \(\mathcal{C}_{3}\), then there exists a relation \(R\) such that \(q=uRvRw\) and \(q\) is not a factor of \(uRvRvRv\). Note that this means that there is no homomorphism from \(q\) to \(uRvRvRv\). Also, \(u\) must be nonempty (otherwise, \(q=RvRv\) is trivially a suffix of \(RvRvRv\)). Let \(S\) be the first relation of \(u\). The proof is a first-order reduction from \(\mathsf{SAT}\) to the complement of \(\mathsf{CERTAINTY}(q)\). The problem \(\mathsf{SAT}\) asks whether a given propositional formula in CNF has a satisfying truth assignment. Given any formula \(\psi\) for \(\mathsf{SAT}\), we construct an input instance **db** for \(\mathsf{CERTAINTY}(q)\) as follows: * for each variable \(z\), we add \(\phi_{z}^{\perp}[Rw]\) and \(\phi_{z}^{\perp}[RvRv]\); * for each clause \(C\) and positive literal \(z\) of \(C\), we add \(\phi_{C}^{z}[u]\); * for each clause \(C\) and variable \(z\) that occurs in a negative literal of \(C\), we add \(\phi_{C}^{z}[uRv]\). This construction can be executed in **FO**. Figure 9 depicts an example of the above construction. Intuitively, \(\phi_{z}^{\perp}[Rw]\) corresponds to setting the variable \(z\) to true, and \(\phi_{z}^{\perp}[RvRw]\) to false. There are two types of conflicts that occur in **db**. First, we have conflicting facts of the form \(S(\underline{C},*)\); resolving this conflict corresponds to the clause \(C\) choosing one of its literals. Moreover, for each variable \(z\), we have conflicting facts of the form \(R(\underline{z},*)\); resolving this conflict corresponds to the variable \(z\) choosing a truth assignment. We show now that \(\psi\) has a satisfying truth assignment if and only if there exists a repair of **db** that does not satisfy \(q\). Assume that there exists a satisfying truth assignment \(\sigma\) for \(\psi\). Then for any clause \(C\), there exists a variable \(z_{C}\in C\) whose corresponding literal is true in \(C\) under \(\sigma\). Consider the repair \(\mathbf{r}\) that: * for each variable \(z\), it chooses the first \(R\)-fact of \(\phi_{z}^{\perp}[Rw]\) if \(\sigma(z)\) is true, otherwise the first \(R\)-fact of \(\phi_{z}^{\perp}[RvRw]\); Figure 8: Database instance for the **NL**-hardness reduction from the graph \(G\) with \(V=\{s,a,t\}\) and \(E=\{(s,a),(a,t)\}\). * for each clause \(C\), it chooses the first \(S\)-fact of \(\phi^{z}_{C}[u]\) if \(z_{C}\) is positive in \(C\), or the first \(S\)-fact of \(\phi^{z}_{C}[uRv]\) if \(z_{C}\) is negative in \(C\). Assume for the sake of contradiction that \(\mathbf{r}\) satisfies \(q\). Then we must have a homomorphism from \(q\) to either \(uRw\) or \(uRvRvRv\). But the former is not possible, while the latter contradicts \(\mathcal{C}_{3}\). We conclude by contradiction that \(\mathbf{r}\) falsifies \(q\). Suppose that there exists a repair \(\mathbf{r}\) of \(\mathbf{db}\) that falsifies \(q\). 
Consider the assignment \(\sigma\): \[\sigma(z)=\begin{cases}\text{true}&\text{if }\phi^{\perp}_{\perp}[Rw]\subseteq \mathbf{r}\\ \text{false}&\text{if }\phi^{\perp}_{z}[RvRv]\subseteq\mathbf{r}\end{cases}\] We claim that \(\sigma\) is a satisfying truth assignment for \(\psi\). Indeed, for each clause \(C\), the repair must have chosen a variable \(z\) in \(C\). If \(z\) appears as a positive literal in \(C\), then \(\phi^{z}_{C}[u]\subseteq\mathbf{r}\). Since \(\mathbf{r}\) falsifies \(q\), we must have \(\phi^{\perp}_{z}[Rw]\subseteq\mathbf{r}\). Thus, \(\sigma(z)\) is true and \(C\) is satisfied. If \(z\) appears in a negative literal, then \(\phi^{z}_{C}[uRv]\subseteq\mathbf{r}\). Since \(\mathbf{r}\) falsifies \(q\), we must have \(\phi^{\perp}_{z}[RvRv]\subseteq\mathbf{r}\). Thus, \(\sigma(z)\) is false and \(C\) is again satisfied. ### PTIME-Hardness Finally, we show the **PTIME**-hard lower bound. **Lemma 20**.: _If a path query \(q\) violates \(\mathcal{C}_{2}\), then \(\mathsf{CERTAINTY}(p)\) is_ **PTIME**_-hard._ Proof.: Suppose \(q\) violates \(\mathcal{C}_{2}\). If \(q\) also violates \(\mathcal{C}_{3}\), then the problem \(\mathsf{CERTAINTY}(q)\) is **PTIME**-hard since it is **coNP**-hard by Lemma 19. Otherwise, it is possible to write \(q=uRv_{1}Rv_{2}Rw\), with three consecutive occurrences of \(R\) such that \(v_{1}\neq v_{2}\) and \(Rw\) is not a prefix of \(Rv_{1}\). Let \(v\) be the maximal path query such that \(v_{1}=vv_{1}^{+}\) and \(v_{2}=vv_{2}^{+}\). Thus \(v_{1}^{+}\neq v_{2}^{+}\) and the first relation names of \(v_{1}^{+}\) and \(v_{2}^{+}\) are different. Our proof is a reduction from the Monotone Circuit Value Problem (MCVP) known to be **PTIME**-complete [18]: **Problem:**: MCVP **Input:**: A monotone Boolean circuit \(C\) on inputs \(x_{1}\), \(x_{2}\),..., \(x_{n}\) and output gate \(o\); an assignment \(\sigma:\{x_{i}\mid 1\leq i\leq n\}\rightarrow\{0,1\}\). **Question:**: What is the value of the output \(o\) under \(\sigma\)? We construct an instance \(\mathbf{db}\) for \(\mathsf{CERTAINTY}(q)\) as follows: * for the output gate \(o\), we add \(\phi^{\perp}_{\perp}[uRv_{1}]\); * for each input variable \(x\) with \(\sigma(x)=1\), we add \(\phi^{\perp}_{x}[Rv_{2}Rw]\); * for each gate \(g\), we add \(\phi^{g}_{\perp}[u]\) and \(\phi^{\perp}_{g}[Rv_{2}Rw]\); * for each AND gate \(g=g_{1}\wedge g_{2}\), we add \[\phi^{g_{1}}_{g}[Rv_{1}]\cup\phi^{g_{2}}_{g}[Rv_{1}].\] Here, \(g_{1}\) and \(g_{2}\) can be gates or input variables; and * for each OR gate \(g=g_{1}\lor g_{2}\), we add \[\begin{array}{ccc}\phi_{g}^{e_{1}}[Rv]&\cup&\phi_{e_{1}}^{g_{1}}[v_{1}^{+}]& \cup&\phi_{e_{1}}^{e_{2}}[v_{2}^{+}]\\ \cup&\phi_{\perp}^{e_{2}}[u]&\cup&\phi_{e_{2}}^{g_{2}}[Rv_{1}]&\cup&\phi_{e_{2}}^ {e_{2}}[Rv]\end{array}\] where \(c_{1},c_{2}\) are fresh constants. This construction can be executed in **FO**. An example of the gadget constructions is shown in Figure 10. We next show that the output gate \(o\) is evaluated to \(1\) under \(\sigma\) if and only if each repair of **db** satisfies \(q\). Suppose the output gate \(o\) is evaluated to \(1\) under \(\sigma\). Consider any repair \(\mathbf{r}\). We construct a sequence of gates starting from \(o\), with the invariant that every gate \(g\) evaluates to \(1\), and there is a path of the form \(uRv_{1}\) in \(\mathbf{r}\) that ends in \(g\). The output gate \(o\) evaluates to \(1\), and also we have that \(\phi_{\perp}^{o}[uRv_{1}]\subseteq\mathbf{r}\) by construction. Suppose that we are at gate \(g\). 
If there is a \(Rv_{2}Rw\) path in \(\mathbf{r}\) that starts in \(g\), the sequence ends and the query \(q\) is satisfied. Otherwise, we distinguish two cases: 1. \(g=g_{1}\wedge g_{2}\). Then, we choose the gate with \(\phi_{g}^{g_{1}}[Rv_{1}]\subseteq\mathbf{r}\). Since both gates evaluate to \(1\) and \(\phi_{\perp}^{g}[u]\subseteq\mathbf{r}\), the invariant holds for the chosen gate. 2. \(g=g_{1}\lor g_{2}\). If \(g_{1}\) evaluates to \(1\), we choose \(g_{1}\). Observe that \(\phi_{\perp}^{g}[u]\cup\phi_{g}^{e_{1}}[Rv]\cup\phi_{e_{1}}^{g_{1}}[v_{1}^{+}]\) creates the desired \(uRv_{1}\) path. Otherwise \(g_{2}\) evaluates to \(1\). If \(\phi_{c_{2}}^{\perp}[Rw]\subseteq\mathbf{r}\), then there is a path with trace \(uRv_{1}\) ending in \(g\), and a path with trace \(Rv_{2}Rw\) starting in \(g\), and therefore \(\mathbf{r}\) satisfies \(q\). If \(\phi_{c_{2}}^{\perp}[Rw]\nsubseteq\mathbf{r}\), we choose \(g_{2}\) and the invariant holds. If the query is not satisfied at any point in the sequence, we will reach an input variable \(x\) evaluated at \(1\). But then there is an outgoing \(Rv_{2}Rw\) path from \(x\), which means that \(q\) must be satisfied. Proof by contraposition. Assume that \(o\) is evaluated to \(0\) under \(\sigma\). We construct a repair \(\mathbf{r}\) as follows, for each gate \(g\): * if \(g\) is evaluated to \(1\), we choose the first \(R\)-fact in \(\phi_{g}^{\perp}[Rv_{2}Rw]\); * if \(g=g_{1}\wedge g_{2}\) and \(g\) is evaluated to \(0\), let \(g_{i}\) be the gate or input variable evaluated to \(0\). We then choose \(\phi_{g}^{g_{i}}[Rv_{1}]\); * if \(g=g_{1}\lor g_{2}\) and \(g\) is evaluated to \(0\), we choose \(\phi_{g}^{e_{1}}[Rv]\); and * if \(g=g_{1}\lor g_{2}\), we choose \(\phi_{e_{2}}^{g_{2}}[Rv_{1}]\). For a path query \(p\), we write \(\mathtt{head}(p)\) for the variable at the key-position of the first atom, and \(\mathtt{rear}(p)\) for the variable at the non-key position of the last atom. Assume for the sake of contradiction that \(\mathbf{r}\) satisfies \(q\). Then, there exists some valuation \(\theta\) such that \(\theta(uRv_{1}Rv_{2}Rw)\subseteq\mathbf{r}\). Then the gate \(g^{*}:=\theta(\mathtt{head}(Rv_{1}))\) is evaluated to \(0\) by construction. Let \(g_{1}:=\theta(\mathtt{rear}(Rv_{1}))\). By construction, for \(g^{*}=g_{1}\wedge g_{2}\) or \(g^{*}=g_{1}\lor g_{2}\), we must have \(\phi_{g}^{g_{1}}[Rv_{1}]\subseteq\mathbf{r}\) and \(g_{1}\) is a gate or an input variable also evaluated to \(0\). By our construction of \(\mathbf{r}\), there is no path with trace \(Rv_{2}Rw\) outgoing from \(g_{1}\). However, \(\theta(Rv_{2}Rw)\subseteq\mathbf{r}\), this can only happen when \(g_{1}\) is an OR gate, and one of the following occurs: * Case that \(|Rw|\leq|Rv_{1}|\), and the trace of \(\theta(Rv_{2}Rw)\) is a prefix of \(Rv_{2}^{+}Rv_{1}\). Then \(Rw\) is a prefix of \(Rv_{1}\), a contradiction. Figure 10: Gadgets for the **PTIME**-hardness reduction. * Case that \(\left|Rw\right|>\left|Rv_{1}\right|\), and \(Rv_{2}^{+}Rv_{1}\) is a prefix of the trace of \(\theta(Rv_{2}Rw)\). Consequently, \(Rv_{1}\) is a prefix of \(Rw\). Then, for every \(k\geq 1\), \(\mathcal{L}^{\texttt{t}*}(q)\) contains \(uRv_{1}\left(Rv_{2}\right)^{k}Rw\). It is now easily verified that for large enough values of \(k\), \(uRv_{1}Rv_{2}w\) is not a factor of \(uRv_{1}\left(Rv_{2}\right)^{k}Rw\). By Lemmas 5 and 19, \(\mathsf{CERTAINTY}(q)\) is \(\mathsf{coNP}\)-hard. 
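For intuition on the source problem of this reduction, the following is a minimal sketch (in Python; the circuit encoding and the function name are our own illustration, not part of the reduction) of how a monotone Boolean circuit is evaluated under an assignment \(\sigma\), which is exactly the question MCVP asks. The gadgets above are built so that every repair of **db** satisfies \(q\) precisely when this evaluation returns \(1\) at the output gate.

```python
# Minimal MCVP evaluator: gates are ("AND", g1, g2), ("OR", g1, g2), or ("IN", x_i).
# The circuit is a dict from gate names to such tuples; sigma assigns 0/1 to the inputs.
def evaluate(circuit, sigma, gate, memo=None):
    if memo is None:
        memo = {}
    if gate in memo:
        return memo[gate]
    node = circuit[gate]
    if node[0] == "IN":
        val = sigma[node[1]]
    elif node[0] == "AND":
        val = evaluate(circuit, sigma, node[1], memo) and evaluate(circuit, sigma, node[2], memo)
    else:  # "OR"
        val = evaluate(circuit, sigma, node[1], memo) or evaluate(circuit, sigma, node[2], memo)
    memo[gate] = val
    return val

# Example: o = (x1 AND x2) OR x3 with sigma(x1)=1, sigma(x2)=0, sigma(x3)=1.
circuit = {"g1": ("AND", "x1", "x2"), "o": ("OR", "g1", "x3"),
           "x1": ("IN", "x1"), "x2": ("IN", "x2"), "x3": ("IN", "x3")}
sigma = {"x1": 1, "x2": 0, "x3": 1}
print(evaluate(circuit, sigma, "o"))  # 1
```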
## 8 Path Queries with Constants We now extend our complexity classification of \(\mathsf{CERTAINTY}(q)\) to path queries in which constants can occur. **Definition 16** (Generalized path queries).: A _generalized path query_ is a Boolean conjunctive query of the following form: \[q=\{R_{1}(\underline{s_{1}},s_{2}),R_{2}(\underline{s_{2}},s_{3}),\ldots,R_{k}(\underline{s_{k}},s_{k+1})\}, \tag{5}\] where \(s_{1}\), \(s_{2}\),..., \(s_{k+1}\) are constants or variables, all distinct, and \(R_{1}\), \(R_{2}\),..., \(R_{k}\) are (not necessarily distinct) relation names. Significantly, every constant can occur at most twice: at a non-primary-key position and the next primary-key position. The _characteristic prefix_ of \(q\), denoted by \(\mathsf{char}(q)\), is the longest prefix \[\{R_{1}(\underline{s_{1}},s_{2}),R_{2}(\underline{s_{2}},s_{3}),\ldots,R_{\ell}(\underline{s_{\ell}},s_{\ell+1})\},0\leq\ell\leq k\] such that no constant occurs among \(s_{1}\), \(s_{2}\),..., \(s_{\ell}\) (but \(s_{\ell+1}\) can be a constant). Clearly, if \(q\) is constant-free, then \(\mathsf{char}(q)=q\). **Example 8**.: If \(q=\{R(\underline{x},y)\), \(S(\underline{y},0)\), \(T(\underline{0},1)\), \(R(\underline{1},w)\}\), where \(0\) and \(1\) are constants, then \(\mathsf{char}(q)=\{R(\underline{x},y)\), \(S(\underline{y},0)\}\). The following lemma implies that if a generalized path query \(q\) starts with a constant, then \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\). This explains why the complexity classification in the remainder of this section will only depend on \(\mathsf{char}(q)\). **Lemma 21**.: _For any generalized path query \(q\), \(\mathsf{CERTAINTY}(p)\) is in \(\mathbf{FO}\), where \(p:=q\setminus\mathsf{char}(q)\)._ We now introduce some definitions and notations used in our complexity classification. The following definition introduces a convenient syntactic shorthand for the characteristic prefixes of Definition 16. **Definition 17**.: Let \(q=\{R_{1}(\underline{x_{1}},x_{2})\), \(R_{2}(\underline{x_{2}},x_{3})\),..., \(R_{k}(\underline{x_{k}},x_{k+1})\}\) be a path query. We write \(\llbracket q,c\rrbracket\) for the generalized path query obtained from \(q\) by replacing \(x_{k+1}\) with the constant \(c\). The constant-free path query \(q\) will be denoted by \(\llbracket q,\top\rrbracket\), where \(\top\) is a distinguished special symbol. **Definition 18** (Prefix homomorphism).: Let \[q = \{R_{1}(\underline{s_{1}},s_{2}),R_{2}(\underline{s_{2}},s_{3}),\ldots,R_{k}(\underline{s_{k}},s_{k+1})\}\] \[p = \{S_{1}(\underline{t_{1}},t_{2}),S_{2}(\underline{t_{2}},t_{3}),\ldots,S_{\ell}(\underline{t_{\ell}},t_{\ell+1})\}\] be generalized path queries. A _homomorphism from \(q\) to \(p\)_ is a substitution \(\theta\) for the variables in \(q\), extended to be the identity on constants, such that for every \(i\in\{1,\ldots,k\}\), \(R_{i}(\underline{\theta(s_{i})},\theta(s_{i+1}))\in p\). Such a homomorphism is a _prefix homomorphism_ if \(\theta(s_{1})=t_{1}\). **Example 9**.: Let \(q=\{R(\underline{x},y)\), \(R(\underline{y},1)\), \(S(\underline{1},z)\}\), and \(p=\{R(\underline{x},y)\), \(R(\underline{y},z)\), \(R(\underline{z},1)\}\). Then \(\mathsf{char}(q)=\{R(\underline{x},y),R(\underline{y},1)\}=\llbracket RR,1\rrbracket\) and \(p=\llbracket RRR,1\rrbracket\). There is a homomorphism from \(\mathsf{char}(q)\) to \(p\), but there is no prefix homomorphism from \(\mathsf{char}(q)\) to \(p\).
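To make Definitions 16-18 concrete, here is a minimal sketch (in Python; the encoding of a query as a word of relation names plus a map of constant positions, and all function names, are our own illustration). Since all terms of a generalized path query are distinct, a homomorphism must send the atoms of \(q\) onto consecutive atoms of \(p\), so the test reduces to matching the relation word at some offset while matching constants literally; the sketch reproduces Example 9.

```python
# A generalized path query is encoded as (rels, consts): rels lists R_1..R_k, and
# consts maps a position i in {1, ..., k+1} to a constant whenever s_i is a constant.
def char_prefix(rels, consts):
    # Longest prefix whose key positions s_1..s_l carry no constant (Definition 16).
    l = 0
    while l < len(rels) and (l + 1) not in consts:
        l += 1
    return rels[:l], {i: c for i, c in consts.items() if i <= l + 1}

def homomorphism(q, p, prefix=False):
    # Atoms of q must map onto consecutive atoms of p; constants must match literally.
    (qr, qc), (pr, pc) = q, p
    offsets = [0] if prefix else range(len(pr) - len(qr) + 1)
    for off in offsets:
        if pr[off:off + len(qr)] != qr:
            continue
        if all(pc.get(off + i) == c for i, c in qc.items()):
            return True
    return False

# Example 9: q = {R(x,y), R(y,1), S(1,z)}, p = [[RRR, 1]].
q = (["R", "R", "S"], {3: "1"})
p = (["R", "R", "R"], {4: "1"})
cq = char_prefix(*q)                      # (["R", "R"], {3: "1"}), i.e. [[RR, 1]]
print(cq)
print(homomorphism(cq, p))                # True
print(homomorphism(cq, p, prefix=True))   # False
```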
The following conditions generalize \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\) from constant-free path queries to generalized path queries. Let \(\gamma\) be either a constant or the distinguished symbol \(\top\). \(\mathcal{D}_{1}\)**:**: Whenever \(\mathsf{char}(q)=\llbracket uRvRv,\gamma\rrbracket\), there is a prefix homomorphism from \(\mathsf{char}(q)\) to \(\llbracket uRvRvRv,\gamma\rrbracket\). \(\mathcal{D}_{2}\)**:**: Whenever \(\mathsf{char}(q)=\llbracket uRvRv,\gamma\rrbracket\), there is a homomorphism from \(\mathsf{char}(q)\) to \(\llbracket uRvRvRv,\gamma\rrbracket\); and whenever \(\mathsf{char}(q)=\llbracket uRvRvRv,\gamma\rrbracket\) for consecutive occurrences of \(R\), \(v_{1}=v_{2}\) or there is a prefix homomorphism from \(\llbracket Rw,\gamma\rrbracket\) to \(\llbracket Rv_{1},\gamma\rrbracket\). \(\mathcal{D}_{3}\)**:**: Whenever \(\mathsf{char}(q)=\llbracket uRvRv,\gamma\rrbracket\), there is a homomorphism from \(\mathsf{char}(q)\) to \(\llbracket uRvRvRv,\gamma\rrbracket\). It is easily verified that if \(\gamma=\top\), then \(\mathcal{D}_{1}\), \(\mathcal{D}_{2}\), and \(\mathcal{D}_{3}\) are equivalent to, respectively, \(\mathcal{C}_{1}\), \(\mathcal{C}_{2}\), and \(\mathcal{C}_{3}\). Likewise, the following theorem degenerates to Theorem 3 for path queries without constants. **Theorem 4**.: _For every generalized path query \(q\), the following complexity upper bounds obtain:_ * _if_ \(q\) _satisfies_ \(\mathcal{D}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{FO}\)_;_ * _if_ \(q\) _satisfies_ \(\mathcal{D}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{NL}\)_; and_ * _if_ \(q\) _satisfies_ \(\mathcal{D}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{PTIME}\)_._ _The following complexity lower bounds obtain:_ * _if_ \(q\) _violates_ \(\mathcal{D}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{NL}\)_-hard;_ * _if_ \(q\) _violates_ \(\mathcal{D}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{PTIME}\)_-hard; and_ * _if_ \(q\) _violates_ \(\mathcal{D}_{3}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{coNP}\)_-complete._ Finally, the proof of Theorem 4 reveals that for generalized path queries \(q\) containing at least one constant, the complexity of \(\mathsf{CERTAINTY}(q)\) exhibits a trichotomy (instead of a tetrachtotomy as in Theorem 4). **Theorem 5**.: _For any generalized path query \(q\) containing at least one constant, the problem \(\mathsf{CERTAINTY}(q)\) is either in \(\mathbf{FO}\), \(\mathbf{NL}\)-complete, or \(\mathbf{coNP}\)-complete._ ## 9 Related Work Inconsistencies in databases have been studied in different contexts [8, 21, 22]. Consistent query answering (CQA) was initiated by the seminal work by Arenas, Bertossi, and Chomicki [3]. After twenty years, their contribution was acknowledged in a _Gems of PODS session_[5]. An overview of complexity classification results in CQA appeared recently in the _Database Principles_ column of SIGMOD Record [41]. The term \(\mathsf{CERTAINTY}(q)\) was coined in [39] to refer to CQA for Boolean queries \(q\) on databases that violate primary keys, one per relation, which are fixed by \(q\)'s schema. 
The complexity classification of \(\mathsf{CERTAINTY}(q)\) for the class of self-join-free Boolean conjunctive queries started with the work by Fuxman and Miller [17], and was further pursued in [23, 26, 27, 28, 30, 32], which eventually revealed that the complexity of \(\mathsf{CERTAINTY}(q)\) for self-join-free conjunctive queries displays a trichotomy between \(\mathbf{FO}\), \(\mathbf{L}\)-complete, and \(\mathbf{coNP}\)-complete. A few extensions beyond this trichotomy result are known. It remains decidable whether or not \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\) for self-join-free Boolean conjunctive queries with negated atoms [29], with respect to multiple keys [31], and with unary foreign keys [20], all assuming that \(q\) is self-join-free. Little is known about \(\mathsf{CERTAINTY}(q)\) beyond self-join-free conjunctive queries. Fontaine [14] showed that if we strengthen Conjecture 1 from conjunctive queries to unions of conjunctive queries, then it implies Bulatov's dichotomy theorem for conservative CSP [6]. This relationship between CQA and CSP was further explored in [34]. In [1], the authors show the \(\mathbf{FO}\) boundary for \(\mathsf{CERTAINTY}(q)\) for constant-free Boolean conjunctive queries \(q\) using a single binary relation name with a singleton primary key. Figueira et al. [13] have recently discovered a simple fixpoint algorithm that solves \(\mathsf{CERTAINTY}(q)\) when \(q\) is a self-join free conjunctive query or a path query such that \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{PTIME}\). The counting variant of the problem \(\mathsf{CERTAINTY}(q)\), denoted \(\sharp\mathsf{CERTAINTY}(q)\), asks to count the number of repairs that satisfy some Boolean query \(q\). For self-join-free Boolean conjunctive queries, \(\sharp\mathsf{CERTAINTY}(q)\) exhibits a dichotomy between \(\mathbf{FP}\) and \(\sharp\mathbf{PTIME}\)-complete [37]. This dichotomy has been shown to extend to self-joins if primary keys are singletons [38], and to functional dependencies [7]. In practice, systems supporting CQA have often used efficient solvers for Disjunctive Logic Programming, Answer Set Programming (ASP) or Binary Integer Programming (BIP), regardless of whether the CQA problem admits a first-order rewriting [2, 9, 10, 11, 12, 19, 24, 35, 36]. ## 10 Conclusion We established a complexity classification in consistent query answering relative to primary keys, for path queries that can have self-joins: for every path query \(q\), the problem \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), \(\mathbf{NL}\)-complete, \(\mathbf{PTIME}\)-complete, or \(\mathbf{coNP}\)-complete, and it is decidable in polynomial time in the size of \(q\) which of the four cases applies. If \(\mathsf{CERTAINTY}(q)\) is in **FO** or in **PTIME**, rewritings of \(q\) can be effectively constructed in, respectively, first-order logic and Least Fixpoint Logic. For binary relation names and singleton primary keys, an intriguing open problem is to generalize the form of the queries, from paths to directed rooted trees, DAGs, or general digraphs. The ultimate open problem is Conjecture 1, which conjectures that for every Boolean conjunctive query \(q\), \(\mathsf{CERTAINTY}(q)\) is either in **PTIME** or **coNP**-complete. **Acknowledgements.** This work is supported by the National Science Foundation under grant IIS-1910014.
2309.05063
Federated Learning Incentive Mechanism under Buyers' Auction Market
Auction-based Federated Learning (AFL) enables open collaboration among self-interested data consumers and data owners. Existing AFL approaches commonly assume a sellers' market, in which the service clients, as sellers, are treated as scarce resources, so the aggregation servers, as buyers, need to compete for them through bidding. Yet, as the technology progresses, an increasing number of qualified clients are now capable of performing federated learning tasks, leading to a shift from a sellers' market to a buyers' market. In this paper, we shift the angle by adapting the procurement auction framework, aiming to explain the pricing behavior under a buyers' market. Our modeling starts with a basic setting under complete information, then moves further to the scenario where sellers' information is not fully observable. In order to select clients with high reliability and data quality, and to protect against external attacks, we utilize a blockchain-based reputation mechanism. The experimental results validate the effectiveness of our approach.
Jiaxi Yang, Zihao Guo, Sheng Cao, Cuifang Zhao, Li-Chuan Tsai
2023-09-10T16:09:02Z
http://arxiv.org/abs/2309.05063v1
# Federated Learning Incentive Mechanism under Buyers' Auction Market ###### Abstract Auction-based Federated Learning (AFL) enables open collaboration among self-interested data consumers and data owners. Existing AFL approaches are commonly under the assumption of sellers' market in that the service clients as sellers are treated as scarce resources so that the aggregation servers as buyers need to compete the bids. Yet, as the technology progresses, an increasing number of qualified clients are now capable of performing federated learning tasks, leading to shift from sellers' market to a buyers' market. In this paper, we shift the angle by adapting the procurement auction framework, aiming to explain the pricing behavior under buyers' market. Our modeling starts with basic setting under complete information, then move further to the scenario where sellers' information are not fully observable. In order to select clients with high reliability and data quality, and to prevent from external attacks, we utilize a blockchain-based reputation mechanism. The experimental results validate the effectiveness of our approach. ## I Introduction Due to the costs (e.g., the risk of privacy leakage and consumption of computation resources) for clients to participate in federated learning (FL) tasks, incentive mechanism design for FL has received significant research interest [1]. As one of the efficient methods to address this issue, auction-based FL (AFL) is a promising approach that has received a lot of attention. As a buyer, the aggregation server recruits several clients to contribute their local data and computation resources to help complete the FL tasks. According to the objective of optimization, existing studies can be divided into three categories [2]: _(1) maximize profit of buyers:_ the aim of the aggregation server here is to maximize its own utility by efficiently recruiting high-quality clients and ensuring the model training process converges quickly to obtain an effective global model; _(2) maximize profit of sellers:_ the clients determine how much of their local data and computation resources they are willing to contribute to the FL tasks. Each client bids to maximize their own expected profit from participating in; _(3) maximize profit of FL community:_ the problem is formulated to server-client matching and pricing and optimize the utility of the whole FL community. Although aforementioned methods have their own merit considerations, they all suffered a constrained assumption: _the service providers (clients) are considered scarce resources, requiring the aggregation servers (buyers) to compete for recruiting them._ In other words, the clients constitute the seller market, in which bargaining power is positively related to their scarcity. Above situation, however, will be changed when the number of qualified clients increase to certain extent. The pricing behavior will be different and the bargaining power may shift as the clients market become more and more competitive. Fig. 1 illustrates this phenomenon. Motivated by this observation, we attempt to reconsider the pricing behavior in buyers' market and take a different perspective by adopting the procurement auction. Considering information asymmetry between the aggregation server and clients, we compare the performance of our approach under both complete and incomplete information scenarios. 
And under the incomplete information setting, some private information (_e.g.,_ efficiency) of clients is not completely observable, which lead to the allocation inefficiency for the aggregation server compared to complete information scenario. To alleviate this issue, the aggregation server needs to pay information rent for allocation efficiency increase. We try to explore this and discuss the trade-off problem that the aggregation server faces: _sharing more information improves allocation efficiency but also leads to higher information rents_. To determine the winners selection and protect from potential security threats (_e.g.,_ poisoning attack), the aggregation server need to select top-\(k\) clients with high reputation. We further propose a blockchain based reputation mechanism to enhance the trustworthiness for the reputation record storage. Fig. 1: The market transition from monopoly seller market to competitive seller market. The main contributions of our work are presented as follows: * To the best of our knowledge, we pioneer to explore AFL in buyers' market. * Due to the information asymmetry concern, we separately discuss the trade-off problem of information rent by clients under the both complete and incomplete information setting. * We design a reputation mechanism to select candidate clients for the aggregation server. To make it more trustworthy, we use blockchain technology for reputation management. * We perform extensive experiments comparing our approach to several baseline methods, and the results demonstrate the effectiveness of our proposed approach. ## II Related Work Existing auction-based incentive mechanisms in Federated Learning (FL) can be categorized based on their optimization targets. The first two categories focus on maximizing the utility of clients and the FL community. Considering the competitive and cooperative relationship among clients, multi-agent reinforcement learning is applied to the auction process and achieve the maximum profit of clients. Methods falling into another category employ various auction approaches, such as greedy-based auction and double auction, to determine the winners and maximize social welfare [3, 4]. Others also aim to minimize social cost through procurement auctions in Non-IID settings of FL [5]. Our work is is orthogonal to these works and addresses a different aspect of the problem. The category addressed in this paper similarly aims to maximize the profits obtained by the aggregation server. Existing studies use procurement auction to tackle this, which involves one buyer (the aggregation server) and multiple sellers (clients) to maximize the utility of the aggregation server [6, 7, 8]. Additionally, techniques such as reinforcement learning and graph neural networks are combined with procurement auctions to address this issue [8, 9]. Previous works assume a sellers' market where clients have some bargaining power and can adjust their compensation through their actions. However, this assumption becomes impractical with the increasing number of potential clients. As competitive degree rises, the market gradually shifts to a buyers' market. ## III System Model In FL ecosystem, there are typically two main parties: an aggregation server and multiple clients. The training process involves updating local models using clients' private data and computation resource, and the primary responsibility of the aggregation server is to recruit clients and coordinate them for model training in a decentralized manner. 
Therefore, the aggregation server plays a role as buyer, and pays for the work of clients which are regarded as sellers. The workflow of an auction is shown in Fig. 2. The scenario involves a server and \(N\) clients. _Firstly_, the clients have choice to reveal their private information (_e.g.,_ efficiency \(\theta_{i}\)) to the aggregation server (Sec. IV and Sec. V). The efficiency in implementing the project is \(\theta\in[0,1]\), so the clients are regarded as more efficient when their efficiency parameter \(\theta_{i}\) increases. _Secondly_, the aggregation server chooses the output-transfer pair, \((q_{i},R_{i})\), for each client \(i\) to maximize its profit. The notation \(q_{i}\) denotes the expected output of the client \(i\) (_e.g.,_ the improvement of test accuracy) _Thirdly_, the client \(i\) select whether to participate in, subject to the their rationality condition \(U_{i}(q_{i}(\theta),\theta_{i})\geq 0\) for all \(\theta_{i}\in[0,1]\). _Fourthly_, by providing historical reputation record from blockchain, the aggregation server selects the top-\(k\) reputation clients in the auction. ## IV The Approach Under the Complete Information ### _Problem Formulation_ Consider a scenario where a server invites \(N\) clients to participate in a computing contract. Initially, we assume that all clients are willing to disclose their private efficiency level, denoted as \(\theta_{i}\), to the server. However, in the subsequent section V, we relax this assumption and allow clients \(i\) to keep their efficiency level \(\theta_{i}\) private and unobservable to the aggregation server. The cost for client \(i\) to implement the contract is denoted as \(C_{i}(q_{i},\theta_{i})\), which is increasing, convex in \(q_{i}\), and also decreasing, convex in the efficiency level \(\theta_{i}\) of client \(i\). Specifically, we assume that the cost function of each client is: \[C_{i}(q_{i},\theta_{i})=\frac{q_{i}^{2}}{1+\delta\cdot\theta_{i}}. \tag{1}\] The contract \(C_{i}(q_{i},\theta_{i})\) must satisfy the condition \(\frac{\partial^{2}C_{i}(q_{i},\theta_{i})}{\partial q_{i}\partial\theta_{i}}\leq 0\). The negative sign of the cross-partial derivative \(\frac{\partial^{2}C_{i}(q_{i},\theta_{i})}{\partial q_{i}\partial\theta_{i}}\) implies that as the efficiency of client \(i\) increases, its marginal cost of computing decreases. In other words, the value of \(\frac{\partial^{2}C_{i}(q_{i},\theta_{i})}{\partial q_{i}\partial\theta_{i}}\) decreases as \(\theta_{i}\) increases. As a result, each client has a quasi-linear utility function, represented as follows: \[U_{client}(q_{i},\theta_{i})=R(q_{i})-C_{i}(q_{i},\theta_{i}), \tag{2}\] where \(R(q_{i})\) represents the transfer (reward) that client \(i\) receives from the aggregation server. \(q_{i}\) denoted in equation (3) is the contribution of client \(i\) for model training, and we define it as the discrepancy between the test accuracy of model \(\mathcal{M}\) before and after the local training. For simplicity, assume Fig. 2: Pipeline of our approach that clients earn a zero reservation utility if they choose to not participate in the auction. Then the server's utility function from the client \(i\) is in the equation (4). \[q_{i}=Acc(\mathcal{M}_{local})-Acc(\mathcal{M}_{global}) \tag{3}\] \[U_{server}=V(q_{i})-R(q_{i}), \tag{4}\] where \(V(q_{i})\) denotes the value that the server assigns to \(q_{i}\), which can be denoted as \(V(q_{i})=\lambda\cdot q_{i}\). 
The set of clients that participate in the FL task is denoted as \(S=\{s_{1},s_{2},...,s_{k}\}\). Thus the final utility function of the aggregation server is: \[U_{server}=\sum_{i=1}^{k}(V(q_{i})-R(q_{i})). \tag{5}\] Under the complete information, the optimization problem of the aggregation server in buyers' market is formally given below. **Problem 1** (Maximize server's utility function under the complete information): \[\max\sum_{i=0}^{k}(V(q_{i})-R(q_{i})),\] subject to incentive compatibility. **Definition 1** (Incentive Compatibility): _The incentive mechanism is incentive compatibility if it is a dominant strategy for each client \(i\) and they cannot increase their payoff by misreporting private information regardless of what others do._ \[U_{client}(q_{i})\geq U(q_{i}(\hat{\theta_{i}}),\hat{\theta_{i}}). \tag{6}\] The aggregation server needs to ensure that targeted clients obtain non-negative payoff in equation (7), _i.e._, satisfy Individual Rationality (IR) constraints as below: **Definition 2** (Individual Rationality): _The incentive mechanism is individually rational if each targeted client \(i\) receives a non-negative payoff by accepting the expected reward \(R(q_{i})\) intended for his type, i.e.,_ \[U_{client}(q_{i})\geq 0,i\in N. \tag{7}\] for every \(\theta_{i}\), where \(\hat{\theta_{i}}\neq\theta_{i}\). ### _Optimal Solution_ To solve the problem 1, we differentiate equation (4) with respect to \(q_{i}\) and obtain: \[\frac{\partial V(q_{i}^{CI})}{\partial q_{i}}-\frac{\partial C_{i}(q_{i}^{CI},\theta_{i})}{\partial q_{i}}=0, \tag{8}\] where \(q_{i}^{CI}\) denotes the optimal output under complete information (CI). Rearranging this first-order condition yields: \[\underbrace{\frac{\partial V(q_{i}^{CI})}{\partial q_{i}}}_{\begin{subarray}{ c}MB_{i}\end{subarray}}=\underbrace{\frac{\partial C_{i}(q_{i}^{CI},\theta_{i})}{ \partial q_{i}}}_{\begin{subarray}{c}MC_{i}\end{subarray}} \tag{9}\] **Input**: FL task \(\tau\), historical reputation \(\zeta^{\tau-1}\) **Output**: Global model \(\mathcal{M}_{global}\) ``` 1:if\(\theta_{i}\) is observable then 2: (\(q_{i}^{CI}\), \(R_{i}^{CI}\)) \(\rightarrow\) clients \(i\) 3:else 4: (\(q_{i}^{*}\), \(R_{i}^{*}\)) \(\rightarrow\) clients \(i\) 5:endif 6:for each clients do 7:if\(U_{client}(q_{i})\geq 0\)then 8:\(S\leftarrow\) client \(i\) //Accept to participate in 9:endif 10:endfor 11: Server determines participants \(S=\{s_{1},s_{2},...,s_{k}\}\) 12: Model training \(\mathcal{M}_{global}\gets Aggregation(\mathcal{M}_{local,i})\) 13:for each clients do 14: Update reputation \(\zeta_{i}^{(\tau)}\) 15:endfor 16:return\(\mathcal{M}_{global}\) ``` **Algorithm 1** AFL in Buyers' Market Above result suggests that the server increases procuring the output until its marginal benefit \(MB_{i}\) coincides with associated marginal cost \(MC_{i}\). Since return function \(V(q_{i})\) is increasing and concave, its derivative lies in the positive quadrant but decreases in \(q_{i}\). The crossing point between the marginal benefit and cost functions entails \(MB_{i}\) = \(MC_{i}\), yielding a socially optimal output. If this computing contract produces a larger marginal benefit, the \(MB_{i}\) function shifts upward, increasing the socially optimal output \(q_{i}^{CI}\). In contrast, an increase in the marginal cost of computing, \(\frac{\partial C_{i}(q_{i},\theta_{i})}{\partial q_{i}}\) yields an upward shift in the \(MC_{i}\) function, ultimately reducing the socially optimal output \(Q_{i}^{CI}\) that the server implements. 
The optimal output solves \(MB_{i}=MC_{i}\), which in this parametric setting entails: \[\lambda=\frac{2q_{i}^{CI}}{1+\delta\cdot\theta_{i}} \tag{10}\] Solving for output \(q_{i}\), we obtain the optimal output and the associated transfer: \[q_{i}^{CI}=\frac{\lambda(1+\delta\cdot\theta_{i})}{2} \tag{11}\] \[R_{i}^{CI}=\frac{1}{1+\delta\cdot\theta_{i}}\left[\frac{\lambda(1+\delta\cdot\theta_{i})}{2}\right]^{2} \tag{12}\] ## V The Approach Under the Incomplete Information ### _Problem Formulation_ Consider the aforementioned auction (Sec. IV), but now assume that the efficiency \(\theta_{i}\) of every client \(i\) for implementing the project is private information. We assume that the efficiency \(\theta_{i}\in[0,1]\) follows the uniform distribution, which is common knowledge among all players. The aggregation server chooses the output-transfer pair, \((q_{i},R_{i})\), for each client \(i\) to maximize its utility function. **Problem 2** (Maximize server's utility function under the incomplete information): \[\max\sum_{i=0}^{M}E_{\theta_{i}}(V(q_{i})-R(q_{i})).\] ### _Optimal Solution_ After some algebraic manipulation, the first-order condition with respect to \(q_{i}\) becomes: \[\underbrace{\frac{\partial V(q_{i}^{*})}{\partial q_{i}}}_{MB_{i}}=\underbrace{\frac{\partial C_{i}(q_{i}^{*},\theta_{i})}{\partial q_{i}}-\overbrace{(1-\theta_{i})\frac{\partial^{2}C_{i}(q_{i}^{*},\theta_{i})}{\partial q_{i}\partial\theta_{i}}}^{Information\,Rent}}_{MVC_{i}} \tag{13}\] Since \(\frac{\partial V(q_{i})}{\partial q_{i}}=1\) and \(\frac{\partial C_{i}}{\partial q_{i}}=\frac{2q_{i}}{1+2\theta_{i}}\), the optimal output solves \(MB_{i}=MVC_{i}\), which in the current setting entails: \[1=\frac{2q_{i}}{1+2\theta_{i}}+(1-\theta_{i})\frac{4q_{i}}{\left(1+2\theta_{i}\right)^{2}} \tag{14}\] Thus, we obtain the optimal output \(q_{i}^{*}\): \[q_{i}^{*}=\frac{(1+2\theta_{i})^{2}}{6} \tag{15}\] In words, the aggregation server increases procurement until the point at which its marginal benefit coincides with its associated marginal virtual cost (MVC). This MVC embodies not only client \(i\)'s marginal cost but also the information rent that the server needs to provide in order to induce client \(i\) to report his type truthfully. From the above maximization problem, we can evaluate the transfer of client \(i\) at the optimal output \(q_{i}^{*}\), obtaining the optimal transfer to client \(i\) as follows: \[R_{i}(q_{i}^{*})=C_{i}(q_{i}^{*},\theta_{i})-(1-\theta_{i})\frac{\partial C_{i}(q_{i}^{*},\theta_{i})}{\partial\theta_{i}} \tag{16}\] Under a complete information setting, the last term in \(MVC_{i}\) (the information rent) was absent. Since the cross-partial derivative \(\frac{\partial^{2}C_{i}(q_{i},\theta_{i})}{\partial q_{i}\partial\theta_{i}}\) is negative, we obtain that \(MVC_{i}\geq MC_{i}\). Therefore, the socially optimal output under complete information is larger than that under incomplete information, \(q_{i}^{*}\leq q_{i}^{CI}\). Intuitively, the server must pay an information rent to all bidders to induce truthful revelation of their types, incurring more costs to implement the auction than under complete information, ultimately inducing lower output levels. This is commonly referred to in the literature as downward distortion for all bidders with efficiency levels \(\theta_{i}\neq 1\). However, the output of the bidder with the highest efficiency, \(\theta_{i}=1\), suffers no distortion when moving from a complete to an incomplete information context.
Indeed, \(MVC_{i}\) simplifies to \(MC_{i}\) when evaluated at \(\theta_{i}=1\), so the first-order conditions across information contexts coincide, and \(q_{i}^{CI}=q_{i}^{*}\). Intuitively, the most efficient bidder has no incentives to underreport his valuation at \(\theta_{i}=1\). This result is known as _no distortion at the top_. ## VI Reputation Mechanism Design In the client selection phase, we put forward a reputation mechanism for the aggregation server to choose clients with high reliability and data quality, while also reducing vulnerabilities to external risks such as poisoning attacks. To ensure the process is trustworthy, we incorporate blockchain technology to permanently and transparently log each client's reputation scores over time. Our reputation mechanism consists of two principal components: initial contribution measurement to assess each client's performance; followed by reputation calculation to derive scores based on measured contributions. By integrating blockchain in this manner, the selection process runs with full visibility and prevents any distortion of reputations for any purpose. ### _Contribution Measurement_ As a fairness valuation method, banzhaf index [10] from cooperative game theory can measure individual influence in collective decision making. As a result, we leverage banzhaf index as an efficient way to measure the contribution of each client in FL and formulate it as follows: \[\zeta_{i}^{(\tau)}=\frac{1}{2^{n-1}}\sum_{S\subseteq N\setminus i}[U_{server}(S \cup i)-U_{server}(S)], \tag{17}\] where \(\zeta_{i}^{(\tau)}\) represents the contribution of client \(i\) in the FL task \(\tau\). ### _Reputation Calculation_ To effectively evaluate clients' reputation, we normalize the contributions \(\zeta_{i}^{(\tau)}\) of clients \(i\) to reputation scores \(\varepsilon_{i}^{\tau}\). Inspired by [11], it is intuitive to give higher weights to more recent reputation records. The reputation score is calculated by equation (18). \[\varepsilon_{i}^{(\tau)}=\varepsilon_{i}^{(\tau-1)}*w_{1}+\zeta_{i}^{(\tau)}*w_ {2}, \tag{18}\] where \(w_{1}\) and \(w_{2}\) represent the reputation weights. ## VII Experiments To validate the efficiency of our approach, we aim to answer the following questions in this section. * **Q1: Performance Improvement.** Whether our approach has better performance compared to baseline methods? * **Q2: Poisoning Attack Detection.** Can our approach prevent from poisoning attack? * **Q3: Universality.** Does our approach work well with different aggregation algorithms? * **Q4: Robustness.** Is our approach robust enough to protect from external attack? These questions are examined in experiments on MNIST [12], Fashion MNIST [13] and CIFAR-10 [14]. We have established a total of \(m\) clients in the federated learning (FL) ecosystem, and datasets are divided among these \(m\) clients. The model trained on MNIST consists of three fully connected layers. For the model trained on Fashion MNIST, it consists of two convolutional layers and two fully connected layers. For CIFAR-10, we use the exact architecture of MobileNet [15] in their open sourced code. ### _Performance Improvement (Q1)_ To evaluate the utility that the aggregation server obtain and compare with baseline methods: price first [4] and randomized auction [16], we conduct experiments with different number of clients \(k\) selected clients in FL ecosystem under the various datasets and leverage FedAvg for aggregation. 
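Before turning to the results, the following is a minimal sketch (in Python; parameter values, sample sizes, and function names are our own illustration, with \(\lambda=1\) and \(\delta=2\) as in the parametric setting of Sec. V) of the quantity being compared: the server utility in (5), evaluated at the optimal contracts (11)-(12) under complete information and (15)-(16) under incomplete information.

```python
import random

LAM, DELTA = 1.0, 2.0          # lambda = 1, delta = 2, as in the parametric setting of Sec. V

def q_ci(theta):               # optimal output under complete information, eq. (11)
    return LAM * (1 + DELTA * theta) / 2

def r_ci(theta):               # transfer under complete information, eq. (12)
    return q_ci(theta) ** 2 / (1 + DELTA * theta)

def q_star(theta):             # optimal output under incomplete information, eq. (15)
    return (1 + 2 * theta) ** 2 / 6

def r_star(theta):             # transfer under incomplete information, eq. (16)
    q = q_star(theta)
    cost = q ** 2 / (1 + 2 * theta)
    dc_dtheta = -2 * q ** 2 / (1 + 2 * theta) ** 2   # d/dtheta of q^2 / (1 + 2*theta)
    return cost - (1 - theta) * dc_dtheta            # cost plus information rent

def server_utility(thetas, complete_info):
    # Server utility, eq. (5): sum over selected clients of V(q_i) - R(q_i), with V(q) = LAM * q.
    pairs = [(q_ci(t), r_ci(t)) if complete_info else (q_star(t), r_star(t)) for t in thetas]
    return sum(LAM * q - r for q, r in pairs)

random.seed(0)
thetas = [random.random() for _ in range(10)]       # theta_i drawn from U[0, 1]
print(server_utility(thetas, complete_info=True))   # ground-truth benchmark
print(server_utility(thetas, complete_info=False))  # lower, due to the information rent
```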
The results shown in TABLE I demonstrate that our approach outperforms the baseline methods. Given the presence of information asymmetry, we consider the performance achieved under the complete information scenario as the ground-truth. Our approach under the incomplete information scenario demonstrates a close resemblance to this ground-truth performance. It also indicates that paying information rent significantly reduces the disparity between the utility under incomplete information and complete information. ### _Poisoning Attack Detection (Q2)_ To verify our approach is trustworthy in preventing from poisoning attackers, we simulate three poisoning attacked clients in FL ecosystem. We calculate their reputation value by our approach and observe in Fig. 3 that the reputation of the poisoning attacked clients is lower than others'. ### _Universality (Q3)_ Considering different FL settings, our approach needs to perform well across different aggregation algorithms. To evaluate its effectiveness, we examine our approach under three aggregation algorithms: FedAvg, FedProx [17], and Scaffold [18], with both complete and incomplete information. The results in Fig. 4 demonstrate that our approach works well and has similar performance in different FL settings. ### _Robustness (Q4)_ As any self-interest client may have incentive to cheat reputation, the robustness of the blockchain-based reputation mechanism needs to be examined. We conduct experiments with various degrees of attacks and different ratios of attacked clients on reputation records. By comparing the reputation mechanisms without recording reputation on blockchain, the results in Fig. 5 show that by recording the reputation on blockchain, the aggregation server can get higher profit and establish a robust reputation mechanism. ## VIII Conclusion Casting aside established preconceptions, this paper applies an innovative analytical angle to gain new insights into the AFL incentive mechanisms under the market forces of buyers. We adopt procurement auction to approach the scenario where clients compete with one another to win the computing contract, and utilize blockchain-based reputation to select reliable candidates. Through experimental validation, our proposed design is shown to achieve desirable properties and outperform baseline approaches.
2308.16450
Generalized sharped cubic form and split spin factor algebra
There is a well-known construction of a Jordan algebra via a sharped cubic form. We introduce a generalized sharped cubic form and prove that the split spin factor algebra is induced by this construction and satisfies the identity $((a,b,c),d,b) + ((c,b,d),a,b) + ((d,b,a),c,b) = 0$. The split spin factor algebras have recently appeared in the classification of 2-generated axial algebras of Monster type fulfilled by T. Yabe; their properties were studied by J. McInroy and S. Shpectorov.
Vsevolod Gubarev, Farukh Mashurov, Alexander Panasenko
2023-08-31T04:37:50Z
http://arxiv.org/abs/2308.16450v1
# Generalized sharped cubic form and split spin factor algebra ###### Abstract There is a well-known construction of a Jordan algebra via a sharped cubic form. We introduce a generalized sharped cubic form and prove that the split spin factor algebra is induced by this construction and satisfies the identity \(((a,b,c),d,b)+((c,b,d),a,b)+((d,b,a),c,b)=0\). The split spin factor algebras have recently appeared in the classification of 2-generated axial algebras of Monster type fulfilled by T. Yabe; their properties were studied by J. McInroy and S. Shpectorov. _Keywords_: sharped cubic form, split spin factor algebra, Lie triple system. ## 1 Introduction The structure theory of nonassociative algebras and rings has been generally based on consideration of a concrete variety \(\mathcal{M}\) of algebras and then on the description or a search of (finite-dimensional) simple algebras from \(\mathcal{M}\). We may say that the structure theory of Jordan, alternative, Malcev etc. algebras was developed more or less in such way. Recently, a new approach to get plenty of simple nonassociative algebras appears. The notion of axial algebra was proposed by J.I. Hall, F. Rehren and S. Shpectorov in 2015 [4]. It defines a class of (non-associative) commutative algebras generated by specific idempotents, and the product in an axial algebra satisfies some restrictions depending on its type. Roughly speaking, we may say that these restrictions generalize the ones originated from Pierce decomposition fulfilled on associative, alternative, or Jordan algebras. Axial algebras of Jordan type are close to Jordan algebras, while axial algebras of Monster type generalize the Griess algebra. The latter has the Monster group exactly as its automorphism group. In the direction of axial algebras, different classifications of algebras with a small number of generators were stated. One of them, obtained by T. Yabe in 2020 (published in 2023 [16]), provides a list of all 2-generated axial algebras of Monster type \((\xi,\eta)\) admitting a flip between generating axes. One of the algebras from the Yabe's list was denoted as \(S(\alpha,E)\) and the properties of this algebra were studied by J. McInroy and S. Shpectorov in 2022 [10]. The authors called them as split spin factor algebras by analogy with spin factor algebra, the simple Jordan algebras of special form. The main goal of the current work to study the identities fulfilled on the split spin factors \(S(\alpha,t,E)\), where \(E\) is any vector space of dimension at least two endowed with a symmetric nondegenerate form \(\langle\cdot,\cdot\rangle\) and \(\alpha,t\) are parameters from the ground field \(F\). We show that there are no identities of degree 3 and 4 on \(S(\alpha,t,E)\), \(\alpha,t\notin\{0,1\}\), which do not follow from commutativity. Further, we prove that all identities on \(S(\alpha,E)\), \(\alpha\notin\{-1,0,1/2,1,2\}\), of degree 5 follows from commutativity and \[((a,b,c),d,b)+((c,b,d),a,b)+((d,b,a),c,b)=0,\] where \((a,b,c)=(ab)c-a(bc)\). We name it as the three associators identity. In 1965, J.M. Osborn gave the list of all irreducible relative to commutativity identities of degree 5 [12]. The fourth of these five identities [12, eq. (15)] with \(\delta_{2}=-\delta_{1}\neq 0\) is the three associators identity with one of the three variables \(a,c,d\) equal to \(b\), e. g., \(d=b\). There is a construction of a Jordan algebra by any sharped cubic form \((N,\#,c)\)[8]. 
For the proof, you need to verify dozens of relations in terms of \(N\), its derived maps \(T,S\), \(\#\), the triple product \(\{\cdot,\cdot,\}\) and the \(U\)-operator, see [9, Appendix C]. This construction, in particular, allows to build simple Jordan algebras of Albert type. In the work, we consider generalized sharped cubic form and prove the analogues of the known relations, which hold for sharped cubic forms. With the help of these relations (Lemmas 1-7), we more or less reduce checking the three associators identity on the algebra \(S(\alpha,t,E)\) to the properties of the Psi-map \(\Psi(r,s,q)\), which is defined via an associator, see SS5. Further, in Lemma 9, we show that \(\Psi(r,s,q)\) may be expressed in terms of the projections of \(r,s,q\) on \(E\) and its bilinear form \(\langle\cdot,\cdot\rangle\). Hence, \(\Psi(r,s,q)\) defines a Lie triple product on \(S(\alpha,t,E)\) and it is connected with the simple pre-Lie algebra from [13]. Finally, we prove in Theorem 3 that the three associators identity holds on \(S(\alpha,t,E)\). Let us provide a short outline of the work. In SS2, we give the definition of the split spin factor algebra \(S(\alpha,E)\) and its natural generalization \(S(\alpha,t,E)\). In SS3, we recall the results about sharped cubic form and induced Jordan algebra. In SS4, we introduce a generalized sharped cubic form and prove the main relations devoted to it. In SS5, we define the Psi-map and derive the equalities on it. The goal of SS6 is to prove that the three associators identity holds on \(S(\alpha,t,E)\). In SS7, with the help of computer algebra, we prove that all identities of degree not greater than 5 fulfilled on \(S(\alpha,E)\) follow from commutativity and the three associators identity. In the general case of \(S(\alpha,t,E)\), there are identities of degree 5, which do not follow from commutativity and the three associators identity. In SS8, we formulate open problems concerning generalized sharped cubic form and induced algebras. In particular, the following question remains to be open: what identity holds on all algebras induced by a generalized sharped cubic form? We assume that the ground field \(F\) is of characteristic not 2 and not 3. ## 2 Split spin factor algebra Let \(F\) be quadratically closed, i. e. roots of any quadratic equation over \(F\) lie in it. Below, we put the multiplication table for \(S(\alpha,E)=Fz_{1}+Fz_{2}+E\), where \(\dim E=2\) and there exists a nondegenerate bilinear form \(\langle\cdot,\cdot\rangle\) on \(E\): \[\begin{array}{c}z_{1}^{2}=z_{1},\quad z_{2}^{2}=z_{2},\quad z_{1}z_{2}=0, \quad ez_{1}=\alpha e,\quad ez_{2}=(1-\alpha)e,\\ ef=-\langle e,f\rangle(\alpha(\alpha-2)z_{1}+(\alpha^{2}-1)z_{2}),\ e,f\in E.\end{array} \tag{1}\] For \(\alpha\neq-1,2\), the algebra \(S(\alpha,E)\) admits the nondegenerate invariant bilinear form given by \[\begin{array}{c}(z_{1},z_{1})=\alpha+1,\quad(z_{2},z_{2})=2-\alpha,\quad(z_ {1},z_{2})=0,\\ (e,f)=(\alpha+1)(2-\alpha)\langle e,f\rangle,\quad(e,z_{i})=0,\ e,f\in E. \end{array} \tag{2}\] Invariancy means that \((ab,c)=(a,bc)\) for all \(a,b,c\in S(\alpha,E)\). Note that in (1) and (2) we may consider any vector space \(E\), not necessarily of dimension \(2\). More generally, we may study the algebra \(S(\alpha,t,E)\) depending on two parameters \(\alpha,t\), where \(E\) is a vector space of any dimension endowed with a nondegenerate bilinear form. 
Then the product of elements \(e,f\in E\) is defined by the formula \[ef=\langle e,f\rangle(z_{1}+tz_{2}).\] The split spin factor algebra \(S(\alpha,E)\) is an algebra \(S(\alpha,t,E)\) with \(t=(\alpha^{2}-1)/\alpha(\alpha-2)\). Surely, we require that \(\alpha\neq 0,2\). **Proposition 1**. Let \(E\) has a finite dimension \(n\geq 1\). Then \(S(\alpha,t,E)\) is a simple algebra if and only if \(\alpha\notin\{0,1\}\), \(t\neq 0\). Proof. Let \(\alpha\notin\{0,1\}\), \(t\neq 0\). There is a basis \(\{e_{1},\ldots,e_{n}\}\) in \(E\) so that \(\langle e_{i},e_{j}\rangle=\delta_{i,j}\). Let us show that \(S(\alpha,t,E)\) is simple. If \(I\) is a nonzero ideal in \(S(\alpha,t,E)\), then it contains a nonzero element \(x=\beta z_{1}+\gamma z_{2}+\sum\limits_{i=1}^{n}\alpha_{i}e_{i}\). If \(\alpha_{k}\neq 0\) for some \(k>0\), then \(I\) contains an element \[xe_{k}=\alpha_{k}(z_{1}+tz_{2})+(\alpha\beta+(1-\alpha)\gamma)e_{k}.\] Hence, \(y=z_{1}+tz_{2}+\delta e_{k}\in I\) for \(\delta=(\alpha\beta-\alpha\gamma+\gamma)/\alpha_{k}\). Then \(I\) contains \[(1-\alpha)yz_{1}-\alpha yz_{2}=(1-\alpha)z_{1}-\alpha tz_{2}.\] It means that \((1-\alpha)z_{1}\in I\) and \(\alpha tz_{2}\in I\). Thus, \(z_{1},z_{2}\in I\). We also have \[I\ni z_{1}e_{i}=\alpha e_{i}\] for all \(1\leq i\leq n\). Therefore, \(I=S(\alpha,t,E)\). If \(\alpha_{i}=0\) for all \(1\leq i\leq n\), then \(0\neq\beta z_{1}+\gamma z_{2}\in I\). It means that \(z_{i}\in I\) for some \(i\in\{1,2\}\). Then \(I\ni z_{i}e_{k}\) and we have \(e_{k}\in I\) for \(1\leq k\leq n\) by assumptions. But it means that \(e_{k}^{2}=z_{1}+tz_{2}\in I\) and \(z_{1},z_{2}\in I\) as above. So, \(I=S(\alpha,t,E)\) and \(S(\alpha,t,E)\) is a simple algebra. If \(\alpha=0\), then \(Fz_{1}\) is a proper ideal in \(S(\alpha,t,E)\). If \(\alpha=1\), then \(Fz_{2}\) is a proper ideal in \(S(\alpha,t,E)\). If \(t=0\), then \(Fz_{1}+\sum\limits_{i=1}^{n}Fe_{i}\) is a proper ideal in \(S(\alpha,t,E)\). \(\Box\) We will use a notation \(O(E)\) for a subgroup of \(\operatorname{Aut}(S(\alpha,t,E))\) obtained by an extension of the orthogonal group of \(E\), which elements fix \(z_{1}\) and \(z_{2}\). **Proposition 2**. Let \(E\) has a finite dimension \(n\geq 2\) and \(\alpha\notin\{0,1\}\), \(t\neq 0\). * If \(\alpha\neq 1/2\) or \(t\neq\pm 1\), then \(\operatorname{Aut}(S(\alpha,t,E))\cong O(E)\). * If \(\alpha=1/2\) and \(t=\pm 1\), then \(\operatorname{Aut}(S(\alpha,t,E))\cong\mathbb{Z}_{2}\times O(E)\). Proof. Let \(A=S(\alpha,t,E)\) and \(\varphi\in\operatorname{Aut}(A)\). There is a basis \(\{e_{1},\ldots,e_{n}\}\) in \(E\) so that \(\langle e_{i},e_{j}\rangle=\delta_{i,j}\). We want to describe all \(x\in A\) with \(\dim\operatorname{Ann}(x)=n\). Let \(x\in A\) so that \(\dim\operatorname{Ann}(x)=n\) and \(x=\beta z_{1}+\gamma z_{2}+\alpha_{1}e_{1}+\ldots+\alpha_{n}e_{n}\). Then \[xe_{i}=\alpha_{i}(z_{1}+tz_{2})+(\beta\alpha+\gamma(1-\alpha))e_{i}.\] If \(\beta\alpha+\gamma(1-\alpha)\neq 0\), then \(xe_{1},\ldots,xe_{n}\) are linearly independent. Moreover, we have \[xz_{1}=\beta z_{1}+\alpha(\alpha_{1}e_{1}+\cdots+\alpha_{n}e_{n}),\] \[xz_{2}=\gamma z_{2}+(1-\alpha)(\alpha_{1}e_{1}+\cdots+\alpha_{n}e _{n}).\] An assumption \(\beta\alpha+\gamma(1-\alpha)\neq 0\) means that \(\beta\neq 0\) or \(\gamma\neq 0\). If \(\beta\neq 0\), then \(xz_{1},xe_{1},\ldots,xe_{n}\) are linearly independent and \(\dim\operatorname{Ann}(x)\leq 1\), a contradiction. 
If \(\gamma\neq 0\), then \(xz_{2},xe_{1},\ldots,xe_{n}\) are linearly independent and \(\dim\operatorname{Ann}(x)\leq 1\), a contradiction. So, \(\beta\alpha+\gamma(1-\alpha)=0\) and \(\gamma=\alpha\beta/(\alpha-1)\). Suppose that \(\beta\neq 0\). If \(\alpha_{i}\neq 0\) for some \(i\), then \(xe_{i}\), \(xz_{1}\) and \(xz_{2}\) are linearly independent and \(\dim\operatorname{Ann}(x)<n\), a contradiction. Thus, \(x=\beta(z_{1}-\frac{\alpha}{1-\alpha}z_{2})\). If \(\beta=0\), then \(\gamma=0\) and \(x\in E\). Let us denote \(U=F\cdot(z_{1}-\frac{\alpha}{1-\alpha}z_{2})\). So, \(x\in A\) and \(\dim\operatorname{Ann}(x)=n\) if and only if \(x\in E\cup U\) and \(x\neq 0\). It means that \(\varphi(x)\in E\cup U\) for any \(x\in E\cup U\). Suppose that \(\varphi(e)\in U\) for some \(0\neq e\in E\). Since \(\dim E\geq 2\), there exists \(0\neq f\in E\) such that \(\varphi(f)\in E\). It means that \(\varphi(e+f)\notin E\cup U\), a contradiction. Hence, \(\varphi(E)=E\), \(\varphi(U)=U\). Therefore, \(\varphi(z_{1}-\frac{\alpha}{1-\alpha}z_{2})=\delta(z_{1}-\frac{\alpha}{1- \alpha}z_{2})\) for some nonzero \(\delta\). We have \(\varphi(z_{1}+z_{2})=z_{1}+z_{2}\), so \[\varphi(z_{1})=(\alpha+\delta(1-\alpha))z_{1}+(\alpha-\delta\alpha)z_{2}.\] Since \(z_{1}\) is an idempotent, hence \(\varphi(z_{1})\) is an idempotent. It means that \[\alpha+\delta(1-\alpha),\alpha-\delta\alpha\in\{0,1\}.\] If \(\alpha-\delta\alpha=1\), then \(\delta=(\alpha-1)/\alpha\). We have two cases: 1. \(\alpha+\delta(1-\alpha)=0\). It means that \(\delta=\alpha/(\alpha-1)\) and \(\alpha=1/2\) by above. Hence, \(\delta=-1\) and \(\varphi(z_{1}-z_{2})=-z_{1}+z_{2}\). It is easy to see that \(\varphi(z_{1})=z_{2}\), \(\varphi(z_{2})=z_{1}\). We have \[tz_{1}+z_{2}=\varphi(z_{1}+tz_{2})=\varphi(e_{1}^{2})=\varphi(e_{1})^{2}= \gamma(z_{1}+tz_{2})\] for some \(\gamma\in F\). Hence, \(\gamma=t\) and \(t^{2}=1\), \(t=\pm 1\). 2. \(\alpha+\delta(1-\alpha)=1\). It means that \((\alpha-1)=(\alpha-1)\delta\) and \(\delta=1\). If \(\delta=1\), then \(\varphi(z_{1})=z_{1}\) and \(\varphi(z_{2})=z_{2}\). If \(\delta=-1\), \(\alpha=1/2\), and \(t=\pm 1\), then \(\varphi(z_{1})=z_{2}\) and \(\varphi(z_{2})=z_{1}\). We have proved that \(\varphi(e)\in E\) for any \(e\in E\). In cases \(\delta=1\) and \(\delta=-1\), \(\alpha=1/2\), \(t=1\) the basis \(e_{1},\ldots,e_{n}\) is mapped to an orthogonal basis of \(E\), since \(\varphi(z_{1}+tz_{2})=z_{1}+tz_{2}\). It remains to prove that \(\operatorname{Aut}(A)\cong\mathbb{Z}_{2}\times O(E)\), when \(\delta=t=-1\) and \(\alpha=1/2\). In the case, we have \(\operatorname{Aut}(A)=\{(\sigma,\psi)\mid\sigma\in S_{2},\,\langle\psi(e),\psi (f)\rangle=\operatorname{sgn}(\sigma)\langle e,f\rangle,\,e,f\in E\}\), where \(\operatorname{sgn}(\sigma)\) denotes the sign of a permutation \(\sigma\in S_{2}\). Thus, \(\pi\colon\operatorname{Aut}(A)\to\mathbb{Z}_{2}\times O(E)\) defined as follows, \(\pi((\sigma,\psi))=(\sigma,\sqrt{-1}^{\operatorname{sgn}(\sigma)}\psi)\) is an isomorphism. Sharped cubic form and associated Jordan algebra In this paragraph, we recall the results concerned sharped cubic forms and induced algebras, which occur to be Jordan. We follow the monograph [9]. A map \(N\colon V\to F\) on a space \(V\) is called cubic form if for any \(\lambda\in F\) and \(x,y\in V\) \[N(x+\lambda y)=N(x)+\lambda N(x,y)+\lambda^{2}N(y,x)+\lambda^{3}N(x,y,z),\] where \(N(x,y)\) is quadratic in \(x\) and linear in \(y\) and \(N(x,y,z)\) is trilinear and symmetric. 
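As a quick illustration (using the norm \(N((x,y,z))=xyz\) on \(F^{3}\), the same formula that reappears in Example 1 below), a direct expansion identifies the maps entering this definition: for \(r=(x,y,z)\) and \(q=(x^{\prime},y^{\prime},z^{\prime})\), \[N(r+\lambda q)=(x+\lambda x^{\prime})(y+\lambda y^{\prime})(z+\lambda z^{\prime})=xyz+\lambda(xyz^{\prime}+xy^{\prime}z+x^{\prime}yz)+\lambda^{2}(xy^{\prime}z^{\prime}+x^{\prime}yz^{\prime}+x^{\prime}y^{\prime}z)+\lambda^{3}x^{\prime}y^{\prime}z^{\prime},\] so \(N(r,q)=xyz^{\prime}+xy^{\prime}z+x^{\prime}yz\), \(N(q,r)=xy^{\prime}z^{\prime}+x^{\prime}yz^{\prime}+x^{\prime}y^{\prime}z\), and the coefficient of \(\lambda^{3}\) is \(N(q)\), in agreement with (7) below.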
Given a cubic form \(N\) on a space \(V\), one can linearize it completely as follows, \[N(v,u,w)=N(v+u+w)-N(v+u)-N(v+w)-N(u+w)+N(u)+N(v)+N(w).\] Hence, \(N(r,r,r)=6N(r)\). **Definition 1**. Let \(V\) be a vector space endowed with a cubic form \(N\) and let \(c\in V\) be such that \(N(c)=1\) (we call \(c\) as basepoint). Denote by \(N(x,y,z)\) the complete linearization of \(N\). We introduce quadratic spur function, linear trace form, two bilinear forms and give another definition of the form \(N(x,y)\): \[S(r)=N(r,r,c)/2,\quad T(r)=N(r,c,c)/2, \tag{3}\] \[S(r,q)=N(r,q,c),\quad N(r,q)=N(r,r,q)/2,\] (4) \[(r,q)=T(r)T(q)-S(r,q). \tag{5}\] From the definition we derive that \[S(c)=T(c)=3,\quad(r,c)=T(r). \tag{6}\] Another way to define the function \(N(r,q)\) from the given norm function \(N\) is to present \[N(r+tq)=N(r)+tN(r,q)+t^{2}N(q,r)+t^{3}N(q). \tag{7}\] Thus, \(N(r,q,s)\) is a linearization of \(N(x,y)\). **Definition 2**. Let \(V\) be a vector space endowed with a cubic form \(N\) and basepoint \(c\). A sharp map \(\#\) on \(V\) for \((N,c)\) is a quadratic operator on \(V\) satisfying the following relations: \[(r^{\#},q)=N(r,q), \tag{8}\] \[(r^{\#})^{\#}=N(r)r,\] (9) \[c_{\#}r=T(r)c-r, \tag{10}\] where the sharp-product is given by the formula \[r_{\#}q=(r+q)^{\#}-r^{\#}-q^{\#}. \tag{11}\] Under the conditions, we call \((N,\#,c)\) as a sharped cubic form. Due to the definitions, \(c^{\#}=c\). **Theorem 1**. Let \(V\) be a vector space endowed with a sharped cubic form \((N,\#,c)\). Then a) \(V\) under the product \[rq=\frac{1}{2}(r_{\#}q+T(r)q+T(q)r-S(r,q)c) \tag{12}\] is an algebra with a unit \(c\). Moreover, every \(r\in V\) satisfies the cubic identity \[r^{3}-T(r)r^{2}+S(r)r-N(r)c=0, \tag{13}\] and \(r^{\#}=r^{2}-T(r)r+S(r)c\) (so, \(r^{\#}r=N(r)c\)). b) \(V\) is Jordan and the operators \[U_{r}(s)=(r,s)r-r^{\#}{}_{\#}s,\quad U_{r,q}(s):=U_{r+q}(s)-U_{r}(s)-U_{q}(s), \tag{14}\] \[\{r,s,q\}:=(r,s)q+(q,s)r-(r_{\#}q)_{\#}s \tag{15}\] coincide with the classical \(U\)-operators and Jordan triple product respectively. In [9], Theorem 1b) is proved via the following formulas. **Proposition 3**. Let \(V\) be a vector space endowed with a sharped cubic form \((N,\#,c)\). Then the following identities hold on \(V\): \[S(r)=T(r^{\#}),\quad S(r,q)=T(r_{\#}q), \tag{16}\] \[(rq,s)=(r,qs),\quad(r_{\#}q,s)=(r,q_{\#}s)=N(r,q,s),\quad(U_{r}( s),q)=(s,U_{r}(q)),\] (17) \[r^{\#}{}_{\#}(q_{\#}r)=N(r)q+(r^{\#},q)r,\quad(r^{\#}{}_{\#}q) _{\#}r=N(r)q+(r,q)r^{\#},\] (18) \[(r_{\#}q)^{\#}+r^{\#}{}_{\#}q^{\#}=(r^{\#},q)q+(r,q^{\#})r,\] (19) \[r^{\#}{}_{\#}r=-T(r)r^{\#}-T(r^{\#})r+(S(r)T(r)-N(r))c,\] (20) \[S(r^{\#},r)=S(r)T(r)-3N(r),\quad(r^{\#},r)=3N(r),\] (21) \[U_{r}(c)=r^{2},\quad U_{r,q}(c)=2rq,\quad U_{r}(r^{\#})=N(r)r, \quad(U_{r}(q))^{\#}=U_{r^{\#}}(q^{\#}),\] (22) \[U_{r}U_{r^{\#}}=N(r)^{2}{\rm id},\quad\{r,r^{\#},q\}=2N(r)q,\] (23) \[N(U_{r}(q))=N(r)^{2}N(q),\quad N(r^{\#})=N(r)^{2}. \tag{24}\] ## 4 Generalized sharped cubic form and its algebra Now, we suggest a construction, which generalizes sharped cubic form. **Definition 3**. Let \(V\) be a vector space endowed with a cubic form \(N\) and let \(c\in V\) be such that \(N(c)=1\). We also assume that a symmetric bilinear form \(\Delta\) is defined on \(V\) in such manner that \[\Delta(r,c)=0 \tag{25}\] for all \(r\in V\). Denote by \(N(x,y,z)\) the complete linearization of \(N\). 
As in Definition 1, we introduce quadratic spur function \(S(r)\), linear trace form \(T(r)\), bilinear form \(S(r,q)\) by (3), quadratic in \(r\) and bilinear in \(q\) form \(N(r,q)\) by (4) and new bilinear form \[(r,q)=T(r)T(q)-S(r,q)-\Delta(r,q). \tag{26}\] Let us call a pair \((N,\Delta)\) as a generalized cubic form. When \(\Delta=0\), we have an ordinary cubic form. The equalities (6) follow from the definition immediately. **Definition 4**. Let \(V\) be a vector space endowed with a generalized cubic form \((N,\Delta)\) and basepoint \(c\). A sharp map \(\#\) on \(V\) for \((N,\Delta,c)\) is a quadratic operator on \(V\) satisfying the following relations: \[(r_{\#}q,r)+(r^{\#},q)=3N(r,q), \tag{27}\] \[(r^{\#})^{\#}=(N(r)+\Delta(r^{\#},r))r,\] (28) \[c_{\#}r=T(r)c-r, \tag{29}\] where the sharp-product is given by (11). Under the conditions, we call \((N,\Delta,\#,c)\) as a generalized sharped cubic form. As above, we conclude that \(c^{\#}=c\). Given a generalized sharped cubic form \((N,\Delta,\#,c)\), let us define a product on \(V\) by (12). **Proposition 4**. Fix a scalar \(\lambda\in F\setminus\{-1\}\). Given a cubic form \(N\) defined on a vector space \(V\) with a basepoint \(c\), we get a generalized cubic form by the formulas \[(r,q)=\frac{1}{\lambda+1}\left((1+\lambda/3)T(r)T(q)-S(r,q)\right),\quad \Delta(r,q)=\lambda\left((r,q)-\frac{T(r)T(q)}{3}\right).\] Proof. The identity (26) holds by the definition. Also, we get \((r,c)=\frac{T(r)+\lambda T(r)}{1+\lambda}=T(r)\), hence, \(\Delta(r,c)=0\). \(\Box\) Let us call a generalized cubic form defined in Proposition 4 as an inner one. **Example 1**. Let \(A\) be an associative commutative algebra over a field \(F\) generated by the unit of \(F\) and an element \(\lambda\) such that \(\lambda^{2}=0\). Consider the space \(V=A\otimes_{F}F^{3}\cong A^{\otimes 3}\). We endow \(V\) with the form \(N((x,y,z))=xyz\) and take \(c=(1,1,1)\). Formally, \(N\) is not a cubic form, since it maps \(A^{\otimes 3}\) to \(A\) instead of \(F\). However, we get an interesting example of an algebra. Denote \(r=(x,y,z)\), \(q=(x^{\prime},y^{\prime},z^{\prime})\). Then \[T(r)=x+y+z,\quad S(r)=xy+xz+yz,\] \[S(r,q)=x(y^{\prime}+z^{\prime})+y(x^{\prime}+z^{\prime})+z(x^{ \prime}+y^{\prime}),\ N(r,q)=xyz^{\prime}+xy^{\prime}z+x^{\prime}yz.\] We define \[r^{\#}=(yz,xz,xy)-\lambda(y^{2}+z^{2}+2x(y+z),x^{2}+z^{2}+2y(x+z),x^{2}+y^{2} +2z(x+y)).\] and get by (12), \[rq=(1+\lambda)(xx^{\prime},yy^{\prime},zz^{\prime})+\lambda(yz^{\prime}+y^{ \prime}z,xz^{\prime}+x^{\prime}z,xy^{\prime}+x^{\prime}y)-\lambda T(r)T(q)c.\] By the definition, \[c^{\#}=(1-6\lambda)c,\quad rc=r-2\lambda T(r)c,\quad c_{\#}r=(1-4\lambda)T(r)c -r.\] Now, we consider two pairs of bilinear maps \((\cdot,\cdot)\) and \(\Delta\) such that (26) holds: \[(r,q)=xx^{\prime}+yy^{\prime}+zz^{\prime}+3\lambda S(r,q),\quad\Delta(r,q)=-3 \lambda S(r,q).\] When \(\Delta=\lambda=0\), we get an ordinary sharped cubic form. It is easy to check that a cubic form \((N,\Delta,\#,c)\) satisfies (27). 
Instead of (28), the following relation holds: \[(r^{\#})^{\#}=(N(r)+\Delta(r^{\#},r)+2\lambda T(r)S(r))r+2\lambda S(r)r^{\#}-2 \lambda(T(r)N(r)+S(r)^{2})c.\] There is some routine (see the code in GAP [3]) to derive the relations fulfilled for such maps (see the same or close equalities derived for all generalized sharped cubic forms below): \[\Delta(r,c)=-6\lambda T(r),\quad(r,c)=(1+6\lambda)T(r),\quad(r^{\#},r)=3N(r),\] \[T(r^{\#})=(1-6\lambda)(S(r)-\Delta(r,r))-2\lambda T(r)^{2},\] \[(r_{\#}q,s)-(r,q_{\#}s)=\Delta(r,q_{\#}s)-\Delta(r_{\#}q,s)=T(r)\Delta(q,s)-T( s)\Delta(r,q),\] \[(r_{\#}q,s)+(q_{\#}s,r)+(s_{\#}r,q)=3N(r,q,s),\] \[N(r^{\#})=N(r)(N(r)+\Delta(r^{\#},r)).\] **Example 2**. The general case of the split spin factor \(S(\alpha,t,E)\) with \[N(az_{1}+bz_{2}+v)=ab(\alpha a+\bar{\alpha}b)-\langle v,v\rangle(\bar{\alpha} ta+\alpha b), \tag{30}\] \[\Delta(az_{1}+bz_{2}+v,kz_{1}+lz_{2}+u)=\alpha(\alpha-1)(a-b)(k-l)-\langle u,v\rangle(\bar{\alpha}+\alpha t), \tag{31}\] \[(az_{1}+bz_{2}+v)^{\#}=(\alpha a+\bar{\alpha}b)(bz_{1}+az_{2})+(t-1)\langle v,v\rangle(-\bar{\alpha}z_{1}+\alpha z_{2})-(\bar{\alpha}a+\alpha b)v, \tag{32}\] where \(\bar{\alpha}=1-\alpha\) and \(c=z_{1}+z_{2}\) is a generalized sharped cubic form. Here \(a,b,k,l\in F\) and \(u,v\in E\). Indeed, \(N(c)=1\), \(\Delta(r,c)=0\). Further, for \(r=az_{1}+bz_{2}+v\), we have \[2T(r)=N(r+2c)-2N(r+c)-N(2c)+2N(c)+N(r)\\ =(a+2)(b+2)(\alpha(a+2)+\bar{\alpha}(b+2))-2(a+1)(b+1)(\alpha(a+ 1)+\bar{\alpha}(b+1))\\ +ab(\alpha a+\bar{\alpha}b)-6=2((1+\alpha)a+(2-\alpha)b),\] and (29) holds, since \[r_{\#}c=(r+c)^{\#}-r^{\#}-c^{\#}\\ =(\alpha(a+1)+\bar{\alpha}(b+1))((b+1)z_{1}+(a+1)z_{2})-(\alpha a +\bar{\alpha}b)(bz_{1}+az_{2})-v-c\\ =((1+\alpha)a+(2-\alpha)b)(z_{1}+z_{2})-az_{1}-bz_{2}-v=T(r)c-r.\] Now, we compute due to the definition, \[\Delta(r,r^{\#})=\alpha(\alpha-1)(a-b)((\alpha a+\bar{\alpha}b)(b-a)-(t-1) \langle v,v\rangle)+(\bar{\alpha}a+\alpha b)\langle v,v\rangle(\bar{\alpha}+ \alpha t).\] It is not difficult to show that \[N(r)+\Delta(r,r^{\#})=(\bar{\alpha}a+\alpha b)((\alpha a+\bar{\alpha}b)^{2}+( 2\alpha-1)(t-1)\langle v,v\rangle).\] On the other hand, the projection of \((r^{\#})^{\#}\) on \(E\) equals by (32) \[(\bar{\alpha}a+\alpha b)\big{(}(\alpha a+\bar{\alpha}b)(\bar{\alpha}b+\alpha a )+(2\alpha-1)(t-1)\langle v,v\rangle\big{)}v=(N(r)+\Delta(r,r^{\#}))v.\] We leave the check that the coordinates of \((r^{\#})^{\#}\) at \(z_{1}\) and \(z_{2}\) also equal to the ones of \((N(r)+\Delta(r,r^{\#}))r\). Thus, (28) follows. Finally, we have to derive (27). For this, we write down the required forms for \(r=az_{1}+bz_{2}+v\) and \(s=kz_{1}+lz_{2}+u\): \[N(r,s)=(N(2r+s)-2N(r+s)-N(2r)+2N(r)+N(s))/2\\ =\alpha a^{2}l+2\alpha abk+2\bar{\alpha}abl+\bar{\alpha}b^{2}k- \langle v,v\rangle(\bar{\alpha}tk+\alpha l)-2\langle v,u\rangle(\bar{\alpha}ta +\alpha b),\] \[S(r,s)=N(r+s+c)-N(r+s)-N(r+c)-N(s+c)+N(r)+N(s)+N(c)\\ =2(\alpha ak+al+bk+\bar{\alpha}bl)-2(\bar{\alpha}t+\alpha)\langle v,u\rangle,\] \[(r,s)=T(r)T(s)-S(r,s)-\Delta(r,s)=(1+\alpha)ak+(2-\alpha)bl+(1+\alpha+(2- \alpha)t)\langle v,u\rangle, \tag{33}\] \[r_{\#}s=(r+s)^{\#}-r^{\#}-s^{\#}=(\alpha a+\bar{\alpha}b)(lz_{1 }+kz_{2})+(\alpha k+\bar{\alpha}l)(bz_{1}+az_{2})\\ +2(t-1)(-\bar{\alpha}z_{1}+\alpha z_{2})\langle v,u\rangle-(\bar{ \alpha}a+\alpha b)u-(\bar{\alpha}k+\alpha l)v.\] Applying these formulas to compute \((r_{\#}s,r)\) and \((r^{\#},s)\), we prove (27). 
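The verification of (27)–(29) can also be cross-checked numerically. The sketch below is only an illustration with several assumptions: \(F=\mathbb{Q}\), \(E=\mathbb{Q}^{2}\) with the standard dot product as \(\langle\cdot,\cdot\rangle\), and fixed rational \(\alpha,t\). It instantiates (30)–(32), derives \(T\), \(S(\cdot,\cdot)\), \((\cdot,\cdot)\) and \(N(r,q)\) from \(N\) alone via the linearizations of Definitions 1 and 3, and tests (25) and (27)–(29) on random elements.

```python
# Illustrative check of Example 2 over Q with E = Q^2 (assumptions for the sketch).
from fractions import Fraction as Fr
import random

alpha, t = Fr(3, 4), Fr(5, 2)
abar = 1 - alpha
DIM_E = 2
c = (Fr(1), Fr(1)) + (Fr(0),) * DIM_E            # basepoint c = z1 + z2

add = lambda r, q: tuple(x + y for x, y in zip(r, q))
sub = lambda r, q: tuple(x - y for x, y in zip(r, q))
scale = lambda k, r: tuple(k * x for x in r)
dot = lambda u, v: sum(x * y for x, y in zip(u, v))

def N(r):                                        # cubic form (30); r = (a, b, v)
    a, b, v = r[0], r[1], r[2:]
    return a * b * (alpha * a + abar * b) - dot(v, v) * (abar * t * a + alpha * b)

def Delta(r, q):                                 # symmetric bilinear form (31)
    a, b, v = r[0], r[1], r[2:]
    k, l, u = q[0], q[1], q[2:]
    return alpha * (alpha - 1) * (a - b) * (k - l) - dot(u, v) * (abar + alpha * t)

def sharp(r):                                    # sharp map (32)
    a, b, v = r[0], r[1], r[2:]
    s, qv = alpha * a + abar * b, dot(v, v)
    return ((s * b - abar * (t - 1) * qv, s * a + alpha * (t - 1) * qv)
            + scale(-(abar * a + alpha * b), v))

def N3(x, y, z):                                 # complete linearization of N
    return (N(add(add(x, y), z)) - N(add(x, y)) - N(add(x, z)) - N(add(y, z))
            + N(x) + N(y) + N(z))

T = lambda r: N3(r, c, c) / 2                    # trace, eq. (3)
S2 = lambda r, q: N3(r, q, c)                    # spur form, eq. (4)
N2 = lambda r, q: N3(r, r, q) / 2                # N(r, q), eq. (4)
form = lambda r, q: T(r) * T(q) - S2(r, q) - Delta(r, q)              # eq. (26)
sprod = lambda r, q: sub(sub(sharp(add(r, q)), sharp(r)), sharp(q))   # eq. (11)

random.seed(0)
rnd = lambda: tuple(Fr(random.randint(-5, 5)) for _ in range(2 + DIM_E))
assert N(c) == 1
for _ in range(100):
    r, q = rnd(), rnd()
    assert Delta(r, c) == 0                                            # (25)
    assert form(sprod(r, q), r) + form(sharp(r), q) == 3 * N2(r, q)    # (27)
    assert sharp(sharp(r)) == scale(N(r) + Delta(sharp(r), r), r)      # (28)
    assert sprod(c, r) == sub(scale(T(r), c), r)                       # (29)
print("(25), (27)-(29) hold on all sampled elements")
```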
We write down the product on \(S(\alpha,t,E)\) by (12): \[rs=\frac{1}{2}(r_{\#}s+T(r)s+T(s)r-S(r,s)c)=akz_{1}+blz_{2}+ \langle v,u\rangle(z_{1}+tz_{2})\\ +(\alpha k+\bar{\alpha}l)v+(\alpha a+\bar{\alpha}b)u, \tag{34}\] which, up to rescalling the bilinear form on \(E\), coincides with the initial product on \(S(\alpha,t,E)\). **Remark 1**. Denote \(\lambda=\frac{3\alpha(1-\alpha)}{(1+\alpha)(\alpha-2)}\). Then the generalized sharped cubic form on the split spin factor \(S(\alpha,E)\) is inner with \(\lambda\), since for \(r=az_{1}+bz_{2}+v\) and \(s=kz_{1}+lz_{2}+u\) we have \[(r,s)-T(r)T(s)/3\\ =(az_{1}+bz_{2}+v,kz_{1}+lz_{2}+u)-\frac{((1+\alpha)a+(2-\alpha) b)((1+\alpha)k+(2-\alpha)l)}{3}\\ =\frac{1}{3}\bigg{(}ak(1+\alpha)(3-(1+\alpha))+bl(2-\alpha)(3-(2- \alpha))-(1+\alpha)(2-\alpha)(al+bk)+\frac{3(1+\alpha)}{\alpha}\langle v,u \rangle\bigg{)}\\ =\frac{(1+\alpha)(2-\alpha)}{3}\left((a-b)(k-l)+\frac{3\langle v, u\rangle}{\alpha(2-\alpha)}\right)=\frac{1}{\lambda}\Delta(r,s).\] Let us return to generalized sharped cubic forms and relations concerned with them. **Lemma 1**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Then the following identities hold: \[T(r^{\#})=S(r)-\Delta(r,r), \tag{35}\] \[(r^{\#},r)=3N(r), \tag{36}\] \[S(r^{\#},r)=T(r)(S(r)-\Delta(r,r))-3N(r)-\Delta(r^{\#},r), \tag{37}\] \[{r^{\#}}_{\#}(r_{\#}q)=(N(r)+\Delta(r^{\#},r))q+(N(r,q)+\Delta(r^{\#},q)+ \Delta(r,r_{\#}q))r, \tag{38}\] \[(r_{\#}q)^{\#}+{r^{\#}}_{\#}q^{\#}=(N(q,r)+\Delta(q,r_{\#}q)+\Delta(r,q^{ \#}))r\\ +(N(r,q)+\Delta(r^{\#},q)+\Delta(r,r_{\#}q))q, \tag{39}\] \[{r^{\#}}_{\#}r=-T(r)r^{\#}-T(r^{\#})r+(T(r)(S(r)-\Delta(r,r))-N(r)-\Delta(r^{ \#},r))c. \tag{40}\] Proof. Involving (26), (27), (29), and (6), we get \[T(r^{\#})=(r^{\#},c)=3N(r,c)-(r_{\#}c,r)=3S(r)-T(r)^{2}+(r,r)=S(r)-\Delta(r,r).\] By (27), we rewrite \[9N(r)=3N(r,r)=(r_{\#}r,r)+(r^{\#},r)=3(r^{\#},r),\] since \(r_{\#}r=(2r)^{\#}-r^{\#}-r^{\#}=2r^{\#}\). Thus, we derive (36). Applying (26) and already proved formulas, we deduce \[S(r^{\#},r)=T(r)T(r^{\#})-(r^{\#},r)-\Delta(r^{\#},r)=T(r)(S(r)-\Delta(r,r))- 3N(r)-\Delta(r^{\#},r).\] Let us put \(r+tq\) instead of \(r\) into (28), where \(t\in F\). Joint with (7), we get \[((r+tq)^{\#})^{\#}=(r^{\#}+t^{2}q^{\#}+tr_{\#}q)^{\#}=(r^{\#}+tr _{\#}q)^{\#}+t^{4}(q^{\#})^{\#}+t^{2}(r^{\#}+tr_{\#}q)_{\#}q^{\#}\\ =(r^{\#})^{\#}+t^{2}(r_{\#}q)^{\#}+{r^{\#}}_{\#}(r_{\#}q)+t^{4}( q^{\#})^{\#}+t^{2}{r^{\#}}_{\#}q^{\#}+t^{3}(r_{\#}q)_{\#}q^{\#};\] \[(N(r+tq)+\Delta(r+tq,(r+tq)^{\#}))(r+tq)\\ =(N(r)+tN(r,q)+t^{2}N(q,r)+t^{3}N(q))(r+tq)+\Delta(r+tq,r^{\#}+t^ {2}q^{\#}+tr_{\#}q)(r+tq).\] Comparing coefficients at \(t\) and at \(t^{2}\), we derive (38) and (39) respectively. Finally, we apply (29) twice and then (35), (38) with \(q=c\): \[{r^{\#}}_{\#}r={r^{\#}}_{\#}(T(r)c-r_{\#}c)=T(r)(T(r^{\#})c-r^{ \#})-{r^{\#}}_{\#}(r_{\#}c)\\ =T(r)(S(r)-\Delta(r,r))c-T(r)r^{\#}-(N(r)+\Delta(r^{\#},r))c-(N(r, c)+\Delta(r,T(r)c-r))r\\ =(T(r)(S(r)-\Delta(r,r))-N(r)-\Delta(r^{\#},r))c-T(r)r^{\#}-T(r^{ \#})r.\qed\] **Theorem 2**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Define a product \(\cdot\) on \(V\) by (12). Then the conclusion of Theorem 1a) holds for \((V,\cdot)\). Proof. Let us check that \(c\) is a unit by (3), (4), (6), (29): \[2rc=r_{\#}c+T(r)c+T(c)r-S(r,c)c=2T(r)c-r+3r-2T(r)c=2r.\] Due to the definitions, \[r^{2}=\frac{1}{2}({r_{\#}}r+2T(r)r-S(r,r)c)=r^{\#}+T(r)r-S(r)c.\] It remains to show (13). 
For this, we apply (37), (40): \[2rr^{\#}=r_{\#}r^{\#}+T(r)r^{\#}+T(r^{\#})r-S(r,r^{\#})c\] \[=(T(r)(S(r)-\Delta(r,r))-N(r)-\Delta(r^{\#},r))c-(T(r)(S(r)-\Delta (r,r))-3N(r)-\Delta(r^{\#},r))c\] \[=2N(r)c,\] i. e. \(rr^{\#}=N(r)c\), which is equivalent to (13). \(\Box\) Let us state further relations fulfilled on an algebra endowed with a generalized sharped cubic. **Lemma 2**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Then the following identities hold: \[T(r_{\#}q)=S(r,q)-2\Delta(r,q), \tag{41}\] \[N(r^{\#})=N(r)(N(r)+\Delta(r^{\#},r)), \tag{42}\] \[(r_{\#}q,s)+(q_{\#}s,r)+(s_{\#}r,q)=3N(r,q,s), \tag{43}\] \[r^{\#}{}_{\#}r^{\#}=2(N(r)+\Delta(r^{\#},r)))r, \tag{44}\] \[U_{c}(r)=r,\quad U_{r}(c)=r^{2}+\Delta(r,r)c,\quad\frac{1}{2}U_{r,q}(c)=rq+ \Delta(r,q)c, \tag{45}\] \[U_{r}(r)=r^{3}-2\Delta(r,r)r+(\Delta(r^{\#},r)+T(r)\Delta(r,r))c, \tag{46}\] \[U_{r}(r^{\#})=(N(r)-2\Delta(r^{\#},r))r. \tag{47}\] Proof. By (35) and the definition of the sharp product, we get (41): \[T(r_{\#}q)=T((r+q)^{\#})-T(r^{\#})-T(q^{\#})\] \[=S(r+q)-\Delta(r+q,r+q)-S(r)+\Delta(r,r)-S(q)+\Delta(q,q)=S(r,q)- 2\Delta(r,q).\] We use (36) and the axiom (28) to compute \[3N(r^{\#})=((r^{\#})^{\#},r^{\#})=((N(r)+\Delta(r^{\#},r))r,r^{\#})=3(N(r)+ \Delta(r^{\#},r))N(r),\] hence, (42) is proved. Linearization of (27) implies (43). Since \(q_{\#}q=2q^{\#}\), the equality (44) follows from (28). By the definition of the \(U\)-operator, (6), (29), and (35), we have \[U_{c}(r)=(c,r)c-c^{\#}{}_{\#}r=T(r)c-c_{\#}r=r,\] \[U_{r}(c)=(c,r)r-r^{\#}{}_{\#}c=T(r)r-T(r^{\#})c+r^{\#}=r^{2}-T( r^{\#})c+S(r)c=r^{2}+\Delta(r,r)c,\] and the third equality from (45) follows immediately. Applying (13), (26), (35), (40), we derive (46): \[U_{r}(r)-r^{3}+2\Delta(r,r)r-(\Delta(r^{\#},r)+T(r)\Delta(r,r))c\] \[=(r,r)r-r^{\#}{}_{\#}r-T(r)r^{2}+S(r)r-N(r)c+2\Delta(r,r)r-( \Delta(r^{\#},r)+T(r)\Delta(r,r))c\] \[=(r,r)r+T(r)r^{\#}+T(r^{\#})r-(T(r)(S(r)-\Delta(r,r))-N(r)- \Delta(r^{\#},r))c\] \[\qquad\qquad-T(r)r^{2}+S(r)r-N(r)c+2\Delta(r,r)r-(\Delta(r^{\#},r)+T(r)\Delta(r,r))c\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad=((r,r)+S (r)-T(r)^{2}+\Delta(r,r))r=0.\] Finally, the formula (47) holds by (36) and (44). \(\Box\) **Corollary 1**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Then \((r,s)=T(rs)\) for all \(r,s\in V\). Proof. By the definition (12), we write down \[2T(rs)=T(r_{\#}s)+2T(r)T(s)-S(r,s)T(c)\stackrel{{\eqref{eq:2T}}}{{= }}2(T(r)T(s)-S(r,s)-\Delta(r,s))\stackrel{{\eqref{eq:2T}}}{{=}}2( r,s).\] Corollary is proved. \(\Box\) **Lemma 3**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Then the following identities are equivalent: \[(r_{\#}q,s)=N(r,q,s)+\frac{1}{3}(T(r)\Delta(q,s)+T(q)\Delta(r,s)- 2T(s)\Delta(r,q)), \tag{48}\] \[(r_{\#}q,s)=(r,q_{\#}s)+T(r)\Delta(q,s)-T(s)\Delta(r,q),\] (49) \[(rq,s)=(r,qs). \tag{50}\] Proof. Note that (48) immediately implies (49) by (43). Suppose that (49) holds. Then by (43), we have \[3N(r,q,s)=(r_{\#}q,s)+(q_{\#}s,r)+(s_{\#}r,q)\\ =3(r_{\#}q,s)-T(r)\Delta(q,s)-T(q)\Delta(r,s)+2T(s)\Delta(r,q),\] and (48) is fulfilled. 
Now, we prove the last equivalency: \[2(rq,s)-2(r,qs)=(r_{\#}q,s)+T(r)(q,s)+T(q)(r,s)-S(r,q)(c,s)\\ -(r,q_{\#}s)-T(q)(r,s)-T(s)(r,q)+S(q,s)(r,c)\\ =(r_{\#}q,s)-(r,q_{\#}s)-T(r)\Delta(s,q)+T(s)\Delta(r,q)\\ +T(r)((q,s)+\Delta(q,s)+S(q,s))-T(s)((r,q)+\Delta(r,q)+S(r,q))\\ \stackrel{{\eqref{eq:2T}}}{{=}}(r_{\#}q,s)-(r,q_{ \#}s)-T(r)\Delta(s,q)+T(s)\Delta(r,q)+T(r)T(q)T(s)-T(s)T(r)T(q).\] Hence, (49) and (50) are equivalent. \(\Box\) **Remark 2**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized cubic form satisfying (28) and (29). Then \(V\) is a generalized sharped cubic form with invariant form \((\cdot,\cdot)\) if and only if \(V\) satisfies \[(r^{\#},q)=N(r,q)+(T(r)\Delta(r,q)-T(q)\Delta(r,r))/3, \tag{51}\] for all \(r,q\in V\). Indeed, suppose that \(V\) satisfies (51). Then a linearization of (51) implies (48). Taking (48) with \(s=r\), we get \[(r_{\#}q,r)=2N(r,q)+\frac{1}{3}(T(q)\Delta(r,r)-T(r)\Delta(r,q)),\] which sum with (51) gives (27). Conversely, the identity (51) holds on every generalized sharped cubic form with invariant form \((\cdot,\cdot)\), since (51) follows from (48) with \(q=r\). Now we state identities concerned the triple product \(\{r,s,q\}\) defined by (15). **Lemma 4**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Then the following identities hold: \[\{r,r,q\}{=}(2r^{2}{-}\Delta(r,r))q{-}3\Delta(r,q)r{+}\big{(}2T(r)\Delta(r,q){-}( r^{\#},q){+}\Delta(r_{\#}q,r){+}N(r,q)\big{)}c, \tag{52}\] \[\{r,s,q\}+\{s,r,q\}=(4(rs)-2\Delta(r,s))q-3\Delta(r,q)s-3\Delta(s,q)r\\ +\big{(}2T(r)\Delta(s,q)+2T(s)\Delta(r,q)-(r_{\#}s,q)+\Delta(s_{ \#}q,r)+\Delta(r_{\#}q,s)+N(r,s,q)\big{)}c, \tag{53}\] \[(r,s,q)=\frac{1}{4}\big{(}\{s,r,q\}-\{s,q,r\}+\Delta(q,s)r-\Delta(r,s)q\\ -(\Delta(q_{\#}s,r)-\Delta(r_{\#}s,q)+2T(r)\Delta(s,q)-2T(q) \Delta(r,s)-(r_{\#}s,q)+(r,s_{\#}q))c\big{)}. \tag{54}\] Proof. 
Denote \[\eta(r,q)=(2T(r)\Delta(r,q)+T(q)\Delta(r,r)+\Delta(r^{\#},q)+\Delta(r_{\#}q,r )+N(r,q)-T(r)S(r,q))c.\] Then linearization of (40) gives \[0=(r_{\#}q)_{\#}r+r^{\#}{}_{\#}q+T(q)r^{\#}+T(r)(r_{\#}q)+T(r^{\#})q+T(r_{\#} q)r-T(q)S(r)c+\eta(r,q).\] By (12), (35), and (41), we have \[0=(r_{\#}q)_{\#}r+r_{\#}^{2}q+S(r)(c_{\#}q)+T(q)(r^{\#}-S(r)c)+S (r)q-\Delta(r,r)q+S(r,q)r\\ -2\Delta(r,q)r+\eta(r,q).\] The identity (29) implies \[0=(r_{\#}q)_{\#}r+r_{\#}^{2}q+S(r)T(q)c+T(q)r^{2}-T(r)T(q)r+S(r, q)r-2\Delta(r,q)r\\ -\Delta(r,r)q+\eta(r,q)=(r_{\#}q)_{\#}r+r_{\#}^{2}q+S(r)T(q)c+T( q)r^{2}-T(r)T(q)r\\ +S(r,q)r-2\Delta(r,q)r-\Delta(r,r)q+\eta(r,q).\] The identity \(T(r^{2})=(r,r)\) and (12) imply \[0=(r_{\#}q)_{\#}r+2r^{2}q-T(r^{2})q-T(q)r^{2}+S(r^{2},q)c+S(r)T (q)c\\ +T(q)r^{2}-T(r)T(q)r+S(r,q)r-2\Delta(r,q)r-\Delta(r,r)q+\eta(r,q) \\ =(r_{\#}q)_{\#}r+2r^{2}q+S(r^{\#},q)c-(r,r)q+T(r)S(r,q)c-S(r)S(c, q)c\\ +S(r)T(q)c-T(r)T(q)r+S(r,q)r-2\Delta(r,q)r-\Delta(r,r)q+\eta(r,q).\] By (15) and (26), we have \[0=-\{r,r,q\}+(r,q)r{+}2r^{2}q-T(r)T(q)r{+}S(r,q)r{-}S(r)T(q)c{+} S(r^{\#},q)c{+}T(r)S(r,q)c\\ -2\Delta(r,q)r-\Delta(r,r)q+\eta(r,q)=-\{r,r,q\}+2r^{2}q-3\Delta (r,q)r-\Delta(r,r)q\\ +\big{(}2T(r)\Delta(r,q)+T(q)\Delta(r,r)+\Delta(r^{\#},q)+\Delta(r _{\#}q,r)+N(r,q)-S(r)T(q)+S(r^{\#},q)\big{)}c.\] By (26), we have \[0=-\{r,r,q\}+(2r^{2}-\Delta(r,r))q-3\Delta(r,q)r\] \[+\big{(}2T(r)\Delta(r,q)+T(q)\Delta(r,r)+T(r^{\#})T(q)-(r^{\#},q)+ \Delta(r_{\#}q,r)+N(r,q)-S(r)T(q)\big{)}c.\] The relation (35) implies \[0=-\{r,r,q\}+(2r^{2}-\Delta(r,r))q-3\Delta(r,q)r+\big{(}2T(r)\Delta(r,q)-(r^{ \#},q)+\Delta(r_{\#}q,r)+N(r,q)\big{)}c.\] So, we have proved the identity (52). The identity (53) is a linearization of (52), while (54) is a consequence of (53). ## 5 \(\Psi\)-map Define \(\Psi(r,s,q)\) as follows, \[\Psi(r,s,q)=(r,s,q)-\Delta(q,s)r+\Delta(r,s)q\] \[+1/4(\Delta(q_{\#}s,r)-\Delta(r_{\#}s,q)+2T(r)\Delta(s,q)-2T(q) \Delta(r,s)+(q_{\#}s,r)-(r_{\#}s,q))c. \tag{55}\] By the definition, \(\Psi(r,s,q)+\Psi(q,s,r)=0\) for all \(r,s,q\) and \(\Psi(r,s,q)=0\) if either of \(r,s,q\) equals to \(c\). The equalities (54) and (55) joint imply \[4\Psi(r,s,q)=(r_{\#}s)_{\#}q-(s_{\#}q)_{\#}r+((r,s)+3\Delta(r,s))q-((q,s)+3 \Delta(q,s))r.\] We introduce the following notations: \[\widetilde{(r,q)}=(r,q)+3\Delta(r,q),\quad(r,s,q)_{\#}=(r_{\#}s)_{\#}q-r_{\# }(s_{\#}q).\] Then the last relation obtained has the form \[4\Psi(r,s,q)=(r,s,q)_{\#}+\widetilde{(r,s)}q-\widetilde{(s,q)}r. \tag{56}\] Similarly, we derive that \[4\Psi(r,s,q)=U_{q,s}(r)-U_{r,s}(q)+3(\Delta(r,s)q-\Delta(q,s)r).\] We may rewrite (41) as follows, \[T(r_{\#}q)=T(r)T(q)-\widetilde{(r,q)}. \tag{57}\] **Lemma 5**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Then the following identities are equivalent: \[\widetilde{(r_{\#}s,q)}=\widetilde{(r,s_{\#}q)}, \tag{58}\] \[\{r,r^{\#},q\}=(2N(r)-\Delta(r,r^{\#}))q-3\Delta(r^{\#},q)r,\] (59) \[T(\Psi(r,s,q))=0. \tag{60}\] Proof. We deduce (59): \[\{r,r^{\#},q\}\stackrel{{\eqref{eq:r_#}}}{{=}}(r,r^{\#})q +(r^{\#},q)r-r^{\#}{}_{\#}(r_{\#}q)\] \[\stackrel{{\eqref{eq:r_#}}}{{=}}(r,r^{\#})q+(r^{\#},q )r-(N(r)+\Delta(r^{\#},r))q-(N(r,q)+\Delta(r^{\#},q)+\Delta(r,r_{\#}q))r\] \[\stackrel{{\eqref{eq:r_#}}}{{=}}(2N(r)-\Delta(r,r^ {\#}))q-(-2/3(r^{\#},q)+1/3(r_{\#}q,r)+\Delta(r^{\#},q)+\Delta(r,r_{\#}q))r,\] which is equal to the right-hand side of (59) if and only (58) holds for \(s=r\). 
To show that (58) and (59) are equivalent, it remains to derive (58) from itself fulfilled for \(s=r\). A linearization of \[(r_{\#}q,r)+3\Delta(r_{\#}q,r)=(r_{\#}r,q)+3\Delta(r_{\#}r,q) \tag{61}\] gives \[(r_{\#}q,s)+(s_{\#}q,r)+3\Delta(r_{\#}q,s)+3\Delta(s_{\#}q,r)=2(r_{\#}s,q)+6 \Delta(r_{\#}s,q).\] We may rewrite the last expression with the help of (43): \[N(r,s,q)+\Delta(r_{\#}q,s)+\Delta(s_{\#}q,r)+\Delta(r_{\#}s,q)=(r_{\#}s,q)+3 \Delta(r_{\#}s,q).\] Because of the symmetry, (58) follows. Due to (55) and to Corollary 1, we have \[4T(\Psi(r,s,q))=4(rs,q)-4(r,sq)-4T(r)\Delta(q,s)+4T(q)\Delta(r,s)\] \[\quad+3(\Delta(q_{\#}s,r)-\Delta(r_{\#}s,q)+2T(r)\Delta(q,s)-2T( q)\Delta(r,s)+(r,s_{\#}q)-(r_{\#}s,q))\] \[\stackrel{{\eqref{eq:r_#}}}{{=}}2(r_{\#}s,q)+2T(r )(s,q)-2S(r,s)T(q)-2(r,s_{\#}q)-2T(q)(r,s)+2S(q,s)T(r)\] \[\quad+2T(r)\Delta(q,s)-2T(q)\Delta(r,s)+3(\Delta(q_{\#}s,r)- \Delta(r_{\#}s,q)+(r,s_{\#}q)-(r_{\#}s,q))\] \[\stackrel{{\eqref{eq:r_#}}}{{=}}(r,s_{\#}q)-(r_{ \#}s,q)+2T(r)T(s)T(q)-2T(s)T(r)T(q)+3(\Delta(q_{\#}s,r)-\Delta(r_{\#}s,q))\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\ \[U_{r}(U_{r^{\#}}(q))=[-3(r^{\#},q)\Delta(r,r^{\#})+(N(r)+\Delta(r,r^{ \#}))(\Delta(r^{\#},q)+\Delta(r,r_{\#}q)\\ +2/3(T(r)\Delta(r,q)-T(q)\Delta(r,r)))]r+(N(r)+\Delta(r,r^{\#}))^{2 }q, \tag{64}\] Proof. The formula (63) holds by (49). 
We rewrite with the help of (28) and (47): \[U_{r}(U_{r^{\#}}(q))=U_{r}((r^{\#},q)r^{\#}-(r^{\#})^{\#}{}_{\#}q)\\ =(r^{\#},q)(N(r)-2\Delta(r^{\#},r))r-(N(r)+\Delta(r^{\#},r))U_{r} (r_{\#}q).\] Further, we apply (38) and (49), \[U_{r}(r_{\#}q)=(r,r_{\#}q)r-r^{\#}{}_{\#}(r_{\#}q)=((q,r_{\#}r)+ T(q)\Delta(r,r)-T(r)\Delta(r,q))r\\ -(N(r)+\Delta(r^{\#},r))q-(N(r,q)+\Delta(r^{\#},q)+\Delta(r,r_{ \#}q))r.\] Thus, \[U_{r}(U_{r^{\#}}(q))=Ar+(N(r)+\Delta(r^{\#},r))^{2}q,\] where again by (49) we reduce \[A=(r^{\#},q)(N(r)-2\Delta(r^{\#},r))-(N(r)+\Delta(r^{\#},r))(2(r ^{\#},q)+T(q)\Delta(r,r)-T(r)\Delta(r,q)\\ -N(r,q)-\Delta(r^{\#},q)-\Delta(r,r_{\#}q))=-3(r^{\#},q)\Delta(r ^{\#},r)\\ +(N(r)+\Delta(r^{\#},r))(N(r,q)-(r^{\#},q)+\Delta(r^{\#},q)+ \Delta(r,r_{\#}q)+T(r)\Delta(r,q)-T(q)\Delta(r,r)).\] It remains to use (51) to prove (64). \(\Box\) Let us prove some further properties of \(\Psi\). **Lemma 7**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\) such that \(\widetilde{(\cdot,\cdot)}\) is \(\#\)-invariant. Then the following identities for \(\Psi\) are fulfilled: \[\widetilde{(\Psi(r,s,q)+\Psi(s,q,r)+\Psi(q,r,s)=0}, \tag{65}\] \[\widetilde{(\Psi(r,s,q),x)+(\widetilde{(\Psi(q,s,x),r)+(\Psi(x,s,r),q)=0}}. \tag{66}\] If, additionally, \((\cdot,\cdot)\) is invariant, then \[\Delta(\Psi(r,s,q),x)+\Delta(\Psi(q,s,x),r)+\Delta(\Psi(x,s,r),q)=0. \tag{67}\] Proof. The equality (65) follows by (56). Based on (56), we rewrite and get \[4((\widetilde{(\Psi(r,s,q),x)+(\widetilde{(\Psi(q,s,x),r)+( \Psi(x,s,r),q)})}\\ =((\widetilde{(r,s,q)_{\#},x)+((\widetilde{(q,s,x)_{\#},r)+((x,s,r)_{\#},q)} }\\ +\widetilde{(r,s)(\widetilde{(q,x)}-\widetilde{(s,q)(r,x)+( \widetilde{(q,s)(x,r)-(s,x)(q,r)+(x,s)(r,q)}}}-\widetilde{(s,r)(x,q)}}- \widetilde{(s,r)(x,q)}\\ \stackrel{{\eqref{eq:V_r}}}{{=}}\widetilde{(r_{\#},s,x_{\#}q)-(\widetilde{s_{\#}q,x_{\#}r})+\widetilde{(q_{\#}s,r_{\#}x)-( \widetilde{s_{\#}x,r_{\#}q})+(\widetilde{x_{\#}s,q_{\#}r})-(\widetilde{s_{\#} r,q_{\#}x})}=0,\] as required. Let us prove (67), for this, we write down \[4\Delta(\Psi(r,s,q),x)\stackrel{{\eqref{eq:2.2}}}{{=}} \Delta((r_{\#}s)_{\#}q-r_{\#}(s_{\#}q)+\widetilde{(r,s)}q-\widetilde{(q,s)}r,x)\\ \stackrel{{\eqref{eq:2.2}}}{{=}}\Delta(r_{\#}s,q_{ \#}x)-\Delta(s_{\#}q,r_{\#}x)+\widetilde{(r,s)}\Delta(q,x)-\widetilde{(q,s)} \Delta(r,x)\\ +\frac{T(x)\Delta(r_{\#}s,q)-T(r_{\#}s)\Delta(q,x)-T(x)\Delta(r,s _{\#}q)+T(s_{\#}q)\Delta(r,x)}{3}\\ \stackrel{{\eqref{eq:2.2}}}{{=}}\Delta(r_{\#}s,q_{ \#}x)-\Delta(s_{\#}q,r_{\#}x)+\widetilde{(r,s)}\Delta(q,x)-\widetilde{(q,s)} \Delta(r,x)\\ +\frac{T(x)(T(q)\Delta(r,s)-T(r)\Delta(s,q))}{9}+\frac{T(s_{\#}q) \Delta(r,x)-T(r_{\#}s)\Delta(q,x)}{3}.\] Analogously, we have \[4\Delta(\Psi(q,s,x),r)=\Delta(q_{\#}s,r_{\#}x)-\Delta(s_{\#}x,r_ {\#}q)+\widetilde{(s,q)}\Delta(r,x)-\widetilde{(s,x)}\Delta(r,q)\\ +\frac{T(r)(T(x)\Delta(s,q)-T(q)\Delta(s,x))}{9}+\frac{T(s_{\#}x) \Delta(r,q)-T(q_{\#}s)\Delta(r,x)}{3},\\ 4\Delta(\Psi(x,s,r),q)=\Delta(x_{\#}s,r_{\#}q)-\Delta(s_{\#}r, q_{\#}x)+\widetilde{(x,s)}\Delta(r,q)-\widetilde{(r,s)}\Delta(q,x)\\ +\frac{T(q)(T(r)\Delta(s,x)-T(x)\Delta(r,s))}{9}+\frac{T(s_{\#}r) \Delta(q,x)-T(x_{\#}s)\Delta(r,q)}{3}.\] The sum of the three expressions equals \(0\). \(\square\) **Lemma 8**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be a generalized sharped cubic form on \(V\). Suppose that \((\cdot,\cdot)\) is invariant and nondegenerate, \(\widetilde{(\cdot,\cdot)}\) is \(\#\)-invariant, and \(\dim V\geq 2\). 
Then \((N,\Delta,\#,c)\) is inner if and only if \[\Delta(s,\Psi(r,s,q)_{\#}x+\Psi(q,s,x)_{\#}r+\Psi(x,s,r)_{\#}q)=0 \tag{68}\] holds for all \(r,s,q,x\in V\). Proof. Let us rewrite (68) in more convenient form. With the help of (56) and (62), we get \[4\Delta(s,\Psi(r,s,q)_{\#}x+\Psi(q,s,x)_{\#}r+\Psi(x,s,r)_{\#}q)= \Delta(s,(r,s,q)_{\#}x+\widetilde{(r,s)}q_{\#}x-\widetilde{(q,s)}r_{\#}x\\ +(q,s,x)_{\#}r+\widetilde{(q,s)}x_{\#}r-\widetilde{(x,s)}q_{\#}r+ (x,s,r)_{\#}q+\widetilde{(x,s)}r_{\#}q-\widetilde{(r,s)}q_{\#}x)\\ =\Delta((r,s,q)_{\#},x_{\#}s)+\frac{T(s)}{3}\Delta((r,s,q)_{\#}, x)-\frac{T((r,s,q)_{\#})}{3}\Delta(x,s)\\ +\Delta((q,s,x)_{\#},r_{\#}s)+\frac{T(s)}{3}\Delta((q,s,x)_{\#}, r)-\frac{T((q,s,x)_{\#})}{3}\Delta(r,s)\\ +\Delta((x,s,r)_{\#},q_{\#}s)+\frac{T(s)}{3}\Delta((x,s,r)_{\#}, q)-\frac{T((x,s,r)_{\#})}{3}\Delta(q,s). \tag{69}\] The sum of the three summands at \(T(s)/3\) is zero due to (67). Further, \[\Delta((r_{\#}s)_{\#}q,x_{\#}s)-\Delta((q_{\#}(s_{\#}x),r_{\#}s )\stackrel{{\eqref{eq:2.2}}}{{=}}\frac{1}{3}(T(x_{\#}s)\Delta(r_ {\#}s,q)-T(r_{\#}s)\Delta(x_{\#}s,q))\\ \stackrel{{\eqref{eq:2.2}}}{{=}}\frac{1}{3}((T(x)T(s) -\widetilde{(x,s)})\Delta(r_{\#}s,q)-(T(r)T(s)-\widetilde{(r,s)})\Delta(x_{\# }s,q)).\] Hence, \[\Delta((r,s,q)_{\#},x_{\#}s)+\Delta((q,s,x)_{\#},r_{\#}s)+\Delta((x, s,r)_{\#},q_{\#}s)\\ =\frac{1}{3}((T(x)T(s)-\widetilde{(x,s)})(\Delta(r_{\#}s,q)-\Delta( r,s_{\#}q))-(T(r)T(s)-\widetilde{(r,s)})(\Delta(x_{\#}s,q)-\Delta(x,s_{\#}q)\\ +(T(q)T(s)-\widetilde{(q,s)})(\Delta(x_{\#}s,r)-\Delta(x,s_{\#}r) ))\\ \overset{(\ref{eq:2})}{=}\frac{1}{9}((T(x)T(s)-\widetilde{(x,s) })(T(q)\Delta(r,s)-T(r)\Delta(q,s))\\ -(T(r)T(s)-\widetilde{(r,s)})(T(q)\Delta(x,s)-T(x)\Delta(s,q))\\ +(T(q)T(s)-\widetilde{(q,s)})(T(r)\Delta(x,s)-T(x)\Delta(r,s))),\] where the last expression equals the following one \[T(q)\Delta(s,x)(r,s)-T(r)\Delta(s,x)(s,q)+T(x)\Delta(r,s)(s,q)- T(q)\Delta(r,s)(s,x)\\ +T(r)\Delta(s,q)(s,x)-T(x)\Delta(s,q)(r,s)=0 \tag{70}\] with coefficient \(1/9\). The rest summands of (69) give by (56) and (60): \[\frac{1}{3}((T(q)\widetilde{(r,s)}-T(r)\widetilde{(q,s)})\Delta (x,s)+(T(x)\widetilde{(q,s)}-T(q)\widetilde{(x,s)})\Delta(r,s)\\ +(T(r)\widetilde{(x,s)}-T(x)\widetilde{(r,s)})\Delta(q,s)),\] which is equal to (70) with coefficient \(1/3\). Therefore, we have showed that (68) is equivalent to (70). It is easy to check that if \((N,\Delta,\#,c)\) is inner, then (70) holds. Now, we want to show that if (70) is true, then the cubic form is inner. Putting \(q=c\) in (70), we derive \[\Delta(s,x)\bigg{(}(r,s)-\frac{T(r)T(s)}{3}\bigg{)}=\Delta(s,r)\bigg{(}(x,s)- \frac{T(x)T(s)}{3}\bigg{)}. \tag{71}\] Analogously, we write down \[\Delta(s,r)\bigg{(}(r,t)-\frac{T(r)T(t)}{3}\bigg{)}=\Delta(r,t)\bigg{(}(r,s)- \frac{T(r)T(s)}{3}\bigg{)}. \tag{72}\] Multiplying (71) by \(\Delta(r,t)\) and adding (72) multiplied by \(\Delta(s,x)\), we get \[\Delta(r,s)\bigg{(}\Delta(s,x)\bigg{(}(r,t)-\frac{T(r)T(t)}{3}\bigg{)}-\Delta (r,t)\bigg{(}(x,s)-\frac{T(x)T(s)}{3}\bigg{)}\bigg{)}=0.\] If \(\Delta\equiv 0\), then \((N,\Delta,\#,c)\) is inner. Otherwise, take \(r,s\) such that \(\Delta(r,s)\neq 0\). We may assume that \(T(r)=0\), since \(\Delta(c,s)=0\). Now, we find \(t\in V\) with the property \((r,t)\neq 0\). Denote \(\lambda=\Delta(r,t)/(r,t)\). Hence, \(\Delta(s,x)=\lambda\big{(}(x,s)-\frac{T(x)T(s)}{3}\big{)}\) for all \(x\) and all \(s\) satisfying \(\Delta(r,s)\neq 0\) with fixed \(r\). In particular, \(\Delta(s,r)=\lambda\big{(}(r,s)-\frac{T(r)T(s)}{3}\big{)}\). 
Consider \(s\) such that \(\Delta(r,s)=0\). Then by (72), \(\Delta(r,p)(r,s)=0\) for all \(p\). Hence, \((r,s)=0\) and again \[\Delta(r,s)=\lambda((r,s)-T(r)T(s)/3) \tag{73}\] holds. If for every \(r\neq 0\) such that \(T(r)=0\), one may find a corresponding \(s\) with the property \(\Delta(r,s)\neq 0\), then we have (73). Hence, every \(a\in V\) may be written as \(\mu c+r\), where \(\mu\in F\) and \(T(r)=0\). Then \(\Delta(a,b)=\lambda\big{(}(a,b)-\frac{T(a)T(b)}{3}\big{)}\), where \(\lambda\neq 0\) depends on \(a\) and some \(t\). From (71), we conclude that \(\lambda\) is a constant. If we may find \(r\neq 0\) such that \(T(r)=0\) and \(\Delta(r,s)=0\) for all \(s\in V\), then by (71), \(\Delta(s,x)=0\) for all \(x\) and \(s\not\perp r\) with respect to the form \((\cdot,\cdot)\). Let us take any \(a\) orthogonal to \(r\) and fixed \(s\not\perp r\). Then \(\Delta(a,x)=\Delta(a+s,x)-\Delta(s,x)=0\), so \(\Delta\equiv 0\), a contradiction. Thus, \((N,\Delta,\#,c)\) is inner. \(\square\) Recall that the generalized sharped cubic form on the split spin factor \(S(\alpha,E)\) is inner with \(\lambda=\frac{3\alpha(1-\alpha)}{(1+\alpha)(\alpha-2)}\). Therefore, the relations (67) and (68) are fulfilled on \(S(\alpha,E)\). In SS6, we will prove that these identities hold on \(S(\alpha,t,E)\). **Corollary 2**. Given a vector space \(V\), let \((N,\Delta,\#,c)\) be an inner generalized sharped cubic form on \(V\). Suppose that \((\cdot,\cdot)\) is invariant and nondegenerate, \(\widetilde{(\cdot,\cdot)}\) is \(\#\)-invariant, and \(\dim V\geq 2\). Then a) \(\Delta(s,x)(r,s)=\Delta(r,s)(s,x)\), when either \(T(s)=0\) or \(T(r)=T(x)=0\), b) \(\Delta(s,\Psi(r,s,q))=0\) for all \(r,s,q\in V\). Proof. a) The relation (71) is equivalent by (6) to \(\Delta(s,x)(r,s)=\Delta(r,s)(s,x)\), when either \(T(s)=0\) or \(T(r)=T(x)=0\). b) We consider (68) with \(x=c\). Since \(\Psi(q,s,c)=\Psi(c,s,r)=0\) and by (60), we get \(\Delta(s,\Psi(r,s,q))=0\). \(\square\) ## 6 Sum of the three associators identity Now, we are ready to prove that \(S(\alpha,t,E)\) satisfies the identity \[W_{b}(a,c,d):=((a,b,c),d,b)+((c,b,d),a,b)+((d,b,a),c,b)=0. \tag{74}\] First, we show that \(\widetilde{(\cdot,\cdot)}\) is \(\#\)-invariant on \(S(\alpha,t,E)\). By the proof of Lemma 5, it is enough to check (61). 
We compute for \(r=az_{1}+bz_{2}+v\) and \(q=gz_{1}+hz_{2}+w\): \[(r_{\#}r,q)+3\Delta(r_{\#}r,q)=2(1+\alpha)g((\alpha a+\bar{\alpha }b)b+(\alpha-1)(t-1)\langle v,v\rangle)\\ +2(2-\alpha)h((\alpha a+\bar{\alpha}b)a+\alpha(t-1)\langle v,v \rangle)-2(1+\alpha+(2-\alpha)t)(\bar{\alpha}a+\alpha b)\langle v,w\rangle\\ +6\alpha(\alpha-1)(g-h)((\alpha a+\bar{\alpha}b)(b-a)-(t-1)\langle v,v\rangle)+6(\bar{\alpha}+\alpha t)(\bar{\alpha}a+\alpha b)\langle v,w\rangle; \end{split}\] \[(r_{\#}q,r)+3\Delta(r_{\#}q,r)=(1+\alpha)a(h(\alpha a+\bar{\alpha}b)+b(\alpha g +\bar{\alpha}h)+2(\alpha-1)(t-1)\langle v,w\rangle)\\ +(2-\alpha)b(g(\alpha a+\bar{\alpha}b)+a(\alpha g+\bar{\alpha}h) +2\alpha(t-1)\langle v,w\rangle)\\ -(1+\alpha+(2-\alpha)t)((\bar{\alpha}a+\alpha b)\langle v,w\rangle +(\bar{\alpha}g+\alpha h)\langle v,v\rangle)\] \[+3\alpha(\alpha-1)(a-b)((\alpha a+\bar{\alpha}b)(h-g)+(\alpha g+\bar{\alpha}h)(b-a) -2(t-1)\langle v,w\rangle)\] \[+3(\bar{\alpha}+\alpha t)((\bar{\alpha}a+\alpha b)\langle v,w \rangle+(\bar{\alpha}g+\alpha h)\langle v,v\rangle).\] In both \(\widetilde{(r_{\#}r,q)}\) and \(\widetilde{(r_{\#}q,r)}\), the coefficients at \(\langle v,v\rangle\) equal \(2(2\alpha-1)(t-1)(\bar{\alpha}g+\alpha h)\), at \(\langle v,w\rangle\) equal \(4(2\alpha-1)(t-1)(\bar{\alpha}a+\alpha b)\). The rest summands equal to the same expression \[2(\alpha a+\bar{\alpha}b)((1+\alpha)bg+(2-\alpha)ah+3\alpha(\alpha-1)(a-b)(h-g )).\] Denote the coefficient at \(c\) in (55) as \(\Phi(r,s,q)\). If \(\widetilde{(\cdot,\cdot)}\) is \(\#\)-invariant, then \[\Phi(r,s,q)=1/2(T(r)\Delta(s,q)-T(q)\Delta(r,s)+\Delta(r_{\#}s,q)-\Delta(q_{ \#}s,r)). \tag{75}\] Below, we apply that \(\widetilde{(\cdot,\cdot)}\) is \(\#\)-invariant on \(S(\alpha,t,E)\). By (55), \[((r,s,q),x,s)+((q,s,x),r,s)+((x,s,r),q,s)\\ =\Delta(s,q)(r,x,s)-\Delta(r,s)(q,x,s)+(\Psi(r,s,q),x,s)+\Delta( s,x)(q,r,s)-\Delta(q,s)(x,r,s)\\ +(\Psi(q,s,x),r,s)+\Delta(r,s)(x,q,s)-\Delta(s,x)(r,q,s)+(\Psi(x,s,r),q,s)\\ =\Delta(s,q)((r,s,x)+\Psi(x,s,r))+\Delta(r,s)((x,s,q)+\Psi(q,s,x ))+\Delta(s,x)((q,s,r)+\Psi(r,s,q))\\ -(\Delta(\Psi(r,s,q),x)+\Delta(\Psi(q,s,x),r)+\Delta(\Psi(x,s,r), q))s\\ -(\Phi(\Psi(r,s,q),x,s)+\Phi(\Psi(q,s,x),r,s)+\Phi(\Psi(x,s,r),q, s))c+\Psi_{0}.\] where \[\Psi_{0}=\Psi(\Psi(r,s,q),x,s)+\Psi(\Psi(q,s,x),r,s)+\Psi(\Psi(x,s,r),q,s). 
\tag{76}\] With the help of (55) and Lemma 5, we rewrite the last expression as follows, \[((r,s,q),x,s)+((q,s,x),r,s)+((x,s,r),q,s)\\ =\Delta(s,q)\left(-\Delta(r,s)x+\Delta(x,s)r+\frac{T(x)\Delta(r, s)-T(r)\Delta(x,s)+\Delta(x_{\#}s,r)-\Delta(r_{\#}s,x)}{2}c\right)\\ +\Delta(r,s)\left(-\Delta(x,s)q+\Delta(q,s)x+\frac{T(q)\Delta(s, x)-T(x)\Delta(s,q)+\Delta(q_{\#}s,x)-\Delta(x_{\#}s,q)}{2}c\right)\\ +\Delta(s,x)\left(-\Delta(s,q)r+\Delta(r,s)q+\frac{T(r)\Delta(s, q)-T(q)\Delta(r,s)+\Delta(r_{\#}s,q)-\Delta(q_{\#}s,r)}{2}c\right)\\ -(\Delta(\Psi(r,s,q),x)+\Delta(\Psi(q,s,x),r)+\Delta(\Psi(x,s,r), q))s\\ -(\Phi(\Psi(r,s,q),x,s)+\Phi(\Psi(q,s,x),r,s)+\Phi(\Psi(x,s,r),q, s))c+\Psi_{0}\\ =\frac{1}{2}(\Delta(s,q)(\Delta(x_{\#}s,r)-\Delta(r_{\#}s,x))+ \Delta(r,s)(\Delta(q_{\#}s,x)-\Delta(x_{\#}s,q))\\ +\Delta(s,x)(\Delta(r_{\#}s,q)-\Delta(q_{\#}s,r))-\frac{1}{2}\Delta (s,\Psi(r,s,q)_{\#}x+\Psi(q,s,x)_{\#}r+\Psi(x,s,r)_{\#}q)\\ +\frac{1}{2}(\Delta(\Psi(r,s,q),x_{\#}s)+\Delta(\Psi(q,s,x),r_{\#} s)+\Delta(\Psi(x,s,r),q_{\#}s))\\ +(\Delta(\Psi(r,s,q),x)+\Delta(\Psi(q,s,x),r)+\Delta(\Psi(x,s,r), q))((1/2)T(s)c-s)+\Psi_{0}.\] Further, we will show that \(\Psi_{0}=0\), the identities (67) and (68) are fulfilled on \(S(\alpha,t,E)\) as well as \[\Delta(\Psi(r,s,q),x_{\#}s)+\Delta(\Psi(q,s,x),r_{\#}s)+\Delta(\Psi(x,s,r),q_{\# }s)=0, \tag{77}\] \[\Delta(s,q)(\Delta(x_{\#}s,r)-\Delta(r_{\#}s,x))+\Delta(r,s)( \Delta(q_{\#}s,x)-\Delta(x_{\#}s,q))\\ +\Delta(s,x)(\Delta(r_{\#}s,q)-\Delta(q_{\#}s,r))=0. \tag{78}\] Now, let us explain that the identity (74) does not hold in general even for inner cubic forms. Consider the trivial case \(\Delta\equiv 0\), which may be interpreted as an inner case with \(\lambda=0\). Then the product coming from a sharped cubic form \((N,\#,c)\) is known to be Jordan [9]. To check if (74) is fulfilled for the Jordan algebra, it is enough to study the case of a special Jordan algebra, since the identity has the degree five. Let \(J\) be a special Jordan algebra, i. e. \(J\) is a subalgebra of \(A^{(+)}\), where \(A\) is an associative algebra and the product \(\circ\) in \(A^{(+)}\) is defined as follows, \(a\circ b=ab+ba\). Then \((a,b,c)_{\circ}=bac-bca+cab-acb\). Further, \[((a,b,c)_{\circ},d,b)_{\circ}=b(a,b,c)_{\circ}d+d(a,b,c)_{\circ }b-db(a,b,c)_{\circ}-(a,b,c)_{\circ}bd\\ =b^{2}(acd-cad)+b(ca-ac)bd+db(ac-ca)b+d(ca-ac)b^{2}\\ -db^{2}(ac-ca)+db(ca-ac)b-b(ac-ca)bd-(ca-ac)b^{2}d\\ =b^{2}(acd-cad)+(dca-dac)b^{2}-db^{2}(ac-ca)-(ca-ac)b^{2}d.\] Thus, \[((a,b,c)_{\circ},d,b)_{\circ}+((c,b,d)_{\circ},a,b)_{\circ}+((c,b,d)_{\circ}, a,b)_{\circ}=b^{2}(acd-cad+cda-dca+dac-adc)+\ldots,\] where the first two letters of all rest summands differ from \(b^{2}\). Hence, this expression is nonzero in the case of any associative algebra \(A\), which does not satisfy any identity of degree less than 6. For example, the matrix algebra \(M_{3}(F)\) is such an algebra [7]. The space \(M_{3}(F)\) is equipped with the identity matrix as a basepoint, the determinant as a norm, and a sharp map sends a matrix to its adjoint. Then the associated algebra is isomorphic to \(M_{3}(F)^{(+)}\). Slightly different sharped cubic form on \(H_{3}(C)\), the Hermitian matrices over a Cayley--Dickson algebra \(C\), defines the simple Jordan algebra of Albert type [9]. To derive the identity (74), we need the following result. **Lemma 9**. 
In \(S(\alpha,t,E)\), we have \[\Psi(r,s,q)=(2\alpha-1)(t-1)(\langle u,w\rangle v-\langle u,v\rangle w), \tag{79}\] where \(r=r_{0}+v\), \(s=s_{0}+u\), \(q=q_{0}+w\) f or \(r_{0},s_{0},q_{0}\in Fz_{1}+Fz_{2}\) and \(v,u,w\in E\). Proof. Let us express the associator of the elements \(r=az_{1}+bz_{2}+v\), \(s=kz_{1}+lz_{2}+u\) and \(q=gz_{1}+hz_{2}+w\). We compute \((rs)q\) applying (34): \[(rs)q=(ak+\langle v,u\rangle)gz_{1}+(bl+t\langle v,u\rangle)hz_{2 }+((\alpha k+\bar{\alpha}l)\langle v,w\rangle\\ +(\alpha a+\bar{\alpha}b)\langle u,w\rangle)(z_{1}+tz_{2})+( \alpha g+\bar{\alpha}h)(\alpha k+\bar{\alpha}l)v\\ +(\alpha g+\bar{\alpha}h)(\alpha a+\bar{\alpha}b)u+(\alpha(ak+ \langle v,u\rangle)+\bar{\alpha}(bl+t\langle v,u\rangle))w.\] Analogously, we have \[(qs)r=(gk+\langle w,u\rangle)az_{1}+(hl+t\langle w,u\rangle)bz_{2}+(( \alpha k+\bar{\alpha}l)\langle v,w\rangle\\ +(\alpha g+\bar{\alpha}h)\langle u,v\rangle)(z_{1}+tz_{2})+(\alpha a +\bar{\alpha}b)(\alpha k+\bar{\alpha}l)w\\ +(\alpha g+\bar{\alpha}h)(\alpha a+\bar{\alpha}b)u+(\alpha(kg+ \langle w,u\rangle)+\bar{\alpha}(lh+t\langle w,u\rangle))v.\] Therefore, \[(r,s,q)=(g\langle v,u\rangle-a\langle w,u\rangle)z_{1}+t(h\langle v,u \rangle-b\langle w,u\rangle)z_{2}\\ +((\alpha a+\bar{\alpha}b)\langle u,w\rangle-(\alpha g+\bar{ \alpha}h)\langle u,v\rangle))(z_{1}+tz_{2})\\ -(\alpha(\alpha-1)(a-b)(k-l)-(\alpha+\bar{\alpha}t)\langle v,u \rangle)w+(\alpha(\alpha-1)(k-l)(g-h)-(\alpha+\bar{\alpha}t)\langle w,u\rangle )v.\] It remains to substitute all known summands in (75): \[\Psi(r,s,q)=(r,s,q)-\Delta(q,s)r+\Delta(r,s)q\\ +1/2(T(r)\Delta(s,q)-T(q)\Delta(r,s)+\Delta(r_{\#}s,q)-\Delta(q_ {\#}s,r))c\\ =(g\langle v,u\rangle-a\langle w,u\rangle)z_{1}+t(h\langle v,u \rangle-b\langle w,u\rangle)z_{2}+((\alpha a+\bar{\alpha}b)\langle u,w\rangle- (\alpha g+\bar{\alpha}h)\langle u,v\rangle)(z_{1}+tz_{2})\\ -(\alpha(\alpha-1)(a-b)(k-l)-(\alpha+\bar{\alpha}t)\langle v,u \rangle)w+(\alpha(\alpha-1)(k-l)(g-h)-(\alpha+\bar{\alpha}t)\langle w,u \rangle)v\\ +(\alpha(\alpha-1)(k-l)(g-h)-(\bar{\alpha}+\alpha t)\langle u,w \rangle)(((1+\alpha)a+(2-\alpha)b)(z_{1}+z_{2})/2-r)\\ -(\alpha(\alpha-1)(a-b)(k-l)-(\bar{\alpha}+\alpha t)\langle v,u \rangle)(((1+\alpha)g+(2-\alpha)h)(z_{1}+z_{2})/2-q)\\ +\big{[}\alpha(\alpha-1)(g-h)\big{(}(\alpha a+\bar{\alpha}b)(l-k) +(\alpha k+\bar{\alpha}l)(b-a)-2(t-1)\langle v,u\rangle\big{)}\\ -\alpha(\alpha-1)(a-b)\big{(}(\alpha g+\bar{\alpha}h)(l-k)+( \alpha k+\bar{\alpha}l)(h-g)-2(t-1)\langle u,w\rangle\big{)}\\ +(\bar{\alpha}+\alpha t)((\bar{\alpha}a+\alpha b)\langle u,w \rangle-(\bar{\alpha}g+\alpha h)\langle v,u\rangle)\big{]}(z_{1}+z_{2})/2.\] At \(z_{1}/2\) we have the coefficient \[2(g\langle v,u\rangle-a\langle w,u\rangle+(\alpha a+\bar{\alpha} b)\langle u,w\rangle-(\alpha g+\bar{\alpha}h)\langle v,u\rangle)\\ +(\alpha(\alpha-1)(k-l)(g-h)-(\bar{\alpha}+\alpha t)\langle u,w \rangle)((-1+\alpha)a+(2-\alpha)b)\\ -(\alpha(\alpha-1)(a-b)(k-l)-(\bar{\alpha}+\alpha t)\langle v,u \rangle)((-1+\alpha)g+(2-\alpha)h)\\ +\alpha(\alpha-1)(g-h)\big{(}(\alpha a+\bar{\alpha}b)(l-k)+( \alpha k+\bar{\alpha}l)(b-a)-2(t-1)\langle v,u\rangle\big{)}\\ -\alpha(\alpha-1)(a-b)\big{(}(\alpha g+\bar{\alpha}h)(l-k)+( \alpha k+\bar{\alpha}l)(h-g)-2(t-1)\langle u,w\rangle\big{)}\\ +(\bar{\alpha}+\alpha t)((\bar{\alpha}a+\alpha b)\langle u,w \rangle-(\bar{\alpha}g+\alpha h)\langle v,u\rangle).\] At \(\langle v,u\rangle\), we have \[g(2-2\alpha+(-1+\alpha)(\bar{\alpha}+\alpha t)-2\alpha(\alpha- 1)(t-1)-\bar{\alpha}(\bar{\alpha}+\alpha t))\\ 
+h(-2\bar{\alpha}+(2-\alpha)(\bar{\alpha}+\alpha t)+2\alpha( \alpha-1)(t-1)-\alpha(\bar{\alpha}+\alpha t))=0.\] Analogously, we have zero coefficient at \(\langle w,u\rangle\). The rest summands equal \(\alpha(\alpha-1)\) multiplied by \[(k-l)(g-h)((-1+\alpha)a+(2-\alpha)b)-(a-b)(k-l)((-1+\alpha)g+(2- \alpha)h)\\ +(g-h)((\alpha a+\bar{\alpha}b)(l-k)+(\alpha k+\bar{\alpha}l)(b-a ))-(a-b)((\alpha g+\bar{\alpha}h)(l-k)+(\alpha k+\bar{\alpha}l)(h-g)),\] which is zero. Analogously, we have zero coordinate at \(z_{2}\). Finally, we have \[\Psi(r,s,q)=(-\alpha-\bar{\alpha}t+\bar{\alpha}+\alpha t)(\langle u,w\rangle v- \langle u,v\rangle w)=(2\alpha-1)(t-1)(\langle u,w\rangle v-\langle u,v\rangle w),\] as required. \(\Box\) **Remark 4**. It is easy to clarify, why Corollary 2b is true in \(S(\alpha,t,E)\). Indeed, by (79), we have \(\Delta(s,\Psi(r,s,q))=0\) for \(r=r_{0}+v\), \(s=s_{0}+u\), \(q=q_{0}+w\), where \(r_{0},s_{0},q_{0}\in Fz_{1}+Fz_{2}\), \(v,u,w\in E\), since \[\langle u,\langle u,w\rangle v-\langle u,v\rangle w\rangle=\langle u,v\rangle \langle u,w\rangle-\langle u,v\rangle\langle u,w\rangle=0.\] Let \(\mu=(2\alpha-1)(t-1)\). In the case \(\dim E=2\), take a basis \(e_{1},e_{2}\) of \(E\) such that \(\langle e_{1},e_{2}\rangle=0\). Then for \(v=v_{1}e_{1}+v_{2}e_{2}\), \(u=u_{1}e_{1}+u_{2}e_{2}\), and \(w=w_{1}e_{1}+w_{2}e_{2}\), we have \[\Psi(r,s,q)=\mu(v_{1}w_{2}-v_{2}w_{1})(u_{2}e_{1}-u_{1}e_{2}).\] Thus, \(\Psi(r,s,q)\) is proportional to the vector \(u^{\perp}=u_{2}e_{1}-u_{1}e_{2}\), which is orthogonal to \(u\) with respect to \(\langle\cdot,\cdot\rangle\). **Corollary 3**. In \(S(\alpha,t,E)\), the relation \(N(\Psi(r,s,q))=0\) holds for all \(r,s,q\). Hence, \(\Psi(r,s,q)^{3}=S(\Psi(r,s,q))\Psi(r,s,q)\). An algebra \(A\), in which every element satisfies the equality \(x^{3}=\varphi(x,x)x\) for some bilinear form \(\varphi\), is called pseudo-composition algebra [11]. **Corollary 4**. In \(S(\alpha,t,E)\), we have \(\Delta(\Psi(r,s,q),x_{\#}s)=\Delta(\Psi(r,s,q)_{\#}s,x)\). Put \(s_{0}=kz_{1}+lz_{2}\). Applying the definition and Remark 4, we get \[\Delta(\Psi(r,s,q)_{\#}s,x)=-(\bar{\alpha}k+\alpha l)\Delta(\Psi(r,s,q),x)=( \bar{\alpha}k+\alpha l)(\bar{\alpha}+\alpha t)\langle\Psi(r,s,q),y\rangle,\] \[\Delta(\Psi(r,s,q),s_{\#}x)=-(\bar{\alpha}+\alpha t)\langle\Psi(r,s,q),s_{\#} x|_{E}\rangle=(\bar{\alpha}+\alpha t)(\bar{\alpha}k+\alpha l)\langle\Psi(r,s,q),y\rangle,\] hence, the required formula is proved. **Remark 5**. Let us fix \(u\in E\), then the product \([v,w]:=\Psi(v,u,w)\) is a Lie one [13], thus, \[\Psi(\Psi(v,u,w),u,x)+\Psi(\Psi(w,u,x),u,v)+\Psi(\Psi(x,u,v),u,w)=0\] holds for all \(v,u,w,x\in E\). Further, the ternary product \([v,u,w]:=\Psi(v,w,u)\) defines a Lie triple system, i. e. the following identities for \([\cdot,\cdot,\cdot]\) hold: \[[x,y,z]+[y,x,z]=0,\quad[x,y,z]+[y,z,x]+[z,x,y]=0,\] \[[x,y,[u,v,w]]=[[x,y,u],v,w]+[u,[x,y,v],w]+[u,v,[x,y,w]],\] more about triple systems see [5]. We believe that such construction of a Lie triple system via an inner product is known, however, we are not able to find a suitable reference. **Theorem 3**. The identity (74) holds on the algebra \(S(\alpha,t,E)\). Proof. Denote \(\mu=(2\alpha-1)(t-1)\). 
The identity (67) holds on \(S(\alpha,t,E)\), since by Lemma 9, for \(r=r_{0}+v\), \(s=s_{0}+u\), \(q=q_{0}+w\), and \(x=x_{0}+y\), where \(r_{0},s_{0},q_{0},x_{0}\in Fz_{1}+Fz_{2}\), \(v,u,w,y\in E\), we have \[\Delta(\Psi(r,s,q),x)+\Delta(\Psi(q,s,x),r)+\Delta(\Psi(x,s,r),q)\\ =\mu(\langle u,w\rangle\Delta(v,x)-\langle u,v\rangle\Delta(w,x)+ \langle u,y\rangle\Delta(w,r)-\langle u,w\rangle\Delta(y,r)\\ +\langle u,v\rangle\Delta(y,q)-\langle u,y\rangle\Delta(v,q))=0.\] Let us verify that (68) is fulfilled on \(S(\alpha,t,E)\). Denote \(s_{0}=kz_{1}+lz_{2}\). We apply Lemma 9: \[\Psi(r,s,q)_{\#}x=\mu(\langle u,w\rangle v-\langle u,v\rangle w)_{ \#}x=2\mu(t-1)(-\bar{\alpha}z_{1}+ \alpha z_{2})(\langle u,w\rangle\langle v,y\rangle-\langle u,v\rangle \langle w,y\rangle)\\ -\mu\chi(x)(\langle u,w\rangle v-\langle u,v\rangle w),\] where \(\chi(p_{1}z_{1}+p_{2}z_{2}+\omega)=\bar{\alpha}p_{1}+\alpha p_{2}\). Hence, by Remark 4, we get \[\Delta(s,\Psi(r,s,q)_{\#}x)=-2\mu(t-1)\alpha(\alpha-1)(k-l)(\langle u,w\rangle \langle v,y\rangle-\langle u,v\rangle\langle w,y\rangle). \tag{80}\] Define \(\pi=-2\mu(t-1)\alpha(\alpha-1)(k-l)\). Then \[\Delta(s,\Psi(r,s,q)_{\#}x+\Psi(q,s,x)_{\#}r+\Psi(x,s,r)_{\#}q)\\ =\pi(\langle u,w\rangle\langle v,y\rangle-\langle u,v\rangle \langle w,y\rangle+\langle u,y\rangle\langle v,w\rangle-\langle u,w\rangle \langle v,y\rangle+\langle v,u\rangle\langle w,y\rangle-\langle u,y\rangle \langle v,w\rangle)=0.\] Let us prove the relation (77). By Lemma 9, \[\Delta(\Psi(r,s,q),x_{\#}s)=\mu\Delta(\langle u,w\rangle v- \langle u,v\rangle w,-\chi(x)u-\chi(s)y)\\ =\mu\chi(s)(\bar{\alpha}+\alpha t)(\langle u,w\rangle\langle v,y \rangle-\langle u,v\rangle\langle w,y\rangle).\] As above (see (80)), we conclude that (77) holds. Now, we check (78). First, we involve (58) and then (12) and (26): \[-3(\Delta(s,q)(\Delta(x_{\#}s,r)-\Delta(r_{\#}s,x))+\Delta(r,s) (\Delta(q_{\#}s,x)-\Delta(x_{\#}s,q))\\ +\Delta(s,x)(\Delta(r_{\#}s,q)-\Delta(q_{\#}s,r))=\Delta(s,q)((x_ {\#}s,r)-(r_{\#}s,x))\\ +\Delta(r,s)((q_{\#}s,x)-(x_{\#}s,q))+\Delta(s,x)((r_{\#}s,q)-(q_ {\#}s,r)\\ =2(\Delta(s,q)((xs,r)-(x,sr))+\Delta(r,s)((qs,x)-(q,sx))+\Delta(s,x)((rs,q)-(r,sq)))\\ +\Delta(s,q)(T(x)\Delta(s,r)-T(r)\Delta(s,x))+\Delta(s,r)(T(q) \Delta(s,x)-T(x)\Delta(s,q))\\ +\Delta(s,x)(T(r)\Delta(s,q)-T(q)\Delta(s,r))\\ =2(\Delta(s,q)((xs,r)-(x,sr))+\Delta(r,s)((qs,x)-(q,sx))+\Delta( s,x)((rs,q)-(r,sq))). \tag{81}\] By (33) and (34), we compute \[(rs,q)-(r,sq)=(1+\alpha)g(ak+\langle v,u\rangle)+(2-\alpha)h(bl+ t\langle v,u\rangle)\\ +(1+\alpha+(2-\alpha)t)((\alpha a+\bar{\alpha}b)\langle u,w\rangle +(\alpha k+\bar{\alpha}l)\langle v,w\rangle)\\ -(1+\alpha)a(gk+\langle u,w\rangle)+(2-\alpha)b(hl+t\langle u,w\rangle) \\ +(1+\alpha+(2-\alpha)t)((\alpha g+\bar{\alpha}h\langle u,v\rangle +(\alpha k+\bar{\alpha}l)\langle v,w\rangle)\\ =(1-\alpha^{2}+\alpha(\alpha-2)t)((g-h)\langle v,u\rangle-(a-b )\langle u,w\rangle). \tag{82}\] Denote \(\nu=(1-\alpha^{2}+\alpha(\alpha-2)t)\) and let \(x_{0}=mz_{1}+nz_{2}\). Thus, \[\Delta(s,x)((rs,q)-(r,sq))=\nu\alpha(\alpha-1)(k-l)\big{(}(m-n)(g -h)\langle v,u\rangle-(m-n)(a-b)\langle u,w\rangle\big{)}\\ +\nu(\bar{\alpha}+\alpha t)((a-b)\langle u,w\rangle\langle u,y \rangle-(g-h)\langle u,v\rangle\langle u,y\rangle).\] The analogous expressions for \(\Delta(s,q)((xs,r)-(x,sr))\) and \(\Delta(r,s)((qs,x)-(q,sx))\) joint provide that (81) equals zero. Finally, it remains to prove that \(\Psi_{0}=0\), see (76). 
By Lemma 9, we express \[\Psi(\Psi(r,s,q),x,s)=\mu^{2}(\langle v,u\rangle\langle w,y\rangle u-\langle w,u\rangle\langle y,v\rangle u-\langle y,u\rangle\langle v,u\rangle w+\langle w,u\rangle\langle y,u\rangle v),\] Hence, \[\Psi_{0}=\mu^{2}(\langle v,u\rangle\langle w,y\rangle u-\langle w,u\rangle\langle y,v\rangle s-\langle y,u\rangle\langle v,u\rangle w+\langle w,u\rangle\langle y,u\rangle v\\ +\langle w,u\rangle\langle y,v\rangle u-\langle y,u\rangle \langle v,w\rangle u-\langle v,u\rangle\langle w,u\rangle y+\langle y,u \rangle\langle v,u\rangle w\\ +\langle y,u\rangle\langle v,w\rangle u-\langle v,u\rangle \langle w,y\rangle u-\langle w,u\rangle\langle y,u\rangle v+\langle v,u \rangle\langle w,u\rangle y)=0.\] The statement is proved. \(\square\) **Remark 6**. Due to (82), we see that the form \((\cdot,\cdot)\) is invariant on \(S(\alpha,E)\), as it was noted in [10]. **Remark 7**. Let us return to Example 1. We may introduce the bilinear form \(\widetilde{(r,q)}=(r,q)+\Delta(r,q)\). Then \[T(r_{\#}q)=(1-4\lambda)T(r)T(q)-\widetilde{(r,q)},\quad\widetilde{(r_{\#}s,q )}=\widetilde{(r,s_{\#}q)}.\] Define the trilinear form \(\Psi\) by the formula (56). Denote \(r=(a,b,c)\), \(s=(i,j,k)\), and \(q=(e,f,g)\). Thus, we have \[\Psi(r,s,q)=\lambda(j(-ag+ce+bg-cf)+k(-af+be-bg+cf),\\ i(ag-ce-bg+cf)+k(af-be-ag+ce),\ i(af-be+bg-cf)+j(-af+be+ag-ce)).\] Then the relations (65), (67), (68) and (76) are fulfilled. Further, the identity of the three associators holds, and the ternary product \([v,u,w]:=\Psi(v,w,u)\) defines a Lie triple system, see the code in GAP [3]. Moreover, the identity (74) is fulfilled on the space \(V=A\otimes_{F}F^{3}\cong A^{\otimes 3}\), where \(A=F[\lambda]\). ## 7 Identities In this section, we prove that the algebra \(S(\alpha,t,E)\) does not satisfy any polynomial identity of degrees 3 and 4, and all identities of degree 5 satisfied by \(S(\alpha,E)\) follow from commutativity and the identity (74). In 1989, S.Yu. Vasilovsky found a basis of the \(T\)-ideal of identities fulfilled on the simple Jordan algebra of a nondegenerate form considered over a field of characteristic 0 [14]. One of them has the close form \((d,(a,b,c),b)+(a,(c,b,d),b)+(c,(d,b,a),b)=0\). In [12], it was proved that if a commutative (non-associative) unital algebra \(A\) over a field of characteristics not 2 or 3 satisfies an identity of degree 4 not implied by the commutative law, then \(A\) satisfies at least one of the following three identities: \[(x^{2}x)x=x^{2}x^{2}, \tag{83}\] \[2((yx)x)x+yx^{3}=3(yx^{2})x,\] (84) \[2(y^{2}x)x+2(x^{2}y)y+(yx)(yx)=2((yx)y)x+2((yx)x)y+y^{2}x^{2}. \tag{85}\] Then we have the following: **Lemma 10**. Let \(E\) has a dimension \(n\geq 1\) and \(\alpha,t\notin\{0,1\}\). Then every identity of degree no more than \(4\) in the algebra \(S(\alpha,t,E)\) follows from commutativity. Proof. To prove the statement, it is enough to show that the algebra \(S(\alpha,t,E)\) does not satisfy the identities (83)-(85). First, let us show the identity (83) does not hold. Consider the left hand-side of (83) and set \(x=e\in E\) such that \(\langle e,e\rangle=1\), then we have \[(e^{2}e)e=((z_{1}+tz_{2})e)e)=(\alpha e+t(1-\alpha)e)e=(\alpha+t(1-\alpha))(z _{1}+tz_{2}).\] The right-hand side of (83) for \(x=e\) gives \[e^{2}e^{2}=(z_{1}+tz_{2})(z_{1}+tz_{2})=z_{1}+t^{2}z_{2}.\] Since \(t\neq 0,1\), we conclude that the identity (83) does not hold. To show that (84) does not hold, it is enough to consider \(x=e\in E\) such that \(\langle e,e\rangle=1\) and \(y=z_{1}\). 
Then the left-hand side of (84) equals \[2((z_{1}e)e)e+z_{1}e^{3}=3\alpha(\alpha+t(1-\alpha))e,\] while the right-hand side of (84) gives \[3(z_{1}e^{2})e=3\alpha e.\] We see that the right-hand sides of (86) and (87) are equal if and only if \(t=1\). By the conditions, \(t\neq 1\) and therefore the identity (84) does not hold. Now we consider (85). Define \[\phi(x,y)=2(y^{2}x)x+2(x^{2}y)y+(yx)(yx)-2((yx)y)x-2((yx)x)y-y^{2}x^{2}.\] Then \(\phi(e,z_{1})=(1-\alpha^{2})z_{1}+t\alpha(2-\alpha)z_{2}\neq 0\) for any \(\alpha,t\not\in\{0,1\}\). Consequently, (85) is not an identity in the algebra \(S(\alpha,t,E)\). \(\square\) In [12], the list of all irreducible relative to commutativity identities of degree five is given. There are exactly five such identities, and the fourth of them [12, eq.\(\,\)(15)] with \(\delta_{2}=-\delta_{1}\neq 0\) is nothing more than (74) with one of the three variables \(a,c,d\) equal to \(b\), e. g., \(d=b\). The proof of the following theorem is established through computations conducted with the assistance of software programs such as Wolfram Mathematica and Albert [1]. **Theorem 4**. Let \(E\) has a dimension \(n\geq 2\) and \(\alpha\notin\{-1,0,1/2,1,2\}\). Every identity of degree no more than \(5\) in the algebra \(S(\alpha,E)\) over a field of characteristics \(0\) is a consequence of commutativity and (74). Proof. By Lemma 10, it remains to show that there are no identities in degree \(5\), which do not follow from commutativity and the identity (74). Let \({\cal W}(X)\) denote a free algebra defined by identities of commutativity and (74), which is generated by a set \(X\). Since we deal with a field of characteristics \(0\), then every polynomial identity is equivalent to a set of multilinear identities [17]. Let \({\cal P}\) be a monomial basis of the multilinear part of degree \(5\) of the free commutative algebra \({\rm Com}(X)\). Then \({\cal P}\) consists of the \(60\) monomials of the type \((((**)*)*)*)\) 30 monomials of the type \(((**)*)(**)\), and 15 monomials of the type \(((**)(**))*\). Define the set \[\begin{array}{llllll}\mathcal{Z}=&\{((x_{3}x_{5})x_{4})(x_{1}x_{2}),&((x_{4}x_ {5})x_{3})(x_{1}x_{2}),&((x_{2}x_{5})x_{4})(x_{1}x_{3}),&((x_{4}x_{5})x_{2})(x_ {1}x_{3}),\\ &((x_{2}x_{5})x_{3})(x_{1}x_{4}),&((x_{3}x_{5})x_{2})(x_{1}x_{4}),&(((x_{1}x_{5} )x_{4})x_{3})x_{2},&(((x_{2}x_{5})x_{4})x_{3})x_{1},\\ &(((x_{3}x_{5})x_{4})x_{2})x_{1},&(((x_{4}x_{5})x_{3})x_{2})x_{1}\}.\end{array}\] To construct a monomial basis \(\mathcal{B}\) of the multilinear part of degree 5 of \(\mathcal{W}(X)\), where \(X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}\), we employ the software program Albert and obtain 95 basic monomials. We can represent the set of multilinear basic monomials as \(\mathcal{B}=\mathcal{P}\setminus\mathcal{Z}\). If there exists a multilinear polynomial identity of degree 5 fulfilled on \(S(\alpha,t,E)\), which does not follow from commutativity and (74), then it can be represented as a linear combination of monomials from \(\mathcal{B}\). Let us define a linear combination of elements in \(\mathcal{B}\) as \[\psi(x_{1},x_{2},x_{3},x_{4},x_{5})=\sum_{b_{i}\in\mathcal{B}}\lambda_{i}b_{i}.\] To establish the theorem, it is necessary to demonstrate the linear independence of monomials from \(\mathcal{B}\). To achieve this, we use the Wolfram Mathematica software tool. A special code has been developed for calculating all substitutions, extracting homogeneous equations from them and solving them [3]. 
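The explicit product (34) also makes it straightforward to spot-check individual identities numerically. For instance, the sketch below — only an illustration, independent of the routine in [3], with \(F=\mathbb{Q}\), \(E=\mathbb{Q}^{2}\) under the standard dot product and fixed rational \(\alpha,t\) taken as assumptions — confirms that the left-hand side of (74) vanishes on randomly chosen quadruples, in line with Theorem 3.

```python
# Illustrative spot-check that the left-hand side of (74) vanishes on S(alpha,t,E),
# using the explicit product (34); the parameters and the form on E are assumptions.
from fractions import Fraction as Fr
import random

alpha, t = Fr(3, 4), Fr(5, 2)
abar = 1 - alpha
unit = (Fr(1), Fr(1), Fr(0), Fr(0))              # c = z1 + z2, with E = Q^2

def mul(r, s):                                   # product (34); elements (a, b, v1, v2)
    a, b, v = r[0], r[1], r[2:]
    k, l, u = s[0], s[1], s[2:]
    vu = sum(x * y for x, y in zip(v, u))
    e = tuple((alpha * k + abar * l) * x + (alpha * a + abar * b) * y
              for x, y in zip(v, u))
    return (a * k + vu, b * l + t * vu) + e

sub = lambda r, s: tuple(x - y for x, y in zip(r, s))
assoc = lambda x, y, z: sub(mul(mul(x, y), z), mul(x, mul(y, z)))

def W(a, b, c, d):                               # left-hand side of (74)
    parts = (assoc(assoc(a, b, c), d, b), assoc(assoc(c, b, d), a, b),
             assoc(assoc(d, b, a), c, b))
    return tuple(sum(p) for p in zip(*parts))

random.seed(1)
rnd = lambda: tuple(Fr(random.randint(-4, 4)) for _ in range(4))
for _ in range(200):
    a, b, c, d = rnd(), rnd(), rnd(), rnd()
    assert mul(a, unit) == a                     # c = z1 + z2 is the unit
    assert all(x == 0 for x in W(a, b, c, d))    # identity (74)
print("(74) vanished on all sampled quadruples")
```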
Since \(\dim E\geq 2\), it is enough to show that all identities in degree 5 fulfilled on \(S(\alpha,t,E_{0})\), where \(\dim E_{0}=2\), follow from commutativity and (74). Let us choose a basis \(e,f\) of \(E_{0}\) such that \(\langle e,e\rangle=\langle f,f\rangle=1\) and \(\langle e,f\rangle=0\). We use the function \(\mathtt{Tuples}[\{z_{1},z_{2},e,f\},5]\) to generate all 1024 possible permutations of length 5 using the basic elements \(\{z_{1},z_{2},e,f\}\) and substitute them into \(\psi(x_{1},x_{2},x_{3},x_{4},x_{5})\). Employing the function \(\mathtt{Union[]}\), we express the obtained polynomials in terms of the coefficients \(\lambda_{i}\) and the basic elements \(\{z_{1},z_{2},e,f\}\). This yields a set of 635 polynomials. Further, we express these polynomials by collecting coefficients at the basic elements \(\{z_{1},z_{2},e,f\}\) and extract the coefficients corresponding to these elements with the functions \(\mathtt{Collect[]}\) and \(\mathtt{Coefficient[]}\). By employing the function \(\mathtt{Union[]}\) once more, we reduce the number of polynomials to 498. Then we consider the system of equations formed by setting all these polynomials equal to zero. This system of equations is expressed in the coefficients \(\lambda_{i}\), where \(i\in\{1,\ldots,95\}\). The only trivial solution that emerges is \(\lambda_{i}=0\) for \(\alpha\notin\{-1,0,\frac{1}{2},1,2\}\), where \(i\in\{1,\ldots,95\}\). This result demonstrates that the monomials involved in the linear combination \(\psi(x_{1},x_{2},x_{3},x_{4},x_{5})\) are linearly independent. This completes the proof. \(\square\) **Remark 8**. The above theorem is valid when \(t=(\alpha^{2}-1)/\alpha(\alpha-2)\). However, it is essential to note that for a general value of \(t\) that does not depend on \(\alpha\), the algebra \(S(\alpha,t,E)\) can have an identity of degree 5 which does not follow from commutativity and (74). For example, for \(t=5\) and \(\alpha=11/4\), there is an identity \[((c,a,e),b,d)+((e,a,d),b,c)+((d,a,c),b,e)\\ +(c,b,a)[R_{d},R_{e}]+(d,b,a)[R_{e},R_{c}]+(e,b,a)[R_{c},R_{d}]=0,\] which does not follow from commutativity and (74). The validity of the identity can be checked using a program given in [6] or requiring a program from the authors. Open problems We finish the work with several open problems concerned the subject. * Find an identity, which does not follow from commutativity and is fulfilled on every algebra associated to a generalized cubic form. * Does the identity (62) follow from the definition of generalized sharped cubic form, or there exists a counterexample to it? * Given a generalized sharped cubic form \((N,\Delta,\#,c)\), which satisfies (62), is it true that \(N(\Psi(r,s,q))=0\) for all \(r,s,q\)? * Find the basis of the \(T\)-ideal of identities fulfilled on \(S(\alpha,t,E)\). ## 9 Acknowledgments V. Gubarev is supported by Mathematical Center in Akademgorodok under agreement No. 075-15-2022-281 with the Ministry of Science and Higher Education of the Russian Federation. A.S. Panasenko is supported by the Program of fundamental scientific researches of Russian Academy of Sciences, project FWNF-2022-0002. The results of SS2 are supported by the Program of fundamental scientific researches of Russian Academy of Sciences, project FWNF-2022-0002. The results of SS4-6 are supported by Mathematical Center in Akademgorodok under agreement No. 075-15-2022-281 with the Ministry of Science and Higher Education of the Russian Federation.
2305.19947
A Geometric Perspective on Diffusion Models
Recent years have witnessed significant progress in developing effective training and fast sampling techniques for diffusion models. A remarkable advancement is the use of stochastic differential equations (SDEs) and their marginal-preserving ordinary differential equations (ODEs) to describe data perturbation and generative modeling in a unified framework. In this paper, we carefully inspect the ODE-based sampling of a popular variance-exploding SDE and reveal several intriguing structures of its sampling dynamics. We discover that the data distribution and the noise distribution are smoothly connected with a quasi-linear sampling trajectory and another implicit denoising trajectory that even converges faster. Meanwhile, the denoising trajectory governs the curvature of the corresponding sampling trajectory and its finite differences yield various second-order samplers used in practice. Furthermore, we establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm, with which we can characterize the asymptotic behavior of diffusion models and identify the empirical score deviation. Code is available at \url{https://github.com/zju-pi/diff-sampler}.
Defang Chen, Zhenyu Zhou, Jian-Ping Mei, Chunhua Shen, Chun Chen, Can Wang
2023-05-31T15:33:16Z
http://arxiv.org/abs/2305.19947v3
# A Geometric Perspective on Diffusion Models ###### Abstract Recent years have witnessed significant progress in developing efficient training and fast sampling approaches for diffusion models. A recent remarkable advancement is the use of stochastic differential equations (SDEs) to describe data perturbation and generative modeling in a unified mathematical framework. In this paper, we reveal several intriguing geometric structures of diffusion models and contribute a simple yet powerful interpretation to their sampling dynamics. Through carefully inspecting a popular variance-exploding SDE and its marginal-preserving ordinary differential equation (ODE) for sampling, we discover that the data distribution and the noise distribution are smoothly connected with an explicit, quasi-linear _sampling trajectory_, and another implicit _denoising trajectory_, which even converges faster in terms of visual quality. We also establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm, with which we can characterize the asymptotic behavior of diffusion models and identify the score deviation. These new geometric observations enable us to improve previous sampling algorithms, re-examine latent interpolation, as well as re-explain the working principles of distillation-based fast sampling techniques. ## 1 Introduction Diffusion models, or score-based generative models [22, 23, 24, 25] have attracted growing attention and seen impressive success in various domains, including image [16, 21, 28, 29], video [20, 21, 23], audio [14, 25], and especially text-to-image generation [15, 26, 27]. Such models are essentially governed by a certain kind of stochastic differential equations (SDEs) that smooth data into noise in a forward process and then generate data from noise in a backward process [25]. Generally, the forward SDE is formulated as a spectrum of Gaussian _kernel density estimation_ of the original data distribution with a specifically designed scaling factor and bandwidth [25]. As such, one can couple (theoretically infinite) data-noise pairs and train a noise-dependent neural network (_i.e._, the diffusion model) to minimize the least square error for data reconstruction [11]. Once such a denoising model with sufficient capacity is well optimized, it will faithfully capture the score (gradient of the log-density _w.r.t._ the input) of the data density smoothed with various levels of noise [25, 26, 27]. The generative ability is then emerged by simulating the (score-based) backward SDE with any numerical solvers [25]. Alternatively, we can simulate the corresponding ordinary differential equation (ODE) that preserves the same marginal distribution as the SDE [25, 26, 27, 28]. The deterministic ODE-based sampling gets rid of the stochasticity apart from the randomness of drawing initial samples, and thus makes the whole generative procedure more comprehensible and controllable [11]. However, more details about how diffusion models behave under this dense mathematical framework are still largely unknown. In this paper, we provide a geometric perspective to deepen our understanding of diffusion models, especially the sampling dynamics. The state-of-the-art variance-exploding SDE [11] is taken as an example to reveal the underlying intriguing structures. Our empirical observations are illustrated in Figure 1. 
Intuitively, given an initial sample from the noise distribution, the difference between its denoising output and its current position forms the scaled score for simulating the sampling trajectory. This explicit trajectory is almost straight such that the ODE simulation can be greatly accelerated at a modest cost of truncation error. Furthermore, the denoising output itself forms another implicit trajectory that starts near the final sample and quickly appears decent visual quality. These two simple and smooth trajectories depict the characters of ODE-based sampling and we further establish a theoretical relationship between the optimal ODE-based sampling and annealed mean shift to understand the asymptotic behavior of diffusion models. Additionally, we provide several applications to demonstrate the potential of our geometric perspective to reform existing practices of diffusion models, such as speeding up previous samplers, re-examining latent interpolation and re-interpreting distillation-based fast sampling techniques. ## 2 Score-Based Generative Models We begin with a brief overview of the basic concepts in developing score-based generative models. To enable effective generative modeling, we are required to bridge the data distribution \(p_{d}(\mathbf{x})\) with a non-informative tractable distribution \(p_{n}(\mathbf{x})\). Nowadays, a prevailing and promising approach is score-based generative modeling [13, 14, 15], which can be formulated into a concise framework from the lens of _stochastic differential equations_ (SDEs) [13, 14]. With this powerful tool, the data perturbation is modeled as a continuous stochastic process \(\{\mathbf{x}_{t}\}_{t=0}^{T}\): \[\mathrm{d}\mathbf{x}=\mathbf{f}(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d} \mathbf{w}_{t},\qquad\mathbf{f}(\cdot,t):\mathbb{R}^{d}\to\mathbb{R}^{d}, \quad g(\cdot):\mathbb{R}\to\mathbb{R}, \tag{1}\] where \(\mathbf{w}_{t}\) is the standard Wiener process; \(\mathbf{f}(\cdot,t)\) and \(g(t)\) are drift and diffusion coefficients, respectively [16]. We denote the distribution of \(\mathbf{x}_{t}\) as \(p_{t}(\mathbf{x})\) and such an Ito SDE can smoothly transform the data distribution \(p_{0}(\mathbf{x})=p_{d}(\mathbf{x})\) to the (approximate) noise distribution \(p_{T}(\mathbf{x})\approx p_{n}(\mathbf{x})\) in a forward manner. By properly setting the coefficients, some established models referred to as variance-preserving (VP) and variance-exploding (VE) SDEs can be recovered [13, 15, 16]. The reversal of Eq. (1) is another SDE that allows to synthesize data from noise in a backward manner [1]. Remarkably, there exists a _probability flow ordinary differential equation_ (PF-ODE) sharing the same marginal distribution \(\{p_{t}(\mathbf{x})\}_{t=0}^{T}\) at each time step of the diffusion process: \[\mathrm{d}\mathbf{x}=\left[\mathbf{f}(\mathbf{x},t)\mathrm{d}t-\frac{1}{2}g(t )^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\right]\mathrm{d}t. \tag{2}\] The deterministic nature of ODE enjoys several benefits such as efficient sampling, unique encoding, and meaningful latent manipulations [13, 14]. We thus choose Eq. (2) to analyze model behaviors throughout this paper. Simulating the above ODE requests having the _score function_\(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) in hand, which is typically estimated with the _denoising score matching_ (DSM) criterion [17, 18, 19]. From the perspective of _empirical Bayes_[1, 19, 18], there exists a Figure 1: A geometric perspective of ODE-based sampling in diffusion models. 
An initial sample (from the noise distribution) starts from a big sphere and converges to its final sample (in the data manifold) along a smooth, quasi-linear sampling trajectory. Meanwhile, its denoising output lays in an implicit, smooth denoising trajectory starting from the approximate dataset mean. The denoising output is very close to the final sample and enjoys much faster convergence in terms of visual quality. profound connection between DSM and _denoising autoencoder_ (DAE) [20, 14, 13] (see Appendix B.1). Therefore, we can equivalently obtain the score function _at each noise level_ by solving the corresponding least squares estimation: \[\mathbb{E}_{\mathbf{x}\sim p_{d}}\mathbb{E}_{\mathbf{z}\sim\mathcal{N}( \mathbf{0},\sigma^{2}\mathbf{I})}\|r_{\mathbf{\theta}}\left(\hat{\mathbf{x}}; \sigma_{t}\right)-\mathbf{x}\|_{2}^{2},\qquad\hat{\mathbf{x}}=\mathbf{x}+ \mathbf{z},\quad\sigma_{t}=\sqrt{\int_{0}^{t}g(\xi)^{2}\mathrm{d}\xi}. \tag{3}\] The optimal estimator \(r_{\mathbf{\theta}}^{\star}\left(\hat{\mathbf{x}};\sigma_{t}\right)\) equals to \(\hat{\mathbf{x}}+\sigma_{t}^{2}\nabla_{\hat{\mathbf{x}}}\log p_{t}(\hat{ \mathbf{x}})\) as revealed in the literature [11, 14]. Unless otherwise specified, we follow the configurations of VE SDEs with \(\mathbf{f}(\mathbf{x},t)=\mathbf{0}\) and \(g(t)=\sqrt{2t}\)[13, 12]. In this case, \(\sigma_{t}=t\), the perturbation kernel \(p_{t}(\hat{\mathbf{x}}|\mathbf{x})=\mathcal{N}(\hat{\mathbf{x}};\mathbf{x},t^ {2}\mathbf{I})\), and the Parzen window density \(p_{t}(\hat{\mathbf{x}})=\int p_{\delta}(\mathbf{x})p_{t}(\hat{\mathbf{x}}| \mathbf{x})\mathrm{d}\mathbf{x}\) with \(p_{\delta}(\mathbf{x})\) as the empirical data distribution. After training, we can leverage the empirical PF-ODE for sampling: \[\mathrm{d}\mathbf{x}=-\frac{r_{\mathbf{\theta}}\left(\mathbf{x};t\right)-\mathbf{x }}{t}\mathrm{d}t. \tag{4}\] Specifically, we first draw \(\hat{\mathbf{x}}_{T}\sim p_{n}(\mathbf{x})=\mathcal{N}(\mathbf{0},T^{2} \mathbf{I})\) and then numerically solve the ODE backwards with \(N\) steps to obtain a discrete sequence \(\{\hat{\mathbf{x}}_{s}\}\) with \(s\in\{s_{0}=0,s_{1},\cdots,s_{N}=T\}\). The final sample \(\hat{\mathbf{x}}_{s_{0}}\) is considered to approximately follow the data distribution \(p_{d}(\mathbf{x})\). ## 3 Visualization of High Dimensional Trajectory In this section, we present several viewpoints to inspect the trajectory of probability flow ODE in high-dimensional space. We follow the experimental settings of a recent and influential framework called EDMs [13]. Specifically, we focus on a forward VE SDE with \(\mathrm{d}\mathbf{x}=\sqrt{2t}\,\mathrm{d}\mathbf{w}_{t}\) and its empirical ODE as Eq. (4) for sampling. We mostly take unconditional generation on the CIFAR-10 dataset as an example to demonstrate our observations. The conclusions also hold on other datasets (such as LSUN Cat, LSUN Bedroom) and other model settings (such as conditional generation, various network architectures). More results and implementation details are provided in Appendix A. ### Magnitude Expansion/Shrinkage As discussed in Section 2, the forward diffusion process is generally interpreted as a progressive smoothing from the data distribution to the noise distribution with a Gaussian kernel \(p_{t}(\hat{\mathbf{x}}|\mathbf{x})\). In contrast, we further paraphrase it as the expansion of magnitude and manifold, which means that samples escape from the original _small-magnitude low-rank_ manifold and settle into a _large-magnitude high-rank_ manifold. 
The following proposition gives us a glimpse of the geometric structure in high dimensions: **Proposition 1**.: _Given a high-dimensional vector \(\mathbf{x}\in\mathbb{R}^{d}\) and an isotropic Gaussian noise \(\mathbf{z}\sim\mathcal{N}\left(\mathbf{0};\sigma^{2}\mathbf{I}_{d}\right)\), \(\sigma>0\), we have \(\mathbb{E}\left\|\mathbf{z}\right\|^{2}=\sigma^{2}d\), and with high probability, \(\mathbf{z}\) stays within a "thin shell": \(\|\mathbf{z}\|=\sigma\sqrt{d}\pm O(1)\). Additionally, \(\mathbb{E}\left[\|\mathbf{x}+\mathbf{z}\|^{2}-\|\mathbf{x}\|^{2}\right]= \sigma^{2}d\), \(\lim_{d\to\infty}\mathbb{P}\left(\|\mathbf{x}+\mathbf{z}\|>\|\mathbf{x}\| \right)=1\)._ The proofs are provided in Appendix B.2. Proposition 1 implies that in the forward process, the squared magnitude of the noisy sample \(\mathbf{x}+\mathbf{z}\) is expected to be larger than that of the original sample \(\mathbf{x}\), and their magnitude gap becomes especially huge for the high-dimensional case \(d\gg 1\) and severe noise case \(\sigma\gg 0\). We can further conclude that asymptotically (\(d\to\infty\)), the sample magnitude will expand with probability one and the isotropic Gaussian noise will distribute as a uniform distribution on the sphere, _i.e._, \(\mathbf{z}\sim\mathcal{N}(\mathbf{0};\sigma^{2}\mathbf{I}_{d})=\mathrm{Unif}( \sigma\sqrt{d}\,\mathcal{S}^{d-1})\), due to the _concentration of measure_[15, 16]. In practical generation, \(d\) is sufficiently large to make the above claim approximately correct. The low-rank data manifold is thus lifted to about \(d-1\) rank sphere of radius \(\sigma\sqrt{d}\) with a thin spherical shell of width \(O(1)\). Due to the marginal preserving property of PF-ODE [12], the backward process behaves in a magnitude shrinking fashion and the analysis is similar. In Figure 1(a), we track the magnitude (\(\ell_{2}\)) of original data (the pixel values are re-scaled to \([-1,1]\)) in the forward process and the magnitude of synthetic samples in the backward process. A clear trend is the sample magnitude expands in the forward diffusion process and shrinks in the backward sampling process, and they are well-matched thanks to the marginal preserving property. Furthermore, the isotropic Gaussian noise is distributed around the sphere of radius (about \(4433\pm 57\)), which is significantly larger than the original data in magnitude (about \(27\pm 7\)). ### Geometric Shape of Sampling/Denoising Trajectory Given an ODE in Eq. (4) linking the data distribution \(p_{d}(\mathbf{x})\) and the noise distribution \(p_{n}(\mathbf{x})\), we denote _sampling trajectory_ as the discrete sequence \(\{\hat{\mathbf{x}}_{s}\}_{s_{N}}^{s_{0}}\) with \(s\in\{s_{0}=0,s_{1},s_{2},\cdots,s_{N}=T\}\)1, starting from \(\hat{\mathbf{x}}_{s_{N}}\sim\mathcal{N}(\mathbf{0},T^{2}\mathbf{I})\). We adopt symbol \(d(\cdot,\cdot)\) to denote the \(\ell_{2}\) distance between two points, such as \(d(\hat{\mathbf{x}}_{s},\hat{\mathbf{x}}_{s_{0}})\), and the _trajectory deviation_ from a point to the straight line passing through the initial and final points in the trajectory, such as \(d(\hat{\mathbf{x}}_{s},[\hat{\mathbf{x}}_{s_{0}}\hat{\mathbf{x}}_{s_{N}}])\). Additionally, we denote another important yet easy to be ignored sequence as \(\{r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s},s)\}_{s_{N}}^{s_{1}}\) or simplified to \(\{r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s})\}_{s_{N}}^{s_{1}}\) if no ambiguity, and designate it as _denoising trajectory_. 
A sampling trajectory and its associated denoising trajectory are provided in Figure 1(b) for illustration. Footnote 1: The time horizon is divided with the formula \(s_{n}=(s_{1}^{1/\rho}+\frac{n-1}{N-1}(s_{N}^{1/\rho}-s_{1}^{1/\rho}))^{\rho}\), where \(s_{1}=0.002\), \(s_{N}=80\), \(n\in[1,N]\) and \(\rho=7\)[1]. **Proposition 2**.: _The denoising output \(r_{\boldsymbol{\theta}}\left(\mathbf{x};t\right)\) reflects the prediction made by a single Euler step from any sample \(\mathbf{x}\) at any time towards \(t=0\) with Eq. (4)._ Proof.: The prediction of such an Euler step equals to \(\mathbf{x}-\left(0-t\right)\left(r_{\boldsymbol{\theta}}\left(\mathbf{x};t \right)-\mathbf{x}\right)/t=r_{\boldsymbol{\theta}}\left(\mathbf{x};t\right)\). This property was previously stated as an intuitive evidence to advocate the use of Eq. (4) for sampling [1]. There, Karras _et al._ suspected that this ODE trajectory is approximately linear across most noise levels due to the slow change of denoising output, and verified it in the 1-dimensional situation. In contrast, we provide an in-depth analysis of the high-dimensional trajectory with real data, and reveal its connection to the classic mean-shift (mode seeking) algorithm [15, 14, 16]. **Visualization.** It is very challenging to visualize the whole sampling trajectory and denoising trajectory laying in high-dimensional space. In this paper, we are particularly interested in their geometric properties, and find that the trajectory structure exhibits a surprisingly simple form. Our observations, which have been confirmed by empirical evidence, are summarized and elaborated in the following paragraphs. The expectation quantities (such as distance, magnitude) in each discrete time step are estimated by averaging 50k generated samples. **Observation 1**.: _The sampling trajectory is almost straight while the denoising trajectory is bent._ We develop an efficient visualization technique based on _trajectory deviation_ to assess the linearity of trajectories. From Figure 1(b), we can see that the deviation of sampling trajectory and denoising trajectory (red curve) gradually increases from \(t=80\) to around \(t=10\) or \(t=5\), respectively, and then quickly decreases until reaching the final point. This implies that the initial point may be affected by all possible modes with a large influence at first, and become intensely attracted by its unique mode after a turning point. This phenomenon also supports the strategy of placing time intervals densely near the minimum timestamp yet sparsely near the maximum one [1, 1]. However, based on the ratio of maximum deviation (such as \(\max d(\hat{\mathbf{x}}_{s},[\hat{\mathbf{x}}_{s_{0}}\hat{\mathbf{x}}_{s_{N}}])\)) to the endpoint distance (such as \(d(\hat{\mathbf{x}}_{s_{0}},\hat{\mathbf{x}}_{s_{N}})\)), the curvature of sampling trajectory is incredibly small (about \(16/4428\approx 0.0036\)), while the curvature of denoising trajectory is relatively significant (about \(7/26\approx 0.27\)). Figure 2: (a) The magnitude of samples in the forward process (blue curve), the backward process (black circle) and the denoising outputs (red curve). (b) The _trajectory deviation_ (red curve) and the \(\ell_{2}\) distance between intermediate samples and the final sample in the trajectory (blue curve). 
Another evidence for quasi-linearity of the sampling trajectory is from the aspect of _angle deviation_, which is calculated by the cosine between the backward ODE direction \(-\frac{\mathrm{d}\mathbf{x}_{i}}{\mathrm{d}t}\big{|}_{s_{N}}\) and the direction pointing to the final point \((\hat{\mathbf{x}}_{s_{0}}-\hat{\mathbf{x}}_{s})\) at discrete time \(s\). We find that \(\cos\big{(}-\frac{\mathrm{d}\mathbf{x}_{i}}{\mathrm{d}t}\big{|}_{s},(\hat{ \mathbf{x}}_{s_{0}}-\hat{\mathbf{x}}_{s})\big{)}\) always stays in a narrow range from 0.98 to 1.00 (see Appendix A), which indicates that the angle deviation is extremely small and all backward ODE directions almost exactly point to the final point. **Observation 2**.: _The generated samples on the sampling trajectory and denoising trajectory both move monotonically from the initial points toward their converged points in expectation, i.e., \(\{\mathbb{E}\left[d(\hat{\mathbf{x}}_{s},\hat{\mathbf{x}}_{s_{0}})\right]\}_{ s_{N}}^{s_{0}}\) and \(\{\mathbb{E}\left[d\left(r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s},r_{ \boldsymbol{\theta}}(\hat{\mathbf{x}}_{s_{1}})\right)\right]\}_{s_{N}}^{s_{1}}\) are monotone decreasing sequences._ This is inferred from the blue curves in Figure 1(b). In fact, such behavior is expected for the sampling trajectory given its slight angle deviation. Since \(\forall s,\;-\frac{d(\mathbf{x}_{i},\hat{\mathbf{x}}_{s_{0}})}{\mathrm{d}t} \big{|}_{s}\propto\cos\big{(}(\hat{\mathbf{x}}_{s_{0}}-\hat{\mathbf{x}}_{s}),\frac{\mathrm{d}\mathbf{x}_{s}}{\mathrm{d}t}\big{|}_{s}\big{)}\approx-1\), the initial point will converge monotonically and rapidly by moving along the backward ODE direction, similar to the behavior of gradient descent algorithm in a well-behaved convex function. The above two observations enable us to safely adopt large numerical Euler steps or higher-order ODE solvers without incurring much truncation error in most cases [1, 1, 2]. **Observation 3**.: _The sampling trajectory converges to the data distribution in a monotone magnitude shrinking way. Conversely, the denoising trajectory converges to the data distribution in a monotone magnitude expanding way. Formally, we have \(\{\mathbb{E}\|\hat{\mathbf{x}}_{s}\|\}_{s_{N}}^{s_{0}}\downarrow\) and \(\{\mathbb{E}\|r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s})\|\}_{s_{N}}^{s_{1}}\uparrow\)._ Although generated samples converge monotonically, whether along the sampling trajectory or denoising trajectory (Observation 2), their magnitude behaves differently (Figure 1(a)). Geometrically, the initial noise distribution \(p(\hat{\mathbf{x}}_{s_{N}})\) starts from a big sphere of radius \(T\sqrt{d}\) and then anisotropically squashes its "radius" and twists the sample range into the exact data manifold. Meanwhile, the initial denoising output is an approximate _Dirac delta function_ centering in the dataset mean vector \(\mathbf{x}_{m}\), _i.e._, \(p(r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s_{N}}))=\delta(\mathbf{x}_{m})\)[1]. This Dirac delta function anisotropically expands its range until exactly matching the data manifold. The overall picture is illustrated in Figure 1. ## 4 Theoretical Connection to Mean Shift Given a parametric diffusion model with the denoising output \(r_{\boldsymbol{\theta}}\), the sampling trajectory is simulated by numerically solving Eq. (4), and meanwhile, an implicitly coupled denoising trajectory is formed as a by-product. 
Once such a model has converged, it will hopefully capture the underlying score at different levels of noise, _i.e._, \(\forall\sigma_{t},\;r_{\boldsymbol{\theta}}\left(\hat{\mathbf{x}};\sigma_{t} \right)\rightarrow r_{\boldsymbol{\theta}}^{\star}\left(\hat{\mathbf{x}}; \sigma_{t}\right)\)[2, 1, 1]. We next derive the formula of _optimal denoising output_ to analyze the asymptotic behavior of diffusion models. **Proposition 3**.: _The optimal denoising output of Eq. (3) is a convex combination of the original data, where each weight is calculated based on the time-scaled and normalized \(\ell_{2}\) distance between \(\hat{\mathbf{x}}\) and \(\mathbf{x}_{i}\) belonging to the dataset \(\mathcal{D}\):_ \[r_{\boldsymbol{\theta}}^{\star}\left(\hat{\mathbf{x}};\sigma_{t} \right)=\sum_{i}u_{i}\mathbf{x}_{i}=\sum_{i}\frac{\exp\left(-\|\hat{\mathbf{x}} -\mathbf{x}_{i}\|^{2}/2\sigma_{t}^{2}\right)}{\sum_{j}\exp\left(-\|\hat{ \mathbf{x}}-\mathbf{x}_{j}\|^{2}/2\sigma_{t}^{2}\right)}\mathbf{x}_{i},\quad \sum_{i}u_{i}=1. \tag{5}\] The proof is provided in Appendix B.3. This equation appears to be highly similar to the classic non-parametric mean shift [14, 1, 15], and we provide a brief overview of it as follows. **Proposition 4**.: _The mean-shift algorithm with a Gaussian kernel and bandwidth \(h\) iteratively moves a point \(\mathbf{x}\) along the mean-shift vector \(m(\mathbf{x})-\mathbf{x}\), i.e., \(\mathbf{x}\leftarrow[m(\mathbf{x})-\mathbf{x}]+\mathbf{x}\), towards the maximum increase in the Parzen window density \(p(\hat{\mathbf{x}})=\int p_{\delta}(\mathbf{x})\mathbf{N}(\hat{\mathbf{x}}; \mathbf{x},h^{2}\mathbf{I})\mathrm{d}\mathbf{x}\), the mean vector is_ \[\mathbf{m}\left(\mathbf{x},h\right)=\sum_{i}v_{i}\mathbf{x}_{i} =\sum_{i}\frac{\exp\left(-\|\mathbf{x}-\mathbf{x}_{i}\|^{2}/2h^{2}\right)}{ \sum_{j}\exp\left(-\|\mathbf{x}-\mathbf{x}_{j}\|^{2}/2h^{2}\right)} \mathbf{x}_{i},\quad\mathbf{x}_{i}\in\mathcal{D},\quad\sum_{i}v_{i}=1. \tag{6}\] _From the interpretation of expectation-maximization (EM) algorithm, the above mean-shift iteration converges from almost any initial point with a generally linear convergence rate [1]._ As a mode-seeking algorithm, mean shift has shown particularly successful in clustering [15], image segmentation [1] and video tracking [11]. In fact, the ODE-based sampling is closely connected with _annealed mean shift_, or _multi-bandwidth mean shift_[11], which was developed as a metaheuristic algorithm for global model-seeking. By treating the optimal denoising output in Eq. (5) as the mean vector in annealed mean-shift iterations, we have the following theorem: **Theorem 1**.: _Given an ODE \(\mathrm{d}\mathbf{x}=-\frac{r_{\boldsymbol{\theta}}^{*}(\mathbf{x};t)-\mathbf{x}}{t} \mathrm{d}t\), one Euler step equals to a convex combination of the annealed mean-shift iteration and the current position._ Proof.: Given a current sample \(\hat{\mathbf{x}}_{s_{n+1}}\), \(n\in[0,N-1]\), the prediction of an one-step Euler equals to \[\begin{split}\hat{\mathbf{x}}_{s_{n}}&=\hat{ \mathbf{x}}_{s_{n+1}}-\frac{s_{n}-s_{n+1}}{s_{n+1}}\left(r_{\boldsymbol{ \theta}}^{*}\left(\hat{\mathbf{x}}_{s_{n+1}};s_{n+1}\right)-\hat{\mathbf{x}}_{ s_{n+1}}\right)\\ &=\frac{s_{n}}{s_{n+1}}\hat{\mathbf{x}}_{s_{n+1}}+\frac{s_{n+1}- s_{n}}{s_{n+1}}\mathbf{m}\left(\hat{\mathbf{x}}_{s_{n+1}};s_{n+1}\right), \end{split} \tag{7}\] where we treat timestamp \(s_{n+1}\) in \(r_{\boldsymbol{\theta}}^{*}\left(\cdot\right)\) as the annealing-like bandwidth of Gaussian kernel in Eq. 
(6) and then replace it as one iteration of annealed mean shift \(\mathbf{m}\left(\cdot\right)\)[1]. The ratio of timestep \(w(n)=(s_{n+1}-s_{n})/s_{n+1}\) in Eq. (7) actually reflects our preference for annealed mean shift over sticking to the current position at \(s_{n+1}\). Since the optimal denoising output, or annealed mean shift, starts with a spurious mode (dataset mean) and gradually converges towards a true mode over time, a reasonable choice is to progressively increase our emphasis on them, _i.e._, \(\{w(n)\}_{s_{N-1}}^{s_{0}}\uparrow\). In this sense, various time-schedule functions (such as uniform, quadratic, polynomial [1, 13, 14]) essentially boil down to different weighting functions (see Appendix B.4). This interpretation inspires us to train a parametric neural network to adaptively select proper weights in sampling for better visual quality [13]. We leave it for future work. Theorem 1 also implies that once diffusion models have converged to the optimum, all ODE trajectories will be uniquely determined and governed by a bandwidth-varying mean shift [13, 1]. In this case, the (forward) encoding process and (backward) decoding process only depend on the data distribution itself and a given noise distribution, regardless of model architectures or training algorithms. Such property was previously referred to as _uniquely identifiable encoding_ and was empirically verified in [12], while we clearly characterize the optimum by drawing a connection with annealed mean shift, and thus reveal the asymptotic behavior of diffusion models. ## 5 Applications ### Diagnosis of Score Deviation We then simulate four new trajectories based on the optimal denoising output \(r_{\boldsymbol{\theta}}^{*}\) to monitor the score deviation from the optimum: one is _optimal sampling trajectory_\(\{\hat{\mathbf{x}}_{s}^{*}\}_{s_{N}}^{s_{0}}\), where we generate samples Figure 3: _Top:_ We visualize a forward diffusion process of a randomly-selected image to obtain its encoding \(\hat{\mathbf{x}}_{s_{N}}\) and simulate multiple trajectories starting from this encoding. _Bottom:_ The k-nearest neighbors (k=5) of \(r_{\boldsymbol{\theta}}(\hat{\mathbf{x}}_{s_{1}})\) and \(r_{\boldsymbol{\theta}}^{*}(\hat{\mathbf{x}}_{s_{1}}^{*})\). as \(\{\hat{\mathbf{x}}_{s}\}_{s_{N}}^{s_{0}}\) but adopt \(r_{\mathbf{\theta}}^{\star}\) for score estimation, and other three are formed with the (optimal) denoising output and are designated as \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s})\}_{s_{N}}^{s_{1}}\), \(\{r_{\mathbf{\theta}}(\hat{\mathbf{x}}_{s}^{\star})\}_{s_{N}}^{s_{1}}\), \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s}^{\star})\}_{s_{N}}^{s_{1}}\). We calculate the deviation of denoising output to quantify the score deviation in all time steps in terms of the \(\ell_{2}\) distance. **Observation 4**.: _The learned score is well-matched to the optimal score in the large-noise region (from \(80\) to around \(10\)), otherwise they may diverge or almost coincide depending on different regions._ In fact, our learned score has to moderately diverge from the optimum to guarantee the generative ability. Otherwise, the sampling reduces to annealed mean shift for mode-seeking, or simply replays the dataset. Empirically, score deviation in a small region is sufficient to bring forth such ability. 
From \(\{r_{\mathbf{\theta}}(\hat{\mathbf{x}}_{s}^{\star})\}\), \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s}^{\star})\}\) sequences in Figure 3 and the score deviation in Figure 4, we can clearly see that _along the optimal sampling trajectory_\(\{\hat{\mathbf{x}}_{s}^{\star}\}\), the deviation between the learned score \(r_{\mathbf{\theta}}\) and the optimal score \(r_{\mathbf{\theta}}^{\star}\) behaves differently in three successive regions: the deviation starts off as almost negligible (about \(10<t\leq 80\)), gradually increases (about \(3<t\leq 10\)), and then drops down to a low level once again (about \(0\leq t\leq 3\)). This phenomenon was also validated by a contemporaneous work [23] with a different viewpoint and measurement. We further observe that _along the sampling trajectory_\(\{\hat{\mathbf{x}}_{s}\}\), this phenomenon disappears and the score deviation keeps increasing (see \(\{r_{\mathbf{\theta}}(\hat{\mathbf{x}}_{s})\}\), \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s})\}\) sequences in Figure 3 and Figure 4). This indicates that our score-based model strives to explore novel regions. Additionally, the generated samples in trajectory are attracted to a real-data mode but do not fall into it, as supported by their k-nearest-neighbor samples in Figure 3. ### Sampling with ODE-Jump **Observation 5**.: _The (optimal) denoising trajectory converges faster than the (optimal) sampling trajectory in terms of visual quality._ This observation is inferred from the comparison of \(\{\hat{\mathbf{x}}_{s}\}\) and \(\{r_{\mathbf{\theta}}(\hat{\mathbf{x}}_{s})\}\), \(\{\hat{\mathbf{x}}_{s}^{\star}\}\) and \(\{r_{\mathbf{\theta}}^{\star}(\hat{\mathbf{x}}_{s})\}\) sequences in Figure 3, which inspires us to develop a new sampling algorithm named as _ODE-Jump_ that directly jumps from _any_ sample at _any_ time in the sampling trajectory simulated by _any_ ODE solver to the associated denoising trajectory, and returns the denoising output as the final synthesized image. This simple algorithm is highly flexible, extremely easy to implement, and converges considerably faster than the original ODE solver along the sampling trajectory. The quantitative comparison of Frechet Inception Distance (FID [17]) _w.r.t._ the number of score function evaluations (NFEs) is provided in Figure 4 (right) and the visual comparison is provided in Figure 5. Figure 4: The score deviation in expectation (left and middle) and FID with different NFEs (right). Figure 5: The synthesized images of our proposed ODE-Jump sampling (bottom) converge much faster than that of EDMs [1] (top) in terms of visual quality. ### In-Distribution Latent Interpolation An attractive application of diffusion models is to achieve semantic image editing by manipulating latent representations [11, 12, 13]. We then take _latent interpolation_ as an example to reveal its working principle and the potential pitfalls in practice from a geometric viewpoint. The training objective Eq. (3) for score estimation tells that given a noise level \(\sigma_{t}^{2}\), the denoiser is _only_ trained with samples _belonging to_ the distribution \(p_{t}(\hat{\mathbf{x}})\). This important fact implies that for the latent encoding \(\hat{\mathbf{x}}_{s_{N}}\sim\mathcal{N}(\mathbf{0},T^{2}\mathbf{I})\), the denoiser performance is only guaranteed for the input approximately distributed in a sphere of radius \(T\sqrt{d}\) (see Section 3.1). This geometric picture helps in understanding the conditions under which latent interpolation may fail. 
**Proposition 5**.: _In high dimensions, linear interpolation [11] shifts the latent distribution while spherical linear interpolation [13] asymptotically (\(d\to\infty\)) maintains the latent distribution._ Given two independent latent encodings, \(\hat{\mathbf{x}}_{s_{N}}^{(1)}\), \(\hat{\mathbf{x}}_{s_{N}}^{(2)}\sim\mathcal{N}(\mathbf{0},T^{2}\mathbf{I})\), they are almost orthogonal with the angle \(\frac{1}{2}\pi+O_{p}(d^{-0.5})\) in high dimensions [10, 14]. In this case, _linear interpolation_\(\hat{\mathbf{x}}_{s_{N}}^{(\alpha)}=(1-\alpha)\hat{\mathbf{x}}_{s_{N}}^{(1) }+\alpha\hat{\mathbf{x}}_{s_{N}}^{(2)}\) quickly pushes the resulting encoding \(\hat{\mathbf{x}}_{s_{N}}^{(\alpha)}\) away from the original distribution into a squashed sphere of radius \(T\sqrt{d((1-\alpha)^{2}+\alpha^{2})}\), which almost has no intersection with the original sphere. Our trained denoiser thus can not provide a reliable estimation for \(r_{\theta}(\hat{\mathbf{x}}_{s_{N}}^{(\alpha)},s_{N})\) to derive the score direction, as shown in Figure 6. Another strategy named as _spherical linear interpolation_ (slerp) [13, 13, 14] greatly alleviates (but is not free from) the squashing effect in high dimensions and thus stabilizes the synthesis quality of interpolated encodings. But it still suffers from distribution shift in low dimensional cases (see Appendix B.5). **Proposition 6**.: _In-distribution interpolation preserves the latent distribution under interpolation._ In particular, for the Gaussian encoding \(\hat{\mathbf{x}}_{s_{N}}\), there exists a variance-preserving interpolation \(\hat{\mathbf{x}}_{s_{N}}^{(\lambda)}=\sqrt{(1-\lambda^{2})}\hat{\mathbf{x}}_{s _{N}}^{(1)}+\lambda\hat{\mathbf{x}}_{s_{N}}^{(2)}\sim\mathcal{N}\left(\mathbf{ 0},T^{2}\mathbf{I}\right)\) to prevent distribution shift. Since a uniform \(\lambda\) makes \(\hat{\mathbf{x}}_{s_{N}}^{(\lambda)}\) largely biased to \(\hat{\mathbf{x}}_{s_{N}}^{(1)}\), we derive \(\lambda\) by re-scaling other heuristic strategies to scatter the coefficient more evenly, such as the _normalized linear_ (n-linear) interpolation (\(\lambda=\alpha/\sqrt{\alpha^{2}+(1-\alpha^{2})}\)) with uniformly sampled coefficient \(\alpha\). As shown in Figure 6, this simple re-scaling trick significantly boosts the visual quality compared with the original counterpart. Additionally, slerp behaves as \(\lambda=\sin\alpha\frac{\pi}{2}\) in high dimensions due to \(\psi\approx\frac{\pi}{2}\), and this coefficient was used in [14] for interpolation. With the help of such an in-distribution interpolation, all interpolated encodings faithfully move along our trained ODE trajectory with a reliable denoising estimation for \(r_{\theta}(\hat{\mathbf{x}}_{s_{N}}^{(\lambda)},s_{N})\). We further calculate the k-nearest neighbors of our generated images to the real data (see Appendix A), to demonstrate how different modes are smoothly traversed in this process \(\hat{\mathbf{x}}_{s_{N}}\to\hat{\mathbf{x}}_{s_{N}}^{\lambda}\to\hat{\mathbf{ x}}_{s_{0}}^{\lambda}\). ### Rethinking Distillation-Based Fast Sampling Techniques A learned score-based model with a specified ODE solver fully determine the sampling trajectory and the denoising trajectory. From our geometric observations, many distillation-based fast sampling techniques can be re-interpreted as different ways to _linearize_ the original sampling trajectory at several discrete time steps. 
Recently, _consistency distillation_ (CD) [15] and TRACT [1] begin to rely on the denoising trajectory to guide the score fine-tuning and thus enable fast sampling. Figure 6: Linear latent interpolation results in blurry images, while a simple re-scaling trick greatly preserves the fine-grained image details and enables a smooth traversal among different modes. The slow sampling speed is a major bottleneck of score-based generative models. To address this issue, one possible solution is to explicitly straighten the ODE trajectory with knowledge distillation [11, 12, 20, 21]. They optimize a new student score model (\(r_{\mathbf{\theta}}\)) to align with the prediction of a pre-trained and fixed teacher score model (\(r_{\mathbf{\phi}}\)). Empirically, these learning-based approaches can achieve decent synthesis quality with a significantly fewer NFE (\(\approx 1\)) compared to those ODE solver-based approaches [11, 12, 23, 23]. In Figure 7, we draw a sketch to highlight the difference of typical examples. The rationale behind these approaches is that with the pre-defined teacher sampling trajectory and its backward ODE simulation to obtain \(\mathcal{T}_{\mathbf{\phi}}\left(\cdot\right)\), we can adjust the student sampling direction by making the initial point directly point to the final point. To achieve this goal, new noise-target pairs are built in an online [13, 23, 23] or offline fashion [11, 20]. We present the training objective of CD as follows \[\mathbb{E}_{\mathbf{x}\sim p_{\mathbf{\theta}}}\mathbb{E}_{\mathbf{x}\sim N( \mathbf{0},s_{n+1}^{2}\mathbf{1})}\|r_{\mathbf{\theta}}\left(\hat{\mathbf{x}};s_{n +1}\right)-r_{\mathbf{\theta}^{-}}\left(\mathcal{T}_{\mathbf{\phi}}\left(\hat{ \mathbf{x}}\right);s_{n}\right)\|_{2}^{2},\quad\hat{\mathbf{x}}=\mathbf{x}+ \mathbf{z}, \tag{8}\] where \(\mathcal{T}_{\mathbf{\phi}}\left(\hat{\mathbf{x}}\right)\) is implemented as an one-step Euler: \(\hat{\mathbf{x}}-\frac{\left(s_{n}-s_{n+1}\right)}{s_{n+1}}\left(r_{\mathbf{\phi} }\left(\hat{\mathbf{x}};s_{n+1}\right)-\hat{\mathbf{x}}\right)\). CD follows the time schedule of EDMs (see Section 3.2) [1], aside from removing the \(s_{0}\), and adopts pre-trained EDMs to initialize the student model. \(\mathbf{\theta}^{-}\) is the exponential moving average (EMA) of \(\mathbf{\theta}\). Based on our geometric observations, we then provide an intuitive interpretation of the training objective Eq. (8). (1) The role of \(\mathcal{T}_{\mathbf{\phi}}\left(\hat{\mathbf{x}}\right)\) is to locate the sampling trajectory passing a given noisy sample \(\hat{\mathbf{x}}\) and make one numerical step along the trajectory to move \(\hat{\mathbf{x}}\) towards its converged point. (2) \(r_{\mathbf{\theta}^{-}}(\cdot)\) further projects \(\mathcal{T}_{\mathbf{\phi}}\left(\hat{\mathbf{x}}\right)\) into the corresponding denoising trajectory with the step size \(s_{n}\). It is closer to the converged point, compared with the denoising output of \(\hat{\mathbf{x}}\) with the step size \(s_{n+1}\) (see Observation 2 and Figure 7). (3) The student denoising output \(r_{\mathbf{\theta}}\left(\cdot\right)\) is then shifted to match its underlying target \(r_{\mathbf{\theta}^{-}}\left(\cdot\right)\) in the denoising trajectory. By iteratively fine-tuning denoising outputs until convergence, the student model is hopefully endowed with the ability to perform few-step sampling from those trained discrete time steps, and thus achieves excellent performance in practice [23]. 
## 6 Conclusion and Future Work In this paper, we present a geometric perspective on (variance-exploding) diffusion models, aiming for a fundamental grasp of their sampling dynamics in a simple and intuitive way. We find that intriguingly, the data distribution and the noise distribution are smoothly bridged by a quasi-linear sampling trajectory, and another implicit denoising trajectory allowing faster convergence in terms of visual quality. We further characterize the asymptotic behavior of diffusion models by formulating a theoretical relationship between the optimal ODE-based sampling and the anneal mean-shift (global mode-seeking) algorithm. Additionally, some preliminary applications implied by our brand new geometric perspective are provided. We hope our theoretical understanding and empirical observations help to better harness the power of score/diffusion-based generative models and facilitate more rapid development in efficient training and fast sampling techniques. One limitation is that our current observations are not fully underpinned by theoretical results, and thus require further investigation. In fact, the intensively used ODE-based sampling behaves as a typical non-autonomous non-linear system [1], which offers a potential approach to analyze the (asymptotic) stability of diffusion models with tools from control theory. Figure 7: The comparison of distillation-based techniques. The _offline_ techniques first simulate a long ODE trajectory with the teacher score and then make the student score points to the final point (KD [11]) or also include intermediate points on the trajectory (DFNO [20]). The _online_ techniques iteratively fine-tune the student prediction to align with the target simulated by a few-step teacher model along the sampling trajectory (PD [13]) or the denoising trajectory (CD [23]).
2309.16779
Intriguing properties of generative classifiers
What is the best paradigm to recognize objects -- discriminative inference (fast but potentially prone to shortcut learning) or using a generative model (slow but potentially more robust)? We build on recent advances in generative modeling that turn text-to-image models into classifiers. This allows us to study their behavior and to compare them against discriminative models and human psychophysical data. We report four intriguing emergent properties of generative classifiers: they show a record-breaking human-like shape bias (99% for Imagen), near human-level out-of-distribution accuracy, state-of-the-art alignment with human classification errors, and they understand certain perceptual illusions. Our results indicate that while the current dominant paradigm for modeling human object recognition is discriminative inference, zero-shot generative models approximate human object recognition data surprisingly well.
Priyank Jaini, Kevin Clark, Robert Geirhos
2023-09-28T18:19:40Z
http://arxiv.org/abs/2309.16779v2
# Intriguing Properties ###### Abstract What is the best paradigm to recognize objects--discriminative inference (fast but potentially prone to shortcut learning) or using a generative model (slow but potentially more robust)? We build on recent advances in generative modeling that turn text-to-image models into classifiers. This allows us to study their behavior and to compare them against discriminative models and human psychophysical data. We report four intriguing emergent properties of generative classifiers: they show a record-breaking human-like shape bias (99% for Imagen), near human-level out-of-distribution accuracy, state-of-the-art alignment with human classification errors, and they understand certain perceptual illusions. Our results indicate that while the current dominant paradigm for modeling human object recognition is discriminative inference, zero-shot generative models approximate human object recognition data surprisingly well. Figure 1: Zero-shot generative classifiers achieve a **human-level shape bias**: 99% for Imagen, 93% for Stable Diffusion, 92% for Parti and 92–99% for individual human observers (96% on average). Most discriminative models are texture biased instead. ## 1 Introduction Many discriminative classifiers perform well on data similar to the training distribution, but struggle on out-of-distribution images. For instance, a cow may be correctly recognized when photographed in a typical grassy landscape, but is not correctly identified when photographed on a beach (Beery et al., 2018). In contrast to many _discriminatively_ trained models, _generative_ text-to-image models appear to have acquired a detailed understanding of objects: they have no trouble generating cows on beaches or dog houses made of sushi (Saharia et al., 2022). This raises the question: If we could somehow get classification decisions out of a generative model, how well would it perform out-of-distribution? For instance, would it be biased towards textures like most discriminative models or towards shapes like humans (Baker et al., 2018; Geirhos et al., 2019; Wichmann and Geirhos, 2023)? We here investigate perceptual properties of _generative classifiers_, i.e., models trained to generate images from which we extract zero-shot classification decisions. We focus on two of the most successful types of text-to-image generative models--diffusion models and autoregressive models--and compare them to both discriminative models (e.g., ConvNets, vision transformers, CLIP) and human psychophysical data. Specifically, we focus on the task of visual object recognition (also known as classification) of challenging out-of-distribution datasets and visual illusions. On a broader level, the question of whether perceptual processes such as object recognition are best implemented through a discriminative or a generative model has been discussed in various research communities for a long time. Discriminative inference is typically described as fast yet potentially prone to shortcut learning (Geirhos et al., 2020), while generative modeling is often described as slow yet potentially more capable of robust inference (DiCarlo et al., 2021). The human brain appears to combine the best of both worlds, achieving fast inference but also robust generalization. How this is achieved, i.e. how discriminative and generative processes may be integrated has been described as "the deep mystery in vision" (Kriegeskorte, 2015, p. 
435) and seen widespread interest in Cognitive Science and Neuroscience (see DiCarlo et al., 2021, for an overview). This mystery dates back to the idea of vision as inverse inference proposed more than 150 years ago Figure 2: **Classification with a diffusion generative classifier.** Given a test image, such as a dog with clock texture (1), a text-to-image generative classifier adds random noise (2) and then reconstructs the image conditioned on the prompt “A bad photo of a \(<\)class\(>\)” (3). The reconstructed image closest to the test image in L\({}_{2}\) distance is taken as the classification decision; this estimates the diffusion variational lower bound (Clark and Jaini, 2023) (4). For visualization, class icons corresponding to the prompt class are superimposed on the bottom right of the reconstructed images. by von Helmholtz (1867), who argued that the brain may need to infer the likely causes of sensory information--a process that requires a generative model of the world. In machine learning, this idea inspired approaches such as the namesake Helmholtz machine (Dayan et al., 1995), the concept of vision as Bayesian inference (Yuille and Kersten, 2006) and other analysis-by-synthesis methods (Bever and Poeppel, 2010; Schott et al., 2018). However, when it comes to challenging real-world tasks like object recognition from photographs, the ideas of the past often lacked the methods (and compute power) of the future: until very recently, it was impossible to compare generative and discriminative models of object recognition simply because the only models capable of recognizing challenging images were standard discriminative models like deep convolutional networks (Krizhevsky et al., 2012; He et al., 2015) and vision transformers (Dosovitskiy et al., 2021). Excitingly, this is changing now and thus enables us to compare generative classifiers against both discriminative models and human object recognition data. Concretely, in this work, we study the properties of generative classifiers based on three different text-to-image generative models: Stable Diffusion (SD), Imagen, and Parti on 17 challenging OOD generalization datasets from the model-vs-humans toolbox (Geirhos et al., 2021). We compare the performance of these generative classifiers with 52 discriminative models and human psychophysical data. Based on our experiments, we observe four intriguing properties of generative classifiers: 1. a human-like shape bias (Subsection 3.1), 2. near human-level out-of-distribution accuracy (Subsection 3.2), 3. state-of-the-art error consistency with humans (Subsection 3.3), 4. an understanding of certain perceptual illusions (Subsection 3.4). ## 2 Method: Generative Models as Zero-Shot Classifiers We begin with a dataset, \(\mathcal{D}_{n}:=\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2})\cdots,(\mathbf{x}_{n},y_{ n})\}\) of \(n\) images where each image belongs to one of \(K\) classes \([\mathsf{y}_{K}]:=\{y_{1},\cdots,y_{k}\}\). 
Our method classifies an image by predicting the most probable class assignment assuming a uniform prior over classes: \[\tilde{y}=\operatorname*{arg\,max}_{\mathsf{y}_{k}}p(y=\mathsf{y}_{k}|\mathbf{x}) =\operatorname*{arg\,max}_{\mathsf{y}_{k}}\ p(\mathbf{x}|y=\mathsf{y}_{k})\cdot p (y=\mathsf{y}_{k})=\operatorname*{arg\,max}_{\mathsf{y}_{k}}\ \log p(\mathbf{x}|y=\mathsf{y}_{k}) \tag{1}\] A generative classifier (Ng and Jordan, 2001) uses a conditional generative model to estimate the likelihood \(p_{\theta}(\mathbf{x}|y=\mathsf{y}_{k})\) where \(\theta\) are the model parameters. Generative models:We study the properties of three different text-to-image generative models namely Imagen(Saharia et al., 2022) which is a pixel space based diffusion model, Stable Diffusion (SD) (Rombach et al., 2022) which is a latent space based diffusion model, and Parti(Yu et al., 2022) which is a sequence-to-sequence based autoregressive model. Since these models are conditioned Figure 3: **Out-of-distribution accuracy across 17 challenging datasets (Geirhos et al., 2021). Detailed results for all parametric datasets are plotted in Figure 5; Table 3 lists accuracies.** on text prompts rather than class labels, we modify each label, \(\mathsf{y}_{k}\), to a text prompt using the template \(\mathsf{y}_{k}\to\mathsf{A}\) bad photo of a \(\mathsf{y}_{k}\). to generate classification decisions. Conceptually, our approach to obtain classification decisions is visualized in Figure 2. Following Clark and Jaini (2023), we generate classification decisions from diffusion models like Stable Diffusion and Imagen by approximating the conditional log-likelihood \(\log p_{\theta}(\mathbf{x}|y=\mathsf{y}_{k})\) using the diffusion variational lower bound (see Appendix A for a background on diffusion models): \[\tilde{y}=\operatorname*{arg\,max}_{\mathsf{y}_{k}}\ \log p_{\theta}(\mathbf{x}|y= \mathsf{y}_{k})\approx\operatorname*{arg\,min}_{\mathsf{y}_{k}}\mathbb{E}_{ \epsilon,t}\big{[}\mathbf{w}_{t}\|\mathbf{x}-\tilde{\mathbf{x}}_{\theta}(\mathbf{x}_{t},\mathsf{ y}_{k},t)|\big{]}_{2}^{2} \tag{2}\] For SD, \(\mathbf{x}\) is a latent representations whereas for Imagen \(\mathbf{x}\) consists of raw image pixels. Evaluating \(p_{\theta}(\mathbf{x}|y=\mathsf{y}_{k})\) for Parti amounts to performing one forward pass of the model since it is an autoregressive model that provides an exact conditional likelihood. Thus, for each of these models we evaluate the conditional likelihood, \(p_{\theta}(\mathbf{x}|y=\mathsf{y}_{k})\), for each class \(\mathsf{y}_{k}\in[\mathsf{y}_{K}]\) and assign the class with the highest likelihood obtained via Equation (1). Model-vs-human datasets:We study the performance of these generative classifiers on 17 challenging out-of-distribution (OOD) datasets proposed in the model-vs-human toolbox (Geirhos et al., 2021). Of these 17 datasets, five correspond to a non-parametric single manipulation (sketches, edge-filtered images, silhouettes, images with a texture-shape cue conflict, and stylized images where the original image texture is replaced by the style of a painting). The other twelve datasets consist of parametric image distortions like low-pass filtered images, additive uniform noise, etc. These datasets are designed to test OOD generalization for diverse models in comparison to human object recognition performance. 
The human data consists of 90 human observers with a total of 85,120 trials collected in a dedicated psychophysical laboratory on a carefully calibrated screen (see Geirhos et al., 2021, for details). This allows us to compare classification data for zero-shot generative models, discriminative models and human observers in a comprehensive, unified setting. Preprocessing:We preprocess the 17 datasets in the model-vs-human toolbox by resizing the images to \(64\times 64\) resolution for Imagen, \(256\times 256\) for Parti, and \(512\times 512\) for SD. We use the prompt, \(\mathsf{A}\) bad photo of a \(\mathsf{y}_{k}\), for each dataset and every model. Although Imagen (Saharia et al., 2022) is a cascaded diffusion model consisting of a \(64\times 64\) low-resolution model and two super-resolution models, we only use the \(64\times 64\) base model for our experiments here. We use v1.4 of SD (Rombach et al., 2022) for our experiments that uses a pre-trained text encoder from CLIP to encode text and a pre-trained VAE to map images to a latent space. Finally, we use the Parti-3B model (Yu et al., 2022) consisting of an image tokenizer and an encoder-decoder transformer model that converts text-to-image generation to a sequence-to-sequence modeling problem. Baseline models for comparison:As baseline discriminative classifiers, we compare Imagen, SD, and Parti against 52 diverse models from the model-vs-human toolbox (Geirhos et al., 2021) that are Figure 4: **Error consistency** across 17 challenging datasets (Geirhos et al., 2021). This metric measures whether errors made by models align with errors made by humans (higher is better). either trained or fine-tuned on ImageNet, three ViT-22B variants (Dehghani et al., 2023) (very large 22B parameter vision transformers) and CLIP (Radford et al., 2021) as a zero-shot classifier baseline. The CLIP model is based on the largest version, ViT-L/14@224px, and consist of vision and text transformers trained with contrastive learning. We use the CLIP model that uses an ensemble of 80 different prompts for classification (Radford et al., 2021). We plot all baseline discriminative models in grey and human subject data in red. Metrics:We compare all the models over the 17 OOD datasets based on three metrics: (a) shape bias, (b) OOD accuracy and, (c) error consistency. Shape bias is defined by Geirhos et al. (2019) as the fraction of decisions that are identical to the shape label of an image divided by the fraction of decisions for which the model output was identical to either the shape or the texture label on a dataset with texture-shape cue conflict. OOD accuracy is defined as the fraction of correct decisions for a dataset that is not from the training distribution. Error consistency (see Geirhos et al., 2020, for details) is measured in Cohen's kappa (Cohen, 1960) and indicates whether two decision makers (e.g., a model and a human observer) systematically make errors on the same images. If that is the case, it may be an indication of deeper underlying similarities in terms of how they process images and recognize objects. Error consistency between models \(f_{1}\) and \(f_{2}\) is defined over a dataset on which both models are evaluated on exactly the same images and output a label prediction; the metric indicates the fraction of images on which \(\mathbb{1}_{f_{1}(x)=y_{x}}\) is identical to \(\mathbb{1}_{f_{2}(x)=y_{x}}\) (i.e., both models are either correct or wrong on the same image) when corrected for chance agreement. 
This ensures that an error consistency value of 0 corresponds to chance agreement, positive values indicate beyond-chance agreement (up to 1.0) and negative values indicate systematic disagreement (down to -1.0). ## 3 Results: Four intriguing properties of Generative Classifiers ### Human-like shape bias Introduced by Geirhos et al. (2019), the _shape bias_ of a model indicates to which degree the model's decisions are based on object shape, as opposed to object texture. We study this phenomenon using the cue-conflict dataset which consists of images with shape-texture cue conflict. As shown in Figure 5: **Detailed out-of-distribution accuracy for Imagen, Stable Diffusion and Parti in comparison to human observers. While not always aligning perfectly with human accuracy, the overall robustness achieved by both models is comparable to that of human observers even though these models are zero-shot, i.e. neither designed nor trained to do classification.** Geirhos et al. (2021), most discriminative models are biased towards texture whereas humans are biased towards shape (96% shape bias on average; 92% to 99% for individual observers). Interestingly, we find that all three zero-shot generative classifiers show a shape bias that matches humans: Imagen achieves a stunning 99% shape bias, Stable Diffusion 93% and Parti a 92% shape bias. As we show in Figure 1, Imagen closely matches or even exceeds human shape bias across nearly all categories, achieving a previously unseeen shape bias of 99%. SD and Parti similarly achieve high shape bias (93 and 92% respectively). In Table 1, we report that all three generative classifiers significantly outperform ViT-22B (Dehghani et al., 2023), the previous state-of-the-art method in terms of shape bias, even though all three models are smaller in size, trained on less data, and unlike ViT-22B were not designed for classification. ### Near human-level OOD accuracy Humans excel at recognizing objects even if they are heavily distorted. _Do generative classifiers also possess similar out-of-distribution robustness?_ We find that diffusion based models in Imagen and Stable Diffusion achieve an overall accuracy that is close to human-level robustness (cf. Figure 3) despite being zero-shot models. The detailed plots in Figure 5 show that on most datasets (except rotation and high-pass), the performance of all three generative classifiers approximately matches human responses. Additional results are in Table 3 and Figure 10 and 11 in the appendix. Notably, all three models are considerably worse than humans in recognizing rotated images. Curiously, these models also struggle to generate rotated images when prompted with the text "A rotated image of a dog." "An upside \(-\) down image of a dog." etc. This highlights an exciting possibility: evaluating generative models on downstream tasks like OOD datasets may be a quantitative way of gaining insights into the generation capabilities and limitations of these models. On high-pass filtered images, Imagen performs much worse than humans whereas SD and Parti exhibit more robust performance. The difference in performance of Imagen and SD may be attributed to the weighting function used in Equation (2). Our choice of weighting function, \(\mathbf{w}_{t}:=\text{exp}(-7t)\), as used in Clark & Jaini (2023) tends to give higher weight to the lower noise levels and is thus bad at extracting decisions for high-frequency images. 
SD on the other hand operates in the latent space and thus the weighting function in Equation (2) effects its decisions differently than Imagen. Nevertheless, this indicates that even though Imagen and SD are diffusion-based models, they exhibit very different sensitivities to high spatial frequencies. Despite those two datasets where generative classifiers show varied performance, they overall achieve impressive zero-shot classification accuracy (near human-level performance as shown in Figure 3). aligned error patterns, surpassing previous state-of-the-art (SOTA) set by ViT-22B, a large vision transformer (Dehghani et al., 2023). SD also exhibits error consistency closer to humans but lacks significantly compared to Imagen. Additionally, a matrix plot of error consistency of all the models on cue-conflict images is shown in Figure 6. Interestingly, the plot shows a clear dichotomy between discriminative models that exhibit error patterns similar to each other, and generative models whose error patterns more closely match humans, thus they end up in the human cluster. While overall a substantial gap between the best models and human-to-human consistency remains (Figure 4), Imagen best captures human classification errors despite never being trained for classification. We report more detailed results in the appendix in Table 2 and Figures 10-17. all cases, the text-to-image generative models are able to recognize the illusion and recreate correct images conditioned on the respective text prompts. This indicates that these generative models share certain bistable illusions and pareidolia with human visual perception. ## 4 Analysis: Where does the increased shape bias originate from? In Section 3, we highlighted four _intriguing properties_ of generative classifiers. The most striking emergent property amongst the four is the human-level shape bias demonstrated by these generative classifiers; a bias that no discriminative models so far was able to show. A natural question to ask is thus: _What aspect of these generative models causes such an increase in shape bias?_ We observed that for diffusion models like Imagen and Stable Diffusion, the recreated images used for classification were usually devoid of texture cues (for example see Figure 2). We posit that the denoising process used for classification (cf. Equation (2)) of the diffusion model might bias it towards capturing low-frequency information and thereby focuses on the global structure of the image as captured by the shape of an object. Indeed, in Figure 5, we observe that while generative classifiers are well within the range of other models for most datasets, they demonstrate very distinctive results on low-pass filtered images (also known as blurred); Imagen--the most shape-biased model--is on part with humans. Conversely, Imagen struggles to classify high-pass images. Could it be the case that these generative models put more emphasis on lower spatial frequencies whereas most textures are high frequency in nature? If this is indeed the case, then performance on blurred images and shape bias should have a significant positive correlation. We tested this hypothesis empirically and indeed found a strong positive and highly significant correlation between the two (Pearson's \(r(58)=.59,p<0.001\); Spearman's \(r(58)=.64,p<0.001\)). While this establishes a correlation between the two, it is not evidence for a causal link. 
We next hypothesized that the noise applied during diffusion training might encourage models to ignore high-frequency textures and focus on shapes. To test this prediction, we trained a standard ResNet-50 on ImageNet-1K (Russakovsky et al., 2015) by adding diffusion-style noise as a data augmentation during both training and evaluation. Interestingly, training with diffusion-style noise augmentation increases the shape bias from 21% for a standard ResNet-50 to 78%, as shown in Figures 1 and 13 and Table 1. This simple trick achieves a substantially higher shape bias than the 62% observed by prior work when combining six different techniques and augmentations (Hermann et al., 2020). This result shows that (i) diffusion-style training biases the models to emphasize low spatial frequency information and (ii) models that put emphasis on lower spatial frequencies exhibit increased shape bias. Other factors such as generative training, the quality and quantity of data, and the use of a powerful language model might also play a role. However, given the magnitude of the observed change in shape bias this indicates that diffusion-style training is indeed a crucial factor.

Figure 7: **Generative classifiers understand certain visual illusions** as indicated by their ability to reconstruct ambiguous images in a way that aligns with how humans perceive those images. For instance, they reconstruct a right-facing rabbit vs. a left-facing duck in the case of the bistable rabbit-duck illusion and place the face in the right location and pose for an image where humans show pareidolia (seeing patterns in things, like a face in a rock). Attribution for original images: App D.

## 5 Discussion

**Motivation.** While generative pre-training has been prevalent in natural language processing, in computer vision it is still common to pre-train models on labeled datasets such as ImageNet (Deng et al., 2009) or JFT (Sun et al., 2017). At the same time, generative text-to-image models like Stable Diffusion, Imagen, and Parti show powerful abilities to generate photo-realistic images from diverse text prompts. This suggests that these models learn useful representations of the visual world, but so far it has been unclear how their representations compare to discriminative models. Furthermore, discriminative models have similarly dominated computational modeling of human visual perception, even though the use of generative models by human brains has long been hypothesized and discussed. In this work, we performed an empirical investigation on out-of-distribution datasets to assess whether discriminative or generative models better fit human object recognition data.

**Key results.** We report four intriguing human-like properties of _generative_ models: (1) Generative classifiers are the first models that achieve a human-like shape bias (92-99%); (2) they achieve near human-level OOD accuracy despite being zero-shot classifiers that were neither trained nor designed for classification; (3) one of them (Imagen) shows the most human-aligned error patterns that machine learning models have achieved to date; and (4) all investigated models qualitatively capture the ambiguities of images that are perceptual illusions for humans.

**Implications for human perception.** Our results establish generative classifiers as one of the leading behavioral models of human object recognition. While we certainly don't resolve the "deep mystery of vision" (Kriegeskorte, 2015, p.
435) in terms of how brains might combine generative and discriminative models, our work paves the way for future studies that might combine the two. Quoting Luo (2022, p. 22) on diffusion, "It is unlikely that this is how we, as humans, naturally model and generate data; we do not seem to generate novel samples as random noise that we iteratively denoise."--we fully agree, but diffusion may just be one of many implementational ways to arrive at a representation that allows for powerful generative modeling. Human brains are likely to use a different _implementation_, but they still may (or may not) end up with a similar _representation_. **Implications for machine perception.** We provide evidence for the benefits of generative pre-training, particularly in terms of zero-shot performance on challenging out-of-distribution tasks. In line with recent work on using generative models for depth estimation (Zhao et al., 2023) or segmentation (Burgert et al., 2022; Brempong et al., 2022), this makes the case for generative pre-training as a compelling alternative to contrastive or discriminative training for vision tasks. Additionally, our experiments provide a framework to find potential bugs of generative models through classification tasks. For example, all the generative models performed poorly on the rotation dataset; those models also struggled to generate "rotated" or "upside-down" images of objects. Similar experiments can be used to evaluate generative models for undesirable behaviour, toxicity and bias. **Limitations.** A limitation of the approach we used in the paper is the computational speed (as we also alluded to in Section 1). The approach does not yield a practical classifier. Secondly, all three models have different model sizes, input resolutions, and are trained on different datasets for different amounts of time, so the comparison is not perfect. Through including diverse generative models, our comparisons aim to highlight the strengths and weaknesses of generative models. **Future directions.** Beyond the questions regarding how biological brains might combine generative and discriminative models, we believe it will be interesting to study how, and to what degree, language cross-attention influences the intriguing properties we find. Further, is denoising diffusion training a crucial component that explains the impressive performance of Imagen and SD? We hope our findings show generative classifiers as intriguing models for exploring exciting future directions. #### Acknowledgments We would like to express our gratitude to the following colleagues (in alphabetical order) for helpful discussions and feedback: David Fleet, Katherine Hermann, Been Kim, Alex Ku, Jon Shlens, and Kevin Swersky.
2309.09881
Deep Reinforcement Learning for the Joint Control of Traffic Light Signaling and Vehicle Speed Advice
Traffic congestion in dense urban centers presents an economical and environmental burden. In recent years, the availability of vehicle-to-anything communication allows for the transmission of detailed vehicle states to the infrastructure that can be used for intelligent traffic light control. The other way around, the infrastructure can provide vehicles with advice on driving behavior, such as appropriate velocities, which can improve the efficacy of the traffic system. Several research works applied deep reinforcement learning to either traffic light control or vehicle speed advice. In this work, we propose a first attempt to jointly learn the control of both. We show this to improve the efficacy of traffic systems. In our experiments, the joint control approach reduces average vehicle trip delays, w.r.t. controlling only traffic lights, in eight out of eleven benchmark scenarios. Analyzing the qualitative behavior of the vehicle speed advice policy, we observe that this is achieved by smoothing out the velocity profile of vehicles nearby a traffic light. Learning joint control of traffic signaling and speed advice in the real world could help to reduce congestion and mitigate the economical and environmental repercussions of today's traffic systems.
Johannes V. S. Busch, Robert Voelckner, Peter Sossalla, Christian L. Vielhaus, Roberto Calandra, Frank H. P. Fitzek
2023-09-18T15:45:22Z
http://arxiv.org/abs/2309.09881v1
Deep Reinforcement Learning for the Joint Control of Traffic Light Signaling and Vehicle Speed Advice ###### Abstract Traffic congestion in dense urban centers presents an economical and environmental burden. In recent years, the availability of vehicle-to-anything communication allows for the transmission of detailed vehicle states to the infrastructure that can be used for intelligent traffic light control. The other way around, the infrastructure can provide vehicles with advice on driving behavior, such as appropriate velocities, which can improve the efficacy of the traffic system. Several research works applied deep reinforcement learning to either traffic light control or vehicle speed advice. In this work, we propose a first attempt to jointly learn the control of both. We show this to improve the efficacy of traffic systems. In our experiments, the joint control approach reduces average vehicle trip delays, w.r.t. controlling only traffic lights, in eight out of eleven benchmark scenarios. Analyzing the qualitative behavior of the vehicle speed advice policy, we observe that this is achieved by smoothing out the velocity profile of vehicles nearby a traffic light. Learning joint control of traffic signaling and speed advice in the real world could help to reduce congestion and mitigate the economical and environmental repercussions of today's traffic systems. ## I Introduction Traffic congestion is a major source of delay in transportation networks and results in significant economic and environmental repercussions. Minimizing the delay caused in traffic systems is thus one of the primary concerns of traffic research. Due to spatial constraints, it is often difficult to increase road capacities by simply building wider roads. This is especially true in dense urban centers, where the problem of traffic congestion is particularly severe. It is therefore important to increase the capacity of existing traffic systems through the intelligent allocation of given resources. A particularly important measure for traffic control are traffic lights. Past literature focused on the optimization of traffic light signaling to mitigate congestion [1, 2]. The increasing availability of vehicle-to-anything (V2X) communication technologies, such as IEEE 802.11p or 5G-V2X [3], allows the collection of a detailed traffic state that, in theory, could enable more informed traffic control decisions. Since most traditional methods are not designed to handle detailed state data, in recent years many researchers have successfully applied deep reinforcement learning (DRL) algorithms to infer good traffic light control polices from data [4, 5, 6]. Many works implement complex multi-agent algorithms to learn intricate cooperation of traffic lights, while scaling to large scenarios [7, 8, 9, 10, 11]. Most publications claim to outperform previous state-of-the-art methods when evaluated on their own simulation scenarios. However, in a set of established benchmark scenarios, Ault, et. al. were unable to reproduce these results [12]. Indeed, evaluated on said benchmark, the algorithms that performed best were relatively simple independent learners that implement no explicit cooperation mechanisms. Another measure to control traffic systems, which has started to emerge, is vehicle speed advice. In contrast to speed limits, speed advice is not legally binding, but can be used to encourage more foresighted driving behavior that increases the safety and efficacy of traffic systems. 
It can be presented to drivers via adaptive street signs or can be made available on vehicles' dashboards via V2X communication. DRL has been used to control vehicle speed by adapting speed limits [13] or directly optimizing velocities of individual vehicles [14, 15]. In previous works, DRL has been applied to both traffic light control and vehicle speed advice individually. In this work, we will develop and test a first approach to jointly optimize the two measures. To do so, we build on the results of Ault et al. [12] by implementing DRL agents, as independent learners, that control both traffic light signaling and vehicle speed advice of individual vehicles. Figure 1 shows the envisioned system. We evaluate this approach both qualitatively, on an atomic single intersection scenario, as well as quantitatively on the benchmark scenarios from [12]. In our experiments, learning vehicle speed advice on top of traffic light signaling is shown to improve the efficiency of traffic systems in terms of travel time. This improvement is achieved by smoothing out velocity profiles of vehicles that are approaching the intersection. This work presents a first attempt towards jointly optimizing traffic light signaling and vehicle speed advice. We expect future work to build on our results by exploring more advanced approaches of integrating the two kinds of agents.

Fig. 1: The envisioned traffic system. Via V2X communication, vehicles share their current state so that the infrastructure can make more informed decisions. The infrastructure transmits advice on optimal velocities to vehicles that can be displayed on the dashboard.

## II Background and Related Work

Reinforcement learning (RL) is a subfield of machine learning in which an agent learns to sequentially make decisions, based on its current state, to maximize a numerical reward over time [16]. It learns through trial and error, receiving positive reinforcement for actions that lead to desired outcomes and negative reinforcement for actions that do not. This makes it a useful approach when it is difficult to derive an optimal policy from first principles. Many traditional RL approaches rely on tabular representations, limiting state and action spaces to be discrete [16]. Recent advances in RL have been driven by the application of Deep Neural Networks (DNNs) as function approximators, which allow the application to complex problem domains with continuous state and action spaces. The combination of RL and DNNs is referred to as Deep Reinforcement Learning (DRL). In this work we will use the Proximal Policy Optimization (PPO) algorithm [17], which shows good convergence properties, allows for discrete and continuous action spaces, and is a popular algorithm in DRL. In many real-world settings, multiple learning agents act in a shared environment, giving rise to certain multi-agent pathologies that can aggravate learning. Multi-agent RL (MARL) methods implement mechanisms to deal with these issues. In this work we will follow the naive approach of independent learners, which simply ignores all multi-agent pathologies and implements all agents using single-agent methods [18]. Vehicle-to-anything (V2X) communication enables the exchange of information between all entities of a traffic system, giving rise to a plethora of new application possibilities.
Existing and emerging technologies, such as IEEE 802.11p or 5G-V2X [3], promise reliable low latency communication over the air that matches the high safety requirements and tight real-time constraints of traffic systems. Traffic light control is a critical aspect of transportation infrastructure as it helps to ensure the safety and efficiency of vehicle and pedestrian movement in urban areas. Approaches to traffic light control can be categorized into fixed-time and adaptive control. Fixed-time control methods, like TRANSYT [1], use predetermined signal timing patterns based on expected traffic and road layout, not taking into account the present traffic condition. Adaptive control, like SCOOT [2], uses real-time data to continually adjust signal timing, based on the current traffic situation, with the goal of maximizing throughput and minimizing delays. In recent years, there has been increasing interest in the use of machine learning, and in particular DRL, to improve traffic light control [19]. Publications strongly differ in their implemented state space, which is ultimately defined by the sensing capabilities of the traffic infrastructure. [4] assume traffic information to originate from inductive loop sensors. Many works use more detailed traffic state information that could only be inferred from additional sensors, like traffic cameras, or through the communication of individual vehicle states via V2X communication. A popular representation, often associated with the availability of traffic cameras, is obtained by dividing roads into a high resolution grid, where every cell encodes spatial information like occupancy, speed, and acceleration [7, 9]. Other approaches, typically considered to obtain real-time information through V2X communication, use concatenated vehicle states to encode detailed traffic state information, such as the distance of the leading vehicle to the next intersection for every lane [5], or position and velocity of the closest N vehicles [6]. Other distinguishing factors include the choice of action space, reward function, the choice of RL algorithm, and the cooperation of different intersections (for a detailed evaluation see [19]). Many publication claim to beat the former state of the art in their own simulation environment, mostly by implementing sophisticated methods from MARL [7, 8, 9, 10, 11]. However, when comparing algorithms on a broad benchmark of traffic scenarios, Ault, et. al. show that learning with independent learners, without intricate cooperation strategies or sharing of DNN parameters, outperforms other approaches [12]. In this paper, we heavily leverage the results obtained by Ault, et. al. in adopting their methodology and building our own methods on top of it, as well as using their set of benchmark scenarios. A detailed explanation of our modifications can be found in Section III; a brief overview of the benchmarks and metrics introduced in [12], in Section IV. Adaptive speed control are measures to adapt speed limits to the current traffic situation. One option that has been deployed for several decades are adaptive roadside speed signs that can lower speed limits in case of congested traffic. This has shown to reduce accident rates and increase traffic capacities [20]. Lately, there has been increasing interest in the use of advanced technologies, such as DRL, to improve speed control systems. In [13], DRL is used to control adaptive speed signs along a road to optimize vehicle delays. 
Other approaches directly control the acceleration (and therefore the speed) of individual vehicles to dampen oscillations in dense traffic [14, 15]. Vehicle speed advice is a softer, not legally binding alternative to speed limits that can be used to advise drivers on appropriate driving velocities. Speed advice can be communicated to drivers through adaptive street signs. More recently, car manufacturers have implemented mechanisms to display speed advice, transmitted via V2X communication, inside the vehicle [21]. The application of speed advice shows to improve traffic flow and reduce CO\({}_{2}\) emissions [22]. The work closest to ours is [23], that uses DRL to jointly control traffic lights and autonomous vehicles. Though many implementation details are similar, our approach is conceptually very different: While [23] assume autonomous vehicles with control residing on the vehicle side, we consider a system where the infrastructure controls both traffic light signaling and speed advice which is then sent to the vehicles. We expect most vehicles and infrastructure to be equipped with communication means in the near future. Automated vehicles, on the other hand, may take a lot longer to be widely deployed. This conceptual difference also influences implementation details. Most importantly, we learn individual traffic signaling and speed advice agents for each intersection, which has been shown to outperform other approaches in TLC [12]. In contrast, [23] learn a single autonomous driving policy that has to generalize over different intersections. ## III Joint Control of Traffic Signaling and vehicle speed advice In this Section, we describe our methodology of implementing the joint control of traffic signaling and vehicle speed advice. In particular, we explain the incremental adaptions we apply to the work of Ault, et. al. [12], which from here on we will refer to as RESCO. The central result of RESCO was that, evaluated on a broad set of traffic light control benchmark scenarios, the DRL algorithms that performed best were a set of independent learners (IDQN and IPPO), rather than highly specialized methods from self-proclaimed state-of-the-art methods. We take this as justification to start out with the environment formulation of the RESCO agent and successively augment state and action space to model the availability of detailed traffic information through V2X communication as well as the means to propose speed advice to individual drivers. We obtain three different agents that we extensively evaluate in Section V. Among other things, these agents differ in their respective observation space. Table I summarizes the observation spaces of all agents. We also note the dimensionality of the individual parts of the observation space, where \(N_{P}\) is the number of traffic light phase options of an intersection and \(N_{L}\) is the number of afferent lanes. At runtime, the values are concatenated into a vector that is used as input to the policy NNs. During our implementation, we found several issues, that we regard as shortcomings of the original RESCO implementation or that are not in line with the goal of this work. We adapt for these issues and show in Section V that we still approximately reproduce the results of RESCO. In particular, these adaptions are: **Short Lanes:** Real world scenarios in the RESCO benchmark contain several relatively short lanes as an artifact of the lane definition of the SUMO simulator. 
As traffic lights observe (state space) and optimize for (reward function) vehicles only on afferent lanes, this significantly reduces the sensing distance. As a countermeasure, we define all lanes shorter than 15 meters as short lanes and include their direct predecessors into the state space of traffic lights as well as including vehicles into the respective reward function. For the vehicle speed advice control described in Section III, short lanes are left uncontrolled. **Normalization of lane density:** Instead of using the exact number of approaching vehicles as input to our NN, we normalize this number by the maximum capacity of the lane, so that the density ranges from zero to one. **Sensing radius:** In the RESCO paper, a 200 meter sensing radius is assumed. As we consider V2X communication networks, that provide low latency communication over large distances, we remove this limitation. **Traffic light state:** RESCO does not include the time that a traffic light has been showing the same phase. This means that decisions cannot be based on time passed. We here include this time as well as a Boolean that indicates if the minimum phase time has been surpassed. **Reward Function:** RESCO uses reward functions according to the respective algorithm. We here use the negative time spent on afferent lanes of the intersection \[r_{i}^{t+1}=-\sum_{\forall l\in L_{i}}\sum_{v\in V_{l}}\tau(v,l,t)\,, \tag{1}\] where \(L_{i}\) is the set of afferent lanes of intersection i, \(V_{l}\) is the set of vehicles that are currently on lane l, and \(\tau(v,l,t)\) is the time that vehicle \(v\) has spent on lane \(l\) up to timepoint t. We chose this reward function over other commonly used ones, as it resulted in the best reduction of overall vehicle trip time in a comparative study we conducted (not reported here). Please note, that we strictly add information to the observation space. The resulting agent should therefore perform equally well or better than RESCO's IPPO implementation. We call the traffic light control agent, that results from the described adaptions, the "TLC" agent. It serves as a baseline to evaluate the benefits of equipping the system with V2X and vehicle speed advice. The first major adaption to the RESCO benchmark environments that we want to investigate, is the addition of detailed state information of individual vehicles that are made available through V2X communication means. Previous work demonstrated that the availability of detailed traffic state knowledge enables more efficient traffic light control [6]. However, in contrast to [6], we here assume a baseline agent that includes live information about the density and average speed on individual lanes, which could be inferred approximately through inductive loops. The V2X-enabled agent is provided with detailed state information of the lane leader for every afferent lane of the controlled intersection. A lane leader is here defined as the vehicle that is closest to the intersection but is not part of the traffic light queue (so the vehicle is still moving). In particular, the additional information are the current distance from the next traffic light, the distance to the back of the traffic light queue, and the current speed of all lane leaders. We denote this agent the "TLC+V2X" agent. The final addition to RESCO is the implementation of vehicle speed advice. 
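As an illustration of how the lane-leader information used by the TLC+V2X agent could be gathered from a running SUMO simulation, here is a minimal TraCI-based sketch; the leader definition, the stopping-speed threshold, and the function names are our own illustrative choices, not the original RESCO code.

```python
import traci  # SUMO's TraCI Python API; assumes an active simulation connection

STOP_SPEED = 0.1  # m/s; below this a vehicle is treated as queued (arbitrary threshold)

def lane_leader_features(lane_id):
    """Distance to the stop line, distance to the back of the queue, and speed
    of the lane leader (the closest vehicle that is still moving)."""
    lane_length = traci.lane.getLength(lane_id)
    vehicles = traci.lane.getLastStepVehicleIDs(lane_id)
    positions = {v: traci.vehicle.getLanePosition(v) for v in vehicles}
    speeds = {v: traci.vehicle.getSpeed(v) for v in vehicles}

    moving = [v for v in vehicles if speeds[v] > STOP_SPEED]
    if not moving:
        return None  # no lane leader on this lane
    leader = max(moving, key=positions.get)          # closest to the intersection
    dist_to_signal = lane_length - positions[leader]

    # Back of the queue = most upstream standing vehicle; without a queue,
    # fall back to the stop line itself.
    queued = [positions[v] for v in vehicles if speeds[v] <= STOP_SPEED]
    queue_back = min(queued) if queued else lane_length
    dist_to_queue = max(queue_back - positions[leader], 0.0)
    return dist_to_signal, dist_to_queue, speeds[leader]
```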
We consider two individual RL agents per intersection: one agent controlling traffic lights, as described above, and one agent controlling speed advice of individual vehicles on afferent roads. In particular, the speed advice agent controls the speed advice of the lane leader of every afferent lane of the intersection (except for short lanes). This loosely corresponds to the case of speed advice being transmitted to individual vehicles via V2X communication. We control the speed by incrementally adapting it once every 5 seconds of simulated time (same as for traffic light control). Each incremental adaption is chosen from a continuous interval \([-\Delta v_{max},+\Delta v_{max}]\), where \(\Delta v_{max}\) is set to ten percent of the speed limit of the respective lane. To realize these incremental updates, we added the current speed advice as an additional feature to the state space of the speed advice agent (other than that it uses the same state space as the TLC+V2X agent). In our experiments, controlling only lane leaders resulted in better performance than controlling all vehicles, and (relatively small) incremental updates performed better than absolute values. In addition, we hypothesize that these incremental adaptions with relatively small step sizes and relatively few updates would meet better acceptance by real drivers than large fluctuations on short timescales. Vehicle acceleration is controlled by SUMO's standard driver model [24] that uses the vehicle speed advice as speed limit. SUMO's speed factors are preserved, which means that vehicles may drive slightly faster or slower than the speed limit/advice, which results in additional randomness of the environment. We call this the "TLC+V2X+VSA" agent.

## IV Experimental Setup

The experimental setup largely coincides with the one from RESCO. We therefore only briefly discuss it here. To test the efficacy of our method, we implement several benchmark scenarios in SUMO [25], which allows for the extraction of a detailed traffic state and the adaption of traffic lights and speed limits via the TraCI API. We run our experiments on eleven different scenarios, of which eight are taken from the RESCO benchmark. The RESCO scenarios consist of popular real-world benchmark scenarios from Ingolstadt (InTAS [26] - a single intersection with 1715 vehicles approaching per hour, a corridor of seven intersections with 3030 vehicles per hour, and a patch of 21 intersections with 4280 vehicles per hour) and Cologne (Tapas [27] - a single intersection with 2015 vehicles per hour, a corridor of three intersections with 2856 vehicles per hour, and a patch of eight intersections with 2046 vehicles per hour). In addition, RESCO uses two synthetic scenarios of a four-by-four grid, with one encountering balanced traffic of 1473 vehicles per hour and the other one, highly frequented avenues on one axis and calmer roads on the other axis with 2484 vehicles per hour. All RESCO scenarios use predetermined vehicle spawn times and routes. Randomness of the trial runs stems only from random vehicle speed factors. In addition to the RESCO scenarios, we implement a single intersection scenario for the detailed qualitative analysis of the agent's behavior. The two perpendicular one-way streets consist of two lanes each. The traffic light therefore only has two different phase options. Vehicles may only go straight at the intersection and are created at the in-going roads following a binomial distribution. We simulate three different traffic demands: one low demand of approx.
70 vehicles per hour, one moderate traffic demand of approx. 500 vehicles per hour, and one high demand of approx. 2500 vehicles per hour. To switch phase, the traffic light goes through an amber phase that lasts two seconds. In addition, we enforce a minimal phase time of eight seconds. The performance metric we care about is average trip delay, which is the average trip time of vehicles minus the minimal possible trip time (of vehicles unconstrained by traffic or traffic lights). The reported values are obtained by running five trainings with different random seeds, computing mean and standard deviation over runs (not over episodes), and taking the minimum over episodes. There might be an argument for reporting outcomes of multiple test runs of the best obtained model, when researching traffic control as an application rather than benchmarking DRL algorithms. However, we choose not to diverge from [12]. In our experiments, we also investigated average CO\({}_{2}\) emissions per trip. However, against our expectations, vehicle speed advice only resulted in minimal differences in CO\({}_{2}\) emissions in our experiments. Due to space constraints, we do not consider emissions in this publication. To train our IPPO agents, we use the Ray RLlib Python library [28]. For most hyperparameters, we use the provided default values. Exceptions are the learning rate, which we set to \(10^{-5}\), the number of training episodes, which we set to 1400, and the DNN shape, which in our agents consists of four layers of 256 neurons each.

## V Evaluation and Discussion

We split the evaluation into two parts: First, we compare the performance of the implemented agents in all scenarios to analyze the quantitative benefit of vehicle speed advice. Second, we will analyze the single intersection scenario in depth to understand the qualitative behavior of the speed advice agent. As described in Section IV, we compare our developed algorithms on the RESCO benchmark set as well as on a single intersection with three different traffic demands. Table II shows the trip delay values for the IDQN and IPPO algorithms from [12] (which outperformed the other investigated methods) and the results of this work. Results from RESCO are copied from [12]; for the single intersection scenarios, they are not available. Values marked with an asterisk\({}^{*}\) were not reported in [12] but are taken from training logs in RESCO's Github repository. Numbers in (brackets) denote the relative performance of each agent w.r.t. the agent in the column to the left. As described in Section III, our TLC agent should be approximately equal to the IPPO agent from RESCO. As we do not observe the high instability of PPO that was reported in [12], we compare our TLC agent against the better of RESCO's IDQN and IPPO agents for each scenario. The relative performance of our TLC agent therefore is w.r.t. the better one of RESCO's agents. Deviations range from 1.2 % in Col. Corr up to 34.6 % in Ing. Reg. (not counting values marked with \({}^{*}\), which we are unsure about). However, since it is often difficult to exactly reproduce findings from different implementations in a noisy domain like DRL, we still consider our TLC algorithm to approximately, but not fully, reproduce the results from [12]. Due to the strong deviations from the RESCO paper, our further analysis will focus on performance improvements among the algorithms from our own implementation.
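For concreteness, the incremental speed advice described in Section III could be applied to a lane leader roughly as follows (once every 5 s of simulated time); enforcing the advice through `traci.vehicle.setMaxSpeed` and the clipping bounds are our assumptions, not necessarily the authors' exact implementation.

```python
import traci

def update_speed_advice(lane_id, leader_id, current_advice, action):
    """One incremental update of the speed advice for a lane leader.

    `action` is the agent's continuous output in [-1, 1]; the maximum change
    per update is ten percent of the lane's speed limit.
    """
    speed_limit = traci.lane.getMaxSpeed(lane_id)
    delta_max = 0.1 * speed_limit
    advice = current_advice + float(action) * delta_max
    # Keep the advice in a sensible range (the lower bound of 1 m/s is an
    # arbitrary choice for this sketch; the paper does not specify one).
    advice = min(max(advice, 1.0), speed_limit)
    # SUMO's driver model then treats the advice like a speed limit, while the
    # vehicle's individual speed factor still adds some randomness.
    traci.vehicle.setMaxSpeed(leader_id, advice)
    return advice
```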
The introduction of detailed state knowledge via V2X results in improved trip delays in nine out of eleven scenarios. This is consistent with previous work [6]. Adding speed advice to the system further improves the performance in eight of the eleven scenarios. This shows that the use of speed advice in traffic systems can reduce trip delays of vehicles. As the TLC+V2X+VSA agent could theoretically learn to not use speed advice at all (and therefore reproduce the TLC+V2X agent), the increased trip delays in three of the eleven scenarios can only stem from sub-optimal learned policies. Further research therefore has to investigate better implementation details of the traffic control system. The atomic setting of one single intersection allows us to study the qualitative effects of implementing vehicle speed advice. Figure 2 shows the position over time of multiple vehicles on the rightmost lane of the west-east route in the single intersection (high demand) scenario. The position is normalized to the length of the route. Each colored line represents a single vehicle traversing this route. The color indicates the speed advice given, normalized to the speed limit of the road. The dashed line marks the position of the traffic light. The speed advice agents slows down vehicles that are in relatively close proximity to the traffic light to smooth the velocity profile. Surprisingly, a relatively small adjustment of speed advice results in a relatively large improvement in trip delays. In fact, the maximum change in speed advice per decision, of ten percent of the road's speed limit, is rarely executed, justifying the choice of a small dynamic range. As an artifact of our implementation, which applies speed advice always to the first moving vehicle per lane, we can see speed advice decisions being passed on by vehicles that leave a lane, due to it being updated only every 5 seconds. Future work might experiment with different control frequencies as well as larger dynamic ranges of the speed advice. ## VI Conclusion and Outlook As a first step towards jointly optimizing traffic light signaling and vehicle speed advice, we investigated the use of two independent PPO agents per intersection, one for each traffic control measure, to minimize average trip delay. We first approximately reproduced previous results on a set of benchmark scenarios [12]. Subsequently, we equipped the traffic infrastructure with a) more detailed traffic state knowledge and b) the capability to send speed advice to vehicles on afferent roads. Prior work already demonstrated that a more detailed traffic state facilitates better control in traffic lights [6]. Adding a learning agent to control vehicle speed advice, showed to further decrease average vehicle trip delays in eight of the eleven investigated scenarios. Analyzing the behavior of the speed advice agent, we found that this is obtained through slightly slowing vehicles close to the intersection to smooth out acceleration profiles. The conclusion we draw from this, is that the use of traffic speed advice, enabled through enhanced connectivity of vehicles and traffic infrastructure, has the potential to increase the efficiency of traffic systems and mitigate congestion. Further we infer that DRL seems to be a fitting tool to learn the joint control of traffic signaling and vehicle speed advice. Though we obtained promising results using independent learners, future work needs to further optimize implementation details of the traffic control system. 
Furthermore, in theory, smoothing out velocity profiles should reduce CO\({}_{2}\) emissions. However, as our investigations in this regard were inconclusive, we did not report them here and left a thorough analysis to future research.
2309.10308
Tight and attainable quantum speed limit for open systems
We develop an intuitive geometric picture of quantum states, define a particular state distance, and derive a quantum speed limit (QSL) for open systems. Our QSL is attainable because any initial state can be driven to a final state by the particular dynamics along the geodesic. We present the general condition for dynamics along the geodesic for our QSL. As evidence, we consider the generalized amplitude damping dynamics and the dephasing dynamics to demonstrate the attainability. In addition, we also compare our QSL with others by strict analytic processes as well as numerical illustrations, and show our QSL is tight in many cases. It indicates that our work is significant in tightening the bound of evolution time.
Zi-yi Mai, Chang-shui Yu
2023-09-19T04:40:55Z
http://arxiv.org/abs/2309.10308v1
# Tight and attainable quantum speed limit for open systems ###### Abstract We develop an intuitive geometric picture of quantum states, define a particular state distance, and derive a quantum speed limit (QSL) for open systems. Our QSL is attainable because any initial state can be driven to a final state by the particular dynamics along the geodesic. We present the general condition for dynamics along the geodesic for our QSL. As evidence, we consider the generalized amplitude damping dynamics and the dephasing dynamics to demonstrate the attainability. In addition, we also compare our QSL with others by strict analytic processes as well as numerical illustrations, and show our QSL is tight in many cases. It indicates that our work is significant in tightening the bound of evolution time. pacs: 03.65.-w, 03.65.Yz ## I Introduction Quantum speed limit (QSL) (or equivalently to call the quantum speed limit time (QSLT)) is an important feature of a dynamical system, which mainly characterizes the minimal time required for a state evolving to a target state. It is a constrained optimization problem important in quantum metrology [1; 2; 3], quantum optimal control [4; 5; 6; 7], quantum information processing [8; 9]. Recently, it's considered a meaningful index for a given quantum system to evaluate its dynamics characteristics involving robustness [10], non-Markovianity [11; 12], upper bound of changing rate of expected value of observable [13], decoherence time [14; 15; 16; 17], interaction speed in spin system [18; 19] and changing rate of phase [20] and so on [21; 22]. Besides, the quantum speed limit is widely used to explore the intrinsic nature of physical systems, such as for the many-body system [23], ultracold atomic system [24], non-Hermitian system [25] and entanglement [26; 27; 28; 29; 30]. For the application fields, studies of the quantum speed limit are involved in machine learning [31], quantum measurement [32] and thermometry [33]. QSL was first addressed for a unitary evolution from a pure state to its orthogonal state by Mandelstam and Tamm [34], who presented the famous time-Energy uncertainty (MT bound) \(\tau_{MT}^{\perp}=\pi/(2\Delta E)\), where \((\Delta E)^{2}=\langle H^{2}\rangle-\langle H\rangle^{2}\) stands for the variance of Hamiltonian of the system [35]. Later, Margolus and Levitin [36] established another bound (ML bound) of the unitary evolution between pure orthogonal states as \(\tau_{ML}^{\perp}=\pi/2E\) based on the average energy \(E\)[35]. A tighter bound was obtained by the combination of MT and ML bounds as \(\tau_{MT-ML}^{\perp}=\pi/(2\min\{E,\Delta E\})\)[37]. Giovannetti et al. generalized MT and ML bounds to the mixed initial state [38]. However, a deeper understanding of the QSL could count on the geometrical perspective first developed by Anandan and Aharanov for the MT bound with time-dependent Hamiltonian in terms of the Fubini-Study metric on the pure-state space [39]. Up to now, various geometrical distances have been exploited to develop QSL for density matrices [13; 15; 40; 41; 42; 43; 44; 45; 40; 41; 42; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54]. Considering the inevitable contact with environments, the QSL has also been developed for open system [55] based on different metrics such as quantum Fisher information [56], relative purity [51], and the MT bound and the ML bound have been extended to open systems in terms of a geometric way [57]. In addition, QSL is also characterized based on quantum resource theory [58]. 
It is even shown that the speed limit is not a unique phenomenon in quantum systems [59; 60]. Every QSL could have its significance in that it gives a potentially different understanding of the bound on the evolution time of a system. The most typical examples are the MT and ML QSLs which bound the evolution time by the fluctuation of energy and the average energy, respectively. In this sense, it is important to establish a distinguished QSL. The tightness and the attainability are key aspects of a good QSL bound, which strongly depends on the different understanding perspectives of QSL [61; 62]. If the dynamics (Hamiltonian or Lindblad) is fixed, the QSLT bounds the minimum evolution time between a pair of states with a given 'distance.' If the state 'distance' is given, the tight QSL bound means that dynamics drive the given initial state to the final state with the minimum time. MT and ML bounds are attainable for a unitary evolution if the initial state is the equal-weight superposition of two eigenstates of the Hamiltonian with zero ground-state energy [36]. Ref. [46] generalized the tight bound for the unitary case to the mixed states, Ref. [63] verifies this bound is attainable for any dimension. Ref. [64] proposed a QSLT bound attainable for dephasing and depolarized channels. For a tight bound, many papers focus on combining different QSLs. In this paper, we establish a tight and attainable QSLT for open systems in terms of a geometric approach. Similar to the Bloch representation of quantum states, we develop an intuitive geometric picture of quantum states. All the states are mapped to the surface of the high-dimensional sphere. In this picture, we derive a QSL for an open quantum system by a particularly-defined state distance. Our QSL is attainable in that for any given initial state, one can always find a dynamics to drive the initial state to the final state along the geodesic. In particular, we present a general condition for dynamics along the geodesic. The generalized amplitude damping dynamics and the dephasing dynamics evidence the attainability. In addition, we compare our QSL and the one in Ref. [64] by considering the unitary evolution of pure states and the particular amplitude damping dynamics. It is shown that our QSL is tight. In addition, numerical examples show that the QSLT of Ref. [64] is tighter than ours in many cases, which implies the combination of the two QSLs is necessary. The paper is organized as follows. We first propose the intuitive geometric picture of quantum states and present our QSL. Then we arrive at the general condition for dynamics along the geodesic, and then we give concrete examples to demonstrate the attainability of our QSL. Finally, we show the tightness of our QSL by comparing particular dynamics. ## II Quantum speed limit For an open system, the evolution of the quantum state \(\rho_{t}\) is governed by the general master equation as \[\dot{\rho}_{t}=\mathcal{L}_{t}\left(\rho_{t}\right), \tag{1}\] where \(\mathcal{L}_{t}\left(\cdot\right)\) denotes a general dissipator of the system and the subscript \(t\) indicates the potential dependence on time, in particular, we don't specify whether \(\mathcal{L}_{t}\left(\cdot\right)\) is Lindblad or not. 
Let \(P_{t}=P\left(\rho_{t}\right)=\rho_{t}/\sqrt{Tr\rho_{t}^{2}}\), then for any pair of \(P_{t}\) and \(P_{t}^{\prime}\) we can define \[D(P_{t}||P_{t}^{\prime})=\arccos\langle P_{t},P_{t}^{\prime}\rangle \tag{2}\] based on the Hilbert-Schmidt inner product \(\langle P_{t},P_{t}^{\prime}\rangle=TrP_{t}^{\dagger}P_{t}^{\prime}\). Based on _Schoenberg's Theorem_ [65], which was first introduced in Ref. [66] to handle distance functions on the metric space of density matrices, one can easily prove that \(D(P_{t}||P_{t}^{\prime})\) is a proper distance. Thus all \(P_{t}\) can form a metric space \(S\left(\mathcal{H}\right)\) with respect to the distance \(D(P_{t}||P_{t}^{\prime})\). It is obvious that \(D(P_{t}||P_{t}^{\prime})\) for a pair of density matrices \(\rho\) and \(\sigma\) can be explicitly written as \[D(\rho||\sigma)=\arccos\mathcal{F}_{GM}(\rho,\sigma), \tag{3}\] where \(\mathcal{F}_{GM}(\rho,\sigma)=Tr\rho\sigma/(\sqrt{Tr\rho^{2}}\sqrt{Tr\sigma^{2}})\) is the alternative fidelity introduced in Ref. [67]. The alternative fidelity is also used in a different way for QSLT in Ref. [68]. To get the QSLT, we need the differential form of the distance \(D(\rho||\sigma)\). Considering the infinitesimal evolution \(\rho_{t}\longmapsto\rho_{t}+d\rho_{t}\), the distance reads \[ds=D(\rho_{t}||\rho_{t}+d\rho_{t})=\arccos\frac{Tr\rho_{t}(\rho_{t}+d\rho_{t})}{\sqrt{Tr\rho_{t}^{2}}\sqrt{Tr(\rho_{t}+d\rho_{t})^{2}}}. \tag{4}\] A direct rearrangement gives \(\frac{Tr\rho_{t}(\rho_{t}+d\rho_{t})}{\sqrt{Tr\rho_{t}^{2}}\sqrt{Tr(\rho_{t}+d\rho_{t})^{2}}}=\cos ds=1-\frac{ds^{2}}{2}\), which indicates \[ds^{2}=2(1-\frac{Tr\rho_{t}(\rho_{t}+d\rho_{t})}{\sqrt{Tr\rho_{t}^{2}}\sqrt{Tr(\rho_{t}+d\rho_{t})^{2}}}). \tag{5}\] Under the condition \(d\rho_{t}\longmapsto 0\), we can expand \(\frac{1}{\sqrt{Tr(\rho_{t}+d\rho_{t})^{2}}}\) to the second order: \[\frac{1}{\sqrt{Tr(\rho_{t}+d\rho_{t})^{2}}}=\frac{1-\frac{Tr(d\rho_{t})^{2}}{2Tr\rho_{t}^{2}}-\frac{Tr\rho_{t}d\rho_{t}}{Tr\rho_{t}^{2}}+\frac{3(Tr\rho_{t}d\rho_{t})^{2}}{2(Tr\rho_{t}^{2})^{2}}}{\sqrt{Tr\rho_{t}^{2}}}. \tag{6}\] Substituting Eq. (6) into Eq. (5), we can immediately obtain the metric as \[ds^{2}=\frac{Tr(d\rho_{t})^{2}Tr\rho_{t}^{2}-(Tr\rho_{t}d\rho_{t})^{2}}{(Tr\rho_{t}^{2})^{2}}. \tag{7}\] Denote \(\mathcal{L}_{t}(P_{t})=\dot{\rho}_{t}/\sqrt{Tr\rho_{t}^{2}}\) when no confusion arises (especially for the Lindbladian [69], this form of \(\mathcal{L}_{t}(P_{t})\) is reasonable because \(1/\sqrt{Tr\rho_{t}^{2}}\) is just a real number that commutes with any operator); then the metric given in Eq. (7) takes the form of the Fubini-Study metric, \[(ds/dt)^{2}=\langle\mathcal{L}_{t}(P_{t}),\mathcal{L}_{t}(P_{t})\rangle-\langle P_{t},\mathcal{L}_{t}(P_{t})\rangle^{2}. \tag{8}\] For infinitesimal \(dt\), we have \[P_{t+dt}=\frac{\rho_{t}+d\rho_{t}}{\sqrt{Tr(\rho_{t}+d\rho_{t})^{2}}}=P_{t}+\mathcal{L}_{t}(P_{t})dt-P_{t}\langle P_{t},\mathcal{L}_{t}(P_{t})\rangle dt. \tag{9}\] According to \(\dot{P}_{t}=\frac{P_{t+dt}-P_{t}}{dt}\), one can arrive at \[\langle\dot{P}_{t},\dot{P}_{t}\rangle=\langle\mathcal{L}_{t}(P_{t}),\mathcal{L}_{t}(P_{t})\rangle-\langle P_{t},\mathcal{L}_{t}(P_{t})\rangle^{2}. \tag{10}\] Comparing Eq. (8) and Eq. (10), we have \[v_{t}^{2}=(ds/dt)^{2}=\langle\dot{P}_{t},\dot{P}_{t}\rangle. \tag{11}\] In the above metric space, we can derive a speed limit based on the metric in Eq. (10) and the distance in Eq. (3) as follows.
_Theorem 1._-The minimal time for a given state \(\rho_{0}\) to evolve to the state \(\rho_{\tau}\) subject to the dynamics Eq. (1) is lower bounded by \[\tau_{qsl}=\frac{\arccos\langle P_{0},P_{\tau}\rangle}{\frac{1}{\tau}\int_{0}^{\tau}\sqrt{\langle\dot{P}_{t},\dot{P}_{t}\rangle}dt} \tag{12}\] with \(P_{0}=\rho_{0}/\sqrt{Tr\rho_{0}^{2}}\) and \(P_{\tau}=\rho_{\tau}/\sqrt{Tr\rho_{\tau}^{2}}\). _Proof_-Based on the distance, one can find that \[\arccos\langle P_{0},P_{\tau}\rangle=D(\rho_{0}||\rho_{\tau})\leq\sum_{t=0}^{\tau}D(\rho_{t}||\rho_{t}+d\rho_{t})=\int_{0}^{\tau}\left|\frac{ds}{dt}\right|dt=\int_{0}^{\tau}\sqrt{\langle\dot{P}_{t},\dot{P}_{t}\rangle}dt, \tag{13}\] which directly leads to \(\tau\geq\tau_{qsl}\) as given in Eq. (12). \(\square\) Now we'd like to give an intuitive understanding of the map between this metric space and the Bloch representation. As shown in Fig. 1, the states in the metric space form a spherical crown and are one-to-one mapped to the bottom surface of the hemispherical surface, which is geometrically the same as the circular section across the center of the Bloch sphere and the two points \(\rho\) and \(\sigma\). The apex of the spherical crown is the maximally mixed state. The latitude of the bottom surface of the spherical crown is determined by the intersection angle of a pure state and the maximally mixed state, or equivalently by the dimension of the state space. All the states of the same mixedness are distributed on the same latitude, which especially implies that unitary evolution proceeds along a latitude. An evolution that purely reduces the mixedness proceeds along a longitude. It can be noticed that an evolution trajectory tracing a geodesic in the Bloch sphere, which is equipped with the Euclidean distance, also traces a geodesic in our metric space.

## III Attainability and tightness

_Attainability._-It is easy to find that the quantum speed limit time \(\tau_{qsl}\) is expressed by the distance \(\arccos\langle P_{0},P_{\tau}\rangle\) divided by the average evolution speed \(\bar{v}_{t}=\frac{1}{\tau}\int_{0}^{\tau}\sqrt{\langle\dot{P}_{t},\dot{P}_{t}\rangle}dt\). Next, we will show that the QSL time presented in Theorem 1 can be attainable. Namely, given a distance and the average speed, one can always find a pair of quantum states and a corresponding dynamics such that the practical evolution time is exactly the QSL time. _Theorem 2._-The evolution of \(\rho_{t}\) from a given initial state \(\rho_{0}\) to a final state \(\rho_{\tau}\) along the geodesic can be written as \[\dot{\rho}_{t}=\frac{\dot{\beta}(t)}{\beta(t)}\left(\rho_{t}-\rho_{0}\right), \tag{14}\] and hence the geodesic is \[\rho_{t}=(1-\beta(t))\rho_{0}+\beta(t)\rho_{\tau}, \tag{15}\] where \(\beta(t)\) is a monotonic function with \(\beta(0)=0\) and \(\beta(\tau)=1\). The proof is given in Appendix A. Ref. [70] shows that the form of (14) can describe the behavior of atomic decay. It can be verified that Eq. (15) is also the geodesic of the bound from Ref. [64]. In fact, one can easily verify that the arbitrariness of \(\beta(t)\) and \(\rho_{\tau}\) guarantees a high degree of freedom in the form of the geodesics: \[\rho_{t}=\rho_{0}+\beta(t)C, \tag{16}\] where \(C\) is an arbitrary traceless Hermitian matrix. Theorem 2 explicitly indicates the general form of the geodesic. In particular, \(\rho_{\tau}=\frac{I}{N}\) corresponds to the longitude equation. Eq. (14) means that the density matrix evolves along a geodesic, i.e., the QSLT is attainable.
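As a quick numerical sanity check of Theorem 2, the sketch below discretizes a linear-interpolation trajectory \(\rho_{t}=(1-\beta)\rho_{0}+\beta\rho_{\tau}\) for a qubit and compares the accumulated path length \(\int_{0}^{\tau}\sqrt{\langle\dot{P}_{t},\dot{P}_{t}\rangle}dt\) with the distance \(\arccos\langle P_{0},P_{\tau}\rangle\); the specific states, the step count, and the choice \(\beta(t)=t/\tau\) are arbitrary illustrative assumptions.

```python
import numpy as np

def normalize(rho):
    # P = rho / sqrt(Tr rho^2)
    return rho / np.sqrt(np.trace(rho @ rho).real)

def hs_inner(a, b):
    # Hilbert-Schmidt inner product <A, B> = Tr(A^dagger B)
    return np.trace(a.conj().T @ b).real

rho0 = np.array([[0.9, 0.2], [0.2, 0.1]], dtype=complex)      # arbitrary mixed state
rho_tau = np.array([[0.4, 0.0], [0.0, 0.6]], dtype=complex)   # arbitrary final state

tau, steps = 1.0, 20000
beta = np.linspace(0.0, 1.0, steps + 1)                       # beta(0)=0, beta(tau)=1
P = [normalize((1 - b) * rho0 + b * rho_tau) for b in beta]

# Path length: sum of |dP| along the discretized trajectory.
path_length = sum(np.sqrt(hs_inner(P[i + 1] - P[i], P[i + 1] - P[i]))
                  for i in range(steps))
distance = np.arccos(np.clip(hs_inner(P[0], P[-1]), -1.0, 1.0))

print(f"distance arccos<P_0,P_tau> : {distance:.6f}")
print(f"integrated path length     : {path_length:.6f}")  # matches along the geodesic
print(f"tau_qsl / tau              : {distance / path_length:.6f}")  # ~1
```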
However, it can be shown that bound (12) cannot be saturated for any unitary case by making a comparison to the bound \(\tau_{\Phi}\) from Ref. [46]: \[\tau\geq\tau_{\Phi}=\frac{\sqrt{2}\arccos\sqrt{\langle P_{0},P_{\tau}\rangle}}{\frac{1}{\tau}\int_{0}^{\tau}\sqrt{\langle\dot{P}_{t},\dot{P}_{t}\rangle}dt}>\tau_{qsl},\;\tau\neq 0. \tag{17}\] Inequality (17) is derived from the monotonically decreasing function \(f(x)=\sqrt{2}\arccos x-\arccos x^{2}>f(1)=0\) for \(0<x<1\).

Figure 1: The geometric picture of quantum states. The left figure is the Bloch sphere, and the right is our new geometric picture. The arrow \(\mathbf{r}\) in the Bloch sphere is the Bloch vector, representing the corresponding density matrix \(\rho\) according to \(\rho=\frac{1}{N}\left(I+\sqrt{\frac{N(N-1)}{2}}\mathbf{r}\cdot\mathbf{A}\right)\); \(\mathbf{s}\) and \(\mathbf{t}\) denote the corresponding Bloch vectors of the states \(\sigma\) and \(\epsilon\), respectively, where \(\mathbf{A}=(A_{1},...,A_{N})\) is a Lie algebra for SU(\(N\)) [46]. Focusing on the cross-section, which involves the Bloch vectors \(\mathbf{r}\), \(\mathbf{s}\), \(\mathbf{t}\) and the origin \(\mathbf{o}\) (the maximally mixed state), the cross-section can be reshaped as a crown (in the right picture) of the unit sphere in the matrix space, which is equipped with the Hilbert-Schmidt inner product. Equivalently, the crown consists of the normalized density matrices \(P=\rho/\sqrt{Tr\rho^{2}}\); similarly, \(Q\) and \(M\) are defined for \(\sigma\) and \(\epsilon\), respectively. Overall, the new metric space is actually an extension of the one-dimensional cross-section of the Bloch sphere to the three-dimensional space. Therefore, the unitary trajectories tracing the great circle on the surface of the Bloch sphere correspond to the latitude on the spherical crown in the new metric space (the blue trajectories). The depolarization trajectories that evolve along the radial direction of the Bloch sphere correspond to the longitude in the new metric space (the red trajectories).

_Examples._-Considering an \(N\)-level system coupled to a heat bath with \(b_{k}\) denoting the annihilator of its \(k\)th mode, the Hamiltonian for the total system is \(H=\sum_{i=0}^{N}E_{i}\left|i\right\rangle\left\langle i\right|+\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}+\sum_{ik}(g_{k}\sigma_{+}^{i}b_{k}+h.c.)\), where \(E_{i}\) is the energy of the \(i\)th energy level, and \(\sigma_{-}^{i}=\left|0\right\rangle\left\langle i\right|\) and \(\sigma_{+}^{i}=\left|i\right\rangle\left\langle 0\right|\) denote the transition operators. Following Ref. [71], one can obtain the dynamics for the reduced system as \[\dot{\rho}_{t}=\sum_{k}\left\{-i[\frac{s_{t}^{k}}{2}\left|k\right\rangle\left\langle k\right|,\rho_{t}]+\frac{\gamma_{t}^{k}}{2}(2\sigma_{-}^{k}\rho_{t}\sigma_{+}^{k}-\sigma_{+}^{k}\sigma_{-}^{k}\rho_{t}-\rho_{t}\sigma_{+}^{k}\sigma_{-}^{k})\right\}, \tag{18}\] where \(s_{t}^{k}\) is the time-dependent Lamb shift and \(\gamma_{t}^{k}\) represents the time-dependent decay rate. This equation describes the generalized amplitude damping dynamics. Let the initial state be \(\rho=\sum\limits_{i=0}\lambda_{i}\ket{i}\bra{i}\) and suppose \(\gamma_{t}^{k}\equiv\gamma_{t}\), the density matrix \(\rho_{t}\) can be solved as \[\rho_{t}=\left(1-\sum\limits_{i\neq 0}\lambda_{i}q_{t}\right)\ket{0}\bra{0}+\sum\limits_{i}\lambda_{i}q_{t}\ket{i}\bra{i} \tag{19}\] with \(q_{t}=e^{-\int_{0}^{t}\gamma_{t}dt^{\prime}}\). Differentiating \(\rho_{t}\) in Eq.
(19), we have \(\dot{\rho}_{t}=\dot{q}_{t}\left(\sum\limits_{i}\lambda_{i}\ket{i}\bra{i}\right)\), \(\lambda_{0}=1-\sum\limits_{i\neq 0}\lambda_{i}\), which means the QSLT will be attainable due to theorem 2. To explicitly show it, let's substitute Eq. (18) and Eq. (19) into Eq. (12), one can immediately find that in the duration \(\tau\), the distance in terms of the average evolution speed is \[\tau\bar{v}_{t} = \int_{0}^{\tau}\frac{|\frac{dq_{t}}{dt}|c}{1+aq_{t}^{2}-2bq_{t}}dt \tag{20}\] \[= \left|\arctan\frac{a|q_{\tau}|-b}{c}-\arctan\frac{a-b}{c}\right|,\] where \(c=\sqrt{\sum\limits_{i}\lambda_{i}^{2}}\), \(b=\sum_{i\neq 0}\lambda_{i}\), \(a=b^{2}+c^{2}\) and we suppose \(q_{t}\) is monotonic. The distance away from the initial state \(\rho_{0}\) is \[D(\rho_{\tau}||\rho_{0})=\arccos\frac{1-b(|q_{\tau}|+1)+a|q_{\tau}|}{f(q_{0}) f(q_{\tau})} \tag{21}\] with \(f(q_{\tau})=\sqrt{1-2b|q_{\tau}|^{2}+a|q_{\tau}|^{2}}\) and \(q_{0}=1\). It is easy to find that \(D(\rho_{\tau}||\rho_{0})=\tau\bar{v}_{t}\), which directly shows the quantum speed limit time is consistent with the practical evolution time, \(\tau_{qsl}=\frac{D(\rho_{\tau}||\rho_{0})}{\bar{v}_{t}}=\tau\). The other attainable case is the dephasing dynamics. Suppose the above \(N\)-level system undergoes an environment consisting of multiple reservoirs with each two energy levels driven by an individual reservoir. Let the \(j\)th and the \(k\)th levels interact with the reservoir as \(H_{jk}=\sum_{\nu}\sigma_{jk}^{z}(g_{\nu}b_{\nu}^{\dagger}+g_{\nu}^{*}b_{\nu})\), where \(\sigma_{jk}^{z}=\ket{j}\bra{j}-\ket{k}\bra{k}\) for \(j>k\), \(g_{\nu}\) is the coupling strength, and \(b_{\nu}\) is any operator of the reservoir corresponding to the \(j\)th and \(k\)th levels. Consider the time evolution of the system [71], for any initial state \(\rho(0)\) one will get the final state as \[\rho(t)=\sum\limits_{mn}\rho_{mn}(0)\ket{m}\bra{n}Tr_{B}\left\{V_{n}^{-1}(t)V_ {m}(t)\rho_{B}^{mn}(0)\right\}, \tag{22}\] where \(V_{m}(t)\) is derived from the time-evolution operator performing on the state \(\ket{m}\), and \(\rho_{B}^{mn}(0)\) is the potential initial state of the reservoir corresponding to the \(m\)th and \(n\)th levels. Define the decay rates as \[\Gamma_{mn}(t)=\ln Tr_{B}\left\{V_{n}^{-1}(t)V_{m}(t)\rho_{B}^{mn}(0)\right\} \equiv-\gamma_{t} \tag{23}\] independent of \(mn\), then the final state can be written as \[\rho_{t}=\sum\limits_{i}\rho_{ii}(0)\ket{i}\bra{i}+e^{-\gamma_{t}}\sum\limits _{j\neq k}\rho_{jk}(0)\ket{j}\bra{k}. \tag{24}\] The derivative of \(\rho_{t}\) reads \[\dot{\rho_{t}}=-\frac{d\gamma_{t}}{dt}e^{-\gamma_{t}}\sum\limits_{j\neq k}\rho _{jk}(0)\ket{j}\bra{k}. \tag{25}\] It is evident that Eq. (25) has the same form as that in theorem 2, so the QSLT is attainable. Again, let's substitute Eq. (24) and Eq. (25) into Eq. (12), we can express the distance based on the average evolution speed as \[\tau\bar{v}_{t} = \int_{0}^{\tau}\left|\frac{d\gamma_{t}}{dt}\right|\frac{Re^{- \gamma_{t}}}{1+R^{2}e^{-2\gamma_{t}}}dt \tag{26}\] \[= \arctan R-\arctan\left(e^{-\gamma_{\tau}}R\right)\] where \(R=\sqrt{\frac{\sum_{j\neq k}|\rho_{jk}|^{2}}{\sum_{i}\rho_{i}^{2}}}\) and \(\gamma_{t}\) is supposed to be monotonic. The distance away from the initial state is \[D(\rho||\sigma)=\arccos\frac{F_{\tau}^{2}\left(2,1\right)}{F_{\tau}(2,0)F_{ \tau}(2,2)} \tag{27}\] with \(F_{\tau}\left(k,s\right)=\sqrt{1+R^{k}e^{-s\gamma_{\tau}}}\). 
A further simplification shows that \(D(\rho||\sigma)=\tau\bar{v}_{t}\), which means the QSL time \(\tau_{qsl}=\frac{D(\rho||\sigma)}{\bar{v}_{t}}=\tau\). First, we emphasize that in the two examples we chose particular dynamics and initial states to demonstrate the attainability. In fact, both the Hamiltonian and the initial states can change the attainability. In Appendix B, we present an example of a qubit system to demonstrate the deviation of the evolution trajectory from the geodesic due to different Hamiltonians and initial states. In addition, we do not specify the explicit form of the decay rates except for monotonicity; non-monotonicity or divergence of \(\gamma_{t}\) forces the evolution trajectory to oscillate back and forth about the geodesic and leads to \(\tau_{qsl}<\tau\), i.e., the evolution trajectory deviates from the geodesic. For example, suppose the decay rate takes the form \[\gamma_{t}=\frac{2\gamma_{0}\lambda\sinh(\delta t/2)}{\delta\cosh(\delta t/2)+\lambda\sinh(\delta t/2)}, \tag{28}\] where \(\delta=\sqrt{\lambda^{2}-2\gamma_{0}\lambda}\), and \(\lambda\) and \(\gamma_{0}\) represent the spectral width and coupling strength, respectively. If the parameter \(\delta\) is real, i.e., \(\gamma_{0}\leq\lambda/2\), the dynamics is Markovian, which implies a relatively weak coupling, and the decay rate can be taken as a constant \(\gamma_{t}=\gamma_{0}\) for \(\gamma_{0}\ll\lambda\). Conversely, \(\gamma_{0}\geq\lambda/2\) corresponds to the stronger coupling described by non-Markovian dynamics, which leads to a non-monotonic \(\gamma_{t}\). The evolution trajectory is then not a geodesic, which is similar to the unsaturated QSL bounds reported for non-Markovian dynamics in previous works [12; 72; 57]. The above two dynamics indicate the attainability of our QSL time in state spaces of any dimension. It is obvious that the farthest evolution is governed by nonunitary dynamics instead of a unitary process. If we restrict the system to undergo unitary evolution subject to the Hamiltonian \(H_{t}\), the average speed \(\bar{v}_{t}\) reduces to \(\bar{v}_{t}=\frac{1}{\tau}\int_{0}^{\tau}\sqrt{-\frac{1}{\hbar^{2}}Tr\{[P_{t},H_{t}]^{2}\}}dt\), which is the same as the speed \(Q_{\Phi}\) in Ref. [46]. However, one can find that the effective distance \(D(\rho||\sigma)\) is strictly larger than the distance \(\Phi(\rho||\sigma)\) in Ref. [46] for nontrivial dynamics, so the practical evolution time is strictly larger than our presented QSL time \(\tau_{qsl}\). Hence our QSL cannot be attained by a nontrivial unitary process. _Tightness.-_ Tightness is an important question for QSLs, and it depends not only on the particular QSLT itself but also on how the QSLT is understood. For example, the MT and ML bounds for unitary evolution can be the tightest, since they are obviously attainable for any pair of states, as mentioned previously. Of course, if we understand the QSLT in the sense of asking whether, for any given initial state, one can find proper dynamics to drive the state along the geodesic, then our QSLT is also the tightest, since we have explicitly demonstrated its attainability. However, in the general sense it is quite hard to evaluate the tightness of a QSLT for open systems, because it is impossible to exhaust all potential evolution trajectories to demonstrate the tightness of a single QSLT, or to compare the infinitely many QSLTs, due to their dependence on the evolution trajectory.
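As a small aside before the comparison below, the Markovian/non-Markovian distinction drawn for the decay rate (28) can be probed directly; a hedged sketch, with assumed parameter values, that evaluates \(\gamma_{t}\) and tests its monotonicity in the weak- and strong-coupling regimes:

```python
import numpy as np

# Monotonicity of the decay rate in Eq. (28) for assumed parameter values.
# gamma0 <= lam/2: delta real, Markovian, monotone gamma_t.
# gamma0 >  lam/2: delta imaginary, non-Markovian, gamma_t oscillates (and can diverge).
def gamma_t(t, gamma0, lam):
    delta = np.sqrt(complex(lam ** 2 - 2.0 * gamma0 * lam))
    num = 2.0 * gamma0 * lam * np.sinh(delta * t / 2.0)
    den = delta * np.cosh(delta * t / 2.0) + lam * np.sinh(delta * t / 2.0)
    return (num / den).real

ts = np.linspace(0.0, 30.0, 3001)
for gamma0, lam in [(0.1, 1.0), (2.0, 1.0)]:      # weak vs. strong coupling
    g = gamma_t(ts, gamma0, lam)
    print(gamma0, lam, bool(np.all(np.diff(g) >= -1e-9)))   # monotone?
```

The weak-coupling case returns a monotone rate, while the strong-coupling case does not, consistent with the loss of saturation discussed above.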
Therefore, we follow the usual and feasible comparison approach in the literature [51, 57, 73, 74, 64], namely, for a given initial and final state, we compare two QSLTs subject to the same evolution trajectory. Recently, Ref. [64] presented a QSLT \[\tau_{E}=\frac{E(\rho_{0}||\rho_{\tau})}{\frac{1}{\tau}\int_{0}^{\tau}dt\sqrt{Tr\dot{\rho}_{t}^{2}}} \tag{29}\] with good tightness, based on the Euclidean distance \(E(\rho_{0}||\rho_{\tau})=\sqrt{Tr(\rho_{0}-\rho_{\tau})^{2}}\). It has been shown that \(\tau_{E}\) shares the same geodesic as our QSLT \(\tau_{qsl}\). Here we would like to compare our QSLT with \(\tau_{E}\). Since \(TrA^{2}TrB^{2}\geq(TrAB)^{2}\) for any Hermitian operators \(A\) and \(B\) [75], one can easily find \[\sqrt{Tr\dot{\rho}_{t}^{2}}\leq\frac{\sqrt{Tr\rho_{t}^{2}Tr\dot{\rho}_{t}^{2}-\left(Tr\rho_{t}\dot{\rho_{t}}\right)^{2}}}{Tr\rho_{t}^{2}}. \tag{30}\] It is obvious that the left-hand side of the inequality (30) is the evolution speed of the bound \(\tau_{E}\), and the right-hand side is the evolution speed of our bound. Because these two bounds are saturated by the same dynamics [41], integrating Eq. (30) one will immediately arrive at \[\sqrt{Tr(\rho_{0}-\rho_{\tau})^{2}}<\arccos\frac{Tr\rho_{0}\rho_{\tau}}{\sqrt{Tr\rho_{0}^{2}}\sqrt{Tr\rho_{\tau}^{2}}}. \tag{31}\] When we restrict to unitary evolution and a pure initial state, the two sides of Eq. (30) are identical, but the inequality in Eq. (31) still holds, which implies that our bound satisfies \(\tau_{qsl}>\tau_{E}\) and is therefore tighter. In particular, the continuity of the QSLT guarantees that for evolution trajectories close to the unitary path of a pure state, our bound still shows preferable tightness compared with \(\tau_{E}\). However, to probe the tightness in general nonunitary cases, we sample 1000 randomly generated dynamical processes for qubit systems and calculate \(\tau_{qsl}-\tau_{E}\) in Fig. 2. One finds that in most of the samples our bound is the tighter one, but some demonstrate that \(\tau_{E}\) is tighter than ours. For an analytical demonstration, let us consider a qubit interacting with a large bath. The Kraus operators are given as [41] \[\begin{split} K_{0}(t)=&\sqrt{c}(\sqrt{1-p(t)}\ket{0}\bra{0}+\ket{1}\bra{1}),\\ K_{1}(t)=&\sqrt{c}\sqrt{p(t)}\ket{1}\bra{0},\\ K_{2}(t)=&\sqrt{1-c}(\sqrt{1-p(t)}\ket{1}\bra{1}+\ket{0}\bra{0}),\\ K_{3}(t)=&\sqrt{1-c}\sqrt{p(t)}\ket{0}\bra{1},\end{split} \tag{32}\] where \(c\) is a parameter determined by the temperature of the bath and \(p(t)\) is some increasing function of time \(t\) describing the evolution path. Suppose the initial state is \(\rho_{0}=(1-\rho_{11}(0))\ket{0}\bra{0}+\rho_{10}(0)\ket{0}\bra{1}+\rho_{10}^{*}(0)\ket{1}\bra{0}+\rho_{11}(0)\ket{1}\bra{1}\); the density operator can then be obtained as \[\rho_{t}=\begin{pmatrix}1+(\rho_{11}(0)-c)p(t)-\rho_{11}(0)&\sqrt{1-p(t)}\rho_{10}(0)\\ \sqrt{1-p(t)}\rho_{10}^{*}(0)&(c-\rho_{11}(0))p(t)+\rho_{11}(0)\end{pmatrix}. \tag{33}\] For convenience of computation, we adopt the Bloch representation to describe the initial state, \(\rho_{11}=(1-r_{z})/2\), and \(\rho_{10}\) is selected to ensure a pure initial state. The ratio \(\tau_{qsl}/\tau\) of the QSL time to the actual evolution time is plotted in Fig. 3 (a), which indicates that the QSLT \(\tau_{qsl}\) is less than the actual evolution time \(\tau\) for most of the initial states, but is attainable if \(\rho_{11}=0\) or \(\rho_{11}=c\). Figure 2: The purity of the initial state vs \(\tau_{qsl}-\tau_{E}\). We randomly generated \(4\times 4\) diagonal Hermitian matrices as the total Hamiltonian \(H\) for a bipartite qubit system. The initial state of the system is the product state \(\rho_{S}\otimes\rho_{E}\), where \(\rho_{S}\) and \(\rho_{E}\) are two-dimensional density matrices. The states we are concerned with are obtained by tracing out the irrelevant parts using the partial trace: \(\rho_{t}=Tr_{E}(U_{t}\rho_{S}\otimes\rho_{E}U_{t}^{\dagger})\), where \(U_{t}=\exp[-iHt]\).
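The sampling behind Fig. 2 can be sketched as follows; the sizes, parameter ranges, and integration times below are assumptions for illustration rather than the exact settings used for the figure. A random diagonal total Hamiltonian acts on a system-environment qubit pair, the reduced state is obtained by a partial trace, and the two bounds \(\tau_{qsl}\) and \(\tau_{E}\) are evaluated along the resulting trajectory:

```python
import numpy as np
from scipy.linalg import expm

# Compare tau_qsl with tau_E for one random reduced-qubit trajectory
# rho_t = Tr_E[U_t (rho_S x rho_E) U_t^dag] with a random diagonal 4x4 Hamiltonian.
rng = np.random.default_rng(1)

def random_qubit():
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

H = np.diag(rng.uniform(-1.0, 1.0, size=4))          # assumed range of energies
rho0 = np.kron(random_qubit(), random_qubit())       # system first, environment second
tau = 5.0
ts = np.linspace(0.0, tau, 2001)

def reduced(t):
    U = expm(-1j * H * t)
    full = U @ rho0 @ U.conj().T
    return full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over the bath

def hs(a, b):                                        # Hilbert-Schmidt inner product Tr(ab)
    return np.trace(a @ b).real

dt = ts[1] - ts[0]
v_qsl, v_E = [], []
for t in ts[1:-1]:
    r = reduced(t)
    rdot = (reduced(t + dt) - reduced(t - dt)) / (2.0 * dt)
    v_qsl.append(np.sqrt(max(hs(r, r) * hs(rdot, rdot) - hs(r, rdot) ** 2, 0.0)) / hs(r, r))
    v_E.append(np.sqrt(hs(rdot, rdot)))
r0, rT = reduced(0.0), reduced(tau)
D = np.arccos(hs(r0, rT) / np.sqrt(hs(r0, r0) * hs(rT, rT)))
E = np.sqrt(hs(r0 - rT, r0 - rT))
tau_qsl = tau * D / np.trapz(v_qsl, ts[1:-1])
tau_E = tau * E / np.trapz(v_E, ts[1:-1])
print(tau_qsl - tau_E)    # positive: our bound is tighter for this sample
```

Repeating this over many random draws reproduces the type of scatter summarized in Fig. 2; the sign of the difference varies from sample to sample.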
In fact, one can find that \(\rho_{11}=0\) or \(c\) is the exact condition that ensures \(\dot{\rho}_{t}=\beta(t)C\), where \(C\) is time-independent, which is also the equivalent condition for geodesic dynamics. Additionally, we compare our QSLT with that proposed in Ref. [64] by the ratio \(\tau_{qsl}/\tau\) in Fig. 3 (b). It is obvious that our QSLT is larger (tighter) than the QSLT in Ref. [64]. Since the combination approach has been widely used in establishing tighter QSLTs [57], combining the different QSLs could provide a tighter bound for the evolution time. Namely, a combined QSLT of the form \(\tau_{qsl}^{comb}=\max\{\tau_{qsl},\tau_{E}\}\) should be a good QSLT for an open system. ## IV Discussion and conclusion We have established a quantum speed limit for open systems through an intuitive geometrical picture. For any initial state, one can always find corresponding dynamics that achieve the "fastest" evolution along the geodesic. We found the general condition for dynamics to saturate our QSL. As evidence, we considered the evolution of a quantum state undergoing the generalized amplitude damping channel and the dephasing channel, which verifies the attainability of our QSL when the decay rates are monotonic. But for dynamics with a non-monotonic decay rate, such as the case of non-Markovian dynamics, the bound is unsaturated. We compared our QSLT with the tight one \(\tau_{E}\) presented in Ref. [64]. We show our bound is tighter than \(\tau_{E}\) for a pure initial state governed by unitary (or close-to-unitary) evolution. Besides, we sampled 1000 non-unitary dynamics for qubit systems and found that in most cases our bound is tighter than \(\tau_{E}\), but in some other cases the opposite holds, which implies that the combination of the two QSLTs should be a tighter bound. In summary, we have presented an attainable bound for the QSLT, which provides a different understanding of the QSLT. ## Acknowledgements This work was supported by the National Natural Science Foundation of China under Grants No. 12175029, No. 12011530014, and No. 11775040. ## Appendix A In this section, we prove that Eq. (14) describes the geodesic. Let \(\Delta\rho_{\tau 0}=\rho_{\tau}-\rho_{0}\); then \(\rho_{t}\) can be rewritten as \(\rho_{t}=\rho_{0}+\beta(t)\Delta\rho_{\tau 0}.\) The derivative of \(\rho_{t}\) reads \(\dot{\rho}_{t}=\dot{\beta}(t)\left(\rho_{\tau}-\rho_{0}\right)\), which is exactly Eq. (14). Solving the differential equation (14), one obtains Eq. (15). We will show that Eq. (14) is the geodesic.
One can easily find \[\begin{split} Tr\dot{\rho}_{t}^{2}=&\dot{\beta}(t)^ {2}Tr(\Delta\rho_{\tau})^{2},\\ Tr\rho_{t}^{2}=& Tr\rho_{0}^{2}+\beta(t)^{2}Tr( \Delta\rho_{\tau 0})^{2}+\\ & 2\beta(t)Tr\rho_{0}\Delta\rho_{\tau 0},\\ Tr\rho_{t}\dot{\rho}_{t}=&\dot{\beta}(t)[Tr\rho_{0} \Delta\rho_{\tau 0}+\beta(t)Tr(\Delta\rho_{\tau 0})^{2}],\end{split} \tag{34}\] so the average evolution speed can be calculated as \[\begin{split}|v_{t}|=&\frac{\sqrt{Tr\rho_{t}^{2}Tr \dot{\rho}_{t}^{2}-(Tr\rho_{t}\dot{\rho}_{t})^{2}}}{Tr\rho_{t}^{2}}\\ =&\frac{|\dot{\beta}(t)|\sqrt{Tr\rho_{0}^{2}Tr(\Delta \rho_{\tau 0})^{2}-(Tr\rho_{0}\Delta\rho_{\tau 0})^{2}}}{Tr\rho_{0}^{2}+\beta^{2}(t)Tr( \Delta\rho_{\tau 0})^{2}+2\beta(t)Tr\rho_{0}\Delta\rho_{\tau 0}}\\ =&\frac{\dot{\beta}(t)\frac{Tr(\Delta\rho_{\tau 0})^{2}}{ \sqrt{Tr(\Delta\rho_{\tau 0})^{2}Tr\rho_{0}^{2}-(Tr\rho_{0}\Delta\rho_{\tau 0})^{2}}}}{1+[ \frac{\beta(t)Tr(\Delta\rho_{\tau 0})^{2}+Tr\rho_{0}\Delta\rho_{\tau 0}}{ \sqrt{Tr(\Delta\rho_{\tau 0})^{2}Tr\rho_{0}^{2}-(Tr\rho_{0}\Delta\rho_{\tau 0})^{2}}}] \\ =&\frac{d}{dt}\arctan\frac{\beta(t)Tr(\Delta\rho_{ \tau 0})^{2}+Tr\rho_{0}\Delta\rho_{\tau 0}}{\sqrt{Tr(\Delta\rho_{\tau 0})^{2}Tr\rho_{0}^{2}-( Tr\rho_{0}\Delta\rho_{\tau 0})^{2}}},\end{split} \tag{35}\] where \(\dot{\beta}(t)>0\) is considered since \(\beta(t)\) is a monotonic function with \(\beta(0)=0\) and \(\beta(\tau)=1\). The evolution path Figure 3: (a) The ratio \(\tau_{qsl}/\tau\) versus \(\rho_{11}\) and \(\tau\). Here \(c=0.5\), \(p(t)=\ln(1+t/100)\). (b) The ratio \(\tau_{qsl}/\tau\) of our QSL and that in Ref. [64]. The temperature-determined parameter \(c\) is set as zero. The red surface represents our QSLT and the green one corresponds to that in Ref. [64]. \[\int_{0}^{\tau}\left|v_{t}\right|dt =\arctan\frac{Tr(\Delta\rho_{\tau 0})^{2}+Tr\rho_{0}\Delta\rho_{ \tau 0}}{\sqrt{Tr(\Delta\rho_{\tau 0})^{2}Tr\rho_{0}^{2}-(Tr\rho_{0}\Delta\rho_{\tau 0})^{2}}}\] \[-\arctan\frac{Tr\rho_{0}\Delta\rho_{\tau 0}}{\sqrt{Tr(\Delta\rho_{ \tau 0})^{2}Tr\rho_{0}^{2}-(Tr\rho_{0}\Delta\rho_{\tau 0})^{2}}}. \tag{36}\] A simple calculation can show that \(\cos\int_{0}^{\tau}\left|v_{t}\right|dt=Tr\rho_{0}\rho_{\tau}/(\sqrt{Tr\rho_{0} ^{2}}\sqrt{Tr\rho_{\tau}^{2}})\), which means Eq. (15) is the geodesic. The proof is finished. \(\square\) ## Appendix B Here we provide an example to illustrate that the system Hamiltonian can drive the evolution trajectory to deviate from the geodesics. Consider the master equation (18) in the Schrodinger picture as \[\dot{\rho}_{t}= -i[H_{\theta},\rho_{t}]+\frac{\gamma}{2}\left(\Sigma_{-}\rho_{t} \Sigma_{+}-\Sigma_{+}\Sigma_{-}\rho_{t}-\rho_{t}\Sigma_{+}\Sigma_{-}\right),\] \[H_{\theta}= \frac{\Omega_{L}}{2}\left(\cos\theta\sigma_{z}+\sin\theta\sigma _{x}\right), \tag{37}\] where \(\gamma\) is the time-independent decay rate, \(\Sigma_{\pm}=U\sigma_{\pm}U^{\dagger}\) with \(U=\cos\frac{\theta}{2}I-i\sin\frac{\theta}{2}\sigma_{y}\), \(H_{\theta}\) is the parameter \(\theta\) determined Hamiltonian with the eigenfrequency of \(\Omega_{L}=\sqrt{\epsilon^{2}+\Omega^{2}}\), and \(\epsilon\) and \(\Omega\) denote the energy level difference and tunneling coupling, respectively. The solution of Eq. (37) is presented in Ref. 
[76] as \[\rho_{t}=\frac{1}{2}\begin{pmatrix}1+r_{z}(t)&r_{x}(t)-ir_{y}(t)\\ r_{x}(t)+ir_{y}(t)&1-r_{z}(t)\end{pmatrix} \tag{38}\] with \[r_{x}(t) =e^{-\frac{\gamma}{2}t}\Big{\{}\left[\sin^{2}\theta e^{-\frac{ \gamma}{2}t}+\cos^{2}\theta\cos(\Omega_{L}t)\right]r_{x}(0)\] \[-\cos\theta\sin(\Omega_{L}t)r_{y}(0)+\sin\theta\cos\theta\left[e ^{-\frac{\gamma}{2}t}\right.\] \[-\left.\left.\cos(\Omega_{L}t)\right]r_{z}(0)\right\}+\sin\theta \left[e^{-\gamma t}-1\right] \tag{39}\] \[r_{y}(t)=e^{-\frac{\gamma}{2}t}\big{[}\cos\theta\sin(\Omega_{L}t)r_{x}(0)+ \cos(\Omega_{L}t)r_{y}(0)\] \[-\sin\theta\sin(\Omega_{L}t)r_{z}(0)\big{]} \tag{40}\] \[r_{z}(t) =e^{-\frac{\gamma}{2}t}\bigg{\{}\sin\theta\cos\theta\left[e^{- \frac{\gamma}{2}t}-\cos(\Omega_{L}t)\right]r_{x}(0)\] \[+\sin\theta\sin(\Omega_{L}t)e^{-\frac{\gamma}{2}t}r_{y}(0)+\left[ \cos^{2}\theta e^{-\frac{\gamma}{2}t}\right.\] \[+\left.\sin^{2}\theta\cos(\Omega_{L}t)\right]r_{z}(0)\bigg{\}}+ \cos\theta\left[e^{-\gamma t}-1\right] \tag{41}\] According to Eq. (16), the geodesics dynamics can be expressed as a product of real time-dependent factor and a time-independent hermitian matrix with zero-trace, hence the time-dependent imaginary factor of the non-diagonal entries should be vanish, it means that \(r_{y}=0\), i.e., \(r_{y}(0)=0\), \(r_{x}(0)=r_{z}(0)\), \(\theta=\pi/4\), one can immediately obtain that Eq. (38) is the geodesics due to \(r_{x}(t)=r_{y}(t)\), and \([H_{\theta},\rho_{t}]=0\). That is, for this model, the system Hamiltonian drives the evolution trajectory deviate the geodesics except for the case when \(\rho_{t}\) is commutative with the Hamiltonian. In the general cases, the presence of the time-dependent imaginary factor of the non-diagonal entries of the dynamics matrix always lead to a non-geodesics evolution.
2309.08547
Complex localization mechanisms in networks of coupled oscillators: two case studies
Localized phenomena abound in nature and throughout the physical sciences. Some universal mechanisms for localization have been characterized, such as in the snaking bifurcations of localized steady states in pattern-forming partial differential equations. While much of this understanding has been targeted at steady states, recent studies have noted complex dynamical localization phenomena in systems of coupled oscillators. These localized states can come in the form of symmetry-breaking chimera patterns that exhibit a coexistence of coherence and incoherence in symmetric networks of coupled oscillators and gap solitons emerging in the band gap of parametrically driven networks of oscillators. Here, we report detailed numerical continuations of localized time-periodic states in systems of coupled oscillators, while also documenting the numerous bifurcations they give way to. We find novel routes to localization involving bifurcations of heteroclinic cycles in networks of Janus oscillators and strange bifurcation diagrams resembling chaotic tangles in a parametrically driven array of coupled pendula. We highlight the important role of discrete symmetries and the symmetric branch points that emerge in symmetric models.
Zachary G. Nicolaou, Jason J. Bramburger
2023-09-15T17:10:53Z
http://arxiv.org/abs/2309.08547v3
# Complex localization mechanisms in networks of coupled oscillators: two case studies ###### Abstract Localized phenomena abound in nature and throughout the physical sciences. Some universal mechanisms for localization have been characterized, such as in the snaking bifurcations of localized steady states in pattern-forming partial differential equations. While much of this understanding has been targeted at steady states, recent studies have noted complex dynamical localization phenomena in systems of coupled oscillators. These localized states come in the form of symmetry-breaking chimera patterns that exhibit a coexistence of coherence and incoherence in symmetric networks of coupled oscillators. Here, we report detailed numerical continuations of localized time-periodic states in systems of coupled oscillators, while also documenting the numerous bifurcations they give way to. We find novel routes to localization involving bifurcations of heteroclinic cycles in networks of Janus oscillators and strange bifurcation diagrams resembling chaotic tangles in a parametrically driven array of coupled pendula. We highlight the important role of discrete symmetries and the symmetric branch points that emerge in symmetric models. **Pattern-forming mechanisms for localization give rise to important phenomena in natural and man-made systems alike. While universal snaking mechanisms for localized steady states have been well characterized, less is known about the symmetry-breaking localization leading to chimera states in coupled oscillator models. In this paper, we illustrate novel bifurcation routes to chimera localization in models of coupled oscillators, emphasizing the important role of symmetry and drawing connections with chaos theory.** ## I Introduction We investigate mechanisms for localization in collections of coupled ordinary differential equations. In the most general setting, such models take the form \[\dot{x}_{n}=F(x_{n},\mu)+\varepsilon\sum_{k\in C_{n}}G(x_{k},x_{n},\mu),\quad n \in\Lambda, \tag{1}\] where \(\Lambda\) is a countable index set, the set \(C_{n}\subset\Lambda\) indicates the set of elements coupled to the element at index \(n\), \(\mu\in\mathbb{R}^{p}\) are system parameters, and \(\varepsilon\geq 0\) represents the strength of coupling between connected elements. The main mechanisms typically attributed to the presence of steady states with localized features embedded in a quiescent state in systems of the form (1) derive from _multistability_. That is, the presence of at least two stable equilibria to \(F\) at some parameter value \(\mu\). In the case of coupled bistable systems over the integer lattice \(\Lambda=\mathbb{Z}\), localized steady states with one\({}^{1}\) and multiple\({}^{2}\) regions of localization have been proven to exist. These results further fully describe the regular _snaking_ existence curves of these solutions as one varies the parameter \(\mu\) (see Fig. 1 for a demonstration), following similar results on localized pattern formation in partial differential equations\({}^{3}\). Related analytical results along these lines sought to investigate the effect of the choice of coupling set \(C_{n}\) when (1) is arranged on a ring\({}^{4}\) and the formation of localized square patterns of activation when \(\Lambda=\mathbb{Z}^{25}\). 
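As a concrete illustration of Eq. (1) with a bistable reaction term, the following sketch couples discrete Nagumo-type units on a ring; the choice of \(F\), \(G\), and all parameter values is an assumption made here for illustration and is not one of the specific models analyzed in the cited works. For weak coupling the interfaces pin, so an activated plateau relaxes to a localized steady state embedded in the quiescent background:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Eq. (1) with an assumed bistable F and diffusive nearest-neighbour coupling G
# (a discrete Nagumo chain): for weak coupling the fronts pin, leaving a localized
# steady state embedded in the quiescent x = 0 background.
N, eps = 64, 0.01
F = lambda x: -x * (x - 0.3) * (x - 1.0)            # bistable: stable rest states 0 and 1

def rhs(t, x):
    coupling = np.roll(x, 1) - 2.0 * x + np.roll(x, -1)   # Laplacian coupling on a ring
    return F(x) + eps * coupling

x0 = np.zeros(N)
x0[28:36] = 1.0                                     # activate a small plateau
sol = solve_ivp(rhs, (0.0, 2000.0), x0, rtol=1e-8, atol=1e-10)
xT = sol.y[:, -1]
print(np.count_nonzero(xT > 0.5), np.max(np.abs(rhs(0.0, xT))))  # plateau width, residual
```

The small residual confirms the chain has settled onto a steady state with a finite activated region, the discrete analogue of the localized patterns discussed above.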
Coupling bistable systems is a tried, tested, and true method of investigating localized steady states in systems of the form (1), one that is often amenable to analytical techniques in the weakly-coupled regime \(0<\varepsilon\ll 1\) through perturbative methods. A significantly more difficult area of investigation is determining the mechanisms that lead to localized oscillations in coupled systems. A notable bridge between localized steady states and localized oscillations is given by breather solutions to the discrete nonlinear Schrodinger equation. In this case the gauge invariance of the Schrodinger equation allows one to reduce the search for time-periodic standing solitons to identifying steady states corresponding to the amplitudes of circularly-symmetric oscillations\({}^{6}\),\({}^{7}\),\({}^{8}\),\({}^{9}\). However, gauge symmetry is by no means necessary to observe localized oscillations, as they have also been documented in chains of coupled mechanical oscillators\({}^{11}\),\({}^{12}\),\({}^{13}\),\({}^{14}\),\({}^{15}\),\({}^{16}\), where there is no known method of reducing back to the well understood steady-state case. From the standpoint of mathematical analysis, some simplifications can be made. First assume that the uncoupled (\(\varepsilon=0\)) dynamics of (1), given by \(\dot{x}_{n}=F(x_{n},\mu)\), exhibit a hyperbolic periodic solution. When considering all \(n\in\Lambda\) together one arrives at a hyperbolic torus of dimension \(|\Lambda|\) in (1) when \(\varepsilon=0\). Then, considering the perturbative regime \(0<\varepsilon\ll 1\) one may appeal to the theory of _weakly coupled oscillators_\({}^{19}\),\({}^{20}\),\({}^{21}\) to understand the dynamics on the perturbed torus. Precisely, Hale's invariant manifold theorem\({}^{22}\) can be used to guarantee that the uncoupled torus persists into small \(\varepsilon>0\), meaning that the dynamics of (1) can be reduced to the phases that parametrize this torus [23; 24]. The reduction to these _phase models_ lies at the heart of the theory of coupled oscillators, with a prototypical model being the Kuramoto system [25], and has widespread application to mathematical neuroscience [26; 27; 21]. Complex patterns of synchrony are well-known to emerge in phase models, including those that exhibit spatial localization. Notably, symmetry breaking in identically coupled identical oscillators gives rise to interesting chimera states exhibiting spatially separated regions of coexisting synchrony and asynchrony. First characterized by Kuramoto and Battogtokh in a globally coupled model [28], the bifurcations leading to their formation were described by Abrams and Strogatz [29]. While the mathematical theory has been substantially developed in the nonlocally coupled case [30], less theory has been developed for chimera states in locally coupled networks [31], which give rise to interesting complex dynamics. Recently, for example, a myriad of localized traveling chimera states was reported to occur in a ring of Janus oscillators (a generalization of the aforementioned Kuramoto model) [32]. In the optics literature, on the other hand, related localized oscillatory states known as gap solitons have long attracted attention [33; 34; 35; 36]. These states are typically created in damped and periodically driven quantum systems that exhibit band gaps and can often be associated with topological phenomena and non-Hermiticity [37], but the mechanisms of formation of analogous classical states have also recently attracted interest [38].
Such states have been recently reported, for example, in an array of parametrically-driven pendula with periodic heterogeneity [39; 40]. Here, we provide a numerical investigation of both the ring of Janus oscillators and the parametrically-driven pendula to document the mechanisms that lead to spatial localization. Using numerical continuation we demonstrate that the emergence of localized pattern formation in the phase models studied herein is significantly different from the steady states that have been thoroughly examined in systems of the form (1). Precisely, we find that localized traveling time-periodic patterns in the Janus oscillators come into existence through a heteroclinic bifurcation, wherein the traveling component comes from visiting neighborhoods of symmetrically related localized steady states. In the pendulum array, localized periodic solutions emerge following a secondary symmetry-breaking bifurcation out of a branch of period-doubled symmetric wave modes, and they exhibit a complex tangle of branching bifurcations leading to a myriad of attractive, localized periodic states. The remainder of this paper is organized as follows. First, in Section II we review the details of our numerical continuation scheme, with demonstrations of localized patterns in the Swift-Hohenberg equation. Then, in Section III we introduce the system of Janus oscillators, discuss its symmetries, and provide our continuation results. We then do the same for the coupled pendulum array in Section IV. In Section V we conjecture what drives the complexity of the bifurcation diagrams, and then we conclude in Section VI with a discussion of our findings. ## II Numerical continuation and symmetric branch points ### Background theory Here we provide a brief introduction to numerical continuation, our primary method of analysis. We utilize AUTO, which implements a pseudoarclength continuation strategy [41], and code producing all results in this paper is available on our GitHub repository [42]. Consider first a steady state solution \(x_{n}^{0}\) to Eq. (1), satisfying \[0=F(x_{n}^{0},\mu_{0})+\varepsilon\sum_{k\in C_{n}}G(x_{k}^{0},x_{n}^{0},\mu_{0}), \tag{2}\] for an initial parameter value \(\mu_{0}\). The implicit function theorem implies the existence of a branch of steady-state solutions \(x_{n}^{*}(\mu)\) with \(x_{n}^{*}(\mu_{0})=x_{n}^{0}\), provided the system Jacobian matrix \[J=\left(\partial F/\partial x_{m}+\varepsilon\sum_{k}\partial G/\partial x_{m}\right) \tag{3}\] is nonsingular. Pseudoarclength continuation provides an efficient strategy to numerically determine such solution branches even beyond bifurcation points at which the Jacobian becomes singular. This is achieved by parameterizing the solution branch with a new auxiliary variable \(s\) (the pseudoarclength) as in \((x_{n}^{*}(s),\mu(s))\). Given a current solution \((x_{n}^{*}(s),\mu(s))\) for an initial value of \(s\), the solution at \(s+\delta s\) is determined by solving the extended system of equations \[0 =F(x_{n}^{*}(s+\delta s),\mu(s+\delta s))+\varepsilon\sum_{k\in C_{n}}G(x_{k}^{*}(s+\delta s),x_{n}^{*}(s+\delta s),\mu(s+\delta s)),\] \[0 =\sum_{n}(x_{n}^{*}(s+\delta s)-x_{n}^{*}(s))\delta x_{n}+(\mu(s+\delta s)-\mu(s))\delta\mu-\delta s, \tag{4}\] via Newton's method, where \(\delta s\) is the pseudo-arclength step size and \((\delta x_{n},\delta\mu)\) is the "direction vector."
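The predictor-corrector idea behind Eq. (4) can be illustrated on a scalar toy problem; the function \(f(x,\mu)=\mu+x-x^{3}\) and the step sizes below are assumptions chosen only to produce a branch with two fold points, where continuation in \(\mu\) alone would fail but the extended system remains regular:

```python
import numpy as np

# Pseudoarclength continuation in the spirit of Eq. (4) for f(x, mu) = mu + x - x^3,
# whose solution branch folds over twice; the extended Jacobian stays nonsingular at
# the folds, so the Newton corrector steps straight through them.
def f(x, mu):   return mu + x - x ** 3
def fx(x, mu):  return 1.0 - 3.0 * x ** 2
def fmu(x, mu): return 1.0

ds = 0.05
x = -1.5
mu = x ** 3 - x                              # start on the branch, f(x, mu) = 0
dx, dmu = 0.0, 1.0                           # initial direction vector (delta x, delta mu)
branch = [(x, mu)]
for _ in range(200):
    xp, mup = x + ds * dx, mu + ds * dmu     # predictor step along the direction vector
    for _ in range(15):                      # Newton corrector on the extended system
        g = np.array([f(xp, mup),
                      (xp - x) * dx + (mup - mu) * dmu - ds])
        J = np.array([[fx(xp, mup), fmu(xp, mup)],
                      [dx, dmu]])
        xp, mup = np.array([xp, mup]) - np.linalg.solve(J, g)
    dx, dmu = (xp - x) / ds, (mup - mu) / ds  # update the direction from the secant
    norm = np.hypot(dx, dmu)
    dx, dmu = dx / norm, dmu / norm
    x, mu = xp, mup
    branch.append((x, mu))
mu_vals = np.array(branch)[:, 1]
turns = np.sum(np.diff(np.sign(np.diff(mu_vals))) != 0)
print(int(turns))                            # two folds traversed along the branch
```

AUTO implements the same idea for large discretized systems, together with the bookkeeping needed to detect bifurcations along the way.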
The key to the pseudo-arclength method is that the Jacobian for the extended system does not become singular at regular solution points, which include both saddle-node and Hopf bifurcation points. This is guaranteed by judicious selection of the direction vector. Since the extended Jacobian is nonsingular, the continuation can be performed past saddle-node and Hopf bifurcations efficiently. While only saddle-node and Hopf bifurcations emerge under one-parameter variations in generic systems, symmetric systems are capable of exhibiting other kinds of bifurcations, which correspond to branch points at which distinct solution branches intersect. Modern numerical continuation software is capable of detecting branch points where the Jacobian of the extended system has a one-dimensional null space, which includes (codimension one) pitchfork and transcritical bifurcations. Furthermore, branch switching can usually be executed by systematically selecting alternative direction vectors at branch points. This is achieved in AUTO by finding the roots of the determinant of the Jacobian for the extended system and solving an associated "algebraic bifurcation equation" for the direction vector. However, symmetric systems can also exhibit nonsimple branch points (corresponding to one-dimensional unfoldings of bifurcations with codimension greater than one). We refer to such points as symmetric branch points (SBPs) and note that the problem of numerically detecting general SBPs and branch switching is still open. Equivariant bifurcation theory guarantees that if an SBP is invariant under a symmetry transformation, there exists at least one symmetry-invariant solution branch that emerges out of it [43]. A further number of symmetry-broken solution branches can also emerge out of SBPs, depending on the normal form of the bifurcation. ### Example: Snaking in the Cubic-Quinic Swift-Hohenberg Equation The symmetry-breaking mechanisms for the formation of localized states can be characterized by studying the unfoldings of SBPs. To illustrate, we first briefly review the cubic-quintic Swift-Hohenberg equation \[\dot{u}=ru-(1+\partial^{2}/\partial x^{2})^{2}u+2u^{3}-u^{5}, \tag{5}\] which exhibits the classic snaking bifurcation diagram of localized solutions [3], reproduced here in Fig. 1. To simplify the case for the steady states of Eq. (5), it is convenient to consider the continuation for the equivalent spatial ODE obtained by setting \(\dot{u}=0\). By solving for the highest order spatial derivative \(u_{xxxx}\) and introducing auxiliary variables, we can express the problem in an equivalent first-order form, \[u_{x} =v, \tag{6}\] \[v_{x} =w,\] (7) \[w_{x} =z,\] (8) \[z_{x} =(r-1)u+2u^{3}-5u^{5}-2w. \tag{9}\] The uniform state \(u_{0}=0\) is a solution for all \(r\), and the Jacobian matrix for the system at \(u_{0}\) is \[J=\begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ r-1&0&-2&0\end{pmatrix}. \tag{10}\] The Swift-Hohenberg equation is invariant under even and odd spatial reflections \((x^{\prime},u^{\prime})=(-x,u)\) and \((x^{\prime},u^{\prime})=(-x,-u)\), respectively. Correspondingly, the system exhibits an SBP at \(r=0\), where the eigenvalues of \(J\) are \(\lambda=\pm\mathrm{i}\), each with algebraic multiplicity two (a codimension two Hamiltonian-Hopf bifurcation point [44]). As \(r\) passes through zero, this SBP unfolds as a periodic solution branch in addition to two branches of homoclinic orbits, which are invariant under the even and odd reflections. These homoclinic orbits of Eq. 
(6) correspond to localized steady states of the Swift-Hohenberg equation as they decay to zero as \(x\to\pm\infty\). The homoclinic solution branches undergo snaking bifurcations for smaller \(r\) with further secondary SBPs (at which additional ladder rung solution branches emerge which are not shown in Fig. 1). There, they periodically stabilize and correspond to localized states. ### Modifications to AUTO In the remainder of the paper, we focus on the emergence of localized limit cycles in coupled oscillator models Figure 1: Snaking bifurcation diagram for the cubic-quintic Swift-Hohenberg equation (5). Solid lines show stable steady states, dashed lines show unstable steady states, solid circles show fold bifurcation points and simple branch (transcritical and pitchfork) bifurcation points, and \(\times\) symbols show SBPs. The black branch corresponds to the homogeneous \(u=0\) solution, the green branch corresponds to periodic solutions, and the blue and orange branches correspond to localized states (homoclinic orbits in the spatial ODE). described by Eq. (1) rather than steady states. The numerical continuation of limit cycles requires small generalizations of the pseudoarclength method described above, which are implemented well in AUTO. The limit cycle is discretized using a (fourth order) collocation method, and an additional degree of freedom is introduced to allow the period \(T\) of the limit cycle to vary during the continuation. Furthermore, in addition to temporal periodic boundary conditions, a phase-fixing integral condition is introduced to resolve the invariance under time translations [45]. The Floquet multipliers for limit cycles are evaluated efficiently in AUTO through back substitution while solving a linear system involving the discretized Jacobian [46]. Local bifurcations occur when Floquet multipliers cross the unit circle in the complex plane. There is always a trivial Floquet multiplier at 1 corresponding to translations along the phase of the cycle, and AUTO can detect saddle-node, torus, period-doubling bifurcations, and simple branch points as above. The simple branch points are identified by determining the zeros the determinant of the extended Jacobian using a bracketed Mueller method when a sign change occurs, which works well in many cases. However, we find that the sign of the determinant in our models below becomes numerically unstable, giving rise to the identification of spurious branch points. On the other hand, the Floquet multipliers near the unit circle are numerically stable [46], so it is reasonable to monitor the Floquet multiplier directly. This requires small changes to the AUTO source code in order to recompute the multipliers between the steps in the Mueller method identifying zeros of user-supplied special functions, as detailed in our GitHub repository [42]. These modifications also enable us to detect additional SBPs in which two real multipliers simultaneously cross the unit circle. Such SBPs were previously undetected in AUTO since the determinant of the Jacobian does not change signs when two real eigenvalues simultaneously change signs. We mark such SBPs with \(\times\) symbols in the bifurcation diagrams below. The computations we perform in this work are expensive, so we limit each continuation to 10 limit points and to 10 branch points in the ring of Janus oscillators and 20 limit points and to 20 branch points in the coupled pendulum array (which is comparatively less expensive). 
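The Floquet monitoring described above can be mimicked outside of AUTO by integrating the variational equations over one period and examining the eigenvalues of the monodromy matrix. The sketch below does this for an assumed test system (the van der Pol oscillator) rather than for the oscillator networks studied here; the trivial multiplier sits at 1 and the remaining multiplier lies inside the unit circle for a stable cycle:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Floquet multipliers of a stable limit cycle via the monodromy matrix,
# demonstrated on the van der Pol oscillator (an assumed stand-in system).
mu = 1.0
def f(t, x):
    return np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])
def jac(x):
    return np.array([[0.0, 1.0],
                     [-2.0 * mu * x[0] * x[1] - 1.0, mu * (1.0 - x[0] ** 2)]])

# relax onto the cycle and estimate the period from upward crossings of x[0] = 0
relax = solve_ivp(f, (0.0, 200.0), [2.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(150.0, 200.0, 200001)
x0_vals = relax.sol(ts)[0]
ups = ts[1:][(x0_vals[:-1] < 0.0) & (x0_vals[1:] >= 0.0)]
T = np.mean(np.diff(ups))
x_start = relax.sol(ups[-1])

def extended(t, y):                          # state plus 2x2 fundamental matrix
    x, Phi = y[:2], y[2:].reshape(2, 2)
    return np.concatenate([f(t, x), (jac(x) @ Phi).ravel()])

y0 = np.concatenate([x_start, np.eye(2).ravel()])
out = solve_ivp(extended, (0.0, T), y0, rtol=1e-10, atol=1e-12)
monodromy = out.y[2:, -1].reshape(2, 2)
print(T, np.linalg.eigvals(monodromy))       # one multiplier ~1, the other well inside the unit circle
```

Tracking how the nontrivial multipliers move toward the unit circle as a parameter varies is exactly the bifurcation-detection task discussed above.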
We also attempt to perform branch switching at each simple branch point, and we keep track of visited bifurcation points, terminating the continuation if a point is revisited. Finally, we terminate the continuation if the period of the cycles grows too large (\(T>1500\) for the ring of Janus oscillators here). ## III Janus oscillators on a ring ### Previous observations The introduction of antiferromagnetic order to the Kuramoto model results in a parsimonious model that exhibits a surprising variety of complex dynamics known as the ring of Janus oscillators [32]. Here, we aim to study the bifurcations in the ring of Janus oscillators, defined by the equations \[\dot{\theta}_{n} =\omega_{1}+\sigma\sin(\phi_{n}-\theta_{n})+\sigma\sin(\phi_{n+1} -\theta_{n}), \tag{11}\] \[\dot{\phi}_{n} =\omega_{2}+\sigma\sin(\theta_{n}-\phi_{n})+\sigma\sin(\theta_{n- 1}-\phi_{n}), \tag{12}\] with \(n=N+m\) identified with \(n=m\), describing periodic boundary conditions. For simplicity here, we equate the internal and external coupling strengths of Ref. [32] to a single coupling strength \(\sigma\). Furthermore, by entering a rotating frame and rescaling the time and coupling constant, we can take the frequencies as \(\omega_{1}=1/2\) and \(\omega_{2}=-1/2\). Previous investigations of Eq. (11)-(12) have shown that random initial conditions relax to a plethora of attracting solutions for intermediate coupling strengths. Investigating a snapshot of the phases in these solutions [Fig. 2] reveals an orderly, synchronized portion of oscillators that coexist with a small number of asynchronous oscillators, with various amounts of phase twisting and configurations of synchronous/asynchronous groups. These attractors break the symmetry in the network of identical, identically-coupled oscillators and are thus chimera states. The asynchronous group of oscillators is not fixed, however, but travels at a constant speed around the ring, so the attractors are traveling chimera states. Many differing configurations of synchronous and asynchronous groups are observed to be attractive, and when quasistatically varying \(\sigma\), these attracting solutions continue over a range of \(\sigma\). ### Continuation equations To regularize the \(2\pi\)-discontinuity corresponding to rotations in the phase equations, we employ a complex representation \(z_{n}=e^{i\theta_{n}}\) and \(w_{n}=e^{i\phi_{n}}\) for numerical continuation. We then consider complex equations, taking the form of (1), \[\dot{z}_{n} =z_{n}\left(\frac{\mathrm{i}}{2}+\frac{\sigma}{2}\left(w_{n}z_{n} ^{*}-z_{n}w_{n}^{*}+w_{n+1}z_{n}^{*}-z_{n}w_{n+1}^{*}\right)\right)\] \[\quad+\gamma\left(1-z_{n}z_{n}^{*}\right)z_{n}, \tag{13}\] \[\dot{w}_{n} =w_{n}\left(\frac{-\mathrm{i}}{2}+\frac{\sigma}{2}\left(z_{n}w_{ n}^{*}-w_{n}z_{n}^{*}+z_{n-1}w_{n}^{*}-w_{n}z_{n-1}^{*}\right)\right)\] \[\quad+\gamma\left(1-w_{n}w_{n}^{*}\right)w_{n}, \tag{14}\] Denoting the polar coordinates as \(z_{n}=\rho_{n}\mathrm{e}^{\mathrm{i}\theta_{n}}\) and \(w_{n}=\eta_{n}\mathrm{e}^{\mathrm{i}\phi_{n}}\), a straightforward change of variables leads to the polar equations of motion for the amplitudes \(\dot{\rho}_{n}=\gamma\rho_{n}\left(1-\rho_{n}^{2}\right)\) and \(\dot{\eta}_{n}=\gamma\eta_{n}\left(1-\eta_{n}^{2}\right)\). Note that the amplitude dynamics decouples from the phases and are attracted to the fixed points \(\rho_{n}=1\) and \(\eta_{n}=1\) (we fix \(\gamma=1\) in numerics). Likewise, the phase dynamics reduce to Eqs. (11)-(12). 
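A direct simulation of Eqs. (11)-(12) reproduces the relaxation onto such attractors; the integration time and random seed below are assumptions, and which chimera configuration is reached depends on the initial condition:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct simulation of the ring of Janus oscillators, Eqs. (11)-(12),
# with N = 16, sigma = 0.3, omega_1 = 1/2, omega_2 = -1/2.
N, sigma, w1, w2 = 16, 0.3, 0.5, -0.5
rng = np.random.default_rng(7)

def rhs(t, y):
    theta, phi = y[:N], y[N:]
    phi_next = np.roll(phi, -1)              # phi_{n+1}, periodic boundary
    theta_prev = np.roll(theta, 1)           # theta_{n-1}
    dtheta = w1 + sigma * (np.sin(phi - theta) + np.sin(phi_next - theta))
    dphi = w2 + sigma * (np.sin(theta - phi) + np.sin(theta_prev - phi))
    return np.concatenate([dtheta, dphi])

y0 = rng.uniform(0.0, 2.0 * np.pi, size=2 * N)
sol = solve_ivp(rhs, (0.0, 5000.0), y0, rtol=1e-8, atol=1e-10)
theta, phi = sol.y[:N, -1], sol.y[N:, -1]
r = np.abs(np.sum(np.exp(1j * theta)) + np.sum(np.exp(1j * phi))) / (2 * N)
print(r)                                     # Kuramoto-type order parameter of the final snapshot
```

Binning many such runs by order-parameter measures is how the distinct coexisting chimera attractors can be told apart before continuation.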
The ring of Janus oscillators possesses a rich group of discrete symmetries. First, the ring is invariant under the obvious rotational symmetries \((\theta_{n}^{\prime},\phi_{n}^{\prime},t^{\prime})=\pi_{R}(\theta_{n},\phi_{n },t)\equiv(\theta_{n+1},\phi_{n+1},t)\), taking the periodic boundary conditions into account with \(\theta_{N}\equiv\theta_{0}\) and \(\phi_{N}\equiv\phi_{0}\). The ring is also invariant under the time/parity reversal \(\pi_{1}\) given by \((\theta_{n}^{\prime},\phi_{n}^{\prime},t^{\prime})=\pi_{1}(\theta_{n},\phi_{n },t)\equiv(\pi+\phi_{N-n},\theta_{N-n},-t)\). Since this map reverses the direction of time, stable solutions are mapped to unstable solutions and vice versa under \(\pi_{1}\). The ring is also invariant under the parity/sign reversal \((\theta_{n}^{\prime},\phi_{n}^{\prime},t^{\prime})=\pi_{2}(\theta_{n},\phi_{n },t)\equiv(-\phi_{N-n},-\theta_{N-n},t)\). Since the direction of time is preserved by this map, the map \(\pi_{2}\) takes stable solutions to other stable solutions and reverses the direction of the attractive traveling chimera states. Note also that the parity/sign reversal symmetry leaves the Kuramoto order parameter \[r\equiv\frac{1}{2N}\left|\sum_{n}e^{\mathrm{i}\theta_{n}}+e^{\mathrm{i}\phi_{n }}\right| \tag{15}\] invariant (so there are actually two branches of solutions corresponding to each line in the bifurcation diagrams below). Lastly, there is a second parity reversal \((\theta_{n}^{\prime},\phi_{n}^{\prime},t^{\prime})=\pi_{3}(\theta_{n},\phi_{n },t)\equiv(\theta_{N-n+1},\phi_{N-n},t)\), which exchanges the roles of the two coupling terms (this symmetry is only present when the internal and external coupling constants are identical). Composition of the various reversal symmetries leads to a total of seven reversal symmetries which, with the identity element, form the group \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}\), up to rotations. Chimera solutions that are invariant under any of these symmetries play a special role in the system, as we detail below. Since the traveling chimera state solutions have chirality, they can only be invariant under those symmetries that simultaneously reverse time and parity or those that reverse neither. There are three such symmetries, given by \(\pi_{1}\), \(\pi_{4}\equiv\pi_{1}\circ\pi_{2}\circ\pi_{3}\) with \(\pi_{4}(\theta_{n},\phi_{n},t)=(\pi-\theta_{N-n+1},-\phi_{N-n},-t)\), and \(\pi_{5}\equiv\pi_{2}\circ\pi_{3}\) with \(\pi_{5}=(-\phi_{n},-\theta_{n+1},t)\). Since the phase equations depend only on phase differences, the equations are also invariant under continuous global phase rotations \(\theta_{i}\rightarrow\theta_{i}+\psi\) and \(\phi_{i}\rightarrow\phi_{i}+\psi\). This means that limit cycle attractors and fixed point attractors will have a neutrally stable perturbation direction corresponding to the global phase rotation. For limit cycle attractors, with the additional neutral perturbation corresponding to shifting the phase of the cycle itself \(\theta_{i}\rightarrow\theta_{i}+\epsilon\theta_{i}\) and \(\phi_{i}\rightarrow\phi_{i}+\epsilon\dot{\phi}_{i}\), there will be two unit Floquet multipliers for all parameter values, rendering all points singular in the pseudo-arclength continuation method employed by AUTO (correspondingly, the total phase \(\Theta=\sum_{n}\left(\theta_{n}+\phi_{n}\right)\) is a conserved quantity). We can remove this degeneracy by moving into a reference frame that rotates at the speed of oscillator \(z_{0}\). 
Define quantities \(\tilde{z}_{n}=z_{n}/z_{0}\) and \(\tilde{w}_{n}=w_{n}/z_{0}\) (whose phases are, respectively, \(\theta_{n}-\theta_{0}\) and \(\phi_{n}-\theta_{0}\)). Then \(\dot{\tilde{z}}_{n}=\dot{z}_{n}/z_{0}-\left(z_{n}/z_{0}\right)\left(\dot{z}_{0}/ z_{0}\right)\) and \(\dot{\tilde{w}}_{n}=\dot{w}_{n}/z_{0}-\left(w_{n}/z_{0}\right)\left(\dot{z}_{0}/ z_{0}\right)\). Assuming, without loss of generality, that \(z_{0}\) is initialized with \(z_{0}=1\), the \(4N-2\) real Carte Figure 2: Traveling chimeras in the ring of Janus oscillators. The left column shows a snapshot of the state variables, and the middle and right columns show the space-time evolution of the variables. The state in (a) is the most attractive chimera, exhibiting no pitch and a single defect, while the state in (b) is a less attractive state, exhibiting a pitch and a single defect. sian coordinates \(\tilde{z}_{n}=x_{n}+iy_{n}\) and \(\tilde{w}_{n}=u_{n}+iv_{n}\) evolve according to \[\dot{x}_{n} =-y_{n}\left(\sigma(x_{n}(v_{n}+v_{n+1})-y_{n}(u_{n}+u_{n+1})-v_{0} -v_{1})\right)+\gamma(1-x_{n}^{2}-y_{n}^{2})x_{n}, \tag{16}\] \[\dot{y}_{n} =x_{n}\left(\sigma(x_{n}(v_{n}+v_{n+1})-y_{n}(u_{n}+u_{n+1})-v_{0 }-v_{1})\right)+\gamma(1-x_{n}^{2}-y_{n}^{2})y_{n},\] (17) \[\dot{u}_{n} =-v_{n}\left(-1+\sigma(u_{n}(y_{n}+y_{n-1})-v_{n}(x_{n}+x_{n-1})-v _{0}-v_{1})\right)+\gamma(1-u_{n}^{2}-v_{n}^{2})u_{n},\] (18) \[\dot{v}_{n} =u_{n}\left(-1+\sigma(u_{n}(y_{n}+y_{n-1})-v_{n}(x_{n}+x_{n-1})-v _{0}-v_{1})\right)+\gamma(1-u_{n}^{2}-v_{n}^{2})v_{n}. \tag{19}\] ### Continuation results To generate initial limit cycles, we simulate 100000 random initial conditions with \(\sigma=0.3\) and \(N=16\) for a period of 25000 time units. We numerically identify the period of any resulting limit cycles, and we evaluate the complex order parameters \(r_{\pm}\equiv\left(\sum_{n}e^{\mathrm{i}\left(\theta_{n}\pm 2\pi n/N\right)}+e^{ \mathrm{i}\left(\phi_{n}\pm 2\pi n/N\right)}\right)/2N\) in addition to the complex Kuramoto order parameter. We then evaluate the time-averaged norms and the number of \(2\pi\) windings of the order parameters over a period of the cycles, and we bin final states according to these solution measures to select the 64 most attractive chimera states in the sample. These solution measures successfully distinguish between the symmetry-related chimera states, and we observe that states come in pairs or sets of four related by the various parity-reversing symmetries. These states are used as starting points for continuation with respect to \(\sigma\) in AUTO. Figures 3(a)-(b) show the order parameter and the period for the solution branches corresponding to these initial limit cycle solutions and their subsequent branch-switching branches, with thick lines indicating stable states. The bifurcation diagram is quite complex, with many solution branches undergoing many bifurcations. #### ii.3.1 Exemplary solution branch One exemplary solution branch is shown in Fig. 3(c)-(d). For this branch, the limit cycle loses stability with decreasing \(\sigma\) and turns around at a limit point, which is an SBP. This SBP is a consequence of the time-reversal symmetries \(\pi_{4}\), which maps the stable limit cycle to an unstable twin. A \(\pi_{4}\)-invariant chimera solution emerges out of the SBP, which is neutrally stable and not attractive. The invariant chimera continues to a second SBP that occurs near \(\sigma=0.2512\) and \(r=0.0134\). 
This second SBP corresponds to an unusual period tripling of another invariant solution branch, which is a traveling wave solution with differing twists in the \(\theta_{n}\) and \(\phi_{n}\) variables. This invariant traveling wave can be continued back to \(\sigma=0\) (see Fig. 5(a) below). Such nongeneric bifurcations are possible on invariant solution branches when the Floquet eigenvectors form certain representations of the time-reversing symmetry groups. Since \(\pi_{4}\) will map a Floquet multiplier to its inverse by virtue of its time inversion, \(\pi_{4}\)-invariant Floquet subspaces can be constrained to lie on the unit circle, and are thus able to pass through the otherwise nongeneric point \(e^{2\pi\mathrm{i}/3}\) as \(\sigma\) varies, leading to period tripling. Incidentally, we also confirmed that a period-quadrupled branch can be continued from analogous SBPs where the Floquet multipliers pass through \(e^{2\pi\mathrm{i}/4}\). Returning to the initial stable limit cycles and now increasing \(\sigma\), several simple branch points occur. For each branch that emanates from the branch points, the period \(T\) of the cycles increases quickly, making further continuation difficult. To overcome this challenge, we periodically increase the number of mesh points proportional to the period. From investigating the temporal evolution of the limit cycles with very large \(T\), the cycles periodically slow significantly during their evolution, as shown in Fig. 3(e)-(f). By employing root finding starting from these slow points, we can successfully identify steady-state solutions. The sequence of steady states visited in the limit cycle corresponds to rotations \(\pi_{R}\) of each other. Thus, a heteroclinic cycle exists between the rotated steady-state solutions for sufficiently large \(\sigma\) and the limit cycles emerge via a heteroclinic global bifurcation with decreasing \(\sigma\). Continuing the steady states involved in the heteroclinic cycles back to smaller \(\sigma\), we find that many branches coalesce at another SBP at \(\sigma=0.25\), marked with the black \(\times\) in Fig. 3(a). This SBP occurs where the stable and unstable twin of synchronized steady-state solutions coincide. The synchronized solutions \(\theta_{n}=\theta\) and \(\phi_{n}=\phi\) satisfy \(\dot{\theta}-\dot{\phi}=1+4\sigma\sin(\theta-\phi)\). For \(\sigma<0.25\) the groups of oscillators do not phase lock, but exhibit a remotely synchronized limit cycle oscillation in which oscillators that are not directly coupled synchronize with each other. For \(\sigma>0.25\), on the other hand, there are stable and unstable synchronized steady-state branches, in which all oscillators are phase locked with their neighbor. The pair of steady states are mapped to each other under the time-parity reversal while the limit cycle is invariant under the time-parity reversals \(\pi_{1}\), \(\pi_{4}\) and \(\pi_{5}\). Since all the stable Floquet multipliers must map to unstable multipliers at the limit point, the remotely synchronized limit cycle is an invariant solution and is neutrally stable, with \(2N-2\) nontrivial unity Floquet multipliers. The SBP is therefore a saddle-node bifurcation on the invariant circle with codimension \(N-1\), and many unstable solution branches emerge out of it which break the rotational and time-reversal symmetries. 
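The slow-point/root-finding procedure used here to extract the steady states visited by a long-period cycle can be sketched generically. The demonstration below uses an assumed toy system with a known attracting heteroclinic cycle (a May-Leonard model) in place of the Janus ring: local minima of \(\|f(x(t))\|\) along the orbit flag passages near saddles, and a Newton-type root solve started from each slow point recovers the corresponding equilibria.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

# Slow-point detection and refinement on an assumed May-Leonard system, whose
# attracting heteroclinic cycle lingers near the saddles (1,0,0), (0,1,0), (0,0,1).
alpha, beta = 0.8, 1.5

def f(x):
    x1, x2, x3 = x
    return np.array([x1 * (1 - x1 - alpha * x2 - beta * x3),
                     x2 * (1 - beta * x1 - x2 - alpha * x3),
                     x3 * (1 - alpha * x1 - beta * x2 - x3)])

sol = solve_ivp(lambda t, x: f(x), (0.0, 300.0), [0.5, 0.3, 0.2],
                rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(150.0, 300.0, 30001)
xs = sol.sol(ts)
speed = np.linalg.norm(np.array([f(xs[:, i]) for i in range(xs.shape[1])]), axis=1)
slow = np.where((speed[1:-1] < speed[:-2]) & (speed[1:-1] < speed[2:]))[0] + 1
fixed_points = {tuple(np.round(root(f, xs[:, i]).x, 6)) for i in slow}
print(fixed_points)      # the saddle equilibria visited by the near-heteroclinic orbit
```

In the Janus ring the same recipe, applied to the slow phases of the large-period cycles, returns the rotated family of unstable steady states that the heteroclinic cycle connects.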
In addition to the synchronized solutions, twisted solutions with \(\theta_{n}=\theta+2\pi p/N\) and \(\phi_{n}=\phi+2\pi p/N\) for integers \(p\), with analogous SBPs occurring for \(\sigma>0.25\). Further "cluster-twisted" steady-state solutions also exist, as previously noted in Ref. [32], which also exhibit SBPs for \(\sigma>0.25\). Figure 3: Bifurcation diagram for the ring of Janus oscillators. (a-b) Kuramoto order parameter \(r\) from Eq. 15 (a) and period \(T\) (b) for all identified solution branches. Thick lines show stable limit cycles, dashed lines show unstable limit cycles, dash-dotted lines show neutrally stable invariant limit cycles, and thin dotted lines show unstable steady states. (c-d) Example solution branch as in (a-b), with solid circles denoting fold bifurcation points and simple branch (transcritical and pitchfork) bifurcation points, open circles denoting torus bifurcation points, and \(\times\) symbols showing SBPs. (e) Rate of change norm vs. time for the large-period limit cycle of the branch in (c-d). (f) Identified steady state (open circles) from the slow states identified in (e) (filled circles). (g) Schematic of the mechanism for the formation of stable localized limit cycles. In summary, the rotational and time-reversal symmetries in the ring of Janus oscillators lead to a new mechanism for the formation of localized states, which is schematically depicted in Fig. 3(g). A family of symmetry-related unstable fixed points emerges out of the SBP at \(\sigma=0.25\). For larger \(\sigma=\sigma_{HC}\), a heteroclinic cycle exists connecting the fixed points, out of which twin limit cycles are born in a heteroclinic bifurcation for smaller \(\sigma\). One of these limit cycles becomes stable after various torus bifurcations around \(\sigma=0.3\), and the twins annihilate each other at a subsequent SBM at \(\sigma=\sigma_{SBP2}\), resulting in a neutrally-stable invariant chimera solution. #### iii.2.2 Other solution branches While other solution branches are formed via mechanisms that are qualitatively similar to the exemplary branch, they also exhibit more complexity. For example, for the most attractive chimera states, the heteroclinic cycle visits a greater variety of steady-state solutions. This chimera state alternatingly visits distinct steady states which are related by a reflection before proceeding to visit the rotated states, as shown in Fig. 4(a). Other branches are observed to visit multiple unstable steady states that are unrelated by symmetries, as in Fig. 4(b). Figure 5 shows two of the invariant chimera states that emerged out of SBPs. The traveling wave solution following the second SBP of the exemplary solution branch is shown in Fig. 5(a), which is invariant under \(\pi_{4}\). In Fig. 5(b), the oscillators form two clusters of 8 which differ by a phase shift of \(\pi\), thus forming a cluster twisted limit cycle solution. Correspondingly, the Kuramoto order parameter is exactly zero for this invariant chimera state, and it corresponds to the orange line with \(r=0\) in Fig. 3(a). Like the remotely synchronized solution, this state is invariant under \(\pi_{1}\), \(\pi_{4}\) and \(\pi_{5}\). Several stable chimera states branch off of this cluster twisted invariant chimera state. We also highlight a second interesting traveling wave invariant chimera state in Fig. 5(c), which corresponds to the blue invariant chimera branch that is prominent below \(\sigma=0.25\) in Fig. 3(a). This solution branch is also invariant only under \(\pi_{4}\). 
## IV Parametrically driven pendulum array ### Previous observations Recent studies on heterogeneity-stabilized homogeneous states [39] and anharmonic classic time crystals [40] employed a model describing a parametrically-driven array of pendula with alternating lengths. In this model, the angles \(\theta_{n}\) and \(\phi_{n}\) of the \(n\)th long and short pendula, re Figure 4: Heteroclinic connections visited by other chimera states, as in Figs. 3(e)-(f). (a) The most attractive chimera state alternatingly visits reflections of a twisted state with a single defect. (b) Another branch of unstable chimera states alternatingly visits distinct unstable steady states. spectively, evolve according to \[M\ddot{\theta}_{n} =-\eta\dot{\theta}_{n}-M\frac{g-4\kappa\Delta+a_{d}\omega_{d}^{2} \cos(\omega_{d}t)}{1+\Delta}\sin(\theta_{n})\] \[\quad+\kappa\frac{1-\Delta}{1+\Delta}\left[\sin(\phi_{n}-\theta_ {n})+\sin(\phi_{n-1}-\theta_{n})\right], \tag{20}\] \[M\ddot{\phi}_{n} =-\eta\dot{\phi}_{n}-M\frac{g+4\kappa\Delta+a_{d}\omega_{d}^{2} \cos(\omega_{d}t)}{1-\Delta}\sin(\phi_{n})\] \[\quad+\kappa\frac{1+\Delta}{1-\Delta}\left[\sin(\theta_{n}-\phi_ {n})+\sin(\theta_{n+1}-\phi_{n})\right], \tag{21}\] where \(\Delta\) is the alternating length scale, \(a_{d}\) is the driving amplitude, and \(\omega_{d}\) is the driving frequency. We fix the damping coefficient to \(\eta=0.1\), the gravitational acceleration to \(g=1\), the mass to \(M=1\), and the coupling spring constant to \(\kappa=1\) throughout these numerical investigations. For concreteness, we assume there are a finite number of \(N\geq 1\) pairs of pendula, and we employ periodic boundary conditions with \(n=N+m\) identified with \(n=m\). Our demonstrations will take \(N=16\) pendula pairs throughout. The pendulum array exhibits relatively fewer discrete symmetries than the ring of Janus oscillators. Still, it does posses symmetries, as it is invariant under a reflection \((\theta_{n}^{\prime},\phi_{n}^{\prime})=\pi_{1}(\theta_{n},\phi_{n})\equiv( \theta_{N-n},\phi_{N-n-1})\), a second reflection \((\theta_{n}^{\prime},\phi_{n}^{\prime})=\pi_{2}(\theta_{n},\phi_{n})\equiv(- \theta_{n},-\phi_{n})\), and a translation symmetry \(\pi_{R}\) given by \((\theta_{n}^{\prime},\phi_{n}^{\prime})=\pi_{R}(\theta_{n},\phi_{n})\equiv( \theta_{n+1},\phi_{n+1})\). The traditional study of parametric instabilities begins by examining the stability of the homogeneous state \(\theta_{n}=\phi_{n}=0\). Taking advantage of the translational symmetry, we employ the Floquet wave mode ansatz \[\theta_{n} =e^{\mathrm{i}kn+st}\sum_{m}\Phi_{0m}e^{\mathrm{i}m\omega_{d}t}, \tag{22}\] \[\phi_{n} =e^{\mathrm{i}kn+st}\sum_{m}\Phi_{1m}e^{\mathrm{i}m\omega_{d}t}. \tag{23}\] Wave modes are characterized by the relationship between the wavenumber \(k\), the frequency \(\omega\), and growth rate \(\beta\), where \(s=\beta+\mathrm{i}\omega\) is the complex Floquet exponent, related to the Floquet multiplier \(\nu\) via \(\nu=e^{s}\). The linearized equations \[\sum_{im}D_{jn}^{im}\Phi_{im}=a_{d}\sum_{im}E_{jn}^{im}\Phi_{im}, \tag{24}\] Figure 5: Neutrally stable invariant chimera states identified from continuation, as in Fig. 2. (a) The traveling wave invariant chimera following the second SBP of the exemplary solution branch. (b) A cluster-twisted invariant chimera, from which several stable chimera branches branch. (c) The second traveling wave invariant chimera state that is prominent below \(\sigma=0.25\). 
determine the stability of the homogeneous state, with \[D_{jn}^{im} =L_{i}\left[M(s+\mathrm{i}\omega_{d}n)^{2}+\eta(s+\mathrm{i}\omega_{ d}n)+2\kappa\right]\delta_{j}^{i}\delta_{n}^{m}\] \[\quad-2\kappa L_{i}\left(\delta_{j+1}^{i}+\delta_{j-1}^{i}\right) \cos(k)\delta_{n}^{m}+Mg\delta_{j}^{i}\delta_{n}^{m}, \tag{25}\] \[E_{jn}^{im} =-\frac{1}{2}M\omega_{d}^{2}\left(\delta_{n+1}^{m}+\delta_{n-1}^{ m}\right)\delta_{j}^{i}. \tag{26}\] and \(L_{i}=1+(-1)^{i}\Delta\). Solving Eq. (24) as a nonlinear eigenvalue problem for \(s\) given \(a_{d}\) gives a generalized dispersion relationship for the pendulum array. For \(a_{d}=0\), the growth rate for all modes is given by \(-\eta/2\), so the homogeneous state is stable. As \(a_{d}\) increases, instabilities occur when the growth rate for some mode becomes positive, leading to pattern-forming dynamics. The local bifurcations that give rise to the initial instability govern the kind of pattern formation observed in the pendulum array. The system symmetry, and hence the dynamics in the array, dramatically differs for \(\Delta=0\) and \(\Delta>0\). When \(\Delta=0\), there is enhanced translational symmetry \((\theta_{n}^{\prime},\phi_{n}^{\prime})=(\phi_{n},\theta_{n+1})\) of half a unit cell. Correspondingly, when \(\Delta>0\), the wave modes split into two branches separated by a band gap in the wave frequency, as shown in Fig. 6. For most driving frequencies, the instability of the homogeneous state occurs as a wave mode with frequency \(\omega=\omega_{d}/2\) resonates with the driving frequency. The growth rate \(\beta\) of resonant modes quickly grows with increasing driving amplitude, giving rise to a period-doubling bifurcation when \(\beta\) becomes positive. This instability mechanism is the same as the subharmonic response observed in the classical Faraday wave systems [47]. However, when driving the system with a frequency corresponding to twice the band-gap frequencies, there are no wave modes that can easily resonate with the driving, and the band-gap opening gives rise to heterogeneity-stabilized homogeneous states [39]. When these states are perturbed by finite-amplitude disturbances, it is possible to excite a variety of oscillatory gap soliton states in which the swinging amplitude is localized, as illustrated in Fig. 7(a). For sufficiently large \(\Delta\), it was also previously noted that a response that is qualitatively different from the classical subharmonic response can occur [40]. When two wave modes with the same wavenumber have a frequency difference equal to the driving frequency, a Neimark-Sacker (or torus) bifurcation can occur instead, giving rise to a new instability mechanism termed _coresonance_. Correspondingly, we also report here a plethora of novel localized states, including harmonic states with rotating pendula and non-periodic states related to coresonance phenomena. Such states are illustrated in Fig. 7(b). ### Continuation equations We again regularize with a complex representation \(z_{n}=e^{\mathrm{i}\theta_{n}}\) and \(w_{n}=e^{\mathrm{i}\phi_{n}}\) for numerical continuation. Since the model is second order in time, we also introduce the auxiliary momenta variables \(p_{n}=(1+\Delta)\dot{\theta}_{n}\) and \(q_{n}=(1-\Delta)\dot{\phi}_{n}\) to derive first-order equations. Lastly, we introduce an auxiliary complex variable \(Z\) evolving according to the Stuart-Landau equation, which will act as the periodic drive on the pendula. 
We then consider the complex equations of motion \[\dot{z}_{n}=\mathrm{i}z_{n}p_{n}/(1+\Delta)+\gamma(1-|z_{n}|^{2})z_{n} \tag{27}\] \[M\omega_{d}\dot{p}_{n}=-\eta p_{n}-(Mg+a_{d}\omega_{d}^{2}\frac{Z+Z^{*}}{2}+4\kappa\Delta)\frac{z_{n}-z_{n}^{*}}{2\mathrm{i}}+\kappa(1-\Delta)\frac{(w_{n}+w_{n+1})z_{n}^{*}-(w_{n}^{*}+w_{n+1}^{*})z_{n}}{2\mathrm{i}} \tag{28}\] \[\dot{w}_{n}=\mathrm{i}w_{n}q_{n}/(1-\Delta)+\gamma(1-|w_{n}|^{2})w_{n} \tag{29}\] \[M\omega_{d}\dot{q}_{n}=-\eta q_{n}-(Mg+a_{d}\omega_{d}^{2}\frac{Z+Z^{*}}{2}-4\kappa\Delta)\frac{w_{n}-w_{n}^{*}}{2\mathrm{i}}+\kappa(1+\Delta)\frac{(z_{n}+z_{n-1})w_{n}^{*}-(z_{n}^{*}+z_{n-1}^{*})w_{n}}{2\mathrm{i}} \tag{30}\] \[\dot{Z}=\mathrm{i}Z+\gamma(1-|Z|^{2})Z. \tag{31}\] The driving variable \(Z\) is decoupled from the other equations and quickly relaxes to the limit-cycle attractor \(Z=e^{\mathrm{i}\tau}\), where \(\tau\equiv\omega_{d}t\) is the non-dimensional time. Thus, the terms \((Z+Z^{*})/2\) in Eqs. (28) and (30) reduce to \(\cos(\omega_{d}t)\), which acts as the parametric driving term, and the phase equations reduce to a non-dimensionalized version of the pendulum array Eqs. (20)-(21). Figure 6: Wave frequency \(\omega\) vs wavenumber \(k\) (the dispersion relation) for a homogeneous system (\(\Delta=0\)) and a heterogeneous system (\(\Delta=0.5\)) in the absence of driving (\(a_{d}=0\)). A band gap emerges in the heterogeneous case. We express Eqs. (27)-(31) in Cartesian coordinates to continue the resulting system of \(6N+2\) real equations in AUTO. Numerical continuation of invariant tori like the novel state in Fig. 7(b) is not yet implemented in AUTO and is very costly to perform. Thus, we focus in the remainder of the paper on limit cycle solutions only. ### Continuation results For simplicity, we fix \(\Delta=0.25\) and \(\omega_{d}=3.5\) (with \(\omega_{d}/2\) lying within the band gap) while continuing solutions in the driving amplitude \(a_{d}\). We begin again by identifying stable periodic orbits at \(a_{d}=0.045\) from evolving \(10000\) random initial conditions over a period of \(5000\) driving periods. We bin the final states according to the sorted and time-averaged squared angles of the pendula and identify the \(25\) most attractive initial limit cycles to continue (there are many more stable cycles than we can afford to identify). Stable (unstable) solutions are shown as thick (thin) lines in Fig. 8, where the solution norms are \(|\theta|=\left(\int\sum_{n}\theta_{n}(t)^{2}dt\right)^{1/2}\) and \(|\phi|=\left(\int\sum_{n}\phi_{n}(t)^{2}dt\right)^{1/2}\). The homogeneous state corresponds to solutions with \(|\theta|=|\phi|=0\). It has a period equal to the driving period (only the auxiliary variable \(Z\) varies over the period). This homogeneous state first becomes unstable in a period-doubling bifurcation, labeled PD1 in Fig. 8(a). All other limit cycle solutions in Fig. 8 have a period twice the driving period. The initial period-doubling bifurcation is subcritical (since \(\omega_{d}/2\) lies in the band gap) and corresponds to a spatial wavenumber \(k=\pi\), with an unstable swinging branch emerging for lower driving amplitudes (orange dotted line). Various solution branches emerge off this period-doubled \(k=\pi\) branch in secondary bifurcations. A second subcritical period-doubling bifurcation labeled PD2 in Fig.
8(a) corresponding to the wavenumber \(k=7\pi/8\) also undergoes similar bifurcations, and the following secondary solution branches are interconnected with the previous ones in a complicated tangle of solutions and bifurcations. Figure 9 shows a few example solution branches in greater detail (with extraneous unstable branches omitted for clarity). The \(k=\pi\) period-doubled branch exhibits a limit point labeled LP1 but does not become stable when it turns around here. This is because an additional branching bifurcation labeled SBP1 in Fig. 9 occurs on the solution branch before the limit point. The unstable symmetry-broken solution branch that emerges from the SBP1 becomes stable after turning around and undergoing a torus bifurcation. The color-corresponding lower panels of Fig. 9 show the solution at SBP1 (open circles) and the solution after stabilizing (closed circles). This stable solution branch corresponds to a localized state centered around a single group of swinging pendula. Since our continuations are expensive, we restrict the number of pendulum pairs to \(N=16\) here, but, in larger arrays, the state is indeed localized in the sense that swinging amplitudes decay towards zero as one moves away from the localization center. Along the \(k=\pi\) branch, the following SBP2 and SBP3 bifurcations and their corresponding stabilized solutions correspond to localized states with two localization centers with differing symmetry. The point SBP4 and the corresponding stabilized solutions emerging from the \(k=7\pi/8\) branch have similar symmetry to those corresponding to SBP2 but with differing swinging amplitudes. Further SBPs occur farther along the solution branches and correspond to solutions with a larger number of localization centers. Figure 8: Bifurcation diagram for the parametrically-driven pendulum array with \(\omega_{d}=3.5\) and \(\Delta=0.25\). Thick lines show stable limit cycles, thin dotted lines show unstable limit cycles, and the primary subharmonic period-doubling bifurcations PD1 and PD2 preceding localization are marked. Figure 7: Localized states in a parametrically-driven array of pendula with alternating lengths. (a) Two stable subharmonic localized states (which oscillate with half the driving frequency) observed from random initial conditions with \(\Delta=0.25\) and \(\omega_{d}\) close to twice the band-gap frequency. (b) Two more complex stable localized states observed for \(\Delta=0.5\), including harmonic states (oscillating once per driving period) with winding pendula and anharmonic states (with oscillation frequency incommensurate with the driving frequency). Figure 10 shows a blow-up of all stable and unstable solution branches. Our calculations were restricted to twenty branch points and twenty limit points before the continuation was terminated, but it appears that an increasingly large number of branch points would emerge with increasing resolution. These tangles strongly resemble the homoclinic tangles seen in chaotic systems such as the Smale horseshoe, and we conjecture that the full set of solution branches in the bifurcation diagram forms a fractal set. In summary, localized states emerge in the pendulum array following SBPs on the subcritical branches of period-doubled wave modes. These interconnected solution branches bear some resemblance to the snaking bifurcations in the Swift-Hohenberg equation, but they are not nearly as organized.
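The attractor-sampling step that seeds these continuations can be reproduced with a direct simulation of Eqs. (20)-(21): evolve random initial conditions past the transient, time-average the squared angles, and bin the sorted fingerprints to group equivalent states. The sketch below is a minimal version of that workflow; the SciPy integrator, the much smaller ensemble (the text uses 10000 runs over 5000 driving periods), and the rounding tolerance used for binning are illustrative choices rather than the settings used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters quoted in the text for the continuation study.
M, eta, g, kappa = 1.0, 0.1, 1.0, 1.0
Delta, omega_d, a_d = 0.25, 3.5, 0.045
N = 16  # pendulum pairs, periodic boundary conditions

def rhs(t, y):
    """Pendulum-array equations (20)-(21); y = [theta, phi, dtheta, dphi]."""
    theta, phi, dtheta, dphi = y.reshape(4, N)
    drive = a_d * omega_d**2 * np.cos(omega_d * t)
    phi_m1 = np.roll(phi, 1)       # phi_{n-1}
    theta_p1 = np.roll(theta, -1)  # theta_{n+1}
    ddtheta = (-eta * dtheta
               - M * (g - 4 * kappa * Delta + drive) / (1 + Delta) * np.sin(theta)
               + kappa * (1 - Delta) / (1 + Delta)
               * (np.sin(phi - theta) + np.sin(phi_m1 - theta))) / M
    ddphi = (-eta * dphi
             - M * (g + 4 * kappa * Delta + drive) / (1 - Delta) * np.sin(phi)
             + kappa * (1 + Delta) / (1 - Delta)
             * (np.sin(theta - phi) + np.sin(theta_p1 - phi))) / M
    return np.concatenate([dtheta, dphi, ddtheta, ddphi])

T_d = 2 * np.pi / omega_d
rng = np.random.default_rng(0)
fingerprints = {}

for run in range(20):  # illustrative: the paper evolves 10000 initial conditions
    y0 = 0.5 * rng.standard_normal(4 * N)
    t_avg = np.linspace(295 * T_d, 300 * T_d, 400)  # sample after a long transient
    sol = solve_ivp(rhs, (0.0, 300 * T_d), y0, t_eval=t_avg, rtol=1e-6, atol=1e-9)
    theta_sq = np.mean(sol.y[:N, :] ** 2, axis=1)      # time-averaged theta_n^2
    phi_sq = np.mean(sol.y[N:2 * N, :] ** 2, axis=1)   # time-averaged phi_n^2
    # sorting mods out the translation symmetry; rounding bins nearby states together
    key = tuple(np.round(np.sort(np.concatenate([theta_sq, phi_sq])), 2))
    fingerprints[key] = fingerprints.get(key, 0) + 1

# The most frequent fingerprints indicate the most attractive limit cycles,
# which would then be handed to the continuation code as starting points.
for key, count in sorted(fingerprints.items(), key=lambda kv: -kv[1])[:5]:
    print(count, key[:4], "...")
```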
Furthermore, a very large number of unstable periodic solutions branch out of the stable localized states and form a tangle of unstable solutions, resembling the homoclinic tangles in chaotic systems. Figure 10: Blow up of the tangle of unstable periodic orbits, with stable (unstable) solutions and bifurcation points marked as in Fig. 9. Figure 9: Examples of localized states emerging from SBPs in the pendulum array. The top row shows a blow-up of the bifurcation diagrams with only the pertinent branches shown, with solid circles showing fold bifurcation points and simple branch (transcritical and pitchfork) bifurcation points, open circles showing torus bifurcation points, and \(\times\) symbols showing SBPs. The lower two rows show the time-averaged swinging amplitudes for the color-corresponding SBPs (open circles) and a stable solution (filled circles) along the following stable solution branches. ## V Strange snaking bifurcations Here we attempt to rationalize the complex bifurcation structure observed in the ring of Janus oscillators and the array of coupled pendula. For illustration, we return to the pseudo-arclength continuation method for steady state solutions described in Sec. II for Eq. (1). We note that while continuation is typically implemented as an iterative algorithm for solving an extended system, we can consider the process as a dynamical system in its own right. Consider, for example, the dynamical system corresponding to the standard pseudo-arclength continuation technique \[\begin{pmatrix}J_{nm}&\partial F_{n}/\partial\mu+\varepsilon\sum_{k}\partial G_{n}/\partial\mu\\ \delta x_{m}&\delta s\end{pmatrix}\begin{pmatrix}\partial x_{m}/\partial s\\ \partial\mu/\partial s\end{pmatrix}=\begin{pmatrix}0\\ 1\end{pmatrix}. \tag{32}\] Here, the direction vector \(\begin{pmatrix}\delta x_{m}&\delta s\end{pmatrix}^{\top}\) is the normalized (right) null vector of the \(N\times\left(N+1\right)\) submatrix \(\begin{pmatrix}J_{nm}&\partial F_{n}/\partial\mu+\varepsilon\sum_{k}\partial G_{n}/\partial\mu\end{pmatrix}\), which is guaranteed to be unique up to sign at regular solution points and simple branch points [41]. The first \(N\) rows in Eq. (32) ensure that the values of \(F+\varepsilon\sum_{k}G\) do not vary along the trajectory, and the last row ensures a constant rate of (pseudo)arclength increase. In generic systems (with only regular solution points potentially exhibiting fold and Hopf bifurcations), it is always possible to invert the matrix in Eq. (32) at the fixed points, and so the system is well defined at least in a neighborhood of all the fixed points.
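For readers unfamiliar with the method, the following sketch shows the standard pseudo-arclength predictor-corrector that Eq. (32) formalizes, applied to a hypothetical scalar problem \(f(x,\mu)=\mu-x^{2}\) with a single fold (and with \(\varepsilon=0\), i.e., no symmetry-breaking term). It is only meant to make the bordered-system structure concrete; it is not the continuation code used for Eq. (1).

```python
import numpy as np

def f(x, mu):        # toy steady-state problem with a fold at (x, mu) = (0, 0); NOT Eq. (1)
    return mu - x**2

def fx(x, mu):       # Jacobian d f / d x
    return -2.0 * x

def fmu(x, mu):      # d f / d mu
    return 1.0

def continue_branch(x, mu, ds=0.05, steps=200):
    """Pseudo-arclength continuation: at each step solve
       f(x, mu) = 0  and  t_x*(x - x0) + t_mu*(mu - mu0) - ds = 0
       for (x, mu), where (t_x, t_mu) is the unit tangent at the previous point."""
    branch = [(mu, x)]
    # initial tangent: null vector of the 1x2 matrix [fx  fmu]
    t = np.array([-fmu(x, mu), fx(x, mu)])
    t /= np.linalg.norm(t)
    for _ in range(steps):
        x0, mu0 = x, mu
        x, mu = x0 + ds * t[0], mu0 + ds * t[1]          # predictor
        for _ in range(20):                              # Newton corrector on the bordered system
            F = np.array([f(x, mu),
                          t[0] * (x - x0) + t[1] * (mu - mu0) - ds])
            J = np.array([[fx(x, mu), fmu(x, mu)],
                          [t[0],      t[1]]])
            dx, dmu = np.linalg.solve(J, -F)
            x, mu = x + dx, mu + dmu
            if np.hypot(dx, dmu) < 1e-12:
                break
        # new tangent: null vector of [fx fmu], oriented along the previous tangent
        t_new = np.array([-fmu(x, mu), fx(x, mu)])
        t_new /= np.linalg.norm(t_new)
        if np.dot(t_new, t) < 0:
            t_new = -t_new
        t = t_new
        branch.append((mu, x))
    return np.array(branch)

# Starting on the upper branch, the continuation rounds the fold smoothly,
# which a naive continuation in mu alone cannot do.
branch = continue_branch(x=1.0, mu=1.0)
print(branch[:5])
```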
When viewed as a nonlinear dynamical system, it is easy to see that the trajectories defined by Eq. (32) can be quite complicated. For generic systems, this dynamical system can exhibit no fixed points, since the inverted matrix maps the nonzero right-hand side of Eq. (32) to a nonzero tangent vector, which constrains the trajectories significantly. But for symmetric systems, the extended Jacobian need not be invertible, and the dynamics can be complex, perhaps even approaching a chaotic attractor as \(s\) increases. The bifurcation diagram for Eq. (1) would then be _strange_, exhibiting snaking branches that entwine endlessly. In such a scenario, the original dynamics in Eq. (1) would necessarily possess infinitely many fixed points for some values of \(\mu\), but this can be possible even if all the attractors are strictly fixed points. We thus propose that nonattracting chaotic invariant sets (chaotic saddles) coexist with the stable limit cycle solutions in Eq. (1) in our case studies. Such chaotic sets are typically multifractal and possess a skeleton of embedded unstable periodic orbits that define their geometry [48]. Chaotic saddles undergo sudden transitions known as crises when their unstable periodic orbits interact with other, external invariant sets. Thus, we suggest that the stable periodic orbits observed in our models lose their stability by interacting with unstable periodic orbits that go on to be involved in crises with a chaotic saddle. In this case, numerical continuation starting from stable states would eventually lead to the skeleton of the chaotic saddle and could exhibit strange bifurcation diagrams as we observe. ## VI Discussion In this paper, we documented novel mechanisms for localized pattern formation in systems of coupled oscillators. In particular, we have shown that the emergence and bifurcation structure of the multitude of stable localized states herein is significantly different from the well-documented steady states in other pattern-forming systems. We produced nontrivial continuations of periodic and traveling states that required modifications to existing numerical software, highlighted connections with heteroclinic networks and nonattracting chaotic invariant sets, and provided some rationale for the strange bifurcation diagrams that we observed. We suggest that these case studies may help shine a light on the complex multistable switching dynamics mediated by chaos in other systems that have recently attracted interest [49; 50; 51; 52]. Our studies revealed several specific areas for potential improvement in numerical continuation with AUTO. First, we found that detecting simple branch points via the determinant of the extended system Jacobian is a limitation, both because of poor numerical stability and because of the inability to detect SBPs, where multiple eigenvalues or Floquet multipliers simultaneously change stability. It would be desirable to instead efficiently sort the eigenvalues and detect points where individual or groups of eigenvalues change sign, which would also help to distinguish the local structure and symmetry properties of the bifurcations. A second challenge arises in systems with time-reversal symmetries, like the PT symmetry of interest in topological matter systems. We found in the Janus ring that invariant solution branches can have several neutrally stable Floquet multipliers which are constrained to the unit circle by the symmetry. To detect bifurcations in such solutions, it would be desirable to automatically track the number of symmetry-constrained neutral multipliers as well so that their spurious sign changes can be ignored in the bifurcation detection. Preliminary efforts to implement each of these improvements are described in our GitHub repository [42], which we anticipate may aid in the study of other systems or in a follow-up investigation that provides a detailed and exhaustive study of the many parameter regimes for our Janus ring and pendulum models that were not explored here. Beyond follow-up numerical studies, what remains is a full analytical investigation that can explain our observations at some level of generality. That is, it is important to know the classes of systems where one should expect to observe certain localized patterns and understand all of the mechanisms that lead to their formation.
For example, it would be very valuable to characterize the details of the heteroclinic bifurcations [53] involved in the formation of the localized traveling chimeras in the ring of Janus oscillators. Moreover, we would like to know why some patterns travel, as in the Janus ring, while regions of localization can also remain fixed at certain indices, as in the pendulum array. We anticipate that a partial explanation for the traveling states could come from recent work on the Swift-Hohenberg equation where it was shown that breaking the variational structure of the equation leads to traveling asymmetric states [54]. Similarly, decades of analytical advancement in the understanding of localized steady-state formation can be used to inform and contrast with results on localized oscillations in systems of coupled oscillators. ###### Acknowledgements. ZGN is a Washington Research Foundation Postdoctoral Fellow. JJB is supported in part by an NSERC Discovery Grant. ## Data Availability Statement All data in this paper can be reproduced from the source code in our GitHub repository [42].
2301.13745
Stick-slip in a stack: how slip dissonance reveals aging
We perform physical and numerical experiments to study the stick-slip response of a stack of slabs in contact through dry frictional interfaces driven in quasistatic shear. The ratio between the drive's stiffness and the slab's shear stiffness controls the presence or absence of slip synchronization. A sufficiently high stiffness ratio leads to synchronization, comprising periodic slip events in which all interfaces slip simultaneously. A lower stiffness ratio leads to asynchronous slips and, experimentally, to the stick-slip amplitude becoming broadly distributed as the number of layers in the stack increases. We interpret this broadening in light of the combined effect of complex loading paths due to the asynchronous slips and creep. Consequently, the aging rate of the interfaces can be readily extracted from the stick-slip cycles, and it is found to be of the same order of magnitude as existing experimental results on a similar material. Finally, we discuss the emergence of slow slips and an increase in aging-rate variations when more slabs are added to the stack.
Samuel Poincloux, Pedro M. Reis, Tom W. J. de Geus
2023-01-31T16:26:25Z
http://arxiv.org/abs/2301.13745v3
# Stick-slip synchronization in stack of elastically coupled frictional interfaces ###### Abstract We perform physical and numerical experiments to study the stick-slip response of a stack of slabs in contact through dry frictional interfaces driven in quasistatic shear. The ratio between the drive's stiffness and the slab's shear stiffness controls the presence or absence of slip synchronization. A sufficiently high stiffness ratio leads to synchronization, comprising periodic slip events in which all interfaces slip simultaneously. A lower stiffness ratio leads to asynchronous slips and, experimentally, to the stick-slip amplitude being broadly distributed as the number of layers in the stack increases. We interpret this broadening in light of the combined effect of surface disorder, complex loading paths of the asynchronous slips, and creep. Consequently, the ageing rate can be readily extracted from the stick-slip cycle. The extracted aging rate is found to be of the same order of magnitude as existing experimental results on a similar material. Finally, we discuss the emergence of slow slips and an increase in creep-rate variations when more slabs are added to the stack. ## I Introduction Multiple frictional interfaces coupled by elasticity are ubiquitous in everyday objects including books [1; 2], textiles [3; 4; 5], and multilayer composites [6; 7]. In geology, systems comprising multiple frictional interfaces are the norm rather than an exception. For example, layered rocks such as shale can show multiple sliding interfaces under shear [8; 9]. At the larger scales relevant for terrestrial faults, earthquakes produced when slipping are usually not isolated but embedded into complex fault networks [10]. The mechanical response of such assemblies of frictional interfaces involves the coupling between the elastic deformation of the layers and the barriers to sliding of the interfaces. Predicting the onset of slipping is a long-standing problem even for a single frictional interface [11]. Physical insight and understanding of this class of problems have been driven primarily by high-precision experiments of sliding PMMA blocks whose optical transparency enabled the space-temporal tracking of the local contact area [12]. These pioneering experiments have elucidated that the onset of slip involves a rupture front that 'unzips' the interface. A correlation with strain measurements close to the interface showed that the stress field and dynamics of the front are well described by fracture mechanics with the fracture energy as the sole fitting parameter [13]. However, the mechanism underlying the nucleation of the rupture front remains elusive, primarily due to experimental limitations, for which novel protocols are being proposed [14]. From a theoretical perspective, the most common models for the onset of frictional slippage [15; 16; 17; 18] capture the phenomenology that sliding starts, and then continues in a steady state, when the shear force \(F\) balances the friction forces \(\mu N\), where \(N\) is the normal force and \(\mu\) the 'friction coefficient'. In these 'rate-and-state' models, \(\mu\) depends nonlinearly on the slip rate \(v\) and history \(\theta\): \(\mu=\mu(v,\theta)\). At intermediate values of \(v\), the friction coefficient is usually assumed to display slip weakening (\(\mu\) is a decreasing function of \(v\)) such that the interface is unstable. 
During slip nucleation, the elasticity and inertia of the bulk have a stabilizing effect [19; 20; 21; 22], such that there exists an effective flow curve \(\tilde{\mu}=\mu+G/(2c_{s})v\) (where \(G\) is the shear modulus of the material and \(c_{s}\) the shear-wave speed) whose steady-state displays a minimum at \(\mu_{c}=\tilde{\mu}(v_{c})\)[19]. Consequently, any perturbation decays and vanishes if the applied load is \(F/N<\mu_{c}\). At higher applied loads, a slip patch of linear size \(L\) becomes unstable if \(L>L_{c}\sim(F/N-\mu_{c}^{*})^{-\nu}\) (\(\mu_{c}^{*}\geq\mu_{c}\)[21]). Once the event grows to be larger than \(L_{c}\), its dynamics are well described by a sharp rupture front [19; 21; 23] that can be modeled by linear elastic fracture mechanics [13; 24], as discussed above. If the loading is performed under displacement control, the reaction force \(\mu N\) drops macroscopically only once the interface unzips fully. A direct consequence of the phenomenology described above is that the interface can display _stick-slip_ behavior when driven quasistatically (at a rate \(V\ll v_{c}\)). Under the framework of the rate-and-state models, the stick-slip amplitude thus depends on the exponent \(\nu\), \(\mu_{c}^{*}\), and the mechanism leading to the slip patch at scales \(L<L_{c}\). All of these parameters are currently under debate; e.g. [25; 26; 27; 21; 22; 28; 14]. However, one of the authors of the present study recently proposed an encompassing theory that avalanches nucleate the instability such that \(\mu_{c}^{*}=\mu_{c}\) and \(\nu=1/(1-\zeta)\). Here, \(\zeta\) is the roughness exponent resulting from a competition between microscale disorder and elasticity [22] (not to be confused with a roughness exponent extracted from height-height correlations of the interface profile). Furthermore, it is a well-known experimental fact that the initial onset to sliding is history-dependent and increases with the time that the interfaces were at rest [29; 30; 17; 31]. This behavior is associated with creep [32] and described by rate-and-state models where the variable \(\theta\) introduced above is regarded as time. Creeping of the interfaces must affect the stick-slip amplitude, but disentangling its contribution is challenging because slip events occur at a narrowly distributed interval. Beyond a single frictional interfaces when multiple frictional interfaces are present, the elasticity of the bulk may potentially lead to a non-trivial coupling. For example, elastic interactions between faults may strongly affect their slip dynamics [33]. In addition, acoustic waves transmitted through the elastic bulk may lead to remote triggering of earthquakes [34; 35], though the large temporal separation suggests a complex coupling. Predicting the mechanical response of an assembly of elastic frictional interfaces is then a formidable but important challenge. In particular, identifying the key parameters coupling the layers together and elucidating the role of the number of interfaces are open questions. Here, we report results from a combined experimental and numerical investigation on the quasistatic stick-slip response of a stack of elastic slabs in contact through frictional interfaces. Based on the ratio between the stiffness of the drive and the shear stiffness of the slab, we distinguish two regimes:'stiff' and 'compliant' driving. 
In the stiff-driving regime, our numerical results exhibit periodic slips involving all the layers, leading to narrowly distributed force drops; this regime is not accessible in our model experiments. By contrast, in the compliant-driving regime, we observe both numerically and experimentally a decoupling of the slip events along the different layers, with interfaces sliding one by one. In the experiments, we find that this loss of periodicity is accompanied by a broadening of the distribution of the stick-slip amplitudes with the number of layers. The changes in the measured distributions are interpreted in terms of a coupling between interface disorder and a broad distribution of waiting times between slips, exposing the role of creep. The remaining unsolved broadening of the distributions with the number of slabs, including an observed increasing fraction of 'slow slip', raises the question of the interaction between mechanical noise and creep. Depending on the drive's stiffness, the stack of frictional interfaces then displays drastically different responses to shear. Overall, we find that, in the stiff-driving case, the stack acts as one layer with periodic slips, while in the compliant-driving case, a rich coupling between the layers makes slips much more unpredictable. ## II Definition of the problem We assess the shear response of a model system comprising a stack of \(n\in[1,5]\) identical slabs of thickness \(h\) resting on a surface whose position is fixed. In Fig. 1, we present a schematic diagram of the system and a photograph of our experimental setup. Figure 1: (a) Schematic of our model system, shown here with \(n=4\) active (driven) layers. The color code of the layers is used throughout the figures. (b) Photograph of the corresponding experimental apparatus. Each slab, and its lowermost frictional interface, are numbered as \(i=1,2,\ldots,n\) from below. We impose homogeneous shear, with a set rate, between all the slabs by connecting each slab through identical springs of stiffness \(K\) to a lever that is driven to rotate around a fixed axis (Fig. 1a). The spring connecting to the \(i\)-th layer is attached to the lever at a distance \(ih\) from the rotation axis. For the drive, we impose the lever's top horizontal displacement \(U(t)=Vt\), where \(t\) represents time, and \(V\) is the imposed velocity, taken to be small enough for the interfaces to display stick-slip (i.e., \(V<v_{c}\) [22; 30]). Thus, our drive imposes a shear rate \(\dot{\gamma}\equiv V/H\), where \(H\) is the height of the lever, driving each spring \(i\) at a velocity \(v_{i}=ih\dot{\gamma}\). During the periods in which the interfaces are 'stuck', this drive causes a monotonically increasing shear stress at each of the interfaces. Our study seeks to address the following questions: (1) What are the relevant parameters controlling the slip synchronization of the interfaces? (2) How does the shear response evolve with an increasing number of layers \(n\)? ## III Rigid vs. compliant driving For a system with a single frictional interface, the stick-slip instability occurs only if the driving stiffness \(K\) is sufficiently low and depends on the flow properties of the interface and the applied driving rate [36; 15; 30]; a summary of the calculation for a rate-and-state model was provided in Ref. [30]. With multiple frictional interfaces, the drive also controls the degree of synchronization, as we argue later in the manuscript.
To gain insight into the effect of the drive on the shear response of a stack, we regard the driving'spring' as a parabolic potential energy imposing the mean position of each slab, such that the slab is free to build up shear. We now discuss what happens in the limit of rigid and compliant driving, defined next. _Rigidity ratio \(\Phi\)._ We define the rigidity ratio \(\Phi\equiv K/K_{s}\), where \(K\) is the rigidity of the driving spring, and \(K_{s}=AG/h\) is the shear rigidity of the slabs, with \(G\) the shear modulus, \(A\) the surface area of the frictional interface, and \(h\) the slab's height. This ratio \(\Phi\) then quantifies the relative deformation of the driving springs in comparison to the shear deformation of the slabs. Below, we investigate and discuss two limit regimes: rigid (high-\(\Phi\)) and compliant (low-\(\Phi\)) driving. In the first, significant deformations occur within the layers (and the springs are rigid), whereas, in the second, the deformations occur in the springs (and the layers are rigid). _Rigid driving (high-\(\Phi\))._ Suppose that the first interface (\(i=1\)) starts slipping. As the mean position of slab \(i=1\) is fixed, the slab can only react with a negative shear deformation. Consequently, the shear stress on the interface above, \(i=2\), is increased. This can trigger a slip at interface \(i=2\), which can cause a slip on the interface above for the same reason, which propagates to the slipping of other interfaces. The cascade results in a multi-slip event that erases all memory of the system. In this case, the multi-layered system thus acts as a system with a single interface with effective properties [37], showing periodic stick-slip cycles. _Compliant driving (low-\(\Phi\))._ The system can respond to a slip at interface \(i=1\) by advancing the mean position of slabs \(i>1\). Therefore, the stress on the interfaces \(i>2\) relaxes, making a macroscopic multi-slip event unlikely. A sequence of single-slip events is thus to be expected. With an increasing number of layers, the slip sequence of the multiple interfaces may lose its periodicity. ## III Numerical simulations ### Numerical model We implement the model system shown schematically in (Fig. 1a) into numerical simulations. The numerical model consists of \(n+1\) identical elastic layers separated by frictional interfaces. Following [26], we idealize the frictional contact problem in order to focus on the disorder in the shear response along the frictional interface. In particular, we consider a mesoscopic scale on which an effective 'block' of a finite width resists elastically to shear up to a threshold, after which it yields. The local slip then propagates until a new 'contact' is formed (i.e., it is again elastic but with a new threshold). In this framework, each block represents a frictional contact (or a patch of contacts that are so strongly coupled by elasticity that they act as an effective contact) that, upon yielding, forms a new contact with a new yielding threshold. The details of the numerical model are as follows: each frictional interface consists of \(n_{x}\) equal-sized square blocks of linear size \(l_{0}\) that are completely elastic under volumetric deformation but yield under shear (deviatoric deformation) when a set yield stress is reached. 
Assuming that the yield threshold is isotropic in principal deviatoric strain space, this model now corresponds to a deviatoric potential energy that consists of a sequence of parabolic potentials in equivalent deviatoric strain space. The disorder arises from independently randomly drawing the yield strain sequence of each block. We assume that the blocks and the bulk have the same elastic moduli. Finally, differently from [26], we add a parabolic potential (with curvature \(K\)) to the mean position of each of the elastic slabs \(i>0\), thereby prescribing a homogeneous force density. The bottom layer is not driven through its mean position; instead, the position of the bottom edge is fixed. A key feature of the model is that shear can be applied according to the quasistatic protocol. In particular, because the elastic response is linear, we rotate the lever by a finite amount if no microscopic yielding takes place, while rotating it infinitesimally to trigger a microscopic event, after which we minimize energy before loading again. Thus, we run an event-driven protocol, allowing us to separate events. Geometrically, we do not seek to precisely model Fig. 1a, as its numerical treatment, together with the disorder, requires an intractably large number of blocks. Therefore, we consider periodic boundary conditions in the horizontal direction. Furthermore, we choose \(n_{x}=2\times 3^{6}\), which is still tractable to simulate, but of the minimal order not to be dominated by finite size effects, as we checked for a single frictional interface [22; 26]. Furthermore, we take \(h/\ell_{0}\approx n_{x}/4\), balancing two requirements: \(h/(\ell_{0}n_{x})\) must be small enough to retain acoustic interactions, yet \(h\) must not be so small (\(h\ll n_{x}\ell_{0}\)) that the blocks are effectively driven at fixed displacement, which would suppress collective effects (e.g. [38]). The above model predicts stick-slip behavior [26] when full inertial dynamics are considered (using overdamped dynamics, this model predicts the abundantly studied depinning transition [39]). We consider such inertial dynamics by applying the finite element method to discretize space. Along the frictional interface(s), elements coincide with the mesoscopic blocks. In the elastic slabs away from the frictional interface, the elements are coarsened to gain numerical efficiency (such that the height \(h\) is only approximated as we fix the aspect ratio of elements to one, see SI, "Numerical model"). We use the velocity Verlet algorithm to integrate discrete time (with a time step significantly smaller than the period of a single oscillation in a well in one block). We remark that assuming periodicity requires us to add a small damping term to the inertial dynamics such that waves with a wavelength equal to the horizontal size of the system (equal to \(n_{x}\ell_{0}\)) are critically damped. Consequently, we must take \(h/\ell_{0}<n_{x}\) to have acoustic coupling between the interfaces. ### Numerical results Our numerical model allows us to first illustrate the simple argument above on the role of driving. We consider a driving rigidity such that \(\Phi\simeq 10^{-3}\) (rigid driving) and \(\Phi\simeq 10^{-6}\) (compliant driving). In the two-dimensional model, \(A=n_{x}\ell_{0}\), such that \(K_{s}=4G\) for our geometry; we use \(K=10^{-3}\) and \(K=10^{-6}\) and \(G=1/2\). In Fig. 2a and Fig. 2b, for rigid and compliant driving, respectively, we plot a typical macroscopic stress \(\Sigma\) (volume-averaged stress) as a function of applied lever rotation \(\gamma\).
Figure 2: Numerical results: (a, b) Typical steady-state global stress \(\Sigma\) response as a function of lever rotation \(\gamma\) for \(n=3\) for (a) rigid driving and (b) compliant driving. We indicate all microscopic yielding events with a black dot marker. Slip events on a single layer are indicated in color (see legend), while slip events in black involve more than one interface. (c) The fraction \(\rho(s)\) of macroscopic slip events involving \(s=1,\dots,n\) layers, for rigid (dashed) and compliant (solid) driving; see legend in (d) for color-map and markers. (d) Distribution of stress drops at the slipping interface for different \(n\) (for slip events on a single layer, for which \(s=1\) in (c)). See the main text for definitions and units. Note that the stress is shown in units of the typical yield stress of one block, and the rotation in units of the rotation needed to yield a typical block at \(i=1\). Macroscopic slip events are defined when all blocks along one or more layers yield at least once. Below, we will refer to sliding interfaces and associated quantities by an index \(a\), while \(i\) will be kept as the running index for the layers. Slip events correspond to macroscopic stress drops in Figs. 2a and 2b, and we distinguish between 'single-slip' events (all blocks on a single layer yield at least once) and 'multi-slip' events (all blocks on more than one layer yield at least once). Stress drops produced by single-slip events are labeled following the layer color code introduced in Fig. 1, while multi-slip events are kept black. These slip events are separated by 'stick' intervals during which only microscopic events are observed, where one or several blocks yield at least once, as indicated with markers (black dots). The results confirm that rigid driving causes a periodic stick-slip sequence with slip events corresponding to multi-slip events (Fig. 2a), while compliant driving results in a seemingly less periodic sequence of single-slip events in Fig. 2b. Although not reported in Fig. 2a, for \(\Phi\simeq 10^{-3}\), we also observed periodic sequences of single-slip events for a finite fraction of the loading history. This finding is supported by plotting the fraction of slip events involving \(s=1,\dots,n\) interfaces in Fig. 2c for different \(n\) (see legend in Fig. 2d). On the one hand, rigid driving results in single- and multi-slip events for a comparable fraction of loading history (we discuss in the SI "Slip sequences - numerics" that sequences of single and multi-slip events alternate). On the other hand, compliant driving shows single slip in the large majority of slip events. A direct measurement of the stress drop along the slipping interface \(a\), \(\Delta\mu_{a}\), displays no \(n\) dependence in Fig. 2d. The quantity \(\mu_{a}\) is defined as the volume average stress on the blocks corresponding to weak layer \(a\), also shown in units of the typical yield stress of one block. Given that, by construction, normal stress plays no role in our model, here \(\mu_{a}\) is akin to a friction coefficient. Having shown that, for high-\(\Phi\) (rigid driving), multilayer stick-slip is apparently similar to that of a single interface, we next concentrate on the low-\(\Phi\) regime to explore a potential influence of the number of plates \(n\). ## III Experiments We proceed by proposing an experimental realization of the sheared-multilayer model system of Fig. 1a, adapted to measure the effect of the number of sliding layers \(n\) on the slip synchronicity and amplitude; see Fig. 1b. Similarly to the numerical model, the position of each slab is driven by connecting it to the driving lever through linear springs (see schematic in Fig. 1a). Naturally, connecting the spring to the edges of the slabs might introduce boundary effects, but our experimental system is effectively much larger than our numerical model (given that it presumably has many more local contact patches). ### Experimental apparatus The experimental setup shown in Fig. 1b comprises a stack of frictional plates (color-coded from purple to orange), an actuating lever (green), and driving springs (pink), as detailed below. The stack is made of a set of rectangular PMMA slabs (Snow WH10 DC by Rohm), each of dimensions \(h=10\,\mathrm{mm}\), \(L=150\,\mathrm{mm}\) and an out-of-plane width of \(80\,\mathrm{mm}\). A normal force \(N\) is applied on the topmost slab by a dead weight of \(5\,\mathrm{kg}\) (\(N=49\,\mathrm{N}\)). To ensure a spatially homogeneous contacting surface at this relatively low normal force (compared to other PMMA-PMMA friction experiments [13; 14; 40]), we use acrylic plates whose surface was pre-roughened with asperities of size \(\sim 25\,\mathrm{\mu m}\) that are larger than potential natural height variations of PMMA. We assume that the normal force is uniformly distributed and that it is the same for each layer (the weight of each slab is less than \(3\%\) of that of the dead weight). The stack is sheared by imposing the displacement at the top of the lever (\(H=100\,\mathrm{mm}\)) at a constant speed \(V=10\,\mathrm{\mu m}\)/s (i.e. \(\dot{\gamma}=10^{-4}\,\mathrm{s}^{-1}\)), using a DC linear actuator (L-220.70DG, Physiks Instruments) that is attached via a steel link assumed rigid. The PMMA lever is sufficiently wide not to bend while pulling the slabs and rotates smoothly on ball bearings around its rotation axis. The springs connecting the slabs to the lever are curved beams laser-cut from PMMA (colored in pink in Fig. 1b), with an equivalent stiffness of \(K=55\,\mathrm{N/mm}\) when pulled or compressed along the horizontal axis. The ends of the springs are attached to both the slabs and lever via ball bearings to ensure a free rotation and, thus, horizontal driving forces. The experimental set-up, with its springs and PMMA slabs, corresponds to the compliant driving limit with \(\Phi\simeq 6\times 10^{-5}\), of the same order as for the compliant regime in the numerics. Indeed, \(\Phi\equiv K/K_{s}\), with \(K_{s}=AG/h\) the shear stiffness of the slabs and \(G\equiv E/(2(1+\nu))\), with, for PMMA, Young's modulus \(E=2\,\mathrm{GPa}\) and Poisson's ratio \(\nu=0.3\). The total horizontal force \(F\) needed to rotate the lever (Fig. 1) is measured using a uniaxial force sensor (LRM200 25 lb, Futek) placed between the steel link and the actuator. The link is also attached via small ball bearings to remain horizontal at all times. We directly verify that we are imposing the expected kinematics by measuring the position of the lever and base slab. In addition to the global force measurement \(F\), we also measure the local average horizontal position of the slabs \(x_{i}\), by tracking a red marker placed on their side (Fig. 1b), from photographs taken at a rate of \(5\,\mathrm{fps}\) using a digital camera (Flea3 FL3-U3-20E4C, Flir, linear pixel size: \(70\,\mathrm{\mu m}\)). After color filtering, \(x_{i}\) is measured with an accuracy of \(5\,\mathrm{\mu m}\).
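The rigidity ratio quoted above can be checked directly from the listed material and geometric parameters; the short sketch below only restates values given in the text.

```python
# Rigidity ratio Phi = K / K_s with K_s = A*G/h, using the experimental values quoted in the text.
E = 2.0e9                      # Young's modulus of PMMA [Pa]
nu = 0.3                       # Poisson's ratio
G = E / (2 * (1 + nu))         # shear modulus [Pa]

L, w, h = 0.150, 0.080, 0.010  # slab length, out-of-plane width, thickness [m]
A = L * w                      # frictional contact area [m^2]
K_s = A * G / h                # shear stiffness of one slab [N/m]

K = 55e3                       # driving-spring stiffness, 55 N/mm expressed in [N/m]
Phi = K / K_s
print(f"K_s = {K_s:.2e} N/m, Phi = {Phi:.1e}")  # Phi ~ 6e-5, i.e. compliant driving
```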
The relative displacement between slabs is \(R_{i}\equiv x_{i}-x_{i-1}\) (see Fig. 1a), which serves as a proxy for the total slip at the interface \(i\) (neglecting the shear deformation of the slabs). To vary the number of sliding interfaces \(n\), we keep the same number of slabs (5) but remove \(5-n\) springs, starting from the top (see Fig. 1b where \(n=4\)). This procedure ensures robust image detection and reduces external contamination of the interfaces by keeping them in contact. Each time the slabs are disassembled to vary \(n\), the interfaces are cleaned with isopropanol and quickly dried using compressed air. For each value of \(n\), we perform 10 runs during which we drive over a range \(\Delta\gamma=0.6\,\mathrm{rad}\), starting at \(\gamma=-0.30\,\mathrm{rad}\), each time excluding \(\gamma\) between \(-0.3\) and \(-0.27\) (\(300\,\mathrm{s}\)) to ensure measuring in a steady state. After each run, the lever is reset back to \(\gamma=-0.30\). On average, each connected layer is forced to move by a total relative distance of \(\Delta R=h\Delta\gamma=6\,\mathrm{mm}\) during a run. Further details on the apparatus and the experiments are given in the SI ("Experimental set-up"). ### Experimental measurements In Fig. 3a, for the slab system with \(n=4\), we present a typical time series extract of the force \(F(t)\) required to actuate the lever (top left plot), together with the corresponding relative position of the slabs \(R_{i}(t)\) (bottom left plot). The experiments exhibit stick-slip, with stick periods when the slabs are immobile (\(R_{i}\approx\) constant) and \(F\) increases monotonically, punctuated by macroscopic slip events. These slip events are identified by a sudden position jump, \(\Delta R_{a}\) (with \(a\) denoting the sliding interface), accompanied by an abrupt force drop \(\Delta F>0\), cf. Fig. 3a. On all occasions, we find that only one layer slips at a time, recovering dynamics similar to those of the numerical model in the compliant-driving regime with a similar value of \(\Phi\) (Fig. 2b). However, we note that during the stick periods, we observe what seem to be 'slow slip' events where an interface moves gradually, leading to a non-linear force response. These are outside our primary focus but are discussed at the end of the section. For each value of \(n\), we acquire an ensemble of at least 100 slip events per layer, such that the associated slip quantities can be represented as probability distributions. For example, in Fig. 3b, we show the probability distribution of force drops, \(P(\Delta F)\), occurring on the interface \(a=1\), for all cases of \(n\) considered. Starting from the peaked distribution for \(n=1\), as \(n\) increases, the distributions broaden and take higher average values. In contrast with more classic stick-slip experiments with a single interface [29], the global measure \(\Delta F\) is not a direct quantification of the frictional properties of the interface but is coupled to the specific kinematics of the lever. Still, the fact that only one interface slips allows us to extract a jump in a friction-like quantity \(\Delta\mu_{a}\) (or stick-slip amplitude) from \(\Delta F\). We define the friction coefficient \(\mu_{a}\) of a slipping interface \(a\) as the horizontal force acting on this interface divided by the normal force.
Considering the horizontal force balance on an interface \(a\), the interface has to resist the combined forces of the pulling springs of the slabs \(i\geq a\), such that \[\mu_{a}=\sum_{i=a}^{n}\frac{f_{i}}{N}, \tag{1}\] where \(f_{i}\) is the force due to the driving spring on slab \(i\) (see Fig. 1a for a visual representation of \(\mu_{a}\) and \(f_{i}\)). When the interface \(a\) slides by \(\Delta R_{a}\), the relative positions of the other interfaces remain unchanged: \[\Delta R_{i}=\begin{cases}\Delta R_{a}>0&\text{if}\quad i=a\\ 0&\text{if}\quad i\neq a\end{cases}. \tag{2}\] This sliding induces a drop in the spring forces: \(\Delta f_{i}=0\) for \(i<a\), and \(\Delta f_{i}=K\Delta R_{a}>0\) for \(i\geq a\). Indeed, even if no slip occurs for the interfaces \(i>a\), the absolute position of the slabs still moves by \(\Delta R_{a}\), reducing the extension of the corresponding springs by the same amount. Note that for consistency, we define \(\Delta f_{i}\) to be positive. From Eq. (1), we can then express \(\Delta\mu_{a}\) as a function of the slip distance \(\Delta R_{a}\): \[\Delta\mu_{a}=\frac{K}{N}(n-a+1)\Delta R_{a}. \tag{3}\] We proceed by linking \(\Delta R_{a}\) to the global force drop \(\Delta F\), using the fact that only one interface slips at a time. Through moment balance on the lever, we obtain: \[F=\sum_{i=1}^{n}f_{i}\frac{ih}{H}. \tag{4}\] Combining Eqs. (2) and (4), we obtain a relation between the global quantity \(\Delta F\) and the local one \(\Delta R_{a}\): \[\Delta F=\sum_{i=a}^{n}K\Delta R_{a}\frac{ih}{H}=K\Delta R_{a}\frac{h}{H}\frac{(n+a)(n-a+1)}{2}. \tag{5}\] (Note that \(i\) is the only varying term in the sum and \(\sum_{i=a}^{n}i=(n(n+1)-a(a-1))/2=(n+a)(n-a+1)/2\).) This result is verified in Fig. 3c. Indeed, a direct measurement of \(\Delta R_{a}\) is very close to the inversion of Eq. (5), in which \(\Delta R_{a}\) follows from the measured \(\Delta F\) without any fitting parameter. Finally, we combine Eqs. (3) and (5) to obtain the sought relation between \(\Delta\mu_{a}\) and \(\Delta F\): \[\Delta\mu_{a}=\frac{H}{h}\frac{2}{(n+a)}\frac{\Delta F}{N}. \tag{6}\] We have thereby disentangled the friction properties of the interface from the kinematics of the lever. Using Eq. (6), we can now obtain a measure of the stick-slip amplitude of the interfaces \(\Delta\mu_{a}\), extracted directly from the global force drop \(\Delta F\). The measurement of \(\Delta R_{i}\) is used only to identify the slipping interface \(a\).
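As a concrete illustration of this inversion, the short sketch below converts a measured global force drop into the corresponding slip distance and stick-slip amplitude via Eqs. (5) and (6). The geometric and stiffness values are those quoted for the experiment (\(H=100\,\mathrm{mm}\), \(h=10\,\mathrm{mm}\), \(K=55\,\mathrm{N/mm}\), \(N=49\,\mathrm{N}\)); the example force drop and interface index are hypothetical.

```python
# Invert Eqs. (5) and (6): from a global force drop dF and the slipping interface a,
# recover the slip distance dR_a and the stick-slip amplitude dmu_a.
H = 0.100        # lever height [m]
h = 0.010        # slab thickness [m]
K = 55e3         # driving-spring stiffness [N/m]
N_force = 49.0   # normal force [N]

def slip_distance(dF, a, n):
    """Eq. (5) inverted: dR_a = dF * (H/h) * 2 / (K * (n + a) * (n - a + 1))."""
    return dF * (H / h) * 2.0 / (K * (n + a) * (n - a + 1))

def stick_slip_amplitude(dF, a, n):
    """Eq. (6): dmu_a = (H/h) * 2/(n + a) * dF / N."""
    return (H / h) * 2.0 / (n + a) * dF / N_force

# Hypothetical example: a 1 N force drop on interface a = 2 in a stack with n = 4 driven layers.
dF = 1.0
print(slip_distance(dF, a=2, n=4), stick_slip_amplitude(dF, a=2, n=4))
```

In practice, the index \(a\) would first be read off from the \(R_{i}\) traces before applying the conversion.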
_The central experimental result._ Next, we assess the effect of having multiple sheared interfaces on their frictional properties. Fig. 4 shows the probability distributions \(P(\Delta\mu_{a})\) associated with the different sliding interfaces \(a\) (different panels) and the increasing number of total active interfaces \(n\) (different colors). Each interface is compared to its response when sliding individually (\(n=1\) in black, see SI "Individual sliding" for experimental protocol). For all the interfaces, the stack of slabs exhibits significantly enriched statistics when compared to a single sliding layer (\(n=1\)). Figure 3: (a) Top plot: extract of a time series of the macroscopic force \(F(t)\) for a system of \(n=4\) frictional interfaces. The color of the force drops \(\Delta F\) follows the color code in Fig. 1 and indicates the index \(a\) of the slipping interface. Bottom plot: corresponding relative displacement (total slip) \(R_{i}\) of each interface \(i\). Each slip event is characterized by \(\Delta R_{a}\). We denote by \(T_{a}\) the time between subsequent slip events on the same interface. Note that we show \(i=5\) only for completeness; by definition, \(R_{5}=0\) if \(n=4\). (b) Probability distribution function \(P(\Delta F)\) for slip at interface \(a=1\), and increasing number of layers \(n\). (c) Comparison between a direct measurement of \(\Delta R_{a}\) and the computed \(\Delta R_{a}(\Delta F)\), obtained through Eq. (5), for each detected slip event. For stacks with increasing \(n\), the location of the peak of \(P(\Delta\mu_{a})\) shifts to lower values of \(\Delta\mu_{a}\). Moreover, the respective distributions become broader and secondary peaks emerge. These experimental results contrast with the numerical predictions reported above (Fig. 2d), where \(\Delta\mu_{a}\) was independent of \(n\). ### Interpretation We seek to interpret the above experimental findings evidencing a variation of the frictional properties with \(n\) (as more layers are added to the stack, Fig. 4), whereas they are independent of \(n\) in the numerics (Fig. 2d). First, we will attribute the finite width of the peaks in the \(P(\Delta\mu_{a})\) distributions to disorder (i.e., statistical fluctuations of the contacting interfaces). Then, we will argue that the shift of the main peaks and emergence of the secondary peaks in \(P(\Delta\mu_{a})\) for \(n>1\) are related to the presence of creep in the system. Finally, we speculate on the increase of an effective temperature with \(n\) and suggest a rationalization of the appearance of slow slip. _Statistical fluctuations._ Even when sliding individually (\(n=1\)), the frictional properties of the interfaces are distributed: \(P(\Delta\mu_{a})\) has a finite width, see black curves with circles in Fig. 4. These underlying statistical fluctuations, also present in the numerical model (Fig. 2d), are considered to be related to the disorder of the contacting interfaces. The rough interface induces a broad distribution of barriers, such that there are collective events whose sizes are non-trivially distributed. These collective events nucleate the macroscopic slip [22, 26], such that the stress at which slip is nucleated is distributed. Let us now verify experimentally that, in the case of individually sliding layers (\(n=1\)), the measured fluctuations of \(\Delta\mu_{a}\) correspond to distinct slipping loads. In the individual configuration, the spring drives the layer at a constant rate \(\dot{f}_{a}=Kh\dot{\gamma}\) (\(\dot{\gamma}\) is adapted to account for the difference in height, see SI "Individual sliding"). The shear applied to the interface then grows at a rate \(\dot{\mu}_{a}=\dot{f}_{a}/N=Kh\dot{\gamma}/N\). As such, we expect the stick-slip amplitude to be proportional to the time between slips \(T_{a}\), following \(\Delta\mu_{a}=T_{a}Kh\dot{\gamma}/N\). This expectation is consistent with our data in Fig. 5a, thus confirming that statistical fluctuations broaden \(P(\Delta\mu_{a})\). However, these fluctuations do not account for the shifts of the peaks and the appearance of secondary peaks in the \(P(\Delta\mu_{a})\) distributions. In our stack of slabs (\(n>1\)), we anticipate that creep plays that role: with increasing \(n\), (I) interfaces experience a complex loading path such that (II) the creep of the frictional interface becomes significant. _(I) Complex loading path._ First, without any events on other interfaces, the loading rate at a given interface increases with \(n\). In particular, using Eq.
(1) and \(\dot{f}_{i}=Kih\dot{\gamma}\) while no interfaces are sliding, the interface \(a\) undergoes a loading rate of \(\dot{\mu}_{a}=K\dot{\gamma}h(n+a)(n-a+1)/(2N)\), which is an increasing function of \(n\). With this increased loading rate with \(n\), we thus expect the time between slips to typically decrease with \(n\) following \(T_{a}^{-1}\sim(n+a)(n-a+1)\). Second, for a given interface, a sliding event can occur on a layer below it. In that case, all the layers above the sliding one will undergo the same position shift, leading to a relaxation of their corresponding springs. Then, even without actual slip on this given interface, slips occurring below will induce drops in the shear force of this layer. In Fig. 5b, we plot the probability distribution function of \(T_{a}\) for \(a=1\) and increasing \(n\). Indeed, we find that the peaked distribution for \(n=1\) shifts to lower values of \(T_{a}\) with increasing \(n\) as the overall loading rate increases. Moreover, secondary peaks in \(P(T_{a})\) start to appear, which we interpret to be due to slip events on the other layers. These observations are robust for the other interfaces. The change in loading rate with \(n\), together with the complex loading path, allows our experimental system to probe a broad distribution of \(T_{a}\) on all its interfaces. _(II) Creep._ It is a known experimental fact that the macroscopic stress required for the onset of sliding, characterized by \(\mu_{s}\) (the 'static friction coefficient' in Amontons-Coulomb's terminology [11, 30]), depends on the duration \(T\) that the interface was static: \(\mu_{s}=B\ln T\) [17, 29, 30, 31], where the aging rate of the interface \(B\) is a constitutive parameter. Let us now consider \(\Delta\mu_{a}\) as a proxy for \(\mu_{s}\), assuming that a slip event unloads the interface to a well-defined and constant quantity (\(\mu_{d}\), the 'dynamic friction coefficient' in Amontons-Coulomb's terminology [11, 30]), as is supported by [29]. We expect to find, for each sliding interface \(a\) and over a wide range of \(T_{a}\), that the stick-slip amplitude follows the creep trend: \(\Delta\mu_{a}=B\ln T_{a}\). Figure 4: Probability distribution functions of the stick-slip amplitude \(P(\Delta\mu_{a})\) as a function of \(n\) for the different interfaces: (a) \(a=1\), (b) \(a=2\), (c) \(a=3\) and (d) \(a=4\). In Fig. 5c, we assess this expectation experimentally by plotting \(\Delta\mu_{a}\) versus \(T_{a}\) on a semi-logarithmic scale. To capture the general trend, we bin \(T_{a}\) logarithmically, corresponding to the black markers with corresponding error bars in Fig. 5c. These averaged values of \(\Delta\mu_{a}\) indeed follow a linear trend in the semi-log plot, and we extract the slope \(B=0.053\pm 0.005\), which is of the same order of magnitude as measured in classical stop-and-go experiments, which report values of order \(10^{-2}\) [30], and as a direct surface observation on PMMA at room temperature that reports \(B=0.009\pm 0.001\) [40]. Creep then translates the large peak shifts and the emergence of new ones in the \(T_{a}\) distributions (Fig. 5b) into similar changes in the \(\Delta\mu_{a}\) distributions. The large fluctuations of \(\Delta\mu_{a}\) in Fig. 5c are then the combination of two effects: creep drives the long-time trend, while disorder dominates in the narrow time range.
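The aging-rate fit described above can be reproduced with a few lines of code: bin the waiting times logarithmically, average the stick-slip amplitude per bin, and fit \(\Delta\mu_{a}=B\ln T_{a}\). The sketch below uses synthetic placeholder data (a logarithmic trend with Gaussian scatter) in place of the experimental \((T_{a},\Delta\mu_{a})\) pairs, so the recovered slope only illustrates the procedure.

```python
import numpy as np

def aging_rate(T, dmu, n_bins=10):
    """Logarithmic binning of waiting times followed by a linear fit
    of the bin-averaged stick-slip amplitude against ln(T)."""
    edges = np.logspace(np.log10(T.min()), np.log10(T.max()), n_bins + 1)
    idx = np.clip(np.digitize(T, edges) - 1, 0, n_bins - 1)
    lnT_bin, dmu_bin = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            lnT_bin.append(np.mean(np.log(T[mask])))
            dmu_bin.append(np.mean(dmu[mask]))
    B, intercept = np.polyfit(lnT_bin, dmu_bin, 1)
    return B, intercept

# Placeholder data: waiting times spanning ~2 decades with a logarithmic trend
# (slope 0.05) plus disorder-induced scatter; stands in for the measurements.
rng = np.random.default_rng(1)
T = 10 ** rng.uniform(0.5, 2.5, size=500)           # waiting times [s]
dmu = 0.05 * np.log(T) + 0.02 * rng.standard_normal(500)
print(aging_rate(T, dmu))  # slope ~ 0.05, comparable in magnitude to the fitted B = 0.053
```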
In Fig. 5d, we schematically represent the coupled role of (I), (II), and disorder, following the evolution of the interfacial stress with time, characterized by \(\mu_{a}\), starting from the last slip event. The layer slips when it reaches \(\mu_{a}=\mu_{s}(T_{a})\), whereby \(\mu_{s}\) is distributed in some way for fixed \(T_{a}\) because of disorder (illustrated as a red-shaded area, where for simplicity, we lump all fluctuations in the threshold to sliding) and increases logarithmically with time because of creep. For \(n=1\) (black line), \(\mu_{a}\) increases linearly at the same rate for all the events, thus exploring only a narrow region of \(T_{a}\) (the shaded red region due to disorder). In the case of multiple active interfaces (\(n>1\), green lines), \(\mu_{a}\) increases faster given that \(\dot{\mu}_{a}\) is an increasing function of \(n\); see point (I) above. In some cases, \(\mu_{a}\) directly reaches \(\mu_{s}\), resulting in a lower value of \(T_{a}\) and \(\Delta\mu_{a}\). However, if sliding events occur in one of the underneath layers, \(\mu_{a}\) will drop before linearly increasing again, delaying slip and thus increasing \(T_{a}\), and consequently \(\Delta\mu_{a}\), because of creep. _Effective temperature._ During stick intervals, microscopic events occur on the interfaces, propagating elastic waves across the system [26]. As we increase the number of interfaces in the system, we can expect that the overall mechanical noise created by the microscopic events also increases. If we speculatively interpret this mechanical noise as an effective temperature, we would expect a change of aging rate \(B\) with \(n\). Let us define the aging rate for a single event as \(B_{a}\equiv\Delta\mu_{a}/\ln T_{a}\). Although the mean of \(P(B_{a})\) does not change with \(n\) (see SI "Aging rate distributions"), we do find that the width of the distribution \(P(B_{a})\) is an increasing function of \(n\) mainly for the lowermost interfaces (\(i\leq 2\)), as shown in Fig. 6a. For the same interfaces (\(i\leq 2\)), we also observe distinctly different slip dynamics when \(n>1\). In particular, as \(n\) increases, we find that interfaces \(i=1\) and \(i=2\) are increasingly more subject to slow slip, defined as sliding significantly slower than the slip events (see SI "Smooth sliding" for a precise definition). These events are not accompanied by a macroscopic stress drop but rather just lead to a lower \(\dot{F}>0\), see Fig. 3a. In Fig. 6b, we measure the proportion of slow slip compared to the total sliding distance in an experiment, \(R_{\text{slow}}\). It is computed by comparing the total sliding distance to the total slip accumulated during individual slip events (\(\Delta R_{a}\)), such that \(R_{\text{slow}}\equiv 1-\sum\Delta R_{a}/\Delta R\). Once a slow slip event starts, it appears to be stopped only by slip events occurring either on the same interface or on any other interface. Figure 5: (a) For an interface sliding individually (\(n=1\)), stick-slip amplitude \(\Delta\mu_{a}\) as a function of the waiting time since the last slip event \(T_{a}\). The black line corresponds to the prediction that the stress of the interface increases at a constant rate in between slip events. For multiple sliding interfaces (\(n\geq 1\)): (b) Probability distribution of the waiting time \(T_{a}\) between two consecutive slip events at interface \(a=1\), for increasing \(n\).
(c) For each detected slip event, correlation between \(\Delta\mu_{a}\) and its corresponding waiting time \(T_{a}\) (semi-logarithmic scale). The black markers correspond to the mean values for a logarithmic binning of \(T_{a}\) (error bars indicate the standard deviation for that bin), and the dotted line a linear fit of \(\Delta\mu_{a}=B\ln T_{a}\), with \(B=0.053\pm 0.005\). (d) Schematic of the proposed mechanism leading to multimodal and wider distributions of \(T_{a}\) (and thus \(\Delta\mu_{a}\)) as \(n\) increases.

Figure 6: As a function of \(n\), for each interface \(a\) (different color and marker, see legend): (a) Standard deviation of the distribution of the aging rate \(B_{a}\equiv\Delta\mu_{a}/\ln T_{a}\) of individual events, normalized by that quantity for \(n=1\). (b) Fraction of slip that corresponds to slow slip.

An increase in the effective temperature of the interface with \(n\) could also act as a potential destabilization factor of the contacts at the interfaces, increasing the occurrence of slow slips with \(n\).

## Discussion and conclusion

### Summary

We have explored the stick-slip response of a system with multiple interfaces by proposing a model system comprising \(n\) vertically stacked slabs, each connected to a lever whose rotation is imposed. The interfaces were driven in quasistatic (homogeneous) shear. We proposed a dimensionless quantity \(\Phi\) as the ratio between the driving stiffness and the elastic shear stiffness of the slabs. We have argued and demonstrated numerically that the system displays synchronization if \(\Phi\) is sufficiently large (\(\Phi\gtrsim 10^{-3}\)). In that case, the system acts close to a single frictional interface with effective properties. If \(\Phi\) is small (\(\Phi\sim 10^{-6}\)), interfaces slip one by one, as also confirmed experimentally. We expect non-trivial collective effects with increasing \(n\) only in the low-\(\Phi\) limit, which we addressed through experiments. In the numerics, the stick-slip amplitude of the interfaces \(\Delta\mu_{a}\) displays a distribution with finite width because of statistical fluctuations of the interfaces, but no measurable changes with \(n\). By contrast, we measured experimentally that the probability distribution of the stick-slip amplitude \(\Delta\mu_{a}\) shows a general broadening with \(n\), with peaks shifting to lower values and secondary peaks appearing. The interfaces are coupled via the lever, exposing them to a complex loading path and leading to a broad distribution of the waiting times \(T_{a}\) between two slip events on an interface. We find that \(T_{a}\) now spans two decades, such that the creep of the interfaces plays a crucial role in the broadening of \(\Delta\mu_{a}\). The complex distributions of \(\Delta\mu_{a}\) can then be interpreted as the combined effect of interface disorder and creep. For narrow waiting times \(T_{a}\), multiple slips explore the statistical fluctuations of the contacting interfaces, giving a distribution with finite width. In addition, this distribution follows a creep-induced general trend over widely distributed \(T_{a}\). Furthermore, we observe that the aging rate variations and the fraction of slow slip on the bottom layers are increasing functions of \(n\). We suggest that these additional consequences of adding more layers to the stack might be evidence of an increase in an effective interface temperature due to the mechanical noise of microscopic events.
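The per-event quantities summarized above lend themselves to a compact analysis script. The sketch below is purely illustrative: the event arrays are synthetic stand-ins for the measured catalogue of waiting times \(T_{a}\) and amplitudes \(\Delta\mu_{a}\), and the binning choices are ours, but it shows how the per-event aging rate \(B_{a}\) and the global creep slope \(B\) of Fig. 5c can be extracted.

```python
import numpy as np

# Synthetic stand-in for a measured slip catalogue: per-event waiting times T_a
# and stick-slip amplitudes dmu_a. In the experiment these come from the
# detected slip events on a given interface.
rng = np.random.default_rng(0)
T_a = 10 ** rng.uniform(0.3, 2.3, size=2000)                   # waiting times spanning two decades
dmu_a = 0.05 * np.log(T_a) + rng.normal(0.0, 0.02, T_a.size)   # creep trend plus disorder

# Per-event aging rate B_a = dmu_a / ln(T_a); its spread is the quantity tracked in Fig. 6a.
B_a = dmu_a / np.log(T_a)

# Logarithmic binning of T_a followed by a linear fit of the bin-averaged dmu_a
# against ln(T_a), i.e. the procedure used to extract B in Fig. 5c.
edges = np.logspace(np.log10(T_a.min()), np.log10(T_a.max()), 12)
which = np.digitize(T_a, edges)
centers, means = [], []
for k in range(1, len(edges)):
    sel = which == k
    if sel.sum() > 5:
        centers.append(np.sqrt(edges[k - 1] * edges[k]))  # geometric bin centre
        means.append(dmu_a[sel].mean())
B, intercept = np.polyfit(np.log(centers), means, 1)
print(f"fitted aging rate B = {B:.3f}; spread of per-event B_a = {B_a.std():.3f}")
```

On the experimental catalogue, the corresponding fit yields \(B=0.053\pm 0.005\).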
In conclusion, the relative rigidity of the drive against the layers dictates whether a stack of interfaces responds synchronously or not. When layers slide one by one, increasing their number leads to complex responses, making the prediction of the next slip more challenging.

### Limitations and outlook

It is pertinent to discuss some limitations of our model system and provide suggestions for future work.

_Stiffness ratio._ We have defined the rigid and compliant driving regimes, as characterized by the relative order-of-magnitude estimation of the respective stiffness ratio \(\Phi\). Identifying an equivalent of \(\Phi\) in systems with more intricate geometries, such as fault networks, might contribute to clarifying their dynamics and help slip predictions. Hence, a more systematic exploration of the response of stacks with varying \(\Phi\) would be of great interest. However, we are currently restricted to a limited range of \(\Phi\). For our numerics, low values of \(\Phi\) are challenging due to a combination of the assumption of finite rotations and finite machine precision. To be able to continue sliding indefinitely, our model should be extended with the possibility to reset the local deformation along the frictional interface to a zero average while keeping the identical stress state. In contrast, high values of \(\Phi\) are challenging experimentally. For too high a driving stiffness, the motor/lever system can no longer be considered rigid, invalidating the relation between global (\(\Delta F\)) and local (\(\Delta\mu_{a}\)) force drops (Eq. (5)). Alternatively, slab materials with low shear stiffness tend to be adhesive [41], corresponding to a different class of frictional properties.

_Creep._ Our proposed model system allowed us to measure the aging rate \(B\) of the interfaces thanks to complex stick-slip sequences, without the need for stop-and-go experiments. However, while our measured value of \(B\) is compatible with previous experiments on PMMA [30; 40], it differs by a factor of five. Possible sources of differences are roughness, inter-realisation variations, and stress inhomogeneities. First, the PMMA plates used here have a much higher surface roughness than in [30; 40]. If our model were extended with thermal fluctuations, it would likely display creep, such that the relationship between the distribution of barriers (linked to surface roughness) and creep could be investigated. Second, we measure \(B\) on an ensemble of \(n=5\) interfaces. Between interfaces, \(B\) is estimated to differ by a factor of about two. Third, recent experimental observations find a relationship between the aging rate \(B\) and the applied shear load [42]. Our setup naturally imposes a broad range of shear loads on the interface. However, to measure the empirical law of [42], our setup would need to be augmented to also provide stress measurements per interface, by individually measuring the internal forces \(f_{i}\) of the driving springs. To study this effect numerically, on top of adding temperature to capture creep, the model would likely have to be made sensitive to pressure inhomogeneities that may arise from the normal load or partial slip events.

_Slow slip._ The origin of the experimentally observed slow slip is unknown. A tempting hypothesis is that smooth sliding is the result of activated yielding events due to increasing mechanical noise on other interfaces.
However, it is not clear why this interpretation would lead to slow slip occurring predominantly on the lowermost interfaces (which is qualitatively robust to changing the order of the slabs, see SI "Smooth sliding"). Furthermore, slow slip is not observed numerically. This could, however, in part be due to our small homogeneous background damping term (currently chosen to avoid non-physical periodic wave propagation). An alternative hypothesis is that, by increasing \(n\), the loading rate becomes sufficiently high to drive the interfaces away from the stick-slip regime. This second interpretation is consistent with slow slip being predominantly observed on the lowermost interfaces. Note that, unlike the experiments, our numerical model is driven infinitely slowly.

_After-shocks._ After-shocks appear if creep is added to the drive in a simple spring-block model [43, 44]. A creeping drive is often associated with the high temperatures in Earth's core [43]. Our experimental system displays slow sliding of 'deep' layers already at room temperature. A key question is whether after-shocks appear in the top layers of our system as well. Answering this question experimentally would require exposing microscopic events, which would likely involve studying acoustic emissions (for which PMMA may not be the optimal choice).

###### Acknowledgements.

The authors thank Mathias Lebihain and Federica Paglialunga for fruitful discussions, and Lebo Molefe for providing the microscope images of the surface of the slabs. S.P. acknowledges financial support from the Japanese Society for the Promotion of Science as a JSPS International Research Fellow. T.G. acknowledges support from the Swiss National Science Foundation (SNSF) by the SNSF Ambizione Grant PZ00P2_185843.
2309.08221
Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study
Code review is an essential activity for ensuring the quality and maintainability of software projects. However, it is a time-consuming and often error-prone task that can significantly impact the development process. Recently, ChatGPT, a cutting-edge language model, has demonstrated impressive performance in various natural language processing tasks, suggesting its potential to automate code review processes. However, it is still unclear how well ChatGPT performs in code review tasks. To fill this gap, in this paper, we conduct the first empirical study to understand the capabilities of ChatGPT in code review tasks, specifically focusing on automated code refinement based on given code reviews. To conduct the study, we select the existing benchmark CodeReview and construct a new code review dataset with high quality. We use CodeReviewer, a state-of-the-art code review tool, as a baseline for comparison with ChatGPT. Our results show that ChatGPT outperforms CodeReviewer in code refinement tasks. Specifically, our results show that ChatGPT achieves higher EM and BLEU scores of 22.78 and 76.44 respectively, while the state-of-the-art method achieves only 15.50 and 62.88 on a high-quality code review dataset. We further identify the root causes for ChatGPT's underperformance and propose several strategies to mitigate these challenges. Our study provides insights into the potential of ChatGPT in automating the code review process, and highlights the potential research directions.
Qi Guo, Junming Cao, Xiaofei Xie, Shangqing Liu, Xiaohong Li, Bihuan Chen, Xin Peng
2023-09-15T07:41:33Z
http://arxiv.org/abs/2309.08221v1
# Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study

###### Abstract.

Code review is an essential activity for ensuring the quality and maintainability of software projects. However, it is a time-consuming and often error-prone task that can significantly impact the development process. Recently, ChatGPT, a cutting-edge language model, has demonstrated impressive performance in various natural language processing tasks, suggesting its potential to automate code review processes. However, it is still unclear how well ChatGPT performs in code review tasks. To fill this gap, in this paper, we conduct the first empirical study to understand the capabilities of ChatGPT in code review tasks, specifically focusing on automated code refinement based on given code reviews. To conduct the study, we select the existing benchmark CodeReview and construct a new code review dataset with high quality. We use CodeReviewer, a state-of-the-art code review tool, as a baseline for comparison with ChatGPT. Our results show that ChatGPT outperforms CodeReviewer in code refinement tasks. Specifically, our results show that ChatGPT achieves higher EM and BLEU scores of 22.78 and 76.44 respectively, while the state-of-the-art method achieves only 15.50 and 62.88 on a high-quality code review dataset. We further identify the root causes for ChatGPT's underperformance and propose several strategies to mitigate these challenges. Our study provides insights into the potential of ChatGPT in automating the code review process, and highlights the potential research directions.

+ Footnote †: \({}^{\dagger}\)Corresponding author

## 1. Introduction

Code review is a software quality assurance activity in software development and maintenance, which involves the systematic examination of source code to identify and rectify errors, improve code quality, and ensure compliance with coding standards. The code review process typically consists of writing code reviews and refining code based on the review comments received, with the ultimate goal of enhancing software quality. Code review has become an integral part of many software development projects, as it has been widely recognized for its effectiveness in improving the overall reliability and maintainability of software systems. However, code review can be a time-consuming and resource-intensive process, requiring significant manual effort to review and refine code, especially in popular projects with numerous contributions. For example, Bosu et al. (Bosu et al., 2018) discovered that, on average, developers allocate approximately six hours per week to preparing code for review or reviewing others' code. Moreover, the increasing complexity of modern software systems and the need for more frequent releases have made code review even more challenging. To address this issue, recent research (Zhou et al., 2019; Wang et al., 2019) has been conducted to automate various aspects of code review, such as generating review comments and refining code.
In particular, the learning-based approaches (Zhou et al., 2019; Wang et al., 2019) that rely on Large Language Models (LLMs) such as CodeT5 (Wang et al., 2019) and CodeBERT (Chen et al., 2019) have demonstrated promising results in automating code review, reducing the manual effort required for code reviews. Recently, OpenAI introduced ChatGPT (Chen et al., 2019), a revolutionary technology capable of transforming various sectors, including software engineering tasks. ChatGPT, an advanced version of GPT-3.5 (Zhou et al., 2019), is a fine-tuned model that excels at understanding and executing instructions. This capability distinguishes it from other pre-trained models and makes it a promising candidate for tasks that require prompts or instructions. The code refinement process, which is contingent upon code review and previous code versions, aligns well with strengths of ChatGPT. Since human reviews can serve as prompts for code refinement, it is natural to investigate the potential of using ChatGPT for this task. In this paper, we take the first step towards investigating the potential of ChatGPT for code refinement based on the given review comments. Note that although code-to-code refinement (i.e., ChatGPT directly generates refined code from original code) is also a research problem, there are still major concerns regarding the quality of the refined code (Wang et al., 2019). Therefore, we focus on the refinement based on given review in this paper, which is different from code-to-code refinement. Specifically, we focus on three main problems: 1) How does ChatGPT perform compared to the state-of-the-art methods? 2) In which cases does ChatGPT underperform, and what are the underlying reasons? 3) How these challenges can be mitigated? By answering these questions, we can gain a deeper understanding of the potential and challenges of ChatGPT for automated code refinement tasks. To answer the above questions, we conduct comprehensive experiments to evaluate ChatGPT's performance in code refinement tasks. Considering the sensitivity of ChatGPT to different settings, we first design the experiment to evaluate its performance on two main factors, i.e., different prompts and temperatures. Then we select the optimal configuration and compare ChatGPT with state-of-the-art techniques (Zhu et al., 2017) on standard benchmarks (Kang et al., 2017). To evaluate the generalizability of different techniques, we create a new dataset by collecting code reviews from repositories not included in the standard benchmarks and recent code reviews from the same repositories included in the standard benchmarks. Based on the evaluation results, we perform an in-depth analysis of the root causes and devise preliminary strategies for mitigating different challenges. Overall, the results provide valuable insights into the performance of ChatGPT in code refinement tasks. Our findings demonstrate that different prompts and temperature settings can have a significant impact of up to 5% and 15% on ChatGPT's Exact Match (EM) scores in code refinement tasks. Lower temperature settings yield better and more stable results, and describing the code review scenario in the prompt helps enhance ChatGPT's performance. Compared to the state-of-the-art model CodeReviewer, ChatGPT demonstrates better generalization capabilities in our newly generated dataset. 
Specifically, ChatGPT achieves EM and BLEU scores of 22.78 and 76.44, respectively, on the new dataset, while CodeReviewer only reaches 15.50 and 62.88 for EM and BLEU scores, respectively. However, we also found that ChatGPT struggles on tasks involving refining documentation and functionalities, mainly due to a lack of domain knowledge, unclear locations, and unclear changes in the review comments. These limitations could potentially be resolved by improving review quality and using more advanced large language models such as GPT-4. Our study highlights the potential of ChatGPT in code refinement tasks and identifies important directions for future research. In summary, this paper makes the following contributions:

* We conduct the first empirical study on evaluating ChatGPT's potential in code refinement tasks based on review comments.
* We analyze the challenges of ChatGPT in code refinement tasks and propose potential mitigation strategies, laying the groundwork for future research on better incorporating ChatGPT.
* We release a new dataset that contains high-quality code reviews, which could be useful for future research in this area.

## 2. Background

### Code Review Process

During the code review process, a contributor submits code changes to implement new features, refactor code, or fix bugs. When the contributor believes the code changes are ready for review and to be merged into the main branch, he or she initiates a pull request and invites reviewers to examine the changes. After reviewing the code changes, a reviewer may provide review comments in natural language, represented as \(R\). Based on these review comments, the contributor makes modifications to the original code \(C_{1}\) and submits the revised code \(C_{2}\). The code difference between \(C_{1}\) and \(C_{2}\) is denoted as \(D:C_{1}\to C_{2}\). It is worth noting that the above process represents only one review cycle, while a complete pull request may involve multiple rounds of review cycles. In this work, without loss of generality, we focus solely on the single-round scenario, where the goal is to automatically generate the revised code \(C_{2}\) with models, based on a given review comment \(R\) and the original submitted code \(C_{1}\) within each pull request.

### ChatGPT

ChatGPT (Kang et al., 2017) is a prominent example of large language models (LLMs), unveiled by OpenAI. ChatGPT was developed by employing a GPT-3.5 series model and training it using reinforcement learning from human feedback (RLHF) (Srivastava et al., 2014; Wang et al., 2015). Owing to the RLHF training process, ChatGPT has exhibited remarkable proficiency across multiple dimensions, encompassing the generation of high-quality responses to human inputs, the refusal of inappropriate queries, and the capacity for self-correction of prior errors based on subsequent dialogues. Considering the characteristics of ChatGPT usage (Wang et al., 2015), it is natural to explore its potential in automating code reviews (Kang et al., 2017). Specifically, we propose a conversational approach to delegate the code refinement task to ChatGPT, where the original code and review comment are provided as a task input in a coherent linguistic structure. ChatGPT will return the revised code along with the reasoning behind the modifications, precisely aligning with the desired output of the task. The performance of ChatGPT in this approach depends significantly on two parameters: prompt and temperature. The prompt serves as a cue for ChatGPT to understand the intended task, while the temperature controls the level of creativity and diversity in ChatGPT's responses.
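For concreteness, a single refinement request of this kind can be issued as sketched below. This is a minimal sketch assuming the pre-1.0 `openai` Python package that was current at the time of the study; the prompt wording is illustrative and is not one of the five prompts (P1–P5) evaluated later.

```python
import openai  # assumes openai.api_key has been set beforehand

def refine_code(old_code: str, review: str, temperature: float = 0.0) -> str:
    """Ask the chat model to revise `old_code` according to `review` (illustrative sketch)."""
    prompt = (
        "You are a developer working on a pull request. Your team leader left the "
        "following review comment on the code below. Apply the requested change and "
        "return only the revised code.\n\n"
        f"Review comment:\n{review}\n\nOriginal code:\n{old_code}\n"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response["choices"][0]["message"]["content"]
```

In the actual study, both the prompt template and the temperature are varied systematically, and each request is repeated several times to account for the randomness of the model's output.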
## 3. Study Design

### Overview and Research Questions

The main focus of this paper is to evaluate and understand the capabilities of ChatGPT in code refinement tasks. Fig. 1 shows the overview of this paper. To conduct our study, we collect existing benchmarks, including the CodeReview dataset, and state-of-the-art code refinement tools such as CodeReviewer (Kang et al., 2017), for comparisons. However, given the potential risk that the dataset could have been used to train ChatGPT and CodeReviewer, we create a new code review dataset (named CodeReview-New) consisting of two parts: new code reviews from the same repositories as the CodeReview dataset but collected more recently (i.e., CodeReview-NewTime), and code reviews from repositories using different languages that are not included in the CodeReview dataset (i.e., CodeReview-NewLanguage). We next introduce the research questions we aim to investigate and their relationships.

**RQ1 Impact of ChatGPT Settings: How do different prompt and temperature settings affect ChatGPT's performance in the code refinement task?** As the effectiveness of ChatGPT highly depends on the prompts and temperatures used, we first evaluate the impact of different settings of ChatGPT on code refinement. We designed five prompts based on whether a concrete scene is provided and whether detailed requirements are given. We also selected five temperature settings ranging from 0 to 2, with intervals of 0.5 (i.e., 0, 0.5, 1, 1.5 and 2.0). We evaluated and compared the effects of 25 combinations of these five prompts and five temperature settings based on the CodeReview dataset. Our evaluation of ChatGPT in the subsequent research questions is based on the optimal prompt and temperature settings obtained from RQ1.

**RQ2 Effectiveness of ChatGPT on Code Refinement: How does ChatGPT's performance compare to state-of-the-art methods?** We aim to investigate the effectiveness of ChatGPT in code refinement tasks compared to state-of-the-art methods. To answer this question, we compare ChatGPT's performance with that of the state-of-the-art code refinement tool, CodeReviewer (Kang et al., 2019). We replicated and fine-tuned the CodeReviewer model and evaluated its performance alongside ChatGPT on both the existing CodeReview dataset and the new dataset CodeReview-New we created.

**RQ3 Strengths and Weaknesses of ChatGPT: In which cases does ChatGPT perform well or not?** To address this question, we conduct a qualitative study based on the results obtained from RQ2. Specifically, we manually annotate 200 samples each from the CodeReview and CodeReview-New datasets, labeling the quality of reviews (i.e., relevance and information levels) and code change types. We then evaluate the performance of ChatGPT on data with various review qualities and code change categories.

**RQ4 Root Causes and Potential Mitigation Strategies for Underperforming Cases: What are the underlying causes for the underperformance of ChatGPT, and how can we mitigate these challenges?** Based on the analysis of RQ3, we aim to further understand the root causes of ChatGPT's underperforming cases and how to address these limitations. We investigated the 206 cases from the 400 annotated samples in RQ3 where ChatGPT failed to make accurate predictions and summarized the categories of root causes.
Based on the root causes, we attempt to study the impact of improving review quality and enhancing models in mitigating the issues of ChatGPT.

### Experiment Settings

#### 3.2.1. Dataset

To conduct the study, we utilize two datasets: the CodeReview dataset (Kang et al., 2019) and a new dataset created by us, denoted as CodeReview-New.

**CodeReview (CR)**: We first select CodeReview (Kang et al., 2019), which is a widely used dataset for the code review task. This dataset was crawled from the top 10,000 repositories on GitHub based on their star ranking, and includes nine programming languages, namely C, C++, C#, Go, Java, JavaScript, PHP, Python, and Ruby. Repositories that do not have an explicit data redistribution license or that have fewer than 1,500 pull requests (PRs) are filtered out. The dataset consists of review comments \(R\) associated with their corresponding code diff \(D:C_{1}\to C_{2}\). To ensure a high-quality dataset, samples with the same review comment associated with multiple code diffs, or a single code diff associated with multiple comments, are filtered out. Additionally, the dataset is divided into a pre-training dataset and multiple downstream task datasets, and we used the code refinement downstream task dataset in our study. This dataset comprises 829 repositories and 125,653 PRs. We follow the same partition method as CodeReviewer (Kang et al., 2019) for a fair comparison, and divide the dataset into training, validation and test sets, with proportions of 85%, 7.5%, and 7.5%, respectively.

**CodeReview-New (CRN)**: Additionally, we create a new code review dataset, CodeReview-New, for two reasons: 1) we observe that there are some low-quality code review data in CodeReview, which could affect the comparisons between ChatGPT and the baseline CodeReviewer; 2) the data distribution in the CodeReview test data could be very similar to that in the pre-training and fine-tuning datasets, and may even have been used by the selected models (i.e., ChatGPT (Kang et al., 2019) and CodeT5 (Kang et al., 2019)). The new dataset is constructed to better evaluate the generalization capabilities of models. To address these two concerns, we design stricter filtering rules to filter out low-quality reviews, and select code reviews that are unlikely to have been used in the pre-training process. To ensure the quality of the CodeReview-New dataset, we implemented several strict rules based on our analysis of the quality issues present in CodeReview. Only code reviews that met these rules were retained in our dataset. Firstly, we ensured that the code changes concern only a single code hunk, which is necessary because the baseline CodeReviewer we select only accepts a single piece of code as input. Secondly, we filtered out changes that were unrelated to code, such as changes to README files. Finally, we ensured the relevance between the review comment \(R\) and the code changes \(D\) by collecting the original code piece \(C_{1}\) that contains the review comment \(R\). To prevent ChatGPT from using CodeReview-New during the pre-training process, we only collected data from January 1, 2022, onwards, as ChatGPT's training data only extends up to 2021 (Kang et al., 2022). Furthermore, the CodeReview dataset also does not contain data after January 1, 2022, which makes it fair to compare the CodeReviewer model and ChatGPT.
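The filtering rules above amount to a simple predicate over candidate samples. The sketch below is our own illustration of that logic; the fields on the `sample` object and the suffix list are hypothetical and do not correspond to a released script.

```python
from datetime import date

CODE_SUFFIXES = (".c", ".cc", ".cs", ".go", ".java", ".js", ".php", ".py", ".rb",
                 ".swift", ".m", ".kt", ".sql", ".pl", ".scala", ".r")
CUTOFF = date(2022, 1, 1)  # keep only PRs created after the cutoff of ChatGPT's training data

def keep_sample(sample) -> bool:
    """Return True if a (review, code change) pair passes the CodeReview-New filters (sketch)."""
    single_hunk = len(sample.hunks) == 1                         # rule 1: a single code hunk only
    is_code_file = sample.path.lower().endswith(CODE_SUFFIXES)   # rule 2: drop README-style changes
    review_on_hunk = sample.review_anchor in sample.old_hunk     # rule 3: comment attached to C1
    recent_enough = sample.pr_created >= CUTOFF
    return single_hunk and is_code_file and review_on_hunk and recent_enough
```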
In addition to the repositories included in CodeReview, we crawled code reviews from an additional 1,400 repositories (the top 200 repositories for each language based on their star ranking) using seven programming languages that are not included in CodeReview: Swift, Objective-C, Kotlin, SQL, Perl, Scala, and R. In total, we selected 2,029 repositories, with 829 from the CodeReview repositories and 1,200 new repositories with different programming languages. After applying the filtering rules and selecting pull requests based on time, we only have 467 repositories out of the initial 2,029 repositories. The exclusion of the other 1,562 repositories can be attributed to two main reasons: first, we used stricter filtering rules compared to the construction of the CodeReview dataset, and second, we only selected pull requests created on or after January 1, 2022, which resulted in the exclusion of some projects that had few PRs during this period. As shown in Table 1, the dataset consists of samples from two types of repositories: 9,117 samples from 232 repositories that are also included in the CodeReview dataset, denoted as CodeReview-NewTime (CRNT); and 5,451 samples from 240 new repositories whose programming languages differ from those in the CodeReview dataset, denoted as CodeReview-NewLanguage (CRNL). Some languages, such as SQL and Perl, have a smaller amount of data due to fewer pull requests or a smaller number of reviews.

Figure 1. Overview of our study.

#### 3.2.2. Evaluation Models

To compare the performance of ChatGPT with the state-of-the-art tool, we chose CodeReviewer (Kotlin, 2017), which is a recent state-of-the-art method for code refinement. In this paper, we apply ChatGPT in a similar way to CodeReviewer, by generating revised code \(C_{2}\) based on reviews \(R\) and original code \(C_{1}\). We chose CodeReviewer over other methods as it has been demonstrated to be more effective than other methods such as AutoTransform (Zhu et al., 2017) and Trans-Review (Zhu et al., 2017). Based on our evaluation results, we believe that ChatGPT can also surpass other models. Furthermore, our main focus is to understand the strengths and weaknesses of ChatGPT and identify potential improvement directions for future research on the code review process.

**CodeReviewer:** It utilizes a T5 model architecture comprising 12 Transformer encoder layers and 12 decoder layers, amounting to 223 million parameters (Zhu et al., 2017). The model is initialized using the weight parameters of CodeT5 (Zhu et al., 2017). Subsequently, the pre-training is carried out with three objectives: Diff Tag Prediction, Denoising Objective, and Review Comment Generation. In this study, we employed the same pre-trained CodeReviewer model and fine-tuned it using the \(CodeReview_{train}\) and \(CodeReview_{valid}\) datasets.

**ChatGPT:** We accessed and evaluated ChatGPT with the default GPT-3.5-Turbo model using the OpenAI API (Zhu et al., 2017). Unlike CodeReviewer, we did not fine-tune ChatGPT and only performed a zero-shot style evaluation. The ChatGPT API was accessed in March 2023, at a total cost of 150 USD. When comparing T5 and GPT-3.5, both models are large language models, but they have some differences. T5 is a general-purpose language model that uses a denoising autoencoder objective, which involves predicting masked or corrupted tokens in the input text.
In contrast, ChatGPT is trained on a large dataset of conversational text, making it better at generating responses appropriate for use in a chatbot context. One key difference between the two models is that ChatGPT is fine-tuned with Reinforcement Learning from Human Feedback (RLHF), which uses human feedback in the training loop to make it more effective in generating appropriate and coherent responses in various contexts. During the evaluation, we designed different prompts based on the original code and code review to obtain outputs from ChatGPT. In RQ4, we also employed GPT-4 in order to mitigate the cases where GPT-3.5 produced incorrect answers. GPT-4 (Kotlin, 2017) is the latest multi-modal model designed to process both textual and visual inputs, generating textual outputs.

#### 3.2.3. Evaluation Metrics

Exact Match (EM) and BLEU are the two widely adopted metrics in previous literature (Kotlin, 2017; Zhu et al., 2017; Zhu et al., 2017). However, we found that ChatGPT tends to generate more content, including additional code or more explanations, which could largely affect the EM results and make the measurement less accurate. In the real world, a contributor can easily trim this additional information to obtain the correct code. Hence, we propose two new variants of EM and BLEU, called EM-trim and BLEU-trim, which measure the results more accurately.

**Exact Match (EM).** A prediction is considered correct by EM only if the predicted revised code is identical to the ground truth revised code. The EM value is computed based on the percentage of generated outputs that exactly match the ground truth.

**Exact Match Trim (EM-trim)** is a variant of the EM metric that is more lenient in its measurement. EM-trim first performs a trim on the generated output (denoted as \(C_{2}^{\prime}\)) before calculating the EM score. Specifically, if the first line of the ground truth text can be found in the generated output \(C_{2}\), we trim the generated content before that line; similarly, if the last line of the ground truth text can be found in the generated output \(C_{2}\), we trim the generated content after that line. After the trim process, the EM-trim score is calculated using the trimmed content \(C_{2}^{\prime}\) and the ground truth text. The EM-trim metric is more lenient than the traditional EM metric, as it ignores irrelevant surrounding information.

**BLEU** is a common metric used to measure the quality of generated text in neural translation models (Zhu et al., 2017). We use the BLEU-4 variant, which calculates the overlap of 4-grams between \(C_{2}\) and the ground truth (Kotlin, 2017; Zhu et al., 2017; Zhu et al., 2017). The range of BLEU-4 scores lies between 0% and 100%, with 100% indicating a perfect match. The average BLEU-4 score of all test samples serves as the overall evaluation result. Similar to EM-trim, we also design BLEU-trim, which calculates the BLEU-4 score between the trimmed output \(C_{2}^{\prime}\) and the ground truth text.
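For concreteness, our reading of this trimming rule can be sketched as follows; this is an illustration, not the authors' released implementation.

```python
def trim_output(generated: str, ground_truth: str) -> str:
    """Drop explanation text before/after the code span that matches the ground truth."""
    gen_lines = generated.splitlines()
    gt_lines = ground_truth.splitlines()
    if not gen_lines or not gt_lines:
        return generated
    first, last = gt_lines[0].strip(), gt_lines[-1].strip()
    stripped = [line.strip() for line in gen_lines]
    start = stripped.index(first) if first in stripped else 0
    end = (len(gen_lines) - 1 - stripped[::-1].index(last)) if last in stripped else len(gen_lines) - 1
    return "\n".join(gen_lines[start:end + 1])

def exact_match_trim(generated: str, ground_truth: str) -> bool:
    """EM-trim: exact match computed on the trimmed candidate C2'."""
    return trim_output(generated, ground_truth).strip() == ground_truth.strip()
```

BLEU-trim follows the same idea, computing the BLEU-4 score on the trimmed candidate instead of exact equality.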
## 4. Evaluation Results

### RQ1 Impact of Prompts and Temperatures

#### 4.1.1. Setup

Prompts and temperatures are two crucial parameters that can significantly impact the performance of ChatGPT in code refinement tasks. To determine the optimal values for these parameters, we conducted an experiment to evaluate their impact on code refinement. Note that while temperatures and prompts are parameters utilized by ChatGPT, they are not applicable to CodeReviewer; CodeReviewer solely relies on the concatenation of old code and code reviews as its input. Specifically, temperature is a parameter that controls the level of randomness and creativity in the generated output of ChatGPT. Higher temperature settings tend to produce more diverse and innovative responses, but with a higher risk of generating nonsensical or irrelevant output. In order to explore the effects of different temperature settings in ChatGPT, which range from 0 to 2, we chose five specific temperature values (i.e., 0, 0.5, 1.0, 1.5, and 2.0) due to the high cost of the ChatGPT API. To select the prompts, we followed established best practices (Beng et al., 2017; Chen et al., 2018), which suggest that prompts could include four types of elements, i.e., _Instruction, Context, Input Data_ and _Output Indicator_. We have tried prompts with various combinations of these four elements. During our preliminary exploration stage, we experimented with a total of 14 prompts. Due to budget constraints, we selected the 5 best-performing and representative prompts:

* **Prompt 1 (P1): the simplest prompt.** We only provided the basic requirement of generating new code based on the old code and review, without additional description.
* **Prompt 2 (P2): P1 + Scenario Description.** P2 was designed based on Prompt 1 but included a scenario description that asked ChatGPT to act as a developer and modify the code based on review information in a pull request coming from the team leader.
* **Prompt 3 (P3): P1 + Detailed Requirements.** P3 included detailed requirement information, such as keeping the original content and format of the code as much as possible and not completing any code snippets in the old code or modifying any code not mentioned in the review.
* **Prompt 4 (P4): P1 + Concise Requirements.** Similar to P3, P4 also included requirement information, but in a more concise form.
* **Prompt 5 (P5): P4 + Scenario Description.** P5 was a combination of Prompts 2 and 4, containing both scenario description and requirement information.

Specifically, the instruction, context, and output indicator in P1 are all kept as simple as possible. P2, building upon P1, provides a more detailed context description, while P3, also building upon P1, offers a more detailed output indicator (Chen et al., 2018). Figure 2 illustrates the construction strategies for Prompt 1 and Prompt 2. The details of the other prompts are available on our website (Chen et al., 2018). To evaluate the effectiveness of ChatGPT under different parameters, we accessed the ChatGPT API and performed code refinement on the CodeReview dataset. Due to the cost of running the ChatGPT API, we randomly selected 500 data entries from the test set of the CodeReview dataset to reduce the number of API calls. To account for the randomness of ChatGPT predictions, we repeated each setting ten times, i.e., making ten ChatGPT API requests on each sample under each setting. We obtained the average of the ten repetitions as the final results.

#### 4.1.2. Results

Table 2 displays the results of our evaluation of ChatGPT under different temperature and prompt settings. Values in parentheses represent standard deviations. Notably, the evaluation results indicate that setting the temperature to 0 achieves the best performance for each prompt. As the temperature increases, the performance of ChatGPT decreases significantly. For example, the temperature of 2.0 achieves the worst results.
This phenomenon may be due to the fact that generating new code is a complex and precise task, and high temperature can result in unstable and random results, which are more creative but less reliable. Furthermore, we investigated the results of the 500 sampled data points with the temperature set to 0 under P2, and found that most of the results remain consistent. Specifically, 309 of the samples produced the same answer in all 10 runs, while 110 of the samples produced only 2 different answers among the 10 runs. This finding further underscores the strong stability of using a temperature of 0 for code generation tasks. Overall, the results suggest that using lower temperature settings tends to produce more stable and better output for code generation tasks. Comparing the effects of different prompts under stable temperature settings (0, 0.5, and 1.0), we observed that P2 and P5 achieved significantly better results than the others. Considering the comparative results between P1 and P2, as well as the results between P4 and P5, we can infer that the inclusion of additional scenario descriptions is beneficial in improving the understanding and performance of ChatGPT. Furthermore, we noticed that P3 performed worse than P4, despite both prompts containing more requirement information. Sometimes, P3 even performed worse than the simplest prompt, P1. For example, P1 achieved higher EM-trim scores than P3 in all three temperature settings, but P1 was generally worse than P4. This indicates that while providing additional requirement

| Language | Ruby | Go | Py | C# | JS | C++ | Java | C | PHP | Swift | Obj-C | Kt | SQL | PL | Scala | R |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| #Samples | 377 | 2,843 | 2,115 | 703 | 427 | 700 | 1,194 | 335 | 423 | 864 | 81 | 1,932 | 96 | 116 | 1,682 | 680 |

Table 1. The statistics of the CodeReview-New dataset. The first nine languages belong to _CRNT_ (9,117 samples in total) and the last seven to _CRNL_ (5,451 samples in total).
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{ \begin{tabular}{c} P\({}_{k}\) \\ \end{tabular} } & \multicolumn{6}{c}{Temperature\({}_{w}\)=0} & \multicolumn{6}{c}{Temperature\({}_{w}\)=1.0} & \multicolumn{6}{c}{Temperature\({}_{w}\)=1.5} & \multicolumn{6}{c}{Temperature\({}_{w}\)=2.0} & \multicolumn{6}{c}{Avg (Tem.5,1.5)} \\ \cline{2-13} & EM-T & BLEU-T & BLEU-T & & EM-T & BLEU-T & BLEU-T & BLEU-T & EART-T & BLEU-T & EART-T & BLEU-T & BLEU-T & EART-T & BLEU-T \\ \hline P1 & 12.92 (0.22) & 73.58 (0.22) & 73.28 (0.34) & 72.82 (0.33) & 16.48 (0.77) & 7.125 (0.48) & 12.27 (1.63) & 64.42 (0.57) & 6.46 (0.57) & 5.69 (0.57) & 12.66 (1.21) & 16.57 & 70.54 \\ P2 & **21.48**(0.33) & **77.99** (0.27) & 19.76 (1.01) & 76.40 (0.95) & 16.66 (0.77) & 74.12 (0.29) & 11.69 (0.71) & 65.48 (0.10) & 3.59 (0.57) & 14.82 (0.24) & 17.40 & 73.37 \\ P3 & 16.40 (0.29) & 75.37 (0.17) & 15.76 (0.27) & 74.66 (0.41) & 13.02 (1.02) & 71.92 (1.33) & 9.06 (0.69) & 63.36 (0.85) & 3.39 (0.25) & 21.50 (0.37) & 13.56 & 71.33 \\ P4 & 19.22 (0.10) & 75.59 (0.16) & 18.62 (0.59) & 76.48 (0.42) & 16.98 (0.36) & 72.66 (0.81) & 11.83 (0.77) & 65.20 (0.22) & 6.39 (0.24) & 25.21 (0.93) & 16.66 & 72.06 \\ P5 & 21.16 (0.40) & 76.66 (0.29) & 19.93 (0.37) & 76.35 (0.43) & 16.29 (0.35) & 74.69 (0.78) & 10.48 (0.50) & 63.96 (1.08) & 1.78 (0.75) & 14.25 (0.29) & 17.11 & 72.92 \\ \hline Avg & 19.50 & 75.68 & 18.47 & 74.98 & 16.01 & 72.91 & 11.66 & 64.61 & 4.43 & 20.91 & 16.26 & 72.05 \\ \hline \hline \end{tabular} \end{table} Table 2. Impact of different prompts and temperatures on performance of ChatGPT. Figure 2. Construction strategies of Prompt 1 and Prompt 2 information could be helpful (compared to P1 and P4), too much complex information could harm the performance (P3). It could be because detailed requirement information is more complex to understand by ChatGPT, leading to unstable results. To investigate whether the findings of prompts and temperatures also hold across the entire dataset, we conducted an additional experiment. We randomly selected 1,000 data points from the training sets and validation sets of CodeReview dataset, and replicated the experiment. Due to budget constraints, we repeated the experiments for temperatures greater than 1.5 only twice, whereas for other temperature settings, we repeated them 10 times. The results, presented in Table 3, align closely with the findings in Table 2. Overall, both the EM and BLEU metrics demonstrate comparable performance to that on the test data, further reinforcing the consistent conclusions drawn concerning the influence of temperature and prompt settings as mentioned above. Table 4 shows the p-value regarding EM-T and BLUE-T between P2 and other prompts with t-test (Wang et al., 2019). We can observe that, expect for EM-T P-value (0.5320) between P2 and P5, all p-values are less than 0.005. It implies that P2 significantly outperforms P1, P3, and P4 in terms of both EM-T and BLEU-T scores. As for P5, in terms of EM-T, there is no significant difference between P2 and P5. However, considering the BLEU-T values, P2 is significantly better than P5. Taking into account these factors, we finally selected P2 for conducting the experiments in this paper. In the case of unstable temperature settings (1.5 and 2.0), we observed that the overall performance decreased. Note that, we also tried the fine-grained temperature interval (i.e., 0, 0.1, 0.2,..., 0.9, 1.0) on P2, the results show the similar trend with the larger interval 0.5. 
The results can be found in the website. However, we still noticed that P1 and P4 outperformed other prompts in general. This could be because P1 and P4 are simpler and provide less information, resulting in more stable results under higher temperature settings. In contrast, prompts with more information may make ChatGPT more creative but also more unstable when set with a higher temperature. **Answers to RQ1**: The configuration of parameters and temperatures has a significant impact on ChatGPT's performance on code refinement. In most cases, lower temperature settings tend to produce better and more stable results. Prompts involving concise scenario descriptions tend to produce better results. ### RQ2 Effectiveness of ChatGPT Based on the best parameters from RQ1 (i.e., temperature = 0 and prompt 2), we then evaluate ChatGPT on the test dataset of CodeReview (CR) and CodeReview-New (CRN). Table 5 presents the comparative results between ChatGPT and CodeReviewer. The column #Samples show the number of samples. CodeReview-NewTime (CRNT) and CodeReview-NewLanguage (CRNL) represent the results of two new datasets we constructed (see Table 1), respectively, where CodeReview-NewTime refers to code reviews in the same repositories with code review and CodeReview-NewLanguage refers to code reviews in different repositories with new programming language. Note that we have also evaluated the performance of ChatGPT on the training and validation datasets of CodeReviewer. The detailed results of these evaluations are available on our website (Chen et al., 2020) due to space limitations. The results demonstrate similar performance to that observed on the test dataset and show the consistent conclusions drawn regarding the impact of temperature and prompt settings in RQ1. We can see that ChatGPT achieves stable results across different datasets. In particular, the evaluation results suggest that ChatGPT performs better on CodeReview-New compared to CodeReview due to the higher quality of reviews in CodeReview-New. We further conducted an in-depth analysis to understand the lower performance of CodeReviewer compared to ChatGPT on the new dataset. We identified 2,283 cases from the new dataset where ChatGPT provided a correct response while CodeReviewer did not. We randomly selected 150 of them for the manual analysis. Through our analysis, we identified 4 main root causes: * _(34) Inaccurate understanding of the review content_. We have observed that some code reviews contain unclear information, such as ambiguous location references, unclear changes, or requiring domain-specific knowledge, which is challenging for the CodeReviewer model to comprehend. * _(62) Over deletion_. CodeReviewer model exhibits a tendency to inaccurately delete code snippets. Specifically, in 30 cases, the CodeReviewer model erroneously deleted correct code snippets that should have been preserved. Additionally, in 32 cases, the model deleted a significant portion of code snippets that required modifications, resulting in excessive deletions. * _(10) Extra modification_. In some cases, CodeReviewer model may introduce unnecessary modifications to code snippets that do not require any changes. * _(44) Hard to understand the ground truth provided in the code block_. Our analysis has revealed that, in some cases, reviewers have accurately suggested changes within the code block. 
However, CodeReviewer fails to recognize that the code within these blocks represents the ground truth, leading to incorrect modifications.

| Prompts | P1 | P3 | P4 | P5 |
| --- | --- | --- | --- | --- |
| EM-T P-value (P2 is superior) | 4.20E-06 | 7.69E-09 | 2.24E-06 | 0.5320 |
| BLEU-T P-value (P2 is superior) | 9.44E-09 | 2.30E-09 | 1.26E-07 | 0.0039 |

Table 4. Comparisons between Prompt 2 and other prompts.

| Pr. | T=0 EM-T | T=0 BLEU-T | T=0.5 EM-T | T=0.5 BLEU-T | T=1 EM-T | T=1 BLEU-T | T=1.5 EM-T | T=1.5 BLEU-T | T=2 EM-T | T=2 BLEU-T |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P1 | 18.1 | 70.77 | 18.28 | 70.44 | 16.15 | 68.91 | 14.08 | 63.21 | 2.31 | 6.93 |
| P2 | 21.55 | 74.21 | 20.23 | 73.52 | 17.99 | 71.42 | 13.45 | 61.94 | 1.26 | 3.57 |
| P3 | 16.21 | 71.2 | 16.15 | 71.32 | 13.97 | 69.14 | 10.4 | 62.87 | 1.59 | 4.34 |
| P4 | 18.28 | 71.45 | 17.82 | 71.32 | 16.44 | 68.82 | 12.36 | 62.48 | 1.82 | 5.02 |
| P5 | 20.11 | 76.17 | 19.48 | 75.62 | 17.7 | 72.88 | 9.94 | 51.69 | 0.37 | 2.62 |
| Avg | 18.85 | 72.76 | 18.39 | 72.44 | 16.45 | 70.23 | 12.05 | 60.44 | 1.47 | 4.50 |

Table 3. Impact on the training and validation sets.

In summary, the main root cause appears to be the different understanding abilities of the models. The CodeReviewer model struggles with comprehending some unclear reviews, while ChatGPT demonstrates a stronger ability to capture the underlying semantics accurately. We have included examples that illustrate the root causes and the different performance of the models on our website (Beng et al., 2019). Although ChatGPT outperforms CodeReviewer on the new dataset, the results are still not as good as expected, with an EM-trim score of only 22.78. This indicates that ChatGPT still requires significant improvement in code refinement tasks, motivating further exploration of its strengths and weaknesses in RQ3 and RQ4. Furthermore, our observations indicate that ChatGPT often generates additional text that explains its code refinements. This extra text can offer both advantages and disadvantages. On one hand, it provides explanations that assist users in understanding the code refinements and assessing the reasonableness of the changes made. On the other hand, it may require users to make an additional effort to remove this extra text when submitting the refined code. However, we believe that automatically filtering out such extra text is relatively easy, since ChatGPT frequently encloses the code in code blocks, typically denoted by three backticks.

**Answers to RQ2**: Overall, ChatGPT demonstrates better generalization capabilities than CodeReviewer when applied to unseen datasets. However, its effectiveness is still limited, with EM-trim and BLEU-trim scores of only 22.78 and 76.55, respectively.

### RQ3 Strengths and Weaknesses of ChatGPT

#### 4.3.1. Setup

To gain a deeper understanding of the strengths and weaknesses of ChatGPT, we conducted a qualitative analysis on the results of RQ2. Specifically, we randomly selected 400 samples, including 200 samples each from the CodeReview and CodeReview-New datasets, which achieves a 90% confidence level with a 5.8% confidence interval.
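The quoted sampling precision can be checked with the usual normal-approximation formula for a sample proportion, using the conservative \(p=0.5\) and ignoring the finite-population correction:

```python
from math import sqrt

z90 = 1.645                       # two-sided 90% confidence
n = 200                           # samples annotated per dataset
margin = z90 * sqrt(0.25 / n)     # conservative p = 0.5
print(f"{margin:.3f}")            # ~0.058, i.e. the 5.8% interval quoted above
```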
Then we manually annotated them along three dimensions: the relevance of the review comment to the code refinement (_Comment Relevance_), the information provided by the review comment (_Comment Information_), and the categories of code changes (_Code Change Category_). Our aim was to identify the strengths and weaknesses of ChatGPT based on these three dimensions. We employed a rigorous annotation process for the manual study of ChatGPT on the selected samples. To facilitate the annotation process, we developed a annotation website that allowed annotators to view the review comment, the original code \(C_{1}\), the ground truth revised code \(C_{2}\), and the original pull request link in a single page. The annotators were able to refer to the code, discussions, and commits in the original pull request if necessary to determine the annotation categories. Two co-authors independently annotated the samples along the three dimensions. When discrepancies occurred between the annotations of the two co-authors, a third author was consulted to resolve the issue through discussion. Conflicts were resolved every 50 samples, and annotation standards were aligned over eight rounds to ensure consistency and accuracy in the annotation process. It took 14 people days to perform the annotation in total. The final Cohen's Kappa coefficient (Kappa et al., 2016) for Comment Relevance, Comment Information, and Code Change Category was 0.675, 0.696 and 0.888 respectively, suggesting moderate, moderate and strong agreement between the two annotators. **Comment Relevance** measures the degree of relevance between the review comments and the code changes in the test dataset, reflecting the quality of the dataset. The relevance of the comments is divided into three levels: * **Level 1 (Not):** There is no apparent relationship between the code change and the review comment. * **Level 2 (Partial):** The suggestions in the review comment are partially implemented in the code change, or some refinement in the code change is not present in the suggestions of the comment. * **Level 3 (Perfect):** The code changes strictly follow the review comment, and there is a clear correspondence between them. In other words, the suggestion of the review comment is fully implemented in the code change, and the code refinement is entirely contained within the review comment. **Comment Information** measures the sufficiency and clarity of the instructions contained in the comment regarding the code change, which reflects the difficulty for the contributor or a model to refine the code. For example, a comment like "There are spaces missing" is more informative than "This function name does not describe well what it does." We followed the definition of comment information from (Kappa et al., 2016), and divided the comment information into three levels: * **Level 1 (Vague Question)**: The review comment only gives a general direction for modification (e.g., "we should maintain the consistency of variable naming") without clear suggestions for changes. Figure 3. Data quality of CodeReview and CodeReview-New. 
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Dataset & Tool & \#Samples & EM & EM-T & BLEU & BLEU-T \\ \hline \multirow{2}{*}{\(CR\)} & CodeReviewer & \multirow{2}{*}{13,104} & **32.49** & **32.55** & **83.39** & **83.50** \\ & ChatGPT & & 16.70 & 19.47 & 68.26 & 75.12 \\ \hline \multirow{2}{*}{\(CRN\)} & CodeReviewer & \multirow{2}{*}{14,568} & 14.84 & 15.50 & 62.25 & 62.88 \\ & ChatGPT & & **19.52** & **22.78** & **72.56** & **76.44** \\ \hline \multirow{2}{*}{\(CRNT\)} & CodeReviewer & \multirow{2}{*}{9,117} & **15.75** & 16.31 & 62.01 & 62.47 \\ & ChatGPT & & **19.60** & **22.44** & **72.90** & **76.55** \\ \hline \multirow{2}{*}{\(CRNL\)} & CodeReviewer & \multirow{2}{*}{5,451} & 13.21 & 14.05 & 62.67 & 63.61 \\ & ChatGPT & & **19.39** & **23.40** & **71.97** & **76.25** \\ \hline \hline \end{tabular} \end{table} Table 5. Quantitative evaluation results. * **Level 2 (Vague Suggestion)**: The review comment provides specific suggestions for modification (e.g., "changing it with camel case style"), but does not directly specify the location of the code that should be modified. * **Level 3 (Concrete Suggestion)**: The review comment includes explicit requests for adding or modifying code snippets (e.g., "changing the variable name 'testfile' to 'testFile") or explicitly identifies code snippets to be removed. **Code Change Category** is used to measure the intention of the code changes. We followed the taxonomy in (Zhou et al., 2017) and defined the categories based on our annotations. There are 4 major categories, including _Documentation Category_, _Feature Category_, _Refactoring Category_, and _Documentation-and-Code Category_. * **Documentation Category** represents code changes that only add, modify, or remove documentation. Modifications according to conventions (Documentation-conventions) may also involve additions, modifications, or deletions, but we separated it for easier analysis of the unique challenges it poses to the model's prediction of revised code. * **Feature Category** represents code changes in terms of functional logic, such as adding, modifying, or removing code. * **Refactoring Category** refers to non-functional code refactoring, including renaming code entities (Refactoring-rename), swapping two code snippets (Refactoring-swap), and updating code based on coding standards (Refactoring-conventions). * **Documentation-and-Code Category** represents code changes that include both documentation and code modifications. Figure 3 presents the results of the annotation on the CodeReview dataset and the CodeReview-New dataset, which measures comment relevance and comment information. The results show that, compared to the CodeReview dataset, CodeReview-New dataset, constructed with stricter filtering rules, has more samples with _perfect_ relevance levels (150 vs. 135) and fewer samples with _not_ relevance levels (21 vs. 36), indicating higher quality. Furthermore, the CodeReview-New dataset has fewer samples with _vague suggestion_ level (40 vs. 59) and more samples with _vague question_ level (65 vs. 46) than the CodeReview dataset. Figure 4 illustrates the results of ChatGPT on different comment relevance and information levels. The figure highlights that ChatGPT performs the best when the comments are classified as _perfect_ relevance, outperforming both _partial_ and _not_ relevance levels. 
In addition, ChatGPT performs the best on reviews that contain _concrete suggestion_ information, while performing similarly for _vague suggestions_ and _vague questions_. The results imply that the quality of data significantly impacts ChatGPT's performance, as reviews with low relevance and low information do not provide enough context and information for ChatGPT to make accurate predictions. Table 6 summarizes the results across different code change categories. It shows that ChatGPT performs best in the Refactor category with an EM-trim of 37.50% and a BLEU-trim of 83.58%, indicating that ChatGPT has a good understanding of how to perform code refactoring. However, the _Documentation-and-Code_ category is the weakest performing category, with an EM-trim of 0% and a BLEU-trim of 64.09%, which highlights the difficulty in making simultaneous changes to code and documentation while maintaining consistency. When comparing minor categories, ChatGPT is best at handling _remove_-type code changes, followed by _modify_ and _add_ categories. Additionally, we observed that some of predictions about updates and adds are actually correct, but do not strictly match the ground truth answers, which will be discussed in RQ4. The results also suggest that ChatGPT is skilled at updating code based on conventions, with EM-trim values of 23.08% and 44.12% for Documentation-convention and Refactor-convention samples, respectively, while the average EM-trim for the Documentation and Refactor categories is lower at 17.78% and 37.50%, respectively. ### RQ4 Root Causes Analysis and Mitigation In RQ4, we aim to further understand the root causes of ChatGPT's underperforming cases and identify potential solutions for improvement. Specifically, we collect 206 underperforming cases that met Figure 4. Qualitative results of ChatGPT on data with different review information levels. Figure 5. An example of unclear location and the mitigation. two criteria: 1) the reviews have _perfect relevance_ and \(2\)) the EM-trim scores calculated based on outputs of ChatGPT were 0. #### 4.4.1. Root Cause Analysis Table 7 presents the results of the root cause analysis, which includes two major categories of root causes: _inaccurate measurement_ and _incorrect prediction_. **Inaccurate Measurement Category** refers to false positives where the predicted refinement by ChatGPT is correct based on our manual inspection, but the measurement metrics, such as EM or EM-trim, are low due to the strict matching. Four types of root causes were identified in this category: _Insignificant Omission (IO)_, where ChatGPT did not return unmodified code segments but correctly returned the modified parts; _Unexpected Grammar Fix (UGF)_, where ChatGPT fixed grammar errors in the documentation that were not present in the ground truth revised code; _Code Style Difference (CSD)_, where the predicted code by ChatGPT is semantically identical to the ground truth revised code, with differences only in whitespace, line breaks, and other code style aspects that do not affect code semantics, and the review comment did not explicitly prohibit the change of code style. _Reasonable Improvement (RI)_, refers to cases where ChatGPT's modifications are highly reasonable and represent an improvement over the original version. **Incorrect Prediction Category** refers to true positive cases where ChatGPT made incorrect answers compared to the ground truth revised code. We identified three types of root causes in this category. 
_Need Domain Knowledge (NDK)_ refers to cases where the review comment does not provide the necessary repository-related domain knowledge to complete the modification (e.g., "change this as the style in _anotherFile_"). _Unclear Location (UL)_ refers to cases where the review comment does not provide a specific location for the code to be modified. For example, in Figure 5, the review does not clearly indicate the location of the changes, and ChatGPT (GPT-3.5) erroneously modifies the function name as well. Although contributors can see the specific location of the review comment on the GitHub pull request interface, such information is not provided to ChatGPT, following the same settings as CodeReviewer (2018). _Unclear Changes (UC)_ refers to cases where the review comment has a lower information level, causing ChatGPT to be unable to determine the specific modifications needed, resulting in underperformance. For example, in Figure 6, ChatGPT (GPT-3.5) mistakenly assumes that the review suggests returning the result of "data.apply..." to data itself due to the vague comment. _Model Fallacy (MF)_ refers to cases where the review is accurate and clear from the perspective of human, yet ChatGPT fails to handle them correctly. It suggests that the observed issues are more likely to be inherent to the model itself rather than solely stemming from the quality of the review. As an illustration, in Figure 7, ChatGPT (GPT-3.5) mistakenly believes that the review suggests changing default(1) to default(false). As presented in Table 7, 51 (20.39%) of the underperforming cases were caused by inaccurate EM measurement. For the remaining 164 (79.61%) cases where ChatGPT outputs incorrect answers, the majority 107 (51.94%) cases were caused by the lack of domain knowledge required to complete the modification. Another 44 cases (21.36%) were due to unclear location information in the review comment, while 13 cases (6.31%) were caused by unclear instructions provided in the review comments. #### 4.4.2. Mitigation Strategies We further investigated potential mitigation to improve ChatGPT on the underperforming cases in the _Incorrect Prediction_ category as _Need Domain Knowledge_ requires more information. In general, mitigation can be conducted from two main directions: _improving the quality of review comments_ and _enhancing the models_ used for code refinement. Improving the review quality can be achieved through two avenues: _designing best practices for reviewers to provide high-quality reviews_ and _developing \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Doc-add} & Doc-rem & Doc-mod & Doc-con & Feat-add & Feat-rem & Feat-mod & Ref-ren & Ref-swap & Ref-con & Doc\&Code \\ \hline \#Sample & 14 & 8 & 55 & 13 & 21 & 52 & 153 & 24 & 6 & 34 & 20 \\ EM-T & 0.00 & 50.00 & 16.36 & 23.08 & 4.76 & 23.08 & 19.61 & 29.17 & 33.33 & 44.12 & 0.00 \\ BLEU-T & 52.65 & 87.24 & 81.16 & 67.45 & 75.40 & 73.27 & 79.43 & 85.88 & 82.14 & 82.22 & 64.09 \\ \hline \hline \end{tabular} \end{table} Table 6. Results of ChatGPT on different code changes. Figure 6. An example of unclear changes and the mitigation. Figure 7. An example of model fallacy. tools to assist in refining low-quality reviews_ if the reviewers cannot provide high-quality ones. In this study, we would like to investigate whether providing more precise reviews and using more advanced models can improve the performance of LLMs on the code refinement task. 
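To make these two directions concrete, the sketch below shows how a refinement query might be issued to ChatGPT through the OpenAI Python client: the review comment is augmented with an explicit location, and the model can be switched from GPT-3.5 to GPT-4. The prompt wording, the review/code pair, and the parameter choices are illustrative assumptions for this sketch, not the exact templates or settings used in this study.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical review/code pair; the "on line 2" clause is the kind of explicit
# location information examined as a mitigation for Unclear Location cases.
review = "Rename the variable `testfile` to `testFile` on line 2; leave everything else unchanged."
code = "def load_fixture(path):\n    testfile = open(path)\n    return testfile.read()"

prompt = (
    "You are helping a contributor address a code review comment on a pull request.\n"
    "Apply the review comment to the code and return only the revised code.\n\n"
    f"Review comment:\n{review}\n\nOriginal code:\n{code}"
)

response = client.chat.completions.create(
    model="gpt-4",    # switching from "gpt-3.5-turbo" is the model-side mitigation
    temperature=0,    # a low temperature was the more stable setting in RQ1
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```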
We leave the study of advanced mitigation strategies (e.g., automatic review refinement) as the future work. For the cases related to _Unclear Location_ and _Unclear Changes_, we identified three strategies for improving the quality of reviews and models: incorporating specific location information in the review (abbreviated as Loc.), providing more explicit review comments (abbreviated as Exp.), and using more advanced GPT-4 model in ChatGPT. When utilizing GPT-4, in addition to employing the original review directly (abbreviated as Dir.), we can also add specific location information or provide more explicit review comments if needed. We aim to study whether the strategies could mitigate these challenges of ChatGPT. Table 8 shows the results with different mitigation strategies. The rows UL and UC refer to the cases under _Unclear Location_ and _Unclear Changes_, respectively. The results show that GPT-3.5, combined with the corresponding mitigation techniques, can resolve 24/32 (75%) of _Unclear Location_ cases and 6/11 (54.54%) of _Unclear Changes_ cases. By simply switching to GPT-4 without using mitigation techniques, it can resolve cases very close to those addressed by GPT-3.5 with mitigation techniques. After applying the mitigation techniques, GPT-4 can resolve 31/32 (96.88%) of Unclear Location and 10/11 (90.91%) of Unclear Changes cases. Figure 5 and Figure 6 show two examples with different mitigations. By revising the original review (i.e., adding location information and making it more explicit), ChatGPT (GPT-3.5) can accurately refine the code. Another method is to use a more advanced LLM, i.e., GPT-4, which is capable of directly producing correct results without the need for review revision. In addition, we show part of explanations generated by GPT-4, which are clear and reasonable. Moreover, unlike GPT-3.5, GPT-4 often asks the reviewer for specific modification locations or content when it cannot infer them from the review comment. This is particularly useful when applied in real-world scenarios, as it allows for iteratively helping the reviewer refine their review comment until the model can better understand it, ultimately improving the accuracy of the predicted code changes. **Answers to RQ4**: The main root causes identified in our analysis were the lack of domain knowledge, unclear location, and unclear changes. Two potential directions for mitigating these issues were identified: improving the large language model, such as using GPT-4 instead of GPT-3.5, and improving the quality of reviews, such as providing more clear information. ## 5. Implications Our study provides implications for both developers seeking to automate code refinement and researchers working in the code review field. **Developers:** Our findings show that ChatGPT has the potential to significantly aid developers in code refinement tasks. However, the results also suggest that developers must configure language models like ChatGPT carefully, ensure review quality, and validate output. Our study highlights the impact of temperature and prompt configuration on performance, suggesting that using lower temperatures and concise descriptions with scenario information can lead to better and more stable results. Developers should therefore carefully configure these parameters before using LLMs for code refinement tasks. Regarding the reviewers who create the code reviews, we have found that clearer reviews significantly aid ChatGPT in understanding modification suggestions. 
We suggest reviewers to write more specific and detailed review comments. Specifically, the reviewers should be careful in using specific syntax (e.g., code blocks) that may be difficult to be understood by ChatGPT. A safe solution could be that the reviewers can check the clarity of the review content with ChatGPT. For developers who utilize ChatGPT for automated code modification, we recommend conducting a careful manual review of ChatGPT's results. Especially for modifications requiring strong domain knowledge or cases where the review information is ambiguous, it is important to verify whether ChatGPT correctly understands the reviewer's intent and to check for any unnecessary modifications or deletions made by ChatGPT. One possible way is to read the ChatGPT's explanation carefully to check whether the model understands the review well. Furthermore, we recommend that users to choose advanced models if possible, such as GPT-4, which offer enhanced understanding capabilities. **Researchers:** Our study demonstrates that ChatGPT achieves promising results but still has room for improvement. Specifically, we identify some root causes of the underperformance of ChatGPT and propose some strategies to mitigate these challenges. These findings provide important guidance for future research in improving the performance of LLMs and enhancing the quality of code reviews. Potential research directions include automatic generation of high-quality reviews, review refinement, and automatic low-quality review detection and filtering. Furthermore, our study highlights the limitations of existing metrics such as EM and BLEU, suggesting the need for more accurate and reliable metrics for evaluating the results of language models in code refinement tasks. ## 6. Threats to Validity The selected baseline model and benchmark could be a threat to the validity of our results. We addressed this by selecting a state-of-the-art method as reference and creating a new test dataset, _CRN_, with stricter filtering rules. The randomness of ChatGPT predictions is another potential threat to the validity of our results. To mitigate this, we ran each setting ten times in RQ1, which provided us with more reliable and stable results. In RQ2, we did not run multiple times due to the high cost of accessing ChatGPT API. The prompts settings we used for ChatGPT could be a threat, as there may be \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Inaccurate Measurement} & \multicolumn{3}{c}{Incorrect Prediction} \\ \cline{2-9} Type & IO & UGF & CSD & RI & NDK & UL & UC & MF \\ \hline \#Samples & 13 & 2 & 19 & 8 & 107 & 32 & 11 & 14 \\ \hline \hline \end{tabular} \end{table} Table 7. Results of root cause analysis. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Strategy} & \multirow{2}{*}{\#Samples} & \multicolumn{4}{c}{GPT-3.5} & \multicolumn{4}{c}{GPT-4} \\ \cline{3-8} & & Loc. & Exp. & Total & Dir. & Loc. & Exp. & Total \\ \hline UL & 32 & 24 & - & 24 & 22 & 9 & - & 31 \\ UC & 11 & - & 6 & 6 & 6 & - & 4 & 10 \\ \hline \hline \end{tabular} \end{table} Table 8. Results of mitigation strategies. other optimal prompts for code refinement tasks. Moreover, the different wording of the prompts could also impact the results. We try to address this by following the existing best practices and selecting a range of prompts with varying levels of complexity and specificity, which allowed us to study which types of prompts worked best in different contexts. 
Another potential threat arises from the comparison between ChatGPT and CodeReviewer, which involve different settings. Specifically, in RQ1, we empirically determined the optimal parameters for temperature and prompts in ChatGPT. We assume that CodeReviewer also achieves its best performance with its hyper-parameter settings. The randomness of the selection of samples for the manual annotation process could also be a threat. However, we believe that this would not affect the overall conclusions drawn from our results, especially on the performance of ChatGPT on different categories in RQ3. The subjective nature of human decisions in the manual annotation process is another potential threat to the validity of our results. To address this, we obeyed a rigorous annotation process with two co-authors independently annotating each sample and a third author resolving any inconsistencies or conflicts through discussion. Moreover, the final Cohen's Kappa coefficient indicates relatively high agreement between the two annotators. ## 7. Related Work **Pre-trained Models for SE:** Large-scale pre-trained models has revolutionized the field of natural language processing (Dong et al., 2018; Chen et al., 2018), and its application in the software engineering domain has shown promising results (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). Currently, pre-trained model architectures are mainly divided into encoder-only, decoder-only, and encoder-decoder models (Chen et al., 2018; Chen et al., 2018). Encoder-only models pre-train a bidirectional Transformer, which can access token information before and after the current token when training (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). Decoder-only models allow the model to access only the tokens preceding the current token during the training process (Chen et al., 2018; Chen et al., 2018). GPT-3 (Chen et al., 2018) also employs decoder-only architectures and has a significantly larger parameter size (175 billion, 10x more than any previous LLMs). Additionally, GPT-3.5-Turbo (Li et al., 2019), the default model of ChatGPT, adopt Reinforced Learning with Human Feedback (RLHF) to enhance GPT3's ability to understand instructions and generate content aligned with human expectations. CodeT5 (Chen et al., 2018) is a typical pretraining model for code utilizing an encoder-decoder architecture. It adopts the T5 (Chen et al., 2018) model and considers crucial token type information from identifiers during pretraining. CommitBART (Chen et al., 2018) also employs an encoder-decoder architecture and is specially trained for commit representation. There are also some works focusing on exploring the learned program semantics for these pre-trained models in SE (Chen et al., 2018; Chen et al., 2018) and analyzing the robustness (Chen et al., 2018) and security (Li et al., 2019) of these models. **Automating Code Review Activities:** Studies have presented evidence that developer spend a considerable amount of time on code review activities (Chen et al., 2018; Chen et al., 2018), both writing review comments for other's code and performing code changes according to other's comments (Chen et al., 2018; Chen et al., 2018). Consequently, numerous studies (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018) have been carried out on automating the code review (ACR) activities, emphasizing their significance and potential impact (Chen et al., 2018). 
According to the stages of code review, prior studies on ACR can be categorized into three tasks (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018): (1) _Code Change Recommendation (Chen et al., 2018)_: Before the contributor submits the original code for review, the ACR model provides potential code changes that the reviewer might suggest. (2) _Review Comment Generation (Chen et al., 2018)_: After the contributor submits the code for review, the model provides possible review comments for the reviewer, serving as a draft for review comments. (3) _Code Refinement (Chen et al., 2018; Chen et al., 2018)_: After the reviewer provides review comments, the model suggests code changes for the contributor by considering both the review comments and submitted code. In this paper, we focus on the last task, _Code Refinement_, as it is the final and most crucial step in code review activities. Tufano et al. (Tufano et al., 2018) introduced a Recurrent Neural Network (RNN) based Neural Machine Translation (NMT) model for the code refinement task. CodeReviewer (Chen et al., 2018) utilized the CodeT5 model and designed four pre-training tasks related to code review. Recently, Zhou et al. (Zhou et al., 2018) compared existing ACR techniques, including Trans-Review (Chen et al., 2018), AutoTransform (Chen et al., 2018), and T5-Review (Chen et al., 2018). They discovered that CodeT5 outperformed existing ACR techniques in both code change recommendation and code refinement tasks. Although, they evaluated large language models for code, such as CodeT5 and CodeBERT, ChatGPT is significantly different from these LLMs with RLHF and emergent abilities due to a much larger number of parameters (Li et al., 2019), thus need further evaluation. Despite that ChatGPT have been evaluated on numerous NLP tasks (Chen et al., 2018) and several software engineering tasks (Chen et al., 2018; Chen et al., 2018), this paper presents the first comprehensive empirical study exploring ChatGPT's capabilities in the code refinement task, to the best of our knowledge. ## 8. Conclusion In this paper, we conduct an empirical study to investigate the potential of ChatGPT in automating code review tasks, with a focus on code refinement based on code reviews. We assess the impact of various ChatGPT configurations and examine its effectiveness on both standard code review benchmarks and a new dataset collected by us. Our findings highlight the promising potential of ChatGPT for code refinement, unveil the root causes of its underperformance, and suggest potential strategies to overcome these challenges. ## 9. Acknowledgment This work was partially supported by the National Key R&D Project (2021YFF1201102), the National Key R&D Program of China (2021ZD 0112903), the National Natural Science Foundation of China (Grant No. 61872262), the National Research Foundation, Singapore, and the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and Cyber Security Agency of Singapore.
2309.07112
A statistical mechanics framework for constructing non-equilibrium thermodynamic models
Far-from-equilibrium phenomena are critical to all natural and engineered systems, and essential to biological processes responsible for life. For over a century and a half, since Carnot, Clausius, Maxwell, Boltzmann, and Gibbs, among many others, laid the foundation for our understanding of equilibrium processes, scientists and engineers have dreamed of an analogous treatment of non-equilibrium systems. But despite tremendous efforts, a universal theory of non-equilibrium behavior akin to equilibrium statistical mechanics and thermodynamics has evaded description. Several methodologies have proved their ability to accurately describe complex non-equilibrium systems at the macroscopic scale, but their accuracy and predictive capacity is predicated on either phenomenological kinetic equations fit to microscopic data, or on running concurrent simulations at the particle level. Instead, we provide a framework for deriving stand-alone macroscopic thermodynamics models directly from microscopic physics without fitting in overdamped Langevin systems. The only necessary ingredient is a functional form for a parameterized, approximate density of states, in analogy to the assumption of a uniform density of states in the equilibrium microcanonical ensemble. We highlight this framework's effectiveness by deriving analytical approximations for evolving mechanical and thermodynamic quantities in a model of coiled-coil proteins and double stranded DNA, thus producing, to the authors' knowledge, the first derivation of the governing equations for a phase propagating system under general loading conditions without appeal to phenomenology. The generality of our treatment allows for application to any system described by Langevin dynamics with arbitrary interaction energies and external driving, including colloidal macromolecules, hydrogels, and biopolymers.
Travis Leadbetter, Prashant K. Purohit, Celia Reina
2023-09-13T17:34:58Z
http://arxiv.org/abs/2309.07112v1
# A statistical mechanics framework for constructing non-equilibrium thermodynamic models ###### Abstract Far-from-equilibrium phenomena are critical to all natural and engineered systems, and essential to biological processes responsible for life. For over a century and a half, since Carnot, Clausius, Maxwell, Boltzmann, and Gibbs, among many others, laid the foundation for our understanding of equilibrium processes, scientists and engineers have dreamed of an analogous treatment of non-equilibrium systems. But despite tremendous efforts, a universal theory of non-equilibrium behavior akin to equilibrium statistical mechanics and thermodynamics has evaded description. Several methodologies have proved their ability to accurately describe complex non-equilibrium systems at the macroscopic scale, but their accuracy and predictive capacity is predicated on either phenomenological kinetic equations fit to microscopic data, or on running concurrent simulations at the particle level. Instead, we provide a framework for deriving stand-alone macroscopic thermodynamics models directly from microscopic physics without fitting in overdamped Langevin systems. The only necessary ingredient is a functional form for a parameterized, approximate density of states, in analogy to the assumption of a uniform density of states in the equilibrium microcanonical ensemble. We highlight this framework's effectiveness by deriving analytical approximations for evolving mechanical and thermodynamic quantities in a model of coiled-coil proteins and double stranded DNA, thus producing, to the authors' knowledge, the first derivation of the governing equations for a phase propagating system under general loading conditions without appeal to phenomenology. The generality of our treatment allows for application to any system described by Langevin dynamics with arbitrary interaction energies and external driving, including colloidal macromolecules, hydrogels, and biopolymers. ## Significance The beautiful connection between statistical mechanics and equilibrium thermodynamics is one of the crowning achievements in modern physics. Significant efforts have extended this connection into the non-equilibrium regime. Impactful, and in some cases surprising, progress has been achieved at both the macroscopic and microscopic scales, but a key challenge of bridging these scales remains. In this work, we provide a framework for constructing macroscopic non-equilibrium thermodynamic models from microscopic physics without relying on phenomenology, fitting to data, or concurrent particle simulations. We demonstrate this methodology on a model of coiled-coil proteins and double stranded DNA, producing the first analytical approximations to the governing equations for a phase transforming system without phenomenological assumptions. ## Introduction indestanding and predicting far-from-equilibrium behavior is of critical importance for advancing a wide range of research and technological areas including dynamic behavior of materials, [18, 24], complex energy systems [15], as well as geological and living matter [3, 9]. Although our understanding of each of these diverse fields continues to grow, a universal theory of non-equilibrium processes has remained elusive. The past century, however, has seen numerous significant breakthroughs towards this ultimate goal, of which we detail only a few below. 
At the macroscopic scale, classical irreversible thermodynamics leverages the local equilibrium assumption to allow classical thermodynamic quantities to vary over space and time, enabling one to describe well known linear transport equations such as Fourier's and Fick's laws [25]. Extended irreversible thermodynamics further promotes the fluxes of these quantities to the level of independent variables in order to capture more general transport laws [20]. Further extensions to allow for arbitrary state variables (not just fluxes), or history dependence take the names of thermodynamics with internal variables (TIV) or rational thermodynamics, respectively [28, 29, 2, 47]. More recently, the General Equation for Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) framework and Onsager's variational formalism have proven to be successful enhancements of the more classical methods [11, 34, 5, 30]. On the other hand, linear response theory and fluctuation dissipation relations constitute the first steps towards a theory of statistical physics away from equilibrium. In the last few decades, interest in microscopic far-from-equilibrium processes has flourished due to the unforeseen discovery of the Jarzynski equality and other fluctuation theorems, as well as the advent of stochastic thermodynamics [19, 4, 40, 42, 16], and the application of large deviation theory to statistical physics [8, 39, 31]. These advances have changed the way scientists view thermodynamics, entropy, and the second law particularly at small scales. More specific to this work is the challenge of uniting scales. Given the success of the aforementioned macroscopic thermodynamic theories, how can one derive and inform the models within them using microscopic physics? Describing this connection constitutes the key challenge in formulating a unified far-from-equilibrium theory. As of yet, the GENERIC framework possesses the strongest microscopic foundation. Starting from a Hamiltonian system, one can either coarse grain using the projection operator formalism [36] or a statistical lack-of-fit optimization method [49, 38] in order to derive the GENERIC equations. However, these methods are either challenging to implement, analytically or numerically, or contain fitting parameters which must be approximated from data. Alternatively, one can begin from a special class of stochastic Markov processes and use fluctuation-dissipation relations or large deviation theory to the same effect [27, 32]. So far, numerical implementations of these methods have only been formulated for purely dissipative systems, with no reversible component. For this work, we shall utilize the less stringent framework of TIV, but recover GENERIC in an important case utilized in the examples. We will show how to leverage a variational method proposed by Eyink [7] for evolving approximate non-equilibrium probability distributions to derive the governing equations of TIV for systems whose microscopic physics is well described by Langevin dynamics. Furthermore, in the approach proposed here, the variational parameters of the probability density are interpreted as macroscopic internal variables, with dynamical equations fully determined through the variational method. Once the approximate density is inserted into the stochastic thermodynamics framework, the equations for the classical macroscopic thermodynamics quantities including work rate, heat rate, and entropy production appear naturally, and possess the TIV structure. 
For example, the internal variables do not explicitly appear in the equation for the work rate, and the entropy production factors into a product of fluxes and their conjugate affinities, which themselves are given by the gradient of a non-equilibrium free energy. Moreover, we show that when the approximating density is assumed to be Gaussian, the internal variables obey a gradient flow dynamics with respect to the non-equilibrium free energy, and so the resulting rate of entropy production is guaranteed to be non-negative. This direct link between microscopic physics and TIV has not been elaborated elsewhere, and we refer to this method as stochastic thermodynamics with internal variables (STIV). To illustrate and highlight the effectiveness of this method, we provide the results of two examples. The first is a paradigmatic example from stochastic thermodynamics: a single colloidal particle acted on by a linear external force, mimicking a macromolecule in an optical trap. It demonstrates all of the key features of the method while being simple enough to allow for comparison to exact solutions. The second example features a model system for studying phase transitions of bio-molecules, for example in coiled-coil proteins [22, 46] (depicted in Fig. 1) or double stranded DNA [10, 50]: a colloidal mass-spring-chain system with double-well interactions between neighboring masses. By comparing to Langevin simulations, we show that STIV not only produces accurate analytical approximations to relevant thermodynamic quantities, but also predicts the speed of a traveling phase front induced by external driving. Figure 1: The stochastic thermodynamics with internal variables (STIV) framework proposed here provides kinetic and thermodynamic equations for a broad class of systems described by Langevin dynamics, including the coiled-coil protein depicted in these snapshots. Taken from molecular dynamics simulations, atomic level structures are depicted in (A) and (B), while the unfolding due to an externally applied load becomes clear in the secondary structures shown in (C) and (D). Vital for the coiled-coil protein’s function, we study the dynamics of this transition from folded to unfolded configuration as a demonstration of the power of the STIV framework. Reproduced from [46] Fig. 1 with permission from the Royal Society of Chemistry. ## Theory ### Stochastic thermodynamics We begin by outlining the key ideas of stochastic thermodynamics which defines classical thermodynamic quantities at the trajectory level for systems obeying Langevin dynamics, such as those embedded in an aqueous solution. These quantities include work, heat flow, and entropy production among others, and these new definitions allow for an expanded study of far-from-equilibrium behavior at the level of individual, fluctuating trajectories. Stochastic thermodynamics is a highly active area of study, and has been developed far beyond what is detailed here, as we have limited our presentation to only what we need for introducing STIV. See [42] and the references therein for further details. The paradigmatic example within stochastic thermodynamics is a colloidal particle in a viscous fluid at constant temperature, \(T\), acted on by an external driving (we present the theory for a single particle in one dimension as the generalization to many particles in multiple dimensions is straightforward). 
This system is well described by an overdamped Langevin equation, which can be written as a stochastic differential equation of the form \[\mathrm{d}x(t)=-\frac{1}{\eta}\frac{\partial e}{\partial x}(x,\lambda)\, \mathrm{d}t+\sqrt{2}d\,\mathrm{d}b(t),\] where \(x(t)\) denotes the particle's position at time \(t\in[t_{i},t_{d}]\), \(\eta\) is the drag coefficient of the particle in the fluid, \(-\frac{\partial e}{\partial x}(x,\lambda)\) is the force acting on the particle coming from a potential energy, \(e\), \(\lambda(t)\) is a prescribed external control protocol, \(d=\frac{1}{\eta\beta}\) is the diffusion coefficient, \(\beta=1/k_{B}T\) the inverse absolute temperature in energy units, and \(b(t)\) is a standard Brownian motion. Given this system, stochastic thermodynamics enables one to define the internal energy, work, heat, and entropy at the level of the trajectory. Naturally, \(e(x(t),\lambda(t))\) defines the internal energy of the system. One does work on the system by changing \(e\) via the external control, \(\lambda\). Thus, the incremental work reads \[\mathrm{d}w=\frac{\partial e}{\partial\lambda}\ \dot{\lambda}\,\mathrm{d}t. \tag{1}\] Using the first law of thermodynamics, we conclude that the incremental heat flowing out of the system is \[\mathrm{d}q=\mathrm{d}w-\mathrm{d}e.\] An additional important quantity is the total entropy, \(s^{\mathrm{tot}}\). From the second law of thermodynamics, its macroscopic counterpart, \(S^{\mathrm{tot}}\) (to be defined), should be non-decreasing and describe the level of irreversiblity of the trajectory. To that end, the change in total entropy is defined using the log of the (Raydon-Nikodym) derivative of the probability of observing the given trajectory, \(\mathbb{P}[x(t)\mid\lambda]\), with respect to the probability of observing the reversed trajectory under the time reversed external protocol, \(\tilde{\mathbb{P}}[\tilde{x}(t)\mid\tilde{\lambda}]\) \[\Delta s^{\mathrm{tot}}[x(t)]=k_{B}\log\!\left(\frac{\mathrm{d}\mathbb{P}[x(t )\mid\lambda]}{\mathrm{d}\tilde{\mathbb{P}}[\tilde{x}(t)\mid\tilde{\lambda}]}\right)\] where \(\tilde{x}(t)=x(t_{\mathrm{f}}-t)\) and likewise for \(\tilde{\lambda}\). Upon taking the expectation with respect to all possible trajectories (and any probabilistic initial conditions), \[\Delta S^{\mathrm{tot}}=\left\langle\Delta s^{\mathrm{tot}}\right\rangle_{ \mathrm{paths}}=\int\Delta s^{\mathrm{tot}}[x(t)]\,\mathrm{d}\mathbb{P}[x(t) \mid\lambda]\] is recognized as \(k_{B}\) times the Kullback-Leibler divergence between the distributions of forward and backwards trajectories. As such, \(\Delta S^{\mathrm{tot}}\) must be non-negative. It is also useful to break up the total entropy change into the change in the entropy of the system, \[\Delta s[x(t)]=-k_{B}\log\!\left(\frac{p(x(t_{\mathrm{i}}),t_{\mathrm{f}}\mid \lambda)}{p(x(t_{\mathrm{i}}),t_{\mathrm{i}}\mid\lambda)}\right)\!,\] where \(p(x,t\mid\lambda)\) is the probability density of observing the particle at position \(x\) at time \(t\), and the change in the entropy of the medium \[\Delta s^{\mathrm{m}}=\Delta s^{\mathrm{tot}}-\Delta s. \tag{2}\] Finally, one defines the microscopic non-equilibrium free energy in terms of the potential and entropy as \(a^{\mathrm{neq}}=e-Ts\)[45]. 
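These trajectory-level definitions are straightforward to realize numerically. The following is a minimal Euler-Maruyama sketch that accumulates the work and heat along simulated trajectories; the harmonic trap energy \(e(x,\lambda)=\frac{k}{2}(\lambda-x)^{2}\), the linear pulling protocol, and all parameter values are illustrative placeholders rather than values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholders: harmonic trap e(x, lam) = 0.5*k*(lam - x)^2,
# a linear pulling protocol, and order-one parameters.
k, eta, beta = 1.0, 1.0, 1.0
d = 1.0 / (eta * beta)                      # diffusion coefficient d = 1/(eta*beta)
dt, n_steps, n_traj = 1e-3, 5000, 10_000
lam = lambda t: t

e       = lambda x, l: 0.5 * k * (l - x) ** 2
de_dx   = lambda x, l: -k * (l - x)         # force on the particle is -de_dx
de_dlam = lambda x, l:  k * (l - x)

x = np.sqrt(1.0 / (beta * k)) * rng.standard_normal(n_traj)   # equilibrium start at lam = 0
w = np.zeros(n_traj)                        # work accumulated along each trajectory
e0 = e(x, lam(0.0))

for step in range(n_steps):
    t = step * dt
    lam_dot = (lam(t + dt) - lam(t)) / dt
    w += de_dlam(x, lam(t)) * lam_dot * dt                      # dw = (de/dlam) * lam_dot * dt
    x += -de_dx(x, lam(t)) / eta * dt \
         + np.sqrt(2.0 * d * dt) * rng.standard_normal(n_traj)  # Euler-Maruyama update

q = w - (e(x, lam(n_steps * dt)) - e0)                          # first law: q = w - (e_final - e_initial)
print("<w> =", w.mean(), "  <q> =", q.mean())
```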
Using the path integral representation of \(\mathbb{P}[x(t)\mid\lambda]\) and \(\tilde{\mathbb{P}}[\tilde{x}(t)\mid\tilde{\lambda}]\), one finds that the incremental heat dissipated into the medium equals the incremental entropy change in the medium \(T\mathrm{d}s^{\mathrm{m}}=\mathrm{d}q\)[41]. This allows one to relate the change in non-equilibrium free energy to the work done and the change in total entropy \[\mathrm{d}a^{\mathrm{neq}} =\mathrm{d}e-T\mathrm{d}s\] \[=\mathrm{d}w-\mathrm{d}q-T\mathrm{d}s\] \[=\mathrm{d}w-T\mathrm{d}s^{\mathrm{tot}}. \tag{3}\] As we saw with \(\Delta S^{\mathrm{tot}}\), each microscopic quantity has a macroscopic counterpart defined by taking the expectation with respect to all possible paths. Throughout, we use the convention that macroscopic (averaged) quantities are written in capital, and microscopic quantities are written in lower case, e.g., \(A^{\mathrm{neq}}=\left\langle a^{\mathrm{neq}}\right\rangle_{\mathrm{paths}}\). ### Thermodynamics with internal variables Now we turn to the macroscopic description, and give a brief overview of Thermodynamics with internal variables (TIV). TIV has enjoyed decades of application as an important tool of study for irreversible processes in solids, fluids, granular media, and viscoelastic materials [35, 33, 43, 13, 6]. Originally formulated as an extension to the theory of irreversible processes, TIV posits that non-equilibrium description without history dependence requires further state variables beyond the classical temperature, number of particles, and applied strain (in the canonical ensemble, for example) in order to determine the system's evolution [28, 17]. These additional variables, the internal variables, encode the effects of the microscopic degrees of freedom on the observable macrostate. Thus, the relevant state functions take both classical and internal variables as input. The flexibility of the theory is apparent from the wide range of material behavior it can describe. The challenge, however, is in selecting descriptive internal variables, and in defining their kinetic equations in a way which is consistent with microscopic physics. Here, we take on the latter challenge. ### Variational method of Eyink The key mathematical tool we utilize for connecting TIV to stochastic thermodynamics is a variational method for approximating non-equilibrium systems laid out by Eyink [7]. This method generalizes the Rayleigh-Ritz variational method of quantum mechanics to non-Hermitian operators. The method assumes the system in question can be described by a probability density function governed by an equation of the form \(\frac{\partial}{\partial t}p=\mathcal{L}p\) (e.g., a Fokker-Planck equation associated with Langevin particle dynamics). Since the operator \(\mathcal{L}\) is not Hermitian, \(\mathcal{L}\neq\mathcal{L}^{\dagger}\), one must define a variational method over both probability densities \(p\) and test functions \(\psi\). Begin by defining the non-equilibrium action functional \[\Gamma[\psi,p]=\int_{0}^{\infty}\int_{X}\psi(\frac{\partial}{\partial t}- \mathcal{L})p\,\mathrm{d}x\,\mathrm{d}t.\] Under the constraint that \[\int_{X}\psi\ p\,\mathrm{d}x\Big{|}_{t=\infty}=\int_{X}\psi\ p\,\mathrm{d}x \Big{|}_{t=0},\] this action is stationary, \(\delta\Gamma[\psi^{*},p^{*}]=0\), if and only if \((\frac{\partial}{\partial t}-\mathcal{L})p^{*}=0\) and \((\frac{\partial}{\partial t}+\mathcal{L}^{\dagger})\psi^{*}=0\). 
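For completeness, a brief sketch of why stationarity yields these two equations (assuming \(p\) decays fast enough in \(x\) that spatial boundary terms vanish, and that the variations respect the stated constraint): varying \(\psi\) gives \[\delta_{\psi}\Gamma=\int_{0}^{\infty}\int_{X}\delta\psi\,\Big{(}\frac{\partial}{\partial t}-\mathcal{L}\Big{)}p\,\mathrm{d}x\,\mathrm{d}t,\] which vanishes for all \(\delta\psi\) only if \(\frac{\partial}{\partial t}p=\mathcal{L}p\). Varying \(p\), integrating by parts in time, and using the definition of the adjoint \(\mathcal{L}^{\dagger}\) in \(x\) gives \[\delta_{p}\Gamma=\Big{[}\int_{X}\psi\,\delta p\,\mathrm{d}x\Big{]}_{t=0}^{t=\infty}-\int_{0}^{\infty}\int_{X}\Big{(}\frac{\partial}{\partial t}\psi+\mathcal{L}^{\dagger}\psi\Big{)}\delta p\,\mathrm{d}x\,\mathrm{d}t.\] The boundary term vanishes for variations compatible with the constraint, so stationarity with respect to \(p\) requires \(\frac{\partial}{\partial t}\psi=-\mathcal{L}^{\dagger}\psi\).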
By defining the non-equilibrium "Hamiltonian" \(\mathcal{H}[\psi,p]=\int_{X}\psi\ \mathcal{L}p\ dx\), one can recast the variational equation \(\delta\Gamma[\psi^{*},p^{*}]=0\) in Hamiltonian form \[\frac{\partial}{\partial t}p^{*} =\frac{\delta}{\delta\psi}\mathcal{H}[\psi^{*},p^{*}] \tag{4}\] \[\frac{\partial}{\partial t}\psi^{*} =-\frac{\delta}{\delta p}\mathcal{H}[\psi^{*},p^{*}]. \tag{5}\] As it stands, the variation is taken over two infinite dimensional function spaces, and as such, it is only possible to find exact solutions in a handful of systems. However, one can still make use of these dynamical equations to find a variational approximation to the true solution which lies within some fixed subspace. To do so, one begins by assuming the true density, \(p^{*}(x,t)\), and test function \(\psi^{*}(x,t)\), can be approximated by a parameterized density \(\hat{p}(x,\alpha(t))\) and test function \(\hat{\psi}(x,\alpha(t))\) respectively, so that all of the time dependence is captured by the variables \(\alpha(t)=(\alpha_{1}(t),...,\alpha_{N}(t))\). For example, a standard method for choosing a parameterization is to pick an exponential family [1], or specifically a collection of quasi-equilibrium distributions [49]. In this case, one selects a finite number of linearly independent functions of the state \(\{\phi_{i}(x)\}_{i=1}^{N}\) to serve as observables describing the system. The parameterized densities \(\hat{p}(x,\alpha(t))\) are defined as (for time dependent "natural" parameters \(\alpha(t)\)) \[\hat{p}(x,\alpha(t))=\exp(\sum_{i=1}^{N}\alpha_{i}(t)\phi_{i}(x)+\mathcal{F} (\alpha(t)))\] where \(\mathcal{F}(\alpha)=-\log(\int\exp(\sum_{i=1}^{N}\alpha_{i}\phi_{i}(x)\Big{)} \,\mathrm{d}x)\) is a log-normalizing constant. The primary reason for using this parameterization is that for each \(\alpha\), this \(\hat{p}(x,\alpha)\) has maximum Shannon entropy with respect to all other probability densities subject to the constraint that the averages \(\left\langle\phi_{i}(x)\right\rangle_{\hat{p}}\) take on prescribed values. In the quasi-equilibrium case, \(\phi_{1}(x)\) is almost always taken as the system energy, and hence \(\alpha_{1}(t)\) becomes \(\beta\). Given any parameterization, quasi-equilibrium or otherwise, the dynamical equations Eq. 4 and Eq. 5 reduce to a coupled system of ordinary differential equations (ode) \[\big{\{}\alpha_{i},\alpha_{j}\big{\}}\frac{\mathrm{d}\alpha_{j}}{\mathrm{d}t}= \frac{\partial\mathcal{H}}{\partial\alpha_{i}} \tag{6}\] where \[\big{\{}\alpha_{i},\alpha_{j}\big{\}}=\int_{X}\frac{\partial\hat{\psi}}{ \partial\alpha_{i}}\frac{\partial\hat{p}}{\partial\alpha_{j}}-\frac{\partial \hat{\psi}}{\partial\alpha_{j}}\frac{\partial\hat{p}}{\partial\alpha_{i}}\, \mathrm{d}x.\] The solution to Eq. 6, \(\alpha^{*}(t)\), offers the best approximations to the true solution \(p^{*}(x,t)\approx\hat{p}(x,\alpha^{*}(t))\), \(\psi^{*}(x,t)\approx\hat{\psi}(x,\alpha^{*}(t))\), lying within the parameterized subspace. ### Stochastic thermodynamics with internal variables Finally, we fuse stochastic thermodynamics with this variational framework to provide a general method for constructing TIV models. Stochastic thermodynamics provides the appropriate thermodynamic definitions, while the variational formalism of Eyink will allow us to derive dynamical equations for the internal variables consistent with the microscopic physics. 
We return to the colloidal particle system with governing stochastic differential equation \[\mathrm{d}x(t)=-\frac{1}{\eta}\frac{\partial e}{\partial x}(x,\lambda)\ \mathrm{d}t+\sqrt{2d}\ \mathrm{d}b(t).\] If \(p(x,t\mid\lambda)\) is the probability density of observing the system in state \(x\) at time \(t\) given a prespecified external protocol, \(\lambda(t)\), then \(p(x,t\mid\lambda)\) obeys the Fokker-Planck equation \[\frac{\partial p}{\partial t}=\mathcal{L}\ p=\frac{1}{\eta}\frac{\partial}{ \partial x}\cdot(\frac{\partial e}{\partial x}\ p)+d\Delta_{x}p.\] When \(\lambda(t)\) is held constant, the true density tends towards the equilibrium Boltzmann distribution, \(p^{*}(x,t\mid\lambda)\propto\exp(-\beta e(x,\lambda))\). Away from equilibrium, \(p^{*}(x,t\mid\lambda)\) may be highly complex, and in that case we would like to find a low dimensional representation which captures the physical phenomena of interest. To do so, we choose a class of parameterized densities \(\hat{p}(x,\alpha)\) to use in the variational method of Eyink, keeping in mind that the variables \(\alpha(t)\) are to become the internal variables in the macroscopic description. This is in direct analogy with the assumption of a uniform probability density in the microcanonical ensemble, or the Maxwellian distribution in the canonical ensemble. Note, also that in keeping with ensembles in which volume or strain is controlled rather than force or stress, we assume no explicit dependence on the external protocol \(\lambda\) in \(\hat{p}(x,\alpha)\). This will prove necessary mathematically in what follows. Finally, we do not explicitly consider the dependence of \(\hat{p}\) on \(\beta\), as we have assumed that temperature is constant. We next define the approximate entropy \(\hat{s}(x,\alpha)=-k_{B}\log(\hat{p}(x,\alpha))\) and use its derivatives with respect to the internal variables to define the test functions in the variational formalism \[\hat{\psi}(x,\alpha,\gamma)=1+\gamma\cdot\frac{\partial\hat{s}}{\partial \alpha}.\] Since the true solution to the adjoint equation \(\frac{\partial\hat{\psi}^{*}}{\partial t}=-\mathcal{L}^{\dagger}\psi^{*}\) is \(\psi^{*}\equiv\mathrm{const.}\), the variables \(\gamma\) serve as expansion coefficients about the true solution \(\psi^{*}\equiv 1\). In the SI Appendix, we show that they essentially function as dummy variables, as the variational solution fixes \(\gamma(t)\equiv 0\) for all time. Hence, the vector \(\alpha(t)\) will be the only relevant variable. Assuming this choice of density and test functions, the variational formalism of Eyink yields the dynamical equation \[\left\langle\frac{\partial\hat{s}}{\partial\alpha}\frac{\partial\hat{s}^{ \ T}}{\partial\alpha}\right\rangle_{\hat{p}}\cdot\dot{\alpha}=-k_{B}\left\langle \mathcal{L}^{\dagger}\frac{\partial\hat{s}}{\partial\alpha}\right\rangle_{\hat{ p}} \tag{7}\] where \(\langle g\rangle_{\hat{p}}=\int g(x)\hat{p}(x,\alpha)dx\) denotes averaging with respect to \(\hat{p}\). This equation reveals the utility of our choice of \(\hat{\psi}\). The matrix on the left hand side \(\mathbb{F}_{ij}=\left\langle\frac{\partial\hat{s}}{\partial\alpha_{i}}\frac{ \partial\hat{s}}{\partial\alpha_{j}}\right\rangle_{\hat{p}}\) is \(k_{B}^{2}\) times the Fisher information matrix of the density \(\hat{p}(x,\alpha)\)[49]. 
This matrix is always symmetric, and is positive definite so long as the functions \(\{\frac{\partial\hat{s}}{\partial\alpha_{i}}(x,\alpha)\}_{i=1}^{N}\) are linearly independent as functions of \(x\) for all \(\alpha\). Picking \(\alpha(0)\) such that \(\hat{p}(x,\alpha(0))\approx p^{*}(x,0\mid\lambda)\), and using Eq. 7 to solve for \(\alpha(t)\) gives us the variational solution for \(\hat{p}(x,\alpha(t))\approx p^{*}(x,t\mid\lambda)\) for all time. Having approximated the density using the internal variables, we turn to stochastic thermodynamics to impose the thermodynamic structure. In order to make use of the approximate density, \(\hat{p}\), we simply use the stochastic thermodynamics definitions of thermodynamic quantities at the macroscale, but make the substitution \(p^{*}(x,t\mid\lambda)\rightarrow\hat{p}(x,\alpha(t))\). Following this rule, we generate the thermodynamic quantities as \[\hat{E}(\alpha,\lambda) =\langle e\rangle_{\hat{p}}\] \[\hat{S}(\alpha) =-k_{B}\left(\log(\hat{p})\right)_{\hat{p}}\] \[\hat{A}^{\mathrm{neq}}(\alpha,\lambda) =\hat{E}-T\hat{S}\] \[\frac{d}{dt}\hat{W}(\alpha,\lambda) =\left\langle\frac{\partial e}{\partial\lambda}\dot{\lambda} \right\rangle_{\hat{p}} \tag{8}\] \[T\frac{d}{dt}\hat{S}^{\mathrm{tot}}(\alpha,\lambda) =\frac{d}{dt}\hat{W}-\frac{d}{dt}\hat{A}^{\mathrm{neq}}\] (9) \[\frac{d}{dt}\hat{S}^{\mathrm{m}}(\alpha,\lambda) =\frac{d}{dt}\hat{S}^{\mathrm{tot}}-\frac{d}{dt}\hat{S} \tag{10}\] where Eq. 8, 9, and 10 are derived from Eq. 1, 3, and 2 respectively, as shown in the SI Appendix. Since we have assumed a constant bath temperature for the governing Langevin equation, we do not explicitly write the dependence of the quantities above on \(\beta\). Recall, a key assumption is that the approximate density should be independent of \(\lambda\) for fixed \(\alpha\). Hence, the approximate entropy, \(\hat{S}\), is a function of \(\alpha\) alone. This means that the partial derivative with respect to \(\lambda\) can be factored out of the expectation in Eq. 8. Since \(\hat{S}\) does not depend on \(\lambda\), we may write \[\frac{d}{dt}\hat{W}=\frac{\partial\hat{E}}{\partial\lambda}\dot{\lambda}=\frac{ \partial}{\partial\lambda}\left(\hat{E}-T\hat{S}\right)\dot{\lambda}=\frac{ \partial\hat{A}^{\text{neq}}}{\partial\lambda}\dot{\lambda}\equiv\hat{F}^{ \text{ex}}\dot{\lambda},\] so that the approximate external force is given by the gradient of \(\hat{A}^{\text{neq}}\) with respect to the external protocol, \(\hat{F}^{\text{ex}}\equiv\frac{\partial\hat{A}^{\text{neq}}}{\partial\lambda}\). Moreover, Eq. 9 and Eq. 10 simplify to \[T\frac{d}{dt}\hat{S}^{\text{tot}}=-\frac{\partial\hat{A}^{\text{neq}}}{ \partial\alpha}\dot{\alpha}\qquad\qquad\frac{d}{dt}\hat{Q}=-\frac{\partial \hat{E}}{\partial\alpha}\dot{\alpha}.\] Thus, the approximate work rate and the approximate rate of entropy production of the medium are given by the derivatives of \(\hat{E}\) and the approximate work rate and the approximate rate of total entropy production are given by the derivatives of \(\hat{A}^{\text{neq}}\). In particular, the rate of total entropy production takes the form of a product of fluxes, \(\dot{\alpha}\), and affinities, \(\mathcal{A}_{\alpha}=-\frac{\partial\hat{A}^{\text{neq}}}{\partial\alpha}\). Likewise, the internal variables do not explicitly enter into the equation for the work rate, just as in TIV. 
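To illustrate how Eq. 7 can be used in practice, the following sketch evaluates both sides by Monte Carlo for a one-dimensional Gaussian trial density with \(\alpha=(\mu,\sigma)\) and a user-supplied potential, then advances the internal variables with an explicit Euler step. The score derivatives in the code follow directly from the Gaussian form of \(\hat{p}\); the harmonic potential and all parameter values are placeholders, and \(k_{B}\) cancels between the two sides of Eq. 7, so it is set to one.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, beta = 1.0, 1.0
d = 1.0 / (eta * beta)

# Placeholder potential: harmonic trap e(x, lam) = 0.5*k*(lam - x)^2, so e'(x) = k*(x - lam).
k = 1.0
de_dx = lambda x, lam: k * (x - lam)

def stiv_rates(mu, sigma, lam, n_samples=100_000):
    """Solve Eq. 7 for (mu_dot, sigma_dot) with a 1D Gaussian trial density.

    With s_hat/kB = (x-mu)^2/(2 sigma^2) + log(sigma*sqrt(2*pi)):
      d(s_hat)/d(mu)    = -(x-mu)/sigma^2
      d(s_hat)/d(sigma) =  1/sigma - (x-mu)^2/sigma^3
    and L^dagger g = -(1/eta) e'(x) g'(x) + d g''(x).
    """
    x = mu + sigma * rng.standard_normal(n_samples)
    g = np.stack([-(x - mu) / sigma**2,
                  1.0 / sigma - (x - mu) ** 2 / sigma**3])      # d(s_hat)/d(alpha)
    gx = np.stack([-np.ones_like(x) / sigma**2,
                   -2.0 * (x - mu) / sigma**3])                 # d/dx of the above
    gxx = np.stack([np.zeros_like(x),
                    -2.0 * np.ones_like(x) / sigma**3])         # d^2/dx^2 of the above
    F = g @ g.T / n_samples                                     # Fisher-information matrix
    rhs = -np.mean(-(1.0 / eta) * de_dx(x, lam) * gx + d * gxx, axis=1)
    return np.linalg.solve(F, rhs)

# Explicit Euler integration of the internal variables under a linear pulling protocol.
mu, sigma, dt = 0.0, 1.0 / np.sqrt(beta * k), 1e-2
for step in range(500):
    lam = step * dt
    mu_dot, sigma_dot = stiv_rates(mu, sigma, lam)
    mu, sigma = mu + mu_dot * dt, sigma + sigma_dot * dt
print(mu, sigma)   # for this potential the rates approximately recover the closed-form
                   # equations reported for the single-particle example in the Results
```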
Moreover, in the SI Appendix, we prove that for an arbitrary interaction energy \(e(x,\lambda)\), internal variables obey the stronger GENERIC structure [37], obeying a gradient flow equation with respect to the non-equilibrium free energy, whenever the approximate probability density is assumed to be Gaussian. In this case, the internal variables are the mean and inverse covariance (\(\alpha=(\mu,\Sigma^{-1})\)) of the probability density of the state, \(x\in\mathbb{R}^{N}\). Symbolically, we define \[\hat{p}(x,\mu,\Sigma^{-1})=\sqrt{\det\!\left(\frac{\Sigma^{-1}}{2\pi}\right) }\exp\!\left(-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right)\!. \tag{11}\] This choice of form for the approximate density is a standard choice in popular approximation methods including Gaussian phase packets [14, 12] and diffusive molecular dynamics [23, 26] primarily for its tractable nature. As mentioned, the dynamics of \(\mu\) and \(\Sigma^{-1}\) are given in terms of gradients with respect to the non-equilibrium free energy \[\dot{\mu}=-\frac{1}{\eta}\frac{\partial\hat{A}^{\text{neq}}}{\partial\mu}, \qquad\dot{\Sigma}^{-1}=-M(\Sigma^{-1}):\frac{\partial\hat{A}^{\text{neq}}}{ \partial\Sigma^{-1}} \tag{12}\] for a positive semi-definite dissipation tensor \(M(\Sigma^{-1})\), and hence, the total rate of entropy production is guaranteed to be non-negative \[T\frac{d}{dt}\hat{S}^{\text{tot}}=\frac{1}{\eta}\!\left\|\frac{\partial\hat{ A}^{\text{neq}}}{\partial\mu}\right\|^{2}+\frac{\partial\hat{A}^{\text{neq}}}{ \partial\Sigma^{-1}}:M:\frac{\partial\hat{A}^{\text{neq}}}{\partial\Sigma^{-1 }}. \tag{13}\] Thus, we see that the thermodynamic structure emerges naturally by utilizing the variational method of Eyink within the context of stochastic thermodynamics, and that we are not forced to postulate phenomenological equations for \(\alpha(t)\). They emerge directly from the variational structure. ## Results ### A single colloidal particle To illustrate the STIV framework we apply it to a toy model: an overdamped, colloidal particle acted on by an external force that is linear in the extension of a spring connected to the particle. Despite its simplicity, this model is often used to describe a molecule caught in an optical trap. In one dimension, the governing Langevin equation for the particle's position is given by \(\text{d}x=-\frac{1}{\eta}\frac{\partial\hat{E}}{\partial x}(x,\lambda)\text{ d}t+\sqrt{2d}\,\text{d}b\), where \(e(x,\lambda)=\frac{k}{2}(\lambda-x)^{2}\) is the energy of the spring or the trapping potential, and \(\lambda(t)\) is an arbitrary external protocol. The corresponding Fokker-Planck operator is \(\mathcal{L}\ p=\frac{1}{\eta}\frac{\partial}{\partial x}\left(\frac{\partial a }{\partial x}p\right)+d\frac{\partial^{2}}{\partial x}p\). The true solution is an Ornstein-Uhlenbeck (O.U.) process, thus, providing an exactly solvable model for comparison [44]. Since the probability density of the O.U. process is Gaussian for all time (assuming a Gaussian initial distribution), we use a Gaussian approximate distribution with mean \(\mu\) and standard deviation \(\sigma\) as internal variables (Eq. 11 with \(\Sigma^{-1}=1/\sigma^{2}\)). It is straightforward to input this density into the variational formalism of Eyink and compute the dynamics. The details of the derivation are written out in the SI Appendix. The resulting dynamical equations recover those of the O.U. 
process \[\dot{\mu}=-\frac{k}{\eta}(\mu-\lambda),\qquad\dot{\sigma}=-\frac{k}{\eta}\sigma\left(1-\frac{1}{k\beta\sigma^{2}}\right).\] Now that we have the dynamics, we turn to computing the thermodynamic quantities. Of particular interest is the fact that the fluxes of the internal variables are linear in the affinities, \(-\frac{\partial\hat{A}^{\text{neq}}}{\partial\mu}=\eta\dot{\mu}\), \(-\frac{\partial\hat{A}^{\text{neq}}}{\partial\sigma}=\eta\dot{\sigma}\), hence ensuring a non-negative entropy production. We can also find the approximate work rate, heat rate, and rate of total entropy production explicitly \[\frac{\text{d}}{\text{d}t}\hat{W}=\eta\dot{\mu}\dot{\lambda},\qquad\frac{\text{d}}{\text{d}t}\hat{Q}=\eta\dot{\mu}^{2}-k\sigma\dot{\sigma},\qquad T\frac{\text{d}}{\text{d}t}\hat{S}^{\text{tot}}=\eta\dot{\mu}^{2}+\eta\dot{\sigma}^{2}.\] Although a toy system, this example highlights the fact that when the true solution to the governing PDE for the probability density lies in the subspace spanned by the trial density, the true solution is recovered and the relevant thermodynamic quantities can be computed exactly via the non-equilibrium free energy, as can be seen in Fig. 2. ### Double-well colloidal mass-spring-chain For our primary example, we study a colloidal mass-spring-chain system with double-well interactions between masses. Depicted in the inset of Fig. 4 E, this model of phase front propagation in coiled-coil proteins and double stranded DNA contains several metastable configurations, corresponding to the different springs occupying one of the two minima of the interaction energy, and exhibits phase transitions between them. A key test for the STIV framework is whether or not the phase can be accurately predicted, and more importantly, whether the kinetics and thermodynamics of phase transitions can be captured without phenomenological kinetic equations. An almost identical model to the one studied here is considered in [48], but in a Hamiltonian setting rather than as a colloidal system. There, the authors make use of the piece-wise linearity of the force, \(-\frac{\partial e}{\partial x}\), to derive an exact solution for the strain in the presence of a phase front traveling at constant velocity, and the kinetic relation for this phase front, without the use of phenomenological assumptions. Our solution, on the other hand, is inherently approximate (though accurate), but does not depend on either the assumption of a constant phase-front velocity or the specific piece-wise linear form of the force. The choice of interaction potential is simply a convenience, and the STIV method could be applied just as easily to quartic or other double-well interaction potentials. We assume each spring has internal energy described by the following double-well potential: \[u(z)=\begin{cases}\frac{k_{1}}{2}(z+l_{1})^{2}&z\leq 0\\ \frac{k_{2}}{2}(z-l_{2})^{2}+h_{2}&z>0\end{cases}\] where \(h_{2}\) is chosen so that \(u(z)\) is continuous (i.e., \(h_{2}=(k_{1}l_{1}^{2}-k_{2}l_{2}^{2})/2\)). For simplicity, we have placed one well on each side of the origin so that the transition point falls at \(z=0\). Letting \(x=(x_{1},...,x_{N})\) be the positions of the \(N\) interior masses, the total energy, given an external protocol \(\lambda\), is \(e(x,\lambda)=\sum_{i=1}^{N}u(x_{i}-x_{i-1})+u(\lambda-x_{N})\) where \(x_{0}\equiv 0\). 
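The Langevin simulations that serve as the reference for this example can be generated from this energy with a short Euler-Maruyama integration. The sketch below is a minimal version; the parameter values, the pulling rate, and the all-left-well initial condition are illustrative choices, not the exact settings behind the figures in this paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters and protocol (not the values behind the paper's figures).
N = 8                                    # number of interior masses
k1, k2, l1, l2 = 1.0, 1.0, 1.0, 1.0      # symmetric double-well interaction
eta, beta = 1.0, 1.0
d = 1.0 / (eta * beta)
dt, n_steps, n_traj = 1e-3, 20_000, 1_000

lam0 = -(N + 1) * l1                     # chain length with every spring at its left-well minimum
lam = lambda t: lam0 + 1.0 * t           # linear pulling protocol

def du(z):
    """Piecewise-linear force u'(z) of the double-well spring energy."""
    return np.where(z <= 0.0, k1 * (z + l1), k2 * (z - l2))

# x[i, j]: position of interior mass i+1 in trajectory j; all springs start in the left well.
x = -l1 * np.arange(1, N + 1)[:, None] * np.ones((1, n_traj))

for step in range(n_steps):
    t = step * dt
    ends = np.vstack([np.zeros((1, n_traj)), x, lam(t) * np.ones((1, n_traj))])
    z = np.diff(ends, axis=0)            # the N+1 spring extensions
    s = du(z)
    force = s[1:] - s[:-1]               # net force -de/dx_i on each interior mass
    x += force / eta * dt + np.sqrt(2.0 * d * dt) * rng.standard_normal(x.shape)

spring_ext = np.diff(np.vstack([np.zeros((1, n_traj)), x]), axis=0)
print("mean mass positions:", x.mean(axis=1))
print("empirical phase fraction per spring:", (spring_ext > 0).mean(axis=1))
```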
We begin by assuming that the positions of the masses can be well described using a multivariate Gaussian distribution, and set the internal variables to be the mean \(\mu\) and the inverse covariance \(\Sigma^{-1}\) as in Eq. 11. The exact form of the dynamical equations for the internal variables induced by the STIV framework can be found in the SI Appendix. As expected, the equations obey the gradient flow structure given by Eq. 12, where in this case we have \(M_{ij,kl}=\frac{1}{\eta}(\Sigma_{ik}^{-1}\Sigma_{jl}^{-2}+\Sigma_{ik}^{-2}\Sigma_{jl}^{-1}+\Sigma_{il}^{-1}\Sigma_{jk}^{-2}+\Sigma_{il}^{-2}\Sigma_{jk}^{-1})\). The rate of total entropy production, given by Eq. 13, is thus non-negative. It is interesting to note that the dynamical equations for \(\mu\) and \(\Sigma^{-1}\) are coupled through an approximation of the phase fraction of springs occupying the right well \[\hat{\Phi}_{i}(t)\equiv\int_{-\infty}^{\infty}\mathds{1}_{(x_{i}-x_{i-1}>0)}\,\hat{p}(x,\mu(t),\Sigma^{-1}(t))\mathrm{d}x.\] As an important special case, fixing the interaction parameters to produce a quadratic interaction, \(l_{1}=-l_{2}\) and \(k_{1}=k_{2}=k\), causes the dependence on \(\hat{\Phi}\) to drop out, and the equations for \(\mu\) and \(\Sigma^{-1}\) decouple. In Fig. 3, we show a comparison of the probability densities produced by the STIV framework for a two mass system to those obtained from Langevin simulations of the governing stochastic differential equation. Although fine details of the multimodal structure are missed (as is to be expected when using a Gaussian model), the size and location of the dominant region of non-zero probability are captured, making it possible to compute the relevant macroscopic thermodynamic quantities, as we discuss next. Figure 2: A comparison of the STIV method (black solid line) to Langevin simulations (red dashes, 100,000 simulations) for a single colloidal particle in a harmonic optical trap. (A) The mean mass position, \(\mu\approx\langle x\rangle\), as well as the external pulling protocol, \(\lambda(t)\), in blue. (B) The standard deviation, \(\sigma\approx\sqrt{\langle(x-\langle x\rangle)^{2}\rangle}\), of mass positions. (C) The external force on the optical trap. (D) The total rate of entropy production. Since the exact form of the true solution \(p^{*}(x,t\mid\lambda)\) is unknown, we compare the results of the framework to simulations of the Langevin dynamics of a system with 8 free masses in Fig. 4. Despite the fact that the true solution is multimodal due to the existence of several metastable configurations, it is clear that the approximations of the mean mass position (A), phase fraction (B), external force (\(\frac{\partial E}{\partial\lambda}\approx\frac{\partial\hat{E}}{\partial\lambda}=\frac{\partial\hat{A}^{\text{neq}}}{\partial\lambda}\)) (C), and total rate of entropy production (D) are all highly accurate. This holds true for a variety of pulling protocols including linear (1), sinusoidal (2), and a step displacement (3,4), as well as for symmetric (1,2,3) and asymmetric (4) interaction potentials. Returning to (B), we see that for a system with an initial configuration in which all the springs begin in the left well, we can observe a propagating phase front as the springs, one by one, transition from the left to the right well. This transition is captured by the internal variable model with high accuracy, allowing one to directly approximate the velocity of the phase front. 
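Under the Gaussian ansatz the approximate phase fraction is available in closed form from the internal variables, which is what makes the phase-front kinetics directly accessible. The short sketch below computes \(\hat{\Phi}_{i}\) from the mean and covariance (the covariance being the inverse of the internal variable \(\Sigma^{-1}\)) and locates a front position by a half-crossing convention; that convention and the function names are illustrative choices, not taken from this paper.

```python
import numpy as np
from scipy.stats import norm

def phase_fractions(mu, Sigma):
    """Phi_hat_i = P(x_i - x_{i-1} > 0) under the Gaussian ansatz (x_0 = 0 is deterministic).

    mu    : (N,)   mean positions of the interior masses
    Sigma : (N, N) covariance of the positions, i.e. the inverse of the internal variable Sigma^{-1}
    """
    n = len(mu)
    mu_ext = np.concatenate([[0.0], mu])
    Sig_ext = np.zeros((n + 1, n + 1))
    Sig_ext[1:, 1:] = Sigma
    mean = np.diff(mu_ext)                                  # E[x_i - x_{i-1}]
    var = np.diag(Sig_ext)[1:] + np.diag(Sig_ext)[:-1] - 2.0 * np.diag(Sig_ext, k=1)
    return norm.cdf(mean / np.sqrt(var))                    # Gaussian tail probability per spring

def front_position(phi):
    """Interpolated spring index at which the phase-fraction profile crosses 1/2."""
    if np.all(phi >= 0.5):
        return float(len(phi))
    idx = int(np.argmax(phi < 0.5))                         # first spring still mostly in the left well
    if idx == 0:
        return 0.0
    p0, p1 = phi[idx - 1], phi[idx]
    return (idx - 1) + (p0 - 0.5) / (p0 - p1)

# A front velocity follows by differencing front_position(phase_fractions(mu(t), Sigma(t)))
# along the trajectory of internal variables produced by the STIV equations.
```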
We note, however, that the quantitative accuracy of the method appears to hold most strongly in the case that the thermal energy is significantly larger or smaller than the scale of the energy barrier separating the two potential energy wells in the spring interaction. When the thermal energy and the potential energy barrier are at the same scale, the true density of states is highly multimodal and not well approximated by a multivariate Gaussian, see Movie S2. In this case, the STIV approximation captures the behavior of only the dominant mode. When the thermal energy is large relative to the barrier, the thermal vibrations cause the modes to collapse into a single "basin" which can be well approximated by the STIV density, see Movie S1. Finally, when the thermal energy is small, the true density is effectively unimodal and undergoes rapid jumps between the different energy minima. The Gaussian STIV density, again, becomes an effective choice for approximation.

The dynamical equations for the internal variables take the form of a discretized partial differential equation (pde). Assuming we properly rescale the parameters of the interaction potential, the viscosity, and the temperature so that the equilibrium system length, energy, entropy, and quasistatic viscous dissipation are independent of the number of masses (\(l_{i}=l_{i}^{0}/N\), \(k_{i}=Nk_{i}^{0}\), \(\eta=\eta^{0}/N\), \(\beta=N\beta^{0}\) for \(i\in\{1,2\}\)), then, in the limit as the number of masses tends to infinity, the internal variables \(\mu_{i}\) and \(\Sigma_{ij}^{-1}\) become functions of continuous variables \(x\in[0,1]\) and \((x,y)\in[0,1]\times[0,1]\), respectively.

Figure 3: A comparison of the probability density for the spring lengths for a two mass mass-spring-chain system with double-well spring energies. The colored histograms depict densities collected from 100,000 Langevin simulations of the solution to the governing stochastic differential equation, while the grey-scale contour lines show the approximation using STIV. On each panel, the horizontal axis gives the length of the first spring, \(x_{1}\), and the vertical axis gives the length of the second, \(x_{2}-x_{1}\). Panels from left to right show equal increments in time. We see that despite missing the details of the multi-modal behavior apparent in the Langevin simulations, the STIV approximation successfully tracks the location and size of the dominant region of non-zero probability.

Since it is challenging to invert a continuum function \(\Sigma^{-1}(x,y,t)\), we make use of the identity \(\dot{\Sigma}_{ij}=-\big(\Sigma\,\frac{\mathrm{d}\Sigma^{-1}}{\mathrm{d}t}\,\Sigma\big)_{ij}\) to derive the following limiting pde
for \(\mu(x,t)\), \(\Sigma(x,y,t)\), the strain, \(\epsilon(x,t)\equiv\frac{\partial\mu}{\partial x}(x,t)\), and the covariance of the strain, \(\mathcal{E}(x,y,t)\equiv\frac{\partial^{2}\Sigma}{\partial x\partial y}(x,y,t)\) \[\frac{\partial\mu}{\partial t}=\frac{1}{\eta^{0}}\frac{\partial}{\partial x}\bigg\{k_{1}^{0}\left(\epsilon+l_{1}^{0}\right)(1-\hat{\Phi})+k_{2}^{0}\left(\epsilon-l_{2}^{0}\right)\hat{\Phi}+(k_{2}^{0}-k_{1}^{0})\mathcal{E}\frac{\partial\hat{\Phi}}{\partial\epsilon}\bigg\}\] \[\frac{\partial\Sigma}{\partial t}=2\Delta^{w}\Sigma\] \[\mu(x=0,t)=0,\qquad\mu(x=l_{0},t)=\lambda(t)\] \[\Sigma(x=0,y,t)=\Sigma(x=l_{0},y,t)=0\] \[\Sigma(x,y=0,t)=\Sigma(x,y=l_{0},t)=0\] with the approximate phase fraction defined through \[\hat{\Phi}(x,t)=\hat{\Phi}(\epsilon,\mathcal{E})=\Phi\left(\frac{\epsilon(x,t)}{\sqrt{\mathcal{E}(x,x,t)}}\right).\] Here, \(\Delta^{w}=\frac{\partial}{\partial x}w(x,t)\frac{\partial}{\partial x}+\frac{\partial}{\partial y}w(y,t)\frac{\partial}{\partial y}\), \(w(x,t)=\frac{k_{1}^{0}}{\eta^{0}}(1-\hat{\Phi})+\frac{k_{2}^{0}}{\eta^{0}}\hat{\Phi}-\frac{1}{\eta^{0}}(k_{1}^{0}l_{1}^{0}+k_{2}^{0}l_{2}^{0})\frac{\partial\hat{\Phi}}{\partial\epsilon}\), and \(\Phi(\xi)\) is the cumulative distribution function of a standard Gaussian (mean zero, variance one). Both equations for \(\frac{\partial\mu}{\partial t}\) and \(\frac{\partial\Sigma}{\partial t}\) contain contributions from the left well (the terms multiplying \((1-\hat{\Phi})\)), the right well (the terms multiplying \(\hat{\Phi}\)), and the phase boundary (the terms multiplying \(\frac{\partial\hat{\Phi}}{\partial\epsilon}\)), and in the SI Appendix we give assumptions on the continuum limit for \(\Sigma(x,y,t)\) such that these dynamical equations maintain the gradient flow structure \[\frac{\partial\mu}{\partial t}=-\frac{1}{\eta}\frac{\delta\hat{A}^{\text{neq}}}{\delta\mu}\] \[\frac{\partial\Sigma}{\partial t}=-\int_{0}^{1}\int_{0}^{1}M(x,y,z,w,t)\frac{\delta\hat{A}^{\text{neq}}}{\delta\Sigma}(z,w,t)\,\text{d}z\,\text{d}w.\]

Figure 4: (A) A comparison of the predicted mean mass locations using STIV (black lines) and the empirical mean of 100,000 Langevin simulations (red dashes) for the 8 mass colloidal mass-spring-chain with double-well interactions and a linear external protocol (external protocol shown in blue throughout). Except in (C4) and (D4), the parameters of the symmetric interaction potential are \(k_{1}=k_{2}=l_{1}=l_{2}=1\). (B) The predicted and simulated phase fractions of springs in the right well for the same system as (A). (C) The predicted versus simulated external force for four different pulling protocols: (1) linear, (2) sinusoidal, (3) step, (4) step with an asymmetric interaction potential between masses (\(k_{1}=1,l_{1}=1,k_{2}=2,l_{2}=1/2\)). (D) The predicted versus simulated rate of total entropy production for the same four pulling protocols as in (C). The external protocols used are shown in the insets of (C,D). (E) Cartoon of the mass-spring-chain configuration. One side is held fixed while the other is controlled by the external protocol.

In Fig. 5 (A), we demonstrate that the continuum response of the system can be well approximated through the STIV framework with finitely many masses. We see agreement between the mean mass positions observed in Langevin simulations and those predicted using the STIV framework for both 17 and 62 masses, verifying that both discretizations capture the continuum response.
This allows us to use the 17 mass system to accurately predict important continuum level quantities such as the external force as a function of extension, \(\lambda\), Fig. 5 (B), the phase front speed, Fig. 5 (C), for different applied strain rates, and finally the rate of entropy production due to the phase front, Fig. 5 (D), as a function of the system extension for each of the strain rates shown in (C). Methods for computing the front speed and the rate of entropy production due to the phase front can be found in the SI Appendix. Finally, in the continuum limit, one can differentiate in time the defining equation for the location of the phase front in the reference configuration, \(\hat{\Phi}(\tilde{I}(t),t)\equiv\frac{1}{2}\), to yield the following ordinary differential equation for the location of the phase front \[\frac{\text{d}}{\text{d}t}\tilde{I}(t)=-\frac{\frac{\partial^{2}}{\partial x^{2}}\frac{\delta\hat{A}^{\text{neq}}}{\delta\epsilon}(x,t)}{\eta\,\frac{\partial^{2}\mu}{\partial x^{2}}(x,t)}\Bigg{|}_{x=\tilde{I}(t)}.\] This equation reveals that the phase front velocity is directly proportional to the ratio of the curvature of the thermodynamic affinity conjugate to the strain, \(\mathcal{A}_{\epsilon}\equiv-\frac{\delta\hat{A}^{\text{neq}}}{\delta\epsilon}\), and the curvature of \(\mu\) at the location of the phase front.

Figure 5: (A) Mean mass positions for Langevin and STIV approximations to a 17 mass (Langevin: red dashes, STIV: solid black) and a 62 mass (Langevin: pink short dashes, STIV: grey long dashes) double-well mass-spring-chain system, with parameters rescaled for the same effective behavior. For both systems, only the 8 masses expected to overlap are plotted. Throughout (B,C,D), darker colors, dashed lines, and + scatter points denote results from Langevin simulations, whereas lighter colors, solid lines, and x scatter points denote results from the STIV approximation. (B) The external force as a function of extension for the 17 mass system at ten different strain rates (shown in (C)). (C) The phase front speed as a function of strain rate in the 17 mass system. (D) The rate of entropy production due to the phase front as a function of extension for each of the strain rates shown in (C).

## Discussion

Our results demonstrate the utility and accuracy of the STIV framework as a method for constructing TIV models which are consistent with microscopic physics. After assuming a functional form for a set of parameterized probability densities which serve to approximate the true density of states, inserting this approximation into the thermodynamic definitions taken from stochastic thermodynamics directly yields the internal variable structure, and the dynamics of these internal variables are fully determined by the variational method of Eyink. The resulting macroscopic model encodes the microscopic features of the system to the degree allowed within the chosen probability density, without any need for further reference back to smaller scales. Moreover, in the important case of a Gaussian form for the approximate probability density, \(\hat{p}(x,\alpha)\), we recover the gradient flow dynamics and the GENERIC structure which is commonly assumed without direct microscopic justification. In this work, we have focused on examples yielding analytically tractable approximations. However, it is equally possible to extend the method beyond such constraints by creating a numerical implementation based on sampling techniques using modern statistical and machine learning techniques.
Furthermore, extensions to Hamiltonian systems, active noise, and models exhibiting significant coarse graining constitute important future steps for the STIV framework.

## Acknowledgment

T.L. acknowledges that this project was supported in part by a fellowship award under contract FA9550-21-F-0003 through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program, sponsored by the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR) and the Army Research Office (ARO). P.K.P. acknowledges support from ACS, USA grant number PRF-61793 ND10. C.R. gratefully acknowledges support from NSF CAREER Award CMMI-2047506.
2306.17584
Flexible and Accurate Methods for Estimation and Inference of Gaussian Graphical Models with Applications
The Gaussian graphical model (GGM) incorporates an undirected graph to represent the conditional dependence between variables, with the precision matrix encoding partial correlation between pair of variables given the others. To achieve flexible and accurate estimation and inference of GGM, we propose the novel method FLAG, which utilizes the random effects model for pairwise conditional regression to estimate the precision matrix and applies statistical tests to recover the graph. Compared with existing methods, FLAG has several unique advantages: (i) it provides accurate estimation without sparsity assumptions on the precision matrix, (ii) it allows for element-wise inference of the precision matrix, (iii) it achieves computational efficiency by developing an efficient PX-EM algorithm and a MM algorithm accelerated with low-rank updates, and (iv) it enables joint estimation of multiple graphs using FLAG-Meta or FLAG-CA. The proposed methods are evaluated using various simulation settings and real data applications, including gene expression in the human brain, term association in university websites, and stock prices in the U.S. financial market. The results demonstrate that FLAG and its extensions provide accurate precision estimation and graph recovery.
Yueqi Qian, Xianghong Hu, Can Yang
2023-06-30T12:06:10Z
http://arxiv.org/abs/2306.17584v1
# Flexible and Accurate Methods for Estimation and Inference of Gaussian Graphical Models with Applications

###### Abstract

The Gaussian graphical model (GGM) incorporates an undirected graph to represent the conditional dependence between variables, with the precision matrix encoding the partial correlation between pairs of variables given the others. To achieve flexible and accurate estimation and inference of GGM, we propose the novel method FLAG, which utilizes the random effects model for pairwise conditional regression to estimate the precision matrix and applies statistical tests to recover the graph. Compared with existing methods, FLAG has several unique advantages: (i) it provides accurate estimation without sparsity assumptions on the precision matrix, (ii) it allows for element-wise inference of the precision matrix, (iii) it achieves computational efficiency by developing an efficient PX-EM algorithm and an MM algorithm accelerated with low-rank updates, and (iv) it enables joint estimation of multiple graphs using FLAG-Meta or FLAG-CA. The proposed methods are evaluated using various simulation settings and real data applications, including gene expression in the human brain, term association in university websites, and stock prices in the U.S. financial market. The results demonstrate that FLAG and its extensions provide accurate precision estimation and graph recovery.

## 1 Introduction

Quantifying the relationships among components in a complex system based on observations is a fascinating yet challenging problem. Graphical models utilize probability models to represent relationships as a graph, where the nodes are random variables and the edges denote their dependencies. Graphical models have wide real-world applications in various research fields, including genetics Feng and Ning (2019); Zhao and Duan (2019); Yi et al. (2022), economics Anufriev and Panchenko (2015); Bernardini et al. (2022), psychology Epskamp et al. (2018); Williams (2021), and environmental science Engelke and Hitz (2020). To model the components in the system, we consider a \(p\)-dimensional random vector following a multivariate normal distribution with zero mean, without loss of generality, as \(z\sim\mathcal{N}(0,\Sigma)\), with \(\Theta\coloneqq\Sigma^{-1}\). Then, we have \(p(z_{i}|z_{-i})\propto\exp(-\frac{1}{2}\Theta_{ii}z_{i}^{2}-\Sigma_{j\neq i}z_{i}\Theta_{ij}z_{j})\), which implies that the probability of the \(i\)-th component only depends on the components with nonzero entries in the \(i\)-th column of the precision matrix \(\Theta\). Also, \(p(z_{i},z_{j}|z_{-ij})\propto\exp(-\frac{1}{2}\begin{bmatrix}z_{i}&z_{j}\end{bmatrix}\begin{bmatrix}\Theta_{ii}&\Theta_{ij}\\ \Theta_{ji}&\Theta_{jj}\end{bmatrix}\begin{bmatrix}z_{i}\\ z_{j}\end{bmatrix}-\Sigma_{k\neq i,j}(z_{i}\Theta_{ik}+z_{j}\Theta_{jk})z_{k})\), which implies that the conditional distribution of \((z_{i},z_{j})\) given the other variables has precision matrix equal to the \(2\times 2\) submatrix \(\begin{bmatrix}\Theta_{ii}&\Theta_{ij}\\ \Theta_{ji}&\Theta_{jj}\end{bmatrix}\). The rest of this manuscript is organized as follows. We introduce the proposed method in Section 2, and present accelerated algorithms and extensions to multiple graphs in Section 3. Simulation studies and real data applications are reported in Section 4. Finally, we conclude this manuscript with a brief discussion in Section 5.

## 2 Methods

The proposed method does not depend on any explicit structural assumptions on the precision matrix, and it does not introduce bias into the estimates. Instead, it utilizes the conditional Gaussian property and rewrites the estimation of each entry in the precision matrix as the estimation of the covariance of residuals obtained by regressing two variables on the remaining \(p-2\) variables.
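As a quick numerical illustration of this conditional Gaussian property (a minimal sketch with an arbitrary positive-definite \(\Theta\), not code from the paper), the conditional covariance of a pair given the remaining variables equals the inverse of the corresponding \(2\times 2\) block of the precision matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random positive-definite precision matrix Theta (illustrative values only).
p = 6
A = rng.standard_normal((p, p))
Theta = A @ A.T + p * np.eye(p)
Sigma = np.linalg.inv(Theta)

i, j = 1, 4
a = [i, j]
rest = [k for k in range(p) if k not in a]

# Conditional covariance of (z_i, z_j) given the rest, via the Schur complement of Sigma.
S_aa = Sigma[np.ix_(a, a)]
S_ar = Sigma[np.ix_(a, rest)]
S_rr = Sigma[np.ix_(rest, rest)]
cond_cov = S_aa - S_ar @ np.linalg.inv(S_rr) @ S_ar.T

# The conditional Gaussian property: this equals the inverse of the 2x2 block of Theta.
Theta_aa = Theta[np.ix_(a, a)]
print(np.allclose(cond_cov, np.linalg.inv(Theta_aa)))   # True
```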
Unlike the asymptotic normal thresholding (ANT) method from Ren et al. (2015), we neither impose sparsity assumptions on the regression coefficients nor assume column-wise sparsity in the precision matrix, as shrinkage on the parameters may introduce bias into the residuals of the regressions and the precision entries. In addition to estimation, the proposed method enables inference and quantification of the uncertainty of each entry in the precision matrix and the corresponding edge in the graph.

### Model Setting

Utilizing the conditional Gaussian property, the proposed method estimates a two-by-two submatrix of the precision matrix each time by taking the inverse of the residual covariance obtained from a two-versus-rest bivariate regression. To achieve unbiased estimation of the residual covariance and the precision entries, each regression is solved by a random effects model. Consider a pair of random variables \(a=\{i,j\}\) versus the other \(p-2\) variables each time. Take the \(i\)-th and \(j\)-th elements from the \(p\)-dimensional random vector \(z\) as responses \(y=[z_{i},z_{j}]\), while the remaining \((p-2)\)-dimensional random vector \(x=[z_{i},z_{j}]^{c}\) contains the explanatory variables. The conditional probability \(y|x\sim\mathcal{N}(x^{T}\beta,\Theta_{aa}^{-1})\) can be expressed as \(y=x^{T}\beta+\epsilon\), where \(\epsilon\sim\mathcal{N}(0,\Gamma_{\epsilon})\) and \(\Theta_{aa}=\Gamma_{\epsilon}^{-1}\). Let \(Z\in\mathbb{R}^{n\times p}\) denote a collection of \(n\) realizations of the random vector \(z\), and take the \(i\)-th and \(j\)-th columns of \(Z\) as responses \(Y=[Z_{\cdot i},Z_{\cdot j}]\in\mathbb{R}^{n\times 2}\), while the remaining columns \(X=[Z_{\cdot i},Z_{\cdot j}]^{c}\in\mathbb{R}^{n\times(p-2)}\) serve as the explanatory variables. Subsequently, a bivariate regression model is constructed based on the conditional Gaussian property, as \(Y=X\boldsymbol{\beta}+\boldsymbol{\epsilon}\), where the coefficient matrix \(\boldsymbol{\beta}\in\mathbb{R}^{(p-2)\times 2}\), and the covariance of each row \(\epsilon_{k\cdot}\) of \(\boldsymbol{\epsilon}\in\mathbb{R}^{n\times 2}\) is a 2-by-2 positive definite matrix \(\Gamma_{\epsilon}\), satisfying \(\mathrm{cov}(\epsilon_{k\cdot}^{T})=\Gamma_{\epsilon}=\Theta_{aa}^{-1}\). To solve this bivariate regression, we consider a random effects model \[\begin{split}& Y=X\boldsymbol{\beta}+\boldsymbol{\epsilon},\\ &\beta_{k\cdot}^{T}\sim\mathcal{N}(0,\Gamma_{\beta}),\epsilon_{k\cdot}^{T}\sim\mathcal{N}(0,\Gamma_{\epsilon}),\end{split} \tag{1}\] where \(\boldsymbol{\beta}\) is treated as random effects, and the \(k\)-th row \(\beta_{k\cdot}\) is assumed to follow a normal distribution with zero mean and covariance \(\Gamma_{\beta}\). After vectorizing \(Y\) and \(X\boldsymbol{\beta}\), we obtain \(\mathrm{vec}Y|X\sim\mathcal{N}(\mathrm{vec}(X\boldsymbol{\beta}),\Gamma_{\epsilon}\otimes I_{n})\). By integrating over \(\boldsymbol{\beta}\), the random effects model can be expressed as \[\begin{split}&\mathrm{vec}Y\sim\mathcal{N}(0,\Omega),\\ &\Omega=\Gamma_{\beta}\otimes XX^{T}+\Gamma_{\epsilon}\otimes I_{n}.\end{split} \tag{2}\] The parameters in this model are denoted as \(\Gamma_{\beta}=\begin{bmatrix}\sigma_{1}^{2}&\tau\\ \tau&\sigma_{2}^{2}\end{bmatrix},\Gamma_{\epsilon}=\begin{bmatrix}\sigma_{3}^{2}&\eta\\ \eta&\sigma_{4}^{2}\end{bmatrix},\) where \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) are symmetric and positive semi-definite matrices.
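The following sketch (assumed dimensions and variance components, not values from the paper) generates data from model 1 and checks empirically that the marginal covariance of \(\mathrm{vec}Y\) matches \(\Omega\) in 2:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 200, 10                       # assumed sample size and dimension
X = rng.standard_normal((n, p - 2))  # the "rest" variables for one pair (i, j)

Gamma_beta = np.array([[0.5, 0.1], [0.1, 0.4]])   # assumed variance components
Gamma_eps  = np.array([[1.0, 0.3], [0.3, 0.8]])

def simulate_Y(X, Gamma_beta, Gamma_eps, rng):
    """Draw Y from the random effects model (1): Y = X beta + eps,
    with rows of beta ~ N(0, Gamma_beta) and rows of eps ~ N(0, Gamma_eps)."""
    beta = rng.multivariate_normal(np.zeros(2), Gamma_beta, size=X.shape[1])
    eps = rng.multivariate_normal(np.zeros(2), Gamma_eps, size=X.shape[0])
    return X @ beta + eps

# Marginal covariance of vec(Y) implied by (2).
Omega = np.kron(Gamma_beta, X @ X.T) + np.kron(Gamma_eps, np.eye(n))

# Rough Monte Carlo check: the empirical covariance of vec(Y) approaches Omega.
draws = np.stack([simulate_Y(X, Gamma_beta, Gamma_eps, rng).flatten(order="F")
                  for _ in range(2000)])
emp = np.cov(draws, rowvar=False)
print("max |empirical - Omega| :", np.abs(emp - Omega).max())
```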
Firstly, the variance components \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) are estimated for the pair \((i,j)\), using efficient algorithms designed in the random effects model. Based on the conditional Gaussian property, the submatrix of the precision matrix with respect to this pair can be estimated by \(\Theta_{aa}=\Gamma_{\epsilon}^{-1}\). Furthermore, to quantify the uncertainty of each entry in the precision matrix, inference can be performed based on the proposed model. In addition, edges in the graph are detected through hypothesis testing, rather than relying solely on point estimates. ### Algorithms To estimate the variance component \(\Gamma_{\epsilon}\), two approaches based on maximum likelihood estimation (MLE) are provided: the minorize-maximization (MM, Hunter and Lange [2000]) algorithm and the parameter-expanded expectation-maximization (PX-EM, Liu et al. [1998]) algorithm. According to the random effects model as shown in the formula 2, the log-likelihood function with respect to the random components \(\Gamma=\{\Gamma_{\beta},\Gamma_{\epsilon}\}\) in a two-versus-rest conditional regression for each pair is \[\ell(\Gamma)=\ln\mathbb{P}(Y|X;\Gamma_{\beta},\Gamma_{\epsilon})=-\frac{1}{2} \ln\det\Omega-\frac{1}{2}\text{vec}Y^{T}\Omega^{-1}\text{vec}Y+c, \tag{3}\] where \(c\) is a trivial constant. Two MLE-based algorithms have been developed for estimating the variance components in order to achieve unbiased estimation and statistical inference. #### 2.2.1 MM Algorithm Direct maximum likelihood estimation of variance components models is numerically challenging. The minorize-maximization (MM) algorithm first finds a surrogate function \(g\) that minorizes the log-likelihood function 3, such that \(g(\Gamma|\Gamma^{(m)})\leq\mathcal{L}(\Gamma)\). Then, the optimization variable is updated according to the current surrogate function, i.e., \(\Gamma^{(m+1)}=\operatorname*{argmax}_{\Gamma}g(\Gamma|\Gamma^{(m)})\). The surrogate function for the log-likelihood function with respect to variance components is constructed using two minorizations based on two inequalities Zhou et al. [2019]. The convexity of the negative log determinant function implies \(-\ln\det\Omega\geq-\ln\det\Omega^{(m)}-\operatorname{tr}[\Omega^{-(m)}( \Omega-\Omega^{(m)})]\). Since the variance components \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) are positive definite matrices, \(\Omega\) is also positive definite, then we have \(\Omega^{-1}\preceq\Omega^{-(m)}[(\Gamma_{\beta}^{(m)}\Gamma_{\beta}^{-1} \Gamma_{\beta}^{(m)})\otimes XX^{T}+(\Gamma_{\epsilon}^{(m)}\Gamma_{\epsilon} ^{-1}\Gamma_{\epsilon}^{(m)})\otimes I_{n}]\Omega^{-(m)}.\) The surrogate function for the MM algorithm is then given by \[\begin{split} g(\Gamma|\Gamma^{(m)}):=&-\operatorname {tr}[\Omega^{-(m)}(\Gamma_{\beta}\otimes XX^{T})]-\operatorname{tr}[\Gamma_{ \beta}^{(m)}R^{(m)T}XX^{T}R^{(m)}\Gamma_{\beta}^{(m)}\Gamma_{\beta}^{-1}]\\ &-\operatorname{tr}[\Omega^{-(m)}(\Gamma_{\epsilon}\otimes I_{n} )]-\operatorname{tr}[\Gamma_{\epsilon}^{(m)}R^{(m)T}R^{(m)}\Gamma_{\epsilon}^ {(m)}\Gamma_{\epsilon}^{-1}]+c^{(m)},\end{split} \tag{4}\] where \(c^{(m)}\) is a constant in the \(m\)-th iteration, and the matrix \(R\in\mathbb{R}^{n\times 2}\) satisfies \(\text{vec}(R^{(m)})=\Omega^{-(m)}\text{vec}Y\) in all iterations. 
In each iteration, the parameters in \(\Gamma\) are updated by setting the derivative of \(g(\Gamma|\Gamma^{(m)})\) to zero, as \(\Gamma_{\beta}\) is updated by \(\nabla_{\Gamma_{\beta}}g(\Gamma|\Gamma^{(m)})=0\) and \(\Gamma_{\epsilon}\) is updated by \(\nabla_{\Gamma_{\epsilon}}g(\Gamma|\Gamma^{(m)})=0\). The log-likelihood is then calculated after the update. Once the change in the log-likelihood becomes arbitrarily small, the MM algorithm is considered to have converged. Due to the high computational cost of inverting the large matrix \(\Omega\in\mathbb{R}^{(2n)\times(2n)}\) in each iteration, an eigen-decomposition is used to avoid repeatedly inverting it directly. Let the eigen-decomposition of \(XX^{T}\) be \(U^{T}XX^{T}U=D=\text{diag}(d)\), where \(D\) is a diagonal matrix with its diagonal elements denoted by the vector \(d\in\mathbb{R}^{n}\). The simultaneous congruence decomposition of \((\Gamma_{\beta},\Gamma_{\epsilon})\) is \((\Lambda,\Phi)\), such that \(\Phi^{T}\Gamma_{\beta}\Phi=\Lambda,\Phi^{T}\Gamma_{\epsilon}\Phi=I_{2}\). Then, \(\Gamma_{\beta}=\Phi^{-T}\Lambda\Phi^{-1},\Gamma_{\epsilon}=\Phi^{-T}I_{2}\Phi^{-1}\). The inverse of \(\Omega\) can be efficiently calculated in each iteration according to the following equations \[\begin{split}\Omega^{(m)}=(\Phi^{-(m)}\otimes U^{-1})^{T}(\Lambda^{(m)}\otimes D+I_{2}\otimes I_{n})(\Phi^{-(m)}\otimes U^{-1}),\\ \Omega^{-(m)}=(\Phi^{(m)}\otimes U)(\Lambda^{(m)}\otimes D+I_{2}\otimes I_{n})^{-1}(\Phi^{(m)}\otimes U)^{T}.\end{split} \tag{5}\] Additionally, the determinant of \(\Omega\) can be calculated accordingly as \(|\Omega^{(m)}|=|\Lambda^{(m)}\otimes D+I_{2}\otimes I_{n}||\Gamma_{\epsilon}^{(m)}|^{n}\). In each iteration, \(\Gamma_{\beta}\) is updated by setting the derivative of \(g(\Gamma|\Gamma^{(m)})\) with respect to \(\Gamma_{\beta}\) to zero, and \(\Gamma_{\epsilon}\) is updated similarly. The former trace terms in 4 are linear in \(\Gamma\), with the coefficients collected in the \(2\times 2\) matrices \(M_{\beta}\) and \(M_{\epsilon}\) as \[\begin{split}M_{\beta}&=\Phi^{(m)}\operatorname{diag}\{\operatorname{tr}[D(\lambda_{l}^{(m)}D+I_{n})^{-1}]\}\Phi^{(m)T},\\ M_{\epsilon}&=\Phi^{(m)}\operatorname{diag}\{\operatorname{tr}[(\lambda_{l}^{(m)}D+I_{n})^{-1}]\}\Phi^{(m)T}.\end{split} \tag{6}\] The latter trace terms, which involve the inverse of \(\Gamma\), can be rewritten in the general form \(-\operatorname{tr}[A\Gamma^{-1}]\). The derivative of this form with respect to \(\Gamma\) is \(\Gamma^{-1}A\Gamma^{-1}\). For positive definite matrices \(A\) and \(M\), the unique positive definite solution for \(\Gamma\) of the Riccati equation \(M=\Gamma^{-1}A\Gamma^{-1}\) is given by \(L^{-T}(L^{T}AL)^{\frac{1}{2}}L^{-1}\), where \(L\) is the Cholesky factor of \(M\). After bypassing the computational cost of matrix inversion and solving for the updated \(\Gamma\) via the Riccati equation, we further reduce the computational cost by simplifying the coefficients of the latter trace terms in the surrogate function that were generalized as \(A\) above. Specifically, \(A_{\beta}=\Gamma_{\beta}^{(m)}R^{(m)T}XX^{T}R^{(m)}\Gamma_{\beta}^{(m)}\) and \(A_{\epsilon}=\Gamma_{\epsilon}^{(m)}R^{(m)T}R^{(m)}\Gamma_{\epsilon}^{(m)}\) are \(2\times 2\) symmetric matrices, but their inner calculation involves multiplication of a large matrix of dimension \(n\), which is repeated in each iteration.
To reduce this computational cost, the coefficients of the inverse terms are denoted by the matrices \(N_{\beta}^{T}N_{\beta}\) and \(N_{\epsilon}^{T}N_{\epsilon}\) as \[\begin{split}\Gamma_{\beta}^{(m)}R^{(m)T}XX^{T}R^{(m)}\Gamma_{\beta}^{(m)}&=N_{\beta}^{T}N_{\beta},\\ \Gamma_{\epsilon}^{(m)}R^{(m)T}R^{(m)}\Gamma_{\epsilon}^{(m)}&=N_{\epsilon}^{T}N_{\epsilon}.\end{split} \tag{7}\] Taking all the aforementioned techniques into consideration to maximize the surrogate function 4 and further speed up the computation, the MM algorithm can be summarized in Algorithm 1. Note that \(\oslash\) denotes the Hadamard quotient. The MM algorithm estimates the two variance component matrices \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\), and the corresponding \(2\times 2\) submatrix of the precision matrix can then be estimated using the inverse of \(\hat{\Gamma}_{\epsilon}\).

#### 2.2.2 PX-EM Algorithm

The parameter-expanded expectation-maximization (PX-EM) algorithm Liu et al. (1998) is an accelerated version of the EM algorithm that is fast and stable in estimating variance-covariance components in linear mixed models Foulley and Van Dyk (2000). The linear model 1 is rewritten in a parameter-expanded form as \[\begin{split}& Y=\delta X\boldsymbol{\beta}+\boldsymbol{\epsilon},\\ &\beta_{k\cdot}^{T}\sim\mathcal{N}(0,\Gamma_{\beta}),\epsilon_{k\cdot}^{T}\sim\mathcal{N}(0,\Gamma_{\epsilon}),\end{split} \tag{8}\] where \(\delta\in\mathbb{R}^{1}\) is the expanded parameter. The data and parameters are vectorized as follows: \(\bar{X}=I_{2}\otimes X\in\mathbb{R}^{2n\times 2(p-2)}\), \(\bar{\beta}=\text{vec}\boldsymbol{\beta}\in\mathbb{R}^{2(p-2)}\) with \(\bar{\beta}\sim\mathcal{N}(0,\Gamma_{\beta}\otimes I_{p-2})\), \(\bar{\epsilon}=\text{vec}\boldsymbol{\epsilon}\in\mathbb{R}^{2n}\) with \(\bar{\epsilon}\sim\mathcal{N}(0,\Gamma_{\epsilon}\otimes I_{n})\), and \(\bar{Y}=\text{vec}Y=\delta\bar{X}\bar{\beta}+\bar{\epsilon}\in\mathbb{R}^{2n}\). The complete data log-likelihood is \[\begin{split}\ell(\Gamma)=&\text{logPr}(\bar{Y},\bar{\beta}|\Gamma_{\beta},\Gamma_{\epsilon};\bar{X})\\ =&-\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{1}{2}(\bar{Y}-\delta\bar{X}\bar{\beta})^{T}(\Gamma_{\epsilon}^{-1}\otimes I_{n})(\bar{Y}-\delta\bar{X}\bar{\beta})\\ &-\frac{p-2}{2}\text{log}|\Gamma_{\beta}|-\frac{1}{2}\bar{\beta}^{T}(\Gamma_{\beta}^{-1}\otimes I_{p-2})\bar{\beta}.\end{split} \tag{9}\] The terms involving \(\bar{\beta}\) are in a quadratic form given by \(\bar{\beta}^{T}(-\frac{\delta^{2}}{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X-\frac{1}{2}\Gamma_{\beta}^{-1}\otimes I_{p-2})\bar{\beta}+\delta\bar{Y}^{T}(\Gamma_{\epsilon}^{-1}\otimes X)\bar{\beta}\). The posterior distribution of \(\bar{\beta}\) is \(N(\bar{\beta}|\mu_{\bar{\beta}},\Sigma_{\bar{\beta}})\), where \[\Sigma_{\bar{\beta}}^{-1}=\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+\Gamma_{\beta}^{-1}\otimes I_{p-2},\] \[\mu_{\bar{\beta}}=(\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+\Gamma_{\beta}^{-1}\otimes I_{p-2})^{-1}\delta(\Gamma_{\epsilon}^{-1}\otimes X^{T})\bar{Y}.\] During the E-step of the PX-EM algorithm, the \(\mathcal{Q}\)-function is evaluated by taking the expectation of the complete data log-likelihood with respect to the posterior \(N(\bar{\beta}|\mu_{\bar{\beta}},\Sigma_{\bar{\beta}})\).
The quadratic terms involving \(\bar{\beta}\) are taken as expectation values: \[\begin{split}&\mathbb{E}[(\bar{Y}-\delta\bar{X}\bar{\beta})^{T}( \Gamma_{\epsilon}^{-1}\otimes I_{n})(\bar{Y}-\delta\bar{X}\bar{\beta})]\\ =&(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}})^{T}( \Gamma_{\epsilon}^{-1}\otimes I_{n})(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}}) +\delta^{2}\text{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T}X)\Sigma_{\bar{\beta }}].\\ &\mathbb{E}[\bar{\beta}^{T}(\Gamma_{\beta}^{-1}\otimes I_{p-2}) \bar{\beta}]=\mu_{\bar{\beta}}^{T}(\Gamma_{\beta}^{-1}\otimes I_{p-2})\mu_{ \bar{\beta}}+\text{tr}[(\Gamma_{\beta}^{-1}\otimes I_{p-2})\Sigma_{\bar{ \beta}}]\end{split} \tag{10}\] The \(\mathcal{Q}\)-function given the estimated parameter in the previous iteration as \(\theta_{old}\), is expressed as follows, \[\begin{split}\mathcal{Q}(\theta|\theta_{old})=&-\frac{n}{2 }\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log}|\Gamma_{\beta}|\\ &-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\epsilon}^{-1}\otimes I_{n} )[(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta }})^{T}+\delta^{2}\bar{X}\Sigma_{\bar{\beta}}\bar{X}^{T}]\Big{]}\\ &-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\beta}^{-1}\otimes I_{p-2} )(\mu_{\beta}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{\beta}})\Big{]}\\ =&-\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p- 2}{2}\text{log}|\Gamma_{\beta}|\\ &-\frac{1}{2}\text{tr}\Big{[}\Gamma_{\epsilon}^{-1}\begin{pmatrix} \text{tr}[S_{11}]&\text{tr}[S_{12}]\\ \text{tr}[S_{21}]&\text{tr}[S_{22}]\end{pmatrix}\Big{]}-\frac{1}{2}\text{tr} \Big{[}\Gamma_{\beta}^{-1}\begin{pmatrix}\text{tr}[W_{11}]&\text{tr}[W_{12}] \\ \text{tr}[W_{21}]&\text{tr}[W_{22}]\end{pmatrix}\Big{]},\end{split} \tag{11}\] where \(S=\begin{pmatrix}S_{11}&S_{12}\\ S_{21}&S_{22}\end{pmatrix}=(\bar{Y}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}- \delta\bar{X}\mu_{\bar{\beta}})^{T}+\delta^{2}\bar{X}\Sigma_{\bar{\beta}}\bar {X}^{T}\), \(W=\begin{pmatrix}W_{11}&W_{12}\\ W_{21}&W_{22}\end{pmatrix}=\mu_{\bar{\beta}}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{ \beta}}\). Denote \(\bar{Y}=\begin{pmatrix}\bar{Y}_{1}\\ \bar{Y}_{2}\end{pmatrix}\), \(\mu_{\bar{\beta}}=\begin{pmatrix}\bar{\mu}_{1}\\ \bar{\mu}_{2}\end{pmatrix}\), \(\Sigma_{\bar{\beta}}=\begin{pmatrix}\bar{\Sigma}_{11}&\bar{\Sigma}_{12}\\ \bar{\Sigma}_{21}&\bar{\Sigma}_{22}\end{pmatrix}\). Then, for \(i=1,2;j=1,2\), \[\begin{split}&\text{tr}[S_{ij}]=\text{tr}[(\bar{Y}_{i}-\delta X \bar{\mu}_{i})(\bar{Y}_{j}-\delta X\bar{\mu}_{j})^{T}+\delta^{2}X\bar{\Sigma} _{ij}X^{T}],\\ &\text{tr}[W_{ij}]=\text{tr}[\bar{\mu}_{i}\bar{\mu}_{j}^{T}+\bar{ \Sigma}_{ij}].\end{split} \tag{12}\] In the subsequent M-step, the new estimates of the parameters are obtained by setting the derivative of the \(\mathcal{Q}\)-function to be zero. From the detailed calculations in the supplementary materials 5, the updated parameters are \(\Gamma_{\epsilon}=\frac{1}{n}\begin{pmatrix}\text{tr}[S_{11}]&\text{tr}[S_{12} ]\\ \text{tr}[S_{21}]&\text{tr}[S_{22}]\end{pmatrix}\), \(\Gamma_{\beta}=\frac{1}{p-2}\begin{pmatrix}\text{tr}[W_{11}]&\text{tr}[W_{12}] \\ \text{tr}[W_{21}]&\text{tr}[W_{22}]\end{pmatrix}\), \(\delta=\frac{\bar{Y}^{T}(\Gamma_{\epsilon}^{-1}\otimes X)\mu_{\bar{\beta}}}{ \text{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T})(\mu_{\bar{\beta}}\mu_{\bar{ \beta}}^{T}+\Sigma_{\bar{\beta}})]}\). 
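To make the E-step and M-step above concrete, the following is a minimal sketch of one PX-EM iteration using dense linear algebra. The function name, the update ordering, and the use of the current \(\Gamma_{\epsilon}\) in the \(\delta\) update are our own choices for illustration; the eigen-decomposition accelerations described next are deliberately omitted.

```python
import numpy as np

def px_em_step(Y, X, Gamma_beta, Gamma_eps):
    """One (unaccelerated) PX-EM iteration for the pairwise random effects model.
    Dense linear algebra only: a didactic sketch, not the paper's optimized code."""
    n, q = X.shape                      # q = p - 2
    Ybar = Y.flatten(order="F")         # vec(Y), length 2n
    Xbar = np.kron(np.eye(2), X)        # 2n x 2q
    delta = 1.0                         # reduction step keeps delta = 1 at the E-step

    # E-step: posterior of vec(beta).
    Ge_inv = np.linalg.inv(Gamma_eps)
    Prec = delta**2 * np.kron(Ge_inv, X.T @ X) + np.kron(np.linalg.inv(Gamma_beta), np.eye(q))
    Sigma_b = np.linalg.inv(Prec)
    mu_b = Sigma_b @ (delta * np.kron(Ge_inv, X.T) @ Ybar)

    # M-step: Gamma_eps, Gamma_beta and delta from the traces of the S and W blocks.
    resid = Ybar - delta * Xbar @ mu_b
    S = np.outer(resid, resid) + delta**2 * Xbar @ Sigma_b @ Xbar.T
    W = np.outer(mu_b, mu_b) + Sigma_b
    blk = lambda M, d: np.array([[np.trace(M[i*d:(i+1)*d, j*d:(j+1)*d])
                                  for j in range(2)] for i in range(2)])
    Gamma_eps_new = blk(S, n) / n
    Gamma_beta_new = blk(W, q) / q
    num = Ybar @ (np.kron(Ge_inv, X) @ mu_b)
    den = np.trace(np.kron(Ge_inv, X.T @ X) @ W)
    delta_new = num / den

    # Reduction step: fold delta back into Gamma_beta and reset it to one.
    return delta_new**2 * Gamma_beta_new, Gamma_eps_new
```

In practice this iteration would be repeated until the incomplete-data objective (the ELBO in Algorithm 2) stops increasing.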
To avoid frequently inverting the \(2(p-2)\times 2(p-2)\) matrix \(\Sigma_{\bar{\beta}}^{-1}\) in the iterations, an eigen-decomposition \(X^{T}X=VQV^{T}\) is performed, where \(Q\in\mathbb{R}^{(p-2)\times(p-2)}\) is a diagonal matrix with diagonal elements given by the vector \(q\) of eigenvalues. Hence, the matrix \(\Sigma_{\bar{\beta}}^{-1}\) can be written as \[\begin{split}\Sigma_{\bar{\beta}}^{-1}&=\begin{pmatrix}\delta^{2}(\Gamma_{\epsilon}^{-1})_{11}X^{T}X+(\Gamma_{\beta}^{-1})_{11}I_{p-2}&\delta^{2}(\Gamma_{\epsilon}^{-1})_{12}X^{T}X+(\Gamma_{\beta}^{-1})_{12}I_{p-2}\\ \delta^{2}(\Gamma_{\epsilon}^{-1})_{21}X^{T}X+(\Gamma_{\beta}^{-1})_{21}I_{p-2}&\delta^{2}(\Gamma_{\epsilon}^{-1})_{22}X^{T}X+(\Gamma_{\beta}^{-1})_{22}I_{p-2}\end{pmatrix}\\ &=\begin{pmatrix}V&0\\ 0&V\end{pmatrix}\underbrace{\begin{pmatrix}\delta^{2}(\Gamma_{\epsilon}^{-1})_{11}Q+(\Gamma_{\beta}^{-1})_{11}I_{p-2}&\delta^{2}(\Gamma_{\epsilon}^{-1})_{12}Q+(\Gamma_{\beta}^{-1})_{12}I_{p-2}\\ \delta^{2}(\Gamma_{\epsilon}^{-1})_{21}Q+(\Gamma_{\beta}^{-1})_{21}I_{p-2}&\delta^{2}(\Gamma_{\epsilon}^{-1})_{22}Q+(\Gamma_{\beta}^{-1})_{22}I_{p-2}\end{pmatrix}}_{\begin{pmatrix}A&B\\ C&H\end{pmatrix}=\begin{pmatrix}\text{diag}(a)&\text{diag}(b)\\ \text{diag}(c)&\text{diag}(h)\end{pmatrix}}\begin{pmatrix}V&0\\ 0&V\end{pmatrix}^{T}.\end{split} \tag{13}\] Since \(X^{T}X\) is a real symmetric matrix, \(V\) is an orthogonal matrix, as is the block matrix \(\begin{pmatrix}V&0\\ 0&V\end{pmatrix}\), whose inverse is trivial to obtain. The matrix \(\begin{pmatrix}A&B\\ C&H\end{pmatrix}\) in the middle consists of blocks that are diagonal matrices, which makes it easier to calculate the inverse. Specifically, the inverse of the middle matrix is \(\begin{pmatrix}A&B\\ C&H\end{pmatrix}^{-1}=\begin{pmatrix}\text{diag}(h\oslash(a\odot h-c\odot b))&\text{diag}(-b\oslash(a\odot h-c\odot b))\\ \text{diag}(-c\oslash(a\odot h-c\odot b))&\text{diag}(a\oslash(a\odot h-c\odot b))\end{pmatrix},\) and then \(\Sigma_{\bar{\beta}}=\begin{pmatrix}V&0\\ 0&V\end{pmatrix}\begin{pmatrix}A&B\\ C&H\end{pmatrix}^{-1}\begin{pmatrix}V^{T}&0\\ 0&V^{T}\end{pmatrix}\). The PX-EM algorithm with the eigen-decomposition of \(X^{T}X\) is summarized as follows, ``` 1: Initialization: \(\Gamma_{\beta}=\Gamma_{\epsilon}=\frac{\mathrm{cov}(Y)}{2}\). 2: Eigen-decomposition: \(X^{T}X=VQV^{T}\).
3:repeat 4: E-step: set \(\delta^{(m)}=1\), \[\begin{split}\Sigma_{\bar{\beta}}=&\begin{pmatrix}V^{T} &0\\ 0&V^{T}\end{pmatrix}\\ &\begin{pmatrix}\mathrm{diag}(\delta^{2}(\Gamma_{\epsilon}^{-1})_{11}q+( \Gamma_{\beta}^{-1})_{11}\mathbb{1}_{p-2})&\mathrm{diag}(\delta^{2}(\Gamma_{ \epsilon}^{-1})_{12}q+(\Gamma_{\beta}^{-1})_{12}\mathbb{1}_{p-2})\\ \mathrm{diag}(\delta^{2}(\Gamma_{\epsilon}^{-1})_{21}q+(\Gamma_{\beta}^{-1})_{2 1}\mathbb{1}_{p-2})&\mathrm{diag}(\delta^{2}(\Gamma_{\epsilon}^{-1})_{22}q+( \Gamma_{\beta}^{-1})_{22}\mathbb{1}_{p-2})\end{pmatrix}^{-1}\\ &\begin{pmatrix}V&0\\ 0&V\end{pmatrix},\\ &\mu_{\bar{\beta}}=\Sigma_{\bar{\beta}}\delta(\Gamma_{\epsilon}^{-1}\otimes X ^{T})\bar{Y},\\ &ELBO^{(m)}=Q(\Omega^{(m)})+\frac{1}{2}\mathrm{log}|\Sigma_{\bar{\beta}}|.\end{split}\] 5: M-step: Update the model parameters by \[\delta^{(t+1)}=\frac{\bar{Y}^{T}(\Gamma_{\epsilon}^{-1}\otimes X)\mu_{\bar{ \beta}}}{\mu_{\bar{\beta}}^{T}(\Gamma_{\epsilon}^{-1}\otimes X^{T}X)\mu_{\bar{ \beta}}+\mathrm{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T}X)\Sigma_{\bar{\beta}} ]},\] \[\Gamma_{\epsilon}^{(t+1)}=\frac{1}{n}\begin{pmatrix}\mathrm{tr}[S_{11} ]&\mathrm{tr}[S_{12}]\\ \mathrm{tr}[S_{21}]&\mathrm{tr}[S_{22}]\end{pmatrix},\] \[\Gamma_{\beta}^{(t+1)}=\frac{1}{p-2}\begin{pmatrix}\mathrm{tr}[W_ {11}]&\mathrm{tr}[W_{12}]\\ \mathrm{tr}[W_{21}]&\mathrm{tr}[W_{22}]\end{pmatrix}.\] 6: Reduction-step: Rescale \(\Gamma_{\beta}^{(t+1)}=(\delta^{(t+1)})^{2}\Gamma_{\beta}^{(t+1)}\) and reset \(\delta^{(t+1)}=1\). 7:until the incomplete data log-likelihood \(ELBO^{(m)}\) stop increasing ``` **Algorithm 2** PX-EM algorithm with eigen-decomposition #### 2.2.3 Initialization In the previous algorithm design, we simply used the covariance of \(Y\) to initialize the parameters, setting \(\Gamma_{\beta}=\Gamma_{\epsilon}=\frac{1}{2}\text{cov}(Y)\). Although The method of moments (MoM) estimators may not be optimal, they are easy to compute and can be used to calculate an initial value of parameters for MLE-based iterative methods Wasserman (2004) like the MM algorithm and PX-EM algorithm. The parameters in the variance component set \(\Gamma=\{\Gamma_{\beta},\Gamma_{\epsilon}\}\) are denoted by \(\gamma=[\sigma_{1}^{2},\sigma_{3}^{2},\sigma_{2}^{2},\)\(\sigma_{4}^{2},\tau,\eta]^{T}\). The MoM estimator is obtained by solving the ordinary least squares (OLS) problem \[\operatorname*{argmin}_{\gamma}\left\|\text{vec}Y\text{vec}Y^{T}-(\Gamma_{ \beta}\otimes XX^{T}+\Gamma_{\epsilon}\otimes I_{n})\right\|_{F}^{2}.\] Denote \(Y=[y_{1},y_{2}]\), the MoM estimate of parameter \(\gamma\) is \[\hat{\gamma}=\begin{bmatrix}\frac{1}{2}S_{0}^{-1}&0&0\\ 0&\frac{1}{2}S_{0}^{-1}&0\\ 0&0&\frac{1}{2}S_{0}^{-1}\end{bmatrix}\begin{bmatrix}2y_{1}^{T}XX^{T}y_{1}\\ 2y_{1}^{T}y_{1}\\ 2y_{2}^{T}XX^{T}y_{2}\\ 2y_{2}^{T}y_{2}\\ 4y_{2}^{T}XX^{T}y_{1}\\ 4y_{2}^{T}y_{1}\end{bmatrix}, \tag{15}\] where \(S_{0}=\begin{bmatrix}\text{tr}[(XX^{T})^{2}]&\text{tr}[XX^{T}]\\ \text{tr}[XX^{T}]&n\end{bmatrix}\). ### Inference For these maximum likelihood-based methods, as the MM algorithm with respect to the incomplete data log-likelihood function in Formula 3 and the PX-EM algorithm for the complete data log-likelihood function in Formula 9, the difference between maximum likelihood estimate and the true parameter converges in distribution to a normal distribution with a mean of zero and a covariance matrix equal to the inverse of the Fisher information matrix as \(\sqrt{n}(\hat{\Gamma}-\Gamma^{*})\xrightarrow{d}\mathcal{N}(0,I^{-1})\). 
The maximum likelihood estimator is \(\sqrt{n}\)-consistent and asymptotically efficient, with the smallest asymptotic variance. In addition to estimating the precision matrix, we further quantify the uncertainty of each entry in the precision matrix, and of the existence and weight of the corresponding edge in the graph. The parameters in the variance component set \(\Gamma=\{\Gamma_{\beta},\Gamma_{\epsilon}\}\) are denoted by \(\gamma=[\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\gamma_{5},\gamma_{6}]^{T}:=[\sigma_{1}^{2},\sigma_{3}^{2},\sigma_{2}^{2},\sigma_{4}^{2},\tau,\eta]^{T}\). The covariance matrix of the maximum likelihood estimates can be calculated using the inverse of the Fisher Information Matrix (FIM), where the FIM is \(I(\gamma)=-E[\frac{\partial^{2}}{\partial\gamma^{2}}\log\Pr(\text{vec}Y|X;\Gamma)]\). Denote \(M_{1}=\begin{bmatrix}XX^{T}&0\\ 0&0\end{bmatrix}\), \(M_{2}=\begin{bmatrix}I_{n}&0\\ 0&0\end{bmatrix}\), \(M_{3}=\begin{bmatrix}0&0\\ 0&XX^{T}\end{bmatrix}\), \(M_{4}=\begin{bmatrix}0&0\\ 0&I_{n}\end{bmatrix}\), \(M_{5}=\begin{bmatrix}0&XX^{T}\\ XX^{T}&0\end{bmatrix}\), \(M_{6}=\begin{bmatrix}0&I_{n}\\ I_{n}&0\end{bmatrix}\), so that \(M_{i}=\frac{\partial\Omega}{\partial\gamma_{i}}\); then we have \[\frac{\partial^{2}}{\partial\gamma_{i}\partial\gamma_{j}}\ln P(\text{vec}Y|X;\Gamma)=\text{tr}[(\frac{1}{2}I_{2n}-\Omega^{-1}\text{vec}Y\text{vec}Y^{T})(\Omega^{-1}M_{i}\Omega^{-1}M_{j})]. \tag{16}\] For MLE-based methods, the covariance matrix of the estimated parameters \(\gamma\) is equal to the inverse of the Fisher information matrix, denoted as \(\text{cov}(\gamma)=I(\gamma)^{-1}\). Using this, the variance of \(\eta\) and its standard error can be obtained. Recall that non-zero precision entries correspond to edges in the graph, and a zero off-diagonal entry \(\Theta_{ij}\) is equivalent to a zero \(\eta\) in \(\Gamma_{\epsilon}\) for the pair \((i,j)\), since a zero off-diagonal entry remains zero after inverting a 2-by-2 matrix. A null hypothesis is set as \(H_{0}:\eta=0\), and the Wald test can be applied with a test statistic given by \(W=\frac{(\eta-\eta_{0})^{2}}{\mathrm{var}(\eta)}\), where \(\eta_{0}=0\). The p-value of the test for the existence of an edge between the pair \((i,j)\) is collected. Alternatively, the likelihood ratio test can be applied by calculating the difference between the log-likelihoods of the original parameter space \(\gamma\) and the restricted parameter space where \(\eta\) in \(\Gamma_{\epsilon}\) is constrained to zero. The test statistic is given by \(-2[\mathcal{L}(\Gamma_{0})-\mathcal{L}(\Gamma)]\), where the two parameter sets are optimized separately with respect to the log-likelihood function \(\mathcal{L}\) in Formula 3, and \(\Gamma_{0}\) denotes the parameters when \(\eta\) in \(\Gamma_{\epsilon}\) is set to zero. FLAG not only calculates the point estimates of the precision matrix but also computes standard errors and performs hypothesis testing on the precision entries, while many existing methods can only provide point estimates without efficient element-wise inference. After collecting the p-value of each entry in the precision matrix, large-scale hypothesis testing is considered to control the false discovery rate (FDR) based on the Benjamini-Hochberg procedure Benjamini and Hochberg (1995). Alternatively, the Bonferroni correction can be applied to control the family-wise error rate (FWER), which is relatively conservative Hastie et al. (2009).
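The edge-wise test and the multiple-testing step can be written in a few lines. The sketch below uses illustrative numbers only; `wald_edge_test` and `benjamini_hochberg` are hypothetical helper names, not functions from the paper. It applies the Wald statistic \(W=\hat{\eta}^{2}/\mathrm{var}(\hat{\eta})\) with a \(\chi^{2}_{1}\) reference distribution and then the Benjamini-Hochberg step over all pairs.

```python
import numpy as np
from scipy.stats import chi2

def wald_edge_test(eta_hat, var_eta):
    """Wald test of H0: eta = 0 for one pair (i, j); var_eta comes from the inverse FIM."""
    W = eta_hat**2 / var_eta
    return chi2.sf(W, df=1)          # p-value, since W ~ chi2(1) under H0

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under BH FDR control."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = alpha * (np.arange(1, m + 1) / m)
    below = pvals[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Hypothetical estimates for a handful of pairs (illustrative numbers only).
eta_hats = np.array([0.80, 0.02, -0.45, 0.01, 0.30])
var_etas = np.array([0.02, 0.02, 0.02, 0.02, 0.02])
pvals = np.array([wald_edge_test(e, v) for e, v in zip(eta_hats, var_etas)])
print("p-values:", np.round(pvals, 4))
print("edges kept at FDR 0.05:", benjamini_hochberg(pvals))
```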
This inference on the precision matrix can be used to extend the usage of FLAG when utilizing meta-analysis to jointly estimate multiple graphs from data collected in various groups.

## 3 Accelerated Algorithms and Extended Model

### Low-rank Update for Multiple Pairs

The most computationally intensive part of Algorithm 1, designed to estimate the variance components, is the eigen-decomposition with a computational complexity of \(\mathcal{O}(n^{3})\), which becomes increasingly burdensome as \(n\) grows. Although the eigen-decomposition is performed only once when estimating the precision of each pair of variables, a total of \(\frac{p(p-1)}{2}\) eigen-decompositions are required to estimate the entire precision matrix for all pairs. It is worth noting that the eigen-decomposition is calculated with respect to \(XX^{T}\), where each \(X\) for one pair of variables \((z_{i},z_{j})\) is the matrix \(Z\) with the \(i\)-th and \(j\)-th columns removed, denoted as \(X=Z_{-\{ij\}}\). To improve the computational efficiency, the eigen-decomposition of \(ZZ^{T}\) is performed first, and the eigen-decomposition of \(XX^{T}=ZZ^{T}-Z_{\{ij\}}(Z_{\{ij\}})^{T}\) is then replaced by a low-rank update based on that of \(ZZ^{T}\). Denote the eigen-decomposition of the symmetric matrix \(ZZ^{T}\) by \(ZZ^{T}=UDU^{T}\); then \(XX^{T}=UDU^{T}-Z_{\{ij\}}(Z_{\{ij\}})^{T}\), and the variance-covariance matrix \(\Omega=\Gamma_{\beta}\otimes XX^{T}+\Gamma_{\epsilon}\otimes I_{n}\) in the random effects model 2 can be written as \[\begin{split}\Omega&=\Gamma_{\beta}\otimes(UDU^{T})+\Gamma_{\epsilon}\otimes I_{n}-\Gamma_{\beta}\otimes(Z_{\{ij\}}Z_{\{ij\}}^{T})\\ &=(\Phi^{-T}\otimes U)[\Lambda\otimes D+I_{2}\otimes I_{n}-\Lambda\otimes(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})](\Phi^{-1}\otimes U^{T}).\end{split} \tag{17}\] In the MM algorithm, the log-likelihood function 3 involves both the log-determinant and inverse terms with respect to \(\Omega\), which need to be revised based on the low-rank update of the eigen-decomposition of \(ZZ^{T}=UDU^{T}\). Using the matrix determinant lemma, we have \(|\lambda_{l}D+I_{n}-\lambda_{l}U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T}|=|\lambda_{l}D+I_{n}|\,|I_{2}-\lambda_{l}(U^{T}Z_{\{ij\}})^{T}(\lambda_{l}D+I_{n})^{-1}U^{T}Z_{\{ij\}}|\). The inverse term is \[\begin{split}\Omega^{-1}=&(\Phi\otimes U)[\Lambda\otimes D+I_{2}\otimes I_{n}-\Lambda\otimes(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})]^{-1}(\Phi\otimes U)^{T}\\ =&\begin{bmatrix}\Phi_{11}U&\Phi_{12}U\\ \Phi_{21}U&\Phi_{22}U\end{bmatrix}\\ &\begin{bmatrix}(\lambda_{1}D+I_{n}-\lambda_{1}U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})^{-1}&0\\ 0&(\lambda_{2}D+I_{n}-\lambda_{2}U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})^{-1}\end{bmatrix}\\ &\begin{bmatrix}\Phi_{11}U^{T}&\Phi_{21}U^{T}\\ \Phi_{12}U^{T}&\Phi_{22}U^{T}\end{bmatrix},\end{split} \tag{18}\] where the block matrix \([\lambda_{l}D+I_{n}-\lambda_{l}(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})]^{-1},l=1,2\) on the diagonal of the center matrix is the inverse of a diagonal matrix with a rank-2 correction. This inversion can be calculated efficiently using the Woodbury matrix identity, giving \[\begin{split}&[\lambda_{l}D+I_{n}-\lambda_{l}(U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T})]^{-1}\\ =&[(\lambda_{l}D+I_{n})^{-1}+(\lambda_{l}D+I_{n})^{-1}U^{T}Z_{\{ij\}}(\tfrac{1}{\lambda_{l}}I_{2}-(U^{T}Z_{\{ij\}})^{T}(\lambda_{l}D+I_{n})^{-1}U^{T}Z_{\{ij\}})^{-1}(U^{T}Z_{\{ij\}})^{T}(\lambda_{l}D+I_{n})^{-1}],\end{split} \tag{19}\] for \(l=1,2\).
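As a sanity check on this low-rank machinery, the following sketch verifies Eq. 19 and the matrix determinant lemma numerically, with stand-in values for \(D\), \(\lambda_{l}\), and \(U^{T}Z_{\{ij\}}\) (all assumed for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes: n samples, a rank-2 correction from the two held-out columns.
n = 50
d = rng.uniform(0.5, 5.0, size=n)          # eigenvalues of Z Z^T (assumed)
W = rng.standard_normal((n, 2)) * 0.1      # plays the role of U^T Z_{ij}
lam = 0.7                                  # one generalized eigenvalue lambda_l

A = np.diag(lam * d + 1.0)                 # lambda_l * D + I_n
M = A - lam * W @ W.T                      # diagonal matrix with a rank-2 correction

# Woodbury form (Eq. 19): only a 2x2 system has to be solved.
A_inv = np.diag(1.0 / (lam * d + 1.0))
core = np.linalg.inv(np.eye(2) / lam - W.T @ A_inv @ W)
M_inv_woodbury = A_inv + A_inv @ W @ core @ W.T @ A_inv
print("matches direct inverse:", np.allclose(M_inv_woodbury, np.linalg.inv(M)))

# Matrix determinant lemma check for the log-determinant term.
logdet_lemma = np.sum(np.log(lam * d + 1.0)) \
    + np.log(np.linalg.det(np.eye(2) - lam * W.T @ A_inv @ W))
print("matches direct log-det:", np.isclose(logdet_lemma, np.linalg.slogdet(M)[1]))
```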
Then the log-likelihood function 3 can be rewritten as \[\begin{split}&\ell(\Gamma)=-\frac{1}{2}\ln\det\Omega-\frac{1}{2 }\text{vec}(\tilde{Y})^{T}\\ &\begin{bmatrix}[\lambda_{1}D+I_{n}-\lambda_{1}U^{T}Z_{\{ij\}}(U^ {T}Z_{\{ij\}})^{T}]^{-1}&0\\ 0&[\lambda_{2}D+I_{n}-\lambda_{2}U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T}]^{- 1}\end{bmatrix}\text{vec}(\tilde{Y}),\end{split} \tag{20}\] where \(\text{vec}(\tilde{Y})=(\Phi\otimes U)^{T}\text{vec}Y=\text{vec}(U^{T}Y\Phi)\) is calculated only once for each pair before the iteration. The coefficients of the parameters \(\Gamma_{\beta}\) and \(\Gamma_{\epsilon}\) in the gradient of the surrogate function 4, which are collected in the matrices \(M_{\beta}\) and \(M_{\epsilon}\), are revised accordingly, with the details in Supplementary 5. Similarly, the coefficients of the inverse terms \(\Gamma_{\beta}^{-1}\) and \(\Gamma_{\epsilon}^{-1}\) in the gradient of the surrogate function 4, which are collected in the matrices \(N_{\beta}^{T}N_{\beta}\) and \(N_{\epsilon}^{T}N_{\epsilon}\), are also revised based on the low-rank update as \(N_{\beta}^{T}N_{\beta}=(R^{(m)}\Gamma_{\beta}^{(m)})^{T}[U(D-U^{T}Z_{\{ij\}}(U^ {T}Z_{\{ij\}})^{T})U^{T}]R^{(m)}\Gamma_{\beta}^{(m)}\), where the term in the middle can be further simplified as \(EE^{T}=D-U^{T}Z_{\{ij\}}(U^{T}Z_{\{ij\}})^{T}=D^{\frac{1}{2}}[I_{n}-D^{-\frac{1 }{2}}U^{T}Z_{\{ij\}}(D^{-\frac{1}{2}}U^{T}Z_{\{ij\}})^{T}]D^{\frac{1}{2}}=(D^{ \frac{1}{2}}F^{\frac{1}{2}})(D^{\frac{1}{2}}F^{\frac{1}{2}})^{T}\), then we have \(E=D^{\frac{1}{2}}F^{\frac{1}{2}}\). Let \(J=D^{-\frac{1}{2}}U^{T}Z_{\{ij\}}\in\mathbb{R}^{n\times 2}\), then \(F^{\frac{1}{2}}=I_{n}+J(J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2}}-I_{2}]J^{T}\). According to the simultaneous congruence decomposition, we have \(\Gamma_{\beta}=\Phi^{-(t)T}\Lambda\Phi^{-(t)},\Gamma_{\epsilon}=\Phi^{-(t)T} \Phi^{-(t)}\). Then the matrices \(N_{\beta}\) and \(N_{\epsilon}\) can be obtained by \[\begin{split} N_{\beta}=E^{T}U^{T}R^{(t)}\Phi^{-(t)T}\Lambda\Phi^ {-(t)}\\ N_{\epsilon}=U^{T}R^{(t)}\Phi^{-(t)T}\Phi^{-(t)}\end{split}\] To further simplify the matrix \(N_{\beta}\), we can vectorize it to obtain \[\begin{split}\text{vec}(N_{\beta})=(\Phi^{-T}\Lambda)\otimes E^{T }\text{vec}(G)=\text{vec}(E^{T}G\Lambda\Phi^{-1}),\\ \text{vec}(N_{\epsilon})=(\Phi^{-T})\otimes I_{n}\text{vec}(G)= \text{vec}(G\Phi^{-1}),\end{split} \tag{21}\] with the details shown in Supplementary 5. Hence, the compact equation is \(N_{\beta}=E^{T}G\Lambda\Phi^{-1},N_{\epsilon}=G\Phi^{-1}\), and the expanded form of \(N_{\beta}\) is \[\begin{split} N_{\beta}=& E^{T}G\Lambda\Phi^{-1}=F^{ \frac{1}{2}}D^{\frac{1}{2}}G\Lambda\Phi^{-1}\\ =&\{I_{n}+J(J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2 }}-I_{2}]J^{T}\}D^{\frac{1}{2}}G\Lambda\Phi^{-1}\\ =&\Big{(}(D^{\frac{1}{2}}G)(\Lambda\Phi^{-1})\Big{)} +\Big{(}J(J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2}}-I_{2}]\Big{)}\Big{(}J^{T}( D^{\frac{1}{2}}G\Lambda\Phi^{-1})\Big{)},\end{split} \tag{22}\] where the matrix \(J\) and the term \((J^{T}J)^{-1}[(I_{2}-J^{T}J)^{\frac{1}{2}}-I_{2}]\) remain the same in all iterations, and thus they are only calculated once before the iterations. ### Meta-analysis for Multiple Groups A graph can be inferred individually for each group. Nevertheless, the limited samples size, particularly in high-dimensional setting, raises the follow-up research question of how to leverage data from different groups. 
For instance, there are university websites for students, faculty, and courses, which may share many common phrases in websites such as "email address" and "home page" with steady relationships between words. The goal is to leverage the universality across groups to estimate commonly shared pairs more accurately while maintaining the differences in the same pair across different groups, thus preserving the individuality. #### 3.2.1 One-to-one Meta-analysis Denote \(\Gamma_{\epsilon}=\begin{bmatrix}\sigma_{3}^{2}&\eta\\ \eta&\sigma_{4}^{2}\end{bmatrix}=\begin{bmatrix}\sigma_{3}^{2}&\rho\sigma_{3} \sigma_{4}\\ \rho\sigma_{3}\sigma_{4}&\sigma_{4}^{2}\end{bmatrix}\), and the partial correlation is \(\rho=\frac{\eta}{\sigma_{3}\sigma_{4}}\). The partial correlations from two groups A and B are denoted as \(\rho^{(A)}=\frac{\eta^{(A)}}{\sigma_{3}^{(A)}\sigma_{4}^{(A)}},\rho^{(B)}= \frac{\eta^{(B)}}{\sigma_{3}^{(B)}\sigma_{4}^{(B)}}\). The first step is to test whether the partial correlation of a pair of variables across two groups, A and B, is the same or not. The null hypothesis is \(H_{0}:\rho^{(A)}-\rho^{(B)}=0\), and the test statistic is given by \(\frac{\rho^{(A)}-\rho^{(B)}}{\sqrt{\text{se}(\rho^{(A)})^{2}+\text{se}(\rho^{ (B)})^{2}}}\). The standard error of partial correlation \(\rho\) can be obtained using the delta method, as \[\text{se}(\rho)^{2}=\begin{bmatrix}-\frac{1}{2}\sigma_{3}^{-3}\sigma_{4}^{-1 }\eta&-\frac{1}{2}\sigma_{3}^{-1}\sigma_{4}^{-3}\eta&\sigma_{3}^{-1}\sigma_{4} ^{-1}\end{bmatrix}\Sigma_{\Gamma_{\epsilon}}\begin{bmatrix}-\frac{1}{2} \sigma_{3}^{-3}\sigma_{4}^{-1}\eta\\ -\frac{1}{2}\sigma_{3}^{-1}\sigma_{4}^{-3}\eta\\ \sigma_{3}^{-1}\sigma_{4}^{-1}\end{bmatrix},\] where \(\Sigma_{\Gamma_{\epsilon}}\) is the covariance matrix of parameters in \(\Gamma_{\epsilon}=\begin{bmatrix}\sigma_{3}^{2}&\sigma_{4}^{2}&\eta\end{bmatrix} ^{T}\), which is a submatrix of the inverse of the Fisher information matrix. Specifically, the rows and columns that correspond to these three parameters in \(\Gamma_{\epsilon}\) from the inverse of the Fisher information matrix are taken. If the hypothesis is not rejected in this test, assume that \(\rho^{(k)}=\rho+e^{(k)}\), where \(k\in\{A,B\}\) and \(e\) is random noise. Then, we use inverse-variance weighting to aggregate \(\rho\) from different groups that share similar underlying \(\rho\) as \(\rho=\frac{\Sigma_{k}w^{(k)}\rho^{(k)}}{\Sigma_{k}w^{(k)}}\), with \(w^{(k)}=\frac{1}{\text{se}(\rho^{(k)})^{2}}\) as weights. The standard error of the shared underlying \(\rho\) is \(\text{se}(\rho)=\frac{1}{\sqrt{\Sigma_{k}w^{(k)}}}\). Then, we can adjust the parameter \(\eta\) in different groups by \(\eta^{(A,meta)}=\rho\sigma_{3}^{(A)}\sigma_{4}^{(A)},\eta^{(B,meta)}=\rho \sigma_{3}^{(B)}\sigma_{4}^{(B)}\), and the precision will change accordingly. FLAG-Meta provides a comprehensive analysis of both the similarities and differences between graphs from different groups by adaptively applying hypothesis testing on each edge across groups. Unlike other methods, such as PNJGL for differences and CNJGL for common parts from Mohan et al. (2012, 2014), FLAG-Meta does not require any extra design, in contrast to different penalty functions when target changes. FLAG-Meta utilizes element-wise by group-wise comparisons to obtain the fine-grained structures across groups, rather than penalizing the same entry across groups equivalently, regardless of the group relations, as in JGL Guo et al. (2011), JEMP Lee and Liu (2015), FGL and GGL Danaher et al. 
(2014), SCAN Hao et al. (2018), TFRE Bilgrau et al. (2020), and others. Furthermore, it is easy to incorporate prior information, such as group relations, group memberships, and relationships of edge subsets within group subsets, if available, into the FLAG-Meta framework. The majority of existing joint estimation methods are designed at the precision level: typically, a difference such as \(\|\theta_{ij}^{(k_{1})}-\theta_{ij}^{(k_{2})}\|,1\leq k_{1},k_{2}\leq K\) (Danaher et al. (2014), Price et al. (2015), Saegusa and Shojaie (2016), Price et al. (2021), Mohan et al. (2012)) is penalized to encourage similarity. In contrast, FLAG-Meta is flexible in testing similarity at the partial correlation (i.e., scaled precision) level, which is more robust in comparing the conditional dependence between the same variables across different groups after adjusting for the influence of the varying variances and the diagonal elements of the covariance or precision matrix. In conclusion, FLAG-Meta incurs only a small extra computational cost of \(\mathcal{O}(K^{2}p^{2})\) on top of FLAG. It is flexible in identifying both similarities and differences with a fine-grained, element-wise by group-wise structure, which makes it easier to incorporate prior information at any granularity, and it is accurate, with smaller standard errors and larger statistical power. Moreover, FLAG-Meta only requires summary statistics instead of raw data from different sources, making it more valuable, especially when data from different groups cannot be shared.
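For concreteness, a minimal sketch of the one-to-one meta-analysis step described in Section 3.2.1 is given below, with made-up estimates for a single pair; the helper names are ours, and the final rescaling of \(\eta\) by \(\sigma_{3}\sigma_{4}\) is only indicated in a comment.

```python
import numpy as np
from scipy.stats import norm

def meta_combine(rho, se):
    """Inverse-variance weighted estimate of a shared partial correlation
    and its standard error, given per-group estimates and standard errors."""
    rho, se = np.asarray(rho), np.asarray(se)
    w = 1.0 / se**2
    rho_meta = np.sum(w * rho) / np.sum(w)
    se_meta = 1.0 / np.sqrt(np.sum(w))
    return rho_meta, se_meta

def same_rho_test(rho_a, se_a, rho_b, se_b):
    """Two-sided test of H0: rho_A - rho_B = 0 used before pooling two groups."""
    z = (rho_a - rho_b) / np.sqrt(se_a**2 + se_b**2)
    return 2.0 * norm.sf(abs(z))

# Hypothetical per-group estimates for one pair (i, j) (illustrative numbers only).
rho_A, se_A = 0.32, 0.05
rho_B, se_B = 0.28, 0.08

p = same_rho_test(rho_A, se_A, rho_B, se_B)
if p > 0.05:                                  # not rejected: pool the two groups
    rho_meta, se_meta = meta_combine([rho_A, rho_B], [se_A, se_B])
    print(f"shared rho = {rho_meta:.3f} (se = {se_meta:.3f}), p(equal) = {p:.3f}")
    # Each group's eta would then be rescaled as eta = rho_meta * sigma_3 * sigma_4.
```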
Then, the result of partial correlation and precision after meta-analysis with group \(i\) is used to apply one-to-one meta-analysis with the result from group \(j\) for \(j\in G\backslash\{1,i\}\), and so on so force. The strength of this procedure is that the contribution of each additive considered group can be explicitly shown. The demonstration of this procedure in a real application will be shown in Section 4.2.2, which deals with the university webpage dataset. Specifically, the group with the smallest sample size is considered as the target group and then other groups are used one by one for meta-analysis in the ascending order of sample size. A special case of the additively applying one-to-one meta-analysis is to follow its original index \(1,2,...,K\), where the data in different groups are collected in a time series and the index of the group corresponds to the time steps. One-to-one meta-analysis can be applied sequentially, starting from group 1 and group 2, and then up to the data from group \(K\) with the most recent time. In conclusion, there are various ways to apply meta-analysis in multiple groups, depending on the aims of analysis. FLAG-Meta is flexible because it is based on the most fine-grained granularity across entries and groups. ### Covariate-adjusted Model for Joint Estimation In real-world applications, taking the gene co-expression network from human brain data as an example, sample properties of sample like brain regions and age periods can be considered as covariates. The conditional Gaussian graphical model (cGGM) is first presented by Yin and Li (2011), which takes covariates into consideration as \(z|v\sim\mathcal{N}(\zeta v,\Theta^{-1})\), where \(v\in\mathbb{R}^{q}\), and \(\zeta\in\mathbb{R}^{p\times q}\), rather than regarding means of random variables as constants, which is invariant to heterogeneity. The cGGM is estimated by a penalized likelihood-based method, where both \(\zeta\) and \(\Theta\) are penalized by \(\ell_{1}\) norm based on their sparsity assumptions. Then, a two-stage method proposed by Cai et al. (2013) to solve covariate-adjusted Gaussian graphical model \(z=\zeta v+\tilde{z}\) where \(\tilde{z}\) is a \(p\times 1\) random vector with mean zero and inverse covariance \(\Theta^{-1}\), using a constrained \(\ell_{1}\) minimization similar to that of Cai et al. (2011). The first step is to estimate the regression coefficient matrix \(\zeta\) by solving the optimization row by row: \(\hat{\zeta}=\operatorname*{argmin}_{\zeta\in\mathbb{R}^{p\times q}}|\zeta|_{1 },\)s.t. \(|S_{vz}-\zeta S_{vv}|\leq\lambda_{1}\) where \(S_{vz}=\frac{1}{N}\Sigma_{n=1}^{N}(z_{i}-\bar{z})(v_{i}-\bar{v})^{T}\) and \(S_{vv}=\frac{1}{N}\Sigma_{n=1}^{N}(v_{i}-\bar{v})(v_{i}-\bar{v})^{T}\). In the second step, the precision matrix \(\Theta\) is estimated when \(\hat{\zeta}\) is fixed from the previous step, by \(\hat{\Theta}=\operatorname*{argmin}_{\Theta\in\mathbb{R}^{p\times p}}|\Theta| _{1},\)s.t. \(|I_{p}-S_{zz}\Theta|_{\infty}\leq\lambda_{2}\) where \(S_{zz}=\frac{1}{N}\Sigma_{n=1}^{N}(z_{i}-\bar{z})(z_{i}-\bar{z})^{T}\). Similarly, a two-step procedure designed by Chen et al. (2016), known as asymptotically normal estimation with thresholding after adjusting covariates (ANTAC), to estimate \(\zeta\) and \(\beta\) separately using scaled lasso. 
Similarly, a two-step procedure, known as asymptotically normal estimation with thresholding after adjusting covariates (ANTAC), was designed by Chen et al. (2016) to estimate \(\zeta\) and \(\beta\) separately using the scaled lasso. In the first step, they solve the following optimization problems: \(\hat{\zeta}_{j},\hat{\sigma}_{jj}=\operatorname*{argmin}_{\zeta_{j}\in\mathbb{R}^{q},\sigma_{jj}\in\mathbb{R}^{+}}\frac{\|Z_{j}-\Upsilon\zeta_{j}\|_{2}^{2}}{2n\sigma_{jj}}+\frac{\sigma_{jj}}{2}+\lambda_{1}\Sigma_{k=1}^{q}\frac{\|\Upsilon_{k}\|}{\sqrt{n}}|\zeta_{jk}|\), for \(j=1,...,p\), where the parameter is theoretically specified as \(\lambda_{1}=\sqrt{\frac{2(1+\frac{\log p}{n})}{n}}\). Next, the adjusted data \(\tilde{Z}=Z-\Upsilon\hat{\zeta}\) are used to estimate the precision matrix from the regression residuals, after estimating the coefficients \(\beta\) by solving \(\hat{\beta}_{l},\hat{\sigma}_{ll}=\operatorname*{argmin}_{\beta_{l}\in\mathbb{R}^{p-2},\sigma_{ll}\in\mathbb{R}^{+}}\frac{\|\tilde{Z}_{l}-\tilde{Z}_{A^{c}}\beta_{l}\|_{2}^{2}}{2n\sigma_{ll}}+\frac{\sigma_{ll}}{2}+\lambda_{2}\Sigma_{k\in A^{c}}\frac{\|\tilde{Z}_{k}\|}{\sqrt{n}}|\beta_{lk}|\), \(l\in A=\{i,j\}\), where the parameter is theoretically specified as \(\lambda_{2}=\sqrt{\frac{2\log p}{n}}\). One limitation of the methods from Cai et al. (2013); Chen et al. (2016) is that the two-stage estimation process induces error propagation, since the estimation of the precision matrix relies on \(\hat{\zeta}\) from the first step. When taking covariates into consideration, the random-effects model for the Gaussian graphical model in (1) can be extended to \[Y=\Upsilon\zeta+X\beta+\epsilon,\beta_{i}^{T}\sim\mathcal{N}(0,\Gamma_{\beta}),\epsilon_{i}\sim\mathcal{N}(0,\Gamma_{\epsilon}), \tag{23}\] where \(\Upsilon\in\mathbb{R}^{n\times q}\) is the covariate matrix and \(\zeta\in\mathbb{R}^{q\times 2}\). The advantage of the flexible and accurate Gaussian graphical model with covariate adjustment (FLAG-CA) is that it evaluates the fixed effect \(\zeta\) and the random effect \(\beta\) in a single unified model, rather than in two separate steps. When adjusting for the effect of covariates, the model can be estimated with little extra computational cost in each iteration.

#### 3.3.1 MM Algorithm for FLAG-CA

For the revised MM algorithm, the incomplete-data log-likelihood is \[\begin{split}\ell(\Gamma)=&\ln\mathbb{P}(Y|X;\Gamma_{\beta},\Gamma_{\epsilon})\\ =&-\frac{1}{2}\ln\det\Omega-\frac{1}{2}(\bar{Y}-\bar{\Upsilon}\bar{\zeta})^{T}\Omega^{-1}(\bar{Y}-\bar{\Upsilon}\bar{\zeta})+c,\end{split} \tag{24}\] where \(\bar{Y}=\text{vec}(Y)\), \(\bar{\Upsilon}=I_{2}\otimes\Upsilon\in\mathbb{R}^{2n\times 2q}\), \(\bar{\zeta}=\text{vec}(\zeta)\), and \(c\) is a constant. The MM algorithm updates the fixed effect \(\zeta\) and the variance components \(\Gamma\) alternately, with one being updated while the other is fixed. In each iteration, the extra update of \(\zeta\) solves a weighted least squares problem, \(\bar{\zeta}^{(m+1)}=\text{argmin}_{\bar{\zeta}}\frac{1}{2}(\bar{Y}-\bar{\Upsilon}\bar{\zeta})^{T}\Omega^{-(m)}(\bar{Y}-\bar{\Upsilon}\bar{\zeta})=(\bar{\Upsilon}^{T}\Omega^{-(m)}\bar{\Upsilon})^{-1}\bar{\Upsilon}^{T}\Omega^{-(m)}\bar{Y}\). The revised MM algorithm for FLAG-CA is summarized in Algorithm 14 in the appendix.

#### 3.3.2 PX-EM Algorithm for FLAG-CA

The model of the PX-EM algorithm for the FLAG-CA method is \(Y=\Upsilon\zeta+\delta X\beta+\epsilon\), where \(\delta\in\mathbb{R}^{1}\) is the expanded parameter.
The complete-data log-likelihood when adjusting for covariates is \[\begin{split}\ell(\Gamma)=&\text{logPr}(\bar{Y}, \bar{\beta}|\Gamma_{\beta},\Gamma_{\epsilon};\bar{X})\\ =&-\frac{1}{2}\ln|\Omega|-\frac{1}{2}\text{vec}(Y- \Upsilon\zeta-\delta X\beta)^{T}\Omega^{-1}\text{vec}(Y-\Upsilon\zeta-\delta X \beta)\\ =&-\frac{n}{2}\ln|\Gamma_{\epsilon}|-\frac{1}{2}( \bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\tilde{X}\bar{\beta})^{T}(\Gamma_{ \epsilon}^{-1}\otimes I_{n})(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\tilde{X }\bar{\beta})\\ &-\frac{p-2}{2}\ln|\Gamma_{\beta}|-\frac{1}{2}\bar{\beta}^{T}( \Gamma_{\beta}^{-1}\otimes I_{p-2})\bar{\beta},\end{split} \tag{25}\] where \(\bar{Y}=\text{vec}Y,\bar{X}=I_{2}\otimes X,\bar{\beta}=\text{vec}(\beta)\) are the same transformations as in the previous section, and \(\bar{\Upsilon}=I_{2}\otimes\Upsilon\in\mathbb{R}^{2n\times 2q},\bar{\zeta}=\text{vec}(\zeta)\). Then the posterior distribution of \(\bar{\beta}\) is \(\mathcal{N}(\bar{\beta}|\mu_{\bar{\beta}},\Sigma_{\bar{\beta}})\), where \[\Sigma_{\bar{\beta}}^{-1}=\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+ \Gamma_{\beta}^{-1}\otimes I_{p-2},\] \[\mu_{\bar{\beta}}=(\delta^{2}\Gamma_{\epsilon}^{-1}\otimes X^{T}X+\Gamma_{ \beta}^{-1}\otimes I_{p-2})^{-1}\delta(\Gamma_{\epsilon}^{-1}\otimes X^{T})( \bar{Y}-\bar{\Upsilon}\bar{\zeta}).\] In the E-step, the expectation of complete-data log-likelihood in Equation 25 is taken with respect to \(\beta\), given the parameters from last iteration, as \[\mathcal{Q}(\Omega|\Omega_{old})= -\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log}| \Gamma_{\beta}|-\frac{1}{2}\{(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\bar{X} \mu_{\bar{\beta}})^{T}(\Gamma_{\epsilon}^{-1}\otimes I_{n})(\bar{Y}-\bar{ \Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}}) \tag{26}\] \[+\delta^{2}\text{tr}[(\Gamma_{\epsilon}^{-1}\otimes X^{T}X) \Sigma_{\bar{\beta}}]\}-\frac{1}{2}\{\mu_{\bar{\beta}}^{T}(\Gamma_{\beta}^{-1} \otimes I_{p-2})\mu_{\bar{\beta}}+\text{tr}[(\Gamma_{\beta}^{-1}\otimes I_{p- 2})\Sigma_{\bar{\beta}}]\}\] \[= -\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log} |\Gamma_{\beta}|\] \[-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\epsilon}^{-1}\otimes I_{n} )[(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}- \bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})^{T}+\delta^{2}\tilde {X}\Sigma_{\bar{\beta}}\tilde{X}^{T}]\Big{]}\] \[-\frac{1}{2}\text{tr}\Big{[}(\Gamma_{\beta}^{-1}\otimes I_{p-2}) (\mu_{\bar{\beta}}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{\beta}})\Big{]}\] \[= -\frac{n}{2}\text{log}|\Gamma_{\epsilon}|-\frac{p-2}{2}\text{log} |\Gamma_{\beta}|\] \[-\frac{1}{2}\text{tr}\Big{[}\Gamma_{\epsilon}^{-1}\begin{pmatrix} \text{tr}[S_{11}]&\text{tr}[S_{12}]\\ \text{tr}[S_{21}]&\text{tr}[S_{22}]\end{pmatrix}\Big{]}-\frac{1}{2}\text{tr} \Big{[}\Gamma_{\beta}^{-1}\begin{pmatrix}\text{tr}[W_{11}]&\text{tr}[W_{12}]\\ \text{tr}[W_{21}]&\text{tr}[W_{22}]\end{pmatrix}\Big{]},\] where \(S=(\bar{Y}-\bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})(\bar{Y}- \bar{\Upsilon}\bar{\zeta}-\delta\bar{X}\mu_{\bar{\beta}})^{T}+\delta^{2}\tilde {X}\Sigma_{\bar{\beta}}\tilde{X}^{T}=\begin{pmatrix}S_{11}&S_{12}\\ S_{21}&S_{22}\end{pmatrix}\), \(W=\mu_{\bar{\beta}}\mu_{\bar{\beta}}^{T}+\Sigma_{\bar{\beta}}=\begin{pmatrix}W _{11}&W_{12}\\ W_{21}&W_{22}\end{pmatrix}\). 
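As a rough illustration of the E-step quantities above, the following Python sketch computes the posterior mean and covariance of \(\bar{\beta}\) from small dense matrices; the function name and the assumption that all Kronecker products are formed explicitly (practical only for small problems) are choices of this sketch.

```python
import numpy as np

def posterior_beta(Y_bar, Ups_bar, zeta_bar, X, Gamma_eps, Gamma_beta, delta):
    """E-step posterior of vec(beta) for Y = Upsilon zeta + delta X beta + eps,
    following the formulas for Sigma_beta and mu_beta given above."""
    pm2 = X.shape[1]
    Ge_inv = np.linalg.inv(Gamma_eps)                       # 2 x 2
    Gb_inv = np.linalg.inv(Gamma_beta)                      # 2 x 2
    prec = (delta ** 2) * np.kron(Ge_inv, X.T @ X) + np.kron(Gb_inv, np.eye(pm2))
    Sigma_beta = np.linalg.inv(prec)                        # posterior covariance
    resid = Y_bar - Ups_bar @ zeta_bar                      # Y_bar - Upsilon_bar zeta_bar
    mu_beta = Sigma_beta @ (delta * np.kron(Ge_inv, X.T) @ resid)
    return mu_beta, Sigma_beta
```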
In the M-step, the parameters \(\delta,\Gamma_{\beta},\Gamma_{\epsilon}\) are updated similarly, the only difference being that, when adjusting for covariates, \(\bar{Y}\) is replaced by its mean-effect-offset version \((\bar{Y}-\bar{\Upsilon}\bar{\zeta})\) and an extra update for \(\zeta\) is added. The revised PX-EM algorithm for FLAG-CA is summarized in Algorithm 7 in the appendix.

## 4 Numerical Examples

In this section, the proposed methods are evaluated using various simulation settings and real data applications.

### Simulation Studies

The critical advantage of FLAG is its ability to perform statistical inference on each entry in the precision matrix, which quantifies the uncertainty associated with each edge. To verify the effectiveness of false discovery rate (FDR) control for graph recovery, a simple simulation setting is designed with \(p=50,n=300\), where the nonzero entries, all with value \(0.15\), are randomly generated with the nonzero proportion \(\pi\) varying over \(\{0.1,0.15,0.2,0.3,0.4,0.5,0.6,0.7\}\). The results from FLAG are compared with two methods that also support statistical inference and FDR control, ANT and GGM estimation with false discovery rate control (GFC, Liu (2013)). As shown in Figure 1, the FDR is controlled effectively by FLAG, while the FDR of ANT and GFC is out of control when the nonzero proportion exceeds \(0.5\).

#### 4.1.1 Block Magnified Matrix

To investigate the sensitivity of the methods to data scaling, unscaled data and scaled data (each column with variance 1) are used as input to the different methods for comparison. Since the estimated precision \(\hat{\theta}\) from the same method may differ depending on whether the data are scaled or not, the estimated partial correlation \(\rho_{ij}=-\frac{\theta_{ij}}{\sqrt{\theta_{ii}\theta_{jj}}}\) is used for comparison. The ground truth is a block magnified matrix \(\Theta=\begin{pmatrix}\alpha_{1}\Theta_{0}&0&0\\ 0&\alpha_{2}\Theta_{0}&0\\ 0&0&\alpha_{3}\Theta_{0}\end{pmatrix}\), where \((\alpha_{1},\alpha_{2},\alpha_{3})=(1,5,25)\). The simulated submatrix \(\Theta_{0}\) has all diagonal elements equal to one, and its off-diagonal elements are non-zero with probability \(\pi=0.05\). The non-zero off-diagonal elements are sampled from \(\{0.2,0.4\}\). Under this simulation setting, all the non-zero partial correlations are on the same scale, taking values in \(\{0.2,0.4\}\), which makes the comparison easier. Figure 2 shows the results from the centered data on the x-axis versus the results from the scaled data on the y-axis, where the points are expected to lie along the diagonal line. The estimated partial correlation of FLAG is not sensitive to data scaling, compared to CLIME, GLasso, Hub GLasso, and De-sparsified GLasso. Specifically, the penalty parameter \(\lambda\) of the GLasso method, tuned by 10-fold cross validation, is 0.063 for the centered data and 0.158 for the scaled data. This indicates different levels of sparsity in the estimated matrices depending on whether the input data are scaled or not. Referring to the GLasso subfigure, the data points located on the x-axis or y-axis represent entries that are zero in one setting and nonzero in the other. Methods with regularization of the precision matrix are particularly fragile when the entries of the precision matrix are of different scales. Specifically, when given unscaled data, such methods produce false positives in the region with relatively smaller magnitudes of entries, and false negatives in the region with relatively larger magnitudes of entries.
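Because the comparisons here are made on the partial-correlation scale, which is invariant to rescaling of the variables, a small helper like the following is convenient; it is only a sketch of the conversion formula stated above and is not part of any package discussed in this paper.

```python
import numpy as np

def partial_correlation(Theta):
    """Convert a precision matrix into the partial correlation matrix
    rho_ij = -theta_ij / sqrt(theta_ii * theta_jj)."""
    d = 1.0 / np.sqrt(np.diag(Theta))
    R = -(d[:, None] * Theta * d[None, :])   # rescale rows and columns, flip sign
    np.fill_diagonal(R, 1.0)                 # convention: unit diagonal
    return R
```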
Both the estimation error and the recovery performance of our method are not sensitive to data scaling, and they are comparable to the outcomes of the well-performing methods in this block-magnified matrix setting.

Figure 1: False discovery rate of graph recovery by FLAG, ANT, and GFC as the nonzero proportion varies.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
Methods & Data centered & Data centered and scaled \\
\hline
MLE & 0.859 & 0.859 \\
CLIME & 0.162 & 0.166 \\
FLAG & 0.178 & 0.178 \\
ANT & 0.157 & 0.157 \\
BGGM & 0.803 & 0.772 \\
GLasso & 0.246 & 0.193 \\
HubGLasso & 0.251 & 0.227 \\
DsGLasso & 0.628 & 0.574 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Relative Frobenius norm error of the estimated partial correlation matrix using different methods, with 100 replications.

Figure 2: Scatter plots of the estimated partial correlation using different methods, with each data point representing the result from the scaled data in Y versus the result from the centered data in X.

**Hub Structure.** The ground truth for the precision matrix is the adjacency matrix of a weighted graph with a hub structure, where a hub is a node that connects to many other nodes, with a degree that exceeds the average Barabasi (2013). The hub structure exists widely in real-world applications, such as the structural and functional connectivity hubs in the human brain Van den Heuvel and Sporns (2013), a fragile financial instrument that can have a major impact on the financial market by influencing the prices of many related securities, and the source nodes of anomalous activity in the cyber security field Hero and Rajaratnam (2012). The hub nodes in the ground truth are indexed by \(1,...,h\), where the number of hub nodes is smaller than the dimension, i.e., \(h<p\). The precision matrix can be split into blocks as \(\Theta=\begin{pmatrix}\Theta_{aa}&\Theta_{ab}\\ \Theta_{ba}&\Theta_{bb}\end{pmatrix}\), where \(a=\{1,...,h\}\) and \(b=\{h+1,...,p\}\). Specifically, \(\Theta_{aa}\) encodes the conditional dependence between hub nodes, \(\Theta_{ab}\) and \(\Theta_{ba}\) correspond to the edges between hub and non-hub nodes, and the dependencies between the non-hub nodes are in block \(\Theta_{bb}\). By the conditional Gaussian property, \(\Theta_{ba}=-\beta\Theta_{aa}\), where \(\Theta_{ba}\in\mathbb{R}^{(p-h)\times h},\beta\in\mathbb{R}^{(p-h)\times h}\), and \(\Theta_{aa}\in\mathbb{R}^{h\times h}\). Once \(\Theta_{aa}\) and \(\beta\) are generated, the true \(\Theta_{ba}\) is obtained through multiplication. According to the definition of a hub in a graph, each hub node has many connections with other nodes, and thus \(\Theta_{ba}\) is required to have a large proportion of non-zero entries. To investigate whether the sparsity of the true \(\beta\) influences the precision estimation, the \(h=10\) hubs are separated into five pairs: the columns of \(\beta\) that correspond to hub nodes with odd indices are fully populated with non-zero elements, while the proportion of non-zero entries in the columns with even indices is varied across \(\{0.9,0.7,0.5,0.3,0.1\}\). The remaining block matrix \(\Theta_{bb}\), which encodes the relationships between non-hub nodes, is a relatively sparse matrix with a non-zero proportion of \(\pi=0.3\). Specifically, the diagonal elements of \(\Theta_{bb}\) are set to \(50\), and the non-zero elements are uniformly generated from \(U[3,5]\).
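The following Python sketch assembles the block structure of the simulated precision matrix described above. Pieces that the description leaves open are assumptions of this sketch: \(\Theta_{aa}\) is taken as the identity, the non-zero values of \(\beta\) are drawn from \(U[0.2,0.4]\), and positive definiteness of the assembled matrix is not enforced.

```python
import numpy as np

rng = np.random.default_rng(0)
p, h = 50, 10
props = [0.9, 0.7, 0.5, 0.3, 0.1]          # non-zero proportion for the even column of each hub pair

Theta_aa = np.eye(h)                        # assumed; not fully specified in the text

# beta: within each of the five hub pairs, the odd-indexed (1-based) column is fully
# non-zero and the even-indexed column has the listed non-zero proportion.
beta = np.zeros((p - h, h))
for k in range(h // 2):
    beta[:, 2 * k] = rng.uniform(0.2, 0.4, p - h)               # odd-indexed hub column
    mask = rng.random(p - h) < props[k]
    beta[mask, 2 * k + 1] = rng.uniform(0.2, 0.4, mask.sum())   # even-indexed hub column

Theta_ba = -beta @ Theta_aa                 # conditional Gaussian relation Theta_ba = -beta Theta_aa

# Theta_bb: sparse block for non-hub nodes, diagonal 50, off-diagonals from U[3, 5]
Theta_bb = np.diag(np.full(p - h, 50.0))
iu = np.triu_indices(p - h, k=1)
nz = rng.random(iu[0].size) < 0.3
vals = rng.uniform(3.0, 5.0, nz.sum())
Theta_bb[iu[0][nz], iu[1][nz]] = vals
Theta_bb[iu[1][nz], iu[0][nz]] = vals

Theta = np.block([[Theta_aa, Theta_ba.T], [Theta_ba, Theta_bb]])
# Positive definiteness is not checked here; the actual simulation presumably
# adjusts the generated matrix so that it is a valid precision matrix.
```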
In this simulation, the dimension is set to \(p=50\) and the sample size is \(n=200\). Figure 3(a) shows the true precision matrix and the estimated precision matrices from the different methods. Edges involving hub nodes, which correspond to entries in block matrices A and B, are colored purple for positive values and green for negative values. Edges between non-hub nodes, which correspond to entries in block matrix C, are colored brown. Some entries in the estimated matrices are gray, indicating that the estimated value is far outside the range of the true values. In block A, which encodes the conditional dependencies between hub nodes, several methods, including MLE, CLIME, Hub GLasso, ANT, and BGGM, produce false positives. The non-zero entries in block matrix \(\Theta_{aa}\) are underestimated by the GLasso, CLIME, Hub GLasso, and Desparsified GLasso methods, and overestimated by the BGGM method. In block B, which captures the edges between hubs and non-hub nodes, the results from the CLIME and Hub GLasso methods miss the majority of the non-zero elements. In block C, whose non-zero entries indicate the conditional dependencies between non-hub nodes, several methods, including MLE, GLasso, Hub GLasso, and BGGM, produce inaccurate estimates of the diagonal elements. A large proportion of the estimates in block matrix \(\Theta_{bb}\) from MLE and Desparsified GLasso fall far outside the true range. By contrast, FLAG performs well in both precision matrix estimation and graph recovery, producing estimates that fall within a range similar to the ground truth in all blocks and producing fewer false positives. More detailed comparisons are provided in the following two parts, based on repeated experiments.

Figure 3: The results of precision estimation and graph recovery using different methods.

**Precision Matrix Estimation.** Figure 3(b) compares the estimated precision between hub nodes as the sparsity of the coefficient \(\beta\) varies. MLE overestimates the precision values, while the penalized likelihood-based methods, including GLasso, CLIME, Hub GLasso, and Desparsified GLasso, underestimate them. The underestimation by the ANT method becomes more pronounced as the non-zero proportion increases. To explain this observation, a detailed comparison between the ANT and FLAG methods is conducted. Figure 10 gives a detailed explanation of the entries in the precision matrix as the sparsity of the intrinsic \(\beta\) varies, together with a comparison between FLAG and ANT. Owing to the sparsity assumption that ANT places on \(\beta\), \(\beta^{(ANT)}\) has many zero entries, which induces an underestimation of \(\text{var}(X\beta)\) and an overestimation of \(\text{var}(\epsilon)\). As a result, the precision estimated by ANT is too small, while FLAG still estimates the precision accurately in this case. Table 2 shows that FLAG estimates the whole precision matrix accurately, with a particularly pronounced advantage in submatrix A, which encodes the conditional dependence among the hub nodes.

**Graph Recovery.** As shown in Figure 3(c), FLAG achieves the best graph recovery in blocks A and C, with an Area Under the ROC Curve (AUC) of 0.992 in the block determining edges between hub nodes and an AUC of 0.634 in the block of edges between hub nodes and non-hub nodes. It should be noted that all the entries in block B are non-zero in the ground truth, and therefore no false positives exist there.
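The two evaluation criteria used here can be computed as in the following sketch; the exact definitions (relative Frobenius error and the scoring of edges by the magnitude of the estimated entries) are assumptions of this illustration, and scikit-learn is assumed to be available for the ROC computation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def relative_frobenius_error(Theta_hat, Theta_true):
    """||Theta_hat - Theta_true||_F / ||Theta_true||_F, as one common way of
    reporting the estimation error in Tables 1 and 2."""
    return np.linalg.norm(Theta_hat - Theta_true) / np.linalg.norm(Theta_true)

def edge_recovery_auc(Theta_hat, Theta_true):
    """AUC for recovering the support of the true precision matrix, scoring
    each edge by the magnitude of its estimated off-diagonal entry."""
    iu = np.triu_indices(Theta_true.shape[0], k=1)   # each edge counted once
    labels = (np.abs(Theta_true[iu]) > 0).astype(int)
    scores = np.abs(Theta_hat[iu])
    return roc_auc_score(labels, scores)
```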
The False Discovery Rate (FDR) is well controlled over the entire precision matrix, as shown in the leftmost subplot of Figure 11. The actual FDR is relatively conservative over the whole precision matrix because of the dense connections between hubs and non-hub nodes; the false discovery rate within block B is zero in this setting. This observation is consistent with the finding that the actual FDR is smaller than the controlled level in graphs with hub structures, as reported in Liu (2013). In conclusion, FLAG is the only method that performs well in both precision matrix estimation and graph recovery across all blocks, particularly for the edges between hubs, where it outperforms the other methods without any explicit assumption on the graph structure. In graphs with a hub structure, hub nodes are crucial components due to their numerous connections and greater influence on other nodes and on the graph as a whole. Consequently, the edges of hub nodes are more informative, and FLAG exhibits better performance than the other methods in this setting.

\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
Methods & Precision Matrix & Block A & Block B & Block C \\
\hline
MLE & 0.772 (0.005) & 0.492 (0.007) & 1.129 (0.009) & 0.771 (0.005) \\
GLasso & 0.504 (0.007) & 0.320 (0.006) & 0.610 (0.007) & 0.504 (0.007) \\
CLIME & 0.286 (0.001) & 0.606 (0.001) & 1 (0) & 0.279 (0.001) \\
HubGLasso & 0.664 (3e-4) & 0.549 (0.001) & 0.927 (0.001) & 0.663 (3e-4) \\
DsGLasso & 0.412 (0.001) & 0.452 (0.002) & 0.662 (0.002) & 0.411 (0.001) \\
ANT & 0.334 (0.001) & 0.181 (0.004) & 0.758 (0.003) & 0.331 (0.001) \\
BGGM & 1.067 (0.005) & 59.22 (0.687) & 5.300 (0.074) & 0.819 (0.007) \\
**FLAG** & **0.329** (0.001) & **0.160** (0.004) & 0.847 (0.003) & 0.325 (0.001) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Relative Frobenius norm error of the estimated precision matrix using different methods, with 100 replications.

#### 4.1.2 Multiple Graphs

For each graph, a cluster structure is constructed, and the corresponding precision matrix is a block diagonal matrix. The dimension for each group is \(p=20\), and the sample sizes for the two groups are \(n_{1}=100\) and \(n_{2}=200\). Within each cluster, all nodes are connected to the node with the smallest index to ensure connectivity, and edges between the remaining pairs of nodes exist with probability \(\pi=0.3\). The diagonal elements of the precision matrix are set to one, and the other non-zero entries are set to 0.2 for easier comparison. First, the entries of the partial correlation matrix in each group are estimated individually, and each entry is tested for whether it equals zero, with the p-values of these tests collected. Then, for the null cases (entries that are zero in the ground truth), we test whether the partial correlations of the same entry in the two groups are equal. For entries for which this hypothesis is not rejected, meta-analysis is applied, and the p-values for testing whether the entry after meta-analysis is zero are obtained. Similarly, entries that are non-zero in both groups in the ground truth are collected and tested following the same routine. Individual estimation and inference of the partial correlations already show large power, as the points deviate from the diagonal line, with the power in group 2 being larger because it has more samples. After meta-analysis, the power exceeds that of either single group, and the enhancement of power is more pronounced for group 1.
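The equality test between the two groups used above can be sketched as follows; the two-sample z-statistic with independent, approximately normal estimates is an assumption of this illustration rather than the exact test statistic of FLAG-Meta.

```python
import numpy as np
from scipy.stats import norm

def test_equal_partial_corr(rho1, se1, rho2, se2):
    """Two-sided test of H0: rho^(1) - rho^(2) = 0 for one entry, assuming the
    two group estimates are independent and approximately normal."""
    z = (rho1 - rho2) / np.sqrt(se1 ** 2 + se2 ** 2)
    p_value = 2.0 * norm.sf(abs(z))
    return z, p_value

# Example: an entry whose estimates in the two groups are close.
print(test_equal_partial_corr(0.21, 0.05, 0.18, 0.08))
```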
FLAG-Meta achieves larger power, better graph recovery, smaller standard errors for each entry, and smaller estimation error for the whole precision matrix. The improvement is more pronounced in group 1, which has the smaller sample size.

Figure 4: The comparison of statistical inference of the precision matrix and graph recovery between FLAG-based and GLasso-based methods. The sample size is 80 for Group 1 and 120 for Group 2.

### Real Data Analysis

#### 4.2.1 Human Brain Gene Expression Data

We apply the FLAG method to the spatio-temporal gene expression data of the human brain Kang et al. (2011). Seven high-confidence Autism spectrum disorder (ASD) genes (GRIN2B, DYRK1A, ANK2, TBR1, POGZ, CUL3, SCN2A) Willsey et al. (2013) are selected to analyze the co-expression network among ASD-related genes. The data from periods 1 and 2, which correspond to the early stages of brain development, as well as the groups with sample sizes smaller than three, are all excluded Lin et al. (2017). The data are integrated into several groups across seven time periods and four brain regions. Our aim is to discover how the conditional dependence among ASD-related genes changes over time or across regions. The time periods are: 1) early fetal, [10 PCW, 19 PCW); 2) late fetal, [19 PCW, 38 PCW); 3) infancy, [0M, 12M); 4) childhood and adolescence, [1Y, 20Y]; 5) young adulthood, [20Y, 40Y); 6) middle adulthood, [40Y, 60Y); 7) late adulthood, age \(\geq\) 60Y. The brain regions are: 1) parietal lobe, occipital lobe, temporal lobe; 2) frontal lobe; 3) striatum, hippocampus, amygdala; 4) thalamus, cerebellum. To compare the results of different methods on this dataset, we use the group in period 13 and region 2, which has a relatively large sample size of 85, as shown in Figure 13. Since the dimension equals seven and the sample size equals 85, the maximum likelihood estimator, i.e., the inverse sample covariance taken as the estimated precision matrix, is a good reference. The estimates from the CLIME method have smaller magnitude than the reference. The magnitude of the estimated precision and partial correlation of the gene pair (DYRK1A, TBR1) from the ANT method is about half of the reference's, while the FLAG estimate equals the reference's. The reason for this underestimation by ANT is similar to what we observe in the simulation: the large zero proportion (80%) in \(\beta^{(\text{ANT})}\) induces a smaller \(\text{var}(X\beta)\) and a larger \(\text{var}(\epsilon)\) (0.386 by ANT and 0.316 by FLAG), resulting in a smaller estimated precision (0.41 by ANT and 1.01 by FLAG) and a smaller magnitude of partial correlation (-0.14 by ANT and -0.29 by FLAG). In addition, due to the underestimation of the precision, the graph inferred by the ANT method omits the edge between DYRK1A and TBR1. In the graphs from the FLAG and ANT methods, red lines indicate highly significant edges, with test p-values smaller than 0.05 after Bonferroni correction, and blue lines indicate edges that are significant after controlling the False Discovery Rate (FDR) at 0.1. Figure 5 shows the temporal variation of the conditional dependence between ASD-related genes within each row and the spatial variation within each column. The edges inferred by Bonferroni correction are drawn in red, and those with FDR \(\leq\) 0.1 in blue. The thickness of each edge is weighted by the magnitude of its partial correlation.
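A sketch of the two multiple-testing corrections used for the edge colorings is given below; the Benjamini-Hochberg step-up rule is one standard way to control the FDR at the 0.1 level quoted above, and its suitability here (it assumes independent or positively dependent tests) is an assumption of this illustration.

```python
import numpy as np

def bonferroni(p_values, alpha=0.05):
    """Edges significant after Bonferroni correction at level alpha."""
    p = np.asarray(p_values)
    return p <= alpha / p.size

def benjamini_hochberg(p_values, alpha=0.1):
    """Benjamini-Hochberg procedure: boolean mask of edges declared
    significant at FDR level alpha."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.where(passed)[0])      # largest index meeting the threshold
        keep[order[: k + 1]] = True
    return keep
```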
As an example of spatial variation in time period 6-7, the gene pair (DYRK1A, CUL3) has precisions of -1.16, -1.12, -0.96, and -2.33 and partial correlations of 0.37, 0.36, 0.39, and 0.57 in regions 1, 2, 3, and 4, respectively. In period 6-7, the conditional dependence between this pair exists in all regions, and the partial correlation is consistent across the first three regions, while it is higher in region 4. Moreover, there are many edges involving the gene DYRK1A; this is evident in the graphs of region 2, where the edge for the pair (DYRK1A, ANK2) is present in almost all periods except period 14. This finding is supported by the evidence that DYRK1A, as a protein kinase, plays an important role in the signaling pathway regulating cell proliferation and may be involved in brain development Di Vona et al. (2015). As shown in Figure 6, the partial correlations estimated by different methods are compared in the upper subfigure. Using the maximum likelihood estimate with a relatively large sample size as a reference, the estimate from FLAG is very close to the reference and thus accurate. However, the GLasso method shrinks some precision entries to zero, the estimates from the CLIME method have smaller magnitude, and in periods 10-12 the ANT method produces a non-zero estimate in contrast to all other methods. These cases, such as GLasso producing some false negatives, CLIME underestimating the magnitude of the precision and partial correlation, and ANT producing some inaccurately estimated entries, are consistent with what we observed in the simulation studies.

Figure 5: Inferred graphs by FLAG, arranged by region 1, 2, 3, 4 in different rows and time periods in different columns.

Figure 6: Partial correlation estimated by different methods between the gene pair (GRIN2B, POGZ) using the expression data in brain region 2.

#### 4.2.2 University Webpage Data

The webpage dataset was collected from the computer science departments of four universities by the World Wide Knowledge Base (WebKB) project of the Carnegie Mellon University (CMU) Text Learning Group, with pages manually classified into several categories. The raw data were pre-processed by Cachopo et al. (2007) with word stemming. The occurrences of terms in 544 student webpages, 374 faculty webpages, 310 course webpages, and 168 project webpages are used in the following analysis. First, the word count of the \(i\)-th term in the \(j\)-th webpage is denoted as \(f_{i,j}\), which is used to compute the following relative frequency of terms in each document (webpage). The Document-Term Matrix (DTM) weighting for terms in \(D\) documents is the product of local and global weights, i.e., \(x_{i,j}=L_{i,j}G_{j}\), where the log local weight is \(L_{i,j}=\log(f_{i,j}+1)\), and the entropy global weight is \(G_{j}=1+\frac{\Sigma_{i}p_{i,j}\log p_{i,j}}{\text{D}}\), with \(p_{i,j}=\frac{f_{i,j}}{gf_{j}}\). The 100 terms with the largest entropy \(-\Sigma_{i}p_{i,j}\log p_{i,j}\) are selected for the following analysis. Standardization scales the data to zero mean and unit variance, but different standardization procedures may lead to different outcomes. The document-term weighting matrix is denoted as \(X\). Specifically, when the raw count matrix comes from all webpages, the weighting matrix is \(X^{(all)}\), and when the webpages of a single category are preprocessed separately, the weighting matrices are \(X^{(student)},X^{(faculty)},X^{(course)}\), and \(X^{(project)}\).
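A Python sketch of the log-entropy weighting described above is given below; it uses rows for documents and columns for terms, and it normalizes the entropy term by the log of the number of documents, which is the usual log-entropy convention and should be treated as an assumption relative to the exact formula used in the paper.

```python
import numpy as np

def log_entropy_weights(F):
    """Log-entropy document-term weighting.  F is a (documents x terms)
    matrix of raw counts; returns L * G with L the log local weight and
    G the entropy global weight of each term."""
    F = np.asarray(F, dtype=float)
    n_docs = F.shape[0]
    L = np.log(F + 1.0)                               # local weight log(f + 1)
    gf = F.sum(axis=0)                                # global frequency of each term
    P = np.divide(F, gf, out=np.zeros_like(F), where=gf > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    G = 1.0 + plogp.sum(axis=0) / np.log(n_docs)      # entropy global weight per term
    return L * G
```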
It is clear that computing the weights from all webpages together or from each category separately leads to different term weights because of the different term frequencies. Thus, standardizing \(X^{(all)}\) and then taking the corresponding rows for each category differs from standardizing the weights computed from each category separately. Even when the data are on the same scale, methods with parameters to be tuned, such as CLIME, GLasso, Hub GLasso, and Desparsified GLasso, still give unstable results when the data standardization differs, while FLAG gives stable results, as shown in Figure 14. After this comparison, the data standardization is fixed to centering and scaling the data from each category separately in the following analysis. Given the data of a single category as input, FLAG infers one graph per category, yielding four graphs. The edges common to the graphs of all four categories correspond to standard phrases on computer science websites, such as ('comput', 'scienc'), ('home', 'page'), ('high', 'perform'), and ('commun', 'network'). The corresponding precisions and partial correlations are far from zero, and the p-values of the tests are much smaller than \(1e-4\). Compared with the results obtained by the ANT method, some standard phrases are omitted by ANT but successfully identified by FLAG. For example, the common phrase 'related area' links the term pair ('relat', 'area'); however, ANT underestimates its precision and fails to identify this edge in the course category data. More precisely, the estimated precision and partial correlation of this pair by ANT are 0.13 and -0.06, respectively, while the estimates by FLAG are 0.52 and -0.22. This situation is consistent with our finding in the simulation that the underestimation of the precision by ANT comes from a large zero proportion (80.6%, 79.6%) in \(\beta\), which induces a smaller \(\text{var}(X\beta)\) and a larger \(\text{var}(\epsilon)\) ((0.46, 0.62) by ANT and (0.36, 0.53) by FLAG), and thus a smaller estimated precision. The graphs inferred by FLAG capture different conditional dependencies in different categories. Taking the term 'data' as an example, the edge ('data', 'educ') in the student category is significant, with a precision of -0.23 and a p-value of \(5e-4\) for the corresponding hypothesis test. The edge ('data', 'structur') has a p-value of \(7e-10\) in the faculty category and \(2e-10\) in the course category. The edge ('data', 'model') in the project category has a p-value of \(3e-5\). The estimated precisions and partial correlations have relatively large standard errors due to the small sample size in the project category, which can be alleviated by meta-analysis. In addition to graph recovery in each category, the inference can be extended to test the differences of the partial correlation of the same pair across categories, as shown in Table 3, with the null hypothesis \(\rho^{(\text{category A})}-\rho^{(\text{category B})}=0\). Specifically, the test of the pair ('data', 'model') between the project category and the faculty category rejects the null hypothesis, indicating that the partial correlation is significantly different between these two categories.

Table 3: Edge differences of term pairs ('data', 'structure') and ('data', 'model') across categories.

The results from the project category, due to its small sample size, have relatively large standard errors in the estimated precision and partial correlation, and thus its inferred graph has few edges because of the relatively small power.
Since all the data come from terms on the webpages of university computer science departments, it is natural to leverage their shared structure to enhance the result in the project category. To obtain the result after one-to-one meta-analysis and to identify how each category contributes to the enhancement, each category is incorporated for meta-analysis in ascending order of sample size: course, faculty, and then student. The whole procedure is shown in Figure 7. In each step, the data from one category are combined with the previous result (shown in grey), and the edges that are only detected after meta-analysis are shown in red. Blue dotted lines denote edges that appear in the previous result but are no longer significant after meta-analysis. The first meta-analysis is between the project and course categories. Compared with the graph inferred from the project data alone, 61 edges are added. The pairs ('engin', 'software') and ('language', 'implement'), whose dependencies are supported by the common phrase 'software engineering' in computer science and by the co-occurrence of the related words 'implement' and (programming) 'language', are not found from the data of a single category but are discovered by meta-analysis between the project and course data. The next step is the meta-analysis between the combined project-course result and the result from the faculty category. As shown in Figure 7(b), 46 edges are added (in red), while 5 edges are removed (blue dotted lines). The meta-analysis at this stage not only further increases the power but also removes some possible false positives, such as ('high', 'select') and ('area', 'project'). Overall, the meta-analysis provides a result for the project category with smaller standard errors and larger power. For comparison, taking the result in Figure 7(c), which shows the many-to-one meta-analysis with respect to the project category, the node 'data' is connected only to 'model' in the single-category project result, while in the FLAG-Meta result it is connected to 'model', 'structur', and 'research'. From Figure 15, the joint GLasso fails to recover the reasonable edges between 'data' and 'structur' or 'research', and it introduces some false positives, such as edges between 'data' and 'class', 'develop', and 'program'.

Figure 7: A many-to-one meta-analysis using the FLAG-Meta method, with the project category as the pivot for progressive analysis.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
Pair of Terms & \(\rho^{\text{(project)}}\) & \(\rho^{\text{(course)}}\) & \(\rho^{\text{(meta)}}\) \\
\hline
(’engin’, ’software’) & 0.32 (0.09) & 0.19 (0.06) & 0.24 (0.05) \\
(’language’, ’implement’) & 0.30 (0.08) & 0.17 (0.06) & 0.21 (0.05) \\
(’assist’, ’support’) & 0.24 (0.09) & 0.20 (0.06) & 0.21 (0.05) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Example of edges missed by FLAG given the data from individual groups, but unveiled by FLAG-Meta.

#### 4.2.3 U.S. Stock Prices

The raw data consist of daily close prices of 99 stocks in the S&P 100 Index from 2018-01-02 to 2022-06-30. The stock with the code 'DOW' in the S&P 100 Index is excluded because its series starts on 2019-03-21, leaving more than 14 months of missing data. The data are preprocessed by taking the logarithmic difference, \(Z_{i,j}=\log P_{i,j}-\log P_{i-1,j}\), where \(P_{i,j}\) is the close price of the \(j\)-th stock on the \(i\)-th day. The log returns are used as the input data in the following analysis, where the outcome is a perceived network Anufriev and Panchenko (2015), and the conditional dependence in the stock network represents return co-movements.
Due to the small variance of the log returns, which is around \(e^{-4}\), the precision entries are roughly of the order \(e^{4}\) to \(e^{5}\). Such a large magnitude increases the instability of the precision estimates, and consequently of the partial correlations. From Figure 16, it can be observed that the partial correlation estimated by FLAG is the least sensitive to data scaling, as its scattered points are the most tightly clustered around the diagonal line, indicating that FLAG provides more consistent results across different scales of the input data. In contrast, the results from regularization-based methods such as CLIME, GLasso, HubGLasso, and DsGLasso rely heavily on the penalty parameter. The tuned parameters vary widely depending on whether the input data are raw log returns or scaled data. On the one hand, the two sets of tuned parameters have no correspondence, and thus the results of each penalty-based method also vary greatly. On the other hand, given two results from the same method, one from scaled and one from unscaled input, it is difficult to decide which result to use, even though both are expected to reveal the same underlying structure in the data.

Inspired by Bernardini et al. (2022), we are also interested in whether and how the S&P 100 stock co-movement network reflects the impact of the Covid-19 pandemic, using a rolling window of one-year length, shifted by one month at each step, as the input data. Recall the stock market crash in 2020: there were trading curbs on March 9th, March 12th, March 16th, and March 18th, which occurred 25 years after the previous one in 1997. Such stock market crashes indicate increased instability in the market due to the Covid-19 pandemic. A large complex system transitions from stable to unstable once its connectance increases beyond a critical level, as suggested by Gardner and Ashby (1970). It is common knowledge in financial markets that the correlations between securities, whether marginal or partial, increase significantly during market crises, just as the prices of most securities drop together with negative returns. Therefore, it is natural to use the stock network, with edges weighted by (partial) correlation, to evaluate the stability of the market. The stability of a system is quantified using the connectance \(C\) and the average interaction strength \(\alpha\): the system is stable when \(\alpha^{2}nC<1\) and unstable when \(\alpha^{2}nC>1\), where \(n\) is the number of variables in the system, as proposed by May (1972). The May-Wigner stability theorem has been applied to evaluate stock network stability in Heiberger (2014), with the stability condition \(m=\sqrt{nC}\alpha<1\), where \(n\) is the number of stocks (the size of the network), the connectance \(C\) is the density of connections, and the average interaction strength \(\alpha\) is the average node strength, with the weighted degree of a node taken as its strength. For the precision matrices, partial correlations, and graphs estimated or inferred by the Gaussian graphical models, edges are weighted by the magnitude of the partial correlation for a fair comparison. The stability indicator \(m=\sqrt{nC}\alpha\) is calculated for each method from the data in the rolling window over time.
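The stability indicator just described can be computed as in the following sketch, which weights edges by the magnitude of the partial correlation and uses the node-strength definition of \(\alpha\) given above; treat the exact conventions (ignoring the diagonal, counting directed pairs for the density) as assumptions of this illustration.

```python
import numpy as np

def may_wigner_stability(R):
    """Stability indicator m = sqrt(n * C) * alpha for a network whose edges
    are weighted by the magnitude of the partial correlations in R."""
    A = np.abs(np.asarray(R, dtype=float)).copy()
    np.fill_diagonal(A, 0.0)                   # self-loops are ignored
    n = A.shape[0]
    C = np.count_nonzero(A) / (n * (n - 1))    # connectance: density of connections
    alpha = A.sum(axis=1).mean()               # average node strength (weighted degree)
    return np.sqrt(n * C) * alpha
```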
In Figure 8, the stability of the graphs estimated or inferred by the different methods is shown as separate lines, with each point on a line representing the stability calculated from the most recent one year of market data. For instance, the point at time '2020-04' uses the log returns from [2019-04-01, 2020-04-01) as input. Recall the May-Wigner stability theorem, which states that the system is stable when \(m<1\) and unstable when \(m>1\). Among the methods shown, FLAG is the only method whose outcome correctly oscillates around the reference line at one; the result from GLasso is not shown because its magnitude of \(m\) is too large compared with the other methods and with the reference value of one. The stability evaluated by FLAG increases significantly from February 2020 to April 2020, which closely matches the crashes in the U.S. stock market from 20 February 2020 to 7 April 2020. After this period, FLAG detects that the market stabilizes from March 2021. In contrast, the stability calculated from the ANT results increases dramatically from March 2020 to April 2020 and decreases dramatically from March 2021 to April 2021, indicating that the results are dominated by the data of March 2020 whenever that month is included. This implies the fragility of the results from the ANT method. For the point at time '2021-03', the aim is to evaluate the stability of the market in the recent period, but the stability indicator equals 2.36 when the input is the most recent 12 months of data and 1.24 when the input is the most recent 11 months, making it difficult to determine which result to trust. The results from BGGM are roughly twice the expected value, although their trend roughly matches that of the real market. Consistent with the simulation studies, BGGM overestimates the magnitude of the precision matrix and partial correlation, and this overestimation propagates to the node strengths (the sums of weighted edges), the network strength, and the stability. The results from the CLIME method are too flat to reflect the dynamic pattern of the market. In conclusion, FLAG successfully detects the impact of the Covid-19 pandemic on the U.S. stock market with the proper magnitude of stability. The many-to-one meta-analysis is conducted between the results from the data of 2021 and those from the other groups (data of 2019 and 2020), compared with the joint group GLasso, with the subgraph of the node 'PYPL' as an example. The results from the joint group graphical lasso vary widely depending on the threshold applied to the estimated precision values, making it difficult to determine the optimal threshold, especially on real data. The results from FLAG-Meta have larger power than the results estimated from single-year data.

Figure 8: The stability \(m\) of the stock network obtained from different methods, using a rolling window of one-year length shifted by one month.

Figure 9: The inferred subgraphs around the stock ’PYPL’ in the year 2021, using the JGL and FLAG-Meta methods.

## 5 Discussion

The Flexible and Accurate Gaussian graphical model (FLAG) method aims to estimate the entries of the precision matrix accurately and efficiently, and further to quantify the uncertainty of each entry, which allows the common structure across different groups to be better leveraged through meta-analysis. FLAG makes no explicit structural assumptions on the precision matrix or the corresponding graphs, making it tuning-free.
Its capability for element-wise inference allows extension to multiple graphs with little additional computation, making it highly flexible. Simulation studies in three different settings show that FLAG is not sensitive to data scaling, unlike other methods that require tuning parameters. FLAG is particularly suitable for data with a hub structure, where it outperforms other methods, especially for the edges between hubs, even as the non-zero proportion of the underlying coefficients varies. FLAG can test each edge individually and can adjust the partial correlation and precision values after combining the entries that share common structure across groups, achieving smaller standard errors and larger power. FLAG is accurate, with a small relative error and a large area under the ROC curve in the simulation studies. FLAG is capable of unveiling the co-expression relationships between genes in the brain across time and regions, identifying the associations between terms in the webpage data from different categories, and revealing the relationships between stocks in the S&P 100, capturing well the change in stability induced by Covid-19.
2309.17301
Distributed Resilient Control of DC Microgrids Under Generally Unbounded FDI Attacks
Due to the nature of the distributed secondary control paradigm, DC microgrids are prone to malicious cyber-physical attacks, which could be unbounded to maximize their damage. Existing resilient secondary control methods addressing unbounded attacks require that the first time derivatives of cyber-physical attack signals be bounded. The secondary defense strategy presented in this letter relaxes such a strict constraint by addressing more generally unbounded attack signals and hence enhances the resilience of DC microgrids in adversarial environments. Rigorous proofs, based on Lyapunov techniques, show that the proposed method guarantees the uniformly ultimately bounded convergence for both voltage regulation and proportional load sharing under generally unbounded attacks. Comparative case studies further validate the enhanced resilience of the proposed attack-resilient control strategy.
Yichao Wang, Mohamadamin Rajabinezhad, Omar A. Beg, Shan Zuo
2023-09-29T14:58:43Z
http://arxiv.org/abs/2309.17301v1
# Distributed Resilient Control of DC Microgrids Under Generally Unbounded FDI Attacks

###### Abstract

Due to the nature of the distributed secondary control paradigm, DC microgrids are prone to malicious cyber-physical attacks, which could be unbounded to maximize their damage. Existing resilient secondary control methods addressing unbounded attacks require that the first time derivatives of cyber-physical attack signals be bounded. The secondary defense strategy presented in this letter relaxes such a strict constraint by addressing more generally unbounded attack signals and hence enhances the resilience of DC microgrids in adversarial environments. Rigorous proofs, based on Lyapunov techniques, show that the proposed method guarantees the uniformly ultimately bounded convergence for both voltage regulation and proportional load sharing under generally unbounded attacks. Comparative case studies further validate the enhanced resilience of the proposed attack-resilient control strategy. DC microgrids, distributed control, resilient control, unbounded attacks.

## I Introduction

DC microgrids provide significant advantages due to their compatibility with distributed energy resources, storage units, and predominantly DC-operating modern loads. Hierarchical control stands as the preferred solution for DC microgrids [1]. The droop-based primary control, relying on local measurements, ensures dynamic voltage regulation. Secondary control, aiming for global voltage consistency and fair load distribution, compensates for voltage deviations not handled by the primary control. Distributed secondary control, where each unit operates with a local controller requiring only neighboring data, emerges as superior to centralized control, offering improved efficiency, scalability, reliability, and a streamlined communication network. However, cyber threats challenge microgrids, stemming from their dependence on digital and communication tools. False data injection (FDI) attacks are of particular concern as they can bypass most attack-detection systems, jeopardizing microgrid operations [2]. While recent advances focus on attack-detection enhancements, many require prompt attack identification and mitigation, a task that is computationally heavy due to the incorporation of non-local communication layer data. Consequently, there is a discernible shift towards resilient control methods as countermeasures. Existing methods generally consider bounded attacks [3]. In practice, the attacks may be unbounded as in [4, 5], presenting greater challenges and potentially leading to more extensive damage. However, the solutions in the aforementioned papers require that the first time derivatives of the attack signals be bounded. The inherent unpredictability of cyber-physical attacks underscores the need for defense strategies resilient to generally unbounded attacks, essential for DC microgrid protection and security. This letter presents a cyber-physical defense strategy for DC microgrids targeting generally unbounded attacks.
The contributions are: i) the proposed cyber-physical defense strategy enhances the self-resilience of DC microgrids by addressing generally unknown and unbounded attacks, which cannot be handled by existing solutions [4, 5], where the attack signals are strictly required to have bounded first time derivatives; ii) the proposed solution is fully distributed, requiring no global information, and hence is scalable; and iii) using Lyapunov techniques, rigorous mathematical proof is provided for achieving uniformly ultimately bounded (UUB) stability for voltage regulation and proportional load sharing.

## II Attack-Resilient Controller Design

Consider a communication network with \(N\) converters and one leader node. The connections among local converters are represented by \(\mathscr{G}_{f}=(\mathcal{W},\mathcal{E},\mathcal{A})\) with a node set \(\mathcal{W}\), an edge set \(\mathcal{E}\subset\mathcal{W}\times\mathcal{W}\), and an adjacency matrix \(\mathcal{A}=[a_{ij}]\). A graph edge, indicating the information flow from converter \(j\) to converter \(i\), is denoted by \((w_{j},w_{i})\), with weight \(a_{ij}\). Node \(j\) is considered a neighbor of node \(i\) if \((w_{j},w_{i})\in\mathcal{E}\). The set of neighbors of node \(i\) is denoted as \(\mathcal{N}_{i}=\left\{\,j\,|\,(w_{j},w_{i})\in\mathcal{E}\right\}\). The in-degree matrix is \(\mathcal{D}=\mathrm{diag}(d_{i})\) with \(d_{i}=\sum_{j\in\mathcal{N}_{i}}a_{ij}\). \(\mathcal{L}=\mathcal{D}-\mathcal{A}\) represents the Laplacian matrix. \(\mathcal{G}=\mathrm{diag}(g_{i})\), where \(g_{i}\) is the pinning gain from the leader to the \(i^{th}\) converter. \(g_{i}>0\) if the leader links to the \(i^{th}\) converter; otherwise, \(g_{i}=0\). \(\mathbf{1}_{N}\in\mathbb{R}^{N}\) is a vector with all entries equal to one. \(|\cdot|\) is the absolute value of a real number. \(\mathrm{diag}\left\{\cdot\right\}\) constitutes a diagonal matrix from its set of elements. For global voltage regulation and load sharing, the secondary control provides \(V_{n_{i}}\) for each converter through data exchange with its neighbors. The dynamics of the voltage droop and the secondary control can be described as \[\dot{V}_{n_{i}}=\dot{V}_{i}^{*}+R_{i}^{\mathrm{vir}}\dot{I}_{i}=\bar{u}_{i}=u_{i}+\delta_{i}, \tag{1}\] where \(V_{n_{i}}\) is the reference for the primary control level, \(V_{i}^{*}\) is the local voltage setpoint, \(R_{i}^{\mathrm{vir}}\) is the virtual impedance, \(I_{i}\) is the output current, \(\bar{u}_{i}\) is the distorted input signal, \(u_{i}\) is the control input to be designed, and \(\delta_{i}\) denotes potential unbounded attacks on the input channels. _Assumption 1:_ There exists a positive constant \(\kappa_{i}\), such that \(\left|\delta_{i}\left(t\right)\right|\leq\kappa_{i}t^{\gamma}\). For global voltage regulation and proportional load sharing under attacks, we employ an attack-resilient secondary control strategy at each converter based on neighborhood relative information. Denote \(\zeta_{i}=\sum\limits_{j\in\mathcal{N}_{i}}a_{ij}\left(V_{j}-V_{i}\right)+g_{i}\left(V_{\mathrm{ref}}-V_{i}\right)+\sum\limits_{j\in\mathcal{N}_{i}}a_{ij}\left(R_{j}^{\mathrm{vir}}I_{j}-R_{i}^{\mathrm{vir}}I_{i}\right)\).
We then present the following attack-resilient control protocols \[\begin{split}& u_{i}=\sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)} \Big{(}\sum\limits_{j\in\mathcal{N}_{i}}a_{ij}\left(V_{j}-V_{i}\right)+g_{i} \left(V_{\mathrm{ref}}-V_{i}\right)\\ &+\sum_{j\in\mathcal{N}_{i}}a_{ij}\left(R_{j}^{\mathrm{vir}}I_{j }-R_{i}^{\mathrm{vir}}I_{i}\right)\Big{)}\\ &=\sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)}\Bigg{(}\sum \limits_{j\in\mathcal{N}_{i}}a_{ij}\Big{(}\left(V_{j}+R_{j}^{\mathrm{vir}}I_{j }\right)-\left(V_{i}+R_{i}^{\mathrm{vir}}I_{i}\right)\Big{)}\\ &+g_{i}\Big{(}\big{(}V_{\mathrm{ref}}+R_{i}^{\mathrm{vir}}I_{i} \big{)}-\left(V_{i}+R_{i}^{\mathrm{vir}}I_{i}\right)\Big{)}\Bigg{)},\end{split} \tag{2}\] where the term \(\xi_{i}^{\left(\gamma\right)}\) in the coupling gain, is adaptively updated using the following tuning law \[\xi_{i}^{\left(\gamma\right)} =\alpha_{i}\left({{\zeta_{i}}^{2}-v_{i}\left({{\xi_{i}^{\left( \gamma-1\right)}}-{\hat{\xi}_{i}^{\left(\gamma-1\right)}}}\right)}\right), \tag{3}\] \[\hat{\xi}_{i}^{\left(\nu\right)} =\rho_{i}\left({{\xi_{i}^{\left(\nu-1\right)}}-{\hat{\xi}_{i}^{ \left(\nu-1\right)}}}\right), \tag{4}\] where \(\alpha_{i},v_{i}\) and \(\rho_{i}\) are positive constants. In steady state, \(R_{i}^{\mathrm{vir}}I_{i}\) converges to a constant \(kI_{s}^{\mathrm{pu}}\)[1]. Denote \(\Theta_{i}=V_{i}+R_{i}^{\mathrm{vir}}I_{i}\) and \(\Theta_{\mathrm{ref}}=V_{\mathrm{ref}}+kI_{s}^{\mathrm{pu}}\), we obtain \[\begin{split}\dot{\Theta}_{i}&=\sum\limits_{\mu=0}^ {\gamma}\xi_{i}^{\left(\mu\right)}\left(\sum\limits_{j\in\mathcal{N}_{i}}a_{ ij}\left(\Theta_{j}-\Theta_{i}\right)+g_{i}\left(\Theta_{\mathrm{ref}}- \Theta_{i}\right)\right)+\delta_{i}\\ &=\sum\limits_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)}\left(- \left(d_{i}+g_{i}\right)\Theta_{i}+\sum\limits_{j\in\mathcal{N}_{i}}a_{ij} \Theta_{j}+g_{i}\Theta_{\mathrm{ref}}\right)+\delta_{i}.\end{split} \tag{5}\] The global form of (5) is \[\dot{\Theta}=-\operatorname{diag}\left(\sum\limits_{\mu=0}^{\gamma}\xi_{i}^{ \left(\mu\right)}\right)\left(\mathcal{L}+\mathcal{G}\right)\left(\Theta- \mathbf{1}_{N}\Theta_{\mathrm{ref}}\right)+\delta, \tag{6}\] where \(\Theta=[\Theta_{1}^{T},...,\Theta_{N}^{T}]^{T}\) and \(\delta=[\delta_{1}^{T},...,\delta_{N}^{T}]^{T}\) is the attack vector. Define the following global cooperative regulation error \[\varepsilon=\Theta-\mathbf{1}_{N}\Theta_{\mathrm{ref}}, \tag{7}\] where \(\varepsilon=[\varepsilon_{1}^{T},...,\varepsilon_{N}^{T}]^{T}\). Then, we obtain \[\dot{\varepsilon}=-\operatorname{diag}\left(\sum\nolimits_{\mu=0}^{\gamma} \xi_{i}^{\left(\mu\right)}\right)\left(\mathcal{L}+\mathcal{G}\right) \varepsilon+\delta. \tag{8}\] As shown in [5], we need to stabilize the cooperative regulation error \(\varepsilon\) to achieve the global voltage regulation and proportional load sharing. _Assumption 2:_ The digraph \(\mathscr{G}\) includes a spanning tree, where the leader node is the root. _Definition 2:_ Signal \(x(t)\in\mathbb{R}\) is UUB with the ultimate bound \(b\), if there exist constants \(b,c>0\), independent of \(t_{0}\geq 0\), and for every \(a\in\left(0,c\right)\), there exists \(t_{1}=t_{1}\left(a,b\right)\geq 0\), independent of \(t_{0}\), such that \(\left|x\left(t_{0}\right)\right|\leq a\Rightarrow\left|x\left(t\right)\right| \leq b,\forall t\geq t_{0}+t_{1}\)[6]. 
_Definition 3 (Attack-resilient Secondary Control Problem):_ Under the generally unbounded attacks on local input channels described in (1), design local cooperative control protocols for each converter such that, for all initial conditions, \(\varepsilon\) in (7) is UUB. That is, the bounded global voltage regulation and proportional load sharing are achieved. Next, we give the main result of solving the attack-resilient secondary control problem for DC microgrids. _Theorem 1:_ Given Assumptions 1 and 2, under the unbounded attacks described in (1), let the attack-resilient secondary control protocols consist of (2), (3) and (4), then the cooperative regulation error \(\varepsilon\) in (7) is UUB. That is, the attack-resilient secondary control problem is solved. _Proof:_ Define \(\tilde{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right)=\xi_{i}^{\left(\gamma-1 \right)}\left(t\right)-\hat{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right)\), then the derivative of \(\tilde{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right)\) is \[\begin{split}\tilde{\xi}_{i}^{\left(\gamma\right)}& \left(t\right)=\xi_{i}^{\left(\gamma\right)}\left(t\right)-\tilde{\xi}_{i}^{ \left(\gamma\right)}\left(t\right)\\ &=\alpha_{i}\left({{\zeta_{i}}^{2}\left(t\right)-v_{i}\left({{ \xi_{i}^{\left(\gamma-1\right)}}\left(t\right)-\tilde{\xi}_{i}^{\left(\gamma-1 \right)}\left(t\right)}\right)}\right)-\\ &\rho_{i}\left({{\xi_{i}^{\left(\gamma-1\right)}}\left(t\right)- \hat{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right)}\right)\\ &=\alpha_{i}{{\zeta_{i}}^{2}}\left(t\right)-\left(\alpha_{i}v_{i}+ \rho_{i}\right)\tilde{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right).\end{split} \tag{9}\] The solution of (9) can be written as \[\begin{split}\tilde{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right)=& e^{-\left(\alpha_{i}v_{i}+\rho_{i}\right)t}\tilde{\xi}_{i}^{\left(\gamma-1 \right)}\left(0\right)\\ &+\alpha_{i}\int_{0}^{t}e^{-\left(\alpha_{i}v_{i}+\rho_{i}\right) \left(t-\tau\right)}{{\zeta_{i}}^{2}}\left(\tau\right)\mathrm{d}\,\tau.\end{split} \tag{10}\] Since \(e^{-\left(\alpha_{i}v_{i}+\rho_{i}\right)\left(t-\tau\right)}{{\zeta_{i}}^{2} }\left(\tau\right)\) is UUB, we obtain that \(\tilde{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right)\) is also UUB. According to Definition 2, let the ultimate bound of \(\tilde{\xi}_{i}^{\left(\gamma-1\right)}\left(t\right)\) to be \(\eta\). Note that the initial values of the gains are chosen such that \(\tilde{\xi}_{i}^{\left(\gamma-1\right)}\left(0\right)\geq 0\). The global form of \(\zeta_{i}\left(t\right)\) is \[\zeta\left(t\right)=-\left(\mathcal{L}+\mathcal{G}\right)\varepsilon\left(t \right), \tag{11}\] where \(\zeta\left(t\right)=[\zeta_{1}^{T}\left(t\right),...,\zeta_{N}^{T}\left(t \right)]^{T}\). 
Then, using (8) to obtain the time derivative of (11) as \[\dot{\zeta}\left(t\right)=-\left(\mathcal{L}+\mathcal{G}\right)\left( \operatorname{diag}\left(\sum\nolimits_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu \right)}\left(t\right)\right)\zeta\left(t\right)+\delta\left(t\right)\right), \tag{12}\] Define the following Lyapunov function candidate \[E\left(t\right)=\frac{1}{2}\sum\limits_{i=1}^{N}{\int_ (12) is given by \[\begin{array}{l}\dot{E}\left(t\right)=\frac{1}{2}\sum_{i=1}^{N}\sum_{\mu=0}^{ \gamma}\xi_{i}^{\left(\mu\right)}\left(t\right)2\zeta_{i}(t)\dot{\zeta}_{i}\left( t\right)\\ =\zeta(t)^{T}\mathrm{diag}\left(\sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)} \left(t\right)\right)\dot{\zeta}\left(t\right)\\ =-\zeta(t)^{T}\mathrm{diag}\left(\sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right) }\left(t\right)\right)\\ \times\left(\mathcal{L}+\mathcal{G}\right)\Bigg{(}\mathrm{diag}\left(\sum_{\mu =0}^{\gamma}\xi_{i}^{\left(\mu\right)}\left(t\right)\right)\zeta\left(t \right)+\delta\left(t\right)\Bigg{)}\\ \leq-\sigma_{\min}\left(\mathcal{L}+\mathcal{G}\right)\left\|\mathrm{diag} \left(\sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)}\left(t\right)\right) \zeta\left(t\right)\right\|^{2}\\ +\sigma_{\max}\left(\mathcal{L}+\mathcal{G}\right)\left\|\mathrm{diag}\left( \sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)}\left(t\right)\right)\zeta \left(t\right)\right\|\left\|\delta\left(t\right)\right\|\\ =-\sigma_{\min}\left(\mathcal{L}+\mathcal{G}\right)\left\|\mathrm{diag}\left( \sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)}\left(t\right)\right)\zeta \left(t\right)\right\|\\ \left(\left\|\mathrm{diag}\left(\sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right) }\left(t\right)\right)\zeta\left(t\right)\right\|-\frac{\sigma_{\max}\left( \mathcal{L}+\mathcal{G}\right)\left\|\delta\left(t\right)\right\|}{\sigma_{ \min}\left(\mathcal{L}+\mathcal{G}\right)}\right).\end{array} \tag{14}\] Given Assumption 2, \(\left(\mathcal{L}+\mathcal{G}\right)\) is positive-definite [7]. Denote \(\beta=\frac{\sigma_{\max}\left(\mathcal{L}+\mathcal{G}\right)}{\sigma_{\min} \left(\mathcal{L}+\mathcal{G}\right)}\), which is a positive constant. Next, we will prove that \[\left\|\mathrm{diag}\left(\sum\nolimits_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu \right)}\left(t\right)\right)\zeta\left(t\right)\right\|-\beta\left\|\delta \left(t\right)\right\|\geq 0. \tag{15}\] Note that, a sufficient condition to guarantee (15) is \[\sum\nolimits_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)}\left(t\right)\!\zeta _{i}\left(t\right)\geq\beta\left|\delta_{i}\left(t\right)\right|. \tag{16}\] Pick \(\left|\zeta_{i}\left(t\right)\right|\geq\sqrt{v_{i}\,\eta+\frac{\eta\kappa_{i} }{\alpha_{i}}}\), then based on Assumptions 1 and (2), there exists \(T\geq 0\), such that \(\sum_{\mu=0}^{\gamma}\xi_{i}^{\left(\mu\right)}\left(t\right)\geq\left|\delta _{i}\left(t\right)\right|,\forall t\geq T\). Pick \(\left|\zeta_{i}\left(t\right)\right|\geq\max\left\{\sqrt{v_{i}\eta+\frac{ \gamma\left|\kappa_{i}\right|}{\alpha_{i}}},\sqrt{\beta}\right\}\), then (16) and hence, (15) are guaranteed. Combining (14) and (15) yields \[\dot{E}\left(t\right)\leq 0,\,\forall\left|\zeta_{i}\left(t\right) \right|\geq\max\left\{\sqrt{v_{i}\eta+\frac{\gamma\left|\kappa_{i}\right|}{ \alpha_{i}}},\sqrt{\beta}\right\}\!,t\geq T. \tag{17}\] From the LaSalle's invariance principle, we obtain that \(\zeta\left(t\right)\) is UUB. Finally, based on (11), \(\varepsilon\left(t\right)\) is also UUB. This completes the proof. 
## III Validation and Results A low-voltage DC microgrid is modeled to study the effectiveness of the proposed results. The converters have similar typologies but different ratings, i.e., the rated currents are equal to \(I_{i}^{\mathrm{rated}}=(6,3,3,6)\), and virtual impedance are equal to \(R_{i}^{\mathrm{vir}}=(2,4,4,2)\). The converter parameters are \(C=2.2\,\mathrm{mF}\), \(L=2.64\,\mathrm{mH}\), \(f_{s}=60\,\mathrm{kHz}\), \(R_{line}=0.1\,\Omega\), \(R_{L}=20\,\Omega\), \(v_{ref}=48\,\mathrm{V}\), and \(v_{in}=80\,\mathrm{V}\). The rated voltage of the DC microgrid is \(48\,\mathrm{V}\). Consider the FDI attack to the local control input of each converter by selecting \(\delta_{i}=(0.8t^{2}+5,0.7t^{2}+5,0.8t^{2}+5,0.5t^{2}+5)\). Figures 1 and 2 compare the voltage and current responses against generally unbounded attacks using the resilient secondary control of [4] and the proposed resilient secondary defense strategy. The adaptive tuning parameters for the proposed resilient control method are set as \(\alpha_{i}=1.5\), \(\xi_{i}(0)=1\), \(\dot{\xi}_{i}(0)=70\), \(\dot{\xi}_{i}(0)=1\), \(i=1,2,3,4\). As seen, after initiating the attack injections at \(t=5\,\mathrm{s}\), both voltage and current diverge using the resilient method in [4]. In contrast, by utilizing the proposed attack-resilient secondary defense strategy, the voltage of each converter converges to a value within a small neighborhood of the reference value \(48\,\mathrm{V}\), and the currents converge to values within small neighborhood around the two respective values reflecting the properly shared current. These validate that, while the existing resilient method fails to preserve the microgrid's stability in the face of generally unbounded attacks, the proposed resilient defense strategy successfully accomplishes the UUB convergence on both voltage regulation and current sharing for DC microgrids. ## IV Conclusion This letter has proposed a novel secondary defense strategy for DC microgrids against generally unbounded FDI attacks. In contrast to existing resilient methods that impose strict requirements for bounding the first time derivatives of unbounded attack signals, the proposed defense strategy relaxes such a stringent constraint by addressing a wider range of unbounded attack signals, significantly bolstering the cyber-physical resilience of DC microgrids. Rigorous proof, based on Lyapunov techniques, has shown that the proposed strategy guarantees the UUB convergence for both voltage regulation and proportional load sharing under generally unbounded attacks. Comparative case studies have validated the enhanced resilience of the proposed control method. Fig. 1: Performance of the resilient secondary control method in [4]. Fig. 2: Performance of the proposed attack-resilient control method.
2309.11770
Two Fish Encryption Based Blockchain Technology for Secured Data Storage
Data security and sharing remains nuisance among many applications like business data, medical data, banking data etc. In this research, block chain technology is built with encryption algorithm for high level data security in cloud storage. Medical data security seems critical aspect due to sensitivity of patient information. Unauthorized access of medical data creates major issue to patients. This article proposed block chain with hybrid encryption technique for securing medical data stored in block chain model at cloud storage. New Two fish encryption model is implemented based on RSA Multiple Precision Arithmetic. MPA works by using library concept. The objective of using this methodology is to enhance security performance with less execution time. Patient data is processed by encryption algorithm and stored at blockchain infrastructure using encrypted key. Access permission allows user to read or write the medical data attached in block chain framework. The performance of traditional cryptographic techniques is very less in providing security infrastructure.
Dinesh Kumar K, Duraimutharasan N
2023-09-21T04:08:23Z
http://arxiv.org/abs/2309.11770v1
# Two Fish Encryption Based Blockchain Technology for Secured Data Storage ###### Abstract Data security and sharing remains nuisance among many applications like business data, medical data, banking data etc. In this research, block chain technology is built with encryption algorithm for high level data security in cloud storage. Medical data security seems critical aspect due to sensitivity of patient's information. Unauthorized access of medical data creates major issue to patients. This article proposed block chain with hybrid encryption technique for securing medical data stored in block chain model at cloud storage. New Two fish encryption model is implemented based on RSA Multiple Precision Arithmetic (MPA). MPA works by using library concept. The objective of using this methodology is to enhance security performance with less execution time. Patient data is processed by encryption algorithm and stored at blockchain infrastructure using encrypted key. Access permission allows user to read or write the medical data attached in block chain framework. The performance of traditional cryptographic techniques is very less in providing security infrastructure. Proposed blockchain based Two fish encryption technique provides high security in less encryption and decryption time. Data security, Blockchain, Two fish encryption, cloud computing, medical data security, RSA-Multiple precision arithmetic. ## 1 Introduction In the modern medical field, medical data has been used for the invention of recent strategies and healing procedures for curing diseases [1]. The medical data is very sensitive aspect where patients do not like to share with others. Security of medical data storage can be ensured by using two techniques. In the first technique, medical information is stored in the database locally and set up a privilege to access the medical information. In the second approach, the stored clinical data encrypted using patient's key value and in future it can be used by the patient's key. The main dilemma of the first approach, locally stored medical data may be modified or deleted. Also, it cannot be shared with doctors. During the diagnosis phase of the disease and treatment were taken by a patient, the key should not be shared with others, it creates a problem with the second approach. Above crisis will damage the availability of medical data in the local storage database. The key force for the above-stated declaration is due to the digitized medical data and accessing it employing professionals is suggested by recent articles [2][3]. To improve medical information governance and safety regulations like Health Insurance Portability and Accountability Acts (HIPAA) [4] in the USA or the General Data Protection and Regulation (GDPR) [5] at Europe needs high security of sharing the information. Privateness mode of data might cause severe consequences for activities of a healthcare information breach. The existing cryptographic algorithm in medical data storage methods used a private cloud platform, which carry the limitations on sharing of data and scalability [6]. As blockchain and cloud computing are considered as matured associated strategies which have performed fast development in clinical and fitness services, together with scientific normalization, healthcare services through mobile, e-commerce in medical and on-line mode facility [7]. The block chain system connects the individuals in a P2P form. 
It includes P2P network design, encryption technology, implementing distributed algorithm and use of data storage [8]. Implementing the limitations of blockchain technology by combining with other cryptographic techniques to discourse the security problems of storing medical data management [9]. Lack of ability and consciousness in implementation, limitations arise in security side of blockchain based cloud storage works quite slow in progress. These above challenges cause delay in the approval of the blockchain technology by the medical institutions. Even though there are numerous start-ups procedures are completely based on blockchain technology, the medical organization refused for using this technology [10]. Our proposed work, hybrid system of two fish with RSA MPA encryption algorithm provides solutions for secure storing of medical data via blockchain based cloud storage in an efficient manner. In conventional encryption procedure, sender and receiver must generate the public key and private key. Before sending the textual content data, the sender (user) encrypts the textual content using public key of receiver. At the receiving side, the desired client decrypts the textual content data using private key. However, it calls lot of network issues and additionally occupies the memory [11]. But our proposed hybrid system of two fish with RSA MPA encryption algorithm converts the textual content medical data encrypted by medical institution A's public key and decrypted by medical institution B's private key. By this way they can share the medical data [7]. To guarantee the safety and the privateness of medical records, we want to expand a powerful asymmetric cryptographic algorithm is followed to encrypt textual medical data on this work, at a low cost and in an efficient way. A person tries to get medical data, he needs to recognize the corresponding decryption key [12]. Our proposed work provides high dimensionality in the security of sharing medical data among various medical institutions in an efficient way. And also assures keeping the privacy of patient information. Further, structure of this paper has been planned as follows: Section 2 demonstrate about related survey on block chain security. A proposed methodology structure is explained in Section 3. Section 4 describes about the experimented results and finally Section 5 concludes the paper with future scope. ## 2 Related Works Due to the development of medical field, storing of medical data in a secure way considers as a significant role, the conventional centralized scheme of medical data storage has been developed and it does not satisfy the requirements of available data with the high hazard of privacy expose is proposed in paper [13-15]. The various researchers are focused on blockchain technology to provide more secure on medical data. The introduction of blockchain technology creates more efficient infrastructure to manage and maintained the digitalized medical data was suggested by Vazirani et al., [16]. To improve the health care consequences without comprising the safekeeping of patient's information, a feasibility study on utilizing of blockchain technology with cloud storage scheme is developed in paper [17-18]. The combination of blockchain based technology with attribute-based scheme is used to provide security in sharing and storing of medical records and to access digital health care records was suggested in [19]. 
Another tendency revealed from blockchain technology in the concept of traditional security adopted in a single domain administrative for sharing of medical data is insufficient with multiple healthcare domains. Therefore, advanced cryptographic algorithms are required with the features of rich access control and strict high dimensionality secure enforcement. Nowadays, adopting advanced features of cryptographic algorithms research projects are carried out to provide secure processing of clinical data in the cloud storage [9]. To provide a more efficient and friendly service for medical data storage schemes, various solutions are available at cloud technology. Security management was proposed in paper [20-26]. Patient's medical information is a crucial thing, to store in high secure and privacy with cloud-based storage platform. To guarantee the security and data privacy over a patient's data, we should implement a smart storage method which include the smart IoT-based healthcare architecture is discussed in [27]. Other solutions for sharing delicate medical data on several methods like medical data accumulation of non-standard diagonal method was suggested in [28], sharing of medical data with a cloud-based model used in[29], a hybrid solution of sharing medical data in[30, 31], storage structure of scalable privacy with data preserving scheme[32], a secure system using a fog computation technique in [33], and a distributed based architecture with doubles tag micro aggregation scheme in [34] are implemented. The main problems among these techniques are computational complexity and more time consumption. However, most of the users do not trust the third party of the organization in keeping their medical data secure and in a confidential manner. Decentralized ledger is used in Blockchain technology to record every medical transaction. It records transaction event as product of source state to present state permanent storage scheme, which was used in paper [35-37]. The features of Blockchain technology are decentralization, immutableness, and verifiability which are essential in the field of medical healthcare, exclusively in the handling of medical records in a secure way. The improved encrypted version of proxy scheme called Fuzzy based Conditional Identity (FCI), in which exchange of medical data in a privacy-preserving where keys are extracted from user's biometric measures. The content of medical data transactions kept in privacy and consensus efficiency by using blockchain-based medical data storage platform [38]. The ring signature scheme is adopted the elliptic curve model to enhance a privacy medical data storage protocol in user's identity privacy and protection of medical data. The protection of medical data transaction's privacy ring signature scheme is not an applicable one [39]. A new approach of medical data sharing scheme is implemented by combining the ML, Blockchain and cloud storage scheme. This combined scheme can easily and effectively share of medical data transactions between different medical organizations. However, it cannot provide the assurance of receiving exact medical data [40]. By analysing the existing schemes and various traditional methods, it can be found that combining blockchain-based cloud storage in medical institutions has simplified the enrichment of service quality. 
Preserving of valuable content of medical data is a challenging task between patients and medical research institutions, especially in distributing the data with various entities in smart contracts with all-inclusive privacy considerations. Furthermore, few types of research have focused on this challenging task of whether the collection of medical data obtained from patients that meets their requirements and keep securely is a great challenge [15]. The table 1 shows the related works that can be implemented in security of medical information on blockchain technology. ## 3 Medical File Storage at Blockchain with Cloud Storage In sharing of clinical data/information using blockchain technology, the foremost step is to ensure the reliable in communication and also in security of medical data storage, it is important to build a chain architecture, effectively decide the identification of two entries, that is initiator identity of service and identity of recipient. The system model of medical data storage based on block chain with cloud storage is shown in Fig 1. \begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Papers** & \multicolumn{1}{c|}{**Description**} \\ \hline Xia et al [41] & smart contracts contain secret keys \\ \hline Omar et al. [42] & Getting decryption key from the owner of medical data. \\ \hline Ferdous et al. [43] & DRAMS is to deploy a decentralized architecture \\ \hline Guo et al. [44] & multi-authority attribute-based signature scheme \\ \hline Zyskind et al [45] & It is a proposed decentralized computation platform \\ \hline Yue et al. [46] & Encrypted data is stored in private blockchain technology with health care data gateway architecture. \\ \hline Alevtina et al. [47] & Encrypted data is stored in cloud sever. \\ \hline Azaria et al. [48] & Accessing rights to get medical data. \\ \hline \end{tabular} \end{table} Table 1: Review on other techniques This model is combined into four layers: Certificate Authority, User Layer, Block Chain Layer with cloud storage and public user layer. #### 3.2.1 Certificate Authority (CA) CA act as a authority provider to generate keys, manage, distribute the digital certificate and system administrator. CA eliminates the malicious nodes and confirm the health of system. For decrypt the data CA uses patient's private key and maintaining perfectness on the information / data which are stored in the block for medical research. #### 3.2.2 User Layer Different categorize of users are involved in the user layer for their research or other useful purposes in accessing of medical data from the cloud server. All patient's information is maintained as data privacy. The patient gets details of the existing medical records, which are stored in chain or block by using their private key. Example: healthcare organizations like research institutions, medical institutions, research institutions and governmental bodies. #### 3.2.3 Block Chain Layer & Cloud storage The blockchain layer helps to connect all distributed health field and contracts are responsible for distribution of data across various medical association. In the cloud storage all medical data are collected from patients or from hospitals in different locations or from researchers combined and put into the storage. It is accessed by only authorized users. #### 3.2.4 Public User Layer It consists of different categorize of users, researchers, general community in medical platforms. 
They can access the medical information for the need of their medical investigation, gives treatment for the needy people. Through the proper access only we can able to get the storage data for the medical purpose. #### 3.2.5 Implementation of Two Fish Algorithm Patient's medical records contain diagnosis information, laboratory test reports, medical imaging data like CT, MRI, X-ray image details, treatment details, special examination details are important information. For the development in medical industry, we have to share these medical records among with patients, medical institutions and researchers [49]. This work implements blockchain based cloud storage. Here medical data is divided into multiple encrypted segments or blocks that are interlinked through a hashing function. This paper implements two fish encrypted Figure 1: Medical information storage at block chain with cloud storage algorithm. Two fish cryptographic algorithm is a symmetric key block of cipher text with 128 bits block size and the generated key sizes are up to 256 bits. Implementation of two fish encryption algorithm is shown in fig 2. ### Algorithm 1: **Step 1:** Input block size is 128 bits would be divided into four sections, each for 32 bits words. **Step 2:**32-Bit word is XOR input with the four key parts. \(B_{0,i}=R\bigoplus K_{i};\;\;i=0\;to\;3\) Where K is a key and \(K_{i}\)is a sub key i\(=0\) to 3. The first key part of word is XOR with \(K_{0}\), second key part of word is XOR with \(K_{1}\) and so on. **Step 3:** Two fish algorithm uses a Feistel network and it consists of 16 iterations. **Step 4:** The first key part of word is split up as 4 bytes, where each part is applied to a substituted box. The second key part of word will be first rotated in 8 bits in left and it is also applied to the same set of substitution boxes. **Step 5:** Diffusing newly substituted data of the 32-bit word, by applying the both the first and second key part of words to MDS matrix (Maximum Distance Separable). **Step 6:** Then the first key part of word is applied to a pseudo-Hadamard Transform: \(\mbox{pp'}=\mbox{pp}+\mbox{qq}mmmmmm\) where p is the first key part of word, q is the second key part of word and p' is the new first key part of word. **Step 7:** A first key part 'new' is used as input of word p', the second key part of word q is applied to the same transform, which can be represented as: \(\mbox{qq'}=\mbox{pp}+2\mbox{qq}mmmmmm\) **Step 8:** Repeat Steps 4 to Steps 7 for 16 iterations. **Step 9:** The first and second key part of words are swapped with the third and fourth key part of words, the words are XOR to form one more set of round keys for producing the cipher text. By the above procedure, the medical records are encrypted and stored in blocks. It stores patient's medical report in blockchain and the index value location details are stored in cloud database. Storing of encrypted data and retrieving of decrypted data is done by implementing two fish algorithms [50]. Transaction bodies in medical block chain are Patients, clinical institutions and third-party participants like public users, insurance companies, researchers. Medical records are generated in the medical institutions for the diagnosing the disease and suggest the treatment which is stored at cloud server through block chain. Physicians generates the summaries of medical report of their patients from different medical institutions. These summaries are also processed in the cloud server for storage through block chain. 
The corresponding patients have ownership for their own medical information. The third-party users or unauthorized users can access this data from chain with proper permission getting by CA. Also, that they provide some services, like recommendation and appointment registration of medical institution. The permissions for the transaction bodies are given in Table 2. Fig. 2: Implementation of two fish Cryptographic The blockchain is responsible for the creation blocks with medical data. When newly medical data is generated for the patient, this is validated and converted into new block, then added to the main chain for the security purpose. The medical data in the blockchain is authenticated by two fish encryptions for the security purpose. For sharing authorization medical data with another medical institution, it needs to get public key of receiver's medical institution. When a user sends a request to access these medical data along with public key, encrypted text of data will return to the user. At the receiving end user decrypts this cipher text files to get the original medical data [51]. If unauthorized user tries to access the medical data, chain cannot allow them to decrypt the medical data. Each medical institution generates a two fish encrypt of medical data D, and encrypts the medical data and stored at cloud at L location using public key value pk of all medical institution to send the medical blockchain. ### RSA using multiple Precision Arithmetic Library For providing security authentication for sharing of medical data, we used block chain of P2P network with all the nodes. Each network node generates two keys; sender encrypts medical data by using public key of receiver. At the same time receiver decrypt the medical data by using private key. For the security authentication scheme, in this work we proposed RSA algorithm using MPA Library (multiple Precision Arithmetic). The implementation steps are given below: #### 3.2.1 Algorithm 2: #### 3.2.2 Step 1. Key Generation Medical Institution A and Medical Institution B request the CA to generate their public key and private key. Sender encrypts medical data by using public key of receiver. At the same time receiver decrypt the medical data by using private key. For the key generation, RSA algorithm uses two large prime numbers \(a,b\). 1. Select \(a,b\) 2. Calculate \(n=a*b\) 3. Calculate \(\emptyset(n)=(a-1)*(b-1)\) 4. Select integer \(\mathsf{e}\) and \(\mathsf{e}\) is a public key. 5. \(gcd(\emptyset(n),\mathsf{e})=1;1<\mathsf{e}<\emptyset(n)\) 6. Calculate \(d\) and \(d\) is a private key. 7. \(d=e^{-1}mod\ \emptyset(n)\) 8. Public key: \(pubkey=e,n\) 9. Private Key: \(prikey=d,n\) Now Public as well as private key for medical institution A & B is generated. 
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Mode of Permission** & **Patients** & **Medical Industry** & **Unauthorized Users** \\ \hline Read access their own medical data & No permission needed & No permission needed & No permission needed \\ \hline Write access their own medical data & No permission needed & No permission needed & No permission needed \\ \hline Read access to third party medical data & Need permission & Need permission, in case of emergency they can access the data & Need permission \\ \hline Write permission to third party clinical data & Need permissions & Need permissions & Need permissions \\ \hline \end{tabular} \end{table} Table 2: Access Permission mode **Step 2: Encryption file generation** Encrypted file E1 is generated, at first RSA-MPA with public key is used by Medical Institution A. By using second layer, encrypted file E2 is generated by Medical Institution A. Uploading the encrypted files E1 & E2 of Medical Institution A into the server. \[Encrypt=encr(m,a_{k})\] **Step 3:** Make \(m\)is the encrypted information. **Step 4:** Calculate cipher text by using following formula, \[C=P^{e}mod\ n,P<n\] C is Cipher text; \(P\) is Palin text; \(e\) is a Encryption key and n is a block size. **Step 5:** Construct decryption key \[P=C^{d}mod\ n\] Medical Institution A encrypts medical data by using public key of Medical Institution B. At the same time Medical Institution B decrypt the medical data by using private key of Medical Institution A. **Step 6:**Calculate \(Rpubk\) (\(public\)\(key\)) and \(Rprik\) ( private key) **Step 7:**The decrypted cipher text with key \(Rpubk\) is generated and transmit to the cloud server. **Step 8:** By using this algorithm, the public key \(Rpubk\)is generated with the second layer of cipher text, for construct decryption key with public key\(Rpubk\) is generated with the first-layer of cipher text. **Step 9:** Generate new cipher text, the cloud server uses the decryption key uploaded by Medical Institution A. **Step 10:** Medical Institution B requests data and \(decrypt\to decrypt(d,b_{k})\) **Step 11:** Medical Institution B requests to decrypt the data in cloud. The cloud server sends the decrypted text to Medical Institution B and decryption uses RSA to obtain the original text data [7]. The working principle of this algorithm is shown in the Figure 3. In this work Public Key is used for encryption of medical data and Private Key is used for decryption of medical data. The MPA library is used to provide the fastest key generation, encryption and decryption routines. ### Proposed Hybrid system of Two Fish with RSA-MPA Library Encryption Algorithm In general, using hybrid method for encryption, ciphers medical data with public and private keys are highly protected while sharing of medical data [52]. Figure 4 shows the proposed (Two Fish \(+\) RSA) hybrid architecture. In the beginning key will be generated with two fish algorithm and medical data uses RSA for encryption using MPAL. Finally, encryption of the cipher text medical data is processed at receiver side. Figure 3: Working Principle of RSA using MPAL When compared with other traditional algorithms like key-aggregate cryptosystem(KAC), Attribute-based encryption (ABE) requires public key for encryption and it is fully dependent on attributes [53]. Similarly, compared with other existing algorithms with the concept of privacy protection and secure storage of tamper-proof algorithm will not work effectively. 
Taking into encryption time of ABE, KAC, two fish, AES algorithm our proposed hybrid algorithm of Two fish and RSA MPA algorithm requires less time and also keeps high security level. ## 4 Performance Analysis The main contribution of our work is to provide security in storing and sharing of medical data. Algorithm 1 describes the design of two fish encrypted concept. It interacts with blockchain based cloud storage, and access the stored medical records with proper permission assign to the user. By using JMeter, accessing of medical data is analysed. Latency is calculated by number of user's requests between 10 and 100 within the time periods of 2, 5, 15, 20 and 45 minutes. The latency time has been calculated by evaluating the time occupied to deliver the data by user's request. This is shown in table 3. \begin{table} \begin{tabular}{|c|c|} \hline & **Latency time** \\ **No. of active Users** & **(Sec)** \\ \hline 10 & 155.67 \\ \hline 20 & 256.78 \\ \hline 30 & 361.34 \\ \hline 40 & 457.56 \\ \hline 50 & 560.13 \\ \hline 60 & 680.89 \\ \hline 70 & 767.12 \\ \hline 80 & 860.67 \\ \hline 90 & 976.45 \\ \hline 100 & 1150.46 \\ \hline \end{tabular} \end{table} Table 3: Latency time per number of user’s requests Figure 4: Proposed (Two Fish + RSA MPA) hybrid architecture. An important observation of table 3 shows on the latency time is increases as per the request of user increases. This happens because of trade between the securities on medical data over low latency. Even though the latency time increases but efficiency is maintained by two fish encryption algorithm. The speed of program is evaluated using execution process based upon the encryption and decryption of the file with a different size. Comparison table of encryption and decryption using various methods is given in the table 4. ### Security Analysis In accessing medical data in a secure way, we proposed hybrid of two fish and RSA MPA Library access data successfully, must meet some criteria and decrypt the medical data. Table 5 shows the comparison between some traditional methods and the proposed scheme. In the observation of the above table 5, the blockchain system with cloud storage plays a significant role in accessing of clinical data in secure way and sharing. To streamline and perform encryption as well as decryption of any given medical data passed in our proposed scheme of hybrid system of two fish with RSA MPA library measures the better performance on the basis of privacy, integrity, anonymous, attack resistance, tamper proof and less computing time produces more secure in storing and sharing of medical data through blockchain technology. For experimental purposes our proposed work, hybrid system of Two fish with RSA MPA library algorithm stored medical data in various sized text files and the outcomes gives the best result in terms of encrypted and decrypted time of the medical data. Key length uses 256 bits and 16 bytes of block size is used in Two fish algorithm. For the proposed work RSA-MPAL algorithm generates 2048 bits key pair. 
This technique can encrypt medical data using public key as well as \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{2}{c|}{**Two fish [29]**} & \multicolumn{2}{c|}{**RSA**} & \multicolumn{2}{c|}{**Two fish+RSA[32]**} & \multicolumn{2}{c|}{**Proposed**} \\ \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{**Two fish+RSA[32]**} & \multicolumn{2}{c|}{**Two fish+RSA**} \\ \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{**MPA**} \\ \hline **File Name** & **File Size** & **Encrypt** & **Decrypt** & **Encrypt** & **Encrypt** & **Decrypt** & **Encrypt** & **Decrypt** \\ \hline 100.txt & 100 kb & 0,093 & 0,031 & 0,085 & 0,029 & 0,081 & 0,025 & 0,075 & 0,020 \\ \hline 200.txt & 200 kb & 0,234 & 0,047 & 0,201 & 0,041 & 0,198 & 0,035 & 0,191 & 0,030 \\ \hline 300.txt & 300 kb & 0,312 & 0,078 & 0,298 & 0,071 & 0,289 & 0,067 & 0,250 & 0,057 \\ \hline 400.txt & 400 kb & 0,421 & 0,125 & 0,395 & 0,121 & 0,389 & 0,118 & 0,370 & 0,115 \\ \hline 500.txt & 500 kb & 0,765 & 0,187 & 0,689 & 0,182 & 0,675 & 0,175 & 0,620 & 0,160 \\ \hline \end{tabular} \end{table} Table 4: Execution time for Encryption & Decryption in seconds \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & & & & Attack & & Less Computing \\ Methods & Privacy & Integrity & Anonymous & Resistance & Tamper Proof & time \\ \hline PoW & No & Yes & No & Yes & Yes & No \\ \hline MedShare & No & Yes & Yes & Yes & Yes & No \\ \hline MedBlock & No & Yes & Yes & Yes & Yes & No \\ \hline DACC & No & No & No & Yes & Yes & No \\ \hline PoS & Yes & Yes & No & No & No & Yes \\ \hline Our proposed & & & & & & \\ Scheme & Yes & Yes & Yes & Yes & Yes \\ \hline \end{tabular} \end{table} Table 5: Comparison between proposed system with other traditional systems. decrypts the data using private keys. The benefits of proposed work are speed of encryption and decryption time because it uses MPA library concept. The analysis of the result of encrypted data time calculation is shown in Fig. 5 In the observation, it shows, that throughout the encryption medical data text file, the encryption time is increased proportionally depends upon the size of medical data text file. Comparison results on algorithms Twofish, RSA, Twofish + RSA and Twofish + RSA MPA in terms of encryption time criteria the Twofish + RSA MPA algorithm is better and needs less time comparatively to other algorithms. The analysis of the result of encrypted data time calculation is shown in Figure 6. Comparison results on algorithms Two fish, RSA, two fish + RSA and Two fish + RSA MPA in terms of decryption time criteria the Two fish + RSA MPA algorithm is better and needs less time comparatively to other algorithms. Fig. 5: Analysis of Encrypted Data time calculation in seconds Fig. 6: Analysis of Decrypted Data time calculation in seconds ## 5 Conclusion Hybrid system of Twofish with RSA MPA library algorithm successfully implemented to maintain high secure of medical data sharing through blockchain based cloud storage. This paper presents an analysis of Twofish, RSA, Twofish + RSA and Twofish with RSA-MPA algorithm in terms of size of file, level of security, latency, time taken to encrypt the file, time taken to decrypt the filewere used. This research helps to conclude that among the provided criteria, new hybrid system of Twofish with RSA MPA takes all benefits from traditional methods so it is significantly secure and faster retrieval of medical data. For future work, proposed hybrid models can be implemented by entropy index value. 
Additionally, our hybrid system could be improved through analysis and implementation of high-performance message passing computing library, such as, Message Passing Interface (MPI).
2309.12279
The Broad Impact of Feature Imitation: Neural Enhancements Across Financial, Speech, and Physiological Domains
Initialization of neural network weights plays a pivotal role in determining their performance. Feature Imitating Networks (FINs) offer a novel strategy by initializing weights to approximate specific closed-form statistical features, setting a promising foundation for deep learning architectures. While the applicability of FINs has been chiefly tested in biomedical domains, this study extends its exploration into other time series datasets. Three different experiments are conducted in this study to test the applicability of imitating Tsallis entropy for performance enhancement: Bitcoin price prediction, speech emotion recognition, and chronic neck pain detection. For the Bitcoin price prediction, models embedded with FINs reduced the root mean square error by around 1000 compared to the baseline. In the speech emotion recognition task, the FIN-augmented model increased classification accuracy by over 3 percent. Lastly, in the CNP detection experiment, an improvement of about 7 percent was observed compared to established classifiers. These findings validate the broad utility and potency of FINs in diverse applications.
Reza Khanmohammadi, Tuka Alhanai, Mohammad M. Ghassemi
2023-09-21T17:40:44Z
http://arxiv.org/abs/2309.12279v1
The Broad Impact of Feature Imitation: Neural Enhancements Across Financial, Speech, and Physiological Domains ###### Abstract Initialization of neural network weights plays a pivotal role in determining their performance. Feature Imitating Networks (FINs) offer a novel strategy by initializing weights to approximate specific closed-form statistical features, setting a promising foundation for deep learning architectures. While the applicability of FINs has been chiefly tested in biomedical domains, this study extends its exploration into other time series datasets. Three different experiments are conducted in this study to test the applicability of imitating Tsallis entropy for performance enhancement: Bitcoin price prediction, speech emotion recognition, and chronic neck pain detection. For the Bitcoin price prediction, models embedded with FINs reduced the root mean square error by around 1000 compared to the baseline. In the speech emotion recognition task, the FIN-augmented model increased classification accuracy by over 3 percent. Lastly, in the CNP detection experiment, an improvement of about 7 percent was observed compared to established classifiers. These findings validate the broad utility and potency of FINs in diverse applications. Reza Khanmohammadi\({}^{\dagger}\) Tuka Alhanai\({}^{\lx@sectionsign}\) Mohammad M. Ghassemi\({}^{\dagger}\)\({}^{\dagger}\)Computer Science and Engineering Department, Michigan State University \({}^{\lx@sectionsign}\)Department Computer Engineering, New York University Abu Dhabi Feature Imitating Network, Bitcoin Price Prediction, Speech Emotion Recognition, Chronic Neck Pain ## 1 Introduction Deep learning has established itself as a foundational technique across various applications, primarily due to its capability to learn complex patterns and relationships. One of the crucial aspects influencing the efficacy of deep learning models is the initialization of their weights. Proper weight initialization can lead to faster model convergence and enhanced performance [1]. While the reliance on large datasets and extensive computational resources is vital for determining feature quality and model versatility, correct initialization can offset some of the dependencies on these resources. This offset is especially crucial in domains with limited data and computational capabilities, underlining the importance of leveraging deep learning's potential without a heavy reliance on large datasets and extensive resources. To cater to such scenarios, FINs [2] offer an intuitive approach where neural networks are initialized to imitate specific statistical properties. By doing so, FINs provide a more informed starting point, making neural networks less opaque and offering a hint of interpretability in what is often dubbed a "black box." The beauty of FINs lies in their simplicity, allowing researchers to directly incorporate domain-specific knowledge into the model's architecture, fostering both efficacy and understandability. ### Contributions While FINs have made significant strides in biomedical signal processing [2, 3, 4], their applicability in broader domains remains a topic of interest. In this work, we delve into the potential of FINs across three distinct areas: financial, speech, and Electromyography (EMG) time series analysis. Our research aims to demonstrate how integrating a lightweight FIN can enhance the performance of different neural network architectures, regardless of the task or network topology. 
By investigating their effects across different contexts, we offer insights into the adaptability, benefits, and potential boundaries of using FINs. ## 2 Related Work **The Evolution of Transfer Learning Across Domains:** Transfer learning has emerged as a potent technique in machine learning, reshaping the paradigm by repurposing pre-trained models to tackle different tasks from their original intent [5]. Such a strategy has yielded transformative advancements, especially in computer vision [6], speech analysis [7], and natural language processing (NLP) [8]. Foundational models like ResNet [9], wav2vec [10], and BERT [11] stand as prime examples of this shift, requiring significantly reduced training data when finetuned for new tasks. Transitioning this approach to the biomedical arena presents unique challenges. There is an inherent lack of large and diverse biomedical datasets [10][12], which has led to cross-domain adaptations, such as repurposing computer vision models for audio classification [13]. These adaptations, while novel, often do not achieve the same efficacy as within-domain counterparts, highlighting the pressing need for tailored approaches for biomedical data. **Statistical Feature Imitation Bridges the Transfer Learning Divide in Diverse Specialized Tasks:** FINs have established a unique role in addressing this particular challenge [2]. FINs offer a distinctive approach to neural learning by initializing weights to simulate distinct statistical features, effectively bridging domain knowledge with machine learning. This method has catalyzed notable progress in many fields by showcasing its effectiveness across various tasks. In the seminal work introducing FINs [2], the authors showcased the efficacy of this novel approach across three experiments. In Electrocardiogram (ECG) artifact detection, a FIN imitating the Kurtosis feature outperforms standard models in both performance and stability. Similarly, for Electroencephalogram (EEG) artifact detection within the same research, FINs imitating Kurtosis and Shannon's Entropy enhanced results. Moreover, when applied to EEG data for fatigue and drowsiness detection, a FIN based on Shannon's entropy consistently outperformed baselines, while certain models like VGG proved ineffective. Additionally, FINs have shown promise in specialized applications. In biomedical image processing, Ming et al (2023) provided state-of-the-art results across tasks including COVID-19 detection from CT scans and brain tumor identification and segmentation from MRI scans [3]. In sports analytics, the hybrid architecture of MambaNet [4] employed FINs to effectively predict NBA playoff outcomes, showcasing the broad versatility of the FIN approach. Although FINs have shown promise in biomedical applications and sports analytics, their potential in financial and speech time series data is yet to be explored. ## 3 Imitating Tsallis Entropy A FIN is a neural network that is trained to approximate a closed-form statistical feature of choice. In our study, we train a FIN to imitate Tsallis Entropy. Tsallis entropy, a non-extensive generalization of the traditional Shannon entropy, measures the uncertainty or randomness of a system. Uniquely, it takes into account the correlations and higher-order interactions that are often overlooked by the conventional Shannon entropy. This quality makes Tsallis entropy particularly apt for systems exhibiting non-standard statistical behaviors and long-range dependencies. 
**The Influence of \(q\) on Tsallis Entropy** The distinguishing characteristic of Tsallis entropy is its reliance on the parameter \(q\). The Shannon entropy becomes a special case of Tsallis entropy when \(q=1\). When \(q>1\), the entropy gives more weight to lower probabilities, making it more sensitive to rare events. Conversely, for \(q<1\), the entropy calculation is dominated by higher probabilities. This variability in weighting is encapsulated by the equation for a discrete probability distribution \(p(i)\) as influenced by the temperature scaling parameter \(\tau\): \[H_{q}(\tau)=\frac{1}{q-1}\left(1-\sum_{i}\text{softmax}\left(\frac{u(i)}{\tau }\right)^{q}\right) \tag{1}\] Where \(u(i)\) represents the unscaled probabilities from the normalized input. In our implementation, \(q\) is set to a default value of 1.5 and further treated as a trainable parameter within our FIN, allowing the model to adaptively finetune its value to optimally capture the inherent complexities and nuances of the dataset. **Temperature Scaling with Parameter \(\tau\)** Another pivotal parameter in our approximation process is \(\tau\). This temperature parameter modulates the entropy's sensitivity by scaling the inputs to the softmax function. Specifically, as \(\tau\) approaches 0, the softmax output mirrors a one-hot encoded distribution, while increasing \(\tau\) causes the resultant distribution to edge towards uniformity. The introduction of \(\tau\) in the Tsallis entropy equation underlines its importance in shaping the final probabilities. In the context of our work, \(\tau\) is initialized with a default value of 1, but like \(q\), it's also trainable within our FIN, allowing the network to adjust it adaptively during the learning phase. **Training** To approximate the Tsallis entropy using neural networks, we generated synthetic signals with uniform random values between 0 and 1. The output regression values for the FIN were the Tsallis Entropy values, which were computed directly on the synthetic signals using the defined closed-form expression in equation 1. This calculation is fundamentally based on a power-law probability distribution. We utilized a simple gradient descent optimizer along with mean absolute error (L1) loss to train this network. Additionally, early stopping was integrated, and the training was optimized with learning rate modifications facilitated by the ReduceLROnPlateau scheduler. **Baseline Model** In each of our three experiments, we employed a neural network as a comparative baseline. This network had a representational capability (i.e. number of parameters) that was either equal to or exceeded the FIN-embedded networks introduced in that particular experiment. We investigated multiple network topologies, experimenting with as many as ten variations for each baseline. The model that showcased the best performance on the validation set was subsequently chosen for comparison against the Tsallis Entropy FIN-powered networks. ## 4 Experiments & Results ### Experiment I **Objective** This experiment focuses on predicting the closing price of Bitcoin on a given day, for the subsequent day. We hypothesize that we can achieve enhanced predictive accuracy over traditional baselines by initializing certain neural network weights to imitate Tsallis entropy, followed by fine-tuning during training. **Data and Preprocessing** Our study leveraged a publicly accessible dataset1 that spanned over seven years, from March 2015 to April 20221. 
Owing to notable Bitcoin price fluctuations in 2017 and 2021, the dataset was bifurcated into two periods: Period 1, from March 2015 to September 2018, and Period 2, from October 2018 to April 2022. Each period was split into approximately an 85 to 15% ratio for training and testing. The dataset encompassed a total of 47 features clustered into various categories such as Bitcoin price variables, specific technical features of Bitcoin, other cryptocurrencies, commodities, market indexes, and more. In the original study conducted by Chen (2023) [14], ten features were utilized for Period 1: Bitcoin's opening price, highest price of the day, lowest price of the day, closing price, price of one Ethereum in USD, WTI crude oil price, the Standard and Poor's 500, National Association of Securities Dealers Automated Quotations (NASDAQ), Dow Jones Industrial Average (DJI), and the mean difficulty on a given day of finding a hash that meets the protocol-designated requirement. For Period 2, a subset of six features was used: the first four Bitcoin-specific prices, the price of one Ethereum in USD, and the Nikkei 225, determined through feature selection. In contrast, our study consistently employed this subset of six features across both periods, as it led to improved results. **Methods** While the baseline architectures include a Random Forest (RF) regression and a deep LSTM network [14], our research takes this foundation a step further. We introduce a new model, namely Deep LSTM + Attention, which is inspired by the LSTM's structural elements but incorporates significant advancements. Contrary to the original RF regression and LSTM models, our design integrates the last seven timesteps of each feature, enriching its grasp on historical data and potentially enhancing its predictive prowess. Moreover, we incorporated two distinct attention mechanisms: one at the input level and another within the network layers, aiming for refined data representations. Complementing these improvements, we embedded and fine-tuned the Tsallis entropy FIN within this network (FIN-ENN), serving as a transformative layer to delve deeper into the financial intricacies. **Results** The results of our analysis can be found in Table 1. We used Root Mean Square Error (RMSE) that gauges prediction deviations from actual values, and Mean Absolute Percentage Error (MAPE) which quantifies relative error in percentage terms as two metrics for evaluating model performance. In our investigation, we discovered that our introduced Attention-based LSTM network outperformed both the RF regression and LSTM models from the original baseline study. Our model's improvement over the baseline can be attributed to refined neural modeling. Notably, this improvement can be attributed to the meticulous integration of the attention mechanism and extended window size, capturing the last seven timesteps as opposed to the 1 and 2 timestep windows in the original work. Our results indicate a clear superiority of the longer window in effectively predicting next-day closing prices. Building on this, the FIN-Embedded Neural Network (FIN-ENN), which embeds Tsallis entropy at the input level, showcased even greater performance. Specifically, it further decreased prediction errors by 44.16 RMSE and 0.52 MAPE in Period 1, and 94.79 RMSE and 0.33 MAPE in Period 2 when compared to the baseline. The Tsallis entropy is evidently a significant factor in price prediction, as illustrated by our final model. 
By leveraging this entropy, we've effectively harnessed the temporal intricacies of the financial dataset, thus ensuring more precise forecasts. ### Experiment II **Objective** This experiment aims to enhance speech emotion recognition by leveraging FINs. Unlike the previous experiment, where the input data was fed directly into the FIN, here, we utilize a latent representation of the data--a condensed, yet informative, representation derived from previous layers of a deep neural network. Our hypothesis posits that by feeding this latent representation through the FIN, specifically designed to imitate the Tsallis entropy, and further fine-tuning it during training, we can achieve superior recognition performance. Our target is to surpass the state-of-the-art (SOTA) model, the Acoustic CNN (1D) from the reference study. **Data and Preprocessing** We used the publicly available modified version2 of the Sharif Emotional Speech Database (ShEMO) [15], which contains 3 hours and 25 minutes of semi-natural speech samples in.wav format. These 3000 samples, recorded by 87 native Farsi speakers, are multi-labeled. The reference study [16] concentrated on emotions like anger, happiness, sadness, surprise, and neutral. Each speech segment, with an average length of 4.11 seconds, was \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Period** & **Model** & **RMSE** & **MAPE** \\ \hline \multirow{3}{*}{1} & RF regression & 321.61 & 3.39\% \\ & Deep LSTM & 330.26 & 3.57\% \\ & Deep LSTM + Attention & 283.83 & 2.97\% \\ & NN-Baseline & 287.47 & 2.97\% \\ & FIN-ENN & **277.45** & **2.87\%** \\ \hline \multirow{3}{*}{2} & RF regression & 2096.24 & 3.29\% \\ & Deep LSTM & 3045.87 & 4.68\% \\ \cline{1-1} & Deep LSTM + Attention & 2014.43 & 2.96\% \\ \cline{1-1} & NN-Baseline & 2127.70 & 3.18\% \\ \cline{1-1} & FIN-ENN & **2001.45** & **2.96\%** \\ \hline \end{tabular} \end{table} Table 1: Comparative evaluation of models using RMSE and MAPE metrics over the two periods. \begin{table} \begin{tabular}{|c|c|c|} \hline **Method** & **Input Feature** & **Accuracy** \\ \hline Acoustic CNN (1D) & emoq\_large & 66.12 \\ \hline NN-Baseline & w2v2-perisan-v3 & 69.40 \\ FIN-ENN & w2v2-perisan-v3 & **72.23** \\ \hline NN-Baseline & w2v2-perisan-ser & 94.87 \\ FIN-ENN & w2v2-perisan-ser & **95.51** \\ \hline \end{tabular} \end{table} Table 2: Comparison of emotion recognition accuracy across different models and input features. embedded using wav2vec2 [17] to enhance its representation in our neural network model. **Methods** Our method is a deep neural network with a series of fully connected (dense) layers with decreasing units: 512, 256, 128, 64, and 32. Each layer is followed by a ReLU activation function and a dropout layer (rate=0.5) to prevent overfitting. Crucially, after obtaining the 32-unit latent representation from the penultimate layer, the FIN is integrated to compute the Tsallis entropy of this representation. The computed entropy is then concatenated with the 32-unit latent representation and fed into the final fully connected layer to produce the output corresponding to the emotion classes. **Results** Our experiment compared three models: our proposed FIN-ENN, the NN-Baseline, and the Acoustic CNN (1D) from the reference study [16]. The baseline model utilized the emo_large feature set, extracting 6552 high-level acoustic features from each audio file using the openSMILE toolkit [18]. These features arise from high-level statistical functions applied to low-level descriptors. 
Conversely, our FIN-ENN model adopted two fine-tuned versions of the wav2vec2 model: w2v2-persian-v3 and w2v2-persian-ser4. As shown in Table 2, the FIN-ENN model's integration of Tsallis FIN contributed to an absolute accuracy improvement of 2.83% for w2v2-persian-v3 and 0.64% for w2v2-persian-ser compared to their FIN-less counterparts. Footnote 3: [https://huggingface.com/m3hrdadifi/wav2vec2-large-xlsr-persian-v3](https://huggingface.com/m3hrdadifi/wav2vec2-large-xlsr-persian-v3) Footnote 4: [https://huggingface.com/3hrdadifi/wav2vec2-xlsr-persian-speech-emotion-recognition](https://huggingface.com/3hrdadifi/wav2vec2-xlsr-persian-speech-emotion-recognition) ### Experiment III **Objective** This experiment delves into the detection of Chronic Neck Pain (CNP) through EMG data. We hypothesize that embedding a neural network with the FIN, specifically designed to imitate the Tsallis entropy, will improve CNP detection performance compared to traditional models. **Data and Preprocessing** Our dataset, sourced from Jimenez-Grand et al [19] and publicly available on Kaggle5, consists of twenty asymptomatic individuals and twenty with Chronic Neck Pain (CNP). Asymptomatic individuals had no significant neck issues in the last two years, while CNP individuals reported notable pain recently. Data was collected as participants walked barefoot along a six-meter rectilinear path, repeated three times at two-minute intervals. Building upon the approach adopted in the original study by Jim'enez-Grand et al. [19], we extracted the same four time domain and six frequency domain features from the EMG data. However, instead of analyzing every 500 ms of the signal (as determined by a 1000Hz sampling rate), we segmented the entire signal into five distinct parts, a method inspired by [20]. Similarly to prior studies, our focus centered on four upper back muscle groups: Trapezius, Sternocleidomastoid, C4 Paraspinal, and Latissimus Dorsi, with each muscle group including both left and right muscles, and features were computed for each side. Footnote 5: [https://www.kaggle.com/datasets/david893/neck-emg-data](https://www.kaggle.com/datasets/david893/neck-emg-data) **Methods** Jim'enez-Grand et al. [19] employed K-NN, SVM, and LDA for classification, processing both raw and Neighbourhood Component Analysis (NCA)-selected features [21]. In contrast, we used the raw extracted features to train a feed-forward neural network comprising two hidden layers with 256 and 32 units. Drawing inspiration from our previous experiment, the 32-dimensional latent representation from the second hidden layer was channeled into the Tsallis FIN. This processed output was then concatenated with the original 32 features, yielding a 33-dimensional vector that was finally directed to a sigmoid activation to perform the binary classification. **Results** As outlined in Table 3, we compared the performance of our FIN-ENN model against those developed in the original study using accuracy, specificity, and sensitivity. The original study's models, namely K-NN, SVM, and LDA, achieved a maximum accuracy of 55.00% with NCA-selected features. Our NN-Baseline registered an accuracy of 57.50%. However, by leveraging the Tsallis FIN in our architecture, we achieved a superior accuracy of 62.50%. This improvement is also evident in improvements made in both specificity (65.00%) and sensitivity (60.00%). Our results reinforce our initial hypothesis, underscoring the benefits of incorporating the FIN for CNP detection from physiological EMG data. 
## 5 Conclusion In our experiments, integrating a Feature Imitating Network (FIN) designed to imitate Tsallis entropy consistently enhanced predictive model performance across diverse domains. In predicting Bitcoin's next-day closing price, the FIN-enhanced neural network outperformed traditional models such as Random Forest regression and LSTM. Similarly, in speech emotion recognition, the FIN-augmented model excelled at processing latent representations. In detecting Chronic Neck Pain (CNP) from EMG data, it surpassed established classifiers such as K-NN, SVM, and LDA. The consistent edge the FIN provides across these areas underscores its broad utility and efficacy. Future studies can investigate in greater depth which financial, speech, and physiological features are most beneficial to imitate, with the aim of further improving the performance of neural predictive models. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Method** & **Accuracy** & **Specificity** & **Sensitivity** \\ \hline K-NN (raw) & 35.00 & 35.00 & 35.00 \\ SVM (raw) & 32.50 & 31.57 & 33.33 \\ LDA (raw) & 42.50 & 42.85 & 42.10 \\ \hline K-NN (NCA) & 55.00 & 54.54 & 55.55 \\ SVM (NCA) & 55.00 & 60.00 & 54.17 \\ LDA (NCA) & 55.00 & 56.25 & 55.00 \\ \hline NN-Baseline (raw) & 57.50 & 55.00 & 60.00 \\ FIN-ENN (raw) & **62.50** & **65.00** & **60.00** \\ \hline \end{tabular} \end{table} Table 3: Comparison of classification performance in CNP detection.
2303.18069
Pathologies in satisfaction classes
We study subsets of countable recursively saturated models of $\mathsf{PA}$ which can be defined using pathologies in satisfaction classes. More precisely, we characterize those subsets $X$ such that there is a satisfaction class $S$ where $S$ behaves correctly on an idempotent disjunction of length $c$ if and only if $c \in X$. We generalize this result to characterize several types of pathologies including double negations, blocks of extraneous quantifiers, and binary disjunctions and conjunctions. We find a surprising relationship between the cuts which can be defined in this way and arithmetic saturation: namely, a countable nonstandard model is arithmetically saturated if and only if every cut can be the "idempotent disjunctively correct cut" in some satisfaction class. We describe the relationship between types of pathologies and the closure properties of the cuts defined by these pathologies.
Athar Abdul-Quader, Mateusz Łełyk
2023-03-31T13:59:12Z
http://arxiv.org/abs/2303.18069v1
# Pathologies in Satisfaction Classes ###### Abstract. We study subsets of countable recursively saturated models of \(\mathsf{PA}\) which can be defined using pathologies in satisfaction classes. More precisely, we characterize those subsets \(X\) such that there is a satisfaction class \(S\) where \(S\) behaves correctly on an idempotent disjunction of length \(c\) if and only if \(c\in X\). We generalize this result to characterize several types of pathologies including double negations, blocks of extraneous quantifiers, and binary disjunctions and conjunctions. We find a surprising relationship between the cuts which can be defined in this way and arithmetic saturation: namely, a countable non-standard model is arithmetically saturated if and only if every cut can be the "idempotent disjunctively correct cut" in some satisfaction class. We describe the relationship between types of pathologies and the closure properties of the cuts defined by these pathologies. **Keywords**: Nonstandard models of Peano Arithmetic, Satisfaction classes, Recursive saturation, Arithmetical saturation, Disjunctive correctness. **2020 _Mathematics Subject Classification_**: 03C62, 03H15 ## 1. Introduction Kotlarski-Krajewski-Lachlan famously showed [7] that every countable, recursively saturated model of \(\mathsf{PA}\) has a full satisfaction class. Enayat-Visser [3] strengthened this result using more typical model-theoretic tools. These results show the conservativity of the theory \(\mathsf{CT}^{-}\) of compositional truth over the base arithmetic theory \(\mathsf{PA}\). Both proofs illustrate the weakness of \(\mathsf{CT}^{-}\): not only is the theory a conservative extension of the base theory \(\mathsf{PA}\), but it is also consistent with the failure of some very basic truth principles such as _disjunctive correctness_ (DC): "A disjunction is true if and only if it has a true disjunct". In particular, one can construct models of \(\mathsf{CT}^{-}\) in which, for a nonstandard number \(a\), the disjunction \[\underbrace{0\neq 0\lor 0\neq 0\vee\ldots\lor 0\neq 0}_{a\text{ times}}\] is in the extension of the truth predicate. Thus it is well known how to construct _pathological_ satisfaction classes. One can easily exclude such pathological behaviour by adding to the theory \(\mathsf{CT}^{-}\) induction axioms for the extended language. It is well known that the theory \(\mathsf{CT}\) of an _inductive_ truth predicate is not conservative over \(\mathsf{PA}\); indeed, \(\mathsf{CT}\) proves the Global Reflection Principle for \(\mathsf{PA}\), that is the statement (GRP) \[\forall\phi\big{(}\mathrm{Prov}_{\mathsf{PA}}(\phi)\to T(\phi)\big{)}.\] In fact, \(\mathsf{CT}_{0}\), the theory \(\mathsf{CT}^{-}\) augmented by \(\Delta_{0}\)-induction for formulas in the language including the truth predicate, is equivalent to (GRP). Recent work by Enayat and Pakhomov [2] pointed to a deeper connection between non-conservativity and disjunctive correctness. The natural-looking extension of \(\mathsf{CT}^{-}\) with DC turns out to be equivalent to \(\mathsf{CT}_{0}\). Ali Enayat (unpublished) separated DC into two principles: DC-out, stating that every true disjunction has a true disjunct, and DC-in, stating that a disjunction with a true disjunct is true. Cieśliński, Łełyk, and Wcisło [1] show that already \(\mathsf{CT}^{-}+\text{DC-out}\) is equivalent to \(\mathsf{CT}_{0}\), while \(\mathsf{CT}^{-}+\text{DC-in}\) is conservative over \(\mathsf{PA}\). 
Conservativity of DC-in is shown by proving that every countable model of \(\mathsf{PA}\) has an elementary extension which is "disjunctively trivial": that is, one in which every disjunction of nonstandard length is evaluated as true. In such disjunctively trivial models of \(\mathsf{CT}^{-}\), \(\omega\) is definable as the cut for which the truth predicate \(T\) is "disjunctively correct." In this article, we aim at deepening our understanding of the phenomenon of disjunctive correctness: we consider related questions around which sets can be definable by exploiting pathologies in the satisfaction class. We analyze "local pathologies", along the lines of repeated (idempotent) disjunctions of a single, fixed sentence \(\theta\), and non-local pathologies, where, for example, we consider idempotent disjunctions of all sentences. We completely classify the subsets of a model which are definable using local pathologies, and use this to conclude that a countable model of \(\mathsf{PA}\) is arithmetically saturated if and only if it carries a satisfaction class which makes all disjunctions of nonstandard length true. We also classify the cuts in a model which can be definable using non-local pathologies. From the definability perspective, our work complements that of [10], where it was shown that for every subset \(A\) of a countable recursively saturated model \(\mathcal{M}\) there is a satisfaction class \(S\) such that \(A\) is definable in \((\mathcal{M},S)\) as (roughly speaking) the set of those numbers \(x\) such that quantifier correctness fails on the \(x\)-th formula (in a suitably chosen enumeration). We go in the reverse direction: starting from an idempotent sentential operation \(F\) we ask when a set \(A\) can be characterized as the set of those numbers \(x\) for which the satisfaction class behaves correctly when \(F\) is iterated \(x\)-times. Unlike in the case of [10] it turns out that in some countable recursively saturated models, not every cut can be defined in this way. We conclude the paper with several properties about the full disjunctively correct cut. ### Preliminaries We formulate \(\mathsf{PA}\) in the usual language \(\mathcal{L}_{\mathsf{PA}}=\{+,\times,<,0,1\}\). We use script letters \(\mathcal{M},\mathcal{N}\), etc to denote models of \(\mathsf{PA}\) and Roman letters \(M,N\), etc to denote their universes. \(\mathrm{ElDiag}(\mathcal{M})\) denotes the elementary diagram of the model \(\mathcal{M}\). We follow standard definitions and conventions used in the study of models of \(\mathsf{PA}\): see [6, Chapter 1]. We recall some of these conventions here. We fix standard coding for finite sets and sequences: for a model \(\mathcal{M}\models\mathsf{PA}\), \(a,b\in M\), * \(\mathrm{len}(a)\) denotes the length of the sequence coded by \(a\), * \((a)_{b}\) denotes the \(b\)-th element of the sequence coded by \(a\), and * we write \(a\in b\) if \(a\) is in the set coded by \(b\). **Definition 1**.: A model \(\mathcal{M}\models\mathsf{PA}\) is arithmetically saturated iff for every \(a\in M\) for every type \(p(x,a)\) which is arithmetically definable in the type of \(a\), \(p(x,a)\) is realized in \(\mathcal{M}\). We note for the reader the equivalence between _countable recursively saturated models_ and _countable resplendent models_, as well as the equivalence between _arithmetically saturated models_ and recursively saturated models in which \(\omega\) is a strong cut. The interested reader is again directed to [6] for definitions and other references. 
Let \(\mathcal{M}\models\mathsf{PA}\). By \(\operatorname{Form}^{\mathcal{M}}\) and \(\operatorname{Sent}^{\mathcal{M}}\) we refer to the (definable) sets of (Gödel codes of) formulas and sentences, respectively, in the sense of \(\mathcal{M}\). For the rest of this article, we will not distinguish between a formula \(\phi\) and its Gödel code \(\lceil\phi\rceil\). We use the following standard abbreviations: * \(\operatorname{Asn}(x,y)\) is an \(\mathcal{L}_{\mathsf{PA}}\) formula which asserts that \(y\) is an assignment for \(x\), which means that it assigns values to all and only those variables which have free occurrences in \(x\) (\(x\) can be a term or a formula). * \(s^{\alpha}\) denotes the value of the term \(s\) under the assignment \(\alpha\). * \(\dot{\exists}\) denotes the arithmetical operation which given a variable \(v\) and a formula \(\phi\) returns \(\exists v\phi\). \(\dot{\vee}\), \(\dot{\neg}\) and \(\dot{=}\) have analogous meanings. * for any two assignments \(\alpha\), \(\beta\), we write \(\beta\sim_{v}\alpha\) iff \(\beta\) differs from \(\alpha\) at most on a variable \(v\) and the domain of \(\beta\) extends the domain of \(\alpha\) at most with \(v\). * for \(\phi\in\operatorname{Form}_{\mathcal{L}_{\mathsf{PA}}}\) and an assignment \(\beta\), \(\beta\lceil_{\phi}\) (also written \(\beta\lfloor_{\phi}\)) denotes the restriction of \(\beta\) to those variables which have free occurrences in \(\phi\). The theory \(\mathsf{CS}^{-}\) by itself says very little about how a satisfaction class interacts with term substitutions. For example, it is an exercise to use an Enayat-Visser construction to show that in every countable and recursively saturated model \(\mathcal{M}\) there is a satisfaction class \(S\) such that for some formula \(\phi\) and assignment \(\alpha\), \((\exists v\phi,\alpha)\in S\) but for no closed term \(t\), \(\langle\phi[t/v],\alpha\rangle\in S\) (\(\phi[t/v]\) denotes the substitution of a closed term \(t\) for all occurrences of the variable \(v\)). Because of these and similar problems, it is not known whether in an arbitrary model \((\mathcal{M},S)\models\mathsf{CS}^{-}\) one can define a compositional truth predicate \(T\) for the language of arithmetic satisfying the natural axiom \[\forall\phi(v)\big{(}T(\forall v\phi(v))\equiv\forall xT(\phi[\underline{x}/v ])\big{)},\] where \(\underline{x}\) denotes the canonical numeral naming \(x\). It is known that each standard definition of truth from satisfaction (e.g. "being satisfied by all assignments" or "being satisfied by an empty assignment") might fail to define a truth predicate in a model of \(\mathsf{CS}^{-}\). To overcome these problems it is customary to extend the list of axioms of \(\mathsf{CS}^{-}\) with the regularity axiom (compare [10]). Its full-blown definition is rather involved and we will give it in the Appendix. A satisfaction class which satisfies the regularity axiom is called a _regular_ satisfaction class. 
Importantly, if \(S\) is a regular satisfaction class in \(\mathcal{M}\), then terms with the same values can be substituted for free variables in a formula salva veritate, i.e. for every formula \(\phi\in\mathrm{Form}^{\mathcal{M}}\), every variable \(v\in\mathrm{Var}^{\mathcal{M}}\), all terms \(s,t\in\mathrm{Term}^{\mathcal{M}}\) and all assignments \(\alpha\) it holds in \((\mathcal{M},S)\) that \[\mathrm{Asn}(\phi[t/v],\alpha)\wedge\mathrm{Asn}(\phi[s/v],\alpha)\wedge s^{\alpha}=t^{\alpha}\to\big{(}S(\phi[s/v],\alpha)\equiv S(\phi[t/v],\alpha)\big{)}.\] One can check that if \(S\) is a regular satisfaction class in \(\mathcal{M}\), then the formula \(\mathrm{Sent}(x)\wedge S(x,\emptyset)\) defines in \((\mathcal{M},S)\) a truth predicate which satisfies the above natural axiom for the universal quantifier. In the Appendix we show how to improve one of our constructions in order to obtain regular satisfaction classes. As a consequence we will be able to construct many pathological _truth_ classes. However, we decided to leave the regularization of all our constructions for further research. Another basic property of satisfaction classes is satisfying internal induction. Before introducing it let us define one handy abbreviation: if \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(\psi\) is a formula in the sense of \(\mathcal{M}\) with exactly one free variable, then \(T*\psi(x)\) denotes an \(\mathcal{L}_{\mathsf{PA}}\cup\{S\}\)-formula with one free variable \(x\) which naturally expresses "The result of substituting the numeral naming \(x\) for the unique free variable of \(\psi\) is satisfied by the empty assignment" (see [8], Lemma 3.6). We say that in \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(S\) satisfies the internal induction iff for every \(\psi\in\mathrm{Form}^{\mathcal{M}}\) with a unique free variable, the formula \(T*\psi(x)\) satisfies the induction axiom, i.e. \[(\mathcal{M},S)\models T*\psi(0)\wedge\forall x\big{(}T*\psi(x)\to T*\psi(x+1) \big{)}\to\forall xT*\psi(x).\] We conjecture that all our constructions can be fine-tuned to yield regular satisfaction classes satisfying internal induction; however, we leave this problem for a different occasion. **Remark 3**.: As shown in [8], Lemma 3.7, if \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(S\) is regular and satisfies internal induction, then for every \(\psi\in\mathrm{Form}^{\mathcal{M}}\) with exactly one free variable, if \(X_{\psi}=\{x\in M\quad:\quad(\mathcal{M},S)\models T*\psi(x)\}\), then \((\mathcal{M},X_{\psi})\models\mathsf{PA}^{*}\). That is, \((\mathcal{M},X_{\psi})\) satisfies the full induction schema in the language \(\mathcal{L}_{\mathsf{PA}}\cup\{X\}\), where \(X\) is interpreted as \(X_{\psi}\). **Definition 4** (Local compositional conditions).: Let \(\operatorname{Comp}(x,y,z)\) be the disjunction of the following \(\mathcal{L}_{\mathsf{PA}}\cup\{S\}\) formulae 1. \(\exists s,t\in\operatorname{Term}(x=(s\dot{=}t)\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x,\alpha)\equiv s^{\alpha}=t^{\alpha}))\). 2. \(x=(y\dot{\lor}z)\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x, \alpha)\equiv(S(y,\alpha\lfloor_{y})\lor S(z,\alpha\lfloor_{z})))\). 3. \(x=(\dot{\neg}y)\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x, \alpha)\equiv\neg S(y,\alpha))\). 4. \(\exists v\in\operatorname{Var}(x=\dot{\exists}vy\wedge\forall\alpha(\operatorname{Asn}(x,\alpha)\to S(x,\alpha)\equiv\exists\beta\sim_{v}\alpha\,S(y,\beta\lceil_{y})))\). 
Suppose \(\langle\phi_{i}:i\leq c\rangle\) is a coded sequence of elements of \(\operatorname{Sent}^{\mathcal{M}}\) and suppose \(\theta\in\operatorname{Sent}^{\mathcal{M}}\). * \(\bigvee\limits_{i\leq c}\phi_{i}\) is defined, inductively, so that \(\bigvee\limits_{i\leq 0}\phi_{i}=\phi_{0}\), and \(\bigvee\limits_{i\leq n+1}\phi_{i}=(\bigvee\limits_{i\leq n}\phi_{i})\lor \phi_{n+1}\). * \(\bigwedge\limits_{i\leq c}\phi_{i}\) is defined similarly. Given an \(\mathcal{M}\)-definable function \(F:\operatorname{Sent}^{\mathcal{M}}\to\operatorname{Sent}^{\mathcal{M}}\), we define \(F^{c}(x)\) by induction on \(c\) as follows: \(F^{0}(x)=x\), \(F^{c+1}(x)=F(F^{c}(x))\). * \(\bigvee\limits_{i\leq c}^{\operatorname{bin}}\theta\) is defined as \(F^{c}_{\vee}(\theta)\), where \(F_{\vee}(\phi)=\phi\vee\phi\). These are "binary idempotent disjunctions." Similarly, one can define "binary idempotent conjunctions." * \((\neg\neg)^{c}\theta\) is defined as \(F^{c}_{\neg\neg}(\theta)\), where \(F_{\neg\neg}(\phi)=\neg\neg\phi\). * \((\forall x)^{c}\theta\) is defined as \(F^{c}_{\forall}(\theta)\), where \(F_{\forall}(\phi)=\forall x\phi\). In a model \((\mathcal{M},S)\models\mathsf{CS}^{-}\), we can define the following sets: * for a given \(\theta\), the "idempotent disjunctively correct set for \(\theta\)", \[\operatorname{IDC}^{\theta}_{S}=\{c:T(\bigvee\limits_{i<c}\theta)\equiv T( \theta)\},\] * the "idempotent disjunctively correct set": \[\operatorname{IDC}_{S}=\{c:\forall\phi T(\bigvee\limits_{i<c}\phi)\equiv T( \phi)\}.\] * the "disjunctively correct set": \[\operatorname{DC}_{S}=\{c\in M:(\mathcal{M},S)\models\forall\langle\phi_{i}:i \leq c\rangle\big{(}T(\bigvee\limits_{i\leq c}\phi_{i})\equiv\exists i\leq cT (\phi_{i})\big{)}\}.\] We can similarly define the "idempotent conjunctively correct set" for a given \(\theta\), the "quantifier correct set" for a given \(\theta\) (\(\operatorname{QC}^{\theta}_{S}\)), the "double negations correct set" for a given \(\theta\) (\(\operatorname{DNC}^{\theta}_{S}\)), the "binary idempotent disjunctively/conjunctively correct set" (\(\operatorname{IDC}^{\operatorname{bin},\theta}_{S}\)), or their respective non-local versions (\(\operatorname{QC}_{S},\operatorname{DNC}_{S},\operatorname{IDC}^{ \operatorname{bin}}_{S}\)). Given a set \(X\) (often one of the above pathologically definable sets), we introduce the following notation for _the longest initial segment of \(X\)_: \[I(X)=\{x\in X:\forall y\leq x(y\in X)\}.\] This allows us to denote, for example, the idempotent disjunctively correct _cut_, \(I(\operatorname{IDC}_{S})\). ## 2. Separability In this part, we classify which sets can be \(\operatorname{IDC}^{0=1}_{S}\) for some \(S\). Rather than simply looking at disjunctions, however, we generalize the setting to draw similar conclusions about the conjunctively correct set for \(0=0\), the double negations correct set for any atomic sentence \(\phi\), or the binary idempotent disjunctively / conjunctively correct set for \(\phi\) and much more. **Definition 5**.: Let \(X\subseteq\operatorname{Form}^{\mathcal{M}}\). 1. If \(x,y\in\operatorname{Form}^{\mathcal{M}}\), we say \(x\triangleleft y\) if \(x\) is an immediate subformula of \(y\). 2. \(X\) is _closed_ if whenever \(x\triangleleft y\in X\), then \(x\in X\). 3. \(\operatorname{Cl}(X)\) is the smallest closed set containing \(X\). 4. \(F\subseteq X\)_generates_\(X\) if \(X=\operatorname{Cl}(F)\). 5. \(X\) is _finitely generated_ if there is a finite \(F\subseteq X\) that generates it. 
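For a concrete instance of Definition 5, let \(\theta\) be an atomic sentence and \(\psi=(\theta\vee\theta)\vee\theta\). Unwinding the immediate-subformula relation gives \[\operatorname{Cl}(\{\psi\})=\{(\theta\vee\theta)\vee\theta,\ \theta\vee\theta,\ \theta\},\] so \(\operatorname{Cl}(\{\psi\})\) is finitely generated by \(\psi\) alone, whereas the set \(\{\psi,\theta\}\) is not closed, since the immediate subformula \(\theta\vee\theta\) of \(\psi\) is missing.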
We describe a generalization of the idempotent disjunction operation \(c\mapsto\bigvee\limits_{i<c}\theta\). **Definition 6**.: Fix a standard sentence \(\theta\). Let \(\Phi(p,q)\) be a (finite) propositional template, over propositional variables \(p\) and \(q\). By this we mean that in \(\Phi\) we allow all propositional connectives, along with quantifiers (over dummy variables). We insist that \(\Phi\) has non-zero complexity (that is, \(\Phi(p,q)\) has at least one propositional connective or quantifier), along with the following properties: * \(q\) appears in \(\Phi(p,q)\), * if \(\mathcal{M}\models\theta\), then \(\Phi(\top,q)\) is equivalent to \(q\), and * if \(\mathcal{M}\models\neg\theta\), then \(\Phi(\bot,q)\) is equivalent to \(q\). Define \(F:M\to\operatorname{Sent}^{\mathcal{M}}\) as follows: * \(F(0)=\theta\), and * \(F(x+1)=\Phi(\theta,F(x))\). We say such an \(F\) is a _local idempotent sentential operator for \(\theta\)_, and \(\Phi(p,q)\) is a _template_ for \(F\). We emphasize here that \(\Phi\) is finite, so that if \(\phi\) and \(\psi\) are sentences, then \(\psi\in\operatorname{Cl}(\Phi(\phi,\psi))\). In addition, if \(p\) appears in \(\Phi(p,q)\), then \(\phi\in\operatorname{Cl}(\Phi(\phi,\psi))\) as well. Note that for any \(n\in\omega\) and atomic sentence \(\theta\), if \(F\) is a local idempotent sentential operator for \(\theta\) and \((\mathcal{M},S)\models\mathsf{CS}^{-}\), then \((\mathcal{M},S)\models T(\theta)\equiv T(F(n))\). In fact, \((\mathcal{M},S)\models T(F(x))\equiv T(F(x+n))\), for each \(x\in M\). This approach allows us to generalize several examples of local pathologies, for example: \[\left\{\bigvee\limits_{c}(0\neq 0):c\in M\right\}, \left\{\bigwedge\limits_{c}(0=0):c\in M\right\},\] \[\{(\forall x)^{c}(0=0):c\in M\}, \{(\neg\neg)^{c}(0=0):c\in M\}\] can all appear as \(\{F(c):c\in M\}\) for various \(\theta\) and \(\Phi\). We study the question of when, given such a function \(F\), a set \(X\) can be the set \(\{x:T(F(x))\equiv T(\theta)\}\) in a model \((\mathcal{M},S)\models\mathsf{CS}^{-}\). We will see that such sets \(X\) will require the following property. **Definition 7**.: Let \(\mathcal{M}\models\mathsf{PA}\), and \(A\subseteq D\subseteq M\). \(A\) is _separable from \(D\)_ if for each \(a\) such that for every \(n\in\omega\), \((a)_{n}\in D\), there is \(c\) such that for each \(n\in\omega\), \((a)_{n}\in A\) if and only if \(\mathcal{M}\models n\in c\). We say a set \(X\) is _separable_ if it is separable from \(M\). In Propositions 8, 9, and 10, we refer to definable sets and functions. Here we insist that these are definable in the arithmetic structure of \(\mathcal{M}\): that is, they are definable (possibly using parameters) using formulas from \(\mathcal{L}_{\mathsf{PA}}\). First we notice some basic properties of separability. **Proposition 8**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Suppose \(D_{1},D_{2}\) are \(\mathcal{M}\)-definable, \(A\subseteq D_{1}\cap D_{2}\), and \(A\neq D_{1},D_{2}\). Then \(A\) is separable from \(D_{1}\) iff \(A\) is separable from \(D_{2}\)._ Proof.: Fix \(d\in D_{1}\setminus A\). Assume \(A\) is separable from \(D_{1}\) and fix any \(a\) such that for every \(n\), \((a)_{n}\in D_{2}\). Let \(b\) be defined by \[(b)_{i}=\left\{\begin{array}{l}(a)_{i}\text{ if }(a)_{i}\in D_{1}\\ d\text{ otherwise}\end{array}\right.\] Then for every \(i\in\omega\), \((b)_{i}\in D_{1}\), so there is \(c\) such that for every \(i\in\omega\), \((b)_{i}\in A\) iff \(i\in c\). 
Then it follows that also for every \(i\in\omega\), \((a)_{i}\in A\) iff \(i\in c\). **Proposition 9**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Suppose \(A\subseteq D\) and \(f\) is an \(\mathcal{M}\)-definable function such that \(D\subseteq\text{im}(f)\). Then if \(A\) is separable from \(D\), then \(f^{-1}[A]\) is separable from \(f^{-1}[D]\)_ Proof.: Easy exercise. **Proposition 10**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Let \(I\subseteq_{e}\mathcal{M}\) and \(A\subseteq\mathcal{M}\) be an \(\mathcal{M}\)-definable set such that \(\sup(A\cap I)=I\) and \(A\cap I\) is separable. Then \(I\) is separable._ Proof.: Define the function \(f\) by \[f(x)=\left\{\begin{array}{l}\mu y.\{y\in A:x\leq y\}\text{ if such $y$ exists}\\ 0\text{ otherwise}\end{array}\right.\] Then, by the assumptions, \(I=f^{-1}[A\cap I]\). The result follows by Proposition 9. As stated before, given \(\theta\), a local idempotent sentential operator \(F\) for \(\theta\), and \(D=\{F(x):x\in M\}\), we wish to classify the subsets \(A\subseteq D\) which can be the sets of true sentences in \(D\) (equivalently, we wish to classify the sets \(X\) such that \(\{F(x):x\in X\}\) is the set of true sentences in \(D\)). First we need the following Lemma. **Lemma 11**.: _Let \(\mathcal{M}\models\mathsf{PA}\). Let \(\theta\) be an atomic sentence and \(F\) a local idempotent sentential operator for \(\theta\). Let \(J_{0},J_{1}\subseteq M\) be closed under predecessors, disjoint, and \(J_{i}\cap\omega=\emptyset\) for \(i=0,1\). Let \(X=\operatorname{Cl}(\{F(x):x\in J_{0}\cup J_{1}\})\). Then there is a unique \(X\)-satisfaction class \(S\) such that for each \(i\) and \(x\in J_{i}\), \((F(x),\emptyset)\in S\) if and only if \(i=0\)._ Proof.: Let \(S_{0}=\{(F(x),\emptyset):x\in J_{0}\}\). We extend \(S_{0}\) to an \(X\)-satisfaction class \(S\). Take any \(\phi\in X\). Then, since \(J_{i}\) are closed under predecessors and disjoint, then there is a unique \(i\) and minimal \(x\) such that \(\phi\in\operatorname{Cl}(F(x))\) and \(x\in J_{i}\). Recall that \(F(x)=\Phi(\theta,F(x-1))\), and \(\theta\) is atomic. One notices that the subformulas of \(\Phi(\theta,q)\) must be equivalent to one of \(q\), \(\neg q\), \(\top\), or \(\bot\). Let \(\Psi(p,q)\) be the subformula of \(\Phi(p,q)\) such that \(\Psi(\theta,F(x-1))=\phi\). Again, the presentation of \(\phi\) as \(\Psi(\theta,F(x-1))\) is unique by induction in \(\mathcal{M}\). We put \(\langle\phi,\emptyset\rangle\in S\) if any of the following hold: * \(\Psi(\theta,q)\) is equivalent to \(q\) and \(i=0\), * \(\Psi(\theta,q)\) is equivalent to \(\neg q\) and \(i=1\), or * \(\Psi(\theta,q)\) is equivalent to \(\top\). One checks that \(S\) is an \(X\)-satisfaction class. Theorems 12 and 13 are generalizations of unpublished work by Jim Schmerl1. Footnote 1: Private communication to Ali Enayat. **Theorem 12**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Let \(\theta\) be an atomic sentence and \(F\) a local idempotent sentential operator for \(\theta\). Let \(X\subseteq M\) be separable, closed under successors and predecessors, and for each \(n\in\omega\), \(n\in X\) if and only if \(\mathcal{M}\models\theta\). Then \(\mathcal{M}\) has an expansion \((\mathcal{M},S)\models\mathsf{CS}^{-}\) such that \(X=\{x\in M:(\mathcal{M},S)\models T(F(x))\equiv T(\theta)\}\)._ Notice that \(X\) is separable if and only if \(M\setminus X\) is separable. This means that there is some flexibility in building such satisfaction classes \(S\). 
Proof.: Let \(D=\{F(x):x\in M\}\) and \(A=\{F(x):x\in X\}\). Note that \(A\) is separable from \(D\). We build sequences \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) such that: * each \(F_{i}\) is a finitely generated set of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\), * each \(S_{i}\) is a full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated, * \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\), and * for each \(\phi\in D\cap F_{i}\), \((\phi,\emptyset)\in S_{i}\) if and only if \(\phi\in A\). Given such a sequence, \(S=\cup S_{i}\upharpoonright F_{i}\) would be the required full satisfaction class on \(\mathcal{M}\). Externally fix an enumeration of \(\operatorname{Form}^{\mathcal{M}}\) in order type \(\omega\). We can assume, without loss of generality, that \(\theta\) appears first in this enumeration. Suppose \(F_{i}\) and \(S_{i}\) have been constructed. Let \(F_{i+1}\) be generated by \(F_{i}\) and the least \(x\in\operatorname{Form}^{\mathcal{M}}\setminus F_{i}\) in the aforementioned enumeration. Let \(F^{\prime}=F_{i}\cup(F_{i+1}\cap D)\). Let \(a\) be a sequence such that \(\{F((a)_{n}):n\in\omega\}=F_{i+1}\cap D\). Note that this is possible since \(F_{i+1}\) is finitely generated. Let \(c\) be as in the definition of separability for \(a\). Since \(X\) is closed under successors and predecessors, if \((a)_{n}\) and \((a)_{m}\) are in the same \(\mathbb{Z}\)-gap (that is, there is some \(k\in\omega\) such that \((a)_{n}\) and \((a)_{m}\) differ by \(k\)), then \((a)_{n}\in X\) if and only if \((a)_{m}\in X\). Since \(X\) is separable, this means that, if \((a)_{n}\) and \((a)_{m}\) are in the same \(\mathbb{Z}\)-gap, then \(n\in c\) if and only if \(m\in c\). Let \(J_{0}\) be the closure under successors and predecessors of \(\{(a)_{n}:n\in\omega,\ n\in c,\text{ and }(a)_{n}>\omega\}\), and \(J_{1}\) be the closure under successors and predecessors of \(\{(a)_{n}:n\in\omega,\ n\notin c,\text{ and }(a)_{n}>\omega\}\). By Lemma 11, there is a \(\operatorname{Cl}(F_{i+1}\cap D)\)-satisfaction class \(S^{\prime}\) such that for each \(\phi=F((a)_{n})\in F_{i+1}\cap D\), \(S^{\prime}(F((a)_{n}),\emptyset)\) if and only if \((a)_{n}\in X\). That is, \(S^{\prime}(\phi,\emptyset)\) if and only if \(\phi\in A\). Notice that \(\operatorname{Cl}(F^{\prime})=F_{i}\cup\operatorname{Cl}(F_{i+1}\cap D)\). We extend \(S^{\prime}\) to a \(\operatorname{Cl}(F^{\prime})\)-satisfaction class simply by preserving \(S_{i}\) on \(F_{i}\). One notices that if \(\phi\in F_{i}\cap D\), then by induction \(\langle\phi,\emptyset\rangle\in S_{i}\) if and only if \(\phi\in A\). Then \(S^{\prime}\) is a \(\operatorname{Cl}(F^{\prime})\)-satisfaction class, so by [3, Lemma 3.1], \(\mathcal{M}\) has an elementary extension \(\mathcal{N}\) carrying a \(\operatorname{Form}^{\mathcal{M}}\)-satisfaction class \(S\) agreeing with \(S^{\prime}\) on \(\operatorname{Cl}(F^{\prime})\). In particular, this shows the consistency of the recursive theory \(Th\) consisting of the following: * \(S\) is a full satisfaction class, * \(\{S(\phi,\alpha)\equiv S_{i}(\phi,\alpha):\phi\in F_{i}\}\), and * \(\{S(F((a)_{n}),\emptyset)\equiv n\in c:n\in\omega\}\). Since \((\mathcal{M},S_{i})\) is recursively saturated, by resplendency \((\mathcal{M},S_{i})\) has an expansion to \(Th\), and such an expansion is a full satisfaction class agreeing with \(S^{\prime}\) on formulas from \(\operatorname{Cl}(F^{\prime})\). 
Recall that countable recursively saturated models are _chronically resplendent_ ([6, Theorem 1.9.3]): by this we mean that such expansions can, themselves, be taken to be resplendent. That is, we can assume that \((\mathcal{M},S_{i},S)\) is recursively saturated. Let \(S_{i+1}=S\) and continue. In the above result, notice that if \(n\in\omega\), then clearly \(\mathcal{M}\models F(n)\) if and only if \(\mathcal{M}\models\theta\). Therefore, \(\omega\subseteq X\) if and only if \(\mathcal{M}\models\theta\), and \(\omega\cap X=\emptyset\) if and only if \(\mathcal{M}\models\neg\theta\). Moreover, if \(X=\{x:(\mathcal{M},S)\models T(F(x))\}\) then \(X\) is necessarily closed under successors and predecessors. The next result shows that separability of \(X\) is also necessary. **Theorem 13**.: _Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(D\) is any set of sentences (not necessarily of the form \(\{F(x):x\in M\}\)), and \(A=\{\phi\in D:(\mathcal{M},S)\models T(\phi)\}\). Then \(A\) is separable from \(D\)._ Note that by Proposition 9, if \(D=\{F(x):x\in M\}\), \(A=\{F(x):(\mathcal{M},S)\models T(F(x))\}\), and \(X=\{x:F(x)\in A\}\), this is equivalent to stating that \(X\) is separable (from \(M\)). Proof.: Let \(a\in M\) be such that for each \(n\in\omega\), \((a)_{n}\in D\). We show that there is a \(c\) so that for all \(i\in\omega\), \((a)_{i}\in A\) iff \(i\in c\). By a result of Stuart Smith [9, Theorem 2.19], \((\mathcal{M},S)\) is _definably \(S\)-saturated_. This means that for any coded sequence \(\langle\phi_{i}(x):i\in\omega\rangle\) such that each \(\phi_{i}\in\operatorname{Form}^{\mathcal{M}}\), if for each \(i\in\omega\) there is \(m\in M\) such that \((\mathcal{M},S)\models\forall j<i(T(\phi_{j}(m)))\), then there is \(m\in M\) such that for all \(i\in\omega\), \((\mathcal{M},S)\models T(\phi_{i}(m))\). Let \(\phi_{j}(x)\) be the formula given by \((a)_{j}\equiv(j\in x)\). That is, since \((a)_{j}\) is the code of a sentence, \(\phi_{j}(m)\) is evaluated as true in a satisfaction class \(S\) if the sentence \((a)_{j}\) is evaluated as true and \(j\in m\), or \((a)_{j}\) is evaluated as false and \(j\not\in m\). Let \(i\in\omega\), and let \(m\in M\) be such that for all \(j<i\), \((a)_{j}\in A\) if and only if \(j\in m\). Then, \[(\mathcal{M},S)\models\forall j<i\,(T(\phi_{j}(m))).\] Therefore there is \(m\) such that for all \(i\in\omega\), \((\mathcal{M},S)\models T(\phi_{i}(m))\). In particular, for each \(n\in\omega\), if \((a)_{n}\in A\), then \(T((a)_{n})\) and therefore \(n\in m\). Moreover, if \(n\not\in m\), then \((\mathcal{M},S)\models\neg T((a)_{n})\). By the definition of \(A\), this means \((a)_{n}\not\in A\). ## 3. Separable Cuts In this section, we wish to examine the results of the previous section in the case where \(I\subseteq_{\operatorname{end}}M\) is a cut. We examine some properties of separable cuts. We conclude this section by showing that a countable model is arithmetically saturated if and only if it has a disjunctively trivial expansion to a model of \(\mathsf{CS}^{-}\). **Proposition 14**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be nonstandard and \(I\subseteq_{\operatorname{end}}M\). The following are equivalent._ 1. \(I\) _is separable._ 2. _There is no_ \(a\in M\) _such that_ \(I=\sup(\{(a)_{i}:i\in\omega\}\cap I)=\inf(\{(a)_{i}:i\in\omega\}\setminus I)\)_._ 3. 
_For every_ \(a\in M\)_, there is_ \(d\) _such that for all_ \(i\in\omega\)_,_ \((a)_{i}\in I\) _if and only if_ \((a)_{i}<d\)_._ Compare (3) to the notion of _strength_: a cut \(I\subseteq_{\operatorname{end}}M\) is strong if for each \(a\) there is \(c>I\) such that whenever \(i\in I\), \((a)_{i}\in I\) if and only if \((a)_{i}<c\). Clearly, condition (3) is equivalent to strength if \(I=\omega\). Proof.: \((2)\iff(3)\) follows immediately from the definitions. We show \((1)\implies(3)\): Suppose \(I\) is separable and let \(a\in M\). We show that there is \(c\in M\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \((a)_{n}<c\). Since \(I\) is separable, there is \(c\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \(n\in c\). Consider the type \[p(x)=\{(a)_{n}<x\equiv n\in c:n\in\omega\}.\] This type is finitely satisfiable, so (by restricted saturation of nonstandard models, see [6, Corollary 1.11.4]) there is \(c^{\prime}\) which satisfies \(p(x)\). Now we show \((3)\implies(1)\). Let \(a\in M\). There is \(c\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \((a)_{n}<c\). Consider the type \[p(x)=\{(a)_{n}<c\equiv n\in x:n\in\omega\}.\] This type is finitely satisfiable and therefore satisfied by some \(c^{\prime}\in M\). Such a \(c^{\prime}\) witnesses separability of \(I\). By Theorem 12, Theorem 13, and Proposition 14, \(I\) is separable if and only if there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \[I=\{c:(\mathcal{M},S)\models\neg T(\bigvee_{c}(0=1))\}.\] Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\). By Theorem 13 one has that \(X=\{c:(\mathcal{M},S)\models\neg T(\bigvee_{c}(0=1))\}\) is separable. Is it the case that \(I(\operatorname{IDC}_{S}^{0=1})=\{x:\forall c<x\,\neg T(\bigvee_{c}(0=1))\}\) is also separable? Our next result shows that this is not always the case: if \(I\subseteq_{\operatorname{end}}M\) has no least \(\mathbb{Z}\)-gap above it, then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I(\operatorname{IDC}_{S}^{0=1})=I\). Later, in Corollary 19, we see that if \(\mathcal{M}\) is not arithmetically saturated, then such an \(I\) need not be separable. **Proposition 15**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Suppose \(I\subseteq_{\operatorname{end}}M\) has no least \(\mathbb{Z}\)-gap above it. Then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and_ \[I=\{x:\forall c<x\,\neg T(\bigvee_{c}(0=1))\}.\] Proof.: First notice that for any \(c<d\) in different \(\mathbb{Z}\)-gaps, for any \(a\in M\), there is \(b\) such that \(c<b<d\) and \(b\not\in\{(a)_{i}:i\in\omega\}\). To see this, notice that if \(a,c\), and \(d\) are as above, by recursive saturation the type \[p(x)=\{c<x<d\}\cup\{(a)_{i}\neq x:i\in\omega\}\] is realized in \(M\). In fact, one can ensure that the \(\mathbb{Z}\)-gap of such a \(b\) is disjoint from \(c\), \(d\), and \(\{(a)_{i}:i\in\omega\}\). Now we show how to construct the required satisfaction class. Fix a sequence \(d_{0}>d_{1}>\ldots\) such that \(d_{i+1}\) is not in the same \(\mathbb{Z}\)-gap as \(d_{i}\) and \(\inf(\{d_{i}:i\in\omega\})=I\). 
We proceed similarly to Theorem 12: we build sequences \(b_{0}>b_{1}>\ldots\), \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) such that: * for each \(i\in\omega\), \(d_{i+1}<b_{i}<d_{i}\) and \(b_{i}\) is in a different \(\mathbb{Z}\)-gap from \(d_{i}\) and \(d_{i+1}\), * each \(F_{i}\) is a finitely generated set of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\), * each \(S_{i}\) is a full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated, * \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\), * \(\bigvee_{d_{i}}(0=1)\in F_{i}\) and whenever \(\bigvee_{c}(0=1)\in F_{i}\) and \(c\leq d_{i}\), \(\langle\bigvee_{c}(0=1),\emptyset\rangle\not\in S_{i}\), and * \(\bigvee_{b_{i}}(0=1)\in F_{i+1}\setminus F_{i}\) and \(\langle\bigvee_{b_{i}}(0=1),\emptyset\rangle\in S_{i+1}\). Given such a sequence, let \(S=\cup(S_{i}\upharpoonright F_{i})\). Then \(S\) is the required full satisfaction class. To see this, suppose \(J=\{x:\forall c<x\,\neg T(\bigvee_{c}(0=1))\}\). Notice that \((\mathcal{M},S)\models T(\bigvee_{b_{i}}(0=1))\), so for each \(x\in J\) and \(i\in\omega\), \(x\leq b_{i}\); since \(\inf(\{b_{i}:i\in\omega\})=I\), we have \(J\subseteq I\). Conversely, let \(d\in I\). For each \(c<d\), there is \(i\) such that \(\bigvee_{c}(0=1)\in F_{i}\). Then \(c<d_{i}\), so \(\langle\bigvee_{c}(0=1),\emptyset\rangle\not\in S_{i}\), and hence \(\langle\bigvee_{c}(0=1),\emptyset\rangle\not\in S\). We proceed to the construction. Suppose \(F_{i}\) and \(S_{i}\) have been constructed satisfying the above. Since \(F_{i}\) is finitely generated, there is an \(a\) coding the lengths of disjunctions of \((0=1)\) in \(F_{i}\). By recursive saturation, there is \(b_{i}\) such that \(d_{i+1}<b_{i}<d_{i}\) and \(b_{i}\not\in\{(a)_{j}:j\in\omega\}\); moreover, we ensure that the \(\mathbb{Z}\)-gap of \(b_{i}\) is disjoint from \(d_{i}\), \(d_{i+1}\), and \(\{(a)_{j}:j\in\omega\}\). Let \(F_{i+1}\) be generated by \(F_{i}\), \(\bigvee_{b_{i}}(0=1)\), \(\bigvee_{d_{i+1}}(0=1)\), and the first formula \(\phi\not\in F_{i}\) in some externally fixed enumeration of \(\operatorname{Form}^{\mathcal{M}}\). Let \[F^{\prime}=F_{i}\cup(F_{i+1}\cap\{\bigvee_{c}(0=1):c\in M\}).\] Then \(F^{\prime}\) is a closed set of formulas. Let \(S^{\prime}=S_{i}\upharpoonright F_{i}\cup\{(\bigvee_{b_{i}-n}(0=1),\emptyset): n\in\omega\}\). In particular, \(\langle\bigvee_{d_{i+1}}(0=1),\emptyset\rangle\not\in S^{\prime}\). \(S^{\prime}\) is an \(F^{\prime}\)-satisfaction class, so by [3, Lemma 3.1], \(\mathcal{M}\) has an elementary extension \(\mathcal{N}\) carrying a \(\operatorname{Form}^{\mathcal{M}}\)-satisfaction class \(S\) agreeing with \(S^{\prime}\) on \(F^{\prime}\). Therefore, the theory \(Th\) asserting the following is consistent: * \(S\) is a full satisfaction class, * \(S\) agrees with \(S_{i}\) on formulas from \(F_{i}\), * \(\{S(\bigvee_{b_{i}-n}(0=1),\emptyset):n\in\omega\}\), and * \(\{\neg S(\bigvee_{c}(0=1),\emptyset):c<d_{i+1},\bigvee_{c}(0=1)\in F_{i+1}\}\). By resplendency, \(\mathcal{M}\) has a full satisfaction class \(S\) satisfying \(Th\); by chronic resplendency, we can assume \((\mathcal{M},S)\) is recursively saturated. Let \(S_{i+1}=S\) and continue. To find some examples of separable cuts, we recall some definitions from [5]. Below, we let \(\operatorname{Def}_{0}(a)\) be the set of elements of \(\mathcal{M}\) which are \(\Delta_{0}\)-definable from \(a\) in \(\mathcal{M}\). 
**Definition 16** ([5]).: Let \(\mathcal{M}\models\mathsf{PA}\) and let \(I\subseteq_{\operatorname{end}}M\). 1. \(I\) is _coded by \(\omega\) from below_ if there is \(a\in M\) such that \(I=\sup(\{(a)_{i}:i\in\omega\})\). \(I\) is _coded by \(\omega\) from above_ if there is \(a\in M\) such that \(I=\inf(\{(a)_{i}:i\in\omega\})\). \(I\) is \(\omega\)_-coded_ if it is either coded by \(\omega\) from below or from above. 2. \(I\) is \(0\)_-superrational_ if there is \(a\in M\) such that either \(\operatorname{Def}_{0}(a)\cap I\) is cofinal in \(I\) and for all \(b\in M\), \(\operatorname{Def}_{0}(b)\setminus I\) is not coinitial in \(M\setminus I\), or \(\operatorname{Def}_{0}(a)\setminus I\) is coinitial in \(M\setminus I\) and for all \(b\in M\), \(\operatorname{Def}_{0}(b)\cap I\) is not cofinal in \(I\). **Theorem 17**.: _Let \(\mathcal{M}\models\mathsf{PA}\) and \(I\subseteq_{\text{end}}M\). Then the following are equivalent:_ 1. \(I\) _is_ \(\omega\)_-coded and separable._ 2. \(I\) _is_ \(0\)_-superrational._ Proof.: \((1)\implies(2)\): Suppose \(I\) is \(\omega\)-coded, and let \(a\) be such that \(\sup(\{(a)_{i}:i\in\omega\})=I\) (the case in which \(I\) is coded by \(\omega\) from above is similar). Suppose also that \(b\in M\) is such that \(\operatorname{Def}_{0}(b)\setminus I\) is coinitial in \(M\setminus I\). Then the following type is realized in \(M\): \[p(x)= \{(x)_{2n}=(a)_{n}:n\in\omega\}\] \[\cup \{(x)_{2n+1}=t_{n}(b):n\in\omega\},\] where \(\langle t_{n}:n\in\omega\rangle\) is a recursive enumeration of all \(\Delta_{0}\)-definable Skolem functions. If \(c\) realizes this type, then \(\sup(\{(c)_{i}:i\in\omega\}\cap I)=\inf(\{(c)_{i}:i\in\omega\}\setminus I)=I\), contradicting (1). \((2)\implies(1)\): [5, Proposition 6.2] implies that if \(I\) is \(0\)-superrational, then \(I\) is \(\omega\)-coded. To see separability, notice that by \(0\)-superrationality, if \(\operatorname{Def}_{0}(a)\cap I\) is cofinal in \(I\), then \(\operatorname{Def}_{0}(a)\setminus I\) is not coinitial in \(M\setminus I\) (and vice versa). [5, Theorem 6.5] states that \(\omega\) is a strong cut if and only if every \(\omega\)-coded cut is \(0\)-superrational. Taken together with the above result, we see that if \(\omega\) is not strong, then separable cuts are never \(\omega\)-coded. **Proposition 18**.: _For any \(\mathcal{M}\models\mathsf{PA}\):_ 1. _If_ \(\omega\) _is a strong cut, then every cut_ \(I\) _which is_ \(\omega\)_-coded is separable._ 2. _If_ \(\omega\) _is not a strong cut, then every cut_ \(I\) _which is_ \(\omega\)_-coded is not separable._ Proof.: \((1)\) is due to [5, Theorem 6.5\((a)\implies(c)\)]. We show (2). Suppose \(\omega\) is not strong. There is \(a\) such that \(\inf(\{(a)_{i}:i\in\omega\}\setminus\omega)=\sup(\{(a)_{i}:i\in\omega\}\cap \omega)=\omega\). If \(I\subseteq_{\text{end}}M\) is a cut which is \(\omega\)-coded from above, then there is \(c>I\) such that \(I=\inf(\{(c)_{n}:n\in\omega\})\). For simplicity assume that the sequence coded by \(c\) is a strictly decreasing and its domain consists of all elements smaller than a nonstandard element \(d\). Let \(b\) code the sequence defined by \((b)_{i}=(c)_{(a)_{i}}\). We claim that \(b\) witnesses the failure of separability of \(I\). Indeed, \((c)_{(a)_{i}}\in I\) if and only if \((c)_{(a)_{i}}<(c)_{n}\) for each standard \(n\) if and only if \((a)_{i}>\omega\). 
Since the set \(\{(a)_{i}:i\in\omega\}\setminus\omega\) is coinitial with \(\omega\), the set \(\{(c)_{(a)_{i}}:i\in\omega\}\cap I\) is cofinal in \(I\). Indeed, for any \(x\in I\) there is a nonstandard number \(y<d\) such that \(x<(c)_{y}\in I\). However, by the properties of \(a\) there is also a standard number \(i\in\omega\) such that \(\omega<(a)_{i}<y\). Since \(c\) is strictly decreasing, it follows that for any such \(i\), \(x<(c)_{(a)_{i}}\in I.\) Similarly, since \(\{(a)_{i}:i\in\omega\}\cap\omega\) is cofinal in \(\omega\), the set \(\{(c)_{(a)_{i}}:i\in\omega\}\setminus I\) is coinitial with \(I\). The case when \(I\) is coded by \(\omega\) from below is treated similarly. **Corollary 19**.: _Suppose \(\mathcal{M}\models\mathsf{PA}\) is countable, recursively saturated but not arithmetically saturated. Then there are separable sets \(X\) such that \(I(X)\) is not separable._ Proof.: Let \(c\) be nonstandard, and \(I=\sup(\{c+n:n\in\omega\})\). Then \(I\) has no least \(\mathbb{Z}\)-gap above it, and so by Proposition 15, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I=I(\operatorname{IDC}_{S}^{0=1})\). Let \(X=\operatorname{IDC}_{S}^{0=1}\). Then \(X\) is separable by Theorem 13 and \(I=I(X)\). Since \(I\) is \(\omega\)-coded, by Proposition 18 (2), if \(\omega\) is not a strong cut, then \(I\) cannot be separable. Separable cuts always exist in recursively saturated models. We can in fact see more: every recursively saturated model \(\mathcal{M}\) has a separable cut \(I\subseteq_{\operatorname{end}}M\) which is not closed under addition. Moreover, \(\mathcal{M}\) has separable cuts \(I\subseteq_{\operatorname{end}}M\) that are closed under addition but not multiplication, and ones closed under multiplication but not exponentiation. To see this, first notice that if \((\mathcal{M},I)\) is recursively saturated and \(I\subseteq_{\operatorname{end}}M\), then \(I\) is separable. This follows directly from the equivalent definition of separability that says that for each \(a\) there is \(d\) such that for all \(i\in\omega\), \((a)_{i}\in I\) iff \((a)_{i}<d\). Now let \(I\subseteq_{\operatorname{end}}M\) be any cut not closed under addition. By resplendence, there is \(J\subseteq_{\operatorname{end}}M\) such that \((\mathcal{M},J)\) is recursively saturated and \(J\) is not closed under addition. Again, notice that this proof generalizes to show that if \(f\) and \(g\) are increasing definable functions such that there is any cut \(I\subseteq_{\operatorname{end}}M\) closed under \(f\) but not \(g\), then there is a separable cut \(J\subseteq_{\operatorname{end}}M\) closed under \(f\) but not \(g\). Hence there are separable cuts which are closed under addition but not multiplication, and cuts which are closed under multiplication but not exponentiation. ### Arithmetic Saturation In [1, Lemma 26], we see that there exist _disjunctively trivial_ models: models \((\mathcal{M},T)\models\mathsf{CT}^{-}\) such that for all sequences \(\langle\phi_{i}:i<c\rangle\) of sentences such that \(c>\omega\), \((\mathcal{M},T)\models T(\bigvee\limits_{i<c}\phi_{i})\). That is, models such that all disjunctions of nonstandard length are evaluated as true. In this part we see that disjunctive triviality implies arithmetic saturation. **Definition 20**.: Let \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{\operatorname{end}}M\). 1. 
If, for every \(c>I\) and every sequence of sentences (in the sense of \(\mathcal{M}\)) \(\langle\phi_{i}:i<c\rangle\), \((\mathcal{M},S)\models T(\bigvee\limits_{i<c}\phi_{i})\), then we say \((\mathcal{M},S)\) is _disjunctively trivial above_\(I\). If \((\mathcal{M},S)\) is disjunctively trivial above \(\omega\), we simply say \((\mathcal{M},S)\) is _disjunctively trivial_. 2. If, for every \(c\in I\) and every sequence of sentences (in the sense of \(\mathcal{M}\)) \(\langle\phi_{i}:i<c\rangle\), \((\mathcal{M},S)\models T(\bigvee\limits_{i<c}\phi_{i})\equiv\exists i<c\;T( \phi_{i})\), we say that \((\mathcal{M},S)\) is _disjunctively correct on \(I\)_. **Corollary 21**.: _Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{\operatorname{end}}M\). If \((\mathcal{M},S)\) is disjunctively trivial above \(I\) and disjunctively correct on \(I\), then \(I\) is separable. In particular, if \((\mathcal{M},S)\) is disjunctively trivial above \(\omega\), then \(\mathcal{M}\) is arithmetically saturated. Conversely, if \(\mathcal{M}\) is arithmetically saturated, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \((\mathcal{M},S)\) is disjunctively trivial above \(\omega\)._ Proof.: If \((\mathcal{M},S)\models\mathsf{CS}^{-}\) is disjunctively trivial above \(I\) and disjunctively correct on \(I\), then \(I=\{c:(\mathcal{M},S)\models\neg T(\bigvee_{c}(0=1))\}\). Therefore \(I\) is separable by Theorem 13. If \(I=\omega\), then (by Proposition 14) \(\omega\) is a strong cut in \(\mathcal{M}\) and therefore \(\mathcal{M}\) is arithmetically saturated. Conversely, suppose \(\mathcal{M}\) is arithmetically saturated. We construct sequences \(F_{0}\subseteq F_{1}\subseteq\dots\) of finitely generated sets of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\) and full satisfaction classes \(S_{0},S_{1},\dots\). Suppose \(S_{i}\) is a full satisfaction class such that \((\mathcal{M},S_{i})\) is recursively saturated and if \(\phi\in F_{i}\cap\operatorname{Sent}^{\mathcal{M}}\) is a disjunction of nonstandard length, then \(S_{i}(\phi,\emptyset)\). Let \(a\) code the lengths of all disjunctions in \(F_{i+1}\). That is, suppose \((b)_{n}\) is the \(n\)-th element of \(F_{i+1}\), and \((a)_{n}\) is the maximum \(c\) such that there is a sequence \(\langle\phi_{j}:j<c\rangle\) such that \((b)_{n}=\bigvee\limits_{j<c}\phi_{j}\). Since \(\omega\) is strong, there is \(d>\omega\) such that for each \(n\in\omega\), \((a)_{n}\in\omega\) if and only if \((a)_{n}<d\). By [1, Lemma 26], the theory \(Th\) asserting the following is consistent: * \(\operatorname{ElDiag}(\mathcal{M})\), * \(S_{i+1}\) is compositional for each \(\phi\in F_{i+1}\), * \(\{S_{i}(\phi,\alpha)\equiv S_{i+1}(\phi,\alpha):\phi\in F_{i}\}\) for all assignments \(\alpha\) of \(\phi\), and * \(\{S_{i+1}(\bigvee\limits_{j<c}\phi_{j},\alpha):\bigvee\limits_{j<c}\phi_{j} \in F_{i+1}\text{ and }c>d\}\) for all assignments \(\alpha\) of \(\bigvee\limits_{j<c}\phi_{j}\). Since \(Th\) is a consistent, recursive theory and \((\mathcal{M},S_{i})\) is recursively saturated, by resplendence, \((\mathcal{M},S_{i})\) has an expansion \((\mathcal{M},S_{i},S_{i+1})\models Th\). Continue as before, obtaining \(S=\cup S_{i}\upharpoonright F_{i}\), a full satisfaction class which is disjunctively trivial. We observe that, for each \(n\), there is an arithmetical sentence \(\theta_{n}\):= "There exists a \(\Delta_{n}\) full model of \(\mathsf{CS}^{-}\) which is disjunctively trivial above \(\omega\)". 
Here by "\(\omega\)" we mean the image of the canonical embedding of the ground model onto an initial segment of the model and a "full model" means a model with a satisfaction relation satisfying the usual Tarski's truth condition. Corollary below shows that each such sentence is false. **Corollary 22**.: _For every \(n\), \(\mathbb{N}\models\neg\theta_{n}\)._ Proof.: Assume to the contrary and fix \(n\) such that \(\mathbb{N}\models\theta_{n}\). Fix a \(\Delta_{n}\)-definable model \(\mathcal{M}:=(M,+,\cdot,S)\models\mathsf{CS}^{-}\) such that \(\mathbb{N}\subseteq(M,+,\cdot)\) and \(\mathcal{M}\) is disjunctively trivial above \(\omega\). Then \((M,+,\cdot)\) is arithmetically saturated and consequently \((\mathbb{N},\operatorname{SSy}(\mathcal{M}))\models\mathsf{ACA}_{0}\). However, each set in \(\operatorname{SSy}(\mathcal{M})\) is \(\Delta_{n}\) definable in \(\mathbb{N}\), which is not possible. It follows from the above corollary that the construction of the disjunctively trivial model of \(\mathsf{CT}^{-}\) does not formalize in any true arithmetical theory, in particular it does not formalize in \(\mathsf{PA}\). Hence one cannot hope to interpret \(\mathsf{CT}^{-}+\operatorname{DC}-\operatorname{in}\) in \(\mathsf{PA}\) by using the construction of a disjunctively trivial model internally in \(\mathsf{PA}\). This is unlike in the case of a standard Enayat-Visser construction: [4] shows how to formalize the model theoretical argument from [3] in \(\mathsf{PA}\) in order to conclude that \(\mathsf{CT}^{-}\) is feasibly reducible to \(\mathsf{PA}\) and, in consequence, it does not have speed-up over \(\mathsf{PA}\). ## 4. Non-local Pathologies In previous sections, we have considered a single, fixed \(\theta\) and functions \(F\) such that \(F(x)\) is the \(x\)-th iterate of \(\theta\) in some sense. We described sets defined by certain correctness properties with respect to this \(\theta\). In other words, we explored "local" pathologies (pathologies that are local to a fixed \(\theta\)). In this section we address several sets defined using non-local pathologies: for example, instead of fixing a \(\theta\) and looking at the idempotent disjunctions of \(\theta\), we look at all idempotent disjunctions (of any sentence). These sets can include \(\operatorname{IDC}_{S}\), \(\operatorname{QC}_{S}\), \(\operatorname{IDC}_{S}^{\operatorname{bin}}\), \(\operatorname{DNC}_{S}\), among others. **Remark 23**.: Let us fix a model \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and consider \[\operatorname{QC}_{S}=\{c:\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T( (\forall x)^{c}\phi)\equiv T(\phi)\}.\] Notice that \(\operatorname{QC}_{S}\) is necessarily closed under addition, since if, for each \(\phi\), \[T((\forall x)^{c}\phi)\equiv T(\phi),\] then let \(\theta=(\forall x)^{c}\phi\), and so \[T((\forall x)^{c}\theta)\equiv T(\theta)=T((\forall x)^{c}\phi)\equiv T(\phi).\] Since \((\forall x)^{c}\theta=(\forall x)^{2c}\phi\), we conclude that \(c\in\operatorname{QC}_{S}\) if and only if \(2c\in\operatorname{QC}_{S}\). Suppose that \(\operatorname{QC}_{S}\) is not a cut, and let \(c_{0}<c_{1}\) be such that \(c_{0}\notin\operatorname{QC}_{S}\) and \(c_{1}\in\operatorname{QC}_{S}\). Then there is \(\phi\) such that \(\neg[T((\forall x)^{c_{0}}\phi)\equiv T(\phi)]\), but \(T((\forall x)^{c_{1}}\phi)\equiv T(\phi)\). 
Then \(c_{1}\in\operatorname{QC}_{S}\), \(2c_{1}\in\operatorname{QC}_{S}\), but \(c_{0}+c_{1}\notin\operatorname{QC}_{S}\), since \(T((\forall x)^{c_{0}+c_{1}}\phi)\equiv T((\forall x)^{c_{0}}\phi)\). Let \(I\subseteq_{\operatorname{end}}J_{0}\subseteq_{\operatorname{end}}J_{1} \subseteq_{\operatorname{end}}M\) be separable cuts closed under addition such that \(c_{0}\in J_{0}\) and \(c_{1}\in J_{1}\setminus J_{0}\). Then \(X=I\cup(J_{1}\setminus J_{0})\) is separable, but by the above argument, there can be no \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(\operatorname{QC}_{S}=X\). This remark shows that there are complications that occur with sets defined by these non-local pathologies. For the remainder of this section, we look instead at the _cuts_ defined by these pathologies. We again generalize the setting to draw conclusions about \(I(\operatorname{IDC}_{S}),I(\operatorname{QC}_{S})\) and \(I(\operatorname{IDC}_{S}^{\operatorname{bin}})\). To formalize this notion, we again look at finite propositional templates \(\Phi(p,q)\) (recall this notion from the beginning of Section 2). We restrict our attention to \(\Phi\) with the following properties: * \(\Phi(p,q)\) is not equivalent to \(p\), * the complexity of \(\Phi(p,q)\) is non-zero, * \(q\)**must** appear in \(\Phi(p,q)\), * \(p\wedge q\vdash\Phi(p,q)\), and * \((\neg p\wedge\neg q)\vdash\neg\Phi(p,q)\). **Definition 24**.: Suppose \(\Phi\) has the above properties. Then \(F:M\times\operatorname{Sent}\to\operatorname{Sent}\) defined as follows: * \(F(0,\phi)=\phi\) for all \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), and * \(F(x+1,\phi)=\Phi(\phi,F(x,\phi))\). is called an _idempotent sentential operator_. We say that \(\Phi\) is a _template for \(F\)_. Notice that for any \(\theta\), the function \(F(\cdot,\theta)\) is one to one. **Lemma 25**.: _Let \(\Phi\) be a template for \(F\), and \(F\) an idempotent sentential operator. If \(p\) does not appear in \(\Phi(p,q)\), then for all \(x,y\in M\) and \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), \(\mathcal{M}\models F(x+y,\phi)=F(x,F(y,\phi))\)._ Proof.: If \(p\) does not appear in \(\Phi(p,q)\), then there is a propositional function \(\Psi(q)\) such that \(\Phi(p,q)=\Psi(q)\). Let \(G:\operatorname{Sent}^{\mathcal{M}}\to\operatorname{Sent}^{\mathcal{M}}\) be defined by \(G(\phi)=\Psi(\phi)\). Then, \[F(x+1,\phi)=\Psi(F(x,\phi))=G(F(x,\phi)).\] Since \(F\) and \(G\) are \(\mathcal{M}\)-definable, by induction, one observes that for all \(x\), \(F(x,\phi)=G^{x}(\phi)\), the \(x\)-th iterate of \(G\). Therefore, \[F(x+y,\phi)=G^{x+y}(\phi)=G^{x}(G^{y}(\phi))=G^{x}(F(y,\phi))=F(x,F(y,\phi)).\qed\] As before, notice that if \(p\) appears in \(\Phi(p,q)\), then for each \(\phi\) and \(x\), \(\phi\in\operatorname{Cl}(F(x,\phi))\). For this reason, if \(p\) appears in \(\Phi(p,q)\), we refer to \(F\) as _accessible_. If not, then because of Lemma 25, we say \(F\) is _additive_. **Definition 26**.: Let \(F\) be an idempotent sentential operator. * \(\theta\) is \(F\)_-irreducible_ if whenever \(F(x,\phi)=\theta\), then \(\phi=\theta\) and \(x=0\). * The \(F\)_-length_ of \(\phi\) is the maximum \(x\) such that there is \(\theta\) such that \(F(x,\theta)=\phi\). * The \(F\)_-root_ of \(\phi\) is the unique \(\theta\) such that \(F(x,\theta)=\phi\), where \(x\) is the \(F\)-length of \(\phi\). 
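For a small worked instance of these notions, take the additive template \(\Phi(p,q)=q\vee q\) and let \(\theta\) be \(F\)-irreducible. Then \[F(2,\theta)=(\theta\vee\theta)\vee(\theta\vee\theta),\] and this sentence has \(F\)-length \(2\) and \(F\)-root \(\theta\): any presentation \(F(y,\psi)=F(2,\theta)\) with \(y>2\) would force \(F(y-2,\psi)=\theta\) with \(y-2>0\), contradicting the \(F\)-irreducibility of \(\theta\).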
**Remark 27**.: By working through the possible truth tables for \(\Phi(p,q)\), one notices that if \(\Phi(p,q)\) has the required properties, then it is logically equivalent to one of the following propositional formulae: * \(p\lor q\), * \(p\wedge q\), or * \(q\). We say that \(\Phi(p,q)\) is \(q\)_-monotone_ if it is logically equivalent to either \(p\lor q\) or to \(q\). Notice that if \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), then in each of these cases, one can show that if \((\mathcal{M},S)\models\mathsf{CS}^{-}\), then \((\mathcal{M},S)\models\forall x\big(T(F(x,\phi))\equiv T(F(x+1,\phi))\big)\). **Lemma 28**.: _Let \(F\) be an idempotent sentential operator._ 1. _If_ \(F\) _is accessible, then for all_ \(x,y>0\)_,_ \(\phi,\psi\in\operatorname{Sent}^{\mathcal{M}}\)_, if_ \(F(x,\phi)=F(y,\psi)\)_, then_ \(x=y\) _and_ \(\phi=\psi\)_. In other words, when_ \(x>0\)_, the_ \(F\)_-root of_ \(F(x,\phi)\) _is_ \(\phi\)_._ 2. _If_ \(F\) _is additive, then the_ \(F\)_-root of_ \(\phi\) _is_ \(F\)_-irreducible. Moreover, for all_ \(x,y>0\)_,_ \(\phi,\psi\in\operatorname{Sent}^{\mathcal{M}}\)_, if_ \(F(x,\phi)=F(y,\psi)\)_, then the_ \(F\)_-root of_ \(\phi\) _and_ \(F\)_-root of_ \(\psi\) _are the same._ Proof.: First we show (1). Suppose \(F\) is accessible and \(F(x,\phi)=F(y,\psi)\). If \(x,y>0\), then \(F(x,\phi)=\Phi(\phi,F(x-1,\phi))\), and \(F(y,\psi)=\Phi(\psi,F(y-1,\psi))\). Since \(F\) is accessible, then \(p\) appears as a leaf of the syntax tree of \(\Phi(p,q)\). Since \(\Phi(\phi,F(x-1,\phi))=\Phi(\psi,F(y-1,\psi))\), we see that \(\phi=\psi\). One shows by induction (in \(\mathcal{M}\), since \(F\) is \(\mathcal{M}\)-definable) that if \(F(x,\phi)=F(y,\phi)\), then \(x=y\). Next we show (2). Suppose \(F\) is additive and \(\theta\) is the \(F\)-root of \(\phi\). Then \(F(x,\theta)=\phi\) and \(x\) is the \(F\)-length of \(\phi\). If \(\theta\) is not \(F\)-irreducible, then there is \(y>0\) and \(\psi\) such that \(F(y,\psi)=\theta\). Then \[\phi=F(x,\theta)=F(x,F(y,\psi))=F(x+y,\psi),\] the last equality holding by additivity. Since \(x+y>x\), this contradicts that \(x\) is the \(F\)-length of \(\phi\). To show the "moreover" part of (2), let \(x,y>0\), \(\phi,\psi\in\operatorname{Sent}^{\mathcal{M}}\), and \(F(x,\phi)=F(y,\psi)\). Define \(G:\operatorname{Sent}^{\mathcal{M}}\to\operatorname{Sent}^{\mathcal{M}}\) by \(G(\phi)=\Phi(\phi,\phi)\), so that \(F(x,\phi)=G^{x}(\phi)\). Notice that \(G\) is one to one. Since \(G\) is one to one, then if \(x=y\), \(G^{x}(\phi)=G^{y}(\psi)\) implies, by induction in \(\mathcal{M}\), that \(\phi=\psi\). Suppose \(x>y\). Then again by induction in \(\mathcal{M}\), we have that \(\mathcal{M}\models G^{x-y}(\phi)=\psi\). Let \(\theta\) be the \(F\)-root of \(\phi\), so that there is \(a\) such that \(G^{a}(\theta)=\phi\). Then \(G^{a+(x-y)}(\theta)=\psi\), so \(\theta\) is the \(F\)-root of \(\psi\). Consider the following examples of \(\Phi(p,q)\): * \(\Phi(p,q)=q\lor p\). In this case, \(F(x,\phi)=\bigvee\limits_{x}\phi\). * \(\Phi(p,q)=q\wedge p\). In this case, \(F(x,\phi)=\bigwedge\limits_{x}\phi\). * \(\Phi(p,q)=(\forall y)q\). Then \(F(x,\phi)=\underbrace{\forall y\ldots\forall y}_{x\text{ times}}\phi\). * \(\Phi(p,q)=q\lor q\). Then \(F(x,\phi)=\bigvee\limits_{x}^{\operatorname{bin}}\phi\). * \(\Phi(p,q)=\neg\neg q\). Then \(F(x,\phi)=(\neg\neg)^{x}\phi\).
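To see the difference between the accessible and the additive case on a concrete instance (again, purely by way of illustration), compare the templates \(q\lor p\) and \(q\lor q\) for a fixed sentence \(\phi\): \[F(2,\phi)=(\phi\lor\phi)\lor\phi\quad\text{for }\Phi(p,q)=q\lor p,\qquad F(2,\phi)=(\phi\lor\phi)\lor(\phi\lor\phi)\quad\text{for }\Phi(p,q)=q\lor q.\] In the first case \(\phi\) occurs as a disjunct at every level, so \(\phi\in\operatorname{Cl}(F(x,\phi))\) for every \(x\); in the second case \((\phi\lor\phi)\lor(\phi\lor\phi)=F(1,F(1,\phi))\), which is the instance \(x=y=1\) of the additivity property from Lemma 25.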
The goal of this section is to characterize those cuts \(I\) such that \[I=\{c:\forall x\leq c\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T(\phi)\equiv T(F(x,\phi))\}.\] This would allow us to characterize \(I(\operatorname{IDC}_{S})\), \(I(\operatorname{IDC}_{S}^{\operatorname{bin}})\), and \(I(\operatorname{QC}_{S})\), among others. For \(\operatorname{IDC}_{S}^{\operatorname{bin}}\) and \(\operatorname{QC}_{S}\) the relevant \(F\) functions are additive, while for \(\operatorname{IDC}_{S}\), \(F\) is accessible. For the most part we will restrict our attention to \(\Phi\) with syntactic depth \(1\). This covers most of the above cases, with the notable exception of \(\neg\neg q\); we treat this case separately. **Theorem 29**.: _Let \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and suppose \(F\) is an additive idempotent sentential operator. If_ \[I=\{c:\forall x\leq c\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T(\phi)\equiv T(F(x,\phi))\},\] _then \(I\) is closed under addition._ Proof.: Let \(a\in I\). We show \(2a\in I\). To see this, let \(x\leq 2a\). If \(x\leq a\), we are done. Otherwise, let \(b=x-a\), so \(x=a+b\) and \(b\leq a\). Notice that for \(\phi\in\operatorname{Sent}^{\mathcal{M}}\), we have \((\mathcal{M},S)\models T(\phi)\equiv T(F(a,\phi))\) and \((\mathcal{M},S)\models T(F(a,\phi))\equiv T(F(b,F(a,\phi)))\). By additivity, \(F(b,F(a,\phi))=F(a+b,\phi)\), and \(x=a+b\), so we have \[(\mathcal{M},S)\models T(\phi)\equiv T(F(x,\phi)).\qed\] Given a cut \(I\subseteq_{\mathrm{end}}M\), we say \(I\) is _\(F\)-closed_ if either \(F\) is accessible or \(F\) is additive and \(I\) is closed under addition. We say _\(I\) has no least \(F\)-gap_ if one of the following holds: * \(F\) is accessible and if \(x>I\), then there is a \(y\) such that for each \(n\in\omega\), \(x-n>y>I\), or * \(F\) is additive and if \(x>I\), there is a \(y\) such that for each \(n\in\omega\), \(\lfloor\frac{x}{n}\rfloor>y>I\). Our next main result shows that if \(I\) is \(F\)-closed and either separable or has no least \(F\)-gap, then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \[I=\{c:\forall x\leq c\forall\phi\in\operatorname{Sent}^{\mathcal{M}}T(\phi)\equiv T(F(x,\phi))\}.\] Our method of proof will be similar to our previous results: we build sequences of finitely generated sets \(F_{0}\subseteq F_{1}\subseteq\ldots\) and full satisfaction classes \(S_{0},S_{1},\ldots\) with particular properties. We first prove two important lemmas which we use in the inductive step of our construction. For the rest of this section, we modify Definition 5 so that we say \(\phi\triangleleft\psi\) if either \(\phi\) is an immediate subformula of \(\psi\) or \(\phi\) is the \(F\)-root of \(\psi\). Similarly modify the definitions of closed sets and finitely generated sets so that such sets are closed under \(F\)-roots. Note that by Lemma 28, if \(F\) is accessible, this changes nothing about finitely generated and/or closed sets, but this does have an effect for additive \(F\). **Definition 30**.: Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\). Let \(I\subseteq_{\mathrm{end}}M\), \(X\subseteq\operatorname{Form}^{\mathcal{M}}\) closed, and \(S\) a full satisfaction class. 1. 
\(S\) is _\(F\)-correct on \(I\)_ for formulae in \(X\) if for each \(\phi\in X\) and \(x\in M\), whenever \(F(x,\phi)\in X\) and \(x\in I\), then \(S(F(x,\phi),\alpha)\) if and only if \(S(\phi,\alpha)\) for all assignments \(\alpha\) of \(\phi\). 2. \(S\) is \(F\)_-trivial above_ \(I\) for formulae in \(X\) if for each \(\phi\in X\) and \(x\in M\), whenever \(F(x,\phi)\in X\) and \(x>I\), then either \(\Phi(p,q)\) is \(q\)-monotone and \(S(F(x,\phi),\alpha)\) for all assignments \(\alpha\), or \(\Phi(p,q)\) is not \(q\)-monotone and \(\neg S(F(x,\phi),\alpha)\) for all assignments \(\alpha\) of \(\phi\). **Lemma 31**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\), and assume \(\Phi(p,q)\) has syntactic depth 1. Let \(I\subseteq_{\mathit{end}}M\) be \(F\)-closed and separable. Suppose \(S\) is a full satisfaction class, \((\mathcal{M},S)\) is recursively saturated, \(X\subseteq\operatorname{Form}^{\mathcal{M}}\) is finitely generated, and \(S\) is \(F\)-correct on \(I\) and \(F\)-trivial above \(I\) for formulae in \(X\). Then for every finitely generated \(X^{\prime}\supseteq X\) there is a full satisfaction class \(S^{\prime}\) such that \((\mathcal{M},S^{\prime})\) is recursively saturated, \(S^{\prime}\upharpoonright X=S\upharpoonright X\), and \(S^{\prime}\) is \(F\)-correct on \(I\) and \(F\)-trivial above \(I\) for formulae in \(X^{\prime}\)._ Proof.: Rather than construct a full satisfaction class directly, it suffices to find an \(X^{\prime}\)-satisfaction class \(S_{1}\) with the above properties. Let \(a\) and \(b\) code enumerations such that \(\{F((a)_{n},(b)_{n}):n\in\omega\}=X^{\prime}\cap\operatorname{Sent}^{\mathcal{M}}\), where for each \(n\), \((b)_{n}\) is the \(F\)-root and \((a)_{n}\) the \(F\)-length of the \(n\)-th sentence. By separability, there is \(d\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \((a)_{n}<d\). We show that the theory \(Th\) consisting of \(\operatorname{ElDiag}(\mathcal{M})\), the compositional axioms for formulae in \(X^{\prime}\), the preservation scheme \(\{\forall\alpha(S_{1}(\phi,\alpha)\equiv S(\phi,\alpha)):\phi\in X\}\), the \(F\)-correctness scheme \(\{S_{1}(F((a)_{n},(b)_{n}),\emptyset)\equiv S_{1}((b)_{n},\emptyset):n\in\omega,\ (a)_{n}<d\}\), and the \(F\)-triviality scheme, which asserts \(S_{1}(F((a)_{n},(b)_{n}),\emptyset)\) for those \(n\in\omega\) with \((a)_{n}>d\) if \(\Phi(p,q)\) is \(q\)-monotone, and \(\neg S_{1}(F((a)_{n},(b)_{n}),\emptyset)\) for such \(n\) otherwise, is consistent. First let us check that in any model of \(Th\) the \(F\)-correctness and \(F\)-triviality conditions hold for all sentences of \(X^{\prime}\), and not merely relative to the chosen roots. Suppose that \(F(x,\phi)=\theta\), where \(\theta=F((a)_{n},(b)_{n})\) and \(\phi=F((a)_{m},(b)_{m})\). By Lemma 28, if \(F\) is accessible, then either \(x=0\) and \(\theta=\phi\), or \(x=(a)_{n}\) and \(\phi=(b)_{n}\); so if \(F\) is accessible, there is nothing to show. Suppose \(F\) is additive. Therefore (by our hypothesis) \(I\) is closed under addition. By Lemma 28, \((b)_{n}=(b)_{m}\) and \((a)_{n}=(a)_{m}+x\). There are two cases to consider, corresponding to the \(F\)-correctness and \(F\)-triviality properties of \(\theta\): Case 1: \(x\in I\) (\(F\)-correctness): Since \(I\) is closed under addition, \((a)_{n}\in I\) if and only if \((a)_{m}\in I\). By separability, therefore, \((a)_{n}<d\) if and only if \((a)_{m}<d\). If \((a)_{n}<d\), then by \(F\)-correctness we have \((\theta,\emptyset)\in S_{1}\) if and only if \(((b)_{n},\emptyset)\in S_{1}\) and \((\phi,\emptyset)\in S_{1}\) if and only if \(((b)_{n},\emptyset)\in S_{1}\). Therefore, \((\theta,\emptyset)\in S_{1}\) if and only if \((\phi,\emptyset)\in S_{1}\). If \((a)_{n}>d\), then by \(F\)-triviality we have either \((\theta,\emptyset)\in S_{1}\) and \((\phi,\emptyset)\in S_{1}\), or \((\theta,\emptyset)\notin S_{1}\) and \((\phi,\emptyset)\notin S_{1}\). Again we have \((\theta,\emptyset)\in S_{1}\) if and only if \((\phi,\emptyset)\in S_{1}\). Case 2: \(x>I\) (\(F\)-triviality): In this case, \((a)_{n}>I\), and therefore \((a)_{n}>d\). By \(F\)-triviality, if \(\Phi\) is \(q\)-monotone, we have \((\theta,\emptyset)\in S_{1}\), and if \(\Phi\) is not \(q\)-monotone, we have \((\theta,\emptyset)\notin S_{1}\). Now we return to showing that \(Th\) is consistent. 
Let \(T_{0}\subseteq Th\) be a finite subtheory. Let \(C\) be the set of formulas such that the instances of their compositionality, preservation, \(F\)-correctness and \(F\)-triviality appear in \(T_{0}\). Then \(C\) is finite, so the modified subformula relation, \(\triangleleft\), is well-founded on \(C\). We define \(S\) inductively on this relation: Suppose \(\phi\) is minimal in \(C\). If \(\alpha\) is an assignment for \(\phi\), we put \((\phi,\alpha)\in S_{1}\) if any of the following hold: 1. \(\phi\in X\) and \((\phi,\alpha)\in S\), 2. \(\phi\) is atomic, \(\alpha\) is an assignment for \(\phi\) and \(\mathcal{M}\models\phi[\alpha]\), or 3. \(\Phi(p,q)\) is \(q\)-monotone, \(\phi=F((a)_{n},(b)_{n})\), \(\alpha=\emptyset\) and \((a)_{n}>d\). Define \(\phi\) of higher rank using compositionality if possible. If it is not possible, meaning that no immediate subformula of \(\phi\) is in \(C\), then there must be \(\psi\in C\) such that \(\psi\) is the \(F\)-root of \(\phi\). Let \(\phi=F((a)_{n},(b)_{n})\), where \((b)_{n}=\psi\). In this case, put \((\phi,\alpha)\in S_{1}\) if either \((\psi,\alpha)\in S_{1}\) or \((a)_{n}>d\) and \(\Phi\) is \(q\)-monotone. We show that \((\mathcal{M},S,S_{1})\models T_{0}\). Clearly, \((\mathcal{M},S,S_{1})\) satisfies the elementary diagram of \(\mathcal{M}\), and by definition, \((\mathcal{M},S,S_{1})\) satisfies all compositional axioms in \(T_{0}\). We show that \((\mathcal{M},S,S_{1})\) satisfies the preservation scheme. Suppose \(\phi\in X\). Then if \(\phi\) is minimal in \(C\) in the subformula relation, then \(S_{1}(\phi,\alpha)\) if and only if \(S(\phi,\alpha)\) by construction. If \(\phi\) is not minimal, then \(S_{1}(\phi,\alpha)\) if and only if \(S(\phi,\alpha)\) follows by compositionality along with \(F\)-correctness and \(F\)-triviality of \(S\) on sentences from \(X\). Next we show \(F\)-triviality. Suppose \(\phi=F((a)_{n},(b)_{n})\in C\) and \((a)_{n}>d\). We assume \(\Phi(p,q)\) is \(q\)-monotone; the other case is similar. If \(\phi\) is minimal in \(C\), then by construction, \((\phi,\emptyset)\in S_{1}\). If \(\phi\) is not minimal, then let \(\psi=F((a)_{n}-1,(b)_{n})\). As \((a)_{n}>I\), \((a)_{n}-1>I\) as well, so \((a)_{n}-1>d\). If \(\psi\in C\), then by induction, we have \((\psi,\emptyset)\in S_{1}\). By compositionality, \((\psi,\emptyset)\in S_{1}\) if and only if \((\phi,\emptyset)\in S_{1}\), so \((\phi,\emptyset)\in S_{1}\). If \(\psi\not\in C\), then it must be the case that \((b)_{n}\in C\), and by construction, \((\phi,\emptyset)\in S_{1}\) since \((a)_{n}>d\). Lastly, we show the \(F\)-correctness scheme. Suppose \(\phi=F((a)_{n},(b)_{n})\in C\), \((a)_{n}<d\), and \(S_{1}(\phi,\emptyset)\equiv S_{1}((b)_{n},\emptyset)\in T_{0}\). If \(\phi\in X\), then \((b)_{n}\in X\), and \((\phi,\emptyset)\in S\) if and only if \(((b)_{n},\emptyset)\in S\). By preservation, the same holds with \(S_{1}\) replacing \(S\) Suppose \(\phi\not\in X\). Let \(\psi=F((a)_{n}-1,(b)_{n})\). If \(\psi\in C\), then as \(\psi\) and \((b)_{n}\) each have lower rank than \(\phi\), we can assume \(((b)_{n},\emptyset)\in S_{1}\) if and only if \((\psi,\emptyset)\in S_{1}\). Then by compositionality, we have \(S(\phi,\alpha)\equiv S(\psi,\alpha)\), so, \[(\phi,\emptyset)\in S_{1}\iff(\psi,\emptyset)\in S_{1}\iff((b)_{n},\emptyset )\in S_{1}.\] If \(\psi\not\in C\), then by our construction, \((\phi,\emptyset)\in S_{1}\) if and only if either \(((b)_{n},\emptyset)\in S_{1}\) or \((a)_{n}>d\) (and \(\Phi\) is \(q\)-monotone). 
Since \((a)_{n}<d\), then \((\phi,\emptyset)\in S_{1}\) if and only if \(((b)_{n},\emptyset)\in S_{1}\). Since \(Th\) is consistent, there is a model \((\mathcal{M}^{\prime},S^{\prime},S^{\prime}_{1})\models Th\). By resplendency of \((\mathcal{M},S)\), \((\mathcal{M},S)\) has an expansion \((\mathcal{M},S,S_{1})\models Th\). This \(S_{1}\) is the required \(X^{\prime}\)-satisfaction class. We shall now prove an analogous lemma with a different assumption about \(I\): instead of separability we shall require that there is no least \(F\)-gap above \(I\). In the proof we shall need one more notion, which we shall now define: **Definition 32**.: Let \(\mathcal{M}\models\mathsf{PA}\), and let \(F\) be an idempotent sentential operator. Assume that \(F\) is additive. For \(Z\subseteq\operatorname{Form}^{\mathcal{M}}\) and \(d\in M\), let \(Z_{d}\) be the set of those formulae of the form \(F(c,\phi)\), for which there are \(n\in\mathbb{N}\), \(a\in M\), such that * \(0<a-c<n\cdot d\), * \(F(a,\phi)\in Z\), * \(\phi\) is the \(F\)-root of \(F(a,\phi)\). For uniformity of our proofs, when \(F\) is accessible, we take \(Z_{d}\) to be just the closure of \(Z\) (under immediate subformulae and taking \(F\)-roots). **Proposition 33**.: _Let \(\mathcal{M}\models\mathsf{PA}\), \(F\) an idempotent sentential operator, and \(Z\subseteq\operatorname{Form}^{\mathcal{M}}\). Then, for every \(d\in M\), \((Z_{d})_{d}\subseteq Z_{d}\)._ Proof.: This is clear if \(F\) is accessible, so assume \(F\) is additive. Fix arbitrary \(c,\phi\) such that \(F(c,\phi)\in(Z_{d})_{d}\). Choose \(a,n\) such that \(F(a,\phi)\in Z_{d}\) and \(0<a-c<n\cdot d\). By definition it follows that for some \(c^{\prime}\), \(n^{\prime}\), \(\phi^{\prime}\) and \(a^{\prime}\), \(F(a,\phi)=F(c^{\prime},\phi^{\prime})\), \(F(a^{\prime},\phi^{\prime})\in Z\) and \(0<a^{\prime}-c^{\prime}<n^{\prime}\cdot d\). Since \(F\) is additive this means that \(\phi=\phi^{\prime}\) (since both of them are roots) and \(a=c^{\prime}\), hence \[0<a^{\prime}-c=a^{\prime}-a+a-c=a^{\prime}-c^{\prime}+a-c<(n+n^{\prime})\cdot d.\] **Lemma 34**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated. Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\), and assume \(\Phi(p,q)\) has syntactic depth 1. Let \(I\!\subseteq_{\mathit{end}}M\) be \(F\)-closed and have no least \(F\)-gap. Suppose \(S\) is a full satisfaction class, \((\mathcal{M},S)\) is recursively saturated, \(d>I\) and \(S\) is \(F\)-correct on \([0,d).\) Suppose further that \(X\subseteq\operatorname{Form}^{\mathcal{M}}\) is finitely generated. Then for any formula \(\tilde{\phi}\in\operatorname{Form}^{\mathcal{M}}\), there are \(I<d_{0}<d_{1}<d\), a finitely generated \(X^{\prime}\supseteq X\) and a full satisfaction class \(S^{\prime}\) such that \(\tilde{\phi}\in X^{\prime}\), \((\mathcal{M},S^{\prime})\) is recursively saturated, \(S^{\prime}\upharpoonright X=S\upharpoonright X\), \(S^{\prime}\) is \(F\)-correct on \([0,d_{0})\) and, for some \(\theta\in X^{\prime}\), \(F(d_{1},\theta)\in X^{\prime}\) and \((\mathcal{M},S^{\prime})\models\neg(S^{\prime}(\theta,\emptyset)\equiv S^{\prime}(F(d_{1},\theta),\emptyset))\)._ Proof.: Fix \(\mathcal{M},I,S,X,d\) and \(\tilde{\phi}\) as in the assumptions. Let \(\odot\) denote \(+\) if \(F\) is accessible and \(\cdot\) if \(F\) is additive. Let \(d_{1}\), \(d_{0}\) be any numbers above \(I\) such that for every \(n,k\in\omega\), \(d_{0}\odot n<d_{1}\) and \(d_{1}\odot k<d\). 
Suppose that every formula in \(X\cup\{\tilde{\phi}\}\) has complexity smaller than \(r\in M\). Let \(\theta:=(\neg)^{2r}(0=0)\) if \(F\) is not \(q\)-monotone and \(\theta:=\neg(\neg)^{2r}(0=0)\) in the other case. We note that \(\theta\) is the \(F\)-root of \(F(d_{1},\theta)\) and \(\operatorname{Cl}(F(d_{1},\theta))\) is disjoint from \(X\). We put \(Y:=\operatorname{Cl}(X\cup\{F(d_{1},\theta)\})\). Observe that if \(\phi\in Y\) is an \(F\)-root, then either \(\phi\in X\) or \(\phi=\theta\). Hence \(Y\) is closed under \(F\)-roots. We shall start our construction by extending \(S\) to a \(Y\cup Y_{d_{0}}\)-satisfaction class on \(\mathcal{M}\) which is \(F\)-correct on \([0,d_{0})\). Proposition 33 implies that \((Y_{d_{0}})_{d_{0}}\subseteq Y_{d_{0}}\). Since obviously, for any \(Z,Z^{\prime}\), \((Z\cup Z^{\prime})_{d_{0}}=Z_{d_{0}}\cup Z^{\prime}_{d_{0}}\), it follows that \((Y\cup Y_{d_{0}})_{d_{0}}=Y_{d_{0}}\cup(Y_{d_{0}})_{d_{0}}\subseteq Y\cup Y_{d_{0}}.\) Additionally \(Y\cup Y_{d_{0}}\) is closed under roots and under immediate subformulae. We argue that \(X_{d_{0}}\cap\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}=\emptyset.\) To this end observe that if \(\psi\in\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}\), then either \(\psi\) is in \(\operatorname{Cl}(\theta)\), and hence the complexity of \(\psi\) is greater than \(2r-n\) for some standard \(n\), or \(\psi\) contains \(\theta\) as a subformula. In both cases the complexity of \(\psi\) is at least \(2r-n\) for some standard \(n\). Consequently, if \(\psi\in\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}\), then \(\psi\) does not belong to \(X_{d_{0}}\), because each formula in \(X_{d_{0}}\) is a subformula of a formula in \(X\), and hence its complexity is not greater than \(r\). Moreover, if \(\phi\), \(F(b,\phi)\) are both in \(Y\cup Y_{d_{0}}\) and \(b<d_{0}\), then \(\phi\in X_{d_{0}}\iff F(b,\phi)\in X_{d_{0}}\). Indeed, from right to left this follows since \((X_{d_{0}})_{d_{0}}\subseteq X_{d_{0}}\). From left to right this is so, since if \(F(b,\phi)\notin X_{d_{0}}\), then either \(F(b,\phi)\in\operatorname{Cl}(\theta)_{d_{0}}\) or \(F(b,\phi)=F(b^{\prime},\theta)\). The first case is impossible since each formula in \(\operatorname{Cl}(\theta)_{d_{0}}\) starts with a negation, and negation does not occur in \(\Phi\). In the latter case it follows that \(\theta\) is a subformula of \(\phi\) (because \(\theta\) is \(F\)-irreducible) and hence \(\phi\notin X_{d_{0}}\). Let us put \(Y^{\prime}=Y\cup Y_{d_{0}}\). We extend \(S\mathord{\restriction}_{X}\) to a \(Y^{\prime}\)-satisfaction class \(S_{0}\), which is compositional and \(F\)-correct on \([0,d_{0})\) for formulae in \(Y^{\prime}\). For every \(\phi\in Y^{\prime}\) and every \(\alpha\): * if \(\phi\in X\), then \(S_{0}(\phi,\alpha)\) iff \(S(\phi,\alpha)\); * if \(\phi\in\operatorname{Cl}(\{\theta\})\), then (\(\phi\) is a sentence and) \(S_{0}(\phi,\alpha)\) iff \(\alpha=\emptyset\) and \(\phi\) is of the form \((\neg)^{2b}(0=0)\). * if \(\phi=F(d_{1},\theta)\), then (\(\phi\) is a sentence and) \(S_{0}(\phi,\alpha)\) iff \(\alpha=\emptyset\) and \(F\) is \(q\)-monotone. * if \(\phi\) is in the closure of \(F(d_{1},\theta)\), then, since \(\Phi(p,q)\) has syntactic depth 1, \(\phi\) is either in \(\operatorname{Cl}(\{\theta\})\) or \(\phi=F(d_{1}-n,\theta)\) for some \(n\in\omega\). We have already taken care of the former case. In the latter case we let the value of \(\phi\) on \(\alpha\) be the same as that of \(F(d_{1},\theta)\) on \(\alpha\). 
* otherwise \(\phi=F(a-b,\psi)\) for some \(k\in\mathbb{N}\), \(a\in M\), \(b<k\cdot d_{0}\) and \(\psi\), \(F(a,\psi)\) such that \(F(a,\psi)\in Y\), \(\psi\) is the \(F\)-root of \(F(a,\psi)\). This can happen only if \(F\) is additive. Since \(Y\) is closed under roots, \(\psi\in Y\), hence for each \(\alpha\) the value of \(\psi\) on \(\alpha\) has already been defined. We stipulate that the value of \(F(a-b,\psi)\) on \(\alpha\) is the same as that of \(F(a,\psi)\) on \(\alpha\). We observe that this is independent of the choice of \(F(a,\psi)\in Y\): if \(F(a,\psi)\) and \(F(a^{\prime},\psi^{\prime})\) both satisfy the above conditions, then either both \(F(a,\psi),F(a^{\prime},\psi^{\prime})\) belong to \(X\) or both of them belong to \(\operatorname{Cl}(F(d_{1},\theta))\). If the former holds our claim follows because \(S\) is \(F\)-correct on \([0,d)\). If the latter holds, it must be the case that \(\psi=\psi^{\prime}=\theta\) and \(|a-a^{\prime}|\) is standard, so our claim follows by construction. We check that \(S_{0}\) is \(F\)-correct on \([0,d_{0})\) for sentences in \(Y^{\prime}\). If \(F\) is accessible, this easily follows from our construction. Assume that \(F\) is additive. Assume \(0<b<d_{0}\) and fix an arbitrary \(\phi\). By previous considerations either both \(\phi,F(b,\phi)\) belong to \(X_{d_{0}}\) or they both belong to \(\operatorname{Cl}(F(d_{1},\theta))_{d_{0}}.\) In the latter case both \(\phi\) and \(F(b,\phi)\) are of the form \(F(d_{1}-b^{\prime},\theta)\) for \(b^{\prime}<n\cdot d_{0}\). In particular, for an arbitrary \(\alpha\), \(\phi\) and \(F(b,\phi)\) get the same value on \(\alpha\) (by construction). Suppose then that \(\phi,F(b,\phi)\in X_{d_{0}}\) and fix \(a_{0},a_{1},b_{0},b_{1},n_{0},n_{1},\psi_{0},\psi_{1}\) such that \(\phi=F(a_{0}-b_{0},\psi_{0})\), \(F(b,\phi)=F(a_{1}-b_{1},\psi_{1})\) and \(F(a_{i},\psi_{i})\in X\), \(b_{i}<n_{i}\cdot d_{0}\) and \(\psi_{i}\) is the \(F\)-root of \(F(a_{i},\psi_{i})\). It follows that \(\phi\) and \(F(b,\phi)\) have the same root, so \(\psi_{0}=\psi_{1}\). In particular \(F(b,\phi)=F(a_{1}-b_{1},\psi_{0})=F(a_{0}-b_{0}+b,\psi_{0})\). Hence \(a_{1}-b_{1}=a_{0}-b_{0}+b\), so \(|a_{1}-a_{0}|=|b_{1}+b-b_{0}|<(n_{0}+n_{1}+1)\cdot d_{0}<d\). In particular, since \(S\) is \(F\)-correct on \([0,d)\), \(F(a_{0},\psi_{0})\) and \(F(a_{1},\psi_{0})\) are assigned by \(S\) the same values on each \(\alpha\), and hence, by construction, so are \(\phi\) and \(F(b,\phi)\) by \(S_{0}\). Now we show how to find \(S^{\prime}\) and \(X^{\prime}\) as in the statement of the lemma. We let \(X^{\prime}=X\cup\operatorname{Cl}(\{\tilde{\phi},\theta,F(d_{1},\theta)\})\). For \(S^{\prime}\), by an easy resplendency argument, it is enough to build an extension \(\mathcal{N}\succeq\mathcal{M}\) and a satisfaction class \(S_{N}\) such that 1. \(S_{N}\) is an \(\mathcal{N}\)-satisfaction class which is \(F\)-correct on \([0,d_{0})\). 2. \(S_{N}\) makes \(F(d_{1},\theta)\equiv\theta\) false. 3. \(S_{N}\) agrees with \(S\) on \(X\). We observe that, since \(X\) is finitely generated, condition 3 is expressible in the language of arithmetic augmented with \(S_{N}\) and \(S\). In the construction we shall heavily rely on the extension of \(S\) to \(Y^{\prime}\) given by \(S_{0}\). We build \(\mathcal{N}\) and \(S_{N}\) in stages following the idea of [3]. Let \(\mathcal{M}_{0}=\mathcal{M}\), and we construct a chain of pairs \((\mathcal{M}_{0},S_{0}),(\mathcal{M}_{1},S_{1}),\ldots\) which satisfy the following conditions: * for each \(n\), \(\mathcal{M}_{n}\preceq\mathcal{M}_{n+1}\). 
* for each \(n\), \(S_{n+1}\) is a \(\operatorname{Form}^{\mathcal{M}_{n}}\)-satisfaction class. * \(S_{1}\) agrees with \(S_{0}\) on \(Y^{\prime}\) and for each \(n\geq 1\), \(S_{n+1}\) agrees with \(S_{n}\) on \(\operatorname{Form}^{\mathcal{M}_{n-1}}\). * for each \(n\), \(S_{n+1}\) is \(F\)-correct on \([0,d_{0})\) with respect to formulae from \(\operatorname{Form}^{\mathcal{M}_{n}}\). We show how to construct \(\mathcal{M}_{1},S_{1}\); the construction of \(\mathcal{M}_{n+1},S_{n+1}\) given \(\mathcal{M}_{n},S_{n}\) for \(n\geq 1\) is fully analogous. We consider the theory given as the union of the following sets of sentences: 1. \(\operatorname{ElDiag}(\mathcal{M}_{0})\); 2. \(\{S(\phi,\alpha):\phi\in Y^{\prime},(\phi,\alpha)\in S_{0}\}\). 3. \(\{\operatorname{Comp}(\phi,\psi,\theta):\phi\in\operatorname{Form}^{\mathcal{M}_{0}}\}\). 4. \(\{\forall\alpha S(F(a,\phi)\equiv\phi,\alpha):a<d_{0},\phi\in\operatorname{Form}^{\mathcal{M}_{0}}\}\). Fix a finite portion \(T_{0}\) of this theory and let \(E\) be the set of those formulae which occur in one of the axioms in \(T_{0}\). Let us observe that the relation \(\phi\sqsubset\psi:=\mathcal{M}_{0}\models"\phi\) is a subformula of \(\psi"\) is well-founded on \(E\), since \(E\) is finite. By this we mean that \(\phi\sqsubset\psi\) if \(\mathcal{M}_{0}\) sees that \(\phi\) is a subformula (not necessarily direct) of \(\psi\). We define \(S\subseteq M_{0}^{2}\) by induction on the ranking function \(\operatorname{rk}(\cdot)\) given by \(\sqsubset\). For an arbitrary \(\psi\) of rank \(0\) we put * if \(\psi\) is standard, then we know what to do. * if \(\psi\in Y^{\prime}\), then \((\psi,\alpha)\in S\) iff \((\psi,\alpha)\in S_{0}\). * if \(\psi\notin Y^{\prime}\), then for no \(\alpha\), \((\psi,\alpha)\in S\). If \(\phi\) has positive rank, then * if all immediate subformulae are in \(E\), then the immediate subformulae of \(\phi\) have lower ranks, so we know what to do. * if the above does not hold and \(\phi=F(a,\psi)\) for some \(\psi\in E\) and \(0<a<d_{0}\), then \(\psi\) has lower rank, so for an arbitrary \(\alpha\) we put \((\phi,\alpha)\in S\) iff \((\psi,\alpha)\in S\). * if \(\phi\in Y^{\prime}\), then \((\phi,\alpha)\in S\) iff \((\phi,\alpha)\in S_{0}\). * otherwise, for every \(\alpha,(\phi,\alpha)\notin S\). We check that, with \(S\) so defined, \((\mathcal{M},S)\models T_{0}\). That the compositional clauses hold is clear from the construction. We check that \(S\) is \(F\)-correct on \([0,d_{0})\) for sentences in \(E\). By induction on \(n\) we prove that if \(\phi,F(a,\phi)\in E\), \(\operatorname{rk}(\phi)+\operatorname{rk}(F(a,\phi))=n\) and \(a<d_{0}\), then for every \(\alpha\), \(S(\phi,\alpha)\iff S(F(a,\phi),\alpha)\). Since \(\operatorname{rk}(\phi)+\operatorname{rk}(F(a,\phi))=0\) only if \(a=0\), the base step is trivial. Assume \(\operatorname{rk}(\phi)+\operatorname{rk}(F(a,\phi))\) is positive. Then certainly \(\operatorname{rk}(F(a,\phi))\) is positive. If all immediate subformulae of \(F(a,\phi)\) belong to \(E\), then at least one of them is of the form \(F(a-1,\phi)\) and the thesis follows by inductive hypothesis and idempotency of \(\Phi\), since \(F(a-1,\phi)\) has lower rank than \(F(a,\phi)\). Otherwise, there are \(\psi\in E\) and \(b<d_{0}\) such that \(F(a,\phi)=F(b,\psi)\) and we decided that for every \(\alpha\), the values of \(F(b,\psi)\) and \(\psi\) are the same. By Lemma 28, for some \(b^{\prime}\), either \(\phi=F(b^{\prime},\psi)\) or \(\psi=F(b^{\prime},\phi)\). Hence the thesis follows by the inductive assumption. 
Now we argue for the preservation axioms. By induction on the rank of \(\phi\) we prove that if \(\phi\in Y^{\prime}\), then for every \(\alpha\), \(S(\phi,\alpha)\) iff \(S_{0}(\phi,\alpha)\). This is immediate for formulae of rank \(0\). In the induction step we use the definition of \(S\) and the closure properties of \(Y^{\prime}\). For the induction step of the main construction we first extend \(S_{n}\restriction_{M_{n}}\) to the set \(\operatorname{Form}^{\mathcal{M}}\cup(\operatorname{Form}^{\mathcal{M}})_{d_{0}}\subseteq\operatorname{Form}^{\mathcal{M}_{n+1}}\). Then we argue as in the first step. **Theorem 35**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated and \(I\subseteq_{\mathit{end}}M\). Let \(F\) be an idempotent sentential operator with template \(\Phi(p,q)\), and assume \(\Phi(p,q)\) has syntactic depth 1. Suppose \(I\) is \(F\)-closed. Then if \(I\) is separable or has no least \(F\)-gap above it, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and_ \[I=\{x:\forall y\leq x\forall\phi\in\operatorname{Sent}^{\mathcal{M}}(T(\phi)\equiv T(F(y,\phi)))\}.\] Proof.: We construct sequences \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) of sets such that: 1. \(F_{i}\) is finitely generated and \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\), 2. \(S_{i}\) is a full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated, 3. \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\), 4. \(S_{i}\) is \(F\)-correct on \(I\) for sentences from \(F_{i}\), and 5. for each \(x>I\), there is \(I<y<x\), \(i\in\omega\) and \(\phi\in F_{i}\) such that \(F(y,\phi)\in F_{i}\) and \(\neg(S_{i}(F(y,\phi),\alpha)\equiv S_{i}(\phi,\alpha))\) for all assignments \(\alpha\). If \(I\) is separable, we also ensure that \(S_{i}\) is \(F\)-trivial above \(I\) for sentences in \(F_{i}\). Prior to starting the construction, if \(I\) has no least \(F\)-gap above it, we externally fix a sequence \(d_{0}>d_{1}>\ldots\) such that \(\inf\{d_{i}:i\in\omega\}=I\) and for each \(i\), \(d_{i}\) and \(d_{i+1}\) are in different \(F\)-gaps. Additionally, we externally fix an enumeration of \(\operatorname{Form}^{\mathcal{M}}\) (in order type \(\omega\)). Suppose we have constructed \(F_{i}\) and \(S_{i}\). Let \(\phi\) be the least formula in our enumeration that is not in \(F_{i}\). If \(I\) is separable, let \(F_{i+1}\) be generated by \(F_{i}\) and \(\phi\), and apply Lemma 31 to obtain \(S_{i+1}\). Otherwise, we suppose \(S_{i}\) is \(F\)-correct on \([0,d_{i})\) and apply Lemma 34 to obtain \(F_{i+1}\), \(S_{i+1}\), and \(I<c_{0}<c_{1}<d_{i}\) such that \(S_{i+1}\) is \(F\)-correct on \([0,c_{0})\) but not on \([0,c_{1})\). (In fact, there is \(\theta\in F_{i+1}\) that witnesses the failure of \(F\)-correctness on \([0,c_{1})\).) Without loss of generality, we can replace \(d_{i+1}\) with the minimum of \(\{c_{0},d_{i+1}\}\), so that we can assume \(S_{i+1}\) is \(F\)-correct on \([0,d_{i+1})\) and continue. Having constructed these sequences, let \(S=\cup S_{i}\upharpoonright F_{i}\). Then it follows that \(S\) is \(F\)-correct on \(I\) and for each \(x>I\), there is \(\phi\) such that \(\neg(T(\phi)\equiv T(F(x,\phi)))\). **Remark 36**.: It is easy to see that in fact a tiny modification of our proof of Theorem 35 shows something more: we can perform our construction in such a way that \(S\) is \(F\)-correct on \(I\) not only on all sentences but on _all formulae_. 
Hence, given \(\mathcal{M},I\) and \(F\) as in the assumptions of Theorem 35 we can find a satisfaction class \(S\) such that \[I =\{x:\forall y\leq x\forall\phi\in\operatorname{Sent}^{\mathcal{M}}(T(\phi)\equiv T(F(y,\phi)))\}\] \[=\{x:\forall y\leq x\forall\phi\in\operatorname{Form}^{\mathcal{M}}\forall\alpha\big{(}S(\phi,\alpha)\equiv S(F(y,\phi),\alpha)\big{)}\}.\] We assume that \(\Phi\) has depth \(1\) in the previous results because the more general case is quite complicated. In particular, if \(\Phi\) has depth at least \(2\), then it might not be possible to ensure that \(S\) is \(F\)-trivial above \(I\) as we do in Lemma 31. For example, suppose \(\Phi(p,q)=(\neg\neg)q\), \(\phi=(0=0)\) and \(\psi=\neg(0=0)\). Then, for any \(x\) and any satisfaction class \(S\), \(T((\neg\neg)^{x}\phi)\equiv\neg T((\neg\neg)^{x}\psi)\). However, we show in our next result that we can still ensure that, if \(I\) is separable and closed under addition, there is \(S\) such that \(I\) is the \((\neg\neg)\)-correct cut. **Proposition 37**.: _Let \(\mathcal{M}\models\mathsf{PA}\) be countable and recursively saturated and \(I\subseteq_{\text{end}}M\) separable and closed under addition. Then there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and_ \[I=\{x:\forall y\leq x\forall\phi\in\operatorname{Sent}^{\mathcal{M}}(T(\phi)\equiv T((\neg\neg)^{y}(\phi)))\}.\] Proof.: Modify the definition of \(\triangleleft\) so that \(\phi\triangleleft\psi\) if either \(\phi\) is an immediate subformula of \(\psi\) or \(\phi\) does not start with a double negation and there is \(x\) such that \((\neg\neg)^{x}\phi=\psi\). (That is, \(\phi\) is the \(F\)-root of \(\psi\) where \(F(x,\theta)=(\neg\neg)^{x}\theta\).) By similar techniques to the proof of Theorem 35, it suffices to show the following: given any finitely generated \(X\) and full satisfaction class \(S\) such that * \((\mathcal{M},S)\) is recursively saturated, * if \(x\in I\), \(\phi\in X\), and \((\neg\neg)^{x}\phi\in X\), then for each assignment \(\alpha\) of \(\phi\), \((\phi,\alpha)\in S\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S\), and, * if \(x>I\), \((\neg\neg)^{x}\phi\in X\), and \(\phi\in X\), then for each assignment \(\alpha\) of \(\phi\), \((\neg\phi,\alpha)\in S\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S\), then for any finitely generated \(X^{\prime}\supseteq X\), there is a full satisfaction class \(S^{\prime}\) such that * \((\mathcal{M},S^{\prime})\) is recursively saturated, * \(S^{\prime}\upharpoonright X=S\upharpoonright X\), * if \(x\in I\), \(\phi\in X^{\prime}\), and \((\neg\neg)^{x}\phi\in X^{\prime}\), then for each assignment \(\alpha\) of \(\phi\), \((\phi,\alpha)\in S^{\prime}\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S^{\prime}\), and, * if \(x>I\), \((\neg\neg)^{x}\phi\in X^{\prime}\), and \(\phi\in X^{\prime}\), then for each assignment \(\alpha\) of \(\phi\), \((\neg\phi,\alpha)\in S^{\prime}\) iff \(((\neg\neg)^{x}\phi,\alpha)\in S^{\prime}\). Moreover, rather than find a full satisfaction class satisfying the above, we simply need to find an \(X^{\prime}\)-satisfaction class \(S^{\prime}\) satisfying the above. To do so, let \(a,b\), and \(c\) code enumerations such that \(\{(c)_{n}:n\in\omega\}=X^{\prime}\cap\operatorname{Sent}^{\mathcal{M}}\), \((b)_{n}\) is the root of \((c)_{n}\), and \((c)_{n}=(\neg\neg)^{(a)_{n}}((b)_{n})\). By separability, there is \(d\) such that for each \(n\in\omega\), \((a)_{n}\in I\) if and only if \((a)_{n}<d\). 
We show that the theory \(Th\) consisting of the following is consistent: * \(\operatorname{ElDiag}(\mathcal{M})\), * \(\{\operatorname{Comp}(\phi,\psi,\theta):\phi,\psi,\theta\in X^{\prime}\}\), * \(\{S^{\prime}(\phi,\alpha)\equiv S(\phi,\alpha):\phi\in X\}\) (preservation), * \(\{S^{\prime}((\neg\neg)^{(a)_{n}}((b)_{n}),\alpha)\equiv S^{\prime}((b)_{n}, \alpha):n\in\omega,(a)_{n}<d\}\) (\(F\)-correctness), and * \(\{S^{\prime}((\neg\neg)^{(a)_{n}}((b)_{n}),\alpha)\equiv S^{\prime}(\neg(b)_{n},\alpha):n\in\omega,(a)_{n}>d\}\) (\(F\)-incorrectness). Again, one can show that if \((\mathcal{M},S,S^{\prime})\models Th\), then \(S^{\prime}\) is an \(X^{\prime}\)-satisfaction class satisfying the required properties. To show that \(Th\) is consistent, let \(T_{0}\subseteq Th\) be finite, and let \(C\) be the set of formulas whose instances of compositionality, preservation, double negation correctness and double negation incorrectness are in \(T_{0}\). Since \(C\) is finite, then the modified subformula relation \(\triangleleft\) is well-founded on \(C\), and we define \(S^{\prime}\) inductively on this relation. Suppose \(\phi\) is minimal in \(C\). If \(\alpha\) is an assignment for \(\phi\), we put \((\phi,\alpha)\in S^{\prime}\) if either \(\phi\) is atomic and \(\mathcal{M}\models\phi[\alpha]\), or \(\phi\in X\) and \((\phi,\alpha)\in S\). We define \(\phi\) of higher rank using compositionality if possible. If this is not possible, then it must be the case that there is \(n\in\omega\) such that \(\phi=(\neg\neg)^{(a)_{n}}((b)_{n})\) and \((b)_{n}\in C\) has lower rank than \(\phi\). We put \((\phi,\alpha)\in S^{\prime}\) if either \((a)_{n}<d\) and at an earlier stage we decided \(((b)_{n},\alpha)\in S^{\prime}\), or if \((a)_{n}>d\) and, at an earlier stage we decided \(((b)_{n},\alpha)\not\in S^{\prime}\). We verify that \((\mathcal{M},S,S^{\prime})\models T_{0}\). Clearly it satisfies the diagram and compositionality axioms by construction. Suppose \(\phi\in X\) is such that \(\forall\alpha(S^{\prime}(\phi,\alpha)\equiv S(\phi,\alpha))\in T_{0}\). If \(\phi\) is of minimal rank, then this is true by construction. If not, we can assume, by induction, that whenever \(\psi\triangleleft\phi\) is such that \(\psi\in C\), then \(\forall\alpha(S^{\prime}(\psi,\alpha)\equiv S(\psi,\alpha))\). If \(\phi\) is determined via compositionality, then the result for \(\phi\) follows from the fact that both \(S\) and \(S^{\prime}\) are compositional for formulas in \(X\). Otherwise, the result for \(\phi\) follows from either double negation correctness up to \(I\), or double negation incorrectness above \(I\). Now let \(\theta=(\neg\neg)^{(a)_{n}}((b)_{n})\), and suppose \(\forall\alpha S^{\prime}(\theta,\alpha)\equiv S^{\prime}((b)_{n},\alpha)\in T_{0}\), where \((a)_{n}<d\). By construction, \(\theta\) is not minimal in \(C\). The immediate subformula of \(\theta\) is \(\psi=\neg(\neg\neg)^{(a)_{n}-1}((b)_{n})\). If \(\psi\in C\), then by construction we have that \(S^{\prime}(\theta,\alpha)\equiv\neg S^{\prime}(\psi,\alpha)\). By induction, we can assume we have \(S^{\prime}(\psi,\alpha)\equiv\neg S^{\prime}((b)_{n},\alpha)\). If \(\psi\not\in C\), then by construction we put \(S^{\prime}(\theta,\alpha)\equiv S^{\prime}((b)_{n},\alpha)\). A similar argument shows double negation incorrectness in the case that \((a)_{n}>d\). 
By Theorem 35, if \(I\) is either separable or has no least \(\mathbb{Z}\)-gap above it, there is \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I(\mathrm{IDC}_{S})=I\). In fact, if \(\omega\) is a strong cut, then by Proposition 18 every cut \(I\) is either separable or has no least \(\mathbb{Z}\)-gap, and therefore every cut \(I\) can be \(I(\mathrm{IDC}_{S})\) for some satisfaction class \(S\). Similarly, if \(\omega\) is strong, then every additively closed cut \(I\) is either separable or has no least additive gap above it, and therefore each additively closed cut can be \(I(\mathrm{IDC}_{S}^{\mathrm{bin}})\). To complete the picture, we can show that if \(F\) is an idempotent sentential operator and \(I\) is the \(F\)-correct cut, then either \(I\) has no least \(F\)-gap above it or is separable. Therefore, if \(\mathcal{M}\) is not arithmetically saturated, then there are cuts \(I\) which cannot be realized as \(I(\mathrm{IDC}_{S})\) for any \(S\). **Proposition 38**.: _Let \(F\) be an accessible idempotent sentential operator. Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{\text{end}}\mathcal{M}\) is such that_ \[I=\{x:\forall y\leq x\forall\phi\big{(}T(F(y,\phi))\equiv T(\phi)\big{)}\}.\] _Then either there is no least \(\mathbb{Z}\)-gap above \(I\) or \(I\) is separable._ Proof.: Assume that there is a least \(\mathbb{Z}\)-gap above \(I\) and fix \(a\) coding a sequence such that \((a)_{n+1}=(a)_{n}-1\) and \(\inf_{n\in\omega}\{(a)_{n}\}=I\). Since \((a)_{0}\notin I\) there is \(\phi\) such that \((\mathcal{M},S)\models\neg T(F((a)_{0},\phi)\equiv\phi)\). By the properties of \(F\) it follows that for every \(n\in\omega\), \((\mathcal{M},S)\models\neg T(F((a)_{n},\phi)\equiv\phi)\). Let \(D=\{F(a,\phi)\equiv\phi:a<(a)_{0}\}\) and let \(A=\{F(a,\phi)\equiv\phi:a\in I\}\). It follows that for every \(c<(a)_{0}\), \((\mathcal{M},S)\models T(F(c,\phi)\equiv\phi)\) iff \(F(c,\phi)\equiv\phi\in A.\) So by Theorem 13, \(A\) is separable from \(D\); therefore \(I\) is separable. This completes the picture for accessible \(F\). In particular, we have a complete picture for which cuts can be \(I(\mathrm{IDC}_{S})\). If \(\omega\) is strong, then every cut can be \(I(\mathrm{IDC}_{S})\) for some \(S\), and if \(\omega\) is not strong, then only those cuts which have no least \(\mathbb{Z}\)-gap above them can be \(I(\mathrm{IDC}_{S})\). What about the \(F\)-correct cuts for additive \(F\), like \(I(\mathrm{QC}_{S})\)? **Lemma 39**.: _Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(t\in M\) is a full binary tree of height \(c\) labelled with sentences, such that \(t\upharpoonright_{T}:=\{s\in\{0,1\}^{<\omega}\ \mid\ T(t(s))\}\) has arbitrarily long branches. Then \(t\upharpoonright_{T}\) has an infinite coded branch._ Proof.: Consider the following sequence of formulae \[\phi_{n}(x):=\bigwedge_{s:\mathrm{len}(s)\leq n}\bigl{(}t(s)\equiv s\in x\bigr{)}.\] The above conjunction is of the form \(\phi_{s_{0}}\wedge(\phi_{s_{1}}\wedge(\phi_{s_{2}}\wedge\ldots))\), where \(\{s_{i}\}_{i<2^{n+1}-1}\) is an enumeration of all binary sequences of length \(\leq n\) according to the length-first lexicographic ordering. By Smith's result [9, Theorem 2.19] there is \(a\in M\) such that for all \(n\in\omega\), \(T(\phi_{n}(a))\) holds. Hence \(\{s\in\{0,1\}^{<\omega}\ \mid\ s\in a\}\) is an infinite finitely branching tree, so it has a coded infinite branch, \(b\). Since \(b\subseteq a\), for every \(i\in\omega\) we have \((\mathcal{M},S)\models T(b(i))\). 
**Proposition 40**.: _Let \(F\) be an additive idempotent sentential operator. Suppose \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \(I\subseteq_{end}\mathcal{M}\) is such that_ \[I=\{x:\forall y\leq x\forall\phi\bigl{(}T(F(y,\phi))\equiv T(\phi)\bigr{)}\}.\] _Then either there is no least \(+\)-closed gap above \(I\) or \(I\) is separable._ Proof.: Suppose there is a least \(+\)-closed gap above \(I\) and let \(a\) code a sequence such that \((a)_{n+1}=\lfloor\frac{(a)_{n}}{2}\rfloor\) and \(\inf_{n\in\omega}(a)_{n}=I.\) Let \(c\) be the length of \(a.\) Observe that \(\sup(I\cap im(a))=I\), so by Proposition 10 it is sufficient to show that \(I\cap im(a)\) is separable. Fix \(\phi\) such that \((\mathcal{M},S)\models\neg T(F((a)_{0},\phi)\equiv\phi)\). Then for every \(n\) it holds that \[(\mathcal{M},S)\models\neg T(F((a)_{n+1},\phi)\equiv\phi)\vee\neg T\bigl{(}F ((a)_{n+1},F((a)_{n+1},\phi))\equiv F((a)_{n+1},\phi)\bigr{)}.\] Define the labelling \(t\) of a full binary tree of height \(c\) by recursion as follows: \[t_{\varepsilon}= \neg(F((a)_{0},\phi)\equiv\phi)\] \[t_{s^{-}0}= \neg\bigl{(}F((a)_{n+1},t_{s}^{*})\equiv t_{s}^{*}\bigr{)} \text{if }\operatorname{len}(s)=n\] \[t_{s^{-}1}= \neg\bigl{(}F((a)_{n+1},F((a)_{n+1},t_{s}^{*}))\equiv F((a)_{n+1 },t_{s}^{*})\bigr{)} \text{if }\operatorname{len}(s)=n\] In the above, \(x^{*}\) is the unique sentence \(\psi\) such that there is \(\theta\) such that \(x=\neg(\theta\equiv\psi)\). By our assumption, \(t\upharpoonright_{T}\) has arbitrarily long branches, so there is an infinite coded branch \(b\) of \(t\) such that for every \(i\in\omega\)\((\mathcal{M},S)\models T(b(i))\). Moreover, by the construction of \(t\), for every \(i\in\mathrm{dom}(b)\), \[(\mathcal{M},S)\models T(b(i))\text{ iff }i\in\omega.\] It follows that the set \(A=\{\psi\in im(b):T(\neg\psi)\}\) is separable. Observe that for every \(i<\operatorname{len}(b)\) we have \[(a)_{i}\in I\iff T(\neg b(i))\iff b(i)\in A.\] Hence \(im(a)\cap I=G^{-1}[A]\), where \(G\) is the definable function \((a)_{i}\mapsto b(i)\). By Proposition 9 this ends the proof. **Corollary 41**.: _For a countable, recursively saturated \(\mathcal{M}\models\mathsf{PA}\), the following are equivalent:_ 1. \(\mathcal{M}\) _is arithmetically saturated, and_ 2. _For every idempotent sentential operator_ \(F\) _with template_ \(\Phi(p,q)\) _of depth 1, and every_ \(F\)_-closed cut_ \(I\!\subseteq_{\text{\emph{end}}}M\)_, there is_ \(S\) _such that_ \((\mathcal{M},S)\models\mathsf{CS}^{-}\) _and_ \[I=\{x:\forall y\leq x\forall\phi\big{(}T(F(y,\phi))\equiv T(\phi)\big{)}\}.\] Note that the implication (2) \(\implies\) (1) holds in more generality: it does not rely on \(\Phi\) having syntactic depth 1. Proof.: We show (1) \(\implies\) (2). Suppose \(\omega\) is a strong cut. Let \(a\odot n\) be \(a-n\), if \(F\) is accessible, and \(\lfloor\frac{a}{n}\rfloor\), if \(F\) is additive. By Proposition 18, if \(I\) is not separable, then \(I\) is not \(\omega\)-coded, and so there is no \(a>I\) such that \(\inf(\{a\odot n:n\in\omega\})=I\). Therefore, every \(F\)-closed cut \(I\) is either separable or has no least \(F\)-gap above it. The result follows from Theorem 35. Conversely, if \(\mathcal{M}\) is not arithmetically saturated, let \(I\!\subseteq_{\text{\emph{end}}}M\) be any cut with a least \(F\)-gap above it. For example, fix a nonstandard \(c\) and let \(I=\inf(\{c\odot n:n\in\omega\})\). Since \(\omega\) is not strong, by Proposition 18, \(I\) is not separable. 
It follows by Proposition 38 for accessible \(F\), and by Proposition 40 for additive \(F\), that there is no \(S\) such that \((\mathcal{M},S)\models\mathsf{CS}^{-}\) and \[I=\{x:\forall y\leq x\forall\phi(T(F(y,\phi))\equiv T(\phi))\}.\] ## 5. Disjunctively Correct Cut We proceed to the strongest correctness property, that of full disjunctive correctness (\(\operatorname{DC}_{S}\)). As usual we shall focus on \(I(\operatorname{DC}_{S})\). The first proposition states that the intuitive strength of full disjunctive correctness is reflected in the closure properties of \(I(\operatorname{DC}_{S})\): **Proposition 42**.: _For every \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(I(\operatorname{DC}_{S})\) is closed under multiplication._ Proof.: We shall use a result from [1]: define the sequential induction cut \(\operatorname{SInd}_{S}\) to be the set of those \(c\) such that the following is true in \((\mathcal{M},S):\) \[\forall x\leq c\,\forall\langle\phi_{i}:i\leq x\rangle\,\Big(\big(T(\phi_{0})\wedge\forall y<x(T(\phi_{y})\to T(\phi_{y+1}))\big)\to\forall i\leq x\,T(\phi_{i})\Big).\] Then the proof of [1, Theorem 8] directly shows that \(\operatorname{DC}_{S}\subseteq\operatorname{SInd}_{S}\). Now we proceed to the main argument: fix any \(c\in\operatorname{DC}_{S}\) and let \(b\leq c^{2}\). Fix any \(d,r\) such that \(b=dc+r\) and \(r<c\). Fix any \(\langle\phi_{i}:i\leq b\rangle\) and assume first that \(T(\bigvee_{i\leq b}\phi_{i})\) and, aiming at a contradiction, that for every \(i\leq b\), \(T(\neg\phi_{i})\). Define the following auxiliary sequence: for each \(i\leq d\) let \(\theta_{i}=\bigvee_{j\leq ic}\phi_{j}\) and let \(\theta_{d+1}=\bigvee_{j\leq b}\phi_{j}.\) We show that for every \(i<d+1\), \(T(\neg\theta_{i})\to T(\neg\theta_{i+1}).\) Fix any \(i\) and assume \(T(\neg\theta_{i})\). Let \(c^{\prime}\) be \(c\) if \(i<d\) and \(r\) if \(i=d\). Consider the sequence \(\psi_{k}=\bigvee_{j\leq ic+k}\phi_{j}\). We claim that for any \(k<c^{\prime}\), \(T(\neg\psi_{k})\to T(\neg\psi_{k+1}).\) Indeed, fix any \(k<c^{\prime}\) and assume \(T(\neg\psi_{k})\). Observe that by the definition of \(T\), the definition of \(\psi_{k+1}\) and the compositional axioms we have \[T(\neg\psi_{k+1})\equiv S(\neg\psi_{k+1},\emptyset)\equiv S(\neg(\psi_{k}\vee\phi_{ic+k+1}),\emptyset)\equiv S(\neg\psi_{k},\emptyset)\wedge S(\neg\phi_{ic+k+1},\emptyset)\\ \equiv T(\neg\psi_{k})\wedge T(\neg\phi_{ic+k+1}).\] The right-hand side clearly holds by our assumptions. Hence, since \(c^{\prime}\in\mathrm{SInd}_{S}\), we conclude that \(T(\neg\psi_{c^{\prime}})\). Since by definition \(\psi_{c^{\prime}}=\theta_{i+1}\), we have established that for any \(i<d+1\), \(T(\neg\theta_{i})\to T(\neg\theta_{i+1}).\) Since \(T(\neg\theta_{0})\) holds by assumption (as \(\theta_{0}=\phi_{0}\)) and \(d+1\in\mathrm{SInd}_{S}\), we conclude that \(T(\neg\theta_{d+1})\). By definition, we obtain that \(T(\neg\bigvee_{i\leq b}\phi_{i})\), which contradicts our assumption. Now assume that for some \(e\leq b\), \(T(\phi_{e})\) holds. In particular, by compositionality, it holds that \(T(\bigvee_{i\leq e}\phi_{i})\). Let us fix \(d^{\prime},r^{\prime}\) such that \(b-e=d^{\prime}c+r^{\prime}\), \(r^{\prime}<c\), and for \(j\leq d^{\prime}\) define \(\theta_{j}=\bigvee_{i\leq e+jc}\phi_{i}\) and \(\theta_{d^{\prime}+1}=\bigvee_{i\leq b}\phi_{i}\). As in the above proof we show that for each \(j\leq d^{\prime}\), \(T(\theta_{j})\to T(\theta_{j+1})\) and obtain \(T(\bigvee_{i\leq b}\phi_{i})\), which concludes the proof of the reverse implication and the whole argument. 
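To illustrate the bookkeeping in the above proof with a small numerical example (the particular values are of course arbitrary): take \(c=3\) and \(b=8=2\cdot 3+2\), so \(d=2\) and \(r=2\). The auxiliary disjunctions are \[\theta_{0}=\phi_{0},\qquad\theta_{1}=\bigvee_{j\leq 3}\phi_{j},\qquad\theta_{2}=\bigvee_{j\leq 6}\phi_{j},\qquad\theta_{3}=\bigvee_{j\leq 8}\phi_{j},\] and each passage from \(\theta_{i}\) to \(\theta_{i+1}\) is covered by an inner induction of length at most \(c\), while the outer induction along \(\theta_{0},\ldots,\theta_{3}\) has length \(d+1=3\); both are available because \(c,d+1\in\operatorname{SInd}_{S}\).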
We conclude with a limitative result which shows that methods used to prove the main results of previous sections are insufficient for obtaining the analogous results in the context of \(\mathrm{DC}_{S}\). This is because, as conjectured, our methods show that in an arithmetically saturated model any cut can be characterized as \(I(\mathrm{IDC}_{S})\) for some regular satisfaction class which satisfies the internal induction axiom. For such a satisfaction class \(S\), \(S(\phi,\emptyset)\) behaves like a truth predicate satisfying the axioms of \(\mathsf{CT}^{-}\), and we have the following small insight. Below \(\mathrm{Con}_{\mathsf{PA}}(x)\) is a formula with a free variable \(x\) which canonically expresses that there is no proof of \(0=1\) in \(\mathsf{PA}\) whose code is smaller than \(x\). **Proposition 43**.: _Suppose that \((\mathcal{M},S)\models\mathsf{CS}^{-}\), \(S\) is regular and satisfies the internal induction axiom. Then, for every \(a\in\mathrm{DC}_{S}\), \(\mathcal{M}\models\mathrm{Con}_{\mathsf{PA}}(a).\)_ Sketch.: Let \(\mathrm{CT}^{-}(x)\) denote a formula of the language \(\mathcal{L}_{\mathsf{PA}}\cup\{T\}\) with a free variable \(x\) which expresses "the predicate \(T\) satisfies Tarski's inductive truth conditions for sentences of logical depth at most \(x\)". By inspection of the proof of Theorem 3.1 from [11] one sees that if \(a\in\mathrm{DC}_{S}\), then there is a (typically nonstandard) \(\psi\in\mathcal{M}\) with a unique free variable such that \(\mathcal{M}\models\mathrm{Form}_{\mathcal{L}_{\mathsf{PA}}}(\psi(x))\) and \((\mathcal{M},S)\models\mathrm{CT}^{-}(a)[T*\psi(x)/T(x)]\). \(T*\psi(x)\) denotes a formula with a free variable \(x\) which expresses "The result of substituting the numeral of \(x\) for the unique free variable in \(\psi\) is true" (we use the notation from [8], Lemma 3.6) and \(\mathrm{CT}^{-}(a)[T*\psi(x)/T(x)]\) is the formula obtained by substituting \(T*\psi(x)\) for \(T(x)\). As in [8], Lemma 3.7 we conclude that \(T*\psi(x)\) satisfies the full induction scheme in \((\mathcal{M},S)\). It follows that no proof with a code less than \(a\) can be the proof of \(0=1\) from the axioms of \(\mathsf{PA}\), because each formula in such a proof is of complexity at most \(a\), all the premises are made true by \(T*\psi(x)\), and, by induction, truth in the sense of \(T*\psi(x)\) is preserved by the rules of inference. ## 6. Appendix In this Appendix we indicate how to modify the proof of Theorem 12 in order to obtain a much better-behaved satisfaction class. In particular we would like the constructed satisfaction classes to define a truth predicate. We start with introducing the notion of _regularity_. The definition is taken from [10]: **Definition 44**.: For every formula \(\phi\) and a term substitution \(\gamma\), \(\phi[\gamma]\) denotes the result of substituting \(\gamma(v)\) for every free occurrence of \(v\) in \(\phi\), for every \(v\) in the domain of \(\gamma\). We shall treat assignments as substitutions of numerals: if \(\alpha\) is an assignment, then by writing \(\phi[\alpha]\) we treat \(\alpha\) as a substitution which to every \(v\) assigns the canonical numeral naming \(\alpha(v)\) (i.e. the term expressing the sum of \(0\) and \(\alpha(v)\)-many \(1\)'s). For example, if \(\alpha(v_{0})=3\) and \(\alpha(v_{1})=1\), then \((\exists v_{0}(v_{0}=v_{1})\lor v_{0}+1=v_{2})[\alpha]=\exists v_{0}(v_{0}=0+1)\lor 0+1+1+1+1=v_{2}\). 
**Definition 45** (\(\mathsf{PA}\)).: If \(\phi\in\operatorname{Form}_{\mathcal{L}_{\mathsf{PA}}}\), we say that \(\widehat{\phi}\) is its _structural template_ iff * No constant symbol occurs in \(\widehat{\phi}\). * No free variable occurs in \(\widehat{\phi}\) twice. * No complex term containing only free variables occurs in \(\widehat{\phi}\). * No variable occurs in \(\widehat{\phi}\) both as a bound and as a free variable. * The formula \(\phi\) can be obtained from \(\widehat{\phi}\) by renaming bound variables and substituting terms for free variables in such a way that no variable appearing in those terms becomes bound. * \(\widehat{\phi}\) is the smallest formula with those properties (recall that we identify formulae with their Gödel codes). We say that formulae \(\phi,\psi\) are _structurally similar_, written \(\phi\sim\psi\), iff \(\widehat{\phi}=\widehat{\psi}\). Suppose that \(\kappa\) is an _occurrence of a subformula_ of \(\phi\) (not necessarily direct). With \(\kappa_{\widehat{\phi}}\) we denote the subformula of \(\widehat{\phi}\) whose occurrence in \(\widehat{\phi}\) corresponds to \(\kappa\) (recall that \(\phi\) and \(\widehat{\phi}\) have the same syntactic structure). For a formula \(\psi\), \([\psi]_{\widehat{\phi}}\) denotes the set \(\{\kappa_{\widehat{\phi}}\ \ :\ \ \kappa\text{ is an occurrence of }\psi\text{ in }\phi\}\). Note that the definition of structural similarity formalizes in \(\mathsf{PA}\) and the relation is an equivalence relation, provably in \(\mathsf{PA}\). Moreover we can assume that if \(\phi\) is of standard complexity, then \(\widehat{\phi}\) is a standard formula. **Example 46**.: The structural template of \(0=0\lor 0=0\) is \(v_{0}=v_{1}\lor v_{2}=v_{3}\), while the structural template of \(\exists v_{2}(v_{2}+1=v_{1}+1+1)\) is \(\exists v_{0}(v_{0}+v_{1}=v_{2})\), where in both cases the \(v_{i}\) are chosen in such a way as to minimize the formula. Moreover, \([0=0]_{\widehat{0=0\lor 0=0}}=\{v_{0}=v_{1},v_{2}=v_{3}\}\). Formulae \(\forall v_{0}(v_{0}=v_{1}+1)\vee\neg(v_{1}=v_{0}+1)\) and \(\forall v_{3}(v_{3}=v_{2}+1+1)\vee\neg(v_{2}+1=v_{0})\) are structurally similar. **Remark 47** (\(\mathsf{PA}\)).: For every two formulae \(\psi,\phi\) such that \(\psi\) is a subformula of \(\phi\) (not necessarily direct), \(\widehat{\psi}\) differs from every formula from the set \([\psi]_{\widehat{\phi}}\) at most by a permutation of free variables and renaming bound variables. For every \(\theta\in[\psi]_{\widehat{\phi}}\) we shall denote with \(\sigma_{\theta,\widehat{\psi}}\) the permutation of free variables such that \(\sigma_{\theta,\widehat{\psi}}[\theta]=\widehat{\psi}\). **Definition 48** (\(\mathsf{PA}\)).: Let \(\phi\) be any formula and \(\gamma\) be a term substitution such that \(\widehat{\phi}[\gamma]\) differs from \(\phi\) only modulo renaming the bound variables. Then for every assignment \(\alpha\) for \(\phi\) let \(\widehat{\alpha}_{\phi}\) be the assignment for \(\widehat{\phi}\) given by \(\widehat{\alpha}_{\phi}(v)=\gamma(v)^{\alpha}\). We recall that for a term \(t\) and assignment \(\alpha\), \(t^{\alpha}\) denotes the value of term \(t\) under assignment \(\alpha\). In other words, \(\widehat{\alpha}_{\phi}\) assigns to a variable \(v\) the value of the term \(\gamma(v)\) under the assignment \(\alpha\). For illustration assume that \(\theta\) is either a true atomic sentence or the negation of a true atomic sentence and \(F\) is a local idempotent operator for \(\theta\) with a template \(\Phi(p,q)\) (as in Definition 6). 
Then for any \(x\), \(\widehat{F(x)}\) can differ from \(F(x)\) only in that * \(\widehat{F(x)}\) may use different free and bound variables; * each element of \([\theta]_{\widehat{F(x)}}\) is of the form \(v_{i}=v_{j}\) for some variables \(v_{i}\) and \(v_{j}\) (if \(\theta\) is a true atomic sentence) or each element of \([\theta]_{\widehat{F(x)}}\) is of the form \(\neg v_{i}=v_{j}\) for some variables \(v_{i}\) and \(v_{j}\) (if \(\theta\) is the negation of a true atomic sentence). Moreover all the variables in \(\widehat{F(x)}\) occur only in formulae from \([\theta]_{\widehat{F(x)}}\). In particular \(\widehat{F(x)}\) is not a sentence. Moreover, observe that, since \(F(x)\) is a sentence, \(\emptyset\) is the unique assignment for \(F(x)\). Hence, if \(\theta\) is either \(s=t\) or \(\neg s=t\), where \(s\) and \(t\) are closed terms whose value is \(a\), then \(\widehat{\emptyset}_{F(x)}\) is constantly equal to \(a\). The situation described above of a local idempotent operator for \(\theta\) will be the only one which we shall consider in this section. **Definition 49**.: An \(X\)-satisfaction class \(S\) is _regular_ if for every \(\phi\) such that \(\phi,\widehat{\phi}\in X\) and every assignment \(\alpha\) for \(\phi\), \((\phi,\alpha)\in S\) iff \((\widehat{\phi},\widehat{\alpha}_{\phi})\in S\). We now proceed to strengthening Theorem 12. For notational reasons we write \(\mathcal{M},\alpha\models\phi\) instead of \(\mathcal{M}\models\phi[\alpha]\) to mean that a formula \(\phi\) is satisfied in \(\mathcal{M}\) by an assignment \(\alpha\). **Definition 50**.: Fix \(\mathcal{M}\models\mathsf{PA}\), \(X\subseteq M\), \(\theta\), \(F\) and \(\Phi\) such that \(F\) is a local idempotent sentential operator for \(\theta\) with template \(\Phi(p,q)\). 1. We say that a formula \(\phi\) is an \(F\)-_intermediate formula_ if for some \(x\), \(F(x)\) is a subformula of \(\phi\) (not necessarily direct or proper) and \(\phi\) is a subformula (not necessarily direct or proper) of \(F(x+1)\). 2. For an intermediate formula \(\phi\), the \(F\)-length of \(\phi\) is the maximal \(x\) such that \(F(x)\) is a subformula of \(\phi\). 3. Recall that \(\operatorname{compl}(\phi)\) denotes the complexity of a formula \(\phi\) (defined in Preliminaries). For an \(F\)-intermediate formula \(\phi\), assignment \(\alpha\) for \(\widehat{\phi}\) and \(x\) such that for some \(n\in\omega\), \(\operatorname{compl}(\phi)=\operatorname{compl}(F(x))+n\), we say that \(\alpha\) _\((X,x)\)-satisfies_ \(\widehat{\phi}\) if \(\mathcal{M},\alpha\models\widehat{\phi}[A/F(x)]\), where \(A\) is \(0=0\) if \(x\in X\) and \(0=1\) otherwise, and \(\widehat{\phi}[A/F(x)]\) denotes the result of replacing in \(\widehat{\phi}\) every occurrence of \(F(x)_{\widehat{\phi}}\) with \(A\). We say that \(\alpha\) _\(X\)-satisfies_ \(\widehat{\phi}\) if \(\alpha\) \((X,x)\)-satisfies \(\widehat{\phi}\), where \(x\) is the \(F\)-length of \(\phi\). We note that the above definition makes sense, since \(\widehat{\phi}[A/F(x)]\) is a formula of standard complexity (possibly with variables with nonstandard indices). **Proposition 51**.: _Fix any \(\mathcal{M}\models\mathsf{PA}\) and \(X\subseteq M\) which is closed under predecessors. For an arbitrary intermediate formula \(\phi\) of nonstandard complexity and assignment \(\alpha\) for \(\widehat{\phi}\) the following are equivalent:_ 1. _\(\alpha\) \(X\)-satisfies \(\widehat{\phi}\)._ 2. 
_For every_ \(x\) _such that_ \(\operatorname{compl}(\phi)-\operatorname{compl}(F(x))\in\omega\)_,_ \(\alpha\)__\((X,x)\)_-satisfies_ \(\widehat{\phi}\)_._ 3. _For some_ \(x\) _such that_ \(\operatorname{compl}(\phi)-\operatorname{compl}(F(x))\in\omega\)_,_ \(\alpha\)__\((X,x)\)_-satisfies_ \(\widehat{\phi}\)_._ Proof.: Follows immediately from the definition of \(F\) and the fact that \(\theta\), \(\Phi\) are chosen so that \(\Phi(\theta,q)\) is equivalent to \(q\). **Theorem 52**.: _Let \(\theta\) be either a true atomic sentence or a negation of a true atomic sentence and \(F\) a local idempotent sentential operator for \(\theta\) with template \(\Phi(p,q)\). Let \(X\subseteq M\) be separable, closed under successors and predecessors, and for each \(n\in\omega\), \(n\in X\) if and only if \(\mathcal{M}\models\theta\). Then \(\mathcal{M}\) has an expansion \((\mathcal{M},S)\models\mathsf{CS}^{-}\) such that \(X=\{x\in M:(\mathcal{M},S)\models T(F(x))\equiv T(\theta)\}\) and \(S\) is a regular satisfaction class._ Proof.: The initial structure of the argument is very similar to that used in proving Theorem 12. Let \(D=\{F(x):x\in M\}\) and \(A=\{F(x):x\in X\}\). Note that \(A\) is separable from \(D\). We build sequences \(F_{0}\subseteq F_{1}\subseteq\ldots\) and \(S_{0},S_{1},\ldots\) such that: * each \(F_{i}\) is a finitely generated set of formulas such that \(\cup F_{i}=\operatorname{Form}^{\mathcal{M}}\), * each \(S_{i}\) is a regular full satisfaction class and \((\mathcal{M},S_{i})\) is recursively saturated, * \(S_{i+1}\upharpoonright F_{i}=S_{i}\upharpoonright F_{i}\), and * for each \(\phi\in D\cap F_{i}\), \((\phi,\emptyset)\in S_{i}\) if and only if \(\phi\in A\). Given such a sequence, \(S=\cup(S_{i}\cap F_{i}\times M)\) would be the required full satisfaction class on \(\mathcal{M}\). Externally fix an enumeration of \(\operatorname{Form}^{\mathcal{M}}\) in order type \(\omega\). We can assume, without loss of generality, that \(\theta\) appears first in this enumeration. We let \(F_{0}\) be \(\{\theta\}\) and \(S_{0}\) be any regular full satisfaction class which satisfies internal induction. Let \(F_{i+1}\) be generated by \(F_{i}\) and the least \(x\in\operatorname{Form}^{\mathcal{M}}\setminus F_{i}\) in the aforementioned enumeration. Let \(F^{\prime}=F_{i}\cup(F_{i+1}\cap D)\). Let \(a\) be a sequence such that \(\{F((a)_{n}):n\in\omega\}=F_{i+1}\cap D\). Note that such a sequence exists since \(F_{i+1}\) is finitely generated. Let \(c\) be as in the definition of separability for \(a\). We shall now show how to construct an elementary extension \(\mathcal{N}\) of \(\mathcal{M}\) and a full regular satisfaction class \(S^{\prime}\) on \(\mathcal{N}\) such that * \(S^{\prime}\cap F_{i}\times M=S_{i}\cap(F_{i}\times M)\). * For each \(n\in\omega\), \((F((a)_{n}),\emptyset)\in S^{\prime}\iff\mathcal{M}\models n\in c\). By a straightforward resplendence argument one can then copy \(S^{\prime}\) to \(\mathcal{M}\). This last step crucially uses the fact that \((\mathcal{M},S_{i})\) is recursively saturated and the facts that (1) \(F_{i+1}\) is finitely generated and (2) that we can code the membership in \(F_{i+1}\cap X\) via the parameter \(c\). The construction of \(S^{\prime}\) follows the lines of a standard Enayat-Visser construction (as presented in [3]): we build a sequence of models \(\mathcal{M}=\mathcal{M}_{0}\preceq\mathcal{M}_{1}\preceq\mathcal{M}_{2},\ldots\) and sets \(S^{\prime}_{1},S^{\prime}_{2},\ldots\) such that 1. 
\(S^{\prime}_{i}\subseteq M^{2}_{i}\), \(S_{i}\cap(F_{i}\times M)=S^{\prime}_{1}\cap(F_{i}\times M)\) and for all \(i>0\), \(S^{\prime}_{i+1}\cap M^{2}_{i-1}=S^{\prime}_{i}\cap M_{i-1}\) 2. \((\mathcal{M}_{i+1},S^{\prime}_{i+1})\models\mathsf{CS}^{-}\upharpoonright_{ \operatorname{Form}^{\mathcal{M}_{i}}}\); 3. for each \(n\in\omega\), \((F((a)_{n}),\emptyset)\in S^{\prime}_{i}\iff\mathcal{M}\models n\in c\) 4. for every \(\phi\in\operatorname{Form}^{\mathcal{M}_{i}}\) and \(\alpha\in M_{i+1}\), if \(\alpha\) is an assignment for \(\phi\), then \[(\phi,\alpha)\in S^{\prime}_{i+1}\iff(\widehat{\phi},\widehat{\alpha}_{\phi}) \in S^{\prime}_{i+1}.\] Then one easily checks that for \(\mathcal{N}=\bigcup_{i}\mathcal{M}_{i}\) and \(S^{\prime}=\bigcup_{i}S^{\prime}_{i+1}\cap(\operatorname{Form}^{\mathcal{M}_{i} }\times M_{i+1})\), \((\mathcal{N},S^{\prime})\) satisfy the conditions A,B,C above and \(S^{\prime}\) is a full regular satisfaction class. We note that condition 4 does not contradict the fact that \(S^{\prime}_{i+1}\) is defined only for formulae in \(M_{i}\), because the operations \(\widehat{\ \ \ }\) are \(\mathcal{L}_{\mathsf{PA}}\) definable, so if \(\phi\) in \(M_{i}\) then \(\widehat{\phi}\in M_{i}\). We show how to construct \(\mathcal{M}_{1}\) and \(S^{\prime}_{1}\) and the rest of cases is fully analogous (but simpler because we do not have to care about condition (3) from the above list). Consider the theory in the language \(\mathcal{L}_{\mathcal{M}}\cup\{S^{\prime}_{1}\}\) which is given as the union of the following sets: 1. \(\mathrm{ElDiag}(\mathcal{M})\) 2. \(\{\mathrm{Comp}(\phi,\psi,\theta)\ \ :\phi,\psi,\theta\in\mathrm{Form}^{\mathcal{M}}\}\) 3. \(\{\forall\alpha\big{(}S^{\prime}_{1}(\phi,\alpha)\equiv S^{\prime}_{1}(\psi, \widehat{\alpha}_{\phi})\big{)}\ \ :\ \ \phi,\psi\in\mathrm{Form}^{\mathcal{M}},\psi=\widehat{\phi}\}\). 4. \(\{S^{\prime}_{1}(F((a)_{n}),\emptyset)\equiv n\in c\ \ :n\in\omega\}\). 5. \(\{S^{\prime}_{1}(\phi,\alpha)\ \ :\phi\in F_{i},(\phi,\alpha)\in S_{i}\}\) We argue that the above theory is consistent, which is enough to obtain \(\mathcal{M}_{1}\) and \(S_{1}\). So fix \(A\) - a finite portion of the above theory. Let \(B\) consists of all \(\phi\in\mathrm{Form}^{\mathcal{M}}\) which occur in one of the axioms in \(A.\) We build the extension of \(S^{\prime}_{1}\subset M^{2}\) such that \((\mathcal{M},S^{\prime}_{1})\models A\) by induction on the complexity of \(\phi\in B\). We note that this is meaningful, since \(B\) is finite. Moreover we always define \(S^{\prime}_{1}\) on \(\widehat{\phi}\) and then extend \(S^{\prime}_{1}\) canonically to all formulae in \(\sim\) equivalence class. In the construction we shall not refer to the fragment of \(X\) given by \(c\) and \(a\), but rather to the whole of \(X\). \(c\) and \(a\) were introduced to enable the resplendency argument. Assume \(\phi\) has the least possible complexity among formulae in \(B\). We put \((\widehat{\phi},\alpha)\in S^{\prime}_{1}\) iff \(\alpha\) is an assignment for \(\widehat{\phi}\) and one of the following holds: 1. \(\widehat{\phi}\) is standard and \(\mathcal{M},\alpha\models\widehat{\phi}\). 2. \((\widehat{\phi},\alpha)\in S_{i}\) and \(\phi\in F_{i}\). 3. \(\alpha\) is a constant function, \(\phi\) is an \(F\)-intermediate formula and \(\alpha\)\(X\)-satisfies \(\widehat{\phi}\). 
Then, for every formula \(\psi\in B\) which has the least possible complexity, we put \((\psi,\alpha)\in S^{\prime}_{1}\) iff \((\widehat{\psi},\widehat{\alpha}_{\psi})\in S^{\prime}_{1}\). The base step of our induction process is finished. Now for \(\psi\in B\) we assume that for every \(\phi\in B\) of complexity lower than the complexity of \(\psi\) and every \(\psi^{\prime}\) such that \(\psi^{\prime}\sim\psi\), \(S^{\prime}_{1}\) has been defined. If all immediate subformulae of \(\psi\) are in \(B\), then by induction we can assume that \(S^{\prime}_{1}\) is defined for their templates and so we can extend \(S^{\prime}_{1}\) to \(\widehat{\psi}\) using the compositional clauses. Otherwise, we put \((\widehat{\psi},\alpha)\in S^{\prime}_{1}\) iff \(\alpha\) is an assignment for \(\widehat{\psi}\) and one of the conditions a, b, c above holds. This concludes the inductive step. It remains to be checked that the \(S^{\prime}_{1}\) so defined satisfies the chosen finite portion \(A\) of the theory. Conditions i, ii, iii and v follow easily from the construction. To verify iv we first observe that for every \(x\), every subformula \(\psi\) of \(F(x)\) is a sentence, \(\emptyset\) is the unique assignment for \(\psi\) and \(\widehat{\emptyset}_{\psi}\) is constant. By induction on the complexity of \(\phi\in B\) we check that whenever \(\phi\) is an \(F\)-intermediate formula, then \[(\phi,\emptyset)\in S^{\prime}_{1}\iff\widehat{\phi}\text{ is }X\text{-satisfied by }\widehat{\emptyset}_{\phi}.\qquad(*)\] This is clearly the case for formulae of minimal complexity. We consider the induction step for \(\phi=\psi_{0}\vee\psi_{1}\). If it is not the case that both \(\psi_{0},\psi_{1}\) are in \(B\), then the claim follows by definition. So assume \(\psi_{0}\) and \(\psi_{1}\) are both in \(B\). Hence \[(\phi,\emptyset)\in S^{\prime}_{1}\iff(\psi_{0},\emptyset)\in S^{\prime}_{1}\text{ or }(\psi_{1},\emptyset)\in S^{\prime}_{1}.\] By the inductive assumption, the last condition is equivalent to: \[\widehat{\emptyset}_{\psi_{0}}\ X\text{-satisfies }\widehat{\psi_{0}}\text{ or }\widehat{\emptyset}_{\psi_{1}}\ X\text{-satisfies }\widehat{\psi_{1}}.\qquad(**)\] Let \(\kappa^{0}\) be the occurrence of \(\psi_{0}\) in \(\phi\) as the left disjunct, and \(\kappa^{1}\) be the occurrence of \(\psi_{1}\) in \(\phi\) as the right disjunct. Then \((\kappa^{0})_{\widehat{\phi}}\) differs from \(\widehat{\psi_{0}}\) only up to renaming of bound variables and a permutation of free variables. Let \(\sigma\) be the permutation of free variables such that \(\sigma[(\kappa^{0})_{\widehat{\phi}}]\) is (up to renaming of bound variables) the same as \(\widehat{\psi_{0}}\). By unraveling the definitions it follows that \(\widehat{\emptyset}_{\phi}\!\restriction_{(\kappa^{0})_{\widehat{\phi}}}=\widehat{\emptyset}_{\psi_{0}}\circ\sigma\). The same holds for the pair \((\kappa^{1})_{\widehat{\phi}}\) and \(\widehat{\psi_{1}}\). So we conclude that \((**)\) is equivalent to \[\widehat{\emptyset}_{\phi}\!\restriction_{(\kappa^{0})_{\widehat{\phi}}}\ X\text{-satisfies }(\kappa^{0})_{\widehat{\phi}}\text{ or }\widehat{\emptyset}_{\phi}\!\restriction_{(\kappa^{1})_{\widehat{\phi}}}\ X\text{-satisfies }(\kappa^{1})_{\widehat{\phi}}.\] The above, however, is clearly equivalent to the right-hand side of \((*)\).
2309.15516
Teaching Text-to-Image Models to Communicate in Dialog
A picture is worth a thousand words, thus, it is crucial for conversational agents to understand, perceive, and effectively respond with pictures. However, we find that directly employing conventional image generation techniques is inadequate for conversational agents to produce image responses effectively. In this paper, we focus on the innovative dialog-to-image generation task, where the model synthesizes a high-resolution image aligned with the given dialog context as a response. To tackle this problem, we design a tailored fine-tuning approach on the top of state-of-the-art text-to-image generation models to fully exploit the structural and semantic features in dialog context during image generation. Concretely, we linearize the dialog context with specific indicators to maintain the dialog structure, and employ in-domain data to alleviate the style mismatch between dialog-to-image and conventional image generation tasks. Empirical results on PhotoChat and MMDialog Corpus show that our approach brings consistent and remarkable improvement with 3 state-of-the-art pre-trained text-to-image generation backbones.
Xiaowen Sun, Jiazhan Feng, Yuxuan Wang, Yuxuan Lai, Xingyu Shen, Dongyan Zhao
2023-09-27T09:33:16Z
http://arxiv.org/abs/2309.15516v2
# Teaching Text-to-Image Models to Communicate ###### Abstract Various works have been extensively studied in the research of text-to-image generation. Although existing models perform well in text-to-image generation, there are significant challenges when directly employing them to generate images in dialogs. In this paper, we first highlight a new problem: **dialog-to-image generation**, that is, given the dialog context, the model should generate a realistic image which is consistent with the specified conversation as response. To tackle the problem, we propose an efficient approach for dialog-to-image generation without any intermediate translation, which maximizes the extraction of the semantic information contained in the dialog. Considering the characteristics of dialog structure, we put segment token before each sentence in a turn of a dialog to differentiate different speakers. Then, we fine-tune pre-trained text-to-image models to enable them to generate images conditioning on processed dialog context. After fine-tuning, our approach can consistently improve the performance of various models across multiple metrics. Experimental results on public benchmark demonstrate the effectiveness and practicability of our method. ## 1 Introduction Recently, visual modalities have played an important role in transmitting messages. In human conversations, images can effortlessly convey a wealth of visual perception, a depth of expression that plain text often struggles to capture. Therefore, it is necessary for conversational agents to comprehend, perceive, and appropriately respond to contexts with multi-modal contents beyond mere text. Although some dialog models showcase remarkable capabilities of generating textual response that resembles human conversation Zhang et al. (2020); Roller et al. (2021); Ouyang et al. (2022), they encounter difficulties in generating images as responses. On the other hand, numerous studies have been extensively explored in the research of text-to-image generation. Various models show impressive performance on generating fascinating images according to their captions, such as DALL-E 2 Ramesh et al., the Latent Diffusion Model (LDM) Rombach et al. (2022), and UniDiffuser Bao et al. (2023). While these models excel in text-to-image generation, there are significant challenges when directly employing them to generate images based on dialog. Figure 1: An example of human conversations from PhotoChat Corpus. The speakers are talking about a grand reopening of a resort. As illustrated in Figure 2, the image is generated by the renowned text-to-image model DALL-E (Ramesh et al., 2021) conditioned on the dialog context depicted in Figure 1.
The intermediate description is typically so brief that it cannot accurately convey the visual details, while dialog information often provides abundant information about the images. Hence, previous intermediate-description based methods may lose valuable information embedded within the dialog. An example from the PhotoChat dataset (Zang et al., 2021) is shown in Figure 1. The picture description in the PhotoChat dataset is "Objects in the photo: Drink, Head, Face, Hair", which merely provides a brief summary of the objects in the image, devoid of any emotional color or details. In contrast, the dialog context conveys additional information in at least two aspects: (i) the female speaker expresses her joyful mood through words like 'fun' and 'danced all night'; (ii) we also learn more details about the lively party from 'grand' and 'with neat lighting and architecture'. One significant difference between our task and previous ones is that we require models to directly generate images from multi-modal conversational context without intermediate description translation. In addition to what is mentioned above, the dialog-to-image generation task also faces other challenges: (i) the majority of what the speakers discuss are images from real-life situations, which involves numerous images of human faces. Generating images of human faces is a significant challenge for models; (ii) most of the existing text-to-image datasets include plenty of cartoon-style images, resulting in a strange style when using text-to-image models directly, especially with very peculiar human faces. To tackle the above challenges, we propose an efficient approach for dialog-to-image generation without any intermediate translation, which maximizes the extraction of the semantic information contained in the dialog. We design a tailored fine-tune approach in consideration of the characteristics of dialog text and leverage pre-existing generative models. We choose the models with transformer backbone (Bao et al., 2023, 2023; Wang et al., 2022) which can simultaneously process both image and text information, facilitating better integration of information from both modalities. Additionally, according to dialog structure, we put a segment token before each sentence in a turn of a dialog to differentiate different speakers. We refrain from retraining the model; instead, we maximize the utilization of pre-trained checkpoints and the knowledge acquired by the generative model, keeping computational resources and time costs minimal. Based on the experimental results after fine-tuning, our approach can consistently improve the performance of various models across multiple metrics. Particularly, after fine-tuning, UniDiffuser could generate high-quality images, which are comparable to those real-world ground truths. This demonstrates the practicability of our method. Figure 2: An image generated by DALL-E with the dialog context depicted in Figure 1 serving as its input. Contributions of this work are three-fold: * To the best of our knowledge, this is the first work entirely devoted to dialog-to-image generation. We aspire for our work to captivate the attention of the research community, thereby shedding light on this pertinent research challenge. * We first explore the utilization of text-to-image models for the task of dialog-to-image generation and find that it is challenging to solely treat dialog-to-image generation as text-to-image generation. 
Then, we present an effective approach that can generate high-resolution image responses conditioning on dialog context. * Extensive experiments on PhotoChat Corpus indicate the effectiveness of our approach, which achieves a consistent improvement with previous text-to-image generation models. ## 2 Related Works ### Multi-Modal Dialog Models Numerous advanced contributions have emerged in parallel with the evolution of multi-modal dialogue datasets (Das et al., 2017; Mostafazadeh et al., 2017; Shuster et al., 2018; Zang et al., 2021; Liao et al., 2021; Feng et al., 2023). Several efforts have been undertaken to enhance the performance of conversational agents in image-grounded dialogues through various dialogue modeling approaches (Qi et al., 2020; Lee et al., 2021). Researchers (Yang et al., 2021; Liang et al., 2021) investigate enhancing the textual representations of generated dialogue responses using associative visual scenes. Zang et al. (2021) suggest two objectives: one involves predicting the intention to share a photo in the next dialogue turn, and the other is a dialogue-based image retrieval task for finding the most suitable photo based on the conversation context. Additionally, they introduce a dual-encoder model that leverages object labels to encode image characteristics. Nevertheless, the effectiveness of the retrieval-based approach is constrained in particular domains due to the constraints posed by the size of the pre-established conversational history database. This is particularly notable for less common or specialized contexts not accounted for in the history, where the array of image responses in a retrieval system remains constant. In a recent development, Sun et al. (2022) have pioneered the creation of a multi-modal dialogue response generation model called Divter. This model demonstrates an effective capability to comprehend multi-modal dialogue contexts and produce informative textual and high-resolution image responses. However, it has not yet moved beyond the conventional approach of traditional text-to-image generation, which commonly employs brief captions to generate images. Therefore, we explore a tailored method to directly generate images from conversational information without any intermediate translation. ### Text-to-image Generation In the research of text-to-image generation, various studies have been thoroughly explored. Mansimov et al. (2015) show the Draw generative model (Gregor et al., 2015) is capable to generate images from natural language descriptions. Reed et al. (2016) introduce a generative adversarial network to enhance the image's fidelity. Subsequently, several enhancement techniques persist in fine-tuning the generation architecture, such as stacked generators (Zhang et al., 2017), attentional network (Xu et al., 2018), and extra knowledge (Li et al., 2019). Nguyen et al. (2017) propose a unified probabilistic interpretation of related activation maximization methods to produce high-quality images. Cho et al. (2020) apply consistent masking using a wide range of masking ratios and matched the appropriate pre-training datasets with the relevant objectives. Ramesh et al. (2021) and Ding et al. (2021) employ transformer-based techniques that autoregressively model the text and image tokens as a single stream of data. Recently, diffusion models have been employed to address text-to-image generation tasks due to their flexibility and strength. 
The Latent Diffusion Model (LDM) (Rombach et al., 2022) enables conditional image generation while streamlining the training and sampling processes for denoising diffusion models without sacrificing quality. Saharia et al. (2022) present Imagen, a text-to-image diffusion model that achieves an unparalleled level of photorealism and a profound understanding of language. Separately, Ramesh et al. propose a two-stage model, DALL-E 2: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. Bao et al. (2023) design a ViT-based architecture (named U-ViT) for image generation with diffusion models. Then, Bao et al. (2023) propose a unified framework (dubbed UniDiffuser) based on U-ViT. Although these various models show impressive performance on text-to-image generation, they fail to complete dialog-to-image-generation task. Therefore, we design a tailored fine-tune approach in consideration of the characteristics of dialog text to enable models to generate images based on dialog. ## 3 Methodology Given a set of \(n\) dialog-image samples \(D_{S}=\{S_{i}=(C_{i},I_{i})\}_{i=1}^{n}\), where \(C_{i}\) and \(I_{i}\) represent the dialog context and the corresponding image, re spectively. Compared to traditional text-to-image generation, \(C_{i}=\{c_{k}\}_{k=1}^{K}\) is composed of \(K\) turns of dialog, which is longer than image captions. The goal is to learn a image generation model endowed with the capability to complete dialog-to-image generation task. Acknowledging the commonalities between dialog-to-image generation and text-to-image generation, we have crafted a customized fine-tuning approach that takes into account the distinctive attributes of dialog text, while harnessing the capabilities of pre-existing generative models. ### Tailored Text Concatenation Strategy Since dialog-to-image generation and text-to-image generation share many similarities, fine-tuning pre-trained text-to-image models is more effective to accomplish dialog-to-image generation building upon the model's existing ability to align text and image information. Furthermore, after training on the text-to-image datasets, the model has acquired a significant amount of external knowledge so that it could capture the common entities accurately, facilitating image generation. Fine-tuning instead of training from scratch also keeps computational resources and time costs minimal. In consideration of above aspects, we devise a customized fine-tuning approach simultaneously maintaining the unique structure of dialog text. Conversations, unlike image captions, typically involve multiple participants discussing varied topics. This enriches image details but demands strong text comprehension skills from the model. Inability to differentiate between speakers' information even would disrupt image generation. In order to distinct different speakers to help models comprehensive dialog context, we tried various ways of connecting dialog statements (refer to Section 5.1 for more details) as input and found that appending a special symbol '#' before each turn of a dialog achieved the best performance among all the approaches. Concretely, given a dialog-image sample \(S=(C,I)\), where \(C\) and \(I\) represent the dialog context and the corresponding image, respectively. The dialog context \(C=\{c_{k}\}_{k=1}^{K}\), where \(c_{k},\forall k\in\{1,\dots,K\}\) denotes each turn before image appears. 
We first add '#' before \(c_{k},\forall k\in\{1,\dots,K\}\), then concatenate all the sentences as the final text input. ### Pre-trained Model Architecture Diffusion models are powerful, recently emerged deep generative models used for high-quality image generation. An essential element in a comprehensive generative system is a unified architecture capable of processing various modality types as inputs. It's worth highlighting that the rise of the Transformer model and its utilization in generative modeling offers a promising approach to capture interactions across modalities. To ensure the quality of the generated images and facilitate the fusion of text and image information, we choose a diffusion model based on the transformer architecture as our pre-trained model. Following Bao et al. (2023), the image encoder consists of two parts. The first part is the image autoencoder employed in Stable Diffusion. The second part is the image CLIP (Radford et al., 2021) (ViT-B/32). The final latent embedding for images is the concatenation of the outputs from the two parts, i.e., \(x_{0}=[x_{0}^{AE},x_{0}^{CLIP}]\). As for the text encoder, we employ the same text CLIP as Stable Diffusion. The text CLIP outputs 77 vectors and each is 768-dimensional. We also add an extra linear layer, which reduces the dimension of each vector to 64 to obtain the final text embedding \(y_{0}\). We fine-tune a joint noise prediction network \(\epsilon_{\theta}(x_{t^{x}},y_{t^{y}},t^{x},t^{y})\) with a transformer-based backbone on the embeddings obtained above following Bao et al. (2023). We illustrate the model architecture in Figure 3. The loss function mentioned in Bao et al. (2023) is formulated as below: \[\mathbb{E}_{x_{0},y_{0},\epsilon^{x},\epsilon^{y},t^{x},t^{y}}\|\epsilon_{\theta}(x_{t^{x}},y_{t^{y}},t^{x},t^{y})-[\epsilon^{x},\epsilon^{y}]\|_{2}^{2}, \tag{1}\] where \((x_{0},y_{0})\) is a random data point, [, ] denotes concatenation, \(\epsilon^{x}\) and \(\epsilon^{y}\) are sampled from standard Gaussian distributions, and \(t^{x}\) and \(t^{y}\) are uniformly sampled from \(\{1,2,\dots,T\}\) independently. During inference, we sample \(x_{0}\) conditioned on \(y_{0}\) by setting \(t^{y}=0\). In Table 1, we present the training algorithm in Algorithm 1 and the sampling procedure in Algorithm 2. ## 4 Experiments ### Dataset To evaluate the performance of our method, we conduct experiments on the PhotoChat dataset released by Zang et al. (2021), which is a multimodal conversational dataset that casts light on the photo sharing behavior in online messaging. Each dialog in the dataset is paired with a user image that is shared during the conversation. Since the PhotoChat dataset only provides image URLs, we collect the images via the given URLs which are still accessible. There are 9843 images in the training set and 963 images in the test set. We only retain the text in each turn before the image appears and concatenate the texts following the method we propose as the text input of the model. Since we freeze the parameters in the text CLIP and it can only support input with a maximum length of 77 tokens, we truncate the text from the end to ensure that the length of the input text does not exceed 77 tokens. ### Implementation Details **Training and Sampling.** We initialize the model weights with pre-trained UniDiffuser-v1\({}^{1}\), trained on LAION-5B (Schuhmann et al., 2022). 
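Concretely, each fine-tuning iteration optimizes the joint noise-prediction objective in Eq. (1), i.e., one pass of Algorithm 1. The following is a minimal, self-contained PyTorch-style sketch of that step; the network, feature dimensions, and noise schedule are toy placeholders chosen for illustration and do not reflect the actual UniDiffuser implementation.

```python
import torch
import torch.nn as nn

T = 1000                                        # number of diffusion steps (toy value)
betas = torch.linspace(1e-4, 0.02, T)           # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t used in Algorithm 1

class ToyJointNoisePredictor(nn.Module):
    """Stand-in for the transformer backbone epsilon_theta(x_t, y_t, t^x, t^y)."""
    def __init__(self, dx=16, dy=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dx + dy + 2, 128), nn.SiLU(),
                                 nn.Linear(128, dx + dy))

    def forward(self, x_t, y_t, tx, ty):
        h = torch.cat([x_t, y_t, tx[:, None] / T, ty[:, None] / T], dim=-1)
        return self.net(h)

def training_step(model, x0, y0):
    """One step of Algorithm 1: perturb x0 and y0 independently, predict both noises."""
    bsz = x0.shape[0]
    tx = torch.randint(0, T, (bsz,))            # t^x (0-indexed here; 1..T in the paper)
    ty = torch.randint(0, T, (bsz,))            # t^y
    eps_x, eps_y = torch.randn_like(x0), torch.randn_like(y0)
    ax = alphas_bar[tx].unsqueeze(-1)
    ay = alphas_bar[ty].unsqueeze(-1)
    x_t = ax.sqrt() * x0 + (1.0 - ax).sqrt() * eps_x
    y_t = ay.sqrt() * y0 + (1.0 - ay).sqrt() * eps_y
    pred = model(x_t, y_t, tx.float(), ty.float())
    target = torch.cat([eps_x, eps_y], dim=-1)  # [eps^x, eps^y] as in Eq. (1)
    return ((pred - target) ** 2).mean()

model = ToyJointNoisePredictor()
loss = training_step(model, x0=torch.randn(4, 16), y0=torch.randn(4, 8))
loss.backward()
```

Conditional generation (Algorithm 2) then corresponds to fixing \(t^{y}=0\), so that the text embedding enters the noise predictor unperturbed while \(x\) is denoised step by step.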
In the fine-tuning stage, we freeze the parameters of both the image encoder and the text encoder and fine-tune 9800 steps at \(512\times 512\) resolution on PhotoChat with a batch size of 300 and 5K warm-up steps. We use the AdamW optimizer (Loshchilov and Hutter, 2017) with a learning rate of 3e-5, a weight decay of 0.03 and running coefficients of \((\beta_{1},\beta_{2})=(0.9,0.9)\). We train with mixed precision for efficiency. The training is conducted on 2x A800-80GB GPUs. We use DPM-Solver (Lu et al., 2022) with 50 steps in all experiments. Footnote 1: [https://huggingface.co/tu-ml/unidiffuser-v1](https://huggingface.co/tu-ml/unidiffuser-v1) ``` 1:repeat 2:\(x_{0},y_{0}\sim q(x_{0},y_{0})\) 3:\(t^{x},t^{y}\sim Uniform(\{1,2,\dots,T\})\) 4:\(\epsilon^{x},\epsilon^{y}\sim N(0,I)\) 5: Let \(x_{t^{x}}=\sqrt{\overline{\alpha}_{t^{x}}}x_{0}+\sqrt{1-\overline{\alpha}_{t^{x}}}\epsilon^{x}\) 6: Let \(y_{t^{y}}=\sqrt{\overline{\alpha}_{t^{y}}}y_{0}+\sqrt{1-\overline{\alpha}_{t^{y}}}\epsilon^{y}\) 7: Take gradient step on \(\nabla_{\theta}\|\epsilon_{\theta}(x_{t^{x}},y_{t^{y}},t^{x},t^{y})-[\epsilon^{x},\epsilon^{y}]\|_{2}^{2}\) 8:until converged ``` **Algorithm 1** Training ``` 1:\(x_{T}\sim N(0,I)\) 2:for\(t=T,\dots,1\)do 3:\(z^{x}\sim N(0,I)\) if \(t>1\), else \(z^{x}=0\) 4:\(x_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}(x_{t}-\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\epsilon^{x}_{\theta}(x_{t},y_{0},t,0))\)\(+\sigma_{t}z^{x}\) 5:endfor 6:return\(x_{0}\) ``` **Algorithm 2** Sampling of \(x_{0}\) conditioned on \(y_{0}\) Table 1: Training and sampling algorithm. Figure 3: Implementation of the diffusion model with transformer backbone on dialog-image data. ### Baselines To our knowledge, Divter proposed by Sun et al. (2022) is the most direct competitor for dialog-to-image generation. We also directly employ several generative models such as DALL-E 2\({}^{2}\), UniDiffuser3, U-ViT4, OFA5, which all exhibit outstanding performance in text-to-image generation without any additional training for dialog-to-image generation. Footnote 2: [https://github.com/LAION-AI/dalle2-laion](https://github.com/LAION-AI/dalle2-laion) Footnote 3: [https://github.com/thu-ml/unidiffuser](https://github.com/thu-ml/unidiffuser) Footnote 4: [https://github.com/baoff/U-ViT](https://github.com/baoff/U-ViT) Footnote 5: [https://github.com/OFA-Sys/OFA](https://github.com/OFA-Sys/OFA) ### Evaluation Metrics We report the FID (Heusel et al., 2017) and IS on the PhotoChat test set to measure the image fidelity and quality. FID measures the distance between generated images and real images. A smaller value indicates that the generated images are closer to the ground truth. IS is a measure of the clarity and diversity of generated images and a larger value indicates a higher quality. The FID and IS scores are computed by the code in [https://github.com/toshas/torch-fidelity](https://github.com/toshas/torch-fidelity). 
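For concreteness, the snippet below shows one way such scores could be obtained with the torch-fidelity package named above; the directory paths are placeholders for generated and ground-truth images, and the call follows the library's documented `calculate_metrics` interface rather than the authors' exact evaluation script.

```python
import torch_fidelity

# Placeholder folders: generated samples vs. real PhotoChat test images.
generated_dir = "outputs/photochat_test_generated"
reference_dir = "data/photochat_test_real"

metrics = torch_fidelity.calculate_metrics(
    input1=generated_dir,  # images whose quality is being measured
    input2=reference_dir,  # reference (ground-truth) images, needed for FID
    isc=True,              # Inception Score (IS)
    fid=True,              # Frechet Inception Distance (FID)
    cuda=False,            # set True to run the Inception network on a GPU
    verbose=False,
)
print(metrics)             # dictionary containing the IS and FID values
```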
### Experimental Results As shown in Table 2, (w) means that we fine-tune the model with the method we proposed, while (w/o) means that we just concatenate the dialog text with '#' and then directly use the text-to-image generation model for inference. From the column containing '\(\Delta\)' in the table, it can be observed that our method yields a consistent improvement across all the listed models in terms of both FID and IS metrics. The results indicate that the fine-tuning method is effective for all the listed text-to-image generation models. In particular, UniDiffuser proposed by Bao et al. (2023b) achieves the best performance on the IS score after fine-tuning. We also list the results of concatenating the dialog text with '#' and then utilizing the text-to-image generation models directly for inference. Although the models we select all perform excellently on text-to-image generation, they cannot complete the dialog-to-image generation task well. This illustrates that we cannot simply consider dialog-to-image generation as text-to-image generation. Hence, a dedicated method for dialog-to-image generation needs to be studied separately. Divter presented by Sun et al. (2022) achieves the best FID among all the models. We speculate that this might be due to the fact that the parameter count of Divter is twelve times larger than ours. Even excluding the other parts of Divter, its Text-to-Image Translator, DALL-E, has a parameter count of 12 billion, while the UniDiffuser model we fine-tune has a parameter count of only 952 million. Although there is a significant disparity in parameter count between our model and Divter, we still achieve the highest IS. This demonstrates the effectiveness of our approach. Overall, the comparison results shown in Table 2 indicate that: 1) our method is effective and feasible for solving the dialog-to-image generation problem; 2) our method is compatible with most text-to-image generation models. With very few modifications, the initial generative model can efficiently accomplish dialog-to-image generation with our proposed method while minimizing computational resources and time expenditure. \begin{table} \begin{tabular}{l||c c|c||c c|c} \hline \hline **Models** & **(w/o) FID \(\downarrow\)** & **(w) FID \(\downarrow\)** & \(\Delta\)**FID** & **(w/o) IS \(\uparrow\)** & **(w) IS \(\uparrow\)** & \(\Delta\)**IS** \\ \hline DALL-E 2 & 124.10 & & & 8.8 & & \\ Divter & **29.04** & & & 15.4 & & \\ \hline OFA & 113.09 & 110.95 & -2.14 & 9.1 & 10.5 & +1.4 \\ U-ViT-Small & 97.76 & 86.96 & -10.80 & 10.8 & 12.2 & +1.4 \\ U-ViT-Small(Deep) & 96.07 & 86.29 & -9.78 & 10.9 & 12.2 & +1.3 \\ UniDiffuser & 108.95 & 67.64 & **-41.31** & 12.4 & **16.7** & **+4.3** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results of our method on common text-to-image models and baselines on the PhotoChat test set. **(w/o)** means directly employing text-to-image models without any additional training for dialog-to-image generation; **(w)** means fine-tuning the model with the method we proposed based on the characteristics of the dialog; \(\Delta\) indicates the difference in metrics between **(w/o)** and **(w)**. Numbers in **bold** mean that the corresponding model achieves the best scores among all the models mentioned. ## 5 Further Analysis ### Impact on Concatenation Method We tried several different fine-tuning approaches according to the distinctive attributes of dialog text. As shown in Table 3, the symbol (' ') means that we separated the content of each turn in the dialog with spaces. ('[PER1]'&'[PER2]') means that we appended the special token [PER1] or [PER2] before each turn of a dialog to differentiate the two interlocutors. Similarly, ('A:'&'B:') means that we used 'A:' and 'B:' instead of the tokens [PER1] and [PER2] to distinguish between the two conversational participants. ('#') in the last row of Table 3 refers to the tailored text concatenation strategy we proposed in Section 3.1. Figure 4 clearly demonstrates these four methods. As depicted in Table 3, concatenating the sentences with '#' achieves the best performance. We believe there are several reasons: (i) given that the CLIP tokenizer's vocabulary lacks the two special tokens [PER1] and [PER2], and we freeze the parameters of the pre-trained text CLIP during fine-tuning, the model is unable to distinguish between different interlocutors as we expected; (ii) we speculate that the reason for the poor performance of ('A:'&'B:') is that the model probably confuses the special token with the article 'a'. \begin{table} \begin{tabular}{l c c} \hline \hline **Fine-tuning Method** & **FID \(\downarrow\)** & **IS \(\uparrow\)** \\ \hline UniDiffuser(‘ ’) & 69.92 & 16.4 \\ UniDiffuser(‘[PER1]’\&’[PER2]’) & 72.93 & 14.7 \\ UniDiffuser(‘A:’\&’B:’) & 104.14 & 11.2 \\ \hline UniDiffuser(‘#’) & **67.64** & **16.7** \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation results of different fine-tuning methods on UniDiffuser. Numbers in **bold** mean that the corresponding method achieves the best scores among all the methods mentioned. ### Case Study To further investigate the quality of images generated by models after fine-tuning according to our proposed method, we show two examples on the PhotoChat test data in Table 4. The given context of the first one is about "bread", and the second one is about "butterfly". As we can see, both UniDiffuser and U-ViT-Small(Deep) after fine-tuning can generate a realistic high-resolution image which is coherent with the given dialog context. The high-quality generated images are comparable to the real-world ground truths, which demonstrates the practicability of our method. In addition, the images generated by UniDiffuser are more lively than the images generated by U-ViT-Small(Deep). \begin{table} \begin{tabular}{l|l} \hline \hline **Example 1** & **Example 2** \\ \hline **A:** Hey there, what’s going on? & **A:** What’s new? \\ **B:** Not much. Just about to start eating at a buffet. & **B:** I am watching butterflies, just relaxing. \\ my job is throwing lol. & I love insects. \\ **A:** Oh really, what are they having? & **A:** Sounds entertaining. \\ **B:** A bunch of stuff. I’m starting off at the pastry & **B:** Yeah I am into Moths and butterflies today. \\ table first though lol. & **A:** I have never really loved insects, \\ **A:** Oh yeah? What kind of pastries? & but I like butterflies. \\ **B:** Well they have croissants, some other bread, & **B:** Moths? Not me! \\ fruits and a bunch of other stuff. & **A:** I know, but butterflies are cute! HaHa \\ **B:** Here, I’ll send you a picture. & **B:** Take a look. \\ \hline \hline \end{tabular} \end{table} Table 4: Examples of PhotoChat test set. In each example, the turns with the prefix of “A”/“B” are the given context; **Left Image:** the ground-truth image; **Middle Image:** the image generated by UniDiffuser after fine-tuning according to our proposed method; **Right Image:** the image generated by U-ViT-Small(Deep) after fine-tuning according to our proposed method.
Figure 4: Different ways to concatenate the texts of dialog. ### Performance on Dialog Categories We randomly select 50 samples each belonging to the categories of people, food, and animals from the PhotoChat test set and calculate the FID and IS for the images in these three categories generated by UniDiffuser after fine-tuning. As illustrated in Figure 5, the images of people have the highest FID while the images of food have the lowest FID. This is within our expectation, since people exhibit a wide range of body language and facial expressions, which makes it difficult for models to generate consistent images accurately. The model that has been trained on text-to-image datasets can generate relatively simpler food and animal images more accurately since it has sufficient external knowledge. We also find that the images of people have the highest IS while the images of food have the lowest IS. ## 6 Conclusions In this paper, we highlight a new problem: dialog-to-image generation. Then we first explore the utilization of text-to-image models for the task of dialog-to-image generation and find that it is challenging to solely treat dialog-to-image generation as text-to-image generation. To tackle the problem, we present an effective approach that can generate high-resolution image responses conditioning on dialog context. Extensive experiments on PhotoChat Corpus indicate the effectiveness of our approach, which achieves a significant improvement with previous text-to-image generation models. In the future, we will explore more efficient methods for multimodal dialog response generation. ## Acknowledgements This work is supported in part by the NSFC Grants (No.62206070), and the Innovation Fund Project of the Engineering Research Center of Integration and Application of Digital Learning Technology, Ministry of Education (1221014).
2309.14608
A Demand-Supply Cooperative Responding Strategy in Power System with High Renewable Energy Penetration
Industrial demand response (IDR) plays an important role in promoting the utilization of renewable energy (RE) in power systems. However, it will lead to power adjustments on the supply side, which is also a non-negligible factor in affecting RE utilization. To comprehensively analyze this impact while enhancing RE utilization, this paper proposes a power demand-supply cooperative response (PDSCR) strategy based on both day-ahead and intraday time scales. The day-ahead PDSCR determines a long-term scheme for responding to the predictable trends in RE supply. However, this long-term scheme may not be suitable when uncertain RE fluctuations occur on an intraday basis. Regarding intraday PDSCR, we formulate a profit-driven cooperation approach to address the issue of RE fluctuations. In this context, unreasonable profit distributions on the demand-supply side would lead to the conflict of interests and diminish the effectiveness of cooperative responses. To mitigate this issue, we derive multi-individual profit distribution marginal solutions (MIPDMSs) based on satisfactory profit distributions, which can also maximize cooperative profits. Case studies are conducted on an modified IEEE 24-bus system and an actual power system in China. The results verify the effectiveness of the proposed strategy for enhancing RE utilization, via optimizing the coordination of IDR flexibility with generation resources.
Yuanzheng Li, Xinxin Long, Yang Li, Yizhou Ding, Tao Yang, Zhigang Zeng
2023-09-26T01:29:35Z
http://arxiv.org/abs/2309.14608v2
A Demand-Supply Cooperative Responding Strategy in Power System with High Renewable Energy Penetration ###### Abstract Industrial demand response (IDR) plays an important role in promoting the utilization of renewable energy (RE) in power systems. However, it will lead to power adjustments on the supply side, which is also a non-negligible factor in affecting RE utilization. To comprehensively analyze this impact while enhancing RE utilization, this paper proposes a power demand-supply cooperative response (PDSCR) strategy based on both day-ahead and intraday time series. The day-ahead PDSCR determines a long-term scheme responding to the predictable trends in RE supply. However, this long-term scheme may not be suitable when uncertain RE fluctuations occur on an intraday basis. Regarding intraday PDSCR, we formulate a profit-driven cooperation approach to address the issue of RE fluctuations. In this context, unreasonable profit distributions on the demand-supply side, would lead to the conflict of interests and diminish the effectiveness of cooperative responses. To mitigate this issue, we develop multi-individual profit distribution marginal solutions (MIPMSs) based on satisfactory profit distributions, which can also maximize cooperative profits. Case studies are conducted on a modified IEEE 24-bus system and an actual power system in China. The results verify the effectiveness of the proposed strategy for enhancing RE utilization, via optimizing the coordination of IDR flexibility with generation resources. Demand-supply cooperative responding, renewable energy, conflict of interest, profit distribution. ## I Introduction Currently, human development is significantly constrained by energy and environmental crises. Renewable energy (RE) is considered as a solution for replacing conventional energy sources, due to its environmental friendliness [1, 2, 3]. For instance, Wind power (WP) is a significant form of RE, which has developed rapidly in recent years, especially in China [4]. However, the uncertainty of RE, such as the unpredictable and significant fluctuations, may adversely affect the secure and cost-effective operation of the power system. Therefore, there is a need to address this challenge regarding RE uncertainty [5, 6]. Demand response (DR) is considered as an effective method to mitigate RE power fluctuations, and RE utilization could be enhanced. Within the DR program, electricity customers accord to long-term pricing schemes or financial incentives, to flexibly schedule their power demand. Notably, industrial production enterprises (IPEs) constitute a significant portion of global power demand, accounting for more than 50\(\%\) of the total demand worldwide [7]. Therefore, to enhance the management of IPE demand, Ref. [8] investigates industrial demand response (IDR) for minimizing generation costs and RE curtailments. Furthermore, Ref. [9] analyzes the potential for implementing existing IDR programs among industrial consumers, in order to augment RE utilization. The previous studies treat IPEs as independent dispatch entities, which overlooks their interactions with the utility center (e.g., electricity manager) within the IDR program. To analyze these interactions, Ref. [10] examines the impact of dynamic electricity pricing on demand response, involving 802 businesses across 34 commercial and industrial categories. On this basis, the interactions among multi-level market participants are modeled, according to _Stackelberg_ game theory [11]. 
These participants include the utility center, load aggregators, energy storage operators, etc. Furthermore, a programming-multi-verse distributed algorithm is introduced to optimize trading strategies among the utility center and load aggregators [12]. In summary, the aforementioned researches mainly investigate centralized power dispatch within the day-ahead time scale. In this scenario, the utility center holds a priority in the process of electricity sales and price negotiations [13]. This priority enables it to facilitate IDR more effectively via controlling power supply for transactions. Moreover, unpredictable fluctuations in RE supply or power demand potentially impact power balance and stable power transmission on the intraday time scale. In addition to the established electricity contracts, the utility center generally offers additional financial incentives to encourage further shifts in IPE demand. This supplementary IDR exists outside of contractual limitations, allowing IPEs to autonomously determine their responses based on self-interests [14]. In contrast to centralized studies, decentralized dispatch relies on autonomous IDR to manage power fluctuations. It means that all IDR participants act in a self-interested and profit-driven manner. More specifically, participants concentrate on maximizing the responding income, rather than strictly adhering to demands of the utility center. Therefore, the optimal response outcome depends on the effective cooperation among all participants. Furthermore, an effective cooperation is dependent on a good financial incentive mechanism. Conversely, an inappropriate financial mechanism may result in excessive incentives for certain participants [15], which will weaken the motivation of other participants and even degrade response outcomes. On the other hand, the absence of penalties may encourage extreme selfish behavior and consequently adversely affect total interest [16]. This implies that a participant may prioritize their own interests over collective interests, leading to the conflict of interests among multiple individuals. In order to standardize the decentralized response of participants, Ref. [17] establishes a punishment rule via the Cartel mechanism and repeated games. They are used to regulate the cooperation in the IDR program and avoid the conflict of interests. From a profit perspective, Ref. [18] formulates a non-cooperative game model among DR aggregators. Based on incomplete information, this study determines the DR share of each aggregator by maximizing the revenue of utility center. Ref. [19] considers the cooperation model as a more suitable approach for describing this decentralized interest relationship. Then,
2309.03374
Physics Informed Neural Networks for Modeling of 3D Flow-Thermal Problems with Sparse Domain Data
Successfully training Physics Informed Neural Networks (PINNs) for highly nonlinear PDEs on complex 3D domains remains a challenging task. In this paper, PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations at moderate to high Reynolds numbers for complex geometries. The presented method utilizes very sparsely distributed solution data in the domain. A detailed investigation on the effect of the amount of supplied data and the PDE-based regularizers is presented. Additionally, a hybrid data-PINNs approach is used to generate a surrogate model of a realistic flow-thermal electronics design problem. This surrogate model provides near real-time sampling and was found to outperform standard data-driven neural networks when tested on unseen query points. The findings of the paper show how PINNs can be effective when used in conjunction with sparse data for solving 3D nonlinear PDEs or for surrogate modeling of design spaces governed by them.
Saakaar Bhatnagar, Andrew Comerford, Araz Banaeizadeh
2023-09-06T21:52:14Z
http://arxiv.org/abs/2309.03374v3
# Physics Informed Neural Networks for Modeling of 3D Flow-Thermal Problems with Sparse Domain Data ###### Abstract Successfully training Physics Informed Neural Networks (PINNs) for highly nonlinear PDEs on complex 3D domains remains a challenging task. In this paper, PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations at moderate to high Reynolds numbers for complex geometries. The presented method utilizes very sparsely distributed solution data in the domain. A detailed investigation on the effect of the amount of supplied data and the PDE-based regularizers is presented. Additionally, a hybrid data-PINNs approach is used to generate a surrogate model of a realistic flow-thermal electronics design problem. This surrogate model provides near real-time sampling and was found to outperform standard data-driven neural networks when tested on unseen query points. The findings of the paper show how PINNs can be effective when used in conjunction with sparse data for solving 3D nonlinear PDEs or for surrogate modeling of design spaces governed by them. **Keywords:** Physics Informed Neural Networks; Navier-Stokes Equations; Surrogate Modeling; Design Optimization ## 1 Introduction Over the last few years, there has been significant growth in the popularity of machine learning algorithms to solve partial differential equations (PDE) or assist PDE solvers, such as computational fluid dynamics (CFD) solvers [1, 2]. A particular application where CFD solvers struggle, due to the computational cost, is iterative design optimization. This is the process of continually updating a design (e.g. an electronics assembly layout) and computing the solution (e.g. flow or thermal fields) to optimize the performance (e.g. constrain the temperatures or reduce the pressure drop). The challenge for CFD is the input-output relationship is one-to-one. Therefore, any changes to the input vector (e.g. geometric variations) need to be re-simulated, leading to high costs when iterating on different design scenarios [3]. Overall high-fidelity iterative design requires a prohibitive level of resources, both computationally and monetarily, and often leads to a sub-optimal outcome. The attraction of Machine Learning (ML) algorithms in these scenarios is the ability to rapidly find solutions for such problem setups that are challenging in conventional CFD, such as large design space explorations [4], turbulence model closure [5] or solving incomplete/ill-posed problems [6]. Conventional ML algorithms usually require large amounts of data to train. This represents a challenge when using ML in engineering applications such as CFD, since experimental data can be difficult and expensive to obtain and may suffer from measurement noise. Furthermore, in many engineering experiments, field data such as temperature and velocity fields can sometimes only be captured at specific locations, and it is difficult to get full field solution results from physical experiments. Research has turned to using simulation data for training ML models, but the computational cost of generating large amounts of data to train models is a major bottleneck. Physics Informed Neural Networks (PINNs) [7] represent an advance in scientific machine learning that has the potential to solve many of the aforementioned issues. 
By adding the physics that governs the problem into the loss function, and optimizing the loss, it is possible to have the network learn the solution of the problem represented by that equation in a data-free manner. PINNs can be used in cases where sporadic experimental field data is available [8, 9] to calculate the rest of the field variable and can be used to solve problems with incomplete or missing physics [10, 11]. Another application area, in which PINNs could be very beneficial is machine learning-based surrogate modeling. Although a relatively new field, several ML architectures and methods have been utilized in the literature. These include: Proper Orthogonal Decomposition (POD) [12], Gappy POD [13] and Manifold Learning [14]. More recently, increased attention has been given to statistical methods like Gaussian processes and neural networks that incorporate Machine Learning (ML) to create surrogate models. Bhatnagar et al. [15] used a CNN architecture to predict aerodynamic flow fields over airfoils and created a surrogate model that generalized between flow conditions and airfoil geometries. Guo et al. [16] also used a Convolutional Neural Network (CNN) architecture to predict steady flows over automotive vehicles. Lee and You [17] used Generative Adversarial Networks (GANs) coupled with physical laws to predict unsteady flow around a cylinder, demonstrating the benefits of using embedded physics. Raissi and Karniadakis [18] use Gaussian processes to model and identify several complex PDEs. Several of the aforementioned studies used purely data-driven models and required the creation of large amounts of training data to generate accurate and generalizable models. PINNs have the capability to greatly reduce these data generation costs, and it has been shown that training surrogates using the physics embedded in the loss function greatly improves predictive accuracy, across a wide range of applications [17, 19, 20, 21]. However, there is currently a lack of research articles applying PINNs to 3-dimensional (3D) problems, particularly for highly nonlinear PDEs like the Navier-Stokes equations. These problems are challenging for PINNs due to a variety of reasons that are discussed later in this paper. Yet, these problems are the most lucrative to solve, as most industrial applications of CFD are done in 3D. This paper provides results that aim to address this gap, by solving several problems with realistic physical parameters, over complex geometries in a data-assisted manner, using very sparse domain data. Further, this paper solves a realistic flow-thermal design optimization problem using a hybrid data-PINN surrogate model and shows how PINN models outperform standard data-driven neural network (NN) surrogates for every test point queried in the Design of experiments (DoE) space for the surrogate modeling problem. The paper is divided as follows; Section 2 introduces PINNs in more detail and discusses some of the technical challenges with training PINNs. Section 3 outlines some of the important features the authors incorporate in the creation and training of PINNs to enable accurate and fast convergence. Section 4 demonstrates several problems solved using PINNs, and showcases a design optimization problem using PINN-based surrogates. Section 5 discusses how the work shown in this paper can be improved upon. 
## 2 Physics Informed Neural Networks (PINNs) ### Setting up a PINN Training Physics-informed neural networks (PINNs) leverage automatic differentiation to obtain an analytical representation of an output variable and its derivatives, given a parametrization using the trainable weights of the network. By employing the underlying computational graph, it is possible to construct the differential equations that govern physical phenomena. A PDE problem in the general form reads: \[\mathcal{N}_{\mathbf{x}}[u]=0,\ \mathbf{x}\in\Omega, \tag{1}\] \[\Phi(u(\mathbf{x}))=\mathbf{g}(\mathbf{x}),\ \mathbf{x}\in\partial\Omega \tag{2}\] where \(\Phi\) can be the identity operator (Dirichlet B.C.) or a derivative operator (Neumann/Robin B.C.). In order to solve the PDE using the PINN method, the residual of the governing PDE is minimized, which is defined by \[r_{\theta}(\mathbf{x})=\mathcal{N}_{\mathbf{x}}[f_{\theta}(\mathbf{x})], \tag{3}\] where \(f_{\theta}\) is the value predicted by the network. The residual value, along with the deviation of the prediction from the boundary/initial conditions, is used to construct the loss, which takes the form: \[L(\theta)=L_{r}(\theta)+\sum_{i=1}^{M}\lambda_{i}L_{i}(\theta), \tag{4}\] where the index \(i\) refers to the different components of the loss function, relating to initial conditions, boundary conditions, and measurement/simulation data, and \(\lambda_{i}\) is the weight coefficient of each loss term. The individual loss terms are constituted as follows: \[L_{r}=\frac{1}{N_{r}}\sum_{i}^{N_{r}}[r(\mathbf{x}_{r}^{i})]^{2},\ L_{b}=\frac{1}{N_{b}}\sum_{i}^{N_{b}}[\Phi(\hat{u}(\mathbf{x}_{b}^{i}))-g_{b}^{i}]^{2},\ L_{d}=\frac{1}{N_{d}}\sum_{i}^{N_{d}}[u(\mathbf{x}_{d}^{i})-\hat{u}(\mathbf{x}_{d}^{i})]^{2}, \tag{5}\] where the subscripts r, b, and d refer to collocation points, boundary/initial-condition points, and data points, respectively. The loss \(L(\theta)\) can then be minimized to have the network learn the solution of the PDE described by Equations 1 and 2. A popular approach is to use gradient-based optimizers such as Adam [22] and L-BFGS to optimize the network weights. ### Current Challenges with PINNs Although the PINN method shows great promise, it still has a number of unresolved issues. The biggest challenges with PINNs currently lie in the scalability of the algorithms to large 3D problems, problems with complex nonlinearities, and unsteady problems. Some of the issues described henceforth are tackled by the methods described in Section 3. #### 2.2.1 Weak imposition of Boundary Conditions The solution of a PDE problem must obey all initial and boundary conditions imposed on it while minimizing the residual of the governing equation. However, for neural-network-based solvers it is difficult to impose boundary and initial conditions in an exact manner. This is because the standard way to impose boundary conditions in PINNs is to create a linear combination of loss functions (as described mathematically in the previous section), where each loss describes either the deviation of the network output from a specific boundary condition or the magnitude of the residual of the governing equations. Therefore, boundary conditions are only satisfied in a weak manner. There has been research demonstrating the utility of exact imposition of boundary conditions [23, 24, 25] or creative multi-network approaches [26], but such implementations are mostly problem-specific and do not generalize well. 
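To make the composite loss of Eqs. (4)-(5) concrete, a minimal PyTorch-style sketch is given below, using the steady incompressible Navier-Stokes equations (the governing equations of the examples in Section 4) as the PDE; the network architecture, point sets, fluid properties, and loss weights are illustrative placeholders rather than the settings used in this work.

```
import torch

# A minimal sketch of the composite PINN loss in Eqs. (4)-(5), using the steady
# incompressible Navier-Stokes equations as the governing PDE. The network,
# point sets, fluid properties, and loss weights below are illustrative only.
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 4),          # outputs (u, v, w, p)
)

def grad(f, x):
    # First derivatives of a scalar field f w.r.t. the coordinates x, shape (N, 3).
    return torch.autograd.grad(f.sum(), x, create_graph=True)[0]

def ns_residuals(x, rho=1.2, nu=1.5e-5):
    # Continuity and steady momentum residuals r_theta(x) at collocation points x.
    x = x.requires_grad_(True)
    out = net(x)
    u, v, w, p = out[:, 0], out[:, 1], out[:, 2], out[:, 3]
    gu, gv, gw, gp = grad(u, x), grad(v, x), grad(w, x), grad(p, x)

    def laplacian(g):
        return sum(torch.autograd.grad(g[:, d].sum(), x, create_graph=True)[0][:, d]
                   for d in range(3))

    def convect(g):
        return u * g[:, 0] + v * g[:, 1] + w * g[:, 2]

    continuity = gu[:, 0] + gv[:, 1] + gw[:, 2]
    mom_x = convect(gu) + gp[:, 0] / rho - nu * laplacian(gu)
    mom_y = convect(gv) + gp[:, 1] / rho - nu * laplacian(gv)
    mom_z = convect(gw) + gp[:, 2] / rho - nu * laplacian(gw)
    return continuity, mom_x, mom_y, mom_z

# Illustrative point sets: collocation points, no-slip wall points, sparse data points.
x_r = torch.rand(2048, 3)
x_b, u_b = torch.rand(256, 3), torch.zeros(256, 3)
x_d, u_d = torch.rand(64, 3), torch.rand(64, 4)

loss_r = sum(r.pow(2).mean() for r in ns_residuals(x_r))   # L_r
loss_b = (net(x_b)[:, :3] - u_b).pow(2).mean()             # L_b (Dirichlet, no-slip)
loss_d = (net(x_d) - u_d).pow(2).mean()                    # L_d (sparse solution data)

lambda_b, lambda_d = 1.0, 1.0                              # weights lambda_i in Eq. (4)
loss = loss_r + lambda_b * loss_b + lambda_d * loss_d
loss.backward()                                            # then step Adam / L-BFGS
```

The boundary and data terms enter only through this weighted sum, which is why their satisfaction is weak and why the choice of the weights matters, as discussed next.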
Weak imposition of boundary conditions also creates another issue, one that is fairly common in multi-task learning and multi-objective optimization: choosing the values of loss term coefficients that make up the linear combination. Choosing these weights is a nontrivial exercise that would require calibration via hyper-parameter search, which is not feasible. Wang et al. [27] introduced a heuristic dynamic weighting algorithm to update and select these weights automatically and continuously during the training, to enable convergence to the correct answer. Additionally, there have been several other algorithms proposed to choose the correct scheme for weighting the losses [28, 29, 30]. This continues to be an active area of research in the PINNs community. Finally, methods have been proposed to impose the boundary conditions in a strong manner by manipulating the output formulations [23] or by utilizing operator networks [31]. #### 2.2.2 Difficult Optimization Problem A second problem is the nature of the loss landscape itself, in which a reasonable local minimum is required to be found. As seen in Krishnapriyan et al. [32], Gopakumar et al. [33],Subramanian et al. [34] and Basir and Senocak [35], as well as the author's own experiments, different non-dimensional quantities (e.g. Reynolds number) in the governing equations, the number of dimensions of the problem, the point cloud/discretization, the boundary conditions and the complexity of the solution to be predicted can adversely affect the loss landscape of the neural network training. This makes the optimization challenging and can fail to find an adequate local minimum via a gradient descent-based algorithm. Recently, methods borrowing concepts from optimization theory have shown alternate formulations (e.g. augmented lagrangian method for the loss functions) can aid the convergence properties of the training problem [35, 36]. There have also been efforts towards imposing physical constraints in an integral form [37]. #### 2.2.3 Cost of training Constructing the PDE loss functions involves several backward passes through the network, which is a costly operation. PINNs on average take longer to train than their data-driven counterparts for exactly this reason; the computation graph of a PINN training is much more complex. Moreover, for the Navier-Stokes equations, it has been seen that although the stream function formulation provides better results (due to exact enforcement of continuity), it is costlier in terms of training time. As seen in NVIDIA's experiments [38], it can take several million iterations for the more complex problems to be solved via PINNs. To reduce the cost of training approaches such as automatic differentiation for finite difference formulations [39], or using first-order formulations [40], have been proposed. However, these solutions tend to be mostly problem-specific and do not necessarily generalize well to increased problem complexity and grid definitions. Meta-learning algorithms [41] have also recently gained significance as an effective way to reduce the cost of training neural networks on new tasks, and some of this work has been extended to PINNs [42] as well. ## 3 Important Features for Creating PINN Models In this section, the important techniques used to create PINN-based models cost-effectively are outlined. The PINN models in subsequent sections are created by combining these features that have been found to have an effect on the accuracy of the model and the speed of training. 
### Hybrid Data-Physics Training Compared with the original PINNs method proposed by Raissi et al. [7], a plethora of research has been undertaken to improve and expand on the method [43, 44]. From these developments, the PINNs method has been applied to solve PDE-based problems of increasing complexity and dimensionality. However, the PINNs method is currently not suited for solving engineering problems often encountered in industry in a data-free manner. The optimization issues and cost of model training outlined above make the method, presently, unsuitable for use as a forward solver. To get the best of both worlds, the PINNs method can be augmented with data. Figure 1 depicts the tradeoff between using only data or only physics, and that the sweet spot lies in using both. In addition to the discussed benefit of hybrid data-physics training reducing the cost of generating data, there have been several examples showing that the inclusion of sparse solution data in the training loss function significantly improves the convergence capabilities of the PINNs method [33, 43, 45]. In this paper, we take inspiration from this and use very sparse solution data to solve 3D flow-thermal problems and inform our surrogate models with physics while creating them. Figure 1: The spectrum of data-driven versus physics-informed models. Incorporating governing physics information into the models during creation serves as an effective form of regularization and often helps reduce the amount of data required to achieve the same accuracy levels. ### Modified Learning Rate Annealing As described in Section 2.2.1, the learning rate annealing algorithm has proved to be very effective in mitigating the stiffness of the PINN training problem. However, utilizing this method over a broader spectrum of problems highlighted an issue with stability. The following outlines this issue: As shown in Equation 4 the PINN loss function being optimized takes the form: \[L(\theta)=L_{r}(\theta)+\sum_{i=1}^{M}\lambda_{i}L_{i}(\theta) \tag{6}\] At any training step, the update to the loss coefficient is calculated [27] as \[\hat{\lambda}_{i}=\frac{max_{\theta}|\nabla_{\theta}L_{r}(\theta)|}{|\nabla_{ \theta}L_{i}(\theta)|},i=1,....,M\] It can be seen that if the loss \(L_{i}\) decreases much faster than \(L_{r}\) during the training, the value of \(\hat{\lambda}_{i}\) increases. This then leads to a larger coefficient for that loss term and an associated faster decay of the loss. This instability has the unintended consequence of the optimizer getting stuck in minima where it minimizes the loss \(L_{i}\) very well but is unable to optimize for the loss of the other constraints. The proposed updated algorithm to mitigate this issue is shown in Algorithm 1. The values of thresholds are hyper-parameters, but if the inputs and outputs of the network have been normalized (using standard score normalization, for example), then selecting values between \(10^{-3}\) and \(10^{-5}\) works well in practice. 
```
for update step = 1 to \(N\) do
    if \(L_{i}(\theta)\leq(\text{threshold})_{i}\) then
        \(\hat{\lambda}_{i}=0\)
    else
        Compute \(\hat{\lambda}_{i}\) by
        \(\hat{\lambda}_{i}=\frac{\max_{\theta}|\nabla_{\theta}L_{r}(\theta)|}{|\nabla_{\theta}L_{i}(\theta)|},\; i=1,\ldots,M\)
    end if
    Update weights \(\lambda_{i}\) as
        \(\lambda_{i}=(1-\alpha)\lambda_{i}+\alpha\hat{\lambda}_{i}\)
    Update network parameters via gradient descent:
        \(\theta_{n+1}=\theta_{n}-\eta\nabla_{\theta}L_{r}(\theta)-\eta\sum_{i=1}^{M}\lambda_{i}\nabla_{\theta}L_{i}(\theta)\)
end for
We set the hyper-parameters \(\alpha=0.1\) and \(\eta=10^{-3}\). Threshold values are chosen between \(10^{-3}\) and \(10^{-5}\).
```
**Algorithm 1** Modified Learning Rate Annealing. For a problem with the loss function \[L(\theta)=L_{r}(\theta)+\lambda_{neu}L_{neu}(\theta)+\lambda_{dir}L_{dir}(\theta) \tag{7}\] where \(L_{r}(\theta)\), \(L_{neu}(\theta)\) and \(L_{dir}(\theta)\) correspond to the PDE, Neumann and Dirichlet losses respectively, Figure 2 shows the training curves for the individual losses and the values of the adaptive coefficients when they are calculated using Algorithm 1. It can be seen that when the boundary loss terms in Figures 2(c) and 2(d) go below their thresholds (set to \(10^{-5}\)), the associated coefficients shown in Figures 2(a) and 2(b) start decaying. Following this, the PDE loss starts improving much faster. If the term \(L_{i}(\theta)\) goes above its threshold, it leads to a spike in the adaptive constant \(\lambda_{i}\), which brings it back down again. Figure 2: Adaptive coefficients and loss terms from Equation 7 during training. (a) Evolution of the Dirichlet loss adaptive constant during training. (b) Evolution of the Neumann loss adaptive constant during training. (c) Dirichlet B.C loss term \(L_{dir}(\theta)\) (d) Neumann B.C loss term \(L_{neu}(\theta)\) (e) The PDE loss during training. Once the values of both the adaptive constants start dropping, the PDE loss improves much more rapidly. ### Fourier Feature Embeddings As described in Tancik et al. [46], Artificial Neural Networks suffer from a spectral bias problem. To overcome this, they introduced a Fourier feature embedding that allows models to capture high-frequency components of the solution effectively. This markedly improves the ability of the networks to capture sharp gradients in the solution, which requires the network to learn high-frequency components quickly. Following the implementation in Tancik et al. [46], for an input vector \[\mathbf{v}=\left[\begin{array}{c}x\\ y\\ z\end{array}\right]\] instead of using \(\mathbf{v}\) as the input we compute the Fourier feature mapping: \[\gamma(\mathbf{v})=[\cos(2\pi\mathbf{b}_{1}^{T}\mathbf{v}),\sin(2\pi\mathbf{b}_{1}^{T}\mathbf{v}),\ldots,\cos(2\pi\mathbf{b}_{m}^{T}\mathbf{v}),\sin(2\pi\mathbf{b}_{m}^{T}\mathbf{v})] \tag{8}\] where \(m\) is a hyper-parameter and the frequencies \(\mathbf{b}_{j}\) are selected randomly from an isotropic distribution. Then \(\gamma(\mathbf{v})\) is passed into the network. The Fourier feature embedding was shown to be highly effective in training PINN models by Wang et al. [47], with several results shown for 1D and 2D problems. We extend this implementation to solve 3D flow problems via PINNs and use it to create our hybrid data-PINN surrogate for flow-thermal problems. 
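As a rough illustration of Eq. (8), a PyTorch-style sketch of the embedding is given below; the width \(m\), the frequency scale, and the downstream MLP sizes are illustrative choices rather than the exact configuration used here.

```
import math
import torch

# A rough sketch of the Fourier feature mapping gamma(v) in Eq. (8). The width m
# and frequency scale are illustrative hyper-parameters; the frequencies b_j are
# drawn once from an isotropic Gaussian and kept fixed during training.
class FourierFeatures(torch.nn.Module):
    def __init__(self, in_dim=3, m=64, scale=1.0):
        super().__init__()
        self.register_buffer("B", scale * torch.randn(in_dim, m))

    def forward(self, v):
        # v: (N, in_dim) coordinates; returns (N, 2m) embedded features.
        proj = 2.0 * math.pi * v @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

# The embedding, rather than the raw (x, y, z) coordinates, is fed to the network.
embedding = FourierFeatures(in_dim=3, m=64)
mlp = torch.nn.Sequential(
    torch.nn.Linear(2 * 64, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 4),       # e.g. (u, v, w, p)
)
x = torch.rand(1024, 3)
out = mlp(embedding(x))
```

Because the mapping is differentiable, PDE residuals can still be formed by differentiating the network output through the embedding with respect to the raw coordinates.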
In addition, there have been other proposed solutions for the spectral bias problem in PDE applications, such as the Siren activation [48], Fourier Neural Operators [49], and weighting schemes derived from the theory of Neural Tangent Kernels (NTK) [28]. ## 4 Experiments and Results In this section, some example problems are solved using PINNs. Sections 4.1 and 4.2 solve the 3D incompressible Navier-Stokes equations through a data-assisted approach, where very sparse solution data is provided in the domain. Section 4.3 uses a hybrid data-PINN approach to generate a surrogate model for a given design space of a heat sink with a chip underneath it, undergoing cooling via forced convection. Then, given certain constraints on the running metrics of the chip-sink setup (such as the maximum temperature in the chip), the optimal set of parameters in the Design of Experiments (DoE) space that satisfies the constraints while maximizing an objective is obtained via rapid design optimization using the created surrogate. Details on the hyper-parameters used in the model training for each experiment can be found in Appendix Section A.1. ### Forward Solve of 3D Stenosis Problem Flow through an idealized 3D stenosis geometry at a physiologically relevant Reynolds number is demonstrated; see Figure 3 for details of the geometry. To the authors' best knowledge, flow through a stenosis has been solved using PINNs only at a low Reynolds number of approximately 6 (based on inlet diameter) [23]. Flow through irregular geometries has been solved at a higher Re (500), but in 2D [50]. In this paper, the stenosis problem is solved at Re 150 and in 3 dimensions. As discussed in Section 2.2, at higher Reynolds numbers the standard PINN implementation struggles to achieve a good local minimum, and this was confirmed here using a standard PINN implementation. To alleviate this issue, a data-assisted approach was used in which sporadic solution data is added throughout the domain of interest (depicted on a slice in Figure 4). The data was given in the form of concentric rings at the radii depicted on the cut plane. #### 4.1.1 Problem Setup The flow through the stenosis is obtained by solving the steady-state incompressible Navier-Stokes equations: \[\nabla\cdot\mathbf{u}=0, \tag{9}\] \[(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla\mathbf{p}+\nu\nabla\cdot(\nabla\mathbf{u}), \tag{10}\] subject to \[\mathbf{u}(x_{b1})=g(x_{b1}),\,x_{b1}\in\partial\Omega_{1},\] \[\mathbf{u}(x_{b2})=0,\,x_{b2}\in\partial\Omega_{2},\] \[\nabla u_{i}(x_{b3})\cdot\mathbf{n}=0,\,x_{b3}\in\partial\Omega_{3},i=1,2,3\] \[p(x_{b3})=0,\,x_{b3}\in\partial\Omega_{3}\] where \(g(x_{b1})\) represents a profiled inlet velocity for the stenosis. \(\rho\) and \(\nu\) are the density and kinematic viscosity of the fluid (air), respectively, and \(\mathbf{u}\) and \(p\) are the velocity vector and pressure, respectively. In the present problem, a parabolic inlet profile with a peak velocity of 0.15 m/s is prescribed. The ratio of the throat area to the inlet area is 0.36. The output of the network, \(G_{\theta}\), has four components: \[G_{\theta}=\left[\begin{array}{c}u\\ v\\ w\\ p\end{array}\right]\] #### 4.1.2 Results Figure 5 compares the velocity magnitude returned by the trained PINN model and Altair AcuSolve® through a 2D slice of the stenosis. As can be seen, the essential features of the flow are captured. Figures 6(a) and 6(b) compare the velocity and pressure profiles along the centerline of the stenosis. 
The differences between the line plots are attributed to differences in mesh density between the two cases. The CFD mesh was an unstructured mesh of around 63,000 nodes with a boundary layer, while the point cloud used with the PINN consisted of around 87,000 randomly distributed points, with finer sampling near the boundary. Figure 3: Visual description of the stenosis problem. Figure 4: Stenosis diagram (not to scale) showing planes where solution data is provided randomly. Another approach that was investigated to solve the 3D stenosis problem was that of using "continuity planes" as defined by Hennigh et al. [38] in their experiments solving 3D flow problems using PINNs. In this approach, the authors added constraints on the mass flow through a plane and added these constraints to the loss function. While this approach was found to aid the convergence of the PINN model to the correct solution, several issues were found to exist with this method: 1. It is difficult to generate continuity planes for complex geometries such as those shown in Sections 4.2 and 4.3. 2. The quality of the solution from the PINN depends heavily on the integration scheme used to calculate the mass flow rate, and on the fineness of the points on the continuity plane. Figure 5: Solution Comparison. (a) Altair AcuSolve® Solution to stenosis problem (b) PINN forward solve to stenosis problem. Figure 6: Centerline solution comparisons: PINN versus Altair AcuSolve® (a) Total Velocity Comparison (b) Pressure Comparison Hence, in the next section, randomly and sparsely distributed data was used in the domain to aid convergence. ### Flow over a Printed Circuit Board (PCB) #### 4.2.1 Problem Setup Flow over a PCB consisting of a heat sink, chip, and capacitor is solved at a Reynolds number of approximately 1500, based on the length of the PCB and with air as the fluid. The geometry and flow orientation are shown in Figure 7. This represents a forced convection problem common in electronics design and is a challenging problem for PINNs because it is in 3D, with a complex geometry and large gradients involved. Let \(D\) represent the set of all nodes in the domain. To train the PINN model, the CFD solution was first computed. Next, 1% of the nodes in the solution domain were randomly selected (call this set \(D_{1}\subset D\)). This is a selection of roughly 2,300 node points (from a mesh of roughly 230,000 nodes). The experiment was then divided into three parts (a minimal sketch of this sparse-data setup is given after the list): 1. **Case A**: A network was trained on the CFD solution at all points in \(D_{1}\) (i.e., \(\forall\mathbf{x}\in D_{1}\)), following which the physics of the problem was **enforced at every node location** in \(D\) (i.e., \(\forall\mathbf{x}\in D\)) by including the physics-based loss in the training, and then the network was asked to predict the solution in the entire domain \(D\). 2. **Case B**: A network was trained on the CFD solution at the points contained in \(D_{1}\) (i.e., \(\forall\mathbf{x}\in D_{1}\)) **without any physics enforcement** and then asked to predict the solution in the entire domain (i.e., \(\forall\mathbf{x}\in D\)). 3. **Case C**: Finally, the same experiment as Case A was repeated but with a new set \(D_{2}\) consisting of only 0.2% of the nodes in \(D\), which were again randomly selected. 
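The sketch below illustrates how the sparse supervision sets and the loss terms for Cases A-C can be assembled; the node counts, network, and CFD arrays are placeholders, and only a continuity residual is shown as a stand-in for the full physics-based loss built from the governing equations given next.

```
import torch

# A minimal sketch of the sparse supervision used in Cases A-C. Node counts,
# the network, and the CFD arrays are placeholders, and only the continuity
# residual is shown as a stand-in for the full physics-based loss.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 4))
nodes = torch.rand(230_000, 3)     # coordinates of all nodes, the set D
cfd = torch.rand(230_000, 4)       # CFD solution (u, v, w, p) at every node in D

def pick_subset(frac):
    # Randomly select a fraction of the nodes, e.g. 1% for D1 or 0.2% for D2.
    idx = torch.randperm(nodes.shape[0])[: int(frac * nodes.shape[0])]
    return nodes[idx], cfd[idx]

def physics_loss(x):
    # Placeholder physics regularizer (continuity residual only).
    x = x.requires_grad_(True)
    uvw = net(x)[:, :3]
    div = sum(torch.autograd.grad(uvw[:, d].sum(), x, create_graph=True)[0][:, d]
              for d in range(3))
    return div.pow(2).mean()

x1, y1 = pick_subset(0.01)         # D1: 1% of the nodes (Cases A and B)
x2, y2 = pick_subset(0.002)        # D2: 0.2% of the nodes (Case C)

data_loss_D1 = (net(x1) - y1).pow(2).mean()
loss_case_A = data_loss_D1 + physics_loss(nodes[:4096])    # data on D1 + physics on D (subsampled batch here)
loss_case_B = data_loss_D1                                 # data on D1 only, no physics
loss_case_C = (net(x2) - y2).pow(2).mean() + physics_loss(nodes[:4096])
```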
The governing equations for this problem are the Reynolds Averaged Navier-Stokes Equations: \[\nabla\cdot\mathbf{u}=0, \tag{11}\] \[(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla\mathbf{p}+(\nu+\nu_{t })\nabla\cdot(\nabla\mathbf{u}), \tag{12}\] \(\rho\), \(\nu\), and \(\nu_{t}\) represent the density, kinematic viscosity and eddy viscosity of the system. The inflow is set to a constant velocity of 0.15 m/s and the outflow is set to the stress-free condition. It should be noted that in the current study, eddy viscosity is obtained directly from the CFD solver using the Spalart-Allmaras turbulence model. Turbulence modeling in PINNs is a field of active research with a few articles investigating it [38, 51, 52], and it is a future work to effectively incorporate turbulence models into PINN-based models. Figure 7: Geometry of a PCB with a chip, sink, and capacitor assembly. #### 4.2.2 Results Figure 8 shows the ANN predictions for the different cases. It is evident that by using sparse data, the network is able to better converge toward the CFD solution (shown in Figure 8d) using the physics-based regularizer. However, as evident in Figure 8c, the network failed to converge to a physical solution when the amount of data provided was insufficient, highlighting the importance of a certain amount and fineness of the required data. Table 1 shows the Mean Squared Errors (MSE) for each experiment, for the velocity and the pressure, taking the CFD solution as the ground truth. The MSE is calculated as \[\text{MSE}=\sqrt{\frac{\sum_{i=1}^{N_{nodes}}(x_{i,pred}-x_{i,truth})^{2}}{N_{ nodes}}} \tag{13}\] Figure 9 shows the fraction of node points for each case that are above a certain Mean Absolute Error (MAE) value. The lower the fraction, the better the solution. We note from Figure 9 that even for Case A, there are outliers to the solution where the MAE is relatively high, indicating poor convergence to the solution at those nodes. The convergence of PINNs toward the correct solution for highly nonlinear systems is an open and challenging problem, especially in 3 dimensions. Nonetheless, these results open exciting possibilities about using physics-based regularizers in the future and represent a step forward for solving the 3D Navier-Stokes Equations at high Reynolds Numbers using PINNs. Furthermore, data generation costs to create surrogate models using PINNs can be greatly reduced by providing solution data on a coarser grid and solving the physics on a finer grid. ### Surrogate Modeling and Design Optimization of a Heat Sink In this section, the PINNs surrogate modeling technique is demonstrated for rapid design optimization of a heat sink assembly. The assembly utilizes a chip that generates heat and a fin-type heatsink on top to dissipate heat into the surrounding fluid. The chip-heatsink assembly is cooled by forced convection of air. The geometry and setup are shown in Figure 10. The goal is to optimize the heat sink design and the running conditions of the assembly, subject to feasibility constraints placed on chip temperature and channel pressure drop. This represents a common design optimization problem in electronics cooling. 
More specifically, if \(\dot{Q}_{src}\) is the total power being generated by the chip, the optimization problem can be framed as follows: \[\text{Maximize }\dot{Q}_{src}\text{ s.t} \tag{14}\] \[\text{Pressure drop across the heat sink channel ($\Delta$P)}\leq 11\text{ Pa} \tag{15}\] \[\text{Maximum temperature anywhere on the chip }\leq 350\text{ K} \tag{16}\] The pressure at the outflow is fixed to 0 Pa, and the pressure drop across the heat sink channel is hence calculated as the average pressure over the inflow of the channel: \[\Delta\text{P}=\overline{\text{P}_{\text{inlet}}} \tag{17}\] The term to be maximized \(\dot{Q}_{src}\) is also one of the design axes and an input parameter(P3) to the network. The design variables that can be altered for this present optimization are: \begin{table} \begin{tabular}{|c|c|c|c|} \hline Case & Description & MSE (Velocity) & MSE (Pressure) \\ \hline Case A & 1\% domain data + physics & **0.0135** & **0.0037** \\ \hline Case B & 1\% domain data only & 0.0222 & 0.00472 \\ \hline Case C & 0.2\% domain data + physics & 0.0245 & 0.00545 \\ \hline \end{tabular} \end{table} Table 1: Mean Squared Errors (MSE) for velocity and pressure for cases A,B, and C * Inflow Velocity * Fin height * Source term in the chip (has to be maximized) The upper and lower limits of each of the design variables mentioned above are summarized in Table 2. The inlet velocity is set based on typical values found in literature [53] and corresponds to a Reynolds number range of Re 10,300 to Re 24,000. The governing equations solved for this conjugate heat transfer problem are the same as in Section 4.2 for the flow problem, subject to no-slip boundary conditions on the chip-heatsink assembly with a Figure 8: Neural Network (NN) prediction with and without physics, for very coarse data supplied on a plane through the domain. (a) **Case A:** Trained on 1% data and physics (b) **Case B:** Trained on 1% solution data only (c) **Case C:** Trained on 0.2% data and physics (d) True Solution from CFD solver Figure 9: Node fractions of points above a certain MAE value, for each case. (a) MAE of Velocity (b) MAE of Pressure variable freestream inflow velocity, causing forced convection. As in Section 4.2, the eddy viscosities are taken from the CFD solutions. The energy equation in both fluid and solid reads: \[k\nabla^{2}T+\dot{q}_{src}-\rho s\mathbf{u}\cdot\nabla T=0, \tag{18}\] where T represents the temperature, \(\dot{q}_{src}\) represents the volumetric source term, and \(k\) and \(s\) are the conductivity and specific heat of the material respectively. At the interface between the fluid and solid domain (fluid-sink, sink-chip, and fluid-chip) the interface condition is applied by minimizing the following loss terms as shown in [54]; \[L_{flux}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(f_{d_{1}}(\mathbf{u}(x_{i})) \cdot\mathbf{n}_{d_{1}}+f_{d_{2}}(\mathbf{u}(x_{i}))\cdot\mathbf{n}_{d_{2}})^ {2}, \tag{19}\] \[L_{val}=\frac{1}{N_{int}}\sum_{i=1}^{N_{int}}(\mathbf{u}_{d_{j}}(x_{i})- \overline{\mathbf{u}_{d_{j}}(x_{i})})^{2}, \tag{20}\] where \(\mathbf{n}_{d1}=-\mathbf{n}_{d2}\) and j=1,2. The average is taken over j. \(d_{1}\) and \(d_{2}\) refer to the domains on both sides of the interface, and \(N_{int}\) is the number of node points on the interface. #### Model Creation and Evaluation The sampling of the above Design of Experiments (DoE) space is done via an efficient space-sampling method to optimally fill the DoE space [55]. 
The sampled DoE space for training is shown in Figure 11, along with the location of points at which the surrogate model is tested. The reader is referred to Section A.3.1 for a complete tabular description of the DoE space. Note that for this \begin{table} \begin{tabular}{|c|c|c|c|} \hline Parameter No. & Parameter Name & Lower Value & Upper Value \\ \hline P1 & Inflow Velocity (\(m/s\)) & 3 & 7 \\ \hline P2 & Fin Height (\(mm\)) & 15 & 23 \\ \hline P3 & Source Term (\(W\)) & 30 & 60 \\ \hline \end{tabular} \end{table} Table 2: Design of Experiments space axes ranges for the heat sink design optimization Figure 10: Basic problem geometry and flow depiction example, we use full field data at each DoE point to train the surrogate as opposed to a small fraction of it (like in Section 4.1 and 4.2), as the objective is to get a surrogate that is as accurate as possible. Table 3 shows the MSE for the predictions by the hybrid data-PINN model at the test points, calculated w.r.t the CFD solution at the same mesh resolution. Also shown is the MSE for predictions by a standard data-driven NN without leveraging key features described in Section 3, which are used extensively in industry for surrogate modeling applications. The hybrid data-PINN model outperforms the standard data-driven NN for all predictions. Section A.4 shows some more qualitative comparisons between test point results from the PINNs model versus standard data-driven NNs. #### Solving the Design Optimization Problem The surrogate model is used to solve the design optimization problem described in Equations 14-16. The goal is to show that the surrogate model can accurately predict the solution in the entire DoE space by returning a solution that satisfies all applied constraints while maximizing the objective. The created surrogate models are interfaced with an optimizer that solves a generic constrained optimization problem via an iterative process, described thoroughly in Appendix Section A.2. Each snapshot in Figure 12 represents a design iteration, and each particle represents a point in the DoE space. Each axis of a plot represents a parameter axis. Figure 11: Training and testing points in the 3D DoE space Figure 12: Design optimization iterations of the heat sink problem (a) Iteration 0 (b) Iteration 5 (c) Iteration 10 For the given constraints, the particles converge to a much smaller region of the DoE space. The design point returned by the optimizer in this case is: **Inflow Velocity**: 6 m/s **Chip Power**: 50W **Fin Height**: 17mm To test that the result satisfies the constraints, the returned design point is solved by the Altair AcuSolve(r), at the same mesh fineness and another mesh with 10x fineness, keeping all essential mesh features such as boundary layers and refinement zones. As shown in Figures 13 and 14, not only does the given design point satisfy the design constraints, but the finer mesh solution is very close to the coarser solution, and a little tweaking of the design point using CFD with the higher resolution mesh will yield a highly optimized solution to the designer. This optimization is done several orders of magnitude faster than if using traditional CFD, and the reader is referred to Appendix Section A.3.2 for a quantitative description of the same. ## 5 Conclusions and Future Work In this paper, Physics Informed Neural Networks were used to solve the 3D Navier-Stokes equations in a data-assisted setting, for complex geometries with realistic physical parameters. 
It was shown that even for problems being solved at high Reynolds Numbers in 3D, PINNs can be trained to produce a Figure 13: Temperature plot through a slice at the rear of the sink (from bottom to top). The comparison between the high-fidelity solution on the fine mesh and the PINN prediction on a coarser mesh shows good agreement. \begin{table} \begin{tabular}{l c c c} \hline \hline & **Velocity MSE** & **Pressure MSE** & **Temperature MSE** \\ \hline **Test Point 1** & & & \\ Hybrid data-PINN Model & **0.65** & **2.62** & **1.81** \\ Standard Data-driven NN & 0.93 & 2.83 & 2.05 \\ \hline **Test Point 2** & & & \\ Hybrid data-PINN Model & **0.39** & **1.19** & **2.67** \\ Standard Data-driven NN & 0.58 & 1.42 & 2.97 \\ \hline **Test Point 3** & & & \\ Hybrid data-PINN Model & **0.76** & **3.31** & **1.86** \\ Standard Data-driven NN & 1.10 & 3.51 & 2.18 \\ \hline **Test Point 4** & & & \\ Hybrid data-PINN Model & **0.33** & **0.99** & **2.87** \\ Standard Data-driven NN & 0.52 & 1.19 & 3.15 \\ \hline \hline \end{tabular} \end{table} Table 3: MSE for 4 test points shown in Table 5. The PINN-based model consistently outperforms the standard data-driven NN on all test points. good solution in the presence of very sparse solution data randomly scattered in the solution domain. However, using too little solution data causes the model to converge to an unphysical solution. PINNs were also demonstrated for 3D flow-thermal surrogate modeling and the PINN-based surrogates consistently outperformed standard data-driven NN on test case examples. The PINN surrogates were also interfaced with a design optimization algorithm to solve a constrained optimization problem. This optimization returned a design point that when solved with high-fidelity CFD was consistent with the requirements of the design constraints, highlighting the suitability of the method to produce surrogates for 3D flow-thermal surrogate modeling problems. There are multiple avenues through which the work shown in this paper can be improved. Research has to be done to improve convergence and offer guarantees of PINNs training toward the local minimums that represent physical solutions, in a data-free manner. This will further reduce data requirements for the creation of physically consistent PINN models which can greatly improve their surrogate modeling capabilities, by reducing the cost of training and improving predictive accuracy. Further work needs to be done to investigate turbulence modeling in PINNs so that high Reynolds number problems can be solved in a data-free manner. There are also many themes like uncertainty quantification [56, 57, 58] of surrogates and effective surrogate modeling of different geometries [59, 60, 61] that are active fields of research in PINNs, which could be included in future works that build on these results. ## Acknowledgements This research did not receive any specific grant from funding agencies in the public or not-for-profit sectors, or from any external commercial entities. The authors gratefully acknowledge the use of Altair Engineering Inc.'s computing facilities for running experiments. ## CRediT authorship contribution statement **Saakaar Bhatnagar:** Formal Analysis, Investigation, Methodology, Software, Validation, Writing-original draft. **Andrew Comerford:** Conceptualization, Investigation, Project Administration, Supervision, Writing- review and editing **Araz Banaeizadeh:** Conceptualization, Project Administration, Supervision, Writing- review and editing
2309.13167
Flow Factorized Representation Learning
A prominent goal of representation learning research is to achieve representations which are factorized in a useful manner with respect to the ground truth factors of variation. The fields of disentangled and equivariant representation learning have approached this ideal from a range of complimentary perspectives; however, to date, most approaches have proven to either be ill-specified or insufficiently flexible to effectively separate all realistic factors of interest in a learned latent space. In this work, we propose an alternative viewpoint on such structured representation learning which we call Flow Factorized Representation Learning, and demonstrate it to learn both more efficient and more usefully structured representations than existing frameworks. Specifically, we introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations. Each latent flow is generated by the gradient field of a learned potential following dynamic optimal transport. Our novel setup brings new understandings to both \textit{disentanglement} and \textit{equivariance}. We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models. Furthermore, we demonstrate that the transformations learned by our model are flexibly composable and can also extrapolate to new data, implying a degree of robustness and generalizability approaching the ultimate goal of usefully factorized representation learning.
Yue Song, T. Anderson Keller, Nicu Sebe, Max Welling
2023-09-22T20:15:37Z
http://arxiv.org/abs/2309.13167v1
# Flow Factorized Representation Learning ###### Abstract A prominent goal of representation learning research is to achieve representations which are factorized in a useful manner with respect to the ground truth factors of variation. The fields of disentangled and equivariant representation learning have approached this ideal from a range of complimentary perspectives; however, to date, most approaches have proven to either be ill-specified or insufficiently flexible to effectively separate all realistic factors of interest in a learned latent space. In this work, we propose an alternative viewpoint on such structured representation learning which we call Flow Factorized Representation Learning, and demonstrate it to learn both more efficient and more usefully structured representations than existing frameworks. Specifically, we introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations. Each latent flow is generated by the gradient field of a learned potential following dynamic optimal transport. Our novel setup brings new understandings to both _disentanglement_ and _equivariance_. We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models. Furthermore, we demonstrate that the transformations learned by our model are flexibly composable and can also extrapolate to new data, implying a degree of robustness and generalizability approaching the ultimate goal of usefully factorized representation learning. ## 1 Introduction Developing models which learn useful representations of data has become an increasingly important focus in the machine learning community [5; 55]. For example, Large Language Models such as GPT [9] rely on an extensive pre-training phase to learn valuable representations, enabling quick finetuning on a diversity of tasks. However, a precise definition of what makes an ideal representation is still debated. One line of work has focused on 'disentanglement' of the underlying ground truth generative factors [5; 35; 13]. In general, the definition of 'disentanglement' often refers to learning and controlling statistically independent factors of variation [5; 36]. Over the years, many disentanglement methods have been proposed, including axis-aligned single-dimensional manipulation [35; 13], linear multi-dimensional traversals [78; 77; 90; 66], and, more recently, dynamic non-linear vector-based traversals [84; 79]. Although these methods have been met with significant success (and even linked to single-neuron brain activity [37; 91]), there are known theoretical limitations which make them ill-specified, including the presence of topological defects [7]. This has limited their deployment beyond toy settings. Another line of work has focused on developing representations which respect symmetries of the underlying data in their output space [15; 36]. Specifically, equivariant representations are those for which the output transforms in a known predictable way for a given input transformation. They can be seen to share many similarities with disentangled representations since an object undergoing a transformation which preserves its identity can be called a symmetry transformation [36]. 
Compared with disentanglement methods, equivariant networks are much more strictly defined, allowing for significantly greater control and theoretical guarantees with respect to the learned transformation [16; 50; 73; 20; 39]. However, this restriction also limits the types of transformations to which they may be applied. For example, currently only group transformations are supported, limiting real-world applicability. To avoid this caveat, some recent attempts propose to learn general but approximate equivariance from disentangled representations [49; 45; 79]. In this work, we provide an alternative viewpoint at the intersection of these two fields of work which we call Flow Factorized Representation Learning. Fig. 1 depicts the high-level illustration of our method. Given \(k\) different transformations \(p_{k}(\mathbf{x}_{t}|\mathbf{x}_{0})\) in the input space, we have the corresponding latent probabilistic path \(\int_{\mathbf{z}_{0},\mathbf{z}_{t}}q(\mathbf{z}_{0}|\mathbf{x}_{0})q_{k}(\mathbf{z}_{t}|\mathbf{z}_{0 })p(\mathbf{x}_{t}|\mathbf{z}_{t})\) for each of the transformations. Each latent flow path \(q_{k}(\mathbf{z}_{t}|\mathbf{z}_{0})\) is generated by the gradient field of some learned potentials \(\nabla u^{k}\) following fluid mechanical dynamic Optimal Transport (OT) [4]. Our framework allows for novel understandings of both _disentanglement_ and _equivariance_. The definition of disentanglement refers to the distinct set of tangent directions \(\nabla u^{k}\) that follow the OT paths to generate latent flows for modeling different factors of variation. The concept of equivariance in our case means that the two probabilistic paths, _i.e.,_\(p_{k}(\mathbf{x}_{t}|\mathbf{x}_{0})\) in the image space and \(\int_{\mathbf{z}_{0},\mathbf{z}_{t}}q(\mathbf{z}_{0}|\mathbf{x}_{0})q_{k}(\mathbf{z}_{t}|\mathbf{z}_{0 })p(\mathbf{x}_{t}|\mathbf{z}_{t})\) in the latent space, would eventually result in the same distribution of transformed data. We build a formal generative model of sequences and integrate the above latent probability evolution as condition updates of the factorized sequence distribution. Based on the continuity equation, we derive a proper flow of probability density for the time evolution of both the prior and posterior. To perform inference, we approximate the true posterior of latent variables and train the parameters as a Variational Autoencoder (VAE) [47]. When the transformation type \(k\) is not observed (_i.e.,_ available as a label), we treat \(k\) as another latent variable and incorporate its posterior into our framework by learning it from sequences. Extensive experiments and thorough analyses have been conducted to show the effectiveness of our method. For example, we demonstrate empirically that our representations are usefully factorized, allowing flexible composability and generalization to new datasets. Furthermore, we show that our methods are also approximately equivariant by demonstrating that they commute with input transformations through the learned latent flows. Ultimately, we see these factors combine to yield the highest likelihood on the test set in each setting. Code is publicly available at [https://github.com/KingJamesSong/latent-flow](https://github.com/KingJamesSong/latent-flow). ## 2 The generative model In this section, we first introduce our generative model of sequences and then describe how we perform inference over the latent variables of this model in the next section. 
### Flow factorized sequence distributions The model in this work defines a distribution over sequences of observed variables. We further factorize this distribution into \(k\) distinct components by assuming that each observed sequence is generated by one of the \(k\) separate flows of probability mass in latent space. Since in this work we Figure 1: Illustration of our flow factorized representation learning: at each point in the latent space we have a distinct set of tangent directions \(\nabla u^{k}\) which define different transformations we would like to model in the image space. For each path, the latent sample evolves to the target on the potential landscape following dynamic optimal transport. model discrete sequences of observations \(\bar{\mathbf{x}}=\{\mathbf{x}_{0},\mathbf{x}_{1}\ldots,\mathbf{x}_{T}\}\), we aim to define a joint distribution with a similarly discrete sequence of latent variables \(\bar{\mathbf{z}}=\{\mathbf{z}_{0},\mathbf{z}_{1}\ldots,\mathbf{z}_{T}\}\), and a categorical random variable \(k\) describing the sequence type (observed or unobserved). Explicitly, we assert the following factorization of the joint distribution over \(T\) timesteps: \[p(\bar{\mathbf{x}},\bar{\mathbf{z}},k)=p(k)p(\mathbf{z}_{0})p(\mathbf{x}_{0}|\mathbf{z}_{0})\prod_{ t=1}^{T}p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)p(\mathbf{x}_{t}|\mathbf{z}_{t}). \tag{1}\] Here \(p(k)\) is a categorical distribution defining the transformation type, \(p(\mathbf{x}_{t}|\mathbf{z}_{t})\) asserts a mapping from latents to observations with Gaussian noise, and \(p(\mathbf{z}_{0})=\mathcal{N}(0,1)\). A plate diagram of this model is depicted through the solid lines in Fig. 2. ### Prior time evolution To enforce that the time dynamics of the sequence define a proper flow of probability density, we compute the conditional update \(p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)\) from the continuous form of the continuity equation: \(\partial_{t}p(\mathbf{z})=-\nabla\cdot(p(\mathbf{z})\nabla\psi^{k}(\mathbf{z}))\), where \(\psi^{k}(\mathbf{z})\) is the \(k\)'th potential function which advects the density \(p(\mathbf{z})\) through the induced velocity field \(\nabla\psi^{k}(\mathbf{z})\). Considering the discrete particle evolution corresponding to this density evolution, \(\mathbf{z}_{t}=f(\mathbf{z}_{t-1},k)=\mathbf{z}_{t-1}+\nabla_{z}\psi^{k}(\mathbf{z}_{t-1})\), we see that we can derive the conditional update from the continuous change of variables formula [69; 11]: \[p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)=p(\mathbf{z}_{t-1})\Big{|}\frac{df(\mathbf{z}_{t-1},k)}{d \mathbf{z}_{t-1}}\Big{|}^{-1} \tag{2}\] In this setting, we see that the choice of \(\psi\) ultimately determines the prior on the transition probability in our model. As a minimally informative prior for random trajectories, we use a diffusion equation achieved by simply taking \(\psi^{k}=-D_{k}\log p(\mathbf{z}_{t})\). Then according to the continuity equation, the prior evolves as: \[\partial_{t}p(\mathbf{z}_{t})=-\nabla\cdot\Big{(}p(\mathbf{z}_{t})\nabla\psi\Big{)}=D _{k}\nabla^{2}p(\mathbf{z}_{t}) \tag{3}\] where \(D_{k}\) is a constant coefficient that does not change over time. The density evolution of the prior distribution thus follows a constant diffusion process. We set \(D_{k}\) as a learnable parameter which is distinct for each \(k\). 
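As an illustration of the discrete particle update \(\mathbf{z}_{t}=\mathbf{z}_{t-1}+\nabla_{z}\psi^{k}(\mathbf{z}_{t-1})\) underlying this flow, a minimal sketch of a gradient-field step is shown below; the potential networks and dimensions are placeholders, and the density bookkeeping of Eqs. (2)-(3) is omitted.

```
import torch

# A minimal sketch of the discrete update z_t = z_{t-1} + grad_z psi^k(z_{t-1}).
# The potential networks and dimensions are illustrative placeholders, and the
# density evolution of Eqs. (2)-(3) is not tracked here.
latent_dim, num_transformations = 16, 3
potentials = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(latent_dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
    for _ in range(num_transformations)
)

def flow_step(z, k):
    z = z.detach().requires_grad_(True)
    psi = potentials[k](z).sum()
    # create_graph=True would be needed to backpropagate through the flow during training.
    return z + torch.autograd.grad(psi, z)[0]

z = torch.randn(8, latent_dim)     # a batch of initial latents z_0
for t in range(5):                 # five discrete steps along transformation k = 0
    z = flow_step(z, k=0)
```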
## 3 Flow factorized variational autoencoders To perform inference over the unobserved variables in our model, we propose to use a variational approximation to the true posterior, and train the parameters of the model as a VAE. To do this, we parameterize an approximate posterior for \(p(\mathbf{z}_{0}|\mathbf{x}_{0})\), and additionally parameterize a set of Figure 2: Depiction of our model in plate notation. (Left) Supervised, (Right) Weakly-supervised. White nodes denote latent variables, shaded nodes denote observed variables, solid lines denote the generative model, and dashed lines denote the approximate posterior. We see, as in a standard VAE framework, our model approximates the initial one-step posterior \(p(\mathbf{z}_{0}|\mathbf{x}_{0})\), but additionally approximates the conditional transition distribution \(p(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)\) through dynamic optimal transport over a potential landscape. functions \(u^{k}(\mathbf{z})\) to approximate the true latent potentials \(\psi^{*}\). First, we will describe how we do this in the setting where the categorical random variable \(k\) is observed (which we call the supervised setting), then we will describe the model when \(k\) is also latent and thus additionally inferred (which we call the weakly supervised setting). ### Inference with observed \(k\) (supervised) When \(k\) is observed, we define our approximate posterior to factorize as follows: \[q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)=q(\mathbf{z}_{0}|\mathbf{x}_{0})\prod_{t=1}^{T}q(\mathbf{z}_ {t}|\mathbf{z}_{t-1},k) \tag{4}\] We see that, in effect, our approximate posterior only considers information from element \(\mathbf{x}_{0}\); however, combined with supervision in the form of \(k\), we find this is sufficient for the posterior to be able to accurately model full latent sequences. In the limitations section we discuss how the posterior could be changed to include all elements \(\{\mathbf{x}_{t}\}_{0}^{T}\) in future work. Combing Eq. (4) with Eq. (1), we can derive the following lower bound to model evidence (ELBO): \[\begin{split}\log p(\bar{\mathbf{x}}|k)&=\mathbb{E}_{q_ {\theta}(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\left[\log\frac{p(\bar{\mathbf{x}},\bar{\bm {z}}|k)}{q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\frac{q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}{ p(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\right]\\ &\geq\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\left[ \log\frac{p(\bar{\mathbf{x}}|\bar{\mathbf{z}},k)p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar {\mathbf{x}},k)}\right]\\ &=\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\left[\log p (\bar{\mathbf{x}}|\bar{\mathbf{z}},k)\right]+\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|\bar {\mathbf{x}},k)}\left[\log\frac{p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)}\right] \end{split} \tag{5}\] Substituting and simplifying, Eq. 
(5) can be re-written as \[\begin{split}\log p(\bar{\mathbf{x}}|k)&\geq\sum_{t=0}^ {T}\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|k)}\big{[}\log p(\mathbf{x}_{t}|\mathbf{z}_{t},k )\big{]}-\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|k)}\big{[}\mathrm{D}_{\text{KL} }\left[q_{\theta}(\mathbf{z}_{0}|\mathbf{x}_{0})||p(\mathbf{z}_{0})\right]\big{]}\\ &\quad-\sum_{t=1}^{T}\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}}|k)}\big{[} \mathrm{D}_{\text{KL}}\left[q_{\theta}(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)||p(\mathbf{z}_{t }|\mathbf{z}_{t-1},k)\right]\big{]}\end{split} \tag{6}\] We thus see that we have an objective very similar to that of a traditional VAE, except that our posterior and our prior now both have a time evolution defined by the conditional distributions. ### Inference with latent \(k\) (weakly supervised) When \(k\) is not observed, we can treat it as another latent variable, and simultaneously perform inference over it in addition to the sequential latent \(\bar{\mathbf{z}}\). To achieve this, we define our approximate posterior and instead factorize it as \[q(\bar{\mathbf{z}},k|\bar{\mathbf{x}})=q(k|\bar{\mathbf{x}})q(\mathbf{z}_{0}|\mathbf{x}_{0})\prod_ {t=1}^{T}q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k) \tag{7}\] Following a similar procedure as in the supervised setting, we derive the new ELBO as \[\begin{split}\log p(\bar{\mathbf{x}})&=\mathbb{E}_{q_ {\theta}(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}\left[\log\frac{p(\bar{\mathbf{x}},\bar{\mathbf{ z}},k)}{q(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}\frac{q(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}{p( \bar{\mathbf{z}},k|\bar{\mathbf{x}})}\right]\\ &\geq\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}},k|\bar{\mathbf{z}})}\left[ \log\frac{p(\bar{\mathbf{x}}|\bar{\mathbf{z}},k)p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar {\mathbf{x}},k)}\frac{p(k)}{q(k|\bar{\mathbf{x}})}\right]\\ &=\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}},k|\bar{\mathbf{x}})}\left[\log p (\bar{\mathbf{x}}|\bar{\mathbf{z}},k)\right]+\mathbb{E}_{q_{\theta}(\bar{\mathbf{z}},k| \bar{\mathbf{x}})}\left[\log\frac{p(\bar{\mathbf{z}}|k)}{q(\bar{\mathbf{z}}|\bar{\mathbf{x}}, k)}\right]+\mathbb{E}_{q_{\gamma}(k|\bar{\mathbf{x}})}\left[\log\frac{p(k)}{q(k|\bar{\mathbf{x}})}\right] \end{split} \tag{8}\] We see that, compared with Eq. (5), only one additional KL divergence term \(\mathrm{D}_{\text{KL}}\left[q_{\gamma}(k|\bar{\mathbf{x}})||p(k)\right]\) is added. The prior \(p(k)\) is set to follow a categorical distribution, and we apply the Gumbel-SoftMax trick [43] to allow for categorical re-parameterization and sampling of \(q_{\gamma}(k|\bar{\mathbf{x}})\). ### Posterior time evolution As noted, to approximate the true generative model which has some unknown latent potentials \(\psi^{k}\), we propose to parameterize a set of potentials as \(u^{k}(\mathbf{z},t)=\texttt{MLP}([\mathbf{z};t])\) and train them through the ELBOs above. Again, we use the continuity equation to define the time evolution of the posterior, and thus we can derive the conditional time update \(q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)\) through the change of variables formula. 
Given the function of the sample evolution \(\mathbf{z}_{t}=g(\mathbf{z}_{t-1},k)=\mathbf{z}_{t-1}+\nabla_{\mathbf{z}}u^{k}\), we have: \[q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)=q(\mathbf{z}_{t-1})\Big{|}\frac{dg(\mathbf{z}_{t-1},k)}{d \mathbf{z}_{t-1}}\Big{|}^{-1} \tag{9}\] Converting the above continuous equation to the discrete setting and taking the logarithm of both sides gives the normalizing-flow-like density evolution of our posterior: \[\log q(\mathbf{z}_{t}|\mathbf{z}_{t-1},k)=\log q(\mathbf{z}_{t-1})-\log|1+\nabla_{\mathbf{z}}^ {2}u^{k}| \tag{10}\] The above relation can be equivalently derived from the continuity equation (_i.e.,_\(\partial_{t}q(\mathbf{z})=-\nabla\cdot\big{(}q(\mathbf{z})\nabla u^{k}\big{)}\)). Notice that we only assume the initial posterior \(q(\mathbf{z}_{0}|\mathbf{x}_{0})\) follows a Gaussian distribution. For future timesteps, we do not pose any further assumptions and just let the density evolve according to the sample motion. ### Ensuring optimal transport of the posterior flow As an inductive bias, we would like each latent posterior flow to follow the OT path. To accomplish this, it is known that when the gradient \(\nabla u^{k}\) satisfies certain PDEs, the evolution of the probability density can be seen to minimize the \(L_{2}\) Wasserstein distance between the source distribution and the distribution of the target transformation. Specifically, we have: **Theorem 1** (Benamou-Brenier Formula [4]).: _For probability measures \(\mu_{0}\) and \(\mu_{1}\), the \(L_{2}\) Wasserstein distance can be defined as_ \[W_{2}(\mu_{0},\mu_{1})^{2}=\min_{\rho,v}\bigg{\{}\int\int\frac{1}{2}\rho(x,t) |v(x,t)|^{2}\,dx\,dt\bigg{\}} \tag{11}\] _where the density \(\rho\) and the velocity \(v\) satisfy:_ \[\frac{d\,\rho(x,t)}{dt}=-\nabla\cdot(v(x,t)\rho(x,t)),\;v(x,t)=\nabla u(x,t) \tag{12}\] The optimality condition of the velocity is given by the generalized Hamilton-Jacobi (HJ) equation (_i.e.,_\(\partial_{t}u+\nicefrac{{1}}{{2}}||\nabla u||^{2}\leq 0\)). The detailed derivation is deferred to the supplementary. We thus encourage our potential to satisfy the HJ equation with an external driving force as \[\frac{\partial}{\partial t}u^{k}(\mathbf{z},t)+\frac{1}{2}||\nabla_{\mathbf{z}}u^{k}( \mathbf{z},t)||^{2}=f(\mathbf{z},t)\;\;\;\text{subject to}\;\;\;f(\mathbf{z},t)\leq 0 \tag{13}\] Here we use another MLP to parameterize the external force \(f(\mathbf{z},t)\) and realize the negativity constraint by setting \(f(\mathbf{z},t)=-\texttt{MLP}([\mathbf{z};t])^{2}\). Notice that here we take the external force as learnable MLPs simply because we would like to obtain a flexible negativity constraint. The MLP architecture is set the same for both \(u(\mathbf{z},t)\) and \(f(\mathbf{z},t)\). To achieve the PDE constraint, we impose a Physics-Informed Neural Network (PINN) [67] loss as \[\mathcal{L}_{HJ}=\frac{1}{T}\sum_{t=1}^{T}\Big{(}\frac{\partial}{\partial t}u^ {k}(\mathbf{z},t)+\frac{1}{2}||\nabla_{\mathbf{z}}u^{k}(\mathbf{z},t)||^{2}-f(\mathbf{z},t) \Big{)}^{2}+||\nabla u^{k}(\mathbf{z}_{0},0)||^{2} \tag{14}\] where the first term restricts the potential to obey the HJ equation, and the second term limits \(u(\mathbf{z}_{t},t)\) to return no update at \(t{=}0\), therefore matching the initial condition. 
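A minimal sketch of the constraint in Eqs. (13)-(14) for a single transformation type is given below; the MLP sizes, the batch of latent samples, and the time draws are illustrative.

```
import torch

# A minimal sketch of the HJ regularizer in Eqs. (13)-(14) for one transformation
# type k. The potential u(z, t) and force f(z, t) are small MLPs over [z; t]; all
# sizes and sample draws are illustrative.
latent_dim = 16
u_net = torch.nn.Sequential(torch.nn.Linear(latent_dim + 1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
f_net = torch.nn.Sequential(torch.nn.Linear(latent_dim + 1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def hj_loss(z, t):
    # z: (N, latent_dim) latent samples, t: (N, 1) times in [0, 1].
    z = z.detach().requires_grad_(True)
    t = t.detach().requires_grad_(True)
    zt = torch.cat([z, t], dim=-1)
    u = u_net(zt)
    f = -f_net(zt) ** 2                                   # f(z, t) <= 0 by construction
    du_dt = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    du_dz = torch.autograd.grad(u.sum(), z, create_graph=True)[0]
    residual = du_dt + 0.5 * du_dz.pow(2).sum(dim=-1, keepdim=True) - f

    # Initial-condition term: the gradient field should vanish at t = 0.
    z0 = z.detach().requires_grad_(True)
    u0 = u_net(torch.cat([z0, torch.zeros_like(t)], dim=-1))
    du0_dz = torch.autograd.grad(u0.sum(), z0, create_graph=True)[0]
    return residual.pow(2).mean() + du0_dz.pow(2).sum(dim=-1).mean()

loss_hj = hj_loss(torch.randn(128, latent_dim), torch.rand(128, 1))
loss_hj.backward()
```

In training, this regularizer would be added to the ELBO-based objectives of Section 3 so that the learned gradient fields stay close to the dynamic optimal transport paths.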
## 4 Experiments This section starts with the experimental setup, followed by the main qualitative and quantitative results, then proceeds to discussions of composability and generalization to unseen data, and ends with results on complex real-world datasets. ### Setup **Datasets.** We evaluate our method on two widely-used datasets in generative modeling, namely MNIST [54] and Shapes3D [10]. For MNIST [54], we manually construct three simple transformations: Scaling, Rotation, and Coloring. For Shapes3D [10], we use the four self-contained transformations Floor Hue, Wall Hue, Object Hue, and Scale. Beyond these two common benchmarks, we further apply our method to Falcon3D and Isaac3D [61], two complex _large-scale_ and _real-world_ datasets that contain sequences of different transformations. Falcon3D consists of indoor 3D scenes in different lighting conditions and viewpoints, while Isaac3D is a dataset of various robot arm movements in dynamic environments. **Baselines.** We mainly compare our method with SlowVAE [49] and Topographic VAE (TVAE) [45]. Both baselines can achieve approximate equivariance. Specifically, TVAE introduces learned latent operators, while SlowVAE enforces the Laplacian transition prior \(p(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\prod_{i}\frac{\alpha}{2}\exp(-\alpha|z_{t,i}-z_{t-1,i}|)\) to encourage temporally sparse latent transitions. **Metrics.** We use the approximate equivariance error \(\mathcal{E}_{k}\) and the log-likelihood of transformed data \(\log p(\mathbf{x}_{t})\) as the evaluation protocols. The equivariance error is defined as \(\mathcal{E}_{k}=\sum_{t=1}^{T}|\mathbf{x}_{t}-\texttt{Decode}(\mathbf{z}_{t})|\) where \(\mathbf{z}_{t}=\mathbf{z}_{0}+\sum_{t=1}^{T}\nabla_{\mathbf{z}}u^{k}\). For TVAE, the latent operator is changed to \(\texttt{Roll}(\mathbf{z}_{0},t)\). For the unsupervised disentanglement baselines [35; 46] and SlowVAE [49], we carefully select the latent dimension and tune the interpolation range to attain the traversal direction and range that correspond to the smallest equivariance error. Since the vanilla VAE does not have a corresponding learned transformation in the latent space, we simply set \(\nabla_{\mathbf{z}}u^{k}=0\) and take it as a lower-bound baseline. For all methods, the results are reported based on \(5\) runs. 
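For concreteness, the equivariance error defined above can be computed along the following lines; the encoder, decoder, and potential networks here are placeholders standing in for the trained model components.

```
import torch

# An illustrative computation of the equivariance error defined above. The
# encoder, decoder, and potential below are placeholders for the trained model.
latent_dim = 16
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, latent_dim))
decoder = torch.nn.Sequential(torch.nn.Linear(latent_dim, 3 * 64 * 64), torch.nn.Unflatten(1, (3, 64, 64)))
potential = torch.nn.Sequential(torch.nn.Linear(latent_dim + 1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def equivariance_error(x_seq):
    # x_seq: (T+1, B, 3, 64, 64) ground-truth sequence; returns sum_t |x_t - Decode(z_t)|.
    z = encoder(x_seq[0])
    err = 0.0
    for t in range(1, x_seq.shape[0]):
        z = z.detach().requires_grad_(True)
        t_in = torch.full((z.shape[0], 1), t / (x_seq.shape[0] - 1.0))
        u = potential(torch.cat([z, t_in], dim=-1)).sum()
        z = z + torch.autograd.grad(u, z)[0]              # z_t = z_{t-1} + grad_z u^k
        err = err + (x_seq[t] - decoder(z)).abs().sum()
    return err

x_seq = torch.rand(9, 4, 3, 64, 64)                       # T = 8 steps, batch of 4
print(equivariance_error(x_seq).item())
```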
Notice that the above equivariance error is defined in the output space. Another reasonable evaluation metric would instead measure the error in the latent space as \(\mathcal{E}_{k}=\sum_{t=1}^{T}|\texttt{Encode}(\mathbf{x}_{t})-\mathbf{z}_{t}|\). We consider the first metric more comprehensive, since it additionally involves the decoder in the evaluation.

### Main Results

**Qualitative results.** Figs. 3 and 4 display decoded images of the latent evolution on MNIST [54] and Shapes3D [10], respectively. On both datasets, our latent flow performs the target transformation precisely during evolution while leaving other traits of the image unaffected. In particular, for the weakly-supervised setting, the decoded images (_i.e.,_ the bottom rows of Figs. 3 and 4) still reproduce the given transformations well; it is even hard to visually tell them apart from the images generated under the supervised setting. This demonstrates the effectiveness of the weakly-supervised setting of our method, and implies that, qualitatively, our latent flow learns the sequence transformations well under both supervised and weakly-supervised settings.

**Quantitative results.** Tables 1 and 2 compare the equivariance error and the log-likelihood on MNIST [54] and Shapes3D [10], respectively. Our method learns latent flows that model the transformations precisely, achieving the best performance across datasets under different supervision settings. Specifically, our method outperforms the previous best baseline by \(69.74\) on average in the equivariance error and by \(32.58\) in the log-likelihood on MNIST. The performance gain is also consistent on Shapes3D: our method surpasses the second-best baseline by \(291.70\) in the average equivariance error and by \(120.42\) in the log-likelihood. In the weakly-supervised setting, our method also achieves very competitive performance, trailing the supervised setting only slightly, by \(6.22\) in average equivariance error on MNIST and by \(67.88\) on Shapes3D.

### Discussion

**Extrapolation: switching transformations.** In Fig.
5 we demonstrate that, empowered by our method, it is possible to switch latent transformation categories mid-way through the latent evolution \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Supervision?**} & \multicolumn{4}{c|}{**Equariance Error (\(\downarrow\))**} & \multirow{2}{*}{**Log-likelihood (\(\uparrow\))**} \\ \cline{3-3} \cline{5-6} & & **Scaling** & & **Rotation** & **Coloring** \\ \hline VAE [47] & No (✗) & 1275.31\(\pm\)1.89 & 1310.72\(\pm\)2.19 & 1368.92\(\pm\)2.33 & -2206.17\(\pm\)1.83 \\ \(\beta\)-**VAE**[35] & No (✗) & 741.58\(\pm\)4.57 & 751.32\(\pm\)5.22 & 808.16\(\pm\)5.03 & -2224.67\(\pm\)2.35 \\ **FactorVAE**[46] & No (✗) & 659.71\(\pm\)4.89 & 632.44\(\pm\)5.76 & 662.18\(\pm\)5.26 & -2209.33\(\pm\)2.47 \\ **SlowVAE**[49] & Weak (✗) & 461.59\(\pm\)5.37 & 447.46\(\pm\)5.46 & 398.12\(\pm\)4.83 & -2197.68\(\pm\)2.39 \\ **TVAE**[45] & Yes (✗) & 505.19\(\pm\)2.77 & 493.28\(\pm\)3.37 & 451.25\(\pm\)2.76 & -2181.13\(\pm\)1.87 \\ **PoFlow**[79] & Yes (✗) & 234.78\(\pm\)2.91 & 231.42\(\pm\)2.98 & 240.57\(\pm\)2.58 & -2145.03\(\pm\)2.01 \\ **Ours** & Yes (✗) & **185.42\(\pm\)2.35** & **153.54\(\pm\)3.10** & **158.57\(\pm\)2.95** & **-2112.45\(\pm\)1.57** \\ **Ours** & Weak (✗) & 193.84\(\pm\)2.47 & 157.16\(\pm\)3.24 & 165.19\(\pm\)2.78 & -2119.94\(\pm\)1.76 \\ \hline \hline \end{tabular} \end{table} Table 1: Equivariance error \(\mathcal{E}_{k}\) and log-likelihood \(\log p(\mathbf{x}_{t})\) on MNIST [54]. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Supervision?**} & \multicolumn{4}{c|}{**Equariance Error (\(\downarrow\))**} & \multirow{2}{*}{**Log-likelihood (\(\uparrow\))**} \\ \cline{3-3} \cline{5-6} & & **Floor Hue** & **Wall Time** & **Object Hue** & **Scale** \\ \hline VAE [47] & No (✗) & 6924.63\(\pm\)8.92 & 7746.37\(\pm\)8.77 & 4383.54\(\pm\)9.26 & 2609.59\(\pm\)7.41 & -11784.69\(\pm\)4.87 \\ \(\beta\)-**VAE**[35] & No (✗) & 2243.95\(\pm\)12.48 & 22279.23\(\pm\)13.97 & 2188.73\(\pm\)12.61 & 2037.94\(\pm\)11.72 & -11924.83\(\pm\)5.64 \\ **FactorVAE**[46] & No (✗) & 1985.75\(\pm\)13.26 & 1876.41\(\pm\)11.93 & 1902.83\(\pm\)12.27 & 1657.32\(\pm\)11.05 & -11802.17\(\pm\)5.69 \\ **SlowVAE**[49] & Weak (✗) & 1247.36\(\pm\)12.49 & 1314.86\(\pm\)11.41 & 11022.28\(\pm\)12.17 & 1058.74\(\pm\)10.96 & -11674.89\(\pm\)5.74 \\ **TVAE**[45] & Yes (✗) & 1225.47\(\pm\)9.82 & 126.23\(\pm\)9.54 & 261.79\(\pm\)9.86 & 11424.01\(\pm\)9.37 & -11475.48\(\pm\)5.18 \\ **PoFlow**[79] & Yes (✗) & 885.46\(\pm\)10.37 & 916.71\(\pm\)10.49 & 912.48\(\pm\)9.66 & 924.39\(\pm\)10.05 & -11335.84\(\pm\)4.95 \\ **Ours** & Yes (✗) & **613.29\(\pm\)8.93** & **653.45\(\pm\)9.48** & **605.79\(\pm\)8.63** & **599.71\(\pm\)9.34** & **41215.42\(\pm\)5.71** \\ **Ours** & Weak (✗) & 690.84\(\pm\)9.57 & 717.74\(\pm\)10.65 & 681.59\(\pm\)9.02 & 665.85\(\pm\)9.57 & -11279.61\(\pm\)5.89 \\ \hline \hline \end{tabular} \end{table} Table 2: Equivariance error \(\mathcal{E}_{k}\) and log-likelihood \(\log p(\mathbf{x}_{t})\) on Shapes3D [10]. and maintain coherence. That is, we perform \(\mathbf{z}_{t}=\mathbf{z}_{t-1}+\nabla_{\mathbf{z}}u^{k}\) for \(t\leq\nicefrac{{T}}{{2}}\) and then change to \(\mathbf{z}_{t}=\mathbf{z}_{t-1}+\nabla_{\mathbf{z}}u^{j}\) where \(j\neq k\) for \(t>\nicefrac{{T}}{{2}}\). As can be seen, the factor of variation immediately changes after the transformation type is switched. Moreover, the transition phase is smooth and no other attributes of the image are influenced. 
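The switching schedule above (and the superposition test described next) reduces to choosing which potential drives each latent Euler step; the sketch below illustrates both, assuming a list `potentials` holding the \(K\) learned potentials \(u^{k}\).

```python
import torch

def flow_step(z, u_net, t):
    """One latent Euler step z <- z + nabla_z u(z, t)."""
    z_in = z.detach().requires_grad_(True)
    t_in = torch.full_like(z_in[..., :1], float(t))
    grad = torch.autograd.grad(u_net(z_in, t_in).sum(), z_in)[0]
    return (z + grad).detach()

def switch_transformations(z0, potentials, k, j, T):
    """Evolve with u^k for t <= T/2, then with u^j (j != k) for t > T/2."""
    z, traj = z0, [z0]
    for t in range(1, T + 1):
        z = flow_step(z, potentials[k] if t <= T // 2 else potentials[j], t)
        traj.append(z)
    return traj

def superpose_transformations(z0, potentials, T):
    """Evolve with the summed field: z_t = z_{t-1} + sum_k nabla_z u^k."""
    z, traj = z0, [z0]
    for t in range(1, T + 1):
        z = z + sum(flow_step(z, u, t) - z for u in potentials)
        traj.append(z)
    return traj
```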
**Extrapolation: superposing transformations.** Besides switching transformations, our method also supports applying different transformations simultaneously, _i.e.,_ consistently performing \(\mathbf{z}_{t}=\mathbf{z}_{t-1}+\sum_{k=1}^{K}\nabla_{\mathbf{z}}u^{k}\) during the latent flow process. Fig. 6 presents such exemplary visualizations of superposing two and all transformations simultaneously. In each case, the latent evolution corresponds to simultaneous smooth variations of multiple image attributes. This indicates that our method also generalizes well to superposing different transformations. Notice that we only apply single and separate transformations in the training stage. Switching or superposing transformations in the test phase can thus be understood as an extrapolation test that measures the generalization ability of the learned equivariance to novel compositions.

**Equivariance generalization to new data.** We also test whether the learned equivariance holds for Out-of-Distribution (OoD) data. To verify this, we validate our method on a test dataset that is different from the training set and therefore unseen to the model. Fig. 7 displays the exemplary visualization results of the VAE trained on MNIST [54] but evaluated on dSprites [59]. Although the reconstruction quality is poor, the learned equivariance is still clearly effective, as each transformation still operates as expected: scaling, rotation, and coloring transformations from top to bottom, respectively.

### Results on Complex Real-world and Large-scale Datasets

Tables 3 and 4 compare the equivariance error of our method and the representative baselines on Falcon3D and Isaac3D, respectively. Notice that the values are much larger than on the previous datasets due to the increased image resolution. Our method still outperforms the other baselines by a large margin and achieves a reasonable equivariance error.

Figure 5: Exemplary visualization of switching transformations during the latent sample evolution.

Figure 6: Examples of combining different transformations simultaneously during the latent evolution.

Fig. 8 displays the qualitative comparisons of our method against other baselines. Our method can precisely control the image transformations through our latent flows. _Overall, the above results demonstrate that our method can go beyond the toy setting and can be further applied to more complex real-world scenarios._ More visualization results of exemplary latent flows are provided in the supplementary material.

## 5 Related work

**Disentangled representation learning.** The idea of learning disentangled representations dates back to factorizing non-redundant input patterns [74], but modern interest was initiated by InfoGAN [13] and \(\beta\)-VAE [35]. InfoGAN [13] achieves disentanglement by maximizing the mutual information between a subset of latent dimensions and observations, while \(\beta\)-VAE [35] induces the factorized posterior \(q(\mathbf{z})\) by penalizing the Total Correlation (TC) through an extra hyper-parameter \(\beta>1\) controlling the strength of the KL divergence. Following InfoGAN, many attempts have been made to facilitate the discovery of semantically meaningful traversal directions through regularization [33; 42; 89; 34; 100; 66; 77; 90; 98; 84; 99; 78; 62]. The follow-up research of \(\beta\)-VAE mainly explored different methods to factorize the aggregated posterior [22; 25; 52; 46; 12; 44; 96; 23; 76; 58; 80; 28].
More recently, some works proposed to discover meaningful directions of diffusion models in the bottleneck of denoising networks [53; 64; 95; 41]. The previous literature mainly considers disentanglement as learning different transformations per dimension or per linear direction. Our method generalizes this concept to learning a distinct tangent bundle \(\nabla u^{k}\) that moves every latent sample via dynamic OT. The method most similar to ours is that of [79]. In [79], the authors also apply the gradient of a potential function to move the latent code. However, their potentials are restricted to obey wave equations, which do not correspond to OT theory. Also, they do not consider the posterior evolution but instead use the loss \(||\mathbf{z}_{t}-\mathtt{Encode}(\mathbf{x}_{t})||^{2}\) to match the latent codes. By contrast, we propose a unified probabilistic generative model that encompasses the posterior flow following dynamic OT, the flow-like time evolution, and different supervision settings.

**Equivariant neural networks.** A function is said to be an equivariant map if it commutes with a given transformation, _i.e.,_ \(T^{\prime}[f(x)]=f(T[x])\), where \(T\) and \(T^{\prime}\) represent operators in different domains. Equivariance has been considered a desirable inductive bias for deep neural networks, as this property can preserve geometric symmetries of the input space [38; 75; 56; 57; 1]. Analytically equivariant networks typically enforce explicit symmetry to group transformations in neural networks [16; 17; 68; 93; 92; 85; 31; 39]. Another line of research proposed to directly learn approximate equivariance from data [21; 18; 49; 20; 45]. Our framework re-defines approximate equivariance by matching the latent probabilistic flow to the actual path of the given transformation in the image space.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline **Methods** & **Robot X-move** & **Robot Y-move** & **Camera Height** & **Object Scale** & **Lighting Intensity** & **Lighting Y-dir** & **Object Color** & **Wall Color** \\ \hline **TVAE**[45] & 8441.65 & 8348.23 & 8495.31 & 8251.34 & 8291.70 & 8741.07 & 8456.78 & 8512.09 \\ **PoFlow**[79] & 6572.19 & 6489.35 & 6319.82 & 6188.59 & 6517.40 & 6712.06 & 7056.98 & 6343.76 \\ **Ours** & **8605.72** & **3999.33** & **4719.27** & **4809.78** & **4225.34** & **4998.84** & **5814.97** & **3870.601** \\ \hline \hline \end{tabular} \end{table} Table 4: Equivariance error (\(\downarrow\)) on Isaac3D [61].

Figure 7: Equivariance generalization to unseen OoD input data. Here the model is trained on MNIST [54] but the latent flow is tested on dSprites [59].

**Optimal transport in deep learning.** There is a vast literature on OT theory and applications in various fields [87; 88]. Here we mainly highlight the relevant applications in deep learning. The pioneering work of [19] proposed a light-speed implementation of the Sinkhorn algorithm for fast computation of entropy-regularized Wasserstein distances, which opened the way for many differentiable Sinkhorn-algorithm-based applications [32; 29; 14; 27; 51]. In generative modeling, the Wasserstein distance is often used to minimize the discrepancy between the data distribution and the model distribution [2; 81; 72; 65]. Inspired by the fluid-mechanical interpretation of OT [4], some normalizing flow methods [69; 24; 48] considered regularizing the velocity fields to satisfy the HJ equation, thus matching the dynamic OT plan [94; 30; 83; 63; 60].
Our method applies PINNs [67] to directly model generalized HJ equations in the latent space and uses the gradient fields of learned potentials to generate latent flows, which also aligns to the theory of dynamic fluid mechanical OT. ## 6 Conclusion In this paper, we introduce Flow Factorized Representation Learning which defines a set of latent flow paths that correspond to sequences of different input transformations. The latent evolution is generated by the gradient flow of learned potentials following dynamic optimal transport. Our setup re-interprets the concepts of both _disentanglement_ and _equivariance_. Extensive experiments demonstrate that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously achieving smaller equivariance error. Furthermore, we show that the learned latent transformations generalize well, allowing for flexible composition and extrapolation to new data. ## 7 Limitations For flexibility and efficiency, we use PINN [67] constraints to model the HJ equation. However, such PDE constraints are approximate and not strictly enforced. Other PDE modeling approaches include accurate neural PDE solvers [40; 8; 70] or other improved PINN variants such as competitive PINNs [97] and robust PINNs [3]. Also, when infering with observed \(k\), we change the posterior from \(q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)\) to \(q(\bar{\mathbf{z}}|\mathbf{x}_{0},k)\) because we assume \(k\) contains sufficient information of the whole sequence. To keep the posterior definition of \(q(\bar{\mathbf{z}}|\bar{\mathbf{x}},k)\), we need to make \(q(\mathbf{z}_{t})\) also a function of \(\mathbf{x}_{t}\). This can be achieved either by changing the potential to \(u(\mathbf{z}_{t-1},\mathbf{x}_{t},t-1)\) or modifying the external driving force to \(f(\mathbf{z}_{t-1},\mathbf{x}_{t},t-1)\). Nonetheless, we see these modifications would make the model less flexible than our current formulations as the element \(\mathbf{x}_{t}\) might be needed during inference. ## Acknowledgments and Disclosure of Funding This work was supported by the MUR PNRR project FAIR (PE00000013) funded by the NextGenerationEU, by the PRIN project CREATIVE (Prot. 2020ZSL9F9), by the EU H2020 project AI4Media (No. 951911), and by the Bosch Center for Artificial Intelligence. Figure 8: Qualitative comparison of our method against TVAE and PoFlow on Falcon3D and Isaac3D.
2309.05711
Thermal Hall Effect and Neutral Spinons in a Doped Mott Insulator
In the pseudogap phase of the cuprate, a thermal Hall response of neutral objects has been recently detected experimentally, which continuously persists into the antiferromagnetic insulating phase. In this work, we study the transport properties of neutral spinons as the elementary excitation of a doped Mott insulator, which is governed by a mutual Chern-Simons topological gauge structure. We show that such a chiral spinon as a composite of an $S=1/2$ spin sitting at the core of a supercurrent vortex, can contribute to the thermal Hall effect, thermopower, and Hall effect due to its intrinsic transverse (cyclotron) motion under internal fictitious fluxes. In particular, the magnitudes of the transport coefficients are phenomenologically determined by two basic parameters: the doping concentration and $T_c$, quantitatively consistent with the experimental measurements including the signs and qualitative temperature and magnetic field dependence. Combined with the predictions of the spinon longitudinal transport properties, including the Nernst and spin Hall effects, a phenomenological description of the pseudogap phase is established as characterized by the neutral spinon excitations, which eventually become "confined" with an intrinsic superconducting transition at $T_c$. Finally, within this theoretical framework, the "order to order" phase transition between the superconducting and antiferromagnetic insulating phases are briefly discussed, with the thermal Hall monotonically increasing into the latter.
Zhi-Jian Song, Jia-Xin Zhang, Zheng-Yu Weng
2023-09-11T18:00:03Z
http://arxiv.org/abs/2309.05711v1
# Thermal Hall Effect and Neutral Spinons in a Doped Mott Insulator ###### Abstract In the pseudogap phase of the cuprate, a thermal Hall response of neutral objects has been recently detected experimentally, which continuously persists into the antiferromagnetic insulating phase. In this work, we study the transport properties of neutral spinons as the elementary excitation of a doped Mott insulator, which is governed by a mutual Chern-Simons topological gauge structure. We show that such a chiral spinon as a composite of an \(S=1/2\) spin sitting at the core of a supercurrent vortex, can contribute to the thermal Hall effect, thermopower, and Hall effect due to its intrinsic transverse (cyclotron) motion under internal fictitious fluxes. In particular, the magnitudes of the transport coefficients are phenomenologically determined by two basic parameters: the doping concentration and \(T_{c}\), quantitatively consistent with the experimental measurements including the signs and qualitative temperature and magnetic field dependence. Combined with the predictions of the spinon longitudinal transport properties, including the Nernst and spin Hall effects, a phenomenological description of the pseudogap phase is established as characterized by the neutral spinon excitations, which eventually become "confined" with an intrinsic superconducting transition at \(T_{c}\). Finally, within this theoretical framework, the "order to order" phase transition between the superconducting and antiferromagnetic insulating phases are briefly discussed, with the thermal Hall monotonically increasing into the latter. ## I Introduction Transport measurements serve as a powerful tool to gain insight into the nature of elementary excitations in the cuprates [1; 2]. The anomalous signals detected in these measurements are crucial for a systematic understanding at the microscopic level. For instance, the Hall number in the cuprates indicates a discontinuity at a doping \(p^{*}\), which corresponds to the doping concentration at which the pseudogap (PG) phase terminates [3; 4]. Within the PG phase, when \(p<p^{*}\), the Hall number aligns with the doping density \(p\), which seemingly contrasts with free systems where the large Fermi surface encloses an area of \(1+p\) as indicated experimentally at \(p>p^{*}\). Previous studies have hypothesized that this discrepancy might stem from Fermi surface reconstruction due to antiferromagnetic (AFM) order with \(Q=(\pi,\pi)\)[5] or in the absence of the explicit translation symmetry breaking due to strong correlations [6; 7]. Furthermore, a linear magnetic-field dependent thermal Hall signal [8; 9; 10] in the family of the cuprate compounds has been recently observed at \(p<p*\), extending to the AFM insulating phase. It is important to underscore that the experimental signal exhibits no effect for the magnetic field that is aligned parallel to the copper-oxide plane [9], which implies that the thermal Hall effect is originated from an orbit effect. Prior theoretical studies [11; 12] suggest that in the case of the cuprates, magnons on a square lattice will fail to yield a nonzero thermal Hall conductivity when subjected to either the Dzyaloshinskii-Moriya spin interaction or the localized formation of skyrmion defects. Additionally, the phenomenological descriptions involve neutral excitations like spinons [12; 13; 14] and phonons [15; 16; 17] have been proposed. Phenomenologically a universal behavior with a scaling law was proposed [18]. 
Besides the cuprates, the sizeable thermal Hall effect has been also found in spin-ice Tb\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\)[19] and spin liquid RuCl\({}_{3}\)[20] as well as the Kitaev materials [21]. Moreover, both integer [22; 23] and fractional [24] quantum Hall systems (QHE & FQHE) offer a unique perspective, where the thermal Hall effect finds its explanation in the conformal field theory (CFT) of chiral edge modes [25]. The transport measurements provide a direct probe into the contribution of the excitations that dominate the PG phase. The neutral spinon in the PG phase is an essential elementary excitation in the strongly correlated theories of doped Mott insulators. Thus, if and how the neutral spinon participates in the thermal Hall and other transport phenomena become important issues that should be addressed very seriously. In this paper, we shall make a self-consistent study of spinon transport within the framework of the phase-string theory [26; 27]. The spinon predicted in this theory is different from either the slave-boson or slave-fermion approaches [1; 28; 29; 30; 31] due to the so-called phase-string effect [32; 33; 34] hidden in the \(t\)-\(J\) model upon doping, which is a topological Berry phase replacing the usual Fermi sign structure in the restricted Hilbert space. Specifically: 1. Each spinon undergoes a cyclotron motion due to an intrinsic Berry curvature caused by the phase-string effect [cf. Fig. 1(a)]. The time-reversal symmetry is retained in the absence of external magnetic fields as the opposite spins see the opposite fictitious fluxes with the opposite chiral edge currents [cf. Fig. 1(b-c)]. The system is distinct from the usual topological insulator [35; 36] in that all the spinons with opposite chiralities are RVB-paired in the ground state. 2. Each spinon is always locked with a charge-current vortex. Since an external perpendicular magnetic field must be balanced by the net (polarized) vortices, then unpaired (free) _chiral_ spinons must be generated from the RVB condensate, which contributes to the novel transport in the PG phase. 3. The cyclotron motion of the spinon and its locking with the charge vortex [cf. Fig. 1(a)] are mathematically characterized by a mutual Chern-Simons gauge structure, which will contribute to unconventional transport phenomena, including the thermopower effect [cf. Fig. 1(d)], thermal Hall [cf. Fig. 1(e)], and Hall effect [cf. Fig. 1(f)]. We shall investigate the above spinon transport by using a semiclassical approach based on the mutual Chern-Simons gauge theory. The calculated results are essentially determined by the basic parameters of doping concentration as well as \(T_{c}\), with the magnitudes comparable with the experimental measurements [3; 4; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. Also more physical implications arising from such neutral spinons as the elementary excitations will be briefly discussed in Sec. IV.4, which can explain the Nernst effect [39; 40; 41] and the scaling relationship between \(T_{c}\) and the spin resonance energy as observed in neutron measurements [42; 43; 44; 45]. Especially an "order-to-order" phase transition between the superconducting and AFM insulating phases can also naturally emerge within the same theoretical framework. 
## II Mutual Chern-Simons gauge theory of the doped Mott insulator ### Topological Gauge Structure Phase-string theory of the doped Mott insulator is based on a nontrivial sign structure identified in both the \(t\)-\(J\) model [32; 33; 34] and the Hubbard model [46; 47], in which the conventional Fermi statistics of the electrons are replaced by the phase-string sign structure in the restricted Hilbert space of the lower (upper) Hubbard band. In the phase-string theory, such a sign structure is further precisely mapped to a topological gauge structure. The corresponding low-energy description involves the mutual Chern-Simons (MCS) gauge interaction between the spin and charge degrees of freedom [48; 49; 50; 51; 52], governed by the lattice Euclidean Lagrangian \(L=L_{h}+L_{s}+L_{\text{MCS}}\) as follows: \[L_{h} = \sum_{I}h_{I}^{\dagger}\left[\partial_{\tau}-iA_{0}^{s}(I)-iA_{0 }^{e}(I)+\mu_{h}\right]h_{I} \tag{1}\] \[-t_{h}\sum_{i\alpha}\left[h_{I}^{\dagger}h_{I+\alpha}e^{i\mathbf{A}_ {\alpha}^{s}(I)+i\mathbf{A}_{\alpha}^{s}(I)}+\text{h.c.}\right],\] \[L_{s} = \sum_{i\sigma}b_{i\sigma}^{\dagger}\left[\partial_{\tau}-i \sigma A_{0}^{h}\left(i\right)+\lambda_{b}+\frac{1}{2}g\mu_{B}B^{e}\sigma \right]b_{i\sigma}\] (2) \[-J_{s}\sum_{i\alpha\sigma}\left[b_{i\sigma}^{\dagger}b_{i+\alpha,\bar{\sigma}}^{i\sigma\mathbf{A}_{\alpha}^{h}(i)}+\text{h.c.}\right],\] \[L_{\text{MCS}} = \frac{i}{\pi}\sum_{i}\epsilon^{\mu\nu\lambda}A_{\mu}^{s}(I) \partial_{\nu}A_{\lambda}^{h}(i), \tag{3}\] in which \(L_{h}\) and \(L_{s}\) describe the dynamics of the matter fields--bosonic spinless holon \(h_{I}\), and bosonic neutral spinon \(b_{i\sigma}\) (with \(\bar{\sigma}\equiv-\sigma\)), respectively. The indices \(\alpha\) and \(\beta\) denote only the spatial components: \(x,y\), and the indices \(\mu=(\tau,\mathbf{r})\) label the full time-space vector, with Figure 1: (a) A schematic illustration of a chiral spinon with a mutual Chern-Simons gauge structure: a neutral \(S=1/2\) spin (black arrow) sitting at the core of an induced supercurrent vortex (red ring), which itself sees an intrinsic magnetic field \(B^{h}\) to undergo a cyclotron motion (purple circle) self-consistently; (b) The semi-classical behavior of the chiral spinons under a uniform \(B^{h}\) in the bulk and edges of the sample, and (c) the resultant vortex edge currents with opposite chiralities. Note that in the equilibrium state no time-reversal symmetry is broken as the opposite spins see the opposite sign of \(B^{h}\) with the compensation of the opposite cyclotron motion and edge currents (red and purple with arrows); (d) Temperature gradient \(\nabla T\) breaks the equilibrium between the opposite edges, causing net vortex current \(\mathbf{J}^{\text{vor}}\) and electric field in the sample, which contributes to a thermopower effect; (e) External perpendicular magnetic field \(B^{e}\) (blue arrow) and the in-plane temperature gradient \(\nabla T\) break the balance of the chirality of spinon-vortices to generate a net thermal current \(\mathbf{J}^{Q}\), which contributes to a thermal Hall effect; (f) An applied charge current \(\mathbf{J}^{e}\) induces a force \(\mathbf{E}^{\text{vor}}\) on the spinon-vortex, which generates a vortex current \(\mathbf{J}^{\text{vor}}\) in the presence of an external magnetic field \(B^{e}\) to produce a net electric field \(E^{e}\). The resulting Hall effect gives rise to the Hall number precisely equal to the doping concentration \(\delta\) at low temperature. 
the indices \(i\), \(I\) representing the two-dimensional (2D) square lattice site and its dual lattice site, respectively. The external magnetic vector potential \(\mathbf{A}^{e}\) with the field strength \(B^{e}\) perpendicular to the 2D plane, interacts with the charge (holon) degree of freedom through the orbit effect (setting the charge equal to one) in Eq. (1), and the spin degree of freedom via a Zeeman effect in Eq. (2). Here the holon field \(h\) and spinon field \(b\) minimally couple to the gauge fields, \(A^{s}_{\mu}\) and \(A^{h}_{\mu}\), respectively, with the MCS topological structure given in Eq. (3). It implies that the holon (spinon) number \(n^{h}_{I}\) (\(n^{b}_{i}\)) will determine the gauge-field strength of \(A^{h}_{\mu}\) (\(A^{s}_{\mu}\)) as if each matter particle (holon or spinon) is attached to a fictitious \(\pi\) flux tube visible only by a different species. This can be directly seen by considering the following equations of motion for \(A^{s}_{0}\) and \(A^{h}_{0}\), respectively: \[\frac{\partial L}{\partial A^{s}_{0}(I)}=0 \Rightarrow \pi n^{h}_{I}=\epsilon^{\alpha\beta}\partial_{\alpha}\mathbf{A}^{h}_ {\beta}(i)\equiv B^{h}, \tag{4}\] \[\frac{\partial L}{\partial A^{h}_{0}(i)}=0 \Rightarrow \pi\sum_{\sigma}\sigma n^{b}_{i\sigma}=\epsilon^{\alpha\beta} \partial_{\alpha}\mathbf{A}^{s}_{\beta}(I)\equiv B^{s}. \tag{5}\] Similarly, by using the charge (holon) current \(\mathbf{J}^{h}_{\alpha}(I)\equiv\partial L_{h}/\partial\mathbf{A}^{s}_{\alpha}(I)\) and spin current associated with the \(b\)-spinon: \(\mathbf{J}^{\rm spin}_{\alpha}(i)\equiv\partial L_{s}/\partial\mathbf{A}^{h}_{\alpha}(i)\), one has the following equations of motion for \(\mathbf{A}^{s}_{\alpha}(I)\) and \(\mathbf{A}^{h}_{\alpha}(i)\), respectively: \[\frac{\partial L}{\partial\mathbf{A}^{h}_{\alpha}(i)}=0 \Rightarrow \pi\mathbf{J}^{\rm spin}_{\alpha}(i)=\epsilon_{\alpha\beta}\mathbf{E}^{s}_ {\beta}(i), \tag{6}\] \[\frac{\partial L}{\partial\mathbf{A}^{s}_{\alpha}(I)}=0 \Rightarrow \pi\mathbf{J}^{h}_{\alpha}(I)=\epsilon_{\alpha\beta}\mathbf{E}^{h}_{ \beta}(I), \tag{7}\] where \(\mathbf{E}^{s/h}_{\alpha}=\partial_{0}\mathbf{A}^{s/h}_{\alpha}-\partial_{\alpha}A^{s /h}_{0}\). Therefore, due to the \(U(1)\times U(1)\) mutual Chern-Simons gauge structure, the conserved charge (holon) and spin density-currents are constrained to the internal gauge field strengths by the equations of motion in Eqs.(4)-(7). ### Low-temperature Pseudogap phase. At half-filling with \(n^{h}_{I}=0\), one has \(\mathbf{A}^{h}_{\beta}(i)=0\), and \(L\to L_{s}\) reduces to the Schwinger-boson mean-field state Lagrangian that well describes the AFM phase. On the other hand, at finite doping, the Bose condensation of the bosonic holon field will define a low-temperature PG phase [49; 50]. As the holons are condensed, the total gauge fluctuations in \(L_{h}\) of Eq. (1) will be suppressed due to the Higgs mechanism, leading to \[\mathbf{A}^{s}_{\alpha}(I)+\mathbf{A}^{e}_{\alpha}(I)-2\pi m_{\alpha}(I)=0, \tag{8}\] where \(m_{\alpha}\in\mathbb{Z}\) comes from the compactness of the spatial components in Eq. (1). By using Eq. (8), the equations of motion Eq. (5) and Eq. 
(6) can be reformulated as: \[\pi\sum_{\sigma}\sigma n^{b}_{i\sigma}-2\pi J^{2\pi}_{0}(i)+ \Phi^{e}(i)=0, \tag{9}\] \[\pi\mathbf{J}^{\rm spin}_{\alpha}(i)-2\pi\mathbf{J}^{2\pi}_{\alpha}(i)+ \epsilon_{\alpha\beta}\mathbf{E}^{e}_{\beta}(i)=0, \tag{10}\] where \(\Phi^{e}=\epsilon_{\alpha\beta}\mathbf{\Delta}_{\alpha}\mathbf{A}^{e}_{\beta}\) and \(\mathbf{E}^{e}_{\alpha}=\partial_{0}\mathbf{A}^{e}_{\alpha}-\partial_{\alpha}A^{e}_{0}\) represent the external magnetic flux and external electric field strength, respectively. Here, \(J^{2\pi}_{0}\equiv\epsilon^{\alpha\beta}\Delta_{\alpha}m_{\beta}\in\mathbb{Z}\) denotes the number of \(2\pi\) vortices in the holon condensate, and \(\mathbf{J}^{2\pi}_{\alpha}\equiv-\epsilon^{\alpha\beta}\partial_{0}m_{\beta}\in \mathbb{Z}\) represents the current of the \(2\pi\) vortices. In other words, Eq. (9) corresponds to the fact that in the original holon language, each vortex with \(J^{2\pi}_{0}=\pm 1\) has a phase winding \(\pm 2\pi\), while each spinon carries a half-vortex with a phase winding \(\pm\pi\), known as the spinon-vortex [49; 50]. In the ground state, when all vortices are in the confined phase [53], the superconducting phase coherence is realized with \(\Phi^{e}=0\) in Eq. (9) (i.e., the Meissner effect). Here the \(b\)-spinons are in the RVB pairing state according to Eq. (2) and the \(2\pi\) vortices of \(J^{\rm vor}_{0}=\pm 1\) are also "confined" in vortex-antivortex pairs. In such an SC phase, a single spinon cannot be present in the bulk, but an \(S=1\) excitation (totally with \(\pm 2\pi\) vortex due to the double spinons) can be made since a \(\mp 2\pi\) vortex of \(J^{\rm vor}_{0}\) can be always bound to the \(S=1\) excitation to make the total \(\Phi^{e}=0\) in Eq. (9). A minimal flux quantization condition of \(\Phi^{e}=\pi\) (\(=hc/2e\equiv\phi_{0}\) if the full units are restored) can be realized in Eq. (9) with a single \(b\)-spinon trapped at the magnetic vortex core. The thermally excited free (unpaired) \(b\)-spinons can eventually destroy the Meissner effect with a uniform magnetic field penetrating the bulk according to Eq. (9), which disorders the SC phase coherence and leads to a Kosterlitz-Thouless (KT) like phase transition at [cf. more details in Appendix. B and Ref. [53]] \[T_{c}\approx E_{s}/3k_{B}, \tag{11}\] where \(E_{s}\) is the lowest excited energy of the \(b\)-spinons, to be elaborated below. At \(T\) slightly above \(T_{c}\), i.e., the lower PG regime, the conventional \(2\pi\) vortices may remain well confined (vortex-antivortex paired) as their unpaired configuration would cost more free energy than that of the free \(\pi\)-spinon-vortices. To the leading order of approximation, one may then only focus on the spinon-vortex composites without considering the free \(2\pi\) vortices in Eq. (9) and Eq. (10) unless the temperature is much higher than \(T_{c}\)[50]. Note that a conventional \(2\pi\) vortex can be still bound to a spinon to merely change the vorticity sign of the associated vortex as mentioned above. Namely, the low-energy elementary excitations consist of four types of excited (unpaired) \(b\)-spinons trapped in the vortex cores with quantum numbers of \(\sigma=\pm 1\) and \(\nu=\pm 1\), where \(\sigma\) is the spin index and \(\nu\) denotes the chirality of the vortex [illustrated on the right-hand-side of Fig. 
3(b)]: \[\sum_{\sigma}\sigma n^{b}_{i\sigma}-2J^{2\pi}_{0}(i) \Rightarrow \sum_{\nu}\nu n^{b}_{i\sigma\nu}, \tag{12}\] \[\mathbf{J}^{\rm spin}_{\alpha}(i)-2\mathbf{J}^{2\pi}_{\alpha}(i) \Rightarrow \sum_{\nu}\nu\mathbf{J}^{\nu}_{\alpha}(i)\equiv\mathbf{J}^{\rm vor}_{\alpha}(i), \tag{13}\] where \(n^{b}_{i\sigma\nu}\) denotes the number of excited free spinons with spin index \(\sigma\) and vorticity \(\nu\), and \(\mathbf{J}^{\rm vor}_{\alpha}\) denotes the currents of the spinon-vortices, with \(\mathbf{J}_{\alpha}^{\nu=\pm}\) representing the spinon current with \(\pm\) chirality. Correspondingly, the equations of motion Eq. (9) and Eq. (10) of the mutual Chern-Simons gauge description reduce to the following forms \[\sum_{\nu}\nu n_{i\sigma\nu}^{b} = -\pi^{-1}\Phi_{i}^{e}, \tag{14}\] \[\mathbf{J}_{\alpha}^{\rm vor}(i) = -\pi^{-1}\epsilon_{\alpha\beta}\mathbf{E}_{\beta}^{e}(i), \tag{15}\] Here, Eq. (14) is actually the "chirality-neutral" condition, and Eq. (15) indicates that the current of the spinon-vortices \(\mathbf{J}_{\alpha}^{\rm vor}\) along one direction is induced by an external electric field along the perpendicular direction. Physically, the latter case can be interpreted as the steady vortex motion resulting in a \(2\pi\) "phase slip" of the charge field between the opposite sides of perpendicular to the motion direction [cf. Fig. 2(a) and (b)], and thereby generating an electric field, namely the Nernst effect (see in section IV.4.1). Finally, the holon current \(\mathbf{J}^{h}\) corresponds to the charge current, which can be denoted by \(\mathbf{J}^{e}\) in the following. A spinon perceives the gauge field \(\sigma A_{\mu}^{h}\) in Eq. (1), which results in Eq. (7) where \(\mathbf{E}^{h}\) is an effective "electric" field acting on the spinon of spin \(\sigma=1\), which also denotes the vorticity of the original spinon-vortex composite. Note that a spinon-vortex of \(\sigma=-1\) should experience an opposite force for the same direction of \(\mathbf{J}^{e}\). Now such a spinon-vortex can be bound to a \(\pm 2\pi\) vortex to change its vorticity to \(\nu=\pm\), which becomes independent of \(\sigma\) as given in Eq. (14). The force acting on the spinon-vortex of \(\nu=+\) may then denoted by \(\mathbf{E}^{\rm vor}\) such that Eq. (7) is rewritten as: \[\mathbf{J}_{\alpha}^{e}=\pi^{-1}\epsilon_{\alpha\beta}\mathbf{E}_{\beta}^{\rm vor}. \tag{16}\] Physically, such force acting on the vortex induced by the charge current along a direction perpendicular to it can be understood by drawing an analogy with the well-known "Magnus effect" [54] in fluid dynamics, illustrated in Fig. 2(c). In this semi-classical picture, a spinning object (analogous to the vortex) moving through a fluid (representative of the charge current) experiences a lateral force. This force arises from the differential fluid velocity on opposite sides of the spinning object, pushing it in a direction perpendicular to its motion. Lastly, it is important to emphasize that the relationships given by Eq. (15) and Eq. (16) reflect the well-established concept of boson-vortex duality [55; 56]. Within this framework, the charge and vortex degrees of freedom can be interchanged, highlighting their mutual duality in the described context. ## III Spinon transport In the mutual Chern-Simions gauge description outlined above, the holon condensation will define the so-called lower PG phase, which is also known as the spontaneous vortex phase (SVP) as the free \(b\)-spinons carry \(\pm\pi\)-vortices. 
It reaches an intrinsic superconducting phase coherence at a lower critical temperature \(T_{c}\). As the basic elementary excitation, the \(b\)-spinon will dictate the lower PG or the SVP phase as well as the superconducting instability at \(T_{c}\). The main task in this section is to explore the transport of the \(b\)-spinons, which can expose the physical consequences of the underlying mutual Chern-Simons gauge structure that the \(b\)-spinon is subjected to. ### Spinon excitation spectrum At the mean-field level, according to Eq. (4), the \(b\)-spinons in \(L_{s}\) experience a uniform static gauge flux \(\delta\pi\) flux per plaquette as the holons are condensed with \(\langle n_{I}^{h}\rangle=\delta\), which gives rise to a Landau level like energy spectrum \(E_{m}(\mathbf{k})\), with the lowest excited sector (LES) at \(E_{s}\), as illustrated in Fig. 3(a) in the case of \(\delta=0.2\) (with \(\delta\equiv 2p/q\) and \(p,q\in\mathbb{Z}\) such that \(p=1\) and \(q=10\)). In the presence of a perpendicular external magnetic field \(B^{e}\), the spinon energy spectrum becomes [cf. more details in Appendix. B] \[\tilde{E}_{m\sigma\nu}(\mathbf{k})\equiv E_{m}(\mathbf{k})+\sigma\frac{1}{2}g\mu_{B}B ^{e}+\nu\bar{A}_{0}^{h} \tag{17}\] where the second term on the right is the usual Zeeman splitting for a spin-\(1/2\) with the g-factor (usually taken as 2). The third term originated from the temporal gauge \(\bar{A}_{0}^{h}\), which results in \(i\bar{A}_{0}^{h}\to\bar{A}_{0}^{h}\) in Eq. (2) following a Figure 2: Schematic illustration of the spinon-vortex motion from one side of the sample [(a)] to the other [(b)]. This traverse along the horizontal direction results in a phase difference between the two opposite sides (indicated by grays) along the vertical direction changes by a \(\pm 2\pi\) continuously, leading to an electric field given in Eq. (15). Here the vortex core is denoted by a red “\(+\)” symbol, while the background local phases are represented by blue arrows; (c) The currents flowing along the two sides of a spinning entity, represented by a red disk (the arrow within the disk marks the direction of rotation). Here the red current exhibits a higher velocity compared to its blue counterpart, leading to a “Magnus” force \(F\) exerting on the spinning entity (yellow arrow) as a fluid-dynamic interpretation of Eq. (16). Wick rotation to enforce the constraint Eq. (14) at \(B^{e}\neq 0\). The mean-field effective Hamiltonian for the spinon-vortex composite may be written as \(\tilde{H}_{s}=\sum_{m\sigma\nu\mathbf{k}}\tilde{E}_{m\sigma\nu}(\mathbf{k})\tilde{n}^{b} _{m\sigma\nu}(\mathbf{k})+E_{0}\) with \(\tilde{n}^{b}_{m\sigma\nu}(\mathbf{k})\) denoting the number of the spinon-vortices as the elementary excitations and \(E_{0}\) as the ground state energy. As the external magnetic field \(B^{e}\) is much weaker than the internal fictitious field \(B^{h}=\delta\pi/a^{2}\) (with \(a=3.8\)A as the lattice constant), its effect mainly introduces a minor splitting via the last two terms in \(\tilde{E}_{m\sigma\nu}\). When the temperature is not too high above \(T_{c}\) [note that \(E_{s}\) can be related to \(T_{c}\) in Eq. (11) explicitly], it is reasonable to project the Hilbert space into the LES around \(E_{m}(\mathbf{k})=E_{s}\), which is split as shown in Fig. 3(b) [57]. The mean-field parameter \(\tilde{A}^{h}_{0}\) can be explicitly determined by enforcing the constraint Eq. (14) at \(B^{e}\neq 0\), which is illustrated in Fig. 
4(a), and the corresponding low-energy excited spinon number \(\sum_{\sigma}n^{b}_{\sigma\nu=-}\) in the LES is displayed in Fig. 4(b). Both figures indicate the existence of two distinct temperature regions, separated by the yellow lines in Fig. 4. In the high-temperature region, the effect of \(\tilde{A}^{h}_{0}\) on the free spinon number is relatively small due to its small energy compared to \(E_{s}\). On the other hand, at low temperatures, \(\tilde{A}^{h}_{0}\) dominates the lowest-energy excited level, causing the particle number of spinons to correlate linearly with the magnitude of the external field, but remain independent of temperature. The excited spinons will play a significant role in the transport behavior of the lower PG phase. ### Transverse transport coefficients Importantly, due to the gauge field \(B^{h}\), the \(b\)-spinon spectrum \(\tilde{E}_{m\sigma\nu}\) carries nontrivial Berry curvatures \(\mathbf{\Omega}_{m\sigma\nu}(\mathbf{k})=i\nabla_{\mathbf{k}}\times\langle u_{m\sigma\nu,\mathbf{k}}\,|\nabla_{\mathbf{k}}|\,u_{m\sigma\nu,\mathbf{k}}\rangle\), with \(|u_{m\sigma\nu,\mathbf{k}}\rangle\) being the periodic part of the Bloch waves corresponding to the energy \(\tilde{E}_{m\sigma\nu}(\mathbf{k})\). The nonzero Chern number \(\mathcal{C}_{m\sigma\nu}=2\pi\sum_{\mathbf{k}}\Omega^{z}_{m\sigma\nu}(\mathbf{k})\) for each band within the LES is shown in Fig. 3(b), indicating that the sign of the Chern number depends solely on the chirality of the \(b\)-spinons, leading to \(\sum_{m\in\text{LES}}\mathcal{C}_{m\sigma\nu}=\nu\)[58]. Physically, this is because the direction of the intrinsic magnetic field \(B^{h}\) perceived by the spinons is solely determined by their vorticity sign. To study the transport properties for \(b\)-spinons, we base our approach on the semiclassical theory, analogous to the quantum Hall effect in electron systems[59]. We consider the \(b\)-spinon wave packet with a relatively determined center and momentum \((\mathbf{r},\mathbf{k})\) with an intrinsic size, determined by the "cyclotron length" \(a_{c}=a/\sqrt{\pi\delta}\)[53]. The dynamics of such a wave packet is described by Figure 3: (a) The spinon energy levels \(\tilde{E}_{m}\) (at \(\delta=0.2\)). The lowest excitations have an energy gap \(E_{s}\) indicated by the red arrow; (b) The energy splitting of the lowest excitation level in the presence of a perpendicular magnetic field \(B^{e}\). This corresponding window is marked by the gray region in (a). Each energy level in (b) is labeled with the quantum number (\(\sigma\nu\)), with a corresponding diagram, i.e., a spinon trapped in the charge vortex core, illustrated on the right-hand side. Here the Chern number \(\mathcal{C}\) and the splitting energy are also indicated; (c) The distribution of the vortex current \(\tilde{J}^{\text{vor}}_{\alpha}(i)\) along the \(y\) direction in the ground state, which is calculated on a sample with a periodic boundary condition along the \(y\) direction and an open boundary condition along the \(x\) direction. The length of the sample in the \(x\) direction is \(N_{x}=50\). Figure 4: (a) The evolution of \(\tilde{A}^{h}_{0}\) with respect to temperature \(T\) and magnetic field \(B^{e}\); (b) The corresponding particle number of excited spinons, \(\sum_{\sigma}n^{b}_{\sigma,v=-}\), versus \(T\) and \(B^{e}\) for a given chirality at lower energy. Here the two distinct temperature regions with different behavior are separated by the yellow line. 
the semiclassical equation of motion, which includes the topological Berry phase term [59]: \[\dot{\mathbf{r}}=\frac{1}{\hbar}\frac{\partial\tilde{E}_{m\sigma\nu}(\mathbf{k})}{ \partial\mathbf{k}}-\dot{\mathbf{k}}\times\mathbf{\Omega}_{m\sigma\nu}(\mathbf{k}) \tag{18}\] where \(\hbar\dot{\mathbf{k}}=-\nabla U(\mathbf{r})\), and \(U(\mathbf{r})\) is a confining potential that exists only near the boundary of the sample, which prevents the spinon wave packet from exiting the sample. On the edge along the \(x\) direction, for example, the nontrivial Berry curvature produces an anomalous velocity \(\dot{\mathbf{k}}\times\mathbf{\Omega}_{m\sigma\nu}(\mathbf{k})=-\hbar^{-1}\partial_{y}U( \mathbf{r})\Omega_{m\sigma\nu}^{z}(\mathbf{k})\dot{x}\) in Eq. (18). Physically, this anomalous velocity arises from the fact that \(b\)-spinon perceives an intrinsic "Lorentz force" due to the uniform gauge field \(B^{h}\) from the holons, the sign of which depends on the vorticity. Therefore, \(b\)-spinon undergoes a cyclotron motion in the bulk and a skipping orbit along the edge of the sample, as illustrated in Fig. 1(b). It is crucial to note that both spinons carrying opposite chirality flow along the boundary. This scenario is reminiscent of the quantum spin Hall effect [35; 36], where the electrons at the boundary carry opposite spin directions. Also, in contrast to the chiral spin liquid with chiral edge modes [60; 61], our effective description[as referenced in Eq. (1)-(3)] maintains time-reversal symmetry. Here, to draw parallels and distinctions from previously observed phenomena, the behavior of the neutral spin within our framework may be termed the bosonic "anomalous vortex Hall" effect, which underscores the vortex edge current arising from the internal fictitious magnetic field. On the other hand, in the case of equilibrium, all the edge current in the sample cancels between one edge and the opposite edge shown in Fig. 1(c), resulting in no net current. In the presence of either a spatially varying chemical potential \(\mu\) or temperature \(T\), a net edge current is contributed by the anomalous velocity of \(b\)-spinon, as shown in Fig. 1(b)-(d). For instance, when there is a temperature gradient and a chemical potential gradient in the \(y\) direction, the linear response of the chiral spinon current \(\mathbf{J}^{\nu}\) and the heat current \(\mathbf{J}^{\nu}_{Q}\), with their respective chirality \(\nu\), can be expressed as [62; 63; 64; 65] \[\left[\begin{array}{c}\mathbf{J}^{\nu}_{x}\\ \mathbf{J}^{\nu}_{Q}\end{array}\right]=\mathbf{L}^{xy,\nu}\left[\begin{array}{c}- \nabla_{y}\mu\\ T\nabla_{y}\frac{1}{T}\end{array}\right], \tag{19}\] Here, \(\mathbf{L}^{xy,\nu}\) signifies a \(2\times 2\) matrix that represents the transverse transport coefficients. The parameter \(\nu=\pm\) distinguishes between the different chiralities of spinon. The matrix elements are given by \[L^{xy,\nu}_{ij} \approx -\frac{1}{\hbar V\beta^{q}}\sum_{m\in\mathrm{LES}}\sum_{\sigma\bm {k}}c_{q}(n^{b}_{m\sigma\nu})\Omega_{m\sigma\nu}^{z}(\mathbf{k}) \tag{20}\] \[= -\frac{\nu}{\hbar\beta^{q}2\pi}\sum_{\sigma}c_{q}(n^{b}_{\sigma \nu})\] where \(i,j=1,2\), \(c_{q}(x)\equiv\int_{0}^{x}\left(\log\frac{1+t}{t}\right)^{q}dt\), \(q=i+j-2\), and \(n^{b}_{m\sigma\nu}=1/\left(e^{\beta\tilde{E}_{m\sigma\nu}}-1\right)\) is the bosonic distribution function for \(b\)-spinons, which is independent of momentum, because \(\tilde{E}_{m\sigma\nu}\) is the flat Landau-level band in our case. Note that in Eq. 
(20), as an approximation, we only sum over within the LES and use the relation \(\sum_{m\in\mathrm{LES}}\mathcal{C}_{m\sigma\nu}=2\pi\sum_{\mathbf{k}}\Omega_{m \sigma\nu}^{z}(\mathbf{k})=\nu\). In the following, we will investigate various transport measurements associated with the transverse transport coefficients \(L^{xy}_{ij}\) in Eq. (20) (for simplicity we shall drop the superscript \(xy\) in the following such that \(L^{xy,\nu}_{ij}\to L^{\nu}_{ij}\)). ## IV Experimentally testable consequences ### Thermopower As illustrated in Fig. 1(c), due to the internal flux \(B^{h}\), the \(b\)-spinons with opposite vorticities will propagate in opposite directions along the edges of the sample, such that there is a _net_ vortex current at each edge along the \(x\) direction, which would be canceled out by the opposite edges at \(\nabla_{y}T=0\). Now let us consider a temperature gradient \(\nabla_{y}T\) applied along the \(y\) direction. As depicted in Fig. 1(d), the vortex current on one side of the boundary will be larger than on the higher-\(T\) side, which will result in a finite total vortex current along the \(x\) direction. Noting that \(\mathbf{J}^{\nu\mathrm{or}}_{x}=\mathbf{J}^{\nu=+}_{x}-\mathbf{J}^{\nu=-}_{x}\), where \(\mathbf{J}^{\nu=\pm}_{x}\) represents the spinon current with \(\pm\) chirality [50] as given by \(\mathbf{J}^{\nu=\pm}_{x}=L^{\nu=\pm}_{12}\left(T\partial_{y}\frac{1}{T}\right)\) according to Eq. (20). Furthermore, according to Eq. (15), the net vortex current \(\mathbf{J}^{\nu\mathrm{or}}_{x}\) along the \(x\) direction can induce an electric field \(\mathbf{E}^{e}_{y}\) along the \(y\) direction (similar to the contribution to the kernel effect as to be discussed later), which will contribute to a finite thermopower, with the Seebeck coefficient given by \[S\equiv\frac{\mathbf{E}^{e}_{y}}{\nabla_{y}T}=-\frac{k_{B}\phi_{0}}{2\pi\hbar}\sum _{\sigma\nu}c_{1}(n^{b}_{\sigma\nu}) \tag{21}\] where \(c_{1}(x)\equiv(1+x)\ln(1+x)-x\ln x\). Thus such a contribution of the \(b\)-spinon to the Seebeck coefficient is determined by the number of the excited \(b\)-spinons, \(n^{b}_{\sigma\nu}\), which in turn is essentially governed by the lowest excited energy scale in Eq. (17) at low temperatures. Figure 5: The evolution of the Seebeck coefficient with respect to temperature is depicted in (a), without the influence of a magnetic field, and in (b), at a doping density \(\delta=0.18\) and a critical temperature \(T_{c}=40K\). A typical quantitative temperature-dependence of the Seebeck coefficient \(S\) calculated using Eq. (21) is shown in Fig. 5 at zero magnetic fields [(a)] and at finite \(B\)'s [(b)] in the overdoped regime. The overall \(T\)- and \(B\)-dependence and magnitude here are in agreement with the experimental measurements in the optimal and overdoped cuprates [37; 38]. ### Thermal Hall Similarly, with a temperature gradient along the \(y\) direction, we can also evaluate the net thermal current \((\mathbf{J}_{Q})_{x}=(\mathbf{J}_{Q}^{\nu=+})_{x}+(\mathbf{J}_{Q}^{\nu=-})_{x}\) along the \(x\) direction, as illustrated in Fig. 1(e). Here, \(\mathbf{J}_{Q}^{\nu=\pm}\) represents the thermal current contributed by spinons with \(\pm\) chirality, expressed as \((\mathbf{J}_{Q}^{\nu=\pm})_{x}=L_{22}^{\nu=\pm}\left(T\partial_{y}\frac{1}{T}\right)\), according to Eq. (20). 
The thermal Hall conductivity is then given by[62; 63; 64; 65]: \[\kappa^{\rm xy}\equiv-\frac{J_{Q}}{\nabla_{y}T}=-\frac{k_{B}^{2}T}{2\pi\hbar} \sum_{\nu\sigma}\nu c_{2}(n_{\nu\sigma}^{b}) \tag{22}\] where \(c_{2}(x)=(x+1)(\ln\frac{1+x}{x})^{2}-(\ln x)^{2}-2\,{\rm Li}_{2}(-x)-c\), with \({\rm Li}_{2}(z)\) being the polylogarithm function, and \(c=\pi^{2}/3\) is a constant ensuring that \(\kappa^{\rm xy}\) does not diverge as \(T\to\infty\). It is crucial to note that, in the absence of an external magnetic field, the vanishing of both \(B^{e}\) and \(\bar{A}_{0}^{h}\) leads to the degeneracy of \(\tilde{E}_{m\sigma\nu}\) with respect to chirality \(\nu\), resulting in a zero value for \(\kappa^{\rm xy}\) in Eq. (22) due to the summation over \(\nu\). Essentially, this outcome stems from the fact that the thermal current with opposite chirality flows in opposite directions along a boundary. Therefore, the preservation of total chirality to zero in a sample without an external magnetic field results in complete cancellations for the thermal current. Conversely, in the presence of an external magnetic field \(B^{e}\), according to Eq. (14), the total chirality for \(b\)-spinons becomes finite, leading to a net thermal current along the boundary. As a result, the evolution of thermal Hall conductivity \(\kappa^{\rm xy}\) obtained by Eq. (22) with respect to temperature is depicted in Fig. 6(a). This evolution aligns with experimental results in terms of magnitude[8; 9; 10], and it exhibits distinct behaviors across different temperature regions. In the high-temperature region, following the discussion about Eq. (17), \(n_{\nu\sigma}^{b}\) is not sensitive to \(\bar{A}_{h}^{0}\), leading to \(\kappa^{\rm xy}/T=-\frac{B^{*}k_{B}^{2}}{\pi\omega\theta\delta}\left(\frac{3 T}{T}\right)^{2}\) [Eq. (11) is used here]. On the other hand, in the low-temperature region where spontaneous (thermally excited) vortices are absent, Eq. (14) reduces to \(\sum_{\sigma}n_{\sigma\nu=-}^{b}\approx B^{e}a^{2}/2\phi_{0}\delta\). Here, all other energy levels remain unoccupied, leading to \(\kappa_{xy}/T\to-k_{B}^{2}c_{2}\left(B^{e}a^{2}/2\phi_{0}\delta\right)/\hbar\pi\) as \(T\) approaches \(0\). The doping evolution of \(\kappa_{xy}/T\) near zero temperature is presented in Fig. 6(b). This evolution reveals enhanced signals in the underdoped regimes, corroborating the experimental measurements[8; 9; 10]. Physically, this is because, under low doping conditions, the degeneracy of the lowest Landau level of the spinons is reduced due to the small intrinsic magnetic field strength \(\delta\pi\). This reduction in degeneracy increases the average Berry curvature, denoted as \(\mathbf{\Omega}_{m\sigma\nu}\), experienced by each spinon, which enhances the anomalous velocity at the boundary, as indicated by Eq. (18). Therefore, the thermal Hall effect becomes more pronounced under low doping. However, \(\kappa_{xy}/T\) will not truly diverge at \(\delta\to 0\), due to the fact that the uniform holon condensation will either be broken down or form smaller domain structures when it is deeply in the AFM long-range ordered phase [10]. Our results for the thermal Hall conductivity differ from the bosonic scaling law in Ref. [18], where Zhang et al. utilized the gapless bosons with a power-law Berry curvature. Here we emphasize that the intrinsic flux \(B_{h}\) leads to the Landau level structure, with the gap of the spinon-vortices constrained by Eq. 
(14), resulting in a decreasing gap as the temperature drops, as illustrated in Fig. 4. Notably, within the pseudogap regime, the role of the spinon-vortex is to neutralize the external magnetic field. Consequently, its Hall response is opposite in direction to that of the charged quasiparticles in the Fermi liquid regime. This elucidates the observed sign change of the thermal Hall signal as the doping enters the pseudogap phase [10]. It is important to note that our case does not involve spontaneous time-reversal symmetry breaking, which would otherwise lead to a hysteretic behavior that is not observed experimentally. Furthermore, according to Eq. (14), the total chirality carried by \(b\)-spinons is induced linearly with the applied magnetic field. This results in the linear-\(B\) dependence of \(\kappa^{\rm xy}\) in both distinct temperature regions, which aligns with the experimental measurements [8; 9; 10].

Figure 6: (a) The temperature evolution of the thermal-Hall coefficient when \(B=15\) T. The solid line represents the case in the overdoped regime, while the dashed line signifies the case in the underdoped regime. (b) The doping evolution of the thermal-Hall coefficient as the temperature approaches zero. The thermal-Hall effect is predicted to revert to conventional Fermi liquid behaviors when the doping density \(\delta\) is greater than the critical density \(\delta^{*}\), as indicated by the yellow region.

### Hall Effect

According to Eq. (16), driving a charge current \(\mathbf{J}_{x}^{e}\) along the \(x\) direction induces an electric field \(\mathbf{E}_{y}^{\mathrm{vor}}\) applied to the vortex along the \(y\) direction. This electric field acts as the chemical potential gradient \(\mathbf{E}_{y}^{\mathrm{vor}}=-\nabla_{y}\mu\) in Eq. (19). Since the \(\pm\) vortices perceive \(\mathbf{E}_{y}^{\mathrm{vor}}\) in opposite directions, the response spinon current \(\mathbf{J}_{x}^{\nu=\pm}=L_{11}^{\nu=\pm}\mathbf{E}_{y}^{\mathrm{vor}}\) leads to the vortex current \(\mathbf{J}_{x}^{\mathrm{vor}}=\sum_{\nu}\nu L_{11}^{\nu=\pm}\mathbf{E}_{y}^{\mathrm{vor}}\). From Eq. (15), this induced vortex current further generates an electric field \(\mathbf{E}_{y}^{\mathrm{e}}\) along the \(y\) direction, culminating in the Hall effect as illustrated in Fig. 1(f). Therefore, the obtained Hall coefficient \(R_{H}\) is given by \[R_{H}\equiv\frac{\mathbf{E}_{y}^{\mathrm{e}}d}{\mathbf{J}_{x}^{e}B^{e}}=\frac{a^{2}d}{e\delta}, \tag{23}\] where \(d\) denotes the lattice constant along the \(z\)-axis. We also employ the relation \(c_{0}(x)=x\) and Eq. (14). Therefore, the Hall number calculated from Eq. (23) is \(n_{H}=a^{2}d/eR_{H}=\delta\), which is consistent with the experimental results [3; 4]. Significantly, there exists a long-standing experimental puzzle wherein the charge carrier density, as measured by the Hall number, correlates with the doped hole density \(\delta\) in the PG. This contrasts with the \(1+\delta\) value derived from the Fermi surface area observed through angle-resolved photoemission spectroscopy (ARPES) [66; 67; 68], seemingly deviating from the Luttinger sum rule. Our results offer a compelling explanation: chiral spinons primarily contribute to the Hall effects. In contrast, the entities forming the Fermi surface, the Landau quasiparticles, display a negligible Hall effect signal due to their partially diminished weight (Fermi arcs) in the PG phase.
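To make the magnitude estimates in this and the preceding subsections easy to reproduce, the snippet below numerically evaluates the Seebeck coefficient of Eq. (21) and the thermal Hall conductivity of Eq. (22) from the Bose occupations of the four split levels in Eq. (17); the values chosen for \(E_{s}\), the splitting \(\bar{A}_{0}^{h}\), \(T\), and \(B^{e}\) are illustrative assumptions only (in the text \(\bar{A}_{0}^{h}\) is fixed self-consistently by Eq. (14)), while the Hall number of Eq. (23) needs no evaluation since \(n_{H}=\delta\).

```python
import numpy as np

kB   = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s
muB  = 9.2740100783e-24  # Bohr magneton, J/T
phi0 = 2.067833848e-15   # superconducting flux quantum (h/2e in SI), Wb

def li2_neg(x, n=4000):
    """Li_2(-x) for x >= 0, via Li_2(-x) = -int_0^1 ln(1 + x s)/s ds (midpoint rule)."""
    s = (np.arange(n) + 0.5) / n
    return -np.mean(np.log1p(x * s) / s)

def c1(x):
    return (1 + x) * np.log1p(x) - x * np.log(x)

def c2(x):
    return (1 + x) * np.log((1 + x) / x) ** 2 - np.log(x) ** 2 - 2 * li2_neg(x) - np.pi ** 2 / 3

def occupations(T, B, Es, A0h, g=2.0):
    """Bose factors n_{sigma,nu} of the four split levels in Eq. (17), restricted to the LES."""
    return {(s, v): 1.0 / np.expm1((Es + 0.5 * s * g * muB * B + v * A0h) / (kB * T))
            for s in (+1, -1) for v in (+1, -1)}

def seebeck(n):
    """Eq. (21): S = -(kB * phi0 / (2 pi hbar)) * sum_{sigma,nu} c1(n_{sigma,nu})."""
    return -(kB * phi0 / (2 * np.pi * hbar)) * sum(c1(x) for x in n.values())

def kappa_xy(n, T):
    """Eq. (22): kappa_xy = -(kB^2 T / (2 pi hbar)) * sum_{sigma,nu} nu * c2(n_{sigma,nu})."""
    return -(kB ** 2 * T / (2 * np.pi * hbar)) * sum(v * c2(x) for (s, v), x in n.items())

# Illustrative numbers only: Es from Eq. (11) with an assumed Tc = 40 K, and a splitting
# A0h set by hand here (in the text it is determined self-consistently by Eq. (14)).
Tc, T, B = 40.0, 60.0, 15.0
n = occupations(T, B, Es=3 * kB * Tc, A0h=0.1 * kB * Tc)
print("S =", seebeck(n), "V/K;  kappa_xy (per layer) =", kappa_xy(n, T), "W/K")
```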
Note that the Hall effect in this framework is primarily attributed to the edge states of chiral spinons. At elevated temperatures, the local phase coherence of holons can become further disrupted, rendering them incapable of sustaining condensation when the distance between spinon-vortices becomes comparable to that of the doped holes. In such a scenario, chiral spinons no longer experience the uniform static gauge flux emitted by holons, and thus cannot sustain their complete edge states. Consequently, the contribution of such channels to the Hall effect would diminish as the temperature rises, consistent with experimental measurements [3; 4]. ### Other Properties of Spinon-vortices In the above subsections, the effects produced by the transverse motion of the chiral spinons with the MCS gauge structure have been explored. It is noted that the effects of the longitudinal motion of the same chiral spinons have been already studied previously [49; 50; 69]. Since the \(b\)-spinons are the elementary excitations that dictate the lower PG phase, for the sake of completeness, in the following we briefly discuss additional phenomena associated with the \(b\)-spinons. #### iii.4.1 Nernst Effect When a temperature gradient is applied along the \(y\) direction, our attention shifts from the transverse transport motion detailed in Eq. (19), to the longitudinal drift motion of spinons. To explore this, we introduce a viscosity constant, \(\eta_{s}\), which allows us to determine the drift velocity \(\mathbf{v}^{b}\) of chiral spinons using the equation \(s_{\phi}\nabla T=-\eta_{s}\mathbf{v}^{b}\), where \(s_{\phi}\) denotes the transport entropy carried by a spinon vortex. It's important to note that spinons of both chiralities are driven by a temperature gradient in the same direction along the \(x\)-axis, with the velocity being the same \(\mathbf{v}^{b}\). In the presence of an external magnetic field \(B^{e}\), a vortex current \(\mathbf{J}_{y}^{\mathrm{vor}}\) is induced along the \(y\)-axis. As discussed in the thermopower section, the vortex current can be further expressed as \(\mathbf{J}^{\mathrm{vor}}=(n_{\sigma\nu=+}-n_{\sigma\nu=-})\mathbf{v}^{b}\), where the amplitude is proportional to the external magnetic field as per the chirality "neutrality" condition Eq. (14). This vortex current \(\mathbf{J}_{y}^{\mathrm{vor}}\) induces an electric field \(\mathbf{E}_{x}^{\mathrm{e}}\) along the \(x\)-axis, as dictated by Eq. (15). This corresponds to the Nernst effects, with the signal defined by [49]: \[e_{N}=\frac{\mathbf{E}_{y}}{|\nabla_{x}T|}=B^{e}\frac{s_{\phi}}{\eta_{s}}. \tag{24}\] To eliminate the viscosity \(\eta_{s}\) in Eq. (24), let us consider the longitudinal resistivity \(\rho_{e}\) resulting from the drift motion of chiral spinons. According to Eq. (16), driving a charge current \(\mathbf{J}_{x}^{e}\) along the \(x\) direction induces an "electric field" \(\mathbf{E}_{y}^{\mathrm{vor}}\) on the vortex. In contrast to the temperature gradient, \(\mathbf{E}_{y}^{\mathrm{vor}}\) prompts spinons of both chiralities to drift in opposite directions along the \(y\)-axis, in accordance with the relation \(\mathbf{E}_{y}^{\mathrm{vor}}=\pm\eta_{s}/\hbar c\mathbf{v}_{y}^{b}\). This results in a vortex current \(\mathbf{J}^{\mathrm{vor}}=n_{\sigma\nu=+}^{b}\mathbf{v}^{b}-n_{\sigma\nu=-}^{b}(-\mathbf{ v}^{b})=n^{b}\mathbf{v}^{b}\), where \(n^{b}=\sum_{\sigma\nu}n_{\sigma\nu}^{b}\) is the total number of free \(b\)-spinons. Next, as derived from Eq. 
(15), such vortex current \(\mathbf{J}_{y}^{\mathrm{vor}}\) induces the electric field \(\mathbf{E}_{x}^{\mathrm{e}}\) along the \(x\) direction, leading to the longitudinal resistivity \[\rho_{e}=\phi_{0}^{2}n_{v}/\eta_{s}, \tag{25}\] where the contribution from quasiparticles is not included in this analysis. Lastly, the challenging-to-calculate viscosity \(\eta_{s}\) is eliminated, yielding [49]: \[\alpha_{xy}\equiv\frac{e_{N}}{\rho}=\frac{B^{e}a^{2}s_{\phi}}{\phi_{0}^{2}n_{v}}, \tag{26}\] where \(\alpha_{xy}\) is the quantity introduced in Ref. [39]. Within our framework, a unique aspect that sets it apart from a conventional BCS superconductor is the presence of a free \(S=1/2\) moment locked with the vortex core, which gives rise to the "transport entropy" [40] \(s_{\phi}=k_{B}\left\{\ln\left[2\cosh\left(\beta\mu_{B}B^{e}\right)\right]-\beta\mu_{B}B^{e}\tanh\left(\beta\mu_{B}B^{e}\right)\right\}\). The temperature and magnetic-field dependence of \(\alpha_{xy}\) is illustrated in Fig. 7(a). Its magnitude aligns quantitatively with experimental data [39, 40, 41], suggesting that the transport entropy due to the free moment in a spinon vortex can accurately replicate the Nernst signal observed experimentally. #### iv.2.2 Spin Hall Effect In the presence of a magnetic field, both the chirality and spin degrees of freedom become polarized through the orbital and Zeeman effects, respectively. In this scenario, a charge current \(\mathbf{J}_{x}^{e}\) along the \(x\) direction not only induces a vortex current \(\mathbf{J}_{y}^{\rm vor}\) along the \(y\) direction -- as previously discussed -- but also generates a spin current. The latter can be expressed as \(\mathbf{J}_{\alpha}^{\rm s}=(n_{++}^{b}-n_{-+}^{b})\mathbf{v}_{\alpha}+(n_{+-}^{b}-n_{--}^{b})(-\mathbf{v}_{\alpha})=\sum_{\sigma\nu}\sigma\nu n_{\sigma\nu}^{b}\mathbf{v}_{\alpha}\), where the first subscript labels the spin \(\sigma=\pm\) and the second the chirality \(\nu=\pm\). This results in the generation of a vortex current that accompanies a spin current, with the ratio defined as \(\zeta\equiv J_{\alpha}^{s}/J_{\alpha}^{\rm vor}=\sum_{\sigma\nu}\sigma\nu n_{\sigma\nu}^{\rm b}/n^{b}\). As a consequence, the PG phase is predicted to exhibit a spin Hall effect, with the coefficient given by [69, 49]: \[\sigma_{H}^{s}\equiv\frac{J_{\alpha}^{s}}{\mathbf{E}_{\alpha}^{e}}=\frac{e}{\phi_{0}}\zeta, \tag{27}\] of which the calculated magnitude is shown in Fig. 7(b). #### iv.2.3 Order-to-Order Phase Transition In units where \(\hbar=c=e=1\), Eq. (25) can be recast into a dual form: \[\sigma_{e}\sigma_{s}=\frac{1}{\pi^{2}}, \tag{28}\] where \(\sigma_{e}=1/\rho_{e}\) represents the electrical conductance, while \(\sigma_{s}\equiv\mathbf{J}_{\alpha}^{\rm vor}/\mathbf{E}_{\alpha}^{\rm vor}=n_{v}/\eta_{s}\) denotes the spinon conductance. Essentially, Eq. (28) parallels the boson-vortex duality [55, 56], in which the Cooper pair and the superconducting vortex perceive each other as vortices. As such, when one is in a superfluid state, the other resides in an insulating state. Within the context of our work, the spinon (holon) carries the holon (spinon) vortex, thereby uniquely associating all vortices with quantum numbers. Based on Eq. (28), the superconducting phase, characterized by \(\sigma_{e}\rightarrow\infty\), corresponds to an insulating phase for the spinon with \(\sigma_{s}\to 0\).
Moreover, when spinon condenses with \(\sigma_{s}\rightarrow\infty\), indicating the establishment of antiferromagnetic long-range order, it triggers the proliferation of holon vortices, thereby resulting in an insulating phase in charge, i.e., \(\sigma_{e}\to 0\). This sequence represents a novel type of "order-to-order" phase transition, widely investigated under the rubric of "deconfined quantum critical point" (DQCP) [70, 71, 72]. #### iv.2.4 Relation between \(T_{c}\) and resonance energy in INS The dynamic spin susceptibility, as observed via inelastic neutron scattering (INS), reveals the transition of the gapless spin-wave [73, 74] at the antiferromagnetic (AFM) wave vector \((\pi,\pi)\) to a gapped state upon disruption of the AFM long-range order. This spin excitation also manifests a resonance-like mode [75, 76, 77, 78, 79, 80] characterized by energy \(E_{g}\), demonstrating a peak in the spin spectrum weight. When deviating slightly from the momentum \((\pi,\pi)\), the resonance mode bifurcates and spans both higher and lower energies, resulting in the well-documented hourglass-shaped spectrum [81, 82, 83, 84, 85, 86]. Within our proposed framework, the predominant low-lying spin spectrum weight originates from the LES of chiral spinons characterized by energy \(E_{s}\). Furthermore, the \(S=1\) spin excitation detected by INS is in fact a composite of two \(S=1/2\) spinons, resulting in the resonance spin mode energy \(E_{g}=2E_{s}\). A careful analysis [87] further validates that the spinon excitation discussed in our study is consistent with the observed hour-glass spin spectrum. A key insight is the established relation between the resonance energy \(E_{g}\) observed in INS and the superconducting critical temperature \(T_{c}\). The relation [detailed derivation in Appendix B and Ref. [53]] is expressed as: \[\kappa\equiv\frac{E_{g}}{k_{B}T_{c}}\approx 6.45, \tag{29}\] which aligns closely with the experimental measurement [42, 43, 44, 45]\(\kappa^{\rm exp}\approx 6\). ## V Discussion One of the key hypotheses on the high-\(T_{c}\) cuprate in a doped Mott insulator approach [1] is that the PG phase is a fractionalized novel state beyond the Landau Fermi-liquid description. In other words, it is the spinon and -holon instead of the Landau quasiparticle that dictate the physics of the PG phase. The transport properties can provide a very powerful test of distinct hypotheses of the elementary excitations and thus the underlying states of matter. In this work, we have specifically explored the transverse transport of the chiral spinons, which are elementary excitations characterizing the lower PG phase. Here the spinon and holon are subjected to the mutual Chern-Simons gauge structure due to the phase-string effect in a doped Mott insulator, which preserves the time-reversal and parity symmetries in the absence of the external magnetic field. In the so-called lower PG phase, the holons remain Bose-condensed but the superconducting phase coherence is disordered by free spinon excitations until the "confinement" of the spinons below \(T_{c}\). The time-reversal symmetry is retained because the opposite spins see the opposite directions of the fluxes and form the RVB pairing in the superconducting ground state. Here the transverse transport refers to the rotational motion of the spinons as the edge chiral currents under the internal statistical fictitious fluxes, which may be regarded as the bosonic "anomalous vortex Hall" effect. 
Both the neutral and electric Hall effects are exhibited in the presence of a perpendicular magnetic field. In contrast to the conventional Boltzmann transport of the Landau quasiparticles, the thermopower, thermal Hall, and the Hall effect studied here are all contributed by the chiral spinons, which are further locked with a vortex supercurrent via the mutual Chern-Simons gauge field in generating a transverse electric voltage. The magnitudes of the calculated transverse transport coefficients are intrinsically linked to the resonance-like energy scale of the chiral spinons, which can further determine [53; 87; 88] the SC transition temperature, \(T_{c}\), and be detected by the inelastic neutron scattering experiments [44; 75; 78; 79; 80]. Previously the longitudinal transport of such chiral spinons has been shown to give rise to the Nernst effect [50], the spin Hall effect [69] as briefly mentioned in Sec. IV.4. Additionally, in such a framework, an "order-to-order" phase transition between AFM insulating phase and SC phase is expected in the cuprates, which is worth further investigation in the future to establish a possible relationship with the DQCP [70; 71; 72]. It is further noted that the origin of the thermal Hall effect in different studies [12; 13; 14], starting from the \(\pi\)-flux fermionic spinons, has been also attributed to the Berry curvatures of the spinon bands. However, without the external magnetic fields, the normal state of spinons [14] is usually conventional or topologically trivial. By contrast, here the external magnetic field merely shifts the balance number of the excited spinons with opposite chirality without changing the internal strong Berry curvatures introduced by the nontrivial topological (mutual Chern-Simons) gauge structure. The latter is intrinsically embedded in the pseudogap regime, describing the long-range entanglement between spin and charge degrees of freedom due to the phase-string effect in the doped Mott insulator. Additionally, some other studies attribute the enhancement of the thermal Hall signal to phonons through some extrinsic mechanisms[15; 16; 17]. It should be pointed out that bare phonons are not sensitive to the direction of external magnetic fields, but the experimentally observed thermal Hall coefficient in cuprates depends on the magnetic field component perpendicular to the copper oxide plane [9]. Importantly, all the transport results obtained in this work hinge on the robustness of the chiral spinon excitation, which is protected by the underlying bosonic RVB pairing. However, as the doping further increases beyond a critical point, i.e., \(\delta>\delta^{*}\), the AFM correlation becomes too weak to preserve such an RVB pairing, leading to the breakdown of the pseudogap phase and the restoration of a Fermi liquid with a large Fermi surface [89], as has been suggested experimentally [66; 67; 68; 90; 91]. As a result, apparently, the present transport results should collapse with the contribution dominated by the quasiparticle excitations with the full Fermi surface restored in the overdoped regime at low temperatures. For instance, as indicated by experiments, the Hall number should change from \(\delta\) to \(1+\delta\)[3; 4] and the thermal Hall coefficient should restore the behavior of the Wiedemann-Franz law [10]. Finally, we note that certain experiments have recently detected a signal of the thermal Hall effect along the z-axis in cuprates [8]. 
Our current study has been focused on the purely two-dimensional case and does not offer a quantitative explanation. We speculate that since the phase-string sign structure underlies the intrinsic Berry curvatures leading to the thermal Hall effect, its existence in any dimension of a doped Mott insulator, which has been rigorously proven before [33], may also be responsible for the above-mentioned experimental observation beyond 2D. Technically, in realistic materials -- stacked copper oxide layers -- the interlayer coupling may cause the edge states of the spinons, as described in this paper, to tunnel between different layers. A further study along this line is worth pursuing elsewhere. ###### Acknowledgments We acknowledge stimulating discussions with Long Zhang, Binghai Yan, Yuanming Lu, and Gang Li. Z.-J.S., J.-X.Z., and Z.-Y.W. are supported by MOST of China (Grant No. 2021YFA1402101).
2309.03360
ViewMix: Augmentation for Robust Representation in Self-Supervised Learning
Joint Embedding Architecture-based self-supervised learning methods have attributed the composition of data augmentations as a crucial factor for their strong representation learning capabilities. While regional dropout strategies have proven to guide models to focus on lesser indicative parts of the objects in supervised methods, it hasn't been adopted by self-supervised methods for generating positive pairs. This is because the regional dropout methods are not suitable for the input sampling process of the self-supervised methodology. Whereas dropping informative pixels from the positive pairs can result in inefficient training, replacing patches of a specific object with a different one can steer the model from maximizing the agreement between different positive pairs. Moreover, joint embedding representation learning methods have not made robustness their primary training outcome. To this end, we propose the ViewMix augmentation policy, specially designed for self-supervised learning, upon generating different views of the same image, patches are cut and pasted from one view to another. By leveraging the different views created by this augmentation strategy, multiple joint embedding-based self-supervised methodologies obtained better localization capability and consistently outperformed their corresponding baseline methods. It is also demonstrated that incorporating ViewMix augmentation policy promotes robustness of the representations in the state-of-the-art methods. Furthermore, our experimentation and analysis of compute times suggest that ViewMix augmentation doesn't introduce any additional overhead compared to other counterparts.
Arjon Das, Xin Zhong
2023-09-06T21:04:53Z
http://arxiv.org/abs/2309.03360v1
# ViewMix: Augmentation for Robust Representation in Self-Supervised Learning ###### Abstract Joint Embedding Architecture-based self-supervised learning methods have attributed the composition of data augmentations as a crucial factor for their strong representation learning capabilities. While regional dropout strategies have proven to guide models to focus on lesser indicative parts of the objects in supervised methods, it hasn't been adopted by self-supervised methods for generating positive pairs. This is because the regional dropout methods are not suitable for the input sampling process of the self-supervised methodology. Whereas dropping informative pixels from the positive pairs can result in inefficient training, replacing patches of a specific object with a different one can steer the model from maximizing the agreement between different positive pairs. Moreover, joint embedding representation learning methods have not made robustness their primary training outcome. To this end, we propose the ViewMix augmentation policy, specially designed for self-supervised learning, upon generating different views of the same image, patches are cut and pasted from one view to another. By leveraging the different views created by this augmentation strategy, multiple joint embedding-based self-supervised methodologies obtained better localization capability and consistently outperformed their corresponding baseline methods. We also demonstrate that incorporating ViewMix augmentation policy promotes robustness of the representations in the state-of-the-art methods. Furthermore, our experimentation and analysis of compute times suggest that ViewMix augmentation doesn't introduce any additional overhead compared to other counterparts. ## 1 Introduction Dependence on a large amount of annotated training data is one of the limiting factors when performing accurate predictive tasks through supervised learning. To improve the training efficiency and performance of deep learning models, researchers, on the one hand, explore the enrichment of data, for instance, with augmentation, and on the other hand, investigate unsupervised and semi-supervised learning techniques to reduce data dependency. Self-supervised learning (SSL) in computer vision, especially the joint embedding architectures for representation learning, has gained plenty of traction in recent years, with some methods performing as well as the state-of-the-art supervised methods without needing any labeled samples during the pretraining stage. While data augmentation techniques have proven to be quite effective in ensuring better training efficacy of supervised learning, they play an even more crucial role in self-supervised pretraining methods for obtaining good representations. Thorough experimentations Chen et al. (2020); Bardes et al. (2021); Zbontar et al. (2021); Caron et al. (2021); Grill et al. (2020), over the recent years have shown that augmentation policies like random cropping, random flipping, color distortions, and gaussian blur compounded on top of each other have proven to be very effective means of transformation for the pretext task. Whether it's contrastive, non-contrastive, redundancy reduction, or asymmetric network methods, these augmentation policies are regularly incorporated in the recent SSL training schemes to produce different views of the same image sample. Here, the term 'view' Chuang et al. (2022) refers to a transformed image produced after applying multiple data augmentation techniques. 
Exposing models to different views and optimizing for maximum agreement between the views has been shown to significantly improve self-supervised representation learning. Beyond the representative property of the learned features, only a few SSL methods Chuang et al. (2022); Yan et al. (2022) have targeted robustness against noisy data. Here, robustness refers to the learned representation's insensitivity, or invariance, to distortions or augmentations of the inputs. Thus, as long as the inputs are the same image, the learned representation should remain intact regardless of the augmentations. Since joint embedding architectures experience aggressive forms of image transformations, the models become invariant Ericsson et al. (2022) to certain distortions. Combining SSL methods has even considerably improved the label noise robustness of supervised methods Ghosh and Lan (2021). The widespread deployment of deep neural networks in many downstream real-world tasks has made robustness all the more important. As with self-supervised learning, robustness and generalization to unseen shape variations can be obtained through better localization capability Song et al. (2019). On the contrary, deep learning models often focus too heavily on a small intermediate set of activations or on local patches of information. This narrow outlook yields weaker representations for general downstream tasks. Although regional dropout (the process of removing informative pixels) strategies DeVries and Taylor (2017); Zhang et al. (2017); Yun et al. (2019) are suitable for solving this issue in supervised learning, the leading techniques Zhang et al. (2017); Yun et al. (2019) are not appropriate in self-supervised settings, because part of their optimization relies on the newly generated mixed labels, and SSL techniques do not rely on labels. On top of that, such data mixup methods Zhang et al. (2017); Yun et al. (2019) produce samples that potentially sit between different classes and add no complementary information about the sample in question. On the other hand, Cutout DeVries and Taylor (2017) can lead to training inefficiency due to missing pixels. Moreover, we argue that current SSL methods' lack of attention to local features also results in learning suboptimal feature representations. Due to the correlation of local features with robustness Song et al. (2019), this lack also results in substandard robustness. In this paper, we propose **ViewMix**, a novel image augmentation strategy specifically designed for self-supervised learning, which has three main advantages. (i) By simply patching one view on top of another to impose a regional dropout and replacement scenario, ViewMix can be flexibly integrated with different joint embedding learning architectures; (ii) adding ViewMix along with the standard SimCLR-like image augmentation protocol, we find that the learned representations from multiple state-of-the-art joint embedding learning methods consistently outperform their corresponding baseline (or non-ViewMix) counterparts on linear evaluations of the representative property; and (iii) we show that the learned representations from adopting ViewMix with different joint embedding architectures are more robust than their corresponding baselines in standard linear classification testing with previously unseen distortions. ## 2 Related work Unsupervised representation learning frameworks are mostly formulated as generative or discriminative methods.
Generative methods learn to generate new data instances from input data. Learning the data generation process of such models imposes learning the data distribution of the inputs, resulting in intermediate feature maps that can be utilized for input representation. For instance, Masked autoencoders He et al. (2022) have demonstrated to learn strong pretext tasks through learning to reconstruct holistic visual concepts. Whereas generative methods learn the distribution of data and utilize the intermediate feature maps as image representations, discriminative methods learn to differentiate between types of data instances. Many self-supervised methodologies follow this goal to maximize agreement between different views of the same image and minimize between different ones. Recently, there has been the emergence of non-contrastive methods as well which eliminates the requirements of negative samples necessary for the discriminative methods. In this section, we will briefly discuss some of these approaches and analyze the literature concerning the ViewMix augmentation. Discriminative methods, particularly contrastive methods have mostly occupied the state-of-the-art chart in self-supervised learning. Chen _et al._ proposed the SimCLR Chen et al. (2020) method, which is a simple framework for learning representations in a self-supervised manner. This framework introduced augmentation-oriented representations Figure 1: Visualization of Cutout, CutMix and ViewMix augmentation. learning methodology in addition to the use of projection heads with encoders to establish an excellent learned representation. NNCLR Dwibedi et al. (2021) has extended this instance discrimination task to include non-trivial positives between augmented samples of the same images and among different images. These positive samples of near-neighbors are drawn from a support set of image embeddings. NNCLR along with other methods, e.g. MoCo Chen et al. (2020), has adopted memory banks in their scheme to maintain the support set of nearest neighbors. This increases the complexity of the training schemes and causes a large overhead in memory requirements. Additionally, all the contrastive approaches often require comparing each sample with many other samples optimally and the performance varies by the quality of the negative sample pairing. This begs the question of whether the negative pairing is essential. Recently many clustering, asymmetric network learning, and redundancy reduction methods have emerged. For instance, DeepCluster Tian et al. (2017) bootstraps previous versions of its representations to produce targets for the next one. The method clusters data points using current representations which helps it to avoid the usage of negative pairs. Dissimilar to DeepCluster, BYOL Grill et al. (2020) proposes image representation learning with online and target networks that interact and learn from each other. The method also employs a slow-moving average of the online network on the target network to encourage encoding more information within the online projection. Zbontar et al. (2021) proposes Barlow Twins which produces a cross-correlation matrix of the representations close to the identity matrix, forcing strong correlation within each dimension of the representations between the two siamese branches, and decor-relates the pairs of different dimensions. But the method relies heavily on batch normalization which prevents collapse when working with only positive samples. Non-contrastive methods like VICReg Bardes et al. 
(2021), VIbCReg Lee and Aune (2021), VICRegL Bardes et al. (2022) have also been formulated to answer that question. Although these different classes of methodologies propose different ideas, they identify and address the sensitivity to choosing the composition of image transformations to result in better image representations. Furthermore, Chen et al. (2020) pointed out that applying cropping in composition with strong color jitters in SSL pretraining has rendered better performance than complex supervised augmentation policies. The current state of self-supervised methodologies dominantly uses cropping, horizontal flip, color jitter, Gaussian filter, gray-scaling, and solarization. An investigation to identify other augmentations is of great interest, which can reinforce the performance of the SSL methodologies. In addition, state-of-the-art joint embedding learning mainly focuses on how representative the learned features are, and the invariance or robustness is one of the training methods. Through experimentation and analysis, we have proposed a new augmentation policy ViewMix, which is suitable to the self-supervised learning methods. Unlike the previous methodologies, we highlight robustness as one of the primary training outcomes, along with superior image representations. Experimentation shows that adopting ViewMix on top of base sets of augmentations during the pretraining of multiple self-supervised methods has consistently resulted in higher linear evaluation accuracy than their base counterpart. ## 3 ViewMix This section presents the ViewMix augmentation in detail. Section 3.1 discusses the design motivations behind ViewMix. Section 3.2 describes the ViewMix algorithm. Section 3.3 talks about the flexibility when using ViewMix in SSL schemes. ### Motivation Recent research in joint embedding learning has strongly suggested that augmentation policies play a crucial role in obtaining better representations. Correct selection of augmentations is critical that using a simple composition of scaling and color distortion during the self-supervised training can guide the model to gain higher linear evaluation accuracy than adopting some of the most sophisticated augmentation policies practiced in supervised techniques. On the other hand, regional dropout and replacement strategies have demonstrated their ability to enhance performance in classification tasks by incentivizing feature extractors to focus on less discriminative parts of objects, thus obtaining better object localization capability. Furthermore, joint embedding representation learning methods did not control the robustness as their primary training outcome, although some of them applied robustness/invariance as one of the training methods for representative features. Motivated by these facts, we have formulated the ViewMix augmentation. Specifically, ViewMix initiates a regional dropout and replacement strategy appropriate for the SSL frameworks. While regional dropout augmentation strategies, for instance, Cutout augmentation, encourage focusing on inconspicuous parts of the object, the dropping of pixels makes the learning process inefficient due to introducing blank information. Although masked autoencoders He et al. (2022) work very well by simply applying heavy information dropout, they only work with ViT-based Dosovitskiy et al. (2020) architectures. On the contrary, CutMix Yun et al. 
(2019) augmentation mitigates the learning inefficiency of Cutout augmentation by filling in blank pixels of the training sample with a patch from another object sample. Although the method has proven effective for supervised methods, such augmentation does not fit well with joint embedding learning, where maximizing agreement between different views of the same image is the goal. Since CutMix replaces image regions with a patch of another image, incorporating it in joint embedding learning introduces different views referring to two different classes of objects rather than from a single one. In such a scenario, the objective of the SSL training doesn't correlate with the augmentation. Later in the experiment section, we will observe that CutMix establishes a more impaired pretraining condition for SSL. We designed ViewMix augmentation specifically to address the issues mentioned earlier by Cutout and CutMix under the SSL criteria. The augmentation is inspired by Cutout and CutMix and is suitable for joint embedding learning architectures. Unlike CutMix, which replaces the region of the training sample with a patch of a different image of a different class, ViewMix takes two different views of the same image, replacing the region of one of the views with a patch from the other. The views are generated from the standard SSL transformations of the original image. The key differences between Cutout, CutMix, and ViewMix are summarized in table 1. Fig: 2 illustrates the ViewMix augmentation process. First, the original image is processed through two transformations of the same distribution to generate two unique views, \(A\) and \(B\). Then, we replace the region of \(A\) with a random patch sampled from view \(B\). After patching, a new view \(A^{\prime}\) is formulated, which continues to the SSL pretraining process. ### Algorithm ViewMix augmentation is designed to leverage the transformation stage of the recent joint embedding learning schemes. During pretraining, for a given image \(x\in\mathbb{R}^{W\times H\times 3}\), sampled from dataset \(\mathcal{D}\), \(t_{1},t_{2},...t_{n}\) transformations are applied to produce \(n\) different views \(v_{1}=t_{1}(x),v_{2}=t_{2}(x),...,v_{n}=t_{n}(x)\). Here, \(t_{1},t_{2},...,t_{n}\) are sampled from a distribution \(\mathcal{T}\), \(n>1\), and \(W\) and \(H\) is the width and height of each input image. In most joint embedding learning processes, each of these transformations is a predominantly random crop of the sample \(x\) followed by color distortions. The goal of the ViewMix augmentation is to generate a new training sample \(\tilde{x}\) by masking two different views produced by any two transformations \(t_{a}\) and \(t_{b}\) from distribution \(\mathcal{T}\) with mask \(\mathbf{M}\). Here, \(\mathbf{M}\in\{0,1\}^{W\times H}\) is a binary mask used to indicate the pixel information to be swapped by the ones of a different view. The sample \(\tilde{x}\) is then used to continue the joint embedding training. We define the augmentation process as follows: \[v_{a}=t_{a}(x), \tag{1}\] \[v_{b}=t_{b}(x), \tag{2}\] \[\tilde{x}=\mathbf{M}\odot v_{a}+(1-\mathbf{M})\odot v_{b}. \tag{3}\] The masking region of M is filled by 0 and the remaining by 1. Consequently, the region with 0's replaces the pixel information with another view's information while the region with 1's is kept intact. 
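Before detailing how the masking region is drawn, it may help to see Eqs. (1)-(3) spelled out in code. The snippet below is a minimal PyTorch-style sketch of our own; function names, argument names, and the default values of \(r_{min}\) and \(r_{max}\) are illustrative rather than a released implementation, `base_transform` stands for a transformation \(t\) sampled from \(\mathcal{T}\), and the rectangular mask is built as described in the next paragraph.

```python
import random
import torch


def viewmix(x, base_transform, r_min=0.3, r_max=0.6):
    """Produce a ViewMix-ed view from an image tensor x of shape (C, H, W).

    Eqs. (1)-(2): two augmented views of the same image; Eq. (3): mix them with
    a binary mask M that is 0 inside a random box and 1 elsewhere.
    Defaults for r_min / r_max are illustrative.
    """
    v_a, v_b = base_transform(x), base_transform(x)      # v_a = t_a(x), v_b = t_b(x)
    _, H, W = v_a.shape

    lam = random.uniform(r_min, r_max)                   # patch scale (aspect-preserving)
    bw, bh = int(lam * W), int(lam * H)
    bx, by = random.randint(0, W), random.randint(0, H)  # box center, uniform over the image

    x1, x2 = max(bx - bw // 2, 0), min(bx + bw // 2, W)  # clip the box to the image
    y1, y2 = max(by - bh // 2, 0), min(by + bh // 2, H)

    mask = torch.ones(1, H, W, dtype=v_a.dtype)          # M in Eq. (3)
    mask[:, y1:y2, x1:x2] = 0.0

    x_tilde = mask * v_a + (1.0 - mask) * v_b            # Eq. (3)
    return x_tilde, v_b
```

The resulting \(\tilde{x}\) and the untouched second view can then be fed to the joint embedding objective as a positive pair; in our experiments this operation is applied with probability \(0.33\) on top of the standard augmentations.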
The masking region is defined by a bounding box containing center point coordinates \((b_{x},b_{y})\) and \(b_{w}\) and \(b_{h}\) as the width and height of the bounding box, respectively. Given a view of width \(W\) and height \(H\), we obtain \(b_{x}\) and \(b_{y}\) by uniformly sampling from the range \([0,W]\) and \([0,H]\). The width \(b_{w}=\lambda\times W\) and height \(b_{h}=\lambda\times H\) of the bounding box preserve the aspect ratio of the original view. \(\lambda\) is a fraction uniformly sampled from the range \([r_{min},r_{max}]\), where \(0<r_{min}<r_{max}<1\). \(r_{min}\) and \(r_{max}\) can be exposed as hyperparameters to guide the area of the randomly replaced view per augmentation. ### Flexibility To produce a single sample of ViewMix augmented image; first, we need to run two transformations to generate two different views of the image. The transformation pipeline is inspired by the standard SimCLR framework and each of these transformations \(t_{i}\) (sampled from the distribution \(\mathcal{T}\)) consists of random cropping followed by multiple color distortion operations and random horizontal flips. It seems like the ViewMix augmentation initiates a lot of computational overhead. However, since most joint embedding learning architecture requires generating two or more \begin{table} \begin{tabular}{l c c c} \hline \hline & Cutout & CutMix & ViewMix \\ \hline Regional dropout & ✓ & ✓ & ✓ \\ Full image utilization & ✗ & ✓ & ✓ \\ Suitable for SSL & ✓ & ✗ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Summarization of Cutout, CutMix, and ViewMix. views from a single sample, we can use those views directly in our augmentation to formulate the final augmented input. This implementation trick prevents ViewMix from initiating any additional transformations; hence there is no additional computational overhead. Consequently, ViewMix augmentation is flexibly integrated with most joint embedding learning schemes, whether it is a contrastive, asymmetric network, redundancy reduction, or non-contrastive method. ## 4 Experiments and Analysis In this section, we evaluate the representations learned by multiple self-supervised method-based pretraining when applied with ViewMix. The evaluation focuses on ViewMix's efficacy in improving the localizability and generalizability of representations obtained from these pre-trainings. The evaluation process is three-fold. First, we evaluate the learned representations with linear classification applied to five different SSL methods. We demonstrate that the addition of the ViewMix improves the linear classification accuracy across five different popular SSL methods, namely SimCLR, VICReg, BYOL, Barlow Twins, and VIbCReg, compared to their corresponding base composition of transformations. Second, we evaluate the robustness of the representations obtained from these pre-trainings. In that manner, we show that the ViewMix augmentation policy consistently improves frozen linear classification accuracy even when introduced with previously unseen augmentations, indicating robustness in the representations. Third, to further evaluate the effectiveness of downstream tasks, we finetuned the model for multiple few-shot recognition tasks and a segmentation task. Due to computational constraints, we kept our pretraining limited mostly to the CIFAR10 dataset. For segmentation evaluation, we pretrained on ImageNet dataset with SimCLR, VICReg, and Barlow Twins method paired with different augmentation strategies. 
We also visualize and compare the Class Activation Mappings (CAM) of different augmentations and analyze their effects. Finally, we analyze if there is any computational overhead introduced by our proposed method. ### Linear Evaluation Evaluation Method.For evaluation, we have selected five joint embedding-based SSL methods: SimCLR, VICReg, VIbCReg, BYOL, and Barlow Twins. ResNet-18 architecture is used as the backbone for all the methods. All the SSL methods have the same composition of transformations. Specifically, the SSL frameworks include the five standard image transformations randomly applied and compounded on top of each other. These transformations are Cropping + Rescaling, Color Jitter, Grayscale, Gaussian, and Solarization. We pre-train each of the SSL methods with and without the ViewMix augmentation and freeze the weights. We remove the projection layer and attach a linear layer (initialized with random weights) with the frozen weights of the backbone and train for classification. The corresponding validation accuracy of the finally obtained classifier is the linear evaluation accuracy. In all experiments, we train the linear classifier for 100 epochs using the Adam optimizer with labeled training set data. Figure 2: Summary of ViewMix augmentation. The performance comparison across all the SSL methods with and without the ViewMix augmentation along with the base standard augmentations helps portray the superiority of the learned representations on downstream computer vision tasks. Image Transformation Details.SSL frameworks depend heavily on image transformations to produce different views of the same object. The following are the brief details of augmentations that are applied during the training: (i) Image cropping with random sizes from 75% to 100% of the original area; (ii) Random horizontal flip of the images with 0.5 probability; (iii) Random Color Jittering with a probability of 0.8; (iv) Random Gaussian Filter with a probability of 0.2; (v) Gray-scaling with a probability of 0.2; (vi) Solarization with a probability of 0.2; and (vii) ViewMix with a probability of 0.33 (if applied). Some of the transformation intensities have minor variations depending on which SSL method they are applied to. But the random application of ViewMix is kept at 33% for all the methods, and the patch area is randomly selected between 30% and 60% of the image area. Analysis of Results.Table 2 compares the linear evaluation accuracy of the different self-supervised learning methods with the ViewMix augmentation policy against the base transformations. We can observe that the addition of ViewMix increases the linear evaluation accuracy in all cases. For some methods, a significant accuracy gain upon the addition of ViewMix can be obtained. For instance, VIbCReg improves by **+2.34%** and VICReg improves by **+1.74%** when the pretraining has the ViewMix augmentation policy along with the SimCLR-style baseline augmentations. Table 3 demonstrates the linear evaluation accuracy of different SSL methods on the ImageNet dataset. Although ViewMix's top-1 accuracy in the linear classification is slightly lower, it provides better representations for semantic segmentation on the Oxford-IIIT Pet dataset. We notice that ViewMix continuously increases the performance as we increase the size of the dataset.
As we are approaching the upper limits on what our current training hardware can allow, we conjecture that if we train with a much larger number of examples and expand the parameter search, ViewMix could further improve the classification accuracy and provide better regularization. ViewMix and CutMix.The ViewMix augmentation has some of the resemblance and characteristics of CutMix, yet when applied to Joint Embedding SSL methods, they fall apart. Firstly, apart from the better localization effect, \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{ResNet-18} & \multirow{2}{*}{Epochs} & \multicolumn{2}{c}{SimCLR} & \multicolumn{2}{c}{VICReg} & \multicolumn{2}{c}{BYOL} & \multicolumn{2}{c}{Barlow} & \multicolumn{2}{c}{VIbCReg} \\ \cline{3-10} \cline{6-10} & & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 & Top-1 & Top-5 \\ \hline Baseline & 1000 & 90.24 & 99.72 & 90.52 & 99.64 & 91.98 & 99.81 & 91.43 & 99.78 & 88.55 & 99.63 \\ + Cutout & 1000 & 91.07 & 99.80 & 91.86 & 99.76 & 92.39 & 99.85 & 91.78 & 99.80 & 89.62 & 99.69 \\ + ViewMix & 1000 & 91.44 & 99.79 & 92.26 & 99.78 & 92.55 & 99.87 & 91.80 & 99.85 & 90.89 & 99.81 \\ + (Cutout+ViewMix) & 1000 & 91.32 & 99.72 & 91.66 & 99.77 & 92.45 & 99.78 & 90.58 & 99.74 & 90.54 & 99.76 \\ \hline Baseline & 200 & 85.18 & 99.50 & 89.37 & 99.60 & 86.07 & 99.6 & 87.47 & 99.62 & 85.97 & 99.48 \\ + CutMix & 200 & 67.03 & 97.39 & 74.74 & 98.18 & 71.11 & 97.6 & 71.57 & 97.67 & N/A & N/A \\ + ViewMix & 200 & 85.72 & 99.50 & 90.17 & 99.74 & 84.15 & 99.48 & 86.08 & 99.66 & 87.94 & 99.71 \\ \hline \hline \end{tabular} \end{table} Table 2: Linear Evaluation accuracy of SimCLR, VICReg, BYOL, Barlow, VIbCReg with Baseline, Cutout, ViewMix and Cutout+ViewMix augmentation, after 1000 epochs and CutMix augmentation after 200 epochs of pretraining on CIFAR-10 dataset. \begin{table} \begin{tabular}{l c c c} \hline \hline ResNet-18 & \multicolumn{2}{c}{Linear Classification} & \multicolumn{2}{c}{Semantic Segmentation} \\ \multicolumn{2}{c}{Num of Params: 11.17M} & \multicolumn{2}{c}{ImageNet} & \multicolumn{2}{c}{Oxford-IIIT Pet (Finetune)} \\ \cline{2-4} & Top-1 (\%) & Top-5 (\%) & \multicolumn{2}{c}{IoU} \\ \hline SimCLR & 74.62 & 93.26 & 0.8959 \\ + Cutout & 74.84 & 93.46 & 0.8952 \\ + ViewMix & 74.52 & 93.44 & 0.8989 \\ \hline VICReg & 75.94 & 93.48 & 0.8928 \\ + Cutout & 75.74 & 93.26 & 0.8906 \\ + ViewMix & 75.40 & 93.38 & 0.8919 \\ \hline Barlow & 76.70 & 93.60 & 0.8928 \\ + Cutout & 76.26 & 93.54 & 0.8940 \\ + ViewMix & 76.02 & 93.98 & 0.8943 \\ \hline \hline \end{tabular} \end{table} Table 3: Linear Evaluation (Partial ImageNet) and Segmentation Finetuning (Oxford-IIIT Pet) results CutMix offers mixed labels, which is unnecessary in the case of SSL. Secondly, joint embedding learnings are focused on maximizing agreement between different views of an object. Since CutMix patches another object class onto the input image, the optimization toward view agreement becomes confusing because the model is experiencing samples containing a patch of a different object class. Consequently, CutMix results in suboptimal self-supervised learning. We can observe these behaviors in Table 2, comparing the linear evaluation accuracy of ViewMix and CutMix on multiple state-of-the-art joint embedding-based SSL methods. In 200 epoch pretraining, CutMix augmentation with different SSL methods has consistently performed worse than their corresponding baselines, and ViewMix has shown improvement upon the baseline. 
In the case of VIbCReg, in all the training iterations, the pretraining failed to finish with CutMix. Comparison with VICRegL.VICRegL Bardes et al. (2022) is a recently proposed self-supervised technique to learn features at a global and local scale. It utilizes the VICReg Bardes et al. (2021) criterion on the pair of feature vectors for maximizing agreement between views. VICRegL is an improvement of VICReg. We compare the linear evaluation accuracy between the baseline VICRegL and VICReg plus ViewMix on the CIFAR-10 Krizhevsky and Hinton (2010) dataset. We have selected ResNet-18 as the backbone and chosen \(\alpha=0.75\) for training VICRegL with \(256\) batch size. The models are trained for \(1,000\) epochs. We observe that, given these experiment configurations, ViewMix plus VICReg can outperform the linear evaluation top-1 accuracy of the newly proposed improvement of VICReg (VICRegL). ### Robustness Evaluation This subsection discusses our evaluation of the robustness. The robustness effect of ViewMix is compared against Cutout. Also, comparing the performance of a base model with and without the ViewMix is an ablation study highlighting the importance of ViewMix in achieving robustness. Evaluation Method.The goal of the robustness evaluation is to analyze how well the obtained representations represent their corresponding classes after introducing previously unseen transformations. This evaluation process is similar to linear evaluation with just one exception. In linear evaluation, after training the classifier with a labeled dataset we run validation on images without any augmentations, or simply putting, inputs are from the same distribution. But in robustness evaluation, after the classifier training, we run the validation with previously unseen transformations. Hence the validation is conducted with samples of different data distributions. For our evaluation, we have selected Rotation, Rot90, Perspective, and Translation transformation. This means after training the classifier with the original unaugmented image dataset, we create four different validation sets which only apply these four augmentations. The validation accuracy from those datasets represents our robustness evaluation metric. We selected rotation, rot90, perspective, and translation augmentation because these augmentation policies are significantly different from the base augmentations applied during the SSL pretraining, bolstering the fact that augmentations of such nature have not been experienced by the backbone before. Analysis of Results.For fair experimentation, each SSL scheme is trained for 1000 epochs with implementations that result in deterministic transformations (meaning the transformations are pre-generated and cached for later use). We've conducted the robustness evaluation on base SSL methods, SSL methods pre-trained with Cutout, and SSL methods pre-trained with ViewMix. For each training run, we logged the resulting validation accuracy in Table 5. Results in each row represent the linear evaluation accuracy of each pretrained model with different SSL frameworks. We can see that most of the SSL methods pretrained with ViewMix augmentation has resulted in higher linear classification accuracy, in previously unseen transformations, namely, Rotation, Rotation 90-degree, Perspective and Translation. For instance, when SimCLR is pretrained with ViewMix, adding rotation augmentation in the linear evaluation validation has **6.63%** higher accuracy than the base SimCLR. 
These results highlight the fact that when trained with ViewMix, feature extractors are less sensitive to distortions. This showcases ViewMix's superiority in obtaining better robustness against previously unseen image perturbations. We attribute this higher robustness to the better localization capability that the SSL methods gain from the ViewMix augmentation. ### Transfer Learning of Pretrained Models This subsection discusses how models trained with baseline, Cutout, and ViewMix compare when the learned representations are used for transfer learning. Evaluation Method.Self-supervised learning is aimed toward better feature representations which can later be used for transfer learning on downstream tasks. So we examine whether ViewMix augmentation on different SSL methods results in better performance in downstream tasks compared to their baseline and Cutout counterparts. For evaluation, we utilized our ResNet18 pretrained weights for transfer learning on few-shot recognition and segmentation tasks. More specifically, for few-shot recognition, we finetuned CIFAR10 pretrained models with Prototypical Network Snell et al. (2017) across six datasets, namely CIFARFS Bertinetto et al. (2018), Fewshot-CIFAR100 Oreshkin et al. (2018), Caltech-UCSD Birds (CUB) Wah et al. (2011), Omniglot Lake et al. (2019), Double MNIST Sun (2019), and Triple MNIST Sun (2019). We consider a 5-way 5-shot transfer, and the test split always has 32 images per class, except for Omniglot, where it has 20. For the segmentation task, we use ImageNet pretrained weights from the VICReg, SimCLR, and Barlow Twins SSL methods. We used a Feature Pyramid Network (FPN) Lin et al. (2017) with a ResNet18 backbone. The segmentation evaluation is based on the Oxford-IIIT Pet Dataset Parkhi et al. (2012) and reports the Intersection over Union (IoU) metric. Analysis of Results.For both types of experiments, we finetuned the existing models with pretrained weights. Table 6 reports the accuracy of different methods with the baseline, Cutout, and ViewMix strategies. Table 3 shows the segmentation finetuning efficiency. Finetuning the weights for the segmentation task results in better IoU over the SimCLR and Barlow Twins-based baseline and Cutout strategies. In Table 6, we can observe that when finetuning for 5-way 5-shot learning, encoders with ViewMix-based weights consistently outperform their corresponding baseline and Cutout counterparts on most of the few-shot datasets, except Omniglot. ### Class Activation Mapping Analysis As discussed earlier, replacing regions with patches from a different object class, as CutMix does, results in inferior representation learning. ViewMix instead facilitates dropout with full usage of the image and improves the localization capability by situating a partial view on top of the view in consideration. The class activation mappings shown in Fig. 3 visually demonstrate different classifiers' behavior for the same images. The figure consists of multiple blocks of images, and each (\(3\times 3\)) block, from top to bottom, presents three scenarios: namely, the class activation heatmaps for the classifier pretrained with the baseline SimCLR-inspired transformations, Cutout, and ViewMix, respectively.
From left to right of each (\(3\times 3\)) block, we have the input images from CIFAR-10, the class activation map, and the overlaid heatmap visualization of the corresponding input. In these examples, encompassing all three augmentation scenarios, we are investigating the model's behavior in terms of image localization and pixel information utilization during classification. We have observed that when using the base SimCLR transformations, the model predominantly focuses on the prominent features of the object class. However, when employing Cutout augmentation, the heatmap extends to non-salient areas, suggesting improved localization. It's also noteworthy that the model does not effectively utilize blank spaces for detection, which can result in training inefficiency. When utilizing ViewMix, we notice that the CAM activation heatmap expands even further to encompass the partial view overlaid on top of the original view. This expansion not only enhances localization but also allows the ViewMix-based model to effectively utilize pixel information from the partial views, thereby contributing to improved training efficiency. Figure 3: Class Activation Mapping (CAM) of multiple CIFAR-10 samples. From left to right in each block, we have the augmented sample, (only) class activation mapping, and overlaid CAM on the input sample. From top to bottom we have the baseline SimCLR-style transformation, Cutout and ViewMix. ### Computational Overhead Time The ViewMix augmentation requires generating multiple views of the same image for overlaying one view to another. Consequently, the method apparently suggests a higher computation time for each round of transformation to complete. In section 3.3, we briefly explained that ViewMix augmentation leverages joint embedding learning architectures' multi-view transformation scheme, which prevents it from additional computational overhead. To back that statement, we benchmarked the computation time for the full SimCLR Baseline, ViewMix, Cutout, and CutMix transformation pipeline. For a fair comparison, the transformations were conducted on the exact same hardware specifications for two different image resolutions as shown in Table 7. From the table, we can observe that ViewMix execution time is a bit higher than the base SimCLR augmentation pipeline but takes less time than the Cutout and CutMix pipeline. Since other SSL methods also employ SimCLR-style transformations, the analysis is also applicable to those methods. ## 5 Conclusion This paper introduces ViewMix, a simple augmentation for joint embedding-based self-supervised image representation learning that promotes localization with regional dropout and replacement of view. ViewMix is straightforward to implement and can be flexibly integrated with SSL pretraining methods. With proper reuse of the different views from the SSL transformations, the augmentation adds no computational overhead. On ResNet-18-based CIFAR-10 linear evaluation, applying ViewMix with SimCLR, VICReg, BYOL, Barlow Twins, and VIbCReg improves the performance of the baseline by **1.20%**, **1.74%**, 0.57%, 0.37%, **2.34%**, respectively. Furthermore, we have shown that simply integrating ViewMix with these methods has resulted in image representations that are more robust by significant margins to previously unseen distortions than the baseline methods. 
Finally, this work highlights the potential of augmentations on the self-supervised representation learning process and, how applying specially designed augmentations, without bringing changes to the architecture or learning scheme, results in better representations.
2309.13173
BenLLMEval: A Comprehensive Evaluation into the Potentials and Pitfalls of Large Language Models on Bengali NLP
Large Language Models (LLMs) have emerged as one of the most important breakthroughs in NLP for their impressive skills in language generation and other language-specific tasks. Though LLMs have been evaluated in various tasks, mostly in English, they have not yet undergone thorough evaluation in under-resourced languages such as Bengali (Bangla). To this end, this paper introduces BenLLM-Eval, which consists of a comprehensive evaluation of LLMs to benchmark their performance in the Bengali language that has modest resources. In this regard, we select various important and diverse Bengali NLP tasks, such as text summarization, question answering, paraphrasing, natural language inference, transliteration, text classification, and sentiment analysis for zero-shot evaluation of popular LLMs, namely, GPT-3.5, LLaMA-2-13b-chat, and Claude-2. Our experimental results demonstrate that while in some Bengali NLP tasks, zero-shot LLMs could achieve performance on par, or even better than current SOTA fine-tuned models; in most tasks, their performance is quite poor (with the performance of open-source LLMs like LLaMA-2-13b-chat being significantly bad) in comparison to the current SOTA results. Therefore, it calls for further efforts to develop a better understanding of LLMs in modest-resourced languages like Bengali.
Mohsinul Kabir, Mohammed Saidul Islam, Md Tahmid Rahman Laskar, Mir Tafseer Nayeem, M Saiful Bari, Enamul Hoque
2023-09-22T20:29:34Z
http://arxiv.org/abs/2309.13173v2
BenLLMEval: A Comprehensive Evaluation into the Potentials and Pitfalls of Large Language Models on Bengali NLP ###### Abstract Large Language Models (LLMs) have emerged as one of the most important breakthroughs in natural language processing (NLP) for their impressive skills in language generation and other language-specific tasks. Though LLMs have been evaluated in various tasks, mostly in English, they have not yet undergone thorough evaluation in under-resourced languages such as Bengali (Bangla). In this paper, we evaluate the performance of LLMs for the low-resourced Bangla language. We select various important and diverse Bangla NLP tasks, such as abstractive summarization, question answering, paraphrasing, natural language inference, text classification, and sentiment analysis for zero-shot evaluation with ChatGPT, LLAMA-2, and Claude-2 and compare the performance with state-of-the-art fine-tuned models. Our experimental results demonstrate an inferior performance of LLMs for different Bangla NLP tasks, calling for further effort to develop better understanding of LLMs in low-resource languages like Bangla. ## 1 Introduction From the introduction of word embeddings (Bengio et al., 2003) to the growth of language models (Rogers et al., 2020), Natural Language Processing (NLP) has witnessed revolutionary advancements over the decades. Particularly since the advent of pretrained language models (Devlin et al., 2019; Liu et al., 2019), these models have produced state-of-the-art results on a variety of NLP tasks with little task-specific fine-tuning (Laskar et al., 2020, 2022). Specialized Bangla pretrained models like BanglaBERT (Bhattacharjee et al., 2022) and BanglaT5 (Bhattacharjee et al., 2023) have demonstrated exciting progress in many of the downstream Bangla NLP tasks like natural language inference (NLI), question answering (Ekram et al., 2022), natural language generation (Akash et al., 2023) etc. However, one concern for these pretrained models is that they require fine-tuning using domain-specific large annotated datasets and Bangla has remained an underrepresented language in NLP literature (Joshi et al., 2020; Chakraborty et al., 2021; Chowdhury et al., 2021) despite being the sixth most spoken language in the world with over \(300\) million native speakers (Wikipedia, 2023). Recent developments in large language models (LLMs), such as GPT-3 (Brown et al., 2020), Megatron (Shoeybi et al., 2019), Gopher (Rae et al., 2022), and OPT-175B (Zhang et al., 2022), have transformed the landscape and practices in NLP. These LLMs, with parameter sizes exceeding a hundred billion, are pre-trained on vast amounts of data and demonstrate strong generalization capability in few-shot and zero-shot learning which involves prompt-based learning avoiding parameter update for underlying architectures. A model with excellent zero-shot learning capabilities may reduce the need for huge annotated datasets by allowing the model to perform well on tasks that it was not trained on. However, because of their auto-regressive training objective, LLMs may frequently generate untruthful facts/toxic attitudes that diverge from the original input, preventing them from reaching widespread appeal (Ouyang et al., 2022). To this end, ChatGPT is one of the latest developments in the GPT-3.5 series models from OpenAI1 that has mitigated limitations of the previous LLMs and gained widespread popularity. 
ChatGPT and other LLMs (Touvron et al., 2023; Anil et al., 2023) are trained in multiple languages, although English constitutes the majority of the training data. The combination of multilingual training data has enabled the LLMs to accept inputs and generate responses in different languages. Though ChatGPT-like LLMs have demonstrated strong zero-shot performance in various NLP tasks in English (Laskar et al., 2023) and in some other languages (Lai et al., 2023) and domains (Jahan et al., 2023), they have yet to be extensively investigated in the low-resourced Bangla language.

This paper presents a comprehensive evaluation of LLMs' performance on various NLP tasks in the Bangla language, including abstractive summarization, question answering (QA), paraphrasing, natural language inference (NLI), text classification, and sentiment analysis. The evaluation incorporates meticulously crafted prompts to ensure rigorous assessment and accurate analysis. To evaluate the zero-shot performance of the LLMs (i.e., ChatGPT, LLaMA-2, and Claude-2) on benchmark Bangla NLP datasets for the aforementioned tasks, we conducted a comparative analysis with state-of-the-art models. To the best of our knowledge, this study represents the first attempt to assess the performance of LLMs in the Bangla language for these specific tasks. Despite some exceptional cases, our experimental results can be summarized as follows:

* The zero-shot learning performance of LLMs is inferior to that of the state-of-the-art supervised models across the majority of the evaluated tasks in the Bangla language. Given the substantial performance disparities observed, it is reasonable to deduce that LLMs, in their current form, are not suitable for serving as a comprehensive solution for diverse NLP tasks in Bangla.

* Considering the remarkable zero-shot proficiency of LLMs like ChatGPT in English and their subpar performance in low-resource languages like Bangla, this paper emphasizes the significance of investigating the intricate reasoning capabilities of LLMs in low-resource language contexts. Furthermore, it suggests the potential for developing LLMs tailored to diverse low-resource language groups, thereby addressing the challenges associated with linguistic scarcity and paving the way for improved language understanding and generation models.

## 2 Methodology

The objective of our study is to assess the efficacy of LLMs in the context of NLP tasks specific to the Bangla language. We cover \(6\) diverse and important Bangla NLP tasks, i.e., Abstractive Summarization, Question Answering (QA), Paraphrasing, Natural Language Inference (NLI), Text Classification, and Sentiment Analysis, over \(7\) benchmark datasets.

[Table 1: Overview of the evaluated tasks, their benchmark datasets, data splits (train/valid/test), and the task-specific prompts used for zero-shot evaluation.]
For this purpose, we evaluate the ChatGPT (GPT-3.5), Claude-2\({}^{2}\), and LLaMA-2-13b-Chat (Touvron et al., 2023) models. As it is not necessary to fine-tune the base architecture for inference, we focus on designing a zero-shot learning setting for the models. As a reference for comparison, we also report the state-of-the-art (SOTA) performance of the supervised models for each task. We prepare a task instruction \(T\) for a given test sample \(X\) and concatenate the text in the test sample with the task instruction to construct the prompt \(P\). The prompt \(P\) is then passed as input to the LLMs, which generate the response \(R\). A comprehensive description of the tasks, datasets, and prompts devised for evaluating each specific task is presented below and also summarized in Table 1. Prompts with sample inputs and outputs for each task are depicted in Table 3.

Footnote 2: https://www.anthropic.com/index/claude-2

**Abstractive Summarization:** Summarization is the process of automatically generating a concise and coherent summary of a longer text document (Nayeem and Chali, 2017, 2018; Laskar et al., 2020), preserving the most important information while reducing the length (Nayeem et al., 2018). Given a text sequence \(S\), the summarization task aims to generate a concise summary of \(S\). In this paper, we evaluate on the XL-Sum dataset (Hasan et al., 2021), which consists of 1 million manually annotated data samples from \(44\) languages. We only took the Bangla samples for evaluation.

**Question Answering:** For the question answering task, we evaluate the performance of LLMs on the SQuAD_Bangla dataset (Bhattacharjee et al., 2022). This dataset was constituted using two benchmark English datasets: SQuAD 2.0 (Williams et al., 2018) and TyDi QA (Clark et al., 2020). The objective of this task is to determine whether the answer to a given question \(Q\) can be inferred from the reference context \(C\). We provide the reference context along with the question, and ask the LLMs whether they can infer the answer to the question from the given reference context.

**Paraphrasing:** Given a Bangla text sequence \(S\), the paraphrasing task aims to generate a paraphrase of the input \(S\). To evaluate this task, we choose the Bangla samples from the IndicParaphrase dataset (Kumar et al., 2022), the largest Indic paraphrasing dataset across \(11\) Indic languages (Wikipedia, 2023), with around \(5.5\) million samples.
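The prompt-construction procedure described above (instruction \(T\) concatenated with test sample \(X\) to form prompt \(P\)) is straightforward to reproduce. The following is a minimal Python sketch of the zero-shot setup; `call_llm` is a generic stand-in for whichever chat-completion client (ChatGPT, Claude-2, or LLaMA-2) is being queried, and the function names and abbreviated instruction are illustrative rather than taken from the authors' code.

```python
# Sketch of the zero-shot prompting setup: prompt P = task instruction T + test sample X.
# `call_llm` is a placeholder callable, not a real API; it lets the sketch run end-to-end.

SUMMARIZATION_INSTRUCTION = (
    "Please provide a one-sentence summary of the following Bangla text input. "
    "Note: Please do not provide anything other than the summarized Bangla output."
)

def build_prompt(task_instruction: str, test_sample: str) -> str:
    """Construct the prompt P from instruction T and sample X."""
    return f"{task_instruction}\nInput: {test_sample}"

def zero_shot_predict(call_llm, task_instruction: str, test_sample: str) -> str:
    """Query the model once, with no fine-tuning and no in-context examples."""
    prompt = build_prompt(task_instruction, test_sample)
    return call_llm(prompt)  # response R

if __name__ == "__main__":
    # Dummy model stub so the sketch runs without API access.
    echo_model = lambda p: p.splitlines()[-1]
    print(zero_shot_predict(echo_model, SUMMARIZATION_INSTRUCTION, "..."))
```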
**Natural Language Inference:** Natural Language Inference (NLI) aims to predict the entailment/contradiction relations between two input sentences, i.e., a premise and a hypothesis. To evaluate LLMs for Bangla NLI, we utilize the BNLI dataset (Bhattacharjee et al., 2022) that provides annotated data with three categories, i.e., _Entailment, Contradiction,_ and _Neutral_, originally curated from the benchmark XNLI dataset (Conneau et al., 2018).

**Text Classification:** Text classification in NLP refers to the task of assigning a label or category to a given input text. We experimented with the _Soham Bengali News Classification_ dataset that is included in the IndicGLUE (Kakwani et al., 2020) benchmark, to evaluate the text classification capability of the LLMs in Bangla. The dataset contains six news categories, i.e., _kolkata, state, national, international, sports,_ and _entertainment_, and is part of the News Category Classification task of the IndicGLUE benchmark.

**Sentiment Analysis:** We evaluated the Sentiment Analysis capability of the LLMs with two datasets, i.e., SentNoB (Islam et al., 2021) and IndicSentiment (Doddapaneni et al., 2022). The SentNoB dataset comprises texts that are informally written in Bangla, collected from public comments on news and videos on social media, covering a wide range of domains such as _politics, agriculture, education_, etc. The second dataset, IndicSentiment, was manually curated for the IndicXTREME benchmark and contains product reviews of multiple categories. We only used the Bangla samples out of the \(12\) Indic languages in the dataset.

## 3 Results and Discussion

Based on our prompt-based zero-shot learning experiment, we report the performance of LLMs for different tasks and compare their performance with the current SOTA supervised fine-tuned models (see Table 2).

[Table 2: Performance comparison between LLMs and SOTA models on Abstractive Summarization (AS), Question Answering (QA), Paraphrasing (PP), Natural Language Inference (NLI), Text Classification (TC), and Sentiment Analysis (SA). EM, Acc., P, R, and F1 denote Exact Match, Accuracy, Precision, Recall, and F1 score, respectively.]

**Abstractive Summarization Evaluation:** On the XL-Sum dataset, we find that ChatGPT performs the best across all LLMs. We also find a strong similarity between the performance of ChatGPT and the fine-tuned mT5 model on the 'Rouge-1' and 'Rouge-L' metrics. However, there is a noticeable decline in ChatGPT's performance when considering 'Rouge-2'. A manual review of the ChatGPT responses demonstrates that ChatGPT typically produces output that is longer on average and has a higher word count than the gold label, containing more details about the input text. Moreover, we find that LLaMA-2-13b performs very poorly in abstractive summarization, as it ended up generating the summaries in English, resulting in very low ROUGE scores.

**Question Answering Evaluation:** Since LLMs frequently generate responses that are not exactly like the gold label but are nonetheless correct, our QA evaluation requires human intervention and considers two metrics: Exact Match (**EM**) and F1 score. We find that among the LLMs, the zero-shot Claude-2 performs the best on this dataset, almost matching the SOTA result (79.34%) achieved by BanglaBERT. We also find that ChatGPT performs reasonably well on the F1 metric for this task on the SQuAD_Bangla dataset, reaching a \(78.67\)% F1 score without any fine-tuning on the \(\sim\)120k training samples. However, LLM performance on the EM metric is below par, which is expected given that these are open-domain generative models that typically generate a wide variety of responses.
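The EM and F1 metrics used for the QA evaluation above can be illustrated with the short sketch below. This is a minimal SQuAD-style re-implementation, not the authors' evaluation script; Bangla-specific normalization is reduced to whitespace splitting for simplicity.

```python
# Minimal sketch of SQuAD-style Exact Match and token-level F1 for QA evaluation.
from collections import Counter

def exact_match(prediction: str, gold: str) -> float:
    return float(prediction.strip() == gold.strip())

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def evaluate(pairs):
    """Corpus-level EM and F1 (in %), averaged over (prediction, gold) pairs."""
    em = sum(exact_match(p, g) for p, g in pairs) / len(pairs)
    f1 = sum(token_f1(p, g) for p, g in pairs) / len(pairs)
    return 100 * em, 100 * f1
```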
**Paraphrasing Evaluation:** For the paraphrasing task, we consider the BLEU score as the evaluation metric, which compares the \(n\)-grams of the LLM-paraphrased sentences to the \(n\)-grams of the five reference sentences in the IndicParaphrase dataset. The paraphrase generation task also yielded very low BLEU scores, a phenomenon similar to what happened with the **EM** metric for the QA task.

**Natural Language Inference Evaluation:** We observe that while ChatGPT achieves the best performance among the LLMs, it only reaches \(52.71\)% accuracy on the NLI task on the BNLI dataset, which is inferior to the SOTA BanglaBERT model's 83% accuracy. For this task, prompt tuning is carried out by delivering task descriptions in several ways, explaining how to categorize a sample into _Entailment, Contradiction,_ and _Neutral_. It has been observed that adding examples to the task description may marginally improve ChatGPT's performance. This indicates that LLMs' logical reasoning capabilities in Bangla are still lacking.

**Text Classification Evaluation:** LLMs performed poorly on the test set of the Soham News Article classification dataset from the IndicGLUE benchmark (Kakwani et al., 2020), with the best-performing LLM on this dataset, the Claude-2 model, achieving only \(20.76\)% overall accuracy (while LLaMA-2-13b-chat was the worst performer with an accuracy of \(1.06\)%). On the contrary, the XLM-R model, which is the SOTA for this task, obtained an accuracy of \(87.60\)% on the test set. This significant performance disparity may have been caused because the target classes are not specified in the prompt. However, when prompted with the target classes, i.e., _state_, _kolkata_, _national_, _sports_, _entertainment_, _international_, the overall classification accuracy increased significantly. For instance, the accuracy for ChatGPT increases to \(48.48\)% from \(18.36\)% when such descriptive prompting is used.

**Sentiment Analysis Evaluation:** In the sentiment classification task, we observe that ChatGPT performed exceptionally well on the test set of the IndicSentiment dataset (Doddapaneni et al., 2022), attaining an impressive accuracy of \(90.20\)%, outperforming the SOTA IndicBERT, which achieved an \(89.3\)% accuracy score, by a small margin. To further evaluate the performance of LLMs in the Sentiment Analysis task, we utilize the test set of SentNoB (Islam et al., 2021).
In this case, we use the precision score, recall score, and F1 score to evaluate the performance of LLMs. While ChatGPT had the highest precision score of \(57.70\)%, its recall score and F1 score are comparatively lower at \(54.56\)% and \(53.13\)%, respectively. The SOTA model, Bi-LSTM with Attention, which utilized randomly initialized text embeddings, obtained a precision score of \(56.16\)%, a recall score of \(64.97\)%, and an F1 score of \(60.25\)%.

### Error Analysis

We found some interesting cases where LLMs failed to generate coherent responses given the input prompt. We discuss these cases for the overall best-performing LLM, ChatGPT, in the following:

#### 3.1.1 Abstractive Summarization

In the summarization task, we notice that ChatGPT's responses are highly inconsistent. While ChatGPT occasionally recognizes the context and provides a good summary of the key points, it frequently misses these details in the reference text and overstuffs the response. One example can be demonstrated as follows:

**Prompt:** Please provide a one-sentence summary of the following Bangla text input. The input will be a long Bangla paragraph, the output should be a short Bangla paragraph summarizing only the vital information of the input text in one sentence. Please make sure that the output contains the most essential statistical data. Note: Please do not provide anything other than the summarized Bangla output.

[MISSING_PAGE_POST]

no news about the fire as there was no untoward incident.] This is a matter of concern and presents additional evidence that ChatGPT is not currently suitable for serving as a universal problem solver in the Bangla language.

#### 3.1.4 Natural Language Inference

The confusion matrix obtained by evaluating ChatGPT for the NLI task is demonstrated in Figure 1. The matrix reveals that ChatGPT demonstrates high accuracy in predicting the _Contradiction_ and _Entailment_ labels, but encounters significant challenges in accurately predicting the _Neutral_ labels.
Approximately 49% of the misclassifications arise when attempting to predict the _Neutral_ class. A thorough manual examination uncovers that ChatGPT often exhibits bias towards expressing a particular opinion polarity (_Contradiction_, _Entailment_) when dealing with logical relationships in Bangla, and it fails to appropriately recognize and convey neutrality even in cases where it is evident.

Figure 1: Confusion matrix obtained by evaluating ChatGPT for the NLI task on the BNLI dataset.

**Prompt:** Please determine the logical relationship between the given hypothesis and premise. The input will consist of two sentences written in the Bangla language. The first sentence represents the premise, while the second sentence represents the hypothesis. Your task is to determine whether the hypothesis is false (contradiction), true (entailment), or inconclusive (neutral) given the premise. Please output a number indicating the logical relationship between them: 0 for false (contradiction), 1 for true (entailment), and 2 for inconclusive (neutral). Note: Please avoid providing any additional information beyond the logical relationship.

**Premise:** **Hypothesis:**

**Expected Response:** 1 (Entailment)

**ChatGPT Response:** 2 (Neutral)

#### 3.1.5 News Article Classification

Our experimental results show that ChatGPT misclassified the category _kolkata_ the most, failing to generate the correct response in 503 of the test set's 569 examples (88.4%). The category with the next highest frequency of misclassification is _national_: ChatGPT was unable to accurately classify this category in 95 out of 175 instances, representing a misclassification rate of 54.20%. On the other hand, ChatGPT effectively identified the category labeled as _entertainment_ in 110 out of 130 occurrences, resulting in a success rate of 92.30%.

#### 3.1.6 Sentiment Analysis

The examples listed in Table 4 illustrate the error cases where ChatGPT misclassifies the sentiment of the given input. The first seven examples are from the IndicSentiment test set and the rest of the examples are from the SentNoB dataset. Notably, the _positive_ class exhibits the most frequent misclassification in both test sets, suggesting that ChatGPT still has a considerable way to go toward attaining a complete understanding of the Bangla language. In the instances where the class is _neutral_, the examples are mostly simple statements with no sentiment words associated with them (table entries 11 and 12), yet ChatGPT classified them as either _negative_ or _positive_. Furthermore, ChatGPT demonstrates difficulties in capturing challenging Bangla sentiment words, i.e., (_Accommodating_ in English), (_Advantage_ in English), (_Uncompromising_ in English), etc. (examples 2, 8, 13 in Table 4).

**Unexpected, out-of-range response:** In the tasks of text classification and sentiment analysis, the prompts are designed to include the target classes in order to identify any unexpected or non-class responses. Based on our experimental findings, we observe that the outputs generated by ChatGPT exhibit outcomes that deviate from the expected range of outputs.

**News Article Classification.**

**Prompt:** For the Bengali news article given in the input, identify the appropriate section title for the article from the following classes: kolkata, state, sports, national, entertainment, international. Note: Do not output any unnecessary words other than just the section title. The response should be in the English language and should be one word.
**Input:** (_long Bangla news article_)

**Expected Response:** State

**ChatGPT Response:** Development

Our study reveals that ChatGPT, in particular, generated the word _Development_, which deviates from the prompted range of responses. The generated output is considered out of range as the prompt specifically instructs the model to produce text within the given categories of _kolkata_, _state_, _sports_, _national_, _entertainment_, and _international_. In addition, the results of our experiment indicate that ChatGPT produced 12 classes that are outside the target class range, thus demonstrating the model's inability to comply with the specified instructions for this specific task.

**Sentiment Analysis.**

**Prompt:** For the given Input, is the sentiment in the input positive or negative? Note: Please do not output anything other than the sentiment. Exclude any word like "Sentiment" in the response.

**Input:** (_Bangla text_)
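The "out of range" analysis above amounts to checking each generated label against the allowed class set. The sketch below is an illustrative re-implementation of that check, not the authors' code; the normalization step and variable names are assumptions.

```python
# Illustrative sketch: flag generated labels that fall outside the prompted class set.
ALLOWED_CLASSES = {"kolkata", "state", "sports", "national", "entertainment", "international"}

def normalize(label: str) -> str:
    # Lowercase and strip trailing punctuation so e.g. "Sports." maps to "sports".
    return label.strip().strip(".:").lower()

def is_in_range(raw_response: str) -> bool:
    return normalize(raw_response) in ALLOWED_CLASSES

predictions = ["State", "Development", "sports."]
out_of_range = [p for p in predictions if not is_in_range(p)]
print(out_of_range)  # ['Development'] -> counted as an out-of-range response
```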
## Acknowledgements

This research is supported by the generic research funds of York University. We also thank Anthropic for providing free access to the Claude-2 model and Compute Canada for its computing resources.
2309.08235
PRIEST: Projection Guided Sampling-Based Optimization For Autonomous Navigation
Efficient navigation in unknown and dynamic environments is crucial for expanding the application domain of mobile robots. The core challenge stems from the nonavailability of a feasible global path for guiding optimization-based local planners. As a result, existing local planners often get trapped in poor local minima. In this paper, we present a novel optimizer that can explore multiple homotopies to plan high-quality trajectories over long horizons while still being fast enough for real-time applications. We build on the gradient-free paradigm by augmenting the trajectory sampling strategy with a projection optimization that guides the samples toward a feasible region. As a result, our approach can recover from the frequently encountered pathological cases wherein all the sampled trajectories lie in the high-cost region. Furthermore, we also show that our projection optimization has a highly parallelizable structure that can be easily accelerated over GPUs. We push the state-of-the-art in the following respects. Over the navigation stack of the Robot Operating System (ROS), we show an improvement of 7-13% in success rate and up to two times in total travel time metric. On the same benchmarks and metrics, our approach achieves up to 44% improvement over MPPI and its recent variants. On simple point-to-point navigation tasks, our optimizer is up to two times more reliable than SOTA gradient-based solvers, as well as sampling-based approaches such as the Cross-Entropy Method (CEM) and VPSTO. Codes: https://github.com/fatemeh-rastgar/PRIEST
Fatemeh Rastgar, Houman Masnavi, Basant Sharma, Alvo Aabloo, Jan Swevers, Arun Kumar Singh
2023-09-15T08:12:48Z
http://arxiv.org/abs/2309.08235v1
# PRIEST: Projection Guided Sampling-Based Optimization For Autonomous Navigation ###### Abstract Efficient navigation in unknown and dynamic environments is crucial for expanding the application domain of mobile robots. The core challenge stems from the non-availability of a feasible global path for guiding optimization-based local planners. As a result, existing local planners often get trapped in poor local minima. In this paper, we present a novel optimizer that can explore multiple homotopies to plan high-quality trajectories over long horizons while still being fast enough for real-time applications. We build on the gradient-free paradigm by augmenting the trajectory sampling strategy with a projection optimization that guides the samples toward a feasible region. As a result, our approach can recover from the frequently encountered pathological cases wherein all the sampled trajectories lie in the high-cost region. Furthermore, we also show that our projection optimization has a highly parallelizable structure that can be easily accelerated over GPUs. We push the state-of-the-art in the following respects. Over the navigation stack of the Robot Operating System (ROS), we show an improvement of 7-13% in success rate and up to two times in total travel time metric. On the same benchmarks and metrics, our approach achieves up to 44% improvement over MPPI and its recent variants. On simple point-to-point navigation tasks, our optimizer is up to two times more reliable than SOTA gradient-based solvers, as well as sampling-based approaches such as the Cross-Entropy Method (CEM) and VPSTO. Codes: [https://github.com/fatemeh-rastgar/PRIEST](https://github.com/fatemeh-rastgar/PRIEST) ## I Introduction Smooth and collision-free navigation in unknown and dynamic environments is crucial for the deployment of mobile robots in places like hospitals, warehouses, airports, etc. In these human-habitable environments, the layout of the static obstacles can change over time. Moreover, human movement can create additional dynamic obstacles obstructing the robot's movements. As a result, the prior computed global plan invariably becomes infeasible during the robot's motion and can no longer guide the local planner toward safe state-space regions. One possible workaround is to make the local planners themselves capable of planning over long horizons while exploring different trajectory homotopies in real time. Our work is geared towards imparting such capabilities to mobile robots. In this paper, we consider optimization-based local planners because of their ability to satisfy constraints and produce smooth motions. Moreover, this approach also allows us to encode some desired higher-level behaviors through appropriately designed cost functions. There are two broad classes of approaches to solving optimization problems encountered during trajectory planning. On one end of the spectrum, we have gradient-based approaches [1, 2] that require the cost and constraint functions to be differentiable. Typically, these methods depend heavily on the user providing a good guess for the solution to initialize the optimization solver. However, finding good trajectory initializations is challenging in fast-changing environments. Some approaches, e.g., based on integer programming [3], can cope with poor initialization. But they are typically computationally too slow, especially in very cluttered environments with tens of obstacles. 
On the other end of the spectrum, we have planners based on sampling-based optimizers such as Cross-Entropy Method (CEM) [4] and Covariance Matrix Adaptation-Evolution Strategy (CMA-ES) [5]. These optimizers perform a random sampling of the state-space trajectories to obtain a locally optimal solution. Due to this exploration property, they can often come up with better solutions than purely gradient-based approaches [6]. Moreover, sampling-based optimizers are easily parallelizable and thus can be accelerated over GPUs. However, one fundamental drawback is that these optimizers typically fail when all the sampled trajectories lie in the high-cost region (refer Fig.1(a-c)). Our main motivation in this paper is to combine the benefits of both sampling-based and gradient-based approaches. Existing efforts in this direction are mostly restricted to using sampling-based optimizers for computing a good initialization trajectory, which is then subsequently fed to the gradient-based solver [7]. Our experiments in Section V-C3 show that such approaches do not work reliably in difficult benchmarks since they still inherit the issues prevalent with Fig. 1: A comparison of CEM and our proposed approach. Fig.(a)-(c) shows how typical CEM (or any sampling-based optimizer) struggles when all the sampled initial trajectories lie in the high-cost/infeasible region. Our approach, PRIEST, embeds a projection optimizer within any standard sampling-based approach that pushes the samples toward feasible regions before evaluating their cost. both individual classes of approaches. Moreover, at a more fundamental level, the sampling optimizer is unaware of the capabilities of the downstream gradient-based solver or how it refines the initial guess. Our main idea in this paper is to use gradient-based approaches to improve the inner working of sampling-based optimization. In particular, we formulate a projection optimization to guide the sampling process toward low-cost regions at each iteration. As a result, our approach can recover from the pathological cases where all sampled trajectories are infeasible, e.g., due to violation of collision constraints(see Fig.1(d-f)). Our core innovations and their benefits are summarized below. **Algorithmic Contribution:** We present Projection Guided Sampling Based Optimization (PRIEST). The key building block is a novel optimizer that can take a set of trajectories and project each of them onto the feasible set. This allows us to guide sampled trajectories toward feasible regions before evaluating their cost and subsequent refinement of the sampling distribution. We show how our projection optimizer can be effectively parallelized and accelerated over GPUs by reformulating the underlying collision and kinematic constraints into polar form and using an Alternating Minimization (AM) approach to the resulting problem. Finally, we show how our projection optimizer naturally integrates with decentralized variants of sampling-based optimizers [8], wherein multiple sampling distributions are refined in parallel to improve the optimality of the solution. See Section IV for a summary of contributions over authors' prior works. **Improvement over the State-of-the-art (SOTA):** We show that PRIEST outperforms existing approaches in terms of success rate, time-to-reach the goal, and computation time, etc. In particular, we show at least 7% improvement over the ROS Navigation stack in success rate on the BARN dataset [9], while reducing the travel time by a factor of two. 
On the same benchmarks, our success rate is at least 35% better than SOTA local sampling-based optimizers like MPPI [10] and log-MPPI [11]. Additionally, we consider a point-to-point navigation task and compare PRIEST with the SOTA gradient-based solvers, ROCKIT [1](a collection of optimizers like IPOPT, ACADO, etc) and FATROP [2], and sampling-based methods CEM and VPSTO [5]. We show up to \(2\mathrm{x}\) improvement in success rate over these baselines. Furthermore, we show that PRIEST respectively has 17% and 23% higher success rates than the ROS Navigation stack and other SOTA approaches in dynamic environments. ## II Mathematical Preliminaries _Symbols and Notations:_ Small case letters with regular and bold font represent scalars and vectors, respectively. Matrices have upper-case bold fonts. The variables \(t\) and \(T\) are time stamps and transpose, respectively. The number of planning steps, obstacles, decision variables, and samples are shown as \(n_{p},n_{o},n_{v}\) and \(N_{b}\). The left subscript \(k\) denotes the trajectory optimizer's iteration. The rest of the symbols will be defined in the first place of use. ### _Problem Formulation_ #### Ii-A1 Differential Flatness We leverage differential flatness to make our approach applicable to a large class of systems such as wheeled mobile robots, quadrotors, etc. Specifically, we assume \(\mathbf{u}\!=\!\mathbf{\Phi}(x^{(q)}(t),y^{(q)}(t),z^{(q)}(t))\): the control inputs can be obtained through some analytical mapping \(\mathbf{\Phi}\) of \(q^{th}\) level derivatives of the position-level trajectory. For example, for a quadrotor, the pitch, roll, yaw angles, and thrust can be analytically expressed in terms of axis-wise accelerations. #### Ii-A2 Trajectory Optimization We are interested in solving the following 3D trajectory optimization: \[\min_{x(t),y(t),z(t)}c_{1}(x^{(q)}(t),y^{(q)}(t),z^{(q)}(t)), \tag{1a}\] \[x^{(q)}(t),y^{(q)}(t),z^{(q)}(t)|_{t=t_{0}}\!=\!\mathbf{b}_{0},\] \[x^{(q)}(t),y^{(q)}(t),z^{(q)}(t)|_{t=t_{f}}\!=\!\mathbf{b}_{f},\] (1b) \[\dot{x}^{2}(t)+\dot{y}^{2}(t)+\dot{z}^{2}(t)\leq v_{max}^{2},\] \[\ddot{x}^{2}(t)+\ddot{y}^{2}(t)+\ddot{z}^{2}(t)\leq a_{max}^{2},\] (1c) \[s_{min}\leq(x(t),y(t),z(t))\leq s_{max}\] (1d) \[-\frac{(x(t)\!-\!x_{o,j}(t))^{2}}{a^{2}}\!-\!\frac{(y(t)\!-\!y_{o,j}(t))^{2}}{a^{2}}\!-\!\frac{(z(t)\!-\!z_{o,j}(t))^{2}}{b^{2}}\!\!+\!1\leq 0, \tag{1e}\] where \((x(t),y(t),z(t))\) and \((x_{o,j}(t),y_{o,j}(t),z_{o,j}(t))\) respectively denote the robot and the \(j^{th}\) obstacle position at time \(t\). The function \(c_{1}(.)\) is defined in terms of derivatives of the position-level trajectories and can encompass commonly used penalties on accelerations, velocities, curvature, etc. We can also leverage differential flatness to augment control costs in \(c_{1}(.)\) as well. The affine inequalities (1d) model bounds on the robot workspace. The vectors \(\mathbf{b}_{0}\) and \(\mathbf{b}_{f}\) in (1b) represent the initial and final values of boundary condition on the \(q^{th}\) derivative of the position-level trajectory. In our formulation, \(q\!=\!\{0,1,2\}\). Inequalities (1c) denotes the velocity and acceleration bounds with their respective maximum values being \(v_{max}\) and \(a_{max}\). In (1e), we enforce collision avoidance, assuming obstacles are modeled as axis-aligned ellipsoids with dimensions \((a,a,b)\). 
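For concreteness, the ellipsoidal collision constraint (1e) can be checked numerically as in the short sketch below; the obstacle dimensions and trajectories used here are synthetic placeholders rather than values from our benchmarks.

```python
import numpy as np

def collision_residual(traj, obs_traj, a=0.4, b=0.6):
    """Evaluate the left-hand side of (1e) for one robot/obstacle pair.

    traj, obs_traj: (n_p, 3) positions of the robot and obstacle over time.
    Values <= 0 mean the constraint is satisfied at that time step.
    """
    d = traj - obs_traj
    return -(d[:, 0] ** 2) / a**2 - (d[:, 1] ** 2) / a**2 - (d[:, 2] ** 2) / b**2 + 1.0

robot = np.random.uniform(-2, 2, size=(50, 3))
obstacle = np.zeros((50, 3))
violations = collision_residual(robot, obstacle) > 0.0
print(int(violations.sum()), "time steps in collision")
```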
**Remark 1**.: _The cost functions \(c_{1}(.)\) need not be convex, smooth or even have an analytical form in our approach._ #### Ii-A3 Trajectory Parametrization and Finite Dimensional Representation To ensure smoothness in the trajectories, we parametrize the optimization variables \((x(t),y(t),z(t))\) as \[\begin{bmatrix}x(t_{1})\\ \vdots\\ x(t_{n_{p}})\end{bmatrix}\!\!=\!\mathbf{P}\ \mathbf{c}_{x,}\!\!\!\begin{bmatrix}y(t_{1}) \\ \vdots\\ y(t_{n_{p}})\end{bmatrix}\!\!=\!\mathbf{P}\mathbf{c}_{y,}\!\!\!\begin{bmatrix}z(t_ {1})\\ \vdots\\ z(t_{n_{p}})\end{bmatrix}\!\!=\!\mathbf{P}\ \mathbf{c}_{z} \tag{2}\] where \(\mathbf{P}\) is a matrix created using polynomial basis functions that are dependent on time and \(\mathbf{c}_{x},\mathbf{c}_{y},\mathbf{c}_{z}\) represent the coefficients of the polynomial. The expression remains applicable for derivatives by utilizing \(\dot{\mathbf{P}}\) and \(\ddot{\mathbf{P}}\). By incorporating the parametrized optimization variables stated in (2) and compact representation of variables, we can reframe the optimization problem (1a)-(1e) as follows: \[\min_{\mathbf{\xi}}c_{1}(\mathbf{\xi}) \tag{3a}\] \[\mathbf{A}\mathbf{\xi}=\mathbf{b}_{eq}\] (3b) \[\mathbf{g}(\mathbf{\xi})\leq\mathbf{0}, \tag{3c}\] where \(\boldsymbol{\xi}\)=\(\left[\mathbf{c}_{T}^{T}\,\mathbf{c}_{y}^{T}\,\mathbf{c}_{z}^{T}\right]\). With a slight abuse of notation, we have now used \(c_{1}(.)\) to denote a cost function dependent on \(\boldsymbol{\xi}\). The matrix \(\mathbf{A}\) is a block diagonal where each block on the main diagonal consists of \(\left[\mathbf{P}_{0}\ \tilde{\mathbf{P}}_{0}\ \tilde{\mathbf{P}}_{0}\ \mathbf{P}_{-1}\right]\). The subscript \(0\), \(-1\) signify the first and last row of the respective matrices and pertain to the initial and final boundary constraints. The vector \(\mathbf{b}_{eq}\) is simply the stack of \(\mathbf{b}_{0}\) and \(\mathbf{b}_{f}\). The function \(\mathbf{g}\) contains all the inequality constraints (1c)-(1e). ## III Main Algorithmic Results This section presents our main algorithmic contributions. An overview of our approach is shown in Fig.2. The main differentiating factor from existing baselines lies in the insertion of the projection optimizer between the sampling and cost evaluation block. The projection optimizer aids in constraint handling by pushing the sampled trajectories toward feasible regions. In this sense, our approach combines the benefits of both a gradient-free approach and those based on differentiable cost/constraint functions. As shown in Appendix VII, our projection block, in particular, leverages tools from convex optimization. We next present our main building block: the projection optimizer, followed by its integration into a sampling-based optimizer. ### _Projection Optimization_ Consider the following optimization problem \[\underset{\boldsymbol{\xi}_{i}}{\min}\frac{1}{2}\|\boldsymbol{\xi}_{i}- \boldsymbol{\xi}_{i}\|_{2}^{2},\ i=1,2,...,N_{b} \tag{4}\] \[\mathbf{A}\boldsymbol{\xi}_{i}=\mathbf{b}_{eq},\qquad\mathbf{g}(\boldsymbol{ \overline{\xi}}_{i})\leq\boldsymbol{0} \tag{5}\] The cost function (4) aims to minimally modify the \(i^{th}\) sampled trajectory \(\boldsymbol{\xi}_{i}\) to \(\boldsymbol{\overline{\xi}}_{i}\) in order to satisfy the equality and inequality constraints. 
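To make the parametrization in (2) concrete, the following NumPy sketch builds a time-dependent basis matrix \(\mathbf{P}\) together with its derivatives and maps a coefficient vector to way-points, velocities, and accelerations. It uses a plain monomial basis for illustration; the actual basis functions and dimensions used in PRIEST may differ.

```python
import numpy as np

def basis_matrices(t: np.ndarray, n_coeff: int):
    """Return P, Pdot, Pddot with shape (n_p, n_coeff) for the time stamps t."""
    powers = np.arange(n_coeff)
    P = t[:, None] ** powers
    Pdot = np.where(powers >= 1, powers * t[:, None] ** np.clip(powers - 1, 0, None), 0.0)
    Pddot = np.where(powers >= 2, powers * (powers - 1) * t[:, None] ** np.clip(powers - 2, 0, None), 0.0)
    return P, Pdot, Pddot

n_p, n_coeff = 50, 11                      # planning steps, coefficients per axis (illustrative)
t = np.linspace(0.0, 5.0, n_p)
P, Pdot, Pddot = basis_matrices(t, n_coeff)

c_x = np.random.randn(n_coeff)             # one sampled coefficient vector for the x-axis
x_traj, x_vel, x_acc = P @ c_x, Pdot @ c_x, Pddot @ c_x   # Eq. (2) and its derivatives
print(x_traj.shape, x_vel.shape, x_acc.shape)
```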
In Appendix VII, we show that for a certain class of constraint functions \(\mathbf{g}\) formed with quadratic and affine constraints, optimization (4)-(5) can be reduced to a fixed-point iteration of the following form:

\[{}^{k+1}\overline{\boldsymbol{\xi}}_{i}=\arg\min_{\overline{\boldsymbol{\xi}}_{i}}\frac{1}{2}\left\|\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\xi}_{i}\right\|_{2}^{2}-{}^{k+1}\boldsymbol{\lambda}_{i}^{T}\overline{\boldsymbol{\xi}}_{i}+\frac{\rho}{2}\left\|\mathbf{F}\overline{\boldsymbol{\xi}}_{i}-\mathbf{h}\big({}^{k}\overline{\boldsymbol{\xi}}_{i},{}^{k}\boldsymbol{\lambda}_{i}\big)\right\|_{2}^{2}, \tag{6a}\]
\[\mathbf{A}\overline{\boldsymbol{\xi}}_{i}=\mathbf{b}_{eq} \tag{6b}\]

In (6a)-(6b), \(\mathbf{F}\) represents a constant matrix and \(\mathbf{h}\) is some closed-form analytical function. The vector \({}^{k+1}\boldsymbol{\lambda}_{i}\) is the Lagrange multiplier at iteration \(k+1\) of the projection optimization. We derive these entities in Appendix VII. The main computational burden of projection optimization stems from solving the QP (6a). However, since there are no inequality constraints in (6b), the QP essentially boils down to an affine transformation of the following form:

\[\begin{bmatrix}{}^{k+1}\overline{\boldsymbol{\xi}}_{i}\\ \boldsymbol{\nu}_{i}\end{bmatrix}=\begin{bmatrix}\mathbf{I}+\rho\mathbf{F}^{T}\mathbf{F}&\mathbf{A}^{T}\\ \mathbf{A}&\mathbf{0}\end{bmatrix}^{-1}\begin{bmatrix}\boldsymbol{\xi}_{i}+{}^{k+1}\boldsymbol{\lambda}_{i}+\rho\mathbf{F}^{T}\mathbf{h}\big({}^{k}\overline{\boldsymbol{\xi}}_{i},{}^{k}\boldsymbol{\lambda}_{i}\big)\\ \mathbf{b}_{eq}\end{bmatrix}, \tag{7}\]

where \(\boldsymbol{\nu}_{i}\) is the dual variable associated with the equality constraints (6b). The matrix on the right-hand side of (7) is the same for every sample \(i\); it can therefore be factorized once and the projection of all \(N_{b}\) sampled trajectories can be batched and accelerated over GPUs.
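A minimal NumPy sketch of the affine update in (7) is given below: the KKT matrix of the equality-constrained QP is assembled once and shared by all sampled trajectories. The dimensions, the construction of \(\mathbf{F}\), \(\mathbf{e}\), and \(\mathbf{A}\), and the multiplier handling are placeholders here; the full derivation is in Appendix VII.

```python
import numpy as np

def solve_projection_qp(xi_samples, F, A, b_eq, lam, e, rho=1.0):
    """Batched solve of Eq. (7): one KKT system shared by all samples.

    xi_samples: (N_b, n_v) sampled trajectory coefficients
    F:          (m, n_v) stacked constraint matrix;  e: (N_b, m) targets from h(.)
    A:          (p, n_v) equality-constraint matrix; b_eq: (p,)
    lam:        (N_b, n_v) Lagrange multipliers
    """
    n_v, p = F.shape[1], A.shape[0]
    kkt = np.block([[np.eye(n_v) + rho * F.T @ F, A.T],
                    [A, np.zeros((p, p))]])
    # Right-hand sides for every sample, stacked column-wise.
    rhs = np.vstack([(xi_samples + lam + rho * (e @ F)).T,
                     np.tile(b_eq[:, None], (1, xi_samples.shape[0]))])
    sol = np.linalg.solve(kkt, rhs)           # one factorization, all samples at once
    return sol[:n_v].T                        # projected trajectories, shape (N_b, n_v)
```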
As shown in Fig.4, in the decentralized variant, we instantiate several optimizers in parallel and choose the lowest cost-optimal solution among these. As a result, such variants are shown to be more capable of escaping poor local minima. Our key objective in this section is to show that our projection optimizer naturally integrates into decentralized optimizers built along the lines of [8]. Fig.4 shows our proposed approach. We initialize \(M\) different Gaussian distributions \({}^{l}\boldsymbol{\mu}_{j},{}^{l}\boldsymbol{\Sigma}_{j}\) at \(l=0\). We sample \(\frac{N_{v}}{M}\) samples of \(\boldsymbol{\xi}\) from each of these distributions. The sampled \(\boldsymbol{\xi}_{ij}\) (\(i^{th}\) sample from \(j^{th}\) Gaussian ) are then stacked row-wise to form a matrix. Importantly, such a construction allows us to easily track which row of the matrix corresponds to samples from which of the \(M\) Gaussian distributions. The projection optimizer then simultaneously guides all the samples toward feasible regions. The output from the projection is then separated back into \(M\) sets on which the cost functions are evaluated in parallel. We then update the sampling distribution in parallel based on the cost values. Finally, after \(l\) iterations, the optimal trajectory from each optimizer instance is compared and the one with the lowest cost is selected as the optimal solution. ## IV Connections to Existing Works **Connection to CEM-GD:** Alternate approaches of combining sampling and gradient-based approach were presented recently in [4, 13]. In these two cited works, the projection at line 5 of Alg.1 is replaced with a gradient step of the form \(\boldsymbol{\bar{\xi}}_{i}=\boldsymbol{\xi}_{i}-\sigma\nabla_{\boldsymbol{ \xi}}c_{1}\), for some learning-rate \(\sigma\). Our approach PRIEST improves [4, 13] in two main aspects. First, it can be applied to problems with non-smooth and non-analytical cost functions. Second, the gradient-descent-based can be computationally slow as it relies on taking small steps toward optimal solutions. In contrast, the projection optimizer in Alg.1 leverages convex optimization, specifically quadratic programming to ensure faster convergence. **Exploration Strategy:** Sampling-based optimizers like MPPI [10] and its variants [11, 14] explore by injecting random perturbation into the control inputs to obtain a trajectory distribution. PRIEST, on the other hand, injects perturbation in the parameter space (polynomial coefficients), leading to one core advantage. We can inject larger perturbations into the parameter space that helps in better exploration over longer horizons (see Fig.5(a)-(b)). Moreover, the projection optimizer ensures that the trajectory distribution satisfies boundary constraints and is pushed toward the feasible Fig. 4: Decentralized variant of PRIEST inspired by [8], wherein we instantiate and maintain different \(M\) Gaussian distributions in parallel to counter poor local minima. An important thing to note is that our projection optimizer naturally fits into the decentralized structure. region. In contrast, increasing the covariance of control perturbation has been shown to make MPPI diverge [11]. **Improvement over Author's Prior Work [15]** PRIEST builds on our prior work [15] that used projection-augmented sampling for visibility-aware navigation. Our current work targets a much broader scope of navigation problems with potentially non-smooth costs. 
On the algorithmic side, we extended the projection optimizer of [15] to 3D (see Appendix VII) and improved the distribution update rule to account for actual cost values. Moreover, the decentralized variant of PRIEST is also a major contribution over [15]. On the experimental side, we present a more elaborate benchmarking with several existing methods on both open-source as well as custom data sets. ## V Validation and Benchmarking ### _Implementation Details_ We developed Alg.1, PRIEST, in Python using JAX [16] library as our GPU-accelerated algebra backend. The simulation framework for our experiments was built on top of the ROS [17] and utilized the Gazebo physics simulator. All benchmarks were executed on a Legion7 Lenovo laptop equipped with an Intel Core i7 processor and an Nvidia RTX 2070 GPU. Additionally, we used the open3d library to downsample PointCloud data [18]. For all the benchmarks, we chose \(N=13,N_{b}=110\), \(N_{proj}=80\) and \(N_{clite}=20\). We develop a Model Predictive Control (MPC) pipeline on top of our optimizer. For each MPC iteration, we employ LIDAR scanning to infer obstacle locations. More specifically, we take each LIDAR point as an obstacle and employ voxel-downsampling through the open3d library to keep the number of obstacles tractable. #### V-A1 Baselines and Benchmarks We compared our approach with different baselines in three sets of benchmarks. **Comparison on BARN Dataset [9]:** This benchmark requires a mobile robot to iterative plan in a receding horizon fashion to navigate an obstacle field. We used the BARN dataset that contains 300 environments with varied levels of complexity, specifically designed to create local-minima traps for the robot. In this benchmark, we evaluate our approach against DWA [19], TEB [20] implemented in the ROS navigation stack and MPPI [10], and log-MPPI [11]. The first two baselines are still considered the workhorse of robot navigation in both industry and academia while the latter two are the SOTA gradient-free approaches for planning. No prior map was made available for any of the approaches. As a result, all approaches had to rely on their on-the-fly planning with a local cost map (or point cloud for our approach) for navigation. TEB and DWA used a combination of graph-search and optimization while MPPI and log-MPPI are purely optimization-based approaches. We used a holonomic robot modeled as a double-integrator system for the comparisons. Consequently, the cost function (\(c_{1}\)) used in our approach is given by a combination of penalty on the magnitude of the acceleration, curvature, and path-following error. The first two terms are smooth and differentiable and can be described in terms of axis-wise accelerations and velocities. The last term does not have an analytical form as it required computing the projection of a sampled trajectory way-point onto the straight-line path to the goal. **Point to Point Navigation with Differentiable Cost:** In this benchmark, we considered the task of generating a single trajectory between a start and a goal location. The cost function \(c_{1}\) consisted of smoothness terms penalizing the norm of the acceleration. For comparison, we considered SOTA gradient-based optimizers ROCKIT [1] and FATROP [2] and sampling-based optimizers CEM, and VPSTO [5]. We designed various cluttered environments wherein obstacles are placed randomly in 2D and 3D spaces (see Fig.6). 
For the 2D comparisons, our experiments involved 117 trials conducted in a cluttered environment featuring 50 randomly placed obstacles, each with a radius of \(0.4\) m. For the 3D comparisons, we conducted \(100\) trials in a cluttered room with dimensions of \(7\times 7\times 3\) units and included \(25\) randomly positioned obstacles, each with a radius of \(0.68\) m. **Comparison in a Dynamic Environment:** We also compare our approach PRIEST with CEM, log-MPPI, MPPI, TEB, and DWA in dynamic environments. The cost function used was the same as that used for the BARN dataset. In this benchmark, we introduced ten obstacles, each with a velocity of \(0.1\) m/s, moving in the opposite direction of the robot. These obstacles have a radius of \(0.3\) m. We run simulations over 30 different configurations. The start and final points remain constant for all the configurations, while the obstacle positions are varied for each configuration. The maximum velocity for the robot was fixed at \(1\) m/s. #### V-A2 Metrics We utilize the following metrics for benchmarking against other baselines: **Success Rate**: A run is considered successful when the robot approaches the final point within a 0.5m radius without any collision. The success rate is calculated as the ratio of the total number of successful runs to the overall number of runs. **Travel Time**: This refers to the duration it takes for the robot to reach the vicinity of the goal point. **Computation Time**: This metric quantifies the time required to calculate a solution trajectory. ### _Qualitative Results_ #### V-B1 A Simple Benchmark In Fig.(1), our objective is to contrast the behavior of CEM(a-c) and our approach PRIEST(d-e) in a scenario wherein all the initial sampled trajectories are placed within a high-cost/infeasible region. To show this, we construct an environment with a large obstacle with a radius of \(7\)m. The task is to obtain a collision-free trajectory from \((1,7)\) to \((20,13)\) while respecting the maximum velocity and acceleration limits of \(2.8\) and \(3.3\), respectively. As observed, our optimizer effectively pushes the sample trajectories out of the infeasible region. In contrast, the CEM samples persistently remain within the infeasible region. This behavior arises when the CEM variance fails and cannot navigate samples effectively out of the infeasible region. #### V-B2 Receding Horizon Planning on Barn Dataset In Fig. (5), we show a qualitative comparison between TEB and our approach PRIEST within one of the BARN environments. Both methods can search over multiple homotopies but differ in their process. While TEB uses graph search, PRIEST relies on the stochasticity of the sampling process guided through the projection optimizer. As a result, the latter can search over a longer horizon and wider state space. It is worth pointing out that increasing the planning horizon of TEB dramatically increases the computation time and thus degrades the overall navigation performance instead of improving it. We present the quantitative comparison with TEB and other baselines in the next subsection. #### V-B3 Point-to-Point Navigation Benchmark Fig.6 shows trajectories generated by PRIEST alongside those generated by gradient-based optimizers, ROCKIT, FATROP, and sampling-based optimizers CEM, VPSTO. For the particular 2D example shown, both PRIEST and VPSTO successfully generated collision-free trajectories, while the other baselines failed. 
For the shown 3D environment, only PRIEST and CEM achieved collision-free trajectories. PRIEST trajectories were also smoother than other methods. In the next subsection, we present the quantitative statistical trends for all the baselines across different randomly generated environments. #### V-B2 Comparison with Additional Gradient-Based and Sampling-based Optimizers Table II compares the performance of the PRIEST with all the baselines in 2D and 3D cluttered environments (recall Fig.6). ROCKIT and FATROP were initialized with simple straight-line trajectories between the start and the goal that were typically not collision-free. Due to conflicting gradients from the neighboring obstacle, both these methods often failed to obtain a collision-free trajectory. Interestingly, the sampling-based approaches didn't fare particularly better as both CEM and VPSTO reported a large number of failures. We attribute the failures of VPSTO and CEM to two reasons. First, most of the sampled trajectories for both CEM and VPSTO fell into the high-cost/infeasible area, and as discussed before, this creates a pathologically difficult case for sampling-based optimizers. Second, both CEM and VPSTO roll constraints into the cost as penalties and can be really sensitive to tuning the individual cost terms. In summary, the success-rate trend of Table II shows how both classes of both gradient and sampling-based approaches struggle in highly cluttered environments. Consequently, a strong potential solution is to use something like PRIEST that can guide trajectory sampling with convex optimization toward constraint satisfaction. are high, we note that the original author implementation that we use may not have been optimized for computation speed. #### V-B3 Combination of Gradient-Based and Sampling-based Optimizers A simpler alternative to PRIEST can be just to use a sampling-based optimizer to compute a good guess for the gradient-based solvers [7]. However, such an approach will only be suitable for problems with differentiable costs. Nevertheless, we evaluate this alternative for the point-to-point benchmark of Fig.6. We used CEM to compute an initial guess for ROCKIT and FATROP. The results are summarized in Table III. As can be seen, while the performance of both ROCKIT and FATROP improved in 2D environments, the success rate of the latter decreased substantially in the 3D variant. The main reason for this conflicting trend is that the CEM (or any initial guess generation) is unaware of the exact capabilities of the downstream gradient-based optimizer. It should also be noted that the trend of Table III is not general and on some other problems, FATROP might outperform ROCKIT. This unreliability forms the core motivation behind PRIEST, which outperforms all ablations in Table III. By embedding the projection optimizer within the sampling process itself (refer Alg.1) and augmenting the projection residual to the cost function, we ensure that the sampling and projection complement each other. #### V-B4 Benchmarking in Dynamic Environments Table IV presents the results obtained from the experiments in dynamic environments. By having a success rate of 83%, our method outperforms other approaches. Furthermore, our method shows competitive efficiency with a mean travel time of 11.95 seconds. Overall, the results show the superiority of our approach in dealing with the complexities of cluttered dynamic environments, making it a promising solution for real-world applications in human-habitable environments. 
#### V-B5 Scaling of D-PRIEST Fig.(8) shows the linear scaling of the per-iteration time of D-PRIEST with respect to the number of distributions. Typically, solutions are obtained in around 20 iterations. Thus, around 4 parallel distributions can be maintained under real-time constraints. ## VI Conclusions and Future Work We presented PRIEST, an important contribution towards leveraging the benefits of both sampling-based optimizers and convex optimization. In particular, we used the latter to derive a GPU-accelerated projection optimizer that guides the trajectory sampling process. We also showed how the same projection set-up can be easily embedded within decentralized variants of sampling optimizers wherein multiple parallel distributions are maintained at each iteration. We performed very extensive benchmarking showcasing the benefits of PRIEST over SOTA approaches in the context of autonomous navigation in unknown environments. Our future efforts are focused on extending PRIEST to high-dimensional manipulation problems. ## VII Appendix **Reformulating constraints:** We reformulate the collision avoidance inequality constraints (1e) into the following polar form \[\mathbf{f}_{o,j}=\begin{cases}x(t)-x_{o,j}(t)-a\,d_{o,j}(t)\cos\alpha_{o,j}(t)\sin\beta_{o,j}(t)\\ y(t)-y_{o,j}(t)-a\,d_{o,j}(t)\sin\alpha_{o,j}(t)\sin\beta_{o,j}(t)\\ z(t)-z_{o,j}(t)-b\,d_{o,j}(t)\cos\beta_{o,j}(t)\end{cases},\qquad d_{o,j}(t)\geq 1,\] where \(\alpha_{o,j},\beta_{o,j},d_{o,j}\) are the angles and the normalized distance to the \(j\)-th obstacle; the velocity and acceleration bounds are rewritten in the same polar form in terms of \((\alpha_{v},\beta_{v},d_{v})\) and \((\alpha_{a},\beta_{a},d_{a})\). The vector \(\boldsymbol{\tau}\) is obtained by stacking the \(s_{min}\) and \(s_{max}\) in appropriate form. The matrix \(\mathbf{G}\) is formed by stacking \(-\mathbf{P}\) and \(\mathbf{P}\) vertically. Similarly, \(\mathbf{d}_{min}\), \(\mathbf{d}_{max}\) are formed by stacking the lower (\([1,0,0]\)) and upper (\([\infty,1,1]\)) bounds of \(\mathbf{d}_{o,i},\mathbf{d}_{v,i},\mathbf{d}_{a,i}\). Also, \(\tilde{\mathbf{F}}\) and \(\tilde{\mathbf{e}}\) are formed as \[\tilde{\mathbf{F}}=\begin{bmatrix}\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{P}}\\ \ddot{\mathbf{P}}\end{bmatrix}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{P}}\\ \ddot{\mathbf{P}}\end{bmatrix}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\begin{bmatrix}\mathbf{F}_{o}\\ \dot{\mathbf{P}}\\ \ddot{\mathbf{P}}\end{bmatrix}\end{bmatrix},\qquad\tilde{\mathbf{e}}=\begin{bmatrix}\mathbf{x}_{o}+a\,\mathbf{d}_{o,i}\cos\boldsymbol{\alpha}_{o,i}\sin\boldsymbol{\beta}_{o,i}\\ \mathbf{d}_{v,i}v_{max}\cos\boldsymbol{\alpha}_{v,i}\sin\boldsymbol{\beta}_{v,i}\\ \mathbf{d}_{a,i}a_{max}\cos\boldsymbol{\alpha}_{a,i}\sin\boldsymbol{\beta}_{a,i}\\ \mathbf{y}_{o}+a\,\mathbf{d}_{o,i}\sin\boldsymbol{\alpha}_{o,i}\sin\boldsymbol{\beta}_{o,i}\\ \mathbf{d}_{v,i}v_{max}\sin\boldsymbol{\alpha}_{v,i}\sin\boldsymbol{\beta}_{v,i}\\ \mathbf{d}_{a,i}a_{max}\sin\boldsymbol{\alpha}_{a,i}\sin\boldsymbol{\beta}_{a,i}\\ \mathbf{z}_{o}+b\,\mathbf{d}_{o,i}\cos\boldsymbol{\beta}_{o,i}\\ \mathbf{d}_{v,i}v_{max}\cos\boldsymbol{\beta}_{v,i}\\ \mathbf{d}_{a,i}a_{max}\cos\boldsymbol{\beta}_{a,i}\end{bmatrix}\tag{13}\] where \(\mathbf{F}_{o}\) is formed by stacking \(\mathbf{P}\) as many times as the number of obstacles.
Also, \(\mathbf{x}_{o},\mathbf{y}_{o},\mathbf{z}_{o}\) are obtained by stacking \(x_{o,j}(t),y_{o,j}(t),z_{o,j}(t)\) at different time stamps and for all obstacles. **Solution process** We utilize the augmented Lagrangian method to relax the equality and affine constraints (12c)-(12e) as \(l_{2}\) penalties. Consequently, the projection cost can be rephrased as follows: \[\mathcal{L}=\frac{1}{2}\left\|\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\xi}_{i}\right\|_{2}^{2}-\langle\boldsymbol{\lambda}_{i},\overline{\boldsymbol{\xi}}_{i}\rangle+\frac{\rho}{2}\left\|\tilde{\mathbf{F}}\overline{\boldsymbol{\xi}}_{i}-\tilde{\mathbf{e}}\right\|_{2}^{2}+\frac{\rho}{2}\left\|\mathbf{G}\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\tau}+\mathbf{s}_{i}\right\|_{2}^{2}=\frac{1}{2}\left\|\overline{\boldsymbol{\xi}}_{i}-\boldsymbol{\xi}_{i}\right\|_{2}^{2}-\langle\boldsymbol{\lambda}_{i},\overline{\boldsymbol{\xi}}_{i}\rangle+\frac{\rho}{2}\left\|\mathbf{F}\overline{\boldsymbol{\xi}}_{i}-\mathbf{e}\right\|_{2}^{2} \tag{14}\] where \(\mathbf{F}=\begin{bmatrix}\tilde{\mathbf{F}}\\ \mathbf{G}\end{bmatrix}\), \(\mathbf{e}=\begin{bmatrix}\tilde{\mathbf{e}}\\ \boldsymbol{\tau}-\mathbf{s}_{i}\end{bmatrix}\). Also, \(\boldsymbol{\lambda}_{i}\), \(\rho\) and \(\mathbf{s}_{i}\) are the Lagrange multiplier, a scalar constant and the slack variable, respectively. We minimize (14) subject to (12b) using AM, which reduces to the following steps. \[{}^{k+1}\boldsymbol{\alpha}_{i}=\arg\min_{\boldsymbol{\alpha}_{i}}\mathcal{L}({}^{k}\overline{\boldsymbol{\xi}}_{i},\boldsymbol{\alpha}_{i},{}^{k}\boldsymbol{\beta}_{i},{}^{k}\mathbf{d}_{i},{}^{k}\boldsymbol{\lambda}_{i},{}^{k}\mathbf{s}_{i}) \tag{15a}\] \[{}^{k+1}\boldsymbol{\beta}_{i}=\arg\min_{\boldsymbol{\beta}_{i}}\mathcal{L}({}^{k}\overline{\boldsymbol{\xi}}_{i},{}^{k+1}\boldsymbol{\alpha}_{i},\boldsymbol{\beta}_{i},{}^{k}\mathbf{d}_{i},{}^{k}\boldsymbol{\lambda}_{i},{}^{k}\mathbf{s}_{i}) \tag{15b}\] \[{}^{k+1}\mathbf{d}_{i}=\arg\min_{\mathbf{d}_{i}}\mathcal{L}({}^{k}\overline{\boldsymbol{\xi}}_{i},{}^{k+1}\boldsymbol{\alpha}_{i},{}^{k+1}\boldsymbol{\beta}_{i},\mathbf{d}_{i},{}^{k}\boldsymbol{\lambda}_{i},{}^{k}\mathbf{s}_{i}) \tag{15c}\] \[{}^{k+1}\mathbf{s}_{i}=\max(\mathbf{0},-\mathbf{G}\,{}^{k}\overline{\boldsymbol{\xi}}_{i}+\boldsymbol{\tau}) \tag{15d}\] \[{}^{k+1}\boldsymbol{\lambda}_{i}={}^{k}\boldsymbol{\lambda}_{i}-\rho\mathbf{F}^{T}(\mathbf{F}\,{}^{k}\overline{\boldsymbol{\xi}}_{i}-\mathbf{e}) \tag{15e}\] \[{}^{k+1}\mathbf{e}=\begin{bmatrix}\tilde{\mathbf{e}}\left({}^{k+1}\boldsymbol{\alpha}_{i},{}^{k+1}\boldsymbol{\beta}_{i},{}^{k+1}\mathbf{d}_{i}\right)\\ \boldsymbol{\tau}-{}^{k+1}\mathbf{s}_{i}\end{bmatrix} \tag{15f}\] \[{}^{k+1}\overline{\boldsymbol{\xi}}_{i}=\arg\min_{\overline{\boldsymbol{\xi}}_{i}}\mathcal{L}(\overline{\boldsymbol{\xi}}_{i},{}^{k+1}\mathbf{e}_{i},{}^{k+1}\boldsymbol{\lambda}_{i}) \tag{15g}\] For each AM step, we only optimize one group of variables while the others are held fixed. Note that stacking the right-hand sides of (15f) and (15e) provides the function \(\mathbf{h}\) presented in (6a). The steps (15a)-(15c) have closed form solutions in terms of \({}^{k}\overline{\boldsymbol{\xi}}_{i}\) [22, 21, 15]. Also, (15g) is a representation of (8a)-(8c).
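To make the structure of (15a)-(15g) concrete, the following simplified NumPy sketch performs the projection for a generic problem of the form \(\tilde{\mathbf{F}}\overline{\boldsymbol{\xi}}=\tilde{\mathbf{e}}\), \(\mathbf{G}\overline{\boldsymbol{\xi}}\leq\boldsymbol{\tau}\): the closed-form polar updates (15a)-(15c) are not shown (the equality right-hand side is held fixed), leaving the slack update (15d), the multiplier update (15e), and the \(\overline{\boldsymbol{\xi}}\) update (15g), which is an unconstrained quadratic solved through its normal equations. All problem data in the usage lines are random placeholders; this is a sketch of the iteration structure, not the paper's GPU implementation.

```python
import numpy as np

def am_projection(xi, F_tilde, e_tilde, G, tau, rho=1.0, iters=100):
    """Project a sampled coefficient vector `xi` towards the set
    {F_tilde x = e_tilde, G x <= tau} by alternating minimization on the
    augmented Lagrangian of (14); the polar updates (15a)-(15c) are omitted,
    i.e. `e_tilde` is held fixed over the iterations."""
    n = xi.size
    lam = np.zeros(n)                       # Lagrange multiplier, cf. (15e)
    A = np.vstack([F_tilde, G])             # stacked constraint matrix F = [F~; G]
    M = np.eye(n) + rho * A.T @ A           # normal-equation matrix of the xibar update (15g)
    xibar = xi.copy()
    for _ in range(iters):
        s = np.maximum(0.0, -G @ xibar + tau)          # slack update (15d)
        b = np.concatenate([e_tilde, tau - s])         # stacked RHS e = [e~; tau - s], cf. (15f)
        lam = lam - rho * A.T @ (A @ xibar - b)        # multiplier update (15e)
        xibar = np.linalg.solve(M, xi + lam + rho * A.T @ b)   # xibar update (15g)
    residual = float(np.linalg.norm(A @ xibar - b))    # projection residual fed back into the cost
    return xibar, residual

# toy usage with random placeholder problem data
rng = np.random.default_rng(0)
xi = rng.normal(size=20)
F_tilde, e_tilde = rng.normal(size=(5, 20)), rng.normal(size=5)
G, tau = rng.normal(size=(8, 20)), rng.normal(size=8)
xibar, res = am_projection(xi, F_tilde, e_tilde, G, tau)
print(res)
```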
**Remark 3**.: _The matrix \(\boldsymbol{F}\) and the vector \(\boldsymbol{e}\) (see (13)) have dimensions \(3(n_{o}+2)n_{p}\times 3n_{v}\) and \(3(n_{o}+2)n_{p}\), respectively, and their size grows linearly with the number of obstacles \(n_{o}\) and the planning horizon \(n_{p}\)._
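As a quick sanity check of Remark 3, the sketch below assembles \(\tilde{\mathbf{F}}\) from dummy basis matrices and verifies the stated dimension. It assumes, as suggested by (13), that \(\mathbf{P},\dot{\mathbf{P}},\ddot{\mathbf{P}}\in\mathbb{R}^{n_{p}\times n_{v}}\) and that \(\mathbf{F}_{o}\) stacks \(\mathbf{P}\) once per obstacle; the numerical sizes are placeholders.

```python
import numpy as np

n_p, n_v, n_o = 50, 11, 10                  # planning horizon, basis size, obstacles (placeholders)
P = np.zeros((n_p, n_v))                    # position basis matrix (assumed shape)
P_dot = np.zeros((n_p, n_v))                # velocity basis matrix
P_ddot = np.zeros((n_p, n_v))               # acceleration basis matrix

F_o = np.vstack([P] * n_o)                  # P stacked once per obstacle
A_axis = np.vstack([F_o, P_dot, P_ddot])    # per-axis block of (13): (n_o + 2) * n_p rows
F_tilde = np.kron(np.eye(3), A_axis)        # block-diagonal over the x, y, z axes

# dimension claimed in Remark 3: 3 (n_o + 2) n_p  x  3 n_v
assert F_tilde.shape == (3 * (n_o + 2) * n_p, 3 * n_v)
print(F_tilde.shape)
```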
2301.13751
Instability of a Kerr-type naked singularity due to light and matter accretion and its shadow
We study null and timelike constant radii geodesics in the environment of an over-spinning putative Kerr-type naked singularity. We are particularly interested in two topics: first, the differences of the shadows of the naked rotating singularity and the Kerr black hole; and second, the spinning down effect of the particles falling from the accretion disk. Around the naked singularity, the non-equatorial prograde orbits in the Kerr black hole remain intact up to a critical rotation parameter ($\alpha=\sqrt{6 \sqrt{3}-9}$) and cease to exist above this value [Eur. Phys. J. C 78, 879 (2018)]. This has an important consequence in the shadow of the naked singularity if the shadow is registered by an observer on the polar plane or close to it as the shadow cannot be distinguished from that of a Kerr black hole viewed from the same angle considering only the light emanating from the unstable photon orbits. We show that the timelike retrograde orbits in the equatorial plane immediately (after about an 8% increase in mass for the case of initial $\alpha=1.5$) reduce the spin parameter of the naked singularity from larger values to $\alpha=1$ at which an event horizon appears. This happens because the retrograde orbits have a larger capture cross-section than the prograde ones. So if a naked singularity happens to have an accretion disk, it will not remain naked for long, an event horizon forms.
Aydin Tavlayan, Bayram Tekin
2023-01-31T16:38:42Z
http://arxiv.org/abs/2301.13751v4
# Kerr-type naked singularity: its shadow and accretion ###### Abstract We study null and timelike constant radii geodesics in the environment of an over-spinning Kerr-type naked singularity. We are particularly interested in two topics: first, the differences between the shadows of the naked rotating singularity and the Kerr black hole; and second, the spinning down effect of the particles falling from the accretion disk. Our findings are as follows: around the naked singularity, the non-equatorial prograde orbits present in the Kerr black hole remain intact up to a critical rotation parameter (\(\alpha=\sqrt{6\sqrt{3}-9}\)) and cease to exist above this value. This has an important consequence for the shadow of the naked singularity if the shadow is registered by an observer on the polar plane or close to it, as the shadow cannot be distinguished from that of a Kerr black hole viewed from the same angle. We also show that the timelike retrograde orbits in the equatorial plane immediately (after about an 8% increase in mass) reduce the spin parameter of the naked singularity from larger values to \(\alpha=1\), at which an event horizon appears. This happens because the retrograde orbits have a larger capture cross-section than the prograde ones. So if a naked singularity happens to have an accretion disk, it will not remain naked for long: an event horizon forms. ## I Introduction Nature has a very efficient way of constraining some physical quantities; it just uses square-roots: for example, the speed of any object is restricted to less than the speed of light as the factor \(\sqrt{1-v^{2}/c^{2}}\) appears in relativistic physics. Similarly, in black hole physics, the rotation of a black hole is restricted because the factor \(\sqrt{1-\alpha^{2}}\), with \(\alpha\) being the dimensionless rotation parameter given in SI units in terms of the spin \(J\) and mass \(m\) as \(\alpha=cJ/(Gm^{2})\), appears in the location of the event horizon. If \(\alpha>1\), there is no event horizon and the black hole becomes a rotating, massive naked singularity, still a solution to the vacuum Einstein equations. As stars, or star systems, typically have \(\alpha>1\) before a black hole is produced, it is clear that the angular momentum of the collapsing matter must be depleted to values \(\alpha<1\) to form a black hole; otherwise a naked singularity is formed. Even though one can envisage ways to deplete the angular momentum, we still do not know how Nature solves this problem exactly. [For example, for our solar system \(\alpha\approx 35\), and most of the contribution comes from Jupiter's angular momentum, which is located far away from the central mass, the Sun.] Some observed black holes are rotating close to the extreme value \(\alpha=1\). With the singularity theorem of Penrose [1], a singularity is guaranteed to occur in a gravitational collapse under reasonable assumptions on the energy-momentum tensor of matter. However, an event horizon is not guaranteed to form; we only have an expectation, dubbed "the cosmic censorship hypothesis" [2], which states that collapsing matter does not form a naked singularity, but it does not state that naked singularities do not exist on their own. Due to this state of affairs, one is necessarily curious about the observable differences in the causal environment of a naked singularity and the Kerr black hole. For example, can a naked singularity mimic the Kerr black hole [3] as far as its shadow [4] is concerned?
Can accretion of matter to a naked singularity spin down its rotation in such a way that an event horizon forms? In this work, we study various null and timelike orbits around a naked singularity and make comparisons with the Kerr black hole; we also give a detailed account of accretion of matter carrying angular momentum and mass to a naked singularity assuming a thin equatorial accretion disk about it. The lay-out of this paper is as follows: In Sec. II we study the null spherical geodesics around the naked singularity. In Sec. III we plot the shadows of various naked singularities. In Sec. IV we concentrated on the null orbits at the critical inclination angle. In Sec. V we extended discussion to timelike geodesics. In Sec. VI we give a detailed study of the accretion for both Kerr black hole and the Kerr-type singularity for thin disks and show the spinning-down effect for the naked singularity due to the matter falling from the unstable interior part of the disk. ## II Constant radii null geodesics around the Kerr-type naked singularity A rotating, massive naked singularity can be obtained from the Kerr metric in the Boyer-Lindquist coordinates \((t,r,\theta,\phi)\) which reads (in the \(G=c=1\) units) as \[ds^{2}=-\Big{(}1-\frac{2mr}{\Sigma}\Big{)}dt^{2}-\frac{4mar\sin ^{2}\theta}{\Sigma}dtd\phi+\frac{\Sigma}{\Delta}dr^{2}\] \[\qquad+\Sigma\,d\theta^{2}+\Big{(}r^{2}+a^{2}+\frac{2ma^{2}r\sin ^{2}\theta}{\Sigma}\Big{)}\sin^{2}\theta d\phi^{2}, \tag{1}\] where \(a:=\frac{J}{m}\) is the dimensionfull rotation parameter which we shall take to be \(a>m\). The two functions appearing in the metric are given as \[\Delta\equiv r^{2}-2mr+a^{2},\hskip 28.452756pt\Sigma\equiv r^{2}+a^{2}\cos^{2}\theta. \tag{2}\] For \(a<m\), the larger root of \(\Delta=0\) is the event horizon located at \(r_{\rm H}=m+(m^{2}-a^{2})^{1/2}\), but in this work, \(\Delta\neq 0\) and hence we have a naked rotating singularity. Our first task is to calculate the constant radii null and time-like geodesics in this background. The constant radii null geodesics are particularly important because they are unstable and carry away information about the environment of this strong gravitational region. In practice one computes the shadow of this region as seen by a distant observer. Time-like geodesics are also important as they are traced by massive particles that constitute the accretion disk and change both the mass and the spin of the central object. Studying the evolution of the rotation parameter due to accretion of matter is our second task. For the spherical photon orbits, we need the radial part of the geodesic equation: \[\Sigma\,\frac{dr}{d\lambda}=\pm\sqrt{R(r)}, \tag{3}\] where \(\lambda\) is an affine parameter along the null geodesics. Assuming \(E\neq 0\), we can work with the dimensionless radial function \({\bf R}(x):=R(r)/(m^{4}E^{2})\) in terms of dimensionless parameters \[{\bf R}(x):=x^{4}+(\alpha^{2}-l^{2}-q)x^{2}+2x((\alpha-l)^{2}+q)-\alpha^{2}q. \tag{4}\] Here \(x:=r/m\), \(\alpha:=a/m\), and \(l:=\frac{L_{z}}{mE}\) where \(E\) is the conserved energy of the photon corresponding to the time-like Killing vector \(\xi_{(t)}=\frac{\partial}{\partial t}\); and \(L_{z}\) is the conserved \(z\)-component of the angular momentum of the photon related to the \(\xi_{(\varphi)}=\frac{\partial}{\partial\varphi}\) Killing vector, while \(q:=\frac{\mathcal{Q}}{m^{2}E^{2}}\) where \(\mathcal{Q}\) is the Carter's constant related to a symmetric rank two Killing tensor. 
Explicitly it reads \[\mathcal{Q}:=p_{\theta}^{2}+\cos^{2}\theta\left(\frac{L_{z}^{2}}{\sin^{2} \theta}-a^{2}E^{2}\right), \tag{5}\] and as a result \[q=\frac{p_{\theta}^{2}}{m^{2}E^{2}}+\cos^{2}\theta\left(\frac{l^{2}}{\sin^{2} \theta}-\alpha^{2}\right). \tag{6}\] For constant radii orbits, \(\mathcal{Q}\geq 0\), and the bound is satisfied for equatorial orbits [5; 6; 7]. There are two conditions on \({\bf R}(x)\) for spherical orbits \[{\bf R}(x)=0,\hskip 28.452756pt\frac{d{\bf R}(x)}{dx}=0, \tag{7}\] which yield two physically viable equations [8]: \[l=-\frac{x^{3}-3x^{2}+\alpha^{2}x+\alpha^{2}}{\alpha(x-1)}, \tag{8}\] \[q=-\frac{x^{3}\left(x^{3}-6x^{2}+9x-4\alpha^{2}\right)}{\alpha^ {2}(x-1)^{2}}. \tag{9}\] Now, we would like to investigate these two equations under various circumstances for \(\alpha>1\). ### Equatorial null Orbits On the equatorial plane, particles with a vanishing \(q\) can orbit [9]. For black holes with a rotation parameter \(\alpha<1\), it is known that there are 3 solutions [8]. One of these lies inside the event horizon while the others are outside the horizon; and the latter correspond to prograde and retrograde orbits. On the other hand, for the naked singularity with a rotation parameter \(\alpha>1\), the only real and non-zero solution of (9) for \(q=0\) is \[x_{-}=2+\left(\alpha+\sqrt{\alpha^{2}-1}\right)^{2/3}+\left(\alpha+\sqrt{ \alpha^{2}-1}\right)^{-2/3} \tag{10}\] which can be seen in Fig. (1). The photons that follow this orbit have a negative \(l\) value as shown in Fig. (2), and therefore \(x_{-}\) is a retrograde orbit. The equatorial orbits set the _innermost_ and the _outermost_ limits of the generic spherical photon orbits. Hence, non-equatorial null orbits can exist only for the interval \(0<x<x_{-}\) in this naked singularity spacetime. ### Polar Null Orbits Photons with a vanishing \(l\) and a positive \(q\) can have orbits on the polar plane. For a black hole with \(\alpha<1\) Figure 1: The orbit in the equatorial plane is plotted as a function of the rotation parameter for the interval \(1<\alpha<1.6\) using (10). The radius of the orbit increases as the rotation parameter increases, which is an expected result for a retrograde orbit. Figure 2: The \(l\) value of the equatorial orbit is plotted as a function of the rotation parameter for the interval \(1<\alpha<1.6\) using (8). The negative \(l\) value confirms that this is a retrograde orbit. we have already shown that there are 3 polar circular null orbits [8]. One of them corresponds to an orbit with a negative radius, therefore physically nonviable. The second solution lies inside the event horizon. The third solution lies outside the event horizon and corresponds to retrograde orbits. For the case of a naked singularity, (8) for \(l=0\) yields \[\alpha(x)=\frac{x\sqrt{3-x}}{\sqrt{x+1}}, \tag{11}\] which vanishes at \(x=0\) and \(x=3\), and has a local maximum at \(x=\sqrt{3}\) as shown in Fig. (3). The maximum value of the rotation parameter is \(\alpha_{max}=\sqrt{6\sqrt{3}-9}=1.17996\) as was also found in [9]. The second derivative (\(\frac{d^{2}\mathbf{R}(x)}{dx^{2}}\) ) for the physically viable orbits shows that while one of them is stable, the other one is unstable. In Fig. (4), the orbits on the polar plane are plotted as a function of the rotation parameter for the interval \(1<\alpha<1.2\). Note that there are no polar orbits for the spacetimes with \(\alpha>\sqrt{6\sqrt{3}-9}\). 
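The closed-form relations (8)-(11) are straightforward to evaluate numerically; the minimal sketch below (grid resolution chosen arbitrarily) computes \(l\) and \(q\) for spherical photon orbits, the equatorial retrograde radius of (10), and the polar-orbit relation (11), recovering the outermost radius \(x_{-}\approx 4.088\) for \(\alpha=1.1\) used below and the maximum \(\alpha_{max}=\sqrt{6\sqrt{3}-9}\approx 1.17996\).

```python
import numpy as np

def l_of_x(x, a):
    # Eq. (8): z-angular momentum per energy on a spherical photon orbit of radius x
    return -(x**3 - 3*x**2 + a**2*x + a**2) / (a*(x - 1.0))

def q_of_x(x, a):
    # Eq. (9): normalized Carter constant on a spherical photon orbit of radius x
    return -x**3*(x**3 - 6*x**2 + 9*x - 4*a**2) / (a**2*(x - 1.0)**2)

def x_equatorial_retrograde(a):
    # Eq. (10): the only equatorial (q = 0) photon orbit for alpha > 1
    w = (a + np.sqrt(a**2 - 1.0))**(2.0/3.0)
    return 2.0 + w + 1.0/w

def alpha_polar(x):
    # Eq. (11): rotation parameter admitting a polar (l = 0) photon orbit at radius x
    return x*np.sqrt(3.0 - x)/np.sqrt(x + 1.0)

a = 1.1
x_out = x_equatorial_retrograde(a)
print(x_out, l_of_x(x_out, a), q_of_x(x_out, a))        # ~4.088, negative l, q ~ 0

xs = np.linspace(1e-3, 3.0 - 1e-3, 200000)
print(alpha_polar(xs).max(), np.sqrt(6.0*np.sqrt(3.0) - 9.0))   # both ~1.17996
```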
Let us note that the value \(\sqrt{6\sqrt{3}-9}\) is an important limit on the rotation parameter: in the interval \(1<\alpha<\sqrt{6\sqrt{3}-9}\), for generic spherical null orbits, the \(l\) value can be positive. This means that prograde, as well as retrograde, orbits are allowed. For a naked singularity with a rotation parameter higher than the maximum rotation parameter, \(\alpha>\sqrt{6\sqrt{3}-9}\), there are only retrograde orbits, prograde orbits simply disappear. This will have an observable consequence in the shadow of the naked singularity as we shall see. ### Marginally Stable null Orbits Marginally stable orbits are defined by the condition on the radial function as \(\frac{d^{2}\mathbf{R}}{dx^{2}}=0\) augmented with the conditions (7). On these orbits, one finds \(\alpha(x):=\sqrt{(x-3)x^{2}+3x}\), [9], or \[x_{M}=1+(\alpha^{2}-1)^{\frac{1}{3}}, \tag{12}\] which is plotted in Fig.(5). Therefore, orbits with \(x<x_{M}\) are stable and orbits with \(x>x_{M}\) are unstable. The photons we can observe originate as a result of slight perturbations of the unstable orbits. Hence, stable orbits are not relevant for the shadow imaging. This has an important consequence. The orbits around the naked singularity located at \(x=0\) and the orbits around \(x=1\) cannot have any visible effects on the image and the shadow. ## III Shadow of a naked singularity In order to obtain the shadow, we will use the conventions of [10], [11]. For light rays with parameters \(l\) and \(q\), we assume that there is an observer located at a point far away from the black hole with coordinates \((r_{0},\theta_{0},\phi_{0})\) measuring the directions of these light rays. The coordinate \(\phi_{0}\) can be taken as 0 using the axial symmetry of the spacetime for simplicity. \(\theta_{0}\) is called the inclination angle of the observer. At large distances, for light rays one has \[\frac{d\phi}{dt}\approx\frac{l}{r_{0}^{2}\sin^{2}\theta_{0}}, \tag{13}\] Figure 4: The polar orbits are drawn as a function of the rotation parameter in the interval \(1<\alpha<1.2\). See that there are no polar orbits which has \(\alpha>\sqrt{6\sqrt{3}-9}\). Figure 5: The marginally stable orbit as a function of the rotation parameter is plotted for the interval \(1<\alpha<1.6\) using (12). Figure 3: For the vanishing \(l\) value, the corresponding rotation parameter as a function of radius is plotted for the interval \(0<\alpha<3\) using (11). The maximum value of the rotation parameter is \(\alpha_{max}=\sqrt{6\sqrt{3}-9}\). For higher values of rotation parameters, photons cannot reach to the polar plane. \[\frac{d\theta}{dt}\approx\pm\frac{1}{r_{0}^{2}}\sqrt{q+\alpha^{2}\cos^{2}\theta_{0}-l ^{2}\cot^{2}\theta_{0}}. \tag{14}\] Observe that the affine parameter is eliminated and the coordinate time \(t\) is used. Therefore, on the 2 dimensional image plane of the observer, we can define the impact parameters as \[X:=-\frac{l}{\sin\theta_{0}}, \tag{15}\] and \[Y:=\pm\sqrt{q+\alpha^{2}\cos^{2}\theta_{0}-l^{2}\cot^{2}\theta_{0}}. \tag{16}\] Each photon coming from an orbit around the black hole due to a slight perturbation determines a point on the \((X,Y)\) plane of the image taken by the observer. ### Two Exemplary Cases #### iii.1.1 Naked singularity with \(\alpha=1.1<\alpha_{max}\) For \(\alpha=1.1\), using the results of previous sections, we can find spherical photon orbits with a radius in the interval \[0<x<4.08808. 
\tag{17}\] The marginally stable orbit is located at \[x_{M}=1.59439, \tag{18}\] so the unstable spherical photon orbits will be in the interval \[1.59439<x<4.08808. \tag{19}\] Because of the fact that we have a rotation parameter which is smaller than \(\alpha_{max}=\sqrt{6\sqrt{3}-9}\), we can expect prograde orbits as well as retrograde orbits. The prograde orbits exist in the interval \[1.59439<x<2.2, \tag{20}\] while the retrograde orbits are in the interval \[2.2<x<4.08808. \tag{21}\] The shadow of the black hole is shown in Fig.(6) and Fig.(7). It is important to note that an observer located at the polar plane, \(\theta_{0}=0\), cannot decide whether this is a Kerr black hole or Kerr-type naked singularity. #### iii.1.2 Naked singularity with \(\alpha_{max}<\alpha=1.5\) For \(\alpha=1.5\), we can find spherical photon orbits with a radius in the interval \[0<x<4.4260. \tag{22}\] The marginally stable orbit is located at \[x_{M}=2.0772, \tag{23}\] so the unstable spherical orbits will be in the interval \[2.0772<x<4.4260. \tag{24}\] Because the rotation parameter is greater than \(\alpha_{max}=\sqrt{6\sqrt{3}-9}\), the \(l\) value can never change sign, there are no photons that can reach the polar plane, and all the orbits are retrograde. The shadow of the naked singularity is shown in Fig. (8) and Fig. (9). ## IV Critical inclination angle for null orbits In our previous work, [8], by combining the \(l\) (8) and \(q\) (9) expressions, we obtained a sextic polynomial and Figure 6: The shadow image of the Kerr-type naked singularity located at the origin, with a rotation parameter \(\alpha=1.1\) for an observer with the inclination angle \(\theta_{0}=\frac{\pi}{2}\). The rotation is in the counter-clockwise direction. Figure 7: The shadow image of the Kerr-type naked singularity located at the origin, with a rotation parameter \(\alpha=1.1\) for observers with different angles \(0<\theta_{0}<\frac{\pi}{2}\). An observer located at the polar plane, \(\theta_{0}=0\), cannot decide whether this is a Kerr black hole or Kerr-type naked singularity. searched its analytical solutions for different conditions. In addition to the known equatorial and polar plane solutions, we found a new family of analytic solutions at the _critical inclination angle_. At this critical inclination angle, the sextic polynomial factors into a quadratic and a quartic part and becomes solvable by radicals. By using the same method, we can find the critical inclination angle solutions for the \(\alpha>1\) cases. The mentioned sextic polynomial equation is \[p(x) \equiv x^{6}-6x^{5}+(9+2\nu u)x^{4}-4ux^{3}-\nu u(6-u)x^{2} \tag{25}\] \[+2\nu u^{2}x+\nu u^{2}=0,\] where we have defined the dimensionless variables \[u:=\alpha^{2},\hskip 28.452756pt\nu:=\frac{q}{l^{2}+q}. \tag{26}\] In this section, we will call \(u\) to be the rotation parameter. One must solve this polynomial equation as \(x=x(u,\nu)\) for the following intervals: \[0<x,\hskip 28.452756pt0\leq\nu\leq 1,\hskip 28.452756pt1<u. \tag{27}\] To proceed further, it pays to define the following variables which will simplify the final expressions: \[\nu:=\frac{\xi}{u},\hskip 28.452756ptu:=1+w^{3}, \tag{28}\] with \(0\leq\xi\) and \(w\geq 0\). Even though a generic radical solution to the sextic (25) is not possible, it can be shown that it reduces to a quadratic times a quartic at the following critical point for \(u>1\): \[\xi_{\rm cr}\ =\ \frac{3(w+1)^{3}}{w(w+5)+7} \tag{29}\] and it becomes solvable. 
The four real solutions of this sextic polynomial are plotted in Fig. (10). Two of the solutions are degenerate with a vanishing second derivative and they are marginally stable orbits. One of the remaining solutions has a negative second derivative and the other one has a positive second derivative, therefore they are stable and unstable critical orbits, respectively. The critical stable and unstable orbits are available only for spacetimes with a rotation parameter \(u<u_{max}=6\sqrt{3}-9=1.3923\). The \(l\) and \(q\) values of these solutions can be seen in Fig. (11) and Fig. (12), respectively. The marginally stable orbits have a turning point and they can be prograde or retrograde. Likewise, the stable orbit can be prograde or retrograde. Yet, the unstable orbit is always retrograde. The Carter's constants are non-zero, as expected. ## V Spherical Timelike Orbits around the Naked Singularity Let us now consider a massive particle with mass \(\mu\) that moves on a spherical timelike orbit in the vicinity of Figure 8: The shadow image of the Kerr-type naked singularity with a rotation parameter \(\alpha=1.5\) for an observer with the inclination angle \(\frac{\pi}{2}\). There is no line with a negative \(X\)-value because there are no prograde orbit exists. Figure 10: The solution of the sextic polynomial is plotted as a function of the rotation parameter for the interval \(1<u<2\). Figure 9: The shadow image of the Kerr-type naked singularity with a rotation parameter \(\alpha=1.5\) for observers with different inclination angles \(0<\theta_{0}<\frac{\pi}{2}\). There is no prograde orbit for this spacetime. Therefore, changing the inclination angle only affects the arc shape in the image. the naked singularity. For the spherical timelike orbits, [12], the relevant geodesic equation is \[\Sigma\,\frac{dr}{d\tau}=\pm\sqrt{R(r)}, \tag{30}\] where the radial function can be rearranged as \(\mathbf{R}(x):=R(r)/(m^{4}\mu^{2})\) which is given as \[\mathbf{R}(x) := x^{2}\left(\alpha^{2}\left(\tilde{E}^{2}-1\right)-l^{2}-q \right)-\alpha^{2}q\] \[+ 2x\left(\left(\alpha\tilde{E}-l\right)^{2}+q\right)+\left( \tilde{E}^{2}-1\right)x^{4}+2x^{3}\] with \(l=\frac{L}{m\mu}\), \(q=\frac{Q}{m^{2}\mu^{2}}\) and \(\tilde{E}=\frac{E}{\mu}\). Note that in contrast to the null geodesics, the energy of the orbit is important, but the mass of the particle is not, hence we scaled out the mass of the particle as well as the mass of the central body. Two equations must be satisfied, \(\mathbf{R}(x)=0\) and \(\frac{d\mathbf{R}(x)}{dx}=0\), for constant radius geodesics. There is a bifurcation of solutions: for \(x=1\), one has the following solutions \[l=\frac{2\left(\alpha^{2}+1\right)\tilde{E}^{2}-\alpha^{2}+1}{2\alpha\tilde{E }}, \tag{32}\] and \[q=\frac{\alpha^{2}-\left(2\tilde{E}^{2}+1\right)^{2}}{4\alpha^{2}\tilde{E}^{ 2}}. \tag{33}\] On the other hand, for \(x\neq 1\), one has \[l=\frac{-1}{\alpha^{2}(x-1)}\times\left[\alpha\tilde{E}(\alpha^ {2}-x^{2})\right. \tag{34}\] \[\left.+\left(x\left(\alpha^{3}+\alpha(x-2)x\right)^{2}\left( \left(\tilde{E}^{2}-1\right)x+1\right)\right)^{1/2}\right],\] and \[q=\frac{x^{2}}{\alpha^{3}(-1+x)^{2}}\times\left[\alpha^{3} \left(\left(2\tilde{E}^{2}-1\right)x+1\right)\right. 
\tag{35}\] \[\left.+2\tilde{E}\left(x\left(\alpha^{3}+\alpha(x-2)x\right)^{2} \left(\left(\tilde{E}^{2}-1\right)x+1\right)\right)^{1/2}\right.\] \[\left.+\alpha x\left(x\left(\tilde{E}^{2}(-((x-4)x+5))+(x-5)x+8 \right)-4\right)\right].\] Next we shall analyze the unit energy solutions for which the equations are more transparent. ### Unit Energy Timelike Orbits For unit energy particles, \(\tilde{E}=1\), from (34) and (35) one can obtain for the equatorial plane, \(q=0\), \[x=2+\alpha\pm 2\sqrt{\alpha+1}, \tag{36}\] and the plus solution is plotted in Fig. (13) for the interval \(1<\alpha<2\). The \(l\) value for this orbit is plotted in Fig. (14) for the same interval and it shows that this orbit is retrograde. On the polar plane, \(l=0\), we obtain \[\alpha=\sqrt{2x^{3/2}-x^{2}}, \tag{37}\] which vanishes at \(x=0\) and \(x=4\) as can be seen in Fig. (15). This rotation parameter has a maximum at \(x_{max}=\frac{9}{4}\) with a value \(\alpha_{max}=\frac{3\sqrt{3}}{4}=1.29904\). For naked singularities with a rotation parameter less than this value, there could be prograde orbits as well as retrograde orbits for generic spherical orbits. It is important to observe that massive particles with a unit energy can co-rotate with the black hole for higher values of the rotation parameter than the photons whose maximum rotation parameter is \(\alpha_{max}=\sqrt{6\sqrt{3}-9}\). The spherical timelike orbits of the polar plane can be seen in Fig. (16). Figure 11: The \(l\) value of the critical inclination angle orbit is plotted as a function of the rotation parameter for the interval \(1<u<2\). Figure 12: The Carter’s constant of the critical inclination angle orbit as a function of the rotation parameter is plotted for the interval \(1<u<2\). Figure 13: The radius of an equatorial retrograde orbit for a unit energy particle is plotted as a function of the rotation parameter for the interval \(1<\alpha<2\) by using (36). ### Generic Energy orbits on the Equatorial Plane When we solve the equation (35) for \(q=0\) with respect to the rotation parameter without assuming the unit energy condition, we get \[\alpha = \left[4x+x^{2}\left(2\tilde{E}^{4}x+\tilde{E}^{2}(5-3x)+x-4\right)\right.\] \[\left.-2\left(2+\left(\tilde{E}^{2}-1\right)x\right)\sqrt{ \tilde{E}^{2}x^{3}\left(\left(\tilde{E}^{2}-1\right)x+1\right)}\right]^{\frac{ 1}{2}}.\] Two conclusions can be drawn from this relation. Firstly, for higher rotation parameter values, the radius of the spherical timelike orbit for particles of constant energy on the equatorial plane increases. Secondly, higher energy particles follow closer orbits to the naked singularity with a constant rotation parameter than the lower energy particles. As an example, for \(\alpha=1.6\), the energy values as a function of the radius is plotted in Fig. (17). ### Generic Energy orbits on the Polar Plane The equation (34) for \(l=0\) yields \[\alpha = \left(\frac{1}{\tilde{E}^{2}(x+1)-x}\right. \tag{39}\] \[\times\left[x^{2}\left(\tilde{E}^{2}(-(x-1))+x-2\right)\right.\] \[\left.+2\sqrt{\tilde{E}^{2}x^{3}\left(\left(\tilde{E}^{2}-1\right) x+1\right)}\right]\right)^{\frac{1}{2}}.\] Then, by using this relation, one can investigate the rotation parameter for different energy values. We have already shown that for a unit energy particle, there is a maximum possible rotation parameter \(\alpha_{max}=\frac{3\sqrt{3}}{4}\). For \(\tilde{E}=1.5\), one finds \(\alpha_{max}=1.21002\) and for \(\tilde{E}=2\), one finds \(\alpha_{max}=1.19495\). 
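As a numerical illustration of (36) and (37), the short sketch below evaluates the equatorial retrograde radius for a unit-energy particle and scans the polar relation (37), recovering the maximum \(\alpha_{max}=\frac{3\sqrt{3}}{4}\approx 1.29904\) at \(x=9/4\); the scan resolution is arbitrary.

```python
import numpy as np

def x_equatorial_unit_energy(alpha):
    # Eq. (36), plus branch: equatorial retrograde orbit of a unit-energy particle
    return 2.0 + alpha + 2.0*np.sqrt(alpha + 1.0)

def alpha_polar_unit_energy(x):
    # Eq. (37): rotation parameter admitting a polar orbit of a unit-energy particle at radius x
    return np.sqrt(2.0*x**1.5 - x**2)

print(x_equatorial_unit_energy(1.5))        # equatorial retrograde radius for alpha = 1.5

xs = np.linspace(1e-4, 4.0, 400001)
alphas = alpha_polar_unit_energy(xs)
i = int(np.argmax(alphas))
print(xs[i], alphas[i], 3.0*np.sqrt(3.0)/4.0)   # ~2.25, ~1.29904, 1.29904
```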
In conclusion, one can observe that for the high energy limit (for example \(\tilde{E}=100\)), the maximum value of the rotation parameter approaches to the maximum value of the photon case (\(\alpha_{max}=\sqrt{6\sqrt{3}-9}\)) as expected. ## VI Accretion into the Naked Singularity ### Review of the \(\alpha<1\) Case #### vi.1.1 Special Circular Null and Timelike Orbits Accretion of matter and radiation around both non-rotating and rotating black holes is an extremely important aspect of black hole physics in the strong field region, both from the vantage point of theory and observation. For rotating black holes, one can consider thin Figure 16: The polar orbits as a function of the rotation parameter is plotted for the interval \(1<\alpha<1.5\). Figure 17: The energy of the particles is plotted as a function of the radius for the interval \(4.5<x<5\) around a black hole with a rotation parameter \(\alpha=1.6\). Higher energy particles follow orbits with smaller radii. Figure 14: The \(l\) value of an equatorial orbit for a unit energy particle is plotted as a function of the rotation parameter for the interval \(1<\alpha<2\). The negative \(l\) values imply that this is a retrograde orbit. Figure 15: The rotation parameter as a function of the radius of the polar orbits is plotted for the interval \(0<x<5\). accretion disks around the equatorial plane. One may not be able to solve the whole disk plus the black hole system analytically in an exact form, but one can compute the effects of the accretion disk on the black hole by considering the properties of some special equatorial, circular null and timelike orbits. As matter and radiation fall into the black hole, the mass and the spin of the black hole increase as discussed by Bardeen [13] (without taking into account of the radiation) and later by Thorne [14] who considered the effects of the radiation. It turns out the retrograde photon orbits have a larger capture cross-section than that of prograde photon orbits, the latter generically spin up but the former spin down the black hole. Before investigating the accretion around a Kerr-type naked singularity, we would like to review the Kerr black hole case with \(\alpha<1\). The radial timelike geodesic equation can be rewritten on the equatorial plane as \[\frac{dr}{d\tau}=\pm r^{-3/2}\sqrt{R(r)}, \tag{40}\] where the radial function can be rearrange to \({\bf R}(x):=R(r)/(m^{4}\mu^{2})\) which is given as \[{\bf R}(x) := \left(\tilde{E}^{2}-1\right)x^{3}+\alpha^{2}\left(\tilde{E}^{2}- 1\right)x+2(l-\alpha\tilde{E})^{2} \tag{41}\] \[-l^{2}x+2x^{2}.\] By using the circularity conditions, \({\bf R}(x)=0\) and \(\frac{d{\bf R}(x)}{dx}=0\), one can get \[l_{\pm} = \pm\frac{\alpha^{2}+x^{2}\mp 2\alpha\sqrt{x}}{\sqrt{\pm 2\alpha x^ {3/2}+(x-3)x^{2}}},\] \[\tilde{E}_{\pm} = \frac{\pm\alpha+(x-2)\sqrt{x}}{\sqrt{\pm 2\alpha x^{3/2}+(x-3)x^{2 }}}, \tag{42}\] where \(+\) represents prograde orbits while \(-\) represent retrograde orbits. It is important to state that circular orbit solutions exist on the equatorial plane only if \(\pm 2\alpha x^{3/2}+(x-3)x^{2}\geq 0\) and the equality is satisfied only by null orbits which is consistent with the left-hand side of the equations as these quantities represent angular momentum and energy per mass. The stability of these circular orbits is determined by the second derivative test: \(\frac{d^{2}{\bf R}(x)}{dx^{2}}>0\) for unstable orbits, while \(\frac{d^{2}{\bf R}(x)}{dx^{2}}<0\) for stable ones. 
The special case \(\frac{d^{2}{\bf R}(x)}{dx^{2}}=0\) represents the innermost stable circular orbit (ISCO) or the marginally stable orbit. This ISCO condition yields \[-3\alpha^{2}+x^{2}\pm 8\alpha\sqrt{x}-6x=0, \tag{43}\] which has 4 solutions two of which are physically relevant and correspond to prograde and retrograde ISCOs. For the sake of simplicity, we will not provide the explicit expression here, but they are plotted in (18). Massive particles with \(\tilde{E}>1\) follow unstable circular orbits on the equatorial plane and under a slight perturbation, they may escape to infinity or fall into the black hole. Particles with \(\tilde{E}<1\), under a perturbation, can only fall into the black hole. The special unstable orbits with \(\tilde{E}=1\) are aptly called the _binding orbits_, and are located at \[x_{bind,\pm}=\mp\alpha+2\sqrt{1\pm\alpha}+2 \tag{44}\] The prograde and retrograde ISCO, binding orbits and photon orbits can be seen in Fig. (18). #### iv.1.2 Change in the spin of the black hole due to accretion Let us assume that there is a subextremal Kerr black hole with a thin accretion disk around its equator, and further assume that there are no gravitational or electromagnetic radiation from the disk. Particles fall into the black hole under slight perturbations from the ISCO. These particles feed the black hole with mass \(\delta m=\tilde{E}_{ISCO}\); and angular momentum \(\delta J=l_{ISCO}\). Under these assumptions, Bardeen [13] showed that a black hole which is initially static can be spinned up by particles on the accretion disk until it reaches the extremal rotation parameter \(\alpha=1\). Later, Thorne [14] calculated the upper limit as \(\alpha=0.998\) by considering photons coming out of the accelerated particles. Here it turns out the capture cross-section of the retrograde orbits is larger than that of the prograde orbits and this fact does not allow the subextremal black hole to be spinned up to the extremal value. ### Accretion around Kerr-type Naked Singularity #### iv.2.1 Special Circular Null and Timelike Orbits When the circularity conditions are applied for the radial part of the geodesic equations, \({\bf R}(x)=0\) and \(\frac{d{\bf R}(x)}{dx}=0\), for Kerr-type naked singularity spacetimes, \(\alpha>1\), one get \[\tilde{E}_{\epsilon}=\frac{\left(x-2\right)\sqrt{x}+\epsilon\alpha}{\sqrt{ \left(x-3\right)x^{2}+2\epsilon\alpha x^{3/2}}}, \tag{45}\] Figure 18: The prograde and retrograde ISCOs, binding orbits and photon orbits are plotted as a function of the rotation parameter for the interval \(0<\alpha<1\). and \[l_{\epsilon}=\epsilon\frac{x^{2}+\alpha^{2}-2\epsilon\alpha\sqrt{x}}{\sqrt{\left(x- 3\right)x^{2}+2\epsilon\alpha x^{3/2}}}, \tag{46}\] where \(\epsilon=+1\) represents the prograde solutions while \(\epsilon=-1\) represents retrograde solutions. Because the radial part of the geodesic equations has a quadratic dependence on \(\tilde{E}\) and \(l\), there should be a second solution. A straightforward calculation shows that the second solution is \[\tilde{E}^{\prime}=-\tilde{E},\ \ \ \ l^{\prime}=-l. \tag{47}\] As already mentioned, the circular orbits exist only if \(\left(x-3\right)x^{2}\pm 2\alpha x^{3/2}\geq 0\) and equality is satisfied only by null orbits. For spacetimes with \(\alpha>1\), the equality is satisfied by a _single_ orbit which is retrograde. In other words, there is no prograde null orbit on the equatorial plane. 
At this point, let us concentrate on an interesting property of the Kerr-type naked singularity spacetimes, which was first shown in [15]. For rotation parameter values which can be given as \[\alpha\left(x\right)=(2-x)\sqrt{x}, \tag{48}\] the particles can rotate in their prograde orbits with zero energy. This is not possible for Kerr spacetimes because these orbits are located inside the event horizon. Yet, because of the fact that there is no event horizon for naked singularity spacetimes, these orbits are relevant. Even more interestingly, there are orbits with radii smaller than zero energy orbits which are followed by particles with negative energy, [16]. By using 48, one can find that zero and negative energy orbits are possible in the range \(1<\alpha<\sqrt{\frac{32}{27}}=1.08866\) as can be seen in Fig. (19). In addition to these zero energy orbits, zero angular momentum orbits or ZAMOs are also possible for Kerr-type naked singularity spacetimes as was shown in [15]. For rotation parameter values \[\alpha\left(x\right)=\sqrt{x}\pm\sqrt{x-x^{2}}, \tag{49}\] the particles can rotate in their orbits with zero angular momentum. The relation (49) can be seen in Fig. (20). The maximum of the rotation parameter can be calculated via 49 and it is \(\alpha=\sqrt{\frac{27}{16}}=1.29904\). This means that it is possible to obtain orbits with zero or negative angular momenta for the interval \(1<\alpha<\frac{3\sqrt{3}}{4}\). The special case, the innermost stable circular orbits (ISCO), correspond to the orbits which satisfy the equation \[-3\alpha^{2}+x^{2}\pm 8\alpha\sqrt{x}-6x=0. \tag{50}\] This condition accepts two solutions which correspond prograde and retrograde ISCO. For the sake of simplicity, we will not provide the explicit expression here. The other special case discussed for the Kerr black hole is the binding orbits. There exist prograde and retrograde binding orbits for the Kerr-type naked singularity spacetime which can be given as \[x_{bind,\pm}=2+\alpha\mp 2\sqrt{\alpha+1}. \tag{51}\] The prograde and retrograde ISCO and binding orbits as well as retrograde photon orbits can be seen in Fig. (21). All orbits on the equatorial plane for both intervals \(0<\alpha<1\) and \(1<\alpha<1.5\) are plotted in Fig. (22). The retrograde ISCO, binding orbit and photon orbit continue to exist for the naked singularity spacetimes. The prograde ISCO, binding orbit and the photon orbit merge at \(\alpha=1\). The prograde photon orbit does not exist for \(\alpha>1\). The prograde ISCO continues to exist but starts to move away from the singularity with an increasing rotation parameter. For the rotation parameter \(\alpha=\frac{3\sqrt{3}}{4}\), the particles on the prograde ISCO has \(l=0\). The prograde binding orbit also continues to exist but there is a discontinuity at \(\alpha=1\) as can be seen in (22). #### iv.2.2 Spinning down the Singularity An approach similar to the one developed by Bardeen [13] to investigate the effect of the particles in the ac Figure 19: The rotation parameter relation (48) is plotted for the interval \(0<x<2.4\). The maximum value of the rotation parameter is \(\alpha=\sqrt{\frac{32}{27}}\) and this means that it is possible to find orbits with zero or negative energy for the interval \(1<\alpha<\sqrt{\frac{32}{27}}\). Figure 20: The rotation parameter relation (49) is plotted for the interval \(0<x<1.2\). 
The maximum value of the rotation parameter is \(\alpha=\frac{3\sqrt{3}}{4}\) and this means that it is possible to find orbits with zero or negative angular momentum for the interval \(1<\alpha<\frac{3\sqrt{3}}{4}\). cretion disk on the Kerr black hole can be developed for the Kerr-type naked singularity. Let us assume there is a Kerr-type naked singularity with an accretion disk with a negligible thickness on the equatorial plane. Let us also assume gravitational and electromagnetic radiation of this disk is negligible. The gravitational field of the disk itself is much smaller than the gravitaional field of the singularity and therefore it is negligible. As a result, it can be assumed that particles on the disk follow circular orbits as already discussed in the previous section. As a starting point, the change in the mass and the angular momentum of the Kerr-type naked singularity can be written as \[\frac{\delta J}{\delta m}=mf\left(\alpha\right) \tag{52}\] in order to denote their relation with the rotation parameter. Before starting the calculation, in order to avoid complicated equations, let us define \[\tilde{x}:=\sqrt{x_{ISCO}}. \tag{53}\] In the previous section it was shown that, on the innermost stable circular orbit, the condition \[\tilde{x}^{4}-6\tilde{x}^{2}+8\epsilon\alpha\tilde{x}-3\alpha^{2}=0 \tag{54}\] should be satisfied. One can solve this equation with respect to the rotation parameter, \(\alpha\). For retrograde orbits, there are two solutions, one of which provides negative rotation parameter values and can be ignored. The physically viable retrograde solution is \[\alpha_{r}\left(\tilde{x}\right)=\frac{1}{3}\tilde{x}\left(\sqrt{3\tilde{x}^{2 }-2}-4\right). \tag{55}\] Note that for \(\tilde{x}\geq 3\), one has \(\alpha\geq 1\). For prograde orbits, there are two solutions and both of them are physically viable in their corresponding ranges. For the prograde orbit, \[\alpha_{p,1}\left(\tilde{x}\right)=\frac{1}{3}\tilde{x}\left(4-\sqrt{3\tilde{ x}^{2}-2}\right), \tag{56}\] the corresponding range becomes \(\sqrt{2/3}\leq\tilde{x}\leq 1\), and in this range the rotation parameter is \(1\leq\alpha\leq\sqrt{\frac{32}{27}}\). The other prograde orbit is \[\alpha_{p,2}\left(\tilde{x}\right)=\frac{1}{3}\tilde{x}\left(4+\sqrt{3\tilde{ x}^{2}-2}\right), \tag{57}\] and it exists for \(\sqrt{2/3}\leq\tilde{x}\) with a rotation parameter value \(\sqrt{\frac{32}{27}}\leq\alpha\). Both rotation parameter solutions corresponding to prograde orbit can be seen in Fig. (23). At this point, by using the assumption that infinitesimal changes in the mass and angular momentum of the naked singularity, \(\delta m\) and \(\delta J\), due to plunging particles is equal to the energy and angular momentum of the particles following the innermost stable circular orbit, \(\delta m=\tilde{E}_{ISCO}\) and \(\delta J=l_{ISCO}\). One has \[f\left(\alpha\right)=\frac{1}{m}\frac{\delta J}{\delta m}=\frac{1}{m}\frac{l_ {\pm}}{\tilde{E}_{\pm}}\bigg{|}_{\tilde{x}}. \tag{58}\] Let us start calculating the function \(f\left(\alpha\right)\) for the ret Figure 23: Two rotation parameters, \(\alpha_{p,1}\) and \(\alpha_{p,2}\), corresponding to prograde orbit are plotted as a function of \(\tilde{x}\) for the interval \(\sqrt{2/3}\leq\tilde{x}\leq 2\). Figure 22: All equatorial orbits are plotted as a function of the rotation parameter for the interval \(0<\alpha<1.5\). 
Figure 21: The prograde and retrograde ISCO and binding orbits, and retrograde photon orbit are plotted as a function of the rotation parameter for the interval \(1<\alpha<1.5\). Observe that the prograde orbits are closer the naked singularity than the retrograde orbits which means the latter have a larger capture cross-section. This fact plays an important role in the spinning-down effect. rograde orbits first. \[f\left(\alpha\right) = \frac{1}{m}\left.\frac{l_{-}}{\tilde{E}_{-}}\right|_{\tilde{x}}=- \frac{2\tilde{x}\sqrt{12\tilde{x}^{2}+4\sqrt{3\tilde{x}^{2}-2}-7}}{3\sqrt{3 \tilde{x}^{2}-2}} \tag{59}\] \[= -\frac{2\tilde{x}}{3}\left[2\sqrt{3\tilde{x}^{2}-2}+1\right|\] \[= -\frac{2\tilde{x}}{3}\left(2+\frac{1}{\sqrt{3\tilde{x}^{2}-2}} \right).\] By using the relation \(J=\alpha m^{2}\), one can get \[f\left(\alpha\right)=\frac{1}{m}\frac{\delta J}{\delta m}=\frac{\delta\alpha} {\delta\left(\ln m\right)}+2\alpha. \tag{60}\] With the help of 55, one has \[\delta\alpha_{r}=\left(\frac{2}{3}\left(\frac{3\tilde{x}^{2}-1}{\sqrt{3\tilde {x}^{2}-2}}-2\right)\right)\delta\tilde{x}. \tag{61}\] Hence, \[f\left(\alpha\right)-2\alpha = -\frac{2\tilde{x}}{3}\left(2+\frac{1}{\sqrt{3\tilde{x}^{2}-2}} \right)-2\alpha\] \[= -\frac{2\tilde{x}}{3}\left(-2+\frac{1}{\sqrt{3\tilde{x}^{2}-2}}+ \sqrt{3\tilde{x}^{2}-2}\right),\] and \[\frac{\delta\alpha}{\delta\left(\ln m\right)} = \frac{2}{3}\left(\frac{3\tilde{x}^{2}-1}{\sqrt{3\tilde{x}^{2}-2}} -2\right)\frac{\delta\tilde{x}}{\delta\left(\ln m\right)}, \tag{63}\] and by using 60 one can get \[\delta\left(\ln\tilde{x}\right) = -\delta\left(\ln m\right), \tag{64}\] of which the solution is \(\tilde{x}=C/m\) where \(C\) is a constant and (55) becomes \[\alpha_{r}\left(\tilde{m}\right)=\frac{C}{3m}\left(\sqrt{3\frac{C^{2}}{m^{2}} -2}-4\right). \tag{65}\] The constant \(C\) can be found from the initial conditions with initial mass \(m_{0}\) and the initial rotation parameter \(\alpha_{0}\). For instance, the retrograde ISCO is located at \(x=10.3759\) for \(\alpha_{0}=1.5\); and hence \(\tilde{x}=3.22116\). So one finds \(C=3.22116m_{0}\). After the falling of matter, at a later time, the relation turns into \[\tilde{x}=\frac{3.22116m_{0}}{m}. \tag{66}\] Therefore, one can write the rotation parameter relation (55) as a function of the mass of the singularity \[\alpha\left(m\right)=\frac{1.07372m_{0}}{m}\left(\sqrt{\frac{31.1277m_{0}^{2} }{m^{2}}-2}-4\right), \tag{67}\] and this relation is valid for \(1.07372\geq\frac{m}{m_{0}}\). So after accreting a mass of \(\delta m=0.07372m_{0}\) the rotation parameter is reduced to \(\alpha_{r}=1\) from its initial value of \(\alpha_{r}=1.5\). Thus the particles following the retrograde ISCO quickly slow down the rotation of the singularity. This is a rather remarkable result: for example naked singularity with mass \(m_{0}=1\) kg and \(\alpha_{0}=1.5\) only requires less than 74 grams of matter to reduce \(\alpha\) to the extremal rotation with an event horizon. The results can be seen in the Fig. (24). Now, we can concentrate on the prograde orbit. For prograde ISCO, from (58), one has \[f\left(\alpha_{p,1}\right) = \frac{1}{m}\frac{l_{+}}{\tilde{E}_{+}}\bigg{|}_{\tilde{x}}=\frac {2\tilde{x}}{3}\left(2+\frac{1}{\sqrt{3\tilde{x}^{2}-2}}\right), \tag{68}\] and \[f\left(\alpha_{p,2}\right) = \frac{1}{m}\frac{l_{+}}{\tilde{E}_{+}}\bigg{|}_{\tilde{x}}=\frac {2}{3}\tilde{x}\left(2-\frac{1}{\sqrt{3\tilde{x}^{2}-2}}\right). \tag{69}\] Let us first study the first prograde solution. 
Doing the computations as in the retrograde orbit case verbatim, we have \[f\left(\alpha_{p,1}\right)-2\alpha_{p,1}=\frac{2\tilde{x}}{3} \left(2+\frac{1}{\sqrt{3\tilde{x}^{2}-2}}\right)-2\alpha_{p,1} \tag{70}\] \[= \frac{2\tilde{x}}{3}\left(\frac{3\tilde{x}^{2}-1-2\sqrt{3\tilde{x }^{2}-2}}{\sqrt{3\tilde{x}^{2}-2}}\right).\] One can also find that \[f\left(\alpha_{p,1}\right)-2\alpha_{p,1}=\frac{\delta\alpha}{ \delta\left(\ln m\right)}\] \[= -\frac{2}{3}\left(\frac{3\tilde{x}^{2}-1-2\sqrt{3\tilde{x}^{2}-2} }{\sqrt{3\tilde{x}^{2}-2}}\right)\frac{\delta\tilde{x}}{\delta\left(\ln m \right)}.\] These two equations give \[\delta\left(\ln\tilde{x}\right) = -\delta\left(\ln m\right), \tag{72}\] of which the solution is \(\tilde{x}=C/m\) which is valid in the interval \(\sqrt{2/3}\leq\tilde{x}\leq 1\). Let us consider a Kerr-type naked singularity with initial mass \(m_{0}\) and initial rotation parameter \(\alpha_{0}=1.01\). The prograde ISCO is represented by \(\alpha_{p,1}\) for this case and it is located at \(x=0.75192\) and Figure 24: The rotation parameter is plotted as a function of the \(m/m_{0}\) for the interval \(1\leq\frac{m}{m_{0}}\leq 1.07372\). hence \(\tilde{x}=0.867133\). As a result, \(C=0.867133m_{0}\). After matter accretion, the relation evolves to \[\tilde{x}=\frac{0.867133m_{0}}{m}. \tag{73}\] As a consequence, the rotation parameter as a function of mass becomes \[\alpha_{p,1}\left(m\right)=\frac{0.289044\left(4-\sqrt{\frac{2.25576}{m^{2}}-2 }\right)}{m}, \tag{74}\] which is valid in the interval \(0.867133\leq\frac{m}{m_{0}}\leq 1.06202\). But accretion will not lead to decrease in mass, so one should restrict this interval to \(1\leq\frac{m}{m_{0}}\leq 1.06202\). The evolution of the rotation parameter can be seen in Fig. (25). As can be seen, this prograde orbit tries to increase the rotation parameter value up to \(\alpha=\sqrt{\frac{32}{27}}\). Now, as a final case, we would like to investigate the second prograde solution. Following similar steps, one arrives at \(\tilde{x}=C/m\). Assuming there is a Kerr-type naked singularity with an initial mass \(m_{0}\) and an initial rotation parameter \(\alpha_{0}=1.5\), the prograde ISCO can be found at \(x=0.879352\) and hence \(\tilde{x}=0.937738\) and \(C=0.937738m_{0}\). So one has \[\tilde{x}=\frac{0.937738m_{0}}{m}. \tag{75}\] As a consequence, the rotation parameter relation becomes \[\alpha_{p,2}\left(m\right)=\frac{0.312579\left(\sqrt{\frac{2.63806}{m^{2}}-2}+ 4\right)}{m}, \tag{76}\] which is valid for \(1.14849\geq\frac{m}{m_{0}}\). The evolution of the rotation parameter can be seen in Fig. (26). It is interesting to observe that the particles coming from a prograde orbit slow down the rotation. The final rotation parameter value of this process is \(\alpha=1.08866\). After that value, the solution is governed by the solution of the second interval. To sum up, a Kerr-type naked singularity with an initial mass \(m_{0}\) and an initial rotation parameter \(\alpha_{0}=1.5\) is slowed down by all falling particles. This slowing down process continues until \(\alpha=\sqrt{\frac{32}{27}}\). Below \(\alpha=\sqrt{\frac{32}{27}}\), the particles from the prograde orbit start to spin the singularity up while the particles from the retrograde orbit continue to slow it down. ### Remarks on earlier works Let us remark on some of the earlier works pertaining and complementing our discussion in this paper especially in the context of time evolution of the parameters of the naked singularity. 
In [17], the optical prop Figure 28: The rotation parameter of a naked singularity with initial mass \(m_{0}\) and rotation parameter \(\alpha_{0}=1.5\) is plotted for both prograde and retrograde orbits as a function of \(\frac{m}{m_{0}}\) for the interval \(1<\frac{m}{m_{0}}<1.14849\). Observe that the retrograde orbits continue to spin down the central object even after the event horizon is formed. Figure 27: The rotation parameter of a naked singularity with initial mass \(m_{0}\) and rotation parameter \(\alpha_{0}=1.05\) is plotted for both prograde and retrograde orbits as a function of \(\frac{m}{m_{0}}\) for the interval \(1<\frac{m}{m_{0}}<1.00776\). Figure 26: The rotation parameter is plotted as a function of \(\frac{m}{m_{0}}\) for the interval \(1<\frac{m}{m_{0}}<1.14849\). erties of the "silhouette" and the accretion disc around the Kerr "superspinars" were investigated in depth where constructing the image of the Kerr superspinar was done first, and then, the work focused on the optical properties of the accretion discs around it. While performing the analysis, they provided the Kerr naked singularity and Kerr black hole results for comparison. In another work [18], comparison of the effects of counterrotating and corotating orbits around a Kerr superspinar was done. The authors discuss the accumulated mass from both orbits, the conversion times of both orbits and the radiated energy from both orbits. Their conclusion about the conversion time, time needed to convert a Kerr superspinar into a near-extreme Kerr black hole, is much smaller for counterrotating orbit with respect to corotating orbit. Yet, the energy radiated from the corotating orbit is much higher than counterrotating orbits. In other works, various Kerr superspinar properties were investigated. For instance, in [19], radial and vertical epicyclic frequencies of Keplerian motion in the field of Kerr naked singularities were studied; and in [20], observational properties of Kerr superspinars were demonstrated. A similar shadow analysis, that was used for Kerr black hole, Kerr-type naked singularity and Kerr superspinars, can be performed for many other models. For instance, in [21], the shadows of rotating black holes in the Randall-Sundrum type-II models have been investigated thoroughly. The interesting feature of this work is that they consider not only the near region of the black hole but also the linearized metric in the far region. Another interesting paper, [22], provides a shadow analysis for three rotating regular no-horizon spacetimes, namely, Bardeen, charged Hayward and nonsingular spacetimes. In that paper, the results were compared with Kerr black hole and Kerr-type naked singularity and they reached a remarkable result: unlike Kerr-type naked singularity spacetimes, which has not a closed shadow, there could be a closed shadow for no-horizon spacetimes as in the Kerr black hole. ## VII Conclusions The Kerr metric is a solution to the vacuum Einstein equations [3] that is assumed to represent the gravitational field of all astrophysical black holes with only two hairs, its mass \(m\) and angular momentum \(J\). Such a uniqueness is so remarkable that it also leads to a rather unique environment which can be observable due to the unstable nature the photon orbits, among which the constant radii orbits are particularly relevant. In this work, we have studied the Kerr-type naked singularity to understand its shadow and its accretion. 
We found that a shadow image taken from the polar plane cannot distinguish a naked Kerr-type singularity with a spin parameter up to a maximum value (still above \(\alpha=1\)) from a Kerr black hole, while for naked singularities with spins higher than this maximum value, the shadow becomes quite distinct from that of the Kerr black hole. We have also studied the timelike orbits that are traced by massive particles around a naked singularity, and showed that the rapidly spinning singularity immediately slows down due to the infalling matter. Therefore, if a naked singularity is surrounded by a thin accretion disk around its equator, then it slows down and an event horizon is expected to form. In this process, the retrograde orbits play a dominant role as their capture cross-sections are larger than those of the prograde orbits.
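As a quick numerical cross-check of the closed-form relations above (an illustrative Python sketch, not part of the original analysis), one can evaluate the two prograde branches, Eqs. (74) and (76), at the end points of their validity intervals: the values reproduce the initial spins \(\alpha_{0}=1.01\) and \(\alpha_{0}=1.5\) and the common limiting value \(\alpha=\sqrt{32/27}\approx 1.08866\).

```python
import math

SPIN_LIMIT = math.sqrt(32.0 / 27.0)  # ~1.08866, limiting rotation parameter

def alpha_p1(m):
    """Eq. (74): first prograde branch; m is the mass in units of m0."""
    # The quoted constants are rounded, so clamp tiny negative arguments
    # that can appear at the end point of the validity interval.
    arg = max(0.0, 2.25576 / m**2 - 2.0)
    return 0.289044 * (4.0 - math.sqrt(arg)) / m

def alpha_p2(m):
    """Eq. (76): second prograde branch; m is the mass in units of m0."""
    arg = max(0.0, 2.63806 / m**2 - 2.0)
    return 0.312579 * (math.sqrt(arg) + 4.0) / m

print(alpha_p1(1.0), alpha_p1(1.06202))  # ~1.0100 -> ~1.0887
print(alpha_p2(1.0), alpha_p2(1.14849))  # ~1.5000 -> ~1.0887
print(SPIN_LIMIT)                        # both branches approach this value
```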
2306.17590
Miniaturized Graph Convolutional Networks with Topologically Consistent Pruning
Magnitude pruning is one of the mainstream methods in lightweight architecture design whose goal is to extract subnetworks with the largest weight connections. This method is known to be successful, but under very high pruning regimes, it suffers from topological inconsistency which renders the extracted subnetworks disconnected, and this hinders their generalization ability. In this paper, we devise a novel magnitude pruning method that allows extracting subnetworks while guarantying their topological consistency. The latter ensures that only accessible and co-accessible -- impactful -- connections are kept in the resulting lightweight networks. Our solution is based on a novel reparametrization and two supervisory bi-directional networks which implement accessibility/co-accessibility and guarantee that only connected subnetworks will be selected during training. This solution allows enhancing generalization significantly, under very high pruning regimes, as corroborated through extensive experiments, involving graph convolutional networks, on the challenging task of skeleton-based action recognition.
Hichem Sahbi
2023-06-30T12:09:22Z
http://arxiv.org/abs/2306.17590v1
# Miniaturized Graph Convolutional Networks with Topologically Consistent Pruning ###### Abstract Magnitude pruning is one of the mainstream methods in lightweight architecture design whose goal is to extract subnetworks with the largest weight connections. This method is known to be successful, but under very high pruning regimes, it suffers from topological inconsistency which renders the extracted subnetworks disconnected, and this hinders their generalization ability. In this paper, we devise a novel magnitude pruning method that allows extracting subnetworks while guarantying their topological consistency. The latter ensures that only accessible and co-accessible -- impactful -- connections are kept in the resulting lightweight networks. Our solution is based on a novel reparametrization and two supervisory bi-directional networks which implement accessibility/co-accessibility and guarantee that only connected subnetworks will be selected during training. This solution allows enhancing generalization significantly, under very high pruning regimes, as corroborated through extensive experiments, involving graph convolutional networks, on the challenging task of skeleton-based action recognition. **Keywords. Graph convolutional networks, lightweight design, magnitude pruning, skeleton-based recognition** ## 1 Introduction Deep convolutional networks are nowadays becoming mainstream in solving many pattern classification tasks including image and action recognition [22, 4]. Their principle consists in training convolutional filters together with pooling and attention mechanisms that maximize classification performances. Many existing convolutional networks were initially dedicated to grid-like data, including images [23, 25, 26, 29]. However, data sitting on top of irregular domains (such as skeleton graphs in action recognition) require extending convolutional networks to general graph structures, and these extensions are known as graph convolutional networks (GCNs) [9, 27]. Two families of GCNs exist in the literature: spectral and spatial. Spectral methods are based on graph Fourier transform [30, 31, 32, 33, 34, 36, 37, 44] while spatial ones rely on message passing and attention [39, 40, 41, 43]. Whilst spatial GCNs have been relatively more effective compared to spectral ones, their precision is reliant on the attention matrices that capture context and node-to-node relationships [46]. With multi-head attention, GCNs are more accurate but overparametrized and computationally overwhelming. Many solutions are proposed in the literature to reduce time and memory footprint of convolutional networks including GCNs [49, 50, 51, 53]. Some of them pretrain oversized networks prior to reduce their computational complexity (using distillation [54, 56, 57, 58, 60, 61], linear algebra [71], quantization [67] and pruning [63, 64, 65]), whilst others build efficient networks from scratch using neural architecture search [72]. In particular, pruning methods, either unstructured or structured are currently mainstream, and their principle consists in removing connections whose impact on the classification performance is the least noticeable. Unstructured pruning [65, 67] proceed by dropping out connections individually using different proxy criteria, such as weight magnitude, and then retrain the resulting pruned networks. In contrast, structured pruning [68, 70] removes groups of connections, entire filters or subnetworks using different mechanisms such as grouped sparsity. 
However, existing pruning methods either structured or unstructured suffer from several drawbacks. On the one hand, structured pruning may reach high speedup on usual hardware, but its downside resides in the rigidity of the class of learnable networks. On the other hand, unstructured pruning is more flexible, but its discrimination is limited at high pruning regimes due to _topological disconnections_, and handling the latter is highly intractable as adding or removing any connection _combinatorially_ affects the others. As contemporary network sizes grow into billions of parameters, studying high compression regimes has been increasingly important on very large networks. Nevertheless, pruning relatively smaller networks is even more challenging as this usually leads to highly disconnected and untrainable subnetworks, even at reasonably (not very) large pruning rates. Hence, we target our contribution towards mid-size networks including GCNs in order to fit not only the usual edge devices, such as smartphones, but also highly _miniaturized_ devices endowed with very limited computational resources (e.g., smart glasses). Considering the aforementioned issues, we introduce in this paper a new lightweight network design which guarantees the topological consistency of the extracted subnetworks. Our proposed solution is variational and proceeds by training pruning masks and weight parameters that maximize classification performances while guaranteeing the _accessibility_ of the unpruned connections (i.e., their reachability from the network input) and their _co-accessibility_ (i.e., their actual contribution in the evaluation of the output). Hence, only topologically consistent subnetworks are considered when selecting connections. Extensive experiments, on the challenging task of skeleton-based action recognition, show the outperformance of our proposed topologically consistent pruning. ## 2 A Glimpse on GCNs Let \(\mathcal{S}=\{\mathcal{G}_{i}=(\mathcal{V}_{i},\mathcal{E}_{i})\}_{i}\) denote a collection of graphs with \(\mathcal{V}_{i}\), \(\mathcal{E}_{i}\) being respectively the nodes and the edges of \(\mathcal{G}_{i}\). Each graph \(\mathcal{G}_{i}\) (denoted for short as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\)) is endowed with a signal \(\{\psi(u)\in\mathbb{R}^{s}:\ u\in\mathcal{V}\}\) and associated with an adjacency matrix \(\mathbf{A}\) with each entry \(\mathbf{A}_{uu^{\prime}}>0\) iff \((u,u^{\prime})\in\mathcal{E}\) and \(0\) otherwise. GCNs aim at learning a set of \(C\) filters \(\mathcal{F}\) that define convolution on \(n\) nodes of \(\mathcal{G}\) (with \(n=|\mathcal{V}|\)) as \((\mathcal{G}\star\mathcal{F})_{\mathcal{V}}=f\big{(}\mathbf{A}\ \mathbf{U}^{\top}\ \mathbf{W} \big{)}\), here \({}^{\top}\) stands for transpose, \(\mathbf{U}\in\mathbb{R}^{s\times n}\) is the graph signal, \(\mathbf{W}\in\mathbb{R}^{s\times C}\) is the matrix of convolutional parameters corresponding to the \(C\) filters and \(f(.)\) is a nonlinear activation applied entrywise. In \((\mathcal{G}\star\mathcal{F})_{\mathcal{V}}\), the input signal \(\mathbf{U}\) is projected using \(\mathbf{A}\) and this provides for each node \(u\), the aggregate set of its neighbors. 
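For concreteness, the aggregation-then-convolution block \((\mathcal{G}\star\mathcal{F})_{\mathcal{V}}=f(\mathbf{A}\,\mathbf{U}^{\top}\mathbf{W})\) can be sketched in a few lines of NumPy (an illustrative sketch using the shapes introduced above; the function and variable names are ours, not part of the original formulation):

```python
import numpy as np

def gcn_block(A, U, W, f=lambda z: np.maximum(z, 0.0)):
    """One aggregation + convolution block: f(A U^T W).

    A : (n, n) adjacency/attention matrix
    U : (s, n) graph signal (s channels per node)
    W : (s, C) convolutional filter parameters (C filters)
    Returns the (n, C) node representations.
    """
    aggregated = A @ U.T          # (n, s): neighborhood aggregation per node
    return f(aggregated @ W)      # (n, C): convolution + nonlinearity

# Tiny example: n=4 nodes, s=3 input channels, C=2 filters
rng = np.random.default_rng(0)
A = (rng.random((4, 4)) > 0.5).astype(float)   # hypothetical adjacency
U = rng.standard_normal((3, 4))
W = rng.standard_normal((3, 2))
print(gcn_block(A, U, W).shape)                # (4, 2)
```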
Entries of \(\mathbf{A}\) could be handcrafted or learned so \((\mathcal{G}\star\mathcal{F})_{\mathcal{V}}\) implements a convolutional block with two layers; the first one aggregates signals in \(\mathcal{N}(\mathcal{V})\) (sets of node neighbors) by multiplying \(\mathbf{U}\) with \(\mathbf{A}\) while the second layer achieves convolution by multiplying the resulting aggregates with the \(C\) filters in \(\mathbf{W}\). Learning multiple adjacency (also referred to as attention) matrices (denoted as \(\{\mathbf{A}^{k}\}_{k=1}^{K}\)) allows us to capture different contexts and graph topologies when achieving aggregation and convolution. With multiple matrices \(\{\mathbf{A}^{k}\}_{k}\) (and associated convolutional filter parameters \(\{\mathbf{W}^{k}\}_{k}\)), \((\mathcal{G}\star\mathcal{F})_{\mathcal{V}}\) is updated as \(f\big{(}\sum_{k=1}^{K}\mathbf{A}^{k}\mathbf{U}^{\top}\mathbf{W}^{k}\big{)}\). Stacking aggregation and convolutional layers, with multiple matrices \(\{\mathbf{A}^{k}\}_{k}\), makes GCNs accurate but heavy. We propose subsequently a method that makes our networks lightweight and still effective. ## 3 Relaxed Magnitude Pruning In the rest of this paper, a given GCN is subsumed as a multi-layered neural network \(g_{\theta}\) whose weights defined as \(\theta=\big{\{}\mathbf{W}^{1},\ldots,\mathbf{W}^{L}\big{\}}\), with \(L\) being its depth, \(\mathbf{W}^{\ell}\in\mathbb{R}^{d_{\ell-1}\times d_{\ell}}\) its \(\ell^{\text{th}}\) layer weight tensor, and \(d_{\ell}\) the dimension of \(\ell\). The output of a given layer \(\ell\) is defined as \(\phi^{\ell}=f_{\ell}(\mathbf{W}^{\ell\top}\ \phi^{\ell-1})\), w \(\ell\in\{2,\ldots,L\}\), being \(f_{\ell}\) an activation function. Without a loss of generality, we omit the bias in the definition of \(\phi^{\ell}\). Magnitude Pruning (MP) consists in zeroing the smallest weights in \(g_{\theta}\) (up to a pruning rate), while retraining the remaining weights. A relaxed variant of MP is obtained by multiplying \(\mathbf{W}^{\ell}\) with a differentiable mask \(\psi(\mathbf{W}^{\ell})\) applied entrywise to \(\mathbf{W}^{\ell}\). The entries of \(\psi(\mathbf{W}^{\ell})\) are set depending on whether the underlying layer connections are kept or removed, so \(\phi^{\ell}=f_{\ell}((\mathbf{W}^{\ell}\odot\psi(\mathbf{W}^{\ell}))^{\top} \,\phi^{\ell-1})\), here \(\odot\) stands for the element-wise matrix product. In this definition, \(\psi(\mathbf{W}^{\ell})\) enforces the prior that smallest weights should be removed from the network. In order to achieve magnitude pruning, \(\psi\) must be symmetric, bounded in \([0,1]\), and \(\psi(\omega)\rightsquigarrow 1\) when \(|\omega|\) is sufficiently large and \(\psi(\omega)\rightsquigarrow 0\) otherwise1. Footnote 1: A possible choice, used in practice, that satisfies these four conditions is \(\psi(\omega)=2\sigma(\omega^{2})-1\) with \(\sigma\) being the sigmoid function. Pruning is achieved using a global loss as a combination of a cross entropy term denoted as \(\mathcal{L}_{e}\), and a budget cost which measures the difference between the targeted cost (denoted as \(c\)) and the actual number of unpruned connections \[\min_{\{\mathbf{W}^{\ell}\}_{\ell}}\mathcal{L}_{e}\big{(}\{\mathbf{W}^{\ell} \odot\psi(\mathbf{W}^{\ell})\}_{\ell}\big{)}+\lambda\big{(}\sum_{\ell=1}^{L-1 }\mathbf{1}_{d_{\ell}}^{\top}\psi(\mathbf{W}^{\ell})\mathbf{1}_{d_{\ell+1}}- c\big{)}^{2}, \tag{1}\] here \(\mathbf{1}_{d_{\ell}}\) denotes a vector of \(d_{\ell}\) ones. When \(\lambda\) is sufficiently large, Eq. 
1 focuses on minimizing the budget loss while progressively making \(\{\psi(\mathbf{W}^{\ell})\}_{\ell}\) crisp (almost binary) using annealing. As training evolves, the right-hand side term reaches its minimum and stabilizes while the gradient of the global loss becomes dominated by the gradient of the left-hand side term, and this maximizes further the classification performances. ## 4 Topologically Consistent Magnitude Pruning The aforementioned pruning formulation is relatively effective (as shown later in experiments), however, it suffers from several drawbacks. On the one hand, removing connections independently may result into _topologically inconsistent_ networks (see section 4.1), i.e., either completely disconnected or having isolated connections. On the other hand, high pruning rates may lead to an over-regularization effect and hence weakly discriminant lightweight networks, especially when the latter include isolated connections (see again later experiments). In what follows, we introduce a more principled pruning framework that guarantees the topological consistency of the pruned networks and allows improving generalization even at very high pruning rates. ### _Accessibility and Co-accessibility_ Our formal definition of topological consistency relies on two principles: _accessibility and co-accessibility_ of connections in \(g_{\theta}\)[93]. Let's remind \(\psi(\mathbf{W}^{\ell}_{ij})\) as crisped (binary) function that indicates the presence or absence of a connection between the i-th and the j-th neurons of layer \(\ell\). This connection is referred to as accessible if \(\exists i_{1},\ldots,i_{\ell-1}\), s.t. \(\psi(\mathbf{W}^{1}_{i_{1},i_{2}})=\cdots=\psi(\mathbf{W}^{\ell-1}_{i_{\ell-1},i})=1\), and it is co-accessible if \(\exists i_{\ell+1},\ldots,i_{L}\), s.t. \(\psi(\mathbf{W}^{\ell+1}_{j_{i}i_{\ell+1}})=\cdots=\psi(\mathbf{W}^{L}_{i_{L-1},i_{L}})=1\). Considering \(\mathbf{S}^{\ell}_{a}=\psi(\mathbf{W}^{1})\ \psi(\mathbf{W}^{2})\ldots\psi( \mathbf{W}^{\ell-1})\) and \(\mathbf{S}^{\ell}_{c}=\psi(\mathbf{W}^{\ell+1})\ \psi(\mathbf{W}^{\ell+2})\ldots\psi( \mathbf{W}^{L})\), and following the above definition, it is easy to see that a connection between \(i\) and \(j\) is accessible (resp. co-accessible) iff the i-th column (resp. j-th row) of \(\mathbf{S}^{\ell}_{a}\) (resp. \(\mathbf{S}^{\ell}_{c}\)) is different from the null vector. A network is called topologically consistent iff all its connections are both accessible and co-accessible. Accessibility guarantees that incoming connections to the i-th neuron carry out effective activations resulting from the evaluation of \(g_{\theta}\) up to layer \(\ell\). Co-accessibility is equivalently important and guarantees that the outgoing activation from the j-th neuron actually contributes in the evaluation of the network output. A connection -- not satisfying accessibility or co-accessibility and even when its magnitude is large -- becomes useless and should be removed when \(g_{\theta}\) is pruned. For any given network, parsing all its topologically consistent subnetworks and keeping only the one that minimizes Eq. 1 is highly combinatorial. Indeed, the accessibility of a given connection depends on whether its preceding and subsequent ones are kept or removed, and any masked connections may affect the accessibility of the others. In what follows, we introduce a solution that prunes a given network while guaranteeing its topological consistency. 
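Before turning to the training procedure, the following NumPy sketch (illustrative only; it assumes the crisp binary masks \(\psi(\mathbf{W}^{\ell})\) are given and uses 0-based layer indices) makes the definition operational: it forms the products \(\mathbf{S}_{a}^{\ell}\) and \(\mathbf{S}_{c}^{\ell}\) and flags the kept connections that are both accessible and co-accessible.

```python
import numpy as np
from functools import reduce

def chain(mats, d):
    """Product of a list of binary mask matrices; identity of size d if the list is empty."""
    return reduce(np.matmul, mats) if mats else np.eye(d)

def consistent_connections(masks):
    """masks[l]: (d_l, d_{l+1}) binary matrix, entry (i, j) = 1 iff the connection
    between neuron i of layer l and neuron j of layer l+1 is kept.
    Returns, per layer, a boolean matrix marking connections that are kept,
    accessible and co-accessible."""
    out = []
    for l, M in enumerate(masks):
        d_in, d_out = M.shape
        S_a = chain(masks[:l], d_in)         # plays the role of S_a; columns index layer-l neurons
        S_c = chain(masks[l + 1:], d_out)    # plays the role of S_c; rows index layer-(l+1) neurons
        accessible = S_a.sum(axis=0) > 0     # input neuron reachable from the network input
        co_accessible = S_c.sum(axis=1) > 0  # output neuron contributes to the network output
        out.append((M > 0) & accessible[:, None] & co_accessible[None, :])
    return out

# Toy 4-layer network: the kept connection into neuron 1 of the first hidden layer is
# flagged as inconsistent because no kept path continues from that neuron to the output.
m0 = np.array([[1., 0.], [0., 1.]])
m1 = np.array([[1., 0.], [0., 0.]])
m2 = np.array([[1.], [0.]])
print([c.astype(int) for c in consistent_connections([m0, m1, m2])])
```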
### (Co)Accessibility Networks and Loss Function Our solution relies on two supervisory networks that measure accessibility and co-accessibility of connections in \(g_{\theta}\). These two networks, denoted as \(\phi_{r}\) and \(\phi_{l}\), have exactly the same architecture as \(g_{\theta}\) with only a few differences: indeed, \(\phi_{r}\) measures accessibility and inherits the same connections in \(g_{\theta}\) with the only difference that their weights correspond to \(\{\psi(\mathbf{W}^{\ell})\}_{\ell}\) instead of \(\{\mathbf{W}^{\ell}\odot\psi(\mathbf{W}^{\ell})\}_{\ell}\). Similarly, \(\phi_{l}\) inherits the same connections and weights as \(\phi_{r}\), however these connections are reversed in order to measure accessibility in the opposite direction (i.e., co-accessibility). Note that weights \(\{\mathbf{W}^{\ell}\}_{\ell}\) are shared across all the networks \(g_{\theta}\), \(\phi_{r}\) and \(\phi_{l}\). Considering the definition of accessibility and co-accessibility, one may define layerwise outputs \(\phi_{r}^{\ell}:=h\big{(}\psi(\mathbf{W}_{\ell-1})^{\top}\ \phi_{r}^{\ell-1}\big{)}\), and \(\phi_{l}^{\ell}:=h\big{(}\psi(\mathbf{W}_{\ell})\ \phi_{l}^{\ell+1}\big{)}\), being \(\phi_{r}^{1}=\mathbf{1}_{d_{1}}\), \(\phi_{l}^{L}=\mathbf{1}_{d_{L}}\), \(\mathbf{1}_{d_{1}}\) the vector of \(d_{1}\) comes and \(h\) the Heaviside activation. With \(\phi_{r}^{\ell}\) and \(\phi_{l}^{\ell}\), non-zero entries of the matrix \((\phi_{r}^{\ell}\phi_{l}^{\ell+1^{\top}})\odot\psi(\mathbf{W}^{\ell})\) correspond to selected connections in \(g_{\theta}\) which are also accessible and co-accessible. By plugging this matrix into Eq. 1, we redefine our topologically consistent pruning loss \[\min_{\{\mathbf{W}^{\ell}\}_{\ell}} \mathcal{L}_{e}\big{(}\{\mathbf{W}^{\ell}\odot\psi(\mathbf{W}^{ \ell})\odot\phi_{r}^{\ell}\phi_{l}^{\ell+1^{\top}}\}_{\ell}\big{)}\ +\ \lambda\big{(}\sum_{\ell=1}^{L-1}\phi_{r}^{\ell^{\top}}\psi(\mathbf{W}^{\ell}) \phi_{l}^{\ell+1}-c\big{)}^{2}, \tag{2}\] \[\text{with}\ \ \phi_{r}^{\ell} := h\big{(}(\phi_{r}^{\ell-1}\phi_{l}^{\ell^{\top}}\odot\psi( \mathbf{W}_{\ell-1}))^{\top}\ \phi_{r}^{\ell-1}\big{)}\] (3) \[\phi_{l}^{\ell} := h\big{(}(\phi_{r}^{\ell}\phi_{l}^{\ell+1^{\top}}\odot\psi( \mathbf{W}_{\ell}))\phi_{l}^{\ell+1}\big{)}.\] It is clear that accessibility networks in Eq. 3 cannot be modeled using standard feedforward neural networks, so more complex (highly recursive and interdependent) networks should be considered which also leads to exploding gradient. In order to make Eq. 3 simpler and still trainable with standard feedforward networks, we constrain entries of \(\psi(\mathbf{W}_{\ell})\) to take non-zero values iff the underlying connections are kept and accessible/co-accessible; in other words, \(\phi_{r}^{\ell^{\top}}\psi(\mathbf{W}_{\ell})\phi_{l}^{\ell+1}\) should approximate \(\mathbf{1}_{d_{\ell}}^{\top}\psi(\mathbf{W}_{\ell})\mathbf{1}_{d_{\ell+1}}\) in order to guarantee that (i) unpruned connections are necessarily accessible/co-accessible and (ii) non accessible ones are necessarily pruned. Hence, instead of 2 and 3, a surrogate optimization problem is defined as \[\min_{\{\mathbf{W}^{\ell}\}_{\ell}} \mathcal{L}_{e}\big{(}\{\mathbf{W}^{\ell}\odot\psi(\mathbf{W}^{ \ell})\odot\phi_{r}^{\ell}\phi_{l}^{\ell+1^{\top}}\}_{\ell}\big{)}\ +\ \lambda\big{(}\sum_{\ell=1}^{L-1}\phi_{r}^{\ell^{\top}}\psi(\mathbf{W}^{\ell}) \phi_{l}^{\ell+1}-c\big{)}^{2}\] (4) \[\ \ ### _Optimization_ Let \(\mathcal{L}\) denote the global loss in Eq. 
4, the update of \(\{\mathbf{W}^{\ell}\}_{\ell}\) is achieved using stochastic gradient descent and by _simultaneously_ backpropagating the gradients through the networks \(g_{\theta}\), \(\phi_{r}\) and \(\phi_{l}\). More precisely, considering Eq. 4 and \(\phi_{r}^{\ell}\), \(\phi_{l}^{\ell}\), the gradient of the global loss w.r.t. \(\mathbf{W}^{\ell}\) is obtained as \[\frac{\partial\mathcal{L}}{\partial\mathbf{W}^{\ell}}+\sum_{k=\ell+1}^{L} \frac{\partial\mathcal{L}}{\partial\phi_{r}^{k}}\frac{\phi_{r}^{k}}{\phi_{r}^ {k-1}}\ldots\frac{\partial\phi_{r}^{\ell+1}}{\partial\mathbf{W}^{\ell}}+\sum_ {k=1}^{\ell}\frac{\partial\mathcal{L}}{\partial\phi_{l}^{k}}\frac{\phi_{l}^{k} }{\phi_{l+1}^{k+1}}\ldots\frac{\partial\phi_{l}^{\ell}}{\partial\mathbf{W}^{ \ell}}, \tag{5}\] here the left-hand side term in Eq. 5 is obtained by backpropagating the gradient of \(\mathcal{L}\) from the output to the input of the network \(g_{\theta}\) whereas the mid terms are obtained by backpropagating the gradients of \(\mathcal{L}\) from different layers to the input of \(\phi_{r}\). In contrast, the right-hand side terms are obtained by backpropagating the gradients of \(\mathcal{L}\) through \(\phi_{l}\) in the opposite direction. Note that the evaluation of the gradients in Eq. 5 relies on the straight through estimator (STE) [73]; the sigmoid is used as a differentiable surrogate of \(h\) during backpropagation while the initial Heaviside is kept when evaluating the responses of \(\phi_{r}\), \(\phi_{l}\) (i.e., forward steps). STE allows training differentiable accessibility networks while guaranteeing binary responses when evaluating these networks. ## 5 Experiments We evaluate our GCNs on the task of action recognition using the challenging First-Person Hand Action (FPHA) dataset [2]. This dataset consists of 1175 skeletons whose ground-truth includes 45 action categories with a high variability in style, speed and scale as well as viewpoints. Each \begin{table} \begin{tabular}{c c c c c} **Method** & **Color** & **Depth** & **Pose** & **Accuracy (\%)** \\ \hline Two stream-color [4] & ✓ & ✗ & ✗ & 61.56 \\ Two stream-flow [4] & ✓ & ✗ & ✗ & 69.91 \\ Two stream-all [4] & ✓ & ✗ & ✗ & 75.30 \\ \hline HOG2-depth [5] & ✗ & ✓ & ✗ & 59.83 \\ HOG2-depth+pose [5] & ✗ & ✓ & ✓ & 66.78 \\ HON4D [7] & ✗ & ✓ & ✗ & 70.61 \\ Novel View [8] & ✗ & ✓ & ✗ & 69.21 \\ \hline 1-layer LSTM [9] & ✗ & ✗ & ✓ & 78.73 \\ 2-layer LSTM [9] & ✗ & ✗ & ✓ & 80.14 \\ \hline Moving Pose [11] & ✗ & ✗ & ✓ & 56.34 \\ Lie Group [12] & ✗ & ✗ & ✓ & 82.69 \\ HBRNN [14] & ✗ & ✗ & ✓ & 77.40 \\ Gram Matrix [15] & ✗ & ✗ & ✓ & 85.39 \\ TF [16] & ✗ & ✗ & ✓ & 80.69 \\ \hline JOULE-color [18] & ✓ & ✗ & ✗ & 66.78 \\ JOULE-depth [18] & ✗ & ✓ & ✗ & 60.17 \\ JOULE-pose [18] & ✗ & ✗ & ✓ & 74.60 \\ JOULE-all [18] & ✓ & ✓ & ✓ & 78.78 \\ \hline Huang et al. [19] & ✗ & ✗ & ✓ & 84.35 \\ Huang et al. [21] & ✗ & ✗ & ✓ & 77.57 \\ \hline Our GCN baseline & ✗ & ✗ & ✓ & **86.08** \\ \end{tabular} \end{table} TABLE 1: Comparison of our baseline GCN against related work on FPHA. video, as a sequence of skeletons, is modeled with a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) whose given node \(v_{j}\in\mathcal{V}\) corresponds to the \(j\)-th hand-joint trajectory (denoted as \(\{\hat{p}^{t}_{j}\}_{t}\)) and edge \((v_{j},v_{i})\in\mathcal{E}\) exists iff the \(j\)-th and the \(i\)-th trajectories are spatially neighbors. 
Each trajectory in \(\mathcal{G}\) is described using _temporal chunking_[3]: this is obtained by first splitting the total duration of a video sequence into \(M\) equally-sized temporal chunks (\(M=32\) in practice), and assigning trajectory coordinates \(\{\hat{p}^{t}_{j}\}_{t}\) to the \(M\) chunks (depending on their time stamps), and then concatenating the averages of these chunks in order to produce the raw description (signal) of \(v_{j}\). **Implementation details and baseline GCN.** Our GCNs are trained end-to-end using Adam [1] for 2,700 epochs with a momentum of \(0.9\), batch size of \(600\) and a global learning rate (denoted as \(\nu(t)\)) set depending on the change of the loss in Eq. 4; when the latter increases (resp. decreases), \(\nu(t)\) decreases as \(\nu(t)\leftarrow\nu(t-1)\times 0.99\) (resp. increases as \(\nu(t)\leftarrow\nu(t-1)/0.99\)). The mixing parameter \(\eta\) in Eq. 4 is set to \(1\) and \(\lambda\) is slightly overestimated to \(10\) in order to guarantee the implementation of the targeted pruning rates. All these experiments are run on a GeForce GTX 1070 GPU (with 8 GB memory) and classification performances -- as average accuracy through action classes -- are evaluated using the protocol in [2] with 600 action sequences for training and 575 for testing. The architecture of our baseline GCN (taken from [3]) consists of an attention layer of 16 heads applied to skeleton graphs whose nodes are encoded with 32-channels, followed by a convolutional layer of 128 filters, and a dense fully connected layer. This initial network is relatively heavy (for a GCN); its includes 2 million parameters and it is accurate compared to the related work on the FPHA benchmark, as shown in Table 1. Considering this GCN baseline, our goal is to make it lightweight while maintaining its high accuracy as much as possible. 
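As an illustration of the _temporal chunking_ descriptor described above (a minimal sketch assuming \((T,3)\)-shaped joint trajectories; not the authors' code):

```python
import numpy as np

def temporal_chunking(trajectory, M=32):
    """Raw node descriptor: split a joint trajectory into M equal temporal
    chunks, average each chunk, and concatenate the averages.

    trajectory: (T, 3) array of 3D joint coordinates over T frames (assumed shape)
    Returns a vector of length 3*M.
    """
    T = trajectory.shape[0]
    # Assign each frame to one of the M chunks according to its time stamp
    chunk_ids = (np.arange(T) * M) // T
    chunks = [trajectory[chunk_ids == m].mean(axis=0) if np.any(chunk_ids == m)
              else np.zeros(trajectory.shape[1]) for m in range(M)]
    return np.concatenate(chunks)

traj = np.random.randn(120, 3)        # e.g., a 120-frame hand-joint trajectory
print(temporal_chunking(traj).shape)  # (96,)
```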
**Impact of TC on Lightweight GCNs.** We study the impact of Topological Consistency (TC) on the performances of our lightweight GCNs for different pruning rates. Table 2 shows the positive impact of TC, especially on highly pruned networks. This impact is less important (and sometimes negative) in low pruning regimes, as the resulting networks have enough (a large number of) Accessible and Co-accessible (AC) connections, so having a few of these connections neither accessible nor co-accessible, i.e. removed, produces a well-known regularization effect [47] that enhances performances. In contrast, with high pruning rates and without TC, pruning leads to over-regularized and very disconnected lightweight networks that suffer from under-fitting. With TC, both accessibility and co-accessibility are guaranteed even in very high pruning regimes; this also attenuates under-fitting and ultimately improves generalization, as again shown in Table 2.

TABLE 2: Impact of Topological Consistency (TC) on the classification performances of our lightweight GCNs at different pruning rates.

## 6 Conclusion

We introduce in this paper a novel lightweight network design based on magnitude pruning. The particularity of the method resides in its ability to select subnetworks with _only_ accessible and co-accessible connections. The latter make the learned lightweight subnetworks topologically consistent and more accurate, particularly at very high pruning regimes. The proposed approach relies on two supervisory networks that implement accessibility and co-accessibility and are trained simultaneously with the lightweight networks using a novel loss function. Extensive experiments, involving graph convolutional networks, on the challenging task of skeleton-based recognition show the substantial gain of our method. Future work will investigate the integration of this framework into different architectures and tasks involving extremely high pruning regimes.
2309.07106
Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks
RGB-D object recognition systems improve their predictive performances by fusing color and depth information, outperforming neural network architectures that rely solely on colors. While RGB-D systems are expected to be more robust to adversarial examples than RGB-only systems, they have also been proven to be highly vulnerable. Their robustness is similar even when the adversarial examples are generated by altering only the original images' colors. Different works highlighted the vulnerability of RGB-D systems; however, there is a lacking of technical explanations for this weakness. Hence, in our work, we bridge this gap by investigating the learned deep representation of RGB-D systems, discovering that color features make the function learned by the network more complex and, thus, more sensitive to small perturbations. To mitigate this problem, we propose a defense based on a detection mechanism that makes RGB-D systems more robust against adversarial examples. We empirically show that this defense improves the performances of RGB-D systems against adversarial examples even when they are computed ad-hoc to circumvent this detection mechanism, and that is also more effective than adversarial training.
Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli
2023-09-13T17:25:52Z
http://arxiv.org/abs/2309.07106v1
# Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks ###### Abstract RGB-D object recognition systems improve their predictive performances by fusing color and depth information, outperforming neural network architectures that rely solely on colors. While RGB-D systems are expected to be more robust to adversarial examples than RGB-only systems, they have also been proven to be highly vulnerable. Their robustness is similar even when the adversarial examples are generated by altering only the original images' colors. Different works highlighted the vulnerability of RGB-D systems; however, there is a lacking of technical explanations for this weakness. Hence, in our work, we bridge this gap by investigating the learned deep representation of RGB-D systems, discovering that color features make the function learned by the network more complex and, thus, more sensitive to small perturbations. To mitigate this problem, we propose a defense based on a detection mechanism that makes RGB-D systems more robust against adversarial examples. We empirically show that this defense improves the performances of RGB-D systems against adversarial examples even when they are computed ad-hoc to circumvent this detection mechanism, and that is also more effective than adversarial training. keywords: adversarial machine learning, RGB-D, object recognition system, adversarial examples, adversarial patch, detector + Footnote †: journal: ## 1 Introduction Object classification systems are machine learning models that classify objects depicted inside input photos. The acquisition of pictures destroys the information about the depth since images are projections of 3D objects on a flat 2D rectangular surface, hence losing meaningful information in the process. To overcome this loss, RGB-D systems fuse the information acquired through regular RGB cameras with the depth information retrieved with specific sensors and techniques. Such multi-modality is more reliable than the information provided by color alone and enables more accurate classification thanks to the additional knowledge retrieved from the spatial properties of the objects [1]. Even if it might be reasonable to think that the addition of the depth information could lead to a more robust system, previous work [2; 3; 4] have shown that RGB-D models are vulnerable, as well as RGB systems, against _adversarial examples_[5; 6]: minimally-perturbed samples that cause the target model to misbehave at test time. In particular, they highlighted that attackers that can manipulate both RGB and depth features have the complete control over the detection capability of the target system. However, even if these attacks were successful, it is difficult to understand is such vulnerability is principally caused by one family of features alone. For instance, Yu et al. [4] indicate that different strategies are more or less effective on the RGB or depth, alternatively, without drawing any conclusion on the matter. Also, Geirhos et al. [2] study the efficacy of spatial and RGB features, concluding that shape and depth can help machine learning models to increase their predictive accuracy and that both are subject to attacks, without investigating the latter. Thus, it is established that the fusion of both RGB and depth information grants machine learning models the ability to better recognize objects, but such a discussion is completely missing regarding the cause of their weaknesses. 
In this work, we bridge this gap by analyzing why adversarial attacks against RGB-D systems are effective. To evaluate their performances in realistic conditions, we assess their robustness also against _adversarial patches_, contiguous chunks of pixel values optimized to produce misclassifications, which can be easily applied physically on images and objects as printed stickers [7]. Although RGB-D systems consider both the colors and the depth, an attacker can easily subvert their performances by optimizing attacks targeting the sole RGB layer. To explain this phenomenon, we investigate how the internal layers of neural networks transform data during the forward pass. To this end, we compute pair-wise distances between each layer of models trained with RGB or depth information using the Centered Kernel Alignment (CKA), and we highlight that RGB induces higher variability than the depth channel. On the other hand, the internal representations learned at training time using only the depth information are similar considering the pair-wise distances of the layers, producing a smoother decision function that is more difficult to exploit by adversarial attacks. Then, we show how a defense based on detection can be used to reduce the vulnerability of RGB-D systems. Each input sample is processed to obtain its RGB-D representation and compared with the predicted class's centroid. The input is classified as an adversarial example if these two differ by more than a threshold. We show that our defense effectively increases the robustness of the victim model not only against adversarial examples unaware of the defense in place but also against adaptive attacks aware of the detection mechanism and designed to overcome it. Lastly, we test the efficacy of the detector compared to an adversarially-trained model [8], which bases its robustness on the inclusion of adversarial examples at training time. Our results suggest that our detector better handles the presence of attacks, keeping good performance against attackers with increasing strength. Thus, the main contributions of this work can be summarized as follows:

* we empirically assess the performance of a state-of-the-art object recognition system based on both RGB and depth features, by considering different neural networks as backbones for the considered fusion model;
* we explain why RGB features are less robust than depth features, by measuring the variability learned at training time by each internal layer of the analyzed model;
* we develop a defense that detects out-of-distribution samples, and we compare its performance with adversarial training, the only defense proposed to secure RGB-D systems, showing that our methodology achieves better robustness.

The rest of the paper is organized as follows. We first introduce the background concepts needed to understand RGB-D systems and the threats posed by adversarial examples (Section 2). We continue by discussing a methodology to interpret and understand the robustness of RGB-D models (Section 3), and we explain how these systems can be defended (Section 4). We follow by discussing our empirical findings (Section 5.2). We conclude our paper by offering an overview of the related work (Section 6) and by discussing limitations and future work of our study (Section 7).

## 2 Background

In the following section, we introduce the main concepts we will leverage in this work.
We start by describing how RGB-D object recognition systems function (Section 2.1), and we discuss the threats posed by adversarial examples and adversarial patches (Section 2.2).

Figure 1: Adversarial examples detection framework architecture. It consists of RCFusion and the detector. RCFusion consists of two streams of CNN (e.g. ResNet-18) employed to extract RGB and depth features at multiple levels of abstraction. The outputs of corresponding hidden layers are projected into a common space, concatenated, and sequentially fed to an RNN to obtain a compact RGB-D feature used by a classifier for the final classification. The detector will reject (accept) the input sample \(\mathbf{x}\) if \(\mathcal{E}(\mathbf{x})\) is greater than (less than or equal to) the rejection threshold \(\beta\).

### RGB-D Object Recognition Systems

The idea of combining both colored RGB information and depth was introduced in the literature by Socher et al. [9], where the authors create a classifier of RGB-D images that employs a CNN and an RNN to obtain the deep features, which are fed to an SVM to produce the final classification. Instead, Eitel et al. [10] fuse RGB and depth features before computing the classification task by combining the information of the two streams in the pre-last layer of the neural network. In this work, we consider the state-of-the-art architecture proposed in [1] for RGB-D object recognition, called recurrent convolutional fusion network (RCFusion). As shown in Fig. 1, RCFusion was designed by using two streams of convolutional networks (CNN), with the same architecture as ResNet-18 [11] and pre-trained on ImageNet, to extract RGB and depth features at different levels of the networks. While the RGB information does not require any particular pre-processing, depth information is not used "as-is" as a scalar number acquired by a sensor. It is post-processed to produce a colorized image. Each pixel value of this image represents not a color but the normal of the surface acquired by the sensor. The outputs of the corresponding hidden layers (the first layer of the network trained on the RGB images with the first layer of the network trained on the depth images, the second with the second, and so on) are then projected into a common space, concatenated, and sequentially fed to a recurrent neural network (RNN) to obtain a compact RGB-D feature that is used by a classifier for the final classification. Let ResNet18-R and ResNet18-D represent the CNNs for extracting RGB and depth features, and let the outputs of the \(i\)-th layer of ResNet18-R and ResNet18-D be \(\mathbf{R}_{i}^{rgb}\) and \(\mathbf{R}_{i}^{depth}\), with \(i=1,2,\ldots,M\), where \(M\) is the total number of layers of ResNet-18. Given that the dimension of the features obtained from different hidden layers of the same network is different, we apply the projection block \(P(\cdot)\) proposed in [1] to transform a volumetric input into a vector of a common dimension. Let the transformed RGB and depth features of the \(i\)-th layer be denoted by \(\mathbf{T}_{i}^{rgb}\) and \(\mathbf{T}_{i}^{depth}\), i.e. \(\mathbf{T}_{i}^{rgb}=P(\mathbf{R}_{i}^{rgb})\) and \(\mathbf{T}_{i}^{depth}=P(\mathbf{R}_{i}^{depth})\), and then concatenate the transformed RGB and depth features of the \(i\)-th layer to form \(\mathbf{S}_{i}=[\mathbf{T}_{i}^{rgb};\mathbf{T}_{i}^{depth}]\). To create a compact multi-modal representation, we sequentially feed the set \(\{\mathbf{S}_{1},\mathbf{S}_{2},\ldots,\mathbf{S}_{M}\}\) to an RNN.
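To make the fusion pipeline concrete, the following PyTorch-style sketch (a schematic illustration; the module names, flattened per-layer feature shapes and linear projections are simplifying assumptions of ours, while the projection depth 256, the \(a=100\) recurrent units and the 51 classes follow the setup reported later in the paper) projects the per-layer RGB and depth features, concatenates them, and feeds the resulting sequence to a GRU followed by a dense classification layer.

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Schematic RCFusion-style fusion: per-layer projections -> concat -> GRU -> classifier."""
    def __init__(self, layer_dims, proj_dim=256, rnn_dim=100, num_classes=51):
        super().__init__()
        # One projection per hidden layer and per modality (flattened features assumed)
        self.proj_rgb = nn.ModuleList([nn.Linear(d, proj_dim) for d in layer_dims])
        self.proj_depth = nn.ModuleList([nn.Linear(d, proj_dim) for d in layer_dims])
        self.rnn = nn.GRU(2 * proj_dim, rnn_dim, batch_first=True)
        self.classifier = nn.Linear(rnn_dim, num_classes)

    def forward(self, rgb_feats, depth_feats):
        # rgb_feats[i], depth_feats[i]: (batch, layer_dims[i]) features of the i-th layer
        seq = [torch.cat([pr(r), pd(d)], dim=-1)
               for pr, pd, r, d in zip(self.proj_rgb, self.proj_depth, rgb_feats, depth_feats)]
        seq = torch.stack(seq, dim=1)   # (batch, M, 2*proj_dim)
        _, h = self.rnn(seq)            # h: (1, batch, rnn_dim), the compact RGB-D feature
        return self.classifier(h[-1])   # (batch, num_classes)

# Toy usage with three hypothetical per-layer feature sizes
dims = [64, 128, 256]
model = FusionSketch(dims)
rgb = [torch.randn(2, d) for d in dims]
depth = [torch.randn(2, d) for d in dims]
print(model(rgb, depth).shape)          # torch.Size([2, 51])
```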
There are two ways to instantiate RNN, as presented in [1], e.g., Gated Recurrent Unit (GRU), and Long-Short Term Memory (LSTM). Since the performance of GRUs and LSTMs are comparable, and GRUs have fewer parameters than LSTMs, applying a GRU layer to handle multimodal features is therefore convenient. Then a dense layer is combined with RNN to predict the final label. ### Adversarial Examples and Patches While machine learning technologies are currently wide-spreading across many different domains, we are witnessing a rapid growth of studies proving their weaknesses against multiple and rapid-evolving threats at training [12; 13] and at test time [14]. Test time attacks formalize the presence of attackers that can compute _adversarial examples_\(\boldsymbol{\delta}^{\star}\), carefully-crafted perturbations applied to input samples designed to have them misclassified by the target model as the attacker desires [5; 15]. For example, to have a malicious application misclassified as a legitimate one. Adversarial examples are the result of an optimization problem formulated as follows: \[\boldsymbol{\delta}^{\star}=\operatorname*{arg\,min}_{\|\boldsymbol{\delta} \|_{p}\leq\epsilon}\mathcal{L}(\boldsymbol{x}+\boldsymbol{\delta},y; \boldsymbol{\theta}) \tag{1}\] where: \[\mathcal{L}=s_{y}(\boldsymbol{x}+\boldsymbol{\delta})-\max_{j\notin\{y\}}s_{j }(\boldsymbol{x}+\boldsymbol{\delta}) \tag{2}\] \(\boldsymbol{x}\) is an input sample, \(y\in\mathcal{Y}=\{1,\ldots,c\}\) is the true label of \(\boldsymbol{x}\), \(\boldsymbol{\theta}\) corresponds to the parameters of the target model, \(\boldsymbol{\delta}\) is the adversarial perturbation, and \(s_{j}(\boldsymbol{x}+\boldsymbol{\delta})\), \(j\in\{1,\ldots,c\}\) is the \(j\)-th output predictions score of the target model on the adversarial sample \(\boldsymbol{x}+\boldsymbol{\delta}\). The constraint \(\|\)\(\boldsymbol{\delta}\)\(\|_{p}\)\(\leq\)\(\epsilon\) is an \(\ell_{p}\)-norm constraint imposed to preserve stealthiness of the attack [16]. Typical norms used for crafting adversarial examples are \(\ell_{1}\), \(\ell_{2}\), and \(\ell_{\infty}\), for which efficient projection algorithms exist [17]. However, since these manipulations are applied to all the pixels of an image, it is impossible to replicate them in a real-life scenario where a camera is looking at a scene. To accomplish an attack in the described way, the attacker would need to either directly act on the camera sensor or tamper with the images before they are sent to the machine learning model. A more realistic threat model to image classifiers is posed by adversarial patches: contiguous chunks of pixel values optimized to steer the decision toward a class decided by the attacker. These patches can be physically printed and placed on objects acquired by the camera [7]. The creation of adversarial patches amounts to solving an optimization problem similar to Eq. 1, described as follows: \[\mathbf{\delta}^{\star}=\underset{\|\mathbf{\delta}\|_{p}\leq\epsilon}{\arg\min}\,\mathbb{ E}_{\mathbf{A}\sim\mathcal{T}}\mathcal{L}(\mathbf{x}\oplus\mathbf{A}\mathbf{\delta}),y;\mathbf{ \theta}), \tag{3}\] where the adversarial patch \(\mathbf{\delta}\) is applied to the input image \(\mathbf{x}\) with random affine transformations \(\mathbf{A}\) drawn from \(\mathcal{T}\). 
The operator '\(\oplus\)' is defined as: \(\mathbf{x}\oplus\mathbf{A}\mathbf{\delta}=(\mathbf{1}-\mathbf{\mu})\circ\mathbf{x}+\mathbf{\mu} \circ\mathbf{A}\mathbf{\delta}\), where the operator '\(\circ\)' means element-wise vector multiplication, \(\mathbf{\mu}\) is a mask with the same size of the input data \(\mathbf{x}\), and its components are ones where the patch should be applied and zeros elsewhere [18]. Eq. 3 can be minimized by the Algorithm 1 to create a perturbation that is still effective regardless of its position, rotation, and scale inside the image, hence mimicking acquisitions of the scene containing the image and the patch through a camera. In this paper, we leverage a simplified version of Eq. 3, where we do not apply affine transformations to the patch but compute a single adversarial patch for each sample used at test time during the attack. This is the worst-case scenario for the defender, as each patch is optimized specifically for the image to which it is applied and is thus obviously more effective than a single patch computed to work on multiple images. Lastly, we only consider patch attacks that target the RGB channel of the input samples since modifying the depth information would require the attacker to possess the capability of assembling physical objects to mimic the adversarial perturbation, which would make them difficult to apply in real-world scenarios and costly to generate. The 3D printers that have high color precision (needed to allow the attacker to manipulate the colors) are, in fact, still quite expensive. ## 3 The Robustness of RGB-D Object Recognition The high vulnerability of RGB-D systems to adversarial perturbations due to the presence of RGB features had already been noted for object detectors in [19]. However, as far as we know, no one has explained its underlying reasons. To bridge this gap, we analyze the internal structure of RGB-D models by computing the similarity between layers. We conjecture that models with high similarities between their hidden layers learn simpler decision functions; therefore, they tend to be more robust against adversarial manipulation of input data. Conversely, when hidden layers are dissimilar, the underlying decision function is more complicated, creating holes in the decision space where adversarial examples lie. In the following, we revisit the Centered Kernel Alignment (CKA) [20] as the similarity measure we use in this paper to determine the similarity between the hidden layers of neural networks. We then exploit the CKA similarity matrices to explain why these systems are more vulnerable to perturbation on the RGB features of the input rather than on the depth descriptor. **HSIC.** Before delving into the details of CKA, we introduce the Hilbert-Schmidt Independence Criterion (HSIC), subsequently used for computing the CKA measure. Introduced by Gretton et al. [21], HSIC is a useful method for testing if two random variables are independent. Formally, suppose \(\mathbf{X}\in\mathbb{R}^{m\times p_{1}}\) and \(\mathbf{Z}\in\mathbb{R}^{m\times p_{2}}\) are the output features of the two hidden layers, having respectively \(p_{1}\) and \(p_{2}\) neurons, for \(m\) input samples. We then denote with \(\mathbf{x}_{i}\), \(\mathbf{x}_{j}\) (\(\mathbf{z}_{i}\), \(\mathbf{z}_{j}\)) the \(i\)-th and \(j\)-th entries in matrix \(\mathbf{X}\) (\(\mathbf{Z}\)), respectively representing the features representation for the \(i\)-th and \(j\)-th samples. 
We finally define with \(\mathbf{K}_{X}=\{K_{X}^{ij}\}_{i,j}\) and \(\mathbf{K}_{Z}=\{K_{Z}^{ij}\}_{i,j}\) the symmetric kernel matrices used to evaluate the similarities of features abstracted from the two layers with \(p_{1}\) and \(p_{2}\) neurons separately. For computing the two matrices, we used two distinct kernels: a linear kernel function, where \(K_{X}^{ij}=\mathbf{x}_{i}\mathbf{x}_{j}\,^{\mathrm{T}}\), \(K_{Z}^{ij}=\mathbf{z}_{i}\mathbf{z}_{j}\,^{\mathrm{T}}\); and the Radial Basis Function (RBF), where \(K_{X}^{ij}=\exp(-\frac{\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}}{2\sigma^{2}})\), \(K_{Z}^{ij}=\exp(-\frac{\|\mathbf{z}_{i}-\mathbf{z}_{i}\|^{2}}{2\sigma^{2}})\) and \(\sigma\) is chosen as a fraction of the median distance between features. Obviously, these two kernel functions satisfy \(K^{ij}=K^{ji}\). Based on the empirical estimator of HSIC([21], Definition 2), we can obtain Eq. (4): \[\text{HSIC}(\mathbf{X},\mathbf{Z})=\frac{1}{(n-1)^{2}}\mathbf{tr}(\mathbf{K}_{X}\mathbf{H}\mathbf{ K}_{Z}\mathbf{H}), \tag{4}\] where centring matrix \(\mathbf{H}=\mathbf{I}_{n}-\frac{1}{n}\mathbf{1}\mathbf{1}^{\mathrm{T}}\), \(\mathbf{I}_{n}\) is the identity matrix of size \(n\times n\), and \(\mathbf{1}\) is \(n\times 1\) vector of all ones. **CKA.** HSIC is invariant to orthogonal transformations of the representations and, by extension, to permutation of neurons, but it is not invariant to scaling of the original representations. CKA [20] further normalizes HSIC to produce a similarity index between 0 and 1 that is invariant to isotropic scaling. Formally, the CKA similarity between two matrices \(\mathbf{X}\) and \(\mathbf{Z}\) is defined as: \[\mathrm{CKA}(\mathbf{X},\mathbf{Z})=\frac{\mathrm{HSIC}(\mathbf{X},\mathbf{Z})}{\sqrt{\mathrm{ HSIC}(\mathbf{X},\mathbf{X})\mathrm{HSIC}(\mathbf{Z},\mathbf{Z})}}. \tag{5}\] For the Eq. (5), it is not difficult to see that \(\mathrm{CKA}(\mathbf{X},\mathbf{Z})=1\) when \(\mathbf{X}=\mathbf{Z}\), namely, \(\mathbf{X}\) and \(\mathbf{Z}\) are the feature representation from the same layer. Besides, \(\mathrm{CKA}(\mathbf{X},\mathbf{Z})=0\) when \(\mathrm{HSIC}(\mathbf{X},\mathbf{Z})=0\), this means that \(\mathbf{X}\) and \(\mathbf{Z}\) are independent of each other. ## 4 Defending RGB-D Classifiers In the above section, we presented our methodology for inspecting which are the vulnerable component defining an RGB-D object recognition system. We here present our defensive mechanism and how we assess its robustness against adaptive attacks, i.e., attacks specifically designed to target a given defense. In the following, we denote with \(\mathbf{x}\) and \(\tilde{\mathbf{x}}\) two generic samples taken respectively from the test and training set. We then denote the output predictions score of RCFusion, trained on the RGB and depth features, with \(\mathcal{S}(\mathbf{x})=[s_{1}(\mathbf{x}),\ldots,s_{c}(\mathbf{x})]\in\mathbb{R}^{1\times c}\), where \(s_{i}(\mathbf{x})\geq 0\) and \(\sum_{i=1}^{c}s_{i}(\mathbf{x})=1\). A complete summary of the notation and symbols used throughout the paper is reported in Table 2 (see A). ### Reject-based Detection The only defense [8] that has been proposed to secure RGBD-base systems is based on adversarial training [22]. 
Defenses based on adversarial training present two problems: (i) they increase the margin between the classes; however, if the perturbation the attacker can inject is slightly larger than the margin, they are ineffective; (ii) they require generating many adversarial examples during training, which is computationally demanding, as the generation of each adversarial example requires multiple forward and backward passes (and this should be done for all the samples of the training set, or at least for the subset considered). Therefore, we propose a defense based on a detector. The underlying idea of our defensive method is to estimate the distribution of unperturbed training points at different network layers and reject anomalous samples that may be incurred at test time, including adversarial examples. Specifically, our defensive mechanism rejects out-of-distribution samples at test time by looking at their RGB and depth information; as a sample moves away from the class centroids, the classifier support decreases to zero. Thus, this defense can detect adversarial examples that are highly perturbed and does not require generating adversarial examples at training time. The operations required at training time are: (i) computing, for each class, its centroid in RGB-D space; (ii) finding the rejection threshold, which requires computing some distances in RGB-D space. Both these operations can be performed by computing just once (and thus with a single forward pass) the RGB-D features of the training samples (or of the subset considered). Therefore, the proposed approach is rather more efficient at training time than adversarial training. At test time, neither approach requires expensive operations: adversarial training does not require any operation other than the standard classification, while our defense requires computing the distance in RGB-D space between the considered samples and the centroid of the predicted class (whose RGB-D features have already been computed and stored at training time).

Figure 2: Visual representation of an evasion attack on a 3-class bi-dimensional classification problem: left without defense, right with the reject-based defense. Blue dotted lines for final depth features, black solid lines for final RGB-D features, red dots for centroids, and black dots for rejected samples. The rejection threshold is shown as a black dotted circle. Green hexagon for the initial sample and a blue star for the adversarial sample. The defense correctly rejects the adversarial sample, while without defense it was wrongly classified as belonging to the blue class.

The architecture of our defense mechanism is depicted in Fig. 1, which assumes the defender has an already-trained classifier to be protected against adversarial examples. For our defense mechanism to work, we compute the centroid of the final RGB-D features for each class, as shown in Fig. 2, and then reject anomalous samples whose RGB-D representation is far from the centroid. Without loss of generality, our approach uses the \(\ell_{2}\) distance between the RGB-D features of the class centroid and the RGB-D features of the input sample. Formally, we denote the final RGB-D features of RCFusion trained on the RGB and depth parts with \(\mathcal{R}(\mathbf{x})=[r_{1}(\mathbf{x}),\ldots,r_{a}(\mathbf{x})]\in\mathbb{R}^{1\times a}\), where \(a=100\) is the dimensionality of the output features of the RNN layer.
For each class \(\gamma\) in the training set, we then compute its corresponding centroid \(\mathcal{C}_{\gamma}\) with respect to their RBG-D feature as: \[\mathcal{C}_{\gamma}=\frac{1}{n_{\gamma}}\sum_{k=1}^{n_{\gamma}}\mathcal{R}( \tilde{\mathbf{x}}_{k}^{\gamma}) \tag{6}\] where \(n_{\gamma}\) is the number of samples that belonging to class-\(\gamma\), \(\tilde{\mathbf{x}}_{k}^{\gamma}\) is the \(k\)-th sample which from the class-\(\gamma\) of the training set. We then define _anomaly score_\(\mathcal{E}\) for the test sample \(\mathbf{x}\) as: \[\mathcal{E}(\mathbf{x})=\parallel\mathcal{R}(\mathbf{x})-\mathcal{C}_{\gamma}(\mathbf{x}) \parallel_{2} \tag{7}\] being \(\gamma=\arg\,\max_{\gamma}\mathcal{S}(\mathbf{x})\in[1,c]\) the predicted label of RCFusion trained on the RGB and depth parts. Finally, the detector will reject samples if \(\mathcal{E}(\mathbf{x})\) is greater than the rejection threshold \(\beta\), whose optimal value can be found with the Algorithm 2. According to this rule, we define the output predictions scores of RCFusion with the detector as: \(\mathcal{S}^{\prime}(\mathbf{x})=[(1-s_{c+1})s_{1}(\mathbf{x}),\ldots,(1-s_{c+1})s_{c} (\mathbf{x}),s_{c+1}(\mathbf{x})]\in\mathbb{R}^{1\times(c+1)}\) where the rejection class \(c+1\) is defined as follows: \[s_{c+1}(\mathbf{x})=\begin{cases}1,&\text{ }if\ \mathcal{E}(\mathbf{x})>\beta\\ 0,&\text{ }if\ \mathcal{E}(\mathbf{x})\leq\beta\end{cases}. \tag{8}\] The test samples are then assigned to the class for which the value of \(\mathcal{S}^{\prime}(\mathbf{x})\) is higher. A test sample \(\mathbf{x}\) is thus assigned to the rejection class \(c+1\) when \(\mathcal{E}(\mathbf{x})>\beta\); otherwise, it is assigned to the class with the highest likelihood in the softmax output. ### Attacking the Defended System When a defense is based on a detector to reject the adversarial examples, a defense-unaware attack may craft adversarial examples belonging to rejection regions, making it very difficult to evade such defense (Fig. 2) [23]. To perform a fair robustness evaluation of the proposed defense method, an adaptive defense-aware attack is required. Therefore, we formulate an adaptive white-box attack suitable for assessing the adversarial robustness of the proposed rejection-based. Given a sample \(\mathbf{x}\), the attacker can optimize a maximum-allowed \(\epsilon\)-sized adversarial perturbation obtaining the defense-aware adversarial perturba tion \(\mathbf{\delta}^{\star}\), by solving the following constrained optimization problem: \[\mathbf{\delta}^{\star}=\underset{\|\mathbf{\delta}\|_{p}\leq\epsilon}{\arg\min}\,\mathcal{ L}_{d}(\mathbf{x}+\mathbf{\delta},y;\mathbf{\theta}) \tag{9}\] where \(\|\ \mathbf{\delta}\ \|_{p}\leq\epsilon\) is an \(\ell_{p}\)-norm constraint. The formulation of the adaptive attacks is similar to the one seen in Eq. (1), with the only difference that now the target loss \(\mathcal{L}_{d}\) takes into consideration also the detector defense the attacker aims to evade. Formally, we define \(\mathcal{L}_{d}\) as follows: \[\mathcal{L}_{d}=s_{y}(\mathbf{x}+\mathbf{\delta})-\max_{j\notin\{y,c+1\}}s_{j}(\mathbf{x}+ \mathbf{\delta}) \tag{10}\] where \(c+1\) is the rejection class. Compared to Eq. (2), the attacker enforces not only that the class predicted for the adversarial example does not match the true label but also that it does not match the rejection class. 
In the context of image classification, the solution of the minimization problem above produces a perturbation that, applied to the pixel values of the input image, forces the target model to assign the sample to a class that is different from the true class. To achieve this error-generic (untargeted) evasion, the attacker should minimize the output of the true class and maximize the output of one competing class (excluding the reject class). It is worth noting that this algorithm performs a strong maximum-confidence evasion attack (rather than searching for a minimum-distance adversarial example). While in this work we focus only on _untargeted_ attacks, the proposed formulation can also be easily extended to account for error-specific (targeted) evasion. Note that a _targeted_ attack requires the model to misclassify the sample into a class decided a priori by the attacker, which can be written similarly to Eq. (9) by using the target label \(y_{t}\) instead of \(y\) and inverting the sign of the loss function [24]. Moreover, to solve the optimization problem above, given that Eq. (8) is a step function and thus non-differentiable, we apply: \[s_{c+1}(\mathbf{x})=\frac{1}{1+\exp\left(-\lambda(\mathcal{E}(\mathbf{x})-\beta) \right)} \tag{11}\] to implement the loss function exploited by the attacker to compute the adversarial examples in our experiments. ## 5 Experimental Analysis In our experimental analysis, we consider two different RGB-D datasets to perform multi-modal computer vision classification tasks. Our analysis has three fundamental objectives: (i) investigating the robustness of RCFusion to detect which features are the most vulnerable; (ii) interpreting the previous results by inspecting the similarity between hidden layers of RCFusion; and (iii) testing the robustness of the proposed defense against defense-unaware and adaptive attackers. In the following, we define the experimental setup adopted in our empirical analysis to foster the reproducibility of our results, and we then present and analyze our findings. ### Experimental Setup **Datasets.** We conduct our experiments on two datasets, i.e., the RGB-D Object Dataset [25] and OCID [26], whose data dimensionality and number of classes differ, thus making our setup more heterogeneous and challenging. The _RGB-D Object Dataset_ [25]1 contains 300 common household objects taken from multiple views and organized into 51 categories, for a total of 207,920 RGB-D images. It was acquired with a Kinect-style 3D camera that records synchronized and aligned \(640\times 480\) RGB and depth images at 30 Hz. Due to the massive dataset size, we subsampled it by extracting only every fifth frame, thus obtaining \(41,877\) RGB-D images. We run our experiments on ten cross-validation splits: one object instance per class is used for testing, and training is performed on the remaining \(249\) \((300-51)\) instances, where each split consists of roughly 35,000 training images and 7,000 test images. Footnote 1: [http://rgbd-dataset.cs.washington.edu/](http://rgbd-dataset.cs.washington.edu/) The _Object Clutter Indoor Dataset (OCID)_ [26]2 comprises 96 fully built-up cluttered scenes representing common objects organized in three subsets: ARID20, ARID10, and YCB10. The ARID20 and ARID10 subsets include cluttered scenes with up to 20 and 10 objects from the Autonomous Robot Indoor Dataset (ARID), respectively, where the ARID20 (ARID10) subset includes \(3,180\) \((2,499)\) RGB-D images.
Moreover, the YCB10 subset includes cluttered scenes with up to 10 objects from the YCB objects. The data capture diverse settings of objects, backgrounds, context, sensor-to-scene distance, viewpoint angle, and lighting conditions. In our experiments, we have chosen ARID20 (ARID10) as the training (testing) set. **Preprocessing.** To obtain the colorized depth images, we first normalize the original depth and then proceed to the colorization by adopting the method based on surface normals [27; 28]. The resulting representation focuses on capturing structural information (e.g., object shapes, surface properties, and relative orientations) while being invariant to the distance to the camera or to the total depth range [27]. For the preprocessing procedure, we convert RGB and colorized depth images from the RGB to the BGR space. We then resize the BGR images to \(256\times 256\), subtract the mean values 3 provided by Mohammad et al. [28], and apply a further resize to shrink the images to the input size of the considered model, which is \(224\times 224\). For the inverse preprocessing, we resize the preprocessed images to \(256\times 256\), add the mean values back, convert the images from the BGR to the RGB space, and resize them to \(224\times 224\). The results of preprocessing and inverse preprocessing are shown in Fig. 3. Footnote 3: [https://data.acin.tuwien.ac.at/index.php/s/RueHQUs2JtoHeJ](https://data.acin.tuwien.ac.at/index.php/s/RueHQUs2JtoHeJ) **Classifiers.** We train the model using the RMSprop optimizer with batch size 64, learning rate 0.0001, momentum 0.9, weight decay 0.0002, projection depth 256, and number of memory neurons \(a=100\) [1]. We report in Table 1 the performance of the trained models on different feature sets (RGB, depth, and RGB-D) for the RGB-D Object Dataset and OCID. Figure 3: Samples in the preprocessing and inverse-preprocessing spaces, where the inverse-preprocessing (preprocessing) space is highlighted in blue (orange). We also include the performance of another deep neural architecture, AlexNet [29], a convolutional neural network originally trained on ImageNet [30]. We use AlexNet as another backbone network for RCFusion alongside ResNet-18. We further use the pre-trained ResNet-18 to classify the RGB images of the considered datasets and train the model using the RMSprop optimizer, where the batch size, learning rate, momentum, and weight decay are the ones proposed for the original training of RCFusion [1]. We also report the accuracy of ResNet-18 in Table 1. **Adversarial Attack.** To evaluate the adversarial robustness, we test the trained networks with attacks that either jointly or separately target the RGB and depth parts. We leverage the Adversarial Robustness Toolbox (ART)4, from which we select the _untargeted_ \(\ell_{\infty}\)-norm version of Projected Gradient Descent (PGD) and a PGD-based maximum-confidence patch attack. PGD [22] is first used to test the robustness when all the RGB and depth features are perturbed, as shown in Fig. 4. Within this configuration, we perform 100 iterations with a step size of 0.05 for both the RGB-D Object Dataset and the OCID dataset. Furthermore, to mimic a real-world scenario, we leverage the adversarial patch attack against the RGB part, where the maximum perturbation is \(\epsilon=20\), the step size is 1, and we let the patch size vary in \([0,112]\). We provide an example of this attack scheme in Fig. 5.
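A generic re-implementation of the PGD configuration just described is sketched below; it is not the ART invocation used in the experiments, and the function names and tensor handling are assumptions made for illustration. The adaptive (defense-aware) variant is obtained by passing `defense_aware_loss` from the earlier sketch as `attacker_loss`, with the hard rejection indicator replaced by the differentiable surrogate of Eq. (11).

```python
import torch

def pgd_linf(model, x, y, attacker_loss, eps=0.3, step=0.05, iters=100):
    """Untargeted L-infinity PGD on the (preprocessed) input, Eq. (9): the attacker
    MINIMIZES attacker_loss(model(x + delta), y) while staying inside the eps-ball.
    To target only the RGB (or depth) part, zero delta.grad on the other channels."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = attacker_loss(model(x + delta), y)
        loss.backward()
        delta.data = (delta.data - step * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

def soft_rejection_score(anomaly, beta, lam=30.0):
    """Differentiable surrogate of the hard rejection indicator, Eq. (11)."""
    return torch.sigmoid(lam * (anomaly - beta))
```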
Footnote 4: [https://github.com/Trusted-AI/adversarial-robustness-toolbox/](https://github.com/Trusted-AI/adversarial-robustness-toolbox/) **Parameter Setting.** For training RCFusion, we refer to the parameter setting provided by the authors of [1] and, to obtain a fair comparison, we adjust it so that the performance of RCFusion on the RGB-D Object Dataset and OCID is as consistent as possible with the performance presented in [1]. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Input Part} & RGB-D & \multirow{2}{*}{OCID} \\ & & Object Dataset & \\ \hline \multirow{3}{*}{RCFusion on ResNet-18} & RGB-D & 95.04\% & 91.51\% \\ & RGB & 88.70\% & 87.02\% \\ & Depth & 82.03\% & 40.35\% \\ \hline \multirow{3}{*}{RCFusion on AlexNet} & RGB-D & 83.72\% & 65.22\% \\ & RGB & 69.51\% & 59.65\% \\ \cline{1-1} & Depth & 61.48\% & 20.45\% \\ \hline \multirow{2}{*}{ResNet-18} & RGB & 87.65\% & 88.08\% \\ \cline{1-1} & Depth & 80.14\% & 35.05\% \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy of RCFusion on ResNet-18 and AlexNet for object recognition on the RGB-D Object Dataset and OCID. We have fixed the only hyperparameter of our defense, namely the rejection threshold \(\beta\), using Algorithm 2. This algorithm finds the rejection threshold appropriate to obtain the desired FPR (\(r\)), which in our experiments we required to be equal to \(10\%\) on the clean (unperturbed) samples. In the following, we discuss how the parameters of this algorithm should be set to obtain an appropriate threshold. The number of iterations (\(T\)) should be set large enough to allow the search to converge; we set it equal to \(10^{10}\). The step size \(\rho\) should instead be small, because a small variation in the threshold can greatly impact the corresponding FPR; we thus set it equal to \(10^{-5}\). **Performance Metrics.** We denote the original (undefended) performance of ResNet-18/AlexNet with "ResNet-18"/"AlexNet", the original (undefended) performance of RCFusion with "RCFusion", the adversarial-training performance of RCFusion with "RCFusion_AT", the percentage of samples rejected by the detector with "Rejection", and the classifier defended with the proposed detector with "Defended". Figure 4: The scheme we use for modifying the full image to attack the RGB-D part of RCFusion, where the maximum perturbation is \(0.3\), and the inverse-preprocessing and preprocessing spaces are highlighted in blue and orange. Figure 5: The scheme we use for modifying only a portion of the image to attack the RGB part of RCFusion, where the patch size is \(35\times 35\), and the inverse-preprocessing and preprocessing spaces are highlighted in blue and orange, separately. **Security Evaluation.** We compare the object detector and its robust model by considering their security evaluation curves [14; 31], reporting classification accuracy against an increasing \(\ell_{\infty}\)-norm perturbation size \(\epsilon\) used to perturb all the test samples. To set the scaling parameter \(\lambda\) in Eq. (11), we have tried different values and found 30 to be the most appropriate.
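Algorithm 2 itself is not reproduced in this excerpt; one plausible reading of the threshold search described in the Parameter Setting paragraph is sketched below (illustrative only, with a smaller iteration cap than the value reported above).

```python
import numpy as np

def find_rejection_threshold(clean_scores, target_fpr=0.10, rho=1e-5, max_iter=10**7):
    """Raise beta in steps of rho until at most target_fpr of the clean samples
    are rejected (i.e., the detector's false-positive rate reaches the target)."""
    beta = float(np.min(clean_scores))        # start low: everything is rejected
    for _ in range(max_iter):
        if np.mean(clean_scores > beta) <= target_fpr:
            return beta
        beta += rho                           # a larger beta rejects fewer samples
    return beta
```

Equivalently, \(\beta\) can be read off directly as the empirical \((1-r)\)-quantile of the clean anomaly scores, e.g. `np.quantile(clean_scores, 0.90)` for \(r=10\%\); the iterative search above simply mirrors the description in the text.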
There are two cases for calculating the accuracy of the robust detector: i) without attack (i.e., for \(\epsilon=0\)), the accuracy of the robust detector is computed as usual, but counting as errors the clean samples that are classified correctly yet rejected; ii) under attack (i.e., for \(\epsilon>0\)), all the tested samples become adversarial examples, and we consider them correctly classified by the robust detector if they are assigned either to the rejection class or to their true class. We also report the rejection rates, computed by dividing the number of rejected samples by the total number of tested samples. It is worth noting that the difference between the accuracy of the robust detector and the rejection rate at each \(\epsilon>0\) corresponds to the fraction of adversarial examples which are not rejected but still correctly assigned to their true class. ### Experimental Results We now discuss our empirical findings by reporting the robustness of RCFusion against adversarial attacks, analyzing the factors influencing its vulnerability, and assessing the robustness of the proposed rejection-based defense. **Robustness of RCFusion.** We here empirically investigate the robustness of RCFusion as a function of the maximum perturbation \(\epsilon\), following the attack pipeline depicted in Fig. 4. To this end, we apply PGD [22] separately against the RGB and depth features only, and then we apply PGD against the RGB and depth features combined. We report the results against RCFusion in Fig. 6 for the RGB-D Object Dataset and OCID. The attack against both the RGB and depth parts shows that the model is not robust against adversarial manipulations, as its accuracy drops to zero with a small perturbation budget. Interestingly, this result is also achieved by computing PGD against the sole RGB part. On the contrary, when PGD is applied against the depth part only, the attack needs a higher perturbation budget to drop the accuracy to zero. From these results, it emerges that the _performance of the whole RCFusion network is hurt more by the RGB information than by the depth information_. We have carried out some experiments to investigate why, and we will present them later in this section. However, the depth information alone is not a good choice: even if it can be expressive enough to exhibit more robustness than the RGB information, it usually leads to poor accuracy (as previously shown in Table 1). Therefore, training the classifier only on the depth features cannot be considered a solution to obtain a classifier that is more robust to perturbations of the input. The big difference in accuracy is due to the fact that the OCID dataset contains many objects that are distinct but have almost the same shape, like the ones belonging to the classes "ball" and "orange" and to the classes "Kleenex box" and "cereal box", whereas the other dataset contains objects that have different shapes. Figure 6: The robustness evaluation curve is computed by modifying the full image to attack the RGB, depth, and RGB-D parts of RCFusion for the RGB-D Object Dataset (left) and OCID (right), respectively. Figure 7: The robustness evaluation curve is computed by modifying only a portion of the image to attack the RGB part of RCFusion and ResNet-18 for the RGB-D Object Dataset (left) and OCID (right), respectively. Furthermore, we test the robustness of RCFusion, trained on both the RGB and depth features, in real-world scenarios where the attacker can physically tamper with objects.
To this end, we explore the effectiveness of adversarial patches computed to target the RGB part solely, and we compare the resulting robustness with that of ResNet-18. We aim to understand to what extent using a system that also leverages depth features may help with respect to employing a simpler ResNet-18. To do so, we create adversarial patches against RCFusion and ResNet-18 and depict the collected results in Fig. 7, considering both the RGB-D Object and OCID datasets. From the empirical results, we highlight that the influence of the adversarial patches on the robustness of RCFusion and ResNet-18 is very similar, with RCFusion being slightly more accurate. Such a minimal discrepancy might be caused by (i) the additional complexity included in the architecture of RCFusion and (ii) the additional information provided by the depth channel. However, this advantage still decreases as the size of the adversarial patch increases. Therefore, it is easy to conclude that RCFusion is almost as vulnerable to adversarial patches as ResNet-18. _This means that using RCFusion, a more complex system than ResNet-18 that also leverages depth, does not provide relevant advantages in terms of security._ **Interpretation of RCFusion Vulnerability.** In the following, we analyze the internal representations of RCFusion to explain why it is vulnerable to adversarial attacks. To this end, we use the CKA measure, seen in Section 3, to evaluate the similarity of the features abstracted by RCFusion from the RGB and depth parts separately, and we present the CKA similarity heatmaps in Fig. 8. We can see that: (i) the linear and RBF kernels give similar results on the RGB-D Object Dataset and OCID, a conclusion consistent with that presented in [20]; (ii) the heatmaps generated on the depth information tend to show a more distinctive block structure [32] (seen as a yellow square on the heatmap) than those on the RGB information. We conjecture this is because _the information learned by the network trained on depth is more redundant_ (there is less information to learn). Therefore, the trained classifier turns out to be smoother and, thus, more robust to input perturbations. **Robustness of the Proposed Defenses.** We here inspect the robustness results offered by our rejection-based defensive method. The results are reported in Figs. 9-13. Firstly, we apply the rejection mechanism to a simple ResNet-18 instead of RCFusion. We report the results obtained when the attacker can modify the full input and the ResNet-18 is trained on the RGB (left column) and depth (right column) parts of the input for the two considered datasets. From Fig. 9, we can see that the accuracy of the defended ResNet-18 (called Defended in the figure) decreases fast when the classifier is under attack (\(\epsilon>0\)). This means that applying the rejection mechanism to ResNet-18 does not provide a notable advantage, except when ResNet-18 is trained on the depth part of the RGB-D Object Dataset. Figure 8: The CKA similarity heatmaps generated based on the RGB and depth information by applying the linear (first two) and RBF (last two) kernels, where (a) shows the results for the RGB-D Object Dataset and (b) the results for OCID. Therefore, we evaluate the defense's performance when applied to RCFusion constructed by ResNet-18. Also, to compare the impact of different backbone networks on RCFusion's performance, we evaluate the performance of the defense when applied to RCFusion constructed by AlexNet.
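For reference, the linear variant of the CKA similarity used in the interpretation analysis above can be computed as follows. This is a standard implementation sketch of linear CKA between two layer-activation matrices; the exact procedure defined in Section 3 is not reproduced here.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices X (n, p1) and Y (n, p2)
    computed over the same n inputs; returns a value in [0, 1]."""
    X = X - X.mean(axis=0, keepdims=True)     # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```

The heatmaps in Fig. 8 would then be obtained by evaluating such a similarity for every pair of layers of the RGB and depth branches, with an analogous kernelized version for the RBF case.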
We present the results when the attacker can modify the full image in Figs. 10-11. We can see that the accuracy of both the defended classifier and RCFusion decreases until it reaches zero. This means that the attack algorithm we employ to perform the analysis works correctly. Moreover, from these two plots, we can see that even though the accuracy decreases when the perturbation increases, _the accuracy of the defended RCFusion decreases more gracefully than that of the defended ResNet-18._ Figure 9: The robustness evaluation curve computed by modifying the full image of the input of ResNet-18 trained on the RGB (left) and depth (right) separately for the RGB-D Object Dataset (top row) and OCID Dataset (bottom row). Besides, we can also see that the performance of the detector when applied to RCFusion constructed by ResNet-18 (Fig. 10) drops faster than that of the detector when applied to RCFusion constructed by AlexNet (Fig. 11); in other words, the robustness of the detector when applied to RCFusion constructed by ResNet-18 is inferior to that of the detector when applied to RCFusion constructed by AlexNet. To explore the performance of the defended RCFusion in a more realistic scenario, we assess its robustness against adversarial patches. To this end, we perform the attack by modifying a portion of an image and present the results in Figs. 12-13. In the absence of attacks (i.e., _patch size_\(=0\)), RCFusion slightly outperforms the classifier defended with the proposed detector. This is expected, because a small portion of legitimate samples is incorrectly flagged as adversarial examples. Under attack (_patch size_\(>0\)), the defended classifier shows more robustness than RCFusion, as its accuracy decreases more gracefully than that of RCFusion. It is worth noticing that the accuracy of the defended classifier even increases for a small _patch size_, as the test samples immediately become blind-spot adversarial examples when modified slightly and end up in a region that is far from the rest of the data. Moreover, as the _patch size_ increases, the test samples gradually drift inside a different class, making them indistinguishable for the rejection-based defense. Overall, by comparing the performance of the defended classifier and the undefended RCFusion, together with the rejection rate, we show that _our defense mechanism provides a more robust performance than the undefended RCFusion under attacks performed by modifying either the full image or a portion of it_. **Comparison with Adversarial Training.** We compare the performance of our detector against the RCFusion model defended with the _adversarial training_ technique developed by Wang et al. [8]. The defense in [8] augments the training dataset with correctly labeled adversarial examples, thus helping the neural network better generalize when confronted with malicious noise. To do so, we start by computing adversarial examples on the full images to attack the RGB-D part of RCFusion, with a perturbation \(\epsilon=0.1(0.2)\) for the RGB-D Object Dataset (OCID), and we use them along with the unperturbed training data to produce robust models (RCFusion_AT). We depict the experimental comparison between our detector and the adversarial training defense by Wang et al. [8] in Fig. 14 and Fig. 15.
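A minimal sketch of this augmentation-based form of adversarial training is given below; it reflects our reading of the procedure just described rather than the exact recipe of Wang et al. [8], and it reuses the `pgd_linf` helper sketched earlier (RCFusion is assumed to accept the full RGB-D input as one tensor).

```python
import torch

def adversarial_training_epoch(model, loader, optimizer, criterion, eps):
    """One epoch of augmentation-based adversarial training: each clean batch is
    paired with its PGD counterpart crafted on the full RGB-D input, and the
    model is trained on the union of the two (the RCFusion_AT baseline)."""
    model.train()
    for x, y in loader:
        x_adv = pgd_linf(model, x, y,
                         attacker_loss=lambda out, t: -criterion(out, t),  # maximize CE
                         eps=eps, step=0.05, iters=100)
        inputs = torch.cat([x, x_adv], dim=0)
        targets = torch.cat([y, y], dim=0)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```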
The key observation is that conventional adversarial training, both for attacks injecting perturbations optimized by randomly picking among all considered values of \(\epsilon\), i.e., \(\epsilon\in\{0.1,0.2,0.3,0.4,0.5,1,1.5,2,2.5\}\) (Fig. 14), and for attacks using \(\epsilon=0.1(0.2)\) for the RGB-D Object Dataset (OCID) (Fig. 15), provides only a small improvement in robustness. In contrast, the robustness provided by the proposed detector is considerably higher. This is because, with adversarial training, as soon as the perturbation applied to the adversarial examples is slightly larger than the classifier's margin, they are misclassified. Instead, using the proposed detector, they are classified as adversarial examples unless they become quite similar (in the deep feature space) to the samples of the target class. Therefore, _our defense provides higher robustness than adversarial training_. Moreover, the two are complementary and could be used jointly to obtain even more robustness. In conclusion, our proposed technique demonstrates superior efficacy in countering adversarial attacks, enhancing model robustness against malicious perturbations while maintaining high accuracy and reliability in detecting out-of-distribution samples. Figure 10: The robustness evaluation curve is computed by modifying the full image of the RGB-D (left) and RGB (right) channel of RCFusion constructed by ResNet-18 on the RGB-D Object Dataset (top row) and OCID (bottom row), when the step size is 0.05. Figure 11: The robustness evaluation curve is computed by modifying the full image of the RGB-D (left) and RGB (right) channel of RCFusion constructed by AlexNet on the RGB-D Object Dataset (top row) and OCID (bottom row), when the step size is 0.05. Figure 12: The robustness evaluation curve is computed by modifying only a portion of the image of the RGB-D (left) and RGB (right) channel of RCFusion constructed by ResNet-18 on the RGB-D Object Dataset (top row) and the OCID Dataset (bottom row). Figure 13: The robustness evaluation curve is computed by modifying only a portion of the image of the RGB-D (left) and RGB (right) channel of RCFusion constructed by AlexNet on the RGB-D Object Dataset (top row) and the OCID Dataset (bottom row). **Remarks.** In this work, we have assessed the performance of a state-of-the-art RGB-D-based object recognition system called RCFusion against adversarial examples. Given that this system considers not only the RGB features but also the depth, it is reasonable to suppose it is more resilient than a system based only on RGB features to adversarial examples that change only the RGB part of the input. However, we have shown that their robustness is similar. Our results show that the vulnerability of RCFusion is mainly due to the usage of RGB features which, even when combined with the depth features, make the system vulnerable. However, they are necessary to obtain satisfactory performance. Therefore, we have proposed a defense based on a detection mechanism that, as we have shown, can make RCFusion more robust with negligible overhead. Moreover, we have shown that this defense mechanism is more effective than the only defense proposed so far to secure RGB-D-based systems [8]. ## 6 Related Work In the following, we discuss the work related to the vulnerability of RGB-D models and the previously proposed defenses against adversarial examples.
**Vulnerabilities of RGB-D models.** While it is straightforward to compute adversarial attacks against a machine learning model, understanding the rationale behind such weakness is a difficult task. Figure 14: Comparison between the robustness of the proposed defense methodology and the robustness of adversarial training considering all considered values of \(\epsilon\), on the RGB-D Object Dataset (left) and OCID (right), respectively. Geirhos et al. [2] remark in their discussion that humans most likely rely on the shape of observations to categorize and recognize objects, while deep neural networks retrieve information from the observed texture. To support this intuition, the authors interview volunteers by asking them to classify silhouettes and textures of objects and feed the same inputs to RGB neural networks. The authors also test the robustness of both human volunteers and neural networks with common corruptions applied to images, but they do not test adversarial attacks that target the shape or depth information. Tu et al. [33] analyze the robustness of object detectors of self-driving cars that recognize objects by acquiring RGB images and proximity scans with Lidar sensors. The authors develop attacks against both components, jointly or separately, and show how much they degrade the performance of the target classifier. Abdelfattah et al. [3] similarly evaluate the robustness of RGB-D models against adversarial perturbations with the intent of misleading the point cloud reconstruction. They achieve this result by virtually creating a single object with an adversarial shape and texture. Yu et al. [4] investigate the robustness of fusion models that leverage RGB and thermal information to compute image segmentation. Their results highlight that these models are not effective against adversaries, even if the attacks are conducted against one single part at a time. Figure 15: Comparison between the robustness of the proposed defense methodology and the robustness of adversarial training considering a single \(\epsilon\), on the RGB-D Object Dataset (left) and OCID (right), where \(\epsilon=0.1(0.2)\) for the RGB-D Object Dataset (OCID). Xie et al. [34] investigate the adversarial robustness of 3D object recognition by considering a set of attacks, including pixel-based attacks, universal patches, and black-box attacks in the form of transferability attacks. Their main findings suggest that robust depth recognition can improve the adversarial robustness of RGB-D models. Even if these recent works analyze the robustness of RGB-D models, they all lack an in-depth study regarding the reason for the different levels of robustness of the RGB and depth components. Complementary to previous works, we empirically assess the robustness of both RGB and depth features, and we analyze the variability learned at training time in each internal layer to explain the reason behind depth robustness and RGB vulnerability. **Adversarial defenses.** So far, no work has studied the effectiveness of defenses on RGB-D object recognition systems. The only work that proposes a defense for an RGB-D system is the one by Wang et al. [8], which aims to secure an object detector. In that work, the authors study the application of adversarial training [22] on both the RGB and depth components of fusion networks, and they discover that both accuracy and robustness decrease when hardening the two parts either separately or jointly.
Many works, instead, have previously proposed defenses to secure RGB systems against adversarial examples. Crecchi, Sotgiu, et al. [23; 35] propose a detection mechanism that trains a machine learning model on the internal representations learned by the network to be defended. At test time, the detector discards all the input samples whose internal representation mismatches the one learned at training time. Meng et al. [36] propose MagNet, a detector that intercepts anomalous samples by computing the difference between the input and its de-noised version, leveraging an autoencoder neural network. All these methods leverage a detection mechanism similar to our proposed defense, but they are neither applied to RGB-D systems nor used to discard adversarial patch attacks. ## 7 Conclusions In this work, we investigate the lack of robustness of RGB-D systems, showing that attackers can easily obtain misclassifications thanks to the weakness introduced by the color information. We explain this phenomenon by leveraging the Centered Kernel Alignment metric, showing that models trained on RGB or on both RGB and depth are more sensitive to minimal changes of the input samples compared to networks trained only on depth, hence amplifying the vulnerability to adversarial examples. To reduce the vulnerability of RGB-D systems, we develop a detector capable of discarding anomalous input samples by comparing their deep fusion representation with centroids computed at training time. We empirically show that our defense mechanism can reduce the effect of adversarial examples and of adversarial patches aimed at circumventing such detectors. Moreover, we have shown that the only approach proposed by previous works to defend RGB-D-based systems, namely adversarial training, can only slightly increase the robustness of RCFusion against adversarial examples with respect to the undefended model, whereas the proposed approach is more effective despite also being less expensive at training time, as discussed in Section 4. One limitation of our work is that our detector still uses the RGB information. It would be ideal to leverage only the depth channel of test samples, since we showed that depth alone is more robust to minimal perturbations. However, this may not be possible because the accuracy of systems based on depth alone is much lower than that of systems based on color. Hence, in future work, we will work on creating a detector based only on depth to obtain a more robust system and thus increase the perturbation that attackers need to apply to images to subvert the system. ## Acknowledgments This work was partly supported by the PRIN 2017 project RexLearn, funded by the Italian Ministry of Education, University and Research (grant no. 2017TWNMH2); by BMK, BMDW, and the Province of Upper Austria in the frame of the COMET Programme managed by FFG in the COMET Module S3AI; by Spoke 10 "Logistics and Freight" within the Italian PNRR National Centre for Sustainable Mobility (MOST), CUP I53C22000720001; and by the Key Research and Development Program of Shaanxi (Program Nos. 2022ZDLGY06-07, 2021ZDLGY15-01, 2021ZDLGY09-04 and 2021GY-004), the International Science and Technology Cooperation Research Project of Shenzhen (GJHZ20200731095204013), and the National Natural Science Foundation of China (Grant No. 61772419).
2309.17315
Data-Driven Newton Raphson Controller Based on Koopman Operator Theory
Newton-Raphson controller is a powerful prediction-based variable gain integral controller. Basically, the classical model-based Newton-Raphson controller requires two elements: the prediction of the system output and the derivative of the predicted output with respect to the control input. In real applications, the model may not be known and it is infeasible to predict the system sometime ahead and calculate the derivative by finite difference method as done in simulation. To solve these problems, in this work, we utilize the Koopman operator framework to reconstruct a linear model of the original nonlinear dynamical system and then utilize the output of the new linear system as the predictor of the Newton-Raphson controller. This method is only based on collected data within some time instant thus more practical. Three examples related to highly nonlinear systems are provided to verify the effectiveness of our proposed method.
Mi Zhou
2023-09-29T15:24:25Z
http://arxiv.org/abs/2309.17315v1
# Data-Driven Newton Raphson Controller Based on Koopman Operator Theory ###### Abstract Newton-Raphson controller is a powerful prediction-based variable gain integral controller. Basically, the classical model-based Newton-Raphson controller requires two elements: the prediction of the system output and the derivative of the predicted output with respect to the control input. In real applications, the model may not be known and it is infeasible to predict the system sometime ahead and calculate the derivative by finite difference method as done in simulation. To solve these problems, in this work, we utilize the Koopman operator framework to reconstruct a linear model of the original nonlinear dynamical system and then utilize the output of the new linear system as the predictor of the Newton-Raphson controller. This method is only based on collected data within some time instant thus more practical. Three examples related to highly nonlinear systems are provided to verify the effectiveness of our proposed method. ## I Introduction Trajectory tracking control is one of the most important topics in the robotics field, arising in mobile robots [1], self-driving cars [2], quadrotor UAVs [3], underwater robots [4], and so on. Many methods have been proposed to achieve real-time tracking performance. Existing techniques include Proportional-Integral-Derivative (PID) control [4], the Byrnes-Isidori regulator [5], model predictive control [6, 7], etc. The Newton-Raphson (NR) controller, first proposed in [8], is a tracking method based on a variable-gain integrator and the Newton-Raphson method for finding zeros of a function. This technique consists of three elements: (i) an output prediction which tracks the reference signal; (ii) an integral controller with variable gain; (iii) a speedup of the control action for enhancing the tracker's accuracy and guaranteeing the stability of the closed-loop system. [9] provided a detailed introduction to this technique, a theoretical derivation of the convergence of the tracking controller, and an error analysis, supported by persuasive illustrative simulations and laboratory experiments. Subsequently, more works appeared that used the Newton-Raphson controller to solve several challenging problems, such as tracking control of leader-follower multi-agent systems [8, 1], distributed formation control of multi-agent mobile systems in swarms and platoons [10], driving an underactuated system in a potentially adversarial environment modeled as a pursuit-evasion game [11], and tracking control of nonlinear inverted pendulums and differentially driven cars [12]. All these works showed that the tracking convergence of this method can be quantified for both constant and time-dependent reference signals, is quite fast, and comes with a large region of convergence. The NR regulation technique described above is based on a look-ahead simulation of the system, which works as both a predictor and an observer. This mechanism, however, requires a precise model of the system in order to obtain a reliable output prediction and hence effective tracking performance. Therefore, some works started to explore the potential of neural networks in the predictor. [11] formulated a pursuit-evasion game, regarded it as a tracking regulation problem solved using the Newton-Raphson controller, and used a deep neural network to approximate the behavior of the evader gathered online during the pursuit.
In [12], the authors utilized a feed-forward neural network as an output predictor for the Newton-Raphson controller, thus achieving a model-free realization. However, the training process relies heavily on the accuracy of the data, so it lacks robustness and real-time applicability. There is an increasing need for data-driven system identification approaches with the development of more complicated robotic systems. One such data-driven approach is to use neural networks and deep learning to identify a system. However, deep-learning methods suffer from long training times. The Koopman operator framework offers new opportunities for the control of nonlinear systems from the perspective of linearizing systems. It is a powerful tool for identifying and linearizing a nonlinear system with higher accuracy than traditional linearization methods. As we know, the computational limitation due to nonlinearity is an essential challenge in robotics control. Instead of linearizing systems directly, Koopman analysis achieves linearization by representing the nonlinear dynamics in a globally linear framework. The Koopman-based approach is a purely model-free system identification method with low time complexity. It has thus found wide applications in model predictive control, active learning, and hybrid control of robots. There is a large body of work related to Koopman operator theory, with solid theoretical foundations and applications to real robotic systems. A detailed introduction to Koopman operator theory can be found in [13]. In [14], the authors used model predictive control (MPC) to control soft continuum manipulator arms after using Koopman operator theory to identify the soft manipulator models. In [15], the authors used Koopman eigenfunctions to lift the nonlinear system dynamics to provide a linear approach for the design of MPC with state and input constraints. [16] proposed a high-order optimal control strategy implemented in the Koopman operator framework and tested it on the Duffing oscillator and the Clohessy-Wiltshire problem, which models the relative motion between two satellites. All these works demonstrate the efficiency of the Koopman operator in system identification. The objective of this paper is thus to propose a real-time data-driven Newton-Raphson controller by using Koopman linearization. We will test our tracking algorithm on the Van Der Pol system, an overhead crane system, and a differentially driven car system. In all experiments, the system has to track a time-varying reference signal. We then compare the tracking results with the classical model-based Newton-Raphson controller. This paper is organized as follows: in Section II, we formulate our problem. In Section III, the Koopman operator theory and the proposed controller are introduced in detail. Section IV provides three examples to illustrate the efficiency of our controller by comparing it with the classical model-based Newton-Raphson controller. We finally conclude our article in Section V. ## II Problem statement Consider the following nonlinear system: \[\dot{x}(t)=f(x(t),u(t)) \tag{1}\] with the output equation \[y(t)=h(x(t)) \tag{2}\] where \(x\in\mathbb{R}^{n}\), \(u(t)\in\mathbb{R}^{m}\), \(f(x(t),u(t))\) is continuously differentiable in \(x\) for every \(u\in\mathbb{R}^{m}\) and continuous in \(u\) for every \(x\in\mathbb{R}^{n}\), and \(h(x(t))\) is continuously differentiable. Moreover, to make sure Eqn.
(1) has a unique solution on \(t\in[0,\infty)\), we make the following assumptions: **Assumption 1** ([1]): 1. _For every compact set_ \(\Gamma_{1}\subset\mathbb{R}^{n}\) _and_ \(\Gamma_{2}\subset\mathbb{R}^{m}\)_, the functions_ \(f(x(t),u(t))\) _and_ \(\frac{\partial f}{\partial x}(x(t),u(t))\) _are Lipschitz continuous on_ \(\Gamma_{1}\times\Gamma_{2}\)_._ 2. _For every compact set_ \(\Gamma_{2}\subset\mathbb{R}^{m}\)_, there exists_ \(K>0\) _such that, for every_ \(x\in\mathbb{R}^{n}\) _and_ \(u\in\Gamma_{2}\)_,_ \[||f(x,u)||\leq K(||x||+1).\] Define \(r(t)\in\mathbb{R}^{k}\) as the reference signal. The output tracking control problem is defined as \[\lim_{t\rightarrow\infty}||r(t)-y(t)||=0 \tag{3}\] which can also be viewed as finding the root of the time-dependent equation \(r(t)-y(t)=0\). This motivates designing a controller with the following iterative form: \[u_{n+1}=u_{n}-\frac{r(t)-y(t)}{(r(t)-y(t))^{\prime}} \tag{4}\] to find the root (i.e., the controller) \(u(t)\). In the design of the Newton-Raphson controller, the prediction phase works as follows: at time \(t\), we predict the system from time \(t\) to \(t+T\) by solving the following differential equation \[\dot{\tilde{x}}(\tau)=f(\tilde{x}(\tau),u(t)),\quad\tau\in[t,t+T], \tag{5}\] with the initial condition \(\tilde{x}(t)=x(t)\). Then we can define the estimator from \(t\) to \(t+T\) as \[g(x(t),u(t)):=h(\tilde{x}(t+T)). \tag{6}\] The Newton-Raphson controller proposed in [9] has the following form: \[\dot{u}(t)=\alpha\left(\frac{dg}{du}(x(t),u(t))\right)^{-1}(r(t+T)-g(x(t),u(t))) \tag{7}\] where \(r(t+T)\) is assumed to be known in advance at time \(t\). Note that whether the system is fully actuated or underactuated does not affect the operation of the controller, but the system should be controllable. The calculation of \(\frac{dg(x(t),u(t))}{du}\), however, has a high computational demand if the system is nonlinear. To the authors' best knowledge, there are three ways to calculate \(\frac{dg(x(t),u(t))}{du}\): 1. The finite difference method (FDM), where \[\frac{dg(x(t),u(t))}{du}=\frac{g(x,u+\delta u)-g(x,u)}{\delta u}.\] This method is direct but very time-consuming. 2. Without loss of generality, assume \(h(\tilde{x}(t+T))=\tilde{x}(t+T)\). Use Eqn. (5): \[\frac{dg(x(t),u(t))}{du(t)}=\frac{d\tilde{x}(t+T)}{du(t)} \tag{8}\] where \[\frac{d}{d\tau}\left[\frac{d\tilde{x}(\tau)}{du(t)}\right]=\frac{\partial f(\tilde{x}(\tau),u(t))}{\partial x}\frac{d\tilde{x}(\tau)}{du(t)}+\frac{\partial f(\tilde{x}(\tau),u(t))}{\partial u}. \tag{9}\] By defining a new variable \(\tilde{\xi}(\tau)=\frac{d\tilde{x}(\tau)}{du(t)}\), we can obtain (8) by solving the ODE (9). 3. The third method is based on linearized models (\(\dot{x}=Ax+Bu\) and \(y=Cx\)). If the model is linear or we linearize the nonlinear system locally, we have the predicted output \[y(t+T)=C_{t}(e^{A_{t}T}x_{t}+A_{t}^{-1}(e^{A_{t}T}-I_{n})(B_{t}u)) \tag{10}\] where \(I_{n}\) is the \(n\times n\) identity matrix. Thus, \[\frac{\partial y(t+T)}{\partial u}=C_{t}A_{t}^{-1}(e^{A_{t}T}-I_{n})B_{t}. \tag{11}\] This method, however, only works for linear models. If a model is nonlinear and we linearize it locally, the linearized model may not be controllable, which makes this method infeasible in that case. For example, the Dubins car is controllable, but the linearized model of the Dubins car is not controllable.
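As a concrete illustration of the third approach, a minimal sketch of the computation in Eqs. (10)-(11) is given below. This is only an illustration, not part of the original implementation, and it assumes a known linear model with invertible \(A\) and the input held constant over the prediction horizon.

```python
import numpy as np
from scipy.linalg import expm

def linear_prediction_and_derivative(A, B, C, x_t, u_t, T):
    """Predictor y(t+T) of Eq. (10) and its input derivative, Eq. (11), for a
    linear model x' = Ax + Bu, y = Cx, with the input held constant over [t, t+T]."""
    n = A.shape[0]
    eAT = expm(A * T)
    M = np.linalg.solve(A, eAT - np.eye(n))      # A^{-1}(e^{AT} - I_n)
    y_pred = C @ (eAT @ x_t + M @ (B @ u_t))     # Eq. (10)
    dy_du = C @ M @ B                            # Eq. (11)
    return y_pred, dy_du
```

The Koopman-based design developed below obtains the same two quantities from a lifted linear model identified purely from data, so that controllability of the original nonlinear system is not lost through local linearization.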
Therefore, we propose using Koopman operator theory to lift the nonlinear system to a linear system, thus alleviating the complexity while preserving the controllability of the original nonlinear system. ## III Newton-Raphson controller based on Koopman Operator theory In this section, we first give a brief introduction to the principle of the Koopman operator. Based on this, we then propose our controller, built on the linearized model obtained via Koopman operator theory. ### _Koopman operator theory [14]_ The Koopman operator provides a linear representation of the flow of a nonlinear system in an infinite-dimensional space of observables. Consider a dynamical system \[\dot{x}=F(x(t))\] where \(x(t)\in\mathbb{R}^{n}\) and \(F\) is a continuously differentiable function. The system can be lifted to an infinite-dimensional function space \(\mathcal{F}\) composed of all continuous real-valued functions. In \(\mathcal{F}\), the flow of the system is characterized by the linear Koopman operator \(U_{t}:\mathcal{F}\rightarrow\mathcal{F}\), which describes the evolution of the observables along the trajectories of the system. We seek the projection of the Koopman operator onto a finite-dimensional subspace. Denote by \(\tilde{\mathcal{F}}\subset\mathcal{F}\) the subspace of \(\mathcal{F}\) spanned by \(N>n\) linearly independent basis functions \(\{\phi_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\}_{i=1}^{N}\). For convenience, we assume the first \(n\) basis functions are the states, i.e., \(\phi_{i}(x)=x_{i}\). Thus, written in vector form, we have \[\phi(x)=[x_{1},x_{2},\cdots x_{n},\phi_{n+1}(x),\cdots,\phi_{N}(x)]. \tag{12}\] Any observable \(\tilde{f}\in\tilde{\mathcal{F}}\) can be expressed as a linear combination of these basis functions, i.e., \[\tilde{f}=w_{1}\phi_{1}+w_{2}\phi_{2}+\cdots+w_{N}\phi_{N}=w^{\top}\phi(x)^{\top}\] where \(w_{i}\in\mathbb{R}\). The vector \(\phi(x)\) is called the lifted state and \(w\) is the vector representation of \(\tilde{f}\). Given this representation, we can obtain an approximation \(\tilde{U}_{t}\in\mathbb{R}^{N\times N}\) of the Koopman operator on \(\tilde{\mathcal{F}}\) that satisfies \[\tilde{U}_{t}w=w^{\prime}\] The objective is to find \(\tilde{U}_{t}\) based on observed data in \(\tilde{\mathcal{F}}\). ### _Proposed controller design_ For dynamical systems with inputs, Eqn. (1), we aim to build a linear model within the Koopman operator framework introduced above: \[z[j+1] =Az[j]+Bu[j] \tag{13}\] \[x[j] =Cz[j] \tag{14}\] for each \(j\in\mathbb{N}\), where \(A\in\mathbb{R}^{N\times N}\) is the state transition matrix, \(z=\phi(x)\) is the lifted state, \(B\in\mathbb{R}^{N\times m}\) is the control matrix, and \(C=\begin{bmatrix}I_{n\times n}&0_{n\times(N-n)}\end{bmatrix}\) is a projection operator from the lifted space onto the state space. Denote \[\alpha[k]=[x[k],u[k]]\] \[\beta[k]=[F(x[k],u[k]),u[k]].\] We then identify a finite-dimensional approximation of the Koopman operator via the Extended Dynamic Mode Decomposition (EDMD) algorithm [17] using observed data.
The corresponding Koopman operator is \[\tilde{U}_{T_{s}}=\Gamma_{c}^{\dagger}\Gamma_{n},\] where \(\dagger\) denotes the pseudo-inverse, \(K\) is the time horizon for collecting data, \(\beta[k]=F(\alpha[k])\), \(k=1,2,\cdots,K\), and \[\Gamma_{c}=\frac{1}{K}\sum_{k=1}^{K}\phi(\alpha[k])^{\top}\phi(\alpha[k]),\] \[\Gamma_{n}=\frac{1}{K}\sum_{k=1}^{K}\phi(\alpha[k])^{\top}\phi(\beta[k]).\] The continuous-time Koopman operator can then be written as \(\log(\tilde{U}_{T_{s}})/\Delta t\), where \(\Delta t\) is the sampling time. The matrix \(\tilde{U}_{T_{s}}^{\top}\) is the best approximation of a transition matrix between the elements of the snapshot pairs in the \(L^{2}\)-norm sense, i.e., \[\min_{U_{T_{s}}^{\top}}\sum_{k=1}^{K}\Big\|U_{T_{s}}^{\top}\phi(\alpha[k])-\phi(\beta[k])\Big\|_{2}^{2}. \tag{15}\] The best \(A\) and \(B\) matrices in (13) can be isolated by partitioning \(\tilde{U}_{T_{s}}^{\top}\): \[\tilde{U}_{T_{s}}^{\top}=\begin{bmatrix}A_{N\times N}&B_{N\times m}\\ 0_{m\times N}&I_{m\times m}\end{bmatrix},\] where \(I_{m\times m}\) is the identity matrix. Fig. 1 shows the diagram of the proposed data-driven Newton-Raphson tracking scheme. In this algorithm, we first collect data from the nonlinear system and build a lifted linear system from the collected data. After that, the predictor \(y(t+T)\) and the derivative term \(\frac{\partial g(x,u)}{\partial u}\) are obtained from the linearized model. In this way, we avoid the problems mentioned above. Fig. 1: Diagram of the proposed control framework: a lifted linear system is built based on Koopman operator theory; the derivative and prediction of the Newton-Raphson controller are calculated using the linearized model. ## IV Simulation In this section, we provide three examples to verify the efficiency of the proposed controller. We then compare the results with those of the classical Newton-Raphson controller with respect to tracking accuracy and time. All the experiments are implemented on a personal computer with MATLAB R2020b. For all the experiments, we use the mean square error as the measure of tracking performance, which is defined as \[MSE=\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}||y(t_{i})-r(t_{i})||_{2}^{2},\] where \(N_{d}=\frac{t_{f}}{dt}\) is the number of sampling points. The average MSE and time complexity over 10 experiments are taken for comparison. ### _Example 1: Van Der Pol system_ A typical Van Der Pol system has the following form: \[\left\{\begin{matrix}\dot{x}_{1}=x_{2}\\ \dot{x}_{2}=-x_{1}+(1-x_{1}^{2})x_{2}+u\end{matrix}\right.. \tag{16}\] The objective is to make \(y=x_{1}\) track the signal \(r(t)=\frac{\pi}{8}\sin t+\frac{\pi}{6}\). The basis is chosen as \(z=[x_{1},x_{2},x_{1}^{2},x_{1}^{2}x_{2}]^{\top}\)1. Using the EDMD algorithm, we obtain constant matrices \(A_{4\times 4}\) and \(B_{4\times 1}\), with which we rebuild a linearized system \(\dot{z}=Az+Bu\) from the collected data. The derivative \(\frac{dg(x,u)}{du}\) and the predictor \(y=x_{1}\) can then be obtained by using this linearized system. For the identification using Koopman operator theory, the prediction horizon is chosen as \(T_{s}=2\) s and the number of trials for data collection is \(N_{s}=10\), with random initialization of the initial state and control input 2. The initial state is set as \([0,0]^{\top}\). The speedup parameter is chosen as \(\alpha=20\) for both the NR controller and the KNR controller. The sampling time is 0.01 s. The prediction horizon is \(T=0.15\) s. The system is simulated for \(t_{f}=20\) s. Footnote 1: Please note that this choice of basis is not unique. Footnote 2: Please note that the larger the values of \(N_{b}\) and \(N_{s}\), the better the tracking results. However, as a compromise with time complexity, we chose this group of parameters. Fig. 2 shows the trajectory of \(x_{1}(t)\) for both KNR and NR. As we can see, KNR has a smaller oscillation at the beginning than NR. Fig. 3 shows the control input for both algorithms. It shows that the KNR input reaches zero faster than the NR input and has fewer oscillations as well. Table I summarizes the comparison with the classical Newton-Raphson (NR) controller. All parameters are kept the same for both algorithms. As we can see, KNR needs less time to catch up with the reference signal and has higher accuracy. This is easily explained: when calculating \(\frac{dg(x,u)}{du}\), the linearized system has higher accuracy. Regarding the time complexity of KNR in Table I, the time can be substantially reduced by decreasing the number of trials and the size of the prediction window, so the difference in time complexity between NR and KNR may be negligible. ### _Example 2: Overhead crane system_ The initial state is \([0,0,0,0]^{\top}\). The objective is to let the observation \(\theta\) track a predefined signal, which is the contour of the obstacles. Let the predefined signal be \(r(t)=\sin(0.1t)\) and the simulation time \(t_{f}=20\). The prediction horizon is \(T=0.15\). The speedup parameter is \(\alpha=20\). The basis for the lifted model is chosen as \([x,\dot{x},\theta,\dot{\theta},\sin(\theta),\cos(\theta)]\). Fig. 5 shows the tracking result of the proposed controller and Fig. 6 the corresponding control input. As we can see, both KNR and NR track the reference signal very well, but the control input of KNR has a smaller magnitude. Table II summarizes the comparison results for the overhead crane system. ### _Example 3: Differentially driven car_ The vehicle's kinematics in global coordinates are as follows [12]: \[\begin{bmatrix}\dot{x}(t)\\ \dot{y}(t)\\ \dot{\theta}(t)\end{bmatrix}=\begin{bmatrix}\frac{\rho}{2}\cos\theta(t)&\frac{ \rho}{2}\cos\theta(t)\\ \frac{\rho}{2}\sin\theta(t)&\frac{\rho}{2}\sin\theta(t)\\ -\frac{\rho}{2}&\frac{\rho}{2}\end{bmatrix}\begin{bmatrix}\omega_{L}(t)\\ \omega_{R}(t)\end{bmatrix} \tag{18}\] where the system state is \((x(t),y(t),\theta(t))^{\top}\) and the control input is \(u(t)=(\omega_{L}(t),\omega_{R}(t))^{\top}\). Physically, \(x(t)\) (resp. \(y(t)\)) is the \(x\) (resp. \(y\)) position of the car in the world coordinates. \(\theta(t)\) is the orientation of the car with respect to the global coordinate system as shown in Fig. 7. \(\omega_{R}(t)=v_{R}/\rho\) (resp. \(\omega_{L}(t)=v_{L}/\rho\)) is the angular velocity of the right wheel (resp. left wheel). \(\rho\) is the radius of the wheels. \(D\) is the width of the vehicle. As we can see, this system is highly nonlinear. Let \(\rho=0.1\) m and \(D=0.4\) m. Similar to [12], we define the following reference trajectory \(r(t)\): \[r(t)=\begin{cases}(-0.0001t^{3}+0.25t,0.0475t^{3}-0.3601t^{2}+0.3t+3),\\ t<5\\ (5\sin(0.05t),\ 3\sin(0.1t)),\ t>5\end{cases} \tag{19}\] Fig. 4: Illustration of a planar two-dimensional overhead crane system tracking a pre-defined trajectory. Fig. 5: Tracking results for the overhead crane system. Fig. 6: Control input (i.e., \(F(t)\)) of the overhead crane system. Fig. 7: Illustration of a differentially driven car.
2309.11764
Causal inference with outcome dependent sampling and mismeasured outcome
Outcome-dependent sampling designs are extensively utilized in various scientific disciplines, including epidemiology, ecology, and economics, with retrospective case-control studies being specific examples of such designs. Additionally, if the outcome used for sample selection is also mismeasured, then it is even more challenging to estimate the average treatment effect (ATE) accurately. To our knowledge, no existing method can address these two issues simultaneously. In this paper, we establish the identifiability of ATE and propose a novel method for estimating ATE in the context of generalized linear model. The estimator is shown to be consistent under some regularity conditions. To relax the model assumption, we also consider generalized additive model. We propose to estimate ATE using penalized B-splines and establish asymptotic properties for the proposed estimator. Our methods are evaluated through extensive simulation studies and the application to a dataset from the UK Biobank, with alcohol intake as the treatment and gout as the outcome.
Min Zeng, Zeyang Jia, Zijian Sui, Jinfeng Xu, Hong Zhang
2023-09-21T03:58:26Z
http://arxiv.org/abs/2309.11764v1
# Causal inference with outcome dependent sampling and mismeasured outcome ###### Abstract Outcome-dependent sampling designs are extensively utilized in various scientific disciplines, including epidemiology, ecology, and economics, with retrospective case-control studies being specific examples of such designs. Additionally, if the outcome used for sample selection is also mismeasured, then it is even more challenging to estimate the average treatment effect (ATE) accurately. To our knowledge, no existing method can address these two issues simultaneously. In this paper, we establish the identifiability of ATE and propose a novel method for estimating ATE in the context of generalized linear model. The estimator is shown to be consistent under some regularity conditions. To relax the model assumption, we also consider generalized additive model. We propose to estimate ATE using penalized B-splines and establish asymptotic properties for the proposed estimator. Our methods are evaluated through extensive simulation studies and the application to a dataset from the UK Biobank, with alcohol intake as the treatment and gout as the outcome. Average treatment effect; Causal inference; Outcome dependent sampling; Mismeasured outcome ## 1 Introduction Numerous studies in the fields of biomedical and social sciences are focused on determining the causal impact of a binary treatment on a specific outcome. Although randomized controlled trials (RCTs) serve as the gold standard for establishing causal relationships, they may not always be feasible due to financial, logistical, or ethical constraints. As a result, researchers often rely on observational studies. Various methodologies, such as propensity score techniques (Rosenbaum and Rubin, 1983; Rubin and Thomas, 2000) and instrumental variable estimation methods (Angrist et al., 1996; Angrist and Krueger, 2001), have been developed to estimate the average treatment effect (ATE) in observational studies. However, the efficacy of these methodologies relies on sampling randomness. In cases where sample selection is not random, these methods are no longer valid. Outcome-dependent sampling (ODS) represents a non-random sampling design in which the selection of sample units depends on the outcome of interest. ODS offers some advantages over simple random sampling, such as enhanced statistical power when the outcome is rare (Schlesselman, 1982). However, ODS greatly complicates statistical data analysis and result interpretation. The case-control design, along with its variations, is the most prevalent form of ODS design. Ideally, in such designs, the sampling process relies exclusively on the outcome rather than on any other variables in the study. If the unique characteristics of ODS designs are not taken into account, conventional causal inference methods may be subject to selection bias (Gabriel et al., 2022; Bareinboim and Pearl, 2016). A large quantity of research based on ODS designs has been published (Wacholder et al., 1992; Breslow and Holubkov, 1997), but the majority of these studies focus on association analysis instead of causal inference. Several researchers (L. Penning de Vries and Groenwold, 2022; Mansson et al., 2007; Robins, 1999) have attempted to avoid this issue by focusing on the causal risk ratio. Van der Laan (2008) and Van der Laan and Rose (2011), on the other hand, proposed an ATE estimator based on a weighted targeted maximum likelihood by incorporating information on disease prevalence.
However, implementing an ideal ODS design in practice can often be challenging, as sample selection may be influenced, at least partially, by diagnosis or measurement. As a result, the true outcome of interest may be unobserved, and the measured outcome may differ from the true outcome. Various factors contribute to mismeasurement in outcome variables, such as the unavailability of costly measurements, the propensity to misreport responses to sensitive questions, and the inevitable recall bias. Numerous studies have investigated the consequences of mismeasured outcome variables, such as bias and efficiency loss (Copeland et al., 1977; Neuhaus, 1999; Shu and Yi, 2019; Fox et al., 2022). Some researchers have opted to develop sensitivity analyses for mismeasured binary outcomes to reduce bias (Lash and Fink, 2003; Fox et al., 2005; Lyles and Lin, 2010). Shu and Yi (2019, 2020) derived the asymptotic bias of the conventional inverse probability of treatment weighting (IPTW) and doubly robust (DR) estimators that ignore the measurement error, and proposed a modified weighting method that corrects the mismeasurement bias. Although research on addressing either selection bias or mismeasurement bias has garnered much attention, very few methods have been developed to deal with these two types of bias simultaneously, the exceptions being Beesley and Mukherjee (2022) and Jurek et al. (2013). However, these studies focus on association analysis. To our knowledge, no causal inference method has been developed to simultaneously address both issues. In this paper, we derive a novel generalized linear model (GLM) to establish the relationship between the observed samples and the target population. This allows for an intuitive understanding of the combined effects of ODS and measurement error on ATE estimation. We then derive estimating equations (EE) to estimate the unknown parameters, through which we obtain an ATE estimator. We call this method GLM-EE. The GLM-EE estimator is proven to be consistent and asymptotically normal. Furthermore, to relax the model assumption, we introduce a generalized additive model (GAM) based estimator (Hastie and Tibshirani, 1987; Yoshida and Naito, 2014; Marx and Eilers, 1998), which employs penalized splines as a smoothing technique. The unknown parameters can again be estimated by solving a set of estimating equations, and we refer to this method as GAM-EE. Asymptotic properties of the GAM-EE estimator are also established. Through simulation studies, we demonstrate that both proposed estimators effectively address selection bias and mismeasurement bias. Moreover, the GAM-EE method is shown to be more robust to model misspecification with little sacrifice of efficiency. We further apply our methods to a real-world dataset from the UK Biobank, with the aim of investigating the ATE of alcohol consumption (treatment) on gout disease (outcome) among male individuals aged 40 to 80. Gout is a form of arthritis that arises when uric acid crystals accumulate in the joints, causing inflammation and pain. However, this outcome measurement suffers from misdiagnoses, with a high false negative rate of 10-30% and a low false positive rate of about 5% (Vazquez-Mellado et al., 2012; Kiefer et al., 2016). We apply our methods to this dataset and conduct a sensitivity analysis to evaluate their performance in real-world research. The remainder of this article is structured as follows. In Section 2, we introduce our model and establish the identifiability of ATE under some appropriate assumptions. 
In Section 3, we describe our GLM-EE and GAM-EE methods and establish their theoretical properties. In Section 4 and Section 5, we evaluate the performance of the two methods through extensive simulation studies and a real data application, respectively. In Section 6, we conclude the article and discuss future research directions. All technical details, along with supplementary information for the numerical studies in Sections 4 and 5, are provided in Supplementary Materials. ## 2 Identifiability of average treatment effect ### Ordinary scenario We start by reviewing the identifiability of ATE in ordinary scenarios where samples are selected randomly and measurement error is absent. Let \(T\) and \(Y\) denote the binary treatment and true outcome of interest, respectively. Let \(Y(t)\) denote the potential or counterfactual outcome for a given subject with exposure level \(t\)(Rubin, 2005). Suppose \(Y\), \(T\), and \(Y(t)\) take values in the binary set \(\{0,1\}\). Let \(X\) denote a vector of covariates or confounding variables. Our target parameter ATE, denoted as \(\tau\), is defined as the expected difference between the potential outcomes: \[\tau=\mathbb{E}[Y(1)-Y(0)],\] where the expectation is evaluated in the target population. In the standard causal inference framework, the identifiability of \(\tau\) hinges on three fundamental assumptions stated as follows: Assumption 1: (consistency) \[Y=TY(1)+(1-T)Y(0).\] Assumption 2: (positivity) \[1>\mathbb{P}(T=1|X)>0.\] Assumption 3: (unconfoundness) \[(Y(1),Y(0))\perp T\mid X.\] Under Assumptions 1-3, \(\tau\) is identifiable, as demonstrated by the following formula: \[\tau=\mathbb{E}\{g_{1}(X)-g_{0}(X)\}, \tag{1}\] where \(g_{i}(x)=\mathbb{E}[Y\mid X=x,T=i],\ i=1,0\). As evident from Equation (1), the identifiability of \(\tau\) relies not only on \(g_{1}\) and \(g_{0}\) but also on the distribution of covariates \(X\) in the target population. In scenarios involving ODS designs and measurement error, both \(g_{i}(x)\) and the distribution of \(X\) are ambiguous. Consequently, the identifiability of \(\tau\) needs further assumptions. ### ODS with measurement error Let \(Y^{*}\) denote the observed outcome, which may differ from the true outcome \(Y\) due to measurement error. Let \(S\) represent an indicator of selection into the study, with \(S=1\) for "selected" and \(S=0\) for "not selected". The sample distribution is expressed by \(\mathbb{P}(Y^{*},X,T\mid S=1)\), where \(\mathbb{P}(\cdot)\) denotes the probability density function. To ensure the identifiability of \(\tau\), we introduce two additional assumptions characterizing the mechanism of sample selection and outcome measurement. Assumption 4: (selection conditional independence): the sample selection procedure is independent of \((Y,X,T)\) given \(Y^{*}\), that is \[S\perp(Y,X,T)\mid Y^{*}.\] Assumption 5: (measurement conditional independence): the observed outcome \(Y^{*}\) is independent of \((X,T)\) given \(Y\), that is \[Y^{*}\perp(X,T)\mid Y.\] Assumption 4 naturally aligns with outcome-dependent sampling (ODS) design, as it posits that samples are selected solely based on the observed outcome \(Y^{*}\). Assumption 5 states that the observed outcome \(Y^{*}\) relies exclusively on the true outcome \(Y\), which indicates the influence of \(X\) and \(T\) on \(Y^{*}\) is completely mediated by \(Y\). This frequently arises in clinical diagnosis and is extensively employed in the literature (Shu and Yi, 2019, 2020). 
Figure 1 employs the directed acyclic graph (DAG) to illustrate the problem, where subfigure (a) corresponds to the ordinary design under Assumptions 1-3, while subfigure (b) depicts the ODS design with measurement error under Assumptions 1-5. It follows from Assumptions 4 and 5 that \(\mathbb{P}(Y^{*}=j\mid Y=i,X)=\mathbb{P}(Y^{*}=j\mid Y=i)\) and \(\mathbb{P}(S=j\mid Y^{*}=i,X)=\mathbb{P}(S=j\mid Y^{*}=i)\) for \(i,j=1\) or \(0\). To simplify statement, we denote \(p_{ij}=\mathbb{P}(Y^{*}=j|Y=i)\) for \(i,j=1\) or \(0\), where \(p_{01}\) is the false positive rate of the disease and \(p_{10}\) is the false negative rate of the disease. Let \(v=\mathbb{P}(Y=1)\) denote the disease prevalence in the target population. Since \(v\), \(p_{01}\), and \(p_{10}\) are usually attainable through existing literature and medical expert consultations, we assume these values are known. Let \(s=\mathbb{P}(S=1|Y^{*}=0)/\mathbb{P}(S=1|Y^{*}=1)\) denote the sampling ratio between cases and controls. This ratio measures the degree of sampling bias, with \(s=1\) indicating random sampling, and as \(s\) deviates further from 1, the level of selection bias increases. Let \(v^{*}=\mathbb{P}(Y^{*}=1)\) denote the observed disease prevalence, which may differ from \(v\) due to measurement error. Let \(g_{i}^{*}(x)=\mathbb{E}[Y^{*}|X=x,T=i,S=1]\) denote the expectation of \(Y^{*}\) conditional on \(X=x\), \(T=t\), and \(S=1\), which can be identified by the sample distribution. The following lemma explores the relationship between \(g_{i}^{*}(x)\) and \(g_{i}(x)\). **Lemma 2.1**: _Under Assumptions 1-5, for \(i\) = \(0\) or \(1\), we have_ \[g_{i}^{*}(X)=\frac{((1-p_{10}-p_{01})g_{i}(X)+p_{01})s}{1+((1-p_{10}-p_{01})g_ {i}(X)+p_{01})(s-1)}, \tag{2}\] _where_ \[s=\frac{\mathbb{P}(Y^{*}=1|S=1)/v^{*}}{\mathbb{P}(Y^{*}=0|S=1)/(1-v^{*})}, \tag{3}\] \[v^{*}=(1-p_{10}-p_{01})v+p_{01}. \tag{4}\] Lemma 2.1 indicates that the sampling ratio \(s\) and observed disease prevalence \(v^{*}\) are determined by \(v\), \(p_{01}\) and \(p_{10}\). Also, there is a one-to-one function relationship between \(g_{i}^{*}(X)\) and \(g_{i}(X)\) given \(v\), \(p_{01}\) and \(p_{10}\), which demonstrates that \(g_{i}(X)\) is identifiable since \(g_{i}^{*}(X)\) is determined by the sample distribution. To ensure the identifiability of \(\tau\), one must also calculate the expectation of \(g_{i}(X)\). **Lemma 2.2**: _Under Assumptions 1-5, for \(i\) = \(0\) or \(1\), we have_ \[\mathbb{E}[g_{i}(X)]=v^{*}u_{i1}+(1-v^{*})u_{i0}, \tag{5}\] _where_ \[u_{ij}=\int g_{i}(x)f(x|Y^{*}=j,S=1)dx, \tag{6}\] \(v^{*}\) _is given in (4) and \(f(\cdot\mid Y^{*},S)\) represents the conditional density of \(X\) given \(Y^{*}\) and \(S\)._ The proof of Lemma 2.2 is straightforward by applying the law of total probability and Assumption 4. Applying Lemmas 2.1-2.2, we can establish the identifiability of \(\tau\), as described in Theorem 2.1. 
**Theorem 2.1**: _Under Assumptions 1-5, the average treatment effect \(\tau\) is identifiable:_ \[\tau=\mathbb{E}[g_{1}(X)]-\mathbb{E}[g_{0}(X)],\] _where_ \[\mathbb{E}[g_{i}(X)] =\frac{v^{*}}{1-p_{10}-p_{01}}\int\left(\frac{g_{i}^{*}(x)}{s-g_{i}^{*}(x)(s-1)}-p_{01}\right)f(x|Y^{*}=1,S=1)dx\] \[+\frac{1-v^{*}}{1-p_{10}-p_{01}}\int\left(\frac{g_{i}^{*}(x)}{s-g_{i}^{*}(x)(s-1)}-p_{01}\right)f(x|Y^{*}=0,S=1)dx,\ i=1,\ 0, \tag{7}\] _where \(s\) and \(v^{*}\) are given in (3) and (4), respectively._ Since \(\mathbb{P}(Y^{*},X,T|S=1)\) can be approximated by its sample version, we can consistently estimate the ATE \(\tau\) if \(p_{01},p_{10},v\) are given. We provide methods for estimating \(\tau\) in the next section. ## 3 Estimation of ATE \(\tau\) In this section, we first discuss the estimation bias of the naive method that ignores outcome-dependent sampling and mismeasurement, and then propose two debiasing methods. Both methods depend on an adjusted link function determined by the sampling ratio \(s\) and the mismeasurement probabilities \(p_{01}\) and \(p_{10}\). According to Theorem 2.1, to estimate \(\tau\), we can first estimate \(g_{i}\) and then estimate \(\mathbb{E}[g_{i}(X)]\), \(i=0,\ 1.\) To begin with, we model \(g_{i}\) with a logistic link: \[g_{i}(x)=\frac{\exp(\eta(T=i,X=x))}{1+\exp(\eta(T=i,X=x))},\] where the index \(\eta(t,x)\) is a function of \(t\) and \(x.\) Applying Lemma 2.1, we obtain that \[g_{i}^{*}(x)=h(\eta(T=i,X=x)),\] where \[h(\eta)=\frac{\left(p_{01}+\exp(\eta)(1-p_{10})\right)s}{1+p_{01}(s-1)+\exp(\eta)\left(1+(1-p_{10})(s-1)\right)} \tag{8}\] serves as an adjusted link function. The log-likelihood function of the observed samples is \[\ell_{n}\left(y^{*},\eta(t,\mathbf{x})\right)=\sum_{i=1}^{n}\left\{y_{i}^{*}\log(\frac{\mu_{i}}{1-\mu_{i}})+\log(1-\mu_{i})\right\}, \tag{9}\] where \(\mu_{i}=h(\eta(t_{i},\mathbf{x}_{i}))\) for the \(i\)-th sample and \(n\) is the sample size. **Remark 1:** The adjusted link function \(h(\eta)\) is a monotone increasing function of the index \(\eta.\) The shape of its curve is highly influenced by the values of \(p_{01},\)\(p_{10},\) and \(s.\) For example, the function has a supremum of \(\frac{(1-p_{10})s}{1+(1-p_{10})(s-1)}\) and an infimum of \(\frac{p_{01}s}{1+p_{01}(s-1)}.\) Figure 4 in Appendix B of Supplementary Materials illustrates the curves of \(h(\eta)\) for various combinations of \(p_{01},\)\(p_{10},\) and \(s.\) It is worth noting that when \(p_{01}=0,\)\(p_{10}=0,\) and \(s=1,\)\(h(\eta)\) degenerates to the logistic link function, corresponding to the situation of random sampling and no measurement error. **Remark 2:** As shown in Figure 4, when \(p_{01}>0,\)\(p_{10}>0,\) and \(s>1,\) the shape of the curves becomes deflated and shifted. In particular, when \(s\) is large and \(p_{01}\) is greater than \(0,\) the infimum of \(h(\eta)\) approaches \(1.\) This leads to a significantly reduced range of the adjusted link function, which in turn makes model estimation challenging. To avoid an excessively small range of \(h(\eta)\) when \(s\) is large, it is crucial to ensure that \(p_{01}\) does not deviate too far from \(0.\) Fortunately, in the real world \(p_{01}\) is often very close to \(0,\) maintaining a reasonable range for \(h(\eta)\). 
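As a quick numerical companion to Remarks 1 and 2, the quantities in (3), (4) and (8), together with the inverse mapping implied by Lemma 2.1, can be coded in a few lines. The Python sketch below is our own illustration; the prevalence, misclassification rates, and the assumed 1:1 case-control split are hypothetical values, not taken from the paper.

```python
import numpy as np

def observed_prevalence(v, p01, p10):
    # v* = (1 - p10 - p01) * v + p01, Equation (4)
    return (1 - p10 - p01) * v + p01

def adjusted_link(eta, p01, p10, s):
    # h(eta) in Equation (8): maps the index eta to E[Y* | X, T, S = 1]
    e = np.exp(eta)
    num = (p01 + e * (1 - p10)) * s
    den = 1 + p01 * (s - 1) + e * (1 + (1 - p10) * (s - 1))
    return num / den

def g_from_gstar(g_star, p01, p10, s):
    # Inverse of the mapping in Lemma 2.1: recover g(x) from g*(x)
    a = g_star / (s - g_star * (s - 1))
    return (a - p01) / (1 - p10 - p01)

# Hypothetical values: rare disease, imperfect diagnosis, 1:1 case-control sampling
v, p01, p10 = 0.01, 0.02, 0.20
v_star = observed_prevalence(v, p01, p10)
s = (0.5 / v_star) / (0.5 / (1 - v_star))   # Equation (3) with P(Y*=1 | S=1) = 0.5

eta = np.linspace(-6, 6, 200)
h = adjusted_link(eta, p01, p10, s)
print("v* =", round(v_star, 4), " s =", round(s, 2))
print("h(eta) range:", np.round(h.min(), 4), "to", np.round(h.max(), 4))
# Round-trip check of Lemma 2.1: recover expit(eta) from h(eta)
g = 1 / (1 + np.exp(-eta))
print("max inversion error:", np.abs(g_from_gstar(h, p01, p10, s) - g).max())
```

Running such a sketch shows the compressed range of \(h(\eta)\) described in Remark 2 when \(s\) is large and \(p_{01}>0\).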
**Remark 3:** Consider the ODS design with no measurement error, that is, \(p_{01}=0,\)\(p_{10}=0\) and \(s>1.\) If we further assume that \(\eta\) has a linear form \(\eta(t,\mathbf{x})=a_{0}+a_{1}T+\mathbf{b^{\mathsf{T}}}\mathbf{x},\) then logistic regression produces consistent estimates of the slope parameters \(a_{1}\) and \(\boldsymbol{b}\). However, when it comes to the ODS design with measurement error, that is, \(p_{01}>0\), \(p_{10}>0\) and \(s>1\), all the logistic regression estimators will be biased. Details can be found in Czado and Santner (1992). ### A generalized linear model based estimator In this subsection, we assume that the true model for \(\eta(t,\boldsymbol{x})\) has a linear form: \(\eta(t,\boldsymbol{x})=a_{0}+a_{1}T+\boldsymbol{b}^{\mathsf{T}}\boldsymbol{x}\). Denote \(\boldsymbol{\beta}^{\mathsf{T}}=(a_{0},a_{1},\boldsymbol{b}^{\mathsf{T}})\). The estimation of \(\boldsymbol{\beta}\) can be performed by maximizing the log-likelihood function of the GLM with the adjusted link function \(h(\cdot)\). The estimate of \(\tau\) can then be obtained by following the subsequent steps. **Step 0**: _Determine the values of \(p_{10},p_{01},v\)._ **Step 1**: _Estimate the sampling ratio \(s\) and the adjusted link function \(h\)._ Motivated by (3), we obtain an estimator of \(s\) by solving the estimating equation \[S_{n}(s):=\frac{1}{n}\sum_{i=1}^{n}\{sv^{*}(1-y_{i}^{*})-(1-v^{*})y_{i}^{*}\}=0. \tag{10}\] Denote the resulting estimator by \(\hat{s}=\frac{\sum_{i=1}^{n}y_{i}^{*}(1-v^{*})}{\sum_{i=1}^{n}(1-y_{i}^{*})v^{*}}\). Plugging \(\hat{s}\) into (8), we obtain an estimator of the link function \(h(\cdot)\), denoted as \(\hat{h}(\cdot)\). **Step 2**: _Estimate \(\boldsymbol{\beta}\)._ We can estimate \(\boldsymbol{\beta}\) by solving the following score equations: \[G_{n}(\boldsymbol{\beta}):=\frac{1}{n}\frac{\partial\ell_{n}\left(\boldsymbol{\beta}\right)}{\partial\boldsymbol{\beta}}=0, \tag{11}\] where \(\ell_{n}(\boldsymbol{\beta})\) is defined in (9), with \(h(\cdot)\) being replaced by \(\hat{h}(\cdot)\). Denote the resulting estimator by \(\hat{\boldsymbol{\beta}}^{\mathsf{T}}=(\hat{a}_{0},\hat{a}_{1},\hat{\boldsymbol{b}}^{\mathsf{T}})\). Denote \(\hat{g}_{1}(x)=\text{expit}(\hat{a}_{0}+\hat{a}_{1}+\hat{\boldsymbol{b}}^{\mathsf{T}}x)\) and \(\hat{g}_{0}(x)=\text{expit}(\hat{a}_{0}+\hat{\boldsymbol{b}}^{\mathsf{T}}x)\). **Step 3**: _Estimate \(\boldsymbol{u}=(u_{11},u_{10},u_{01},u_{00})^{\mathsf{T}}\)._ Inspired by (6), we can get the estimators of \(u_{ij}\) by \[\hat{u}_{11}=\frac{1}{\sum_{i=1}^{n}y_{i}^{*}}\sum_{i=1}^{n}y_{i}^{*}\hat{g}_{1}(x_{i}),\ \ \hat{u}_{10}=\frac{1}{\sum_{i=1}^{n}(1-y_{i}^{*})}\sum_{i=1}^{n}(1-y_{i}^{*})\hat{g}_{1}(x_{i}),\] \[\hat{u}_{01}=\frac{1}{\sum_{i=1}^{n}y_{i}^{*}}\sum_{i=1}^{n}y_{i}^{*}\hat{g}_{0}(x_{i}),\ \ \hat{u}_{00}=\frac{1}{\sum_{i=1}^{n}(1-y_{i}^{*})}\sum_{i=1}^{n}(1-y_{i}^{*})\hat{g}_{0}(x_{i}).\] **Step 4**: _Estimate the average treatment effect \(\tau\)._ According to (5), we can estimate \(\tau\) by \[\hat{\tau}=\left[v^{*}\hat{u}_{11}+(1-v^{*})\hat{u}_{10}\right]-\left[v^{*}\hat{u}_{01}+(1-v^{*})\hat{u}_{00}\right],\] where \(v^{*}\) is calculated according to (4). 
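A minimal end-to-end sketch of Steps 0-4 is given below. It is our own illustration rather than the authors' implementation: it maximizes the adjusted-link log-likelihood (9) with a generic optimizer instead of solving the score equations (11) directly, and the arrays `y_star`, `t`, `x` are assumed to be NumPy arrays holding the case-control sample.

```python
import numpy as np
from scipy.optimize import minimize

def h(eta, p01, p10, s):
    # Adjusted link, Equation (8)
    e = np.exp(eta)
    return (p01 + e * (1 - p10)) * s / (1 + p01 * (s - 1) + e * (1 + (1 - p10) * (s - 1)))

def glm_ee_ate(y_star, t, x, v, p01, p10):
    """Illustrative sketch of Steps 0-4 of the GLM-EE procedure."""
    n = len(y_star)
    # Steps 0-1: v* from (4), sampling ratio s from (10), plug both into the link
    v_star = (1 - p10 - p01) * v + p01
    s = (y_star.sum() * (1 - v_star)) / ((n - y_star.sum()) * v_star)
    Z = np.column_stack([np.ones(n), t, x])          # design matrix for (a0, a1, b)
    # Step 2: maximize the log-likelihood (9) with the adjusted link
    def negloglik(beta):
        mu = np.clip(h(Z @ beta, p01, p10, s), 1e-10, 1 - 1e-10)
        return -np.sum(y_star * np.log(mu) + (1 - y_star) * np.log(1 - mu))
    beta = minimize(negloglik, np.zeros(Z.shape[1]), method="BFGS").x
    a0, a1, b = beta[0], beta[1], beta[2:]
    expit = lambda z: 1 / (1 + np.exp(-z))
    g1, g0 = expit(a0 + a1 + x @ b), expit(a0 + x @ b)
    # Step 3: u_ij as averages of g_i(x) within observed cases / controls
    u11, u01 = g1[y_star == 1].mean(), g0[y_star == 1].mean()
    u10, u00 = g1[y_star == 0].mean(), g0[y_star == 0].mean()
    # Step 4: plug into (5)
    return (v_star * u11 + (1 - v_star) * u10) - (v_star * u01 + (1 - v_star) * u00)
```

In practice one would also carry the estimates through the stacked estimating equations (12) below to obtain standard errors; the sketch only returns the point estimate.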
To establish the asymptotic distribution of the estimated parameters, we write Steps 1-3 in the form of estimating equations: \[\left(\begin{array}{c}S_{n}(s)\\ G_{n}(s,\mathbf{\beta})\\ M_{n}(\mathbf{\beta},\mathbf{u})\end{array}\right)=:\frac{1}{n}\sum_{i=1}^{n}\mathbf{\psi}\left(y_{i}^{*},x_{i},t_{i},\mathbf{\theta}\right)=0, \tag{12}\] where \(\mathbf{\theta}^{\mathsf{T}}=(s,\mathbf{\beta}^{\mathsf{T}},\mathbf{u}^{\mathsf{T}})\), \(\mathbf{u}^{\mathsf{T}}=(u_{11},u_{10},u_{01},u_{00})\) and \[M_{n}(\mathbf{u},\mathbf{\beta}):=\frac{1}{n}\sum_{i=1}^{n}\left(\begin{array}{c}y_{i}^{*}(u_{11}-g_{1}(x_{i},\mathbf{\beta}))\\ (1-y_{i}^{*})(u_{10}-g_{1}(x_{i},\mathbf{\beta}))\\ y_{i}^{*}(u_{01}-g_{0}(x_{i},\mathbf{\beta}))\\ (1-y_{i}^{*})(u_{00}-g_{0}(x_{i},\mathbf{\beta}))\end{array}\right).\] Denoting the resulting estimator by \(\hat{\mathbf{\theta}}=\left(\hat{s},\,\hat{\mathbf{\beta}}^{\mathsf{T}},\hat{\mathbf{u}}^{\mathsf{T}}\right)^{\mathsf{T}}\), we have the following theorem. **Theorem 3.1**: _Let \(\mathbf{\theta}_{0}\) denote the true parameter values for \(\mathbf{\theta}\). If the true index function \(\eta\) has the linear form \(\eta(t,\mathbf{x})=a_{0}+a_{1}t+\mathbf{b}^{\mathsf{T}}\mathbf{x}\), then under some regularity conditions given in Appendix A of Supplementary Materials, we have that \(\hat{\mathbf{\theta}}\) is consistent for \(\mathbf{\theta}_{0}\), and_ \[\sqrt{n}(\hat{\mathbf{\theta}}-\mathbf{\theta}_{0})\stackrel{{d}}{{\rightarrow}}N(0,\mathbf{V}(\mathbf{\theta}_{0})),\] _where the limiting variance of \(\sqrt{n}\hat{\mathbf{\theta}}\) can be written in the sandwich matrix form \(\mathbf{V}(\mathbf{\theta}_{0})=\mathbf{H}(\mathbf{\theta}_{0})^{-1}\mathbf{B}(\mathbf{\theta}_{0})\big\{\mathbf{H}(\mathbf{\theta}_{0})^{-1}\big\}^{\mathsf{T}}\) with \(\mathbf{B}(\mathbf{\theta}_{0})=\mathbb{E}\left\{\mathbf{\psi}(Y^{*},X,T,\mathbf{\theta}_{0})\mathbf{\psi}(Y^{*},X,T,\mathbf{\theta}_{0})^{\mathsf{T}}\big{|}\,S=1\right\}\) and \(\mathbf{H}(\mathbf{\theta}_{0})=\mathbb{E}\{\partial\mathbf{\psi}(Y^{*},X,T,\mathbf{\theta}_{0})/\partial\mathbf{\theta}_{0}^{\mathsf{T}}|S=1\}\)._ Utilizing Theorem 3.1, we can estimate \(\tau\) by \(\hat{\tau}_{\mathrm{GLM}}=\mathbf{q}^{\mathsf{T}}\hat{\mathbf{\theta}}\), where \(\mathbf{q}^{\mathsf{T}}=(\mathbf{0}^{\mathsf{T}},\mathbf{c}^{\mathsf{T}})\) and \(\mathbf{c}^{\mathsf{T}}=(v^{*},1-v^{*},-v^{*},v^{*}-1)\). Subsequently, applying the Slutsky theorem, we obtain the asymptotic normality of \(\hat{\tau}_{\rm GLM}\): \[\sqrt{n}(\hat{\tau}_{\rm GLM}-\tau)\stackrel{{d}}{{\to}}N(0,\boldsymbol{q}^{\sf T}\mathbf{V}(\boldsymbol{\theta}_{0})\boldsymbol{q}).\] The covariance matrix \(\mathbf{V}(\boldsymbol{\theta}_{0})\) can be estimated by \(\hat{\mathbf{V}}(\boldsymbol{\theta}_{0})=\hat{\mathbf{H}}^{-1}\hat{\mathbf{B}}\big\{\hat{\mathbf{H}}^{-1}\big\}^{\sf T}\), where \(\hat{\mathbf{H}}=-\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{\psi}^{\prime}(y_{i}^{*},x_{i},t_{i},\hat{\boldsymbol{\theta}})\) and \(\hat{\mathbf{B}}=\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{\psi}(y_{i}^{*},x_{i},t_{i},\hat{\boldsymbol{\theta}})\boldsymbol{\psi}(y_{i}^{*},x_{i},t_{i},\hat{\boldsymbol{\theta}})^{\sf T}\). This allows us to make statistical inference regarding \(\tau\).
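For readers who want to reproduce the inference step, the sandwich estimator \(\hat{\mathbf{V}}(\boldsymbol{\theta}_{0})=\hat{\mathbf{H}}^{-1}\hat{\mathbf{B}}\{\hat{\mathbf{H}}^{-1}\}^{\mathsf{T}}\) can be assembled numerically once the per-observation function \(\boldsymbol{\psi}\) is available. The sketch below is our own generic illustration; it assumes a user-supplied `psi(theta, obs)` returning the stacked value of (12) for one observation and uses a simple forward-difference Jacobian.

```python
import numpy as np

def sandwich_variance(psi, theta_hat, data, eps=1e-6):
    """Assemble V_hat = H_hat^{-1} B_hat {H_hat^{-1}}^T from per-observation psi values.

    psi(theta, obs) must return the stacked estimating-function value in (12)
    for a single observation obs; data is a list of observations.
    """
    p = len(theta_hat)
    n = len(data)
    B = np.zeros((p, p))
    J = np.zeros((p, p))                      # (1/n) * sum of d psi / d theta^T
    for obs in data:
        psi_i = np.asarray(psi(theta_hat, obs))
        B += np.outer(psi_i, psi_i) / n
        for j in range(p):                    # forward-difference Jacobian, column by column
            step = np.zeros(p)
            step[j] = eps
            J[:, j] += (np.asarray(psi(theta_hat + step, obs)) - psi_i) / eps / n
    H_hat_inv = np.linalg.inv(-J)             # H_hat = -(1/n) * sum psi'
    return H_hat_inv @ B @ H_hat_inv.T

# The estimated variance of tau_hat = q^T theta_hat is then q^T V_hat q / n.
```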
### A generalized additive model based estimator In real-world studies, the linearity assumption of \(\eta(t,\boldsymbol{x})\) is often violated. To enhance the robustness of our method, we employ the generalized additive model (Hastie and Tibshirani, 1987; Marx and Eilers, 1998) to capture the nonlinear characteristics of \(\eta(t,\boldsymbol{x})\). This approach enables us to develop an improved estimator that is resilient to model misspecification. To begin with, we denote by \(\tilde{\eta}(t_{i},\boldsymbol{x}_{i})\) the true index of the \(i\)-th individual and by \(\eta(t_{i},\boldsymbol{x}_{i})\) a working index. We assume \(\tilde{\eta}(t_{i},\boldsymbol{x}_{i})\) has the following additive form: \[\tilde{\eta}(t_{i},\boldsymbol{x}_{i})=at_{i}+\sum_{j=1}^{D}\tilde{\eta}_{j}(x_{ij}), \tag{13}\] where \(x_{ij}\) is the \(j\)-th covariate of the \(i\)-th sample. We also assume that \(\mathbb{E}[\tilde{\eta}_{j}(X_{ij})]=0\) for \(j=1,\ldots,D\) to ensure the identifiability of \(\tilde{\eta}_{j}\). We approximate \(\tilde{\eta}_{j}(x_{ij})\) by the following B-spline model: \[\eta_{j}(x_{ij})=\sum_{k=-p+1}^{K_{n}}B_{k}^{p}(x_{ij})b_{k,j},\ j=1,\ldots,D,\] where \(B_{k}^{p}(x)\), \(k=-p+1,\ldots,K_{n}\), are the \(p\)-th degree B-spline basis functions defined recursively (De Boor, 1978). Here \(K_{n}\) is the number of knots, \(p\) is the degree of the B-spline, and the \(b_{k,j}\)'s are unknown parameters. To simplify notation, we denote \(B_{k}^{p}(x)\) as \(B_{k}(x)\), unless we explicitly state the degree of the B-spline. Our primary focus henceforth will be on the \(p\)-th degree B-spline. We model the index of the \(i\)-th sample as \[\eta(t_{i},\boldsymbol{x}_{i})=at_{i}+\sum_{j=1}^{D}\sum_{k=-p+1}^{K_{n}}B_{k}(x_{ij})b_{k,j}=at_{i}+\sum_{j=1}^{D}\boldsymbol{B}(x_{ij})^{\sf T}\boldsymbol{b}_{j}=\boldsymbol{Z}_{i}\boldsymbol{\beta},\] where \(\boldsymbol{Z}_{i}=\left(\boldsymbol{B}(x_{i1})^{\sf T},\ldots,\boldsymbol{B}(x_{iD})^{\sf T},t_{i}\right)\), \(\boldsymbol{B}(x)=\left(B_{-p+1}(x),\ldots,B_{K_{n}}(x)\right)^{\sf T}\), \(\boldsymbol{\beta}^{\sf T}=(\boldsymbol{b}^{\sf T},a)\), 
We then utilize the ridge-corrected penalized log-likelihood function proposed by Marx and Eilers (1998) and Yoshida and Naito (2014): \[\ell_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)=\ell_{n} \left(\mathbf{\beta}\right)-\sum_{j=1}^{D}\frac{\lambda_{jn}}{2n}\mathbf{b}_{j}^{ \mathsf{T}}\Delta_{m}^{\mathsf{T}}\Delta_{m}\mathbf{b}_{j}-\frac{\gamma_{n}}{2n} \sum_{j=1}^{D}\mathbf{b}_{j}^{\mathsf{T}}\mathbf{b}_{j}, \tag{14}\] where \(\lambda_{n}=\{\lambda_{jn}\}_{j=1}^{D}\) and \(\gamma_{n}\) are tuning parameters. \(\Delta_{m}\) is the \(m\)-th order difference matrix (Dierck, 1995). The spline parameters are subject to penalization, where the first penalty term is a usual trick in the penalized spline estimators to prevent the estimate from wriggling when the spline dimension \(D(K_{n}+p)\) is large, and the second penalty term aims to ensure the nonsingularity of the Hessian matrix of \(\ell_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)\). The score functions are obtained with \[G_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)=\frac{\partial \ell_{\mathrm{rp},n}\left(\mathbf{\beta},\lambda_{n},\gamma_{n}\right)}{\partial \mathbf{\beta}}. \tag{15}\] Similar to the operations in Section 3.1, we can obtain an estimator of \(\tau\) by applying Steps 0-4 with \(G_{n}\) in Step 2 being replaced by \(G_{\mathrm{rp},n}\). Also, we write Steps 1-3 in a form of estimating equations: \[\left(\begin{array}{c}S_{n}(s)\\ G_{\mathrm{rp},n}(\mathbf{\beta},s,\lambda_{n},\gamma_{n})\\ M_{n}(\mathbf{u},\mathbf{\beta})\end{array}\right)=:\frac{1}{n}\sum_{i=1}^{n}\mathbf{\psi }\left(y_{i}^{*},x_{i},t_{i},\mathbf{\theta},\lambda_{n},\gamma_{n}\right)=0. \tag{16}\] Denote the resulting estimators as \(\hat{\mathbf{\theta}}=\left(\hat{s},\,\hat{\mathbf{\beta}}^{\mathsf{T}},\hat{\mathbf{u}}^ {\mathsf{T}}\right)^{\mathsf{T}}\). Note that, unlike (12), the dimension of \(\mathbf{\psi}\) in (16) is not fixed, which increases with the sample size \(n\). Similarly, The ATE \(\tau\) can be estimated by \(\hat{\tau}_{\mathrm{GAM}}=\mathbf{q}^{\mathsf{T}}\hat{\mathbf{\theta}}\). Before stating asymptotic properties of \(\hat{\tau}_{\mathrm{GAM}}\), we introduce some notations. Let \((s_{0},\mathbf{\beta_{0}^{\mathsf{T}}},\mathbf{u}_{0}^{\mathsf{T}})^{\mathsf{T}}\), where \(s_{0}\) is the solution of \(\mathbb{E}\left[S_{n}(s)|S=1\right]=0\), \(\mathbf{\beta}_{0}=\underset{\mathbf{\beta}}{\mathrm{argmin}}\mathbb{E}\left[\log\frac{ f(Y,\mathbf{X},\tilde{\eta})}{f(Y,\mathbf{X},\mathbf{\beta})}|S=1\right]\) is the best spline approximation of the true index function \(\tilde{\eta}(t,\mathbf{x})\) based on Kullback-Leibler measure and \(\mathbf{u}_{0}\) is the solution of \(\mathbb{E}[M_{n}(\mathbf{u},\mathbf{\beta}_{0})|S=1]=0\). Let \(\tau\) denote the true value of ATE. We have the following asymptotic results for \(\tau_{\mathrm{GAM}}\). 
**Theorem 3.2**: _If the true index function \(\tilde{\eta}\) obeys the additive form as (13), then under some regularity conditions in Appendix A of Supplementary Materials, we have that \(\hat{\tau}_{\mathrm{GAM}}\) is consistent for \(\tau\), and \(\hat{\tau}_{\mathrm{GAM}}-\tau\) is asymptotically normal with asymptotic mean \(\mathbf{Bias}(\hat{\tau}_{\mathrm{GAM}})\) (refer to (A.14) in Appendix A for an explicit expression), and asymptotic covariance \(\mathbf{V}(\hat{\tau}_{\mathrm{GAM}})=\frac{1}{n}\mathbf{q}^{\mathsf{T}}\tilde{ \mathbf{H}}(\lambda_{n})^{-1}\tilde{\mathbf{B}}\big{\{}\tilde{\mathbf{H}}( \lambda_{n})^{-1}\big{\}}^{\mathsf{T}}\mathbf{q}\), where_ \[\tilde{\mathbf{H}}(\lambda_{n})=\mathbb{E}\left\{\tilde{\mathbf{\psi}}^{\prime}(Y ^{*},X,T,\mathbf{\theta}_{0},\lambda_{n},\gamma_{n}=0)|S=1\right\},\] \[\tilde{\mathbf{B}}=\mathbb{E}\left\{\tilde{\mathbf{\psi}}(Y^{*},X,T,\mathbf{\theta}_{ 0},\lambda_{n}=0,\gamma_{n}=0)\tilde{\mathbf{\psi}}(Y^{*},X,T,\mathbf{\theta}_{0}, \lambda_{n}=0,\gamma_{n}=0)^{\mathsf{T}}|S=1\right\}.\] _Refer to (A.5) and (A.6) in Appendix A for explicit expressions of \(\tilde{\psi}\) and \(\tilde{\psi}^{\prime}\), respectively. Furthermore, \(\mathbf{Bias}(\hat{\tau}_{\mathrm{GAM}})=O(n^{-(p+1)/(2p+3)})\) and \(\mathbf{V}(\hat{\tau}_{\mathrm{GAM}})=O(n^{-2(p+1)/(2p+3)})\)._ **Remark 1:** Theorem 3.2 demonstrates that \(\hat{\tau}_{\mathrm{GAM}}\) is \(n^{-(p+1)/(2p+3)}\)-consistent and asymptotic normal, and the asymptotic order of \(\hat{\tau}_{\mathrm{GAM}}\)'s mean squared error is \(O(n^{-2(p+1)/(2p+3)})\). These results coincide with those of Yoshida and Naito (2014). **Remark 2:** If the true index follows a linear form, then \(\hat{\tau}_{\mathrm{GLM}}\) proposed in Section 3.1 is \(n^{-1/2}\)-consistent. However, \(\hat{\tau}_{\mathrm{GLM}}\) is generally sensitive to the linear assumption. On the other hand, \(\hat{\tau}_{\mathrm{GAM}}\) has a much wider applicability, subject to a lower efficiency in terms of convergence rate. **Remark 3:** In large sample cases, \(\mathbf{V}(\hat{\tau}_{\mathrm{GAM}})\) can be consistently approximated by \(\hat{\mathbf{V}}(\hat{\tau}_{\mathrm{GAM}})=\frac{1}{n}\mathbf{q}^{\mathsf{T}} \hat{\mathbf{\hat{H}}}^{-1}\hat{\mathbf{\hat{B}}}\big{\{}\hat{\mathbf{\hat{H}}} ^{-1}\big{\}}^{\mathsf{T}}\mathbf{q}\), with \(\hat{\mathbf{\hat{H}}}=-\frac{1}{n}\sum_{i=1}^{n}\mathbf{\psi}^{\prime}(y_{i}^{*},x_ {i},t_{i},\hat{\mathbf{\theta}},\lambda_{n},\gamma_{n}=0)\) and \(\hat{\mathbf{\hat{B}}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{\psi}(y_{i}^{*},x_{i},t_{i},\hat{\mathbf{\theta}},\lambda_{n}=0,\gamma_{n}=0)\mathbf{\psi}(y_{i}^{*},x_{i},t_{i}, \hat{\mathbf{\theta}},\lambda_{n}=0,\gamma_{n}=0)^{\mathsf{T}}\). Statistical inference of \(\tau\) can be made according to the asymptotic normality of \(\hat{\tau}_{\mathrm{GAM}}\). ## 4 Simulation studies In this section, we evaluate the finite-sample performance of our proposed GLM-EE and GAM-EE methods through simulations. ### Data generation The data generation process consists of two steps: data pool creation and case-control sample selection. We start by creating a data pool representing the target population, where each patient's true disease status and diagnosis status are generated. We independently sample two continuous covariates, \(X_{1}\) and \(X_{2}\), from a standard normal and uniform distribution, respectively. Another discrete covariate, \(U\), is sampled from a Bernoulli distribution with \(\mathbb{P}(U=1)=0.5\). 
The treatment indicator \(T\) is sampled from a Bernoulli distribution with \(\mathbb{P}(T=1|X_{1},X_{2},U)=\text{expit}(1+0.1X_{1}-0.1X_{2}-0.5U)\). To demonstrate the utility of our methods in both linear and nonlinear settings, we consider the following outcome models: \[\textbf{M1:}\quad\tilde{\eta}=a_{0}-2T-U-0.5X_{1}+X_{2};\] \[\textbf{M2:}\quad\tilde{\eta}=a_{0}-2T-U-\sin(3\pi X_{1})+(3(X_{2}-0.5))^{3};\] \[\textbf{M3:}\quad\tilde{\eta}=a_{0}-2T-U-\exp(2X_{1})-\sin(3\pi X_{2})X_{2};\] \[\textbf{M4:}\quad\tilde{\eta}=a_{0}-2T-U-\exp(2X_{1})+(3(X_{2}-0.5))^{3}+X_{1}X_{2}.\] **M1** is a typical linear model. **M2-M3** are nonlinear but still follow the additive form in (13). **M4** violates both the linear and the additive assumptions. In all the models, the intercept term \(a_{0}\) is set based on a predetermined disease prevalence \(v\). The true outcome \(Y\) is sampled from the Bernoulli distribution with success probability \(\text{expit}(\tilde{\eta})\). We then generate the observed outcome \(Y^{*}\) based on \(Y\) with conditional probabilities \(p_{10}\) and \(p_{01}\). Thus we construct a data pool to simulate a population of size 1,000,000. We then randomly sample \(n/2\) cases and \(n/2\) controls from the data pool based on the observed outcomes \(Y^{*}\), with only \(Y^{*},T,U,X_{1},X_{2}\) kept in the subsequent analysis. We fix \(p_{01}=0\) as it is usually very small in practice. We consider various combinations of \(v\), \(p_{10}\), and \(n\). That is, \(v=0.001,\ 0.01,\ \text{and}\ 0.1\), \(p_{10}=0,\ 0.2,\ \text{and}\ 0.4\), and \(n=500\) and \(2000\). For each combination of \(v\) and outcome model, the true \(\tau\) is calculated through Monte Carlo integration. When applying the GAM-EE method, the number of knots for \(X_{1}\) and \(X_{2}\) is set to be \(K_{n}=10\), the value of \(\lambda_{n}\) is selected based on the Bayesian information criterion as in Marx and Eilers (1998), and \(\gamma_{n}\) is set to be 0.1. For each combination of \(v\), \(p_{10}\), \(p_{01}\), \(n\), and outcome model, we repeat the simulation 500 times. ### Debiasing capacity of GLM-EE and GAM-EE We first evaluate the debiasing capability of our proposed methods in model **M1**, where the true index has a linear form. Along with the standard GLM-EE and GAM-EE methods, we also consider three naive estimators based on the GLM-EE method and three naive estimators based on the GAM-EE method. These naive estimators are obtained by applying the GLM-EE and GAM-EE methods but intentionally ignoring the measurement and/or selection information (i.e., manually fixing \(p_{10}=0\) and/or \(s=1\) when applying the methods). The EE methods ignoring the measurement information, the selection information, and both are denoted as "naive 3", "naive 2" and "naive 1", respectively. We also consider the IPTW method as a comparison. Figure 2 depicts the box plots of the ATE estimators. The boxes of the standard GLM-EE and GAM-EE estimators cover the true \(\tau\) (represented by the red line), and the empirical means closely align with the true \(\tau\). Conversely, the naive and IPTW estimators exhibit obvious bias, as their boxes fail to cover the true \(\tau\). The biases are particularly large for IPTW, naive 1, and naive 2, and are much bigger than that of naive 3. This observation suggests that the biases are primarily due to sampling bias instead of measurement error. 
To further compare the performance of standard GAM-EE and GLM-EE methods, we present the relative biases, root mean squared errors, coverage probabilities of \(\hat{\tau}_{\text{GLM}}\) and \(\hat{\tau}_{\text{GAM}}\) in Table 1. Both \(\hat{\tau}_{\text{GLM}}\) and \(\hat{\tau}_{\text{GAM}}\) produce fairly small empirical biases and reasonable coverage probabilities close to the nominal level of 95%. Furthermore, \(\hat{\tau}_{\text{GAM}}\) has slightly bigger RMSEs than \(\hat{\tau}_{\text{GLM}}\) since GAM-EE requires to estimate more parameters than GLM-EE. ### Robustness of GLM-EE and GAM-EE To evaluate the robustness of our methods, we conduct a detailed comparison between the standard GLM-EE method (\(\hat{\tau}_{\text{GLM}}\)) and the standard GAM-EE method (\(\hat{\tau}_{\text{GAM}}\)) in different nonlinear model settings. Tables 2 summarize the results of model **M2** and **M3**, where the GLM-EE method suffers from the problem of model misspecification. In the simulation situations, \(\hat{\tau}_{\text{GLM}}\) produces systematic biases, depending on the prevalence \(v\) (the lower the prevalence, the larger the bias). On the other hand, \(\hat{\tau}_{\text{GAM}}\) produces consistently smaller empirical biases and RMSEs than \(\hat{\tau}_{\text{GLM}}\), especially in model **M3**. The coverage probabilities of \(\hat{\tau}_{\text{GAM}}\) are also close to the nominal level of 95%. This demonstrates the high performance of the GAM-EE method in nonlinear but additive settings. Table 3 shows the results for model **M4**, where both GLM-EE and GAM-EE methods suffer from model misspecification problems, but \(\hat{\tau}_{\text{GAM}}\) has smaller biases and RMSEs in general. Overall, The GAM-EE method outperforms the GLM-EE method in nonlinear settings and loses little statistical efficiency compared with the GLM-EE method in linear settings. The above results support the theoretical results established in Section 3. For scenarios where \(p_{01}>0\), as discussed in Section 3, the estimate of \(\tau\) is not stable if \(v\) is rather small. We increase the sample sizes to \(n=3000\) and only consider four combinations of \(p_{01}\) and \(v\) (i.e, \(p_{01}=0.03,\,0.06\), and \(v=0.05,\,0.1\)). Tables 4 in Appendix B of Supplementary Materials summarize the corresponding results, showing the same behaviors as scenarios with \(p_{01}=0\). ## 5 Real data analysis In this section, we apply the GAM-EE and GLM-EE methods to a real-world example. We aim to analyze the effect of alcohol intake on the risk of developing gout. We use data from the UK BioBank database, a large-scale prospective cohort study including 502,543 volunteer participants aged 37 to 73 years from UK between 2007 and 2010. We collected information on the treatment (alcohol intake), the observed outcome (gout diagnosis status), and covariates including education level, ethnicity, diet score (summarized score of diet habits), BMI, physical exercise, TDI (Townsend deprivation index), age, and household income. After eliminating the missing data and limiting our sample to only males, we obtained a target population of 136,741 subjects (refer to Table 6 in Appendix B of Supplementary Materials for detailed information). Within this population, 3.85% subjects are diagnosed with gout (\(v^{*}=3.85\%\)), but the true disease prevalence \(v\) is unknown. 
However, if we know the values of the false positive rate \(p_{01}\) and the false negative rate \(p_{10}\), we can calculate the true disease prevalence by \(v=(v^{*}-p_{01})/(1-p_{10}-p_{01})\) according to (4). We apply our proposed GLM-EE and GAM-EE methods to the dataset. When applying the GAM-EE method, the number of knots \(K_{n}\) is fixed to be 5. The value of \(\lambda_{n}\) is selected based on the Bayesian information criterion from a candidate sequence ranging from 1 to 20, and \(\gamma_{n}\) is set to be 0.1. First, we extend the discussion in Section 4 by evaluating the validity of our proposed EE methods using the full dataset. The corresponding results can be regarded as benchmarks. It is important to mention that while the full dataset does not suffer from the sampling bias problem, it still suffers from the measurement error problem. We draw a case-control subsample from the full dataset based on the diagnosed status, with 2,500 cases and 2,500 controls. The subsample suffers from both selection and mismeasurement biases. We then apply the GAM-EE and GLM-EE methods to the subsample. This process is repeated 500 times. Figure 5 in Appendix B of Supplementary Materials depicts the box plots of our estimators. The results given by the GAM-EE method are fairly close to the corresponding benchmarks, with differences ranging from \(1\times 10^{-7}\) to \(1\times 10^{-2}\). On the other hand, the results given by the GLM-EE method deviate from the corresponding benchmarks, with differences ranging from \(1\times 10^{-6}\) to \(2\times 10^{-2}\). The standard errors of the two methods are close to each other. These results indicate that the GAM-EE method is more robust than the GLM-EE method in this example. Second, we demonstrate the practical utility of our methods in real-world research by conducting a sensitivity analysis. This time we only have a case-control subsample and the true disease status is unobserved. Based on the literature (Vazquez-Mellado et al., 2012; Kiefer et al., 2016) and expert experience, we determine plausible ranges for the disease prevalence \(v\in(0.030,0.045)\), the false negative rate \(p_{10}\in(10\%,30\%)\), and the false positive rate \(p_{01}\in(0\%,6\%)\). We select several breakpoints within these ranges, apply our methods, and summarize the results in Figure 3. Evidently, within the plausible ranges of \(v\), \(p_{10}\), and \(p_{01}\), the estimated \(\tau\) is significantly greater than 0 in terms of the 95% confidence intervals. The median ATE ranges from 0.01 to 0.04, depending on the specification of \(v\), \(p_{01}\) and \(p_{10}\). Therefore, we conclude that alcohol intake has a significant positive ATE on the risk of developing gout. ## 6 Discussion This paper presents novel methods for addressing an analytical challenge that arises when conducting causal inference in the context of an outcome-dependent sampling (ODS) design with measurement error. In such scenarios, ordinary ATE estimators are susceptible to selection and mismeasurement biases. Our proposed GLM-EE and GAM-EE methods leverage additional information from the target population on the disease prevalence and mismeasurement rates to address these biases. In our simulation studies, the effectiveness of our methods for eliminating the influence of ODS and mismeasurement appears to be robust to the specification of the outcome models. We also provide practical guidance for conducting sensitivity analysis in real-world ODS studies. 
We apply our methods to the UK Biobank dataset to estimate ATE of alcohol intake on gout. Our methods demonstrated promising performance in this application. Our methods focus on estimating ATE, although they can readily be extended to estimate other causal effect measures of interest, such as the causal risk ratio and the causal odds ratio. Furthermore, although we consider a scenario with two treatment options, our methods can be generalized to multiple treatment arms. As discussed in Section 3, the adjusted link function may be quite flat in cases where \(v\) is small and \(p_{01}>0\). This can lead to instability in solving the estimating equations, necessitating a large sample size to ensure convergence. However, The increase in sample size will highly increase the computation time, especially when the dimension of covariates or the size of the knots is big. As a result, when \(p_{01}>0\), we only consider scenarios where the disease prevalence \(v\) is not small. This limitation may restrict the generalizability of our methods and we leave this interesting topic as a future work to explore. Overall, our proposed methods provide a valuable tool for addressing the analytical challenges associated with causal inference in the presence of ODS and measurement error. Our methods offer a practical and effective means in obtaining unbiased estimates of ATE, even when the outcome model is not linear. The work of HZ is partly supported by the National Natural Science Foundation of China (7209121, 12171451). This research has been conducted using the UK Biobank resource (application number 96744), subject to a data transfer agreement. The data that support the findings in this paper can be obtained from the UK Biobank ([http://www.ukbiobank.ac.uk](http://www.ukbiobank.ac.uk)). ## Supplementary Materials Appendix A (referenced in Section 2-3) and Appendix B (referenced in Sections 3-5) are available.
2309.06218
An exploration of the phonon frequency spectrum and Born-von Karman periodic boundary conditions in 1D and 2D Lattice systems using a computational approach
The concept of periodic boundary conditions (PBCs) is immensely significant in treating an ideal lattice of infinite extent as a finite lattice. An explicit usage of PBCs is often found missing in undergraduate texts on analytical treatment of lattice dynamics. The aim of the present work is to cover this gap by illustrating the application of Born-von Karman PBCs in lattice dynamical calculations using a computational approach. The equations of motion are set up for a linear diatomic lattice with a basis, using the nearest neighbour approximation. The solution is obtained by implementing fourth order Runge-Kutta algorithm in python. Fast Fourier Transform (FFT) technique is then used to obtain the phonon frequency spectrum corresponding to the computed solutions. Similar computations are extended to obtain the phonon spectrum for monatomic square and honeycomb lattices under the second nearest neighbour approximation. The computed results are validated against the analytical ones in each case. The target group of the present work are the students and educators at the undergraduate level.
Jeet Shannigrahi, Pragati Ashdhir
2023-09-12T13:36:18Z
http://arxiv.org/abs/2309.06218v1
An exploration of the phonon frequency spectrum and Born-von Karman periodic boundary conditions in 1D and 2D Lattice systems using a computational approach. ###### Abstract Periodic boundary conditions (PBCs) are a pivotal concept in the treatment of ideal lattices of infinite extent as a finite lattice. Most undergraduate texts that delve into the analytical treatment of lattice dynamics do not explicitly incorporate the usage of PBCs. Moreover, most textbooks and existing literature predominantly solve the system in the frequency domain. The aim of the present work is to bridge this gap by demonstrating the application of Born von Karman PBCs in constructing a unit cell that effectively captures the dynamics of the entire lattice. The lattice dynamical equations are solved in the displacement-time domain using numerical methods. The Fast Fourier Transform technique is then used to obtain the phonon frequency spectrum corresponding to the computed instantaneous displacements. The approach explores the concept by first constructing the equations of atomic motion for linear lattices with and without a basis in the nearest neighbor approximation. Subsequently, such calculations are extended to obtain the phonon spectrum for two dimensional lattices such as the square lattice and the honeycomb lattice using the next-nearest neighbor approximations. A short range force constant model employing the central and angular forces is used for the monatomic square lattice. The dynamics of monatomic honeycomb lattice is investigated using the central forces to model the interatomic interactions. The computed results are validated against the analytical results of the given problem. Our work serves to showcase a novel method for understanding the implementation of PBCs in constructing a unit cell for a given lattice system and capturing its phonon dispersion spectrum. The approach is expected to be physically more intuitive for a student to understand the periodicity of a given lattice and its related dynamics. The target group of the present work comprise undergraduate students and educators who seek a thorough and didactic comprehension of lattice dynamics. + Footnote †: preprint: APS/123-QED ## I Introduction: The study of atomic vibrations in solids is of paramount importance in condensed matter physics. It provides useful insight for understanding the various physical properties of solids such as thermal conductivity [1], elasticity, thermal expansion coefficients and specific heat capacity [2; 3]. The arbitrary vibrational motion of the lattice can be considered a superposition of normal modes, such that each mode corresponds to the atoms oscillating uniformly at a specific frequency. The quanta of these lattice vibrations are known as phonons. The phonons are bosonic quasi-particles representing the collective excitations in crystalline solids. In analogy with the photons as quantized light waves, the phonons are quantized elastic waves propagating down a lattice [4]. In the existing vast wealth of literature and the reputed undergraduate level texts on solid state physics [2; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], the problem of lattice dynamics is traditionally solved in the reciprocal space rather than in the direct space. Typically, the analytical treatment comprises of deriving the dispersion relations for the phonon spectrum by making use of dynamical matrices [11]. 
These matrices contain all information about the dynamical behaviour of the crystal based on specific models of the interatomic interactions. The method intrinsically assumes the propagation of plane progressive waves [14] across a bounded lattice and consequently the existence of normal modes of atomic vibrations. This leads to the formulation of a secular determinant or characteristic equation yielding the normal mode frequencies of the given crystal system. Further, it is assumed that the end-effects or surface-effects have no bearing on the bulk properties of a crystalline solid [15]. This calls for the implementation of boundary conditions that allow us to treat an infinite lattice as a finite lattice. A convenient choice of boundary conditions for the running wave representation of lattice vibrational modes is the periodic boundary conditions proposed by Born and von Karman (BvK) [16]. An explicit implementation of BvK PBCs in lattice dynamical calculations is often found missing in textbooks due to the cumbersome underlying mathematics. In this work, we propose a novel approach by numerically solving the lattice dynamical problem in direct space, which invites new opportunities for pedagogical discourse by explicitly emphasizing the implementation of periodic boundary conditions. It is expected to provide educators with a better tool to demonstrate the effect such conditions have on lattice structures. Our method allows the solution of the lattice dynamical equations in the physically more intuitive direct lattice-time domain vis-a-vis the reciprocal lattice-frequency domain. The dynamical equations for 1D and 2D lattice systems subjected to PBCs are solved using the classic fourth order Runge-Kutta method, as laid out in Kutta [1901][17]. The phonon frequencies of the given lattice systems are obtained using the computational technique of the Fast Fourier Transform (FFT). The computational approach, as detailed in the subsequent sections, attempts to present an effective pedagogical technique for enhancing classroom discussions on lattice dynamics and the phonon spectrum. The Python language is used as the computational platform for this work. In the following two subsections (I.1 and I.2), we give a brief description of the requisites for understanding the methodology used in our approach to lattice dynamics. ### Model Formulation Lattice dynamics is the study of atomic vibrations in a crystal. A typical theory of lattice dynamics is based on the assumption that the equilibrium position of each constituent atom is a lattice site, about which it executes small oscillations. By small oscillations, it is implied that the displacement of the atoms from their equilibrium positions is small compared with the interatomic spacing, so that the interatomic forces obey Hooke's law. This is equivalently the harmonic approximation, under which only second order terms in the atomic displacements are retained in the power series expansion of the potential corresponding to the interatomic interactions. The atoms are hence modeled as being linked to each other through elastic springs [18], as shown in Fig. 2. A crystal lattice is a periodic arrangement of atoms or groups of atoms in space. It may be generated by the spatial repetition of a unit cell containing some definite number of atoms. The purpose of the periodic boundary conditions is to allow us to construct a unit cell which simulates the behaviour of the entire lattice. 
An ideal lattice is considered to be effectively infinite in order to ignore the surface effects [15]. The application of the Born von Karman boundary conditions allows us to elegantly condense an ideal lattice into a finite unit cell which exhibits the same phonon dispersion and other related properties as the actual lattice of infinite extent [16]. The implementation of the Born von Karman conditions for an infinitely long linear diatomic lattice is illustrated in Fig. 1. The lattice is condensed into a unit cell of four atoms, two of each kind, connected to each other by identical springs. The two dissimilar atoms at the two ends of the unit cell chain are assumed to be joined together so as to complete the loop. This way each atom in the unit cell has the atoms of the other kind as its immediate neighbors. Figure 1: Implementation of Born von Karman periodic boundary conditions: An infinitely long linear diatomic lattice is condensed into an equivalent unit cell comprising four atoms, with two of each kind. Figure 2: Illustration of harmonic approximations for a cubic lattice: Atoms are connected to each other with elastic springs that obey Hooke's law. In the present work we have employed the Short Range Force Constant Model that takes into account only the short range interactions between the constituent atoms of a crystal. We have used both the central and angular types of short range forces for coupling the atoms, depending on the necessity borne by the complexity of the lattice structure. A detailed description of central and angular forces is given in Section V. The interatomic forces for only the nearest neighbors and next-nearest neighbors are taken into account, as the higher order neighboring atoms can be considered essentially screened in many practical applications [19]. Similarly, even though the classical harmonic approximation is no longer valid at extremes of temperature [11], it is a reasonable approximation for the purposes of this work. The coupled differential equations of motion so set up for each atom in the condensed unit cell of a given lattice system are solved using the fourth order Runge-Kutta (RK-4) algorithm [17]. ### Fast Fourier Transform Technique The Fast Fourier Transform (FFT) is a popular computational technique that converts a time domain signal into individual spectral components and thereby provides frequency information about the signal. The FFT is essentially an optimized algorithm for the implementation of the Discrete Fourier Transform (DFT). Our model is a novel illustration of the Fast Fourier Transform (FFT) algorithm in the physical sciences, diverging from its more common usage in signal processing. While a detailed overview of the method is beyond the scope of this work, interested readers can refer to the work by _Ashdhir et al_ for a more comprehensive explanation [20]. We do, however, provide a concise description of the method as follows. 
In the application of FFT to lattice vibrations, the computed instantaneous displacements of atoms serve as the displacement-time domain signals which are processed by the algorithm to yield the phonon frequency spectrum. In our case, a time-step of \(2^{-16}\) s or \(2^{-15}\) s is used to compute the time domain solutions of atomic displacements and velocities over a total time duration in the range \(90-150\) s for the lattice systems considered. The particular choice of time-step and time-duration is arrived at by performing multiple runs of the code using different combinations of values for the two parameters to get accurate FFT results. The reciprocal of time step equal to \(2^{16}\) s\({}^{-1}\) or \(2^{15}\) s\({}^{-1}\) is the sampling rate and the time duration of \(90-150\) s is the sampling period for the given FFT computation. The FFT spectrum captures the normal mode frequencies at the high symmetry points within the first Brillouin zone (FBZ). The FBZ is a primitive unit cell in the reciprocal lattice space (also referred to as \(k\)-space) corresponding to the given real space lattice system. The periodicity of the lattice mandates that to obtain a unique relationship between the state of vibration of the lattice and the wave vector \(\vec{k}\), we need to confine the latter to within the FBZ. This justifies our approach on the analysis of the phonon spectrum within the first Brillouin zone. ## II Monatomic linear chain The monatomic linear chain is the simplest model for understanding the introductory lattice dynamics of a harmonic crystal. It consists of a one dimensional array of identical atoms connected by elastic bonds represented by springs each of force constant, say, \(\alpha\). The equilibrium separation between two adjacent atoms is the lattice constant \(a\). Figure 3 shows a segment of the infinitely long linear chain. When a longitudinal wave propagates through the monatomic linear chain, the constituent atoms get displaced from their respective equilibrium positions. Referring to the diagram, the equation of motion of the \(\dot{t}^{th}\) atom using the nearest neighbor approximation can be written as \[M\,\frac{d^{2}x_{i}}{dt^{2}}=-\alpha\,\left(x_{i}-x_{i+1}\right)-\alpha\,\left( x_{i}-x_{i-1}\right), \tag{1}\] where \(x_{i}\) is the instantaneous displacement of the \(\dot{t}^{th}\) atom from its equilibrium position and \(M\) is the atomic mass. The first term on the RHS of the equation is the restoring force on the \(\dot{t}^{th}\) atom due to the \((i+1)^{th}\) atom and the second term is the restoring force on the \(\dot{t}^{th}\) atom due to the \((i-1)^{th}\) atom. In the traditional treatment, a plane wave solution of the form \[x_{i}=\zeta_{i}\,\,e^{(\vec{k}.\vec{r}-\omega t)} \tag{2}\] is assumed for Eq.(1). Substituting Eq.(2) in Eq.(1), we get the dispersion relation for the linear monatomic lattice as \[\omega=2\,\sqrt{\frac{\alpha}{M}}\,\,sin\big{(}\frac{ka}{2}\big{)} \tag{3}\] As has been well documented in literature, [2; 6; 8; 9; 11] the monatomic linear lattice acts as a _low-pass mechanical filter_ such that the maximum phonon frequency that is allowed to pass through the lattice is equal to \(2\,\sqrt{(\alpha/M)}\) at the Brillouin zone boundary \(k=\frac{\pi}{a}\) where \(\alpha\) is the central force constant and \(a\) is the lattice constant. Higher frequencies will be strongly attenuated in the lattice. Figure 4 depicts the dispersion relation for a monatomic linear chain. 
It can be seen that the dispersion curve for a linear monatomic lattice has only the acoustical branch extending from \(\omega=0\) at the zone centre (\(k=0\)) to \(\omega=2\,\sqrt{(\alpha/M)}\) at the zone boundary (\(k=\frac{\pi}{a}\)). Figure 5 shows our computational model. By applying the BvK periodic boundary conditions consistent with nearest neighbor interactions, the given infinite linear lattice is reduced to a lattice comprising of two identical atoms joined by two identical elastic springs in a cyclical manner. If \(x_{i}(t)\) and Figure 3: A segment of an infinitely long linear monatomic chain: Atoms of mass \(M\) are connected to each other by elastic springs each of force constant \(\alpha\). \(x_{i+1}(t)\) are the instantaneous displacements of the two atoms, their equations of motion can be written as: \[M\,\frac{d^{2}x_{i}}{dt^{2}}=-\alpha(2\,x_{i}-x_{i+1}-x_{i+1}) \tag{4}\] \[M\,\frac{d^{2}x_{i+1}}{dt^{2}}=-\alpha(2\,x_{i+1}-x_{i}-x_{i}) \tag{5}\] The given coupled second order differential equations are numerically solved using the fourth order Runge Kutta method Runge (1964) under different sets of initial conditions. the computed frequency is independent of the initial energy of the atoms. The initial excitation energy of the atoms only affect the height of the FFT peaks, which are indicative of the amplitudes of lattice vibrations corresponding to that specific frequency. ## III Diatomic Linear Chain with a Basis The diatomic linear chain represents a bit more complex case of modeling of a lattice structure. It is a lattice with dissimilar atoms of masses, say, \(m\) and \(M\) assuming _(M>\(m\))_. The similar atoms are separated by distance \(a\) and the dissimilar atoms are separated by distances \(d\) and \((a-d)\) as shown in Fig. 10. Hence, two springs constants \(\alpha\) and \(\beta\) are considered between the masses \(m\) and \(M.\) It can be viewed as a non-Bravais lattice comprising of two interpenetrating Bravais lattices, each corresponding to a given atom type. When a longitudinal wave passes through the given lattice, the atoms get displaced from their respective equilibrium positions and the equations of motion in the nearest neighbor approximation can be formulated as \[m\,\frac{d^{2}x_{2i}}{dt^{2}}=-\beta(x_{2i}-x_{2i+1})-\alpha(x_{2i}-x_{2i-1}) \tag{6}\] \[M\,\frac{d^{2}x_{2i+1}}{dt^{2}}=-\beta(x_{2i+1}-x_{2i})-\alpha(x_{2i+1}-x_{2i+ 2}) \tag{7}\] Once again, the traditional treatment assumes the existence of plane wave solutions of the form given in Eq.(2) to satisfy the above two equations of motion yielding the dispersion relation \[\omega^{2} = \frac{(\alpha+\beta)(m+M)}{2\;m\;M} \tag{8}\] \[\pm\frac{\sqrt{(m+M)^{2}(\alpha+\beta)^{2}-8mM\alpha\beta(1-cos( ka)}}{2\;m\;M}\] for the given diatomic lattice with a basis. Figure 11 depicts the above dispersion relation. It can be seen that for the diatomic chain with a basis, the dispersion curve has two frequency branches associated with it. The upper dispersion branch known as the _optical branch_ is non-zero at \(k=0,\) the zone center frequency being given by Eq.(9). \[\omega_{1}=\sqrt{\frac{(\alpha+\beta)(m+M)}{mM}} \tag{9}\] The optical branch frequencies decrease from the zone center value (Eq.(9)) to the value at the FBZ boundary given by Eq.(10). \[\omega_{2}=\sqrt{\frac{\alpha+\beta}{m}} \tag{10}\] Figure 11: Phonon Dispersion Curve for Diatomic Chain with Basis: The curve has two branches, one acoustical and one optical branch. 
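A quick numerical check of these band-edge expressions is given below (an illustrative sketch; the masses and force constants are the values adopted for the computation that follows, \(m=1\), \(M=4\), \(\alpha=8\), \(\beta=12\) in arbitrary units):

```python
import numpy as np

# Closed-form band-edge frequencies of the diatomic chain with a basis.
m, M, alpha, beta = 1.0, 4.0, 8.0, 12.0
to_hz = lambda w: w/(2*np.pi)

f1 = to_hz(np.sqrt((alpha + beta)*(m + M)/(m*M)))   # Eq. (9):  optical branch, zone centre
f2 = to_hz(np.sqrt((alpha + beta)/m))               # Eq. (10): optical branch, zone boundary
f3 = to_hz(np.sqrt((alpha + beta)/M))               # acoustic branch, zone boundary (Eq. (11) below)
print(round(f1, 3), round(f2, 3), round(f3, 3))     # ~0.796 0.712 0.356, cf. Table 2
```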
The upper optical branch frequencies decrease from the zone center value \(\omega_{1}=\sqrt{\frac{(\alpha+\beta)(m+M)}{mM}}\) to the zone boundary value \(\omega_{2}=\sqrt{\frac{\alpha+\beta}{m}}.\) The lower acoustic branch rises from zero at the zone center to zone boundary value \(\omega_{3}=\sqrt{\frac{\alpha+\beta}{M}}.\) Figure 10: A Diatomic Linear Chain with a Basis: It comprises of two types of atoms with different masses such that the similar atoms are separated by distance \(a\) and the dissimilar atoms are separated by distances \(d\) and \((a-d).\) Figure 9: FFT plot for a monatomic linear chain with \(m=1\) and \(\alpha=20\): Observed frequency peak at 1.422 with initial displacements of the atoms being \(x_{1}=-1,\)\(x_{2}=3\) and \(x_{1}=x_{2}=0.\) The lower branch of the dispersion curve is the _acoustic branch_. It rises from zero at the zone center to a value given by Eq.(11) at the FBZ boundary. \[\omega_{3}=\sqrt{\frac{\alpha+\beta}{M}}\quad(for\ m<M), \tag{11}\] where \(\omega_{1},\ \omega_{2},\ \omega_{3}\) represent the angular frequencies. The corresponding temporal frequencies \(f_{1},\ f_{2},\ f_{3}\) can be found using the relation \(f=\frac{1}{2\pi}\omega\). This shows that a diatomic chain behaves as a _band-pass mechanical filter_ since the frequencies corresponding to those between the acoustic and optical branches at the zone boundary (Eq.(10) and Eq.(11) ) are forbidden by the system [2]. The computational model of the diatomic chain obtained after implementation of BvK periodic boundary conditions is shown in Fig. 12. It comprises of four atoms, two of each kind. The chain unit is completed by connecting the \((2i+2)^{th}\) atom of mass \(m\) with the \((2i-1)^{th}\) atom of mass \(M\). The condensation of an infinitely long diatomic lattice with basis into a finite unit of four atoms can also be understood in terms of interpenetration of two condensed monatomic lattices similar to the one depicted in Fig. 5, one each for masses \(m\) and \(M\) with force constants \(\alpha\) and \(\beta\) respectively. In the nearest neighbor approximation, the coupled equations of atomic motion can be written as \[m\,\frac{d^{2}x_{2i-1}}{dt^{2}}=-\alpha(x_{2i-1}-x_{2i+2})-\beta(x_{2i-1}-x_{2 i}) \tag{12}\] \[M\,\frac{d^{2}x_{2i}}{dt^{2}}=-\alpha(x_{2i}-x_{2i-1})-\beta(x_{2i}-x_{2i+1}) \tag{13}\] \[m\,\frac{d^{2}x_{2i+1}}{dt^{2}}=-\alpha(x_{2i+1}-x_{2i})-\beta(x_{2i+1}-x_{2i+ 2}) \tag{14}\] \[M\,\frac{d^{2}x_{2i+2}}{dt^{2}}=-\alpha(x_{2i+2}-x_{2i+1})-\beta(x_{2i+2}-x_{2 i-1}) \tag{15}\] The equations are solved using the Fourth Order Runge Kutta method to obtain the numerical solutions for instantaneous positions and velocities of the atoms. The choice of initial conditions for atomic displacements and velocities is randomized (using NumPy's _random_ module in Python). The FFT of the computed instantaneous displacements for each of the four atoms in the condensed lattice unit of the successfully captures the three phonon frequencies, two corresponding to the FBZ boundary and one to the zone center. The atomic masses and force constants in arbitrary units are taken as: \(m=1.0\) ; \(M=4.0\) ; \(\alpha=8.0\); \(\beta=12.0\). The FFT plot for only one of the four atoms is shown in Fig. 13. 
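For readers who wish to reproduce this kind of computation, a minimal self-contained sketch of the pipeline is given below. It is not the code used for this work: one consistent alternating assignment of \(\alpha\) and \(\beta\) around the four-atom loop of Eqs. (12)-(15) is assumed, a fixed (rather than randomized) set of initial displacements is chosen so that all three modes are visibly excited, and a coarser time step than \(2^{-15}\) s keeps the pure-Python loop fast (all the modes of interest lie below 1 Hz).

```python
import numpy as np

# Minimal illustrative pipeline: RK-4 integration of a four-atom BvK ring with
# alternating spring constants (one consistent assignment around the loop),
# followed by an FFT of one atom's displacement history.
m, M, alpha, beta = 1.0, 4.0, 8.0, 12.0
masses  = np.array([m, M, m, M])
springs = np.array([beta, alpha, beta, alpha])   # spring j joins atoms j and j+1 (cyclically)

def accel(x):
    k_left = np.roll(springs, 1)                 # constant of the spring to the left of atom j
    return (-k_left*(x - np.roll(x, 1)) - springs*(x - np.roll(x, -1)))/masses

dt, T = 2.0**-10, 128.0                          # coarser step than in the text, still well resolved
steps = int(T/dt)
x = np.array([1.0, -0.5, 0.25, 0.75])            # one arbitrary set of initial displacements
v = np.zeros(4)                                  # atoms released from rest

rec = np.empty(steps)
for n in range(steps):                           # classic fourth-order Runge-Kutta update
    rec[n] = x[0]
    k1x, k1v = v,              accel(x)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x)
    k4x, k4v = v + dt*k3v,     accel(x + dt*k3x)
    x = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6.0
    v = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0

window   = np.hanning(steps)                     # suppresses spectral leakage
spectrum = np.abs(np.fft.rfft((rec - rec.mean())*window))
freqs    = np.fft.rfftfreq(steps, d=dt)
mid      = spectrum[1:-1]
is_peak  = (mid > spectrum[:-2]) & (mid > spectrum[2:]) & (mid > 0.1*spectrum.max())
print(np.round(freqs[1:-1][is_peak], 2))         # three peaks near 0.35, 0.72 and 0.80 Hz (cf. Table 2)
```

With randomized initial conditions, as used in the text, the same three peaks appear with different heights, in line with the discussion of peak magnitudes below.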
The corresponding plots of the other three atoms are not shown because as expected, those atoms also exhibit three peaks at identical frequencies, two of which can be mapped with the two optical phonon modes (\(\omega_{1}\) & \(\omega_{2}\)) and one with the acoustical phonon frequency (\(\omega_{3}\)). The magnitudes of FFT peaks are insignificant in the context of present study. The height of the peaks are determined by the initial excitation conditions imposed on the lattice, which in our case are randomized to eliminate any prejudices in our computed phonon frequencies. Physically, the magnitude of FFT peaks are indicative of amplitude of atomic vibrations in a given normal mode of the lattice. Table 2 gives the theoretical and computed values of the zone centre and zone boundary phonon frequencies. There is a close agreement between the two sets of values within a maximum absolute error of around 2%. This indicates the accuracy and \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{Theoretical Values} & \multicolumn{3}{|c|}{FFT Values} \\ \hline \(f_{1}\) & \(f_{2}\) & \(f_{3}\) & \(f_{1}^{{}^{\prime}}\) & \(f_{2}^{{}^{\prime}}\) & \(f_{3}^{{}^{\prime}}\) \\ \hline 0.795 & 0.712 & 0.353 & 0.799 & 0.710 & 0.360 \\ \hline \end{tabular} \end{table} Table 2: Comparison of theoretical and FFT computed values of zone centre and zone boundary phonon frequencies for a diatomic chain with a basis. Figure 12: Computational model of a Diatomic Chain with Basis: Implementation of BvK periodic boundary conditions condenses the given infinite linear lattice into a lattice comprising of four atoms, two of each kind joined by two kinds of elastic springs in a cyclic manner. Figure 13: FFT Plot for an atom of a diatomic chain with a basis exhibits 3 peaks: Two peaks are mapped with the two optical phonon modes and one with the acoustical phonon mode fidelity of the model's predictions. ## IV Diatomic chain without a basis The diatomic linear chain without a basis is a special case of the lattice system described in section (III). It is a Bravais lattice with two dissimilar atoms of masses \(m\) and \(M\) placed alternately at identical separation distances. The atoms are coupled to each other by identical elastic springs with force constant \(\alpha\) as depicted in Fig. 14. The analytical expressions for phonon frequencies of the given lattice can be easily arrived at by putting \(\alpha=\beta\) in Eq.(9)- Eq.(11) to yield the Optical branch frequency at zone centre as \[\omega_{1}=\sqrt{\frac{(2\,\alpha)(m+M)}{m\,M}}, \tag{16}\] Optical branch frequency at zone boundary as \[\omega_{2}=\sqrt{\frac{(2\,\alpha)}{m}}, \tag{17}\] Acoustic branch frequency at zone boundary: \[\omega_{3}=\sqrt{\frac{(2\,\alpha)}{M}}\quad(for\ m<M). \tag{18}\] The condensed lattice unit after applying the BvK PBCs comprises of four atoms as depicted in Fig. 14. It is similar to the earlier case in Fig. 12, except that the force constant \(\beta\) is replaced by \(\alpha\). The corresponding equations of motion are also similar in form to Eq.(12) through Eq.(15) with \(\alpha=\beta\). Similar to the procedure in previous cases, the FFT algorithm is applied to the computed time domain displacement solutions of the newly formulated equations of motion. The computational parameters are taken as: \(m=1.0\) ; \(M=4.0\) & \(\alpha=10.0\). Interestingly, this time the normal mode frequencies as captured by FFT turn out to be different for the two kinds of atoms as depicted in Fig. 15 and Fig. 16. 
Each of the atom types exhibit equal frequencies at \(0.800\) Hz corresponding to the optical branch at the zone center ( \(k=0\)) given by Eq.(16). However they differ in phonon frequency peaks at the zone boundary, the lighter atoms with mass \(m\) exhibit a peak at \(0.712\) Hz given by Eq.(17), while the the heavier atoms with mass \(M\) exhibit a peak at at \(0.348\) Hz given by Eq.(18). It can thus be concluded that while both types of atom participate in the optical normal mode of vibration, Figure 14: Diatomic Linear Chain: Infinitely long chain of two dissimilar atoms of masses \(m\) and \(M\) at equal separation \(d\) joined by identical elastic springs with force constant \(\alpha\). bration involves only the heavier atoms. The computed and theoretical values of the FBZ phonon frequencies for the given diatomic lattice without basis have a fairly good agreement as shown in Table 3. Our computational model explicitly highlights the subtly distinct behaviour of diatomic lattices with and without basis with regard to the normal modes of vibration. ## V Monatomic square lattice The monatomic square lattice represents the simplest case of modeling a two-dimensional lattice structure. It is a Bravais lattice with identical atoms arranged in a planar pattern shown in Fig. 17. Each of the atoms in this lattice structure has two degrees of freedom which somewhat increases the complexity of lattice dynamics. To work out the dynamics of lattices with more than one dimension we may need to model the interatomic interactions by non-central or angular forces in addition to the central forces. The two-body central forces that we considered for 1D lattice systems were assumed to act in a direction collinear with the equilibrium line connecting the two interacting atoms and to arise out of the instantaneous relative displacement between them. The two-body non-central or angular forces as defined by de Launey [21] depend on the angle which the line joining the moving atoms makes with the equilibrium position of the line. Using the illustration in Fig. 18, the analytical expressions for central and deLauney type of angular forces can be written as \[\mathbf{\vec{F}}_{\text{central}}=-\alpha\left[\mathbf{\hat{e}}_{i}\cdot( \mathbf{\vec{r}}_{o}-\mathbf{\vec{r}}_{i})\right]\cdot\mathbf{\hat{e}}_{i} \tag{19}\] \[\mathbf{\vec{F}}_{\text{angular}}=-\beta\left[\mathbf{\hat{e}}_{i}\times( \mathbf{\vec{r}}_{o}-\mathbf{\vec{r}}_{i})\right]\times\mathbf{\hat{e}}_{i}, \tag{20}\] where \(\mathbf{o}\) represents the reference atom and \(\mathbf{\hat{r}}\) represents a neighboring atom ; \(\mathbf{\hat{e}}_{i}\) is a unit vector in the direction of the equilibrium line joining the two atoms ; \(\mathbf{\vec{r}}_{i}\) and \(\mathbf{\vec{r}}_{o}\) are the instantaneous displacements of the \(i^{th}\) and \(o^{th}\) atoms respectively; \(\alpha\) and \(\beta\) are the central and angular force constants respectively. 
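For concreteness, the two force types of Eqs. (19)-(20) can be written as small helper functions (an illustrative sketch of ours for planar displacements; it relies on the identity \([\mathbf{\hat{e}}\times\vec{d}\,]\times\mathbf{\hat{e}}=\vec{d}-(\mathbf{\hat{e}}\cdot\vec{d}\,)\,\mathbf{\hat{e}}\) for a unit vector \(\mathbf{\hat{e}}\)):

```python
import numpy as np

# Illustrative helper functions for the two force types of Eqs. (19)-(20),
# written for planar (2-D) displacements; e_i is the unit vector along the
# equilibrium line joining the reference atom o to its neighbour i.
def central_force(disp_o, disp_i, e_i, alpha):
    d = np.asarray(disp_o, float) - np.asarray(disp_i, float)
    e = np.asarray(e_i, float)
    return -alpha*np.dot(e, d)*e                      # Eq. (19)

def angular_force(disp_o, disp_i, e_i, beta):
    d = np.asarray(disp_o, float) - np.asarray(disp_i, float)
    e = np.asarray(e_i, float)
    # [e x d] x e = d - (e.d) e for a unit vector e, so the angular force acts
    # only on the part of the relative displacement perpendicular to the bond.
    return -beta*(d - np.dot(e, d)*e)                 # Eq. (20)

e = np.array([1.0, 0.0])                              # neighbour along +x
print(central_force([0.1, 0.2], [0.0, 0.0], e, alpha=3.0))   # ~ (-0.3, 0)
print(angular_force([0.1, 0.2], [0.0, 0.0], e, beta=2.0))    # ~ (0, -0.4)
```

The central part thus acts only on the component of the relative displacement along the equilibrium bond, while the angular part acts only on the perpendicular component.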
The net restoring force on atom \(\mathbf{o}\) due to its interaction with atom \(\mathbf{i}\) is given by the vector sum of the central and angular forces as \[\mathbf{\vec{F}}=\mathbf{\vec{F}}_{\text{central}}+\mathbf{\vec{F}}_{\text{ angular}} \tag{21}\] \[\mathbf{\vec{F}}=-\beta\left(\mathbf{\vec{r}}_{o}-\mathbf{\vec{r}}_{i}\right) -\left(\alpha-\beta\right)\left[\mathbf{\hat{e}}_{i}\cdot(\mathbf{\vec{r}}_{o }-\mathbf{\vec{r}}_{i})\right]\cdot\mathbf{\hat{e}}_{i} \tag{22}\] The resultant force as given in Eq.(22) can be resolved into the Cartesian components \(\mathbf{x},\mathbf{y}\) as given below: \[F_{x}=-\beta\left(x_{o}-x_{i}\right)-\left(\alpha-\beta\right) e_{ix}\left[e_{ix}\left(x_{o}-x_{i}\right)\right.\\ \left.+e_{iy}\left(y_{o}-y_{i}\right)\right] \tag{23}\] \[F_{y}=-\beta\left(y_{o}-y_{i}\right)-\left(\alpha-\beta\right)e_{ iy}\left[e_{ix}\left(x_{o}-x_{i}\right)\right.\\ \left.+e_{iy}\left(y_{o}-y_{i}\right)\right] \tag{24}\] Figure 17: Monatomic square lattice structure: Solid lines connect the nearest neighbors and the dashed lines connect the second nearest neighbors. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{Theoretical Values} & \multicolumn{3}{|c|}{FFT Values} \\ \hline \(f_{1}\) & \(f_{2}\) & \(f_{3}\) & \(f_{1}^{{}^{\prime}}\) & \(f_{2}^{{}^{\prime}}\) & \(f_{3}^{{}^{\prime}}\) \\ \hline 0.795 & 0.712 & 0.353 & 0.800 & 0.712 & 0.348 \\ \hline \end{tabular} \end{table} Table 3: Comparison of theoretical and FFT computed values of zone centre and zone boundary phonon frequencies for a diatomic chain without a basis. Figure 18: Illustration of Central and deLauney type Angular forces to model interatomic interactions. To construct the computational model of the square lattice, we arbitrarily choose one atom as our origin (indexed as 0) and assign indices \((1,\ 2,\ 3,\ 4)\) to its nearest neighbor atoms and indices \((5,\ 6,\ 7,\ 8)\) to the second nearest neighbor atoms as shown in Fig. 19. The unit cell thus contains nine atoms, each one vibrating with two degrees of freedom (DOF) along the two orthogonal (x & y) axes. To span the planar infinite lattice, the unit cell is repeated in all the four directions as shown in Fig. 20. The occupation number of the given unit cell is 4 (one origin atom plus four corner atoms, each shared by four unit cells plus 4 edge atoms, each shared by two unit cells) with each atom having two DOF and so a monatomic square lattice is expected to exhibit 8 phonon modes. In the harmonic approximation used for the model, the interactions between the atom with its nearest neighbors and next-nearest neighbors are modeled by elastic springs. The central force constants for nearest and next-nearest neighbor interactions are represented by \(\alpha_{1}\), \(\alpha_{2}\) respectively and the angular force constants by \(\beta_{1}\), \(\beta_{2}\) respectively for nearest and next-nearest neighbor interactions. The relative magnitudes of force constants between the nearest and next-nearest neighbors need to be commensurate with the corresponding interatomic separations. The nearest neighbors tend to have a stronger bonding as compared to next-nearest neighbors. In a real world situation, the exact ratio of the these force constants depends on many factors, such as, the physical nature of interatomic interactions, the specific atomic composition, the specific lattice structure. In the absence of a universal choice, we have used the ratio reported in the work by Cserti [22]. 
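As a small consistency check (with arbitrary illustrative values), the Cartesian components of Eqs. (23)-(24) can be verified against the vector form of Eq. (22) for, e.g., a second-nearest-neighbour direction of the square lattice:

```python
import numpy as np

# Hedged consistency check: the components of Eqs. (23)-(24) should reproduce
# the vector form of Eq. (22); shown for the direction cosines (1/sqrt(2), 1/sqrt(2)).
alpha, beta = 1.5, 1.0                       # e.g. next-nearest-neighbour constants
e = np.array([1.0, 1.0])/np.sqrt(2.0)
d = np.array([0.02, -0.01])                  # relative displacement r_o - r_i

f_vec = -beta*d - (alpha - beta)*np.dot(e, d)*e                   # Eq. (22)
fx = -beta*d[0] - (alpha - beta)*e[0]*(e[0]*d[0] + e[1]*d[1])     # Eq. (23)
fy = -beta*d[1] - (alpha - beta)*e[1]*(e[0]*d[0] + e[1]*d[1])     # Eq. (24)
print(np.round(f_vec, 4), round(fx, 4), round(fy, 4))             # both give ~(-0.0225, 0.0075)
```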
The nearest neighbor interactions are assumed to be twice as strong as the next-nearest neighbor interactions, so we have used \(\alpha_{1}:\alpha_{2}=2:1\) and \(\beta_{1}:\beta_{2}=2:1\) in our computational model. Table 4 summarises the details related to the lattice dynamics of square lattice with respect to the arbitrarily chosen origin atom with index 0. The nearest neighbor distance is represented by \(a\) and so the second nearest neighbor distance becomes \(\sqrt{2}\ a\). The spatial coordinates of the neighboring atoms, the force constants involved and the direction cosines (DCs) of the equilibrium line joining the origin atom and its eight neighbors are listed in the table. Using the expressions given in Eq.(23) and Eq.(24) for \(\mathbf{x}\) and \(\mathbf{y}\) components of the net force experienced by the atom \(\mathbf{0}\) due to the atom \(\mathbf{i}\) and the lattice details in table 4, the Cartesian components of equations of motion for our refer \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Index & \multicolumn{2}{|c|}{Force constants} & \multicolumn{2}{|c|}{Spatial Positions} & \multicolumn{2}{|c|}{DCs} \\ \hline \(i\) & Central & Angular & \(p_{i}\) & \(q_{i}\) & \(e_{ix}\) & \(e_{iy}\) \\ \hline 1 & \(\alpha_{1}\) & \(\beta_{1}\) & \(a\) & 0 & 1 & 0 \\ \hline 2 & \(\alpha_{1}\) & \(\beta_{1}\) & \(-a\) & 0 & \(-1\) & 0 \\ \hline 3 & \(\alpha_{1}\) & \(\beta_{1}\) & 0 & \(a\) & 0 & 1 \\ \hline 4 & \(\alpha_{1}\) & \(\beta_{1}\) & 0 & \(-a\) & 0 & \(-1\) \\ \hline 5 & \(\alpha_{2}\) & \(\beta_{2}\) & \(a\) & \(a\) & \(\frac{1}{\sqrt{2}}\) & \(\frac{1}{\sqrt{2}}\) \\ \hline 6 & \(\alpha_{2}\) & \(\beta_{2}\) & \(-a\) & \(-a\) & \(-\frac{1}{\sqrt{2}}\) & \(-\frac{1}{\sqrt{2}}\) \\ \hline 7 & \(\alpha_{2}\) & \(\beta_{2}\) & \(-a\) & \(a\) & \(-\frac{1}{\sqrt{2}}\) & \(\frac{1}{\sqrt{2}}\) \\ \hline 8 & \(\alpha_{2}\) & \(\beta_{2}\) & \(a\) & \(-a\) & \(\frac{1}{\sqrt{2}}\) & \(-\frac{1}{\sqrt{2}}\) \\ \hline \end{tabular} \end{table} Table 4: Table listing the details related to the lattice dynamics of square lattice with respect to the origin atom with index 0 Figure 19: Unit cell of a square lattice showing the nearest neighbors \(1,\ 2,\ 3,\ 4\) and next-nearest neighbors \(5,\ 6,\ 7,\ 8\) with respect to atom 0. Figure 20: Construction of Square Lattice by repetition of unit cells: Each unit cell contains nine atoms indexed \(0-8\). ence atom \(\mathbf{0}\) are formulated as follows: \[M\,\frac{d^{2}x_{0}}{dt^{2}}=-[\alpha_{1}(2x_{0}-x_{1}-x_{2})+ \beta_{1}(2x_{0}-x_{3}-x_{4})\\ +\frac{(\alpha_{2}+\beta_{2})}{2}\left(4x_{0}-x_{5}-x_{6}-x_{7}-x_ {8}\right)+\\ +\frac{(\alpha_{2}-\beta_{2})}{2}\left(-y_{5}-y_{6}+y_{7}+y_{8} \right)] \tag{25}\] \[M\,\frac{d^{2}y_{0}}{dt^{2}}=-[\alpha_{1}(2y_{0}-y_{3}-y_{4})+ \beta_{1}(2y_{0}-y_{1}-y_{2})+\\ +\frac{(\alpha_{2}+\beta_{2})}{2}\left(4y_{0}-y_{5}-y_{6}-y_{7}- y_{8}\right)+\\ +\frac{(\alpha_{2}-\beta_{2})}{2}\left(-x_{5}-x_{6}+x_{7}+x_{8} \right)] \tag{26}\] Implementation of Bvk periodic boundary conditions for the given lattice requires us to identify the neighbors of the unit cell atoms indexed \(1-8\). Table 5 lists the nearest and next-nearest neighbors of these atoms using the periodicity of the lattice depicted in Fig. 20. The equations of motion for the eight atoms can hence be constructed as done for the reference atom \(\mathbf{0}\) in Eq.(25) and Eq.(26). 
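A sketch of the corresponding right-hand side for the reference atom, Eqs. (25)-(26), is given below (an assumed helper function of ours, not code from this work); the analogous functions for atoms 1-8 follow from the neighbour lists of Table 5, and stacking all of them yields the full system of equations discussed next.

```python
import numpy as np

# Illustrative sketch of the right-hand side of Eqs. (25)-(26): the acceleration
# of the reference atom 0, given the x- and y-displacement arrays of the nine
# atoms in the condensed unit cell of the square lattice.
def accel_atom0(x, y, m, a1, a2, b1, b2):
    fx = -(a1*(2*x[0] - x[1] - x[2]) + b1*(2*x[0] - x[3] - x[4])
           + 0.5*(a2 + b2)*(4*x[0] - x[5] - x[6] - x[7] - x[8])
           + 0.5*(a2 - b2)*(-y[5] - y[6] + y[7] + y[8]))
    fy = -(a1*(2*y[0] - y[3] - y[4]) + b1*(2*y[0] - y[1] - y[2])
           + 0.5*(a2 + b2)*(4*y[0] - y[5] - y[6] - y[7] - y[8])
           + 0.5*(a2 - b2)*(-x[5] - x[6] + x[7] + x[8]))
    return fx/m, fy/m

# Small test displacement of atom 0 only; the parameter values are those adopted
# for the computation reported below.
x = np.zeros(9); y = np.zeros(9); x[0] = 0.01
ax, ay = accel_atom0(x, y, m=0.01, a1=3.0, a2=1.5, b1=2.0, b2=1.0)
print(round(ax, 6), round(ay, 6))            # -> -15.0 0.0
```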
Thus, we have a system of 18 coupled second order differential equations to be solved simultaneously in the displacement-time domain using the fourth order Runge Kutta method. To aid the understanding of expected phonon spectrum, we depict the first Brillouin zone for the given monatomic square lattice with the high symmetry points in Fig. 21. The \((k_{x},k_{y})\) coordinates of the annotated symmetry points are \(\Gamma(0,0)\), \(X\left(\frac{\pi}{a},0\right),L\left(\frac{\pi}{a},\frac{\pi}{a}\right)\), \(\triangle\left(\frac{\pi}{2a},0\right)\) and \(\Sigma\left(\frac{\pi}{2a},\frac{\pi}{2a}\right)\). The traditional treatment of the problem by assuming plane wave solutions of the form in Eq.(2) for Eq.(25) - Eq.(26) has been done in the book authored by H.C.Gupta[8]. The expression for secular determinant derived therein is given below in Eq.(27): \[\begin{vmatrix}2\alpha\,(1-C_{1})+2\beta\,(1-C_{2})&2\,S_{1}\,\,S_{2}(\alpha- \beta)\\ +2\,(\alpha+\beta)(1-C_{1}\,C_{2})&\\ -M\,\omega^{2}&\\ 2\,S_{1}\,\,S_{2}(\alpha-\beta)&+2\,(\alpha+\beta)(1-C_{1}\,C_{2})\\ -M\,\omega^{2}&\\ \end{vmatrix}=0 \tag{27}\] where the notations mean: \(C_{1}=cos(ak_{x})\), \(C_{2}=cos(ak_{y})\), \(S_{1}=sin(ak_{x})\), \(S_{2}=sin(ak_{y})\); \((k_{x},\,\,k_{y})\) are the position coordinates of a point in the Brillouin zone. The expressions for the phonon frequencies corresponding to the high symmetry points in the FBZ are obtained by solving the determinant Eq.(27) using the \((k_{x},\,\,k_{y})\) coordinates for these points. The derived expressions are given in Table 6. The degeneracy of the longitudinal and transverse acoustical branches at the \(\mathbf{L}\) point is to be noted. \begin{table} \begin{tabular}{|c|c|c|} \hline Reference atom & Nearest neighbors & \begin{tabular}{c} Next - nearest neighbors \\ \end{tabular} \\ \hline 0 & 1,2,3,4 & 5,6,7,8 \\ \hline 1 & 0,2,5,8 & 3,4,6,7 \\ \hline 2 & 0,1,6,7 & 3,4,5,8 \\ \hline 3 & 0,4,5,7 & 1,2,6,8 \\ \hline 4 & 0,3,6,8 & 1,2,5,7 \\ \hline 5 & 1,3,7,8 & 0,2,4,6 \\ \hline 6 & 2,4,7,8 & 0,1,3,5 \\ \hline 7 & 2,3,5,6 & 0,1,4,8 \\ \hline 8 & 1,4,5,6 & 0,2,3,7 \\ \hline \end{tabular} \end{table} Table 5: Table listing the nearest and the next-nearest neighbors of atoms in the unit cell of monatomic square lattice. Figure 22: Phonon dispersion spectrum for a monatomic square lattice along the symmetry directions \([10]\) and \([11]\). The dispersion curves in the symmetry directions [1 0] and [1 1] are shown in Fig. 22. Both the directions have one longitudinal acoustic (LA) and one transverse acoustic (TA) branch. The FFT computation of the instantaneous displacement solutions obtained for the system of 18 coupled equations of motion for the 9 atoms in the unit cell exhibit 4 peaks for each atom, irrespective of the randomized set of initial conditions. The plot for each atom exhibits FFT peaks of varying heights but at the same frequency values. As discussed earlier, the relative heights of FFT peaks are only indicative of the degree of participation of a given atom in a given normal (phonon) mode of vibration. This aspect of the problem is insignificant in the context of the present work. Hence, we depict the FFT plot of only one of the atoms in Fig. 23. The values of the model parameters atomic mass and force constants used for computation are: \(m=0.01\) ; \(\alpha_{1}=3.0\) ; \(\alpha_{2}=1.5\) ; \(\beta_{1}=2.0\) ; \(\beta_{2}=1.0\). 
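With these values, the closed-form expressions of Table 6 can be evaluated directly; the short check below is illustrative and reproduces the theoretical frequencies against which the FFT peaks are compared in Table 7.

```python
import numpy as np

# Quick evaluation of the closed-form Table 6 frequencies with the parameter
# set quoted above (m = 0.01, a1 = 3.0, a2 = 1.5, b1 = 2.0, b2 = 1.0).
m, a1, a2, b1, b2 = 0.01, 3.0, 1.5, 2.0, 1.0
f = lambda s: np.sqrt(s/m)/(2*np.pi)

print("L   (LA = TA):", round(f(4*a1 + 4*a2), 3))          # ~6.75
print("X   LA       :", round(f(4*a1 + 4*b1 + 4*b2), 3))   # ~7.80
print("X   TA       :", round(f(4*a2 + 4*b1 + 4*b2), 3))   # ~6.75
print("Sig LA       :", round(f(2*a1 + 2*b1 + 4*a2), 3))   # ~6.37
print("Sig TA       :", round(f(2*a1 + 2*b1 + 4*b2), 3))   # ~5.96
```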
A mapping and comparison of computed phonon frequencies with the values calculated using the analytical frequency expressions in Table 7. It is seen that our FFT computation exactly captures the degenerate \(LA\) and \(TA\) branch frequency at the \(L\)-point. The phonon frequencies at the \(\Sigma\)-point are also captured quite accurately within an absolute error of 2%. For the given choice of force constant ratios used in the computation, the \(TA\) branch at \(X\) point becomes degenerate with the \(L\)-point \(LA\), \(TA\) branches and so is also captured exactly. The limited existing literature [22; 23; 24] available on monatomic square lattice models employ only the central forces to account for nearest and next-nearest neighbor interatomic interactions. Our model can be easily reduced to the special case of central force approximation for interatomic interactions up to the second nearest neighbors. This is achieved by putting \(\beta_{1}=\beta_{2}=0\) in Eq.(25) - Eq.(26) and the remaining 16 equations of motion for atoms indexed \(1-8\). The corresponding analytical expressions for the 8 phonon frequencies at the FBZ symmetry points are given in Table 9. The \(LA\) and \(TA\) branches at the \(L\)-point again seen \begin{table} \begin{tabular}{|c|c|c|c|} \hline Phonon & Symmetry & Branch & Frequency \\ Frequency & Points & type & Expression \\ \hline \(f_{1}^{L}\) & \(L\left(\frac{\pi}{a},\frac{\pi}{a}\right)\) & LA & \(\frac{1}{2\pi}\sqrt{\frac{4\alpha_{1}+4\alpha_{2}}{m}}\) \\ \(f_{2}^{L}\) & & TA & \(\frac{1}{2\pi}\sqrt{\frac{4\alpha_{1}+4\alpha_{2}}{m}}\) \\ \hline \(f_{1}^{X}\) & \(X\left(\frac{\pi}{a},0\right)\) & LA & \(\frac{1}{2\pi}\sqrt{\frac{4\alpha_{1}+4\beta_{1}+4\beta_{2}}{m}}\) \\ \(f_{2}^{X}\) & & TA & \(\frac{1}{2\pi}\sqrt{\frac{4\alpha_{2}+4\beta_{1}+4\beta_{2}}{m}}\) \\ \hline \(f_{1}^{\triangle}\) & \(\triangle\left(\frac{\pi}{2a},0\right)\) & LA & \(\frac{1}{2\pi}\sqrt{\frac{2\alpha_{1}+2\beta_{1}+2\beta_{2}}{m}}\) \\ \(f_{2}^{\triangle}\) & & TA & \(\frac{1}{2\pi}\sqrt{\frac{2\alpha_{2}+2\beta_{1}+2\beta_{2}}{m}}\) \\ \hline \(f_{1}^{\Sigma}\) & \(\Sigma\left(\frac{\pi}{2a},\frac{\pi}{2a}\right)\) & LA & \(\frac{1}{2\pi}\sqrt{\frac{2\alpha_{1}+2\beta_{1}+4\alpha_{2}}{m}}\) \\ \(f_{2}^{\Sigma}\) & & TA & \(\frac{1}{2\pi}\sqrt{\frac{2\alpha_{1}+2\beta_{1}+4\beta_{2}}{m}}\) \\ \hline \end{tabular} \end{table} Table 6: Table listing the analytical expressions for phonon frequencies at the high symmetry points in a monatomic square lattice in terms of nearest and next-nearest neighbor central force and angular force constants \(\alpha_{1}\), \(\alpha_{2},\beta_{1}\)\(\&\beta_{2}\) respectively. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(f_{1}^{L^{\prime}}=6.753\) & \(f_{1}^{X^{\prime}}=7.024\) & & \(f_{1}^{X^{\prime}}=6.469\) \\ \((0.01\%)\) & \((9.90\%)\) & \(-\) & \((1.62\%)\) \\ \(f_{2}^{L^{\prime}}=6.753\) & \(f_{2}^{X^{\prime}}=6.753\) & & \(f_{2}^{X^{\prime}}=5.844\) \\ \((0.01\%)\) & \((0.01\%)\) & \(-\) & \((1.86\%)\) \\ \hline \end{tabular} \end{table} Table 7: Table lists the expected theoretical phonon frequency values and corresponding computational values for a monatomic square lattice when considering both central and angular force interactions (The percentages within brackets denote the absolute percentage errors). Figure 23: FFT plot for an atom of a monatomic square lattice modeled using central and angular forces: Each atom in the unit cell exhibits 4 peaks. to be degenerate. 
Further, for our given choice of the ratio of two central force constants (\(\alpha_{1}=2\alpha_{2}\)), the degeneracy is reflected in other phonon modes too. The \(T\!A\) branch frequencies at \(X\) and \(\Sigma\) points become equal to the \(L\!A\) branch frequency at \(\Delta\)-point. Similarly, the \(L\!A\) branches at \(X\) and \(\Sigma\) points become degenerate. The FFT computation using the model parameter values \(m=0.01\), \(\alpha_{1}=3.0\), \(\alpha_{2}=1.5\) yield 3 distinct peaks for each of the 9 atoms in the unit cell. Figure 24 depicts the FFT plot for one of the atoms in the unit cell of the given monatomic square lattice. The mapping and comparison of the computed and analytical phonon frequencies is given in table 9. All the FFT computed peaks can be seen to exhibit relatively higher absolute errors of around 13%. It is evident from the results summarized in Tables 7 and 9 that the phonon dynamics of a monatomic square lattice is better explained by modeling the interatomic interactions in terms of central and angular forces. The incorporation of angular forces lifts the degeneracy of the phonon modes that is encountered in the model using only the central forces for nearest and next-nearest neighbor interactions. However, in both the models of monatomic square lattice discussed, the FFT spectrum has some missing values as indicated by dashes in tables 7 and 9. The failure of FFT algorithm to capture the missing phonon frequencies is attributed to the limitation of the lattice dynamical model to account for interatomic interactions responsible for the missing phonon modes. Theoretical Phonon Frequencies at the Symmetry Points \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Theoretical Phonon Frequencies at the Symmetry Points} \\ \hline \(L\left(\frac{\pi}{a};\frac{\pi}{a}\right)\) & \(X\left(\frac{\pi}{a},0\right)\) & \(\triangle\left(\frac{\pi}{2a},0\right)\) & \(\Sigma\left(\frac{\pi}{2a},\frac{\pi}{2a}\right)\) \\ \hline \(f_{1}^{L}=6.752\) & \(f_{1}^{X}=5.513\) & \(f_{1}^{\triangle}=3.898\) & \(f_{1}^{\Sigma}=5.513\) \\ \(f_{2}^{L}=6.752\) & \(f_{2}^{X}=3.898\) & \(f_{2}^{\triangle}=2.757\) & \(f_{2}^{\Sigma}=3.898\) \\ \hline \multicolumn{4}{|c|}{Phonon Frequencies captured by FFT} \\ \hline \(f_{1}^{L^{\prime}}=5.850\) & \(f_{1}^{X^{\prime}}=4.777\) & \(f_{1}^{\Sigma^{\prime}}=3.379\) & \(f_{1}^{\Sigma}=4.777\) \\ \((13.35\%)\) & \((13.35\%)\) & \((13.31\%)\) & \((13.35\%)\) \\ \(f_{2}^{L^{\prime}}=5.850\) & \(f_{2}^{X^{\prime}}=3.379\) & & \(f_{2}^{\Sigma}=3.379\) \\ \((13.31\%)\) & \(-\) & & \((13.31\%)\) \\ \hline \end{tabular} allotropes like graphite [22; 25; 26; 27], carbon nanotubes [28; 29; 30] and fullerenes [31; 32] that have potential industrial applications. In this section we extend our model for computing the phonon spectrum using BvK boundary conditions to a honeycomb lattice. It is a non-Bravais lattice comprising of lattice sites at the corners of hexagonal unit cells. Our model assumes that each of these lattice sites are occupied by identical atoms of mass \(M\) each, held together by elastic springs as shown in Fig. 25. It can be seen that the lattice sites labelled \(I\) and \(II\) are not equivalent due to the different orientation of adjacent neighbors at the respective sites. The dynamics of the lattice is calculated using central force type interactions between the nearest and the next-nearest neighbor atoms. The occupation number of the hexagonal unit cell is 2 as each of the six corner atoms is shared by 3 adjacent unit cells. 
Further, each of the two unit cell atoms has two DOF, so we expect the honeycomb lattice to exhibit 4 phonon modes. Figure 26 depicts three complete unit cells labelled \(I\), \(II\), \(III\) and an incomplete cell labelled \(IV\). Each atom sitting at a given lattice site has three nearest neighbors occupying its non-equivalent lattice sites and six next-nearest neighbors at its equivalent lattice sites. For the atom indexed \(0\), the atoms with indices _1,3,5_ are the nearest neighbors connected by springs with force constant \(\alpha_{1}\) and the atoms with indices _2,4_ connected by springs with force constant \(\alpha_{2}\) are the next-nearest neighbors. The implementation of the BvK conditions is reflected by lattice sites with the same index number. If the given lattice had infinite spatial extent, then the atoms sitting at these lattice sites would be equivalent to each other and exhibit identical dynamics. The unit cell of our computational model thus contains six atoms indexed 0-5. The nearest and next-nearest neighbors of each of these atoms in the unit cell are listed in TableX. Table XI summarises the lattice dynamical details of the given honeycomb lattice with respect to the arbitrarily chosen origin atom with index \(0\). The nearest neighbor distance equal to the edge length of the hexagonal unit cell is represented by \(a\), so the second nearest neighbor distance becomes \(\sqrt{3}\)\(a\). The spatial coordinates of the neighboring atoms, the force constants and the direction cosines (DCs) of the equilibrium line joining the origin atom to its nine neighbors are listed in the table. Using these details, the Cartesian components of equations of motion for our reference atom **0** are formulated \begin{table} \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}{c} Reference \\ atom \\ \end{tabular} & \begin{tabular}{c} Nearest \\ neighbors \\ \end{tabular} & \begin{tabular}{c} Next - nearest \\ neighbors \\ \end{tabular} \\ & & \begin{tabular}{c} [Each index to \\ represent 3 atoms] \\ \end{tabular} \\ \hline 0 & 1,3,5 & 2,4 \\ \hline 1 & 0,2,4 & 3,5 \\ \hline 2 & 1,3,5 & 0,4 \\ \hline 3 & 0,2,4 & 1,5 \\ \hline 4 & 1,3,5 & 0,2 \\ \hline 5 & 0,2,4 & 1,3 \\ \hline \end{tabular} \end{table} Table X: Table listing the nearest and the next-nearest neighbors of six atoms indexed in the unit cell of honeycomb lattice. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \begin{tabular}{c} Index \\ \end{tabular} & \begin{tabular}{c} Hexagonal \\ Cells \\ \end{tabular} & \begin{tabular}{c} Force \\ constants \\ \end{tabular} & \begin{tabular}{c} Spatial Positions \\ \end{tabular} & \begin{tabular}{c} DCs \\ \end{tabular} \\ \hline \begin{tabular}{c} \(i\) \\ \end{tabular} & \begin{tabular}{c} Cell Index \\ \end{tabular} & \begin{tabular}{c} Central \\ Force \\ \end{tabular} & \begin{tabular}{c} \\ \(p_{i}\) \\ \end{tabular} & \begin{tabular}{c} \(q_{i}\) \\ \end{tabular} & \begin{tabular}{c} \(\hat{\textbf{e}}_{ii}\) \\ \end{tabular} & \begin{tabular}{c} \(\hat{\textbf{e}}_{vi}\) \\ \end{tabular} \\ \hline 1 & \(I,III\) & \(\alpha_{1}\) & \(-\frac{q}{2}\) & \(\frac{\sqrt{3}a}{2}\) & \(-\frac{1}{2}\) & \(\frac{\sqrt{3}}{2}\) \\ \hline 3 & \(I,IV\) & \(\alpha_{1}\) & \(a\) & 0 & 1 & 0 \\ \hline 5 & \(II,III\) & \(\alpha_{1}\) & \(-\frac{q}{2}\) & \(-\frac{\sqrt{3}a}{2}\) & \(-\frac{1}{2}\) & \(-\frac{\sqrt{3}}{2}\) \\ \hline 2 & \(III\) & \(\alpha_{2}\) & \(-\frac{3q}{2}\) & \(\frac{\sqrt{3}a}{2}\) & \(-\frac{\sqrt{3}}{2}\) & \(\frac{1}{2}\) \\ \hline 4 & \(III\) & \(\alpha_{2}\) & \(-\frac{3q}{2}\) & \(-\frac{\sqrt{3}a}{2}\) & \(-\frac{\sqrt{3}}{2}\) & \(-\frac{1}{2}\) \\ \hline 2 & \(I,IV\) & \(\alpha_{2}\) & \(\frac{3q}{2}\) & \(\frac{\sqrt{3}a}{2}\) & \(\frac{\sqrt{3}}{2}\) & \(\frac{1}{2}\) \\ \hline 4 & \(II,IV\) & \(\alpha_{2}\) & \(\frac{3q}{2}\) & \(-\frac{\sqrt{3}a}{2}\) & \(\frac{\sqrt{3}}{2}\) & \(-\frac{1}{2}\) \\ \hline 2 & \(II\) & \(\alpha_{2}\) & 0 & \(-\sqrt{3}a\) & 0 & \(-1\) \\ \hline 4 & \(I\) & \(\alpha_{2}\) & 0 & \(\sqrt{3}a\) & 0 & 1 \\ \hline \end{tabular} \end{table} Table XI: Table listing the details related to the lattice dynamics of monatomic honeycomb lattice with respect to the origin atom with index 0. Figure 25: Hexagonal lattice is a non-Bravais lattice: Lattice sites labelled \(I\) and \(II\) are not equivalent due to the different orientation of adjacent neighbors at the respective sites. as follows: Equation of motion in \(\mathbf{x}\) direction for atom 0: \[m\,\frac{d^{2}x_{0}}{dt^{2}}=-[(\alpha_{1})\,(x_{0}-x_{3})+( \alpha_{1})\,\left(\frac{\sqrt{3}}{4}\right)(y_{1}-y_{5})+\\ +\frac{(\alpha_{1})}{4}\,(2x_{0}-x_{1}-x_{5})+(\alpha_{2})\,\left( \frac{3}{4}\right)(4x_{0}-2x_{2}-2x_{4})] \tag{28}\] Equation of motion in \(\mathbf{y}\) direction for atom 0: \[m\,\frac{d^{2}y_{0}}{dt^{2}}=-[(\alpha_{1})\,\left(\frac{3}{4} \right)(2y_{0}-y_{1}-y_{5})+(\alpha_{2})\,(2y_{0}-y_{2}-y_{4})+\\ +(\alpha_{1})\,\left(\frac{\sqrt{3}}{4}\right)(x_{1}-x_{5})+( \alpha_{2})\,\left(\frac{1}{4}\right)(4y_{0}-2y_{2}-2y_{4})] \tag{29}\] Similar equations of motion are formulated by choosing each of the remaining five atoms in the unit cell. Hence, we get a system of _12_ coupled second order differential equations which are solved simultaneously in displacement-time domain using the fourth order Runge-Kutta algorithm. The computation is done using \(m=0.01\) for the atomic mass parameter and for different ratios of the force constants (\(\alpha_{1}:\alpha_{2}\)). The FFT computation of the instantaneous displacement solutions obtained for the system of _12_ coupled equations of motion for the six atoms in the unit cell exhibit 4 peaks for each atom, irrespective of the randomized set of initial conditions and our choice of force constant ratio (\(\alpha_{1}:\alpha_{2}\)). 
Once again, the relative heights of FFT peaks are only indicative of the degree of participation of a given atom in a given normal (phonon) mode of vibration and this aspect of the problem is insignificant in the context of the present work. Hence, we depict the FFT plots of only one of the unit cell atoms in Fig. 27 and Fig. 28 corresponding to the force constant ratios (\(\alpha_{1}:\alpha_{2}=4:1\)) and (\(\alpha_{1}:\alpha_{2}=20:1\)) respectively. The other ratio values that were explored in computation are (\(\alpha_{1}:\alpha_{2}=2:1,\ 6:1,\ 8:1,\ 10:1,\ \&12:1\)). The FFT computation of displacement-time solution in each case revealed only 4 distinct peaks. To validate the FFT computed results, the analytical treatment of the problem is also done by setting up the corresponding secular determinant. The equations of motion for atom 1 as origin are derived using its neighbor details given in Table 10 and the corresponding lattice details in Table 11. The differential equations of motion so formulated for atom 1 are: Figure 27: FFT plot of honeycomb lattice with force constant ratio 4:1. \begin{table} \begin{tabular}{|p{113.8pt}|} \hline Elements of the Secular Determinant (S) \\ \hline \(S_{11}=S_{33}=M\,\omega^{2}-\frac{3}{2}\,\alpha_{1}-3\,\alpha_{2}+\frac{3}{2} \,\alpha_{2}\,(C_{1}+C_{2})\) \\ \hline \(S_{12}=S_{21}=-\,\frac{\sqrt{3}}{2}\,\alpha_{2}\,(C_{2}-C_{1})\) \\ \hline \(S_{13}=S_{31}^{*}=\alpha_{1}\,e^{ik_{1}a}+\frac{\alpha_{1}}{2}\,e^{-i(\frac{ ik_{2}}{2})}\,C_{3}\) \\ \hline \(S_{14}=S_{41}^{*}=-\,\frac{\sqrt{3}}{2}\,\alpha_{1}\,e^{-i(\frac{ik_{2}}{2})}\, S_{1}\) \\ \hline \(S_{22}=S_{44}=M\,\omega^{2}-\frac{3}{2}\,\alpha_{1}\,-3\,\alpha_{2}+2\,\alpha_{2}\,C_{4}+ \frac{\alpha_{2}}{2}(C_{1}+C_{2})\) \\ \hline \(S_{23}=S_{32}^{*}=-\,\frac{\sqrt{3}}{2}\,\alpha_{1}\,e^{-i(\frac{ik_{2}}{2})}\, S_{1}\) \\ \hline \(S_{24}=S_{42}^{*}=\frac{3}{2}\,\alpha_{1}\,e^{-i(\frac{ik_{2}}{2})}\,C_{3}\) \\ \hline \(S_{34}=S_{43}=\frac{\sqrt{3}}{2}\,\alpha_{2}\,(C_{1}-C_{2})\) \\ \hline with \(C_{1}=cos(\frac{3k_{4}a}{2}+\frac{\sqrt{3}k_{4}a}{2})\); \(C_{2}=cos(\frac{3k_{4}a}{2}-\frac{\sqrt{3}k_{4}a}{2})\); \(C_{3}=cos(\frac{\sqrt{3}k_{4}a}{2})\); \(C_{4}=cos(\sqrt{3}k_{4}a)\); \(S_{1}=sin(\frac{\sqrt{3}k_{4}a}{2})\) \\ \hline \end{tabular} \end{table} Table 11: Table 10: Table 11: Table 10: Table 11: Table 12: Table 10: Table 11: Table 11: Table 10: Table 11: Table 11: Table 10: Table 11: Table Equation of motion in \(\mathbf{x}\) direction for atom 1: \[m\,\frac{d^{2}x_{1}}{dt^{2}}=-[\alpha_{1}\left(x_{1}-x_{2}\right)- \alpha_{1}\left(\frac{\sqrt{3}}{4}\right)(y_{4}-y_{0})+\\ \alpha_{1}\,\left(\frac{1}{4}\right)\,(2x_{1}-x_{0}-x_{4})+\alpha _{2}\left(\frac{3}{4}\right)(4x_{1}-2x_{3}-2x_{5})] \tag{30}\] Equation of motion in \(\mathbf{y}\) direction for atom 1: \[m\,\frac{d^{2}y_{1}}{dt^{2}}=-[\alpha_{1}\left(\frac{3}{4}\right) (2y_{1}-y_{0}-y_{4})-\alpha_{1}\left(\frac{\sqrt{3}}{4}\right)(x_{4}-x_{0})+\\ \alpha_{2}\,(2y_{1}-y_{3}-y_{5})+\alpha_{2}\left(\frac{1}{4}\right) (4y_{1}-2y_{3}-2y_{5})]. \tag{31}\] The elements of the \(4\times 4\) secular determinant \(S\) obtained using Eqn. 28 - Eqn. 31 are summarized in Table 11. The analytical expressions of the phonon dispersion relations obtained by solving the secular determinant at the FBZ symmetry points are given in Table 13. Figure 29 shows the first Brillouin zone and the location of symmetry points for a honeycomb lattice. The analytically derived phonon spectrum is depicted in Fig. 30. 
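These closed-form expressions are straightforward to evaluate numerically. Since the absolute force constants are not quoted in the text, the sketch below assumes \(\alpha_{1}=20\) and \(\alpha_{2}=1\) (a 20:1 ratio) together with \(m=0.01\), a combination consistent with the tabulated theoretical values for the 20:1 case; it is meant purely as an illustrative check.

```python
import numpy as np

# Illustrative evaluation of the analytical honeycomb-lattice frequencies at the
# zone centre and the K point.  alpha1 = 20, alpha2 = 1 and m = 0.01 are assumed
# (not quoted in the text), consistent with the tabulated 20:1 values.
m, a1, a2 = 0.01, 20.0, 1.0
f = lambda s: np.sqrt(s/m)/(2*np.pi)

gamma = [f(3*a1), f(3*a1 + 1.5*a2), 0.0, f(1.5*a2)]
root  = 0.5*np.sqrt(2.25*a1**2 + a2**2)
kpt   = [f(2.25*a1 + 4*a2 + root), f(2.25*a1 + 4*a2 - root),
         f(0.75*a1 + 4*a2 - root), f(0.75*a1 + 4*a2 + root)]
print("Gamma:", np.round(gamma, 3))   # ~[12.33, 12.48, 0.0, 1.95]
print("K    :", np.round(kpt,   3))   # ~[12.73, 9.28, 3.18, 9.28]
```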
Tables 14 and 15 give the mapping of analytical and computed phonon frequencies for force constant ratios \(\alpha_{1}:\alpha_{2}=4:1\) and \(\alpha_{1}:\alpha_{2}=20:1\), corresponding to the plots in Fig. 27 and Fig. 28 respectively. On comparing the analytical and FFT computed values of phonon frequencies, it is found that irrespective of the \(\alpha_{1}:\alpha_{2}\) ratio, FFT captures the \(f_{1}^{G}\) and the four \(K\)-point frequencies in each case. The phonon branches corresponding to \(f_{2}^{K}\) & \(f_{4}^{K}\) are found to be very nearly degenerate in each case. The relative accuracy and resolution of the captured FFT peaks, however, varies with ratio of force constants. As can be seen from the tables, the \(20:1\) ratio exhibits a higher accuracy than \(4:1\) ratio for all the \(K\)-point captured peaks. The \(f_{1}^{G}\) and \(f_{1}^{K^{\prime}}\) peaks exhibit a better resolution for \(4:1\) ratio than the \(20:1\) ratio. The \(M\)-point phonon frequencies are not captured in any of the cases except the capture of \(f_{4}^{M^{\prime}}\) peak for the \(4:1\) ratio because of its coincidental close degeneracy with the \(f_{2}^{K}\) & \(f_{4}^{K}\) peaks for the given ratio of force constants. Overall, our FFT computation successfully captures the frequency peaks for each of the four normal modes of vibration expected for a monatomic honeycomb lattice. Its failure to capture the all the phonon frequencies is once again attributed to the inadequacy of our model to account for the interatomic interactions involved in the phonon modes corresponding to the missing frequencies. It is proposed that if two-body deLauney [21] or three-body CGW [33] type of angular forces are used to model \begin{table} \begin{tabular}{|c|c|c|} \hline Phonon & Symmetry & Frequency Expression \\ Frequency & Points & Frequency Expression \\ \hline \(f_{1}^{G}\) & & \(\frac{1}{2\pi}\sqrt{\frac{3\alpha_{1}}{m}}\) \\ \(f_{2}^{G}\) & & \(\frac{1}{2\pi}\sqrt{\frac{3\alpha_{1}+1.5\alpha_{2}}{m}}\) \\ \(f_{3}^{G}\) & & \(0\) \\ \(f_{4}^{G}\) & & \(\frac{1}{2\pi}\sqrt{\frac{1.5\alpha_{2}}{m}}\) \\ \hline \(f_{1}^{M}\) & & \(\frac{1}{2\pi}\sqrt{\frac{3\alpha_{1}+2\alpha_{2}}{m}}\) \\ \(f_{2}^{M}\) & \(M\left(\frac{3\pi}{3\alpha},\,0\right)\) & \(\frac{1}{2\pi}\sqrt{\frac{2\alpha_{1}+6\alpha_{2}}{m}}\) \\ \(f_{3}^{M}\) & & \(\frac{1}{2\pi}\sqrt{\frac{2\alpha_{2}}{m}}\) \\ \(f_{4}^{M}\) & & \(\frac{1}{2\pi}\sqrt{\frac{\alpha_{1}+6\alpha_{2}}{m}}\) \\ \hline \(f_{1}^{K}\) & & \(\frac{1}{2\pi}\sqrt{\frac{\frac{9}{4}\alpha_{1}+4\alpha_{2}+\frac{1}{2}\sqrt{ \frac{9}{4}\alpha_{1}^{2}+\alpha_{2}^{2}}}{m}}\) \\ \(K\left(\frac{2\pi}{3\alpha},\,\frac{2\pi}{3\sqrt{3}\alpha}\right)\) & \(\frac{1}{2\pi}\sqrt{\frac{\frac{9}{4}\alpha_{1}+4\alpha_{2}-\frac{1}{2}\sqrt{ \frac{9}{4}\alpha_{1}^{2}+\alpha_{2}^{2}}}{m}}\) \\ \(f_{2}^{K}\) & & \(\frac{1}{2\pi}\sqrt{\frac{\frac{9}{4}\alpha_{1}+4\alpha_{2}-\frac{1}{2}\sqrt{ \frac{9}{4}\alpha_{1}^{2}+\alpha_{2}^{2}}}{m}}\) \\ \(f_{3}^{K}\) & & \(\frac{1}{2\pi}\sqrt{\frac{\frac{3}{4}\alpha_{1}+4\alpha_{2}-\frac{1}{2}\sqrt{ \frac{9}{4}\alpha_{1}^{2}+\alpha_{2}^{2}}}{m}}\) \\ \(f_{4}^{K}\) & & \(\frac{1}{2\pi}\sqrt{\frac{\frac{3}{4}\alpha_{1}+4\alpha_{2}+\frac{1}{2}\sqrt{ \frac{9}{4}\alpha_{1}^{2}+\alpha_{2}^{2}}}{m}}\) \\ \hline \end{tabular} \end{table} Table 11: Table listing the analytical expressions for phonon frequencies at the high symmetry points in a monatomic honeycomb lattice in terms of nearest neighbor and next-nearest neighbor central force constants \(\alpha_{1}\) and 
\(\alpha_{2}\) respectively. the nearest and next-nearest neighbors along with the central forces, one may capture the missing frequencies in the phonon spectrum of a monatomic honeycomb lattice. ## VII Conclusion The traditional analytical method of lattice dynamical investigation assumes the existence of plane wave solutions for each atom in the unit cell. This approach relies on an implicit assumption of BvK periodic boundary conditions for constructing the secular determinant. In the pedagogical context at the undergraduate level, the method works very well for solving problems relating to 1D and 2D lattices which involve only central forces in the nearest neighbor approximation. Most of the standard solid state physics textbooks such as Omar[2], Dekker[6] and Kittel[9] often focus on analyzing simplified models for linear lattices or provide a qualitative description of phonon dynamics in simple 2D and 3D lattices assuming nearest neighbour central force inter-atomic approximations. The conventional analytical approach for an undergraduate student becomes mathematically intricate even for the relatively straightforward cases of monatomic square and simple cubic bravais lattices[11]. Our novel approach comprises of explicitly incorporating the BvK boundary conditions to condense an infinite lattice to a finite lattice, solving the equations of atomic motion in the displacement-time domain and then using the FFT technique to compute the phonon spectrum. This approach allows students to move beyond the mathematical rigor and unveil the foundational aspects of the rich physics behind periodic solids. The work serves to provide a valuable tool for visualizing lattice dynamics with the explicit implementation of PBCs, facilitating an intuitive understanding of lattice periodicity and vibrations for undergraduate students. The various models discussed in the present work can be easily extended to explore the lattice dynamics of other commonly encountered structures. Some exercises are suggested below for the interested reader: 1. Diatomic Square lattice using central and angular forces. 2. Hexagonal lattice with two non-equivalent lattices sites \begin{table} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Theoretical Phonon Frequencies at the Symmetry Points} \\ \hline \(G\left(0,0\right)\) & \(M\left(\frac{2\pi}{3a},\ 0\right)\) & \(K\left(\frac{2\pi}{3a},\ \frac{2\pi}{3\sqrt{3a}}\right)\) \\ \hline \(f_{1}^{G}=12.328\) & \(f_{1}^{M}=12.531\) & \(f_{1}^{K}=12.733\) \\ \(f_{2}^{G}=12.482\) & \(f_{2}^{M}=10.794\) & \(f_{2}^{K}=9.279\) \\ \(f_{3}^{G}=0.0\) & \(f_{3}^{M}=2.250\) & \(f_{3}^{K}=3.179\) \\ \(f_{4}^{G}=1.949\) & \(f_{4}^{M}=8.115\) & \(f_{4}^{K}=9.281\) \\ \hline \multicolumn{3}{|c|}{Phonon Frequencies captured by FFT} \\ \hline \(f_{1}^{G}=12.322\) & & \(f_{1}^{K}=12.766\) \\ \((0.04\%)\) & \(-\) & \((0.26\%)\) \\ & & \(f_{2}^{K^{\prime}}=9.337\) \\ \(-\) & \(-\) & \((0.63\%)\) \\ \(f_{3}^{G^{\prime}}=0.0\) & & \(f_{3}^{K^{\prime}}=3.368\) \\ \((0.00\%)\) & \(-\) & \((5.94\%)\) \\ & & \(f_{4}^{K^{\prime}}=9.337\) \\ \(-\) & \(-\) & \((0.60\%)\) \\ \hline \end{tabular} \end{table} Table 17: Table lists the expected theoretical phonon frequency values and corresponding computational values for a honeycomb lattice with force constant ratio 20:1. 
\begin{table} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Theoretical Phonon Frequencies at the Symmetry Points} \\ \hline \(G\left(0,0\right)\) & \(M\left(\frac{2\pi}{3a},\ 0\right)\) & \(K\left(\frac{2\pi}{3a},\ \frac{2\pi}{3\sqrt{3a}}\right)\) \\ \hline \(f_{1}^{G}=12.328\) & \(f_{1}^{M}=12.531\) & \(f_{1}^{K}=12.733\) \\ \(f_{2}^{G}=12.482\) & \(f_{2}^{M}=10.794\) & \(f_{2}^{K}=9.279\) \\ \(f_{3}^{G}=0.0\) & \(f_{3}^{M}=2.250\) & \(f_{3}^{K}=3.179\) \\ \(f_{4}^{G}=1.949\) & \(f_{4}^{M}=8.115\) & \(f_{4}^{K}=9.281\) \\ \hline \multicolumn{3}{|c|}{Phonon Frequencies captured by FFT} \\ \hline \(f_{1}^{G}=12.322\) & & \(f_{1}^{K^{\prime}}=12.766\) \\ \((0.04\%)\) & \(-\) & \((0.26\%)\) \\ & & \(f_{2}^{K^{\prime}}=9.337\) \\ \(-\) & \(-\) & \((0.63\%)\) \\ \(f_{3}^{G^{\prime}}=0.0\) & & \(f_{3}^{K^{\prime}}=3.368\) \\ \((0.00\%)\) & \(-\) & \((5.94\%)\) \\ & & \(f_{4}^{K^{\prime}}=9.337\) \\ \(-\) & \(-\) & \((0.60\%)\) \\ \hline \end{tabular} \end{table} Table 18: Table lists the expected theoretical phonon frequency values and corresponding computational values for a honeycomb lattice with force constant ratio 20:1. Figure 30: Phonon Spectrum \((\omega^{2}-k)\) of honeycomb lattice in the first Brillouin zone: Temporal frequency notations are used to annotate the high symmetry points. occupied by atoms of different masses. * Dynamics of a simple cubic lattice, body-centred cubic lattice and face-centred cubic in the nearest neighbour central force approximation. (For reference, readers may refer to Problem 5 on Page 449-450 in Solid State Physics by Ashcroft and Mermin [11] for a face-centred cubic lattice and Pages 109-114 in the book by H.C. Gupta [8] for body-centred cubic lattice.) The analytical expressions given in the textbooks can be used to validate and interpret the computed results of the lattices in the nearest neighbor approximation. * The readers suggested to explore the inclusion of anharmonicity in the system (for example, a cubic interatomic force as laid out in Problem 9.24 on Page 336 in the book by Harvey Gould) [34] for the case of a 1D linear monatomic chain. The conventional linear algebra cannot be employed in cases of non-linear interacting forces. However, the approach elucidated in the paper will account for the anharmonicity in the system. We firmly believe that the present work contributes to the development of a computational acumen in budding undergraduate physicists, piquing their curiosity to delve deeper into Physics through the use of numerical tools. *_A preliminary account of the present work was presented at the March meeting of the American Physical Society, March 20-22, 2023 [Bulletin of the American Physical Society 2023, Session TT02.00006]._
2309.11002
PPD: A New Valet Parking Pedestrian Fisheye Dataset for Autonomous Driving
Pedestrian detection under valet parking scenarios is fundamental for autonomous driving. However, the presence of pedestrians can be manifested in a variety of ways and postures under imperfect ambient conditions, which can adversely affect detection performance. Furthermore, models trained on public datasets that include pedestrians generally provide suboptimal outcomes for these valet parking scenarios. In this paper, we present the Parking Pedestrian Dataset (PPD), a large-scale fisheye dataset to support research dealing with real-world pedestrians, especially with occlusions and diverse postures. PPD consists of several distinctive types of pedestrians captured with fisheye cameras. Additionally, we present a pedestrian detection baseline on PPD dataset, and introduce two data augmentation techniques to improve the baseline by enhancing the diversity of the original dataset. Extensive experiments validate the effectiveness of our novel data augmentation approaches over baselines and the dataset's exceptional generalizability.
Zizhang Wu, Xinyuan Chen, Fan Song, Yuanzhu Gan, Tianhao Xu, Jian Pu, Rui Tang
2023-09-20T01:55:19Z
http://arxiv.org/abs/2309.11002v2
# PPD: A New Valet Parking Pedestrian Fisheye Dataset for Autonomous Driving ###### Abstract Pedestrian detection under valet parking scenarios is fundamental for autonomous driving. However, the presence of pedestrians can be manifested in a variety of ways and postures under imperfect ambient conditions, which can adversely affect detection performance. Furthermore, models trained on public datasets that include pedestrians generally provide suboptimal outcomes for these valet parking scenarios. In this paper, we present the Parking Pedestrian Dataset (PPD), a large-scale fisheye dataset to support research dealing with real-world pedestrians, especially with occlusions and diverse postures. PPD consists of several distinctive types of pedestrians captured with fisheye cameras. Additionally, we present a pedestrian detection baseline on PPD dataset, and introduce two data augmentation techniques to improve the baseline by enhancing the diversity of the original dataset. Extensive experiments validate the effectiveness of our novel data augmentation approaches over baselines and the dataset's exceptional generalizability. Datasets, Pedestrian detection, Data augmentation, Valet parking ## I Introduction To develop an advanced driver assistance system (ADAS) that is both effective and safe for parking lot scenarios [1, 2, 3, 4, 5], it is critical to ensure the safety of road users such as pedestrians. The detection range of conventional pinhole cameras is often insufficient to detect the variety of behaviors and postures displayed by pedestrians. As an alternative to pinhole cameras, fisheye cameras could have a wider field of vision (FoV) [6], which is necessary for the perception of close range and low altitude, particularly in a traffic bottleneck. Thus, fisheye cameras are becoming increasingly prominent in driverless vehicles as intelligent alternatives to traditional cameras. Nevertheless, such pedestrian detection [7, 8, 9, 10] still remains difficult due to evasive irregular postures and imprecise surrounding circumstances. First, there is a wide range of pedestrian behaviors that are rarely represented in publicly available datasets, such as occlusion, lying down, walking, etc. Second, the fisheye lens's radial distortion leads to substantial appearance distortion [11, 12], complicating the pedestrian recognition process. Additionally, the quality of images is significantly affected by environmental factors such as light and opacity. Current datasets and benchmarks for pedestrian detection, including Caltech USA [13], KITTI [14], CityPersons [15], and Wider-Person [16], have aided in rapid progress for the pedestrian detection task. These datasets usually encompass urban, rural, or highway driving scenes, and their pinhole cameras comfortably capture high-grade images with clear and distinguishable pedestrians. Moreover, Valeo delivers the fisheye automotive dataset WoodScape with extensive content [17]. However, public datasets place insufficient emphasis on pedestrians with irregular postures and fisheye image formation. The models trained on public datasets reveal suboptimal performance in difficult parking scenes without a large number of training instances, as shown in Fig.1 and Fig.2. To expand real-world fisheye pedestrians' images with various occlusion and postures under valet parking scenes, this paper offers a new large-scale fisheye dataset called **P**arking **P**edestrian **D**ataset (**PPD**) to promote the research on pedestrian problems, as shown in Figure 2 (b). 
Different from other pedestrian datasets [13, 14, 15, 16], our **PPD** dataset focuses on pedestrian detection and provides more than 330K fisheye images in valet parking scenes. To guarantee the pedestrians' diversity, we collect data from different parking lots, various periods, and diverse pedestrian situations. Additionally, we subdivide **PPD** into three sub-datasets: Occlusion Pedestrians (**OP**), Posture Pedestrians (**PP**), and Lying-down Pedestrians (**LP**). **OP** involves pedestrians occluded Fig. 1: The cross-dataset testing on the **PPD** dataset. “Evaluation” refers to explicitly evaluating public datasets’ pre-trained models on the **PPD** dataset, with suboptimal results compared with the model trained from scratch. “Finetune” means finetuning these pre-trained models on the **PPD** dataset with little advancement. or parking lots' pillars. **PP** is concerned with pedestrians' abundant postures, including standing, stooping, sitting, etc. **LP** concentrates on lying-down pedestrians, the most perilous situation that requires immediate early warning. To reduce annotation costs and further broaden the diversity of pedestrians, we further propose two data augmentation techniques: **Occ-Data-Augmentation (ODA)** and **Pos-Data-Augmentation (PDA)** for pedestrians' occlusions and postures, respectively. Using **ODA** and **PDA**, high-quality synthetic images are generated through the collecting, localization, and local fusion procedures, complementing the commonly used hybrid augmentation methods [18, 19, 20]. Besides, we build pedestrian detection baselines on our **PPD** dataset and extensive experiments validate the effectiveness of our novel data augmentation approaches. In addition, the cross-dataset evaluation reveals **PPD**'s exceptional capacity to generalize. Our contributions are summarized as follows: * We provide the first fisheye dataset comprising over 330K fisheye images, particularly for diverse occlusion and postures of pedestrians in parking lot scenes. * We report the baseline results of pedestrian detection on the proposed **PPD** dataset and propose two novel data augmentation techniques to improve the baseline. * Extensive experiments demonstrate the effectiveness of **ODA**, **PDA**, and **PPD**'s exceptional generalizability. ## II Related Work In this section, we briefly introduce the related works to our topic, i.e., pedestrian detection dataset, pedestrian detection frameworks, and data augmentation methods. ### _Pedestrian Detection Datasets_ Pioneer works of pedestrian detection datasets involve [13, 14, 15, 16, 21, 22, 23, 24, 25], which contribute to great progress in pedestrian detection. There are large-scale datasets such as Caltech USA [13] and KITTI [14], which contain urban, rural, and highway scenes and provide annotation frame sequences on videos. However, both datasets have low pedestrian densities. More recently, researchers proposed vast and diversified datasets, WiderPerson [16] and CityPersons [15]. CityPersons [15] is the subset of the CityScapes Dataset, whose average pedestrian density grows three times that of KITTI [14]. WiderPerson [16] contains a range of scenarios, including marathon, traffic, activity, dance, and miscellany, totaling approximately 400 thousand annotated instances. Moreover, Valeo proposed the extensive automotive dataset WoodScape [17] with fisheye cameras instead of pinhole cameras. 
However, there is no publicly available benchmark dataset for valet parking pedestrian scenarios, particularly those including varied occlusions and postures, where suboptimal detection of pedestrians forms a threat to driving safety. ### _Pedestrian Detection Frameworks_ CNN-based pedestrian detection methods can be generally categorized into one-stage [26, 27, 28] and two-stage [29, 30] methods. As an end-to-end pipeline, one-stage methods achieve a significant trade-off between performance and speed, such as the SSD series [31, 32, 33, 34], YOLO series [27, 28, 35, 36] and Retinanet [26]. In contrast, two-stage methods, such as the RCNN series [29, 30, 37, 38], take advantage of the predefined anchors to improve the performance at the cost of speed. Furthermore, recent works [39, 40, 41, 42] fuse multi-scale feature maps to improve pedestrian detection with different scales. Moreover, other works [8, 43, 44, 45] focus on crowded pedestrian detection problems for actual applications. [43] designed a new boundary box regression loss specifically for better pedestrian localization. Liu et al. offer a non-maximum suppression method to refine the bounding boxes given by detectors [44]. Fig. 2: Green boxes indicate ground truth and red boxes indicate predictions. (a) The suboptimal performance of the detector pre-trained on public datasets in parking lot scenes. (b) Our Parking Pedestrian Dataset (PPD) uses data augmentation methods and provides diverse large real-world pedestrian data with occlusion and different postures. ### _Data Augmentation Methods_ Data Augmentation methods such as random cropping, color dithering and flipping, play an important role in achieving state-of-the-arts [46, 47, 48]. These enhancements are more generic in nature and are particularly relevant to expressing data transformation invariance. In addition, hybrid image augmentation [18, 19, 20] can mix cross-image information, which usually applies appropriate changes with labels to increase diversity. Furthermore, the adaptations of mixups [20, 49, 50] are popular among hybrid image augmentation. CutMix [49] pastes a rectangular crop of the image instead of mixing all pixels. It creates new composite pictures by combining the rectangular grids of individual images with actual boxes. Cut-Paste-and-Learn [50] extracts objects in poses and then mixes and pastes them to different backgrounds. Copy-paste [20] fuses information from different images in an object-aware manner: copying and pasting instances across images. ## III Parking Pedestrians Dataset In this section, we introduce our **P**arking **P**edestrian **D**ataset (**PPD**) dataset in detail, including the data collection, annotation protocols, informative statistics, and dataset characteristics. ### _Data Collection_ To ensure the diversity of pedestrians, we collect data from 3 cities, 12 parking lots, two periods (morning and evening), and different pedestrians with various ages, heights, clothes, and postures. In total, we captured 100 videos that last from 1 hour to 6 hours and with an average of 2 hours. Then, we convert the videos into pictures and select the images containing pedestrian instances. For high-quality images, we restrict the visible range of the fisheye camera and further remove distorted and blurred pedestrian images. Also, we do not cover all pedestrians' continuous moving processes for redundant annotations. 
Instead, we select the best-quality images and then apply our data augmentation methods (discussed later) to increase the data variance. Based on the images' content, we also divide them into three categories: occlusion pedestrians, posture pedestrians, and lying-down pedestrians. Table I illustrates the statistics of the **PPD** dataset. A total of more than 330K images comprise three sub-datasets: Occlusion Pedestrians Dataset, Posture Pedestrians Dataset and Living-down Pedestrians Dataset, with amounts of 111,521, 118,224 and 115,936, respectively. Besides, every sub-dataset further performs partitioning into training, validation and testing sets at a ratio of 5:3:2. ### _Image Annotation_ We annotate the dataset in the same way as the Caltech dataset [22], by drawing a tight bounding box around each pedestrian's complete body. However, occluded pedestrians are special since the foreground-like car bodies or parking lot pillars often lead to incomplete pedestrian instances. Therefore, we have to estimate the distance from the pedestrian instance to the car's fisheye camera, and then roughly calculate the size of the box according to the depth proportion, as shown below: \[W_{o}=W_{p}\times(1-D_{o}/D_{max}), \tag{1}\] \[H_{o}=H_{p}\times(1-D_{o}/D_{max}), \tag{2}\] where \(H_{p}\) and \(W_{p}\) are the average human height and width, predefined as 1.7 meters and 0.3 meters, respectively. \(D_{o}\) is the depth from the occluded pedestrian instance to the camera. Since the parking space has a fixed size, we can estimate the depth approximately by the relative location between the pedestrian instance and the nearby parking space within the same image. \(D_{max}\) is the max depth of the fisheye camera. Finally, based on the depth ratio between \(D_{o}\) and \(D_{max}\), we can roughly infer the annotated width \(W_{o}\) and height \(H_{o}\), as shown in Equations. (1) and (2). ### _Sub-datasets Description_ The occluded pedestrian dataset provides three occlusion scenarios with different occlusion rates. As shown in Fig. 3, to better restore reality, we collect the pedestrians occluded by the cars' part and cube obstacles with 10 occlusion rates, starting from not occluded to 99% with 10% increments per class. The posture pedestrian dataset contains four postures: standing, sitting, squatting and bending over. We strive to cover eight pedestrians' orientations, which are front, rear, left, right, left front, right front, left rear, and right rear. In addition, we divide lying-down pedestrians into the new subset since they are the most dangerous cases. We make an effort to cover the same eight orientations as the posture subset. The detailed distribution of posture and lying-down pedestrians is shown in Fig. 4, and a detailed explanation of **PPD**'s categories is reported in Table II. Furthermore, to further broaden pedestrians' diversity, we apply two novel data augmentation techniques to occlusion and posture pedestrians. The data volume increases, and later experiments will indicate improvements in pedestrian detection performance. ### _Dataset Characteristics_ Our **PPD** dataset exhibits differences from public datasets in image representation style, scenarios, quantity and diversity. Below, we elaborate on the four main characteristics of our **PPD** dataset. **Fisheye image representation.** The **PPD** dataset consists of fisheye images, different from the common pinhole images of public datasets. 
Fisheye images provide a larger field-of-view (FoV), which is more suitable for close-range and low-lying pedestrian detection. **Specific parking scenarios.** The **PPD** dataset focuses on pedestrian detection in parking scenarios, which is also distinct from natural scenes of public datasets. The environmental conditions in parking scenarios, such as light and opacity, significantly increase the detection difficulty. Concerning a variety of tough pedestrian scenarios, **PPD** can promote research in dealing with real-world pedestrian problems. **Large quantity.** Our **PPD** dataset obtains more than 330 thousand data samples from more than 200-hour parking scene video clips. We constantly collect diverse parking pedestrian scenarios, eventually reaching the goal of over one million data. **High quality and diversity.** Our **PPD** dataset covers 3 cities, 12 parking lots from different periods and different pedestrian cases. Additionally, we carefully select high-quality images with high resolution and apply data augmentation techniques to enlarge diversity. ## IV The Proposed Data Augmentations Training detectors for special pedestrians usually requires a large quantity of data, which demands tremendous resources and effort to acquire and annotate. Considering these challenges, we provide two novel data augmentation techniques: **O**cc-**D**ata-**A**ugmentation (**ODA**) and **P**os-**D**ata-**A**ugmentation (**PDA**). Specifically, **ODA** focuses on occluded pedestrians, and **PDA** targets pedestrians with different postures. In this section, we describe those two data augmentation methods in detail. ### _Overall Pipeline_ We define our data augmentation process as \(f(*)\), so the overall structure states are as follows: \[I_{i}^{syn}=f(I_{i}^{bg},M_{j}),i=1,2,\cdots,N,j=1,2,\cdots,K \tag{3}\] where \(I_{i}^{bg}\) is the background image, \(M_{j}\) indicates pedestrian masks and \(I_{i}^{syn}\) indicates the produced synthetic images. As shown in Fig. 5, our augmentation pipeline contains three stages: (1) collecting pedestrian masks and background images; (2) determining where to paste pedestrian masks; and (3) fusing pedestrian masks with background images. Fig. 4: The detailed distribution of posture pedestrian dataset and lying-down pedestrians dataset. Fig. 3: Examples of different occlusion scales and occlusion types in the Occlusion Pedestrian Dataset. ### _Occ-Data Augmentation_ We present the **O**cc-**D**ata-**A**ugmentation (**ODA**) method for occluded pedestrians with three procedures. The detailed procedure can be found in Algorithm 1. #### Iv-B1 Collecting pedestrian masks and background images We observe that in the valet parking scenes, cars' front or rear parts mostly occlude pedestrians. To address such occlusion, our **ODA** requires the masks of pedestrians \(M_{j}^{p},j=1,2,\cdots,K\) as foreground and the background images \(I_{i}^{bg},i=1,2,\cdots,N\), containing the masks of cars' front parts \(M_{i}^{car},i=1,2,\cdots,N\). For a background image, we label all available cars' front parts with the Labelme tool [51], as shown in Fig. 5 (a). Note that the pedestrian masks should be diverse and high quality, which determines the reality of the synthesized images. Thus, we process them with morphological operations such as OPEN and ERODE. To seamlessly paste pedestrian masks into background images, we apply occlusion-aware scaling, resizing the mask to a more realistic scale. 
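To make this first stage concrete, the snippet below gives a minimal sketch of the mask clean-up and occlusion-aware scaling described above, assuming OpenCV/NumPy and single-channel uint8 masks; the function names and the kernel size are illustrative choices of ours, not part of the released pipeline. The resize rule follows line 10 of Algorithm 1, which matches the pedestrian mask height to that of the occluding car part.

```python
# Minimal sketch of ODA step 1 (mask preprocessing); assumes OpenCV/NumPy,
# single-channel uint8 masks. Function names and kernel size are illustrative.
import cv2
import numpy as np

def clean_pedestrian_mask(mask: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Remove speckle noise near the mask contour with OPEN, then ERODE slightly."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.erode(opened, kernel, iterations=1)

def occlusion_aware_resize(mask: np.ndarray, h_car: int) -> np.ndarray:
    """Rescale the pedestrian mask so its height matches the occluding car part
    (h_p' = h_car, w_p' = w_p * h_car / h_p), keeping the aspect ratio."""
    h_p, w_p = mask.shape[:2]
    new_w = max(1, int(round(w_p * h_car / h_p)))
    return cv2.resize(mask, (new_w, h_car), interpolation=cv2.INTER_NEAREST)
```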
#### Iv-B2 Localization of synthetic occlusion Following Algorithm 1, we take one background image \(I_{i}^{bg}\) as input and randomly pick one car's front part \(\widetilde{M}_{t}^{car}\) as the pasting location. Then, we prepare the pedestrian mask \(M_{j}^{p}\), randomly selected from the mask list \(M^{p}\). #### Iv-B3 Local fusion for occlusion For more precise localization, we paste the top-left point \(P_{off}\) of the pedestrian mask above the car's front part \(\widetilde{M}_{t}^{car}\) with random but limited distances, as shown in Fig. 5(d). Specifically, we calculate \(P_{off}\) according to: \[P_{off}=\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}randint(x_{1},x_{2}-w_{p}^{\prime})\\ y_{1}-randint(0.2*h_{car},0.3*h_{car})\end{bmatrix} \tag{4}\] where \(w_{p}^{\prime}\) is the width of the resized pedestrian mask and \((x_{1},y_{1})\), \((x_{2},y_{2})\) are the top-left and bottom-right points of the front car part's mask. \(h_{car}\) and \(w_{car}\) are the corresponding width and height of the front car part's mask, respectively. It is noteworthy that all coordinates are relative to the top-left vertex. Furthermore, we remove the intersection between the pedestrian mask and the background, that is, the occlusion region: \(I_{occ}=\widetilde{M}_{j}^{p}\cap\widetilde{M}_{t}^{car}\) (line 13 of Algorithm 1). In this way, we accomplish the pseudo occluded pedestrian's mask \(I_{avil}^{fg}\). Then, we paste the pedestrian's mask into the background image and update the label, according to \[I_{i}^{syn}=\alpha*I_{avil}^{fg}+(1-\alpha)*I_{i}^{bg} \tag{5}\] where \(\alpha=1.0\) with foreground and \(\alpha=0.0\) with background. Finally, we generate synthetic image \(I_{i}^{syn}\) with the pseudo label, as shown in the top row of Fig. 6. Fig. 5: Overview of our data augmentation methods. Our methods include **O**cc-**D**ata-**A**ugmentation (**ODA**) and **P**os-**D**ata-**A**ugmentation (**PDA**). **ODA** and **PDA** have the same pipeline: (1) collecting pedestrian masks and background images; (2) determining where to paste pedestrian masks; (3) fusing pedestrian masks with background images. ### _Pos-Data-Augmentation_ ``` 0: pedestrian masks \(M_{j}^{p},j=1,2,\cdots,K\); background images \(I_{i}^{bg},i=1,2,\cdots,N\); front car parts' masks \(M_{i}^{car},i=1,2,\cdots,N\) 0:\(I_{i}^{syn},i=1,2,\cdots,N\) 1: Fix random seed; 2: Shuffle \(I^{bg}\) and \(M^{car}\); 3:\(i\gets 1\); 4:while\(i<=N\)do 5:\(\widehat{M}_{i}^{car},t=1,2,\cdots,T\leftarrow\) randomly select some of labels from \(M_{i}^{car}\); 6:for\(\widehat{M}_{t}^{car}\) in \(\widehat{M}^{car}\)do 7:\(x_{1},y_{1},x_{2},y_{2}\leftarrow\widehat{M}_{i}^{car}\)'s shape; 8:\(w_{car},h_{car}=x_{2}-x_{1},y_{2}-y_{1}\); 9:\(M_{j}^{p}\in\mathbb{R}^{w_{p}\times h_{p}}\leftarrow\) randomly select from \(M^{p}\); 10: Resize \(M_{j}^{p}\) to \(\widehat{M}_{j}^{p}\) with \(w_{p}^{\prime}=w_{p}\frac{h_{car}}{h_{p}}\), \(h_{p}^{\prime}=h_{car}\); 11:\(P_{off}=(randinit(x_{1},x_{2}-w_{p}^{\prime}),y_{1}-randinit(0.2*h_{car},0.3*h_ {car}))\); 12:\(I_{occ}=\widehat{M}_{j}^{p}*\widehat{M}_{i}^{car}\) in \(P_{off}\); 13:\(I_{avail}^{fg}=\widehat{M}_{j}^{p}-I_{occ}\); 14:\(I_{i}^{syn}=\alpha*I_{avail}^{fg}+(1-\alpha)*I_{i}^{bg}\); 15: Create pseudo label of \(I_{syn}\); 16:endfor 17:\(i\gets i+1\); 18:endwhile 19:return\(I^{syn}\). ``` **Algorithm 1** Occ-Data-Augmentation We illustrate our **Pos-Data-Augmentation (PDA)** method in Algorithm 2, which also contains three steps. 
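Before detailing the PDA steps, the sketch below illustrates the localization and local-fusion core of Algorithm 1, i.e., Eq. (4) for the paste location and Eq. (5) for the alpha-blend, which PDA reuses for its final fusion step. This is a minimal NumPy illustration with our own variable names and without bounds checking, not the released implementation.

```python
# Minimal sketch of the paste localization (Eq. 4) and local fusion (Eq. 5).
# Assumptions: NumPy RGB images of shape (H, W, 3), binary masks of shape (h, w);
# bounds checks omitted; names are illustrative only.
import numpy as np

def sample_paste_origin(car_box, ped_w, rng):
    """Eq. 4: top-left point P_off, slightly above the car's front part."""
    x1, y1, x2, y2 = car_box
    h_car = y2 - y1
    x = int(rng.integers(x1, max(x1 + 1, x2 - ped_w)))
    y = int(y1 - rng.integers(int(0.2 * h_car), int(0.3 * h_car) + 1))
    return x, y

def paste_with_occlusion(bg, ped_rgb, ped_mask, car_mask, origin):
    """Eq. 5: copy the visible foreground (alpha = 1) onto the background,
    dropping the part of the pedestrian hidden by the car part (I_occ)."""
    x, y = origin
    h, w = ped_mask.shape
    out = bg.copy()
    roi_car = car_mask[y:y + h, x:x + w]        # occluder inside the paste window
    visible = (ped_mask > 0) & (roi_car == 0)   # I_avail^fg = pedestrian minus I_occ
    out[y:y + h, x:x + w][visible] = ped_rgb[visible]
    return out, visible                         # `visible` also yields the pseudo box
```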
#### Iv-C1 Collecting pedestrian masks and background images Different from **ODA**, **PDA** only requires one source pedestrian image \(I_{s}\) and background images \(I_{i}^{bg},i=1,2,\cdots,N\). The source image requires pedestrians with a complete body structure and black background, with the assistance of the Labelme tool [51]. Moreover, Liquid Warping GAN (AttLWB) [52] could create different human postures with a reference video, which warps the visible textures of source images to the desired poses. With the help of AttLWB [52], we obtain a series of synthetic pedestrian masks \(M_{j}^{p},j=1,2,\cdots,K\) with different pedestrian postures, as shown in Fig. 5(c). Furthermore, we also take the morphological operations OPEN and ERODE to process the synthetic pedestrian masks to remove the noise near the mask's contours. #### Iv-C2 Localization of synthetic posture pedestrians. Since posture pedestrians must lie within the freespace region of parking scenes, we first detect the freespace region. We train a simple semantic segmentation model for freespace region detection and randomly pick one location within the model's freespace prediction. It is notable that this approach slightly requires time and computational resources, but we apply the procedure to ensure the quality of pseudo labels. #### Iv-C3 Local fusion of pedestrian masks Finally, we resize the masks at a limited scale and paste the pedestrian masks into the background at the selected freespace location: \[\hat{I}_{i}^{syn}=\alpha*M_{j}^{syn}+(1-\alpha)*\hat{I}_{i}^{bg}. \tag{6}\] Then, we obtain synthetic posture pedestrians with pseudo labels, as shown in Fig. 5 (e). The bottom row of Fig. 6 illustrates the **PDA**'s examples. ## V Experiments In this section, we report the baseline results and the results of two proposed data-augmentation techniques on our **PPD** dataset. Then, we discuss **PPD**'s generalization across datasets. Furthermore, we analyze the effects of **ODA** and **PDA** by ablation studies and comparisons. ### _Experimental Settings_ #### V-A1 Implementation Details We conduct experiments with the Pytorch framework on Ubuntu system and employ eight NVIDIA RTX A6000s. The learning rate is set to 0.12, while the momentum and learning decay rates are set to 0.9 and 0.01, respectively. For training, we adopt the stochastic gradient descent (SGD) solver, 48 epochs, and 16 batch size. For our data augmentation experiments, we mix 250,000 augmentation images with the original **PPD** dataset. Fig. 6: Examples of augmented images. Top row: ODA results; Bottom row: PDA results. #### Iv-A2 Evaluation Metrics The detection task ought to chase superb targets for the location to ensure pedestrians' safety. Therefore, we select a high IoU criterion of 0.75 for object detection metrics: Average Precision (AP) and Average Recall (AR). The high threshold forms a stricter examination to filter more robust models for the pedestrian detection task. #### Iv-A3 Baseline methods Our baseline detectors contain CenterNet [53] with backbone DLA34 [54], YOLOF [35], Faster R-CNN [37], Cascade RCNN [55] and RetinaNet [26] with ResNet-50 [56] backbone. All baselines have occupied the field of object detection in recent years. To ensure comparability, all baselines utilize the same experimental settings as their release. #### Iv-A4 Datasets We also choose several public datasets for cross-dataset evaluation: COCO [21], KITTI [14], CityPersons [15], and WiderPerson [16], where the last two datasets aim for the category "Person". 
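Before turning to the results, the following self-contained sketch illustrates the strict IoU >= 0.75 matching criterion that underlies the AP75/AR75 metrics reported below. It is a plain-Python illustration for exposition only, not the evaluation code used to produce the tables, and the function names are ours.

```python
# Illustration of the strict IoU >= 0.75 matching behind AP75/AR75 counting.
def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def match_at_075(pred_boxes, gt_boxes, thr=0.75):
    """Greedy one-to-one matching: each prediction claims at most one GT box."""
    used, tp = set(), 0
    for p in pred_boxes:                     # assume predictions sorted by score
        best, best_iou = None, thr
        for j, g in enumerate(gt_boxes):
            if j not in used and iou(p, g) >= best_iou:
                best, best_iou = j, iou(p, g)
        if best is not None:
            used.add(best)
            tp += 1
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    return tp, fp, fn                        # precision and recall follow directly
```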
### _Results and Analysis_ #### Iv-B1 **Results on PPD and PPD w/ DA** To demonstrate the effectiveness of our data-augmentation methods, we conduct baseline evaluation based on the **PPD** dataset and the mixed dataset with augmentation images, as shown in Table III. For the original **PPD** dataset, the two-stage Faster RCNN wins on AP75, and the one-stage CenterNet wins on AR75. As a new anchor-free pipeline, CenterNet focuses on object center detection, which brings higher recall and perhaps lower precision. All the performance enhancements are approximately 2% to 4% when mixed with data-augmentation images. We attribute the advancement to our realistic synthetic images, which satisfy the appetite of pedestrians with various occlusions and postures at a low cost. Besides, we explore the subdatasets' performance with data-augmentation images (w/ DA), as shown in Table IV. Additional data-augmentation images take effect individually on both sub-datasets. #### Iv-B2 **Cross-dataset Evaluation** First, we test how well models, which perform well on commonly used datasets, perform on our **PPD** dataset. We train CenterNet models on the public datasets COCO, KITTI, CityPersons, and WiderPerson. Then, we infer and evaluate them on the **PPD** dataset, as shown in Table V. We observe that models pre-trained on public datasets perform suboptimally, indicating their inadequacy with irregular pedestrian fisheye instances. Furthermore, we finetune these models on the **PPD** dataset, also as shown in Table V. In comparison to the **PPD** model trained from scratch, the pre-trained cross-dataset models do not make much advancement, even with a small performance drop similar to the KITTI dataset. We conjecture that public datasets are insufficient to compensate for the absence of different pedestrians with occlusion and varied postures. Moreover, we conduct generalization experiments from **PPD** on public datasets based on CenterNet [53], as shown in Table VI. After finetuning, our **PPD** dataset gains approximately 2% to 5% enhancement compared to the baselines trained on public datasets, especially for AR 75. **PPD**'s pedestrian cases cover the usual pedestrian scenes, which considerably increases the recall and lift generalization ability. #### Iv-B3 **Ablation Study** We conduct ablation studies for the **Occ**-**Data**-Augmentation (**ODA**) and **Pos**-**Data**-Augmentation (**PDA**) methods based on CenterNet [53], as shown in Table VII. (b) and (c) rows show that training only with synthetic images does not make sense because of the data domain shift. From rows (d) and (e), our **ODA** and **PDA** obviously make great progress, especially **ODA**, contributing approximately 2% AP75 improvement. Both techniques are effective, and the result performs best in combination, as shown in the (f) row. #### V-C4 **Comparison with Copy-paste** Copy-paste [20] plays an important role in hybrid image augmentation. We compare our data-augmentation techniques with copy-paste based on CenterNet, as shown in Table VIII. Surprisingly, from (b) row, the model's performance with copy-paste degrades by approximately 5%. We analyze copy-paste copies and paste instances across images but without any fusion processing, which easily leads to unreliable images and false detection. In contrast, our well-organized adaptive pasting localization and fusion strategies bring an improvement of 3.2% in AP75 and 3.0% in AR75 in the (c) row. 
#### V-C5 **Discussion with amount of data-augmentation images** Theoretically, we could produce infinite pseudo-labeling images. However, a large quantity of training data occupies large amounts of resources and time. To trade off the efficiency and performance, we explore the optimal quantity for pseudo-labeling images based on the CenterNet [53] method and the **PPD** dataset, as shown in Table IX. Interestingly, from rows (d), (e) and (f), most training data do not perform the best result, perhaps resulting from overfitting. Before overfitting, a larger image volume means greater enhancement, as illustrated in rows (b), (c) and (d). ## VI Conclusion In this paper, we have presented a new dataset, the Parking Pedestrians Dataset (**PPD**), as well as two unique data-augmentation techniques, Occ-Data-Augmentation (**ODA**) and Pos-Data-Augmentation (**PDA**). By providing a diversity of pedestrian postures, the proposed dataset aims to assist the industry in constructing a more secure advanced driving assistance system. Moreover, we provide two techniques for enhancing pedestrian detection performance using data augmentation. Extensive experiments on the proposed **PPD** validate the effectiveness of the techniques. However, **PPD** has a large capacity for development, including how to strengthen the realism of the data augmentation, simplify our methodologies, deal with sustainably increasing data and have the potential for diverse vision tasks. Nevertheless, we expect **PPD** to inspire more relevant research and promote the performance of pedestrian detection under parking scenes. In the future, the proposed **PPD** dataset's potential not only lies in pedestrian detection but can also be extended into other vision tasks, such as pixel-wise semantic segmentation, video object detection and 3D object detection tasks.
2305.19953
Multi-Dataset Co-Training with Sharpness-Aware Optimization for Audio Anti-spoofing
Audio anti-spoofing for automatic speaker verification aims to safeguard users' identities from spoofing attacks. Although state-of-the-art spoofing countermeasure (CM) models perform well on specific datasets, they lack generalization when evaluated with different datasets. To address this limitation, previous studies have explored large pre-trained models, which require significant resources and time. We aim to develop a compact but well-generalizing CM model that can compete with large pre-trained models. Our approach involves multi-dataset co-training and sharpness-aware minimization, which have not been investigated in this domain. Extensive experiments reveal that the proposed method yields competitive results across various datasets while using 4,000 times fewer parameters than the large pre-trained models.
Hye-jin Shim, Jee-weon Jung, Tomi Kinnunen
2023-05-31T15:37:48Z
http://arxiv.org/abs/2305.19953v2
# Multi-Dataset Co-Training with Sharpness-Aware Optimization ###### Abstract Audio anti-spoofing for automatic speaker verification aims to safeguard users' identities from spoofing attacks. Although state-of-the-art spoofing countermeasure(CM) models perform well on specific datasets, they lack generalization when evaluated with different datasets. To address this limitation, previous studies have explored large pre-trained models, which require significant resources and time. We aim to develop a compact but well-generalizing CM model that can compete with large pre-trained models. Our approach involves multi-dataset co-training and sharpness-aware minimization, which has not been investigated in this domain. Extensive experiments reveal that proposed method yield competitive results across various datasets while utilizing 4,000 times less parameters than the large pre-trained models. Hye-jin Shim\({}^{1}\), _Jee-weon Jung\({}^{2,\dagger}\), Tomi Kinnunen\({}^{1}\)\({}^{1}\)_University of Eastern Finland, Finland \({}^{2}\)Carnegie Mellon University, USA [email protected], [email protected], [email protected] **Index Terms**: audio spoofing, spoofing detection, sharpness aware minimization, generalization, multi-dataset training ## 1 Introduction Automatic speaker verification (ASV) systems [1], even state-of-the-art, have been reported to be easily deceived by _spoofing attacks_ including speech synthesis (text-to-speech, TTS), voice conversion (VC). For the reliable ASV systems, _audio anti-spoofing_ has emerged which aims to distinguish the utterance from a real human (_bona fide_) or spoofing attacks (_spoofed_). To develop spoofing countermeasure (CM) models, various studies have conducted focusing on feature [2, 3, 4], model architecture [5, 6, 7, 8], and other techniques (e.g. loss function and data augmentation) [9, 10, 11]. While most state-of-the-art CMs perform well on specific evaluation datasets, they do not generalize well when cross-evaluated on different datasets [12, 13, 14, 15, 16, 17, 18]. Several studies have explored improving generalization capability and literature can be divided into two strands: The first strand exploits techniques to develop a model with feature adjustment, gradient-based methods, and adversarial learning [12, 13, 15]. The other strand develops model using domain adaptation, continual learning, and self-supervised learning to leverage large-scale datasets [16, 17, 18, 19, 20, 21]. Even though the latter demonstrated promising results with a large gap, it typically requires additional learning steps with a large, heavyweight model such as wav2Vec2.0 [22] or HuBERT [23] that typically contains billions of parameters. Dating back to the base assumption of large-scale pre-trained models, there is a core premise of "_more data leads to better (generalization) performance_" that lies on deep learning as well as statistics. It expects that exposure of large amounts of data to the model can lead to better generalization. Following this idea, abundant studies have demonstrated the effectiveness of training a model on diverse datasets to enhance their robustness and adaptability, namely unsupervised/semi-supervised pre-training and self-supervised learning. Otherwise, training a model using multiple datasets at once has been well-known as a challenging and unsolved problem because of different characteristics of datasets that may interfere with the target task. Nevertheless, a few studies have been conducted in this direction [24, 25, 26, 27]. 
There is a potential to be improved, so further inspection and exploration yet remain. In this study, we aim to develop a compact and well-generalized CM model leveraging multi-dataset co-training. At an early stage of the present study, we conducted pilot experiments with gradually enlarging datasets for training the model at once. The result is shown in Table 1 and its outcome indicates that merely combining datasets that span different domains does not guarantee generalization. These observations motivated us to address more elaborate ways to optimize the model when using multiple training datasets. To mitigate this problem, we explore the way to reduce the perturbation which can be caused by domain mismatch across different datasets. Recent studies have shown gradient-based methods, especially sharpness-aware related works [28, 29, 30], demonstrate to avoid a severe perturbation during the training process and enhance generalization capability. Here, the term _sharpness_ in this context refers to the curvature of neighborhoods in the loss surface. To this end, we exploit two recently proposed optimization techniques: _sharpness-aware minimization_ (SAM) [28] and _Adaptive sharpness-aware minimization_ (ASAM) [29]. SAM and ASAM are both designed to find flat minimas by taking into account the sharpness in the loss surface in addition to the gradients. Hence, we hypothesize that combining sharpness-aware training -- an approach designed to avoid sharp loss minima -- with multiple dataset co-training (to handle diverse data) has the potential to lead to improving the generalization of CM. Our study first attempts to optimize multi-dataset co-training and also practical effectiveness of sharpness-aware training remains presently unknown in the CM task. Provid \begin{table} \begin{tabular}{l c} \hline \hline Train Dataset(s) & EER(\%) \\ \hline ASVspoof 2015 & 38.83 \\ ASVspoof 2019 & 1.38 \\ \hline ASVspoof 2019 + ASVspoof 2015 & 1.56 \\ ASVspoof 2019 + ASVspoof 2015 + WaveFake & 1.76 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of using a single dataset and multiple datasets in audio anti-spoofing. Evaluation is conducted with the ASVspoof 2019 LA dataset. ing initial answers to this question forms the main novelty of our work; we implement our proposed method using the state-of-the-art graph neural network-based AASIST [8] model with various evaluation data. Our comprehensive experimental validation reveals that both approaches are effective in that the proposed model shows competitive results throughout various datasets using a number of parameters more than 4000 times less than the large pre-trained models and leads to better generalization. ## 2 Multi-dataset training ### Related works Several previous works have focused on enlarging the amount of training data for the training based on "_more data leads to better (generalization) performance_" in line with fitting the general distribution with large amounts of data. Both unsupervised learning and self-supervised learning align with this principle aimed to enhance model generalization with more data. However, above mentioned studies are taking into account the conditions that are hard to get labeled data for the same task. It means if labeled data is available, it is basically helpful for training, however, multi-dataset co-training is even unveiled yet compared to the methodologies for exploiting unlabeled data. Few studies [27, 31] have conducted multi-dataset co-training in other domains. 
There exists a few preliminary works in audio spoofing utilizing multiple datasets [13, 32]. They concentrated on developing a single model that can detect diverse types of attacks as audio spoofing attacks can be divided into two categories: logical access (LA) and physical access (PA). The former includes TTS and VC attacks, whereas the latter refers to replay attacks only. However, no research has explored multi-dataset training to deal with one category of attack (either LA or PA). The potential effectiveness of training the model with multiple datasets simultaneously for the same task has yet to be explored in depth, but it would be worthwhile to investigate further. ### Summary of datasets used in this study We use three datasets concurrently to train a _single model_ in a _single phase_: ASVspoof 2015 [33], ASVspoof 2019 LA [34], and WaveFake [35]. The latest ASVspoof edition in 2021 additionally introduced DeepFake (DF) scenario which includes lossy codecs used for media storage. In this study, we _only_ deal with LA spoofing attacks for training a model. To address generalization to an unseen domain, we use the ASVspoof 2021 LA and DF tasks for the evaluation. Note that ASVspoof 2015, ASVspoof 2019, and a part of ASVspoof 2021 are based upon the Voice Cloning Toolkit (VCTK) corpus [36]; however, they cover different attacks. Supporting this basis, it is assumed that ASVspoof 2015 evaluation can be theoretically easy when a CM is trained on the more diverse LA 2019 train set. However, it has empirically confirmed that this is not the case [13, 18]. An overview of the selected datasets is shown in Table 2. **ASVspoof 2015**[33] is the earliest and smallest database among the four existing ASVspoof editions. The evaluation set consists of five known and five unknown attacks composed of different TTS and VC systems. In this context, the term _known_ attack indicates an attack in the train and test set is overlapped, while _unknown_ attack indicates to scenarios where the test set includes attacks that were not encountered during the training phase. **ASVspoof 2019**[34] is a large-scale dataset that covers advanced technologies developed during the four years following ASVspoof 2015. It includes 6 and 13 types of spoofing attacks in train and test, respectively. There are two known attacks, four partially known attacks, and seven unknown attacks between train and test sets. Here, _partially known_ attack denotes a scenario where some of the attacks are present in both the train set and test set, but some attacks are not present in the train set. **WaveFake**[35] is collected using six different state-of-the-art TTS methods. It considers the created samples to resemble the training distributions. All spoofed data has been generated using the last, competitive VC and TTS models. Note that we utilize whole spoofed speech for WaveFake since no standardized test protocol exists. WaveFake contains spoofed utterances from two speakers, originating from LJSPEECH [37] and JSUT [38] datasets, respectively. **ASVspoof 2021**[39] is the latest and hardest edition of the ASVspoof challenge series. It only contains a test set and introduces real telephony systems both applied to encoding and transmission artifacts in the LA scenario of the ASVspoof 2019 dataset. The DF scenario additionally consists of bona fide and spoofed data processed through various audio compressors containing data from two additional datasets, VCC 2018 [40] and VCC 2020 [41]. 
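As a concrete illustration of how the three training corpora above can be pooled into a single loader for co-training, the sketch below shows one possible PyTorch setup. The class and argument names are ours, and the inverse-size weighting is only one simple way to draw comparable numbers of samples from each corpus; it is not taken from the released training code.

```python
# Minimal sketch of pooling the three training corpora for co-training.
# Assumes one PyTorch Dataset object per corpus; names are illustrative only.
import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

def make_cotrain_loader(asv15, asv19, wavefake, batch_size=16, balanced=True):
    pooled = ConcatDataset([asv15, asv19, wavefake])
    if not balanced:
        # "pooled" composition: mini-batches simply follow the corpus sizes.
        return DataLoader(pooled, batch_size=batch_size, shuffle=True)
    # "balanced" composition: weight each sample by the inverse size of its
    # corpus so every corpus contributes roughly equally to each mini-batch.
    weights = torch.cat([
        torch.full((len(d),), 1.0 / len(d)) for d in (asv15, asv19, wavefake)
    ])
    sampler = WeightedRandomSampler(weights, num_samples=len(pooled), replacement=True)
    return DataLoader(pooled, batch_size=batch_size, sampler=sampler)
```

With `balanced=True`, each corpus contributes in expectation the same number of samples per epoch, which corresponds to the balanced mini-batch condition compared later in Section 5.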
## 3 Sharpness-aware optimizations When working with multiple datasets simultaneously, a model may easily be distracted by domain information that is irrelevant to the main task, despite having access to explicit class labels. Domain information can prevent the model from converging, although the model may generalize better once the training loss has converged well. We thus seek methods that can prevent the model from being distracted by domain discrepancies between different datasets. Besides, this direction also removes the need for additional pre-training and fine-tuning steps. In particular, _sharpness-aware minimization_ (SAM) [28] has recently demonstrated state-of-the-art performance in various tasks [42, 43, 44, 45]. SAM seeks _flat_ minima, i.e., regions of the parameter space where both the loss itself and the loss in a neighborhood are low. It uses a worst-case perturbation of the model parameters at every training iteration and can be easily implemented on top of existing optimizers such as Adam [46]. Moreover, there are several follow-up studies related to sharpness. For instance, the authors in [29] proposed a scale-invariant, _adaptive_ variant of SAM, adaptive SAM (ASAM). It solves the scale dependency problem by removing the effect of scaling and helps build a solid correlation with the generalization gap. In the following subsections, we detail both SAM and ASAM. \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & \# Spks & \# Uts & \# Conds \\ \hline ASVspoof 2015 & 25 / 46 & 16375 / 193404 & 5 / 10 \\ ASVspoof 2019 LA & 20 / 48 & 25380 / 108978 & 6 / 13 \\ WaveFake & 2 & 117985 & 6 \\ \hline ASVspoof 2021 LA & - / 48 & 181566 & - / 13 \\ ASVspoof 2021 DF & - / 48 & 611829 & - / 13 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of data statistics used in this study. #Spks, # Uts, and # Conds refer to the number of speakers, utterances, and spoofing conditions, respectively. Division of train and test set is indicated by /. ### Sharpness-Aware Minimization Given a labeled training set \(S=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\) drawn i.i.d. from an unknown data distribution \(\mathcal{D}\), the _training loss_ function with model parameter \(\mathbf{w}\) and the _population loss_ are defined as \(L_{S}(\mathbf{w})\) and \(L_{\mathcal{D}}(\mathbf{w})=\mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(\mathbf{w},x,y)]\), respectively, where \(\ell\) denotes the per-example loss. While the training loss is the empirical, sample-based estimator, the population loss refers to the corresponding theoretical quantity when the actual joint distribution of \((x,y)\) is fully known. So, the population loss can be thought of as the training loss applied to an infinitely large training set \((n\rightarrow\infty)\). Our goal is to select \(\mathbf{w}\) not only with low training loss \(L_{S}(\mathbf{w})\) but also with low population loss \(L_{\mathcal{D}}(\mathbf{w})\), for improved generalization. To achieve this goal, SAM is designed to minimize the following PAC-Bayesian generalization upper bound, in which the term in brackets in the second line is the sharpness of \(L_{S}\) at \(\mathbf{w}\): \[L_{\mathcal{D}}(\mathbf{w})\leq\max_{\|\mathbf{\varepsilon}\|_{2}\leq\rho}L_{S}(\mathbf{w}+\mathbf{\varepsilon})+h(\|\mathbf{w}\|_{2}^{2}/\rho^{2})\] \[=\left[\max_{\|\mathbf{\varepsilon}\|_{2}\leq\rho}L_{S}(\mathbf{w}+\mathbf{\varepsilon})-L_{S}(\mathbf{w})\right]+L_{S}(\mathbf{w})+h(\|\mathbf{w}\|_{2}^{2}/\rho^{2}).\] Here, \(h\) is a strictly increasing function under conditions on \(L_{\mathcal{D}}(\mathbf{w})\), and \(\rho\) is a predefined constant controlling the radius of a neighborhood in an \(\ell^{p}\) ball (\(p\in[1,\infty]\); [28] revealed that \(p=2\) is optimal). A detailed explanation of the PAC-Bayesian generalization bound is omitted due to limited space; refer to Appendix A.1 of [28] for full details. Finally, for any \(\rho>0\) and \(\mathbf{\varepsilon}\approx 0\) (to avoid division by 0), the model loss is defined as: \[\min_{\mathbf{w}}L_{S}^{\text{SAM}}(\mathbf{w})+\lambda\|\mathbf{w}\|_{2}^{2},\quad\text{where }L_{S}^{\text{SAM}}(\mathbf{w})\triangleq\max_{\|\mathbf{\varepsilon}\|_{p}\leq\rho}L_{S}(\mathbf{w}+\mathbf{\varepsilon}).\] ### Adaptive Sharpness-Aware Minimization Although vanilla SAM usually performs well, it is easily affected by parameter re-scaling, because the sharpness term in SAM is defined on a rigid region with a fixed radius. This may hurt the generalization of SAM. To address this shortcoming, ASAM [29] uses adaptive sharpness, which removes the effect of scaling and adjusts the maximization region, leading to an improved training path. It utilizes the normalization operator \(T_{\mathbf{w}}^{-1}\) and achieves better generalization compared to SAM. Firstly, \(T_{\mathbf{w}}\) is called a normalization operator of the weight \(\mathbf{w}\) if it is a family of invertible linear operators satisfying \(T_{A\mathbf{w}}^{-1}A=T_{\mathbf{w}}^{-1}\) for any invertible scaling operator \(A\) that does not alter the loss function. With this normalization operator, the adaptive sharpness objective function is defined as: \[L_{S}^{\text{ASAM}}(\mathbf{w})\triangleq\max_{\|T_{\mathbf{w}}^{-1}\mathbf{\varepsilon}\|_{p}\leq\rho}L_{S}(\mathbf{w}+\mathbf{\varepsilon}).\] ## 4 Experimental settings For experiments, we deploy the "light" version of the recent AASIST model [8], referred to as AASIST-L. It includes a graph attention layer to capture information in both the spectral and temporal domains and max graph operations to select features in a competitive manner. The main difference between AASIST and AASIST-L is the number of parameters: 297K and 85K, respectively. We use Adam [46] as our base optimizer. When exploiting SAM and ASAM, optimization proceeds similarly, but with the additional sharpness term added to the training loss as explained above. As our aim is to focus on generalization across corpora rather than architectural details, we did not adjust parameters (e.g., learning rate, pooling ratio). Full details of the AASIST model can be found in [8]. All models were implemented using PyTorch and trained for 100 epochs. Performance evaluation is based on the equal error rate (EER), and we selected the best-performing model in terms of EER on the development set. Code for the experiments of this study is available at: https: \begin{table} \begin{tabular}{l c c c c} \hline \hline Attack & (a) & (b) & (a)+(b) & (a)+(b)+(c) \\ \hline Traditional & 21.54 & 12.18 & 14.18 & **10.77** \\ Wav.Concat.
& 55.22 & **12.07** & 20.32 & 13.09 \\ Neural AR & 46.82 & **23.10** & 27.70 & 24.87 \\ Neural non-AR & 40.23 & **20.47** & 25.05 & 23.21 \\ Unknown & 24.45 & 20.38 & 17.94 & **16.05** \\ \hline Pooled & 33.54 & 18.20 & 22.14 & 19.49 \\ \hline \hline \end{tabular} \end{table} Table 4: Per-attack results of ASVspoof 2021 DF evaluation. The best results are selected between w/o SAM, SAM, and ASAM. The best results in the same row are represented in boldface. ((a) ASVspoof 2015, (b) ASVspoof 2019, (c) WaveFake.) \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**ASVspoof 2015**} & \multicolumn{3}{c}{**ASVspoof 2019 LA**} & \multicolumn{3}{c}{**ASVspoof 2021 LA**} & \multicolumn{3}{c}{**ASVspoof 2021 DF**} \\ \cline{2-13} & w/o & SAM & ASAM & \multicolumn{3}{c}{w/o} & SAM & ASAM & \multicolumn{3}{c}{w/o} & SAM & ASAM & \multicolumn{3}{c}{w/o} & SAM & ASAM & \multicolumn{3}{c}{_Average_} \\ \cline{2-13} & SAM & ASAM & \multicolumn{3}{c}{SAM} & SAM & ASAM & \multicolumn{3}{c}{SAM} & SAM & ASAM & \multicolumn{3}{c}{SAM} & \multicolumn{3}{c}{SAM} & ASAM & \multicolumn{3}{c}{_Average_} \\ \hline **2015** & 8.25 & 6.50 & 5.83 & 38.83 & 29.50 & 30.70 & 39.87 & 32.27 & 31.09 & 33.54 & 28.40 & 21.80 & 25.55 \\ \hline **2019 LA** & 5.98 & 4.32 & 3.53 & 1.38 & 1.06 & 1.48 & 12.18 & 7.08 & 10.18 & 18.20 & 21.16 & 19.58 & 8.84 \\ \hline **2015** & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} \\ \cline{2-13} **+2019 LA** & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} \\ \cline{2-13} **+2019 LA** & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} & \multicolumn{3}{c}{} \\ \cline{2-13} ## 5 Results and Analyses **Main results** In Table 3, we validate our proposed model using both in-domain datasets (ASVspoof 2015 and ASVspoof 2019 LA) and out-of-domain datasets (ASVspoof 2021 LA and ASVspoof 2021 DF). Firstly, as for the multi-dataset co-training, the pooled results are shown in the last column. We confirm that the best result achieved when all three datasets are all utilized. Even though the full usage of three datasets did not show a consistent result as the best result, the effectiveness of using multiple datasets was demonstrated. Secondly, the results of sharpness-aware optimizations can be easily comparable in the last row. Sharpness-aware optimization methods improve the performance in most cases, regardless of whether using a single dataset or multiple datasets. Except for the ASVspoof 2019 LA evaluation case that gets the lowest EER using SAM, ASAM shows the best results. Through results, we observe that SAM and ASAM substantially benefit the model optimization. **Per-attack results on ASVspoof 2021 DF** For further analysis, Table 4 shows the results for each attack in ASVspoof 2021 DF evaluation result. 
We depict the best results for each training dataset combination : each column refers to ASVspoof 2015 (w/ ASAM), ASVspoof 2019 (w/o SAM), ASVspoof 2015 + ASVspoof 2019 (w/o SAM), and ASVspoof2015 + ASVspoof 2019 + WaveFake (w/ SAM). The interesting thing to be noted is that (a)+(b)+(c) which includes all datasets show superior in Traditional and Unknown attack in a large gap compared to the best result which is from (b) only. In terms of generalization, _unknown_ is the most critical subset; thus we interpret that the lowest EER in unknown signifies better generalization. Hence, these results back up the effectiveness of the proposed method for the generalization performance. **Mini-batch composition** When utilizing multiple datasets, an imbalance exists between different datasets. We thus further explore whether balancing the number of samples drawn from each dataset can be advantageous in terms of performance. Table 5 describes the results, which confirm that simply balancing the samples between different datasets within a mini-batch are helpful. We confirm improvement in both reported train dataset configurations, where the performance of the most extensive setting is further improved by 14% relative. **Comparison with other studies** In Table 6, we compare our results with other state-of-the-art systems including the studies utilizing a large pre-trained model. Among four evaluation protocols, our model demonstrates competitive performance with only 85K parameters in two protocols: ASVspoof 2015 and ASVspoof 2019 LA. In the other two remaining protocols, our model underperforms; nonetheless, taking into account the number of parameters and training time, we argue that our approach remains competitive. Given the fact that the purpose of CM models is to aid ASV systems, lightweight yet well-generalizing models are worth further investigation. ## 6 Conclusions Recent studies have widely exploited large pre-trained models to leverage as much data as possible to develop a well-generalized model. While training a single model with multiple datasets is a straightforward way to utilize diverse data without additional training, it is well-known that handling domain differences is challenging. Given the nature that CM models are inherently built to support ASV systems, these enormous systems can be potentially not applicable because of their size. In this paper, we explore a case study in the audio anti-spoofing field which lacks a large amount of data compared to other research domains. To optimize the model to handle multiple datasets simultaneously, we utilize sharpness-aware methodologies, which include a curvature-based term in the objective function to reduce the gap between the variance of the data. Using a number of parameters more than 4000 times less than the large pre-trained models, our proposed method demonstrates effectiveness in both in-domain evaluations on unknown attacks and out-of-domain evaluations. ## 7 Acknowledgements This work was supported by the Academy of Finland (Decision No. 
349605, project "SPEECHFAKES") \begin{table} \begin{tabular}{l c c} \hline \hline **Method** & **\# Params** & **EER(\%)** \\ \hline \multicolumn{3}{c}{**ASVspoof 2015**} \\ \hline Primary result [47] & - & 1.21 \\ **Ours** & 85K & 0.66 \\ \hline \multicolumn{3}{c}{**ASVspoof 2019 LA**} \\ \hline SSAD+LCNN big [48] & - & 5.31 \\ Imag-pre + J\&S [16] & - & 0.87 \\ wav2Vec2.0 [18] & 317M + (290\(\pm\)30K) & 1.28 \\ HuBERT-XL [18] & 317M + (290\(\pm\)30K) & 3.55 \\ **Ours** & 85K & 0.99 \\ \hline \multicolumn{3}{c}{**ASVspoof 2021 LA**} \\ \hline Img-pre+RawBoost [16] & - & 7.71 \\ wav2Vec2.0-XLSR [17] & 317M + 297K & 6.15 \\ wav2Vec2.0-XLSR [18] & 317M + 297K & 9.66 \\ HuBERT-XL [18] & 317M + (290\(\pm\)30K) & 9.55 \\ **Ours** & 85K & 7.08 \\ \hline \multicolumn{3}{c}{**ASVspoof 2021 DF**} \\ \hline Img-pre+RawBoost [16] & - & 19.11 \\ wav2Vec2.0-XLSR [17] & 317M + 297K & 7.69 \\ wav2Vec2.0-XLSR [18] & 317M + 297K & 4.75 \\ HuBERT-XL [18] & 317M + (290\(\pm\)30K) & 13.07 \\ **Ours** & 85K & 18.20 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison with other state-of-the-art results including the research which utilized pre-trained model with large datasets. \begin{table} \begin{tabular}{c c c c} \hline \hline Training datasets & Mini-batch & Loss & EER \\ \hline 2015 + 2019 LA & pooled & w/o SAM & 1.56 \\ \hline 2015 + 2019 LA & **balanced** & w/o SAM & 1.49 \\ \hline 2015 + 2019 LA & pooled & ASAM & 1.27 \\ \hline 2015 + 2019 LA & **balanced** & ASAM & 1.09 \\ + WaveFake & **balanced** & ASAM & 1.09 \\ \hline \hline \multicolumn{3}{c}{//github.com/shimz/MDL\_sharpness.} \\ \hline \end{tabular} \end{table} Table 5: The comparison of mini-batch composition strategies using ASVspoof 2019 LA evaluation. Pooled mini-batch refers to the condition which ignores the balance between datasets, and balanced mini-batch refers to the condition which considers the balance between multiple datasets.
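For reference, the two-step SAM update of Section 3 can be sketched on top of a standard PyTorch optimizer as follows. This is an illustrative reimplementation with our own function names, it omits ASAM's parameter-wise normalization \(T_{\mathbf{w}}^{-1}\), and it is not the training code released with this paper.

```python
# Illustrative SAM step on top of a base optimizer (e.g., Adam); minimal sketch,
# not the released code. rho controls the radius of the perturbation ball.
import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    inputs, labels = batch
    base_optimizer.zero_grad()
    # 1) ascent step: move to the (approximate) worst-case point w + eps.
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm(p=2) for p in model.parameters() if p.grad is not None])) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)
    base_optimizer.zero_grad()
    # 2) descent step: gradient at the perturbed point, applied to the original w.
    loss_fn(model(inputs), labels).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)          # restore w before the optimizer update
    base_optimizer.step()
    base_optimizer.zero_grad()
    return float(loss.detach())
```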
2308.16444
Frank-Wolfe algorithm for DC optimization problem
In the present paper, we formulate two versions of the Frank--Wolfe algorithm, or conditional gradient method, to solve the DC optimization problem with an adaptive step size. The DC objective function consists of two components; the first is assumed to be differentiable with a Lipschitz continuous gradient, while the second is only assumed to be convex. The second version is based on the first and employs finite differences to approximate the gradient of the first component of the objective function. In contrast to past formulations that used the curvature/Lipschitz-type constant of the objective function, the step size computed here does not require any constant associated with the components. For the first version, we establish that the algorithm is well defined and that every limit point of the generated sequence is a stationary point of the problem. We also introduce the class of weak-star-convex functions and show that, despite the fact that these functions are non-convex in general, the rate of convergence of the first version of the algorithm to minimize these functions is ${\cal O}(1/k)$. The finite difference used to approximate the gradient in the second version of the Frank-Wolfe algorithm is computed with a step size that is adaptively updated using two previous iterations. Unlike previous applications of finite differences in the Frank-Wolfe algorithm, which provided approximate gradients with absolute error, the one used here provides us with a relative error, simplifying the algorithm analysis. In this case, we show that all limit points of the generated sequence for the second version of the Frank-Wolfe algorithm are stationary points for the problem under consideration, and we establish that the rate of convergence for the duality gap is ${\cal O}(1/\sqrt{k})$.
R. Díaz Millán, O. P. Ferreira, J. Ugon
2023-08-31T04:07:43Z
http://arxiv.org/abs/2308.16444v1
# Frank-Wolfe algorithm for DC optimization problem ###### Abstract In the present paper, we formulate two versions of Frank-Wolfe algorithm or conditional gradient method to solve the DC optimization problem with an adaptive step size. The DC objective function consists of two components; the first is thought to be differentiable with a continuous Lipschitz gradient, while the second is only thought to be convex. The second version is based on the first and employs finite differences to approximate the gradient of the first component of the objective function. In contrast to past formulations that used the curvature/Lipschitz-type constant of the objective function, the step size computed does not require any constant associated with the components. For the first version, we established that the algorithm is well-defined of the algorithm and that every limit point of the generated sequence is a stationary point of the problem. We also introduce the class of weak-star-convex functions and show that, despite the fact that these functions are non-convex in general, the rate of convergence of the first version of the algorithm to minimize these functions is \(\mathcal{O}(1/k)\). The finite difference used to approximate the gradient in the second version of the Frank-Wolfe algorithm is computed with the step-size adaptively updated using two previous iterations. Unlike previous applications of finite difference in the Frank-Wolfe algorithm, which provided approximate gradients with absolute error, the one used here provides us with a relative error, simplifying the algorithm analysis. In this case, we show that all limit points of the generated sequence for the second version of the Frank-Wolfe algorithm are stationary points for the problem under consideration, and we establish that the rate of convergence for the duality gap is \(\mathcal{O}(1/\sqrt{k})\). **Keywords:** Frank-Wolfe method; DC optimization problem; finite difference; weak-star-convex function. **AMS subject classification:** 90C25, 90C60, 90C30, 65K05. ## 1 Introduction The DC optimization method involves minimizing a function that can be represented as the difference between two convex functions. We are interested in finding a solution to a constrained DC optimization problem, where the constraint set \(\mathcal{C}\subset\mathbb{R}^{n}\) is a convex and compact set, and \(f:=g-h\) where \(g:\mathbb{R}^{n}\to\mathbb{R}\) is a continuously differentiable convex function, and \(h:\mathbb{R}^{n}\to\mathbb{R}\) is a convex function possibly non-differentiable. To the best of our knowledge, DC optimization can be traced back to pioneering works such as [39, 40], which introduced the first algorithms for this problem. Since then, the DC optimization has attracted the attention of the mathematical programming community (see for example [6, 10, 12, 30, 41]), not only for its own sake but also because it is an abstract model for several families of practical optimization problems. Although we are not concerned with practical issues at this time, we emphasize that practical applications emerge whenever the natural structure of the problem is modelled as a DC optimization problem, such as sparse generalized eigenvalue problems [38], sparse optimization problems [18], facility location and clustering problems [33]. The Frank-Wolfe algorithm has a long history, dating back to Frank and Wolfe's work in the 1950s to minimize convex quadratic functions over compact polyhedral sets, see [14]. 
This method was generalized about ten years later to minimize convex differentiable functions with Lipschitz continuous gradients and compact constraint convex sets, see [31]. Since then, this method has attracted the attention of several researchers who work with continuous optimization, thus becoming also known as the conditional gradient method. One of the factors that explain the interest in this method is its simplicity and ease of implementation. In fact, each method iteration only requires access to a linear minimization oracle over a compact convex set. It is also worth noting that, due to the method's simplicity, it allows for low storage costs and ready exploration of separability and sparsity, making its application in large-scale problems quite attractive. It is interesting to note that the popularity of this method has increased significantly in recent years as a result of the emergence of several applications in machine learning, see [23, 27, 28]. For all of these reasons, several variants of this method have arisen throughout the years and new properties of it have been discovered, resulting in a large literature on it, papers dealing with this method include [3, 7, 8, 16, 17, 20, 25, 29, 32]. In the present paper, we formulate two versions of _Frank-Wolfe algorithm or conditional gradient method to solve DC optimization problem with an adaptive step size_. The second version is based on the first and employs finite differences to approximate the gradient \(\nabla g\). It is worth mentioning that the Frank-Wolfe algorithm has previously been formulated in the context of DC programming, see [24]. See also [43] which establishes theoretical connections between DC algorithms and convex-concave procedures with the Frank-Wolfe algorithm. In contrast to the previous study, which assume that the curvature/Lipschitz-type constant of \(f=g-h\) is bounded from above, the analysis of the Frank-Wolfe method done here just assumes that the gradient of the first component \(g\) of \(f\) is Lipschitz continuous. While designing the methods, even if we assume that the gradient of \(g\) is Lipschitz continuous in both formulations, we will not compute the step size using the Lipschitz constant of \(\nabla g\), as in earlier formulations that employed the curvature/Lipschitz-type constant of the objective function \(f\). The step size will be computed adaptively, based on an idea introduced in [2] (see also [4, 36]) that approximates the Lipschitz constant. It has been shown in previous works that Frank-Wolfe algorithm produces a stationary point with a convergence rate of only \(\mathcal{O}(1/\sqrt{k})\) for non-convex objective functions, see [26] (see also [24]). We introduce the concept of weak-star-convex function notion, which generalizes the star-convex func tion concept proposed in [34] as well as a few other related concepts, such as [21, 42]. The weak-star-convex objective functions are also taken into account in the convergence analysis of the first method presented here. The rate of convergence is proven to be \(\mathcal{O}(1/k)\) for both the function values and the duality gap, despite the fact that the weak-star-convex functions are in general non-convex. As a result, among the functions for which the Frank-Wolfe method yields an approximate solution with a convergence rate of \(\mathcal{O}(1/k)\), we include the set of weak-star-convex functions that are differences of convex functions with the first component having a Lipschitz gradient. 
As mentioned previously, the second version of the Frank-Wolfe algorithm uses finite differences to approximate the gradient of the function \(g\). Similar to the strategies adopted in [19], the finite difference utilized here to approximate the gradient is computed with the step-size updated adaptively utilizing two previous iterations. Finite differences are an old concept that has been used to approximate derivatives in optimization settings (for example, see [35, Section 8.1]) and have already appeared in the study of the Frank-Wolfe algorithm (for example, see [13, 15, 23, 37]). It is worth noting that all previous applications of finite differences in the Frank-Wolfe algorithm provided approximate gradients with absolute error, whereas the one used here has the advantage of providing us with a relative error and is thus quite simple to analyze. We show that all limit points of the generated sequence for the second version of the Frank-Wolfe algorithm are stationary points for the problem under consideration, and we establish that the rate of convergence is \(\mathcal{O}(1/\sqrt{k})\) for the duality gap. The paper is organized as follows. Some notations and auxiliary results are presented in Section 2. Section 3 presents the problem, the hypotheses required, and some related notations. Section 4 is devoted to the formulation of the first version of the Frank-Wolfe algorithm, as well as its well-definedness and the analysis of the sequence generated by it. Section 5 contains the formulation of the second version of the algorithm, its well-definedness and its analysis. Finally, in Section 6, some conclusions are presented. ## 2 Preliminaries In this section, we recall some notations, definitions and basic results used throughout the paper. A function \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\) is said to be _convex_ if \(\varphi(\lambda x+(1-\lambda)y)\leq\lambda\varphi(x)+(1-\lambda)\varphi(y)\), for all \(x,y\in\mathbb{R}^{n}\) and \(\lambda\in[0,1]\). The function \(\varphi\) is _strictly convex_ when the last inequality is strict for \(x\neq y\); for a comprehensive study of convex functions see [22]. We say that \(f:\mathbb{R}^{n}\to\mathbb{R}\) is _locally Lipschitz_ if, for all \(x\in\mathbb{R}^{n}\), there exist a constant \(K_{x}>0\) and a neighborhood \(U_{x}\) of \(x\) such that \(|f(x)-f(y)|\leq K_{x}\|x-y\|\), for all \(y\in U_{x}.\) It is well known that, if \(f:\mathbb{R}^{n}\to\mathbb{R}\) is convex, then \(f\) is locally Lipschitz. Likewise, if \(f:\mathbb{R}^{n}\to\mathbb{R}\) is continuously differentiable, then \(f\) is locally Lipschitz, see [9, p. 32]. Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a locally Lipschitz function. The _Clarke subdifferential_ of \(f\) at \(x\in\mathbb{R}^{n}\) is given by \(\partial_{c}f(x)=\{v\in\mathbb{R}^{n}:\ f^{\circ}(x;d)\geq v^{T}d,\ \forall d\in\mathbb{R}^{n}\}\), where \(f^{\circ}(x;d)\) is the _generalized directional derivative_ of \(f\) at \(x\) in the direction \(d\) given by \[f^{\circ}(x;d)=\limsup_{u\to x\atop t\downarrow 0}\frac{f(u+td)-f(u)}{t}.\] For an extensive study of locally Lipschitz functions and the Clarke subdifferential see [9, p. 27]. If \(f\) is convex, then \(\partial_{c}f(x)\) coincides with the subdifferential \(\partial f(x)\) in the sense of convex analysis, and \(f^{\circ}(x;d)\) coincides with the usual directional derivative \(f^{\prime}(x;d)\); see [9, p. 36].
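As a simple one-dimensional illustration (a standard example, recalled here only for convenience and not needed in the sequel), consider the convex function \(f(x)=|x|\) on \(\mathbb{R}\). A direct computation gives \[f^{\circ}(0;d)=\limsup_{u\to 0,\ t\downarrow 0}\frac{|u+td|-|u|}{t}=|d|,\qquad\partial_{c}f(0)=\{v\in\mathbb{R}:\ |d|\geq vd,\ \forall d\in\mathbb{R}\}=[-1,1],\] which indeed coincides with the subdifferential of \(|\cdot|\) at the origin in the sense of convex analysis.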
We recall that if \(f:\mathbb{R}^{n}\to\mathbb{R}\) is differentiable, then \(\partial_{c}f(x)=\{\nabla f(x)\}\) for any \(x\in\mathbb{R}^{n}\); see [9, p. 33]. **Theorem 2.1** ([9, p. 27]).: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a locally Lipschitz function. Then, \(\partial_{c}f(x)\) is a nonempty, convex, compact subset of \(\mathbb{R}^{n}\) and \(\|v\|\leq K_{x},\) for all \(v\in\partial_{c}f(x)\), where \(K_{x}>0\) is the Lipschitz constant of \(f\) around \(x\). Moreover, \(f^{\circ}(x;d)=\max\{v^{\mathrm{T}}d:\ v\in\partial_{c}f(x)\}\)._ **Theorem 2.2** ([9, p. 38-39]).: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be given by \(f=g-h\), where \(g,h:\mathbb{R}^{n}\to\mathbb{R}\) are locally Lipschitz functions and \(g\) is differentiable. Then, \(f^{\circ}(x;d)=\nabla g(x)^{\mathrm{T}}d-h^{\prime}(x;d)\), for all \(x,d\in\mathbb{R}^{n}\) and \(\partial_{c}f(x)=\{\nabla g(x)\}-\partial h(x)\)._ The next result is a combination of Theorem 2.2 with [9, Corollary on p. 52]. **Theorem 2.3**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a locally Lipschitz function and let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a closed and convex set. If \(x^{*}\in\mathcal{C}\) is a minimizer of \(f\) in \(\mathcal{C}\), then \(v^{\mathrm{T}}(x-x^{*})\geq 0\), for all \(v\in\partial_{c}f(x^{*})\) and all \(x\in\mathcal{C}\). As a consequence, if \(f=g-h\) with \(g,h:\mathbb{R}^{n}\to\mathbb{R}\) locally Lipschitz functions and \(g\) differentiable, then \((\nabla g(x^{*})-u)^{\mathrm{T}}(x-x^{*})\geq 0\), for all \(u\in\partial_{c}h(x^{*})\) and all \(x\in\mathcal{C}\)._ Hence, in view of Theorem 2.3, every point \(x^{*}\in\mathcal{C}\) satisfying the inequality \(v^{\mathrm{T}}(x-x^{*})\geq 0\), for all \(v\in\partial_{c}f(x^{*})\) and for all \(x\in\mathcal{C}\), is said to be a _stationary point_ of \(\min_{x\in\mathcal{C}}f(x)\). The gradient \(\nabla g\) of a continuously differentiable function \(g:\mathbb{R}^{n}\to\mathbb{R}\) is \(L\)_-Lipschitz continuous_ on \(\mathcal{C}\subset\mathbb{R}^{n}\) if there exists a Lipschitz constant \(L>0\) such that \(\|\nabla g(x)-\nabla g(y)\|\leq L\|x-y\|\) for all \(x,y\in\mathcal{C}\). Thus, by using the fundamental theorem of calculus, we obtain the following result, whose proof can be found in [5, Proposition A.24]; see also [11, Lemma 2.4.2]. **Proposition 2.4**.: _Let \(g:\mathbb{R}^{n}\to\mathbb{R}\) be differentiable with \(L\)-Lipschitz continuous gradient on \(\mathcal{C}\subset\mathbb{R}^{n}\), and let \(x\in\mathcal{C}\), \(v\in\mathbb{R}^{n}\) and \(\lambda\in[0,1]\). If \(x+\lambda v\in\mathcal{C}\), then \(g(x+\lambda v)\leq g(x)+\nabla g(x)^{\mathrm{T}}v\lambda+\frac{L}{2}\|v\|^{2}\lambda^{2}\)._ **Proposition 2.5**.: _The function \(h:\mathbb{R}^{n}\to\mathbb{R}\) is convex if and only if \(h(y)\geq h(x)+\langle u,y-x\rangle\), for all \(x,y\in\mathbb{R}^{n}\) and all \(u\in\partial h(x)\)._ **Proposition 2.6** ([22, Proposition 6.2.1]).: _Let \(h:\mathbb{R}^{n}\to\mathbb{R}\) be convex. Let \((x^{k})_{k\in\mathbb{N}}\) and \((u^{k})_{k\in\mathbb{N}}\) be sequences such that \(u^{k}\in\partial h(x^{k})\), for all \(k\in\mathbb{N}\). If \(\lim_{k\to+\infty}x^{k}=\bar{x}\) and \(\lim_{k\to+\infty}u^{k}=\bar{u}\), then \(\bar{u}\in\partial h(\bar{x})\)._ **Proposition 2.7** ([22, Proposition 6.2.2]).: _Let \(h:\mathbb{R}^{n}\to\mathbb{R}\) be convex. The mapping \(\partial h\) is locally bounded, i.e.
the image \(\partial h(B)\) of a bounded set \(B\subset\mathbb{R}^{n}\) is a bounded set in \(\mathbb{R}^{n}\)._ In the following we recall a useful result for our study of iteration-complexity bounds for the Frank-Wolfe algorithm; its proof can be found in [1, Lemma 13.13, Ch. 13, p. 387]. **Lemma 2.8**.: _Let \((a_{k})_{k\in\mathbb{N}}\) and \((b_{k})_{k\in\mathbb{N}}\) be nonnegative sequences of real numbers satisfying_ \[a_{k+1}\leq a_{k}-b_{k}\beta_{k}+\frac{A}{2}\beta_{k}^{2},\qquad k=0,1,2,\dots,\] _where \(\beta_{k}=2/(k+2)\) and \(A\) is a positive number. Suppose that \(a_{k}\leq b_{k}\), for all \(k\). Then_ 1. \(a_{k}\leq\dfrac{2A}{k}\)_, for all_ \(k=1,2,\ldots;\)__ 2. \(\min_{\ell\in\{\lfloor\frac{k}{2}\rfloor+2,\ldots,k\}}b_{\ell}\leq \dfrac{8A}{k-2}\)_, for all_ \(k=3,4,\ldots,\) _where_ \(\lfloor k/2\rfloor=\max\left\{n\in\mathbb{N}:\ n\leq k/2\right\}.\)__ ## 3 The DC optimization problem We are interested in solving the following constrained DC optimization problem \[\min_{x\in\mathcal{C}}f(x):=g(x)-h(x), \tag{1}\] where \(\mathcal{C}\subset\mathbb{R}^{n}\) is a compact and convex set, \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a continuously differentiable convex function and \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a convex function, possibly non-differentiable, with domain \(\mathbb{R}^{n}\). _Throughout the paper we assume that the gradient \(\nabla g\) is \(L\)-Lipschitz continuous on \(\mathcal{C}\subset\mathbb{R}^{n}\)_, i.e., there exists a Lipschitz constant \(L>0\) such that **(A)**: \(\|\nabla g(x)-\nabla g(y)\|\leq L\|x-y\|\) for all \(x,y\in\mathcal{C}\). Since we are assuming that \(\mathcal{C}\subset\mathbb{R}^{n}\) is a compact set, its _diameter_ is a finite number defined by \[\operatorname{diam}(\mathcal{C}):=\max\left\{\|x-y\|:\ x,y\in\mathcal{C} \right\}.\] Since \(\mathcal{C}\subset\mathbb{R}^{n}\) is compact and \(f\) is continuous, problem (1) is bounded from below. Hence, the optimal value of problem (1) satisfies \(-\infty<f^{*}:=\inf_{x\in\mathcal{C}}f(x)\) and the optimal set \(\mathcal{C}^{*}\) is non-empty. According to Theorem 2.3, the _first-order optimality condition_ for problem (1) is stated as \[(\nabla g(\bar{x})-\bar{u})^{T}(x-\bar{x})\geq 0,\qquad\forall\bar{u}\in \partial h(\bar{x}),\quad\forall x\in\mathcal{C}. \tag{2}\] In general, condition (2) is necessary but not sufficient for optimality. A point \(\bar{x}\in\mathcal{C}\) satisfying condition (2) is called a _stationary point_ to problem (1). Consequently, every \(x^{*}\in\mathcal{C}^{*}\) satisfies (2). We finish this section by presenting a variant of a classical example of a DC optimization problem that fits the aforementioned requirements. **Example 3.1**.: Let \(\mathcal{C}_{i}\subset\mathbb{R}^{n}\) be an arbitrary set, for all \(i=1,\ldots,m\). The _square distance with respect to the set \(\mathcal{C}_{i}\)_, denoted by \(d^{2}_{C_{i}}:\mathbb{R}^{n}\rightarrow\mathbb{R}\), is defined by \[d^{2}_{C_{i}}(x):=\inf_{y\in C_{i}}\|x-y\|^{2}. \tag{3}\] In general, the function \(d^{2}_{C_{i}}\) is not convex. However, due to \(\|x-y\|^{2}=\|x\|^{2}-2x^{T}y+\|y\|^{2}\), the distance function \(d^{2}_{C_{i}}\) can be rewritten as \[d^{2}_{C_{i}}(x)=\|x\|^{2}-\sup_{y\in C_{i}}\left(2x^{T}y-\|y\|^{2}\right), \tag{4}\] which is a difference of two convex functions, see [22, Example 2.1.4].
Indeed, \(d^{2}_{C_{i}}=g-h\), where \(g(x)=\|x\|^{2}\) is a convex quadratic function and \(h(x)=\sup_{y\in C_{i}}\left(2x^{T}y-\|y\|^{2}\right)\) is a convex function given by the supremum of the affine functions \(x\mapsto\ell_{y}(x):=2x^{T}y-\|y\|^{2}\), for \(y\in C_{i}\). Let us state the _constrained generalized Fermat-Weber location problem with square distances_. For that, take \(\omega_{i}\geq 0\), \(i=1,\ldots,m\), such that \(\sum_{i=1}^{m}\omega_{i}=1\) and note that \[\sum_{i=1}^{m}\omega_{i}d_{C_{i}}^{2}(x)=\|x\|^{2}-\sum_{i=1}^{m}\omega_{i}\sup_{y\in C_{i}}\left(2x^{T}y-\|y\|^{2}\right).\] Thus, for a given compact and convex constraint set \(\mathcal{C}\subset\mathbb{R}^{n}\), the DC version of the constrained generalized Fermat-Weber location problem is stated as follows \[\min_{x\in\mathcal{C}}\left(\|x\|^{2}-\sum_{i=1}^{m}\omega_{i}\sup_{y\in C_{i}}\left(2x^{T}y-\|y\|^{2}\right)\right). \tag{5}\] Finally, note that the objective function of the last problem satisfies all the conditions required in problem (1). ## 4 Classical Frank-Wolfe algorithm with an adaptive stepsize In this section, we formulate the _classic Frank-Wolfe algorithm_ to solve problem (1) with an adaptive stepsize and provide a convergence analysis. Although we assume condition **(A)** for \(g\), we will not use the Lipschitz constant to compute the stepsize in the formulation of the algorithm. The stepsize will be computed adaptively, by using a scheme introduced in [2, 4] that approximates the Lipschitz constant. The convergence analysis of the method presented here shows that the convergence rate for weak-star-convex functions, despite these functions not being convex in general, is still \(\mathcal{O}(1/k)\) for both the function values and the duality gap. It should be mentioned that a version of the classic Frank-Wolfe algorithm has been proposed to address problem (1) in [24]. The proposed algorithm incorporates a step size that depends on the curvature/Lipschitz-type constant of the gradient of \(f=g-h\). According to the convergence analysis for non-convex objective functions (e.g., [26]), it has been shown that the algorithm produces a stationary point with a convergence rate of only \(\mathcal{O}(1/\sqrt{k})\). To state the algorithm, we assume that there exists a linear optimization oracle (LO oracle) capable of minimizing linear functions over the set \(\mathcal{C}\). The statement of the algorithm is as follows: **Algorithm 1**.: **Frank-Wolfe\({}_{C,f:=g-h}\) algorithm** **Step 0.**: Select \(x^{0}\in\mathcal{C}\) and \(L_{0}>0\). Set \(k=0\). **Step 1.**: Take \(u^{k}\in\partial h(x^{k})\). Set \(j:=\min\{\ell\in\mathbb{N}:\ 2^{\ell}L_{k}\geq 2L_{0}\}\). **Step 2.**: Use an "LO oracle" to compute an optimal solution \(p^{k}\) and the optimal value \(\omega(x_{k})\) as follows \[p^{k}\in\operatorname*{argmin}_{p\in\mathcal{C}}(\nabla g(x^{k})-u^{k})^{\mathrm{T}}(p-x^{k}),\qquad\omega(x_{k}):=(\nabla g(x^{k})-u^{k})^{\mathrm{T}}(p^{k}-x^{k}). \tag{6}\] **Step 3.**: If \(\omega(x_{k})=0\), then **stop**. Otherwise, compute the step size \(\lambda_{j}\in(0,1]\) as follows \[\lambda_{j}=\min\left\{1,\frac{|\omega(x_{k})|}{2^{j}L_{k}\|p^{k}-x^{k}\|^{2}}\right\}:=\operatorname*{argmin}_{\lambda\in(0,1]}\left\{-|\omega(x_{k})|\lambda+\frac{2^{j}L_{k}}{2}\|p^{k}-x^{k}\|^{2}\lambda^{2}\right\}.
\tag{7}\] **Step 4.**: If \[f(x^{k}+\lambda_{j}(p^{k}-x^{k}))\leq f(x^{k})-|\omega(x_{k})|\lambda_{j}+ \frac{2^{j}L_{k}}{2}\|p^{k}-x^{k}\|^{2}\lambda_{j}^{2}, \tag{8}\] then set \(j_{k}=j\) and go to **Step 5**. Otherwise, set \(j=j+1\) and go to **Step 3**. **Step 5**.: Set \(\lambda_{k}:=\lambda_{j_{k}}\) and define the next iterate \(x^{k+1}\) and the next approximation to the Lipschitz constant \(L_{k+1}\) as follows \[x^{k+1}:=x^{k}+\lambda_{k}(p^{k}-x^{k}),\hskip 28.452756ptL_{k+1}:=2^{j_{k}-1}L_{k}. \tag{9}\] Set \(k\gets k+1\), and go to **Step 1**. _Since the convergence analysis of Algorithm 1 is similar to that presented in [3], we will not include the proofs of the results here. We will only include the convergence rate analysis for weak-star-convex functions, which will be introduced in the next section_. In order to simplify the notations, from now on we will use the following notation: \[\omega_{k}:=\omega(x_{k}). \tag{10}\] Since \(\omega_{k}=0\) implies that \(x^{k}\) is a stationary point to problem (1). Thus, _in view of (6) from now on we assume that \(\omega_{k}<0\), for all \(k\in\mathbb{N}\)_. Next result establishes that the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 1 is well defined. **Proposition 4.1**.: _For all \(j\in\mathbb{N}\) such that \(2^{j}L_{k}\geq L\), the inequality (8) holds. Consequently, the number \(j_{k}\) in **Step 4** is well defined. Furthermore, \(j_{k}\) is the smallest non-negative integer satisfying the following two conditions_ \[2^{j_{k}}L_{k}\geq 2L_{0},\] \[f(x^{k}+\lambda_{k}(p^{k}-x^{k}))\leq f(x^{k})-|\omega_{k}|\lambda_{k}+\frac{2 ^{j_{k}}L_{k}}{2}\|p^{k}-x^{k}\|^{2}\lambda_{k}^{2}. \tag{11}\] _In addition, the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 1 is well defined. And the following inequality holds_ \[f(x^{k+1})\leq f(x^{k})-\frac{1}{2}|\omega_{k}|\lambda_{k},\hskip 56.905512ptk=0,1,\ldots.\] For simplifying the notations we define the following constants \[\alpha:=2(L+L_{0})\operatorname{diam}(\mathcal{C})^{2}>0.\] **Lemma 4.2**.: _The sequence \((L_{k})_{k\in\mathbb{N}^{*}}\) satisfies the following inequalities_ \[L_{0}\leq L_{k}\leq L+L_{0},\hskip 28.452756ptk=0,1,\ldots.\] _and the step size sequence \((\lambda_{k})_{k\in\mathbb{N}}\) satisfies_ \[\lambda_{k}\geq\min\left\{1,\frac{|\omega_{k}|}{\alpha}\right\},\hskip 28.452756ptk =0,1,\ldots.\] **Theorem 4.3**.: \(\lim_{k\to+\infty}\omega_{k}=0\)_. Consequently, every limit point of the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 1 is a stationary point to problem (1)._ Next, we present some iteration-complexity bounds for the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 1. **Theorem 4.4**.: _For every \(N\in\mathbb{N}\), there holds_ \[\min\left\{|\omega_{k}|:\ k=0,1,\ldots,N\right\}\leq\max\left\{2(f(x^{0})-f^{* }),\sqrt{2\alpha(f(x^{0})-f^{*})}\right\}\frac{1}{\sqrt{N+1}}.\] Since \(\mathcal{C}\) is compact and the functions \(g\) and \(h\) are convex, we can define the following constants: \[\Theta:=\min\left\{\frac{2}{\bar{\Gamma}\operatorname{diam}(\mathcal{C})},\frac{ 1}{\alpha}\right\}\qquad\bar{\Gamma}:=\max_{x\in\mathcal{C}}\{\|\nabla g(x)\|+ \|u\|:\;u\in\partial h(x)\},\qquad\beta:=\frac{\bar{\Gamma}^{2}}{\Theta L_{0}}.\] **Theorem 4.5**.: _Let \(\epsilon>0\). 
Then, Algorithm 1 generates a point \(x^{k}\) such that \(|\omega_{k}|\leq\epsilon\), performing at most_ \[\left(\max\left\{\Gamma^{2},\alpha\Gamma\right\}\frac{1}{\epsilon^{2}}\right)\left(2+\log_{2}\left(\beta\frac{1}{\epsilon^{2}}\right)\right)=\mathcal{O}(|\log_{2}(\epsilon)|\epsilon^{-2})\] _evaluations of the function \(f\), and \(\max\left\{\Gamma^{2},\alpha\Gamma\right\}\frac{1}{\epsilon^{2}}+1=\mathcal{O}(\epsilon^{-2})\) evaluations of gradients of \(f\) and subgradients of \(h\)._ ### Iteration-complexity analysis: The weak-star-convex case In this section we present iteration-complexity bounds for the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 1 when the objective function \(f=g-h\) is a weak-star-convex function and \(\nabla g\) is Lipschitz continuous on \(\mathcal{C}\subset\mathbb{R}^{n}\). Let us begin by recalling the concept of the star-convex function introduced in [34]. **Definition 4.6**.: Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a convex set. A function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is said to be star-convex in \(\mathcal{C}\) if its set of global minima \(X^{*}\) on the set \(\mathcal{C}\) is not empty and for any \(x^{*}\in X^{*}\) we have \[f(\lambda x^{*}+(1-\lambda)x)\leq\lambda f(x^{*})+(1-\lambda)f(x),\qquad\forall x\in\mathcal{C},\forall\lambda\in[0,1]. \tag{12}\] Every convex function with a non-empty set of global minimizers is a star-convex function, but in general, star-convex functions need not be convex. In the following we present two examples of star-convex functions that are not convex, which appeared in [34]. We show that these examples are differences of two convex functions. Although the first example does not fit the assumptions of problem (1), we include it here because it is a classical example of a star-convex function. **Example 4.7**.: The star-convex function \(\phi(t)=|t|(1-e^{-|t|})\) is not convex. In addition, \(\phi\) is a difference of two convex functions. Indeed, considering the following two convex functions \(\varphi(t)=|t|(1-e^{-|t|})+e^{|t|}\) and \(\psi(t)=e^{|t|}\), we have \(\phi=\varphi-\psi\). **Example 4.8**.: Consider the star-convex function \(f(s,t)=s^{2}t^{2}+s^{2}+t^{2}\). The function \(f\) is not convex, but it is a difference of two convex functions. Indeed, taking the convex functions \(g(s,t)=(s^{2}+t^{2})^{2}+s^{2}+t^{2}\) and \(h(s,t)=s^{2}t^{2}+s^{4}+t^{4}\), we have \(f=g-h\). Let us introduce the notion of weak-star-convex functions, which generalizes the star-convex function concept proposed in [34] as well as a few other related concepts, such as [21, 42]. **Definition 4.9**.: Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a convex set. A function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is said to be weak-star-convex in \(\mathcal{C}\) if its set of global minima \(X^{*}\) on the set \(\mathcal{C}\) is not empty and, for each \(x\in\mathcal{C}\), there exists \(x_{x}^{*}\in X^{*}\) such that \[f(\lambda x_{x}^{*}+(1-\lambda)x)\leq\lambda f(x_{x}^{*})+(1-\lambda)f(x),\qquad\forall\lambda\in[0,1]. \tag{13}\] As mentioned above, star-convex functions need not be convex; note also that every star-convex function is a weak-star-convex function. The difference between the two notions is that the minimizer \(x_{x}^{*}\) used in Definition 4.9 is allowed to depend on the point \(x\). In the next examples we outline a general procedure for generating weak-star-convex functions.
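Before turning to these examples, the decomposition and the star-convexity inequality (12) claimed in Example 4.8 can be checked numerically. The following short Python snippet is only an illustrative sanity check written for this purpose; it is not part of the formal development, and it merely samples random points rather than proving anything:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 4.8: f(s,t) = s^2 t^2 + s^2 + t^2, with f = g - h for the convex
# functions g(s,t) = (s^2+t^2)^2 + s^2 + t^2 and h(s,t) = s^2 t^2 + s^4 + t^4.
f = lambda s, t: s**2 * t**2 + s**2 + t**2
g = lambda s, t: (s**2 + t**2)**2 + s**2 + t**2
h = lambda s, t: s**2 * t**2 + s**4 + t**4

pts = rng.uniform(-3.0, 3.0, size=(1000, 2))
assert np.allclose([f(s, t) for s, t in pts],
                   [g(s, t) - h(s, t) for s, t in pts])   # check f = g - h

# Star-convexity inequality (12) with the unique global minimizer x* = (0, 0).
lams = rng.uniform(0.0, 1.0, size=1000)
assert all(f((1 - l) * s, (1 - l) * t) <= l * f(0.0, 0.0) + (1 - l) * f(s, t)
           for (s, t), l in zip(pts, lams))
```

Such a check only samples finitely many points; the identity \(f=g-h\) of Example 4.8 can of course also be verified by direct expansion.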
**Example 4.10**.: Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a convex set and \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a function that is not necessarily star-convex in \(\mathcal{C}\). The collection of global minima of \(f\) on the set \(\mathcal{C}\) is denoted by \(X^{*}\). Assume that the following conditions hold: * (i) the set \(X^{*}\) is not empty, i.e., \(X^{*}\neq\varnothing\); * (ii) \(\mathcal{C}:=\cup_{i=1}^{n}\Omega_{i}\), where \(\Omega_{i}\) is a convex set, for all \(i=1,\ldots,n\); * (iii) there exist \(x_{1}^{*},\ldots,x_{n}^{*}\in X^{*}\) such that \(x_{i}^{*}\in\Omega_{i}\), for all \(i=1,\ldots,n\); * (iv) the function \(f\) is convex on \(\Omega_{i}\), for all \(i=1,\ldots,n\). Under conditions \((i)\), \((ii)\), \((iii)\) and \((iv)\) we can show that the function \(f\) is weak-star-convex. Indeed, take \(x\in\mathcal{C}\). Item \((ii)\) implies that there exists \(i\) such that \(x\in\Omega_{i}\). Moreover, item \((iii)\) implies that there exists a minimizer \(x_{i}^{*}\in X^{*}\) of \(f\) such that \(x_{i}^{*}\in\Omega_{i}\). Since \(f\) is convex on the convex set \(\Omega_{i}\), by item \((iv)\), the inequality (13) holds for \(x_{x}^{*}=x_{i}^{*}\). Therefore, \(f\) is weak-star-convex. For instance, let \(\varphi:\mathbb{R}^{2}\to\mathbb{R}\) be defined by \[\varphi(t,s)=\frac{1}{2}\left(t^{2}+s^{2}\right)-|t|-|s|. \tag{14}\] The function \(\varphi\) is weak-star-convex but not star-convex. In fact, the minimizer set of \(\varphi\) is given by \(X^{*}=\{x_{1}^{*}:=(1,1),x_{2}^{*}:=(-1,1),x_{3}^{*}:=(1,-1),x_{4}^{*}:=(-1,-1)\}\), which satisfies item \((i)\). Set \(\Omega_{1}:=\{(s,t)\in\mathbb{R}^{2}:\ s\geq 0,t\geq 0\}\), \(\Omega_{2}:=\{(s,t)\in\mathbb{R}^{2}:\ -s\geq 0,t\geq 0\}\), \(\Omega_{3}:=\{(s,t)\in\mathbb{R}^{2}:\ s\geq 0,-t\geq 0\}\) and \(\Omega_{4}:=\{(s,t)\in\mathbb{R}^{2}:\ -s\geq 0,-t\geq 0\}\). Since the sets \(\Omega_{1}\), \(\Omega_{2}\), \(\Omega_{3}\) and \(\Omega_{4}\) are convex and \(\mathbb{R}^{2}=\cup_{i=1}^{4}\Omega_{i}\), item \((ii)\) is satisfied. We also have \(x_{i}^{*}\in\Omega_{i}\), for \(i=1,2,3,4\). Consequently, item \((iii)\) is also satisfied. Finally, note that \(\varphi(t,s)=(1/2)\left(t^{2}+s^{2}\right)-t-s\), for all \((s,t)\in\Omega_{1}\), \(\varphi(t,s)=(1/2)\left(t^{2}+s^{2}\right)-t+s\), for all \((s,t)\in\Omega_{2}\), \(\varphi(t,s)=(1/2)\left(t^{2}+s^{2}\right)+t-s\), for all \((s,t)\in\Omega_{3}\), and \(\varphi(t,s)=(1/2)\left(t^{2}+s^{2}\right)+t+s\), for all \((s,t)\in\Omega_{4}\). Hence, we obtain that \(\varphi\) is convex on the set \(\Omega_{i}\), for \(i=1,2,3,4\). Thus, item \((iv)\) is satisfied. Therefore, because items \((i)\), \((ii)\), \((iii)\) and \((iv)\) are satisfied, we conclude that the function (14) is weak-star-convex. On the other hand, since \[0=\varphi\Big{(}\frac{1}{2}(1,1)+\frac{1}{2}(-1,-1)\Big{)}>\frac{1}{2}\varphi(1,1)+\frac{1}{2}\varphi(-1,-1)=-1,\] the inequality (12) fails for the minimizer \(x_{1}^{*}=(1,1)\) and the point \(x=(-1,-1)\), and we conclude that \(\varphi\) is not star-convex. In general, with a little more effort, we can show that the function \(f:\mathbb{R}^{n}\to\mathbb{R}\) defined by \(f(x)=\alpha\|x\|^{2}-\beta\|x\|_{1}\), where \(\alpha>0\) and \(\beta>0\), is weak-star-convex but not star-convex. **Example 4.11**.: In some cases the generalized Fermat-Weber problem (5) is weak-star-convex. For example, when \(m=1\) and \(C_{1}\) is a finite union of compact convex sets such that every set in the union contains a solution from \(X^{*}\), then the objective is weak-star-convex. **Proposition 4.12**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a locally Lipschitz function.
Assume that the set of global minimum of \(f\) on the set \(\mathcal{C}\), denoted by the set \(X^{*}\), is non-empty, and let \(f^{*}\) be the minimum value of \(f\) on the set \(\mathcal{C}\). If \(f\) is weak-star-convex in \(\mathcal{C}\), then for each \(x\in\mathcal{C}\) there exists \(x_{x}^{*}\in X^{*}\) such that \(f^{*}-f(x)\geq v^{\mbox{\tiny T}}(x_{x}^{*}-x)\), for all \(v\in\partial_{c}f(x)\)._ Proof.: Let \(x\in\mathcal{C}\). It follows from Definition 4.9 that there exists a minimizer \(x_{x}^{*}\in X^{*}\) of \(f\) such that \(f(x_{x}^{*})-f(x)\geq(f(x+\lambda(x_{x}^{*}-x))-f(x))/\lambda\). Then, taking the limit as \(\lambda\) goes to \(+0\) we have \(f(x_{x}^{*})-f(x)\geq f^{\circ}(x;x_{x}^{*}-x)\). Since \(f^{*}=f(x^{*})\), by using the last statement in Theorem 2.1 yields the desired inequality. **Corollary 4.13**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a locally Lipschitz function. If \(f\) is weak-star-convex in \(\mathcal{C}\), then each stationary point \(\bar{x}\in\mathcal{C}\) of the problem \(\min_{x\in\mathcal{C}}f(x)\) is also a global minimizer._ Proof.: Let \(\bar{x}\in\mathcal{C}\) be a stationary point of the problem \(\min_{x\in\mathcal{C}}f(x)\), denotes by \(X^{*}\) the set of its global solution and by \(f^{*}\) the global minimum value. Thus, we have \[v^{T}(x-\bar{x})\geq 0,\qquad\forall v\in\partial_{c}f(\bar{x}),\qquad\forall\ x \in\mathcal{C}. \tag{15}\] Since \(f\) is weak-star-convex in \(\mathcal{C}\), it follows from Proposition 4.12 and Theorem 2.2 that there exists \(x_{\bar{x}}^{*}\in\mathcal{C}^{*}\) such that \(f^{*}-f(\bar{x})\geq f^{\circ}(\bar{x};x_{\bar{x}}^{*}-\bar{x})\geq\bar{v}^{T} (x_{\bar{x}}^{*}-\bar{x})\). Thus, by using (15) we conclude that \(f^{*}\geq f(\bar{x})\). Considering that \(f^{*}\) the global minimum value, we have \(f(\bar{x})=f^{*}\). Therefore, \(\bar{x}\) is also a global minimizer. **Theorem 4.14**.: _Assume that \(f:\mathbb{R}^{n}\to\mathbb{R}\) is weak-star-convex in \(\mathcal{C}\). Let \((x^{k})_{k\in\mathbb{N}}\) be the sequence generated by Algorithm 1. Then,_ 1. \(f(x^{k})-f^{*}\leq\frac{4(L+L_{0})\operatorname{diam}(\mathcal{C})^{2}}{k}\)_, for all_ \(k=1,2,\ldots\)_._ 2. \(\min_{\ell\in\left\{\lfloor\frac{k}{2}\rfloor+2,\ldots,k\right\}}|\omega_{ \ell}|\leq\frac{16(L+L_{0})\operatorname{diam}(\mathcal{C})^{2}}{k-2}\)_, for all_ \(k=3,4,\ldots\)_, where_ \(\lfloor k/2\rfloor=\max_{n\in\mathbb{N}}\left\{n\leq k/2\right\}.\)__ Proof.: It follows from (11) in Proposition 4.1 that \[f(x^{k}+\lambda_{k}(p^{k}-x^{k}))\leq f(x^{k})-|\omega(x_{k})|\lambda_{k}+ \frac{2^{j_{k}}L_{k}}{2}\|p^{k}-x^{k}\|^{2}\lambda_{k}^{2}. 
\tag{16}\] On the other hand, by using (7) we conclude that \[\lambda_{k}=\operatorname{argmin}_{\lambda\in(0,1]}\left\{-|\omega(x_{k})|\lambda+\frac{2^{j_{k}}L_{k}}{2}\|p^{k}-x^{k}\|^{2}\lambda^{2}\right\}.\] Hence, letting \(\beta_{k}:=2/(k+2)\), it follows from (16) and the last inequality that \[f(x^{k}+\lambda_{k}(p^{k}-x^{k}))\leq f(x^{k})-|\omega_{k}|\beta_{k}+\frac{2^{j_{k}}L_{k}}{2}\|p^{k}-x^{k}\|^{2}\beta_{k}^{2}.\] Since \(\|p^{k}-x^{k}\|\leq\operatorname{diam}(\mathcal{C})\), the last inequality together with (9) and Lemma 4.2 yields \[f(x^{k+1})-f^{*}\leq f(x^{k})-f^{*}-|\omega_{k}|\beta_{k}+(L+L_{0})\operatorname{diam}(\mathcal{C})^{2}\beta_{k}^{2}.\] Taking into account that \(f\) is a weak-star-convex function in \(\mathcal{C}\), it follows from Proposition 4.12 and Theorem 2.2 that there exists \(x_{k}^{*}\in\mathcal{C}^{*}\) such that \[f^{*}-f(x^{k})\geq(\nabla g(x^{k})-u^{k})^{\mathrm{T}}(x_{k}^{*}-x^{k}).\] Thus, it follows from (6) and (10) that \(0\geq f^{*}-f(x^{k})\geq\omega_{k}\), which implies that \(0\leq f(x^{k})-f^{*}\leq|\omega_{k}|\). Thus, applying Lemma 2.8 with \[a_{k}:=f(x^{k})-f^{*}\leq b_{k}:=|\omega_{k}|,\qquad A:=2(L+L_{0})\,\mathrm{diam}(\mathcal{C})^{2},\] the desired inequalities follow. According to Theorem 4.14, functions with the weak-star-convexity property allow the Frank-Wolfe method to efficiently minimize them even when their landscape is not convex. ## 5 Frank-Wolfe algorithm with finite difference In this section, based on Algorithm 1, we formulate a _version of the Frank-Wolfe algorithm to solve problem (1) with a finite difference approximating the gradient \(\nabla g\) and with an adaptive step size_. For that, let us first recall the concept of the finite difference of a continuously differentiable function \(g:\mathbb{R}^{n}\to\mathbb{R}\). Let \(D(x,s)\) be the _vector finite difference of \(g\) at \(x\) with increment \(s>0\)_, which approximates the gradient \(\nabla g(x)\), defined by \[D(x,s):=\left(D_{1}(x,s),\ldots,D_{n}(x,s)\right),\qquad\quad D_{i}(x,s):=\frac{g(x+se_{i})-g(x)}{s},\quad i=1,\ldots,n. \tag{17}\] The next lemma gives us an upper bound for the error in approximating the gradient \(\nabla g(x)\) by the finite difference \(D(x,s)\), see for example [35, (8.4) p. 195]. **Lemma 5.1**.: _Assume that \(g\) satisfies assumption_ **(A)**_. Then, there holds_ \[\|\nabla g(x)-D(x,s)\|\leq\frac{\sqrt{n}L}{2}s,\qquad\forall x\in\mathbb{R}^{n},\quad\forall s>0.\] Proof.: Setting \(\nabla g(x)=(\partial_{1}g(x),\ldots,\partial_{n}g(x))\) and using (17) we have \[\|\nabla g(x)-D(x,s)\|^{2}\leq\sum_{i=1}^{n}\left(\partial_{i}g(x)-D_{i}(x,s)\right)^{2}. \tag{18}\] Applying Proposition 2.4 with \(v=e_{i}\) and \(\lambda=s\), we conclude that \(D_{i}(x,s)-\partial_{i}g(x)\leq(Ls)/2\). Thus, (18) implies that \(\|\nabla g(x)-D(x,s)\|^{2}\leq n((Ls)/2)^{2}\), which implies the desired inequality. The statement of the Frank-Wolfe algorithm to solve problem (1) with a finite difference approximating the gradient \(\nabla g\) (Algorithm 2) is as follows: **Step 0.** Select \(x^{0},x^{1}\in\mathcal{C}\), \(x^{0}\neq x^{1}\) and \(L_{1}>0\). Set \(k=1\). **Step 1.** Take \(u^{k}\in\partial h(x^{k})\). Set \(j:=\min\{\ell\in\mathbb{N}:\ 2^{\ell}L_{k}\geq 2L_{1}\}\) and \(i=0\). **Step 2**.: Set \[s_{ij}=\frac{2L_{1}}{2^{i+j}L_{k}\sqrt{n}}\|x^{k}-x^{k-1}\|.
\tag{19}\] **Step 3**.: Use an "LO oracle" to compute an optimal solution \(p^{ij}\) and the optimal value \(\omega_{ij}(x_{k})\) as follows \[p^{ij}\in\operatorname{argmin}_{p\in\mathcal{C}}(D(x^{k},s_{ij})-u^{k})^{\mathrm{T}}(p-x^{k}),\qquad\quad\omega_{ij}(x_{k}):=(D(x^{k},s_{ij})-u^{k})^{\mathrm{T}}(p^{ij}-x^{k}). \tag{20}\] **Step 4**.: If \(\omega_{ij}(x_{k})<0\), then set \(i_{k}=i\) and go to **Step 5**. Otherwise, set \(i\gets i+1\), and go to **Step 2**. **Step 5**.: Compute the step size \(\lambda_{j}\in(0,1]\) as follows \[\lambda_{j}=\min\left\{1,\frac{|\omega_{i_{k}j}(x_{k})|}{2^{j}L_{k}\|p^{i_{k}j}-x^{k}\|^{2}}\right\}:=\operatorname{argmin}_{\lambda\in(0,1]}\left\{-|\omega_{i_{k}j}(x_{k})|\lambda+\frac{2^{j}L_{k}}{2}\|p^{i_{k}j}-x^{k}\|^{2}\lambda^{2}\right\}. \tag{21}\] **Step 6**.: If \[f(x^{k}+\lambda_{j}(p^{i_{k}j}-x^{k}))\leq f(x^{k})-\frac{1}{2}|\omega_{i_{k}j}(x_{k})|\lambda_{j}-\frac{2^{j}L_{k}}{4}\|p^{i_{k}j}-x^{k}\|^{2}\lambda^{2}_{j}+\frac{L_{1}}{4}\|x^{k}-x^{k-1}\|^{2}, \tag{22}\] then set \(j_{k}=j\) and go to **Step 7**. Otherwise, set \(j\gets j+1\) and go to **Step 2**. **Step 7**.: Set \(\lambda_{k}:=\lambda_{j_{k}}\), \(p^{k}:=p^{i_{k}j_{k}}\), and define the next iterate \(x^{k+1}\) and the next approximation to the Lipschitz constant \(L_{k+1}\) as follows \[x^{k+1}:=x^{k}+\lambda_{k}(p^{k}-x^{k}),\qquad\quad L_{k+1}:=2^{j_{k}-1}L_{k}. \tag{23}\] Set \(k\gets k+1\), and go to **Step 1**. Let us describe the main features of Algorithm 2. First of all, note that since \(x^{0},x^{1}\in\mathcal{C}\), \(p^{k}\in\mathcal{C}\) and \(\lambda_{k}\in(0,1]\) for all \(k=0,1,\ldots\), and \(\mathcal{C}\) is a convex set, it follows from inductive arguments and the first equality in (23) that \((x^{k})_{k\in\mathbb{N}}\subset\mathcal{C}\). Note that (20) implies that \[\omega_{ij}(x_{k})\leq(D(x^{k},s_{ij})-u^{k})^{\mathrm{T}}(p-x^{k}),\qquad\forall p\in\mathcal{C}. \tag{24}\] Hence, considering that \(x^{k}\in\mathcal{C}\), letting \(p=x^{k}\) in the last inequality we conclude that \[\omega_{ij}(x_{k})\leq 0. \tag{25}\] In order to simplify the presentation, from now on we will use the following two notations: \[s_{k}:=s_{i_{k}j_{k}},\qquad\qquad\omega_{k}:=\omega_{i_{k}j_{k}}(x_{k}). \tag{26}\] **Proposition 5.2**.: _Assume that in_ **Step 3** _we have \(\omega_{ij}(x_{k})=0\), for all \(i\in\mathbb{N}\). Then,_ \[(\nabla g(x^{k})-u^{k})^{\mathrm{T}}(p-x^{k})\geq 0,\qquad\forall p\in\mathcal{C}, \tag{27}\] _i.e., \(x^{k}\) is a stationary point to problem (1)._ Proof.: If in **Step 3** we have \(\omega_{ij}(x_{k})=0\), for all \(i\in\mathbb{N}\), then it follows from (24) that \[0\leq(D(x^{k},s_{ij})-u^{k})^{\mathrm{T}}(p-x^{k}),\qquad\forall i\in\mathbb{N},\quad\forall p\in\mathcal{C}. \tag{28}\] Since (19) implies that \(\lim_{i\to+\infty}s_{ij}=0\), it follows from (17) that \(\lim_{i\to+\infty}D(x^{k},s_{ij})=\nabla g(x^{k})\). Hence, taking the limit in (28) we conclude that (27) holds. _In view of Proposition 5.2, (25) and (26), from now on we assume that \(\omega_{k}<0\), for all \(k\in\mathbb{N}\)._ ### Well-definedness Herein, we establish that the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 2 is well defined. **Lemma 5.3**.: _Let \(s_{ij}\) be defined in (19) and assume that \(2^{j}L_{k}\geq 2L\).
Then, there holds_ \[\|\nabla g(x^{k})-D(x^{k},s_{ij})\|\leq\frac{L_{1}}{2}\|x^{k}-x^{k-1}\|.\] Proof.: By applying Lemma 5.1 with \(x=x^{k}\) and \(s=s_{ij}\)we obtain that \[\|\nabla g(x^{k})-D(x^{k},s_{ij})\|\leq\frac{\sqrt{n}L}{2}s_{ij}\] Therefore, using (19) and considering that \(2^{j}L_{k}\geq 2L\) the desired inequality follows. **Lemma 5.4**.: _Let \(j\in\mathbb{N}\) such that \(2^{j}L_{k}\geq 2(L+L_{1}/2)\). Then, (22) holds and \(j_{k}\) is the smallest \(j\in\mathbb{N}\) satisfying the following two conditions \(2^{j}L_{k}\geq 2L_{1}\) and (22). Consequently, the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 2 is well defined and satisfies the following inequality:_ \[f(x^{k+1})\leq f(x^{k})-\frac{1}{2}|\omega_{k}|\lambda_{k}-\frac{L_{k+1}}{2}\| x^{k+1}-x^{k}\|^{2}+\frac{L_{1}}{4}\|x^{k}-x^{k-1}\|^{2},\qquad\qquad k=1,2,\ldots. \tag{29}\] Proof.: It follows from Proposition 2.4 that \[g(x^{k}+\lambda_{j}(p^{i_{k}j}-x^{k}))\leq g(x^{k})+\nabla g(x^{k})^{\mbox{ \tiny T}}(p^{i_{k}j}-x^{k})\lambda_{j}+\frac{L}{2}\|p^{i_{k}j}-x^{k}\|^{2} \lambda_{j}^{2}. \tag{30}\] On the other hand, considering that \(u^{k}\in\partial h(x^{k})\) it follows from Proposition (2.5) that \[h(x^{k}+\lambda_{j}(p^{i_{k}j}-x^{k}))\geq h(x^{k})+\lambda_{j}\langle u^{k}, p^{i_{k}j}-x^{k}\rangle \tag{31}\] Thus, combining (30) with (31) and taking into account the equality in (20) and (25), some algebraic manipulations shows that \[f(x^{k}+\lambda_{j}(p^{i_{k}j}-x^{k}))\leq f(x^{k})-|\omega_{i_ {k}j}(x_{k})|\lambda_{j}+\frac{2^{j}L_{k}}{2}\|p^{i_{k}j}-x^{k}\|^{2}\lambda_{ j}^{2}+\\ (\nabla g(x^{k})-D(x^{k},s_{i_{k}j}))^{\mbox{\tiny T}}(p^{i_{k}j }-x^{k})\lambda_{j}+\frac{L-2^{j}L_{k}}{2}\|p^{i_{k}j}-x^{k}\|^{2}\lambda_{j}^ {2}. \tag{32}\] To proceed with the proof we need first to prove that for \(\lambda_{j}\) defined in (21) we have \[-|\omega_{i_{k}j}(x_{k})|\lambda_{j}+\frac{2^{j}L_{k}}{2}\|p^{i_{k}j}-x^{k}\|^ {2}\lambda_{j}^{2}\leq-\frac{1}{2}|\omega_{i_{k}j}(x_{k})|\lambda_{j}. \tag{33}\] For that we separately consider two cases: \(\lambda_{j}=|\omega_{i_{k}j}(x_{k})|/(2^{j}L_{k}\|p^{i_{k}j}-x^{k}\|^{2})\) and \(\lambda_{j}=1\). In the former, we have \(2^{j}L_{k}\|p^{i_{k}j}-x^{k}\|^{2}=|\omega_{i_{k}j}(x_{k})|/\lambda_{j}\),which substituting into the left hand side of (33) yields an equality. If now \(\lambda_{j}=1\), then it follows from (21) that \(\lambda_{j}=1\leq|\omega_{i_{k}j}(x_{k})|/(2^{j}L_{k}\|p^{i_{k}j}-x^{k}\|^{2})\) or equivalently \(2^{j}L_{k}\|p^{i_{k}j}-x^{k}\|^{2}\leq|\omega_{i_{k}j}(x_{k})|\) and \(\lambda_{j}=1\), which again substituting into the left hand side of (33) yields the inequality. Hence, (33) holds. Therefore, (32) becomes \[f(x^{k}+\lambda_{j}(p^{i_{k}j}-x^{k}))\leq f(x^{k})-\frac{1}{2}| \omega_{i_{k}j}(x_{k})|\lambda_{j}+\\ (\nabla g(x^{k})-D(x^{k},s_{i_{k}j}))^{\mbox{\tiny T}}(p^{i_{k}j} -x^{k})\lambda_{j}+\frac{L-2^{j}L_{k}}{2}\|p^{i_{k}j}-x^{k}\|^{2}\lambda_{j}^{2}. \vspace{-0.2cm} \tag{34}\] Since \(2^{j}L_{k}\geq 2(L+L_{1}/2)>L\), we can apply Lemma 5.3 to obtain that \[(\nabla g(x^{k})-D(x^{k},s_{i_{k}j}))^{\mbox{\tiny T}}(p^{i_{k}j}-x^{k}) \lambda_{j}\leq\frac{L_{1}}{2}\|x^{k}-x^{k-1}\|\|p^{i_{k}j}-x^{k}\|\lambda_{j}. 
\] Hence, considering that \(2\|x^{k}-x^{k-1}\|\|p^{i_{k}j}-x^{k}\|\lambda_{j}\leq\|x^{k}-x^{k-1}\|^{2}+\|p^{i_{k}j}-x^{k}\|^{2}\lambda_{j}^{2}\), we have \[(\nabla g(x^{k})-D(x^{k},s_{i_{k}j}))^{\mathrm{T}}(p^{i_{k}j}-x^{k})\lambda_{j}\leq\frac{L_{1}}{4}\|p^{i_{k}j}-x^{k}\|^{2}\lambda_{j}^{2}+\frac{L_{1}}{4}\|x^{k}-x^{k-1}\|^{2}.\] Thus, combining the last inequality with (34) and considering that \(\lambda_{j}<1\), we conclude that \[f(x^{k}+\lambda_{j}(p^{i_{k}j}-x^{k}))\leq f(x^{k})-\frac{1}{2}|\omega_{i_{k}j}(x_{k})|\lambda_{j}+\frac{L+L_{1}/2-2^{j}L_{k}}{2}\|p^{i_{k}j}-x^{k}\|^{2}\lambda_{j}^{2}+\frac{L_{1}}{4}\|x^{k}-x^{k-1}\|^{2}.\] Considering that \(2^{j}L_{k}\geq 2(L+L_{1}/2)\), the last inequality implies that (22) holds. Since the first trial \(j\) in (22) is defined in **Step 1**, i.e., \(j:=\min\{\ell\in\mathbb{N}:\ 2^{\ell}L_{k}\geq 2L_{1}\}\), and \(2^{j+1}L_{k}\geq 2^{j}L_{k}\), we conclude that \(j_{k}\) is the smallest \(j\in\mathbb{N}\) satisfying \(2^{j}L_{k}\geq 2L_{1}\) and (22). Consequently, \((x^{k})_{k\in\mathbb{N}}\) is well defined, which proves the first three statements of the lemma. Therefore, (29) follows from the definition of \(j_{k}\) in **Step 6**, (22) and (23), which concludes the proof. Since the proof of the next lemma is similar to that of Lemma 4.2, it will be omitted. **Lemma 5.5**.: _The sequence \((L_{k})_{k\in\mathbb{N}^{*}}\) satisfies the following inequalities_ \[L_{1}\leq L_{k}\leq 2(L+L_{1}),\qquad k=1,2,\ldots, \tag{35}\] _and the step size sequence \((\lambda_{k})_{k\in\mathbb{N}^{*}}\) satisfies_ \[\lambda_{k}\geq\min\left\{1,\frac{|\omega_{k}|}{2(L+L_{1})\,\operatorname{diam}(\mathcal{C})^{2}}\right\},\qquad k=1,2,\ldots. \tag{36}\] ### Convergence analysis In this section we analyze the convergence properties of the sequence \((x^{k})_{k\in\mathbb{N}}\). For that, we assume that the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by Algorithm 2 is infinite. **Lemma 5.6**.: _The sequence \(\left(f(x^{k})+(L_{1}/2)\|x^{k}-x^{k-1}\|^{2}\right)_{k\in\mathbb{N}}\) is monotone and non-increasing. Moreover, there holds:_ \[\lim_{k\rightarrow+\infty}\|x^{k}-x^{k-1}\|=0. \tag{37}\] _As a consequence, \(\lim_{k\rightarrow+\infty}f(x^{k})=\bar{f}\) for some \(\bar{f}\geq f^{*}\)._ Proof.: Since \(L_{k}\geq L_{1}\), for all \(k\in\mathbb{N}\), it follows from Lemma 5.4 that \[f(x^{k+1})\leq f(x^{k})-\frac{1}{2}|\omega_{k}|\lambda_{k}-\frac{L_{1}}{2}\|x^{k+1}-x^{k}\|^{2}+\frac{L_{1}}{2}\|x^{k}-x^{k-1}\|^{2},\qquad\qquad k=1,2,\ldots,\] which, taking into account that \(\frac{1}{2}|\omega_{k}|\lambda_{k}\geq 0\), yields \[f(x^{k+1})+\frac{L_{1}}{2}\|x^{k+1}-x^{k}\|^{2}\leq f(x^{k})+\frac{L_{1}}{2}\|x^{k}-x^{k-1}\|^{2},\qquad\qquad k=1,2,\ldots,\] which implies the first statement of the lemma. We proceed to prove the second statement. For that, we first note that Lemma 5.5 implies that \(L_{1}\leq L_{k}\), for all \(k\in\mathbb{N}\). Thus, using again that \(\frac{1}{2}|\omega_{k}|\lambda_{k}\geq 0\), it follows from (29) that \[\frac{L_{k+1}}{2}\|x^{k+1}-x^{k}\|^{2}-\frac{L_{k}}{4}\|x^{k}-x^{k-1}\|^{2}\leq f(x^{k})-f(x^{k+1}),\qquad k=1,2,\ldots. \tag{38}\] Let \(\ell\in\mathbb{N}\) be such that \(\ell>1\). Summing up the inequality (38) from \(1\) to \(\ell\) we obtain \[\sum_{k=1}^{\ell}\Big{(}\frac{L_{k+1}}{2}\|x^{k+1}-x^{k}\|^{2}-\frac{L_{k}}{4}\|x^{k}-x^{k-1}\|^{2}\Big{)}\leq f(x^{1})-f^{*}.
\tag{39}\] Considering that \(L_{1}\leq L_{k}\), for all \(k\in\mathbb{N}\), some algebraic manipulations show that \[-\frac{L_{1}}{4}\|x^{1}-x^{0}\|^{2}+\frac{L_{1}}{4}\sum_{k=1}^{\ell}\|x^{k+1}-x^{k}\|^{2}\leq\sum_{k=1}^{\ell}\Big{(}\frac{L_{k+1}}{2}\|x^{k+1}-x^{k}\|^{2}-\frac{L_{k}}{4}\|x^{k}-x^{k-1}\|^{2}\Big{)}.\] Combining (39) with the last inequality we obtain that \[\sum_{k=1}^{\ell}\|x^{k+1}-x^{k}\|^{2}\leq\frac{4}{L_{1}}\Big{(}f(x^{1})-f^{*}+\frac{L_{1}}{4}\|x^{1}-x^{0}\|^{2}\Big{)}.\] Since the last inequality holds for all \(\ell\in\mathbb{N}\), we have \(\sum_{k=1}^{+\infty}\|x^{k+1}-x^{k}\|^{2}<+\infty\). Thus, (37) holds. To prove the last statement, we first note that, since \(\big{(}f(x^{k})+(L_{1}/2)\|x^{k}-x^{k-1}\|^{2}\big{)}_{k\in\mathbb{N}}\) is a non-increasing sequence bounded from below by \(f^{*}\), it must be convergent. Thus, we set \(\bar{f}=\lim_{k\to+\infty}\big{(}f(x^{k})+(L_{1}/2)\|x^{k}-x^{k-1}\|^{2}\big{)}\). Hence, considering that \[f(x^{k})=f(x^{k})+(L_{1}/2)\|x^{k}-x^{k-1}\|^{2}-(L_{1}/2)\|x^{k}-x^{k-1}\|^{2}\geq f^{*},\] by using (37) we conclude that \(\lim_{k\to+\infty}f(x^{k})=\bar{f}\geq f^{*}\) and the proof is concluded. To simplify the notation we define the following constant \[\bar{\alpha}:=2(L+L_{1})\operatorname{diam}(\mathcal{C})^{2}>0. \tag{40}\] **Lemma 5.7**.: _The following inequality holds:_ \[f(x^{k+1})\leq f(x^{k})-\frac{1}{2}\min\bigg{\{}|\omega_{k}|,\frac{1}{\bar{\alpha}}|\omega_{k}|^{2}\bigg{\}}-\frac{L_{1}}{2}\|x^{k+1}-x^{k}\|^{2}+\frac{L_{1}}{2}\|x^{k}-x^{k-1}\|^{2}, \tag{41}\] _for all \(k=1,2,\ldots\)._ Proof.: Combining inequality (29) with (35) and (36), and also using (40), we conclude that \[f(x^{k+1})\leq f(x^{k})-\frac{1}{2}|\omega_{k}|\min\left\{1,\frac{1}{\bar{\alpha}}|\omega_{k}|\right\}-\frac{L_{1}}{2}\|x^{k+1}-x^{k}\|^{2}+\frac{L_{1}}{4}\|x^{k}-x^{k-1}\|^{2},\] for all \(k=1,2,\ldots\). Thus, (41) is an immediate consequence of the previous inequality. In the next theorem we will use the following constant \[\Gamma:=2(f(x^{1})-f^{*})+L_{1}\|x^{2}-x^{1}\|^{2}. \tag{42}\] **Theorem 5.8**.: _The following statements hold:_ 1. \(\lim_{k\to+\infty}\omega_{k}=0\)_. Consequently, every limit point of the sequence_ \((x^{k})_{k\in\mathbb{N}}\) _generated by Algorithm_ 2 _is a stationary point to problem (_1_);_ 2. _for every_ \(N\in\mathbb{N}\)_, there holds_ \[\min\left\{|\omega_{k}|:\ k=1,\ldots,N\right\}\leq\max\left\{\Gamma,\sqrt{\bar{\alpha}\Gamma}\right\}\frac{1}{\sqrt{N}}.\] Proof.: To prove item (i), we first note that (41) is equivalent to \[\frac{1}{2}\min\left\{|\omega_{k}|,\frac{1}{\bar{\alpha}}|\omega_{k}|^{2}\right\}\leq\Big{(}f(x^{k})+\frac{L_{1}}{2}\|x^{k}-x^{k-1}\|^{2}\Big{)}-\Big{(}f(x^{k+1})+\frac{L_{1}}{2}\|x^{k+1}-x^{k}\|^{2}\Big{)}, \tag{43}\] for all \(k=1,2,\ldots\). It follows from Lemma 5.6 that \(\big{(}f(x^{k})+(L_{1}/2)\|x^{k}-x^{k-1}\|^{2}\big{)}_{k\in\mathbb{N}^{*}}\) is monotone and non-increasing. Thus, considering that it is also bounded from below by \(f^{*}\), we conclude that it converges. Hence, taking the limit in (43) we have \[\lim_{k\to+\infty}\min\left\{|\omega_{k}|,\frac{1}{\bar{\alpha}}|\omega_{k}|^{2}\right\}=0,\] which implies the first statement of item (i). To prove the second statement of item (i), take \(\bar{x}\) a limit point of the sequence \((x^{k})_{k\in\mathbb{N}}\) and a subsequence \((x^{k_{i}})_{i\in\mathbb{N}}\) such that \(\lim_{i\to+\infty}x^{k_{i}}=\bar{x}\).
Since \(\mathcal{C}\) is a compact set, the sequence \((u^{k})_{k\in\mathbb{N}^{*}}\) is bounded, in view of Proposition 2.7. Thus, without loss of generality we assume that \(\lim_{i\to+\infty}u^{k_{i}}=\bar{u}\), and Proposition 2.6 implies that \(\bar{u}\in\partial h(\bar{x})\). On the other hand, Lemma 5.6 together with (19) imply that \[\lim_{k\to+\infty}s_{k}=0. \tag{44}\] Let \((s_{k_{i}})_{i\in\mathbb{N}^{*}}\) be the corresponding subsequence of \((s_{k})_{k\in\mathbb{N}^{*}}\). Thus, we conclude from (17) that \[\lim_{i\to+\infty}D(x^{k_{i}},s_{k_{i}})=\nabla g(\bar{x}). \tag{45}\] Finally, using (20) we conclude that \[(D(x^{k_{i}},s_{k_{i}})-u^{k_{i}})^{T}(p-x^{k_{i}})\geq\omega_{k_{i}},\qquad\forall p\in\mathcal{C}.\] Therefore, taking the limit in the last inequality and using (44) together with (45) we obtain that \[(\nabla g(\bar{x})-\bar{u})^{T}(p-\bar{x})\geq 0,\qquad\bar{u}\in\partial h(\bar{x}),\qquad\forall\ p\in\mathcal{C},\] which implies that \(\bar{x}\) is a stationary point to problem (1). To prove item (ii), take \(N\in\mathbb{N}^{*}\). Thus, (43) implies that \[\sum_{k=1}^{N}\frac{1}{2}\min\left\{|\omega_{k}|,\frac{1}{\bar{\alpha}}|\omega_{k}|^{2}\right\}\leq\Big{(}f(x^{1})+\frac{L_{1}}{2}\|x^{2}-x^{1}\|^{2}\Big{)}-\Big{(}f(x^{N+1})+\frac{L_{1}}{2}\|x^{N}-x^{N-1}\|^{2}\Big{)}.\] Hence, we conclude that \[\min\left\{\frac{1}{2}\min\left\{|\omega_{k}|,\frac{1}{\bar{\alpha}}|\omega_{k}|^{2}\right\}:\ k=1,\ldots,N\right\}\leq\frac{1}{N}\left(f(x^{1})-f^{*}+\frac{L_{1}}{2}\|x^{2}-x^{1}\|^{2}\right).\] It follows from the last inequality that there exists \(\bar{k}\in\{1,\ldots,N\}\) such that \[|\omega_{\bar{k}}|\leq\frac{1}{N}\left(2(f(x^{1})-f^{*})+L_{1}\|x^{2}-x^{1}\|^{2}\right),\qquad|\omega_{\bar{k}}|\leq\sqrt{\frac{\bar{\alpha}}{N}\left(2(f(x^{1})-f^{*})+L_{1}\|x^{2}-x^{1}\|^{2}\right)},\] which, by using (42), implies item (ii). ## 6 Conclusions We studied the asymptotic convergence properties and iteration complexity of Algorithm 1. In particular, for weak-star-convex objective functions, the rate of convergence is proven to be \(\mathcal{O}(1/k)\) for both the function values and the duality gap. The study of Algorithm 2, on the other hand, is more delicate because of the use of finite differences; we were therefore only able to provide its asymptotic convergence analysis, together with a rate of convergence of \(\mathcal{O}(1/\sqrt{k})\) for the duality gap. Consequently, an extensive iteration-complexity study for this algorithm, including the case of weak-star-convex objective functions, is still missing. This leaves various research opportunities to improve the current results for Algorithm 2. ## Funding The first and third authors were supported by the Australian Research Council (ARC), Solving hard Chebyshev approximation problems through nonsmooth analysis (Discovery Project DP180100602). The second author was supported in part by CNPq - Brazil Grant 304666/2021-1. This work was done, in part, while the second author visited the first and third authors at Deakin University in November 2022. The second author thanks the host institution for funding the visit and for the pleasant scientific atmosphere it provided during his visit.
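As a complement to the theoretical study above, the following minimal Python sketch illustrates how Algorithm 1 can be implemented. It is only an illustration under simplifying assumptions (a box constraint set, and the DC objective of (14) with \(g(x)=\frac{1}{2}\|x\|^{2}\) and \(h(x)=\|x\|_{1}\)); names such as `frank_wolfe_dc` and `lo_oracle` are introduced here purely for the example.

```python
import numpy as np

def frank_wolfe_dc(grad_g, subgrad_h, f, lo_oracle, x0, L0=1.0,
                   max_iter=200, tol=1e-8):
    """Illustrative sketch of Algorithm 1 for f = g - h with adaptive step size."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(max_iter):
        u = subgrad_h(x)                        # Step 1: u^k in the subdifferential of h
        j = 0
        while 2 ** j * L < 2 * L0:              # smallest j with 2^j L_k >= 2 L_0
            j += 1
        c = grad_g(x) - u
        p = lo_oracle(c)                        # Step 2: linear minimization oracle over C
        omega = c @ (p - x)                     # omega(x_k) <= 0
        if abs(omega) <= tol:                   # Step 3: approximate stationarity
            break
        while True:                             # Steps 3-4: adaptive step-size search
            Lj = 2 ** j * L
            lam = min(1.0, abs(omega) / (Lj * np.dot(p - x, p - x)))
            if f(x + lam * (p - x)) <= f(x) - abs(omega) * lam \
                    + 0.5 * Lj * np.dot(p - x, p - x) * lam ** 2:
                break
            j += 1
        x = x + lam * (p - x)                   # Step 5: update the iterate
        L = 2 ** (j - 1) * L                    # next Lipschitz estimate L_{k+1}
    return x

# Toy instance related to (14): f(x) = 0.5*||x||^2 - ||x||_1 over the box [-2, 2]^2.
grad_g = lambda x: x                            # gradient of g(x) = 0.5*||x||^2
subgrad_h = lambda x: np.sign(x)                # one subgradient of h(x) = ||x||_1
f = lambda x: 0.5 * x @ x - np.abs(x).sum()
lo_oracle = lambda c: -2.0 * np.sign(c)         # argmin of c^T p over the box [-2, 2]^2
print(frank_wolfe_dc(grad_g, subgrad_h, f, lo_oracle, x0=[0.3, -0.7]))
```

The inner loop doubles the current Lipschitz estimate until the acceptance test of Step 4 holds, which is exactly the adaptive mechanism that avoids having to know \(L\) in advance.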
2302.14208
Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments
Learning to detect, characterize and accommodate novelties is a challenge that agents operating in open-world domains need to address to be able to guarantee satisfactory task performance. Certain novelties (e.g., changes in environment dynamics) can interfere with the performance or prevent agents from accomplishing task goals altogether. In this paper, we introduce general methods and architectural mechanisms for detecting and characterizing different types of novelties, and for building an appropriate adaptive model to accommodate them utilizing logical representations and reasoning methods. We demonstrate the effectiveness of the proposed methods in evaluations performed by a third party in the adversarial multi-agent board game Monopoly. The results show high novelty detection and accommodation rates across a variety of novelty types, including changes to the rules of the game, as well as changes to the agent's action capabilities.
Tung Thai, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Neeraj Varshney, Chitta Baral, Subbarao Kambhampati, Jivko Sinapov, Matthias Scheutz
2023-02-28T00:05:48Z
http://arxiv.org/abs/2302.14208v2
# Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments ###### Abstract. Learning to detect, characterize and accommodate novelties is a challenge that agents operating in open-world domains need to address to be able to guarantee satisfactory task performance. Certain novelties (e.g., changes in environment dynamics) can interfere with the performance or prevent agents from accomplishing task goals altogether. In this paper, we introduce general methods and architectural mechanisms for detecting and characterizing different types of novelties, and for building an appropriate adaptive model to accommodate them utilizing logical representations and reasoning methods. We demonstrate the effectiveness of the proposed methods in evaluations performed by a third party in the adversarial multi-agent board game Monopoly. The results show high novelty detection and accommodation rates across a variety of novelty types, including changes to the rules of the game, as well as changes to the agent's action capabilities. Open-world AI, Agent Architecture, Adaptive Multiagent Systems (AAMAS 2023). A. Ricci, W. Yowd, N. Agmon, R. An (eds.), May 29 - June 2, 2023. ## 1. Introduction: Open-World AI Many classical adversarial AI tasks, such as game playing, take place in "closed-world" domains where all aspects of the domain--the types of entities, their properties, their actions, and the overall domain dynamics--are fixed. They are typically known to the agents before they start their task performance, and they do not change during task execution. Examples of such domains are "perfect information games" such as Chess, Go, or Ms.Pac-man, where the rules of the game, the goals of the players, and the entire state of the game are always known by all agents (Srivastava et al., 2014; Srivastava et al., 2014; Srivastava et al., 2015). This characteristic simplifies the game AI behavior by limiting the number of novelties to instances of known types (e.g., a chess move with the bishop a player has not seen before), thus allowing the development of the game AI without needing to anticipate any unknown scenarios within the bounds of the system (e.g., a novel piece with novel rules being introduced). In contrast, agents operating in an "open-world" must be able to handle changes to entities and domain rules. Specifically, in the context of open-world games, the rules, the state, and the actions of other players might only be partially known or could change anytime. The agent thus must discover these changes while playing the game (Srivastava et al., 2014; Srivastava et al., 2015). Especially _interactive novelties_ where agents interact with each other and with the environment present a challenge to any agent departing from a _closed-world_ assumption (Srivastava et al., 2014; Srivastava et al., 2015; Srivastava et al., 2015). In open-world environments, the action's effects and interaction's effects can be changed during the task operation time. Therefore, making the wrong move or wrongfully interacting with other agents can cause the agent to fail the task. In an effort to tackle the challenges of interactive novelties in adversarial open worlds, we propose general methods and architectural mechanisms that allow AI agents to detect, characterize, and adapt to interactive novelties in adversarial games. We develop a general novelty-handling framework, as well as symbolic logical reasoning methods to detect, learn, and adapt to novelties in _open-world_ environments. 
Our main contributions include (1) an architectural framework to handle interactive novelties in an adversarial environment, and (2) new logical reasoning approaches to characterize novelties and accommodate them during planning (expanding the current state space, action space, and expected action effects). ## 2. Background and Related Work Recent applications of multi-agent environments such as multi-player games (Paktor et al., 2017), Poker (Poker, 2017), social robotic systems (Poker, 2017), and adversarial attack and defense (Zhu et al., 2017) involve adversarial elements and complex agent behaviors. Therefore, learning how to adapt to the opponents' strategies becomes an essential task for current AI architectures. Unlike collaborative AI, where all the agents work together to pursue a team goal, adversarial AI agents must learn other agents' behaviors to develop suitable strategies that maximize their own objectives. This paper uses the open-world Monopoly environment as the primary test bed. Monopoly contains several main characteristics of an adversarial environment, such as unknown opponents' behaviors, stochastic elements (e.g., dice rolls, community cards, and chance cards), and novelties in the game. These characteristics can be found in many real-world domains, such as stock market forecasting, self-driving vehicles, or cybersecurity. Current cognitive architecture systems such as probabilistic graphical models (Zhu et al., 2017; Poker, 2017) provide an excellent tool that combines graph theory and probability theory to enable efficient probabilistic reasoning and learning. The model is widely used in the AI community as one of the main tools to generate state-of-the-art results. These results show the capabilities of the model to handle some of the challenges in traditional cognitive architecture, such as perception, interaction, and adaptation. However, these approaches are not explicitly designed to operate outside a _closed-world_ environment. Even though these methods have shown excellent results in _closed-world_ environments, addressing open-world and interactive novelty remains a challenge. Over the past two decades, many research studies attempted to tackle the challenge of open-world AI. However, the challenge of integrating a general intelligence system capable of detecting and adapting to an open-world environment still remains unsolved (Zhu et al., 2017; Dwork et al., 2017; Poker, 2017; Poker, 2017). Several challenges of integrating general AI systems are pointed out in previous studies, such as the difficulty of integrating the requisite capabilities (e.g., detecting novelty and adapting to it), and the difficulty of measuring the agent's performance against human-like behavior (Dwork et al., 2017). Reinforcement learning (RL) methods have been proposed as a solution for open-world environments (Beng et al., 2016; Dwork et al., 2017; Poker, 2017; Poker, 2017) in recent years. These methods use past and present experience to learn a new representation of the world or attempt to construct a suitable control policy in dynamically-changing environments. However, RL and deep RL struggle to adapt to small environmental changes. Small pixel changes in Atari arcade games can cause the RL agent to break down and fail to complete the task, and adaptation to novelties may often take as long as training the agent from scratch (Dwork et al., 2017).
Finally, recent works in the explainable AI (XAI) literature have looked at answering contrastive queries (Poker, 2017), which could very well be about potential novelties in the world. However, applying such a line of work for detecting open-world novelties would require an agent (assumed to be a human in the loop in XAI) to formulate and initiate queries to the agent to elicit the presence of novelties. Similarly, XAI works (Poker, 2017) that initiate an explanatory dialogue depend on the human in the loop (instead of automated detection and characterization) to analyze and detect open-world novelties. Finally, there are works that learn terms in the user's vocabulary (Poker, 2017). The user can then use these terms to advise the agent on accommodating the novelty. Current approaches in cognitive AI systems such as the Cognitive-Affective State System (CASS) and the Sigma Cognitive Architecture have attempted to address the open-world AI challenge (Dwork et al., 2017; Poker, 2017; Poker, 2017). Both architectures have been constructed to solve the problem without updating their core components or characterizing the novelty. These approaches may improve the overall performance of the AI. However, neither architecture is able to apprehend specific changes in the environment and accommodate those changes. More developments are needed for these architectures to perform well in a realistic open-world environment, where part of the information can change, such as adversary mental models, transition functions, and agents' actions and interactions. ## 3. Preliminaries ### n-Person Non-Cooperative Stochastic Turn-Based Games We consider \(\mathcal{M}=\langle n,S,\{A_{i}\}_{i\in n},T,R,\gamma\rangle\) as the non-cooperative stochastic game environment consisting of a finite, non-empty state space \(\mathcal{S}\); \(n\) players, \(\{1,2,\cdots,n\}\); finite action sets \(\{A_{1},A_{2},A_{3},\cdots,A_{n}\}\), one for each player; a set of conditional transition probabilities between states \(T\), such that \(T(s,a_{1},a_{2},\cdots,a_{n},s^{\prime})=P(s^{\prime}|s,a_{1},\cdots,a_{n})\); and a reward function \(R\) such that \(R:\mathcal{S}\times A\rightarrow\mathbb{R}\), where \(A=A_{1}\times A_{2}\times A_{3}\times\cdots\times A_{n}\). An n-person stochastic game is turn-based if at each state there is exactly one player who determines the next state. In order to formulate the problem, we extend the action sets \(A_{i}\) for \(i\in\{1,2,\cdots,n\}\) to be state dependent. For each particular state \(s\), each player \(i\) has a restricted action set \(A_{ir}\), and there is at most one \(i\in\{1,2,\cdots,n\}\) such that \(|A_{ir}|>1\). At the beginning of the game, all players start at the same initial state \(s_{0}\in\mathcal{S}\). Each player \(i\) independently performs an action \(a_{1}^{i}\in A_{i}\). Given \(s_{0}\) and the selected actions \(a_{1}=\{a_{1}^{1},a_{1}^{2},\cdots,a_{1}^{n}\}\in A\), the next state \(s_{1}\) is derived based on \(s_{0}\) and \(a_{1}\), with probability \(P(s_{1}|s_{0},a_{1})\). Then, each player independently performs an action \(a_{2}^{i}\), and the next state \(s_{2}\) is derived based on \(s_{1}\) and \(a_{2}\), with probability \(P(s_{2}|s_{1},a_{2})\). The game continues in this fashion for an infinite number of steps, or until the goal is reached. Therefore, the game generates a random history \(h=\{s_{0},a_{1},s_{1},a_{2},...\}\in H=S\times A\times S\times A...\).
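For concreteness, a compact and purely illustrative way to encode this tuple in code is sketched below; all class, field, and function names are our own and are not taken from the Monopoly simulator or any evaluation framework:

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = int
Action = str
JointAction = Tuple[Action, ...]

@dataclass
class TurnBasedStochasticGame:
    """Illustrative container for the tuple <n, S, {A_i}, T, R, gamma>."""
    n: int                                                   # number of players
    states: List[State]                                      # finite state space S
    actions: Dict[int, List[Action]]                         # action set A_i per player
    T: Callable[[State, JointAction], Dict[State, float]]    # P(s' | s, a_1, ..., a_n)
    R: Callable[[State, JointAction], float]                 # reward function
    gamma: float = 0.95                                      # discount factor

    def rollout(self, s0: State, policies: List[Callable[[State], Action]],
                horizon: int = 10) -> list:
        """Generate a random history h = (s0, a1, s1, a2, ...)."""
        history, s = [s0], s0
        for _ in range(horizon):
            joint = tuple(policies[i](s) for i in range(self.n))   # one action per player
            dist = self.T(s, joint)                                # next-state distribution
            s = random.choices(list(dist), weights=list(dist.values()))[0]
            history += [joint, s]
        return history
```

In this sketch, a player's policy is simply a function mapping the current state (or, more generally, a partial history) to an action, which is exactly the notion of strategy formalized next.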
Based on a partial history \(h^{\prime}=\{s_{0},a_{1},s_{1},a_{2},...,s_{k}\}\), we can derive a conditional distribution, the so-called strategy \(\pi_{i}(h^{\prime})\in P(A_{i})\), where \(P(A_{i})\) is the set of probability measures on \(A_{i}\). A strategy set \(\pi=\{\pi_{1},\pi_{2},...,\pi_{n}\}\), consisting of a strategy \(\pi_{i}\) for each player \(i\), is used to determine the next action \(a_{k+1}^{i}\). Finally, the reward function \(R\) is specified based on the transition of the game, and \(\gamma\in(0,1]\) is the discount factor which determines the importance of immediate and long-term future rewards.

### Interactive Novelty

In general, novelty refers to a change in the environment where the agent can neither apprehend the change from its own knowledge base nor from its past experience. In this paper, we want to address the challenge of detecting and accommodating interactive novelty. More specifically, interactive novelty refers to changes in agents' actions, interactions, and relations.

* **Novelty Level 1 [Action]**: New classes or attributes of external agent behavior.
* **Novelty Level 2 [Interaction]**: New classes or attributes of dynamic, local properties of behaviors impacting multiple entities.
* **Novelty Level 3 [Relation]**: New classes or attributes of static, local properties of the relationship between multiple entities.

We denote \(\mathcal{C}=\{C_{1},C_{2},\cdots,C_{n}\}\subseteq A\) as the interaction set of the agent. The set represents the agent's capability to interact with other agents, or with the environment. The relation set \(\mathcal{L}=\{L_{1},L_{2},\cdots,L_{n}\}\subseteq H\) represents the relationship of the agent with other agents, or of the agent with the environment, such that the relationship is shown as a part of the history, or action sequence. Each action \(a_{i}\) in the action set \(A\) is defined by a preconditions set \(\delta_{i}(a)\) and an effects set \(\beta_{i}(a)\). A preconditions set \(\delta_{i}(a)\) of an action \(a_{i}\) includes all the conditions that need to be satisfied in order to execute the action. Meanwhile, the effects set \(\beta_{i}\) of an action \(a_{i}\) indicates the expected results after a successful execution of action \(a_{i}\). The set of interactive novelties \(\mathcal{N}\) consists of all the changes that can occur in the action set, interaction set, and relation set. In this scenario, action novelty refers to changes in the action space, action preconditions, or action effects. We denote \(A^{\prime}=\{A^{\prime}_{1},A^{\prime}_{2},\cdots,A^{\prime}_{n}\}\) as the new action set, which contains all actions unknown to the agent, such that \(A^{\prime}\cap A=\emptyset\) and \(A^{\prime}\notin\mathcal{K}\mathcal{B}\), where \(\mathcal{K}\mathcal{B}\) is the agent's knowledge base. We assume that the preconditions \(\delta^{\prime}\) and effects \(\beta^{\prime}\) of the new action set \(A^{\prime}\) are completely unknown to the agent, and both must be discovered through the agent's interactions. Similarly, we can present the new interaction set as \(\mathcal{C}^{\prime}\) and the new relation set as \(\mathcal{L}^{\prime}\), and then formulate interaction novelty and relation novelty accordingly.

### Problem Formulation: Interactive Novelty Detection and Adaptation

The integrated architecture allows us to map all the essential information about the environment of _Section 3.1_ to the knowledge base \(\mathcal{K}\mathcal{B}\). Based on this information, we can construct the strategy \(\pi\) using the truncated-rollout MCTS solver.
However, because interactive novelties may occur throughout the course of the game, the plan must be adjusted in order to accommodate new actions, interactions, or relations. As described in _Section 3.1_, the pre-novelty environment is represented as a non-cooperative stochastic turn-based game: \[\mathcal{M}=\langle n,S,\{A_{i}\}_{i\in n},T,R,\gamma\rangle\] In order to detect and accommodate interactive novelties, we define a detection function \(d(s,a,s^{\prime})\) to determine if there is any unexpected change in the environment after the agent selects an action \(a\) in state \(s\) and observes the next state \(s^{\prime}\), or if the agent performed a new action. In addition, an identification function \(\iota(s,a,s^{\prime})\) characterizes the cause of the change based on logical reasoning. The purpose of these functions is to represent the new environment after novelty (post-novelty) \(\mathcal{M}^{\prime}\), such that \[\mathcal{M}^{\prime}=\langle n,S^{\prime},\{A^{\prime}_{i}\}_{i\in n},T^{ \prime},R^{\prime},\gamma\rangle\] where \(S^{\prime}\) is the new state space post-novelty. The set post-novelty \(\{A^{\prime}_{i}\}_{i\in n}\) is the new finite action set with respect to each agent \(\alpha\) in the environment, \(T^{\prime}\) is the new conditional transition function, and \(R^{\prime}\) is the new reward function post-novelty. From the new model of the post-novelty world \(\mathcal{M}^{\prime}\), we modify the current strategy set \(\pi\) in order to adapt to the changes in the environment, as described in the next section. ## 4. Adversarial Domain: Open-World Monopoly ### Environment Implementation Monopoly is a multi-player adversarial board game where all players start at the same position. The game can support up to 4 players, described in Figure 1. All players roll dice to move across the board. The ultimate goal of the game is to be the last player standing after bankrupting other players. This objective can be achieved by buying properties, and railroads, monopolizing color sets, and developing houses on properties. If one player lands on a property owned by another player, they get charged rent or a fee. After monopolizing color sets and developing houses and hotels, players can charge higher fees when other players land on their properties. The game includes different surprise factors such as chance cards, community cards, jail, auction, and trading ability between agents. These elements can completely change the game. Hence, any action in the game needs to be adapted to dice rolls, community cards, chance cards, and the decisions of other players. These game characteristics make it more challenging for integrated planning and execution. In the game simulator, novelties can be injected on top of the standard game to study how the agent detects and accommodates these changes (Hardt et al., 2017). The third-party team that ran the evaluation developed the Open-World Monopoly environment. Unlike traditional Monopoly, where we can fully observe all the states and actions of other agents, the Open-world Monopoly does not allow us to monitor all the actions and interactions on our turn (Shen et al., 2018). So, the environment is partially observable. Figure 1. Classic Monopoly Board ### Interactive Novelties in Monopoly We implement three different categories of interactive novelty discussed in _Novelty Characterization_ section into a classic Monopoly game. 
Some theoretical examples of novelty are described below:

* **Action Novelty**: This class of novelty can be illustrated through a stay-in-jail action. For this novelty, the player could stay in jail as long as they want. However, the player must pay a certain fee to the bank each time they receive rent (when the player voluntarily decides to stay in jail).
* **Interaction Novelty**: We illustrate the interaction novelty through a loan interaction between two agents. For example, a player could send a loan request to another player and pay the loan back over a specific amount of time that both parties agree on.
* **Relation Novelty**: We illustrate the relation novelty through a relation property, where we enforce a relation of homogeneity between properties in a specific monopolized color group (one color group). The player must homogeneously improve a monopolized set of properties in a given move. For example, imagine the player has 3 orange properties (a monopolized set). In the default game, the player could set up a house on the first property and leave the second one unimproved. For this novelty, if the player improves the first property in a move, they must also improve the second and third so that the properties are 'homogeneously' improved at the end of the move. Failure to do this will lead to the improvement being revoked at the end of the move.

## 5. The Architectural Framework

The architecture includes four main components: the environment interface, the novelty handling component, a knowledge base, and a planning agent, as shown in Figure 2. The Novelty Handler component was integrated into the "Agent Development Environment" (ADE) (Beskin et al., 2015), which allows for the development of different integrated architectures. The Knowledge Base of the agent is composed of the **Belief Stack**, **Action Stack**, **Interaction Stack**, and **Relation Stack**. The Planning Agent component develops and operates the plan based on the information in the knowledge base and the goal. The Monopoly Interface connects to the Monopoly API so that novelties can be injected into the environment. These novelties are detected and characterized by the novelty handler component. The component is developed using Answer Set Programming (ASP), a declarative programming paradigm oriented towards challenging search problems (Bellek et al., 2016; Bellek et al., 2017; Bellek et al., 2018). After the novelties are determined, the novelty handler updates the new actions, effects, or states in the knowledge base. When the agent receives the updated information, the planning agent then reconstructs the plan according to the new knowledge base.

### Novelty Detection

We record the information of the game as provided by the game environment and compare it with our "expectation" state of the game board. This "expectation" state is derived from the agent's knowledge base of the game, including expected states, actions, action preconditions, and action effects. Then, the game environment provides us with the actual game board states and actions that have occurred between the current time step and the previous time our agent performed an action (e.g., after our agent lands on a property and buys it, all other agents get to act in order until it is our agent's turn again). When we notice a discrepancy between our expected state and the actual state, we surmise that something must have changed within the game, i.e., a novelty may have been introduced, which makes some aspects of our domain representation incorrect.
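The expectation-comparison step just described can be summarized by a small detection routine; the fragment below is a simplified Python sketch under our own (hypothetical) naming, meant only to illustrate the detection function \(d(s,a,s^{\prime})\) rather than the actual agent code. The full pipeline, including the characterization cases, is given in Algorithm 1.

```
def detect_novelty(expected_state, observed_state, observed_action, known_actions):
    """Sketch of the detection function d(s, a, s') described above.

    Returns a pair (detected, tentative_type); the tentative type is later
    refined by the characterization module.
    """
    # An agent performed an action that is not in the known action space A.
    if observed_action is not None and observed_action not in known_actions:
        return True, "action"
    # No unknown action was observed, but the board evolved differently from
    # what the knowledge base predicts (e.g. a changed fee, effect, or relation).
    if expected_state != observed_state:
        return True, "state_mismatch"
    return False, None
```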
Such unpredicted changes require the agent to update its knowledge base accordingly (e.g., a new action is added to the action space of Monopoly). An example of the novelty-handling component is shown in _Algorithm 1_. The evaluation is run in a tournament setup (many games in a tournament), discussed in section 6. Therefore, when the agent detects a novelty from the current game, this novelty information will be used to adapt to the next game. ``` 1:Initialization: 2:State space \(\mathcal{S}\), Action space \(\mathcal{A}\), Expected State \(\mathcal{S}\)' 3:\(d(s_{t},a_{t},s^{\prime}_{t})=False\) 4:\((s_{t},a_{t},s^{\prime}_{t})=None\) 5:\(t=0\)\(\triangleright\) Time step 6:while Game does not end do 7:if\(a_{t}=None\)then\(\triangleright\) No action was performed 8:if\(S_{t+1}\neq S^{\prime}_{t+1}\)then 9:\(d(s_{t},a_{t},s^{\prime}_{t})=True\)\(\triangleright\) Novelty Detected 10:\(t(s_{t},a_{t},s^{\prime}_{t})=Relation\)\(\triangleright\) Novelty Characterization 11:else 12:\(d(s_{t},a_{t},s^{\prime}_{t})=False\) 13:\(t(s_{t},a_{t},s^{\prime}_{t})=None\) 14:endif 15:else 16:if\(a_{t}\not\in\mathcal{A}\)then\(\triangleright\) Unknown Action 17:\(d(s_{t},a_{t},s^{\prime}_{t})=True\)\(\triangleright\) Novelty Detected 18:\(t(s_{t},a_{t},s^{\prime}_{t})=Action\)\(\triangleright\) Novelty Characterization 19:\(\triangleright\) Case 1: All precondition \(\delta(a_{t+1})\) for action \(a_{t+1}\) are satisfied but action \(a_{t+1}\) is not executable 20:elseif\(\delta(a_{t+1})==True\wedge a_{t+1}==False\)then 21:\(d(s_{t},a_{t},s^{\prime}_{t})=True\)\(\triangleright\) Novelty Detected 22:\(t(s_{t},a_{t},s^{\prime}_{t})=Action\)\(\triangleright\) Novelty Characterization 23:\(\triangleright\) Case 2: At least one precondition for action \(A_{t+1}\) is not satisfied but action \(a_{t+1}\) is executable 24:elseif\(\delta(a_{t+1})==False\wedge a_{t+1}==True\)then 25:\(d(s_{t},a_{t},s^{\prime}_{t})=True\)\(\triangleright\) Novelty Detected 26:\(t(s_{t},a_{t},s^{\prime}_{t})=Action\)\(\triangleright\) Novelty Characterization 27:else 28:then...\(\triangleright\) More Cases of Interactive Novelty 29:else 30:\(d(s_{t},a_{t},s^{\prime}_{t})=False\) 31:\(t(s_{t},a_{t},s^{\prime}_{t})=None\) 32:endif 33:endif 34:t = t + 1 35:endwhile 36:return\(d(s_{t},a_{t},s^{\prime}_{t})\), \(t(s_{t},a_{t},s^{\prime}_{t})\) ``` **Algorithm 1** Novelty Detection Pipeline ### Novelty Characterization Next, the agent uses a novelty identification module to characterize the novelty. This module has several sub-modules (which can be run in parallel), each focused on determining a specific novelty type. Each novelty identification sub-module uses the same ASP code (except for two changes) that is used for hypothetical reasoning about the effect of an action. The first change is that, a particular parameter, which is the focus of that specific sub-module, which was originally a fact, is now replaced by "choice" rules of ASP that enumerate different values that the parameter can take. The second change is that constraints are added to remove possible answer sets where the predicted game board state does not match the observed game board state. The resulting program's answer sets give us the parameter values which reconcile the predicted game board state and the observed game board state. If there is only one answer set and thus a unique parameter value, then if this value is different from the value we had earlier, we have identified a novelty. 
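Outside ASP, the parameter search performed by each identification sub-module can be mimicked by a plain enumeration: replace the single parameter under scrutiny by a set of candidate values and keep only those values whose predicted board state matches the observation. The following Python sketch (our own illustration with a hypothetical `predict_state` function, not the code used in the evaluation) captures this idea; the concrete ASP encoding is given next.

```
def characterize_parameter(candidates, predict_state, observed_state):
    """Return the candidate parameter values that reconcile prediction and observation.

    `predict_state(value)` plays the role of the hypothetical-reasoning ASP
    program with the fact for the parameter replaced by `value`; the filter
    plays the role of the constraint that discards non-matching answer sets.
    """
    consistent = [value for value in candidates if predict_state(value) == observed_state]
    # A single surviving value that differs from the stored one identifies the novelty.
    return consistent
```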
Figure 2. The overall architecture of the novelty handling framework in an adversarial environment

Now we can update our ASP code that was used for hypothetical reasoning by simply replacing the earlier value of the parameter with the new value. Below we first give a glimpse of how ASP can be used for reasoning about the next state and how that code can be minimally modified to infer a novelty. To reason about the next state, the ASP code will first define the game parameters through facts such as the following:

```
dice_value(1..6).
player(player1;player2).
cash(1..1000).
asset("B80_Railroad").
penalty(50).
```

Then rules of the following form are used to define actions and fluents.

```
action(sell_property(P,X)) :- player(P), asset(X).
fluent(asset_owned(P,V)) :- player(P), asset(V).
```

Properties of actions, such as their pre-conditions, and their effects are defined using rules of the following kind:

```
% Executability of selling assets
:- occurs(sell_property(P,V),T), player(P),
   asset(V), time(T), not holds(asset_owned(P,V),T).

% Effect of selling assets
not_holds(asset_owned(P,V),T+1) :-
    holds(asset_owned(P,V),T),
    occurs(sell_property(P,V),T),
    player(P), asset(V), time(T).

not_holds(asset_mortgaged(P,V),T+1) :-
    holds(asset_owned(P,V),T),
    occurs(sell_property(P,V),T),
    player(P), asset(V), time(T).

holds(current_cash(P,X+Y),T+1) :-
    holds(current_cash(P,X),T),
    occurs(sell_property(P,V),T),
    not holds(asset_mortgaged(P,V),T),
    asset_price(V,Y),
    player(P), asset(V), time(T).

holds(current_cash(P,X+Y),T+1) :-
    holds(current_cash(P,X),T),
    occurs(sell_property(P,V),T),
    holds(asset_mortgaged(P,V),T),
    asset_m_price(V,Y),
    player(P), asset(V), time(T).

not_holds(current_cash(P,X),T+1) :-
    holds(current_cash(P,X),T),
    occurs(sell_property(P,V),T),
    holds(asset_mortgaged(P,V),T),
    asset_m_price(V,Y),
    player(P), asset(V), time(T).

% Executability of paying jail fine
:- occurs(pay_jail_fine(P), T), player(P), time(T),
   not holds(in_jail(P), T).
:- occurs(pay_jail_fine(P), T), player(P), time(T),
   not holds(current_cash(P, _), T).
:- occurs(pay_jail_fine(P), T), player(P), time(T),
   holds(current_cash(P,X),T), X < 50.

% Effect of paying jail fine
not_holds(in_jail(P), T+1) :-
    holds(in_jail(P), T), occurs(pay_jail_fine(P), T),
    player(P), time(T).
not_holds(current_cash(P, X), T+1) :-
    holds(current_cash(P,X),T), holds(in_jail(P), T),
    occurs(pay_jail_fine(P), T), player(P), time(T).
```

The inertia rules are expressed as follows:

```
holds(F,T+1) :- fluent(F), holds(F,T),
                not not_holds(F,T+1), time(T).
not_holds(F,T+1) :- fluent(F), not_holds(F,T),
                    not holds(F,T+1), time(T).
```

The initial state is defined using holds facts with respect to time step 0, such as:

```
holds(in_jail(player1), 0).
holds(current_cash(player1,500),0).
```

An action occurrence at time step 0 is then defined as a fact in the following form.

```
occurs(pay_jail_fine(player1),0).
```

Now when a complete ASP program with rules and facts of the above kind is run, we get an answer set from which we can determine the state of the world at time step 1. Suppose that the answer set has the facts:

```
holds(in_jail(player1), 0).
occurs(pay_jail_fine(player1),0).
holds(current_cash(player1,500),0).
holds(current_cash(player1,450),1).
```

while our next observation gives us:

```
obs(current_cash(player1,477),1).
```

Our prediction that player1's current_cash at time step 1 is 450 differs from our observation that it is 477. This discrepancy suggests there is a novelty.
This can be determined by the following two simple rules.

```
discrepancy(F,T) :- fluent(F), time(T), holds(F,T), not obs(F,T).
discrepancy(F,T) :- fluent(F), time(T), not holds(F,T), obs(F,T).
```

While the above could have been implemented in any language, including the simulator's language (Python, in which we also implemented it), having it in ASP makes it easier for us to take the next step, which is to find out what the novelty is. In ASP, we have to modify the above ASP code by adding the following and removing "penalty(50)" (referring to the jail fine in the Monopoly game) from the original code.

```
oneto500(1..500).
1 { penalty(X) : oneto500(X) } 1.   % choice rule
:- obs(current_cash(P,X),1), holds(current_cash(P,Y),1), X != Y, player(P).
```

In the above, the first fact and the choice rule define the range of penalty values that we are exploring. If we had just those two rules, we would get multiple answer sets, with a penalty ranging from 1 to 500. The constraint (the last ASP rule) then eliminates all the answer sets where the observation about current_cash does not match the corresponding holds atom. In the answer set that remains, we get the penalty value that would make the observation match the prediction, thus allowing us to figure out the novelty with respect to the penalty. In this particular case, the program will have the answer set with "penalty(23)", thus characterizing the novelty that the penalty is now 23.

### Novelty Accommodation

Since novelties in the state (features, dynamics, actions) mean the agent would have to replan often, and would have to do so based on the most updated information, we were interested in developing an online planning algorithm to determine the best action. However, with environments that are both _long-horizon_ and _stochastic_, using online planning approaches like Monte-Carlo tree search quickly becomes intractable. To address this problem, we formulate a truncated-rollout-based algorithm that uses updated domain dynamics (learned from detected novelties) for a few steps of the rollout and then uses a state evaluation function to approximate the return for the rest of that rollout. In our evaluation function, we use both domain-specific components and a more general heuristic to approximate the return from the state after the truncated rollout. Furthermore, to ensure the agent adapts to the detected novelties, we made both the environment simulator used for rollouts and the evaluation function sufficiently flexible and conditioned on the environment attributes; we only used a few tuned constants. Thus, whenever a novelty was detected, we updated the relevant attributes in our simulator and evaluation function before running our algorithm to decide our actions. Using this approach, we are able to incorporate novel information into our decision-making process and adapt efficiently. An example of the whole process is shown in _Algorithm 2_. We will now provide a detailed description of the rollout algorithm and the evaluation function. In our algorithm, when choosing the next best action in a given state, we execute multiple rollouts for each possible action and compute the mean return value for each action. Each rollout is terminated either when some terminal state is reached or when some \(k\) number of actions have been taken. The rollouts use the updated domain dynamics of the environment. Due to the potentially high branching factor, we keep these rollouts short (which also limits the effects of errors in our characterization of any novelties); a short sketch of this selection rule is given below.
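The following Python fragment is a minimal sketch of that truncated-rollout selection rule (our own illustration: names such as `simulator.step`, `simulator.legal_actions` and `evaluate` are assumed interfaces, not the actual simulator API; the evaluation function itself, described next, is abstracted as a stub).

```
import random

def choose_action(state, legal_actions, simulator, evaluate, n_rollouts=20, depth_k=5):
    """Pick the action with the highest mean return over short rollouts.

    `simulator.step(state, action)` returns (next_state, reward, done) and is
    assumed to already reflect any novelties detected so far; `evaluate(state)`
    approximates the return beyond the truncation depth k.
    """
    def rollout(start_state, first_action):
        total, state, action = 0.0, start_state, first_action
        for _ in range(depth_k):
            state, reward, done = simulator.step(state, action)
            total += reward
            if done:                                   # terminal state reached
                return total
            action = random.choice(simulator.legal_actions(state))  # default rollout policy
        return total + evaluate(state)                 # truncated: approximate the rest

    mean_return = {
        a: sum(rollout(state, a) for _ in range(n_rollouts)) / n_rollouts
        for a in legal_actions
    }
    return max(mean_return, key=mean_return.get)
```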
However, to infer some approximation of the long-term value of an action, we use an evaluation function. Our evaluation function consists of two components: one that is domain-specific and the other that is heuristic-based and can be applied to any domain in general. The heuristic component of the evaluation function involves relaxing the domain and performing another rollout on the relaxed domain for some depth \(l\). Some examples of relaxations include limiting adversarial agents' actions and determination of domain dynamics. For instance, in the case of the Monopoly domain, we prevent the agent from taking any buying or trading actions. On the other hand, the domain-specific component of the evaluation function computes the value of the state as the sum of two terms: \(\mathcal{M}_{\text{assets}}\) and \(\mathcal{M}_{\text{monopoly}}\) where \(\mathcal{M}_{\text{assets}}\) is the value of all the assets the agent owns, whereas \(\mathcal{M}_{\text{monopoly}}\) computes the maximum rent that the agent would get if it gains a Monopoly over any color, scaled down by how far the agent is to get the monopoly. ## 6. Evaluation & Results ### External Evaluation In an effort to maintain the integrity of the evaluation, all the information about the novelty was hidden from our team, and all the information about our architecture or methodologies was also hidden from the evaluation team. The external evaluations were performed by a third-party team that originally created the Open-world Monopoly domain. Our agent was evaluated on the three interactive novelties: _action, interaction_, and _relation_. For each type of interactive novelties, more than 20 different novelties were introduced during the evaluation process. Each novelty level also has three difficulty levels (shown in _Table 2_). The difficulty levels expressed the difficulty of detecting and using the novelty. For instance, if the novelty involved an action, an _easy_ difficulty means the action was available for the agent to perform without any preconditions. A _medium_ difficulty means the actions can be detected by observing other agents. A _hard_ difficulty means the agent can only act under specific circumstances, and it may require the agent to explore the environment to learn the action. There were more than 60 novelties in which the agent was evaluated in total. At least 600 tournaments (100 games per tournament) were run to measure our agent's performance. Tournaments were started with a traditional Monopoly game (no novelty). At a certain point throughout the tournament (non-specific, e.g., on the \(5^{th}\) game), a novelty was introduced. To avoid ambiguity between novelties, only one specific novelty at a time was injected into a tournament. In our internal evaluation, ASP performed excellently in novelty detection and characterization. However, due to the characteristics of ASP, the novelty-handling component run time can be very high. Moreover, due to the nature of the game Monopoly (the game can go on indefinitely), and limited computational resources, we decided to use Python to overload the requirements of our solver and leverage the access to the simulator instead of using ASP to the first model and subsequently detect for novelties, to optimize the run time for the external evaluation. Our agent was evaluated based on four different metrics. M1 is the percent of correctly detected trials (CDT). 
In this case, the percent of CDT is the percent of trails that have at least one True Positive and no False Positives (FP). M2 is the percent FP (the agent reports novelty when no novelty exists). M3 is the novelty reaction performance (NRP) before the novelty was introduced (pre-novelty). M4 is the novelty reaction performance (NRP) after the novelty was introduced (post-novelty). To measure the NRP, our agent was evaluated against a heuristic agent which embedded some of the most common strategies in Monopoly (e.g., target some specific color, never buy some properties, always reserve money, etc.). Finally, we compute the novelty reaction performance (NRP) of the agent based on the following formula: \[NRP=\frac{\mathcal{W}_{agent}}{\mathcal{W}_{baseline}}\] Where, \(\mathcal{W}_{agent}\) is the win rate of our agent (pre-novelty for M3, and post-novelty for M4). \(\mathcal{W}_{baseline}\) is 65%. The results suggest that our cognitive architecture provides outstanding solutions for the game regardless of the complexity of the environment and differing levels of novelty. Furthermore, our agent achieved a perfect precision score (100% in percent of CDT and 0% of FP) at all difficulty levels of action and interaction novelties. The agent achieved a nearly perfect precision score in relation novelties. However, the agent missed 20% of the novelties in the hard level of difficulty. These failures to detect certain novelties happened due to the nature of the relation novelty category: we can only detect these novelties types when a specific action is executed. Due to the stochasticity of the Monopoly game, the agent would sometimes not perform a specific action throughout the entire evaluation. To identify a relation novelty, the agent may need to perform a particular action at a specific state to reveal the novelty. For example, in the relation property novelty scenario (discussed in section 4.2), this novelty only occurs when we monopolize a green color group (one of the most challenging color groups to monopolize due to the cost of each property). The agent may then fail to detect the novelty because the agent would never monopolize the green color group throughout testing. The M3 and M4 NRP scores in all novelty levels show that our agent outperformed the baseline agent before and after when novelties were introduced. The scores in _Table 1_ indicate that our cognitive architecture and accommodation strategies allow the planning agent to handle interactive novelties perfectly. ### Internal Evaluation #### 6.2.1. Agent Performance Without Novelty Accommodation In order to understand the effectiveness of the novelty handler components (detection, characterization, and accommodation), we conduct experiments on all the novelties and record the win rate of the MCTS agent with and without the support from the novelty handler across all the novelties with a random number of games for each game tournament. Table 2 shows the overall performance of the MCTS agent with the novelty handler against the MCTS agent without the novelty handler. The result suggests that the MCTS agent with the support of the novelty handler outperforms the vanilla MCTS agent without novelty handling. There is a significant 10% win rate difference between them. Furthermore, the results also indicate that the novelty handler components play an essential role in the agent's performance in an adversarial open-world domain. 
Although some novelties can have an essential effect on the game, and some novelties may not affect the game (nuisance novelties), the novelty handler mechanism still shows its efficiency in enhancing the agent's performance. For example, restricted color novelty can significantly affect the agent's strategies for buying and trading properties. On the other hand, other novelties, such as selling houses or property rates, can have minimal effects on the game. #### 6.2.2. Agent Performance Against Existing Methods In order to learn the performance level of our agent, we compare our agent against other Monopoly-playing agents. For this experiment, we evaluate our agent's performance against the hybrid deep reinforcement learning agent (POP) and double-deep Q-learning (DDQN) algorithms. The authors compare the standard reinforcement approach to their hybrid approach, and the experimental results show that the hybrid agents outperform traditional RL agents. Significantly, the hybrid POP agent has a win rate of 91% against a fixed-policy agent that is developed base on the Monopoly world champion's strategy. In our evaluation, we ran two instances of our agents against one of the fixed-policy agents and the hybrid deep reinforcement learning agent in trials. The results are shown in table 3. The results show \begin{table} \begin{tabular}{|p{42.7pt}||p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline \multicolumn{4}{|c|}{Novelty Level 1: Action} \\ \hline Metrics & Easy & Medium & Hard \\ \hline & Mean & Mean & Mean \\ \hline M1 & 100\% & 100\% & 100\% \\ M2 & 0\% & 0\% & 0\% \\ M3 & 141.54\% & 129.23\% & 136.92\% \\ M4 & 151.79\% & 135.38\% & 143.08\% \\ \hline \multicolumn{4}{|c|}{Novelty Level 2: Interaction} \\ \hline M1 & 100\% & 100\% & 100\% \\ M2 & 0\% & 0\% & 0\% \\ M3 & 124.31\% & 142.77\% & 121.85\% \\ M4 & 130.46\% & 134.15\% & 113.23\% \\ \hline \multicolumn{4}{|c|}{Novelty Level 3: Relation} \\ \hline M1 & 100\% & 100\% & 80\% \\ M2 & 0\% & 0\% & 0\% \\ M3 & 147.08\% & 132.31\% & 150.15\% \\ M4 & 146.46\% & 121.85\% & 145.23\% \\ \hline \end{tabular} \end{table} Table 1. Evaluation \begin{table} \begin{tabular}{|p{42.7pt}||p{42.7pt}|p{42.7pt}|} \hline Novelty & Win rate of adaptive MCTS agent & Win rate of non-adaptive MCTS agent \\ \hline & Mean \(\pm\) SD & Mean \(\pm\) SD \\ \hline Action & 83.22\% \(\pm\) 5.33\% & 76.38\% \(\pm\) 6.313\% \\ \hline Relation & 81.86\% \(\pm\) 7.26\% & 68.45\% \(\pm\) 5.164\% \\ \hline Interaction & 89.6\% \(\pm\) 9.01\% & 72.5\% \(\pm\) 7.692\% \\ \hline \end{tabular} \end{table} Table 2. Evaluation Results Of Agent’s Performance With and Without Novelty Handler our agent's dominant performance against the hybrid reinforcement learning approach. Our agent has a more than 85% win rate in the tournament compared to 12% of the hybrid learning agent. ## 7. Conclusion Our work presented a new agent architecture for interactive novelty handling in an adversarial environment that can detect, characterize, and accommodate novelties. Our architecture is modeled based on the thought process of human cognition when we deal with environmental changes. First, we use ASP to detect and characterize interactive novelties (action, interaction, and relation). Then, we update the detected novelties to our agent's knowledge base. Finally, we utilize the truncated-rollout MCTS agent to accommodate the novelty. The external evaluation results support the cognitive architecture's effectiveness in handling different levels of interactive novelty. 
However, the architecture has potential limitations in novelty characterization and learning agent behavior. One limitation of this architecture is the capability to learn the opponents' behaviors. Our cognitive architecture does not explicitly model the opponent's strategy to detect the change in other agents' behaviors and adapt accordingly. To address this limitation, we propose two additional models that can be a part of the novelty handler component. The first approach is to model the opponents' behavior using probabilistic reasoning (Brandt et al., 2004; Goyal et al., 2017; Goyal et al., 2017). In these models, we can learn the action probability distribution based on the game state, which helps us detect any change in opponents' behaviors. Secondly, we would like to model the opponents' behavior using reinforcement learning. Recent applications of reinforcement learning show promising results in learning opponents' behavior without knowing opponent's observations and actions during both training and execution processes (Goyal et al., 2017; Goyal et al., 2017). Ultimately, we believe improving the model's capability of predicting another agent's behaviors is the biggest area for growth. ###### Acknowledgements. This work was funded in part by DARPA grant W911NF-20-2-0006. We would like to thank Mayank Kejriwal, Shilpa Thomas, Min-Hsueh Chiu and other members of the University of Southern California team for the Monopoly simulator and agent evaluation.
2309.09895
Maximum principles in unbounded Riemannian domains
The necessity of a Maximum Principle arises naturally when one is interested in the study of qualitative properties of solutions to partial differential equations. In general, to ensure the validity of these kind of principles one has to consider some additional assumptions on the ambient manifold or on the differential operator. The present work aims to address, using both of these approaches, the problem of proving Maximum Principles for second order, elliptic operators acting on unbounded Riemannian domains under Dirichlet boundary conditions. Hence there is a natural division of this article in two distinct and standalone sections.
Andrea Bisterzo
2023-09-18T15:59:09Z
http://arxiv.org/abs/2309.09895v3
# Maximum principles in unbounded Riemannian domains ###### Abstract. The necessity of a Maximum Principle arises naturally when one is interested in the study of qualitative properties of solutions to partial differential equations. In general, to ensure the validity of these kinds of principles one has to consider some additional assumptions on the ambient manifold or on the differential operator. The present work aims to address, using both of these approaches, the problem of proving Maximum Principles for second order, elliptic operators acting on unbounded Riemannian domains under Dirichlet boundary conditions. Hence there is a natural division of this article in two distinct and standalone sections. ## 1. Introduction In this work we address the validity of the maximum principle for bounded solutions to the problem \[\left\{\begin{array}{ll}\Delta u\geq cu&\mbox{in }\Omega\\ u\leq 0&\mbox{on }\partial\Omega\end{array}\right.\] where \(\Omega\) is an unbounded domain inside the Riemannian manifold \((M,g)\). We shall present two kinds of results where the common root is the assumption that \(\Omega\) is "small" from the viewpoint of the operator. The first result requires that the underlying manifold has a special structure (warped product cylinder) and the smallness of the domain is encoded in its (Dirichlet) parabolicity. The second result has a more abstract flavour as it holds in any Riemannian manifold provided that the domain is small in a spectral sense. In the Euclidean setting a classical Maximum Principle for unbounded domains contained in the complement of a cone states as follows (for a reference, see [3]) **Theorem 1.1**.: _Consider a possibly unbounded domain \(\Omega\subset\mathbb{R}^{n}\), \(n\geq 2\), whose closure is contained in the complement of a non-degenerate solid cone \(\mathcal{C}\subset\mathbb{R}^{n}\). If \(u\in C^{0}(\overline{\Omega})\cap W^{1,2}_{loc}(\Omega)\) is a distributional solution to_ \[\left\{\begin{array}{ll}-\Delta u+c\ u\leq 0&\mbox{in }\Omega\\ u\leq 0&\mbox{on }\partial\Omega\\ \sup_{\Omega}u<+\infty,\end{array}\right.\] _where \(0\leq c\in C^{0}(\Omega)\), then_ \[u\leq 0\ \ \ \ \text{in }\Omega.\] The proof is essentially based on the fact that the Euclidean space is a model manifold, that is, the manifold obtained by quotienting the warped product \(([0,+\infty)\times\mathbb{S}^{n-1},\mathrm{d}r\otimes\mathrm{d}r+r^{2}g^{ \mathbb{S}^{n-1}})\) with respect to the relation that identifies \(\{0\}\times\mathbb{S}^{n-1}\) with a point \(o\), called _pole_, and then extending smoothly the metric in \(o\). Influenced by the model structure of \(\mathbb{R}^{n}\), in Section 2 we obtain a transposition of the previous theorem to warped product manifolds satisfying certain (radial) curvature conditions and replacing the notion of _cone_ with the notion of _strip_. The assumptions on the geometry of \(M\) and on \(\Omega\) are needed to construct a suitable barrier function, crucial for the validity of the result. We stress that the main theorem of Section 2 will be first stated in the context of (Dirichlet-)parabolic manifolds and then reinterpreted in the language of maximum principles. This is the content of Corollary 2.8. On the other hand, if we want to recover a maximum principle without requiring any assumption on the structure of the manifold (and of the domain), then we have to consider some additional hypotheses on the differential operator and on its spectrum. 
These kinds of assumptions are natural if one compares with the compact case. **Theorem 1.2**.: _Let \((M,g)\) a Riemannian manifold, \(\Omega\subseteq M\) a bounded domain and \(\mathcal{L}\) a linear elliptic operator with (sufficiently) regular coefficients. Then, the Maximum Principle holds for \(\mathcal{L}\) in \(\Omega\) with Dirichlet boundary conditions if and only if the first Dirichlet eigenvalue of \(\mathcal{L}\) on \(\Omega\) is positive._ Inspired by this fact, one might wonder if this property can be generalized to unbounded domains. This is true in the Euclidean space according to the very interesting work [13] by Samuel Nordmann. In Section 3 we shall extend Nordmann result to Riemannian domains. To this end, we first obtain an ABP-like inequality for the differential operator \(\mathcal{L}\) acting on bounded smooth domains. Next, we will use it to construct a couple of generalized eigenelements \((\lambda_{1},\varphi)\) for \(\mathcal{L}\) on possibly nonsmooth bounded domains and, using an exhaustion argument, on unbounded smooth domains. Following the proof obtained by Nordmann, in Theorem 3.23 we get a maximum principle for the operator \(\mathcal{L}\) acting on an unbounded smooth domain \(\Omega\) of a general Riemannian manifold \((M,g)\) under the assumption that \(\lambda_{1}>0\). In the last section we will apply Theorem 3.23 to generalize some of the results obtained in [6] by the author together with Stefano Pigola. ## 2. Maximum principle for unbounded domains in the complement of a strip The already cited Theorem 1.1 is a milestone in the Euclidean analysis of PDEs. A possible proof makes use of the next classical lemma (see [3, Lemma 2.1]), which is based on the existence of a suitable positive \((-\Delta+c)\)-subharmonic function. We state this result in a more general setting. **Lemma 2.1**.: _Let \((M,g)\) be a complete manifold. Given a (possibly unbounded) domain \(\Omega\subset M\), suppose \(u\in W^{1,2}_{\mbox{\tiny loc}}(\Omega)\cap C^{0}(\overline{\Omega})\) is a distributional solution to_ \[\left\{\begin{array}{ll}-\Delta u+c\ u\leq 0&\mbox{in}\ \Omega\\ u\leq 0&\mbox{on}\ \partial\Omega\\ \sup_{\Omega}u<+\infty,\end{array}\right.\] _where \(0\leq c\in C^{0}(\Omega)\). If there exists a function \(\phi\in C^{2}(\Omega)\cap C^{0}(\overline{\Omega})\) (possibly depending on \(u\)) satisfying_ \[\left\{\begin{array}{ll}-\Delta\phi+c\ \phi\geq 0&\mbox{in}\ \Omega\\ \phi>0&\mbox{in}\ \overline{\Omega}\end{array}\right.\] _and_ \[\limsup_{\begin{subarray}{c}d^{M}(p,p_{0})\to+\infty,\\ p\in\Omega\end{subarray}}\frac{u(p)}{\phi(p)}\leq 0\] _for any fixed \(p_{0}\in\Omega\) (where \(d^{M}\) is the intrinsic distance on \(M\)), then \(u\leq 0\) in \(\Omega\)._ Proof.: Let \(w:=\frac{u}{\phi}\in W^{1,2}_{loc}(\Omega)\cap C^{0}(\overline{\Omega})\). We have \[\Delta w+2g\left(\nabla w,\frac{\nabla\phi}{\phi}\right)+w\frac{\Delta\phi}{ \phi}=\frac{\Delta u}{\phi}\geq c\ \frac{u}{\phi}=c\ w\qquad\quad\mbox{in}\ \mathcal{D}^{\prime}\] i.e. \[\mathcal{L}w:=-\Delta w-2g\left(\nabla w,\frac{\nabla\phi}{\phi}\right)+w\frac {-\Delta\phi+c\ \phi}{\phi}\leq 0\qquad\quad\mbox{in}\ \mathcal{D}^{\prime}.\] By assumption, for any \(\epsilon>0\) and any fixed \(p_{0}\in M\) there exists \(0<R_{\epsilon}\xrightarrow{\epsilon\to 0}\infty\) so that \(w(p)\leq\epsilon\) for every \(p\in\Omega\) satisfying \(d^{M}(p,p_{0})\geq R_{\epsilon}\). 
Hence, for \(\Omega_{\epsilon}:=B^{M}_{R_{\epsilon}}(p_{0})\cap\Omega\) we get \[\left\{\begin{array}{ll}\mathcal{L}w\leq 0&\mbox{in any connected component of}\ \Omega_{\epsilon}\\ w\leq\epsilon&\mbox{on the boundary of any connected component of}\ \Omega_{\epsilon}.\end{array}\right.\] Since \(\frac{-\Delta\phi+c\phi}{\phi}\geq 0\), by the standard maximum principle \(w\leq\epsilon\) in any connected component of \(\Omega_{\epsilon}\). Letting \(\epsilon\to 0\) we get \(w\leq 0\) in \(\Omega\), i.e. \(u\leq 0\) in \(\Omega\). As said above, the previous lemma is the key ingredient to obtain the unbounded maximum principle contained in Theorem 1.1. Indeed, for any bounded above supersolution \(u\) we only have to find a barrier function \(\phi\) satisfying the assumptions of Lemma 2.1. Observe that, since in Theorem 1.1\(u\) is assumed to be bounded above, the dependence of \(\phi\) on \(u\) may be bypassed just requiring that \(\phi\xrightarrow{|x|\to+\infty}+\infty\). It is precisely the presence of the cone \(\mathcal{C}\) in the complement of \(\Omega\) that allows us to easily construct \(\phi\). Proof of Theorem 1.1.: Consider the spherical coordinates \((r,\theta)\) on \(\mathbb{R}^{n}\) and set \(\Lambda=\mathbb{S}^{n-1}\setminus\mathcal{C}\). We define \(\phi\) as the restriction to \(\Omega\) of the function \(\Phi:(0,+\infty)\times\Lambda\to\mathbb{R}_{\geq 0}\) given by \[\Phi(r,\theta)=\left\{\begin{array}{ll}\ln(r)+C_{0}&\mbox{if }n=2\\ r^{\alpha}\psi(\theta)&\mbox{if }n\geq 3,\end{array}\right.\] where \(\psi\) is the first Dirichlet eigenfunction of \(\Delta^{\mathbb{S}^{n-1}}\Big{|}_{\Lambda}\) with associated first eigenvalue \(\lambda_{1}>0\) and \(\alpha\in\mathbb{R}\) satisfies the identity \[\alpha(\alpha+n-2)-\lambda_{1}=0.\] By the nodal domain theorem, it follows that \(\phi>0\) in \(\Omega\) and thus \((-\Delta+c)\phi\geq 0\). Moreover, by construction, \(\phi\) diverges as \(|x|\to+\infty\). By Lemma 2.1, the claim follows. Using a different point of view, we can interpret Theorem 1.1 in terms of a the Dirichlet-parabolicity of the domain \(\Omega\). **Definition 2.2**.: _Given a Riemannian manifold \((M,g)\) without boundary, we say that a domain \(\Omega\subseteq M\) is Dirichlet parabolic (\(\mathcal{D}\)-parabolic) if the unique bounded solution \(u\in C^{0}(\overline{\Omega})\cap C^{\infty}(\Omega)\) to the problem_ \[\left\{\begin{array}{ll}-\Delta u=0&\mbox{in }\Omega\\ u=0&\mbox{on }\partial\Omega\end{array}\right.\] _is the constant null function._ **Remark 2.3**.: Note that in the definition of \(\mathcal{D}\)-parabolicity the boundary of the manifold (domain) at hand does not necessarily have to be smooth. For an interesting work about Dirichlet parabolicity, containing a detailed overview about the topic, we suggest [15]. As an application of what done so far, we get that any domain \(\Omega\subset\mathbb{R}^{n}\) contained in the complement of a cone is \(\mathcal{D}\)-parabolic. **Corollary 2.4**.: _If \(\Omega\subset\mathbb{R}^{n}\), \(n\geq 2\), is a (possibly unbounded) domain whose closure is contained in the complement of a non-degenerate solid cone \(\mathcal{C}\subset\mathbb{R}^{n}\), then \(\Omega\) is \(\mathcal{D}\)-parabolic._ Proof.: Fixed any bounded function \(u\in C^{0}(\overline{\Omega})\cap C^{\infty}(\Omega)\) satisfying \[\left\{\begin{array}{ll}-\Delta u=0&\mbox{in }\Omega\\ u=0&\mbox{on }\partial\Omega,\end{array}\right.\] by Theorem 1.1 we get \(u\leq 0\). 
Applying the same argument to \(v=-u\), it also follows that \(u\geq 0\), obtaining \(u\equiv 0\).

### From Euclidean space to warped products

Clearly, the previous construction is strongly based on the fact that the Euclidean space is a model manifold. Using this viewpoint, a natural question could be the following:

_Can we retrace what we have done so far to obtain a suitable barrier \(\phi\) on any warped product manifold \(M=I\times_{\sigma}N\)?_

**Remark 2.5**.: When we consider \(\mathbb{R}^{n}\) as a warped product manifold, the cone \(\mathcal{C}\) (whose vertex coincides with the pole \(o\)) can be seen as a strip that extends along the "radial" direction.

(Figure: the domain \(\Omega\) contained in the strip \([0,+\infty)\times\Lambda\), drawn in the radial coordinate \(r\).)

While at the beginning of this section we explained how to prove \(\mathcal{D}\)-parabolicity using Lemma 2.1, for more general warped product manifolds we will apply the following Dirichlet-Khas'minskii test (see [15, Lemma 14]) to subdomains of the ambient manifold.
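Before stating the test, we record for later use a standard computation (included here only for the reader's convenience): on the warped product \(I\times_{\sigma}N\) of dimension \(m\), endowed with the metric \(\mathrm{d}r\otimes\mathrm{d}r+\sigma^{2}(r)g^{N}\), the Laplace-Beltrami operator splits as
\[\Delta=\frac{\partial^{2}}{\partial r^{2}}+(m-1)\frac{\sigma^{\prime}(r)}{\sigma(r)}\frac{\partial}{\partial r}+\frac{1}{\sigma^{2}(r)}\Delta_{N}.\]
In particular, for a separated function \(\phi(r,\xi)=h(r)\psi(\xi)\), where \(\psi>0\) satisfies \(-\Delta_{N}\psi=\lambda_{1}\psi\), one has
\[\Delta\phi=\left(h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}-\frac{\lambda_{1}}{\sigma^{2}}h\right)\psi,\]
so that the differential inequality \(-\Delta\phi\geq 0\) reduces to an ODE for \(h\) alone; this is the reduction used in the proof of Theorem 2.7 below.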
**Lemma 2.6** (\(\mathcal{D}\)-Khas'minskii test).: _Given a Riemannian manifold \((M,g)\) with boundary \(\partial M\neq\emptyset\), if there exists a compact set \(K\subset M\) and a function \(0\leq\phi\in C^{0}(M\setminus\mathrm{int}\ K)\cap W^{1,2}_{loc}(\mathrm{int} \ M\ \setminus K)\) such that \(\phi(x)\to\infty\) as \(d^{M}(x,x_{0})\to\infty\) for some (any) \(x_{0}\in M\), and_ \[-\int_{\mathrm{int}\ M\ \setminus K}g(\nabla\phi,\nabla\rho) \leq 0\] \[\forall 0\leq\rho\in C^{0}(M\setminus\mathrm{int}\ K)\cap W^{1,2}_{ loc}(\mathrm{int}\ M\ \setminus K),\] _then \(M\) is \(\mathcal{D}\)-parabolic._ Before stating the main theorem of this section we briefly recall that the radial Ricci curvature \(\mathrm{Ric}_{rr}\) at a point \(p=(r,\xi)\) of a warped product manifold \(M=I\times_{\sigma}N\) is given by \[\mathrm{Ric}_{rr}(p)=\mathrm{Ric}\left(\frac{\partial}{\partial r},\frac{ \partial}{\partial r}\right)(p)=-\frac{\sigma^{\prime\prime}(r)}{\sigma(r)}.\] In particular, on noting that \(\sigma(r)>0\) for every \(r\in I\), we get \[\mathrm{Ric}_{rr}(p)\geq 0\ \ (\mathrm{resp.}\ \leq 0)\ \ \ \ \Leftrightarrow\ \ \ \ \sigma^{\prime\prime}(r)\leq 0\ \ (\mathrm{resp.}\ \geq 0).\] **Theorem 2.7**.: _Let \(M=\mathbb{R}_{\geq 0}\times_{\sigma}N\) be a warped product manifold of dimension \(\dim(M)\geq 2\), where \(\sigma:\mathbb{R}_{\geq 0}\to\mathbb{R}_{>0}\) is a positive smooth function and \(N\) is a closed manifold. Consider \(\Omega\subset M\) an unbounded domain whose closure is contained in the strip \([0,+\infty)\times\Lambda\), where \(\Lambda\subset N\) is a non-empty, smooth and connected open subset of \(N\) such that \(\overline{\Lambda}\neq N\). Assume that either one of the following conditions is satisfied_ 1. \(\mathrm{Ric}_{rr}\leq 0\) _eventually and_ \(\exists\lim_{r\to\infty}\sigma(r)=c\in[0,+\infty)\)_;_ 2. \(\mathrm{Ric}_{rr}\geq 0\) _eventually and_ \(\exists\lim_{r\to\infty}\sigma(r)=c\in(0,+\infty]\)_;_ 3. \(\sigma\in O(r^{\beta})\) _for_ \(0<\beta<\frac{1}{2}\) _as_ \(r\to+\infty\) _and_ \(\frac{\sigma^{\prime}}{\sigma}\in L^{\infty}\) _eventually._ _Then \(\overline{\Omega}\) is \(\mathcal{D}\)-parabolic._ Proof.: We recall that \(\Omega\) is \(\mathcal{D}\)-parabolic if every \(u\in C^{\infty}(\Omega)\cap C^{0}(\overline{\Omega})\cap L^{\infty}(\Omega)\) satisfying the Dirichlet problem \[\left\{\begin{array}{ll}-\Delta u=0&\mbox{in }\Omega\\ u=0&\mbox{on }\partial\Omega\end{array}\right. \tag{2.2}\] vanishes everywhere. By the invariance of \(\mathcal{D}\)-parabolicity by removing compact domains, it is enough to prove that there exists an appropriate compact subset \(K\subset\Omega\) such that the resulting subdomain \(U:=\Omega\setminus K\) is \(\mathcal{D}\)-parabolic. To this end, in turn, following the philosophy of Khas'minskii test, we only have to find a nonnegative function \(\phi\in C^{0}(\overline{U})\cap W^{1,2}_{\mbox{\tiny{loc}}}(U)\) satisfying the conditions \[\left\{\begin{array}{ll}-\Delta\phi\geq 0\\ \lim_{\begin{subarray}{c}d^{M}(p_{0},x)\to\infty\\ x\in\Omega\end{subarray}}\phi(x)=+\infty\end{array}\right.\] for any fixed \(p_{0}\in M\). Indeed, in this case given any solution \(u\in C^{\infty}(U)\cap C^{0}(\overline{U})\cap L^{\infty}(U)\) of (2.2), suppose by contradiction that \(\sup_{U}u>0\). Then there exists \(x_{0},x_{1}\in U\) such that \(\sup_{U}u\geq u(x_{1})>u(x_{0})=:u_{0}>0\). Define \(v:=u-u_{0}-\epsilon\phi\), for \(\epsilon\) small enough so that \(v(x_{1})>0\), and set \(W:=\{x\in U\ :\ v(x)>0\}\). 
Then \(x_{1}\in W\) and \(W\) is bounded since \(\phi\to+\infty\) as \(d^{M}(p_{0},x)\to\infty\). By the fact that \(\Delta v\geq 0\) weakly in \(W\) and \(v\leq 0\) on \(\partial W\), using the strong maximum principle we get \(v\leq 0\) on \(W\), thus obtaining a contradiction. It follows that \(u\leq 0\). By applying the same argument to the function \(-u\), we conclude \(u\equiv 0\), as desired. It remains to prove the existence of the function \(\phi\) and the corresponding compact set \(K\). Thanks to the structure of the warped product manifold, we can assume \(\phi\) to be of the form \(\phi(r,\xi)=h(r)\psi(\xi)\). So, let \(\psi\) be the positive first Dirichlet eigenfunction of the Laplacian on \(\Lambda\) \[\left\{\begin{array}{ll}-\Delta_{\Lambda}\psi=\lambda_{1}\psi\geq 0&\mbox{ in } \Lambda\\ \psi=0&\mbox{ on }\partial\Lambda.\end{array}\right.\] With this choice the differential inequality \(-\Delta\phi\geq 0\) is equivalent to the second order ODE \[h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}-\frac{1}{ \sigma^{2}}\lambda_{1}h\leq 0. \tag{2.3}\] Whence, we are reduced to find a solution \(h\) to (2.3). This is obtained via a case by case analysis: 1. \(\sigma^{\prime\prime}\geq 0\) eventually and \(\exists\lim_{r\to\infty}\sigma(r)=c\in[0,+\infty)\): by assumption, there exists \(A\geq 1\) so that \[\sigma^{\prime\prime}\geq 0\hskip 28.452756pt\mbox{and thus}\hskip 28.452756pt\sigma\geq c\] in \([A,+\infty)\). This implies that \(\sigma^{\prime}\xrightarrow{r\to+\infty}C\leq 0\) and \(\sigma^{\prime}\leq 0\) eventually, so we can assume that \(\sigma^{\prime}\leq 0\) for \(r\geq A\). In particular, \(-K\leq\sigma^{\prime}\leq 0\) for a positive constant \(K\). Let \(h(r):=r\), defined in \([A,+\infty)\): since \(h^{\prime}=1\geq 0\), \(h^{\prime\prime}=0\) and \(\sigma^{\prime}\leq 0\), we get \[h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}-\frac{1}{\sigma^ {2}}\lambda_{1}h\leq 0.\] By construction, \(h(r)\xrightarrow{r\to+\infty}+\infty\) and \(h(r)>0\) in \([A,+\infty)\). Whence, defining \(U:=\Omega\cap([A,+\infty)\times N)\) and taking \(\phi(r,\xi)=h(r)\psi(\xi)\), by the previous argument we obtain that \(U\) is \(\mathcal{D}\)-parabolic. 2. a. \(\sigma^{\prime\prime}\leq 0\) eventually and \(\exists\lim_{r\to\infty}\sigma(r)=c\in(0,+\infty)\): as in previous case, there exists \(A\geq 1\) so that \[\sigma^{\prime\prime}\leq 0\hskip 28.452756pt\text{and thus}\hskip 28.452756pt \sigma\leq c\] in \([A,+\infty)\), implying (w.l.o.g.) \(0\leq\sigma^{\prime}\leq K<+\infty\) in \([A,+\infty)\). Let \(\beta\in(0,1)\) and \(h(r):=r^{\beta}\): we get \[h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}- \frac{1}{\sigma^{2}}\lambda_{1}h \leq(m-1)\frac{\sigma^{\prime}}{\sigma}\beta r^{\beta-1}-\frac{1}{ \sigma^{2}}\lambda_{1}r^{\beta}\] \[\leq\frac{r^{\beta}}{\sigma}\left[(m-1)K\beta-\frac{1}{c}\lambda_{1}\right]\] and choosing \(\beta\in(0,1)\) so that \(\left[(m-1)K\beta-\frac{1}{c}\lambda_{1}\right]\leq 0\), we obtain \[h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}-\frac{1}{ \sigma^{2}}\lambda_{1}h\leq 0.\] Since \(h\) is positive and diverges as \(r\to+\infty\), we can proceed exactly as in previous case, obtaining that \(U:=\Omega\cap([A,+\infty)\times N)\) is \(\mathcal{D}\)-parabolic. 2. b. \(\sigma^{\prime\prime}\leq 0\) eventually and \(\exists\lim_{r\to+\infty}\sigma(r)=+\infty\): by assumption, there exists \(A>1\) so that \(\sigma^{\prime\prime}\leq 0\) in \([A,+\infty)\). 
Together with the fact that \(\sigma\to+\infty\) as \(r\to+\infty\), this implies that \(\sigma^{\prime}\) is decreasing and eventually positive. In particular, \(\sigma^{\prime}\leq K\) is bounded in \([A,+\infty)\). Choosing \(h(r)=\sigma^{\beta}(r)\) for \(\beta>0\), we get

\[h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}-\frac{1}{\sigma^{2}}\lambda_{1}h=\sigma^{\beta-2}\left[(\sigma^{\prime})^{2}\beta(\beta+m-2)-\lambda_{1}\right]+\underbrace{\beta\sigma^{\beta-1}\sigma^{\prime\prime}}_{\leq 0}\]

in \([A,+\infty)\) and, thanks to the boundedness of \(\sigma^{\prime}\), we can take a positive \(\beta\) small enough so that

\[(\sigma^{\prime})^{2}\beta(\beta+m-2)-\lambda_{1}\leq 0,\]

obtaining

\[h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}-\frac{1}{\sigma^{2}}\lambda_{1}h\leq 0\]

in \([A,+\infty)\). As in the first case, it follows that the subdomain \(U:=\Omega\cap([A,+\infty)\times N)\) is \(\mathcal{D}\)-parabolic.

3. \(\sigma\in O(r^{\beta})\) for \(0<\beta<\frac{1}{2}\) as \(r\to\infty\) and \(\frac{\sigma^{\prime}}{\sigma}\in L^{\infty}\) eventually: let \(K>0\) and \(A_{0}>0\) be so that \(\frac{\sigma^{\prime}}{\sigma}<K\) in \([A_{0},+\infty)\). Then, under the current assumptions, the function \(h(r):=r\) satisfies

\[h^{\prime\prime}+(m-1)\frac{\sigma^{\prime}}{\sigma}h^{\prime}-\frac{1}{\sigma^{2}}\lambda_{1}h<(m-1)K-\frac{1}{\sigma^{2}}\lambda_{1}r\xrightarrow{r\to+\infty}-\infty,\]

implying that there exists \(A>A_{0}\) so that equation (2.3) is satisfied in \([A,+\infty)\). Again, it follows that the domain \(U:=\Omega\cap([A,+\infty)\times N)\) is \(\mathcal{D}\)-parabolic.

As a consequence of the above analysis, we get a \(\mathcal{D}\)-parabolic subdomain of the form \(U:=\Omega\cap([A,+\infty)\times N)\), for \(A>0\) big enough. Since \(\Omega\setminus U=([0,A]\times N)\cap\Omega\) is compact in \(\Omega\) and \(U\) is \(\mathcal{D}\)-parabolic, by [15, Corollary 11] the domain \(\Omega\) is itself \(\mathcal{D}\)-parabolic, thus completing the proof.
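To illustrate the scope of Theorem 2.7, we note that the three conditions can be checked directly on model warping functions: \(\sigma(r)=e^{-r}\) (a cusp-like end) satisfies condition 1, since \(\mathrm{Ric}_{rr}=-\sigma^{\prime\prime}/\sigma\equiv-1\leq 0\) and \(\sigma(r)\to 0\); the cylindrical case \(\sigma\equiv 1\) satisfies both condition 1 and condition 2 with \(c=1\); while \(\sigma(r)=(1+r)^{1/4}\) satisfies condition 3, since \(\sigma\in O(r^{1/4})\) and \(\frac{\sigma^{\prime}}{\sigma}(r)=\frac{1}{4(1+r)}\) is bounded. In each of these cases, every unbounded domain \(\Omega\) as in the statement of Theorem 2.7 is therefore \(\mathcal{D}\)-parabolic.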
A direct application of Theorem 2.7 gives the following maximum principle for unbounded domains. Its proof is based on a characterization of the \(\mathcal{D}\)-parabolicity contained in [15, Proposition 10], which asserts that a Riemannian manifold \(X\) with nonempty boundary \(\partial X\neq\emptyset\) is \(\mathcal{D}\)-parabolic if and only if every subharmonic bounded function \(u\in C^{0}(X)\cap W^{1,2}_{loc}(\text{int }X)\) satisfies \(\sup_{X}u=\sup_{\partial X}u\).

**Corollary 2.8** (Unbounded maximum principle).: _Let \(M=\mathbb{R}_{\geq 0}\times_{\sigma}N\) be a warped product manifold of dimension \(\dim(M)\geq 2\), where \(\sigma:\mathbb{R}_{\geq 0}\to\mathbb{R}_{>0}\) is a positive smooth function and \(N\) a closed manifold. Consider \(\Omega\subset M\) an unbounded domain whose closure is contained in the strip \([0,+\infty)\times\Lambda\), where \(\Lambda\subset N\) is a non-empty, smooth and connected open subset of \(N\) such that \(\overline{\Lambda}\neq N\). Moreover, suppose the validity of either one of the following conditions_

1. \(\operatorname{Ric}_{rr}\leq 0\) _eventually and_ \(\exists\lim_{r\to\infty}\sigma(r)=c\in[0,+\infty)\)_;_
2. \(\operatorname{Ric}_{rr}\geq 0\) _eventually and_ \(\exists\lim_{r\to\infty}\sigma(r)=c\in(0,+\infty]\)_;_
3. \(\sigma\in O(r^{\beta})\) _for_ \(0<\beta<\frac{1}{2}\) _as_ \(r\to\infty\) _and_ \(\frac{\sigma^{\prime}}{\sigma}\in L^{\infty}\) _eventually._

_If \(u\in C^{0}(\overline{\Omega})\cap W^{1,2}_{loc}(\Omega)\) is a bounded above distributional solution of the problem_

\[\left\{\begin{array}{rl}-\Delta u+c\ u\leq 0&\mbox{in }\Omega\\ u\leq 0&\mbox{on }\partial\Omega,\end{array}\right.\]

_where \(0\leq c\in C^{0}(\Omega)\), then_

\[u\leq 0\quad\mbox{ in }\Omega.\]

Proof.: Consider \(u\in C^{0}(\overline{\Omega})\cap W^{1,2}_{loc}(\Omega)\) a bounded above distributional solution to the problem

\[\left\{\begin{array}{rl}-\Delta u+c\ u\leq 0&\mbox{in }\Omega\\ u\leq 0&\mbox{on }\partial\Omega.\end{array}\right.\]

Setting \(u^{+}:=\max\{u,0\}\), by Kato's inequality (see [18, Proposition A.1]) we get

\[\left\{\begin{array}{rl}-\Delta u^{+}\leq-cu^{+}\leq 0&\mbox{in }\Omega\\ u^{+}=0&\mbox{on }\partial\Omega.\end{array}\right.\]

Using Theorem 2.7 and [15, Proposition 10] it follows that \(u^{+}=0\) in \(\Omega\), implying \(u\leq 0\) in \(\Omega\).

## 3. A maximum principle for general unbounded domains in complete manifolds

In the present section we aim to prove a Maximum Principle for second order elliptic operators acting on unbounded domains of more general Riemannian manifolds. We stress that in the main theorem of this section, i.e. Theorem 3.23, we only require the positivity (in the spectral sense) of the operator, with no further assumptions on the geometry or on the structure of the ambient manifold. The result is obtained by readapting the work done in the Euclidean case by Samuel Nordmann [13]. Most of the effort consists in recovering in a Riemannian setting some classical Euclidean tools. In particular, a crucial step is the proof of an Alexandroff-Bakelman-Pucci estimate, which will allow us to construct a (generalized) first eigenfunction in unbounded domains. The Maximum Principle will be a straightforward consequence of the existence of such an eigenfunction.

### ABP inequality

In the very interesting article [7], Cabre proved a Riemannian version of the Alexandroff-Bakelman-Pucci estimate for elliptic operators in nondivergent form acting on manifolds with nonnegative sectional curvature. In his work, he used the assumption on the sectional curvature to ensure two fundamental tools: the (global) volume doubling property for the Riemannian measure \(\mathrm{dv}\) and the classical Hessian comparison principle by Rauch. In particular, since these two tools (with different curvature bounds) are available in every relatively compact domain \(\Omega\subset M\) regardless of any assumption on the sectional curvature of \(M\), it is reasonable to expect that we can locally recover the results by Cabre up to multiplying by appropriate constants depending on \(\Omega\) and on the lower bound of its sectional curvature. Among its various applications, the ABP inequality is one of the main ingredients used by Berestycki, Nirenberg and Varadhan in [4] to prove the existence of the _generalized principal eigenfunction_ of a second order differential operator \(\mathcal{L}\) on Euclidean domains, that is, a generalization of the notion of eigenfunction to operators acting on possibly nonsmooth or unbounded domains. In this paper we will see how to transplant the construction of the generalized principal eigenfunction into general bounded (and into smooth unbounded) Riemannian domains: this will allow us to prove a maximum principle for uniformly elliptic second order differential operators acting in smooth unbounded domains.
Following the proof in [7], we get a version of the ABP inequality for uniformly elliptic operators of the form \[\mathcal{L}u(x):=\mathcal{M}u(x)+c(x)u(x), \tag{3.1}\] with \[\mathcal{M}u(x):=\operatorname{div}\big{(}A(x)\cdot\nabla u(x)\big{)}+g(B(x), \nabla u(x)\big{)},\] acting on a bounded Riemannian domain \(\Omega\subset M\), where \(c\in C^{0}(M)\) is a continuous function, \(B\in C^{\infty}(M;TM)\) is a smooth vector field and \(A\in\operatorname{End}(TM)\) is a positive definite symmetric endomorphism of the tangent bundle \(TM\) so that \[c_{0}\ g(\xi,\xi)\leq g(A(x)\cdot\xi,\xi)\leq C_{0}\ g(\xi,\xi) \qquad\forall x\in M,\forall\xi\in T_{x}M\] and \[g(B(x),B(x))\leq b,\ \ \ \ |c(x)|\leq b\qquad\ \forall x\in M\] for some positive constants \(c_{0},C_{0}\) and \(b\). Moreover, we assume that the local coefficients \(a_{i}^{j}\) of the endomorphism \(A\) satisfy \[\Big{|}\big{|}a_{i}^{j}\Big{|}\Big{|}_{C^{1}}\leq a\qquad\forall i,j, \tag{3.2}\] where \(a\in\mathbb{R}_{>0}\). The strategy we adopt to achieve the ABP inequality is strongly based on the existence of a suitable atlas composed by harmonic charts. To this aim, let's start by introducing the following definition. **Definition 3.1**.: _Given an \(n\)-dimensional Riemannian manifold \((M,g)\), we recall that the \(C^{1}\)-harmonic radius of \(M\) at \(x\in M\), denoted with \(r_{h}(x)\), is the supremum among all \(R>0\) so that there exists a coordinate chart \(\phi:B_{R}(x)\to\mathbb{R}^{n}\) with the following properties_ 1. \(2^{-1}g^{\mathbb{R}^{n}}\leq g\leq 2g^{\mathbb{R}^{n}}\) _in the local chart_ \((B_{R}(x),\phi)\)_;_ 2. \(||\partial_{k}g_{ij}||_{C^{0}(B_{R}(x))}\leq\frac{1}{R}\) _for every_ \(k=1,...,n\)_;_ 3. \(\phi\) _is an harmonic map._ Defining \(r_{h}(M):=\inf_{x\in M}r_{h}(x)\), if we suppose that \[|\mathrm{Ric}|\leq K\quad\text{and}\quad\mathrm{inj}_{(M,g)}\geq i \tag{3.3}\] for some constants \(K,i\in\mathbb{R}_{>0}\), by [11, Corollary] it follows that there exists a constant \(r_{0}=r_{0}(n,K,i)>0\) so that \[r_{h}(M)\geq r_{0}.\] As a consequence, under the assumptions (3.3) we can choose a cover of harmonic charts (with fixed positive radius) providing a uniform \(C^{1}\)-control on the metric and on its derivatives. **Theorem 3.2**.: _Let \((M,g)\) be a complete Riemannian manifold of dimension \(\dim(M)=n\) and \(\Omega\Subset M\) a bounded smooth domain. Denote \(\Omega_{r}:=\{x\in M\ :\ d(x,\Omega)<r\}\) for \(r>0\)._ _Then, there exists a positive constant \(C=C(n,a,b,c_{0},C_{0},r_{h}(\overline{\Omega}),|\Omega|,|\Omega_{r_{h}( \overline{\Omega})}|)\) such that for every \(u\in C^{2}(\Omega)\) satisfying_ \[\left\{\begin{array}{l}\mathcal{M}u\geq f\ \mathrm{in}\ \Omega\\ \limsup_{x\to\partial\Omega}u(x)\leq 0,\end{array}\right.\] _it holds_ \[\sup_{\Omega}u\leq C\ \mathrm{diam}(\Omega)\left|\left|f\right|\right|_{L^{n}( \Omega)}. \tag{3.4}\] The key result that we need to prove Theorem 3.2 is the following Euclidean integral Harnack inequality, whose proof can be found in [8, Theorem 9.22] **Theorem 3.3**.: _Let \(\mathcal{L}:=a^{ij}\partial_{i}\partial_{j}+b^{i}\partial_{i}+c\) be an uniformly elliptic differential operator acting on a bounded domain \(U\subset\mathbb{R}^{n}\) with_ \[c_{0}\leq[a^{ij}]\leq C_{0}\quad and\quad|b^{i}\partial_{i}|,|c|\leq b,\] _for some positive constants \(c_{0},C_{0}\) and \(b\), and let \(f\in L^{n}(U)\). 
If \(u\in W^{2,n}(U)\) satisfies \(\mathcal{L}u\leq f\) and is nonnegative in a ball \(B_{2R}(z)\subset U\), then_ \[\left(\fint_{B_{R}(z)}u^{p}\right)^{\frac{1}{p}}\leq C_{1}\left(\inf_{B_{R}(z )}u+R\ \left|\left|f\right|\right|_{L^{n}(B_{2R}(z))}\right)\] _where \(p\) and \(C_{1}\) are positive constants depending on \(n,\ bR,\ c_{0}\) and \(C_{0}\)._ **Remark 3.4**.: If \(b=0\), i.e. if \(B=b^{i}\partial_{i}\) is the null vector field and \(c\equiv 0\), then the constants \(p\) and \(C_{1}\) in previous theorem do not depend on the radius \(R\). **Remark 3.5**.: If \(\Omega\) is a bounded smooth domain and \(u\in C^{2}(\Omega)\cap C^{1}(\overline{\Omega})\) satisfies \[\left\{\begin{array}{l}\mathcal{M}u\leq f\quad\text{in}\ \Omega\\ u\equiv C\quad\quad\text{on}\ \partial\Omega\\ \frac{\partial u}{\partial A\cdot\nu}\leq 0\quad\text{on}\ \partial\Omega,\end{array}\right.\] where \(\nu\) is the outward pointing unit vector field normal to \(\partial\Omega\), then we can consider a larger bounded smooth domain \(\Lambda\Supset\Omega\) and we can extend \(u\) and \(f\) to \(\Lambda\) by imposing \(u\equiv C\) and \(f\equiv 0\) in \(\Lambda\setminus\overline{\Omega}\). In this way we get a function \(u\in C^{0}(\Lambda)\cap W^{2,n}(\Lambda)\) satisfying \(\mathcal{M}u\leq f\) weakly in \(\Lambda\), i.e. so that \[\int_{\Lambda}[-g(A\cdot\nabla u,\nabla\phi)+g(B,\nabla u)\phi]\ \operatorname{dv} \leq\int_{\Lambda}f\phi\ \operatorname{dv}\qquad\forall 0\leq\phi\in C_{c}^{ \infty}(\Lambda).\] **Remark 3.6**.: We stress that if \(\Omega\) is a bounded smooth domain, \(u\in C^{2}(\Omega)\cap C^{1}(\overline{\Omega})\) satisfies \[\left\{\begin{array}{ll}\mathcal{M}u\leq 0&\text{in }\Omega\\ u\equiv C&\text{on }\partial\Omega\end{array}\right.\] and \(x_{0}\in\partial\Omega\) is a global minimum for \(u\) in \(\overline{\Omega}\), then \[\frac{\partial u}{\partial A\cdot\nu}(x_{0})\leq 0.\] Indeed, by decomposing \(A\cdot\nu=(A\cdot\nu)^{\top}+(A\cdot\nu)^{\perp}\), where \((A\cdot\nu)^{\top}\) and \((A\cdot\nu)^{\perp}\) are tangential and normal to \(\partial\Omega\) respectively, one can check that \[\frac{\partial u}{\partial A\cdot\nu}(x_{0})=(A(x_{0})\cdot\nu(x_{0}))^{\perp }\frac{\partial u}{\partial\nu}(x_{0})=\underbrace{g\big{(}A(x_{0})\cdot\nu(x_ {0}),\nu(x_{0})\big{)}}_{>0}\frac{\partial u}{\partial\nu}(x_{0})\] where the first equality follows from the fact that \(x_{0}\in\partial\Omega\) is a minimum for \(u|_{\partial\Omega}\), implying that the tangential component (to \(\partial\Omega\)) of \(\nabla u\) vanishes at \(x_{0}\). Hence \(\frac{\partial u}{\partial A\cdot\nu}(x_{0})\) and \(\frac{\partial u}{\partial\nu}(x_{0})\) have the same sign. By standard Hopf's Lemma it follows that \(\frac{\partial u}{\partial A\cdot\nu}(x_{0})\leq 0\). **Remark 3.7**.: Using the local expression of the differential operator \(\mathcal{M}\), we can estimate the constant of Theorem 3.3 in every local chart in terms of the coefficients \(A,B\) and \(c\) and of the fist order derivatives of the metric, i.e. in terms of the harmonic radius of \(M\) thanks to condition (ii). 
Indeed, if \(X\) is a vector field, in local coordinates \[\operatorname{div}\left(X\right)=\frac{\partial X^{k}}{\partial x^{k}}+X^{t} \Gamma_{kt}^{k}\] obtaining \[\operatorname{div}(A\cdot\nabla u) =\operatorname{div}\left(a_{i}^{j}\frac{\partial}{\partial x^{j} }\otimes dx^{i}\left[g^{hk}\frac{\partial u}{\partial x^{k}}\frac{\partial}{ \partial x^{h}}\right]\right)\] \[=\frac{\partial}{\partial x^{j}}\left(a_{i}^{j}g^{hi}\frac{ \partial u}{\partial x^{h}}\right)+a_{i}^{t}g^{hi}\frac{\partial u}{\partial x ^{h}}\Gamma_{kt}^{k}.\] Hence the differential operator \(\mathcal{M}\) writes as \[\mathcal{M}u =\operatorname{div}\left(A\cdot\nabla u\right)+g(B,\nabla u)\] \[=\operatorname{div}\left(a_{i}^{j}\frac{\partial}{\partial x^{j}} \otimes dx^{i}\left[g^{hk}\frac{\partial u}{\partial x^{k}}\frac{\partial}{ \partial x^{h}}\right]\right)+g\left(B^{j}\frac{\partial}{\partial x^{j}},g^{hk }\frac{\partial u}{\partial x^{k}}\frac{\partial}{\partial x^{h}}\right)\] \[=\frac{\partial}{\partial x^{j}}\left(a_{i}^{j}g^{hi}\frac{ \partial u}{\partial x^{h}}\right)+a_{i}^{t}g^{hi}\frac{\partial u}{\partial x ^{h}}\Gamma_{kt}^{k}+B^{k}\frac{\partial u}{\partial x^{k}}\] \[=a_{i}^{j}g^{hi}\frac{\partial^{2}u}{\partial x^{j}\partial x^{h }}+\left(\frac{\partial}{\partial x^{j}}\left(a_{i}^{j}g^{ki}\right)+a_{i}^{t }g^{ki}\Gamma_{ht}^{h}+B^{k}\right)\frac{\partial u}{\partial x^{k}}.\] As a consequence, under the assumptions (3.3) the coefficients of \(\mathcal{M}\) have the same bounds in every harmonic chart of the manifold \(M\). In particular, in Theorem 3.3 we can chose the same constants \(p=p(n,r_{h}(M),a,b,c_{0},C_{0})\) and \(C=C(n,r_{h}(M),a,b,c_{0},C_{0})\) for every harmonic chart, avoiding any dependence on the local chart. Lastly, we stress that if we consider an operator of the form \[\mathcal{M}(u)=\operatorname{tr}\left(A\cdot\operatorname{Hess}(u)\right)+g(B,\nabla u),\] then the same conclusion holds true without requiring the condition (3.2). Proof of Theorem 3.2.: We start by supposing that \(u\) and the coefficients of \(\mathcal{M}\) are smooth up to the boundary of \(\Omega\). Consider the solution \(w\) of the problem \[\left\{\begin{array}{ll}\mathcal{M}w=-F:=-(\mathcal{M}u)^{-}\leq 0&\text{in } \Omega\\ w=0&\text{on }\partial\Omega.\end{array}\right.\] By assumption, \(u\in C^{\infty}(\overline{\Omega})\) and so \(F=(\mathcal{M}u)^{-}\) is Lipschitz in \(\overline{\Omega}\), implying that \(w\in C^{2,\alpha}(\overline{\Omega})\) for any \(\alpha\in(0,1)\). Moreover, by the standard maximum principle, we have \(w\geq 0\). Now consider the function \(w-u\): by definition \[\left\{\begin{array}{ll}\mathcal{M}(w-u)\leq 0&\text{in }\Omega\\ w-u\geq 0&\text{on }\partial\Omega\end{array}\right.\] and, again by standard maximum principle, \[w\geq u\quad\text{ in }\Omega.\] Take \(z_{0}\in\Omega\) so that \(S=w(z_{0})=\sup_{\Omega}w>0\) and consider the function \(v:=S-w\geq 0\). Let \(r:=r_{h}(\overline{\Omega})\) and consider the \(r\)-neighbourhood \(\Omega_{r}\) of \(\Omega\) \[\Omega_{r}:=\{x\in M\ :\ d(x,\Omega)<r\}.\] Since \(v|_{\partial\Omega}\equiv S\), by Remark 3.6, we can extend \(v\) and \(F\) to \(\Omega_{r}\) as done in Remark 3.5. Observe that, without loss of generality, we can suppose \(\operatorname{diam}(\Omega)\geq r\). Otherwise, \(\Omega\) is contained in an harmonic local chart and the theorem follows by the standard Euclidean ABP inequality. 
Consider an open cover \(\mathcal{W}\) of \(\overline{\Omega}\) given by \[\mathcal{W}:=\{(W_{1}:=B_{r/4}(x_{1}),\phi_{1}),...,(W_{t}:=B_{r/4}(x_{t}),\phi _{t})\}\] satisfying the following assumptions * \(x_{i}\in\overline{\Omega}\) for every \(i=1,...,t\); * \(d(x_{i},x_{j})\geq\frac{r}{8}\) for every \(i\neq j\); * \(\mathcal{W}\) is maximal (by inclusion). For a reference see [10, Lemma 1.1]. Moreover, observe that by construction \[\bigcup_{i\leq t}W_{i}\subset\Omega_{r}.\] Since every chart of \(\mathcal{W}\) is an harmonic chart, then \[|\Omega_{r}|\geq\left|\cup_{1\leq i\leq t}B_{r/8}(x_{i})\right|=\sum_{i\leq t }|B_{r/8}(x_{i})|\geq t2^{-n/2}|\mathbb{B}_{r/8}|\] implying that \[t\leq\frac{|\Omega_{r}|2^{n/2}}{|\mathbb{B}_{r/8}|} \tag{3.5}\] where \(\mathbb{B}_{s}\) denotes the Euclidean ball of radius \(s\). Now let \(\mathcal{U}\) and \(\mathcal{V}\) the dilated covers obtained from \(\mathcal{W}\) \[\mathcal{U} :=\{(U_{1}:=B_{r}(x_{1}),\phi_{1}),...,(U_{t}:=B_{r}(x_{t}),\phi_ {t})\}\] \[\mathcal{V} :=\{(V_{1}:=B_{r/2}(x_{1}),\phi_{1}),...,(V_{t}:=B_{r/2}(x_{t}), \phi_{t})\}.\] Observe that \[W_{i}\cap W_{j}\neq\emptyset\quad\Rightarrow\quad\exists B_{r/4}(x_{ij}) \subseteq V_{i}\cap V_{j}\] which implies, by (i) in Definition 3.1, \[\begin{split}\frac{|V_{j}|}{|V_{i}\cap V_{j}|}&= \frac{|B_{r/2}(x_{j})|}{|V_{i}\cap V_{j}|}\leq\frac{|B_{r/2}(x_{j})|}{|B_{r/4} (x_{ij})|}\\ &\stackrel{{(i)}}{{\leq}}\frac{2^{n/2}|\mathbb{B}_{r /2}|}{2^{-n/2}|\mathbb{B}_{r/4}|}=\frac{2^{n}|\mathbb{B}_{r/2}|}{|\mathbb{B}_ {r/4}|}\leq 2^{n}C_{\mathbb{R}^{n}}\end{split} \tag{3.6}\] whenever \(W_{i}\cap W_{j}\neq\emptyset\), where \(C_{\mathbb{R}^{n}}=2^{n}\) is the Euclidean doubling constant. It follows that if \(W_{i}\cap W_{j}\neq\emptyset\) \[\fint_{V_{i}\cap V_{j}}v^{p}\leq C_{D}\fint_{V_{j}}v^{p} \tag{3.7}\] where \(C_{D}:=4^{n}\). 
In any local chart \(U_{i}\) we can apply Theorem 3.3, obtaining \[\begin{split}\fint_{V_{i}}v^{p}\ \mathrm{d}v&\leq 2^{n} \fint_{\mathbb{B}_{r/2}}(v\circ\phi_{i})^{p}\ \mathrm{d}x\\ &\leq 2^{n}C_{1}^{p}\left[\inf_{\mathbb{B}_{r/2}}v\circ\phi_{i}^{-1}+ \frac{r}{2}\left||F\circ\phi_{i}^{-1}\right||_{L^{n}(\mathbb{B}_{r})}\right]^{ p}\\ &\leq 2^{n}C_{1}^{p}\left[\inf_{V_{i}}v+\frac{r}{2}\sqrt{2}\left||F \right||_{L^{n}(U_{i})}\right]^{p}\end{split} \tag{3.8}\] that implies \[\begin{split}\left(\fint_{V_{i}}v^{p}\text{ dv}\right)^{1/p}& \leq\underbrace{2^{n/p}C_{1}}_{=:\widetilde{C_{1}}}\left[\inf_{V_{i}}v+ \frac{r}{\sqrt{2}}\left|\left|F\right|\right|_{L^{n}(U_{i})}\right]\\ &\leq\widetilde{C}_{1}\left[\inf_{V_{i}}v+r\left|\left|F\right| \right|_{L^{n}(U_{i})}\right]\qquad\forall i=1,...,t.\end{split} \tag{3.9}\] Summing up over \(i=1,...,t\), on the left side of (3.8) we have \[\sum_{i\leq t}\fint_{V_{i}}v^{p}\geq\frac{1}{\left|\widehat{\Omega}\right|} \int_{\widehat{\Omega}}v^{p}=\fint_{\widehat{\Omega}}v^{p} \tag{3.10}\] where \[\widehat{\Omega}:=\bigcup_{1\leq i\leq t}V_{i}\subseteq\Omega_{r}.\] Now let \(j\in\{1,...,t\}\) be so that \[\left(\inf_{V_{j}}v+r\left|\left|F\right|\right|_{L^{n}(U_{j})}\right)=\max_ {i\leq t}\left(\inf_{V_{i}}v+r\left|\left|F\right|\right|_{L^{n}(U_{i})}\right).\] and let \(\mathcal{S}:=\{W_{i_{1}},...,W_{i_{m}}\}\subseteq\mathcal{W}\) be a sequence of coordinate neighbourhoods joining \(W_{j}=:W_{i_{1}}\) and \(z_{0}\in W_{i_{m}}\) and such that \[W_{i_{q}}\neq W_{i_{s}}\quad\forall q\neq s,\] \[W_{i_{q}}\cap W_{i_{q+1}}\neq\emptyset\quad\forall q=1,...,m-1.\] We get \[\inf_{V_{j}}v=\inf_{V_{i_{1}}}v \leq\inf_{V_{i_{1}}\cap V_{i_{2}}}v\] \[\stackrel{{\text{by \eqref{eq:w_i_1}}}}{{\leq}}C_{D} \left(\fint_{V_{i_{2}}}v^{p}\right)^{1/p}\] \[\leq C_{D}\widetilde{C}_{1}\left(\inf_{V_{i_{2}}}v+r\left|\left| F\right|\right|_{L^{n}(U_{i_{2}})}\right)\] \[\leq C_{D}\widetilde{C}_{1}\left(\inf_{V_{i_{2}}}v+r\left|\left| F\right|\right|_{L^{n}(\widetilde{\Omega})}\right)\] where \[\widetilde{\Omega}=\bigcup_{1\leq i\leq t}U_{i}.\] Iterating \[\inf_{V_{j}}v \leq(C_{D}\widetilde{C}_{1})^{m}\left(\inf_{V_{im}}v+m\;r\left| \left|F\right|\right|_{L^{n}(\widetilde{\Omega})}\right)\] \[=(C_{D}\widetilde{C}_{1})^{m}\left(m\;r\left|\left|F\right|\right| _{L^{n}(\widetilde{\Omega})}\right)\] \[\leq(C_{D}\widetilde{C}_{1})^{t}\left(t\;\operatorname{diam}( \Omega)\left|\left|F\right|\right|_{L^{n}(\widetilde{\Omega})}\right)\] \[=C_{2}\;\operatorname{diam}(\Omega)\left|\left|F\right|\right|_{ L^{n}(\widetilde{\Omega})}\] where, using (3.5), \(C_{2}:=t(C_{D}\widetilde{C}_{1})^{t}\) can be bounded from above by \[C_{2}\leq\frac{|\Omega_{r}|2^{n/2}}{|\mathbb{B}_{r/8}|}(C_{D}\widetilde{C}_{1 })^{\frac{|\Omega_{r}|2^{n/2}}{|\mathbb{B}_{r/8}|}}.\] Observe that, without loss of generality, \(C_{D}\widetilde{C}_{1}\geq 1\). In this way we obtain \[\sum_{i\leq t}\widetilde{C}_{1}^{p}\left(\inf_{V_{i}}+r\left| \left|F\right|\right|_{L^{n}(U_{i})}\right)^{p} \leq t\widetilde{C}_{1}^{p}\left(\inf_{V_{j}}v+\operatorname{diam} (\Omega)\left|\left|F\right|\right|_{L^{n}(\widetilde{\Omega})}\right)^{p}\] \[\leq\widetilde{C}_{2}^{p}\left(\operatorname{diam}(\Omega)\left| \left|F\right|\right|_{L^{n}(\widetilde{\Omega})}\right)^{p} \tag{3.11}\] where \(\widetilde{C}_{2}:=t^{1/p}\widetilde{C}_{1}(C_{2}+1)\). 
Using (3.9), (3.10) and (3.11), it follows \[\fint_{\widehat{\Omega}}v^{p}\leq\widetilde{C}_{2}^{p}\left(\operatorname{ diam}(\Omega)\left|\left|F\right|\right|_{L^{n}(\widetilde{\Omega})}\right)^{p}\] i.e. \[\left(\fint_{\widehat{\Omega}}v^{p}\right)^{1/p}\leq\widetilde{C}_{2}\; \operatorname{diam}(\Omega)\left|\left|F\right|\right|_{L^{n}(\widetilde{ \Omega})}. \tag{3.12}\] Recalling that \(v\equiv S\) in \(\widehat{\Omega}\setminus\Omega\), we get \[\left(\fint_{\widehat{\Omega}}v^{p}\right)^{1/p}\geq\left(\frac{1}{|\widehat {\Omega}|}\int_{\widehat{\Omega}\setminus\Omega}v^{p}\right)^{1/p}\geq\left( \frac{|\widehat{\Omega}\setminus\Omega|}{|\widehat{\Omega}|}\right)^{1/p}S=: \theta^{1/p}S\] and, since \(|F|\leq|f|\chi_{\Omega}\), by (3.12) \[\left(\fint_{\widehat{\Omega}}v^{p}\right)^{1/p}\leq\widetilde{C}_{2}\; \operatorname{diam}(\Omega)\left|\left|F\right|\right|_{L^{n}(\widetilde{ \Omega})}\leq\widetilde{C}_{2}\;\operatorname{diam}(\Omega)\left|\left|f \right|\right|_{L^{n}(\Omega)}.\] Whence \[\sup_{\Omega}w=S\leq C\;\operatorname{diam}(\Omega)\left|\left|f\right|\right| _{L^{n}(\Omega)} \tag{3.13}\] where \(C=\frac{\widetilde{C}_{2}}{\theta^{1/p}}\). In particular, previous inequality implies \[\sup_{\Omega}w\leq C\;\operatorname{diam}(\Omega)\left|\Omega\right|^{1/n} \left|\left|f\right|\right|_{L^{\infty}(\Omega)}.\] For the general case, i.e. removing the smoothness assumption on \(u\) and on the coefficients of \(\mathcal{M}\) up to the boundary, we can proceed by an exhaustion of \(\Omega\) by smooth, relatively compact subdomains, as done in [7, Theorem 2.3]. Indeed, let \(\{U_{\epsilon}\}_{\epsilon>0}\) be a family of relatively compact subdomain of \(\Omega\) with smooth boundary so that \(u\leq\epsilon\) in \(\Omega\setminus U_{\epsilon}\) (recall that \(\limsup_{x\to\partial\Omega}u(x)\leq 0\)) and satisfying \(\bigcup_{\epsilon}U_{\epsilon}=\Omega\) and define \(u_{\epsilon}=u-\epsilon\in C^{2}(\overline{U_{\epsilon}})\). If we consider the following sequences * \(\{u_{k}\}_{k}\subset C^{\infty}(\overline{U_{\epsilon}})\) approximating uniformly \(u\) and its derivatives up to order \(2\); * \(\{A_{k,\epsilon}\}_{k}\subset\operatorname{End}(TM)\) a sequence of positive definite symmetric endomorphisms of the tangent bundle \(TM\) whose coefficients are smooth and converge to the ones of \(A\) in \(W^{1,n}(U_{\epsilon})\); then, defining \(u_{k,\epsilon}:=u_{k}-\epsilon\) and \(F_{k,\epsilon}:=\left(\operatorname{div}\left(A_{k,\epsilon}\cdot\nabla u_{k, \epsilon}\right)+g(B,\nabla u_{k,\epsilon})\right)^{-}\), by (3.13) in previous step we get \[\sup_{U_{\epsilon}}u_{k,\epsilon}\leq C\,\operatorname{diam}(\Omega)\left| \left|F_{k,\epsilon}\right|\right|_{L^{n}(U_{\epsilon})}.\] Thanks to the properties of the sequences defined, we get \[\sup_{U_{\epsilon}}u_{k,\epsilon}\xrightarrow{k}\sup_{U_{\epsilon}}u_{\epsilon}\] and \[F_{k,\epsilon}\xrightarrow{k}F\qquad\text{ in }L^{n}(U_{\epsilon})\] that, together with previous inequality, imply \[\sup_{U_{\epsilon}}u_{\epsilon}\leq C\,\operatorname{diam}(\Omega)\left| \left|F\right|\right|_{L^{n}(U_{\epsilon})},\] i.e. 
\[\sup_{U_{\epsilon}}u\leq C\,\operatorname{diam}(\Omega)\left|\left|f\right|\right|_{L^{n}(U_{\epsilon})}+\epsilon.\]

Letting \(\epsilon\to 0\), thanks to the fact that \(\limsup_{x\to\partial\Omega}u\leq 0\) and \(U_{\epsilon}\to\Omega\), we finally get

\[\sup_{\Omega}u\leq C\,\operatorname{diam}(\Omega)\left|\left|f\right|\right|_{L^{n}(\Omega)}.\]

**Remark 3.8**.: Observe that the constant \(C\) in the previous theorem depends on \(n,\ a,\ b,\ c_{0},\ C_{0}\) and on the family of harmonic neighbourhoods \(\mathcal{W}\) that \(\Omega\) intersects. In particular, by construction if \(\Omega\) and \(\Omega^{\prime}\) are covered by the same family of harmonic neighbourhoods \(\mathcal{W}\), \(\left|\Omega\right|>\left|\Omega^{\prime}\right|\) and \(C\) and \(C^{\prime}\) are the constants given by Theorem 3.2 on \(\Omega\) and \(\Omega^{\prime}\) respectively, then

\[C>C^{\prime}.\]

As a consequence, the constant \(C\) is monotone (increasing) with respect to the inclusion and so we can use the same \(C=C(\Omega)\) for every subdomain \(\Omega^{\prime}\subseteq\Omega\).

**Remark 3.9**.: The explicit expression of the constant \(C\) in (3.4) is the following

\[C=\frac{t^{1/p}2^{n/p}\left[t\left(2^{n(p+1)/p}C_{\mathbb{R}^{n}}C_{1}\right)^{t}+1\right]}{\theta^{1/p}}\]

where, denoting \(r:=r_{h}(\overline{\Omega})\),

* \(p=p(n,r,a,b,c_{0},C_{0})\) and \(C_{1}=C_{1}(n,r,a,b,c_{0},C_{0})\) are the constants given in Theorem 3.3;
* \(C_{\mathbb{R}^{n}}\) is the Euclidean doubling constant;
* \(\theta=1-\frac{|\Omega|}{|\widehat{\Omega}|}\);
* \(t\leq\frac{|\Omega_{r}|2^{n/2}}{|\mathbb{B}_{r/8}|}\).

Observe that in the Euclidean case we have \(r_{h}=+\infty\), implying that if \(\Omega\subset\mathbb{R}^{n}\) is a fixed bounded domain, then we can choose a radius \(R=8\,\operatorname{diam}(\Omega)\) in order to get \(\Omega\subset\mathbb{B}_{R/8}\). By Remark 3.8, we can use the ABP constant of the domain \(\mathbb{B}_{R/8}\) also for the domain \(\Omega\). In particular, thanks to the Euclidean (global) doubling property, the constants \(t\) and \(\theta\) of the domain \(\mathbb{B}_{R/8}\) depend neither on \(\mathbb{B}_{R/8}\) nor on \(\Omega\), while the constants \(p\) and \(C_{1}\) depend on \(n,\ R\) (and hence on \(\operatorname{diam}(\Omega)\)), \(b,c_{0}\) and \(C_{0}\). This means that in case \(M=\mathbb{R}^{n}\) the constant in Theorem 3.2 depends on the domain \(\Omega\) only through its diameter. Moreover, by Remark 3.4, this last dependence on the diameter of \(\Omega\) is avoided in case \(b=0\) (for instance for the Euclidean Laplacian).

### Generalized principal eigenfunction in general bounded domains

As already claimed, the aim of this section is to prove a maximum principle for smooth unbounded domains in general Riemannian manifolds. While in the bounded case the validity of the maximum principle is strictly related to the positivity of the first Dirichlet eigenvalue, in unbounded domains the existence of classical principal eigenelements is not even guaranteed. In this direction, following what was done by Nordmann in [13], we will consider a generalization of the notion of principal eigenvalue (and of the related eigenfunction) in order to extend this relation to unbounded smooth domains.
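To fix the ideas before the general definition, consider \(\mathcal{L}=\Delta\) on the unbounded slab \(\Omega=(0,d)\times\mathbb{R}^{n-1}\subset\mathbb{R}^{n}\): no classical \(L^{2}\) principal Dirichlet eigenfunction exists, yet the function \(u(x)=\sin(\pi x_{1}/d)\) is positive in \(\Omega\), vanishes on \(\partial\Omega\) and satisfies

\[(\Delta+\lambda)u=\left(\lambda-\frac{\pi^{2}}{d^{2}}\right)u\leq 0\qquad\text{whenever }\lambda\leq\frac{\pi^{2}}{d^{2}},\]

so that, according to Definition 3.10 below, \(\lambda_{1}^{-\Delta}(\Omega)\geq\pi^{2}/d^{2}\); one easily checks through the Rayleigh quotient characterization recalled below that equality actually holds.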
**Definition 3.10**.: _The generalized principal Dirichlet eigenvalue of the operator \(\mathcal{L}\) acting on a (possibly nonsmooth) domain \(\Omega\subset M\) is defined as_ \[\lambda_{1}^{-\mathcal{L}}(\Omega):=\sup\{\lambda\in\mathbb{R}\ :\ \mathcal{L}+ \lambda\ admits\ a\ positive\ supersolution\}\] _where \(u\) is said to be a supersolution for the operator \(\mathcal{L}+\lambda\) if \(u\in C^{2}(\overline{\Omega})\) and it satisfies_ \[\left\{\begin{array}{rl}(\mathcal{L}+\lambda)u\leq 0&\mathrm{in}\ \Omega\\ u\geq 0&\mathrm{on}\ \partial\Omega.\end{array}\right.\] Clearly, the previous definition makes sense both in bounded and unbounded domains and in the former case it coincides with the classical notion of principal eigenvalue. Moreover, if \(A^{-1}\cdot B=\nabla\eta\) for a smooth function \(\eta\) (for instance, if \(B\equiv 0\)), then \(\mathcal{L}\) is symmetric on \(L^{2}(\Omega,\mathrm{d}\mathrm{v}_{\eta})\), where \(\mathrm{d}\mathrm{v}_{\eta}=e^{\eta}\ \mathrm{d}\mathrm{v}\), and we have a variational characterization of \(\lambda_{1}\) through the Rayleigh quotient \[\lambda_{1}^{-\mathcal{L}}(\Omega)=\inf_{\begin{subarray}{c}\psi\in H^{1}_{0} (\Omega,\mathrm{d}\mathrm{v}_{\eta})\\ ||\psi||_{L^{2}(\Omega,\mathrm{d}\mathrm{v}_{\eta})}=1\end{subarray}}\left(\int _{\Omega}g(A\cdot\nabla\psi,\nabla\psi)\ \mathrm{d}\mathrm{v}_{\eta}-\int_{\Omega}c\psi^{2}\ \mathrm{d} \mathrm{v}_{\eta}\right).\] The next step consists in proving the existence of a couple of generalized eigenelements. The first result we need is a boundary Harnack inequality, obtained adapting [2, Theorem 1.4] to the Riemannian setting. **Theorem 3.11** (Krylov-Safonov Boundary Harnack inequality).: _Let \((M,g)\) be a complete Riemannian manifold and \(\Omega\subset M\) a bounded domain with possibly nonsmooth boundary. Fix \(x_{0}\in\Omega\) and consider \(G\subset\Omega\cup\Sigma\) compact, where \(\Sigma\) is a smooth open subset of \(\partial\Omega\). Then, there exists a positive constant \(C\), depending on \(x_{0},\ \Omega,\ \Sigma,\ G,\ a,\ b,\ c_{0}\) and \(C_{0}\), so that for every nonnegative function \(u\in W^{2,p}_{loc}(\Omega\cup\Sigma)\), \(p>n\), satisfying_ \[\left\{\begin{array}{ll}\mathcal{L}u=0&a.e.\ \mathrm{in}\ \Omega\\ u>0&\ \mathrm{in}\ \Omega\\ u=0&\ \mathrm{on}\ \Sigma\end{array}\right.\] _we have_ \[u(x)\leq Cu(x_{0})\ \ \ \ \forall x\in G.\] Proof.: Let \(\mathcal{U}:=\{U_{1},...,U_{m}\}\) be a family of local charts of \(M\) intersecting and covering \(\partial\Omega\) and with the property that \(\partial G\cap U_{i}\) is connected for every \(i\). Fix \(\epsilon>0\) small enough so that \(d^{M}(x_{0},\partial\Omega)>2\epsilon\), \[\emptyset\neq\{x\in\Omega\ :\ d(x,\partial\Omega)\in(\epsilon,2\epsilon)\} \subseteq\bigcup_{1\leq i\leq m}U_{i}\] and \[\{x\in\Omega\ :\ d(x,\partial\Omega)>2\epsilon\}\neq\emptyset.\] Let \(\Omega_{\epsilon}\) a smooth subdomain of \(\Omega\) satisfying \[\{x\in\Omega\ :\ d(x,\partial\Omega)>2\epsilon\}\subseteq\Omega_{\epsilon} \subseteq\{x\in\Omega\ :\ d(x,\partial\Omega)>\epsilon\}.\] Clearly, \(\partial\Omega_{\epsilon}\subset\bigcup_{1\leq i\leq m}U_{i}\). 
Now complete \(\mathcal{U}\) to a cover of \(\Omega\) by coordinate neighbourhoods of \(M\)

\[\mathcal{V}=\mathcal{U}\cup\mathcal{U}^{\prime}=\mathcal{U}\cup\{U_{m+1},...,U_{h}\}\]

so that

\[\overline{\Omega}_{\epsilon}\subset\bigcup_{m+1\leq i\leq h}U_{i}\qquad\text{and}\qquad\partial\Omega\cap\left(\bigcup_{m+1\leq i\leq h}U_{i}\right)=\emptyset.\]

Up to considering a larger family \(\mathcal{U}^{\prime}\), we can suppose that for every \(i=m+1,...,h\) there exists an open subset \(W_{i}\Subset U_{i}\) such that

\[\overline{\Omega}_{\epsilon}\subset\bigcup_{m+1\leq i\leq h}W_{i},\qquad\quad\partial\Omega\cap\left(\bigcup_{m+1\leq i\leq h}W_{i}\right)=\emptyset\]

and

\[W_{i}\cap W_{j}\neq\emptyset\quad\Leftrightarrow\quad U_{i}\cap U_{j}\neq\emptyset.\]

Lastly, up to considering a larger family \(\mathcal{U}\) and a smaller \(\epsilon\), we can suppose that for every \(i\in\{1,...,m\}\) there exists a compact subset \(E_{i}\subset\left(U_{i}\cap\overline{\Omega}\right)\) so that

\[\overline{\Omega}\setminus\Omega_{\epsilon}\subset\bigcup_{1\leq i\leq m}E_{i}\]

and every \(E_{i}\) intersects at least one \(W_{j}\). For every \(i=m+1,...,h\) we can apply the Euclidean version of the Krylov-Safonov Harnack inequality, [8, Corollary 8.21], to the couple \(W_{i}\Subset U_{i}\). Let \(C_{i}=C_{i}(n,U_{i},b,c_{0},C_{0},W_{i})>0\) be the corresponding constant and define

\[K:=\max_{m+1\leq i\leq h}C_{i}\geq 1.\]

If \(x\in G\), we have two possible cases:

1. \(x\in G\cap\Omega_{\epsilon}\): in this case we can choose \(U_{i_{1}},...,U_{i_{t}}\in\mathcal{U}^{\prime}\) so that \[x\in W_{i_{1}},\qquad x_{0}\in W_{i_{t}}\qquad\text{and}\qquad W_{i_{q}}\cap W_{i_{q+1}}\neq\emptyset\ \ \forall q=1,...,t-1\] and by the (Euclidean) Krylov-Safonov Harnack inequality, we get \[u(x)\leq\sup_{W_{i_{1}}}u\leq K\inf_{W_{i_{1}}}u\leq K\inf_{W_{i_{1}}\cap W_{i_{2}}}u\leq K\sup_{W_{i_{2}}}u\leq...\leq K^{t}\inf_{W_{i_{t}}}u\leq K^{t}u(x_{0}).\] Since the sequence of neighbourhoods can be chosen with at most \(h-m\) different elements, it follows that \[u(x)\leq\widetilde{K}\ u(x_{0})\] where \(\widetilde{K}:=K^{h-m}\) does not depend on the choice of \(x\in G\cap\Omega_{\epsilon}\).

2. \(x\in G\setminus\Omega_{\epsilon}\): without loss of generality, we can suppose \(x\in U_{1}\). By Theorem 1.4 in [2] applied to \(U_{1}\) and \(E_{1}\), we get \[u(x)\leq B_{1}\ u(z(x))\] where \(B_{1}=B_{1}(n,a,b,c_{0},C_{0},U_{1},E_{1})>1\) and \(z(x)\in U_{1}\cap W_{j}\) for some \(j\geq m+1\), up to slightly enlarging \(W_{j}\) and \(E_{1}\). Retracing what was done in the previous point, we obtain that \[u(x)\leq B_{1}\ u(z(x))\leq B_{1}\sup_{W_{j}}u\leq B_{1}\ \widetilde{K}\ u(x_{0}).\]

Choosing \(B:=\max_{1\leq i\leq m}B_{i}\) and defining \(C:=B\widetilde{K}\geq\widetilde{K}\), we get

\[u(x)\leq C\ u(x_{0})\]

for every \(x\in G\), obtaining the claim.

**Remark 3.12**.: Observe that \(C\) actually depends only on the neighbourhoods that \(G\) intersects and not really on \(G\), i.e. \(C\) is "stable" under small perturbations.

The next stage consists in the construction of a function \(u_{0}\) which vanishes at those points of \(\partial\Omega\) that admit a _barrier_. It will be needed to show that the generalized principal eigenfunction vanishes on smooth portions of \(\partial\Omega\).
**Definition 3.13**.: _We say that \(y\in\partial\Omega\) admits a strong barrier if there exists \(r>0\) and \(h\in W^{2,n}_{loc}(\Omega\cap B_{r}(y))\) which can be extended continuously to \(y\) by setting \(h(y)=0\) and so that_ \[\mathcal{M}h\leq-1.\] **Remark 3.14**.: As proved by Miller in [12], the strong barrier condition at \(y\in\partial\Omega\) is implied by the exterior cone condition in any local chart, i.e. by the fact that in every local chart around \(y\) there exists an exterior truncated cone \(C_{y}\) with vertex at \(y\) and lying outside \(\overline{\Omega}\). In particular, on every smooth sector \(\Sigma\) of \(\partial\Omega\) every point \(y\in\Sigma\) satisfies the (local) exterior cone condition, and thus the strong barrier condition. **Theorem 3.15**.: _Let \((M,g)\) be a complete Riemannian manifold. Given a (possibly nonsmooth) bounded domain \(\Omega\subset M\), there exists \(u_{0}\) positive solution to \(\mathcal{M}u_{0}=-g_{0}\in\mathbb{R}_{<0}\) in \(\Omega\) that can be extended as a continuous function at every point \(y\in\partial\Omega\) admitting a strong barrier by setting \(u_{0}(y)=0\)._ Proof.: Consider \(\Lambda\subset M\) a bounded, open and smooth domain containing \(\overline{\Omega}\) properly and let \(\mathcal{G}\) be the positive Dirichlet Green function on \(\overline{\Lambda}\) associated to the differential operator \(\mathcal{M}-1\). Fixed \(x_{0}\in\Lambda\setminus\overline{\Omega}\), let \(G(\cdot):=\mathcal{G}(x_{0},\cdot)\) so to have \[\left\{\begin{array}{ll}\mathcal{M}G=G&\text{in }\Omega\\ G>0&\text{in }\overline{\Omega}\end{array}\right.\] and define \[g_{0}=\min_{\overline{\Omega}}G\qquad\quad\text{and}\qquad\quad G_{0}=\max_{ \overline{\Omega}}G.\] Consider an exhaustion \(\{H_{j}\}_{j}\) of \(\Omega\) by smooth nested subdomains satisfying \(\overline{H}_{j}\subset H_{j+1}\) and let \(u_{j}\) be the solutions of \[\left\{\begin{array}{ll}\mathcal{M}u_{j}=-g_{0}&\text{in }H_{j}\\ u_{j}=0&\text{on }\partial H_{j}.\end{array}\right.\] In particular, \(u_{j}\in W^{2,p}(H_{j})\) for every \(p>n\) and, by the standard maximum principle, \(\{u_{j}\}_{j}\) is an increasing sequence of positive functions. Moreover \[\mathcal{M}(u_{j}+G)=-g_{0}+G\geq 0\] so, again by maximum principle, it follows that \[u_{j}+G\leq\max_{\partial\Omega_{j}}G\leq G_{0},\] i.e. \(u_{j}\leq G_{0}-G\leq G_{0}\) for every \(j\). Hence there exists a function \(u_{0}\) so that \[u_{j}\rightharpoonup u_{0} \text{in }W^{2,p}(E)\] \[u_{j}\to u_{0} \text{in }C^{1}(E)\] for every \(p>n\) and every \(E\subset\Omega\) compact. Moreover, \(\mathcal{M}u_{0}=-g_{0}\) and \(0<u_{0}\leq G_{0}\) by construction. The next step consists in proving that \(u_{0}\) can be extended continuously to \(0\) at every \(y\in\partial\Omega\) admitting a strong barrier. Fix such a \(y\in\partial\Omega\) admitting a strong barrier, i.e. so that for some \(B_{r}(y)\) there exists in \(U=B_{r}(y)\cap\Omega\) a positive function \(h\in W^{2,n}_{loc}(U)\) satisfying \(\mathcal{M}h\leq-1\) which can be extended continuously to \(y\) by imposing \(h(y)=0\). Without loss of generality, we can suppose \(r<\operatorname{inj}(y)\). 
Let \(h\) be the strong barrier associated to \(y\) and choose \(j\) big enough so that \(V=H_{j}\cap B_{r/2}(y)\neq\emptyset\): choosing \(\epsilon>0\) small so that \[\epsilon\mathcal{M}\left(d(x,y)^{2}\right)\leq\frac{1}{2}\quad\text{ in }U\] the function \(\widetilde{h}=h+\epsilon d(x,y)^{2}\) satisfies \[\mathcal{M}\widetilde{h}\leq-\frac{1}{2}\quad\text{ in }U.\] Moreover, if \(d(x,y)=\frac{r}{2}\) and \(x\in\overline{H}_{j}\), then \[\widetilde{h}(x)\geq\epsilon\frac{r^{2}}{4}=:\delta\] and, up to decrease \(\epsilon\), we can suppose \(\delta\leq 1\) and that the function \(w=G_{0}\frac{\widetilde{h}}{\delta}-u_{j}\) satisfies \[\left\{\begin{array}{ll}\mathcal{M}w\leq 0&\text{in }V\\ w\geq 0&\text{on }\partial V.\end{array}\right.\] By the Maximum Principle, it follows \(w\geq 0\) in \(V\), i.e. \[u_{j}(x)\leq G_{0}\frac{\widetilde{h}(x)}{\delta}\quad\text{ in }V.\] Fixing \(x\in H_{j}\cap B_{r/2}(y)\) and letting \(j\to+\infty\), it follows \[u_{0}(x)\leq G_{0}\frac{\widetilde{h}(x)}{\delta}.\] Since the previous inequality holds for every \(x\in H_{j}\cap B_{r/2}(y)\) and for every \(j\) big enough, by the continuity of \(\widetilde{h}\) in \(y\) the claim follows. **Remark 3.16**.: Theorem 3.15 has been obtained thanks to an adaptation of the argument presented in [4, Section 3]. Unless small details, the structure of the proof remained unchanged with respect to the one by Berestycki, Nirenberg and Varadhan. Finally, we can prove the existence of a generalized principal eigenfunction in any bounded Riemannian domain **Theorem 3.17**.: _Let \((M,g)\) be a complete Riemannian manifold of dimension \(\dim(M)=n\) and consider a (possibly nonsmooth) bounded domain \(\Omega\subset M\). If \(u_{0}\) is the function obtained in Theorem 3.15, then_ 1. _there exists a principal eigenfunction_ \(\phi\) _of_ \(\mathcal{L}\)__ \[\mathcal{L}\phi=-\lambda_{1}\phi\] _so that_ \(\phi\in W^{2,p}_{loc}(\Omega)\) _for every_ \(p<+\infty\)_;_ 2. _normalizing_ \(\phi\) _to have_ \(\phi(x_{0})=1\) _for a fixed_ \(x_{0}\in\Omega\)_, there exists a positive constant_ \(C\)_, depending only on_ \(x_{0},\ \Omega,\ a,\ b,\ c_{0}\) _and_ \(C_{0}\)_, so that_ \(\phi\leq C\)_;_ 3. _there exists a positive constant_ \(E>0\) _so that_ \(\phi\leq Eu_{0}\)_._ **Remark 3.18**.: The proof proceeds (more or less) as in [4, Theorem 2.1]. We present it for completeness. Proof.: Fix \(x_{0}\in\Omega\) and consider a compact subset \(F\subset\Omega\) so that \(x_{0}\in\mathrm{int}\ F\) and \(|\Omega\setminus F|=\delta\), where \(\delta>0\) is a constant (small enough) to be chosen. Let \(\{\Omega_{j}\}_{j}\) be a sequence of relatively compact smooth subdomains of \(\Omega\) with \(F\subset\Omega_{1}\) and satisfying \[\overline{\Omega}_{i}\subset\Omega_{i+1}\ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \bigcup_{i}\Omega_{i}=\Omega.\] By the smoothness of \(\Omega_{j}\), for every \(j\) there exists a couple of principal eigenelements \((\mu_{j},\phi_{j})\) for \(\mathcal{L}\) so that \[\left\{\begin{array}{ll}\mathcal{L}\phi_{j}=-\mu_{j}\phi_{j}&\text{in } \Omega_{j}\\ \phi_{j}>0&\text{in }\Omega_{j}\\ \phi_{j}=0&\text{on }\partial\Omega_{j}\end{array}\right.\] rescaled so that \(\phi_{j}(x_{0})=1\) and with \(\phi_{j}\in W^{1,p}(\Omega_{j})\) for any \(p<+\infty\). Moreover, since \(\phi_{k}>0\) in \(\overline{\Omega}_{j}\) for \(k>j\), by the standard maximum principle it follows that \(\mu_{j}>\mu_{j+1}>\lambda_{1}:=\lambda_{1}^{-\mathcal{L}}(\Omega)\) for every \(j\). 
In particular, by monotonicity \(\{\mu_{j}\}_{j}\) converges to a certain \(\mu\geq\lambda_{1}\). By the standard Harnack inequality applied in \(\Omega_{1}\) it follows that there exists a positive constant \(C=C(n,a,b,c_{0},C_{0},x_{0},\Omega_{1},F)\) so that \[\max_{F}\phi_{j}\leq C\ \phi_{j}(x_{0})=C \tag{3.14}\] for every \(j\geq 1\). Now consider \(U_{j}:=\Omega_{j}\setminus F\) and \(v=\phi_{j}-C\): we have \[\mathcal{M}v=-c\phi_{j}-\mu_{j}\phi_{j}\geq-b\phi_{j}-\mu_{j}\phi_{j}\] and \[\limsup_{x\to\partial U_{j}}v\leq 0.\] Now let \(\Lambda\) be a smooth, bounded domain containing \(\overline{\Omega}\) and let \(C_{\Lambda}\) be the constant given by Theorem 3.2 on \(\Lambda\). Observing that \(\overline{U}_{j}\subset\Lambda\) for every \(j\) by Theorem 3.2 and Remark 3.8 it follows that \[\begin{split}\max_{\overline{U}_{j}}\phi_{j}-C&=\max_{ \overline{U}_{j}}v\\ &\leq C_{\Lambda}\ \mathrm{diam}(\Lambda)\ \left|\left|(b+\mu_{j})\phi_{j} \right|\right|_{L^{n}(U_{j})}\\ &\leq C_{\Lambda}\ \mathrm{diam}(\Lambda)\ (b+\mu_{j})\ \max_{ \overline{U}_{j}}\phi_{j}\ \delta^{\frac{1}{n}}.\end{split} \tag{3.15}\] Let \(B_{r}\) be a ball completely contained in \(F\): by [14, Lemma 6.3] there exists a positive constant \(K\), depending only on \(\mathrm{dim}(M)\) and on the coefficients of \(\mathcal{L}\), so that \[\mu_{j}\leq\frac{K}{r^{2}}.\] Using the previous inequality in (3.15), we get \[\max_{\overline{U}_{j}}\phi_{j}-C\leq C_{\Lambda}\ \mathrm{diam}(\Lambda)\ \left(b+\frac{K}{r^{2}}\right)\ \max_{\overline{U}_{j}}\phi_{j}\ \delta^{\frac{1}{n}}\] and choosing \(\delta\) small enough so that \[C_{\Lambda}\ \mathrm{diam}(\Lambda)\ \left(b+\frac{K}{r^{2}}\right)\ \delta^{\frac{1}{n}}\leq\frac{1}{2}\] we obtain \[\max_{\overline{U}_{j}}\phi_{j}\leq 2C\] that, together with (3.14), implies \[\max_{\overline{\Omega}_{j}}\phi_{j}\leq 2C=:C.\] By interior \(W^{2,p}\) estimates ([8, Theorem 6.2]), it follows that \[\left|\left|\phi_{k}\right|\right|_{W^{2,p}(\Omega_{j})}\leq C_{j}\ \ \ \ \ \ \ \forall k\geq j+1\] implying the existence of a function \(\phi\), positive in \(\Omega\), so that \[\phi_{j}\rightharpoonup\phi \text{in }W^{2,p}_{loc}(\Omega)\] \[\phi_{j}\to\phi \text{in }W^{2,\infty}_{loc}(\Omega).\] By construction, \(\phi\) solves \[\mathcal{L}\phi=-\mu\phi\ \ \ \text{ in }\Omega\] with \(\phi(x_{0})=1\) and \(\phi\leq C\). Moreover, by definition of \(\lambda_{1}\) and by the fact that \(\mu\geq\lambda_{1}\), it follows that \(\mu=\lambda_{1}\), obtaining the claims 1 and 2. Lastly, observing that \[\left\{\begin{array}{ll}\mathcal{M}\phi_{j}=-(\mu_{j}+c)\phi_{j}\geq-(\mu_{j }+b)\phi_{j}&\text{in }\Omega_{j}\\ \phi_{j}=0&\text{on }\partial\Omega_{j}\end{array}\right.\] and recalling that \[\left\{\begin{array}{ll}\mathcal{M}u_{0}=-g&\mbox{in }\Omega\\ u_{0}>0&\mbox{in }\Omega\end{array}\right.\] we get \[\left\{\begin{array}{ll}\mathcal{M}\left(\phi_{j}-\frac{C}{g_{0}}(\mu_{j}^{+}+ b)u_{0}\right)\geq-(\mu_{j}+b)C+(\mu_{j}^{+}+b)C\geq 0&\mbox{in }\Omega_{j}\\ \phi_{j}-\frac{C}{g_{0}}(\mu_{j}^{+}+b)u_{0}<0&\mbox{on }\partial\Omega_{j} \end{array}\right.\] and, by standard maximum principle, \[\phi_{j}\leq\frac{C}{g_{0}}(\mu_{j}^{+}+b)u_{0}\quad\mbox{ in }\Omega_{j}.\] Letting \(j\to\infty\), it follows \[\phi\leq\frac{C}{g_{0}}(\lambda_{1}^{+}+b)u_{0}=Eu_{0}.\] **Remark 3.19**.: Using remark 3.14, Theorem 3.15 and the third point of the previous theorem, we can see that the function \(\phi\) vanishes on every smooth portion of \(\partial\Omega\). 
As a consequence, if we consider a smooth domain \(\Omega\) and \(x_{0}\in\partial\Omega\), then for every \(R>0\) there exists a couple of eigenelements \((\varphi^{R},\lambda_{1}^{R})\) of the following Dirichlet problem

\[\left\{\begin{array}{ll}\mathcal{L}\varphi^{R}=-\lambda_{1}^{R}\varphi^{R}&\mbox{in }\Omega\cap B_{R}(x_{0})\\ \varphi^{R}=0&\mbox{on smooth portions of }\partial(\Omega\cap B_{R}(x_{0})).\end{array}\right.\]

### Generalized principal eigenfunction in smooth unbounded domains

As a consequence of the previous construction, we get the analogue of Theorem 1.4 in [5]. The Euclidean proof can be retraced step by step thanks to Theorem 3.11 and Theorem 3.17. We propose it for completeness.

**Theorem 3.20**.: _Given an unbounded smooth domain \(\Omega\subset M\), for any \(R>0\) consider the truncated eigenvalue problem_

\[\left\{\begin{array}{ll}\mathcal{L}\varphi^{R}=-\lambda_{1}^{R}\varphi^{R}&\mbox{in }\Omega\cap B_{R}\\ \varphi^{R}=0&\mbox{on }\partial(\Omega\cap B_{R})\end{array}\right.\]

_where \(B_{R}=B_{R}(x_{0})\) for a fixed \(x_{0}\in\partial\Omega\). Then:_

1. _for almost every_ \(R>0\) _the couple of eigenelements_ \((\lambda_{1}^{R},\varphi^{R})\) _exists and is well defined, with_ \(\varphi^{R}\) _positive in_ \(\Omega\cap B_{R}\)_;_
2. \(\lambda_{1}^{R}\searrow\lambda_{1}\) _as_ \(R\to+\infty\)_;_
3. \(\varphi^{R}\) _converges in_ \(C^{2,\alpha}_{loc}\) _to some principal eigenfunction_ \(\varphi\) _of_ \(\Omega\)_._

Proof.: By the smoothness of \(\Omega\), for any \(i\in\mathbb{N}\) there exists \(r(i)\geq i\) so that \(\Omega\cap B_{i}\) is contained in a single connected component \(\Omega_{i}\) of \(\Omega\cap B_{r(i)}\). Moreover, we can suppose \(\Omega_{i}\subset\Omega_{i+1}\) for every \(i\). By [1], it follows that

\[\lim_{i\to\infty}\lambda_{1}^{-\mathcal{L}}(\Omega_{i})=\lambda_{1}^{-\mathcal{L}}(\Omega).\]

Now fix \(x_{1}\in\Omega_{1}\) and let \(\varphi^{i}\) be the generalized principal eigenfunction of \(-\mathcal{L}\) in \(\Omega_{i}\), obtained by Theorem 3.17, normalized so that \(\varphi^{i}(x_{1})=1\). For fixed \(i>j\in\mathbb{N}\), since \(\varphi^{i}\in W^{2,p}(\Omega\cap B_{j})\) for every \(p<+\infty\) and vanishes on \(\partial\Omega\cap B_{j}\), by Theorem 3.11 with \(\Omega=\Omega_{j+1}\), \(\Sigma=\partial\Omega\cap B_{j+1}\) and \(G=\overline{\Omega\cap B_{j}}\), it follows that there exists a positive constant \(C_{j}\) so that

\[\sup_{\Omega\cap B_{j}}\varphi^{i}\leq C_{j}\ \varphi^{i}(x_{1})=C_{j}\qquad\forall i>j.\]

By [8, Theorem 9.13] it follows that \(\{\varphi^{i}\}_{i>j}\) are uniformly bounded in \(W^{2,p}(\Omega\cap B_{j-1/2})\) for every \(p<+\infty\). Thus, up to a subsequence

\[\varphi^{i}\stackrel{{ i}}{{\rightharpoonup}}\phi_{j}\qquad\text{in}\ W^{2,p}(\Omega\cap B_{j-1/2})\ \ \forall p<+\infty\]

and, by [8, Theorem 7.26],

\[\varphi^{i}\xrightarrow{\ i\ }\phi_{j}\qquad\text{in}\ C^{1}(\overline{\Omega}\cap B_{j-1})\]

to a nonnegative function \(\phi_{j}\) that solves

\[\left\{\begin{array}{ll}\mathcal{L}\phi_{j}=-\lambda_{1}^{-\mathcal{L}}(\Omega)\phi_{j}&\text{a.e. in }\Omega\cap B_{j-1}\\ \phi_{j}=0&\text{on }\partial\Omega\cap B_{j-1}.\end{array}\right.\]

By construction, \(\phi_{j}(x_{1})=1\) and so \(\phi_{j}\) is positive in \(\Omega\cap B_{j-1}\) by the strong maximum principle.
Using a diagonal argument, we can extract a subsequence \(\{\varphi^{i_{k}}\}_{i_{k}}\) converging to a positive function \(\varphi\) that is a solution of the above problem for all \(j>1\). ### Maximum principle in smooth unbounded domains Once that the existence of the couple of (generalized) principal eigenelements in smooth unbounded domains has been proved, we can proceed to show the validity of the maximum principle under the assumption that the generalized principal eigenvalue is positive. In what follows we consider an operator \(\mathcal{L}\) of the form (3.1) and we assume that there exists a function \(\eta:\Omega\to\mathbb{R}\), \(\eta\in C^{1}(\Omega)\) so that \[\nabla\eta=A^{-1}\cdot B.\] Before proving the main result of this section, we introduce two technical lemmas **Lemma 3.21**.: _Let \((M,g)\) be a Riemannian manifold and \(\Omega\subset M\) a (possibly unbounded) smooth domain. If \(v\) satisfies_ \[\left\{\begin{array}{ll}\mathcal{L}v\geq 0&\text{in}\ \Omega\\ v\leq 0&\text{on}\ \partial\Omega\end{array}\right.\] _and \((\lambda_{1},\varphi)\) are generalized principal eigenelements of \(\mathcal{L}\) on \(\Omega\) with Dirichlet boundary conditions, defining \(\sigma:=\frac{v}{\varphi}\) we get_ \[\operatorname{div}\left(\varphi^{2}e^{\eta}A\cdot\nabla\sigma\right)\geq \lambda_{1}e^{\eta}\sigma\varphi^{2}\ \ \ \ \ \ \ \ \text{in}\ \Omega \tag{3.16}\] _and_ \[\sigma_{+}\varphi^{2}g(\nu,A\cdot\nabla\sigma)=0\hskip 28.452756pt\text{on } \partial\Omega \tag{3.17}\] _where \(\sigma_{+}=\max(0,\sigma)\). Since \(\varphi=0\) at \(\partial\Omega\), condition (3.17) must be understood as the limit when approaching the boundary with respect to the direction \(A\cdot\nu\), where \(\nu\) is the outward pointing unit vector field normal to \(\partial\Omega\)._ Proof.: By the assumptions, it clearly follows \[\operatorname{div}\left(e^{\eta}A\cdot\nabla v\right)=e^{\eta}[\operatorname{ div}\left(A\cdot\nabla v\right)+g(B,\nabla v)]\] that, together with the fact that \(v\) is a subsolution, implies \[\operatorname{div}\left(e^{\eta}A\cdot\nabla v\right)+e^{\eta}cv=e^{\eta} \mathcal{L}v\geq 0.\] Moreover, since \(\varphi\) is a principal eigenfunction, we get \[\operatorname{div}\left(e^{\eta}A\cdot\nabla\varphi\right)+c\ e^{\eta} \varphi=-\lambda_{1}e^{\eta}\varphi,\] that, using previous inequality, implies \[\operatorname{div}\left(\varphi^{2}e^{\eta}A\cdot\nabla\sigma\right)\geq \underbrace{e^{\eta}\left[g(\nabla\varphi,A\cdot\nabla v)-g(\nabla v,A\cdot \nabla\varphi)\right]}_{=0\text{ by the symmetry of }A}+v\lambda_{1}e^{\eta}\varphi\] obtaining (3.16). Now let \(x_{0}\in\partial\Omega\) and set \(x_{\epsilon}:=\exp_{x_{0}}(-\epsilon A(x_{0})\cdot\nu(x_{0}))\) for \(\epsilon>0\) small enough, where \(\nu\) is the outward pointing unit vector field normal to \(\partial\Omega\). Recalling that \(v\leq 0\) at \(\partial\Omega\), we have two possible cases: 1. \(\sigma(x_{\epsilon})\leq 0\) as \(\epsilon\) becomes small: then, \(\sigma^{+}(x_{\epsilon})=0\) and thus (3.17) trivially holds in the sense of the limit for \(x\) approaching the boundary of \(\Omega\) along the direction \(A(x_{0})\cdot\nu(x_{0})\). 2. 
\(v(x_{0})=0\) and \(v(x_{\epsilon_{n}})>0\) for a sequence \(\epsilon_{n}\xrightarrow{n}0\): in this case \[g(A(x_{0})\cdot\nu(x_{0}),\nabla v(x_{0}))\leq 0\] and, by the standard Hopf's lemma, \[g(A(x_{0})\cdot\nu(x_{0}),\nabla\varphi(x_{0}))\\ =g(A(x_{0})\cdot\nu(x_{0}),\nu(x_{0}))\ g(\nu(x_{0}),\nabla\varphi (x_{0}))>0,\] obtaining \[\lim_{\epsilon\to 0}\sigma(x_{\epsilon})=\frac{g(A(x_{0})\cdot\nu(x_{0}), \nabla v(x_{0}))}{g(A(x_{0})\cdot\nu(x_{0}),\nabla\varphi(x_{0}))}\leq 0.\] From the definition of \(\sigma\) and the fact that \(v(x_{0})\leq 0\), it follows that \[\varphi^{2}(x_{\epsilon})\sigma^{+}(x_{\epsilon})g\left(\nu(x_{0 }),A(x_{0})\cdot\nabla\sigma(x_{\epsilon})\right)\\ =[g\left(A(x_{0})\cdot\nu(x_{0}),\nabla v(x_{\epsilon})\right)\\ -\sigma(x_{\epsilon})g\left(A(x_{0})\cdot\nu(x_{0}),\nabla \varphi(x_{\epsilon})\right)]\underbrace{v^{+}(x_{\epsilon})}_{\xrightarrow{ \epsilon\to 0}0}\\ \xrightarrow{\epsilon\to 0}0\] implying the claim. Now consider the sequence of cut-off functions \(\{\rho_{k}\}_{k}\subset C^{\infty}_{c}(M)\) satisfying \[\left\{\begin{array}{l}0\leq\rho_{k}\leq 1\\ \left.||\nabla\rho_{k}||_{L^{\infty}(M)}\xrightarrow{k}0\\ \rho_{k}\nearrow 1.\end{array}\right. \tag{3.18}\] For a reference, see [17]. Without loss of generality we can suppose \(\{\rho_{k}\neq 0\}\cap\partial\Omega\neq\emptyset\) for every \(k\). **Lemma 3.22**.: _Let \((M,g)\) be a Riemannian manifold and \(\Omega\subset M\) a (possibly unbounded) smooth domain. Supposing \(\lambda_{1}:=\lambda_{1}^{-\mathcal{L}}(\Omega)\geq 0\), we have_ \[\lambda_{1}\int_{\Omega}\rho_{k}^{2}e^{\eta}(v^{+})^{2}\,\,\mathrm{dv}\leq\int _{\Omega}g(\nabla\rho_{k},A\cdot\nabla\rho_{k})e^{\eta}(v^{+})^{2}\,\,\mathrm{ dv}\] _for every \(k\), where \(\{\rho_{k}\}_{k}\subset C^{\infty}_{c}(M)\) is a sequence of cut-off functions satisfying (3.18) and so that \(\{\rho_{k}\neq 0\}\cap\partial\Omega\neq\emptyset\)._ Proof.: Fix \(k\in\mathbb{N}\) and let \(U_{k}\subset\subset M\) be an open domain so that * \(\mathrm{supp}(\rho_{k})\subset U_{k}\); * \(\Sigma_{k}:=U_{k}\cap\partial\Omega\) is smooth (possibly not connected). Let \(\nu\) be the outward pointing unit vector field normal to \(\partial\Omega\) and, for \(\epsilon>0\) small enough, define \[S_{k,\epsilon}:=\left\{y\in U_{k}\cap\Omega\ :\ y=\exp_{x}\left(-\epsilon A(x) \cdot\nu(x)\right)\text{ for }x\in\partial\Omega\right\}.\] Next step consists in proving that there exists \(\epsilon_{k}>0\) so that \(S_{k,\epsilon}\) is a (possibly not connected) smooth hypersurface of \(\Omega\) for every \(0\leq\epsilon\leq\epsilon_{k}\). To this aim, let \(p\in M\) and define \(O_{p}\subset T_{p}M\) as the set of vectors \(X_{p}\) such that the length \(l_{X_{p}}\) of the geodesic whose initial data is \((p,X_{p})\) is greater than \(1\). Observe that if \(\alpha\in\mathbb{R}_{>0}\), then \(l_{\alpha X_{p}}=\alpha^{-1}l_{X_{p}}\) and hence \[X_{p}\in O_{p}\quad\Rightarrow\quad tX_{p}\in O_{p}\ \forall t\in(0,1].\] Set \(O:=\cup_{p\in M}O_{p}\) and observe that the exponential map is smooth on \(O\) ([16, Lemma 5.2.3]). Now fix \(p\in\partial\Omega\). 
Since \(A(p)\) is nonsingular and linear, the differential of the map \(\exp_{p}\circ A(p):O_{p}\cap N_{p}\partial\Omega\to M\) evaluated at \(0_{p}\in O_{p}\) is nonsingular and it is given by

\[d_{0_{p}}(\exp_{p}\circ A(p))=\underbrace{d_{0_{p}}\exp_{p}}_{=Id}\circ\,d_{0_{p}}A(p)=A(p).\]

Retracing the proofs of Proposition 5.5.1 and Corollary 5.5.3 in [16], we obtain that there exists an open neighbourhood \(W\) of the zero section in \(N\partial\Omega\) (the normal bundle of \(\partial\Omega\)) on which \(F:=\exp\circ A\) is a diffeomorphism onto its image. In particular, there exists a continuous function \(\epsilon:\partial\Omega\to\mathbb{R}_{>0}\) so that

\[(p,-t\nu(p))\in W\quad\forall t\in[0,\epsilon(p)]\]

(see the proof of [16, Corollary 5.5.2]). Now consider a neighbourhood \(V_{k}\subset\subset M\) of \(U_{k}\) that intersects \(\partial\Omega\) smoothly and so that for

\[\epsilon_{k}:=\min_{p\in V_{k}\cap\partial\Omega}\epsilon(p)\]

we have

\[Z_{k,\epsilon}:=\{(p,-\epsilon\nu(p))\ :\ p\in V_{k}\cap\partial\Omega\}\subset W\quad\forall\epsilon\in[0,\epsilon_{k}].\]

Moreover, up to enlarging \(V_{k}\), we have

\[S_{k,\epsilon}=(\exp\circ A)\,(Z_{k,\epsilon})\cap U_{k}.\]

Since \(V_{k}\cap\partial\Omega\) (and hence \(Z_{k,\epsilon}\)) is smooth and \((\exp\circ A)\,\big{|}_{Z_{k,\epsilon}}\) is a diffeomorphism onto its image, it follows that \(S_{k,\epsilon}=(\exp\circ A)\,(Z_{k,\epsilon})\cap U_{k}\) is a smooth (possibly not connected) hypersurface for every \(\epsilon\in[0,\epsilon_{k}]\). Now define

\[\Omega_{\epsilon,k}:=[\Omega\cap U_{k}]\setminus\bigcup_{0<t<\epsilon}S_{k,t}\]

and, up to decreasing \(\epsilon_{k}\), suppose

\[\Omega_{\epsilon,k}\neq\emptyset\quad\forall\epsilon\in[0,\epsilon_{k}].\]

By construction

\[\bigcup_{0<\epsilon<\epsilon_{k}}\Omega_{\epsilon,k}=\Omega\cap U_{k}.\]

Multiplying (3.16) by \(\sigma^{+}\rho_{k}^{2}\) and integrating over \(\Omega_{\epsilon,k}\), by the divergence theorem we get

\[\int_{\partial\Omega_{\epsilon,k}}\sigma^{+}\rho_{k}^{2}e^{\eta}\varphi^{2}g(\nu,A\cdot\nabla\sigma)-\int_{\Omega_{\epsilon,k}}g\left(\nabla\left(\sigma^{+}\rho_{k}^{2}\right),A\cdot\nabla\sigma\right)e^{\eta}\varphi^{2}\geq\lambda_{1}\int_{\Omega_{\epsilon,k}}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}.\]

Observe that

\[\int_{\partial\Omega_{\epsilon,k}}\sigma^{+}\rho_{k}^{2}e^{\eta}\varphi^{2}g(\nu,A\cdot\nabla\sigma)=\int_{S_{k,\epsilon}\cap\operatorname{supp}(\rho_{k})}\sigma^{+}\rho_{k}^{2}e^{\eta}\varphi^{2}g(\nu,A\cdot\nabla\sigma)\]

since \(\rho_{k}\equiv 0\) on \(\partial\Omega_{\epsilon,k}\setminus(S_{k,\epsilon}\cap\operatorname{supp}(\rho_{k}))\). Moreover,

\[g\left(\nabla\left(\rho_{k}^{2}\sigma^{+}\right),A\cdot\nabla\sigma\right)\geq-g\left(\nabla\rho_{k},A\cdot\nabla\rho_{k}\right)(\sigma^{+})^{2},\]

obtaining

\[\int_{\partial\Omega_{\epsilon,k}}\sigma^{+}\rho_{k}^{2}e^{\eta}\varphi^{2}g(\nu,A\cdot\nabla\sigma)+\int_{\Omega_{\epsilon,k}}g\left(\nabla\rho_{k},A\cdot\nabla\rho_{k}\right)(\sigma^{+})^{2}e^{\eta}\varphi^{2}\geq\lambda_{1}\int_{\Omega_{\epsilon,k}}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}. \tag{3.19}\]

The next step is to study the behaviour of the previous integrals as \(\epsilon\to 0\).
Since \[0\leq\lambda_{1}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}\chi_{\Omega_{k,\epsilon}}\leq\lambda_{1}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}\] and \[\lambda_{1}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}\chi_{\Omega_{k,\epsilon}}\to\lambda_{1}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}\quad\text{a.e. in $\Omega$ as $\epsilon\to 0$},\] by the dominated convergence theorem we get \[\lambda_{1}\int_{\Omega_{k,\epsilon}}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}=\lambda_{1}\int_{\Omega}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}\chi_{\Omega_{k,\epsilon}}\xrightarrow{\epsilon\to 0}\lambda_{1}\int_{\Omega}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2}. \tag{3.21}\] Similarly, using the fact that \(A\) is positive definite, we obtain \[\int_{\Omega_{k,\epsilon}}g\left(\nabla\rho_{k},A\cdot\nabla\rho_{k}\right)(\sigma^{+})^{2}e^{\eta}\varphi^{2}\xrightarrow{\epsilon\to 0}\int_{\Omega}g\left(\nabla\rho_{k},A\cdot\nabla\rho_{k}\right)(\sigma^{+})^{2}e^{\eta}\varphi^{2}. \tag{3.22}\] Lastly, for \(F:=\sigma^{+}\rho_{k}^{2}e^{\eta}\varphi^{2}g(\nu,A\cdot\nabla\sigma)\) we have \[\int_{\partial\Omega_{k,\epsilon}}F(y)=\int_{S_{k,\epsilon}}F(y)=\int_{\partial\Omega}F\left(\exp_{x}(-\epsilon A(x)\cdot\nu(x))\right)\] and for every \(x\in\partial\Omega\) \[F\left(\exp_{x}(-\epsilon A(x)\cdot\nu(x))\right)\xrightarrow{\epsilon\to 0}0\] by (3.17). Using the dominated convergence theorem, we get \[\int_{\partial\Omega_{k,\epsilon}}\sigma^{+}\rho_{k}^{2}e^{\eta}\varphi^{2}g(\nu,A\cdot\nabla\sigma)=\int_{\partial\Omega_{k,\epsilon}}F(y)\xrightarrow{\epsilon\to 0}0. \tag{3.23}\] Letting \(\epsilon\to 0\) in (3.19) and using (3.21), (3.22) and (3.23), it follows that \[\int_{\Omega}g\left(\nabla\rho_{k},A\cdot\nabla\rho_{k}\right)(\sigma^{+})^{2}e^{\eta}\varphi^{2}\geq\lambda_{1}\int_{\Omega}e^{\eta}\varphi^{2}(\sigma^{+})^{2}\rho_{k}^{2},\] obtaining the claim, since \(\sigma^{+}\varphi=v^{+}\). We are finally ready to prove the main theorem of this section. **Theorem 3.23** (Unbounded Maximum Principle).: _Let \((M,g)\) be a complete Riemannian manifold and \(\Omega\subset M\) a (possibly unbounded) smooth domain. If \(\lambda_{1}^{-\mathcal{L}}(\Omega)>0\), then every function \(u\in C^{2}(\overline{\Omega})\) that satisfies_ \[\left\{\begin{array}{ll}\mathcal{L}u\geq 0&\text{in }\Omega\\ u\leq 0&\text{on }\partial\Omega\\ \sup_{\Omega}u<+\infty\end{array}\right.\] _is nonpositive._ Proof.: Let \(u\) be an \(\mathcal{L}\)-subsolution with \(u\leq 0\) on \(\partial\Omega\) and suppose by contradiction that \(u^{+}\not\equiv 0\). By Lemma 3.22 \[\lambda_{1}\leq\frac{\int_{\Omega}g\left(\nabla\rho_{k},A\cdot\nabla\rho_{k}\right)e^{\eta}(u^{+})^{2}}{\int_{\Omega}\rho_{k}^{2}e^{\eta}(u^{+})^{2}}.\] Now consider the bounded function \(w=e^{\eta/2}u^{+}\). We get \[\frac{g(\nabla\rho_{k},A\cdot\nabla\rho_{k})w^{2}}{\int_{\Omega}\rho_{k}^{2}w^{2}}\leq C_{0}\frac{g(\nabla\rho_{k},\nabla\rho_{k})w^{2}}{\int_{\Omega}\rho_{k}^{2}w^{2}}\leq C_{0}\frac{||\nabla\rho_{k}||_{L^{\infty}(M)}^{2}\,w^{2}}{\int_{\Omega}\rho_{k}^{2}w^{2}}.\] Since \(||\nabla\rho_{k}||_{L^{\infty}(M)}\xrightarrow{k}0\), up to extracting a subsequence we can suppose \(||\nabla\rho_{k}||_{L^{\infty}(M)}\searrow 0\), obtaining that the sequence \[\left\{\frac{||\nabla\rho_{k}||_{L^{\infty}(M)}^{2}\,w^{2}}{\int_{\Omega}\rho_{k}^{2}w^{2}}\right\}_{k}\] is nonincreasing and converges to \(0\) almost everywhere. 
By the monotone convergence theorem, we get \[\lambda_{1}\leq\int_{\Omega}\frac{g(\nabla\rho_{k},A\cdot\nabla\rho_{k})e^{\eta}(u^{+})^{2}}{\int_{\Omega}\rho_{k}^{2}e^{\eta}(u^{+})^{2}}\leq C_{0}\int_{\Omega}\frac{||\nabla\rho_{k}||_{L^{\infty}(M)}^{2}\,w^{2}}{\int_{\Omega}\rho_{k}^{2}w^{2}}\xrightarrow[k\to+\infty]{}0\] obtaining a contradiction. ## 4. Some applications of the maximum principle in unbounded domains Now we are going to apply Theorem 3.23 to generalize the symmetry results contained in [6]. ### Strongly stable solutions in homogeneous domains To start, consider a complete Riemannian manifold \((M,g)\). We recall that an _isoparametric domain_ \(\Omega\subseteq M\) is a domain endowed with a singular Riemannian foliation \(\overline{\Omega}=\bigcup_{t}\Sigma_{t}\) whose regular leaves are connected parallel hypersurfaces with constant mean curvature \(H^{\Sigma_{t}}\). Now let \(\Psi:M\to\mathbb{R}\) be a smooth function and consider the weighted Riemannian manifold \(M_{\Psi}:=(M,g,\mathrm{d}\mathrm{v}_{\Psi})=(M,g,e^{\Psi}\mathrm{d}\mathrm{v})\). We say that \(\Omega\subseteq M_{\Psi}\) is a \(\Psi\)-_isoparametric domain_ if \(\overline{\Omega}\) is foliated by parallel hypersurfaces \(\Sigma_{t}\) of constant weighted mean curvature, i.e. so that \[H^{\Sigma_{t}}_{\Psi}=H^{\Sigma_{t}}-g(\nabla\Psi,\vec{\nu})\equiv const.\] where \(\vec{\nu}\) is the unit vector field normal to \(\Sigma_{t}\). Lastly, we say that \(\Omega\subseteq M\) is a _homogeneous domain_ if \(\Omega\) is an isoparametric domain whose regular leaves are orbits of the action of a closed subgroup of \(\mathrm{Iso}_{0}(M)\), the identity component of \(\mathrm{Iso}(M)\). **Definition 4.1** (\(\Psi\)-homogeneous domain).: _Given a weighted Riemannian manifold \(M_{\Psi}\), we say that \(\Omega\subseteq M_{\Psi}\) is a \(\Psi\)-homogeneous domain if it is a \(\Psi\)-isoparametric domain and a homogeneous domain simultaneously._ For further details about isoparametric and homogeneous domains, see [6]. We only recall that * given a homogeneous domain \(\overline{\Omega}\) of a complete Riemannian manifold \(M\), there always exists a (finitely generated) integral distribution \(\{X_{1},...,X_{k}\}\) of Killing vector fields of \(M\) spanning pointwise every tangent space to all leaves \(\Sigma_{t}\) of the foliation of \(\overline{\Omega}\); * if \(\overline{\Omega}\) is homogeneous and \(\Psi:M\to\mathbb{R}_{>0}\) is a symmetric (at least on \(\overline{\Omega}\)) smooth weight, then the symmetry of \(\Psi\) turns \(\overline{\Omega}\) into a \(\Psi\)-homogeneous domain. Before proceeding with the first symmetry result of this section, we recall that on the weighted manifold \(M_{\Psi}\) we have a natural counterpart to the standard Laplacian. 
It is the _weighted Laplacian_, also called \(\Psi\)_-Laplacian_, which is defined by the formula \[\Delta_{\Psi}u=e^{\Psi}\mathrm{div}(e^{-\Psi}\nabla u)=\Delta u-g(\nabla\Psi,\nabla u).\] We also recall the following definition. **Definition 4.2**.: _The function \(u\in C^{3}(\Omega)\cap C^{1}(\overline{\Omega})\) is said to be a stable (respectively strongly stable) solution to (4.1) if_ \[\lambda_{1}^{-\Delta_{\Psi}+f^{\prime}(u)}(\Omega):=\inf_{\begin{subarray}{c}\varphi\in C^{\infty}_{c}(\Omega),\\ \varphi\neq 0\end{subarray}}\frac{\int_{\Omega}\left(|\nabla\varphi|^{2}+f^{\prime}(u)\varphi^{2}\right)\mathrm{dv}_{\Psi}}{\int_{\Omega}\varphi^{2}\mathrm{dv}_{\Psi}}\geq 0\ \ \ \ (\mathrm{resp.}\ >0).\] **Definition 4.3**.: _If \(\overline{\Omega}\) is a weighted \(\Psi\)-homogeneous domain with soul \(P\) inside the weighted manifold \(M_{\Psi}\), define \(d:M\to\mathbb{R}_{\geq 0}\) by \(x\mapsto\mathrm{dist}(x,P)\). A function \(u\) on \(\overline{\Omega}\) is said to be_ * symmetric _if there exists a function_ \(\widehat{u}:\mathbb{R}\to\mathbb{R}\) _so that_ \[u(x)=\widehat{u}(d(x));\] * locally symmetric _if_ \(u\in C^{1}(\overline{\Omega})\) _and_ \(X(u)\equiv 0\) _for any smooth vector field_ \(X\in\mathcal{D}\)_._ By [6, Lemma 3.7], these notions of symmetry coincide in our \(\Psi\)-homogeneous setting. The first theorem, stated below, provides an adaptation of the symmetry result [6, Theorem 5.1] to (possibly) noncompact \(\Psi\)-homogeneous domains. To achieve this goal we make use of Theorem 3.23 in order to replace the nodal domain theorem used by the author and S. Pigola in [6]. However, this leads to more restrictive assumptions on the solution, namely that it has to be _strongly stable_. **Theorem 4.4**.: _Let \(\overline{\Omega}\) be a (possibly noncompact) \(\Psi\)-homogeneous domain with soul \(P\) inside the weighted manifold \(M_{\Psi}\). Moreover, assume that \(\Psi\) is symmetric (at least on \(\overline{\Omega}\)) and denote by \(\mathcal{D}=\{X_{1},...,X_{k}\}\) the integrable distribution of Killing vector fields associated to the foliation of \(\overline{\Omega}\)._ _Then, every strongly stable solution \(u\in C^{3}(\Omega)\cap C^{1}(\overline{\Omega})\cap W^{1,1}(\Omega)\) to_ \[\left\{\begin{array}{ll}\Delta_{\Psi}u=f(u)&\mbox{in }\Omega\\ u=c_{j}&\mbox{on }(\partial\Omega)_{j}\end{array}\right. \tag{4.1}\] _so that_ \[\sup_{\Omega}X_{\alpha}(u)<+\infty\quad\text{and}\quad u|X_{\alpha}|\in L^{1}(\Omega,\mathrm{dv}_{\Psi})\quad\text{for every }\alpha\in A\] _is symmetric._ **Remark 4.5**.: In [6, Theorem 5.1] the authors proved a symmetry result for (regular enough) stable solutions to \[\left\{\begin{array}{ll}\Delta_{\Psi}u=f(u)&\mbox{in }\Omega\\ u=c_{j}&\mbox{on }(\partial\Omega)_{j}\end{array}\right.\] in case \(\overline{\Omega}\) is a compact \(\Psi\)-homogeneous domain with associated Killing distribution \(\{X_{\alpha}\}_{\alpha\in A}\) and the weight \(\Psi\) satisfies the compatibility condition \[g(X_{\alpha},\nabla\Psi)\equiv const.\quad\text{on }\Omega\quad\forall\alpha\in A.\] In fact, the preceding compatibility condition implies that the weight has to be symmetric (at least on \(\Omega\)). 
Indeed, if \(X_{\alpha}\in\mathcal{D}\), denoting \(C_{\alpha}:=g(X_{\alpha},\nabla\Psi)\) we have \[\int_{\Omega}\text{div}(\Psi X_{\alpha})\ \text{dv}=\int_{\Omega}\underbrace{g(X_{\alpha},\nabla\Psi)}_{C_{\alpha}}\ \text{dv}+\int_{\Omega}\Psi\underbrace{\text{div}\,(X_{\alpha})}_{=0}\ \text{dv}=C_{\alpha}|\Omega|,\] while, by the divergence theorem, \[\int_{\Omega}\text{div}(\Psi X_{\alpha})\ \text{dv}=\int_{\partial\Omega}\Psi\ \underbrace{g(X_{\alpha},\nu)}_{=0}\ \text{da}=0,\] where \(\nu\) denotes the unit vector field normal to \(\partial\Omega\). Putting together the previous equalities, we obtain \(C_{\alpha}=0\) for every \(\alpha\in A\). This exactly means that \(X_{\alpha}(\Psi)\equiv 0\) for every \(\alpha\in A\), i.e. that \(\Psi\) is locally symmetric and hence symmetric. Proof of Theorem 4.4.: Let \(X=X_{j}\in\mathcal{D}\) and define \[v:=X(u).\] Since \(u\) is locally constant on \(\partial\Omega\) and \(X|_{\partial\Omega}\) is tangential to \(\partial\Omega\), we have \[v=0\quad\text{ on }\partial\Omega.\] By [6, Lemma 5.4] \[\Delta_{\Psi}v=f^{\prime}(u)v,\] implying that \(v\in C^{2}(\Omega)\) is a solution of \[\left\{\begin{array}{ll}\left(\Delta_{\Psi}-f^{\prime}(u)\right)v=0&\text{ in }\Omega\\ v=0&\text{on }\partial\Omega\\ \sup_{\Omega}v<+\infty.\end{array}\right.\] and, since \(\lambda_{1}^{-\Delta_{\Psi}+f^{\prime}(u)}(\Omega)>0\), by Theorem 3.23 \[v\leq 0\quad\text{ in }\Omega. \tag{4.2}\] Let \(Z:=uX\): since \(X\) is Killing, it follows that \(\text{div}X=0\), implying \[\text{div}_{\Psi}Z=e^{\Psi}\text{div}(e^{-\Psi}Z)=v-\underbrace{g(\nabla\Psi,X)}_{=0}u=v\quad\in L^{1}(M,\text{dv}_{\Psi})\] and, by the fact that \(X_{x}\) is tangential to \(\Sigma_{d(x)}\), \[g(Z,\nu)=0\] for \(\nu\) the unit vector field normal to \(\partial\Omega\). Applying Gaffney's version of the Stokes theorem ([6, Theorem 4.8]), we get \[\int_{\Omega}v\ \mathrm{dv}_{\Psi}=\int_{\Omega}\mathrm{div}_{\Psi}Z\ \mathrm{dv}_{\Psi}=\int_{\partial\Omega}g(Z,\nu)\ \mathrm{da}_{\Psi}=0,\] which, together with (4.2), implies \(v=0\) in \(\overline{\Omega}\). We have thus proved that \(X_{\alpha}(u)\equiv 0\) in \(\overline{\Omega}\) for every \(\alpha\in A\). Thanks to the fact that \(\mathcal{D}\) generates every tangent space to all leaves, it follows that \(u\) is locally symmetric, and hence symmetric, on \(\overline{\Omega}\). ### Strongly stable solutions in non-homogeneous domains in warped product manifolds Now consider a weighted warped product manifold \[M_{\Psi}=(I\times_{\sigma}N)_{\Psi}\] where \(I\subseteq\mathbb{R}\) is an interval, \((N,g^{N})\) is a (possibly noncompact) Riemannian manifold without boundary and \(\Psi\) is a smooth weight function of the form \[\Psi(r,\xi)=\Phi(r)+\Gamma(\xi)\] for \((r,\xi)\in I\times N\). The second result we want to deal with concerns the case when the domain is an annulus \(\overline{A}(r_{1},r_{2})\subseteq M\) and there are not enough Killing vector fields tangential to \(N\) (and thus there are not enough local isometries acting on the leaves of the annulus). Despite this lack of symmetries of the domain, in [6, Theorem 6.5] the authors showed that, requiring the finiteness of \(\mathrm{vol}_{\Gamma}(N)\), some potential theoretic tools can be used to recover a symmetry result under a stability-like assumption on the solution. 
In more detail, they showed that if \(f^{\prime}(t)\leq 0\) and \(u\) is a solution to \[\left\{\begin{array}{ll}\Delta_{\Psi}u=f(u)&\mbox{in }A(r_{1},r_{2})\\ u\equiv c_{1}&\mbox{on }\{r_{1}\}\times N\\ u\equiv c_{2}&\mbox{on }\{r_{2}\}\times N\end{array}\right.\] so that \(||u||_{C^{2}_{rad}}<+\infty\) and \(f^{\prime}(u)\geq-B\) for some nonnegative constant \(B\) satisfying \[0\leq B<\left(\int_{r_{1}}^{r_{2}}\frac{\int_{r_{1}}^{s}e^{-\Phi(z)}\sigma^{m-1}(z)\ \mathrm{d}z}{e^{-\Phi(s)}\sigma^{m-1}(s)}\ \mathrm{d}s\right)^{-1}, \tag{4.3}\] then \(u(r,\xi)=\widehat{u}(r)\) is symmetric. **Remark 4.6**.: As already observed by the authors, as a consequence of condition (4.3) we get the existence of a positive smooth supersolution of the stability operator \(-\Delta_{\Psi}+f^{\prime}(u)\) in \(\mathrm{int}M\), which implies the stability of the solution \(u\). We stress that, as already claimed, the second result we present in this section is based on some potential theoretic tools. The first notion we need is the Neumann counterpart of Dirichlet parabolicity. We say that a connected weighted Riemannian manifold \(M_{\Psi}\) with (possibly empty) boundary \(\partial M\) is _Neumann parabolic_ (or \(\mathcal{N}\)-_parabolic_) if for any given \(u\in C^{0}(M)\cap W^{1,2}_{loc}(\mathrm{int}M,\mathrm{dv}_{\Psi})\) satisfying \[\left\{\begin{array}{ll}\Delta_{\Psi}u\geq 0&\text{in }\mathrm{int}M\\ \partial_{\nu}u\leq 0&\text{on }\partial M\\ \sup_{M}u<+\infty\end{array}\right.\] it holds \[u\equiv const.,\] where \(\nu\) is the outward pointing unit normal to \(\partial M\). In the case \(\partial M=\emptyset\), the normal derivative condition is vacuous. As an application of Theorem 3.23, we can replace (4.3) in [6, Theorem 6.5] with the (simpler) strong stability condition on \(u\). Moreover, we only need the manifold \(N_{\Gamma}\) to be parabolic, avoiding the assumption on the finiteness of its volume (originally required in [6]). **Theorem 4.7**.: _Let \(M_{\Psi}=(I\times_{\sigma}N)_{\Psi}\) where \((N,g^{N})\) is a complete (possibly noncompact), connected, \((n-1)\)-dimensional Riemannian manifold without boundary. Moreover, assume that \(N_{\Gamma}\) is parabolic._ _Let \(u\in C^{4}\left(\overline{A}(r_{1},r_{2})\right)\) be a solution of the Dirichlet problem_ \[\left\{\begin{array}{ll}\Delta_{\Psi}u=f(u)&\text{in }A(r_{1},r_{2})\\ u\equiv c_{1}&\text{on }\{r_{1}\}\times N\\ u\equiv c_{2}&\text{on }\{r_{2}\}\times N\end{array}\right.\] _where \(c_{j}\in\mathbb{R}\) are given constants and the function \(f(t)\) is of class \(C^{2}\) and satisfies \(f^{\prime\prime}(t)\leq 0\). If \(u\) is strongly stable and_ \[||u||_{C^{2}_{rad}}=\sup_{A(r_{1},r_{2})}|u|+\sup_{A(r_{1},r_{2})}|\partial_{r}u|+\sup_{A(r_{1},r_{2})}|\partial_{r}^{2}u|<+\infty,\] _then \(u(r,\xi)=\widehat{u}(r)\) is symmetric._ Proof of Theorem 4.7.: Let us consider the function \[v(r,\xi):=\Delta^{N}_{\Gamma}u(r,\xi)\] which vanishes on \(\partial A(r_{1},r_{2})\). By a direct calculation we have \([\Delta^{M}_{\Psi},\Delta^{N}_{\Gamma}]=0\), which implies \[\Delta^{M}_{\Psi}v=\Delta^{N}_{\Gamma}f(u)=f^{\prime\prime}(u)|\nabla^{N}u|^{2}_{N}+f^{\prime}(u)v\leq f^{\prime}(u)v.\] It follows that \(v\) satisfies \[\left\{\begin{array}{ll}\Delta_{\Psi}(-v)\geq f^{\prime}(u)(-v)&\mbox{in }A(r_{1},r_{2})\\ -v=0&\mbox{on }\partial A(r_{1},r_{2})\end{array}\right..\] and, using the strong stability assumption on \(u\), by Theorem 3.23 we get \[v\geq 0\mbox{ in }A(r_{1},r_{2}). 
\tag{4.4}\] On the other hand, thanks to the parabolicity of \(N_{\Gamma}\), we can apply [9, Proposition 3.1] and [6, Lemma 6.12], obtaining \[\int_{A(r_{1},r_{2})}v\ \mathrm{d}v_{\Psi}=\int_{r_{1}}^{r_{2}}\left(\int_{\{t\}\times N}\Delta_{\Gamma}^{N}u(t,\xi)\ \mathrm{d}v_{\Gamma}(\xi)\right)e^{-\Phi(t)}\sigma^{m-1}(t)\ \mathrm{d}t=0,\] which, together with (4.4), implies \(v\equiv 0\) in \(A(r_{1},r_{2})\). It follows that for every fixed \(\overline{r}\in[r_{1},r_{2}]\) the function \(\xi\mapsto v(\overline{r},\xi)\) vanishes on \(N\), and thus \(\xi\mapsto u(\overline{r},\xi)\) is a bounded harmonic function on the parabolic manifold \(N_{\Gamma}\). By definition of parabolicity, this implies that \(u(\overline{r},\cdot)\) is constant in \(N_{\Gamma}\), as claimed. ## Acknowledgements The author would like to thank Stefano Pigola for several discussions and precious suggestions about the present work, Giona Veronelli for engaging in productive conversations about the ABP inequality and Alberto Farina for having introduced the author to the Euclidean results that inspired Section 2. The author acknowledges the support of the GNAMPA (INdAM) project "Applicazioni geometriche del metodo ABP".
2309.08553
Update on Sidon-Ramsey numbers
We add two new exact Sidon-Ramsey numbers to the list known so far. We also improve the upper bounds of the next two Sidon-Ramsey numbers. In doing so, we comment on the tendencies we found in the Sidon-Ramsey partitions that were studied to obtain these results.
Manuel A. Espinosa-García, Daniel Pellicer
2023-09-15T17:14:28Z
http://arxiv.org/abs/2309.08553v1
# Update on Sidon-Ramsey Numbers ###### Abstract We add two new exact Sidon-Ramsey numbers to the list known so far. We also improve the upper bounds of the next two Sidon-Ramsey numbers. In doing so, we comment on the tendencies we found in the Sidon-Ramsey partitions that were studied to obtain these results. Key Words: Sidon set, Ramsey theory, Sidon-Ramsey partition, Sidon-Ramsey numbers. AMS Subject Classification (2010): Primary: 11B75. Secondary: 05D10. ## 1 Background A _Sidon set_ is a subset \(S\) of an additive group such that all pairwise sums are distinct, i.e., if \(a+b=c+d\), for some \(a,b,c,d\in S\), then \(\{a,b\}=\{c,d\}\). Two of the most studied problems in this area are the following: to find the maximum size of a Sidon set contained in \([n]:=\{1,2,\ldots,n\}\) as a subset of \(\mathbb{Z}\), and to find the maximal density of a Sidon set in the set of positive integers of the group \(\mathbb{Z}\). \(F_{2}(n)\) is defined as the size of the largest Sidon set contained in the interval \([n]\), and it is called the _Sidon number_ of \(n\). It is known that \[n^{1/2}(1-o(1))\leq F_{2}(n)\leq n^{1/2}+O(n^{1/4}).\] The lower bound is inferred from a construction of Sidon sets due to Singer (see [5]), and the upper bound was proved by Erdős and Turán (see [1]). Some of the first exact values known for \(F_{2}(n)\) are included in Table 1. The density problem for the Sidon numbers consists of maximizing the length of an interval that can be partitioned into \(k\) Sidon sets; we call such a division a _Sidon-Ramsey partition_. \(\mathrm{SR}(k)\) is defined as the minimum \(n\) such that there is no Sidon-Ramsey partition of \([n]\) in \(k\) parts, and it is called the _Sidon-Ramsey number_ of \(k\). These numbers were introduced by Liang, Li, Xiu and Xu in [3]. The Sidon-Ramsey numbers satisfy \[k^{2}-O(k^{c})\leq SR(k)\leq k^{2}+Ck^{3/2}+O(k),\] where \(c\leq 1.525\) and \(C\leq 1.996\) (see [2]). Table 2 lists the previously known values of \(\mathrm{SR}(k)\), that is, those for \(k\leq 5\). It also includes the bounds for the next values of \(k\) given in [6]. In this paper we establish the values of the next two Sidon-Ramsey numbers; they are given in Section 3. Before that, in Section 2 we describe the procedures implemented in Python to find Sidon sets on a given finite subset of \(\mathbb{Z}\). Then, in Section 4 we improve the bounds of \(SR(8)\) and of \(SR(9)\). We conclude with some remarks and open problems in Section 5. ## 2 Description of routines In this section we describe the procedures we implemented in Python in order to find Sidon sets of prescribed sizes on given subsets of \(\mathbb{Z}\). When determining the exact values of Sidon-Ramsey numbers we used both procedures on separate computers to validate the results. For convenience, in this paper we will abbreviate 'Sidon set with \(k\) elements' by \(k\)-SS. \begin{table} \begin{tabular}{c c c c c c c|c} \(n\) & \([1]\) & \([2,3]\) & \([4,6]\) & \([7,11]\) & \([12,17]\) & \([18,25]\) & \([26,34]\) \\ \(F_{2}(n)\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) \\ \(n\) & \([35,44]\) & \([45,55]\) & \([56,72]\) & \([73,85]\) & \([86,106]\) & \([107,127]\) & \([128,151]\) \\ \(F_{2}(n)\) & \(8\) & \(9\) & \(10\) & \(11\) & \(12\) & \(13\) & \(14\) \\ \end{tabular} \end{table} Table 1: Values of \(F_{2}(n)\) for small \(n\). 
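For instance, \(\{1,2,5,11\}\) is a \(4\)-SS in \([11]\): its pairwise sums \(2,3,4,6,7,10,12,13,16,22\) are all distinct, and by Table 1 no Sidon set in \([11]\) can have more than \(4\) elements. On the other hand, \(\{1,2,3\}\) is not a Sidon set, since \(1+3=2+2\).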
\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \(k\) & \(6\) & \(7\) & \(8\) & \(9\) & \(10\) & \(11\) \\ \hline upper bound for \(SR(k)\) & \(55\) & \(70\) & \(97\) & \(118\) & \(141\) & \(166\) \\ lower bound for \(SR(k)\) & \(50\) & \(65\) & \(81\) & \(97\) & \(114\) & \(133\) \\ \end{tabular} \end{table} Table 2: \(\mathrm{SR}(k)\) for \(k\leq 5\), and \(\mathrm{SR}(k)\) bounds for \(6\leq k\leq 11\). ### Using Sidon sets on smaller subsets of \(\mathbb{Z}\) This algorithm finds all Sidon sets with at most \(k\) elements in the set \([n]\). We build the Sidon sets recursively, using Sidon sets in \([m]\) to build Sidon sets in \([m+1]\). First, for \(k\geq 1\) we consider all Sidon sets in the interval \([1]\) with at most \(k\) elements, and denote this set by \(S_{1,k}\). Notice that \(S_{1,k}\) consists exclusively of \(\varnothing\) and \(\{1\}\). Recursively, we build the set \(S_{t,k}\) of all Sidon sets with at most \(k\) elements in \([t]\) as follows: 1. We add all the elements in \(S_{t-1,k}\) to \(S_{t,k}\). 2. To each Sidon set in \(S_{t-1,k}\) with at most \(k-1\) elements we add the element \(t\). If the new set is a Sidon set, we add it to \(S_{t,k}\). ### Directly constructing all Sidon sets of a given size on a given subset of \(\mathbb{Z}\) Next we explain how we find all \(k\)-SS's in the set \([n]\). It suffices to determine a way to obtain all \(k\)-SS's that contain \(1\) and \(n\), since all others will be translates of some \(k\)-SS's obtained by the same procedure in subsets \([m]\) for some \(m<n\). Let \(X=\{x_{1},\ldots,x_{k}\}\) with \(x_{1}=1\) and \(x_{k}=n\) be a \(k\)-SS where \(x_{i}<x_{j}\) if \(i<j\). Then \(X\) is completely determined by the \((k-1)\)-tuple \((y_{1},\ldots,y_{k-1}):=(x_{2}-x_{1},x_{3}-x_{2},\ldots,x_{k}-x_{k-1})\). The numbers \(y_{i}\) satisfy the following properties. 1. If \(i\neq j\) then \(y_{i}\neq y_{j}\). More generally, if \(i_{1}\leq i_{2}\), \(i_{3}\leq i_{4}\) and \(\{i_{1},i_{2}\}\neq\{i_{3},i_{4}\}\) then \[\sum_{j=i_{1}}^{i_{2}}y_{j}\neq\sum_{j=i_{3}}^{i_{4}}y_{j}.\] 2. \(\sum_{i=1}^{k-1}\!y_{i}=n-1\). Conversely, any tuple \((y_{1},\ldots,y_{k-1})\) satisfying the above properties induces the Sidon set \(\{1,1+y_{1},1+y_{1}+y_{2},\ldots,1+\sum_{i=1}^{k-1}y_{i}\}\). Our strategy consists of two steps. * Find all tuples \((y_{1},\ldots,y_{k-1})\) of numbers such that their sum is \(n-1\) and \(y_{i}<y_{i+1}\) for all \(i\). * For each of the tuples in the previous step determine which of the permutations of its entries induce a Sidon set (and so we only need to verify that they satisfy the first item above). Clearly this procedure becomes too slow when the values of \(k\) and \(n\) increase. As an example, our implementation in Python took less than a second to find the 96 8-SS in [38] that contain 1 and 38, and it took a couple of seconds to find the 195 9-SS in [50] that contain 1 and 50. When asked to determine all 13-SS in the set [107], followed by those in [108] and by those in [109], the three results were ready only after 19 hours (the outcome is that there are two that contain 1 and 107 and none that contain either 1 and 108, or 1 and 109). As expected, the computation of the 13-SS in the sets [110] and [111] that contain both endpoints of the intervals was even lengthier. (A short Python sketch of these routines is included below.) ## 3 New exact numbers In this section we establish the values of two Sidon-Ramsey numbers that were not previously known. 
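The following minimal Python sketch illustrates the recursive routine of Section 2.1 (this is an illustration written for this update, not the code actually used for the verifications; the names `is_sidon` and `sidon_sets_up_to` are ours). It relies on the standard fact that a finite set of integers is a Sidon set exactly when all of its pairwise differences are distinct.

```python
from itertools import combinations

def is_sidon(s):
    """A finite set of integers is a Sidon set iff all pairwise differences are distinct."""
    diffs = [b - a for a, b in combinations(sorted(s), 2)]
    return len(diffs) == len(set(diffs))

def sidon_sets_up_to(n, k):
    """All Sidon subsets of [n] = {1, ..., n} with at most k elements,
    built recursively as in Section 2.1 (S_{t,k} is obtained from S_{t-1,k})."""
    S = [frozenset()]              # before inserting any number: only the empty set
    for t in range(1, n + 1):
        extended = []
        for s in S:                # step 1: every set of S_{t-1,k} is kept
            if len(s) < k:         # step 2: try to append the new element t
                cand = s | {t}
                if is_sidon(cand):
                    extended.append(cand)
        S.extend(extended)
    return S

# Small sanity checks against Section 1:
#   is_sidon({1, 2, 5, 11})  -> True   (a 4-SS in [11])
#   is_sidon({1, 2, 3})      -> False  (1 + 3 = 2 + 2)
#   max(len(s) for s in sidon_sets_up_to(25, 7)) -> 6, in agreement with Table 1.
```

This sketch keeps every intermediate Sidon set in memory, so it is only practical for small parameters; the routine of Section 2.2 works instead with the tuple of consecutive differences \((y_{1},\ldots,y_{k-1})\) and avoids storing all smaller Sidon sets.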
**Theorem 3.1**.: _The number \(SR(6)\) is \(51\)._ Proof.: The lower bound is given by the following partition of [50]: \[\{4,6,14,28,29,33,40,46,49\}, \{2,5,11,18,22,23,37,45,47\},\] \[\{3,7,8,24,30,32,39,42\}, \{1,13,15,26,31,34,35,41\},\] \[\{9,12,19,21,27,43,44,48\}, \{10,16,17,20,25,36,38,50\}.\] We verified the upper bound in two distinct ways. First, we found all triples of mutually disjoint 9-SS's in the set [51]. There are \(12,094\) triples, and for each of them we did an exhaustive search to determine that it cannot be completed to a Sidon-Ramsey partition of [51]. The search was carried out by finding all 8-SS's in the complement in [51] of each triple and determining whether three of them are mutually disjoint. We also verified that there is no quadruple of mutually disjoint 9-SS's, and it was previously known that there are no 10-SS's in [51] (see Table 1). Therefore any partition of [51] into 6 Sidon sets would have to consist of three 9-SS's and three 8-SS's, a possibility that was excluded by the search described above. Second, with the same techniques described above (with pairs of 9-SS's instead of triples) we made an exhaustive search to establish that there is only one Sidon-Ramsey partition of [50] (the one in the displayed equation). If there existed a Sidon-Ramsey partition of [51] then it could be obtained from the partition of [50] above by adding 51 to one of the parts. However, none of the parts remains a Sidon set when the number 51 is added. **Theorem 3.2**.: _The number \(SR(7)\) is \(66\)._ Proof.: The following partition is a witness of the lower bound: \[\{1,3,15,22,30,33,46,50,55,56\}, \{2,9,17,21,26,27,47,49,60,63\},\] \[\{4,7,13,23,24,28,36,54,61\}, \{5,12,14,20,34,44,45,57,62\},\] \[\{6,10,16,29,38,41,43,58,59\}, \{8,25,31,32,35,40,51,53,65\},\] \[\{11,18,19,37,39,42,48,52,64\}.\] The upper bound was verified by finding all triples of mutually disjoint 10-SS's in the set [66]. There are 1601160 such triples, and for each of them we did an exhaustive search to determine that the triple cannot be completed to a Sidon-Ramsey partition of [66] with the addition of four disjoint 9-SS's. Besides, there are 12435 quadruples and no quintuples of mutually disjoint 10-SS's in the set [66], and none of the quadruples can be extended to a Sidon-Ramsey partition of [66]. ## 4 Further results Our techniques and computational resources do not seem to be enough to determine \(SR(8)\), but we improved the bounds in Table 2 as shown in the following results. **Theorem 4.1**.: _The Sidon-Ramsey number with \(k=8\) satisfies \(\mathrm{SR}(8)\leq 86\)._ Proof.: From Table 1 we know that the size of each part of a partition of [86] into Sidon sets is at most 12. There are two 12-SS's in [86] and they have nonempty intersection, forcing any partition of [86] into eight Sidon sets to have at most one 12-SS. Also, we found the 102484 11-SS's in [86]. In order to find partitions of [86] into eight Sidon sets with one of them of size 12, we needed to complete the 12-SS with at least four mutually disjoint 11-SS's. There are 3266 quadruples of disjoint 11-SS's in the complement of each 12-SS, and none of them can be completed to a Sidon-Ramsey partition of [86] into 8 parts (it is not possible to add another 11-SS to any of these quadruples, nor to complete with a triple of 10-SS's). If we do not use a 12-SS, we need to use at least six disjoint 11-SS's. We found 4030 sextuples of disjoint 11-SS's, none of whose complements contains a 10-SS or an 11-SS. We conclude that there is no Sidon-Ramsey partition of [86] into 8 parts. 
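The lower-bound witnesses displayed in the proofs above (and the partitions of [65] listed below) can be checked mechanically. A possible Python sketch of such a check is the following (the function name and the variable `witness_50` are ours, introduced only for illustration):

```python
from itertools import combinations

def is_sidon_ramsey_partition(parts, n):
    """Check that `parts` (an iterable of integer sets) partitions [n] = {1, ..., n}
    and that every part is a Sidon set (all pairwise differences distinct)."""
    elements = [x for part in parts for x in part]
    if sorted(elements) != list(range(1, n + 1)):      # disjoint and covering [n]
        return False
    for part in parts:
        diffs = [b - a for a, b in combinations(sorted(part), 2)]
        if len(diffs) != len(set(diffs)):              # this part is not a Sidon set
            return False
    return True

# Example: with witness_50 the list of the six sets displayed in the proof of
# Theorem 3.1, is_sidon_ramsey_partition(witness_50, 50) should return True.
```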
**Theorem 4.2**.: _The Sidon-Ramsey number with \(k=9\) satisfies \(\mathrm{SR}(9)\leq 111\)._ Proof.: From Table 1 we know that the size of each part of a partition of [111] into Sidon sets is at most 13. There are twenty-eight 13-SS's in [111] distributed as follows: * Two 13-SS's in [107] plus their eight translates. * Six 13-SS's in [110] that contain 1 and 110, plus their six translates. * Six 13-SS's in [111] that contain 1 and 111. (There are no 13-SS's in [108] or [109] that use both ends of the interval.) In order to partition [111] into 9 Sidon sets we need at least three 13-SS's. However, no triple among the twenty-eight 13-SS's in [111] listed above consists of mutually disjoint sets. A Sidon-Ramsey partition is _balanced_ if the sizes of any pair of parts differ by at most 1, and it is _strongly-balanced_ if the sizes are all the same. In [4] they try to find small intervals that contain many disjoint Sidon sets of the same size, not necessarily making a partition. When these constructions make a partition of some set \([n]\), they give an example of a strongly-balanced Sidon-Ramsey partition. In [3] and [6] Sidon-Ramsey partitions of \([SR(k)-1]\) in \(k\) parts are given, for \(k\leq 5\); they are all balanced partitions. The above discussion suggests that Sidon-Ramsey partitions of \([\text{SR}(k)-1]\) are balanced, but as we shall see, this is not the case. So far we have found 5 balanced Sidon-Ramsey partitions of [65] with 7 parts each: \[\{1,3,15,22,30,33,46,50,55,56\}, \{2,9,17,21,26,27,47,49,60,63\},\] \[\{4,7,13,23,24,28,36,54,61\}, \{5,12,14,20,34,44,45,57,62\},\] \[\{6,10,16,29,38,41,43,58,59\}, \{8,25,31,32,35,40,51,53,65\},\] \[\{11,18,19,37,39,42,48,52,64\}.\] \[\{1,3,15,22,30,33,46,50,55,56\}, \{2,6,14,24,27,29,38,57,58,64\},\] \[\{5,8,23,28,39,40,47,49,53\}, \{10,12,13,25,34,41,45,51,59\},\] \[\{7,16,20,21,36,42,44,54,61\}, \{9,11,17,32,43,48,52,62,65\},\] \[\{4,18,19,26,31,35,37,60,63\}.\] \[\{2,4,16,23,31,34,47,51,56,57\}, \{7,10,15,19,33,43,44,50,63,65\},\] \[\{5,12,18,21,29,39,54,58,59\}, \{9,11,17,28,37,40,41,55,62\},\] \[\{8,13,14,22,24,42,45,49,64\}, \{3,20,26,27,30,35,46,48,60\},\] \[\{1,6,25,32,36,38,52,53,61\}.\] \[\{2,4,16,23,31,34,47,51,56,57\}, \{3,9,10,29,38,40,43,53,61,65\},\] \[\{5,8,22,24,37,44,45,49,55\}, \{14,21,27,32,35,36,52,62,64\},\] \[\{7,11,12,26,39,42,48,50,60\}, \{1,13,19,20,28,30,33,54,58\},\] \[\{6,15,17,18,25,41,46,59,63\}.\] \[\{3,5,17,24,32,35,48,52,57,58\}, \{2,6,14,16,19,37,38,44,53,64\},\] \[\{7,10,25,30,41,42,49,51,55\}, \{1,13,15,18,31,39,40,46,50\},\] \[\{4,21,22,26,28,36,47,56,59\}, \{8,11,23,27,29,34,54,62,63\},\] \[\{9,12,20,33,43,45,60,61,65\}.\] These partitions have two parts of 10 elements each, and five parts with 9 elements each. We conjecture that these and their reflected partitions (constructed by including the numbers \(66-x\) instead of \(x\) in each part) are all the balanced Sidon-Ramsey partitions with those parameters. On the other hand, this is the first \(k\) for which there are non-balanced Sidon-Ramsey partitions with \(k\) parts in \([SR(k)-1]\). There is only one, namely \[\{1,3,15,22,30,33,46,50,55,56\}, \{2,9,17,21,26,27,47,49,60,63\},\] \[\{4,7,12,16,31,41,42,48,62,64\}, \{5,11,14,28,32,39,44,52,54\},\] \[\{6,19,23,24,34,43,57,59,65\}, \{8,10,20,36,37,40,45,51,58\},\] \[\{13,18,25,29,35,38,53,61\}.\] It has three parts with 10 elements each, three parts with 9 elements each and one part with 8 elements. 
Uniqueness was verified by determining all disjoint triples and quadruples of 10-SS's in [65] and analyzing one by one whether they could be extended to a Sidon-Ramsey partition with 7 parts. This was enough, since there is no 5-tuple of disjoint 10-SS's in [65]. The previous discussion naturally leads to the following open problems. **Open Problem 4.3**.: _Is it true that for any positive integer \(t\) there exists a Sidon-Ramsey partition in \(k\) parts of \([\mathrm{SR}(k)-1]\) such that a pair of parts differs in size by \(t\)?_ **Open Problem 4.4**.: _Is there any positive integer \(k\) such that there are more non-balanced Sidon-Ramsey partitions than balanced Sidon-Ramsey partitions in \(k\) parts of \([\mathrm{SR}(k)-1]\)?_ The Sidon-Ramsey partitions obtained so far show the following tendency when they are not strongly-balanced. The large Sidon sets of the partition do not include 1 and \(n\) simultaneously. Furthermore, if one of those large Sidon sets includes 1 then the density of the small numbers of the part seems to be lower than the density of the large numbers of the part; for example, \(\{1,3,15,22\}\) compared with \(\{46,50,55,56\}\) in the first part of the first two balanced Sidon-Ramsey partitions of [65]. An analogous behavior can be observed for those large parts containing \(n\) of a Sidon-Ramsey partition of \([n]\). Intuitively, one can think that it is easier to find \(k\)-SS's in \([m+\ell]\) than in \([m]\) (assuming \(\ell\geq 1\)). This suggests that if we are told to bet on a given \(d\)-tuple of disjoint Sidon sets (say the large ones in the partition) so that they can be completed to a Sidon-Ramsey partition of \([n]\), then we should improve the chances of success if we manage to choose those \(d\) sets so that they contain neither 1 nor \(n\). In that way, the remaining Sidon sets (say, the smaller ones) must be chosen within a larger interval (although the evidence given by the 5 balanced partitions of [65] shown above does not support this guess). Based on the optimal Sidon-Ramsey partitions known so far, it seems that the chances of success are higher if the sets in the \(d\)-tuple mentioned above cover only a few numbers in the two ends of the interval \([n]\). For example, when we look for the numbers \(\{1,2,3,4,5,61,62,63,64,65\}\) in the five balanced partitions of [65] shown above (so that we take the five smallest and the five largest ones), the two large sets include 4 of those extreme numbers in most of the cases, and in only one of them do they include 5 of these numbers. In comparison, a random pair of disjoint 10-SS's in [65] contains 6 or more of those numbers. The number 5 for picking small and large numbers was chosen here since there are precisely 5 more parts to be chosen (the Sidon sets with 9 elements), but the evaluation is not very different if we choose the first and last six or seven numbers of [65]. If true, this idea can be used to improve the lower bounds for \(SR(k)\) by searching for the large parts of the partition while favoring tuples that do not concentrate near \(1\) or near \(n\). ## 5 Conclusions While searching for Sidon-Ramsey partitions we realized how relevant it is to know all \(k\)-SS's in \([n]\) for the smallest values of \(n\) for which they exist. Denote by \(g_{k}(n)\) the number of \(k\)-SS's in \([n]\) that contain \(1\) and \(n\). For \(k\leq 9\) the values of \(g_{k}(n)\) for small \(n\) strongly suggest that these are non-decreasing functions. 
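For small parameters, \(g_{k}(n)\) can be tabulated by a direct backtracking over increasing Sidon sets that start at \(1\) and end at \(n\), in the spirit of the second routine of Section 2. The sketch below is again only an illustration (the function names are ours), and it repeats the `is_sidon` helper so as to be self-contained.

```python
from itertools import combinations

def is_sidon(s):
    diffs = [b - a for a, b in combinations(sorted(s), 2)]
    return len(diffs) == len(set(diffs))

def g(k, n):
    """Number of k-SS's in [n] that contain both 1 and n (for k >= 2),
    counted by backtracking over the elements strictly between 1 and n."""
    count = 0
    def extend(current, last):
        nonlocal count
        if len(current) == k - 1:          # only n is missing
            if is_sidon(current + [n]):
                count += 1
            return
        for x in range(last + 1, n):
            cand = current + [x]
            if is_sidon(cand):             # prune: subsets of Sidon sets are Sidon
                extend(cand, x)
    extend([1], 1)
    return count

# Consistency checks with Section 2: g(8, 38) should return 96 and g(9, 50) should return 195.
```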
It was striking to us that \(g_{10}(n)\) is not non-decreasing, since \(g_{10}(56)=2\) whereas \(g_{10}(57)=g_{10}(58)=0\). Our intuition suggested that if there are \(10\)-SS's in [56] that use \(1\) and \(56\) then there should also be \(10\)-SS's in [57] that use \(1\) and \(57\), since there is a little more space in \(\{2,\ldots,56\}\) to accommodate \(8\) numbers to complete a \(10\)-SS with \(1\) and \(57\), in comparison with \(\{2,\ldots,55\}\) to be completed with \(1\) and \(56\). The phenomenon of \(g_{k}(n)\) decreasing to \(0\) repeats with \(g_{11}(n)\), \(g_{12}(n)\) and \(g_{13}(n)\). The function \(g_{11}(n)\) equals \(0\) if \(n<73\), while \(g_{11}(73)=4\) and \(g_{11}(74)=0\) (for every \(n>74\) the value of \(g_{11}(n)\) is positive). Similarly, \(g_{12}(n)=0\) if \(n<86\) while \(g_{12}(86)=2\), \(g_{12}(87)=g_{12}(88)=g_{12}(89)=g_{12}(90)=0\), \(g_{12}(91)=2\) and from there on \(g_{12}(n)\) seems to be strictly increasing. Finally, \(g_{13}(n)=0\) if \(n<107\) while \(g_{13}(107)=2\) and \(g_{13}(108)=g_{13}(109)=0\) (curiously enough, \(g_{13}(110)=g_{13}(111)=6\) so that \(g_{13}\) is not even strictly increasing after the first two non-zero values). **Open Problem 5.1**.: _For which numbers \(k\) is the function \(g_{k}(n)\) just defined non-decreasing?_ The number of known exact values of \(SR(k)\) is too small to have much intuition about the nature of the Sidon-Ramsey partitions attaining those numbers. Here we were able to improve the upper bounds of two more Sidon-Ramsey numbers. We hope that soon new clever constructions of Sidon-Ramsey partitions will help improve the lower bounds as well (or suggest that they are sharp). ## Acknowledgments The second author was supported by PAPIIT-UNAM under project grant IN104021 and by CONACYT "Fondo Sectorial de Investigacion para la Educacion" under grant A1-S-10839.