# Geometricities of driven transport in presence of reservoir squeezing

Javed Akhtar, Jimli Goswami, Himangshu Prabal Goswami

2023-09-26 | arXiv:2309.14723v1 | http://arxiv.org/abs/2309.14723v1
###### Abstract
In a bare site coupled to two reservoirs, we explore the statistics of boson exchange in the presence of two simultaneous processes: squeezing the two reservoirs and driving the two reservoirs. The squeezing parameters compete with the geometric phaselike effect or geometricity to alter the nature of the steadystate flux and noise. The even (odd) geometric cumulants and the total minimum entropy are found to be symmetric (antisymmetric) with respect to exchanging the left and right squeezing parameters. Upon increasing the strength of the squeezing parameters, loss of geometricity is observed. Under maximum squeezing, one can recover a standard steadystate fluctuation theorem even in the presence of phase different driving protocol. A recently proposed modified geometric thermodynamic uncertainty principle is found to be robust.
## I Introduction
Phase-different multiparametric temporal driving allows additional leverage over a system's dynamics [1; 2]. This leverage is due to gauge-invariant geometric observables during the system's time evolution, which affect the driven transport and time-dependent energy conversion processes. Additional phases that arise during cyclic variations of two-parameter adiabatic driving are usually referred to as Pancharatnam-Berry phases, which introduce non-triviality into a well understood system [3]. As a first application, the holonomy of the parametric space was engineered to observe bias-independent electronic pumping under slow periodic variations [4]. Subsequently, this paradigm was extended to nonequilibrium systems that exchange matter/energy with macroscopic reservoirs [5; 6]. Usually, geometric contributions in nonequilibrium quantum systems are introduced by driving the reservoirs' temperatures, chemical potentials, or even the system-reservoir couplings [6]. In such systems, the geometric effects not only actuate the steadystate dynamics but also lead to violations of the well established fluctuation theorems (FT) and thermodynamic uncertainty relationships (TUR) [7; 8; 9], which are otherwise robust even in the presence of quantum coherences, entanglement and quantum squeezing. These geometric effects are almost entirely quantified by identifying their contribution to the generating function describing exchange processes in a nonequilibrium quantum system [6; 9; 10]. The resulting generating functions, derived from a full counting statistics (FCS) method, have an additive term apart from the inherent dynamic term, which is driving dependent and possesses a geometric curvature in the parameter space [7; 11]. The geometric contribution in such nonequilibrium quantum systems can also be observed during the evolution of the system's density matrix [12]. Although observable, it is no longer a phase factor and hence is referred to as a geometric phaselike effect or simply geometricity.
Such effects have also been explored in quantum heat engines, thermoelectric devices and molecular junctions [12; 13; 14; 15; 16; 17]. Enhancement of an engine's constancy, modification of the coherent contribution to the flux, observation of giant Fano factors and fractional quantization of the flux have been reported [7; 14; 18]. On a separate note, in the absence of geometric effects, general observables like flux, higher order fluctuations, constancy and thermodynamic affinities are also affected when parameters describing the reservoirs are altered, e.g. by introducing quantum mechanical squeezing [19; 20; 21; 22; 23; 24]. Squeezed reservoirs are also known to introduce additional quantum control, which has been exploited to observe nontrivial quantum thermodynamics, such as additional corrective parameters in the classical Crooks-type fluctuation theorem [25] or the failure to recover a Jarzynski-Wojcik-type fluctuation theorem [26]. Squeezed states of the thermal reservoirs have also been exploited to overcome the Carnot limit in heat engines [27; 28; 29; 30; 31], violate universal maximum power theories [14; 8; 15] and introduce higher order correlated photon pairs from MgO:LiNbO\({}_{3}\) crystals [32; 33]. To develop a universal understanding of the role of squeezed initial states in FTs and TURs, several possibilities are currently under conceptualization [34; 35; 36]. For example, higher order fluctuations during photon transport can be maximized due to mixing between a qubit and squeezed resonators [37]. When treated separately, both geometricity (introduced via tuning the reservoirs) and squeezing of the reservoirs inherently affect the quantum thermodynamics of nonequilibrium systems. Hence, it is natural to ask about the quantum thermodynamics of nonequilibrium systems where squeezed reservoirs are subjected to periodic modulations. This paper is the first to address this question. Since the presence of geometricity makes even a simple model, e.g. a resonant level coupled to two thermal or electronic reservoirs, non-trivial [12; 17], we focus on such a system where the reservoirs are squeezed.
In this work, we study the effect of squeezing the reservoirs on the statistics of particle exchange when the temperatures of the reservoirs are periodically modulated. The geometricity that manifests itself in the quantum statistics is explored in a toy model, which is a bare site coupled to two squeezed reservoirs. Such a model is standard and well-studied in quantum transport [38; 39; 7]. Our work focuses on identifying the competition between squeezing and driving on the fluctuations of boson exchange within a quantum statistical framework. We implement the well-established methodology of full counting statistics (FCS) within a quantum master equation framework [40]. Firstly, in Sec. (II) we present our model and the general formalism used. In Sec. (III), we show our results and analysis, and in Sec. (IV) we discuss the thermodynamic uncertainty relationship, after which we conclude in Sec. (V).
## II Model and Formalism
A bare site coupled to two reservoirs has been thoroughly studied, both in the presence and absence of squeezing [41; 42; 43; 24]. The site can be effectively described by two Fock states that correspond to a boson-occupied state (\(|1\rangle\)) and an unoccupied state (\(|0\rangle\)), separated by an energy \(\hbar\omega_{o}\) (see the appendix for the Hamiltonian). On the experimental front, such a model can be a flexural mode of a GaAs-based nanobeam structure piezoelectrically coupled to squeezed electronic noise (squeezed thermal reservoirs) [30], or a qubit system realizable in an NMR setup [44] as well as in a transmon qubit around a SQUID setup [45]. The schematic representation of the model is shown in Fig.(1 a). In such a nonequilibrium system, the time evolution of the reduced density matrix, \(\hat{\rho}\), (within standard Born-Markov approximation techniques) is a Pauli rate equation (decoupled from coherences), with the two Fock states, \(|1\rangle\) and \(|0\rangle\), acting as the boson exchanger between the two squeezed reservoirs. When the reservoirs are driven, the rates become time-dependent (see the appendix). Within the standard theory of the full counting statistics (FCS) formalism [40; 46], one can keep track of the net number of bosons exchanged, \(q\), through a moment generating vector for the reduced system in terms of the auxiliary counting field, \(\lambda\). In the Liouville space, the reduced moment generating density vector, \(|\tilde{\rho}(\lambda,t)\rangle\rangle\), can be written as,
\[|\dot{\tilde{\rho}}(\lambda,t)\rangle\rangle=\tilde{\mathcal{L}}(\lambda,t)|\tilde{\rho}(\lambda,t)\rangle\rangle, \tag{1}\]
where the elements of the density vector contain the populations of the occupied and unoccupied states, \(\{\rho_{11},\,\rho_{00}\}\) (appendix) with the time-dependent evolution superoperator, \(\tilde{\mathcal{L}}(\lambda,t)\), given by
\[\tilde{\mathcal{L}}(\lambda,t)=\left[\begin{array}{cc}-\alpha_{L}(t)-\alpha_{R}(t)&\beta_{L}(t)e^{\lambda}+\beta_{R}(t)\\ \alpha_{L}(t)e^{-\lambda}+\alpha_{R}(t)&-\beta_{L}(t)-\beta_{R}(t)\end{array}\right]. \tag{2}\]
It is a standard practice to ignore the Lamb shift terms, so that the quantum mechanical rates of boson exchange between the system and reservoirs can be recast as:
\[\alpha_{\nu}(t) =\gamma_{\nu}\left\{\cosh(2x_{\nu})\left(n_{\nu}(t)+\frac{1}{2}\right)+\frac{1}{2}\right\}, \tag{3}\] \[\beta_{\nu}(t) =\gamma_{\nu}\left\{\cosh(2x_{\nu})\left(n_{\nu}(t)+\frac{1}{2}\right)-\frac{1}{2}\right\}. \tag{4}\]
\(\gamma_{\nu},\nu=\ell,r\) represents the coupling between the bare site and the \(\nu\)-th reservoir, with \(n_{\nu}(t)=(e^{\hbar\omega_{o}/k_{B}T_{\nu}(t)}-1)^{-1}\) being the Bose-Einstein distribution of the \(\nu\)-th bath. \(x_{\nu}>0\) is the renormalized parameter responsible for squeezing the \(\nu\)-th harmonic bath within the Markovian regime [47] (see the appendix). Within this approximation, the squeezing properties get symmetrically distributed about the frequency of the corresponding left or right squeezed bath [48]. The parametric modulation is present in the reservoirs' temperatures, \(T_{\nu}(t)\), which we take to be of the following form,
\[T_{\ell}(t) :=T_{\ell}^{o}+A_{o}\cos(\Omega t+\phi), \tag{5}\] \[T_{r}(t) :=T_{r}^{o}+A_{o}\sin(\Omega t+\phi), \tag{6}\]
\(A_{o}\), \(\Omega\) and \(\phi\) are the amplitude, frequency and phase difference between the driving protocols, respectively. Note that this theory is valid under the adiabatic evolution assumption, where the individual decay timescales of the system and reservoirs are well separated.
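As a minimal numerical sketch of this setup (our own illustration, not code from the paper; the parameter values follow the figure caption, while the SI unit conventions are an assumption), the squeezed, driven rates of Eqs. (3)-(6) and the counting-field-dressed generator of Eq. (2) can be assembled as follows:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K

# Parameter values from the figure caption (unit interpretation is our assumption)
OMEGA_O = 7.4 * np.pi * 1e12          # site frequency omega_o
GAMMA_L = GAMMA_R = 1e15              # couplings gamma_l = gamma_r = 1000 THz
OMEGA_DRV = 1e14                      # driving frequency Omega = 100 THz
A_O, PHI = 100.0, np.pi / 4           # driving amplitude [K] and phase
T_L0, T_R0 = 300.0, 250.0             # mean reservoir temperatures [K]

def bose(T):
    """Bose-Einstein occupation n(T) at the site frequency omega_o."""
    return 1.0 / np.expm1(HBAR * OMEGA_O / (KB * T))

def rates(t, x_l, x_r):
    """Time-dependent squeezed rates alpha_nu(t), beta_nu(t) of Eqs. (3)-(4),
    with the driven temperatures of Eqs. (5)-(6)."""
    T_l = T_L0 + A_O * np.cos(OMEGA_DRV * t + PHI)
    T_r = T_R0 + A_O * np.sin(OMEGA_DRV * t + PHI)
    out = {}
    for nu, (T, x, g) in {"l": (T_l, x_l, GAMMA_L), "r": (T_r, x_r, GAMMA_R)}.items():
        n = bose(T)
        out["alpha_" + nu] = g * (np.cosh(2 * x) * (n + 0.5) + 0.5)
        out["beta_" + nu] = g * (np.cosh(2 * x) * (n + 0.5) - 0.5)
    return out

def tilted_liouvillian(lam, t, x_l, x_r):
    """2x2 counting-field-dressed generator of Eq. (2), acting on (rho_11, rho_00)."""
    r = rates(t, x_l, x_r)
    return np.array([
        [-(r["alpha_l"] + r["alpha_r"]), r["beta_l"] * np.exp(lam) + r["beta_r"]],
        [r["alpha_l"] * np.exp(-lam) + r["alpha_r"], -(r["beta_l"] + r["beta_r"])],
    ])
```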
In the steady state, when \(\lambda=0\), a zero eigenvalue is obtained from the r.h.s of Eq.(2); at finite \(\lambda\) this branch becomes \(\zeta_{o}(\lambda,t)\). From this eigenvalue, a cumulant generating function, \(S(\lambda)\), defined within the domain \(\lambda\in\{-\infty,\infty\}\), can be constructed, which allows evaluation of the \(n\)-th order cumulants,
Figure 1: (a) Schematic diagram of two squeezed harmonic baths interacting with a bosonic site with two Fock states (\(|0\rangle\) and \(|1\rangle\)). The temperatures of the two squeezed baths are time-dependent via an amplitude-modulated, phase-different driving protocol as per Eqs. (5) and (6), and (b) represents a circle in the parameter space of \(T_{\ell}\) and \(T_{r}\) with \(T_{\ell}^{0}=300K,T_{r}^{0}=250K\). Squeezing dependent (c) dynamic and (d) geometric cumulant generating functions with squeezing parameters (\(x_{\ell},x_{r}\)) = (0,0), (0.7,0), (0,0.7) and (\(\pi,\pi\)), ordered from the outermost to the innermost curve in (c) and in order of decreasing magnitude in (d). The other parameters are fixed throughout the manuscript at \(\omega_{o}=7.4\pi THz\), \(\gamma_{\ell}=\gamma_{r}=1000THz\), \(\Omega=100THz\), \(A_{o}=100,\phi=\pi/4\).
\(j^{(n)}=\partial_{\lambda}^{n}S(\lambda)|_{\lambda=0}\)[40]. In the presence of a phase-different driving protocol, \(S(\lambda)\) is known to be additively separable into two components, one dynamic (\(S_{d}(\lambda,t)\)) and a geometric term \(S_{g}(\lambda,t)\). The geometric term \(S_{g}(\lambda,t)\) is essentially the source of geometricity in such driven dynamics and is obtainable from the left eigenvector (\(\langle L_{o}(\lambda,t)|\)) and the right eigenvector (\(|R_{o}(\lambda,t)\rangle\)) of the r.h.s of Eq.(1) for the eigenvalue \(\zeta_{o}(\lambda,t)\). It is nonexistent if the two parameters (Eqs. (5) and (6)) are driven without any phase difference, i.e. \(\phi=0\)[17]. Both the dynamic and geometric cumulants can be evaluated as [7; 8; 9; 10; 11; 12]
\[j_{d}^{(n)} = \partial_{\lambda}^{n}S_{d}(\lambda)|_{\lambda=0}=\frac{1}{t_{p}}\int_{0}^{t_{p}}\partial_{\lambda}^{n}\zeta_{o}(\lambda,t)|_{\lambda=0}\,dt \tag{7}\] \[j_{g}^{(n)} = \partial_{\lambda}^{n}S_{g}(\lambda)|_{\lambda=0}=-\frac{1}{t_{p}}\int_{0}^{t_{p}}\partial_{\lambda}^{n}\langle L_{o}(\lambda,t)|\dot{R}_{o}(\lambda,t)\rangle|_{\lambda=0}\,dt\] (8) \[=-\partial_{\lambda}^{n}\oiint_{S}{\cal F}_{T_{\ell}T_{r}}(\lambda)\,dT_{\ell}\,dT_{r}\Big{|}_{\lambda=0} \tag{9}\]
with \(t_{p}=2\pi/\Omega\) being the time period of the chosen external driving (Eqs. (5) and (6)). In Eq. (9), the integrand, \({\cal F}_{T_{\ell}T_{r}}(\lambda)\), is known as the geometric curvature and is analogous to the Pancharatnam-Berry curvature [7; 8; 9; 10; 11; 12] in the \(T_{\ell},T_{r}\) surface, \(S\). Here, \(n=1\) and \(2\) correspond to the flux and noise, respectively. Both quantities depend on the squeezing parameters, \(x_{\ell}\), \(x_{r}\), through the modified rates in Eq.(3) and Eq.(4). The dynamic and geometric cumulant generating functions are shown in Fig.(1c,d) for different squeezing parameters.
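Continuing the sketch above (and reusing its `tilted_liouvillian` and driving parameters), the dynamic and geometric cumulants of Eqs. (7)-(8) can be estimated by finite differences in \(\lambda\) and a trapezoidal integral over one driving period; the step sizes and the sign handling of the eigenvectors are our own illustrative choices:

```python
def dominant_eig(lam, t, x_l, x_r):
    """zeta_o(lam, t) and its biorthonormalised left/right eigenvectors."""
    L = tilted_liouvillian(lam, t, x_l, x_r)
    w, vr = np.linalg.eig(L)
    wl, vl = np.linalg.eig(L.T)
    R = vr[:, np.argmax(w.real)].real
    Lv = vl[:, np.argmax(wl.real)].real
    Lv = Lv / (Lv @ R)                      # enforce <L_o|R_o> = 1
    return np.max(w.real), Lv, R

def dyn_geo_cumulants(x_l, x_r, n_t=400, h=1e-3):
    """Dynamic and geometric flux and noise via Eqs. (7)-(8)."""
    t_p = 2 * np.pi / OMEGA_DRV
    ts = np.linspace(0.0, t_p, n_t)
    dt = ts[1] - ts[0]
    zeta = np.zeros((3, n_t))               # zeta_o(lam, t) on a 3-point lambda stencil
    geo = np.zeros((3, n_t))                # <L_o | d_t R_o>(lam, t)
    for a, lam in enumerate((-h, 0.0, h)):
        R_prev = None
        for b, t in enumerate(ts):
            z, Lv, R = dominant_eig(lam, t, x_l, x_r)
            if R_prev is not None and R @ R_prev < 0:   # fix arbitrary eigenvector sign
                R, Lv = -R, -Lv
            zeta[a, b] = z
            if R_prev is not None:
                geo[a, b] = Lv @ (R - R_prev) / dt      # forward difference for d_t R_o
            R_prev = R
    # lambda-derivatives at lambda = 0 by central differences, then period average
    j_d1 = np.trapz((zeta[2] - zeta[0]) / (2 * h), ts) / t_p
    j_d2 = np.trapz((zeta[2] - 2 * zeta[1] + zeta[0]) / h**2, ts) / t_p
    j_g1 = -np.trapz((geo[2] - geo[0]) / (2 * h), ts) / t_p
    j_g2 = -np.trapz((geo[2] - 2 * geo[1] + geo[0]) / h**2, ts) / t_p
    return j_d1, j_d2, j_g1, j_g2
```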
## III Results and Discussion
By evaluating the eigensystem of Eq.(2), we can identify the dominant eigenvalue \(\zeta_{o}(\lambda,t)\), the branch that vanishes at \(\lambda=0\) (see the appendix, Eq.(A8)), from which we numerically obtain the dynamic flux and noise using Eq.(7). The behavior of the two dynamic cumulants (\(n=1,2\)) is shown in Fig. (2a and b) for equal initial temperatures. The quantitative behavior is not very different from the undriven case [49], apart from a change in magnitude. These also retain the symmetry (antisymmetry) of the even (odd) cumulants with respect to the exchange of the left and right squeezing parameters under the equal initial temperature (\(T_{\ell}^{0}=T_{r}^{0}\)) setting, as well as the saturating behavior observed earlier for the undriven case [49]. The solid lines in Fig.(2) are evaluated by keeping \(x_{\ell}\) fixed while \(x_{r}\) is varied. The dotted lines represent the case when the two values are exchanged (\(x_{\ell}\to x_{r}\) while \(x_{\ell}\) is varied), denoted by the symbol \(x_{\ell}\leftrightarrow x_{r}\) in the abscissa. This is simply because the rates that affect the dynamic cumulants are just scaled by the hyperbolic cosine functions (Eqs. (3) and (4)), which does not alter the overall mathematical structure of the eigenvalue \(\zeta_{o}(\lambda,t)\) (Eq.(A8)). In the figures, we have also denoted the cumulants in the absence of squeezing (\(x_{\nu}=0\)) and driving as \(j_{o}^{(n)}\), defining a dimensionless ratio \(C_{d/g}^{(n)}:=j_{d/g}^{(n)}/j_{o}^{(n)}\). When \(|C_{d/g}^{(n)}|>(<)1\), the squeezing increases (decreases) the value of the cumulant in comparison to the unsqueezed and undriven case.
On substituting the left and right eigenvectors of \(\tilde{\cal L}(\lambda,t)\) in the geometricity term \(\langle L(\lambda,t)|\dot{R}(\lambda,t)\rangle\), of Eq.(8) we can identify the geometric flux and geometric noise. The geometric flux is given by,
\[j_{g}^{(1)}=-\frac{\Omega}{2\pi}\int_{0}^{t_{p}}\frac{2\Gamma\cosh(2x_{\ell})\cosh(2x_{r})(X_{r}^{+}-X_{\ell}^{+})}{(\gamma_{\ell}X_{\ell}^{+}+\gamma_{r}X_{r}^{+})^{3}}\,dt \tag{10}\]
and shown in Fig. (2c). Note that the geometric flux decays to zero at higher values of the squeezing parameter. The geometric noise is given by
\[j_{g}^{(2)}=-\frac{\Omega}{2\pi}\int_{0}^{t_{p}}\frac{12\Gamma^{2}\cosh(2x_{\ell})\cosh(2x_{r})(X_{r}^{+}-X_{\ell}^{+})}{(\gamma_{\ell}+\gamma_{r})(\gamma_{\ell}X_{\ell}^{+}+\gamma_{r}X_{r}^{+})^{5}}\,dt \tag{11}\]
with \(\Gamma=\gamma_{\ell}\gamma_{r}(\gamma_{\ell}+\gamma_{r})\) and \(X_{\nu}^{\pm}:=\cosh(2x_{\nu})(2n_{\nu}(t)\pm 1)\). The r.h.s of Eq. (10) and Eq.(11) are evaluated as a function of the squeezing parameters, and the scaled functions are shown in Fig. (2c) and Fig. (2d), respectively. Both geometric cumulants are observed to decay to zero as the squeezing parameters are increased. Further, it is also observed that the geometric flux (odd cumulant) is symmetric with respect to exchanging the squeezing parameters while the second cumulant is antisymmetric, contrary to the behavior of the dynamic cumulants.
Figure 2: (a) Behavior of the absolute dynamic flux, \(j_{d}^{(1)}\) as a function of the two reservoirs’ squeezing parameters (the \(\leftrightarrow\) indicates exchanging the values of the two parameters \(x_{\ell}\) and \(x_{r}\)) evaluated at equal initial temperatures \(T_{\ell}^{0}=T_{r}^{0}=300K\). The solid (dotted) lines are when \(x_{\ell}\) is fixed (\(x_{\ell}\) is changed to \(x_{r}\)). Note the antisymmetry due to the exchange \(x_{\ell}\leftrightarrow x_{r}\). (b) Behavior of the scaled dynamic noise (second cumulant). Note the equality upon exchanging the \(x_{\ell}\) and \(x_{r}\) values. Plot of geometric scaled flux ((c)) and noise ((d)) highlighting the equality and antisymmetry upon exchanging the squeezing parameters.
The symmetric behavior upon exchanging \(x_{\ell}\) and \(x_{r}\) in the geometric flux arises because the denominator of the integrand on the r.h.s of Eq.(10) is symmetric with respect to the exchange. The noise is antisymmetric with respect to the exchange because the numerator imparts a negative sign upon exchanging the squeezing parameters. It is interesting to note that both the exchange symmetry and the antisymmetry do not hold when the initial temperatures are different. This is shown graphically in Fig.(3a and b). Note that, in Eq.(11), when \(X_{r}^{+}=X_{\ell}^{+}\), we obtain \(j_{g}^{(2)}=0\). This condition can be triggered by controlling the squeezing parameters \(x_{\ell}\) and \(x_{r}\) and can be seen as the zero line along the diagonal (\(x_{\ell}=x_{r}\)) of the contour plot in Fig.(3d). Under this same condition, the integral in Eq.(10) is however nonzero, and one observes geometric flux without geometric fluctuations.
We now move on to explain why the geometric effects in the flux and fluctuations vanish at higher squeezing values as seen in Fig.(2c,d) and Fig.(3). This is because \(S_{g}(\lambda)\) vanishes at higher values of \(x_{\ell},x_{r}\), as seen in Fig. (1d). The geometric curvature, in the present model is of the form,
\[F_{T_{\ell}T_{r}}(\lambda)=-\frac{2\Gamma C_{\ell}C_{r}\sin(\lambda)}{\{K+4f( \lambda)\}^{3/2}} \tag{12}\]
with
\[C_{\nu} =\frac{\hbar\omega}{k_{B}T_{\nu}^{2}}e^{\hbar\omega/k_{B}T_{\nu}}\left((n_{\nu}+1/2)\cosh(2x_{\nu})-\frac{1}{2}\right) \tag{13}\] \[K =\sum_{\nu=\ell,r}2\gamma_{\nu}\cosh(2x_{\nu})\left(n_{\nu}+\frac{1}{2}\right)\] (14) \[f(\lambda) =\prod_{\nu=\ell,r}\gamma_{\nu}\left(\cosh(2x_{\nu})(n_{\nu}+1/2)-1/2\right)\] \[\quad\times\left[e^{\hbar\omega/k_{B}T_{r}}(e^{\lambda}-1)+e^{\hbar\omega/k_{B}T_{\ell}}(e^{-\lambda}-1)\right] \tag{15}\]
and is analogous to the known expression for the un-squeezed case (\(x_{\nu}=0\)) [49]. \(F_{T_{\ell}T_{r}}(\lambda)\) is finite for the unsqueezed case around \(\lambda=0\). At low values of \(\lambda\), the \(\sin(\lambda)\) term dominates over the denominator's \(f(\lambda)\) term, resulting in the typical modified sinusoidal shape already reported. In the present case too, at lower values of squeezing (\(x_{\ell},x_{r}\approx 0\)) around \(\lambda=0\), such behavior is observed for \(S_{g}(\lambda)\), as shown in Fig. (1d). As \(x_{\nu}\) is increased, the hyperbolic terms from the squeezing parameters start contributing more to Eq. (12) around \(\lambda=0\) and change the overall geometricity. These squeezing parameters can now be used to gain control over, or steer, the underlying geometric statistics.
Note that, in general, the mathematical structure of \(F_{T_{\ell}T_{r}}\) in Eq.(12) is such that the numerator (denominator) has an overall squared (three-halves power) dependence on the hyperbolic cosine terms. This structure dictates that, as one keeps squeezing the reservoirs, the denominator keeps increasing and hence the amplitude (quantified by the coefficients) of the \(\sin(\lambda)\) term keeps reducing, which results in a lower slope around \(\lambda=0\). This causes the geometric flux and subsequent cumulants to keep reducing and finally vanish, as shown in Fig.(1d). We can safely conclude that squeezing the reservoirs reduces the geometricity of the driven system. In this high-squeezing limit, even upon increasing the frequency of the phase-different driving, \(\Omega\gg 1\), the statistics of exchange is solely governed by the dynamicity of the system, i.e. \(S_{d}(\lambda)\). We can prove this analytically by considering the following limiting case. Under the assumption that \(n_{\nu}\ll 1/2\) (low temperature regime), we have,
\[C_{\nu}|_{n_{\nu}\ll 1/2} \propto\cosh(2x_{\nu})-1 \tag{16}\] \[K|_{n_{\nu}\ll 1/2} \propto\sum_{\nu}\cosh(2x_{\nu})\] (17) \[f(\lambda)|_{n_{\nu}\ll 1/2} \propto\prod_{\nu}(\cosh(2x_{\nu})-1) \tag{18}\]
which results in
\[F_{T_{\ell}T_{r}}|_{n_{\nu}\ll 1/2} \propto\frac{\sin(\lambda)}{\sqrt{\sum_{\nu}\cosh^{3}(2x_{\nu})} \sqrt{\prod_{\nu}(\cosh(2x_{\nu})-1)}} \tag{19}\]
Figure 3: Plot highlighting absence of symmetry and antisymmetry in the geometric flux (a) and noise (b) under unequal initial temperatures upon exchanging the squeezing parameters. Contour plots showing the vanishing geometric flux (c) and noise (d) at higher values of squeezing parameters. Note the zero values along the diagonal.
In the above expression, taking either of the two limits, \(x_{\ell}\rightarrow\infty\) or \(x_{r}\rightarrow\infty\), results in the r.h.s being zero. Thus, squeezing the reservoirs to their extremum kills the geometric curvature or the geometricity, resulting in \(S_{g}(\lambda)=0\). The complete contour plots of the two geometric cumulants \(C_{g}^{(1)}\) and \(C_{g}^{(2)}\) as a function of \(x_{\ell}\) and \(x_{r}\) are shown in Fig. (3c and d). In both plots, the geometric effects vanish at higher values of squeezing.
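To make this limiting behavior explicit (a one-line estimate of our own, consistent with Eq. (19)): using \(\cosh(2x_{\nu})\simeq e^{2x_{\nu}}/2\) for large \(x_{\nu}\),

\[F_{T_{\ell}T_{r}}\big{|}_{n_{\nu}\ll 1/2}\;\propto\;\frac{\sin(\lambda)}{\sqrt{\sum_{\nu}\cosh^{3}(2x_{\nu})}\,\sqrt{\prod_{\nu}(\cosh(2x_{\nu})-1)}}\;\sim\;\mathcal{O}\!\left(e^{-4x_{\ell}}\right)\sin(\lambda)\;\xrightarrow{x_{\ell}\rightarrow\infty}\;0,\]

with the analogous exponential suppression in \(x_{r}\), so the curvature, and hence \(S_{g}(\lambda)\), is quenched exponentially fast in either squeezing parameter.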
## IV Thermodynamic Uncertainty Relationship
For an undriven case, \(j_{g}^{(n)}=0\) (when \(\Omega=0\) or \(\phi=0\)), a standard thermodynamic uncertainty relationship (TUR), reminiscent of a steadystate fluctuation theorem, holds, given by \(F\mathcal{A}\geq 2k_{B}\)[44; 50], with \(F=j^{(2)}/j^{(1)}\) being the Fano factor while \(\mathcal{A}\) is the thermodynamic affinity of the system. This TUR has been shown not to hold in the presence of geometric effects [14].
In the present case, one can recover the standard TUR in the high squeezing limit of either reservoir. Under maximum squeezing, \(F_{T_{\ell}T_{r}}(\lambda)=0\), which kills the geometric contributions to the system statistics. We can hence recover a Gallavotti-Cohen symmetry,
\[\lim_{x_{\nu}\rightarrow\infty}\frac{1}{t_{p}}\int_{0}^{t_{p}} \zeta_{o}(\lambda,t)dt\\ =\lim_{x_{\nu}\rightarrow\infty}\frac{1}{t_{p}}\int_{0}^{t_{p}} \zeta_{o}(-\lambda-\lim_{x_{\nu}\rightarrow\infty}\mathcal{A},t)dt, \tag{20}\]
with
\[\mathcal{A}=\log\left(\frac{\int_{0}^{t_{p}}X_{\ell}^{-}X_{r}^{+}dt}{\int_{0}^ {t_{p}}X_{\ell}^{+}X_{r}^{-}dt}\right) \tag{21}\]
where the time and squeezing dependent quantities \(X_{\nu}^{\pm}\) are defined in the text below Eq.(11). Eq.(21) reduces to the known expression \(1/T_{\ell}-1/T_{r}\) in the absence of driving [7], which leads to a steadystate fluctuation theorem. The recovery of this symmetry hence allows us to restore the standard TUR,
\[\lim_{x_{\nu}\rightarrow\infty}\mathcal{A}\frac{\lim_{x_{\nu}\rightarrow \infty}j_{d}^{(2)}}{\lim_{x_{\nu}\rightarrow\infty}j_{d}^{(1)}}\geq 2k_{B} \tag{22}\]
In the case of finite (but not maximal) squeezing, the geometricities are still present. TUR in such a case has been shown to get modified by including a geometric correction factor [18],
\[\frac{j^{(2)}\Sigma}{(j^{(1)})^{2}g(\Omega)}\geq 2k_{B} \tag{23}\]
where, \(g(\Omega)\) is the driving dependent geometric correction factor and is of the form,
\[g(\Omega)=\frac{1}{(1+j_{g}^{(1)}/j_{d}^{(1)})^{2}} \tag{24}\]
We numerically evaluate Eq.(24) and plot \(g(\Omega)\) as a function of \(x_{\ell}\) in Fig.(4a), where a discontinuity is observed at \(x_{\ell}=0.7\). This discontinuity occurs at the value of \(x_{\ell}\) where \(\mathcal{A}=0\) (\(e^{\mathcal{A}}\) is shown as a vertically slanted line), which results in \(j_{d}^{(1)}=0\) in Eq.(24). Further, for any fixed value of \(j_{g}^{(1)}\), the correction factor in Eq.(24) is greater (less) than unity when \(j_{d}^{(1)}<(>)0\), and vice versa. We have earlier shown that, for an undriven case, the direction of the dynamic flux \(j_{d}^{(1)}\) is controllable through the squeezing parameters due to the modification of the thermodynamic affinity, \(\mathcal{A}\)[49]. Thus, by controlling \(x_{\ell}\), we observe regions where \(g(\Omega)>1\) and \(g(\Omega)<1\), characterized by a shift between these two regions at the value of \(x_{\ell}\) where \(\mathcal{A}=0\), as seen in Fig.(4a). The curve below unity is evaluated by maintaining positive geometric flux and \(\mathcal{A}>0\) (\(T_{\ell}=T_{r},x_{\ell}=0.7,x_{r}=0\)), so that \(g(\Omega)<1\). In a standard context without squeezing, as in the work of Lu et al [18] where \(g(\Omega)\) was introduced, the dynamic flux depends solely on the temperature gradient, which controls the thermodynamic affinity, and \(g(\Omega)\) is a continuous function either below or above unity depending on the signs of the dynamic and geometric fluxes.
We state that this observed discontinuity does not lead to a violation of the modified TUR, Eq. (23). Although not highlighted in the earlier work [18], the continuity of \(g(\Omega)\) in Eq. (23) as a function of a system parameter is rather limited to positive dynamic flux, characterized by \(\mathcal{A}>0\). As long as we maintain \(\mathcal{A}>0\), by properly choosing \(x_{\ell}\) and \(x_{r}\) values, the modified TUR given by Eq.(23) always holds within these two separate regions.
Figure 4: (a) Behavior of the geometric correction factor, \(g(\Omega)\) to the TUR as a function of \(x_{\ell}\). Note the singularity at \(x_{\ell}=0.7\). At this value of \(x_{\ell}\), the thermodynamic force, \(\mathcal{A}=0\) (dotted line). (b) Behavior of the minimum entropy produced. It is also zero at \(x_{\ell}=0.7\). (c) Contour of \(\Sigma_{min}\) as a function of squeezing parameters. The region where it is zero is where \(\mathcal{A}=0\). (d) Symmetry in the minimum entropy upon exchanging the values of the squeezing parameters.
By maintaining \(\mathcal{A}>0\), we directly estimate the minimum entropy production, \(\Sigma_{min}\), in the presence of geometricities,
\[\Sigma_{min}=2k_{B}\frac{\left(j_{d}^{(1)}+j_{g}^{(1)}\right)^{2}}{j_{d}^{(2)}+ j_{g}^{(2)}}g(\Omega) \tag{25}\]
In the above equation, it is not possible to separate the entropy rates into dynamic and geometric contributions. Although when \(\Omega\gg 1\), \(j_{g}^{(1)}\gg j_{d}^{(1)}\), the same cannot be said for the second cumulant, which makes the denominator in Eq.(25) contain combined dynamic and geometric contributions. Nonetheless, the modified TUR allows an easy way to evaluate the total minimum entropy production rate. Note that, in the presence of geometricities, evaluation of entropies with contributions from both dynamic and geometric components is not at all straightforward [51] due to the production of excess entropies. We evaluate Eq.(25) and plot it as a function of the left reservoir's squeezing parameter in Fig.(4b, c and d). The dependence of \(\Sigma_{min}\) on \(x_{\ell}\) is nonlinear and saturates at higher values. In Fig.(4c), we show a contour map of \(\Sigma_{min}\) for a wide range of \(x_{\ell}\) and \(x_{r}\) values. There exists a wide region around the diagonal of the contour where \(\mathcal{A}\approx 0\), which results in \(\Sigma_{min}=0\). This region is actually not allowed since \(g(\Omega)\) is not defined there. One shouldn't substitute this zero value in Eq. (23) and claim it as a violation of the TUR. In Fig.(4d), we show the existence of the exchange symmetry (\(x_{\ell}\leftrightarrow x_{r}\)) in the entropy under the equal temperature setting.
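As a minimal helper (our own sketch, assuming the four cumulants are available, e.g. from the numerical sketch in Sec. II), the correction factor of Eq. (24) and the minimum entropy production of Eq. (25) can be evaluated together; the modified TUR, Eq. (23), then amounts to the statement \(\Sigma\geq\Sigma_{min}\) for the actual entropy production \(\Sigma\):

```python
KB = 1.380649e-23  # Boltzmann constant [J/K]

def geometric_tur(j_d1, j_d2, j_g1, j_g2):
    """g(Omega) of Eq. (24) and Sigma_min of Eq. (25).
    Note g(Omega) is undefined when j_d1 -> 0, i.e. when the affinity A vanishes,
    which is the discontinuity discussed around Fig. 4(a)."""
    g_omega = 1.0 / (1.0 + j_g1 / j_d1) ** 2
    sigma_min = 2.0 * KB * (j_d1 + j_g1) ** 2 / (j_d2 + j_g2) * g_omega
    return g_omega, sigma_min
```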
## V Conclusion
We employ a full counting statistical method to derive a tilted driven quantum master equation for a simple bosonic site coupled to two squeezed harmonic reservoirs. The temperatures of the two squeezed reservoirs are assumed to be adiabatically driven with a phase-different driving protocol. This allowed us to explore the combined effect of the squeezing parameters and the geometricities, or geometric phaselike contributions, on the steadystate observables, the flux (first cumulant) and the noise (second cumulant). The dynamic cumulants exhibit qualitative behavior as a function of the squeezing parameters similar to what is already known for the undriven scenario, albeit with modified magnitudes. The geometric cumulants are, however, affected by the squeezing parameters. The odd (even) geometric cumulants are found to be antisymmetric (symmetric) with respect to exchanging the left and right squeezing parameters when the initial thermal gradient is maintained at zero. These also decay to zero as we keep increasing the strength of the reservoirs' squeezing parameters. This is because increased squeezing suppresses the generation of geometricity in the cumulant generating function. Hence, under maximum squeezing, one can recover a standard steadystate fluctuation theorem, which also leads to a standard thermodynamic uncertainty relation even in the presence of a phase-different driving protocol. Using a recently proposed modified geometric thermodynamic uncertainty principle, which is robust in the presence of squeezing, we estimate the minimum entropy production rate at finite values of dynamic flux. This minimum entropy production rate cannot be separated into dynamic and geometric contributions. It exhibits a saturating behavior and is also symmetric with respect to the exchange of the left and right squeezing parameters under a zero initial thermal gradient scenario.
###### Acknowledgements.
HPG acknowledges the support from the University Grants Commission, New Delhi for the startup research grant, UGC(BSR), Grant No. F.30-585/2021(BSR) and the Science and Engineering Research Board for the startup grant with file number SERB/SRG/2021/001088.
## Appendix A Appendix
The Hamiltonian of the bare site interacting with two bosonic reservoirs can be written as,
\[\hat{H}=\hbar\omega_{o}\hat{b}^{\dagger}\hat{b}+\sum_{i,\nu\in L,R}\hbar\omega_{i\nu}\hat{a}_{i\nu}^{\dagger}\hat{a}_{i\nu}+\hat{V}, \tag{A1}\]
with
\[\hat{V}=\sum_{i,\nu\in L,R}k_{i}^{\nu}(\hat{a}_{i\nu}^{\dagger}\hat{b}+\hat{a}_{i\nu}\hat{b}^{\dagger}) \tag{A2}\]
Here, \(\hbar\omega_{o}\hat{b}^{\dagger}\hat{b}\) is the on-site Hamiltonian with bare frequency \(\omega_{o}\), while \(\hat{b}^{\dagger}(\hat{b})\) is the bosonic creation (annihilation operator) on the site. The second term is the reservoir Hamiltonian with squeezed harmonic states and is a sum of two terms that represent the left (L) and right (R) squeezed reservoirs. The single particle operators \(\hat{a}_{i\nu}^{\dagger}(\hat{a}_{i\nu})\) represent the creation (annihilation) of a boson in the i-th mode from (of) the \(\nu\)-th bath. \(\hat{V}\) is the system bath coupling Hamiltonian with \(k_{i}^{\nu}\) being the coupling constant for the i-th squeezed mode of the \(\nu\)th bath to the bare site mode. The squeezed density matrix for the \(\nu\)-th reservoir (\(\hat{H}_{\nu}\) being the \(\nu\)th reservoir Hamiltonian) is given by
\[\hat{\rho}_{\nu} = \frac{1}{Z}\exp\{-\beta_{\nu}(t)\hat{S}_{\nu}\hat{H}_{\nu}\hat{S}_{\nu}^{\dagger}\}, \tag{A3}\] \[\hat{S}_{\nu} = \prod_{k}e^{\frac{1}{2}(x_{\nu}^{*}\hat{a}_{k\nu}^{\dagger 2}-h.c)}. \tag{A4}\]
\(\beta_{\nu}(t)=(k_{B}T_{\nu}(t))^{-1}\) is the inverse temperature and \(\hat{S}_{\nu}\) is the squeezing operator acting on the \(k\)-th mode of the \(\nu\)-th bath, with \(x_{\nu}\) being the \(\nu\)-th reservoir's squeezing parameter [26; 47; 49; 52]. Assuming that the initial density matrix is factorisable and that there is a good separation between the system and reservoir timescales (adiabaticity) [47], we can write down two adiabatic Pauli-type master equations, with time-dependent squeezed rates,
\[\dot{\rho}_{11} =-(\gamma_{L}(1+N_{L}(t))+\gamma_{R}(1+N_{R}(t)))\rho_{11}+(\gamma_{L}N_{L}(t)+\gamma_{R}N_{R}(t))\rho_{00} \tag{A5}\] \[\dot{\rho}_{00} =(\gamma_{L}(1+N_{L}(t))+\gamma_{R}(1+N_{R}(t)))\rho_{11}-(\gamma_{L}N_{L}(t)+\gamma_{R}N_{R}(t))\rho_{00} \tag{A6}\]
where \(\langle m|\rho|m\rangle=\rho_{mm}\) represents the probability of occupation of the occupied and unoccupied Fock states. Note that the populations and coherences are decoupled and the equations are effectively classical albeit with quantum mechanical rates [53]. The driving dependent, squeezed occupation factors are given by [47],
\[N_{\nu}(t)=\big{(}\cosh(2x_{\nu})(n_{\nu}(t)+\frac{1}{2})-\frac{1}{2}\big{)} \tag{A7}\]
where \(n_{\nu}(t)\) is the driven Bose function for the \(\nu\)-th squeezed bath. We can now recast the above two equations in the Liouville space and, following the standard procedure of FCS by introducing the auxiliary counting field, \(\lambda\), to keep track of the net number of bosons exchanged, \(q\)[46; 40], we arrive at Eq.(2), where the quantum mechanical rates have been redefined as \(\alpha_{\nu}(t)=\gamma_{\nu}(1+N_{\nu}(t))\) and \(\beta_{\nu}(t)=\gamma_{\nu}N_{\nu}(t)\). The \(\lambda\)-dependent eigenvalue of Eq.(2) that reduces to zero at \(\lambda=0\) is given by,
\[\zeta_{o}(\lambda,t) =-(\gamma_{\ell}X_{\ell}^{+}+\gamma_{r}X_{r}^{+})+\sqrt{(\gamma_{\ell}+\gamma_{r})^{2}+(\gamma_{\ell}X_{\ell}^{-}+\gamma_{r}X_{r}^{-})f(\lambda)} \tag{A8}\] \[X_{\nu}^{\pm} =\cosh(2x_{\nu})(2n_{\nu}(t)\pm 1),\quad\nu=\ell,r\] (A9) \[f(\lambda) =(\gamma_{\ell}e^{-\lambda}(1+X_{\ell}^{+})+\gamma_{r}e^{2\lambda}(X_{r}^{+})) \tag{A10}\]
from which the dynamic flux and noise can be numerically evaluated using Eq. (7).
---

# Efficient Methods for Approximating the Shapley Value for Asset Sharing in Energy Communities

Sho Cremers, Valentin Robu, Peter Zhang, Merlinda Andoni, Sonam Norbu, David Flynn

2022-12-31 | arXiv:2301.00174v1 | http://arxiv.org/abs/2301.00174v1
###### Abstract
With the emergence of energy communities, where a number of prosumers invest in shared generation and storage, the issue of fair allocation of benefits is increasingly important. The Shapley value has attracted increasing interest for redistribution in energy settings - however, computing it exactly is intractable beyond a few dozen prosumers. In this paper, we first conduct a systematic review of the literature on the use of Shapley value in energy-related applications, as well as efforts to compute or approximate it. Next, we formalise the main methods for approximating the Shapley value in community energy settings, and propose a new one, which we call the _stratified expected value_ approximation. To compare the performance of these methods, we design a novel method for _exact_ Shapley value computation, which can be applied to communities of up to several hundred agents by clustering the prosumers into a smaller number of demand profiles. We perform a large-scale experimental comparison of the proposed methods, for communities of up to 200 prosumers, using large-scale, publicly available data from two large-scale energy trials in the UK (UKERC Energy Data Centre, 2017, UK Power Networks Innovation, 2021). Our analysis shows that, as the number of agents in the community increases, the relative difference to the exact Shapley value converges to under 1% for all the approximation methods considered. In particular, for most experimental scenarios, we show that there is no statistical difference between the newly proposed stratified expected value method and the existing state-of-the-art method that uses adaptive sampling (O'Brien et al., 2015), although the cost of computation for large communities is an order of magnitude lower.
keywords: energy community, fair allocation, prosumer, Shapley value
## Nomenclature
**Subscripts and Sets**
\(i\): \(\quad\) For agents (households)
\(k\): \(\quad\) For classes (clusters)
\(\mathcal{N}\): \(\quad\) Set of agents in the community
\(\mathcal{S}\): \(\quad\) For subcoalitions formed by agents in community
\(RL_{i}\) Annual cost of agent \(i\) according to adaptive sampling [£]
\(K\) Number of classes of unique demands in the community
\(N_{k}\) Number of agents that belongs to class \(k\)
\(P()\) Multivariate hypergeometric distribution
\(\hat{\phi}_{k}\) Cost redistributed to class \(k\) by a given redistribution method (\(\overline{MC}_{k}\), \(\overline{SEV}_{k}\), or \(RL_{k}\)) [£]
\(RD_{\phi}(\hat{\phi}_{k})\) Relative difference of a redistributed cost to the Shapley value for class \(k\)\([\%]\)
\(RD_{\phi}(\hat{\phi})\) Average relative difference of a redistribution method to the Shapley value \([\%]\)
**Abbreviations**
DF Depreciation Factor
DoD Depth of Discharge
P2P Peer-to-Peer
RES Renewable Energy Source
SoC State of Charge
## 1 Introduction
Recent years have seen a shift towards decentralized energy systems, in which communities of prosumers (consumers with their own local renewable generation capacity and storage) satisfy more of their own energy needs from renewable energy generated from local sources. A number of regions, such as the European Union [1] and the United Kingdom [2] are providing supportive regulations to encourage communities of consumers to shift away from the dependence on centralized energy generation, and towards more decentralized and local energy generation and storage systems.
One significant recent trend is transactive energy models, which aim to achieve better coordination between production and consumption in local energy systems by use of market-based mechanisms that allow energy exchanges between energy end users and prosumers. In broad terms, there are two main models of organisation for local transactive energy systems [3]. One is peer-to-peer (P2P) energy trading systems, in which prosumers invest in their own energy assets (such as solar PV panels, wind turbines, and/or battery storage) and buy and sell energy with their neighbours directly, based on their individually-owned assets [4; 5; 6; 7]. In this scenario, each prosumer is metered separately and pays the value of its _net metered_ electricity demand (demand after using its generation and storage capacity). Another is the formation of _energy communities_, where prosumers group together and buy a shared generation resource (such as a community wind turbine) and/or a shared community battery. Here, the whole community is _"behind the meter"_, i.e. it pays for the net demand of the entire community over the billing period. The differences between the two models are illustrated in Figure 1. Each prosumer in Figure 1a owns energy sources and a battery, and individually interacts with the central power grid, in which the net demand is counted. It can be seen that the power flow between a prosumer and the utility grid is bidirectional, and any excess generation by the prosumer is sold to the grid. Furthermore, a P2P trading scheme makes buying and selling energy among peers possible, which is represented by dotted arrows in Figure 1a. On the contrary, the energy community in Figure 1b presents a group of consumers sharing energy assets and interacting with the utility grid as a single entity. Net demand is computed for the whole community. These "behind the meter" models rely on a _community aggregator_, which controls the energy assets and distributes the generated/discharged power to the households in the community. The aggregator is also in charge of receiving energy from the utility grid whenever there is a deficit and sending back energy to the grid when there is a surplus in the community.
This coalitional model around shared assets is increasingly popular, not just in academic research - for example, in Scotland, UK, Community Energy Scotland 1 identified 300+ energy communities that formed around a shared energy asset - typically a wind turbine, but similar examples exist all over the world. Such energy communities can consist of anywhere between several dozens to several hundred houses (e.g. a village or a city neighbourhood), often located on the LV network behind a local transformer. The prosumers share the outputs of jointly-owned energy assets, as well as the energy bill for the aggregate residual demand, i.e. the part of the demand not covered by the local generation and storage assets. Therefore, the community aggregator is not only responsible for the control and distribution of energy in the community but also for allocating any revenues from exporting energy and the bills of the residual demand. Clearly, one of the key challenges in this setting is the redistribution of such costs and benefits to the prosumers in a _fair_ way.
Footnote 1: [https://communityenergyscotland.org.uk/](https://communityenergyscotland.org.uk/)
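All of the redistribution methods discussed in this paper operate on a characteristic (cost) function \(v(S)\): the bill a coalition \(S\) of prosumers would face on its own. As a toy illustration only (it assumes a per-capita share of the shared generation and ignores the battery and tariff details of the full model in Section 3), such a function could look like:

```python
import numpy as np

def coalition_cost(members, demand, generation, n_total,
                   import_price=0.20, export_price=0.05):
    """Toy characteristic function v(S): annual bill of coalition S behind one meter.

    members     : iterable of agent indices forming the coalition S
    demand      : array (n_agents, n_steps) of half-hourly household demand [kWh]
    generation  : array (n_steps,) of output from the shared asset [kWh]
    n_total     : community size; the coalition gets a per-capita share (assumption)
    prices      : illustrative import/export tariffs [GBP/kWh], not the trial values
    """
    members = list(members)
    share = len(members) / n_total
    net = demand[members].sum(axis=0) - share * generation   # residual demand per step
    imported = np.clip(net, 0.0, None).sum()
    exported = np.clip(-net, 0.0, None).sum()
    return import_price * imported - export_price * exported
```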
Coalitional game theory has long studied such redistribution problems in a wide variety of systems [8]. A key concept is the Shapley value, first proposed by the Nobel prize-winning economist Lloyd Shapley [9]. The Shapley value has recently begun receiving substantial attention in energy applications, with a rapid increase in the number of papers using the Shapley value in energy systems in recent years (see Section 2). However, a key challenge with the Shapley value is that computing it is exponential in the number of agents, making exact computation intractable beyond a small number of agents.
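For reference, the textbook Shapley value can be written down directly as a sum over all sub-coalitions; the sketch below (ours, not the paper's optimised implementation) makes the combinatorial cost explicit, since each agent requires \(2^{n-1}\) evaluations of the characteristic function:

```python
from itertools import combinations
from math import factorial

def exact_shapley(n_agents, cost):
    """Exact Shapley value phi_i = sum_S |S|!(n-|S|-1)!/n! [cost(S+{i}) - cost(S)].
    `cost` maps a tuple of agent indices to the coalition's bill, e.g.
    cost = lambda S: coalition_cost(S, demand, generation, n_agents)."""
    phi = [0.0] * n_agents
    for i in range(n_agents):
        others = [j for j in range(n_agents) if j != i]
        for size in range(len(others) + 1):
            w = factorial(size) * factorial(n_agents - size - 1) / factorial(n_agents)
            for S in combinations(others, size):
                phi[i] += w * (cost(S + (i,)) - cost(S))
    return phi
```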
The prior papers dealt with this in several ways. Most consider experimental models with up to a maximum of \(\sim\)10-20 agents, to keep computations tractable. Another approach is to use some simpler heuristics for cost redistribution (e.g. [10]), but it is not clear how close these are to the exact Shapley value.
Yet another approach is to use sampling. Sampling-based approaches do have merit, and in this paper, we implemented the most advanced sampling-based method we are aware of, that of O'Brien et al. [11], which uses reinforcement learning techniques to perform adaptive sampling to calculate the Shapley value. However, they also have disadvantages: for larger settings, a very large number of samples may be needed to get a reasonable approximation of the true Shapley value, which increases the computation cost considerably. Also, in community energy applications, sampling-based methods have the disadvantage that they may not produce a consistent result if they need to be rerun for verification purposes. Community energy schemes rely on the distributed trust of the prosumers in the community, and hence on the ability to sometimes
Figure 1: Two different configurations of a community of prosumers, where (a) prosumers with their own energy assets are connected to the central grid individually, and (b) prosumers with jointly-owned assets interact with the grid as a whole through the aggregator.
rerun the calculations of the coalition coordinator, if they wish. But, as the calculation at each run depends on which random samples are drawn, results will be slightly different, even on the same data. Hence, there is an important knowledge gap about approximating the Shapley value in larger settings, which our paper aims to address. Specifically, we study both sampling-based and deterministic methods for approximating Shapley, compare their performance w.r.t. the true, exact Shapley value, and derive their computation costs. As one of the contributions of this paper, we introduce a novel redistribution method that approximates the Shapley value well within polynomial time, and compare it to existing methods.
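For comparison, the simplest sampling-based estimator averages marginal contributions over random permutations; the sketch below is this plain Monte Carlo baseline (the adaptive-sampling refinements of O'Brien et al. [11] are not reproduced here), and its dependence on the random seed is precisely the reproducibility issue discussed above:

```python
import random

def shapley_permutation_sampling(n_agents, cost, n_samples=5000, seed=0):
    """Unbiased Monte Carlo estimate of the Shapley value from sampled permutations."""
    rng = random.Random(seed)
    phi = [0.0] * n_agents
    agents = list(range(n_agents))
    for _ in range(n_samples):
        rng.shuffle(agents)
        coalition, prev = (), cost(())
        for i in agents:
            coalition = coalition + (i,)
            cur = cost(coalition)
            phi[i] += cur - prev                 # marginal contribution of agent i
            prev = cur
    return [p / n_samples for p in phi]
```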
A key open challenge in this space remains determining the "ground truth", i.e. computing the exact, true Shapley value to compare other methods to, especially for larger realistically-sized communities (e.g. dozens to several hundred prosumers). Prior approaches, like O'Brien et al. [11] use a setting of only 20 agents as a "ground truth" to compute the exact Shapley, as they naturally find larger settings unfeasible to compute with unique agents. Yet, as we show in our experiments, an approximation method that does poorly for a small number of agents (e.g. 5-20) may actually do well for a realistically sized setting of 100-200 agents. Another important contribution of this paper is that we develop a method to compute the _exact_ Shapley value for larger communities, up to 200 agents. Intuitively, the core idea behind the method (see Section 4.2) is to cluster the agents in a much smaller number of consumption profiles, and use the symmetry of the combinations of agents to greatly reduce the cost of exact Shapley calculation.
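The details of our exact method are given in Section 4.2 (not reproduced in this excerpt); the sketch below illustrates only the underlying symmetry idea, under the assumption that agents within a cluster have identical profiles so that \(v\) depends only on the per-class counts. The weights are written here as explicit binomial products, which is equivalent to the multivariate hypergeometric formulation listed in the nomenclature:

```python
from itertools import product
from math import comb, factorial

def exact_shapley_by_class(class_sizes, class_cost):
    """Exact per-class Shapley value when agents are clustered into K identical profiles.

    class_sizes : [N_1, ..., N_K], number of agents per class
    class_cost  : cost of a coalition described by its per-class counts (s_1, ..., s_K)
    Enumerates prod_k (N_k + 1) count vectors instead of 2^n subsets."""
    n = sum(class_sizes)
    phi = [0.0] * len(class_sizes)
    for k in range(len(class_sizes)):
        # agents available to form S once one member of class k is singled out
        avail = [N - (1 if j == k else 0) for j, N in enumerate(class_sizes)]
        for counts in product(*(range(N + 1) for N in avail)):
            m = sum(counts)
            weight = factorial(m) * factorial(n - m - 1) / factorial(n)
            multiplicity = 1
            for N, s in zip(avail, counts):
                multiplicity *= comb(N, s)       # number of subsets with these counts
            with_k = tuple(c + (1 if j == k else 0) for j, c in enumerate(counts))
            phi[k] += weight * multiplicity * (class_cost(with_k) - class_cost(counts))
    return phi
```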
Finally, as part of our contributions, we implemented our method in realistic community case studies, both in terms of demand, generation and battery data used, and in terms of size (up to 200 households), granularity and duration (half-hourly data over a whole year). We used two different datasets, both containing household energy consumption data in the UK and the corresponding wind generation and battery data. One draws data from the Thames Valley vision trial [12] while the other draws data from the Low Carbon London project [13]. This provides a highly realistic case study to provide confidence in the robustness of our experimental comparison results.
The rest of the paper is organised as follows. First, a review of the literature is provided in Section 2. Section 3 presents the community energy model used. Section 4 introduces the Shapley value and its computation methods. Then, Section 5 presents the experimental comparison across a number of scenarios. Finally, Section 6 concludes the paper with a discussion.
## 2 Literature Study
This Section presents our systematic study of the literature on Shapley value computation and energy systems. We note that the distribution of benefits and costs in smart energy systems is a broad topic, and the Shapley value is just one of the possible solution concepts. It is, however, the most widely used concept and broadly applicable to a variety of settings - with many of the alternatives only applicable in specific settings. Also, the Shapley value has a very strong foundation in coalitional game theory, and has had a wide impact in many fields, ever since it was proposed by Nobel-prize winning economist Lloyd Shapley. However, a key problem with applying the Shapley value (especially in the case we study, i.e. energy community settings with a sizeable number of prosumers) is that it is not computable exactly in settings beyond a few dozen agents, as the computational cost of _exact_ Shapley computation is combinatorial. As highlighted in the introduction and our systematic review below, our work helps to close this important knowledge gap by providing and validating computationally-efficient tools to approximate Shapley, with validation in a highly realistic case study of a community energy setting.
The literature study section is divided into two subsections: first, in Section 2.1, we provided a systematic overview of previous works that use Shapley value concept in energy applications, while in Section 2.2 we discuss existing state-of-the-art techniques for Shapley approximation. Our review is enhanced by providing a systematic table that captures and summarises the prior literature related to the application of Shapley values in energy settings, along four key dimensions: the particular sub-area of energy where a referenced paper applied the Shapley value, the computational techniques employed, the type of approach to computing Shapley (whether exact, or approximation based under some assumptions, or both), and finally the number of agents (be it prosumers, households, participants, etc.) that the experimental part of the paper considers.
We argue that providing such a table of prior works is important to highlight the current state-of-the-art in the field and make the contribution of this work clearer to the reader.
### Use of Shapley Values in Energy Applications
Energy communities are an increasingly important topic of research in energy systems, and a notable number of recent papers consider using the Shapley value as an underlying redistribution method. Chis and Koivunen [14] propose a coalitional cost-game optimisation of a portfolio of energy assets using Shapley value as the underlying redistribution method, modelling a realistic case study of 9 households. Safdarian et al. [15] use the Shapley value for coalition-based value sharing in energy communities, modelling an energy community in southern Finland with up to 24 apartments. Vespermann et al. [16] study the market design of a local energy community with shared storage and consider a number of solution concepts such as the nucleolus and Shapley values. Their numerical simulations study communities ranging in size from 4 up to 16 prosumers. Robu et al. [17] consider a cooperative coalitional game for energy group buying. While they discuss Shapley value as a solution concept, their focus is on other coalition properties.
There are also works that study variations of energy communities. Vinyals [18] explores a model in which the community consists of prosumers with assets and pure consumers, and the excess energy generated is shared among the community members. Although the work focuses on the energy distribution model that minimises the total cost of the community while meeting regulatory restrictions, it also presents an individually rational cost redistribution scheme. Long et al. [19] propose a method for energy trading of excess generation by the prosumers and individual cost calculation based on coalitional game theory and the Shapley value, and test it on a community that consists of 5 prosumers with solar PV generation and energy storage, and 5 consumers with no assets. Similarly, Hupez et al. [20] compare the use of Nash versus Shapley value concepts in an LV energy community model in which the excess energy of the prosumers is shared among other consumers with a case study of 3 prosumer nodes. Singh et al. [21] present the use of Shapley value for energy trading among microgrids, using a case study of 3 microgrids. Zhang et al. [22] consider the use of Shapley value to divide gains in alliances among retailers in the Chinese energy settlement market, considering alliances up to a size of 9 agents.
In addition to the above, applications of Shapley value can be found in many domains within energy systems. The most relevant previous works identified (after a systematic search) on the use of Shapley value in energy applications are summarised in Table 1. It reviews 40 selected papers that the authors found to be relevant both to the energy domain and Shapley value computation. It classifies them based on four criteria. The first is regarding the energy application domain in which the Shapley value is applied. The second is techniques used in the work, which could be for computing the Shapley value, but also for solving the underlying problem. The third criterion is how the Shapley value is computed. Most papers compute the exact Shapley value, but there are also many works that make use of approximation methods. Finally, the maximum number of agents used for computing Shapley value in their experimental analyses is given. Some works that apply approximation methods also compute the exact Shapley value as a benchmark. In such cases, the corresponding maximum number of agents for both methods is listed.
From this analysis, we observe that community energy/P2P trading was the most popular application of the Shapley value, but they were also common in other domains, such as the allocation of distribution loss [30; 32; 33] and congestion cost [39; 40; 41], as well as profit distribution in virtual power plants (VPPs) [42; 43; 44]. There were some less obvious applications, namely, cost allocation of net loss variability [50] and coordinated operation of existing facilities and the emerging power-to-gas technology [51; 52]. The range of techniques used by the authors was very broad, ranging from machine learning techniques (e.g., reinforcement learning, K-means clustering), optimisation techniques (e.g., mixed-integer linear programming, particle swarming), to comparison to other allocation methods (e.g., nucleolus, Nash equilibrium).
We found it especially important to provide a classification of Shapley value computation methods and the maximum number of agents considered. Crucially, for exact Shapley computation methods, the number of agents is always kept low to keep the computation tractable, usually to less than 10 agents. There are a few papers that take into account more agents, such as Alam et al. [28] and previous work by some of the co-authors of this paper (Norbu et al. [10; 24]), but these studies do not attempt to compute the Shapley value exactly for a large number of agents and instead use approximation methods like the simple marginal
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Reference** & **Energy Application Area** & **Techniques Used** & **Type of Shapley** & **Max No.** \\ & & & **Computation** & **of Agents** \\ \hline Norbu et al. [10, 23, 24] & Energy community & Systematic comparison, & Marginal contribution & 200 prosumers \\ & & Data-driven approach & & \\ Chis \& Koivunen [14] & Energy community & Cost minimisation & Exact & 9 households \\ & & under constraints & & \\ Safdarian et al. [15] & Energy community & K-means clustering & Random sampling & 24 apartments \\ Han et al. [25] & Energy community & Sample allocation with & Exact & 16 prosumers \\ & & estimated variance & Stratified sampling & 50 prosumers \\ Kulmala et al. [26] \& & Energy community & Comparison of cost & Exact \& approximation & 6 households \\ Baranauskas et al. [27] & & redistribution methods & & \\ Long et al. [19] & Energy community/P2P trading & Fairness analysis & Exact & 10 households \\ Alam et al. [28] & Energy community/P2P trading & Utility maximisation & Exact & 10 prosumers \\ & & under constraints & Random sampling & 100 prosumers \\ Hupez et al. [20] & Energy community/P2P trading & Shapley and & Exact & 3 nodes \\ Vespermann et al. [16] & Storage in energy communities & Nash equilibrium analysis & & \\ Singh et al. [21] & Trading between microgrids & Decentralised & Exact & 3 microgrids \\ & & coordinated scheduling & & \\ Jia et al. [29] & Trading between microgrids & Cost minimisation & Exact & 3 microgrids \\ & & under constraints & & \\ Zhang et al. [22] & Coordination of electr. retailers & Transaction cost, & Bilateral Shapley value & 9 retailers \\ Sharma \& & Resource dependence theory & & \\ Sharma \& & Loss allocation in distribution & Exploitation of & Exact & 24 participants \\ Abhyankar [30, 31] & & network topology & Sequential Shapley & 68 participants \\ Amaris et al. [32] & Loss allocation in distribution & Circuit laws, & Aumann-Shapley value & 35 units \\ & & Systematic comparison & & \\ Pourahmadi \& & Loss allocation in distribution & Benchmarking, & Exact & 15 units \\ Dehghanian [33] & & Systematic comparison & & \\ Azad-Farsani et al. [34] & Allocation of loss reduction & Point estimation, & Exact & 15 units \\ & in distribution & stochastic iterative algorithm & & \\ Yu et al. [35] & Allocation of loss reduction & Approximation of & Aumann-Shapley value & 12 units \\ & in distribution & Shapley value and nucleolus & & \\ Vicente-Pastor et al. [36] & Network coordination & Mechanism design & Exact & 3 stakeholders \\ Azualalam et al. [37] & Network cost allocation & SD estimation & Stratified sampling & 25 customers \\ O’Brien et al. [11] & Demand-side response & Reinforcement Learning & Adaptive sampling & 20 participants \\ Maleki et. al [38] & Coordination of cooling loads & Bounded rational reasoning, & Bounded rational & 15 appartments \\ Singh et al. [39] & Congestion cost allocation & Dynamic programming & Shapley value & \\ Xiao and Li [40] & Congestion cost allocation & Comparison to Shapley & Exact & 3 nodes \\ Voswinkel et al. [41] & Congestion cost allocation & Pool-based model & Exact & 6 lines \\ & & Congestion cost allocation & Constraint optimisation & Exact \& approximation & 11 congestions \\ Cheng et al. [42] & Profit distribution in VPP & Coalition and core analysis & Marginal contribution & 3 participants \\ Wang et al. 
[43] & Profit distribution in VPP & Real-world feasibility study & Exact & 2 participants \\ Dabbagh \& & Profit distribution in VPP & Risk aversion degree & Exact & 6 participants \\ Sheikh-El-Eslami [44] & & & & \\ Fang et al. [45] & Profit distribution in CHP-VPP & Particle swarm & Exact (modified) & 4 stakeholders \\ Chattopadhyay [46] & Profit distribution & Linear programming & Exact & 3 participants \\ & in emission trading & & & \\ Liao et al. [47] & Allocation of emission allowance & Systematic comparison & Exact & 3 power plants \\ Zhou et al. [48] & Carbon obligation allocation & Systematic comparison & Exact & 3 DSOs \\ Zhang et al. [49] & Allocation of emission allowance & Entropy, gravity model & Exact & 8 regions \\ Mays [50] & Net load variability & Consumption behaviour & Exact & 9 profiles \\ & & profiling & & \\ Zhang et al. [51] & Coordinated bidding of wind & Shapley and & Exact & 3 participants \\ & farms and P2G facilities & nucleolus analysis & & \\ Li et al. [52] & Coordinated operation of & Risk aversion degree, MILP & Exact & 2 parties \\ & NGG and P2G facilities & & & \\ Churkin et al. [53] & Transmission expansion planning & Shapley and & Exact & 5 countries \\ \hline \end{tabular}
\end{table}
Table 1: Summary of published studies on the use of Shapley value in energy applications
contribution method also used in this study. The complexity of computing Shapley often restricts studies to experimental simulations with small numbers of agents - yet, in practice, larger settings appear frequently. Realistically sized energy communities have more members, e.g., there are usually 50-200 consumers behind a substation/LV transformer in Europe [54], or potentially even more sharing an asset such as a large community wind turbine. Hence Shapley approximation methods are needed - yet, the understanding of what is a good approximation for large settings is still lacking. Our work aims to fill this knowledge gap.
### Approximation Methods for Shapley values
Due to the large runtime of exact Shapley value computation, efficient approximation methods have received strong interest since its introduction. Currently, many approximation methods compute the expected marginal contribution of an agent to the _sampled_ coalitions, initially suggested by Mann and Shapley [55]. Furthermore, the seminal work of Castro et al. [56; 57] proposes a polynomial calculation method which highlights the concept of _stratified sampling_, which has been refined in other works [28; 25], and is a key concept in the method we develop as well. Many recent works also provide theoretical error bounds of sampling-based approximation methods [58; 59; 60; 61; 62].
A major obstacle to approximating the Shapley value is that there does not exist a general deterministic approximation method that is a fully polynomial-time approximation scheme (FPTAS), and a fully polynomial-time randomized approximation scheme (FPRAS) is the best one can achieve when approximating the Shapley value [58]. Yet, deterministic methods have desirable characteristics for cost redistribution among consumers. Such methods produce the same results after every run given the same inputs, allowing consumers to verify the calculated cost themselves, in contrast to random sampling methods where the redistributed cost can differ depending on the samples drawn. Furthermore, deterministic methods would also guarantee that the same cost is redistributed to consumers with the exact same demand profile. Such properties can provide consumers with additional trust in the model. Bhagat et al. [63] provide a deterministic Shapley approximation method for their newly proposed budgeted games. Their method is theoretically proven to approximate the Shapley value with a constant additive error by replacing the value function with a relaxed function. However, theoretical analyses of deterministic methods are especially difficult in many real-world energy applications, in which the cost function results from a control procedure over the energy assets over a long time horizon rather than being available in closed form. Hence, many recent studies on energy communities have performed empirical analyses to evaluate the performance of the approximation methods (e.g. [14; 15; 16]).
The publications closest to this work are O'Brien et al. [11] and Norbu et al. [10]. O'Brien et al. [11] propose an enhancement of the methods first outlined by Castro et al., that uses reinforcement learning to do the stratified sampling in an adaptive way. Their method is one of the methods used as a benchmark in this paper. However, a key limitation of [11] is that they still use a comparison benchmark of only 20 agents, while we develop a way to compute it exactly for much larger communities. Moreover, we wanted to develop and test some deterministic methods of Shapley value approximation that do not depend on the number of samples and can be reproduced to give the same result. Finally, the work of Norbu et al. [10] considers redistribution in realistic community energy settings, starting from marginal value principles, but they do not approximate Shapley value as such. However, with their support, we use the same demand/generation dataset of a community of 200 prosumers in the UK, as it provides a realistic experimental case study to test the methods we develop. Additionally, we also look at a different dataset with a larger number of households to further provide confidence within our methods. This paper is a considerably extended and revised version of preliminary work presented in a poster at the ACM 2022 E-Energy conference [64].
## 3 Community energy model
Consider an energy community \(\mathcal{N}\) consisting of a set of \(|\mathcal{N}|=N\) prosumers, a shared battery and renewable energy source (RES). In this study, a lithium-ion battery and Enercon E-33 wind turbines [65] with a rated power of 330 kW were considered as the community's energy storage system and RES, respectively. Each prosumer in the community has a half-hourly power demand profile represented as \(d_{i}(t)\) for the power demand of agent \(i\) at time step \(t\). The final time step of the operation of the system is denoted as \(T\).
In this study, the data consists of half-hourly demands and generation during a 1 year period, and hence \(T=365\times 48=17520\).
The demand of the community at time \(t\), \(d_{\mathcal{N}}(t)\), is simply the sum of the demands of the agents in the community at \(t\), described as the following.
\[d_{\mathcal{N}}(t)=\sum_{i\in\mathcal{N}}d_{i}(t),\quad\forall t\in\{1,...,T\} \tag{1}\]
Furthermore, a community has a generation profile, \(g(t)\), produced by the jointly owned local renewable energy generation, and a battery power profile, \(p^{\mathrm{bat}}(t)\). The battery is considered charging when \(p^{\mathrm{bat}}(t)\) is negative and discharging when \(p^{\mathrm{bat}}(t)\) is positive. Finally, a community is required to buy power from the utility grid if the community assets do not provide enough power for the demand. If there is a surplus of power, on the other hand, a community can sell excess power to the grid. The power of the utility grid is denoted as \(p^{\mathrm{grid}}(t)\), where the value is positive when power is bought from the grid and negative when power is sold to the grid.
Given these variables, the following constraint needs to be satisfied at every time step.
\[d_{\mathcal{N}}(t)=p^{\mathrm{grid}}(t)+p^{\mathrm{bat}}(t)+g(t),\quad\forall t \in\{1,...,T\} \tag{2}\]
The constraint ensures that the community power demand is met from the available power sources. Additionally, when the generation is greater than the demand, all of the excess energy is stored in the battery and/or sold to the utility grid.
### Battery Control Algorithm
The use of the battery was regulated at each time point using the heuristic-based battery control algorithm from Norbu et al. [10]. The battery keeps track of its state of charge (SoC), so that the battery capacity is not exceeded. The algorithm first looks at whether the community's demand, \(d_{\mathcal{N}}(t)\), is smaller than the generation of the local RES, \(g(t)\). If the generation is greater than the demand, the battery is charged as long as it has not reached the maximum battery capacity, \(SoC^{\mathrm{max}}\). If the battery has reached the maximum capacity or the surplus power is larger than the maximum (dis)charging power of the battery, \(p^{\mathrm{bat,\ max}}\), then the remaining energy is sold to the utility grid. The energy is sold with the price of \(\tau^{s}(t)\) (pence/kWh), also known as the export tariff. If the community power demand is greater than generation, the battery is discharged if it has not reached the minimum battery capacity, \(SoC^{\mathrm{min}}\). If the battery has reached the minimum capacity or the power deficit is larger than \(p^{\mathrm{bat,\ max}}\), then energy is bought from the grid to meet demand. The import tariff, or the price of buying energy from the grid at \(t\) (in pence/kWh), is denoted as \(\tau^{b}(t)\). More details about the battery control algorithm can be found in A.
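As a concrete illustration, the following is a minimal sketch of one time step of the charge/discharge logic described above, assuming illustrative parameter names (soc, soc_min, soc_max, p_bat_max) and a half-hour step length; losses and charging efficiencies are ignored, so it is a simplified reading of the heuristic rather than the exact implementation of Norbu et al. [10].

```
def battery_step(demand, generation, soc, soc_min, soc_max, p_bat_max, dt=0.5):
    """One half-hourly step of the heuristic control of Section 3.1.

    demand, generation: community power values (kW) at this time step.
    Returns (new_soc, e_bought, e_sold) with energies in kWh; dt is the step
    length in hours. Losses and efficiencies are ignored in this sketch.
    """
    surplus = generation - demand            # kW; positive when RES exceeds demand
    e_bought = e_sold = 0.0
    if surplus >= 0:
        # Charge as much of the surplus as the power and capacity limits allow.
        charge_energy = min(min(surplus, p_bat_max) * dt, soc_max - soc)
        soc += charge_energy
        # Whatever cannot be stored is exported to the grid at the export tariff.
        e_sold = surplus * dt - charge_energy
    else:
        deficit = -surplus
        # Discharge within the power limit and down to the minimum SoC.
        discharge_energy = min(min(deficit, p_bat_max) * dt, soc - soc_min)
        soc -= discharge_energy
        # The remaining deficit is met by importing from the grid at the import tariff.
        e_bought = deficit * dt - discharge_energy
    return soc, e_bought, e_sold


# Example: 400 kW demand, 150 kW generation, half-full 1000 kWh community battery.
print(battery_step(demand=400.0, generation=150.0, soc=500.0,
                   soc_min=50.0, soc_max=1000.0, p_bat_max=330.0))
```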
Note that, in the heuristic-based battery control algorithm above, we considered _flat_ import/export tariffs (in which the price remains the same throughout the time period of the operation), and moreover, importing or exporting energy to the grid is always worse price-wise than consuming/storing it locally, when possible. This is a realistic assumption in the current climate, when import prices are high, and so-called feed-in tariffs (i.e. tariffs paid to very small renewable generators) are being phased out. It is possible to have more advanced control heuristics in case of dynamic or time-of-use prices from the grid that include, e.g. a price prediction component. However, the Shapley computation methods proposed in this paper can also be combined with more complex control cases. This is because the methods we develop apply to the overall cost function, working to minimise the number of times it needs to be recomputed - but they are independent of how the control is performed.
### Community Cost Calculation
A cost function is a key attribute of a coalitional game. Here, energy cost calculation of the community (or any subset of prosumers) is explained. The community energy cost calculation can be seen as the cost function in this study, and it is required for redistribution methods described in Section 4.1.
The community energy cost is composed of three components. The first is the cost of energy bought from the grid, subtracted by the revenue of energy sold to the grid during the time period. The energy bought
and sold at each time point, \(e^{b}(t)\) and \(e^{s}(t)\) respectively, are determined by the battery control algorithm explained in Section 3.1. The cost \(c_{T}^{\text{grid}}(\mathcal{N})\) is computed as the following.
\[c_{T}^{\text{grid}}(\mathcal{N})=\sum_{t=1}^{T}e^{b}(t)\tau^{b}(t)-\sum_{t=1}^{ T}e^{s}(t)\tau^{s}(t) \tag{3}\]
The second component is the cost of installing and operating the wind turbine, \(c_{T}^{\text{wind}}(\mathcal{N})\). The annual cost is calculated as the following.
\[c_{T}^{\text{wind}}(\mathcal{N})=\frac{\text{WT generation capacity}*\text{cost per kW}}{\text{Lifetime (in years)}} \tag{4}\]
The wind turbine generation capacity is calculated as the maximum receiving power from the wind turbine in one time step. The receiving power from the wind turbine was chosen to be \(0.006\times N\) times the power output of one wind turbine. The maximum capacity of the wind turbine increases linearly with the number of prosumers in the community, and hence the cost also increases linearly with the size. The cost of the wind turbine was set to 1072 £ (GB pounds) per kW and the lifetime to 20 years, which is realistic for current technologies in the UK market [10].
The last component of the cost is the battery. The cost of the battery, \(c_{T}^{\text{bat}}(\mathcal{N})\) is computed as the following.
\[c_{T}^{\text{bat}}(\mathcal{N})=\frac{\text{battery capacity}*\text{cost per kWh}}{\max\big{(}\text{ Lifetime (in years) },\frac{1}{\text{DF}}\big{)}} \tag{5}\]
In this study, the community battery capacity is set to be \(5\times N\) kWh. Similarly to the wind turbine, the battery capacity increases linearly with the community size, and therefore the community battery cost also increases linearly with the community size. The cost of the battery was set to 150 £ per kWh and the lifetime of the battery to 20 years. The variable DF is the depreciation factor of the battery determined by the battery degradation model from Norbu et al. [10]. Although the battery is given a lifetime, the lifetime can be shortened or additional maintenance costs may be required depending on the number of charge cycles and depth of discharge (DoD). Hence, using a battery degradation model can give a better assessment of the annual battery cost. The details of the battery degradation model are presented in B.
The total cost of the community, \(c_{T}(\mathcal{N})\) is the sum of the three components, which is the following.
\[c_{T}(\mathcal{N})=c_{T}^{\text{grid}}(\mathcal{N})+c_{T}^{\text{wind}}( \mathcal{N})+c_{T}^{\text{bat}}(\mathcal{N}) \tag{6}\]
The community cost can be computed for any subset of agents, and thus the cost contribution of an agent to a group can be determined by comparing the cost of the group with and without the agent. Specifically, every agent in the group contributes equally to the cost of the wind turbine and the battery (from Equations (4) and (5)), but this does not mean the usage of the assets is equal among agents. For example, agents with demand profiles that are well-aligned to the energy generation of the wind turbine will make better use of the community generation assets, resulting in less imported energy from the utility grid being required to match their demand. On the other hand, agents with demand profiles that are poorly aligned with the generation will put greater pressure on the community battery capacity and, equivalently, cause more energy to be imported. Therefore, the _marginal value_ with which each prosumer causes the total cost to rise is a key factor to consider.
The community energy cost calculation can be seen as a cost function for a set of prosumers with demands. The notation of the community cost is simplified to \(c(\mathcal{N})\) w.l.o.g., because a time horizon of \(T=1\) year is used to compute costs in the rest of the paper.
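As a worked illustration of Equations (3)-(6), the sketch below aggregates the three cost components for a group of n_agents prosumers, assuming the control loop of Section 3.1 has already produced the imported/exported energy series; the parameter names are illustrative stand-ins for the values quoted above, and the conversion between pence and pounds is glossed over.

```
import numpy as np

def community_cost(e_bought, e_sold, tau_b, tau_s, n_agents,
                   wt_capacity_per_agent=0.006 * 330.0,  # kW per prosumer (derived above)
                   wind_cost_per_kw=1072.0, wind_lifetime=20.0,
                   bat_capacity_per_agent=5.0,           # kWh per prosumer
                   bat_cost_per_kwh=150.0, bat_lifetime=20.0, depreciation_factor=0.05):
    """Annual cost c(S) of a group of n_agents prosumers, following Eq. (3)-(6).

    e_bought, e_sold: arrays of imported/exported energy (kWh) per time step,
    as produced by the battery control loop; tau_b, tau_s: import/export tariffs.
    """
    # Eq. (3): cost of imported energy minus revenue from exported energy.
    c_grid = np.sum(e_bought * tau_b) - np.sum(e_sold * tau_s)
    # Eq. (4): annualised wind-turbine cost, scaling linearly with the group size.
    c_wind = (wt_capacity_per_agent * n_agents) * wind_cost_per_kw / wind_lifetime
    # Eq. (5): annualised battery cost with the degradation-based depreciation factor DF.
    c_bat = (bat_capacity_per_agent * n_agents) * bat_cost_per_kwh \
        / max(bat_lifetime, 1.0 / depreciation_factor)
    return c_grid + c_wind + c_bat


# Toy example: two time steps, flat tariffs of 16 and 0 pence/kWh, 10 prosumers.
print(community_cost(e_bought=np.array([10.0, 0.0]), e_sold=np.array([0.0, 4.0]),
                     tau_b=16.0, tau_s=0.0, n_agents=10))
```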
## 4 Shapley Value Computation
The redistribution of costs or benefits in a game using the Shapley values is considered to be fair in the literature [9; 66]. The cost of prosumer \(i\) according to the Shapley value, \(\phi_{i}\), is computed as the following.
\[\phi_{i}=\sum_{\mathcal{S}\subseteq\mathcal{N}\setminus\{i\}}\frac{|\mathcal{ S}|!(N-|\mathcal{S}|-1)!}{N!}(c(\mathcal{S}\cup\{i\})-c(\mathcal{S})) \tag{7}\]
The marginal contribution of prosumer \(i\) to the subcoalition of prosumers \(\mathcal{S}\), denoted as \(c(\mathcal{S}\cup\{i\})-c(\mathcal{S})\), is how much the prosumer adds to the cost by joining the subcoalition. Then, the Shapley value of agent \(i\) can be seen as the mean marginal contribution of \(i\) for all possible subcoalitions in the community and all possible permutations of these subcoalitions.
An alternative way to write the Shapley equation that is particularly useful for our approach is through using the concept of _stratum_, given as the following.
\[\phi_{i}=\frac{1}{N}\sum_{j=0}^{N-1}\sum_{\begin{subarray}{c}\mathcal{S}\subseteq \mathcal{N}\setminus\{i\},\\ |\mathcal{S}|=j\end{subarray}}\frac{j!(N-1-j)!}{(N-1)!}(c(\mathcal{S}\cup\{i\} )-c(\mathcal{S})) \tag{8}\]
It can be seen that the marginal contribution of agent \(i\) to a subcoalition \(\mathcal{S}\) is computed as in Equation (7). Then, the marginal contribution is multiplied by the relative frequency of \(\mathcal{S}\) in the stratum. A stratum \(j\) is the set of all possible subcoalitions \(\mathcal{S}\) with \(|\mathcal{S}|=j\). From this, the expected marginal contribution of agent \(i\) can be computed for stratum \(0\) (the empty subcoalition) up to stratum \(N-1\) (the subcoalition of the whole community except \(i\)). Then, the Shapley value of agent \(i\) is equivalent to the average of the expected marginal contributions over the strata.
Yet, computing Shapley exactly from these equations has a very large time complexity (exponential in the number of agents in the community, as the marginal contributions to every subcoalition of the community are needed), which makes it intractable very quickly as the community size increases.
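For reference, Equation (7) can be evaluated directly by enumerating all subcoalitions, as in the sketch below; the set-valued cost function here is a toy stand-in for Eq. (6), and the enumeration is only feasible for very small \(N\), which is precisely the tractability problem just noted.

```
from itertools import combinations
from math import factorial

def exact_shapley(agents, cost):
    """Exact Shapley values by direct enumeration of Eq. (7) -- O(2^N) cost calls."""
    n = len(agents)
    phi = {}
    for i in agents:
        others = [a for a in agents if a != i]
        value = 0.0
        for j in range(n):                                 # stratum |S| = j
            weight = factorial(j) * factorial(n - j - 1) / factorial(n)
            for subset in combinations(others, j):
                s = frozenset(subset)
                value += weight * (cost(s | {i}) - cost(s))
        phi[i] = value
    return phi

# Toy cost function: a shared fixed cost plus a per-member cost.
def toy_cost(group):
    return 0.0 if not group else 100.0 + 10.0 * len(group)

print(exact_shapley(['a', 'b', 'c'], toy_cost))   # symmetric agents -> 130/3 each
```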
### Methods for Determining Approximate Shapley Values
In this subsection, we present 3 key methods for computing the Shapley values, starting from the simplest one (last marginal contribution), to increasingly more complex ones such as stratified expected value and adaptive sampling. In Section 4.2 we will present a method for exact Shapley computation in the case of a restricted number of types, while in Section 4.3 we discuss the computational properties of these methods.
#### 4.1.1 Last Marginal Contribution
While computing the Shapley value directly requires an exponential number of steps, it is possible to use the marginal contribution principle to design a much simpler scheme that considers the marginal contribution of each agent w.r.t. the other \(N-1\) [26; 10]. Formally, let the cost of an agent \(i\) in the community \(\mathcal{N}\) be simply the marginal contribution of agent \(i\) to the rest of the community, defined as the following.
\[MC_{i}=c(\mathcal{N})-c(\mathcal{N}\setminus\{i\}) \tag{9}\]
The annual energy cost \(MC_{i}\) of agent \(i\) uses the same intuition as Equation (7) in the Shapley value calculation. But, whereas the Shapley value takes the mean marginal contribution of agent \(i\) over _every_ possible subcoalition in the community, this method computes the cost by only looking at the _last_ marginal contribution, making it a much more time-efficient method. However, costs based on the last marginal contributions do not satisfy the property of the Shapley values in which the sum of individual costs is equivalent to the total community cost [9]. Hence, the last marginal cost needs to be normalised. The final redistributed cost \(\overline{MC}_{i}\) of agent \(i\) according to the normalised last marginal contribution (simply the marginal contribution method from now on) is given as:
\[\overline{MC}_{i}=c(\mathcal{N})\frac{MC_{i}}{\sum_{q\in\mathcal{N}}MC_{q}} \tag{10}\]
The time complexity of the marginal contribution method is \(\mathcal{O}(N)\), so, while simple, it is a very computationally efficient method.
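A minimal sketch of the normalised last-marginal-contribution scheme of Equations (9)-(10) is given below, reusing the same kind of set-valued toy cost function as a stand-in for Eq. (6).

```
def marginal_contribution_costs(agents, cost):
    """Normalised last marginal contributions, Eq. (9)-(10); O(N) cost calls."""
    grand = frozenset(agents)
    total_cost = cost(grand)
    mc = {i: total_cost - cost(grand - {i}) for i in agents}    # Eq. (9)
    mc_sum = sum(mc.values())
    return {i: total_cost * mc[i] / mc_sum for i in agents}     # Eq. (10)

def toy_cost(group):
    # Illustrative stand-in for Eq. (6): shared fixed cost plus per-member usage.
    return 0.0 if not group else 100.0 + 10.0 * len(group)

print(marginal_contribution_costs(['a', 'b', 'c'], toy_cost))
```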
#### 4.1.2 Stratified Expected Value
The last marginal contribution method only takes into account the marginal contribution of the last stratum. Starting from this observation, we propose a novel Shapley redistribution scheme that goes a step further and considers the expected marginal contribution for _every stratum_, while still avoiding the huge combinatorial cost of the exact Shapley method. We call this the _stratified expected values_ method.
Formally, for agent \(i\), an agent profile \(p_{-i}\) that has average energy demands from the rest of the agents in the community is created. The demand of the agent profile \(p_{-i}\) at time \(t\) is calculated as:
\[d_{p_{-i}}(t)=\frac{\sum_{q\in\mathcal{N}\setminus\{i\}}d_{q}(t)}{N-1},\quad \forall t\in\{1,...,T\} \tag{11}\]
The main idea of the method is that since \(p_{-i}\) has the average demand of the rest of the community for every time step, computing the marginal contribution from a set of agents with such a demand profile can approximate the _expected_ marginal contribution of that stratum. Since the Shapley value can also be seen as the mean of expected marginal contribution of every stratum, taking the mean of approximated marginal contribution of every stratum should give an average "in expectation" value that approximates the Shapley value. Hence, the cost of agent \(i\) based on the stratified expected values method \(SEV_{i}\) is calculated as the following.
\[SEV_{i}=\frac{1}{N}\sum_{j=0}^{N-1}c(\{1,...,j\}\cup\{i\})-c(\{1,...,j\}), \quad\text{such that }d_{1}=...=d_{j}=d_{p_{-i}} \tag{12}\]
Similarly to the marginal contribution method, the sum of individual energy costs does not equal the community's total cost since this method uses fictitious agents with demand profiles \(d_{p_{-i}}\). Hence, a normalisation step is required, given as follows.
\[\overline{SEV}_{i}=c(\mathcal{N})\frac{SEV_{i}}{\sum_{q\in\mathcal{N}}SEV_{q}} \tag{13}\]
The time complexity of computing the individual costs with this method is \(\mathcal{O}(N^{2})\) since for each agent, it requires to calculate the average marginal contribution once for every stratum, ranging from 0 to \(N-1\). While this is obviously more than the \(\mathcal{O}(N)\) computation of the last marginal value method, it is still much less than the exponential cost of computing the Shapley values, and still very tractable for medium and relatively large community sizes. The stratified expected value method uses the same intuition as the last marginal contribution method: it considers the last marginal contribution with respect to the expected demand value of the other agents (thus ignoring the combinatorial explosion of computing all orders) - but it does so _for every stratum_, taking an average among them. Thus, it is intuitive to formulate a hypothesis that the stratified expected value method should give a better estimation of the Shapley value than the simpler marginal contribution method, which ignores the strata structure. We explore this hypothesis in Section 5.
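To make the construction concrete, the sketch below implements Equations (11)-(13), assuming demands is an N x T array of half-hourly profiles and that the cost function over a stack of demand profiles is an illustrative stand-in for Eq. (6).

```
import numpy as np

def stratified_expected_values(demands, cost_of_profiles):
    """Stratified expected values, Eq. (11)-(13); O(N^2) cost calls.

    demands: array of shape (N, T) of half-hourly demand profiles.
    cost_of_profiles: cost c(.) of a 2-D array of demand profiles (one row per agent).
    """
    n = demands.shape[0]
    sev = np.zeros(n)
    for i in range(n):
        # Average profile of the other agents, Eq. (11).
        p_minus_i = np.delete(demands, i, axis=0).mean(axis=0)
        for j in range(n):                                   # stratum j = 0, ..., N-1
            without_i = np.tile(p_minus_i, (j, 1))           # j fictitious "average" agents
            with_i = np.vstack([without_i, demands[i]])
            sev[i] += cost_of_profiles(with_i) - cost_of_profiles(without_i)   # Eq. (12)
        sev[i] /= n
    total = cost_of_profiles(demands)
    return total * sev / sev.sum()                           # Eq. (13)

# Toy cost: shared fixed cost plus total energy consumed at a flat 16 p/kWh tariff.
def toy_cost(profiles):
    return 0.0 if len(profiles) == 0 else 100.0 + 16.0 * 0.5 * float(np.sum(profiles))

demands = np.abs(np.random.default_rng(0).normal(1.0, 0.3, size=(5, 48)))
print(stratified_expected_values(demands, toy_cost))
```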
#### 4.1.3 Adaptive Sampling Shapley Approximation
The previous redistribution methods were deterministic, providing the same numerical results every time the redistributed costs are calculated, given the demands of the agents remain the same. We also compared the performance of a state-of-the-art, random sampling Shapley approximation method. Specifically, we implemented the adaptive sampling method using reinforcement learning introduced by O'Brien et al. [11]. For each agent \(i\), this method samples a subcoalition randomly from a stratum and computes the marginal contribution of agent \(i\) to the subcoalition, repeating this step for \(M\) samples predetermined by the user. After every sample, the expected marginal contribution and its estimated standard deviation (SD) for the stratum are updated. The selection of the stratum at the next sample depends on the estimated SDs of the strata, where strata with larger spread are more likely to be chosen. Such a procedure allows more samples to be drawn from strata with larger uncertainty, hence sampling more efficiently. Finally, the mean of all expected
marginal contributions of the strata is computed as the cost. The details of the algorithm are given in C.
The time complexity of this redistribution scheme is \(\mathcal{O}(N\cdot M)\). Note that, in principle, the number of samples required to approximate the Shapley value well increases faster as the community size increases, hence \(M\) is set to a value that is \(M\gg N\). For this study, \(M\) was set to 1000 when running this method to assure multiple samples are taken from each stratum.
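The sketch below illustrates the general idea of standard-deviation-weighted stratum sampling for a single agent; it is a simplified, hypothetical reading of the procedure, not the reinforcement-learning algorithm of O'Brien et al. [11] itself (whose details are given in C), and the toy cost function is again a stand-in for Eq. (6).

```
import random
import statistics

def sampled_shapley(agents, cost, i, n_samples=1000, seed=0):
    """Approximate agent i's Shapley value by SD-weighted stratum sampling (illustrative)."""
    rng = random.Random(seed)
    others = [a for a in agents if a != i]
    n = len(agents)
    samples = {j: [] for j in range(n)}          # marginal contributions per stratum
    for _ in range(n_samples):
        # Weight strata by their current estimated SD; fall back to uniform early on.
        weights = [max(statistics.pstdev(samples[j]), 1e-9) if len(samples[j]) > 1 else 1.0
                   for j in range(n)]
        j = rng.choices(range(n), weights=weights, k=1)[0]
        subset = frozenset(rng.sample(others, j))
        samples[j].append(cost(subset | {i}) - cost(subset))
    # Estimate: mean over strata of the per-stratum expected marginal contribution.
    means = [statistics.fmean(v) for v in samples.values() if v]
    return sum(means) / len(means)

def toy_cost(group):
    return 0.0 if not group else 100.0 + 10.0 * len(group)

agents = list(range(6))
print(sampled_shapley(agents, toy_cost, i=0))    # exact value is 160/6 ~ 26.67
```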
```
Input:  Number of prosumers in the community, \(N\). Number of classes, \(K\).
        Number of prosumers in each class, \(N_{1},N_{2},...,N_{K}\) with
        \(N_{1}+N_{2}+...+N_{K}=N\) and \(N_{1}\geq N_{2}\geq...\geq N_{K}\geq 1\).
        Demands of the classes, \(d_{1},d_{2},...,d_{K}\), where each \(d\) contains
        half-hourly demands during the time period \(T\) (1 year).
Output: Table containing the costs of all possible subcoalition combinations, \(CS\).

function CreateTableAllSubcoalitionCosts(\(N,N_{1},...,N_{K},d_{1},...,d_{K}\))
    for all \((n_{1},n_{2},...,n_{K})\in\prod_{k=1}^{K}\{0,1,...,N_{k}\}\) do    \(\triangleright\) Cartesian product
        \(\mathcal{S}=\bigcup_{k=1}^{K}\{1,...,n_{k}\}\) with demands \(d_{k}\)    \(\triangleright\) union of sets with \(1,...,n_{k}\) having the same demand \(d_{k}\)
        \(CS[n_{1},n_{2},...,n_{K}]=c(\mathcal{S})\)    \(\triangleright\) cost of subcoalition, Eq. (6)
    end for
    return \(CS\)
end function
```
**Algorithm 1** Create a table containing the energy costs of all possible combinations of class sizes
### Exact Computation of Shapley Values with K classes
Given the above redistribution methods, what is really needed is the "ground truth" consisting of the exact Shapley values, to compare the performance of these approximation methods for a realistic size community (e.g. \(N=200\) prosumers behind a transformer). Prior works that do this, like O'Brien et al. [11], reduce the number of prosumers to \(N=20\) to compute the exact Shapley, but we argue this method is not really a satisfactory way to proceed. This is because, crucially, the quality of an approximation for a larger
community (e.g. \(N=50,100\) or \(200\) agents) can be very different than for a very small number of agents, up to \(20\) (we clearly show this effect in our experiments as well).
The key intuition is that, while computing the Shapley values of \(N\) unique agents requires a time complexity that is exponential to \(N\), the computation time can be significantly reduced if the community consists of a limited number of classes of agents, where agents in the same class have the same demand profile.
Let the new model be defined as the following. A community \(\mathcal{N}\) still consists of \(N\) agents, with now \(K\) classes of demand profiles in the community such that every agent belongs to one class, and all the agents in the same class have equal half-hourly demands. We assume w.l.o.g. that classes are ordered by size, i.e.
\[N\geq N_{1}\geq...\geq N_{K}\geq 1 \tag{14}\]
where \(N_{k}\) is the size of class \(k\). Then, the number of all possible energy costs of subcoalitions in the community is \((N_{1}+1)\times...\times(N_{K}+1)\), since from each class \(k\) there can be \(0\) to \(N_{k}\) agents in the subcoalition. This is important to note because computing the annual costs of subcoalitions (the cost function) is the most computationally expensive part of computing the redistributed costs, since it has to run through one year of half-hourly demand data points every time the cost function is called. The energy costs of every subcoalition may be used multiple times to compute the marginal contributions in the Shapley calculation, hence storing the values in a table of dimension \((N_{1}+1)\times...\times(N_{K}+1)\) can be time-saving. Algorithm 1 shows the creation of the table storing the costs of all possible subcoalitions in a community.
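For illustration, Algorithm 1 can be realised in a few lines of Python, with itertools.product playing the role of the Cartesian product over per-class counts; the count-based cost function is an illustrative stand-in for Eq. (6).

```
from itertools import product

def create_cost_table(class_sizes, class_demands, cost_of_counts):
    """Algorithm 1: cache c(S) for every combination of per-class member counts.

    class_sizes:   [N_1, ..., N_K]
    class_demands: [d_1, ..., d_K], one demand profile (or level) per class
    cost_of_counts(counts, class_demands): cost of a subcoalition containing
        counts[k] agents of class k -- a stand-in for Eq. (6).
    """
    table = {}
    for counts in product(*(range(n_k + 1) for n_k in class_sizes)):
        table[counts] = cost_of_counts(counts, class_demands)
    return table

# Toy cost: shared fixed cost plus a per-agent cost weighted by the class demand level.
def toy_cost(counts, class_demands):
    if sum(counts) == 0:
        return 0.0
    return 100.0 + sum(n * d for n, d in zip(counts, class_demands))

table = create_cost_table([7, 4, 2], [12.0, 20.0, 35.0], toy_cost)
print(len(table), table[(7, 4, 2)])   # (7+1)*(4+1)*(2+1) = 120 cached subcoalition costs
```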
The table containing costs of every subcoalition can also be represented as a hyperrectangle of \(K\) dimensions and the size \(N_{1}\times...\times N_{K}\). Each axis represents the number of agents in the class. A stratum can be represented in such a hyperrectangle as a hyperplane cutting through in which the sum of axes equals the size of the stratum. Hence, strata correspond to planes parallel to each other. Figure 2 shows an example case where \(K=3\) with \(N_{1}=7\) (x-axis), \(N_{2}=4\) (y-axis), and \(N_{3}=2\) (z-axis). Stratum 5 is represented by the plane, where \(x+y+z=5\).
Once the energy costs of all possible subcoalitions have been computed, the Shapley value of each agent, which is the energy cost the agent owes to the community, can be found. Because of the symmetry axiom [9], the Shapley values of agents in the same class (same energy demands) are equal, and hence it is only required to calculate the Shapley value once for each class. Algorithm 2 is used to determine the Shapley values when the community consists of \(K\) classes of agents. The algorithm first loops over the number of classes starting from \(k=2\) (Line 2, Algorithm 2). Class 1, the largest of the classes, is skipped for efficiency since it can be computed after the Shapley values of all other classes are determined. Then, for each class, it will iterate through the strata from \(0\) to \(N-1\) (Line 4, Algorithm 2). Between Lines 5 and 12, the Shapley value
Figure 2: A representation of all possible subcoalitions in a community of \(K=3\) classes with \(N_{1}=7\), \(N_{2}=4\), and \(N_{3}=2\).
of the iterating class is updated by adding the marginal contribution. Line 5 of Algorithm 2 shows that it will iterate through every possible subcoalition of the size of the stratum. As shown in Equation (7), the marginal contribution of an agent in a community is described as \(c(\mathcal{S}\cup\{i\})-c(\mathcal{S})\). Hence, it can be seen in Line 5 that the maximum number of agents from the iterating class \(k\) is one less than \(N_{k}\), so that the marginal contribution can be computed. Line 6 assures that the formed subcoalition is from stratum \(j\).
What makes it possible to compute the Shapley values efficiently for a limited number of classes is the _(multivariate) hypergeometric distribution_ [67]. The probability mass function of the multivariate hypergeometric distribution \(P(\{n_{1},...,n_{K}\},\{N_{1},...,N_{K}\},N,n)\) computes the relative frequency of selecting \(n_{1}\) agents from class \(1\) of size \(N_{1}\), \(n_{2}\) agents from class \(2\) of size \(N_{2}\), and so on up to class \(K\), in a community of \(N\) agents. The function is formulated as the following.
\[P(\{n_{1},...,n_{K}\},\{N_{1},...,N_{K}\},N,n)=\frac{\prod_{k=1}^{K}\binom{N_{k}}{n_{k}}}{\binom{N}{n}} \tag{15}\]
where \(\sum_{k=1}^{K}N_{k}=N\) and \(\sum_{k=1}^{K}n_{k}=n\). The hypergeometric distribution allows computing the probability that a certain set of agents is selected ahead of the chosen agent at a specific stratum. Line 7 in Algorithm 2 shows the step where the probability of the set of agents being ahead at stratum \(s\) is computed. (The Shapley value of an agent can then be computed by replacing the relative frequency of a subcoalition in Eq. 8 with the hypergeometric function.)
Line 8 in Algorithm 2 computes the marginal contribution of the agent from class \(k\) using the table containing costs of subcoalitions from Algorithm 1. The marginal contribution in Line 9 is added with the factor of the relative frequency of the subcoalition from stratum \(j\). After iterating through every strata and subcoalitions, the value is divided by the total number of strata, which is \(N\) (Line 13). It can be seen that computation steps of Lines 8, 9, and 13 are equivalent to Equation (8), with the only difference being the relative frequency is computed using the hypergeometric function.
Line 15 computes the Shapley value of agents in class \(1\). Since the values of agents of all the other classes are known and the sum of the Shapley values of all agents must equal the community cost, the Shapley value of class \(1\) is equal to the remaining cost after subtracting the cost distributed to agents in classes \(2\) to \(K\) from the community cost, divided equally among the agents of class \(1\), by the efficiency property [9].
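The sketch below combines a cached cost table (as produced by Algorithm 1) with the hypergeometric weighting of Eq. (15) to compute the per-class Shapley values; it is an illustrative reconstruction in the spirit of the procedure described above, not the authors' exact implementation of Algorithm 2.

```
from itertools import product
from math import comb

def hypergeom_pmf(counts, sizes):
    """Multivariate hypergeometric probability of drawing `counts` from classes of `sizes`, Eq. (15)."""
    num = 1
    for n_k, N_k in zip(counts, sizes):
        num *= comb(N_k, n_k)
    return num / comb(sum(sizes), sum(counts))

def shapley_of_class(k, class_sizes, cost_table):
    """Exact Shapley value of any one agent of class k, via Eq. (8) with the relative
    frequency of each subcoalition given by Eq. (15) over the remaining N-1 agents."""
    N = sum(class_sizes)
    remaining = list(class_sizes)
    remaining[k] -= 1                      # the other agents once one class-k agent is removed
    phi = 0.0
    for counts in product(*(range(m + 1) for m in remaining)):
        weight = hypergeom_pmf(counts, remaining)        # frequency within its stratum
        with_i = list(counts)
        with_i[k] += 1
        phi += weight * (cost_table[tuple(with_i)] - cost_table[counts])
    return phi / N                                        # average over the N strata

# Build a small cost table (Algorithm 1) with an illustrative count-based cost.
class_sizes = [7, 4, 2]
class_demands = [12.0, 20.0, 35.0]
def toy_cost(counts):
    return 0.0 if sum(counts) == 0 else 100.0 + sum(n * d for n, d in zip(counts, class_demands))
cost_table = {c: toy_cost(c) for c in product(*(range(m + 1) for m in class_sizes))}

print([shapley_of_class(k, class_sizes, cost_table) for k in range(len(class_sizes))])
```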
### Complexity of Shapley value computation
Table 2 shows the time complexities of exact Shapley values and the three approximation methods used in this study for two scenarios; when the community of size \(N\) consists of unique demand profiles and when the number of demand profiles is limited to \(K\) classes.
In the case of \(N\) unique demands, it was explained previously that it requires \(2^{N}\) steps to compute the Shapley value of an agent. To compute the Shapley values of the whole community, this only needs to be done for \(N-1\) agents, since the value of the last agent can be determined by simply subtracting the sum of the rest of the agents' values from the total cost. This is due to the efficiency property of the Shapley value, in which the sum of the redistributed values equals the total value [9]. Hence, the time complexity for a community of unique agents is \(\mathcal{O}(2^{N}\cdot(N-1))\).
\begin{table}
\begin{tabular}{l c c} \multicolumn{3}{c}{**Time Complexity**} \\ \hline
**Algorithm** & **Unique** & \(K\) **Classes** \\ \hline Exact Shapley & \(\mathcal{O}(2^{N}\cdot(N-1))\) & \(\mathcal{O}(N^{K}\cdot(K-1))\)* \\ \hline Marginal Contribution & \(\mathcal{O}(N)\) & \(\mathcal{O}(K)\) \\ \hline Stratified Expected Values & \(\mathcal{O}(N^{2})\) & \(\mathcal{O}(K\cdot N)\) \\ \hline Approx. Shapley RL & \(\mathcal{O}(N\cdot M)\) & \(\mathcal{O}(K\cdot M)\) \\ \hline \multicolumn{3}{l}{*Upper bound time complexity} \\ \end{tabular}
\end{table}
Table 2: Time complexity per algorithm
When the community is restricted to \(K\) classes, the number of times the cost function needs to be computed by Algorithm 1 is equal to the number of all possible combinations of agents which is \((N_{1}+1)\times...\times(N_{K}+1)\) (illustrated in Figure 2). Considering w.l.o.g that the classes are ordered by the size, i.e., \(N_{1}\geq N_{2}\geq...\geq N_{K}\), and assuming there are at least two non-empty classes, i.e. \(K\geq 2\), then it holds that \(N_{i}+1\leq N,\,\forall i=1,...,K\). The number of cost function calculations is hence upper bounded by \(N^{K}\). Due to the symmetry property of the Shapley value [9], it is only required to be computed once per class. Furthermore, it is required to compute \(K-1\) times with the same reasoning as in the unique demand profiles scenario. Therefore, it gives the time complexity of \(\mathcal{O}(N^{K}\cdot(K-1))\). While it seems that \(\mathcal{O}(N^{K}\cdot(K-1))\) (for \(K\) classes) is large, in fact, for a large \(N\) and a small number of classes \(K\) this is much smaller than \(2^{N}\), hence in practice, exact Shapley computation with \(K\) classes has a much lower computation cost than unique agents.
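To give a concrete sense of this difference, consider the Scenario 1 setting of Section 5.1: a community of \(N=200\) prosumers restricted to \(K=2\) demand classes in a 9:1 ratio (\(N_{1}=180\), \(N_{2}=20\)). Algorithm 1 then requires \((180+1)\times(20+1)=3801\) evaluations of the cost function, whereas exact Shapley computation over 200 unique prosumers would require on the order of \(2^{200}\approx 1.6\times 10^{60}\) subcoalition cost evaluations.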
For the marginal contribution method (Section 4.1.1), the complexity was determined to be \(\mathcal{O}(N)\) for a community of \(N\) agents, as it requires to compute the marginal contribution once for every agent. With \(K\) classes, however, this is reduced to \(\mathcal{O}(K)\). Since agents with the same energy demands are assigned the same cost, Equations (9) and (10) are only required to be computed once for each class.
The time complexity of stratified expected values method (Section 4.1.2) is \(\mathcal{O}(N^{2})\) for \(N\) unique agents, but it can be reduced to \(\mathcal{O}(K\cdot N)\) for \(K\) classes. Similarly to the marginal contribution method, it is only required to compute Equation (12) once per class.
Finally, RL-based Shapley approximation method (Section 4.1.3) has the time complexity reduced from \(\mathcal{O}(N\cdot M)\) to \(\mathcal{O}(K\cdot M)\), where \(M\) is the number of samples per agent chosen by the user, by the same reasoning as the marginal contribution and stratified expected values methods.
## 5 Experimental Comparison
For the experimental comparison, the energy demands of 200 households in a realistically-sized energy community in the UK sharing a community wind turbine and battery were used. In this study, experiments are carried out on two scenarios. Section 5.1 presents the experimental setup of the first scenario. Here, agents of the community are grouped into two classes based on their annual consumption size (large vs. small energy consumers). The performance of the redistribution methods is tracked with increasing community size, keeping the ratio of large to small consumers constant. Section 5.2 presents the experimental setup of the second scenario. Here, agents are grouped into four classes based on their consumption profile throughout a typical day. Again, the performances of the redistribution methods are compared with increasing community size. Finally, Section 5.3 provides discussions on the results from the two scenarios. All of the experimental code was written in and run with Python 3 (version 3.8.5).
### Scenario 1: Large and Small Consumers
**Dataset and parameters.** For the first experimental comparison, the energy demands of 200 households in a realistically-sized energy community in the UK sharing a community wind turbine and battery were used, using the case study from Norbu et al. [10] (with the kind permission of the authors). The energy demands of the households are provided for every 30 minutes during one calendar year, which was largely collected in a well-known smart energy demonstrator project in the UK, the Thames Valley Vision project [12]. The half-hourly power generated by the wind turbine was calculated based on the power curve of the Enercon E-33 wind turbines [65] and real wind data of the Kirkwall airport weather station in Orkney, Scotland from the UK Met Office Integrated Data Archive System (MIDAS) [68] provided by the British Atmospheric Data Centre (BADC). Furthermore, an import tariff of 16 UK pence/kWh and an export tariff of 0 pence/kWh were used.
**Demand profiles.** Two hundred prosumers are grouped into two classes of small or large consumers according to their annual energy consumption. Small consumer and large consumer profiles are made from the average half-hourly demands of each group. In this study, two cases are tested: in the first, the groups are split into the 90% smallest and 10% largest consumers by total annual demand; in the second, into the 80% smallest and 20% largest. The community consists of agents with the small and large consumer profiles, in the corresponding ratios (9:1 or 8:2).
**Setup and performance measure.** Communities with small and large consumer profiles are used to compare how well the redistribution methods approximate the Shapley values. In the first setting, the ratio of the community is kept at 9:1, and the approximation performance is measured for varying community sizes from \(N=10\) up to \(N=200\). Similarly, the ratio is kept constant at 8:2 in the second setting, and the performance for varying community sizes is tested.
The redistribution methods are compared to the exact Shapley values (the ground truth) of small and large agent profiles. The _relative difference_ to the exact Shapley values was used for the comparison. The percentage relative difference of a cost to the Shapley value is defined as the following.
\[RD_{\phi}(\hat{\phi}_{k})=\frac{|\hat{\phi}_{k}-\phi_{k}|}{\phi_{k}}\times 100 \tag{16}\]
where \(\hat{\phi}_{k}\) is the energy cost of an agent of class \(k\) (in this simulation, either the small or the large consumer profile) from a particular redistribution method, i.e. \(\overline{MC}_{k}\), \(\overline{SEV}_{k}\), or \(RL_{k}\). The variable \(\phi_{k}\) is the cost redistributed to class \(k\) according to the Shapley value. The relative difference takes into account not only the magnitude of the difference between the approximation and the exact value, but also how large the exact value is. This provides a fairer evaluation between different demand profiles, as demand profiles with naturally large energy costs could show a significant approximation error in magnitude from only a slight deviation.
**Results.** We investigated whether the size of the community influences how well the redistribution methods approximate the Shapley values. Figure 3 shows the change in relative difference to the exact Shapley values of the redistribution methods with increasing community size up to 200 households while keeping the same ratio of small and large agent profiles. In Figure 3a, the ratio of small consumer agents to large consumer agents was kept at 9:1, while Figure 3b used a ratio of 8:2.
### Scenario 2: Different Consumption Profiles
**Dataset and parameters.** In the second case study, the demands and the wind data were taken from a dataset on Kaggle, an ML data platform. The half-hourly demands of 5567 households in the London area,
Figure 3: Relative differences of the small and large consumer agent profiles to the exact Shapley values for the redistribution methods with increasing size of the community.
UK, between November 2011 and February 2014 were recorded by the UK Power Networks during the Low Carbon London project [13]. The corresponding London weather data was provided by Dark Sky [69], and the generated power by the wind turbine was calculated using the same method as Norbu et al. [10].
Both the demands and wind power data were aggregated to generate averaged half-hourly data points for one year, aligned by the calendar weeks. From the demand data, households were removed if less than 95% of their data points for the year were present, leaving 5251 households. The remaining missing data points were filled using linear interpolation.
Similarly to the first case study, an import tariff of 16 UK pence/kWh and an export tariff of 0 pence/kWh were used.
**Demand profiles.** In the second case, the agents were grouped not by their total demands but rather by their consumption profiles over a day (24 hours). Clustering energy consumers into a number of classes according to their daily consumption profile is a well-established practice in energy demand modeling [70; 71; 72], used both in research and in practice by energy suppliers. Identifying the consumption patterns of customers can help the energy provider offer customers recommendations as well as manage energy loads.
The 5251 agents are clustered using K-means clustering on their energy consumption during the winter months. From the resulting clusters, four groups showing distinct behaviours were chosen as consumption classes; they are presented in Figure 4.
Figure 4 shows the daily consumption behaviours of the four selected clusters. The first cluster on the left shows an increase in demand in the morning, then a large peak in the evening, and is thus named "evening peaker". The second cluster from the left has energy demand that increases in the morning and stays high during the day. There is a small evening peak, but consumption is relatively even throughout the day. This group is called "stay at home", as it requires a certain energy consumption during the day for heating, computers, and kitchen appliances. The third cluster shows a large peak in the morning, followed by decreased consumption during the day and a final large peak in the evening. The morning and evening peaks are roughly the same size, and therefore this group is named "M-shaped" consumers. The fourth cluster has almost no energy demand during the day, but high demand overnight. Such behaviour is also observed in [70], a study on clustering consumers by demand profiles. This group was named "night owl", and it is a rarer case, with less than 1% of the households in this study classified into it.
In 2020, due to COVID-19, working from home became the norm, so it is of interest to look at the change in behaviour from working at the office to working from home. Hence, consumption behaviour classes such as "evening peak" and "stay at home" were chosen for this study. Furthermore, to add more variety and create a realistic community, we have included classes with distinct behaviours such as the "M-shape" and "night owl" classes. From the grouped agents, the half-hourly energy demands are created for four consumer profiles. The details of clustering and the production of demand profiles are given in D.
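A minimal sketch of the clustering step is given below, assuming daily_profiles holds one averaged winter daily profile (48 half-hourly values) per household; scikit-learn's KMeans is used for illustration, synthetic data stands in for the real demand data, and the number of clusters and the centroid inspection are only indicative of how the four classes were selected.

```
import numpy as np
from sklearn.cluster import KMeans

# One averaged winter daily profile (48 half-hourly values, kW) per household.
# Synthetic stand-in data is generated here purely so the sketch runs end to end.
rng = np.random.default_rng(0)
daily_profiles = np.abs(rng.normal(0.5, 0.2, size=(5251, 48)))

# Cluster households by the shape of their daily consumption.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(daily_profiles)

# Inspect cluster sizes and centroid shapes; clusters with distinct centroids
# (e.g. evening peaker, stay at home, M-shape, night owl) are then picked as classes.
sizes = np.bincount(kmeans.labels_)
for c, size in enumerate(sizes):
    centroid = kmeans.cluster_centers_[c]
    evening = centroid[34:40].mean()    # roughly 17:00-20:00
    morning = centroid[14:20].mean()    # roughly 07:00-10:00
    print(f"cluster {c}: {size} households, evening/morning ratio {evening / morning:.2f}")
```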
**Setup and performance measure.** Communities with the four consumer profiles are used to perform the experiments. In this case study, we perform two experiments. In the first experiment, the ratio of the community is kept constant, and the performances of the redistribution methods are tested for community sizes of \(N=10\) to \(N=200\). We tested two scenarios: one in which the community is concentrated in one class, and one in which the community is more evenly spread out between the classes. In the second setting,
Figure 4: Average daily energy demands of consumer profiles, evening peaker, stay at home, M-shape, and night owl
the community size is kept constant, but the composition (ratios) of the consumer classes in the community changes.
To compare the performance of the redistribution methods, we used the _average relative difference_ to the exact Shapley values. The relative difference to the exact Shapley values is as defined in Equation (16). The average relative difference to the Shapley value of a redistribution method is the mean relative difference of the community:
\[RD_{\phi}(\hat{\phi})=\frac{1}{N}\sum_{k=1}^{K}N_{k}\cdot RD_{\phi}(\hat{\phi}_{k}) \tag{17}\]
where \(\hat{\phi}\) is a redistribution method with costs \(\hat{\phi}_{1},...\hat{\phi}_{K}\) assigned to \(K\) classes.
**Results.** Figure 5 shows the change in average relative differences with increasing community size, starting from \(N=10\) up to \(N=200\). Figure 5a presents the result for the community with a composition of 70% "evening peak", 10% "stay at home", 10% "M-shape", and 10% "night owl", and Figure 5b for a composition of 30% "evening peak", 30% "stay at home", 30% "M-shape", and 10% "night owl". Figure 6 shows the change in average relative differences of the redistribution methods with a change in the composition of the community. The community size was set to 200, and the ratios of "M-shape" and "night owl" agents were kept constant at 20% and 10%, respectively. Initially, the "evening peak" class is set to be 65% of the community and the "stay at home" class to 5%. After every run, the ratio of "evening peak" is reduced by 5% and "stay at home" increased by 5%, until "stay at home" makes up 65% of the community and "evening peak" only 5%. Detailed results showing the breakdown of the performance across the different consumer profiles are presented in E.
### Discussion

Across both scenarios, all redistribution methods approximate the exact Shapley values more closely as the community size grows. This is because, for a large community, the Shapley calculation is dominated by marginal contributions of the agent to already large-sized subcoalitions. Variations in marginal contributions to large subcoalitions are often small, making it possible for less complex methods to approximate well for large communities.
The stratified expected values method outperforms the simpler marginal contribution method in all cases and all scenarios, hence the intuitive hypothesis we formulated in Section 4.1.2 clearly holds. Furthermore, there is a minimal difference in the performances between the stratified expected values and the adaptive sampling methods in most cases. The number of samples per agent was set to 1000 for the adaptive sampling method, meaning that for a community of 100 prosumers, the adaptive sampling method had a time complexity ten times higher than the stratified expected values method (from Table 2). Yet the figures show that the stratified expected values method outperforms the adaptive sampling method in many scenarios and performs comparably overall. In fact, paired two-sample t-tests on the 2-class experiments at the 0.05 significance level showed that the stratified expected values method (90/10 split: \(M=0.0339\), \(SD=0.0520\). 80/20 split: \(M=0.0285\), \(SD=0.0471\)) had a smaller average relative difference to the true Shapley values compared to the RL-based adaptive sampling method (90/10 split: \(M=0.1110\), \(SD=0.0496\). 80/20 split: \(M=0.0934\), \(SD=0.0444\)) for both the 90/10 split (\(t(19)=-3.919\), \(p<0.001\)) and the 80/20 split (\(t(19)=-4.236\), \(p<0.001\)). Furthermore, in the 4-class case with the community concentrated in one class (Figure 5a), the stratified expected values method (\(M=0.0071\), \(SD=0.0070\)) outperformed the adaptive sampling method (\(M=0.0254\), \(SD=0.0114\)) (\(t(19)=-5.376\), \(p<0.001\)). Only in the case of the evenly spread 4-class community (Figure 5b) did the adaptive sampling method (\(M=0.0283\), \(SD=0.0062\)) outperform the stratified expected values method (\(M=0.0341\), \(SD=0.0048\)) (\(t(19)=3.173\), \(p=0.005\)). Hence, from these results, it seems that the stratified expected values method does well at approximating the Shapley values when the community is concentrated in one class, and outperforms the state-of-the-art sampling method.
When looking at how the composition of the community affects the performances of the redistribution methods in Figure 6, all three methods approximate the Shapley values well (all methods in every scenario show less than 0.15% difference), as the community size is already large. Still, the stratified expected values and the RL-based adaptive sampling methods outperform the marginal contribution method. A paired t-test at the 0.05 significance level showed that there is no significant difference between the stratified expected values (\(M=0.0199\), \(SD=0.0114\)) and the adaptive sampling (\(M=0.0220\), \(SD=0.0056\)) methods in their average relative differences to the Shapley values (\(t(12)=-0.660\), \(p=0.52\)). Yet it can be seen from Figure 6 that the stratified expected values method has a smaller difference to the true Shapley values when the community is concentrated in one class, and shows larger errors when the community is more even. This is in line with the findings from Figure 5. It can be seen in D that most consumers had an "evening
Figure 6: Average relative differences to the exact Shapley values for the redistribution methods of different community compositions with \(N=200\).
peak" and made up 60% of the households studied. Hence it is common to have a community that contains majority of the same consumption behaviour class, making the stratified expected values method desirable in real-world scenarios.
Although it approximates the exact Shapley values very well, a potential disadvantage of the RL-based method (and any method using random sampling) is that the redistributed values can vary every time the algorithm is run. The fluctuating performance of the RL-based method can be seen in Figures 3 and 5. In practice, the random output of the method can have an undesirable effect on the _perceived fairness_ of the redistribution, as prosumers with the same demand profile can result in being assigned slightly different costs.
## 6 Conclusions & Further Work
While the use of the Shapley value is increasingly popular in energy systems, previous works often sidestep the issue of how it can be efficiently computed in large, realistically-sized settings. The issue is made more pressing by the increasing popularity of community energy projects, where prosumers share joint renewable generation and storage assets and costs.
This paper aims to close this gap by proposing a new method to efficiently approximate the Shapley value, and characterising both their computational complexity and performance (in terms of distance to the exact Shapley value), using large-scale, realistic case studies of energy communities in the UK. We compare the performance of the new method with an already-existing deterministic method and a non-deterministic, state-of-the-art sampling method. Moreover, in order to develop a "ground truth" benchmark to compare these approximations, we propose a novel method to compute the Shapley value _exactly_ even for large population sizes by clustering agents into a smaller number of consumption profiles or classes.
Our experimental analysis shows that the relative difference to the true Shapley value (while large for a few agents) converges to under 1% for larger scenarios, basically for all methods considered. In particular, in almost all scenarios studied, the newly proposed stratified expected value method and the state-of-the-art adaptive sampling method perform extremely close to the true Shapley values. It is interesting to observe that the stratified expected value method performs similarly to the adaptive sampling method [11] for large populations, although its computational cost is often much lower. In fact, the stratified expected values method outperformed the adaptive sampling method when the community was concentrated in one class, showing a high potential for application in real-world energy communities.
There are a number of directions we find promising to explore in future work. An interesting question to explore is the case when the local distribution network, where the energy community is based is subject to physical capacity constraints (voltage, power) [24]. Such constraints could potentially restrict all prosumers to participate in the scheme equally at certain times, and would lead to changes in the coalitional game, as well as in the computation of fair redistribution payments based on the Shapley value. Another possible improvement on our current model can be made by providing a more detailed cost calculation of the assets, such as in [73]. Although our model takes into account battery degradation for a more accurate annual cost of the battery, a better overall cost estimation can be achieved by considering a longer period of time and taking into account the investment and maintenance costs of these assets, as well as economic factors such as the inflation rate.
We are also considering extending this work by implementing our redistribution strategies in a blockchain-enabled smart contract (such as in [74; 75]), which would commit the members of the energy community to a protocol to share the benefits and costs. Based on systematic reviews on blockchains in energy systems [76; 75], smart contracts should allow a more decentralised energy system, while preserving the privacy of individual prosumer data, such as demand data. The marginal contribution and stratified expected values methods used in this study already have favourable characteristics for preserving privacy, as they do not require other prosumers' individual consumption information, only the aggregate consumption of the community. The use of smart contracts could further strengthen the protection of sensitive information and enable more secure asset monitoring.
Finally, while this paper focuses on the key topic of Shapley value computation, there are many other fairness concepts that could be explored in energy applications, and it would be relevant to compare their
outcomes to Shapley values. Conversely, there are many promising concepts proposed in the coalitional game theory literature [8] that, to our knowledge, have been explored much less in energy applications, such as the least-core [77] or the nucleolus [16]. The application and adaptation of such fairness concepts in energy could be a fruitful area for both research and practice, providing energy communities with the computational tools to make best use of shared energy assets.
## Acknowledgement
The authors would like to acknowledge the input and contributions of TU Delft master students Daan Hofman, Titus Naber and Kawin Zheng in the initial stages of this work.
In terms of funding, Valentin Robu acknowledges the support of the project "TESTBED2: Testing and Evaluating Sophisticated information and communication Technologies for enaBling scalablE smart griD Deployment", funded by the European Union Horizon2020 Marie Sklodowska-Curie Actions (MSCA) [Grant agreement number: 872172]. Sonam Norbu, Merlinda Andoni, Valentin Robu and David Flynn also acknowledge the support of the InnovateUK Responsive Flexibility (ReFLEX) project [ref: 104780]. Merlinda Andoni and David Flynn also acknowledge the support of the UK Engineering and Physical Science Research Council through the National Centre for Energy Systems Integration (CESI) (grant EP/P001173/1) and DecarbonISation PATHways for Cooling and Heating (DISPATCH) project (grant EP/V042955/1).
|
2309.12434 | Exploration of technical debt in start-ups | Context: Software start-ups are young companies aiming to build and market
software-intensive products fast with little resources. Aiming to accelerate
time-to-market, start-ups often opt for ad-hoc engineering practices, make
shortcuts in product engineering, and accumulate technical debt. Objective: In
this paper we explore to what extent precedents, dimensions and outcomes
associated with technical debt are prevalent in start-ups. Method: We apply a
case survey method to identify aspects of technical debt and contextual
information characterizing the engineering context in start-ups. Results: By
analyzing responses from 86 start-up cases we found that start-ups accumulate
most technical debt in the testing dimension, despite attempts to automate
testing. Furthermore, we found that start-up team size and experience is a
leading precedent for accumulating technical debt: larger teams face more
challenges in keeping the debt under control. Conclusions: This study
highlights the necessity to monitor levels of technical debt and to
preemptively introduce practices to keep the debt under control. Adding more
people to an already difficult to maintain product could amplify other
precedents, such as resource shortages, communication issues and negatively
affect decisions pertaining to the use of good engineering practices. | Eriks Klotins, Michael Unterkalmsteiner, Panagiota Chatzipetrou, Tony Gorschek, Rafael Prikladnicki, Nirnaya Tripathi, Leandro Bento Pompermaier | 2023-09-21T19:02:02Z | http://arxiv.org/abs/2309.12434v1 | # Exploration of Technical Debt in Start-ups
###### Abstract.
_Context_: Software start-ups are young companies aiming to build and market software-intensive products fast with little resources. Aiming to accelerate time-to-market, start-ups often opt for ad-hoc engineering practices, make shortcuts in product engineering, and accumulate technical debt.
_Objective_: In this paper we explore to what extent precedents, dimensions and outcomes associated with technical debt are prevalent in start-ups.
_Method:_ We apply a case survey method to identify aspects of technical debt and contextual information characterizing the engineering context in start-ups.
_Results_: By analyzing responses from 86 start-up cases we found that start-ups accumulate most technical debt in the testing dimension, despite attempts to automate testing. Furthermore, we found that start-up team size and experience is a leading precedent for accumulating technical debt: larger teams face more challenges in keeping the debt under control.
_Conclusions_: This study highlights the necessity to monitor levels of technical debt and to preemptively introduce practices to keep the debt under control. Adding more people to an already difficult to maintain product could amplify other precedents, such as resource shortages, communication issues and negatively affect decisions pertaining to the use of good engineering practices.
Software start-ups, technical debt
[MISSING_PAGE_POST]
of how technical debt influences start-ups and to enable start-up teams to make better decisions in regards to the trade-off between quality and time-to-market.
Technical debt has been extensively studied in the context of established companies and in relation to software maintenance (Tom et al., 2016; Tom et al., 2017). For example, Tom et al. (2018) present a taxonomy comprising precedents, dimensions, and outcomes of technical debt. We adopt the terminology from this taxonomy to enable traceability.
Precedents are contextual factors in the development organization that contribute to the accumulation of technical debt, e.g. a lack of resources. Dimensions describe different types of technical debt, e.g. documentation, architecture, or testing debt. Outcomes refer to consequences of having excess technical debt, such as impaired productivity or quality (Tom et al., 2018).
While technical debt is a liability, development teams should manage it and use it as a leverage to attain otherwise unattainable goals (Tom et al., 2018). In the start-up context, the concept of technical debt is explored only superficially. Giardino et al. (2018) argue that the need for speed, cutting edge technologies and uncertainty about a product's market potential are the main precedents for cutting corners in product engineering. However, if a start-up survives past its initial phases, management of technical debt becomes more and more important (Giardino et al., 2018; Gudmund et al., 2018).
Our earlier study on software engineering anti-patterns in start-ups (Tom et al., 2018) indicates that poorly managed technical debt could be one contributing factor to high start-up failure rates, driven by poor product quality and difficult maintenance. Negative effects of technical debt on team productivity have also been observed (Giardino et al., 2018).
In this study, we explore how start-ups estimate technical debt, what are precedents for accumulating technical debt, and to what extent start-ups experience outcomes associated with technical debt. We use a case survey as data source and apply a combination of quantitative and qualitative methods to explore technical debt in the surveyed companies. Our objective is to provide a fine-grained understanding of technical debt and its components that could provide a basis for defining start-up context-specific practices for technical debt management.
The main contribution of this paper is an empirical investigation that identifies the key precedents for the accumulation of technical debt in software start-ups, and the primary dimensions where the accumulation of debt has been observed by practitioners.
The rest of the paper is structured as follows. In Section 2 we introduce relevant concepts to understand our study. Section 3 presents the study design while results are presented in Section 4. The results are discussed and interpreted in Section 5. Section 6 concludes the paper.
## 2. Background and Related Work
### Software start-ups
Software start-ups are small companies created for the purpose of developing and bringing an innovative product or service to market, and to benefit from economy of scale. Even though start-ups share many characteristics with small and medium enterprises, start-ups are different due to the combination of challenges they face (Tom et al., 2016; Tom et al., 2018).
Start-ups are characterized by high risk, uncertainty, lack of resources, rapid evolution, immature teams, and time pressure among other factors. However, start-ups are flexible to adopt new engineering practices, and reactive to keep up with emerging technologies and markets (Tom et al., 2018; Tom et al., 2018).
Start-up companies rely on external funding to support their endeavors. In 2015 alone, start-up companies received investments of 429 billion USD in the US and Europe (Tom et al., 2018; Tom et al., 2018). With an optimistic start-up failure rate of 75%, this amounts to 322 billion USD of capital potentially wasted on building unsuccessful products.
Earlier studies show that product engineering challenges and inadequacies in applied engineering practices could be linked to start-up failures (Tom et al., 2018; Tom et al., 2018). To what extent software engineering practices are responsible or linked to success rate is very hard to judge. However, if improved software engineering practices could increase the likelihood of success by only a few percent, it would yield a significant impact on capital return.
### Technical debt
Technical debt is a metaphor to describe the extra effort arising from maintaining and removing suboptimal or flawed solutions from a software product. Technical debt can be attributed to the software itself (e.g. source code), and also other artifacts and processes that comprise the product, and are relevant for maintenance and evolution of the product. For example, user manuals, knowledge distribution, operational processes, and infrastructure (Tom et al., 2018).
Suboptimal solutions find their way into software products due to a variety of reasons, such as ignorance of good engineering practices, oversight, lack of skills, or pragmatism (Bradbury et al., 2016). Taking engineering shortcuts and delivering flawed solutions is often used as leverage to achieve faster time-to-market. However, the debt should be repaid by removing flawed solutions from the product (Tom et al., 2018; Tom et al., 2018).
When not addressed, suboptimal solutions make maintenance and evolution of software products difficult: any changes in the product require more effort than without the debt. This extra effort takes time away from developing new features, may overwhelm a team with firefighting tasks just to keep the product running, and decreases product quality altogether (Tom et al., 2018).
Giardino et al. (2018) argue that technical debt in start-ups accumulates from prioritizing development speed over quality, team aspects, and lack of resources. We combine results from their work, which is specific to start-ups, with a general taxonomy of technical debt by Tom et al. (Tom et al., 2018). We adopt the model of precedents, dimensions, and outcomes as proposed by Tom et al. (Tom et al., 2018) and map it with the categories of the Greenfield start-up model (Giardino et al., 2018) to identify and to focus on relevant aspects of technical debt for start-ups, see Fig. 1.
As precedents, we study engineering skills and attitudes, communication issues, pragmatism, process, and resources. We explore technical debt in forms of code smells, software architecture, documentation, and testing. Furthermore, we attempt to understand to what extent team productivity and product quality is a challenge in start-ups. We use this conceptual model of technical debt as a basis to scope and define the research methodology, discussed next.
## 3. Research Methodology
### Research questions
To achieve our goal and to drive the study we formulate the following research questions:
**RQ1:** How do start-ups estimate technical debt?
**Rationale:** Technical debt can be incurred in different forms, for example, as code smells, incomplete or outdated documentation, suboptimal software architecture, or shortcuts taken in testing (Krishnan et al., 2017). We aim to understand how start-ups estimate different types of technical debt, and what types of technical debt are prevalent in start-ups and primary candidates for further investigation. In addition, what types of technical debt are least accumulated, i.e. are irrelevant or already well managed in the start-up context.
**RQ2:** What are precedents of technical debt in start-ups?
**Rationale:** Earlier studies report a number of precedents contributing to the accumulation of technical debt, such as prioritizing time-to-market over product quality and severe lack of resources (Bordes et al., 2016), developer skills and attitude, lack of process, oversight, and ignorance (Sutton et al., 2017). We aim to corroborate what precedents, identified by earlier studies in other contexts, are also present in start-ups.
**RQ3:** What outcomes linked to technical debt do start-ups report?
**Rationale:** Decreasing productivity, decaying morale, product quality issues, and increasing risks are reported as outcomes of technical debt (Bordes et al., 2016; Sutton et al., 2017). Yet, there is a belief that any amount of technical debt can be written off if a product or a specific feature does not succeed in market (Bordes et al., 2016). We aim to corroborate what outcomes, identified by earlier studies and linked to increased amounts of technical debt, do start-ups report.
### Data collection
We used a case survey method to collect primary data from start-up companies (Krishnan et al., 2017; Krishnan et al., 2017).
The case survey method is based on a questionnaire and is a compromise between a traditional case study and a regular survey (Krishnan et al., 2017). We have designed the questionnaire to collect practitioners experiences about specific start-up cases.
During the questionnaire design phase, we conducted multiple internal and external reviews to ensure that all questions are relevant, clear and that we receive meaningful answers. First, the questions were reviewed in multiple rounds by the first three authors of this paper to refine the scope of the survey and the question formulations. Then, with the help of other researchers from the Software Start-up Research Network1, we conducted a workshop to gain external input on the questionnaire. A total of 10 researchers participated and provided their input.
Footnote 1: The Software Start-up Research Network, [https://softwarestartups.org/](https://softwarestartups.org/)
Finally, the questionnaire was piloted with four practitioners from different start-ups. During the pilots, respondents filled in the questionnaire while discussing questions, their answers and any issues with the first author of this paper.
As a result of these reviews, we improved the question formulations and removed some irrelevant questions. The finalized questionnaire2 contains 85 questions in 10 sections. The questionnaire captures 285 variables from each start-up case.
Footnote 2: [http://startupcontestmap.org/exp-survey/woifenw2](http://startupcontestmap.org/exp-survey/woifenw2)
From all the variables, 45 variables focus on capturing the magnitude of dimensions, precedents, and outcomes linked to technical debt3. The questions capture the respondents' agreement with a statement on a Likert scale: not at all (1), a little (2), somewhat (3), very much (4). The values indicate the degree of agreement with a statement. Statements are formulated consistently in a way that lower values indicate less precedents, less outcomes, and less technical debt.
Footnote 3: The subset of questions used in this study is available here: [http://eriskslotins.lv/uploads/TD-in-start-ups-questions.pdf](http://eriskslotins.lv/uploads/TD-in-start-ups-questions.pdf)
In addition to questions pertaining technical debt, the questionnaire contains questions inquiring the engineering context in the start-up and applied software engineering practices.
The data collection took place between December 1, 2016, and June 15, 2017. The survey was promoted through personal contacts, by attending industry events, and by posts on social media websites. Moreover, we invited other researchers from the Software Start-up Research Network to collaborate on the data collection. This collaboration helped to spread the survey across many geographical locations in Europe, North and South America, and Asia.
### Data analysis
To analyze the survey responses we used a number of techniques. We started by screening the data and filtering out duplicate cases, responses with few questions answered, or otherwise unusable responses. In the screening we attempt to be as inclusive as possible and do not remove any cases based on the provided responses.
The respondent estimates on technical debt aspects are measured on an ordinal scale, measured from 1 (not at all) to 4 (very much). Respondent and start-up demographics such as age and years of operation are measured with categorical variables on a nominal scale.
Overall, we analyze responses from 86 start-up cases, with 75 data-points per case, i.e. 6450 data-points in total. To gain an overview of the data, the results were visualized by histograms, box-plots and contingency tables (Krishnan et al., 2017).
We use the Chi-Squared test of association to test if the associations between the examined variables are not due to chance. To
Figure 1. Aspects of technical debt
prevent Type I errors, we used exact tests, specifically, the Monte-Carlo test of statistical significance based on 10 000 sampled tables and assuming (\(p=0.05\)) (Hansen et al., 2010).
To examine the strength of associations we use Cramer's V test. We interpret the test results as suggested by Cohen (Cohen, 1998), see Table 1. To explore specifics of the association, such as which cases are responsible for this association, we perform post-hoc testing using adjusted residuals. We consider an adjusted residual significant if the absolute value is above 1.96 (\(Adj.residual>1.96\)), as suggested by Agresti (Agresti, 2000). The adjusted residuals drive our analysis on how different groups of start-ups estimate aspects of technical debt. However, due to the exploratory nature of our study, we do not state any hypotheses upfront and drive our analysis with the research questions.
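For illustration, the sketch below (an assumed implementation, not the original analysis script) computes the Chi-Square statistic, Cramer's V and the adjusted residuals for a generic contingency table; the asymptotic p-value is used here in place of the Monte-Carlo exact test described above.

```python
# Sketch of the association tests described above, on a hypothetical contingency
# table (rows = start-up characteristic categories, columns = Likert answers 1-4).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[12, 20, 18, 8],
                  [1,  3,  6,  5],
                  [0,  2,  5,  6]])

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

# Adjusted residuals (Agresti): (O - E) / sqrt(E * (1 - row share) * (1 - column share))
row_share = table.sum(axis=1, keepdims=True) / n
col_share = table.sum(axis=0, keepdims=True) / n
adj_residuals = (table - expected) / np.sqrt(expected * (1 - row_share) * (1 - col_share))

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, Cramer's V = {cramers_v:.3f}")
print("significant cells (|adjusted residual| > 1.96):\n", np.abs(adj_residuals) > 1.96)
```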
Full results, contingency tables, histograms and calculation details are accessible on-line4 for a full disclosure.
Footnote 4: [http://eriksolditons.lv/uploads/TD-in-start-ups-sm.pdf](http://eriksolditons.lv/uploads/TD-in-start-ups-sm.pdf)
### Validity threats
In this section we follow guidelines by Runeson et al. (Runeson et al., 2010) and discuss four types of validity threats and applied countermeasures in the context of our study.
#### 3.4.1. Construct validity
Construct validity concerns whether operational measures really represent the studied subject (Runeson et al., 2010). A potential threat is that the statements we use to capture respondent estimates are not actually capturing the studied aspects of technical debt.
To address this threat we organized a series of workshops with other researchers and potential respondents to ensure that questions are clear, to the point, and capture the studied phenomenon.
Each aspect, i.e. type of precedent, is triangulated by capturing it by at least three different questions in the questionnaire. To avoid biases stemming from respondents opinions about technical debt and to capture the actual situation we avoid mentioning technical debt in the questions. Instead, we formulate the questions indirectly to capture respondent estimates on different aspects associated with technical debt. For example, we ask whether they find it difficult to understand requirements documentation.
To accommodate for the fact that a respondent may not know answers to some of the questions, we provide an explicit 'I do not know" answer option to all Likert scale questions.
#### 3.4.2. Internal validity
This type of validity threat addresses causal relationships in the study design (Runeson et al., 2010). In our study we use a model of precedents, dimensions and outcomes of technical debt. The literature, for example, Tom et al. (Tom et al., 2012) and Li et al. (Li et al., 2012), suggest that there is a causality between the three. We, however, present respondent estimates on precedents, dimensions and the outcomes separately without considering or implying any causality.
#### 3.4.3. External validity
This type of validity threat concerns to what extent the results could be valid for start-ups outside the study (Runeson et al., 2010). The study setting for participants was as close to real life as possible, that is, the questionnaire was filled in without researcher intervention and in the participants' own environment.
The sampling of participants is a concern for external validity. We use convenience sampling to recruit respondents and, with the help of other researchers, distributed the survey across a number of different start-up communities. Demographic information from respondent answers shows that our sample is skewed towards active companies, respondents with little experience in start-ups, young companies and small development teams of 1-8 engineers. In these aspects our sample fits the general characteristics of start-ups, see for example, Giardino et al. (Giardino et al., 2011, 2012) and Klofins et al. (Kloins et al., 2013). However, there clearly is a survivorship bias, that is, failed start-ups are underrepresented, thus our results reflect state-of-practice in active start-ups.
Another threat to external validity stems from case selection. The questionnaire was marketed to start-ups building software-intensive products, however due to the broad definition of software start-ups (see Giardino et al. (Giardino et al., 2011)), it is difficult to differentiate between start-ups and small and medium enterprises. We opted to be as inclusive as possible and to discuss relevant demographic information along with our findings.
#### 3.4.4. Conclusion validity
This type of validity threat concerns the possibility of incorrect interpretations arising from flaws in, for example, instrumentation, respondent and researcher personal biases, and external influences (Runeson et al., 2010).
To make sure that respondents interpret the questions in the intended way we conducted a number of pilots and workshops and improved the questionnaire afterwards. To minimize the risk of systematic errors, the calculations and statistical analysis were performed by the first and the third author independently, and findings were discussed among the authors.
To strengthen reliability and repeatability of our study, all survey materials and calculations with immediate results are published online.
## 4. Results
To answer our research questions we analyze 6450 data-points from 86 start-up cases. The majority of these start-ups (63 out of 86, 73%) are active and had been operating for 1 - 5 years (58 out of 86, 67%), see Fig. 2. Start-ups are geographically distributed among Europe (34 out of 86, 40%), South America (41 out of 86, 47%), Asia (7 out of 86) and North America (2 out of 86).
Our sample is about equally distributed in terms of the product development phase. We follow a start-up life-cycle model proposed by Crowne (Crowne, 2007) and distinguish between inception, stabilization, growth and maturity phases. In our sample, 16 start-ups have been working on a product but haven't yet released it to market, 24 teams had released the first version and actively develop it further with customer input, 26 start-ups have a stable product and they focus on gaining customer base, and another 16 start-ups have mature
| Cramer’s V value | Interpretation |
| --- | --- |
| \(\geq\) 0.1 | Weak association |
| \(\geq\) 0.3 | Moderate association |
| \(\geq\) 0.5 | Strong association |

Table 1. Interpretation of Cramer’s V test
products and they focus on developing variations of their products. The distribution of start-ups by their life-cycle phase and length of operation is shown in Fig. 3. In the figure, bubble size denotes the number of people in the team. Most start-ups in the sample (75 out of 86, 87%) have small teams of 1 - 8 engineers actively working on the product.
About an equal number of start-ups had indicated that they work on more than one product at a time. Start-ups in our sample do per-customer customization to some extent: 10 companies (11%) had specified that they tailor each product instance to a specific customer, 30 companies (35%) do not do per-customer customization at all, while 43 start-ups (49%) occasionally perform product customization for an individual customer.
The questionnaire was filled in mostly by start-up founders (64 out of 86, 74%) and engineers employed by start-ups (15 out of 86, 17%). About a half of respondents have specified that their area of expertise is software engineering (49 out of 86, 56%). Others have specified marketing, their respective domain, and business development as their areas of expertise.
Respondents length of software engineering experience ranges from 6 months to more than 10 years. A large portion of respondents (44 out of 86, 51%) had less than 6 months of experience in working with start-ups at the time when they joined their current start-up.
### Dimensions of technical debt
We start our exploration by looking at the extent to which the dimensions of technical debt (documentation, architecture, code, and testing) are present in the surveyed start-ups. We quantify the degree of technical debt by aggregating respondent answers on questions pertaining to each dimension. Answers were given on a Likert scale where higher values indicate more estimated technical debt in a given dimension.
Responses from the whole sample indicate that start-ups estimate some technical debt (2 on a scale from 1 to 4) in documentation, architecture, and code dimensions, while testing debt is estimated as the most prevalent (3 in a scale from 1 to 4). Fig. 4 shows the median (dark horizontal line), first and third quartile, and minimum and maximum estimates on all statements pertaining to a specific debt type.
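The aggregation of Likert answers into per-dimension estimates can be sketched as follows; the question-to-dimension mapping and the response values are hypothetical placeholders for the actual questionnaire items, not the survey data.

```python
# Illustrative sketch (not the authors' analysis script): pool Likert answers
# (1 = not at all ... 4 = very much) per technical-debt dimension and summarise
# them with the median and quartiles, as visualised by the box-plots in Fig. 4.
import pandas as pd

responses = pd.DataFrame({                 # one row per start-up case, values 1-4
    "doc_q1": [2, 3, 1], "doc_q2": [2, 2, 3],
    "test_q1": [3, 4, 3], "test_q2": [3, 3, 4],
})
dimensions = {"documentation": ["doc_q1", "doc_q2"], "testing": ["test_q1", "test_q2"]}

for name, cols in dimensions.items():
    answers = responses[cols].stack()      # pool all answers for the dimension
    print(name, answers.median(), answers.quantile([0.25, 0.75]).tolist())
```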
To explore the estimated degree of technical debt further, we analyze the influence of respondent demographics, such as relationship with the start-up and background, and start-up demographics, such as product life-cycle phase, team skill level and longevity of the start-up, on the responses.
The analysis shows that only start-up state, that is, whether the start-up is active, paused, acquired, or closed, has an effect on the overall estimates of technical debt, see Table 2. In the table we show the strength (measured by Cramer's V test) of statistically significant associations (\(p<0.05\), measured by the Chi-Square test) between relevant characteristics of start-ups and technical debt dimensions. In the last column we show if the characteristic has an effect on all dimensions together.
Observe that the level of engineering skills and domain knowledge pertains to the whole team. However, practical experience pertains only to the respondent. We show respondent characteristics as well to illustrate to what extent respondents background influences their responses. For instance, respondents with more practical experience estimate documentation debt more critically, see Table 2, and are more critical about skills shortages, see Table 3.
Figure 4. Estimates for the prevalence of different dimensions of technical debt from the case survey
Figure 3. Distribution of start-ups by product phase and length of operation
Figure 2. Distribution of start-ups by the founding year and their current state
We highlight important findings in framed boxes and discuss them in Section 5.
**Finding 1:** Start-ups that are in the active category estimate technical debt, overall in all dimensions, lower than closed or acquired start-ups.
Product phase, team size and level of domain knowledge have effects on individual technical debt dimensions. We present these results next.
#### 4.1.1. Documentation debt
Documentation debt refers to any shortcoming in documenting aspects of software development, such as architecture, requirements, and test cases (Krishnan, 2017).
We look into requirements, architecture and test documentation because these are the essential artifacts guiding a software project. Requirements capture stakeholders needs and provide a joint understanding of what features are expected from the software. Architecture documentation lists design principles, patterns and components comprising the software. Documentation of test cases supports testing activities and provides means for quality assurance (Krishnan, 2017).
Only 8% (7 out of 84) of start-ups in our sample have explicitly stated that they do not document requirements in any way. The most popular forms of documenting requirements are informal notes and drawings (50 out of 86, 58%), followed by organized lists (20 out of 86, 23%).
Responses from the whole sample show that start-ups have some amount of documentation debt (\(Median=2.0\)), see Fig. 4. Exploring what start-up characteristics have an effect on the estimates, see Table 2, we found that start-ups who are active estimate documentation debt lower than acquired or closed companies. We also found that teams with sufficient domain knowledge estimate documentation debt lower than teams with many gaps in their domain knowledge.
#### 4.1.2. Architecture debt
Architecture debt refers to compromises in internal qualities of the software such as maintainability, scalability, and evolvability (Krishnan, 2017).
Results from the whole sample show that start-ups experience some architectural debt (\(Median=2.0\)), see Fig. 4. By looking into what start-up characteristics have an effect on how respondents estimate architecture debt, we found that state of the start-up and the product phase have an effect on the estimates, see Table 2. Active start-ups have provided substantially lower estimates than acquired and closed companies. We found that start-ups who have just started on product engineering and haven't yet released it to market, experience almost no architectural debt. During stabilization and growth phases the estimates become more critical. However, during the maturity phase estimates become slightly more optimistic, see Fig. 5. Box-plots in the figure show responses from all statements pertaining to architecture debt.
#### 4.1.3. Code debt
Code debt refers to a poorly written code. Signs of a poorly written code are, for example, unnecessary complex logic, code clones, and bad coding style affecting code readability. Poorly written code is difficult to understand and change (Krishnan, 2017; Krishnan, 2017; Krishnan, 2017).
Results from the whole sample show that start-ups experience some code debt (\(Median=2.0\)), see Fig. 4. By looking into what start-up characteristics have an effect on how respondents estimate code debt we found that the state of the start-up, team size, and level of per-customer tailoring have an effect on the estimates, see Table 2. Active start-ups estimate code debt lower than acquired start-ups. Start-ups with larger teams (9 or more people) provide higher estimates on code debt than small teams. Start-ups who do not offer per-customer customization estimate code debt lower. However, start-ups that occasionally tailor their product to the needs of a specific customer estimate their code debt higher.
#### 4.1.4. Testing debt
Testing debt refers to a lack of test automation, leading to the need to manually retest the software before a release. The effort of manual regression testing grows exponentially with the number of features, slowing down release cycles and making defect detection a time consuming and tedious task (Krishnan, 2017).
| # | Characteristic | Documentation | Architecture | Code | Testing | All |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | State of the start-up | 0.346 | 0.326 | 0.414 | - | 0.318 |
| 2 | Product phase | - | 0.329 | - | - | - |
| 3 | Overall team size | - | - | 0.427 | - | - |
| 4 | Level of domain knowledge | 0.334 | - | - | - | - |
| 5 | Per-customer tailoring | - | - | 0.423 | - | - |
| 6 | Practical experience | 0.337 | - | - | - | - |

Table 2. Results of Cramer’s V test on associations between dimensions and start-up characteristics with \(p<0.05\)
Figure 5. Box-plot showing how in different product phases start-ups estimate architecture debt
Answers to questions inquiring use of automated testing show that about a third (26 out of 86, 30%) of start-ups are attempting to implement automated testing, and only 17 start-ups (20%) have explicitly stated that no test automation is used.
Despite attempts to automate, companies across the whole sample estimate their testing debt somewhat high (\(Median=3\)), see Fig. 4. Manual exploratory testing is reported as the primary testing practice, regardless of start-up life-cycle phase, team size and engineering experience, and length of operation.
Similar results, showing that only a small number of mobile application projects have any significant code coverage by automated tests, and listing time constraints as the top challenge for adopting automated testing, were obtained by Kochhar et al. (Kochhar et al., 2017).
### Precedents for technical debt
We asked the respondents to estimate various precedents of technical debt in their start-ups, such as attitudes towards good software engineering practices, pragmatic decisions to make shortcuts in product engineering, communication issues in the team, level of team engineering skills, time and resource shortages, and lack of established SE processes.
Box-plots with median responses from the whole sample are shown in Fig. 6. Higher values indicate stronger agreement with the presence of a precedent in the start-up. Poor attitude is the least common precedent for technical debt, while resource shortage is estimated as the most prevalent precedent.
Looking into what start-up characteristics influence the responses, we find that start-up team size and team's engineering skills have a significant effect on the estimates overall, see Table 3. Larger teams of 9 or more people estimate the precedents for technical debt higher than small teams.
In the results we show only characteristics with statistically significant associations; thus the listed characteristics differ between Tables 2 and 3.
#### 4.2.1. Attitude towards good engineering practices
The responses to questions about following good engineering practices suggest that start-up engineers do realize the importance of following good architecture, coding and testing practices (\(Median=1\)), see Fig. 6.
Comparing how responses on attitude differ by start-up characteristics we found that start-ups that are active estimate their attitudes more optimistically than acquired or closed companies. That is, they agree more with the benefits of following good engineering practices, such as coding conventions and thorough testing of the product.
#### 4.2.2. Pragmatism
Estimates on statements about pragmatic considerations, that is, prioritization of time-to-market over good engineering practices, show that start-ups are ready to make shortcuts to speed up time-to-market (\(Median=2\)). However, the spread of estimates suggests that different companies have very different attitudes towards deliberately introducing technical debt, see Fig. 6. Comparing how estimates on attitude differ by start-up characteristics we found that start-ups with larger teams of 15 or more developers estimate pragmatic precedents higher than smaller teams.
#### 4.2.3. Communication
Estimates on statements about communication show that communication issues could be one of the precedents for introducing technical debt, see Fig. 6. We observe that communication issues become significantly more severe in larger engineering teams of 13 and more people than in smaller teams. Moreover, the results suggest that teams with better engineering skills experience fewer communication issues.
#### 4.2.4. Engineering skills
Estimates of the extent to which start-ups face a lack of engineering skills show that a skills shortage could be a precedent for accumulating technical debt, see Fig. 6.
Comparing how estimates on skill shortages differ by start-up characteristics we found that the state of the start-up, team size, length of practical experience, and level of estimated engineering skills have a significant influence on the estimates. We found that active start-ups estimate skills shortages lower than closed down companies. A somewhat expected result is that teams with adequate engineering skills provide significantly lower estimates for challenges associated with skills shortages.
#### 4.2.5. Resources
Looking at differences between estimates on time and other types of resources we found that they are tied together, see Fig. 6. That is, companies reporting time shortages also report resource shortages. A potential explanation for the association is that time pressure is created internally by a need to get the product out and start generating revenue, and not by an external market pressure. We also found that per-customer customization, overall team size, and level of domain knowledge have an effect on how start-ups estimate resource shortages.
Estimates of resources and time shortages show that resource issues are the highest estimated precedent for technical debt (\(Median=2\) -
Figure 6. Box-plot showing how the sample estimates different precedents for technical debt
2.5). We find that occasional per-customer tailoring is associated with higher estimates on resource shortages. Potentially, start-ups suffering from lack of resources opt for occasional customization to serve needs of an important customer, thus acquiring resources for further development. Start-ups with smaller teams of 1-3 people estimate resource shortages lower than larger start-ups of 9-12 people. A plausible explanation for this association could be that supporting a larger team requires more resources.
#### 4.2.6. Process
Respondent estimates on the software engineering process issues show that frequent and unplanned changes occur and could cause difficulties in avoiding technical debt, see Fig. 6. We found that estimates on process issues become more severe as team size grows.
### Outcomes of technical debt
To explore potential outcomes of technical debt we presented respondents with statements exploring to what extent team productivity and product quality are concerns in their start-ups. Estimates from the whole sample show that start-ups experience some quality and productivity issues (\(Median=2\)) that could be associated with accumulated technical debt. We found that team size is the only characteristic that influences the estimates (Cramer's V \(=0.362\)).
Looking into what types of technical debt are associated with specific outcomes linked to technical debt, we found a clear association between estimates of technical debt and estimates of the outcomes, see Table 4.
Code debt has the most severe impact on both productivity and quality. Architecture debt has a similar effect, albeit to a lesser extent. Documentation debt impairs productivity. However, we did not find a statistically significant association between testing debt and loss of productivity or quality.
## 5. Discussion
### Reflections on the research questions
Our results on how start-ups estimate technical debt show that active start-ups estimate aspects of technical debt significantly lower than closed or acquired start-ups, see Finding 1 in Section 5.1. A plausible explanation for this result could be that lower technical debt helps start-ups to have a more stable and easier to maintain product, thus giving a start-up more room for evolving the product into something the market wants, i.e. to pivot (Bowden, 2017). However, excess technical debt hinders product evolution and could be one of the contributing factors to the shutdown of a company.
An alternative explanation is that technical debt could be invisible and compensated by the team's implicit knowledge. However, when a start-up is acquired by another company and the product is transferred to another team, all the technical debt becomes visible. Difficulties in capturing undocumented knowledge and the associated drop in performance of the receiving team have been recognized in the context of agile project handover (S
start-ups mature, see Fig 5. Marketing of the product could be a source of new challenges for the product development team. For example, the product must support different configurations for different customer segments, provide a level of service, and cope with a flow of requests for unanticipated features (Krause et al., 2015; Krause et al., 2016). Earlier shortcuts in product architecture are therefore exposed and must be addressed.
Overall team size and level of engineering skills could be the most important characteristics contributing to precedents and linked to technical debt in most dimensions, see Finding 3 in Section 4.2. Larger teams of 9 or more people experience more challenges and report higher technical debt in all categories. This finding is similar to Melo et al. (2016) studying productivity in agile teams. Smaller teams are better aligned and more efficient in collaboration with little overhead. However, as the team size grows more processes and artifacts for coordination are needed (Melo et al., 2017). Therefore larger teams have more artifacts that can degrade.
Team size could be an indicator of the general complexity of a start-up and the product. More people are added to the team when there are more things to be taken care of. Therefore, the technical debt could stem not only from the number of people but also from increasing complexity of the organization itself.
Our results show that increase in team size is also associated with outcomes of technical debt, a decrease in productivity and product quality, see Finding 4 in Section 4.3. This result could be explained by our earlier discussion on how larger teams require more coordination for collaboration. However, the more critical estimates by larger teams could be also associated with the increase in product complexity as new features are added. Rushing to release new features could contribute to the accumulation of technical debt until deliberate, corrective actions are taken, as observed in mobile application development (Rushing et al., 2017).
As a software product grows, it naturally becomes more difficult to maintain. For instance, if individual product components do not change and the new components are at the same quality level as existing ones, the increased number of components and their dependencies requires more effort from engineers to maintain the product and creates more room for defects (Rushing et al., 2017). This is software decay and is not the same as avoidable technical debt stemming from the trade-off between quality and speed. Distinguishing between true technical debt and software decay is an important next step in providing practical support for software-intensive product engineering in start-ups.
### Implications for practitioners
This study presents several implications for practitioners:
1. Start-up teams with a higher level of engineering skills and respondents with more experience perceive aspects of technical debt more severely. Less skilled teams may not be aware of their practices introducing additional technical debt, and of the amount of technical debt in their products. Using tools and occasional external expert help could help to identify unrealized technical debt, and to improve any sub-optimal practices.
2. Start-up team size correlates with more severe precedents and outcomes of technical debt. Keeping a team small and skilled could be a strategy to mitigate precedents for technical debt. To support growth of the team, more coordination practices need to be introduced, and the impact on technical debt monitored. Additional coordination practices require more maintenance of coordination artifacts. Thus, there is a practical limit to how large a team can grow before it needs to be divided into sub-teams.
3. There is an association between levels of technical debt and a start-up outcome. Having less technical debt could give a start-up more room for pivoting and product evolution in the long term.
4. There are certain moments when the effects of technical debt are the most severe. For example, shipping a product to a large number of customers, scaling up the team, and handing the product over to another team. The anticipation of such moments and adequate preparations could help to mitigate the negative effects of technical debt.
5. The most significant type of technical debt in start-ups is code smells. We found that poorly structured and documented code has the strongest association with issues in team productivity and product quality. However, detection of code smells can be automated with open-source tools, thus alleviating removal of this type of debt.
## 6. Conclusions and Future Work
In this paper, we report how technical debt is estimated in start-ups building software-intensive products. We explore to what extent precedents, dimensions, and outcomes, identified by earlier studies, are relevant in the start-up context. We attempt to identify what start-up characteristics have an amplifying or remedying effect on technical debt.
Our results show that, even though start-up engineers realize the importance of good engineering practices, they cut corners in product engineering, mostly due to resource pressure and a need for faster time to market. The results suggest that precedents for technical debt become more severe as start-ups evolve and severity of the precedents could be associated with the number of people working in a start-up and a product life-cycle phase.
Our results show significantly different estimates from closed, acquired and operational start-ups. The differences highlight how start-ups use technical debt as leverage, and emphasize the importance of careful technical debt management.
This exploratory study leads to a formulation of several hypotheses:
1. Technical debt peaks at the growth stage when a start-up attempts to market the product.
2. The number of people in a team amplifies precedents for technical debt.
3. There is an association between a start-up outcome and their technical debt management strategy.
We aim to explore these hypotheses further by triangulating results from this study with qualitative data from interviews and artifact analysis.
## 7. Acknowledgments
The authors of this paper would like to thank all practitioners who found time and motivation to share their experiences. Reaching this diverse population of start-ups would not be possible without help and support from the Software Start-up Research Network community, and specifically Nana Assyne, Anh Nguyen Duc, Ronald Jabangwe, Jorge Melegati, Bajwa Sohaib Shahid, Xiaofeng Wang, Rafael Matone Chanin, and Pekka Abrahamsson.
Work of R. Prikladnicki is supported by Fapergs (process 17/2551-0001205-4).
|
2309.14045 | Impacts of Gravitational-Wave Background from Supermassive Black Hole
Binaries on the Detection of Compact Binaries by LISA | In the frequency band of Laser Interferometer Space Antenna (LISA), extensive
research has been conducted on the impact of foreground confusion noise
generated by galactic binaries within the Milky Way galaxy. Additionally, the
recent evidence for a stochastic signal, announced by the NANOGrav, EPTA, PPTA,
CPTA and InPTA, indicates that the stochastic gravitational-wave background
generated by supermassive black hole binaries (SMBHBs) can contribute a strong
background noise within the LISA band. Given the presence of such strong noise,
it is expected to have considerable impacts on LISA's scientific missions. In
this work, we investigate the impacts of the SGWB generated by SMBHBs on the
detection of massive black hole binaries (MBHBs), verified galactic binaries
(VGBs) and extreme mass ratio inspirals (EMRIs) in the context of LISA, and
find it crucial to resolve and eliminate the excess noise from the SGWB to
ensure the success of LISA's missions. | Fan Huang, Yan-Chen Bi, Zhoujian Cao, Qing-Guo Huang | 2023-09-25T11:21:24Z | http://arxiv.org/abs/2309.14045v1 | Impacts of Gravitational-Wave Background from Supermassive Black Hole Binaries on the Detection of Compact Binaries by LISA
###### Abstract
In the frequency band of Laser Interferometer Space Antenna (LISA), extensive research has been conducted on the impact of foreground confusion noise generated by galactic binaries within the Milky Way galaxy. Additionally, the recent evidence for a stochastic signal, announced by the NANOGrav, EPTA, PPTA, CPTA and InPTA, indicates that the stochastic gravitational-wave background generated by supermassive black hole binaries (SMBHBs) can contribute a strong background noise within the LISA band. Given the presence of such strong noise, it is expected to have considerable impacts on LISA's scientific missions. In this work, we investigate the impacts of the SGWB generated by SMBHBs on the detection of massive black hole binaries (MBHBs), verified galactic binaries (VGBs) and extreme mass ratio inspirals (EMRIs) in the context of LISA, and find it crucial to resolve and eliminate the excess noise from the SGWB to ensure the success of LISA's missions.
## I Introduction
Laser Interferometer Space Antenna (LISA) is a space-borne gravitational wave (GW) detector operating in the frequency band of approximately \(10^{-4}\sim 10^{-1}\) Hz [1; 2; 3]. This low-frequency band is abundant in a variety of GW sources that will enable us to observe the universe in a new and unique way, yielding new insights into a wide range of topics in astrophysics and cosmology.
LISA has proposed a multitude of scientific objectives (SOs) associated with the necessary observation requirements for their fulfillment. Those observation requirements are in turn related to mission requirements (MRs) pertaining to noise performance, mission duration _etc_, which require calculation of the signal-to-noise ratio (SNR) for assessment [2]. Different noise performance levels can lead to significant variations in SNR for a specific GW source. Meanwhile, the detectability and parameter measurement accuracy of this source will also be affected by the noise.
According to the 2017 LISA design [2], the strain sensitivity curve of LISA is the combination of the predicted Michelson-equivalent sensitivity and stochastic gravitational wave background (SGWB) noise. In previous papers, the foreground noise around \(10^{-3}\) Hz due to galactic binaries [4; 5; 6] and the SGWB above \(10^{-3}\) Hz from stellar origin black holes (SOBHs) [7] were discussed. Moreover, the recent evidence for a stochastic signal consistent with an SGWB in the spectrum from \(10^{-9}\sim 10^{-1}\) Hz, announced by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) [8; 9; 10], the European Pulsar Timing Array (EPTA) in collaboration with the Indian Pulsar Timing Array (InPTA) [11; 12], the Parkes Pulsar Timing Array (PPTA) [13; 14] and the Chinese Pulsar Timing Array (CPTA) [15; 16], indicates that the SGWB due to supermassive black hole binaries (SMBHBs) can significantly contribute a background noise in the LISA frequency band [17; 18; 19; 20], bringing potential challenges to the LISA mission. However, this influence has not been investigated well before.
In this study, we utilize the up-to-date SGWB data from SMBHBs indicated by the NANOGrav 15-year data set to analyze its impacts on the LISA mission. This paper is organized as follows: We first introduce the sensitivity of LISA and present the analytic-fit sensitivity curve we adopted in Section II. Since the majority of individual LISA sources will be binary systems covering a broad range of masses [2; 3], we then address the impacts from the SGWB generated by SMBHBs on the detection of compact binaries in the LISA mission. More specifically, we examine the effect on the detection of massive black hole binaries, verified galactic binaries (VGBs) and extreme mass ratio inspirals (EMRIs) in Sections III and IV, respectively. Finally, we draw our conclusion in Section V.
## II Sensitivity of LISA
Here we adopt the analytic-fit sensitivity curve \(S_{a}(f)\) for the Michelson-style LISA data channel given in [5], as follows
\[S_{a}(f)=\frac{10}{3L^{2}}\left(P_{\rm{OMS}}(f)+2(1+\cos^{2}(f/f_{*}))\frac{P_{ \rm{acc}}(f)}{(2\pi f)^{4}}\right)\left(1+\frac{6}{10}\left(\frac{f}{f_{*}} \right)^{2}\right), \tag{1}\]
where the transfer frequency \(f_{*}=19.09\) mHz and arm length \(L=2.5\times 10^{6}\) km. In addition to the instrument noise, the galactic confusion noise, also called the stochastic foreground, generated by unresolved galactic binaries will contribute an extra noise \(S_{c}(f)\) to the sensitivity, so that LISA's effective strain sensitivity curve \(S_{n}(f)\) becomes the sum of \(S_{a}(f)\) and \(S_{c}(f)\)[4, 2, 5, 6]. The detailed expressions of the single-link metrology noise \(P_{\rm{OMS}}(f)\) and single test mass acceleration noise \(P_{\rm{acc}}(f)\), and the galactic confusion noise \(S_{c}(f)\) are given in [5]. Such additivity of the strain sensitivity indicates that we can calculate the impacts of the SGWB by adding its corresponding strain sensitivity \(S_{\rm{GW}}(f)\) to \(S_{n}(f)\).
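A direct transcription of Eq. (1) into code could look as follows; the noise terms \(P_{\rm{OMS}}(f)\) and \(P_{\rm{acc}}(f)\) are left as user-supplied functions, since their detailed expressions are given in [5] and are not reproduced here.

```python
# Sketch of Eq. (1): Michelson-equivalent LISA strain sensitivity S_a(f), in 1/Hz.
# P_oms(f) and P_acc(f) are callables returning the metrology and acceleration
# noise of Ref. [5] (in m^2/Hz and m^2 s^-4/Hz, respectively); not reproduced here.
import numpy as np

L_ARM = 2.5e9        # arm length in metres (2.5 x 10^6 km)
F_STAR = 19.09e-3    # transfer frequency in Hz

def sensitivity_Sa(f, P_oms, P_acc):
    f = np.asarray(f, dtype=float)
    instrumental = P_oms(f) + 2.0 * (1.0 + np.cos(f / F_STAR) ** 2) \
        * P_acc(f) / (2.0 * np.pi * f) ** 4
    return (10.0 / (3.0 * L_ARM ** 2)) * instrumental * (1.0 + 0.6 * (f / F_STAR) ** 2)
```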
Following methods proposed by [4, 5], we define the noise strain sensitivity due to the SGWB as
\[S_{\rm{GW}}(f)=\frac{3H_{0}^{2}}{2\pi^{2}}\frac{\Omega_{\rm{GW}}(f)}{f^{3}}, \tag{2}\]
which can be added to the strain sensitivity of LISA with galactic confusion noise \(S_{n}(f)\) to obtain an effective strain sensitivity \(S_{\rm{eff}}(f)=S_{n}(f)+S_{\rm{GW}}(f)\)[4, 2, 5, 6], and the SNR affected by the presence of the GW background can then be written as
\[{\rm{SNR}}=2\left[\int df\frac{|\tilde{h}(f)|^{2}}{S_{\rm{eff}}(f)}\right]^{1 /2}, \tag{3}\]
where \(\tilde{h}(f)\) is the frequency domain representation of the time-domain waveform \(h(t)\), which encodes the intrinsic parameters of the GW source.
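A minimal numerical sketch of Eqs. (2)-(3) is given below. The Hubble constant value, the placeholder spectrum \(\Omega_{\rm{GW}}(f)\) and the placeholder \(|\tilde{h}(f)|\) are illustrative assumptions, not quantities taken from the analysis in this paper.

```python
import numpy as np

H0 = 2.2e-18   # assumed Hubble constant [1/s] (~68 km/s/Mpc)

def S_gw(f, omega_gw):
    """Eq. (2): strain-sensitivity equivalent of a SGWB with spectrum Omega_GW(f)."""
    return 3.0 * H0 ** 2 / (2.0 * np.pi ** 2) * omega_gw / f ** 3

def snr(f, htilde_abs, S_n, omega_gw):
    """Eq. (3): SNR of |h~(f)| against the effective sensitivity S_eff = S_n + S_GW."""
    S_eff = S_n + S_gw(f, omega_gw)
    integrand = htilde_abs ** 2 / S_eff
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f))  # trapezoid rule
    return 2.0 * np.sqrt(integral)

# placeholder inputs, for illustration only
f = np.logspace(-4, -1, 2000)
S_n = 1e-40 * (1.0 + (1e-3 / f) ** 4)          # toy detector sensitivity
omega_gw = 1e-11 * (f / 1e-3) ** (2.0 / 3.0)   # toy power-law background
htilde = 1e-20 / f                             # toy |h~(f)|
print(snr(f, htilde, S_n, omega_gw))
```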
In Fig. 1, we present the effective characteristic strain \(S_{\rm{eff}}(f)\) influenced by the SGWB originating from SMBHBs, together with the specific GW signals of VGBs and an illustrative example of an EMRI. The effective characteristic strain, which represents the cumulative impact of all SMBHBs, is depicted as the burlywood-colored region in Fig. 1. Notably, within the frequency range below several \(10^{-2}\) Hz, it experiences a dramatic increase, partially obscuring specific signals associated with VGBs and EMRIs. It is believed that GW signals from SMBHBs with SNR equal to or greater than \(\mathcal{O}(100)\) (here we set 100 as the criterion to be on the safe side) can be resolved and eliminated from the SGWB, as indicated in [21, 2]. Our approach involves using \(S_{\rm{eff}}(f)\), which accounts for contributions from all SMBHBs, as a baseline. We iteratively eliminate the contributions from SMBHB events with \({\rm{SNR}}\geq 100\) and derive a new effective characteristic strain. This iterative process continues until the effective characteristic strain converges.
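The iterative elimination just described can be summarized schematically as follows; `event_snr` and `background_from` are hypothetical placeholders for, respectively, the SNR of a single SMBHB against a given sensitivity and the SGWB built from a list of SMBHBs, and are not part of this paper's pipeline.

```python
import numpy as np

def iterate_background(f, S_n, events, event_snr, background_from, snr_cut=100.0):
    """Iteratively drop SMBHBs that are resolvable with SNR >= snr_cut and
    rebuild the effective sensitivity S_eff = S_n + S_GW from the remainder.

    `events` is a list of SMBHB parameter sets; `event_snr(ev, f, S_eff)` and
    `background_from(events, f)` are user-supplied (hypothetical) callables.
    """
    kept = list(events)
    while True:
        S_eff = S_n + background_from(kept, f)
        still_kept = [ev for ev in kept if event_snr(ev, f, S_eff) < snr_cut]
        if len(still_kept) == len(kept):   # no new events resolved: converged
            return S_eff, kept
        kept = still_kept
```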
As a result, the shift in the characteristic strain, represented by the purple-colored region in Fig. 1, is reduced compared to the scenario without elimination. Furthermore, the frequency band influenced by the SGWB shifts from several \(10^{-2}\) Hz down to a few \(10^{-3}\) Hz, moving away from LISA's most sensitive range.
Given that the MRs of LISA set the requirements for a minimum SNR level for specific detectable sources, we will further discuss the impact of the SGWB on LISA's detection in terms of variations in SNR in the subsequent sections.
## III Detectability of MBHBs
Massive black hole binaries (MBHBs) are categorized into two types: intermediate mass black hole binaries (IMBHBs), with masses between a few hundred and \(10^{5}M_{\odot}\) for each black hole, and Super
Figure 1: The expected effective characteristic strain \(S_{\rm{eff}}(f)\) in the frequency range \([10^{-5},1]\) Hz. The burlywood line represents the effective characteristic strain derived from the SGWB of all SMBHBs, while the purple line illustrates the effective characteristic strain obtained from the SGWB of SMBHBs after excluding those with \({\rm{SNR}}\geq 100\). The shaded regions in both cases indicate the 90% credible intervals. Additionally, we depict the expected characteristic strain curves of LISA with black dashed lines. The darkcyan line corresponds to the characteristic strain produced by an illustrative EMRI signal [22], while the star markers represent VGB signals [23, 24].
massive black hole binaries (SMBHBs), with masses above \(10^{5}M_{\odot}\). Tracing the origin, growth and merger history of MBHs across cosmic ages is a vital science objective of LISA.
The origin of the MBHs lurking at the centres of galaxies as the power sources of active galactic nuclei is an ongoing topic. Some studies predict that the mass range of their seeds is around \(10^{3}M_{\odot}\) to a few \(10^{5}M_{\odot}\), with formation redshifts of \(10\sim 15\) [25]. After accretion episodes and repeated mergers during the clustering of cosmic structures, those seeds can grow up to \(10^{8}M_{\odot}\) and more [26]. During this growth, accretion and mergers imprint different information on the seeds' spins. In order to measure the dimensionless spins and their misalignment with the orbital angular momentum with low absolute error, the accumulated SNR (from the inspiral phase up to merger) is required to reach a certain level.
In studying the growth mechanism of MBHs from the epoch of the earliest quasars, an accumulated SNR of at least \(\sim 200\) is required to ensure the accurate measurement of parameters. This SNR requirement is also needed for testing the propagation of GWs in LISA's science investigations (SIs) [2].
In the absence of a SGWB, the expected minimum observation rate of several MBHBs per year would fulfill the requirements of SO2, assuming a conservative population model [27].
In Fig. 2, we present contour plots of SNR with a value of SNR = 200, using the waveform model derived from references [28; 29]. These plots depict the SNR values for GW signals emitted by MBHBs, both in the presence and absence of the SGWB generated by SMBHBs. The plots are presented in the plane of the source-frame total mass (\(M\)) and redshift (\(z\)) of the sources. Without loss of generality, we assume a mass ratio of 0.2 for the binary systems. This choice corresponds to the parameterization used in LISA's Sensitivity Curve SI 2.1, which is designed for the search for seed black holes at cosmic dawn [2].
Our analysis reveals a significant reduction in the detectable redshift of GW signals generated by MBHBs with an SNR of 200 in the mass range \(10^{4}\sim 10^{8}M_{\odot}\). This reduction occurs when the SGWB from SMBHBs cannot be resolved and eliminated during the observation period. Given that the goals of LISA SI 2.2 involve the detection of sources at redshift \(z<3\) with masses ranging from \(10^{5}\) to \(10^{6}M_{\odot}\) and an accumulated SNR of at least approximately 200, the presence of the SGWB has a substantial impact on this investigation. However, should we succeed in resolving and eliminating GW signals with SNR values greater than or equal to 100, the impact on the detection of MBHBs in the LISA mission will be significantly reduced. Consequently, the objectives of SI 2.2 will be only slightly affected for MBHBs with masses exceeding \(10^{6}M_{\odot}\).
## IV Detectabilities of VGBs and EMRIs
Since large numbers of compact binaries in the Milky Way galaxy emit continuous and nearly monochromatic electromagnetic (EM) signals, some of those binaries have already been verified by observations other than GW detection. For those VGBs emitting GW signals in LISA's frequency band, joint EM and GW observations can be performed. The details of those VGBs in LISA's band can be found in [23; 24]. For LISA's SO1 (study the formation and evolution of compact binary stars in the Milky Way Galaxy), the capability to detect and measure the (intrinsic and orbital) parameters of those VGBs is vital. Assuming the strain sensitivity is unaffected by the SGWB, together with the estimation of the VGB population given in [30], LISA should be able to detect and resolve \(\sim 25000\) VGBs.
We utilize data from VGBs obtained from Gaia DR3 [23]. Following the procedures outlined in [31], the characteristic strain of the gravitational wave (GW) signal emanating from VGBs, as illustrated in Fig. 1, is described by
\[h_{c}=\sqrt{T_{\rm obs}}\frac{2(G\mathcal{M})^{5/3}}{c^{4}d}\pi^{2/3}f^{7/6}. \tag{4}\]
Figure 2: LISA’s SNR = 200 curves for the GW signals of MBHBs with and without the SGWB generated by SMBHBs, in the plane of source-frame total mass \(M\) and redshift \(z\). The mass ratio of the binaries is \(q=0.2\). The black dashed line is the curve without the SGWB from SMBHBs. The orange line shows the curve under the effect of the SGWB from all SMBHBs combined, and the purple line gives the curve under the effect of the SGWB from SMBHBs after all signals with SNR \(\geq 100\) are resolved and eliminated. The shaded regions indicate the corresponding 90% credible intervals.
Here, \(\mathcal{M}\) represents the chirp mass of each individual VGB, \(d\) signifies the distance to the source, and \(f\) corresponds to the gravitational wave frequency, which is twice the orbital frequency. For the sake of simplicity, the VGBs can be characterized as monochromatic GW signals with a set of parameters obtained from Gaia DR3. It is important to note that the SNR of VGBs can be readily calculated by taking the ratio of the characteristic strain \(h_{c}\) to the effective characteristic strain of the detector \(S_{\text{eff}}(f)\).
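A short sketch of Eq. (4) in SI units is given below; the chirp mass, distance, frequency and observation time in the example are illustrative placeholders rather than a catalogued Gaia DR3 source.

```python
import numpy as np

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8       # speed of light [m/s]
MSUN = 1.989e30   # solar mass [kg]
PC = 3.086e16     # parsec [m]

def h_c_vgb(chirp_mass_msun, distance_kpc, f_gw, T_obs):
    """Eq. (4): characteristic strain of a monochromatic VGB observed for T_obs seconds."""
    Mc = chirp_mass_msun * MSUN
    d = distance_kpc * 1e3 * PC
    return (np.sqrt(T_obs) * 2.0 * (G * Mc) ** (5.0 / 3.0) / (C ** 4 * d)
            * np.pi ** (2.0 / 3.0) * f_gw ** (7.0 / 6.0))

# illustrative (made-up) source: Mc = 0.3 Msun, d = 1 kpc, f = 3 mHz, 4 years of data
print(h_c_vgb(0.3, 1.0, 3e-3, 4 * 3.156e7))
```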
The Extreme Mass Ratio Inspiral (EMRI) is the inspiral of a stellar-mass compact object, such as a stellar-mass black hole, a neutron star or a white dwarf, into a SMBH. Here we adopt the Numerical Kludge
Figure 3: The SNR for VGB systems and an EMRI example are depicted here. The threshold corresponds to an SNR of 8, signifying the minimum requirement for detecting those compact binary signals. The orange dots represent the SNR for LISA at its original sensitivity level, while the burlywood error bars illustrate the SNR for LISA sensitivity influenced by the SGWB generated by all SMBHBs. The purple error bars indicate the SNR for LISA’s sensitivity affected by the SGWB from SMBHBs with SNR values \(<100\). For detailed information regarding the names and parameters of VGBs, please refer to [23; 24], and for the parameters of the illustrative EMRI signal, kindly refer to Figure 1.
naries due to the SGWB generated by SMBHBs are more significant than the expected galactic background. Understanding the SGWB and investigating its impacts on the LISA mission is crucial for the success of space-borne GW detectors. Additionally, our results may offer an alternative perspective for the design of future GW detectors.
_Acknowledgements._ We would like to thank Xilong Fan and Lijin Shao for useful conversation. This work makes use of the Black Hole Perturbation Toolkit. This work is supported by the grants from NSFC (Grant No. 12250010, 11975019, 11991052, 12047503), Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7009, CAS Project for Young Scientists in Basic Research YSBR-006, the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15). We acknowledge the use of HPC Cluster of ITP-CAS.
|
2309.10223 | A Generalized Approach for Recovering Time Encoded Signals with Finite
Rate of Innovation | In this paper, we consider the problem of recovering a sum of filtered
Diracs, representing an input with finite rate of innovation (FRI), from its
corresponding time encoding machine (TEM) measurements. So far, the recovery
was guaranteed for cases where the filter is selected from a number of
particular mathematical functions. Here, we introduce a new generalized method
for recovering FRI signals from the TEM output. On the theoretical front, we
significantly increase the class of filters for which reconstruction is
guaranteed, and provide a condition for perfect input recovery depending on the
first two local derivatives of the filter. We extend this result with
reconstruction guarantees in the case of noise corrupted FRI signals. On the
practical front, in cases where the filter has an unknown mathematical
function, the proposed method streamlines the recovery process by bypassing the
filter modelling stage. We validate the proposed method via numerical
simulations with filters previously used in the literature, as well as filters
that are not compatible with the existing results. Additionally, we validate
the results using a TEM hardware implementation. | Dorian Florescu | 2023-09-19T00:27:16Z | http://arxiv.org/abs/2309.10223v1 | # A Generalized Approach for Recovering Time Encoded Signals with Finite Rate of Innovation
###### Abstract
In this paper, we consider the problem of recovering a sum of filtered Diracs, representing an input with finite rate of innovation (FRI), from its corresponding time encoding machine (TEM) measurements. So far, the recovery was guaranteed for cases where the filter is selected from a number of particular mathematical functions. Here, we introduce a new generalized method for recovering FRI signals from the TEM output. On the theoretical front, we significantly increase the class of filters for which reconstruction is guaranteed, and provide a condition for perfect input recovery depending on the first two local derivatives of the filter. We extend this result with reconstruction guarantees in the case of noise corrupted FRI signals. On the practical front, in cases where the filter has an unknown mathematical function, the proposed method streamlines the recovery process by bypassing the filter modelling stage. We validate the proposed method via numerical simulations with filters previously used in the literature, as well as filters that are not compatible with the existing results. Additionally, we validate the results using a TEM hardware implementation.
Event-driven, nonuniform sampling, analog-to-digital conversion, time encoding, finite rate of innovation.
## I Introduction
Shannon's iconic work on reconstructing bandlimited signals from uniform samples [1] was generalized on multiple levels. A notable generalization is from bandlimited signals to signals belonging to shift-invariant spaces (SIS), which enables reconstructing a linear combination of uniformly spaced filters \(g(t)=\sum_{k\in\mathbb{Z}}a_{k}\varphi(t-kT)\) from uniform samples [2] and nonuniform samples [3]. An important advantage of SIS is that the theory is backward compatible with previous theory when \(\varphi(t)\) is a sinc function or a B-spline, but also enables choosing new functions \(\varphi(t)\) satisfying some predefined requirements [2]. The fact that the SIS filters are uniformly spaced can be a strong constraint when the input is a sparse sequence of filters centered at real values \(\left\{\tau_{k}\right\}_{k=1}^{K}\):
\[g(t)=\sum\nolimits_{k=1}^{K}a_{k}\varphi(t-\tau_{k}),\quad t\in\left[0,t_{ \text{M}}\right]. \tag{1}\]
Such signals can no longer be considered part of a SIS. This recovery problem has twice the number of unknowns as before, requiring the computation of \(\left\{\tau_{k},a_{k}\right\}_{k=1}^{K}\) from measurements of \(g(t)\). We call this problem finite-rate-of-innovation (FRI) sampling [4]. The versatility of FRI sampling has allowed its application in a large number of areas such as ECG acquisition and compression [5], radioastronomy [6], image processing [7], ultrasound imaging [8], calcium imaging [9], or the Unlimited Sensing Framework [10, 11, 12, 13].
In this paper we consider the problem of recovering (1) from Time Encoding Machine (TEM) measurements, which represents a different generalization of Shannon's sampling inspired from the information processing in the brain, characterized by low power consumption [14]. A TEM with input \(g(t)\) is an operator \(\mathcal{T}\) defined as \(\mathcal{T}g=\left\{t_{k}\right\}_{k\in\mathbb{Z}}\), where \(\left\{t_{k}\right\}\) is a strictly increasing sequence of time samples known as spikes or trigger times. Input recovery was demonstrated for the case when \(g(t)\) is a bandlimited function [14, 15], a function in a shift-invariant space [15, 16], or a bandlimited function with jump discontinuities [13, 17].
**Related Work.** The work on FRI signal recovery from TEM measurements started with [18] and is still at an early stage. Furthermore, currently, the input recovery guarantees assume that the FRI filters \(\varphi(t)\) are particular mathematical functions such as polynomial or exponential splines [18], hyperbolic secant kernels [19] or alpha synaptic functions [20]. The case of periodic FRI signals was considered in [21, 22, 23, 24]. We note that this line of work assumes that the TEM input is bandlimited and uses Nyquist rate type conditions for recovery. A common scenario is that \(\varphi(t)\) is not chosen by design, but rather results from the physical properties of the acquisition device [5]. In such cases, the existing work on FRI signal recovery for TEMs does not offer any guarantees. Moreover, the existing work is based on exact analytical tools for recovery, such as Prony's method, which are known to not allow good stability to noise or model mismatch. The study of general filters \(\varphi(t)\) was first numerically validated in [25].
## II Time Encoding Machines
We consider two classes of TEMs: the Asynchronous Sigma-Delta Modulator (ASDM) and the integrate-and-fire (IF). The ASDM is characterised by low power consumption and modular design [26], comprising a loop with an adder, integrator, and a noninverting Schmitt trigger, as depicted in Fig. 1(a). The initial conditions are \(z(0)=-b\) and \(y(0)=0\), where \(b\) is a positive constant. We assume that \(|g(t)+g_{0}|<b\), which ensures that \(y(t)\) is strictly increasing in the immediate positive vicinity of \(t=0\). This means that eventually \(y(t)=\delta\), and this time point represents the first ASDM output sample \(t=t_{1}\). This determines the ASDM output to change to \(z(t)=b\), which, in turn, ensures that \(y(t)\) is strictly decreasing for \(t>t_{1}\). Eventually \(y(t)=-\delta\) for \(t=t_{2}\), \(z(t)\) toggles back to \(-b\), and the process continues recursively. The output sequence \(\left\{t_{n}\right\}_{n\geqslant 1}\) satisfies the _\(t\)-transform_ equations [14]
\[\mathcal{L}_{n}g=\left(-1\right)^{n}\left[2\delta-b\Delta t_{n} \right]-g_{0}\Delta t_{n},\quad n\in\mathbb{Z}_{+}^{*}, \tag{2}\]
Figure 1: The TEMs considered in this work: (a) The asynchronous sigma-delta modulator (ASDM). (b) The integrate-and-fire (IF) model.
where \(\Delta t_{n}\triangleq t_{n+1}-t_{n}\), \(\mathcal{L}_{n}g\triangleq\int_{t_{n}}^{t_{n+1}}g\left(s\right)ds\), and \(\delta\) and \(b\) are the threshold and amplitude of the Schmitt trigger output, respectively. Parameter \(g_{0}\) represents a bias that is typically considered \(0\) in simulations, but plays a role in explaining hardware measurements [27].
The IF TEM is inspired from neuroscience, previously used to fit models of biological neurons [28], but also used in machine learning [29, 30] or the recovery of FRI signals [18, 19]. The functioning principle of the IF, depicted in Fig. 1(b), is as follows. The input \(g(t)\), added to the bias parameter \(b\), is integrated, which results in a strictly increasing function \(y(t)\). Each time \(y(t)\) crosses the threshold \(\delta\), the integrator is reset and the IF generates an output spike time \(t_{k}\). The IF TEM is described by the following equation
\[\mathcal{L}_{n}g=\delta-b\Delta t_{n},\quad n\in\mathbb{Z}_{+}^{*}. \tag{3}\]
For both the ASDM and IF the assumption is that \(\left|g(t)\right|\leqslant g_{\infty}<b\), leading to the following sampling density bounds
\[T_{\text{m}}\leqslant\Delta t_{n}\leqslant T_{\text{M}}, \tag{4}\]
where \(T_{\text{m}}\triangleq\frac{2\delta}{b+g_{\infty}}\), \(T_{\text{M}}\triangleq\frac{2\delta}{b-g_{\infty}}\) for the ASDM and \(T_{\text{m}}\triangleq\frac{\delta}{b+g_{\infty}}\), \(T_{\text{M}}\triangleq\frac{\delta}{b-g_{\infty}}\) in the case of the IF [14].
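For illustration, a minimal Python sketch of the IF encoder of Fig. 1(b) is given below, using a simple Riemann-sum integration on a dense time grid; the triangular filter and the parameter values are illustrative choices, not the setups used in the experiments of Section VI.

```python
import numpy as np

def if_tem(g, t_grid, delta, b):
    """Integrate-and-fire TEM: returns spike times {t_n} such that
    int_{t_n}^{t_{n+1}} (g(s) + b) ds = delta, cf. Eq. (3)."""
    dt = t_grid[1] - t_grid[0]
    y, spikes = 0.0, []
    for t, gt in zip(t_grid, g):
        y += (gt + b) * dt          # accumulate the integral of g + b
        if y >= delta:              # threshold crossing: emit a spike and reset
            spikes.append(t)
            y = 0.0
    return np.array(spikes)

# illustrative usage: one filtered Dirac encoded by an IF TEM
t = np.arange(0.0, 5.0, 1e-4)
phi = lambda x: np.maximum(1.0 - np.abs(x), 0.0)   # first-order B-spline, support [-1, 1]
g = 3.0 * phi(t - 2.0)                             # a_1 = 3, tau_1 = 2
t_n = if_tem(g, t, delta=0.5, b=4.0)
print(len(t_n), t_n[:4])
```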
## III Recovery of FRI Signals from TEM Samples
### _Proposed Sampling Setup_
Let \(g(t)\) belong to the input space spanned by (1), where \(\left\{a_{k},\tau_{k}\right\}_{k=1}^{K}\) are unknown values satisfying \(\varepsilon_{a}<\left|a_{k}\right|<g_{\infty},0<\tau_{k}<\tau_{k+1}<t_{\text{M}},\tau_{k+1}-\tau_{k}>\varepsilon_{\tau}\), \(\varepsilon_{a},\varepsilon_{\tau},g_{\infty},t_{\text{M}}>0\) and \(K\) is the unknown number of pulses with shape \(\varphi(t)\). We also assume that \(\varphi(t)\) is known, \(\varphi\in C^{2}(-L,0)\cap C\left(\mathbb{R}\right)\), and that \(\operatorname*{supp}\left(\varphi\right)\subseteq\left[-L,\infty\right)\). In other words, \(\varphi(t)\) is supported on \([-L,\infty)\), is second order differentiable on \((-L,0)\) and continuous on \(\mathbb{R}\). Furthermore, we assume that the left derivative \(\varphi_{-}^{\prime}(t)\) and right derivative \(\varphi_{+}^{\prime}(t)\) exist and are bounded for \(t\in\mathbb{R}\). We also assume that \(\varphi^{\prime}(t)>0,t\in(-L,0)\). Furthermore, we assume that \(\max_{t}\left|\varphi(t)\right|=1\), which does not reduce generality. The conditions on \(\varphi(t)\) define a space of functions that are relatively common, including the previously studied cases of polynomial and exponential splines [18] and alpha synaptic activation functions [20]. The hyperbolic secant kernel [19] does not fully satisfy the conditions as its support is \(\mathbb{R}\), but our analysis is also applicable given its fast decay to \(0\) (see Section VI).
Furthermore, we assume \(\max_{t}\left|g(t)\right|\leqslant g_{\infty}\), \(\varepsilon_{\tau}<L\), \(\tau_{1}\geqslant L\), and that \(g(t)\) is sampled with an ASDM or IF TEM over the finite time interval \(\left[0,t_{\text{M}}\right]\) to yield output time encoded samples \(\left\{t_{n}\right\}_{n=1}^{N}\). Both TEMs enable computing \(\left\{\mathcal{L}_{n}g\right\}_{n=1}^{N-1}\) from \(\left\{t_{n}\right\}_{n=1}^{N}\) via (2), (3), respectively. The problem we propose is to recover \(\left\{a_{k},\tau_{k}\right\}_{k=1}^{K}\) from \(\left\{t_{n}\right\}_{n=1}^{N}\), i.e., to compute \(\left\{a_{k},\tau_{k}\right\}_{k=1}^{K}\) from \(\left\{\mathcal{L}_{n}g\right\}_{n=1}^{N-1}\), which is independent of the particular TEM model used. We assume that \(\varepsilon_{\tau}>4T_{\text{M}}\), which ensures there are at least \(4\) TEM samples in between each two consecutive pulses via (4).
### _Existing Recovery Methods_
The work in [18] considers the estimation of \(\left\{\tau_{k},a_{k}\right\}\) from \(\left\{t_{n}\right\}\) for an IF TEM where the filter \(\varphi(t)\) is a polynomial or exponential spline (E-spline), compactly supported with support length \(L\). It is assumed that the pulses have no overlaps, i.e., \(\tau_{k+1}-\tau_{k}>L\). Moreover, for identifying pulse \(a_{k}\varphi\left(t-\tau_{k}\right)\), three spike times \(\left\{t_{n+i}\right\}_{i=0}^{2}\) are used, which are assumed to be located in an interval of length \(L/2\) at the onset of the pulse. Furthermore, it is assumed that
\[\sum_{n=1}^{2}c_{n,m}\left(\varphi*1_{\left[0,t_{n+1}-t_{n}\right]}(t-\tau_{n })\right)=e^{\varkappa_{m}t},\quad m\in\left\{0,1\right\}, \tag{5}\]
has exact analytical solutions \(c_{n,m}\), where \(\varkappa_{0},\varkappa_{1}\) are parameters of the E-spline. The values of \(\tau_{k}\) and \(a_{k}\) are then found by computing the signal moments \(s_{m}=\sum_{n=1}^{2}c_{n,m}y(t_{n+1})\) and then solving \(s_{m}=a_{k}e^{\varkappa_{m}\tau_{k}}\) via Prony's method. These results are extended for inputs generated with polynomial splines and piece-wise constant signals. When the pulses overlap, i.e., \(\tau_{k+1}-\tau_{k}<L\), recovery is still possible if samples of multiple TEMs are recorded [18]. For a filter \(\varphi(t)=\operatorname*{sech}^{2}(t)\)[19]
\[\mathcal{L}_{n}g=\frac{e^{t_{n}}P\left(e^{2t_{n}}\right)}{Q\left(e^{2t_{n}}\right)},\quad Q(x)=\prod_{k=1}^{K}\left(1+e^{-2\tau_{k}}x\right), \tag{6}\]
where \(P(x)\) and \(Q(x)\) are polynomials. The values \(\left\{\tau_{k},a_{k}\right\}\) can then be uniquely recovered by solving analytically (6) via (3), where the recovery approach is inspired from the recovery of FRI signals from nonuniform samples [31]. Moreover, a different analytical recovery was shown for \(\varphi(t)=te^{-t}\cdot 1_{\left[0,\infty\right)}(t)\). We note that the functions \(\varphi(t)\) above are piecewise elementary functions. Elementary functions represent a small subset of all continuous functions [32]. In fact, in practice, the impulse response of a filter rarely fits perfectly a mathematical expression, as it often results from the physical properties of a given acquisition device [33]. Therefore, using the methods above may introduce an additional error source due to model
mismatch. Thus, introducing a new method allowing perfect recovery for general filters \(\varphi(t)\) could tackle significantly wider scenarios and applications.
A separate line of work considers the case where \(g(t)\) is a periodic bandlimited signal, and the TEM sampling rate satisfies a Nyquist rate type recovery guarantee, leading to parametric input recovery approaches [21, 22, 23, 24]. In this work, we consider the general scenario where the input to the TEM is aperiodic and not necessarily bandlimited.
## IV The Proposed Recovery Method
As discussed in Section III-B, assuming that \(\varphi(t)\) is an E-spline, the method in [18] requires three consecutive TEM samples \(\left\{t_{n+i}\right\}_{i=0}^{2}\) located at the onset of the pulse to be estimated, which amounts to two consecutive integrals \(\left\{\mathcal{L}_{n+i}g\right\}_{i=0}^{1}\). Here we will show that this information is enough to recover the pulse when \(\varphi(t)\) satisfies much more relaxed assumptions, which don't require that the pulse is generated with any particular mathematical function (e.g. exponential, or polynomial).
### _The Case of One Pulse_
We first assume that \(K=1\), and subsequently extend to \(K\geqslant 1\). Let \(n_{1}\in\mathbb{Z}\) be the index of the TEM output located right after the onset of filter \(\varphi(t-\tau_{1})\), defined as
\[n_{1}\triangleq\min_{n\in\left\{1,\ldots,N\right\}}\left\{n\ \big{|}\ t_{n}>\tau_{1}-L\right\}. \tag{7}\]
Our recovery makes use of the TEM output samples \(t_{n_{1}+i},i\in\left\{1,2,3\right\}\), which, assuming \(4T_{\text{M}}<L\), satisfy
\[0<t_{n_{1}-1}\leqslant\tau_{1}-L<t_{n_{1}+i}<\tau_{1},\quad i\in\left\{0, \ldots,3\right\}. \tag{8}\]
Let \(I_{n}(\tau)\triangleq\int_{t_{n}}^{t_{n+1}}\varphi(t-\tau)dt\). The idea behind the recovery is to compute the ratio of two consecutive integrals
\[\frac{\mathcal{L}_{n_{1}+2}g}{\mathcal{L}_{n_{1}+1}g}=\frac{a_{1}I_{n_{1}+2}( \tau_{1})}{a_{1}I_{n_{1}+1}(\tau_{1})}=\frac{I_{n_{1}+2}(\tau_{1})}{I_{n_{1}+ 1}(\tau_{1})}, \tag{9}\]
which is not a function of \(a_{1}\), but only \(\tau_{1}\). If \(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\) is strictly increasing, then \(\tau_{1}\) can be uniquely estimated from (9). This recovery approach is illustrated in Fig. 2. The following lemma derives conditions to guarantee the required monotonicity.
**Lemma 1**.: _Function \(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\) is finite, differentiable, and_
\[\left(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\right)^{\prime} \geqslant\frac{\varphi_{m,\delta}^{\prime}}{2\varphi_{\mathsf{M},\delta}^{ \prime}}\frac{T_{\text{M}}^{3}}{T_{\text{M}}^{2}}\cdot\bar{\epsilon}_{\text{m}}, \tag{10}\] \[\bar{\epsilon}_{\text{m}}\triangleq 2+\frac{T_{\text{m}}}{T_{ \text{M}}}-\frac{2\varphi\left(t_{\delta}\right)}{{\varphi_{\mathsf{m},\delta }^{\prime}}^{2}}\cdot\varphi_{\mathsf{M},\delta}^{\prime\prime}\cdot\frac{2T_ {\text{M}}}{T_{\text{m}}}-\frac{{\varphi_{\mathsf{M},\delta}^{\prime}}^{2}}{{ \varphi_{m,\delta}^{\prime}}^{2}},\]
_if \(4T_{\text{M}}<L\), where \(t_{\delta}=-L+2T_{\text{M}},\varphi_{\mathsf{M},\delta}^{\prime}\triangleq \max_{t\in\mathbb{S}_{\delta}}\varphi^{\prime}(t),\varphi_{\mathsf{m},\delta }^{\prime}\triangleq\min_{t\in\mathbb{S}_{\delta}}\varphi^{\prime}(t),\varphi _{\mathsf{M},\delta}^{\prime\prime}\triangleq\max_{t\in\mathbb{S}_{\delta}} \varphi^{\prime\prime}(t)\), and \(\mathbb{S}_{\delta}\triangleq\left[-L+\frac{T_{\text{m}}}{2},-L+4T_{\text{M}}\right]\)._
Proof.: The proof is in Section VII.
We note that the monotonicity of \(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\) can be guaranteed via (10) if \(\bar{\epsilon}_{\text{m}}>0\). In this case, we can find \(\tau_{1}\) by solving (9). The next section tackles the case of multiple pulses.
Figure 2: The recovery of the \(k\)th pulse from the FRI input. The integrals \(I_{n_{k}+1}\) and \(I_{n_{k}+2}\), computed from the TEM output \(\left\{t_{n_{k}+i}\right\}_{i=0}^{3}\), contain all the information needed to uniquely identify \(\tau_{k}\) and \(a_{k}\).
### _The Case of Multiple Pulses_
The following theorem is our main result, which relaxes the existing assumptions on the filter that enable perfect FRI signal reconstruction from TEM samples.
**Theorem 1** (**FRI Input Recovery**).: _Let \(g(t)=\sum_{k=1}^{K}a_{k}\varphi(t-\tau_{k})\) be a FRI input satisfying \(\varepsilon_{a}<\left|a_{k}\right|<g_{\infty}\), \(\Delta\tau_{k}>\varepsilon_{\tau}\), \(\operatorname{supp}(\varphi)\subseteq\left[-L,\infty\right)\), \(\|\varphi\|_{\infty}=1,|g(t)|\leqslant g_{\infty}\). Furthermore, assume that \(\varphi\in C^{2}\left(-L,0\right)\cap C\left(\mathbb{R}\right)\) and \(\varphi^{\prime}(t)>0,t\in\left(-L,0\right)\). Let \(\left\{t_{n}\right\}_{n=1}^{N}\) be the output samples of a TEM with input \(g(t)\), such that \(4T_{\text{M}}<\varepsilon_{\tau}\) and \(4T_{\text{M}}<L\). Then \(\left\{\tau_{k},a_{k}\right\}_{k=1}^{K}\) can be perfectly recovered from \(\left\{t_{n}\right\}_{n=1}^{N}\) if_
\[2+\frac{T_{\text{m}}}{T_{\text{M}}}-\frac{2\varphi\left(t_{\delta}\right)}{{\varphi^{\prime}_{\text{m},\delta}}^{2}}\cdot\varphi^{\prime\prime}_{\text{M},\delta}\cdot\frac{2T_{\text{M}}}{T_{\text{m}}}-\frac{{\varphi^{\prime}_{\text{M},\delta}}^{2}}{{\varphi^{\prime}_{\text{m},\delta}}^{2}}>0, \tag{11}\]
_where \(t_{\delta}=-L+2T_{\text{M}}\), \(\varphi^{\prime}_{\text{M},\delta}=\max_{t\in\mathbb{S}_{\delta}}\varphi^{\prime}(t)\), \(\varphi^{\prime}_{\text{m},\delta}=\min_{t\in\mathbb{S}_{\delta}}\varphi^{\prime}(t)\), \(\varphi^{\prime\prime}_{\text{M},\delta}\triangleq\max_{t\in\mathbb{S}_{\delta}}\varphi^{\prime\prime}(t)\), and \(\mathbb{S}_{\delta}=\left[-L+T_{\text{m}}/2,-L+4T_{\text{M}}\right]\)._
Proof.: To compute \(n_{1}\) as defined in (7), we note that \(\mathcal{L}_{n}g=0,n<n_{1}-1\) and \(\mathcal{L}_{n_{1}-1}g\neq 0\). Using (2), we get that
\[n_{1}=\min_{n\in\left\{1,\ldots,N\right\}}\left\{n\ \big{|}\ \left|\mathcal{L}_{n-1}g\right|>0\right\}. \tag{12}\]
Via the separation property \(\varepsilon_{\tau}>4T_{\text{M}}\), it follows that \(t_{n}\not\in\operatorname{supp}\left[\varphi(-\tau_{i})\right]\), for \(n\in\left\{n_{1}+1,\ldots,n_{1}+3\right\}\) and \(i\in\left\{2,\ldots,K\right\}\). Therefore pulses \(k=2,3\ldots,K\) have no effect on \(\left\{t_{n_{1}+1},\ldots,t_{n_{1}+3}\right\}\). Using Lemma 1 via condition \(4T_{\text{M}}<L\), (11) implies that \(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\) is strictly increasing, and therefore \(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}=\frac{\mathcal{L}_{n_{1}+2}g}{ \mathcal{L}_{n_{1}+1}g}\) has a unique solution \(\tau_{1}\). This can be computed via line search to arbitrary accuracy.
The amplitude of the first pulse then satisfies \(a_{1}=\frac{\mathcal{L}_{n_{1}+1}g}{I_{n_{1}+1}(\tau_{1})}\). For the next pulse, we remove the contribution of the first one from the measurements via
\[\mathcal{L}_{n+1}^{2}g\triangleq\mathcal{L}_{n+1}g-\int_{t_{n+1}}^{t_{n+2}}a_ {1}\varphi\left(t-\tau_{1}\right)dt,\quad 1\leqslant n\leqslant N-1.\]
The process continues recursively for \(k=2,\ldots,K\), such that
\[\begin{split} n_{k}=\min_{n\in\left\{1,\ldots,N\right\}}\left\{n\ \big{|}\ \left|\mathcal{L}_{n-1}^{k}g\right|>0\right\},\\ \frac{I_{n_{k}+2}(\tau_{k})}{I_{n_{k}+1}(\tau_{k})}=\frac{\mathcal{L}_{n_{k}+2}^{k}g}{\mathcal{L}_{n_{k}+1}^{k}g},\quad a_{k}=\frac{\mathcal{L}_{n_{k}+1}^{k}g}{I_{n_{k}+1}(\tau_{k})}.\end{split} \tag{13}\]
First, we note that our separate conditions \(4T_{\text{M}}<\varepsilon_{\tau}\) and \(4T_{\text{M}}<L\) imply that \(\varepsilon_{\tau}\) and \(L\) are not interrelated as in [18]. In fact, in our case \(\varepsilon_{\tau}\) can be arbitrarily small for high enough sampling rates. Second, we note that (11) is both a sampling rate condition as well as a condition on \(\varphi(t)\). For example, if \(\varphi(t)\) is a first order B-spline, \(\varphi^{\prime}_{\text{m},\delta}=\varphi^{\prime}_{\text{M},\delta}\), \(\varphi^{\prime\prime}_{\text{M},\delta}=0\) and (11) reduces to \(\frac{T_{\text{m}}}{T_{\text{M}}}+1>0\), which is always true. To illustrate how condition (11) behaves when changing the sampling rate \(\delta\) and filter \(\varphi(t)\), we computed the left-hand side of (11) for the case of the B-spline of order \(1\), the main lobe of a sinc \(\varphi(t)=\operatorname{sinc}(\pi t)\cdot 1_{[-1,1]}(t)\) and an exponential spline [18]. The results, for \(100\) values of \(\delta\) uniformly spaced between \(10^{-3}\) and \(1\), are depicted in Fig. 3. As is the case for the results in the literature, it turns out that decreasing \(\delta\) (increasing the sampling density) is favourable towards input recovery even in this general scenario. In the following we will show rigorously that, under a mild additional assumption, the observations in Fig. 3 hold true in the general case.
**Corollary 1**.: _Let \(g(t)\) be a function satisfying all the conditions in Theorem 1 apart from (11). Under the additional assumption that \(\varphi^{\prime}_{+}(-L)\neq 0\), there exists a TEM threshold \(\delta>0\) such that \(g(t)\) is perfectly recovered from the corresponding TEM samples \(\left\{t_{n}\right\}_{n=1}^{N}\)._
Figure 3: Evaluating the recovery condition of Theorem 1 for particular cases of filters and a range of 100 TEM sampling rates. For arbitrarily small \(\delta\), all cases converge to the achievable recovery condition of a first order B-spline.
Proof.: When \(\delta\to 0\), we can simplify equations (10) as follows
\[\lim_{\delta\to 0}\frac{{\varphi_{\mathsf{M},\delta}^{\prime}}^{2}}{{\varphi_{\mathsf{m},\delta}^{\prime}}^{2}}=1,\quad\lim_{\delta\to 0}\frac{2\varphi\left(t_{\delta}\right)}{{\varphi_{\mathsf{m},\delta}^{\prime}}^{2}}=0,\quad\lim_{\delta\to 0}\frac{T_{\mathsf{m}}}{T_{\mathsf{M}}}=\frac{b-g_{\infty}}{b+g_{\infty}}, \tag{14}\]
which holds for both the ASDM and IF TEM. The second limit holds because \(\lim_{\delta\to 0}{\varphi_{\mathsf{m},\delta}^{\prime}}={\varphi_{+}^{\prime}}(-L)\neq 0\) and \(\varphi(t)=0,t<-L\). By using continuity on \(\mathbb{R}\) we get \(\varphi(-L)=0\). Thus, \(\lim_{\delta\to 0}\bar{\epsilon}_{\mathsf{m}}=1+\frac{T_{\mathsf{m}}}{T_{\mathsf{M}}}\), which is strictly positive. By writing \(\bar{\epsilon}_{\mathsf{m}}(\delta)\) explicitly as a function of \(\delta\), it follows that \(\exists\delta=\delta_{0}\) such that \(\bar{\epsilon}_{\mathsf{m}}(\delta_{0})>0\), leading to \(\left(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\right)^{\prime}>0\) via Lemma 1. This satisfies the conditions for the perfect recovery of \(g(t)\) in Theorem 1.
We note that condition \({\varphi_{+}^{\prime}}(-L)\neq 0\) is sufficient, but not necessary. In Section VI-A we show that recovery works even when the condition doesn't hold. We note that in practice, due to numerical errors, one would compute \(n_{k}\) in (13) via \(\left|\mathcal{L}_{n-1}^{k}g\right|>tol\), where \(tol\) is a tolerance set by the user. The proposed recovery is summarized in Algorithm 1, where \(\mathcal{L}_{n}g\) is computed via (2) for the ASDM and (3) for the IF.
**Remark 1**.: _The tolerance \(tol\) accounts for the effect of noise or numerical inaccuracies, which may lead to \(\left|\mathcal{L}_{n-1}^{k}g\right|>0\) even for \(n<n_{k}\). Additionally, we note that Algorithm 1 does not necessarily require \(\widehat{n}_{k}=n_{k}\) and works with any \(\widehat{n}_{k}\geqslant n_{k}\) such that \(\tau_{k}-L<t_{\widehat{n}_{k}+i}<\tau_{k}\), \(\forall i\in\{0,1,2,3\}\)._
```
Data: \(\{t_{n}\}_{n=1}^{N},\{\mathcal{L}_{n}g\}_{n=1}^{N-1},\varphi(t),tol\)
Result: \(\widehat{K},\{\widehat{\tau}_{k},\widehat{a}_{k}\}_{k=1}^{\widehat{K}},\widehat{g}(t)\).
1. Compute \(\mathcal{L}_{n}^{1}g=\mathcal{L}_{n}g\), for \(n\in\{1,\ldots,N\}\) and set \(k=1\).
2. While \(\exists n\in\{1,\ldots,N\}\) s.t. \(\left|\mathcal{L}_{n}^{k}g\right|>tol\)
   2a) Compute \(\widehat{n}_{k}=\min\limits_{n\in\{1,\ldots,N\}}\left\{n\ \big{|}\ \left|\mathcal{L}_{n-1}^{k}g\right|>tol\right\}\).
   2b) Compute \(I_{n}(\tau)=\int_{t_{n}}^{t_{n+1}}\varphi(t-\tau)dt\) for \(n\in\{\widehat{n}_{k}+1,\widehat{n}_{k}+2\}\) and \(\tau\in(t_{n-2}+L,t_{n-1}+L)\).
   2c) Find \(\widehat{\tau}_{k}\) from \(\frac{I_{\widehat{n}_{k}+2}(\tau)}{I_{\widehat{n}_{k}+1}(\tau)}=\frac{\mathcal{L}_{\widehat{n}_{k}+2}^{k}g}{\mathcal{L}_{\widehat{n}_{k}+1}^{k}g}\) via line search.
   2d) Compute \(\widehat{a}_{k}=\frac{\mathcal{L}_{\widehat{n}_{k}+1}^{k}g}{I_{\widehat{n}_{k}+1}(\widehat{\tau}_{k})}\).
   2e) Compute \(\mathcal{L}_{n}^{k+1}g\triangleq\mathcal{L}_{n}^{k}g-\int_{t_{n}}^{t_{n+1}}\widehat{a}_{k}\varphi\left(t-\widehat{\tau}_{k}\right)dt\), and set \(k=k+1\).
3. Compute \(\widehat{K}=k-1\).
4. Compute \(\widehat{g}(t)=\sum_{k=1}^{\widehat{K}}\widehat{a}_{k}\varphi(t-\widehat{\tau}_{k})\).
```
**Algorithm 1** Recovery Algorithm.
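A compact Python sketch of Algorithm 1 is given below, under the stated assumptions: the integrals \(I_{n}(\tau)\) are evaluated numerically and step 2c) is solved by a simple grid-based line search. It is a sketch rather than a reference implementation; the local integrals \(\mathcal{L}_{n}g\) are assumed to have already been computed from the TEM output via (2) or (3).

```python
import numpy as np

def recover_fri(t_n, Ln_g, phi, L, tol, n_grid=2000):
    """Sketch of Algorithm 1.  t_n: spike times (0-based array, t_n[i] = t_{i+1});
    Ln_g[i] = int_{t_n[i]}^{t_n[i+1]} g(s) ds, obtained from Eq. (2) or (3)."""

    def I(i, tau):                          # I_n(tau) over [t_n[i], t_n[i+1]]
        s = np.linspace(t_n[i], t_n[i + 1], 200)
        v = phi(s - tau)
        return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(s)))

    residual = np.array(Ln_g, dtype=float)  # plays the role of L^k_n g
    taus, amps = [], []
    for _ in range(len(residual)):          # safety cap on the number of pulses
        above = np.abs(residual) > tol
        if not above.any() or int(np.argmax(above)) + 3 >= len(residual):
            break
        j = int(np.argmax(above))           # step 2a): first interval above tolerance
        # step 2c): line (grid) search for tau over (t_{n_k-1} + L, t_{n_k} + L)
        tau_grid = np.linspace(t_n[j] + L, t_n[j + 1] + L, n_grid)
        mismatch = [abs(I(j + 3, tau) / I(j + 2, tau)
                        - residual[j + 3] / residual[j + 2]) for tau in tau_grid]
        tau_k = tau_grid[int(np.argmin(mismatch))]
        a_k = residual[j + 2] / I(j + 2, tau_k)                 # step 2d)
        # step 2e): remove this pulse's contribution from all remaining integrals
        residual -= a_k * np.array([I(i, tau_k) for i in range(len(residual))])
        taus.append(tau_k)
        amps.append(a_k)
    return np.array(taus), np.array(amps)
```

Combined with an IF TEM simulator (e.g., the sketch in Section II), the local integrals follow from Eq. (3) as `Ln_g = delta - b * np.diff(t_n)`.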
### _Sampling Density for the IF TEM with No Bias_
In this subsection we deal with the special case \(b=0\) for the IF TEM. Here, the lower bound \(\Delta t_{n}\geqslant T_{\mathsf{m}}=\frac{\delta}{b+g_{\infty}}=\frac{\delta}{g_{\infty}}\) still holds true, but the upper bound does not, as (4) assumes that \(g(t)+b\geqslant b-g_{\infty}>0\), which is no longer true. We note that this bound is only required in our proofs for \(\Delta t_{n_{k}+i},i\in\{0,1,2\}\). Assuming that \(\varphi(t)\) satisfies the conditions of Theorem 1, the following holds
\[\delta=\left|a_{k}\int_{t_{n_{k}+i}}^{t_{n_{k}+i+1}}\varphi(s-\tau_{k})ds \right|\geqslant\varepsilon_{a}\int_{t_{n_{k}+i}}^{t_{n_{k}+i+1}}\varphi(s-t_{n _{k}+i}-L)ds=\varepsilon_{a}\int_{-L}^{-L+\Delta t_{n_{k}+i}}\varphi(s)ds= \varepsilon_{a}F\left(-L+\Delta t_{n_{k}+i}\right), \tag{15}\]
where \(F(t)\triangleq\int_{-L}^{t}\varphi(s)ds\). Above we used that \(\varphi(t)>0\) and is increasing for \(t\in[-L,0]\), \(t_{n_{k}+i}>\tau_{k}-L\). It follows that \(F(t)\) is strictly increasing with a strictly increasing inverse \(F^{-1}\) for \(t\in[0,L]\). Therefore, (15) implies that
\[\Delta t_{n_{k}+i}\leqslant L+F^{-1}\left(\frac{\delta}{\varepsilon_{a}}\right) \triangleq T_{\mathsf{M}}, \tag{16}\]
where the new definition of \(T_{\mathsf{M}}\) only applies when \(b=0\), therefore reinforcing the theoretical guarantee in Theorem 1 in this special case. In the next section we analyse the effect of noise on the recovery guarantees.
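As a small numerical illustration of (16), \(F(t)\) can be tabulated on a grid and inverted by interpolation; the triangular filter and the values of \(\delta\) and \(\varepsilon_{a}\) below are illustrative assumptions.

```python
import numpy as np

def t_max_no_bias(phi, L, delta, eps_a, n=10000):
    """Eq. (16): T_M = L + F^{-1}(delta / eps_a) for the IF TEM with b = 0."""
    t = np.linspace(-L, 0.0, n)
    v = phi(t)
    F = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))
    target = delta / eps_a
    if target > F[-1]:
        raise ValueError("delta/eps_a exceeds F(0); pick a smaller threshold")
    return L + np.interp(target, F, t)      # F is increasing, so interp acts as F^{-1}

phi = lambda x: np.maximum(1.0 - np.abs(x), 0.0)         # illustrative filter with L = 1
print(t_max_no_bias(phi, L=1.0, delta=0.1, eps_a=0.5))   # upper bound on Delta t_{n_k+i}
```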
## V Robustness to Noise
Assume that the noise corrupted input \(\widetilde{g}(t)\) satisfies
\[\widetilde{g}(t)=\sum_{k=1}^{K}a_{k}\varphi(t-\tau_{k})+\eta(t),\quad t\in[0,t_{ \mathsf{M}}]\,, \tag{17}\]
where \(|\eta(t)|\leqslant\eta_{\infty}\) is drawn from the uniform distribution on \([-\eta_{\infty},\eta_{\infty}]\). Function \(\widetilde{g}(t)\) is input to a TEM that generates output samples \(\left\{t_{n}\right\}_{n=1}^{N}\). In this noisy case, we redefine \(T_{\text{m}}\triangleq\frac{2\delta}{b+g_{\infty}+\eta_{\infty}}\), \(T_{\text{M}}\triangleq\frac{2\delta}{b-g_{\infty}-\eta_{\infty}}\), which satisfies the old notation (4) for \(\eta_{\infty}=0\). Given that \(|\widetilde{g}(t)|<g_{\infty}+\eta_{\infty}\), it is shown similarly to [14] that \(T_{\text{m}}\leqslant\Delta t_{n}\leqslant T_{\text{M}}\). The problem proposed is to recover \(\left\{\tau_{k},a_{k}\right\}_{k=1}^{K}\) from the noisy TEM output.
The following theorem derives recovery guarantees in the case of one pulse \(\widetilde{g}(t)=a_{1}\varphi(t-\tau_{1})+\eta(t)\). This corresponds to one iteration of Step 2) in Algorithm 1.
**Theorem 2** (**Noisy Input Recovery**).: _Let \(\left\{\widetilde{t}_{n}\right\}_{n=1}^{N}\) be the TEM output in response to \(\widetilde{g}(t)=a_{1}\varphi(t-\tau_{1})+\eta(t)\), where \(|\eta(t)|<\eta_{\infty}\), such that \(\eta_{\delta}\triangleq\eta_{\infty}T_{\text{M}}<\varphi_{\text{m},\delta}^{\prime}\cdot\varepsilon_{a}T_{\text{m}}\). Furthermore, let \(\bar{\epsilon}_{\text{m}}\) be defined as in (10) and assume that (11) is true. Then \(\widehat{\tau}_{1}\) computed via Algorithm 1 satisfies_
\[|\tau_{1}-\widehat{\tau}_{1}|\leqslant e_{\tau}\triangleq\frac{2g_{\infty} \eta_{\delta}\bar{\epsilon}_{\text{m}}}{\left(\varepsilon_{a}T_{\text{m}} \varphi_{\text{m},\delta}^{\prime}-\eta_{\delta}\right)^{2}}\frac{2\varphi_{ \text{M},\delta}^{\prime}}{\varphi_{\text{m},\delta}^{\prime}}\frac{T_{ \text{M}}^{3}}{T_{\text{m}}^{3}}. \tag{18}\]
_Assuming \(e_{\tau}<T_{\text{m}}/2\), then \(|a_{1}-\widehat{a}_{1}|\leqslant e_{a}\triangleq\frac{T_{\text{M}}}{T_{\text {m}}^{3}}\frac{2e_{\tau}g_{\infty}+\eta_{\delta}}{\varphi_{\text{m},\delta}^{ \prime}}\)._
Proof.: The proof is in Section VII.
We extend the result in Theorem 2 recursively for \(K\) pulses. Specifically, we assume that Step 2) of Algorithm 1 was computed \(K-1\) times, for \(K\geqslant 2\), and that \(\left\{e_{\tau_{k}},e_{a_{k}}\right\}_{k=1}^{K-1}\) are known, and we derive \(e_{\tau_{K}}\) and \(e_{a_{K}}\). In a noiseless scenario, \(a_{K}\) and \(\tau_{K}\) could be perfectly recovered from local integrals \(\mathcal{L}_{n}a_{K}\varphi(\cdot-\tau_{K})\). Thus, we first estimate a noise bound for the local integral of the \(K\)th pulse \(\eta_{\delta,K}\) satisfying \(\left|\mathcal{L}_{n}\left[a_{K}\varphi(\cdot-\tau_{K})\right]-\mathcal{L}_{n}^{K}\widetilde{g}\right|<\eta_{\delta,K}\). Then \(e_{\tau_{K}}\) and \(e_{a_{K}}\) can be derived via (35-41), where \(\eta_{\delta}\) is substituted with \(\eta_{\delta,K}\). Using Algorithm 1 step 2),
\[\mathcal{L}_{n}^{K}\widetilde{g}=\left(\mathcal{L}_{n}g+\mathcal{L}_{n}\eta \right)-\sum_{k=1}^{K-1}\mathcal{L}_{n}\left[\widehat{a}_{k}\varphi(\cdot- \widehat{\tau}_{k})\right].\]
When computing \(\left|\mathcal{L}_{n}a_{K}\varphi(\cdot-\tau_{k})-\mathcal{L}_{n}^{K} \widetilde{g}\right|\) we get
\[\left|\sum_{k=1}^{K-1}\mathcal{L}_{n}\left[a_{k}\varphi(\cdot-\tau_{k})- \widehat{a}_{k}\varphi(\cdot-\widehat{\tau}_{k})\right]+\mathcal{L}_{n}\eta \right|\leqslant\sum_{k=1}^{K-1}\mathcal{L}_{n}\left|a_{k}\varphi(\cdot-\tau_ {k})-\widehat{a}_{k}\varphi(\cdot-\widehat{\tau}_{k})\right|+\eta_{\delta} \tag{19}\]
We bound the absolute value in (19) as
\[\begin{split}|a_{k}\varphi(t-\tau_{k})-\widehat{a}_{k}\varphi(t- \widehat{\tau}_{k})|&=|a_{k}\left[\varphi(t-\tau_{k})-\varphi(t- \widehat{\tau}_{k})\right]+\left(a_{k}-\widehat{a}_{k}\right)\varphi(t- \widehat{\tau}_{k})|\\ &\leqslant|a_{k}|\,\varphi_{\text{M}}^{\prime}e_{\tau_{k}}+e_{a_{k} }\leqslant g_{\infty}\varphi_{\text{M}}^{\prime}e_{\tau_{k}}+e_{a_{k}}, \end{split} \tag{20}\]
where \(\varphi_{\text{M}}^{\prime}\triangleq\max\left\{\|\varphi_{-}^{\prime}\|_{ \infty},\|\varphi_{+}^{\prime}\|_{\infty}\right\}\) and \(\varphi_{-}^{\prime}(t),\varphi_{+}^{\prime}(t)\) are the left and right derivatives, respectively. Using (20) in (19),
\[\left|\mathcal{L}_{n}a_{K}\varphi(\cdot-\tau_{k})-\mathcal{L}_{n}^{K} \widetilde{g}\right|\leqslant\eta_{\delta,K},\quad\eta_{\delta,K}\triangleq T _{\text{M}}\sum_{k=1}^{K-1}\left(g_{\infty}\varphi_{\text{M}}^{\prime}e_{\tau_ {k}}+e_{a_{k}}\right)+\eta_{\delta}. \tag{21}\]
By repeating steps (35-41) for \(\eta_{\delta,K}\), we get that
\[e_{\tau_{K}}=\frac{2g_{\infty}\eta_{\delta}\bar{\epsilon}_{\text{m}}}{\left( \varepsilon_{a}T_{\text{m}}\varphi_{\text{m},\delta}^{\prime}-\eta_{\delta,K} \right)^{2}}\frac{2\varphi_{\text{M},\delta}^{\prime}}{\varphi_{\text{m},\delta}^ {\prime}}\frac{T_{\text{M}}^{3}}{T_{\text{m}}^{3}}. \tag{22}\]
Furthermore, assuming \(e_{\tau_{K}}<T_{\text{m}}/2\), \(e_{a_{K}}=\frac{T_{\text{M}}}{T_{\text{m}}^{2}}\frac{2e_{\tau}g_{\infty}+\eta_{ \delta,K}}{\varphi_{\text{m},\delta}^{\prime}}\). Then the estimation errors \(\left\{e_{\tau_{k}},e_{a_{k}}\right\}_{k=1}^{K}\) for Algorithm 1 can be computed recursively as above via \(\eta_{\delta,1}=\eta_{\delta}\).
## VI Numerical and Hardware Experiments
Here we test our recovery approach for a wide selection of FRI filters including filters previously used in the literature and new synthetically generated filters. We consider the case of a simulated ASDM and also an analog hardware implementation. Furthermore, to allow a comparison with the existing methods, we show examples using sampling setups proposed in the literature [18, 19]. We evaluate the recovery error of the FRI parameters using \(\mathsf{Err}_{\tau}\) and \(\mathsf{Err}_{a}\), defined as
\[\mathsf{Err}_{\tau}=\tfrac{1}{K}\sum_{k=1}^{K}100\cdot\frac{|\widehat{\tau}_ {k}-\tau_{k}|}{|\tau_{k}|},\quad\mathsf{Err}_{a}=\tfrac{1}{K}\sum_{k=1}^{K}100 \cdot\frac{|\widehat{a}_{k}-a_{k}|}{|a_{k}|}. \tag{23}\]
Furthermore, we evaluate the recovery error for \(g(t)\) as \(\mathsf{Err}_{g}=100\cdot\frac{\|g-\widehat{g}\|_{2}}{\|g\|_{2}}\)\((\%)\). This section is organised as follows. Sections VI-A and VI-B present examples with a B-spline filter and a randomly generated filter, respectively. Section VI-C shows recovery examples with an E-spline filter and a hyperbolic secant filter. Finally, Section VI-D presents a recovery example for a hardware implementation of an ASDM.
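The error measures of Eq. (23) amount to the following short helper, assuming the recovered parameters have already been matched index-by-index with the ground truth and that \(g\) and \(\widehat{g}\) are sampled on a common time grid.

```python
import numpy as np

def fri_errors(tau, tau_hat, a, a_hat, g, g_hat):
    """Percentage errors Err_tau, Err_a and Err_g of Eq. (23)."""
    err_tau = 100.0 * np.mean(np.abs(np.asarray(tau_hat) - tau) / np.abs(tau))
    err_a = 100.0 * np.mean(np.abs(np.asarray(a_hat) - a) / np.abs(a))
    err_g = 100.0 * np.linalg.norm(np.asarray(g) - g_hat) / np.linalg.norm(g)
    return err_tau, err_a, err_g
```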
### _FRI Input Recovery with a B-spline Filter_
We first evaluate Algorithm 1 for a filter \(\varphi_{b}\) representing a B-Spline of order \(3\) scaled such that it is supported in \([-1,1]\) and has amplitude \(1\). We note that \(\varphi_{b}^{\prime}(-L)=0\), and therefore the condition in Corollary 1 is not true. Even so, we demonstrate numerically that recovery works in this case. Signal \(g(t)\) was generated for \(\tau_{1}=1.15,\tau_{2}=2.52,\tau_{3}=4.74,\tau_{4}=5.81,a_{1}=3.22,a_{2}=-2.34,a _{3}=2.87,a_{4}=3.54\). Signal \(g(t)\) was sampled with an ASDM with parameters \(b=12,\delta=1,g_{0}=0\). The input Diracs, signal \(g(t)\), TEM output along with the reconstructed signals via Algorithm 1 with \(tol=0.05\) are depicted in Fig. 4(a). The resulting errors are \(\mathsf{Err}_{\tau}=0.098\%\), \(\mathsf{Err}_{a}=0.39\%\), and \(\mathsf{Err}_{g}=0.86\%\).
### _FRI Recovery with a Random Filter_
To demonstrate the generalization enabled by the proposed algorithm, we considered the case of a randomly generated filter \(\varphi_{r}(t)\), which was not validated numerically or demonstrated theoretically in the existing literature. The random filter consists of a random increasing function followed by a random decreasing function. To generate the first function, we convolved a uniform random noise sequence with a B-spline of degree \(9\). We subtracted the minimum to make it strictly positive, then integrated it and scaled it to be in the interval \([0,1]\). The second function was generated in the same way, only here we subtracted the maximum to make the final function strictly decreasing. The resulting filter \(\varphi_{r}(t)\) was used to generate an input \(g(t)\) for \(\tau_{1}=1.21,\tau_{2}=2.26,\tau_{3}=4.65,a_{1}=3.22,a_{2}=-2.33,a_{3}=-2.87\). Signal \(g(t)\) was input to an ASDM with parameters \(\delta=0.5,b=12,g_{0}=0\). The input Diracs, TEM input, TEM output, the recovery of the input Diracs and of the TEM input via Algorithm 1 with \(tol=0.006\) are depicted in Fig. 4(b). We note that the sampling rate is higher compared to Fig. 4(a), mainly due to using a filter with an irregular shape. However, as shown in Corollary 1 recovery is still possible for \(\delta\) small enough. The corresponding recovery errors are \(\mathsf{Err}_{\tau}=0.12\%\), \(\mathsf{Err}_{a}=1.33\%\), and \(\mathsf{Err}_{g}=1.29\%\).
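A sketch of the random-filter construction described above is given below; the grid length, the width of the discrete smoothing kernel and the random seed are not specified in the text and are chosen here purely for illustration.

```python
import numpy as np

def bspline_kernel(degree, width):
    """Discrete B-spline of the given degree: (degree+1)-fold convolution of a box."""
    box = np.ones(width) / width
    k = box
    for _ in range(degree):
        k = np.convolve(k, box)
    return k

def random_fri_filter(n_half=500, degree=9, width=25, seed=0):
    """Random filter as in Sec. VI-B: a random increasing half followed by a decreasing one."""
    rng = np.random.default_rng(seed)
    kern = bspline_kernel(degree, width)

    def smooth_monotone(increasing):
        x = np.convolve(rng.uniform(size=n_half), kern, mode="same")  # smoothed noise
        x = x - x.min() if increasing else x - x.max()                # nonnegative / nonpositive slope
        y = np.cumsum(x)                                              # integrate
        return (y - y.min()) / (y.max() - y.min())                    # scale to [0, 1]

    rise = smooth_monotone(True)               # increases from 0 to 1
    fall = smooth_monotone(False)              # decreases from 1 to 0
    return np.concatenate([rise, fall[1:]])    # samples of phi_r over its support

phi_r = random_fri_filter()
print(phi_r.min(), phi_r.max(), np.argmax(phi_r))   # peak sits at the junction of the halves
```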
### _Comparison with Existing Methods_
Here we compare Algorithm 1 with the method in [18], for the case of an E-spline filter, and also with the method in [19] for the case of a squared hyperbolic secant filter. We implement Algorithm 1 based on the IF TEM model to allow a comparison with the results in the literature. As the method in [18] is restricted to a specific class of filters, we first selected a second order E-spline for both methods, defined as
\[\varphi_{e}(t)=\left\{\begin{array}{cc}\frac{e^{\alpha_{1}-\alpha_{0}}}{\alpha_{1}-\alpha_{0}}e^{-\alpha_{0}t}+\frac{e^{-\alpha_{1}+\alpha_{0}}}{\alpha_{0}-\alpha_{1}}e^{-\alpha_{1}t},&-L\leqslant t\leqslant-L/2\\ \frac{1}{\alpha_{0}-\alpha_{1}}e^{-\alpha_{0}t}+\frac{1}{\alpha_{1}-\alpha_{0}}e^{-\alpha_{1}t},&-L/2\leqslant t\leqslant 0,\\ 0,&\mathrm{otherwise},\end{array}\right.\]
where \(\alpha_{0}=0-j1.047,\alpha_{1}=0+j1.047\). The TEM input is
\[g(t)=\sum_{k=1}^{4}a_{k}\varphi_{e}(t-\tau_{k})+\eta(t), \tag{24}\]
where \(\tau_{1}=1,\tau_{2}=4,\tau_{3}=7,\tau_{4}=9.5\), \(a_{1}=4.36,a_{2}=4.04,a_{3}=4.92,\) and \(a_{4}=5.45\). Furthermore, \(\eta(t)\) is uniform noise bounded by \(|\eta(t)|\leqslant\eta_{\infty}=0.05\).
The output of the filter is encoded with an IF model with parameters \(\delta=0.5,b=0\). The E-spline, IF encoding and the Prony based recovery in [18] were implemented using publicly available software [34]. The signal \(g(t)\), the IF output samples \(\{t_{n}\}\) and reconstructions with Algorithm 1 using \(tol=0.05\) and the Prony based method in [18] are depicted in Fig. 5(a).
Figure 4: Evaluating Algorithm 1 with a B-spline filter and a randomly generated filter. This result demonstrates the applicability of the proposed method for a wider class of filters than previously possible.
We computed \(\mathsf{Err}_{g}\) and averaged it over \(100\) different noise signals \(\eta(t)\). This resulted in \(0.47\%\) for the Prony based recovery and \(0.43\%\) for the proposed method. The Prony based recovery is not guaranteed to work for overlapping pulses. We adjust the pulses to new time locations \(\tau_{k}^{*}=0.4\cdot\tau_{k}\). The results, depicted in Fig. 5(a2), show that the proposed method is able to handle a significant amount of overlapping. This is primarily due to step 2e) in Algorithm 1, which removes the contribution of each identified pulse to future TEM samples.
We further compared the proposed method with the method in [19], which is based on the assumption that the filter is constructed with the hyperbolic secant function defined as \(\mathrm{sech}(t)=\frac{2e^{t}}{1+e^{2t}}\). Here we consider the case where \(\varphi(t)=\mathrm{sech}^{2}(t)\), as presented in [19]. We generated the TEM input as \(g(t)=\sum_{k=1}^{2}a_{k}\mathrm{sech}^{2}(t-\tau_{k})\) where \(\tau_{1}=-3,\tau_{2}=2,a_{1}=7.44,a_{2}=4.8\). The TEM used is an IF model with \(\delta=1,b=0\). The signal \(g(t)\), the IF output samples \(\{t_{n}\}\) and reconstructions with Algorithm 1 with \(tol=0.01\) and the method in [19] are depicted in Fig. 5(b1). The resulted errors are \(\mathsf{Err}_{\tau}=0.03\%\), \(\mathsf{Err}_{a}=0.005\%\), and \(\mathsf{Err}_{g}=0.13\%\) for the method in [19] and \(\mathsf{Err}_{\tau}=0.04\%\), \(\mathsf{Err}_{a}=0.14\%\), and \(\mathsf{Err}_{g}=0.17\%\) for Algorithm 1. To exploit the flexibility of the proposed method, we repeated the experiment with the same parameters by changing the filter to \(\varphi(t)=\mathrm{sech}^{3}(t)\) and \(\varphi(t)=\mathrm{sech}^{9}(t)\). The method in [19] is not compatible with these filters and leads to unstable reconstructions. We note that these filters also don't satisfy the conditions of Theorem 1, as they are supported on the real axis. Interestingly, Algorithm 1 still performs well with errors \(\mathsf{Err}_{g}=0.2\%\) and \(\mathsf{Err}_{g}=0.12\%\), respectively. The results are illustrated in Fig. 5(b2).
### _Hardware Experiment_
We validate the proposed recovery method using a hardware implementation of the acquisition pipeline in Fig. 1 as follows. The FRI input signal was generated on a PC and fed to the circuit via the audio channel. The input was subsequently amplified prior to being injected into the ASDM hardware, depicted in Fig. 6(a).
We generated an input \(g(t)\) using \(\varphi(t)=\frac{\sin(\Omega t)}{\Omega t}\cdot 1_{[-\frac{\pi}{\Omega},\frac{\pi}{\Omega}]}(t),\) where \(\Omega=\frac{\pi}{280}~{}\mathrm{Mrad/s}\), representing the windowed main lobe of a sinc function. The input satisfies \(g(t)=\sum_{k=1}^{3}a_{k}\varphi(t-\tau_{k}),t\in[0,6.2~{}\mathrm{ms}]\), where \(\tau_{1}=1.807~{}\mathrm{ms}\), \(\tau_{2}=2.706~{}\mathrm{ms}\), \(\tau_{3}=3.648~{}\mathrm{ms}\), \(a_{1}=4.39\), \(a_{2}=6.62\), \(a_{3}=5.48\). We note that, although using the \(\mathrm{sinc}\) function, signal \(g(t)\) is not bandlimited
Figure 5: (a) Comparative recovery with the proposed method and the Prony-based method in [18]. (a1) The noisy FRI input is based on an E-spline of order 1. With no overlaps, both methods recover the TEM input correctly, (a2) The pulses overlap, and thus the conditions in [18] are not satisfied and the recovery is unstable. (b) Comparative recovery with the method in [19], (b1) The input generated via \(\varphi(t)=\mathrm{sech}^{2}(t)\) is recovered with Algorithm 1 and the method in [19], (b2) The filters are \(\varphi(t)=\mathrm{sech}^{3}(t)\) and \(\varphi(t)=\mathrm{sech}^{9}(t)\), which don’t satisfy the conditions in [19], leading to unstable reconstructions.
Figure 6: (a) ASDM Hardware. (b) Oscilloscope screenshot depicting the ASDM input \(g(t)\) via channel 1 (yellow) and ASDM output \(z(t)\) via channel 2 (green). (c) FRI Input Recovery with Algorithm 1.
due to windowing. The ASDM responded to \(g(t)\) with output signal \(z(t)\). We extracted the output switching times \(\left\{t_{n}\right\}_{n=1}^{417}\) by computing the zero crossings of \(z(t)\).
When the TEM is simulated, as in Sections VI-A–VI-C, its parameters are known _a priori_. In the case of a hardware experiment, the parameters need to be identified from the data [27]. To increase the precision of the measurements we use one in every \(6\) ASDM samples, denoted as \(\overline{t}_{n}\triangleq t_{6n+1}\). Furthermore, we compute \(\overline{\mathcal{L}}_{n}g\triangleq\int_{\overline{t}_{n}}^{\overline{t}_{n+1}}g(s)ds=\sum_{m=1}^{6}\mathcal{L}_{6n+m}g\). Using this notation, we derive the following from (2)
\[\overline{\mathcal{L}}_{n}g=\sum_{m=1}^{6}\left(-1\right)^{6n+m}\left[2\delta -b\Delta t_{6n+m}\right]-g_{0}\Delta t_{6n+m}=c_{1}\sum_{m_{1}=0}^{2}\Delta t _{6n+2m_{1}+1}-c_{2}\sum_{m_{1}=0}^{2}\Delta t_{6n+2m_{1}+2},\]
where \(c_{1}=b-g_{0}\), \(c_{2}=b+g_{0}\) represent hardware parameters. We identify \(\widehat{c}_{1}=7.5415\) and \(\widehat{c}_{2}=-1.7565\) above via least squares using \(\left\{\overline{\mathcal{L}}_{n}g\right\}_{41}^{53}\), which represent input integrals over the support of the last pulse centered at \(\tau_{3}\). Subsequently, we use \(\widehat{c}_{1}\) and \(\widehat{c}_{2}\) to compute \(\left\{\overline{\mathcal{L}}_{n}g\right\}_{1}^{69}\), which covers the whole support of \(g(t)\). To compensate for nonidealities, for each pulse \(k\), we compute \(\widehat{n}_{k}\) as in Algorithm 1 from \(\left\{\overline{\mathcal{L}}_{n}g\right\}_{1}^{69}\) using \(tol=130\), and subsequently run the algorithm again by replacing step 2a) with \(\widehat{n}_{k}^{*}=\widehat{n}_{k}+1\). Each pair of resulting values for \(\widehat{\tau}_{k}\) and \(\widehat{a}_{k}\) is averaged to compute the final FRI parameters. The FRI signal \(\widehat{g}(t)\) and parameters \(\left\{\widehat{\tau}_{k},\widehat{a}_{k}\right\}_{k=1}^{3}\) are depicted in Fig. 6(c). The corresponding recovery errors are \(\mathsf{Err}_{\tau}=0.19\%\), \(\mathsf{Err}_{a}=1.08\%\), and \(\mathsf{Err}_{g}=3.15\%\).
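The least-squares identification step can be sketched as follows; the synthetic switching times and the true values of \(c_{1},c_{2}\) in the demo are placeholders used only to check that the fit recovers them, not the hardware data of this section.

```python
import numpy as np

def identify_c1_c2(t_all, Lbar_ref, n_range):
    """Least-squares fit of c1 = b - g0 and c2 = b + g0 from
    Lbar_n g = c1 * sum_m dt_{6n+2m+1} - c2 * sum_m dt_{6n+2m+2}, m = 0, 1, 2."""
    dt = np.diff(t_all)                       # dt[i] corresponds to Delta t_{i+1} (0-based arrays)
    A = np.zeros((len(n_range), 2))
    for row, n in enumerate(n_range):
        A[row, 0] = sum(dt[6 * n + 2 * m] for m in range(3))         # odd-indexed gaps
        A[row, 1] = -sum(dt[6 * n + 2 * m + 1] for m in range(3))    # even-indexed gaps
    sol, *_ = np.linalg.lstsq(A, np.asarray(Lbar_ref), rcond=None)
    return sol[0], sol[1]

# synthetic self-check with made-up switching times and known c1, c2
rng = np.random.default_rng(1)
t_all = np.cumsum(rng.uniform(0.05, 0.15, size=200))
dt = np.diff(t_all)
c1_true, c2_true = 7.5, -1.8
n_range = range(20)
Lbar = [c1_true * sum(dt[6 * n + 2 * m] for m in range(3))
        - c2_true * sum(dt[6 * n + 2 * m + 1] for m in range(3)) for n in n_range]
print(identify_c1_c2(t_all, Lbar, n_range))   # ~ (7.5, -1.8)
```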
## VII Proofs
Proof for Lemma 1.: The filter satisfies \(\varphi(t)=0,t<-L\) and, given that \(\varphi^{\prime}(t)>0,t\in(-L,0)\), it follows that \(\varphi(t)>0,t\in(-L,0)\) and thus \(\varphi(t-\tau)>0,t\in(-L+\tau,\tau)\). Given that \(4T_{\mathsf{M}}<L\), we get (8), and therefore \(I_{n_{1}+1}(\tau)>0\) and thus \(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\) is well-defined. Moreover, the ratio is the composition of differentiable functions, therefore it is itself differentiable.
For simplicity, let \(f(t)\triangleq\varphi(t-\tau),t\in[\tau-L,\tau],f_{l}=f\left(t_{l}\right),l\in \left\{n_{1}+1,n_{1}+2,n_{1}+3\right\}\). The following holds
\[I_{n_{1}+i}^{\prime}(\tau)=\varphi\left(t_{n_{1}+i}-\tau\right)-\varphi\left( t_{n_{1}+i+1}-\tau\right)=-\Delta f_{n_{1}+i},\]
for \(i\in\left\{1,2\right\}\). Using the above, the following holds
\[\left(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}\right)^{\prime}=\frac{I_{n_ {1}+2}^{\prime}(\tau)I_{n_{1}+1}(\tau)-I_{n_{1}+2}(\tau)I_{n_{1}+1}^{\prime}( \tau)}{I_{n_{1}+1}^{2}(\tau)}=\frac{\Delta f_{n_{1}+1}I_{n_{1}+2}(\tau)-\Delta f _{n_{1}+2}I_{n_{1}+1}(\tau)}{I_{n_{1}+1}^{2}(\tau)}\triangleq\epsilon. \tag{25}\]
The final objective is to show that \(\epsilon\) is positive. We proceed by providing subsequent lower bounds for \(\epsilon\) as follows. By rearranging the terms in (25) and using that \(I_{n}(\tau)=\mathcal{L}_{n}f\),
\[\frac{\epsilon\cdot I_{n_{1}+1}^{2}(\tau)}{\Delta f_{n_{1}+1}\Delta f_{n_{1}+2 }}=\frac{\mathcal{L}_{n_{1}+2}f}{\Delta f_{n_{1}+2}}-\frac{\mathcal{L}_{n_{1}+ 1}f}{\Delta f_{n_{1}+1}}. \tag{26}\]
Function \(f(t)\) is positive, differentiable and strictly increasing just as \(\varphi(t)\). We next expand \(f(t)\) in Taylor series with anchor points \(t_{n_{1}+1}\) and \(t_{n_{1}+2}\), respectively,
\[f(t) =f_{n_{1}+1}+f^{\prime}\left(\xi_{n_{1}+1}\right)(t-t_{n_{1}+1}) \leqslant f_{n_{1}+1}+f_{\mathsf{M}}^{\prime}(t-t_{n_{1}+1}),\] \[f(s) =f_{n_{1}+2}+f^{\prime}\left(\xi_{n_{1}+2}\right)(s-t_{n_{1}+2}) \geqslant f_{n_{1}+2}+f_{\mathsf{m}}^{\prime}(s-t_{n_{1}+2}),\]
for \(t\in[t_{n_{1}+1},t_{n_{1}+2}]\) and \(s\in[t_{n_{1}+2},t_{n_{1}+3}]\), such that
\[t_{n_{1}+1}\leqslant\xi_{n_{1}+1}\leqslant t\leqslant t_{n_{1}+2}, \quad f_{\mathsf{M}}^{\prime}\triangleq\max_{t\in[t_{n_{1}+1},t_{n_{1}+3}]}f^{ \prime}(t)\] \[t_{n_{1}+2}\leqslant\xi_{n_{1}+2}\leqslant s\leqslant t_{n_{1}+3}, \quad f_{\mathsf{m}}^{\prime}\triangleq\min_{t\in[t_{n_{1}+1},t_{n_{1}+3}]}f^{ \prime}(t).\]
We can thus bound the local integrals of \(f(t)\) as
\[\begin{split}\mathcal{L}_{n_{1}+1}f\leqslant f_{n_{1}+1}\cdot \Delta t_{n_{1}+1}+f_{\mathsf{M}}^{\prime}\frac{\Delta t_{n_{1}+1}^{2}}{2},\\ \mathcal{L}_{n_{1}+2}f\geqslant f_{n_{1}+2}\cdot\Delta t_{n_{1}+2}+f _{\mathsf{m}}^{\prime}\frac{\Delta t_{n_{1}+2}^{2}}{2}.\end{split} \tag{27}\]
By combining (26) and (27) we get
\[\frac{\epsilon\cdot I_{n_{1}+1}^{2}(\tau)}{\Delta f_{n_{1}+1}\Delta f_{n_{1}+2}}>f_{n_{1}+2}\frac{\Delta t_{n_{1}+2}}{\Delta f_{n_{1}+2}}+\frac{f_{\mathsf{m}}^{\prime}}{2}\frac{\Delta t_{n_{1}+2}^{2}}{\Delta f_{n_{1}+2}}-\left(f_{n_{1}+1}\frac{\Delta t_{n_{1}+1}}{\Delta f_{n_{1}+1}}+\frac{f_{\mathsf{M}}^{\prime}}{2}\frac{\Delta t_{n_{1}+1}^{2}}{\Delta f_{n_{1}+1}}\right). \tag{28}\]
We then use that \(f_{n_{1}+1},f_{n_{1}+2}>0\), and thus
\[\begin{split}& f_{n_{1}+1}\frac{\Delta t_{n_{1}+1}}{\Delta f_{n_{1}+1} }+\frac{f^{\prime}_{\text{M}}}{2}\frac{\Delta t^{2}_{n_{1}+1}}{\Delta f_{n_{1}+ 1}}\leqslant\frac{f_{n_{1}+1}}{f^{\prime}_{\text{m}}}+\frac{f^{\prime}_{\text{ M}}\Delta t_{n_{1}+1}}{2f^{\prime}_{\text{m}}},\\ & f_{n_{1}+2}\frac{\Delta t_{n_{1}+2}}{\Delta f_{n_{1}+2}}+\frac{ f^{\prime}_{\text{m}}}{2}\frac{\Delta t^{2}_{n_{1}+2}}{\Delta f_{n_{1}+2}} \geqslant\frac{f_{n_{1}+2}}{f^{\prime}_{\text{M}}}+\frac{f^{\prime}_{\text{ m}}\Delta t_{n_{1}+2}}{2f^{\prime}_{\text{M}}}.\end{split} \tag{29}\]
As before, we plug (29) into (28) and get
\[\frac{\bar{\epsilon}\Delta t_{n_{1}+1}f^{\prime}_{\text{m}}}{2f^{\prime}_{ \text{M}}}>\frac{f_{n_{1}+2}}{f^{\prime}_{\text{M}}}+\frac{f^{\prime}_{\text{ m}}\Delta t_{n_{1}+2}}{2f^{\prime}_{\text{M}}}-\left(\frac{f_{n_{1}+1}}{f^{ \prime}_{\text{m}}}+\frac{f^{\prime}_{\text{M}}\Delta t_{n_{1}+1}}{2f^{\prime }_{\text{m}}}\right),\]
where \(\bar{\epsilon}\triangleq\frac{\epsilon\cdot I^{2}_{n_{1}+1}(\tau)}{\Delta f_{n _{1}+1}\Delta f_{n_{1}+2}\Delta t_{n_{1}+1}}\cdot\frac{2f^{\prime}_{\text{M}}}{ f^{\prime}_{\text{m}}}\). We rearrange such that
\[\begin{split}\bar{\epsilon}>\frac{\Delta t_{n_{1}+2}}{\Delta t_{n _{1}+1}}+\frac{2f_{n_{1}+2}}{f^{\prime}_{\text{m}}\Delta t_{n_{1}+1}}-\frac{2 f_{n_{1}+1}f^{\prime}_{\text{M}}}{f^{\prime\ 2}_{\text{m}}\Delta t_{n_{1}+1}}-\frac{f^{\prime\ 2}_{ \text{M}}}{f^{\prime\ 2}_{\text{m}}}\\ =\frac{\Delta t_{n_{1}+2}}{\Delta t_{n_{1}+1}}+2\frac{f_{n_{1}+2}f ^{\prime}_{\text{m}}-f_{n_{1}+1}f^{\prime}_{\text{M}}}{f^{\prime\ 2}_{\text{m}}\Delta t_{n_{1}+1}}-\frac{f^{\prime\ 2}_{ \text{M}}}{f^{\prime\ 2}_{\text{m}}}.\end{split} \tag{30}\]
We rewrite (30) as
\[\bar{\epsilon}>\frac{\Delta t_{n_{1}+2}}{\Delta t_{n_{1}+1}}+2\frac{\Delta f_{n _{1}+1}}{f^{\prime}_{\text{m}}\Delta t_{n_{1}+1}}-2f_{n_{1}+1}\frac{f^{\prime }_{\text{M}}-f^{\prime}_{\text{m}}}{f^{\prime\ 2}_{\text{m}}\Delta t_{n_{1}+1}}-\frac{f^{\prime\ 2}_{ \text{M}}}{f^{\prime\ 2}_{\text{m}}} \tag{31}\]
We bound the first term on the RHS as \(\frac{\Delta t_{n_{1}+2}}{\Delta t_{n_{1}+1}}\geqslant\frac{T_{\mathsf{m}}}{T_{\mathsf{M}}}\). For the second term, we have that
\[2\frac{\Delta f_{n_{1}+1}}{f^{\prime}_{\text{m}}\Delta t_{n_{1}+1}}=\frac{2f^{ \prime}(\bar{\xi}_{n_{1}+1})}{f^{\prime}_{\text{m}}}\geqslant 2, \tag{32}\]
where \(\bar{\xi}_{n_{1}+1}\in[t_{n_{1}+1},t_{n_{1}+2}]\). Lastly, we bound the third term on the RHS of (31) as
\[\begin{split} 2f_{n_{1}+1}\frac{f^{\prime}_{\text{M}}-f^{\prime}_{ \text{m}}}{f^{\prime\ 2}_{\text{m}}\Delta t_{n_{1}+1}}&=2f_{n_{1}+1}\frac{f^{\prime}( \zeta_{\text{M}})-f^{\prime}(\zeta_{\text{m}})}{f^{\prime\ 2}_{\text{m}}\Delta t_{n_{1}+1}}=\frac{2f_{n_{1}+1}}{f^{\prime\ 2}_{\text{m}}}\left|\frac{f^{\prime}(\zeta_{\text{M}})-f^{\prime}(\zeta_{\text{ m}})}{\zeta_{\text{M}}-\zeta_{\text{m}}}\right|\cdot\frac{|\zeta_{\text{M}}- \zeta_{\text{m}}|}{\Delta t_{n_{1}+1}}\\ &=\frac{2f_{n_{1}+1}}{f^{\prime\ 2}_{\text{m}}}\cdot\left|f^{\prime \prime}(\zeta_{n_{1}+1})\right|\cdot\frac{|\zeta_{\text{M}}-\zeta_{\text{m}}|}{ \Delta t_{n_{1}+1}}\leqslant\frac{2f\left(2T_{\text{M}}+\tau-L\right)}{f^{ \prime\ 2}_{\text{m}}}\cdot f^{\prime\prime}_{\text{M}}\cdot\frac{t_{n_{1}+3}-t_{n_{1}+1} }{t_{n_{1}+2}-t_{n_{1}+1}}\leqslant\frac{2\varphi\left(t_{\delta}\right)}{f^{ \prime\ 2}_{\text{m}}}\cdot f^{\prime\prime}_{\text{M}}\cdot\frac{2T_{\text{M}}}{T_{ \text{m}}},\end{split}\]
where \(t_{\delta}=-L+2T_{\text{M}}\), \(\zeta_{\text{m}},\zeta_{\text{M}}\in[t_{n_{1}+1},t_{n_{1}+3}]\) s.t. \(f^{\prime}(\zeta_{\text{m}})=f^{\prime}_{\text{m}}\) and \(f^{\prime}(\zeta_{\text{M}})=f^{\prime}_{\text{M}}\), \(\bar{\zeta}_{n_{1}+1}\in[t_{n_{1}+1},t_{n_{1}+3}]\) s.t. \(f^{\prime\prime}(\bar{\zeta}_{n_{1}+1})=\frac{f^{\prime}(\zeta_{\text{M}})-f^{\prime}(\zeta_{\text{m}})}{\zeta_{\text{M}}-\zeta_{\text{m}}}\) and \(f^{\prime\prime}_{\text{M}}=\max_{t\in[t_{n_{1}+1},t_{n_{1}+3}]}|f^{\prime\prime}(t)|\). Furthermore, the inequalities above also use that \(f_{n_{1}+1}\leqslant f(\tau+t_{\delta})\), which is due to \(t_{n_{1}-1}\leqslant\tau-L<t_{n_{1}}<t_{n_{1}+1}\leqslant\tau-L+2T_{\text{M}}\). Finally, we define
\[\varphi^{\prime}_{\text{M},\delta}=\max_{t\in\mathbb{S}^{\tau}_{\delta}}f^{\prime}(t ),\quad\varphi^{\prime}_{\text{m},\delta}=\min_{t\in\mathbb{S}^{\tau}_{\delta}}f^{ \prime}(t),\quad\varphi^{\prime\prime}_{\text{M},\delta}=\max_{t\in\mathbb{S}^{ \tau}_{\delta}}f^{\prime\prime}(t),\]
where \(\mathbb{S}^{\tau}_{\delta}=[\tau-L+T_{\text{m}}/2,\tau-L+4T_{\text{M}}]\). Using \([t_{n_{1}+1},t_{n_{1}+3}]\subseteq\mathbb{S}^{\tau}_{\delta}\), we get that \(f^{\prime}_{\text{M}}\leqslant\varphi^{\prime}_{\text{M},\delta}\), \(\varphi^{\prime}_{\text{m},\delta}\leqslant f^{\prime}_{\text{m}}\), and \(f^{\prime\prime}_{\text{M}}\leqslant\varphi^{\prime\prime}_{\text{M},\delta}\). By plugging these bounds in (31), we get \(\bar{\epsilon}>\bar{\epsilon}_{\text{m}}\). According to the definition of \(\bar{\epsilon}\)
\[\epsilon=\frac{\Delta f_{n_{1}+1}\Delta f_{n_{1}+2}\Delta t_{n_{1}+1}}{I^{2}_{n_{1}+1}(\tau)}\frac{f^{\prime}_{\text{m}}}{2f^{\prime}_{\text{M}}}\bar{\epsilon}\geqslant\frac{f^{\prime\,2}_{\text{m}}T^{3}_{\text{m}}}{I^{2}_{n_{1}+1}(\tau)}\frac{f^{\prime}_{\text{m}}}{2f^{\prime}_{\text{M}}}\bar{\epsilon}\geqslant\frac{\varphi^{\prime\,3}_{\text{m},\delta}}{2\varphi^{\prime}_{\text{M},\delta}}\frac{T^{3}_{\text{m}}}{T^{2}_{\text{M}}}\bar{\epsilon}_{\text{m}}.\]
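As an illustrative numerical sanity check of the monotonicity argument above (not part of the proof), one can evaluate the ratio of consecutive local integrals for a toy filter that vanishes for \(t<-L\) and is strictly increasing on \((-L,0)\); the filter, the spike times and the \(\tau\) range below are made up.

```python
# Numerically check that I_{n1+2}(tau) / I_{n1+1}(tau) grows with tau for a toy filter.
import numpy as np

L = 1.0
def phi(t):
    # zero before -L, strictly increasing on (-L, 0), constant afterwards
    return np.where(t < -L, 0.0, np.where(t < 0.0, (t + L) ** 2, 1.0))

t_spikes = np.array([0.02, 0.09, 0.15])      # hypothetical trigger times t_{n1+1}, t_{n1+2}, t_{n1+3}

def local_integral(n, tau, grid=20_000):
    s = np.linspace(t_spikes[n], t_spikes[n + 1], grid)
    return phi(s - tau).mean() * (t_spikes[n + 1] - t_spikes[n])

taus = np.linspace(0.35, 0.9, 60)            # keeps t_k - tau inside (-L, 0)
ratios = np.array([local_integral(1, tau) / local_integral(0, tau) for tau in taus])
assert np.all(np.diff(ratios) > 0), "the ratio should increase monotonically with tau"
```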
The measurement error then can be bounded as
\[\frac{\mathcal{L}_{n_{1}+2}\widetilde{g}}{\mathcal{L}_{n_{1}+1}\widetilde{g}}-\frac{\mathcal{L}_{n_{1}+2}g}{\mathcal{L}_{n_{1}+1}g}\leqslant\frac{\mathcal{L}_{n_{1}+2}g+\eta_{\delta}}{\mathcal{L}_{n_{1}+1}g-\eta_{\delta}}-\frac{\mathcal{L}_{n_{1}+2}g}{\mathcal{L}_{n_{1}+1}g}=\eta_{\delta}\cdot\frac{\mathcal{L}_{n_{1}+1}g+\mathcal{L}_{n_{1}+2}g}{\mathcal{L}_{n_{1}+1}g\left(\mathcal{L}_{n_{1}+1}g-\eta_{\delta}\right)}\leqslant\frac{2g_{\infty}T_{\mathsf{M}}\eta_{\delta}}{\varepsilon_{a}T_{\mathsf{m}}\varphi^{\prime}_{\mathsf{m},\delta}\left(\varepsilon_{a}T_{\mathsf{m}}\varphi^{\prime}_{\mathsf{m},\delta}-\eta_{\delta}\right)}. \tag{36}\]
Similarly, it can be shown that
\[\frac{\mathcal{L}_{n_{1}+2}g}{\mathcal{L}_{n_{1}+1}g}-\frac{\mathcal{L}_{n_{1}+2}\widetilde{g}}{\mathcal{L}_{n_{1}+1}\widetilde{g}}\leqslant\frac{2g_{\infty}T_{\mathsf{M}}\eta_{\delta}}{\varepsilon_{a}T_{\mathsf{m}}\varphi^{\prime}_{\mathsf{m},\delta}\left(\varepsilon_{a}T_{\mathsf{m}}\varphi^{\prime}_{\mathsf{m},\delta}+\eta_{\delta}\right)}. \tag{37}\]
Using (36) and (37), we get that
\[\left|\frac{\mathcal{L}_{n_{1}+2}\widetilde{g}}{\mathcal{L}_{n_{1}+1}\widetilde{g}}-\frac{\mathcal{L}_{n_{1}+2}g}{\mathcal{L}_{n_{1}+1}g}\right|\leqslant\frac{2g_{\infty}T_{\mathsf{M}}\eta_{\delta}}{\left(\varepsilon_{a}T_{\mathsf{m}}\varphi^{\prime}_{\mathsf{m},\delta}-\eta_{\delta}\right)^{2}}. \tag{38}\]
**2) \(\mathbf{a_{1}<0}\)**. We repeat derivations (34)-(38) where \(g(t)\) is replaced by \(-g(t)\), yielding the same bound (38).
Lastly, the error for computing \(\tau\) via \(\frac{I_{n_{1}+2}(\tau)}{I_{n_{1}+1}(\tau)}=\frac{\mathcal{L}_{n_{1}+2}\widetilde{g}}{\mathcal{L}_{n_{1}+1}\widetilde{g}}\) is
\[\left|\tau_{1}-\widehat{\tau}_{1}\right|<\frac{2g_{\infty}T_{\mathsf{M}}\eta_ {\delta}}{\left(\varepsilon_{a}T_{\mathsf{m}}\varphi^{\prime}_{\mathsf{m}, \delta}-\eta_{\delta}\right)^{2}}\cdot\left[\min_{\tau}\left(\frac{I_{n_{1}+2 }(\tau)}{I_{n_{1}+1}(\tau)}\right)^{\prime}\right]^{-1}\leqslant\frac{2g_{ \infty}T_{\mathsf{M}}\eta_{\delta}\varepsilon_{\mathsf{m}}}{\left( \varepsilon_{a}T_{\mathsf{m}}\varphi^{\prime}_{\mathsf{m},\delta}-\eta_{\delta }\right)^{2}}\cdot\frac{2\varphi^{\prime}_{\mathsf{M},\delta}\,T_{\mathsf{M}} ^{2}}{{\varphi^{\prime}_{\mathsf{m},\delta}}^{3}}=e_{\tau},\]
where the last inequality uses (33). For \(a_{1}\), we note that
\[a_{1}=\frac{\mathcal{L}_{n_{1}+1}g}{\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot -\tau_{1})\right]},\quad\widehat{a}_{1}=\frac{\mathcal{L}_{n_{1}+1}g+ \mathcal{L}_{n_{1}+1}\eta}{\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\widehat{ \tau}_{1})\right]}, \tag{39}\]
and thus
\[\left|a_{1}-\widehat{a}_{1}\right|\leqslant\frac{\left|\mathcal{L}_{n_{1}+1}g \cdot\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\widehat{\tau}_{1})-\varphi( \cdot-\tau_{1})\right]\right|}{\left|\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot -\tau_{1})\right]\right|\cdot\left|\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot- \widehat{\tau}_{1})\right]\right|}+\mathcal{N},\]
where \(\mathcal{N}\triangleq\frac{\left|\mathcal{L}_{n_{1}+1}\eta\cdot\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\tau_{1})\right]\right|}{\left|\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\tau_{1})\right]\right|\cdot\left|\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\widehat{\tau}_{1})\right]\right|}\). We then use that
\[\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\widehat{\tau}_{1})\right]=\int_{t_{n _{1}+1}+\tau_{1}-\widehat{\tau}_{1}}^{t_{n_{1}+2}+\tau_{1}-\widehat{\tau}_{1}}f (s)ds, \tag{40}\]
\[\left|a_{1}-\widehat{a}_{1}\right|\leqslant\frac{\left|\mathcal{L}_{n_{1}+1}g \right|\cdot\left|\int_{\mathbb{M}}f(s)ds\right|}{\left|\mathcal{L}_{n_{1}+1} \left[\varphi(\cdot-\tau_{1})\right]\right|\cdot\left|\mathcal{L}_{n_{1}+1} \left[\varphi(\cdot-\widehat{\tau}_{1})\right]\right|}+\mathcal{N},\]
s.t. \(\mathbb{M}\triangleq\left[t_{n_{1}+1},t_{n_{1}+1}+\widehat{\tau}_{1}-\tau_{1} \right]\cup\left[t_{n_{1}+2}+\widehat{\tau}_{1}-\tau_{1},t_{n_{1}+2}\right]\). Using \(\left|\mathcal{L}_{n_{1}+1}g\right|<g_{\infty}T_{\mathsf{M}}\), \(\left|\mathcal{L}_{n_{1}+1}\eta\right|<\eta_{\delta}\), \(\left|\int_{\mathbb{M}}f(s)ds\right|\leqslant 2e_{\tau}\), and \(\left|\mathcal{L}_{n_{1}+1}\varphi(\cdot-\tau_{1})\right|<T_{\mathsf{M}}\),
\[\left|a_{1}-\widehat{a}_{1}\right|\leqslant\frac{T_{\mathsf{M}}(2e_{\tau}g_{ \infty}+\eta_{\delta})}{\left|\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\tau_{1}) \right]\right|\cdot\left|\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\widehat{ \tau}_{1})\right]\right|}. \tag{41}\]
The following holds from (40), using that \(e_{\tau}<T_{\mathsf{m}}/2\).
\[\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\widehat{\tau}_{1})\right]\geqslant \int_{-L+\tau+T_{\mathsf{m}}/2}^{-L+\tau+3T_{\mathsf{m}}/2}f(s)ds\geqslant T_{ \mathsf{m}}\varphi^{\prime}_{\mathsf{m},\delta}. \tag{42}\]
Using (34) we get \(\mathcal{L}_{n_{1}+1}\left[\varphi(\cdot-\tau_{1})\right]\geqslant T_{\mathsf{m}} \varphi^{\prime}_{\mathsf{m},\delta}\), which completes the proof via (41).
## VIII Conclusions
In this paper, we introduced a new recovery method for FRI signals from TEM measurements that can tackle a wider class of FRI filters than previously possible. We introduced guarantees in the noiseless and noisy scenarios. We validated the method numerically, showing it can tackle existing FRI filters, but also random filters, which are not compatible with existing approaches. We further validated our method via a TEM hardware experiment. When the FRI filter is not designed, as it may result from the environment and the physical properties of the acquisition device, the proposed method is still applicable and allows bypassing the filter modelling stage. Additionally, by allowing a wider class of filters, the proposed algorithm can incorporate non-idealities, thus enabling a co-design of hardware and algorithms for future FRI acquisition systems. |
2309.00030 | Audio-Driven Dubbing for User Generated Contents via Style-Aware
Semi-Parametric Synthesis | Existing automated dubbing methods are usually designed for Professionally
Generated Content (PGC) production, which requires massive training data and
training time to learn a person-specific audio-video mapping. In this paper, we
investigate an audio-driven dubbing method that is more feasible for User
Generated Content (UGC) production. There are two unique challenges to design a
method for UGC: 1) the appearances of speakers are diverse and arbitrary as the
method needs to generalize across users; 2) the available video data of one
speaker are very limited. In order to tackle the above challenges, we first
introduce a new Style Translation Network to integrate the speaking style of
the target and the speaking content of the source via a cross-modal AdaIN
module. It enables our model to quickly adapt to a new speaker. Then, we
further develop a semi-parametric video renderer, which takes full advantage of
the limited training data of the unseen speaker via a video-level
retrieve-warp-refine pipeline. Finally, we propose a temporal regularization
for the semi-parametric renderer, generating more continuous videos. Extensive
experiments show that our method generates videos that accurately preserve
various speaking styles, yet with considerably lower amount of training data
and training time in comparison to existing methods. Besides, our method
achieves a faster testing speed than most recent methods. | Linsen Song, Wayne Wu, Chaoyou Fu, Chen Change Loy, Ran He | 2023-08-31T15:41:40Z | http://arxiv.org/abs/2309.00030v1 | # Audio-Driven Dubbing for User Generated Contents via Style-Aware Semi-Parametric Synthesis
###### Abstract
Existing automated dubbing methods are usually designed for Professionally Generated Content (PGC) production, which requires massive training data and training time to learn a person-specific audio-video mapping. In this paper, we investigate an audio-driven dubbing method that is more feasible for User Generated Content (UGC) production. There are two unique challenges to design a method for UGC: 1) the appearances of speakers are diverse and arbitrary as the method needs to generalize across users; 2) the available video data of one speaker are very limited. In order to tackle the above challenges, we first introduce a new Style Translation Network to integrate the speaking style of the target and the speaking content of the source via a cross-modal AdaIN module. It enables our model to quickly adapt to a new speaker. Then, we further develop a semi-parametric video renderer, which takes full advantage of the limited training data of the unseen speaker via a video-level retrieve-warp-refine pipeline. Finally, we propose a temporal regularization for the semi-parametric renderer, generating more continuous videos. Extensive experiments show that our method generates videos that accurately preserve various speaking styles, yet with considerably lower amount of training data and training time in comparison to existing methods. Besides, our method achieves a faster testing speed than most recent methods.
Talking face generation, video generation, GAN, thin-plate spline.
## I Introduction
With the popularity of User Generated Content (UGC) [1, 2] (_e.g._, YouTube and TikTok), dubbing technologies have come within reach of ordinary users for producing creative and entertaining content. In this paper, we propose an automated audio-driven dubbing method for UGC production based on two requirements: 1) the dubbing method needs to handle various users; 2) most users do not have the patience to record a long video for training a model and wish to get dubbed videos as fast as possible.
Most existing audio dubbing methods only cope with one/several speakers, requiring massive training data and long training time (Fig. 1 (b)). Consequently, these methods are usually targeted for the Professionally Generated Content (PGC) [3, 4] production (_e.g._ films and TV shows) and rarely used in the UGC setting. Such techniques are currently inapplicable for UGC production due to the following two challenges: 1) _Speaker Variance_. Different speakers have their unique mouth shapes and textures. We define the time-varying mouth shapes of a speaker as its unique speaking style, both in terms of mouth shape and movement timing. Many methods designed for PGC production either consider only the homogeneous generation of one's video by its own audio [5] or neglect the speaking style differences between speakers [6]. Compared with PGC production, to generate high quality talking videos for any unseen speaker, the dubbing methods designed for UGC production need to preserve their unique speaking styles. The main challenge lies in tackling the speaking style of a given speaker and the speaking content of a given audio at the same time. 2) _Training Resource_. A dubbing method designed for UGC production should quickly adapt to an unseen speaker with very limited video data. For methods designed for PGC production, adapting to an unseen speaker is expensive as one needs to retrain the whole network on massive training data from the speaker. The main challenge is to generate realistic and lip synchronized talking videos rapidly after giving a short video of an unseen speaker.
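The style/content fusion challenge above is what the cross-modal AdaIN module (introduced below) is designed to address. The following is a minimal, hypothetical sketch of such a block: content features from one modality are re-normalized with a scale and bias predicted from a style code of another modality; the class name, dimensions and the use of `InstanceNorm1d` are assumptions made for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossModalAdaIN(nn.Module):
    """Re-normalize content features with statistics predicted from a style code."""
    def __init__(self, content_channels: int, style_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm1d(content_channels, affine=False)
        self.to_scale_bias = nn.Linear(style_dim, 2 * content_channels)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # content: (B, C, T) audio-driven content features; style: (B, style_dim) speaker style code
        scale, bias = self.to_scale_bias(style).chunk(2, dim=-1)
        return self.norm(content) * (1 + scale.unsqueeze(-1)) + bias.unsqueeze(-1)

# Example: fuse audio content (batch 2, 64 channels, 100 frames) with a 128-d style code.
out = CrossModalAdaIN(64, 128)(torch.randn(2, 64, 100), torch.randn(2, 128))
```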
In this paper, we aim at realizing audio-driven dubbing for UGC through mitigating the issue of speaker variance and lowering the consumption of training resources (Fig. 1). To tackle the aforementioned challenges, we first propose a cross-modal Adaptive Instance Normalization (AdaIN [7]) in our Style Translation Network that maps the source audio to the mouth motion of an unseen target speaker with preserved speaking style. AdaIN is popular for image style transfer [8] and we extend it to fuse information from different modalities (_i.e._ audio and video). Then, we propose a semi-parametric framework that helps to bring down the training resources, including both training data and time. The limited video |
2309.06441 | Learning Disentangled Avatars with Hybrid 3D Representations | Tremendous efforts have been made to learn animatable and photorealistic
human avatars. Towards this end, both explicit and implicit 3D representations
are heavily studied for a holistic modeling and capture of the whole human
(e.g., body, clothing, face and hair), but neither representation is an optimal
choice in terms of representation efficacy since different parts of the human
avatar have different modeling desiderata. For example, meshes are generally
not suitable for modeling clothing and hair. Motivated by this, we present
Disentangled Avatars~(DELTA), which models humans with hybrid explicit-implicit
3D representations. DELTA takes a monocular RGB video as input, and produces a
human avatar with separate body and clothing/hair layers. Specifically, we
demonstrate two important applications for DELTA. For the first one, we
consider the disentanglement of the human body and clothing and in the second,
we disentangle the face and hair. To do so, DELTA represents the body or face
with an explicit mesh-based parametric 3D model and the clothing or hair with
an implicit neural radiance field. To make this possible, we design an
end-to-end differentiable renderer that integrates meshes into volumetric
rendering, enabling DELTA to learn directly from monocular videos without any
3D supervision. Finally, we show how these two applications can be easily
combined to model full-body avatars, such that the hair, face, body and
clothing can be fully disentangled yet jointly rendered. Such a disentanglement
enables hair and clothing transfer to arbitrary body shapes. We empirically
validate the effectiveness of DELTA's disentanglement by demonstrating its
promising performance on disentangled reconstruction, virtual clothing try-on
and hairstyle transfer. To facilitate future research, we also release an
open-sourced pipeline for the study of hybrid human avatar modeling. | Yao Feng, Weiyang Liu, Timo Bolkart, Jinlong Yang, Marc Pollefeys, Michael J. Black | 2023-09-12T17:59:36Z | http://arxiv.org/abs/2309.06441v1 | # Learning Disentangled Avatars with Hybrid 3D Representations
###### Abstract
Tremendous efforts have been made to learn animatable and photorealistic human avatars. Towards this end, both explicit and implicit 3D representations are heavily studied for a holistic modeling and capture of the whole human (_e.g._, body, clothing, face and hair), but neither representation is an optimal choice in terms of representation efficacy since different parts of the human avatar have different modeling desiderata. For example, meshes are generally not suitable for modeling clothing and hair. Motivated by this, we present Disentangled Avatars (DELTA), which models humans with hybrid explicit-implicit 3D representations. DELTA takes a monocular RGB video as input, and produces a human avatar with separate body and clothing/hair layers. Specifically, we demonstrate two important applications for DELTA. For the first one, we consider the disentanglement of the human body and clothing, and in the second, we disentangle the face and hair. To do so, DELTA represents the body or face with an explicit mesh-based parametric 3D model and the clothing or hair with an implicit neural radiance field. To make this possible, we design an end-to-end differentiable renderer that integrates meshes into volumetric rendering, enabling DELTA to learn directly from monocular videos without any 3D supervision. Finally, we show how these two applications can be easily combined to model full-body avatars, such that the hair, face, body and clothing can be fully disentangled yet jointly rendered. Such a disentanglement enables hair and clothing transfer to arbitrary body shapes. We empirically validate the effectiveness of DELTA's disentanglement by demonstrating its promising performance on disentangled reconstruction, virtual clothing try-on and hairstyle transfer. To facilitate future research, we also release an open-sourced pipeline for the study of hybrid human avatar modeling.
## 1. Introduction
Recent years have witnessed an unparalleled surge in the utilization of 3D human reconstruction and reenactment in numerous applications such as virtual and augmented reality, telepresence, games, and movies. It is of broad interest to create personal avatars from readily available setups (_e.g._, monocular videos). It is desirable in practice for the avatars to be photorealistic, 3D-consistent, animatable, easily editable and generalizable to novel poses. These characteristics call for a faithful disentanglement and modeling of different semantic components of the avatar (_e.g._, face and hair for head, body and clothing for whole body). Therefore, how to disentangle human avatars while yielding accurate reconstructions is of great significance and remains an open challenge.
Existing methods for learning 3D human avatars can be roughly categorized into _explicit_ ones and _implicit_ ones. Explicit methods typically use triangular meshes as the representation, and the reconstruction heavily relies on statistical shape priors such as 3D morphable models for the head and parametric mesh models for the body. Implicit methods instead encode the avatar with implicit surface functions (e.g., Zheng et al., 2022) or with a volumetric representation (Gafni et al., 2021; Gao et al., 2022; Peng et al., 2021). Both explicit and implicit methods use a single 3D representation to model different parts of the avatar, which ignores the representation efficacy and therefore can be sub-optimal. For example, triangular meshes are an efficient representation for faces and minimally clothed bodies, for which statistical template priors are available, but meshes are generally a poor representation for hair or clothing since they can be inefficient to capture the underlying geometry. On the other hand, implicit representations render high-fidelity 2D views but are nontrivial to animate and usually cannot generalize to unseen poses and expressions. Since no single 3D representation is perfect, _why not use a different one for each part of the avatar?_ Motivated by this, we propose **D**is**E**ntang**L**ed ava**TA**rs (DELTA), which models the face and body with explicit triangular meshes, and models the hair and clothing with an implicit neural radiance field (NeRF) (Mildenhall et al., 2020). The intuition behind such a design is two-fold. First, both faces and bodies have regular topological structures and live in a low-dimensional subspace (Basri and Jacobs, 2003; Li et al., 2009). It is therefore a well-motivated choice to represent the face or body geometry with mesh templates. Second, hair consists of countless freely deforming thin strands, which hinders triangular meshes from being a suitable representation. Clothing (_e.g._, dresses) also consists of complex topological structures and has a diverse set of styles. Due to the complex nature of hair and clothing, it is highly difficult to accurately model their surface geometry, which renders NeRF an arguably better choice of representation.
The effectiveness of hybrid 3D representation has already found its traces in human-scene reconstruction (Pavlakos et al., 2022), clothed body modeling (Feng et al., 2022), and human eye modeling (Li et al., 2022). For example, (Pavlakos et al., 2022) reconstructs the static scene with a NeRF which excels at representing fine-grained scene details, and the people inside with a SMPL (Loper et al., 2015) representation which is good at body pose recovery. Despite modeling different subjects under different context, the essence of hybrid representation is the adoption of heterogeneous 3D representations such that each representation can be made the best use of. Extending our prior work (Feng et al., 2022), DELTA is the _first_ method to demonstrate the power of hybrid representation for learning human avatars (including face, body, hair and clothing). Specifically, we instantiate the idea of DELTA in two capture settings. First, we consider the disentangled reconstruction of human head where the head (and upper shoulder) is represented by a parametric mesh model (_i.e._, FLAME (Li et al., 2017) and SMPL-X (Pavlakos et al., 2019)) and the hair is represented by a NeRF. Unlike existing works (Gafni et al., 2021; Grassal et al., 2022; Zheng et al., 2022), DELTA additionally reconstruct the upper body (_e.g._, shoulder), such that people with long hair can be better captured. Second, we consider the disentangled reconstruction of human body where the body is represented by a parametric mesh model (_i.e._, SMPL-X) and the clothing is represented by a NeRF. Combining the disentangled capture of both human head and body, we demonstrate that both hair and clothing can be simultaneously transferred to arbitrary reconstructed human body. See Figure 1 for an illustration.
Distinct from existing work (Li et al., 2022; Pavlakos et al., 2022), at the very heart of DELTA is our novel mesh-integrated volumetric renderer, which not only drives the disentanglement of different parts of the avatar (_i.e._, face, hair, body, clothing), but also enables the end-to-end differentiable learning directly from monocular videos without any 3D supervision. We expect the idea of hybrid 3D representation to be quite general, and DELTA aims to demonstrate the power of hybrid 3D representation by bringing together meshes and NeRFs in modeling human avatars.
_Why is disentanglement so important for learning avatars?_ We answer this question by listing some key desiderata for photorealistic avatar creation. First, the pose-dependent factors should be disentangled from the appearance such that the captured avatar can be easily reusable in new environments. Second, disentangling the human body, hair, and clothing is crucial to accurately model their respective dynamics, since the motion dynamics of the human body, hair, and clothing are completely distinct from each other. Moreover, modeling the interaction between body and hair/clothing also requires an accurate disentanglement. Such a disentanglement becomes even more important when performing physical simulation on the reconstructed avatar. Third, human body, hair and clothing have totally different material and physical properties, which results in different lighting phenomena. In order to construct realistic and generalizable avatars, human body and hair/clothing have to be disentangled and modeled separately. Towards the goal of learning disentangled avatars, our contributions are listed below:
* By substantially extending our previous work (Feng et al., 2022), we propose the disentangled avatar that models face/body and hair/clothing with a hybrid 3D representation. Such a hybrid representation marries the statistical priors of mesh surfaces with the representational flexibility of implicit functions. DELTA is one of the first methods that uses a hybrid explicit-implicit representation to reconstruct high-fidelity disentangled avatars.
* We design a novel differentiable volumetric rendering method that incorporates meshes into volumetric rendering.
* The framework of DELTA is fully differentiable and end-to-end trainable. It is trained on a monocular video (_e.g._, from web cameras) without requiring any 3D supervision.
* For the face and body, DELTA delivers high-fidelity details while remaining effortless to repose. For the hair and clothing regions, DELTA yields realistic hair and clothing reconstructions owing to the powerful implicit NeRF representation.
* We emphasize that the major contribution of DELTA is to serve as a demonstration to showcase the potentials of hybrid 3D representation in modeling human avatars.
## 2. Related Work
### Head Avatar Creation
**Explicit head avatars**. Explicit head avatars are typically based on explicit 3D representations (_e.g._, triangular meshes). 3D morphable models (3DMM) (Blanz and Vetter, 1999), which are obtained from a population of 3D head scans (Egger et al., 2020), are widely used as a strong statistical prior to represent the geometry of faces. Built upon 3DMM, many improved variants have been proposed, including multi-linear models for shape and expression (Cao et al., 2013; Vlasic et al., 2006), full-head models (Dai et al., 2020; Li et al.,
2017; Ploumpis et al.2020), and deep nonlinear models (Ranjan et al.2018; Tran and Liu, 2018). Besides, morphable models also provide a linear model for textures (Aldrian and Smith, 2010; Blanz and Vetter, 1999, 2003; Paysan et al., 2009). 3DMM and its variants can be used to reconstruct faces through an optimization procedure (Gecer et al., 2019; Romdhani and Vetter, 2005; Schonborn et al., 2017; Thies et al., 2016) or learning-based estimation (Deng et al., 2019; Dib et al., 2021; Feng et al., 2021; Khakhulin et al., 2022; Lattas et al., 2020; Li et al., 2018; Sanyal et al., 2019; Shang et al., 2020; Tewari et al., 2019, 2018, 2017; Wen et al., 2021). **Besides 3DMM template priors**, other priors (_e.g._, symmetry (Liu et al., 2022; Wu et al., 2020), causality (Liu et al., 2022; Wen et al., 2021), identity (Cole et al., 2017; Feng et al., 2021)) are also considered in 3D face reconstruction. Despite producing good coarse facial geometry, these methods are usually unable to reconstruct fine-grained facial details and the entire head (_e.g._, hair). Some methods (Alldieck et al., 2018; Cao et al., 2015; Feng et al., 2021) use mesh displacements to reconstruct fine details such as wrinkles, producing fine-grained geometry. Following a similar spirit, Grassal et al. (2022) use a geometry refinement network that learns a pose-dependent offset function for geometry corrections, and produces photorealistic outputs under novel views. PointAvatar (Zheng et al., 2023) uses a deformable point-based representation to reconstruct human heads from videos. Unlike previous work, DELTA captures the head avatar with disentangled face and hair components. DELTA adopts the explicit mesh-based representation to model the face region, making it easily animatable. For the hair, we utilize an implicit NeRF-based representation, capable of accommodating various hair types. With this approach, we can utilize models tailored for faces and hair, and it also unlocks potential applications like hairstyle transfer.
**Implicit head avatars.** Implicit models normally encode the 3D head avatar with NeRF-based representation (Mildenhall et al., 2020; Muller et al., 2022) or implicit surface functions (Chen and Zhang, 2019; Kellnhofer et al., 2021; Mescheder et al., 2019; Park et al., 2019; Yariv et al., 2020). NeRF-based methods have been explored for 3D face modeling from images or videos (Chan et al., 2021; Gafni et al., 2021; Park et al., 2021; Wang et al., 2021). Gafni et al. (2021) reconstruct an animatable NeRF from a single monocular video, which is conditioned on the expression code from a 3DMM. Gao et al. (2022) propose a NeRF-based linear blending representation where expression is encoded by multi-level voxel fields. AvatarMAV (Xu et al., 2023) uses neural voxel fields to represent motion and appearance to achieve fast head reconstruction. LatentAvatar (Xu et al., 2023) reconstructs a NeRF-based head avatar that is driven by latent expression codes, and these expression codes are learned in an end-to-end and self-supervised manner without the tracking of templates. However, NeRF-based head representations generally suffer from poor 3D geometry and struggles to generalize to unseen poses/expressions. Approaches utilizing implicit surface functions generally provide better geometry for faces. Yenamandra et al. (2021) proposes an implicit morphable face model that disentangles texture and geometry. Zheng et al. (2022) parameterize the head with implicit surface functions in the canonical space, and represents the expression- and pose-dependent deformations via learned blendshapes and skinning fields. Ramon et al. (2021) use an optimization-based approach to estimate the signed distance function (SDF) of a full head from a few images, and this optimization is constrained by a pre-trained 3D head SDF model. In contrast to both explicit and implicit head avatars that use a holistic 3D representation, DELTA is the first method that adopts a hybrid explicit-implicit 3D representation to separately model face and hair. DELTA marries the strong controllability of the mesh-based face and the high-fidelity rendering of the NeRF-based hair.
### Full Body Avatar Creation
**Explicit Body Avatars.** The 3D surface of a human body is typically represented by a learned statistical 3D model using an explicit mesh representation (Anguelov et al., 2005; Joo et al., 2018; Loper et al., 2015; Osman et al., 2020; Pavlakos et al., 2019). The parametric models (Loper et al., 2015; Pavlakos et al., 2019) can produce a minimal clothed body when the shape parameters are provided. Numerous optimization and regression methods have been proposed to compute 3D shape and pose parameters from images, videos, and scans. See (Liu et al., 2022; Tian et al., 2022) for recent surveys. We focus on methods that capture full-body pose and shape, including the hands and facial expressions (Choutas et al., 2020; Feng et al., 2021; Pavlakos et al., 2019; Rong et al., 2021; Xiang et al., 2019; Xu et al., 2020; Zhou et al., 2021). Such methods, however, do not capture hair, clothing, or anything that deviates the body. Also, they rarely recover texture information, due to the large geometric discrepancy between the clothed human in the image and captured minimal clothed body mesh. Some methods choose to model body along with clothing. However, clothing is more complex than the body in terms of geometry, non-rigid deformation, and appearance, making the capture of clothing from images challenging. Explicit ways to capture clothing often use additional vertex offsets relative to the body mesh (Alldieck et al., 2019, 2018, 2019; Jin et al., 2020; Lazova et al., 2019; Ma et al., 2020; Xiu et al., 2023). While such an approach generally works well for tight clothing, it still struggles to capture loose clothing like skirts and dress.
**Implicit Body Avatars**. Recently, implicit representations have gained traction in modeling the human body (Alldieck et al., 2021; Xu et al., 2020). Correspondingly, methods have been developed to estimate implicit body shape from images (Xu et al., 2020). However, similar to explicit body model (Pavlakos et al., 2019), they only model minimal clothed body. When it comes to clothed avatars, recent methods are leveraging implicit representations to handle more complex variations in clothing styles, aiding in the recovery of clothing structures. For instance, (He et al., 2021; Huang et al., 2020; Saito et al., 2019, 2020; Xiu et al., 2022; Zheng et al., 2021) extract pixel-aligned spatial features from images and map them to an implicit shape representation. To amitate the captured non-parametric clothed humans, Yang et al. (2021) predict skeleton and skinning weights from images to drive the representation. Corona et al. (2021) represent clothing layers with deep unsigned distance functions (Chibane et al., 2020), and learn the clothing style and clothing cut space with an auto-decoder. Once trained, the clothing latent code can be optimized to match image observations, but it produces over-smooth results without detailed wrinkles. PoseVocab (Li et al., 2023) models NeRF-based human avatars by learning pose encoding. Although such implicit models can capture various clothing styles
much better than explicit mesh-based approaches, faces and hands are usually poorly recovered due to the lack of a strong prior on the human body. In addition, such approaches typically require a large set of manually cleaned 3D scans as training data. Recently, various methods recover 3D clothed humans directly from multi-view or monocular RGB videos (Chen et al., 2021; Jiang et al., 2022; Liu et al., 2021; Peng et al., 2021, 2022, 2021; Qiu et al., 2023; Su et al., 2021; Weng et al., 2022). They optimize avatars from image information using implicit shape rendering (Liu et al., 2020; Niemeyer et al., 2020; Yariv et al., 2021, 2020) or volume rendering (Mildenhall et al., 2020), no 3D scans are needed. Although these approaches demonstrate impressive performance, hand gestures and facial expressions are difficult to capture and animate due to the lack of model expressiveness and controllability. AvatarReX (Zheng et al., 2023) learns a NeRF-based full-body avatar with disentangled modeling of face, body and hands, but the clothing is still entangled with body.
Unlike prior methods, we view clothing as a separate layer above the body and combine explicit body models and implicit clothing to leverage the advantages of both. The mesh-based body model allows us to create human shapes with detailed components (_e.g._, hands) and to control the body (_e.g._, expressions and hand articulations). With implicit representation, we can capture a variety of clothing using images, without the need for 3D scans. Moreover, the disentangled modeling of explicit body and implicit clothing facilitates seamless clothing transfer, enabling applications like virtual try-ons.
### Other Related Work
**Hybrid 3D representation**. The potentials of hybrid 3D representation have also been demonstrated in other 3D reconstruction tasks. Pavlakos et al. (2022) represent the background static scene as a NeRF and the people inside as SMPL models. Li et al. (2022) model the eye-ball surface with an explicit parametric surface model and represents the periocular region and the interior of the eye with deformable volumetric representations. Hybrid explicit-implicit representation has also been explored in transparent object reconstruction (Xu et al., 2022) and haptic rendering (Kim et al., 2004).
**Hair modeling**. How to represent hair is a long-standing problem in human modeling (Ward et al., 2007). Strand-based modeling is widely adopted to model human hair (Becler et al., 2012; Chai et al., 2013, 2012; Herrera et al., 2012; Hu et al., 2014; Luo et al., 2012, 2013; Nam et al., 2019; Rosu et al., 2022; Sun et al., 2021; Yang et al., 2019; Zhang et al., 2017; Zhang and Zheng, 2019; Zhou et al., 2018). Zheng et al. (2023) recover the strand-based 3D hair from an intermediate representation that consists of a strand map and a depth map. Neural Haircut (Skiyarova et al., 2023) uses a two-stage coarse-to-fine optimization to reconstruct the strand-level hair. More recently, volumetric representation is also applied to perform hair modeling (Saito et al., 2018; Wang et al., 2022). Their primary focus is on hair reconstruction, and they typically utilize head-tracked meshes from multi-view images (Rosu et al., 2022; Wang et al., 2021, 2022) or reconstruct faces from videos with stationary heads (Skiyarova et al., 2023). None of these methods, however, are designed to learn faces from monocular videos with dynamic facial expressions. In contrast, our approach distinguishes itself by learning both facial features and hair from monocular videos, even when the head is moving. Since the primary objective of DELTA is to disentangle the representation of faces and hair rather than accurately capturing hair geometry, we employ a NeRF representation for hair modeling. The disentangled capture of face, upper body and hair is a necessary step before one can perform high-fidelity hair modeling, so DELTA also serves as a stepping stone for future work that combines better hair modeling in creating disentangled head avatars.
**Garment reconstruction**. The task of reconstructing 3D garments from images or videos has proven to be a complex challenge (Danerek et al., 2017; Hong et al., 2021; Li et al., 2021; Qiu et al., 2023; Su et al., 2022; Zhao et al., 2021; Zhu et al., 2020). This complexity arises from the wide diversity in clothing topologies. To tackle this, existing methods often rely on either clothing template meshes or implicit surface functions. Typically, these approaches demand access to 3D data. Many approaches employ training data produced by physics-based simulations (Bertiche et al., 2020; Patel et al., 2020; Santesteban et al., 2019; Vidaurre et al., 2020) or require template meshes fit to 3D scans (Chen et al., 2021; Halimi et al., 2022; Pons-Moll et al., 2017; Tiwari et al., 2020; Xiang et al., 2021). Jiang et al. (2020) train a mesh-based multi-clothing model on 3D datasets with various clothing styles. Zhu et al. (2020) introduce a adaptable template that allows for encoding clothing with diverse topologies within a single mesh template. Then during inference, a trained network produces the 3D clothing as a separate mesh-based layer by recognizing and predicting the clothing style from an image. Zhu et al. (2022) fit template meshes to non-parametric 3D reconstructions. While these methods recover garments from images, they are limited in visual fidelity, as they do not capture clothing appearance. Additionally, methods with such predefined clothing style templates can not easily handle the real clothing variations, limiting their applications. In contrast, Corona et al. (2021) represent clothing layers with deep unsigned distance functions (Chibane et al., 2020), and learn the clothing style and clothing cut space with an auto-decoder. Once trained, the clothing latent code can be optimized to match image observations, but it produces over-smooth results without detailed wrinkles. Instead, DELTA models the clothing layer with a neural radiance field, and optimizes the body and clothing layer from scratch instead of the latent space of a learned clothing model. Therefore, DELTA produces avatars with higher visual fidelity (see Section 5).
## 3. Delta: Learning Disentangled Avatars
Given a monocular video, DELTA reconstructs a head (or body) avatar where head/body and hair/clothing are fully disentangled. Once the avatar is built, we can animate it with novel poses and change the hairstyle and clothing effortlessly. Because the way DELTA reconstructs the head and the body is largely the same, we simplify the description by referring to the face or body as the _avatar interior_ and to the hair or clothing as the _avatar exterior_.
### Hybrid Explicit-Implicit 3D Representations
Previous work on face and body modeling (Bi et al., 2021; Grassal et al., 2022; Li et al., 2017; Lombardi et al., 2018; Loper et al., 2015; Pavlakos et al., 2019) has demonstrated that both human faces and bodies can be accurately modeled by mesh-based representations. In the light of these encouraging results, we choose mesh as the representation
for the face and body. Specifically, we use SMPL-X [14] to make full use of the human geometry priors. When it comes to representing hair and clothing, it remains an open problem which representation works the best. Because of the complex geometry of hair and clothing, we propose to model both hair and clothing with NeRF [13] - a more flexible and expressive implicit representation. Distinct from meshes, NeRF is agnostic to the style, geometry and topology of hair and clothing.
**Explicit avatar interior by SMPL-X**. SMPL-X is an expressive body model with detailed face shape and expressions. A subject's face and body with neutral expression in the rest pose is defined as
\[T_{P}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})=\tilde{T}+B_{S}(\mathbf{\beta};\mathcal{S} )+B_{P}(\mathbf{\theta};\mathcal{P})+B_{E}(\mathbf{\psi};\mathcal{E}), \tag{1}\]
where \(\tilde{T}\in\mathbb{R}^{n_{o}\times 3}\) is a template of the body shape in the rest pose, \(\mathbf{\beta}\in\mathbb{R}^{|\mathbf{\beta}|}\) is the body identity parameters, and \(B_{S}(\mathbf{\beta};\mathcal{S}):\mathbb{R}^{|\mathbf{\beta}|}\to\mathbb{R}^{n_{o}\times 3}\) are the identity blend shapes. More specifically, \(B_{S}(\mathbf{\beta};\mathcal{S})=\sum_{i=1}^{|\mathbf{\beta}|}\mathbf{\beta}_{i}\mathcal{S}_{i}\), where \(\mathbf{\beta}_{i}\) is the \(i\)-th linear coefficient and \(\mathcal{S}_{i}\) is the \(i\)-th orthonormal principal component. \(\mathbf{\theta}\in\mathbb{R}^{3n_{k}+3}\) denotes the pose parameters, and \(\mathbf{\psi}\in\mathbb{R}^{|\mathbf{\psi}|}\) denotes the facial expression parameters. Similar to the shape space \(\mathcal{S}\), \(B_{P}(\mathbf{\theta};\mathcal{P}):\mathbb{R}^{|\mathbf{\theta}|}\to\mathbb{R}^{n_{o}\times 3}\) denotes the pose blend shapes (\(\mathcal{P}\) is the pose space), and \(B_{E}(\mathbf{\psi};\mathcal{E}):\mathbb{R}^{|\mathbf{\psi}|}\to\mathbb{R}^{n_{o}\times 3}\) denotes the expression blend shapes from the SMPL-X model (\(\mathcal{E}\) is the expression space).
\[\tilde{T}_{P}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})=T_{P}(\mathbf{\beta},\mathbf{ \theta},\mathbf{\psi})+\mathbf{O}. \tag{2}\]
The albedo is represented by an implicit function \(F_{t}:\mathbf{t}\to\mathbf{c}^{\text{mesh}}\) which predicts the RGB color \(\mathbf{c}^{\text{mesh}}\) of each given vertex \(\mathbf{t}\) on the surface. Specifically, we sample vertex \(\mathbf{t}\) from the template mesh \(\tilde{T}\) if the video is under uniform lighting. For more complex lighting conditions, in order to better model the texture, we sample \(\mathbf{t}\) from the surface after the pose deformation. More details can be found in Section 5.2. To capture more geometric details, we use an upsampled version of SMPL-X with \(n_{o}=38,703\) vertices and \(n_{t}=77,336\) faces [15]. Similar to [12], we also add additional faces inside the mouth region for head avatar modeling.
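To make the construction concrete, the following small sketch (not the SMPL-X implementation) assembles the rest-pose geometry of Eqs. (1)–(2) as the template plus linear identity/pose/expression blend shapes plus the per-vertex offsets \(\mathbf{O}\); all array names and shapes are assumptions made for the example.

```python
import numpy as np

def rest_pose_vertices(T_bar, S, P, E, beta, pose_feat, psi, offsets):
    """T_bar, offsets: (n_v, 3); S: (|beta|, n_v, 3); P: (|pose_feat|, n_v, 3);
    E: (|psi|, n_v, 3); beta, pose_feat, psi: 1-D coefficient vectors."""
    B_S = np.einsum("i,ivk->vk", beta, S)        # identity blend shapes
    B_P = np.einsum("i,ivk->vk", pose_feat, P)   # pose-corrective blend shapes
    B_E = np.einsum("i,ivk->vk", psi, E)         # expression blend shapes
    return T_bar + B_S + B_P + B_E + offsets     # Eq. (2): shaped rest-pose vertices
```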
Fig. 2: DELTA takes a monocular RGB video and clothing/hair segmentation masks as input, and outputs a human avatar with separate body and clothing/hair layers. Green letters indicate optimizable modules or parameters.

**Implicit avatar exterior by NeRF**. Based on NeRF [13], we define the avatar exterior (hair or clothing) in the canonical 3D space as an implicit function \(F_{h}:\mathbf{x}^{c}\rightarrow(\mathbf{c}^{\text{nerf}},\sigma)\), which can be parameterized by a multi-layer perceptron (MLP). Here, \(\mathbf{c}^{\text{nerf}}\) represents the RGB color. Given a query point \(\mathbf{x}^{c}\in\mathbb{R}^{3}\) in the canonical space, the implicit NeRF-based function \(F_{h}\) outputs an emitted RGB color \(\mathbf{c}^{\text{nerf}}\) and a volume density \(\sigma\).
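For illustration, a minimal NeRF-style MLP for the exterior field \(F_{h}\) could look as follows; the positional encoding, layer widths and activations are arbitrary choices made for this sketch, not the configuration used by the method.

```python
import torch
import torch.nn as nn

def positional_encoding(x: torch.Tensor, n_freqs: int = 6) -> torch.Tensor:
    freqs = 2.0 ** torch.arange(n_freqs, device=x.device) * torch.pi
    angles = x[..., None] * freqs                          # (..., 3, n_freqs)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1).flatten(-2)

class ExteriorNeRF(nn.Module):
    """F_h: canonical point -> (RGB color, density)."""
    def __init__(self, n_freqs: int = 6, width: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 4),                           # 3 color channels + 1 density
        )

    def forward(self, x_canonical: torch.Tensor):
        h = self.mlp(positional_encoding(x_canonical))
        return torch.sigmoid(h[..., :3]), torch.relu(h[..., 3])   # rgb in [0,1], sigma >= 0
```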
### Pose-dependent Deformation
**Explicit avatar interior deformation**. Given the monocular video, we need to model the movement of the subject. Since our avatar interior model is based on SMPL-X, it provides a natural way to capture the pose deformation and facial expressions. For each frame of the given video, we estimate the pose parameters \(\mathbf{\theta}\in\mathbb{R}^{|\mathbf{\theta}|}\) and the expression parameters \(\mathbf{\psi}\in\mathbb{R}^{|\mathbf{\psi}|}\). Then we can deform the head/body to the observation pose using the linear blend skinning function (_i.e._, LBS). The deformation of the explicit SMPL-X mesh model is modeled by a differentiable function \(M(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})\) that outputs a 3D human body mesh \((\mathbf{V},F)\), where \(\mathbf{V}\in\mathbb{R}^{n_{o}\times 3}\) is a set of \(n_{o}\) vertices and \(F\in\mathbb{R}^{n_{t}\times 3}\) is a set of \(n_{t}\) faces with a fixed topology:
\[M(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})=\text{LBS}(\widetilde{T}_{P}(\mathbf{ \beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O}),J(\mathbf{\beta}),\mathbf{\theta},W), \tag{3}\]
in which \(\mathbf{W}\in\mathbb{R}^{n_{k}\times n_{o}}\) denotes the blend skinning weights used in the LBS function. \(J(\mathbf{\beta})\in\mathbb{R}^{n_{k}\times 3}\) is a function of body shape (Pavlakos et al., 2019), representing the shape-dependent joints. Given a template vertex \(\mathbf{t}_{i}\), the posed vertex \(\mathbf{v}_{i}\) can be computed with a simple linear transformation. Specifically, the forward vertex-wise deformation can be written as the following equation in homogeneous coordinates:
\[\underbrace{\mathbf{v}_{i}}_{\text{Posed vertex}} = \underbrace{\sum_{k=1}^{n_{k}}W_{k,i}G_{k}(\mathbf{\theta},J(\mathbf{\beta}))\cdot\begin{bmatrix}\mathbf{I}&\mathbf{o}_{i}+\mathbf{b}_{i}\\ \mathbf{0}&1\end{bmatrix}}_{\text{$M_{i}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})$: Deformation to the posed space}}\cdot\underbrace{\mathbf{t}_{i}}_{\text{Template vertex}},\]
where \(M_{i}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})\in\mathbb{R}^{4\times 4}\) is the deformation function of template vertex \(\mathbf{t}_{i}\). \(W_{k,i}\) is the \((k,i)\)-th element of the blend weight matrix \(\mathbf{W}\), \(G_{k}(\mathbf{\theta},J(\mathbf{\beta}))\in\mathbb{R}^{4\times 4}\) is the world transformation of the \(k\)-th joint and \(\mathbf{b}_{i}\) is the \(i\)-th vertex of the sum of all blend shapes \(\mathbf{B}\coloneqq B_{S}(\mathbf{\beta})+B_{P}(\mathbf{\theta})+B_{E}(\mathbf{\psi})\). We denote \(\mathbf{V}\) as the vertex set of the posed avatar \((\mathbf{v}_{i}\in\mathbf{V})\). Both \(\mathbf{v}_{i}\) and \(\mathbf{t}_{i}\) are the homogeneous coordinates when applying this deformation function.
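The forward skinning above can be summarized in a few lines; the sketch below (illustrative only) assumes that the per-joint world transforms \(G_{k}\) have already been computed from \((\mathbf{\theta},J(\mathbf{\beta}))\), and all array names are assumptions.

```python
import numpy as np

def blend_skinning(template_v, blendshape_v, offsets, W, G):
    """template_v, blendshape_v, offsets: (n_v, 3); W: (n_k, n_v) skinning weights;
    G: (n_k, 4, 4) per-joint world transforms. Returns posed vertices (n_v, 3)."""
    rest = template_v + blendshape_v + offsets                              # t_i + b_i + o_i
    rest_h = np.concatenate([rest, np.ones((rest.shape[0], 1))], axis=1)    # homogeneous coords
    T = np.einsum("ki,kab->iab", W, G)                                      # sum_k W_{k,i} G_k
    return np.einsum("iab,ib->ia", T, rest_h)[:, :3]
```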
**Implicit avatar exterior deformation**. Aiming to learn the NeRF-based clothing/hair representation in the canonical space, we need to deform from the posed space to the canonical space. Therefore, we perform backward deformation on top of the explicit body skinning. Given a query point \(\mathbf{x}^{p}\) in the posed space (from the observed video frame), we first find the nearest \(k\) points on the body surface \(M\). Then we use the weighted backward skinning function to transform the posed point \(\mathbf{x}^{p}\) to the canonical space (_i.e._, \(\mathbf{x}^{c}\)). To model more accurate clothing/hair movement and deformation, we further learn a pose-dependent deformation function \(F_{e}:(\mathbf{x}^{c},\mathbf{v}^{p}_{n(\mathbf{x}^{p})})\in\mathbb{R}^{6}\rightarrow\mathbf{\Delta}\mathbf{x}^{c}\in\mathbb{R}^{3}\), where \(\mathbf{x}^{p}\) denotes a point in observation space and \(n(\mathbf{x}^{p})\) is the set of indices of the nearest points to \(\mathbf{x}^{p}\) in \(\mathbf{V}^{p}\), which denotes the vertices of the posed body mesh \(M(\mathbf{0},\mathbf{\theta},\mathbf{0},\mathbf{0})\). \(F_{e}\) aims to predict the detailed non-rigid deformation for the query point in the canonical space. The residual \(\mathbf{\Delta}\mathbf{x}^{c}\) is then added back to \(\mathbf{x}^{c}\), and the displaced point \(\tilde{\mathbf{x}}^{c}=\mathbf{x}^{c}+\mathbf{\Delta}\mathbf{x}^{c}\) is fed to the canonical NeRF model \(F_{h}\) in order to compensate for the exterior clothing/hair deformation in the observation space. Specifically, the backward blend skinning mapping from the observation space to the canonical space is given by the following transformation:
\[\underbrace{\mathbf{x}^{c}}_{\text{Canonical}}=\sum_{\mathbf{v}_{i}\in n(\mathbf{x}^{p})}\alpha_{i}(\mathbf{x}^{p})\cdot M_{i}(\mathbf{0},\mathbf{\theta},\mathbf{0},\mathbf{0})\cdot M_{i}^{-1}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})\cdot\underbrace{\mathbf{x}^{p}}_{\text{Posed}},\]
where \(\alpha_{i}\) is the parameter that balances the importance:
\[\alpha_{i}(\mathbf{x}^{p})=\frac{1}{Z}\exp\left(-\frac{1}{2\sigma^{2}}\cdot\|\mathbf{x}^{p}-\mathbf{v}_{i}\|\cdot\|\mathbf{v}_{nn(\mathbf{x}^{p})}-\mathbf{v}_{i}\|\right),\]
where \(Z\) is a normalizing coefficient such that \(\sum_{i\in n(\mathbf{x}^{p})}\alpha_{i}(\mathbf{x}^{p})=1\), \(\mathbf{v}_{i}\) is the \(i\)-th vertex of \(\mathbf{V}^{p}\), \(\sigma\) is a constant, and \(nn(\mathbf{x}^{p})\) denotes the index of the nearest point of \(\mathbf{x}^{p}\) in \(\mathbf{V}^{p}\).
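A minimal NumPy sketch of this weighted inverse skinning is shown below, assuming the per-vertex matrices \(M_{i}(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})\) and \(M_{i}(\mathbf{0},\mathbf{\theta},\mathbf{0},\mathbf{0})\) are precomputed; all identifiers are illustrative.

```python
import numpy as np

def backward_skinning(x_p, posed_verts, deform_posed, deform_canon_pose, sigma=0.1, k=6):
    """Map a query point from the observation space back to the canonical space.

    x_p:               (3,) query point in the observation space
    posed_verts:       (num_verts, 3) vertices of the posed body mesh V^p
    deform_posed:      (num_verts, 4, 4) per-vertex M_i(beta, theta, psi, O)
    deform_canon_pose: (num_verts, 4, 4) per-vertex M_i(0, theta, 0, 0)
    """
    d = np.linalg.norm(posed_verts - x_p, axis=1)   # distances to all posed vertices
    idx = np.argsort(d)[:k]                         # k nearest neighbours n(x^p)
    nn = idx[0]                                     # nearest neighbour nn(x^p)
    # Unnormalized Gaussian-style weights, then normalization by Z.
    w = np.exp(-d[idx] * np.linalg.norm(posed_verts[nn] - posed_verts[idx], axis=1)
               / (2.0 * sigma ** 2))
    alpha = w / (w.sum() + 1e-8)
    # Blend the per-vertex transforms M_i(0, theta, 0, 0) . M_i^{-1}(beta, theta, psi, O).
    T = np.zeros((4, 4))
    for a, i in zip(alpha, idx):
        T += a * deform_canon_pose[i] @ np.linalg.inv(deform_posed[i])
    x_h = T @ np.append(x_p, 1.0)                   # homogeneous coordinates
    return x_h[:3]
```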
### Mesh-integrated Volume Rendering
**Camera model**. We simplify the problem by using a scaled orthographic camera model \(\mathbf{p}=\{s,\mathbf{t}^{\top}\}^{\top}\), where \(s\in\mathbb{R}\) is the isotropic scale and \(\mathbf{t}\in\mathbb{R}^{2}\) denotes the translation.
**Mesh rasterization**. With the geometry parameters \((\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})\), the vertex offsets \(\mathbf{O}\), the RGB colors \(\mathbf{c}^{\text{mesh}}\) of the vertices in the upsampled SMPL-X template, and the camera parameters \(\mathbf{p}\), we render the colored mesh into an image with \(\mathcal{R}_{m}(M(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},F_{d}),\mathbf{c}^{\text{mesh}},\mathbf{p})\), where \(\mathcal{R}_{m}\) denotes the differentiable rasterizer function.
**Mesh-integrated volume rendering**. Finally, we discuss how to take the mesh into consideration while performing volumetric rendering. The basic idea is that the camera ray stops when it intersects the mesh in 3D space. Consider a camera ray \(\mathbf{r}(t)=\mathbf{q}+t\mathbf{d}\) with center \(\mathbf{q}\in\mathbb{R}^{3}\) and direction \(\mathbf{d}\in\mathbb{R}^{3}\), whose rendering interval is \(t\in[t_{m},t_{f}]\subset\mathbb{R}\) (near and far bounds). Unlike previous work, we integrate the body model, \(M(\mathbf{\beta},\mathbf{\theta},\mathbf{\psi},\mathbf{O})\), into the volumetric rendering. Specifically, if \(\mathbf{r}(t)\) intersects \(M\), we set \(t_{f}\) such that \(\mathbf{r}(t_{f})\) is the intersection point with \(M\). In this case, we use the mesh color instead of the NeRF color \(\mathbf{c}^{\text{nerf}}(\mathbf{r}(t_{f}))\) (see Figure 3). More formally, the expected color of the camera ray \(\mathbf{r}\) is defined as
\[\mathbf{c}(\mathbf{r})=\int_{t_{m}}^{t_{f}}\mathbf{c}^{\text{nerf}}(\mathbf{r}(t))\cdot T(t) \cdot\sigma(\mathbf{r}(t))+\mathds{1}_{s}(\mathbf{r})\cdot\delta(t-t_{f})\cdot\mathbf{c}^{ \text{mesh}}dt,\]
where \(\mathds{1}_{s}(\mathbf{r})\) is the indicator function for whether the ray intersects the mesh surface (1 if true, 0 otherwise), \(\delta(\cdot)\) denotes the Dirac delta function, and \(T(t)=\exp(-\int_{t_{m}}^{t}\sigma(\mathbf{r}(s))ds)\). When \(\mathds{1}_{s}(\mathbf{r})\) is true, we set \(t_{f}\) such that \(\mathbf{r}(t_{f})\) is the intersection point with the SMPL-X mesh \(M\), and \(\mathbf{c}^{\text{mesh}}\) is the vertex color of the intersected mesh.

Figure 3. Illustration of mesh-integrated volume rendering.

In practice, we approximate the integral with \(n_{b}\) evenly split bins:

\[\begin{split}\mathbf{c}(\mathbf{r})=&\big{(}1-\sum_{k=1}^{n_{b}-1}T_{k}\big{(}1-\exp(-\sigma_{k}\Delta_{k})\big{)}\big{)}\cdot\big{(}(1-\mathds{1}_{s}(\mathbf{r}))\,\mathbf{c}^{\text{nerf}}(\mathbf{r}_{n_{b}}^{c})+\mathds{1}_{s}(\mathbf{r})\cdot\mathbf{c}^{\text{mesh}}(\mathbf{r}_{n_{b}})\big{)}\\ &+\sum_{j=1}^{n_{b}-1}T_{j}\big{(}1-\exp(-\sigma_{j}\Delta_{j})\big{)}\,\mathbf{c}^{\text{nerf}}(\mathbf{r}_{j}^{c}),\end{split}\]

where we define \(T_{j}=\exp(-\sum_{q=1}^{j-1}\sigma_{q}\Delta_{q})\), \(\mathbf{r}_{j}\) is sampled from the \(j\)-th bin along the camera ray \(\mathbf{r}\), and \(\mathbf{r}_{j}^{c}\) is the corresponding canonical point for the observed point \(\mathbf{r}_{j}\).
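The discretized compositing above can be sketched per ray as follows; tensor shapes and names are assumptions for illustration only.

```python
import numpy as np

def composite_ray(sigmas, deltas, nerf_colors, hits_mesh, mesh_color):
    """Composite one ray over n_b bins; the far bound uses the mesh color when the ray
    intersects the body mesh.

    sigmas:      (n_b - 1,) NeRF densities of the first n_b - 1 bins
    deltas:      (n_b - 1,) bin lengths
    nerf_colors: (n_b, 3)   NeRF colors; nerf_colors[-1] is the far-bound sample
    hits_mesh:   bool, indicator 1_s(r)
    mesh_color:  (3,) vertex color of the intersected mesh
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                      # per-bin opacity
    T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])   # transmittance T_j
    weights = T * alpha                                         # NeRF bin contributions
    background = mesh_color if hits_mesh else nerf_colors[-1]
    # Remaining transmittance goes to the mesh surface or to the last NeRF sample.
    return (1.0 - weights.sum()) * background + (weights[:, None] * nerf_colors[:-1]).sum(axis=0)
```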
### Objective Function
**Overall objective function**. Given a sequence of \(n_{f}\) images, \(I_{f}(1\leq f\leq n_{f})\), we optimize \(\mathbf{\beta}\) and the weights of the MLPs \(F_{d},F_{h},F_{t},F_{e}\) jointly across the entire sequence, and \(\mathbf{\theta}_{f}\) and \(\mathbf{p}_{f}\) per frame. We use the following overall objective function:
\[\mathcal{L}=\mathcal{L}_{\text{recon}}+\mathcal{L}_{\text{ext}}+\mathcal{L}_{ \text{int}}+\mathcal{L}_{\text{reg}}, \tag{4}\]
with reconstruction loss \(\mathcal{L}_{\text{recon}}\), avatar exterior loss \(\mathcal{L}_{\text{ext}}\), avatar interior loss \(\mathcal{L}_{\text{int}}\) (\(\mathcal{L}_{\text{int}}^{\text{body}}\) or \(\mathcal{L}_{\text{int}}^{\text{face}}\)) and regularization \(\mathcal{L}_{\text{reg}}\). For simplicity, we omit the frame index \(f\) and the optimization arguments whenever there is no ambiguity. For videos, the final objective function is the average over all frames.
**Reconstruction loss**. We minimize the difference between the rendered image and the input image with the following objective:
\[\mathcal{L}_{\text{recon}}=\lambda_{\text{pixel}}\cdot\mathcal{L}_{\mathcal{ S}}(\mathcal{R}_{\text{o}}-I)+\lambda_{\text{semantic}}\cdot\mathcal{L}_{ \text{semantic}}(\mathcal{R}_{\text{o}},I), \tag{5}\]
where \(\mathcal{L}_{\mathcal{S}}\) is the Huber loss (Huber, 1964) that penalizes the pixel-level difference, and \(\mathcal{L}_{\text{semantic}}\) is used to regularize the semantic difference. More specifically, we use an ID-MRF loss (Wang et al., 2018) as \(\mathcal{L}_{\text{semantic}}\) for reconstructing the body avatar, and a perceptual loss (Johnson et al., 2016) as \(\mathcal{L}_{\text{semantic}}\) for reconstructing the head avatar. While the Huber loss focuses on the overall reconstruction, the semantic loss allows us to reconstruct more details, as previously shown by Feng et al. (2021).
**Avatar exterior loss**. Only minimizing the reconstruction error \(\mathcal{L}_{\text{recon}}\) results in a NeRF that models the entire avatar, including the body/face regions. Our goal is to only capture exterior components such as clothing or hair using \(F_{h}\). To achieve this, we employ a segmentation mask to explicitly limit the region in which the NeRF density can be non-zero. Given a segmentation mask \(S_{\text{e}}\), which is \(\mathbf{1}\) for every exterior pixel (clothing or hair) and \(\mathbf{0}\) elsewhere, we minimize the following exterior loss:
\[\mathcal{L}_{\text{ext}}=\lambda_{\text{ext}}\left\lVert S_{\text{o}}-S_{\text{e}}\right\rVert_{1,1}, \tag{6}\]
with the rendered NeRF mask \(S_{\text{o}}\), which is obtained by sampling rays for all image pixels and computing per ray
\[\mathbf{s}_{\mathbf{o}}(\mathbf{r})=\sum_{k=1}^{n_{b}-1}T_{k}\big{(}1-\exp(- \sigma_{k}\Delta_{k})\big{)}. \tag{7}\]
Minimizing \(L_{\text{ext}}\) ensures that the aggregated density across rays (excluding the far bound) outside of clothing or hair is \(0\). Therefore, only the intended exterior region is captured by the NeRF model.
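A PyTorch-style sketch of \(\mathcal{L}_{\text{ext}}\) is given below; it reuses the accumulated ray weights of Eq. (7), and the variable names are illustrative assumptions.

```python
import torch

def exterior_loss(ray_sigmas, ray_deltas, seg_exterior, lambda_ext=0.5):
    """ray_sigmas, ray_deltas: (num_rays, n_b - 1) NeRF densities and bin lengths.
    seg_exterior: (num_rays,) binary clothing/hair mask S_e sampled at the ray pixels."""
    alpha = 1.0 - torch.exp(-ray_sigmas * ray_deltas)
    ones = torch.ones_like(alpha[:, :1])
    T = torch.cumprod(torch.cat([ones, 1.0 - alpha], dim=1), dim=1)[:, :-1]
    rendered_mask = (T * alpha).sum(dim=1)          # s_o(r) as in Eq. (7)
    return lambda_ext * (rendered_mask - seg_exterior).abs().sum()
```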
**Avatar interior loss**. To further disentangle the avatar interior and exterior, we need to ensure that the interior mesh model does not capture any exterior variation. To this end, we define a few additional loss functions based on prior knowledge.
First, the interior mesh should match the masked image. Given a binary mask \(S\) of the entire avatar (\(1\) for inside, \(0\) elsewhere), we minimize the difference between the silhouette of the rendered body (denoted by \(\mathcal{R}_{m}^{\delta}(M,\mathbf{p})\)) and the given mask as
\[\mathcal{L}_{\text{silhouette}}=\lambda_{\text{silhouette}}\mathcal{L}_{\mathcal{ S}}(\mathcal{R}_{m}^{\delta}(M,\mathbf{p})-S). \tag{8}\]
Second, the interior mesh should match visible avatar interior (_e.g._, for reconstructing the body, the body mesh should match the visible body region). Only optimizing \(\mathcal{L}_{\text{silhouette}}\) results in meshes that also fit the avatar exterior (_e.g._, clothing or hair). This is undesired especially for loose clothing or long hair, and also leads to visible artifacts when transferring clothing between subjects. Instead, given a binary mask \(S_{b}\) of the visible body parts (\(1\) for body parts, \(0\) elsewhere), we minimize the following part-based silhouette loss
\[\mathcal{L}_{\text{int-mask}}=\lambda_{\text{int-mask}}\mathcal{L}_{\mathcal{ S}}(S_{b}\odot\mathcal{R}_{m}^{\delta}(M,\mathbf{p})-S_{b}), \tag{9}\]
and a part-based photometric loss
\[\mathcal{L}_{\text{skin}}=\lambda_{\text{skin}}\mathcal{L}_{\mathcal{S}}(S_{b} \odot(\mathcal{R}_{m}(M,\mathbf{c},\mathbf{p})-I)), \tag{10}\]
to put special emphasis on fitting visible interior parts.
Third, the interior mesh should stay within the exterior region. Specifically, the body or face should be generally covered by the clothing or hair, yielding to the following loss function:
\[\mathcal{L}_{\text{inside}}=\lambda_{\text{inside}}\mathcal{L}_{\mathcal{ S}}(ReLU(\mathcal{R}_{m}^{\delta}(M,\mathbf{p})-S_{c})). \tag{11}\]
Fourth, the skin color of occluded body vertices should be similar to visible skin regions. For this, we minimize the difference between the body colors in occluded regions and the average skin color as
\[\mathcal{L}_{\text{skin-inside}}=\lambda_{\text{skin-inside}}\mathcal{L}_{ \mathcal{S}}(S_{c}\odot(\mathcal{R}_{m}(M,\mathbf{c},\mathbf{p})-\mathbf{C}_{ \text{skin}})), \tag{12}\]
where \(\mathbf{C}_{\text{skin}}\) is the average color of the visible skin regions. In practice, we encountered challenges with skin detection not performing effectively. Therefore, for body video sequences, we assume that the hands are visible and utilize these hand regions to compute the average skin color. Moreover, for face videos, we determine the skin color by computing the mean color of the cheek region.
Combining the loss functions above, we use the following \(\mathcal{L}_{\text{int}}\) for reconstructing the interior avatar:
\[\mathcal{L}_{\text{int}}=\mathcal{L}_{\text{silhouette}}+\mathcal{L}_{\text{ int-mask}}+\mathcal{L}_{\text{skin}}+\mathcal{L}_{\text{inside}}+\mathcal{L}_{\text{ skin-inside}}. \tag{13}\]
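The interior losses can be sketched as follows, assuming image-shaped tensors for the rasterized silhouette, rendered colors, and masks; the weight dictionary and all names are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def interior_loss(rendered_sil, rendered_rgb, image, mask_avatar, mask_body, mask_clothing,
                  skin_color, w):
    """rendered_sil: (H, W, 1) rasterized mesh silhouette; rendered_rgb, image: (H, W, 3);
    mask_avatar S, mask_body S_b, mask_clothing S_c: (H, W, 1); skin_color: (3,);
    w: dict of loss weights."""
    huber = lambda x: F.huber_loss(x, torch.zeros_like(x))                    # L_S(.) against zero
    l_sil = w['silhouette'] * huber(rendered_sil - mask_avatar)               # Eq. (8)
    l_mask = w['int_mask'] * huber(mask_body * rendered_sil - mask_body)      # Eq. (9)
    l_skin = w['skin'] * huber(mask_body * (rendered_rgb - image))            # Eq. (10)
    l_inside = w['inside'] * huber(torch.relu(rendered_sil - mask_clothing))  # Eq. (11)
    l_skin_in = w['skin_inside'] * huber(mask_clothing * (rendered_rgb - skin_color))  # Eq. (12)
    return l_sil + l_mask + l_skin + l_inside + l_skin_in                     # Eq. (13)
```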
**Regularization**. We regularize the reconstructed mesh surface with
\[\mathcal{L}_{\text{reg}}=\lambda_{\text{edge}}\mathcal{L}_{\text{edge}}(M)+\lambda_ {\text{offset}}\left\lVert\mathbf{O}\right\rVert_{2,2}, \tag{14}\]
where \(\mathcal{L}_{\text{edge}}\) denotes the relative edge loss (Hirshberg et al., 2012) between the optimized interior mesh with and without the applied offsets. For the offset loss, we apply different weights to the body, hand and face region. Details are given in the experiment section.
## 4. Intriguing Insights
**Hybrid representation for general 3D modeling.** While the proposed DELTA demonstrates the effectiveness of hybrid 3D representation for human avatar modeling, the idea of hybrid representation can be broadly useful for modeling general 3D objects and scenes, especially for objects whose components have quite different physical properties. For example, a burning candle can be represented with a mesh-based candle and a NeRF-based flame, and an hourglass can be represented with mesh-based glass and point-based sand. DELTA shows the power of hybrid 3D representation through the lens of human avatar modeling, and we expect more future effort to be put into exploring hybrid 3D representations.
**Hybrid vs. holistic 3D representation.** It has been a long-standing debate regarding the optimal holistic 3D representation for shape modeling. In the existing graphics pipeline, meshes are still a _de facto_ choice for holistic 3D representation due to its efficiency in storage and rendering. However, meshes can be quite limited in representing certain geometric structures, such as hair strand, fluid, smoke and complex clothing. Implicit 3D representations (Chen and Zhang, 2019; Mescheder et al., 2019; Mildenhall et al., 2020; Park et al., 2019) demonstrate strong flexibility in complex shape representation, and in particular, NeRF further shows great novel view synthesis quality. However, it is difficult for NeRF to capture thin shell geometry like human body. While there is no single perfect 3D representation for all objects, why not combine the advantages of different representations and use them together? However, hybrid representation also inevitably introduces some shortcomings. First, the rendering process for hybrid representation becomes highly nontrivial and case-dependent. For example, our mesh-integrated volume rendering only works for the hybrid mesh and NeRF representation. Second, the representational heterogeneity makes subsequent learning and processing more difficult. For example, learning a generative model on hybrid representation is far more complicated than holistic representation. Moreover, editing hybrid representation will also become more challenging for designers. Third, how to choose the right 3D representations to combine is task-dependent. While DELTA uses meshes for human head and NeRFs for hair, it could be better to use a strand-based representation for hair.
## 5. Experiments and Results
### Datasets
DELTA offers a solution for capturing dynamic objects from monocular video. We demonstrate the effectiveness of our approach by applying it to the challenging tasks of capturing clothing and hair from videos. To evaluate our approach, we introduce two types of datasets, one for full-body and one for head capture.
**Full-body datasets.** To compare with other state-of-the-art methods for realistic human capture, we evaluate DELTA on sequences from public sources: People Snapshot (Alldieck et al., 2018), iPER (Liu et al., 2019), and SelfRecon (Jiang et al., 2022). However, none of them provide complicated clothes such as long dresses. Thus, we capture our own data, MPIIS-SCARF, where we record videos of each subject wearing short and long dresses. For People Snapshot, we use the provided SMPL pose as initialization instead of running PIXIE (Feng et al., 2021). To be specific, we use 4 subjects ("male-3-casual", "female-3-casual", "male-4-casual", "female-4-casual") from People Snapshot (Alldieck et al., 2018) for qualitative and quantitative evaluation. The quantitative evaluation follows the settings of Anim-NeRF (Chen et al., 2021). We further use 4 subjects ("subject003", "subject016", "subject022", "subject023") with outfit 1 and motion 1 from iPER (Liu et al., 2019), 4 synthetic videos ("female outfit1", "female outfit2", "female outfit3", "male outfit1"), and 1 self-captured video ("CHH female") from SelfRecon (Jiang et al., 2022) for qualitative evaluation. For MPIIS-SCARF, we use A-pose videos of subject "Yao" with six types of clothing for qualitative evaluation; these videos include loose dresses and short skirts. For each subject, we use around 100-150 images for optimization. For each frame, we run PIXIE (Feng et al., 2021) to initialize \((\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})\) and the camera \(\mathbf{p}\). For datasets without provided silhouette masks, we compute \(S\) with (Lin et al., 2022), and \(S_{c}\) with (Dabhi, 2022).
**Head datasets.** We also evaluate DELTA on head videos from public sources. To be specific, we use video "MVI_1810" from IMAvatar (Zheng et al., 2022), "person_0000" and "person_0004" from neural head avatar (Grassal et al., 2022). As subjects with long hair are missing, we further collected one video with long hair from the Internet, named video "b0_0" (Xiao, 2022) (2:30). For each image from the video, we detect the upper body region and resize it to an image with 512x512 size. We then estimate 68 landmarks (Bulat and Tzimiropoulos, 2017) and iris (Lugaresi et al., 2019), portrait matting with MODNet (Ke et al., 2022), and segment face and hair with face parsing (Zilrunning, 2019). Given the estimated labels and SMPL-X model, we roughly estimate the shape and texture parameters for the subject, and camera, pose, expression and lighting (Spherical harmonic) for each frame. Subsequently, for enhanced SMPL-X shape fitting, we perform parameter optimization across all frames, where shape and texture parameters are shared across frames. These optimized parameters serve as the initialization for our model training. Nonetheless, these videos often lack backviews of the head as they predominantly focus on face-related areas. To demonstrate our method's capacity for capturing complete hairs, we also incorporate synthetic data from the AGORA dataset (Patel et al., 2021). We select three subjects from Agora, each containing the mesh, texture, and corresponding SMPL fits. 200 images are rendered from the textured mesh for training DELTA.
### Implementation Details
We choose \(\sigma=0.1\) and \(|\mathcal{N}\left(\mathbf{x}\right)|=6\). For full-body video, we set \(t_{n}=-0.6\), and \(t_{f}=0.6\) and weight the individual losses with \(\lambda_{\text{pixel}}=1.0\), \(\lambda_{\text{semantic}}=0.0005\), \(\lambda_{\text{ext}}=0.5\), \(\lambda_{\text{silhouette}}=0.001\), \(\lambda_{\text{int-mask}}=30\), \(\lambda_{\text{skin}}=1.0\), \(\lambda_{\text{inside}}=40\), \(\lambda_{\text{skin-inside}}=0.01\), \(\lambda_{\text{edge}}=500\), \(\lambda_{\text{offset}}=400\). For \(\lambda_{\text{offset}}\), the weight ratio of body, face and hands region is \(2:3:12\). Note that it is important to perform the first stage NeRF training without optimizing the non-rigid deformation model. In this stage, we also set \(\lambda_{\text{semantic}}=0\). In the second stage, the non-rigid deformation model then explains clothing deformations that cannot be explained by the body transformation. And \(L_{semantic}\) helps capture more details that can not be modelled by the non-rigid deformation. The overall optimization time is around 40 hours with NVIDIA V100. In head video settings, we conducted SMPL-X fitting
for all frames during data processing, that ensures accurate face fitting. By employing this as our initialization for DELTA training, we can directly train both mesh-based face and NeRF-based hair components. The chosen hyperparameters include \(t_{n}=-1.5\), and \(t_{f}=1.5\). We assign weights to individual losses as follows: \(\lambda_{\text{pixel}}=1.0\), \(\lambda_{\text{semantic}}=0.015\), \(\lambda_{\text{ext}}=0.5\), \(\lambda_{\text{silhouette}}=0.001\), \(\lambda_{\text{int-mask}}=30\), \(\lambda_{\text{skin}}=1.0\), \(\lambda_{\text{inside}}=40\), \(\lambda_{\text{skin-inside}}=0.001\), \(\lambda_{\text{edge}}=500\), \(\lambda_{\text{offset}}=400\). To enhance training efficiency, we adopt Instant-NGP (Li et al., 2023; Muller et al., 2022) for parameterizing the hair component. Unlike the MLP layers in the original NeRF model, Instant-NGP leverages a hash table to store feature grids at various coarseness scales, resulting in fast training and inference speeds. We then require around 40 minutes of optimization time with NVIDIA A100.
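For reference, the hyperparameters listed above can be collected into a single configuration; the grouping and key names below are our own illustrative choices and are not taken from the released code.

```python
# Illustrative configuration mirroring the hyperparameters listed above (assumed names).
DELTA_CONFIG = {
    "full_body": {
        "near_far": (-0.6, 0.6),
        "loss_weights": {"pixel": 1.0, "semantic": 0.0005, "ext": 0.5, "silhouette": 0.001,
                         "int_mask": 30, "skin": 1.0, "inside": 40, "skin_inside": 0.01,
                         "edge": 500, "offset": 400},
        "offset_weight_ratio": {"body": 2, "face": 3, "hands": 12},
        "two_stage": True,   # stage 1: no non-rigid deformation, semantic weight set to 0
    },
    "head": {
        "near_far": (-1.5, 1.5),
        "loss_weights": {"pixel": 1.0, "semantic": 0.015, "ext": 0.5, "silhouette": 0.001,
                         "int_mask": 30, "skin": 1.0, "inside": 40, "skin_inside": 0.001,
                         "edge": 500, "offset": 400},
        "backbone": "Instant-NGP",  # hash-grid parameterization for the hair NeRF
    },
}
```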
### Comparison to Existing Methods
Our approach enables the creation of hybrid explicit-implicit avatars from monocular videos. We note that this has not been achieved by previous methods, which typically model clothed bodies or heads holistically using either implicit or explicit representations. To evaluate the effectiveness of our approach, we compare it to existing state-of-the-art methods on the challenging tasks of clothed-body and head modeling. The explicit-implicit modeling of DELTA also naturally disentangles objects such as the body and clothing, thereby enabling garment reconstruction. Unlike previous methods that reconstruct cloth geometry from a single image with the help of extensive 3D scan data, our approach can reconstruct garments from images alone. We evaluate the effectiveness of DELTA for garment reconstruction by comparing it to existing methods.
**Body and clothing modeling.** We quantitatively compare NB (Omran et al., 2018), SMPLpix (Prokudin et al., 2021), Neural Body (Peng et al., 2021) and Anim-NeRF (Chen et al., 2021), following the evaluation protocol of (Chen et al., 2021). To be specific, we use 4 subjects ("subject003", "subject016", "subject022", "subject023") with outfit 1 and motion 1 from iPER (Liu et al., 2019) for qualitative evaluation. For all subjects, we uniformly select frames 1-490 with a step-size 4 for optimization. We use 4 synthetic video data ("female outfit1", "female outfit2", "female outfit3", "male outfit1") and 1 self-captured video ("CHH female") from SelfRecon (Jiang et al., 2022). For each subject, we use 100 frames for optimization. For self-captured data, we use A-pose videos of subject "Yao" with six types of clothing for qualitative evaluation, those videos include loose dressing and short skirts. For each video, we uniformly select frames 0-400 with a step-size 2 for optimization. Table 1 shows that DELTA is more accurate than the other methods under most metrics. The qualitative comparison in Figure 4 demonstrates that DELTA can better reconstruct the hand and face geometry compared to SelfRecon (Jiang et al., 2022) and Anim-NeRF (Chen et al., 2021).
Fig. 4. Qualitative comparison with SelfRecon (Jiang et al., 2022) and Anim-NeRF (Chen et al., 2021) for reconstruction. While all methods capture the clothing with comparable quality, our approach has much more detailed face and hands due to the disentangled representation of clothing and body.
Fig. 5. Qualitative comparison with neural head avatar (NHA) (Grassal et al., 2022) and IMAvatar (Zheng et al., 2022) for reconstruction. Our method exhibits superior performance in capturing the geometry of the face and shoulders. Moreover, it achieves exceptional rendering quality for the hair. This can be attributed to the effective utilization of a disentangled representation for separating the hair and face components in DELTA.
Fig. 6. Qualitative result on synthetic upper-body videos. The leftmost and rightmost images show the colored rendering of the learned avatars. The middle images show the hybrid rendering of the estimated upper body and hair. The results validate DELTA’s ability to accurately represent complete hair views, including both short and long hair types.
**Face and hair modeling**. We conduct an evaluation of our proposed method using four real-world videos. To assess the effectiveness of our approach, we compare it with two state-of-the-art methods, neural head avatar (NHA) [11] and IMavatar [12]. To ensure a fair comparison, we adopt the same experimental protocol, where we train NHA and IMavatar using exactly the same set of video frames and reserve the remaining frames for evaluation. To be specific, for subjects "person_0000", "person_0004" and "MV1_1810", we sample every 50 frames for evaluation, and for the subject "b0_0", we sample every 5 frames. Following neural head avatar [11], for each image, we keep the trained model and optimize per-frame parameters such as camera, pose, and expression. Consistent with prior research [11, 12, 13], we employ four image-based metrics to evaluate our approach. These metrics include pixel-wise L1 loss, peak signal-to-noise ratio (PSNR), structural similarity metric (SSIM), and the learned perceptual image patch similarity (LPIPS). We find that NHA only focuses on the face, neck, and hair regions for training and evaluation. For a fair comparison, we compute the metrics on both the whole human region and only face, neck and hair regions.
The quantitative comparison presented in Table 2 demonstrates that our method attains the highest level of quality when considering the entire human region. However, when specifically focusing on the face, hair, and neck regions, it is worth noting that NHA achieves superior results for subjects with short hair, such as "person_0000". Nevertheless, when it comes to subjects with longer hair, NHA struggles to capture both hair and face details, as exemplified in instances such as "MV1_1810" and "b0_0". In contrast, our method performs effectively across various hair types and successfully captures the entirety of the avatar, including changes in the shoulders. This capability can be attributed to the utilization of hybrid representations within our approach.
We additionally provide qualitative comparisons for novel view images and shapes in Figure 5, along with supplementary qualitative results of DELTA applied to synthetic upper-body videos from the AGORA [12] dataset in Figure 6. Our method showcases superior performance in capturing accurate face and shoulder geometry, while also delivering high-quality renderings of the hair.
**Garment reconstruction.** We compare DELTA with SMPLicit and BCNet (Jiang et al., 2020) in Fig. 7. DELTA gives better visual quality than SMPLicit and BCNet. Note that the training/optimization settings are different: they reconstruct the body and garment from a single image, while our results are learned from video. However, they require a large set of 3D scans and manually designed cloth templates for training, while we do not need any 3D supervision and capture the garment appearance as well. Figure 7 shows that DELTA reconstructs different clothing types more faithfully.
**Reposing.** For clothed body modeling, unlike previous methods that represent clothed bodies holistically, DELTA offers more fine-grained control over body pose, especially hand pose. Figure 8 shows reposing into novel poses. Similarly, utilizing an explicit shape model to represent the face region facilitates generalization across a wide range of facial expression animations; Figure 9 shows different expressions of the reconstructed avatar.
**Clothing and hair transfer.** Figures 1, 8 and 9 qualitatively demonstrate the capability of our hybrid 3D representation in enabling clothing and hair transfer between avatars. We note that the clothing and hair are able to seamlessly adapt to accommodate various body shapes. Furthermore, the trained hair and clothing models can both be seamlessly transferred to different subjects. One potential application involves utilizing an existing body estimation method like PIXIE (Feng et al., 2021) to estimate the body shape from a single image. Subsequently, our captured hair and clothing models can be applied to this subject, offering a streamlined approach for virtual try-on applications, as shown in Figure 10.
**Altering human shape.** Figure 11 highlights an additional facet of DELTA's capabilities. We show the capacity to alter human body or face shape through adjustments in SMPL-X shape parameters. Subsequently, the NeRF-based clothing or hair seamlessly adjusts to align with the modified shape.
### Ablation Study
We run different ablation experiments to show the impact of different components of our hybrid representation, and to show the impact of the pose refinements.
**Effect of representations.** DELTA consists of a NeRF that represents clothing and a mesh with vertex displacements that represents the body. Figure 12 compares against using a NeRF to holistically represent body and clothing (i.e., DELTA w/o body-clothing segmentation) and against a mesh-only representation (i.e., DELTA w/o NeRF). Our hybrid representation better estimates the face, hands, and complex clothing. Note that, unlike our hybrid representation, none of the existing body NeRF methods can transfer clothing between avatars.
**Effect of pose refinement**. Since the per-frame pose estimation is not accurate, pose refinement is important for recovering details. We also train our method without pose refinement; Figure 14 shows that pose refinement substantially improves image quality.
## 6. Discussion and Limitation
**Segmentation.** DELTA requires body and clothing/hair segmentation for training. Segmentation errors of the subject and background negatively impact the visual quality of the extracted avatar, and erroneous clothing or hair segmentation results in poor separation of the mesh-based body and the NeRF-based clothing or hair parts. Figure 13 shows an incorrect reconstruction caused by consistent clothing segmentation errors: the belt is not recognized as part of the clothing in the segmentation, which results in a wrong disentanglement between the human body and clothing. Enforcing temporal consistency by exploiting optical flow could improve the segmentation quality.
**Geometric quality.** The strength of NeRF is its visual quality and the ability to synthesize realistic images, even when the geometry is not perfect. Figure 15 and Figure 16 show examples of noisy
Table 2. Quantitative comparison of novel pose and expression synthesis on public real videos. "Whole" metrics are computed over the entire human region; "FHN" metrics over the face, hair, and neck regions only.

| Video | Model | L1 ↓ (Whole) | PSNR ↑ (Whole) | SSIM ↑ (Whole) | LPIPS ↓ (Whole) | L1 ↓ (FHN) | PSNR ↑ (FHN) | SSIM ↑ (FHN) | LPIPS ↓ (FHN) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| person_0000 | NHA (Grassal et al., 2022) | 0.094 | 12.15 | 0.843 | 0.198 | **0.012** | **24.92** | **0.920** | **0.046** |
| person_0000 | IMavatar (Zheng et al., 2022) | 0.024 | 22.55 | 0.882 | 0.177 | 0.015 | 23.70 | 0.917 | 0.089 |
| person_0000 | DELTA | **0.021** | **24.04** | **0.892** | **0.122** | 0.017 | 23.37 | 0.914 | 0.086 |
| MVI_1810 | NHA (Grassal et al., 2022) | 0.054 | 16.01 | 0.817 | 0.195 | 0.038 | 18.94 | 0.842 | 0.149 |
| MVI_1810 | IMavatar (Zheng et al., 2022) | **0.039** | 20.33 | 0.829 | 0.171 | **0.031** | 21.44 | 0.851 | 0.137 |
| MVI_1810 | DELTA | **0.039** | **21.33** | **0.835** | **0.156** | 0.034 | **22.12** | **0.852** | **0.132** |
| b0_0 | NHA (Grassal et al., 2022) | 0.062 | 15.60 | 0.874 | 0.203 | 0.042 | 16.12 | 0.896 | 0.137 |
| b0_0 | IMavatar (Zheng et al., 2022) | 0.043 | 19.61 | 0.871 | 0.188 | 0.030 | 20.13 | 0.905 | **0.097** |
| b0_0 | DELTA | **0.025** | **23.28** | **0.909** | **0.096** | **0.022** | **21.47** | **0.917** | 0.103 |
Figure 10. Virtual try-on application of DELTA. Given a single image, we can estimate the body shape using PIXIE (Feng et al., 2021). The body texture is from the PIXIE template. Both the trained hair and clothing can be subsequently applied to this subject, resulting in smooth virtual try-on applications. In this instance, the captured hair is derived from the second example in Figure 6, and the clothing is from the second example of Figure 7.
geometry despite good visual quality. In contrast, recent SDF-based methods have demonstrated good geometric reconstruction (e.g., [12]). It may be possible to leverage their results to better represent the underlying clothed shape or to regularize NeRF.
**Novel poses and views**. Although DELTA demonstrates generalization to unseen poses, artifacts may occur in extreme poses. As depicted in Figure 17, the animation results for new poses exhibit satisfactory performance for the body and face regions. However, artifacts are prevalent in the non-rigid (clothing or hair) component. Notably, for regions that have not been encountered in the training data, our model will fail to capture the desired details. For instance, in the example featuring short hair, the hair on the top of the head is always missing across all poses and views, since this region is never seen in the video. To address these limitations, potential solutions include incorporating regularization techniques during NeRF optimization or training a generative model using a diverse set of training examples encompassing different individuals and poses. These approaches have the potential to enhance the robustness and accuracy of the model when dealing with unseen regions and extreme poses.
**Pose initialization**. DELTA refines the body pose during optimization. However, it may fail if the initial pose is far from the right pose. Handling difficult poses where PIXIE [14] fails requires a more robust 3D body pose estimator.
Fig. 16: Two examples of captured hair appearance and geometry. While DELTA gives good visual quality for hair renderings, the underlying geometry of the NeRF hair is noisy.
Fig. 12: Rendered images and extracted meshes from different components of DELTA. Our hybrid representation gives a better estimated face, hand, and clothing geometry than vanilla NeRF or a mesh-based representation.
Fig. 14: Rendering results of clothed body (up) and head (bottom) w/o and w/ pose refinement. The pose refinement improves the visual quality of the reconstruction, as more texture details are reconstructed. For the face subject, please zoom in to check the difference.
Fig. 13: The wrong clothing segmentation masks result in a visible gap within the reconstructed clothing.
**Dynamics**. DELTA handles non-rigid cloth deformation with the pose-conditioned deformation model. While the global pose can account for some deformation, how to accurately model the clothing and hair dynamics as a function of body movement remains an open problem and is an important future work.
**Lighting**. As with other NeRF methods, we do not factor lighting and material properties. This results in baked-in shading and the averaging of specular reflections across frames. Factoring lighting from shape and material is a key next step to improve realism.
**Facial expressions**. DELTA uses the facial expressions estimated by PIXIE [12] which is unable to capture the full spectrum of emotions (cf. [1]). Also, we have not fully exploited neural radiance fields to capture complex changes in facial appearance, _e.g._, due to the movement of mouth opening. We believe this is a promising future direction.
## 7. Concluding Remarks
DELTA is able to automatically extract human body, clothing or hair from a monocular video. Our key novelty is a hybrid representation that combines a mesh-based body model with a neural radiance field to separately model the body and clothing/hair. This factored representation enables DELTA to transfer clothing/hair between avatars, animate the body pose of the avatars including finger articulation, alter their body shape and facial expression, and visualize them from unseen viewing directions. This property makes DELTA well suited to VR and virtual try-on applications. Finally, DELTA outperforms existing avatar extraction methods from videos in terms of visual quality and generality.
###### Acknowledgements.
We would like to sincerely thank Sergey Prokudin, Yuliang Xiu, Songyou Peng, Qianli Ma for fruitful discussions, and Peter Kults, Zhen Liu, Yandong Wen, Hongwei Yi, Xu Chen, Soubhik Sanyal, Omri Ben-Dov, Shashank Tripathi for proofreading. We also thank Betty Mohler, Sarah Danes, Natalia Marciniak, Tsvetelina Alexiadis, Claudia Gallatz, and Andres Camilo Mendoza Patino for their supports with data. This work was partially supported by the Max Planck ETH Center for Learning Systems.
**Disclosure**. MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a consultant for Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.
2309.00024 | Efficient Multi-View Graph Clustering with Local and Global Structure Preservation | Yi Wen, Suyuan Liu, Xinhang Wan, Siwei Wang, Ke Liang, Xinwang Liu, Xihong Yang, Pei Zhang | 2023-08-31T12:12:30Z | http://arxiv.org/abs/2309.00024v1

# Efficient Multi-View Graph Clustering with Local and Global Structure Preservation
###### Abstract.
Anchor-based multi-view graph clustering (AMVGC) has received abundant attention owing to its high efficiency and the capability to capture complementary structural information across multiple views. Intuitively, a high-quality anchor graph plays an essential role in the success of AMVGC. However, the existing AMVGC methods only consider single-structure information, i.e., local or global structure, which provides insufficient information for the learning task. To be specific, the over-scattered global structure leads to learned anchors failing to depict the cluster partition well. In contrast, the local structure with an improper similarity measure results in potentially inaccurate anchor assignment, ultimately leading to sub-optimal clustering performance. To tackle the issue, we propose a novel anchor-based multi-view graph clustering framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG). Specifically, a unified framework with a theoretical guarantee is designed to capture local and global information. Besides, EMVGC-LG jointly optimizes anchor construction and graph learning to enhance the clustering quality. In addition, EMVGC-LG inherits the linear complexity of existing AMVGC methods respecting the sample number, which is time-economical and scales well with the data size. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method.
multi-view graph clustering; large-scale clustering

†Corresponding author
and decomposition (Golovolov et al., 2017; Zhang et al., 2018; Zhang et al., 2019), which hinder their application on large-scale scenarios. As a result, anchor-based multi-view graph clustering (AMVGC), which selects typical anchors to denote the whole data, is proposed to enhance the algorithm's efficiency. For instance, Li et al. (Li et al., 2020) propose an innovative anchor-based multi-view clustering method by fusing the local structure and diverse features. Shi et al. (Shi et al., 2020) generate the indicator matrix by an integrated framework that unifies the anchor graph learning and structure rotation.
Two basic structures, i.e., global and local structures, both play crucial roles in numerous AMVGC studies. As shown in Figure 1 (right), the global structure, which captures the relationship between samples and all anchors, usually has a dense correspondence matrix. Meanwhile, the matrix with local structure is always blocky (with an appropriate permutation) owing to only the relationship between samples and one anchor with high similarity being considered. However, the over-scattered global structure leads to learned anchors failing to depict the cluster partition well, while the local structure with an improper similarity measure results in potentially inaccurate anchor assignment. A natural question is whether combining the two structures leads to a better method (Zhou et al., 2019). The effectiveness of similar thought is demonstrated by other studies in vision (Zhou et al., 2019) and language (Zhou et al., 2019; Zhang et al., 2019) tasks. Nonetheless, it is non-trivial to incorporate global and local structures in the MVC field due to the objective equation inconsistency. One pioneer work, (Chen et al., 2019) merged global and local structures to learn the similarity of the original data on the kernel space in a single view case. However, their high time expenditure and single-view scenario limit the scalability of the method. To the best of our knowledge, no generalized framework with global and local structure preservation has been proposed in the field of AMVGC. Besides, existing anchor selection usually uses a heuristic sampling strategy and is separated from the graph learning phase, resulting in the clustering performance dependent on the quality of anchor initialization. For instance, Kang et al. (Kang et al., 2019) generate fixed anchors in each view by k-means and average the generated anchor graphs into the fusion graph. Because of the randomness and inflexibility of the k-means and sampling strategy, their clustering performance usually exhibits poor stability.
To tackle these problems, we design a novel anchor-based multi-view graph clustering framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG). Specifically, a unified framework with a theoretical guarantee is designed to capture local and global information. Besides, EMVGC-LG jointly optimizes anchor construction and graph learning to enhance the clustering quality. Moreover, we theoretically prove that the proposed paradigm with a global structure can well approximate the local information. In addition, EMVGC-LG inherits the linear complexity of existing AMVGC methods respecting the sample number, which is time-economical and scales well with the data size. Meanwhile, a two-step iterative and convergent optimization algorithm is designed in this paper. We summarize the contributions of this paper as follows:
* We design an anchor graph learning framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG). With the proven properties, the proposed anchor graph paradigm can not only capture the global structure between data but also well approximate the local structure.
* In contrast to existing sampling or fixed anchors, the anchor learning and graph fusion processes are jointly optimized in our framework to enhance the clustering quality.
* Extensive experiments on ten benchmark datasets demonstrate the effectiveness and efficiency of our proposed method.
## 2. Related Work
In this section, we present recent research regarding our work, comprising multi-view graph clustering (MVGC) and anchor-based multi-view graph clustering (AMVGC).
### Multi-View Graph Clustering
With the given dataset \(\{\mathbf{X}^{(p)}\}_{p=1}^{o}\in\mathbb{R}^{d_{p}\times n}\) consisting of \(n\) samples from \(o\) views, the representative multi-view graph clustering (MVGC) model can express as follows:
\[\min_{\mathbf{S}^{(p)},\mathbf{S}}\sum_{p=1}^{o}\left\|\mathbf{X}^{(p)}-\mathbf{X}^{(p)}\mathbf{S}^{(p)}\right\|_{F}^{2}+\mu\mathcal{L}\left(\mathbf{S}^{(p)},\mathbf{S}\right),\] \[\text{s.t.}\ \left\{\begin{array}{l}\text{diag}\left(\mathbf{S}^{(p)}\right)=\mathbf{0},\ \mathbf{S}^{(p)\top}\mathbf{1}_{n}=\mathbf{1}_{n},\ \mathbf{S}^{(p)}\geq\mathbf{0},\\ \text{diag}(\mathbf{S})=\mathbf{0},\ \mathbf{S}^{\top}\mathbf{1}_{n}=\mathbf{1}_{n},\ \mathbf{S}\geq\mathbf{0},\end{array}\right.\]
where the first term denotes the data self-representation matrix learning module, and the second term represents the structure integration process performed on \(\{\mathbf{S}^{(p)}\}_{p=1}^{o}\) to generate a common \(\mathbf{S}\); \(\mu\) represents a trade-off parameter, and \(\mathcal{L}\) is the regularization term. After obtaining the fused global graph \(\mathbf{S}\), we need to obtain the spectral embedding \(\mathbf{F}\in\mathbb{R}^{n\times k}\):
\[\min_{\mathbf{F}}\operatorname{Tr}\left(\mathbf{F}^{\top}\mathbf{L}\mathbf{F} \right),s.t.\mathbf{F}^{\top}\mathbf{F}=\mathbf{I}_{k}, \tag{1}\]
where \(\mathbf{L}\) denotes the graph Laplacian, calculated as \(\mathbf{D}-\mathbf{W}\). \(\mathbf{D}\) is a diagonal matrix whose elements are defined as \(d_{ii}=\sum_{j=1}^{n}s_{ij}\), and \(\mathbf{W}\) is the symmetric similarity matrix, calculated as \(\frac{\mathbf{S}+\mathbf{S}^{\top}}{2}\). \(\mathbf{F}\) is the spectral representation (Han et al., 2017; Li et al., 2018), and the final clustering labels are acquired by running the \(k\)-means algorithm on \(\mathbf{F}\).

Figure 1. Two types of the anchor graph: local (left) and global (right). The two types of information often have strong patterns: blocky and dense. \(\mathcal{G}_{LA}\) and \(\mathcal{M}_{LA}\) denote the visualization and matrix of the local anchor graph. \(\mathcal{G}_{GA}\) and \(\mathcal{M}_{GA}\) denote the visualization and matrix of the global anchor graph.
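A minimal NumPy/scikit-learn sketch of the spectral step in Eq. (1) is given below; it uses the unnormalized Laplacian, which is one common choice, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering_from_graph(S, num_clusters):
    """S: (n, n) fused similarity graph with nonnegative entries."""
    W = 0.5 * (S + S.T)                       # symmetric similarity matrix
    D = np.diag(W.sum(axis=1))                # degree matrix
    L = D - W                                 # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    F = eigvecs[:, :num_clusters]             # spectral embedding minimizing Tr(F^T L F)
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(F)
```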
Numerous methods have been proposed on the basis of this framework by imposing different constraints (Zhou et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019) or exploring various types of regularization terms (Zhou et al., 2017; Li et al., 2019; Li et al., 2019). Nie et al. (2019) generate the optimal Laplacian matrices by the linear combinations of Laplacian basis matrices constructed from multi-view samples. Gao et al. (Gao et al., 2019) learn independent representations from each view to capture the diverse information and use a consensus clustering structure to ensure intra-view consistency. Zhang et al. (Zhang et al., 2019) take the view-specific subspace representation as a tensor and explore the intersection information from the diverse views by using low-rank constraints.
Nevertheless, such a paradigm cannot prevent the whole graph construction and consequent spectral decomposition process (Zhou et al., 2017; Li et al., 2019). The time overhead of the methods is at least \(\mathcal{O}\left(n^{3}\right)\) and the space complexity is at least \(\mathcal{O}\left(n^{2}\right)\), largely hindering the large-scale applications.
### Anchor-based Multi-View Graph Clustering
In recent years, anchor-based multi-view graph clustering (AMVGC) has received abundant attention owing to its high efficiency. By constructing the relationship between the selected representative anchors and the samples, i.e., the anchor graph \(\mathbf{Z}^{(p)}\), to recover the full graph, the complexity of AMVGC can be efficiently reduced from \(\mathcal{O}(n^{3})\) to \(\mathcal{O}(nm)\) (Zhou et al., 2017; Li et al., 2019).
To our knowledge, a data point within a subspace could be calculated as a linear combination of other data from the same subspace, which is known as the self-expression proposition (Zhou et al., 2017; Li et al., 2019). Therefore, AMVGC methods can count on the self-expression property for the purpose of constructing a reliable anchor graph, which we also refer to as the global structure since it utilizes the whole data information. The classical anchor-based multi-view graph clustering with global structure can be shown as follows:
\[\begin{split}&\min_{\mathbf{Z}}\ \sum_{p=1}^{o}\left\|\mathbf{X}^{(p)}-\mathbf{A}^{(p)}\mathbf{Z}\right\|_{F}^{2}+\mu\left\|\mathbf{Z}\right\|_{F}^{2},\\ &\text{s.t.}\ \mathbf{Z}^{\top}\mathbf{1}_{m}=\mathbf{1}_{n},\ \mathbf{Z}\geq\mathbf{0},\end{split} \tag{2}\]
where \(\mathbf{A}^{(p)}\) denotes the anchor matrix from the \(p\)-th view, and \(\mathbf{Z}\) denotes the common anchor graph. The final clustering indicator matrix can be calculated from the SVD of \(\mathbf{Z}\) (Zhou et al., 2017; Li et al., 2019; Li et al., 2019). Consequently, the computational and space expenditure is reduced from \(\mathcal{O}(n^{3})\) to \(\mathcal{O}(nm)\), where \(n\), \(m\), and \(o\) denote the number of samples, anchors, and views, respectively.
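As an illustration, cluster labels can be obtained from a consensus anchor graph \(\mathbf{Z}\) roughly as follows; the degree normalization used here is a common convention and may differ across AMVGC methods.

```python
import numpy as np
from sklearn.cluster import KMeans

def labels_from_anchor_graph(Z, num_clusters):
    """Z: (m, n) anchor graph whose columns sum to one."""
    d = Z.sum(axis=1)                          # anchor degrees (Z 1)
    Z_norm = Z / np.sqrt(d + 1e-12)[:, None]   # degree-normalized anchor graph
    # Right singular vectors of Z give a spectral embedding of the n samples.
    _, _, Vt = np.linalg.svd(Z_norm, full_matrices=False)
    embedding = Vt[:num_clusters].T            # (n, k)
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(embedding)
```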
Based on a specific similarity measure, local structure only considers the relationship between samples and one anchor with high similarity. The significance of preserving local manifold structure has been well recognized in non-linear model and cluster analysis (Li et al., 2019; Li et al., 2019) since neighboring samples usually maintain consistent label information. A classical anchor-based multi-view graph clustering with local structure can be calculated by
\[\begin{split}&\min_{\mathbf{Z}}\sum_{p=1}^{o}\sum_{i=1}^{n}\sum_{j=1}^{m}\left\|\mathbf{x}_{i}^{(p)}-\mathbf{a}_{j}^{(p)}\right\|_{2}^{2}\mathbf{z}_{ji}+\mu_{1}\sum_{i=1}^{n}\sum_{j=1}^{m}\mathbf{z}_{ji}^{2}\,,\\ &\text{s.t.}\ \mathbf{z}_{i}\geq 0,\ \mathbf{z}_{i}^{\top}\mathbf{1}=1\,,\end{split} \tag{3}\]
where \(\mathbf{x}_{i}^{(p)}\) is the \(i\)-th sample from the \(p\)-th view, \(\mathbf{a}_{j}^{(p)}\) is the \(j\)-th anchor from the \(p\)-th view, \(\mathbf{z}_{i}\) denotes the \(i\)-th column of \(\mathbf{Z}\), and \(\mu_{1}\) is the balance parameter. The constraints \(\mathbf{z}_{i}\geq 0\) and \(\mathbf{z}_{i}^{\top}\mathbf{1}=1\) guarantee the probabilistic properties of \(\mathbf{z}_{i}\).
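With fixed anchors, each column of Eq. (3) is a small simplex-constrained quadratic program whose solution is the Euclidean projection of \(-\mathbf{d}_{i}/(2\mu_{1})\) onto the simplex, where \(\mathbf{d}_{i}\) stacks the squared sample-anchor distances (summed over views). A NumPy sketch for a single view is given below; the names are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {z : z >= 0, sum(z) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def local_anchor_graph(X, A, mu1=1.0):
    """X: (d, n) samples of one view; A: (d, m) anchors; returns the local graph Z: (m, n).
    For multiple views, replace d2 by the sum of the per-view squared distances."""
    d2 = ((X[:, None, :] - A[:, :, None]) ** 2).sum(axis=0)     # (m, n) squared distances
    Z = np.stack([project_simplex(-d2[:, i] / (2.0 * mu1)) for i in range(X.shape[1])], axis=1)
    return Z
```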
## 3. Method
### Problem Formulation
Intuitively, a high-quality graph plays an important role in the success of graph-based clustering (Zhou et al., 2017; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Most multi-view graph clustering(MVGC) methods usually adopt a self-representation strategy to characterize the samples. Although the global structure is well-explored, the local structure is ignored, which provides insufficient information for the learning task and ultimately leads to sub-optimal clustering performance (Li et al., 2019; Li et al., 2019; Li et al., 2019). Several attempts have been made to address the issues (Li et al., 2019). For instance, He et al. (He et al., 2019) merged global and local structures to generate the similarity of the original data in the kernel space. Wen et al. (Wen et al., 2019) uses low-rank constraints to produce adaptive graphs for the purpose of one-step clustering. However, their high time and space expenditure hinder the application of the method.
In this paper, we employ an anchor strategy, which selects representative samples to capture the manifold structure. Besides, we adopt the view-independent anchor and generate a common anchor graph to efficiently excavate the complementary and consistent information of multiple views. With the view-specific anchor \(\mathbf{A}^{(p)}\in\mathbb{R}^{d_{p}\times m}\) and the consistent anchor graph \(\mathbf{Z}\in\mathbb{R}^{m\times n}\), the classical anchor-based multi-view graph clustering with global structure Eq.(2) can be formulated into the following equivalence problem:
\[\begin{split}&\min_{\mathbf{Z}}\sum_{p=1}^{o}\text{tr}\left( \mathbf{Z}^{\top}\mathbf{A}^{(p)\top}\mathbf{A}^{(p)}\mathbf{Z}\right)-2 \sum_{p=1}^{o}\text{tr}\left(\mathbf{X}^{(p)\top}\mathbf{A}^{(p)}\mathbf{Z} \right)+\mu\,\text{tr}\left(\mathbf{Z}^{\top}\mathbf{Z}\right),\\ & s.t.\ \ \ \mathbf{Z}\geq 0,\mathbf{Z}^{\top}\mathbf{1}= \mathbf{1},\end{split} \tag{4}\]
where the \(\mu\) is balanced hyperparameter of regularization term.
For local structure preserving, we summarize the paradigm of the traditional AMVGC with local structure and introduce the terms \(\text{tr}(\mathbf{A}^{(p)}\text{ diag}(\mathbf{Z}\mathbf{1})\mathbf{A}^{(p) \top})\), which can be mathematically derived from numerous methods, including BIMVC(Li et al., 2019), MVASM(Li et al., 2019), and MGLSMC(Li et al., 2019). With the local term, our objective equation becomes:
\[\begin{split}\min_{\mathbf{Z}}&\sum_{p=1}^{o}\operatorname{tr}\left(\mathbf{Z}^{\top}\mathbf{A}^{(p)\top}\mathbf{A}^{(p)}\mathbf{Z}\right)-2\sum_{p=1}^{o}\operatorname{tr}\left(\mathbf{X}^{(p)\top}\mathbf{A}^{(p)}\mathbf{Z}\right)\\ &+\lambda\sum_{p=1}^{o}\operatorname{tr}\left(\mathbf{A}^{(p)}\operatorname{diag}(\mathbf{Z}\mathbf{1})\mathbf{A}^{(p)\top}\right)+\mu\operatorname{tr}\left(\mathbf{Z}^{\top}\mathbf{Z}\right),\\ &\text{s.t.}\ \mathbf{Z}\geq 0,\ \mathbf{Z}^{\top}\mathbf{1}=\mathbf{1}. \end{split} \tag{5}\]
In this formulation, the anchors \(\{\mathbf{A}^{(p)}\}_{p=1}^{o}\) are typically obtained beforehand by a heuristic sampling strategy, e.g., \(k\)-means centers or randomly selected samples, and then kept fixed as anchors. However, the anchor construction, as well as the structure learning, are separated from each other, which could restrict the clustering capability. Unlike traditional techniques, we learn anchors automatically instead of sampling in this paper. Finally, we can define the optimization for EMVGC-LG as follows:
\[\begin{split}\min_{\mathbf{A}^{(p)},\mathbf{Z}}&\sum_{p=1}^{o}\operatorname{tr}\left(\mathbf{Z}^{\top}\mathbf{A}^{(p)\top}\mathbf{A}^{(p)}\mathbf{Z}\right)-2\sum_{p=1}^{o}\operatorname{tr}\left(\mathbf{X}^{(p)\top}\mathbf{A}^{(p)}\mathbf{Z}\right)\\ &+\lambda\sum_{p=1}^{o}\operatorname{tr}\left(\mathbf{A}^{(p)}\operatorname{diag}(\mathbf{Z}\mathbf{1})\mathbf{A}^{(p)\top}\right)+\mu\operatorname{tr}\left(\mathbf{Z}^{\top}\mathbf{Z}\right),\\ &\text{s.t.}\ \mathbf{Z}\geq 0,\ \mathbf{Z}^{\top}\mathbf{1}=\mathbf{1}. \end{split} \tag{6}\]
**Proposition 1**.: _By setting \(\lambda\in(0,1]\,,\mu=\lambda\mu_{1}\), minimizing Eq. (3) can be approximated by minimizing Eq. (6)._
Proof.: By adding the term \(\sum_{p=1}^{o}\operatorname{tr}(\mathbf{X}^{(p)\top}\mathbf{X}^{(p)})\), which does not depend on the optimized variables, Eq. (6) can be transformed into the following equivalent problem:

\[\min_{\mathbf{A}^{(p)},\mathbf{Z}}\sum_{p=1}^{o}\left\|\mathbf{X}^{(p)}-\mathbf{A}^{(p)}\mathbf{Z}\right\|_{F}^{2}+\lambda\sum_{p=1}^{o}\operatorname{tr}\left(\mathbf{A}^{(p)}\operatorname{diag}(\mathbf{Z}\mathbf{1})\mathbf{A}^{(p)\top}\right)+\mu\operatorname{tr}\left(\mathbf{Z}^{\top}\mathbf{Z}\right),\ \text{s.t.}\ \mathbf{Z}\geq 0,\ \mathbf{Z}^{\top}\mathbf{1}=\mathbf{1}. \tag{7}\]

Similarly, using the constraint \(\mathbf{Z}^{\top}\mathbf{1}=\mathbf{1}\), the objective of Eq. (3) can be rewritten as \(\sum_{p=1}^{o}\operatorname{tr}(\mathbf{X}^{(p)\top}\mathbf{X}^{(p)})-2\sum_{p=1}^{o}\operatorname{tr}(\mathbf{X}^{(p)\top}\mathbf{A}^{(p)}\mathbf{Z})+\sum_{p=1}^{o}\operatorname{tr}(\mathbf{A}^{(p)}\operatorname{diag}(\mathbf{Z}\mathbf{1})\mathbf{A}^{(p)\top})+\mu_{1}\operatorname{tr}(\mathbf{Z}^{\top}\mathbf{Z})\). With \(\mu=\lambda\mu_{1}\) and \(\lambda\in(0,1]\), subtracting \(\lambda\) times this expression from Eq. (7) leaves \(\sum_{p=1}^{o}\big[(1-\lambda)\|\mathbf{X}^{(p)}-\mathbf{A}^{(p)}\mathbf{Z}\|_{F}^{2}+\lambda\|\mathbf{A}^{(p)}\mathbf{Z}\|_{F}^{2}\big]\geq 0\). Hence Eq. (7), and thus Eq. (6) up to an additive constant, upper bounds \(\lambda\) times the objective of Eq. (3), so minimizing Eq. (6) approximately minimizes Eq. (3). This completes the proof.
### Discussions
#### 3.3.1. Convergence
As the iterations proceed, the two blocks of variables in the above optimization procedure are updated alternately. Since each subproblem is solved to its global optimum, the value of the EMVGC-LG objective function monotonically decreases and eventually converges. Moreover, since the objective function is bounded below by \(-\sum_{p=1}^{v}\text{tr}(\mathbf{X}^{(p)\top}\mathbf{X}^{(p)})\) (by Proposition 1), the proposed EMVGC-LG is guaranteed to converge to a local optimum.
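To illustrate the monotone-decrease argument, the Python sketch below runs a generic two-block alternating minimization in which each subproblem is solved exactly; the objective here is a plain least-squares factorization (the anchor-graph terms of the actual EMVGC-LG objective are omitted), and all sizes and data are illustrative.

```python
import numpy as np

# Generic two-block alternating minimization: each subproblem is solved to its
# global optimum, so the objective ||X - A Z||_F^2 cannot increase. This is a
# stand-in for the EMVGC-LG updates, not the actual objective.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))      # d x n data matrix
m = 5                               # number of anchors / factors
A = rng.normal(size=(50, m))
Z = rng.normal(size=(m, 200))

objective = []
for _ in range(20):
    A = X @ Z.T @ np.linalg.pinv(Z @ Z.T)        # update A with Z fixed
    Z = np.linalg.pinv(A.T @ A) @ A.T @ X        # update Z with A fixed
    objective.append(np.linalg.norm(X - A @ Z) ** 2)

# Monotone non-increase of the objective (up to floating-point tolerance).
assert all(b <= a + 1e-6 for a, b in zip(objective, objective[1:]))
print(objective[0], objective[-1])
```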
#### 3.3.2. Space Complexity
In this paper, the primary memory overhead of our approach comes from the matrices \(\mathbf{Z}\in\mathbb{R}^{m\times n}\) and \(\mathbf{A}^{(p)}\in\mathbb{R}^{d_{p}\times m}\). As a result, the space complexity of EMVGC-LG is \(\mathcal{O}(m(n+d))\), where \(d=\sum_{p=1}^{v}d_{p}\). Since \(m\ll n\) and \(d\ll n\) in our settings, the space complexity of EMVGC-LG is \(\mathcal{O}(n)\).
#### 3.3.3. Time Complexity
The time complexity of EMVGC-LG consists of the two optimization steps described above. Updating \(\{\mathbf{A}^{(p)}\}_{p=1}^{v}\) costs \(\mathcal{O}\left((nmd+m^{3})v\right)\), and obtaining \(\mathbf{Z}\) analytically costs \(\mathcal{O}(nm^{3})\) over all columns. Therefore, the total time cost of the optimization process is \(\mathcal{O}\left(n(mdv+m^{3})+m^{3}v\right)\). Consequently, the computational complexity of EMVGC-LG is \(\mathcal{O}(n)\), i.e., linear in the number of samples.
## 4. Experiment
In this section, we perform extensive experiments to assess the proposed EMVGC-LG. Concretely, we report the clustering quality on synthetic and real datasets, the evolution of the objective values, the running time, the sensitivity to the parameters, and an ablation study. Our code is available at [https://github.com/wy1019/EMVGC-LG](https://github.com/wy1019/EMVGC-LG).
### Synthetic Datasets
To visualize the different influences of the local and global structure, we conducted experiments on a synthetic two-view two-dimensional dataset containing 500 samples drawn from five Gaussian clusters. From Figure 2, we observe that incorporating both local and global information effectively enhances the clustering performance: compared with using a single structure, our strategy improves performance by 11.85% and 26.08%, respectively, and yields a clearer partition of the clusters.
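A minimal sketch of how such a two-view synthetic dataset could be generated is given below; the cluster means, unit variances, and the equal 100-samples-per-cluster split are assumptions for illustration, not the exact settings behind Figure 2.

```python
import numpy as np

def make_two_view_blobs(n_per_cluster=100, n_clusters=5, seed=0):
    """Draw 2-D Gaussian clusters and return two views of the same samples."""
    rng = np.random.default_rng(seed)
    labels, view1, view2 = [], [], []
    for k in range(n_clusters):
        center1 = rng.uniform(-10, 10, size=2)   # cluster mean in view 1
        center2 = rng.uniform(-10, 10, size=2)   # cluster mean in view 2
        view1.append(rng.normal(center1, 1.0, size=(n_per_cluster, 2)))
        view2.append(rng.normal(center2, 1.0, size=(n_per_cluster, 2)))
        labels.extend([k] * n_per_cluster)
    return np.vstack(view1), np.vstack(view2), np.array(labels)

X1, X2, y = make_two_view_blobs()      # 500 samples, 5 clusters, 2 views
print(X1.shape, X2.shape, y.shape)     # (500, 2) (500, 2) (500,)
```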
### Real-world Datasets
Ten widely used, publicly available datasets were employed to assess the validity of the proposed algorithm, namely Reuters12, Flower17, Mfeat, VGGFace50, Caltech256, YouTubeFace10, Cifar10, Cifar100, YouTubeFace20, and YouTubeFace50. Their detailed statistics are listed in Table 1. Specifically, Reuters12 is a subset of Reuters containing 1200 documents described in five languages: English, French, German, Italian, and Spanish. Flower17 is a flower dataset with 80 images per class. Mfeat contains 200 images for each of the handwritten digits 0 to 9, where each sample is represented by six feature sets, i.e., 216-dimensional FAC, 76-dimensional FOU, 64-dimensional KAR, 6-dimensional MOR, 240-dimensional Pix, and 47-dimensional ZER. VGGFace50 is derived from VGGFace. Caltech256 contains 30607 images spanning 257 object categories. For Cifar10 and Cifar100, features were extracted from 60,000 tiny color images of 10 and 100 categories, respectively, by ResNet18, ResNet50, and DenseNet121 to form four views. YouTubeFace10, YouTubeFace20, and YouTubeFace50 are face video datasets drawn from YouTube.
Footnote 1: [https://archive.ics.uci.edu/ml/datasets/reuters-21578-text-categorization+collection](https://archive.ics.uci.edu/ml/datasets/reuters-21578-text-categorization+collection)
Footnote 2: [https://www.robots.ox.ac.uk/vgg/data/flowers/17/](https://www.robots.ox.ac.uk/vgg/data/flowers/17/)
Footnote 3: [http://www.svclc.ucsd.edu/projects/crossmodal/](http://www.svclc.ucsd.edu/projects/crossmodal/)
### Compared Methods
Along with our proposed EMVGC-LG, we run ten state-of-the-art multi-view graph clustering methods for comparison, including Multi-view k-means Clustering on Big Data (RMKM) (Kumar et al., 2017), Parameter-free Auto-weighted Multiple Graph Learning (AMGL) (Zhou et al., 2018), Flexible Multi-View Representation Learning for Subspace Clustering (FMR) (Zhou et al., 2018), Partition Level Multi-view Subspace Clustering (PMSC) (Zhou et al., 2018), Binary Multi-View Clustering (BMVC) (Zhou et al., 2018), Large-scale Multi-view Subspace Clustering in Linear Time (LMVSC) (Zhou et al., 2018), Scalable Multiview Subspace Clustering with Unified Anchors (SMVSC) (Zhou et al., 2018), Multi-view Clustering: a Scalable and Parameter-free Bipartite Graph Fusion Method (SFMC) (Zhou et al., 2018), Fast Multiview Clustering via Nonnegative and Orthogonal Factorization (FMCNOF) (Zhou et al., 2018), and Fast Parameter-free Multiview Subspace Clustering with Consensus Anchor Guidance (FPMVS-CAG) (Zhou et al., 2018).
For all the aforementioned algorithms, we set their parameters within the recommended ranges. For the proposed method, we tuned \(\lambda\) over \([10^{-3},10^{-2},10^{-1},1]\), \(\mu\) over \([0,10^{-4},1,10^{4}]\), and the number of anchors over [k, 2k, 5k] with a grid search scheme. In addition, we repeated each experiment 10 times and report the mean performance and standard deviation. To assess the clustering performance, we use three widely adopted metrics: Accuracy (ACC), Normalized Mutual Information (NMI), and Fscore. All experiments were conducted on a desktop computer with an Intel Core i9-10900X CPU and 64GB RAM, using MATLAB 2020b (64-bit).
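As a rough sketch of the evaluation protocol, the Python snippet below computes ACC via Hungarian matching of predicted clusters to true labels and NMI with scikit-learn, and lists the search grids quoted above; these are standard metric definitions rather than the exact MATLAB code used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Best-match accuracy: map predicted clusters to true labels (Hungarian)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    counts = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    rows, cols = linear_sum_assignment(-counts)   # maximize matched counts
    return counts[rows, cols].sum() / y_true.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])             # perfect up to relabelling
print(clustering_accuracy(y_true, y_pred),        # 1.0
      normalized_mutual_info_score(y_true, y_pred))

# Grids used in the search (anchor number m expressed as multiples of k).
lambdas = [1e-3, 1e-2, 1e-1, 1]
mus = [0, 1e-4, 1, 1e4]
anchor_multipliers = [1, 2, 5]
```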
### Experimental Results
Table 2 indicates the clustering performance on ten datasets. The best results are marked in red, and the second-best results are
\begin{table}
\begin{tabular}{c c c c} \hline \hline Datasets & Samples & Views & Clusters \\ \hline Reuters12 & 1200 & 5 & 6 \\ Flower17 & 1360 & 7 & 17 \\ Mfeat & 2000 & 2 & 10 \\ VGGFace50 & 16936 & 4 & 50 \\ Caltech256 & 30607 & 4 & 257 \\ YouTubeFace10 & 38654 & 4 & 10 \\ Cifar10 & 60000 & 4 & 9 \\ Cifar100 & 60000 & 4 & 99 \\ YouTubeFace20 & 63896 & 4 & 20 \\ YouTubeFace50 & 126054 & 4 & 50 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Benchmark datasets
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Dataset & RMKM & AMGL & FMR & PMSC & BMVC & LMVSC & SMVSC & SFMC & FMCNOF & PPMVS-CAG & Proposed \\ \hline \multicolumn{10}{c}{ACC (\%)} \\ \hline Reuters12 & 44.00\(\pm\)0.00 & 18.12\(\pm\)0.42 & 52.35\(\pm\)1.90 & 27.88\(\pm\)0.75 & 50.33\(\pm\)0.00 & 47.92\(\pm\)2.98 & 55.69\(\pm\)3.06 & 17.08\(\pm\)0.00 & 28.58\(\pm\)0.00 & 45.82\(\pm\)3.29 & 60.74\(\pm\)2.01 \\ Flower17 & 23.24\(\pm\)0.00 & 9.70\(\pm\)1.53 & 33.43\(\pm\)1.75 & 20.82\(\pm\)0.74 & 26.99\(\pm\)**0.00** & 37.12\(\pm\)1.86 & 27.13\(\pm\)0.84 & 7.57\(\pm\)0.00 & 17.43\(\pm\)0.00 & 25.99\(\pm\)1.83 & 53.28\(\pm\)2.51 \\ Mfeat & 80.80\(\pm\)0.00 & 71.42\(\pm\)5.42 & 64.53\(\pm\)2.28 & 66.41\(\pm\)4.40 & 65.80\(\pm\)0.00 & 82.86\(\pm\)6.95 & 65.57\(\pm\)3.99 & 56.90\(\pm\)0.00 & 56.95\(\pm\)0.00 & 65.11\(\pm\)4.18 & 88.53\(\pm\)4.87 \\ VGGFace50 & 8.23\(\pm\)0.00 & 2.95\(\pm\)0.35 & O/M & O/M & 10.30\(\pm\)0.00 & 10.56\(\pm\)**2.06** & 13.56\(\pm\)0.60 & 3.64\(\pm\)0.00 & 5.51\(\pm\)0.00 & 12.06\(\pm\)0.36 & 15.29\(\pm\)0.74 \\ Caltech256 & 9.87\(\pm\)0.00 & O/M & O/M & O/M & O/M & 8.63\(\pm\)0.00 & 9.57\(\pm\)0.17 & 10.54\(\pm\)0.15 & O/M & 2.70\(\pm\)0.00 & 8.78\(\pm\)0.07 & 11.70\(\pm\)3.06 \\ YouTubeFace10 & 7.48\(\pm\)0.00 & O/M & O/M & O/M & 55.85\(\pm\)0.70 & 7.44\(\pm\)5.30 & 72.93\(\pm\)9.36 & 55.80\(\pm\)0.00 & 43.42\(\pm\)0.00 & 67.95\(\pm\)5.70 & 79.55\(\pm\)5.47 \\ CIFAR10 & O/M & O/M & O/M & O/M & O/M & 27.81\(\pm\)0.00 & 29.02\(\pm\)0.81 & 29.11\(\pm\)1.35 & 10.02\(\pm\)0.00 & 20.53\(\pm\)0.00 & 26.89\(\pm\)0.71 & 31.14\(\pm\)1.07 \\ CIFAR100 & O/M & O/M & O/M & O/M & 8.32\(\pm\)0.00 & 9.53\(\pm\)0.15 & 8.34\(\pm\)0.17 & 1.18\(\pm\)0.00 & 3.66\(\pm\)0.00 & 7.29\(\pm\)0.11 & 10.96\(\pm\)0.35 \\ YouTubeFace20 & O/M & O/M & O/M & O/M & O/M & 57.39\(\pm\)**0.00** & 67.26\(\pm\)3.53 & 67.13\(\pm\)**2.40** & O/M & 38.61\(\pm\)0.00 & 63.08\(\pm\)3.79 & 72.79\(\pm\)2.73 \\ YouTubeFace50 & O/M & O/M & O/M & O/M & 66.00\(\pm\)**0.00** & 68.32\(\pm\)2.45 & O/M & O/M & O/M & 21.66\(\pm\)0.00 & 64.24\(\pm\)2.97 & 70.52\(\pm\)2.62 \\ \hline \multicolumn{10}{c}{NMI (\%)} \\ \hline Reuters12 & 26.83\(\pm\)0.00 & 3.46\(\pm\)0.76 & 31.43\(\pm\)0.84 & 14.76\(\pm\)0.58 & 27.25\(\pm\)0.00 & 27.86\(\pm\)1.49 & 32.46\(\pm\)1.72 & 12.61\(\pm\)0.00 & 7.15\(\pm\)0.00 & 24.59\(\pm\)3.20 & 35.80\(\pm\)1.03 \\ Flower17 & 22.07\(\pm\)0.00 & 10.25\(\pm\)0.41 & 30.65\(\pm\)0.91 & 19.13\(\pm\)0.48 & 25.62\(\pm\)0.00 & 35.37\(\pm\)1.10 & 25.78\(\pm\)**0.76** & 7.87\(\pm\)0.00 & 14.68\(\pm\)0.00 & 25.81\(\pm\)1.59 & 51.77\(\pm\)1.72 \\ Mfeat & 82.28\(\pm\)0.00 & 77.12\(\pm\)2.19 & 66.21\(\pm\)0.73 & 63.29\(\pm\)1.63 & 59.39\(\pm\)0.90 & 80.26\(\pm\)4.06 & 7.95\(\pm\)9.92\(\pm\)1.21 & 68.15\(\pm\)0.00 & 55.47\(\pm\)0.00 & 57.77\(\pm\)2.73 & 82.90\(\pm\)2.78 \\ VGGFace50 & 9.66\(\pm\)0.00 & 2.044\(\pm\)0.50 & O/M & O/M & 13.48\(\pm\)0.00 & 12.64\(\pm\)0.28 & 12.61\(\pm\)0.9 & 1.63\(\pm\)0.00 & 4.74\(\pm\)0.00 & 11.47\(\pm\)0.55 & 18.71\(\pm\)0.76 \\ Caltech256 & 31.01\(\pm\)0.00 & O/M & O/M & O/M & 31.83\(\pm\)0.00 & 31.96\(\pm\)0.11 & 28.27\(\pm\)0.24 & O/M & 1.60\(\pm\)0.00 & 22.97\(\pm\)0.21 & 34.78\(\pm\)0.17 \\ YouTubeFace100 & 78.83\(\pm\)0.00 & O/M & O/M & O/M & 54.66\(\pm\)0.00 & 77.74\(\pm\)2.03 & 78.57\(\pm\)2.80 & 77.46\(\pm\)0.00 & 39.15\(\pm\)0.00 & 76.11\(\pm\)3.06 & 82.21\(\pm\)2.97 \\ CIFAR10 & O/M & O/M & O/M & O/M & 17.90\(\pm\)0.00 & 17.84\(\pm\)0.53 & 16.00\(\pm\)0.99 & 0.16\(\pm\)0.00 & 10.33\(\pm\)0.00 & 15.45\(\pm\)0.99 & 18.22\(\pm\)0.66 \\ CIFAR100 & O/M & O/M & O/M & 
O/M & O/M & 15.05\(\pm\)**0.00** & 15.40\(\pm\)0.18 & 14.40\(\pm\)0.20 & 0.53\(\pm\)0.00 & 7.04\(\pm\)0.00 & 13.62\(\pm\)0.16 & 18.42\(\pm\)0.33 \\ YouTubeFace20 & O/M & O/M & O/M & O/M & O/M & 70.65\(\pm\)**0.00** & 76.78\(\pm\)**1.34** & 78.36\(\pm\)2.39 & O/M & 45.45\(\pm\)0.00 & 74.30\(\pm\)1.95 & 80.57\(\pm\)1.60 \\ YouTubeFace50 & O/M & O/M & O/M & O/M & O/M & 81.90\(\pm\)**0.00** & 82.43\(\pm\)0.78 & O/M & O/M & 43.03\(\pm\)0.00 & 82.08\(\pm\)1.07 & 84.17\(\pm\)0.83 \\ \hline \multicolumn{10}{c}{Fscore (\%)} \\ \hline Reuters12 & 32.18\(\pm\)0.00 &
marked in blue. "O/M" represents results that are unavailable due to time-out or out-of-memory errors. According to the results, we have the following observations:
1. Compared with existing MVC methods, our proposed algorithm achieves the best or second-best performance on all ten datasets. In comparison with the second-best method, our EMVGC-LG gains 5.05%, 16.16%, 5.67%, 1.93%, 1.16%, 4.67%, 2.03%, 1.43%, 5.53% and 2.20% in terms of ACC, which demonstrates the benefit of the local and global structure fusion strategy. On the other metrics, EMVGC-LG also achieves desirable performance.
2. Classical MVGC methods, i.e., RMKM, AMGL, FMR, and PMSC, encounter scalability problems on large-scale datasets due to the huge matrix computation and memory cost incurred by full graph construction. Our EMVGC-LG outperforms them by 8.39%, 19.85%, 7.73%, 7.06%, 1.87%, and 4.67% in ACC on six datasets, showing the superiority of our anchor-based algorithm.
3. Compared with existing AMVGC methods, i.e., BMVC, LMVSC, SMVSC, SFMC, FMCNOF, and FPMVS-CAG, our EMVGC-LG still achieves comparable or better performance. In particular, LMVSC performs better than the other baselines, demonstrating its suitability for large-scale scenarios. In terms of ACC, our EMVGC-LG surpasses LMVSC by margins of 12.82%, 16.16%, 5.67%, 4.73%, 2.13%, 5.07%, 2.12%, 1.43%, 5.53%, and 2.20%, illustrating our effectiveness.
### Running Time Comparison
To validate the computational efficiency of the proposed EMVGC-LG, we plot the average running time of each algorithm on ten benchmark datasets in Figure 3. The results of some compared algorithms on large-scale datasets are not reported due to memory
Figure 4. The ablation study of our local and global structure combination strategy on five benchmark datasets.
Figure 5. The ablation study of our anchor learning strategy on five benchmark datasets. “Fixed” indicates without our anchor learning strategy.
Figure 3. Time comparison of different MVC Methods on ten datasets
Figure 6. Sensitivity analysis of anchor number m of our method on two benchmark datasets.
overflow errors caused by their excessive time and space complexity. As shown in Figure 3, we observe that
1. Compared to full graph-based clustering methods, the proposed EMVGC-LG significantly reduces run time through the construction of anchor graphs.
2. Compared to the anchor-based MVC approaches, i.e., SMVSC and FPMVS-CAG, the proposed EMVGC-LG requires more running time, mainly due to our local and global structure preservation strategy. Generally, the extra time is worthwhile since EMVGC-LG demonstrates its superiority on most datasets.
### Ablation Study
#### 4.6.1. Local and Global Structure Combination Strategy
The local and global structure combination strategy is the main contribution of this paper. To further demonstrate its effectiveness, we present the results of an ablation study in Figure 4, where "Local" and "Global" indicate using only the local or the global structure, respectively. In this setting, we optimize Eq.(3) and Eq.(2), respectively, in the optimization process to obtain the final clustering results. In terms of ACC, the proposed structure combination strategy improves performance on the Flower17, Mfeat, VGGFace50, Cifar100, and YouTubeFace20 datasets by **21.03%**, **8.71%**, **6.74%**, **1.46%**, and **2.30%** compared with the local structure alone, which demonstrates the effectiveness of our strategy.
#### 4.6.2. Anchor Learning Strategy
We conducted ablation experiments on the proposed anchor learning strategy, as shown in Figure 5. "Fixed" indicates initializing anchors by k-means without updating them during the optimization process. Compared with this fixed-anchor variant, our approach significantly improves the clustering performance and avoids the high time expenditure of k-means, demonstrating the effectiveness of the anchor learning strategy.
### Convergence and Sensitivity
We conducted several experiments to demonstrate the convergence of the proposed algorithm. As shown in Figure 8, the objective value decreases monotonically in each iteration, which clearly verifies the convergence of our algorithm. To investigate the sensitivity to the number of anchors m, we examined how performance shifts for different anchor numbers. As shown in Figure 6, the number of anchors has some effect on performance, and the algorithm essentially achieves its best performance at m = k. Moreover, two hyperparameters, \(\lambda\) and \(\mu\), are used in our method: \(\lambda\) balances the local and global structure, and \(\mu\) is the coefficient of the sparsity regularization term. As shown in Figure 7, we conducted comparative experiments on two benchmark datasets to illustrate the impact of these two parameters on performance. On VGGFace50, our method performs better when \(\lambda\) is less than 0.001 and \(\mu\) is larger than 1. When \(\lambda\) is larger than 0.01, EMVGC-LG works well on the Mfeat dataset, and \(\mu\) has little effect on it. Thus, with \(\lambda\) fixed, varying \(\mu\) has a smaller effect on the final performance on most datasets, while ACC for a given \(\mu\) is affected by \(\lambda\).
## 5. Conclusion
In this paper, we propose a novel anchor-based multi-view graph clustering framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG). Specifically, EMVGC-LG preserves the local and global structures in a unified framework, which provides comprehensive information for clustering. We theoretically prove that the proposed paradigm with the global structure can well approximate the local information. Besides, anchor construction and graph learning are jointly optimized in our unified framework to enhance the clustering quality. In addition, EMVGC-LG inherits the linear complexity of existing AMVGC methods with respect to the sample number, which is time-economical and scales well with the data size. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method. In the future, we will explore the relationship between local and global structure in more detail, e.g., under what circumstances is the local structure preferable to the global structure?
## 6. Acknowledgments
This work was supported by the National Key R&D Program of China (no. 2020AAA0107100) and the National Natural Science Foundation of China (project no. 62325604, 62276271).
|
2305.19878 | Investigating Impacts of Health Policies Using Staggered
Difference-in-Differences: The Effects of Adoption of an Online Consultation
System on Prescribing Patterns of Antibiotics | We use a recently proposed staggered difference-in-differences approach to
investigate effects of adoption of an online consultation system in English
general practice on antibiotic prescribing patterns. The target estimand is the
average effect for each group of practices (defined by year of adoption) in
each year, which we aggregate across all adopting practices, by group, and by
time since adoption. We find strong evidence of a positive effect of adoption
on antibiotic prescribing rates, though the magnitude of effect is relatively
small. As time since adoption increases, the effect size increases, while
effects vary across groups. | Kate B. Ellis, Ruth H. Keogh, Geraldine M. Clarke, Stephen O'Neill | 2023-05-31T14:13:29Z | http://arxiv.org/abs/2305.19878v2 | Investigating Impacts of Health Policies Using Staggered Difference-in-Differences: The Effects of Adoption of an Online Consultation System on Prescribing Patterns of Antibiotics
## Abstract
We use a recently proposed staggered difference-in-differences approach to investigate effects of adoption of an online consultation system in English general practice on antibiotic prescribing patterns. The target estimand is the average effect for each group of practices (defined by year of adoption) in each year, which we aggregate across all adopting practices, by group, and by time since adoption. We find strong evidence of a positive effect of adoption on antibiotic prescribing rates, though the magnitude of effect is relatively small. As time since adoption increases, the effect size increases, while effects vary across groups.
**Keywords:** antibiotics, difference-in-differences, online consultation system, parallel trends, policy evaluation, staggered adoption
## Running Head: Effects of Adoption of an Online Consultation System
_We declare no conflicts of interest._
Introduction
The use of digital tools in health care settings is growing and has been accelerated by the COVID-19 pandemic. In the United Kingdom (UK) there has been increased adoption of online consultation (OC) systems by general practitioner (GP) practices. These systems enable patients to contact their GP practice by submitting an online form with details of the query (Eccles et al., 2019). After this initial contact, patients may be offered face-to-face appointments, or queries may be dealt with remotely by telephone, video or via online message. There are 31 OC systems that are approved for use by the NHS, varying in design, functionality and how they are implemented (Chappell et al., 2023). In this paper, we examine GP practices using the _askmyGP_ OC system ([https://askmygp.uk/](https://askmygp.uk/)). Practices using _askmyGP_ are encouraged to use the system to facilitate "total digital triage". Under this model, all patient contact to the practice is channelled through an OC system, which patients can do themselves or with assistance from administrative staff, creating a single workflow to support the triage of patient contacts (NHS England, 2020). However, adoption of the _askmyGP_ OC system does not always result in total digital triage.
Adoption of an OC system aims to improve the experience of patients by being convenient and accessible, and of practitioners through increasing efficiency of their practice (Eccles et al., 2019). However, it is important to investigate whether adoption could have any unintended implications. Antibiotic prescribing is of particular concern as the misuse and overuse of antibiotics accelerates antibiotic resistance (World Health Organisation, 2020). Since the majority of antibiotics in England are prescribed within general practice (UK Health Security Agency, 2022), this motivates assessing whether adoption of an OC system in general practice causes a change in antibiotic prescribing patterns.
Health policy interventions are often introduced at a cluster level, with roll out being staggered across different groups of units over time. This is the case with OC systems, which have been adopted by different GP practices at different times. When evaluating the effects of adoption, consideration is required as to how to make use of outcomes measured for a GP practice before and after adoption. We focus on a situation in which GP practice-level data is available across a series of time periods, with information in each period on whether a GP practice has adopted the OC system, on the outcome and on a number of characteristics.
When all treated units initiate treatment at the same time, the standard two-way fixed effects difference-in-differences (DiD) estimator is commonly used and provides an unbiased estimate of the average treatment effect in the treated (ATT) provided identification assumptions such as the
parallel trends assumption hold (de Chaisemartin and D'Haultfoeuille, 2022). However, when roll out of an intervention is staggered over time, recent literature has shown that this estimator can be biased for the ATT when there is treatment effect heterogeneity over time (Goodman-Bacon, 2021, de Chaisemartin and D'Haultfoeuille, 2020). The bias arises due to inappropriate comparisons of the outcomes between units that initiate the intervention at different time points (Baker et al., 2022). Several methods have since been proposed to handle variation in timing of treatment initiation (Callaway and Sant'Anna, 2021, Sun and Abraham, 2021, Wooldridge, 2021, Borusyak et al., 2022, de Chaisemartin and D'Haultfoeuille, 2020). These methods overcome the issues surrounding standard two-way fixed effect regressions by explicitly avoiding using inappropriate units as controls. In this paper, we consider one such method proposed by Callaway and Sant'Anna (2021) which estimates the average treatment effect for each group of treated units (defined by the time period of treatment initiation) on outcomes measured in each post-treatment time period. These ATTs are referred to as group-time average treatment effects (GTATTs) and the estimation approach is hereby referred to as the staggered DiD approach.
We apply the staggered DiD approach to practice-level data to assess whether adoption of the _askmyGP_ OC system caused a change in the antibiotic prescribing habits of health care professionals in English general practice between March 2019 and February 2022. Similar data have previously been used to investigate this topic, but the analysis did not consider the staggered roll out of the system (Dias and Clarke, 2020). The previous evaluation was also restricted to practices that were channelling all patient contact to the practice through _askmyGP_ and so investigated the effect of adoption of total digital triage, rather than of the OC system in general. To the best of our knowledge, the staggered DiD approach has not previously been used to investigate effects of adoption of OC systems. However, it has been used to assess some implications of the use of telemedicine more broadly, for instance by using the staggered implementation of Telehealth Parity Laws to assess potential impacts on medical care expenditures in the United States (Dong, 2022).
The remainder of the paper is organised as follows. Section 2 outlines the research question in more detail and describes the data sources. In Section 3, we define the causal estimand of interest and describe the estimation methods, including the standard two-way fixed effects DiD estimator and the staggered DiD approach, in the context of the application. We also discuss methods to assess their identification assumptions. Section 4 describes how the methods are applied to the data and presents the results. The application includes investigations of treatment effect heterogeneity across different dimensions and the sensitivity of the results to certain assumptions.
We conclude with a discussion in Section 5. Additional details on the inclusion criteria, the identification assumptions, and the application results are reported in the online supplementary material accompanying the paper.
## 2 Background to case study and data sources
The aim is to investigate the impact of adoption of the _askmyGP_ OC system on antibiotic prescribing rates among GP practices that adopted the system. We carry out our main analyses at the yearly level, making use of practice-level data in each year on whether a GP practice has adopted the OC system, on the outcome and on a number of characteristics. Our analysis period is the 4 full years between March 2018 and February 2022, with \(t=1\) referring to March 2018-February 2019, \(t=2\) referring to March 2019-February 2020 and so on. Assuming any effects of the COVID-19 pandemic prior to March 2020 were negligible in England, we define year periods to run from March to February to avoid straddling the start of the pandemic.
Although our main analyses are carried out at the yearly level between March 2018 and February 2022, we made use of monthly data from May 2017 to March 2022 on use of the system and antibiotic prescribing rates to define our study cohort. All GP practices in England who met certain inclusion criteria over this period were eligible (see Section S.1 of the online supplementary materials for details). We obtained a list of unique identifiers of GP practices that had adopted the _askmyGP_ version 3 OC system at any point after its launch in July 2018, up until March 2022. This data included the number of patient-initiated contacts recorded through the system each month for each practice. Our study cohort includes GP practices that had their first patient-initiated contact recorded through the system between March 2019 and February 2021 (during \(t=2\) or \(t=3\)), and those that did not adopt the system up to and including February 2022.
The intervention of interest is adoption of the _askmyGP_ OC system, which is time-dependent. We define the year of adoption of _askmyGP_ as the first year in which there was at least one patient-initiated contact recorded through the system. We assume that the adoption was absorbing, that is, that once a practice first adopted the system, it continued to have access for the rest of our analysis period.
The outcome of interest is the yearly mean antibiotic prescribing rate, where the antibiotic prescribing rate is defined as the total number of items of antibacterial drugs prescribed each month divided by the monthly practice list size (the number of patients registered to the practice). Antibacterial drugs are those classified as such by the British National Formulary (BNF) (OpenPrescribing, 2023a). These data are publicly available from OpenPrescribing, a search
interface of the raw English Prescribing Dataset published by the NHS Business Services Authority (OpenPrescribing, 2023b).
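As an illustration of how this outcome could be assembled from such an extract, the pandas sketch below uses hypothetical column names and values (not the real OpenPrescribing schema) and maps calendar months into the March-February analysis years used here.

```python
import pandas as pd

# Hypothetical monthly extract: one row per practice-month with the number of
# antibacterial items prescribed and the practice list size.
monthly = pd.DataFrame({
    "practice_id": ["A1"] * 4,
    "month": pd.to_datetime(["2019-03-01", "2019-04-01", "2020-01-01", "2020-02-01"]),
    "antibiotic_items": [310, 295, 330, 322],
    "list_size": [7200, 7230, 7250, 7260],
})

# Monthly prescribing rate per 1,000 registered patients.
monthly["rate_per_1000"] = 1000 * monthly["antibiotic_items"] / monthly["list_size"]

# Analysis years run March-February: shift back two months before taking the
# calendar year, so March 2019-February 2020 is labelled 2019 (t = 2).
monthly["analysis_year"] = (monthly["month"] - pd.DateOffset(months=2)).dt.year

yearly = (monthly.groupby(["practice_id", "analysis_year"])["rate_per_1000"]
                 .mean()
                 .reset_index(name="mean_rate_per_1000"))
print(yearly)
```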
We also use data on various GP practice-level characteristics extracted from publicly available data sources (Office for National Statistics, 2022, Office for National Statistics, 2021, GOV.UK, 2019). These sources all produce demographics according to Lower Layer Super Output Area (LSOA), which were then weighted by the proportion of registered patients living in each LSOA to estimate the demographics for patients registered to each eligible GP practice in each month (NHS Digital, 2022). For our main analyses, which are carried out at the yearly level, we use the yearly mean percentage of male patients, percentage of patients of black and minority ethnicity, percentage of patients aged 65 years and over, percentage of patients with third level education, and index of multiple deprivation score of patient area, and a yearly binary indicator for the classification of rurality of patient area.
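The practice-level characteristics are built by weighting LSOA-level figures by where each practice's registered patients live; a minimal pandas sketch of that weighting, with made-up LSOA codes, values, and column names, might look as follows.

```python
import pandas as pd

# Made-up LSOA-level demographics and practice-to-LSOA patient counts.
lsoa_stats = pd.DataFrame({
    "lsoa": ["E0100", "E0101", "E0102"],
    "pct_over_65": [14.2, 22.8, 18.1],
    "imd_score": [31.5, 12.4, 20.7],
})
patients = pd.DataFrame({
    "practice_id": ["A1", "A1", "A1"],
    "lsoa": ["E0100", "E0101", "E0102"],
    "registered_patients": [2400, 3600, 1200],
})

merged = patients.merge(lsoa_stats, on="lsoa")
weights = (merged["registered_patients"]
           / merged.groupby("practice_id")["registered_patients"].transform("sum"))

# Weighted average of each LSOA characteristic over a practice's patients.
practice_chars = (merged[["pct_over_65", "imd_score"]]
                  .mul(weights, axis=0)
                  .assign(practice_id=merged["practice_id"])
                  .groupby("practice_id")
                  .sum())
print(practice_chars)
```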
In total there were 6,397 eligible practices. Of these, 176 (2.75%) adopted the _askmyGP_ OC system between March 2019 and February 2021 (during \(t=2\) or \(t=3\)), and we refer to them as the '_askmyGP_ practices' throughout. We refer to the remaining practices (who did not adopt the system up to and including February 2022) as the 'never adopters' throughout. Although these practices did not adopt _askmyGP_, they may have been using other OC systems over the analysis period. We therefore assume that their management of patient consultations represents those of the _askmyGP_ practices had they not adopted the _askmyGP_ system. Of the _askmyGP_ practices, 41 adopted the system in the year March 2019-February 2020 (\(t=2\)), and we refer to them as the 'early adopters'. The remaining 135 _askmyGP_ practices adopted during March 2020-February 2021 (\(t=3\)), and we refer to them as the 'late adopters'. Figure 1 displays the time periods under consideration and the year of adoption of the system for each group of practices (defined by year of adoption).
## 3 Methods
### Notation and definition of the target estimand
We adopt the notation and potential outcomes framework used by Callaway and Sant'Anna (2021). Let \(t=1,...,T\) denote each time period under consideration. In our application these are years with \(t=1\) referring to March 2018-February 2019 and \(T=4\) referring to March 2021-February 2022. Let \(G_{ig}\) denote a binary variable that is equal to 1 if unit \(i\) initiates treatment in period \(g\), and let \(C_{i}\) denote a binary variable that is equal to 1 if unit \(i\) is not treated in any time period under consideration ('never treated'). The observed outcome for unit \(i\) in time period \(t\) is denoted \(Y_{it}\), which in our application is the yearly mean antibiotic prescribing rate. The vector of measured covariates for unit \(i\) in time period \(t\) is denoted \(\mathbf{X}_{it}\), which in our application is the vector of yearly practice-level characteristics. Let \(Y_{i,t}(0)\) denote the potential outcome for unit \(i\) in time period \(t\) if they were never treated up to and including time period \(T\) and let \(Y_{i,t}(g)\) denote the potential outcome for unit \(i\) in time period \(t\) if they were to have initiated treatment in time period \(g\).
The target estimand is the expected difference in the outcome in period \(t\) for those who initiate treatment in period \(g\) (\(g\leq t\)) if all units in this group had initiated treatment in period \(g\) compared with if they had never been treated up to and including time period \(T\);
Figure 1: Illustration of the time periods under consideration and year of adoption of the system for each group of practices (defined by year of adoption). Within each group, years in which no practices have access to the system are highlighted in grey, the year of adoption of the system is highlighted by light green and years in which all practices have access to the system are highlighted in dark green.
\[\text{ATT(g,t)}=E\big{[}Y_{t}(g)-Y_{t}(0)\big{|}G_{g}=1\big{]} \tag{1}\]
This is the ATT in time period \(t\) for the group of units that initiated treatment in time period \(g\). Following Callaway and Sant'Anna (2021), we refer to the target estimand as a group-time average treatment effect (GTATT).
### No variation in treatment timing: Two-by-two difference-in-differences
We first consider the simplified setting in which there are only two time periods (\(T=2\)) and all units are untreated in the first time period and a group of units are treated in the second. Hence, there are only two groups of units defined by time of treatment initiation in this setting; those that initiate treatment in the second period (\(G_{i2}=1\)) and those that are never treated (\(G_{i2}=C_{i}=0\)). Since there are only two time periods and two groups of units, this setting is commonly referred to as a two-by-two DiD setup. Figure 2 illustrates an example of this setup in our application.
Since there is only one group of treated units, which are observed at one period post-treatment, the target estimand is only defined for \(g=2\) and \(t=2\);
\[\text{ATT}=E\big{[}Y_{2}(2)-Y_{2}(0)\big{|}G_{2}=1\big{]} \tag{2}\]
We consider identification under a _conditional_ parallel trends assumption. This assumption states that if the group that initiated treatment in time period \(g\) had in fact never been treated, their expected outcomes, conditional on measured covariates, would have followed a parallel path over time to the expected outcomes for those that never became treated. Under this assumption, together with the no anticipation and consistency assumptions (see Section S.2 of the online
Figure 2: Illustration of a two-by-two DiD setup in our application. Within each group, years in which no practices have access to the system are highlighted in grey and the year of adoption of the system is highlighted by light green.
supplementary materials for details), the conditional ATT (CATT), where the conditioning is on the vectors of covariates in time periods \(t=1\) and \(t=2\), can be identified as follows;
\[CATT=E[Y_{2}(2)-Y_{2}(0)|\mathbf{X_{1}},\mathbf{X}_{2},G_{2}=1] \tag{3}\]
\[=E[Y_{2}(2)|\mathbf{X_{1}},\mathbf{X}_{2},G_{2}=1]-(E[Y_{2}(0)-Y_{1}(0)|\mathbf{X_{1}},\bm {X}_{2},G_{2}=0]+E[Y_{1}(0)|\mathbf{X_{1}},G_{2}=1]) \tag{4}\]
\[=(E[Y_{2}|\mathbf{X_{1}},\mathbf{X}_{2},G_{2}=1]-E[Y_{1}|\mathbf{X_{1}},G_{2}=1])-(E[Y_{2}| \mathbf{X_{1}},\mathbf{X}_{2},G_{2}=0]-E[Y_{1}|\mathbf{X_{1}},G_{2}=0]) \tag{5}\]
Since the CATT is the conditional difference in changes over the two time periods between the two groups, it can be estimated by the coefficient of an interaction term between time period and group indicators in a two-way fixed effects regression model (Baker, 2019);
\[Y_{tt}=\omega+\alpha_{i}+\varphi_{t}+\delta\tau_{t}G_{i2}+\mathbf{\gamma^{T}}\mathbf{X _{tt}}+\epsilon_{tt}\text{ for }i=1,...,n,t=1,2 \tag{6}\]
where \(\alpha_{i}\) is the unit fixed effect for unit \(i\), \(\varphi_{t}\) is the fixed effect for time \(t\), \(\tau_{t}\) is the time period indicator where \(\tau_{t}=1\) if \(t=2\) and \(0\) otherwise, and \(\epsilon_{tt}\) are the residuals that are assumed to be normally distributed with conditional mean zero. The unit fixed effects account for differences between units that are the same over time and the fixed effects for time account for changes over the two time periods that are the same for all units. Assuming the treatment does not affect the measured covariates, inclusion of time-varying covariates controls for measured time-varying differences between groups, where any associated changes in outcome would otherwise be wrongly attributed to the treatment (Zeldow and Hatfield, 2021). Assuming model (6), it can be shown that the CATT in (5) is given by the coefficient of the interaction term between time period and group indicators, \(\delta\). When one uses a linear model with no interaction between covariates and treatment group, the conditional effect is equal to the marginal effect. Therefore, under this assumption and the identification assumptions, the ATT in (2) is also given by \(\delta\).
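The two-way fixed effects regression in (6) could be fitted, for example, with statsmodels in Python (not necessarily the software used by the authors); the simulated practice-year panel below stands in for the real data, with invented sizes and coefficients, only one covariate for brevity, and cluster-robust standard errors by practice.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the practice-year panel: 200 practices observed in
# years 1 and 2, the first 40 of which adopt the system in year 2.
rng = np.random.default_rng(1)
n = 200
panel = pd.DataFrame({
    "practice_id": np.repeat(np.arange(n), 2),
    "year": np.tile([1, 2], n),
    "adopter": np.repeat((np.arange(n) < 40).astype(int), 2),
    "pct_over_65": rng.normal(18, 4, 2 * n),
})
panel["post"] = (panel["year"] == 2).astype(int)
practice_effect = np.repeat(rng.normal(0, 3, n), 2)
panel["rate"] = (40 + practice_effect + 1.5 * panel["post"]
                 + 0.2 * panel["pct_over_65"]
                 + 2.0 * panel["adopter"] * panel["post"]
                 + rng.normal(0, 1, 2 * n))

# Model (6): practice and year fixed effects plus the DiD interaction; the real
# analysis would include the full covariate set listed in Section 4.2.
fit = smf.ols("rate ~ C(practice_id) + C(year) + adopter:post + pct_over_65",
              data=panel).fit(cov_type="cluster",
                              cov_kwds={"groups": panel["practice_id"]})
print(fit.params["adopter:post"], fit.bse["adopter:post"])   # delta and its SE
```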
In the two-by-two DiD setup one cannot assess the conditional parallel trends assumption since trends in pre-treatment periods cannot be examined. When there is more than one pre-treatment period but still no variation in treatment timing, conventionally the assumption is assessed by testing for differences in pre-treatment trends between the two groups. However, Bilinski and Hatfield (2018) propose exploring the potential impact of a violation on the ATT estimate itself. This can be done by comparing the ATT estimate produced under the conditional parallel trends assumption and an ATT estimate that is produced when a linear trend difference between groups that is extrapolated from pre-treatment periods is allowed (see Section S.3 of the online supplementary materials for details) (Bilinski and Hatfield, 2018).
### Variation in treatment timing: The staggered difference-in-differences approach
When there is variation in treatment timing, multiple treated groups can be defined by time of treatment initiation. In our application illustrated in Figure 1, there are 4 time periods (\(T=4\)) and the year of adoption of the system varies across the practices. 3 groups of practices are defined by year of adoption; those that never adopted (\(C_{i}=1\)), the early adopters who adopted in the second year (\(G_{i2}=1\)) and the late adopters who adopted in the third (\(G_{i3}=1\)). When the treatment effect is homogenous across time and group, the GTATTs in (1) will be equal for all \(t\) and \(g\). In which case, under the same assumptions as in Section 3.2, the overall ATT can be identified by the coefficient for an indicator of having initiated treatment by that period in a two-way fixed effects regression model;
\[Y_{it}=\omega+\alpha_{i}+\varphi_{t}+\delta\mathbb{I}\big{(}t\geq g\ \cap\ G_{ig}=1\big{)}+\mathbf{\gamma}^{T}\mathbf{X_{it}}+\epsilon_{it}\text{ for }i=1,...,n,t=1,...,T \tag{7}\]
This estimator has been shown to be a weighted average of all possible two-by-two DiD estimates that are produced by comparing outcomes of a group of units whose treatment status changes between two time periods to a group of units whose treatment status does not change (Goodman-Bacon, 2021, Callaway and Sant'Anna, 2022). In some of these comparisons, units that have already been treated are used as comparators for later treated units. When treatment effects vary over time \(t\), the change in effects for the earlier treated units renders these comparisons misleading (Baker et al., 2022). Since the standard two-way fixed effects DiD estimator includes these estimates in the weighted average, it thus gives a biased estimate of the overall ATT.
When treatment effects vary over time, an alternative is to estimate the GTATTs for all post-treatment combinations of \(g\) and \(t\), and then aggregate these into an overall ATT estimate. The GTATTs can be estimated using either outcome regression, inverse probability weighting or doubly robust methods (Callaway and Sant'Anna, 2021). Callaway and Sant'Anna (2021) restrict these methods to the use of pre-treatment covariates only to avoid the risk of involving covariates that have been affected by treatment.
The GTATT estimand in (1) can be identified using outcome regression, by using the result that under identification assumptions it can be expressed as
\[\text{ATT(g,t)}^{OR}=E\left[\frac{G_{g}}{E\big{[}G_{g}\big{]}}\Big{(}Y_{t}-Y_{ g-1}-m_{g,t}(X)\Big{)}\right] \tag{8}\]
This relies on estimating the conditional outcome change
\[m_{g,t}(X)=E\big{[}Y_{t}-Y_{g-1}\big{|}X_{g-1},C=1\big{]} \tag{9}\]
which is the expected change in outcome between periods \(g-1\) and \(t\) conditional on pre-treatment covariates and on never being treated (\(C=1\)).
Alternatively, the GTATT estimand can be identified using inverse probability weighting, since under identification assumptions it can be expressed as
\[\text{ATT(g,t)}^{IPW}=E\left[\left(\frac{G_{g}}{E\big{[}G_{g}\big{]}}-\frac{ \frac{P_{g}(X)C}{1-P_{g}(X)}}{E\left[\frac{P_{g}(X)C}{1-P_{g}(X)}\right]} \right)\big{(}Y_{t}-Y_{g-1}\big{)}\right] \tag{10}\]
This relies on estimating the propensity score
\[P_{g}(X)=\ P\big{(}G_{g}=1\big{|}X_{g-1},\ G_{g}+C=1\big{)} \tag{11}\]
which is the probability of having initiated treatment in time period \(g\), conditional on pre-treatment covariates and on either having initiated treatment in time period \(g\) or never being treated. The \(\text{ATT(g,t)}^{IPW}\) is a weighted average of the observed changes in outcome for those that are never treated and those that initiate treatment in period \(g\). The weights are defined by the propensity scores where observations from those who are never treated that have similar characteristics to those that initiate treatment in period \(g\) are up weighted, ensuring that the covariates of the treatment and comparison group are balanced (Baker, 2019).
Combining these two approaches, the GTATT estimand can be also identified using doubly robust methods, since under identification assumptions it can be expressed in terms of the \(\text{ATT(g,t)}^{OR}\), or the \(\text{ATT(g,t)}^{IPW}\), and a term with mean zero;
\[\text{ATT(g,t)}^{DR}=E\left[\left(\frac{G_{g}}{E\big{[}G_{g}\big{]}}-\frac{ \frac{P_{g}(X)C}{1-P_{g}(X)}}{E\left[\frac{P_{g}(X)C}{1-P_{g}(X)}\right]} \right)\big{(}Y_{t}-Y_{g-1}-m_{g,t}(X)\big{)}\right] \tag{12}\]
If either the propensity score model or the conditional outcome change model is correctly specified, under the conditional parallel trends, no anticipation, consistency and overlap assumptions (see Section S.2 of the online supplementary materials for details), where the conditioning is now on the vector of pre-treatment covariates in time period \(g-1\), the GTATT in (1) is given by the \(\text{ATT(g,t)}^{DR}\) in (12).
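A sample-analogue sketch of the doubly robust estimator in (12) is given below, using a logistic propensity-score model and a linear outcome-change model as one common choice of working models; in the application, `X_pre` would be the pre-adoption practice characteristics, `g_flag` would mark a given adoption-year group, and `c_flag` the never adopters. This is a schematic implementation, not the Stata csdid routine used for the reported results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def att_gt_dr(y_t, y_gm1, X_pre, g_flag, c_flag):
    """Sample analogue of the doubly robust ATT(g,t) in eq. (12).

    y_t, y_gm1 : outcomes in periods t and g-1
    X_pre      : covariates measured in period g-1
    g_flag     : 1 if the unit initiated treatment in period g
    c_flag     : 1 if the unit is never treated
    Only units with g_flag == 1 or c_flag == 1 enter the calculation.
    """
    keep = (g_flag == 1) | (c_flag == 1)
    y_t, y_gm1, X_pre = y_t[keep], y_gm1[keep], X_pre[keep]
    g_flag, c_flag = g_flag[keep], c_flag[keep]
    dy = y_t - y_gm1

    # Propensity score P_g(X) = P(G_g = 1 | X, G_g + C = 1), eq. (11).
    ps = (LogisticRegression(max_iter=1000)
          .fit(X_pre, g_flag).predict_proba(X_pre)[:, 1])

    # Outcome-change model m_{g,t}(X), eq. (9), fitted on the never-treated.
    m_hat = (LinearRegression()
             .fit(X_pre[c_flag == 1], dy[c_flag == 1]).predict(X_pre))

    w_treat = g_flag / g_flag.mean()                 # G_g / E[G_g]
    odds = ps * c_flag / (1 - ps)                    # p_g(X) C / (1 - p_g(X))
    w_comp = odds / odds.mean()
    return np.mean((w_treat - w_comp) * (dy - m_hat))

# Tiny synthetic check: with no true effect and a common conditional trend,
# the estimate should be close to zero.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
g = (rng.random(n) < 0.3).astype(int)
c = 1 - g
y0 = X @ np.array([1.0, -0.5]) + rng.normal(size=n)
y1 = y0 + 0.5 + 0.2 * X[:, 0] + rng.normal(size=n)
print(att_gt_dr(y1, y0, X, g, c))
```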
Callaway and Sant'Anna (2021) proposed several overall single summaries of the ATT. We consider the average of the average effects of treatment for each group;
\[\theta_{overall}=\sum\nolimits_{g}\theta_{group}(g)P(G=g) \tag{13}\]
This is the average effect of treatment experienced by all units that were ever treated and so it has the same interpretation as the ATT in the two-by-two DiD setup (Callaway and Sant'Anna, 2021).
One can also aggregate the GTATTs to assess whether the treatment has a heterogeneous effect by time of treatment initiation (treatment group) or by time since treatment initiation. The average effect of treatment over time for group \(g\) is defined as
\[\theta_{group}(g)=\frac{1}{T-g+1}\sum\nolimits_{t=g}^{T}ATT(g,t) \tag{14}\]
The average effect of having initiated treatment \(e\) periods ago is defined as
\[\theta_{length}(e)=\sum\nolimits_{g}I(g+e\leq T)ATT(g,g+e)P(G=g|G+e\leq T) \tag{15}\]
Inference for the GTATTs and aggregate summary measures can be conducted using influence functions (Callaway and Sant'Anna, 2021).
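Given a set of estimated group-time effects, the aggregations in (13)-(15) are simple weighted averages. The sketch below uses illustrative ATT(g,t) values (not those in Table 2) together with the group sizes from Section 2 (41 early and 135 late adopters).

```python
import numpy as np

# Illustrative ATT(g, t) estimates keyed by (g, t).
att_gt = {(2, 2): 2.0, (2, 3): 3.0, (2, 4): 3.6,
          (3, 3): 1.0, (3, 4): 1.4}
n_g = {2: 41, 3: 135}            # early and late adopters
T = 4

# Eq. (14): average effect over post-treatment periods for each group.
theta_group = {g: np.mean([att_gt[(g, t)] for t in range(g, T + 1)]) for g in n_g}

# Eq. (13): overall ATT, weighting groups by their share among adopters.
n_total = sum(n_g.values())
theta_overall = sum(theta_group[g] * n_g[g] / n_total for g in n_g)

# Eq. (15): average effect of having adopted e periods ago, weighting the
# groups still observed e periods after adoption by their relative sizes.
theta_length = {}
for e in range(T - min(n_g) + 1):
    groups = [g for g in n_g if g + e <= T]
    total = sum(n_g[g] for g in groups)
    theta_length[e] = sum(att_gt[(g, g + e)] * n_g[g] / total for g in groups)

print(round(theta_overall, 2), theta_group, theta_length)
```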
When assessing the conditional parallel trends assumption in this setup, one might consider GTATTs estimated for pre-treatment periods. Assuming there is no anticipation, if there is evidence of an effect in pre-treatment periods, then there is evidence against the assumption. However, for methods that estimate treatment effects according to time since treatment initiation such as the staggered DiD approach, Rambachan and Roth (2023) propose presenting confidence intervals for the average effects of having initiated treatment \(e\) periods ago (equation (15)) under different potential violations of the assumption as a sensitivity analysis (see Section S.3 of the online supplementary materials for details) (Rambachan and Roth, 2023).
## 4 Application to _askmyGP_ data
### Descriptive analyses
Table 1 summarizes the practice-level characteristics in February 2019 for the early and never adopters, and in February 2020 for the late and never adopters. Since the early adopters adopted in March 2019-February 2020 and the late adopters adopted in March 2020-February 2021, these
are the last pre-adoption months for each group respectively. In each of the pre-adoption months, the early and late adopters were broadly similar to the never adopters with the following exceptions: the early and late adopters had a smaller proportion of registered patients of black and minority ethnicity, were more likely to be located in a rural area and had a somewhat greater proportion of registered patients aged 65 years and older. Also, in February 2019, the early adopters tended to have larger practice list sizes than the never adopters. We treated all these practice characteristics as potential confounders as we did not believe that distributions of these characteristics could be affected by adoption of the system in the time frame considered, and all could plausibly be associated with antibiotic prescribing rates. It seems plausible that a practice may be more likely to adopt the system if they believed their patients would sufficiently utilise it, which could relate to the age distribution of patients for instance.
Figure 3 illustrates the distributions of yearly mean antibiotic prescribing rates per 1,000 patients in the 4 years between March 2018 and February 2022 for the never, early and late adopters. Over the whole analysis period, the early and late adopters tended to prescribe more items of antibiotics per patient compared to the never adopters. Since this was also the case prior to any adoption of the system, this is likely due to variation in practice and patient characteristics. Overall, trends in antibiotic prescribing rates were similar across the groups.
### Application of the two-by-two difference-in-differences method
We carried out a two-by-two DiD analysis to estimate the effect of adoption of the system for the early adopters in their year of adoption. Figure 2 illustrates the time periods and groups considered for this analysis. We carried out the analysis at the yearly level using the yearly mean antibiotic prescribing rates and practice-level characteristics. Our two-way fixed effects regression model (6) included the following time-varying covariates: yearly mean practice list size, percentage male, percentage of black and minority ethnicity, percentage aged 65 or older, percentage with third level education, deprivation score of patient area and its square, and the yearly binary indicator for the rural classification. We used cluster-robust standard errors to allow for correlation of prescribing rates over time within practices. To give a better indication of the magnitude of effect, we also estimated the expected percentage increase in average prescribing rates for the early adopters in their year of adoption, compared to if these practices had not adopted it. We estimated this percentage by subtracting the ATT estimate from the observed mean prescribing rate for the early adopters between March 2019 and February 2020 and then dividing the ATT estimate by this difference (which is the estimated mean counterfactual for the early adopters between March 2019 and February 2020).
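In code, this back-transformation is a two-line calculation; the observed post-adoption mean below is a made-up number used only to show the arithmetic.

```python
att = 2.3                      # estimated items per 1,000 patients per month
observed_mean = 47.0           # hypothetical observed mean for the adopters
counterfactual_mean = observed_mean - att
pct_increase = 100 * att / counterfactual_mean
print(round(pct_increase, 2))  # about 5% with these illustrative numbers
```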
Figure 3: Box plots of yearly mean antibiotic prescribing rates per 1,000 patients in \(t=1,2,3,4\) for each group of practices (defined by year of adoption). Within each group, years in which no practices have access to the system are highlighted in grey, the year of adoption of the system is highlighted by light green and years in which all practices have access to the system are highlighted in dark green.
The early adopters' average monthly number of items of antibiotics prescribed per 1,000 patients between March 2019 and February 2020 was estimated to be 2.3 items higher than if these practices had not adopted the system. There was only very weak evidence of an effect (95% confidence interval (CI)=[-0.4, 4.9], p-value=0.095). This ATT translates into an estimated 5.15% increase in antibiotic prescribing rates for the early adopters in their year of adoption, compared to if these practices had not adopted it.
We carried out Bilinski and Hatfield's (2018) approach to assess the conditional parallel trends assumption for this analysis. Since one cannot use their approach if there is only one pre-treatment period, this was done at the quarterly level using quarterly mean prescribing rates and practice-level characteristics. When assessing the assumption conventionally, one might conclude that there is no evidence against the assumption since there is no evidence of there being a linear trend difference between groups pre-adoption (p-value=0.905). However, when allowing for a linear violation, we cannot rule out changes in the ATT estimate of at least the magnitude of the ATT estimate when assuming no violation. Although this gives some evidence against the conditional parallel trends assumption, we can rule out considerable changes in the ATT estimate when allowing for a linear violation (see Table D1 in the online supplementary materials for details).
### Application of the staggered difference-in-differences approach
We applied the doubly robust method of the staggered DiD approach (equation (12)) to the early and late adopters across the 4 years between March 2018 and February 2022, using the never adopters as control units. Again, we carried out the analysis at the yearly level using yearly mean prescribing rates and practice-level characteristics. We used the same covariate specification as in our two-by-two DiD analysis, although now restricted to pre-adoption covariates only. This was done using the _csdid_ package in Stata, which is based on their paper (Rios-Avila, 2023). To give a better indication of the magnitude of effect, we also estimated the overall expected percentage increase in average prescribing rates post-adoption of the system, compared to if these practices had not adopted it. We estimated this percentage by subtracting the overall ATT estimate (\(\theta_{overall}\), equation (13)) from the observed mean prescribing rate for the _askmyGP_ practices between March 2019 and February 2022, and then divided the overall ATT estimate by this difference.
Table 2 gives the estimated GTATTs (\(ATT(g,t)\), equation (1)) in each post-adoption year for each group, which all give strong evidence of a positive association between adoption of the system and antibiotic prescribing rates. The estimated overall average effect of adoption of the system experienced by all eligible practices that adopted (\(\theta_{overall}\), equation (13)) was 1.7 - this is the expected increase in the monthly numbers of items of antibiotics prescribed per 1,000 patients post-adoption of the system, compared to if these practices had not adopted it. We found strong evidence of there being an effect of adoption when averaged across all those that adopted it (95% CI=[1.1, 2.4], p-value<0.001). This overall ATT translates into an estimated 4.40% increase in antibiotic prescribing rates post-adoption of the system, compared to if these practices had not adopted it.
The GTATT estimates were smaller for the late adopters, compared to the corresponding estimates for the early adopters with the same length of access to the system, suggesting effects of adoption vary across groups. The average effect of adoption was greater for the early adopters (\(\theta_{group}(2)=\)2.9 items per 1,000 patients per month, 95% CI=[1.4, 4.4], p-value<0.001), than for the late adopters who had a shorter length of access to the system (\(\theta_{group}(3)=\)1.2 items per 1,000 patients per month, 95% CI=[0.6, 1.9], p-value<0.001). Table 3 gives the estimates of the average effect of having adopted 0,1 and 2 years ago (\(\theta_{length}(e)\), equation (15)), which suggest that with increasing time since adoption of the system, the effect size increases.
Since there was no evidence of an effect of adoption for the late adopters in the year prior to their year of adoption, one might conclude there is no evidence against the conditional parallel trends assumption (_ATT_(3,2)=0.4 items per 1,000 patients per month, 95% CI=[-0.3,1.1], p-value=0.251). However, by carrying out Rambachan and Roth's (2023) approach, we found that our causal conclusions rely quite strongly on the conditional parallel trends assumption holding. For instance, if we were to allow for an exactly linear violation of the conditional parallel trends assumption extrapolated from pre-adoption periods, we would no longer conclude that there is a significant effect of adoption at any period post adoption at the 5% significance level. However, we would still rule out effects of adoption that are large in magnitude (see Table D2 in the online supplementary materials for details).
## 5 Discussion
We used a recently proposed staggered DiD approach (Callaway and Sant'Anna, 2021) to investigate the impact of adoption of an OC system in English general practice on antibiotic prescribing rates, using longitudinal data from 6,397 GP practices. We compared this approach to a more standard DiD method that does not handle the staggered adoption of the system and assessed the validity of our assumptions using recently proposed methods (Bilinski and Hatfield, 2018, Rambachan and Roth, 2023). Our results suggest that adoption of the _askmyGP_ OC system increases antibiotic prescribing rates in English general practice, though the magnitude of effect is relatively small. In 2016 NHS England launched a national programme to combat antibiotic resistance where clinical commissioning groups were supported to reduce the number of antibiotics prescribed in primary care by 4% (NHS England, 2016). This suggests that even relatively small changes in antibiotic prescribing rates are considered important and an overall estimated increase of 4.40% post-adoption of the system may be non-trivial.
This study did not incorporate information regarding the reasons for which antibiotics were prescribed at an individual level, and further research is required to understand the reasons for an increase. Prescribing of antibiotics is often necessary and increased rates could reflect higher quality care, rather than a lack of adherence to prescribing guidelines. It has been suggested that if ease of access is improved by adoption of an OC system, the threshold of what patients feel the need to contact their practice about could be lowered, potentially generating new patient demand or uncovering previously unmet need (Salisbury, 2021). If there are higher rates of patient-initiated low acuity demand, this would likely lead to some increase in prescribing rates. Another potential mechanism of the observed association may be if GPs' work pressure in fact
increased with adoption of an OC system, since this has been associated with increased prescribing of antibiotics (Allen et al., 2022).
Our results suggest that as the time since adoption increases, the effect size increases. This seems credible as effects of adoption would likely increase as practice staff and patients become more accustomed to the system. It also seems plausible that over time, practices would be more likely to be using _askmyGP_ to facilitate total digital triage. This would likely lead to a greater improvement of practice efficiency than if patient queries were not always channelled through the system. Under the same mechanism described above, an increase in practice efficiency could also lead to some increase in prescribing rates. There is also a suggestion that effects of adoption vary across groups. The early adopters could be considered to be innovative as they adopted the system prior to the COVID-19 pandemic, before there was strong guidance from the NHS that all practices must have access to an OC system (NHS England, 2020). This might also correspond to other behaviours of a practice, such as fully utilising the system as intended, leading to a greater effect. However, as more practices in England adopted OC systems over time, those that never adopted the _askmyGP_ system would be more similar in terms of their management of patient consultations to the _askmyGP_ practices, which may partly explain why the estimated effects of adoption were smaller for the late adopters.
A previous evaluation on the effects of adoption of total digital triage on prescribing rates of antibiotics used the synthetic control method (Dias and Clarke, 2020). While they found a slight increase in average antibiotic prescribing rates, they found no evidence of an effect of adoption of total digital triage during 2019 for 19 English practices that adopted between August 2018 and mid-2019 (ATT = 0.61 items per 1,000 patients per month, 95% CI= [-2.2,4.9], p-value=0.534). Their evaluation was restricted to practices using the _askmyGP_ OC system to facilitate total digital triage, and their control pool consisted of practices who went on to adopt total digital triage after the analysis period and so were unlikely to be using other OC systems during the period that they were used as comparators. Their findings are therefore not directly comparable to ours. The synthetic control method relies on different assumptions to the DiD methods considered in this paper. In particular, it does not rely on the conditional parallel trends assumption, but on there being a close pre-treatment match between the characteristics of the treatment group and of a weighted combination of control units (Rehkopf and Basu, 2018). Similarly to DiD methods, extensions have been proposed that handle variation in timing of treatment initiation, such as the generalized synthetic control method (Xu, 2017).
The methods considered in this paper rely on the conditional parallel trends assumption. This assumption is violated if there is a confounder that has either a time-varying effect on the outcome or a time-varying difference between the treatment and control group which has not been appropriately adjusted for (Zeldow and Hatfield, 2021). Using two recently proposed methods (Bilinski and Hatfield, 2018, Rambachan and Roth, 2023) we found some evidence against this assumption. Since the practice-level characteristics used in our analyses were only estimates of the characteristics of registered patients to each practice, there would likely have been some residual confounding. Using model (6) for our two-by-two DiD analysis, we control for measured time-varying differences in characteristics between groups but assume that the characteristics have a constant effect on antibiotic prescribing rates over time. Using the staggered DiD approach, we no longer control for time-varying differences in characteristics between groups but do allow for measured pre-adoption characteristics to have a time-varying effect on antibiotic prescribing rates. Although these methods rely on slightly different assumptions, the ATT estimate from our two-by-two DiD analysis is almost identical to the corresponding GTATT estimate from our staggered DiD analysis (see Section S.4 of the online supplementary materials for an additional comparison). However, the standard errors obtained differ and further assessment of the inference procedures of these methods is required.
Since our control pool included practices operating other OC systems, we estimated the effects of adoption of the _askmyGP_ OC system compared to not adopting that particular system. It is difficult to determine how generalisable these results may be to other OC systems since they all vary in functionality. Also, these effects would plausibly be smaller than the general effects of adoption of an OC system compared to not adopting any OC system. Our analyses can be considered as intention to treat (ITT) analyses since we included _askmyGP_ practices irrespective of whether they used the system to facilitate total digital triage or potentially stopped using the system. Further research is needed to assess potential impacts of adoption of OC systems and alternative statistical methods, such as the generalized synthetic control method (Xu, 2017), should be applied to assess the validity of our results.
### Data availability
The data used in this study was generated by combining public domain data and practice-level data on use of the _askmyGP_ OC system provided by the Health Foundation. There is a Data Sharing Agreement in place between _askmyGP_ and the Health Foundation's Improvement Analytics Unit giving the unit permission to carry out research using the data provided. This data is not personal sensitive data; however, it is commercially sensitive and so cannot be provided by the authors.
### Funding
KE is funded by the NIHR [NIHR Pre-Doctoral Fellowship Programme (NIHR302010)]. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. RHK is funded by UK Research and Innovation (Future Leaders Fellowship MR/S017968/1).
### References
* NHS ENGLAND. 2016. _NHS England launches national programme to combat antibiotic overusage_ [Online]. Available: https://www.england.nhs.uk/2016/03/antibiotic-overusage/ [Accessed 2023].
* NHS ENGLAND. 2020. _Advice on how to establish a remote 'total triage' model in general practice using online consultations_ [Online]. Available: https://www.england.nhs.uk/coronavirus/documents/advice-on-how-to-establish-a-remote-total-triage-model-in-general-practice-using-online-consultations/ [Accessed 2023].
* OFFICE FOR NATIONAL STATISTICS. 2021. _Lower layer Super Output Area population estimates_ [Online]. GOV.UK. Available: https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/datasets/lowersuperoutputareamidyearpopulationestimates [Accessed 2023].
* OFFICE FOR NATIONAL STATISTICS. 2022. _Historic census data_ [Online]. GOV.UK. Available: https://www.ons.gov.uk/census/historiccensusdata [Accessed 2023].
* OPENPRESCRIBING. 2023a. _Antibacterial drugs_ [Online]. Bennett Institute for Applied Data Science, University of Oxford. Available: https://openprescribing.net/bnf/0501/ [Accessed 2023].
* OPENPRESCRIBING. 2023b. _OpenPrescribing_ [Online]. Bennett Institute for Applied Data Science, University of Oxford. Available: https://openprescribing.net/about/ [Accessed 2023].
* RAMBACHAN, A. & ROTH, J. 2023. A More Credible Approach to Parallel Trends. _The Review of Economic Studies_, rdad018.
* REHKOPF, D. H. & BASU, S. 2018. A New Tool for Case Studies in Epidemiology-the Synthetic Control Method. _Epidemiology_, 29, 503-505.
* RIOS-AVILA, F. 2023. _csdid_ [Online]. GitHub. Available: https://github.com/friosavila/stpackages/tree/main/csdid [Accessed 2023].
* SALISBURY, H. 2021. _E-consultations are increasing the GP workload_ [Online]. BMJ. Available: https://www.bmj.com/content/375/bmj.n2867 [Accessed 2023].
* SUN, L. & ABRAHAM, S. 2021. Estimating dynamic treatment effects in event studies with heterogeneous treatment effects. _Journal of Econometrics_, 225, 175-199.
* UK HEALTH SECURITY AGENCY. 2022. _English surveillance programme for antimicrobial utilisation and resistance (ESPAUR) Report 2021 to 2022_ [Online]. GOV.UK. Available: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1118310/ESPAUR-report-2021-to-2022.pdf [Accessed 2023].
* WOOLDRIDGE, J. M. 2021. _Two-Way Fixed Effects, the Two-Way Mundlak Regression, and Difference-in-Differences Estimators_ [Online]. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3906345 [Accessed 2023].
* WORLD HEALTH ORGANISATION. 2020. _Antibiotic resistance_ [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/antibiotic-resistance [Accessed 2023].
* XU, Y. 2017. Generalized Synthetic Control Method: Causal Inference with Interactive Fixed Effects Models. _Political Analysis_, 25, 57-76.
* ZELDOW, B. & HATFIELD, L. A. 2021. Confounding and regression adjustment in difference-in-differences studies. _Health Services Research_, 56, 932-941.
**Supplemental Material for "Investigating Impacts of Health Policies Using Staggered Difference-in-Differences: The Effects of Adoption of an Online Consultation System on Prescribing Patterns of Antibiotics"**
**Kate B. Ellis1, Ruth H. Keogh1, Geraldine M. Clarke2, Stephen O'Neill3**
_Address for correspondence:_ Ruth H. Keogh, Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK, WC1E 7HT, E-mail: [email protected]
**Structure of the document**
This document provides supplementary material to the paper "Investigating Impacts of Health Policies Using Staggered Difference-in-Differences: The Effects of Adoption of an Online Consultation System on Prescribing Patterns of Antibiotics". Section S.1 provides the inclusion criteria for GP practices. Section S.2 provides details of the estimation method's identification assumptions. Section S.3 provides further details of the methods used to assess the conditional parallel trends assumption. Section S.4 gives some additional results of the application.
### S.1 Inclusion criteria
GP practices included in the analyses fulfilled the following inclusion criteria:
1. Be an English GP practice.
2. Be active in every month between May 2017 and March 2022. _If practices had no registered patients and/or had missing values of the number of items of antibiotics prescribed, we took this as an indicator that the practice was not active in that month._
3. Have a practice list size of at least 100 patients in every month between May 2017 and March 2022.
4. Not be another prescribing setting that offers atypical services to standard GP practices. _This implicitly excludes walk-in centres, extended access services and community health services._
5. Not have used _askmyGP_ version 3 prior to March 2019. _Since some practices may have used askmyGP version 2 prior to the launch of version 3 in July 2018, to try to ensure that we only analysed practices where we could be fairly certain of their date of adoption of any version of askmyGP, we excluded 32 practices that were using askmyGP version 3 in any month prior to March 2019._
6. Not have first started using _askmyGP_ version 3 after February 2021. _Practices that adopted askmyGP late (defined as after February 2021) were very likely to have adopted an alternative OC system before February 2021, as there was strong guidance by the NHS at the start of the COVID-19 pandemic in March 2020 that all practices must have access to an OC system (NHS England, 2020). We did not have access to data on adoption of other OC systems and therefore could not exclude such practices directly. We excluded 6 practices that first started using askmyGP version 3 between March 2021 and February 2022._
### S.2 Identification assumptions
In this section we give further details of each of the estimation method's identification assumptions. The two-by-two DiD method described in Section 3.2 relies on assumptions 1, 2 and 3. The doubly robust method of the staggered DiD approach described in Section 3.3 relies on assumptions 1, 2, 3 and 4.
We first introduce the _unconditional_ parallel trends assumption, which is commonly relied upon in DiD analyses;
\[E\big{[}Y_{t}(0)-Y_{t-1}(0)\big{|}G_{g}=1\big{]}=E[Y_{t}(0)-Y_{t-1}(0)|C=1]\text { for each }g\text{ and }t\in\{2,...,T\}\]
This assumption states that if the group that initiated treatment in time period \(g\) had in fact never been treated, their expected outcomes would have followed a parallel path over time to the expected outcomes for those that never became treated. It is violated if there exists a confounder that has either a time-varying effect on the outcome or a time-varying difference between the treatment and control group (Zeldow and Hatfield, 2021). If one has measured and can appropriately adjust for such confounders, as we do in our analyses, an unbiased estimate of the ATT(g,t) can be obtained under the slightly weaker _conditional_ parallel trends assumption.
**Assumption 1: Conditional Parallel Trends**
\[E\big{[}Y_{t}(0)-Y_{t-1}(0)\big{|}\mathbf{X},G_{g}=1\big{]}=E[Y_{t}(0)-Y_{t-1}(0) \big{|}\mathbf{X},C=1]\]
\[\text{for each }g\text{ and }t\in\{2,...,T\}\]
where \(\mathbf{X}=\mathbf{(X_{t-1},X_{t})}\) for the two-by-two DiD method and \(\mathbf{X}=\mathbf{X_{g-1}}\) for the staggered DiD approach. This assumption states that if the group that initiated treatment in time period \(g\) had in fact never been treated, their expected outcomes, conditional on measured covariates, would have followed a parallel path over time to the conditional expected outcomes for those that never became treated.
**Assumption 2: No Anticipation**
\[E\big{[}Y_{t}(g)\big{|}\mathbf{X},G_{g}=1\big{]}=E\big{[}Y_{t}(0)\big{|}\mathbf{X},G_ {g}=1\big{]}\text{ for each }g\text{ and }t\in\{1,...,T\}\text{ with }t<g\]
where \(\mathbf{X}=\mathbf{X}_{t}\) for the two-by-two DiD method and \(\mathbf{X}=\mathbf{X_{g-1}}\) for the staggered DiD approach. This assumption states that for those who initiate treatment in time period \(g\), in time periods before \(g\), conditional on measured covariates, the expected untreated potential outcome is equal to the expected potential outcome under the observed treatment initiation time \(g\).
**Assumption 3: Consistency**
\[Y_{t}(g)=Y_{t}\text{ if }G_{g}=1\text{ and }Y_{t}(0)=Y_{t}\text{ if }C=1\text{ for each }g\text{ and }t\in\{1,...,T\}\]
This assumption states that observed outcomes are equal to the potential outcomes for the observed treatment initiation time, including for those that are never treated, for all time periods.
**Assumption 4: Overlap**
For each \(g\), there exists some \(\varepsilon>0\) such that \(P\big{(}\ G_{g}=1\big{)}>\varepsilon\) and
\[p_{g}(X)=P\big{(}G_{g}=1\big{|}\mathbf{X_{g-1}},\ G_{g}+C=1\big{)}<1-\varepsilon\ \text{almost surely}\]
This assumption states that the probability of initiating treatment in time period \(g\) is positive, and that the propensity scores are uniformly bounded away from 1.
### S.3 Methods for assessing the conditional parallel trends assumption
In this section we give further details on each of the methods used to assess the conditional parallel trends assumption. Researchers often test for differences in expected outcome trends between treatment and control groups, conditional on measured covariates, in periods prior to treatment and use this to assess the plausibility that trends would be parallel in the absence of treatment. These tests often have low power and as such ATT estimates may be biased by pre-existing trends that are not detected with substantial probability (Roth, 2022).
Bilinski and Hatfield (2018) recommend exploring the potential impact of a conditional parallel trends violation on the ATT estimate itself (Bilinski and Hatfield, 2018). Under their approach, one compares the ATT estimate produced under the conditional parallel trends assumption with an ATT estimate that is produced when a linear trend difference between groups that is extrapolated from pre-treatment periods is allowed. When all treated units start treatment at the same time (\(T_{0}\)) and there is more than one pre-treatment period, their approach can be implemented by comparing ATT estimates (\(\beta\) and \(\beta^{\prime}\)) from two two-way fixed effects regression models;
\[Y_{it}=\omega+\alpha_{i}+\varphi_{t}+\sum_{k=T_{0}}^{T}\beta_{k}\mathbb{I}\big{(}t=k\ \cap\ G_{iT_{0}}=1\big{)}+\mathbf{\gamma}^{T}\mathbf{X}_{it}+\varepsilon_{it} \tag{S6}\]
\[Y_{it}=\omega^{\prime}+\alpha^{\prime}_{i}+\varphi^{\prime}_{t}+\sum_{k=T_{0}}^{T}\beta^{\prime}_{k}\mathbb{I}\big{(}t=k\ \cap\ G_{iT_{0}}=1\big{)}+\theta G_{iT_{0}}t+\mathbf{\gamma}^{T}\mathbf{X}_{it}+\varepsilon^{\prime}_{it} \tag{S7}\]
These model specifications produce treatment effect estimates for each post-treatment period ensuring that the linear trend difference (\(\theta\) in (\(S7\))) is estimated only using data in pre-treatment periods. The ATT from each model is then the simple average of these period effects; \(\beta=\frac{1}{T-T_{0}+1}\sum_{k=T_{0}}^{T}\beta_{k}\) and \(\beta^{\prime}=\frac{1}{T-T_{0}+1}\sum_{k=T_{0}}^{T}\beta^{\prime}{}_{k}\). They recommend using the confidence interval for the difference in ATTs between the two models (\(\beta-\beta^{\prime}\)) to assess the potential impact of the violation.
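As a rough illustration of how models (S6) and (S7) could be fit and compared in practice, the Python sketch below uses statsmodels on a synthetic long-format panel. It is an illustration only, not the code used for our analysis, and the column and covariate names (`y`, `practice`, `time`, `treated`, `imd_score`) are hypothetical placeholders.

```python
# Sketch of Bilinski & Hatfield's comparison of two TWFE models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_practice, T0, T = 40, 4, 6                      # hypothetical panel dimensions
df = pd.DataFrame([(i, t, int(i < 20)) for i in range(n_practice) for t in range(1, T + 1)],
                  columns=["practice", "time", "treated"])
df["imd_score"] = rng.normal(size=len(df))        # hypothetical covariate
df["y"] = rng.normal(size=len(df)) + 2.0 * df["treated"] * (df["time"] >= T0)

# Period-specific treatment dummies I(t = k and treated), for post periods only.
for k in range(T0, T + 1):
    df[f"d_{k}"] = ((df["time"] == k) & (df["treated"] == 1)).astype(int)
post_terms = [f"d_{k}" for k in range(T0, T + 1)]

rhs = "C(practice) + C(time) + " + " + ".join(post_terms + ["imd_score"])

# Model (S6): conditional parallel trends assumed.
m0 = smf.ols(f"y ~ {rhs}", data=df).fit(cov_type="cluster",
                                        cov_kwds={"groups": df["practice"]})
# Model (S7): additionally allow a group-specific linear trend, identified from
# pre-treatment periods only (post periods are absorbed by the d_k dummies).
m1 = smf.ols(f"y ~ {rhs} + treated:time", data=df).fit(cov_type="cluster",
                                                       cov_kwds={"groups": df["practice"]})

beta = m0.params[post_terms].mean()               # ATT under parallel trends
beta_prime = m1.params[post_terms].mean()         # ATT allowing a linear trend difference
print(beta, beta_prime, beta - beta_prime)        # the CI of the difference is the key quantity
```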
For methods that estimate treatment effects according to time since treatment initiation such as the staggered DiD approach, Rambachan and Roth (2023) recommend presenting confidence intervals for the average effects of having initiated treatment \(e\) periods ago (\(\theta_{length}(e)\), equation (15)) under different violations of the assumption as a sensitivity analysis (Rambachan and Roth, 2023). One of their methods extrapolates pre-treatment trend differences to post-treatment periods while restricting the slope of the trend difference to not change by more than some positive constant \(M\) across all consecutive periods. They suggest that this restriction would be reasonable when groups could be differentially affected by smoothly evolving trends that would plausibly continue after treatment initiation. Another of their methods allows for violations between consecutive post-treatment periods that are no more than some positive constant \(\bar{M}\) times the maximum violation between consecutive pre-treatment periods. They suggest that this restriction would be reasonable when there could be confounding shocks that would plausibly be of a similar magnitude in pre- and post-treatment periods. They recommend reporting the values of \(M\) and \(\bar{M}\) such that the effect estimates are no longer statistically significant at the 5% level, which they refer to as breakdown values.
### S.4 Additional application results
We now give further details on the results of our application of Bilinski and Hatfield's (2018) approach to assess the conditional parallel trends assumption for our two-by-two DiD analysis, which we carried out at the quarterly level using quarterly mean prescribing rates and practice-level characteristics. Table D1 gives estimates and 95% confidence intervals for the ATT produced under the conditional parallel trends assumption (\(\beta\)), the ATT produced when a linear trend difference is allowed (\(\beta^{\prime}\)), the linear trend difference between groups (\(\theta\)) and the difference in ATTs (\(\beta-\beta^{\prime}\)). The 95% confidence interval for the difference in ATTs includes values greater in magnitude than the ATT estimate produced under the conditional parallel trends assumption. Therefore, when allowing for a linear violation, we cannot rule out changes in the ATT estimate of at least the magnitude of the ATT estimate when assuming no violation of the conditional parallel trends assumption. However, we can rule out considerable changes in the ATT estimate.
_Table D1: Estimates and 95% confidence intervals for the ATT produced under the conditional parallel trends assumption (\(\beta\)), the ATT produced when a linear trend difference is allowed (\(\beta^{\prime}\)), the linear trend difference between groups (\(\theta\)) and the difference in ATTs (\(\beta-\beta^{\prime}\)) (items per 1,000 patients)._
| Parameter | Estimate | 95% CI |
| --- | --- | --- |
| ATT produced under the conditional parallel trends assumption (\(\beta\)) | 2.4 | [0.1, 4.6] |
| ATT produced when a linear trend difference is allowed (\(\beta^{\prime}\)) | 2.3 | [-0.5, 5.1] |
| Linear trend difference between groups (\(\theta\)) | 0.0 | [-0.5, 0.5] |
| Difference in ATTs between the two models (\(\beta-\beta^{\prime}\)) | 0.1 | [-2.9, 3.2] |
We now give further details on the results of our application of Rambachan and Roth's (2023) approach to assess the conditional parallel trends assumption for our staggered DiD analysis, which was done using the _HonestDiD_ package in R (Rambachan, 2022). Table D2 gives the breakdown values of M and \(\bar{\text{M}}\) and the corresponding 95% confidence intervals at the breakdown values for each of the average effects of having initiated treatment \(e\) periods ago. For instance, if we allow for violations of the conditional parallel trends assumption between consecutive post-adoption periods that are no more than 90% of the maximum violation in consecutive pre-adoption periods, we would no longer conclude that there is a significant effect of adoption of the system at two years since adoption at the 5% significance level.
_Table D2: Breakdown values of M and \(\bar{\text{M}}\), and the corresponding 95% confidence intervals under imposed restrictions at the breakdown values for each of the average effects of having initiated treatment \(e\) periods ago (items per 1,000 patients)._
| | \(e=0\) | \(e=1\) | \(e=2\) |
| --- | --- | --- | --- |
| Breakdown value of M | 0 | 0 | 0 |
| 95% CI under restriction that the slope of the extrapolated pre-trend can change by no more than the breakdown value of M | [-0.1, 1.8] | [-0.4, 2.6] | [-0.8, 4.8] |
| Breakdown value of \(\bar{\text{M}}\) | 1.1 | 0.9 | 0.9 |
| 95% CI under restriction of no more violation than the breakdown value of \(\bar{\text{M}}\) times the maximum violation in pre-treatment periods | [-0.1, 2.5] | [0.0, 3.8] | [-0.1, 6.6] |
To make an additional comparison between the two-by-two DiD method (applied in Section 4.2) and the doubly robust method of the staggered DiD approach (applied in Section 4.3), we carried out a two-by-two DiD analysis to estimate the effect of adoption of the system for the early
adopters in their second year of access to the system. The first time period was again March 2018-February 2019 (\(t=1\)), where neither the early adopters nor the never adopters had adopted the system. The second time period for this analysis was March 2021-February 2022 (\(t=4\)), the second full year after the early adopters had adopted the system. The resulting ATT estimate (ATT=3.0 per 1,000 patients, 95% CI=[0.1, 5.8], p-value=0.039) was close to the corresponding GTATT estimate from our staggered DiD analysis (_ATT_(2,4)=3.2 per 1,000 patients, 95% CI=[1.3, 5.1], p-value=0.001, Table 2). However, the standard errors produced under these two approaches differ again.
|
2302.14323 | Read Pointer Meters in complex environments based on a Human-like
Alignment and Recognition Algorithm | Recently, developing an automatic reading system for analog measuring
instruments has gained increased attention, as it enables the collection of
numerous state of equipment. Nonetheless, two major obstacles still obstruct
its deployment to real-world applications. The first issue is that they rarely
take the entire pipeline's speed into account. The second is that they are
incapable of dealing with some low-quality images (i.e., meter breakage, blur,
and uneven scale). In this paper, we propose a human-like alignment and
recognition algorithm to overcome these problems. More specifically, a Spatial
Transformed Module(STM) is proposed to obtain the front view of images in a
self-autonomous way based on an improved Spatial Transformer Networks(STN).
Meanwhile, a Value Acquisition Module(VAM) is proposed to infer accurate meter
values by an end-to-end trained framework. In contrast to previous research,
our model aligns and recognizes meters totally implemented by learnable
processing, which mimics human's behaviours and thus achieves higher
performances. Extensive results verify the good robustness of the proposed
model in terms of the accuracy and efficiency. | Yan Shu, Shaohui Liu, Honglei Xu, Feng Jiang | 2023-02-28T05:37:04Z | http://arxiv.org/abs/2302.14323v2 | Read Pointer Meters in complex environments based on a Human-like Alignment and Recognition Algorithm
###### Abstract
Recently, developing an automatic reading system for analog measuring instruments has gained increased attention, as it enables the collection of numerous states of equipment. Nonetheless, two major obstacles still obstruct its deployment to real-world applications. The first issue is that existing systems rarely take the entire pipeline's speed into account. The second is that they are incapable of dealing with some low-quality images (i.e., meter breakage, blur, and uneven scale). In this paper, we propose a human-like alignment and recognition algorithm to overcome these problems. More specifically, a Spatial Transformed Module (STM) is proposed to obtain the front view of images in an autonomous way based on an improved Spatial Transformer Network (STN). Meanwhile, a Value Acquisition Module (VAM) is proposed to infer accurate meter values with an end-to-end trained framework. In contrast to previous research, our model aligns and recognizes meters entirely through learnable processing, which mimics human behaviour and thus achieves higher performance. Extensive results verify the robustness of the proposed model in terms of accuracy and efficiency. The code and the datasets will be available at [https://github.com/shuyansy/A-detection-and-recognition-pipeline-of-complex-meters-in-wild](https://github.com/shuyansy/A-detection-and-recognition-pipeline-of-complex-meters-in-wild).
Analog measuring instruments, Pointer meter reading, Spatial Transformed Module, Value Acquisition Module
## 1 Introduction
Complex industrial environments often involve harsh conditions such as radiation, toxic substances, and high temperatures, so it is necessary to inspect the production condition with the help of instruments to ensure safety[1]. Traditionally, the acquired data are read manually by humans, who are capable of deriving precise readings from complex meters in a variety of shapes, forms, and styles, despite never having seen the meter in question. However, the manual method is labor intensive and time consuming, so it is of great practical significance to rely on inspection robots and computer vision technology[2, 3, 4, 5] for automatic meter reading.
Substation meters are now classified as either digital or pointer meters. While reading digital meters can be considered an OCR task and is relatively simple to accomplish using text spotting techniques[6, 7, 8, 9], reading pointer meters presents a different and more difficult problem: there are major visual changes between meter faces, the camera viewpoint has a significant effect on their depicted shape and numbering location, and the existence of shadows, meter breakage, and specular reflections makes the pointer hands harder to interpret. While this issue has been around for a long time, few previous solutions have been capable of reliably obtaining readings from meters, except in extremely limited circumstances. Additionally, it is difficult for researchers to work on this problem due to the lack of reliable training and evaluation benchmarks.
Existing automatic meter reading systems[10, 11,
12, 13], according to relevant literature, include the following pipelines: To begin, the meter's pure area is detected using conventional neural network-based detection algorithms or image processing techniques; then the captured target is aligned to a front view by perspective transform method. Lastly, meter values can be obtained by meter component (the pointer and the scale) retrieval and meter number recognition. However, most of these methods suffer from two main problems. First, the alignment process is typically time-consuming due to its intricate point-matching steps, which hinders the overall efficiency of the system. Second, their reading model is not robust; it consists of isolated and independent modules for meter component retrieval and number recognition, which are unaware of their interdependence, resulting in poor accuracy. Therefore, "how to design an algorithm for efficient alignment and robust recognition of pointer meters" remains largely unsolved.
To address these issues, we propose a novel human-like alignment and recognition algorithm, which simplifies the meter reading pipeline as shown in Fig 1. To be more precise, we propose a novel Spatial Transformed Module (STM) for alignment via implicitly learning the homography transformation, which is heavily inspired by the Spatial Transformer Networks (STN)[14]. STM aligns meters more efficiently than previous morphological conversion methods by discarding the point-matching process. Additionally, a Value Acquisition Module (VAM) is established as a unified framework of meter component retrieval and meter number recognition, simulating the structure of an end-to-end text spotter. By excavating the relationship between meter components and meter numbers, VAM can learn a richer representation and thus can read precise meter values from low-quality images. As shown in Fig 2, on the MC1296 dataset we propose, STM runs at 50 FPS, which is 5 times faster than the conventional alignment method. Meanwhile, VAM can handle difficult data such as meter breakage, blur and uneven scale.
In this paper, we make the following contributions:
(i) We design a unified framework involving detection, alignment and recognition stages. The detection can simply be an off-the-shelf object detection model. The alignment stage involves a deep neural network which introduces an improved STN to regress homography transformation parameters implicitly. At the recognition stage, we are the first to establish an end-to-end architecture to tightly couple meter component
Figure 2: (a) shows the efficiency of our STM for meter alignment, which is 5 times faster than the conventional perspective transform method. (b) shows our VAM (bottom line) can read more accurate values in some low-quality images than prior methods (top line).
retrieval and meter number recognition, boosting both the accuracy and efficiency of the pointer meter reading.
(ii) We propose a new benchmark dataset called Meter_Challenge (MC1296) which contains 1296 images captured in scene by automatic robots. MC1296 is organized in a tree structure, containing images, annotations and evaluation metrics for different tasks (meter detection, meter alignment, and meter recognition) from top to bottom.
(iii) Extensive experiments verify the effectiveness and robustness of the method we propose.
The rest of this paper is organized as follows. The related background knowledge is provided in Section 2, including the previous pointer meter reading pipelines, the Spatial Transformer Networks (STN) and the end-to-end text spotting methods highly related to our works. Section 3 introduces the implementation process of the proposed method. In Section 4, the proposed method is verified by extensive simulation experiments and ablation studies. The conclusions of this paper are summarized in Section 5.
## 2 Related Works
We commence this section by reviewing major pointer meter reading frameworks. Additionally, we discuss the research on STN and end-to-end text spotting methods which are highly relevant to our works.
### Pointer meter reading frameworks
Numerous advances[10, 11, 12, 13, 15, 16, 17, 18] have been made in the reading of pointer meters over the last few years. The existing frameworks are generally divided into three stages: meter detection, meter alignment, and meter recognition. Traditional algorithms[18] such as template matching and the table lookup method are used in meter detection. To address this issue with complex backgrounds, some object detection methods such as Faster RCNN[13] have been introduced. In order to calibrate the camera angle to get a front view image, perspective transform techniques[13, 11] are applied by calculating a transformation matrix determined by point matching. Image processing methods[19] also propose using the image subtraction method or the Hough Transform algorithm to extract the pointer for meter recognition. Additionally, machine learning and deep learning are used to improve reading accuracy; for example, He et al.[16] improve the Mask RCNN[20] method for pointer segmentation. Following that, final values can be determined by calculating the pointer angle and meter number output. The majority of the aforementioned approaches are able to read pointer meters, but few of them can balance accuracy and speed due to complex post-processing in meter alignment[13] or inadequate visual representations in meter recognition.
### Spatial Transformer Networks(STN)
In contrast to the conventional perspective transform method, which explicitly calculates the transformation matrix, STN[14] introduces a novel learnable module that enables spatial manipulation of data within the network. STN is advantageous for a wide variety of computer vision tasks due to its efficiency and flexibility. ASTER[21] consists of a rectification network and a recognition network that can deal with text that is distorted or has an irregular layout. Lee et al.[22] propose Image-and-Spatial Transformer Networks (ISTNs) for downstream image registration optimization. Additionally, Yang et al.[23] introduce a clock alignment architecture based on STN, which motivates us to develop a more efficient meter alignment module.
### End-to-end text spotters
To spot text in images, a straightforward two-stage approach is to cascade an existing detector and recognizer sequentially. However, due to the lack of complementarity between the detector and the recognizer, such pipelines suffer from low efficiency and accuracy. To mitigate this problem, end-to-end trainable neural networks for text spotting have been attempted, with state-of-the-art performance achieved. Li et al. [24] first build a unified end-to-end framework that simultaneously localizes and recognizes text with a single forward pass, with positive results achieved in the horizontal text spotting task. Benefiting from a convolution sharing strategy, FOTS [25] and EAA [26] pool multi-oriented text regions from the feature map by designing the RoIRotate and Text-Alignment layers, respectively. Unfortunately, few researchers incorporate end-to-end text spotters into their pointer meter recognition frameworks.
Our work is structured similarly to existing frameworks for pointer meter reading. To increase the applicability of previous work, we replace the traditional perspective transform method by an improved STN and then create an end-to-end meter recognition module for meter component retrieval and meter number recognition.
## 3 Methods
The purpose of this paper is to design an algorithm for efficient alignment and robust recognition of pointer meters. To achieve this goal, we establish a unified framework, which is shown in Fig 3. Our proposed architecture accepts an image as input and then performs detection, alignment, and recognition sequentially. It is noteworthy that our STM (see Sec.3.2) can directly transform the detected meter into an aligned view without any post-processing steps. Meanwhile, the VAM (see Sec.3.3) we proposed can learn rich visual representation by excavating the relationship between component retrieval and number recognition.
### Meter Detection
Cropping meter regions prior to recognition is necessary to eliminate background interference. To accomplish this, some traditional image processing techniques such as Hough Circle Detection and Template Matching are used, both of which have shortcomings in some low-quality images. At the moment, object detection networks are used to detect and crop the meter, as follows.
Figure 3: The proposed framework of the pointer meter recognition. YDM can detect meter targets and crop meter regions into STM, where aligned views can be obtained. VAM can output meter values accurately and efficiently.
\[I_{det}=\Phi_{det}(I;\Theta_{det})\in\mathbb{R}^{N\times 3\times h\times w} \tag{1}\]
where \(I\) is the given unlabeled image, while \(\Phi_{det}\) and \(\Theta_{det}\) represent the detection function and its learnable parameters, respectively.
In principle, detection can be performed by any off-the-shelf object detector. However, to reduce the efficiency cost and handle small meter targets, we propose a YOLO-based Detection Module (YDM) built on YOLO-v5[27], which has achieved promising performance in many tasks. To achieve better performance in our tasks, where data is scarce and targets are small, we apply a multi-scale training strategy and artificially augment the images by copy-pasting some small objects. The performance of YDM can be seen in Sec. 4.
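As a concrete, simplified illustration of this detect-then-crop step, the snippet below uses a stock YOLOv5 model from torch.hub in place of trained YDM weights; the file name and confidence threshold are placeholders, not values from the paper.

```python
# Sketch only: detect meters in a scene image and crop them for the STM.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")     # stand-in for YDM weights

img = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
detections = model(img).xyxy[0]                              # (n, 6): x1, y1, x2, y2, conf, cls

meter_crops = []
for x1, y1, x2, y2, conf, cls in detections.tolist():
    if conf < 0.5:                                           # assumed confidence threshold
        continue
    meter_crops.append(img[int(y1):int(y2), int(x1):int(x2)])  # I_det passed to the STM
```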
### Meter Alignment
**Motivation.** The detected pure meter image could be directly passed to a module for reading recognition. This is typically not ideal for two reasons: first, due to the limitations of the localisation module; and second, even when the meter is properly localised, it can be hard to read at times due to the viewpoint's interference. Previous methods apply directly perspective transform to calibrate the camera angle to get a front view image as shown in follows:
\[\begin{split}(x,y,w^{\prime})=(u,v,w)\cdot T=\\ (u,v,w)\cdot\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}\end{split} \tag{2}\]
\[\begin{split}(X,Y)&=(\frac{x}{w^{\prime}},\frac{y}{w^{\prime}})\\ (U,V)&=(\frac{u}{w},\frac{v}{w})\end{split} \tag{3}\]
where \((U,V)\) represents the coordinate of a point in the original image, \((X,Y)\) is the coordinate of the corresponding point in the transformed image, \((u,v,w)\) and \((x,y,w^{\prime})\) are the homogenous space representation of \((U,V)\) and \((X,Y)\), respectively. By matching four feature points between two images, transform matrix \(T\) is determined. Their methods, however, suffer primarily from complex points matching algorithms, which are time-consuming and not very robust. This drives us to design a more efficient and stronger module for meter alignment.
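For reference, this conventional alignment takes only a few lines of OpenCV once the four point correspondences are known; the coordinates below are made-up placeholders, and it is precisely the point-matching step that produces them which is slow and fragile in practice.

```python
# Conventional perspective-transform baseline (Eqs. (2)-(3)) with OpenCV.
import cv2
import numpy as np

meter_crop = cv2.imread("meter_crop.jpg")                    # placeholder input image

src_pts = np.float32([[12, 40], [220, 28], [230, 215], [8, 200]])   # hypothetical matched points
dst_pts = np.float32([[0, 0], [255, 0], [255, 255], [0, 255]])      # canonical front-view corners

T = cv2.getPerspectiveTransform(src_pts, dst_pts)            # 3x3 transform matrix T
aligned = cv2.warpPerspective(meter_crop, T, (256, 256))     # front-view image
```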
**Revisiting vanilla STN.** Different from the perspective transform, which calculates the transformation matrix by point matching, STN can transform the detected meter to a fronto-parallel view by learned homography transformation parameters. Specifically, given the output \(I_{det}\) of YDM, STN establishes a mapping \(\Phi_{stn}\) by predicting the homography transformation \(H\) with 8 degrees of freedom, and \(\Phi_{sam}\) represents the Differentiable Image Sampling (DIS) operation that obtains the canonical view of \(I_{det}\) by bilinear interpolation:
\[\begin{split} H=\Phi_{stn}(I_{det})\in\mathbb{R}^{3\times 3}\\ I_{align}=\Phi_{sam}(I_{det},H)\in\mathbb{R}^{3\times h\times w }\end{split} \tag{4}\]
Therefore, how to predict accurate homography transformation \(H\) is a key issue.
**Spatial Transformed Module(STM).** A direct idea is to regress \(H\) against the ground truth \(\hat{H}\) in a supervised way. Nonetheless, based on our findings and rigorous testing, the deep network fails to learn the explicit parameters of \(H\) for the following reasons: (i) the training data is limited relative to the deep CNN's huge number of parameters; (ii) \(H\)'s parameters have a large range of values, making the regression difficult to optimise. To circumvent these problems, we model the implicit spatial transformation relationship between images instead of regressing \(H\) directly.
Specifically, for an \(\hat{I}_{det}\) in the training set, we first annotate its inner dial region with a binary mask map. Then, for various meter forms, we match four pairs of feature points to determine the real \(\hat{H}\). For an irregular ellipse, the endpoints of the major axis and minor axis are utilized as the initial points, while the corresponding points are defined by the intersection of the major axis, the minor axis, and the circumcircle. For a rectangular shape, \(\hat{H}\) can be calculated by mapping the vertices of the rectangle directly to the vertices of the image. Then we can get the aligned image \(\hat{I}_{align}\) by perspective transform:
\[\hat{I}_{align}=warp(\hat{I}_{det},\hat{H}) \tag{5}\]
The vertex coordinate offsets \(\hat{\delta}_{c}\) between \(\hat{I}_{det}\) and \(\hat{I}_{align}\) can be obtained, which is the training objective of STM implemented by Mean-Squared (MSE) Loss:
\[L_{align}=\sum_{i}(\delta_{ci}-\hat{\delta}_{ci})^{2} \tag{6}\]
where \(i\) is the index of the coordinates. Therefore, the algorithm of STM can be summarized as follows:
\[\begin{split}\delta_{c}&=\Phi_{stm}(I_{det})\in\mathbb{ R}^{4\times 2}\\ H&=warp\_inv(I_{det},I_{det}+\delta_{c})\in\mathbb{R}^{3 \times 3}\\ I_{align}&=\Phi_{sam}(I_{det},H)\in\mathbb{R}^{3\times h \times w}\end{split} \tag{7}\]
In our training process, we use ResNet18[28] to extract the features of \(I_{det}\), and through the propagation of the network, an accurate \(H\) and canonical images can be acquired.
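The PyTorch sketch below illustrates one plausible reading of Eq. (7); it is not the authors' released implementation, the geometric conventions (corner ordering, output size) are assumptions, and it relies on the kornia library for the differentiable homography and warp.

```python
# Illustrative STM: a ResNet-18 regresses the four corner offsets delta_c; a
# homography is recovered from the corner correspondences and applied with
# differentiable sampling.
import torch
import torch.nn as nn
import torchvision
import kornia


class STM(nn.Module):
    def __init__(self, out_size=256):
        super().__init__()
        net = torchvision.models.resnet18(pretrained=True)
        net.fc = nn.Linear(net.fc.in_features, 8)            # 4 corners x (dx, dy)
        self.net, self.out_size = net, out_size

    def forward(self, x):                                     # x: (B, 3, H, W) meter crops
        b, _, h, w = x.shape
        delta = self.net(x).view(b, 4, 2)                     # predicted offsets, Eq. (7)
        src = x.new_tensor([[0., 0.], [w - 1., 0.], [w - 1., h - 1.], [0., h - 1.]])
        dst = x.new_tensor([[0., 0.], [self.out_size - 1., 0.],
                            [self.out_size - 1., self.out_size - 1.], [0., self.out_size - 1.]])
        src = src.unsqueeze(0).repeat(b, 1, 1) + delta        # shifted dial corners
        dst = dst.unsqueeze(0).repeat(b, 1, 1)
        H = kornia.geometry.get_perspective_transform(src, dst)           # (B, 3, 3)
        aligned = kornia.geometry.warp_perspective(x, H, dsize=(self.out_size, self.out_size))
        return aligned, delta


# Training objective, Eq. (6): mean-squared error against the annotated offsets.
# loss = ((delta - delta_gt) ** 2).mean()
```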
### Meter Recognition
**Overall Design.** What is the best way to read meters like a human? Key meter elements like the pointer, scales, and number were predicted in previous methods to achieve this goal. However, they tended to create independent modules to handle different component and number, resulting in a suboptimal solution for meter recognition. We propose a unified framework called Value Acquisition Module (VAM) that consists of meter component retrieval branch and meter number recognition branch to excavate a deep relationship between them. As illustrated in Fig 3, we apply ResNet18 as the backbone and create two separate feature merging modules to form a pair of complementary branches. Specifically, upsampling and pixel-wise addition are used to fuse intermediate layers of ResNet. VAM allows these two diametrically different tasks to benefit from each other by disentangling weight sharing and introducing a mirror symmetry of FPN[29]. Ablation studies are demonstrated in Sec. 4.
**Meter component retrieval branch.** We retrieve meter component (meter pointer and key scales) using semantic segmentation methods that are heavily inspired by the Mask-RCNN[20]. The branch generates two 1-channel segmentation maps, namely the Pointer Map and the Key Scale Map, by performing two distinct \(1\times 1\) convolutional operations on the backbone features. The Pointer Map indicates the location of the meter's pointer, whereas the Key Scale Map indicates its angle. The Pointer Map and Key Scale Map are both trained by minimizing the Dice loss:
\[\begin{split} L_{pm}&=1-\frac{2\sum_{i}P_{pm}(i)G_{ pm}(i)}{\sum_{i}P_{pm}(i)^{2}+\sum_{i}G_{pm}(i)^{2}}\\ L_{ksm}&=1-\frac{2\sum_{i}P_{ksm}(i)G_{ksm}(i)}{ \sum_{i}P_{ksm}(i)^{2}+\sum_{i}G_{ksm}(i)^{2}}\end{split} \tag{8}\]
where \(pm\) and \(ksm\) represent Pointer Map and Key Scale Map, and \(P_{(\cdot)}(i)\) refer to the value of \(i^{\text{th}}\) pixel in the predicted result while \(G_{(\cdot)}(i)\) refer to the value of \(i^{\text{th}}\) pixel in the GT region.
The final loss for the meter component retrieval branch is a weighted combination of the two maps, balanced by \(\lambda\in(0,1)\) as
\[L_{com}=\lambda L_{PointerMap}+(1-\lambda)L_{KeyScaleMap} \tag{9}\]
In our experiments, we set \(\lambda\) to 0.4, assigning more
importance to Key Scale Map, which is relatively difficult to learn in training process due to its small spatial occupation.
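A minimal PyTorch version of the Dice loss in Eq. (8) is shown below; the small epsilon is added for numerical stability and is an implementation detail not stated in the paper.

```python
# Soft Dice loss for the Pointer Map and Key Scale Map; pred and target are
# (B, 1, H, W) tensors with values in [0, 1].
import torch

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = (pred ** 2).sum(dim=(1, 2, 3)) + (target ** 2).sum(dim=(1, 2, 3))
    return (1.0 - 2.0 * inter / (denom + eps)).mean()

lam = 0.4   # weighting from Eq. (9)
# L_com = lam * dice_loss(pointer_pred, pointer_gt) + (1 - lam) * dice_loss(scale_pred, scale_gt)
```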
**Meter number recognition branch.** Previous methods recognize numbers in meters with a separate system, which wastes memory and lowers efficiency. In our VAM, the meter number recognition branch resembles the standard text spotters mentioned in Sec.2. To further boost the inference speed, we only detect the key number in the meter, the one closest to the number '0', and then recognize it with the assistance of feature sampling.
The key number detection task is treated as a text/non-text classification task, in which one convolution is applied to output dense per-pixel predictions of the key number localization. The key number bounding box can be obtained by the minimum bounding rectangle operation. Meanwhile, to overcome the class imbalance problem, we introduce online hard example mining (OHEM)[30] to better distinguish between number areas and backgrounds, in which the balance factor is set to 3 in our work. Denoting the set of elements selected by OHEM in the score map as \(\Omega\), the loss function for key number detection can be formulated as:
\[\begin{split} L_{num\_det}&=\frac{1}{\parallel\Omega\parallel}\sum_{x\in\Omega}Cross\_Entropy(p_{x},p_{x}^{*})\\ &=\frac{1}{\parallel\Omega\parallel}\sum_{x\in\Omega}\big{(}-p_{x}^{*}\log p_{x}-(1-p_{x}^{*})\log(1-p_{x})\big{)}\end{split} \tag{10}\]
where \(\parallel\cdot\parallel\) means the number of elements in a set, and the \(p_{x}\) and \(p_{x}^{*}\) are the predicted pixel and the ground truth label, respectively.
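The selection step can be sketched as follows; the exact OHEM bookkeeping used in the paper may differ, so treat this purely as an illustration of keeping all positive pixels and only the hardest negatives at a 3:1 negative-to-positive ratio.

```python
# OHEM-style balanced cross-entropy over the key-number score map.
import torch
import torch.nn.functional as F

def ohem_bce(score, gt, neg_ratio=3):
    # score, gt: (B, 1, H, W); score is post-sigmoid, gt is a binary mask.
    loss = F.binary_cross_entropy(score, gt, reduction="none").flatten()
    pos = gt.flatten() > 0.5
    n_pos = int(pos.sum().item())
    n_neg = min(neg_ratio * max(n_pos, 1), int((~pos).sum().item()))
    neg_loss, _ = loss[~pos].topk(n_neg)          # hardest negatives only
    return (loss[pos].sum() + neg_loss.sum()) / max(n_pos + n_neg, 1)
```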
The feature sampling layer aims to convert detected feature regions into fixed-size outputs from which an RNN-based sequence recognizer can be established. We introduce RoIRotate from [8] into our work, which can transform the rotated area into a fixed-size region via max-pooling and bilinear interpolation. Similar to but distinguished from STN, RoIRotate obtains the affine transformation in an unsupervised way, resulting in a more general operation for extracting features for regions of interest. To improve recognition performance, we use only ground truth key number regions during training rather than predicted number regions.
Given the transformed number feature, we first permute key number features \(F\in\mathbb{R}^{C\times H\times W}\) into 2D sequence feature \(L\in\mathbb{R}^{C\times W}\) in several sequential convolutions, which has the same configurations as CRNN[31]. Then, for each time step \(t=0,1,\ldots,T+1\), we feed \(l_{1},\ldots,l_{w}\in L\) into bi-directional LSTM, with D=256 output channels per direction, which can be formulated as follows:
\[\begin{split}& h_{t}^{{}^{\prime}}=f(x_{t},h_{t-1}^{{}^{\prime}}) \\ & y_{t}=\varphi(h_{t}^{{}^{\prime}})=softmax(W_{0}h_{t}^{{}^{ \prime}})\end{split} \tag{11}\]
where \(f()\) is the recurrence formulation, \(h_{t}\) is the hidden state at time step t, and the \(W_{0}\) linearly transforms hidden states to the output space of size 12, including 10 Arabic numerals and a token representing "-", and a special END token. Finally, a CTC layer is applied to align the predicted sequence to label sequence. Following[31], the recognition loss can be formulated as
\[L_{num\_reco}=-\frac{1}{N}\sum_{n=1}^{N}logp(y_{n}^{*}\mid x) \tag{12}\]
where \(N\) is the number of number regions in an input image, and \(y_{n}^{*}\) is the recognition label.
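A compact sketch of such a recognition head is given below; it assumes the special END token doubles as the CTC blank, which is an interpretation rather than a detail stated in the paper.

```python
# CRNN-style recognition head, Eqs. (11)-(12): the RoI feature is collapsed to a
# width-wise sequence, fed through a bidirectional LSTM, and trained with CTC.
import torch
import torch.nn as nn

class NumberRecognizer(nn.Module):
    def __init__(self, in_channels=256, hidden=256, n_classes=12):
        super().__init__()
        self.rnn = nn.LSTM(in_channels, hidden, num_layers=2,
                           bidirectional=True, batch_first=False)
        self.fc = nn.Linear(2 * hidden, n_classes)    # 10 digits, "-", blank/END

    def forward(self, feat):                          # feat: (B, C, H, W) from RoIRotate
        seq = feat.mean(dim=2).permute(2, 0, 1)       # (W, B, C): one step per column
        out, _ = self.rnn(seq)
        return self.fc(out).log_softmax(dim=-1)       # (W, B, n_classes) for CTC

ctc = nn.CTCLoss(blank=11, zero_infinity=True)
# loss = ctc(log_probs, targets, input_lengths, target_lengths)
```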
**Training procedure and inference.** VAM is a unified module which can be trained end-to-end. The overall loss function can be calculated as follows:
\[L=L_{com}+L_{num\_det}+L_{num\_reco} \tag{13}\]
In our inference process, binarized score maps for pointer and key scale are firstly obtained by applying
a threshold of \(\lambda=0.5\). Then, a thinning algorithm is applied to turn the pointer into a straight line segment, and the Hough line transform is used to obtain the position of the pointer. Meanwhile, the key scale centres can be localized by calculating the average pixel position within each closed area. Finally, the meter reading is calculated by the angle method, which is given by
\[Result=\frac{\alpha_{1}}{\alpha_{2}}\times num\_rec \tag{14}\]
where \(\alpha_{1}\) is the angle between the pointer and the zero scale, and \(\alpha_{2}\) is the angle between the zero scale and the key scale. The term \(num\_rec\) is the output of the meter number recognition branch, with which the reading of the meter is completed automatically.
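In code, Eq. (14) amounts to two angle measurements around the dial centre; the sweep-direction convention below (clockwise in image coordinates) is an assumption and would need to match the meter type.

```python
# Angle-method reading, Eq. (14): alpha1 / alpha2 * recognized key number.
import math

def meter_reading(center, zero_pt, key_pt, pointer_pt, key_value):
    def angle_from(p):
        return math.atan2(p[1] - center[1], p[0] - center[0])
    def sweep(frm, to):                       # clockwise sweep in image coordinates
        return (to - frm) % (2 * math.pi)
    alpha1 = sweep(angle_from(zero_pt), angle_from(pointer_pt))
    alpha2 = sweep(angle_from(zero_pt), angle_from(key_pt))
    return alpha1 / alpha2 * key_value        # key_value = output of the number branch
```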
## 4 Experiments
### Datasets
To our knowledge, there have been no publicly available and appropriate benchmarks for this task. As a result, we created a new dataset called Meter_Challenge (MC1296), which contains 1296 images of scenes captured by automated robots. To help the model adapt to its natural environment, the dataset includes complex backgrounds, multiple scales, a variety of viewpoint angles, and a variety of meter shapes. To better fit the meter reading task, we organized the dataset into a tree structure, with each level representing a distinct task (meter detection, meter alignment, and meter recognition), complete with associated images, annotations, and evaluation metrics. Fig 4 illustrates some visualization results, while Table 1 contains summary statistics.
### Implementation details
In this paper, the system we propose consists of YDM, STM, and VAM. YDM has similar configurations to [27], so we focus on the implementation of STM
\begin{table}
\begin{tabular}{l c c c} \hline Dataset\_task & Train\_size & Test\_size & Annotations \\ \hline M\_detection & 1036 & 260 & mb \\ M\_alignment & 1028 & 247 & co \\ M\_reading & 739 & 185 & psn \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of the proposed MC1296 dataset. "mb", "co", "psn" represent meter bounding box, coordinate offsets, and pointer/scale/number mask and number, respectively.
Figure 4: Visualization results of one sample in the data. (i), (ii) and (iii) mean the data for the meter detection, meter alignment, and meter recognition.
and VAM. Specifically, for both modules we use a ResNet pretrained on ImageNet[32] as the backbone, the image size is 640, and the training batch size is 8. We use Adam to optimize the two networks and set the initial learning rate to \(1\times 10^{-4}\) with a momentum of 0.9.
Meanwhile, some basic data augmentation techniques are applied, such as random cropping, random rotation, and brightness/contrast adjustment. Our experiments are conducted on a single GPU (GTX-1080) with PyTorch 1.5.0.
### Meter detection results
To disentangle the effects of YDM, we begin by reporting the dataset's meter detection results. To conform to the object detection literature, we report the average precision (AP) at two different bounding box IoU thresholds, AP50 and AP75. AP50 denotes the average precision for IoU thresholds greater than 0.5, while AP75 denotes the average precision for IoU thresholds greater than 0.75. As shown in Table 2, the meter detection task is relatively successful. To demonstrate the advantages of our method, we compare it to a commonly used YOLO algorithm[33] and the method in [13], which demonstrates that our YDM performs better in terms of accuracy and efficiency. The qualitative results are demonstrated in Fig 5, which shows that YDM can detect meters with different shapes and sizes.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & AP50(\%) & AP75(\%) & FPS \\ \hline Liu.et al[13] & 91.3 & 89.5 & 4.3 \\ YOLO[33] & 90.0 & 88.2 & 6.7 \\ Ours & 98.6 & 97.1 & 12.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The quantitative results of different methods for meter detection.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Rel(\%) & Ref(\%) & FPS \\ \hline None & 5.91 & 1.20 & - \\ Perspective transform[13] & 1.72 & 0.23 & 10 \\ STN[14] & 3.40 & 0.95 & 44 \\ STM & 1.70 & 0.26 & 50 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The quantitative results of different methods for meter alignment.“rel” is Average Relative Error and “ref” is Average Reference Error.
Figure 5: Qualitative results of the meter detection, where yellow bounding box means pointer meter and green bounding box means digital meter. “ID-num” is the detection confidence.
### Meter alignment results
To demonstrate the STM's applicability and robustness in the recognition system, we conducted extensive experiments on the validation dataset, comparing it to the traditional perspective transform method and STN. Fig.6 illustrates the qualitative findings. As can be seen, the image can be easily and automatically transformed into a front-view image using STM, regardless of the camera angle. However, due to the limited learning capability of the pure STN, it is difficult to align the meter under some extremely large camera angles.
Additionally, as shown in Tab 3, we conducted ablation studies to demonstrate its superiority by reporting the inference speed and the influence on the meter recognition task. Note that the average relative error and the average reference error are the evaluation metrics used to represent the meter recognition error rate, which will be discussed in detail in Sec 4.5. It can be seen that STM helps reduce the recognition error rate as it allows meters to be read from various angles and sizes. Our STM also achieves accuracy competitive with the perspective transform while increasing the inference speed, indicating that STM achieves a more favorable trade-off between accuracy and efficiency.
### Meter recognition results
To demonstrate our method's recognition performance, we incrementally compare it to other methods. To minimize inter-person variability in readings, the readings obtained by human vision are the average of the results of twenty expert workers. Meanwhile, to make the comparison more fair, we follow similar evaluation metrics as [16]. Specifically, we choose the average relative error \(\hat{\Theta}\) and the average reference error \(\hat{\Gamma}\)
Figure 6: Qualitative results of the meter alignment, where the top row is the original images, the middle row and the bottom row are the transformed image generated by STN and STM. Note that STN can not handle images with extreme large camera angles.
as evaluation indicators, as shown below:
\[\begin{split}\hat{\Theta}&=\frac{\sum_{i=1}^{n}\frac{|p_{ i}-g_{i}|}{g_{i}}}{n}\times 100\%\\ \hat{\Gamma}&=\frac{\sum_{i=1}^{n}\frac{|p_{i}-g_{i}|} {R}}{n}\times 100\%\end{split} \tag{15}\]
Where \(p_{i}\) is the predicted meter value, \(g_{i}\) is the ground truth value. R represents the meter's range, and n represents the total number of experimental data. As shown in Tab 4, our method outperforms previous methods in terms of average relative error and achieves competitive results with [13] in average reference error, indicating that our algorithm has strong capacity in reading recognition. Additionally, our method can perform inference at a rate of approximately 25 frames per second, demonstrating that it is practical for real-world applications. We show some visualization results in Fig 7, demonstrating our method's high adaptability to a complex environment with variable illumination, scale, and image tilt.
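Both metrics in Eq. (15) are straightforward to compute once predicted and ground-truth readings are collected; the sketch below assumes plain Python lists of readings and a known full-scale range.

```python
# Average relative error and average reference error, Eq. (15).
def reading_errors(preds, gts, meter_range):
    n = len(preds)
    rel = sum(abs(p - g) / g for p, g in zip(preds, gts)) / n * 100          # %
    ref = sum(abs(p - g) / meter_range for p, g in zip(preds, gts)) / n * 100  # %
    return rel, ref
```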
To disentangle the effects of the unified framework VAM, we conduct ablation studies to investigate the relationships between the meter component retrieval and meter number recognition branches. We begin by reporting the full model's end-to-end results in Tab 5. Notably, we evaluate pointer/key scale detection and key number recognition using the AP50 and number-level recognition accuracy metrics, respectively. It can be demonstrated that by optimizing all loss functions simultaneously, our model achieves a reasonable level of success in both detection and recognition tasks. Additionally, we construct a two-stage model in which the
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Venue & Rel(\%) & Ref(\%) \\ \hline Zheng et al.[11] & Measurement(2016) & 10.32 & 0.91 \\ He et al.[16] & ICIST(2019) & 1.85 & 0.30 \\ Liu et al.[13] & Measurement(2020) & 1.77 & **0.24** \\ Ours & - & **1.70** & 0.26 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The quantitative results of different methods for meter reading recognition. "Rel" is Average Relative Error and "Ref" is Average Reference Error.
Figure 7: Some visualization results produced by our method. The red line is the predicted pointer line, the blue points are the key scale areas, and the meter reading results are shown in the left top.
meter component retrieval and meter number recognition branches are trained independently. The meter component retrieval network is built by removing the meter number recognition branch, and similarly, the meter number recognition network is built by removing the meter component retrieval branch from the original network. Our proposed VAM outperforms the two-stage method by a significant margin in both the meter component retrieval and meter number recognition tasks. The results indicate that our joint training strategy accelerated the convergence of model parameters.
## 5 Conclusion
We propose a novel method for accurate and efficient pointer meter reading, which is implemented through the combination of YDM, STM, and VAM. Specifically, STM can obtain the front view of images autonomously with the improved STN, and VAM can recognize meters accurately within a unified framework combining the meter component retrieval branch and the meter number recognition branch. Experiments on the challenging dataset we propose demonstrate that the method has a strong capacity for pointer meter reading. Currently, the algorithm has been successfully applied to robots performing substation inspections. Future work will concentrate on model acceleration in order to develop a more efficient framework for video meter reading.
|
2310.00147 | Beyond-DFT $\textit{ab initio}$ Calculations for Accurate Prediction of
Sub-GeV Dark Matter Experimental Reach | As the search space for light dark matter (DM) has shifted to sub-GeV DM
candidate particles, increasing attention has turned to solid state detectors
built from quantum materials. While traditional solid state detector targets
(e.g. Si or Ge) have been utilized in searches for dark matter (DM) for
decades, more complex, anisotropic materials with narrow band gaps are
desirable for detecting sub-MeV dark matter through DM-electron scattering and
absorption channels. In order to determine if a novel target material can
expand the search space for light DM it is necessary to determine the projected
reach of a dark matter search conducted with that material in the DM mass -
DM-electron scattering cross-section parameter space. The DM-electron
scattering rate can be calculated from first-principles with knowledge of the
loss function, however the accuracy of these predictions is limited by the
first-principles level of theory used to calculate the dielectric function.
Here we perform a case study on silicon, a well-studied semiconducting
material, to demonstrate that traditional Kohn-Sham density functional theory
(DFT) calculations erroneously overestimate projected experimental reach. We
show that for silicon this can be remedied by the incorporation of self-energy
corrections as implemented in the GW approximation. Moreover, we emphasize the
care that must be taken in selecting the appropriate level of theory for
predicting experimental reach of next-generation complex DM detector materials. | Elizabeth A. Peterson, Samuel L. Watkins, Christopher Lane, Jian-Xin Zhu | 2023-09-29T21:13:29Z | http://arxiv.org/abs/2310.00147v1 | Beyond-DFT _ab initio_ Calculations for Accurate Prediction of Sub-GeV Dark Matter Experimental Reach
###### Abstract
As the search space for light dark matter (DM) has shifted to sub-GeV DM candidate particles, increasing attention has turned to solid state detectors built from quantum materials. While traditional solid state detector targets (e.g. Si or Ge) have been utilized in searches for dark matter (DM) for decades, more complex, anisotropic materials with narrow band gaps are desirable for detecting sub-MeV dark matter through DM-electron scattering and absorption channels. In order to determine if a novel target material can expand the search space for light DM it is necessary to determine the projected reach of a dark matter search conducted with that material in the DM mass - DM-electron scattering cross-section parameter space. The DM-electron scattering rate can be calculated from first-principles with knowledge of the loss function, however the accuracy of these predictions is limited by the first-principles level of theory used to calculate the dielectric function. Here we perform a case study on silicon, a well-studied semiconducting material, to demonstrate that traditional Kohn-Sham density functional theory (DFT) calculations erroneously overestimate projected experimental reach. We show that for silicon this can be remedied by the incorporation of self-energy corrections as implemented in the GW approximation. Moreover, we emphasize the care that must be taken in selecting the appropriate level of theory for predicting experimental reach of next-generation complex DM detector materials.
## I Introduction
As the search for dark matter (DM) particles turns towards efforts to detect light DM in the sub-GeV mass range, solid-state detectors have increasingly become detector targets of interest [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. Recent experimental and theoretical efforts have focused on absorption and scattering of bosonic and fermionic light DM particles with nucleons, electrons, and bosonic quasiparticles like phonons in detector materials [2; 3; 4; 5; 6; 8; 9; 10; 11; 14]. As nuclear recoil experiments have struggled to detect GeV-scale DM candidate particles, attention has turned to electron recoil experiments in semiconductor targets, which promise to enable detection of even lighter DM due to their meV-eV electronic excitation energies [1; 5; 7; 12].
Next generation DM detectors that can probe even lighter DM will require more complex materials such as topological materials, superconductors, etc. which host desirable properties such as narrow band gaps and anisotropies that capture the daily modulation in the DM wind impinging on the Earth [2; 3; 4; 5; 7; 9; 11; 15]. The novelty and complexity of these materials, however, may limit the experimental ease of rapid characterization and screening for desirable properties. As such, leveraging first-principles methods that handle many-body effects and strong correlations will become increasingly important in predicting projected experimental reach of novel DM detector materials. Moreover, the calculated DM-electron scattering rates may be strongly affected by the first-principles approximations employed with certain, generally low-cost, approximations producing results that may significantly misrepresent the parameter space that is excluded.
In a semiconducting detector, dielectric screening plays a significant role in determining the measured signals from the absorption and scattering of DM particles with electrons [9; 10; 11]. The loss function, i.e. the imaginary part of the inverse dielectric function, can be utilized to predict expected DM-electron scattering rates [9; 16], in analogy to electron energy loss spectroscopy (EELS). As such, the complex dielectric function is a central quantity of interest.
From a first-principles perspective, calculation of the dielectric function is a standard procedure [17; 18]. From density functional theory (DFT) calculations, the dielectric function in the random phase approximation (RPA), neglecting local-field and exchange-correlation effects, is readily calculable from DFT wavefunctions and eigenvalues. Other methods for calculating the electronic structure or the dielectric function that can capture exchange and (strong) correlation effects include dynamical mean-field theory (DMFT) [19], many-body perturbation theory (MBPT) [20], density functional perturbation theory (DFPT) [21], and time-dependent density functional theory (TD-DFT) [22]. In practice, the methods employed by existing open-source software packages for calculating dielectric functions from first-principles are oriented towards understanding optical properties of materials, in particular absorption of photons in the visible spectrum, and hence restricted to calculations in the long wavelength limit with negligible momentum transfer \(\mathbf{q}\). Impinging DM particles may impart both finite momentum \(\mathbf{q}\) and finite energy \(\omega\) when scattering off an electron, necessitating methods for calculating the dielectric function at finite momentum transfer \(\mathbf{q}\).
Response functions like the dielectric function describe the reaction of a material to a perturbation, making them fundamentally excited state properties which require a careful consideration of many-body effects to accurately calculate. In a first-principles framework, in order to accurately capture many-body effects it is necessary to work beyond the standard first-principles DFT formalism. Green's function approaches have been shown to significantly improve upon the accuracy of the computed energy eigenvalues by implementing self-energy corrections [20; 23; 24]. The workhorse method of Green's function approximations is the GW approximation [20; 23] which approximates the self-energy as a product of the single-particle Green's function (\(G\)) and the screened Coulomb potential (\(W\)), or \(\Sigma=iGW\), neglecting vertex corrections. The GW approximation is typically implemented as a correction scheme to the DFT eigenvalues and wavefunctions to compute quasiparticle-corrected single-particle energy eigenvalues. Beyond improving upon the accuracy of the electronic structure energy eigenvalues, the GW approximation is a convenient method for calculations of the dielectric function. This is because, in GW, calculation of the inverse dielectric function at finite \(\mathbf{q}\) is a necessary prerequisite for calculating the screened Coulomb potential (\(W\)). A number of GW approximation software packages exist [23; 25; 26; 27; 28; 29; 30; 31] and it has been shown that the GW quasiparticle-correction scheme can immensely improve the accuracy of calculated dielectric functions [23; 24].
There are existing packages, such as QEDAR[1], DarkELF[32], and EXCEED-DM [33; 34], that enable the calculation of DM-electron scattering rates for various common semiconducting materials used as DM detectors (e.g. Si and Ge). While QEDAR does not include in-medium screening effects, DarkELF and EXCEED-DM calculate the scattering based on the dielectric function, thereby including these effects. However, these packages base their dielectric function calculations on DFT outputs, which may not provide the most accurate energy eigenvalues, as noted above. These packages have been frequently used in high-impact results in the field of light DM [35; 36; 37; 38; 39; 40; 41; 42; 43], suggesting that if there is a significant difference in the DM-electron scattering rate when including, for example, the GW-corrected energy eigenvalues, then these results may be misrepresenting the parameter space that is excluded.
Here we illustrate and emphasize the significance of incorporating many-body effects into calculations of the dielectric function for prediction of projected experimental reach of novel DM detector materials. We present a case study of the well-studied semiconductor silicon (Si), demonstrating that even for a material that does not host substantial relativistic effects or strong correlations, the incorporation of many-body effects is essential for accurate prediction of the projected experimental reach obtained via first-principles calculations. We leave calculation of more complex materials to future work.
## II Theoretical framework
The loss function \(L(\mathbf{q},\omega)=\mathrm{Im}\left[-\epsilon(\mathbf{q},\omega)^{-1}\right]\) is the central quantity describing how a dielectric medium responds to an impinging particle. To predict the experimental reach of a detector material, we calculate the electronic structure in order to calculate the complex dielectric function and from there the DM-electron scattering rate and projected reach.
We begin with a DFT calculation using a plane-wave basis to obtain the Kohn-Sham energy eigenvalues and wavefunctions using the Kohn-Sham equations [44]
\[H^{KS}\psi_{n\mathbf{k}}^{KS}(\mathbf{r})=\left[-\frac{1}{2}\nabla^{2}+v_{ext }(\mathbf{r})+\int\frac{\rho(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{ \prime}|}d\mathbf{r}^{\prime}+v_{XC}[\rho]\right]\psi_{n\mathbf{k}}^{KS}( \mathbf{r})=\varepsilon_{n\mathbf{k}}^{KS}\psi_{n\mathbf{k}}^{KS}(\mathbf{r})\;, \tag{1}\]
Here energy is measured in atomic units, \(\psi_{n\mathbf{k}}^{KS}(\mathbf{r})\) and \(\varepsilon_{n\mathbf{k}}^{KS}\) are Kohn-Sham wavefunctions and eigenvalues parametrized by band index \(n\) and crystal momentum \(\mathbf{k}\), \(\rho(\mathbf{r})=\sum_{n\mathbf{k}}|\psi_{n\mathbf{k}}(\mathbf{r})|^{2}\) is the charge density, \(v_{ext}(\mathbf{r})\) is the external potential from the ionic background, and \(v_{XC}[\rho]\) is the exchange-correlation potential encompassing all many-body effects not captured by the single-particle framework.
We employ the GW approximation as a correction scheme to the Kohn-Sham DFT energy eigenvalues by replacing the exchange-correlation potential with the self-energy operator as in Refs. [20; 23]:
\[\left[-\frac{1}{2}\nabla^{2}+v_{ext}(\mathbf{r})+\int\frac{\rho(\mathbf{r}^{ \prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}d\mathbf{r}^{\prime}+\Sigma( \varepsilon_{n\mathbf{k}}^{GW})\right]\psi_{n\mathbf{k}}^{GW}(\mathbf{r})= \varepsilon_{n\mathbf{k}}^{GW}\psi_{n\mathbf{k}}^{GW}(\mathbf{r})\;, \tag{2}\]
Specifically, we perform a single-shot \(G_{0}W_{0}\) calculation, using the Kohn-Sham DFT eigenvalues and wavefunctions to produce the self-energy operator and then solve for the GW energy eigenvalues \(\varepsilon_{n\mathbf{k}}^{GW}\) as
\[H^{KS}\psi_{n\mathbf{k}}^{KS}(\mathbf{r})-\left[\Sigma(\varepsilon_{n\mathbf{k}}^ {KS})-v_{XC}[\rho]\right]\psi_{n\mathbf{k}}^{KS}(\mathbf{r})=\varepsilon_{n \mathbf{k}}^{GW}\psi_{n\mathbf{k}}^{KS}(\mathbf{r})\;. \tag{3}\]
With the Kohn-Sham DFT wavefunctions \(\psi_{n\mathbf{k}}^{KS}\) and GW energy eigenvalues \(\varepsilon_{n\mathbf{k}}^{GW}\) we may construct the complex dielectric function.
Most generally, the dielectric tensor is a linear response function that describes how a displacement field in a dielectric medium is produced by application of a perturbing electric field as
\[D_{\alpha}(\mathbf{q},\omega)=\sum_{\beta}\epsilon_{\alpha\beta}(\mathbf{q}, \omega)E_{\beta}(\mathbf{q},\omega)\;. \tag{4}\]
where \(\alpha\) and \(\beta\) are Cartesian directions. Several symmetries can be leveraged to simplify the dielectric tensor. In an isotropic material, the longitudinal (diagonal) and transverse (off-diagonal) components of the dielectric tensor can be strictly separated. In a material with cubic symmetry, the diagonal components of the dielectric tensor reduce to a single scalar function, significantly simplifying the problem of calculating the dielectric function.
\[P_{\mathbf{G}\mathbf{G}^{\prime}}(\mathbf{q},\omega)=\lim_{\eta\to 0} \sum_{nn^{\prime}\mathbf{k}}\left\langle n\mathbf{k}|e^{-i(\mathbf{q}+\mathbf{ G})\cdot\mathbf{r}}|n^{\prime}\mathbf{k}+\mathbf{q}\right\rangle\left\langle n^{ \prime}\mathbf{k}+\mathbf{q}|e^{i(\mathbf{q}+\mathbf{G}^{\prime})\cdot \mathbf{r}}|n\mathbf{k}\right\rangle\frac{f(\varepsilon_{n^{\prime}\mathbf{k}+ \mathbf{q}})-f(\varepsilon_{n\mathbf{k}})}{\varepsilon_{n^{\prime}\mathbf{k}+ \mathbf{q}}-\varepsilon_{n\mathbf{k}}-\hbar\omega-i\hbar\eta}\;, \tag{6}\]
where \(n\) is a band index of an electronic state, \(\mathbf{k}\) is the crystal momentum of an electronic state, \(\mathbf{q}\) is the momentum transfer (here restricted to the first Brillouin zone), \(\mathbf{G}\) is a reciprocal lattice vector, \(f(\varepsilon)\) is the Fermi-Dirac distribution function, and \(\eta\) is an infinitesimal positive quantity.
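To make the structure of this expression concrete, the following toy sketch evaluates Eq. (6) for a fictitious one-dimensional two-band insulator with unit matrix elements and then forms a loss function; the band dispersions, the finite broadening, and the scalar relation \(\epsilon=1-v(\mathbf{q})P\) used here are illustrative assumptions, not the production GW/BerkeleyGW workflow described below.

```python
import numpy as np

def toy_loss(q, omega, eta=0.1, nk=200, gap=1.0):
    """Toy polarizability of Eq. (6) and loss function Im[-1/eps] (atomic-like units)."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    e_v = -0.5 * gap - 0.2 * np.cos(k)       # filled valence band, f = 1
    e_c = 0.5 * gap + 0.2 * np.cos(k + q)    # empty conduction band at k+q, f = 0
    # occupation-difference / energy-denominator structure of Eq. (6),
    # with the matrix elements set to unity and eta kept finite
    P = ((0.0 - 1.0) / (e_c - e_v - omega[:, None] - 1j * eta)
         + (1.0 - 0.0) / (e_v - e_c - omega[:, None] - 1j * eta)).sum(axis=1) / nk
    eps = 1.0 - (4.0 * np.pi / q**2) * P     # assumed scalar RPA dielectric function
    return np.imag(-1.0 / eps)

omega = np.linspace(0.0, 5.0, 400)
loss = toy_loss(q=0.5, omega=omega)
print("toy loss peaks at omega =", omega[np.argmax(loss)])
```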
The DM-electron scattering rate can be written in terms of the loss function \(L(\mathbf{q},\omega)=\mathrm{Im}\left[-\epsilon(\mathbf{q},\omega)^{-1}\right]\)[9; 10], or in some formalisms the dynamic structure factor, a related quantity [45; 46; 47]. For example, for an impinging DM particle \(\chi\) with velocity \(\mathbf{v}_{\chi}\) and an arbitrary DM-electron interaction potential \(V(\mathbf{q})\), Ref. [9] outlines how the transition rate can be expressed as
\[\Gamma(\mathbf{v}_{\chi})=\int\frac{d^{3}q}{(2\pi)^{3}}|V(\mathbf{q})|^{2} \left[2\frac{q^{2}}{e^{2}}\,\mathrm{Im}\left(-\frac{1}{\epsilon(\mathbf{q}, \omega)}\right)\right]\;. \tag{7}\]
From the first-principles calculated inverse dielectric function, we estimate the expected sensitivity of a detector to DM-electron scattering following the formalism outlined in Refs. [7; 9; 10; 12; 46]. The total rate per target mass at time \(t\) can be expressed as
\[R(t)=\frac{1}{\rho_{T}}\frac{\rho_{\chi}}{m_{\chi}}\int d^{3}\mathbf{v}_{\chi} f_{\chi}(\mathbf{v}_{\chi},t)\Gamma(\mathbf{v}_{\chi}), \tag{8}\]
where the transition rate \(\Gamma(\mathbf{v}_{\chi})\) has been defined in Eq. (7), \(f_{\chi}(\mathbf{v}_{\chi},t)\) is the DM velocity distribution at time \(t\), \(\rho_{\chi}=0.3\,\mathrm{GeV/cm^{3}}\) is the local DM density, \(m_{\chi}\) is the DM mass, and \(\rho_{T}\) is the mass density of the target material.
In DM-electron scattering, the scattering rate is usually parameterized in the literature with
\[V(\mathbf{q}) =\frac{g_{\chi}g_{e}}{q^{2}+m_{V/\phi}^{2}}, \tag{9}\] \[\bar{\sigma}_{e} \equiv\frac{\mu_{\chi e}^{2}}{\pi}\left|V(q_{0})\right|^{2}, \tag{10}\] \[F_{\mathrm{DM}}(q) \equiv\frac{q_{0}^{2}+m_{V/\phi}^{2}}{q^{2}+m_{V/\phi}^{2}}, \tag{11}\]
where the DM is coupled to the Standard Model through some scalar (\(\phi\)) or vector (\(V\)) mediator with mass
\(m_{V/\phi}\) with coupling \(g_{\chi}\) to DM and coupling \(g_{e}\) to electrons. Here, \(\bar{\sigma}_{e}\) is a reference cross section for DM-electron scattering, \(\mu_{\chi e}\) is the DM-electron reduced mass \(\mu_{\chi e}=\frac{m_{e}m_{\chi}}{m_{e}+m_{\chi}}\), \(q_{0}\equiv\alpha m_{e}\) is a reference momentum, and \(F_{\rm DM}(q)\) is the DM form factor. With Eqs. (7) and (8), we arrive at the following scattering rate per target mass at time \(t\)
\[R(t)=\frac{\bar{\sigma}_{e}}{\rho_{T}}\frac{\rho_{\chi}}{m_{\chi}}\frac{\pi}{( 2\pi)^{4}\alpha\mu_{\chi e}^{2}}\int d\omega\,d^{3}{\bf q}\,q\,F_{\rm DM}^{2}( q)\,{\rm Im}\left[-\frac{1}{\varepsilon({\bf q},\omega_{\bf q})}\right]\tilde{g}(v_{ \rm min},\psi,t)\;, \tag{12}\]
where we have defined
\[\tilde{g}(v_{\rm min},\psi,t)=q\int d^{3}{\bf v}_{\chi}f_{\chi}({\bf v}_{\chi},t)\delta(E_{f}-E_{i})\;, \tag{13}\]
similarly to Ref. [7].
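As a small illustration of the parameterization in Eqs. (9)-(11) (our own sketch, with illustrative numbers), the DM form factor interpolates between the two limits usually quoted in the literature:

```python
import numpy as np

alpha = 1.0 / 137.035999   # fine-structure constant
m_e = 0.511e6              # electron mass in eV
q0 = alpha * m_e           # reference momentum of Eq. (10)

def F_DM(q, m_med):
    """DM form factor of Eq. (11) for a mediator of mass m_med (same units as q)."""
    return (q0**2 + m_med**2) / (q**2 + m_med**2)

q = 10.0 * q0
print(F_DM(q, m_med=1.0e9))  # heavy mediator limit: F_DM -> 1
print(F_DM(q, m_med=0.0))    # light mediator limit: F_DM -> (q0/q)^2 = 0.01
```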
To calculate the expected sensitivity of a Si detector using our DFT and GW calculations, we assume a background-free search with a kg-year exposure, and set limits at the 90% C.L. We assume the Standard Halo Model [48; 49] and average the time-dependent rate over the full exposure (a small effect for an isotropic crystal such as Si).
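Since the rate in Eq. (12) is linear in \(\bar{\sigma}_{e}\), the projected reach follows from a simple rescaling; a schematic sketch of this step (assuming a background-free Poisson counting experiment, for which the 90% C.L. corresponds to roughly 2.3 expected events, and a rate evaluated at an arbitrary reference cross section) is:

```python
def sigma_90cl(rate_at_ref, sigma_ref=1.0e-40, exposure_kg_yr=1.0, n90=2.303):
    """Background-free 90% C.L. reach on the DM-electron cross section.

    rate_at_ref : expected events per kg-yr from Eq. (12), evaluated at sigma_ref [cm^2]
                  (the numbers used below are placeholders, not our Si results).
    """
    expected_events = rate_at_ref * exposure_kg_yr
    return n90 / expected_events * sigma_ref

# toy example: 5000 events/kg/yr at sigma_ref = 1e-40 cm^2 for some DM mass
print(f"{sigma_90cl(5.0e3):.2e} cm^2")
```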
Two caveats of this formalism are immediately apparent. The first is that it is highly dependent on the accuracy of the first-principles wavefunctions and energy eigenvalues used to compute the polarizability. For highly accurate predictions of scattering rates this
Figure 1: The loss function, the negative imaginary part of the inverse dielectric function, for Si calculated with (a) DFT and (b) the GW approximation plotted as a function of energy transfer \(\omega\) and momentum transfer \({\bf q}\), restricted to the first BZ, in units of energy. The difference between the two loss functions indicates a blue shift in the quasiparticle-corrected GW loss function relative to that calculated using DFT as seen over (c) all the \({\bf q}\) points sampled along the \(q_{z}\) axis of the first BZ and in (d) the \({\bf q}\to 0\) cut.
requires using first-principles methods beyond traditional density functional theory (DFT). The second is that it is restricted to calculation of scalar dielectric functions, limiting its ability to capture screening effects in highly anisotropic materials. For this work we address the first of these caveats, leaving discussion of anisotropic materials to forthcoming work.
## III Calculation details
To calculate the finite \(\mathbf{q}\) RPA dielectric function of Si, we first perform density functional theory calculations using Quantum Espresso [50; 51; 52]. We use the BerkeleyGW package [23; 25] to calculate the full frequency dielectric function using DFT-level energy eigenvalues and wavefunctions. We next calculate the self-energy operator and quasiparticle corrections in the GW approximation and recalculate the full frequency dielectric function with the quasiparticle-corrected GW energy eigenvalues.
DFT calculations are performed with a plane-wave basis in the generalized-gradient approximation (GGA) as implemented by Perdew, Burke, and Ernzerhof (PBE) [53] using a 100 Ry energy cut-off on a \(10\times 10\times 10\)\(\mathbf{k}\)-grid. A scalar relativistic norm-conserving pseudopotential including 4 valence electrons per Si atom from the Pseudo-Dojo project is used [54].
The DFT-level and quasiparticle-corrected GW full frequency dielectric functions are calculated using a 4.0 Ry \(\mathbf{G}\)-vector cut-off and 100 meV broadening for a frequency range from 0-30 eV on a \(10\times 10\times 10\)\(\mathbf{q}\)-grid. The self-energy operator is calculated using 4 filled and 96 empty bands.
## IV Results
We calculate the RPA dielectric function of Si using DFT and GW eigenvalues for a broad range of relevant energy transfers \(\omega\) and momentum transfers \(\mathbf{q}\) from
Figure 2: The real part of the inverse dielectric function for Si calculated with (a) DFT and (b) the GW approximation plotted as a function of energy transfer \(\omega\) and momentum transfer \(\mathbf{q}\), restricted to the first BZ, in units of energy. The difference between the two inverse dielectric functions indicates a blue shift in the quasiparticle-corrected GW dielectric function relative to that calculated using DFT as seen over (c) all the \(\mathbf{q}\) points sampled along the \(q_{z}\) axis of the first BZ and in (d) the \(\mathbf{q}\to 0\) cut.
impinging particles. We calculate the dielectric function for momentum transfer \(\mathbf{q}\) up to a 4.0 Ry (54.4 eV) kinetic energy cut-off for plane waves with momentum \(\mathbf{q}+\mathbf{G}\) and kinetic energy \(\frac{\hbar^{2}|\mathbf{q}+\mathbf{G}|^{2}}{2m_{e}}\) with \(\mathbf{q}\) and \(\mathbf{G}\) defined as in Eq.5 above. We consider energy transfers \(\omega\) up to 30 eV. The static dielectric constant \(\epsilon(\mathbf{q}\rightarrow\mathbf{0},\omega=\mathbf{0})\) calculated with GW, 11.46, agrees much better with the experimental value, 11.7 [55], than that calculated using DFT, 12.86.
In Fig. 1 we plot the RPA loss function, restricting ourselves to \(\mathbf{q}\) vectors along the high symmetry \(q_{z}\) line in the first BZ for visual brevity. The momentum transfer \(\mathbf{q}\) is plotted in units of energy. The loss function is qualitatively similar for the DFT (Fig. 1a) and GW (Fig. 1b) eigenvalues. By plotting the difference between the GW and DFT loss functions (Fig. 1c,d), we see that the GW loss function is blue shifted to higher energy relative to the DFT loss function, by \(\sim 1\) eV, a substantial energy difference relative to the DM-electron interaction energy-scales of interest. Additionally, the peak of the loss function calculated from GW is slightly larger than that calculated from DFT due to quasiparticle self-energy effects. The blue shift of the GW loss function is a result of the larger band gap calculated via the GW approximation (0.97 eV) relative to that calculated using DFT (0.58 eV) as a result of quasiparticle self-energy corrections. This band gap increase shifts the imaginary part of the dielectric function to higher energy because the smallest energy difference for a transition between a filled and an unfilled band shifts to higher energy.
In Fig. 2 we plot the corresponding real part of the RPA inverse dielectric function. Again, the real part of the RPA inverse dielectric function is qualitatively similar for the DFT (Fig. 2a) and GW (Fig. 2b) eigenvalues. The difference between the real parts of the GW and DFT inverse dielectric functions (Fig. 2c) reveals again that the GW results are blue shifted to higher energy.
In Fig. 3, we show the results of the sensitivity calculation for MeV-scale DM masses for the limits of a heavy vector mediator (\(m_{V}\rightarrow\infty\)) (Fig. 3(a)) and a light vector mediator (\(m_{V}\to 0\)) (Fig. 3(b)). The limits calculated with the GW and DFT loss functions are markedly different, emphasizing the importance of accounting for many-body effects in first-principles calculations of the dielectric function for predicting experimental reach. In fact, the bare DFT calculation tends to _overestimate_ DM sensitivity. As shown in Fig. 3(c), the electron scattering cross sections corresponding to equivalent DM masses are consistently larger when calculated with GW as compared to DFT. The effect is most significant for heavy mediators, where the electron scattering cross sections can be up to \(\sim\)4.5 times larger. This corresponds to a substantial overestimation of the parameter space that can be probed by Si when the reach estimation is performed using only DFT. For more complicated, anisotropic crystal structures, we expect this discrepancy to increase and leave those calculations to future work.
## V Summary
Here we have used Si as a case study to demonstrate the importance of incorporating beyond-DFT,
Figure 3: Sensitivity curves (90% C.L.) for DM-electron scattering with a Si target, assuming a kg-yr exposure and zero background for a (a) heavy mediator and (b) light mediator. In both panels, we compare the sensitivity obtained using the loss function obtained from DFT and that calculated using the GW approximation, where we see a significant difference in the expected sensitivity and (c) relative electron-DM scattering cross-section.
many-body effects in first-principles calculations of the dielectric function for predicting the expected scattering rates and the experimental reach of detectors utilizing DM-electron scattering for sub-GeV DM detection. We have calculated the RPA dielectric function of Si using DFT eigenvalues and GW-corrected eigenvalues obtained through a single-shot \(G_{0}W_{0}\) calculation. By using the loss function to model the response of a Si detector to impinging particles, we have calculated the projected experimental reach of a Si detector. Crucially, we have found that DFT _overestimates_ the experimental reach. By using the GW approximation, we have been able to correct for this overestimation to produce a more realistic projected reach. As recent DM search results often rely on DFT to calculate limits on DM-electron scattering, these limits are likely overestimating their exclusion regions. We suggest revisiting these results with energy eigenvalues that have been calculated at a higher level of theory, such as the GW approximation. The novel detector materials that will be at the forefront of the search for sub-GeV DM are expected to host much more complex many-body and correlation effects than Si. We note that the GW approximation is only one of many ways, such as DMFT, to account for many-body effects and that different treatments will be more appropriate for different materials. Ultimately though, our work underscores the limitations of DFT and the importance of utilizing beyond-DFT methods in first-principles calculations of the experimental reach of both current and next-generation sub-GeV DM detectors.
## Acknowledgements
This work was supported by the U.S. DOE NNSA under Contract No. 89233218CNA000001. E.A.P., C. L. and J.-X.Z. acknowledge support by the LANL LDRD Program through project number 20220135DR. S.L.W. acknowledges support from the LANL Director's Postdoctoral Fellowship award 20230782PRD1. This work was supported in part by the Center for Integrated Nanotechnologies, a DOE Office of Science user facility, in partnership with the LANL Institutional Computing Program for computational resources. Additional computations were performed at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award ERCAP0020494.
|
2309.04242 | Neutrino Phenomenology in a Model with Generalized CP symmetry within
Type-I seesaw framework | We investigate the consequences of generalized CP (GCP) symmetry within the
context of the two Higgs doublet model (2HDM), specifically focusing on the
lepton sector. Utilizing the Type-I seesaw framework, we study an intriguing
connection between the Dirac Yukawa couplings originating from both Higgs
fields, leading to a reduction in the number of independent Yukawa couplings
and simplifying the scalar and Yukawa sectors when compared to the general
2HDM. The CP3 constraint results in two right-handed neutrinos having equal
masses and leads to a diagonal right-handed Majorana neutrino mass matrix.
Notably, CP symmetry experiences a soft break due to the phase associated with
the vacuum expectation value of the second Higgs doublet. The model aligns well
with observed charged lepton masses and neutrino oscillation data, explaining
both masses and mixing angles, and yields distinct predictions for normal and
inverted neutrino mass hierarchies. It features a novel interplay between
atmospheric mixing angle $\theta_{23}$ and neutrino mass hierarchy: the angle
$\theta_{23}$ is below maximal for the normal hierarchy and above maximal for
inverted hierarchy. Another interesting feature of the model is inherent CP
violation for the inverted hierarchy. | Tapender, Sanjeev Kumar, Surender Verma | 2023-09-08T10:10:23Z | http://arxiv.org/abs/2309.04242v1 | # Neutrino Phenomenology in a Model with Generalized CP symmetry within Type-I seesaw framework
###### Abstract
We investigate the consequences of generalized CP (GCP) symmetry within the context of the two Higgs doublet model (2HDM), specifically focusing on the lepton sector. Utilizing the Type-I seesaw framework, we study an intriguing connection between the Dirac Yukawa couplings originating from both Higgs fields, leading to a reduction in the number of independent Yukawa couplings and simplifying the scalar and Yukawa sectors when compared to the general 2HDM. The CP3 constraint results in two right-handed neutrinos having equal masses and leads to a diagonal right-handed Majorana neutrino mass matrix. Notably, CP symmetry experiences a soft break due to the phase associated with the vacuum expectation value of the second Higgs doublet. The model aligns well with observed charged lepton masses and neutrino oscillation data, explaining both masses and mixing angles, and yields distinct predictions for normal and inverted neutrino mass hierarchies. It features a novel interplay between atmospheric mixing angle \(\theta_{23}\) and neutrino mass hierarchy: the angle \(\theta_{23}\) is below maximal for the normal hierarchy and above maximal for inverted hierarchy. Another interesting feature of the model is inherent CP violation for the inverted hierarchy.
## 1 Introduction
The standard model (SM) of particle physics provides a unified and well-tested theoretical framework for explaining the interactions of known fundamental particles. It explains how quarks and charged leptons acquire mass. However, it cannot account for the non-zero mass of neutrinos, which is necessary to explain observed neutrino oscillations. One way to naturally explain non-zero neutrino masses is by
introducing right-handed neutrinos into the particle content of the SM and allowing them to have a Majorana mass term. This is commonly known as the Type-I seesaw mechanism [1, 2, 3, 4, 5]. The smallness of the neutrino masses can be attained by setting the Majorana neutrino mass at a high energy scale.
Extending beyond the Standard Model (SM), a natural step involves adding another Higgs doublet, known as the Two-Higgs Doublet Model (2HDM). Initially proposed to address matter-antimatter asymmetry alongside the quark mixing matrix [6], the 2HDM does not explain neutrino mass. The vacuum expectation values of the two SU(2) doublets can spontaneously break CP symmetry, contributing an extra source for generating matter-antimatter asymmetry [6]. Further, the need for a second Higgs doublet arises naturally in the Minimal Supersymmetric Standard Model (MSSM) [7] and axion models [8, 9]. Another reason for considering the 2HDM is that it preserves the \(\rho\) parameter [6], connecting the mechanism of electroweak symmetry breaking with the masses of SM gauge bosons [10].
Despite these characteristic features, 2HDMs have shortcomings, including the inability to explain neutrino mass and dark matter, and the presence of tree-level flavor changing neutral currents (FCNC). FCNC arise because both SU(2) scalar doublets can couple to fermions. However, there are studies suggesting mechanisms to mitigate FCNC interactions. For example:
1. FCNC interactions can be fine-tuned by carefully selecting Yukawa couplings that are suppressed by the heavy mass of the scalar boson responsible for FCNC [11].
2. FCNC can be eliminated by employing a global symmetry, such as Z\({}_{2}\), which restricts a given scalar boson from coupling to fermions of different electric charges [12, 13].
3. Tree-level FCNC can be eliminated by using a global U(1) Peccei-Quinn symmetry [14].
In addition to addressing FCNC, there have been various attempts to enhance 2HDMs to incorporate neutrino masses [15, 16, 17, 18, 19, 20] and dark matter [21, 22, 23, 24, 25, 26].
However, 2HDM poses a challenge due to its large number of free parameters, making it difficult to probe through collider experiments like the LHC. In general, the scalar potential of the 2HDM consists of fourteen parameters and can exhibit CP conserving or CP violating behavior [11, 27]. Consequently, additional constraints are necessary, often derived from symmetry arguments, to establish relationships among these parameters.
The study of generalized CP (GCP) transformations within the scalar sector of 2HDM is an example of imposing additional symmetries [28, 29]. GCP transformations can be categorized in various ways. In Ref. [28], they are classified into three categories: CP1, CP2, and CP3. CP1 and CP2 correspond to discrete transformations, while CP3 is a continuous transformation that can be extended to the fermionic sector [29], and they have applied this to the quark sector. Furthermore, the CP symmetries
of the scalar sector in the 2HDM have been thoroughly investigated using the basis invariant bilinear formalism [30, 31, 32, 33, 34, 35] in many works [31, 32, 33, 34].
In 2HDM extensions addressing neutrino mass, explicit neutrino phenomenology, including mixing angles and mass-squared differences, is often absent. In this study, we've extended CP3 to the 2HDM's leptonic Yukawa sector, introducing CP violation through the second Higgs's _vev_ phase. Neutrino masses are generated _via_ the Type-I seesaw relation, involving right-handed neutrinos.
The paper is structured as follows: In Section 2, we present the basic formalism of 2HDM. Section 3 elaborates on extending CP3 to the neutrino Yukawa sector within the Type-I seesaw mechanism. We discuss our numerical analysis in Section 4. Finally, in Section 5, we summarize our conclusions.
## 2 Two Higgs Doublet Model under Generalized CP Symmetry
In 2HDMs, the Standard Model's field content expands with the addition of an extra Higgs doublet, denoted as \(\Phi_{2}\), which possesses the same charge assignments as the Higgs field in the SM. However, this minimal expansion in the scalar sector results in an increased number of free parameters. To address this parameter growth, it becomes imperative to introduce specific symmetries. In this context, the Generalized CP (GCP) symmetry is considered. Under GCP, scalar doublets undergo transformations as elucidated in [28]:
\[\Phi_{a}\rightarrow\Phi_{a}^{GCP}=X_{a\alpha}\Phi_{\alpha}^{*}\, \tag{1}\]
where X is an arbitrary unitary CP transformation matrix.
There always exists a choice of basis in which the most general GCP transformation matrix can be brought to the form [36]
\[X=\begin{pmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}\, \tag{2}\]
where \(0\leq\theta\leq\pi/2\). So, we have three distinct cases with respect to the parameter \(\theta\) as mentioned in [28]:
1. When \(\theta=0\), the symmetry is referred to as CP symmetry of order one (CP1).
2. For \(\theta=\pi/2\), the symmetry is known as CP symmetry of order two (CP2).
3. In the range \(0<\theta<\pi/2\), the symmetry is labeled as CP symmetry of order three (CP3) and importantly it constitutes a continuous symmetry.
The most general scalar potential with two Higgs doublets can be written as
\[\begin{split} V_{H}&=m_{11}^{2}\Phi_{1}^{\dagger}\Phi_{1 }+m_{22}^{2}\Phi_{2}^{\dagger}\Phi_{2}-[m_{12}^{2}\Phi_{1}^{\dagger}\Phi_{2}+H.c.]+\frac{1}{2}\lambda_{1}(\Phi_{1}^{\dagger}\Phi_{1})^{2}+\frac{1}{2}\lambda_{2 }(\Phi_{2}^{\dagger}\Phi_{2})^{2}\\ &\quad+\lambda_{3}(\Phi_{1}^{\dagger}\Phi_{1})(\Phi_{2}^{\dagger} \Phi_{2})+\lambda_{4}(\Phi_{1}^{\dagger}\Phi_{2})(\Phi_{2}^{\dagger}\Phi_{1})\\ &\quad+\big{[}\frac{1}{2}\lambda_{5}(\Phi_{1}^{\dagger}\Phi_{2})^ {2}+\lambda_{6}(\Phi_{1}^{\dagger}\Phi_{1})(\Phi_{1}^{\dagger}\Phi_{2})+ \lambda_{7}(\Phi_{2}^{\dagger}\Phi_{2})(\Phi_{1}^{\dagger}\Phi_{2})+H.c.\big{]} \,\end{split} \tag{3}\]
which has a total of 14 parameters. Here, \(m_{11}^{2}\), \(m_{22}^{2}\) and \(\lambda_{1}\) through \(\lambda_{4}\) are real parameters while \(m_{12}^{2}\), \(\lambda_{5}\), \(\lambda_{6}\) and \(\lambda_{7}\) are generally complex.
Based on the findings in Ref. [29], it's established that, in addition to the standard CP symmetry CP1, CP3 is the only symmetry that can be extended to the Yukawa sector for leptons. To maintain CP3 invariance within the scalar potential, certain conditions must be satisfied. Specifically, we must have \(m_{11}^{2}=m_{22}^{2}\), \(m_{12}^{2}=0\), \(\lambda_{2}=\lambda_{1}\), \(\lambda_{6}=0\), \(\lambda_{7}=0\) and \(\lambda_{5}=\lambda_{1}-\lambda_{3}-\lambda_{4}\) which must be a real parameter.
To avoid the presence of Goldstone bosons following spontaneous symmetry breaking, it's necessary to introduce soft CP3 symmetry breaking. Therefore, we will consider \(m_{11}^{2}\neq m_{22}^{2}\) and \(\Re[m_{12}^{2}]\neq 0\). This softly broken CP3 symmetric potential also leads to a CP-violating vacuum expectation value (_vev_) for the second Higgs doublet, which assumes a crucial role in the exploration of CP violation in the lepton sector, as we will delve into in the upcoming sections.
## 3 CP3 in Yukawa Sector with Type-I seesaw
In our study, we've expanded upon the Standard Model (SM) by incorporating one Higgs doublet and three right-handed neutrinos denoted as \(N_{R}\). Within the framework of the Type-I seesaw mechanism, the relevant Yukawa Lagrangian responsible for generating the masses of both charged leptons and neutrinos1 is expressed as follows
Footnote 1: For quark masses, see Ref. [29]
\[-\mathcal{L}_{Y}=\overline{L}_{L}\Gamma_{a}\Phi_{a}l_{R}+\overline{L}_{L}Y_{a} \bar{\Phi}_{a}N_{R}+\frac{1}{2}\overline{N_{R}^{c}}MN_{R}+H.c.\, \tag{4}\]
where \(L_{L}\) and \(l_{R}\) are the Standard Model \(SU(2)\) left-handed doublets and right-handed singlets, \(\Phi_{a}\) (a = 1, 2) are the Higgs doublets and \(N_{R}\) are the right-handed neutrino singlets. \(\Gamma_{a}\) and \(Y_{a}\) are the Yukawa coupling matrices for charged leptons and neutrinos respectively and M is the lepton-number-violating Majorana mass matrix for the right-handed neutrinos. Now we will extend the GCP symmetry to the leptonic Yukawa sector.
The fields involved in Eqn.(4) transform under the GCP symmetry as
\[\begin{split}\Phi_{a}&\rightarrow\Phi_{a}^{GCP}=X_{ab}\Phi_{b}^{*}\,\\ \tilde{\Phi}_{a}&\rightarrow\tilde{\Phi}_{a}^{GCP}=X_{ab}^{*}(\tilde{\Phi}_{b}^{\dagger})^{T}\,\\ L_{L}&\to L_{L}^{GCP}=iX_{\zeta}\gamma^{0}C\overline{L}_{L}^{T}\,\\ l_{R}&\to l_{R}^{GCP}=iX_{\beta}\gamma^{0}C\overline{l}_{R}^{T}\,\\ N_{R}&\to N_{R}^{GCP}=iX_{\gamma}\gamma^{0}C\overline{N}_{R}^{T}\,\end{split} \tag{5}\]
where \(\gamma^{0}\) is a Dirac gamma matrix, C is the charge conjugation matrix, and \(X\), \(X_{\zeta}\), \(X_{\beta}\) and \(X_{\gamma}\) are CP transformation matrices.
For the Lagrangian to remain invariant under these CP transformations, the Yukawa coupling matrices must transform as
\[\Gamma_{b}^{*} =X_{\zeta}^{\dagger}\Gamma_{a}X_{\beta}X_{ab}\, \tag{6}\] \[Y_{b}^{*} =X_{\zeta}^{\dagger}Y_{a}X_{\gamma}X_{ab}^{*}\, \tag{7}\]
and the Majorana mass matrix must transform as
\[M^{*}=X_{\gamma}^{T}MX_{\gamma}\, \tag{8}\]
where
\[M=\begin{pmatrix}M_{11}&M_{12}&M_{13}\\ M_{12}&M_{22}&M_{23}\\ M_{13}&M_{23}&M_{33}\end{pmatrix}. \tag{9}\]
The CP transformation matrices involved are given by
\[X_{\zeta} =\begin{pmatrix}\cos\zeta&\sin\zeta&0\\ -\sin\zeta&\cos\zeta&0\\ 0&0&1\end{pmatrix}\, \tag{10}\] \[X_{\beta} =\begin{pmatrix}\cos\beta&\sin\beta&0\\ -\sin\beta&\cos\beta&0\\ 0&0&1\end{pmatrix}\,\] (11) \[X_{\gamma} =\begin{pmatrix}\cos\gamma&\sin\gamma&0\\ -\sin\gamma&\cos\gamma&0\\ 0&0&1\end{pmatrix}. \tag{12}\]
It was found in Ref. [29] that CP3 symmetry with \(\theta=\pi/3\) (\(\zeta=\beta=\gamma=\pi/3\)) can be extended to the Yukawa sector, producing correct quark masses. Under these conditions, the Yukawa coupling matrices satisfying Eqns.(6) and (7) take the forms
\[\Gamma_{1}=\begin{pmatrix}ia_{11}&ia_{12}&a_{13}\\ ia_{12}&-ia_{11}&a_{23}\\ a_{31}&a_{32}&0\end{pmatrix}\,\ \ \Gamma_{2}=\begin{pmatrix}ia_{12}&-ia_{11}&-a_{23}\\ -ia_{11}&-ia_{12}&a_{13}\\ -a_{32}&a_{31}&0\end{pmatrix}\, \tag{13}\]
\[Y_{1}=\begin{pmatrix}ib_{11}&ib_{12}&b_{13}\\ ib_{12}&-ib_{11}&b_{23}\\ b_{31}&b_{32}&0\end{pmatrix}\,\ \ Y_{2}=\begin{pmatrix}ib_{12}&-ib_{11}&-b_{23}\\ -ib_{11}&-ib_{12}&b_{13}\\ -b_{32}&b_{31}&0\end{pmatrix}\, \tag{14}\]
where all \(a\)'s and \(b\)'s are real parameters. The choice of \(\theta=\pi/3\) alongside \(\zeta=\beta=\gamma=\pi/3\) in the leptonic sector stems from the similarity in GCP transformation properties between quarks and leptonic fields, as outlined in Eqn.(5).
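These textures can be checked numerically; the short sketch below (our own consistency check, not part of the original analysis) verifies that the texture in Eqn.(13) satisfies the CP3 condition of Eqn.(6) for \(\zeta=\beta=\pi/3\) with randomly chosen real parameters. Replacing \(X_{\beta}\) by \(X_{\gamma}\) (and \(\Gamma_{a}\) by \(Y_{a}\) of Eqn.(14)) verifies Eqn.(7) in the same way.

```python
import numpy as np

th = np.pi / 3
c, s = np.cos(th), np.sin(th)
X = np.array([[c, s], [-s, c]])                      # Eq. (2) with theta = pi/3
R3 = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])  # Eqs. (10)-(12)
Xzeta = Xbeta = R3

a11, a12, a13, a23, a31, a32 = np.random.rand(6)     # arbitrary real couplings
G1 = np.array([[1j*a11, 1j*a12, a13],
               [1j*a12, -1j*a11, a23],
               [a31, a32, 0]])
G2 = np.array([[1j*a12, -1j*a11, -a23],
               [-1j*a11, -1j*a12, a13],
               [-a32, a31, 0]])
G = [G1, G2]

# Eq. (6): Gamma_b^* = X_zeta^dagger Gamma_a X_beta X_{ab} (sum over a)
for b in range(2):
    rhs = sum(Xzeta.conj().T @ G[a] @ Xbeta * X[a, b] for a in range(2))
    print(np.allclose(G[b].conj(), rhs))             # prints True, True
```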
Now, we need to solve the constraints given by Eqn.(8), which can be rewritten as:
\[M^{*}-X_{\gamma}^{T}MX_{\gamma}=0. \tag{15}\]
Using Eqns.(9) and (12) in Eqn.(15), the set of constraints are
\[M_{11}^{*}-M_{11}\cos^{2}\gamma-M_{22}\sin^{2}\gamma+M_{12}\sin 2\gamma = 0, \tag{16}\] \[M_{12}^{*}-M_{12}\cos 2\gamma+(-M_{11}+M_{22})\sin\gamma\cos\gamma = 0,\] (17) \[M_{22}^{*}-M_{22}\cos^{2}\gamma-2M_{12}\cos\gamma\sin\gamma-M_{1 1}\sin^{2}\gamma = 0,\] (18) \[M_{13}^{*}-M_{13}\cos\gamma+M_{23}\sin\gamma = 0,\] (19) \[M_{23}^{*}-M_{23}\cos\gamma-M_{13}\sin\gamma = 0,\] (20) \[M_{33}^{*}-M_{33} = 0. \tag{21}\]
In Eqns.(16), (17) and (18), the real part can be separated out as
\[\begin{pmatrix}1-\cos^{2}\gamma&\sin 2\gamma&-\sin^{2}\gamma\\ -\cos\gamma\sin\gamma&1-\cos 2\gamma&\cos\gamma\sin\gamma\\ -\sin^{2}\gamma&-2\cos\gamma\sin\gamma&1-\cos^{2}\gamma\end{pmatrix}\begin{pmatrix} \Re[M_{11}]\\ \Re[M_{12}]\\ \Re[M_{22}]\end{pmatrix}=0, \tag{22}\]
and the imaginary part can be separated out as
\[\begin{pmatrix}-1-\cos^{2}\gamma&\sin 2\gamma&-\sin^{2}\gamma\\ -\cos\gamma\sin\gamma&-1-\cos 2\gamma&\cos\gamma\sin\gamma\\ -\sin^{2}\gamma&-2\cos\gamma\sin\gamma&-1-\cos^{2}\gamma\end{pmatrix}\begin{pmatrix} \Im[M_{11}]\\ \Im[M_{12}]\\ \Im[M_{22}]\end{pmatrix}=0. \tag{23}\]
Further, from Eqns.(19) and (20) we have, for the real part,
\[\begin{pmatrix}1-\cos\gamma&\sin\gamma\\ -\sin\gamma&1-\cos\gamma\end{pmatrix}\begin{pmatrix}\Re[M_{13}]\\ \Re[M_{23}]\end{pmatrix}=0, \tag{24}\]
and, for imaginary part,
\[\begin{pmatrix}-1-\cos\gamma&\sin\gamma\\ -\sin\gamma&-1-\cos\gamma\end{pmatrix}\begin{pmatrix}\Im[M_{13}]\\ \Im[M_{23}]\end{pmatrix}=0. \tag{25}\]
For \(\gamma=\pi/3\), we have
\[\begin{pmatrix}\frac{3}{4}&\frac{\sqrt{3}}{2}&\frac{-3}{4}\\ -\frac{\sqrt{3}}{4}&\frac{3}{2}&\frac{\sqrt{3}}{4}\\ \frac{-3}{4}&\frac{-\sqrt{3}}{2}&\frac{3}{4}\end{pmatrix}\begin{pmatrix}\Re[M _{11}]\\ \Re[M_{12}]\\ \Re[M_{22}]\end{pmatrix}=0, \tag{26}\]
\[\begin{pmatrix}\frac{-5}{4}&\frac{\sqrt{3}}{2}&\frac{-3}{4}\\ -\frac{\sqrt{3}}{4}&-\frac{1}{2}&\frac{\sqrt{3}}{4}\\ \frac{-3}{4}&-\frac{\sqrt{3}}{2}&-\frac{5}{4}\end{pmatrix}\begin{pmatrix} \Im[M_{11}]\\ \Im[M_{12}]\\ \Im[M_{22}]\end{pmatrix}=0, \tag{27}\]
and
\[\begin{pmatrix}\frac{1}{2}&\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&\frac{1}{2}\end{pmatrix}\begin{pmatrix}\Re[M_{13}]\\ \Re[M_{23}]\\ \end{pmatrix}=0, \tag{28}\]
\[\begin{pmatrix}-\frac{3}{2}&\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&-\frac{3}{2}\end{pmatrix}\begin{pmatrix}\Im[M_{13}]\\ \Im[M_{23}]\\ \end{pmatrix}=0. \tag{29}\]
In Eqns.(27), (28), and (29), the determinant of the square matrix is non-zero, implying a unique solution where \(\Im[M_{11}]\), \(\Im[M_{12}]\), \(\Im[M_{22}]\), \(\Re[M_{13}]\), \(\Re[M_{23}]\), \(\Im[M_{13}]\), and \(\Im[M_{23}]\) all equal zero. On the other hand, in Eqn.(26), the determinant of the square matrix is zero, indicating arbitrary solutions, with \(\Re[M_{12}]\) equal to zero and \(\Re[M_{11}]\equiv M_{1}\) equal to \(\Re[M_{22}]\). Furthermore, Eqn.(21) leads to \(\Im[M_{33}]\) being zero, leaving only \(\Re[M_{33}]\equiv M_{3}\) as the relevant parameter.
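The rank argument can be reproduced numerically; the following sketch (ours) builds the coefficient matrices of Eqns.(26)-(29) for \(\gamma=\pi/3\), confirms which determinants vanish, and extracts the null-space direction of Eqn.(26):

```python
import numpy as np

g = np.pi / 3
cg, sg = np.cos(g), np.sin(g)

A_re = np.array([[1 - cg**2, np.sin(2*g), -sg**2],
                 [-cg*sg, 1 - np.cos(2*g), cg*sg],
                 [-sg**2, -2*cg*sg, 1 - cg**2]])      # Eq. (26)
A_im = np.array([[-1 - cg**2, np.sin(2*g), -sg**2],
                 [-cg*sg, -1 - np.cos(2*g), cg*sg],
                 [-sg**2, -2*cg*sg, -1 - cg**2]])     # Eq. (27)
B_re = np.array([[1 - cg, sg], [-sg, 1 - cg]])        # Eq. (28)
B_im = np.array([[-1 - cg, sg], [-sg, -1 - cg]])      # Eq. (29)

for name, M in [("Eq.(26)", A_re), ("Eq.(27)", A_im),
                ("Eq.(28)", B_re), ("Eq.(29)", B_im)]:
    print(name, "det =", round(float(np.linalg.det(M)), 6))

# only Eq.(26) is singular; its null direction is (1, 0, 1), i.e.
# Re[M11] = Re[M22] with Re[M12] = 0, as stated in the text
null_vec = np.linalg.svd(A_re)[2][-1]
print("null direction ~", np.round(null_vec / null_vec[0], 6))
```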
It's worth noting that the CP3 constraint results in two right-handed neutrinos having equal masses, leading to a diagonal matrix \(M\), described as follows
\[M=\begin{pmatrix}M_{1}&0&0\\ 0&M_{1}&0\\ 0&0&M_{3}\end{pmatrix}. \tag{30}\]
After spontaneous symmetry breaking (SSB), both Higgs doublets get _vevs_, given by
\[\langle\Phi_{1}\rangle=\begin{pmatrix}0\\ \frac{v_{1}}{\sqrt{2}}\end{pmatrix}\,\ \langle\Phi_{2}\rangle=\begin{pmatrix}0 \\ e^{i\alpha}\frac{v_{2}}{\sqrt{2}}\end{pmatrix}\, \tag{31}\]
with the condition that \(v=\sqrt{v_{1}^{2}+v_{2}^{2}}\approx 245\) GeV, where \(v\) is the standard model _vev_. Consequently, the charged lepton mass matrix becomes
\[M_{l} = \frac{1}{\sqrt{2}}(v_{1}\Gamma_{1}+e^{i\alpha}v_{2}\Gamma_{2}), \tag{32}\] \[= \frac{1}{\sqrt{2}}(\cos\phi\Gamma_{1}+e^{i\alpha}\sin\phi\Gamma_ {2})v, \tag{33}\]
and for the neutrinos, the Dirac mass matrix is given by
\[M_{D} = \frac{1}{\sqrt{2}}(v_{1}Y_{1}+e^{-i\alpha}v_{2}Y_{2}), \tag{34}\] \[= \frac{1}{\sqrt{2}}(\cos\phi Y_{1}+e^{-i\alpha}\sin\phi Y_{2})v, \tag{35}\]
where \(\phi\) is defined as \(\tan\phi=v_{2}/v_{1}\).
We work in the basis in which charged lepton mass matrix is diagonal. The charged lepton mass matrix can be diagonalized as
\[M_{l}=U_{l}m_{diag}U_{R}^{\dagger}, \tag{36}\]
where \(U_{l}\) and \(U_{R}\) are \(3\times 3\) unitary matrices and \(m_{diag}=diag(m_{e},m_{\mu},m_{\tau})\) is diagonal matrix with positive real entries giving mass eigenvalues of electron, muon, and tau, respectively. So, we have
\[U_{l}^{\dagger}M_{l}M_{l}^{\dagger}U_{l}=m_{diag}^{2}, \tag{37}\]
where \(U_{l}\) rotates \(M_{D}\) into the basis in which charged leptons are diagonal
\[M_{D}^{new}=U_{l}^{\dagger}M_{D}. \tag{38}\]
Using Type-I seesaw, the effective light neutrinos mass matrix is given by
\[M_{\nu} = -M_{D}^{new}M^{-1}(M_{D}^{new})^{T}\,, \tag{39}\] \[= -(U_{l}^{\dagger}M_{D})M^{-1}(U_{l}^{\dagger}M_{D})^{T}, \tag{40}\]
which is a complex symmetric matrix. This matrix is related to Yukawa coupling matrices \(Y_{1}\) and \(Y_{2}\) through Eqn.(35). The effective light neutrino mass matrix can be diagonalized by \(3\times 3\) unitary matrix \(U\) as
\[U^{\dagger}M_{\nu}U^{*}=m, \tag{41}\]
where \(m_{ik}=m_{i}\delta_{ik}\), \(m_{i}>0\)\((i,k=1,2,3)\).
We will now move forward with the numerical determination of charged lepton masses and the parameters governing neutrino oscillations. This process entails the variation of free parameters to ascertain the permissible parameter space within the model.
## 4 Numerical Analysis and Discussion
In our numerical analysis, we generated random numbers uniformly for the _vev_-phase \(\alpha\) in the range of 0 to \(2\pi\). We also generated random numbers uniformly for the masses of the right-handed neutrinos \(M_{1}\) and \(M_{3}\), which ranged from \(10^{11}\) to \(10^{13}\) GeV and from \(1.1\times 10^{13}\) to \(10^{15}\) GeV, respectively. We considered two cases for the _vev_\(v_{1}\), as discussed in the following subsections.
### When \(v_{1}<<v_{2}\)
In this scenario, we examined the influence of a very small _vev_ on our parameter space. To emphasize the dominance of _vev_\(v_{2}\), we randomly varied \(v_{1}\) in the range of \((0-5)\times 10^{-6}\) GeV. The value of \(v_{2}\) is subsequently determined using the equation \(v_{2}=\sqrt{v^{2}-v_{1}^{2}}\) GeV. We then determined the masses of charged leptons and the parameters governing neutrino oscillations, which we discuss in the following subsections.
#### 4.1.1 Charged Lepton Masses
To compute the masses of charged leptons, we varied the Yukawa coupling parameters within the range specified in Table 1. We then proceeded to numerically diagonalize the mass matrix \(M_{l}M_{l}^{\dagger}\), as described in Eqn.(37), to obtain the squared masses of charged leptons \((m_{e}^{2},m_{\mu}^{2},m_{\tau}^{2})\). Through this analysis, we identified parameter values that consistently yielded the correct charged lepton masses for both normal (\(m_{1}<m_{2}<m_{3}\)) and inverted (\(m_{3}<m_{1}<m_{2}\)) hierarchies of neutrinos. The benchmark points are listed in the second column of Table 2. With these parameter values, we calculated the charged lepton masses, as presented in Table 3, which closely align with experimentally observed values.
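A minimal numerical sketch of this step (our own illustration; the parameter values below are placeholders of the same order as the benchmark point of Table 2, so the printed masses are indicative only) is:

```python
import numpy as np

v, v1 = 245.0, 3.3e-6                      # GeV, the v1 << v2 case (placeholders)
v2 = np.sqrt(v**2 - v1**2)
alpha = np.deg2rad(197.5)

a11, a12, a13, a23, a31, a32 = 5.7e-5, 9.1e-5, 7.1e-3, 7.4e-3, 5.8e-4, 1.4e-4
G1 = np.array([[1j*a11, 1j*a12, a13],
               [1j*a12, -1j*a11, a23],
               [a31, a32, 0]])              # Eq. (13)
G2 = np.array([[1j*a12, -1j*a11, -a23],
               [-1j*a11, -1j*a12, a13],
               [-a32, a31, 0]])

Ml = (v1 * G1 + np.exp(1j * alpha) * v2 * G2) / np.sqrt(2)   # Eq. (33)

# Eq. (37): eigenvalues of M_l M_l^dagger are the squared charged lepton masses;
# the columns of Ul (ordered by ascending mass) define the charged lepton basis
m2, Ul = np.linalg.eigh(Ml @ Ml.conj().T)
print("charged lepton masses (GeV):", np.sqrt(np.abs(m2)))
```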
#### 4.1.2 Neutrino Oscillation Parameters
To find the neutrino mixing angles and mass-squared differences, we first rotated the neutrino mass matrix using the previously derived \(U_{l}\). The matrix \(M_{\nu}\), described in Eqn.(40), generally contains Yukawa couplings \(Y_{1}\) and \(Y_{2}\), as given in Eqn.(14). We then introduced random variations to these Yukawa coupling parameters within the specified ranges presented in the second column of Table 4,
\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline Yukawa Coupling & When \(v_{1}<<v_{2}\) & When \(v_{1}\) ranges from \((10-17)\) Gev \\ Parameter & Range & Range \\ \hline \(a_{11}\) & \(3\times 10^{-5}-6\times 10^{-5}\) & \(2\times 10^{-5}-3\times 10^{-5}\) \\ \(a_{12}\) & \(8\times 10^{-5}-1\times 10^{-4}\) & \(6\times 10^{-5}-8\times 10^{-5}\) \\ \(a_{13}\) & \(6\times 10^{-3}-8\times 10^{-3}\) & \(6\times 10^{-3}-7\times 10^{-3}\) \\ \(a_{23}\) & \(6\times 10^{-3}-8\times 10^{-3}\) & \(8\times 10^{-3}-9\times 10^{-3}\) \\ \(a_{31}\) & \(4\times 10^{-4}-6\times 10^{-4}\) & \(4\times 10^{-4}-6\times 10^{-4}\) \\ \(a_{32}\) & \(1\times 10^{-4}-2\times 10^{-4}\) & \(1\times 10^{-4}-2\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The ranges of Yukawa couplings used in numerical analysis for charged leptons.
Figure 1: Predictions for effective Majorana mass \(|m_{ee}|\) for normal (left) and inverted (right) hierarchy when \(v_{1}<<v_{2}\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline \multirow{2}{*}{Yukawa Coupling Parameter} & \multicolumn{2}{c|}{When \(v_{1}<<v_{2}\)} & \multicolumn{2}{c|}{When \(v_{1}\) ranges from \((10-17)\) Gev} \\ \cline{2-3} & \multicolumn{2}{c|}{Range} & \multicolumn{2}{c|}{Range} \\ \cline{2-3} \cline{5-5} & NH & IH & NH & IH \\ \hline \(b_{11}\) & \(6\times 10^{-2}-8\times 10^{-2}\) & \(7\times 10^{-2}-2\times 10^{-1}\) & \(6\times 10^{-2}-8\times 10^{-2}\) & \(8\times 10^{-2}-9\times 10^{-2}\) \\ \(b_{12}\) & \(1\times 10^{-2}-5\times 10^{-2}\) & \(2\times 10^{-3}-4\times 10^{-3}\) & \(2\times 10^{-2}-3\times 10^{-2}\) & \(2\times 10^{-3}-3\times 10^{-3}\) \\ \(b_{13}\) & \(7\times 10^{-1}-9\times 10^{-1}\) & \(3\times 10^{-1}-5\times 10^{-1}\) & \(6\times 10^{-1}-8\times 10^{-1}\) & \(1\times 10^{-1}-2\times 10^{-1}\) \\ \(b_{23}\) & \(1\times 10^{-4}-5\times 10^{-4}\) & \(4\times 10^{-1}-6\times 10^{-1}\) & \(1\times 10^{-3}-2\times 10^{-3}\) & \(4\times 10^{-1}-6\times 10^{-1}\) \\ \(b_{31}\) & \(1\times 10^{-4}-5\times 10^{-4}\) & \(3\times 10^{-2}-5\times 10^{-2}\) & \(1\times 10^{-2}-2\times 10^{-2}\) & \(3\times 10^{-2}-5\times 10^{-2}\) \\ \(b_{32}\) & \(4\times 10^{-2}-6\times 10^{-2}\) & \(3\times 10^{-2}-5\times 10^{-2}\) & \(4\times 10^{-2}-6\times 10^{-2}\) & \(6\times 10^{-2}-7\times 10^{-2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The ranges of Yukawa couplings used in numerical analysis for normal and inverted hierarchies of neutrinos.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} Masses \\ (MeV) \\ \end{tabular} } & \multicolumn{2}{c|}{When \(v_{1}<<v_{2}\)} & \multicolumn{2}{c|}{When \(v_{1}\) ranges from \((10-17)\) Gev} \\ \cline{2-3} & & NH & IH \\ \hline \(m_{e}\) & 0.510 & 0.511 & 0.511 \\ \(m_{\mu}\) & 105.66 & 105.66 & 105.66 \\ \(m_{\tau}\) & 1776.87 & 1776.85 & 1776.85 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The charged lepton masses (in MeV) obtained using the benchmark points of Table 2 for both cases: (i) \(v_{1}<<v_{2}\) (second column) and (ii) \(v_{1}\) ranging from \((10-17)\) GeV (third and fourth columns, for NH and IH respectively).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} Parameters \\ (Units) \\ \end{tabular} } & \multicolumn{2}{c|}{When \(v_{1}<<v_{2}\)} & \multicolumn{2}{c|}{When \(v_{1}\) ranges from \((10-17)\) GeV} \\ \cline{2-3} & & NH & IH \\ \hline \(a_{11}\) & \(5.740882\times 10^{-5}\) & \(2.692411\times 10^{-5}\) & \(2.692412\times 10^{-5}\) \\ \(a_{12}\) & \(9.066478\times 10^{-5}\) & \(7.993756\times 10^{-5}\) & \(7.993757\times 10^{-5}\) \\ \(a_{13}\) & \(7.124930\times 10^{-3}\) & \(6.382725\times 10^{-3}\) & \(6.382726\times 10^{-3}\) \\ \(a_{23}\) & \(7.377109\times 10^{-3}\) & \(8.028028\times 10^{-3}\) & \(8.028028\times 10^{-3}\) \\ \(a_{31}\) & \(5.843790\times 10^{-4}\) & \(5.945601\times 10^{-4}\) & \(5.945602\times 10^{-4}\) \\ \(a_{32}\) & \(1.377038\times 10^{-4}\) & \(1.065540\times 10^{-4}\) & \(1.065540\times 10^{-4}\) \\ \(v_{1}\) (GeV) & \(3.287415\times 10^{-6}\) & \(11.00\) & \(15.30\) \\ \(v_{2}\) (GeV) & \(245\) & \(244.75\) & \(244.52\) \\ \(\alpha\) (\({}^{o}\)) & \(197.54\) & \(177.68\) & \(170.20\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Benchmark points for both the cases (i) \(v_{1}<<v_{2}\) (second column) (ii) \(v_{1}\) ranges from \((10-17)\) GeV (third column), yielding correct values of charged lepton masses.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \hline Parameters & \multicolumn{3}{|c|}{When \(v_{1}<<v_{2}\)} & \multicolumn{3}{|c|}{When \(v_{1}\) ranges from \((10-17)\) Gev} \\ \cline{2-5} (Units) & \multicolumn{2}{|c|}{NH} & \multicolumn{1}{|c|}{IH} & \multicolumn{1}{|c|}{NH} & \multicolumn{1}{|c|}{IH} \\ \hline \(\theta_{13}\) (\({}^{\circ}\)) & 8.44 & 8.85 & 8.86 & 8.22 \\ \(\theta_{12}\) (\({}^{\circ}\)) & 31.70 & 34.30 & 32.10 & 31.80 \\ \(\theta_{23}\) (\({}^{\circ}\)) & 43.76 & 47.09 & 42.96 & 48.14 \\ \(\Delta m^{2}_{21}\) (eV\({}^{2}\)) & \(7.61\times 10^{-5}\) & \(7.33\times 10^{-5}\) & \(7.64\times 10^{-5}\) & \(7.04\times 10^{-5}\) \\ \(\Delta m^{2}_{31}\) (eV\({}^{2}\)) & \(2.58\times 10^{-3}\) & \(2.48\times 10^{-3}\) & \(2.49\times 10^{-3}\) & \(2.43\times 10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: The values of the neutrino oscillation parameters obtained using benchmark points given in Table 5 for normal and inverted hierarchical neutrino masses.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \hline Parameter & \multicolumn{2}{|c|}{When \(v_{1}<<v_{2}\)} & \multicolumn{2}{|c|}{When \(v_{1}\) ranges from \((10-17)\) Gev} \\ \cline{2-5} (Units) & NH & IH & NH & IH \\ \hline \(b_{11}\) & \(7.244577\times 10^{-2}\) & \(1.066963\times 10^{-1}\) & \(6.009812\times 10^{-2}\) & \(8.006565\times 10^{-2}\) \\ \(b_{12}\) & \(1.498695\times 10^{-2}\) & \(3.644816\times 10^{-3}\) & \(2.334813\times 10^{-2}\) & \(2.601354\times 10^{-3}\) \\ \(b_{13}\) & \(8.560231\times 10^{-1}\) & \(4.519143\times 10^{-1}\) & \(6.935666\times 10^{-1}\) & \(1.522807\times 10^{-1}\) \\ \(b_{23}\) & \(1.033423\times 10^{-4}\) & \(5.112375\times 10^{-1}\) & \(1.791234\times 10^{-3}\) & \(4.531115\times 10^{-1}\) \\ \(b_{31}\) & \(1.356340\times 10^{-4}\) & \(4.546001\times 10^{-2}\) & \(1.655557\times 10^{-2}\) & \(3.909223\times 10^{-2}\) \\ \(b_{32}\) & \(5.796197\times 10^{-2}\) & \(4.435139\times 10^{-2}\) & \(4.880238\times 10^{-2}\) & \(6.157254\times 10^{-2}\) \\ \(M_{1}\) (GeV) & \(2.447917\times 10^{12}\) & \(6.721219\times 10^{12}\) & \(3.066113\times 10^{12}\) & \(4.417490\times 10^{12}\) \\ \(M_{3}\) (GeV) & \(2.222544\times 10^{14}\) & \(5.140671\times 10^{14}\) & \(2.748449\times 10^{14}\) & \(6.211162\times 10^{14}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: The benchmark points yielding correct neutrino phenomenology (i.e. neutrino mixing angles and mass-squared differences are within \(3\sigma\) experimental range [37]) for normal and inverted hierarchical neutrino masses.
considering both normal (NH) and inverted (IH) hierarchies. This process allowed us to identify a benchmark point that consistently yielded values for the neutrino oscillation parameters [37]. These values are listed in the second column of Table 5 for reference. The corresponding values of the mixing angles and mass-squared differences can be found in the second column of Table 6.
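The neutrino step can be sketched in the same spirit (again with placeholder inputs rather than the benchmark point of Table 5, and assuming the standard PMNS conventions and an ascending, NH-like mass ordering when reading off the angles):

```python
import numpy as np

def texture(p):                                  # common texture of Eqs. (13)-(14)
    x11, x12, x13, x23, x31, x32 = p
    T1 = np.array([[1j*x11, 1j*x12, x13], [1j*x12, -1j*x11, x23], [x31, x32, 0]])
    T2 = np.array([[1j*x12, -1j*x11, -x23], [-1j*x11, -1j*x12, x13], [-x32, x31, 0]])
    return T1, T2

v, v1 = 245.0, 3.3e-6                            # GeV (placeholders, v1 << v2)
v2, alpha = np.sqrt(v**2 - v1**2), np.deg2rad(197.5)
G1, G2 = texture([5.7e-5, 9.1e-5, 7.1e-3, 7.4e-3, 5.8e-4, 1.4e-4])   # charged leptons
Y1, Y2 = texture([7.2e-2, 1.5e-2, 8.6e-1, 1.0e-4, 1.4e-4, 5.8e-2])   # neutrinos
M = np.diag([2.4e12, 2.4e12, 2.2e14])            # Eq. (30), GeV

Ml = (v1*G1 + np.exp(1j*alpha)*v2*G2) / np.sqrt(2)        # Eq. (33)
MD = (v1*Y1 + np.exp(-1j*alpha)*v2*Y2) / np.sqrt(2)       # Eq. (35)
_, Ul = np.linalg.eigh(Ml @ Ml.conj().T)                  # Eq. (37)
MD_new = Ul.conj().T @ MD                                 # Eq. (38)
Mnu = -MD_new @ np.linalg.inv(M) @ MD_new.T               # Eq. (40)

m2, U = np.linalg.eigh(Mnu @ Mnu.conj().T)                # M_nu M_nu^dag = U m^2 U^dag
m = np.sqrt(np.abs(m2)) * 1e9                             # eV, ascending order

th13 = np.degrees(np.arcsin(abs(U[0, 2])))
th12 = np.degrees(np.arctan2(abs(U[0, 1]), abs(U[0, 0])))
th23 = np.degrees(np.arctan2(abs(U[1, 2]), abs(U[2, 2])))
print(th12, th13, th23, m[1]**2 - m[0]**2, m[2]**2 - m[0]**2)
```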
For these parameter values, the effective Majorana neutrino mass takes the values \(|m_{ee}|=\left|\sum_{i}U_{ei}^{2}m_{i}\right|=0.02693\) eV and \(|m_{ee}|=0.04952\) eV for NH and IH of neutrinos, respectively. The masses \(m_{1}\), \(m_{2}\), and \(m_{3}\) exhibited significant degeneracy in the case of NH, while for IH, there was an order of magnitude difference between the lightest mass and the other heavier masses. A linear correlation between \(|m_{ee}|\) and the lightest neutrino mass (\(m_{1}\) for NH and \(m_{3}\) for IH) is evident in Fig. 1.
CP violation in the leptonic sector remains unobserved to date. Consequently, it will be instructive to scrutinize the model's predictions for CP violation. In our model, CP violation effectively originates from the complex _vev_\(\langle\Phi_{2}\rangle\)_via_ its phase \(\alpha\). CP violation can be understood in a rephasing invariant way by defining the Jarlskog parameter \(J_{CP}=\Im\left(U_{\mu 3}U_{e3}^{*}U_{e2}U_{\mu 2}^{*}\right)\)[38, 39, 40]. For the present case, i.e. \(v_{1}<<v_{2}\), we find that \(J_{CP}\) and \(\delta\) (the phase of the \(U_{e3}\) element in \(U\)) are exceedingly small, regardless of the neutrino mass hierarchy, and whether the _vev_-phase \(\alpha\) is equal to zero or non-zero (see Fig. (2)). Figure (2) clearly illustrates that when \(v_{1}<<v_{2}\), the model predicts an effectively CP conserving scenario, regardless
Figure 3: Correlation between the mixing angles \(\theta_{13}\) and \(\theta_{23}\) for normal (left) and inverted (right) hierarchies.
Figure 2: \(\delta-J_{CP}\) correlation with _vev_-phase \(\alpha=0\) and \(\alpha\neq 0\) for normal (left) and inverted (right) hierarchies.
of the value of the _vev_-phase \(\alpha\).
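As a companion check, the Jarlskog parameter defined above can be computed numerically for any choice of the mixing angles and of the Dirac phase. The sketch below assumes the standard PDG-like parametrization with a Dirac phase only; the value of \(\delta\) used is purely illustrative.

```python
import numpy as np

def jarlskog(U):
    """J_CP = Im(U_mu3 U_e3* U_e2 U_mu2*) for a 3x3 mixing matrix U (rows e, mu, tau)."""
    return np.imag(U[1, 2] * np.conj(U[0, 2]) * U[0, 1] * np.conj(U[1, 1]))

def pmns(t12, t13, t23, delta):
    """Standard parametrization with a single Dirac phase."""
    s12, c12 = np.sin(t12), np.cos(t12)
    s13, c13 = np.sin(t13), np.cos(t13)
    s23, c23 = np.sin(t23), np.cos(t23)
    e = np.exp(1j * delta)
    return np.array([
        [c12*c13,                 s12*c13,                 s13*np.exp(-1j*delta)],
        [-s12*c23 - c12*s23*s13*e, c12*c23 - s12*s23*s13*e, s23*c13],
        [ s12*s23 - c12*c23*s13*e, -c12*s23 - s12*c23*s13*e, c23*c13],
    ])

# benchmark NH angles of Table 6 with an assumed delta of 40 degrees
U = pmns(np.radians(31.70), np.radians(8.44), np.radians(43.76), np.radians(40.0))
print(jarlskog(U))
```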
Furthermore, in this parameter space region, the mixing angle \(\theta_{23}\) falls within the lower octant (\(\theta_{23}<45^{o}\)) for the normal hierarchy (NH) and the upper octant (\(\theta_{23}>45^{o}\)) for the inverted hierarchy (IH), as illustrated in Fig. 3. Specifically, we note that the mixing angle \(\theta_{23}\) is approximately \(44^{o}\) for the normal hierarchy and \(47^{o}\) for the inverted hierarchy. In the subsequent section, we will explore an alternative scenario by increasing the _vev_\(v_{1}\) to observe its impact on CP violation in the leptonic sector.
### When _vev_\(v_{1}\) is in the GeV range
In this specific scenario, we randomly varied the _vev_\(v_{1}\) within the range of 10 to 17 GeV to analyze the influence of both _vevs_ on the parameter space. Once again, in this case, \(v_{2}\) is determined by the relation \(v_{2}=\sqrt{v^{2}-v_{1}^{2}}\) GeV. The predictions obtained for masses and other parameters will be discussed as follows.
Figure 4: Correlation plots between the effective neutrino mass \(|m_{ee}|\) and lightest neutrino mass, as well as between \(J_{CP}\) and CP-violating phase \(\delta\), for both normal (first row) and inverted (second row) hierarchies when the _vev_\(v_{1}\) ranges from 10 to 17 GeV.
#### 4.2.1 Charged lepton masses
In this case, we have randomly varied the Yukawa couplings in the ranges shown in the third column of Table 1 and, after diagonalizing the mass matrix \(M_{l}M_{l}^{\dagger}\) numerically using Eqn. (37), we have obtained the squared masses of the charged leptons. The benchmark point and the predictions for the charged-lepton masses are listed in the third column of Tables 2 and 3, respectively.
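The numerical diagonalization itself is standard; the schematic sketch below shows how the charged-lepton masses can be extracted from \(M_{l}M_{l}^{\dagger}\). The Yukawa texture shown is an arbitrary toy example, not the benchmark point of Table 2, and Eqn. (37) of the paper is not reproduced here.

```python
import numpy as np

def charged_lepton_masses(Ml):
    """Diagonalize the Hermitian combination M_l M_l^dagger numerically;
    the square roots of its eigenvalues are the charged-lepton masses."""
    H = Ml @ Ml.conj().T
    eigvals, U_left = np.linalg.eigh(H)     # Hermitian solver, eigenvalues in ascending order
    return np.sqrt(np.abs(eigvals)), U_left

# arbitrary toy complex texture (GeV), for illustration only
Ml = np.array([[5.0e-4, 1.0e-3, 2.0e-3],
               [1.0e-3, 0.10,   0.02  ],
               [2.0e-3, 0.02,   1.70  ]]) * np.exp(0.3j)
masses, U_left = charged_lepton_masses(Ml)
print(masses)   # ordered from lightest to heaviest
```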
#### 4.2.2 Neutrino Oscillation Parameters
The ranges of Yukawa couplings considered in the numerical analysis are presented in the third column of Table 4. In both normal and inverted hierarchies of neutrinos, we have identified the benchmark
Figure 5: Correlation between the mixing angles \(\theta_{13}\) and \(\theta_{23}\) for normal (left) and inverted (right) hierarchies.
Figure 6: Correlation of \(\delta\) with \(\theta_{23}\) and _vev_-phase \(\alpha\) for NH (first row) and IH (second row).
point and obtained the values of the mixing angles and mass squared differences, as shown in the third column of Tables 5 and 6, respectively.
For these values, we have determined that \(|m_{ee}|\) is approximately 0.01191 eV for NH and 0.04726 eV for IH. Even with the increased value of \(v_{1}\), the masses \(m_{1}\), \(m_{2}\), and \(m_{3}\) follow the same trend as in the previous case when \(v_{1}<<v_{2}\).
The correlation plots in Fig. 4 are shown for the NH (first row) and IH (second row) cases. It can be seen from Fig. 4 that \(|m_{ee}|\) is strongly correlated with \(m_{1}\) (Fig. 4(a)) and that the Jarlskog invariant satisfies \(J_{CP}\in(-0.006\to 0.006)\) (Fig. 4(b)) for NH. For IH, \(|m_{ee}|\in(4.67\to 4.83)\times 10^{-2}\) eV (Fig. 4(c)) and \(J_{CP}\in(-0.021\to-0.018)\oplus(0.018\to 0.021)\) (Fig. 4(d)). It is interesting to note that, in the case of IH, the CP-violating phase \(\delta=0\) is disallowed, thus making this scenario necessarily CP violating. Another characteristic feature of the model is the predicted correlation between the yet unknown \(\theta_{23}\)-octant and the neutrino mass hierarchy. In Fig. 5 we have shown the allowed parameter space of the model in the \((\theta_{13}-\theta_{23})\) plane. It is evident from these plots that, for NH (IH), the mixing angles \((\theta_{13}-\theta_{23})\) exhibit a negative (positive) correlation. Thus, \(\theta_{23}\) resides in the lower octant for NH and in the higher octant for IH (see Fig. 5). Further, the model exhibits a sharp prediction for \(\theta_{23}\), approximately equal to \(43^{o}\) (\(48^{o}\)) for NH (IH). Similarly, \(\theta_{13}\) is found to be around \(8.8^{o}\) (\(8.24^{o}\)) for NH (IH). Some other interesting correlation plots are shown in Fig. 6.
Multi-Higgs models often lead to significant flavor-changing neutral currents (FCNC) [41]. One approach to reduce these couplings is by aligning all right-handed fermions to interact with a single Higgs. This alignment can be achieved through an additional global \(Z_{2}\) symmetry [12, 13]. Alternatively, it can result from adjusting the ratio of the _vevs_\(v_{2}/v_{1}\). In our study, we are investigating the consequences of Generalized CP (GCP) symmetry without imposing additional symmetry constraints on the most general Yukawa couplings and their corresponding neutrino phenomenology. Within the model it is possible that the Higgs responsible for FCNC is very heavy and thus produces vanishing FCNCs. The examination of FCNC effects in the current model is beyond the scope of this study and will be explored in future research.
## 5 Conclusions
In extended theoretical frameworks beyond the Standard Model, the count of free parameters increases compared to those at low energy. By introducing additional symmetry into the Lagrangian, we can substantially reduce the number of free parameters. In this study, we investigate the impact of GCP symmetries in the leptonic sector within the context of the 2HDM model. To incorporate non-zero neutrino masses, we extend the lepton sector by introducing right-handed neutrinos through the Type-I seesaw mechanism.
The scalar potential of the 2HDM model typically involves fourteen free parameters. However, due to the GCP symmetry we impose, this number reduces to four in the unbroken CP3 case and six in the softly broken CP3 symmetry case. As a consequence of GCP, the charged-lepton (Eqn. (13)) and neutrino Yukawa coupling (Eqn. (14)) matrices contain 12 independent parameters, six each in the charged-lepton and neutrino sectors. Also, the Majorana mass term has two real parameters (Eqn. (30)).
This model exhibits a rich phenomenology and reveals strong correlations among neutrino oscillation parameters. The complex _vev_ phase \(\alpha\) is the sole source of CP violation in the model. We consider two distinct phenomenological scenarios:
1. In the first scenario, where _vev_\(v_{1}\) is much smaller than \(v_{2}\), CP is conserved regardless of the value of the _vev_-phase \(\alpha\) (see Fig. 2). This scenario provides a unique phenomenology for normal and inverted hierarchies of neutrinos. Notably, the model precisely predicts the neutrino mixing angles, particularly the atmospheric mixing angle \(\theta_{23}\).
2. In the second scenario, where \(v_{1}\) is in the GeV range, the atmospheric mixing angle \(\theta_{23}\) is below (for NH) or above (for IH) maximality, approximately \(\approx 43^{o}\) and \(\approx 48^{o}\) respectively (see Fig. 5). The Dirac CP phase \(\delta\) is tightly constrained to be within the range of \(-10^{o}\) to \(10^{o}\) for the NH case. However, if the neutrino masses follow an inverted mass spectrum, the model inherently exhibits CP violation with \(\delta\) approximately equal to \(\pm 40^{o}\). In summary, our investigation into the 2HDM model with GCP symmetries reduces the number of free parameters, leading to precise predictions for neutrino mixing angles and distinct CP-violating scenarios, shedding light on its unique phenomenology.
## Acknowledgments
Tapender acknowledges the financial support provided by Central University of Himachal Pradesh. The authors, also, acknowledge Department of Physics and Astronomical Science for providing necessary facility to carry out this work.
|
2309.13058 | Mathematical Modeling and Optimal Control of Untrue Information :
Dynamic SEIZ in Online Social Networks | We propose to model the phenomenon of the spread of a rumor in this paper. We
manipulate a model that is based on SEIR model that specializes in spreading
rumors. In the second part, we introduce a control strategy to fight against
the diffusion of the rumor. Our main objective is to characterize the three
optimal controls that minimize the number of spreaders, susceptibles who enter
and spread the rumor, and skeptics. For that matter, using the maximum
principle of Pontryagin, we prove the existence and give characterization of
our controls. To illustrate the theoretical results obtained, numerical
simulations are given to concretize our approach. | Fulgence Mansal, Ibrahima Faye | 2023-09-09T09:16:44Z | http://arxiv.org/abs/2309.13058v1 | Mathematical Modeling and Optimal Control of Untrue Information : Dynamic SEIZ in Online Social Networks
###### Abstract
In this paper we propose to model the phenomenon of the spread of a rumor. We work with a model based on the SEIR framework, adapted to rumor spreading. In the second part, we introduce a control strategy to fight against the diffusion of the rumor. Our main objective is to characterize the three optimal controls that minimize the number of spreaders, of susceptibles who enter and spread the rumor, and of skeptics. To this end, using the maximum principle of Pontryagin, we prove the existence of the controls and give their characterization. To illustrate the theoretical results obtained, numerical simulations are provided to concretize our approach.
Fulgence Mansal
Universite Catholique de l'Afrique de l'Oueset/ UUZ
Laboratory Decision Mathematics and Numerical Analysis (LMDAN / UCAD)
Ibrahim Faye
Universite Alioune DIOP de Bambey, Senegal
Laboratory Decision Mathematics and Numerical Analysis (LMDAN)
## 1 Introduction
The phenomenon of rumor is complex and has eluded humankind since ancient times; it involves many interacting factors, natural, sociological, economic, and psychological. Over the years, communities have witnessed the emergence of many rumors that spread widely among them, and the phenomenon has been analyzed by the leaders of these societies throughout history [1]. Human beings have fabricated and disseminated rumors for political, economic, and social purposes [5], exploiting them to make commercial profit or to win wars by instilling fear and surrender in the enemy or by undermining confidence in its leaders. The structure of the rumor phenomenon has changed in line with the evolution of societies and of daily life in general, in particular with the increasing use of technological devices and modern communication technologies. As a consequence, rumors now rise dramatically and spread much faster, which in turn has huge consequences. The growing strength and influence of rumors within societies has given the phenomenon another dimension [14]: it is now used by the media and intelligence services in the competition between countries, in what is known as propaganda, polemic, or buzz, by publishing news that is partly or entirely false in order to influence the opinion of voters by raising or lowering the popularity of politicians. This happened in the election between Trump and Hillary, where Hillary was the more popular candidate and was favored to win until the last weeks before the presidential election [37]; some specialized communication agencies published many stories about Hillary that contributed significantly to shifting public opinion towards Trump, who eventually won.
Mathematical modeling is one of the most important applications of mathematics: it contributes to the representation and simulation of social, economic, biological, and ecological phenomena by converting them into mathematical equations that can be formulated, studied, analyzed, and interpreted; see [29]. In this context, many researchers have developed different mathematical models representing the dynamics of rumors [34].
In the work [31], the authors reviewed and studied several mathematical models of rumor propagation.
Related Work. In 1964, Goffman and Newill developed in [15] a new concept for modeling the transmission of ideas within a society based on the mathematical model \(SIR\) due to the great similarity between the two phenomena.
With the development of societies and the emergence of modern technological means (transport communication), new factors have emerged that further complicate the phenomenon of rumor and contribute to the large spread of rumors;
As an example, in the work done by Luis M.A. Bettencourt et al. [8], the authors proposed a new model taking into account new factors by extending the \(SIR\) model to a \(SEIZR\) model with two additional compartments.
With the emergence of social networks and their impacts on communication within communities where they are taking more and more space within the community, it became clear that they must be taken into account as major intervening in the spread of rumors; in this context many of the works that adopted this hypothesis have been produced.
To reduce the negative effect of rumor propagation, in this paper we introduce a compartmental model of rumor propagation which takes into account the public's refutation of the rumor and information feedback [7].
Compartmental models are a mathematical approach applied to measure and predict the spread of various infectious diseases. The diffusion of misinformation follows a process similar to the spreading of a virus: in an epidemic, each user can become infected and can, in turn, infect susceptible users. In [4], the authors proposed the \(SEIZ\) model, where the skeptics are the individuals who become immune to the infection. Although this class is similar to the removed (R) individuals, skeptics transition directly from the susceptible state, and their interactions still affect the other compartments. Along these lines, in [19] the authors proposed a model in which the rumor spreads between two different scenarios that do not share information with each other.
In order to demonstrate the effectiveness of the proposed model, we will present numerical simulations showing how well the model adapts to reality. The initial values are approximate data suggested after studying some statistics about the users of social networks; the values are collected in the corresponding table.
The key contributions of this paper are: We demonstrate the capability of the \(SEIZ\) model to quantify compartment transition dynamics. We showcase how such information could facilitate the development of screening criteria for distinguishing rumors from real news happenings.
The paper is organized as follows. Section 2 presents the model formulation. In Section 3, we establish some basic properties of the model. Section 4 is devoted to the optimal control problem and to the numerical simulations. In Section 5, we give the concluding remarks.
## 2 Model formulation
Compartmental models are a mathematical approach used to evaluate and predict the spread of various infectious diseases [10]. At the beginning, mathematical models for rumors were considered merely speculative and imprecise, but rumor spreading is now seen as analogous to the transmission of a disease [24]. The spreading of a rumor is in many ways similar to the spreading of an epidemic infection, with the spreader notifying or infecting the susceptible [2]. The \(SEIZ\) model is a compartmental model that breaks the population into distinct compartments and establishes parameters for the rates at which the population transitions between compartments. These parameters are obtained by looking at the relationships between each class of the population and making assumptions about the disease.
One drawback of the SIS model is that once a susceptible individual gets exposed to the disease, he can only transition directly to the infected status. In fact, especially on Twitter, this assumption does not work well; people's ideologies are complex, and when they are exposed to news or rumors, they may hold different views, take time to adopt an idea, or even be skeptical of some facts. In this situation, they might be persuaded to propagate a story, or do so only after careful consideration. Additionally, it is quite conceivable that an individual can be exposed to a story (i.e., receive a tweet), yet never post a tweet themselves.
Based on this reasoning, we considered a more applicable, robust model, the \(SEIZ\) model which was first used to study the adoption of Feynman diagrams. In the context of Twitter, the different compartments of the \(SEIZ\) model can be viewed as follows: Susceptible (S) represents a user who has not heard about the news yet; infected (I) denotes a user who has tweeted about the news; skeptic (Z) is a user who has heard about the news but chooses not to tweet about it; and exposed (E) represents a user who has received the news via a tweet but has taken some time, an exposure delay, prior to posting. We note that referring to the Z compartment as skeptics is in no way an implication of belief or skepticism of a news story or rumor. We adopt this terminology as this was the nomenclature used by the original authors of the \(SEIZ\) model.
A major improvement of the \(SEIZ\) model over the SIS model is the incorporation of exposure delay. That is, an individual may be exposed to a story, but not instantaneously tweet about it. After a period of time, he may believe it and then be promoted to the infected compartment. Further, it is now possible for an individual in this model to receive a tweet, and not tweet about it themselves. As shown in Figure (1), \(SEIZ\) rules can be summarized as follows:
1. Skeptics recruit from the susceptible compartment with rate b, but these actions may result either in turning the individual into another skeptic (with probability \(l\)), or it may have the unintended consequence of sending that person into the exposed (E) compartment with probability \((1-l)\).
2. A susceptible individual will immediately believe a news story or rumor with probability \(p\), or that person will move to the exposed (E) compartment with probability \((1-p)\).
3. Recruitment into the susceptible compartment occurs at a constant rate. Transitioning of individuals from the exposed compartment to the infected class can be caused by one of two separate mechanisms: * an individual in the exposed class has further contact with an infected individual (with contact rate \(\rho\)), and this additional contact promotes him to infected;
* an individual in the exposed class may become infected purely by self- adoption (with rate \(\varepsilon\)), and not from additional contact with those already infected.
The \(SEIZ\) model is mathematically represented by the following system of ODEs. A slight difference of our implementation of this model is that we do not incorporate vital dynamics, which includes the rate at which individuals enter and leave the population N. In epidemiological disease applications, this encompasses the rate at which people become susceptible (e.g. born) and deceased. In our application, a topic has a net duration not exceeding several days. Thus, the net entrance and exodus of users over these relatively short time periods is not expected to noticeably impact compartment sizes and our ultimate findings.
With respect to this extension, the \(SEIZ\) model adds one more compartment, the skeptics (\(Z\)).
In this model, a susceptible individual becomes immediately infected with probability \(p\), while with probability \((1-p)\) the individual transits to the incubator (exposed) class instead, from which adoption may occur later.
After contact between an infected individual and a skeptic, the skeptic succeeds in convincing him that the information is false at rate \(\lambda ZI\); after a certain period, a portion of the infected decide not to spread the rumor, at rate \(\delta I\).
\(N(t)\) denotes the total population, and the network has a disease-free status with \(S^{*}=N,E^{*}=I^{*}=Z^{*}=0.\) Fig. 1 illustrates the relationships between the compartments.
With the relationships between each compartment described by the parameters above, we have the following set of ODEs:
\[\left\{\begin{array}{ll}\frac{dS}{dt}&=\pi N-\mu S-\beta S\frac{I}{N}-bS \frac{Z}{N}\\ \\ \frac{dE}{dt}&=(1-p)\beta S\frac{I}{N}+(1-l)bS\frac{Z}{N}-\rho E\frac{I}{N}- \varepsilon E-\mu E\\ \\ \frac{dI}{dt}&=p\beta S\frac{I}{N}+\rho E\frac{I}{N}+\varepsilon E-\delta I- \lambda I\frac{Z}{N}-\mu I,\\ \\ \frac{dZ}{dt}&=lbS\frac{Z}{N}+\delta I+\lambda I\frac{Z}{N}-\mu Z\end{array}\right. \tag{1}\]
Figure 1: Transition rates of SEIZ Model.
In order to express the system of equations (1) in terms of fractions of the entire population, the substitution
\[s=\frac{S}{N};\ i=\frac{I}{N};\ e=\frac{E}{N};\ z=\frac{Z}{N}\]
is used.
Hence, the resulting system of equation shall be :
\[\left\{\begin{array}{ll}\frac{ds}{dt}&=\pi-\mu s-\beta si-bsz\\ \\ \frac{de}{dt}&=(1-p)\beta si+(1-l)bsz-\rho ei-\varepsilon e-\mu e\\ \\ \frac{di}{dt}&=p\beta si+\rho ei+\varepsilon e-\delta i-\lambda iz-\mu i\\ \\ \frac{dz}{dt}&=lbsz+\delta i+\lambda iz-\mu z\end{array}\right. \tag{2}\]
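For concreteness, system (2) can be integrated numerically in a few lines. The sketch below uses scipy; the parameter values and initial fractions are illustrative assumptions, not the values used for the figures later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# assumed illustrative parameter values
pi_, mu, beta, b, rho, eps, delta, lam, p, l = 0.1, 0.1, 1.5, 0.2, 0.3, 0.2, 0.1, 0.4, 0.3, 0.2

def seiz(t, y):
    s, e, i, z = y
    ds = pi_ - mu*s - beta*s*i - b*s*z
    de = (1 - p)*beta*s*i + (1 - l)*b*s*z - rho*e*i - eps*e - mu*e
    di = p*beta*s*i + rho*e*i + eps*e - delta*i - lam*i*z - mu*i
    dz = l*b*s*z + delta*i + lam*i*z - mu*z
    return [ds, de, di, dz]

sol = solve_ivp(seiz, (0, 100), [0.9, 0.05, 0.04, 0.01], dense_output=True)
print(sol.y[:, -1])   # long-time fractions (s, e, i, z)
```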
Table 1 provides a description of each parameter of this model. This gives a more intuitive look into the model and relates the relationships described above to the actual equations.
## 3 Model Basic Properties
Since the model monitors human populations, all the variables and the associated parameters are non-negative at all times. It is important to show that the model variables remain non-negative for all non-negative initial conditions.
\begin{table}
\begin{tabular}{|c|l|l|} \hline Parameter & Description & Units \\ \hline \(\pi\) & Recruitment rate into the Susceptible population & Per unit time \\ \hline \(\beta\) & Rate of contact between S and I & Per unit time \\ \hline \(b\) & Rate of contact between S and Z & Per unit time \\ \hline \(\rho\) & Rate of contact between E and I & Per unit time \\ \hline \(\varepsilon\) & Incubation rate & Per unit time \\ \hline \(\frac{1}{\varepsilon}\) & Average Incubation rate & Per unit time \\ \hline \(p\) & Transmission rate S\(>\)I, given contact with I & Unit-less \\ \hline \(l\) & Transmission rate S\(>\)Z, given contact with Z & Unit-less \\ \hline \(1-l\) & S\(>\)E Probability given contact with skeptics & Unit-less \\ \hline \(1-p\) & S\(>\)E Probability given contact with adopters & Unit-less \\ \hline \(\mu\) & Deconnect rate of network & Per unit time \\ \hline \end{tabular}
\end{table}
Table 1: Parameters model formulations and their descriptions
### Positivity of the solution
Since the model monitors population for a different class, it is required to show that all the state variables remain nonnegative for all times.
**Theorem 1**.: _Let \(\Omega=\{(s,e,i,z)\in\mathbb{R}^{4}:s(0)>0,e(0)>0,i(0)>0,z(0)>0\}\), then the solution \(\{s(t),e(t),i(t),z(t)\}\) of the system (2) is positive for all \(t\geq 0\)._
Proof.: Taking the first equation of (2) we have
\[\frac{ds}{dt} =\pi-\mu s-\beta si-bsz\Longrightarrow\frac{ds}{dt}\geq-\mu s\] \[\Longrightarrow\frac{ds}{s}\geq-\mu dt\Longrightarrow\int\frac{ ds}{s}\geq\int-\mu dt\] \[\Longrightarrow s(t)\geq s(0)e^{-\mu t}\geq 0.\]
From the second equation of (2)
\[\frac{de}{dt} =(1-p)\beta si+(1-l)bsz-\rho ei-\varepsilon e-\mu e\Longrightarrow \frac{de}{dt}\geq-(\varepsilon+\mu)e\] \[\Longrightarrow\frac{de}{e}\geq-(\varepsilon+\mu)dt\Longrightarrow \int\frac{de}{e}\geq\int-(\varepsilon+\mu)dt\] \[\Longrightarrow e(t)\geq e(0)e^{-(\varepsilon+\mu)t}\geq 0.\]
From third equation of (2)
\[\frac{di}{dt} =p\beta si+\rho ei+\varepsilon e-\delta i-\lambda iz-\mu i \Longrightarrow\frac{di}{dt}\geq-(\delta+\mu)i\] \[\Longrightarrow\frac{di}{i}\geq-(\delta+\mu)dt\Longrightarrow\int \frac{di}{i}\geq\int-(\delta+\mu)dt\] \[\Longrightarrow i(t)\geq i(0)e^{-(\delta+\mu)t}\geq 0.\]
From fourth equation of (2)
\[\frac{dz}{dt} =lbsz+\delta i+i(\gamma i+\lambda z)-\mu z\Longrightarrow\frac{ds }{dt}\geq-\mu z\] \[\Longrightarrow\frac{dz}{z}\geq-\mu dt\Longrightarrow\int\frac{ dz}{z}\geq\int-\mu dt\] \[\Longrightarrow z(t)\geq z(0)e^{-\mu t}\geq 0.\]
### Existence of the solution
**Theorem 2**.: _The region \(D=\{(s,e,i,z)\in\mathbb{R}^{4}_{+}:\ s+e+i+z\leq\frac{\pi}{\mu}\}\) is positively invariant and attract all solutions in \(\mathbb{R}^{4}_{+}\)_
Proof.: Adding all the equations of (2) gives the rate of change of the total (normalized) population \(N=s+e+i+z\), namely \(\frac{dN}{dt}=\frac{ds}{dt}+\frac{de}{dt}+\frac{di}{dt}+\frac{dz}{dt}\)
\[\frac{dN}{dt} =\pi-\mu s-\mu e-\mu i-\mu z\] \[\frac{dN}{dt} =\pi-\mu(s+e+i+z)\] \[\frac{dN}{dt} =\pi-\mu N\]
Since \(\frac{dN}{dt}=\pi-\mu N\), we have \(\frac{dN}{dt}<0\) whenever \(N(t)>\frac{\pi}{\mu}\). Thus, a standard comparison theorem by [26] can be used to show that:
\[N(t)\leq N(0)e^{-\mu t}+\frac{\pi}{\mu}(1-e^{-\mu t})\]
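For completeness, this bound follows from a standard integrating-factor argument applied to \(\frac{dN}{dt}\leq\pi-\mu N\):

\[\frac{d}{dt}\left(e^{\mu t}N(t)\right)=e^{\mu t}\left(\frac{dN}{dt}+\mu N\right)\leq\pi e^{\mu t}\;\Longrightarrow\;e^{\mu t}N(t)-N(0)\leq\frac{\pi}{\mu}\left(e^{\mu t}-1\right)\;\Longrightarrow\;N(t)\leq N(0)e^{-\mu t}+\frac{\pi}{\mu}\left(1-e^{-\mu t}\right).\]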
In particular, \(N(t)\leq\frac{\pi}{\mu}\) if \(N(0)\leq\frac{\pi}{\mu}\). Thus \(D\) is positively invariant. Therefore, the model is epidemiologically and mathematically well posed within this region.
\(\Box\)
### Basic Reproduction Number
In this section, we obtain the threshold parameter that governs the spread of the rumor, in analogy with a disease: the basic reproduction number. To obtain it, we use the next-generation matrix method, so that \(R_{0}\) is the spectral radius of the next-generation matrix. The basic reproduction number \(R_{0}\) is an important parameter characterizing the transmission of the rumor. We also discuss the existence and uniqueness of the Rumor-Free Equilibrium (RFE) of the model and its analysis. A simple calculation shows that the model equations (2) have an RFE given by:
\[M_{0}=(S^{*},E^{*},I^{*},Z^{*})=(\frac{\pi}{\mu},\ 0,\ 0,\ 0).\]
The local stability of the RFE will be investigated using the next-generation matrix method [32]. We calculate the next-generation matrix for the system (2) by enumerating the number of ways in which:
* new spreaders arise
* number of ways that individuals can move but only one way to create a spreader.
Only the equations concerning contaminated and/or contagious individuals (disseminated information) are necessary.
\[\left\{\begin{array}{ll}\frac{de}{dt}&=(1-p)\beta si+(1-l)bsz-\rho ei- \varepsilon e-\mu e\\ \\ \frac{di}{dt}&=p\beta si+\rho ei+\varepsilon e-\delta i-\lambda iz-\mu i \end{array}\right.\]
We take stock of what goes in and what goes out of each compartment:
1. We note \(\mathcal{F}_{i}(x)\) rate at which **new spreaders** enter compartment \(i\).
2. We note \(\mathcal{V}_{i}^{+}(x)\) those which come from the other compartments by any other cause (displacement, healing, etc...).
3. We note \(\mathcal{V}_{i}^{-}(x)\) the speed of those leaving compartment \(i\) (for example, mortality, movement, change in epidemiological status,...).
We finally have :
\[\dot{x}=\mathcal{F}_{i}(x)+\mathcal{V}_{i}(x);\quad\text{with}\quad\mathcal{V}_{i}(x) =\mathcal{V}_{i}^{+}(x)+\mathcal{V}_{i}^{-}(x)\]
We denote by \(X_{S}\) the state without disease:
\[X_{S}=\{x\in\mathbb{R}^{p}\mid x_{i}=0,\ i=1,\ldots,p\}\]
The following assumptions are made:
1. \(x\geq 0,\) \(\mathcal{F}_{i}(x)\geq 0,\) \(\mathcal{V}_{i}^{+}(x)\geq 0,\) \(\mathcal{V}_{i}^{-}(x)\geq 0\)
2. If \(x_{i}=0\), then \(\mathcal{V}_{i}^{-}(x)=0\). If there is nothing in a compartment, nothing can come out of it. This is the essential property of a compartmental model.
3. For \(i\geq p\), then \(\mathcal{F}_{i}(x)=0\). Compartments with an index less than p are "uninfected". By definition, 'infected" cannot appear in these compartments.
4. If \(x\in X_{S}\), then \(\mathcal{F}_{i}(x)=0\) and \(\mathcal{V}_{i}^{+}(x)=0\) for \(i=1,...,p\). If there are no germ carriers in the population, new 'infected" cannot appear.
The Jacobian of \(f\) is written around the equilibrium point (\(f(t,\bar{x})=0\)) without disease \(x^{*}\):
\(J(x^{*})=D\mathcal{F}(x^{*})+D\mathcal{V}(x^{*})\)
with \(F=\left[\dfrac{\partial\mathcal{F}_{i}}{\partial x_{j}}\right]_{1\leq i,j\leq p}\) and \(V=\left[\dfrac{\partial\mathcal{V}_{i}}{\partial x_{j}}\right]_{1\leq i,j\leq p}\)
Where
1. \(F\geq 0\) is a positive definite matrix and
2. \(V\) is a Metzler matrix (i,e off-diagonal terms are positive),
Figure 2: The entry and exit balance sheet
We define \(R_{0}\) then as follows :
\(R_{0}=\rho(FV^{-1})\), where \(\rho(\cdot)\) denotes the spectral radius, i.e., \(R_{0}\) is the largest root of \(det(FV^{-1}-\lambda I)=0\).
The vectors \(\mathcal{F}\) and \(\mathcal{V}\) are defined, respectively, as follows:
\[\mathcal{F}=\begin{pmatrix}(1-p)\beta s\frac{i}{N}\\ p\beta s\frac{i}{N}\end{pmatrix}\]
\[\mathcal{V}=\mathcal{V}^{+}+\mathcal{V}^{-}=\begin{pmatrix}-\rho ei-\varepsilon e -\mu e\\ -\delta i-\mu i+\varepsilon e-\lambda iz\end{pmatrix}.\]
So, let
F = rate of appearance of new spreaders in each compartment, and
V = rate of transfer into (and out of) each compartment.
\[F=\left(\begin{array}{cc}0&(1-p)\,\beta\,S_{0}\\ 0&p\beta\,S_{0}\end{array}\right),\quad V=\left(\begin{array}{cc}\varepsilon +\mu&0\\ -\varepsilon&\delta+\mu\end{array}\right)\]
\[F=\left(\begin{array}{cc}0&(1-p)\,\beta\,\dfrac{\pi}{\mu}\\ &&\\ 0&p\,\dfrac{\pi\beta}{\mu}\end{array}\right),\quad V=\left(\begin{array}{cc} \varepsilon+\mu&0\\ -\varepsilon&\delta+\mu\end{array}\right)\]
\[detV=(\varepsilon+\mu)(\delta+\mu)\]
Hence the next generation matrix with large domain is two dimensional and is given by \(FV^{-1}\)
\[K=FV^{-1}=\left(\begin{array}{cc}\dfrac{(1-p)\,\beta\,\varepsilon\,\pi}{\mu \,\left(\varepsilon+\mu\right)\left(\delta+\mu\right)}&\dfrac{(1-p)\,\beta\, \pi}{\mu\,\left(\delta+\mu\right)}\\ \dfrac{p\beta\,\varepsilon\,\pi}{\mu\,\left(\varepsilon+\mu\right)\left( \delta+\mu\right)}&\dfrac{p\beta\,\pi}{\mu\,\left(\delta+\mu\right)}\end{array}\right) \tag{3}\]
Entry \(K_{ij}\) represents expected number of secondary cases in compartment \(i\) by an individual in compartment \(j\)
The dominant eigenvalue of (3) is equal to \(R_{0}\); therefore, we evaluate the characteristic equation of (3) by using \(det(FV^{-1}-\lambda I)=0\), which gives, after some calculations,
\[\lambda^{2}-\lambda\left[\dfrac{(1-p)\beta\varepsilon\pi}{\mu(\varepsilon+\mu )(\delta+\mu)}+\dfrac{\pi p\beta(\varepsilon+\mu)}{\mu(\varepsilon+\mu)( \delta+\mu)}\right]=0\]
Finally, we obtain the following expression for \(R_{0}\):
\[R_{0}=\dfrac{\beta\,\pi\,\left(\varepsilon+p\mu\right)}{\mu\,\left(\varepsilon +\mu\right)(\delta+\mu)}\]
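The algebra leading to this expression can be checked symbolically; a short sketch with sympy (the variable names are ours) is given below.

```python
import sympy as sp

beta, pi_, mu, eps, delta, p = sp.symbols('beta pi mu epsilon delta p', positive=True)
S0 = pi_ / mu   # rumor-free susceptible fraction

F = sp.Matrix([[0, (1 - p) * beta * S0],
               [0,       p * beta * S0]])
V = sp.Matrix([[eps + mu, 0],
               [-eps, delta + mu]])

K = F * V.inv()                     # next-generation matrix
# one eigenvalue of K is zero, so the spectral radius equals the sum of the eigenvalues
R0 = sp.simplify(sum(K.eigenvals()))
print(R0)   # equals beta*pi*(epsilon + p*mu)/(mu*(epsilon + mu)*(delta + mu)) up to rearrangement
```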
**Theorem 3.1**.: _The system of equations (2) is locally asymptotically stable at the rumor-free equilibrium \(M_{0}=(\dfrac{\pi}{\mu},\,\,0,\,\,0,\,\,0)\) if all the eigenvalues of its Jacobian evaluated there are negative._
Next generation operator \((FV^{-1})\) gives rate at which individuals in compartment \(j\) generate new infections in compartment \(i\) times average length of time individual spends in single visit to compartment \(j\)
Proof.: At rumor-free equilibrium point, the Jacobian matrix is :
\[J(M_{0})=\begin{pmatrix}-\mu&0&-\dfrac{\beta\pi}{\mu}&-\dfrac{b\pi}{\mu}\\ 0&-\varepsilon-\mu&\dfrac{(1-p)\beta\pi}{\mu}&\dfrac{(1-l)b\pi}{\mu}\\ 0&\varepsilon&\dfrac{p\beta\pi}{\mu}-\delta-\mu&0\\ 0&0&\delta&\dfrac{lb\pi}{\mu}-\mu\end{pmatrix} \tag{4}\]
Now we try to calculate the eigenvalues of (4) by finding the characteristic equation using the formula \(det(J-\lambda I)=0\)
\[det(J-\lambda I)=det\begin{pmatrix}-\mu-\lambda&0&-\dfrac{\beta\pi}{\mu}&- \dfrac{b\pi}{\mu}\\ 0&-\varepsilon-\mu-\lambda&\dfrac{(1-p)\beta\pi}{\mu}&\dfrac{(1-l)b\pi}{\mu} \\ 0&\varepsilon&\dfrac{p\beta\pi}{\mu}-\delta-\mu-\lambda&0\\ 0&0&\delta&\dfrac{lb\pi}{\mu}-\mu-\lambda\end{pmatrix}=0 \tag{5}\]
From the Jacobian matrix of (5), we obtained a characteristic polynomial:
\[(-\lambda-\mu)(\lambda^{3}+a_{2}\lambda^{2}+a_{1}\lambda+a_{0})=0, \tag{6}\]
where
\[\left\{\begin{array}{ll}a_{2}&=-\dfrac{lb\pi}{\mu}+3\mu-\dfrac{p \beta\pi}{\mu}+\delta+\varepsilon\\ \\ a_{1}&=(\varepsilon+\delta)(1-R_{0})+(\dfrac{lb\pi}{\mu}-\mu)(\dfrac{p\beta \pi}{\mu}-\delta-\varepsilon-2\mu)\\ \\ a_{0}&=-\dfrac{(1-l)b\pi\varepsilon\,\delta}{\mu}(\dfrac{lb\pi}{\mu}-\mu)( \varepsilon+\delta)(1-R_{0})\end{array}\right. \tag{7}\]
From (6 ) clearly, we see that :
\[-\lambda-\mu =0\Longrightarrow\lambda=-\mu<0 \tag{8}\]
or
\[\lambda^{3}+a_{2}\lambda^{2}+a_{1}\lambda+a_{0} =0. \tag{9}\]
From (9) we applied Routh-Hurwitz criteria. By this criteria, (9) has strictly negative real root if and only if \(a_{2}>0\), \(a_{0}>0\), and \(a_{2}*a_{1}>a_{0}\).
We see that \(a_{2}\) is positive provided \(3\mu+\delta+\varepsilon>\dfrac{\pi}{\mu}(lb+p\beta)\); for \(a_{0}\) to be positive, \(1-R_{0}\) must be positive (since \(\mu>\dfrac{lb\pi}{\mu}\)), which requires \(R_{0}<1\). Therefore, the RFE is locally asymptotically stable if and only if \(R_{0}<1\).
Thus this theorem implies that for any given rumor in a population, it can be eliminated when \(R_{0}<1\).
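A quick numerical illustration of this theorem: for an assumed parameter set with \(R_{0}<1\), all the eigenvalues of the Jacobian (4) have negative real parts. The values below are arbitrary choices, not fitted to any data.

```python
import numpy as np

# assumed parameter values chosen so that R0 < 1
pi_, mu, beta, b, eps, delta, p, l = 0.1, 0.2, 0.3, 0.1, 0.2, 0.4, 0.3, 0.2
S0 = pi_ / mu

R0 = beta * pi_ * (eps + p * mu) / (mu * (eps + mu) * (delta + mu))

J = np.array([
    [-mu, 0.0,       -beta * S0,                 -b * S0],
    [0.0, -eps - mu,  (1 - p) * beta * S0,        (1 - l) * b * S0],
    [0.0,  eps,        p * beta * S0 - delta - mu, 0.0],
    [0.0,  0.0,        delta,                      l * b * S0 - mu],
])
print(R0, np.linalg.eigvals(J).real)   # R0 < 1 and all real parts negative
```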
**Theorem 3.2**.: _When \(R_{0}\) is less than or equal to one, the rumor-free equilibrium is globally asymptotically stable._
Proof.: A suitable Lyapunov function \(L\) to establish the global stability of the rumor-free equilibrium is defined as \(L=wI\). The derivative of the Lyapunov function with respect to time \(t\) is:
\[\begin{array}{ll}\frac{dL}{dt}&=\frac{di}{dt}=w[\ (p\beta\frac{\pi}{\mu})i-( \delta+\mu)i]\\ &=w(\delta+\mu)[\ \frac{p\beta\pi}{\mu(\delta+\mu)}-1]i\\ &=w(\delta+\mu)[\ \frac{p\beta\pi(\varepsilon+\mu)}{\mu(\delta+\mu)( \varepsilon+\mu)}-1]i\\ &\leq w(\delta+\mu)[\ \frac{\beta\pi(\varepsilon+p\mu)}{\mu(\delta+\mu)( \varepsilon+\mu)}-1]i\ \ \ \ \ \ \mbox{for}\ \ p\in[0,1]\\ &\leq w(\delta+\mu)[R_{0}-1]i\ \ \ \ \mbox{for}\ \ p\in[0,1]\end{array}\]
If \(R_{0}\leq 1\), then \(\frac{dL}{dt}\leq 0\) holds. Furthermore, \(\frac{dL}{dt}=0\) if and only if \(i=0\). Hence, \(L\) is a Lyapunov function on \(D\) and the largest compact invariant set in \(\{(s,e,i,z)\in D:\frac{dL}{dt}=0\}\) is the singleton \((\frac{\pi}{\mu},0,0,0)\). The global stability follows from LaSalle's invariance principle [27] when \(R_{0}\leq 1\). Hence, the rumor-free equilibrium is globally asymptotically stable.
### The Endemic Equilibrium
The endemic equilibrium is denoted by \(M^{*}=(S^{*},0,I^{*},Z^{*})\) and it occurs when the rumor persists in the community. To obtain it, we equate the right-hand sides of the model equations (2) to zero. Then, for \(E^{*}=0\), we obtain:
\[I^{*}=\frac{\pi}{\beta S^{*}}-\frac{\mu}{\beta}-\frac{pb}{\lambda}S^{*}-\frac{ \delta}{\lambda\beta}-\frac{\mu}{\lambda\beta},\]
\[Z^{*}=\frac{p\beta}{\lambda}S^{*}-\frac{\delta}{\lambda}-\frac{\mu}{\lambda},\]
When we substitute these expressions into the last equation of (2), we obtained a characteristic polynomial of susceptible,
\[\begin{array}{l} lBS(\frac{p\beta}{\lambda}S^{*}-\frac{\delta}{\lambda}- \frac{\mu}{\lambda})+\delta(\frac{\pi}{\beta S^{*}}-\frac{\mu}{\beta}-\frac{ pb}{\lambda}S^{*}-\frac{\delta}{\lambda\beta}-\frac{\mu}{\lambda\beta})+\\ \lambda(\frac{p\beta}{\lambda}S^{*}-\frac{\delta}{\lambda}-\frac{\mu}{\lambda} )(\frac{\pi}{\beta S^{*}}-\frac{\mu}{\beta}-\frac{pb}{\lambda}S^{*}-\frac{ \delta}{\lambda\beta}-\frac{\mu}{\lambda\beta})-\mu(\frac{p\beta}{\lambda}S^ {*}-\frac{\delta}{\lambda}-\frac{\mu}{\lambda})=0.\end{array} \tag{10}\]
From (10) we get the following result:
\[\begin{array}{ll}S^{2}\frac{pb\beta}{\lambda}(l-p)+S(-\frac{lb\delta}{\lambda}- \frac{lb\mu}{\lambda}-p\mu-\frac{\delta p}{\lambda}-\frac{p\mu}{\lambda}+\frac{pb\mu}{ \lambda}-\frac{p\beta \mu}{\lambda})+\mu^{2}(\frac{1}{\lambda}+\frac{1}{\beta}+ \frac{1}{\lambda\beta})\\ +p\beta\pi-\frac{\mu\pi}{\beta S}=0,\end{array} \tag{11}\]
which gives
\[AS^{3}+BS^{2}+CS+D=0 \tag{12}\]
where
\[\begin{array}{ll}A&=\frac{pb\beta}{\lambda}(l-p),\\ \\ B&=-\frac{1}{\lambda}(lbS+lb\mu+\delta p+p\mu-pb\mu+p\beta\mu+p\lambda\mu),\\ \\ C&=p\beta\pi+\mu^{2}(\frac{\beta+\lambda+1}{\lambda\beta})+\frac{\delta\mu}{ \lambda},\\ \\ D&=-\frac{\mu\pi}{\beta}.\end{array} \tag{13}\]
**Lemma 3.1**.: _An endemic equilibrium point \(M^{*}\) exists and is positive if \(R_{0}>1.\)_
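Independently of the algebraic route above, an endemic equilibrium can also be located numerically by solving the stationary version of system (2) directly. The sketch below uses assumed parameter values giving \(R_{0}>1\); the starting guess is arbitrary and convergence to a positive root is not guaranteed.

```python
import numpy as np
from scipy.optimize import fsolve

# assumed parameter values with R0 > 1 (illustrative only)
pi_, mu, beta, b, rho, eps, delta, lam, p, l = 0.5, 0.1, 1.5, 0.2, 0.3, 0.2, 0.1, 0.4, 0.3, 0.2

def steady_state(y):
    s, e, i, z = y
    return [pi_ - mu*s - beta*s*i - b*s*z,
            (1 - p)*beta*s*i + (1 - l)*b*s*z - rho*e*i - eps*e - mu*e,
            p*beta*s*i + rho*e*i + eps*e - delta*i - lam*i*z - mu*i,
            l*b*s*z + delta*i + lam*i*z - mu*z]

M_star = fsolve(steady_state, [1.0, 0.5, 0.5, 0.5])
print(M_star)   # a candidate equilibrium (s*, e*, i*, z*) if the solver converges to positive values
```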
## 4 The Model with Controls
Now, we introduce our controls into system (2). As control measures to fight the spread of rumor, we extend our system by including three kinds of controls \(u\), \(v\), and \(w\).
1. The first control \(u\) is to tell users that the information or publication is false and contains a malicious rumor.
2. The second control \(v\) is through to reduce the number of susceptible who entering and spread the rumor.
3. The last control, \(w\), is applied through the skeptics, i.e., the people who question the fake news or who deactivate an account after learning that it is fake or aimed at spreading the rumor.
With the aim of better understanding the effects of any control measure of these strategies, we introduce three new variables: \(\pi_{i},\ i=1,2,3\). We note that \(\pi_{i}=0\) in the absence of control, and \(\pi_{i}=1\) in the presence of control.
\[\left\{\begin{array}{ll}\frac{ds}{dt}&=\pi-\mu s-\beta si-bsz-\pi_{1}us\\ \\ \frac{de}{dt}&=(1-p)\beta si+(1-l)bsz-\rho ei-\varepsilon e-\mu e-\pi_{2}ve\\ \\ \frac{di}{dt}&=p\beta si+\rho ei+\varepsilon e-\delta i-\lambda iz-\mu i-\pi_{3 }wi\\ \\ \frac{dz}{dt}&=lbsz+\delta i+\lambda iz-\mu z+\pi_{1}us\end{array}\right. \tag{14}\]
### Optimal Control Problem
We define the objective functional as follows:
\[J=\int_{0}^{T}[I(t)+\frac{1}{2}Au^{2}(t)+\frac{1}{2}Bv^{2}(t)+\frac{1}{2}Cw^{2}(t)]dt \tag{15}\]
where \(A>0\), \(B>0\), and \(C>0\) are the cost coefficients:
\[J(u^{*},v^{*},w^{*})=\min J(u,v,w)\mbox{ over }\Gamma \tag{16}\]
The set of admissible controls is defined as follows :
\[\Gamma\ =\ \left\{u,v,w\in L^{1}(0,T)\mbox{ such that }(u(t),v(t),w(t))\ \ \in[0,1]\times[0,1]\times[0,1]\ \forall\ t\in[0,T]\right\} \tag{17}\]
### Existence of an optimal control solution
Let us consider an optimal control problem having the form (15). We analyze sufficient conditions for the existence of a solution to the optimal control problem (15). Using results from Fleming and Rishel [12] and Hattaf and Yousfi [16], the existence of the optimal control can be obtained.
Let us now denote by \(L\) the integrand \(I(t)+\frac{1}{2}Au^{2}(t)+\frac{1}{2}Bv^{2}(t)+\frac{1}{2}Cw^{2}(t)\); we then have the following lemma.
**Lemma 1**.: _The integrand \(L(S,E,I,Z,u,v,w)\) in the objective functional is convex on \(\Gamma\) and there exist constants \(c_{1}\) and \(c_{2}\) such that \(L(S,E,I,Z,u,v,w)\geq c_{1}+c_{2}(|u|^{2}+|v|^{2}+|w|^{2})^{\frac{\alpha}{2}}\)_
We have the following theorem:
**Theorem 4.1**.: _Consider the control problem with system (14). There exists an optimal control \((u^{*},v^{*},w^{*})\in\Gamma^{3}\) such that the control set \(\Gamma\) is convex and closed._
Proof.: The existence of the optimal control can be obtained using a result by Fleming and Rishel ([12] ), checking the following step:
* By definition, \(\Gamma\) is closed. Take any controls \(u_{1},u_{2}\in\Gamma\) and \(\lambda\in[0,1]\). Then \(\lambda u_{1}+(1-\lambda)u_{2}\geq 0\). Additionally, we observe that \(\lambda u_{1}\leq\lambda\) and \((1-\lambda)u_{2}\leq(1-\lambda)\); then \(\lambda u_{1}+(1-\lambda)u_{2}\leq\lambda+(1-\lambda)=1\) and \(0\leq\lambda u_{1}+(1-\lambda)u_{2}\leq 1\), for all \(u_{1},u_{2}\in\Gamma\) and \(\lambda\in[0,1]\). Therefore, \(\Gamma\) is convex and condition 1 is satisfied.
* The integrand in the objective functional (15) is convex on \(\Gamma\). It remains to show that there exist constants \(c_{1},c_{2}>0\), and \(\alpha>1\) such that the integrand \(L(S,E,I,Z,u,v,w)\) of the objective functional satisfies: \[L(S,E,I,Z,u,v,w)=I(t)+\frac{1}{2}Au^{2}(t)+\frac{1}{2}Bv^{2}(t)+\frac{1}{2}Cw^{2}(t)\geq c_{1}+c_{2}(|u|^{2}+|v|^{2}+|w|^{2})^{\frac{\alpha}{2}}\] The state variables are bounded; let \(c_{1}=I\), \(c_{2}=\inf(\frac{A}{2},\frac{B}{2},\frac{C}{2})\), and \(\alpha=2\); then it follows that: \[L(S,E,I,Z,u,v,w)\geq c_{1}+c_{2}(|u|^{2}+|v|^{2}+|w|^{2})^{\frac{\alpha}{2}}\] (18) Then, from Fleming and Rishel [12], we conclude that there exists an optimal control.
### Characterization of optimal controls
Let us consider an optimal control problem having the form (15). Pontryagin's maximum principle [33] allows one to use costate functions to transform the optimization problem into the problem of determining the pointwise minimum of the Hamiltonian with respect to \(u^{*}\), \(v^{*}\), and \(w^{*}\). The Hamiltonian is built from the cost functional (15) and the controlled dynamics (14), from which we derive the optimality conditions:
\[H=I(t)+\frac{1}{2}Au^{2}+\frac{1}{2}Bv^{2}+\frac{1}{2}Cw^{2}+\sum_{i=1}^{n}p_{ i}g_{i} \tag{19}\]
where \(g_{i}\) denotes the right side of the differential equation of the \(i-\)th state variables.
\[\begin{array}{ll}H=&I(t)+\frac{1}{2}Au^{2}+\frac{1}{2}Bv^{2}+\frac{1}{2}Cw^{ 2}+p_{1}(\pi-\mu s-\beta si-bsz-\pi_{1}us)\\ &+\ p_{2}((1-p)\beta si+(1-l)bsz-\rho ei-\varepsilon e-\mu e-\pi_{2}ve)\\ &+\ p_{3}(p\beta si+\rho ei+\varepsilon e-\delta i-\lambda iz-\mu i-\pi_{3}wi) \\ &+\ p_{4}(lbsz+\delta i+\lambda iz-\mu z+\pi_{1}us)\end{array} \tag{20}\]
where the \(p_{i},\ i=1\,...\,4\) are the associated adjoints for the states \(S,E,I,Z\). The optimality system of equations is found by taking the appropriate partial derivatives of the Hamiltonian (8) with respect to the associated state variable.
The following theorem is a consequence of the maximum principle.
**Theorem 4.2**.: _Given an optimal control \((u^{*},v^{*},w^{*})\) and corresponding solutions to the state system \(S^{*},E^{*},I^{*},Z^{*}\) that minimize the objective functional \(J(u^{*},v^{*},w^{*})\) there exist adjoint variables \(p_{1}(t),\ p_{2}(t),\ p_{3}(t),\ p_{4}(t)),\) satisfying_
\[\left\{\begin{array}{ll}\dot{p}_{1}&=-\left[p_{1}(-\mu-\beta i-bz-\pi_{1}u)+ p_{2}((1-p)\beta i+(1-l)bz)+p_{3}(p\beta i)+p_{4}(lbz+\pi_{1}u)\right]\\ \dot{p}_{2}&=-\left[p_{2}(-\rho i-\varepsilon-\mu-\pi_{2}v)+p_{3}(\rho i+ \varepsilon)\right]\\ \dot{p}_{3}&=-\left[1+p_{1}(-\beta s)+p_{2}((1-p)\beta s-\rho e)+p_{3}(p\beta s +\rho e-\delta-\lambda z-\mu-\pi_{3}w)+p_{4}(\delta+\lambda z)\right]\\ \dot{p}_{4}&=-\left[p_{1}(-bs)+p_{2}(1-l)bs+p_{3}(-\lambda i)+p_{4}(lbs+\lambda i -\mu)\right]\end{array}\right. \tag{21}\]
_with the transversality conditions_
\[\begin{array}{lll}p_{1}(T)&=&0\\ p_{2}(T)&=&0\\ p_{3}(T)&=&0\\ p_{4}(T)&=&0\end{array} \tag{22}\]
_Furthermore, we may characterize the optimal pair by the piecewise continuous functions and for \(\pi_{1}=\pi_{2}=\pi_{3}=1\)_
\[u^{*}(t) = \min\left\{\max\left(0,\frac{\pi_{1}\,S}{A}(p_{1}-p_{4})\right),1 \right\},\] \[v^{*}(t) = \min\left\{\max\left(0,\frac{\pi_{2}\,p_{2}\,e}{B}\right),1\right\}, \tag{23}\] \[w^{*}(t) = \min\left\{\max\left(0,\frac{\pi_{3}\,p_{3}\,i}{C}\right),1\right\},\]
Proof.: The existence of optimal controls follows from Corollary 4.1 of Fleming and Rishel [12] since the integrand of \(J\) is a convex function of \((u,v,w)\), and the state system satisfies the Lipchitz property with respect to the state variables because the state solutions are \(L^{\infty}\) bounded. The following can be derived from Pontryagin's maximum principle ([33]):
\[\dot{p}_{1}=-\frac{\partial H}{\partial S}\,,\dot{p}_{2}=-\frac{\partial H}{ \partial E}\,,\dot{p}_{3}=-\frac{\partial H}{\partial I}\ \dot{p}_{4}=-\frac{\partial H}{\partial Z},\]
with \(p_{i}(T)=0\), for \(i=1,\ 2,\ 3,\ 4\), evaluated at the optimal controls and the corresponding states, which results in the adjoint system of Theorem 4.2. The Hamiltonian \(H\) is minimized with respect to the controls at the optimal controls; therefore, we differentiate \(H\) with respect to \(u\), \(v\), and \(w\) on the set \(\Gamma\), respectively, thereby obtaining the following optimality conditions:
\[\frac{\partial H}{\partial u}=Au(t)\,-p_{1}\pi_{1}S\,+p_{4}\pi_{1}S=0 \Longleftrightarrow u(t)=\frac{\pi_{1}\,S}{A}(p_{1}-p_{4})\]
\[\frac{\partial H}{\partial v}=Bv(t)\,-p_{2}\pi_{2}e=0\Longleftrightarrow v(t )=\frac{\pi_{2}\,p_{2}\,e}{B}\]
\[\frac{\partial H}{\partial w}=Cw(t)\,-p_{3}\pi_{3}i=0\Longleftrightarrow w(t )=\frac{\pi_{3}\,p_{3}\,i}{C}\]
Solving for \(u^{*}\), \(v^{*}\), and \(w^{*}\), we obtain for the bounds in \(\Gamma\) of the controls,
\[u^{*}(t) = \min\left\{\max\left(0,\frac{\pi_{1}\,S}{A}(p_{1}-p_{4})\right),1 \right\},\] \[v^{*}(t) = \min\left\{\max\left(0,\frac{\pi_{2}\,p_{2}\,e}{B}\right),1\right\}, \tag{24}\] \[w^{*}(t) = \min\left\{\max\left(0,\frac{\pi_{3}\,p_{3}\,i}{C}\right),1\right\},\]
However, if \(\pi_{i}=0\) where \(i=1,2,3\) the controls attached to his case will be eliminated and removed.
By the standard variation arguments with the control bounds, we obtain the optimal solutions (23)
### Numerical Simulation
In this section, we present the results obtained by solving the optimality system numerically. This system consists of the state system (14) with its initial conditions, the adjoint system (21) with the transversality conditions (22), and the control characterization (23).
In this paragraph, we give numerical simulations to highlight the effectiveness of the strategy that we have developed to eliminate the rumor and limit its spread; the initial values are the same as in Table 1, while the remaining values were proposed after a statistical study.
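The paper does not fix a particular numerical scheme; one common choice for such two-point boundary-value problems is a forward-backward sweep, iterating the state system (14) forward, the adjoint system (21) backward, and updating the controls from (23). A rough, untuned sketch is given below; the parameter values, cost weights and time horizon are assumptions, and \(\pi_{1}=\pi_{2}=\pi_{3}=1\).

```python
import numpy as np

# assumed parameters, weights and horizon (illustrative only)
pi_, mu, beta, b, rho, eps, delta, lam, p, l = 0.5, 0.1, 1.5, 0.2, 0.3, 0.2, 0.1, 0.4, 0.3, 0.2
A, B, C, T, n = 1.0, 1.0, 1.0, 50.0, 5000
dt = T / n
s, e, i, z = (np.zeros(n + 1) for _ in range(4))
p1, p2, p3, p4 = (np.zeros(n + 1) for _ in range(4))
u, v, w = (np.zeros(n + 1) for _ in range(3))
s[0], e[0], i[0], z[0] = 0.9, 0.05, 0.04, 0.01

for sweep in range(50):
    # forward pass: state system (14) with the current controls (explicit Euler)
    for k in range(n):
        s[k+1] = s[k] + dt*(pi_ - mu*s[k] - beta*s[k]*i[k] - b*s[k]*z[k] - u[k]*s[k])
        e[k+1] = e[k] + dt*((1-p)*beta*s[k]*i[k] + (1-l)*b*s[k]*z[k] - rho*e[k]*i[k] - eps*e[k] - mu*e[k] - v[k]*e[k])
        i[k+1] = i[k] + dt*(p*beta*s[k]*i[k] + rho*e[k]*i[k] + eps*e[k] - delta*i[k] - lam*i[k]*z[k] - mu*i[k] - w[k]*i[k])
        z[k+1] = z[k] + dt*(l*b*s[k]*z[k] + delta*i[k] + lam*i[k]*z[k] - mu*z[k] + u[k]*s[k])
    # backward pass: adjoint system (21) with p_i(T) = 0
    p1[n] = p2[n] = p3[n] = p4[n] = 0.0
    for k in range(n, 0, -1):
        dp1 = -(p1[k]*(-mu - beta*i[k] - b*z[k] - u[k]) + p2[k]*((1-p)*beta*i[k] + (1-l)*b*z[k]) + p3[k]*p*beta*i[k] + p4[k]*(l*b*z[k] + u[k]))
        dp2 = -(p2[k]*(-rho*i[k] - eps - mu - v[k]) + p3[k]*(rho*i[k] + eps))
        dp3 = -(1 + p1[k]*(-beta*s[k]) + p2[k]*((1-p)*beta*s[k] - rho*e[k]) + p3[k]*(p*beta*s[k] + rho*e[k] - delta - lam*z[k] - mu - w[k]) + p4[k]*(delta + lam*z[k]))
        dp4 = -(p1[k]*(-b*s[k]) + p2[k]*(1-l)*b*s[k] + p3[k]*(-lam*i[k]) + p4[k]*(l*b*s[k] + lam*i[k] - mu))
        p1[k-1] = p1[k] - dt*dp1
        p2[k-1] = p2[k] - dt*dp2
        p3[k-1] = p3[k] - dt*dp3
        p4[k-1] = p4[k] - dt*dp4
    # control update from the characterization (23), with a relaxation step
    u = 0.5*u + 0.5*np.clip(s*(p1 - p4)/A, 0, 1)
    v = 0.5*v + 0.5*np.clip(p2*e/B, 0, 1)
    w = 0.5*w + 0.5*np.clip(p3*i/C, 0, 1)

print(u[:5], i[-1])   # early control values and the final infected fraction
```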
#### 4.4.1 Numerical Simulation for \(R_{0}<1\)
Figure 3 illustrates the dynamics of the SEIZ model in the absence of controls; we can see that the initial numbers are low and that the number of susceptible individuals decreases over time. We note that, from the outset, the infected and susceptible individuals tend towards 0 and that only the skeptics increase from their initial state to a higher number. This figure shows that, if the density of individuals is low, then within a few days or months the false information tends to disappear because of the skeptics.
\begin{table}
\begin{tabular}{|c|l|c|c|} \hline Parameter & Description & Units & Value \\ \hline \(\pi\) & Recruitment rate into the Susceptible population & Per unit time & Value \\ \hline \(\beta\) & Rate of contact between S and I & Per unit time & Value \\ \hline \(b\) & Rate of contact between S and Z & Per unit time & Value \\ \hline \(\rho\) & Rate of contact between E and I & Per unit time & Value \\ \hline \(\varepsilon\) & Incubation rate & Per unit time & Value \\ \hline \(\frac{1}{\varepsilon}\) & Average Incubation rate & Per unit time & Value \\ \hline \(p\) & Transmission rate S\(>\)I, given contact with I & Unit-less & Value \\ \hline \(l\) & Transmission rate S\(>\)Z, given contact with Z & Unit-less & Value \\ \hline \(1-l\) & S\(>\)E Probability given contact with skeptics & Unit-less & Value \\ \hline \(1-p\) & S\(>\)E Probability given contact with adopters & Unit-less & Value \\ \hline \(\mu\) & Deconnect rate of network & Per unit time & Value \\ \hline \end{tabular}
\end{table}
Table 2: Parameters model formulations and their descriptions
#### 4.4.2 Numerical Simulation for \(R_{0}>1\)
Figure 5 represents the dynamics of the SEIZ model in the absence of controls; we can see that the number of susceptible people increases from its initial state until it stabilizes. We note that, from the outset, the infected become more and more numerous in sharing the false information until they stabilize, while the skeptics remain much less numerous than the infected. This balance shows that false information is quickly relayed.
Figure 4: Dynamics of infected and skeptics for different values of \(\beta\), with all other parameters constant, for \(R_{0}<1\)
In order to increase the stifling rate, we should generally improve the level of scientific knowledge of the public. This way, the public can clearly identify rumors and not easily believe and spread them. Figure 6 illustrates how the number of skeptics changes over time for different values of the rate of contact between S and I, denoted \(\beta\). From Fig. 6, we can establish that when the rate of contact between S and I decreases, the number of skeptics decreases. Therefore, reducing this rate of contact can help control the spread of rumors.
#### 4.4.3 Case 1: Applying Only Control \(u\)
Since it will be applied to ignorant individuals, we limit ourselves to displaying and comparing the curves of the infected and the skeptics in the case with the control strategy. In this scenario, we simulate the case where we apply a single control \(u\), with which we inform a portion of the ignorants that the information is false, so that we win this proportion over in favor of the stiflers. We observe from the figure that, some days after the implementation of the strategy, the impact starts to appear: the number of infected gradually decreases until it stabilizes. On the other hand, the number
Figure 6: Dynamics of infected and skeptics for different values of \(\beta\), with all other parameters constant, for \(R_{0}>1\)
of skeptics in this model suddenly starts to rise. This change is probably due to the fact that the control is aimed at informing the ignorant people, turning them into stiflers. In this way, we gain a number of people in the fight against the spread of false news.
#### 4.4.4 Case 2: Applying Only Control \(v\)
Here, we implement only the control \(v\); the effect of the strategy appears on the number of infected, which gradually decreases. This rapid change is attributed to the fact that the control directly targets this group.
In the second scenario, we apply a single control \(v\), but this time one that focuses on the broadcasters. The figure shows that the number of diffusers decreases and that the number of skeptics tends towards zero; hence, we note that this control also leads to a loss of both diffusers and skeptics.
Figure 8: Optimal control u for SEIR optimal control problem
Figure 7: Dynamics of the model with the control \(u\)
#### 4.4.5 Case 3: Applying Controls \(u\), \(v\) and \(w\)
In this strategy, we implemented the three controls as an intervention to eradicate the rumor from the community or population. Figure 11 shows that the number of infectious individuals and sceptics is zero for some time before they start to increase progressively, with a clear progression of sceptics above broadcasters or infected individuals. Consequently, the application of this strategy is effective in eradicating rumor as a community disease in a specified period of time.
Figure 10: Optimal control v for the SEIR optimal control problem
Figure 9: Dynamics of the model with the control \(v\)
Figure 11: Dynamics of the SEIR model for all optimals controls u, v and w applied
Figure 12: optimals controls u, v and w of the third strategy.
## 5 Conclusions and Future Work
In this paper, we give a new, simple mathematical model which describes the dynamics of rumor propagation. The model is obtained by combining two compartmental models in order to take into account more of the factors involved in the dynamics. Three control strategies were introduced and, thanks to the three new variables \(\pi_{i},\ i=1,2,3\), we could study and combine several scenarios in order to see the impact and the effect of each of these controls on the reduction of the rumor spread. The numerical resolution of the system with difference equations, as well as the numerical simulations, enabled us to compare the scenarios in a concrete way. The purpose of the work is thus achieved, and we have shown the effectiveness of our strategy and its importance in fighting the spread of rumors throughout any social network.
|
2305.00497 | Inclusion of radiation in the CCM approach of the $φ^4$ model | We present an effective Lagrangian for the $\phi^4$ model that includes
radiation modes as collective coordinates. The coupling between these modes to
the discrete part of the spectrum, i.e., the zero mode and the shape mode,
gives rise to different phenomena which can be understood in a simple way in
our approach. In particular, the energy transfer between radiation, translation
and shape modes is carefully investigated in the single-kink sector. Finally,
we also discuss the inclusion of radiation modes in the study of oscillons.
This leads to relevant phenomena such as the oscillon decay and the
kink-antikink creation. | S. Navarro-Obregón, L. M. Nieto, J. M. Queiruga | 2023-04-30T14:48:40Z | http://arxiv.org/abs/2305.00497v1 | # Inclusion of radiation in the CCM approach of the \(\phi^{4}\) model
###### Abstract
We present an effective Lagrangian for the \(\phi^{4}\) model that includes radiation modes as collective coordinates. The coupling between these modes to the discrete part of the spectrum, i.e., the zero mode and the shape mode, gives rise to different phenomena which can be understood in a simple way in our approach. In particular, the energy transfer between radiation, translation and shape modes is carefully investigated in the single-kink sector. Finally, we also discuss the inclusion of radiation modes in the study of oscillons. This leads to relevant phenomena such as the oscillon decay and the kink-antikink creation.
## I Introduction
Topological solitons are non-linear field theory solutions that appear in many branches of physics, from condensed matter to cosmology [1; 2; 3; 4], and that have gained interest over the last decades. The stability of these objects is guaranteed by their topological charge, which is a topological invariant conserved during time evolution. They have particle-like behaviour, so they can interact with each other, with external fields or with radiation, as well as being annihilated and even created in pairs. Among all of them, the \(\phi^{4}\) model is particularly interesting: it can be formulated in \(1+1\) dimensions, which makes it simpler from a computational point of view. In addition, the static solitons (called kinks) as well as the spectrum of perturbations can be determined analytically. The study of low-energy perturbations around the kinks shows that, in the linearised theory, there exists a discrete spectrum formed by two states localised around the core of the topological soliton - the zero mode and the shape mode - as well as a continuum of states representing radiation moving away from the kink.
However, the analysis of the dynamics in the full non-linear theory is extremely involved, which
often means that the equations have to be solved numerically. This complexity comes, in part, from the multiple interaction channels, namely: attractive or repulsive static forces, excitation of the internal degrees of freedom, and interaction with radiation. One method to reduce the complexity of the topological soliton dynamics is through the Collective Coordinate Method (CCM). Within this approach, the field theory Lagrangian is reduced to a mechanical one with a finite number of degrees of freedom. In Bogomolnyi-Prasad-Sommerfield (BPS) models, this approach gives rise to the so-called canonical moduli space. Here, the relevant degrees of freedom are the positions of the solitons, and the dynamics can be described effectively as a geodesic motion in the manifold given by the BPS solitons. When we deal with non-BPS sectors this study requires the introduction of a potential, which accounts for the static interactions between solitons, but the formalism is basically the same as in the BPS sector.
This effective point of view can be improved by introducing new coordinates which take into account internal degrees of freedom. But, even in the apparently simple case of \(\phi^{4}\), there is a complicated pattern of final states in scattering processes related to the non-integrability of the model, which cannot be explained satisfactorily by a simple choice of translational and internal oscillatory degrees of freedom [5; 6]. Recently, an important improvement has been made by means of the introduction of the relativistic moduli space [7; 8]. This approach, unlike the standard CCM, can also accommodate some relativistic degrees of freedom. As shown in [7], this quantitatively improves the agreement between the effective model and the field theory. However, neither of these approaches considers dissipative degrees of freedom as generalised coordinates, i.e., they cannot describe radiation. The effect of radiation in soliton dynamics can be very relevant in certain violent processes, such as the kink-antikink annihilation, but it is also determinant in long-time dynamics and may contribute to the fractal structure of the kink scattering. One of the purposes of this work is to introduce the radiation modes as generalised coordinates and study their role in certain dynamical processes.
Apart from the topologically non-trivial solutions of the \(\phi^{4}\) model, there are other time dependent soliton-like structures that deserve special attention, the oscillons. They are topologically trivial solutions (they are in the topological sector of the vacuum) that are long-lived, and in contrast to other time-dependent solitons such as Q-balls, they are not associated to any conserved charge. They are ubiquitous in a wide range of models from one to three dimensions [9; 10; 11; 12; 13; 14], and they have found applications in many scenarios in theoretical physics, from dark matter [15; 16; 17] to cosmology [18; 19; 20]. Some of their characteristics such as profiles and life-time have been studied, mostly numerically, in the literature. Much less is known about their internal structure (see [21] for a recent publication). We will introduce radiation degrees of freedom in an effective model for the \(\phi^{4}\) oscillon. They are able to provide a decay channel for oscillons below the critical amplitude. In
addition, the scattering modes, or more precisely, an effective version of them, are able to describe qualitatively some features of the internal modes hosted by the oscillon, including the decay into kink-antikink pairs.
This paper is organised as follows. In Sec. 2 we briefly review the \(\phi^{4}\) model and its spectrum of perturbations. In Sec. 3 we introduce the radiation modes as collective coordinates and analyse the radiation emitted by a wobbling kink. After that, in Sec. 4 we explore some analytic solutions to the lowest order in perturbation theory involving radiation. In Sec. 5 we introduce the zero mode as a collective coordinate and study its interaction with the rest of the modes. In Sec. 6 we extend our approach to describe oscillons. Finally, Sec. 7 contains our conclusions and further comments. We also add two appendices with some computational details.
## 2 The model and the linearised spectrum
The field theory model that we will discuss is the so-called \(\phi^{4}\) model, which is described by the following \((1+1)\) dimensional Lagrangian for a real scalar field \(\phi(x,t)\)
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\frac{1}{2}(\phi^{2}-1)^{2}\,. \tag{2.1}\]
The field equation for this model reads
\[\Box\phi+2\phi(\phi^{2}-1)=0\,. \tag{2.2}\]
It can be shown by using a Bogomolnyi rearrangement that the minimal-energy static field configurations also satisfy the following first order BPS equations
\[\phi^{\prime}(x)\pm(\phi(x)^{2}-1)=0\,. \tag{2.3}\]
In addition to vacuum solutions (\(\phi(x)=\pm 1\)), there are non-trivial solutions interpolating the vacua which can be computed analytically
\[\phi_{K(\bar{K})}(x)=\pm\tanh(x-x_{0})\,. \tag{2.4}\]
The solution with positive sign is called kink (\(K\)), and the one with negative sign is called antikink (\(\bar{K}\)). They depend on a free parameter, \(x_{0}\), which is interpreted as the position of the kink (antikink). There is a topological charge associated to these solutions, \(Q=\pm 1\), which is conserved during time evolution. The perturbations around the kink (antikink) do not depend on the position or the topological charge of the solution, therefore we will consider a kink centred at the origin perturbed as follows
\[\phi(x,t)=\phi_{K}(x)+\eta(x,t), \tag{2.5}\]
with \(\eta(x,t)=\eta(x)e^{i\omega t}\). Substituting (2.5) into (2.2), at linear order, the field equation looks like
\[-\eta^{\prime\prime}(x)+\left(6\phi_{K}(x)^{2}-2\right)\eta(x)=\omega^{2}\eta(x). \tag{2.6}\]
The equation (2.6) can be considered a Sturm-Liouville differential equation, where we can identify \(U(x)=6\phi_{K}(x)^{2}-2\) with a Pöschl-Teller potential. The spectrum of eigenstates and eigenvalues associated to this Schrödinger-like equation is
\[\eta_{0}(x) = \frac{\sqrt{3}}{2}\operatorname{sech}^{2}x,\qquad\omega_{0}=0, \tag{2.7}\] \[\eta_{s}(x) = \sqrt{\frac{3}{2}}\sinh x\operatorname{sech}^{2}x,\qquad\omega_{s}=\sqrt{3}, \tag{2.8}\] \[\eta_{q}(x) = \frac{3\tanh^{2}x-q^{2}-1-3iq\tanh x}{\sqrt{(q^{2}+1)(q^{2}+4)}}e^{iqx},\qquad\omega_{q}=\sqrt{q^{2}+4}, \tag{2.9}\]
with \(q\in\mathbb{R}\). Altogether, the following relations are satisfied:
\[\left\langle\eta_{0}(x),\eta_{s}(x)\right\rangle = \left\langle\eta_{0}(x),\eta_{q}(x)\right\rangle=\left\langle\eta_{s}(x),\eta_{q}(x)\right\rangle=0, \tag{2.10}\] \[\left\langle\eta_{q}(x),\eta_{q^{\prime}}(x)\right\rangle = 2\pi\,\delta(q-q^{\prime}). \tag{2.11}\]
We have chosen the normalization of the scattering modes such that asymptotically they are plane waves of amplitude one. Moreover, the general theory of Sturm-Liouville systems ensures that the eigenstates of (2.6) form a basis, so \(\mathcal{B}=\left\{\eta_{0}(x),\eta_{s}(x),\eta_{q}(x)\right\}\) may be used to build a general configuration belonging to the linearised field configuration space. As a consequence, a general field configuration close to the kink solution can be expanded as follows
\[\phi(x,t)=\phi_{K}(x)+c_{0}(t)\eta_{0}(x)+c_{s}(t)\eta_{s}(x)+\int_{\mathbb{R}}dq\,c_{q}(t)\eta_{q}(x). \tag{2.12}\]
This natural assumption contains all possible degrees of freedom of the kink: the zero mode \(\eta_{0}(x)\) is responsible for the infinitesimal rigid translation of the kink, the shape mode \(\eta_{s}(x)\) is responsible for the modification of the width of the kink, and the radiation modes (or scattering states) \(\eta_{q}(x)\) are related to the continuum of perturbative fluctuations around the vacuum, that propagate freely to infinity. Such a general ansatz will be used as the basis for the field configurations that we will study throughout this work.
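For concreteness, the normalization conventions above can be checked by direct quadrature. The following minimal Python sketch (the sample momentum is an illustrative choice) verifies that \(\eta_{0}\) and \(\eta_{s}\) are unit-normalised and mutually orthogonal, and that the shape mode is orthogonal to a sample scattering mode, assuming the inner product \(\langle f,g\rangle=\int_{\mathbb{R}}f^{*}(x)g(x)\,dx\).

```python
import numpy as np
from scipy.integrate import quad

def eta0(x):
    return np.sqrt(3)/2/np.cosh(x)**2

def etas(x):
    return np.sqrt(1.5)*np.sinh(x)/np.cosh(x)**2

def etaq(x, q):
    return ((3*np.tanh(x)**2 - q**2 - 1 - 3j*q*np.tanh(x))
            /np.sqrt((q**2 + 1)*(q**2 + 4))*np.exp(1j*q*x))

# unit norms and orthogonality of the two discrete modes
print(quad(lambda x: eta0(x)**2, -np.inf, np.inf)[0])       # ~1
print(quad(lambda x: etas(x)**2, -np.inf, np.inf)[0])       # ~1
print(quad(lambda x: eta0(x)*etas(x), -np.inf, np.inf)[0])  # ~0

# orthogonality against a sample scattering mode (real and imaginary parts)
q = 1.3
print(quad(lambda x: (etas(x)*etaq(x, q)).real, -30, 30)[0],
      quad(lambda x: (etas(x)*etaq(x, q)).imag, -30, 30)[0])  # both ~0
```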
## 3 Leading radiation from the wobbling kink
In this section we study in detail the radiation emitted by a kink whose shape mode is excited with a small amplitude. A similar analysis was performed in [23], albeit with a different approach, so we will use those results as a first check of the validity of our guess for the dissipative modes. In order to do that, let us assume that the kink is at rest at the origin so that we can disregard
the translational degree of freedom, \(\eta_{0}(x)\). Hence, we consider the following simplified ansatz
\[\phi(x,t)=\phi_{K}(x)+c_{s}(t)\eta_{s}(x)+\int_{\mathbb{R}}dq\,c_{q}(t)\eta_{q}(x)\,. \tag{3.1}\]
Since the shape mode solution is exact at linear order in perturbation theory, we will assume that \(c_{q}(t)\sim\mathcal{O}\left(c_{s}^{2}(t)\right)\). Substituting (3.1) in (2.2) we obtain, at linear order in \(c_{s}(t)\),
\[\eta_{s}(x)\bigg{(}\ddot{c}_{s}(t)+\omega_{s}^{2}c_{s}(t)\bigg{)}=0\,. \tag{3.2}\]
Consequently, the shape mode oscillates with frequency \(\omega_{s}\), i.e.,
\[\ddot{c}_{s}(t)+\omega_{s}^{2}c_{s}(t)=0\Rightarrow c_{s}(t)=A_{0}\cos(\omega _{s}t)\,. \tag{3.3}\]
Since this solution solves exactly the equation at this order, we may conclude that there is no source for radiation at linear order in the shape mode amplitude. At second order in \(c_{s}(t)\) we get
\[\eta_{s}(x)\bigg{(}\ddot{c}_{s}(t)+\omega_{s}^{2}c_{s}(t)\bigg{)}+\int_{\mathbb{R}}dq\,\eta_{q}(x)\bigg{(}\ddot{c}_{q}(t)+\omega_{q}^{2}c_{q}(t)\bigg{)}+6c_{s}^{2}(t)\phi_{K}(x)\eta_{s}^{2}(x)=0\,. \tag{3.4}\]
Projecting onto \(\eta_{s}(x)\) and assuming the orthogonality relations (2.10), equation (3.4) reduces to
\[\ddot{c}_{s}(t)+\omega_{s}^{2}c_{s}(t)+\frac{9\pi\sqrt{6}}{32}\,c_{s}^{2}(t)= 0\,. \tag{3.5}\]
This is the equation of an anharmonic oscillator corrected by a quadratic term. If we now project onto \(\eta_{q^{\prime}}^{*}(x)\) and assume again the orthogonality relations (2.10)-(2.11), the equation (3.4) reads as
\[\ddot{c}_{q}(t)+\omega_{q}^{2}c_{q}(t)+\frac{3}{\pi}c_{s}^{2}(t)\int_{ \mathbb{R}}dx\,\eta_{q}^{*}(x)\phi_{K}(x)\eta_{s}^{2}(x)=0\,, \tag{3.6}\]
where the last term can be interpreted as the overlap of the scattering state of frequency \(\omega_{q}\) with the combination \(\phi_{K}(x)\eta_{s}^{2}(x)\). Such a term can be computed exactly:
\[\mathcal{F}(q)=\int_{\mathbb{R}}dx\,\eta_{q}^{*}(x)\phi_{K}(x)\eta_{s}^{2}(x) =-\frac{i\pi}{32}\sqrt{\frac{q^{2}+4}{q^{2}+1}}\frac{q^{2}(q^{2}-2)}{\sinh \left(\pi q/2\right)}\,. \tag{3.7}\]
Therefore, (3.6) looks like
\[\ddot{c}_{q}(t)+\omega_{q}^{2}c_{q}(t)+\frac{3}{\pi}c_{s}^{2}(t)\mathcal{F}(q) =0\,. \tag{3.8}\]
Let us assume that the amplitude of the shape mode \(c_{s}(t)\) is given by its linear approximation, i.e., \(c_{s}(t)=A_{0}\cos(\omega_{s}t)\). We assume that the shape mode is the only source of radiation, so in the absence of the shape mode there is no radiation. We take this into account by imposing the initial conditions \(c_{q}(0)=0\) and \(\dot{c}_{q}(0)=0\). With this choice, the solution of (3.8) takes the form
\[c_{q}(t)=-\frac{3A_{0}^{2}}{2\pi}\frac{(4\omega_{s}^{2}-\omega_{q}^{2})-\omega_{q}^{2}\cos(2\omega_{s}t)-(4\omega_{s}^{2}-2\omega_{q}^{2})\cos(\omega_{q}t)}{\omega_{q}^{2}(4\omega_{s}^{2}-\omega_{q}^{2})}\mathcal{F}(q)\,. \tag{3.9}\]
This expression provides time-dependent amplitudes to radiation modes. As a consequence, the radiation emitted by an oscillating kink is
\[R(x,t)=\int_{\mathbb{R}}\,dq\,c_{q}(t)\eta_{q}(x), \tag{3.10}\]
with \(c_{q}(t)\) given by (3.9). This is the exact form of the radiation at leading order for a static wobbling kink valid at all distances and times. Notice that the structure of \(\mathcal{F}(q)\) indicates that some frequencies are suppressed in the radiation. It has a maximum at \(q\approx 2\sqrt{2}\), which is consistent with the fact that the shape mode is the quadratic source for radiation. Moreover, it can be shown analytically that at large distances all frequencies but \(q=2\sqrt{2}\) are suppressed. A non-trivial calculation (see Appendix A for details) leads to the following expression for radiation at infinity
\[R_{\infty}(x,t)=\frac{3\,\pi A_{0}^{2}}{2\sinh(\sqrt{2}\pi)}\sqrt{\frac{3}{8}} \cos\big{(}2\sqrt{3}t-2\sqrt{2}x-\delta\big{)}. \tag{3.11}\]
This expression agrees with the one obtained in [23]. Following [23], the decay of the amplitude of the shape mode into radiation can be determined directly from (3.11). Taking into account that the average energy flux carried by the wave (3.11) has to be equal to the rate of change of the energy of the excited kink (for details see [23]), we arrive at
\[A(t)=\frac{1}{\sqrt{A_{0}^{-2}+0.03\,t}}\,, \tag{3.12}\]
where \(A_{0}\) is the initial amplitude of the shape mode. Notice that in this approximation there is no back-reaction of the radiation into the shape mode. Therefore, strictly speaking, the shape mode oscillates harmonically with frequency \(\omega_{s}\) and constant amplitude. In addition, this approximation suggests that it is possible to excite resonantly the shape mode with radiation of the appropriate frequency. In order to obtain these results, we have to consider the shape mode amplitude as a free collective coordinate interacting with the radiation coordinates. We will study these issues in detail in the next section.
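Since (3.9) and (3.10) are fully explicit, the leading-order radiation can also be evaluated directly by numerical quadrature. The short Python sketch below does this; the grid, amplitude and sample points are illustrative choices, the even number of grid points keeps the removable \(0/0\) points \(q=0,\pm 2\sqrt{2}\) off the grid, and resolving the resonant region requires \(\Delta q\lesssim 1/t\).

```python
import numpy as np

ws, A0 = np.sqrt(3.0), 0.1

def F(q):
    # overlap integral (3.7)
    return (-1j*np.pi/32*np.sqrt((q**2 + 4)/(q**2 + 1))
            *q**2*(q**2 - 2)/np.sinh(np.pi*q/2))

def eta_q(x, q):
    return ((3*np.tanh(x)**2 - q**2 - 1 - 3j*q*np.tanh(x))
            /np.sqrt((q**2 + 1)*(q**2 + 4))*np.exp(1j*q*x))

def c_q(t, q):
    # scattering amplitudes (3.9)
    wq2 = q**2 + 4.0
    num = (4*ws**2 - wq2) - wq2*np.cos(2*ws*t) - (4*ws**2 - 2*wq2)*np.cos(np.sqrt(wq2)*t)
    return -3*A0**2/(2*np.pi)*num/(wq2*(4*ws**2 - wq2))*F(q)

q = np.linspace(-6.0, 6.0, 4800)   # even number of points: q = 0, +-2*sqrt(2) are avoided
dq = q[1] - q[0]

def R(x, t):
    # quadrature approximation of (3.10); the imaginary part is discretisation noise
    return np.sum(c_q(t, q)*eta_q(x, q)).real*dq

A_inf = 3*np.pi*A0**2/(2*np.sinh(np.sqrt(2)*np.pi))*np.sqrt(3/8)   # amplitude appearing in (3.11)
print([R(15.0, t) for t in (30.0, 30.5, 31.0)], A_inf)
```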
## 4 Interaction of radiation and shape mode
In this section we will derive, within the collective coordinate approach, the effective Lagrangian involving the shape mode coupled to radiation, i.e., \(c_{s}(t)\) and \(c_{q}(t)\) will be considered genuine collective coordinates. In order to achieve that, let us suppose a perturbation of the kink solution of the form (3.1), where \(c_{s}(t)\sim\mathcal{O}(A_{0})\) and \(c_{q}(t)\sim\mathcal{O}(A_{0}^{2})\) with \(|A_{0}|\ll 1\). Substituting (3.1) into (2.1) we
obtain
\[{\cal L}_{s,q} = -\frac{4}{3}+\frac{1}{2}\left(\dot{c}_{s}^{2}(t)-\omega_{s}^{2}c_{s} ^{2}(t)\right)+\pi\int_{\mathbb{R}}dq\,\left(\dot{c}_{q}(t)\dot{c}_{-q}(t)- \omega_{q}^{2}c_{q}(t)c_{-q}(t)\right) \tag{4.1}\] \[-\int_{\mathbb{R}}dx\left(2\phi_{K}(x)c_{s}^{3}(t)\eta_{s}^{3}(x)+ 6\phi_{K}(x)c_{s}^{2}(t)\eta_{s}^{2}(x)\int_{\mathbb{R}}dq\,c_{q}(t)\eta_{q}(x )\right)\] \[-\int_{\mathbb{R}}dx\Biggl{(}6\phi_{K}(x)c_{s}(t)\eta_{s}(x)\int _{\mathbb{R}^{2}}dqdq^{\prime}c_{q}(t)c_{q^{\prime}}(t)\eta_{q}(x)\eta_{q^{ \prime}}(x)\Biggr{)},\]
where the normalization of the shape mode and the delta-normalization relation (2.11) were taken into account. Integrating in the \(x-\)variable we get
\[\int_{\mathbb{R}}dx\,\phi_{K}(x)\,\eta_{s}(x)^{3}=\frac{3}{32} \sqrt{\frac{3}{2}}\pi\,, \tag{4.2}\] \[\int_{\mathbb{R}}dx\,\phi_{K}(x)\,\eta_{s}^{2}(x)\,\eta_{q}(x)= \frac{i\pi}{32}\sqrt{\frac{q^{2}+4}{q^{2}+1}}\frac{q^{2}(q^{2}-2)}{\sinh\left( \pi q/2\right)}\,,\] (4.3) \[\int_{\mathbb{R}}dx\,\phi_{K}(x)\eta_{s}(x)\eta_{q}(x)\eta_{q^{ \prime}}(x)=\] \[\frac{\pi}{16}\sqrt{\frac{3}{2}}\frac{17+17q^{2}+17q^{\prime 2}+ 10q^{2}q^{\prime 2}-q^{4}-q^{\prime 4}+q^{2}q^{\prime 4}+q^{4}q^{ \prime 2}-q^{6}-q^{\prime 6}}{\sqrt{(q^{2}+1)(q^{2}+4)}\sqrt{(q^{\prime 2}+1)(q^{ \prime 2}+4)}\cosh(\frac{\pi}{2}(q+q^{\prime}))}\,. \tag{4.4}\]
One finally obtains the following expression
\[{\cal L}_{s,q} = \frac{1}{2}\biggl{(}\dot{c}_{s}^{2}(t)-\omega_{s}^{2}c_{s}^{2}(t)\biggr{)}+\pi\int_{\mathbb{R}}dq\,\left(\dot{c}_{q}(t)\dot{c}_{-q}(t)-\omega_{q}^{2}c_{q}(t)c_{-q}(t)\right)-\frac{3\pi}{16}\sqrt{\frac{3}{2}}c_{s}^{3}(t) \tag{4.5}\] \[-\frac{3\pi i}{16}\int_{\mathbb{R}}dq\,\sqrt{\frac{q^{2}+4}{q^{2}+1}}\frac{q^{2}(q^{2}-2)}{\sinh\left(\pi q/2\right)}\,c_{s}^{2}(t)c_{q}(t)+c_{s}(t)\int_{\mathbb{R}^{2}}dqdq^{\prime}\,f_{sq}(q,q^{\prime})c_{q}(t)c_{q^{\prime}}(t)\,,\]
where we have removed from (4.1) a constant term (kink rest energy with opposite sign) since it does not contribute to the field equations, and \(f_{sq}(q,q^{\prime})\) can be read from (4.4). The Lagrangian (4.5) describes a system of harmonic oscillators coupled by the last two terms. The equations of motion governing the evolution of \(c_{q}(t)\) and \(c_{s}(t)\) are yielded by
\[\ddot{c}_{-q}(t)+\omega_{q}^{2}\,c_{-q}(t)+\frac{3i}{32}\sqrt{\frac{q^{2}+4}{q^{2}+1}}\frac{q^{2}(q^{2}-2)}{\sinh\left(\pi q/2\right)}c_{s}^{2}(t)-\frac{1}{\pi}c_{s}(t)\int dq^{\prime}f_{sq}(q,q^{\prime})c_{q^{\prime}}(t)=0, \tag{4.6}\] \[\ddot{c}_{s}(t)+\omega_{s}^{2}c_{s}(t)+\frac{9\pi}{16}\sqrt{\frac{3}{2}}c_{s}^{2}(t)+\frac{3\pi i}{8}\int_{\mathbb{R}}dq\,\sqrt{\frac{q^{2}+4}{q^{2}+1}}\frac{q^{2}(q^{2}-2)}{\sinh\left(\pi q/2\right)}c_{s}(t)c_{q}(t)-\int_{\mathbb{R}^{2}}dqdq^{\prime}\,f_{sq}(q,q^{\prime})c_{q}(t)c_{q^{\prime}}(t)=0. \tag{4.7}\]
This coupled system has a straightforward interpretation: Eq. (4.6) is a forced harmonic oscillator of fundamental frequency \(\omega_{q}\) with external force proportional to \(c_{s}^{2}(t)\) (notice that this equation was derived in the previous section, but now we allow for back-reaction of radiation in the shape mode). Regarding Eq. (4.7), its structure is more involved. It describes an anharmonic oscillator of fundamental frequency \(\omega_{s}\) coupled linearly to the \(c_{q}(t)\). Note that, for sufficiently small amplitudes of \(c_{s}(t)\), this expression reduces to the well-known Hill's equation provided that \(c_{q}(t)\) is periodic.
In order to solve the system (4.6)-(4.7) numerically, we have to choose a discretization in \(q\), i.e., we have to select \(N\) scattering modes labelled by \(q_{i}\) and solve the coupled system of \(N+1\) ordinary differential equations. Note that the discretization of the integrals in \(q\) gives rise to a Hamiltonian system that conserves energy. Moreover, the discretization fixes a time cut-off of order \(t_{c}=1/\Delta q\) (where \(\Delta q=q_{i}-q_{i-1}\)), which implies that for times larger than \(t_{c}\) our computations are no longer reliable. We will see in Sec. 6 that there are other effective ways to mimic the effect of dissipative degrees of freedom.
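As an illustration of this procedure, the following minimal Python sketch discretises the momentum integrals of (4.6)-(4.7) and integrates the resulting system with a standard ODE solver; the number of modes, cutoff, amplitude and integration time are illustrative choices, and the complex mode amplitudes are handled directly by the solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

ws = np.sqrt(3.0)
N, qmax, A0 = 20, 3.0, 0.1
q = np.linspace(-qmax, qmax, N)     # even N keeps the removable point q = 0 off the grid
dq = q[1] - q[0]
wq = np.sqrt(q**2 + 4.0)

def g(k):
    # sqrt((k^2+4)/(k^2+1)) k^2 (k^2-2)/sinh(pi k/2), the combination appearing in (4.6)-(4.7)
    return np.sqrt((k**2 + 4)/(k**2 + 1))*k**2*(k**2 - 2)/np.sinh(np.pi*k/2)

def f_sq(k, kp):
    # coupling f_sq(q, q') as read off from (4.4)
    num = (17 + 17*k**2 + 17*kp**2 + 10*k**2*kp**2 - k**4 - kp**4
           + k**2*kp**4 + k**4*kp**2 - k**6 - kp**6)
    den = (np.sqrt((k**2 + 1)*(k**2 + 4))*np.sqrt((kp**2 + 1)*(kp**2 + 4))
           *np.cosh(np.pi*(k + kp)/2))
    return -3*np.pi/8*np.sqrt(1.5)*num/den

gq   = g(q)
Fsq  = f_sq(q[:, None], q[None, :])    # f_sq(q_j, q_k), enters Eq. (4.7)
FsqM = f_sq(-q[:, None], q[None, :])   # f_sq(-q_j, q_k), enters Eq. (4.6) rewritten for c_q

def rhs(t, y):
    cs, vs = y[0], y[1]
    cq, vq = y[2:2 + N], y[2 + N:]
    acc_s = (-ws**2*cs - 9*np.pi/16*np.sqrt(1.5)*cs**2
             - 3j*np.pi/8*cs*np.sum(gq*cq)*dq
             + np.einsum('jk,j,k->', Fsq, cq, cq)*dq**2)
    acc_q = -wq**2*cq + 3j/32*gq*cs**2 + cs/np.pi*(FsqM @ cq)*dq
    return np.concatenate(([vs, acc_s], vq, acc_q))

y0 = np.zeros(2 + 2*N, dtype=complex)
y0[0] = A0                              # IC (4.8): excited shape mode, no radiation
tc = 1.0/dq                             # discretization cut-off discussed above
sol = solve_ivp(rhs, (0.0, tc), y0, max_step=0.05)
print("c_s at t =", tc, ":", sol.y[0, -1].real)
```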
In our first numerical experiment we will show that the Lagrangian (4.5) describes accurately the decay of the shape mode for \(t\lesssim t_{c}\). We will choose the following initial conditions (IC)
\[c_{s}(0)=A_{0}\,,\quad c^{\prime}_{s}(0)=0\,,\quad c_{q}(0)=0\,,\quad\text{and} \quad c^{\prime}_{q}(0)=0\,. \tag{4.8}\]
These IC describe a kink with the shape mode excited with an initial amplitude of \(A_{0}\) and no radiation. Due to the addition of the dissipative degrees of freedom (radiation modes), the effective system (4.6)-(4.7) naturally allows the decay of the shape mode. In Fig. 1 we compare the decay of the shape mode obtained from field theory with the solution of the effective system (4.6)-(4.7). Notice that the bigger the number of scattering modes added to the effective model, the better the fit to field theory. We see that, for times \(t<t_{c}\) and a number of scattering modes \(N>5\), both solutions agree with great accuracy.
As we have learnt from the previous section, the most relevant scattering modes to describe the decay should have frequencies close to \(\omega=2\omega_{s}\), since these modes carry the energy to infinity. In
Figure 1: Decay of the shape mode in field theory (solid line) and in the effective model (4.5) (dashed line) compared to the analytical decay law given by (3.12) (dotted-dashed line). For the computation we have chosen \(n=20\) equidistant scattering modes in the interval \(q\in[-3,3]\).
terms of the system (4.6)-(4.7) this has a simple explanation: if only modes with frequency \(\omega\) far from \(2\omega_{s}\) are allowed, the forced harmonic oscillator equation (4.6) never enters the resonant region \(\omega=2\omega_{s}\), and as a result the excited scattering modes are not able to carry enough energy away from the shape mode. For small shape mode amplitudes and radiation frequencies close to \(2\omega_{s}\) the equation (4.7) enters the unstable region, and the amplitude of the shape mode grows exponentially. Assuming a monochromatic wave of the form (4.16), the equation (4.7) reduces to
\[\ddot{c}_{s}(t)+\left(\omega_{s}^{2}+f(q_{0})\sin(\omega_{q_{0}}t)\right)c_{s}( t)=0\,, \tag{4.9}\]
with
\[f(q_{0})=-\frac{3\pi A_{q_{0}}}{4}\frac{q_{0}^{2}(q_{0}^{2}-2)}{\sinh\left(\pi q _{0}/2\right)}\sqrt{\frac{q_{0}^{2}+4}{q_{0}^{2}+1}}\,. \tag{4.10}\]
The equation (4.9) constitutes a Mathieu equation. By means of the general theory of Mathieu equations we can compute the bands of instability. For small \(A_{q_{0}}\) they satisfy the relation \(\omega_{s}/\sqrt{q_{0}^{2}+4}=k/2\), where \(k\in\mathbb{Z}\). For \(k=1\) the instability appears when the frequency of the radiation is twice the frequency of the shape mode. Hence, the radiation triggers resonantly the shape mode and one should expect an exponential amplification of its amplitude. Due to energy conservation, as \(c_{s}(t)\) grows, the third term in (4.6) transfers energy to the radiation modes and the exponential growth stops. Notice that this resonant transfer mechanism between internal modes has been also observed in two and three-dimensional solitons [30, 31].
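The onset of this parametric instability can be seen already in a minimal numerical experiment with (4.9)-(4.10): integrating the Mathieu-type equation for momenta on and off the \(k=1\) band clearly separates bounded and exponentially growing solutions. The amplitude \(A_{q_{0}}\), integration time and sampled momenta in the Python sketch below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

ws = np.sqrt(3.0)

def f(q0, Aq):
    # parametric drive strength (4.10)
    return (-3*np.pi*Aq/4*q0**2*(q0**2 - 2)/np.sinh(np.pi*q0/2)
            *np.sqrt((q0**2 + 4)/(q0**2 + 1)))

def max_amplitude(q0, Aq=0.05, T=300.0):
    # integrate (4.9) starting from a tiny shape-mode amplitude
    wq = np.sqrt(q0**2 + 4.0)
    rhs = lambda t, y: [y[1], -(ws**2 + f(q0, Aq)*np.sin(wq*t))*y[0]]
    sol = solve_ivp(rhs, (0.0, T), [1e-3, 0.0], max_step=0.02)
    return np.max(np.abs(sol.y[0]))

# only the resonant momentum (w_q = 2 w_s, i.e. q0 = 2 sqrt(2)) shows exponential growth
for q0 in (1.5, 2.0, 2*np.sqrt(2), 3.5):
    print(f"q0 = {q0:5.3f}   max |c_s| = {max_amplitude(q0):.2e}")
```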
Figure 2: Frequency spectrum of the radiation emitted by a wobbling kink obtained from (4.6)-(4.7). We have taken \(n=40\) equidistant scattering modes in the interval \(q\in[-4,4]\), time of simulation \(t=t_{c}\), and the initial values of the shape mode \(c_{s}(0)=0.03\), \(0.05\) and \(0.07\) (solid, dashed and dotted-dashed line, respectively). It can be appreciated that the maxima take place approximately at \(\omega_{q}=2\omega_{s}\) for small shape mode amplitudes.
In our second numerical experiment we will explore the excitation of the shape mode when the kink is illuminated with radiation of frequency \(\omega_{q}\). We choose the following IC
\[c_{s}(0)=0\,,\quad c^{\prime}_{s}(0)=0\,, \tag{4.11}\] \[c_{q}(0)=0\,,\quad\dot{c}_{q}(0)=0\,,\text{ for }q\neq q_{0},-q_{0}\,,\] (4.12) \[c_{q}(0)=A_{q}\,,\quad\dot{c}_{q}(0)=iA_{q}\omega_{q},\text{ for }q=q_{0},\] (4.13) \[c_{q}(0)=A_{q}\,,\quad\dot{c}_{q}(0)=-iA_{q}\omega_{q},\text{ for }q=-q_{0}\,. \tag{4.14}\]
In linear theory, the IC (4.13) and (4.14) correspond to the solution
\[c_{q}(t)=A_{q}e^{i\omega_{q}t}\delta(q-q_{0})+A_{q}e^{-i\omega_{q}t}\delta(q+q_ {0})\,. \tag{4.15}\]
This choice describes a superposition of a kink with a combination of scattering modes of frequency \(\omega_{q_{0}}\). Initially, the radiation has the form
\[\int_{\mathbb{R}}dq\,c_{q}(t)\eta_{q}(x) = \frac{A_{q_{0}}}{\sqrt{(q_{0}^{2}+1)(q_{0}^{2}+4)}}\left[6\tanh(x)(\cos(\omega_{q_{0}}t+q_{0}x)\tanh(x)+q_{0}\sin(\omega_{q_{0}}t+q_{0}x))\right. \tag{4.16}\] \[\left.-2(1+q_{0}^{2})\cos(\omega_{q_{0}}t+q_{0}x)\right].\]
Notice that, even though asymptotically (4.16) is a plane wave of frequency \(\omega_{q_{0}}\), close to the origin it gets distorted by the presence of the kink. In Fig. 3 we compare the excitation of the shape mode in field theory with the solution of Eqs. (4.6)-(4.7) with IC (4.11)-(4.14).
Quite remarkably, we can obtain an approximate analytical solution for the shape mode in the background of radiation for frequencies different from the resonance frequency. For small radiation amplitudes the relevant terms in Eq. (4.7) are the harmonic oscillator part and the last term
Figure 3: Shape mode excitation by external radiation of different frequencies and amplitude \(A_{q}=0.02\) in field theory (solid line) and in the effective theory (dashed line) (4.5). We have taken into account \(n=30\) equidistant scattering modes in the interval \(q\in[-3,3]\).
(radiation source). This equation has the form of a forced harmonic oscillator. For the initial conditions given by (4.11)-(4.14) we get
\[c_{s}(t)=A_{q_{0}}^{2}\Omega(q_{0})\left(\frac{1}{\omega_{s}^{2}}+\frac{\left(4 \omega_{q_{0}}^{2}-(\text{sech}(\pi q_{0})+1)\omega_{s}^{2}\right)\cos\left(t \omega_{s}\right)}{\omega_{s}^{2}\left(\omega_{s}^{2}-4\omega_{q_{0}}^{2} \right)}+\frac{\text{sech}(\pi q_{0})\cos\left(2t\omega_{q_{0}}\right)}{\omega _{s}^{2}-4\omega_{q_{0}}^{2}}\right), \tag{4.17}\]
where
\[\Omega(q_{0})=-\frac{3\sqrt{\frac{3}{2}}\pi\left(8q_{0}^{4}+34q_{0}^{2}+17 \right)}{4\left(q_{0}^{4}+5q_{0}^{2}+4\right)}\,. \tag{4.18}\]
For larger radiation amplitudes this expression is not valid anymore, and new phenomena may appear (see for example [24] for \(K\bar{K}\) creation). A comparison between the approximate analytical solution and the field theory results can be found in Fig. 4. There we can appreciate the agreement between both results. Indeed, this analytic expression allows us to explain the negative amplitude excitation of the shape mode.
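For reference, (4.17)-(4.18) can be evaluated directly; the short Python sketch below scans the peak shape-mode excursion for a few incident momenta away from the resonance. The amplitude, time window and momenta are illustrative choices.

```python
import numpy as np

ws = np.sqrt(3.0)

def Omega(q0):
    # prefactor (4.18)
    return -3*np.sqrt(1.5)*np.pi*(8*q0**4 + 34*q0**2 + 17)/(4*(q0**4 + 5*q0**2 + 4))

def c_s(t, q0, Aq):
    # approximate shape-mode response (4.17)
    wq = np.sqrt(q0**2 + 4.0)
    sech = 1/np.cosh(np.pi*q0)
    return Aq**2*Omega(q0)*(1/ws**2
            + (4*wq**2 - (sech + 1)*ws**2)*np.cos(ws*t)/(ws**2*(ws**2 - 4*wq**2))
            + sech*np.cos(2*wq*t)/(ws**2 - 4*wq**2))

t = np.linspace(0.0, 60.0, 4001)
for q0 in (0.5, 1.0, 1.5, 2.0):
    print(q0, np.max(np.abs(c_s(t, q0, Aq=0.02))))
```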
So far we have not taken into account the translational degree of freedom in our model. However, the translational mode can interact with radiation and produce unexpected phenomena like the negative radiation pressure [25]. We will explore some features of the interaction of the translational mode in our approach in the following section.
## 5 Adding the translational mode
In this section we aim to generalise the previous approach allowing for translations of the kink. This new case will give rise to a richer structure and will allow for new couplings among the
Figure 4: Shape mode excitation by external radiation of different frequencies and amplitude \(A_{q}=0.02\) in field theory (solid line) and with the approximate solution (dashed line) (4.17). We have taken into account \(n=30\) equidistant scattering modes in the interval \(q\in[-3,3]\).
different collective coordinates. Let us now consider a configuration of the form
\[\phi(x,t)=\phi_{K}\big{(}x-a(t)\big{)}+c_{s}(t)\eta_{s}\big{(}x-a(t)\big{)}+\int_{ \mathbb{R}}dq\,c_{q}(t)\eta_{q}\big{(}x-a(t)\big{)}, \tag{5.1}\]
where \(a(t)\) is the collective coordinate that describes the translation of the kink (or the kink zero mode). Once more, we have to substitute the field configuration ansatz into the Lagrangian density of the full theory (2.1). Notice that (5.1) only adds new contributions to the kinetic part with respect to (3.1), whereas the potential terms remain the same. The additional contributions to the effective Lagrangian due to the presence of the zero mode are given by
\[{\cal L}_{t} = \frac{1}{2}\int_{\mathbb{R}}dx\,\bigg{(}\dot{a}^{2}(t)\bigg{(} \int_{\mathbb{R}}dq\,dq^{\prime}\,c_{q}(t)c_{q^{\prime}}(t)\eta^{\prime}_{q}(x )\eta^{\prime}_{q^{\prime}}(x)+{\phi^{\prime}}^{2}_{K}(x)+c_{s}^{2}(t)\eta^{ \prime}{}^{2}_{s}(x) \tag{5.2}\] \[+2\phi^{\prime}_{K}(x)c_{s}(t)\eta^{\prime}_{s}(x)+2\phi^{\prime }_{K}(x)\int_{\mathbb{R}}dq\,c_{q}(t)\eta^{\prime}_{q}(x)+2c_{s}(t)\eta^{ \prime}_{s}(x)\int_{\mathbb{R}}dq\,c_{q}(t)\eta^{\prime}_{q}(x)\bigg{)}\] \[-\dot{a}(t)\bigg{(}2\dot{c}_{s}(t)\eta_{s}(x)\int_{\mathbb{R}}dq \,c_{q}(t)\eta^{\prime}_{q}(x)+2c_{s}(t)\eta^{\prime}_{s}(x)\int_{\mathbb{R}} dq\,\dot{c}_{q}(t)\eta_{q}(x)\] \[+2\int_{\mathbb{R}}dq\,dq^{\prime}\,c_{q^{\prime}}(t)\dot{c_{q}} (t)\eta_{q}(x)\eta^{\prime}_{q^{\prime}}(x)\bigg{)}\bigg{)}.\]
Integrating in the \(x-\)coordinate, adding \({\cal L}_{s,q}\), and assuming \(\big{|}\dot{a}(t)\big{|}\ll 1\), we finally get the following effective Lagrangian
\[{\cal L}_{s,q,t} = \frac{1}{2}\,\bigg{(}\dot{c}_{s}^{2}(t)-\omega_{s}^{2}c_{s}^{2}(t )\bigg{)}+\pi\int dq\,\bigg{(}\dot{c}_{q}(t)\dot{c}_{-q}(t)-\omega_{q}^{2}c_{ q}(t)c_{-q}(t)\bigg{)}+c_{s}^{2}(t)\int dqf_{s}(q)c_{q}(t) \tag{5.3}\] \[+ c_{s}(t)\int dqdq^{\prime}f_{sq}(q,q^{\prime})c_{q}(t)c_{q^{ \prime}}(t)+\frac{2}{3}\dot{a}^{2}(t)+\frac{\pi}{4}\sqrt{\frac{3}{2}}\dot{a}^ {2}(t)c_{s}(t)+\dot{a}^{2}(t)\int dqf_{aa}(q)c_{q}(t)\] \[+ \dot{a}(t)\int dqf_{as}(q)\,\big{(}\dot{c}_{s}(t)c_{q}(t)-c_{s}(t )\dot{c}_{q}(t)\big{)}+\dot{a}(t)\int dqdq^{\prime}f_{a}(q,q^{\prime})\dot{c} _{q}c_{q^{\prime}}(t).\]
The couplings between modes are collected in the following functions
\[f_{s}(q) = -\frac{3i\pi}{16}\sqrt{\frac{q^{2}+4}{q^{2}+1}}\frac{q^{2}(q^{2}- 2)}{\sinh\big{(}\pi q/2\big{)}}, \tag{5.4}\] \[f_{as}(q) = -\frac{\pi}{4}\sqrt{\frac{3}{2}}\sqrt{\frac{q^{2}+1}{q^{2}+4}} \frac{q^{2}+3}{\cosh\big{(}\frac{\pi q}{2}\big{)}}\,,\] (5.5) \[f_{aa}(q) = -i\frac{\pi}{4}\sqrt{\frac{q^{2}+4}{q^{2}+1}}\frac{q^{2}}{\sinh \big{(}\pi q/2\big{)}}\,,\] (5.6) \[f_{a}(q,q^{\prime}) = \frac{3i\pi}{4}\frac{\big{(}4+q^{2}+q^{\prime 2}\big{)}}{\sqrt{(q^{2} +1)(q^{2}+4)}\sqrt{(q^{\prime 2}+1)(q^{\prime 2}+4)}}\frac{q^{2}-q^{\prime 2}}{ \sinh(\frac{\pi}{2}(q+q^{\prime}))}\] (5.7) \[-\frac{2i\pi q^{\prime}\,\big{(}4-9qq^{\prime}-2q^{2}-2q^{\prime 2 }+q^{2}q^{\prime 2}\big{)}}{\sqrt{(q^{2}+1)(q^{2}+4)}\sqrt{(q^{\prime 2}+1)(q^{\prime 2 }+4)}}\delta(q+q^{\prime})\,,\] \[f_{sq}(q,q^{\prime}) = -\frac{3\pi}{8}\sqrt{\frac{3}{2}}\frac{17+17q^{2}+17q^{\prime 2}+10q^{2}q^{ \prime 2}-q^{4}-q^{\prime 4}+q^{2}q^{\prime 4}+q^{4}q^{\prime 2}-q^{6}-q^{\prime 6}}{ \sqrt{(q^{2}+1)(q^{2}+4)}\sqrt{(q^{\prime 2}+1)(q^{\prime 2}+4)}\cosh(\frac{\pi}{2}(q+q^{ \prime}))}. \tag{5.8}\]
The quadratic terms in (5.3) are just a collection of harmonic oscillators plus the kinetic term of the kink. On the other hand, the cubic terms describe the interaction between the modes. The
field equations associated to (5.3) for the radiation modes are yielded by
\[\ddot{c}_{-q}(t) + \omega_{q}^{2}c_{-q}(t)-\frac{1}{2\pi}c_{s}^{2}(t)f_{s}(q)-\frac{1} {\pi}c_{s}(t)\int dq^{\prime}f_{sq}(q,q^{\prime})c_{q^{\prime}}(t)-\frac{1}{2\pi }\dot{a}^{2}(t)f_{aa}(q) \tag{5.9}\] \[- \frac{1}{2\pi}\ddot{a}(t)f_{as}(q)c_{s}(t)-\frac{1}{\pi}\dot{a}(t )f_{as}(q)\dot{c}_{s}(t)+\frac{1}{2\pi}\dot{a}(t)\int dq^{\prime}\dot{c}_{q^{ \prime}}(t)\left(f_{a}(q,q^{\prime})-f_{a}(q^{\prime},q)\right)\] \[+ \frac{1}{2\pi}\ddot{a}(t)\int dq^{\prime}f_{a}(q,q^{\prime})c_{q^ {\prime}}(t)=0,\]
whilst the equations of motion determining the evolution of \(c_{s}(t)\) and \(a(t)\) take the form
\[\ddot{c}_{s}(t) + \omega_{s}^{2}c_{s}(t)-2c_{s}(t)\int dqf_{s}(q)c_{q}(t)-\int dqdq^ {\prime}f_{sq}(q,q^{\prime})c_{q}(t)c_{q^{\prime}}(t)-\frac{\pi}{4}\sqrt{\frac {3}{2}}\dot{a}^{2}(t) \tag{5.10}\] \[+ 2\dot{a}(t)\int dqf_{as}(q)\dot{c}_{q}(t)+\ddot{a}(t)\int dqf_{ as}(q)c_{q}(t)=0\,,\]
and
\[\frac{4}{3}\ddot{a}(t) + \frac{\pi}{2}\sqrt{\frac{3}{2}}\left(\ddot{a}(t)c_{s}(t)+\dot{a}( t)\dot{c}_{s}(t)\right)+2\int dqf_{aa}(q)\left(\ddot{a}(t)c_{q}(t)+\dot{a}(t) \dot{c}_{q}(t)\right) \tag{5.11}\] \[+ \int dqf_{as}(q)\left(\ddot{c}_{s}(t)c_{q}(t)-c_{s}(t)\ddot{c}_{ q}(t)\right)+\frac{d}{dt}\int dqdq^{\prime}f_{a}(q,q^{\prime})\dot{c}_{q}c_{q^ {\prime}}(t)=0\,.\]
The first configuration we will discuss is the translating kink without any excitation. In the non-relativistic approach this corresponds to \(a(t)=x_{0}+vt\), while the rest of the modes vanish. However, a simple inspection of the system (5.9)-(5.11) reveals something surprising at first glance: if \(\dot{a}(t)\neq 0\), the terms proportional to \(\dot{a}(t)^{2}\) in (5.9) and (5.10) act as sources for the shape mode and radiation. This seems contradictory since, due to the Lorentz invariance of the model, the boosted kink is an exact solution. The Lorentz boosted version of the kink has the following form
\[\phi(x,t)=\tanh\left(\frac{x-vt}{\sqrt{1-v^{2}}}\right). \tag{5.12}\]
Of course, (5.12) does not contain any radiation. This apparent pathology of the effective model can be easily resolved. The standard CCM approach is not Lorentz invariant at first order. This is obvious from the asymmetry between time and spatial coordinates in the ansatz (5.1). However, once we consider higher order terms, approximate Lorentz invariant solutions exist. Actually, the mentioned source terms for \(c_{s}(t)\) and \(c_{q}(t)\) are responsible for the Lorentz contraction of a moving kink. Let us consider a moving kink with velocity \(v\ll 1\) located at the origin, then initially \(\dot{a}(t)=v\). Assuming that \(c_{s}(t)\) and \(c_{q}(t)\) do not depend on time, an approximate solution of (5.10) and (5.9) is given by
\[c_{s}(t)=\frac{\pi}{4\sqrt{6}}v^{2},\quad c_{q}(t)=-\frac{iq^{2}\,\mbox{csch}( \pi q/2)}{8\sqrt{(q^{2}+1)(q^{2}+4)}}v^{2}, \tag{5.13}\]
where we disregard corrections of order \({\cal O}(v^{4})\). On the other hand, if we expand (5.12) with respect to \(v\), at \(t=0\) we get
\[\phi(0,x)=\tanh(x)+\frac{1}{2}\left(x-x\tanh^{2}(x)\right)v^{2}+{\cal O}(v^{4 }). \tag{5.14}\]
We denote the first correction to the Lorentz contraction by \(\phi^{(1)}(x)=\frac{1}{2}\left(x-x\tanh^{2}(x)\right)v^{2}\). The projection of \(\phi^{(1)}(x)\) onto the spectral modes gives
\[\langle\phi^{(1)}(x),\eta_{s}(x)\rangle = \frac{\pi}{4\sqrt{6}}v^{2}\,, \tag{5.15}\] \[\langle\phi^{(1)}(x),\eta_{q}(x)\rangle = -\frac{i\pi q^{2}\,\mbox{csch}(\pi q/2)}{4\sqrt{(q^{2}+1)(q^{2}+4)}}v^{2}\,. \tag{5.16}\]
Note that there is an extra \(2\pi\) factor due to the normalization of the scattering modes. This is easily interpreted: this effective model already describes relativistic effects at this order. The solution (5.13) can be interpreted as a first order Lorentz boost, i.e., in terms of the coordinates of our model the Lorentz boosted kink takes the form
\[\phi_{B}(x,t) = \tanh(x-vt)+\phi^{(1)}(x-vt)+{\cal O}(v^{4})= \tag{5.17}\] \[= \tanh(x-vt)+v^{2}\frac{\pi}{4\sqrt{6}}\eta_{s}(x-vt)-v^{2}\int dq\frac{iq^{2}\,\mbox{csch}(\pi q/2)}{8\sqrt{(q^{2}+1)(q^{2}+4)}}\eta_{q}(x-vt),\]
for \(x_{0}=0\). If one is interested in the description of relativistic processes, this approach does not seem appropriate since, even to describe a simple boosted solution, one needs to excite a large number of modes. Still, we would like to emphasise that, already at quadratic order, the "non-relativistic" CCM approach describes relativistic effects, although these effects require, in general, the excitation of scattering modes. There is a simple ansatz that describes the exact Lorentz boost with only two degrees of freedom given by the following expression
\[\phi(x,t)=\phi_{K}\left(\frac{x-a(t)}{\delta(t)}\right)\,. \tag{5.18}\]
This type of ansatz was first considered in [26] (for recent discussions see [7]). However, after the introduction of the scattering modes, it does not seem possible to obtain analytical results, so we will not pursue this line in this section. Nevertheless, as we will see in Sec. 6, some small generalizations of (5.18) may be used to describe effectively dissipative degrees of freedom.
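As a consistency check of the boost discussion, the projections (5.15)-(5.16) can be reproduced by direct numerical quadrature of \(\phi^{(1)}(x)\) against the spectral modes; a minimal Python sketch follows, where the velocity and the sample momentum are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

v = 0.1

def phi1(x):
    # first Lorentz correction phi^(1)(x) defined after (5.14)
    return 0.5*x/np.cosh(x)**2*v**2

def eta_s(x):
    return np.sqrt(1.5)*np.sinh(x)/np.cosh(x)**2

def eta_q(x, q):
    return ((3*np.tanh(x)**2 - q**2 - 1 - 3j*q*np.tanh(x))
            /np.sqrt((q**2 + 1)*(q**2 + 4))*np.exp(1j*q*x))

# projection onto the shape mode vs the closed form (5.15)
print(quad(lambda x: phi1(x)*eta_s(x), -30, 30)[0], np.pi/(4*np.sqrt(6))*v**2)

# projection onto a scattering mode vs the (purely imaginary) closed form (5.16)
q0 = 1.2
num = quad(lambda x: (phi1(x)*eta_q(x, q0)).imag, -30, 30)[0]
ref = -np.pi*q0**2/np.sinh(np.pi*q0/2)/(4*np.sqrt((q0**2 + 1)*(q0**2 + 4)))*v**2
print(num, ref)
```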
In our second experiment we will study the excitation of the translational mode by radiation. Notice that the first and last term of (5.11) resemble the Newton equation, where we can identify \(m_{K}=4/3\). However, for the IC representing a kink at rest illuminated by linear radiation (4.16), the rate of change of the total momentum of radiation given by the last term in (5.11) vanishes, so the kink remains at rest. As a matter of fact, it is well-known that the \(\phi^{4}\) kink is transparent to linear radiation as it was emphasised in [25], and that the negative radiation pressure appears at fourth order in perturbation theory. Nevertheless, the shape mode may act as an intermediary between the zero mode and radiation. Namely, a non-trivial interplay between the modes could be as follows: the radiation triggers the shape mode due to the term proportional to \(f_{sq}(q,q^{\prime})\) in (5.10). Once \(c_{s}(t)\) is excited, there is a source for \(a(t)\) in (5.11) given by the term proportional to \(f_{as}(q)\). The excitation of \(a(t)\) is roughly of order \(A_{0}A_{q}\), where \(A_{0}\) is the amplitude of the shape
mode and \(A_{q}\) is the amplitude of the \(q-\)scattering mode. Since from (4.17) \(A_{0}\) is of order \(A_{q}^{2}\), the excitation of the zero mode due to this mechanism appears at order \({\cal O}(A_{q}^{3})\). However, contrary to the negative radiation pressure effect, there is no net momentum transfer at this order and the kink simply oscillates about its rest position. This can be seen easily from (5.11) disregarding the last term and keeping terms of \({\cal O}(A_{q}^{3})\) order. In Fig. 5 we show the maximum amplitude of the zero mode as a function of the incident radiation frequency. This shows that for small \(q\) the term proportional to \(f_{a}\) is indeed subleading.
For larger times, higher order corrections to radiation start to play a role and the last term in (5.11) cannot be neglected anymore. This term transfers a net momentum to the kink [25] and it is pulled back in the opposite direction of the incoming radiation.
So far, we have discussed only the single-kink sector. Of course, one expects that the combination of translational and scattering modes should play an important role in the \(K\bar{K}\) sector, allowing for energy dissipation in scattering processes or even describing \(K\bar{K}\) annihilation. This is a more ambitious goal that we leave for future research. However, during a kink-antikink scattering process, when the solitons completely overlap, an intermediate state that resembles the profile of an oscillon is formed. This suggests that the study of oscillons, with appropriate initial conditions, may provide useful information about these violent processes. This is the point of view we adopt in the following section.
Figure 5: Maximum displacement of the kink due to radiation of different frequencies and amplitude \(A_{q}=0.05\). The solid line represents the complete effective model, the dashed line represents the simulation when only the \(f_{as}\) source is considered, and the dotted-dashed line represents the simulation when only the source \(f_{a}\) is taken into account. We have taken into account \(n=30\) equidistant scattering modes in the interval \(q\in[-5,5]\). The time of simulation is \(t=t_{c}\).
## 6 Effective model for the radiating oscillon
As we have shown, the scattering modes play an important role in the dynamics of the single-kink sector. In this section we will illustrate the importance of such modes for a non-topological soliton, the oscillon of the 1-dimensional \(\phi^{4}\) model. In particular, we will describe the decay of oscillons below the critical amplitude, the possibility of internal modes and the \(K\bar{K}\) formation. Since oscillons have been observed starting from rather generic initial data, we decide to follow first [12; 27], and take the following ansatz
\[\Phi_{\rm o}(x;a)=-1+a\,{\rm sech}(x/R), \tag{6.1}\]
where \(a\) is identified with the amplitude of the oscillon and \(R\) accounts for its size. Later, in this section, we will change to a Gaussian profile used in [28]. The effective Lagrangian associated to (6.1) assuming that \(a\) depends on time takes the following form
\[{\cal L}^{o}=R\left(\dot{a}^{2}(t)-\frac{1}{3}(12+\frac{1}{R^{2}})a^{2}(t)+\pi a^{3}(t)-\frac{2}{3}a^{4}(t)\right)\,. \tag{6.2}\]
This is the Lagrangian of an anharmonic oscillator of frequency \(\omega_{o}=\sqrt{\frac{1}{3}(12+\frac{1}{R^{2}})}\). For small amplitudes, the frequency of \(a(t)\) is above the threshold frequency \(\omega_{t}=2\). As a result, \(a(t)\) couples directly to the continuum and collapses into radiation. As a consequence, for \(a\) small enough, the initial data (6.1) does not evolve into an oscillon. However, for \(a(t)\) large enough, the non-linearities in (6.2) decrease the oscillator frequency, avoiding the direct coupling to radiation. Therefore, the oscillon couples only weakly to radiation and its amplitude decreases very slowly. The details of this coupling are, of course, not described in (6.2), and one needs to add dissipative degrees of freedom. This can be done as in the previous sections by considering an ansatz of the form
\[\Phi_{\rm o,\ rad}(x;a,c_{q})=-1+a(t)\,{\rm sech}(x/R)+\int_{\mathbb{R}}dqc_{q} (t)\eta_{q}(x/R). \tag{6.3}\]
It is enough to consider the truncated Lagrangian at second order in \(c_{q}(t)\) since we are interested in the description of the oscillon coupled to the slow-amplitude emitted radiation. However, as we have mentioned, it is fundamental to consider the action at all orders in \(a(t)\), since the existence of the oscillon is linked to the non-linear structure of the Lagrangian. Proceeding as in the previous sections we arrive at
\[{\cal L}^{o}_{r} = \pi\left(\int_{\mathbb{R}}dq\,\dot{c}_{q}(t)\dot{c}_{-q}(t)-\int _{\mathbb{R}}dq\,w_{q,R}^{2}\,c_{q}(t)c_{-q}(t)\right)+\dot{a}^{2}(t)-\omega_ {o}^{2}a^{2}(t)+\pi a^{3}(t)-\frac{2}{3}a^{4}(t) \tag{6.4}\] \[+\int_{\mathbb{R}^{2}}dqdq^{\prime}\,f_{1}(q,q^{\prime},R)\,c_{q }(t)c_{q^{\prime}}(t)+\dot{a}(t)\int_{\mathbb{R}}dq\,f_{2}(q)\dot{c}_{q}(t)+ a(t)\int_{\mathbb{R}}dq\,f_{3}(q)c_{q}(t)\] \[+a(t)\int_{\mathbb{R}^{2}}dqdq^{\prime}\,f_{4}(q,q^{\prime})c_{q }(t)c_{q^{\prime}}(t)+a^{2}(t)\int_{\mathbb{R}}dqdq^{\prime}\,f_{5}(q,q^{ \prime})c_{q}(t)c_{q^{\prime}}(t)\] \[+a^{3}(t)\int_{\mathbb{R}}dq\,f_{6}(q)c_{q}(t),\]
with \(w_{q,R}=\sqrt{\left(q/R\right)^{2}+4}\) and where
\[f_{1}(q,q^{\prime},R) = -\frac{3\pi}{5R^{2}}\frac{4q+4q^{\prime}+5q^{3}+5q^{\prime 3}+q^{5}+q^{\prime 5}}{\sqrt{\left(q^{2}+1\right)\left(q^{2}+4\right)}\sqrt{\left(q^{\prime 2}+1\right)\left(q^{\prime 2}+4\right)}}\operatorname{csch}\left(\frac{\pi}{2}(q+q^{\prime})\right), \tag{6.5}\] \[f_{2}(q) = \frac{\pi}{2}\frac{\sqrt{q^{2}+1}}{\sqrt{q^{2}+4}}\operatorname{sech}\left(\frac{\pi q}{2}\right), \tag{6.6}\] \[f_{3}(q,R) = \pi\frac{\sqrt{q^{2}+1}}{\sqrt{q^{2}+4}}\left(\frac{q^{2}+3}{4R^{2}}-2\right)\operatorname{sech}\left(\frac{\pi q}{2}\right), \tag{6.7}\] \[f_{4}(q,q^{\prime}) = \frac{3\pi}{4}\frac{11+14q^{2}+14q^{\prime 2}+2q^{2}q^{\prime 2}+3q^{4}+3q^{\prime 4}}{\sqrt{\left(q^{2}+1\right)\left(q^{2}+4\right)}\sqrt{\left(q^{\prime 2}+1\right)\left(q^{\prime 2}+4\right)}}\operatorname{sech}\left(\frac{\pi}{2}(q+q^{\prime})\right), \tag{6.8}\] \[f_{5}(q,q^{\prime}) = -\frac{3\pi}{5}\frac{4q+4q^{\prime}+5q^{3}+5q^{\prime 3}+q^{5}+q^{\prime 5}}{\sqrt{\left(q^{2}+1\right)\left(q^{2}+4\right)}\sqrt{\left(q^{\prime 2}+1\right)\left(q^{\prime 2}+4\right)}}\operatorname{csch}\left(\frac{\pi}{2}(q+q^{\prime})\right), \tag{6.9}\] \[f_{6}(q) = \frac{\pi}{4}\frac{(q^{2}+1)^{3/2}}{\sqrt{q^{2}+4}}\operatorname{sech}\left(\frac{\pi q}{2}\right). \tag{6.10}\]
Notice that we have omitted a global factor \(R\) in (6.4) since it does not have any effect on the equations of motion. In Fig. 6 we illustrate the evolution of the oscillon profile at \(x=0\) for different initial amplitudes and sizes.
Figure 6: Comparison between the effective model (dashed line) (6.4) and field theory (solid line). The scattering modes have been taken in the interval \(q\in[-5,5]\).
In the upper panel we show two configurations with small initial amplitudes. This results in a fast decay of the initial configuration into radiation. It is worth mentioning that even with a large number of radiation modes (\(n>20\)) the amplitude in the effective model decays more slowly than in the field theoretical simulation. This suggests that the dissipative mechanism provided by the set of scattering modes is actually not very efficient. In the lower panel we show a genuine oscillon. The numerical simulation reveals that the oscillon hosts an internal non-dissipative mode, as indicated by the amplitude modulation during the time evolution. This mode can be related to the variation of the oscillon size. Hence, let us consider a small variation of the oscillon width as follows
\[\Phi_{\rm o}(x;a,\delta)=-1+a\,{\rm sech}\left(\frac{x}{R+\delta}\right). \tag{6.11}\]
If \(\delta\ll 1\) we may expand about \(\delta=0\) up to first order
\[\Phi_{\rm o}(x;a,\delta)=-1+a\,{\rm sech}(x/R)+\frac{a\delta}{R^{2}}\,x\,\,{ \rm sech}(x/R)\tanh(x/R)\,. \tag{6.12}\]
The term added to the unperturbed oscillon corresponds to the so-called Derrick mode. This correction should encode the possible changes of the oscillon size and, hopefully, reproduce the behaviour expected from the full numerical result. This simple choice has a problem. At \(a=0\), the ansatz (6.11) does not depend on \(\delta\). This implies that the moduli metric associated to \((a,\delta)\) is not well-defined at this point (see for example [22] for a discussion about the null vector problem). In order to cure this issue we may perform a simple change of coordinates \(\delta\to\delta/a\). Finally, we get
\[\Phi_{\rm o,\,s}(x;a,\delta)=-1+a\,{\rm sech}(x/R)+\frac{\delta}{R^{2}}\,x\, \,{\rm sech}(x/R)\tanh(x/R)\,. \tag{6.13}\]
Treating \(a\) and \(\delta\) as collective coordinates (recall that \(R\) remains fixed) we get the following effective Lagrangian
\[{\cal L}^{o}_{s} = R\dot{a}(t)^{2}-\omega_{o}^{2}a(t)^{2}+\frac{1}{R}\left(\frac{ \pi^{2}}{36}+\frac{1}{3}\right)\dot{\delta}(t)^{2}-\frac{1}{R}\left(\pi^{2} \left(\frac{1}{9}+\frac{7}{180R^{2}}\right)+\frac{4}{3}\right)\delta(t)^{2}+ \pi Ra(t)^{3} \tag{6.14}\] \[-\frac{2}{3}Ra(t)^{4}+\frac{\pi}{R^{2}}\left(\frac{11\pi^{2}}{80 }-1\right)\delta(t)^{3}-\frac{1}{R^{3}}\left(\frac{\pi^{4}}{600}+\frac{\pi^{ 2}}{90}-\frac{2}{15}\right)\delta(t)^{4}+\dot{a}(t)\dot{\delta}(t)\] \[+\left(\frac{1}{3R^{2}}-4\right)a(t)\delta(t)+\frac{\pi}{R}\left( \frac{3\pi^{2}}{16}-1\right)a(t)\delta(t)^{2}-\frac{1}{R^{2}}\left(\frac{7 \pi^{2}}{90}-\frac{1}{3}\right)a(t)\delta(t)^{3}\] \[+\pi a(t)^{2}\delta(t)-\frac{2}{3}a(t)^{3}\delta(t)-\frac{\pi^{2} }{15R}a(t)^{2}\delta(t)^{2}.\]
The associated equations of motion are collected below
\[\ddot{a}(t) + \frac{1}{3}\left(\frac{1}{R^{2}}+12\right)a(t)-\frac{3}{2}\pi a( t)^{2}+\frac{4}{3}a(t)^{3}+\frac{1}{2R}\ddot{\delta}(t)+\frac{\left(12R^{2}-1 \right)}{6R^{3}}\delta(t) \tag{6.15}\] \[- \frac{\pi\left(3\pi^{2}-16\right)}{32R^{2}}\delta(t)^{2}+\frac{ \left(7\pi^{2}-30\right)}{180R^{3}}\delta(t)^{3}-\frac{\pi}{R}a(t)\delta(t)+ \frac{\pi^{2}}{15R^{2}}a(t)\delta(t)^{2}\] \[+ \frac{1}{R}a(t)^{2}\delta(t)=0\,,\]
and
\[\ddot{\delta}(t) + \frac{\left(\pi^{2}\left(20R^{2}+7\right)+240R^{2}\right)}{5\left( \pi^{2}+12\right)R^{2}}\delta(t)-\frac{27\pi\left(11\pi^{2}-80\right)}{40\left( \pi^{2}+12\right)R}\delta(t)^{2}+\frac{\left(3\pi^{4}+20\pi^{2}-240\right)}{25 \left(\pi^{2}+12\right)R^{2}}\delta(t)^{3} \tag{6.16}\] \[+ \frac{18R}{\pi^{2}+12}\ddot{a}(t)+\frac{\left(72R^{2}-6\right)}{ \left(\pi^{2}+12\right)R}a(t)-\frac{18\pi R}{\pi^{2}+12}a(t)^{2}+\frac{12R}{ \pi^{2}+12}a(t)^{3}-\frac{9\pi\left(3\pi^{2}-16\right)}{4\left(\pi^{2}+12 \right)}a(t)\delta(t)\] \[+ \frac{3\left(7\pi^{2}-30\right)}{5\left(\pi^{2}+12\right)R}a(t) \delta(t)^{2}+\frac{12\pi^{2}}{5\pi^{2}+60}a(t)^{2}\delta(t)=0\,.\]
Notice that the frequency of the width perturbation \(\delta(t)\) is above the threshold. At first glance it may seem that any excitation of \(\delta(t)\) should be dissipated very fast through the coupling with radiation. However, as happens with the oscillon itself, if \(\delta(t)\) is excited at sufficiently high amplitudes, the non-linear terms may decrease its frequency below \(\omega_{t}\), so that it becomes a confined mode. Let us now investigate the accuracy of our new effective model against field theory for the initial conditions of Fig. 6 (d). The corresponding comparison is depicted in Fig. 7.
Note that the prediction of the effective model (6.14) and field theory agree with great precision. Therefore, we can confirm that the previous quasi-periodic behaviour is related to the existence of an internal state bounded to the oscillon. Despite the accuracy of the model we can go beyond and include the radiation modes as well, since these degrees of freedom may be significant for certain field configurations. However, as shown in Fig. 6, the addition of genuine radiation modes to the effective model does not seem to dissipate efficiently the energy. If one is interested in the dynamics close to the oscillon core and for not very large times, one could add instead modes that resemble the scattering modes, i.e., with spatial frequencies above the mass threshold, but confined to the
Figure 7: Comparison between field theory (solid line) and the effective model with \(a_{0}=0.5\), \(\delta_{0}=0\), and \(R=2\) (dashed line).
oscillon. Following this strategy we consider the following ansatz
\[\Phi_{\rm o,\ rad}(x;a,\delta)=-1+a(t)e^{-\left(\frac{x}{R}\right)^{2}}+\sum_{k=1}^{n}\delta_{k}(t)h_{k}(x/R)\,. \tag{6.17}\]
For simplicity, we have changed the initial oscillon profile to a Gaussian profile. Although (6.1) seems to model better the oscillon tails, the profile (6.17) greatly simplifies the calculations. We choose the modes as follows
\[h_{k}(x/R)=\frac{1}{k!}\frac{d^{k}}{dR^{k}}e^{-\left(\frac{x}{R}\right)^{2}}. \tag{6.18}\]
The first one is, in fact, the Derrick mode associated to the new profile. The precise choice of the rest of the modes is not very relevant as long as they have increasing spatial frequencies in the oscillon core. The effective Lagrangian can be written symbolically in a very simple way
\[\mathcal{L}_{r}^{o}=\sum_{k,l=0}^{n}m_{k,l}\dot{\xi}_{k}(t)\dot{ \xi}_{l}(t)-\sum_{k,l=0}^{n}\omega_{k,l}^{2}\xi_{k}(t)\xi_{l}(t)-V(\xi_{k}(t))\,, \tag{6.19}\]
where \(\xi_{0}(t)=a(t)\) and \(\xi_{k}(t)=\delta_{k}(t)\) for \(k=1,...n\), the matrices \(m_{k,l}\) and \(\omega_{k,l}^{2}\) are constant, and \(V(\xi_{k}(t))\) is a potential that couples non-linearly all the modes. Using standard techniques we can diagonalize simultaneously \(m_{k,l}\) and \(\omega_{k,l}\) and rewrite (6.19) in normal coordinates \(\eta_{k}(t)\) as follows
\[\mathcal{L}_{\rm r}^{\rm o}=\sum_{k=0}^{n}m_{k}\dot{\eta}_{k}^{2}(t)-\sum_{k=0}^{n}\omega_{k}^{2}\eta_{k}^{2}(t)-V(\eta_{k}(t))\,. \tag{6.20}\]
Therefore, in our approach, the oscillon is described by \(n+1\) anharmonic oscillators of proper frequencies \(\omega_{k}\) coupled non-linearly by the potential \(V(\eta_{k}(t))\). The system (6.20) is conservative, therefore it cannot dissipate energy. However, the modes \(\delta_{k}(t)\) act effectively as dissipative degrees of freedom, storing energy from the mode amplitude \(a(t)\). The energy transfer mechanism works actually very efficiently as we will show. As a consequence, for not very large times, the model is able to describe how the initial data decays into radiation.
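To illustrate the structure of (6.19), the quadratic (constant) mass matrix can be generated symbolically from the ansatz (6.17)-(6.18). The small SymPy sketch below keeps only a few modes and assumes the convention \(m_{k,l}=\tfrac{1}{2}\int\mathrm{mode}_{k}\,\mathrm{mode}_{l}\,dx\) for the kinetic term.

```python
import sympy as sp

x, R = sp.symbols('x R', positive=True)
n = 3                                   # number of confined higher-frequency modes kept

gauss = sp.exp(-(x/R)**2)
modes = [gauss] + [sp.diff(gauss, R, k)/sp.factorial(k) for k in range(1, n + 1)]

# constant mass matrix of the quadratic part of (6.19)
m = sp.Matrix(n + 1, n + 1,
              lambda k, l: sp.simplify(sp.integrate(modes[k]*modes[l], (x, -sp.oo, sp.oo))/2))
sp.pprint(m)
```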
In our following experiment we study the decay of an initial configuration of the form
\[\Phi_{\rm o}(x;a_{0})=-1+a_{0}e^{-\left(\frac{x}{R}\right)^{2}}\,. \tag{6.21}\]
Below a critical value of \(a_{0}\), the initial configuration given by (6.21) decays into radiation. This value can be taken as the minimal \(a_{0}\) such that the proper frequency of a(t) coincides with the mass threshold frequency. By expanding the frequency and imposing that the secular terms vanish, we can compute the frequency correction due to higher order terms. This gives
\[\omega=\omega_{0}+\frac{a_{0}^{2}\left(3\sqrt{2}\omega_{0}^{2}-80 \right)}{8\omega_{0}^{3}}+\mathcal{O}(a_{0}^{4}),\ \omega_{0}=\sqrt{4+1/R^{2}}\,. \tag{6.22}\]
From this expression we can compute the critical amplitude value for the oscillon formation
\[a_{0}^{\rm critical}(R)=\frac{\sqrt{\frac{2}{191}\left(20+3\sqrt{2}\right)}}{ R}\,. \tag{6.23}\]
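A quick numerical cross-check of this estimate consists in solving \(\omega(a_{0})=\omega_{t}=2\) directly from (6.22) and comparing with the closed form (6.23); a minimal Python sketch follows, with illustrative values of \(R\).

```python
import numpy as np
from scipy.optimize import brentq

def omega(a0, R):
    # amplitude-dependent frequency (6.22)
    w0 = np.sqrt(4.0 + 1.0/R**2)
    return w0 + a0**2*(3*np.sqrt(2)*w0**2 - 80)/(8*w0**3)

def a_crit(R):
    # closed-form critical amplitude (6.23)
    return np.sqrt(2.0/191.0*(20.0 + 3.0*np.sqrt(2.0)))/R

for R in (2.0, 4.0, 8.0):
    a_num = brentq(lambda a: omega(a, R) - 2.0, 1e-6, 2.0)
    print(f"R = {R:3.1f}   root of omega(a0) = 2: {a_num:.4f}   formula (6.23): {a_crit(R):.4f}")
```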
One situation with \(a_{0}<a_{0}^{\rm critical}(R)\) is illustrated in Fig. 8(a). The effective model mimics the decay accurately for \(t\lesssim 60\). For \(t>60\) the energy stored in the internal modes is transferred back to the amplitude. In Fig. 8(b) we show a genuine oscillon with an internal mode excited.
Interestingly, the effective model (6.20) also describes the creation of a \(K\bar{K}\) pair from the oscillon profile. In order to understand this phenomenon it is enough to analyse the effective action for \(a(t)\). Similarly to (6.2), the effective Lagrangian for \(a(t)\) with the profile (6.17) is given by
\[\mathcal{L}^{o}=\sqrt{\frac{\pi}{2}}R\left(\frac{1}{2}\dot{a}(t)^{2}-\frac{ \left(4R^{2}+1\right)a(t)^{2}}{2R^{2}}-\frac{a(t)^{4}}{2\sqrt{2}}+2\sqrt{\frac {2}{3}}a(t)^{3}\right). \tag{6.24}\]
The potential for \(a(t)\) is depicted in Fig. 9. For \(R\gtrsim 2.6\) the potential develops a new local minimum around \(a\approx 2\). For large enough initial amplitudes, \(a(t)\) is able to climb the potential barrier and settle in the upper minimum. If internal modes are absent and \(a(t)\) possesses sufficient energy to overcome the potential barrier, energy conservation forces it back down towards the minimum at zero. But when the internal modes are included, they are able to store the excess energy, allowing \(a(t)\) to oscillate around the upper minimum. This situation is identified with the formation of a \(K\bar{K}\) pair, which leaves the positive vacuum \(\phi=1\) around \(x=0\).
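The barrier-and-minimum structure invoked here is easy to make quantitative. Reading off the potential \(V(a)\) as the non-kinetic part of (6.24) with reversed sign, its stationary points at \(R=4\) (the value used in Fig. 9) can be located explicitly with a short sketch:

```python
import numpy as np

R = 4.0
pref = np.sqrt(np.pi/2)*R

def V(a):
    # potential read off from (6.24) (non-kinetic part with reversed sign)
    return pref*((4*R**2 + 1)*a**2/(2*R**2) + a**4/(2*np.sqrt(2)) - 2*np.sqrt(2.0/3.0)*a**3)

# stationary points: a = 0 and the roots of sqrt(2) a^2 - 2 sqrt(6) a + (4 + 1/R^2) = 0
disc = 24.0 - 4*np.sqrt(2)*(4.0 + 1.0/R**2)
a_bar, a_min = (2*np.sqrt(6) + np.array([-1.0, 1.0])*np.sqrt(disc))/(2*np.sqrt(2))
print("barrier top at a =", a_bar, "  upper minimum at a =", a_min)
print("V(0) = 0,  V(barrier) =", V(a_bar), ",  V(upper minimum) =", V(a_min))
```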
Figure 8: \(\phi(0,t)\) for the initial data (6.21) for different values of the initial amplitude in full numerics (solid line) and in the effective model from (6.20) (dashed line).
In Fig. 10 we show the value of the field at the origin \(\phi(0,t)\) for different initial amplitudes \(a_{0}\) and \(R=4\). The regions where the field takes a value close to \(1\) correspond to the \(K\bar{K}\) creation. In order to produce a pair the system needs to have an energy \(E>2M_{K}\). But above this value the pair is not always produced, leading to oscillon regions whose energy is dissipated into radiation. This of course resembles the characteristic fractal pattern in the \(K\bar{K}\) scattering processes. In the intermediate stages right after the scattering, the field profile looks like a bump above the \(-1\) vacuum, resembling the profile of an oscillon. This is the connection between oscillons and \(K\bar{K}\) scattering we have mentioned before.
Figure 10: Comparison between the effective model (6.20) and field theory for the initial data (6.21). The color palette indicates the value of the field \(\phi\) at the origin, \(\phi(0,t)\).
Figure 9: Effective potential for \(a(t)\) at \(R=4\).
In order to reproduce the fractal pattern visible at \(a_{0}<2\) one needs to add modes of higher frequencies. This suggests that the scattering modes are essential to explain this structure. Similar results are obtained for values \(2\lesssim R\lesssim 6\). For \(R<2.6\), the effective potential has only a local minimum at \(a=0\), and therefore the field cannot sit in the upper vacuum describing the creation of the \(K\bar{K}\) pair. On the other hand, for very large values of \(R\), our modes decrease their spatial frequency, and the model as given by the ansatz (6.17) only reproduces the pair creation qualitatively. This can be solved easily by adding higher frequency modes.
We would like to emphasise that in our approach, the moduli space metric is trivial (i.e. constant), and the relevant dynamics is completely encoded in the specific coupling between modes given by the potential.
To close this section we are going to illustrate the rich phenomenology associated with the internal structure of the oscillons. Apart from the internal mode described by the Derrick mode illustrated in Figs. 7 and 8(b), it is possible to have oscillations with more than one internal mode excited. This gives rise to complicated long-lived oscillatory patterns which are very sensitive to the initial conditions.
In Fig. 11 we show an interesting pattern. This behaviour is characterized by the excitation of two internal modes. From the point of view of the effective model, the amplitudes of these modes are big enough to decrease their frequencies below the mass threshold. As a consequence, they remain excited for very long times (numerically, at \(t\approx 7000\) in our time units, the oscillon ceases to exist).
The results of this section show the relevance of the radiative modes in the oscillon dynamics. Although for certain initial data the oscillon evolution is well-approximated by the amplitude degree of freedom plus one internal mode, in general, one needs higher frequency modes (which in
Figure 11: \(\phi(0,t)\) with initial data given by (6.1) with \(R=10\) and \(a_{0}=0.23\).
our approach are confined to the oscillon core) to describe correctly the dynamics. Interestingly, a reasonably simple model (a set of coupled anharmonic oscillators) is able to describe the main features of the oscillon dynamics, including the decay into radiation and the pair \(K\bar{K}\) creation.
## 7 Summary and Conclusions
In this paper, we have introduced dissipative degrees of freedom in the moduli space approximation of the \(\phi^{4}\) model. We have computed, within this approach, the radiation emitted at infinity by a wobbling kink at the lowest order in the shape mode amplitude. Our results are in complete agreement with the well-known calculations from [23].
We begin our investigation by studying in detail the interaction of radiation with the vibrational mode. In terms of the effective model, there are two leading mechanisms that explain the energy transfer between them: a Mathieu instability and a resonance. These mechanisms show that the strongest coupling between radiation and the shape mode occurs for \(\omega_{q}=2\omega_{s}\). We consider two different experiments: in the first one we analyse a kink with its shape mode initially excited and discuss the main frequency of the radiation emitted. In the second one, we irradiate a kink with linear radiation and study how the shape mode is triggered. Notably, we found an analytic expression for the excitation of the shape mode for frequencies away from the unstable region.
We have also studied the role of the translational mode. Despite the fact that the standard CCM approach is non-relativistic, when terms up to second order in \(\dot{a}(t)\) are kept it is able to reproduce the Lorentz contraction of the kink at that order. The vibrational degree of freedom alone is not enough to reproduce this relativistic effect correctly; however, the inclusion of scattering modes allows for an exact Lorentz contraction at second order. This suggests that an effective model able to describe dissipative effects should be a nice candidate to describe detailed features of non-linear processes such as kink scattering. Of course, the model loses its usefulness if all radiation modes are included, since basically one recovers field theory. However, a judicious choice of modes describing effectively the scattering modes could shed more light on the understanding of many non-linear processes.
We devoted the last section to the derivation of an effective model for the \(\phi^{4}\) oscillon. Although its natural frequency is above the mass threshold limit, the non-linear terms decrease the frequency below such a limit, avoiding the direct coupling with radiation. The numerical simulations indicate that the oscillon can host a discrete mode responsible for modifications of the width. We implemented this behaviour through the inclusion of the Derrick mode associated to the change of size of the oscillon. This new proposal gives a good agreement with the full numerical simulations. We have added higher frequency modes confined to the oscillon core. They represent scattering
modes which may store energy for a certain time, acting effectively as dissipative degrees of freedom. The effective equations are a system of coupled anharmonic oscillators with a trivial moduli space metric. Interestingly, once these degrees of freedom are added, this simple effective model is able to describe the \(K\bar{K}\) creation from initial oscillon data.
Our results suggest that the radiation modes play a crucial role in the study of soliton dynamics. They are of course necessary to explain the decay of non-topological solutions such as the oscillon, or of long-lived internal modes such as the shape mode. But they also seem to be fundamental to disentangle the complicated patterns in soliton scattering processes. In addition, the results presented here can be easily generalised to other models. The study of the internal structure of oscillons within this approach in different models deserves further research and is therefore left for a future investigation.
###### Acknowledgements.
J.Q. thanks J.J. Blanco-Pillado and A. Wereszczynski for useful discussions. This research was supported by Spanish MCIN with funding from European Union NextGenerationEU (PRTRC17.I1) and Consejeria de Educacion from JCyL through QCAYLE project, as well as MCIN project PID2020-113406GB-I00. SNO's research is carried out thanks to a pre-doctoral contract financed by the Junta de Castilla y Leon through the European Social Fund Plus (ESF+).
## Appendix A Radiation from the shape mode
In this appendix we provide some details of the calculation of the radiation emitted by a wobbling kink with its shape mode excited. We start with (3.10) with the scattering amplitudes given by (3.9) and split it as follows
\[R(x,t)=R_{1}(x)+R_{2}(x,t),\] (A.1)
where
\[R_{1}(x) = i\int_{\mathbb{R}}\,dq\,R_{0}(q)\,\left(4\omega_{s}^{2}-\omega_ {q}^{2}\right)\eta_{q}(x)\,,\] (A.2) \[R_{2}(x,t) = -i\int_{\mathbb{R}}\,dq\,R_{0}(q)\,\left(\omega_{q}^{2}\cos(2 \omega_{s}t)+(4\omega_{s}^{2}-2\omega_{q}^{2})\cos(\omega_{q}t)\right)\eta_{q} (x)\,,\] (A.3)
and
\[R_{0}(q)=\frac{3A_{0}^{2}}{64}\frac{q^{2}\left(q^{2}-2\right)}{\sqrt{q^{2}+1} \,\omega_{q}(4\omega_{s}^{2}-\omega_{q}^{2})\sinh(\pi q/2)}\,.\] (A.4)
### Evaluation of the function \(R_{1}(x)\)
Let us begin with (A.2). First, notice that we can split the integrand into an odd and an even contribution. Due to the symmetric interval of integration, only the even contribution survives, which results in
\[R_{1}(x)=-2\int_{0}^{\infty}\,dq\,R_{0}(q)\,(4\omega_{s}^{2}-\omega_{q}^{2})\, \mbox{Im}(\eta_{q}(x)),\] (A.5)
where
\[\mbox{Im}(\eta_{q}(x))=-(q^{2}+1)\sin qx-3\,q\tanh x\cos qx+3\tanh^{2}x\sin qx\,.\] (A.6)
Now we will split the integral as
\[R_{1}(x) = \frac{3A_{0}^{2}}{32}\bigg{(}\int_{0}^{\infty}dq\,\frac{q^{2}\,( q^{2}-2)\sin qx}{(q^{2}+4)\sinh(\pi q/2)}+3\tanh x\int_{0}^{\infty}dq\,\frac{q^{3} \,(q^{2}-2)\cos qx}{(q^{2}+4)(q^{2}+1)\sinh(\pi q/2)}\] (A.7) \[-3\tanh^{2}x\int_{0}^{\infty}dq\,\frac{q^{2}\,(q^{2}-2)\sin qx}{ (q^{2}+4)(q^{2}+1)\sinh(\pi q/2)}\bigg{)}\,,\]
and we will solve each term separately by transforming the problem into the solution of an inhomogeneous differential equation. To simplify the notation, let us write
\[R_{1}(x) = \frac{3A_{0}^{2}}{32}\bigg{(}\alpha(x)+3\tanh x\,\beta(x)-3\tanh^ {2}x\,\gamma(x)\bigg{)}\,.\] (A.8)
The inhomogeneous differential equation satisfied by \(\alpha(x)\) is
\[\alpha^{\prime\prime}(x)-4\alpha(x)=-\int_{0}^{\infty}dq\,\frac{q^{2}(q^{2}-2 )\sin qx}{\sinh(\pi q/2)}=-6\,(3-\cosh 2x)\tanh x\,\mbox{sech}^{4}\,x.\] (A.9)
Given the obvious condition \(\alpha(0)=0\), and to avoid divergences as \(|x|\to\infty\), the solution to this differential equation turns out to be
\[\alpha(x)=6e^{2x}\log\left(1+e^{-2x}\right)-6e^{-2x}\log\left(1+e^{2x}\right) -2\tanh x\left(3-\mbox{sech}^{2}\,x\right)\,.\] (A.10)
Regarding the function \(\beta(x)\), it must be a solution of the fourth-order differential equation
\[\beta^{(4)}(x)-5\beta^{\prime\prime}(x)+4\beta(x)=\int_{0}^{\infty}dq\,\frac{ q^{3}(q^{2}-2)\cos qx}{\sinh(\pi q/2)}=3\,(21-18\cosh 2x+\cosh 4x)\,\mbox{sech}^{6}\,x,\] (A.11)
with conditions \(\beta^{\prime}(0)=0\) and \(\beta^{(3)}(0)=0\). In order to avoid divergences at \(|x|\to\infty\), the solution must be
\[\beta(x) = -4e^{2x}\log(1+e^{-2x})-4e^{-2x}\log(1+e^{2x})+5-\frac{4}{(1+e^{ 2x})^{2}}+\frac{\pi e^{x}}{2}-2\tanh x\] (A.12) \[-2\sinh x\arctan e^{x}.\]
Finally, the \(\gamma(x)\) term of \(R_{1}(x)\) will be the solution of the differential equation
\[\gamma^{(4)}(x)-5\gamma^{\prime\prime}(x)+4\gamma(x)=\int_{0}^{\infty}dq\, \frac{q^{2}(q^{2}-2)}{\sinh(\pi q/2)}\sin qx=6\,(3-\cosh 2x)\tanh x\,\mbox{ sech}^{4}\,x,\] (A.13)
with conditions \(\gamma(0)=0\) and \(\gamma^{\prime\prime}(0)=0\). The solution that is well behaved as \(|x|\to\infty\) is
\[\gamma(x) = 2e^{-2x}\log(1+e^{2x})-2e^{2x}\log(1+e^{-2x})-e^{-2x}(1+\tanh x)+1 +\frac{\pi e^{x}}{2}\] (A.14) \[-2\cosh x\arctan e^{x}.\]
Substituting (A.10), (A.12) and (A.14) into (A.8), we get the complete expression of \(R_{1}(x)\)
\[R_{1}(x)=\frac{3A_{0}^{2}}{64\cosh^{2}x}\left(3\pi\sinh x+16\tanh x-24x\right)\,.\] (A.15)
A plot of this function \(R_{1}(x)\) can be seen in Fig. 12.
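The closed form (A.15) can be cross-checked numerically against the integral representation (A.7). The short Python sketch below does this with standard quadrature; the value of \(A_{0}\) and the integration cutoff are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

A0 = 1.0  # shape-mode amplitude, as in Fig. 12

def R1_closed(x):
    # Closed-form result (A.15)
    return 3 * A0**2 / (64 * np.cosh(x)**2) * (3 * np.pi * np.sinh(x) + 16 * np.tanh(x) - 24 * x)

def R1_integral(x):
    # Direct numerical evaluation of the three q-integrals in (A.7)
    th = np.tanh(x)
    f1 = lambda q: q**2 * (q**2 - 2) * np.sin(q * x) / ((q**2 + 4) * np.sinh(np.pi * q / 2))
    f2 = lambda q: q**3 * (q**2 - 2) * np.cos(q * x) / ((q**2 + 4) * (q**2 + 1) * np.sinh(np.pi * q / 2))
    f3 = lambda q: q**2 * (q**2 - 2) * np.sin(q * x) / ((q**2 + 4) * (q**2 + 1) * np.sinh(np.pi * q / 2))
    I1 = quad(f1, 1e-8, 60.0, limit=400)[0]
    I2 = quad(f2, 1e-8, 60.0, limit=400)[0]
    I3 = quad(f3, 1e-8, 60.0, limit=400)[0]
    return 3 * A0**2 / 32 * (I1 + 3 * th * I2 - 3 * th**2 * I3)

for x in (0.5, 1.0, 2.0, 4.0):
    print(f"x = {x}: closed form = {R1_closed(x):+.6f}, integral = {R1_integral(x):+.6f}")
```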
### Evaluation of the function \(R_{2}(x,t)\)
The evaluation of the contribution of \(R_{2}(x,t)\) is more challenging. Although a fully analytical calculation seems out of reach, some properties of the radiation can be obtained through suitable approximations. We will now focus on the asymptotic radiation. As we have seen, the \(R_{1}(x)\) contribution is exponentially suppressed for large \(x\); therefore, the asymptotic radiation is entirely given by the \(R_{2}(x,t)\) contribution.
Through some elementary algebra it is straightforward to verify that the integrand of \(R_{2}(x,t)\) can be written as
\[(4\omega_{s}^{2}-\omega_{q}^{2})\bigg{(}\cos(\omega_{q}t)-\cos(2\omega_{s}t)+ \cos(\omega_{q}t)-4\omega_{s}^{2}\frac{\cos(\omega_{q}t)-\cos(2\omega_{s}t)}{ 4\omega_{s}^{2}-\omega_{q}^{2}}\bigg{)}\,,\] (A.16)
where we have removed temporarily a \(-iR_{0}(q)\eta_{q}(x)\) factor. This expression can be arranged as
\[(4\omega_{s}^{2}-\omega_{q}^{2})\bigg{(}-2\sin(\frac{\omega_{q}+2\omega_{s}}{ 2}t)\sin(\frac{\omega_{q}-2\omega_{s}}{2}t)+\cos(\omega_{q}t)-\frac{8\omega_{ s}^{2}\sin(\frac{\omega_{q}+2\omega_{s}}{2}t)\sin(\frac{\omega_{q}-2\omega_{s}}{2}t)}{ \omega_{q}^{2}-4\omega_{s}^{2}}\bigg{)}\,.\] (A.17)
Figure 12: Representation of \(R_{1}(x)\) given by (A.15) for \(A_{0}=1\).
Let us define the following variables
\[\Omega_{q}^{+}=\frac{\omega_{q}+2\omega_{s}}{2},\qquad\Omega_{q}^{-}=\frac{\omega_ {q}-2\omega_{s}}{2}\,.\] (A.18)
As is well known, there are sequences of functions that converge weakly to the Dirac delta. One of them is
\[f_{n}(x)=\frac{\sin(nx)}{\pi x}\stackrel{{ n\to \infty}}{{\longrightarrow}}\delta(x).\] (A.19)
In our case, we could identify
\[\frac{\sin(\Omega_{q}^{-}t)}{\pi\Omega_{q}^{-}}\stackrel{{ t\to \infty}}{{\longrightarrow}}\delta(\Omega_{q}^{-})=\sqrt{6}\bigg{(}\delta(q-2 \sqrt{2})+\delta(q+2\sqrt{2})\bigg{)}\,,\] (A.20)
where a property of the Dirac delta was used. In order to exploit this, let us rewrite (A.17) as
\[(4\omega_{s}^{2}-\omega_{q}^{2})\bigg{(}-2\pi\Omega_{q}^{-}\sin(\Omega_{q}^{+} t)\frac{\sin(\Omega_{q}^{-}t)}{\pi\Omega_{q}^{-}}+\cos(\omega_{q}t)-2\pi t\, \omega_{s}^{2}\,\mbox{sinc}(\Omega_{q}^{+}t)\frac{\sin(\Omega_{q}^{-}t)}{\pi \Omega_{q}^{-}}\bigg{)}\,.\] (A.21)
Substituting (A.20) into (A.21) and integrating, we realise that the contribution from the first term in (A.21) vanishes. This fact is supported by the numerical analysis, which shows that this contribution is suppressed in time. The second term may be computed as follows: let us denote this contribution by
\[\Psi(x,t)=i\int_{\mathbb{R}}\,dq\,R_{0}(q)\,(4\omega_{s}^{2}-\omega_{q}^{2}) \cos(\omega_{q}t)\eta_{q}(x)\,.\] (A.22)
To facilitate the calculation, let us split \(\Psi(x,t)\) in the same way as we did for \(R_{1}(x)\)
\[\Psi(x,t) = \frac{3iA_{0}^{2}}{64}\bigg{(}\int_{\mathbb{R}}dq\,\frac{q^{2}\, (q^{2}-2)\cos(\omega_{q}t)}{(q^{2}+4)\sinh(\pi q/2)}e^{iqx}+3i\tanh x\int_{ \mathbb{R}}dq\,\frac{q^{3}\,(q^{2}-2)\cos(\omega_{q}t)}{(q^{2}+1)(q^{2}+4) \sinh(\pi q/2)}e^{iqx}\] (A.23) \[-3\tanh^{2}x\int_{\mathbb{R}}dq\,\frac{q^{2}\,(q^{2}-2)\cos( \omega_{q}t)}{(q^{2}+1)(q^{2}+4)\sinh(\pi q/2)}e^{iqx}\bigg{)}\,.\]
Each of the terms from \(\Psi(x,t)\) can be conceived as wave packets where the dispersion relation is non-linear and where the constituent amplitudes are exponentially suppressed in \(q\)
\[\Psi(x,t) = \frac{3iA_{0}^{2}}{128}\bigg{(}\int_{\mathbb{R}}dq\,\frac{q^{2}\,(q^{2}-2)}{(q^{2}+4)\sinh(\pi q/2)}\left(e^{i(qx+\omega_{q}t)}+e^{i(qx-\omega_{q}t)}\right)\] (A.24) \[+3i\tanh x\int_{\mathbb{R}}dq\,\frac{q^{3}\,(q^{2}-2)}{(q^{2}+1)(q^{2}+4)\sinh(\pi q/2)}\left(e^{i(qx+\omega_{q}t)}+e^{i(qx-\omega_{q}t)}\right)\] \[-3\tanh^{2}x\int_{\mathbb{R}}dq\,\frac{q^{2}\,(q^{2}-2)}{(q^{2}+1)(q^{2}+4)\sinh(\pi q/2)}\left(e^{i(qx+\omega_{q}t)}+e^{i(qx-\omega_{q}t)}\right)\bigg{)}\,.\]
In the literature, the propagation of wave packets is well understood when the amplitude is a Gaussian function, that is, \(A(q)=e^{-\alpha^{2}(q-q_{c})^{2}}\). In that case, as the amplitude is exponentially suppressed away from the maximum, it is a good approximation to Taylor expand the dispersion relation around \(q_{c}\). Then, \(\omega(q)\approx q_{c}v_{p}+(q-q_{c})v_{g}+\frac{1}{2}\Gamma(q-q_{c})^{2}+\dots\), where \(v_{p}\) is the phase velocity,
\(v_{g}\) is the group velocity, and \(\Gamma\) is the dispersion parameter. The final result of this approximation is collected by the following expression
\[\int_{\mathbb{R}}dq\,A(q)e^{i(qx\mp\omega(q)t)}\approx\frac{\sqrt{2\pi}}{\sqrt{2\alpha^{2}\pm i\Gamma t}}\exp\left(-\frac{1}{2}\left(\frac{x\mp v_{g}t}{\sqrt{2\alpha^{2}\pm i\Gamma t}}\right)^{2}\right)\exp\left(iq_{c}(x\mp v_{p}t)\right). \tag{A.25}\]
Notice that this integral is also suppressed in time. Finally, using the Dirac delta approximation (A.20) in the last term of (A.21), we obtain the only non-vanishing contribution, which reads
\[R_{2}(x,t)= \frac{9\pi A_{0}^{2}}{2\sqrt{8}\sinh(\sqrt{2}\pi)}\sin(2\sqrt{3}\,t)\sin(2\sqrt{2}\,x)+\frac{3\pi A_{0}^{2}}{2\sinh(\sqrt{2}\pi)}\sin(2\sqrt{3}\,t)\cos(2\sqrt{2}\,x)\tanh x \tag{A.26}\] \[-\frac{3\,\pi A_{0}^{2}}{2\sqrt{8}\sinh(\sqrt{2}\pi)}\sin(2\sqrt{3}\,t)\sin(2\sqrt{2}\,x)\tanh^{2}x\,.\]
In the asymptotic spatial regime, the previous expression reduces to
\[R_{2}(x,t)=\frac{3\,\pi A_{0}^{2}}{2\sinh(\sqrt{2}\pi)}\sqrt{\frac{3}{8}}\bigg{(}\cos\left(2\sqrt{3}\,t\mp 2\sqrt{2}\,|x|\mp\delta\right)-\cos\left(2\sqrt{3}\,t\pm 2\sqrt{2}\,|x|\pm\delta\right)\bigg{)}\,, \tag{A.27}\]
with
\[\delta=\arctan\sqrt{2}\,. \tag{A.28}\]
Note that the Dirac delta approximation picks a single frequency in the \(q\)-integral, which is why we have obtained a superposition of two travelling waves with the same frequency and opposite propagation directions, i.e., a standing wave. Choosing the outgoing wave in the positive direction we get
\[R_{\infty}(x,t)=\frac{3\,\pi A_{0}^{2}}{2\sinh(\sqrt{2}\pi)}\sqrt{\frac{3}{8}}\cos\big{(}2\sqrt{3}\,t-2\sqrt{2}\,x-\delta\big{)}. \tag{A.29}\]
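The weak convergence used in (A.20) can also be illustrated numerically: integrating the kernel \(\sin(\Omega_{q}^{-}t)/(\pi\Omega_{q}^{-})\) against a smooth test amplitude approaches the delta prediction as \(t\) grows. The sketch below assumes \(\omega_{s}=\sqrt{3}\) and \(\omega_{q}=\sqrt{q^{2}+4}\), consistent with the zeros \(q=\pm 2\sqrt{2}\) quoted above; the Gaussian test function is an arbitrary choice.

```python
import numpy as np

omega_s = np.sqrt(3.0)                                  # shape-mode frequency
omega   = lambda q: np.sqrt(q**2 + 4.0)                 # scattering-mode dispersion
Omega_m = lambda q: (omega(q) - 2.0 * omega_s) / 2.0    # Omega_q^- of (A.18)

g = lambda q: np.exp(-(q / 4.0) ** 2)                   # smooth test amplitude

q = np.linspace(-20.0, 20.0, 400_001)
rhs = np.sqrt(6.0) * (g(2.0 * np.sqrt(2.0)) + g(-2.0 * np.sqrt(2.0)))   # delta prediction of (A.20)
for t in (50.0, 200.0, 800.0):
    # kernel = sin(Omega^- t) / (pi Omega^-), written with np.sinc so that it stays finite at Omega^- = 0
    kernel = t / np.pi * np.sinc(Omega_m(q) * t / np.pi)
    lhs = np.trapz(g(q) * kernel, q)
    print(f"t = {t:5.0f}:  integral = {lhs:.4f},  delta prediction = {rhs:.4f}")
```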
An alternative derivation of the same result proceeds as follows. Let us split the integral of the last term in (A.21) as
\[I(x,t)= -4i\omega_{s}^{2}\int_{-\infty}^{0}dq\,\tilde{R}_{0}(q)\tilde{\eta}_{q}(x)\frac{\cos\big{(}\omega_{q}t\big{)}-\cos\big{(}2\omega_{s}t\big{)}}{4\omega_{s}^{2}-\omega_{q}^{2}}e^{iqx} \tag{A.30}\] \[-4i\omega_{s}^{2}\int_{0}^{\infty}dq\,\tilde{R}_{0}(q)\tilde{\eta}_{q}(x)\frac{\cos\big{(}\omega_{q}t\big{)}-\cos\big{(}2\omega_{s}t\big{)}}{4\omega_{s}^{2}-\omega_{q}^{2}}e^{iqx}\] \[\equiv I_{1}+I_{2},\]
where
\[\tilde{R}_{0}(q) = \frac{3A_{0}^{2}}{64}\sqrt{\frac{q^{2}+4}{q^{2}+1}}\,\frac{q^{2}\left(q^{2}-2\right)}{\omega_{q}^{2}\sinh(\pi q/2)}, \tag{A.31}\] \[\tilde{\eta}_{q}(x) = \frac{3\tanh^{2}x-q^{2}-1-3iq\tanh x}{\sqrt{(q^{2}+1)(q^{2}+4)}}. \tag{A.32}\]
Note that values of \(q\) close to \(0\) as well as large values are suppressed by the function \(\tilde{R}_{0}(q)\). Since the main contribution to the integrals (A.30) at large \(t\) comes from a neighbourhood of \(q=q_{\pm}=\pm 2\sqrt{2}\)
(notice that the modes corresponding to \(q=q_{\pm}\) grow linearly with time), the frequency \(\omega_{q}\) can be linearly approximated by
\[\omega_{q}\approx 2\sqrt{3}\pm\sqrt{\frac{2}{3}}\left(q\mp 2\sqrt{2}\right)+\mathcal{O}\left(q\mp 2\sqrt{2}\right)^{2}. \tag{A.33}\]
Then, using the linear approximation for \(\omega_{q}\) we may approximate \(I_{2}\) by the following expression
\[I_{2}(x,t)\approx-4i\tilde{R}_{0}(q_{+})\tilde{\eta}_{q_{+}}(x)\omega_{s}^{2}\int_{0}^{\infty}dq\,\frac{\cos\left(\widetilde{\omega}_{q}t\right)-\cos\left(2\omega_{s}t\right)}{4\omega_{s}^{2}-\omega_{q}^{2}}e^{iqx}\,, \tag{A.34}\]
where \(\widetilde{\omega}_{q}=2\sqrt{3}+\sqrt{\frac{2}{3}}\left(q-2\sqrt{2}\right)\). We are assuming implicitly that in a neighbourhood of \(q=2\sqrt{2}\), both \(\tilde{R}_{0}(q)\) and \(\tilde{\eta}_{q}(x)\) are approximately constant in \(q\). Similarly, a straightforward manipulation leads to the following expression for \(I_{1}\)
\[I_{1}(x,t)\approx 4i\tilde{R}_{0}(q_{+})\tilde{\eta}_{q_{+}}^{*}(x)\omega_{s}^{2}\int_{0}^{\infty}dq\,\frac{\cos\left(\widetilde{\omega}_{q}t\right)-\cos\left(2\omega_{s}t\right)}{4\omega_{s}^{2}-\omega_{q}^{2}}e^{-iqx}\,. \tag{A.35}\]
We have finally
\[I(x,t)=8\omega_{s}^{2}\tilde{R}_{0}(q_{+})\left(\mathrm{Re}\left(\tilde{\eta}_{q_{+}}(x)\right)\mathcal{F}_{s}(t)+\mathrm{Im}\left(\tilde{\eta}_{q_{+}}(x)\right)\mathcal{F}_{c}(t)\right)\,, \tag{A.36}\]
where
\[\mathcal{F}_{s}(t) = \int_{0}^{\infty}dq\,\frac{\cos\left(\widetilde{\omega}_{q}t\right)-\cos\left(2\omega_{s}t\right)}{4\omega_{s}^{2}-\omega_{q}^{2}}\sin\left(qx\right), \tag{A.37}\] \[\mathcal{F}_{c}(t) = \int_{0}^{\infty}dq\,\frac{\cos\left(\widetilde{\omega}_{q}t\right)-\cos\left(2\omega_{s}t\right)}{4\omega_{s}^{2}-\omega_{q}^{2}}\cos\left(qx\right)\,. \tag{A.38}\]
The integrals (A.37) and (A.38) can be computed analytically for all \(x\), but the expressions are not particularly illuminating. For \(x\gg 0\) and large \(t\) we have
\[\mathcal{F}_{s}(t) = -\frac{\pi\cos\left(2\sqrt{3}\,t-2\sqrt{2}\,x\right)}{4\sqrt{2}}\,, \tag{A.39}\] \[\mathcal{F}_{c}(t) = -\frac{\pi\sin\left(2\sqrt{3}\,t-2\sqrt{2}\,x\right)}{4\sqrt{2}}\,. \tag{A.40}\]
Substituting (A.39) and (A.40) into (A.36) we obtain (for \(x\gg 0\))
\[I(x,t)=\frac{3\,\pi A_{0}^{2}}{2\sinh(\sqrt{2}\pi)}\sqrt{\frac{3}{8}}\cos\left(2\sqrt{3}\,t-2\sqrt{2}\,x-\arctan\sqrt{2}\right). \tag{A.41}\]
## Appendix B Details of the numerical simulations
In this appendix we discuss the numerical coding scheme that we have used to solve numerically the equation of motion
\[\ddot{\phi}-\phi^{\prime\prime}+2\phi(\phi^{2}-1)=0\,, \tag{B.1}\]
where dots and primes denote derivatives with respect to time and space, respectively. In order to solve (B.1) we have discretized the second-order spatial derivative as
\[\phi^{\prime\prime}(x,t)=\frac{\phi(x+\Delta x,t)-2\phi(x,t)+\phi(x-\Delta x,t)}{(\Delta x)^{2}}\,, \tag{B.2}\]
and performed the temporal evolution through the leapfrog method. In addition, we have employed absorbing boundary conditions to prevent the radiation from being scattered back and interfering with the system. These conditions read
\[\left.(\partial_{t}+\partial_{x})\phi\right|_{x=\frac{L}{2},t} = 0\,, \tag{B.3}\] \[\left.(\partial_{t}-\partial_{x})\phi\right|_{x=-\frac{L}{2},t} = 0\,. \tag{B.4}\]
The simulations have been run in a lattice of length \(L=30\) with \(\Delta x=0.0375\), so the number of spatial points is \(N_{space}=1600\). Finally, to guarantee the stability of the method we have chosen \(\Delta t=\frac{L}{10(N_{space}+2)}\,,\) which satisfies the Courant condition with \(\Delta t/\Delta x<0.5\).
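For completeness, the following Python sketch implements the scheme described above (leapfrog time stepping, the discrete Laplacian, and the one-sided absorbing boundaries) for a wobbling kink. The initial profile, the shape-mode amplitude and the printed diagnostic are illustrative choices and not the exact setup of the production runs.

```python
import numpy as np

L, N = 30.0, 1600
dx = L / (N - 1)
dt = L / (10 * (N + 2))             # satisfies dt/dx < 0.5
x = np.linspace(-L / 2, L / 2, N)

A0 = 0.1                            # illustrative shape-mode amplitude
phi_old = np.tanh(x) + A0 * np.tanh(x) / np.cosh(x)   # kink + excited shape mode
phi = phi_old.copy()                # zero initial velocity (first-order leapfrog start)

def accel(f):
    """Discrete Laplacian minus the phi^4 force, interior points only."""
    a = np.zeros_like(f)
    a[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2 - 2.0 * f[1:-1] * (f[1:-1]**2 - 1.0)
    return a

for step in range(20000):
    phi_new = 2.0 * phi - phi_old + dt**2 * accel(phi)
    # absorbing (outgoing-wave) boundaries, discretized one-sided in space
    phi_new[-1] = phi[-1] - dt / dx * (phi[-1] - phi[-2])
    phi_new[0]  = phi[0]  + dt / dx * (phi[1] - phi[0])
    phi_old, phi = phi, phi_new

# amplitude of the radiation that has reached the right edge of the box
print(np.abs(phi - np.tanh(x))[-200:].max())
```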
|
2309.09000 | The Quantum-Extended Church-Turing Thesis in Quantum Field Theory | The quantum-Extended Church-Turing thesis has been explored in many physical
theories including general relativity but lacks exploration in quantum field
theories such as quantum electrodynamics. Through construction of a
computational model whose gate set mimics the interactions of QED, we
demonstrate that one of the defining features of quantum field theory, particle
creation and annihilation, is not likely to violate the quantum-Extended
Church-Turing thesis. Through this computational model, it is shown that
particle creation is likely only another form of quantum parallelism. However,
whether or not the quantum-Extended Church-Turing thesis will hold for all
computational devices in quantum field theories is still not known. For
example, we briefly examine certain interactions in quantum electrodynamics
which may create multi-qubit gates. These gates may have exponential complexity
at the cost of being exponentially weak. This may in turn allow for
computational advantage over traditional gate sets such as Clifford+T. | Cameron Cianci | 2023-09-16T14:04:33Z | http://arxiv.org/abs/2309.09000v1 | # The Quantum-Extended Church-Turing Thesis in Quantum Field Theory
###### Abstract
The quantum-Extended Church-Turing thesis has been explored in many physical theories including general relativity but lacks exploration in quantum field theories such as quantum electrodynamics. Through construction of a computational model whose gate set mimics the interactions of QED, we demonstrate that one of the defining features of quantum field theory, particle creation and annihilation, is not likely to violate the quantum-Extended Church-Turing thesis. Through this computational model, it is shown that particle creation is likely only another form of quantum parallelism. However, whether or not the quantum-Extended Church-Turing thesis will hold for all computational devices in quantum field theories is still not known. For example, we briefly examine certain interactions in quantum electrodynamics which may create multi-qubit gates. These gates may have exponential complexity at the cost of being exponentially weak. This may in turn allow for computational advantage over traditional gate sets such as Clifford+T.
## 1 The Church-Turing Thesis
The Church-Turing thesis states that every solvable decision problem can be calculated on a Turing machine [1, 2, 3]. This thesis was later extended to complexity theory by proposing that any efficient calculation done by a physical system can be efficiently computed by a Turing machine, where an efficient computation is a computation which can be performed in polynomial time [4].
The extended Church-Turing thesis held until the appearance of Shor's algorithm, which allows quantum computers to factor large numbers in exponentially less time than the best classical algorithm [5, 6]. This algorithm implies that certain computational problems may be efficiently solved on a quantum computer but not on a classical Turing machine. In response to this development, the quantum-Extended Church-Turing thesis was formulated, which proposes that any efficient calculation can be efficiently done by a quantum Turing machine [7].
## 2 qECT in General Relativity
Armed with a violation of the extended Church-Turing thesis, physicists sought out new ways to use physics to violate the quantum-Extended Church-Turing thesis. Many of the most promising approaches to date have been found through utilizing time dilation in general relativity.
In general relativity, time dilation allows for an observer to initialize a computing machine outside of a black hole, and then cross the event horizon in order to view the result of an infinitely long computation in constant time [8, 9]. This allows the observer to perform calculations which would otherwise take infinite time on a Turing machine. As the results of these infinitely long computational problems cannot be computed by a Turing machine, this violates the Church-Turing thesis. However, Susskind proposed solving this through altering the Church-Turing thesis to require the physical system to be able to communicate with the holographic boundary of space. This modification prevents the event horizon of a black hole from being used to violate the Church-Turing thesis [10].
Similarly to how the properties of general relativity have been leveraged to test the quantum-Extended Church-Turing thesis, we will investigate if the properties of quantum field theory may allow for computational advantage over traditional quantum computers.
## 3 Computing Machines in Quantum Electrodynamics
Quantum field theory is the most accurate description of the physical world which has been verified through experiment [11, 12, 13]. The widely successful Standard Model of particle physics is a quantum field theory which encapsulates the strong, weak, and electromagnetic forces, leaving out only gravity [14, 15, 16]. Although the study of quantum computing brings computer science a step closer to the fundamental laws of physics, there are still properties of quantum field theory which are not shared by quantum mechanics.
Foremost of these properties is particle creation and annihilation. This property allows quantum field theory to create new particles, or new computing subsystems, during a computation. At a first glance, this property is reminiscent of a nondeterministic Turing machine, which can branch to create new Turing machines and calculate each possible solution to a problem in parallel [17]. However, if particle creation in quantum field theory indeed behaved this way, it would violate the quantum-Extended Church-Turing thesis, assuming \(P\neq NP\). We will instead demonstrate that this property is likely only another form of quantum parallelism, a property already present in traditional quantum computers.
One of the simplest quantum field theories is Quantum Electrodynamics (QED). QED only describes electromagnetic interactions, disregarding the Strong and Weak force, as well as gravity. This theory simply couples a fermionic field to a U(1) gauge boson. The Lagrangian of quantum electrodynamics is [14],
\[\mathcal{L}=\bar{\psi}(i\gamma^{\mu}\partial_{\mu}+m)\psi-\frac{1}{4}F_{\mu \nu}F^{\mu\nu}-ie\bar{\psi}\gamma^{\mu}A_{\mu}\psi \tag{1}\]
We will use this physical theory to inspire a computational model, constructing a gate set which mimics the interactions of QED. In this computational model, we will make use of the spin degree of freedom of the photon as a qubit. Therefore, computational gates which create new photons will also create new qubits.
In quantum field theory, not only can we have a superposition of states, but we can also have a superposition of particle number. This can be illustrated through the following two states,
\[\frac{1}{\sqrt{2}}(A_{1}(x)-A_{2}(x))\left|\Omega\right>\approx\frac{1}{\sqrt{ 2}}(\left|0\right>-\left|1\right>) \tag{2}\]
\[\frac{1}{2}(A_{1}(x)-A_{2}(x))+\frac{1}{2}(A_{1}(x)A_{1}(y)+A_{2}(x)A_{2}(y)) \left|\Omega\right>\approx\frac{1}{2}(\left|0\right>-\left|1\right>)+\frac{1}{ 2}(\left|00\right>+\left|11\right>) \tag{3}\]
The operator \(A_{1}(x)\) creates a horizontally polarized photon at position \(x\), and the operator \(A_{2}(y)\) creates a vertically polarized photon at position \(y\). In this example, horizontally polarized photons are defined as the \(\left|0\right>\) computational basis state, and vertically polarized photons are defined as the \(\left|1\right>\) computational basis state.
Equation 2 shows a photon at position x in a \(\left|-\right>\) state, a computational state which can be easily obtained in a normal quantum computer. However, equation 3 demonstrates a superposition of particle number, which includes both a single photon in the state \(\left|-\right>\) in superposition with a bell state \(\frac{1}{\sqrt{2}}(\left|00\right>+\left|11\right>)\). Wavefunctions such as this second example allow quantum field theory to realize states in a way which cannot be realized using quantum mechanics, as quantum mechanics preserves particle number [18].
## 4 Example Computer in QED
Since calculating the exact interactions and time evolution of an arbitrary state in QED can be incredibly complex, we will instead use a simpler gate based model as a guide. The gate set of this computational model will mimic the interactions of QED.
The time evolution operator \(U(t)\) in quantum field theory is found by exponentiating the Hamiltonian [14].
\[U(t)=e^{-i{\cal H}t} \tag{4}\]
Perturbation theory allows us to expand the time evolution operator in orders of the coupling constant \(e\). In this way we can investigate different interactions which take place in QED. For example, we can see the structure of the following interaction which is present at fourth order in perturbation theory.
\[{\cal O}(e^{4})a_{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s}\in\frac{(-i{\cal H}t)^ {4}}{4!} \tag{5}\]
This interaction has the form of a two-qubit gate and is somewhat similar to those used to generate two-qubit gates in superconducting architectures [19, 20]. Due to this,
we will include a two-qubit gate in the gate set of the QED computational model. This interaction can be visually represented through the following Feynman diagram, where time is along the x-axis and space is along the y-axis.
In addition to two-qubit gates, the following interaction is present at sixth order.
\[\mathcal{O}(e^{6})a_{p}^{\dagger}a_{q}^{\dagger}a_{r}^{\dagger}a_{s}^{\dagger}a _{t}a_{u}\in\frac{(-i\mathcal{H}t)^{6}}{6!} \tag{6}\]
Due to this interaction, we will similarly adopt a novel gate into the gate set which can output four qubits from two input qubits, allowing the computational model to utilize particle creation. The interaction can be expressed diagrammatically using the following Feynman diagram,
As is conventionally done in quantum computing, we can view these gates in a circuit diagram. For example,
Figure 1: A Feynman diagram whose form suggests a two qubit gate.
Figure 2: A Feynman diagram in Quantum Electrodynamics which allows for the creation of new qubits.
This ability to create new qubits differentiates this computational model from a traditional quantum computer. This raises the question: does the addition of this particle creation gate bring a computational advantage to devices in quantum field theory?
## 5 Equivalence to a Qutrit Quantum Computer
To answer the previous question, we will now demonstrate that the computational model constructed can be simulated by a quantum computer with an exponential number of ancilla qutrits. Therefore, these particle creation gates would not allow for a violation of the quantum-Extended Church-Turing thesis.
We can demonstrate the equivalence of this QED gate set to a traditional quantum computer as follows. Each spin degree of freedom in quantum electrodynamics can be translated to 3 states. The first two states are common to quantum computers, \(|0\rangle\), \(|1\rangle\), however we also include a third state \(|\Omega\rangle\), which indicates that the particle does not exist.
Figure 4: **The same circuit as in Figure 3, simulated on a quantum computer with access to ancilla qubits initialized in the \(|\Omega\rangle\) state.**
Figure 3: **Use of the particle creation gate depicted in a circuit diagram, which may be generated from the interaction in Figure 2.**
Re-examining the state from Section 3, \(|\psi\rangle\approx\frac{1}{2}(|0\rangle-|1\rangle)+\frac{1}{2}(|00\rangle+|11\rangle)\), we can re-express this state as the qutrit state,
\[\frac{1}{2}(A_{1}(x)-A_{2}(x))+\frac{1}{2}(A_{1}(x)A_{1}(y)+A_{2}(x)A_{2}(y))\left|\Omega\right\rangle\approx\frac{1}{2}(|0\Omega\rangle-|1\Omega\rangle)+\frac{1}{2}(|00\rangle+|11\rangle) \tag{7}\]
In addition to being able to express a superposition of particle number using qutrits, and since the computing machine is limited to a polynomial amount of time, only an exponential number of particles can be created with the gates included. In this way, all states which can be reached in exponential time can be expressed with a finite number of qutrits.
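To make the qutrit encoding explicit, the following sketch builds the superposition of particle number with the amplitudes of Eq. (3) as a vector in a two-qutrit Hilbert space with basis \(\{|\Omega\rangle,|0\rangle,|1\rangle\}\) and verifies that it is normalized; the helper functions and the basis ordering are our own conventions.

```python
import numpy as np

OMEGA, H, V = 0, 1, 2          # |Omega> (no photon), |0> (horizontal), |1> (vertical)

def ket(i, dim=3):
    v = np.zeros(dim, dtype=complex)
    v[i] = 1.0
    return v

def kron(*kets):
    out = np.array([1.0 + 0j])
    for k in kets:
        out = np.kron(out, k)
    return out

# Superposition of particle number, written on two qutrits:
# 1/2 (|0 Omega> - |1 Omega>) + 1/2 (|00> + |11>)
psi = 0.5 * (kron(ket(H), ket(OMEGA)) - kron(ket(V), ket(OMEGA))
             + kron(ket(H), ket(H)) + kron(ket(V), ket(V)))

print(np.vdot(psi, psi).real)  # 1.0 -> a normalized state on a fixed number of qutrits
```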
The gate set outlined in the previous sections translates to two-qutrit and four-qutrit gates, which can be efficiently simulated using Clifford+T gates. In this way, any state and any operator which can be efficiently computed in the computational model put forth in Section 4 can be efficiently simulated by a qutrit quantum computer with an exponential number of ancilla qutrits. Therefore, these particle creation gates would not allow for a violation of the quantum-Extended Church-Turing thesis.
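A toy illustration of how a qubit-number-changing gate can be represented on a fixed qutrit register is given below: a "pair creation" gate is written as an isometry from two qutrits to four qutrits, with the two extra qutrits playing the role of ancillas initialized in \(|\Omega\rangle\), in the spirit of Figure 4. The specific action (a pair copy with amplitude \(\sin\theta\) and no creation from the vacuum) is our own choice and is not meant to represent the actual QED interaction.

```python
import numpy as np

d = 3                                    # qutrit basis: 0 -> |Omega>, 1 -> |0>, 2 -> |1>
theta = 0.3                              # toy pair-creation amplitude

def basis(i):
    v = np.zeros(d, dtype=complex)
    v[i] = 1.0
    return v

def kron(*kets):
    out = np.array([1.0 + 0j])
    for k in kets:
        out = np.kron(out, k)
    return out

# V maps 2 qutrits to 4 qutrits; if both inputs carry a photon, a copy of the
# pair is created with amplitude sin(theta), otherwise the ancillas stay in |Omega>.
V = np.zeros((d**4, d**2), dtype=complex)
for a in range(d):
    for b in range(d):
        col = a * d + b
        if a > 0 and b > 0:
            V[:, col] = (np.cos(theta) * kron(basis(a), basis(b), basis(0), basis(0))
                         + np.sin(theta) * kron(basis(a), basis(b), basis(a), basis(b)))
        else:
            V[:, col] = kron(basis(a), basis(b), basis(0), basis(0))

print(np.allclose(V.conj().T @ V, np.eye(d**2)))   # True -> V preserves norms (an isometry)
```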
Since superpositions of qutrit states correspond to superpositions of particle number in the computational model, this model suggests that particle creation acts only as another form of quantum parallelism. This finding indicates that particle creation, one of the defining aspects of quantum field theory, will likely not allow for a violation of the quantum-Extended Church-Turing thesis.
## 6 Exponentially Small Multi-Qubit Operations
Although particle creation may not violate the quantum-Extended Church-Turing thesis, this does not imply that other aspects of quantum field theory will not. There are still other possible mechanisms which may still allow for devices in quantum field theory to gain computational advantage over quantum computers.
We can discover such an example by investigating the limitations of the constructed gate set. The four-qutrit particle creation gate shown previously can be simulated efficiently by Clifford+T gates since it has a fixed size. However, quantum electrodynamics allows for interactions which scale with the system size and are therefore likely more difficult to simulate with a quantum computer. These interactions are exponentially suppressed by the coupling constant \(e\), suggesting that they may be best modeled by a new class of gates, described here as _exponentially suppressed multi-qubit_ gates.
A few examples of these high order Feynman diagrams are shown below for a 7-qubit case.
Each vertex suppresses the amplitude of the diagram by a factor of \(e\), the coupling constant of electromagnetism. Therefore, the amplitude of an \(N\)-qubit operation which leverages this type of interaction must scale as \(e^{N}\).
As a first approach to modeling these interactions, it may be useful to examine multi-qubit operators \(U\) which satisfy the following constraint,
\[Tr(\mathcal{I}-U)\leq 2^{-n} \tag{8}\]
where \(n\) is the number of qubits to which the gate is applied. These operators may have up to \(\mathcal{O}(2^{2n})\) parameters, which makes them exponentially difficult to realize exactly with the traditional Clifford+T gate set. However, it may be possible to approximate these gates using traditional gate sets.
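One simple way to generate members of this class numerically is to exponentiate a random Hermitian generator with an exponentially small angle. The sketch below reads the constraint (8) as a bound on the magnitude \(|Tr(\mathcal{I}-U)|\); the choice of generator and prefactor are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def weak_multiqubit_gate(n, seed=0):
    """Random n-qubit unitary U = exp(-i*eps*H) with |Tr(I - U)| <= 2**-n."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    H = (A + A.conj().T) / 2.0
    H /= np.linalg.norm(H, 2)             # unit spectral norm
    eps = 2.0 ** (-n) / (2.0 * dim)       # then |Tr(I-U)| <= eps * dim * ||H|| = 2**-n / 2
    return expm(-1j * eps * H)

for n in (2, 4, 6):
    U = weak_multiqubit_gate(n)
    dim = 2 ** n
    print(n, abs(np.trace(np.eye(dim) - U)), 2.0 ** (-n))
```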
Examining the successes of quantum computing indicates that access to these gates may provide a computational advantage. The Gottesman-Knill theorem demonstrates that the computational speedup of quantum computers does not arise solely from the properties of entanglement or superposition [21]. Furthermore, Shor's algorithm suggests that quantum speedups arise from exponentially small, finely tuned gates [5]. In this way, access to these gates may allow for greater computational power than a traditional gate set such as Clifford+T.
If computational advantage is identified with these gates, then it is important to remember that the quantum-Extended Church-Turing thesis is only violated if these gates can be realized in QED or the Standard Model. Although these gates are inspired
Figure 5: Example Feynman diagrams which may allow for multi-qubit gates whose magnitude is suppressed exponentially by the coupling constant \(\mathcal{O}(e^{N})\).
by exponentially weak multiparticle interactions in QED, symmetries and other properties of quantum field theories often constrain physically realizable interactions.
## 7 Conclusion and Further Directions
In this paper, we construct a computational model to investigate the computational power of quantum field theories. Using this model, we demonstrate that particle creation, one of the defining properties of quantum field theory, is not likely to violate the quantum-Extended Church-Turing thesis, as the states and gates of this computational model can be efficiently simulated by a qutrit quantum computer with exponentially many ancilla.
Although particle creation may not allow for computational advantage over a quantum computer, other properties of quantum field theories such as exponentially weak multi-qubit interactions may still allow for computational advantage.
To begin investigating this, it may be useful to examine if the class of operators which satisfy \(Tr(\mathcal{I}-U)\approx 2^{-n}\) allow for computational advantage in any known problems, or if these gates can be approximated in polynomial time by traditional gate sets. Since these operators have exponentially many tunable parameters, it is possible that they may be able to access parts of the state space which would be impossible for normal gate sets to efficiently access.
Even if computational advantage is identified through this multi-qubit gate set, these gates must still be constructed in a quantum field theory such as QED or the Standard Model in order to claim a violation of the quantum-Extended Church-Turing thesis. However, seeing as Shor's algorithm finds a computational speedup through exponentially small, finely tuned gates, the computational power of this gate set may be of interest.
|
2309.06864 | Funneling and spin-orbit coupling in transition-metal dichalcogenide
nanotubes and wrinkles | Strain engineering provides a powerful means to tune the properties of
two-dimensional materials. Accordingly, numerous studies have investigated the
effect of bi- and uniaxial strain. Yet, the strain fields in many systems such
as nanotubes and nanoscale wrinkles are intrinsically inhomogeneous and the
consequences of this symmetry breaking are much less studied. Understanding how
this affects the electronic properties is crucial especially since wrinkling is
a powerful method to apply strain to two-dimensional materials in a controlled
manner. In this paper, we employ density functional theory to understand the
correlation between the atomic and the electronic structure in nanoscale
wrinkles and nanotubes of the prototypical transition metal dichalcogenide
$\mathrm{WSe}_2$. Our research shows that the symmetry breaking in these
structures leads to strong Rashba-like splitting of the bands at the $\Gamma$
point and they thus may be utilized in future tunable spintronic devices. The
inhomogeneous strain reduces the band gap and leads to a localization of the
band edges in the highest-curvature region, thus funneling excitons there.
Moreover, we show how wrinkles can be modeled as nanotubes with the same
curvature and when this comparison breaks down and further inhomogenities have
to be taken into account. | Mohammadreza Daqiqshirazi, Thomas Brumme | 2023-09-13T10:20:40Z | http://arxiv.org/abs/2309.06864v1 | # Funneling and spin-orbit coupling in transition-metal dichalcogenide nanotubes and wrinkles
###### Abstract
Strain engineering provides a powerful means to tune the properties of two-dimensional materials. Accordingly, numerous studies have investigated the effect of bi- and uniaxial strain. Yet, the strain fields in many systems such as nanotubes and nanoscale wrinkles are intrinsically inhomogeneous and the consequences of this symmetry breaking are much less studied. Understanding how this affects the electronic properties is crucial especially since wrinkling is a powerful method to apply strain to two-dimensional materials in a controlled manner. In this paper, we employ density functional theory to understand the correlation between the atomic and the electronic structure in nanoscale wrinkles and nanotubes of the prototypical transition metal dichalcogenide WSe\({}_{2}\). Our research shows that the symmetry breaking in these structures leads to strong Rashba-like splitting of the bands at the \(\Gamma\) point and they thus may be utilized in future tunable spintronic devices. The inhomogeneous strain reduces the band gap and leads to a localization of the band edges in the highest-curvature region, thus funneling excitons there. Moreover, we show how wrinkles can be modeled as nanotubes with the same curvature and when this comparison breaks down and further inhomogenities have to be taken into account.
## I Introduction
Two-dimensional (2D) materials have been the focus of a myriad of studies in the last decade due to their fascinating properties. After the successful synthesis of graphene [1], other materials such as hexagonal boron nitride (h-BN) [2], transition metal dichalcogenides (TMDCs) [3; 4; 5; 6] and black phosphorus [7] joined the class of 2D materials quite quickly. Many researchers investigated the intriguing properties of this new class of materials, such as their extraordinary strength and high deformation before rupture, high mobility, and ease of property alteration [8; 9; 10; 11; 12; 13; 14]. 2D materials proved to be useful for many applications such as nanoelectronics, spintronics, and catalysis [15; 16; 17; 18].
Still, researchers are trying to expand the applicability of these materials by methods such as alloying [19], introduction of defects [20], creation of van der Waals heterostructures or by applying external pressure and fields [21; 22]. Another method offering a reversible and non-destructive route to modulate the properties of 2D materials is strain engineering.[23; 24; 25; 26; 27; 28; 29; 30] Uniaxial and biaxial strains have been studied extensively [31; 32] and there are standard techniques which can be used already during the synthetization of 2D materials [12]. Often the application of in-plane stress will lead to the formation of wrinkles and there are established methods to produce these wrinkles [33] which can even be used to determine the mechanical properties of the layered material [34; 35; 36]. Furthermore, the changes of the electronic structure in these wrinkles leads to funneling, _i.e._, a preferential emission of light from certain spatial position along the wrinkle.[37; 38; 39; 28; 30; 30] However, the local strain in such samples is far from being homogeneous and the comparison with calculations of homogeneously strained systems might be misleading. Understanding this (local) inhomogeneous strain in 2D materials requires further research due to the vast opportunities for future applications such as polarized single photon emitters [41; 42] and flexible optoelectronics [23].
In order to understand the influence of the inhomogeneous strain in 2D systems one can study idealized model systems which have a similar strain state but are easier to control from an experimental point of view or need less approximations in the respective theoretical description. Nanotubes (NTs) are structures where strain fields can play an important role [43; 44] and even though those are not strictly inhomogeneous, since the strain can be defined by a constant curvature, they represent such a simple model system which is different from the uni- or biaxially strained 2D layers and closer to the wrinkled systems in experiments.
The investigation of the electronic properties of inhomogeneously strained materials requires methods such as density functional theory (DFT) or density functional based tight binding (DFTB) which can be computationally demanding for large systems. Fortunately, for NTs, researchers have developed a method to reduce the size of the curved systems - cyclic DFT [45; 46; 47] employs the helical boundary condition in nanotubes in order to reduce the cost of the calculation and the method was successfully used to determine the bending modulus of various 2D systems. Yet, to the best of our knowledge, no one investigated the similarities and differences of NTs and wrinkles even if this understanding is crucial as in larger wrinkles (or other system where the variation of strain is due to a variation of the local curvature) it is computationally impossible to model the system entirely. NTs could then be used to further simplify the calculation by the use of cyclic DFT.
The sheer size of inhomogeneously strained structures causes many theoretical investigations to neglect relativistic effects beyond scalar-relativistic limits in order to enhance computational speed. Including spin-orbit coupling (SOC) in systems with heavy elements (like TMDCs) is however very important to understand many fascinating physical effects occurring in 2D materials such as the Hall effect at room temperature [48], the Rashba effect [49; 50] and spin-valley coupling [51].
The Rashba effect, _i.e._, the emergence of spin splitting in momentum direction, [49; 52] occurs due to inversion symmetry breaking in the presence of SOC. Such splitting was initially observed in zinc blends, wurtzite, and in systems under an external electric field. Another type of system in which this splitting is observed, is Janus-type 2D materials due to the resulting internal electric field perpendicular to the structure. Yet, the internal electric field and the subsequent splitting are quite small (\(\alpha\approx\mathcal{O}(10\ meV\mathrm{\AA})\)). Cheng _et al._[53] studied such polar TMDCs (WSSe as an example) proving their stability and recommended them to be used for Datta-Das spin field effect transistors. This behavior can be even more interesting if the splitting manifests adjustability without the need of an external electric field. Yao _et al._[54] utilized biaxial strain to manipulate the Rashba splitting in Janus heterostructures and concluded that the change of orbital overlaps increases the splitting. Since many symmetries including inversion symmetry are broken in curved systems such as NTs and wrinkles, one could also imagine the emergence of similar phenomena especially since such a curvature-induced SOC has already been shown for carbon NTs.[55; 56]
Having mentioned the importance of inhomogeneous strain in 2D structures, in this paper, we investigate WSe\({}_{2}\) in the form of NTs and wrinkles theoretically. We concentrate on these
systems specifically since we also want to estimate if and when NTs can be used to model large-scale wrinkles in order to ease the computational load using helical boundary conditions. The similarities between NTs and wrinkles are expounded and strain-associated phenomena in the band structures are described. To the best of our knowledge, we explore NTs and wrinkles in size ranges larger than in any previous investigation with an _ab initio_ method including SOC effects, which provides electronic insight into these materials for future applications.
## II Results and Discussion
In the following, we will investigate how inhomogeneous strain affects wrinkles and NTs of monolayer WSe\({}_{2}\) - a prototypical example of the TMDCs. In order to compare wrinkles and NTs, the initial wrinkled structure was created with a circular profile as shown in Figure 1 with a wavelength to amplitude ratio of \(\lambda/A=4\) and using NTs as input. We will discuss the deformation energy and changes in the band dispersion and we will explain the origin of funneling in nanoscale wrinkles as well as a Rashba-like splitting that occurs in curved TMDC structures. We will further explain in detail the similarities of electronic structure of NTs and wrinkles.
### Deformation energy/band gaps
The relaxation of the initial structure is very different for NTs and wrinkles. While the former only change the diameter (_cf._, Figures 1b and 1c), the latter relax into structures which do not resemble the initial nanotube-like profile anymore (Figures 1e and 1f), especially in the region close to the middle plane of the unit cell (inflection point). In this region, the wrinkled structure resembles a flat monolayer and is thus under less local strain in comparison to the corresponding nanotube. However, the peak of the wrinkles deforms stronger, leading to areas with higher curvature. The curvature in NTs on the other hand is constant since they always remain circular (Figure 1c).
In order to allow for a better comparison between NTs and wrinkles, we will introduce two new measures - the average and the maximum radius of curvature, \(R_{\text{ave}}\) and \(R_{\text{max}}\) - by fitting a spline to the positions of the tungsten atoms and calculating its curvature. The comparison of the deformation energies, \(E_{\text{def}}\), as function of the average radius of curvature in Figure 2 shows that there is a small difference between NTs and wrinkles for the smaller systems (_i.e._, with larger av
Figure 1: WSe\({}_{2}\) structures investigated in this study. (a), (b) nanotube 3D and side view, (d), (e) wrinkle 3D and side view (periodic boundary condition on back and front). (c) and (f) side view of the relaxed nanotube and wrinkle, respectively. The structural parameters such as the nanotube diameter, d, and the amplitude or wavelength of the wrinkle, A or \(\lambda\), are indicated in (b) and (e). Wrinkles with an initially elliptical profile (\(\lambda=4A\)) relax into a structure with smoother areas between peaks and valleys.
erage curvature) while the difference between armchair and zigzag is negligible. The deformation energy, \(E_{\text{def}}\), has been calculated as follows:
\[E_{\text{def}}=\frac{E_{\text{sys}}}{N_{\text{u.c.}}}-E_{\text{mono}}, \tag{1}\]
where \(E_{\text{sys}}\) is the energy of the wrinkle/nanotube, \(N_{\text{u.c.}}\) is the number of formula units, and \(E_{\text{mono}}\) is the energy of the flat monolayer. This indicates that the local relaxation at the inflection point of the wrinkles and the corresponding increasing curvature at the maxima can be understood as a redistribution of the strain which leads to a total energy gain. Figure 2 furthermore shows that this energy gain (per formula unit) becomes very small for \(R_{\text{ave}}\gtrsim 25\text{\AA}\).
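The procedure behind the curvature measures can be sketched as follows: a spline is fitted through the tungsten positions of one wrinkle period and the curvature of the resulting profile is evaluated, returning the average radius and the minimum radius (i.e., the highest local curvature). The profile below and the averaging convention (arc-length averaged curvature, inverted to give a radius) are illustrative assumptions, not the exact post-processing used for Figure 2.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def curvature_radii(x_w, z_w, n_dense=4001):
    """Average and minimum radius of curvature of a profile z(x) through the W positions."""
    spl = CubicSpline(x_w, z_w)
    x = np.linspace(x_w[0], x_w[-1], n_dense)
    dz, d2z = spl(x, 1), spl(x, 2)
    kappa = np.abs(d2z) / (1.0 + dz**2) ** 1.5        # local curvature
    ds = np.sqrt(1.0 + dz**2)                          # arc-length element
    kappa_ave = np.trapz(kappa * ds, x) / np.trapz(ds, x)
    return 1.0 / kappa_ave, 1.0 / kappa.max()          # R_ave, R_min (highest curvature)

lam, A = 88.0, 22.0                                    # lambda = 4A, as for the initial profiles
x_w = np.linspace(0.0, lam, 49)                        # sampled "W positions" along one period
z_w = A * np.sin(2.0 * np.pi * x_w / lam)              # illustrative sinusoidal profile
print(curvature_radii(x_w, z_w))                       # (R_ave, R_min) in Angstrom
```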
Interestingly, the profile of the relaxed wrinkles differs from the sinusoidal wave which is assumed in continuum mechanics and which follows from the harmonic approximation. The deviation from the sinusoidal shape is more prominent for shorter wavelengths and vanishes for wrinkles with larger wavelength which are more similar to monolayers. In fact, the long-range behavior is expected and can already be predicted by analyzing the average curvature of differently wrinkled
Figure 2: Variation of the deformation energy with increasing average radius of curvature, \(R_{\text{ave}}\), _i.e._, with increasing wavelength and diameter for wrinkles and nanotubes, respectively.
curves (see section "Curvature analysis" in the supporting information (SI) for more details [57]). In brief, for \(\lambda/A\gtrapprox 3.5\) the profile tends to be sinusoidal, while for smaller \(\lambda/A\) an elliptical or circular profile is preferred. This deviation from the harmonic solution is important for analyzing the strain fields using electron microscopy images for which one requires an assumption about the shape of the wrinkle (_cf._, Refs. [58; 59]). Our structures with \(\lambda\approx 4A\) after relaxation are in fact better fitted by two sine functions with one having an almost three times larger wavelength.
Such wrinkle profiles, having periodically wrinkled areas with peaks and valleys have already been observed in wrinkling experiments on polymeric substrate [22; 35]. Yet, in other wrinkling experiments [60], single wrinkles with only peaks connected with areas of lower strain have been found. In order to keep the discussions in this paper general, we focus on fully relaxed wrinkles which correspond to the expected relaxed freestanding wrinkle profile. The investigation of substrate-induced effects is an interesting topic which is however beyond the current investigation. The relaxation in the wrinkles leads to differences in the electronic structure compared to the nanotube-like structures used for the initial geometry. Yet, the comparison of the band dispersion for NTs and wrinkles with approximately the same average curvature (especially for large wavelength or diameter) also reveals similarities especially for the conduction and valence band (CB and VB). Figure 3 shows the band structures for a (24,24) nanotube (d=44 \(\AA\) ) and the corresponding wrinkle (\(\lambda=88\)A). Both the valence-band maximum (VBM) and conduction-band minimum (CBM) show large contributions from tungsten atoms (_cf._ Figure S3 for the monolayer in the rectangular unit cell) indicating that those are the high-symmetry K points of the primitive unit cell which are mapped to the \(\Gamma-\mathrm{X}\) line (more details about the backfolding can be found in the SI [57], section "Brillouin zone of wrinkles/NTs and spin texture"). Not only the global band gap, \(E_{\mathrm{gap}}\), is comparable for this specific example but also the dispersion of the first few VBs and CBs and accordingly derived physical quantities such as mobility and conductivity. This is a promising outcome as it suggests that for global variables of large-scale wrinkles a similar nanotube can be used to reduce the computational cost. In Figure 4 we compare the direct band gap as function of the average curvature for all investigated systems.
The global direct band gap of most wrinkles is slightly smaller than those of the NTs with similar average curvature and in both systems the band gap approaches the value of the monolayer, \(E_{\mathrm{gap}}^{ml}\approx 1.34\,\mathrm{eV}\), for small average curvature, _i.e._, large radius of curvature - please note that the calculated band gap for the monolayer can vary depending on the settings[61]. Thus, NTs can possibly be used to model the global band gap changes in large wrinkles (or similar systems under
inhomogeneous strain) if the average curvature of the wrinkles is taken into account. It is worth reemphasizing that the _ab-initio_ modeling of inhomogeneous strain in wrinkles is computationally very demanding due to the symmetry breaking, the presence of many heavy atoms, and spin-orbit coupling, and that this similarity can be utilized to simplify the theoretical modeling. One can then subsequently utilize helical boundary conditions to further reduce the computational cost. However, there are also some differences for the bands close to the VBM/CBM - in wrinkles we find more bands with similar dispersion but small differences in the maximum/minimum energy, and this can explain the funneling found in wrinkled systems, as explained in the following.
Figure 3: Comparison of the band structures for the (24,24) nanotube and the corresponding wrinkle. The smallest direct band gap is the high-symmetry K point of the monolayer which is mapped to the \(\Gamma-\mathrm{X}\) line (_cf._, element-projected band structures in the SI [57], Figure S3, and the section “Brillouin zone of wrinkles/nanotubes and spin texture”).
### Funneling
Funneling is the phenomenon of absorption and emission of light from different spatial positions along the wrinkle. The directional guiding of the excitons can be achieved by a spatial modification of, _e.g._, the dielectric screening[62] or the band gap of the material due to external strain.[37] This phenomenon has attracted the attention of numerous scientists and has been the subject of several studies. [28; 30; 37; 38; 39; 63] It has for example been shown that the photovoltaic behavior of 2D materials [64] and light emitting diodes [65] can be significantly enhanced. Furthermore, it was proposed that highly directional exciton transport promises not only compelling advantages for exciton-based applications but that it could also be interesting for reaching truly 1D regimes to study quantum transport phenomena of correlated many-body states.[63] From the experimental point of view, photoluminescence microscopy is routinely used to study these systems although it can be quite challenging at the nanoscale; [38] other techniques such as time-resolved transient absorption microscopy[66] are possible too but also need additional input from theory to interpret the results.
Figure 4: Variation of the global direct band gap, \(E_{\text{gap}}\), due to the strain field formed in wrinkles and nanotubes of WSe\({}_{2}\). The dash-dotted line indicates the direct band gap of the flat monolayer.
However, in conjunction with theoretical estimates of the band structure changes due to external strain[67; 68; 69] or different stacking regions[70; 68] a quantitative description of experimentally observed shifts of excitonic peaks is possible.
In order to understand the difference between all the bands close to the VBM and CBM and relate this to experimental observations, we projected the band structure on different atoms along the wrinkle. Figure 5 shows that the VBM is spread all over the wrinkle while the VB-1 and VB-2 (the two bands below the VB) close to the VBM are more localized in the straight and the curved region, respectively. The CBM on the other hand is localized at the top of the wrinkle in the regions with large local strain while the higher CB also show contributions from the straight regions. Since the different minima of the CBs have a larger difference in energy than the VBs, the lowest band gap is found in the regions with large local curvature. Figures S14a and S14b in the SI [57] show the variation of the local density of states (LDOS) along the (24,24) wrinkle close to VBM and CBM, respectively, and confirm that the energy levels close to the band gap are more localized close to the peaks and valleys of the wrinkle. Yet, since the band extrema also have small differences in the momentum direction which complicate the situation, further studies using, _e.g._, the Bethe-Salpeter equation to describe the excitonic states are needed which are - at the moment - however only possible for the smallest systems of our study.[71] Nevertheless, as shown previously, the changes of the local band gap give a very good estimate of the shifts of excitonic peaks.[67; 68; 69]
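As a toy illustration of the funneling picture discussed here, one can let an exciton drift down the gradient of a spatially varying local band gap; in the overdamped limit it accumulates where the gap is smallest, i.e., at the strongly curved regions. The gap profile, modulation depth and drift coefficient below are purely illustrative and not fitted to our DFT data.

```python
import numpy as np

lam, dE = 88.0, 0.10                       # wrinkle period (Angstrom) and gap modulation (eV)
E_gap = lambda x: 1.34 - dE * np.sin(2.0 * np.pi * x / lam) ** 2   # smaller gap at peaks and valleys
grad  = lambda x: -dE * (2.0 * np.pi / lam) * np.sin(4.0 * np.pi * x / lam)  # dE_gap/dx

x, mobility, dt = 10.0, 5.0, 0.1           # start position, drift coefficient, time step (arbitrary units)
for _ in range(20000):
    x -= mobility * grad(x) * dt           # overdamped drift towards the local band-gap minimum
print(round(x % lam, 2), round(E_gap(x), 3))   # ends near x = lam/4, where the local gap is smallest
```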
Another important effect which will influence the dynamics of excitons is the internal electric field induced by the curvature [72]. In order to estimate the local electric field which results from the local curvature, we use the dipoles as calculated with the Hirshfeld partitioning scheme [73]. Figure 6 shows the magnitude and direction of the tungsten dipoles for the (24,24) wrinkle. One can clearly see that the dipoles at the peaks and valleys of the wrinkle are larger - due to this inhomogeneity there will be an effective electric field which could be another force driving the excitons to the regions with higher curvature. This inhomogeneity can also be seen in the contour plot of the total electrostatic potential, Figure S13. Yet, in experiments the screening by a substrate might be important as well, and we leave this fascinating topic for future work since it is beyond the current investigation.
Most interestingly, the internal electric field due to the dipoles leads to a Rashba-like SOC splitting as can be seen in Figures 3 and S15 for the CB states crossing at the \(\Gamma\) point.
### SOC splitting
The splitting of the bands close to the \(\Gamma\) point resembles the Rashba spin-orbit splitting found in quantum wells or Janus-type TMDCs [74; 75]. Comparing band structures with and without SOC (shown in the SI [57], Figure S16), we can directly see the effect of SOC on the band dispersion. We observed this Rashba-like splitting (_i.e._, splitting of the band energies in momentum direction [49]) in all investigated systems and furthermore found in very small NTs an apparent avoided crossing of the SOC-split states which might be either due to the interaction between atoms of opposite sides or an artifact due to possible strain-induced changes of the hexagonal symmetry (_cf._, Figures 7 and S7).
The Rashba splitting in nanoscale wrinkles and NTs occurs due to the symmetry breaking caused by the inhomogeneous strain field and the resulting electric dipoles perpendicular to the wrinkle and nanotube. Hence, in the presence of spin-orbit coupling two degenerate spin bands split
Figure 5: Contribution of curved and straight sections of the (24,24) wrinkle to its band structure, the smallest local band gaps from each region are also indicated by arrows. The inset depicts the region division.
into two separate bands in momentum direction. These SOC-split states are mainly localized at the top (_i.e._, highest curvature part) of the wrinkle (_cf._, Figure 5).
Figure 7 depicts the band structure of the armchair (11,11) wrinkle and nanotube highlighting also the changes with increasing curvature if compared to Figure 3 - the splitting not only increases in momentum direction but the Rashba-split states also move up in energy such that they eventually become the VBM (see also the Figures S17-S20). One main difference of the wrinkle with respect to the nanotube is the larger splitting between the uppermost VBs and the lower bands which might be due to the higher curvature at the top of the wrinkles and the more diverse local strain state in wrinkles.
Another important difference can be found in the spin texture of the highest valence bands and lowest conduction bands as shown in the section "Brillouin zone of wrinkles/NTs and spin texture" in the SI [57]. While the NTs always show twofold degenerate bands coming from the K and K' point of the 2D material, the degeneracy is slightly lifted in the wrinkle probably due to the different strain states along one period of the wrinkle - in fact, a slight asymmetry is also visible in the contour plot of the total electrostatic potential shown in Figure S13. This also leads to a
Figure 6: Magnitude of the Hirshfeld dipole vectors[73] of the tungsten atoms along the (24,24) wrinkle – in order to better visualize the position along the wrinkle, the magnitude has the same sign as the z component, \(\mu_{z}\). The inset sketches the rotation of the dipole vectors along the wrinkle (black balls are the W positions).
different spin texture (compared to the NT) in which the VB of the wrinkle does not automatically have the opposite spin expectation value of the band just below (VB-1). The largest contribution of \(\langle\sigma_{i}\rangle\) for the Rashba-split states close to \(\Gamma\) is always coming from \(\langle\sigma_{y}\rangle\) and \(\langle\sigma_{z}\rangle\), _i.e._, the directions perpendicular to \(\mathbf{k}\). Figures S5 and S7 furthermore show that for the wrinkle the lowest CB has the opposite spin polarization of the VB at the K point (the extrema closer to the X point); the inhomogeneous strain is not large enough to change the pattern which is also observed in the monolayer.[76]
Figure 8 depicts the expectation value \(\langle\sigma_{z}\rangle\) for the four highest valence bands of the (11,11) wrinkle. Note that close to \(\Gamma\) only two bands are visible since the bands are doubly-degenerate due to the folding to the 1D Brillouin zone. We thus show the average expectation value of the two degenerate states \((\langle\sigma_{z}\rangle_{1}+\langle\sigma_{z}\rangle_{2})/2\). The expectation values \(\langle\sigma_{x}\rangle\) and \(\langle\sigma_{y}\rangle\) are either zero or the two degenerate states have opposite signs. A complete discussion about the individual spin states can be found in the SI [57], Figures. S5-S8.
In order to examine the strength of the Rashba-like splitting, the Rashba coupling parameter[77], \(\alpha_{R}\), has been calculated for the armchair systems using
\[\alpha_{R}=\frac{2E_{R}}{k_{R}}, \tag{2}\]
where \(E_{R}\) and \(k_{R}\) are the Rashba energy and the shift of the bands in the momentum direction, respectively. Unfortunately, the different back folding of the bands in the zigzag structures leads to the primitive unit cell's K point being mapped to \(\Gamma\) thus obscuring the SOC-split states. This
Figure 7: Band structure of the (11,11) wrinkle (left) and nanotube (right). The Rashba-like splitting in the momentum direction is clearly visible for the VB in the vicinity of the \(\Gamma\) point and is – for this system with a small \(\lambda/A\) ratio – even above the former VBM at the K point.
prevents an easy and correct fitting of the Rashba model to the band structure even if the band structures in the SI [57] clearly show the same splitting.
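In practice, \(E_{R}\) and \(k_{R}\) are read off numerically from the calculated bands. The following short sketch (our own illustration, not part of the FHI-aims workflow; the function name and the assumption that the band maximum is the Rashba extremum are ours) estimates the coupling parameter of Eq. (2) from a valence band sampled as \((k,E)\) pairs around \(\Gamma\):

```python
import numpy as np

def rashba_parameter(k, energy):
    """Estimate E_R, k_R and alpha_R = 2*E_R/k_R for a Rashba-split valence
    band sampled as (k, E) pairs, with Gamma located at k = 0."""
    k, energy = np.asarray(k), np.asarray(energy)
    i_gamma = np.argmin(np.abs(k))          # sample closest to Gamma
    i_max = np.argmax(energy)               # displaced band maximum
    e_r = energy[i_max] - energy[i_gamma]   # Rashba energy (eV)
    k_r = abs(k[i_max] - k[i_gamma])        # momentum offset (1/Angstrom)
    return e_r, k_r, 2.0 * e_r / k_r        # alpha_R in eV*Angstrom
```

For a conduction-band Rashba pair one would search for the displaced band minimum instead of the maximum.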
The Rashba coupling parameter shown in Figure 9 decreases as the wrinkle wavelength or nanotube diameter increases. This trend can be explained by the decreasing strain difference between the outer and the inner chalcogen layer and the correspondingly smaller induced electric field. It is also evident that the band gap and band dispersion of large NTs and wrinkles converge to those of the corresponding flat monolayer (_cf._ Figures S3 and S19). Furthermore, the Rashba coupling parameter is only comparable between wrinkles and NTs if it is shown with respect to the minimum radius of curvature, \(R_{\text{min}}\), i.e. the highest curvature. This is once more due to the wrinkles having higher curvature at their peaks. The Rashba coupling parameters in our structures are relatively high, almost half the size of those found on elemental surfaces [78] and one order of magnitude larger than in Janus-type TMDCs [53; 79].
Figure 8: The expectation value \(\langle\sigma_{z}\rangle\) for the (11,11) wrinkle is shown by the colored dots for the four highest valence bands. Note that only two bands are visible since the bands are doubly-degenerate due to the folding to the 1D Brillouin zone.
Furthermore, the coupling parameter in wrinkles/NTs with small \(R_{\rm min}\) can also be comparable to the one found in heterostructures including BiSb[80], even if one probably needs to include higher order terms to properly describe the large splitting.[81] It should be noted that the largest splittings are those easiest to reach experimentally, since the bands move above the monolayer VBM at K. This is also a possibility for tuning the electronic structure and the SOC-induced splitting for applications in spintronic devices: (mono-)layers of TMDCs deposited on elastic substrates can be used to induce different wrinkle morphologies, as shown in, e.g., Refs. [82; 83].
It should be noted, however, that the exact shape of the wrinkle can differ from the one considered in our study, e.g., due to substrate effects; the conclusions in the above sections still hold, as they are due to strain effects and the subsequent symmetry breaking. Yet slight quantitative differences are expected and require further investigations.
Figure 9: The Rashba coupling parameter in armchair WSe\({}_{2}\) nanotubes and wrinkles. The splitting reduces as the \(R_{min}\) increases, _i.e._, as the local strain due to the curvature decreases.
We expect that nanotubes can still be used to model the curvature effects in wrinkled systems and that the variation of the local band gap leads to exciton funneling. Furthermore, the interaction with substrates and additional external fields might lead to even higher Rashba-like splittings.
## III Conclusion
We investigated NTs and 2D wrinkles of WSe\({}_{2}\) theoretically and analyzed the influence of the induced inhomogeneous strain on their electronic properties; the following conclusions should, however, generally apply to all TMDCs. We found that the inhomogeneous strain causes symmetry breaking in these structures, which leads to a Rashba-like splitting of the valence band at \(\Gamma\). Therefore, these structures - particularly those with smaller wavelengths - could be promising candidates for spintronic applications. In fact, spin-polarized STM using an additional graphene layer as electrode[84; 85] should be able to measure the Rashba-like splitting of the VBM. We believe that this is a general feature of wrinkled 2D TMDCs due to the non-uniform strain, and our study will thus pave the way for the employment of a wide range of materials in spintronic devices. This bears witness to the important role of SOC in the physics of nanoscale wrinkles and NTs of 2D TMDCs. Moreover, wrinkling should be regarded as a method for introducing out-of-plane dipoles in the 2D system, which might be useful in other contexts. Investigations of bilayers and heterostructures made of dichalcogenides could even result in the appearance of further fascinating phenomena.
Furthermore, nanoscale TMDC wrinkles do not follow the sine wave profile suggested by continuum mechanics and widely utilized in strain analysis. The profiles in our study are basically composed of a superposition of two sine waves. Thus, the curvature varies more smoothly near the maxima and minima of the profile. This suggests that the classical formulation of such structures conceals important physical properties and that conclusions based upon it might be misleading, _e.g._, in the calculation of strain fields from the surface topology in nanoscale wrinkles. Moreover, we attribute the funneling of excitons to the localization of states in different spatial locations due to the presence of the inhomogeneous strain. Yet, more advanced calculations are needed to evaluate the influence of, _e.g._, the induced dipoles or the substrate, and an additional investigation of the wrinkled samples via STM [84; 85] could furthermore help to understand the band alignment better.
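To illustrate the last point, a few lines of Python (purely illustrative; the 10% third-harmonic admixture and the numerical values are placeholders, not the fitted profile parameters of our wrinkles) compare the local curvature \(\kappa=|z''|/(1+z'^{2})^{3/2}\) at the extrema of a single sine with that of a two-sine superposition with flattened extrema:

```python
import numpy as np

wavelength, amplitude = 4.0, 1.0            # arbitrary illustrative values
x = np.linspace(0.0, wavelength, 2001)
q = 2.0 * np.pi / wavelength

profiles = {
    "single sine":   amplitude * np.sin(q * x),
    "two harmonics": amplitude * (np.sin(q * x) + 0.1 * np.sin(3 * q * x)),
}

for label, z in profiles.items():
    dz = np.gradient(z, x)
    d2z = np.gradient(dz, x)
    kappa = np.abs(d2z) / (1.0 + dz**2) ** 1.5
    i_peak = np.argmax(z)
    # the superposed profile has a much smaller curvature at its flattened peak
    print(f"{label:13s}  kappa at the peak = {kappa[i_peak]:.3f}")
```

The flattened extrema of the superposed profile carry a much smaller peak curvature, which is exactly the behavior concealed by the single-sine assumption.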
Additionally, we demonstrated that NTs can be used to approximate the wrinkles in order to reduce computational costs; nonetheless, one needs to take care of the differences that exist.
## IV Methods
NTs and wrinkles of a monolayer of WSe\({}_{2}\) with two edge types, armchair and zigzag, were investigated using an all-electron method based on DFT as implemented in the FHI-aims code [86]. We utilized the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [87] together with the Tkatchenko-Scheffler dispersion correction [88] and the non-self-consistent SOC implementation [89]. Additionally, no symmetry was imposed on any of the structures in these calculations. In order to compare wrinkles and NTs, the initial wrinkled structure was created with an elliptical profile as shown in Figure 1, with a wavelength to amplitude ratio of \(\lambda/A=4\), using NTs as input. The unit cell was fixed only in the direction of the wrinkle, with length \(\lambda\), to retain the compression. All structures have been relaxed utilizing the Broyden-Fletcher-Goldfarb-Shanno method to reach forces below \(1\,\mathrm{meV}/\)Å. Subsequently, the Mulliken-projected band structures with and without spin-orbit coupling were calculated. NTs are labeled with the rolling vector (m,n), and for wrinkles a similar notation is used such that the (m,n) wrinkle is similar to the (m,n) nanotube. The geometries as well as the band structures for all systems investigated within this work were uploaded to the NOMAD repository, Ref. [90].
## Conflicts of interest
There are no conflicts of interest to declare.
###### Acknowledgements.
The authors would like to thank Prof. T. Heine, Dr. A. Kuc, R. Kempt and F. Arnold for fruitful discussions. This project was financially supported by the SFB 1415, Project ID No. 417590517. We would like to acknowledge the Center for Information Services and High Performance Computing [Zentrum für Informationsdienste und Hochleistungsrechnen (ZIH)] at TU Dresden. Also, the authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS [91] at Jülich Supercomputing Centre (JSC).
**Table of Contents**
**Strain to introduce Rashba-like splitting:** By employing an _ab-initio_ method (density functional theory), a Rashba-like splitting is observed in the electronic band structure of strained two-dimensional transition metal dichalcogenides in the form of wrinkles and nanotubes. Additionally, the assumption of modeling wrinkles as nanotubes is investigated for the electronic studies of these structures.
 |
2303.17932 | Trimming Phonetic Alignments Improves the Inference of Sound
Correspondence Patterns from Multilingual Wordlists | Sound correspondence patterns form the basis of cognate detection and
phonological reconstruction in historical language comparison. Methods for the
automatic inference of correspondence patterns from phonetically aligned
cognate sets have been proposed, but their application to multilingual
wordlists requires extremely well annotated datasets. Since annotation is
tedious and time consuming, it would be desirable to find ways to improve
aligned cognate data automatically. Taking inspiration from trimming techniques
in evolutionary biology, which improve alignments by excluding problematic
sites, we propose a workflow that trims phonetic alignments in comparative
linguistics prior to the inference of correspondence patterns. Testing these
techniques on a large standardized collection of ten datasets with expert
annotations from different language families, we find that the best trimming
technique substantially improves the overall consistency of the alignments. The
results show a clear increase in the proportion of frequent correspondence
patterns and words exhibiting regular cognate relations. | Frederic Blum, Johann-Mattis List | 2023-03-31T09:55:48Z | http://arxiv.org/abs/2303.17932v1 | # Trimming Phonetic Alignments Improves the Inference of Sound Correspondence Patterns from Multilingual Wordlists
###### Abstract
Sound correspondence patterns form the basis of cognate detection and phonological reconstruction in historical language comparison. Methods for the automatic inference of correspondence patterns from phonetically aligned cognate sets have been proposed, but their application to multilingual wordlists requires extremely well annotated datasets. Since annotation is tedious and time consuming, it would be desirable to find ways to improve aligned cognate data automatically. Taking inspiration from trimming techniques in evolutionary biology, which improve alignments by excluding problematic sites, we propose a workflow that trims phonetic alignments in comparative linguistics prior to the inference of correspondence patterns. Testing these techniques on a large standardized collection of ten datasets with expert annotations from different language families, we find that the best trimming technique substantially improves the overall consistency of the alignments. The results show a clear increase in the proportion of frequent correspondence patterns and words exhibiting regular cognate relations.
## 1 Introduction
With the introduction of automated methods for the inference of correspondence patterns from multilingual wordlists (List, 2019), computational historical linguistics has acquired a new technique with multiple applications in the field. Correspondence patterns have been used to identify problematic cognate judgments in individual datasets (List, 2019) or to assess their general characteristics (Wu et al., 2020), they have been used as the basis to predict cognate reflexes (Bodt and List, 2022; List et al., 2022; Tresoldi et al., 2022) or to reconstruct protoforms (List et al., 2022). They have also shown to be useful to compare different cognate judgments with respect to the overall regularity they introduce in a multilingual dataset (Greenhill et al., 2023).
While machine-readable correspondence patterns have already shown to be useful for various tasks in historical linguistics, their basic properties have so far not yet been thoroughly investigated. Thus, although we can easily see that correspondence patterns show long-tail distributions with respect to the number of alignment sites that individual patterns reflect in multilingual datasets, no closer investigations of these patterns have been carried out so far. Here, historical linguistics can learn from evolutionary biology, where specific characteristics of alignments of DNA or protein sequences have been investigated for several decades now. Scholars have also looked into the characteristics of those alignment sites that turn out to be problematic when it comes to phylogenetic reconstruction and similar secondary tasks (Talavera and Castresana, 2007; Dress et al., 2008). In order to handle these "irregular" sites, biologists have proposed methods to _trim_ alignments by removing sites that contradict more general evolutionary tendencies. This allows scholars to reduce the amount of artifacts in the data and retrieve more accurate information about the evolutionary processes behind the alignments.
In computational historical linguistics, _trimming_ of alignments has so far been ignored. In classical historical language comparison, however, the practice of ignoring specific sites in the alignment of cognate words has a long tradition. When arguing for particular sound changes or correspondence patterns, scholars routinely consider only the supposed _root_ of a cognate set (Trask, 2000, 290), ignoring inflectional and derivational markers or irregular parts of individual cognate reflexes. While this is a common procedure for the comparative method, it is seldom made explicit. One of the few cases where this process _is_ made explicit is offered by Payne (1991). Here, the author provides an alignment matrix where all the non-cognate material is set into brackets,
distinguishing them from the true alignment sites. This step is accompanied by a detailed discussion of the morphemic elements and their implications for reconstructing the proto-forms, a step that is rarely carried out in such detail. The importance of this practice is also reflected in tools that allow for the manual correction of alignments, like EDICTOR List (2017) and RefLex Segerer and Flavier (2015), which offer options to flag alignment sites as problematic (or important). Specifically, the trimming facility of the EDICTOR tool has also been used to increase the transparency of cognate sets in studies devoted to phylogenetic reconstruction Sagart et al. (2019); Cayon and Chacon (2022).
Given the highly skewed distributions of alignment sites over correspondence patterns in computational comparative linguistics and the practice of human annotators to regularly ignore certain parts of phonetically aligned cognate sets in historical linguistics, it would be beneficial to find automated ways to _trim_ phonetic alignments in multilingual wordlists. Trimmed alignments could either form the basis of a more extensive annotation of phonetic alignments in a computer-assisted setting List (2017), or they could serve as the basis of extensive cross-linguistic, typologically oriented studies devoted to the regularity of sound change and sound correspondence patterns. For example, correspondence patterns have already been used in typological studies investigating the history of pronoun systems in South America Rojas-Berscia and Roberts (2020), or for studies with simulated data that use phonetic alignments to construct artificial cognate sets Wichmann and Rama (2021).
In the following, we will provide a first framework for the trimming of phonetic alignments and test it on ten datasets from typologically diverse language families. Our experiments show that trimming increases the overall regularity of the correspondence patterns - even when using very rudimentary strategies - and thus shrinks the long tail of their distributions over alignment sites. The closer inspection of individual trimmed alignments, however, also shows that our methods still have a lot of room for improvement. We conclude by pointing to various techniques that could enhance the trimming of phonetic alignments in the future.
## 2 Background
Sound correspondences are the core of the comparative method. They form the basis for proving genetic relationship between languages, for establishing the internal classification of language families, as well as for the reconstruction of proto-languages. Sets of sound correspondences are commonly analyzed as _correspondence patterns_. A crucial component of correspondence patterns in contrast to sound correspondences is that the correspondence set is not defined on the basis of language pairs, but rather as a pattern shared between several languages List (2019, 141). In other words, a correspondence pattern is defined as the set of sounds in any number of daughter languages that derive from the same phoneme of the ancestral language in a specific environment Hoenigswald (1960); Anttila (1972).
In order to qualify as a _pattern_, sound correspondences must be backed by many examples. Examples are drawn from concrete cognate sets that need to be phonetically aligned in order to reveal which sounds correspond with each other. In order to constitute a valid pattern that would be accepted as a _regular_ or _systematic_ sound correspondence Trask (2000, 336), a considerable number of examples backing a certain pattern must be assembled from the data. This step is necessary to avoid chance similarities resulting from erroneous cognate judgments or undetected sporadic borrowings. While the minimum number of examples is not universally agreed upon, most scholars tend to accept two or three examples as sufficient to consider a pattern as regular.
Correspondence patterns are typically represented with the help of a matrix, in which the rows correspond to individual languages and the columns correspond to patterns, with cell values indicating the sounds (the _reflexes_) of individual language varieties in individual patterns Clackson (2007, 307). Correspondence patterns are traditionally inferred by manually inspecting phonetic alignments of cognate sets, trying to identify individual columns (_alignment sites_) in the alignments that are compatible with each other Anttila (1972); List (2019). Figure 1 illustrates this process with phonetic alignments of fictitious words from fictitious languages. In order to reconstruct the ancestral form underlying a cognate set, it is common to ignore certain sites in the alignment that are considered as difficult to align. Problems of alignability Schweikhard and List (2020, 10) usually result from the fact that words in a cognate set are not entirely, but only partially cognate. This can be
due to processes of word formation or inflection in individual language varieties Wu and List (2023), as illustrated in Figure 2 with data from Quechua Blum et al. (forthcoming).
## 3 Materials and Methods
### Materials
We use ten freely available datasets from typologically diverse language families, taken from the Lexibank collection List et al. (2022). This collection contains datasets that were (retro)standardized following the recommendations of the Cross-Linguistic Data Formats initiative (CLDF, [https://cldf.clld.org](https://cldf.clld.org), Forkel et al. 2018). One core aspect of CLDF is to make active use of _reference catalogs_ like Glottolog ([https://glottolog.org](https://glottolog.org), Hammarström et al. 2022) and Concepticon ([https://concepticon.clld.org](https://concepticon.clld.org), List et al. 2023). Reference catalogs in this context are metadata collections that provide extensive information on very general linguistic constructs, such as languages, concepts, or speech sounds. By linking the languages in a given dataset to Glottolog, i.e., by providing Glottocodes for individual language varieties, one guarantees the comparability of the language varieties with other datasets which have also been linked to Glottolog. By mapping concepts in multilingual wordlists to Concepticon, one guarantees the comparability of the concepts with other datasets that have also been linked to Concepticon. Apart from Glottolog and Concepticon, many datasets from the Lexibank collection offer standardized phonetic transcriptions following the Cross-Linguistic Transcription Systems reference catalog (CLTS, [https://clts.clld.org](https://clts.clld.org), List et al. 2021, see Anderson et al. 2018). In this reference catalog, more than 8000 different speech sounds are defined and can be distinguished with the help of distinctive features. At the same time, new, so far unseen sounds can be derived using a specific parsing algorithm underlying the PyCLTS software package List et al. (2020). As a result, the Lexibank collection of multilingual wordlists offers a large number of multilingual datasets that have been standardized with respect to languages, concepts, and transcriptions.
Apart from offering standardized phonetic transcriptions, all datasets also offer cognate judgments provided by experts. Alignments were computed automatically, using the SCA method for multiple phonetic alignments List (2012, 2014) in its default settings. Of the ten datasets, two (crossandean and walworthpolynesian) were reduced to 20 language varieties in order to have datasets of comparable sizes. While the datasets differ with respect to the number of language varieties and time depth of the families in question, they are all large enough to allow us to infer a substantial amount of frequent sound correspondence patterns.
### Methods
#### 3.2.1 Trimming Phonetic Alignments
The main purpose of trimming is to remove problematic alignment sites and increase the potential of retrieving relevant information from the remaining sites. In biology, trimming of sequence alignments is primarily performed to improve phylogenetic inference. The goal is to reduce the noise in the data in order to get a clearer picture of the actual phylogenetic information contained in DNA sequences Talavera and Castresana (2007). Despite the removal of some data, the accuracy of phylogenetic trees inferred from the data often improves. To assure that enough relevant information is maintained after trimming, trimmed alignments need to have some minimal length. Several tools for automated trimming have been developed in evolutionary biology. Some of them select the most reliable columns and remove sparse alignment sites that consist mainly of gaps Capella-Gutierrez et al. (2009), while other tools focus on entropy values and evaluate whether a site is expected or not Criscuolo and Gribaldo (2010). The most ambiguous and divergent sites
Figure 1: Corresponding alignment sites in a set of four fictitious languages.
Figure 2: Trimming morphemes in Quechua. The root is combined with different morphemes in some varieties.
are removed in this approach, arguing that they might result from erroneous judgements of homology (Steenwyk et al., 2020).
In contrast to the trimming of DNA sequences in biology, the main goal of trimming alignments in linguistics is not to infer phylogenetic trees, but to make the alignments more useful for secondary use in computing sound correspondences and helping phonological reconstruction. Each cognate set is reduced to a 'core' alignment, which can then later be reconstructed as approximating the _root_ in the proto-language of the respective cognate set.
Our initial trimming strategies focus on the presence of gaps in the alignment sites. For this purpose, we compute the proportion of gaps in each site and evaluate whether this proportion is above or below a certain threshold (_gap threshold_). All sites which are above the threshold are identified as _candidates_ for trimming. The default value for the gap threshold in our implementation is 0.5, which means that we _could_ trim all sites in which the majority of sounds is a gap.
However, since a naive trimming of all alignment sites exceeding our gap threshold might well lead to the trimming of all sites in an alignment and therefore discard the corresponding cognate set in its entirety, we define a minimal skeleton of alignment sites that should not be touched by the trimming procedure (similar to the minimal sequence length in DNA trimming). This skeleton is based on consonant-vowel profiles of the alignments and defaults to CV and VC. The preference of minimal CV/VC skeletons for aligned cognate sets is justified by linguistic practice (Tian et al., 2022) and can be adjusted to account for extended root structures, such as, for example, CVC. This means that only those results of the trimming procedure are accepted that leave a core alignment of at least one consonant and one vowel, ignoring their particular order. In order to make sure that the core is preserved, we first define an ordered list of candidate sites that could be removed and then start removing them site after site, checking after each removal whether the core skeleton has been left untouched. When only the core skeleton is left, trimming is stopped.
Based on this general procedure of trimming until a core skeleton defined by the user is reached, we test two detailed strategies for trimming. In the first strategy, we only trim _consecutive_ gaps occurring in the beginning or the end of the alignment, a strategy that is also used in the context of sequence comparison in biology (Raghava and Barton, 2006). This _core-oriented_ strategy allows us to drop spurious prefixes and suffixes occurring in some language varieties in individual alignments. In order to create our ordered list of candidate sites, we start from the right-most sites in our alignment and combine them with the left-most sites. In the second strategy, we trim all sites where the frequency of gaps exceeds our threshold, regardless of their position. This _gap-oriented_ strategy would also trim gapped sites occurring in the beginning and the end of an alignment, but may additionally trim gapped sites regardless of their position. In order to create our ordered list of candidate sites, we sort all sites exceeding the gap threshold by the proportion of gaps in reversed order. Figure 3 illustrates the calculation of gap profiles and the trimming using the two strategies defined here for a toy example of fictitious words from fictitious languages.
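To make the workflow more tangible, the following simplified re-implementation sketches the gap profile, the candidate ordering, and the skeleton-preserving removal for both strategies. It is illustrative only and not the code of our LingRex plugin; the helper names and the majority-vote classification of columns into consonantal and vocalic sites are simplifying assumptions, whereas the released implementation relies on the sound-class machinery of LingPy.

```python
VOWELS = set("aeiou")  # toy vowel inventory; real data would use CLTS sound classes


def gap_profile(alignment):
    """Proportion of gap symbols '-' in every column of the alignment."""
    rows = len(alignment)
    return [sum(row[i] == "-" for row in alignment) / rows
            for i in range(len(alignment[0]))]


def column_type(alignment, i):
    """Classify column i as vocalic ('V') or consonantal ('C') by majority vote."""
    tokens = [row[i] for row in alignment if row[i] != "-"]
    vowels = sum(tok[0].lower() in VOWELS for tok in tokens)
    return "V" if vowels >= len(tokens) - vowels else "C"


def candidate_sites(alignment, threshold=0.5, strategy="gap"):
    """Ordered list of columns whose gap proportion exceeds the threshold."""
    profile = gap_profile(alignment)
    above = {i for i, p in enumerate(profile) if p > threshold}
    if strategy == "gap":  # all gapped sites, sorted by gap proportion, gappiest first
        return sorted(above, key=lambda i: profile[i], reverse=True)
    # core-oriented: only consecutive gapped sites at the right and the left edge
    right, left, i = [], [], len(profile) - 1
    while i in above:
        right.append(i)
        i -= 1
    i = 0
    while i in above and i not in right:
        left.append(i)
        i += 1
    return right + left


def trim(alignment, threshold=0.5, strategy="gap"):
    """Remove candidate sites while keeping at least one C and one V column."""
    keep = list(range(len(alignment[0])))
    for site in candidate_sites(alignment, threshold, strategy):
        remaining = [column_type(alignment, i) for i in keep if i != site]
        if "C" in remaining and "V" in remaining:  # CV/VC skeleton preserved
            keep.remove(site)
    return [[row[i] for i in keep] for row in alignment]


toy = [list("kan-ta-"), list("kan-tas"), list("-andta-"), list("kan-to-")]
print(trim(toy, strategy="gap"))   # drops the internal and the final gapped site
print(trim(toy, strategy="core"))  # only drops the final gapped site
```

Applied to the toy alignment at the bottom, the gap-oriented strategy removes both gapped sites regardless of their position, while the core-oriented strategy only removes the trailing one.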
\begin{table}
\begin{tabular}{l|c|c|c|l|l} Data set & Lang. & Concepts & Cog.-Sets & Words & Source \\ \hline
constenlachibchan & 25 & 106 & 213 & 1216 & Constenla Umaña (2005) \\
crossandean & 20 & 150 & 223 & 2789 & Blum et al. (forthcoming) \\
dravlex & 20 & 100 & 179 & 1341 & Kolipakam et al. (2018) \\
felekesemitic & 21 & 150 & 271 & 2622 & Feleke (2021) \\
hattorijaponic & 10 & 197 & 235 & 1710 & Hattori (1973) \\
houchinese & 15 & 139 & 228 & 1816 & Hóu (2004) \\
leekoreanic & 15 & 206 & 233 & 2131 & Lee (2015) \\
robinsonap & 13 & 216 & 253 & 1424 & Robinson and Holton (2012) \\
walworthpolynesian & 20 & 205 & 383 & 3637 & Walworth (2018) \\
zhivlovobugrian & 21 & 110 & 182 & 1974 & Zhivlov (2011) \\ \end{tabular}
\end{table}
Table 1: Number of languages, concepts, non-singleton cognate sets and total entries across the different datasets
#### 3.2.2 Evaluating Cognate Set Regularity
With the method by List (2019), correspondence patterns can be inferred from phonetically aligned cognate sets with the help of an iterative partitioning strategy which clusters the individual alignment sites. The resulting patterns are reflected by varying amounts of alignment sites, which we can use to compute certain statistics, building on earlier work by Greenhill et al. (2023). In a first step, we can compare the number of frequently recurring patterns with the number of patterns that do not recur frequently in the data. Based on this comparison, we can compute the proportion of alignment sites that are assigned to a frequently recurring pattern. This comes close to the notion of "regular" correspondence patterns in traditional historical linguistics, with the difference that we need to choose a concrete threshold by which a pattern recurs in our data (the _pattern threshold_, which is set to 3 by default). By defining frequently recurring patterns as _regular_, we can now assess for individual cognate sets how many of the alignment sites reflect regular patterns and how many reflect irregular patterns. This allows us to distinguish _regular_ from _irregular_ cognate sets by calculating the proportion of alignment sites reflecting regular correspondence patterns and setting some threshold beyond which we consider a cognate set as irregular (the _cognate threshold_, which is set to 0.75 by default). Having identified regular cognates in a given wordlist, we can contrast them with irregular cognates and calculate the proportion of _reflexes_ (words in individual cognate sets) that appear in regular cognate sets. Given that this proportion gives us an idea of how many of the words in our data that appear in cognate relations can be assigned to some regular cognate set via regular sound correspondences, we interpret this proportion of _regular words_ as the _overall regularity_ of the dataset.
Selecting meaningful thresholds is not an easy task, specifically when calculations depend on multiple parameters as in our case. We decided to take a conservative pattern threshold of 3, which means that a pattern to be considered as regular must at least recur across three alignment sites in a given dataset. For the regularity of cognate sets, we decided for an even more conservative threshold of 0.75, which means that three quarters of the alignment sites in a given cognate set must reflect correspondence patterns that recur three or more times in the data.
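Given, for every cognate set, the correspondence patterns assigned to its alignment sites and the number of its reflexes, both measures reduce to a few lines of code. The sketch below is again only an illustration under assumed input structures (dictionaries keyed by cognate-set IDs); it is not the code used for our experiments, which derives these data from the correspondence-pattern detection in LingRex.

```python
from collections import Counter


def regularity(patterns_per_cognate, words_per_cognate,
               pattern_threshold=3, cognate_threshold=0.75):
    """Return the proportion of regular patterns and of regular words."""
    # a pattern is regular if it recurs in at least `pattern_threshold` sites
    counts = Counter(p for sites in patterns_per_cognate.values() for p in sites)
    regular = {p for p, n in counts.items() if n >= pattern_threshold}
    prop_patterns = len(regular) / len(counts)

    # a cognate set is regular if enough of its sites show regular patterns;
    # its reflexes then count as regular words
    regular_words = total_words = 0
    for cogid, sites in patterns_per_cognate.items():
        share = sum(p in regular for p in sites) / len(sites)
        total_words += words_per_cognate[cogid]
        if share >= cognate_threshold:
            regular_words += words_per_cognate[cogid]
    return prop_patterns, regular_words / total_words
```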
#### 3.2.3 Evaluating Trimmed Alignments
We make use of this interpretation of frequency as regularity in order to evaluate the success of our trimming operations. In order to check to which degree the trimming of phonetic alignments leads to an increase of overall regularity, modeled by taking the frequency of correspondence patterns into account, we compare three different constellations, namely (a) no trimming, (b) core-oriented trimming, and (c) gap-oriented trimming. We compare the three methods by computing the _proportion of regular correspondence patterns_ and the _proportion of regular words_ in all datasets, as outlined in the previous section. A successful trimming strategy should lead to an increase of both measures.
For further evaluation, we implement a random model against which we compare our targeted trimming strategies. To this end, we randomly delete the same number of alignment sites from each alignment as the gap- or core-oriented strategies did, while preserving the ratio of consonantal and vocalic alignment sites. With this step we assure that the resulting randomly trimmed alignment preserves the minimal CV/VC skeleton. For each dataset and trimming strategy, we run the random model 100 times and analyze how many times the random model surpasses the results of the targeted model with respect to the proportion of regular words. This error analysis helps us to assess whether a
Figure 3: Artificial example for the computation of gap profiles followed by trimming using the _core-oriented_ (left) and the _gap-oriented_ strategy (right).
trimming strategy systematically outperforms the random model.
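The random baseline itself can be sketched along the same lines (again an illustration under our own assumptions rather than the published code): given the consonant/vowel label of every column and the number of consonantal and vocalic sites deleted by the targeted strategy, the same numbers of sites are deleted at random while at least one column of each class is kept.

```python
import random


def random_trim(alignment, site_types, n_cons_removed, n_vow_removed):
    """Delete as many C and V sites as the targeted strategy did, but at random
    positions, always keeping at least one consonantal and one vocalic column."""
    c_sites = [i for i, t in enumerate(site_types) if t == "C"]
    v_sites = [i for i, t in enumerate(site_types) if t == "V"]
    drop = set(
        random.sample(c_sites, min(n_cons_removed, max(len(c_sites) - 1, 0))) +
        random.sample(v_sites, min(n_vow_removed, max(len(v_sites) - 1, 0))))
    keep = [i for i in range(len(site_types)) if i not in drop]
    return [[row[i] for i in keep] for row in alignment]
```

Repeating such a random deletion 100 times per dataset and recomputing the proportion of regular words yields the kind of comparison reported in Table 3.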
#### 3.2.4 Implementation
The new methods for the trimming of phonetic alignments are implemented in Python in the form of a plugin to the LingRex software package ([https://pypi.org/project/lingrex](https://pypi.org/project/lingrex), List and Forkel 2022, Version 1.3.0). LingRex itself extends LingPy ([https://pypi.org/project/lingpy](https://pypi.org/project/lingpy), List and Forkel 2021, Version 2.6.9) - which we use for phonetic alignments - by providing the method for correspondence pattern detection which we use to evaluate the consequences of trimming our alignments. For the handling of the cross-linguistic datasets provided in CLDF, CLDFBench ([https://pypi.org/project/cldfbench](https://pypi.org/project/cldfbench), Forkel and List 2020, Version 1.13.0) is used with the PyLexibank plugin ([https://pypi.org/project/pylexibank](https://pypi.org/project/pylexibank), Forkel et al. 2021, Version 3.4.0 ).
## 4 Results
### General Results
The two trimming strategies were applied to all datasets in our sample and regularity scores for the proportion of regular sound correspondence patterns and the proportion of regular words were computed. Given that the trimming strategies might reduce alignments only to a core skeleton (CV/VC), only those cognate sets whose alignments consist of at least one vocalic and one consonantal site were considered in this comparison. Phonetic alignments were carried out with the help of the default settings of the SCA method List (2012). Correspondence patterns were computed with the help of the method by List (2019). The results of our general comparison of different trimming strategies are presented in Table 2. For both the proportion of regular correspondence patterns and the proportion of regular words, the best result for each dataset is highlighted in the table. Without exception, the gap-oriented trimming strategy yields the highest proportion of regular correspondence patterns and the highest proportion of regular words. The core-oriented trimming strategy outperforms the baseline without trimming in some cases, but not consistently, often only leading to minimal improvements over the baseline. Random tests confirm this trend for both trimming strategies.
The reduction of alignment sites generally leads to a reduced number of correspondence patterns inferred from the individual datasets, no matter which trimming procedure is applied. This holds in all settings for both irregular and regular correspondence patterns (see Appendix A for details). Gap-oriented trimming removes more patterns than core-oriented trimming, which is also expected, given that in the latter setting we preserve some sites in the core that would otherwise have been trimmed. Figure 4 visualizes the reduction of correspondence patterns and alignment sites for all ten datasets in our sample. This analysis allows us to make two
Figure 4: Distribution of alignment sites per pattern with gap-oriented trimming and without. Each point on the x-axis represents one correspondence pattern, its value on the y-axis reflects the number of alignment sites it contains. The patterns are sorted on the x-axis by their number of alignment sites. Gap-oriented trimming and the baseline are distinguished by shape and color.
general observations. First, frequently recurring correspondence patterns tend to grow with respect to the number of alignment sites in which they recur after trimming. We attribute this to the greedy nature of the correspondence pattern inference procedure. Second, the long tail of correspondence patterns with very few alignment sites is substantially shortened in almost all datasets. This provides yet another perspective on the necessity of trimming in linguistics. Many of the patterns with a low number of alignment sites do indeed seem to contain erroneous alignment judgements, and trimming them successfully improves the distribution of sites across the patterns. The two datasets where the tail is not substantially shortened, crossandean and zhivlovobugrian, are also the ones with the lowest gain in the proportion of regular correspondence patterns. While there are still small improvements, in those cases the gap-oriented trimming does not seem as effective as for other datasets.
One likely explanation for this observation is the fact that both datasets, as well as hattorijaponic, include language varieties that are closely related to each other. zhivlovobugrian includes data from one subgroup of the Uralic language family, while the Quechua languages from crossandean are generally considered to be quite similar to each other and of shallow time-depth. In those cases, we expect many forms that are (nearly) identical to each other. This would directly result in correspondence patterns of high frequency, from which not too many sites are trimmed. Especially for crossandean, this is reflected by the fact that it has the highest proportion of regular words across all the datasets, pointing to a very regular set of lexical items.
Table 3 shows the results of our error analysis, comparing in how many out of 100 trials for each trimming strategy the proportion of regular words was higher in the random trial than in the concrete trimming method. As we can see from the table, the random-deletion model often outperforms the core-oriented trimming strategy, while it performs consistently worse than the gap-oriented trimming strategy. This clearly shows that it is not enough to trim alignment sites at random in order to reduce the noise in the data. As can be expected due to traditional theories on the regularity of sound change, specific sites, which reflect irregular correspondence patterns, must be targeted. For some datasets, the random model does surprisingly well in the core-oriented setting, and in some cases, it is even consistently better than the targeted core-strategy. This can be explained by the fact that the random trimming might also trim sites within the core - sites that apparently are very irregular in some languages - and hence improve the model in comparison to a trimming-model where a certain core is always preserved. Given that the model performs worse than the gap-oriented trimming in all languages, it seems recommendable to trim all sites above the gap-threshold, regardless of their position in the alignment. The successful trimming of sites that include a majority of gaps shows that those sites contain many irregular correspondences, and removing them improves our measures of regularity. We are now able to explain more words in the dataset with a lower number of regular correspondence patterns.
\begin{table}
\begin{tabular}{|l|c c|c c|c c|} \hline & \multicolumn{2}{c|}{Original} & \multicolumn{2}{c|}{Core} & \multicolumn{2}{c|}{Gap} \\ \hline Dataset & P & W & P & W & P & W \\ \hline
constenlachibchan & 0.71 & 0.50 & 0.69/ 0.70 & 0.46/ 0.47 & **0.76**/ 0.70 & **0.51**/ 0.43 \\
crossandean & 0.73 & 0.58 & 0.74/ 0.73 & 0.60/ 0.59 & **0.75**/ 0.73 & **0.64**/ 0.59 \\
dravlex & 0.56 & 0.23 & 0.57/ 0.55 & 0.27/ 0.23 & **0.61**/ 0.55 & **0.31**/ 0.24 \\
felekesemitic & 0.55 & 0.22 & 0.58/ 0.56 & 0.25/ 0.24 & **0.62**/ 0.56 & **0.29**/ 0.25 \\
hattorijaponic & 0.58 & 0.33 & 0.57/ 0.58 & 0.33/ 0.33 & **0.59**/ 0.58 & **0.38**/ 0.34 \\
houchinese & 0.65 & 0.40 & 0.65/ 0.65 & 0.42/ 0.40 & **0.69**/ 0.64 & **0.45**/ 0.35 \\
leekoreanic & 0.44 & 0.21 & 0.47/ 0.45 & 0.20/ 0.21 & **0.52**/ 0.47 & **0.22**/ 0.20 \\
robinsonap & 0.64 & 0.36 & 0.65/ 0.63 & 0.37/ 0.47 & **0.67**/ 0.63 & **0.41**/ 0.35 \\
walworthpolynesian & 0.66 & 0.40 & 0.66/ 0.65 & 0.40/ 0.39 & **0.72**/ 0.66 & **0.48**/ 0.39 \\
zhivlovobugrian & 0.57 & 0.24 & 0.58/ 0.57 & 0.26/ 0.25 & **0.61**/ 0.58 & **0.28**/ 0.26 \\ \hline \end{tabular}
\end{table}
Table 2: Proportion of regular correspondence patterns (P) and regular words (W) across all datasets after trimming. The numbers after the slashes provide the average from 100 iterations of the random model.
Further experimentation will have to be done with respect to different gap thresholds. Our initial threshold of 0.5 reflects the fact that we did not want to search for the threshold yielding the highest regularity, but rather to account heuristically for sites that include more gaps than reflexes of sound. Furthermore, the optimal threshold might well be different for each language family, given that correspondence patterns can differ greatly across languages. For example, patterns of change in which sounds are lost in certain positions might be very frequent for one language family, but not in another, leading to a different role of gaps in the correspondence patterns.
### Success and Failure of Trimming
Our implementation is fully compatible with computer-assisted workflows List (2017). We output all data in a way that experts can check them, and make both the trimmed sites as well as the resulting (ir)regular correspondence patterns explicit. This makes it possible to use the output of our method in various tasks in historical linguistics. Figure 5 provides one example from the constenlachibchan dataset of the output that our trimming provides. The figure presents a subset of cognate words for the concept ashes, including all gaps in the original alignment from the selected languages. All alignment sites which featured mostly gaps were successfully trimmed from the alignment and are displayed as greyed out in the example. Three alignment sites remain, which pattern well with the reconstruction of ASHES in Proto-Chibchan as provided by Pache (2018, 41). If the core-oriented trimming were performed instead, five instead of three alignment sites would have remained in the final alignment, as the two sites represented by the fourth and sixth column are within the preserved core. This case illustrates the advantage of the gap-oriented trimming strategy, as all spurious alignment sites are trimmed from the data, regardless of their position.
The closer inspection of individual trimmed alignments shows that our methods still have a lot of room for improvement. One major problem lies in the nature of the gap-oriented trimming. As we remove all sites which include mostly gaps, we might lose relevant correspondence patterns in which the gaps do not constitute an erroneous alignment, but rather an actual case of gaps in the pattern. It is a very reasonable assumption that there are language families in which merger with zero occurred for some correspondence pattern in the majority of languages. One such example can be found in Figure 6, where the trimmed alignments for the concept water in several Chibchan languages can be found. Again, we add to the data from the constenlachibchan dataset the reconstruction as provided by Pache (2018, 235). As we can see, the alignment site which includes the reflexes of the glottal stop reconstructed for Proto-Chibchan contains gaps in most languages. With the current methodology which focuses exclusively on gaps, this pattern will be trimmed from the alignment, despite reflecting relevant information. This is paralleled by discussions in biology, where gaps might contain phylogenetically relevant information (Tan
\begin{table}
\begin{tabular}{|l|c|c|} \hline Dataset & Core & Gap \\ \hline
constenlachibchan & 0.58 & 0.00 \\
crossandean & 0.02 & 0.00 \\
dravlex & 0.00 & 0.00 \\
felekesemitic & 0.17 & 0.01 \\
hattorijaponic & 0.40 & 0.00 \\
houchinese & 0.05 & 0.00 \\
leekoreanic & 0.54 & 0.06 \\
robinsonap & 0.34 & 0.00 \\
walworthpolynesian & 0.11 & 0.00 \\
zhivlovobugrian & 0.12 & 0.05 \\ \hline \end{tabular}
\end{table}
Table 3: Percentage of models with random deletion of alignment sites that achieved higher regularity than the respective trimming model.
Figure 5: Gap-oriented trimming for the cognate words of ashes in Chibchan languages
Figure 6: Trimming for the cognate words of water in Chibchan
et al., 2015). This opens up the question whether we will be able to feed such information into the trimming algorithm and preserve certain known patterns that would otherwise be trimmed.
What remains to be done in future studies is to manually evaluate trimmed correspondence patterns. This is a general task for historical language comparison, as linguists often base their reconstruction judgements on impressionistic statements of regularity or only report the most frequent correspondence patterns.
## 5 Conclusion
We introduce the concept of trimming multiple sequence alignments, originally developed for applications in evolutionary biology, to the field of historical linguistics. Trimming as such is already practiced implicitly in the comparative method, but as of yet, there are no computational implementations for the procedure. Our trimming algorithms provide considerable improvements compared to state-of-the-art alignment methods. By trimming the alignment sites down to a subsequence without gaps, we achieve a higher number of regular correspondence patterns and cognate sets than without trimming. Even though our technique is merely a very preliminary approximation to the classical workflow of the comparative method, the average regularity of correspondence patterns across data sets is improved in all settings analyzed. Our study thus shows that automated trimming is both achievable and worthwhile in computational historical linguistics.
The main targets of our trimming strategies were alignment sites that included more gaps than allowed by a certain threshold. Our model comparison shows that the best results are achieved when all such sites are trimmed, rather than only those at the periphery of stable alignment sites. Similar to biology, we find that alignment sites with many gaps contain divergent information, and trimming them improves the accuracy of our methods. It is also not sufficient to trim sites at random, since in that case we lose correspondence patterns that explain the data well. The examples we provide show both the potential and the methodological limitations of trimming alignment sites. The success of our strategy varies considerably between the datasets. A closer analysis of those cases where improvements are comparatively small could provide valuable information for improved trimming strategies to be implemented in the future.
## Limitations
In addition to the already discussed problems related to the exclusive focus on gaps, we have only tested the trimming with respect to a generalized function of regularity in each dataset. It is not yet clear whether this actually improves the computational success of secondary tasks like reconstructions or new methods of cognate detection.
## Ethics Statement
Our data are taken from publicly available sources. For this reason, we do not expect that there are ethical issues or conflicts of interest in our work.
## Supplementary Material
The supplementary material accompanying this study contains the data and code needed to replicate the results reported here, along with detailed information on installing and using the software. It is curated on GitHub ([https://github.com/pano-tacanan-history/trimming-paper](https://github.com/pano-tacanan-history/trimming-paper), Version 1.1) and has been archived with Zenodo ([https://doi.org/10.5281/zenodo.7780719](https://doi.org/10.5281/zenodo.7780719)).
## Acknowledgements
This research was supported by the Max Planck Society Research Grant _CALC3_ (FB, JML, [https://digling.org/calc/](https://digling.org/calc/)) and the ERC Consolidator Grant _ProduSemy_ (JML, Grant No. 101044282, see [https://doi.org/10.3030/101044282](https://doi.org/10.3030/101044282)). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency (nor any other funding agencies involved). Neither the European Union nor the granting authority can be held responsible for them. We thank Nathan W. Hill and Thiago C. Chacon and the anonymous reviewers for helpful comments. We are grateful to all people who share their data openly. |
2309.04915 | Exact oscillations and chaos on a non-Abelian coil | We construct new exact solutions of the Georgi-Glashow model in $3+1$
dimensions. These configurations are periodic in time but lead to a stationary
energy density and no energy flux. Nevertheless, they possess a characteristic
frequency which manifests itself through non-trivial resonances on test fields.
This allows us to interpret them as non-Abelian self sustained coils. We show
that for larger energies a transition to chaotic behavior takes place, which we
characterize by Poincar\'e sections, Fourier spectra and exponential growth of
the geodesic deviation in an effective Jacobi metric, the latter triggered by
parametric resonances. | Fabrizio Canfora, Nicolas Grandi, Marcelo Oyarzo, Julio Oliva | 2023-09-10T02:08:00Z | http://arxiv.org/abs/2309.04915v2 | # Exact oscillations and chaos on a non-Abelian coil
###### Abstract
We construct new exact solutions of the Georgi-Glashow model in \(3+1\) dimensions. These configurations are periodic in time but lead to a stationary energy density and no energy flux. Nevertheless, they possess a characteristic frequency which manifests itself through non-trivial resonances on test fields. This allows us to interpret them as non-Abelian self sustained coils. We show that for larger energies a transition to chaotic behavior takes place, which we characterize by Poincare sections, Fourier spectra and exponential growth of the geodesic deviation in an effective Jacobi metric, the latter triggered by parametric resonances.
## 1 Introduction
Time-periodic configurations arising in nonlinear hyperbolic problems are notoriously difficult to construct (see [1; 2; 3] and references therein) and, at the same time, extremely interesting physically (see e.g. [4; 5; 6]). In Euclidean spaces, topologically non-trivial configurations which are periodic in Euclidean time, representing instantons at finite temperature, are particularly relevant for the analysis of the phase diagram of gauge theories [7; 8]. The interest in these configurations arises, in part, from the difficulty of studying time dependent configurations in lattice gauge theories [22; 23]. It also experienced a remarkable growth in the recent years, due to the intensive research in out-of-equilibrium physics (see e.g. [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21] and references therein).
In the present paper we construct new exact, time dependent solutions to the Einstein-Yang-Mills-Higgs system in \(3+1\) dimensions, with quite intriguing physical properties. These configurations are periodic in real time in such a way that the energy-density is stationary and their non-Abelian Poynting vector vanishes, so that there is no energy flux. In spite of this, as we will show below, they possess a characteristic frequency which manifests itself through non-trivial resonances of test fields, charged under the non-Abelian gauge symmetry, which propagate in these backgrounds. These new analytic solutions possess genuine non-Abelian features as they can be interpreted as non-Abelian self-sustained coils.
Besides the intrinsic interest of constructing analytical time-dependent configurations, the technical tools allow us to discuss very interesting open questions on the chaotic behavior of Yang-Mills theory. The analysis of chaos in non-Abelian gauge theories raised huge interest from the early years soon after the discovery of Yang-Mills theory (see [33; 34; 35; 36; 37] and references therein). In recent years, two references in particular [38; 39] triggered a burst of activity on this topic due to the discovery of novel relations with holography and quantum chaos (see [40; 41; 42; 43] and references therein). The usual starting point of these analyses is a homogeneous Ansatz for the Yang-Mills-Higgs fields, very often with the Higgs field in the fundamental representation, which only depend on time, in such a manner that the corresponding field equations can be analyzed with the available tools of chaotic dynamics (see [44]). On the other hand, this starting point prevents, in many situations, the inclusion of non-trivial topological fluxes, which need either some non-trivial dependence on space-like coordinates, or the presence of the Higgs field in the adjoint representation, in order to get a gauge-invariant version of the magnetic flux. Therefore, if one is interested in the analysis of the interplay of topology and chaos, it is important to slightly generalize the notion of homogeneous field and to construct an Ansatz in which the fields depend non-trivially on the spatial coordinates, keeping the topological fluxes alive, but in such a way that the field equations reduce to a dynamical system.
An important technical tool to succeed in the aforementioned construction turns out to be the non-spherical hedgehog Ansatz developed for the Skyrme model, originally introduced in [24]-[30], which allowed the discovery of the first analytic and topologically non-trivial solutions in the Skyrme model which are periodic in time in such a way that the energy-momentum tensor is static [31; 32]. As explained below, in a certain sense the results presented here represent an extension of those in [31] and [32] to the Yang-Mills-Higgs case, with the Higgs in the adjoint representation of the gauge group.
At first glance, the analytic solutions representing non-Abelian self-sustained coils, to be described in the following sections, could suggest the appearance of some integrable sector of the theory. In fact, this is not the case: the chaotic behavior appears anyway. However, in the analysis of the chaotic regime, the analytic solutions manifest themselves through "integrability islands" in the corresponding Poincare sections. One of the main tools that we will use in the analysis of chaotic dynamics was introduced in [65] and is based on the Jacobi metric [63]. Our analysis shows that such a tool, which to the best of our knowledge has not been employed so far in the analysis of chaos in Yang-Mills theory, is actually very effective when compared with different techniques.
The paper is organized as follows: in Section 2.1 the conventions and the Georgi-Glashow model are presented. In Section 2.2 we introduce the time dependent Ansatz for the Yang-Mills and Higgs fields in flat spacetime. Later, the new exact solutions of the system are derived, as well as some of their perturbations. The cases with and without vacuum expectation value are studied separately in Sections 3.1 and 3.2, respectively. In Section 5 we study the resonance frequencies of the configurations with a quantum scalar field probe in the fundamental of \(SU(2)\). Some remarks and conclusions are given in the last section.
## 2 Basic setup
In this section, the model and the time-dependent Ansatz are introduced, together with the corresponding equations of motion and the resulting energy momentum tensor and non-Abelian Poynting vector.
### The model
Our starting point is the Georgi-Glashow model for \(SU\left(2\right)\), with field content given by a Lie algebra valued 1-form gauge potential \(A\) and a Higgs field \(\Phi\) which transforms in the adjoint representation. They are algebra valued objects
\[A=A^{a}_{\ \mu}t_{a}dx^{\mu}\,\qquad\qquad\Phi=\Phi^{a}t_{a}\, \tag{1}\]
where we consider the anti-Hermitian matrices \(t_{a}\equiv i\sigma_{a}\), with \(\{\sigma_{a}\,\ a=1,2,3\}\) the Pauli matrices. These generators fulfill \(t_{a}t_{b}=-\delta_{ab}-\varepsilon_{abc}t_{c}\).
The action for the model reads
\[I\left[A,\Phi\right]=\int d^{4}x\sqrt{-g}\left(-\frac{1}{4e^{2}}F^{a\mu\nu}F_{ a\mu\nu}-\frac{1}{2e^{2}}D_{\mu}\Phi^{a}D^{\mu}\Phi_{a}-\frac{\lambda}{4} \left(\Phi^{a}\Phi_{a}-\nu^{2}\right)^{2}\right)\, \tag{2}\]
where \(e\) is a positive gauge coupling constant, \(\lambda\) is a positive scalar self coupling, and \(\nu\) is the vacuum expectation value of the Higgs field. As usual, the field strength and the covariant derivative are defined by
\[F_{\mu\nu} = \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+\left[A_{\mu},A_{\nu} \right]\, \tag{3}\] \[D_{\mu}\cdot = \nabla_{\mu}\cdot+\left[A_{\mu},\cdot\right]. \tag{4}\]
The field equations are obtained by computing the stationary variation with respect to the fields \(A^{a}_{\ \mu}\) and \(\Phi^{a}\) which respectively give the following expressions
\[D_{\mu}F^{\mu\nu}-\left[\Phi,D^{\nu}\Phi\right] = 0\, \tag{5}\] \[D_{\mu}D^{\mu}\Phi-e^{2}\lambda\left(\Phi^{a}\Phi_{a}-\nu^{2} \right)\Phi = 0. \tag{6}\]
The energy momentum tensor of this model is computed by varying the action with respect to the metric, resulting in
\[T_{\mu\nu}=T^{\sf Gauge}_{\mu\nu}+T^{\sf Higgs}_{\mu\nu}\, \tag{7}\]
with
\[T^{\sf Gauge}_{\mu\nu}=\frac{1}{e^{2}}\left(F_{a\mu\lambda}F^{a \ \lambda}_{\ \nu}-\frac{1}{4}g_{\mu\nu}F^{a\rho\sigma}F_{a\rho\sigma}\right)\, \tag{8}\] \[T^{\sf Higgs}_{\mu\nu}=\frac{1}{e^{2}}\left(D_{\mu}\Phi^{a}D_{\nu }\Phi_{a}-\frac{1}{2}g_{\mu\nu}D_{\sigma}\Phi^{a}D^{\sigma}\Phi_{a}-g_{\mu \nu}\frac{\lambda e^{2}}{4}\left(\Phi^{a}\Phi_{a}-\nu^{2}\right)^{2}\right). \tag{9}\]
From now on we set \(e=1\) without loss of generality, since the only relevant combination is \(\lambda e^{2}\).
### The time dependent Ansatz
In the present section we define an appropriate Ansatz which allows us to solve the field equations analytically with a time dependent profile.
Let us first fix the geometry considering flat spacetime in cylindric coordinates
\[ds^{2}=-dt^{2}+dz^{2}+d\rho^{2}+\rho^{2}d\varphi^{2}. \tag{10}\]
The ranges of the coordinates are the usual ones, \(\varphi\in[0,2\pi]\) with \(\varphi\sim\varphi+2\pi\), \(\rho\in[0,+\infty[\) and \(t,z\in\mathbb{R}\). In this background we define our Ansatz for the gauge and Higgs fields as
\[A = -\frac{W(t)}{\sqrt{2}}\left(t_{1}\,\rho d\varphi-t_{2}\,d\rho \right)-\frac{1}{2}t_{3}\,d\varphi\, \tag{11}\] \[\Phi = G(t)\,t_{3}. \tag{12}\]
Both the gauge \(W(t)\) and Higgs \(G(t)\) profiles depend explicitly on time. The non-Abelian field strength defined in (3) for the Ansatz (11) reads
\[F=\frac{\dot{W}}{\sqrt{2}}\left(\,dt\wedge d\rho\,t_{2}-\rho\,dt\wedge d\varphi \,t_{1}\right)-W^{2}\rho\,d\rho\wedge d\varphi\,t_{3}. \tag{13}\]
It has two electric components: one is along the second generator of the gauge group and points in the radial spatial direction, while the other is aligned with the first generator and points around the cylinder. The magnetic field is aligned with the third generator and points along the axis of the cylinder.
With the above Ansatz, the energy momentum tensor has a natural cylindrical symmetry, and it can be written as
\[T_{\mu\nu}\,dx^{\mu}\otimes dx^{\nu}=\frac{1}{e^{2}}\left(\mathcal{E}\,dt^{2} -p_{\perp}(d\rho^{2}+\rho^{2}d\theta^{2})-p_{z}\,dz^{2}\right)\,, \tag{14}\]
with
\[\mathcal{E}=\frac{1}{2}\left(\dot{G}^{2}+\dot{W}^{2}\right)+\frac{1}{2}W^{2}\left(4G^{2}+W^{2}\right)+\frac{\lambda}{4}\left(G^{2}-\nu^{2}\right)^{2}\, \tag{15}\] \[p_{\perp}=-\frac{1}{2}\left(\dot{G}^{2}+W^{4}\right)+\frac{\lambda}{4}\left(G^{2}-\nu^{2}\right)^{2}\,, \tag{16}\] \[p_{z}=p_{\perp}+W^{2}\left(2G^{2}+W^{2}\right)-\frac{1}{2}\dot{W}^{2}\,. \tag{17}\]
It is worth emphasizing that, in spite of considering a time-dependent configuration, there are no energy fluxes. This feature can be interpreted as an interplay between the non-Abelian character of the solution and the time dependence of the gauge fields, in such a way that the trace in the definition of the energy-momentum tensor cancels out the radiation of the gauge field. We will discuss this feature in more detail in the forthcoming sections.
To give some physical content to the above construction, let us first recall one of the most useful features of the Georgi-Glashow model: the presence of a scalar field in the adjoint representation allows us to construct a gauge invariant quantity representing the effective Abelian gauge field of the theory
\[F_{\sf eff}={\rm tr}\left(\Phi F\right). \tag{18}\]
For the configuration (11) and (12), the above projection gives
\[F_{\sf eff}=\rho\,G\,W^{2}d\rho\wedge d\varphi. \tag{19}\]
In the present case, the 2-form (19) corresponds to an effective uniform Abelian magnetic flux along the \(z\)-axis. The exact configurations that will be discussed in the following are periodic in time, hence the effective Abelian magnetic field will be periodic as well.
Now let us consider a cylinder of radius \(R_{0}\) inside of which the fields are given by the Ansatz (11)-(12), while they vanish outside. In order to match the fields in the interior of the cylinder with those outside it, we require the usual Maxwell junction conditions for the corresponding Abelian part (19). These conditions tell us that the normal component to the interface of the effective Abelian magnetic field must be continuous, which is satisfied by \(B_{\sf eff}=G\left(t\right)W\left(t\right)^{2}\partial_{z}\). Also, since the Poynting vector is zero everywhere, there is no energy flux outside the cylinder. Consequently, if we are able to construct explicitly exact solutions for the gauge and Higgs profiles which are periodic in time, then such configurations can be interpreted as coils with a self-generated \(AC\) current.
A very important property of the ansatz for the gauge and Higgs fields given in Eqs. (11) and (12) is that it reduces the full coupled system of non-linear partial differential equations to the following two coupled ordinary differential equations
\[\frac{d^{2}G}{dt^{2}}+4W^{2}G+\lambda G\left(G^{2}-\nu^{2}\right) =0\, \tag{20}\] \[\frac{d^{2}W}{dt^{2}}+4G^{2}W+2W^{3}=0. \tag{21}\]
It is conceptually useful to rewrite this system of second order differential equations as a Newtonian system for the time-dependent variables \(\left(G,W\right)\) in the form
\[\frac{d^{2}G}{dt^{2}}=-\frac{\partial}{\partial G}V\left(G,W\right)\,\qquad\qquad\frac{d^{2}W}{dt^{2}}=-\frac{ \partial}{\partial W}V\left(G,W\right)\, \tag{22}\]
in terms of the effective potential
\[V(G,W)=2W^{2}G^{2}+\frac{1}{2}W^{4}+\frac{\lambda}{4}\left(G^{2}-\nu^{2} \right)^{2}. \tag{23}\]
The additive constant was introduced to set to zero the energy of the configuration with a non-trivial vacuum expectation value. This potential is bounded from below and has two global minima at \(G=\pm\nu\,\ W=0\) and a saddle point at \(G=0,W=0\). A plot of the level curves of this potential is shown in Fig. 1.
As a first interesting result, notice that by integrating the system (22)-(23) once we recover the conservation of the energy density \({\cal E}\), in spite of the fact that the field configuration is time dependent. This is consistent with the absence of energy fluxes in our configuration.
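As a cross-check of this statement, the reduced system can be integrated numerically. The following minimal sketch (assuming Python with NumPy and SciPy; the values of \(\lambda\), \(\nu\) and the initial data are purely illustrative and are not taken from the paper) integrates Eqs. (22)-(23) and monitors the energy density (15) along the trajectory:

```python
# Minimal sketch: integrate the Newtonian system (22)-(23) and check that the
# energy density (15) stays constant along the trajectory.
import numpy as np
from scipy.integrate import solve_ivp

lam, nu = 6.0, 1.0                      # illustrative coupling and VEV

def V(G, W):
    # effective potential (23)
    return 2.0*W**2*G**2 + 0.5*W**4 + 0.25*lam*(G**2 - nu**2)**2

def rhs(t, y):
    G, W, pG, pW = y
    dVdG = 4.0*W**2*G + lam*G*(G**2 - nu**2)
    dVdW = 4.0*G**2*W + 2.0*W**3
    return [pG, pW, -dVdG, -dVdW]

y0 = [0.9, 0.3, 0.0, 0.0]               # illustrative initial data (G, W, dG/dt, dW/dt)
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-10, atol=1e-12)

G, W, pG, pW = sol.y
energy = 0.5*(pG**2 + pW**2) + V(G, W)  # energy density (15)
print("max energy drift:", np.max(np.abs(energy - energy[0])))  # ~0 up to solver error
```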
Naively, one could conclude that this configuration is static and hence that there is no characteristic frequency of the system. Nevertheless, this is not the case, as we will show by computing the time-dependent transition amplitude of a scalar probe field in the fundamental representation propagating in the exact solutions of the above form. Such a transition amplitude discloses a clear resonance effect when the frequency of the test field matches the characteristic frequency of the background solutions. The present situation is reminiscent of the spin-from-isospin effect for Skyrmions and non-Abelian monopoles [59; 60; 61], in which case the energy-momentum tensor is spherically symmetric and yet the configurations are not spherically symmetric in the obvious sense, as the angular momentum operator is naturally supplemented by an extra term arising from the internal symmetry group. This fact is behind the commonly used statement that "gauge fields are invariant up to an internal transformation".
Notice that the equations (20) and (21) have the shift symmetry
\[t\to t-t_{0}\, \tag{24}\]
which implies that one of the integration constants of the system sets the zero of the time variable. Moreover, they have the scaling invariance
\[(t,W,G,\nu,\lambda)\rightarrow\left(\frac{t}{T},TW,TG,T\nu,\lambda\right)\, \tag{25}\]
where \(T\) is an arbitrary constant. For vanishing \(\nu\) this implies that a second integration constant sets the time scale and the overall scale of the fields. For \(\nu\) finite, these can be fixed by the value of \(\nu\).
Figure 1: Level curves of the effective potential \(V\) for the cases \(\nu\neq 0\) (left) and \(\nu=0\) (right).
## 3 Exact solutions
In this section we present our exact solutions and analyze their properties, studying separately the cases with and without vacuum expectation value. In each case, we explore the vacuum and perturbative solutions, the pure Yang-Mills and pure Higgs cases, and the solutions with both fields turned on.
### Configurations with non-vanishing vacuum expectation value
In this section we will consider configurations with non-vanishing vacuum expectation value \(\nu\neq 0\).
Perturbative solution: The first trivial observation in this case is that there is a static vacuum solution in which \(W(t)=0\) and \(G(t)=\pm\nu\). Such a solution can be perturbed as
\[W(t)=\epsilon w(t)\, \tag{10}\] \[G(t)=\pm\nu+\epsilon g(t)\, \tag{11}\]
where \(\epsilon\) is a small parameter and \(w(t)\) and \(g(t)\) are new unknown functions. Plugging this back into the equations of motion and expanding to first order in \(\epsilon\), we get a perturbative solution
\[W(t)=\epsilon\sin(2\nu(t-t_{0})+\delta)\, \tag{12}\] \[G(t)=\pm\nu+\epsilon a\cos(\sqrt{2\lambda}\nu(t-t_{0}))\, \tag{13}\]
where \(\epsilon\) now becomes a small integration constant, and \(a,t_{0}\) and \(\delta\) are integration constants of order one. Notice that these solutions are periodic only when \(\sqrt{\lambda/2}=p/q\) with \(p,q\in\mathbb{N}\). The period then reads
\[t\sim t+\frac{\pi}{\nu}\sqrt{\frac{2}{\lambda}}\,p=t+\frac{\pi}{\nu}q \tag{14}\]
Pure Yang-Mills solution: There is a pure Yang-Mills sector of the theory, which is obtained by setting \(G(t)=0\). In this case, the field equations (20)-(21) reduce to the equation of a quartic oscillator, namely
\[\frac{d^{2}W}{dt^{2}}+2W^{3}=0\, \tag{15}\]
that can be solved in the form
\[W(t)=\pm a\,\operatorname{sn}(a\left(t-t_{0}\right),-1)\ \, \tag{16}\]
where \(\operatorname{sn}(x,m)\) is the Jacobi elliptic sine function, and \(a\) is a constant of integration. Notice that the same constant sets both the time scale and the amplitude of the oscillation. This can be traced back to the scaling symmetry (25), taking into account that the value of \(\nu\) does not enter into the present pure Yang-Mills solution.
We can calculate the energy density of the configuration according to the expression (15), obtaining
\[\mathcal{E}=\frac{1}{4}\left(2a^{4}+\lambda\nu^{4}\right)\, \tag{17}\]
where we see that the energy density is conserved.
These solutions are periodic; their period can be obtained from the periodicity properties of the Jacobi elliptic sine, resulting in the expression
\[t\sim t+\frac{2}{a}K_{20}(-1)\, \tag{12}\]
where the function \(K_{pq}(m)\) has been defined according to
\[K_{pq}(m)=p\,K(m)+i\,q\,K(1-m). \tag{13}\]
In this expression, \(K\) is the complete elliptic integral of the first kind, and \(p,q\in\mathbb{N}\). Here and in what follows, we are choosing the values of \(p\) and \(q\) as the smallest integers that make the resulting period real.
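As a simple numerical illustration (a sketch assuming Python with SciPy; the value of \(a\) is arbitrary), one can integrate the quartic-oscillator equation from the initial data of the sn solution, \(W(t_{0})=0\) and \(\dot{W}(t_{0})=a^{2}\), and check that the trajectory returns to this state after one period \((2/a)K_{20}(-1)=4K(-1)/a\):

```python
# Sketch: verify the period of the pure Yang-Mills quartic oscillator numerically.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipk

a = 0.7
T = 4.0*ellipk(-1.0)/a                  # = (2/a) K_20(-1), period of a*sn(a t, -1)

def rhs(t, y):
    W, pW = y
    return [pW, -2.0*W**3]

sol = solve_ivp(rhs, (0.0, T), [0.0, a**2], rtol=1e-11, atol=1e-13)
print(sol.y[:, -1])                     # should return [0, a^2] up to solver tolerance
```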
Pure Higgs solution: There is also a pure Higgs configuration, which is obtained by setting \(W(t)=0\) and solving the remaining equation for \(G(t)\), resulting in
\[G(t)=\pm\nu\sqrt{\frac{2q}{1+q}}\,\mathrm{sn}\!\left(\sqrt{\frac{-\lambda}{1+ q}}\,\nu\,(t-t_{0}),q\right)\, \tag{14}\]
where \(q\) is a constant of integration.
As for the pure Yang-Mills case, this is a periodic solution whose period is given by that of the Jacobi sine, in the form
\[t\sim t+\frac{2}{\nu}\sqrt{\frac{1+q}{-\lambda}}K_{20}(q)\, \tag{15}\]
where \(K_{pq}\) is defined as in equation (13).
Solution (14) is explicitly real for \(q<-1\). However, using the definition and properties of the Jacobi elliptic functions, it can be analytically continued to \(q\in(-1,0]\) in the form
\[G(t)=\pm\nu\sqrt{\frac{-2q}{1+q}}\,\mathrm{sc}\!\left(-\sqrt{\frac{\lambda}{1 +q}}\,\nu\,(t-t_{0}),1-q\right)\, \tag{16}\]
where \(\mathrm{sc}(x,m)=i\,\mathrm{sn}(-ix,1-m)\) is another Jacobi function.
Expression (16) is again periodic, but in this case the period is written in the form
\[t\sim t+\frac{2}{\nu}\sqrt{\frac{1+q}{\lambda}}K_{22}(1-q)\, \tag{17}\]
which connects smoothly to (15) as \(q\to-1\).
This configuration has an energy density given by
\[\mathcal{E}=\frac{\lambda}{4}\left(\frac{1-q}{1+q}\right)^{2}\nu^{4}\, \tag{18}\]
which is again conserved.
Solution with both fields: For the generic case with non-vanishing Higgs, the solution reads
\[G(t) = \pm_{1}\sqrt{2}\,\nu\,\mathrm{dn}\!\left(\sqrt{8-\lambda}\,\nu\left( t-t_{0}\right),\frac{\lambda}{8-\lambda}\right)\, \tag{29}\] \[W(t) = \pm_{2}\sqrt{\frac{\lambda(\lambda-4)}{8-\lambda}}\,\nu\,\mathrm{ sn}\!\left(\sqrt{8-\lambda}\,\nu\left(t-t_{0}\right),\frac{\lambda}{8-\lambda} \right)\, \tag{30}\]
where \(\mathrm{dn}^{2}(x,m)=1-m\,\mathrm{sn}^{2}(x,m)\) is another Jacobi elliptic function. This solution is explicitly real for \(\lambda\in[4,8)\). There is no integration constant controlling the frequency of the oscillation, nor its amplitude. However, the vacuum expectation value parameter \(\nu\) changes the amplitude and the frequency of the configuration in the same amount, due to the scaling symmetry discussed in the previous section (25). The period is given by
\[t\sim t+\frac{2}{\nu\sqrt{8-\lambda}}K_{22}\!\left(\frac{\lambda}{8-\lambda} \right)\,. \tag{31}\]
Using the identities and the relations between the Jacobi elliptic functions one can write (29)-(30) in an alternative form which is manifestly real for \(\lambda>8\). In such case we have
\[G\left(t\right) = \pm_{1}\sqrt{2}\,\nu\,\mathrm{dc}\!\left(\sqrt{\lambda-8}\,\nu \,\left(t-t_{0}\right),1-\frac{\lambda}{8-\lambda}\right)\, \tag{32}\] \[W\left(t\right) = \pm_{2}\sqrt{\frac{\lambda\left(\lambda-4\right)}{\lambda-8}}\, \nu\,\mathrm{sc}\!\left(\sqrt{\lambda-8}\,\nu\left(t-t_{0}\right),1-\frac{ \lambda}{8-\lambda}\right). \tag{33}\]
where \(\mathrm{dc}(x,m)=\mathrm{dn}(-ix,1-m)\) is a further elliptic function. The period now reads
\[t\sim t+\frac{2}{\nu\sqrt{\lambda-8}}K_{22}\!\left(1-\frac{\lambda}{8-\lambda }\right). \tag{34}\]
The \(\lambda=8\) case can be integrated directly from equations (20)-(21) and it reads
\[G\left(t\right) = \sqrt{2}\,\nu\,\sin\!\left(2\sqrt{2}\nu\left(t-t_{0}\right) \right)\, \tag{35}\] \[W\left(t\right) = 2\,\nu\,\cos\!\left(2\sqrt{2}\nu\left(t-t_{0}\right)\right). \tag{36}\]
Here, \(t_{0}\) is the only integration constant of the solution. The period of this solution can be written as
\[t\sim t+\frac{\pi}{\sqrt{2}\nu}. \tag{37}\]
For any value of the coupling \(\lambda\), the energy density of this exact configuration is given by the expression
\[\mathcal{E}=\frac{1}{4}(2\lambda-7)\lambda\nu^{4}\, \tag{38}\]
which behaves smoothly in the \(\lambda\to 8\) limit.
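The \(\lambda=8\) case is simple enough to be verified symbolically. A minimal sketch (assuming Python with SymPy, and setting \(t_{0}=0\) without loss of generality) checks that the profiles (35)-(36) solve the field equations (20)-(21) and reproduce the energy density (38), which equals \(18\nu^{4}\) at \(\lambda=8\):

```python
# Sketch: symbolic check of the lambda = 8 solution and of its energy density.
import sympy as sp

t, nu = sp.symbols('t nu', positive=True)
lam = 8
G = sp.sqrt(2)*nu*sp.sin(2*sp.sqrt(2)*nu*t)
W = 2*nu*sp.cos(2*sp.sqrt(2)*nu*t)

eqG = sp.diff(G, t, 2) + 4*W**2*G + lam*G*(G**2 - nu**2)     # Eq. (20)
eqW = sp.diff(W, t, 2) + 4*G**2*W + 2*W**3                   # Eq. (21)
E = (sp.diff(G, t)**2 + sp.diff(W, t)**2)/2 \
    + W**2*(4*G**2 + W**2)/2 + sp.Rational(lam, 4)*(G**2 - nu**2)**2

print(sp.simplify(eqG), sp.simplify(eqW))   # both reduce to 0
print(sp.simplify(E))                        # 18*nu**4, i.e. (1/4)(2*8 - 7)*8*nu^4
```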
In Fig. 2 we overlap the solutions found in this section with the level curves of the effective potential (23) for \(\nu=1\) and \(\lambda=6,8,12\).
### Configurations with vanishing vacuum expectation value
Perturbative solution: With \(\nu=0\) we still have a static solution, now at \(W(t)=G(t)=0\), that can be perturbed to obtain
\[W(t)=\epsilon(t-t_{0})\, \tag{3.26}\] \[G(t)=\epsilon a(t-t_{0}). \tag{3.27}\]
Higher order perturbations result in further corrections to the overall coefficient of the linear term, up to order \(\epsilon^{3}\) at which there is an additional correction which goes as \((t-t_{0})^{5}\).
Pure Yang-Mills solution:The pure Yang-Mills configuration is the same as in the case with non-vanishing vacuum expectation value, which is to be expected since the Higgs field plays no role in it.
Pure Higgs solution:Regarding the pure Higgs configuration, it satisfies the equation of motion
\[\frac{d^{2}G}{dt^{2}}+\lambda G^{3}=0. \tag{3.28}\]
This is again a quartic oscillator, with solution
\[G(t)=\pm\sqrt{2}a\,\mathrm{sn}\left(a\sqrt{\lambda}(t-t_{0}),-1\right)\, \tag{3.29}\]
where \(a\) is an integration constant. The period takes the form
\[t\sim t+\frac{2}{a\sqrt{\lambda}}K_{20}(-1). \tag{3.30}\]
The energy density on the other hand, reads
\[\mathcal{E}=a^{4}\lambda. \tag{3.31}\]
It is interesting to notice that formulas (3.29) to (3.31) can be obtained from the corresponding equations for the finite vacuum expectation value case, by taking the limit \(\nu\to 0\) and \(q\to-1\) with the constraint \(\nu/\sqrt{1+q}=i\,a\).
Figure 2: Solutions with non-vanishing vacuum expectation value \(\nu\neq 0\), for the particular cases \(\lambda=6,8,12\) from left to right. In yellow is the perturbative solution, while in orange is the exact solution. The vertical and horizontal lines represent the pure Yang-Mills and pure Higgs solutions respectively, the last one corresponding to the smallest possible amplitude \(q\to-\infty\).
Solution with both fields: The fact that for the linearly perturbed solution the Higgs profile \(G\) is proportional to the Yang-Mills profile \(W\) suggests that in the non-perturbative case we can try to reduce the equations (20)-(21) to a single equation, by considering the ansatz
\[G(t)=\pm\sqrt{\frac{2}{4-\lambda}}W(t). \tag{32}\]
Here the proportionality factor has been chosen so that the resulting equations for \(G(t)\) and \(W(t)\) coincide. Notice that the shape of the potential requires \(\lambda\) to be positive, and the ansatz (32) implies that \(\lambda<4\). The resulting master equation is given by
\[\frac{d^{2}W}{dt^{2}}+2\left(\frac{8-\lambda}{4-\lambda}\right)W^{3}=0\, \tag{33}\]
which can be solved by
\[W(t)=\pm a\,\operatorname{sn}\!\left(a\sqrt{\frac{8-\lambda}{4-\lambda}}\left( t-t_{0}\right),-1\right)\, \tag{34}\]
Notice that as before the amplitude \(a\) is tied to the frequency due to the scaling symmetry, but now it is an integration constant. Consequently, the profile for the Higgs field reads
\[G(t)=\pm\sqrt{\frac{2}{4-\lambda}}\,a\operatorname{sn}\!\left(a\sqrt{\frac{8- \lambda}{4-\lambda}}\left(t-t_{0}\right),-1\right). \tag{35}\]
These solutions are explicitly real for \(\lambda<4\) and cannot be extended to \(\lambda>4\).
The energy density of this configuration is
\[\mathcal{E}=\frac{a^{4}\left(\lambda-8\right)\left(\lambda-6\right)}{2\left( \lambda-4\right)^{2}}\, \tag{36}\]
while the period can be obtained in terms of the complete elliptic integral of the first kind \(K\) as
\[t\sim t+\frac{2}{a}\sqrt{\frac{\lambda-4}{\lambda-8}}\,K_{20}(-1). \tag{37}\]
Figure 3: Solutions with vanishing vacuum expectation value \(\nu=0\), for the particular cases \(\lambda=1,2,3\) from left to right. In orange is the exact solution. The vertical and horizontal lines represent the pure Yang-Mills and pure Higgs solutions respectively.
## 4 Chaotic behaviour
At first glance, one could think that the appearance of the nice analytic solutions described in the previous sections hints at the integrability of the Yang-Mills-Higgs sector described by the Ansatz in Eqs. (11) and (12). This possibility is suggested by the fact that, to the best of our knowledge, no analytic solutions have been found using the homogeneous Ansatz usually employed in the analysis of chaos in Yang-Mills theory (see [40; 41; 42; 43] and references therein). In the following sections we will show that this is not the case: the chaotic behavior appears nevertheless if one increases the energy of the system.
To characterize the chaotic regime, we will use three different and somewhat complementary techniques:
1. **Poincaré sections:** The phase space of the system is 4-dimensional and can be parameterized by the coordinates \((G,W)\) and the canonical momenta (\(p_{G}=\dot{G},p_{W}=\dot{W}\)). The conservation of the energy (15) reduces by one the dimensionality of the space in which the trajectories develop. Poincaré sections are then constructed by performing one further projection onto the plane \((W,p_{W})\). Regular trajectories appear in the Poincaré section as sets of points that can be connected with smooth curves. Chaotic behavior, on the other hand, corresponds to sparse sets that fill the section. The presence of analytic solutions manifests itself through "integrability islands".
2. **Fourier analysis:** Chaos can often be confused with quasiperiodic behavior, i.e. a combination of linear oscillators with non-commensurable frequencies. In order to exclude the latter possibility from our analysis, we consider the discrete Fourier spectrum of one of the canonical variables. A non-smooth Fourier spectrum is a clear signature of a chaotic regime.
3. **Geodesic divergence:** In classical mechanics, the time evolution of a Newtonian system of the kind defined by (22)-(23) can be described as a non-affine parametrization of the geodesic curves on a manifold endowed with the so-called Jacobi metric [62; 63], defined according to \[ds^{2}=g_{ij}\,dq^{i}dq^{j} =2(\mathcal{E}-V)\left(dW^{2}+dG^{2}\right)\,\] \[=4(\mathcal{E}-V)^{2}\,dt^{2}\,\] (24) where \(i,j\) run on the independent generalized coordinates \(q^{i}=(G,W)\) and \(\mathcal{E}\) is the energy of the system (15). The relation between the curvature of the manifold and the stability of the geodesics is expressed in terms of the Jacobi-Levi-Civita equation for the Jacobi field \(\eta^{i}\), measuring the deviation between two infinitesimally close geodesics \[\nabla_{s}^{2}\eta^{i}-\mathcal{R}^{i}_{\ jkl}\frac{dq^{j}}{ds}\frac{dq^{k}}{ ds}\eta^{l}=0\.\] (25)
where \(\nabla_{s}\) is the covariant derivative. In a two dimensional manifold the Riemann tensor can be written in terms of the scalar curvature \(\mathcal{R}\) in the form \(\mathcal{R}^{i}{}_{jkl}=\mathcal{R}(\delta^{i}{}_{k}g_{jl}-\delta^{i}{}_{l}g_{ jk})/2\). This implies that
\[\nabla_{s}^{2}\eta^{i}+\frac{\mathcal{R}}{2}\left(\eta^{i}-\frac{dq^{i}}{ds} \frac{dq^{j}}{ds}\eta_{j}\right)=0. \tag{4.3}\]
Where in the second term we used the fact that \(s\) is an affine parameter and the tangent vector is normalized to one. Contracting with \(dq_{i}/ds\) and \(\varepsilon_{ij}\,dq^{j}/ds\) (with \(\varepsilon_{ij}\) the Levi-Civita tensor) and taking into account the geodesic equation \(\nabla_{s}(dq^{i}/ds)=0\), we can write
\[\frac{d^{2}\eta_{\perp}}{ds^{2}}+\frac{\mathcal{R}}{2}\eta_{\perp}=0\,\qquad \qquad\nabla_{s}^{2}\eta_{\parallel}=0\, \tag{4.4}\]
Here we have defined \(\eta_{\parallel}=\eta^{i}(dq^{i}/ds)\) and \(\eta_{\perp}=\epsilon_{ij}\eta^{i}(dq^{j}/ds)\).
It is clear that a negative scalar curvature \(\mathcal{R}<0\) would lead to solutions with an exponential growth in time for \(\eta_{\perp}\). However, for our Newtonian system the Ricci scalar \(\mathcal{R}\) is given by
\[\mathcal{R} =\frac{1}{2\left(\mathcal{E}-V\right)^{3}}\left[4W^{2}\left(2G^{ 2}+W^{2}\right)^{2}+G^{2}\left(4W^{2}+\lambda\left(G^{2}-\nu^{2}\right)\right) ^{2}\right]+\] \[\quad+\frac{1}{2\left(\mathcal{E}-V\right)^{2}}\left[(4+3\lambda )G^{2}+10W^{2}-\lambda\nu^{2}\right]\, \tag{4.5}\]
Recalling that \(\mathcal{E}-V>0\), the curvature scalar has a chance to be negative only when \(G\) and \(W\) are small enough, so that the last term in the second line dominates over the rest. In most configurations this is not the case, thus the instabilities we eventually find should come from parametric resonance, as the scalar \(\mathcal{R}\) is time dependent (for a similar situation, see [65]).
The equation for the nearby geodesic deviation can be rewritten as
\[\frac{d^{2}Y}{dt^{2}}+\Sigma\,Y=0\, \tag{4.6}\]
where \(Y=\eta_{\perp}/\sqrt{\mathcal{E}-V}\) and \(\Sigma\) is a function of time defined as
\[\Sigma =\,2\mathcal{R}(\mathcal{E}-V)^{2}-\frac{1}{2(\mathcal{E}-V)}\frac{d^{2 }V}{dt^{2}}-\frac{3}{4(\mathcal{E}-V)^{2}}\left(\frac{dV}{dt}\right)^{2} \tag{4.7}\]
The form of the solution \(Y\) as a function of time gives us insight about the behaviour of the perturbation of the configuration, varying the initial conditions. If \(Y\) is constant, it gives a signal of stability, while if \(Y\) grows exponentially in time, the system could develop a chaotic behavior in that region.
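A minimal numerical sketch of this diagnostic (assuming Python with NumPy and SciPy; \(\lambda\) is taken as in Figure 5, while \(\nu\), the energy and the initial data are purely illustrative) integrates a trajectory of (22)-(23) together with \(Y\), using Eqs. (4.5)-(4.7):

```python
# Sketch: geodesic-deviation indicator Y along a trajectory of the system.
import numpy as np
from scipy.integrate import solve_ivp

lam, nu = 4.2, 1.0                       # lambda as in Fig. 5; nu illustrative

def V(G, W):
    return 2*W**2*G**2 + 0.5*W**4 + 0.25*lam*(G**2 - nu**2)**2

def rhs(t, y, E):
    G, W, pG, pW, Y, dY = y
    VG  = 4*W**2*G + lam*G*(G**2 - nu**2)
    VW  = 4*G**2*W + 2*W**3
    VGG = 4*W**2 + lam*(3*G**2 - nu**2)
    VWW = 4*G**2 + 6*W**2
    VGW = 8*G*W
    K = E - V(G, W)                      # kinetic part, positive along the motion
    Vdot  = VG*pG + VW*pW
    Vddot = VGG*pG**2 + 2*VGW*pG*pW + VWW*pW**2 - (VG**2 + VW**2)
    R = (VG**2 + VW**2)/(2*K**3) + (VGG + VWW)/(2*K**2)     # Eq. (4.5)
    Sigma = 2*R*K**2 - Vddot/(2*K) - 3*Vdot**2/(4*K**2)     # Eq. (4.7)
    return [pG, pW, -VG, -VW, dY, -Sigma*Y]

E = 1.3                                  # illustrative energy
G0, W0, pW0 = 1.0, 0.4, 0.0
pG0 = np.sqrt(2*(E - V(G0, W0)) - pW0**2)
sol = solve_ivp(rhs, (0, 500), [G0, W0, pG0, pW0, 1e-8, 0.0],
                args=(E,), rtol=1e-9, atol=1e-12)
print("log|Y| at final time:", np.log(abs(sol.y[4, -1])))   # growth signals chaos
```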
### Chaos with vanishing vacuum expectation value
The solution in (3.34)-(3.35) provides a uni-parametric family of analytic solutions with parameter \(a\), in terms of which the energy is fixed through (3.8). By evaluating the solution at the initial time \(t_{0}\), we obtain a set of initial conditions with periodic evolution. To depart from the analytic solution, we write one of the canonical variables in terms of the energy, say \(p_{G}^{2}(t_{0})=2(\mathcal{E}-V)-\dot{W}^{2}(t_{0})\), and then move the value of the energy \(\mathcal{E}\) away from (3.8).
As we increase the energy at a fixed value of the coupling \(\lambda\), we find chaotic behaviour above a critical value; see Figure 4.
Figure 4: Transition to chaos with vanishing vacuum expectation value. From left to right: Poincaré section, frequency spectrum and the logarithm of the geodesic deviation \(\log|Y|\). We evolved up to \(t_{f}=15000\) with \(a=0.5\). Notice that between \(\mathcal{E}=0.127\) and \(\mathcal{E}=0.128\) there is a transition to chaos.
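For concreteness, the following minimal sketch of the Poincaré-section construction (assuming Python with NumPy and SciPy; \(\lambda\) and the energy are illustrative, while \(a=0.5\) follows the caption of Figure 4) takes the initial data of the analytic solution at \(t_{0}\), where \(G=W=0\), and raises the energy by re-fixing \(p_{G}\) as described above:

```python
# Sketch: Poincare section for the nu = 0 system on the plane (W, dW/dt).
import numpy as np
from scipy.integrate import solve_ivp

lam, a, E = 1.0, 0.5, 0.128              # E slightly above the analytic value

def V(G, W):
    return 2*W**2*G**2 + 0.5*W**4 + 0.25*lam*G**4

def rhs(t, y):
    G, W, pG, pW = y
    return [pG, pW, -(4*W**2*G + lam*G**3), -(4*G**2*W + 2*W**3)]

def crossing(t, y):                      # event: upward crossings of the G = 0 plane
    return y[0]
crossing.direction = 1

G0 = W0 = 0.0
pW0 = a**2*np.sqrt((8 - lam)/(4 - lam))  # dW/dt of the exact solution at t0
pG0 = np.sqrt(2*(E - V(G0, W0)) - pW0**2)
sol = solve_ivp(rhs, (0, 5000), [G0, W0, pG0, pW0], events=crossing,
                rtol=1e-10, atol=1e-12)
W_sec, pW_sec = sol.y_events[0][:, 1], sol.y_events[0][:, 3]
# A scatter plot of (W_sec, pW_sec) shows smooth curves for regular motion
# and a scattered cloud once the motion becomes chaotic.
```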
### Chaos with non-vanishing vacuum expectation value
The solution (3.17) is devoid of integration constants. We will proceed as before in order to explore the phase space, evaluating the initial conditions using the solution and deforming one of the fields away from the analytic regime by varying the energy \(\mathcal{E}\). The results are presented in figure 5.
We checked that the transition to chaos can also be observed in the effective \(U(1)\) field (2.18). This is interesting since such a non-linear phenomenon is not expected in the standard linear \(U(1)\) gauge dynamics, so it represents a genuine non-Abelian effect.
Figure 5: Transition to chaos with non-vanishing vacuum expectation value. From left to right: Poincaré section, frequency spectrum and the logarithm of the geodesic deviation \(\log|Y|\). We evolved up to \(t_{f}=15000\) with \(\lambda=4.2\). Notice that between \(\mathcal{E}=1.195\) and \(\mathcal{E}=0.179\) there is a transition to chaos.
## 5 Probe scalar field
In order to explore the features of these configurations let us consider a probe scalar field \(\psi\) which transforms in the fundamental representation of \(SU(2)\). The covariant derivative is defined as
\[D_{\mu}\psi=\partial_{\mu}\psi+A_{\mu}\psi\, \tag{100}\]
in such a way that its commutator gives the field strength (3), i.e. \(\left[D_{\mu},D_{\nu}\right]\psi=F_{\mu\nu}\psi\). The action principle for the scalar field is given by
\[I[\psi,\psi^{\dagger}]=-\int d^{4}x\sqrt{-g}\left(D_{\mu}\psi\right)^{\dagger} D^{\mu}\psi. \tag{101}\]
The equation coming from the variation of this action (101) in the background (11) expands as
\[-\partial_{t}^{2}\psi+\partial_{z}^{2}\psi+\frac{1}{\rho}\partial _{\rho}\psi+\partial_{\rho}^{2}\psi+\frac{1}{\rho^{2}}\partial_{\varphi}^{2} \psi-W^{2}\psi-\frac{1}{4\rho^{2}}\psi+ \tag{102}\] \[+\frac{2}{\rho^{2}}\left(-\frac{W}{\sqrt{2}}\rho t_{1}-\frac{1}{ 2}t_{3}\right)\partial_{\varphi}\psi+\frac{1}{\rho}\frac{W}{\sqrt{2}}t_{2} \psi+\frac{2W}{\sqrt{2}}t_{2}\partial_{\rho}\psi=0\.\]
In order to apply time-dependent perturbation theory to evaluate transition amplitudes of the state of the scalar, triggered by the interaction with the background field, it is convenient to separate the above equation into two terms. The first term corresponds to
\[H_{0}\psi=-\partial_{t}^{2}\psi+\partial_{z}^{2}\psi+\frac{1}{\rho}\partial_{ \rho}\psi+\partial_{\rho}^{2}\psi+\frac{1}{\rho^{2}}\partial_{\varphi}^{2} \psi-\frac{1}{4\rho^{2}}\psi-\frac{t_{3}}{\rho^{2}}\partial_{\varphi}\psi\]
which defines the action of \(H_{0}\) on the scalar, while the second term defines \(H_{\text{int}}\) by
\[H_{\text{int}}\psi=-W^{2}\psi-\frac{\sqrt{2}W}{\rho^{2}}t_{1}\partial_{\varphi }\psi+\frac{1}{\rho}\frac{W}{\sqrt{2}}t_{2}\psi+\frac{2W}{\sqrt{2}}t_{2} \partial_{\rho}\psi. \tag{103}\]
This splitting allows us to analyze the time-dependent part of the gauge field with time-dependent perturbation theory, taking advantage of the fact that the "unperturbed Hamiltonian" equation \(H_{0}\psi=0\) can be solved exactly. Hereafter we proceed in a canonical fashion; details can be found in the Appendix.
Using the symbols \(\uparrow,\downarrow\) to denote the up and down components of the field \(\psi\), and the indices \(n,\ell,m\) to identify the longitudinal, radial and angular modes respectively, we obtain the eigenstates of the free Hamiltonian \(H_{0}\) as \(|\uparrow n\ell m\pm\rangle\) and \(|\downarrow n\ell m\pm\rangle\), where \(\pm\) denotes left and right movers in the angular direction. The aforementioned transition amplitude turns out to be given by
\[\langle\downarrow\ell^{\prime}m^{\prime}n^{\prime}+|H_{\text{int} }|\uparrow\ell mn-\rangle = -\frac{\pi L\mathcal{N}_{\ell^{\prime}m^{\prime}n^{\prime}} \mathcal{N}_{\ell mn}}{R_{0}\sqrt{2\omega_{\ell^{\prime}m^{\prime}n^{\prime}} \mathcal{N}_{\ell mn}}}\delta_{\ell}^{m^{\prime}}\delta_{n}^{\ell^{\prime}} \alpha_{n}^{m-\frac{1}{2}}\int dt\,W(t)e^{i(\omega_{\ell^{\prime}m^{\prime}n^{ \prime}}-\bar{\omega}_{\ell mn})t}\times \tag{104}\] \[\times\int_{0}^{R_{0}}d\rho\,\rho\,J_{m^{\prime}+\frac{1}{2}} \left(\alpha_{n^{\prime}}^{m^{\prime}+\frac{1}{2}}\frac{\rho}{R_{0}}\right)J _{m+\frac{1}{2}}\left(\alpha_{n}^{m-\frac{1}{2}}\frac{\rho}{R_{0}}\right)\,\]
where \(J_{\eta}\) are Bessel functions whose zeros are labeled by \(\alpha_{n}^{\eta}\) with \(n=1,2,\ldots\); the constants \(\mathcal{N}_{\ell mn}\) normalize the modes in a cylinder of length \(L\) and radius \(R_{0}\), and \(\omega_{\ell mn}\) denote the eigenfrequencies of the unperturbed Hamiltonian \(H_{0}\).
The above formula for the transition amplitude corresponds to a probe scalar field coupled to the time-dependent, topologically non-trivial Yang-Mills-Higgs background, and it is the main technical result of the present section. In particular, Eq. (5.1) shows that if the classical background is in its integrable phase then, as discussed in the analysis of the Poincaré sections, the Fourier spectrum of the gauge field \(W(t)\) has few relevant peaks. In this case, the transition amplitude is non-zero only for those few values, corresponding to resonances between \(\omega_{\ell^{\prime}m^{\prime}n^{\prime}}-\bar{\omega}_{\ell mn}\) and the Fourier components of \(W\). On the other hand, in the chaotic regime the amplitude is non-zero in a broad band of values of \(\omega_{\ell^{\prime}m^{\prime}n^{\prime}}-\bar{\omega}_{\ell mn}\). Therefore, the transition amplitudes of the probe scalar field can detect whether the non-Abelian coil is in the chaotic or in the integrable regime.
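As an illustration of how this works in practice, the following sketch (assuming Python with NumPy and SciPy; the mode numbers, the cylinder radius and the stand-in profile \(W(t)\sim\cos\Omega t\) are all illustrative and not taken from the paper) evaluates the two building blocks of Eq. (5.1): the radial overlap of the Bessel modes and the time integral of \(W(t)\) against the oscillating phase, which is sizeable only when the frequency mismatch matches a Fourier component of \(W\):

```python
# Sketch: the radial and temporal integrals entering the transition amplitude.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import jv

R0 = 1.0
m, mp = 1, 1                                           # illustrative angular numbers
alpha   = brentq(lambda x: jv(m - 0.5, x), 2.0, 4.0)   # first zero of J_{1/2} (= pi)
alpha_p = brentq(lambda x: jv(mp + 0.5, x), 3.0, 6.0)  # first zero of J_{3/2}

radial, _ = quad(lambda r: r*jv(mp + 0.5, alpha_p*r/R0)*jv(m - 0.5, alpha*r/R0),
                 0.0, R0)

Omega, dw, T = 1.3, 1.3, 400.0        # W(t) ~ cos(Omega t); dw = omega' - omega_bar
overlap, _ = quad(lambda t: np.cos(Omega*t)*np.cos(dw*t), 0.0, T, limit=4000)
print("radial overlap:", radial)
print("time integral / T:", overlap/T)  # ~1/2 on resonance, ~0 otherwise
```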
## 6 Conclusions
In the present paper we have discussed how the chaotic behavior of time-dependent configurations in the \(SU(2)\) Georgi-Glashow model is affected by the Higgs coupling constant, by the vacuum expectation value as well as by the presence of topologically non-trivial fluxes, which in the present case correspond to the flux of the non-Abelian magnetic field projected along the Higgs field.
There are many intriguing questions which have not been analyzed in detail so far in the literature. For instance: does the presence of the Higgs potential and of the vacuum expectation value increase or decrease the chaotic behavior of the theory? What is the effect of non-trivial topological fluxes? The main technical problem to solve in order to answer such questions is related to the construction of a suitable Ansatz for the gauge and Higgs fields. Indeed, one can easily write down explicit expressions both for the gauge and for the Higgs fields where all the components depend on time only, as it is usually done in the literature on the chaotic behavior of Yang-Mills-Higgs theory: see [33]-[44] and references therein. In this way the field equations reduce consistently to a dynamical system which can be analyzed using the known tools of chaotic dynamics. However, if all the fields only depend on time, then the topological fluxes may vanish. This is the reason why it is useful to design an Ansatz in such a way that the fields depend in a non-trivial way also on the spatial coordinates, keeping alive the topological fluxes, but with the field equations reducing to an autonomous dynamical system. In the present work we have constructed such an Ansatz.
A byproduct of the analysis is that we have also identified an integrable sector where the field equations can be integrated analytically, and the corresponding exact solutions represent the non-Abelian version of a self-sustained alternating-current generator. Moreover, the Ansatz has been constructed in such a way that one can, for instance, increase (or decrease) the control parameters, such as the Higgs coupling and the vacuum expectation value, and analyze how this change affects the chaotic properties. This situation is especially suitable to be studied using the tools introduced by Casetti, Pettini and Cohen (see [64] and references therein) in their geometric approach to the search for the stochasticity threshold in Hamiltonian dynamics. Using these tools we have shown that as
one increases the energy, integrability is lost. Moreover, we proved that the chaotic behavior and the sensitive dependence on initial conditions, shown by the exponential growth of the geodesic deviation in the Jacobi metric, are triggered by a parametric resonance.
## Acknowledgements
F. C. and J. O. have been funded by Fondecyt Grants 1200022 and 1221504. The work of M. O. is partially funded by Beca ANID de Doctorado 21222264. N. G. wants to thank Centro de Estudios Cientificos (CECs) and Universidad de Concepcion (UdeC) by hospitality and support during this work. The Centro de Estudios Cientificos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of ANID.
## Appendix A Perturbation theory
The free Hamiltonian \(H_{0}\) can be diagonalized by the following field configuration that fulfills the equation (5.3) with \(W=0\)
\[\Phi\left(t,\vec{x}\right)=\bar{\phi}\left(t,\vec{x}\right)+\phi\left(t,\vec{ x}\right)\,\] (A.1)
\[\bar{\phi}^{a}(t,\vec{x})\equiv\sum_{\ell mn}\frac{\bar{\mathcal{N}}_{\ell mn}}{\sqrt{2\bar{\omega}_{\ell mn}}}\sin\!\left(\frac{\pi\ell z}{L}\right)J_{m-\frac{1}{2}}\!\left(\frac{\alpha_{n}^{m-\frac{1}{2}}\rho}{R_{0}}\right)\left(\begin{array}{c}\uparrow\text{-modes}\\ \downarrow\text{-modes}\end{array}\right)^{a}\,\] (A.2)

where the doublet collects the left- and right-moving annihilation and creation operators \(\boldsymbol{l}_{\uparrow\ell mn},\tilde{\boldsymbol{l}}_{\uparrow\ell mn},\boldsymbol{l}_{\downarrow\ell mn},\tilde{\boldsymbol{l}}_{\downarrow\ell mn}\) with phases \(e^{\pm im\varphi}e^{\pm i\bar{\omega}_{\ell mn}t}\), as can be read off from the conjugate momentum expansion below. The field \(\phi^{a}(t,\vec{x})\) admits an analogous expansion in terms of \(J_{m+\frac{1}{2}}\!\big(\alpha_{n}^{m+\frac{1}{2}}\rho/R_{0}\big)\), the frequencies \(\omega_{\ell mn}\), the normalization constants \(\mathcal{N}_{\ell mn}\) and the operators \(\boldsymbol{r}_{\uparrow\ell mn},\tilde{\boldsymbol{r}}_{\uparrow\ell mn},\boldsymbol{r}_{\downarrow\ell mn},\tilde{\boldsymbol{r}}_{\downarrow\ell mn}\).
The state with smallest energy in the system is given by \(\bar{\omega}_{011}=\omega_{001}=(\alpha_{1}^{\frac{1}{2}}/R_{0})^{2}\).
The conjugate momenta for the Lagrangian defined in (5.2), following the standard definitions are
\[P_{a} = \sqrt{\gamma}\left(\partial_{t}\psi^{\dagger}-\psi^{\dagger}A_{t} \right)_{a}\,\] (A.6) \[P^{\prime a} = \sqrt{\gamma}\left(\partial_{t}\psi+A_{t}\psi\right)^{a}\,\] (A.7)
here \(\gamma\) is the determinant of the induced metric \(\gamma_{\mu\nu}=g_{\mu\nu}+\delta_{\mu}^{t}\delta_{\nu}^{t}\), which is the spatial section of the metric (2.10), then \(\sqrt{\gamma}=\rho\). The canonical momenta given in terms of (A.1) through the above definition, forms a representation of the canonical algebra
\[\left[\Phi^{a}\left(t,\mathbf{x}\right),P_{b}\left(t,\mathbf{y}\right)\right] =i\delta_{b}^{a}\delta\left(\mathbf{x}-\mathbf{y}\right)\,\quad\left[\Phi_{a}^{\dagger}\left(t, \mathbf{x}\right),P^{\prime b}\left(t,\mathbf{y}\right)\right]=i\delta_{a}^{ b}\delta\left(\mathbf{x}-\mathbf{y}\right)\,\] (A.8)
with the following commutation relation for the creation/annihilation operators
\[\left[\boldsymbol{l}_{\uparrow\ell mn},\boldsymbol{l}_{\uparrow \ell^{\prime}m^{\prime}n^{\prime}}^{\dagger}\right] = \left[\boldsymbol{l}_{\uparrow\ell mn},\boldsymbol{l}_{\uparrow \ell^{\prime}m^{\prime}n^{\prime}}^{\dagger}\right]=\left[\boldsymbol{r}_{ \uparrow\ell mn},\boldsymbol{r}_{\uparrow\ell^{\prime}m^{\prime}n^{\prime}}^ {\dagger}\right]=\delta_{\ell}^{\ell^{\prime}}\delta_{m}^{m^{\prime}}\delta_{ n}^{n^{\prime}}\,\] \[\left[\boldsymbol{l}_{\downarrow\ell mn},\boldsymbol{l}_{\downarrow \ell^{\prime}m^{\prime}n^{\prime}}^{\dagger}\right]=\left[\boldsymbol{l}_{ \downarrow\ell mn},\boldsymbol{r}_{\downarrow\ell^{\prime}m^{\prime}n^{\prime}}^ {\dagger}\right]=\left[\boldsymbol{r}_{\downarrow\ell mn},\boldsymbol{r}_{ \downarrow\ell^{\prime}m^{\prime}n^{\prime}}^{\dagger}\right]=\delta_{\ell}^{ \ell^{\prime}}\delta_{m}^{m^{\prime}}\delta_{n}^{n^{\prime}}\.\]
To see how this works, we compute one commutator between \(\Phi^{a}\) and \(P_{b}\). The following representations of the Dirac delta will be useful
\[\delta\left(\varphi-\varphi^{\prime}\right) = \sum_{m=-\infty}^{\infty}\frac{1}{2\pi}e^{im\left(\varphi-\varphi ^{\prime}\right)}\,\] \[\delta\left(\rho-\rho^{\prime}\right) = \sum_{n=1}^{\infty}\frac{2\rho}{R_{0}^{2}J_{\eta+1}\left(\alpha_{n }^{n}\right)^{2}}J_{\eta}\left(\frac{\alpha_{n}^{\eta}\rho}{R_{0}}\right)J_{ \eta}\left(\frac{\alpha_{n}^{\eta}\rho^{\prime}}{R_{0}}\right)\,\] (A.9) \[\delta\left(z-z^{\prime}\right) = \sum_{\ell=1}^{\infty}\frac{2}{L}\sin\left(\frac{\pi\ell z^{ \prime}}{L}\right)\sin\left(\frac{\pi\ell z}{L}\right)\.\]
The expression of \(P_{b}\) following the definition for our magnetic background is
\[P_{a}\equiv\rho\partial_{t}\Phi^{\dagger}\equiv\bar{p}_{a}+p_{a}\,\] (A.10)
where
\[\bar{p}_{a} = \sum_{\ell mn}\rho\, i\bar{\mathcal{N}}_{\ell mn}\sqrt{\frac{\bar{\omega}_{\ell mn}}{2}}\sin\!\left(\frac{\pi\ell z}{L}\right)J_{m-\frac{1}{2}}\!\left(\frac{\bar{\chi}_{n}^{m}\rho}{R_{0}}\right)\!\left(\begin{array}{c}\boldsymbol{l}_{\uparrow\ell mn}^{\dagger}e^{-i\left(m\varphi-\bar{\omega}_{\ell mn}t\right)}-\tilde{\boldsymbol{l}}_{\uparrow\ell mn}e^{-i\left(m\varphi+\bar{\omega}_{\ell mn}t\right)}\\ \boldsymbol{l}_{\downarrow\ell mn}^{\dagger}e^{i\left(m\varphi+\bar{\omega}_{\ell mn}t\right)}-\tilde{\boldsymbol{l}}_{\downarrow\ell mn}e^{i\left(m\varphi-\bar{\omega}_{\ell mn}t\right)}\end{array}\right)_{a}\,\]

with an analogous expression for \(p_{a}\) in terms of \(J_{m+\frac{1}{2}}\), the frequencies \(\omega_{\ell mn}\) and the operators \(\boldsymbol{r},\tilde{\boldsymbol{r}}\), so that the canonical commutator (A.8) splits into the two parentheses \(\left(\left[\bar{\phi}^{1},\bar{p}_{1}\right]+\left[\phi^{1},p_{1}\right]\right)\) and \(\left(\left[\bar{\phi}^{2},\bar{p}_{2}\right]+\left[\phi^{2},p_{2}\right]\right)\), a decomposition that we will refer to as (A.11).
Let us compute explicitly the first parenthesis of (A.11). The first commutator is
\[\left[\bar{\phi}^{1},\bar{p}_{1}\right] = \sum_{\ell mn}^{-}i\rho\bar{\cal N}_{\ell mn}^{2}\sin\left(\frac{ \pi\ell z}{L}\right)\sin\left(\frac{\pi\ell z^{\prime}}{L}\right)J_{m-\frac{1} {2}}\left(\frac{\bar{\chi}_{n}^{m}\rho}{R_{0}}\right)J_{m-\frac{1}{2}}\left( \frac{\bar{\chi}_{n}^{m}\rho^{\prime}}{R_{0}}\right)e^{im\left(\varphi-\varphi ^{\prime}\right)}\,\] (A.12) \[= \delta\left(z-z^{\prime}\right)\sum_{\ell mn}^{-}i\frac{1}{2\pi} \frac{2}{R_{0}^{2}J_{m+\frac{1}{2}}\left(\bar{\chi}_{n}^{m}\right)^{2}}\rho J _{m-\frac{1}{2}}\left(\frac{\bar{\chi}_{n}^{m}\rho}{R_{0}}\right)J_{m-\frac{1} {2}}\left(\frac{\bar{\chi}_{n}^{m}\rho^{\prime}}{R_{0}}\right)e^{im\left( \varphi-\varphi^{\prime}\right)}\,\] \[= i\frac{1}{2\pi}\delta\left(z-z^{\prime}\right)\delta\left(\rho- \rho^{\prime}\right)\sum_{m=1}^{\infty}e^{im\left(\varphi-\varphi^{\prime} \right)}\.\]
while the second term is
\[\left[\phi^{1},p_{1}\right] = \sum_{\ell mn}\frac{2i}{\pi L}\frac{\rho}{R_{0}^{2}J_{m+\frac{3}{ 2}}(\chi_{n}^{m})^{2}}\sin\!\left(\frac{\pi\ell z}{L}\right)\sin\!\left(\frac{ \pi\ell z^{\prime}}{L}\right)J_{m+\frac{1}{2}}\!\left(\frac{\chi_{n}^{m}\rho }{R_{0}}\right)J_{m+\frac{1}{2}}\!\left(\frac{\chi_{n}^{m}\rho^{\prime}}{R_{0 }}\right)e^{-im\left(\varphi-\varphi^{\prime}\right)}\,\] (A.13) \[= \sum_{m=0}^{\infty}i\frac{1}{2\pi}\delta\left(z-z^{\prime}\right) \delta\left(\rho-\rho^{\prime}\right)e^{-im\left(\varphi-\varphi^{\prime} \right)}\.\]
Replacing in the first parenthesis in (A.11)
\[\left[\bar{\phi}^{1},\bar{p}_{1}\right]+\left[\phi^{1},p_{1}\right]=i\delta \left(z-z^{\prime}\right)\delta\left(\rho-\rho^{\prime}\right)\frac{1}{2\pi} \left(\sum_{m=1}^{+\infty}e^{im\left(\varphi-\varphi^{\prime}\right)}+\sum_{m= 0}^{\infty}e^{-im\left(\varphi-\varphi^{\prime}\right)}\right)\,\] (A.14)
changing the sign of the summation index in the second sum, the two sums combine into \(\sum_{m=-\infty}^{\infty}e^{im\left(\varphi-\varphi^{\prime}\right)}\), which is the representation of the delta function; thus
\[\left[\bar{\phi}^{1},\bar{p}_{1}\right]+\left[\phi^{1},p_{1}\right]=i\delta \left(z-z^{\prime}\right)\delta\left(\rho-\rho^{\prime}\right)\delta\left( \varphi-\varphi^{\prime}\right)\.\] (A.15)
One can show that the same mechanism works for the second parenthesis in (A.11),
\[\left[\bar{\phi}^{2},\bar{p}_{2}\right]+\left[\phi^{2},p_{2}\right]=i\delta \left(z-z^{\prime}\right)\delta\left(\rho-\rho^{\prime}\right)\delta\left( \varphi-\varphi^{\prime}\right)\.\] (A.16)
Replacing back into (A.11) we find
\[\left[\Phi^{a}\left(t,\vec{x}\right),P_{b}\left(t,\vec{x}^{\prime}\right) \right]=i\delta_{b}^{a}\delta\left(z-z^{\prime}\right)\delta\left(\rho-\rho^{ \prime}\right)\delta\left(\varphi-\varphi^{\prime}\right)\,\] (A.17)
as promised.
|
2309.10960 | Numerical Study of Wind Pressure Loads on Low Rise Buildings under
different Terrain | This is a numerical study of wind pressure loads on low rise buildings in
which three different types of roofs were analyzed which are the flat, gable
and circular roof at different wind speed. The numerical analysis was performed
using FLUENT package based on values of k (turbulence kinetic energy) and
(dissipation rate of turbulence) based on partial differential equation. Also,
flat, and shallow escarpment terrains were considered during the simulation to
determine the coefficient of pressure at different wind speed for different
roof types. For the shallow escarpment terrain, a flat roof was considered at
different velocities and for the flat terrain, three different types of roofs
are considered which are the flat, gable and circular roof. It is observed that
as the wind speed increases, the coefficient of drag decreases. It also shows
the effect of vortex formed at the leeward direction of the building which
implies the higher the wind speed, the larger the vortex formed and the lower
the building ventilation and higher the damage on the roof of the building.
Based on the analysis, it is preferable to use a circular roof based on the
aerodynamic characteristics of wind around building walls and roofs. | Saidi Olayinka Olalere, Olufemi Alayode | 2023-09-19T23:12:20Z | http://arxiv.org/abs/2309.10960v1 | # Numerical Study of Wind Pressure Loads on Low Rise Buildings under different Terrain
###### Abstract
This is a numerical study of wind pressure loads on low-rise buildings in which three different types of roofs, flat, gable and circular, were analyzed at different wind speeds. The numerical analysis was performed using the FLUENT package, based on the transport equations for \(k\) (turbulence kinetic energy) and \(\varepsilon\) (dissipation rate of turbulence). Flat and shallow-escarpment terrains were considered during the simulation to determine the coefficient of pressure at different wind speeds for the different roof types. For the shallow-escarpment terrain a flat roof was considered at different velocities, and for the flat terrain three different types of roofs were considered: flat, gable and circular. It is observed that as the wind speed increases, the coefficient of drag decreases. The results also show the effect of the vortex formed on the leeward side of the building: the higher the wind speed, the larger the vortex formed, the lower the building ventilation, and the greater the damage to the roof of the building. Based on the analysis, a circular roof is preferable on account of the aerodynamic characteristics of the wind around the building walls and roof.
Wind Pressure, Low-Rise Building, Terrain, Wind flow, Building roofs, Shallow Escarpment, Flat roof, Gable roof, Circular roof.
## I Introduction
Wind-induced dispersion of pollutants at different locations depends on the turbulence characteristics and velocity profile of the wind, which in turn depend on the roughness and general configuration of the upstream terrain. Flow over a low-rise building encompasses the need to monitor both internal and external unsteady pressures, the wind loads on the low-rise building, and their load paths through both structural and non-structural components.
According to [14], internal pressure contributes a significant portion of the total design wind load on the building envelope, depending upon the dominant opening size and location, the shape of the building, surrounding conditions and other aerodynamic factors. Design wind loads on the building envelope are due to a net combination of external and internal pressures [15]. Internal and external pressure measurements are also essential for assessing infiltration or exfiltration of air, moisture movement and thermal variations through the building envelope, which have a significant influence on both the internal environment and the energy needs of the building.
Accurate assessment of internal pressures is, therefore, essential both from the wind-load and from the energy-efficiency point of view. Thus, in the presence of openings, the algebraic sum of the external and internal pressures is used to assess the design wind loads on building envelope components such as walls, roofs, roof tiles, windows, and doors. Low-rise buildings are fully immersed within the layer of aerodynamic roughness where the turbulence intensities are high. The factors governing the internal pressure [16] were given as the shape of the building, the spatial variation of external pressure at the opening, the geometries of the openings, the size and location of the opening with respect to the windward face as well as the background porosity, ventilation opening sizes, internal volume and compartmentalization, wind direction, upstream flow turbulence intensities and flexibility of the building envelope. The fulfillment of certain conditions of opening porosity and internal volume becomes a reason for the formation of turbulence energy at the opening that causes the internal pressure to exceed the external pressure fluctuation [17].
A study by [17] examined the effects of opening location and size, background leakage, compartmentalization, roof, and vents. The experiment shows that the external roof pressures are highly correlated in time with the internal pressure, and that a decrease in the ratio of the internal volume to the opening area leads to an increase in the internal pressures for wind directions normal to the opening.
From an aerodynamic point of view, inflow of wind through the building envelope leads to over-pressurization of the internal dwelling unless there is an equivalent opening on the leeward side to relieve the pressure. So, the aerodynamic factors that govern the magnitude and direction of internal pressure in a building are the fluctuation of external pressure at the openings, the upstream wind direction, the size and position of the opening, the internal volume and compartmentalization, natural
ventilation opening and leakages due to crack and outlet ducts (Holmes 2009).
The purpose of the research is to obtain the numerical characteristics of wind pressure loads on low-rise buildings under different terrains, since the effect of wind pressure loads on structures should be emphasized because of its negative effect on the economy. Reardon and Holmes (1981) at James Cook University gave a description of research on low-rise structures in which it was concluded that: (i) for flows perpendicular to a wall, a more turbulent environment resulted in closer reattachment, more free-streamline curvature and lower pressure, and (ii) for quartering flows the action of the vortices was enhanced by roof overhangs. Reardon (1997) noted that fatigue failure on industrial and residential low-rise buildings resulted in research on metal cladding fastener failure under repeated, cyclic, gust loading. "The worst mean roof suctions, independent of direction, occur along the edges near the windward corner, but not at the corner itself".
However, most low-rise buildings are in amongst their peers, not isolated out in a field. The impact of a field of similar structures surrounding a subject structure was the topic of extensive studies in the 1980s for the fledgling solar power industry (Derickson, 1992). Low rise buildings are routinely adversely impacted by the speed-up effect caused by terrain, as noted by Davenport (1993).
The difficulties in assessing wind-induced loads for low-rise buildings arise because, "They are usually immersed within the layer of aerodynamic roughness on the earth's surface, where the turbulence intensities are high, and interference and shelter effects are important, but difficult to quantify. Roof loadings, with all the variations due to changes in geometry, are of critical importance for low-rise buildings. The highest wind loading on the surface of a low-rise structure are generally the suctions on the roof, and many structural failures are initiated there" according to Holmes (2001).
Wind pressure loads have been simulated using different software packages. Numerical models are based on the evaluation of the spatial and time-dependent properties of wind-induced pressure. The time-dependent loads on buildings can be determined by Large Eddy Simulation (LES) or by Direct Numerical Simulation (DNS). The calculation of the structural response to fluctuating loading is possible with models like finite element modeling. The commercial software FLUENT 6.2 was utilized for this simulation, and the governing equations employed were the Reynolds-Averaged Navier-Stokes (RANS) equations together with the \(k\)-\(\varepsilon\) turbulence model. The inlet, top, outlet, and two sides of the computational domain were set at different values.
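The FLUENT case setup itself is not reproduced here; as a simple illustration of the post-processing step implied above, the following sketch (assuming Python with NumPy; the sampled pressures are made-up placeholder values) converts surface pressures from a CFD run into pressure coefficients:

```python
# Sketch: pressure coefficient Cp = (p - p_ref) / (0.5 * rho * U_ref^2)
import numpy as np

rho = 1.225                     # air density, kg/m^3
U_ref = 10.0                    # reference wind speed at roof height, m/s
p_ref = 0.0                     # reference (free-stream) gauge pressure, Pa

p_roof = np.array([-65.0, -80.0, -55.0, -30.0])   # placeholder surface pressures, Pa
Cp = (p_roof - p_ref)/(0.5*rho*U_ref**2)
print(Cp)                       # negative values indicate suction on the roof
```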
## II Literature Review
It is relevant to observe abnormalities or irregularities in the behavior of the flow around a low-rise building. These irregularities have been described by different observers, researchers, scientists and engineers, and the descriptions depend on how well the experiment is controlled. The experiments are carried out in a standard laboratory environment under controlled and adverse operating conditions, and the results are compared, analyzed, and interpreted. In describing wind loads on buildings, different conceptual models have been developed, and flow over obstacles has been extensively investigated both experimentally and numerically (Cook, 1997).
According to (Kopp et al. 2007), internal pressure can contribute a significant portion of the total design wind load; the intensity and distribution depend on the severity of the aerodynamic factors involved, and the internal pressure can account for more than 50% of the wind load. Wind-induced internal pressure in low-rise buildings with openings such as windows and doors can form a higher proportion of the total design wind load (Holmes 2001).
The internal pressure is affected by the complex dynamics of wind-building interaction, which must be accounted for to properly design building envelopes and components from the perspective of wind resistance, water intrusion and energy performance. Internal pressure is affected in a complex manner by opening size and location, compartmentalization, background leakage, flexibility of the envelope, internal volume, external pressure distribution at the opening and wind direction (Oh et al 2008). The interaction of wind and building can cause pressure variations beyond the resistance capacity of the building envelope, which could lead to failure of the building components.
Holmes (1979) conducted a study of the internal pressure fluctuation of a single-opening building model using a boundary layer wind tunnel to investigate the relationship between internal pressure and Helmholtz resonance. This study revealed that the internal pressure in buildings with an opening responds quickly to external pressure fluctuations, like a Helmholtz resonator. It shows that air moves in and out of the building in response to the external pressure, with the internal pressure fluctuating due to the compressibility effects of the air.
A boundary-layer wind tunnel study of the transient behavior of wind-induced internal pressure was carried out to compare the phenomenon of overshooting with the peak values of steady-state internal pressure fluctuations (Lucluan, 1989); the observations show that the steady-state peak fluctuation is higher than the transient-response overshoot. In this study, the doors and windows located on the windward side cause an increase in the density of the air inside and inflation of the building as wind rushes in, which results in the build-up of positive internal pressure. Therefore, the location of an opening at a specific part of the envelope leads to the development of significant internal pressure variation due to the interaction of wind and building, which creates regions of separation and reattachment of the flow depending on the size of the building and the angle of attack.
A study was also conducted to investigate the transient behavior of the internal pressure due to a sudden breach of an opening under smooth and turbulent flow (Vickery 1994), as well as a study of the sustained dynamic action of turbulent wind over an opening capable of imposing
damage to the building (Mehta 1992). These experiments show that the internal pressure does not decay with time in the case of turbulent flow, in which the fluctuation of the internal pressure was equivalent to that of the external pressure. It was observed that internal pressure fluctuations correlated with the external pressure provide a higher peak load. The effect of openings and porosity on internal pressure was examined to evaluate their influence on the internal pressure (Woods, 1995), and a numerical study was performed on the synchrony of the formation of sudden overshoot characteristics between wind tunnel and full-scale studies (Sharma, 2010). The results of the experiment show that steady-state theory agrees with experimental measurements of internal pressure for the case of a single opening.
An investigation of the influence of Helmholtz resonance on internal pressure in a low-rise building under oblique wind flow shows that the effect of resonance at oblique flow is significant and causes large fluctuations in internal pressure (Richard, 2003).
Kopp et al (2008) performed an internal-volume-sealed wind tunnel experimental study to examine the effects of ten different opening configurations on the internal pressure of a low-rise building. The results of the experiment show that the peak internal pressure strongly correlates in time with the external pressure. The internal pressure coefficient was large when there was an opening on the windward side of the building. Wall leakage acts to ease the internal pressure fluctuation, and this could basically be due to the leakage of air through the leeward and side walls that contributes to deflating the building interior (Sharma 2007).
### Low Rise Building
Low-rise buildings are roofed structures between 4.0 m and 4.5 m in height and are frequently equated with low-cost structures. Low-rise buildings depend on composite action and load-sharing behavior within and between the wall, roof and floor systems for stiffness, stability and strength (Foliente 1998). They have low aspect ratios (ratio of overall height to plan dimension), shallow foundations and flexible horizontal diaphragms, and are frequently constructed with several different materials of dissimilar stiffness, strength, and mass properties.
### Wind Flow Topography
Wind speed can be increased considerably by abrupt changes in the general topography, whether natural or man-made, in any exposure.
As the wind approaches a shallow feature, its speed first reduces slightly as it encounters the start of the upward slope, as shown in Fig 1. It then gradually increases in speed as it flows up the slope towards the crest. The maximum speed-up occurs at the crest or slightly upwind of it.
#### 2.0.2 Shallow hill
Beyond the crest, the flow speed gradually reduces to a value close to that well upwind of the topographic feature, for a feature with a downwind slope such as the shallow hill or ridge shown in Fig 2.
#### 2.0.3 Steep Escarpment
In Fig 3, separation occurs at the start of the upwind slope and immediately downwind of the crest.
Figure 1: Shallow Escarpment (Holmes 2001)
Figure 3: Steep Escarpment (Holmes 2001)
Figure 2: Shallow hill or ridge (Holmes 2001)
The separation may occur at the start of the upwind slope and on the downwind slope for a ridge as seen in Fig 4.
### Building Roof Profile
#### 3.0.1 Flat Roof
Fig 5 shows a flat roof, which is horizontal or nearly so. It can be made from metal such as lead (welded or folded seams), tin (folded, soldered or folded seams) or copper. The notation a is the horizontal dimension and h is the mean roof height.
#### 3.0.2 Hip Roof
Fig 6 shows a hip roof, a type of roof where all sides slope downward to the walls, usually with a fairly gentle slope.
#### 3.0.3 Gable Roof

Fig 7 shows a gable roof; the gable is the triangular portion of wall between the edges of a sloping roof. The shape of the gable depends on the structural system being used.
### Wind Flow Terrain
Terrain categories are selected according to the effect of obstructions, which constitute the ground surface roughness. The terrain category depends on the wind direction under consideration. When wind in a fully developed boundary layer encounters a change of surface roughness, the adjustment starts at ground level and gradually moves upward, as shown in Fig 8. The result is the development of an internal boundary layer over the new terrain (Deaves, 1981). For flow from smooth terrain (roughness length \(Z_{1}\)) to rougher terrain (roughness length \(Z_{2}\)) with \(Z_{2}>Z_{1}\),
\[X_{I}(Z)=Z_{2}\big{(}\frac{Z}{0.36Z_{2}}\big{)}^{4/3} \tag{1}\]
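For illustration, a minimal sketch evaluating Eq. (1) is given below; the roughness length and heights used are assumed example values rather than data for a particular site.

```python
def x_i_of_z(z, z2):
    """Eq. (1): X_I(Z) = Z_2 * (Z / (0.36 * Z_2))**(4/3) for flow onto rougher terrain Z_2 > Z_1."""
    return z2 * (z / (0.36 * z2)) ** (4.0 / 3.0)

# Assumed roughness length of the rougher terrain (m) and heights of interest (m)
z2 = 0.2
for z in (2.0, 5.0, 10.0):
    print(f"Z = {z:5.1f} m  ->  X_I = {x_i_of_z(z, z2):8.1f} m")
```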
The terrain in which a specific building stands can be assessed as belonging to one of the following terrain categories. Category 1: Exposed open terrain with few or no obstructions, in which the average height of any object surrounding the structure is less than 1.0m.
Category 2: Open terrain with well-scattered obstructions having height generally between 3.0m and 3.5m.
Category 3: Terrain with numerous closely spaced obstructions having the size of building structure up to 4.0m in height with or without a few isolated tall structures.
Figure 4: Steep hill or ridge (Holmes 2001)
Figure 8: Internal boundary layer development at a change of terrain roughness (Holmes 2001)
Figure 5: Flat roof (ASCE/SEI 7-05)
Figure 6: Hip roof (ASCE/SEI 7-05)
Figure 7: Gable roof (ASCE/SEI 7-05)
Category 4: Terrain with numerous large high closely spaced obstructions.
### Computational Fluid Dynamics
The rapid developments in both computer hardware and software have made practical applications of CFD for simulating flows within and around buildings and other structures possible. Gomes (2005) presented a comparison of experimental and numerical results of wind pressure on some irregular-plan shapes. Due to the complex flow field in the vicinity of a building, past investigations of CFD applications mainly focused on rather simple geometries or low-rise buildings. In this work, the FLUENT fluid simulation software is used to simulate the wind pressure load on low-rise buildings. This software was selected because it allows users to define their own physical models, relationships and boundary conditions through the user interface.
Studies by Meroney (2009) employed experimental and computational approaches to study external pressure on buildings. Guha et al. (2009) studied characterization of flow through openings on the Texas Technology University Building. Computational simulation result obtained for internal pressure responses of the test shows that Helmholtz frequency matches the analytical solution.
## 3 Materials and Methods
### Mathematical model
In recent times, there have been efforts to combine computational fluid dynamics and atmospheric modeling capabilities to capture the effect of different terrain on air flow, which is required to simulate different flow patterns. Various studies have been carried out on different buildings. Brown et al. (2001) measured velocity distributions for two- and three-dimensional building arrays in a wind tunnel. The purpose of the experiments was to provide high-quality, spatially dense data for evaluating the performance of computational fluid dynamics models. Atmospheric models have also been improved to simulate air flow around buildings under the influence of mesoscale wind variations (Yamada, 2003).
### Methods of analysis
Various analytical and numerical approaches have been employed for solving different types of partial differential equations, including nonlinear equations, subject to suitable boundary conditions. However, numerical methods are preferred here because of the difficulty of obtaining accurate analytical solutions for such problems.
#### 2.0.1 Finite difference method
The finite difference method was for many years the only method used to solve differential equations numerically. However, when dealing with situations such as flows at very high Reynolds numbers, flows around arbitrarily shaped objects and strongly time-dependent flows, it has shortcomings such as numerical instability, lack of accuracy and difficulty in properly treating boundary conditions at curved walls.
#### 2.0.2 Finite element method
The finite element method (FEM) is one of the most practical techniques for finding approximate solutions to partial differential equations in engineering and science. FEM is used to solve a wide variety of problems and, in many instances, is the only viable method for obtaining solutions. While the FEM is built on a rich mathematical background, it is still one of the most practical numerical schemes yet devised for solving complex problems.
This method requires dividing the problem domain into many subdomains, each called a finite element; the problem domain thus consists of many finite element patches. One major advantage of the finite element method is that a general-purpose computer program can easily be developed to analyze various kinds of problems, and any shape can be handled with ease. The procedure involved in FEM is as stated below (a minimal sketch follows the list):
* Discretization of the solution domain
* Selection of proper interpolation model
* Derivation of element stiffness matrices
* Assemblage of element equation to obtain overall equilibrium equation.
* Solution for the unknowns in the model equation.
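To make these steps concrete, the following is a minimal sketch of the FEM workflow for the one-dimensional Poisson problem \(-u''=f\) on \((0,1)\) with \(u(0)=u(1)=0\), using linear elements; it is an illustration only, not the discretization used in this study.

```python
import numpy as np

def fem_1d_poisson(n_elems=8, f=lambda x: 1.0):
    """Assemble and solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using linear elements."""
    nodes = np.linspace(0.0, 1.0, n_elems + 1)
    K = np.zeros((n_elems + 1, n_elems + 1))   # global stiffness matrix
    F = np.zeros(n_elems + 1)                  # global load vector
    for e in range(n_elems):                   # element matrices, then assembly
        h = nodes[e + 1] - nodes[e]
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        xm = 0.5 * (nodes[e] + nodes[e + 1])
        fe = f(xm) * h / 2.0 * np.array([1.0, 1.0])
        K[e:e + 2, e:e + 2] += ke
        F[e:e + 2] += fe
    # Impose u(0) = u(1) = 0 and solve for the interior unknowns
    u = np.zeros(n_elems + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    return nodes, u

nodes, u = fem_1d_poisson()
print(np.round(u, 4))  # should approximate u(x) = x(1 - x)/2
```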
A sequence of approximate solutions is obtained as the element size is successively reduced. If the conditions listed below are satisfied, the sequence will converge to the exact solution.
* The field variable must be continuous.
* All uniform states of the field variable and its partial derivatives appearing in the functional must be representable by the interpolating polynomial.
* The field variable and its partial derivative must be continuous at the element boundary.
#### 2.0.3 Finite volume method
The finite volume method is associated with a particular discretization of the domain and offers a more readily understood weighted-residual approach for approximating the solution of the partial differential equation through local conservation. FVM is not associated with mathematical analysis to the same extent as FEM; its mathematical analysis is mainly concerned with stability and convergence.
#### 2.0.4 Computational fluid dynamics
One of the methods to estimate the wind loading acting on buildings is to measure the mean pressure and the root mean square pressure (pressure fluctuation) on the building envelope. Both can be obtained by conducting a wind tunnel test; however, wind tunnel tests are costly. Computational fluid dynamics (CFD) is an alternative way to solve this problem. The mean pressure acting on a building can be obtained using a turbulence model such as the k-\(\varepsilon\) model, but the estimation of pressure fluctuations usually relies on Large Eddy Simulation (LES). Although 3D LES can give a good analysis of the flow field, it is computationally demanding. To predict the wind-induced pressure fluctuations more efficiently, three main procedures are involved. Firstly, predict the mean flow quantities such as the mean velocity field, turbulent kinetic energy (k) and turbulent energy dissipation (\(\varepsilon\)) using a modified k-\(\varepsilon\) model; the modification is used to obtain a more accurate turbulent kinetic energy near the building. Secondly, generate a velocity fluctuation field that satisfies the mean turbulent quantities. Finally, solve the Poisson equation, derived from the incompressible momentum and continuity equations, to predict the pressure fluctuation. This model is applicable to both 2D and 3D simulation, and it is believed to require less computational effort than the LES model.
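The final step above can be illustrated with a minimal 2D Poisson solve by Jacobi iteration; the right-hand side below is a hypothetical placeholder standing in for the source term built from the generated velocity fluctuation field, and the boundary condition is simply \(p=0\).

```python
import numpy as np

def solve_pressure_poisson(source, h=1.0, n_iters=2000):
    """Jacobi iteration for laplacian(p) = source with p = 0 on the boundary."""
    p = np.zeros_like(source)
    for _ in range(n_iters):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] -
                                h * h * source[1:-1, 1:-1])
    return p

# Hypothetical source term standing in for the velocity-derived right-hand side
n = 41
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
rhs = np.sin(np.pi * x) * np.sin(np.pi * y)
p = solve_pressure_poisson(rhs, h=1.0 / (n - 1))
print(f"min p = {p.min():.4f}")  # exact minimum is -1/(2*pi^2) ~ -0.0507
```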
#### 2.0.5 Direct numerical simulation
Direct numerical simulation (DNS) of the Navier-Stokes equations over the full range of turbulent motion, from the largest scales down to the dissipation scales, is the ultimate goal of numerical simulation of fluid flow. It is the most accurate way to model fluid flow numerically (Murakami, 1997). The only approximations made are those required numerically, chosen to minimize discretization errors. When properly carried out, DNS results are comparable in every way to quality experimental data (Ferziger, 1993). The main advantages are the clear definition of all conditions (initial, boundary and forcing) and the production of data for every single variable. However, from a practical viewpoint, only simple geometries and low Reynolds numbers can be modeled, and while DNS is unsurpassed in its ability to predict turbulence, it is unlikely to become a practical engineering tool (Speziale, 1998).
Nevertheless, basic computations using DNS provide very valuable information for verifying and revising turbulence models (Murakami, 1998).
#### 2.0.6 Boundary conditions
Boundary conditions must be physically realistic and hence depend on the geometry, the materials, and the values of the pertinent parameters. For this study of flow over a low-rise building, the building presents a solid boundary with no-slip boundary conditions.
The importance of using physically meaningful boundary conditions in numerical simulation cannot be overstressed, because improperly defined boundary conditions lead to errors.
### Governing equation
The governing equations for fluid flow relate rates of change of the fluid motion to the forces that cause deformation. They define the properties of the fluid as functions of space and time (Fox and Mc Donald, 1994). They can be written in integral (control volume) form or differential (point) form; the differential form is employed in this research.
The governing equations for the numerical simulation of wind pressure load on a low-rise building are the continuity equation, which rests on the assumption that air flow due to wind can be treated as a continuous distribution of matter (Fox and Mc Donald, 1994), the momentum equation, which governs the transport of air and the associated forces, and the energy equation.
These equations are assumed to form a closed system of nonlinear partial differential equations.
#### 3.0.1 Mathematical formulation
Applying the fundamental laws of mechanics to a fluid gives the governing equations for a fluid. The conservation of mass equation is:
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\vec{v})=0 \tag{2}\]
and the conservation of momentum equation is:
\[\rho\frac{\partial\vec{v}}{\partial t}+\rho\left(\vec{v}\cdot\nabla\right)\vec{v}=-\nabla p+\rho\vec{g}+\nabla\cdot\tau_{ij} \tag{3}\]
These equations along with the conservation of energy equation form a set of coupled, non-linear partial differential equations. It is not possible to solve these equations analytically for most engineering problems. However, it is possible to obtain approximate computer-based solutions to the governing equations for a variety of engineering problems. This is the subject matter of Computational Fluid Dynamics (CFD).
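As a small numerical illustration of the mass-conservation constraint for incompressible flow (Eq. (2) with constant density reduces to \(\nabla\cdot\vec{v}=0\)), the sketch below checks the discrete divergence of a simple, purely illustrative velocity field.

```python
import numpy as np

# Illustrative 2D incompressible field: u = x, v = -y, so du/dx + dv/dy = 0
n = 50
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u, v = x, -y

du_dx = np.gradient(u, x[:, 0], axis=0)
dv_dy = np.gradient(v, y[0, :], axis=1)
divergence = du_dx + dv_dy
print(f"max |div v| = {np.abs(divergence).max():.2e}")  # ~0 up to finite-difference error
```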
#### 3.0.2 Prediction of mean flow quantities
In the current model, mean flow calculations are made using the standard K-\(\varepsilon\) model. The governing equations of the standard k- \(\varepsilon\) model are:
\[\frac{\partial U_{i}}{\partial x_{i}}=0 \tag{4}\]
\[\frac{DU_{i}}{Dt}=-\frac{1}{\rho}\frac{\partial P}{\partial x_{i}}+\frac{\partial}{\partial x_{j}}\left[(\nu+\nu_{t})\left(\frac{\partial U_{i}}{\partial x_{j}}+\frac{\partial U_{j}}{\partial x_{i}}\right)\right] \tag{5}\]
\[\frac{DK}{Dt}=\frac{\partial}{\partial x_{j}}\left[\left(\nu+\frac{\nu_{t}}{\sigma_{k}}\right)\frac{\partial K}{\partial x_{j}}\right]+P_{k}-\varepsilon \tag{6}\]
\[\frac{D\varepsilon}{Dt}=\frac{\partial}{\partial x_{j}}\left[\left(\nu+\frac{\nu_{t}}{\sigma_{\varepsilon}}\right)\frac{\partial\varepsilon}{\partial x_{j}}\right]+\frac{\varepsilon}{K}\left(C_{1}P_{k}-C_{2}\varepsilon\right) \tag{7}\]
where the eddy viscosity \(\nu_{t}\) is expressed as a function of the turbulent kinetic energy k and the energy dissipation rate \(\varepsilon\) as
\[\nu_{t}=C_{\mu}\frac{K^{2}}{\varepsilon} \tag{8}\]
In the above equations \(P_{k}\) is given by
\[P_{k}=\nu_{t}S^{2} \tag{9}\]
\[\nu_{t}=C_{\mu}^{*}\,\frac{K^{2}}{\varepsilon},\qquad C_{\mu}^{*}=C_{\mu}\,\frac{\Omega}{S}\quad\left(\tfrac{\Omega}{S}<1\right) \tag{10}\]
\[\nu_{t}=C_{\mu}^{*}\,\frac{K^{2}}{\varepsilon},\qquad C_{\mu}^{*}=C_{\mu}\quad\left(\tfrac{\Omega}{S}\geq 1\right) \tag{11}\]
where
\[S=\frac{1}{2}\left(\frac{\partial\langle u_{i}\rangle}{\partial x_{j}}+\frac{\partial\langle u_{j}\rangle}{\partial x_{i}}\right)^{2} \tag{12}\]
\[\Omega=\frac{1}{2}\left(\frac{\partial\langle u_{i}\rangle}{\partial x_{j}}-\frac{\partial\langle u_{j}\rangle}{\partial x_{i}}\right)^{2} \tag{13}\]
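As a small numerical illustration of the modified eddy-viscosity coefficient above, the sketch below evaluates \(\nu_{t}\) for a few arbitrary placeholder values of \(k\), \(\varepsilon\), \(S\) and \(\Omega\); the branch for \(\Omega/S\geq 1\) uses \(C_{\mu}^{*}=C_{\mu}\), which follows the common form of this modification and is an assumption on our part, since the source expression is ambiguous.

```python
import numpy as np

C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity_modified(k, eps, strain, rotation):
    """nu_t = C_mu* k^2 / eps, with C_mu* = C_mu * (Omega/S) when Omega/S < 1, else C_mu."""
    ratio = rotation / np.maximum(strain, 1e-12)
    c_mu_star = np.where(ratio < 1.0, C_MU * ratio, C_MU)
    return c_mu_star * k**2 / np.maximum(eps, 1e-12)

# Arbitrary placeholder fields (e.g. near a roof corner vs. the free stream)
k = np.array([0.5, 1.2])        # turbulent kinetic energy (m^2/s^2)
eps = np.array([0.8, 2.0])      # dissipation rate (m^2/s^3)
S = np.array([3.0, 1.0])        # strain-rate invariant
Omega = np.array([1.5, 2.0])    # rotation-rate invariant
print(eddy_viscosity_modified(k, eps, S, Omega))
```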
### Applications of CFD
FLUENT, like other commercial CFD codes, offers a variety of boundary condition options such as velocity inlet, pressure inlet, pressure outlet, etc. It is very important to specify the proper boundary conditions to have a well-defined problem. The alternative to DNS found in most CFD packages (including FLUENT) is to solve the Reynolds Averaged Navier Stokes (RANS) equations. The RANS equations govern the mean velocity and pressure; because these quantities vary smoothly in space and time, they are much easier to solve.
To define the geometry, the software GAMBIT is used to design and mesh the profile before it is exported to FLUENT.
## IV Result and Discussion
This section presents the numerical study of wind pressure load on low-rise buildings under different terrain. The analysis was carried out using ANSYS Fluent 6.2 for the simulation, with the Reynolds-Averaged Navier-Stokes (RANS) equations together with the k-\(\varepsilon\) model as the governing equations.
The simulation was carried out for 3 different building roofs: flat, gable and circular. Two different terrains were considered: flat terrain and a shallow escarpment.
The building model was developed in GAMBIT, a modeling package that works with Fluent 6.2. A 2D model of each of the three roofs was used: the flat roof model, made up of a front wall, rear wall and roof; the gable roof model, made up of a front wall, rear wall, front roof and rear roof; and the circular roof model, consisting of a front wall, rear wall and roof.
The meshes were generated using quadrilateral cells of known dimensions. The inlet boundary and a pressure outlet were specified, and the model surfaces were specified as walls. After this specification, the mesh was imported into ANSYS Fluent for simulation. The simulation is pressure-based with an absolute velocity formulation, and the viscous model is the k-epsilon model with the standard wall function. The Fluent fluid material used is air (as wind) and the operating pressure is atmospheric pressure.
The simulation was performed for the 3 different types of roofs at velocities of 12 m/s, 15 m/s and 20 m/s in order to compare the effect of wind at various speeds. This was analyzed for both the flat and the shallow escarpment terrain.
### Flat Roof on a Shallow Escarpment
A flat roof is considered on a shallow escarpment for two different wind velocities, examining the windward and leeward directions. The results show differences in the velocity magnitude, total pressure, and stream function.
Fig 9a & b show the contours of total pressure around the roof section of the building. An area known as the stagnation zone is noticed on the top of the roof, where the static pressure drops to negative values (shown by the blue colour on the roof). It is evident that as the wind speed increases, the stagnation zone also increases, which can also be verified from Fig 10a & b, which show the contours at the leeward side of the building. The vortex formed at the leeward side of the building is proportional to the wind velocity. The contours of the stream function showing the path-lines reveal a separated shear layer at the top of the roof, as shown in Fig 11a & b. It is evident that an increase in wind velocity leads to the development of vortices from the rolling up of the shear layer. The contours of the coefficient of pressure for a flat roof on a shallow escarpment are shown in Fig 12a & b.
Figure 9a: The contour of total pressure around the roof section of flat roof model at wind velocity of 12m/s.
Figure 10a. The contours of vorticity magnitude at the leeward side of the model at a wind speed of 12m/s. Figure 10b. The contours of vorticity magnitude at the leeward side of the model at a wind speed of 20m/s. Figure 11a. The contours of the stream function around the building model at a velocity of 12m/s. Figure 11b. The contours of the stream function around the building model at a velocity of 20m/s.
Fig 13a shows the coefficient of drag plotted against the flow time for a flat roof model under a shallow escarpment terrain. This shows that as the velocity increases, the flow time and the coefficient of drag decrease. Fig 13b shows the average weighted area of the model against the time step, in which the velocity of wind increases as the average weighted area of the model increases at a constant time step.
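Since the results in this section are reported as a coefficient of drag, the following sketch shows the standard normalisation used to convert a drag force into \(C_{d}\); the force, density, speed and reference area below are hypothetical values, not outputs of the present simulations.

```python
def drag_coefficient(drag_force, air_density, wind_speed, reference_area):
    """Standard definition C_d = F_d / (0.5 * rho * U^2 * A_ref)."""
    return drag_force / (0.5 * air_density * wind_speed**2 * reference_area)

# Hypothetical values: 850 N drag on a 10 m^2 frontal area at 12 m/s
print(f"C_d = {drag_coefficient(850.0, 1.225, 12.0, 10.0):.2f}")
```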
### Roof types on a Flat terrain
Fig 14 shows the coefficient of drag against flow time for the 3 different types of roofs at a constant velocity of 12m/s.
Fig 15 shows the total pressure around the 3 different roof models, in which a stagnation zone forms on the roof of the building, implying a negative pressure. The pressure decreases most for the flat roof, next to it the gable roof, and then the circular roof.
Figure 14: The coefficient of drag against flow time for the different roof model at velocity of 12m/s
Figure 13a. The coefficient of drag against flow time for flat roof model at different velocity.
Figure 13b. The average weighted area against time step for flat roof model at different velocity.
Fig 16 shows the contours of vorticity magnitude around the 3 different roof models, in which the gable roof has the highest pressure, followed by the circular and then the flat roof. The vortex formed is largest for the flat roof, then the circular, and then the gable roof.
Fig 17 shows the stream function for the 3 different roofs, indicating that the flat roof develops the largest vortex, followed by the circular and then the gable roof. For the flat roof, the vortex grows over time, which obstructs the wind flow and in turn causes damage to the roof.
Fig 18 shows the coefficient of pressure for the 3 different types of roofs at a constant velocity of 12m/s.
## V Conclusion
A numerical study of wind pressure load on low-rise buildings under different terrain was carried out, in which the coefficient of drag was obtained for the roof types considered at different velocities. The pressure-based k-\(\epsilon\) model was used to obtain the changes in the contours of total pressure, vorticity magnitude and stream function for the different types of roofs on different terrain. This study identifies the effect of high- and low-speed wind on a building, the most suitable type of roof for construction, and the aerodynamic characteristics of wind around building walls and roofs.
|
2306.17652 | Accurate 2D Reconstruction for PET Scanners based on the Analytical
White Image Model | In this paper, we provide a precise mathematical model of crystal-to-crystal
response which is used to generate the white image - a necessary compensation
model needed to overcome the physical limitations of the PET scanner. We
present a closed-form solution, as well as several accurate approximations, due
to the complexity of the exact mathematical expressions. We prove,
experimentally and analytically, that the difference between the best
approximations and real crystal-to-crystal response is insignificant. The
obtained responses are used to generate the white image compensation model. It
can be written as a single closed-form expression making it easy to implement
in known reconstruction methods. The maximum likelihood expectation
maximization (MLEM) algorithm is modified and our white image model is
integrated into it. The modified MLEM algorithm is not based on the system
matrix, rather it is based on ray-driven projections and back-projections. The
compensation model provides all necessary information about the system.
Finally, we check our approach on synthetic and real data. For the real-world
acquisition, we use the Raytest ClearPET camera for small animals and the NEMA
NU 4-2008 phantom. The proposed approach overperforms competitive,
non-compensated reconstruction methods. | Tomislav Matulić, Damir Seršić | 2023-06-30T13:38:11Z | http://arxiv.org/abs/2306.17652v2 | # Accurate 2D Reconstruction for PET Scanners based on the Analytical White Image Model
###### Abstract
In this paper, we provide a precise mathematical model of crystal-to-crystal response which is used to generate the white image - a necessary compensation model needed to overcome the physical limitations of the PET scanner. We present a closed-form solution, as well as several accurate approximations, due to the complexity of the exact mathematical expressions. We prove, experimentally and analytically, that the difference between the best approximations and real crystal-to-crystal response is insignificant. The obtained responses are used to generate the white image compensation model. It can be written as a single closed-form expression making it easy to implement in known reconstruction methods. The maximum likelihood expectation maximization (MLEM) algorithm is modified and our white image model is integrated into it. The modified MLEM algorithm is not based on the system matrix, rather it is based on ray-driven projections and back-projections. The compensation model provides all necessary information about the system. Finally, we check our approach on synthetic and real data. For the real-world acquisition, we use the Raytest ClearPET camera for small animals and the NEMA NU 4-2008 phantom. The proposed approach overperforms competitive, non-compensated reconstruction methods.
_Keywords:_ Positron Emission Tomography, Maximum-Likelihood Expectation-Maximization algorithm, Raytest ClearPET, Exact crystal response model, White image model
## 1 Introduction
Image reconstruction in Positron Emission Tomography (PET) has been well investigated over the last 50 years. Textbook analytical methods for image reconstruction, like Filtered Back-Projection (FBP) and Back-Projection Filtering (BPF), are well known, but rarely used in practice. They usually solely rely on Radon transformation and do not take into account a precise physical model of the PET scanner. Thus, reconstructed real-world images tend to be distorted. Still, analytical methods can consider the physical model of the PET scanner, as described in [19], which results in less image degradation.
Besides analytical methods, there are several iterative reconstruction algorithms, which prevail in practical applications. Important representatives are Maximum Likelihood Expectation Maximization (MLEM) [31], and its more efficient version Ordered Subset Expectation |
2302.14860 | Revocable Cryptography from Learning with Errors | Quantum cryptography leverages many unique features of quantum information in
order to construct cryptographic primitives that are oftentimes impossible
classically. In this work, we build on the no-cloning principle of quantum
mechanics and design cryptographic schemes with key-revocation capabilities. We
consider schemes where secret keys are represented as quantum states with the
guarantee that, once the secret key is successfully revoked from a user, they
no longer have the ability to perform the same functionality as before. We
define and construct several fundamental cryptographic primitives with
key-revocation capabilities, namely pseudorandom functions, secret-key and
public-key encryption, and even fully homomorphic encryption, assuming the
quantum subexponential hardness of the learning with errors problem. Central to
all our constructions is our approach for making the Dual-Regev encryption
scheme (Gentry, Peikert and Vaikuntanathan, STOC 2008) revocable. | Prabhanjan Ananth, Alexander Poremba, Vinod Vaikuntanathan | 2023-02-28T18:58:11Z | http://arxiv.org/abs/2302.14860v3 | # Revocable Cryptography from Learning with Errors
###### Abstract
Quantum cryptography leverages many unique features of quantum information in order to construct cryptographic primitives that are oftentimes impossible classically. In this work, we build on the no-cloning principle of quantum mechanics and design cryptographic schemes with _key-revocation capabilities_. We consider schemes where secret keys are represented as quantum states with the guarantee that, once the secret key is successfully revoked from a user, they no longer have the ability to perform the same functionality as before.
We define and construct several fundamental cryptographic primitives with _key-revocation capabilities_, namely pseudorandom functions, secret-key and public-key encryption, and even fully homomorphic encryption, assuming the quantum subexponential hardness of the learning with errors problem. Central to all our constructions is our approach for making the Dual-Regev encryption scheme (Gentry, Peikert and Vaikuntanathan, STOC 2008) revocable.
###### Contents
* 1 Introduction
* 1.1 Our Contributions in More Detail
* 1.2 Overview
* 1.3 Applications
* 1.4 Related Work
* 2 Preliminaries
* 2.1 Quantum Computing
* 2.2 Lattices and Cryptography
* 3 Quantum Discrete Gaussian Sampling for \(q\)-ary Lattices
* 3.1 Gaussian Superpositions
* 3.2 Algorithm: GenGauss
* 3.3 Algorithm: QSampGauss
* 4 Quantum Goldreich-Levin Theorem for Large Fields
* 4.1 Post-Quantum Reductions and Quantum Rewinding
* 4.2 Goldreich-Levin Theorems for Large Fields
* 4.3 Amplification
* 5 Definition: Key-Revocable Public-Key Encryption
* 5.1 Security Definition
* 5.2 Key-Revocable Public-Key Fully Homomorphic Encryption
* 5.3 From Single-Bit to Multi-Bit Security
* 6 Key-Revocable Dual-Regev Encryption
* 6.1 Construction
* 6.2 Simultaneous Search-to-Decision Reduction with Quantum Auxiliary Input
* 6.3 Distinct Pair Extraction
* 6.4 Proof of Theorem 6.1
* 7 Key-Revocable Fully Homomorphic Encryption
* 7.1 Construction
* 7.2 Proof of Theorem 7.1
* 8 Revocable Pseudorandom Functions
* 8.1 Definition
* 8.2 Security
* 8.3 Construction
Introduction
Quantum computing presents exciting new opportunities for cryptography, using remarkable properties of quantum information to construct cryptographic primitives that are unattainable classically. At the heart of quantum cryptography lies the _no-cloning principle_[22, 17] of quantum information which stipulates that it is fundamentally impossible to copy an unknown quantum state. Indeed, Wiesner [23] in his seminal work from the 1970s, used the no-cloning principle to construct a quantum money scheme, wherein quantum states are used to construct banknotes that can be verified to be authentic (using a secret key) but cannot be counterfeited. Ever since this watershed moment, and especially so in the recent years, a wide variety of primitives referred to as _unclonable_ primitives have been studied and constructed in the context of encryption [11, 1, 12, 13], digital signatures [14] and pseudorandom functions [15].
Our Work: Revocable Cryptography. Delegation and revocation of privilege are problems of great importance in cryptography. Indeed, the problem of revocation in the context of digital signatures and certificates in the classical world is a thorny problem [23, 24]. In this work, we undertake a systematic study of _revocable (quantum) cryptography_ which allows us to delegate and revoke privileges in the context of several fundamental cryptographic primitives. This continues a recent line of work in quantum cryptography dealing with revoking (or certifiably deleting) states such as quantum ciphertexts or simple quantum programs [25, 1, 1, 1, 1, 13, 14, 15, 16].
As a motivating example, consider the setting of an employee at a company who takes a vacation and wishes to authorize a colleague to perform certain tasks on her behalf, tasks that involve handling sensitive data. Since the sensitive data is (required to be) encrypted, the employee must necessarily share her decryption keys with her colleague. When she returns from vacation, she would like to have her decryption key back; naturally, one would like to ensure that her colleague should not be able to decrypt future ciphertexts (which are encrypted under the same public key) once the key is "returned". Evidently, if the decryption key is a classical object, this is impossible to achieve.
In revocable (quantum) cryptography, we associate a cryptographic functionality, such as decryption using a secret key, with a quantum state in such a way that a user can compute this functionality if and only if they are in possession of the quantum state. We then design a revocation algorithm which enables the user to certifiably return the quantum state to the owner. Security requires that once the user returns the state (via our revocation algorithm), they should not have the ability to evaluate the functionality (e.g. decrypt ciphertexts) anymore. We refer to this new security notion as _revocation security_.
Another, possibly non-obvious, application is to detecting malware attacks. Consider a malicious party who hacks into an electronic device and manages to steal a user's decryption keys. If cryptographic keys are represented by classical bits, it is inherently challenging to detect _phishing attacks_ that compromise user keys. For all we know, the intruder could have stolen the user's decryption keys without leaving a trace. Indeed, a few years ago, decryption keys which were used to protect cell-phone communications [25] were successfully stolen by spies without being detected. With revocable cryptography, a malicious user successfully stealing a user key would invariably revoke the decryption capability from the user. This latter event can be detected.
Our Results in a Nutshell.We construct revocable cryptographic objects under standard cryptographic assumptions. Our first main result constructs a key-revocable public-key encryption scheme, and our second main result constructs a key-revocable pseudorandom function. We obtain several corollaries and extensions, including key-revocable secret-key encryption and key-revocable fully homomorphic encryption. In all these primitives, secret keys are represented as quantum states that retain the functionality of the original secret keys. We design revocation procedures and guarantee that once a user successfully passes the procedure, they cannot compute the functionality any more.
All our constructions are secure under the quantum subexponential hardness of learning with errors [14]. At the heart of all of our contributions lies our result which shows that the Dual-Regev public-key encryption scheme [1] satisfies revocation security.
Related Notions.There are several recent notions in quantum cryptography that are related to revocability. Of particular relevance is the stronger notion of copy-protection introduced by Aaronson [1]. Breaking the revocable security of a task gives the adversary a way to make two copies of a (possibly different) state both of which are capable of computing the same functionality. Thus, uncloneability is a stronger notion. However, the only known constructions of copy-protection [13, 12] rely on the heavy hammer of post-quantum secure indistinguishability obfuscation for which there are no known constructions based on well-studied assumptions. Our constructions, in contrast, rely on the post-quantum hardness of the standard learning with errors problem. Another related notion is the significantly weaker definition of secure software leasing [1] which guarantees that once the quantum state computing a functionality is returned, the _honest evaluation algorithm_ cannot compute the original functionality. Yet another orthogonal notion is that of certifiably deleting _ciphertexts_, originating from the works of Unruh [15] and Broadbent and Islam [16]. In contrast, our goal is to delegate and revoke _cryptographic capabilities_ enabled by private keys. For detailed comparisons, we refer the reader to Section 1.4.
### Our Contributions in More Detail
We present our results in more detail below. First, we introduce the notion of key-revocable public-key encryption. Our main result is that dual-Regev public-key encryption scheme [1] satisfies revocation security. After that, we study revocation security in the context of fully homomorphic encryption and pseudorandom functions.
Key-Revocable Public-Key Encryption.We consider public-key encryption schemes where the decryption key, modeled as a quantum state, can be delegated to a third party and can later be revoked [1]. The syntax of a key-revocable public-key scheme (Definition 5.1) is as follows:
* \(\mathsf{KeyGen}(1^{\lambda})\): this is a setup procedure which outputs a public key \(\mathsf{PK}\), a master secret key MSK and a decryption key \(\rho_{\mathsf{SK}}\). While the master secret key is typically a classical string, the decryption key is modeled as a quantum state. (The use cases of MSK and \(\rho_{\mathsf{SK}}\) are different, as will be evident below.)
* \(\mathsf{Enc}(\mathsf{PK},x)\): this is the regular classical encryption algorithm which outputs a ciphertext \(\mathsf{CT}\).
* \(\mathsf{Dec}(\rho_{\mathsf{SK}},\mathsf{CT})\): this is a quantum algorithm which takes as input the quantum decryption key \(\rho_{\mathsf{SK}}\) and a classical ciphertext, and produces a plaintext.
* \(\mathsf{Revoke}(\mathsf{PK},\mathsf{MSK},\sigma)\): this is the revocation procedure that outputs \(\mathsf{Valid}\) or \(\mathsf{Invalid}\). If \(\sigma\) equals the decryption key \(\rho_{\mathsf{SK}}\), then \(\mathsf{Revoke}\) is expected to output \(\mathsf{Valid}\) with high probability.
After the decryption key is returned, we require that the sender loses its ability to decrypt ciphertexts. This is formalized as follows (see Definition 5.3): conditioned on revocation being successful, the adversary should not be able to predict whether it is given an encryption of a message versus the uniform distribution over the ciphertext space with probability better than \(\frac{1}{2}+\mathsf{negl}(\lambda)\).1 We prove the following in Theorem 6.1.
Footnote 1: The definition is intentionally formulated as a 1-bit unpredictability game; this is inspired by the notion of _uncloneable-indistinguishable security_ considered by Broadbent and Lord [11]. Unlike the traditional cryptography literature, in this setting, 1-bit unpredictability is not equivalent to computational indistinguishability; the reason is that we also incorporate whether revocation is successful in the security experiment. Nonetheless, our construction satisfies the indistinguishability-based security notion as well.
**Theorem 1.1** (Informal).: _Assuming that the \(\mathsf{LWE}\) and \(\mathsf{SIS}\) problems with subexponential modulus are hard against quantum adversaries running in subexponential time (see Section 2.2), there exists a key-revocable public-key encryption scheme._
Due to the quantum reduction from \(\mathsf{SIS}\) to \(\mathsf{LWE}\)[10], the two assumptions are, in some sense, equivalent. Therefore, we can in principle rely on the subexponential hardness of \(\mathsf{LWE}\) alone.
Our work improves upon prior works, which either use post-quantum secure indistinguishability obfuscation [11, 12] or consider the weaker private-key setting [10].
Key-Revocable Fully Homomorphic Encryption. We go beyond the traditional public-key setting and design the first _fully homomorphic encryption_ (\(\mathsf{FHE}\)) scheme [12, 13] with key-revocation capabilities. Our construction is based on a variant of the (leveled) \(\mathsf{FHE}\) scheme of Gentry, Sahai and Waters [10], which we extend to a key-revocable encryption scheme using Gaussian superpositions. The syntax of a key-revocable \(\mathsf{FHE}\) scheme is the same as in the key-revocable public-key setting from before (Definition 5.1), except for the additional algorithm \(\mathsf{Eval}\) which is the same as in a regular \(\mathsf{FHE}\) scheme. We prove the following in Theorem 7.1.
**Theorem 1.2** (Informal).: _Assuming that the \(\mathsf{LWE}\) and \(\mathsf{SIS}\) problems with subexponential modulus are hard against quantum adversaries running in subexponential time (see Section 2.2), there exists a key-revocable (leveled) fully homomorphic encryption scheme._
We prove the theorem by invoking the security of our key-revocable Dual-Regev public-key encryption scheme in Section 6.
(Key-)Revocable Pseudorandom Functions.We consider other cryptographic primitives with key-revocation capabilities that go beyond decryption functionalities; specifically, we introduce the notion of _key-revocable_ pseudorandom functions (\(\mathsf{PRFs}\)) with the following syntax:
* \(\mathsf{Gen}(1^{\lambda})\): outputs a \(\mathsf{PRF}\) key \(k\), a quantum key \(\rho_{k}\) and a master secret key \(\mathsf{MSK}\).
* \(\mathsf{PRF}(k;x)\): on key \(k\) and input \(x\), output a value \(y\). This is a deterministic algorithm.
* \(\mathsf{Eval}(\rho_{k},x)\): on input a state \(\rho_{k}\) and an input \(x\), output a value \(y\).
* \(\mathsf{Revoke}(\mathsf{MSK},\sigma)\): on input verification \(\mathsf{MSK}\) and state \(\sigma\), outputs \(\mathsf{Valid}\) or \(\mathsf{Invalid}\).
After the quantum key \(\rho_{k}\) is successfully returned, we require that the sender loses its ability to evaluate the \(\mathsf{PRF}\). This is formalized as follows (see Definition 8.3): any efficient adversary can simultaneously pass the revocation phase and succeed in predicting the output of a pseudorandom function on a challenge input \(x^{*}\) versus uniform with probability at most \(\frac{1}{2}+\mathsf{negl}(\lambda)\). In fact, we consider a more general definition where the adversary receives many challenge inputs instead of just one challenge input.
We give the first construction of key-revocable pseudorandom functions (\(\mathsf{PRFs}\)) from standard assumptions. Previous schemes implicit in [11] either require indistinguishability obfuscation, or considered weaker notions of revocable \(\mathsf{PRFs}\) in the form of _secure software leasing_ [1, 10], which merely prevents the possibility of _honestly_ evaluating the \(\mathsf{PRF}\) once the key is revoked.
Since in the context of pseudorandom functions, it is clear what is being revoked, we instead simply call the notion revocable pseudorandom functions.
**Theorem 1.3** (Informal).: _Assuming that the \(\mathsf{LWE}\) and \(\mathsf{SIS}\) problems with subexponential modulus are hard against quantum adversaries running in subexponential time (see Section 2.2), there exist key-revocable pseudorandom functions._
Revocable pseudorandom functions immediately give us key-revocable (many time secure) secret-key encryption schemes.
Discussion: Unclonable Cryptography from \(\mathsf{LWE}\). Over the years, the existence of many fundamental cryptographic primitives such as pseudorandom functions [12], fully homomorphic encryption [13], attribute-based encryption [1] and succinct argument systems [10] has been based on the existence of learning with errors. In fact, as far as we know, there are only a few foundational primitives remaining (indistinguishability obfuscation is one such example) whose existence is not (yet) known to be based on learning with errors.
This situation is quite different in the world of unclonable cryptography. Most of the prominent results have information-theoretic guarantees but restricted functionalities [1, 2] or are based on the existence of post-quantum indistinguishability obfuscation [14, 11]. While there are works [10] that do propose lattice-based constructions of unclonable primitives, there are still many primitives, such as quantum money and quantum copy-protection, whose feasibility we would like to establish based on the existence of learning with errors. We hope that our work presents new toolkits towards building more unclonable primitives from \(\mathsf{LWE}\).
Independent and Concurrent Work.Independently and concurrently, Agrawal et al. [1], explored the notion of public-key encryption with secure leasing which is related to key-revocable public-key encryption. Their notion as such is stronger than ours: they achieve classical revocation whereas we achieve quantum revocation. On the one hand, they achieve a generic construction based on any post-quantum secure public-key encryption whereas our notion is based on the post-quantum hardness of learning with errors. They also explore other notions of advanced encryption with secure leasing including attribute-based encryption and functional encryption, which are not explored in our work.
On the other hand, their construction of revocable public-key encryption involves many abstractions whereas our construction is based on the versatile Dual-Regev public-key encryption scheme. Additionally, we obtain key-revocable _fully homomorphic encryption_ and key-revocable _pseudorandom functions_ which are unique to our work.
### Overview
We now give a technical overview of our constructions and their high level proof ideas. We begin with the key-revocable public-key encryption construction. A natural idea would be to start with Regev's public-key encryption scheme [14] and to then upgrade the construction in order to make it revocable. However, natural attempts to associate an unclonable quantum state with the decryption key fail and thus, we instead consider the Dual-Regev public-key encryption scheme and make it key-revocable. We describe the scheme below.
Key-Revocable Dual-Regev Public-Key Encryption.Our first construction is based on the _Dual-Regev_ public-key encryption scheme [13] and makes use of Gaussian superpositions which serve as a quantum decryption key. We give an overview of Construction 2 below.
* \(\mathsf{KeyGen}(1^{n})\): sample a matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) along with a _short trapdoor basis_ \(\mathsf{td}_{\mathbf{A}}\). To generate the decryption key, we employ the following procedure2: Using the matrix \(\mathbf{A}\) as input, first create a Gaussian superposition of short vectors in \(\mathbb{Z}^{m}\cap(-\frac{q}{2},\frac{q}{2}]^{m}\), denoted by3 Footnote 2: In Section 3.2, this is formalized as the procedure \(\mathsf{GenGauss}\) (see Algorithm 1). \[|\psi\rangle=\sum_{\mathbf{x}\in\mathbb{Z}_{q}^{m}}\rho_{\sigma}(\mathbf{x})\,|\mathbf{x}\rangle\otimes|\mathbf{A}\cdot\mathbf{x}\ (\mathrm{mod}\ q)\rangle\] where \(\rho_{\sigma}(\mathbf{x})=\exp(-\pi\|\mathbf{x}\|^{2}/\sigma^{2})\) is the Gaussian measure, for some \(\sigma>0\). Next, measure the second register, which partially collapses the superposition and results in the _coset state_ \[|\psi_{\mathbf{y}}\rangle=\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m}:\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{x})\,|\mathbf{x}\rangle\] for some outcome \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\). Finally, we let \(|\psi_{\mathbf{y}}\rangle\) be the decryption key \(\rho_{\mathsf{SK}}\), \((\mathbf{A},\mathbf{y})\) be the public key \(\mathsf{PK}\), and we let the trapdoor \(\mathsf{td}_{\mathbf{A}}\) serve as the master secret key \(\mathsf{MSK}\). Footnote 3: Note that the state is not normalized for convenience.
* \(\mathsf{Enc}(\mathsf{PK},\mu)\): to encrypt a bit \(\mu\in\{0,1\}\), sample a random string \(\mathbf{s}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n}\) together with discrete Gaussian errors \(\mathbf{e}\in\mathbb{Z}^{m}\) and \(e^{\prime}\in\mathbb{Z}\), and output the (classical) ciphertext \(\mathsf{CT}\) given by \[\mathsf{CT}=(\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal},\mathbf{ s}^{\intercal}\mathbf{y}+e^{\prime}+\mu\cdot\lfloor\frac{q}{2}\rfloor)\ \in\mathbb{Z}_{q}^{m}\times\mathbb{Z}_{q}.\]
* \(\mathsf{Dec}(\rho_{\mathsf{SK}},\mathsf{CT})\): to decrypt a ciphertext \(\mathsf{CT}\) using the decryption key \(\rho_{\mathsf{SK}}=|\psi_{\mathbf{y}}\rangle\), first apply the unitary \(U:|\mathbf{x}\rangle\,|0\rangle\rightarrow|\mathbf{x}\rangle\,|\mathsf{CT} \cdot(-\mathbf{x},1)^{\intercal}\rangle\) on input \(|\psi_{\mathbf{y}}\rangle\,|0\rangle\), and then measure the second register in the computational basis. Because \(|\psi_{\mathbf{y}}\rangle\) is a superposition of short vectors \(\mathbf{x}\) subject to \(\mathbf{A}\cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\), we obtain an approximation of \(\mu\cdot\lfloor\frac{q}{2}\rfloor\) from which we can recover \(\mu\).4 Footnote 4: For appropriate choices of parameters, decryption via rounding succeeds at outputting \(\mu\) with overwhelming probability and hence we can invoke the _Almost as Good as New Lemma_[1] to recover the original state \(|\psi_{\mathbf{y}}\rangle\).
* \(\mathsf{Revoke}(\mathsf{PK},\mathsf{MSK},\rho)\): to verify the returned state \(\rho\) given as input the public key \((\mathbf{A},\mathbf{y})\) and master secret key \(\mathsf{td}_{\mathbf{A}}\), apply the projective measurement \(\{|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|,I-|\psi_{\mathbf{y}} \rangle\langle\psi_{\mathbf{y}}|\}\) onto \(\rho\). Output \(\mathsf{Valid}\), if the measurement succeeds, and output \(\mathsf{Invalid}\), otherwise.
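For intuition, the following is a small classical sketch of the Dual-Regev arithmetic used in the scheme above; it models the decryption key as a single short vector \(\mathbf{x}\) with \(\mathbf{A}\cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\) rather than as a Gaussian superposition, uses toy parameters that are far from secure, and illustrates only encryption and rounding-based decryption, not revocation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 64, 12289          # toy parameters, not secure

# KeyGen: public (A, y), secret short vector x with A x = y (mod q)
A = rng.integers(0, q, size=(n, m))
x = rng.integers(-2, 3, size=m)              # short secret (stand-in for the Gaussian state)
y = (A @ x) % q

def enc(bit):
    s = rng.integers(0, q, size=n)
    e = rng.integers(-2, 3, size=m)          # small noise
    e_prime = rng.integers(-2, 3)
    c1 = (s @ A + e) % q
    c2 = (s @ y + e_prime + bit * (q // 2)) % q
    return c1, c2

def dec(c1, c2):
    val = (c2 - c1 @ x) % q                  # approximately bit * q/2 plus small noise
    return int(min(val, q - val) > q // 4)

for bit in (0, 1):
    c = enc(bit)
    print(bit, "->", dec(*c))
```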
Implementing revocation, efficiently.Note that performing a projective measurement onto a fixed Gaussian state \(|\psi_{\mathbf{y}}\rangle\) is, in general, computationally infeasible. In fact, if it were to be possible to efficiently perform this projection using \((\mathbf{A},\mathbf{y})\) alone, then one could easily use such a procedure to solve the short integer solution (SIS) problem. Fortunately, we additionally have the trapdoor for \(\mathbf{A}\) at our disposal to perform the projection.
One of our contributions is to design a _quantum discrete Gaussian sampler for \(q\)-ary lattices5_ which, given as input \((\mathbf{A},\mathbf{y},\mathsf{td}_{\mathbf{A}},\sigma)\), implements a unitary that efficiently prepares the Gaussian superposition \(|\psi_{\mathbf{y}}\rangle\) from scratch with access to the trapdoor \(\mathsf{td}_{\mathbf{A}}\). At a high level, our Gaussian sampler can be thought of as an explicit quantum reduction from the _inhomogeneous_ SIS problem [1] to the search variant of the LWE problem (see Section 3.3).
Footnote 5: In Section 3.3, this is formalized as the procedure QSampGauss (see Algorithm 2).
Proving security: Initial challenges.Let us first discuss some high level ideas behind proving the security of the above construction. We would like to prove that if the above scheme is insecure in the presence of a particular adversary, then we can use such an adversary to contradict some well-known computational assumption. That is, there exists an adversary who can simultaneously pass the revocation step successfully and also predict whether it receives a ciphertext or a uniform element from the ciphertext space. Towards designing such a reduction, an initial attempt would be to use the predictor, predicting an encryption of a valid message versus uniform, to break some computational assumption. Indeed, since the ciphertexts look like samples from the LWE distribution, we might be tempted to directly invoke LWE to prove this. Unfortunately, this argument is flawed! For all we know, the adversary could be doing the following: given the state \(|\psi_{\mathbf{y}}\rangle\), it clones it, returns the cloned version and, then uses the original copy to distinguish encryption of a valid message versus uniform. In this case, the predictor is running the decryption algorithm honestly and thus it is not feasible to use such an adversary to break LWE.
This suggests that we may be able to argue that a computationally bounded adversary cannot possibly clone the state \(|\psi_{\mathbf{y}}\rangle\). Indeed, if the adversary did succeed at cloning \(|\psi_{\mathbf{y}}\rangle\), then we should be able to measure the two copies separately in order to come up with a short solution in the kernel of \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) - thereby solving the short integer solution (SIS) problem [1]. However, it is not clear if the adversary needs to clone the state in order for it to succeed. Perhaps the adversary did not clone the state after all and nevertheless succeeded at distinguishing a valid ciphertext versus uniform ciphertext. For all we know, the adversary could have been successful in breaking LWE.
Since it is not possible to detect which scenario we are in (i.e. whether the adversary successfully cloned or whether it solved the LWE problem), it is important that the reduction leverages the fact that the adversary simultaneously returns the original state and yet at the same time violates the 1-bit unpredictability experiment, in order to break some computational assumption.
Insight: Reduction to SIS.Our goal is to use the state returned by the adversary and to leverage the 1-bit prediction guarantee in order to break some computational problem. It should seem suspicious whether such a reduction is even possible: after all the adversary is returning the state we gave them! _How could this possibly help?_ Our main insight lies in the following observation: while the adversary does eventually return the state we give them, the only way it can later succeed in the prediction experiment is if it retains useful information about the state. If we could somehow extract this information from the adversary, then using the extracted information alongside the returned state, we could hope to break some computational assumption. For instance, suppose we
can extract a short vector \(\mathbf{x}\) such that \(\mathbf{A}\cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\). By measuring the state returned by the adversary, we could then hope to get a second short vector \(\mathbf{x}^{\prime}\) such that \(\mathbf{A}\cdot\mathbf{x}^{\prime}=\mathbf{y}\ (\mathrm{mod}\ q)\), and from this, we can recover a short solution in the kernel of \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\).
Even if, for a moment, we disregard the issue of being able to extract \(\mathbf{x}\) from the adversary, there are still some important missing steps in the above proof template:
* Firstly, measuring the returned state should give a vector different from \(\mathbf{x}\) with non-negligible probability. In order to prove this, we need to argue that the squared amplitude of every term is bounded away from \(1\). We prove this statement (Lemma 2.9) holds as long as \(\mathbf{A}\) is full rank.
* Secondly, the reduction to SIS would only get as input \(\mathbf{A}\) and not a trapdoor for \(\mathbf{A}\). This means that it will no longer be possible for the reduction to actually check whether the state returned by the adversary is valid. We observe that, instead of first verifying whether the returned state is valid and then measuring in the computational basis, we can in fact skip verification and immediately go ahead and measure the state in the computational basis; this is implicit in the analysis in the proof of Lemma 6.9.
* Finally, the adversary could have entangled the returned state with its residual state in such a way that measuring the returned state always yields the same vector \(\mathbf{x}\) as the one extracted from the adversary. In the same analysis in the proof of Lemma 6.9, we prove that, even if the adversary entangles its state with the returned state, with non-negligible probability we get two distinct short vectors mapping \(\mathbf{A}\) to \(\mathbf{y}\).
All that is left is to argue that it is possible to extract \(\mathbf{x}\) from the adversary while simultaneously verifying whether the returned state is correct or not. To show that we can indeed extract another short pre-image from the adversary's quantum side information, we prove what we call a _simultaneous search-to-decision reduction with quantum auxiliary input_ with respect to the Dual-Regev scheme (see Theorem 6.8). This constitutes the main technical result of this work.
Main contribution: Simultaneous search-to-decision reduction with quantum advice.Informally, our theorem says the following: any successful Dual-Regev distinguisher with access to quantum side information Aux (which depends on the decryption key) can be converted into a successful extractor that finds a key on input Aux - even conditioned on Revoke succeeding on a seperate register \(R\). We now present some intuition behind our proof.
Suppose there exists a successful Dual-Regev distinguisher \(\mathcal{D}\) (as part of the adversary \(\mathcal{A}\)) that, given quantum auxiliary information Aux, can distinguish between \((\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal},\mathbf{s}^{\intercal }\mathbf{y}+e^{\prime})\) and uniform \((\mathbf{u},r)\in\mathbb{Z}_{q}^{m}\times\mathbb{Z}_{q}\) with advantage \(\epsilon\).
Ignoring register \(R\): For now, let us ignore the fact that Revoke is simultaneously applied on system \(R\). Inspired by techniques from the _leakage resilience_ literature [10], we now make the following observation. Letting \(\mathbf{y}=\mathbf{A}\cdot\mathbf{x}_{0}\ (\mathrm{mod}\ q)\), for some Gaussian vector \(\mathbf{x}_{0}\) with distribution proportional to \(\rho_{\sigma}(\mathbf{x}_{0})\), the former sample can be written as \((\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal},(\mathbf{s}^{ \intercal}\mathbf{A}+\mathbf{e}^{\intercal})\cdot\mathbf{x}_{0}+e^{\prime})\). Here, we assume a _noise flooding_ regime in which the noise magnitude of \(e^{\prime}\) is significantly larger than that of \(\mathbf{e}^{\intercal}\cdot\mathbf{x}_{0}\). Because the distributions are statistically close, the distinguisher \(\mathcal{D}\) must succeed at distinguishing the sample from uniform with probability negligibly close to \(\epsilon\). Finally, we invoke the LWE assumption and claim that the same distinguishing advantage persists, even if we replace \((\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal})\) with a random string \(\mathbf{u}\in\mathbb{Z}_{q}^{m}\). Here, we rely on the fact that the underlying LWE sample
is, in some sense, independent of the auxiliary input Aux handed to the distinguisher \(\mathcal{D}\). To show that this is the case, we need to argue that the reduction can generate the appropriate inputs to \(\mathcal{D}\) on input \(\mathbf{A}\); in particular it should be able to generate the auxiliary input Aux (which depends on a state \(|\psi_{\mathbf{y}}\rangle\)), while simultaneously producing a Gaussian vector \(\mathbf{x}_{0}\) such that \(\mathbf{A}\cdot\mathbf{x}_{0}=\mathbf{y}\pmod{q}\). Note that this seems to violate the SIS assumption, since the ability to produce both a superposition \(|\psi_{\mathbf{y}}\rangle\) of pre-images and a single pre-image \(\mathbf{x}_{0}\) would allow one to obtain a collision for \(\mathbf{y}\).
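As an illustration of the rewriting step above, the following numpy sketch (with toy, non-cryptographic parameters of our own choosing) generates a Dual-Regev-style sample and confirms that \((\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal})\cdot\mathbf{x}_{0}+e^{\prime}\) and \(\mathbf{s}^{\intercal}\mathbf{y}+e^{\prime}\) differ only by the small cross term \(\mathbf{e}^{\intercal}\mathbf{x}_{0}\), which the flooding noise \(e^{\prime}\) is meant to drown out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, non-cryptographic parameters chosen only for illustration.
n, m, q = 8, 64, 12289
sigma_x, sigma_e, sigma_flood = 3.0, 3.0, 500.0   # flooding noise is much larger than e^T x0

A = rng.integers(0, q, size=(n, m))
x0 = np.rint(rng.normal(0, sigma_x, size=m)).astype(int)   # short Gaussian pre-image
y = A @ x0 % q                                              # syndrome y = A x0 (mod q)

s = rng.integers(0, q, size=n)
e = np.rint(rng.normal(0, sigma_e, size=m)).astype(int)
e_prime = int(np.rint(rng.normal(0, sigma_flood)))

b = (s @ A + e) % q                    # first component s^T A + e^T
c_actual = (s @ y + e_prime) % q       # second component, as in Dual-Regev
c_rewritten = (b @ x0 + e_prime) % q   # the rewriting used in the reduction

# The two agree up to the cross term e^T x0, which is tiny compared to e'.
diff = int((c_rewritten - c_actual) % q)
diff = diff if diff <= q // 2 else diff - q
print("e^T x0 =", int(e @ x0), "| difference =", diff, "| flooding noise e' =", e_prime)
```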
_Invoking Gaussian-collapsing_: To overcome this issue, we ask the reduction to generate the quantum auxiliary input in a different way; rather than computing Aux as a function of \(|\psi_{\mathbf{y}}\rangle\), we compute it as a function of \(|\mathbf{x}_{0}\rangle\), where \(\mathbf{x}_{0}\) results from _collapsing_ the state \(|\psi_{\mathbf{y}}\rangle\) via a measurement in the computational basis. By invoking the _Gaussian collapsing property_[14], we can show that the auxiliary information computed using \(|\psi_{\mathbf{y}}\rangle\) is computationally indistinguishable from the auxiliary information computed using \(|\mathbf{x}_{0}\rangle\). Once we invoke the collapsed version of \(|\psi_{\mathbf{y}}\rangle\), we can carry out the reduction and conclude that \(\mathcal{D}\) can distinguish between the samples \((\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x}_{0})\) and \((\mathbf{u},r)\), where \(\mathbf{u}\) and \(r\) are random and \(\mathbf{x}_{0}\) is Gaussian, with advantage negligibly close to \(\epsilon\).6 Notice that \(\mathcal{D}\) now resembles a so-called _Goldreich-Levin_ distinguisher [13].
Footnote 6: Technically, \(\mathcal{D}\) can distinguish between \((\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x}_{0}+e^{\prime})\) and \((\mathbf{u},r)\) for a Gaussian error \(e^{\prime}\). However, by defining a distinguisher \(\tilde{\mathcal{D}}\) that first shifts \(\mathbf{u}\) by a Gaussian vector \(e^{\prime}\) and then runs \(\mathcal{D}\), we obtain the desired distinguisher.
_Reduction to Goldreich-Levin_: Assuming the existence of a quantum Goldreich-Levin theorem for the field \(\mathbb{Z}_{q}\), one could then convert \(\mathcal{D}\) into an extractor that extracts \(\mathbf{x}_{0}\) with high probability. Prior to our work, a quantum Goldreich-Levin theorem was only known for \(\mathbb{Z}_{2}\) [1, 1]. In particular, it is unclear how to extend prior work towards higher order fields \(\mathbb{Z}_{q}\) because the interference pattern in the analysis of the quantum extractor does not seem to generalize beyond the case when \(q=2\). Fortunately, we can rely on the _classical_ Goldreich-Levin theorem for finite fields due to Dodis et al. [15], as well as recent work by Bitansky, Brakerski and Kalai [1], which shows that a large class of classical reductions can be generically converted into quantum reductions. This allows us to obtain the first quantum Goldreich-Levin theorem for large fields, which we prove in Section 4. Specifically, we can show that a distinguisher \(\mathcal{D}\) that, given auxiliary input Aux, can distinguish between \((\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x}_{0})\) and \((\mathbf{u},r)\) with advantage \(\varepsilon\) can be converted into a quantum extractor that can extract \(\mathbf{x}_{0}\) given Aux in time \(\mathrm{poly}(1/\varepsilon,q)\) with probability negligibly close to \(1\). The fact that the extractor succeeds with probability negligibly close to \(1\) is crucial in our analysis mentioned below.
_Incorporating \(R\)_: To complete the security proof behind our key-revocable Dual-Regev scheme, we need to show something _stronger_; namely, we need to argue that the Goldreich-Levin extractor succeeds on input Aux - even conditioned on the fact that Revoke outputs Valid when applied on a separate register \(R\) (which may be entangled with Aux). At first sight, it might seem as though all the previous ideas are of no use since the guarantee of the Goldreich-Levin extractor only holds when we ignore the register \(R\).
Fortunately, the Goldreich-Levin extractor succeeds with probability negligibly close to \(1\). Since the probability that revocation succeeds is non-negligible, this implies that the extractor has to succeed with non-negligible probability - even if we condition on revocation succeeding on register \(R\). Using this fact, we can successfully carry out the reduction to SIS.
### Applications
We leverage our result on key-revocable Dual-Regev encryption to obtain key-revocable fully homomorphic encryption and revocable pseudorandom functions.
Key-Revocable Dual-Regev Fully Homomorphic Encryption. The first application of our key-revocable public-key encryption concerns fully homomorphic encryption schemes. We extend our key-revocable Dual-Regev scheme towards a (leveled) \(\mathsf{FHE}\) scheme in Construction 3 by using the \(\mathsf{DualGSW}\) variant of the \(\mathsf{FHE}\) scheme by Gentry, Sahai and Waters [13, 14].
To encrypt a bit \(\mu\in\{0,1\}\) with respect to the public-key \((\mathbf{A},\mathbf{y})\), sample a matrix \(\mathbf{S}\xleftarrow{s}\mathbb{Z}_{q}^{n\times N}\) together with a Gaussian error matrix \(\mathbf{E}\in{\mathbb{Z}}^{m\times N}\) and row vector \(\mathbf{e}\in{\mathbb{Z}}^{N}\), and output the ciphertext
\[\mathsf{CT}=\begin{bmatrix}\mathbf{A}^{\intercal}\mathbf{S}+\mathbf{E}\\ \mathbf{y}^{\intercal}\mathbf{S}+\mathbf{e}\end{bmatrix}+\mu\cdot\mathbf{G}\ (\mathrm{mod}\ q)\,\in\,{\mathbb{Z}}_{q}^{(m+1)\times N}.\]
Here, \(\mathbf{G}\) is the _gadget matrix_ which converts a binary vector into its field representation over \({\mathbb{Z}}_{q}\). As before, the decryption key consists of a Gaussian superposition \(|\psi_{\mathbf{y}}\rangle\) of pre-images of \(\mathbf{y}\).
Note that the \(\mathsf{DualGSW}\) ciphertext can be thought of as a column-wise concatenation of \(N\)-many independent Dual-Regev ciphertexts. In Theorem 7.1, we prove the security of our construction by invoking the security of our key-revocable Dual-Regev scheme.
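For concreteness, the following is a minimal classical sketch of the \(\mathsf{DualGSW}\)-style encryption and decryption described above, with toy, non-cryptographic parameters; here a single short pre-image \(\mathbf{x}_{0}\) stands in for the Gaussian superposition \(|\psi_{\mathbf{y}}\rangle\), and the particular gadget-column decoding in `decrypt` is an illustrative choice rather than the decryption procedure of Construction 3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy, non-cryptographic parameters (illustration only).
n, m, q = 6, 48, 2 ** 12
k = int(np.ceil(np.log2(q)))     # bits per Z_q entry
N = (m + 1) * k                  # gadget dimension

# Gadget matrix G = I_{m+1} (x) (1, 2, 4, ..., 2^{k-1}).
g = 2 ** np.arange(k)
G = np.kron(np.eye(m + 1, dtype=int), g)

# Keys: A random, x0 a short pre-image standing in for |psi_y>, y = A x0 (mod q).
A = rng.integers(0, q, size=(n, m))
x0 = np.rint(rng.normal(0, 2.0, size=m)).astype(int)
y = A @ x0 % q

def encrypt(mu):
    S = rng.integers(0, q, size=(n, N))
    E = np.rint(rng.normal(0, 2.0, size=(m, N))).astype(int)
    e = np.rint(rng.normal(0, 2.0, size=N)).astype(int)
    top = (A.T @ S + E) % q          # m x N block
    bottom = (y @ S + e) % q         # 1 x N block
    return (np.vstack([top, bottom]) + mu * G) % q

def decrypt(CT):
    v = np.concatenate([-x0, [1]])   # short vector: v . [A^T S + E ; y^T S + e] is small mod q
    col = m * k + (k - 1)            # column where v . G equals 2^{k-1} ~ q/2
    val = int(v @ CT[:, col] % q)
    val = val if val <= q // 2 else val - q
    return int(abs(val) > q // 4)    # close to q/2 means mu = 1

for mu in (0, 1):
    print("mu =", mu, "-> decrypted:", decrypt(encrypt(mu)))
```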
Revocable Pseudorandom Functions. Our next focus is on leveraging the techniques behind key-revocable public-key encryption to obtain revocable pseudorandom functions. Recall that the revocation security of pseudorandom functions stipulates the following: any efficient adversary (after successfully revoking the state that enables it to evaluate pseudorandom functions) cannot predict whether it receives pseudorandom outputs on many challenge inputs versus strings picked uniformly at random with probability better than \(\frac{1}{2}+\mathsf{negl}(\lambda)\). An astute reader might notice that revocation security does not even imply the traditional pseudorandomness guarantee! Hence, we need to additionally impose the requirement that a revocable pseudorandom function should also satisfy the traditional pseudorandomness guarantee.
Towards realizing a construction satisfying our definitions, we consider the following template:
1. First show that there exists a \(\mu\)-revocable pseudorandom function for \(\mu=1\). Here, \(\mu\)-revocation security means the adversary receives \(\mu\)-many randomly chosen challenge inputs after revocation.
2. Next, we show that any 1-revocable pseudorandom function also satisfies the stronger notion of revocation security where there is no a priori bound on the number of challenge inputs received by the adversary.
3. Finally, we show that we can generically upgrade any revocable \(\mathsf{PRF}\) in such a way that it also satisfies the traditional pseudorandomness property.
The second bullet is proven using a hybrid argument. The third bullet is realized by combining a revocable \(\mathsf{PRF}\) with a post-quantum secure \(\mathsf{PRF}\) (not necessarily satisfying revocation security).
Hence, we focus the rest of our attention on proving the first bullet.
_1-revocation security._ We start with the following warmup construction. The secret key \(k\) consists of the matrices \(\mathbf{A},\{\mathbf{S}_{i,b}\}_{i\in[\ell],b\in\{0,1\}}\), where \(\mathbf{A}\xleftarrow{s}\mathbb{Z}_{q}^{n\times m}\) and \(\mathbf{S}_{i,b}\in{\mathbb{Z}}_{q}^{n\times n}\) are sampled from some error distribution; the output of the pseudorandom function on \(x\in\{0,1\}^{\ell}\) is defined to be \(\lfloor\sum_{i\in[\ell]}\mathbf{S}_{i,x_{i}}\mathbf{A}\rceil_{p}\), where \(q\gg p\) and \(\lfloor\cdot\rceil_{p}\) refers to a particular rounding operation modulo \(p\).
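The following minimal numpy sketch spells out this warmup construction for toy parameters of our own choosing (they offer no security); it only illustrates the key format and the rounded evaluation \(\lfloor\sum_{i\in[\ell]}\mathbf{S}_{i,x_{i}}\mathbf{A}\rceil_{p}\).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy parameters (no security intended): q >> p, small Gaussian key matrices S_{i,b}.
n, m, q, p, ell = 8, 16, 2 ** 16, 2 ** 8, 10

S = np.rint(rng.normal(0, 2.0, size=(ell, 2, n, n))).astype(int)   # key matrices S_{i,b}
A = rng.integers(0, q, size=(n, m))                                 # public matrix A

def round_p(M):
    """Entrywise rounding Z_q -> Z_p."""
    return np.rint((p / q) * (M % q)).astype(int) % p

def prf(x):
    """PRF(k, x) = round_p( sum_i S_{i, x_i} A ), for x in {0,1}^ell."""
    Sx = sum(S[i, x[i]] for i in range(ell))
    return round_p(Sx @ A % q)

x = rng.integers(0, 2, size=ell)
print(prf(x))
```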
In addition to handing out a regular PRF key \(k\), we also need to generate a quantum key \(\rho_{k}\) such that, given \(\rho_{k}\) and any input \(x\), we can efficiently compute \(\mathsf{PRF}(k,x)\). Moreover, \(\rho_{k}\) can be revoked such that any efficient adversary after revocation loses the ability to evaluate the pseudorandom function. To enable the generation of \(\rho_{k}\), we first modify the above construction. We generate \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) and include this as part of the key. The modified pseudorandom function, on input \(x\), outputs \(\lfloor\sum_{i\in[\ell]}\mathbf{S}_{i,x_{i}}\mathbf{y}\rceil_{p}\). We denote \(\sum_{i\in[\ell]}\mathbf{S}_{i,x_{i}}\) by \(\mathbf{S}_{x}\) and, with this new notation, the output of the pseudorandom function can be written as \(\lfloor\mathbf{S}_{x}\mathbf{y}\rceil_{p}\).
With this modified construction, we now describe the elements as part of the quantum key \(\rho_{k}\):
* For every \(i\in[\ell]\) and \(b\in\{0,1\}\), include \(\mathbf{S}_{i,b}\mathbf{A}+\mathbf{E}_{i,b}\) in \(\rho_{k}\). We sample \(\mathbf{S}_{i,b}\) and \(\mathbf{E}_{i,b}\) from a discrete Gaussian distribution with appropriate standard deviation \(\sigma>0\).
* Include \(|\psi_{\mathbf{y}}\rangle\) which, as defined in the key-revocable Dual-Regev construction, is a Gaussian superposition of short solutions mapping \(\mathbf{A}\) to \(\mathbf{y}\).
To evaluate on an input \(x\) using \(\rho_{k}\), compute \(\sum_{i}\mathbf{S}_{i,x_{i}}\mathbf{A}+\mathbf{E}_{i,x_{i}}\) and then using the state \(|\psi_{\mathbf{y}}\rangle\), map this to \(\sum_{i}\mathbf{S}_{i,x_{i}}\mathbf{y}+\mathbf{E}_{i,x_{i}}\). Finally, perform the rounding operation to get the desired result.
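To see why this evaluation is correct, the sketch below emulates the quantum key classically by substituting a single short pre-image \(\mathbf{x}_{0}\) with \(\mathbf{A}\mathbf{x}_{0}=\mathbf{y}\ (\mathrm{mod}\ q)\) for \(|\psi_{\mathbf{y}}\rangle\): applying it to \(\sum_{i}\mathbf{S}_{i,x_{i}}\mathbf{A}+\mathbf{E}_{i,x_{i}}\) yields \(\sum_{i}\mathbf{S}_{i,x_{i}}\mathbf{y}\) up to a small error, so the two rounded outputs typically coincide. All parameters are illustrative assumptions with no security.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy parameters (illustration only); the Gaussian terms are kept tiny so that
# rounding almost always removes the accumulated error.
n, m, q, p, ell = 8, 32, 2 ** 20, 2 ** 6, 12

A = rng.integers(0, q, size=(n, m))
x0 = np.rint(rng.normal(0, 2.0, size=m)).astype(int)   # classical stand-in for |psi_y>
y = A @ x0 % q

S = np.rint(rng.normal(0, 2.0, size=(ell, 2, n, n))).astype(int)
E = np.rint(rng.normal(0, 2.0, size=(ell, 2, n, m))).astype(int)
C = (np.einsum('ibjk,kl->ibjl', S, A) + E) % q          # encodings S_{i,b} A + E_{i,b} in rho_k

def round_p(v):
    return np.rint((p / q) * (v % q)).astype(int) % p

def prf(x):
    """Exact output round_p( S_x y ), computed with the secret key."""
    Sx = sum(S[i, x[i]] for i in range(ell))
    return round_p(Sx @ y % q)

def eval_with_key_state(x):
    """Evaluation using the encodings and the short pre-image x0 in place of |psi_y>."""
    Cx = sum(C[i, x[i]] for i in range(ell)) % q        # ~ S_x A + E_x
    return round_p(Cx @ x0 % q)                         # ~ S_x y + E_x x0 (small error)

x = rng.integers(0, 2, size=ell)
print("exact:    ", prf(x))
print("via rho_k:", eval_with_key_state(x))
# The two rounded outputs typically coincide; a coordinate can differ only when
# S_x y lands very close to a rounding boundary.
```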
Our goal is to show that, after the adversary revokes \(|\psi_{\mathbf{y}}\rangle\), given a challenge input \(x^{*}\) picked uniformly at random, it cannot predict whether it has received \(\lfloor\sum_{i\in[\ell]}\mathbf{S}_{i,x_{i}^{*}}\mathbf{y}\rceil_{p}\) or a uniformly random vector in \(\mathbb{Z}_{p}^{n}\).
_Challenges in proving security_: We would like to argue that when the state \(|\psi_{\mathbf{y}}\rangle\) is revoked, the adversary loses its ability to evaluate the pseudorandom function. Unfortunately, this is not completely true. For all we know, the adversary could have computed the pseudorandom function on many inputs of its choice before the revocation phase, and it could leverage this to break the security after revocation. For instance, suppose the input is of length \(O(\log\lambda)\); in this case, the adversary could evaluate the pseudorandom function on all possible inputs before revocation. After revocation, on any challenge input \(x^{*}\), the adversary can then successfully predict whether it receives a pseudorandom output or a uniformly chosen random output. Indeed, a pseudorandom function with \(O(\log\lambda)\)-length input is learnable and hence, there should be no hope of proving it to be key-revocable. This suggests that, at the very least, we need to explicitly incorporate the fact that \(x^{*}\) is of length \(\omega(\log\lambda)\), and, more importantly, has enough entropy, in order to prove security.
_Our insight_: Our insight is to reduce the security of revocable pseudorandom function to the security of key-revocable Dual-Regev public-key encryption. At a high level, our goal is to set up the parameters in such a way that the following holds:
* \((\mathbf{A},\mathbf{y})\), defined above, is set to be the public key corresponding to the Dual-Regev public-key encryption scheme,
* \(|\psi_{\mathbf{y}}\rangle\), which is part of the pseudorandom function key, is set to be the decryption key of the Dual Regev scheme,
* Suppose that the challenge ciphertext, denoted by \(\mathsf{CT}^{*}\), comprises two parts: \(\mathsf{CT}^{*}_{1}\in\mathbb{Z}_{q}^{n\times m}\) and \(\mathsf{CT}^{*}_{2}\in\mathbb{Z}_{q}^{n}\). Note that if \(\mathsf{CT}^{*}_{1}\approx\mathbf{s}^{\intercal}\mathbf{A}\) and \(\mathsf{CT}^{*}_{2}\approx\mathbf{s}^{\intercal}\mathbf{y}\), for some LWE secret vector \(\mathbf{s}\), then
\(\mathsf{CT}_{1}^{*}\) can be mapped onto \(\mathsf{CT}_{2}^{*}\) using the state \(|\psi_{\mathbf{y}}\rangle\). We use \(\mathsf{CT}_{1}^{*}\) to set the challenge input \(x^{*}\) in such a way that \(\mathsf{CT}_{2}^{*}\) is the output of the pseudorandom function on \(x^{*}\). This implicitly resolves the entropy issue we discussed above; by the semantic security of Dual-Regev, there should be enough entropy in \(\mathsf{CT}_{1}^{*}\) which translates to the entropy of \(x^{*}\).
It turns out that this goal is quite ambitious: in particular, it is unclear how to set up the parameters in such a way that the output of the pseudorandom function on \(x^{*}\) is exactly \(\mathsf{CT}_{2}^{*}\). Fortunately, this is not a deterrent: we can set up the parameters such that the output is \(\approx\mathsf{CT}_{2}^{*}+\mathbf{u}\), where \(\mathbf{u}\) is a vector that is known to the reduction.
Once we set up the parameters, we can then reduce the security of revocable pseudorandom functions to revocable Dual Regev.
_Implementation details_: So far, we have established that the proof template should work, but the implementation details of the proof still need to be fleshed out. Firstly, we set up the parameters in such a way that \(\ell=nm\lceil\log q\rceil\). This means that there is a bijective function mapping \([n]\times[m]\times[\lceil\log q\rceil]\) to \([\ell]\). As a result, the quantum key \(\rho_{k}\) can be alternatively viewed as follows:
* For every \(i\in[n],j\in[m],\tau\in[\lceil\log q\rceil],b\in\{0,1\}\), include \(\mathbf{S}_{b}^{(i,j,\tau)}\mathbf{A}+\mathbf{E}_{b}^{(i,j,\tau)}\) in \(\rho_{k}\). We sample \(\mathbf{S}_{b}^{(i,j,\tau)}\) and \(\mathbf{E}_{b}^{(i,j,\tau)}\) from a discrete Gaussian with appropriate standard deviation \(\sigma>0\).
The output of the pseudorandom function on input \(x\) can now be interpreted as
\[\mathsf{PRF}(k,x)=\left\lfloor\sum_{\begin{subarray}{c}i\in[n],j\in[m]\\ \tau\in[\lceil\log q\rceil]\end{subarray}}\mathbf{S}_{x_{i}}^{(i,j,\tau)}\mathbf{y}\right\rceil_{p}\]
Next, we modify \(\rho_{k}\) as follows: instead of generating \(\mathbf{S}_{b}^{(i,j,\tau)}\mathbf{A}+\mathbf{E}_{b}^{(i,j,\tau)}\), we generate \(\mathbf{S}_{b}^{(i,j,\tau)}\mathbf{A}+\mathbf{E}_{b}^{(i,j,\tau)}+\mathsf{M}_{b}^{(i,j,\tau)}\), for any set of matrices \(\{\mathsf{M}_{b}^{(i,j,\tau)}\}\). The change should be undetectable to a computationally bounded adversary, thanks to the quantum hardness of learning with errors. In the security proof, we set up the challenge input \(x^{*}\) in such a way that summing up the matrices \(\mathsf{M}_{x_{i}^{*}}^{(i,j,\tau)}\) corresponds to \(\mathsf{CT}_{1}^{*}\), where \(\mathsf{CT}_{1}^{*}\) is part of the key-revocable Dual-Regev challenge ciphertext as discussed above. With this modification, when \(\rho_{k}\) is evaluated on \(x^{*}\), we get an output that is close to \(\mathsf{CT}_{2}^{*}+\mathbf{u}\), where \(\mathbf{u}\approx\sum_{i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil]}\mathbf{y}\) is known to the reduction (discussed above) - thereby violating the security of the key-revocable Dual-Regev scheme.
### Related Work
Copy-Protection. Of particular relevance to our work is the foundational notion of copy-protection introduced by Aaronson [1]. Informally speaking, a copy-protection scheme is a compiler that transforms programs into quantum states in such a way that using the resulting states, one can run the original program. Yet, the security guarantee stipulates that any adversary given one copy of the state cannot produce a bipartite state wherein both parts compute the original program.
While copy-protection is known to be impossible for arbitrary unlearnable functions [1, 2], identifying interesting functionalities for which copy-protection is feasible has been an active research direction [1, 1, 2]. Of particular significance is the problem of copy-protecting cryptographic functionalities, such as decryption and signing functionalities. Coladangelo
et al. [11] took the first step in this direction and showed that it is feasible to copy-protect decryption functionalities and pseudorandom functions assuming the existence of post-quantum indistinguishability obfuscation. While a very significant first step, the assumption of post-quantum iO is unsatisfactory: there have been numerous post-quantum iO candidate proposals [14, 15, 16, 17, 18, 19, 20], but not one of them has been based on well-studied assumptions7.
Footnote 7: We remark that there do exist post-quantum-insecure iO schemes based on well-founded assumptions [13].
Our work can be viewed as copy-protecting cryptographic functionalities based on learning with errors under a weaker yet meaningful security guarantee.
Secure Software Leasing. Another primitive relevant to revocable cryptography is secure software leasing [1]. The notion of secure software leasing states that any program can be compiled into a functionally equivalent program, represented as a quantum state, in such a way that once the compiled program is returned back8, the honest evaluation algorithm on the residual state cannot compute the original functionality. Key-revocable encryption can be viewed as secure software leasing for decryption algorithms. However, unlike secure software leasing, key-revocable encryption satisfies a much stronger security guarantee, where there is no restriction that the adversary run honestly after returning the software. Secure leasing for different functionalities, namely point functions [16, 17], evasive functions [1, 18] and pseudorandom functions [1], has been studied by recent works.
Footnote 8: According to the terminology of [1], this refers to finite term secure software leasing.
Encryption Schemes with Revocable Ciphertexts. Unruh [19] proposed a (private-key) quantum timed-release encryption scheme that is _revocable_, i.e. it allows a user to _return_ the ciphertext of a quantum timed-release encryption scheme, thereby losing all access to the data. Unruh's scheme uses conjugate coding [20, 14] and relies on the _monogamy of entanglement_ in order to guarantee that revocation necessarily erases information about the plaintext. Broadbent and Islam [1] introduced the notion of _certified deletion_9 and constructed a private-key quantum encryption scheme with the aforementioned feature which is inspired by the quantum key distribution protocol [14, 15]. In contrast with Unruh's [19] notion of revocable quantum ciphertexts which are eventually returned and verified, Broadbent and Islam [14] consider certificates which are entirely classical. Moreover, the security definition requires that, once the certificate is successfully verified, the plaintext remains hidden even if the secret key is later revealed.
Footnote 9: This notion is incomparable with another related notion called unclonable encryption [1, 2], which informally guarantees that it should be infeasible to clone quantum ciphertexts without losing information about the encrypted message.
Using a hybrid encryption scheme, Hiroka, Morimae, Nishimaki and Yamakawa [17] extended the scheme in [14] to both public-key and attribute-based encryption with certified deletion via _receiver non-committing_ encryption [1, 18]. As a complementary result, the authors also gave a public-key encryption scheme with certified deletion which is _publicly verifiable_ assuming the existence of one-shot signatures and extractable witness encryption. Using _Gaussian superpositions_, Poremba [19] proposed _Dual-Regev_-based public-key and fully homomorphic encryption schemes with certified deletion which are publicly verifiable and proven secure assuming a _strong Gaussian-collapsing conjecture_ -- a strengthening of the collapsing property of the Ajtai hash. Bartusek and Khurana [1] revisited the notion of certified deletion and presented a unified
approach for how to generically convert any public-key, attribute-based, fully-homomorphic, timed-release or witness encryption scheme into an equivalent quantum encryption scheme with certified deletion. In particular, they considered a stronger notion called _certified everlasting security_ which allows the adversary to be computationally unbounded once a valid deletion certificate is submitted.
## Acknowledgements
P.A. thanks Fatih Kaleoglu for several insightful discussions.
This work was done (in part) while the authors were visiting the Simons Institute for the Theory of Computing. P.A. is supported by a research gift from Cisco. A.P. is partially supported by AFOSR YIP (award number FA9550-16-1-0495), the Institute for Quantum Information and Matter (an NSF Physics Frontiers Center; NSF Grant PHY-1733907) and by a grant from the Simons Foundation (828076, TV).
## 2 Preliminaries
Let \(\lambda\in\mathbb{N}\) denote the security parameter throughout this work. We assume that the reader is familiar with the fundamental cryptographic concepts.
### Quantum Computing
For a comprehensive background on quantum computation, we refer to [11, 12]. We denote a finite-dimensional complex Hilbert space by \(\mathcal{H}\), and we use subscripts to distinguish between different systems (or registers). For example, we let \(\mathcal{H}_{A}\) be the Hilbert space corresponding to a system \(A\). The tensor product of two Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) is another Hilbert space denoted by \(\mathcal{H}_{AB}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). Let \(\mathcal{L}(\mathcal{H})\) denote the set of linear operators over \(\mathcal{H}\). A quantum system over the 2-dimensional Hilbert space \(\mathcal{H}=\mathbb{C}^{2}\) is called a _qubit_. For \(n\in\mathbb{N}\), we refer to quantum registers over the Hilbert space \(\mathcal{H}=\left(\mathbb{C}^{2}\right)^{\otimes n}\) as \(n\)-qubit states. More generally, we associate _qudits_ of dimension \(d\geq 2\) with a \(d\)-dimensional Hilbert space \(\mathcal{H}=\mathbb{C}^{d}\). For brevity, we write \(\mathcal{H}_{d}^{n}=\mathcal{H}_{d}^{\otimes n}\), where \(\mathcal{H}_{d}\) is \(d\)-dimensional. We use the word _quantum state_ to refer to both pure states (unit vectors \(\left|\psi\right\rangle\in\mathcal{H}\)) and density matrices \(\rho\in\mathcal{D}(\mathcal{H})\), where we use the notation \(\mathcal{D}(\mathcal{H})\) to refer to the space of positive semidefinite matrices of unit trace acting on \(\mathcal{H}\). Occasionally, we consider _subnormalized states_, i.e. states in the space of positive semidefinite operators over \(\mathcal{H}\) with trace norm not exceeding \(1\).
The _trace distance_ of two density matrices \(\rho,\sigma\in\mathcal{D}(\mathcal{H})\) is given by
\[\mathrm{TD}(\rho,\sigma)=\frac{1}{2}\mathrm{Tr}\left[\sqrt{(\rho-\sigma)^{ \dagger}(\rho-\sigma)}\right].\]
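For small examples, the trace distance can be computed directly from the eigenvalues of \(\rho-\sigma\); the short Python sketch below does exactly that for two single-qubit states.

```python
import numpy as np

def trace_distance(rho, sigma):
    """TD(rho, sigma) = (1/2) * sum of absolute eigenvalues of the Hermitian matrix rho - sigma."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

# Example: the single-qubit pure states |0> and |+>.
ket0 = np.array([[1], [0]], dtype=complex)
ket_plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
rho = ket0 @ ket0.conj().T
sigma = ket_plus @ ket_plus.conj().T
print(trace_distance(rho, sigma))   # ~ 0.7071 = sqrt(1 - |<0|+>|^2)
```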
Let \(q\geq 2\) be a modulus and \(n\in\mathbb{N}\) and let \(\omega_{q}=e^{\frac{2\pi i}{q}}\in\mathbb{C}\) denote the primitive \(q\)-th root of unity. The \(n\)-qudit \(q\)_-ary quantum Fourier transform_ over the ring \(\mathbb{Z}_{q}^{n}\) is defined by the operation,
\[\mathsf{FT}_{q}:\quad\left|\mathbf{x}\right\rangle\quad\mapsto\quad\sqrt{q^{ -n}}\sum_{\mathbf{y}\in\mathbb{Z}_{q}^{n}}\omega_{q}^{\left\langle\mathbf{x},\mathbf{y}\right\rangle}\left|\mathbf{y}\right\rangle,\qquad\forall\mathbf{x }\in\mathbb{Z}_{q}^{n}.\]
The \(q\)-ary quantum Fourier transform is _unitary_ and can be efficiently performed on a quantum computer for any modulus \(q\geq 2\)[12].
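For a small modulus, the single-qudit transform \(\mathsf{FT}_{q}\) is simply a \(q\times q\) matrix and the \(n\)-qudit transform is its \(n\)-fold tensor power; the sketch below builds it and verifies unitarity numerically.

```python
import numpy as np

def qft_matrix(q):
    """Single-qudit q-ary Fourier transform: FT_q |x> = q^{-1/2} sum_y omega^{xy} |y>."""
    omega = np.exp(2j * np.pi / q)
    x = np.arange(q)
    return omega ** np.outer(x, x) / np.sqrt(q)

q = 5
F = qft_matrix(q)
print(np.allclose(F @ F.conj().T, np.eye(q)))        # unitarity of FT_q

# The n-qudit transform over Z_q^n is the n-fold tensor (Kronecker) power.
F2 = np.kron(F, F)
print(np.allclose(F2 @ F2.conj().T, np.eye(q ** 2)))
```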
A quantum channel \(\Phi:\mathcal{L}(\mathcal{H}_{A})\to\mathcal{L}(\mathcal{H}_{B})\) is a linear map between linear operators over the Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\). Oftentimes, we use the compact notation \(\Phi_{A\to B}\) to denote a quantum channel between \(\mathcal{L}(\mathcal{H}_{A})\) and \(\mathcal{L}(\mathcal{H}_{B})\). We say that a channel \(\Phi\) is _completely positive_ if, for a reference system \(R\) of arbitrary size, the induced map \(I_{R}\otimes\Phi\) is positive, and we call it _trace-preserving_ if \(\operatorname{Tr}[\Phi(X)]=\operatorname{Tr}[X]\), for all \(X\in\mathcal{L}(\mathcal{H})\). A quantum channel that is both completely positive and trace-preserving is called a quantum \(\mathsf{CPTP}\) channel.
A polynomial-time _uniform_ quantum algorithm (or \(\mathsf{QPT}\) algorithm) is a polynomial-time family of quantum circuits given by \(\mathcal{C}=\{C_{\lambda}\}_{\lambda\in\mathbb{N}}\), where each circuit \(C\in\mathcal{C}\) is described by a sequence of unitary gates and measurements; moreover, for each \(\lambda\in\mathbb{N}\), there exists a deterministic polynomial-time Turing machine that, on input \(1^{\lambda}\), outputs a circuit description of \(C_{\lambda}\). Similarly, we also define (classical) probabilistic polynomial-time (\(\mathsf{PPT}\)) algorithms. A quantum algorithm may, in general, receive (mixed) quantum states as inputs and produce (mixed) quantum states as outputs. Occasionally, we restrict \(\mathsf{QPT}\) algorithms implicitly; for example, if we write \(\operatorname{Pr}[\mathcal{A}(1^{\lambda})=1]\) for a \(\mathsf{QPT}\) algorithm \(\mathcal{A}\), it is implicit that \(\mathcal{A}\) is a \(\mathsf{QPT}\) algorithm that outputs a single classical bit.
A polynomial-time _non-uniform_ quantum algorithm is a family \(\{(C_{\lambda},\nu_{\lambda})\}_{\lambda\in\mathbb{N}}\), where \(\{C_{\lambda}\}_{\lambda\in\mathbb{N}}\) is a polynomial-size (not necessarily uniformly generated) family of circuits where, for each \(\lambda\in\mathbb{N}\), a subset of input qubits to \(C_{\lambda}\) consists of a polynomial-size auxiliary density matrix \(\nu_{\lambda}\). We use the following notion of indistinguishability of quantum states in the presence of auxiliary inputs.
**Definition 2.1** (Indistinguishability of ensembles of quantum states, [20]).: _Let \(p:\mathbb{N}\to\mathbb{N}\) be a polynomially bounded function, and let \(\rho_{\lambda}\) and \(\sigma_{\lambda}\) be \(p(\lambda)\)-qubit quantum states. We say that \(\{\rho_{\lambda}\}_{\lambda\in\mathbb{N}}\) and \(\{\sigma_{\lambda}\}_{\lambda\in\mathbb{N}}\) are quantum computationally indistinguishable ensembles of quantum states, denoted by \(\rho_{\lambda}\approx_{c}\sigma_{\lambda}\,,\) if, for any \(\mathsf{QPT}\) distinguisher \(\mathcal{D}\) with single-bit output, any polynomially bounded \(q:\mathbb{N}\to\mathbb{N}\), any family of \(q(\lambda)\)-qubit auxiliary states \(\{\nu_{\lambda}\}_{\lambda\in\mathbb{N}}\), and every \(\lambda\in\mathbb{N}\),_
\[\big{|}\operatorname{Pr}[\mathcal{D}(\rho_{\lambda}\otimes\nu_{\lambda})=1]- \operatorname{Pr}[\mathcal{D}(\sigma_{\lambda}\otimes\nu_{\lambda})=1]\big{|} \leq\mathsf{negl}(\lambda)\,.\]
**Lemma 2.2** ("Almost As Good As New" Lemma, [1]).: _Let \(\rho\in\mathcal{D}(\mathcal{H})\) be a density matrix over a Hilbert space \(\mathcal{H}\). Let \(U\) be an arbitrary unitary and let \((\boldsymbol{\Pi}_{0},\boldsymbol{\Pi}_{1}=\mathbf{I}-\boldsymbol{\Pi}_{0})\) be projectors acting on \(\mathcal{H}\otimes\mathcal{H}_{\mathsf{aux}}\). We interpret \((U,\boldsymbol{\Pi}_{0},\boldsymbol{\Pi}_{1})\) as a measurement performed by appending an ancillary system in the state \(|0\rangle\langle 0|_{\mathsf{aux}}\), applying the unitary \(U\) and subsequently performing the two-outcome measurement \(\{\boldsymbol{\Pi}_{0},\boldsymbol{\Pi}_{1}\}\) on the larger system. Suppose that the outcome corresponding to \(\boldsymbol{\Pi}_{0}\) occurs with probability \(1-\varepsilon\), for some \(\varepsilon\in[0,1]\). In other words, it holds that \(\operatorname{Tr}[\boldsymbol{\Pi}_{0}(U\rho\otimes|0\rangle\langle 0|_{\mathsf{aux}}U^{ \dagger})]=1-\varepsilon\). Then,_
\[\operatorname{TD}(\rho,\widetilde{\rho})\leq\sqrt{\varepsilon},\]
_where \(\widetilde{\rho}\) is the state after performing the measurement and applying \(U^{\dagger}\), and after tracing out \(\mathcal{H}_{\mathsf{aux}}\):_
\[\widetilde{\rho}=\operatorname{Tr}_{\mathsf{aux}}\left[U^{\dagger}\left( \boldsymbol{\Pi}_{0}U(\rho\otimes|0\rangle\langle 0|_{\mathsf{aux}})U^{\dagger} \boldsymbol{\Pi}_{0}+\boldsymbol{\Pi}_{1}U(\rho\otimes|0\rangle\langle 0|_{ \mathsf{aux}})U^{\dagger}\boldsymbol{\Pi}_{1}\right)U\right].\]
**Lemma 2.3** (Quantum Union Bound, [1]).: _Let \(\rho\in\mathcal{D}(\mathcal{H})\) be a state and let \(\boldsymbol{\Pi}_{1},\ldots,\boldsymbol{\Pi}_{n}\geq 0\) be a sequence of (orthogonal) projections acting on \(\mathcal{H}\). Suppose that, for every \(i\in[n]\), it holds that \(\operatorname{Tr}[\boldsymbol{\Pi}_{i}\rho]=1-\varepsilon_{i}\), for \(\varepsilon_{i}\in[0,1]\). Then, if we sequentially measure \(\rho\) with projective measurements \(\{\boldsymbol{\Pi}_{1},\mathbf{I}-\boldsymbol{\Pi}_{1}\},\ldots,\{\boldsymbol{\Pi}_{n},\mathbf{I}-\boldsymbol{\Pi}_{n}\}\), the probability that all measurements succeed is at least_
\[\operatorname{Tr}[\boldsymbol{\Pi}_{n}\cdots\boldsymbol{\Pi}_{1}\rho\boldsymbol{ \Pi}_{1}\cdots\boldsymbol{\Pi}_{n}]\geq 1-4\sum_{i=1}^{n}\varepsilon_{i}.\]
### Lattices and Cryptography
Let \(n,m,p,q\in\mathbb{N}\) be positive integers. The rounding operation for \(q\geq p\geq 2\) is the function
\[\lfloor\cdot\rfloor_{p}\ :\ \mathbb{Z}_{q}\to\mathbb{Z}_{p}\ :\ x\mapsto \lfloor(p/q)\cdot x\rfloor\ (\mathrm{mod}\ p).\]
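As a quick illustration of this map (using exact integer arithmetic for the floor):

```python
def round_q_to_p(x, q, p):
    """The map above: x in Z_q is sent to floor((p/q) * x) mod p (exact integer arithmetic)."""
    return ((p * (x % q)) // q) % p

q, p = 64, 8
print([round_q_to_p(x, q, p) for x in range(0, q, 4)])
# Elements of Z_q that differ by less than q/p land in the same bucket of Z_p.
```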
A _lattice_ \(\Lambda\subset\mathbb{R}^{m}\) is a discrete subgroup of \(\mathbb{R}^{m}\). Given a lattice \(\Lambda\subset\mathbb{R}^{m}\) and a vector \(\mathbf{t}\in\mathbb{R}^{m}\), we define the coset with respect to vector \(\mathbf{t}\) as the lattice shift \(\Lambda-\mathbf{t}=\{\mathbf{x}\in\mathbb{R}^{m}:\,\mathbf{x}+\mathbf{t}\in\Lambda\}\). Note that many different shifts \(\mathbf{t}\) can define the same coset. The _dual_ of a lattice \(\Lambda\subset\mathbb{R}^{m}\), denoted by \(\Lambda^{*}\), is the lattice of all \(\mathbf{y}\in\mathbb{R}^{m}\) that satisfy \(\langle\mathbf{y},\mathbf{x}\rangle\in\mathbb{Z}\), for every \(\mathbf{x}\in\Lambda\). In other words, we let
\[\Lambda^{*}=\{\mathbf{y}\in\mathbb{R}^{m}\,:\,\langle\mathbf{y},\mathbf{x} \rangle\in\mathbb{Z},\ \text{for all}\ \mathbf{x}\in\Lambda\}\,.\]
In this work, we mainly consider _\(q\)-ary lattices_ \(\Lambda\) that satisfy \(q\mathbb{Z}^{m}\subseteq\Lambda\subseteq\mathbb{Z}^{m}\), for some integer modulus \(q\geq 2\). Specifically, we consider the lattice generated by a matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) for some \(n,m\in\mathbb{N}\) that consists of all vectors which are perpendicular to the rows of \(\mathbf{A}\), namely
\[\Lambda_{q}^{\perp}(\mathbf{A})=\{\mathbf{x}\in\mathbb{Z}^{m}:\,\mathbf{A} \cdot\mathbf{x}=\mathbf{0}\ (\mathrm{mod}\ q)\}.\]
For any _syndrome_\(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) in the column span of \(\mathbf{A}\), we also consider the coset \(\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\) given by
\[\Lambda_{q}^{\mathbf{y}}(\mathbf{A})=\{\mathbf{x}\in\mathbb{Z}^{m}:\,\mathbf{ A}\cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\}=\Lambda_{q}^{\perp}(\mathbf{A})+\mathbf{c},\]
where \(\mathbf{c}\in\mathbb{Z}^{m}\) is an arbitrary integer solution to the equation \(\mathbf{Ac}=\mathbf{y}\ (\mathrm{mod}\ q)\).
We use the following facts due to Gentry, Peikert and Vaikuntanathan [1].
**Lemma 2.4** ([1], Lemma 5.1).: _Let \(n\in\mathbb{N}\) and let \(q\geq 2\) be a prime modulus with \(m\geq 2n\log q\). Then, for all but a \(q^{-n}\) fraction of \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\), the subset-sums of the columns of \(\mathbf{A}\) generate \(\mathbb{Z}_{q}^{n}\). In other words, a uniformly random matrix \(\mathbf{A}\xleftarrow{s}\mathbb{Z}_{q}^{n\times m}\) is full-rank with overwhelming probability._
Gaussian Distribution. The _Gaussian measure_ \(\rho_{\sigma}\) with parameter \(\sigma>0\) is defined as
\[\rho_{\sigma}(\mathbf{x})=\exp(-\pi\|\mathbf{x}\|^{2}/\sigma^{2}),\ \ \ \ \ \forall \mathbf{x}\in\mathbb{R}^{m}.\]
Let \(\Lambda\subset\mathbb{R}^{m}\) be a lattice and let \(\mathbf{t}\in\mathbb{R}^{m}\). We define the _Gaussian mass_ of \(\Lambda-\mathbf{t}\) as the quantity
\[\rho_{\sigma}(\Lambda-\mathbf{t})=\sum_{\mathbf{y}\in\Lambda}\rho_{\sigma}( \mathbf{y}-\mathbf{t}).\]
The _discrete Gaussian distribution_\(D_{\Lambda-\mathbf{t},\sigma}\) assigns probability proportional to \(e^{-\pi\|\mathbf{x}-\mathbf{t}\|^{2}/\sigma^{2}}\) to every lattice point \(\mathbf{x}\in\Lambda\). In other words, we have
\[D_{\Lambda-\mathbf{t},\sigma}(\mathbf{x})=\frac{\rho_{\sigma}(\mathbf{x}- \mathbf{t})}{\rho_{\sigma}(\Lambda-\mathbf{t})},\ \ \ \ \forall\mathbf{x}\in\Lambda.\]
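For intuition, the discrete Gaussian over a coset \(\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\) can be tabulated by brute force in very small dimensions; the sketch below does so for fixed toy parameters and confirms that the shortest coset vectors carry the most probability mass.

```python
import numpy as np
from itertools import product

# Tiny illustrative parameters so that a box of representatives can be enumerated.
m, q, sigma = 2, 7, 2.0
A = np.array([[1, 3]])    # fixed 1 x 2 matrix, chosen for reproducibility
y = np.array([2])         # target syndrome

def rho(x, s):
    return np.exp(-np.pi * float(np.dot(x, x)) / s ** 2)

# Enumerate integer points in a box and keep those on the coset {x : A x = y (mod q)}.
R = 3 * q                 # the Gaussian mass outside this box is negligible for sigma = 2
coset = [np.array(x) for x in product(range(-R, R + 1), repeat=m)
         if (A @ np.array(x) - y) % q == 0]

weights = np.array([rho(x, sigma) for x in coset])
probs = weights / weights.sum()    # D_{Lambda_q^y(A), sigma}, restricted to the box

# The heaviest coset points are the shortest ones.
for i in np.argsort(-probs)[:3]:
    print(coset[i], float(probs[i]))
```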
The following lemma follows from [12, Lemma 2.11] and [1, Lemma 5.3].
**Lemma 2.5**.: _Let \(n\in\mathbb{N}\) and let \(q\) be a prime with \(m\geq 2n\log q\). Let \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) be a matrix whose columns generate \(\mathbb{Z}_{q}^{n}\). Let \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) be arbitrary. Then, for any \(\sigma\geq\omega(\sqrt{\log m})\), there exists a negligible function \(\varepsilon(m)\) such that_
\[D_{\Lambda_{q}^{\mathbf{y}}(\mathbf{A}),\sigma}(\mathbf{x})\,\leq\,2^{-m}\cdot \frac{1+\varepsilon}{1-\varepsilon},\ \ \ \ \ \forall\,\mathbf{x}\in\Lambda_{q}^{\perp}(\mathbf{A}).\]
Let \(\mathcal{B}^{m}(\mathbf{0},r)=\{\mathbf{x}\in\mathbb{R}^{m}\,:\,\|\mathbf{x}\|\leq r\}\) denote the \(m\)-dimensional ball of radius \(r>0\). We make use of the following tail bound for the Gaussian mass of a lattice [1, Lemma 1.5 (ii)].
**Lemma 2.6**.: _For any \(m\)-dimensional lattice \(\Lambda\), shift \(\mathbf{t}\in\mathbb{R}^{m}\), \(\sigma>0\) and \(c\geq(2\pi)^{-\frac{1}{2}}\) it holds that_
\[\rho_{\sigma}\left((\Lambda-\mathbf{t})\setminus\mathcal{B}^{m}(\mathbf{0},c\sqrt{m}\sigma)\right)\leq(2\pi e c^{2})^{\frac{m}{2}}e^{-\pi c^{2}m}\rho_{\sigma}(\Lambda).\]
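The bound can be sanity-checked numerically in the simplest case \(\Lambda=\mathbb{Z}\), \(\mathbf{t}=\mathbf{0}\) and \(m=1\); the parameters in the sketch below are chosen only for illustration.

```python
import numpy as np

# Numerical sanity check for Lambda = Z, t = 0, m = 1 (parameters chosen only for illustration).
sigma, c, m = 2.0, 1.0, 1
xs = np.arange(-2000, 2001)
rho = np.exp(-np.pi * xs.astype(float) ** 2 / sigma ** 2)

lhs = rho[np.abs(xs) > c * np.sqrt(m) * sigma].sum()   # Gaussian mass outside the ball
rhs = (2 * np.pi * np.e * c ** 2) ** (m / 2) * np.exp(-np.pi * c ** 2 * m) * rho.sum()
print(lhs, "<=", rhs, ":", bool(lhs <= rhs))
```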
In addition, we also make use of the following tail bound for the discrete Gaussian which follows from [10, Lemma 4.4] and [10, Lemma 5.3].
**Lemma 2.7**.: _Let \(n\in\mathbb{N}\) and let \(q\) be a prime with \(m\geq 2n\log q\). Let \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) be a matrix whose columns generate \(\mathbb{Z}_{q}^{n}\). Let \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) be arbitrary. Then, for any \(\sigma\geq\omega(\sqrt{\log m})\), there exists a negligible function \(\varepsilon(m)\) such that_
\[\Pr_{\mathbf{x}\sim D_{\Lambda_{q}^{\mathbf{y}}(\mathbf{A}),\sigma}}\left[\|\mathbf{x}\|>\sigma\sqrt{m}\right]\leq 2^{-m}\cdot\frac{1+\varepsilon}{1-\varepsilon}.\]
Given a modulus \(q\in\mathbb{N}\) and \(\sigma\in(0,q/2\sqrt{m})\), the _truncated_ discrete Gaussian distribution \(D_{\mathbb{Z}_{q}^{m},\sigma}\) over \(\mathbb{Z}^{m}\cap(-\frac{q}{2},\frac{q}{2}]^{m}\) with support \(\{\mathbf{x}\in\mathbb{Z}_{q}^{m}:\|\mathbf{x}\|\leq\sigma\sqrt{m}\}\) is the density function defined below:
\[D_{\mathbb{Z}_{q}^{m},\sigma}(\mathbf{x})=\frac{\rho_{\sigma}(\mathbf{x})}{ \sum_{\mathbf{z}\in\mathbb{Z}_{q}^{m},\|\mathbf{z}\|\leq\sigma\sqrt{m}}\rho_{ \sigma}(\mathbf{z})}.\]
Finally, we recall the following _noise smudging_ property.
**Lemma 2.8** (Noise smudging, [11]).: _Let \(y,\sigma>0\). Then, the statistical distance between the distribution \(D_{\mathbb{Z},\sigma}\) and \(D_{\mathbb{Z},\sigma+y}\) is at most \(y/\sigma\)._
We use the following technical lemma on the min-entropy of the truncated discrete Gaussian distribution, which we prove below.
**Lemma 2.9**.: _Let \(n\in\mathbb{N}\) and let \(q\) be a prime with \(m\geq 2n\log q\). Let \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) be a matrix whose columns generate \(\mathbb{Z}_{q}^{n}\). Then, for any \(\sigma\geq\omega(\sqrt{\log m})\), there exists a negligible \(\varepsilon(m)\) such that_
\[\max_{\mathbf{y}\in\mathbb{Z}_{q}^{n}}\max_{\begin{subarray}{c}\mathbf{x}\in \mathbb{Z}_{q}^{m},\,\|\mathbf{x}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\pmod{q}\end{subarray}}\left\{\frac{\rho_{ \sigma}(\mathbf{x})}{\sum_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z}_{q}^{m}, \,\|\mathbf{z}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\pmod{q}\end{subarray}}\rho_{\sigma}(\mathbf{ z})}\right\}\ \leq\ 2^{-m+1}\cdot\frac{1+\varepsilon}{1-\varepsilon}.\]
Proof.: Suppose that \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) is a matrix whose columns generate \(\mathbb{Z}_{q}^{n}\), i.e. \(\mathbf{A}\) is full-rank. Then,
\[\max_{\mathbf{y}\in\mathbb{Z}_{q}^{n}}\max_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m},\,\|\mathbf{x}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\left\{\frac{\rho_{\sigma}(\mathbf{x})}{\sum_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z}_{q}^{m},\,\|\mathbf{z}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{z})}\right\}\;\leq\;\max_{\mathbf{y}\in\mathbb{Z}_{q}^{n}}\sup_{\mathbf{x}\in\Lambda_{q}^{\perp}(\mathbf{A})}D_{\Lambda_{q}^{\mathbf{y}}(\mathbf{A}),\sigma}(\mathbf{x})+\max_{\mathbf{y}\in\mathbb{Z}_{q}^{n}}\max_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m},\,\|\mathbf{x}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\frac{\rho_{\sigma}(\mathbf{x})}{\sum_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z}_{q}^{m},\,\|\mathbf{z}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{z})}\cdot\frac{\rho_{\sigma}(\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\setminus\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{m}))}{\rho_{\sigma}(\Lambda_{q}^{\mathbf{y}}(\mathbf{A}))},\]
where \(B^{m}(\mathbf{0},r)=\{\mathbf{x}\in\mathbb{R}^{m}\,:\,\|\mathbf{x}\|\leq r\}\). Using the fact that
\[\frac{\rho_{\sigma}(\mathbf{x})}{\sum_{\begin{subarray}{c}\mathbf{z}\in \mathbb{Z}_{q}^{m},\|\mathbf{z}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}( \mathbf{z})}\ \leq\ 1,\]
for \(\mathbf{x}\in\mathbb{Z}_{q}^{m}\) with \(\mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\), and the fact that
\[\Pr_{\mathbf{v}\sim D_{\Lambda_{q}^{\mathbf{y}}(\mathbf{A}),\sigma}}\Big{[} \|\mathbf{v}\|>\sigma\sqrt{m}\Big{]}=\frac{\rho_{\sigma}(\Lambda_{q}^{\mathbf{ y}}(\mathbf{A})\setminus\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{m}))}{\rho_{ \sigma}(\Lambda_{q}^{\mathbf{y}}(\mathbf{A}))}\]
we get that
\[\max_{\mathbf{y}\in\mathbb{Z}_{q}^{n}}\max_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m},\,\|\mathbf{x}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\left\{\frac{\rho_{\sigma}(\mathbf{x})}{\sum_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z}_{q}^{m},\,\|\mathbf{z}\|\leq\sigma\sqrt{m}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{z})}\right\}\leq\max_{\mathbf{y}\in\mathbb{Z}_{q}^{n}}\left\{\sup_{\mathbf{x}\in\Lambda_{q}^{\perp}(\mathbf{A})}D_{\Lambda_{q}^{\mathbf{y}}(\mathbf{A}),\sigma}(\mathbf{x})+\Pr_{\mathbf{v}\sim D_{\Lambda_{q}^{\mathbf{y}}(\mathbf{A}),\sigma}}\Big[\|\mathbf{v}\|>\sigma\sqrt{m}\Big]\right\}.\]
Because \(\sigma\geq\omega(\sqrt{\log m})\), the claim then follows from Lemma 2.5 and Lemma 2.7.
The Short Integer Solution problem. The _Short Integer Solution_ (SIS) problem was introduced by Ajtai [1] in his seminal work on average-case lattice problems.
**Definition 2.10** (Short Integer Solution problem, [1]).: _Let \(n,m\in\mathbb{N}\), \(q\geq 2\) be a modulus and let \(\beta>0\) be a parameter. The Short Integer Solution (\(\mathsf{SIS}^{m}_{n,q,\beta}\)) problem is to find a short solution \(\mathbf{x}\in\mathbb{Z}^{m}\) with \(\|\mathbf{x}\|\leq\beta\) such that \(\mathbf{A}\cdot\mathbf{x}=\mathbf{0}\pmod{q}\) given as input a matrix \(\mathbf{A}\mathop{\leftarrow}^{\mathtt{s}}\mathbb{Z}^{n\times m}_{q}\)._
Micciancio and Regev [14] showed that the SIS problem is, on the average, as hard as approximating worst-case lattice problems to within small factors. Subsequently, Gentry, Peikert and Vaikuntanathan [1] gave an improved reduction showing that, for parameters \(m=\operatorname{poly}(n)\), \(\beta=\operatorname{poly}(n)\) and prime \(q\geq\beta\cdot\omega(\sqrt{n\log q})\), the average-case \(\mathsf{SIS}^{m}_{n,q,\beta}\) problem is as hard as approximating the shortest independent vector problem (SIVP) problem in the worst case to within a factor \(\gamma=\beta\cdot\tilde{O}(\sqrt{n})\). We assume that \(\mathsf{SIS}^{m}_{n,q,\beta}\), for \(m=\Omega(n\log q)\), \(\beta=2^{o(n)}\) and \(q=2^{o(n)}\), is hard against quantum adversaries running in time \(\operatorname{poly}(q)\) with success probability \(\operatorname{poly}(1/q)\).
The Learning with Errors problem. The _Learning with Errors_ problem was introduced by Regev [13] and serves as the primary basis for the hardness of post-quantum cryptosystems. The problem is defined as follows.
**Definition 2.11** (Learning with Errors problem, [13]).: _Let \(n,m\in\mathbb{N}\) be integers, let \(q\geq 2\) be a modulus and let \(\alpha\in(0,1)\) be a noise ratio parameter. The (decisional) Learning with Errors (\(\mathsf{LWE}^{m}_{n,q,\alpha q}\)) problem is to distinguish between the following samples_
\[(\mathbf{A}\mathop{\leftarrow}^{\mathtt{s}}\mathbb{Z}^{n\times m}_{q}, \mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal}\pmod{q})\quad\text{ and }\quad(\mathbf{A}\mathop{\leftarrow}^{\mathtt{s}} \mathbb{Z}^{n\times m}_{q},\mathbf{u}\mathop{\leftarrow}^{\mathtt{s}}\mathbb{ Z}^{m}_{q}),\]
_where \(\mathbf{s}\mathop{\leftarrow}^{\mathtt{s}}\mathbb{Z}^{n}_{q}\) is a uniformly random vector and where \(\mathbf{e}\sim D_{\mathbb{Z}^{m},\alpha q}\) is a discrete Gaussian error vector. We rely on the quantum \(\mathsf{LWE}^{m}_{n,q,\alpha q}\) assumption which states that the samples above are computationally indistinguishable for any \(\mathsf{QPT}\) algorithm._
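The sketch below generates the two kinds of samples from the definition for toy, non-cryptographic parameters; the assumption is precisely that no efficient quantum algorithm can tell, for suitable parameters, which routine produced a given pair.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy, non-cryptographic LWE parameters, for illustration only.
n, m, q, alpha = 16, 64, 3329, 0.005

def lwe_sample():
    A = rng.integers(0, q, size=(n, m))
    s = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0, alpha * q, size=m)).astype(int)
    return A, (s @ A + e) % q

def uniform_sample():
    A = rng.integers(0, q, size=(n, m))
    u = rng.integers(0, q, size=m)
    return A, u

# Decisional LWE asserts that these two sample distributions are
# computationally indistinguishable.
print(lwe_sample()[1][:8])
print(uniform_sample()[1][:8])
```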
As shown in [13], the \(\mathsf{LWE}^{m}_{n,q,\alpha q}\) problem with parameter \(\alpha q\geq 2\sqrt{n}\) is at least as hard as approximating the shortest independent vector problem (SIVP) to within a factor of \(\gamma=\widetilde{O}(n/\alpha)\) in worst-case lattices of dimension \(n\). In this work we assume the subexponential hardness of \(\mathsf{LWE}^{m}_{n,q,\alpha q}\), which relies on the worst-case hardness of approximating short vector problems in lattices to within a subexponential factor. We assume that the \(\mathsf{LWE}^{m}_{n,q,\alpha q}\) problem, for \(m=\Omega(n\log q)\), \(q=2^{o(n)}\), \(\alpha=1/2^{o(n)}\), is hard against quantum adversaries running in time \(\operatorname{poly}(q)\). We note that hardness in this parameter regime implies the hardness of \(\mathsf{SIS}^{m}_{n,q,\beta}\) [16].
Trapdoors for lattices. We use the following _trapdoor_ property for the \(\mathsf{LWE}\) problem.
**Theorem 2.12** ([14], Theorem 5.1).: _Let \(n,m\in\mathbb{N}\) and \(q\in\mathbb{N}\) be a prime with \(m=\Omega(n\log q)\). There exist randomized algorithms with the following properties:_
* \(\mathsf{GenTrap}(1^{n},1^{m},q)\)_: on input_ \(1^{n},1^{m}\) _and_ \(q\)_, returns a matrix_ \(\mathbf{A}\in\mathbb{Z}^{n\times m}_{q}\) _and a trapdoor_ \(\mathsf{td}_{\mathbf{A}}\) _such that the distribution of_ \(\mathbf{A}\) _is negligibly (in the parameter_ \(n\)_) close to uniform._
* \(\mathsf{Invert}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{b})\)_: on input_ \(\mathbf{A}\)_,_ \(\mathsf{td}_{\mathbf{A}}\) _and_ \(\mathbf{b}=\mathbf{s}^{\intercal}\cdot\mathbf{A}+\mathbf{e}^{\intercal}\pmod{q}\)_, where_ \(\|\mathbf{e}\|\leq q/(C_{T}\sqrt{n\log q})\) _and_ \(C_{T}>0\) _is a universal constant, returns_ \(\mathbf{s}\) _and_ \(\mathbf{e}\) _with overwhelming probability over_ \((\mathbf{A},\mathsf{td}_{\mathbf{A}})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\)_._
## 3 Quantum Discrete Gaussian Sampling for \(q\)-ary Lattices
In this section, we review some basic facts about Gaussian superpositions and present our _quantum discrete Gaussian sampler_ which is used to revoke the decryption keys for our schemes.
### Gaussian Superpositions
In this section, we review some basic facts about _Gaussian superpositions_. Given \(q\in\mathbb{N}\), \(m\in\mathbb{N}\) and \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\), we consider Gaussian superpositions over \(\mathbb{Z}^{m}\cap(-\frac{q}{2},\frac{q}{2}]^{m}\) of the form
\[\ket{\psi}=\sum_{\mathbf{x}\in\mathbb{Z}_{q}^{m}}\rho_{\sigma}(\mathbf{x}) \ket{\mathbf{x}}.\]
Note that, for convenience and ease of notation, the state \(\ket{\psi}\) is not normalized. The tail bound in Lemma 2.6 implies that (the normalized variant of) \(\ket{\psi}\) is within negligible trace distance of a _truncated_ discrete Gaussian superposition \(\ket{\tilde{\psi}}\) with support \(\{\mathbf{x}\in\mathbb{Z}_{q}^{m}:\|\mathbf{x}\|\leq\sigma\sqrt{\frac{m}{2}}\}\), where
\[\ket{\tilde{\psi}}=\sum_{\mathbf{x}\in\mathbb{Z}_{q}^{m}}\sqrt{D_{\mathbb{Z}_ {q}^{m},\frac{\sigma}{\sqrt{2}}}(\mathbf{x})}\ket{\mathbf{x}}=\left(\sum_{ \mathbf{z}\in\mathbb{Z}_{q}^{m},\|\mathbf{z}\|\leq\sigma\sqrt{\frac{m}{2}}} \rho_{\sigma}(\mathbf{z})\right)^{-\frac{1}{2}}\sum_{\mathbf{x}\in\mathbb{Z}_ {q}^{m}:\|\mathbf{x}\|\leq\sigma\sqrt{\frac{m}{2}}}\rho_{\sigma}(\mathbf{x}) \ket{\mathbf{x}}.\]
In this work, we consider Gaussian superpositions with parameter \(\sigma=\Omega(\sqrt{m})\) which can be efficiently implemented using standard quantum state preparation techniques; for example using _quantum rejection sampling_ and the _Grover-Rudolph algorithm_[1, 1, 1, 1].
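The closeness of \(|\psi\rangle\) to its truncation can be verified directly in tiny dimensions; the sketch below computes the trace distance between the two normalized pure states for illustrative parameters.

```python
import numpy as np
from itertools import product

# Tiny illustrative parameters; representatives of Z_q^m are centered in (-q/2, q/2].
m, q, sigma = 2, 31, 4.0
reps = [np.array(x) for x in product(range(-(q // 2), q // 2 + 1), repeat=m)]
amp = np.array([np.exp(-np.pi * float(np.dot(x, x)) / sigma ** 2) for x in reps])

psi = amp / np.linalg.norm(amp)                                 # normalized |psi>
mask = np.array([np.linalg.norm(x) <= sigma * np.sqrt(m / 2) for x in reps])
amp_trunc = np.where(mask, amp, 0.0)
psi_trunc = amp_trunc / np.linalg.norm(amp_trunc)               # normalized truncated state

overlap = float(psi @ psi_trunc)
print("trace distance:", np.sqrt(1 - overlap ** 2))             # already small for these tiny parameters
```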
Gaussian coset states. Our key-revocable encryption schemes in Section 6 and Section 7 rely on Gaussian superpositions over \(\mathbf{x}\in\mathbb{Z}_{q}^{m}\) subject to a constraint of the form \(\mathbf{A}\cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\), for some matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) and image \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\). In Algorithm 1, we give a procedure called \(\mathsf{GenGauss}\) that, on input \(\mathbf{A}\) and \(\sigma>0\), generates a Gaussian superposition state of the form
\[\ket{\psi_{\mathbf{y}}}=\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q} ^{m}:\\ \mathbf{A}\mathbf{x}=\mathbf{y}\end{subarray}}\rho_{\sigma}(\mathbf{x})\ket{ \mathbf{x}},\]
for some \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) which is statistically close to uniform whenever \(m\geq 2n\log q\) and \(\sigma\geq\omega(\sqrt{\log m})\). Because \(\ket{\psi_{\mathbf{y}}}\) corresponds to a (truncated) Gaussian superposition over a particular lattice coset,
\[\Lambda_{q}^{\mathbf{y}}(\mathbf{A})=\{\mathbf{x}\in\mathbb{Z}^{m}:\mathbf{A} \cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\},\]
of the \(q\)-ary lattice \(\Lambda_{q}^{\perp}(\mathbf{A})=\{\mathbf{x}\in\mathbb{Z}^{m}:\,\mathbf{A} \cdot\mathbf{x}=\mathbf{0}\ (\mathrm{mod}\ q)\}\), we refer to it as a _Gaussian coset state_.
Finally, we recall an important property of Gaussian coset states.
Gaussian-collapsing hash functions. Unruh [25] introduced the notion of collapsing hash functions in his seminal work on computationally binding quantum commitments. Informally, a hash function is called _collapsing_ if it is computationally difficult to distinguish between a superposition of pre-images and a single (measured) pre-image.
In recent work, Poremba [14] proposed a special variant of the collapsing property with respect to _Gaussian superpositions_. Previously, Liu and Zhandry [13] implicitly showed that the _Ajtai_ hash function \(h_{\mathbf{A}}(\mathbf{x})=\mathbf{A}\mathbf{x}\ (\mathrm{mod}\ q)\) is collapsing - and thus _Gaussian-collapsing_ - via the notion of _lossy functions_ and by assuming the superpolynomial hardness of (decisional) \(\mathsf{LWE}\).
We use the following result on the Gaussian-collapsing property of the Ajtai hash function.
**Theorem 3.1** (Gaussian-collapsing property, [14], Theorem 4).: _Let \(n\in\mathbb{N}\) and \(q\) be a prime with \(m\geq 2n\log q\), each parameterized by \(\lambda\in\mathbb{N}\). Let \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\). Then, the following samples are computationally indistinguishable assuming the quantum hardness of decisional \(\mathsf{LWE}^{m}_{n,q,\alpha q}\), for any noise ratio \(\alpha\in(0,1)\) with relative noise magnitude \(1/\alpha=\sigma\cdot 2^{o(n)}:\)_
\[\left(\mathbf{A}\mathop{\leftarrow}^{\mathrm{s}}\mathbb{Z}_{q}^{n\times m}, \ |\psi_{\mathbf{y}}\rangle=\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q} ^{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\end{subarray}}\rho_{\sigma}(\mathbf{x})\ |\mathbf{x}\rangle\,,\ \mathbf{y}\in\mathbb{Z}_{q}^{n}\right)\ \approx_{c}\ \ \left(\mathbf{A}\mathop{ \leftarrow}^{\mathrm{s}}\mathbb{Z}_{q}^{n\times m},\ |\mathbf{x}_{0}\rangle\,,\ \mathbf{A}\cdot \mathbf{x}_{0}\,\in\mathbb{Z}_{q}^{n}\right)\]
_where \(\left(\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y}\right)\leftarrow\mathsf{ GenGauss}(\mathbf{A},\sigma)\) and where \(\mathbf{x}_{0}\sim D_{\mathbb{Z}_{q}^{m},\frac{\sigma}{\sqrt{2}}}\) is a discrete Gaussian error._
### Algorithm: GenGauss
The state preparation procedure \(\mathsf{GenGauss}(\mathbf{A},\sigma)\) is defined as follows.
```
Input: Matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) and parameter \(\sigma=\Omega(\sqrt{m})\). Output: Gaussian state \(\left|\psi_{\mathbf{y}}\right\rangle\) and \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\).
1 Prepare a Gaussian superposition in system \(X\) with parameter \(\sigma>0\): \[\left|\psi\right\rangle_{XY}=\sum_{\mathbf{x}\in\mathbb{Z}_{q}^{m}}\rho_{ \sigma}(\mathbf{x})\left|\mathbf{x}\right\rangle_{X}\otimes\left|\mathbf{0} \right\rangle_{Y}.\]
2 Apply the unitary \(U_{\mathbf{A}}:\left|\mathbf{x}\right\rangle\left|\mathbf{0}\right\rangle \rightarrow\left|\mathbf{x}\right\rangle\left|\mathbf{A}\cdot\mathbf{x}\ (\mathrm{mod}\ q)\right\rangle\) on systems \(X\) and \(Y\): \[\left|\psi\right\rangle_{XY}=\sum_{\mathbf{x}\in\mathbb{Z}_{q}^{m}}\rho_{ \sigma}(\mathbf{x})\left|\mathbf{x}\right\rangle_{X}\otimes\left|\mathbf{A} \cdot\mathbf{x}\ (\mathrm{mod}\ q)\right\rangle_{Y}.\]
3 Measure system \(Y\) in the computational basis, resulting in the state \[\left|\psi_{\mathbf{y}}\right\rangle_{XY}=\sum_{\begin{subarray}{c}\mathbf{x }\in\mathbb{Z}_{q}^{m}:\\ \mathbf{A}\mathbf{x}=\mathbf{y}\end{subarray}}\rho_{\sigma}(\mathbf{x}) \left|\mathbf{x}\right\rangle_{X}\otimes\left|\mathbf{y}\right\rangle_{Y}.\]
4 Output the state \(\left|\psi_{\mathbf{y}}\right\rangle\) in system \(X\) and the outcome \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) in system \(Y\).
```
**Algorithm 1** \(\mathsf{GenGauss}(\mathbf{A},\sigma)\)
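Since Algorithm 1 only prepares Gaussian amplitudes, applies \(U_{\mathbf{A}}\) and measures register \(Y\), its state vector can be simulated classically for tiny parameters; the sketch below (with a fixed illustrative matrix \(\mathbf{A}\)) exhibits the resulting coset-supported state.

```python
import numpy as np
from itertools import product

# Classical state-vector simulation of GenGauss for tiny parameters (illustration only).
m, q, sigma = 2, 7, 3.0
A = np.array([[2, 5]])                                    # fixed 1 x 2 matrix, for reproducibility
rng = np.random.default_rng(6)

# Step 1: Gaussian amplitudes over (centered) representatives of Z_q^m.
points = [np.array(x) for x in product(range(-(q // 2), q // 2 + 1), repeat=m)]
amp = np.array([np.exp(-np.pi * float(np.dot(x, x)) / sigma ** 2) for x in points])
amp /= np.linalg.norm(amp)

# Step 2: the unitary U_A writes A x (mod q) into register Y.
syndromes = np.array([int((A @ x)[0] % q) for x in points])

# Step 3: measure register Y; outcome y occurs with the squared amplitude mass of its fibre.
probs_y = np.array([np.sum(amp[syndromes == s] ** 2) for s in range(q)])
y = rng.choice(q, p=probs_y)

# Step 4: the residual state on X is a Gaussian superposition over {x : A x = y (mod q)}.
post = np.where(syndromes == y, amp, 0.0)
post /= np.linalg.norm(post)
support = [tuple(points[i]) for i in np.nonzero(post)[0]]
print("measured y =", int(y), "| coset support size =", len(support))
```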
### Algorithm: QSampGauss
Recall that, in Algorithm 1, we gave a procedure called \(\mathsf{GenGauss}(\mathbf{A},\sigma)\) that prepares a Gaussian coset state \(\ket{\psi_{\mathbf{y}}}\), for a randomly generated \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\). In general, however, generating a specific Gaussian coset state on input \((\mathbf{A},\mathbf{y})\) requires a _short trapdoor basis_\(\mathsf{td}_{\mathbf{A}}\) for the matrix \(\mathbf{A}\). This task can be thought of as a quantum analogue of the _discrete Gaussian sampling problem_[12], where the goal is to output a sample \(\mathbf{x}\sim D_{Z^{m},\sigma}\) such that \(\mathbf{A}\cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\) on input \((\mathbf{A},\mathbf{y})\) and \(\sigma>0\).
In Algorithm 2, we give a procedure called \(\mathsf{QSampGauss}\) which, on input \((\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\sigma)\) generates a specific Gaussian coset state \(\ket{\psi_{\mathbf{y}}}\) of the form
\[\ket{\psi_{\mathbf{y}}}=\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q} ^{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\end{subarray}}\rho_{\sigma}(\mathbf{x})\ \ket{\mathbf{x}}.\]
Our procedure \(\mathsf{QSampGauss}\) in Algorithm 2 can be thought of as an explicit quantum reduction from \(\mathsf{ISIS}_{n,q,\sigma\sqrt{m/2}}^{m}\) to \(\mathsf{LWE}_{n,q,q/\sqrt{2}\sigma}^{m}\), and is inspired by the quantum reduction of Stehle et al. [20] that reduces \(\mathsf{SIS}\) to \(\mathsf{LWE}\). To obtain the aforementioned reduction, one simply needs to replace the procedure \(\mathsf{Invert}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\cdot)\) in Step 4 in Algorithm 2 with a solver for the \(\mathsf{LWE}_{n,q,q/\sqrt{2}\sigma}^{m}\) problem.
In Theorem 3.3, we prove the correctness of Algorithm 2. As a technical ingredient, we rely on a _duality lemma_[13] that characterizes the Fourier transform of a Gaussian coset state in terms of its dual state. Note that \(\ket{\psi_{\mathbf{y}}}\) corresponds to a Gaussian superposition over a lattice coset,
\[\Lambda_{q}^{\mathbf{y}}(\mathbf{A})=\{\mathbf{x}\in\mathbb{Z}^{m}:\mathbf{A }\cdot\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\},\]
of the \(q\)-ary lattice \(\Lambda_{q}^{\perp}(\mathbf{A})=\{\mathbf{x}\in\mathbb{Z}^{m}:\,\mathbf{A} \cdot\mathbf{x}=\mathbf{0}\ (\mathrm{mod}\ q)\}\). Here, the _dual_ of \(\Lambda_{q}^{\perp}(\mathbf{A})\) satisfies \(q\cdot\Lambda_{q}^{\perp}(\mathbf{A})^{*}=\Lambda_{q}(\mathbf{A})\), where \(\Lambda_{q}(\mathbf{A})\) corresponds to the lattice generated by \(\mathbf{A}^{\intercal}\), i.e.
\[\Lambda_{q}(\mathbf{A})=\{\mathbf{z}\in\mathbb{Z}^{m}:\,\mathbf{z}=\mathbf{A }^{\intercal}\cdot\mathbf{s}\ (\mathrm{mod}\ q),\ \text{for some}\ \mathbf{s}\in\mathbb{Z}^{n}\}.\]
The following lemma relates the Fourier transform of \(\ket{\psi_{\mathbf{y}}}\) with a superposition of \(\mathsf{LWE}\) samples with respect to a matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) and a phase which depends on \(\mathbf{y}\). In other words, the resulting state can be thought of as a superposition of Gaussian balls around random lattice vectors in \(\Lambda_{q}(\mathbf{A})\).
**Lemma 3.2** ([13], Lemma 16).: _Let \(m\in\mathbb{N}\), \(q\geq 2\) be a prime and \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\). Let \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) be a matrix whose columns generate \(\mathbb{Z}_{q}^{n}\) and let \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) be arbitrary. Then, the \(q\)-ary quantum Fourier transform of the (normalized variant of the) Gaussian coset state_
\[\ket{\psi_{\mathbf{y}}}=\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{ m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\ \ \ \rho_{\sigma}(\mathbf{x})\ket{\mathbf{x}}\]
_is within negligible (in \(m\in\mathbb{N}\)) trace distance of the (normalized variant of the) Gaussian state_
\[\ket{\hat{\psi}_{\mathbf{y}}}=\sum_{\mathbf{s}\in\mathbb{Z}_{q}^{n}}\sum_{ \mathbf{e}\in\mathbb{Z}_{q}^{m}}\rho_{q/\sigma}(\mathbf{e})\,\omega_{q}^{- \langle\mathbf{s},\mathbf{y}\rangle}\ket{\mathbf{s}^{\intercal}\mathbf{A}+ \mathbf{e}^{\intercal}\ (\mathrm{mod}\ q)}.\]
The procedure \(\mathsf{QSampGauss}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\sigma)\) is defined as follows.
```
Input: Matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\), a trapdoor \(\mathsf{td}_{\mathbf{A}}\), an image \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) and parameter \(\sigma=O(\frac{q}{\sqrt{m}})\). Output: Gaussian state \(\ket{\psi_{\mathbf{y}}}\).
1 Prepare the following superposition with parameter \(q/\sigma>0\): \[\sum_{\mathbf{s}\in\mathbb{Z}_{q}^{n}}\ket{\mathbf{s}}\otimes\sum_{\mathbf{e} \in\mathbb{Z}_{q}^{m}}\rho_{q/\sigma}(\mathbf{e})\ket{\mathbf{e}}\otimes\ket{ \mathbf{0}}\]
2 Apply the generalized Pauli operator \(\mathbf{Z}_{q}^{-\mathbf{y}}\) on the first register, resulting in the state \[\sum_{\mathbf{s}\in\mathbb{Z}_{q}^{n}}\omega_{q}^{-\langle\mathbf{s},\mathbf{ y}\rangle}\ket{\mathbf{s}}\otimes\sum_{\mathbf{e}\in\mathbb{Z}_{q}^{m}}\rho_{q/ \sigma}(\mathbf{e})\ket{\mathbf{e}}\otimes\ket{\mathbf{0}}\]
3 Apply the unitary \(U_{\mathbf{A}}:\ket{\mathbf{s}}\ket{\mathbf{e}}\ket{\mathbf{0}}\to\ket{ \mathbf{s}}\ket{\mathbf{e}}\ket{\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^ {\intercal}\ (\text{mod}\ q)}\), resulting in the state \[\sum_{\mathbf{s}\in\mathbb{Z}_{q}^{n}}\sum_{\mathbf{e}\in\mathbb{Z}_{q}^{m}} \rho_{q/\sigma}(\mathbf{e})\,\omega_{q}^{-\langle\mathbf{s},\mathbf{y} \rangle}\ket{\mathbf{s}}\ket{\mathbf{e}}\ket{\mathbf{s}^{\intercal}\mathbf{A}+ \mathbf{e}^{\intercal}\ (\text{mod}\ q)}\]
4 Coherently run \(\mathsf{Invert}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\cdot)\) on the third register in order to uncompute the first and the second register, resulting in a state that is close in trace distance to the following state: \[\sum_{\mathbf{s}\in\mathbb{Z}_{q}^{n}}\sum_{\mathbf{e}\in\mathbb{Z}_{q}^{m}} \rho_{q/\sigma}(\mathbf{e})\,\omega_{q}^{-\langle\mathbf{s},\mathbf{y} \rangle}\ket{0}\ket{0}\ket{\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{ \intercal}\ (\text{mod}\ q)}\]
5 Discard the first two registers. Apply the (inverse) quantum Fourier transform and output the resulting state.
```
**Algorithm 2**\(\mathsf{QSampGauss}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\sigma)\)
Let us now prove the correctness of Algorithm 2.
**Theorem 3.3** (Quantum Discrete Gaussian Sampler).: _Let \(n\in\mathbb{N}\), \(q\) be a prime with \(m\geq 2n\log q\) and \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\). Let \((\mathbf{A},\mathsf{td}_{\mathbf{A}})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\) be sampled as in Theorem 2.12 and let \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) be arbitrary. Then, with overwhelming probability, \(\mathsf{QSampGauss}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\sigma)\) in Algorithm 2 outputs a state which is within negligible trace distance of the (normalized variant of the) state,_
\[\ket{\psi_{\mathbf{y}}}=\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^ {m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\text{mod}\ q)\end{subarray}}\rho_{\sigma}( \mathbf{x})\ket{\mathbf{x}}.\]
Proof.: From Lemma 2.4 and Theorem 2.12, it follows that \((\mathbf{A},\mathsf{td}_{\mathbf{A}})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\) yields a matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) whose columns generate \(\mathbb{Z}_{q}^{n}\) with overwhelming probability. Moreover, since \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\), the inversion procedure \(\mathsf{Invert}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\cdot)\) from Theorem 2.12 in Step 4 in
Algorithm 2 succeeds with overwhelming probability at generating the Gaussian state
\[\ket{\hat{\psi}_{\mathbf{y}}}=\sum_{\mathbf{s}\in\mathbb{Z}_{q}^{n}}\sum_{ \mathbf{e}\in\mathbb{Z}_{q}^{m}}\rho_{q/\sigma}(\mathbf{e})\,\omega_{q}^{- \langle\mathbf{s},\mathbf{y}\rangle}\ket{\mathbf{s}^{\mathsf{T}}\mathbf{A}+ \mathbf{e}^{\mathsf{T}}\ (\mathrm{mod}\ q)}\]
Applying the (inverse) quantum Fourier transform \(\mathsf{FT}_{q}^{\dagger}\), the claim then follows from Lemma 3.2.
## 4 Quantum Goldreich-Levin Theorem for Large Fields
In this section, we give a proof of the first quantum Goldreich-Levin theorem for large fields \(\mathbb{Z}_{q}\).
### Post-Quantum Reductions and Quantum Rewinding
We first review some recent work by Bitansky, Brakerski and Kalai [1] that enables us to convert a wide range of classical reductions into post-quantum reductions (which allow for quantum auxiliary input) in a constructive manner. We begin with some basic terminology from [1].
Let \(\lambda\in\mathbb{N}\) be a parameter. A _non-interactive assumption_\(\mathsf{P}=(\mathsf{G},\mathsf{V},c)\) with respect to a set of polynomials \(d(\lambda),n(\lambda)\) and \(m(\lambda)\) is characterized as follows:
* The generator \(\mathsf{G}\) takes as input \(1^{\lambda}\) and \(r\in\{0,1\}^{d}\), and returns \(x\in\{0,1\}^{n}\).
* The verifier \(\mathsf{V}\) takes as input \(1^{\lambda}\) and \((r,y)\in\{0,1\}^{d}\times\{0,1\}^{m}\), and returns a single bit output.
* \(c(\lambda)\) is the threshold associated with the assumption.
Given a (possibly randomized) _solver_, we characterize the _advantage_ in solving an assumption \(\mathsf{P}\) in terms of the absolute distance between the solving probability (or, _value_) and the threshold \(c\); for example, for a _decision assumption_ \(\mathsf{P}\) (with \(m=1\)) we characterize the value in solving \(\mathsf{P}\) in terms of \(\frac{1}{2}+\varepsilon\), where the threshold is given by \(c(\lambda)=\frac{1}{2}\) and \(\varepsilon>0\) corresponds to the _advantage_. We say that a reduction is _black-box_ if it is oblivious to the representation and inner workings of the solver that is being used. Moreover, we say that a reduction is _non-adaptive_ if all queries to the solver are known ahead of time.
We use the following theorem.
**Theorem 4.1** ([1], adapted from Theorem 7.1).: _Let \(c\in\mathbb{R}\). Suppose that there exists a classical reduction from solving a non-interactive assumption \(\mathsf{Q}\) to solving a non-interactive assumption \(\mathsf{P}\) such that the following holds: if the \(\mathsf{P}\)-solver has advantage \(\varepsilon>0\) then the \(\mathsf{Q}\)-solver has advantage \(c\) (independent of \(\varepsilon\)) with running time \(\mathrm{poly}(1/\varepsilon,c,\lambda)\)._
_Then, there exists a quantum reduction from solving \(\mathsf{Q}\) to quantumly solving \(\mathsf{P}\) such that the following holds: if the quantum \(\mathsf{P}\)-solver (with non-uniform quantum advice) has advantage given by \(\varepsilon>0\), then the \(\mathsf{Q}\)-solver has advantage \(c\) (the same as the classical reduction) with running time \(\mathrm{poly}(1/\varepsilon,c,\lambda)\)._
**Remark 4.2**.: _We note that [1] consider a more general theorem where the advantage of the classical \(\mathsf{Q}\)-solver can depend on the advantage of the \(\mathsf{P}\)-solver. But in the case when the classical \(\mathsf{Q}\)-solver's advantage is independent of the \(\mathsf{P}\)-solver's advantage then, as reflected in the above theorem, it turns out the advantage of the quantum \(\mathsf{Q}\)-solver is the same as the classical \(\mathsf{Q}\)-solver._
### Goldreich-Levin Theorems for Large Fields
The following result is implicit in the work of Dodis et al. [10].
**Theorem 4.3** (Classical Goldreich-Levin Theorem for Large Fields, [10], Theorem 1).: _Let \(q\) be a prime and \(m\in\mathbb{N}\). Let \(\sigma\in(2\sqrt{m},q/2\sqrt{m})\) and let \(H=\{\mathbf{x}\in\mathbb{Z}_{q}^{m}\,:\,\|\mathbf{x}\|\leq\sigma\sqrt{m}\}\) be a subset of \(\mathbb{Z}_{q}^{m}\). Let \(\mathsf{aux}:H\to\{0,1\}^{*}\) be any (possibly randomized) auxiliary information. Suppose there exists a distinguisher \(\mathcal{D}\) which runs in time \(T(\mathcal{D})\) such that_
\[\left|\Pr\left[\mathcal{D}\big{(}\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x}, \mathsf{aux}(\mathbf{x})\big{)}=1\,:\,\begin{subarray}{c}\mathbf{u}\stackrel{{ \$}}{{\leftarrow}}\mathbb{Z}_{q}^{m}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\end{subarray}\right]-\Pr\left[ \mathcal{D}\big{(}\mathbf{u},r,\mathsf{aux}(\mathbf{x})\big{)}=1\,:\, \begin{subarray}{c}\mathbf{u}\stackrel{{\$}}{{\leftarrow}} \mathbb{Z}_{q}^{m},r\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q} \\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\end{subarray}\right]\right|=\varepsilon.\]
_Then, there exists a (classical) non-adaptive black-box extractor \(\mathcal{E}\) whose running time is given by \(T(\mathcal{E})=T(\mathcal{D})\cdot\mathrm{poly}(m,\sigma,1/\varepsilon)\) and succeeds with probability at least_
\[\Pr\left[\mathcal{E}\big{(}\mathsf{aux}(\mathbf{x})\big{)}=\mathbf{x}\,\,:\, \,\mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\right]\geq\frac{\varepsilon^{ 3}}{512\cdot m\cdot q^{2}}.\]
Using the constructive post-quantum reduction from Theorem 4.1, we can convert Theorem 4.3 into a quantum Goldreich-Levin Theorem for finite fields, and obtain the following.
**Theorem 4.4** (Quantum Goldreich-Levin Theorem for Large Fields).: _Let \(q\) be a prime and \(m\in\mathbb{N}\). Let \(\sigma\in(2\sqrt{m},q/2\sqrt{m})\) and let \(\Phi:\mathcal{L}(\mathcal{H}_{q}^{m})\to\mathcal{L}(\mathcal{H}_{\textsc{aux}})\) be any \(\mathsf{CPTP}\) map with auxiliary system \(\mathcal{H}_{\textsc{aux}}\). Suppose there exists a distinguisher \(\mathcal{D}\) which runs in time \(T(\mathcal{D})\) such that_
\[\left|\Pr\left[\mathcal{D}\big{(}\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x},\mathsf{aux}(\mathbf{x})\big{)}=1\,:\,\begin{subarray}{c}\mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathsf{aux}(\mathbf{x})\leftarrow\Phi(|\mathbf{x}\rangle\langle\mathbf{x}|)\end{subarray}\right]-\Pr\left[\mathcal{D}\big{(}\mathbf{u},r,\mathsf{aux}(\mathbf{x})\big{)}=1\,:\,\begin{subarray}{c}\mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m},\ r\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathsf{aux}(\mathbf{x})\leftarrow\Phi(|\mathbf{x}\rangle\langle\mathbf{x}|)\end{subarray}\right]\right|=\varepsilon.\]
_Then, there exists a quantum extractor \(\mathcal{E}\) that runs in time \(T(\mathcal{E})=\mathrm{poly}(m,T(\mathcal{D}),\sigma,q,1/\varepsilon)\) with_
\[\Pr\left[\mathcal{E}\big{(}\mathsf{aux}(\mathbf{x})\big{)}=\mathbf{x}\,\,:\,\begin{subarray}{c}\mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathsf{aux}(\mathbf{x})\leftarrow\Phi(|\mathbf{x}\rangle\langle\mathbf{x}|)\end{subarray}\right]\,\geq\,\frac{\varepsilon^{3}}{512\cdot m\cdot q^{2}}.\]
Proof.: The proof follows immediately by combining Theorem 4.3 and Theorem 4.1.
### Amplification
We now show that it is possible to _boost_ the success probability of the Goldreich-Levin extractor, assuming a particular kind of leakage on the hidden vector. Consider the following algorithm.
```
Input: Matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\), vector \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\) and auxiliary input \(\mathsf{aux}(\mathbf{x})\in\{0,1\}^{*}\). Parameters:\(\nu,\delta\in(0,1)\). Output: Vector \(\mathbf{x}\in\mathbb{Z}_{q}^{m}\).
1 for \(i=1,\ldots,\lceil\frac{1}{\nu}\ln\left(\frac{1}{\delta}\right)\rceil\) do
2  run \(\mathbf{x}_{i}\leftarrow\mathcal{E}(\mathsf{aux}(\mathbf{x}))\), where \(\mathcal{E}\) is the Goldreich-Levin extractor in Theorem 4.3.
3  if \(\mathbf{x}_{i}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{m})\) then
4   output \(\mathbf{x}_{i}\)
5  else
6   continue
7  end if
8 end for
```
**Algorithm 3**\(\mathsf{BoostedExtractor}(\mathbf{A},\mathbf{y},\mathsf{aux}(\mathbf{x}))\)
**Theorem 4.5** (Boosted Classical Goldreich-Levin Theorem for Large Fields).: _Let \(n,m\in\mathbb{N}\) be integers and let \(q\) be a prime. Let \(\sigma\in(2\sqrt{m},q/2\sqrt{m})\) and let \(H=\{\mathbf{x}\in\mathbb{Z}_{q}^{m}\,:\,\|\mathbf{x}\|\leq\sigma\sqrt{m}\}\) be a subset of \(\mathbb{Z}_{q}^{m}\). Let \(\mathsf{aux}:H\to\{0,1\}^{*}\) be any (possibly randomized) auxiliary information. Suppose that there exists a distinguisher \(\mathcal{D}\) which runs in time \(T(\mathcal{D})\) such that_
\[\left|\Pr\left[\mathcal{D}\big{(}\mathbf{A},\mathbf{y},\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x},\mathsf{aux}(\mathbf{x})\big{)}=1:\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ \mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathbf{y}\leftarrow\mathbf{A}\mathbf{x}\ (\mathrm{mod}\ q)\end{array}\right]-\Pr\left[\mathcal{D}\big{(}\mathbf{A},\mathbf{y},\mathbf{u},r,\mathsf{aux}(\mathbf{x})\big{)}=1:\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ \mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m},\ r\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathbf{y}\leftarrow\mathbf{A}\mathbf{x}\ (\mathrm{mod}\ q)\end{array}\right]\right|=\varepsilon.\]
_Let \(\nu=\varepsilon^{3}/(512mq^{2})\) and \(\delta=\exp(-\Omega(n))\) be parameters. Then, \(\mathsf{BoostedExtractor}(\mathbf{A},\mathbf{y},\mathsf{aux}(\mathbf{x}))\) in Algorithm 3 is a non-adaptive black-box extractor that runs in time \(T(\mathcal{D})\cdot\mathrm{poly}(n,m,\sigma,q,1/\varepsilon)\) and outputs a short vector in the coset \(\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\) with probability at least_
\[\Pr\left[\mathsf{BoostedExtractor}(\mathbf{A},\mathbf{y},\mathsf{aux}(\mathbf{ x}))\in\Lambda_{q}^{\mathbf{Y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0}, \sigma\sqrt{m})\,:\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{ \leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathbf{y}\leftarrow\mathbf{A}\mathbf{x}\ (\mathrm{mod}\ q)\end{array}\right]\geq 1- \exp(-\Omega(n)).\]
Proof.: Recall that the Goldreich-Levin extractor \(\mathcal{E}\) in Theorem 4.3 is a non-adaptive black-box extractor running in time \(T(\mathcal{E})=T(\mathcal{D})\cdot\mathrm{poly}(m,\sigma,1/\varepsilon)\) that, on input \(\mathsf{aux}(\mathbf{x})\), outputs \(\mathbf{x}\) with probability at least \(\varepsilon^{3}/(512mq^{2})\). Let \(L=\lceil\frac{1}{\nu}\ln\left(\frac{1}{\delta}\right)\rceil\) with \(\nu=\varepsilon^{3}/(512mq^{2})\) and \(\delta=\exp(-\Omega(n))\), so that each iteration succeeds with probability at least \(\nu\). Therefore, the probability that \(\mathsf{BoostedExtractor}(\mathbf{A},\mathbf{y},\mathsf{aux}(\mathbf{x}))\) in Algorithm 3 fails is at most
\[(1-\nu)^{L}\leq\exp(-L\cdot\nu)\leq\exp(-\Omega(n)).\]
This proves the claim.
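To make the repetition count concrete, here is a small Python sketch (not from the paper): it wraps an abstract extractor in the loop of Algorithm 3 and prints the number of iterations \(L\) and the failure bound for illustrative numbers; the callables, the per-call success probability, and the toy parameters are stand-ins for the quantities in Theorem 4.5.

```
# Sketch (not from the paper) of the repetition bound behind Algorithm 3. One
# extractor call is modeled as succeeding with probability nu, and the check
# "x_i in Lambda_q^y(A) and short" is abstracted as a predicate `is_good`.
import math
import random

def boosted_extractor(extract_once, is_good, nu, delta):
    """Repeat an extractor whose per-call success probability is at least nu;
    the loop fails to find a good output with probability at most delta."""
    L = math.ceil((1 / nu) * math.log(1 / delta))
    for _ in range(L):
        candidate = extract_once()
        if is_good(candidate):
            return candidate
    return None   # probability <= (1 - nu)^L <= exp(-L * nu) <= delta

# Illustrative numbers: eps = 1/10, m = 2^10, q = 8191, nu = eps^3 / (512 m q^2).
eps, m, q = 0.1, 2**10, 8191
nu = eps**3 / (512 * m * q**2)
delta = 2.0**-128
L = math.ceil((1 / nu) * math.log(1 / delta))
print(f"per-call success probability nu ~ {nu:.3e}, repetitions L ~ {L:.3e}")
print(f"failure bound exp(-L*nu) = {math.exp(-L * nu):.3e} <= delta = {delta:.3e}")

# A tiny simulation with an artificial success probability, just to run the loop:
random.seed(0)
out = boosted_extractor(lambda: random.random(), lambda c: c < 0.01, nu=0.01, delta=1e-9)
print("simulated run found a good candidate:", out is not None)
```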
Using the constructive post-quantum reduction from Theorem 4.1, we can convert Theorem 4.5 into a (boosted) quantum Goldreich-Levin Theorem for finite fields, and obtain the following.
**Theorem 4.6** (Boosted Quantum Goldreich-Levin Theorem for Large Fields).: _Let \(n,m\in\mathbb{N}\) and \(q\) be a prime. Let \(\sigma\in(2\sqrt{m},q/2\sqrt{m})\). Let \(\Phi:\mathcal{L}(\mathcal{H}_{q}^{m})\to\mathcal{L}(\mathcal{H}_{\textsc{aux}})\) be any \(\mathsf{CPTP}\) map with auxiliary system \(\mathcal{H}_{\textsc{aux}}\). Suppose that there exists a distinguisher \(\mathcal{D}\) which runs in time \(T(\mathcal{D})\) such that_
\[\left|\Pr\left[\mathcal{D}\big{(}\mathbf{A},\mathbf{y},\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x},\mathsf{aux}(\mathbf{x})\big{)}=1:\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ \mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathbf{y}\leftarrow\mathbf{A}\mathbf{x}\ (\mathrm{mod}\ q)\end{array}\right]-\Pr\left[\mathcal{D}\big{(}\mathbf{A},\mathbf{y},\mathbf{u},r,\mathsf{aux}(\mathbf{x})\big{)}=1:\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ \mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m},\ r\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathbf{y}\leftarrow\mathbf{A}\mathbf{x}\ (\mathrm{mod}\ q)\end{array}\right]\right|=\varepsilon,\]
_where \(\mathsf{aux}(\mathbf{x})\leftarrow\Phi(|\mathbf{x}\rangle\langle\mathbf{x}|)\). Then, there exists a quantum extractor \(\mathcal{E}\) that has a running time of \(T(\mathcal{E})=T(\mathcal{D})\cdot\mathrm{poly}(n,m,\sigma,q,1/\varepsilon)\) and outputs a short vector in \(\Lambda_{q}^{\mathbf{Y}}(\mathbf{A})\) with probability at least_
\[\Pr\left[\mathcal{E}(\mathbf{A},\mathbf{y},\mathsf{aux}(\mathbf{x}))\in \Lambda_{q}^{\mathbf{Y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt {m})\,:\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}} \mathbb{Z}_{q}^{n\times m}\\ \mathbf{x}\sim D_{\mathbb{Z}_{q}^{m},\sigma}\\ \mathbf{y}\leftarrow\mathbf{A}\mathbf{x}\ (\mathrm{mod}\ q)\end{array}\right]\geq 1- \exp(-\Omega(n)).\]
Proof.: The proof follows immediately by combining Theorem 4.5 and Theorem 4.1.
## 5 Definition: Key-Revocable Public-Key Encryption
Let us now give a formal definition of key-revocable public-key encryption schemes.
**Definition 5.1** (Key-Revocable Public-Key Encryption).: _A key-revocable public-key encryption scheme consists of efficient algorithms \((\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\), where \(\mathsf{Enc}\) is a \(\mathsf{PPT}\) algorithm and \(\mathsf{KeyGen},\mathsf{Dec}\) and \(\mathsf{Revoke}\) are \(\mathsf{QPT}\) algorithms defined as follows:_
* \(\mathsf{KeyGen}(1^{\lambda})\)_: given as input a security parameter_ \(\lambda\)_, output a public key_ \(\mathsf{PK}\)_, a master secret key_ \(\mathsf{MSK}\) _and a quantum decryption key_ \(\rho_{\mathsf{SK}}\)_._
* \(\mathsf{Enc}(\mathsf{PK},x)\)_: given a public key_ \(\mathsf{PK}\) _and plaintext_ \(x\in\{0,1\}^{\ell}\)_, output a ciphertext_ \(\mathsf{CT}\)_._
* \(\mathsf{Dec}(\rho_{\mathsf{SK}},\mathsf{CT})\)_: given a decryption key_ \(\rho_{\mathsf{SK}}\) _and ciphertext_ \(\mathsf{CT}\)_, output a message_ \(y\)_._
* \(\mathsf{Revoke}\left(\mathsf{PK},\mathsf{MSK},\sigma\right)\)_: given as input a master secret key_ \(\mathsf{MSK}\)_, a public key_ \(\mathsf{PK}\) _and quantum state_ \(\sigma\)_, output_ \(\mathsf{Valid}\) _or_ \(\mathsf{Invalid}\)_._
Correctness of Decryption.For every \(x\in\{0,1\}^{\ell}\), the following holds:
\[\mathsf{Pr}\left[x\leftarrow\mathsf{Dec}(\rho_{\mathsf{SK}},\mathsf{CT})\ :\ \begin{array}{c}(\mathsf{PK},\mathsf{MSK},\rho_{\mathsf{SK}})\leftarrow\mathsf{ KeyGen}(1^{\lambda})\\ \mathsf{CT}\leftarrow\mathsf{Enc}(\mathsf{PK},x)\end{array}\right]\geq 1-\nu( \lambda),\]
where \(\nu(\cdot)\) is a negligible function.
Correctness of Revocation.The following holds:
\[\mathsf{Pr}\left[\mathsf{Valid}\leftarrow\mathsf{Revoke}\left(\mathsf{PK}, \mathsf{MSK},\rho_{\mathsf{SK}}\right)\ :\ (\mathsf{PK},\mathsf{MSK},\rho_{\mathsf{SK}})\leftarrow\mathsf{ KeyGen}(1^{\lambda})\right]\geq 1-\nu(\lambda),\]
where \(\nu(\cdot)\) is a negligible function.
**Remark 5.2**.: _Using the well-known "Almost As Good As New Lemma" (Lemma 2.2), the procedure \(\mathsf{Dec}\) can be purified to obtain another quantum circuit \(\widetilde{\mathsf{Dec}}\) such that, whenever \(\mathsf{CT}\) is an encryption of \(x\), \(\widetilde{\mathsf{Dec}}(\rho_{\mathsf{SK}},\mathsf{CT})\) yields \((x,\rho_{\mathsf{SK}}^{\prime})\) with probability at least \(1-\nu(\lambda)\) and moreover \(\mathrm{TD}(\rho_{\mathsf{SK}}^{\prime},\rho_{\mathsf{SK}})\leq\nu^{\prime}(\lambda)\), where \(\nu^{\prime}(\lambda)\) is another negligible function._
### Security Definition
Let \(\Sigma=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\) be a key-revocable public-key encryption scheme. We consider the following security experiment, defined below.
**Definition 5.3**.: _A key-revocable public-key encryption scheme \(\Sigma=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\) is secure if, for every \(\mathsf{QPT}\) adversary \(\mathcal{A}\), the following holds:_
\[\Pr\left[b\leftarrow\mathsf{Expt}_{\Sigma}^{\mathcal{A}}(1^{\lambda},b)\ :\ b\stackrel{{ \$}}{{\leftarrow}}\{0,1\}\right]\leq\frac{1}{2}+\mathsf{negl}(\lambda),\]
_where \(\mathsf{Expt}_{\Sigma}^{\mathcal{A}}(1^{\lambda},b)\) is as defined in Figure 1._
**Remark 5.4**.: _In the traditional setting, 1-bit unpredictability and computational indistinguishability are equivalent in the following sense: if there are two distributions \(D_{0}\) and \(D_{1}\) such that an efficient adversary can distinguish these two distributions with advantage \(\epsilon\) then the same adversary can predict \(D_{0}\) versus \(D_{1}\) with probability \(\frac{1}{2}+\frac{\epsilon}{2}\)._
_This observation no longer applies to the above setting where we simultaneously need to consider the success probability of \(\mathsf{Revoke}\). As a result, our definition is incomparable with a variant of the above definition where we instead require the adversary to distinguish a valid ciphertext versus uniform._
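For reference, the conversion asserted in the first paragraph of Remark 5.4 is the standard one (the computation is not spelled out in the text): a predictor that runs the distinguisher \(\mathcal{D}\) on its sample from \(D_{b}\), with \(b\xleftarrow{\$}\{0,1\}\), and outputs the result succeeds with probability
\[\Pr[b^{\prime}=b]=\frac{1}{2}\Pr[\mathcal{D}(D_{0})=0]+\frac{1}{2}\Pr[\mathcal{D}(D_{1})=1]=\frac{1}{2}+\frac{1}{2}\bigl(\Pr[\mathcal{D}(D_{1})=1]-\Pr[\mathcal{D}(D_{0})=1]\bigr)=\frac{1}{2}+\frac{\epsilon}{2}.\]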
Hybrid Lemma. We present a hybrid lemma for 1-bit unpredictability below. This lemma will be useful in applications where we can employ a hybrid argument in a similar vein as done in the computational indistinguishability setting.
**Lemma 5.5** (Hybrid Lemma for 1-Bit Unpredictability).: _Suppose there exists a sequence of hybrid experiments \(\mathsf{H}_{1},\ldots,\mathsf{H}_{k}\) such that any \(\mathsf{QPT}\) predictor \(\mathcal{A}\) can predict \(\mathsf{H}_{i}\) versus \(\mathsf{H}_{i+1}\) with advantage at most \(\epsilon_{i}\). Then, \(\mathcal{A}\) can only predict hybrid \(\mathsf{H}_{1}\) versus \(\mathsf{H}_{k}\) with advantage of at most \(\sum_{i=1}^{k-1}\epsilon_{i}\)._
Proof.: Let \(\mathcal{A}\) be a \(\mathsf{QPT}\) adversary and suppose that for \(i\in[k-1]\):
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{i}^{\mathcal{A}}]+\frac{1}{2} \mathsf{Pr}[1\leftarrow\mathsf{H}_{i+1}^{\mathcal{A}}]=\frac{1}{2}+\epsilon_{ i}.\]
Figure 1: Security Experiment
We give a proof by induction. First, note that the base case for \(k=2\) follows immediately by the definition of \(\epsilon_{1}\). Now fix an arbitrary \(k\geq 2\), and suppose that \(\mathcal{A}\) can predict hybrid \(\mathsf{H}_{1}\) versus \(\mathsf{H}_{k}\) with advantage at most \(\sum_{i=1}^{k-1}\epsilon_{i}\). In other words, by the induction hypothesis we have
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{1}^{\mathcal{A}}]+\frac{1}{2} \mathsf{Pr}[1\leftarrow\mathsf{H}_{k}^{\mathcal{A}}]=\frac{1}{2}+\sum_{i=1}^{ k-1}\epsilon_{i}.\]
Suppose also that
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{k}^{\mathcal{A}}]+\frac{1}{2} \mathsf{Pr}[1\leftarrow\mathsf{H}_{k+1}^{\mathcal{A}}]=\frac{1}{2}+\epsilon_ {k}.\]
By taking the sum of the two equations above, we get
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{1}^{\mathcal{A}}]+\frac{1}{2} \mathsf{Pr}[1\leftarrow\mathsf{H}_{k}^{\mathcal{A}}]+\frac{1}{2}\mathsf{Pr}[ 0\leftarrow\mathsf{H}_{k}^{\mathcal{A}}]+\frac{1}{2}\mathsf{Pr}[1\leftarrow \mathsf{H}_{k+1}^{\mathcal{A}}]=1+\sum_{i=1}^{k}\epsilon_{i}.\]
Using the identity \(\mathsf{Pr}[0\leftarrow\mathsf{H}_{k}^{\mathcal{A}}]+\mathsf{Pr}[1\leftarrow \mathsf{H}_{k}^{\mathcal{A}}]=1\), we obtain the desired identity for \(k+1\):
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{1}^{\mathcal{A}}]+\frac{1}{2} \mathsf{Pr}[1\leftarrow\mathsf{H}_{k+1}^{\mathcal{A}}]=\frac{1}{2}+\sum_{i=1} ^{k}\epsilon_{i}.\]
This proves the claim.
### Key-Revocable Public-Key Fully Homomorphic Encryption
A key-revocable public-key fully homomorphic encryption scheme defined for a class of functions \(\mathcal{F}\), in addition to \((\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\), consists of the following \(\mathsf{PPT}\) algorithm:
* \(\mathsf{Eval}(\mathsf{PK},f,\mathsf{CT})\): on input a public key \(\mathsf{PK}\), function \(f\in\mathcal{F}\), ciphertext \(\mathsf{CT}\), outputs another ciphertext \(\mathsf{CT}^{\prime}\).
**Remark 5.6**.: _Sometimes we allow \(\mathsf{KeyGen}\) to additionally take as input different parameters associated with the implementations of the functions in \(\mathcal{F}\). For example, we allow \(\mathsf{KeyGen}\) to take as input a parameter \(L\) in such a way that all the parameters in the system depend on \(L\) and moreover, the homomorphic evaluation is only supported on circuits (in \(\mathcal{F}\)) of depth at most \(L\)._
Correctness of Evaluation and Decryption.For every \(f\in\mathcal{F}\) with \(\ell\)-bit inputs, every \(x\in\{0,1\}^{\ell}\), the following holds:
\[\mathsf{Pr}\left[f(x)\leftarrow\mathsf{Dec}(\rho_{\mathsf{SK}},\mathsf{CT}^{\prime})\ :\ \begin{array}{c}(\mathsf{PK},\mathsf{MSK},\rho_{\mathsf{SK}})\leftarrow\mathsf{KeyGen}(1^{\lambda})\\ \mathsf{CT}\leftarrow\mathsf{Enc}(\mathsf{PK},x)\\ \mathsf{CT}^{\prime}\leftarrow\mathsf{Eval}(\mathsf{PK},f,\mathsf{CT})\end{array}\right]\geq 1-\nu(\lambda),\]
where \(\nu(\cdot)\) is a negligible function.
### From Single-Bit to Multi-Bit Security
Consider the following transformation from a single-bit key-revocable public-key encryption scheme to a multi-bit scheme. While such a transformation was known for indistinguishability-based encryption schemes, we show that the same transformation also works in the 1-bit unpredictability setting.
**Construction 1** (Single-Bit to Multi-Bit Transformation).: _Let \(\Sigma=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\) be a single-bit key-revocable public-key encryption scheme. Then, for \(k\in\mathbb{N}\), we define the corresponding multi-bit transformation \(\Sigma^{k}=\left(\mathsf{KeyGen}^{k},\mathsf{Enc}^{k},\mathsf{Dec}^{k}, \mathsf{Revoke}^{k}\right)\) as follows:_
* \(\mathsf{KeyGen}^{k}(1^{\lambda})\)_: given as input a security parameter_ \(\lambda\)_, run_ \(\mathsf{KeyGen}(1^{\lambda})\) _to output a public key_ \(\mathsf{PK}\)_, a master secret key_ \(\mathsf{MSK}\) _and a quantum decryption key_ \(\rho_{\mathsf{SK}}\)_._
* \(\mathsf{Enc}^{k}(\mathsf{PK},x)\)_: given a public key_ \(\mathsf{PK}\) _and plaintext_ \(x\in\{0,1\}^{k}\)_, output the ciphertext_ \[\mathsf{CT}=\left(\mathsf{Enc}(\mathsf{PK},x_{1}),\ldots,\mathsf{Enc}(\mathsf{ PK},x_{k})\right).\]
* \(\mathsf{Dec}^{k}(\rho_{\mathsf{SK}},\mathsf{CT})\)_: given a decryption key_ \(\rho_{\mathsf{SK}}\) _and a ciphertext_ \(\mathsf{CT}=\mathsf{CT}_{1},\ldots,\mathsf{CT}_{k}\)_, decrypt each of the ciphertexts separately by running the purified variant_10 _of_ \(\mathsf{Dec}\) _and re-using the key._ Footnote 10: See Remark 5.2.
* \(\mathsf{Revoke}^{k}\left(\mathsf{PK},\mathsf{MSK},\sigma\right)\)_: given as input a master secret key_ \(\mathsf{MSK}\)_, a public key_ \(\mathsf{PK}\) _and quantum state_ \(\sigma\)_, run_ \(\mathsf{Revoke}\left(\mathsf{PK},\mathsf{MSK},\sigma\right)\) _to output_ \(\mathsf{Valid}\) _or_ \(\mathsf{Invalid}\)_._
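The following Python sketch (not from the paper) shows how the wrapper composes. It is purely classical: the quantum decryption key is modeled as an opaque value `sk` that is reused across the \(k\) decryptions, in the spirit of the purified decryptor from Remark 5.2, and the single-bit scheme is given by hypothetical callables.

```
# Classical sketch of Construction 1 (not from the paper). The single-bit scheme
# is given by abstract callables; `sk` stands in for the quantum decryption key,
# which the purified decryptor of Remark 5.2 reuses across the k invocations.
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class SingleBitScheme:
    keygen: Callable[[], tuple]              # () -> (pk, msk, sk)
    enc: Callable[[Any, int], Any]           # (pk, bit) -> ct
    dec: Callable[[Any, Any], int]           # (sk, ct) -> bit
    revoke: Callable[[Any, Any, Any], bool]  # (pk, msk, returned state) -> valid?

def keygen_k(base: SingleBitScheme):
    return base.keygen()                     # keys are exactly those of the single-bit scheme

def enc_k(base: SingleBitScheme, pk, bits: List[int]):
    return [base.enc(pk, b) for b in bits]   # encrypt bit by bit under the same public key

def dec_k(base: SingleBitScheme, sk, cts) -> List[int]:
    return [base.dec(sk, ct) for ct in cts]  # reuse the key for each ciphertext

def revoke_k(base: SingleBitScheme, pk, msk, state) -> bool:
    return base.revoke(pk, msk, state)       # revocation is unchanged

# Exercise the wrapper with a trivial (insecure) stand-in single-bit scheme:
toy = SingleBitScheme(
    keygen=lambda: ("pk", "msk", "sk"),
    enc=lambda pk, b: ("ct", b),
    dec=lambda sk, ct: ct[1],
    revoke=lambda pk, msk, st: True,
)
pk, msk, sk = keygen_k(toy)
cts = enc_k(toy, pk, [1, 0, 1, 1])
print(dec_k(toy, sk, cts))                   # [1, 0, 1, 1]
```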
The following claim follows immediately from the "Almost As Good As New Lemma" (Lemma 2.2) mentioned in Remark 5.2.
**Claim 5.7**.: _Let \(\lambda\in\mathbb{N}\) be the security parameter. If \(\Sigma=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\) satisfies correctness of decryption and revocation, then so does \(\Sigma^{k}\) in Construction 1 for any \(k=\mathrm{poly}(\lambda)\)._
Finally, we show the following.
**Claim 5.8**.: _Let \(\lambda\in\mathbb{N}\) be the security parameter. If \(\Sigma=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\) is a secure key-revocable public-key encryption scheme, then so is \(\Sigma^{k}\) in Construction 1 for any \(k=\mathrm{poly}(\lambda)\)._
Proof.: Let \(\lambda\in\mathbb{N}\) and \(k=\mathrm{poly}(\lambda)\). Let \(\mathcal{A}\) be a \(\mathsf{QPT}\) adversary and suppose that
\[\Pr\left[b\leftarrow\mathsf{Expt}^{\mathcal{A}}_{\Sigma^{k}}(1^{\lambda},b)\ :\ b\stackrel{{\$}}{{\leftarrow}}\{0,1\}\right]=\frac{1}{2}+\epsilon( \lambda),\]
for some \(\varepsilon(\lambda)\) with respect to \(\mathsf{Expt}^{\mathcal{A}}_{\Sigma^{k}}(1^{\lambda},b)\) in Figure 1. We show that \(\varepsilon(\lambda)\) is negligible.
For \(i\in[k]\), we now consider the following sequence of intermediate hybrid experiments \(\mathsf{H}^{\mathcal{A}}_{i}\) defined in Figure 2, where \(\mathsf{H}_{1}=\mathsf{Expt}^{\mathcal{A}}_{\Sigma^{k}}(1^{\lambda},0)\) and \(\mathsf{H}_{k}=\mathsf{Expt}^{\mathcal{A}}_{\Sigma^{k}}(1^{\lambda},1)\). Because the single-bit scheme \(\Sigma\) is secure, there exist negligible functions \(\epsilon_{i}(\lambda)\) such that for each \(i\in[k-1]\),
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}^{\mathcal{A}}_{i}(1^{\lambda})]+ \frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{H}^{\mathcal{A}}_{i+1}(1^{\lambda}) ]=\frac{1}{2}+\epsilon_{i}(\lambda).\]
Using Lemma 5.5, we get that \(\epsilon(\lambda)\leq\sum_{i=1}^{k-1}\epsilon_{i}(\lambda)\leq\mathsf{negl}(\lambda)\). This proves the claim.
## 6 Key-Revocable Dual-Regev Encryption
In this section, we present the first construction of key-revocable public-key encryption from standard assumptions. Our construction involves making the Dual Regev public-key encryption of Gentry, Peikert and Vaikuntanathan [1] key revocable.
### Construction
We define our Dual-Regev construction below.
**Construction 2** (Key-Revocable Dual-Regev Encryption).: _Let \(n\in\mathbb{N}\) be the security parameter and \(m\in\mathbb{N}\). Let \(q\geq 2\) be a prime and let \(\alpha,\beta,\sigma>0\) be parameters. The key-revocable public-key scheme \(\mathsf{RevDual}=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\) consists of the following \(\mathsf{QPT}\) algorithms:_
* \(\mathsf{KeyGen}(1^{\lambda})\rightarrow(\mathsf{PK},\rho_{\mathsf{SK}}, \mathsf{MSK}):\) _sample_ \((\mathbf{A}\in\mathbb{Z}_{q}^{n\times m},\mathsf{td}_{\mathbf{A}})\leftarrow \mathsf{Gen}\mathsf{Trap}(1^{n},1^{m},q)\) _and generate a Gaussian superposition_ \((\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y})\leftarrow\mathsf{Gen} \mathsf{Gauss}(\mathbf{A},\sigma)\) _with_ \[\left|\psi_{\mathbf{y}}\right\rangle\ =\sum_{\begin{subarray}{c}\mathbf{x}\in \mathbb{Z}_{q}^{m}\\ \mathbf{Ax}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{x})\ \left|\mathbf{x}\right\rangle,\]
_for some \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\). Output \(\mathsf{PK}=(\mathbf{A},\mathbf{y})\), \(\rho_{\mathsf{SK}}=\left|\psi_{\mathbf{y}}\right\rangle\) and \(\mathsf{MSK}=\mathsf{td}_{\mathbf{A}}\)._
* \(\mathsf{Enc}(\mathsf{PK},\mu)\to\mathsf{CT}:\) _to encrypt a bit_ \(\mu\in\{0,1\}\)_, sample a random vector_ \(\mathbf{s}\xleftarrow{\mathbf{s}}\mathbb{Z}_{q}^{n}\) _and errors_ \(\mathbf{e}\sim D_{\mathbb{Z}^{m},\,\alpha q}\) _and_ \(e^{\prime}\sim D_{\mathbb{Z},\,\beta q}\)_, and output the ciphertext pair_ \[\mathsf{CT}=\left(\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal}\ ( \mathrm{mod}\ q),\mathbf{s}^{\intercal}\mathbf{y}+e^{\prime}+\mu\cdot\lfloor \frac{q}{2}\rfloor\ (\mathrm{mod}\ q)\right)\in\mathbb{Z}_{q}^{m}\times \mathbb{Z}_{q}.\]
* \(\mathsf{Dec}(\rho_{\mathsf{SK}},\mathsf{CT})\to\{0,1\}:\) _to decrypt_ \(\mathsf{CT}\)_, apply the unitary_ \(U:\left|\mathbf{x}\right\rangle\left|0\right\rangle\to\left|\mathbf{x}\right\rangle \left|\mathsf{CT}\cdot(-\mathbf{x},1)^{\intercal}\right\rangle\) _on input_ \(\left|\psi_{\mathbf{y}}\right\rangle\left|0\right\rangle\)_, where_ \(\rho_{\mathsf{SK}}=\left|\psi_{\mathbf{y}}\right\rangle\)_, and measure the second register in the computational basis. Output_ \(0\)_, if the measurement outcome is closer to_ \(0\) _than to_ \(\lfloor\frac{q}{2}\rfloor\)_, and output_ \(1\)_, otherwise._
* \(\mathsf{Revoke}(\mathsf{MSK},\mathsf{PK},\rho)\to\{\top,\bot\}\)_: on input_ \(\mathsf{td}_{\mathbf{A}}\leftarrow\mathsf{MSK}\) _and_ \((\mathbf{A},\mathbf{y})\leftarrow\mathsf{PK}\)_, apply the measurement_ \(\{\left|\psi_{\mathbf{y}}\right\rangle\left\langle\psi_{\mathbf{y}}\right|,I- \left|\psi_{\mathbf{y}}\right\rangle\left\langle\psi_{\mathbf{y}}\right|\}\) _onto the state_ \(\rho\) _using the procedure_ \(\mathsf{QSampGauss}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\sigma)\) _in Algorithm_ 2_. Output_ \(\top\)_, if the measurement is successful, and output_ \(\bot\) _otherwise._
Correctness of Decryption. Follows from the correctness of Dual-Regev public-key encryption.
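Concretely, for any \(\mathbf{x}\) in the support of \(\left|\psi_{\mathbf{y}}\right\rangle\) (so that \(\mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\)), the value written into the second register during \(\mathsf{Dec}\) is
\[\mathsf{CT}\cdot(-\mathbf{x},1)^{\intercal}=-(\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^{\intercal})\mathbf{x}+\mathbf{s}^{\intercal}\mathbf{y}+e^{\prime}+\mu\cdot\lfloor\tfrac{q}{2}\rfloor=\mu\cdot\lfloor\tfrac{q}{2}\rfloor+\bigl(e^{\prime}-\mathbf{e}^{\intercal}\mathbf{x}\bigr)\ (\mathrm{mod}\ q),\]
and for an appropriate choice of the noise parameters the error term \(e^{\prime}-\mathbf{e}^{\intercal}\mathbf{x}\) is small compared to \(\lfloor q/2\rfloor\), so rounding recovers \(\mu\).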
Correctness of Revocation. Follows from Theorem 3.3.
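To make the classical mechanics of Construction 2 concrete, the Python sketch below (not from the paper) replaces the quantum key \(\left|\psi_{\mathbf{y}}\right\rangle\) by a single short vector \(\mathbf{x}\) with \(\mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\), approximates the discrete Gaussians by rounded continuous Gaussians, and omits the trapdoor and revocation entirely; the parameters are chosen only so that the toy example decrypts correctly.

```
# Toy classical analogue of the Dual-Regev part of Construction 2 (not from the
# paper): the decryption key is one short vector x with A x = y (mod q), the
# discrete Gaussians are approximated by rounded continuous Gaussians, and the
# trapdoor and revocation are omitted.
import numpy as np

rng = np.random.default_rng(7)
n, m, q = 8, 64, 12289
alpha_q, beta_q, sigma = 3.0, 6.0, 3.0       # absolute noise widths alpha*q, beta*q and key width

def gauss(width, size=None):
    return np.rint(rng.normal(0.0, width, size)).astype(np.int64)

def keygen():
    A = rng.integers(0, q, size=(n, m))
    x = gauss(sigma, m)                      # short "decryption key"
    y = A @ x % q
    return (A, y), x

def enc(pk, mu):
    A, y = pk
    s = rng.integers(0, q, size=n)
    e = gauss(alpha_q, m)
    e_prime = int(gauss(beta_q))
    c1 = (s @ A + e) % q
    c2 = (int(s @ y) + e_prime + mu * (q // 2)) % q
    return c1, c2

def dec(x, ct):
    c1, c2 = ct
    val = int(c2 - c1 @ x) % q               # = mu*floor(q/2) + (e' - e^T x)  (mod q)
    return 1 if min(val, q - val) > q // 4 else 0

pk, x = keygen()
print([dec(x, enc(pk, b)) for b in [0, 1, 1, 0, 1]])    # expected output: [0, 1, 1, 0, 1]
```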
Let us now prove the security of our key-revocable Dual-Regev construction.
**Theorem 6.1**.: _Let \(n\in\mathbb{N}\) and \(q\) be a prime modulus with \(q=2^{o(n)}\) and \(m\geq 2n\log q\), each parameterized by the security parameter \(\lambda\in\mathbb{N}\). Let \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\) and let \(\alpha,\beta\in(0,1)\) be noise ratios chosen such that \(\beta/\alpha=2^{o(n)}\) and \(1/\alpha=2^{o(n)}\cdot\sigma\). Then, assuming the subexponential hardness of the \(\mathsf{LWE}_{n,q,\alpha q}^{m}\) and \(\mathsf{SIS}_{n,q,\sigma\sqrt{2m}}^{m}\) problems, the scheme \(\mathsf{RevDual}=\left(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke}\right)\) in Construction 2 is a secure key-revocable public-key encryption scheme according to Definition 5.3._
**Remark 6.2**.: _Note that our construction only handles 1-bit messages. However, we can apply the transformation in Construction 1 (Section 5.3) to obtain a key-revocable public-key encryption scheme for multi-bit messages._
Guide for proving Theorem 6.1.
* The first step towards proving Theorem6.1 is the simultaneous search-to-decision reduction (Theorem6.8). Here, we show how to extract a short vector mapping \(\mathbf{A}\) to \(\mathbf{y}\) from an efficient adversary who has a non-negligible advantage in Definition5.3.
* Next, we exploit the search-to-decision reduction to extract two distinct short vectors mapping \(\mathbf{A}\) to \(\mathbf{y}\). This is proven in Section 6.3.
* Finally, we put all the pieces together in Section 6.4 and show how to use the result from Section 6.3 in order to break the SIS assumption.
### Simultaneous Search-to-Decision Reduction with Quantum Auxiliary Input
Our first result concerns distinguishers with quantum auxiliary input that can distinguish between Dual-Regev samples and uniformly random samples with high probability. In Theorem 6.3, we give a search-to-decision reduction: we show that such distinguishers can be converted into a quantum
extractor that can obtain a Dual-Regev secret key with overwhelming probability. We then improve on the result and give a _simultaneous_ search-to-decision reduction in Theorem 6.8, which holds even if we additionally require that a _revocation_ procedure succeeds on a separate register.
We first show the following result.
**Theorem 6.3** (Search-to-Decision Reduction with Quantum Auxiliary Input).: _Let \(n\in\mathbb{N}\) and \(q\) be a prime modulus with \(q=2^{o(n)}\) and let \(m\geq 2n\log q\), each parameterized by the security parameter \(\lambda\in\mathbb{N}\). Let \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\) and let \(\alpha,\beta\in(0,1)\) be noise ratios with \(\beta/\alpha=2^{o(n)}\) and \(1/\alpha=2^{o(n)}\cdot\sigma\). Let \(\mathcal{A}=\{(\mathcal{A}_{\lambda,\mathbf{A}},\nu_{\lambda})\}_{\lambda\in \mathbb{N}}\) be any non-uniform quantum algorithm consisting of a family of polynomial-sized quantum circuits_
\[\left\{\mathcal{A}_{\lambda,\mathbf{A}}:\mathcal{L}(\mathcal{H}_{q}^{m}\otimes \mathcal{H}_{B_{\lambda}})\rightarrow\mathcal{L}(\mathcal{H}_{R_{\lambda}} \otimes\mathcal{H}_{\textsc{aux}_{\lambda}})\right\}_{\mathbf{A}\in\mathbb{Z} _{q}^{n\times m}}\]
_and polynomial-sized advice states \(\nu_{\lambda}\in\mathcal{D}(\mathcal{H}_{B_{\lambda}})\) which are independent of \(\mathbf{A}\). Then, assuming the quantum hardness of the \(\mathsf{LWE}_{n,q,\alpha q}^{m}\) assumption, the following holds for every \(\mathsf{QPT}\) distinguisher \(\mathcal{D}\). Suppose that there exists a function \(\varepsilon(\lambda)=1/\mathrm{poly}(\lambda)\) such that_
\[\left|\mathsf{Pr}\left[1\leftarrow\mathsf{SearchToDecisionExpt}^{\mathcal{A},\mathcal{D}}(1^{\lambda},0)\right]-\mathsf{Pr}\left[1\leftarrow\mathsf{SearchToDecisionExpt}^{\mathcal{A},\mathcal{D}}(1^{\lambda},1)\right]\right|=\varepsilon(\lambda).\]
_Then, there exists a quantum extractor \(\mathcal{E}\) that takes as input \(\mathbf{A}\), \(\mathbf{y}\) and system \(\textsc{Aux}\) of the state \(\rho_{R,\textsc{aux}}\) and outputs a short vector in the coset \(\boldsymbol{\Lambda}_{q}^{\mathbf{y}}(\mathbf{A})\) in time \(\mathrm{poly}(\lambda,m,\sigma,q,1/\varepsilon)\) such that_
\[\Pr\left[\begin{array}{c}\mathcal{E}(\mathbf{A},\mathbf{y},\rho_{\textsc{Aux}})=\mathbf{x}\\ \bigwedge\\ \mathbf{x}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{\frac{m}{2}})\end{array}\,:\,\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ (|\psi_{\mathbf{y}}\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\nu_{\lambda})\end{array}\right]\geq 1-\mathsf{negl}(\lambda).\]
Proof.: Let \(\lambda\in\mathbb{N}\) be the security parameter and let \(\mathcal{A}=\{(\mathcal{A}_{\lambda,\mathbf{A}},\nu_{\lambda})\}_{\mathbf{A} \in\mathbb{Z}_{q}^{n\times m}}\) be a non-uniform quantum algorithm. Suppose that \(\mathcal{D}\) is a \(\mathsf{QPT}\) distinguisher with advantage \(\varepsilon=1/\mathrm{poly}(\lambda)\).
To prove the claim, we consider the following sequence of hybrid distributions.
\(\mathsf{H}_{0}\): This is the distribution \(\mathsf{lwe.Dist}^{\mathcal{A,\mathcal{D}}}\left(1^{\lambda}\right)\) in Figure 4.
\(\mathsf{H}_{1}\): This is the following distribution:
1. Sample a random matrix \(\mathbf{A}\xleftarrow{s}\mathbb{Z}_{q}^{n\times m}\).
2. Sample \(\mathbf{s}\xleftarrow{s}\mathbb{Z}_{q}^{n}\), \(\mathbf{e}\sim D_{\mathbb{Z}^{m},\alpha q}\) and \(e^{\prime}\sim D_{\mathbb{Z},\beta q}\).
3. Sample a Gaussian vector \(\mathbf{x}_{0}\sim D_{\mathbb{Z}^{m}_{q},\frac{\sigma}{\sqrt{2}}}\) and let \(\mathbf{y}=\mathbf{A}\cdot\mathbf{x}_{0}\ (\texttt{mod}\ q)\).
4. Run \(\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}| \otimes\nu_{\lambda})\) to generate a state \(\rho_{R,\textsc{aux}}\) in systems \(R\) and aux.
5. Run the distinguisher \(\mathcal{D}(\mathbf{A},\mathbf{y},\mathbf{s}^{\intercal}\mathbf{A}+\mathbf{e}^ {\intercal},\mathbf{s}^{\intercal}\mathbf{y}+e^{\prime},\rho_{\textsc{aux}})\) on the reduced state \(\rho_{\textsc{aux}}\).
\(\mathsf{H}_{2}:\) This is the following distribution:
1. Sample a uniformly random matrix \(\mathbf{A}\xleftarrow{s}\mathbb{Z}_{q}^{n\times m}\).
2. Sample \(\mathbf{s}\xleftarrow{s}\mathbb{Z}_{q}^{n}\), \(\mathbf{e}\sim D_{\mathbb{Z}^{m},\alpha q}\) and \(e^{\prime}\sim D_{\mathbb{Z},\beta q}\). Let \(\mathbf{u}=\mathbf{A}^{\intercal}\mathbf{s}+\mathbf{e}\).
3. Sample a Gaussian vector \(\mathbf{x}_{0}\sim D_{\mathbb{Z}^{m}_{q},\frac{\sigma}{\sqrt{2}}}\) and let \(\mathbf{y}=\mathbf{A}\cdot\mathbf{x}_{0}\ (\texttt{mod}\ q)\).
4. Run \(\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}| \otimes\nu_{\lambda})\) to generate a state \(\rho_{R,\textsc{aux}}\) in systems \(R\) and aux.
5. Run the distinguisher \(\mathcal{D}(\mathbf{A},\mathbf{y},\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x}_ {0}+e^{\prime},\rho_{\textsc{aux}})\) on the reduced state \(\rho_{\textsc{aux}}\).
\(\mathsf{H}_{3}:\) This is the following distribution:
1. Sample a uniformly random matrix \(\mathbf{A}\xleftarrow{s}\mathbb{Z}_{q}^{n\times m}\).
2. Sample \(\mathbf{u}\xleftarrow{s}\mathbb{Z}_{q}^{m}\) and \(e^{\prime}\sim D_{\mathbb{Z},\beta q}\).
3. Sample a Gaussian vector \(\mathbf{x}_{0}\sim D_{\mathbb{Z}^{m}_{q},\frac{\sigma}{\sqrt{2}}}\) and let \(\mathbf{y}=\mathbf{A}\cdot\mathbf{x}_{0}\ (\texttt{mod}\ q)\).
4. Run \(\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}| \otimes\nu_{\lambda})\) to generate a state \(\rho_{R,\textsc{aux}}\) in systems \(R\) and aux.
5. Run the distinguisher \(\mathcal{D}(\mathbf{A},\mathbf{y},\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x}_ {0}+e^{\prime},\rho_{\textsc{aux}})\) on the reduced state \(\rho_{\textsc{aux}}\).
\(\mathsf{H}_{4}\): This is the following distribution:
1. Sample a uniformly random matrix \(\mathbf{A}\xleftarrow{s}\mathbb{Z}_{q}^{n\times m}\).
2. Sample \(\mathbf{u}\xleftarrow{s}\mathbb{Z}_{q}^{m}\) and \(r\xleftarrow{s}\mathbb{Z}_{q}\).
3. Sample a Gaussian vector \(\mathbf{x}_{0}\sim D_{\mathbb{Z}_{q}^{m},\frac{\sigma}{\sqrt{2}}}\) and let \(\mathbf{y}=\mathbf{A}\cdot\mathbf{x}_{0}\) (mod \(q\)).
4. Run \(\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}| \otimes\nu_{\lambda})\) to generate a state \(\rho_{R,\textsc{aux}}\) in systems \(R\) and aux.
5. Run the distinguisher \(\mathcal{D}(\mathbf{A},\mathbf{y},\mathbf{u},r,\rho_{\textsc{aux}})\) on the reduced state \(\rho_{\textsc{aux}}\).
\(\mathsf{H}_{5}\): This is the distribution \(\mathsf{unif.Dist}^{\mathcal{A},\mathcal{D}}\left(1^{\lambda}\right)\) in Figure 5.
We now show the following:
**Claim 6.4**.: _Assuming \(\mathsf{LWE}_{n,q,\alpha q}^{m}\), the hybrids \(\mathsf{H}_{0}\) and \(\mathsf{H}_{1}\) are computationally indistinguishable,_
\[\mathsf{H}_{0}\,\approx_{c}\,\mathsf{H}_{1}.\]
Proof.: Here, we invoke the _Gaussian-collapsing property_ in Theorem 3.1 which states that the following samples are indistinguishable under \(\mathsf{LWE}_{n,q,\alpha q}^{m}\),
\[\left(\mathbf{A}\xleftarrow{\$}\mathbb{Z}_{q}^{n\times m},\;|\psi_{\mathbf{y}}\rangle=\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{x})\;|\mathbf{x}\rangle\,,\;\mathbf{y}\in\mathbb{Z}_{q}^{n}\right)\;\approx_{c}\;\left(\mathbf{A}\xleftarrow{\$}\mathbb{Z}_{q}^{n\times m},\;|\mathbf{x}_{0}\rangle\,,\;\mathbf{A}\cdot\mathbf{x}_{0}\ (\mathrm{mod}\ q)\in\mathbb{Z}_{q}^{n}\right)\]
where \((|\psi_{\mathbf{y}}\rangle\,,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\) and where \(\mathbf{x}_{0}\sim D_{\mathbb{Z}_{q}^{m},\frac{\sigma}{\sqrt{2}}}\) is a sample from the discrete Gaussian distribution. Because \(\mathcal{A}_{\lambda,\mathbf{A}}\) is a family of efficient quantum algorithms, this implies that
\[\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{ \mathbf{y}}|\otimes\nu_{\lambda})\quad\approx_{c}\quad\mathcal{A}_{\lambda, \mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}|\otimes\nu_{\lambda}),\]
for any polynomial-sized advice state \(\nu_{\lambda}\in\mathcal{D}(\mathcal{H}_{B_{\lambda}})\) which is independent of \(\mathbf{A}\).
**Claim 6.5**.: _Hybrids \(\mathsf{H}_{1}\) and \(\mathsf{H}_{2}\) are statistically indistinguishable. In other words,_
\[\mathsf{H}_{1}\,\approx_{s}\,\mathsf{H}_{2}.\]
Proof.: Here, we invoke the _noise flooding_ property in Lemma 2.8 to argue that \(\mathbf{e}^{\intercal}\mathbf{x}_{0}\ll e^{\prime}\) holds with overwhelming probability for our choice of parameters. Therefore, the distributions in \(\mathsf{H}_{1}\) and \(\mathsf{H}_{2}\) are statistically indistinguishable.
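A rough way to see the magnitudes involved (a back-of-the-envelope bound using Cauchy-Schwarz, not the precise statement of Lemma 2.8): with overwhelming probability \(\|\mathbf{e}\|\leq\alpha q\sqrt{m}\) and \(\|\mathbf{x}_{0}\|\leq\frac{\sigma}{\sqrt{2}}\sqrt{m}\), so
\[|\mathbf{e}^{\intercal}\mathbf{x}_{0}|\ \leq\ \|\mathbf{e}\|\cdot\|\mathbf{x}_{0}\|\ \leq\ \frac{\alpha q\,\sigma m}{\sqrt{2}},\]
and noise flooding applies once \(\beta q\) exceeds this bound by a super-polynomial factor, which the constraint \(\beta/\alpha=2^{o(n)}\) leaves room for.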
**Claim 6.6**.: _Assuming \(\mathsf{LWE}^{m}_{n,q,\alpha q}\), the hybrids \(\mathsf{H}_{2}\) and \(\mathsf{H}_{3}\) are computationally indistinguishable,_
\[\mathsf{H}_{2}\,\approx_{c}\,\mathsf{H}_{3}.\]
Proof.: This follows from the \(\mathsf{LWE}^{m}_{n,q,\alpha q}\) assumption since the reduction can sample \(\mathbf{x}_{0}\sim D_{\mathbb{Z}^{m},\frac{\sigma}{\sqrt{2}}}\) itself and generate \(\rho_{R,\textsc{aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x }_{0}\rangle\langle\mathbf{x}_{0}|\otimes\nu_{\lambda})\) on input \(\mathbf{A}\in\mathbb{Z}^{n\times m}_{q}\) and \(\nu_{\lambda}\).
Finally, we show the following:
**Claim 6.7**.: _Assuming \(\mathsf{LWE}^{m}_{n,q,\alpha q}\), the hybrids \(\mathsf{H}_{4}\) and \(\mathsf{H}_{5}\) are computationally indistinguishable,_
\[\mathsf{H}_{4}\,\approx_{c}\,\mathsf{H}_{5}.\]
Proof.: Here, we invoke the _Gaussian-collapsing property_ in Theorem3.1 again.
Recall that \(\mathsf{H}_{0}\) and \(\mathsf{H}_{5}\) can be distinguished with probability \(\varepsilon=1/\mathrm{poly}(\lambda)\). We proved that the hybrids \(\mathsf{H}_{0}\) and \(\mathsf{H}_{3}\) are computationally indistinguishable and moreover, hybrids \(\mathsf{H}_{4}\) and \(\mathsf{H}_{5}\) are computationally indistinguishable. As a consequence, it holds that hybrids \(\mathsf{H}_{3}\) and \(\mathsf{H}_{4}\) can be distinguished with probability at least \(\varepsilon-\mathsf{negl}(\lambda)\).
We leverage this to obtain a Goldreich-Levin reduction. Consider the following distinguisher.
Note that \(r+e^{\prime}\ (\mathrm{mod}\ q)\) is uniform whenever \(r\xleftarrow{s}\mathbb{Z}_{q}\) and \(e^{\prime}\sim D_{\mathbb{Z},\beta q}\). Therefore, our previous argument shows that there exists a negligible function \(\eta\) such that:
\[\left|\Pr\left[\tilde{\mathcal{D}}(\mathbf{A},\mathbf{y},\mathbf{u},\mathbf{u}^{\intercal}\mathbf{x}_{0},\rho_{\textsc{Aux}})=1\,:\,\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m},\ \mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m}\\ \mathbf{x}_{0}\sim D_{\mathbb{Z}_{q}^{m},\frac{\sigma}{\sqrt{2}}},\ \mathbf{y}\leftarrow\mathbf{A}\cdot\mathbf{x}_{0}\ (\mathrm{mod}\ q)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}|\otimes\nu_{\lambda})\end{array}\right]-\Pr\left[\tilde{\mathcal{D}}(\mathbf{A},\mathbf{y},\mathbf{u},r,\rho_{\textsc{Aux}})=1\,:\,\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m},\ \mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m},\ r\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}\\ \mathbf{x}_{0}\sim D_{\mathbb{Z}_{q}^{m},\frac{\sigma}{\sqrt{2}}},\ \mathbf{y}\leftarrow\mathbf{A}\cdot\mathbf{x}_{0}\ (\mathrm{mod}\ q)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}|\otimes\nu_{\lambda})\end{array}\right]\right|\geq\varepsilon-\eta(\lambda).\]
Using Theorem 4.6, it follows that there exists a quantum Goldreich-Levin extractor \(\mathcal{E}\) running in time \(T(\mathcal{E})=\operatorname{poly}(\lambda,n,m,\sigma,q,1/\varepsilon)\) that outputs a short vector in \(\Lambda_{q}^{\mathcal{Y}}(\mathbf{A})\) with probability at least
\[\Pr\left[\begin{array}{c}\mathcal{E}(\mathbf{A},\mathbf{y},\rho_{\textsc{Aux}})=\mathbf{x}\\ \bigwedge\\ \mathbf{x}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{\frac{m}{2}})\end{array}\,:\,\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ \mathbf{x}_{0}\sim D_{\mathbb{Z}_{q}^{m},\frac{\sigma}{\sqrt{2}}}\\ \mathbf{y}\leftarrow\mathbf{A}\mathbf{x}_{0}\ (\mathrm{mod}\ q)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\mathbf{x}_{0}\rangle\langle\mathbf{x}_{0}|\otimes\nu_{\lambda})\end{array}\right]\geq 1-\exp(-\Omega(n)).\]
Assuming the \(\mathsf{LWE}_{n,q,\alpha q}^{m}\) assumption, we can invoke the Gaussian-collapsing property in Theorem 3.1 once again which implies that the quantum extractor \(\mathcal{E}\) satisfies
\[\Pr\left[\begin{array}{c}\mathcal{E}(\mathbf{A},\mathbf{y},\rho_{\textsc{Aux}})=\mathbf{x}\\ \bigwedge\\ \mathbf{x}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{\frac{m}{2}})\end{array}\,:\,\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ (|\psi_{\mathbf{y}}\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\nu_{\lambda})\end{array}\right]\geq 1-\mathsf{negl}(\lambda).\]
This proves the claim.
Next, we improve on the result in Theorem 6.3 and give a _simultaneous_ search-to-decision reduction with quantum auxiliary input which holds even if we additionally require that a _revocation_ procedure succeeds on a separate register.
To formalize the notion that revocation is applied on a separate register, we introduce the following procedure called \(\mathsf{IneffRevoke}\) which is defined below.
Finally, we prove the following theorem which constitutes the main technical result of this work.
**Theorem 6.8** (Simultaneous Search-to-Decision Reduction with Quantum Auxiliary Input).: _Let \(n\in\mathbb{N}\) and \(q\) be a prime modulus with \(q=2^{o(n)}\) and let \(m\geq 2n\log q\), each parameterized by the security parameter \(\lambda\in\mathbb{N}\). Let \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\) and let \(\alpha,\beta\in(0,1)\) be noise ratios with \(\beta/\alpha=2^{o(n)}\) and \(1/\alpha=2^{o(n)}\cdot\sigma\). Let \(\mathcal{A}=\{(\mathcal{A}_{\lambda,\mathbf{A}},\nu_{\lambda})\}_{\lambda\in \mathbb{N}}\) be any non-uniform quantum algorithm consisting of a family of polynomial-sized quantum circuits_
\[\left\{\mathcal{A}_{\lambda,\mathbf{A}}:\mathcal{L}(\mathcal{H}_{q}^{m}\otimes \mathcal{H}_{B_{\lambda}})\to\mathcal{L}(\mathcal{H}_{R_{\lambda}}\otimes \mathcal{H}_{\textsc{aux}_{\lambda}})\right\}_{\mathbf{A}\in\mathbb{Z}_{q}^{n \times m}}\]
_and polynomial-sized advice states \(\nu_{\lambda}\in\mathcal{D}(\mathcal{H}_{B_{\lambda}})\) which are independent of \(\mathbf{A}\). Then, assuming the quantum hardness of the \(\mathsf{LWE}_{n,q,\alpha q}^{m}\) assumption, the following holds for every \(\mathsf{QPT}\) distinguisher \(\mathcal{D}\). Suppose that there exists a function \(\varepsilon(\lambda)=1/\mathrm{poly}(\lambda)\) such that_
\[\mathsf{Pr}\left[b\leftarrow\mathsf{SimultSearchToDecisionExpt^{\mathcal{A,D }}}(1^{\lambda},b)\,:\,b\,\smash{\mathop{\leftarrow}\limits^{\mathtt{s}}}\, \{0,1\}\right]=\frac{1}{2}+\varepsilon(\lambda).\]
_Then, there exists a quantum extractor \(\mathcal{E}\) that takes as input \(\mathbf{A}\), \(\mathbf{y}\) and system Aux of the state \(\rho_{R,\textsc{aux}}\) and outputs a short vector in the coset \(\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\) in time \(\mathrm{poly}(\lambda,m,\sigma,q,1/\varepsilon)\) such that_
\[\Pr\left[\begin{array}{c}(\mathsf{IneffRevoke}(\mathbf{A},\mathbf{y},\sigma,\cdot)\otimes\mathcal{E}(\mathbf{A},\mathbf{y},\cdot))(\rho_{R,\textsc{Aux}})=(\top,\mathbf{x})\\ \bigwedge\\ \mathbf{x}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{\tfrac{m}{2}})\end{array}\,:\,\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ (|\psi_{\mathbf{y}}\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\nu_{\lambda})\end{array}\right]\]
\[\geq\Pr\left[\mathsf{IneffRevoke}(\mathbf{A},\mathbf{y},\sigma,\rho_{R})=\top\,:\,\begin{array}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ (|\psi_{\mathbf{y}}\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\nu_{\lambda})\end{array}\right]-\mathsf{negl}(\lambda).\]
Proof.: Let \(\mathcal{A}=\{(\mathcal{A}_{\lambda,\mathbf{A}},\nu_{\lambda})\}_{\lambda\in \mathbb{N}}\) be a non-uniform quantum algorithm and let \(\mathcal{D}\) be any \(\mathsf{QPT}\) distinguisher. Let \(\mathsf{simult.lwe.Dist^{\mathcal{A,D}}}\) and \(\mathsf{simult.unif.Dist^{\mathcal{A,D}}}\) be the two distributions which are defined in Figure 9 and Figure 10, respectively.
By assumption, we have that there exists a function \(\varepsilon(\lambda)=1/\mathrm{poly}(\lambda)\) such that
\[\mathsf{Pr}\left[b\leftarrow\mathsf{SimultSearchToDecisionExpt^{ \mathcal{A,D}}}(1^{\lambda},b)\,:\,b\,\smash{\mathop{\leftarrow}\limits^{ \mathtt{s}}}\,\{0,1\}\right]\] \[=\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{simult.lwe.Dist^{ \mathcal{A,D}}}(1^{\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{ simult.unif.Dist^{\mathcal{A,D}}}(1^{\lambda})]\,=\,\frac{1}{2}+\varepsilon(\lambda).\]
Recall that the distributions \(\mathsf{lwe.Dist}\) (Figure 4) and \(\mathsf{unif.Dist}\) (Figure 5) are the same as the distributions \(\mathsf{simult.lwe.Dist}\) and \(\mathsf{simult.unif.Dist}\), except that the procedure \(\mathsf{IneffRevoke}\) is not performed in the experiment; instead, the register \(R\) is simply traced out and \(\mathcal{D}\) is run on the reduced state in system \(\mathsf{Aux}\).
Figure 10: The distribution \(\mathsf{simult.unif.Dist}^{\mathcal{A,D}}\left(1^{\lambda}\right)\).
Using the fact that dropping \(\mathsf{IneffRevoke}\) can only increase the success probability, we get
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{lwe.Dist}^{\mathcal{A,D}}(1^{\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{unif.Dist}^{\mathcal{A,D}}(1^{\lambda})]\] \[\geq\ \frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{simult.lwe.Dist}^{\mathcal{A,D}}(1^{\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{simult.unif.Dist}^{\mathcal{A,D}}(1^{\lambda})]\,=\,\frac{1}{2}+\varepsilon(\lambda).\]
In other words, the \(\mathsf{QPT}\) algorithm \(\mathcal{D}\) can successfully predict whether it has received a Dual-Regev sample or a uniformly random sample. Therefore,11 we can now invoke Theorem 6.3 to argue there exists a quantum extractor \(\mathcal{E}\) that takes as input \(\mathbf{A}\), \(\mathbf{y}\) and system \(\mathsf{Aux}\) of the state \(\rho_{R,\mathsf{Aux}}\) and outputs a short vector in the coset \(\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\) in time \(\mathrm{poly}(\lambda,m,\sigma,q,1/\varepsilon)\) such that
Footnote 11: Here, we use the following fact: Suppose that \(D_{0}\) and \(D_{1}\) are two distributions. Then, any QPT algorithm can predict \(b\) when given a sample from \(D_{b}\), where \(b\xleftarrow{\$}\{0,1\}\), with probability \(\frac{1}{2}+\frac{\varepsilon}{2}\) if and only if the algorithm can distinguish between the distributions \(D_{0}\) and \(D_{1}\) with probability \(\varepsilon\).
\[\Pr\left[\begin{smallmatrix}\mathcal{E}(\mathbf{A,y},\rho_{\mathsf{Aux}})= \mathbf{x}&\mathbf{A}\xleftarrow{\$}\mathbb{Z}_{q}^{n\times m}\\ \bigwedge&(\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y})\leftarrow\mathsf{ GenGauss}(\mathbf{A},\sigma)\\ \mathbf{x}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}( \mathbf{0},\sigma\sqrt{\frac{m}{2}})&\rho_{R,\mathsf{Aux}}\leftarrow\mathcal{ A}_{\lambda,\mathbf{A}}(\left|\psi_{\mathbf{y}}\right\rangle\left\langle\psi_{\mathbf{y}} \right|\otimes\nu_{\lambda}\right\rangle]\end{smallmatrix}\right]\geq 1-\mathsf{negl}(\lambda).\]
By expanding the above probability in terms of conditional probabilities with respect to whether \(\mathsf{IneffRevoke}\) succeeds (or fails), we get that
\[\Pr\left[\begin{smallmatrix}(\mathsf{IneffRevoke}(\mathbf{A,y}, \sigma,:)\otimes\mathcal{E}(\mathbf{A,y},:))(\rho_{R,\mathsf{Aux}})=(\top, \mathbf{x})&\mathbf{A}\xleftarrow{\$}\mathbb{Z}_{q}^{n\times m}\\ \bigwedge&(\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y})\leftarrow\mathsf{ GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\mathsf{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(\left|\psi_{ \mathbf{y}}\right\rangle\left\langle\psi_{\mathbf{y}}\right|\otimes\nu_{ \lambda}\right\rangle]\end{smallmatrix}\right]\] \[\geq\Pr\left[(\mathsf{IneffRevoke}(\mathbf{A,y},\sigma,\rho_{R})= \top\ :\ \begin{smallmatrix}\mathbf{A}\xleftarrow{\$}\mathbb{Z}_{q}^{n\times m}\\ (\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y})\leftarrow\mathsf{ GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\mathsf{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(\left|\psi_{ \mathbf{y}}\right\rangle\left\langle\psi_{\mathbf{y}}\right|\otimes\nu_{ \lambda})\end{smallmatrix}\right]-\mathsf{negl}(\lambda).\]
### Distinct Pair Extraction
The following lemma allows us to analyze the probability of simultaneously extracting two distinct preimages in terms of the success probability of revocation and the success probability of extracting a preimage from the adversary's state.
**Lemma 6.9** (Projection onto Distinct Pairs).: _Let \(\rho\in\mathcal{D}(\mathcal{H}_{X}\otimes\mathcal{H}_{Y})\) be any density matrix, for some Hilbert spaces \(\mathcal{H}_{X}\) and \(\mathcal{H}_{Y}\). Let \(|\psi\rangle=\sum_{x\in\mathcal{S}}\alpha_{x}\,|x\rangle\in\mathcal{H}_{X}\) be any state supported on a subset \(\mathcal{S}\subseteq\mathcal{X}\), and let \(\mathbf{\Pi}=|\psi\rangle\langle\psi|\) denote its associated projection. Let \(\mathbf{\Pi}_{\mathcal{S}}\) be the projector onto \(\mathcal{S}\) with_
\[\mathbf{\Pi}_{\mathcal{S}}=\sum_{x\in\mathcal{S}}|x\rangle\langle x|.\]
_Let \(\mathcal{E}:\mathcal{L}(\mathcal{H}_{Y})\to\mathcal{L}(\mathcal{H}_{X^{ \prime}})\) be any \(\mathsf{CPTP}\) map of the form_
\[\mathcal{E}_{Y\to X^{\prime}}(\sigma)=\mathrm{Tr}_{E}\left[V_{Y\to X^{ \prime}E}\,\sigma\,V_{Y\to X^{\prime}E}^{\dagger}\right],\quad\forall\sigma\in \mathcal{D}(\mathcal{H}_{Y}),\]
_for some unitary \(V_{Y\to X^{\prime}E}\). Consider the projector \(\mathbf{\Gamma}\) given by_
\[\mathbf{\Gamma}=\sum_{x,x^{\prime}\in\mathcal{S}:x\neq x^{\prime}}|x\rangle \langle x|_{X}\otimes V_{Y\to X^{\prime}E}^{\dagger}(|x^{\prime}\rangle \langle x^{\prime}|_{X^{\prime}}\otimes I_{E})V_{Y\to X^{\prime}E}.\]
_Let \(\rho_{X}=\operatorname{Tr}_{Y}[\rho_{XY}]\) denote the reduced state. Then, it holds that_
\[\operatorname{Tr}[\boldsymbol{\Gamma}\rho]\,\geq\,\left(1-\max_{x\in\mathcal{S }}|\alpha_{x}|^{2}\right)\cdot\operatorname{Tr}[\boldsymbol{\Pi}\rho_{X}] \cdot\operatorname{Tr}\left[\boldsymbol{\Pi}_{\mathcal{S}}\,\mathcal{E}_{Y \to X^{\prime}}(\sigma)\right],\]
_where \(\sigma=\operatorname{Tr}[(\boldsymbol{\Pi}\otimes I)\rho]^{-1}\cdot \operatorname{Tr}_{X}[(\boldsymbol{\Pi}\otimes I)\rho]\) is a reduced state in system \(Y\)._
Proof.: Because the order in which we apply \(\boldsymbol{\Gamma}\) and \((\boldsymbol{\Pi}\otimes I)\) does not matter, we have the inequality
\[\operatorname{Tr}\left[\boldsymbol{\Gamma}\rho\right]\geq\operatorname{Tr} \left[(\boldsymbol{\Pi}\otimes I)\,\boldsymbol{\Gamma}\rho\right]= \operatorname{Tr}\left[\boldsymbol{\Gamma}(\boldsymbol{\Pi}\otimes I)\rho \right]. \tag{1}\]
Notice also that \((\boldsymbol{\Pi}\otimes I)\rho(\boldsymbol{\Pi}\otimes I)\) lies in the image of \((\boldsymbol{\Pi}\otimes I)\) with \(\boldsymbol{\Pi}=|\psi\rangle\langle\psi|\), and thus
\[(\boldsymbol{\Pi}\otimes I)\rho(\boldsymbol{\Pi}\otimes I)=\operatorname{Tr }[(\boldsymbol{\Pi}\otimes I)\rho]\cdot(|\psi\rangle\langle\psi|\otimes \sigma), \tag{2}\]
for some \(\sigma\in\mathcal{D}(\mathcal{H}_{Y})\). Putting everything together, we get that
\[\begin{aligned}\operatorname{Tr}\left[\boldsymbol{\Gamma}\rho\right]&\geq\operatorname{Tr}\left[\boldsymbol{\Gamma}(\boldsymbol{\Pi}\otimes I)\rho\right]&&\text{(using inequality (1))}\\ &=\operatorname{Tr}\left[\boldsymbol{\Gamma}(\boldsymbol{\Pi}\otimes I)\rho(\boldsymbol{\Pi}\otimes I)\boldsymbol{\Gamma}\right]&&\text{(since $\boldsymbol{\Gamma}(\boldsymbol{\Pi}\otimes I)$ is a projector)}\\ &=\operatorname{Tr}[(\boldsymbol{\Pi}\otimes I)\rho]\cdot\operatorname{Tr}\left[\boldsymbol{\Gamma}\left(|\psi\rangle\langle\psi|\otimes\sigma\right)\boldsymbol{\Gamma}\right]&&\text{(using equation (2))}\\ &=\operatorname{Tr}[(\boldsymbol{\Pi}\otimes I)\rho]\cdot\operatorname{Tr}\left[\boldsymbol{\Gamma}\left(|\psi\rangle\langle\psi|\otimes\sigma\right)\right]&&\text{(since $\boldsymbol{\Gamma}$ is a projector)}\\ &=\operatorname{Tr}[\boldsymbol{\Pi}\rho_{X}]\cdot\operatorname{Tr}\left[\sum_{x,x^{\prime}\in\mathcal{S}:x\neq x^{\prime}}|x\rangle\langle x|_{X}\otimes V_{Y\to X^{\prime}E}^{\dagger}\left(|x^{\prime}\rangle\langle x^{\prime}|_{X^{\prime}}\otimes I_{E}\right)V_{Y\to X^{\prime}E}\left(|\psi\rangle\langle\psi|\otimes\sigma\right)\right]\\ &=\operatorname{Tr}[\boldsymbol{\Pi}\rho_{X}]\cdot\sum_{x^{\prime}\in\mathcal{S}}\left(\sum_{x\in\mathcal{S}:x\neq x^{\prime}}|\langle x|\psi\rangle|^{2}\right)\operatorname{Tr}\left[V_{Y\to X^{\prime}E}^{\dagger}(|x^{\prime}\rangle\langle x^{\prime}|_{X^{\prime}}\otimes I_{E})V_{Y\to X^{\prime}E}\,\sigma\right]\\ &=\operatorname{Tr}[\boldsymbol{\Pi}\rho_{X}]\cdot\sum_{x^{\prime}\in\mathcal{S}}\left(1-|\alpha_{x^{\prime}}|^{2}\right)\operatorname{Tr}\left[(|x^{\prime}\rangle\langle x^{\prime}|_{X^{\prime}}\otimes I_{E})V_{Y\to X^{\prime}E}\,\sigma\,V_{Y\to X^{\prime}E}^{\dagger}\right]\\ &\geq\operatorname{Tr}[\boldsymbol{\Pi}\rho_{X}]\cdot\left(1-\max_{x\in\mathcal{S}}|\alpha_{x}|^{2}\right)\cdot\sum_{x^{\prime}\in\mathcal{S}}\operatorname{Tr}\left[(|x^{\prime}\rangle\langle x^{\prime}|_{X^{\prime}}\otimes I_{E})V_{Y\to X^{\prime}E}\,\sigma\,V_{Y\to X^{\prime}E}^{\dagger}\right]\\ &=\operatorname{Tr}[\boldsymbol{\Pi}\rho_{X}]\cdot\left(1-\max_{x\in\mathcal{S}}|\alpha_{x}|^{2}\right)\cdot\sum_{x^{\prime}\in\mathcal{S}}\operatorname{Tr}\left[|x^{\prime}\rangle\langle x^{\prime}|_{X^{\prime}}\operatorname{Tr}_{E}\left[V_{Y\to X^{\prime}E}\,\sigma\,V_{Y\to X^{\prime}E}^{\dagger}\right]\right]\\ &=\operatorname{Tr}[\boldsymbol{\Pi}\rho_{X}]\cdot\left(1-\max_{x\in\mathcal{S}}|\alpha_{x}|^{2}\right)\cdot\operatorname{Tr}\left[\boldsymbol{\Pi}_{\mathcal{S}}\,\mathcal{E}_{Y\to X^{\prime}}(\sigma)\right].\end{aligned}\]
This proves the claim.
### Proof of Theorem 6.1
Proof.: Let \(\mathcal{A}\) be a QPT adversary and suppose that
\[\Pr\left[b\leftarrow\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\ :\ b\stackrel{{\$}}{{\leftarrow}}\{0,1\}\right]=\frac{1}{2}+\varepsilon(\lambda),\]
for some \(\varepsilon(\lambda)\) with respect to \(\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\) in Figure 11. We show that \(\varepsilon(\lambda)\) is negligible.
\[\underline{\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)}\text{:}\]
1. The challenger samples \((\mathbf{A}\in\mathbb{Z}_{q}^{n\times m},\mathsf{td}_{\mathbf{A}})\leftarrow \mathsf{GenTrap}(1^{n},1^{m},q)\) and generates \[|\psi_{\mathbf{y}}\rangle\ =\ \sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m} \\ \mathbf{Ax}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{x})\ |\mathbf{x}\rangle\,,\] for some \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\), by running \((|\psi_{\mathbf{y}}\rangle\,,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\). The challenger lets \(\mathsf{MSK}\leftarrow\mathsf{td}_{\mathbf{A}}\) and \(\mathsf{PK}\leftarrow(\mathbf{A},\mathbf{y})\) and sends \(\rho_{\mathsf{SK}}\leftarrow|\psi_{\mathbf{y}}\rangle\) to the adversary \(\mathcal{A}\).
2. \(\mathcal{A}\) generates a (possibly entangled) bipartite state \(\rho_{R,\textsc{aux}}\) in systems \(\mathcal{H}_{R}\otimes\mathcal{H}_{\textsc{aux}}\) with \(\mathcal{H}_{R}=\mathcal{H}_{q}^{m}\), returns system \(R\) and holds onto the auxiliary system Aux.
3. The challenger runs \(\mathsf{Revoke}(\mathsf{PK},\mathsf{MSK},\rho_{R})\), where \(\rho_{R}\) is the reduced state in system \(R\). If the outcome is \(\top\), the game continues. Otherwise, output \(\mathsf{Invalid}\).
4. \(\mathcal{A}\) submits a plaintext bit \(\mu\in\{0,1\}\).
5. The challenger does the following depending on \(b\in\{0,1\}\): * if \(b=0\): the challenger samples a vector \(\mathbf{s}\xleftarrow{\$}\mathbb{Z}_{q}^{n}\) and errors \(\mathbf{e}\sim D_{\mathbb{Z}^{m},\,\alpha q}\) and \(e^{\prime}\sim D_{\mathbb{Z},\,\beta q}\), and sends a Dual-Regev encryption of \(\mu\in\{0,1\}\) to \(\mathcal{A}\): \[(\mathbf{A}^{\intercal}\mathbf{s}+\mathbf{e},\ \mathbf{y}^{\intercal}\mathbf{s}+e^{\prime}+\mu\cdot\lfloor q/2\rfloor)\in\mathbb{Z}_{q}^{m}\times\mathbb{Z}_{q}.\] * if \(b=1\): the challenger samples \(\mathbf{u}\xleftarrow{\$}\mathbb{Z}_{q}^{m}\) and \(r\xleftarrow{\$}\mathbb{Z}_{q}\) uniformly at random and sends the following pair to \(\mathcal{A}\): \[(\mathbf{u},r)\in\mathbb{Z}_{q}^{m}\times\mathbb{Z}_{q}.\]
6. \(\mathcal{A}\) returns a bit \(b^{\prime}\in\{0,1\}\).
Figure 11: The key-revocable security experiment according to Definition 5.3.
Suppose for the sake of contradiction that \(\varepsilon(\lambda)\) is non-negligible. We show that we can use \(\mathcal{A}\) to break the \(\mathsf{SIS}^{m}_{n,q,\sigma\sqrt{2m}}\) problem. Without loss of generality, we assume that \(\mathcal{A}\) submits the plaintext \(\mu=0\). By the assumption that \(\varepsilon(\lambda)\geq 1/\mathrm{poly}(\lambda)\), it follows from Theorem 6.8 that there exists a quantum Goldreich-Levin extractor \(\mathcal{E}\) that takes as input \(\mathbf{A}\), \(\mathbf{y}\) and system Aux of the state \(\rho_{R,\textsc{Aux}}\) and outputs a short vector in the coset \(\Lambda^{\mathbf{y}}_{q}(\mathbf{A})\) in time \(\mathrm{poly}(\lambda,m,\sigma,q,1/\varepsilon)\) such that
\[\Pr\left[\begin{smallmatrix}(\mathsf{Revoke}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\cdot)\otimes\mathcal{E}(\mathbf{A},\mathbf{y},\cdot))(\rho_{R,\textsc{Aux}})=(\top,\mathbf{x})\\ \bigwedge\\ \mathbf{x}\in\Lambda^{\mathbf{y}}_{q}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{\tfrac{m}{2}})\end{smallmatrix}\ :\ \begin{smallmatrix}(\mathbf{A},\mathsf{td}_{\mathbf{A}})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\nu)\end{smallmatrix}\right]\geq 1/\mathrm{poly}(\lambda).\]
Here, we rely on the correctness of \(\mathsf{GenTrap}\) in Theorem 2.12 and \(\mathsf{QSampGauss}\) in Theorem 3.3, as well as the fact that revocation must necessarily succeed with inverse-polynomial probability.
Consider the following procedure in Algorithm 4.
```
Input: Matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\). Output: Vector \(\mathbf{x}\in\mathbb{Z}^{m}\).
1 Generate a Gaussian state \(\left(\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y}\right)\leftarrow\mathsf{ GenGauss}(\mathbf{A},\sigma)\) with \[\left|\psi_{\mathbf{y}}\right\rangle\ =\ \sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m}\\ \mathbf{Ax}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\ \ \rho_{\sigma}(\mathbf{x})\ \left|\mathbf{x}\right\rangle\] for some vector \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\).
2 Run \(\mathcal{A}\) to generate a bipartite state \(\rho_{R\textsc{Aux}}\) in systems \(\mathcal{H}_{R}\otimes\mathcal{H}_{\textsc{Aux}}\) with \(\mathcal{H}_{R}=\mathcal{H}_{q}^{m}\).
3 Measure system \(R\) in the computational basis, and let \(\mathbf{x}_{0}\in\mathbb{Z}_{q}^{m}\) denote the outcome.
4 Run the quantum Goldreich-Levin extractor \(\mathcal{E}(\mathbf{A},\mathbf{y},\rho_{\textsc{Aux}})\) from Theorem 6.8, where \(\rho_{\textsc{Aux}}\) is the reduced state in system \(\mathcal{H}_{\textsc{Aux}}\), and let \(\mathbf{x}_{1}\in\mathbb{Z}_{q}^{m}\) denote the outcome.
5 Output the vector \(\mathbf{x}=\mathbf{x}_{1}-\mathbf{x}_{0}\).
```
**Algorithm 4**\(\mathsf{SIS}_{\mathsf{Solver}}(\mathbf{A})\)
To conclude the proof, we show that \(\mathsf{SIS}_{\mathsf{Solver}}(\mathbf{A})\) in Algorithm 4 breaks the \(\mathsf{SIS}^{m}_{n,q,\sigma\sqrt{2m}}\) problem whenever \(\varepsilon(\lambda)=1/\mathrm{poly}(\lambda)\). In order to guarantee that \(\mathsf{SIS}_{\mathsf{Solver}}(\mathbf{A})\) is successful, we use the distinct pair extraction result of Lemma 6.9. This allows us to analyze the probability of simultaneously extracting two distinct short pre-images \(\mathbf{x}_{0}\neq\mathbf{x}_{1}\) such that \(\mathbf{Ax}_{0}=\mathbf{y}=\mathbf{Ax}_{1}\ (\mathrm{mod}\ q)\) - both in terms of the success probability of revocation and the success probability of extracting a pre-image from the adversary's state \(\rho_{\textsc{Aux}}\) in system \(\mathcal{H}_{\textsc{Aux}}\). Assuming that \(\mathbf{x}_{0},\mathbf{x}_{1}\) are distinct short pre-images such that \(\left\|\mathbf{x}_{0}\right\|\leq\sigma\sqrt{\frac{m}{2}}\) and \(\left\|\mathbf{x}_{1}\right\|\leq\sigma\sqrt{\frac{m}{2}}\), it then follows that the vector \(\mathbf{x}=\mathbf{x}_{1}-\mathbf{x}_{0}\) output by \(\mathsf{SIS}_{\mathsf{Solver}}(\mathbf{A})\) has norm at most \(\sigma\sqrt{2m}\), and thus yields a solution to \(\mathsf{SIS}^{m}_{n,q,\sigma\sqrt{2m}}\).
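The final step is elementary linear algebra: two distinct short pre-images of the same syndrome differ by a short nonzero kernel vector. The following toy numerical sketch illustrates exactly this step (with a planted kernel vector so that it runs instantly; in the reduction, \(\mathbf{x}_{0}\) comes from measuring the returned register and \(\mathbf{x}_{1}\) from the Goldreich-Levin extractor). All parameters below are illustrative only and unrelated to the scheme's actual parameters.

```python
import numpy as np

# Toy sketch: if x0 != x1 are two short pre-images of the same syndrome y,
# i.e. A @ x0 = y = A @ x1 (mod q), then x = x1 - x0 is a nonzero short vector
# with A @ x = 0 (mod q), i.e. a solution to SIS. We plant such a pair here.
rng = np.random.default_rng(1)
n, m, q = 4, 12, 97                      # illustrative parameters only

# Plant a short kernel vector v: choose the last column of A so that A @ v = 0.
v = rng.integers(-1, 2, size=m)
v[-1] = 1                                # make the last entry nonzero
A = rng.integers(0, q, size=(n, m))
A[:, -1] = (-A[:, :-1] @ v[:-1]) % q     # now A @ v = 0 (mod q)

x0 = rng.integers(-2, 3, size=m)         # a short pre-image of y = A @ x0
x1 = x0 + v                              # a second, distinct short pre-image
y = A @ x0 % q

assert np.array_equal(A @ x1 % q, y)     # both map to the same syndrome
x = x1 - x0                              # the extracted SIS solution
assert x.any() and not (A @ x % q).any() # nonzero and in the kernel of A mod q
print("||x|| =", np.linalg.norm(x), "<= ||x0|| + ||x1|| =",
      np.linalg.norm(x0) + np.linalg.norm(x1))
```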
We remark that the state \(\left|\psi_{\mathbf{y}}\right\rangle\) prepared by Algorithm 4 is not normalized for ease of notation. Note that the tail bound in Lemma 2.6 implies that (the normalized variant of) \(\left|\psi_{\mathbf{y}}\right\rangle\) is within negligible trace distance of the state with support \(\{\mathbf{x}\in\mathbb{Z}_{q}^{m}:\left\|\mathbf{x}\right\|\leq\sigma\sqrt{ \frac{m}{2}}\}\). Therefore, for the sake
of Lemma 6.9, we can assume that \(|\psi_{\mathbf{y}}\rangle\) is a normalized state of the form
\[|\psi_{\mathbf{y}}\rangle=\left(\sum_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z} _{q}^{m},\|\mathbf{z}\|\leq\sigma\sqrt{\frac{m}{2}}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{ \frac{\sigma}{\sqrt{2}}}(\mathbf{z})\right)^{-\frac{1}{2}}\sum_{\begin{subarray} {c}\mathbf{x}\in\mathbb{Z}_{q}^{m},\|\mathbf{x}\|\leq\sigma\sqrt{\frac{m}{2}} \\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{x}) \left|\mathbf{x}\right\rangle.\]
Before we analyze Algorithm 4, we first make two technical remarks. First, since \(\sigma\geq\omega(\sqrt{\log m})\), it follows from Lemma 2.9 that, for any full-rank \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) and for any \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\), we have
\[\max_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m},\|\mathbf{x}\|\leq \sigma\sqrt{\frac{m}{2}}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\left\{\frac{ \rho_{\frac{\sigma}{\sqrt{2}}}(\mathbf{x})}{\sum_{\begin{subarray}{c} \mathbf{z}\in\mathbb{Z}_{q}^{m},\|\mathbf{z}\|\leq\sigma\sqrt{\frac{m}{2}}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{ \frac{\sigma}{\sqrt{2}}}(\mathbf{z})}\right\}\ \leq\ 2^{-\Omega(m)}.\]
Second, we can replace the procedure \(\mathsf{Revoke}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\rho_{R})\) by an (inefficient) projective measurement \(\{|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|,I-|\psi_{\mathbf{y}} \rangle\langle\psi_{\mathbf{y}}|\}\), since they produce statistically close outcomes. This follows from the fact that \(\mathsf{Revoke}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\rho_{R})\) applies the procedure \(\mathsf{QSampGauss}\) in Algorithm 2 as a subroutine, which is correct with overwhelming probability according to Theorem 3.3.
Let us now analyze the success probability of Algorithm 4. Putting everything together, we get
\[\begin{aligned}
&\Pr\left[\begin{matrix}\mathbf{x}\leftarrow\mathsf{SIS}_{\mathsf{Solver}}(\mathbf{A})\\ \bigwedge\\ \mathbf{x}\neq\mathbf{0}\ \text{s.t.}\ \|\mathbf{x}\|\leq\sigma\sqrt{2m}\end{matrix}\ :\ \mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\right]\\
&\geq\left(1-\max_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m},\|\mathbf{x}\|\leq\sigma\sqrt{\frac{m}{2}}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\left\{\frac{\rho_{\frac{\sigma}{\sqrt{2}}}(\mathbf{x})}{\sum_{\begin{subarray}{c}\mathbf{z}\in\mathbb{Z}_{q}^{m},\|\mathbf{z}\|\leq\sigma\sqrt{\frac{m}{2}}\\ \mathbf{A}\mathbf{z}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\frac{\sigma}{\sqrt{2}}}(\mathbf{z})}\right\}\right)\\
&\qquad\cdot\Pr\left[\mathsf{IneffRevoke}(\mathbf{A},\mathbf{y},\rho_{R})=\top\ :\ \begin{subarray}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\ \text{s.t.}\ \mathbf{A}\ \text{is full-rank}\\ (|\psi_{\mathbf{y}}\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\omega_{\lambda})\end{subarray}\right]\\
&\qquad\cdot\Pr\left[\mathcal{E}\big{(}\mathbf{A},\mathbf{y},\rho_{\textsc{Aux}}\big{)}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{\tfrac{m}{2}})\ :\ \begin{subarray}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\ \text{s.t.}\ \mathbf{A}\ \text{is full-rank}\\ (|\psi_{\mathbf{y}}\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\omega_{\lambda})\end{subarray}\right]\\
&\geq\left(1-2^{-\Omega(m)}\right)\cdot\Pr\left[\begin{matrix}(\mathsf{IneffRevoke}(\mathbf{A},\mathbf{y},\cdot)\otimes\mathcal{E}(\mathbf{A},\mathbf{y},\cdot))(\rho_{R,\textsc{Aux}})=(\top,\mathbf{x}_{1})\\ \bigwedge\\ \mathbf{x}_{1}\in\Lambda_{q}^{\mathbf{y}}(\mathbf{A})\cap\mathcal{B}^{m}(\mathbf{0},\sigma\sqrt{\tfrac{m}{2}})\end{matrix}\ :\ \begin{subarray}{c}\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\\ (|\psi_{\mathbf{y}}\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\\ \rho_{R,\textsc{Aux}}\leftarrow\mathcal{A}_{\lambda,\mathbf{A}}(|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\otimes\omega_{\lambda})\end{subarray}\right]\\
&\geq\left(1-2^{-\Omega(m)}\right)\cdot\left(1/\mathrm{poly}(\lambda)-q^{-n}\right)\ \geq\ 1/\mathrm{poly}(\lambda).
\end{aligned}\]
In the last line, we applied the simultaneous search-to-decision reduction from Theorem 6.8 and Lemma 2.4. Therefore, \(\mathsf{SIS}_{\mathsf{Solver}}(\mathbf{A})\) in Algorithm 4 runs in time \(\mathrm{poly}(q,1/\varepsilon)\) and solves \(\mathsf{SIS}_{n,q,\sigma\sqrt{2m}}^{m}\) whenever \(\varepsilon=1/\mathrm{poly}(\lambda)\). Hence, we conclude that \(\varepsilon(\lambda)\) must be negligible.
## 7 Key-Revocable Fully Homomorphic Encryption
In this section, we describe our key-revocable (leveled) fully homomorphic encryption scheme from \(\mathsf{LWE}\), which is based on the so-called \(\mathsf{DualGSW}\) scheme used by Mahadev [14], itself a variant of the homomorphic encryption scheme by Gentry, Sahai and Waters [11].
Let \(\lambda\in\mathbb{N}\) be the security parameter. Suppose we would like to evaluate \(L\)-depth circuits consisting of \(\mathsf{NAND}\) gates. We choose \(n(\lambda,L)\gg L\) and a prime \(q=2^{o(n)}\). Then, for integer parameters \(m\geq 2n\log q\) and \(N=(m+1)\cdot\lceil\log q\rceil\), we let \(\mathbf{I}\) be the \((m+1)\times(m+1)\) identity matrix and let \(\mathbf{G}=[\mathbf{I}\,\|\,\mathbf{2I}\,\|\,\ldots\,\|\,2^{\lceil\log q\rceil- 1}\mathbf{I}]\in\mathbb{Z}_{q}^{(m+1)\times N}\) denote the so-called _gadget matrix_ which converts a binary representation of a vector back to its original vector representation over the field \(\mathbb{Z}_{q}\). Note that the associated (non-linear) inverse operation \(\mathbf{G}^{-1}\) converts vectors in \(\mathbb{Z}_{q}^{m+1}\) to their binary representation in \(\left\{0,1\right\}^{N}\). In other words, we have that \(\mathbf{G}\circ\mathbf{G}^{-1}\) acts as the identity operator.
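As a quick sanity check of the gadget matrix and its inverse, the following sketch (tiny, illustrative dimensions that are unrelated to the parameters above) verifies that \(\mathbf{G}\circ\mathbf{G}^{-1}\) acts as the identity on \(\mathbb{Z}_{q}\)-vectors:

```python
import numpy as np

q = 17                                   # toy modulus (illustration only)
k = int(np.ceil(np.log2(q)))             # number of bits per Z_q entry
dim = 4                                  # stands in for m + 1
N = dim * k

# G = [I | 2I | ... | 2^(k-1) I]: maps a length-N binary column vector back to
# the Z_q vector it represents.
G = np.hstack([(1 << j) * np.eye(dim, dtype=int) for j in range(k)]) % q

def g_inv(v):
    """Binary decomposition G^{-1}: Z_q^dim -> {0,1}^N with G @ g_inv(v) = v (mod q)."""
    bits = [((v >> j) & 1) for j in range(k)]    # least-significant bit first
    return np.concatenate(bits)

v = np.random.default_rng(2).integers(0, q, size=dim)
assert np.array_equal(G @ g_inv(v) % q, v)       # G o G^{-1} is the identity
```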
### Construction
**Construction 3** (Key-Revocable \(\mathsf{DualGSW}\) encryption).: _Let \(\lambda\in\mathbb{N}\) be the security parameter. The scheme \(\mathsf{RevDualGSW}=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Eval}, \mathsf{Revoke})\) consists of the following \(\mathsf{QPT}\) algorithms:_
\(\mathsf{KeyGen}(1^{\lambda},1^{L})\rightarrow(\mathsf{PK},\rho_{\mathsf{SK}})\): sample a pair \((\mathbf{A}\in\mathbb{Z}_{q}^{n\times m},\mathsf{td}_{\mathbf{A}})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\) and generate a Gaussian superposition \((\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\) with
\[\left|\psi_{\mathbf{y}}\right\rangle\ =\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m}\\ \mathbf{Ax}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\rho_{\sigma}(\mathbf{x})\ \left|\mathbf{x}\right\rangle,\]
for some \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\). Output \(\mathsf{PK}=(\mathbf{A},\mathbf{y})\), \(\rho_{\mathsf{SK}}=\left|\psi_{\mathbf{y}}\right\rangle\) and \(\mathsf{MSK}=\mathsf{td}_{\mathbf{A}}\).
\(\mathsf{Enc}(\mathsf{PK},\mu)\): to encrypt \(\mu\in\{0,1\}\), parse \((\mathbf{A},\mathbf{y})\leftarrow\mathsf{PK}\), sample a random matrix \(\mathbf{S}\xleftarrow{\$}\mathbb{Z}_{q}^{n\times N}\) and errors \(\mathbf{E}\sim D_{\mathbb{Z}^{m\times N},\,\alpha q}\) and row vector \(\mathbf{e}\sim D_{\mathbb{Z}^{N},\,\beta q}\), and output the ciphertext
\[\mathsf{CT}=\left[\begin{matrix}\mathbf{A}^{\intercal}\mathbf{S}+\mathbf{E}\\ \mathbf{y}^{\intercal}\mathbf{S}+\mathbf{e}\end{matrix}\right]+\mu\cdot\mathbf{G}\ \in\mathbb{Z}_{q}^{(m+1)\times N}.\]
\[\underline{\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)}\text{:}\]
1. The challenger samples \((\mathbf{A}\in\mathbb{Z}_{q}^{n\times m},\mathsf{td}_{\mathbf{A}})\leftarrow \mathsf{Gen}\mathsf{Trap}(1^{n},1^{m},q)\) and generates \[|\psi_{\mathbf{y}}\rangle\ =\ \sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m} \\ \mathbf{Ax}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\ \rho_{\sigma}(\mathbf{x})\ | \mathbf{x}\rangle\,,\] for some \(\mathbf{y}\in\mathbb{Z}_{q}^{n}\), by running \((|\psi_{\mathbf{y}}\rangle\,,\mathbf{y})\leftarrow\mathsf{Gen}\mathsf{Gauss}( \mathbf{A},\sigma)\). The challenger lets \(\mathsf{MSK}\leftarrow\mathsf{td}_{\mathbf{A}}\) and \(\mathsf{PK}\leftarrow(\mathbf{A},\mathbf{y})\) and sends \(\rho_{\mathsf{SK}}\leftarrow|\psi_{\mathbf{y}}\rangle\) to the adversary \(\mathcal{A}\).
2. \(\mathcal{A}\) generates a (possibly entangled) bipartite state \(\rho_{R,\textsc{aux}}\) in systems \(\mathcal{H}_{R}\otimes\mathcal{H}_{\textsc{aux}}\) with \(\mathcal{H}_{R}=\mathcal{H}_{q}^{m}\), returns system \(R\) and holds onto the auxiliary system Aux.
3. The challenger runs \(\mathsf{Revoke}(\mathsf{MSK},\mathsf{PK},\rho_{R})\), where \(\rho_{R}\) is the reduced state in system \(R\). If the outcome is \(\top\), the game continues. Otherwise, output \(\mathsf{Invalid}\).
4. \(\mathcal{A}\) submits a plaintext bit \(\mu\in\{0,1\}\).
5. The challenger does the following depending on \(b\in\{0,1\}\): * if \(b=0\): The challenger samples a random matrix \(\mathbf{S}\xleftarrow{s}\mathbb{Z}_{q}^{n\times N}\) and errors \(\mathbf{E}\sim D_{\mathbb{Z}^{m\times N},\,\alpha q}\) and row vector \(\mathbf{e}\sim D_{\mathbb{Z}^{N},\,\beta q}\), and outputs the ciphertext \[\mathsf{CT}=\left[\begin{subarray}{c}\mathbf{A}^{\intercal}\mathbf{S}+ \mathbf{E}\\ \mathbf{y}^{\intercal}\mathbf{S}+\mathbf{e}\end{subarray}\right]+\mu\cdot \mathbf{G}\ \in\mathbb{Z}_{q}^{(m+1)\times N}\text{.}\] * if \(b=1\): the challenger samples a matrix \(\mathbf{U}\xleftarrow{s}\mathbb{Z}_{q}^{m\times N}\) and row vector \(r\xleftarrow{s}\mathbb{Z}_{q}^{N}\) uniformly at random, and sends the following to \(\mathcal{A}\): \[\left[\begin{subarray}{c}\mathbf{U}\\ \mathbf{r}\end{subarray}\right]\ \in\mathbb{Z}_{q}^{(m+1)\times N}\text{.}\]
6. \(\mathcal{A}\) returns a bit \(b^{\prime}\in\{0,1\}\).
### Proof of Theorem 7.1
**Theorem 7.1**.: _Let \(L\) be an upper bound on the \(\mathsf{NAND}\)-depth of the circuit which is to be evaluated. Let \(n\in\mathbb{N}\) and \(q\) be a prime modulus with \(n=n(\lambda,L)\gg L\), \(q=2^{o(n)}\) and \(m\geq 2n\log q\), each parameterized by the security parameter \(\lambda\in\mathbb{N}\). Let \(N=(m+1)\cdot\lceil\log q\rceil\) be an integer. Let \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\) and let \(\alpha,\beta\in(0,1)\) be parameters such that \(\beta/\alpha=2^{o(n)}\) and \(1/\alpha=2^{o(n)}\cdot\sigma\). Then, assuming the subexponential hardness of the \(\mathsf{LWE}_{n,q,\alpha q}^{m}\) and \(\mathsf{SIS}_{n,q,\sigma\sqrt{2m}}^{m}\) problems, the scheme \(\mathsf{RevDualGSW}=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{ Eval},\mathsf{Revoke})\) in Construction 3 is a secure key-revocable (leveled)
Figure 12: The key-revocable security experiment according to Definition 5.3.
fully homomorphic encryption scheme according to Definition 5.3._
Proof.: Let \(\mathcal{A}\) be a \(\mathsf{QPT}\) adversary and suppose that
\[\Pr\left[b\leftarrow\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\ :\ b\stackrel{{\$}}{{ \leftarrow}}\{0,1\}\right]=\frac{1}{2}+\epsilon(\lambda),\]
for some \(\varepsilon(\lambda)\) with respect to \(\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\) in Figure 12. Note that the \(\mathsf{RevDualGSW}\) ciphertext can (up to an additive shift) be thought of as a column-wise concatenation of \(N\)-many independent ciphertexts of our key-revocable Dual-Regev scheme in Construction 2. Therefore, we can invoke Claim 5.8 and Theorem 6.1 in order to argue that \(\varepsilon(\lambda)\) is at most negligible.
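To spell out the column-wise view used in this argument (this is just a restatement of the observation above): writing \(\mathbf{s}_{i}\), \(\mathbf{e}_{i}\) and \(e_{i}^{\prime}\) for the \(i\)-th columns of \(\mathbf{S}\), \(\mathbf{E}\) and the \(i\)-th entry of \(\mathbf{e}\), respectively, the \(i\)-th column of the \(\mathsf{RevDualGSW}\) ciphertext reads
\[\mathsf{CT}_{\cdot,i}=\begin{pmatrix}\mathbf{A}^{\intercal}\mathbf{s}_{i}+\mathbf{e}_{i}\\ \mathbf{y}^{\intercal}\mathbf{s}_{i}+e_{i}^{\prime}\end{pmatrix}+\mu\cdot\mathbf{G}_{\cdot,i}\ \in\mathbb{Z}_{q}^{m+1},\qquad i\in[N],\]
so that, up to the additive shift \(\mu\cdot\mathbf{G}_{\cdot,i}\), each column is a key-revocable Dual-Regev ciphertext under the same public key \((\mathbf{A},\mathbf{y})\) with fresh, independent randomness \((\mathbf{s}_{i},\mathbf{e}_{i},e_{i}^{\prime})\).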
## 8 Revocable Pseudorandom Functions
In this section, we introduce the notion of _key-revocable_ (or simply, _revocable_) pseudorandom functions and present the first construction from the (quantum) hardness of learning with errors.
### Definition
Let us first recall the traditional notion of \(\mathsf{PRF}\) security [13], defined as follows.
**Definition 8.1** (Pseudorandom Function).: _Let \(\lambda\in\mathbb{N}\) and \(\kappa(\lambda),\ell(\lambda)\) and \(\ell^{\prime}(\lambda)\) be polynomials. A (post-quantum) pseudorandom function \((\mathsf{pqPRF})\) is a pair \((\mathsf{Gen},\mathsf{PRF})\) of \(\mathsf{PPT}\) algorithms given by_
* \(\mathsf{Gen}(1^{\lambda}):\) _On input_ \(1^{\lambda}\)_, it outputs a key_ \(k\in\{0,1\}^{\kappa}\)_._
* \(\mathsf{PRF}(k,x):\) _On input_ \(k\in\{0,1\}^{\kappa}\) _and_ \(x\in\{0,1\}^{\ell}\)_, it outputs a value_ \(y\in\{0,1\}^{\ell^{\prime}}\)_._
_with the property that, for any \(\mathsf{QPT}\) distinguisher \(\mathcal{D}\), we have_
\[\left|\Pr\left[\mathcal{D}^{\mathsf{PRF}(k,\cdot)}(1^{\lambda})=1\ :\ k\leftarrow\mathsf{Gen}(1^{\lambda})\right]-\Pr\left[\mathcal{D}^{F(\cdot)}(1^{\lambda})=1\ :\ F\stackrel{{\$}}{{\leftarrow}}\mathcal{F}^{\ell,\ell^{\prime}}\right]\right|\leq\mathsf{negl}(\lambda),\]
_where \(\mathcal{F}^{\ell,\ell^{\prime}}\) is the set of all functions with domain \(\{0,1\}^{\ell}\) and range \(\{0,1\}^{\ell^{\prime}}\)._
We now present a formal definition of revocable pseudorandom functions below.
**Definition 8.2** (Revocable Pseudorandom Function).: _Let \(\lambda\in\mathbb{N}\) be the security parameter and let \(\kappa(\lambda),\ell(\lambda)\) and \(\ell^{\prime}(\lambda)\) be polynomials. A revocable pseudorandom function \((\mathsf{rPRF})\) is a scheme \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\) consisting of the following efficient algorithms:_
* \(\mathsf{Gen}(1^{\lambda})\)_: on input the security parameter_ \(\lambda\in\mathbb{N}\)_, it outputs a_ \(\mathsf{PRF}\) _key_ \(k\in\{0,1\}^{\kappa}\)_, a quantum state_ \(\rho_{k}\) _and a master secret key_ \(\mathsf{MSK}\)_._
* \(\mathsf{PRF}(k,x)\)_: on input a key_ \(k\in\{0,1\}^{\kappa}\) _and an input string_ \(x\in\{0,1\}^{\ell}\)_, it outputs a value_ \(y\in\{0,1\}^{\ell^{\prime}}\)_. This is a deterministic algorithm._
* \(\mathsf{Eval}(\rho_{k},x)\)_: on input a state_ \(\rho_{k}\) _and an input_ \(x\in\{0,1\}^{\ell}\)_, it outputs a value_ \(y\in\{0,1\}^{\ell^{\prime}}\)_._
* \(\mathsf{Revoke}(\mathsf{MSK},\sigma)\)_: on input key_ \(\mathsf{MSK}\) _and a state_ \(\sigma\)_, it outputs_ \(\mathsf{Valid}\) _or_ \(\mathsf{Invalid}\)_._
We additionally require that the following holds:
Correctness.For each \((k,\rho_{k},\mathsf{MSK})\) in the support of \(\mathsf{Gen}(1^{\lambda})\) and for every \(x\in\{0,1\}^{\ell}\):
* (Correctness of evaluation:) \[\mathsf{Pr}\left[\mathsf{PRF}(k,x)=\mathsf{Eval}(\rho_{k},x)\right]\geq 1- \mathsf{negl}(\lambda).\]
* (Correctness of revocation:) \[\mathsf{Pr}\left[\mathsf{Valid}\leftarrow\mathsf{Revoke}(\mathsf{MSK},\rho_{k}) \right]\geq 1-\mathsf{negl}(\lambda).\]
### Security
We define revocable \(\mathsf{PRF}\) security below.
**Definition 8.3** (Revocable \(\mathsf{PRF}\) Security).: _A revocable pseudorandom function \((\mathsf{rPRF})\) satisfies revocable \(\mathsf{PRF}\) security if, for every QPT adversary \(\mathcal{A}\) and every polynomial \(\mu=\mu(\lambda)\in\mathbb{N}\),_
\[\Pr\left[b\leftarrow\mathsf{Expt}^{\mathcal{A},\mu}(1^{\lambda},b)\ :\ b\stackrel{{\$}}{{\leftarrow}}\{0,1\}\right]\leq\frac{1}{2}+ \mathsf{negl}(\lambda),\]
_where \(\mathsf{Expt}^{\mathcal{A},\mu}\) is as defined in Figure 13. If the above property holds for a fixed polynomial \(\mu(\lambda)\), then we say that \(\mathsf{rPRF}\) satisfies \(\mu\)-revocable \(\mathsf{PRF}\) security._
Figure 13: Revocable \(\mathsf{PRF}\) security
From one-query to multi-query security. We show that proving security with respect to \(\mu=1\) is sufficient. That is, we show the following.
**Claim 8.4**.: _Suppose an \(\mathsf{rPRF}\) scheme \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\) satisfies \(1\)-revocable \(\mathsf{PRF}\) security. Then, \(\mathsf{rPRF}\) also satisfies the stronger notion of (multi-query) revocable PRF security._
Proof.: We consider a sequence of hybrids defined as follows. Let \(\mathcal{A}\) be a QPT adversary participating in the revocable PRF security experiment and let \((x_{1},y_{1}),\ldots,(x_{\mu},y_{\mu})\) denote the challenge input-output pairs, for some polynomial \(\mu=\mu(\lambda)\). We also denote by \(k\) the \(\mathsf{PRF}\) key sampled using \(\mathsf{Gen}\) by the challenger in Figure 13.
\(\mathsf{H}_{i}\), for \(i\in[\mu+1]\): In this hybrid, \(y_{1},\ldots,y_{i-1}\) are sampled uniformly at random from \(\{0,1\}^{\ell^{\prime}}\) and \(y_{i},\ldots,y_{\mu}\) are generated as follows: \(y_{j}=\mathsf{PRF}(k,x_{j})\) for \(j\geq i\).
We claim that \(\mathcal{A}\) can win the \(1\)-bit unpredictability game between hybrids \(\mathsf{H}_{i}\) and \(\mathsf{H}_{i+1}\), for all \(i\in[\mu]\), with probability \(\frac{1}{2}+\mathsf{negl}(\lambda)\). That is, a bit \(b\) is sampled uniformly at random and if \(b=0\) then \(\mathcal{A}\) participates in \(\mathsf{H}_{i}\) and if \(b=1\) then \(\mathcal{A}\) participates in \(\mathsf{H}_{i+1}\). We claim that \(\mathcal{A}\) can predict \(b\) with probability \(\frac{1}{2}+\mathsf{negl}(\lambda)\). Once we show this, we can then invoke the hybrid lemma for \(1\)-bit unpredictability (Lemma 5.5) to complete the proof.
Suppose the above claim is not true. Let the prediction probability of \(\mathcal{A}\) be \(\frac{1}{2}+\varepsilon\), where \(\varepsilon\) is inverse polynomial. Then we use \(\mathcal{A}\) to break \(1\)-revocation security of \(\mathsf{rPRF}\). Specifically, we construct a reduction \(\mathcal{B}\) that does the following:
* Get \(\rho_{k}\) from the challenger.
* Sample \(x_{i+1},\ldots,x_{\mu}\) uniformly at random from \(\{0,1\}^{\ell}\). Denote \(\rho_{k}^{(i+1)}=\rho_{k}\). Do the following for \(j=i+1,\ldots,\mu\): \(\mathsf{Eval}(\rho_{k}^{(j)},x_{j})\) to obtain \(y_{j}\). Using Almost as good as new lemma [1], recover \(\rho_{k}^{(j+1)}\), where \(\rho_{k}^{(j+1)}\) is negligibly12 close to \(\rho_{k}\) in trace distance. Footnote 12: Technically, this depends on the correctness error and we start with a \(\mathsf{rPRF}\) that is correct with probability negligibly close to \(1\).
* Forward \(\rho_{k}^{(\mu+1)}\) to \(\mathcal{A}\).
* When the challenger sends the message \(\mathsf{REVOKE}\) then forward this message to \(\mathcal{A}\).
* If \(\mathcal{A}\) sends \(\sigma\). Forward this to the challenger.
* If the revocation did not fail, the guessing phase begins. The challenger sends \((x^{*},y^{*})\). Then, sample \(x_{1},\ldots,x_{i-1}\) uniformly at random from \(\{0,1\}^{\ell}\) and \(y_{1},\ldots,y_{i-1}\) uniformly at random from \(\{0,1\}^{\ell^{\prime}}\). Set \(x_{i}=x^{*}\) and \(y_{i}=y^{*}\). Send \((x_{1},y_{1}),\ldots,(x_{\mu},y_{\mu})\) to \(\mathcal{A}\).
* Output \(b\), where \(b\) is the output of \(\mathcal{A}\).
From the quantum union bound (Lemma 2.3), the "Almost As Good As New" lemma (Lemma 2.2) and the correctness of \(\mathsf{rPRF}\), it follows that \(\operatorname{TD}(\rho_{k},\rho_{k}^{(\mu+1)})\leq\mathsf{negl}(\lambda)\) and thus, the success probability of \(\mathcal{A}\) when given \(\rho_{k}^{(\mu+1)}\) instead of \(\rho_{k}\) is now at least \(\frac{1}{2}+\varepsilon-\mathsf{negl}(\lambda)\). Moreover, by the design of \(\mathcal{B}\), it follows that the success probability of \(\mathcal{B}\) in breaking \(1\)-revocation security of \(\mathsf{rPRF}\) is exactly the
same as the success probability of \(\mathcal{A}\) in breaking revocation security of \(\mathsf{rPRF}\). This contradicts the fact that \(\mathsf{rPRF}\) satisfies \(1\)-revocation security.
**Remark 8.5**.: _As in the case of key revocable public-key encryption, we could consider an alternate definition defined with respect to computational indistinguishability: instead of requiring the adversary (in the guessing phase) to predict whether it receives a pseudorandom output or a string sampled uniformly at random, we could instead require the adversary to_ **distinguish** _a pseudorandom sample from the uniform distribution. For a reason similar to the revocable PKE case, these two definitions are incomparable. We leave the investigation of the indistinguishability-based definition to the future works._
**Remark 8.6**.: _Our notion of revocable \(\mathsf{PRF}\) security from Definition 8.3 does not directly imply the traditional notion of \(\mathsf{pqPRF}\) security13 from Definition 8.1. The reason is that the definition does not preclude the possibility of there being an input \(x\) (say, the all-zeroes string) on which \(\mathsf{PRF}\) outputs \(x\) itself (or the first bit of \(x\) if the output of \(\mathsf{PRF}\) is a single bit)._
Footnote 13: Although any revocable \(\mathsf{PRF}\) is a _weak_\(\mathsf{PRF}\). Recall that a weak PRF is one where the adversary receives as input \((x_{1},y_{1}),\ldots,(x_{\mu},y_{\mu})\), where \(x_{i}\)s are picked uniformly at random. The goal of the adversary is to distinguish the two cases: all \(y_{i}\)s are pseudorandom or all \(y_{i}\)s are picked uniformly at random.
Motivated by Remark 8.6, we now introduce the following notion of a _strong_\(\mathsf{rPRF}\).
**Definition 8.7** (Strong \(\mathsf{rPRF}\)).: _We say that a scheme \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\) is a strong revocable pseudorandom function (or, strong \(\mathsf{rPRF}\)) if the following two properties hold:_
1. \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\) _satisfy revocable_ \(\mathsf{PRF}\) _security according to Definition_ 8.3_, and_
2. \((\mathsf{Gen},\mathsf{PRF})\) _satisfy_ \(\mathsf{pqPRF}\) _security according to Definition_ 8.1_._
**Remark 8.8**.: _Instantiating the pseudorandom function in the textbook construction of private-key encryption [12] with a revocable pseudorandom function, we obtain a private-key revocable encryption scheme._
We show that the issue raised in Remark 8.6 is not inherent. In fact, we give a simple generic transformation that allows us to obtain strong \(\mathsf{rPRF}\)s by making use of traditional \(\mathsf{pqPRF}\)s.
**Claim 8.9** (Generic Transformation for Strong \(\mathsf{rPRF}\)s).: _Let \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\) be an \(\mathsf{rPRF}\) scheme which satisfies revocable \(\mathsf{PRF}\) security, and let \((\overline{\mathsf{Gen}},\overline{\mathsf{PRF}})\) be a \(\mathsf{pqPRF}\). Then, the scheme \((\widetilde{\mathsf{Gen}},\widetilde{\mathsf{PRF}},\widetilde{\mathsf{Eval}},\widetilde{\mathsf{Revoke}})\) is a strong \(\mathsf{rPRF}\) which consists of the following algorithms:_
* \(\widetilde{\mathsf{Gen}}(1^{\lambda})\)_: on input the security parameter_ \(1^{\lambda}\)_, first run_ \((k,\rho_{k},\mathsf{MSK})\leftarrow\mathsf{Gen}(1^{\lambda})\) _and then output_ \(((K,k),(K,\rho_{k}),\mathsf{MSK})\)_, where_ \(K\leftarrow\overline{\mathsf{Gen}}(1^{\lambda})\) _is a_ \(\mathsf{pqPRF}\) _key._
* \(\widetilde{\mathsf{PRF}}((K,k),x)\)_: on input a key_ \((K,k)\) _and string_ \(x\in\{0,1\}^{\ell}\)_, output_ \(\overline{\mathsf{PRF}}(K,x)\oplus\mathsf{PRF}(k,x)\)_._
* \(\widetilde{\mathsf{Eval}}((K,\rho_{k}),x)\)_: on input_ \((K,\rho_{k})\) _and_ \(x\in\{0,1\}^{\ell}\)_, output_ \(\overline{\mathsf{PRF}}(K,x)\oplus\mathsf{Eval}(\rho_{k},x)\)_._
* \(\widetilde{\mathsf{Revoke}}(\mathsf{MSK},(K,\sigma))\)_: on input a master secret key_ \(\mathsf{MSK}\) _and a pair_ \((K,\sigma)\)_, first discard the key_ \(K\) _and then run_ \(\mathsf{Revoke}(\mathsf{MSK},\sigma)\)_._
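Before turning to the proof, note that the transformation is simply an XOR combiner of the two primitives. The following toy sketch (purely classical stand-ins for \(\overline{\mathsf{PRF}}\), \(\mathsf{PRF}\) and \(\mathsf{Eval}\); it does not implement the lattice-based construction, and all helper names are hypothetical) shows how the two keys are wired together and why correctness of evaluation is preserved.

```python
import hashlib, os

# Toy, purely classical stand-ins (hypothetical, for illustration only):
# `prf` plays the role of the post-quantum PRF \overline{PRF}; `rprf` plays the
# role of both PRF(k, .) and Eval(rho_k, .) of the revocable scheme, so that
# the sketch is runnable without any quantum state.
def prf(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(b"pqPRF" + key + x).digest()

def rprf(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(b"rPRF" + key + x).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def gen():
    K = os.urandom(32)                      # pqPRF key from \overline{Gen}
    k = os.urandom(32)                      # revocable-PRF key from Gen
    return (K, k), (K, k)                   # (secret key, evaluation "state")

def combined_prf(keys, x):                  # \widetilde{PRF}((K, k), x)
    K, k = keys
    return xor(prf(K, x), rprf(k, x))

def combined_eval(state, x):                # \widetilde{Eval}((K, rho_k), x)
    K, k = state
    return xor(prf(K, x), rprf(k, x))

sk, st = gen()
x = b"example-input"
assert combined_prf(sk, x) == combined_eval(st, x)   # correctness of evaluation
```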
Proof.: Let us first show that the scheme \((\widetilde{\mathsf{Gen}},\widetilde{\mathsf{PRF}},\widetilde{\mathsf{Eval}}, \widetilde{\mathsf{Revoke}})\) maintains revocable \(\mathsf{PRF}\) security. Suppose that there exists a \(\mathsf{QPT}\) adversary \(\mathcal{A}\) and a polynomial \(\mu=\mu(\lambda)\in\mathbb{N}\) such that
\[\Pr\left[b\leftarrow\mathsf{Expt}^{\mathcal{A},\mu}(1^{\lambda},b)\ :\ b\stackrel{{ \$}}{{\leftarrow}}\{0,1\}\right]=\frac{1}{2}+\epsilon(\lambda),\]
for some function \(\epsilon(\lambda)=1/\mathrm{poly}(\lambda)\) and \(\mathsf{Expt}^{\mathcal{A},\mu}\) as defined in Figure 13. We show that this implies the existence of a \(\mathsf{QPT}\) distinguisher \(\mathcal{D}\) that breaks the revocable \(\mathsf{PRF}\) security of the scheme \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\). The distinguisher \(\mathcal{D}\) proceeds as follows:
1. \(\mathcal{D}\) receives as input a quantum state \(\rho_{k}\), where \((k,\rho_{k},\mathsf{MSK})\leftarrow\mathsf{Gen}(1^{\lambda})\) is generated by the challenger. Then, \(\mathcal{D}\) generates a \(\mathsf{pqPRF}\) key \(K\leftarrow\overline{\mathsf{Gen}}(1^{\lambda})\) and sends \((K,\rho_{k})\) to \(\mathcal{A}\).
2. When \(\mathcal{A}\) returns a state \(\rho\), \(\mathcal{D}\) forwards it to the challenger as part of the revocation phase.
3. When \(\mathcal{D}\) receives the challenge input \((x_{1},\ldots,x_{\mu})\) and \((y_{1},\ldots,y_{\mu})\) from the challenger, \(\mathcal{D}\) sends \((x_{1},\ldots,x_{\mu})\) and \((\overline{\mathsf{PRF}}(K,x_{1})\oplus y_{1},\ldots,\overline{\mathsf{PRF}}(K, x_{\mu})\oplus y_{\mu})\) to \(\mathcal{A}\).
4. When \(\mathcal{A}\) outputs \(b^{\prime}\), so does the distinguisher \(\mathcal{D}\).
Note that the simulated challenge distribution above precisely matches the challenge distribution from the experiment \(\mathsf{Expt}^{\mathcal{A},\mu}\) from Figure 13. Therefore, if \(\mathcal{A}\) succeeds with inverse polynomial advantage \(\epsilon(\lambda)=1/\mathrm{poly}(\lambda)\), so does \(\mathcal{D}\), thereby breaking the revocable \(\mathsf{PRF}\) security of the scheme \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\). Consequently, \((\widetilde{\mathsf{Gen}},\widetilde{\mathsf{PRF}},\widetilde{\mathsf{Eval}},\widetilde{\mathsf{Revoke}})\) satisfies revocable \(\mathsf{PRF}\) security.
To see why \((\widetilde{\mathsf{Gen}},\widetilde{\mathsf{PRF}})\) satisfy \(\mathsf{pqPRF}\) security according to Definition 8.1, we can follow a similar argument as above, this time breaking the \(\mathsf{pqPRF}\) security of \((\overline{\mathsf{Gen}},\overline{\mathsf{PRF}})\). Here, we rely on the fact that the keys \((k,\rho_{k},\mathsf{MSK})\leftarrow\mathsf{Gen}(1^{\lambda})\) and \(K\leftarrow\overline{\mathsf{Gen}}(1^{\lambda})\) are sampled independently from one another.
**Remark 8.10**.: _We note that previous works [11, 12] do not explicitly require in their definitions that secure software leasing or copy-protection of pseudorandom functions preserve the pseudorandomness property (although their constructions could still satisfy the traditional pseudorandomness property)._
### Construction
We construct a PRF satisfying \(1\)-revocation security (Definition 8.3).
Shift-Hiding Construction. We construct a _shift-hiding_ function which is loosely inspired by the shift-hiding shiftable functions introduced by Peikert and Shiehian [13].
Let \(n,m\in\mathbb{N}\), \(q\in\mathbb{N}\) be a modulus and let \(\ell=nm\lceil\log q\rceil\). In the following, we consider matrix-valued functions \(F:\{0,1\}^{\ell}\rightarrow\mathbb{Z}_{q}^{n\times m}\), where \(F\) is one of the following functions:
* \(\mathcal{Z}:\{0,1\}^{\ell}\rightarrow\mathbb{Z}_{q}^{n\times m}\) which, on input \(x\in\{0,1\}^{\ell}\), outputs an all zeroes matrix \(\mathbf{0}\in\mathbb{Z}_{q}^{n\times m}\), or:
* \(H_{r}:\{0,1\}^{\ell}\rightarrow\mathbb{Z}_{q}^{n\times m}\) which, on input \(x\in\{0,1\}^{\ell}\), outputs the matrix \(\mathbf{M}\in\mathbb{Z}_{q}^{n\times m}\) satisfying \(x=r\oplus\mathsf{bindecomp}(\mathbf{M})\), where \(r\in\{0,1\}^{\ell}\) and \(\mathsf{bindecomp}(\cdot)\) takes as input a matrix and outputs a binary string that is obtained by concatenating the binary decompositions of all the elements in the matrix (in some fixed order).
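For concreteness, here is one way \(\mathsf{bindecomp}\) and \(H_{r}\) can be realized (a sketch assuming a fixed row-major, least-significant-bit-first ordering of the bits; any fixed ordering works as long as it is used consistently):

```python
import numpy as np

n, m, q = 2, 3, 17                       # toy parameters (illustration only)
k = int(np.ceil(np.log2(q)))             # bits per Z_q entry
ell = n * m * k

def bindecomp(M):
    """Concatenate the binary decompositions of all entries of M (row-major, LSB first)."""
    bits = [((int(v) >> j) & 1) for v in M.flatten() for j in range(k)]
    return np.array(bits, dtype=int)

def bincomp(bits):
    """Inverse of bindecomp: reassemble a matrix in Z_q^{n x m} from ell bits."""
    vals = [sum(int(b) << j for j, b in enumerate(bits[i*k:(i+1)*k])) % q
            for i in range(n * m)]
    return np.array(vals, dtype=int).reshape(n, m)

def H(r, x):
    """H_r(x) = M, where x = r XOR bindecomp(M)."""
    return bincomp(np.bitwise_xor(r, x))

rng = np.random.default_rng(3)
r = rng.integers(0, 2, size=ell)
M = rng.integers(0, q, size=(n, m))
x = np.bitwise_xor(r, bindecomp(M))
assert np.array_equal(H(r, x), M)        # H_r recovers M from x
```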
We show that there exist \(\mathsf{PPT}\) algorithms \((\mathcal{KG},\mathcal{E})\) (formally defined in Construction 4) with the following properties:
* \(\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},F)\): on input \(1^{n},1^{m}\), a modulus \(q\in\mathbb{N}\), a matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) and a function \(F\in\{\mathcal{Z}\}\cup\{H_{r}:r\in\{0,1\}^{\ell}\}\), it outputs a pair of keys \((pk_{F},sk_{F})\).
* \(\mathcal{E}(pk_{F},x)\): on input \(pk_{F}\), \(x\in\{0,1\}^{\ell}\), it outputs \(\mathbf{S}_{x}\mathbf{A}+\mathbf{E}_{x}+F(x)\), where \(\mathbf{S}_{x}\in\mathbb{Z}_{q}^{n\times n}\) and \(\mathbf{E}_{x}\in\mathbb{Z}_{q}^{n\times m}\), where \(||\mathbf{E}_{x}||_{\infty}\leq(m\sigma)^{2}\cdot(nm\lceil\log(q)\rceil)\). Moreover, there is an efficient algorithm that recovers \(\mathbf{S}_{x}\) given \(sk_{F}\) and \(x\).
We show that our construction of \((\mathcal{KG},\mathcal{E})\) satisfies a _shift-hiding property_; namely, for any \(r\in\{0,1\}^{\ell}\),
\[\{pk_{\mathcal{Z}}\}\approx_{c}\{pk_{H_{r}}\},\]
for any \(pk_{F}\) with \((pk_{F},sk_{F})\leftarrow\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},F)\), where \(\mathbf{A}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\), and \(F\in\{\mathcal{Z},H_{r}\}\).
In the construction below, we consider a bijective function \(\phi:[n]\times[m]\times[\lceil\log(q)\rceil]\rightarrow[\ell]\).
**Construction 4**.: _Consider the \(\mathsf{PPT}\) algorithms \((\mathcal{KG},\mathcal{E})\) defined as follows:_
* \(\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},F)\)_: on input_ \(1^{n},1^{m}\)_, a modulus_ \(q\in\mathbb{N}\)_, a matrix_ \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) _and a function_ \(F\in\{\mathcal{Z}\}\cup\{H_{r}:r\in\{0,1\}^{\ell}\}\)_, it outputs a pair of keys_ \(\kappa_{F}=(pk_{F},sk_{F})\) _generated as follows:_ 1. _For every_ \(i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil]\) _and_ \(b\in\{0,1\}\)_, define_ \(\mathsf{M}_{b}^{(i,j,\tau)}\) _as follows:_
* _If_ \(F=\mathcal{Z}\)_, then for every_ \(i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil]\)_, let_ \(\mathsf{M}_{b}^{(i,j,\tau)}=\mathbf{0}\in\mathbb{Z}_{q}^{n\times m}\)_,_
* _If_ \(F=H_{r}\)_, then for every_ \(i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil]\)_, let_ \(\mathsf{M}_{b}^{(i,j,\tau)}\in\mathbb{Z}_{q}^{n\times m}\) _be the matrix whose_ \((i,j)\)_-th entry equals_ \((b\oplus r_{\phi(i,j,\tau)})\cdot 2^{\tau-1}\) _and which is zero everywhere else._ 2. _For every_ \(i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil]\)_,_ \(b\in\{0,1\}\)_, compute:_ \[pk_{b}^{(i,j,\tau)}=\mathbf{S}_{b}^{(i,j,\tau)}\mathbf{A}+\mathbf{E}_{b}^{(i,j,\tau)}+\mathsf{M}_{b}^{(i,j,\tau)},\] \[sk_{b}^{(i,j,\tau)}=\left(\left\{\mathbf{S}_{b}^{(i,j,\tau)},\mathbf{E}_{b}^{(i,j,\tau)}\right\}\right),\] _where for every_ \(i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil]\)_,_ \(b\in\{0,1\}\)_:_
* \(\mathbf{S}_{b}^{(i,j,\tau)}\gets D_{\mathbb{Z}_{q},\sigma}^{n\times n}\)_,_
* \(\mathbf{E}_{b}^{(i,j,\tau)}\gets D_{\mathbb{Z}_{q},\sigma}^{n\times m}\)__ 3. _Output_ \(pk_{F}=\left(\mathbf{A},\left\{pk_{b}^{(i,j,\tau)}\right\}_{\begin{subarray}{c}i \in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil],b\in\{0,1\}\end{subarray}}\right)\) _and_ \(sk_{F}=\left\{sk_{b}^{(i,j,\tau)}\right\}_{\begin{subarray}{c}i\in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil],b\in\{0,1\}\end{subarray}}\)_._
* \(\mathcal{E}(pk_{F},x)\)_: on input_ \(pk_{F}\) _and_ \(x\in\{0,1\}^{\ell}\)_, proceed as follows:_ 1. _Parse_ \(pk_{F}=\left(\mathbf{A}\left\{pk_{b}^{(i,j,\tau)}\right\}_{\begin{subarray}{c}i \in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil],b\in\{0,1\}\end{subarray}}\right)\)__ 2. _Output_ \(\sum_{\begin{subarray}{c}i\in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil]\end{subarray}}pk_{\begin{subarray}{c}i,j,\tau\end{subarray}}^ {(i,j,\tau)}\)_._
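The following is a minimal executable sketch of \((\mathcal{KG},\mathcal{E})\) under the conventions above (toy parameters only; the discrete Gaussians are replaced by small uniform noise, and the bit index `tau` is counted from \(0\) in the code, so that the weight \(2^{\mathtt{tau}}\) corresponds to \(2^{\tau-1}\) with \(\tau\) counted from \(1\)). It checks the decomposition \(\mathcal{E}(pk_{F},x)=\mathbf{S}_{x}\mathbf{A}+\mathbf{E}_{x}+H_{r}(x)\) established in Claim 8.11 below.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, q = 2, 3, 17                        # toy parameters (illustration only)
k = int(np.ceil(np.log2(q)))              # bits per Z_q entry
ell = n * m * k

A = rng.integers(0, q, size=(n, m))
r = rng.integers(0, 2, size=ell)

def phi(i, j, tau):                       # a fixed bijection [n] x [m] x [k] -> [ell]
    return (i * m + j) * k + tau

def M_entry(i, j, tau, b):                # M_b^{(i,j,tau)} for F = H_r
    M = np.zeros((n, m), dtype=np.int64)
    M[i, j] = (b ^ int(r[phi(i, j, tau)])) << tau
    return M % q

# KG: one LWE-style sample per (i, j, tau, b); toy noise stands in for Gaussians.
S = {idx: rng.integers(0, q, size=(n, n)) for idx in np.ndindex(n, m, k, 2)}
E = {idx: rng.integers(-1, 2, size=(n, m)) for idx in np.ndindex(n, m, k, 2)}
pk = {(i, j, t, b): (S[i, j, t, b] @ A + E[i, j, t, b] + M_entry(i, j, t, b)) % q
      for (i, j, t, b) in np.ndindex(n, m, k, 2)}

def evaluate(pk, x):                      # E(pk_F, x): sum the selected samples
    total = np.zeros((n, m), dtype=np.int64)
    for (i, j, t) in np.ndindex(n, m, k):
        total += pk[i, j, t, int(x[phi(i, j, t)])]
    return total % q

# Consistency check: E(pk, x) = S_x A + E_x + H_r(x)  (mod q).
x = rng.integers(0, 2, size=ell)
S_x = sum(S[i, j, t, int(x[phi(i, j, t)])] for (i, j, t) in np.ndindex(n, m, k))
E_x = sum(E[i, j, t, int(x[phi(i, j, t)])] for (i, j, t) in np.ndindex(n, m, k))
H_x = sum(M_entry(i, j, t, int(x[phi(i, j, t)])) for (i, j, t) in np.ndindex(n, m, k))
assert np.array_equal(evaluate(pk, x), (S_x @ A + E_x + H_x) % q)
```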
**Claim 8.11** (Correctness).: _Let \((\mathcal{KG},\mathcal{E})\) be the pair of \(\mathsf{PPT}\) algorithms in Construction 4. Let \((pk_{F},sk_{F})\leftarrow\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},F)\) with \(F\in\{\mathcal{Z}\}\cup\{H_{r}:r\in\{0,1\}^{\ell}\}\). Then, the output of \(\mathcal{E}(pk_{F},x)\) is of the form:_
\[\mathcal{E}(pk_{F},x)=\mathbf{S}_{x}\mathbf{A}+\mathbf{E}_{x}+F(x),\]
_where \(\mathbf{S}_{x}\in\mathbb{Z}_{q}^{n\times n}\) and \(\mathbf{E}_{x}\in\mathbb{Z}_{q}^{n\times m}\) with \(||\mathbf{E}_{x}||_{\infty}\leq(m\sigma)^{2}\cdot(nm\lceil\log(q)\rceil)\). Moreover, there is an efficient algorithm that recovers \(\mathbf{S}_{x}\) given \((pk_{F},sk_{F})\)._
Proof.: Let \((pk_{F},sk_{F})\leftarrow\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},F)\). Parse \(pk_{F}=\left(\mathbf{A},\left\{pk_{b}^{(i,j,\tau)}\right\}_{\begin{subarray}{ c}i\in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil],b\in\{0,1\}\end{subarray}}\right)\) and \(sk_{F}=\left\{sk_{b}^{(i,j,\tau)}\right\}_{\begin{subarray}{c}i\in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil],b\in\{0,1\}\end{subarray}}\), where:
\[pk_{b}^{(i,j,\tau)}=\mathbf{S}_{b}^{(i,j,\tau)}\mathbf{A}+\mathbf{E}_{b}^{(i, j,\tau)}+\mathsf{M}_{b}^{(i,j,\tau)},\]
\[sk_{b}^{(i,j,\tau)}=\left(\{\mathbf{S}_{b}^{(i,j,\tau)},\mathbf{E}_{b}^{(i,j, \tau)}\}\right)\]
There are two cases to consider here:
Case 1. \(F=\mathcal{Z}\): in this case, \(\mathsf{M}_{b}^{(i,j,\tau)}=\mathbf{0}\), for every \(i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil],b\in\{0,1\}\). Thus, the following holds:
\[\sum_{\begin{subarray}{c}i\in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil]\end{subarray}}pk_{x_{\phi(i,j,\tau)}}^{(i,j,\tau)} = \underbrace{\left(\sum_{\begin{subarray}{c}i\in[n],j\in[m],\\ r\in[\lceil\log(q)\rceil]\end{subarray}}\mathbf{S}_{x_{\phi(i,j,\tau)}}^{(i,j, \tau)}\right)}_{\mathbf{S}_{x}}\mathbf{A}+\underbrace{\left(\sum_{ \begin{subarray}{c}i\in[n],j\in[m],\\ r\in[\lceil\log(q)\rceil]\end{subarray}}\mathbf{E}_{x_{\phi(i,j,\tau)}}^{(i,j,\tau)}\right)}_{\mathbf{E}_{x}}+\left(\sum_{\begin{subarray}{c}i\in[n],j\in[ m],\\ r\in[\lceil\log(q)\rceil]\end{subarray}}\mathsf{M}_{x_{\phi(i,j,\tau)}}^{(i,j,\tau)}\right)\] \[= \mathbf{S}_{x}\mathbf{A}+\mathbf{E}_{x}+\mathcal{Z}(x)\]
Moreover, \(||\mathbf{E}_{b}^{(i,j,\tau)}||_{\infty}\leq(m\sigma)^{2}\) and thus, \(||\mathbf{E}_{x}||_{\infty}\leq(m\sigma)^{2}\cdot(nm\lceil\log(q)\rceil)\).
Case 2. \(F=H_{r}\):
\[\sum_{\begin{subarray}{c}i\in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil]\end{subarray}}pk_{x_{\phi(i,j,\tau)}}^{(i,j,\tau)} = \mathbf{S}_{x}\mathbf{A}+\mathbf{E}_{x}+\left(\sum_{\begin{subarray} {c}i\in[n],j\in[m],\\ \tau\in[\lceil\log(q)\rceil]\end{subarray}}\mathsf{M}_{x_{\phi(i,j,\tau)}}^{(i, j,\tau)}\right)\] \[= \mathbf{S}_{x}\mathbf{A}+\mathbf{E}_{x}+H_{r}(x),\]
where \(\mathbf{S}_{x}\) and \(\mathbf{E}_{x}\) are as defined above. The second equality holds because \(\mathsf{M}_{x_{\phi(i,j,\tau)}}^{(i,j,\tau)}\) has the value \((x_{\phi(i,j,\tau)}\oplus r_{\phi(i,j,\tau)})\cdot 2^{\tau-1}\) in the \((i,j)\)-th position and zero everywhere else. Thus, summing up all the \(\mathsf{M}_{x_{\phi(i,j,\tau)}}^{(i,j,\tau)}\) matrices results in the matrix \(\mathsf{M}\) whose binary decomposition is \(x\oplus r\), i.e. \(H_{r}(x)=\mathsf{M}\).
Finally, it is clear that \(\mathbf{S}_{x}\) can be efficiently recovered from \(sk_{F}\) and \(x\)
**Claim 8.12** (Shift-hiding property).: _Assuming the quantum hardness of learning with errors, the pair \((\mathcal{KG},\mathcal{E})\) in Construction 4 has the property that_
\[\{pk_{\mathcal{Z}}\}\approx_{c}\{pk_{H_{r}}\},\]
_for any \(pk_{F}\) with \((pk_{F},sk_{F})\leftarrow\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},F)\), where \(\mathbf{A}\xleftarrow{\$}\mathbb{Z}_{q}^{n\times m}\), \(r\in\{0,1\}^{\ell}\) and \(F\in\{\mathcal{Z},H_{r}\}\)._
Proof.: For every \(i\in[n],j\in[m],\tau\in[\lceil\log(q)\rceil]\), \(b\in\{0,1\}\), let \(\mathsf{M}_{b}^{(i,j,\tau)}\in\mathbb{Z}_{q}^{n\times m}\) be defined as in Construction 4 for \(F=H_{r}\). Then from the quantum hardness of learning with errors, the following holds for every \((i,j,\tau)\) and \(b\in\{0,1\}\):
\[\{\mathbf{S}_{b}^{(i,j,\tau)}\mathbf{A}+\mathbf{E}_{b}^{(i,j,\tau)}\}\approx_ {c}\{\mathbf{S}_{b}^{(i,j,\tau)}\mathbf{A}+\mathbf{E}_{b}^{(i,j,\tau)}+\mathsf{ M}_{b}^{(i,j,\tau)}\}\]
Since \(\{\mathbf{S}_{b}^{(i,j,\tau)}\}\) and \(\{\mathbf{E}_{b}^{(i,j,\tau)}\}\) are sampled independently for every \((i,j,\tau)\) and \(b\in\{0,1\}\), the proof of the claim follows.
**Remark 8.13**.: _When considering the all-zeroes function \(\mathcal{Z}\), we drop it from the notation. For instance, we write \(pk_{\mathcal{Z}}\) simply as \(pk\)._
Construction. We consider the following parameters which are relevant to our PRF construction. Let \(n,m\in\mathbb{N}\) and let \(q\in\mathbb{N}\) be a modulus with \(q=2^{o(n)}\). Let \(\ell=nm\lceil\log q\rceil\). Let \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\) and let \(p\ll q\) be a sufficiently large rounding parameter with
\[n\cdot m^{3.5}\sigma^{3}\lceil\log q\rceil=(q/p)\cdot 2^{-o(n)}.\]
We describe our construction below.
**Construction 5** (Revocable PRF scheme).: _Let \(n\in\mathbb{N}\) be the security parameter and \(m\in\mathbb{N}\). Let \(q\geq 2\) be a prime and let \(\sigma>0\) be a parameter. Let \((\mathcal{KG},\mathcal{E})\) be the procedure in Construction 4. Our revocable_ PRF _scheme is defined as follows:_
* \(\mathsf{Gen}(1^{\lambda})\)_: This is the following key generation procedure:_ 1. _Sample_ \((\mathbf{A},\mathsf{td_{A}})\leftarrow\mathsf{Gen}\mathsf{Trap}(1^{n},1^{m},q)\)_._ 2. _Compute_ \(\kappa_{\mathcal{Z}}\leftarrow\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},\mathcal{Z})\)_, where_ \(\mathcal{Z}:\{0,1\}^{\ell}\rightarrow\mathbb{Z}_{q}^{n\times m}\) _is such that_ \(\mathcal{Z}(x)\) _outputs the all-zeroes matrix for every_ \(x\in\{0,1\}^{\ell}\)_. Parse_ \(\kappa_{\mathcal{Z}}\) _as_ \((pk,sk)\)_._ 3. _Generate a Gaussian superposition_ \((\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y}\in\mathbb{Z}_{q}^{n})\leftarrow\mathsf{Gen}\mathsf{Gauss}(\mathbf{A},\sigma)\) _with_ \[\left|\psi_{\mathbf{y}}\right\rangle\ =\sum_{\begin{subarray}{c}\mathbf{x}\in\mathbb{Z}_{q}^{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\end{subarray}}\rho_{\sigma}(\mathbf{x})\ \left|\mathbf{x}\right\rangle.\] _Output_ \(k=(pk,sk,\mathbf{y})\)_,_ \(\rho_{k}=(pk,\left|\psi_{\mathbf{y}}\right\rangle)\) _and_ \(\mathsf{MSK}=\mathsf{td_{A}}\)_._
* \(\mathsf{PRF}(k,x)\)_: this is the following procedure:_ 1. _Parse the key_ \(k\) _as a tuple_ \((pk,sk,\mathbf{y})\)_._ 2. _Output_ \(\left\lfloor\mathbf{S}_{x}\mathbf{y}\right\rceil_{p}\)_. Here,_ \(\mathbf{S}_{x}\in\mathbb{Z}_{q}^{n\times n}\) _is a matrix that can be efficiently recovered from_ \(sk\) _and_ \(x\)_, as stated in_ _Claim 8.11_._
* \(\mathsf{Eval}(\rho_{k},x)\)_: this is the following evaluation algorithm:_ 1. _Parse_ \(\rho_{k}\) _as_ \((pk,\rho)\)_._ 2. _Compute_ \(\mathsf{M}_{x}\leftarrow\mathcal{E}(pk,x)\)_._ 3. _Measure the register_ \(\mathsf{Aux}\) _of the state_ \(U(\rho\otimes|0\rangle\langle 0|_{\mathsf{Aux}})U^{\dagger}\)_, where_ \(U\) _is the unitary acting on computational basis states as_ \(U:|\mathbf{t}\rangle\otimes|0\rangle_{\mathsf{Aux}}\mapsto|\mathbf{t}\rangle\otimes|\lfloor\mathsf{M}_{x}\cdot\mathbf{t}\rceil_{p}\rangle_{\mathsf{Aux}}\)_. Denote the resulting outcome by_ \(\mathbf{z}\)_._ 4. _Output_ \(\mathbf{z}\)_._
* \(\mathsf{Revoke}(\mathsf{MSK},\rho)\)_: given as input the trapdoor_ \(\mathsf{td}_{\mathbf{A}}\leftarrow\mathsf{MSK}\)_, apply the projective measurement_ \(\{|\psi_{\mathbf{y}}\rangle\langle\psi_{\mathbf{y}}|\,,I-|\psi_{\mathbf{y}} \rangle\langle\psi_{\mathbf{y}}|\}\) _onto the state_ \(\rho\) _using the procedure_ \(\mathsf{QSampGauss}(\mathbf{A},\mathsf{td}_{\mathbf{A}},\mathbf{y},\sigma)\) _in Algorithm_ 2_. Output_ \(\mathsf{Valid}\) _if the measurement is successful, and_ \(\mathsf{Invalid}\) _otherwise._
**Lemma 8.14**.: _The above scheme satisfies correctness for our choice of parameters._
Proof.: The correctness of revocation follows immediately from the correctness of \(\mathsf{QSampGauss}\) in Algorithm 2, which we showed in Theorem 3.3. Next, we show the correctness of evaluation. Let \(\kappa_{\mathcal{Z}}\leftarrow\mathcal{KG}(1^{n},1^{m},q,\mathbf{A},\mathcal{ Z})\) with \(\kappa_{\mathcal{Z}}=(\mathsf{PK},\mathsf{SK})\). From Claim 8.11, we have for any \(x\in\{0,1\}^{\ell}\):
\[\mathcal{E}(\mathsf{PK},x)=\mathbf{S}_{x}\mathbf{A}+\mathbf{E}_{x}\ (\mathrm{mod}\ q),\]
where \(\mathbf{S}_{x}\in\mathbb{Z}_{q}^{n\times n}\) and \(\mathbf{E}_{x}\in\mathbb{Z}_{q}^{n\times m}\) with \(||\mathbf{E}_{x}||_{\infty}\leq(m\sigma)^{2}\cdot(nm\lceil\log(q)\rceil)\). Recall that \(\mathsf{GenGauss}(\mathbf{A},\sigma)\) outputs a state \(|\psi_{\mathbf{y}}\rangle\) that is overwhelmingly supported on vectors \(\mathbf{t}\in\mathbb{Z}_{q}^{m}\) such that \(\|\mathbf{t}\|\leq\sigma\sqrt{\frac{m}{2}}\) with \(\mathbf{A}\cdot\mathbf{t}=\mathbf{y}\ (\mathrm{mod}\ q)\). Therefore, we have for any input \(x\in\{0,1\}^{\ell}\):
\[\left\lfloor\mathcal{E}(\mathsf{PK},x)\cdot\mathbf{t}\right\rceil_{p}=\left \lfloor\mathbf{S}_{x}\mathbf{A}\cdot\mathbf{t}+\mathbf{E}_{x}\cdot\mathbf{t} \right\rceil_{p}=\left\lfloor\mathbf{S}_{x}\cdot\mathbf{y}+\mathbf{E}_{x} \cdot\mathbf{t}\right\rceil_{p}=\left\lfloor\mathbf{S}_{x}\cdot\mathbf{y} \right\rceil_{p},\]
where the last equality follows from the fact that
\[\|\mathbf{E}_{x}\cdot\mathbf{t}\|_{\infty}\leq\|\mathbf{E}_{x}\|_{\infty} \cdot\|\mathbf{t}\|_{\infty}\leq(m\sigma)^{2}\cdot(nm\lceil\log(q)\rceil) \cdot\sigma\sqrt{m/2}.\]
and \(n\cdot m^{3.5}\sigma^{3}\lceil\log q\rceil=(q/p)\cdot 2^{-o(n)}\) for our choice of parameters.
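The heart of this argument is that rounding to \(\mathbb{Z}_{p}\) is insensitive to an additive error much smaller than \(q/p\). A minimal numerical sketch of that fact (toy parameters; uniform toy noise stands in for \(\mathbf{E}_{x}\cdot\mathbf{t}\), and the targets, standing in for \(\mathbf{S}_{x}\cdot\mathbf{y}\), are placed away from rounding thresholds, which in the scheme holds with overwhelming probability):

```python
import numpy as np

q, p, n = 2**20, 2**5, 4                  # illustrative parameters only
cell = q // p                             # width of one rounding cell

def round_p(v):
    """Component-wise rounding |v|_p = round(v * p / q) mod p."""
    return np.rint(np.asarray(v) * p / q).astype(int) % p

rng = np.random.default_rng(5)
target = rng.integers(0, p, size=n) * cell           # stands in for S_x . y (mod q)
noise = rng.integers(-cell // 4, cell // 4 + 1, n)   # stands in for E_x . t, << q/p

# Since |noise| < q/(2p) and the targets are not near a rounding threshold,
# the rounded values agree.
assert np.array_equal(round_p(target), round_p((target + noise) % q))
```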
**Theorem 8.15**.: _Let \(n\in\mathbb{N}\) and \(q\) be a prime modulus with \(q=2^{o(n)}\) and \(m\geq 2n\log q\), each parameterized by \(\lambda\in\mathbb{N}\). Let \(\ell=nm\lceil\log q\rceil\). Let \(\sigma\in(\sqrt{2m},q/\sqrt{2m})\) and \(\alpha\in(0,1)\) be any noise ratio with \(1/\alpha=\sigma\cdot 2^{o(n)}\), and let \(p\ll q\) be a sufficiently large rounding parameter with_
\[n\cdot m^{3.5}\sigma^{3}\lceil\log q\rceil=(q/p)\cdot 2^{-o(n)}.\]
_Then, assuming the quantum subexponential hardness of \(\mathsf{LWE}_{n,q,\alpha q}^{m}\) and \(\mathsf{SIS}_{n,q,\sigma\sqrt{2m}}^{m}\), our revocable \(\mathsf{PRF}\) scheme \((\mathsf{Gen},\mathsf{PRF},\mathsf{Eval},\mathsf{Revoke})\) defined in Construction 5 satisfies 1-revocation security according to Definition 8.3._
Proof.: Let \(\mathcal{A}\) be a \(\mathsf{QPT}\) adversary and suppose that
\[\Pr\left[b\leftarrow\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\ :\ b\stackrel{{\$}}{{\leftarrow}}\{0,1\}\right]=\frac{1}{2}+ \epsilon(\lambda),\]
\[\underline{\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)}\text{:}\]
**Initialization Phase**:
* The challenger runs the procedure \(\mathsf{Gen}(1^{\lambda})\): 1. Sample \((\mathbf{A},\mathsf{td_{A}})\leftarrow\mathsf{Gen}\mathsf{Trap}(1^{n},1^{m},q)\). 2. Generate \(\mathbf{A}_{N}\in\mathbb{Z}_{q}^{(n+m)\times m}\) with \(\overline{\mathbf{A}_{N}}\xleftarrow{\$}\mathbb{Z}_{q}^{m\times m}\) and \(\underline{\mathbf{A}_{N}}=\mathbf{A}\). 3. Compute \(\kappa_{\mathcal{Z}}\leftarrow\mathcal{KG}(1^{n},1^{m},1^{q},\mathbf{A}_{N}, \mathcal{Z})\), where \(\mathcal{KG}\) is as defined in Construction 4 and \(\mathcal{Z}:\{0,1\}^{\ell}\rightarrow\mathbb{Z}_{q}^{n\times m}\) is such that \(\mathcal{Z}(x)\) outputs an all zero matrix for every \(x\in\{0,1\}^{\ell}\). Parse \(\kappa_{\mathcal{Z}}\) as \((pk,sk)\). 4. Generate \((\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y}\in\mathbb{Z}_{q}^{n}) \leftarrow\mathsf{Gen}\mathsf{Gauss}(\mathbf{A},\sigma)\) with \[\left|\psi_{\mathbf{y}}\right\rangle\ =\ \sum_{\begin{subarray}{c}\mathbf{x}\in \mathbb{Z}_{q}^{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\ (\mathrm{mod}\ q)\end{subarray}}\ \rho_{\sigma}(\mathbf{x})\ \left|\mathbf{x}\right\rangle.\] 5. Let \(k=(pk,sk,\mathbf{y})\), \(\rho_{k}=(pk,\left|\psi_{\mathbf{y}}\right\rangle)\) and \(\mathsf{MSK}=\mathsf{td_{A}}\).
* The challenger sends \(\rho_{k}=(pk,\left|\psi_{\mathbf{y}}\right\rangle)\) to \(\mathcal{A}\).
**Revocation Phase**:
* The challenger sends the message \(\mathtt{REVOKE}\) to \(\mathcal{A}\).
* \(\mathcal{A}\) generates a (possibly entangled) bipartite quantum state \(\rho_{R,\textsc{aux}}\) in systems \(\mathcal{H}_{R}\otimes\mathcal{H}_{\textsc{aux}}\) with \(\mathcal{H}_{R}=\mathcal{H}_{q}^{m}\), returns system \(R\) and holds onto the auxiliary system \(\textsc{Aux}\).
* The challenger runs \(\mathsf{Revoke}(\mathsf{MSK},\rho_{R})\), where \(\rho_{R}\) is the reduced state in system \(R\). If the outcome is \(\mathsf{Invalid}\), the challenger aborts.
**Guessing Phase**:
* The challenger samples \(x\leftarrow\left\{0,1\right\}^{\ell}\) and sends \((x,y)\) to \(\mathcal{A}\), where
* If \(b=0\): compute \(\mathbf{S}_{x}\) from \(sk\) as in Claim 8.11. Set \(y=\left\lfloor\mathbf{S}_{x}\mathbf{y}\right\rceil_{p}\).
* If \(b=1\): sample \(y\leftarrow\{0,1\}^{n}\).
* \(\mathcal{A}\) outputs a string \(b^{\prime}\) and wins if \(b^{\prime}=b\).
Figure 14: The revocable \(\mathsf{PRF}\) experiment \(\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\) for Construction 5.
for some \(\varepsilon(\lambda)\) with respect to experiment \(\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\) in Figure 14. Let us now show that \(\varepsilon(\lambda)\) is negligible.
Suppose for the sake of contradiction that \(\epsilon(\lambda)=1/\mathrm{poly}(\lambda)\). Let us now introduce a sequence of hybrid experiments which will be relevant for the remainder of the proof.
Let \(\mathsf{RevDual}=(\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Revoke})\) be the \(n\)-bit key-revocable Dual-Regev scheme from Construction 2. Fix \(\mu=0^{n}\), where \(\mu\) is the challenge message in the Dual-Regev security experiment.
\(\mathsf{H}_{0}\): This is \(\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},0)\) in Figure 14.
\(\mathsf{H}_{1}\): This is the same experiment as \(\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},0)\), except for the following changes:
* Sample a random string \(r\leftarrow\{0,1\}^{\ell}\).
* Run the procedure \(\mathsf{RevDual}.\mathsf{KeyGen}(1^{\lambda})\) instead of \(\mathsf{Gen}\mathsf{Trap}(1^{n},1^{m},q)\) and \(\mathsf{GenGauss}(\mathbf{A},\sigma)\) to obtain \((\mathbf{A}\in\mathbb{Z}_{q}^{n\times m},\mathbf{y}\in\mathbb{Z}_{q}^{n}, \mathsf{MSK},\rho_{\mathsf{SK}})\).
* Compute \((\mathsf{CT}_{1},\mathsf{CT}_{2})\leftarrow\mathsf{RevDual}.\mathsf{Enc}( \mathbf{A},\mathbf{y},\mu)\), where \(\mathsf{CT}_{1}\in\mathbb{Z}_{q}^{n\times m}\) and \(\mathsf{CT}_{2}\in\mathbb{Z}_{q}^{n}\).
* Set \(x=r\oplus\mathsf{bindecomp}(\mathsf{CT}_{1})\).
The rest of the hybrid is the same as before.
Note that Hybrids \(\mathsf{H}_{0}\) and \(\mathsf{H}_{1}\) are identically distributed.
\(\mathsf{H}_{2}\): This is the same experiment as before, except that the challenger now uses an alternative key-generation algorithm:
* As before, run the procedure \(\mathsf{RevDual}.\mathsf{KeyGen}(1^{\lambda})\) instead of \(\mathsf{Gen}\mathsf{Trap}(1^{n},1^{m},q)\) and \(\mathsf{GenGauss}(\mathbf{A},\sigma)\) to obtain \((\mathbf{A}\in\mathbb{Z}_{q}^{n\times m},\mathbf{y}\in\mathbb{Z}_{q}^{n}, \mathsf{MSK},\rho_{\mathsf{SK}})\). Sample \(r\leftarrow\{0,1\}^{\ell}\).
* Let \(H_{r}:\{0,1\}^{\ell}\rightarrow\mathbb{Z}_{q}^{n\times m}\) be as defined in the beginning of Section 8.3.
* Run the alternate algorithm \(\kappa_{H}\leftarrow\mathcal{KG}(1^{n},1^{m},1^{q},\mathbf{A},H_{r})\) instead of \(\kappa_{\mathcal{Z}}\leftarrow\mathcal{KG}(1^{n},1^{m},1^{q},\mathbf{A}, \mathcal{Z})\).
* Compute the ciphertext \((\mathsf{CT}_{1}^{*},\mathsf{CT}_{2}^{*})\leftarrow\mathsf{RevDual}.\mathsf{ Enc}(\mathbf{A},\mathbf{y},\mu)\), where \(\mathsf{CT}_{1}^{*}\in\mathbb{Z}_{q}^{n\times m}\). Then, set \(x^{*}=r\oplus\mathsf{bindecomp}(\mathsf{CT}_{1}^{*})\). Send \(x^{*}\) to the adversary in the guessing phase.
\(\mathsf{H}_{3}\): This is the same hybrid as before, except that we choose \(\mathsf{CT}_{1}^{*}\xleftarrow{s}\mathbb{Z}_{q}^{n\times m}\) and \(\mathsf{CT}_{2}^{*}\xleftarrow{s}\mathbb{Z}_{q}^{n}\).
\(\mathsf{H}_{4}\): This is the \(\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},1)\) in Figure 14.
Note that hybrids \(\mathsf{H}_{3}\) and \(\mathsf{H}_{4}\) are identically distributed.
Suppose that the following holds:
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{1}^{\mathcal{A}}(1 ^{\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{H}_{2}^{\mathcal{A}}( 1^{\lambda})]=\frac{1}{2}+\delta_{1}(\lambda),\ \ \text{and}\] \[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{2}^{\mathcal{A}}( 1^{\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{H}_{3}^{\mathcal{A}}( 1^{\lambda})]=\frac{1}{2}+\delta_{2}(\lambda)\]
for some functions \(\delta_{1}(\lambda)\) and \(\delta_{2}(\lambda)\). We claim that either \(\delta_{1}(\lambda)\geq 1/\mathrm{poly}(\lambda)\) or \(\delta_{2}(\lambda)\geq 1/\mathrm{poly}(\lambda)\) must hold. This is easily seen as follows. By taking the sum of the two expressions above, we get
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{H}_{1}^{\mathcal{A}}( 1^{\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{H}_{2}^{\mathcal{A}}( 1^{\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{H}_{3}^{\mathcal{A}}( 1^{\lambda})]\] \[=\frac{1}{2}+\delta_{1}(\lambda)+\frac{1}{2}+\delta_{2}(\lambda).\]
Note that we have the identity \(\mathsf{Pr}[0\leftarrow\mathsf{H}_{2}^{\mathcal{A}}(1^{\lambda})]+\mathsf{ Pr}[1\leftarrow\mathsf{H}_{2}^{\mathcal{A}}(1^{\lambda})]=1\). Moreover, because the hybrids \(\mathsf{H}_{0}\) and \(\mathsf{H}_{1}\), as well as hybrids \(\mathsf{H}_{3}\) and \(\mathsf{H}_{4}\), are identically distributed, we have
\[\mathsf{Pr}[0\leftarrow\mathsf{H}_{1}^{\mathcal{A}}(1^{\lambda})] =\mathsf{Pr}[0\leftarrow\mathsf{H}_{0}^{\mathcal{A}}(1^{\lambda})]= \mathsf{Pr}[0\leftarrow\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},0)]\quad\text{ and}\] \[\mathsf{Pr}[1\leftarrow\mathsf{H}_{3}^{\mathcal{A}}(1^{\lambda})] =\mathsf{Pr}[1\leftarrow\mathsf{H}_{4}^{\mathcal{A}}(1^{\lambda})]= \mathsf{Pr}[1\leftarrow\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},1)].\]
Plugging the identities above into the equation from before, we get
\[\frac{1}{2}+\varepsilon(\lambda) =\frac{1}{2}\mathsf{Pr}[0\leftarrow\mathsf{Expt}^{\mathcal{A}}(1 ^{\lambda},0)]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\mathsf{Expt}^{\mathcal{A}}( 1^{\lambda},1)]\] \[=\frac{1}{2}+\delta_{1}(\lambda)+\delta_{2}(\lambda).\]
In other words, we get \(\delta_{1}(\lambda)+\delta_{2}(\lambda)=\varepsilon(\lambda)\), which implies that either \(\delta_{1}(\lambda)\geq 1/\mathrm{poly}(\lambda)\) or \(\delta_{2}(\lambda)\geq 1/\mathrm{poly}(\lambda)\). To complete the proof, we show that both \(\delta_{1}(\lambda)\) and \(\delta_{2}(\lambda)\) are negligible, which yields a contradiction to our assumption that \(\varepsilon=1/\mathrm{poly}(\lambda)\).
**Claim 8.16**.: _By the shift-hiding property14 of \((\mathcal{KG},\mathcal{E})\) in Claim 8.12, we have \(\delta_{1}(\lambda)\leq\mathsf{negl}(\lambda)\)._
Footnote 14: Technically, we are invoking the 1-bit indistinguishability variant of the shift-hiding property of \((\mathcal{KG},\mathcal{E})\) in Claim 8.12, which is implied by the regular indistinguishability notion.
Proof.: We first define alternate hybrids \(\widetilde{\mathsf{H}_{1}}\) and \(\widetilde{\mathsf{H}_{2}}\) as follows:
* \(\widetilde{\mathsf{H}_{1}}\) is the same as \(\mathsf{H}_{1}\) except that \(\mathsf{Revoke}\) is not applied on the returned state,
* \(\widetilde{\mathsf{H}_{2}}\) is the same as \(\mathsf{H}_{2}\) except that \(\mathsf{Revoke}\) is not applied on the returned state.
Since ignoring \(\mathsf{Revoke}\) only increases the success probability of the adversary, the following holds:
\[\frac{1}{2}\mathsf{Pr}[0\leftarrow\widetilde{\mathsf{H}_{1}}^{\mathcal{A}}(1^ {\lambda})]+\frac{1}{2}\mathsf{Pr}[1\leftarrow\widetilde{\mathsf{H}_{2}}^{ \mathcal{A}}(1^{\lambda})]\geq\frac{1}{2}+\delta_{1}(\lambda)\]
We now argue that \(\delta_{1}(\lambda)\leq\mathsf{negl}(\lambda)\).
Suppose not. We design a reduction \(\mathcal{B}\) that violates the shift-hiding property as follows.
* Sample \(r\xleftarrow{\$}\{0,1\}^{\ell}\). Send \((\mathcal{Z},H_{r})\) to the challenger.
* The challenger responds with \(pk=\left(\mathbf{A},\left\{\mathsf{CT}_{b}^{(i,j,\tau)}\right\}_{\begin{subarray}{ c}i\in[n],j\in[m]\\ \tau\in[\lceil\log(q)\rceil],b\in\{0,1\}\end{subarray}}\right)\)
* Compute \((\left|\psi_{\mathbf{y}}\right\rangle,\mathbf{y}\in\mathbb{Z}_{q}^{n})\leftarrow\mathsf{GenGauss}(\mathbf{A},\sigma)\), where \(\mathbf{A}\) is taken from the challenger's response.
* Set \(\rho_{k}=(pk,\left|\psi_{\mathbf{y}}\right\rangle)\).
* Compute \((\mathsf{CT}_{1},\mathsf{CT}_{2})\leftarrow\mathsf{RevDual.Enc}(\mathbf{A}, \mathbf{y},\mu)\), where \(\mathsf{CT}_{1}\in\mathbb{Z}_{q}^{n\times m}\) and \(\mathsf{CT}_{2}\in\mathbb{Z}_{q}^{n}\). Set \(x^{*}=r\oplus\mathsf{bindecomp}(\mathsf{CT}_{1})\).
* Compute \(\mathsf{Eval}(\rho_{k},x^{*})\) to obtain \(y^{*}\) while recovering \(\rho_{k}^{*}\) (using the almost-as-good-as-new lemma [1]), such that \(\mathsf{TD}(\rho_{k}^{*},\rho_{k})\leq\mathsf{negl}(\lambda)\).
* Send \(\rho_{k}^{*}\) to \(\mathcal{A}\).
* \(\mathcal{A}\) computes a state on two registers \(R\) and aux. It returns the state on the register \(R\).
* \(\mathcal{A}\), on input the register aux and \((x^{*},y^{*})\), outputs a bit \(b^{\prime}\).
* Output \(b^{\prime}\).
If \(pk\) is generated using \(\mathcal{KG}(1^{n},1^{m},1^{q},\mathbf{A},\mathcal{Z})\) then we are in the hybrid \(\widetilde{\mathsf{H}_{1}}^{\mathcal{A}}\). If \(pk\) is generated using \(\mathcal{KG}(1^{n},1^{m},1^{q},\mathbf{A},H_{r})\) then we are in the hybrid \(\widetilde{\mathsf{H}_{2}}^{\mathcal{A}}\). Thus, we violate the shift-hiding property with advantage \(\delta_{1}(\lambda)\). This completes the proof.
Next, we invoke the security of the \(n\)-bit variant of our key-revocable Dual-Regev scheme which follows from Claim 5.8 and Theorem 6.1.
**Claim 8.17**.: _By the security of our \(n\)-bit key-revocable Dual-Regev encryption scheme which is based on the transformation in Construction 1, we have that \(\delta_{2}(\lambda)\leq\mathsf{negl}(\lambda)\)._
Proof.: Suppose \(\delta_{2}(\lambda)=1/\mathrm{poly}(\lambda)\). Using \(\mathcal{A}\), we design a reduction \(\mathcal{B}\) that violates the revocation security of Construction 1, thus contradicting Theorem 6.1.
The reduction \(\mathcal{B}\) proceeds as follows.
* First, it receives as input \(\mathbf{A},\mathbf{y}\) and a quantum state \[\left|\psi_{\mathbf{y}}\right\rangle=\sum_{\begin{subarray}{c}\mathbf{x}\in \mathbb{Z}_{q}^{m}\\ \mathbf{A}\mathbf{x}=\mathbf{y}\end{subarray}}\rho_{\sigma}(\mathbf{x}) \left|\mathbf{x}\right\rangle.\]
* The reduction generates a quantum state \(\rho_{k}\) as follows:
* Sample a random string \(r\xleftarrow{s}\{0,1\}^{\ell}\).
* Let \(H_{r}:\{0,1\}^{\ell}\to\mathbb{Z}_{q}^{n\times m}\) be as defined in the beginning of Section 8.3.
* Run the algorithm \(\kappa_{H}\leftarrow\mathsf{K}\mathcal{G}(1^{n},1^{m},1^{q},\mathbf{A},H_{r})\) and parse \(\kappa_{H}\) as \((pk,sk)\).
* Set \(\rho_{k}=(pk,\left|\psi_{\mathbf{y}}\right\rangle)\). Send \(\rho_{k}\) to \(\mathcal{A}\).
* \(\mathcal{A}\) outputs a state on two registers \(R\) and aux. The register \(R\) is returned. The reduction forwards the register \(R\) to the challenger.
* The reduction then gets the challenge ciphertext \(\mathsf{CT}=[\mathsf{CT}_{1},\mathsf{CT}_{2}]^{\intercal}\in\mathbb{Z}_{q}^{n \times m}\times\mathbb{Z}_{q}^{n}\). The reduction then sets \[x^{*}=r\oplus\mathsf{bindecomp}(\mathsf{CT}_{1})\] and sends \(x^{*}\) to \(\mathcal{A}\) in the guessing phase, together with \(y=\lfloor\mathbf{S}_{x^{*}}\mathbf{y}+\mathsf{CT}_{2}\rfloor_{p}\) which is computed using the secret key \(\mathsf{SK}\) (c.f. Claim 8.11).
* \(\mathcal{A}\) outputs a bit \(b^{\prime}\). \(\mathcal{B}\) outputs \(b^{\prime}\).
There are two cases to consider here. In the first case, \(\mathsf{CT}=[\mathsf{CT}_{1},\mathsf{CT}_{2}]^{\intercal}\in\mathbb{Z}_{q}^{n\times m}\times\mathbb{Z}_{q}^{n}\) is a Dual-Regev ciphertext. Here, \(y=\lfloor\mathbf{S}_{x^{*}}\mathbf{y}+\mathsf{CT}_{2}\rfloor_{p}\) precisely corresponds to the output of the pseudorandom function on \(\rho_{k}\) and \(x^{*}\). In the second case, we have \(\mathsf{CT}=[\mathsf{CT}_{1},\mathsf{CT}_{2}]^{\intercal}\in\mathbb{Z}_{q}^{n\times m}\times\mathbb{Z}_{q}^{n}\), where \(\mathsf{CT}_{1}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n\times m}\) and \(\mathsf{CT}_{2}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{n}\). Here, \(y=\lfloor\mathbf{S}_{x^{*}}\mathbf{y}+\mathsf{CT}_{2}\rfloor_{p}\) is negligibly close to a uniform distribution on \(\mathbb{Z}_{p}^{m}\).
Thus, the first case precisely corresponds to \(\mathsf{H}_{2}\) and the second case corresponds to \(\mathsf{H}_{3}\). As a result, \(\mathcal{B}\) violates the revocation security of Construction 1 with advantage \(\delta_{2}(\lambda)\). This completes the proof.
Putting everything together, we have shown that
\[\Pr\left[b\leftarrow\mathsf{Expt}^{\mathcal{A}}(1^{\lambda},b)\ :\ b\stackrel{{\$}}{{ \leftarrow}}\{0,1\}\right]\leq\frac{1}{2}+\mathsf{negl}(\lambda).\]
|
2309.04901 | One-Bit-Aided Modulo Sampling for DOA Estimation | Modulo sampling has recently drawn a great deal of attention for cutting-edge
applications, due to overcoming the barrier of information loss through sensor
saturation and clipping. This is a significant problem, especially when the
range of signal amplitudes is unknown or in the near-far case. To overcome this
fundamental bottleneck, we propose a one-bit-aided (1bit-aided) modulo sampling
scheme for direction-of-arrival (DOA) estimation. On the one hand, one-bit
quantization involving a simple comparator offers the advantages of low-cost
and low-complexity implementation. On the other hand, one-bit quantization
provides an estimate of the normalized covariance matrix of the unquantized
measurements via the arcsin law. The estimate of the normalized covariance
matrix is used to implement blind integer-forcing (BIF) decoder to unwrap the
modulo samples to construct the covariance matrix, and subspace methods can be
used to perform the DOA estimation. Our approach named as 1bit-aided-BIF
addresses the near-far problem well and overcomes the intrinsic low dynamic
range of one-bit quantization. Numerical experiments validate the excellent
performance of the proposed algorithm. | Qi Zhang, Jiang Zhu, Fengzhong Qu, De Wen Soh | 2023-09-10T00:02:30Z | http://arxiv.org/abs/2309.04901v2 | # One-Bit-Aided Modulo Sampling for Doa Estimation
###### Abstract
Modulo sampling or unlimited sampling has recently drawn a great deal of attention for cutting-edge applications, due to overcoming the barrier of information loss through sensor saturation and clipping. This is a significant problem, especially when the range of signal amplitudes is unknown or in the near-far case. To overcome this fundamental bottleneck, we propose a one-bit-aided (1bit-aided) modulo sampling scheme for direction-of-arrival (DOA) estimation. On the one hand, one-bit quantization involving a simple comparator offers the advantages of low-cost and low-complexity implementation. On the other hand, one-bit quantization provides an estimate of the normalized covariance matrix of the unquantized measurements via the arcsin law. The estimate of the normalized covariance matrix is used to implement a blind integer-forcing (BIF) decoder to unwrap the modulo samples and construct the covariance matrix, and subspace methods can be used to perform the DOA estimation. Our approach, named 1bit-aided-BIF, addresses the near-far problem well and overcomes the intrinsic low dynamic range of one-bit quantization. Numerical experiments validate the excellent performance of the proposed algorithm compared to using a high-precision ADC directly in the given setup.
Qi Zhang\({}^{1}\) Jiang Zhu\({}^{2}\) Zhiwei Xu\({}^{2}\) De Wen Soh\({}^{1}\)\({}^{1}\) Information Systems Technology and Design, Singapore University of Technology and Design
\({}^{2}\)Ocean College, Zhejiang University, Zhoushan, 316021, China

**Index Terms**--DOA estimation, modulo sampling, one-bit sampling, integer-forcing decoder
## 1 Introduction
Direction-of-arrival (DOA) estimation refers to the process of estimating the bearing of targets from the outputs of a receiving sensor array, which is an active field of research in signal processing [1]. Traditional algorithms for DOA estimation such as subspace-based methods, compressed sensing-based methods, atomic norm-based approaches, and Bayesian approaches require high-resolution samples from the sensor array hardware [2, 3, 4, 5]. However, in many real-world scenarios, acquiring such observations is often impractical or requires power-consuming hardware devices, and two typical situations are introduced in [6]. One scenario involves the ambiguity of a signal's amplitude range, which could potentially exceed the dynamic range of the analog-to-digital converter (ADC) by a significant margin. Hence, measurements may be truncated, leading to information loss, and conventional algorithms may become ineffective under such circumstances. The other challenge is the near-far problem, which arises in scenarios involving two emitters, one considerably closer to the receiver than the other. Consequently, unless a high-bit ADC is utilized which is power-consuming, the sensor faces a dilemma: it can either prioritize the near-field emitter, resulting in the far-field emitter being submerged in quantization noise, or it can attempt to recover information from the far-field emitter, leading to the clipping of samples from the near-field emitter.
A strategy to address the sensor saturation issue in DOA estimation is based on the use of a 1-bit ADC, where measurements capture only the sign of the signal [7, 8]. However, this architecture faces limitations when dealing with the near-far problem, as the weak signal will be submerged in the quantization noise. Recently, modulo ADCs have been designed to address the above problem, where a modulo operator is applied before sampling and modulo samples are obtained [9, 10, 11, 12]. From the algorithm perspective, the unlimited sampling framework (USF) has been proposed and extended to DOA estimation, and a recovery guarantee has been established based on an oversampling assumption [13, 14, 6]. In addition, the integer-forcing (IF) decoder is proposed for zero-mean random vectors with a known covariance matrix and is extended to the blind version [15, 9, 16]. For jointly stationary processes with zero mean, the linear prediction method combined with the IF decoder with known autocorrelation functions is proposed and extended to the blind version [9, 17, 18]. Moreover, algorithms based on the Fourier domain are also proposed for periodic signals and the line spectrum [10, 19, 20, 21].
In this paper, we use both one-bit samples and modulo samples, which correspond to the most significant bit and the least significant \(B\) bits of the original samples, to estimate the DOAs for uniform and sparse linear arrays; the architecture is shown in Fig. 1. Furthermore, with the aid of one-bit samples, we propose an algorithm based on the IF decoder with an unknown covariance matrix, named one-bit-aided blind IF (1bit-aided-BIF). In detail, we use one-bit samples to construct the normalized covariance matrix based on the arcsin law [7]. Then, the IF decoder is applied to recover
the original signal. We will iteratively update the estimate of the covariance matrix using the recovered measurements that are consistent with the one-bit samples. Finally, subspace methods such as the root MUSIC algorithm are applied to perform the DOA estimation. Numerical experiments are conducted to demonstrate the performance of the proposed algorithm in the near-far problem.
## 2 Problem Setup
Consider linear arrays comprising \(N\) sensors located at positions \(\{\frac{d_{n}\lambda}{2}\}_{n=1}^{N}\), where \(d_{n}\in\mathbb{N}\) is a non-negative integer, and \(\lambda\) represents the signal wavelength. We define the set \(\mathbb{D}=\{d_{1},\cdots,d_{N}\}\) to describe the sensor spacing configuration. For the \(t\)-th snapshot, the noisy unquantized measurement is given by
\[\mathbf{g}(t)=\sum_{k=1}^{K}\mathbf{a}(\theta_{k})x_{k}(t)+\mathbf{w}(t),\;t= 1,\cdots,T, \tag{1}\]
where \(\theta_{k}\) and \(\mathbf{x}_{k}\triangleq[x_{k}(1),\cdots,x_{k}(T)]^{\mathrm{T}}\sim\mathcal{CN }(\mathbf{0},\sigma_{k}^{2}\mathbf{I}_{T})\) are the DOA and complex amplitudes of the \(k\)-th target, \(\mathbf{a}(\theta_{k})\) is the steering vector defined as
\[\mathbf{a}(\theta_{k})=\left[\mathrm{e}^{\mathrm{j}\pi d_{1}\sin\theta_{k}}, \mathrm{e}^{\mathrm{j}\pi d_{2}\sin\theta_{k}},\ldots,\mathrm{e}^{\mathrm{j} \pi d_{N}\sin\theta_{k}}\right]^{\mathrm{T}}, \tag{2}\]
and \(\mathbf{w}(t)\sim\mathcal{CN}(\mathbf{0},\sigma^{2}\mathbf{I})\) is the additive white Gaussian noise, and \(\mathbf{w}(t_{1})\) is independent of \(\mathbf{w}(t_{2})\) for \(t_{1}\neq t_{2}\). In this paper, several prototypes of linear arrays are considered as follows. In the case of Uniform Linear Arrays (ULAs), the sensors are uniformly spaced, with \(\mathbb{D}=\{0,1,\cdots,N-1\}\). For Coprime Arrays, the prototype set \(\mathbb{D}\) is defined as \(\{Pq\mid 0\leq q\leq Q-1\}\cup\{Qp\mid 0\leq p\leq P-1\}\), where \(P\) and \(Q\) are coprime integers, and \(N=P+Q-1\). In the context of Nested Arrays, we have \(\mathbb{D}=\{n\}_{n=1}^{N_{1}}\cup\{m\left(N_{1}+1\right)\}_{m=1}^{N_{2}}\), with \(N=N_{1}+N_{2}\).
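For concreteness, the snapshot model (1)-(2) can be simulated in a few lines of numpy. The sketch below is a hypothetical illustration (the array geometry, SNR values, and function names are ours, not taken from the paper):

```python
import numpy as np

def steering_vector(theta_deg, positions):
    """a(theta) of eq. (2); positions holds the integers d_n."""
    return np.exp(1j * np.pi * np.asarray(positions) * np.sin(np.deg2rad(theta_deg)))

def generate_snapshots(thetas_deg, sigmas, positions, T, noise_std=1.0, seed=0):
    """Unquantized snapshots g(1),...,g(T) of eq. (1), returned as an N x T matrix."""
    rng = np.random.default_rng(seed)
    A = np.stack([steering_vector(th, positions) for th in thetas_deg], axis=1)   # N x K
    crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
    X = np.asarray(sigmas)[:, None] * crandn(len(thetas_deg), T)   # source amplitudes x_k(t)
    W = noise_std * crandn(len(positions), T)                      # additive noise w(t)
    return A @ X + W

# Example: 10-sensor ULA, three sources at SNRs of 30, -10 and 15 dB
sigmas = 10 ** (np.array([30.0, -10.0, 15.0]) / 20)
G = generate_snapshots([-2.0, 3.0, 75.0], sigmas, np.arange(10), T=10_000)
```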
Rather than acquiring the discrete-time signal \(\mathbf{g}(t)\) generated by a high-dynamic-range and high-resolution ADC, we acquire the most significant bit from the \(1\)-bit ADC and the least significant \(B\) bits from the modulo ADC in this paper. Specifically, the one-bit quantized sample of the sensor \(n\) at time \(t\) is given by
\[h_{n}(t)=\mathrm{sign}(\Re\{g_{n}(t)\})/\sqrt{2}+\mathrm{j}\,\mathrm{sign}(\Im\{g_{n}(t)\})/\sqrt{2}, \tag{3}\]
where \(\Re\{g_{n}(t)\}\) and \(\Im\{g_{n}(t)\}\) return the real part and imaginary part of \(g_{n}(t)\) respectively. In addition, the \(B\)-bit quantized modulo sample from the \(n\)-th sensor at time \(t\) is
\[y_{n}(t)=\mathcal{Q}_{B,\lambda}(\mathcal{M}_{\lambda}(\Re\{g_{n}(t)\}))+ \mathrm{j}\mathcal{Q}_{B,\lambda}(\mathcal{M}_{\lambda}(\Im\{g_{n}(t)\})), \tag{4}\]
where \(\mathcal{M}_{\lambda}(z)=z-2\lambda\lfloor\frac{z}{2\lambda}+\frac{1}{2}\rfloor\) is the modulo operator with range \([-\lambda,\lambda]\), and \(\mathcal{Q}_{B,\lambda}(\cdot)\) is the \(B\)-bit quantization operator. Let \(D=2^{B}\); the quantization intervals of \(\mathcal{Q}_{B,\lambda}(\cdot)\) are \(\left\{\left(-\lambda+\frac{2\lambda l}{D},-\lambda+\frac{2\lambda(l+1)}{D}\right)\right\}_{l=0}^{D-1},\) and for \(z\in\left(-\lambda+\frac{2\lambda l}{D},-\lambda+\frac{2\lambda(l+1)}{D}\right)\) in the \(l\)-th interval, the quantized value is \(\mathcal{Q}_{B,\lambda}(z)=-\lambda+\frac{(2l+1)\lambda}{D}\), which is the midpoint of the interval. For notational convenience, we are using a uniform quantizer here. However, it is worth noting that the algorithm we propose later is equally applicable to non-uniform quantizers.
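A minimal numpy sketch of the two acquisition channels (3)-(4) follows; it is an illustration under the uniform-quantizer convention above, not the authors' code, and the function names are ours:

```python
import numpy as np

def one_bit(G):
    """One-bit samples h_n(t) of eq. (3): signs of the I and Q channels, scaled by 1/sqrt(2)."""
    return (np.sign(G.real) + 1j * np.sign(G.imag)) / np.sqrt(2)

def modulo(z, lam):
    """Centered modulo M_lambda(z) = z - 2*lam*floor(z/(2*lam) + 1/2), with range [-lam, lam)."""
    return z - 2 * lam * np.floor(z / (2 * lam) + 0.5)

def quantize(z, B, lam):
    """Uniform B-bit quantizer Q_{B,lam}: maps z in [-lam, lam) to the midpoint of its cell."""
    D = 2 ** B
    l = np.clip(np.floor((z + lam) * D / (2 * lam)), 0, D - 1)
    return -lam + (2 * l + 1) * lam / D

def modulo_adc(G, B, lam):
    """Quantized modulo samples y_n(t) of eq. (4), applied separately to I and Q."""
    return quantize(modulo(G.real, lam), B, lam) + 1j * quantize(modulo(G.imag, lam), B, lam)
```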
## 3 Algorithm
In this section, the 1bit-aided-BIF algorithm will be proposed to perform the DOA estimation given the one-bit observations and quantized modulo measurements.
### Estimate the Normalized Covariance Matrix
Let \(\bar{\mathbf{C}}\triangleq\mathbb{E}[\mathbf{h}\mathbf{h}^{\mathrm{H}}]\) be the covariance matrix of the one-bit samples (3). Obviously \(\bar{C}_{ii}=1\). According to the arcsine law, the correlation matrix \(\widetilde{\mathbf{C}}\) (or the normalized covariance matrix) of the unquantized measurements (1) and the covariance matrix \(\bar{\mathbf{C}}\) of the one-bit measurements are related via [7]
\[\mathbf{\widetilde{C}}=\sin\left(\frac{\pi}{2}\Re\left\{\bar{\mathbf{C}} \right\}\right)+\mathrm{j}\sin\left(\frac{\pi}{2}\Im\left\{\bar{\mathbf{C}} \right\}\right), \tag{5}\]
where \(\sin(\cdot)\) is an element-wise operator. The empirical covariance matrix (which is also a normalized covariance matrix) estimate of the one-bit measurements is

\[\widehat{\bar{\mathbf{C}}}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{h}(t)\mathbf{h}(t)^{\mathrm{H}}. \tag{6}\]
According to (5) and (6), an estimate of the normalized covariance matrix \(\widehat{\widetilde{\mathbf{C}}}\) can be obtained.
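A small numpy sketch of this step (illustrative only): form the empirical one-bit covariance of (6) and push it through the element-wise sine map of (5):

```python
import numpy as np

def normalized_cov_from_one_bit(H):
    """H: N x T matrix of one-bit samples h(t). Returns the arcsine-law
    estimate of the normalized covariance of the unquantized snapshots g(t)."""
    C_bar = H @ H.conj().T / H.shape[1]          # empirical one-bit covariance, eq. (6)
    return np.sin(np.pi / 2 * C_bar.real) + 1j * np.sin(np.pi / 2 * C_bar.imag)   # eq. (5)
```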
As the IF decoder introduced later is specifically designed for real-valued signals, we will consider the real vector \(\bar{\mathbf{g}}(t)=[\Re\{\mathbf{g}(t)\};\Im\{\mathbf{g}(t)\}]\) instead of \(\mathbf{g}(t)\) below. Indeed, based on the facts that \(\mathbb{E}[\mathbf{g}\mathbf{g}^{\mathrm{H}}]=\mathbf{C}\) and \(\mathbb{E}[\mathbf{g}\mathbf{g}^{\mathrm{T}}]=\mathbf{0}\), the covariance matrix \(\mathbf{C}_{r}\triangleq\mathbb{E}[\bar{\mathbf{g}}\bar{\mathbf{g}}^{\mathrm{T}}]\) and \(\mathbf{C}\triangleq\mathbb{E}[\mathbf{g}\mathbf{g}^{\mathrm{H}}]\) have the following relationship
\[\mathbf{C}_{r}=\left[\begin{array}{cc}\Re\{\mathbf{C}\}&-\Im\{\mathbf{C}\}\\ \Im\{\mathbf{C}\}&\Re\{\mathbf{C}\}\end{array}\right]. \tag{7}\]
Thus the estimate of the normalized covariance matrix of \(\bar{\mathbf{g}}(t)\), denoted as \(\widehat{\widetilde{\mathbf{C}}}_{r}\), can be obtained according to the arcsine law and equation (7).
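The stacking of (7) is a one-line operation; a hypothetical helper (name chosen by us) for later reuse:

```python
import numpy as np

def real_stacked_cov(C):
    """Map a complex covariance C to the covariance C_r of [Re{g}; Im{g}], following eq. (7)."""
    return np.block([[C.real, -C.imag],
                     [C.imag,  C.real]])
```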
### Integer-Forcing Decoder
Let \(\bar{\mathbf{y}}(t)=[\Re\{\mathbf{y}(t)\};\Im\{\mathbf{y}(t)\}]\), we have the fact that
\[\bar{\mathbf{y}}(t)=\bar{\mathbf{g}}(t)+2\lambda\mathbf{\epsilon}(t)+\mathbf{z}(t), \tag{8}\]
where \(\mathbf{z}(t)=\bar{\mathbf{y}}(t)-\mathcal{M}_{\lambda}(\bar{\mathbf{g}}(t))\) is the quantization noise, and \(\mathbf{\epsilon}(t)\in\mathbb{Z}^{2N}\) is an integer vector. Although \(\mathbf{z}(t)\) is determined by \(\bar{\mathbf{g}}(t)\), under suitable conditions, models that assume that each element of \(\mathbf{z}(t)\) follows a uniform distribution \(\mathcal{U}([-\frac{\lambda}{D},\frac{\lambda}{D}])\) provide good performance [22]. Note that for any integer matrix \(\mathbf{A}\in\mathbb{Z}^{2N\times 2N}\), based on the fact that \(\mathcal{M}_{\lambda}(2\lambda\mathbf{A}\mathbf{\epsilon}(t))=\mathbf{0}\), we have
\[\mathcal{M}_{\lambda}(\mathbf{A}(\bar{\mathbf{g}}(t)+\mathbf{z}(t)))= \mathcal{M}_{\lambda}(\mathbf{A}\bar{\mathbf{y}}(t)). \tag{9}\]
In the IF decoder, an invertible matrix \(\widehat{\mathbf{A}}\) is obtained which aims to compress the amplitude of \(\bar{\mathbf{g}}(t)\) by solving the optimization problem
\[\min_{\begin{subarray}{c}\mathbf{A}\in\mathbb{Z}^{2N\times 2N} \\ \det(\mathbf{A})\neq 0\end{subarray}}\max_{k=1,\ldots,2N}\mathbb{E}\left(\left\| \mathbf{a}_{k}^{\mathrm{T}}(\bar{\mathbf{g}}(t)+\mathbf{z}(t))\right\|^{2}\right)\] \[\iff\min_{\begin{subarray}{c}\mathbf{A}\in\mathbb{Z}^{2N\times 2N} \\ \det(\mathbf{A})\neq 0\end{subarray}}\max_{k=1,\ldots,2N}\mathbf{a}_{k}^{T} \left(\mathbf{C}_{r}+\frac{\lambda^{2}}{3D^{2}}\mathbf{I}_{2N}\right)\mathbf{a }_{k}, \tag{10}\]
where \(\mathbf{a}_{k}\) is the \(k\)th column of \(\mathbf{A}\). This optimization problem is NP-hard in general, and the Lenstra-Lenstra-Lovasz lattice reduction (LLL) algorithm can be applied to approximately solve it in polynomial time [23]. Given that \(\mathcal{M}_{\lambda}(\widehat{\mathbf{A}}(\bar{\mathbf{g}}(t)+\mathbf{z}(t) ))=\widehat{\mathbf{A}}(\bar{\mathbf{g}}(t)+\mathbf{z}(t))\) with high probability, eq. (9) implies \(\widehat{\mathbf{A}}(\bar{\mathbf{g}}(t)+\mathbf{z}(t))=\mathcal{M}_{\lambda} (\widehat{\mathbf{A}}\bar{\mathbf{y}}(t))\) and \(\bar{\mathbf{g}}(t)\) can be estimated as
\[\widehat{\mathbf{g}}(t)=\widehat{\mathbf{A}}^{-1}\mathcal{M}_{\lambda}( \widehat{\mathbf{A}}\bar{\mathbf{y}}(t)). \tag{11}\]
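Assuming an invertible integer matrix \(\widehat{\mathbf{A}}\) has already been obtained (e.g. via the LLL-based step above), the unwrapping of (11) is a short computation. The sketch below is illustrative and the names are ours:

```python
import numpy as np

def if_decode(y_bar, A_hat, lam):
    """Recover the stacked real signal from modulo samples via eq. (11):
    g_hat = A_hat^{-1} M_lambda(A_hat @ y_bar). y_bar may be a vector or a 2N x T matrix."""
    folded = A_hat @ y_bar
    unwrapped = folded - 2 * lam * np.floor(folded / (2 * lam) + 0.5)   # M_lambda(.)
    return np.linalg.solve(A_hat, unwrapped)
```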
### _1bit-Aided-BIF Algorithm_
As the covariance matrix \(\mathbf{C}_{r}\) is not known in our setting, we use the LLL algorithm to solve an approximate optimization problem of (10) in the initialization step as shown below
\[\widehat{\mathbf{A}}=\min_{\begin{subarray}{c}\mathbf{A}\in\mathbb{Z}^{2N\times 2N}\\ \det(\mathbf{A})\neq 0\end{subarray}}\max_{k=1,\ldots,2N}\mathbf{a}_{k}^{T}\widehat{\widetilde{\mathbf{C}}}_{r}\mathbf{a}_{k}, \tag{12}\]
where \(\widehat{\widetilde{\mathbf{C}}}_{r}\) is the estimate of the normalized covariance matrix derived in Section 3.1. Then, we can obtain the estimates of \(\bar{\mathbf{g}}(t)\) according to (11). Let \(\mathbb{T}=\{t\,|\,\text{sign}(\widehat{\bar{\mathbf{g}}}(t))/\sqrt{2}=\bar{\mathbf{h}}(t)\}\) denote the set of time instances where the signs of the recovered samples agree with the observed one-bit samples, and the covariance matrix \(\mathbf{C}_{r}\) can be estimated as
\[\widehat{\mathbf{C}}_{r}=\frac{1}{|\mathbb{T}|}\sum_{t\in\mathbb{T}}\widehat{ \widehat{\mathbf{g}}}(t)\widehat{\mathbf{g}}(t)^{\mathrm{T}}. \tag{13}\]
Next, we update \(\widehat{\mathbf{A}}\) by solving (10), where \(\mathbf{C}_{r}\) is replaced by \(\widehat{\mathbf{C}}_{r}\). We will iteratively update the estimate of the covariance matrix \(\widehat{\mathbf{C}}_{r}\) and the set \(\mathbb{T}\) until \(\widehat{\mathbf{C}}_{r}\) achieves convergence or the maximum number of iterations is reached. Finally, the covariance matrix of \(\mathbf{g}(t)\) can be estimated as \(\widehat{\mathbf{C}}=\frac{1}{|\mathbb{T}|}\sum_{t\in\mathbb{T}}\widehat{\mathbf{g}}(t)\widehat{\mathbf{g}}(t)^{\mathrm{H}}\), where \(\widehat{\mathbf{g}}(t)\) is the complex form of \(\widehat{\bar{\mathbf{g}}}(t)\), and subspace-based algorithms such as root MUSIC can be applied to perform DOA estimation. Overall, our algorithm is summarized in Algorithm 1.
```
1:Inputs: Modulo samples \(\mathbf{y}(t)\), \(1\)-bit samples \(\mathbf{h}(t)\), \(t=1,\cdots,T\); number of signals \(K\); maximum number of iterations \(\mathrm{Iter}_{\max}\).
2:Estimate the normalized covariance matrix \(\widetilde{\mathbf{C}}_{r}\) according to the arcsine law and (7).
3:Estimate \(\mathbf{A}\) by solving (12) using the LLL algorithm.
4:Estimate covariance matrix \(\mathbf{C}_{r}\) according to (13).
5:while\(\widehat{\mathbf{C}}_{r}\) does not converge or \(\mathrm{Iter}_{\max}\) is not reached do
6: Calculate \(\widehat{\mathbf{A}}\) by solving (10) using the LLL algorithm, where \(\mathbf{C}_{r}\) is substituted with \(\widehat{\mathbf{C}}_{r}\).
7: Update the estimate of the covariance matrix \(\widehat{\mathbf{C}}_{r}\) (13).
8:endwhile
9:Estimate the DOAs \(\mathbf{\theta}\) by performing root MUSIC on \(\widehat{\mathbf{C}}\).
10:Outputs:\(\{\widehat{\mathbf{\theta}}\}\).
```
**Algorithm 1** 1bit-aided-BIF Algorithm
Fig. 1: Computational array signal processing setup using the one-bit-aided modulo sampling/unlimited sensing architecture. Modulo non-linearity maps high-dynamic-range sensor array samples into low-dynamic-range folded samples. The one-bit quantization preserves the sign information of the original measurements. The 1bit-aided-BIF algorithm is used to carry out the estimation of DOAs based on the unified measurement scheme.
## 4 Simulation
In this section, numerical experiments are conducted using a ULA to verify the performance of the proposed 1bit-aided-BIF algorithm in the near-far problem. The number of sources is \(K=3\), and they are located at \(\mathbf{\theta}=[-2^{\circ};3^{\circ};75^{\circ}]\). The signal-to-noise ratio (SNR) of the \(k\)-th source is defined as \(\mathrm{SNR}_{k}\triangleq 20\log\sigma_{k}/\sigma\). We set \(\mathrm{SNR}_{1}=30\) dB and \(\mathrm{SNR}_{3}=15\) dB. The second signal \(\theta_{2}\) will be set as the weak signal with the lowest SNR in the following experiments. \(\mathbf{\theta}\) is detected provided that \(\min_{i\in[K]}|\theta_{k}-\widehat{\theta}_{i}|\leq 0.1^{\circ}\) for all \(k\), and the detection probability \(\mathrm{P}_{\mathrm{D}}\) is used as a criterion for performance evaluation. For comparison with the benchmark (conventional ADC) and to highlight the benefits of the modulo ADC in quantization, we assume that the dynamic range of the conventional ADC matches the signal \(\mathbf{g}(t)\), and the threshold of the conventional ADC for the I and Q channels is set to four times the standard deviation, i.e., \(4\sqrt{(\sum_{k=1}^{K}\sigma_{k}^{2})/2}\). The proposed algorithm also addresses the case of sparse linear arrays. All statistical results presented here are averaged over 1000 Monte Carlo trials.
Fig. 2 shows the MUSIC spectrum of the 1bit-aided-BIF algorithm with a \(4\)-bit modulo ADC, and the samples and the recovered signal for \(t=15\) are plotted. The signal of \(t=15\) is perfectly recovered and has a smaller NMSE (\(-41.3\) dB) compared to the samples obtained by 5-bit ADC (\(-26.4\) dB) due to low quantization noise. Compared to conventional ADC, the MUSIC spectrum of the one-bit-aided modulo ADC framework results in improved DOA estimation performance, i.e., the peak near the weak target location (\(3^{\circ}\)) is sharper, and the corresponding peak localization is closer to the true DOA.
Fig. 3 shows the performance of the 1bit-aided-BIF algorithm with different modulo ADC bit depths against the SNR of the weak signal. We set \(T=10^{4}\). As \(\text{SNR}_{2}\) increases, all cases show improved detection probabilities. The modulo ADC with \(B=3\) (equivalent to \(4\) bits, including the \(1\)-bit ADC) exhibits performance on par with a \(6\)-bit ADC. Furthermore, modulo ADCs with \(B=4\) and \(B=5\) (\(5\) and \(6\) bits in total) outperform the \(9\)-bit ADC. Fig. 4 depicts the 1bit-aided-BIF algorithm's performance across various modulo ADC bit depths versus the number of snapshots, with \(\mathrm{SNR}_{2}=-10\) dB. As \(T\) increases, all cases exhibit enhanced detection probabilities. Specifically, the modulo ADC with \(B=3\) matches the performance of a \(6\)-bit ADC. Additionally, modulo ADCs with \(B=4\) and \(B=5\) (a total of \(5\) and \(6\) bits) surpass the \(9\)-bit ADC when \(T>10^{3}\).
## 5 Conclusion
This paper proposes the one-bit-aided modulo ADC sampling framework to tackle the clipping and near-far problem in DOA estimation, and the 1bit-aided-BIF algorithm is proposed to perform the DOA estimation. Numerical experiments demonstrate the excellent performance of the proposed sampling architecture and algorithm compared to conventional high-precision ADC.
Figure 4: The detection probability versus the snapshot.
Figure 3: The detection probability versus \(\mathrm{SNR}_{2}\).
Figure 2: The performance of 1bit-aided-BIF algorithm in near-far problem with \(\mathrm{SNR}_{2}=-10\) dB and \(T=10^{4}\). |
2309.08685 | Fairly Allocating Goods in Parallel | We initiate the study of parallel algorithms for fairly allocating
indivisible goods among agents with additive preferences. We give fast parallel
algorithms for various fundamental problems, such as finding a Pareto Optimal
and EF1 allocation under restricted additive valuations, finding an EF1
allocation for up to three agents, and finding an envy-free allocation with
subsidies. On the flip side, we show that fast parallel algorithms are unlikely
to exist (formally, $CC$-hard) for the problem of computing Round-Robin EF1
allocations. | Rohan Garg, Alexandros Psomas | 2023-09-15T18:21:07Z | http://arxiv.org/abs/2309.08685v1 | # Fairly Allocating Goods in Parallel
###### Abstract
We initiate the study of parallel algorithms for fairly allocating indivisible goods among agents with additive preferences. We give fast parallel algorithms for various fundamental problems, such as finding a Pareto Optimal and EF1 allocation under restricted additive valuations, finding an EF1 allocation for up to three agents, and finding an envy-free allocation with subsidies. On the flip side, we show that fast parallel algorithms are unlikely to exist (formally, _CC_-hard) for the problem of computing Round-Robin EF1 allocations.
## 1 Introduction
The last two decades have witnessed a remarkable improvement in our computational power, largely due to the widespread adoption of parallel computing. Parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. At the same time, in the overwhelming majority of the AI literature, "efficient algorithm" is a synonym for "efficient sequential algorithm." In this paper, we initiate the study of parallel algorithms for a fundamental problem in fair division: allocating a set of \(m\) indivisible items to \(n\) agents with additive preferences.
Some classical algorithms for this problem proceed in rounds, e.g. the Round Robin procedure or the Envy-Cycle Elimination procedure [10] that achieve envy-freeness up to one item (henceforth, EF1), while others are computationally intractable (NP-hard), e.g. the maximum Nash welfare (MNW) solution that achieves Pareto efficiency (henceforth, PO) and EF1. Our goal in this paper is to design, for various, fundamental fair division tasks, algorithms that run in polylogarithmic time and use a polynomial number of processors, or to prove that no such algorithm is likely to exist.
### Our contribution
As a warm-up for the reader unfamiliar with the capabilities of parallel algorithms, in Section 3 we consider the basic problem of whether, given an allocation, various fairness properties can be quickly verified in parallel. We show that envy-freeness (EF), envy-freeness up-to-one item (EF1), and envy-freeness up-to-any item (EFX) can all be checked efficiently in parallel, i.e. we give _NC_ algorithms for verifying these properties. In Section 4, we show how to use algorithms with logarithmic query complexity [11] to get fast parallel algorithms for computing EF1 allocations for two and three agents, as well as how to compute EF1 and fractionally PO allocations for two agents, by mimicking the adjusted winner process.
In Sections 5 and 6, we study the complexity of allocating items to _restricted additive agents_, that is when the value of agent \(i\) for item \(j\) is either \(0\) or \(v_{j}\) (i.e. each item has an _inherent_ value
and agent \(i\) either sees this value or not), and the value of agent \(i\) for a subset of items \(S\) is simply \(\sum_{j\in S}v_{i,j}\). We first explore the complexity of finding an EF1 allocation. Arguably, the simplest EF1 algorithm in this setting is the Round-Robin procedure (agents choose items one at a time, following a fixed order). In Section 5, we show that, for a given order \(\sigma\) over the agents, one cannot "shortcut" the execution of Round-Robin: the problem is _CC_-hard. Surprisingly, this holds even for the case when each agent positively values at most 3 items and each item is positively valued by at most 3 agents.
Despite this strong negative result, we can efficiently, in parallel, compute an EF1 and PO allocation when there are a constant number of inherent values, even when agents positively value more items, and items are positively valued by more agents. Furthermore, quite similarly to Round-Robin, the allocations output by our algorithm are "balanced," in the sense that agents receive the same number of items (up to divisibility issues). The complexity of our algorithm is parameterized by \(t\), the number of inherent values: it runs in time \(O(\log^{2}(mn))\) and requires \(O(m^{5.5+t}n^{5.5})\) processors. Our algorithm is via a reduction to the problem of _maximum weight perfect matching_ in a bipartite graph. A beautiful result of [13] shows that a _minimum_ weight perfect matching (which can be used to find a maximum weight perfect matching) can be found efficiently in parallel when the weight of the heaviest edge is polynomially bounded. Our reduction creates multiple copies of each agent such that the unique item matched to the \(j\)-th copy corresponds to the item allocated in the \(j\)-th round of _some_ Round-Robin procedure (and hence the overall allocation is EF1). The edge weights are increasing (for different copies of the same agent), in a way that every maximum weight matching must give a high-value item to a copy of agent \(i\) before giving two high-value items to copies of a different agent. The restriction on the valuations allows us to control the rate at which the weights increase, and specifically bound the maximum weight by a polynomial, so that the algorithm of [13] can be used. We note that when weights are not bounded by a polynomial, the maximum weight matching problem cannot be solved efficiently in parallel (formally, the problem is _CC_-hard), so removing the condition on the valuation functions would require a fundamentally different approach.
Finally, in Section 7, we study the problem of fair allocation with subsidies [17, 1], where the goal is to find an integral allocation of the items as well as payments to the agents, such that the overall solution is envy-free. We give an _NC_ algorithm for this problem, and, in fact, prove that one can compute similar solutions in parallel even in the presence of additional constraints on the payments, e.g. "A should not be paid more than B", or "A should be paid no more than 10 dollars." We formulate the problem of finding a constraint-satisfying and envy-eliminating vector as a purely graph-theoretic problem on a graph we call the _payment rejection graph_. The constraints are included by adding edges to this graph. In the most general sense, we can add edges to our graph that correspond to a constraint of the form "if agent \(i\) gets paid more than \(x\) dollars, then agent \(j\) must get paid more than \(y\) dollars". Any meaningful overall constraint that can be formulated as a set of such smaller constraints can be added to the problem instance. We highlight that it is not straightforward to implement such constraints in the existing algorithms for the fair division with subsidies problem, especially if one insists on a parallel solution. Our main insight here is that by carefully constructing a large graph to represent the set of all payment vectors, the problem of simultaneously eliminating envy and satisfying constraints can be solved by computing directed reachability in parallel.
### Related work
Understanding the parallel complexity of various problems has been a central theme in theoretical computer science, with some major recent breakthroughs, e.g. [1]. However, the parallel
complexity of problems in fair division remains relatively unstudied. The closest works to ours are that of [23] and [11]. [24] study the housing allocation and housing market problems, and give parallel and distributed algorithms. The housing allocation problem asks for a matching between \(n\) agents and \(n\) houses when agents have strict orderings over the houses. The housing market problem asks for a matching between \(n\) agents and \(n\) houses when the agents arrive at a market each owning a single house. On the flip side, [23] show that finding the core of a housing market is _CC_-hard by showing that the Top-Trading Cycle Algorithm also solves a _CC_-complete problem: Lexicographically First Maximal Matching. In [11], the authors study the parallel complexity of allocating \(m\) divisible homogeneous resources to a set of \(n\) agents with nondecreasing utility functions over the amount of each resource received. They show, that for \(n\) processors, the parallel time complexity of finding an allocation that has welfare no more than \(\epsilon\) less than a welfare-maximizing allocation is lower bounded by \(\Omega(m\log\frac{1}{n\epsilon})\). They also give an efficient parallel algorithm that computes an approximately accurate solution for \(m=2\) resources.
## 2 Preliminaries
We consider the problem of allocating a set \(\mathcal{M}\) of indivisible goods, labeled by \(\{1,\ldots,m\}\), to a set of agents \(\mathcal{N}\), labeled by \(\{1,\ldots,n\}\). A _fractional_ allocation \(X\in[0,1]^{n\cdot m}\) defines for each agent \(i\in\mathcal{N}\) and \(j\in\mathcal{M}\) the fraction of item \(j\) that agent \(i\) receives. An allocation \(X\) is _integral_ if \(X_{i,j}\in\{0,1\}\) for all \(i\in\mathcal{N}\) and \(j\in\mathcal{M}\). An allocation \(X=(X_{1},\ldots,X_{n})\) is _complete_ if \(\cup_{i\in\mathcal{N}}X_{i}=\mathcal{M}\) and _partial_ otherwise. Unless stated otherwise, we use allocation to refer to a complete allocation. We use the term _bundle_ to refer to a subset of items, and use \([k]\) to denote the set \(\{1,\ldots,k\}\).
Each agent \(i\in\mathcal{N}\) has a private valuation function \(v_{i}:2^{\mathcal{M}}\to\mathbb{R}_{+}\) which describes the utility agent \(i\) receives for each bundle. A valuation function \(v_{i}\) is _additive_ if \(v_{i}(X_{i})=\sum_{j\in X_{i}}X_{i,j}\cdot v_{i}(\{j\})\). A valuation function \(v_{i}\) is _restricted additive_ when \(v_{i}\) is additive, and for each item \(g\in\mathcal{M}\), \(v_{i}(g)\in\{0,v(g)\}\). To ease notation, we write \(v_{i,j}=v_{i}(\{j\})\) for the value of agent \(i\) for item \(j\).
An allocation \(X\) is _envy-free_ (EF) if \(v_{i}(X_{i})\geq v_{i}(X_{j})\) for all agents \(i,j\in\mathcal{N}\). Since integral EF allocations don't always exist (e.g. consider the case of a single item and two agents that have positive value for it), the community has turned to notions of approximate fairness. An integral allocation \(X\) is _envy-free up to one good_ (EF1) if for all agents \(i,j\in\mathcal{N}\) there exists a good \(g\in X_{j}\) such that \(v_{i}(X_{i})\geq v_{i}(X_{j}\backslash g)\)[12]. An integral allocation \(X\) is _envy-free up to any good_ (EFX) if for all agents \(i,j\in\mathcal{N}\), for all goods \(g\in X_{j}\), \(v_{i}(X_{i})\geq v_{i}(X_{j}\backslash g)\)[12]. The _envy-graph_ for an allocation \(X\) is the complete weighted directed graph \(G_{X}=(\mathcal{N},E)\), where there is a vertex for each agent \(i\in\mathcal{N}\), and there is an edge \(e\in E\) from vertex \(i\) to vertex \(j\) with the weight \(v_{i}(X_{j})-v_{i}(X_{i})\)[12].
An allocation \(X\)_Pareto dominates_ another allocation \(Y\) if \(v_{i}(X_{i})\geq v_{i}(Y_{i})\), for all \(i\in\mathcal{N}\), and there exists some agent \(j\) such that \(v_{j}(X_{j})>v_{j}(Y_{j})\). An integral allocation is called _Pareto-Optimal_ (PO) or _Pareto-Efficient_ (PE) if no other integral allocation Pareto dominates it. An allocation is called _Fractionally Pareto-Optimal_ (fPO) if no other (integral or fractional) allocation Pareto dominates it.
**Fair division with subsidies.** In the problem of fair division with subsidies, we eliminate the envy of an allocation by using payments. An _allocation with payments_ \(X_{\vec{q}}=(X,\vec{q})\) is a tuple of an integral allocation \(X\) and a payment vector \(\vec{q}=(q_{1},\ldots,q_{n})\), where \(q_{i}\) is the payment to agent \(i\). Under such an allocation with payments \(X_{\vec{q}}\), agent \(i\)'s utility is \(v_{i}(X_{i})+q_{i}\). We can extend the definition of envy-freeness to this setting: an allocation with payments \((X,\vec{q})\) is _envy-free_ if \(v_{i}(X_{i})+q_{i}\geq v_{i}(X_{j})+q_{j}\) for all agents \(i,j\in\mathcal{N}\). An allocation \(X\) is _envy-freeable_ if there exists a
payment vector \(\vec{q}\) such that \((X,\vec{q})\) is envy-free. For a given envy-freeable allocation \(X\), a payment vector \(\vec{q}\) is _envy-eliminating_ if the allocation with payments \(X_{\vec{q}}\) is _envy-free_. [10] prove that, given an envy-freeable allocation \(X\), one can find an envy-eliminating payment vector for \(X\) by computing all-pairs-shortest paths on the envy graph of \(X\) with the edge weights negated.
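To illustrate that last fact, here is a hypothetical numpy sketch (interface and names are ours): for an envy-freeable allocation, one standard way to obtain envy-eliminating payments, consistent with the shortest-path fact cited above, is to pay each agent the maximum weight of a path starting at that agent in the envy graph (equivalently, a shortest-path computation with negated weights).

```python
import numpy as np

def envy_eliminating_payments(V, bundles):
    """V[i, j]: additive value of agent i for item j (numpy array); bundles[i]: list of items of agent i.
    Assumes the allocation is envy-freeable (no positive-weight cycle in its envy graph).
    Returns q with q_i = maximum weight of a path starting at vertex i in the envy graph."""
    n = len(bundles)
    val = np.array([[V[i, bundles[k]].sum() for k in range(n)] for i in range(n)])
    W = val - np.diag(val)[:, None]          # W[i, k] = v_i(X_k) - v_i(X_i)
    q = np.zeros(n)
    for _ in range(n):                       # Bellman-Ford-style longest-path relaxation
        q = np.maximum(0.0, (W + q[None, :]).max(axis=1))
    return q
```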
**Parallel computation.** For sequential algorithms, our model of computation is typically a single processor that has access to some memory. For parallel algorithms, in this paper we adopt the CREW (**C**oncurrent **R**ead **E**xclusive **W**rite) PRAM (Parallel RAM) model of computation [12]. The CREW PRAM model allows simultaneous access to any one memory location for read instructions only.1 We assume a shared memory model where each processor has some local memory to execute its program and all processors can access global shared memory. Additionally, all computation is _synchronous_, i.e., all processors are coordinated by some common clock.
Footnote 1: It is well known that the strongest PRAM model, the CRCW PRAM model, with \(p\) processors can be simulated by the weakest PRAM model, the EREW PRAM model, with \(p\) processors with at most a \(O(\log p)\) factor slowdown [11].
To describe parallel algorithms, we use \(p_{k}\) to denote the \(k\)-th processor. Often we will index processors by items or agents or pairs, e.g. \(p_{j}\) for the processor assigned to item \(j\), or \(p_{(i,j)}\) for the processor assigned to the agent \(i\), item \(j\) pair. We give the basic notions of efficiency and hardness in the parallel world as well as some useful parallel primitives. A reader familiar with parallel algorithms can safely skip the remainder of this section.
For sequential algorithms, we seek polynomial time algorithms, a.k.a algorithms in \(P\). The analog of this for parallel algorithms is _NC_ or _Nick's Class_. The randomized counterpart of \(P\) is _RP_; similarly, here we have _RNC_.
**Definition 1** (\(\mathit{NC}^{k}\)[11]).: The class \(\mathit{NC}^{k}\) includes all problems of input size \(N\) that can be solved in time \(O(\log^{k}N)\) using a polynomial in \(N\) number of processors.
**Definition 2** (\(\mathit{RNC}^{k}\)[11]).: The class \(\mathit{RNC}^{k}\) includes all problems of input size \(N\) that can be solved in time \(O(\log^{k}N)\) using a polynomial in \(N\) number of processors, where each processor can generate an (independently drawn) uniformly random integer in the range \([1,\ldots,M]\) for some integer \(M\geq 1\).
The class \(\mathit{NC}\) (resp. \(\mathit{RNC}\)) includes all problems of input size \(N\) that are in \(\mathit{NC}^{k}\) (resp. \(\mathit{RNC}^{k}\)) for some constant (with respect to \(N\)) \(k\). We seek \(\mathit{NC}\) and \(\mathit{RNC}\) algorithms. That is, when we say that some problem can be solved efficiently in parallel, this means there is an \(\mathit{NC}\) or \(\mathit{RNC}\) algorithm for it.
On the flip side, when we say that a problem cannot be solved efficiently in parallel, we mean that the problem is \(\mathit{CC}\)-hard2. To define the complexity class _CC_, we need to define the Circuit Comparator Value Problem (CCVP) and comparator gates. A _comparator gate_ is a gate that has two inputs and two outputs. The first output wire outputs the minimum of the inputs, and the second output wire outputs the maximum of the inputs. CCVP is defined as follows: given a circuit of comparator gates, the inputs to the circuit, and one output wire of the circuit, calculate the value of this wire.
Footnote 2: Another notion of parallel hardness, _P-Completeness_, is often used to identify problems in \(P\) that seem to be inherently sequential and thus are likely to not admit any efficient parallel algorithm. The classes _NC_ and _CC_ are incomparable as are _RNC_ and _CC_[1]. Currently, no fast parallel algorithms are known for problems in _CC_.
**Definition 3** (\(\mathit{CC}\)[12]).: _CC_ is the class of all problems that are log-space many-one reducible to CCVP.
The class \(\mathit{CC}\) is not known to be in \(\mathit{NC}\) nor \(\mathit{P}\)-Complete, and, if some problem is \(\mathit{CC}\)-hard, this fact can be taken as evidence that the problem does not admit an efficient parallel solution. The class \(\mathit{CC}\) has natural complete problems, such as the Stable Marriage Problem and the Lexicographically First Maximal Matching problem [1].
### Useful parallel primitives
When describing efficient sequential algorithms we utilize various primitives, e.g. summation, multiplication, sorting, max-weight matching, etc, that take polynomial time, and we can assume the reader knows, without proof. In the case of parallel algorithms, we find it instructive to state some of these useful primitives in this subsection.
**Sum.** We can efficiently take the sum of \(n\) numbers in parallel. To see this, notice that we can use \(n\) processors to create a binary tree where the leaves of the tree are the \(n\) numbers. In each time step, we use a processor to sum two values and then pass that value up the tree. In \(O(\log n)\) steps, we will have the sum of all numbers.
**Sorting.** We can efficiently sort \(n\) numbers in parallel. For a more detailed discussion on parallel sorting algorithms, we refer the reader to [1]. In our parallel algorithms, we use the bitonic sorting network. The bitonic sorting network requires \(O(\log^{2}n)\) time and uses \(O(n)\) processors. It is theoretically possible to sort in parallel using only \(O(\log n)\) time using \(O(n)\) processors via the AKS sorting network, but the constant hidden by the big-O notation is too large for use in practice [1].
**Reduction Operators.** A reduction operator allows us to quickly aggregate the entries of an array into one value in parallel. We will often find the maximum (or minimum) of a list of \(n\) values. We can execute this in \(O(\log n)\) time by using \(O(n)\) processors. Similar to computing sums, we create a binary tree where the leaves of the tree are the \(n\) numbers. In each time step, we use a separate processor to compute the maximum (or minimum) of two values and then pass that value up the tree. In \(O(\log n)\) steps, we will have the maximum (or minimum) of all numbers. Similarly, for binary entries, we can compute the AND or OR over all entries.
**Graph Algorithms.** Many problems on graphs can be solved efficiently in parallel. For example, we can compute all-pairs shortest paths and find the minimum spanning tree efficiently (and deterministically) in parallel [1]. We can find minimum weight perfect matchings [13] and find the global minimum cut of an undirected graph efficiently in parallel by utilizing randomization [10]. For brevity, this is all we list here and refer the reader to [1] for more parallel graph algorithms.
## 3 Verification of fairness
As a warm-up, we begin by showing that, given an allocation, we can efficiently, in parallel, verify its fairness properties.
**Theorem 1**.: _Given an allocation \(X\) and the valuation functions of \(n\) additive (or restricted additive) agents, the problem of deciding whether \(X\) satisfies EF is in NC._
Proof.: We wish to test whether or not each agent prefers their own bundle to any other agent's bundle. For each ordered pair of agents \((i,j)\), we assign \(|X_{i}|+|X_{j}|\leq m\) processors. First, we compute the value of \(v_{i}(X_{i})\) and \(v_{i}(X_{j})\) using parallel sum procedures; each sum takes \(O(\log m)\) time. Next, we test whether \(v_{i}(X_{i})\geq v_{i}(X_{j})\). For each ordered pair of agents \((i,j)\), we assign one bit in memory, initially set to \(0\). If \(v_{i}(X_{i})\geq v_{i}(X_{j})\), processor \(p_{(i,j)}\) will flip the bit indexed by the
agent pair \((i,j)\) to \(1\). Setting this bit for all ordered pairs is done simultaneously. Finally, using \(n^{2}\) processors, we take the minimum across these bits to find if there is any pair of agents that does not respect envy-freeness; this step takes \(O(\log n^{2})\) time; if the minimum is \(1\), then the allocation is envy-free. We overall used at most \(O(n^{2}m)\) processors, and the total time was \(O(\log m+\log n)\).
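For intuition, the same pairwise comparisons can be written sequentially in a few lines of numpy (a hypothetical sketch; the parallel procedure above instead assigns one processor per ordered pair and aggregates the bits with a tournament):

```python
import numpy as np

def is_envy_free(V, bundles):
    """V[i, j]: additive value of agent i for item j (numpy array); bundles[i]: items of agent i."""
    n = len(bundles)
    value_for = np.array([[V[i, bundles[k]].sum() for k in range(n)] for i in range(n)])
    return bool(np.all(value_for.diagonal()[:, None] >= value_for))
```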
To test if an allocation is EF1, we use similar ideas to that of testing EF. For every ordered pair of agents, we allocate \(O(m)\) processors to test whether or not the removal of each item from \(j\)'s bundle eliminates \(i\)'s envy. For each item, we set a separate bit to \(1\) to signify whether or not that item's removal eliminates envy. We take the maximum across all bits to see if there is any one item that satisfies EF1. Then we ensure that the minimum for all ordered pairs of agents is \(1\).
To test if an allocation is EFX, we run the same procedure as that of testing EF1 except instead of computing maximums of the inequality bits, we compute minimums. It is straightforward to see that this difference results in a correct EFX verification procedure. We include the full proofs for completeness.
**Theorem 2**.: _Given an allocation \(X\) and the valuation functions of \(n\) additive (or restricted additive) agents, the problem of deciding whether \(X\) satisfies EF1 is in NC._
Proof.: For each (ordered) pair of agents \((i,j)\) we will allocate \(|X_{j}|\leq m\) processors. Each processor is in charge of testing whether or not the removal of a specific item in \(j\)'s bundle will reduce \(i\)'s value for \(j\)'s bundle so that \(i\) no longer envies \(j\). Let \(X_{j,k}\) denote the \(k\)-th item in \(j\)'s bundle in allocation \(X\), and let \(p_{(i,j,k)}\) be the processor assigned to the (ordered) pair \((i,j)\) and item \(X_{j,k}\). \(p_{(i,j,k)}\) tests the following inequality: \(v_{i}(X_{i})\geq v_{i}(X_{j}\setminus\{X_{j,k}\})\). The values for \(v_{i}(X_{i})\) and \(v_{i}(X_{j})\) can be computed in parallel via parallel sum.
For each ordered pair of agents \((i,j)\), we will also allocate \(m\) bits in shared memory. These bits will initially be set to \(0\). If any processor \(p_{(i,j,k)}\) assigned to the ordered pair \((i,j)\) finds that the \(k\)-th item makes the inequality hold, the corresponding bit is set to \(1\). After all processors test their assigned inequality, we compute the maximum (the OR operation) of these \(m\) bits for each agent. This can be done in \(O(\log m)\) time using \(m\) additional processors, via a tournament.3 If the maximum of these \(m\) bits is \(1\), then there is one item that can be removed from \(j\)'s bundle such that \(i\) no longer envies \(j\). We can compute this "EF1-bit" for all \(n^{2}\) ordered pairs of agents in parallel. Finally, we take the minimum (AND operator) of these \(n^{2}\) bits in a similar way; if this minimum bit is \(0\), then there exists a pair of agents that do not satisfy the EF1 relation, and otherwise, EF1 is satisfied for all pairs.
Footnote 3: Think of building a binary-tree bottom-up, with the leaves corresponding to the original \(m\) bits. In the first time step, processor \(i\) takes the maximum of the leaves in positions \(2i\) and \(2i+1\) and stores it in the corresponding parent node. In the second time step, processor \(i\) takes the maximum of the nodes in positions \(i\) and \(i+1\) from the parent nodes in the previous step, and so on.
The time complexity of this process is \(O(1)\) time to populate the bits and then \(O(\log n+\log m)\) time to compute the maximums, minimums, and sums. We use \(O(n^{2}m)\) processors.
**Theorem 3**.: _Given an allocation \(X\) and the valuation functions of \(n\) additive (or restricted additive) agents, the problem of deciding whether \(X\) satisfies EFX is in NC._
Proof.: For each (ordered) pair of agents \((i,j)\) we will allocate \(|X_{j}|\leq m\) processors. Each processor is in charge of testing whether or not the removal of a specific item in \(j\)'s bundle will reduce \(i\)'s value for \(j\)'s bundle so that \(i\) no longer envies \(j\). Let \(X_{j,k}\) denote the \(k\)-th item in \(j\)'s bundle in allocation \(X\), and let \(p_{(i,j,k)}\) be the processor assigned to the (ordered) pair \((i,j)\) and item \(X_{j,k}\). \(p_{(i,j,k)}\) tests the following inequality: \(v_{i}(X_{i})\geq v_{i}(X_{j}\setminus\{X_{j,k}\})\). The values for \(v_{i}(X_{i})\) and \(v_{i}(X_{j})\) can be computed in parallel via parallel sum.
For each ordered pair of agents \((i,j)\), we will also allocate \(m\) bits in shared memory. These bits will initially be set to \(0\). If any processor \(p_{(i,j,k)}\) assigned to the ordered pair \((i,j)\) finds that the \(k\)-th item makes the inequality hold, the corresponding bit is set to \(1\). After all processors test their assigned inequality, we compute the minimum (the AND operation) of these \(m\) bits for each agent. This can be done in \(O(\log m)\) time using \(m\) additional processors, via a tournament. If the minimum of these \(m\) bits is \(1\), then any item can be removed from \(j\)'s bundle to ensure that \(i\) no longer envies \(j\). We can compute this "EFX-bit" for all \(n^{2}\) ordered pairs of agents in parallel. Finally, we take the minimum (AND operator) of these \(n^{2}\) bits in a similar way; if this minimum bit is \(0\), then there exists a pair of agents that do not satisfy the EFX relation, and otherwise, EFX is satisfied for all pairs.
The time complexity of this process is \(O(1)\) time to populate the bits and then \(O(\log m+\log n)\) time to compute the minimums, and sums. We use \(O(n^{2}m)\) processors.
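The EF1 and EFX tests differ only in whether some item or every item must eliminate the envy, exactly the max-versus-min distinction above. A compact sequential sketch of both checks (illustrative names, nonnegative item values assumed):

```python
import numpy as np

def check_relaxed_ef(V, bundles, criterion="EF1"):
    """criterion='EF1': removing SOME item must eliminate envy; 'EFX': EVERY item must."""
    agg = np.any if criterion == "EF1" else np.all
    n = len(bundles)
    for i in range(n):
        own = V[i, bundles[i]].sum()
        for j in range(n):
            if i == j or len(bundles[j]) == 0:
                continue                                     # an empty bundle of goods cannot be envied
            others = V[i, bundles[j]]
            if not agg(own >= others.sum() - others):        # v_i(X_j \ {g}) for each g in X_j
                return False
    return True
```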
## 4 EF1 allocations for two and three additive agents
In this section, we discuss how to efficiently, in parallel, compute EF1 allocations for two and three additive agents.
Our algorithms work via a reduction. Specifically, Oh et al. [1] prove that EF1 allocations can be found using a logarithmic number of value queries4 for two agents with monotonic utilities and three agents with additive utilities. For sequential algorithms and additive agents, implementing a query takes \(O(m)\) time, since one needs to sum the values of the items in a subset. However, using \(O(m)\) processors, one can implement a value query in \(O(\log m)\) time. Therefore, the results of [1] can be directly translated to our setting.
Footnote 4: A value query on input \(i\), \(S\subseteq\mathcal{M}\) returns the value \(v_{i}(S)\) of agent \(i\) for the subset \(S\) of items.
**Theorem 4**.: _For the case of additive agents, if there exists a query algorithm that uses \(k\) queries to compute an allocation \(X\), then there exists a parallel algorithm that uses \(O(m)\) processors and computes \(X\) in time \(O(k\log m)\)._
Proof.: Consider any sequential fair division algorithm \(\mathcal{A}\) for additive agents that only has query access to agents' valuations, and specifically, it can ask a query, \(query(S,i)\), to learn the value of subset \(S\) for agent \(i\). Suppose \(\mathcal{A}\) requires \(k\) query calls. We give a parallel algorithm that efficiently implements \(query(S,i)\). In order to implement \(query(S,i)\), we run a parallel-sum procedure using \(O(m)\) processors on the elements specified by \(S\), using the valuation function of agent \(i\). In \(O(\log m)\) time, we then have the sum of all item values in \(S\) for agent \(i\). Since \(\mathcal{A}\) uses \(k\) queries, and for each query we compute a sum, we get an overall runtime of \(O(k\log m)\) using \(O(m)\) processors.
As corollaries, we can derive _NC_ algorithms that produce EF1 allocations for two or three agents with additive utilities via the algorithms of Oh et al. [1], since these algorithms have polylogarithmically many value queries.
**Corollary 1**.: _The problem of finding an EF1 allocation for two and three additive agents is in NC._
Next, we notice that the two-agent algorithm of Oh et al. [1] mimics the classic cut-and-choose algorithm from continuous cake-cutting. The authors show that for any ordering of indivisible items on a line, there exists a way for the first agent to cut (split the items into two pieces) such that when the second agent selects her favorite piece, the overall allocation is EF1. The main difficulty is, of course, finding this cut using only a logarithmic number of queries. Here, we observe that since such a cut can be found for an arbitrary ordering of the items, by ordering the
items in non-increasing order of the ratio \(v_{1,j}/v_{2,j}\) (mimicking the adjusted-winner process [1]) we can also guarantee fractional Pareto efficiency (fPO). Since the basic operations (sorting, adding, etc.) in the adjusted winner process can be done in parallel, we overall get a fractionally PO and EF1 \(\mathit{NC}\) algorithm.
**Theorem 5**.: _The problem of finding an fPO and EF1 allocation for two additive agents is in NC._
Proof.: Consider sorting the items in non-increasing order of the ratio \(v_{1,j}/v_{2,j}\) on a line. In [1], it is shown that every discrete fPO allocation is a split of the items on this line, such that agent 1 gets all the items to the left of the split and agent 2 gets all the items to the right of the split. This leaves us with \(m+1\) candidate allocations, where each allocation is a partition of the goods into a left and a right part. In [1] it is shown that an EF1 + fPO allocation must exist; thus, it must be one of these \(m+1\) splits. Now, we can run a binary search over the splits to find one that is EF1. Checking if an allocation is EF1 can be done in parallel by Theorem 2. After each check, we reduce the set of candidate splits to the correct half by checking which agent's envy violates EF1.
We can sort the items by their ratios using bitonic sorting. Checking each individual split takes \(O(\log m)\) time and requires \(O(m)\) processors. Since there are \(m+1\) allocations, running binary search over them takes \(O(\log m)\) time where at each step we check if the allocation is EF1. This gives us a final time complexity of \(O(\log^{2}m)\), where we require \(O(m)\) processors.
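For illustration, the sketch below checks every one of the \(m+1\) prefix cuts of the ratio-sorted line for EF1 (in a PRAM each cut could be handled by its own processor group); the binary search above is a refinement of the same idea. It assumes strictly positive item values so that the ratios are well defined.

```
def ef1_fpo_two_agents(v1, v2):
    """Check the m+1 prefix cuts of the ratio-sorted line; per the text these cuts are
    the fPO candidates and at least one of them is EF1."""
    m = len(v1)
    order = sorted(range(m), key=lambda j: v1[j] / v2[j], reverse=True)   # non-increasing v1/v2
    for cut in range(m + 1):
        A, B = order[:cut], order[cut:]
        ok1 = sum(v1[j] for j in A) >= sum(v1[j] for j in B) - max((v1[j] for j in B), default=0)
        ok2 = sum(v2[j] for j in B) >= sum(v2[j] for j in A) - max((v2[j] for j in A), default=0)
        if ok1 and ok2:
            return A, B                 # agent 1 takes A, agent 2 takes B
    return None                         # unreachable: an EF1 split always exists
```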
Finally, we show that for \(n\) identical and additive agents, there exists a simple \(\mathit{NC}\) algorithm for finding an EF1 allocation. Notice that for identical agents, it is easy to predict what item will be allocated in the \(k\)-th round of the Round-Robin procedure: since all agents have the same ranking over the items, the \(k\)-th item allocated is precisely the \(k\)-th favorite item.
**Theorem 6**.: _The problem of finding an EF1 allocation for \(n\) identical, additive agents is in NC._
Proof.: Begin by sorting the items in decreasing value (breaking ties arbitrarily) and let the item with the \(k\)-th highest value be labeled \(m_{k}\). Let \(\sigma\) be some order over the agents and let \(\sigma_{i}\), for \(i\in[n]\), represent the agent in the \(i\)'th index of \(\sigma\). Consider allocating any item \(m_{k}\). If \(k\) is not divisible by \(n\), we allocate item \(m_{k}\) to agent \(\sigma_{k\ mod\ n}\). If \(k\) is divisible by \(n\), we allocate item \(m_{k}\) to agent \(\sigma_{n}\). This returns the same allocation as that of running Round-Robin using the order \(\sigma\) and as such is an EF1 allocation. Sorting the items takes \(O(\log^{2}m)\) time and requires \(O(m)\) processors. Then, allocating each item simultaneously takes \(O(1)\) time and a total of \(O(m)\) processors. In total, the time complexity is \(O(\log^{2}m)\) time and we require \(O(m)\) processors.
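A small sketch of the item-to-agent index computation used in this proof is shown below; the sort is done once up front, and in a PRAM the final placement loop would run with one processor per item. Names are illustrative.

```
def ef1_identical_agents(item_values, n):
    """Allocate the item with the k-th highest value to agent sigma_(k mod n) (sigma_n if divisible)."""
    order = sorted(range(len(item_values)), key=lambda j: -item_values[j])
    bundles = [[] for _ in range(n)]
    for rank, item in enumerate(order):            # each item is placeable independently
        k = rank + 1                               # 1-indexed rank of the item
        agent = (k % n) if (k % n) != 0 else n
        bundles[agent - 1].append(item)
    return bundles
```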
## 5 Traditional EF1 algorithms are inherently sequential
In this section, we give limits to what parallel algorithms can achieve in our setting. Specifically, we show that "Round-Robin looking" allocations cannot be found efficiently in parallel. We consider the following problem, which we call Fixed-Order Round-Robin: Given a set \(\mathcal{M}\) of \(m\) items, a set \(\mathcal{N}\) of \(n\) agents, a strict ordering \(\sigma=\{\sigma_{1}\succ\cdots\succ\sigma_{n}\}\) over the agents, and a designated agent, item pair \((i^{*},j^{*})\), decide if agent \(i^{*}\) is allocated item \(j^{*}\) by Round-Robin with \(\sigma\) as the order over the agents. We give a log-space reduction from Lexicographically-First Maximal Matching to Fixed-Order Round-Robin.
**Theorem 7**.: Fixed-Order Round-Robin _is CC-Hard, even for the case of \(n\) restricted additive agents, i.e. \(v_{i,j}\in\{0,v(j)\}\), where every agent positively values at most \(3\) items and every item is positively valued by at most \(3\) agents._
Proof.: We reduce the 3-Lexicographically-First Maximal Matching (3-LFMM) problem to Fixed-Order Round-Robin. In the LFMM problem, we are given a bipartite graph \(G=(X,Y,E)\) where \(X=\{x_{i}\}_{i=1}^{n}\), \(Y=\{y_{i}\}_{i=1}^{m}\), and \(E\subseteq X\times Y\). The lexicographically first maximal matching of \(G\), \(M_{lex}\), is produced by successively matching vertices in \(X\), in the order \(x_{1},\ldots,x_{n}\), each one with the available vertex in \(Y\) that has the smallest index. The LFMM problem is to decide if a designated edge belongs to the lexicographically first maximal matching of a bipartite graph \(G\). In the 3-LFMM problem, each vertex in \(G\) has degree at most 3. [1] prove that 3-LFMM is _CC_-complete.
Let \(G=(X,Y,E)\) with a designated edge \(e^{*}\) be an instance of the 3-LFMM problem. Without loss of generality, let \(|X|\geq|Y|\). We construct an instance of Fixed-Order Round-Robin as follows. For each vertex \(x_{i}\in X\) we create an agent, and for each vertex \(y_{j}\in Y\) we create an item. For each \(e=(x_{i},y_{j})\in E\), we set \(v_{i,j}=m-j+1\). For \(e=(x_{i},y_{j})\notin E\), \(v_{i,j}=0\). By construction, since each vertex in \(G\) has degree at most 3, each agent values positively at most 3 items, and each item is valued positively by at most 3 agents. Let the ordering of the vertices in \(X\) correspond to \(\sigma\), i.e. \(\sigma_{i}=i\). Notice that this construction takes logarithmic space. Therefore, to conclude the proof of Theorem 7, it suffices to show that \(e^{*}=(x_{i^{*}},y_{j^{*}})\in M_{lex}\) if and only if agent \(i^{*}\) gets item \(j^{*}\) in the execution of Round-Robin that corresponds to \(\sigma\). We prove a stronger statement, using induction.
Our inductive hypothesis is that, for a given number \(k\), for any \(j\in[m]\), \((x_{k},y_{j})\in M_{lex}\) if and only if agent \(k\) gets item \(j\) in the \(k\)-th round of the execution of Round-Robin that corresponds to \(\sigma\). For \(k=1\), we have that \((x_{1},y_{j})\in M_{lex}\) if and only if \(j=argmin_{\ell\in[m]}\{(x_{1},y_{\ell})\in E\}\), which, by construction, happens if and only if \(v_{1,j}>v_{1,\ell}\) for all \(\ell\in[m]\), i.e., if and only if agent 1 picks item \(j\) in the execution of Round Robin, noting that agent 1 is first in \(\sigma\) and that agents don't pick items they have zero value for. Assume the hypothesis is true for numbers less than or equal to \(k\), and that \((x_{k+1},y_{j})\in M_{lex}\). By the inductive hypothesis, all edges \((x_{i},y_{\ell})\in M_{lex}\) for \(i\leq k\) correspond to items allocated in the first \(k\) rounds in the execution of Round-Robin. \((x_{k+1},y_{j})\in M_{lex}\) if and only if \(j\) is the smallest index among all unmatched neighbors of \(x_{k+1}\) at the \((k+1)\)-st step of building the lexicographically first maximal matching. Since smaller indices (of edges) correspond to strictly higher valuations, we have that, by construction, \((x_{k+1},y_{j})\in M_{lex}\) if and only if \(v_{k+1,j}>v_{k+1,\ell}\) for all items \(\ell\in[m]\) that have not been allocated in the first \(k\) rounds in the execution of Round-Robin. This holds if and only if \(j\) is the item selected by agent \(k+1\) in the \((k+1)\)-st round of Round-Robin (noting once again that, in Round-Robin, agents don't pick items with zero value for them).
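For concreteness, a short sketch of the reduction's instance construction follows (vertices are 1-indexed; variable names are illustrative):

```
def lfmm_to_round_robin(n, m, edges):
    """Build the Fixed-Order Round-Robin valuations from a bipartite graph (X, Y, E).

    edges: iterable of (i, j) with x_i in X and y_j in Y, both 1-indexed.
    """
    values = [[0] * m for _ in range(n)]
    for (i, j) in edges:
        values[i - 1][j - 1] = m - j + 1          # smaller item index -> strictly larger value
    sigma = list(range(1, n + 1))                 # agent order follows the vertex order of X
    return values, sigma
```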
## 6 EF1 + PO for restricted additive with a bounded number of values
In this section, we present a new randomized parallel algorithm that gives an EF1 and PO allocation for assigning \(m\) indivisible items to \(n\) agents with _restricted additive_ valuations. Recall that a valuation function \(v_{i}\) is _restricted additive_ if \(v_{i}\) is additive, and for each item \(g\in\mathcal{M}\), \(v_{i}(g)\in\{0,v(g)\}\). The complexity of the algorithm is parameterized by \(t\), the number of "inherent" item values, i.e. the number of different values \(v(g)\) can take. Formally, our parallel algorithm has _polylog(m, n)_ time complexity and requires \(\mathit{poly}(m,n)\cdot O(m^{t})\) processors.
Here, we describe how our algorithm works. We construct a weighted bipartite graph \(G\) where on one side of the graph, we have vertices corresponding to items, and on the other side, we have vertices corresponding to copies of agents. We ensure that the two sides have the same number of vertices by adding \(mn-m\) dummy items that all agents have zero value for. We first describe the
vertices representing the set of items. Let this side be \(A\). To populate \(A\), we create a vertex \(a_{j}\) for each \(j\in\mathcal{M}\). We will think of \(A\) as partitioned in buckets \(\mathcal{M}_{1}\dots\mathcal{M}_{t}\), where \(t\) is the number of different item values. \(M_{i}\) is the set of items with the \(i\)'th highest value. Finally, we add vertices that correspond to dummy items. Let the set of dummy vertices be \(\mathcal{M}_{d}\). On the other side of the bipartition, we have vertices corresponding to copies of agents. Let this side be \(B\). We create \(m\) buckets of \(n\) vertices where each of these \(n\) vertices represents an agent. Formally, we create a set of vertices \(\{b_{(1,j)},b_{(2,j)},\dots,b_{(n,j)}\}\) for \(j\in[m]\). The \(c\)-th bucket will be called \(\mathcal{N}_{c}\). For each \(j\in\mathcal{M}_{f}\) and \(i\in\mathcal{N}\), if \(v_{i,j}>0\), we add, for all \(c\in[m]\), the edge \((a_{j},b_{(i,c)})\) with weight \(-m^{(t-f)}\cdot c\). For each dummy item \(j\in\mathcal{M}_{d}\) and \(i\in[n]\), we add, for all \(c\in[m]\), the edge \((a_{j},b_{(i,c)})\) with weight \(0\). We refer to this weight function as \(w(\cdot)\). We give an example of the weighted bipartite graph in Figure 1 where there are three agents, and three items in buckets \(\mathcal{M}_{1}\) and \(\mathcal{M}_{f}\) along with some dummy vertices in \(\mathcal{M}_{d}\).
Once the graph \(G=(X\cup Y,E,w)\) is constructed, we compute a maximum-weight perfect matching \(M^{*}\) and return the allocation corresponding to \(M^{*}\). We assume that every (non-dummy) item is valued by _someone_. This is without loss of generality since, if an item is not valued by anyone, this can be checked efficiently in parallel, and the item can be discarded. The formal description of the algorithm is given in Algorithm 1. We prove that this algorithm always outputs an EF1 and PO allocation.
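A sequential sketch of this construction is given below, with SciPy's assignment solver standing in for the randomized parallel matching algorithm; forbidden agent-item pairs are modeled by a large negative weight rather than missing edges, and, as above, every non-dummy item is assumed to be valued by at least one agent.

```
import numpy as np
from scipy.optimize import linear_sum_assignment      # sequential stand-in for the parallel matcher

def ef1_po_restricted_additive(values, item_value):
    """Sketch of Algorithm 1: values[i][j] in {0, item_value[j]} (restricted additive).
    Rows of W are agent copies b_(i,c); columns are items (dummy columns keep weight 0)."""
    n, m = len(values), len(item_value)
    distinct = sorted(set(item_value), reverse=True)   # the t inherent item values
    bucket = {val: f + 1 for f, val in enumerate(distinct)}
    t = len(distinct)
    FORBID = -(n * m * (m ** t) + 1)                   # worse than any matching of legal edges
    W = np.zeros((n * m, n * m))
    for j in range(m):
        f = bucket[item_value[j]]
        for i in range(n):
            for c in range(1, m + 1):                  # agent copy b_(i,c) sits in row (c-1)*n + i
                W[(c - 1) * n + i, j] = -(m ** (t - f)) * c if values[i][j] > 0 else FORBID
    rows, cols = linear_sum_assignment(-W)             # maximum-weight perfect matching
    bundles = [[] for _ in range(n)]
    for r, j in zip(rows, cols):
        if j < m:                                      # columns >= m are dummy items
            bundles[r % n].append(j)
    return bundles
```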
The following lemma shows that this algorithm satisfies Pareto Optimality.
**Lemma 1**.: _Algorithm 1 outputs a Pareto Optimal allocation._
Proof.: We show that the resulting maximum-weight matching saturates the left side of the bipartition. As a result, all items are allocated to agents that value those items since an edge in the graph is only present between an item-agent pair when the agent values that item.
We show this by using Hall's Marriage Theorem. Hall's Theorem characterizes necessary and sufficient conditions for a bipartite graph to have a perfect matching. Recall Hall's Theorem:
**Theorem 8** (Hall's Theorem [1]).: _A bipartite graph \(G=(L\cup R,E)\) contains an \(L\)-saturating perfect matching if and only if for every subset \(W\) of \(L\), its neighborhood, \(N_{G}(W)\), satisfies_
\[|N_{G}(W)|\geq|W|\]
Figure 1: \(G\) for an instance with three agents.
This holds for the graph \(G\) used in Algorithm 1. Consider any subset \(W\) of \(A\). Every item \(j\in\mathcal{M}\) is associated with a vertex \(a_{j}\in A\). A vertex \(a_{j}\) corresponding to the non-dummy item \(j\) has at least \(m\) edges coming out of it: an edge to the same agent \(i\) in each of the \(m\) buckets. Any vertex corresponding to a dummy item is connected to all vertices in \(B\). Hence, if \(W\) consists only of non-dummy vertices, then \(|W|\leq m\leq|N_{G}(W)|\), since there are only \(m\) non-dummy items; and if \(W\) contains a dummy vertex, then \(N_{G}(W)=B\), so \(|N_{G}(W)|=mn\geq|W|\). Thus, any subset \(W\) of \(A\) has a neighborhood of size at least \(|W|\) in \(B\). So, \(G\) will always contain at least one perfect matching that saturates \(A\). Since our allocation corresponds to this matching, each item is given to an agent that values it. In the restricted additive setting, this corresponds to a Pareto Optimal allocation.
The next lemma is crucial for showing the EF1 guarantee of Algorithm 1.
**Lemma 2**.: _For any two agents \(i\) and \(j\), and \(c\in[m-1]\), \(i\) weakly prefers the item matched to her in bucket \(\mathcal{N}_{c}\) to the item that is matched to \(j\) in bucket \(\mathcal{N}_{c+1}\)._
Proof.: For a vertex \(v\) of \(G\), let \(M^{*}(v)\) denote the vertex matched to \(v\) in \(M^{*}\). We want to show that
\[v_{i}(M^{*}(b_{(i,c)}))\geq v_{i}(M^{*}(b_{(j,(c+1))})).\]
Assume that this is not true. Then, the following holds for matching \(M^{*}\). Agent \(i\) is matched to item \(\ell\) from \(\mathcal{M}_{f+h}\) for some \(h\in[t-f]\) in bucket \(\mathcal{N}_{c}\) and agent \(j\) is matched to item \(\ell^{\prime}\) from \(\mathcal{M}_{f}\) in bucket \(\mathcal{N}_{c+1}\). However, we know that agent \(i\) values item \(\ell^{\prime}\). So the edge \((a_{\ell^{\prime}},b_{(i,c)})\) exists in \(G\). We show that we can augment \(M^{*}\) and increase its weight, thus proving that it was not the maximum weight matching in the first place; a contradiction. Towards this, consider matching item \(\ell^{\prime}\) to agent \(i\) in bucket \(\mathcal{N}_{c}\) and matching item \(\ell\) to agent \(i\) in any bucket \(\mathcal{N}_{p}\) for \(p>c\) where \(b_{(i,p)}\) is unmatched. We show that the new matching has a higher total weight.
Notice that besides this item switch, all other edges remain the same. So, we need to show:
\[w(a_{\ell^{\prime}},b_{(i,c)})+w(a_{\ell},b_{(i,p)})>w(a_{\ell^{\prime}},b_{(j,(c +1))})+w(a_{\ell},b_{(i,c)})\]
Expanding using the weight function, we have:
\[w(a_{\ell^{\prime}},b_{(i,c)})+w(a_{\ell},b_{(i,p)}) =-cm^{(t-f)}-pm^{t-(f+h)}\] \[w(a_{\ell^{\prime}},b_{(j,(c+1))})+w(a_{\ell},b_{(i,c)}) =-(c+1)m^{(t-f)}-cm^{t-(f+h)}.\]
Subtracting the weight of the old edges from the modified matching edges, we have:
\[-cm^{(t-f)}-pm^{t-(f+h)}-(-(c+1)m^{(t-f)}-cm^{t-(f+h)})=m^{(t-f)}+(c-p)m^{t-(f+h )}.\]
We have that \(c\geq 1\) and \(p\leq m\), so the smallest value that \((c-p)\) can take is \((1-m)\). We have:
\[m^{(t-f)}+(c-p)m^{t-(f+h)} \geq m^{(t-f)}+(1-m)m^{t-(f+h)}\] \[>m^{(t-f)}+(-m)m^{t-(f+h)}\] \[=m^{(t-f)}-m^{(t-f-h+1)}.\]
The largest value that the second term can take is when \(h=1\). This gives us,
\[m^{(t-f)}-m^{(t-f-h+1)}\geq m^{(t-f)}-m^{(t-f)}=0.\]
Thus, we can strictly increase the weight of the matching; a contradiction.
The repeated application of Lemma 2 gives us Lemma 3.
**Lemma 3**.: _Algorithm 1 outputs an EF1 allocation._
Proof.: Notice that every vertex in \(B\) is matched to some item in \(A\) (the matched item may be a dummy item of value \(0\)). By Lemma 2, for any two agents \(i\) and \(j\), we have that \(v_{i}(M^{*}(b_{(i,c)}))\geq v_{i}(M^{*}(b_{(j,(c+1))}))\). So, in particular, we have that agent \(i\) weakly prefers the item they received in bucket \(\mathcal{N}_{1}\) to the item agent \(j\) receives in bucket \(\mathcal{N}_{2}\). Agent \(i\) also weakly prefers the item they received in bucket \(\mathcal{N}_{2}\) to the item agent \(j\) receives in bucket \(\mathcal{N}_{3}\) and so on. As a result, we know that agent \(i\) has at least the same value for the set of items she receives in buckets \(\mathcal{N}_{1}\) through bucket \(\mathcal{N}_{m}\) as that of the set of items agent \(j\) receives in buckets \(\mathcal{N}_{2}\) through bucket \(\mathcal{N}_{m}\). Thus, by removing the item agent \(j\) receives in \(\mathcal{N}_{1}\), agent \(i\) will certainly have no envy for agent \(j\).
Finally, we show that Algorithm 1 runs in randomized polylogarithmic time using \(f(m,n)\cdot O(m^{t})\) processors where \(f\) is a polynomial (in \(m\) and \(n\)) function.
**Lemma 4**.: _Algorithm 1 takes \(O(\log^{2}(mn))\) time using \(O(m^{5.5+t}n^{5.5})\) processors._
Proof.: We will show that each step of Algorithm 1 runs in polylogarithmic time using at most \(f(m,n)\cdot O(m^{t})\) processors. In Algorithm 1, sorting the items takes \(O(\log^{2}m)\) time and \(O(m)\) processors. Adding the dummy items takes \(O(1)\) time using \(O(mn)\) processors. The first for loop runs in \(O(1)\) time using \(O(mn)\) processors. The second for loop runs in \(O(1)\) time using \(O(mn)\) processors. The third for loop runs in \(O(1)\) time using \(O(m^{2}n)\) processors. Computing the maximum weight perfect matching is the only step in our algorithm that requires \(f(m,n)\cdot O(m^{t})\) processors when we have \(t\) different inherent item values. From [10], there is a randomized parallel algorithm that computes the _minimum weight perfect matching_ of a graph. Notice that one
can compute the maximum weight perfect matching of the graph by first negating the edge weights and then running a minimum weight perfect matching algorithm. The algorithm of [12] takes \(O(\log^{2}(mn))\) time using \(O(m^{5.5}n^{5.5}W)\) processors where we have \(n\) agents and \(m\) items and \(W\) is the weight of the heaviest edge in unary. When we have \(t\) different item values, we have \(W\leq m^{t}\). Finally, extracting the allocation from the maximum weight perfect matching takes \(O(1)\) time using \(O(m^{2}n)\) processors. The step with the largest time complexity is computing the maximum weight perfect matching. Thus, the total time complexity of Algorithm 1 is \(O(\log^{2}(mn))\) and requires a total of \(O(m^{5.5+t}n^{5.5})\) processors.
Combined, Lemmas 1, 3, and 4 give us the following theorem.
**Theorem 9**.: _Algorithm 1 is a parallel algorithm that returns an EF1 and Pareto Optimal allocation of \(m\) indivisible items to \(n\) agents with restricted-additive valuations, from a set of \(t\) different inherent item-values, in time \(O(\log^{2}(mn))\) using \(O(m^{5.5+t}n^{5.5})\) processors._
Notice that binary valuations are a special case of restricted additive valuations (with one inherent item value). Thus, we get an _RNC_ algorithm for binary valuations.
**Corollary 2**.: _The problem of finding an EF1 and Pareto Optimal allocation for \(n\) additive agents with binary valuations is in RNC._
We note here that for a given instance of restricted additive fair division, we can reduce the number of inherent item values at the expense of some loss in the EF1 and PO guarantees. Concretely, if we round the valuations \(v_{i,j}\) to \(v^{\prime}_{i,j}\) such that \(v^{\prime}_{i,j}\in[\alpha\cdot v_{i,j},v_{i,j}]\), for an \(\alpha\in[0,1)\), then an EF1 and PO allocation in \(v^{\prime}\) is an \(\alpha\)-EF1 and \(\alpha\)-PO allocation with respect to \(v\). Assuming the item values are in the range \([1,V]\), one can use such a rounding to create \(\lceil\log_{\frac{1}{\alpha}}(V+1)\rceil\) intervals.
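A minimal sketch of this rounding is shown below, assuming \(\alpha\in(0,1)\) and positive item values (floating-point edge cases are ignored).

```
import math

def round_down_to_powers(values, alpha):
    """Round each positive value down to the nearest power of 1/alpha, for alpha in (0, 1)."""
    base = 1.0 / alpha
    def rnd(v):
        # v in [base^k, base^(k+1)) is rounded down to base^k, which lies in [alpha*v, v]
        return 0.0 if v <= 0 else base ** math.floor(math.log(v, base))
    return [[rnd(v) for v in row] for row in values]
```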
**Theorem 10**.: _Let there be \(n\) restricted additive agents and \(m\) indivisible items such that \(v(g)\in[1,V]\) for all \(g\in\mathcal{M}\). Then, there exists a parallel algorithm that computes an \(\alpha\)-EF1 and \(\alpha\)-PO allocation in time \(O(\log^{2}(mn))\) using \(O(m^{5.5+\lceil\log_{\frac{1}{\alpha}}(V+1)\rceil}n^{5.5})\) processors._
Proof.: We begin by rounding the valuations \(v\) to new valuation functions \(v^{\prime}\) where \(v^{\prime}_{i,j}\in[\alpha\cdot v_{i,j},v_{i,j}]\) for some \(\alpha\in[0,1)\). Specifically, all values in the interval \([1,1/\alpha)\) will be rounded down to \(1\), values in the interval \([1/\alpha,(1/\alpha)^{2})\) will be rounded down to \(1/\alpha\), and so on. This creates \(\lceil\log_{\frac{1}{\alpha}}(V+1)\rceil\) intervals, and therefore \(t=\lceil\log_{\frac{1}{\alpha}}(V+1)\rceil\) different inherent item-values in \(v^{\prime}\). Using Algorithm 1, we can compute an EF1 and PO allocation \(X\) with respect to \(v^{\prime}\). We claim that \(X\) is an \(\alpha\)-EF1 and \(\alpha\)-PO allocation with respect to \(v\).
First, we show the \(\alpha\)-EF1 guarantee. Since \(X\) is EF1 with respect to \(v^{\prime}\), for every pair of agents \(i,j\), there exists some good \(g\) in agent \(j\)'s bundle such that (1) \(v^{\prime}_{i}(X_{i})\geq v^{\prime}_{i}(X_{j}\setminus\{g\})\). Since \(v^{\prime}\) only rounds values down, we have that (2) \(v_{i}(X_{i})\geq v^{\prime}_{i}(X_{i})\). By the construction of \(v^{\prime}\), we also have (3) \(v^{\prime}_{i}(X_{j}\setminus\{g\})\geq\alpha\cdot v_{i}(X_{j}\setminus\{g\})\). Stitching (1), (2), and (3) together, we get the \(\alpha\)-EF1 guarantee:
\[v_{i}(X_{i})\geq v^{\prime}_{i}(X_{i})\geq v^{\prime}_{i}(X_{j}\setminus\{g\}) \geq\alpha\cdot v_{i}(X_{j}\setminus\{g\}).\]
Next, we show the \(\alpha\)-PO guarantee. Since \(X\) is PO with respect to \(v^{\prime}\), no other allocation \(X^{\prime}\) Pareto dominates \(X\); that is, for any allocation \(X^{\prime}\), there exists at least one agent \(i\) such that \(v^{\prime}_{i}(X^{\prime}_{i})\leq v^{\prime}_{i}(X_{i})\). By construction of \(v^{\prime}\), we have that, for every subset of items \(S\), \(\alpha\cdot v_{i}(S)\leq v^{\prime}_{i}(S)\leq v_{i}(S)\). Therefore, we have:
\[\alpha\cdot v_{i}(X^{\prime}_{i})\leq v^{\prime}_{i}(X^{\prime}_{i})\leq v^{ \prime}_{i}(X_{i})\leq v_{i}(X_{i}).\]
That is, agent \(i\)'s utility cannot be improved by a factor more than \(1/\alpha\); \(X\) is \(\alpha\)-PO.
## 7 Fair allocations with subsidies in parallel
In this section, we study fair division with subsidies. First, in Section 7.1, we show how to adjust the algorithm of [10] and compute an envy-freeable allocation and corresponding envy-eliminating payment vector in parallel. Second, in Section 7.2, we give an efficient parallel algorithm that computes a payment vector that not only eliminates envy but additionally satisfies other user-specified constraints (defined later in this section).
### Envy-Freeable allocations and payments in _NC_
We prove that the algorithm of [10], for finding an envy-freeable allocation and envy-eliminating payment vectors can be parallelized.
First, note that the welfare-maximizing allocation, which gives each item to the agent with the highest value for it, can be shown to be envy-freeable. Now, given an envy-freeable allocation \(X\), the algorithm of [10] for finding envy-eliminating payments, at a high level, constructs the envy-graph \(G_{X}\), negates all the edge weights in \(G_{X}\), and computes all-pairs-shortest-paths on the modified \(G_{X}\). Then, for each agent \(i\), the algorithm singles out the path with the lowest overall weight (out of \(n\) shortest paths) that starts at \(i\)'s vertex in \(G_{X}\). One can show that paying agent \(i\) the sum of the edge weights along this path results in an envy-eliminating payment. We show that all these steps can be parallelized efficiently, noting that one can apply known techniques to solve the all-pairs-shortest-paths problem in parallel. The full proof is included for completeness.
**Theorem 11**.: _The problem of finding an envy-freeable allocation \(X\) and an envy-eliminating payment vector for \(X\) for \(n\) additive agents is in NC._
Proof.: Consider computing a welfare-maximizing allocation. A welfare-maximizing allocation is one in which the sum of utilities is maximized. This can be achieved by allocating each item to whichever agent values it the most. The characterization that welfare-maximizing allocations are envy-freeable is given in [10]. Finding a welfare-maximizing allocation can be done efficiently in parallel because, for each item, we can use the parallel reduction operator to find the agent with maximum value in \(O(\log n)\) time. This gives us a time complexity of \(O(\log n)\) using \(O(mn)\) processors.
The algorithm of [10] for finding envy-eliminating payments proceeds as follows. Construct the envy-graph \(G_{X}\) for an envy-freeable allocation \(X\). Negate all the edge weights in \(G_{X}\) and run an all-pairs-shortest-paths algorithm on \(G_{X}\). Let \(\ell(i,j)\) be the length of the shortest path from \(i\) to \(j\) in \(G_{X}\) with all the weights negated. For each agent \(i\), find the vertex \(j^{*}\) such that \(\ell(i,j^{*})\) is the least out of all \(\ell(i,j)\) values. Set \(\vec{q}_{i}=\ell(i,j^{*})\). To see that this can be parallelized, consider each step in turn. Using \(O(n^{2})\) processors, we can create the envy graph and add weights (negating them first) to all the edges appropriately in \(O(1)\) time. Computing all-pairs shortest paths on this graph can be done in time \(O(\log^{2}n)\) using \(O(n^{3})\) processors [11]. Finding the shortest path that starts at agent \(i\) in \(G_{X}\) can be done using a parallel reduction operator. Using \(O(n^{2})\) processors total, we can find the shortest path for each agent in \(O(\log n)\) time. Thus, given an envy-freeable allocation \(X\), the problem of finding an envy-eliminating payment vector for \(X\) lies in _NC_.
Combining these two steps, we can find an envy-freeable allocation \(X\) and an envy-eliminating payment vector for \(X\) for \(n\) additive agents efficiently in parallel.
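The following sketch mirrors these two steps sequentially: a welfare-maximizing (hence envy-freeable) allocation, then payments from shortest paths on the negated envy graph, with a vectorized Floyd-Warshall standing in for the parallel all-pairs-shortest-paths routine. In this sketch the payment is taken as the negated minimum path value, floored at zero, i.e., the weight of the heaviest envy path out of each agent; this is the sign convention I adopt here, not necessarily the exact formulation of [10].

```
import numpy as np

def welfare_max_allocation(values):
    """Give each item to an agent that values it most; such an allocation is envy-freeable."""
    n, m = len(values), len(values[0])
    bundles = [[] for _ in range(n)]
    for g in range(m):
        bundles[max(range(n), key=lambda i: values[i][g])].append(g)
    return bundles

def envy_eliminating_payments(values, bundles):
    """Shortest paths on the negated envy graph; Floyd-Warshall replaces the parallel APSP."""
    n = len(bundles)
    bv = np.array([[sum(values[i][g] for g in bundles[j]) for j in range(n)] for i in range(n)],
                  dtype=float)
    envy = bv - np.diag(bv)[:, None]                 # weight of edge i -> j: v_i(X_j) - v_i(X_i)
    dist = -envy                                     # negate, then all-pairs shortest paths
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return np.maximum(-dist.min(axis=1), 0)          # heaviest envy path out of i, floored at 0
```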
### Computing constrained envy-eliminating payment vectors in _NC_
In this section, we give a different algorithm for finding an envy-eliminating payment vector, \(\vec{q}\). A key feature of our algorithm is that it allows for additional constraints on the final solution.
Formally, we are given an allocation \(X\) of \(m\) items to \(n\) additive agents each with a valuation function \(v_{i}\) that takes integer values, and a set \(C\) of constraints of the form "_if agent \(i\) is paid more than \(x\) dollars, then agent \(j\) must be paid more than \(y\) dollars_." We are interested in computing a payment vector \(\vec{q}\) that is envy-eliminating and satisfies all such constraints in \(C\), or deciding that no such vector exists. We call this problem Constrained Payments. We assume that no agent is paid more than \(m\Delta\) dollars, where \(\Delta=\max_{i,j}v_{i,j}\). This is because, for any meaningful solution, we need not pay any one agent more than \(m\Delta\) dollars as this is the maximum value any agent can have for the entire set of items.
We note that many non-trivial constraints on the payment vector can be formulated as a set of these smaller individual constraints. For example, the constraint "_agent 1 should not be paid more than agent 2_" can be imposed by adding the constraint "_if agent 1 is paid more than \(x\) dollars, then agent 2 is paid more than \(x\) dollars_" for all \(x\in[m\Delta]\). Or, the constraint "_agent 1 should not be paid more than 10 dollars_" can be imposed by adding the constraint "_if agent \(1\) is paid more than 10 dollars, then agent 2 is paid more than \(m\Delta\) dollars_". When \(C\) is empty, we get back the original problem of finding an unconstrained envy-eliminating payment vector. Our main result for the fair division with subsidies problem is Theorem 12 whose proof is given later in this section.
**Theorem 12**.: _If \(v_{i,j}\) is integral for all \(i\in[n]\) and \(j\in[m]\), Constrained Payments can be solved in \(O(\log^{2}(mn\Delta))\) time using \(O(n^{3}m^{3}\Delta^{3})\) processors, where \(\Delta=\max_{i,j}v_{i,j}\)._
Before we give the proof, we give an informal explanation of the key ideas and the main algorithm. The full algorithm is given later in this section as Algorithm 2 along with the proof of Theorem 12.
Note that \(C\) is upper-bounded by \(O(n^{2}m^{2}\Delta^{2})\), and hence the size of \(C\) does not appear in the bounds of Theorem 12. The main challenge with incorporating constraints into the final payment vector is that the previous approach of running all-pairs-shortest-paths on the envy graph does not allow us to isolate specific dollar amounts for which we want to impose a constraint on. To resolve this, we construct a larger, modified graph where each vertex corresponds to an agent _coupled_ with a specific payment amount. We call this graph _the payment rejection graph_. Our goal is to select a single vertex for each agent from the payment rejection graph, which will define the final payment vector. An edge in the payment rejection graph will exactly represent the causal relationship defined by a constraint.
Formally, the payment rejection graph is a directed graph \(G_{p}=(V,E)\) with a total of \(nm\Delta\) vertices. We arrange the \(nm\Delta\) vertices on an \(n\times m\Delta\) two-dimensional grid. Vertex \((i,j)\) corresponds to agent \(i\) being paid \(\vec{q}_{i}=j\) dollars. We use the term _rejecting a vertex_\((i,j)\) to denote that \(\vec{q}_{i}=j\) will not be in the final payment vector. We use the term _payment row_ for an agent \(i\) when referring to the set of vertices \(\{(i,j)\mid j\in[m\Delta]\}\). An edge from node \((i,j)\) to \((k,\ell)\), denoted by \((i,j)\rightarrow(k,\ell)\), signifies that if we have rejected all vertices \((i,j^{\prime})\) for \(j^{\prime}\leq j\), then we also reject all vertices \((k,\ell^{\prime})\) for \(\ell^{\prime}\leq\ell\). The idea of modeling rejections as edges in a directed graph was first used to find consistent global states in distributed systems [1]. We adapt this approach to find envy-eliminating payments.
We maintain a "current" payment vector and initialize it to the all-zero payment vector (i.e we select the vertex \((i,0)\) for each agent \(i\)). Then, we iteratively increase the agents' payments by one dollar until no envy is present. Although this process seems sequential, we show that we can quickly, in parallel, determine which payment components are not part of _any_ envy-eliminating payment vector. To see this, consider two agents \(i\) and \(j\) and a current payment vector \(\vec{q}\). We can compute the envy \(i\) has for \(j\) (or, similarly, \(j\) has for \(i\)) subject to these two payments by comparing \(i\)'s value for \(i\)'s bundle and payment \((v_{i}(X_{i})+\vec{q}_{i})\) to that of \(j\)'s \((v_{i}(X_{j})+\vec{q}_{j})\). If it is the case that \(i\) envies \(j\) subject to the payments \(\vec{q}_{i}\) and \(\vec{q}_{j}\), we must increase \(i\)'s payment by one dollar. So, we
will increment \(\vec{q}_{i}\) to \(\vec{q}_{i}+1\). Now, if there is any other agent \(k\) that envies \(i\) subject to the new payment, we know we will have to increase \(k\)'s payment by one dollar as well. As a result, we can make the following inference: if we pay agent \(i\) more than \(\vec{q}_{i}\) dollars, we have to pay agent \(k\) more than \(\vec{q}_{k}\) dollars. So, we can place an edge \((i,\vec{q}_{i})\rightarrow(k,\vec{q}_{k})\). Notice that the meaning of these edges holds transitively (i.e if \((i,\vec{q}_{i})\rightarrow(j,\vec{q}_{j})\) and \((j,\vec{q}_{j})\rightarrow(k,\vec{q}_{k})\), then \((i,\vec{q}_{i})\rightarrow(k,\vec{q}_{k})\)). Since this observation does not require us to use any information about other vertices in the graph besides the set \(\{(i,\vec{q}_{i}),(i,\vec{q}_{i}+1),(k,\vec{q}_{k})\}\), by using a separate processor for each pair of vertices, we can place all edges in the graph simultaneously. Here, we give an example of a payment rejection graph for a specific valuation profile.
_Example of Payment Rejection Graph._ Consider the following instance \(I\). The value in the \(i\)'th row and \(j\)'th column is the value agent \(i\) has for item \(j\). The envy-freeable allocation \(X\) is the following: agent \(1\) gets item \(3\), agent \(2\) gets item \(2\), and agent \(3\) gets item \(1\). In this example, the vertex \((1,0)\) would be in the set \(F\). This is because agent \(1\) initially envies agent \(2\) and to ensure that this payment is rejected, we add it to \(F\). Finally, in the payment rejection graph, we have only included the most informative edges.
For this example, the first envy-eliminating payment vector corresponds to selecting the vertices \(\{(1,1),(2,0),(3,1)\}\). If we pay agent \(1\) and agent \(3\) one dollar each, we will eliminate envy from the allocation. We have not included all edges for clarity.
Figure 2: The Payment Rejection Graph for Instance \(I\) given above.
The algorithm boils down to computing directed reachability from some specific vertices in the constructed payment rejection graph. We identify which vertices will not be a part of any envy-eliminating payment vector initially, and then follow edges from these vertices. These vertices are of the form \((i,0)\) where there is some other vertex \((j,0)\) where \(v_{i}(X_{i})<v_{i}(X_{j})\). Agent \(i\) must be paid and so vertex \((i,0)\) _will_ be rejected. To find all vertices that are reachable from initially rejected vertices, we take the transitive closure of \(G_{p}\), which can be done efficiently in parallel [1]. Vertices that are reachable from any initially rejected vertex will be marked as rejected. Then, we find the minimum payment component for each agent using a parallel reduction operator. If there is no minimum component (i.e., all vertices along some agent's payment row have been rejected), then we output "No satisfying vector". If all agents have a valid payment, we output the envy-eliminating payment vector \(\vec{q}\). Since edges in \(G_{p}\) correspond exactly to a constraint in \(C\), all constraints can be added to \(G_{p}\) simultaneously in parallel. Now, the algorithm identifies the first envy-eliminating payment vector that respects these constraints. As a direct result, we get an _NC_ algorithm when \(\Delta\) is bounded by a polynomial of \(n\) and \(m\). The formal description of the algorithm is given in Algorithm 2. The proof of Theorem 12 is an immediate implication of the following two lemmas.
```
Input: Envy-Freeable Allocation \(X\), \(v_{i}(X_{j})\)\(\forall i,j\in\mathcal{N}\), \(v_{i}\)\(\forall i\in\mathcal{N}\)
Output: Constrained Envy-Eliminating Payment Vector \(\vec{q}\)
var \(G\): Payment Rejection Graph
for all \((i\in[n],j\in[m\Delta])\) in parallel do  \(\triangleright\) 1. Creating The Payment Rejection Graph
    Create node \((i,j)\) in \(G\)
for all \((i,j)\in V,(k,l)\in V\) in parallel do
    if \(v_{i}(X_{i})+j<v_{i}(X_{k})+(l+1)\) then
        Add edge \((k,l)\rightarrow(i,j)\) to \(G\)  \(\triangleright\) Extra Constraints can be Added Here
var \(F\): Set of initially envious agents
for all \((i\in[n],j\in[n])\) in parallel do  \(\triangleright\) 2. Identify initially envious agents, \(F\)
    if \(v_{i}(X_{i})<v_{i}(X_{j})\) then
        Add \((i,0)\) to \(F\)
\(G_{T}=(V,E_{T})\) = TransitiveClosure\((G)\)  \(\triangleright\) 3. Transitive Closure on Rejection Edges
for all \(v\in F\) in parallel do
    for all \(v^{\prime}\in V\) s.t. \(v\to v^{\prime}\in E_{T}\) do
        Mark \(v^{\prime}\) as "Rejected"  \(\triangleright\) 4. Rejecting Vertices
for all \(i\in[n]\) in parallel do  \(\triangleright\) 5. Find Minimum Vertex for Each Agent
    \(\vec{q}_{i}=argmin_{j\in[m\Delta]}\{j\ |\ (i,j)\ not\ ``Rejected"\}\)
    if \(\vec{q}_{i}=\textit{null}\) then
        Exit and return "No satisfying vector"
return \(\vec{q}\)
```
**Algorithm 2** Parallel Payment Rejection Algorithm
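For reference, a sequential Python sketch of Algorithm 2 follows (the parallel loops become ordinary loops and a Warshall loop replaces the parallel transitive closure); constraints are passed as extra rejection edges, and all names are illustrative.

```
import numpy as np

def constrained_payments(values, bundles, constraints=()):
    """Sequential sketch of Algorithm 2. constraints: pairs ((i, x), (k, y)) meaning
    "if agent i is paid more than x dollars, agent k must be paid more than y dollars".
    Returns a payment vector, or None if no satisfying vector exists."""
    n, m = len(bundles), len(values[0])
    val = [[sum(values[i][g] for g in bundles[j]) for j in range(n)] for i in range(n)]
    max_pay = m * max(max(row) for row in values)                     # m * Delta
    nodes = [(i, x) for i in range(n) for x in range(max_pay + 1)]
    idx = {v: p for p, v in enumerate(nodes)}
    adj = np.zeros((len(nodes), len(nodes)), dtype=bool)
    for (i, x) in nodes:                                              # rejection edges
        for (k, y) in nodes:
            if val[i][i] + x < val[i][k] + (y + 1):
                adj[idx[(k, y)], idx[(i, x)]] = True                  # edge (k, y) -> (i, x)
    for src, dst in constraints:                                      # user-specified constraints
        adj[idx[src], idx[dst]] = True
    reach = adj.copy()
    for k in range(len(nodes)):                                       # Warshall transitive closure
        reach |= np.outer(reach[:, k], reach[k, :])
    rejected = {(i, 0) for i in range(n) if any(val[i][i] < val[i][j] for j in range(n))}
    rejected |= {nodes[q] for v in list(rejected) for q in np.nonzero(reach[idx[v]])[0]}
    payments = []
    for i in range(n):                                                # minimum unrejected vertex
        ok = [x for x in range(max_pay + 1) if (i, x) not in rejected]
        if not ok:
            return None
        payments.append(ok[0])
    return payments
```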
**Lemma 5**.: _Algorithm 2 runs in \(O(\log^{2}(mn\Delta))\) time using \(O(n^{3}m^{3}\Delta^{3})\) processors._
Proof.: Since our algorithm runs in steps, it suffices to show that each step takes a polylogarithmic (in \(m\), \(n\), and \(\Delta\)) amount of time and uses a polynomial (in \(m\), \(n\), and \(\Delta\)) number of processors.
In step one, we create the payment rejection graph. The first for loop takes \(O(1)\) time using \(nm\Delta\) processors to create each node. The second for loop takes \(O(1)\) time using a separate processor for each pair of vertices in \(G\). This requires \((nm\Delta)^{2}\) processors in total.
In step two, we compute the set of initially envious agents, \(F\). By using a separate processor for each pair \(i\) and \(j\) of states that are of the form \((i,0)\) and \((j,0)\), we can complete the for loop to compute the set of initially envying agents in \(O(1)\) time using \(O(n^{2})\) processors in total.
In step three, we take the transitive closure of the edges in \(G\). We cite [11] for a detailed discussion on parallel transitive closure techniques. It is well known that taking the transitive closure of a graph on \(n\) nodes takes \(O(\log^{2}n)\) time using \(O(n^{3})\) processors in the CREW PRAM model. Our graph has \(nm\Delta\) nodes, so this transitive closure step takes \(O(\log^{2}(nm\Delta))\) time using \(O(n^{3}m^{3}\Delta^{3})\) processors.
In step four, we mark all vertices that are reachable from \(F\) as rejected. This set can have size at most \(nm\Delta\). Thus, by using a separate processor for each vertex, we can check if it is reachable from \(F\) and mark it as needed in \(O(1)\) time using \(O(nm\Delta)\) processors.
In step five, we find the minimum viable vertex for each agent using a parallel reduction operator. Note that if, at the end of this process, some agent does not have a valid payment as the final minimum unrejected vertex, then there is no payment that satisfies the imposed set of constraints and also eliminates envy. In this case, the algorithm outputs "No satisfying vector". We need \(O(nm\Delta)\) processors in total and this step takes \(O(\log(m\Delta))\) time.
In summary, to find the overall time complexity and processor requirements for our algorithm, we need to single out the step with the largest time and processor costs. Step 3 is the most expensive step in our algorithm. So, our overall runtime is \(O(\log^{2}(nm\Delta))\) time and we require \(O(n^{3}m^{3}\Delta^{3})\) processors.
**Lemma 6**.: _Algorithm 2 computes a constraint-satisfying and envy-eliminating payment vector._
Proof.: Let \(\vec{q}\) be the payment vector output of Algorithm 2. Suppose \(\vec{q}\) is not envy-eliminating. \(\vec{q}\) is a set of vertices chosen from the payment rejection graph where we select one vertex from each row. Thus, \(\vec{q}=\{(1,\vec{q}_{1}),(2,\vec{q}_{2}),\ldots,(n,\vec{q}_{n})\}\). If \(\vec{q}\) is not envy-eliminating, then there exist some \(i,j\in\mathcal{N}\) where \(i\neq j\) and: \(v_{i}(X_{i})+\vec{q}_{i}<v_{i}(X_{j})+\vec{q}_{j}\). Since we have \(\vec{q}_{j}\) as \(j\)'s payment, we know that we rejected the vertex \((j,\vec{q}_{j}-1)\). However, since we have that \(v_{i}(X_{i})+\vec{q}_{i}<v_{i}(X_{j})+\vec{q}_{j}\), it must be that there is an edge \((j,\vec{q}_{j}-1)\rightarrow(i,\vec{q}_{i})\) as this is exactly the requirement for there to be an edge between two vertices in the payment rejection graph. Since \((j,\vec{q}_{j}-1)\) was rejected and there is an edge \((j,\vec{q}_{j}-1)\rightarrow(i,\vec{q}_{i})\), it must be that \((i,\vec{q}_{i})\) was rejected as well, contradicting the fact that \((i,\vec{q}_{i})\) was selected as the minimum unrejected vertex in agent \(i\)'s payment row.
Suppose \(\vec{q}\) is not constraint-satisfying. This means there is some \((i,\vec{q}_{i})\in\vec{q}\) that violates a constraint. User-added constraints are in the form of edges from one vertex to another in the payment rejection graph. Suppose there was a constraint of the form \((j,\vec{q}_{j})\rightarrow(i,\vec{q}_{i})\) that is not satisfied. This means that \((j,\vec{q}_{j})\) was rejected and yet \((i,\vec{q}_{i})\) was not. However, since we take the transitive closure of all edges in the payment rejection graph and \((j,\vec{q}_{j})\) was rejected we know that \((i,\vec{q}_{i})\) is also in the neighborhood of some vertex in \(F\) and will also be rejected. As a result, we know that \((i,\vec{q}_{i})\) cannot be part of the output payment vector.
**Corollary 3**.: _The problem of finding an envy-eliminating and constraint-satisfying payment vector is in NC if \(\Delta=\max_{i,j}v_{i,j}\) is polynomial in \(n\) and \(m\)._
## 8 Conclusion
Our results show that many problems in fair division admit efficient parallel solutions. Our main contributions are efficient parallel fair division algorithms for allocating indivisible goods to restricted additive agents, finding constrained payment vectors along with envy-freeable allocations under the subsidy model, and finding fair allocations for up to three agents. Our hardness result shows that the traditional Round-Robin EF1 algorithm cannot be directly translated to the parallel setting. We leave open many interesting research directions. Is the problem of finding any EF1 allocation _CC_-Hard? Are any problems in fair division _P_-Complete [1]? Can we give _deterministic_ parallel algorithms for restricted additive fair division? |
2305.07663 | Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based
Comparison of Feature Spaces | Safety-critical applications require transparency in artificial intelligence
(AI) components, but convolutional neural networks (CNNs) widely used for
perception tasks lack inherent interpretability. Hence, insights into
what CNNs have learned are primarily based on performance metrics, because
these allow, e.g., for cross-architecture CNN comparison. However, these
neglect how knowledge is stored inside. To tackle this yet unsolved problem,
our work proposes two methods for estimating the layer-wise similarity between
semantic information inside CNN latent spaces. These allow insights into both
the flow and likeness of semantic information within CNN layers, and into the
degree of their similarity between different network architectures. As a basis,
we use two renowned explainable artificial intelligence (XAI) techniques, which
are used to obtain concept activation vectors, i.e., global vector
representations in the latent space. These are compared with respect to their
activation on test inputs. When applied to three diverse object detectors and
two datasets, our methods reveal that (1) similar semantic concepts are learned
regardless of the CNN architecture, and (2) similar concepts emerge in similar
relative layer depth, independent of the total number of layers. Finally, our
approach poses a promising step towards semantic model comparability and
comprehension of how different CNNs process semantic information. | Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade | 2023-04-30T13:53:39Z | http://arxiv.org/abs/2305.07663v2 | # Revealing Similar Semantics Inside CNNs:
###### Abstract
Safety-critical applications require transparency in artificial intelligence (AI) components, but convolutional neural networks (CNNs) widely used for perception tasks lack inherent interpretability. Hence, insights into what CNNs have learned are primarily based on performance metrics, because these allow, e.g., for cross-architecture CNN comparison. However, these neglect how knowledge is stored inside. To tackle this yet unsolved problem, our work proposes two methods for estimating the layer-wise similarity between semantic information inside CNN latent spaces. These allow insights into both the flow and likeness of semantic information within CNN layers, and into the degree of their similarity between different network architectures. As a basis, we use two renowned explainable artificial intelligence (XAI) techniques, which are used to obtain concept activation vectors, i.e., global vector representations in the latent space. These are compared with respect to their activation on test inputs. When applied to three diverse object detectors and two datasets, our methods reveal that (1) similar semantic concepts are learned _regardless of the CNN architecture_, and (2) similar concepts emerge in similar _relative_ layer depth, independent of the total number of layers. Finally, our approach poses a promising step towards semantic model comparability and comprehension of how different CNNs process semantic information.
Keywords:Explainable Artificial Intelligence Network Comparison Feature Space Comparison Semantic Concept.
## 1 Introduction
The emerging use of artificial intelligence (AI), and especially CNNs, in safety-critical applications such as automated driving and medicine has made the interpretability and transparency [3, 32] of these models increasingly essential, not least because industrial and legal standards demand sufficient evidence that developed AI modules are safe and ethical to use [1, 14]. Therefore, it is crucial to
develop methods that reveal the model semantics, i.e., what was learned where inside, in particular in relation to other models. Such model comparability at knowledge level can enhance the general understanding of model knowledge encoding, the influence of architectures, and possibly also datasets. Some potential future applications are retrieval of dataset bias, and informed model selection and architecture modification.
One popular method of knowledge representation assessment within the field of XAI is analysis of semantic concepts, where concepts correspond to real-world objects or notions [5, 32, 33]. These concepts are associated with vectors in the CNN feature space, the so-called concept activation vectors (CAV) [22, 50]. By examining the CAVs and their responses to model inputs, experts can gain valuable insights into model operation.
This research proposes two architecture-agnostic strategies for estimating the similarity of feature spaces and semantic concepts in CNNs. These allow us to answer, for any two CNN layers, how similar they are regarding their learned concepts (unsupervised strategy) and regarding any given set of user-defined concepts (supervised strategy). To achieve this, we use the concept analysis methods TCAV [22] (supervised) and ICE [50] (unsupervised) as the basis. Both generate CAVs for concept-related samples during training. The response of these CAVs to test data is then measured to determine the feature space similarity with respect to the given concepts. The contributions and findings of this work are the following:
* We conduct a **concept-based comparison of feature spaces** and show how the same semantic information is processed differently across various CNN backbones;
* For the comparison we propose **an unsupervised and a supervised layer-wise approach to compare the semantic information** encoded in CNNs, which are shown to yield intuitive and interpretable results regarding CNN knowledge inspection;
* The main findings of our concept-based comparison of feature spaces are: **same semantic concepts are learned across different CNN architectures** and can be extracted from proper layers, representations of **concepts are located at the same relative depth of the backbone** in feature spaces of different networks.
## 2 Related Work
**Explainable AI.** The field of XAI encompasses interpretability techniques [41] to explain the predictions of machine learning functions like neural networks (NNs) to a human. While ante-hoc approaches using models that are interpretable, e.g., produce human-understandable concept outputs [23, 28, 7], are preferable [39], we here concentrate on already trained CNNs. For post-hoc explainability, one can distill approximate interpretable surrogate models. Concept-based examples are flow-graphs [18], layer-wise concept hierarchies [47, 48, 49],
decision trees [46, 8], and rule sets [34, 35]. However, the limited fidelity to the original CNN renders them unsuitable for quantitative CNN comparison. Other post-hoc methods concentrate on explaining the behavior for single samples. Such can be applied in an approximate model-agnostic manner [35, 38] or model-specific based on the model internal processing, like prominent saliency methods [51, 43, 4, 2]. Such local approaches, even if aggregated to global information like in [24], only give limited insights into concepts represented in the CNN internals. Instead, this work relies on concept analysis [40], i.e., XAI methods that allow direct insights into the human-understandable concepts learned by a CNN.
**Concept Analysis.** Early techniques associate single CNN units with concepts [5, 33], disregarding the distributed nature of CNN representations. Supervised linear methods like state-of-the-art TCAV [22] associate concepts to latent space vectors. Further extensions to use-cases like concept regression [15] and localization [29] also stuck to this principle. There are also non-linear alternatives like clustering [16, 21] or NNs [9], which, however, pose additional requirements to the labels. Unsupervised approaches require no concept labels at all, like ICE [50] that applies matrix factorization to the latent space. Alternatives relying on intelligent choice of concept candidate patches [12, 11] lead to less interpretable results [50].
**Network Comparison.** Existing neural network comparison methods foremost utilize performance or error-estimation metrics, and qualitative manual observation based on visual analytics or XAI. Examples for object detection model analysis are the TIDE [6] metrics and visualizations toolbox, and the framework by Miller et al. [31] to analyze models' ability to handle false negative occurrences. More knowledge-based approaches measure the compliance with constraints like object relations [13, 42] or temporal consistency [45].
## 3 Background
In contrast to the mentioned methods, our approach involves comparing feature spaces, i.e., knowledge encoded in CNNs, through semantic concepts and their responses to various inputs. A (visual) semantic concept refers to a feature of an image that can be expressed in natural language (e.g., "head" or "green") [5, 10]. Concepts can be associated with a numeric vector in the latent space, known as the concept vector [22, 10]. The approaches for this used in this paper are shortly recapitulated in the following.
**TCAV.** TCAV [22] is a supervised concept analysis method that utilizes Concept Activation Vectors (CAVs) to represent concepts in the latent space of a NN. Parameters of CAVs correspond to those of a binary linear classifier that separates the feature space of a given layer in a concept-versus-rest manner. The classifier is trained using the activations of concept-related and unrelated samples. Geometrically, a CAV is the normal vector to the separation hyperplane and indicates the direction of the concept in the latent space. The similitude between a sample and concepts is defined by cosine similarity. This feature of CAVs can be employed for ranking of input samples by concept-relevance.
**ICE.** The unsupervised ICE [50] approach employs Non-Negative Matrix Factorization (NMF) to mine a pre-defined small number of Non-negative CAVs (NCAVs) in the latent space. These NCAVs correspond to the most frequent patterns of activation in convolutional filters caused by the training samples. NCAVs are then utilized to map input sample activations of dimensionality \(C\times H\times W\) to \(C^{\prime}\times H\times W\) dimensional concept activations, where \(C\), \(H\), \(W\), and \(C^{\prime}\) represent the _channel_, _height_, _width_, and _concept_ dimensions, respectively. Each of the \(C^{\prime}\) concept activations of size \(1\times H\times W\) is normalized, interpolated to the original input size, and employed as a saliency map to highlight the concept-related regions. The examples of such binarized masks are presented in Fig. 5.
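A rough sketch of this pipeline using scikit-learn's NMF is shown below; the factorization hyperparameters and the exact reducer used by ICE are simplifications, and activations are assumed to be non-negative (e.g., post-ReLU).

```
import numpy as np
from sklearn.decomposition import NMF

def fit_ncavs(train_acts, n_concepts):
    """ICE-style NCAV mining: NMF on flattened non-negative activations.
    train_acts: (B, C, H, W). Returns the fitted factorizer; NCAVs are nmf.components_ (C', C)."""
    B, C, H, W = train_acts.shape
    V = np.maximum(train_acts, 0).transpose(0, 2, 3, 1).reshape(-1, C)
    return NMF(n_components=n_concepts, init="nndsvda", max_iter=400).fit(V)

def concept_saliency(acts, nmf):
    """Map activations (B, C, H, W) to per-concept spatial maps (B, C', H, W) via the reducer."""
    B, C, H, W = acts.shape
    V = np.maximum(acts, 0).transpose(0, 2, 3, 1).reshape(-1, C)
    S = nmf.transform(V)                                   # (B*H*W, C') concept weights
    return S.reshape(B, H, W, -1).transpose(0, 3, 1, 2)
```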
## 4 Semantic Comparability Methods
To address the gap in literature on comparison of model semantics, we introduce supervised and unsupervised approaches that use concept representations to compare feature spaces of CNN backbones. These methods rely on relative semantic similarity ranking of samples and overlap estimation of concept saliency maps. Section 4.1 and Section 4.2 provide details on the unsupervised and supervised comparison approaches, respectively.
### Unsupervised Concept Similarity
The proposed unsupervised approach addresses two key questions: _"Are there similar concepts in feature spaces of different layers?"_ and _"How similar are they?"_. We utilize ICE [50] to identify and extract the most prominent activation patterns, which are represented by NCAVs, in the feature spaces of different layers. Then, we measure the overlap between binarized concept saliency maps on test data to compare the similarity of extracted concepts in selected layers. Although we use ICE in our work, the general approach is not limited to this specific method, and shall only showcase the usage of saliency methods.
Figure 1(a) depicts the process of layer-wise unsupervised knowledge comparison in two trained _Tested Networks_, which may have different architectures. _L1_
Figure 1: Unsupervised evaluation of concept similarity (a) and concept similarity scoring (b).
and _L2_ are indices of analyzed layers. The first step involves using the activations of training samples (_Train Acts_) obtained from the _Train Images_ to automatically extract concept vectors (_NCAVs_) with the _Concept Miner_. Subsequently, during the testing phase (Fig. 1(b)), the _NCAVs_ are utilized to generate _Concept Masks_ for activations (_Test Acts_) of _Test Images_. To evaluate concept similarity via masks, we process the obtained continuous _Concept Masks_. Each of them is normalized between 0 and 1, bilinearly interpolated to the same size (e.g., size of corresponding _Test Images_), and then binarized by thresholding, where the threshold value is a hyperparameter. After completing the preprocessing step, we calculate the _Unsupervised Concept Similarity_ (\(UCS_{i,j}\)) for any pair of concepts by averaging the pixelwise Jaccard index, also known as Intersection over Union (IoU), of the set of binary concept masks obtained for test samples:
\[UCS_{i,j}=\frac{1}{N}\sum_{k=1}^{N}\text{IoU}(M_{i}^{k},M_{j}^{k})\;,\quad \text{IoU}(M_{i}^{k},M_{j}^{k})=\frac{\sum\texttt{AND}(M_{i}^{k},M_{j}^{k})}{ \sum\texttt{OR}(M_{i}^{k},M_{j}^{k})}\;, \tag{1}\]
where \(i\) and \(j\) are concept indices, \(N\) is the number of test samples, \(M_{i}^{k},M_{j}^{k}\in\{\texttt{True},\texttt{False}\}^{W\times H}\) are binary concept masks interpolated to the same fixed, user-defined size \(W\times H\) for the test sample at index \(k\), and AND and OR refer to pixel-wise intersection and union of binary masks, respectively.
Therefore, by comparing the projections of extracted concepts onto the input space, we indirectly measure the similarity between concepts and even describe the similarity of latent spaces. By using different test sets to excite and extract desired concepts in various layers, human experts can gain valuable insights into the knowledge similitude across different models.
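A minimal sketch of the mask post-processing and the \(UCS\) computation of Eq. (1) is given below, assuming the saliency maps have already been upsampled to a common resolution.

```
import numpy as np

def binarize(saliency, thresh=0.5):
    """Min-max normalize a continuous concept saliency map to [0, 1] and threshold it."""
    s = saliency - saliency.min()
    s = s / (s.max() + 1e-8)
    return s >= thresh

def ucs(masks_i, masks_j):
    """Unsupervised Concept Similarity (Eq. 1): mean IoU over N test samples.
    masks_i, masks_j: boolean arrays of shape (N, H, W) for the two compared concepts."""
    inter = np.logical_and(masks_i, masks_j).sum(axis=(1, 2))
    union = np.logical_or(masks_i, masks_j).sum(axis=(1, 2))
    iou = np.where(union > 0, inter / np.maximum(union, 1), 0.0)   # empty-vs-empty counted as 0
    return float(iou.mean())
```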
### Supervised Feature Space Similarity
The supervised approach aims to answer the question, _"How similar is the arrangement of feature spaces in compared layers with respect to a given concepts?"_. In order to answer it, CAVs [22] are utilized as pivot vectors, around which we estimate the behaviour of feature spaces with activations of test samples.
Figure 2: Supervised concept-based estimation of feature space similarity (a) and feature space similarity scoring (b).
In Figure 2(a), the supervised concept-based layer-wise feature space comparison process is shown for two trained _Test Networks_. _L1_ and _L2_ are indices of analyzed layers. In the first stage, the _Concept Extractor_ is employed to extract _CAVs_ for each pair of the compared layers, using the training sample activations (_Train Acts_) obtained from concept-related images (_Train Concepts_). Next (Fig. 2(b)), to compare the feature spaces with respect to selected concepts, we compute the cosine similarity between the _CAVs_ and the activations of test samples (_Test Acts_). Finally, we use the Pearson Correlation Coefficient (PCC) to compare the resulting series of cosine similarities and estimate the _Supervised Feature Space Similarity_ (\(SFSS_{u,v}\)), which takes into account the ranking information of the samples as well as accounts for the relative orientation of sample activations in the feature space:
\[SFSS_{u,v}=\frac{1}{M}\sum_{i=1}^{M}\text{PCC}\left(\left\{\text{CS}_{u,k}^{i}\right\}_{k=1}^{N},\left\{\text{CS}_{v,k}^{i}\right\}_{k=1}^{N}\right)\,, \tag{2}\] \[\text{CS}_{*,k}^{i}=\cos(CAV_{*}^{i},x_{*,k})\,,\quad *\in\{u,v\}\,, \tag{3}\]
where indices \(u\) and \(v\) represent network layers, \(M\) is the total number of test concepts, \(i\) is the index of the currently tested concept, \(N\) is the total number of test samples, and \(\text{CS}_{*,k}\) is a series of cosine similarities between the tested concept's \(CAV\) and the activation \(x_{*,k}\) of the \(k\)-th test sample in layer \(*\).
Although we propose using PCC for the computation of \(SFSS_{u,v}\), it can be replaced with a statistical metric that preserves the rank order of values in the series. Spearman's rank correlation coefficient, for example, is a valid alternative.
Hence, by ranking and comparing the similarities between concepts representations and test sample activations across multiple layers and models, we can indirectly estimate the generalized similarity and arrangement of the feature spaces in them.
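A compact sketch of Eqs. (2)-(3) follows, assuming the CAVs are given (e.g., as the weights of the linear concept classifiers described above) and that test activations have been flattened or pooled to vectors of the layer's channel dimensionality.

```
import numpy as np

def sfss(cavs_u, acts_u, cavs_v, acts_v):
    """Supervised Feature Space Similarity (Eqs. 2-3).
    cavs_*: (M, C_layer) concept activation vectors; acts_*: (N, C_layer) activations
    of the same N test samples in the respective layer."""
    def cos_series(cavs, acts):
        cavs = cavs / np.linalg.norm(cavs, axis=1, keepdims=True)
        acts = acts / np.linalg.norm(acts, axis=1, keepdims=True)
        return cavs @ acts.T                                   # CS: (M, N) cosine similarities
    cs_u, cs_v = cos_series(cavs_u, acts_u), cos_series(cavs_v, acts_v)
    pccs = [np.corrcoef(cs_u[i], cs_v[i])[0, 1] for i in range(cs_u.shape[0])]
    return float(np.mean(pccs))
```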
## 5 Experimental Setup
Our experiments follow the methodology outlined in the previous section, which involves two main parts: 1) unsupervised layer-wise estimation of semantic similarity with binary concept masks (Sec. 4.1); and 2) supervised layer-wise comparison of model feature spaces with sample semantic similarity rankings (Sec. 4.2). In the subsequent subsections, we provide all details on the experimental setup.
### Experimental Data of Test Images
We assume that the semantic complexity of the test data may affect the performance of the proposed methods. To investigate this, we conduct the evaluation using two datasets with similar knowledge categories but varying semantic complexity: MS COCO 2017 [25] and CelebA [27]. CelebA is a low-semantic-diversity dataset, which comprises 202,599 homogeneous images of celebrity faces. In contrast, the MS COCO dataset is an object detection dataset with
high semantic diversity, featuring images of various objects in different contexts. This dataset includes images of different shapes with 2D object bounding box annotations. We utilized a subset of more than 2,000 randomly selected MS COCO images, containing _person_ class objects in various positions and situations. To streamline further visual validation, we only used non-crowd instances with bounding box areas of at least \(20,000\) pixels. The resulting subset includes more than 2679 bounding boxes of people in different poses and locations, extracted from 1685 images.
### Models
We perform a semantic comparison of three object detectors of different paradigms and generations, which also feature different backbones, to evaluate the applicability of our approach:
* one-stage YOLOv5s [20] with residual (res.) DarkNet [36, 17] backbone ([https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5));
* one-stage SSD [26], which utilizes a VGG [44] backbone ([https://pytorch.org/vision/stable/models/ssd](https://pytorch.org/vision/stable/models/ssd));
* two-stage FasterRCNN [37] with inverted res. MobileNetV3 [19] backbone ([https://pytorch.org/vision/stable/models/faster_rcnn](https://pytorch.org/vision/stable/models/faster_rcnn)).
All models are trained on the semantically rich MS COCO [25], which is expected to contain semantic concepts relevant to both test datasets (Sec. 5.1). The models above are further referred to as YOLO5, SSD, and RCNN.
### Concept Mining and Synthetic Concept Generation
The effectiveness of supervised concept-based analysis heavily relies on the quality of the concept-related training data. Unfortunately, publicly available datasets with concept labels are scarce, and existing ones may not be suitable for all research domains and tasks. To address this issue, we suggest generating synthetic concept samples using concept information automatically extracted from task-specific datasets.
For this, we mine concept-related superpixels (image patches) with ICE [50] from MS COCO bounding boxes of the _person_ class with an area of at least
Figure 3: Examples of generated MS COCO synthetic concept training samples.
\(20,000\) pixels (Sec. 5.1). For experiments we selected 3 concepts, each comprising 100 superpixels, and semantically corresponding to labels "legs", "head", and "torso". Concepts were extracted from YOLOv5s layers 8.v3.c, 9.v1.c, and 10.c respectively.
In order to create a synthetic concept sample, 1 to 5 concept-related superpixels are randomly selected, and placed on a background of random noise drawn from a uniform distribution. Figure 3 shows examples of MS COCO synthetic concepts. Additionally, we rescale the superpixels by a factor between 0.9 and 1.1 before placement. The dimensions of the generated samples are set to be \(640\times 480\) pixels.
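A minimal sketch of this generation step is given below; the parameter ranges follow the text, while the compositing details (random placement, RGB-only channel handling) are assumptions of the sketch.

```python
import random
import numpy as np
from PIL import Image

def make_concept_sample(superpixels, size=(640, 480)):
    """Compose 1-5 randomly chosen concept superpixels (PIL RGB images),
    rescaled by a factor in [0.9, 1.1], onto a uniform-noise background."""
    w, h = size
    canvas = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)  # noise background
    for patch in random.sample(superpixels, k=random.randint(1, 5)):
        scale = random.uniform(0.9, 1.1)
        pw = min(w, max(1, int(patch.width * scale)))
        ph = min(h, max(1, int(patch.height * scale)))
        resized = np.asarray(patch.resize((pw, ph)))[..., :3]
        x, y = random.randint(0, w - pw), random.randint(0, h - ph)
        canvas[y:y + ph, x:x + pw] = resized
    return Image.fromarray(canvas)
```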
To conduct experiments on the CelebA dataset, we utilized four concepts extracted from the 7.conv layer of YOLO5 (based on the results of Experiment 1, see Sec. 6.1). These concepts correspond to semantic labels "hairs", "upper face", "lower face", and "neck". Each concept sample contains one concept superpixel. The CelebA concept sample has the same size as the dataset sample, i.e., \(178\times 218\) pixels. Top row of Figure 5 displays examples of concept masks, which are utilized for concept superpixel cropping.
### Dimensionality of Concept Activation Vectors
The TCAV [22] method employs 3D-CAVs to represent concepts. However, an alternative approach is to use a 1D-representation, as concept information can be encoded in the linear combination of feature space channels [10, 50]. The use of a 1D-CAV offers several benefits [30] over the 3D-CAV: 1) it is more stable and computationally efficient, as it reduces the number of computational parameters, which is particularly important for a layer-to-layer comparison of deep backbones; 2) it is translation invariant since the spatial information of the concept is aggregated and only the presence or absence of the concept affects the channel activation strength. Given the mentioned advantages, we have opted to utilize 1D-CAVs in our experiments for supervised feature space comparison (Section 4.2).
Figure 4 illustrates the process of obtaining 1D- and 3D-CAVs, where C, H, and W represent the _channel_, _height_, and _width_ dimensions, respectively. The arrows indicate the concept extraction process (see Sec. 3), where all input representations are aggregated across the _height_ and _width_ dimensions before computing the 1D-CAV. When dealing with 1D-representations to compute the
Figure 4: Generation of 3D- and 1D-CAV representations.
similarity between the concept and a sample, the sample activation undergoes the same aggregation across the _height_ and _width_ dimensions.
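As a small illustration, assuming activations are stored as \(C\times H\times W\) arrays and using mean pooling as one natural choice of spatial aggregation, the 1D comparison reduces to:

```python
import numpy as np

def to_1d(act_chw):
    """Aggregate a C x H x W activation map over height and width,
    yielding a 1D channel vector (translation invariant)."""
    return act_chw.mean(axis=(1, 2))

def concept_similarity_1d(cav_1d, act_chw):
    """Cosine similarity between a 1D-CAV and a spatially aggregated
    sample activation, as used for the SFSS ranking."""
    x = to_1d(act_chw)
    return float(np.dot(cav_1d, x) /
                 (np.linalg.norm(cav_1d) * np.linalg.norm(x) + 1e-12))
```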
### Experiment-specific Settings
**Experiment 1:**_Unsupervised Concept Similarity._ We carried out experiments on unsupervised concept similarity using datasets of varying semantic diversity to showcase how concept mining is influenced by the input data. To train the NCAVs, we used 100 and 300 random samples from the CelebA and MS COCO datasets, respectively, as explained in Section 4.1 and Section 5.1. Afterward, we evaluated the performance using another 100 samples of each dataset to compute our \(UCS_{i,j}\) metric from Section 4.1. The experimental results are presented in the form of a heatmap for each pair of layers.
For CelebA, we extracted 5 concepts per layer, while for MS COCO the number of mined concepts is set to 10. We used a value of \(BT=0.25\) (see Sec. 6.1 for an analysis of the impact of \(BT\) values) for binarizing concept masks. Examples of resulting masks for different \(BT\) values can be seen in Fig. 8(c) in Sec. 6.1.
**Experiment 2:**_Supervised Feature Space Similarity._ We evaluated the layer-wise feature space similarity of neural networks by conducting tests using CAVs trained on synthetic concepts from MS COCO and CelebA (see Section 5.3). To measure the \(SFSS_{i,j}\) ranking metric, we used 200 randomly sampled MS COCO images and the results are presented as a heatmap, with each cell representing a layer combination. To plot the heatmaps, we selected 10 layers uniformly distributed over the backbone depth of the networks under test (see Sec. 6.2), which are listed in Table 1.
## 6 Experimental Results
### Unsupervised Concept Similarity
The experiments were carried out following the methodology outlined in Section 4.1, using the setup described in Section 5.5. Figures 6, 7, 8(a), and 8(b) depict
| NN / Layer id | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YOLO5 | 4.v3.c | 5.c | 6.v3.c | 7.c | 8.v3.c | 9.v2.c | 13.v3.c | 17.v3.c | 18.c | 20.v3.c | 21.c |
| SSD | f.5 | f.10 | f.14 | f.17 | f.21 | e.0.1 | e.0.5 | e.1.0 | e.2.0 | e.3.0 | e.4.0 |
| RCNN | 5.b.3 | 7.b.2 | 8.b.2 | 9.b.2 | 10.b.2 | 11.b.3 | 12.b.3 | 13.b.3 | 14.b.3 | 15.b.3 | 16 |

Table 1: Shorthands of selected CNN intermediate layers for experiments (b=block, f=features, e=extra, c=conv, v=cv).
the concept similarity heatmaps of concepts mined in different layers, while Figures 5 and 8 show examples of the produced concept masks.
**Impact of data semantic diversity.** After manually inspecting concept masks and \(UCS\)-heatmaps generated with CelebA for different layers of the tested CNN backbones, we discovered that YOLO5 7.c, SSD e.0.3, and RCNN 15.b.3.0 layers (see Tab.1 for shorthands) had the most similar concepts and feature spaces. These are shown in the top row of Fig. 6. Furthermore, we found a one-to-one correspondence between extracted concepts in these layers. For convenience, we arranged the heatmap's horizontal axis by placing the most similar concept pairs on the diagonal. All of the extracted concepts are interpretable and correspond to semantic labels (with optimal layers for them in brackets): "hair" (YOLO5.c4, SSD.c4, RCNN.c2), "upper face" (YOLO5.c1, SSD.c2, RCNN.c0), "lower face" (YOLO5.c2, SSD.c0, RCNN.c4), "neck" (YOLO5.c3, SSD.c3, RCNN.c1), and "background" (YOLO5.c0, SSD.c1, RCNN.c3). Figure 5 demonstrates examples of binary masks generated for "hair", "lower face" and "neck" concepts.
In contrast to CelebA, not all of the concepts mined from MS COCO are human-interpretable and have counterparts in other models. This is demonstrated in the example of layers 8.v3.c, e.0.5, and 15.b.3.0 of YOLO5, SSD, and RCNN (bottom row of Fig. 6). The higher semantic variability of input samples in MS COCO, where input samples may contain different sets of concepts, makes it more challenging to mine meaningful concepts. However, after a manual inspection of the most similar concept pairs highlighted by our approach, we found concepts corresponding to semantic labels such as "legs" (YOLO5.cc, SSD.c0, RCNN.c2), "head" (YOLO5.c2, SSD.c1, RCNN.c1), and "background" (YOLO5.c8, SSD.c8, RCNN.c7). Examples of their binary masks are depicted in the bottom row of Figure 5.
To summarize our observations, we conclude that datasets with high semantic variability may lead to a lower quality of automatic concept extraction results. Our proposed method can quantify and visualize this issue, assist in finding similar concepts, and identify layers with similar semantic information. Also, for
Figure 5: Examples of binary concept masks obtained unsupervised using ICE for CelebA (top) and MS COCO (bottom) at binarization threshold \(BT=0.25\).
a comprehensive unsupervised comparison of model concepts and feature spaces, we recommend using datasets with semantically homogeneous samples, similar to those found in CelebA.
**Semantically similar layers identification.** The proposed method enables the identification of the most similar layers in different networks. For example, the top row diagrams of Figure 6 display the layers with the highest level of feature space correspondence for CelebA, where each concept of one layer has a distinct counterpart in another. Another example in Figure 7 demonstrates non-optimal variations, where a concept has multiple possible counterparts (Fig. 7(a)) or no matches (Fig. 7(b)).
In our experience, the main factors that influence the identification of similar layers are the number of concepts mined and the semantic complexity of the test dataset, as demonstrated in Figure 6.
Figure 6: Unsupervised concept similarity (\(UCS_{i,j}\)) estimates of different concepts \(c_{i}\) (x-axis) and \(c_{j}\) (y-axis) mined in _optimal layers_.
Figure 7: Unsupervised concept similarity (\(UCS_{i,j}\)) estimates of different concepts \(c_{i}\) (x-axis) and \(c_{j}\) (y-axis) mined in _non-optimal layers_.
**Concept robustness with regard to \(BT\).** The parameters of binary masks generated by ICE on test samples depend on the binarization threshold \(BT\): higher \(BT\) values may reduce mask size and, hence, impact concept similarity, as illustrated in Fig. 8(c). By leveraging this finding, we can also quantify the relative robustness of different concepts. As illustrated in Figures 8(a) and 8(b), concepts like YOLO5.c1 and SSD.c2, as well as YOLO5.c2 and SSD.c0, exhibit the most resilience to changes in \(BT\), making them the most robust ones.
### Supervised Feature Space Similarity
**Semantic information flow.** Figures 9 and 10 display the layer-wise similarity between the feature spaces of models with respect to given concepts. Notably, the diagonal values in the heatmap of Figure 9 are more intense, indicating that semantic similarity is primarily influenced by the layer's relative depth in the backbone. Therefore, we can compare entire networks by evaluating a selected set of N layers (as in Table 1) that are evenly distributed throughout the backbone. Such an approach helps save processing power and time while preserving the global picture.
Figure 8: Influence of concept mask binarization threshold value (\(BT\)) on unsupervised concept similarity estimation of concepts \(c_{i}\) (x-axis) and \(c_{j}\) (y-axis) for 7.conv and backbone.extra.0.3 layers of YOLO5 and SSD.
Figure 9: Supervised feature space comparison of SSD and YOLO5 layers (all convolutional layers indexed from 0 to 20 resp. 54).
**Concept complexity.** By examining Figure 10, we can observe that concepts derived from the CelebA dataset, which are lower in abstraction level and pertain to different parts of the face, lead to greater similarity between layers in identical model pairs compared to the more complex body part concepts extracted from MS COCO. Additionally, these concepts are more clearly defined across a broader range of layers, resulting in distinguishable clusters (larger darker regions) on the heatmaps. These observations imply that these concepts are more effectively represented in the feature spaces of the compared models.
**Network architecture differences.** Among the tested backbones, the MobileNetV3 backbone of RCNN exhibits a remarkably distinct behavior. Specifically, MobileNetV3 captures the same semantic information in two distinct regions within the network: at the beginning and in the middle. This can be observed in the middle and right columns of Figure 10, where we see a pattern with two distinct clusters (darker areas) along the vertical axis, between layers 0 and 3, and layers 5 and 8. This pattern is not observed in the direct comparison of the DarkNet and VGG backbones of YOLO5 and SSD, and is thus typical only of MobileNetV3. We attribute this peculiarity to the distinctive network-building technique employed in the inverted residual blocks of MobileNetV3, which allows the semantic information of the tested concepts to propagate across the network more effectively. This, in turn, leads to a comparatively smaller decrease in the semantic similarity of deeper layers in RCNN than in SSD and YOLO5.
Thereby, the proposed method for supervised feature space comparison also allows us to identify significant variations in the feature spaces and semantic
Figure 10: Supervised feature space comparison of selected model layers (cf. Tab. 1).
representation learning across different models, and can also be used to judge optimal model architectures with respect to interpretability.
### Limitations and Future Work
Our approaches for concept analysis naturally inherit all limitations of data-driven methods, like dependence on high-quality data. Thus, manual visual validation of used CAVs and NCAVs, as done here, remains inevitable. Our proposed semi-automatic data generation can be used to reduce labeling costs. Moreover, we found that the semantic diversity of the test data strongly affects the quality of the extracted concepts, and hence recommend using semantically homogeneous sets for testing.
A limitation inherent to using ICE concept masks is the differing and low mask resolution resulting from the different activation map dimensions. Choosing the scaling factors individually for any pair of layers may mitigate this, however, at cost of comparability. Another issue is the dependence on the binarization threshold \(BT\). An interesting future direction could therefore be to directly compare the non-binary concept masks.
In general, it will be interesting to apply our approach to further large NN architectures, e.g., transformers, and to visual tasks other than object detection.
## 7 Conclusion and Outlook
In this research, we presented architecture-agnostic supervised and unsupervised methods for estimating the similarity of feature spaces in CNN backbones. Proposed methods help to reveal how the same semantic information is processed across various model backbones, and enable identification of the semantically similar layers. We use semantic concept vectors, namely CAVs and NCAVs, to assess the behavior of the latent space through the concept's response to the test data. Experiments on two datasets and three different backbone architectures trained on the same data revealed that regardless of the NN architecture, layers with similar semantic information can be found, as we found network layers with one-to-one concept correspondence. We also discovered that the feature space semantic information depends on the relative depth of the layer in the network backbone. Therefore, to compare different CNN backbones, it seems to be sufficient to compare only a subset of layers of uniform depth-distance in the backbone. Finally, our method provides valuable insights, which may be useful for applications like informed model selection, meta-analysis of network architectures, or dataset bias retrieval.
#### Acknowledgments
The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Climate Action within the project "KI Wissen - Entwicklung von Methoden für die Einbindung von Wissen in maschinelles Lernen". The authors would like to thank the consortium for the successful cooperation.
2309.05389 | Soundness and Completeness of a Model-Checking Proof System for CTL | We propose a local model-checking proof system for a fragment of CTL. The
rules of the proof system are motivated by the well-known fixed-point
characterisation of CTL based on unfolding of the temporal operators. To
guarantee termination of proofs, we tag the sequents of our proof system with
the set of states that have already been explored for the respective temporal
formula. We define the semantics of tagged sequents, and then state and prove
soundness and completeness of the proof system, as well as termination of proof
search for finite-state models. | Georg Friedrich Schuppe, Dilian Gurov | 2023-09-11T11:39:47Z | http://arxiv.org/abs/2309.05389v1 | # Soundness and Completeness of a Model-Checking Proof System for CTL
###### Abstract
We propose a local model-checking proof system for a fragment of CTL. The rules of the proof system are motivated by the well-known fixed-point characterisation of CTL based on unfolding of the temporal operators. To guarantee termination of proofs, we tag the sequents of our proof system with the set of states that have already been explored for the respective temporal formula. We define the semantics of tagged sequents, and then state and prove soundness and completeness of the proof system, as well as termination of proof search for finite-state models.
## 1 Introduction
Computation Tree Logic (CTL) is a well-known branching-time temporal logic [3, 5]. Many useful temporal specification patterns can be expressed naturally in CTL. The logic is supported by numerous off-the-shelf model checking tools such as nuSMV [2].
The standard, _global_ approach to model checking of a CTL formula \(\phi\) w.r.t. a given state \(s\) of a given Kripke structure \(\mathcal{M}\) is to first compute the set \(\llbracket\phi\rrbracket^{\mathcal{M}}\) of all states that satisfy the formula, i.e., the _denotation_ of \(\phi\), and then to check whether \(s\in\llbracket\phi\rrbracket^{\mathcal{M}}\). This approach allows the use of _symbolic_ representations of the denotations of the formula and its subformulas, typically as BDDs (as in nuSMV).
An alternative, _local_ approach is to start with the state \(s\) and incrementally explore its neighbourhood as required by the formula \(\phi\), by _unfolding_ the latter step-by-step. One obvious advantage of this approach is that it only explores the part of the model that is required to establish or reject the checked formula. Another advantage is that local model checking can be phrased as proof search in a _deductive proof system_. It can then be implemented in a straightforward manner in a logic programming environment such as Prolog. This can be very useful for _education purposes_, since it gives students the opportunity to create, without much effort, a tool of their own that can analyse non-trivial models
of system behaviour (typically with up to a few thousand states). In fact, the model-checking proof system presented here has been developed for and used in the course _Logic for Computer Scientists_, given at KTH Royal Institute of Technology, Stockholm.
It is well-known that CTL can be embedded into the (alternation-free fragment of the) modal \(\mu\)-calculus [6]. Since local model checking proof systems have already been proposed for the latter logic, as for instance in [1], designing one for CTL based on the embedding should be straightforward. However, there are good reasons for designing a self-standing proof system, like the one we propose here. The foremost reason for us has been to utilise the circumstance that it is the alternation-free fragment of the modal \(\mu\)-calculus that we need to take into account. This suggests that the approach to guaranteeing termination of proof search employed in [6] of _tagging_ formulas with the set of states that have already been explored w.r.t. the formula (in this case essentially requiring that only the outermost fixed-point be tagged) can be lifted from the level of formulas to the level of sequents. Thus, tagged sequents need to be given a formal semantics, so that soundness and completeness of the proof system can be stated formally, and termination of proof search for finite-state models can be argued.
Since our proof system has originally been designed for education purposes, to keep the presentation simple, we have chosen not to include the Until operator of CTL in our treatment, and leave its addition as an exercise to the interested reader. This does not present any technical difficulties, and simply follows the pattern of the other temporal operators and their fixed-point characterisation.
## 2 Syntax and Semantics of the Logic
We start by presenting the syntax and semantics of the logic, which we call CTL\({}^{-}\), since it is a fragment of CTL.
**Definition 2.1** (Logic Syntax).: The language is defined over a set of atomic propositions _Atoms_, ranged over by \(p\), as follows:
\[\begin{array}{rcl}\phi&\,::=&p\,|\,\neg p\,|\,\phi_{1}\wedge\phi_{2}\,|\, \phi_{1}\vee\phi_{2}\,|\,A\psi\,|\,E\psi\\ \psi&\,::=&X\phi\,|\,G\phi\,|\,F\phi\end{array}\]
The formulas \(\phi\) are called _state formulas_ and \(\psi\)_path formulas_. The strict alternation of path and state quantifiers gives rise to six combinations. Notice that negation is only allowed over atomic propositions. The reason for this is that it is cumbersome to come up with a rule for negated formulas in Section 3. However, this restriction does not affect the expressiveness of the logic, since negated formulas can be "deMorganised" so as to push the negation to the atomic propositions.
**Definition 2.2** (Kripke Structure).: A _Kripke structure_ is a tuple \(\mathcal{M}=(S,\rightarrow,L)\), where \(S\) is a set of _states_, \(\rightarrow\) a binary _transition relation_ on \(S\), and \(L:\)
\(S\to 2^{Atoms}\) a _labelling function_ that assigns to every state the set of atomic propositions that are deemed true in that state.
Given a Kripke structure, the semantics of a CTL formula \(\phi\) is defined as the set \(\llbracket\phi\rrbracket^{\mathcal{M}}\subseteq S\) of states that satisfy the formula, sometimes referred to as its _denotation_. Inspired by [1], however, we shall define this notion relative to a set \(U\subseteq S\) of states, called a _tag_. We will use such tags in Section 3 to guarantee finiteness of proof trees. Only formulas starting with a temporal operator will need (non-empty) tags.
**Definition 2.3** (Logic Semantics).: Let \(\mathcal{M}=(S,\rightarrow,L)\) be a Kripke structure. The semantics of formulas is inductively defined by the following equations:
\[\llbracket p\rrbracket^{\mathcal{M}}_{\varnothing} \stackrel{{\text{def}}}{{=}} \{s\in S\mid p\in L(s)\} \tag{1}\] \[\llbracket-p\rrbracket^{\mathcal{M}}_{\varnothing} \stackrel{{\text{def}}}{{=}} S\setminus\llbracket p\rrbracket^{\mathcal{M}}_{\varnothing}\] (2) \[\llbracket\phi\wedge\psi\rrbracket^{\mathcal{M}}_{\varnothing} \stackrel{{\text{def}}}{{=}} \llbracket\phi\rrbracket^{\mathcal{M}}_{\varnothing}\cap \llbracket\psi\rrbracket^{\mathcal{M}}_{\varnothing}\] (3) \[\llbracket\phi\vee\psi\rrbracket^{\mathcal{M}}_{\varnothing} \stackrel{{\text{def}}}{{=}} \llbracket\phi\rrbracket^{\mathcal{M}}_{\varnothing}\cup \llbracket\psi\rrbracket^{\mathcal{M}}_{\varnothing}\] (4) \[\llbracket EX\phi\rrbracket^{\mathcal{M}}_{\varnothing} \stackrel{{\text{def}}}{{=}} \mathit{pre}_{\exists}(\llbracket\phi\rrbracket^{\mathcal{M}}_{ \varnothing})\] (5) \[\llbracket AX\phi\rrbracket^{\mathcal{M}}_{\varnothing} \stackrel{{\text{def}}}{{=}} \mathit{pre}_{\forall}(\llbracket\phi\rrbracket^{\mathcal{M}}_{ \varnothing})\] (6) \[\llbracket EF\phi\rrbracket^{\mathcal{M}}_{U} \stackrel{{\text{def}}}{{=}} \mu Y.(\llbracket\phi\rrbracket^{\mathcal{M}}_{\varnothing}\cup \mathit{pre}_{\exists}(Y)\setminus U)\] (7) \[\llbracket AF\phi\rrbracket^{\mathcal{M}}_{U} \stackrel{{\text{def}}}{{=}} \mu Y.(\llbracket\phi\rrbracket^{\mathcal{M}}_{\varnothing}\cup \mathit{pre}_{\forall}(Y)\setminus U)\] (8) \[\llbracket EG\phi\rrbracket^{\mathcal{M}}_{U} \stackrel{{\text{def}}}{{=}} \nu Y.(\llbracket\phi\rrbracket^{\mathcal{M}}_{\varnothing}\cap \mathit{pre}_{\exists}(Y)\cup U)\] (9) \[\llbracket AG\phi\rrbracket^{\mathcal{M}}_{U} \stackrel{{\text{def}}}{{=}} \nu Y.(\llbracket\phi\rrbracket^{\mathcal{M}}_{\varnothing}\cap \mathit{pre}_{\forall}(Y)\cup U) \tag{10}\]
where the state transformers \(\mathit{pre}_{\exists}:2^{S}\to 2^{S}\) and \(\mathit{pre}_{\forall}:2^{S}\to 2^{S}\), and the _least_ and _greatest fixed-points_ \(\mu Y.f(Y)\) and \(\nu Y.f(Y)\) of a monotone function \(f:2^{S}\to 2^{S}\) are defined as follows:
\[\mathit{pre}_{\exists}(Y) \stackrel{{\text{def}}}{{=}} \{s\in S\mid\exists s^{\prime}\in Y.\ s\to s^{\prime}\} \tag{11}\] \[\mathit{pre}_{\forall}(Y) \stackrel{{\text{def}}}{{=}} \{s\in S\mid\forall s^{\prime}\in S.\ (s\to s^{\prime}\Rightarrow s^{\prime}\in Y)\}\] (12) \[\mu Y.f(Y) \stackrel{{\text{def}}}{{=}} \bigcap\ \{X\subseteq S\mid f(X)\subseteq X\}\] (13) \[\nu Y.f(Y) \stackrel{{\text{def}}}{{=}} \bigcup\ \{X\subseteq S\mid f(X)\supseteq X\} \tag{14}\]
If the tag \(U\) is empty, the semantics coincides with the standard semantics of CTL. The semantic rules for \(\llbracket EF\phi\rrbracket^{\mathcal{M}}_{\varnothing}\), \(\llbracket AF\phi\rrbracket^{\mathcal{M}}_{\varnothing}\), \(\llbracket EG\phi\rrbracket^{\mathcal{M}}_{\varnothing}\) and \(\llbracket AG\phi\rrbracket^{\mathcal{M}}_{\varnothing}\) fall back on known embeddings of CTL into the modal \(\mu\)-calculus [4].
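For a finite Kripke structure, the fixed points above can be computed by straightforward iteration. The following minimal Python sketch (an illustration only, with the transition relation given as a successor map) computes \(\mathit{pre}_{\exists}\), \(\mathit{pre}_{\forall}\) and, as an example, \(\llbracket EF\phi\rrbracket^{\mathcal{M}}_{U}\):

```python
def pre_exists(succ, Y):
    """pre_E(Y): states with at least one successor in Y (Eq. 11)."""
    return {s for s, ts in succ.items() if any(t in Y for t in ts)}

def pre_forall(succ, Y):
    """pre_A(Y): states all of whose successors lie in Y (Eq. 12)."""
    return {s for s, ts in succ.items() if all(t in Y for t in ts)}

def lfp(f):
    """Kleene iteration from the empty set; on a finite state space this
    yields the least fixed point of a monotone f, as defined in Eq. 13."""
    X = set()
    while True:
        X_next = f(X)
        if X_next == X:
            return X
        X = X_next

def ef_states(succ, sat_phi, U=frozenset()):
    """[[EF phi]]_U = mu Y.((sat_phi | pre_E(Y)) minus U), cf. Eq. 7."""
    return lfp(lambda Y: (set(sat_phi) | pre_exists(succ, Y)) - set(U))
```

The greatest fixed points in (9)–(10) are computed dually, by iterating downwards from the full state set.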
We shall later need the following result.
**Lemma 2.1** (Reduction Lemma [1]).: For any monotone function \(\psi\) on a powerset \(\mathit{Pow}(D)\), and any \(p\in D\), we have:
\[p\in\mu Y.\psi(Y) \Leftrightarrow\ p\in\psi(\mu Y.(\psi(Y)\setminus\{p\})) \tag{15}\] \[p\in\nu Y.\psi(Y) \Leftrightarrow\ p\in\psi(\nu Y.(\psi(Y)\cup\{p\})) \tag{16}\]
The right-hand sides of these logical equivalences involve a slightly modified unfolding of the fixed points: For the least fixed point of a single element, \(p\) is removed in the unfolding; for the greatest it is added.
## 3 A Local Model-Checking Proof System
We present our model-checking procedure in the form of a deductive system, consisting of rules over sequents \(\mathcal{M},s\ \vdash_{U}\ \phi\). To guarantee finiteness of proof
Figure 1: A Local Model-Checking Proof System for CTL\({}^{-}\).
trees, and with this completeness of the proof system as well as termination of proof search, we equip our sequents with _tags_ \(U\subseteq S\) as already introduced in Section 2. The rules of our proof system are presented in Figure 1. In the premises of the \(\mathsf{A}\)-rules, \(s_{1},\ldots,s_{n}\) denote _all_ successors of state \(s\) in the Kripke structure \(\mathcal{M}\), while in the premises of the \(\mathsf{E}\)-rules, \(s^{\prime}\) denotes _some_ successor of \(s\). To prove that a state \(s\) in a Kripke structure \(\mathcal{M}\) satisfies a formula \(\phi\) of the logic, one needs to derive the sequent \(\mathcal{M},s\vdash_{\varnothing}\phi\), where the tag is initially empty.
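Proof search with these rules can be implemented directly as a recursive procedure; the intended vehicle is a logic programming language such as Prolog, but the following Python sketch (an illustration only, covering the \(E\)-rules for \(F\) and \(G\), with the Kripke structure given by a successor map `succ`) shows the idea:

```python
def prove_ef(succ, s, phi_holds, U=frozenset()):
    """Backward proof search for  M, s |-_U EF phi  (rules EF1 and EF2)."""
    if s in U:                       # side-condition "s not in U" fails: no rule applies
        return False
    if phi_holds(s):                 # rule EF1
        return True
    return any(prove_ef(succ, t, phi_holds, U | {s})   # rule EF2: some successor, grown tag
               for t in succ[s])

def prove_eg(succ, s, phi_holds, U=frozenset()):
    """Backward proof search for  M, s |-_U EG phi  (rules EG1 and EG2)."""
    if s in U:                       # rule EG1 (axiom)
        return True
    return phi_holds(s) and any(prove_eg(succ, t, phi_holds, U | {s})  # rule EG2
                                for t in succ[s])
```

Since every recursive call strictly enlarges the tag, the search terminates on finite-state models, mirroring the finiteness argument of Section 5.2.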
_Example._ Consider the following Kripke structure:
We would like to show that the formula \(\mathsf{EF}\) (\(\mathsf{EG}\)\(r\)) holds in state \(s_{0}\) of the Kripke structure. This can be established by the following proof tree:
[Derivation omitted: starting from the root sequent \(\mathcal{M},s_{0}\vdash_{\varnothing}EF\,(EG\,r)\), the \(EF\)- and \(EG\)-rules are applied backwards along the transitions of \(\mathcal{M}\), accumulating the visited states in the tags, until every branch (ending in sequents such as \(\mathcal{M},s_{2}\vdash_{\varnothing}r\)) is closed by an axiom.]

**Definition 3.1** (Validity).: A sequent \(\mathcal{M},s\vdash_{U}\phi\) is _valid_, written \(\mathcal{M},s\models_{U}\phi\), if \(s\in\llbracket\phi\rrbracket^{\mathcal{M}}_{U}\).

## 4 Soundness of the Proof System

A deductive system is termed _sound_ if every derivable sequent is valid. We show that the proof system from Figure 1 is sound by considering each rule in turn, and proving that the rule's conclusion is a valid
sequent whenever all its premises are valid and the side-conditions hold.
_Rule \(p\)_. If \(p\in L(s)\), the conclusion \(\mathcal{M},s\models_{\varnothing}p\) follows directly from Definition 3.1 and (1). The argument for rule \(\neg p\) is dual.
_Rule \(\wedge\)_. We show that \(\mathcal{M},s\models_{\varnothing}\phi\) and \(\mathcal{M},s\models_{\varnothing}\psi\) imply \(\mathcal{M},s\models_{\varnothing}\phi\wedge\psi\). If \(s\in\llbracket\phi\rrbracket_{\varnothing}\) and \(s\in\llbracket\psi\rrbracket_{\varnothing}\), then obviously
\[s\in\llbracket\phi\rrbracket_{\varnothing}\cap\llbracket\psi\rrbracket_{ \varnothing}\stackrel{{(\ref{eq:1})}}{{=}}\llbracket\phi\wedge \psi\rrbracket_{\varnothing}.\]
_Rules \(\vee_{1},\vee_{2}\)_. We show that \(\mathcal{M},s\models_{\varnothing}\phi\) implies \(\mathcal{M},s\models_{\varnothing}\phi\vee\psi\). If \(s\in\llbracket\phi\rrbracket_{\varnothing}\), then
\[s\in\llbracket\phi\rrbracket_{\varnothing}\cup\llbracket\psi\rrbracket_{ \varnothing}\stackrel{{(\ref{eq:1})}}{{=}}\llbracket\phi\vee \psi\rrbracket_{\varnothing}.\]
The argument for \(\vee_{2}\) is similar.
_Rule \(EX\)_. We show that \(\mathcal{M},s^{\prime}\models_{\varnothing}\phi\) implies \(\mathcal{M},s\models_{\varnothing}EX\phi\), where \(s\to s^{\prime}\) with \(s,s^{\prime}\in S\). If \(s^{\prime}\in\llbracket\phi\rrbracket_{\varnothing}\), then with (11), it holds that
\[s\in\mathit{pre}_{\exists}(\llbracket\phi\rrbracket_{\varnothing})\stackrel{{ (\ref{eq:1})}}{{=}}\llbracket EX\phi\rrbracket_{\varnothing}.\]
The reasoning for \(AX\) and \(\mathit{pre}_{\forall}\) is similar.
_Rule \(EG_{1}\)_. If \(s\in U\), then \(s\in\llbracket EG\phi\rrbracket_{U}\) holds by (9).
_Rule \(EG_{2}\)_. We show that \(\mathcal{M},s\models_{\varnothing}\phi\) and \(\mathcal{M},s^{\prime}\models_{U,s}EG\phi\) imply \(\mathcal{M},s\models_{U}EG\phi\) when \(s\not\in U\), where \(s\to s^{\prime}\) with \(s,s^{\prime}\in S\). Applying the appropriate semantic rule, and unfolding the fixed point once using Lemma 2.1, we get the equivalences
\[s\in\llbracket EG\phi\rrbracket_{U}\] \[\stackrel{(9)}{\Leftrightarrow}\quad s\in\nu Y.(\llbracket\phi\rrbracket_{\varnothing}\cap\mathit{pre}_{\exists}(Y)\cup U)\] \[\stackrel{(16)}{\Leftrightarrow}\quad s\in\llbracket\phi\rrbracket_{\varnothing}\cap\mathit{pre}_{\exists}(\nu Y.(\llbracket\phi\rrbracket_{\varnothing}\cap\mathit{pre}_{\exists}(Y)\cup U\cup\{s\}))\cup U\] \[\stackrel{(9)}{\Leftrightarrow}\quad s\in\llbracket\phi\rrbracket_{\varnothing}\cap\mathit{pre}_{\exists}(\llbracket EG\phi\rrbracket_{U,s})\cup U\]
Since we assume \(\mathcal{M},s^{\prime}\models_{U,s}EG\phi\), we have \(s^{\prime}\in\llbracket EG\phi\rrbracket_{U,s}\) and thus \(s\in\mathit{pre}_{\exists}(\llbracket EG\phi\rrbracket_{U,s})\). Together with the assumption that \(s\in\llbracket\phi\rrbracket_{\varnothing}\), we can conclude that \(s\in\llbracket EG\phi\rrbracket_{U}\), and hence \(\mathcal{M},s\models_{U}EG\phi\).
_Rule \(EF_{1}\)_. We show that \(\mathcal{M},s\models_{\varnothing}\phi\) implies \(\mathcal{M},s\models_{U}EF\phi\) when \(s\not\in U\).
We have the equivalences
\[s\in\llbracket EF\phi\rrbracket_{U}\] \[\stackrel{(7)}{\Leftrightarrow}\quad s\in\mu Y.(\llbracket\phi\rrbracket_{\varnothing}\cup\mathit{pre}_{\exists}(Y)\setminus U)\] \[\stackrel{(15)}{\Leftrightarrow}\quad s\in\llbracket\phi\rrbracket_{\varnothing}\cup\mathit{pre}_{\exists}(\mu Y.(\llbracket\phi\rrbracket_{\varnothing}\cup\mathit{pre}_{\exists}(Y)\setminus U\setminus\{s\}))\setminus U\] \[\stackrel{(7)}{\Leftrightarrow}\quad s\in\llbracket\phi\rrbracket_{\varnothing}\cup\mathit{pre}_{\exists}(\llbracket EF\phi\rrbracket_{U,s})\setminus U\]
Now, we can observe that
\[s\in\llbracket\phi\rrbracket_{\varnothing}\cup\mathit{pre}_{\exists}( \llbracket EF\phi\rrbracket_{U,s})\setminus U\]
holds when \(s\in\llbracket\phi\rrbracket_{\varnothing}\) and \(s\not\in U\), and hence \(\mathcal{M},s\models_{U}EF\phi\).
_Rule \(EF_{2}\)._ We show that \(\mathcal{M},s^{\prime}\models_{U,s}EF\phi\) and \(s\not\in U\) imply \(\mathcal{M},s\models_{U}EF\phi\). Since we assume \(s^{\prime}\in\llbracket EF\phi\rrbracket_{U,s}\) and \(s\not\in U\), by the same equivalences as in the previous case we can conclude that \(s\in\llbracket EF\phi\rrbracket_{U}\), and hence \(\mathcal{M},s\models_{U}EF\phi\).
_Rule \(AG_{1}\)._ If \(s\in U\), then \(s\in\llbracket AG\phi\rrbracket_{U}\) holds by (10).
_Rule \(AG_{2}\)._ The argument is similar to the proof of \(EG_{2}\).
_Rule \(AF_{1}\)._ The argument is similar to the proof of \(EF_{1}\).
_Rule \(AF_{2}\)._ The argument is similar to the proof of \(EF_{2}\).
This concludes the proof of soundness.
## 5 Completeness of the Proof System
A deductive system is termed _complete_ if for every semantically valid sequent there exists a derivation of that sequent (that is, all valid sequents can be proved). We show completeness by using the idea of a _canonical proof_. The idea of the proof is that for every valid sequent there is a way to apply rules backwards that is guaranteed to terminate with axiom rules as leaves, and thus, produce a proof of the sequent.
### Reversibility
First, we show that the rules are _reversible_: whenever a sequent is valid, there exists a rule that can be applied backwards to it, so that the resulting premises are valid.
**Theorem 5.1** (Reversibility).: The rules of the proof system from Figure 1 are reversible.
_Proof_. We consider each rule in turn.
_Rule \(\wedge\)_. If \(\mathcal{M},s\models_{\varnothing}\phi\wedge\psi\) is valid, we can apply Rule \(\wedge\) backwards. If \(s\in\llbracket\phi\wedge\psi\rrbracket_{\varnothing}\), then necessarily \(s\in\llbracket\phi\rrbracket_{\varnothing}\) and \(s\in\llbracket\psi\rrbracket_{\varnothing}\), and thus all premises of Rule \(\wedge\) are valid, enabling backward application of the rule.
_Rules \(\vee_{1},\vee_{2}\)_. If \(\mathcal{M},s\models_{\varnothing}\phi\vee\psi\) is valid, then \(s\in\llbracket\phi\vee\psi\rrbracket_{\varnothing}\), and hence necessarily either \(s\in\llbracket\phi\rrbracket_{\varnothing}\) or \(s\in\llbracket\psi\rrbracket_{\varnothing}\) has to hold. Thus, either the premises of Rule \(\vee_{1}\) or those of \(\vee_{2}\) are valid and the corresponding rule can be applied backwards.
_Rules \(EX,AX\)_. Assuming \(\mathcal{M},s\models_{\varnothing}EX\phi\) is valid, then \(s\in\mathit{pre}_{\exists}(\llbracket\phi\rrbracket_{\varnothing})\) by (5). Using the definition of \(\mathit{pre}_{\exists}\) (11), we can conclude that existence of a \(s^{\prime}\in S\) with \(s\to s^{\prime}\) and \(s^{\prime}\in\llbracket\phi\rrbracket_{\varnothing}\) is necessary and thus \(\mathcal{M},s^{\prime}\models_{\varnothing}\phi\) is valid. The reasoning for rule \(AX\) and \(\mathit{pre}_{\forall}\) is similar.
_Rule \(EG_{1}\)_. Assuming \(\mathcal{M},s\models_{U}EG\phi\) and \(s\in U\), there is no premise to be proven valid and the rule is always applicable backwards.
_Rule \(EG_{2}\)_. If we assume \(\mathcal{M},s\models_{U}EG\phi\), but \(s\not\in U\), we have to show that \(\mathcal{M},s\models_{\varnothing}\phi\) and \(\mathcal{M},s^{\prime}\models_{U,s}EG\phi\). Unfolding the fixed point using Lemma 2.1, we get
\[s\in\llbracket EG\phi\rrbracket_{U}\Leftrightarrow s\in\llbracket\phi\rrbracket _{\varnothing}\cap\mathit{pre}_{\exists}(\llbracket EG\phi\rrbracket_{U,s})\cup U.\]
Since \(s\not\in U\), then necessarily \(s\in\llbracket\phi\rrbracket_{\varnothing}\) and \(s\in\mathit{pre}_{\exists}(\llbracket EG\phi\rrbracket_{U,s})\). From \(s\in\mathit{pre}_{\exists}(\llbracket EG\phi\rrbracket_{U,s})\), we can conclude that there exists a \(s^{\prime}\) with \(s\to s^{\prime}\) and \(\mathcal{M},s^{\prime}\models_{U,s}EG\phi\). \(\mathcal{M},s\models_{\varnothing}\phi\) follows directly.
_Rules \(EF_{1},EF_{2}\)_. Assuming \(\mathcal{M},s\models_{U}EF\phi\) and \(s\not\in U\), we can obtain
\[s\in\llbracket EF\phi\rrbracket_{U}\Leftrightarrow s\in\llbracket\phi\rrbracket _{\varnothing}\cup\mathit{pre}_{\exists}(\llbracket EF\phi\rrbracket_{U,s})\setminus U\]
through unfolding of the fixed point once using Lemma 2.1. Since \(s\not\in U\), either \(s\in\llbracket\phi\rrbracket_{\varnothing}\) or \(s\in\mathit{pre}_{\exists}(\llbracket EF\phi\rrbracket_{U,s})\) is necessarily valid. From \(s\in\llbracket\phi\rrbracket_{\varnothing}\) follows \(\mathcal{M},s\models_{\varnothing}\phi\). From \(s\in\mathit{pre}_{\exists}(\llbracket EF\phi\rrbracket_{U,s})\), we can conclude that there exists a \(s^{\prime}\) with \(s\to s^{\prime}\) and \(\mathcal{M},s^{\prime}\models_{U,s}EF\phi\). Thus, either \(EF_{1}\) or \(EF_{2}\) is always applicable backwards when the conclusion is valid.
_Rules \(AG_{1},AG_{2}\)_. The argument is similar to the reversibility of \(EG_{1}\) and \(EG_{2}\).
_Rules \(AF_{1},AF_{2}\)_. The argument is similar to the reversibility of \(EF_{1}\) and \(EF_{2}\).
This concludes the proof of reversibility. \(\blacksquare\)
Hence, starting a proof from any semantically valid sequent, there is a way to "grow" a derivation tree upwards, maintaining semantic validity as an invariant property of the nodes of the derivation tree.
### Termination
To obtain a (canonical) proof, however, we need to argue that every branch of the tree is bound to terminate, and furthermore with an axiom.
**Lemma 5.1** (Finiteness of Derivation Trees).: Every derivation produced with the rules of the proof system from Figure 1 is finite for finite-state Kripke structures.
Proof.: Between conclusion and premises, we observe that application of each reversible rule either \((i)\) decreases the length of the sequent formulas or \((ii)\) decreases the number of leftover untagged states \(S\setminus U\). Defining a lexicographical ordering through these two criteria on a series of backward applications, it is easy to see that such a series would be monotonically decreasing.
### Completeness
Finally, we are ready to show completeness of our proof system.
**Theorem 5.2** (Completeness).: The proof system from Figure 1 is complete for finite-state Kripke structures.
Proof.: For any valid sequent, by Theorem 5.1, there always exists a backwards applicable rule, and, by Lemma 5.1, any series of backward rule applications is terminating. Thus, eventually, every branch must terminate by reverse application of an axiom rule, and hence, there exists a proof of the sequent.
Observe that soundness, completeness, and finiteness of derivation trees guarantee _decidability_ of sequent validity.
## 6 Conclusion
In this paper, we have presented a local model-checking proof system for a fragment of CTL, and have proved its soundness and completeness, and termination of proof search for finite-state models. Extending the proof system and the proofs to the full CTL is a routine exercise.
The proof system has been developed for and used in the course _Logic for Computer Scientists_, given at KTH Royal Institute of Technology, Stockholm.
|
2309.04097 | Discretized Radial Projections in $\mathbb{R}^d$ | We generalize a Furstenberg-type result of Orponen-Shmerkin to higher
dimensions, leading to an $\epsilon$-improvement in Kaufman's projection
theorem for hyperplanes and an unconditional discretized radial projection
theorem in the spirit of Orponen-Shmerkin-Wang. Our proof relies on a new
incidence estimate for $\delta$-tubes and a quasi-product set of $\delta$-balls
in $\mathbb{R}^d$. | Kevin Ren | 2023-09-08T03:25:08Z | http://arxiv.org/abs/2309.04097v1 | # Discretized Radial Projections in \(\mathbb{R}^{d}\)
###### Abstract
We generalize a Furstenberg-type result of Orponen-Shmerkin to higher dimensions, leading to an \(\varepsilon\)-improvement in Kaufman's projection theorem for hyperplanes and an unconditional discretized radial projection theorem in the spirit of Orponen-Shmerkin-Wang. Our proof relies on a new incidence estimate for \(\delta\)-tubes and a quasi-product set of \(\delta\)-balls in \(\mathbb{R}^{d}\).
###### Contents
* 1 Introduction
* 1.1 Connections and related work
* 1.2 Discretized results
* 1.3 Proof ideas
* 1.4 Structure of the paper
* 2 Preliminaries
* 2.1 Definitions
* 2.2 Plates
* 2.3 An Elementary Estimate
* 2.4 Multiscale analysis
* 2.5 Uniform sets and branching numbers
* 2.6 Combinatorial and probabilistic preliminaries
* 2.7 Energy
* 3 Improved incidence estimates for quasi-product sets
* 3.1 An improved slicing estimate
* 3.2 An improved Furstenberg estimate
* 3.3 From Furstenberg to weak slicing
* 3.4 An intermediate slicing result
* 3.5 Formal exhaustion argument
* 3.6 Proof of Proposition 3.1
* 4 Improved incidence estimates for regular sets
* 4.1 Initial reductions
* 4.2 Transferring angular non-concentration to ball non-concentration
* 4.3 Finding a special \(\Delta\)-tube
* 4.4 Product-like structure
* 5 Improved incidence estimates for general sets
* 6 Sets contained in an \((r_{0},k)\)-plate
* 6.1 Multiscale analysis
* 6.2 Good multiscale decomposition
* 7 Power decay around \(k\)-planes
* 8 Radial projection estimates
* 8.1 Maximal plate concentration case
* 8.2 Proof of Theorem 1.13, general case
* 9 Corollaries of Radial Projection Estimates
* A Proof of Balog-Szemeredi-Gowers
## 1 Introduction
Let \(X\) be a set in \(\mathbb{R}^{n}\), and define the radial projection \(\pi_{x}(y):=\frac{y-x}{|y-x|}\in S^{n-1}\). We wish to study the size of radial projections \(\pi_{x}(Y)\) of \(Y\), where \(x\) is taken in some set \(X\). Recently, Orponen, Shmerkin, and Wang [19] proved a strong radial projection theorem in two dimensions, but they prove a conditional result in higher dimensions. In this paper, we shall remove the condition \(\dim_{H}(X)\geq k-\frac{1}{k}+\eta(k)\) in higher dimensions, which answers Conjecture 1.5 of [26] and improves Theorem 1.9 of [19]. We also improve upon the previously known result \(\frac{d-1}{d}\min(\dim_{H}(X),\dim_{H}(Y))+\eta(d,\dim_{H}(X),\dim_{H}(Y))\) of [25, Theorem 6.15].
**Theorem 1.1**.: _Let \(X,Y\subset\mathbb{R}^{d}\) be Borel sets with \(\dim_{H}(X),\dim_{H}(Y)\leq k\). If \(X\) is not contained in a \(k\)-plane, then_
\[\sup_{x\in X}\dim_{H}(\pi_{x}(Y\setminus\{x\}))\geq\min(\dim_{H}(X),\dim_{H}(Y )).\]
In fact, we can prove the following slicing result, which improves Proposition 6.8 of [19] and makes progress towards answering Conjecture 1.10 of [19].
**Corollary 1.2**.: _Let \(s\in(d-2,d]\), then there exists \(\varepsilon(s,d)>0\) such that the following holds. Let \(\mu,\nu\) be Borel probability measures on \(\mathbb{R}^{d}\) with disjoint supports that satisfy \(\mathcal{E}_{s}(\mu),\mathcal{E}_{s}(\nu)<\infty\) and \(\dim_{H}(\mathrm{spt}(\nu))<s+\varepsilon(s,d)\). Further, assume that \(\mu,\nu\) don't simultaneously give full measure to any affine \((d-1)\)-plane \(H\subset\mathbb{R}^{d}\). Then there exist restrictions of \(\mu,\nu\) to subsets of positive measure
_(which we keep denoting \(\mu,\nu\)) such that the following holds. For almost every affine 2-plane \(W\subset\mathbb{R}^{d}\) (with respect to the natural measure on the affine Grassmannian), if the sliced measures \(\mu_{W}\), \(\nu_{W}\) on \(W\) are non-trivial, then they don't simultaneously give full measure to any line. In other words,_
\[(\gamma_{d,2}\times\mu)\{(V,x):\mu_{V,x}(\ell)\nu_{V,x}(\ell)=|\mu_{V,x}||\nu_{ V,x}|>0\text{ for some }\ell\in\mathbb{A}(V+x,1)\}=0,\]
_where we parametrize affine 2-planes as \(V+x\), for \(x\in\mathbb{R}^{d}\) and \(V\) in the Grassmannian \(\operatorname{Gr}(d,2)\) with the rotationally invariant Haar measure \(\gamma_{d,2}\)._
We also deduce an \(\varepsilon\)-improvement in Kaufman's projection theorem for hyperplanes. The proof is a standard higher-dimensional generalization of the details in [18, Section 3.2] and we will omit it. For \(\sigma\in S^{n-1}\), let \(\pi_{\sigma}\) be projection in the direction orthogonal to \(\sigma\).
**Theorem 1.3**.: _For every \(k<s<t\leq d\), there exists \(\varepsilon(s,t)\) such that the following holds. Let \(E\) be an analytic set in \(\mathbb{R}^{d}\) with \(\dim_{H}(E)=t\). Then_
\[\dim_{H}\{\sigma\in S^{d-1}:\dim_{H}(\pi_{\sigma}(E))\leq s\}\leq s-\varepsilon.\]
**Remark 1.4**.: _Kaufman's theorem is sharp when \(s=k\) and \(t\in(k,k+1]\) because \(E\) can be contained within a \((k+1)\)-plane._
We also derive a higher-dimensional version of Beck's theorem (unlike in the discrete setting, the higher-dimensional version cannot proved by projection onto a generic 2D plane). The proof again follows similarly to the 2D version presented in [19, Corollary 1.4].
**Corollary 1.5**.: _Let \(X\subset\mathbb{R}^{d}\) be a Borel set such that \(\dim_{H}(X\setminus H)=\dim_{H}X\) for all \(k\)-planes \(H\). Then, the line set \(\mathcal{L}(X)\) spanned by pairs of distinct points in \(X\) satisfies_
\[\dim_{H}(\mathcal{L}(X))\geq\min\{2\dim_{H}X,2k\}.\]
### Connections and related work
Radial projections have also been used to study the Falconer distance set problem, which asks for lower bounds on the Hausdorff dimension of the distance set \(\Delta(X):=\{|x-y|:x,y\in X\}\) given the value of \(\dim_{H}(X)\) for some \(X\in\mathbb{R}^{d}\). In two dimensions, Wolff [33] used Fourier analysis to show that if \(\dim_{H}(X)\geq\frac{4}{3}\), then \(\Delta(X)\) has positive Lebesgue measure. Using Orponen's radial projection theorem [17], Guth-Iosevich-Ou-Wang [7] used a good-bad tube decomposition and decoupling to improve the threshold to \(\dim_{H}(X)\geq\frac{5}{4}\). See also works of Keleti-Shmerkin [14][14], Shmerkin [28], Liu [15], and Stull [29] which provide better lower bounds for \(\dim_{H}(\Delta(X))\) given that \(\dim_{H}(X)\in(1,\frac{5}{4})\). In higher dimensions, the works of Du-Iosevich-Ou-Wang-Zhang [3] and Wang-Zheng [32] used a good-bad tube decomposition using Orponen's radial projection theorem and decoupling techniques [17] to provide state-of-the-art results when the dimension \(d\) is even; when \(d\) is odd, a more classical approach purely
based on decoupling gave the best estimates [6], [9]. More recently, Shmerkin and Wang [27] prove a radial projection theorem in the spirit of this paper to provide an improved lower bound when \(\dim_{H}(X)=\frac{d}{2}\), \(d=2,3\); using their framework combined with updated results of [19], one can show for example that \(\dim_{H}(\Delta(X))\geq\frac{5}{8}\) when \(X\subset\mathbb{R}^{3}\) satisfies \(\dim_{H}(X)=\frac{3}{2}\). In fact, all of these works prove lower bounds on the size of the pinned distance set, \(\Delta_{x}(X):=\{|x-y|:y\in X\}\). In the forthcoming companion papers [4], [5], we use Theorem 1.1 to improve the lower bounds for the Falconer distance set problem in all dimensions \(d\).
Very recently, radial projections in dimension \(2\) have been used to prove the ABC sum-product conjecture and Furstenberg set conjecture, and yield progress on the discretized sum-product problem [21], [22]. It is natural to wonder whether the exciting progress in \(2\) dimensions will generalize to higher dimensions. The starting point of the breakthrough work of [21] (which was also used in [22]) is a sharp radial projection theorem in \(2\) dimensions, [19, Theorem 1.1]. We hope to use our higher dimensional radial projection theorem to prove analogous results to [21], [22] in all dimensions.
### Discretized results
We deduce Theorem 1.1 from \(\delta\)-discretized versions. The following notation will be used throughout this paper.
**Definition 1.6**.: _Let \(P\subset\mathbb{R}^{d}\) be a bounded nonempty set, \(d\geq 1\). Let \(\delta>0\) be a dyadic number, and let \(0\leq s\leq d\) and \(C>0\). We say that \(P\) is a \((\delta,s,C,k)\)-set if for every \((r,k)\)-plate \(H\) with \(r\in[\delta,1]\), we have_
\[|P\cap H|_{\delta}\leq C\cdot|P|_{\delta}\cdot r^{s}.\]
_If \(k\) is not specified, we default to \(k=0\) (which becomes a standard definition from [18] because \((r,0)\)-plates are \(r\)-balls)._
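For orientation: taking an \((r,k)\)-plate to be the \(r\)-neighbourhood of an affine \(k\)-plane intersected with the unit ball (see Section 2.2), a maximal \(\delta\)-separated subset \(P\subset[0,1)^{d}\) has \(|P|_{\delta}\sim\delta^{-d}\) while \(|P\cap H|_{\delta}\lesssim r^{d-k}\delta^{-d}\) for every \((r,k)\)-plate \(H\), so \(P\) is a \((\delta,d-k,C,k)\)-set for an absolute constant \(C\). By contrast, a set contained in a single \(k\)-plane fails to be a \((\delta,s,C,k)\)-set for any fixed \(s>0\) and \(C\) once \(\delta\) is small, since the \(\delta\)-plate around that plane already contains all of \(P\).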
**Definition 1.7**.: _Let \(\mathcal{T}\subset\mathbb{R}^{d}\) be a bounded nonempty set of dyadic \(\delta\)-tubes, \(d\geq 2\). Let \(\delta>0\) be a dyadic number, and let \(0\leq s\leq d\), \(0\leq k\leq d-2\), and \(C>0\). We say that \(\mathcal{T}\) is a \((\delta,s,C,k)\)-set of tubes if for every \((r,k+1)\)-plate \(H\) and \(\delta\leq r\leq 1\), we have_
\[|\mathcal{T}\cap H|\leq C\cdot|\mathcal{T}|\cdot r^{s}. \tag{1.1}\]
_If \(k\) is not specified, we default to \(k=0\). We also say \(\mathcal{T}\) is a \((\delta,s,C,k)\)-set of tubes from scale \(r_{1}\) to \(r_{2}\) if the non-concentration condition (1.1) holds for \(r_{2}\leq r\leq r_{1}\)._
A \((\delta,s,C,k)\)-set of balls cannot be concentrated in a small neighborhood of a \(k\)-plane, while a \((\delta,s,C,k)\)-set of tubes cannot be concentrated in a small neighborhood of a \((k+1)\)-plane.
The main ingredient in the proof of Theorem 1.1 is an \(\varepsilon\)-improvement to the (dual) Furstenberg set problem that generalizes Theorem 1.3 in [18] to higher dimensions.
**Theorem 1.8**.: _For any \(0\leq k<d-1\), \(0\leq s<k+1\), \(s<t\leq d\), \(\kappa>0\), there exists \(\varepsilon(s,t,\kappa,k,d)>0\) such that the following holds for all small enough \(\delta\in 2^{-\mathbb{N}}\), depending only on \(s,t,\kappa,k,d\). Let \(\mathcal{P}\subset\mathcal{D}_{\delta}\) be a \((\delta,t,\delta^{-\varepsilon})\)-set with \(\cup\mathcal{P}\subset[0,1)^{d}\), and let \(\mathcal{T}\subset\mathcal{T}^{\delta}\) be a family of \(\delta\)-tubes. Assume that for every \(p\in\mathcal{P}\), there exists a \((\delta,s,\delta^{-\varepsilon},0)\) and \((\delta,\kappa,\delta^{-\varepsilon},k)\)-set \(\mathcal{T}(p)\subset\mathcal{T}\) such that \(T\cap p\neq\emptyset\) for all \(T\in\mathcal{T}(p)\). Then \(|\mathcal{T}|\geq\delta^{-2s-\varepsilon}\)._
**Remark 1.9**.: _The condition of \(\mathcal{T}(p)\) being a \((\delta,\kappa,\delta^{-\varepsilon},k+1)\)-set is to prevent the counterexample in (say) \(\mathbb{R}^{3}\) when \(s=1,t\in(1,2]\), and \(\mathcal{T}\) is a maximal set of \(\delta^{-2}\) many essentially distinct tubes in \([0,1]^{2}\). This condition is automatically taken care of when \(s>k\): any \((\delta,s,\delta^{-\varepsilon},1)\)-set is a \((\delta,\kappa,\delta^{-\varepsilon},k+1)\)-set with \(\kappa=s-k\)._
**Remark 1.10**.: _We can make this decay around \(k\)-plane assumption assuming that (1) \(P\) is a \((\delta,\kappa,\delta^{-\varepsilon},k+1)\)-set and (2) for \(p\in P\), \(|\mathcal{T}(p)\cap P|\geq\delta^{\varepsilon}|P|\). This will be useful for radial projection estimates, since we can guarantee (1) by Theorem B.1 of [25] and (2) because we can get rid of \(\lesssim\delta^{\varepsilon}|P|\) many pairs \((p,q)\) for a fixed \(p\)._
In fact, we can prove the following refined version of Theorem 1.8.
**Theorem 1.11**.: _For any \(0\leq k<d-1\), \(0\leq s<k+1\), \(\max(s,k)<t\leq d\), \(\kappa>0\), \(r_{0}\leq 1\), there exists \(\varepsilon(s,t,\kappa,k,d)>0\) such that the following holds for all small enough \(\delta/r_{0}\in 2^{-\mathbb{N}}\cap(0,\delta_{0})\), with \(\delta_{0}\) depending only on \(s,t,\kappa,k,d\). Let \(H\) be a \((r_{0},k+1)\)-plate, \(\mathcal{P}\subset\mathcal{D}_{\delta}\cap H\) be a \((\delta,t,(\delta/r_{0})^{-\varepsilon})\)-set with \(\cup\mathcal{P}\subset[0,1)^{d}\), and let \(\mathcal{T}\subset\mathcal{T}^{\delta}\cap H\) be a family of \(\delta\)-tubes. Assume that for every \(p\in\mathcal{P}\), there exists a set \(\mathcal{T}(p)\subset\mathcal{T}\) such that:_
* \(T\cap p\neq\emptyset\) _for all_ \(T\in\mathcal{T}(p)\)_;_
* \(\mathcal{T}(p)\) _is a_ \((\delta,s,(\delta/r_{0})^{-\varepsilon}r_{0}^{k-s},0)\)_-set down from scale_ \(r\)_;_
* \(\mathcal{T}(p)\) _is a_ \((\delta,\kappa,(\delta/r_{0})^{-\varepsilon}r_{0}^{-\kappa},k)\)_-set._
_Then \(|\mathcal{T}|\geq(\frac{\delta}{r_{0}})^{-\varepsilon}\delta^{-2s}r_{0}^{2(s- k)}\)._
**Remark 1.12**.: _(a) Given fixed \(k,\kappa\), the value of \(\varepsilon\) can be chosen uniformly in a compact subset of \(\{(s,t):0\leq s<k+1,\max(s,k)<t\leq d\}\). Indeed, if \(\varepsilon>0\) works for \((s,t)\), then \(\frac{\varepsilon}{2}\) works in the \(\frac{\varepsilon}{2}\)-neighborhood of \((s,t)\)._
_(b) Conjecture: can we replace the condition of being in \(H\) by \(\mathcal{T}(p)\) being a \((\delta,k,(\delta/r_{0})^{-\varepsilon},0)\)-set from scales \(1\) to \(r_{0}\)?_
Using Theorem 1.11, a bootstrap argument based on [19] gives the following.
**Theorem 1.13**.: _Let \(k\in\{1,2,\cdots,d-1\}\), \(k-1<\sigma<s\leq k\), and \(\varepsilon>0\). There exist \(N,K_{0}\) depending on \(\sigma,s,k\), and \(\eta(\varepsilon)>0\) (with \(\eta(1)=1\)) such that the following holds. Fix \(r_{0}\leq 1\), and \(K\geq K_{0}\). Let \(\mu,\nu\) be \(\sim 1\)-separated \(s\)-dimensional measures with constants \(C_{\mu},C_{\nu}\), supported on \(E_{1},E_{2}\), which lie in \(B(0,1)\). Assume that \(|\mu|,|\nu|\leq 1\). Let \(A\) be the set of pairs \((x,y)\in E_{1}\times E_{2}\) that lie in some \(K^{-1}\)-concentrated \((r_{0},k)\)-plate. Then there exists a set \(B\subset E_{1}\times E_{2}\)_
_with \(\mu\times\nu(B)\lesssim K^{-\eta}\) such that for every \(x\in E_{1}\) and \(r\)-tube \(T\) through \(x\), we have_
\[\nu(T\setminus(A|_{x}\cup B|_{x}))\lesssim\frac{r^{\sigma}}{r_{0}^{\sigma-(k-1)+ N\varepsilon}}K^{N}.\]
_The implicit constant may depend on \(C_{\mu},C_{\nu},\sigma,s,k\)._
**Remark 1.14**.: _(a) It is not assumed that \(\mu,\nu\) are probability measures, just that \(\mu(B(0,1)),\nu(B(0,1))\leq 1\)._
_(b) If \(\alpha>d-1\), then the numerology of Theorem 1.13 doesn't apply. Instead, Orponen's radial projection theorem [17] in dimension \(d\) applies. The result (stated in [7, Lemma 3.6] for \(d=2\), but can be generalized to all dimensions \(d\)) is that for \(\gamma=\varepsilon/C\), there exists a set \(B\subset E_{1}\times E_{2}\) with \(\mu_{1}\times\mu_{2}(B)\leq r^{\gamma}\) such that for every \(x\in E_{1}\) and \(\delta\)-tube \(T\) through \(x\), we have_
\[\mu_{2}(T\setminus B|_{x})\lesssim r^{d-1-\varepsilon}.\]
_Note that the set \(A\) of "concentrated pairs" is not needed here._
_(c) If \(r\sim r_{0}\), we can obtain a slightly better result by projecting to a generic \(k\)-dimensional subspace and following the argument in [3, Section 3.2]. The result is that for \(\gamma=\varepsilon/C\), there exists a set \(B\subset E_{1}\times E_{2}\) with \(\mu_{1}\times\mu_{2}(B)\leq\delta^{\gamma}\) such that for every \(x\in E_{1}\) and \(r\)-tube \(T\) through \(x\), we have_
\[\mu_{2}(T\setminus B|_{x})\lesssim r^{k-1-\varepsilon}.\]
_The set \(A\) is again not needed in this case. The main novelty of Theorem 1.13 comes when \(r<r_{0}\)._
### Proof ideas
The main proof ideas for Theorem 1.8 are as follows:
1. Perform a standard multiscale decomposition argument due to [18] to reduce the original problem to two building blocks: the case when \(\mathcal{P}\) is a \((\delta,s)\)-set and the case when \(\mathcal{P}\) is a \(t\)-regular set. The first case need not occur, but it carries no loss, by an elementary incidence argument, so we focus on gaining an \(\varepsilon\)-improvement in the second case. A \(t\)-regular set \(\mathcal{P}\) has the special property that \(\mathcal{P}\cap Q\) is still a \((\Delta,t)\)-set for \(Q\in\mathcal{D}_{\Delta}(\mathcal{P})\), \(\Delta=\delta^{1/2}\).
2. If \(\mathcal{P}\) is \(t\)-regular with \(\Delta=\delta^{1/2}\), we may find a \(\Delta\)-tube \(\mathbf{T}\) such that upon dilation of \(\mathbf{T}\) to \([0,1]^{d}\), we obtain a new Furstenberg problem with the ball set having a quasi-product structure. See Appendix A of [18].
3. Finally, we will use discretized sum-product type arguments to conclude an \(\varepsilon\)-improvement to the dual Furstenberg problem assuming \(\mathcal{P}=X\times Y\subset\mathbb{R}^{d-1}\times\mathbb{R}\) has a quasi-product structure. In very rough terms, we shall lift \(Y\) to have dimension close to \(1\), and apply multi-linear Kakeya. This idea of lifting the dimension was found in He's work on a higher-rank discretized sum-product theorem [11] in a slightly different context.
To prove Theorem 1.11, we use a similar multiscale decomposition argument as in (1) to reduce to two building blocks: a smaller version of the setting of Theorem 1.11 and a smaller version of Theorem 1.8. The smaller version of Theorem 1.11 has no loss by an elementary incidence argument, and the smaller version of Theorem 1.8 admits a gain.
For Theorem 1.13, we first prove the case when \(\mu,\nu\) are supported in an \(r_{0}K\)-plate (where \(K\) is a small power of \(r_{0}^{-1}\)). This uses a similar argument as in [19, Lemma 2.8]. The general case follows from applying this special case many times.
### Structure of the paper
In Section 2, we introduce some key concepts that will be used throughout the paper. In Sections 3 through 5, we prove Theorem 1.8 first for quasi-product sets following ideas of [10], and then for regular sets and finally for general sets following [18]. In Section 6, we prove Theorem 1.11 from Theorem 1.8. In Section 7, we generalize a radial projection theorem of Shmerkin [25, Theorem 6.3] that enables us to assume our sets have power decay around \(k\)-planes. In Section 8, we prove Theorem 8.1 following ideas from [19]. Finally, in Section 9, we prove Theorems 1.1 and 1.2 from the discretized results.
**Acknowledgments.** The author is supported by an NSF GRFP fellowship. The author would like to thank Xiumin Du, Tuomas Orponen, Yumeng Ou, Pablo Shmerkin, Hong Wang, and Ruixiang Zhang for helpful discussions. We thank Paige Bright and Yuqiu Fu for suggesting that we include a higher-dimensional version of Beck's theorem in this paper.
## 2 Preliminaries
This section will summarize the argument of [18], and in lieu of proofs (with the exception of Proposition 2.9), we either refer the reader to [18] or defer the proof to a later section.
### Definitions
We use \(A\lesssim B\) to denote \(A\leq CB\) for some constant \(C\). We use \(A\lesssim_{N}B\) to indicate the constant \(C\) can depend on \(N\). We will also use \(A\lessapprox B\) in future proofs; its exact meaning will always be clarified when used.
For a finite set \(A\), let \(|A|\) denote the cardinality of \(A\). If \(A\) is infinite, let \(|A|\) denote the Lebesgue measure of \(A\).
For a set \(A\), let \(A^{c}=\mathbb{R}^{d}\setminus A\).
For a tube \(T\), let \(\ell(T)\) denote the central line segment of \(T\).
For a set \(E\), let \(E^{(\delta)}\) be the \(\delta\)-neighborhood of \(E\).
For \(A\subset X\times Y\) and \(x\in X\), define the slice \(A|_{x}=\{y\in Y:(x,y)\in A\}\) and \(A|^{y}=\{x\in X:(x,y)\in A\}\).
For a measure \(\mu\) and a set \(G\), define the restricted measure \(\mu|_{G}\) by \(\mu|_{G}(A)=\mu(G\cap A)\). The renormalized restricted measure is \(\mu_{G}=\frac{1}{\mu(G)}\mu|_{G}\).
For vectors \(v_{1},\cdots,v_{i}\in\mathbb{R}^{d}\), \(1\leq i\leq d\), the quantity \(|v_{1}\wedge\cdots\wedge v_{i}|\) is the non-negative volume of the parallelepiped spanned by \(v_{1}\) through \(v_{i}\).
\(B(x,r)\) is the ball in \(\mathbb{R}^{d}\) of radius \(r\) centered at \(x\). We also use the notation \(B_{r}\) for an arbitrary \(r\)-ball in \(\mathbb{R}^{d}\).
For sets \(A,B\) and \(P\subset A\times B\), let \(A\stackrel{{ P}}{{+}}B:=\{a+b:(a,b)\in P\}\).
**Definition 2.1**.: _We say \(\mu\) supported in \(\mathbb{R}^{d}\) is an \(\alpha\)-dimensional measure with constant \(C_{\mu}\) if \(\mu(B_{r})\leq C_{\mu}r^{\alpha}\) for all \(r\leq 1\) and balls \(B_{r}\) of radius \(r\)._
### Plates
We work in \(\mathbb{R}^{d}\). An \((r,k)\)-plate is the \(r\)-neighborhood of a \(k\)-dimensional affine plane in \(\mathbb{R}^{d}\). We construct a set \(\mathcal{E}_{r,k}\) of \((r,k)\)-plates with the following properties:
* Each \((\frac{r}{2},k)\)-plate intersecting \(B(0,1)\) lies in at least one plate of \(\mathcal{E}_{r,k}\);
* For \(s\geq r\), every \((s,k)\)-plate contains \(\lesssim\left(\frac{s}{r}\right)^{(k+1)(d-k)}\) many \((r,k)\)-plates of \(\mathcal{E}_{r,k}\).
For example, when \(k=1\) and \(d=2\), we can simply pick \(\sim r^{-1}\) many \(r\)-tubes in each of an \(r\)-net of directions. This generalizes to higher \(k\) and \(d\) via a standard \(r\)-net argument, but we haven't seen it in the literature, so we provide a precise construction.
An \(r\)-net of a metric space is a subset \(S\) such that \(B(x,r)\cap B(y,r)=\emptyset\) for \(x\neq y,x,y\in S\). The affine Grassmannian manifold \(\mathbb{A}(k,d)\) is the set of all \(k\)-planes in \(\mathbb{R}^{d}\). Any such plane can be written uniquely as \(V=V_{0}+a\) for some \(k\)-dimensional subspace \(V_{0}\) and \(a\in V_{0}^{\perp}\); counting degrees of freedom (\(k(d-k)\) for \(V_{0}\) and \(d-k\) for \(a\)), we see that \(\dim\mathbb{A}(k,d)=(k+1)(d-k)\). For \(V=V_{0}+a\) and \(W=W_{0}+b\), define their distance \(d_{\mathbb{A}}\) to be (following Section 3.16 of [16]):
\[d_{\mathbb{A}}(V,W)=\|\pi_{V_{0}}-\pi_{W_{0}}\|_{op}+|a-b|,\]
where \(\pi_{V_{0}}:\mathbb{R}^{d}\to V_{0}\) and \(\pi_{W_{0}}:\mathbb{R}^{d}\to W_{0}\) are orthogonal projections, and \(\|\cdot\|_{op}\) is the usual operator norm for linear maps. Let \(\mathbb{A}_{0}(k,d)\) be the submanifold of \(k\)-planes \(V_{0}+a\) with \(a\in B(0,10)\). Since the manifold \((\mathbb{A}_{0}(k,d),d_{\mathbb{A}})\) is compact and smooth, it can be covered by finitely many charts that are \(\sim 1\)-bilipschitz to a subset of \(\mathbb{R}^{(k+1)(d-k)}\).
From a maximal \(cr\)-net \(\mathcal{N}\) of the set of affine planes of \(\mathbb{A}_{0}(k,d)\) with \(c>0\) a sufficiently small constant, we can construct a set \(\mathcal{E}_{r,k}\) of \((r,k)\)-plates whose central planes are the elements of \(\mathcal{N}\). We now check the two properties for \(\mathcal{E}_{r,k}\).
To prove the first property, let \(H\) be a \((\frac{r}{2},k)\)-plate intersecting \(B(0,1)\). Then the central plane \(P=P_{H}\) must lie at distance \(\leq 2cr\) from some element \(Q\) of \(\mathcal{N}\) (otherwise, we can add it to the net). Let \(P=P_{0}+a\) and \(Q=Q_{0}+b\). Hence, \(\|\pi_{P_{0}}-\pi_{Q_{0}}\|_{op}\leq 2cr\) and \(|a-b|\leq 2cr\), so for \(x\in P\cap B(0,10)\) (so \(x-a\in P_{0}\)),
\[|\pi_{Q_{0}}(x-a)-(x-a)|\leq 2cr|x-a|\leq 2cr(|x|+|a|)\leq 40cr.\]
Now, note that \(\pi_{Q_{0}}(x-a)+b\in Q\). It is close to \(x\) if \(c<\frac{1}{100}\):
\[|\pi_{Q_{0}}(x-a)+b-x|\leq 40cr+|a-b|\leq 50cr<\frac{r}{2}.\]
We have proved \(P\cap B(0,10)\subset Q^{(r/2)}\) and thus \(P^{(r/2)}\cap B(0,10)\subset Q^{(r)}\). Hence, \(H\) is contained in the \((r,k)\)-plate with central plane \(Q\).
To prove the second property, we note that the set of \(k\)-planes in \(\mathbb{A}(k,d)\) whose intersection with \(B(0,10)\) is contained in a given \((s,k)\)-plate is contained in an \(O(s)\)-ball \(B\) of \(\mathbb{A}(k,d)\). First suppose \(B\) is contained within some coordinate chart; we would like to prove that \(|\mathcal{N}\cap B|\lesssim\left(\frac{s}{r}\right)^{(k+1)(d-k)}\). To show this, note that \(\{B(x,r):x\in\mathcal{N}\cap B\}\) is a packing of \(B^{(r)}\) with finitely overlapping \(r\)-balls. Now map the chart to \(\mathbb{R}^{(k+1)(d-k)}\). Since the map only distorts distances by a constant factor, we can pack \(|\mathcal{N}\cap B|\) many finitely overlapping \(c_{1}r\)-balls into a ball of radius \(O(s)\). Thus by a volume argument, we have \(|\mathcal{N}\cap B|\lesssim\left(\frac{s}{r}\right)^{(k+1)(d-k)}\). Since there are finitely many charts, we can apply the argument to \(B\) intersecting each chart, which proves the second property.
We specialize our discussion to tubes. For each scale \(\delta\), let \(\mathcal{T}^{\delta}\) be a cover of \([0,1]^{d}\) with \(\delta\)-tubes such that every \(\frac{\delta}{2}\)-tube (and in particular every \(r\)-tube with \(r<\frac{\delta}{2}\)) is contained in at least \(1\) and at most \(C_{d}\) many tubes of \(\mathcal{T}^{\delta}\). Slightly abusing notation (a la [18]), we will also use \(\mathcal{T},\mathcal{T}_{\delta},\mathcal{T}_{\Delta}\) to represent sets of tubes, where the subscript \(\delta\) helpfully indicates a set of \(\delta\)-tubes.
In Theorem 1.13, we pay attention to certain plates with disproportionately much mass.
**Definition 2.2**.: _We say that a \((r,k)\)-plate \(H\) is \(c\)-concentrated on \(\mu\) if \(\mu(H)\geq c\)._
Other notation follows [18]. Unlike [18], we work with ordinary rather than dyadic tubes. The advantage of dyadic tubes is that every \(2^{-n}\)-tube is in a unique \(2^{-m}\)-tube if \(n>m\); thus, dyadic tubes will avoid the \(C_{d}\) loss incurred by the finitely overlapping cover \(\mathcal{T}^{\delta}\). However, dyadic tubes have the disadvantage that they don't behave well under rotations or dilations, and it would be more cumbersome to define \((\delta,s,C,k)\)-sets of dyadic tubes (whereas the definition for ordinary tubes is more geometric). Thus, in principle it is possible to work with dyadic tubes and save on the \(C_{d}\) loss, but it doesn't affect our numerology in the end (since our losses will depend badly on \(d\) anyway), so we chose to work with ordinary tubes throughout.
**Definition 2.3**.: _[_18_]_ _Let \(P\subset\mathbb{R}^{d}\) be a bounded nonempty set, \(d\geq 1\). Let \(\delta>0\) be a dyadic number, and let \(0\leq s\leq d\) and \(C>0\). We say that \(P\) is a \((\delta,s,C)\)-set if_
\[|P\cap Q|_{\delta}\leq C\cdot|P|_{\delta}\cdot r^{s},\qquad Q\in\mathcal{D}_ {r}(\mathbb{R}^{d}),r\in[\delta,1].\]
**Definition 2.4**.: _Let \(\mathcal{T}\) be a bounded nonempty set of dyadic \(\delta\)-tubes in \(\mathbb{R}^{d}\), \(d\geq 2\). Let \(\delta>0\) be a dyadic number, and let \(0\leq s\leq d\), \(0\leq k\leq d-2\), and \(C>0\). We say that \(\mathcal{T}\) is a \((\delta,s,C,k)\)-set of tubes if for every \((r,k+1)\)-plate \(H\) and \(\delta\leq r\leq 1\), we have_
\[|\mathcal{T}\cap H|\leq C\cdot|\mathcal{T}|\cdot r^{s}.\]
_If \(k\) is not specified, we default to \(k=0\)._
The following is a simpler interpretation of \((\delta,s,C,k)\)-set if the tubes all pass through the same point.
**Definition 2.5**.: _Let \(\sigma(t)\in S^{d-1}\) be the slope of the central axis of \(t\)._
**Lemma 2.6**.: _Let \(\mathcal{T}\) be a set of \(\delta\)-tubes intersecting \(p\). If \(\mathcal{T}\) is a \((\delta,s,C,k)\)-set, then \(\sigma(\mathcal{T})\) is a \((\delta,s,O(C),k)\)-set. Conversely, if \(\sigma(\mathcal{T})\) is a \((\delta,s,C,k)\)-set, then \(\mathcal{T}\) is a \((\delta,s,O(C),k)\)-set._
Proof.: Let \(\pi_{p}:\mathbb{R}^{d}\to S^{d-1}\) denote spherical projection through \(p\). Then \(\pi_{p}(t\setminus B(p,1/2))\) is well-defined and equals \(\sigma(t)\), up to an additive loss of \(C\delta\). Fix an \((r,k)\)-plate \(H\subset S^{d-1}\). Then the set of tubes with slope in \(H\) and passing through \(p\) must lie in an \((r+C\delta,k+1)\)-plate \(p^{(C\delta)}+\pi_{p}^{-1}(H)\). Conversely, for any \((r,k+1)\)-plate \(W\) containing \(p\), the set of possible slopes of tubes through \(p\) contained in \(W\) is contained in an \((r+C\delta,k)\)-plate \((\pi_{p}(W-p))^{C\delta}\).
We will need the following lemma from [18].
**Lemma 2.7** ([18], Lemma 2.7).: _Let \(P\subset[-2,2]^{d}\) be a \((\delta,s,C)\)-set. Then \(P\) contains a \(\delta\)-separated \((\delta,s,O_{d}(C))\)-subset \(P^{\prime}\) with \(|P^{\prime}|\leq\delta^{-s}\)._
First, since \((\delta,\kappa,\delta^{-\varepsilon},k)\)-sets are \((\delta,\kappa,\delta^{-\varepsilon},k^{\prime})\)-sets for any \(k^{\prime}<k\), we can assume that \(k\leq s<k+1\). Next, since \((\delta,t,\delta^{-\varepsilon})\)-sets are \((\delta,t^{\prime},\delta^{-\varepsilon})\)-sets for \(t^{\prime}<t\), we may assume \(t\leq k+1\). In particular, we get \(t-s\leq 1\), a useful assumption.
We record a useful geometric fact about \((r,k)\)-plates.
**Lemma 2.8**.: _Fix \(C_{\mathrm{sep}}\geq 1\), then there exists \(r_{0}\) depending on \(C_{\mathrm{sep}}\) such that the following is true for \(r<r_{0}\). If \((x,y)\) lie in an \((r,k)\)-plate \(H\) and \(|x-y|=C_{\mathrm{sep}}^{-1}\), then any \(r\)-tube \(T\) through \(x,y\) will lie in \(H^{(CC_{\mathrm{sep}}r)}\), which is a \((CC_{\mathrm{sep}}r,k)\)-plate._
Proof.: Take \(C\) sufficiently large (depending on \(d\)). If \(T\) does not lie in \(H^{(CC_{\mathrm{sep}}r)}\), then \(H\cap T\) is contained in a segment of \(T\) of length at most \((2C_{\mathrm{sep}})^{-1}\), which cannot contain both \(x\) and \(y\) since \(|x-y|=C_{\mathrm{sep}}^{-1}\).
### An Elementary Estimate
We prove a classical estimate which can be viewed as Theorem 1.8 with \(\varepsilon=0\). We won't need the fact that \(\mathcal{T}(p)\) is a \((\delta,\kappa,\delta^{-\varepsilon},k)\)-set. The \(d=2\) case is proven as Proposition 2.13 and Corollary 2.14 of [18]. For higher dimensions, the proof is similar and we sketch the details. Let \(A\lessapprox_{\delta}B\) denote the inequality
\[A\leq C\cdot\log(\frac{1}{\delta})^{C}B.\]
**Proposition 2.9**.: _Let \(0\leq s\leq t\leq d-1\), and let \(C_{P},C_{T}\geq 1\). Let \(\mathcal{P}\subset\mathcal{D}_{\delta}\) be a \((\delta,t,C_{P})\)-set. Assume that for every \(p\in\mathcal{P}\) there exists a \((\delta,s,C_{T})\)-family \(\mathcal{T}(p)\subset\mathcal{T}^{\delta}\) of dyadic \(\delta\)-tubes with the property that \(T\cap p\neq\emptyset\) for all \(T\in\mathcal{T}(p)\), and \(|\mathcal{T}(p)|=M\) for some \(M\geq 1\)._
_Let \(\mathcal{T}\subset\mathcal{T}^{\delta}\) be arbitrary, and define \(I(\mathcal{P},\mathcal{T})=\{(p,T)\in\mathcal{P}\times\mathcal{T}:T\in\mathcal{T }(p)\}\). Then_
\[|I(\mathcal{P},\mathcal{T})|\lessapprox_{\delta}\sqrt{C_{P}C_{T}}\cdot(M\delta^{s})^{\theta/2}\cdot|\mathcal{T}|^{1/2}|\mathcal{P}|,\]
_where \(\theta=\theta(s,t)=\frac{d-1-t}{d-1-s}\in[0,1]\). (If \(s=t=d-1\), then \(\theta(s,t)=0\).)_
The following corollary of Proposition 2.9 is the form we will use.
**Corollary 2.10**.: _Let \(0\leq s\leq t\leq d-1\), and let \(C_{P},C_{T}\geq 1\). Let \(\mathcal{P}\subset\mathcal{D}_{\delta}\) be a \((\delta,t,C_{P})\)-set. Assume that for every \(p\in\mathcal{P}\) there exists a \((\delta,s,C_{T})\)-family \(\mathcal{T}(p)\subset\mathcal{T}^{\delta}\) of dyadic \(\delta\)-tubes with the property that \(T\cap p\neq\emptyset\) for all \(T\in\mathcal{T}(p)\), and \(|\mathcal{T}(p)|=M\) for some \(M\geq 1\). If \(\mathcal{T}=\cup_{p\in\mathcal{P}}\mathcal{T}(p)\), then_
\[|\mathcal{T}|\gtrapprox(C_{P}C_{T})^{-1}\cdot M\delta^{-s}\cdot(M\delta^{s})^{ \frac{t-s}{d-1-s}}.\]
_(If \(s=t=d-1\), then \(\frac{t-s}{d-1-s}=0\).)_
**Remark 2.11**.: _To use Corollary 2.10, we need \(t\leq d-1\). Fortunately, this is a harmless assumption because \(s<d-1\), and changing \(t\) to \(\min(t,d-1)\) makes the hypothesis of Theorem 1.8 weaker._
Proof.: We begin with an application of Cauchy-Schwarz.
\[|I(\mathcal{P},\mathcal{T})| =\sum_{T\in\mathcal{T}}|\{p\in\mathcal{P}:T\in\mathcal{T}(p)\}|\] \[\leq|\mathcal{T}|^{1/2}\left|\{(T,p,p^{\prime}):T\in\mathcal{T}(p)\cap\mathcal{T}(p^{\prime})\}\right|^{1/2}.\]
Note that we have the following bounds:
\[|\mathcal{T}(p)\cap\mathcal{T}(p^{\prime})|\lesssim\min\left\{C_{T}\cdot M \cdot\left(\tfrac{\delta}{d(p,p^{\prime})+\delta}\right)^{s},\left(\tfrac{1}{ d(p,p^{\prime})+\delta}\right)^{d-1}\right\}, \tag{2.2}\]
where \(d(p,p^{\prime})\) stands for the distance of the midpoints of \(p\) and \(p^{\prime}\). To prove (2.2), observe that if \(T\in\mathcal{T}(p)\cap\mathcal{T}(p^{\prime})\), then \(T\) lies in a \(\tfrac{\delta}{d(p,p^{\prime})+\delta}\)-tube with central line being the line between \(p\) and \(p^{\prime}\). Thus, the first bound in (2.2) follows from \(\mathcal{T}(p)\) being a \((\delta,s,C_{T})\)-set with \(|\mathcal{T}(p)|=M\), and the second bound is the maximum number of essentially distinct \(\delta\)-tubes that can fit inside a \(\tfrac{\delta}{d(p,p^{\prime})+\delta}\)-tube.
Write \(\theta:=\theta(s,t):=\tfrac{(d-1)-t}{(d-1)-s}\in[0,1]\). (If \(s=t=d-1\), we set \(\theta:=0\).) The parameter \(\theta\) is chosen so that \(t=s\theta+(d-1)(1-\theta)\). Then (2.2) and the inequality \(\min\{a,b\}\leq a^{\theta}b^{1-\theta}\) imply that
\[|\mathcal{T}(p)\cap\mathcal{T}(p^{\prime})|\lesssim(C_{T}M\delta^{s})^{\theta }\cdot d(p,p^{\prime})^{-t}.\]
Since \(\mathcal{P}\) is a \((\delta,t,C_{P})\)-set, for fixed \(p\in\mathcal{P}\) we have
\[\sum_{p^{\prime}}(d(p,p^{\prime})+\delta)^{-t}\lesssim\sum_{\sqrt{2}\cdot\delta\leq 2^{-j}\leq\sqrt{2}}2^{tj}|\{p^{\prime}\in\mathcal{P}:d(p,p^{\prime})\leq 2^{-j}\}|\lessapprox_{\delta}C_{P}\cdot|\mathcal{P}|.\]
We deduce that
\[\sum_{p,p^{\prime}}|\mathcal{T}(p)\cap\mathcal{T}(p^{\prime})|\lesssim(C_{T}M\delta^{s})^{\theta}\sum_{p,p^{\prime}}(d(p,p^{\prime})+\delta)^{-t}\lessapprox_{\delta}C_{P}(C_{T}M\delta^{s})^{\theta}\cdot|\mathcal{P}|^{2},\]
so
\[|I(\mathcal{P},\mathcal{T})|\lessapprox_{\delta}C_{P}^{1/2}(C_{T}M\delta^{s})^{\theta/2}\cdot|\mathcal{T}|^{1/2}|\mathcal{P}|\leq\sqrt{C_{P}C_{T}}\cdot(M\delta^{s})^{\theta/2}\cdot|\mathcal{T}|^{1/2}|\mathcal{P}|.\]
This proves Proposition 2.9, and Corollary 2.10 follows by observing \(|I(\mathcal{P},\mathcal{T})|\geq M|\mathcal{P}|\).
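To see how this elementary bound compares with Theorem 1.8, consider the typical situation \(M=\delta^{-s}\) and \(C_{P},C_{T}\leq\delta^{-\varepsilon}\). Then \(M\delta^{s}=1\), and Corollary 2.10 gives
\[|\mathcal{T}|\gtrapprox(C_{P}C_{T})^{-1}\cdot M\delta^{-s}\cdot(M\delta^{s})^{\frac{t-s}{d-1-s}}\geq\delta^{2\varepsilon}\cdot\delta^{-2s},\]
so the elementary estimate already yields \(\delta^{-2s}\) up to \(\delta^{O(\varepsilon)}\) and logarithmic factors; the content of Theorem 1.8 is the additional gain of \(\delta^{-\varepsilon}\).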
### Multiscale analysis
Following Section 4 of [18], we would like to change scale from \(\delta\) to \(\Delta>\delta\), while preserving the properties of \(\mathcal{T}(p)\). We say \(A\lessapprox_{\delta}B\) if there exists an absolute constant \(C\geq 1\) such that \(A\leq C\cdot[\log(1/\delta)]^{C}\cdot B\). We start by naming the objects in Theorem 1.8.
**Definition 2.12**.: _Fix \(\delta\in 2^{-\mathbb{N}}\), \(s\in[0,d-1]\), \(\kappa>0\), \(C_{1},C_{2}>0\), \(M\in\mathbb{N}\). We say that a pair \((\mathcal{P}_{0},\mathcal{T}_{0})\subset\mathcal{D}_{\delta}\times\mathcal{T}^{\delta}\) is a \((\delta,s,C_{1},\kappa,C_{2},M)\)-nice configuration if for every \(p\in\mathcal{P}_{0}\), there exists a \((\delta,s,C_{1},0)\) and \((\delta,\kappa,C_{2},k)\)-set \(\mathcal{T}(p)\subset\mathcal{T}_{0}\) with \(|\mathcal{T}(p)|=M\) and such that \(T\cap p\neq\emptyset\) for all \(T\in\mathcal{T}(p)\)._
Using the method of induction on scales, we would like to relate nice configurations at scale \(\delta\) to nice configurations at scales \(\Delta\), \(\frac{\delta}{\Delta}\), where \(\delta<\Delta\leq 1\). The following proposition, which combines Propositions 4.1 and 5.2 of [18], gives a way of doing so with only polylog losses. Our proof relies on the same ideas as [18], with some technical simplifications. We defer the proof to Section 6, where we prove a slightly more general version.
**Proposition 2.13**.: _Fix dyadic numbers \(0<\delta<\Delta\leq 1\). Let \((\mathcal{P}_{0},\mathcal{T}_{0})\) be a \((\delta,s,C_{1},\kappa,C_{2},M)\)-nice configuration. Then there exist sets \(\mathcal{P}\subset\mathcal{P}_{0}\), \(\mathcal{T}(p)\subset\mathcal{T}_{0}(p),p\in\mathcal{P}\), and \(\mathcal{T}_{\Delta}\subset\mathcal{T}^{\Delta}\) such that denoting \(\mathcal{T}=\cup_{p\in\mathcal{P}}\mathcal{T}(p)\) the following hold:_
1. \(|\mathcal{D}_{\Delta}(\mathcal{P})|\approx_{\delta}|\mathcal{D}_{\Delta}( \mathcal{P}_{0})|\) _and_ \(|\mathcal{P}\cap Q|\approx_{\delta}|\mathcal{P}_{0}\cap Q|\) _for all_ \(Q\in\mathcal{D}_{\Delta}(\mathcal{P})\)_._
2. _There exists_ \(\mathbf{N}\) _such that_ \(|\mathcal{T}\cap\mathbf{T}|\sim\mathbf{N}\) _for all_ \(\mathbf{T}\in\mathcal{T}_{\Delta}\)_._
3. \((\mathcal{D}_{\Delta}(\mathcal{P}),\mathcal{T}_{\Delta})\) _is_ \((\Delta,s,C_{\Delta}^{1},\kappa,C_{\Delta}^{2},M_{\Delta})\)_-nice for some_ \(C_{\Delta}^{1}\approx_{\delta}C_{1}\)_,_ \(C_{\Delta}^{2}\approx_{\delta}C_{2}\)_, and_ \(M_{\Delta}\geq 1\)_._
4. _For each_ \(Q\in\mathcal{D}_{\Delta}(\mathcal{P})\)_, let_ \(\mathcal{T}_{\Delta}(Q)\) _be the tubes in_ \(\mathcal{T}_{\Delta}\) _through_ \(Q\)_. Then for all_ \(\mathbf{T}\in\mathcal{T}_{\Delta}(Q)\)_, we have_ \[|\{(p,T)\in(\mathcal{P}\cap Q)\times\mathcal{T}:T\in\mathcal{T}(p)\text{ and }T\subset\mathbf{T}\}|\gtrapprox_{\delta}\frac{M\cdot|\mathcal{P}\cap Q|}{|\mathcal{T}_{\Delta}(Q)|}.\]
5. _For each_ \(Q\in\mathcal{D}_{\Delta}(\mathcal{P})\)_, there exist_ \(C^{1}_{Q}\approx_{\delta}C_{1}\)_,_ \(C^{2}_{Q}\approx_{\delta}C_{2}\)_,_ \(M_{Q}\geq 1\)_, a subset_ \(\mathcal{P}_{Q}\subset\mathcal{P}\cap Q\) _with_ \(|\mathcal{P}_{Q}|\gtrapprox_{\Delta}|\mathcal{P}\cap Q|\)_, and a family of tubes_ \(\mathcal{T}_{Q}\subset\mathcal{T}^{\delta/\Delta}\) _such that_ \((S_{Q}(\mathcal{P}_{Q}),\mathcal{T}_{Q})\) _is_ \((\delta/\Delta,s,C^{1}_{Q},\kappa,C^{2}_{Q},M_{Q})\)_-nice._
_Furthermore, the families \(\mathcal{T}_{Q}\) can be chosen so that_
\[\frac{|\mathcal{T}_{0}|}{M}\gtrapprox_{\delta}\frac{|\mathcal{T}_{\Delta}|}{M_{\Delta}}\cdot\left(\max_{Q\in\mathcal{D}_{\Delta}(\mathcal{P})}\frac{|\mathcal{T}_{Q}|}{M_{Q}}\right). \tag{2.3}\]
Iterate this proposition to get (for details, see [27, Corollary 4.1])
**Corollary 2.14**.: _Fix \(N\geq 2\) and a sequence \(\{\Delta_{j}\}_{j=0}^{N}\subset 2^{-\mathbb{N}}\) with_
\[0<\delta=\Delta_{N}<\Delta_{N-1}<\cdots<\Delta_{1}<\Delta_{0}=1.\]
_Let \((\mathcal{P}_{0},\mathcal{T}_{0})\subset\mathcal{D}_{\delta}\times\mathcal{T} ^{\delta}\) be a \((\delta,s,C_{1},\kappa,C_{2},M)\)-nice configuration. Then there exists a set \(\mathcal{P}\subset\mathcal{P}_{0}\) such that:_
1. \(|\mathcal{D}_{\Delta_{j}}(\mathcal{P})|\approx_{\delta}|\mathcal{D}_{\Delta_{ j}}(\mathcal{P}_{0})|\) _and_ \(|\mathcal{P}\cap\boldsymbol{p}|\approx_{\delta}|\mathcal{P}_{0}\cap\boldsymbol {p}|\)_,_ \(1\leq j\leq N\)_,_ \(\boldsymbol{p}\in\mathcal{D}_{\Delta_{j}}(\mathcal{P})\)_._
2. _For every_ \(0\leq j\leq N-1\) _and_ \(\boldsymbol{p}\in\mathcal{D}_{\Delta_{j}}\)_, there exist numbers_ \(C^{1}_{\boldsymbol{p}}\approx_{\delta}C^{1}\)_,_ \(C^{2}_{\boldsymbol{p}}\approx_{\delta}C^{2}\)_, and_ \(M_{\boldsymbol{p}}\geq 1\)_, and a family of tubes_ \(\mathcal{T}_{\boldsymbol{p}}\subset\mathcal{T}^{\Delta_{j+1}/\Delta_{j}}\) _with the property that_ \((S_{\boldsymbol{p}}(\mathcal{P}\cap\boldsymbol{p}),\mathcal{T}_{\boldsymbol {p}})\) _is a_ \((\Delta_{j+1}/\Delta_{j},s,C^{1}_{\boldsymbol{p}},\kappa,C^{2}_{\boldsymbol{p} },M_{\boldsymbol{p}})\)_-nice configuration._
_Furthermore, the families \(\mathcal{T}_{\boldsymbol{p}}\) can be chosen such that if \(\boldsymbol{p}_{j}\in\mathcal{D}_{\Delta_{j}}(\mathcal{P})\) for \(0\leq j\leq N-1\), then_
\[\frac{|\mathcal{T}_{0}|}{M}\gtrapprox_{\delta}\prod_{j=0}^{N-1}\frac{|\mathcal{T}_{\boldsymbol{p}_{j}}|}{M_{\boldsymbol{p}_{j}}}.\]
_Here, \(A\gtrapprox_{\delta}B\) means \(A\gtrsim_{N}\log(1/\delta)^{-C}B\), and likewise for \(\lessapprox_{\delta}\) and \(\approx_{\delta}\)._
### Uniform sets and branching numbers
The following exposition borrows heavily from [21, Section 2.3].
**Definition 2.15**.: _Let \(n\geq 1\) and_
\[\delta=\Delta_{n}<\Delta_{n-1}<\cdots<\Delta_{1}\leq\Delta_{0}=1\]
_be a sequence of dyadic scales. We say that a set \(P\subset[0,1)^{d}\) is \(\{\Delta_{j}\}_{j=1}^{n}\)-uniform if there is a sequence \(\{N_{j}\}_{j=1}^{n}\) such that \(N_{j}\in 2^{\mathbb{N}}\) and \(|P\cap Q|_{\Delta_{j}}=N_{j}\) for all \(j\in\{1,2,\cdots,n\}\) and \(Q\in\mathcal{D}_{\Delta_{j-1}}(P)\)._
**Remark 2.16**.: _By uniformity, we have \(|P|_{\Delta_{m}}=|P\cap Q|_{\Delta_{m}}|P|_{\Delta_{\ell}}\) for \(0\leq\ell<m\leq n\) and \(Q\in\mathcal{D}_{\Delta_{\ell}}(P)\)._
As a result, we can always refine a set \(P\) to be uniform:
**Lemma 2.17**.: _Let \(P\subset[0,1)^{d}\), \(m,T\in\mathbb{N}\), and \(\delta=2^{-mT}\). Let \(\Delta_{j}:=2^{-jT}\) for \(0\leq j\leq m\), so in particular \(\delta=\Delta_{m}\). Then there is a \(\{\Delta_{j}\}_{j=1}^{m}\)-uniform set \(P^{\prime}\subset P\) such that_
\[|P^{\prime}|_{\delta}\geq(2T)^{-m}|P|_{\delta}.\]
_In particular, if \(\varepsilon>0\) and \(T^{-1}\log(2T)\leq\varepsilon\), then \(|P^{\prime}|\geq\delta^{\varepsilon}|P|\)._
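The last claim is just arithmetic (interpreting \(\log\) as \(\log_{2}\)): since \(\delta=2^{-mT}\),
\[(2T)^{-m}=2^{-m\log_{2}(2T)}\geq 2^{-mT\varepsilon}=\delta^{\varepsilon}\qquad\text{whenever }T^{-1}\log_{2}(2T)\leq\varepsilon.\]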
Uniform sets can be encoded by a branching function.
**Definition 2.18**.: _Let \(T\in\mathbb{N}\), and let \(\mathcal{P}\subset[0,1)^{d}\) be a \(\{\Delta_{j}\}_{j=1}^{n}\)-uniform set, with \(\Delta_{j}:=2^{-jT}\), and with associated sequence \(\{N_{j}\}_{j=1}^{n}\subset\{1,\ldots,2^{dT}\}^{n}\). We define the branching function \(f:[0,n]\to[0,dn]\) by setting \(f(0)=0\), and_
\[f(j):=\frac{\log|\mathcal{P}|_{2^{-jT}}}{T}=\frac{1}{T}\sum_{i=1}^{j}\log N_{i},\quad j\in\{1,\ldots,n\},\]
_and then interpolating linearly between integers._
**Definition 2.19**.: _Let \(s_{f}(a,b)=\frac{f(b)-f(a)}{b-a}\) denote the slope of the line segment between \((a,f(a))\) and \((b,f(b))\). We say that a function \(f:[0,n]\to\mathbb{R}\) is \(\varepsilon\)-superlinear on \([a,b]\subset[0,n]\), or that \((f,a,b)\) is \(\varepsilon\)-superlinear, if_
\[f(x)\geq f(a)+s_{f}(a,b)(x-a)-\varepsilon(b-a),x\in[a,b].\]
_We say that \((f,a,b)\) is \(\varepsilon\)-linear if_
\[|f(x)-f(a)-s_{f}(a,b)(x-a)|\leq\varepsilon(b-a),x\in[a,b].\]
The following lemma converts between branching functions and the uniform structure of \(P\). It is [18, Lemma 8.3] (or an immediate consequence of the definitions)
**Lemma 2.20**.: _Let \(P\) be a \((\Delta^{i})_{i=1}^{m}\)-uniform set in \([0,1)^{d}\) with associated branching function \(f\), and let \(\delta=\Delta^{m}\)._
* _If_ \(f\) _is_ \(\varepsilon\)_-superlinear on_ \([0,m]\)_, then_ \(P\) _is a_ \((\delta,s_{f}(0,m),O_{\Delta}(1)\delta^{-\varepsilon})\)_-set._
* _If_ \(f\) _is_ \(\varepsilon\)_-linear on_ \([0,m]\)_, then_ \(P\) _is a_ \((\delta,s_{f}(0,m),O_{\Delta}(1)\delta^{-\varepsilon},O_{\Delta}(1)\delta^{- \varepsilon})\)_-regular set between scales_ \(\delta\) _and_ \(1\)_._
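As a simple illustration of Definition 2.18 and Lemma 2.20 (with \(\log=\log_{2}\)): if the branching is constant, say \(N_{j}=2^{\lfloor sT\rfloor}\) for every \(j\), then
\[f(j)=\frac{1}{T}\sum_{i=1}^{j}\log_{2}N_{i}=\frac{\lfloor sT\rfloor}{T}\,j,\]
so \(f\) is \(\varepsilon\)-linear on \([0,n]\) for every \(\varepsilon>0\), with slope within \(\frac{1}{T}\) of \(s\). Lemma 2.20 then says that such a set \(P\) is, up to \(O_{\Delta}(1)\delta^{-\varepsilon}\) factors, both a \((\delta,s_{f}(0,n))\)-set and regular between scales \(\delta\) and \(1\).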
The crucial branching lemma is [18, Lemma 8.5] applied to the function \(\frac{2}{d}\cdot f\):
**Lemma 2.21**.: _Fix \(s\in(0,1)\) and \(t\in(s,d]\). For every \(\varepsilon>0\) there is \(\tau=\tau(\varepsilon,s,t)>0\) such that the following holds: for every piecewise affine \(d\)-Lipschitz function \(f:[0,m]\to\mathbb{R}\) such that_
\[f(x)\geq tx-\varepsilon m\text{ for all }x\in[0,m],\]
_there exists a family of non-overlapping intervals \(\{[c_{j},d_{j}]\}_{j=1}^{n}\) contained in \([0,m]\) such that:_
1. _For each_ \(j\)_, at least one of the following alternatives holds:_ 1. \((f,c_{j},d_{j})\) _is_ \(\varepsilon\)_-linear with_ \(s_{f}(c_{j},d_{j})\geq s\)_;_ 2. \((f,c_{j},d_{j})\) _is_ \(\varepsilon\)_-superlinear with_ \(s_{f}(c_{j},d_{j})=s\)_._
2. \(d_{j}-c_{j}\geq\tau m\) _for all_ \(j\)_;_
3. \(|[0,m]\setminus\cup_{j}[c_{j},d_{j}]|\lesssim_{s,t}\varepsilon m\)_._
### Combinatorial and probabilistic preliminaries
In this section, we collect a few of the results from additive combinatorics and probability that will be used in the following sections.
First, we make the following observation (Lemma 19 of [10]) about intersections of high-probability events. (That lemma was stated for Lebesgue measure but the same proof works for general measures \(\nu\).)
**Lemma 2.22**.: _Let \(A\subset\mathbb{R}^{d}\) be equipped with a measure \(\nu\), and let \(\Theta\) be an index set equipped with a probability measure \(\mu\). Suppose there is \(K\geq 1\) and for each \(\theta\in\Theta\), a Borel subset \(A_{\theta}\) with \(\nu(A_{\theta})\geq\nu(A)/K\). Then_
\[\mu^{\otimes q}(\{(\theta_{1},\theta_{2},\cdots,\theta_{q}):\nu(A_{\theta_{1}} \cap A_{\theta_{2}}\cap\cdots\cap A_{\theta_{q}})\geq\frac{\nu(A)}{2K^{q}}\}) \geq\frac{1}{2K^{q}}.\]
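For orientation, here is the case \(q=2\), under the extra assumption that \(A_{\theta}\subset A\) for each \(\theta\) (which is how the lemma is used below). Writing \(g(x)=\mu(\{\theta:x\in A_{\theta}\})\), Fubini and Cauchy-Schwarz give
\[\int\nu(A_{\theta_{1}}\cap A_{\theta_{2}})\,d\mu(\theta_{1})d\mu(\theta_{2})=\int_{A}g^{2}\,d\nu\geq\frac{1}{\nu(A)}\Big(\int\nu(A_{\theta})\,d\mu(\theta)\Big)^{2}\geq\frac{\nu(A)}{K^{2}},\]
and since the integrand on the left is at most \(\nu(A)\), the set of pairs \((\theta_{1},\theta_{2})\) where it is at least \(\frac{\nu(A)}{2K^{2}}\) must have \(\mu\otimes\mu\)-measure at least \(\frac{1}{2K^{2}}\).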
Next, we state Ruzsa's triangle inequality [10, Lemma 21] (see also [23]):
**Lemma 2.23**.: _For any sets \(A,B,C\subset\mathbb{R}^{d}\), we have_
\[|B|_{\delta}|A-C|_{\delta}\lesssim_{d}|A-B|_{\delta}|B-C|_{\delta}.\]
We also would like the Plünnecke-Ruzsa inequality, in the form stated by [10, Lemma 22]:
**Lemma 2.24**.: _Let \(A,B\) be bounded subsets of \(\mathbb{R}^{d}\). For all \(K\geq 1\), \(\delta>0\), if \(|A+B|_{\delta}\leq K|B|_{\delta}\), then for all \(k,\ell\geq 1\), we have_
\[|kA-\ell A|_{\delta}\lesssim_{d}K^{k+\ell}|B|_{\delta}.\]
_Here, \(kA=\underbrace{A+\cdots+A}_{k\text{ times}}\)._
In a similar spirit, the set of \(w\) such that \(X+wX\) is small compared to \(|X|\) forms a ring. The following is a restatement of [11, Lemma 30(i,ii)] for \(\mathbb{R}\). Note that \(\operatorname{End}(\mathbb{R})\simeq\mathbb{R}\) with identity \(1\).
**Lemma 2.25**.: _Define \(S_{\delta}(X;K)=\{w\in[-K,K]:|X+wX|_{\delta}\leq K|X|_{\delta}\}\)._
1. _If_ \(a\in S_{\delta}(X;\delta^{-\varepsilon})\) _and_ \(b\in\mathbb{R}\) _such that_ \(|a-b|\leq\delta^{1-\varepsilon}\)_, then_ \(b\in S_{\delta}(X;\delta^{-O(\varepsilon)})\)_._
2. _If_ \(1,a,b\in S_{\delta}(X;\delta^{-\varepsilon})\)_, then_ \(a+b,a-b,ab\) _all belong to_ \(S_{\delta}(X;\delta^{-O(\varepsilon)})\)
The following theorem (a special case of Theorem 5 of [11]) is a quantitative statement that \(\frac{1}{2}\)-dimensional subrings of \(\mathbb{R}\) don't exist. In fact, by repeated sum-product operations, we can get all of \(\mathbb{R}\).
**Theorem 2.26**.: _We work in \(\mathbb{R}^{1}\). Given \(\kappa,\varepsilon_{0}>0\), there exist \(\varepsilon>0\) and an integer \(s\geq 1\) such that for \(\delta<\delta_{0}(\kappa,\varepsilon_{0})\), the following holds. For every \((\kappa,\delta^{-\varepsilon})\)-set \(A\subset B(0,\delta^{-\varepsilon})\), we have_
\[B(0,\delta^{\varepsilon_{0}})\subset\langle A\rangle_{s}+B(0,\delta),\]
_where \(\langle A\rangle_{1}:=A\cup(-A)\) and for any integer \(s\geq 1\), define \(\langle A\rangle_{s+1}:=\langle A\rangle_{s}\cup(\langle A\rangle_{s}+\langle A \rangle_{1})\cup(\langle A\rangle_{s}\cdot\langle A\rangle_{1})\)._
Finally, we shall need a discretized variant of the Balog-Szemeredi-Gowers theorem. Our version is closest to [20, Theorem 4.38], which is taken from [1, p. 196], which in turn refers to Exercise 6.4.10 in [31]. But the exercise is only sketched in [31], so for completeness, we provide a proof in Appendix A.
**Theorem 2.27**.: _Let \(K\geq 1\) and \(\delta>0\) be parameters. Let \(A,B\) be bounded subsets of \(\mathbb{R}^{d}\), and let \(P\subset A\times B\) satisfy_
\[|P|_{\delta}\geq K^{-1}|A|_{\delta}|B|_{\delta}\text{ and }|\{a+b:(a,b)\in P\}|_{ \delta}\leq K(|A|_{\delta}|B|_{\delta})^{1/2}.\]
_Then one can find subsets \(A^{\prime}\subset A,B^{\prime}\subset B\) satisfying_
* \(|A^{\prime}|_{\delta}\gtrsim_{d}K^{-2}|A|_{\delta},|B^{\prime}|_{\delta} \gtrsim_{d}K^{-2}|B|_{\delta}\)_,_
* \(|A^{\prime}+B^{\prime}|_{\delta}\lesssim_{d}K^{8}(|A|_{\delta}|B|_{\delta})^{ 1/2}\)_,_
* \(|P\cap(A^{\prime}\times B^{\prime})|\gtrsim_{d}\frac{|A|_{\delta}|B|_{\delta} }{K^{2}}\)_._
_(Implicit constants depend on \(d\) but not on \(\delta,K\).)_
We also need the following version of multi-linear Kakeya.
**Theorem 2.28** (Theorem 1 in [2]).: _Let \(2\leq k\leq d\) and \(\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{k}\) be families of \(1\)-tubes in \(\mathbb{R}^{d}\). Then_
\[\int_{\mathbb{R}^{d}}\left(\sum_{T_{1}\in\mathcal{T}_{1}}\cdots\sum_{T_{k}\in \mathcal{T}_{k}}|e(T_{1})\wedge\cdots\wedge e(T_{k})|\chi_{T_{1}\cap\cdots \cap T_{k}}(x)\right)^{1/(k-1)}dx\lesssim_{k,d}\left(\prod_{i=1}^{k}|\mathcal{ T}_{i}|\right)^{1/(k-1)}.\]
_Here, \(e(T_{i})\) is the unit vector in the direction of tube \(T_{i}\)._
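For example, when \(k=2\) the exponent \(\frac{1}{k-1}\) equals \(1\), and the statement reduces to the elementary observation that two \(1\)-tubes meeting at angle \(\theta\) intersect in a set of volume \(\lesssim 1/\sin\theta\), so that
\[\int_{\mathbb{R}^{d}}\sum_{T_{1}\in\mathcal{T}_{1}}\sum_{T_{2}\in\mathcal{T}_{2}}|e(T_{1})\wedge e(T_{2})|\chi_{T_{1}\cap T_{2}}(x)\,dx=\sum_{T_{1},T_{2}}|e(T_{1})\wedge e(T_{2})|\,|T_{1}\cap T_{2}|\lesssim|\mathcal{T}_{1}||\mathcal{T}_{2}|.\]
The strength of the theorem lies in the case \(k\geq 3\), where the exponent \(\frac{1}{k-1}\) is strictly less than \(1\) and no such term-by-term bound is available.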
### Energy
**Definition 2.29**.: _The \((s,k)\)-Riesz energy of a finite Borel measure \(\mu\) on \(\mathbb{R}^{d}\) is_
\[I^{\delta}_{s,k}(\mu)=\int(|(x_{0}-x_{1})\wedge\cdots\wedge(x_{0}-x_{k})|+ \delta)^{-s}\,d\mu(x_{0})\cdots d\mu(x_{k}).\]
_If \(k=1\) and \(\delta=0\), we recover the usual \(s\)-dimensional Riesz energy._
**Lemma 2.30**.:
1. _Fix_ \(0<s<t\) _and a measure_ \(\mu\) _with total mass_ \(C\)_. If_ \(\mu(H_{r})\leq Cr^{t}\) _for every_ \((r,k-1)\)_-plate_ \(H_{r}\) _and_ \(r>0\)_, then_ \(I^{0}_{s,k}(\mu)\lesssim_{s,t}C^{k+1}\)_._
2. _Fix_ \(0<\delta<\frac{1}{2}\)_. If_ \(I^{\delta}_{s_{i},k_{i}}(\mu)\leq C\) _for_ \(1\leq i\leq m\)_, then_ \(\operatorname{spt}(\mu)\) _contains a set which is simultaneously a_ \((\delta,\frac{s_{i}}{k_{i}},O(1)\cdot(Cm)^{1/k_{i}}\log\delta^{-1},k_{i}-1)\)_-set for each_ \(i\)_._
**Remark 2.31**.: _If \(k_{1}=m=1\) in part (b), then we can drop the log factor (c.f. proof of Lemma A.6 in [18]). We don't know if we can drop the log factor for \(k>1\) or \(m>1\)._
Proof.: (a) Let \(\rho_{i}\) be the distance between \(x_{i}\) and the plane spanned by \(x_{0},\cdots,x_{i-1}\); notice that \(|(x_{0}-x_{1})\wedge\cdots\wedge(x_{0}-x_{k})|=\prod_{i=1}^{k}\rho_{i}\). Thus, we can rewrite \(I_{s,k}(\mu)\) as an iterated integral
\[\int d\mu(x_{0})\int\rho_{1}^{-s}d\mu(x_{1})\int\rho_{2}^{-s}\,d\mu(x_{2}) \cdots\int\rho_{k}^{-s}\,d\mu(x_{k}).\]
We will be done if we show for all \(1\leq i\leq k\) and choices of \(x_{0},\cdots,x_{i-1}\), that \(\int\rho_{i}^{-s}\,d\mu(x_{i})\lesssim C\). Let \(H\) be the span of \(x_{0}\) through \(x_{i-1}\), and observe that by definition, \(\{x_{i}:\rho_{i}\leq r\}\subset H^{(r)}\), which is contained in an \((r,k-1)\)-plate. Thus,
\[\int\rho_{i}^{-s}\,d\mu(x_{i})\lesssim C+\sum_{\rho=2^{-n},n\geq 1}C\rho^{t-s} \lesssim_{s,t}C.\]
(b) Let \(P_{i}=\{x_{0}\in\operatorname{spt}(\mu):\int(|(x_{0}-x_{1})\wedge\cdots \wedge(x_{0}-x_{k_{i}})|+\delta)^{-s_{i}}\,d\mu(x_{1})\cdots d\mu(x_{k_{i}})< 2mC\}\). By Markov's inequality, \(\mu(P_{i})>1-\frac{1}{2m}\), so by the union bound, \(P=\cap_{i=1}^{m}P_{i}\) satisfies \(\mu(P)>\frac{1}{2}\).
We claim that \(\mu(P\cap H_{r})\lesssim(Cm)^{1/k_{i}}r^{s_{i}/k_{i}}\) for all \((r,k_{i}-1)\)-plates \(H_{r}\) and \(\delta\leq r\leq 1\), \(1\leq i\leq m\). Indeed, if \(P\cap H_{r}=\emptyset\), then we are done. Otherwise, pick \(x_{0}\in P\cap H_{r}\) and observe that if \(x_{1},x_{2},\cdots,x_{k_{i}}\in H_{r}\), then \(|(x_{0}-x_{1})\wedge\cdots\wedge(x_{0}-x_{k_{i}})|+\delta\lesssim r\). Thus, we get \(\mu(H_{r})^{k_{i}}\cdot r^{-s_{i}}\lesssim mC\), so \(\mu(P\cap H_{r})\lesssim(mCr^{s_{i}})^{1/k_{i}}\).
Finally, let \(P^{\prime}_{c}\subset\mathcal{D}_{\delta}(P)\) be those dyadic \(\delta\)-cubes \(p\) such that \(\mu(p)\sim c\). We know \(\sum_{c=2^{-n}\in[\delta^{d},1]}\mu(P^{\prime}_{c})\geq\frac{1}{4}\), so by dyadic pigeonholing, some \(\mu(P^{\prime}_{c})\gtrsim(\log\delta^{-1})^{-1}\). Then \(P^{\prime}_{c}\) will be a \((\delta,\frac{s_{i}}{k_{i}},O(1)\cdot(Cm)^{1/k_{i}}\log\delta^{-1},k_{i}-1)\)-set for all \(1\leq i\leq m\).
## 3 Improved incidence estimates for quasi-product sets
The main novelty of this paper is the following Proposition, which is a higher-dimensional refinement of [20, Proposition 4.36] (see also [18, Proposition A.7]). It can be viewed as a variant of Theorem 1.8 for quasi-product sets.
**Proposition 3.1**.: _Given \(0\leq k<d-1\), \(0\leq s<k+1\), \(\tau,\kappa>0\), there exist \(\eta(s,k,\kappa,\tau,d)>0\) and \(\delta_{0}(s,k,\kappa,\tau,d)>0\) such that the following holds for all \(\delta\in(0,\delta_{0}]\)._
_Let \(\mathbf{Y}\subset(\delta\cdot\mathbb{Z})\cap[0,1)\) be a \((\delta,\tau,\delta^{-\eta})\)-set, and for each \(\mathbf{y}\in\mathbf{Y}\), assume that \(\mathbf{X_{y}}\subset(\delta\cdot\mathbb{Z})^{d-1}\cap[0,1)^{d-1}\) is a \((\delta,\kappa,\delta^{-\eta},k)\)-set with cardinality \(\geq\delta^{-s+\eta}\). Let_
\[\mathbf{Z}=\bigcup_{\mathbf{y}\in\mathbf{Y}}\mathbf{X_{y}}\times\{\mathbf{y}\}.\]
_For every \(\mathbf{z}\in\mathbf{Z}\), assume that \(\mathcal{T}(\mathbf{z})\) is a set of \(\delta\)-tubes each making an angle \(\geq\frac{1}{100}\) with the plane \(y=0\) with \(|\mathcal{T}(\mathbf{z})|\geq\delta^{-s+\eta}\) such that \(\mathbf{z}\in T\) for all \(T\in\mathcal{T}(\mathbf{z})\). Then \(|\mathcal{T}|\geq\delta^{-2s-\eta}\), where \(\mathcal{T}=\cup_{\mathbf{z}\in\mathbf{Z}}\mathcal{T}(\mathbf{z})\)._
**Remark 3.2**.: _In contrast to Theorem 1.8 and [20, Proposition 4.36], we (perhaps surprisingly) don't need any non-concentration assumptions on the tube sets \(\mathcal{T}(\mathbf{z})\) (even when \(d=2\)). Instead, it suffices to have weak non-concentration assumptions on \(\mathbf{Y}\) and \(\mathbf{X_{y}}\) for each \(\mathbf{y}\in\mathbf{Y}\). The non-concentration assumption on \(\mathbf{X_{y}}\) is necessary: otherwise, we can take \(s=k\), let \(\mathbf{Z}\) be the \(\delta\)-balls contained in some \((\delta,k+1)\)-plate \(H\), and let \(\mathcal{T}\) be the \(\delta\)-tubes contained in \(H\)._
### An improved slicing estimate
We will eventually deduce Proposition 3.1 from the following slicing estimate.
**Theorem 3.3**.: _For \(0\leq k\leq d-2\), \(0\leq s<k+1\), and \(0<\kappa\leq 1\), there exists \(\varepsilon>0\) such that the following holds for sufficiently small \(\delta<\delta_{0}(s,k,d,\varepsilon)\). Let \(\mathcal{T}\) be a \((\delta,\kappa,\delta^{-\varepsilon},k)\)-set of \(\delta\)-tubes each making angle \(\geq\frac{1}{100}\) with the plane \(y=0\) with \(|\mathcal{T}|\geq\delta^{-2s+\varepsilon}\). Let \(\mu\) be a probability measure on \(\mathbb{R}\) such that for all \(\delta\leq r\leq 1\), we have \(\mu(B_{r})\leq\delta^{-\varepsilon}r^{\kappa}\). Then there is a set \(\mathcal{D}\subset\mathbb{R}\) with \(\mu(\mathcal{D})\geq 1-\delta^{\varepsilon}\) such that the slice of \(\cup\mathcal{T}^{\prime}\) at \(z=z_{0}\) has \(\delta\)-covering number \(\geq\delta^{-s-\varepsilon}\), for every subset \(\mathcal{T}^{\prime}\subset\mathcal{T}\) with \(|\mathcal{T}^{\prime}|\geq\delta^{\varepsilon}|\mathcal{T}|\) and every \(z_{0}\in\mathcal{D}\)._
**Remark 3.4**.: _One should compare Theorem 3.3 to [10, Theorem 1]. Indeed, if \(k=0\) and \(d=2\), Theorem 3.3 is a direct corollary of [10, Theorem 1]. We can see this by using ball-tube duality, which turns \(\mathcal{T}\) into a subset of \(\mathbb{R}^{2}\). Under this duality, the slice of \(\cup\mathcal{T}^{\prime}\) at \(z=z_{0}\) becomes the orthogonal projection \(\pi_{\tilde{z}_{0}}\) to a line in the dual space, for some \(\tilde{z}_{0}\in S^{1}\). The map \(z_{0}\to\tilde{z}_{0}\) induces a pushforward measure \(\tilde{\mu}\) of \(\mu\) which still satisfies the non-concentration condition \(\tilde{\mu}(B_{r})\lesssim\delta^{-\varepsilon}r^{\kappa}\), so we can apply [10, Theorem 1]. (For more details, see the proof of Proposition A.7 in [18].)_
_In higher dimensions, we can still use duality to turn \(\mathcal{T}\) into a subset of \(\mathbb{A}(d,1)\sim\mathbb{R}^{2(d-1)}\), and then slices of \(\cup\mathcal{T}^{\prime}\) become orthogonal projections to \((d-1)\)-planes. Unfortunately, [10, Theorem 1] does not apply because the pushforward measure \(\tilde{\mu}\) is still supported on a line in \(S^{d-1}\). This approach is bound to fail because [10, Theorem 1] does not use the strong assumption that \(\mathcal{T}\) is non-concentrated around \((k+1)\)-planes. Using this assumption is the key novelty of this proof._
Nonetheless, Theorem 3.3 will borrow many ideas from the proof of [10, Theorem 1] and He's previous work [11]. Roughly, the strategy is as follows.
* As in [10], reduce to the following slightly weaker statement: given \(\mathcal{T}\) and \(\mu\), we can find a subset \(\mathcal{T}^{\prime}\subset\mathcal{T}\) such that the conclusion of Theorem 3.3 holds for \(\mathcal{T}^{\prime}\) in place of \(\mathcal{T}\). This relies on a formal exhaustion argument.
* Then, as in [10], reduce this slightly weaker statement to the following even weaker one: there exists \(z_{0}\in E:=\operatorname{spt}\mu\) such that the slice of \(\cup\mathcal{T}\) at \(z=z_{0}\) has \(\delta\)-covering number \(\geq\delta^{-s-\varepsilon}\). This relies on additive combinatorics (e.g. the Balog-Szemeredi-Gowers theorem) and some probability.
* Assume this is false: that for all \(z_{0}\in E\), the slice of \(\cup\mathcal{T}\) at \(z=z_{0}\) has \(\delta\)-covering number \(\lessapprox\delta^{-s}\). Using additive combinatorics as in [11], the same conclusion is true for all \(z_{0}\in E^{\prime}\), which is the set of sums or differences of \(m\) many terms, each of which is a product of \(m\) elements of \(E\). (Here, \(m\) will be a fixed large integer.)
* Finally, if \(m\) is sufficiently large in terms of \(\kappa,\varepsilon\), then \(E^{\prime}\) contains a large interval \([0,\delta^{\varepsilon}]\) (c.f. [11, Theorem 5]). Essentially, we have a set of \(\gtrapprox\delta^{-2s}\) many tubes \(\mathcal{T}\), each containing \(\gtrapprox\delta^{-1}\) many \(\delta\)-balls, such that the union of the \(\delta\)-balls has cardinality \(\lessapprox\delta^{-(s+1)}\). Without further restrictions, this Furstenberg-type problem doesn't lead to a contradiction: take \(s=k\) and \(\mathcal{T}\) to be the set of \(\delta\)-tubes in a \((\delta,k+1)\)-plate. Luckily, our set of tubes \(\mathcal{T}\) is still a \((\delta,\kappa,\delta^{-O(\varepsilon)},k)\)-set, which rules out this counterexample. Indeed, we may finish using multi-linear Kakeya.
The reader be warned: we shall execute this strategy in reverse order. This is mainly because the main innovation of the paper is the fourth bullet point.
### An improved Furstenberg estimate
The following estimate complements work of Zahl [34]: we prove an \(\varepsilon\)-improvement on the union of tubes under a mild \((k+1)\)-plane non-concentration for the set of tubes. As in Zahl [34], the key technique is multilinear Kakeya.
**Theorem 3.5**.: _For any \(0\leq k<d-1\), \(0\leq s<k+1\), \(0<\kappa\leq 1\), there exists \(\varepsilon>0\) such that the following holds for sufficiently small \(\delta>0\). Let \(\mathcal{T}\) be a \((\delta,\kappa,\delta^{-\varepsilon},k)\)-set of \(\delta\)-tubes with \(|\mathcal{T}|\geq\delta^{-2s+\varepsilon}\), and for each \(t\in\mathcal{T}\), let \(P_{t}\) be a set of \(\delta\)-balls intersecting \(t\) such that \(|P_{t}|\geq\delta^{-1+\varepsilon}\). Then \(|\cup P_{t}|\gtrsim\delta^{-(s+1)-\varepsilon}\)._
Proof.: The proof below is lossy and can possibly be improved (say by induction on scale). Also, the \(\varepsilon\) can be determined explicitly in terms of the parameters but we choose not to do so here.
We use \(\gtrapprox\) notation to hide \(\delta^{-C\varepsilon}\) terms, where \(C\) can depend on the other parameters. Let \(P=\cup P_{t}\), and suppose for contradiction that \(|P|\lessapprox\delta^{-(s+1)}\). Let \(\mathcal{T}(p)\) be the set of tubes in \(\mathcal{T}\) through \(p\). Use a bush argument to upper bound \(|\mathcal{T}(p)|\):
\[\delta^{-(s+1)}\gtrapprox|P|\geq|\cup_{t\ni p}(P_{t}\backslash B(p,\delta^{2 \varepsilon}))|\geq\delta^{2d\varepsilon}\sum_{t\ni p}|P_{t}\backslash B(p, \delta^{2\varepsilon})|\geq\delta^{-1+(2d+1)\varepsilon}|\mathcal{T}(p)|.\]
Thus, \(|\mathcal{T}(p)|\lessapprox\delta^{-s}\) for all \(p\in P\). We get the following inequality chain
\[\delta^{-2s-1}\lessapprox\delta^{-1}|\mathcal{T}|\lessapprox I(P,\mathcal{T}) \lessapprox\delta^{-s}|P|\lessapprox\delta^{-2s-1}.\]
This means \(I(P,\mathcal{T})\approx\delta^{-2s-1}\), \(|P|\approx\delta^{-s-1}\), and \(|\mathcal{T}|\approx\delta^{-2s}\). Now perform a dyadic pigeonholing to extract a subset \(P^{\prime}\subset P\) such that \(|\mathcal{T}(p)|\in[M,2M]\) for all \(p\in P^{\prime}\) and \(I(P^{\prime},\mathcal{T})\approx\delta^{-2s-1}\). We know from before that \(M\leq\delta^{-s}\), and \(\delta^{-2s-1}\lessapprox I(P^{\prime},\mathcal{T})\approx M|P^{\prime}|\lessapprox M|P|\lessapprox\delta^{-2s-1}\), so \(M\approx\delta^{-s}\) and \(|P^{\prime}|\approx\delta^{-s-1}\). (This type of dyadic pigeonholing will also be used later. We also remark that dyadic pigeonholing was not necessary to achieve this step; simply let \(P^{\prime}\) be the set of \(p\in P\) satisfying \(|\mathcal{T}(p)|\geq\delta^{-s+C\varepsilon}\) for some large \(C\), and use the bound on \(I(P,\mathcal{T})\) to get a lower bound for \(|P^{\prime}|\).)
Now, we claim that \(P^{\prime}\) is a \((\delta,\kappa,\delta^{-O(\varepsilon)},k+1)\)-set. Fix \(\delta<r<1\) and let \(H_{r}\) be a \((r,k+1)\)-plate. We first bound \(I(P^{\prime}\cap H_{r},\mathcal{T})\). Letting \(H_{r^{\prime}}\) be the \((r^{\prime},k+1)\)-plate that is a dilate of \(H_{r}\) with the same center, we have
\[I(P^{\prime}\cap H_{r},\mathcal{T}) \leq\sum_{r^{\prime}\in 2^{-\mathbb{N}}\cap[r,1]}I(P^{\prime}\cap H_{r},\mathcal{T}\cap(H_{r^{\prime}}\setminus H_{r^{\prime}/2}))\] \[\leq\sum_{r^{\prime}}\frac{r}{r^{\prime}\delta}\cdot|\mathcal{T}\cap H_{r^{\prime}}|\] \[\leq\sum_{r^{\prime}}\frac{r}{r^{\prime}\delta}\cdot|\mathcal{T}|\delta^{-\varepsilon}(r^{\prime})^{\kappa}\] \[\lesssim\delta^{-1}|\mathcal{T}|\delta^{-\varepsilon}r^{\kappa}.\]
Thus, since \(I(P^{\prime}\cap H_{r},\mathcal{T})\gtrapprox\delta^{-s}|P^{\prime}\cap H_{r}|\), we have \(|P^{\prime}\cap H_{r}|\lessapprox\delta^{s-1}|\mathcal{T}|r^{\kappa}\approx \delta^{-(s+1)}r^{\kappa}\approx|P^{\prime}|r^{\kappa}\).
Finally, since \(I(P^{\prime},\mathcal{T})\gtrapprox|P^{\prime}|\delta^{-s}\gtrapprox\delta^{-2s-1}\) and \(|\mathcal{T}|\lessapprox\delta^{-2s}\), by dyadic pigeonholing there exists a subset \(\mathcal{T}^{\prime}\subset\mathcal{T}\) with \(|\mathcal{T}^{\prime}|\approx|\mathcal{T}|\) such that each \(t\in\mathcal{T}^{\prime}\) contains \(\approx\delta^{-1}\) many \(\delta\)-balls in \(P^{\prime}\). Now since \(I(P^{\prime},\mathcal{T}^{\prime})\gtrapprox\delta^{-1}|\mathcal{T}^{\prime}|\gtrapprox\delta^{-2s-1}\), \(|P^{\prime}|\lessapprox\delta^{-s-1}\), and \(|\mathcal{T}(p)|\lessapprox\delta^{-s}\) for all \(p\in P^{\prime}\), by dyadic pigeonholing we can find \(\tilde{P}\subset P^{\prime}\) with \(|\tilde{P}|\gtrapprox|P^{\prime}|\) such that each \(p\in\tilde{P}\) lies in \(\approx\delta^{-s}\) many tubes in \(\mathcal{T}^{\prime}\).
Now, we are in good shape to apply multilinear Kakeya. For \(p\in\tilde{P}\), let \(\mathcal{T}(p)\) be the tubes in \(\mathcal{T}^{\prime}\) through \(p\). By a bush argument, \(\cup\mathcal{T}(p)\) contains \(\gtrapprox\delta^{-(s+1)}\) many \(\delta\)-balls in \(P\). Since \(P^{\prime}\) is a \((\delta,\kappa,\delta^{-O(\varepsilon)},k+1)\)-set, there are \(\gtrapprox\delta^{-(s+1)(k+3)}\) many \((k+3)\)-tuples of points \((p_{0},p_{1},\cdots,p_{k+2})\) such that \(p_{0}\) and \(p_{i}\) lie on some tube \(t_{i}\in\mathcal{T}_{i}\) and \(|e(t_{1})\wedge\cdots\wedge e(t_{k+2})|\gtrapprox 1\) (where \(e(t)\) is the unit vector in the direction of tube \(t\)). Thus, there is a choice of \(p_{1},\cdots,p_{k+2}\) such that there are \(\gtrapprox\delta^{-(s+1)}\) many valid choices for \(p_{0}\). But this leads to a contradiction by the following argument. Let \(\mathcal{T}_{i}\) be the tubes of \(\mathcal{T}\) through \(p_{i}\), \(1\leq i\leq k+2\); then by a rescaled version of Multilinear Kakeya (Theorem 2.28), the number of valid choices for \(p_{0}\) is \(\lessapprox\left(\prod_{i=1}^{k+2}|\mathcal{T}_{i}|\right)^{1/(k+1)}\lessapprox\delta^{-s(k+2)/(k+1)}\), which (using \(s<k+1\)) is much smaller than \(\delta^{-(s+1)}\) provided that \(\varepsilon,\delta\) are sufficiently small in terms of the parameters. This contradiction completes the proof.
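The exponent comparison used at the end of the proof is elementary algebra:
\[\frac{s(k+2)}{k+1}<s+1\iff s(k+2)<(s+1)(k+1)\iff s<k+1,\]
so the gap between the two exponents is a positive constant depending only on \(s\) and \(k\), and the multilinear Kakeya count is indeed far smaller than \(\delta^{-(s+1)}\) once \(\varepsilon\) and \(\delta\) are small enough.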
### From Furstenberg to weak slicing
_This subsection contains ideas from [11] and [10]._
**Theorem 3.6**.: _For \(0\leq k\leq d-2\), \(0\leq s<k+1\), and \(0<\kappa\leq 1\), there exists \(\varepsilon>0\) such that the following holds for sufficiently small \(\delta<\delta_{0}(s,k,d,\varepsilon)\). Let \(\mathcal{T}\) be a \((\delta,\kappa,\delta^{-\varepsilon},k)\)-set of \(\delta\)-tubes each making angle \(\geq\frac{1}{100}\) with the plane \(y=0\) with \(|\mathcal{T}|\geq\delta^{-2s+\varepsilon}\). Let \(\mu\) be a probability measure on \(\mathbb{R}\) such that for all \(\delta\leq r\leq 1\), we have \(\mu(B_{r})\leq\delta^{-\varepsilon}r^{\kappa}\). Then there exists \(z_{0}\in\mathrm{spt}\mu\) such that the slice of \(\cup\mathcal{T}\) at \(z=z_{0}\) has \(\delta\)-covering number \(\geq\delta^{-s-\varepsilon}\)._
Proof.: We use \(\lessapprox\) to denote \(\leq C\delta^{-C\varepsilon}\), where \(C\) may depend on \(\kappa,s\).
Let \(E:=\mathrm{spt}\mu\); without loss of generality, assume \(E\) is closed. Let \(z_{1}=\inf E\) and \(z_{2}=\sup E\); then \(d(z_{1},z_{2})\geq\delta^{2\varepsilon/\kappa}\) since \(\mu(B(z_{1},\delta^{2\varepsilon/\kappa}))\leq\delta^{-\varepsilon}\cdot \delta^{2\varepsilon}\leq\delta^{\varepsilon}<1\).
Let \(X=\cup\mathcal{T}(z=z_{1})\) and \(Y=\cup\mathcal{T}(z=z_{2})\); suppose for contradiction that the conclusion fails, so that in particular \(|X|_{\delta},|Y|_{\delta}\lessapprox\delta^{-s}\). On the other hand, since each \(T\in\mathcal{T}\) passes through \(O(1)\) many elements in \(X\) and \(O(1)\) many elements in \(Y\), we get that
\[\delta^{-2s}\lessapprox|\mathcal{T}|\lesssim|X|_{\delta}|Y|_{\delta}\lessapprox\delta^{-2s},\]
so in fact, \(|X|,|Y|\approx\delta^{-s}\) and \(|\mathcal{T}|\approx\delta^{-2s}\).
Let \(E^{\prime}:=[z_{1},z_{2}]\setminus(B(z_{1},\delta^{2\varepsilon/\kappa}) \cup B(z_{2},\delta^{2\varepsilon/\kappa}))\); then \(\mu(E^{\prime})\geq 1-2\delta^{\varepsilon}\geq\frac{1}{2}\) for \(\delta\) small enough.
Let \(f(z)=\frac{z-z_{1}}{z_{2}-z}\); note that on \(E^{\prime}\), we have that \(f\) is \(\approx 1\)-bilipschitz, and \(f(z)\approx 1\) for all \(z\in E^{\prime}\).
The problem condition literally states \(|(z_{2}-z)X+(z-z_{1})Y|_{\delta}\lessapprox\delta^{-s}\) for \(z\in E^{\prime}\); since \((z_{2}-z)\approx 1\), we can divide through by \((z_{2}-z)\) to get
\[|X+f(z)Y|_{\delta}\lessapprox\delta^{-s}\text{ for }z\in E^{\prime}.\]
Now pick an arbitrary \(z^{\prime}\in E^{\prime}\). In particular, we get \(|X+f(z^{\prime})Y|_{\delta}\lessapprox\delta^{-s}\), so by Lemma 2.23, we have for all \(z\in E^{\prime}\),
\[|X-\frac{f(z)}{f(z^{\prime})}X|_{\delta}\leq\frac{|X+f(z)Y|_{\delta}|f(z)Y+ \frac{f(z)}{f(z^{\prime})}X|_{\delta}}{|f(z)Y|_{\delta}}\lessapprox\delta^{ -s}.\]
In addition, since \(|X+f(z^{\prime})Y|_{\delta}\lessapprox\delta^{-s}\lessapprox|X|_{\delta}\), the Plünnecke-Ruzsa inequality (Lemma 2.24) gives \(|X+X|_{\delta}\lessapprox|Y|_{\delta}\lessapprox\delta^{-s}\).
Define \(\tilde{\mu}=g_{*}(\mu)\), the pushforward of \(\mu\) under \(g(z)=\frac{f(z)}{f(z^{\prime})}\); then \(g\) (like \(f\)) is \(\approx 1\)-bilipschitz on \(E^{\prime}\), so \(\tilde{\mu}\) also satisfies a non-concentration condition \(\tilde{\mu}(B_{r})\lessapprox r^{\kappa}\) for \(\delta\leq r\leq 1\). Now pick \(\varepsilon_{0}>0\), and assume \(\varepsilon\) is chosen sufficiently small in terms of \(\varepsilon_{0}\). By the iterated sum-product Theorem 2.26, we can find an integer \(m\geq 1\) such that for \(\delta<\delta_{0}(\kappa,\varepsilon,\varepsilon_{0})\),
\[B(0,\delta^{\varepsilon_{0}})\subset\langle A\rangle_{m}+B(0,\delta),\]
where \(\langle A\rangle_{1}:=A\cup(-A)\) and for any integer \(m\geq 1\), define \(\langle A\rangle_{m+1}:=\langle A\rangle_{m}\cup(\langle A\rangle_{m}+ \langle A\rangle_{1})\cup(\langle A\rangle_{m}\cdot\langle A\rangle_{1})\).
By applying the ring structure Lemma 2.25 many times, we see that \(B(0,\delta^{\varepsilon_{0}})\subset S_{\delta}(X;\delta^{-O_{m}(\varepsilon)})\) and since \(1\in S_{\delta}(X;\delta^{-O(\varepsilon)})\), that \(B(1,\delta^{\varepsilon_{0}})\subset S_{\delta}(X;\delta^{-O_{m}(\varepsilon)})\). By definition of \(S_{\delta}\) and Lemma 2.23, we get for \(w\in B(1,\delta^{\varepsilon_{0}})\),
\[|X+wf(z^{\prime})Y|_{\delta}\leq\frac{|X-wX|_{\delta}|wX+wf(z^{\prime})Y|_{ \delta}}{|wX|_{\delta}}\lessapprox\delta^{-s}.\]
In other words, for all \(z_{0}\in I:=f^{-1}([(1-\delta^{\varepsilon_{0}})f(z^{\prime}),(1+\delta^{ \varepsilon_{0}})f(z^{\prime})])\), the slice \(\cup\mathcal{T}(z=z_{0})\) has \(\delta\)-covering number \(\lessapprox\delta^{-s}\). Since \(f\) is \(\approx 1\)-bilipschitz and \(f(z^{\prime})\approx 1\), we have \(|I|\approx\delta^{\varepsilon_{0}}\).
Now, we seek a contradiction to Theorem 3.5. For every \(t\in\mathcal{T}\), we let \(P_{t}\) be the \(\delta\)-balls on \(t\) with \(z\)-coordinate in \(I\). We observe the following:
* Recall our assumption that \(\mathcal{T}\) is a \((\delta,\kappa,\delta^{-\varepsilon},k)\)-set of \(\delta\)-tubes with \(|\mathcal{T}|\geq\delta^{-2s+\varepsilon}\).
* \(|P_{t}|\gtrsim\delta^{-1}|I|\gtrapprox\delta^{-1+\varepsilon_{0}}\).
* \(|\cup P_{t}|\lessapprox\delta^{-1}|I|\cdot\delta^{-s}\lessapprox\delta^{-(s+1)}\), since for every \(z_{0}\in I\) the slice \(\cup\mathcal{T}(z=z_{0})\) has \(\delta\)-covering number \(\lessapprox\delta^{-s}\).
Thus, if \(\varepsilon,\varepsilon_{0}\) are sufficiently small in terms of \(s,k,\kappa\), then we contradict Theorem 3.5.
### An intermediate slicing result
_This subsection contains ideas from [10]._
Let \(\mathcal{E}(\mathcal{T},\varepsilon)\) be the set of exceptional slices,
\[\mathcal{E}(\mathcal{T},\varepsilon)=\{z_{0}\in\mathbb{R}:\exists\mathcal{T}^{\prime}\subset\mathcal{T},|\mathcal{T}^{\prime}|\geq\delta^{\varepsilon}|\mathcal{T}|,|\cup\mathcal{T}^{\prime}(z=z_{0})|_{\delta}<\delta^{-s-\varepsilon}\}.\]
Just like in [10, Proposition 25], we will prove a weaker version of Theorem 3.3; the stronger version follows from a formal exhaustion argument which we present in the next subsection.
**Theorem 3.7**.: _Under the assumptions of Theorem 3.3, there exists \(\mathcal{T}^{\prime}\subset\mathcal{T}\) such that \(\mu(\mathcal{E}(\mathcal{T}^{\prime}))\leq\delta^{\varepsilon}\)._
Proof.: We use \(\lessapprox\) to denote \(\leq C\delta^{-C\varepsilon}\), where \(C\) may depend on \(\kappa,s\). Let \(\pi:\mathbb{R}^{d}\to\mathbb{R}^{d-1}\) be the projection onto the plane orthogonal to the \(z\)-axis. For a tube \(t\), let \(t(z^{\prime})=\pi(t\cap\{z=z^{\prime}\})\), and for a set of tubes \(\mathcal{T}\), let \(\mathcal{T}(z^{\prime})\) denote the slice \(\pi(\mathcal{T}\cap\{z=z^{\prime}\})\).
We follow the argument in [10, Proof of Proposition 7]. Suppose Theorem 3.7 is false. We can find \(z_{1}\) and a subset \(\mathcal{T}^{\prime\prime\prime}\subset\mathcal{T}\) with \(|\mathcal{T}^{\prime\prime\prime}|\geq\delta^{\varepsilon}|\mathcal{T}|\) such that \(|\cup\mathcal{T}^{\prime\prime\prime}(z_{1})|_{\delta}<\delta^{-s-\varepsilon}\). For this \(\mathcal{T}^{\prime\prime\prime}\) we have \(\mu(\mathcal{E}(\mathcal{T}^{\prime\prime\prime}))\geq\delta^{\varepsilon}\), hence \(\mu(\mathcal{E}(\mathcal{T}^{\prime\prime\prime})\setminus B(z_{1},\delta^{3\varepsilon/\kappa}))\geq\delta^{\varepsilon}-\delta^{2\varepsilon}>0\) by the non-concentration property of \(\mu\). Thus, we can find \(z_{2}\) with \(|z_{1}-z_{2}|\gtrapprox 1\) and \(\mathcal{T}^{\prime\prime}\subset\mathcal{T}^{\prime\prime\prime}\) with \(|\mathcal{T}^{\prime\prime}|\geq\delta^{2\varepsilon}|\mathcal{T}|\) such that \(X:=\cup\mathcal{T}^{\prime\prime}(z_{1})\) and \(Y:=\cup\mathcal{T}^{\prime\prime}(z_{2})\) satisfy \(|X|_{\delta},|Y|_{\delta}<\delta^{-s-\varepsilon}\). Since every \(t\in\mathcal{T}^{\prime\prime}\) passes through a point in \(X\) and a point in \(Y\), and since there are \(\lessapprox\)\(1\) many
tubes through given points \(x\in X\) and \(y\in Y\), we can find \(\mathcal{T}^{\prime}\subset\mathcal{T}^{\prime\prime}\) with \(|\mathcal{T}^{\prime}|\gtrapprox|\mathcal{T}|\gtrapprox\delta^{-2s}\) such that for every \(x\in X,y\in Y\), there is at most one tube in \(\mathcal{T}^{\prime}\) through \(x,y\). In particular,
\[\delta^{-2s}\lessapprox|\mathcal{T}^{\prime}|\leq|X|_{\delta}|Y|_{\delta} \lessapprox\delta^{-2s},\]
and so \(|X|_{\delta},|Y|_{\delta}\gtrapprox\delta^{-s}\), \(|\mathcal{T}|\lessapprox|\mathcal{T}^{\prime}|\lessapprox\delta^{-2s}\).
For this \(\mathcal{T}^{\prime}\) we have \(\mu(\mathcal{E}(\mathcal{T}^{\prime}))\geq\delta^{\varepsilon}\), so defining \(\mathcal{D}=\mathcal{E}(\mathcal{T}^{\prime})\setminus(B(z_{1},\delta^{3 \varepsilon/\kappa})\cup B(z_{2},\delta^{3\varepsilon/\kappa}))\), we have \(\mu(\mathcal{D})\geq\delta^{\varepsilon}-2\delta^{2\varepsilon}>\delta^{2\varepsilon}\).
**Claim 1.** For \(z=az_{1}+(1-a)z_{2}\in\mathcal{D}\), we have \(a,1-a\gtrapprox 1\). Furthermore, there exist \(X_{z}\subset X\), \(Y_{z}\subset Y\), and \(\mathcal{T}_{z}\subset\mathcal{T}^{\prime}\) with \(|X_{z}|_{\delta},|Y_{z}|_{\delta}\gtrapprox\delta^{-s}\), \(|\mathcal{T}_{z}|\gtrapprox\delta^{-2s}\) such that \(|X_{z}+\frac{1-a}{a}Y_{z}|_{\delta}\lessapprox\delta^{-s}\) and for each \(t\in\mathcal{T}_{z}\), we have \(t(z_{1})\in X_{z}^{(\delta)}\) and \(t(z_{2})\in Y_{z}^{(\delta)}\).
_Proof._ The first claim is evident by definition of \(\mathcal{D}\). For the second claim, since \(z\in\mathcal{E}(\mathcal{T}^{\prime})\), there exists \(\mathcal{T}_{z}^{\prime}\subset\mathcal{T}^{\prime}\) such that \(|\mathcal{T}_{z}^{\prime}|\gtrapprox\delta^{-2s}\) and \(|\mathcal{T}_{z}^{\prime}(z)|_{\delta}\lessapprox\delta^{-s}\). Now notice that for each \(x\in X,y\in Y\) there is at most one tube \(t\in\mathcal{T}^{\prime}\) passing through \(x,y\). Let \(P\) be the set of \((x,y)\in X\times Y\) with exactly one tube \(t_{x,y}\in\mathcal{T}_{z}^{\prime}\) passing through \(x,y\), so \(|P|\geq|\mathcal{T}_{z}^{\prime}|\gtrapprox\delta^{-2s}\). We also observe that \(|ax+(1-a)y-t_{x,y}(z)|\leq\delta\) for \((x,y)\in P\), and so \(|\{ax+(1-a)y:(x,y)\in P\}|_{\delta}\leq|\mathcal{T}_{z}^{\prime}(z)|_{2\delta}\lessapprox\delta^{-s}\). Thus, by the Balog-Szemer\'edi-Gowers theorem (Theorem 2.27), we can find \(X_{z}\subset X\), \(Y_{z}\subset Y\), and \(\mathcal{T}_{z}\subset\mathcal{T}_{z}^{\prime}\) such that \(|aX_{z}|_{\delta},|(1-a)Y_{z}|_{\delta}\gtrapprox\delta^{-s}\), \(|\mathcal{T}_{z}|\gtrapprox\delta^{-2s}\), \(|aX_{z}+(1-a)Y_{z}|_{\delta}\lessapprox\delta^{-s}\), and for each \(t\in\mathcal{T}_{z}\), we have \(t(z_{1})\in X_{z}^{(\delta)}\) and \(t(z_{2})\in Y_{z}^{(\delta)}\). Then \(|X_{z}|_{\delta},|Y_{z}|_{\delta}\gtrapprox\delta^{-s}\) and \(|X_{z}+\frac{1-a}{a}Y_{z}|_{\delta}\lessapprox\delta^{-s}\), proving the Claim.
Now, we apply Lemma 2.22 to the sets \(X_{z}^{(\delta)}\times Y_{z}^{(\delta)}\), the measure \(\frac{1}{\mu(\mathcal{D})}\mu|_{\mathcal{D}}\), and \(K=\delta^{-C\varepsilon}\) for a sufficiently large \(C\). The result, after applying Fubini's theorem, is that we can find \(z_{*}\), \(X_{*}:=X_{z_{*}}\subset X\), \(Y_{*}:=Y_{z_{*}}\subset Y\), and a subset \(\mathcal{D}^{\prime}\subset\mathcal{D}\) with \(\mu(\mathcal{D}^{\prime})\gtrapprox\mu(\mathcal{D})\gtrapprox 1\) and \(z_{*}\in\mathcal{D}^{\prime}\) such that for all \(z\in\mathcal{D}^{\prime}\), we have
\[|X_{*}^{(\delta)}\cap X_{z}^{(\delta)}||Y_{*}^{(\delta)}\cap Y_{z}^{(\delta)}|\gtrapprox\delta^{2(d-1)}\delta^{-2s}.\]
Since \(|X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|\lesssim\delta^{d-1}|X|_{\delta}\lessapprox\delta^{d-1}\delta^{-s}\) and \(|Y_{*}^{(\delta)}\cap Y_{z}^{(\delta)}|\lesssim\delta^{d-1}|Y|_{\delta}\lessapprox\delta^{d-1}\delta^{-s}\), we have in fact \(|X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|,|Y_{*}^{(\delta)}\cap Y_{z}^{(\delta)}|\approx\delta^{d-1}\delta^{-s}\); in particular, \(|X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|_{\delta},|Y_{*}^{(\delta)}\cap Y_{z}^{(\delta)}|_{\delta}\gtrapprox\delta^{-s}\) and \(|X_{z}|_{\delta},|Y_{z}|_{\delta}\approx\delta^{-s}\) for all \(z\in\mathcal{D}^{\prime}\).
The next leg of the proof is to show:
**Claim 2.** For all \(z\in\mathcal{D}^{\prime}\), if we write \(z=az_{1}+(1-a)z_{2}\), then \(|X_{*}+\frac{1-a}{a}Y_{*}|_{\delta}\lessapprox\delta^{-s}\).
_Proof._ Note that Claim 1 tells us \(|X_{z}+\frac{1-a}{a}Y_{z}|_{\delta}\lessapprox\delta^{-s}\). Combining this with the Ruzsa triangle inequality (Lemma 2.23), \(X_{*}^{(\delta)}\cap X_{z}^{(\delta)}\subset X_{z}^{(\delta)}\), and \(|A^{(\delta)}|_{\delta}\sim_{d}|A|_{\delta}\) for any subset \(A\) of the doubling metric space \(\mathbb{R}^{d}\), we have
\[|X_{z}-X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|_{\delta}\lesssim|X_{z}^{(\delta)}-X_{z}^{(\delta)}|_{\delta}\lesssim|X_{z}-X_{z}|_{\delta}\lesssim\frac{|X_{z}+\frac{1-a}{a}Y_{z}|_{\delta}^{2}}{|\frac{1-a}{a}Y_{z}|_{\delta}}\lessapprox\delta^{-s}.\]
The same argument shows (where \(z_{*}=a_{*}z_{1}+(1-a_{*})z_{2}\)):
\[|X_{*}-X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|_{\delta}\lesssim|X_{*}-X_{*}|_{\delta}\lesssim\frac{|X_{*}+\frac{1-a_{*}}{a_{*}}Y_{*}|_{\delta}^{2}}{|\frac{1-a_{*}}{a_{*}}Y_{*}|_{\delta}}\lessapprox\delta^{-s}.\]
Thus, by Lemma 2.23 again, we have
\[|X_{*}-X_{z}|_{\delta}\lesssim\frac{|X_{z}-X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|_{\delta}|X_{*}-X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|_{\delta}}{|X_{*}^{(\delta)}\cap X_{z}^{(\delta)}|_{\delta}}\lessapprox\delta^{-s}.\]
Similarly, we have \(|Y_{*}-Y_{z}|_{\delta}\lessapprox\delta^{-s}\). A final application of Lemma 2.23 gives
\[|X_{*}+\frac{1-a}{a}Y_{*}|_{\delta} \lesssim\frac{|X_{z}+\frac{1-a}{a}Y_{*}|_{\delta}|X_{z}-X_{*}|_{ \delta}}{|X_{z}|_{\delta}}\] \[\lesssim\frac{|X_{z}+\frac{1-a}{a}Y_{z}|_{\delta}|X_{z}-X_{*}|_{ \delta}|\frac{1-a}{a}(Y_{z}-Y_{*})|_{\delta}}{|X_{z}|_{\delta}|\frac{1-a}{a}Y_ {z}|_{\delta}}\] \[\lessapprox\frac{\delta^{-s}\delta^{-s}\delta^{-s}}{\delta^{-s} \delta^{-s}}\leq\delta^{-s}.\]
This proves Claim 2.
Finally, we seek a contradiction by applying Theorem 3.6 to \(\mathcal{T}_{z_{*}}\) and \(\mu|_{\mathcal{D}^{\prime}}\). We satisfy the condition (if \(\varepsilon\) is sufficiently small) because Claim 1 and \(|\mathcal{T}|\lessapprox\delta^{-2s}\) tell us that \(\mathcal{T}_{z_{*}}\) is a \((\delta,\kappa,\delta^{-O(\varepsilon)},k)\)-set with \(|\mathcal{T}_{z_{*}}|\gtrapprox\delta^{-2s}\). But we violate the conclusion (if \(\varepsilon\) is sufficiently small) because Claim 2 tells us that \(|aX_{*}+(1-a)Y_{*}|_{\delta}\lessapprox\delta^{-s}\). This contradiction finishes the proof of Theorem 3.7.
### Formal exhaustion argument
Using Theorem 3.7, we prove the following proposition, which implies Theorem 3.3 with a different value for \(\varepsilon\).
**Proposition 3.8**.: _For \(0\leq k<d-1\), \(0\leq s<k+1\), and \(0<\kappa\leq 1\), there exists \(\varepsilon>0\) such that the following holds for sufficiently small \(\delta<\delta_{0}(s,k,d,\varepsilon)\). Let \(\mathcal{T}\) be a \((\delta,\kappa,\delta^{-\varepsilon/2},k)\)-set of \(\delta\)-tubes each making angle \(\geq\frac{1}{100}\) with the plane \(y=0\) with \(|\mathcal{T}|\geq\delta^{-2s+\varepsilon/2}\). Let \(\mu\) be a probability measure on \(\mathbb{R}\) such that for all \(\delta\leq r\leq 1\), we have \(\mu(B_{r})\leq\delta^{-\varepsilon}r^{\kappa}\). Then \(\mu(\mathcal{E}(\mathcal{T},\frac{\varepsilon}{3}))\leq\delta^{\varepsilon/2}\)._
The idea is the following. A first application of Theorem 3.7 gives a subset \(\mathcal{T}^{\prime}\subset\mathcal{T}\) with \(\mu(\mathcal{E}(\mathcal{T}^{\prime},\varepsilon))\leq\delta^{\varepsilon}\). Either \(\mathcal{T}^{\prime}\) is large enough, in which case we are done, or we can cut \(\mathcal{T}^{\prime}\) out of \(\mathcal{T}\) and apply Theorem 3.7 again. This will give us another subset \(\mathcal{T}^{\prime}\). Then we iterate until the union of these sets \(\mathcal{T}^{\prime}\) is large enough.
Proof.: Let \(N\geq 0\) be an integer. Suppose we have already constructed pairwise disjoint sets \(\mathcal{T}_{1},\cdots,\mathcal{T}_{N}\) such that \(\mu(\mathcal{E}(\mathcal{T}_{i},\epsilon))\leq\delta^{\epsilon}\) for every \(i=1,\cdots,N\). Either we have
\[\left|\mathcal{T}\setminus\bigcup_{i=1}^{N}\mathcal{T}_{i}\right|\leq\delta^ {\frac{\varepsilon}{2}}|\mathcal{T}|, \tag{3.4}\]
in which case we stop, or the set \(\mathcal{T}\setminus\bigcup_{i=1}^{N}\mathcal{T}_{i}\) satisfies the conditions of Theorem 3.7. In the latter case Theorem 3.7 gives us \(\mathcal{T}_{N+1}\subset\mathcal{T}\setminus\bigcup_{i=1}^{N}\mathcal{T}_{i}\) with \(\mu(\mathcal{E}(\mathcal{T}_{N+1},\varepsilon))\leq\delta^{\varepsilon}\). By construction, \(\mathcal{T}_{N+1}\) is disjoint from each \(\mathcal{T}_{i}\), \(i=1,\cdots,N\).
When this procedure ends, write \(\mathcal{T}_{0}=\bigcup_{i=1}^{N}\mathcal{T}_{i}\). Then (3.4) says \(|\mathcal{T}\setminus\mathcal{T}_{0}|\leq\delta^{\frac{\varepsilon}{2}}|\mathcal{T}|\). Moreover, since the \(\mathcal{T}_{i}\)'s are disjoint, \(|\mathcal{T}_{0}|=\sum_{i=1}^{N}|\mathcal{T}_{i}|\).
Set \(a_{i}=\frac{|\mathcal{T}_{i}|}{|\mathcal{T}_{0}|}\). We claim that
\[\mathcal{E}(\mathcal{T},\frac{\varepsilon}{3})\subset\bigcup_{I}\bigcap_{i\in I }\mathcal{E}(\mathcal{T}_{i},\varepsilon),\]
where the index set \(I\) runs over subsets of \(\{1,2,\cdots,N\}\) with \(\sum_{i\in I}a_{i}\geq\delta^{\frac{\varepsilon}{2}}\). Since \(\mu(\mathcal{E}(\mathcal{T}_{i},\varepsilon))\leq\delta^{\varepsilon}\) for all \(i\), the desired upper bound \(\mu(\mathcal{E}(\mathcal{T},\frac{\varepsilon}{3}))\leq\delta^{\varepsilon/2}\) then follows immediately from Markov's inequality applied to the function \(\sum_{i}a_{i}\mathbb{1}_{\mathcal{E}(\mathcal{T}_{i},\varepsilon)}\) (or [10, Lemma 20]).
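In more detail (a short verification, granting the claim): set \(g(z)=\sum_{i=1}^{N}a_{i}\mathbb{1}_{\mathcal{E}(\mathcal{T}_{i},\varepsilon)}(z)\), so that \(\int g\,d\mu\leq\sum_{i=1}^{N}a_{i}\delta^{\varepsilon}=\delta^{\varepsilon}\). By the claim, every \(z\in\mathcal{E}(\mathcal{T},\frac{\varepsilon}{3})\) satisfies \(g(z)\geq\sum_{i\in I}a_{i}\geq\delta^{\varepsilon/2}\) for some admissible \(I\), hence

\[\mu(\mathcal{E}(\mathcal{T},\tfrac{\varepsilon}{3}))\leq\mu(\{z:g(z)\geq\delta^{\varepsilon/2}\})\leq\delta^{-\varepsilon/2}\int g\,d\mu\leq\delta^{\varepsilon/2}.\]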
We will now show the claim. Let \(z_{0}\in\mathcal{E}(\mathcal{T},\frac{\varepsilon}{3})\), so there exists \(\mathcal{T}^{\prime}\subset\mathcal{T}\) with \(|\mathcal{T}^{\prime}|\geq\delta^{\frac{\varepsilon}{3}}|\mathcal{T}|\) and \(|\pi_{z_{0}}(\mathcal{T}^{\prime})|_{\delta}\leq\delta^{-s-\frac{\varepsilon}{3}}\). Consider the index set \(I\) defined as
\[I=\{1\leq i\leq N\mid|\mathcal{T}^{\prime}\cap\mathcal{T}_{i}|\geq\delta^{\varepsilon}|\mathcal{T}_{i}|\}.\]
We have, for small enough \(\delta\) (using \(|\mathcal{T}^{\prime}|\geq\delta^{\varepsilon/3}|\mathcal{T}|\) and (3.4)),
\[2\delta^{\varepsilon/2}|\mathcal{T}| \leq|\mathcal{T}^{\prime}|-|\mathcal{T}\setminus\mathcal{T}_{0}|\] \[\leq\sum_{i=1}^{N}|\mathcal{T}^{\prime}\cap\mathcal{T}_{i}|\] \[\leq\sum_{i\in I}|\mathcal{T}_{i}|+\sum_{i\notin I}\delta^{\varepsilon}|\mathcal{T}_{i}|\] \[\leq\sum_{i\in I}a_{i}|\mathcal{T}|+\delta^{\varepsilon}|\mathcal{T}|.\]
Hence \(\sum_{i\in I}a_{i}\geq 2\delta^{\frac{\varepsilon}{2}}-\delta^{\varepsilon}\geq\delta^{\frac{\varepsilon}{2}}\). On the other hand, for all \(i\in I\), since
\[|\pi_{z_{0}}(\mathcal{T}^{\prime}\cap\mathcal{T}_{i})|_{\delta}\leq|\pi_{z_{0}}(\mathcal{T}^{\prime})|_{\delta}\leq\delta^{-s-\frac{\varepsilon}{3}},\]
we have \(z_{0}\in\mathcal{E}(\mathcal{T}_{i},\varepsilon)\) for all \(i\in I\). This finishes the proof of the claim.
### Proof of Proposition 3.1
_This subsection is based on Section A.7 of [18]._
We restate Proposition 3.1.
**Proposition 3.9**.: _Given \(0\leq k<d-1\), \(0\leq s<k+1\), \(\tau,\kappa>0\), there exist \(\eta(s,k,\kappa,\tau,d)>0\) and \(\delta_{0}(s,k,\kappa,\tau,d)>0\) such that the following holds for all \(\delta\in(0,\delta_{0}]\)._
_Let \(\mathbf{Y}\subset(\delta\cdot\mathbb{Z})\cap[0,1)\) be a \((\delta,\tau,\delta^{-\eta})\)-set, and for each \(\mathbf{y}\in\mathbf{Y}\), assume that \(\mathbf{X_{y}}\subset(\delta\cdot\mathbb{Z})^{d-1}\cap[0,1)^{d-1}\) is a \((\delta,\kappa,\delta^{-\eta},k)\)-set with cardinality \(\geq\delta^{-s+\eta}\). Let_
\[\mathbf{Z}=\bigcup_{\mathbf{y}\in\mathbf{Y}}\mathbf{X_{y}}\times\{\mathbf{y}\}.\]
_For every \(\mathbf{z}\in\mathbf{Z}\), assume that \(\mathcal{T}(\mathbf{z})\) is a set of \(\delta\)-tubes each making an angle \(\geq\frac{1}{100}\) with the plane \(y=0\) with \(|\mathcal{T}(\mathbf{z})|\geq\delta^{-s+\eta}\) such that \(\mathbf{z}\in T\) for all \(T\in\mathcal{T}(\mathbf{z})\). Then \(|\mathcal{T}|\geq\delta^{-2s-\eta}\), where \(\mathcal{T}=\cup_{\mathbf{z}\in\mathbf{Z}}\mathcal{T}(\mathbf{z})\)._
Proof.: Let \(A\lessapprox B\) denote \(A\leq C\delta^{-C\eta}B\) for some absolute constant \(C\geq 1\). A \((\delta,u,m)\)-set stands for a \((\delta,u,C\delta^{-C\eta},m)\)-set.
First, without loss of generality, assume \(|\mathcal{T}(\mathbf{z})|=\delta^{-s+\eta}\) for each \(\mathbf{z}\in\mathbf{Z}\).
Suppose \(|\mathcal{T}|\leq\delta^{-2s-\eta}\). Let
\[\mathcal{T}(\mathbf{y})=\bigcup_{\mathbf{x}\in\mathbf{X}_{\mathbf{y}}} \mathcal{T}(\mathbf{x},\mathbf{y}).\]
Since each tube in \(\mathcal{T}(\mathbf{y})\) has angle \(\geq\frac{1}{100}\) with the plane \(y=0\), it only intersects \(O(1)\) many \(\delta\)-balls \((\mathbf{x},\mathbf{y})\) for a given \(\mathbf{y}\). Since \(|\mathcal{T}(\mathbf{x},\mathbf{y})|\gtrapprox\delta^{-s}\) for each \(\mathbf{x}\in\mathbf{X}_{\mathbf{y}}\), we get \(|\mathcal{T}(\mathbf{y})|\gtrapprox\delta^{-s}|\mathbf{X}_{\mathbf{y}}|\). With the counter-assumption \(|\mathcal{T}|\lessapprox\delta^{-2s}\), this forces \(|\mathbf{X}_{\mathbf{y}}|\lessapprox\delta^{-s}\) for each \(\mathbf{y}\in\mathbf{Y}\). On the other hand, \(|\mathbf{X}_{\mathbf{y}}|\gtrapprox\delta^{-s}\) and so \(|\mathcal{T}|\approx\delta^{-2s}\).
Now, we check that \(\mathcal{T}(\mathbf{y})\) is a \((\delta,\kappa,\delta^{-O(\eta)},k)\)-set. Pick an \((r,k+1)\)-plate \(H\). We claim that either \(\mathcal{T}(\mathbf{y})\cap H=\emptyset\) or \(H(y=\mathbf{y})\) is contained in a \((O(r),k)\)-plate. Indeed, if \(H(y=\mathbf{y})\) is not contained within a \((Cr,k)\)-plate, then \(H\) is contained within the \(O(C^{-1})\)-neighborhood of the plane \(y=\mathbf{y}\), which means that \(H\) cannot contain any tubes of \(\mathcal{T}(\mathbf{y})\) if \(C\) is large enough (since the tubes of \(\mathcal{T}(\mathbf{y})\) have angle \(\geq\frac{1}{100}\) with that plane). Thus, we may assume \(H(y=\mathbf{y})\) is contained within a \((Cr,k)\)-plate, which means
\[|\mathcal{T}(\mathbf{y})\cap H|=|\bigcup_{\mathbf{x}\in\mathbf{ X}_{\mathbf{y}}\cap H}\mathcal{T}(\mathbf{x},\mathbf{y})\cap H|\\ \leq|\mathbf{X}_{\mathbf{y}}\cap H|\cdot\delta^{-s+\eta}\lessapprox| \mathbf{X}_{\mathbf{y}}|r^{\kappa}\cdot\delta^{-s+\eta}\lessapprox r^{\kappa} |\mathcal{T}(\mathbf{y})|.\]
Since \(|\mathcal{T}(\mathbf{y})|\approx|\mathcal{T}|\) for each \(\mathbf{y}\in\mathbf{Y}\), there is a subset \(\overline{\mathcal{T}}\subset\mathcal{T}\) such that \(|\mathcal{T}|\approx|\overline{\mathcal{T}}|\) and each \(T\in\overline{\mathcal{T}}\) belongs to \(\approx|\mathbf{Y}|\) of the sets \(\mathcal{T}(\mathbf{y})\). We show \(\overline{\mathcal{T}}\) is a \((\delta,\kappa,\delta^{-O(\eta)},k)\)-set. Indeed, given a \((r,k+1)\)-plate \(H\), we have
\[|\overline{\mathcal{T}}\cap H|\approx\sum_{T\in\overline{\mathcal{T}}\cap H}\frac{1}{|\mathbf{Y}|}\sum_{\mathbf{y}\in\mathbf{Y}}\mathbbm{1}_{\mathcal{T}(\mathbf{y})}(T)\\ \lessapprox\frac{1}{|\mathbf{Y}|}\sum_{\mathbf{y}\in\mathbf{Y}}|\mathcal{T}(\mathbf{y})\cap H|\lessapprox\frac{1}{|\mathbf{Y}|}\sum_{\mathbf{y}\in\mathbf{Y}}r^{\kappa}|\mathcal{T}(\mathbf{y})|\lessapprox r^{\kappa}|\overline{\mathcal{T}}|.\]
Finally, we refine \(\mathbf{Y}\) further: since
\[\sum_{\mathbf{y}\in\mathbf{Y}}|\overline{\mathcal{T}}\cap\mathcal{T}(\mathbf{y})|=\sum_{T\in\overline{\mathcal{T}}}|\{\mathbf{y}\in\mathbf{Y}:T\in\mathcal{T}(\mathbf{y})\}|\approx|\overline{\mathcal{T}}||\mathbf{Y}|,\]
we can find a subset \(\overline{\mathbf{Y}}\subset\mathbf{Y}\) with the property that \(|\overline{\mathcal{T}}(y)|:=|\overline{\mathcal{T}}\cap\mathcal{T}(y)|\approx| \overline{\mathcal{T}}|\) for each \(y\in\overline{\mathbf{Y}}\). Also, \(\overline{\mathbf{Y}}\) is still a \((\delta,\tau,\delta^{-O(\eta)})\)-set.
Now for each \(\mathbf{y}\in\overline{\mathbf{Y}}\), the large subset \(\overline{\mathcal{T}}(\mathbf{y})\subset\overline{\mathcal{T}}\) has slice at \(y=\mathbf{y}\) with \(\delta\)-covering number \(\lessapprox|\mathbf{X}_{\mathbf{y}}|\approx\delta^{-s}\). On the other hand, \(|\overline{\mathcal{T}}|\approx\delta^{-2s}\). This contradicts Theorem 3.3 if \(\eta\) is chosen sufficiently small in terms of the \(\varepsilon\) of that theorem.
## 4 Improved incidence estimates for regular sets
In this section, we prove a version of Theorem 1.8 for regular sets.
**Definition 4.1**.: _Let \(\delta\in 2^{-2\mathbb{N}}\) be a dyadic number. Let \(C,K>0\), and let \(0\leq s\leq d\). A non-empty set \(\mathcal{P}\subset\mathcal{D}_{\delta}\) is called \((\delta,s,C,K)\)-regular if \(\mathcal{P}\) is a \((\delta,s,C,0)\)-set, and_
\[|\mathcal{P}|_{\delta^{1/2}}\leq K\cdot\delta^{-s/2}.\]
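For orientation, an illustrative example (not from the original text): if \(E\subset[0,1)^{d}\) is Ahlfors–David \(s\)-regular and \(\mathcal{P}\) is the family of cubes in \(\mathcal{D}_{\delta}\) meeting \(E\), then \(|\mathcal{P}|_{r}\approx r^{-s}\) for every \(r\in[\delta,1]\); in particular \(|\mathcal{P}|_{\delta^{1/2}}\lesssim\delta^{-s/2}\) and \(|\mathcal{P}\cap B(x,r)|\lesssim|\mathcal{P}|r^{s}\), so \(\mathcal{P}\) is \((\delta,s,C,K)\)-regular with \(C,K\) depending only on the regularity constants of \(E\).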
**Theorem 4.2**.: _For any \(0\leq s,k<d-1\), \(\max(s,k)<t\leq d\), \(\kappa>0\), there exists \(\varepsilon(s,t,\kappa,k,d)>0\) such that the following holds for all small enough \(\delta\in 2^{-\mathbb{N}}\), depending only on \(s,t,\kappa,k,d\). Let \(\mathcal{P}\subset\mathcal{D}_{\delta}\) be a \((\delta,t,\delta^{-\varepsilon},\delta^{-\varepsilon})\)-regular set. Assume that for every \(p\in\mathcal{P}\), there exists a \((\delta,s,\delta^{-\varepsilon},0)\) and \((\delta,\kappa,\delta^{-\varepsilon},k)\)-set \(\mathcal{T}(p)\subset\mathcal{T}\) with \(|\mathcal{T}(p)|=M\) such that \(T\cap p\neq\emptyset\) for all \(T\in\mathcal{T}(p)\). Then \(|\mathcal{T}|\geq M\delta^{-s-\varepsilon}\)._
### Initial reductions
_This subsection is based on Sections 6 and A.1-A.3 of [18]._
In this section, let \(A\lessapprox B\) denote \(A\leq C\delta^{-C\varepsilon}B\) for some constant \(C\geq 1\) depending only on \(s,t,\kappa,k,d\). Also, let \(\mathcal{P}\cap Q:=\{p\in\mathcal{P}:p\subset Q\}\).
The proof will be based on contradiction, so assume \(|\mathcal{T}|\leq M\delta^{-s-\varepsilon}\). Let's rename \(\mathcal{P}\) to \(\mathcal{P}_{0}\) and \(\mathcal{T}\) to \(\mathcal{T}_{0}\), reserving \(\mathcal{P},\mathcal{T}\) for the use of Proposition 2.13.
By Corollary 2.10, we have \(1\gtrapprox(M\delta^{s})^{\frac{t-s}{d-1-s}}\), so \(M\lessapprox\delta^{-s}\) and \(|\mathcal{T}|\lessapprox\delta^{-2s}\). But \(\mathcal{T}(p)\) is a \((\delta,s,\delta^{-\varepsilon})\)-set, so \(M\approx\delta^{-s}\). Finally, by Lemma 2.7, we may assume \(|\mathcal{P}_{0}|\approx\delta^{-t}\) (passing to subsets will preserve the \((\delta,t,\delta^{-\varepsilon},\delta^{-\varepsilon})\)-regularity of \(\mathcal{P}_{0}\)).
The next reduction will make the value \(|\mathcal{P}_{0}\cap Q|\) uniform for different \(Q\in\mathcal{D}_{\Delta}(\mathcal{P}_{0})\), where \(\Delta:=\delta^{1/2}\). Let \(\mathcal{Q}_{0}=\mathcal{D}_{\Delta}(\mathcal{P}_{0})\). By \((\delta,t,\delta^{-\varepsilon},\delta^{-\varepsilon})\)-regularity of \(\mathcal{P}_{0}\), we have \(|\mathcal{Q}_{0}|\lessapprox\Delta^{-t}\). On the other hand, since \(\mathcal{P}_{0}\) is a \((\delta,t)\)-set, we have that for all \(Q\in\mathcal{Q}_{0}\),
\[|\mathcal{P}_{0}\cap Q|\lessapprox\Delta^{-t} \tag{4.5}\]
This means \(|\mathcal{Q}_{0}|\gtrapprox\Delta^{-t}\). Hence, \(|\mathcal{Q}_{0}|\approx\Delta^{-t}\). Now using (4.5) again and \(|\mathcal{P}_{0}|\approx\Delta^{-2t}\), there exists \(\mathcal{Q}_{0}^{\prime}\subset\mathcal{Q}_{0}\) with \(|\mathcal{Q}_{0}^{\prime}|\gtrapprox|\mathcal{Q}_{0}|\) such that for each \(Q\in\mathcal{Q}_{0}^{\prime}\),
\[|\mathcal{P}_{0}\cap Q|\approx\Delta^{-t} \tag{4.6}\]
Using (4.6), we quickly check that \(\mathcal{Q}_{0}^{\prime}\) is a \((\Delta,t)\)-set. Indeed, for \(r\in(\Delta,1)\) and \(Q_{r}\in\mathcal{D}_{r}\), we have
\[|\mathcal{Q}_{0}^{\prime}\cap Q_{r}|\lessapprox\Delta^{t}\sum_{Q\in\mathcal{Q}_{0}^{\prime}\cap Q_{r}}|\mathcal{P}_{0}\cap Q|\leq\Delta^{t}\,|\mathcal{P}_{0}\cap Q_{r}|\lessapprox\Delta^{t}\cdot|\mathcal{P}_{0}|\cdot r^{t}\approx\Delta^{-t}r^{t}\approx|\mathcal{Q}_{0}^{\prime}|\,r^{t}, \tag{4.7}\]
where the first inequality uses (4.6) and the third uses that \(\mathcal{P}_{0}\) is a \((\delta,t)\)-set.
**Claim.** _We have \(M_{\Delta}\lessapprox\Delta^{-s}\) and \(|\mathcal{T}_{\Delta}|\lessapprox\delta^{-s}\)._

_Proof_. By Proposition 2.13(iii), we know that \((\mathcal{D}_{\Delta}(\mathcal{P}),\mathcal{T}_{\Delta})\) is \((\Delta,s,C^{1}_{\Delta},\kappa,C^{2}_{\Delta},M_{\Delta})\)-nice, so \(M_{\Delta}\gtrapprox\Delta^{-s}\). Also, by Corollary 2.10, we have that
\[|\mathcal{T}_{\Delta}|\gtrapprox M_{\Delta}\delta^{-s/2}\cdot(M_{\Delta}\delta^{s/2})^{\frac{t-s}{d-1-s}}. \tag{4.8}\]
Next, for any \(Q\in\mathcal{Q}\), we know that \((S_{Q}(\mathcal{P}\cap Q),\mathcal{T}_{Q})\) is \((\Delta,s,C^{1}_{Q},\kappa,C^{2}_{Q},M_{Q})\)-nice. Recall that
\[S_{Q}(\mathcal{P}\cap Q)=\{S_{Q}(p):p\in\mathcal{P},p\subset Q\}\subset \mathcal{D}_{\Delta}.\]
We also know \(|\mathcal{P}\cap Q|\approx|\mathcal{P}^{\prime}_{0}\cap Q|\approx\delta^{-t/2}\) and \(\mathcal{P}\) is a \((\delta,t)\)-set, so by a similar check to (4.7), we get that \(S_{Q}(\mathcal{P}\cap Q)\) is a \((\Delta,t)\)-set. Thus by Corollary 2.10, we have
\[|\mathcal{T}_{Q}|\gtrapprox M_{Q}\cdot\delta^{-s/2}\]
But by our counterassumption \(|\mathcal{T}_{0}|\lessapprox\delta^{-2s}\), we get from (2.3) in Proposition 2.13 and \(M\gtrapprox\delta^{-s}\),
\[\delta^{-2s}\gtrapprox\frac{|\mathcal{T}_{\Delta}|}{M_{\Delta}}\cdot\frac{| \mathcal{T}_{Q}|}{M_{Q}}\cdot M\gtrapprox\frac{|\mathcal{T}_{\Delta}|}{M_{ \Delta}}\cdot\delta^{-3s/2}.\]
Thus, \(|\mathcal{T}_{\Delta}|\lessapprox M_{\Delta}\delta^{-s/2}\). Substitute into (4.8) to get
\[\delta^{-s/2}\gtrapprox\frac{|\mathcal{T}_{\Delta}|}{M_{\Delta}}\gtrapprox \delta^{-s/2}\cdot(M_{\Delta}\delta^{s/2})^{\frac{t-s}{d-1-s}}.\]
Thus, \(M_{\Delta}\delta^{s/2}\lessapprox 1\), so \(M_{\Delta}\lessapprox\Delta^{-s}\) and \(|\mathcal{T}_{\Delta}|\lessapprox\delta^{-s}\), proving the Claim.
Thus, we get the higher-dimensional analogues of properties (H1-2), (G1-4) of [18] except we only know \(|\mathcal{T}|\lessapprox\delta^{-2s}\) and not \(|\mathcal{T}|\gtrapprox\delta^{-2s}\). But this is not a limitation. We repeat and relabel these properties here:
* (G1) \(|\mathcal{Q}|\approx\Delta^{-t}\) and \(|\mathcal{P}\cap Q|\approx\Delta^{-t}\) for all \(Q\in\mathcal{Q}\).
* (G2) Every tube \(\mathbf{T}\in\mathcal{T}_{\Delta}\) satisfies \(|\mathcal{T}\cap\mathbf{T}|\lessapprox\delta^{-s}\).
* (G3) For every square \(Q\in\mathcal{Q}\), there corresponds a \((\Delta,s,0)\)-set and \((\Delta,\kappa,k)\)-set \(\mathcal{T}_{\Delta}(Q)\subset\mathcal{T}_{\Delta}\) of cardinality \(\approx M_{\Delta}\approx\Delta^{-s}\) such that \(\mathbf{T}\cap Q\neq\emptyset\) for all \(\mathbf{T}\in\mathcal{T}_{\Delta}(Q)\).
* (G4) \(|\mathcal{T}|\lessapprox\delta^{-2s}\) and \(|\mathcal{T}_{\Delta}|\approx\Delta^{-2s}\).
* (G5) For \(\mathbf{T}\in\mathcal{T}_{\Delta}(Q)\), we have \[|\{(p,T)\in(\mathcal{P}\cap Q)\times\mathcal{T}:T\in\mathcal{T}(p)\cap\mathbf{T}\}|\gtrapprox\Delta^{-s-t}.\]

Item (G1) follows from Proposition 2.13(i). Item (G3) follows from Proposition 2.13(iii) and the Claim. Item (G4) follows from \(|\mathcal{T}_{0}|\lessapprox\delta^{-2s}\) and the Claim.
Item (G5) follows from Proposition 2.13(iv) and the estimation \(M\cdot|\mathcal{P}\cap Q|/|\mathcal{T}_{\Delta}(Q)|\approx\Delta^{-s-t}\), which uses item (G1), item (G3), and the fact \(M\approx\delta^{-s}\) we proved at the beginning of the argument.
Item (G2) follows from Proposition 2.13(ii), the fact that a given \(\delta\)-tube lies in \(\lesssim 1\) many of the \(\mathbf{T}\)'s in \(\mathcal{T}_{\Delta}\), and item (G4):
\[\delta^{-2s}\gtrapprox|\mathcal{T}|\gtrsim\sum_{\mathbf{T}\in\mathcal{T}_{\Delta}}|\mathcal{T}\cap\mathbf{T}|\sim|\mathcal{T}_{\Delta}|\cdot\mathbf{N}\approx\Delta^{-2s}\cdot\mathbf{N},\]
so \(\mathbf{N}\lessapprox\delta^{-s}\).
### Transferring angular non-concentration to ball non-concentration
_This subsection is based on Section A.4 of [18]._
We first recall some notation. For a unit vector \(\sigma\in\mathbb{R}^{d}\), define \(\pi_{\sigma}(\vec{v}):=\vec{v}-(\vec{v}\cdot\sigma)\sigma\) to be the orthogonal projection to the orthogonal complement of \(\sigma\). For a \(\delta\)-tube \(T\), let \(\sigma(T)\in S^{d-1}\) denote the direction of \(T\).
In this subsection, we fix a \(Q\in\mathcal{D}_{\Delta}(\mathcal{P})\). Our goal is to show that for many \(\mathbf{T}\in\mathcal{T}_{\Delta}(Q)\), the \(\Delta^{-1}\)-rescaled version of \(\pi_{\sigma(\mathbf{T})}(\cup(\mathcal{P}\cap Q))\) contains a \((\Delta,s,0)\) and \((\Delta,\kappa^{\prime},k)\)-set for some \(\kappa^{\prime}>0\). This is the content of the next Proposition 4.3, which is a higher-dimensional extension of Lemma A.6 of [18]. The proposition encodes the following principle: If we have a set of orthogonal projections in \(\mathrm{Gr}(d,d-1)\) (which we view as \(S^{d-1}\)) that don't concentrate around \(k\)-planes, and we have a \(t\)-dimensional set \(X\) with \(t>k\), then many projections of \(X\) will not concentrate around \(k\)-planes.
**Proposition 4.3**.: _Let \(0\leq\max(s,k)<t\leq d\), \(\kappa>0\), and \(\mathbf{A},\mathbf{B}>0\). Let \(\mathcal{P}\) be a \((\Delta,t,\Delta^{-\mathbf{A}\varepsilon})\)-set in \([0,1)^{d}\), and let \(\Gamma\subset S^{d-1}\) be a \((\Delta,s,\Delta^{-\mathbf{A}\varepsilon},0)\)-set and \((\Delta,\kappa,\Delta^{-\mathbf{A}\varepsilon},k)\)-set. There exists a subset \(\Sigma\subset\Gamma\) with \(|\Sigma|\geq\frac{1}{2}|\Gamma|\) such that the following holds for all \(\sigma\in\Sigma\): if \(\mathcal{P}^{\prime}\subset\mathcal{P}\) is an arbitrary subset of cardinality \(|\mathcal{P}^{\prime}|\geq\Delta^{\mathbf{B}\varepsilon}|\mathcal{P}|\), then \(\pi_{\sigma}(\mathcal{P}^{\prime})\) contains a \((\Delta,\frac{1}{k+1}\min(\frac{t-k}{2},\kappa),\Delta^{-\mathbf{C}(\mathbf{A}+\mathbf{B})\varepsilon},k)\)-set and a \((\Delta,s,\Delta^{-\mathbf{C}(\mathbf{A}+\mathbf{B})\varepsilon},0)\)-set, where \(\mathbf{C}\geq 1\) is a constant depending only on \(k\)._
Proof.: We will use a variation of the energy argument due to Kaufman [13] in the form used to prove [18, Lemma A.6]. An alternate proof can follow [10, Lemma 27], but this approach would give weaker bounds.
Let \(\mu\) be the \(\Delta\)-discretized probability measure corresponding to \(\mathcal{P}\),
\[\mu:=\frac{1}{|\mathcal{P}|}\sum_{q\in\mathcal{P}}\frac{\mathcal{L}^{d}|_{q}}{ \Delta^{d}},\]
where \(\mathcal{L}^{d}\) is \(d\)-dimensional Lebesgue measure. Since \(\mathcal{P}\) is a \((\Delta,t,\Delta^{-\mathbf{A}\varepsilon})\)-set, we have \(\mu(B(x,r))\lessapprox r^{t}\) for all \(r>\Delta\), and it's also true for \(r<\Delta\) since \(\mu\) behaves like Lebesgue measure at small scales. We will choose a uniformly random \(\sigma\in\Gamma\) and consider what happens to the energy of \(\mu\) under projection by \(\sigma\). By linearity of expectation and definition of energy,
\[E_{s,1}:=\mathbb{E}_{\sigma}[I_{s,1}^{\Delta}(\pi_{\sigma}\mu)]=\int\mathbb{E}_{\sigma}[(|\pi_{\sigma}(x_{0}-x_{1})|+\Delta)^{-s}]\,d\mu(x_{0})d\mu(x_{1}).\]
Since \(\Gamma\) is a \((\Delta,s)\)-set, we have \(\mathbb{E}_{\sigma}[(|\pi_{\sigma}(x_{0}-x_{1})|+\Delta)^{-s}]\lesssim(\log\Delta ^{-1})\cdot\Delta^{-\mathbf{A}\varepsilon}|x_{0}-x_{1}|^{-s}\) (c.f. [13]), and so \(E_{s,1}\lessapprox I^{0}_{s,1}(\mu)\lessapprox 1\) by Lemma 2.30(a) and \(s<t\).
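For the reader's convenience, here is the standard dyadic decomposition behind the displayed expectation bound (a sketch in the spirit of Kaufman's argument; the exact form in [13] may differ). Write \(w=x_{0}-x_{1}\neq 0\) and note that \(|\pi_{\sigma}(w)|=|w|\sin\angle(w,\sigma)\), so \(\sin\angle(w,\sigma)\leq\rho\) forces \(\sigma\) to lie in one of two \(O(\rho)\)-caps around \(\pm w/|w|\); since \(\Gamma\) is a \((\Delta,s,\Delta^{-\mathbf{A}\varepsilon},0)\)-set, the proportion of such \(\sigma\in\Gamma\) is \(\lesssim\Delta^{-\mathbf{A}\varepsilon}\rho^{s}\) for \(\rho\geq\Delta\). Hence

\[\mathbb{E}_{\sigma}\big[(|\pi_{\sigma}(w)|+\Delta)^{-s}\big]\lesssim\Delta^{-\mathbf{A}\varepsilon}\Big(\sum_{\substack{\rho\ \mathrm{dyadic}\\ \Delta/|w|\lesssim\rho\leq 1}}(\rho|w|)^{-s}\rho^{s}+\Delta^{-s}\cdot\Big(\frac{\Delta}{|w|}\Big)^{s}\Big)\lesssim(\log\Delta^{-1})\,\Delta^{-\mathbf{A}\varepsilon}|w|^{-s}.\]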
Analogously, we have (let \(\beta=\min(\kappa,\frac{t-k}{2})\)):
\[E_{\beta,k+1}:=\mathbb{E}_{\sigma}[I^{\Delta}_{\beta,k+1}(\pi_{\sigma}\mu)]= \int\mathbb{E}_{\sigma}\left[\left(\left|\bigwedge_{i=1}^{k+1}\pi_{\sigma}(x_ {0}-x_{i})\right|+\Delta\right)^{-\beta}\right]\,d\mu(x_{0})\cdots d\mu(x_{k+ 1}).\]
Observe that
\[\left|\bigwedge_{i=1}^{k+1}\pi_{\sigma}(x_{0}-x_{i})\right|=\left|\sigma\wedge\bigwedge_{i=1}^{k+1}\pi_{\sigma}(x_{0}-x_{i})\right|=\left|\sigma\wedge\bigwedge_{i=1}^{k+1}(x_{0}-x_{i})\right|,\]
since each \(\pi_{\sigma}(x_{0}-x_{i})\) is orthogonal to the unit vector \(\sigma\), and \(\sigma\wedge\pi_{\sigma}(v)=\sigma\wedge v\) for any \(v\).
\(Q\in{\cal Q}\) and their large subsets have nice projections in the sense of Proposition 4.3 in every direction orthogonal to the tubes \({\bf T}\in{\cal T}_{\Delta}^{\pi}(Q)\). We keep the symbol "\(\pi\)" as a reminder of this fact.
The next goal is to find a tube \({\bf T}_{0}\) with the following properties:
* (P1) The set \(\{Q\in{\cal Q}:{\bf T}_{0}\in{\cal T}_{\Delta}^{\pi}(Q)\}\) contains a \((\Delta,t-s)\)-subset, which we denote \({\bf T}_{0}({\cal Q})\).
* (P2) \(|{\cal T}\cap{\bf T}_{0}|\lessapprox\Delta^{-2s}\).
* (P3) For each \(Q\in{\bf T}_{0}({\cal Q})\), there exists a subset \({\cal P}_{Q}\subset{\cal P}\cap Q\) such that \[|{\cal P}_{Q}|\approx\Delta^{-t}\mbox{ and }|{\cal T}(p)\cap{\bf T}_{0}|\approx\Delta^{-s}\mbox{ for all }p\in{\cal P}_{Q}.\]
* (P4) Let \(\sigma\) be the direction of \({\bf T}_{0}\). Then \(\pi_{\sigma}(S_{Q}({\cal P}_{Q}))\) contains a \((\Delta,\kappa^{\prime},k)\)-set with cardinality \(\gtrapprox\Delta^{-s}\), where \(\kappa^{\prime}:=\frac{1}{k+1}\min(\frac{t-k}{2},\kappa)\).
To get (P1)–(P3), we will mostly follow Section A.4 of [18]. (We have used the fact that \({\cal T}\) is a \((\Delta,\kappa,\Delta^{-{\bf A}\varepsilon},k)\)-set, by converting it into ball non-concentration near \((k+1)\)-planes in Proposition 4.3; the rest of the argument will only use the fact that \({\cal T}\) is a \((\Delta,s,\Delta^{-{\bf A}\varepsilon},0)\)-set.) First, we refine the sets \({\cal Q}\) and \({\cal T}_{\Delta}^{\pi}(Q)\) further to ensure that the family \(\{Q\in{\cal Q}:{\bf T}\in{\cal T}_{\Delta}^{\pi}(Q)\}\) will be \((\Delta,t-s)\)-sets for \({\bf T}\in{\cal T}_{\Delta}\). Indeed, we have
\[\sum_{{\bf T}\in{\cal T}_{\Delta}}\sum_{\begin{subarray}{c}Q,Q^{ \prime}\in{\cal Q}\\ Q\neq Q^{\prime}\end{subarray}}\frac{\mathbb{1}_{{\cal T}_{\Delta}^{\pi}(Q) \cap{\cal T}_{\Delta}^{\pi}(Q^{\prime})}({\bf T})}{d(Q,Q^{\prime})^{t-s}} =\sum_{Q,Q^{\prime}\in{\cal Q},Q\neq Q^{\prime}}\frac{|{\cal T}_ {\Delta}^{\pi}(Q)\cap{\cal T}_{\Delta}^{\pi}(Q^{\prime})|}{d(Q,Q^{\prime})^{t -s}}\] \[\lessapprox\sum_{Q,Q^{\prime}\in{\cal Q},Q\neq Q^{\prime}}\frac{1 }{d(Q,Q^{\prime})^{t}}\lessapprox\Delta^{-2t}.\]
The first \(\lessapprox\) inequality uses the fact that \({\cal T}_{\Delta}^{\pi}(Q)\) is a \((\Delta,s)\)-set of tubes with \(|{\cal T}_{\Delta}^{\pi}(Q)|\approx\Delta^{-s}\), and the second \(\lessapprox\) inequality uses the fact that \({\cal Q}\) is a \((\Delta,t)\)-set with \(|{\cal Q}|\approx\Delta^{-t}\).
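To spell out both bounds (a short verification; constants are absorbed into \(\lessapprox\)): all \(\Delta\)-tubes meeting both \(Q\) and \(Q^{\prime}\) lie in a common tube of radius \(\lesssim\Delta/d(Q,Q^{\prime})\), so the \((\Delta,s)\)-set property of \(\mathcal{T}_{\Delta}^{\pi}(Q)\) gives

\[|\mathcal{T}_{\Delta}^{\pi}(Q)\cap\mathcal{T}_{\Delta}^{\pi}(Q^{\prime})|\lessapprox\Delta^{-s}\cdot\Big(\frac{\Delta}{d(Q,Q^{\prime})}\Big)^{s}=d(Q,Q^{\prime})^{-s};\]

and since \(\mathcal{Q}\) is a \((\Delta,t)\)-set with \(|\mathcal{Q}|\approx\Delta^{-t}\), a dyadic decomposition in \(d(Q,Q^{\prime})\) gives \(\sum_{Q^{\prime}\neq Q}d(Q,Q^{\prime})^{-t}\lessapprox\Delta^{-t}\) for each fixed \(Q\), and hence the double sum is \(\lessapprox\Delta^{-2t}\).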
Thus, by Markov's inequality, for a fixed absolute large constant \(C\geq 1\), we have
\[\sum_{\begin{subarray}{c}Q,Q^{\prime}\in{\cal Q}\\ Q\neq Q^{\prime}\end{subarray}}\frac{\mathbb{1}_{{\cal T}_{\Delta}^{\pi}(Q) \cap{\cal T}_{\Delta}^{\pi}(Q^{\prime})}({\bf T})}{d(Q,Q^{\prime})^{t-s}} \geq\Delta^{-C\varepsilon+2(s-t)} \tag{4.9}\]
can only hold for \(\lessapprox\Delta^{C\varepsilon-2s}\) many tubes \({\bf T}\in{\cal T}_{\Delta}\).
**Claim 2.** If \(C\geq 1\) is sufficiently large, then there exists a subset \(\overline{{\cal Q}}\subset{\cal Q}\) with \(|\overline{{\cal Q}}|\geq\frac{1}{2}|{\cal Q}|\) such that for all \(Q_{0}\in\overline{{\cal Q}}\), at most half of the tubes \({\bf T}\in{\cal T}_{\Delta}^{\pi}(Q_{0})\) satisfy (4.9).
_Proof._ Suppose this is not true: then there exists a set \({\cal Q}_{\rm bad}\subset{\cal Q}\) with \(|{\cal Q}_{\rm bad}|>\frac{1}{2}|{\cal Q}|\) such that for every \(Q_{0}\in{\cal Q}_{\rm bad}\), at least \(\frac{1}{2}|{\cal T}_{\Delta}^{\pi}(Q_{0})|\) many tubes \({\bf T}\in{\cal T}_{\Delta}^{\pi}(Q_{0})\) satisfy (4.9). Then apply Corollary 2.10 to \({\cal Q}_{\rm bad}\) and the bad parts of \({\cal T}_{\Delta}^{\pi}(Q_{0})\), which are still \((\Delta,s)\)-sets. By Corollary 2.10, we have \(\gtrapprox\Delta^{-2s}\) many \(\Delta\)-tubes in \({\cal T}_{\Delta}\) that satisfy (4.9). But we
observed before that (4.9) only holds for \(\lessapprox\Delta^{C\varepsilon-2s}\) many tubes \(\mathbf{T}\in\mathcal{T}_{\Delta}\). By choosing \(C\) large enough (and \(\delta\) small enough), we obtain a contradiction.
In what follows, the \(C\) in Claim 2 will be absorbed into the \(\lessapprox\) notation. Replace \(\mathcal{Q}\) by \(\overline{\mathcal{Q}}\) and \(\mathcal{T}_{\Delta}^{\pi}(Q)\) by their good subsets without changing notation. All of the properties (G1)-(G5) remain valid, and
\[\sum_{\begin{subarray}{c}Q,Q^{\prime}\in\mathcal{Q}\\ Q\neq Q^{\prime}\end{subarray}}\frac{\mathbb{1}_{\mathcal{T}_{\Delta}^{\pi}(Q) \cap\mathcal{T}_{\Delta}^{\pi}(Q^{\prime})}(\mathbf{T})}{d(Q,Q^{\prime})^{t-s} }\lessapprox\Delta^{2(s-t)},\qquad\mathbf{T}\in\mathcal{T}_{\Delta}^{\pi}(Q_ {0}),Q_{0}\in\mathcal{Q}. \tag{4.10}\]
Now, we will find \(\mathbf{T}_{0}\in\mathcal{T}_{\Delta}\) satisfying
\[|\mathbf{T}_{0}(\mathcal{Q})|:=|\{Q\in\mathcal{Q}:\mathbf{T}_{0}\in\mathcal{T }_{\Delta}^{\pi}(Q)\}|\gtrapprox\Delta^{s-t}. \tag{4.11}\]
Indeed, the average tube works, because of the following: since \(|\mathcal{T}_{\Delta}|\approx\Delta^{-2s},|\mathcal{Q}|\approx\Delta^{-t}\), and \(|\mathcal{T}_{\Delta}^{\pi}(Q)|\approx\Delta^{-s}\) (by (G4), (G1), (G3) respectively), we have
\[\frac{1}{|\mathcal{T}_{\Delta}|}\sum_{\mathbf{T}\in\mathcal{T}_{\Delta}}|\{Q\in\mathcal{Q}:\mathbf{T}\in\mathcal{T}_{\Delta}^{\pi}(Q)\}|=\frac{1}{|\mathcal{T}_{\Delta}|}\sum_{Q\in\mathcal{Q}}|\mathcal{T}_{\Delta}^{\pi}(Q)|\approx\frac{|\mathcal{Q}|\cdot\Delta^{-s}}{\Delta^{-2s}}\approx\Delta^{s-t}.\]
Now, we show that using (4.10) and (4.11), the family \(\mathbf{T}_{0}(\mathcal{Q})\subset\{Q\in\mathcal{Q}:Q\cap\mathbf{T}_{0}\neq \emptyset\}\) contains a \((\Delta,t-s)\)-set, which proves item (P1). Indeed, rewrite (4.10) as
\[\sum_{\begin{subarray}{c}Q,Q^{\prime}\in\mathbf{T}_{0}(\mathcal{Q})\\ Q\neq Q^{\prime}\end{subarray}}\frac{1}{d(Q,Q^{\prime})^{t-s}}\lessapprox \Delta^{2(s-t)}. \tag{4.12}\]
Let
\[\mathbf{T}_{0}^{\prime}(\mathcal{Q}):=\{Q\in\mathbf{T}_{0}(\mathcal{Q}):\sum_ {Q^{\prime}\in\mathbf{T}_{0}(\mathcal{Q})\setminus\{Q\}}d(Q,Q^{\prime})^{s-t }\leq\Delta^{s-t-C\varepsilon}\}. \tag{4.13}\]
By Markov's inequality on (4.12), we have \(|\mathbf{T}_{0}(\mathcal{Q})\setminus\mathbf{T}_{0}^{\prime}(\mathcal{Q})|\lessapprox\Delta^{s-t+C\varepsilon}\). Hence, if \(C\) is chosen large enough, we have by (4.11), \(|\mathbf{T}_{0}^{\prime}(\mathcal{Q})|\geq\frac{1}{2}|\mathbf{T}_{0}(\mathcal{Q})|\gtrapprox\Delta^{s-t}\). By Markov's inequality on (4.13), we have that for all \(Q\in\mathbf{T}_{0}^{\prime}(\mathcal{Q})\) and \(r\in(\Delta,1)\),
\[|\{Q^{\prime}\in\mathbf{T}_{0}(\mathcal{Q}):d(Q,Q^{\prime})\leq r\}|\leq \Delta^{s-t-C\varepsilon}r^{t-s}.\]
Thus, \(\mathbf{T}_{0}^{\prime}(\mathcal{Q})\) is a \((\Delta,t-s)\)-set, which proves (P1).
To get (P2), we use (G2).
\[|\mathcal{T}\cap\mathbf{T}_{0}|\lessapprox\delta^{-s}=\Delta^{-2s}.\]
By (G5), we have
\[|\{(p,T)\in(\mathcal{P}\cap Q)\times\mathcal{T}:T\in\mathcal{T}(p)\cap\mathbf{ T}_{0}\}|\gtrapprox\Delta^{-s-t}. \tag{4.14}\]
Fix \(Q\in\mathbf{T}_{0}(\mathcal{Q})\). Since \(|\mathcal{P}\cap Q|\approx\Delta^{-t}\) by (G1) and \(|\mathcal{T}(p)\cap\mathbf{T}_{0}|\lessapprox\Delta^{-s}\) since \(\mathcal{T}(p)\) is a \((\delta,s)\)-set, we use (4.14) to find a subset \(\mathcal{P}_{Q}\subset\mathcal{P}\cap Q\) with
\[|\mathcal{P}_{Q}|\approx|\mathcal{P}\cap Q|\approx\Delta^{-t}\text{ and }|\mathcal{T}(p)\cap\mathbf{T}_{0}|\approx\Delta^{-s}\text{ for all }p\in\mathcal{P}_{Q}.\]
This verifies (P3). Finally, we get (P4) by \(|\mathcal{P}_{Q}|\geq\Delta^{\mathbf{B}\varepsilon}|\mathcal{P}\cap Q|\) for some constant \(\mathbf{B}\geq 1\) and Proposition 4.3.
### Product-like structure
_This subsection is based on Section A.6 of [18]._
Our goal is to find a product-type structure and apply Proposition 3.1. Choose coordinates such that the \(y\)-axis is in the direction of \(\mathbf{T}_{0}\), and let \(\pi(\mathbf{x},y):=\mathbf{x}\in\mathbb{R}^{d-1}\) denote the orthogonal projection to the orthogonal complement of the \(y\)-axis. Define the function \(\Delta^{-1}(\mathbf{x},y)=(\Delta^{-1}\mathbf{x},y)\). If \(T\subset\mathbf{T}_{0}\) is a \(\delta\)-tube, then \(\Delta^{-1}T\) is roughly a \(\Delta\)-tube: it is contained in some \(C\Delta\)-tube and contains a \(c\Delta\)-tube for some universal constants \(c,C>0\). This technicality will not cause issues in what follows.
For each \(Q\in\mathbf{T}_{0}(\mathcal{Q})\), let \(\mathbf{y}_{Q}\in\Delta\cdot\mathbb{Z}\cap[0,1)\) be a point such that the plane \(y=\mathbf{y}_{Q}\) intersects \(Q\). By (P1), we know that \(\mathbf{Y}=\{\mathbf{y}_{Q}:Q\in\mathbf{T}_{0}(\mathcal{Q})\}\) is a \((\Delta,t-s)\)-set. By (P4), we know that for each \(\mathbf{y}\in\mathbf{Y}\), the set \(\pi(\Delta^{-1}(\mathcal{P}\cap Q))\) contains a \((\Delta,\kappa^{\prime},k)\)-set \(\mathbf{X}_{\mathbf{y}}^{\prime}\) with cardinality \(\gtrapprox\Delta^{-s}\). Let \(\mathbf{X}_{\mathbf{y}}\subset(\Delta\cdot\mathbb{Z})^{d-1}\cap[0,1)^{d-1}\) be the set \(\mathbf{X}_{\mathbf{y}}^{\prime}\) rounded to the nearest multiple of \(\Delta\), and set \(\mathbf{Z}=\bigcup_{\mathbf{y}\in\mathbf{Y}}\mathbf{X}_{\mathbf{y}}\times\{\mathbf{y}\}\).
Now, let \(L=(\Delta\cdot\mathbb{Z})^{d}\cap B(0,\Delta(\sqrt{d}+1))\) and \(\mathcal{T}(\mathbf{Z})=\{\Delta^{-1}T+x:T\in\mathcal{T}\cap\mathbf{T}_{0},x\in L\}\). Clearly, \(|\mathcal{T}(\mathbf{Z})|\lesssim_{d}|\mathcal{T}\cap\mathbf{T}_{0}|\lessapprox\Delta^{-2s}\) by (P2). On the other hand, we show that \(|\mathcal{T}(\mathbf{z})|:=|\{\mathbf{T}\in\mathcal{T}(\mathbf{Z}):\mathbf{z}\in\mathbf{T}\}|\gtrapprox\Delta^{-s}\) for any \(\mathbf{z}=(\mathbf{x},\mathbf{y})\in\mathbf{Z}\). This follows since \(\mathbf{z}=(\mathbf{x},\mathbf{y}_{Q})\) for some \(Q\) and \(\mathbf{x}\in\mathbf{X}_{\mathbf{y}}\). Let \(p\in\mathcal{P}_{Q}\) be such that \(d(\pi(\Delta^{-1}p),\mathbf{x})\leq\Delta\). We know \(d((\pi(\Delta^{-1}p),\mathbf{y}_{Q}),\Delta^{-1}p)\leq\Delta\) since \(Q\) has diameter \(\Delta\), so by the triangle inequality, we have \(d(\Delta^{-1}p,\mathbf{z})\leq(\sqrt{d}+1)\Delta\). Thus, \(\mathcal{T}(\mathbf{z})\) contains \(\{\Delta^{-1}T+x:T\in\mathcal{T}(p)\cap\mathbf{T}_{0}\}\) for some suitable \(x\in L\). By (P3), we get the desired cardinality estimate \(|\mathcal{T}(\mathbf{z})|\gtrapprox\Delta^{-s}\).
Finally, we apply Proposition 3.1 to the sets \(\mathbf{Z}\) and \(\mathcal{T}(\mathbf{Z})\) to obtain a contradiction if \(\varepsilon>0\) is sufficiently small. This proves Theorem 4.2.
## 5 Improved incidence estimates for general sets
In this section, we will prove the following refinement of Theorem 1.8, following Sections 7-9 of [18].
**Theorem 5.1**.: _For any \(0\leq k<d-1\), \(0\leq s<k+1\), \(s<t\leq d\), \(\kappa>0\), there exist \(\varepsilon(s,t,\kappa,k,d)>0\) and \(\eta(s,t,\kappa,k,d)>0\) such that the following holds for all small enough \(\delta\in 2^{-\mathbb{N}}\), depending only on \(s,t,\kappa,k,d\). Let \(\mathcal{P}\subset\mathcal{D}_{\delta}\) be a \((\delta,t,\delta^{-\varepsilon})\)-set with \(\cup\mathcal{P}\subset[0,1)^{d}\), and let \(\mathcal{T}\subset\mathcal{T}^{\delta}\) be a family of \(\delta\)-tubes. Assume that for every \(p\in\mathcal{P}\), there exists a \((\delta,s,\delta^{-\eta},0)\) and \((\delta,\kappa,\delta^{-\eta},k)\)-set \(\mathcal{T}(p)\subset\mathcal{T}\) with \(|\mathcal{T}(p)|=M\) such that \(T\cap p\neq\emptyset\) for all \(T\in\mathcal{T}(p)\). Then \(|\mathcal{T}|\geq M\delta^{-s-\varepsilon}\)._
The original theorem follows from taking \(\varepsilon=\eta\) and pigeonholing, since \(M\in(\delta^{-s+\varepsilon},\delta^{-d})\).
Proof.: Before anything else, we state the dependencies of the parameters: \(\varepsilon_{0}(s,t,\kappa,k,d)\), \(\varepsilon(\varepsilon_{0},s,t,\kappa,k,d),T(\varepsilon),\tau(s,t,\varepsilon), \eta(\varepsilon_{0},\tau)\).
First, choose \(T=T(\varepsilon)\) such that \(\frac{2\log T}{T}\leq\varepsilon\). By Lemma 2.17 we may find a subset \(\mathcal{P}^{\prime}\subset\mathcal{P}\) with \(|\mathcal{P}^{\prime}|\geq\delta^{\varepsilon}|\mathcal{P}|\) that is \(\{2^{-jT}\}_{j=1}^{m}\)-uniform for \(2^{-mT}=\delta\) with
associated sequence \(\{N_{j}\}_{j=1}^{m}\). Thus, \(\mathcal{P}^{\prime}\) is a \((\delta,t,\delta^{-2\varepsilon})\)-set. Replacing \(\mathcal{P}\) with \(\mathcal{P}^{\prime}\) and \(\varepsilon\) with \(\frac{\varepsilon}{2}\), we may assume from the start that \(\mathcal{P}\) is \(\{2^{-jT}\}_{j=1}^{m}\)-uniform.
Let \(f\) be the corresponding branching function. Since \(\mathcal{P}\) is a \((\delta,t,\delta^{-\varepsilon})\)-set, we have \(f(x)\geq tx-\varepsilon m\) for all \(x\in[0,m]\).
Let \(\{[c_{j},d_{j}]\}_{j=1}^{n}\) be the intervals from Proposition 2.14 applied with parameters \(s,t,\varepsilon\), corresponding to a sequence \(0<\delta=\Delta_{n}<\Delta_{n-1}<\cdots<\Delta_{1}<\Delta_{0}=1\). We can partition \(\{0,1,\cdots,n-1\}=\mathcal{S}\cup\mathcal{B}\), "structured" and "bad" scales such that:
* \(\frac{\Delta_{j}}{\Delta_{j+1}}\geq\delta^{-\tau}\) for all \(j\in\mathcal{S}\), and \(\prod_{j\in\mathcal{B}}(\Delta_{j}/\Delta_{j+1})\leq\delta^{-\varepsilon}\);
* For each \(j\in\mathcal{S}\) and \(\mathbf{p}\in\mathcal{D}_{\Delta_{j}}(\mathcal{P})\), the set \(\mathcal{P}_{j}:=S_{\mathbf{p}}(\mathcal{P}\cap\mathbf{p})\) is either 1. a \((\Delta_{j+1}/\Delta_{j},t_{j},(\Delta_{j}/\Delta_{j+1})^{\varepsilon},(\Delta_{j}/\Delta_{j+1})^{\varepsilon})\)-regular set, where \(t_{j}\in(s,d]\); 2. a \((\Delta_{j+1}/\Delta_{j},s,(\Delta_{j}/\Delta_{j+1})^{\varepsilon})\)-set.
* \(\prod_{j\in S}(\Delta_{j}/\Delta_{j+1})^{t_{j}}\geq|\mathcal{P}|\cdot\prod_{j \in\mathcal{B}}(\Delta_{j+1}/\Delta_{j})^{d}\geq|\mathcal{P}|\delta^{O_{s,t,d }(\varepsilon)}\).
Apply Proposition 2.14 and \(\frac{\Delta_{j}}{\Delta_{j+1}}\geq\delta^{-\tau}\) to get a family of tubes \(\mathcal{T}_{\mathbf{p}}\subset\mathcal{T}^{\Delta_{j+1}/\Delta_{j}}\) with the property that \((S_{\mathbf{p}}(\mathcal{P}\cap\mathbf{p}),\mathcal{T}_{\mathbf{p}})\) is a \((\Delta_{j+1}/\Delta_{j},s,C_{j}^{1},\kappa,C_{j}^{2},M_{\mathbf{p}})\)-nice configuration for some \(C_{j}^{1},C_{j}^{2}\lessapprox_{\delta}(\Delta_{j+1}/\Delta_{j})^{-\tau^{-1}\eta}\) and
\[\frac{|\mathcal{T}_{0}|}{M}\gtrapprox\prod_{j=0}^{n-1}\frac{|\mathcal{T}_{\mathbf{p}_{j}}|}{M_{\mathbf{p}_{j}}}.\]
Let \(\mathcal{S}_{1}=\{j\in S:t_{j}\geq\frac{s+t}{2}\}\) and \(\mathcal{S}_{2}=\mathcal{S}\setminus\mathcal{S}_{1}\). Then
\[\prod_{j\in\mathcal{S}_{1}}(\Delta_{j}/\Delta_{j+1})^{t_{j}}\geq|\mathcal{P}|\delta^{O_{s,t,d}(\varepsilon)}\prod_{j\in\mathcal{S}_{2}}(\Delta_{j}/\Delta_{j+1})^{-\frac{s+t}{2}}\geq\delta^{-\frac{t-s}{2}+O_{s,t,d}(\varepsilon)}.\]
For \(j\in\mathcal{S}_{1}\) we apply Theorem 4.2 with parameters \(s,\frac{s+t}{2}\), and for \(j\in\mathcal{S}_{2}\) we apply Corollary 2.10. If \(\varepsilon_{0}(s,t,\kappa,k,d)\) is the \(\varepsilon\) from Theorem 4.2, then for \(\tau^{-1}\eta<\varepsilon_{0}\), we get
\[\frac{|\mathcal{T}_{0}|}{M}\gtrapprox\prod_{j\in\mathcal{S}_{1}}\left(\frac{\Delta_{j+1}}{\Delta_{j}}\right)^{-s-\varepsilon_{0}}\cdot\prod_{j\in\mathcal{S}_{2}}\left(\frac{\Delta_{j+1}}{\Delta_{j}}\right)^{-s+O(\varepsilon)}\geq\delta^{-s(1-\varepsilon)-(\frac{t-s}{2}+O_{s,t,d}(\varepsilon))\varepsilon_{0}+O(\varepsilon)}\geq\delta^{-s-\varepsilon}\]
as long as \(\varepsilon\) is taken small enough in terms of \(\varepsilon_{0},s,t,d\).
## 6 Sets contained in an \((r_{0},k)\)-plate
We restate Theorem 1.11.
**Theorem 6.1**.: _For any \(0\leq k<d-1\), \(0\leq s<k+1\), \(\max(s,k)<t\leq d\), \(\kappa>0\), \(r_{0}\leq 1\), there exists \(\varepsilon(s,t,\kappa,k,d)>0\) such that the following holds for all small enough \(\delta/r_{0}\in 2^{-\mathbb{N}}\cap(0,\delta_{0})\), with \(\delta_{0}\) depending only on \(s,t,\kappa,k,d\). Let \(H\) be a \((r_{0},k+1)\)-plate, \(\mathcal{P}\subset\mathcal{D}_{\delta}\cap H\) be a \((\delta,t,(\delta/r_{0})^{-\varepsilon})\)-set with \(\cup\mathcal{P}\subset[0,1)^{d}\), and let \(\mathcal{T}\subset\mathcal{T}^{\delta}\cap H\) be a family of \(\delta\)-tubes. Assume that for every \(p\in\mathcal{P}\), there exists a set \(\mathcal{T}(p)\subset\mathcal{T}\) such that:_
* \(T\cap p\neq\emptyset\) _for all_ \(T\in\mathcal{T}(p)\)_;_
* \(\mathcal{T}(p)\) _is a_ \((\delta,s,(\delta/r_{0})^{-\varepsilon}r_{0}^{k-s},0)\)_-set down from scale_ \(r_{0}\)_;_
* \(\mathcal{T}(p)\) _is a_ \((\delta,\kappa,(\delta/r_{0})^{-\varepsilon}r_{0}^{-\kappa},k)\)_-set._
_Then \(|\mathcal{T}|\geq(\frac{\delta}{r_{0}})^{-\varepsilon}\delta^{-2s}r_{0}^{2(s- k)}\)._
### Multiscale analysis
We will use Theorem 1.8 to prove Theorem 1.11. Let \(S_{H}\) be the dilation sending \(H\) to \([0,1]^{d}\). Then \(\mathcal{P},\mathcal{T}(p)\), and \(\mathcal{T}\) become deformed under \(S_{H}\), but they satisfy the following statistics assumptions for \(r\in[\frac{\delta}{r_{0}},1]\):
\[|\mathcal{P}\cap S_{H}^{-1}(Q)|\leq\left(\frac{\delta}{r_{0}}\right)^{- \varepsilon}\cdot|\mathcal{P}|\cdot r^{t},\qquad Q\in\mathcal{D}_{r}(\mathbb{ R}^{d}), \tag{6.15}\]
\[|\mathcal{T}(p)\cap S_{H}^{-1}(\mathbf{T})|\leq\left(\frac{\delta}{r_{0}} \right)^{-\varepsilon}\cdot|\mathcal{T}(p)|\cdot r^{s},\qquad\mathbf{T}\quad r \text{-tube}, \tag{6.16}\]
\[|\mathcal{T}(p)\cap S_{H}^{-1}(W)|\leq\left(\frac{\delta}{r_{0}}\right)^{- \varepsilon}\cdot|\mathcal{T}(p)|\cdot r^{\kappa},\qquad W\quad(r,k+1)\text{- plate}. \tag{6.17}\]
To prove (6.15), observe that \(S_{H}^{-1}(Q)\) is contained in an \(r\)-ball, and then we use that \(\mathcal{P}\) is a \((\delta,t,(\delta/r_{0})^{-\varepsilon},0)\)-set.
To prove (6.16), observe that \(S_{H}^{-1}(\mathbf{T})\) is contained in a box with \(k\) sides of length \(r\) and \(d-k\) sides of length \(rr_{0}\). This box can be covered by \(\sim r_{0}^{-k}\) many \(rr_{0}\)-balls. Finally, use that \(\mathcal{T}(p)\) is a \((\delta,s,(\delta/r_{0})^{-\varepsilon}r_{0}^{k-s},0)\)-set.
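Spelling out the resulting count (a short verification, identifying tubes through \(p\) with their directions as in the text): each \(rr_{0}\)-ball \(B\) satisfies \(|\mathcal{T}(p)\cap B|\leq(\delta/r_{0})^{-\varepsilon}r_{0}^{k-s}\,|\mathcal{T}(p)|\,(rr_{0})^{s}\), since \(rr_{0}\in[\delta,r_{0}]\) and \(\mathcal{T}(p)\) is a \((\delta,s,(\delta/r_{0})^{-\varepsilon}r_{0}^{k-s},0)\)-set down from scale \(r_{0}\). Summing over the \(\sim r_{0}^{-k}\) balls,

\[|\mathcal{T}(p)\cap S_{H}^{-1}(\mathbf{T})|\lesssim r_{0}^{-k}\cdot\Big(\frac{\delta}{r_{0}}\Big)^{-\varepsilon}r_{0}^{k-s}\,|\mathcal{T}(p)|\,(rr_{0})^{s}=\Big(\frac{\delta}{r_{0}}\Big)^{-\varepsilon}|\mathcal{T}(p)|\,r^{s},\]

which is exactly (6.16).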
To prove (6.17), observe that \(S_{H}^{-1}(W)\) is contained in an \((rr_{0},k)\)-plate, and then use that \(\mathcal{T}(p)\) is a \((\delta,\kappa,(\delta/r_{0})^{-\varepsilon}r_{0}^{-\kappa},k)\)-set.
Using these observations, we obtain the following refinement of Proposition 2.13. We use \((\delta,s,C_{1}r_{0}^{k-s},\kappa,C_{2},M)\)-nice configuration down from scale \(r_{0}\) to indicate that \(\mathcal{T}(p)\) is a \((\delta,s,C_{1}r_{0}^{k-s},0)\)-set down from scale \(r_{0}\).
**Proposition 6.2**.: _Fix dyadic numbers \(0<\delta^{\prime}=\frac{\delta}{r_{0}}<\Delta\leq 1\). Let \((\mathcal{P}_{0},\mathcal{T}_{0})\) be a \((\delta,s,C_{1}r_{0}^{k-s},\kappa,C_{2},M)\)-nice configuration down from scale \(r_{0}\), and assume \(\mathcal{P}_{0}\subset H\) for some \((r_{0},k)\)-plate \(H\). Then there exist refinements \(\mathcal{P}\subset\mathcal{P}_{0}\), \(\mathcal{T}(p)\subset\mathcal{T}_{0}(p),p\in\mathcal{P}\), and \(\mathcal{T}_{\Delta}(Q)\subset\mathcal{T}^{\Delta}\) such that denoting \(\mathcal{T}_{\Delta}=\cup_{Q\in\mathcal{D}_{\Delta}(S_{H}(\mathcal{P}))} \mathcal{T}_{\Delta}(Q)\) and \(\mathcal{T}=\cup_{p\in\mathcal{P}}\mathcal{T}(p)\) the following hold:_
* \(|\mathcal{D}_{\Delta}(S_{H}(\mathcal{P}))|\approx_{\Delta}|\mathcal{D}_{\Delta} (S_{H}(\mathcal{P}_{0}))|\) _and_ \(|S_{H}(\mathcal{P})\cap Q|\approx_{\Delta}|S_{H}(\mathcal{P}_{0})\cap Q|\) _for all_ \(Q\in\mathcal{D}_{\Delta}(\mathcal{P})\)_._
2. _We have_ \(|\mathcal{T}\cap\mathbf{T}|\lessapprox_{\Delta}\frac{|\mathcal{T}_{0}|}{|\mathcal{T}_{\Delta}|}\) _for all_ \(\mathbf{T}\in\mathcal{T}_{\Delta}\)_._
3. \((\mathcal{D}_{\Delta}(S_{H}(\mathcal{P})),\mathcal{T}_{\Delta})\) _is_ \((\Delta,s,C^{1}_{\Delta},\kappa,C^{2}_{\Delta},M_{\Delta})\)_-nice for some_ \(C^{1}_{\Delta}\approx_{\Delta}C_{1}\)_,_ \(C^{2}_{\Delta}\approx_{\Delta}C_{2}\)_, and_ \(M_{\Delta}\geq 1\)_._
4. _For all_ \(\mathbf{T}\in\mathcal{T}_{\Delta}(Q)\)_, we have_ \[|\{(p,T)\in\mathcal{P}\times\mathcal{T}:T\in\mathcal{T}(p)\text{ and }T\subset S^{-1}_{H}(\mathbf{T})\}|\gtrapprox_{\Delta}\frac{M\cdot|S_{H}(\mathcal{P})\cap Q|}{|\mathcal{T}_{\Delta}(Q)|}.\]
5. _For each_ \(Q\in\mathcal{D}_{\Delta}(S_{H}(\mathcal{P}))\)_, there exist_ \(C^{1}_{Q}\approx_{\Delta}C_{1}\)_,_ \(C^{2}_{Q}\approx_{\Delta}C_{2}\)_,_ \(M_{Q}\geq 1\)_, a subset_ \(\mathcal{P}_{Q}\subset\mathcal{P}\cap Q\) _with_ \(|\mathcal{P}_{Q}|\gtrapprox_{\Delta}|\mathcal{P}\cap Q|\) _and a family of tubes_ \(\mathcal{T}_{Q}\subset\mathcal{T}^{\delta/\Delta}\) _such that_ \((S^{-1}_{H}\circ S_{Q}(S_{H}(\mathcal{P})\cap Q),\mathcal{T}_{Q})\) _is_ \((\delta/\Delta,s,C^{1}_{Q}r^{k-s}_{0},\kappa,C^{2}_{Q},M_{Q})\)_-nice down from scale_ \(r_{0}\)_._
_Furthermore, the families \(\mathcal{T}_{Q}\) can be chosen so that_
\[\frac{|\mathcal{T}_{0}|}{M}\gtrapprox_{\Delta}\frac{|\mathcal{T}_{\Delta}|}{M_{\Delta}}\cdot\left(\max_{Q\in\mathcal{D}_{\Delta}(S_{H}(\mathcal{P}))}\frac{|\mathcal{T}_{Q}|}{M_{Q}}\right). \tag{6.18}\]
Proof.: The proof will involve many dyadic pigeonholing steps.
**Step 1: construct \(\mathcal{T}_{\Delta}(Q)\).** For a given \(Q\in\mathcal{D}_{\Delta}(S_{H}(\mathcal{P}_{0})):=\mathcal{Q}_{0}\), we claim that we can find a subset \(\mathcal{P}_{Q}\subset\mathcal{P}_{0}\cap S^{-1}_{H}(Q)\) with \(|\mathcal{P}_{Q}|\approx_{\Delta}|\mathcal{P}_{0}\cap S^{-1}_{H}(Q)|\) and a family of dyadic \(\Delta\)-tubes \(\overline{\mathcal{T}}_{\Delta}(Q)\) intersecting \(Q\) such that the following holds:
1. \(\overline{\mathcal{T}}_{\Delta}(Q)\) is a \((\Delta,s,C^{1}_{\Delta},0)\)-set and \((\Delta,\kappa,C^{2}_{\Delta},k)\)-set for some \(C^{1}_{\Delta}\approx_{\Delta}C_{1}\) and \(C^{2}_{\Delta}\approx_{\Delta}C_{2}\).
2. there exists a constant \(H_{Q}\approx_{\Delta}M\cdot|\mathcal{P}_{Q}|/|\overline{\mathcal{T}}_{\Delta }(Q)|\) such that \[|\{(p,T)\in\mathcal{P}_{Q}\times\mathcal{T}_{0}:T\in\mathcal{T}_{0}(p)\text{ and }T\subset S^{-1}_{H}(\mathbf{T})\}|\gtrapprox H_{Q},\qquad\mathbf{T}\in \overline{\mathcal{T}}_{\Delta}(Q).\]
This claim generalizes [18, Proposition 4.1] and relies on the same dyadic pigeonholing steps; for brevity, we only state these steps and refer the reader to [18] for the detailed proof. (We essentially follow the same proof for (T2), and we introduce a nice shortcut to derive (T1) from (T2).) Let \(\mathcal{T}_{\Delta}(Q)\subset\mathcal{T}^{\Delta}\) be a minimal finitely overlapping cover of \(S_{H}(\mathcal{T}_{Q}):=\cup_{p\in\mathcal{P}_{0}\cap Q}S_{H}(\mathcal{T}_{0}(p))\) by \(\Delta\)-tubes. For \(p\in\mathcal{P}_{0}\cap Q\), define
\[\mathcal{T}_{\Delta,j}(p)=\{\mathbf{T}\in\mathcal{T}_{\Delta}(Q):2^{j-1}<|\{T\in\mathcal{T}_{0}(p):T\subset S^{-1}_{H}(\mathbf{T})\}|\leq 2^{j}\}.\]
Since \(|\mathcal{T}_{\Delta}(Q)|\lesssim 100\Delta^{-2(d-1)}\) and \(M\lesssim\sum_{j}2^{j}\cdot|\mathcal{T}_{\Delta,j}(p)|\), we in fact have
\[M\lesssim\sum_{M\Delta^{2(d-1)}/200\leq 2^{j}\leq M}2^{j}\cdot|\mathcal{T}_{\Delta,j} (p)|\]
Thus, by dyadic pigeonholing, there exists \(j=j(p)\) such that \(2^{j}\cdot|\mathcal{T}_{\Delta,j}(p)|\approx_{\Delta}M\). Another dyadic pigeonholing allows us to find \(\mathcal{P}_{Q}\subset\mathcal{P}_{0}\cap Q\) such that \(j(p)\) is constant for \(p\in\mathcal{P}_{Q}\). This is the desired refinement \(\mathcal{P}_{Q}\) of \(\mathcal{P}_{0}\cap Q\). Finally, let
\[\mathcal{T}_{\Delta,i}(Q):=\{\mathbf{T}\in\mathcal{T}_{\Delta}(Q):2^{i-1}<|\{p\in\mathcal{P}_{Q}:\mathbf{T}\in\mathcal{T}_{\Delta,j}(p)\}|\leq 2^{i}\}.\]
Then by a similar dyadic pigeonholing (for calculations, see [18, Proposition 4.1]), there is \(i\) such that
\[\frac{1}{200}|\mathcal{P}_{Q}|\Delta^{d-1}\leq 2^{i}\leq|\mathcal{P}_{Q}|\text{ and }2^{i+j}\cdot| \mathcal{T}_{\Delta,i}(Q)|\approx_{\Delta}M\cdot|\mathcal{P}_{Q}|. \tag{6.19}\]
Finally, we define \(\overline{\mathcal{T}}_{\Delta}(Q):=\mathcal{T}_{\Delta,i}(Q)\), which is the desired refinement of \(\mathcal{T}_{\Delta}(Q)\).
We check (T2) holds with \(H_{Q}=2^{i+j}\), which satisfies \(H_{Q}\approx_{\Delta}M\cdot|\mathcal{P}_{Q}|/|\overline{\mathcal{T}}_{\Delta}(Q)|\) by (6.19). With this choice of \(H_{Q}\), fix \(\mathbf{T}\in\overline{\mathcal{T}}_{\Delta}(Q)\) and note that
\[|\{(p,T)\in\mathcal{P}_{Q}\times\mathcal{T}_{0}:T\in\mathcal{T}_{0}(p),T\subset S_{H}^{-1}(\mathbf{T})\}|=\sum_{p\in\mathcal{P}_{Q}}|\{T\in\mathcal{T}_{0}(p):T\subset S_{H}^{-1}(\mathbf{T})\}|\\ \geq 2^{j-1}|\{p\in\mathcal{P}_{Q}:\mathbf{T}\in\mathcal{T}_{\Delta,j}(p)\}|\geq 2^{i+j-2}\gtrsim H_{Q}.\]
To check (T1), we first pick an \(r\)-tube \(\mathbf{T}_{r}\) with \(r\geq\Delta\). Then by (T2) and (6.16),
\[|\{\mathbf{T}\in\overline{\mathcal{T}}_{\Delta}(Q):\mathbf{T}\subset\mathbf{T}_{r}\}|\lesssim\frac{1}{H_{Q}}|\{(p,T)\in\mathcal{P}_{Q}\times\mathcal{T}_{0}:T\in\mathcal{T}_{0}(p),T\subset S_{H}^{-1}(\mathbf{T}_{r})\}|\\ \lesssim\frac{1}{H_{Q}}|\mathcal{P}_{Q}|\cdot C_{1}Mr^{s}\lessapprox_{\Delta}C_{1}|\overline{\mathcal{T}}_{\Delta}(Q)|r^{s}.\]
Thus, \(\overline{\mathcal{T}}_{\Delta}(Q)\) is a \((\Delta,s,C_{\Delta}^{1},0)\)-set with \(C_{\Delta}^{1}\approx_{\Delta}C_{1}\). Doing the same calculation with an \((r,k+1)\)-plate instead of an \(r\)-tube, we get that \(\overline{\mathcal{T}}_{\Delta}(Q)\) is a \((\Delta,\kappa,C_{\Delta}^{2},k)\)-set with \(C_{\Delta}^{2}\approx_{\Delta}C_{2}\). This proves (T1) and thus the claim.
**Step 2: uniformity of \(|\mathcal{T}_{0}\cap\mathbf{T}|\).** By the pigeonhole principle, we can find \(\overline{M}_{\Delta}\geq 1\) and a subset \(\mathcal{Q}\subset\mathcal{Q}_{0}\) with \(|\mathcal{Q}|\approx_{\Delta}|\mathcal{Q}_{0}|\) such that \(|\overline{\mathcal{T}}_{\Delta}(Q)|\sim\overline{M}_{\Delta}\) for all \(Q\in\mathcal{Q}\). Write
\[\overline{\mathcal{T}}_{\Delta}=\bigcup_{Q\in\mathcal{Q}}\overline{\mathcal{T }}_{\Delta}(Q).\]
Next, by another dyadic pigeonholing, we can find a subset \(\overline{\mathcal{T}}_{\Delta}^{\prime}\subset\overline{\mathcal{T}}_{\Delta}\) such that \(I(\mathcal{Q},\overline{\mathcal{T}}_{\Delta}^{\prime})\gtrapprox I(\mathcal{Q}, \overline{\mathcal{T}}_{\Delta})\) and \(|\mathcal{T}_{0}\cap\mathbf{T}|\sim N_{\Delta}\) for all \(\mathbf{T}\in\overline{\mathcal{T}}_{\Delta}^{\prime}\). Also, \(|\overline{\mathcal{T}}_{\Delta}(Q)|\lesssim\overline{M}_{\Delta}\) for all \(Q\in\mathcal{Q}\). Thus, we can find \(\mathcal{Q}^{\prime}\subset\mathcal{Q}\) with \(|\mathcal{Q}^{\prime}|\approx_{\Delta}|\mathcal{Q}|\), and for each \(Q\in\mathcal{Q}^{\prime}\) a subset \(\mathcal{T}_{\Delta}(Q)\) of cardinality \(\approx\overline{M}_{\Delta}\), such that \(\mathcal{T}_{\Delta}(Q)\subset\overline{\mathcal{T}}_{\Delta}^{\prime}\). In other words,
\[|\mathcal{T}_{0}\cap\mathbf{T}|\sim N_{\Delta}\text{ for }\mathbf{T}\in \mathcal{T}_{\Delta}(Q).\]
Thus, we obtain item (ii).
\[|\mathcal{T}_{0}|\geq|\mathcal{T}_{\Delta}|\cdot\min_{\mathbf{T}\in\mathcal{T }_{\Delta}}|\mathcal{T}_{0}\cap\mathbf{T}|\sim|\mathcal{T}_{\Delta}|\cdot N_{ \Delta}. \tag{6.20}\]
Reduce the families \(\mathcal{T}_{\Delta}(Q)\) so that their cardinality is \(M_{\Delta}:=\min\{|\mathcal{T}_{\Delta}(Q)|:Q\in\mathcal{Q}\}\approx_{\Delta}\overline{M}_{\Delta}\). By (T1), \(\mathcal{T}_{\Delta}(Q)\) remains a \((\Delta,s,C_{\Delta}^{1},0)\) and \((\Delta,\kappa,C_{\Delta}^{2},k)\)-set with \(C_{\Delta}^{1}\approx_{\Delta}C_{1}\) and \(C_{\Delta}^{2}\approx_{\Delta}C_{2}\).
Finally, define
\[\mathcal{P}=\bigcup_{Q\in\mathcal{Q}}\mathcal{P}_{Q},\]
where \({\cal Q}\) is the latest refinement of \({\cal Q}_{0}\). Since \(|{\cal P}_{Q}|\approx_{\Delta}|{\cal P}_{0}\cap S_{H}^{-1}(Q)|\), we get that item (i) holds.
For \(p\in{\cal P}_{Q}={\cal P}\cap Q\), \(Q\in{\cal Q}\), define
\[{\cal T}(p)=\bigcup_{{\bf T}\in{\cal T}_{\Delta}(Q)}({\cal T}_{0}(p)\cap{\bf T} ),{\cal T}=\bigcup_{p\in{\cal P}}{\cal T}(p),{\cal T}_{\Delta}=\bigcup_{Q\in{ \cal Q}}{\cal T}_{\Delta}(Q).\]
Thus, \(({\cal D}_{\Delta}(S_{H}({\cal P})),{\cal T}_{\Delta})=({\cal Q},{\cal T}_{\Delta})\) is a \((\Delta,s,C_{\Delta}^{1},\kappa,C_{\Delta}^{2},M_{\Delta})\)-nice configuration, establishing item (iii). To summarize, in this step we refined \({\cal Q}\) and \({\cal T}_{\Delta}(Q)\) for \(Q\in{\cal Q}\), so (T2)/(iv) still holds (with the same \(H_{Q}\) and a weaker implied constant).
**Step 3: uniformity of \({\cal T}(p)\) and construct \({\cal T}_{Q}\).** This step will be devoted to verifying (v) and (6.18). We will not change \({\cal P},{\cal T}\), or \({\cal T}_{\Delta}\).
Fix \(Q\in{\cal Q}\), and let \({\cal P}_{Q}={\cal P}\cap S_{H}^{-1}(Q)\). Define
\[{\cal T}(Q)=\bigcup_{p\in{\cal P}_{Q}}{\cal T}(p).\]
By dyadic pigeonholing and (T2), we can find a \(\approx_{\Delta}\)-comparable subset of \({\cal P}_{Q}\) (which we keep denoting \({\cal P}_{Q}\)) such that
\[|{\cal T}(p)|\approx_{\Delta}M,\qquad p\in{\cal P}_{Q}.\]
Next,
\[|{\cal T}(Q)|\leq\sum_{{\bf T}\in{\cal T}_{\Delta}(Q)}|{\cal T}\cap{\bf T}| \lessapprox_{\Delta}M_{\Delta}\cdot N_{\Delta}. \tag{6.21}\]
For a given \(p\in{\cal P}_{Q}\), we consider the tube packet \({\mathbb{U}}(p):={\cal T}(p)\cap S_{H}^{-1}(Q)\) (discarding duplicate tubelets). Each tubelet \(u\in{\mathbb{U}}(p)\) lies in at most \(\Delta^{-2(d-1)}\) many tubes of \({\cal T}(p)\), so by dyadic pigeonholing, we can refine \({\cal T}(p)\) by a \(\log\Delta^{-1}\) factor to ensure that each tubelet \(u\in{\mathbb{U}}(p)\) lies in \(\sim m(p)\) many tubes of \({\cal T}(p)\), and there are \(M(p)\approx_{\Delta}\frac{M}{m(p)}\) many distinct tubelets through \(p\). By refining \({\cal P}_{Q}\) by a \((\log\Delta^{-1})\)-factor, we may assume \(m(p)\approx m_{Q}\) (and hence \(M(p)\approx M_{Q}:=M/m_{Q}\)) for each \(p\in{\cal P}_{Q}\). Now, define
\[{\cal P}^{Q}:=S_{H}^{-1}\circ S_{Q}\circ S_{H}({\cal P}_{Q})\mbox{ and }{\cal T}_{Q}:=\bigcup_{p\in{\cal P}_{Q}}S_{H}^{-1}\circ S_{Q}\circ S_{H}({ \mathbb{U}}(p)).\]
Since tubelets are essentially distinct and each tubelet in any \({\mathbb{U}}(p)\) corresponds to \(\approx m_{Q}\) many tubes in \({\cal T}(Q)\), we obtain:
\[|{\cal T}(Q)|\gtrapprox\Delta\left|\bigcup_{p\in{\cal P}_{Q}}{\mathbb{U}}(p) \right|\cdot m_{Q}\gtrsim|{\cal T}_{Q}|\cdot\frac{M}{M_{Q}}. \tag{6.22}\]
Then (6.18) will follow by combining (6.20), (6.21), and (6.22).
We finally check \(({\cal P}^{Q},{\cal T}_{Q})\) is a \((\delta/\Delta,s,C_{Q}^{1}r_{0}^{k-s},\kappa,C_{Q}^{2},M_{Q})\)-nice configuration down from scale \(r_{0}\). First, for any \(\overline{\delta}<r<r_{0}\), we have for any \((r,0)\)-plank \(H\) in \(S^{d-1}\),
\[|\sigma({\cal T}_{Q})\cap H|\sim_{\Delta}\frac{1}{m_{Q}}|\sigma({\cal T}(p)) \cap H|\lessapprox_{\Delta}\frac{1}{m_{Q}}\cdot C\cdot M\cdot r^{s}=C\cdot M_{ Q}\cdot r^{s}.\]
Thus, \(\sigma(\mathcal{T}_{Q})\) is a \((\overline{\delta},s,C_{Q}^{1},0)\)-set down from scale \(r_{0}\) with \(C_{Q}^{1}\approx_{\Delta}C_{1}\). Similarly, \(\sigma(\mathcal{T}_{Q})\) is a \((\overline{\delta},\kappa,C_{Q}^{2},k)\)-set with \(C_{Q}^{2}\approx_{\Delta}C_{2}\). This shows item (v) and thus completes the proof of the Proposition.
### Good multiscale decomposition
The idea is to apply Proposition 6.2, then apply Theorem 1.8 to bound \(|\mathcal{T}_{\Delta}|\) and Corollary 6.5 to bound \(|\mathcal{T}_{Q}|\). Unfortunately, while we use pigeonholing to ensure that \(\mathcal{D}_{\Delta}(S_{H}(\mathcal{P}))\) is a \((\Delta,t)\)-set, we don't know that \(S_{H}^{-1}\circ S_{Q}(S_{H}(\mathcal{P})\cap Q)\) is a \((\frac{\delta}{\Delta},t)\)-set. In fact, we won't show this statement, but rather a slightly weaker statement that is good enough. For this, a good choice of \(\Delta\) based on the branching structure of \(\mathcal{P}\) is needed.
First, we explain the pigeonholing preliminaries.
**Lemma 6.3**.: _Given \(P\subset H_{r}\), a \((r_{0},k)\)-plane, there is a subset \(P^{\prime}\subset P\) with \(|P^{\prime}|_{\delta}\gtrsim_{d}(\log(\frac{r_{0}}{\delta}))^{-1}|P|_{\delta}\) such that \(|Q\cap S_{H}(P^{\prime})|\) is constant up to a factor of \(2\) for all \(Q\in\mathcal{D}_{\delta/r_{0}}(S_{H}(P^{\prime}))\)._
Proof.: Let \(f(N)=\sum\{|P\cap S_{H}^{-1}(Q)|_{\delta}:Q\in\mathcal{D}_{\delta/r_{0}}([0,1 ]^{d}),|P\cap S_{H}^{-1}(Q)|_{\delta}\in[N,2N]\}\). Then \(\sum_{N\text{\rm\leavevmode\nobreak\ \rm dyadic}}f(N)=|P|_{\delta}\). For each \(N\), either \(f(N)=0\) or \(N\leq f(N)\leq(r_{0}/\delta)^{d}\cdot N\). Hence, if \(N_{0}\) is the largest \(N\) for which \(f(N)>0\), we get \(f(N_{0})\geq N_{0}>\sum_{M<N_{0}(\delta/r_{0})^{d}/100\text{\rm\leavevmode \nobreak\ \rm dyadic}}f(M)\). Thus, we have
\[\sum_{N_{0}(\delta/r_{0})^{d}/100<M\leq N_{0}\ \text{dyadic}}f(M)>\frac{1}{2}|P|_{\delta}.\]
Thus, by dyadic pigeonholing, there exists \(M\in(N_{0}(\delta/r_{0})^{d}/100,N_{0}]\) such that \(f(M)\geq\frac{1}{20d}(\log(\frac{r_{0}}{\delta}))^{-1}|P|_{\delta}\). Taking \(P^{\prime}=P\cap S_{H}^{-1}\big(\bigcup\{Q\in\mathcal{D}_{\delta/r_{0}}([0,1]^{d}):|P\cap S_{H}^{-1}(Q)|_{\delta}\in[M,2M]\}\big)\) gives the desired set.
The next step is to make \(S_{H}^{-1}\circ S_{Q}(S_{H}(\mathcal{P})\cap Q)\) satisfy a \(t^{\prime}\)-dimensional spacing condition with \(t^{\prime}\) just slightly less than \(t\), for all \(Q\in\mathcal{D}_{\Delta}(S_{H}(\mathcal{P}))\) at a certain scale \(\Delta\). To do so, we need the following lemma.
**Lemma 6.4**.: _Fix \(C,\varepsilon>0\), and let \(\frac{\delta}{r_{0}}=\Delta^{m}\) and \(P\subset H_{r}\) be a \((\delta,t,C,0)\)-set in \(H_{r}\), a \((r_{0},k)\)-plane. Let \(L=\log(\frac{r_{0}}{\delta})\cdot(\log(1/\Delta))^{m}\). If \(t^{\prime}<\frac{t-d\varepsilon}{1-\varepsilon}\), then there exists \(m\varepsilon\leq k\leq m\) and a subset \(P^{\prime}\subset P\) with \(|P^{\prime}|\geq L^{-1}|P|\) such that for any \(k\leq j\leq m\), \(Q\in\mathcal{D}_{\Delta^{k}}(P^{\prime})\), and \(R\in\mathcal{D}_{\Delta^{j}}(P^{\prime})\cap Q\), we have_
\[|P^{\prime}\cap S_{H}^{-1}(R)|\leq|P^{\prime}\cap S_{H}^{-1}(Q)|\cdot\Delta^{( j-k)t^{\prime}}, \tag{6.23}\]
_and for \(\delta\leq r\leq\frac{\delta}{r_{0}}\) and a ball \(B_{r}\), we have_
\[|P^{\prime}\cap B_{r}|\leq C\cdot L\cdot|P^{\prime}\cap S_{H}^{-1}(Q)|\left( \frac{r}{\Delta^{k}}\right)^{t^{\prime}}. \tag{6.24}\]
Proof.: Throughout this proof we will not distinguish between \(m\varepsilon\) and \(\lceil m\varepsilon\rceil\).
First, we will make \(S_{H}(P)\) uniform at scales \(1,\Delta,\Delta^{2},\cdots,\Delta^{m}=\frac{\delta}{r_{0}}\). By Lemmas 6.3 and 2.17, we can find \(P^{\prime}\subset P\) with \(|P^{\prime}|\geq L^{-1}|P|\) such that there is a sequence \((N_{j})_{j=1}^{m}\) with \(|S_{H}(P^{\prime})\cap Q|_{\Delta^{k}}=N_{k}\) for all \(1\leq k\leq m\) and \(Q\in\mathcal{D}_{\Delta^{k}}(P^{\prime})\).
Let \(m\varepsilon\leq k\leq m\) be the largest index such that \(N_{k}\geq|P^{\prime}|\Delta^{m\varepsilon\cdot d+(k-m\varepsilon)t^{\prime}}\) for one (equivalently all) \(Q\in\mathcal{D}_{\Delta^{k}}(P^{\prime})\). Certainly \(k=m\varepsilon\) is a valid index since \(|\mathcal{D}_{\Delta^{m\varepsilon}}|=\Delta^{-dm\varepsilon}\).
Now, we will check the given conditions. By maximality of \(k\), we have for \(k\leq j\leq m\),
\[N_{j}\leq|P^{\prime}|\Delta^{dm\varepsilon+(j-m\varepsilon)t^{\prime}}\leq N _{k}\Delta^{(j-k)t^{\prime}}.\]
Noticing that \(|P^{\prime}\cap S_{H}^{-1}(Q)|=|S_{H}(P^{\prime})\cap Q|\) and likewise for \(R\in\mathcal{D}_{\Delta^{j}}(P^{\prime})\cap Q\), this proves (6.23).
To check (6.24), we recall that \(LN_{k}\geq L\cdot|P^{\prime}|\Delta^{dm\varepsilon+(k-m\varepsilon)t^{\prime }}\geq|P|\Delta^{dm\varepsilon+(k-m\varepsilon)t^{\prime}}\). Using \(r\leq\frac{\delta}{r_{0}}=\Delta^{m}\), \(t^{\prime}\leq\frac{t-d\varepsilon}{1-\varepsilon}\), and that \(P\) is a \((\delta,t,C)\)-set, we have
\[|P^{\prime}\cap B_{r}|\leq|P\cap B_{r}|\leq C|P|r^{t}\leq C\cdot N_{k}L\left( \frac{r}{\Delta^{k}}\right)^{t^{\prime}}.\]
Finally, we will need the following variant of Corollary 2.10.
**Corollary 6.5**.: _Let \(0\leq\max(s,k)<t\leq d-1\), \(\delta\leq r\leq 1\), and let \(C_{P}\geq 1,C_{T}\geq 0\). Let \(\mathcal{P}\subset\mathcal{D}_{\delta}\) be a set contained in an \((r_{0},k+1)\)-plate \(H\) satisfying the following conditions:_
* _For all_ \(\frac{\delta}{r_{0}}\leq r\leq 1\) _and balls_ \(B_{r}\)_, we have_ \[|\mathcal{P}\cap S_{H}^{-1}(B_{r})|\leq C_{P}\cdot|\mathcal{P}|\cdot r^{t}.\] (6.25)
* _For all_ \(\delta\leq r\leq\frac{\delta}{r_{0}}\) _and balls_ \(B_{r}\)_, we have_ \[|\mathcal{P}\cap B_{r}|\leq C_{P}\cdot|\mathcal{P}|\cdot r^{t}.\] (6.26)
_Assume that for every \(p\in\mathcal{P}\) there exists a family \(\mathcal{T}(p)\subset\mathcal{T}^{\delta}\) of dyadic \(\delta\)-tubes satisfying the following conditions:_
* \(T\cap p\neq\emptyset\) _for all_ \(T\in\mathcal{T}(p)\)_;_
* \(|\mathcal{T}(p)\cap\mathbf{T}|\leq C_{T}\cdot|\mathcal{T}(p)|\cdot r_{0}^{k-s }x^{s}\) _for all_ \(x\)_-tubes_ \(\mathbf{T}\) _with_ \(\delta\leq x\leq r_{0}\)_._
_Further assume that \(|\mathcal{T}(p)|=M\) for some \(M\geq 1\). If \(\mathcal{T}=\cup_{p\in\mathcal{P}}\mathcal{T}(p)\), then_
\[|\mathcal{T}|\gtrsim(C_{P}C_{T})^{-1}\cdot Mr_{0}^{s-k}\delta^{-s}.\]
Proof.: For \(p\in\mathcal{P}\), let
\[j_{p}(\mathcal{P},\mathcal{T})=\#\{(q,t)\in\mathcal{P}\times\mathcal{T}(p):t\in\mathcal{T}(q)\}.\]
We have the following:
**Lemma 6.6**.: _For all \(p\in\mathcal{P}\), we have \(j_{p}(\mathcal{P},\mathcal{T})\lesssim_{s,t,k}C_{P}C_{T}|\mathcal{P}|\cdot Mr_{0 }^{k-s}\delta^{s}\)._
Proof.: We count \(j_{p}(\mathcal{P},\mathcal{T})\) by first choosing a dyadic \(\delta<r<1\), then counting the number of \(q\in\mathcal{P}\) with \(|p-q|\sim r\), then finally counting the number of \(t\in\mathcal{T}\) that pass through \(p,q\).
If \(r>\frac{\delta}{r_{0}}\), we claim that if \(|x-y|\in[r,2r]\) and some tube through \(x,y\) lies in \(H\), then \(|S_{H}(x)-S_{H}(y)|\leq 100r\), so \(y\in S_{H}^{-1}(B_{100r}(S_{H}(x)))\).
To prove this, we may assume \(r_{0}\leq\frac{1}{50}\), as otherwise we can use the simple fact \(|S_{H}(x)-S_{H}(y)|\leq r_{0}^{-1}|x-y|\leq 100r\). Now choose a coordinate system such that the first \(k+1\) axes correspond to the long sides of \(H\), and the remaining axes correspond to the short sides of \(H\). Let \(x-y=(\vec{a},\vec{b})\in\mathbb{R}^{k+1}\times\mathbb{R}^{d-k-1}\). Then \(|\vec{a}|\leq|x-y|\leq r\). Furthermore, we have \(|\vec{b}|\leq 50r_{0}|\vec{a}|\), otherwise any tube through \(x,y\) would be roughly orthogonal to \(H\) and intersect \(H\) in a subtube with length \(2r_{0}\leq 1\), contradiction. Thus, we have \(|S_{H}(x)-S_{H}(y)|\leq|\vec{a}|+r_{0}^{-1}|\vec{b}|\leq 100r\).
Using the claim and condition (6.25), we see that there are \(\lesssim C_{P}|\mathcal{P}|\cdot r^{t}\) many choices for \(q\). For each \(q\), the set of tubes \(t\in\mathcal{T}(p)\) passing through \(q\) lies in a \(\frac{\delta}{r}\)-tube, so by the tube non-concentration condition (and noting that \(\frac{\delta}{r}<r_{0}\)), we have \(C_{T}\cdot Mr_{0}^{k-s}\left(\frac{\delta}{r}\right)^{s}\) choices for \(t\).
Thus, the contribution to \(j_{p}(\mathcal{P},\mathcal{T})\) for a given dyadic \(r>\frac{\delta}{r_{0}}\) is \(C_{P}C_{T}|\mathcal{P}|\cdot Mr_{0}^{k-s}\delta^{s}\cdot r^{t-s}\), and summing over dyadic \(r\) gives \(C_{P}C_{T}|\mathcal{P}|\cdot Mr_{0}^{k-s}\delta^{s}\cdot O_{t-s}(1)\).
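For completeness, the dyadic sum here is just a geometric series; a quick check (writing \(r=2^{-j}\) for the dyadic values in \((\delta/r_{0},1]\), and using \(t>s\)) gives
\[\sum_{\delta/r_{0}<r\leq 1\ \text{dyadic}}r^{t-s}\leq\sum_{j\geq 0}2^{-j(t-s)}=\frac{1}{1-2^{-(t-s)}}=O_{t-s}(1).\]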
If \(r<\frac{\delta}{r_{0}}\), then by condition 6.26 we see that there are \(\lesssim C_{P}|\mathcal{P}|\cdot r^{t}\) many choices for \(q\). For each \(q\), the set of tubes \(t\in\mathcal{T}(p)\) passing through \(q\) lies in a \(\frac{\delta}{r}\)-tube \(\mathbf{T}_{\delta/r}\). We note that \(\frac{\delta}{r}>r_{0}\), so the tube non-concentration doesn't apply directly, but luckily we note that \(\mathbf{T}_{\delta/r}\cap H\) can be covered by \((\frac{\delta}{rr_{0}})^{k}\) many \(r_{0}\)-tubes. Thus, by using tube non-concentration at scale \(r_{0}\), we have \(C_{T}\cdot Mr_{0}^{k}\cdot(\frac{\delta}{rr_{0}})^{k}\) choices for \(t\).
Thus, the contribution to \(j_{p}(\mathcal{P},\mathcal{T})\) for a given dyadic \(r<\frac{\delta}{r_{0}}\) is (after some manipulation)
\[C_{P}C_{T}|P|Mr_{0}^{k}\left(\frac{\delta}{r_{0}}\right)^{t}\cdot\left(\frac{ rr_{0}}{\delta}\right)^{t-k}.\]
Since \(t>\max(k,s)\), the sum is \(\lesssim C_{P}C_{T}|P|Mr_{0}^{k}\left(\frac{\delta}{r_{0}}\right)^{s}\).
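To spell out the last two steps (a sketch): writing the dyadic values as \(r=2^{-j}\delta/r_{0}\), the geometric series contributes an \(O_{t-k}(1)\) factor, and the exponent may be lowered from \(t\) to \(s\) since \(t>s\) and \(\delta\leq r_{0}\):
\[\sum_{j\geq 0}2^{-j(t-k)}=O_{t-k}(1),\qquad r_{0}^{k}\left(\frac{\delta}{r_{0}}\right)^{t}\leq r_{0}^{k}\left(\frac{\delta}{r_{0}}\right)^{s}=r_{0}^{k-s}\delta^{s}.\]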
Adding up both \(r>\frac{\delta}{r_{0}}\) and \(r<\frac{\delta}{r_{0}}\) contributions, we prove the Lemma.
For \(t\in\mathcal{T}\), let \(\mathcal{P}(t)=\{p\in\mathcal{P}:t\in\mathcal{T}(p)\}\). By Cauchy-Schwarz, we have
\[(M|\mathcal{P}|)^{2}=\left(\sum_{t\in\mathcal{T}}|\mathcal{P}(t)|\right)^{2} \leq|\mathcal{T}|\sum_{t\in\mathcal{T}}|\mathcal{P}(t)|^{2}=|\mathcal{T}|\sum_ {p\in\mathcal{P}}j_{p}(\mathcal{P},\mathcal{T}).\]
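Both equalities in this display are double counting; a quick check, recalling that \(|\mathcal{T}(p)|=M\) for every \(p\in\mathcal{P}\):
\[\sum_{t\in\mathcal{T}}|\mathcal{P}(t)|=\#\{(p,t):t\in\mathcal{T}(p)\}=\sum_{p\in\mathcal{P}}|\mathcal{T}(p)|=M|\mathcal{P}|,\qquad\sum_{t\in\mathcal{T}}|\mathcal{P}(t)|^{2}=\#\{(p,q,t):t\in\mathcal{T}(p)\cap\mathcal{T}(q)\}=\sum_{p\in\mathcal{P}}j_{p}(\mathcal{P},\mathcal{T}).\]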
By Lemma 6.6, we get
\[|\mathcal{T}|\geq\frac{M^{2}|\mathcal{P}|^{2}}{C_{P}C_{T}|\mathcal{P}|^{2}Mr_{ 0}^{k-s}\delta^{s}}=(C_{P}C_{T})^{-1}Mr_{0}^{s-k}\delta^{-s}.\]
Proof of Theorem 6.1.: A small reduction: we would like to assume \(|\mathcal{T}(p)|\sim M\) for all \(p\in\mathcal{P}\). To assume this, we first observe that \(|\mathcal{T}(p)|\geq M_{0}=(\delta/r_{0})^{-\varepsilon}r_{0}^{k-s}\delta^{-s}\) for all \(p\in\mathcal{P}\). On the other hand, if for at least half of the \(p\in\mathcal{P}\) (call them \(\mathcal{P}^{\prime}\)) we have \(|\mathcal{T}(p)|\geq M_{0}(\delta/r_{0})^{-1}\), then we are immediately done by Corollary 6.5 applied to \(\mathcal{P}^{\prime}\) and \(\mathcal{T}(p)\). Thus, by reducing \(\mathcal{P}\) if necessary, we may assume \(|\mathcal{T}(p)|\in(M_{0},M_{0}(\delta/r_{0})^{-1})\). Then by reducing \(\mathcal{P}\) further by a \(\lesssim\log(\delta/r_{0})^{-1}\) factor, we may assume \(|\mathcal{T}(p)|\in(M,2M)\) for some \(M\in(M_{0},M_{0}(\delta/r_{0})^{-1})\). Finally, we may remove some tubes from each \(\mathcal{T}(p)\) to make \(|\mathcal{T}(p)|=M\). Then \((\mathcal{P}_{0},\mathcal{T}_{0})\) is a \((\delta,s,C_{1}r_{0}^{k-s},\kappa,C_{2},M)\)-nice configuration.
Pick \(\beta(s,t,k)>0\) such that \(\frac{t-d\beta}{1-\beta}>\max(s,k)\), and let \(t^{\prime}=\frac{1}{2}(\frac{t-d\beta}{1-\beta}+\max(s,k))\). Pick \(\Delta>0\) such that \(\log(1/\Delta)<\Delta^{-\varepsilon}\). Find \(\Delta^{\prime}=\Delta^{k}\in(\delta/r_{0},(\delta/r_{0})^{\beta})\) such that the conclusion of Lemma 6.4 holds. Now by Proposition 6.2, we have
\[\frac{|\mathcal{T}|}{M}\geq\frac{|\mathcal{T}_{Q}|}{M_{Q}}\cdot\frac{| \mathcal{T}^{\Delta^{\prime}}(\mathcal{T})|}{M_{\Delta^{\prime}}}.\]
If \(\varepsilon<\beta\eta^{2}\), where \(\eta(s,t,\kappa,k,d)\) is the parameter in Theorem 5.1, we have \(\frac{|\mathcal{T}^{\Delta^{\prime}}(\mathcal{T})|}{M_{\Delta^{\prime}}}\geq (\Delta^{\prime})^{-s-\sqrt{\varepsilon}}\).
Pick \(Q\). Then \(S_{H}^{-1}\circ S_{Q}(S_{H}(\mathcal{P})\cap Q)\) satisfies the conditions of Corollary 6.5 with \(C_{P}=\left(\frac{\delta}{r_{0}}\right)^{-\varepsilon}\cdot L\cdot\Delta^{-d}\). Thus, we have \(\frac{|\mathcal{T}_{Q}|}{M_{Q}}\geq\left(\frac{\delta}{r_{0}}\right)^{- \varepsilon}\Delta^{d}L^{-1}\cdot r_{0}^{s-k}\left(\frac{\delta}{\Delta^{ \prime}}\right)^{-s}\). Using these two bounds and \(M\geq(\delta/r_{0})^{\varepsilon}r_{0}^{s-k}\delta^{-s}\), we get
\[|\mathcal{T}|\geq\left(\frac{\delta}{r_{0}}\right)^{2\varepsilon-\sqrt{ \varepsilon}\beta}\Delta^{-d}L^{-1}r_{0}^{2(s-k)}\cdot\delta^{-2s}.\]
It remains to choose \(\varepsilon<\beta^{2}/100\) and also for \(\frac{\delta}{r_{0}}\) small enough, we have \(\Delta^{-d}<\left(\frac{\delta}{r_{0}}\right)^{-\varepsilon}\) and \(L\leq\left(\frac{\delta}{r_{0}}\right)^{-\varepsilon}\Delta^{-\varepsilon m} \leq\left(\frac{\delta}{r_{0}}\right)^{-2\varepsilon}\). Thus, \(|\mathcal{T}|\geq\left(\frac{\delta}{r_{0}}\right)^{-\varepsilon}r_{0}^{2(s-k )}\cdot\delta^{-2s}\) and we are done.
## 7 Power decay around \(k\)-planes
In this section, we will roughly deal with the following situation:
* \(\mu,\nu\) are \(s\)-Frostman measures with \(k-1<s\leq k\);
* \(\nu\) gives mass \(\leq\varepsilon\) to any \((r_{0},k)\)-plate.
In other words, \(\nu\) does not concentrate around \((r_{0},k)\)-plates. We would like to understand the \(\nu\)-mass of \((r,k)\)-plates for \(r\) much smaller than \(r_{0}\). A result of Shmerkin [25, Proposition B.1] says that there exist \(r_{1}(r_{0},s,k),\eta(s,k)>0\), a subset \(X\subset\mathrm{spt}\mu\) with \(\mu(X)>1-O(\varepsilon)\), and for each \(x\in X\), a subset \(Y_{x}\subset\mathrm{spt}\nu\) with \(\nu(Y_{x})>1-O(\varepsilon)\) such that \(\nu(H\cap Y_{x})\leq r^{\eta}\) for all \(r\leq r_{1}\) and \((r,k)\)-plates \(H\) through \(x\). Thus, we do obtain a power decay for sufficiently small \(r\). But what is the optimal starting point of the power decay? Can we hope for a power
decay \(\nu(H\cap Y_{x})\lesssim K(\frac{r}{r_{0}})^{\eta}\) for all \((r,k)\)-plates through \(x\)? The answer is yes, and indeed we shall prove it by making small but meaningful tweaks to Shmerkin's argument. But before stating our result, we shall introduce some convenient notation. We define thin \(k\)-plates, a generalization of thin tubes, as follows.
**Definition 7.1**.: _Let \(K,t\geq 0\), \(1\leq k\leq d-1\), and \(c\in(0,1]\). Let \(\mu,\nu\in\mathbb{P}(\mathbb{R}^{d})\) supported on \(X,Y\). Fix \(G\subset X\times Y\). We say \((\mu,\nu)\) has \((t,K,c)\)-thin \(k\)-plates on \(G\) down from scale \(r_{0}\) if_
\[\nu(H\cap G|_{x})\leq K\cdot r^{t}\quad\text{ for all }r\in(0,r_{0})\text{ and all }(r,k)\text{-plates }H\text{ containing }x. \tag{7.27}\]
**Remark 7.2**.: _In this paper, we will choose \(G=(A\cup B)^{c}\) where \(\mu\times\nu(B)\) is small. (The complement is taken with respect to \(\mathbb{R}^{d}\times\mathbb{R}^{d}\).) In this case, the equation (7.27) becomes_
\[\nu(H\backslash(A|_{x}\cup B|_{x}))\leq K\cdot r^{t}\quad\text{ for all }r\in(0,r_{0})\text{ and all }(r,k)\text{-plates }H\text{ containing }x.\]
Now, we can state the main proposition, which generalizes and extends Proposition B.1 of [25]. It may be of independent interest.
**Proposition 7.3**.: _Let \(1\leq k\leq d-1\) and \(k-1<s\leq k\). There exist \(\eta(\kappa,k,d)>0\) and \(K_{0}(\kappa,k,d)>0\) with the following property. Fix \(r_{0}\leq 1\) and \(K\geq K_{0}\). Suppose that \(\mu,\nu\) are positive measures with \(|\mu|,|\nu|\geq 1\) and for any \((r,k-1)\)-plate \(H\), we have_
\[\mu(H) \leq C_{\mu}r^{\kappa},\] \[\nu(H) \leq C_{\nu}r^{\kappa}.\]
_Let \(A\subset X\times Y\) be the pairs of points that lie in some \(K^{-1}\)-concentrated \((r_{0},k)\)-plate. Then there exists \(B\) with \(\mu\times\nu(B)\leq K_{0}K^{-1}\) such that \((\mu,\nu)\) have \((\eta,Kr_{0}^{-\eta})\)-thin \(k\)-plates on \((A\cup B)^{c}\). (The complement is taken with respect to \(\mathbb{R}^{d}\times\mathbb{R}^{d}\).)_
**Remark 7.4**.: _(a) We can apply Proposition 7.3 in case \(\mu,\nu\) are \(s\)-dimensional with \(s>k-1\)._
_(b) In Proposition B.1 of [25], the exponents for \(\mu,\nu\) are allowed to differ. The proof of Proposition 7.3 is easily modified to include this detail._
To prove Proposition 7.3, we need the following two lemmas. Fix \(r\leq r_{0}\). The first says that there are few dense \((r,k)\)-plates, and the second says that for most \(x\in X\), the dense \((r,k)\)-plates through \(x\) lie in some \((r_{0},k)\)-plate.
**Lemma 7.5**.: _There is \(N=N(\kappa,k,d)\) such that the following holds: let \(\nu\) be a measure with mass \(\leq 1\) such that \(\nu(W)\leq C_{\nu}\rho^{\kappa}\) for all \((\rho,k-1)\)-plates \(W\), \(1>\rho>r\). Let \(\mathcal{E}_{r,k}\) be a set of \((r,k)\)-plates such that every \((s,k)\)-plate contains \(\lesssim\left(\frac{s}{r}\right)^{(k+1)(d-k)}\) many \(r\)-plates of \(\mathcal{E}_{r,k}\) (as in Section 2.2). Let \(\mathcal{H}=\{H\in\mathcal{E}_{r,k}:\nu(H)\geq a\}\). Then \(|\mathcal{H}|\lesssim(\frac{C_{\nu}}{a})^{N}\)._
Lemma 7.5 follows from the condition on \(\mathcal{E}_{r,k}\) and the following generalization of [25, Lemma B.3]. In the case \(a=\delta^{\eta}\), the resulting bound is stronger but the assumption is also stronger.
**Lemma 7.6**.: _Suppose \(\nu(W)\leq C_{\nu}\rho^{\kappa}\) for all \((\rho,k-1)\)-plates \(W\), \(1>\rho>r\). Then there exists a family of \(\lesssim a^{-1}\) many \((r(C_{\nu}/a^{2})^{1/\kappa},k)\)-plates \(\{T_{j}\}\) such that every \((r,k)\)-plate \(H\) with \(\nu(H)\geq a\) is contained in some plate \(T_{j}\)._
Proof.: Choose a maximal set of \((r,k)\)-plates \(\{Y_{j}\}_{j=1}^{m}\) such that
1. \(\nu(Y_{i})\geq a\),
2. \(\nu(Y_{i}\cap Y_{j})\leq a^{2}/2\) for \(1\leq i<j\leq m\).
We claim \(m\leq 2a^{-1}\). Suppose for contradiction that \(m>2a^{-1}\). If \(S=\sum_{i=1}^{m}\nu(Y_{i})\) and \(f=\sum_{i=1}^{m}\mathbb{1}_{Y_{i}}\), then
\[S^{2}=\left(\int f\,d\nu\right)^{2}\leq\int f^{2}\,d\nu=S+\sum_{1\leq i<j\leq m }\nu(Y_{i}\cap Y_{j})\leq S+m^{2}a^{2}/2. \tag{7.28}\]
Now, \(S\geq ma>2\), so \(S^{2}-S>\frac{S^{2}}{2}\). Combining with (7.28) gives \(S^{2}<m^{2}a^{2}\), a contradiction.
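In more detail (a quick check, using \(S\geq ma>2\) together with (7.28)):
\[\tfrac{1}{2}S^{2}<S^{2}-S\leq\tfrac{1}{2}m^{2}a^{2}\quad\Longrightarrow\quad S^{2}<m^{2}a^{2}\quad\Longrightarrow\quad S<ma,\]
which contradicts \(S=\sum_{i=1}^{m}\nu(Y_{i})\geq ma\).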
Let \(\{T_{j}\}_{j=1}^{m}\) be the \((r(C_{\nu}/a^{2})^{1/\kappa},k)\)-plates with same central \(k\)-plane as \(Y_{j}\). We claim that these plates satisfy the required containment property. Given an \((r,k)\)-plate \(H\) with \(\nu(H)\geq a\), by maximality there exists \(Y_{j}\) such that \(\nu(H\cap Y_{j})\geq a^{2}/2\). Thus, if \(\angle(H,Y_{j})\) is the largest principal angle between the central planes of \(H\) and \(Y_{j}\), then \(H\cap Y_{j}\) is contained in a box of dimensions
\[\underbrace{1\times\cdots\times 1}_{(k-1)\text{ times}}\times r/\angle(H,Y_{j})\times \underbrace{r\times\cdots\times r}_{d-k\text{ times}}.\]
Thus, \(H\cap Y_{j}\) is contained in a \((r/\angle(H,Y_{j}),k-1)\)-plate, so \(\nu(H\cap Y_{j})\leq C_{\nu}(r/\angle(H,Y_{j}))^{\kappa}\). Thus, \(\angle(H,Y_{j})\lesssim r(C_{\nu}/a^{2})^{1/\kappa}\), so \(H\) is contained in \(T_{j}\).
**Remark 7.7**.: _We would like to present an alternative proof of Lemma 7.5, which was the original one found by the author. It gives slightly worse bounds but we believe it is slightly more motivated._
_If \(a>1\) then \(\mathcal{H}=\emptyset\), so assume \(a\leq 1\). Let \(\xi=\left(\frac{a}{2C_{\nu}}\right)^{1/\kappa}\leq 1\). By induction, for each \(0\leq i\leq k\), there exist \(x_{0},\cdots,x_{i}\) such that \(|x_{0}\wedge x_{1}\wedge\cdots\wedge x_{i}|\geq\xi^{i}\) and that lie in at least \(|\mathcal{H}|(\frac{a}{2})^{i+1}\) many elements of \(\mathcal{H}\)._
_The base case \(i=0\) is trivial. For the inductive step, suppose \(x_{0},\cdots,x_{i}\) are found. Let \(\Omega\) be the \(\xi\)-neighborhood of the span of \(x_{0},\cdots,x_{i}\). Since \(\nu(\Omega)\leq C_{\nu}\xi^{\kappa}\leq\frac{1}{2}a\), we have \(\nu(H\setminus\Omega)\geq\frac{1}{2}a\) for every \(H\in\mathcal{H}\). Thus, there is \(x_{i+1}\in\mathbb{R}^{d}\setminus\Omega\) such that \(x_{0},\cdots,x_{i+1}\) lie in at least \(|\mathcal{H}|(\frac{a}{2})^{(i+2)}\) many elements of \(\mathcal{H}\), and by construction, \(|x_{0}\wedge x_{1}\wedge\cdots\wedge x_{i+1}|\geq\xi^{i+1}\). This completes the inductive step and thus the proof of the claim._
_Finally, the set of \((r,k)\)-plates through \(x_{0},\cdots,x_{k}\) must lie in a \((r\xi^{-k},k)\)-plate, so at most \(\xi^{-k(k+1)(d-k)}\) many \((r,k)\)-plates of \(\mathcal{E}_{r,k}\) can lie in it. Thus, \(|\mathcal{H}|\leq(\frac{a}{2})^{-(k+1)}\xi^{-k(k+1)(d-k)}\lesssim(\frac{C_{\nu}}{a})^{N}\)._
The following lemma is in the same spirit as [25, Proposition B.2].
**Lemma 7.8**.: _Let \(\mathcal{H}\) be a collection of \((r,k)\)-plates, and suppose \(\mu(W)\leq C_{\mu}\rho^{\kappa}\) for all \((\rho,k-1)\)-plates \(W\), \(1>\rho>r\). Then for all \(x\in X\) except a set of \(\mu\)-measure \(\leq C_{\mu}\left(\frac{r}{r_{0}}\right)^{\kappa}|\mathcal{H}|^{2}\), there exists an \((r_{0},k)\)-plate that contains every \((r,k)\)-plate in \(\mathcal{H}\) that passes through \(x\)._
Proof.: The exceptional set is contained in the set of \(x\in X\) that lie in two plates of \(\mathcal{H}\) with "angle" \(\gtrsim r_{0}\). The intersection of two such plates is contained in a box with dimensions \(\underbrace{r\times\cdots\times r}_{d-k\text{ times}}\times\frac{r}{r_{0}}\times\underbrace{1\times\cdots\times 1}_{k-1\text{ times}}\), which in turn is contained in a \((\frac{r}{r_{0}},k-1)\)-plate (since \(r_{0}\leq 1\)). Thus, by assumption on \(\mu\), this box has mass \(\lesssim C_{\mu}\left(\frac{r}{r_{0}}\right)^{\kappa}\). Finally, there are \(|\mathcal{H}|^{2}\) pairs of plates in \(\mathcal{H}\).
Proof of Proposition 7.3.: Fix \(r\leq r_{0}\), and let \(\eta=\frac{\kappa}{4N}\), where \(N\) is the constant in Lemma 7.5. We may assume \(N\geq 2\). By Lemmas 7.5 and 7.8, we can find a set \(E_{r}\) with \(\mu(E_{r})\leq K^{-2}\left(\frac{r}{r_{0}}\right)^{\eta}\) and, for each \(x\notin E_{r}\), a set \(P_{r}(x)\subset Y\) that is either empty or a \((r_{0}^{1/2}r^{1/2},k)\)-plate through \(x\) such that \(\nu(W)\leq K\left(\frac{r}{r_{0}}\right)^{\eta}\) for every \((r,k)\)-plate \(W\) through \(x\) intersecting \(Y\setminus P_{r}(x)\).
Now, let \(E=\cup_{n\geq 0}E_{r_{0}K^{-2^{n}}}\) and \(P(x)=\cup_{n\geq 0}P_{r_{0}K^{-2^{n}}}(x)\). We claim that \(\mu(E)\leq K^{-1}\) and if \(x\notin E\), then \(\nu(P(x)\setminus A|_{x})\lesssim K^{-1}\). Then if \(r\geq r_{0}K^{-1}\), then \(\nu(W)\leq 1\leq K\left(\frac{r}{r_{0}}\right)^{\eta}\) for \(\eta<1\); for any \(r_{0}K^{-2^{n}}\leq r<r_{0}K^{-2^{n-1}}\), we have for any \((r,k)\)-plate \(W\),
\[\nu(W\setminus P_{r_{0}K^{-2^{n}}}(x))\leq K\left(\frac{r_{0}K^{-2^{n-1}}}{r_ {0}}\right)^{\eta}\leq K\left(\frac{r}{r_{0}}\right)^{\eta/2}\]
Taking \(B=(E\times Y)\cup\{(x,y):x\notin E,\ y\in P(x)\setminus A|_{x}\}\), we conclude that \(\mu\times\nu(B)\lesssim K^{-1}\) and that \((\mu,\nu)\) have \((\eta/2,K^{2}r_{0}^{-\eta/2},1-K^{-1})\)-thin \(k\)-plates on \((A\cup B)^{c}\).
To prove the first claim, we observe that \(\mu(E)\leq K^{-2}\sum_{n=0}^{\infty}K^{-\eta 2^{n}}\leq K^{-1}\) if \(K_{0}\) is sufficiently large in terms of \(\eta\).
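For instance, one crude way to carry this out (a sketch; assume, say, \(K\geq K_{0}(\eta)\geq 2^{1/\eta}\), so that \(K^{-\eta}\leq 1/2\), and use \(2^{n}\geq n+1\)):
\[\mu(E)\leq K^{-2}\sum_{n=0}^{\infty}K^{-\eta 2^{n}}\leq K^{-2}\sum_{n=0}^{\infty}K^{-\eta(n+1)}=K^{-2}\cdot\frac{K^{-\eta}}{1-K^{-\eta}}\leq K^{-2}\cdot 2K^{-\eta}\leq K^{-2}\leq K^{-1}.\]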
Next, by definition of \(P_{r_{0}K^{-2^{n-1}}}\), we have \(\nu(P_{r_{0}K^{-2^{n}}}(x)\setminus P_{r_{0}K^{-2^{n-1}}}(x))\leq K\left(\frac{r_{0}K^{-2^{n-1}}}{r_{0}}\right)^{\eta/2}\leq K^{1-2^{n-2}\eta}\). We also have the bound \(\nu(P_{r_{0}K^{-2^{n}}}(x)\setminus A|_{x})\leq K^{-1}\) from the given condition (note that \(P_{r_{0}K^{-2^{n}}}(x)\) is a \((r_{0}K^{-2^{n-1}},k)\)-plate). Thus,
\[\nu(P(x)\setminus A|_{x}) \leq\sum_{n=0}^{\log\eta^{-1}}\nu(P_{r_{0}K^{-2^{n}}}(x)\setminus A |_{x})+\sum_{n=\log\eta^{-1}}^{\infty}\nu(P_{r_{0}K^{-2^{n}}}(x)\setminus P_{r _{0}K^{-2^{n-1}}}(x))\] \[\leq\log\eta^{-1}\cdot K^{-1}+\sum_{n=\log\eta^{-1}}^{\infty}K^{ 1-2^{n-2}\eta}\] \[\lesssim K^{-1},\]
if \(K_{0}\) is chosen large enough.
## 8 Radial projection estimates
In this section, we will first prove a key special case, and then the general case of Theorem 1.13.
### Maximal plate concentration case
_This subsection is based on ideas from [19]._
**Theorem 8.1**.: _Let \(k\in\{1,2,\cdots,d-1\}\), \(k-1<\sigma<s\leq k\), and fix \(K\geq 1\). There exists \(N\in\mathbb{N}\) and \(K_{0}\) depending on \(\sigma,s,k\) such that the following holds. Fix \(r_{0}\leq 1\) and \(K_{1},K_{2}\geq K_{0}\). Let \(\mu,\nu\) be \(\sim 1\)-separated \(s\)-dimensional measures with constant \(C_{\mu},C_{\nu}\) supported on \(E_{1},E_{2}\), which lie in an \((r_{0},k)\)-plate \(H_{r}\). Assume that \(|\mu|,|\nu|\leq 1\). Let \(A\) be the pairs of \((x,y)\in E_{1}\times E_{2}\) that lie in some \(K_{1}^{-1}\)-concentrated \((\frac{r_{0}}{K_{2}},k)\)-plate. Then there exists a set \(B\subset E_{1}\times E_{2}\) with \(\mu\times\nu(B)\lesssim K_{1}^{-1}\) such that for every \(x\in E_{1}\) and \(r\)-tube \(T\) through \(x\), we have_
\[\nu(T\setminus(A|_{x}\cup B|_{x}))\lesssim\frac{r^{\sigma}}{r_{0}^{\sigma-(k- 1)}}(K_{1}K_{2})^{N}.\]
_The implicit constant may depend on \(s,k\)._
Theorem 8.1 is the special case of Theorem 1.13 where \((\mu,\nu)\) are concentrated in a single \((r_{0},k)\)-plate (we call this the maximal plate concentration case). For this, we closely follow the bootstrapping approach of [19]. There are three ingredients.
* The next Proposition 8.2 will be the base case for the bootstrapping argument (\(\sigma=0\)).
* Proposition 7.3 will ensure power decay for \(\mu,\nu\) around \(k\)-planes.
* Theorem 1.11 will be used in the bootstrapping step to upgrade \(\sigma\) to \(\sigma+\eta\).
**Proposition 8.2**.: _Let \(1\leq k\leq d-1\) and \(k-1<s\leq k\). Then there exist \(N=N(s,k)\) and \(K_{0}(s,k)\) such that the following holds. Fix \(K\geq K_{0}\). Then for any \(s\)-dimensional measures \(\mu,\nu\) with constant \(\sim 1\) contained in the \(r_{0}\)-neighborhood of a \(k\)-plane and \(d(\mu,\nu)\gtrsim 1\), there exists \(B\subset X\times Y\) with \(\mu\times\nu(B)\leq K^{-1}\) such that \((\mu,\nu)\) has \((0,K^{N}r_{0}^{k-1})\)-thin tubes on \(B^{c}\) down from scale \(r_{0}\)._
Proof.: Let \(\tilde{\mu},\tilde{\nu}\) be the projected measures on the \(k\)-plane. Then \(\tilde{\mu},\tilde{\nu}\) satisfy \(s\)-dimensional Frostman conditions for \(r_{0}\leq r\leq 1\). Let
\[B=\{(x,y):x,y\in T\text{ for some }r_{0}\text{-tube }T\text{ with }\nu(T)\geq K^{N}r_{0}^{k-1}\}.\]
The rest is a standard argument following [8, Proof of Lemma 3.6]. Define the radial projection \(P_{y}(x)=\frac{x-y}{|x-y|}\). Orponen's radial projection theorem [17, Equation (3.5)] can be written in the form (where \(p=p(s,k)>1\)):
\[\int\|P_{x}\tilde{\mu}\|_{L^{p}}^{p}\,d\tilde{\mu}(x)\lesssim 1. \tag{8.29}\]
To effectively use (8.29), we will show that \(|P_{x}(B|_{x})|\) is small for \(x\in X\). Indeed, let \(\mathcal{T}_{x}\) be a minimal set of finitely overlapping \(2r_{0}\)-tubes through \(x\) such that any \(r_{0}\)-tube through \(x\) with \(\nu(T)\geq K^{N}r_{0}^{k-1}\) lies in a \(2r_{0}\)-tube in \(\mathcal{T}_{x}\). Then each \(2r_{0}\)-tube in \(\mathcal{T}_{x}\) has \(\nu\)-measure \(\geq K^{N}r_{0}^{k-1}\). Since \(d(x,\nu)\gtrsim 1\), we conclude that \(|\mathcal{T}_{x}|\lesssim K^{-N}r_{0}^{1-k}\). Therefore, since the Lebesgue measure \(|P_{x}(T)|\lesssim r_{0}^{k-1}\) for a \(2r_{0}\)-tube \(T\) through \(x\), we obtain \(|P_{x}(B|_{x})|\lesssim K^{-N}\). Finally, we can use Holder's inequality and (8.29) to upper bound \(\mu\times\nu(B)\):
\[\mu\times\nu(B) =\int\nu(B|_{x})d\mu(x)\] \[=\int\left(\int_{P_{x}(B|_{x})}P_{x}(\nu)\right)d\mu(x)\] \[\leq\sup_{x}|P_{x}(B|_{x})|^{1-1/p}\int\|P_{x}\nu\|_{L^{p}}d\mu(x)\] \[\lesssim K^{-N(1-1/p)}.\]
Choose \(N=1+(1-1/p)^{-1}\) to finish (the implicit constant is dominated by \(K\geq K_{0}\) if \(K_{0}\) is large enough).
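Unpacking this last sentence (a sketch; here \(C\) denotes the implicit constant from the previous display): with \(N=1+(1-1/p)^{-1}\),
\[C\cdot K^{-N(1-1/p)}=C\,K^{-(1-1/p)}\cdot K^{-1}\leq K^{-1}\qquad\text{whenever }K\geq C^{p/(p-1)},\]
so it suffices to take \(K_{0}\geq C^{p/(p-1)}\).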
The bootstrapping step is as follows:
**Proposition 8.3**.: _Let \(k\in\{1,\cdots,d-1\}\), \(0\leq\sigma\leq k\), \(\max(\sigma,k-1)<s\leq k\), \(\kappa>0\). There exist \(\eta(\sigma,s,\kappa,k,d)\) and \(K_{0}(\eta,k)>0\) such that the following holds. Fix \(r_{0}\leq 1\) and \(K\geq K_{0}\). Let \(\mu,\nu\) be \(\sim K^{-1}\)-separated \(s\)-dimensional measures with constant \(K\) supported on \(X,Y\), which lie in an \((r_{0},k)\)-plate \(H\). Let \(G\subset X\times Y\). Suppose that \((\mu,\nu)\) and \((\nu,\mu)\) have \((\sigma,Kr_{0}^{-(\sigma-(k-1))})\)-thin tubes and \((\kappa,Kr_{0}^{-\kappa})\)-thin \(k\)-plates on \(G\) down from scale \(r_{0}\). Then there exists a set \(B\subset X\times Y\) with \(\mu\times\nu(B)\leq K^{-1}\) such that \((\mu,\nu)\) and \((\nu,\mu)\) have \((\sigma+\eta,K^{d+1}r_{0}^{-(\sigma+\eta-(k-1))})\)-thin tubes on \(G\setminus B\) down from scale \(r_{0}\). Furthermore, \(\eta(\sigma,s,\kappa,k,d)\) is bounded away from zero on any compact subset of \(\{(\sigma,s,\kappa,k):\max(\sigma,k-1)<s\leq k\leq d-1\}\)._
**Remark 8.4**.: _The reader is advised to set \(r_{0}=1\) in the following argument, in which case it is a straightforward modification of [19, Lemma 2.8], with one small technical exception in the proof of the concentrated case, where we improve upon the dyadic pigeonholing step. Also if \(r_{0}=1\), then the simpler Theorem 1.8 can be used instead of Theorem 1.11 in the proof._
Proof.: We are given that for all \(r\in(0,r_{0}]\),
\[\nu(T\cap G|_{x})\leq K\cdot\frac{r^{\sigma}}{r_{0}^{\sigma-(k-1)}}\text{ for all $r$-tubes $T$ containing $x\in X$}, \tag{8.30}\]
\[\nu(W\cap G|_{x})\leq K\cdot\left(\frac{r}{r_{0}}\right)^{\kappa}\text{ for all $(r,k)$-plates $W$ containing $x\in X$}, \tag{8.31}\]
\[\mu(T\cap G|^{y})\leq K\cdot\frac{r^{\sigma}}{r_{0}^{\sigma-(k-1)}}\text{ for all $r$-tubes $T$ containing $y\in Y$}, \tag{8.32}\]
\[\mu(W\cap G|^{y})\leq K\cdot\left(\frac{r}{r_{0}}\right)^{\kappa}\text{ for all $(r,k)$-plates $W$ containing $y\in Y$}. \tag{8.33}\]
For \(x\in X\) and \(r\leq r_{0}\), let \(\mathcal{T}^{\prime\prime}_{x,r}\) denote the \(r\)-tubes through \(x\) such that
\[\nu(T\cap G|_{x})\geq K^{d+1}\cdot\frac{r^{\sigma+\eta}}{r_{0}^{\sigma+\eta-( k-1)}}. \tag{8.34}\]
Now, let \(\mathcal{T}^{\prime}_{x,r}\) denote a covering of \(\mathcal{T}^{\prime\prime}_{x,r}\) by essentially distinct \(2r\)-tubes. Then for \(x\in X\), since \(d(x,Y)\geq K^{-1}\), we have that the tubes in \(\mathcal{T}^{\prime}_{x,r}\) have \(\lesssim K^{d-1}\)-overlap on \(\nu\), so \(|\mathcal{T}^{\prime}_{x,r}|\lesssim\frac{r^{-(\sigma+\eta)}}{r_{0}^{-(\sigma +\eta-(k-1))}}\). For a dyadic \(r\in(0,r_{0}]\), let \(H_{r}=\{(x,y)\in G:y\in\cup\mathcal{T}^{\prime}_{x,r}\}\), where \(\cup\mathcal{T}^{\prime}_{x,r}\) denotes the union of the tubes in \(\mathcal{T}^{\prime}_{x,r}\).
**Claim.** There are \(\eta(\sigma,s,\kappa,k,d)>0\) and \(K_{0}(\eta)>0\) such that the following holds for \(K\geq K_{0}\). If \(\frac{r}{r_{0}}<K^{-1/\eta}\), then \(\mu\times\nu(H_{r})\leq 2\left(\frac{r}{r_{0}}\right)^{\eta}\). Furthermore, \(\eta(\sigma,s,\kappa,k,d)\) is bounded away from zero on any compact subset of \(\{(\sigma,s,\kappa,k,d):\max(\sigma,k-1)<s\leq k\leq d-1\}\).
We will be done if we show the claim. Indeed, let \(B_{1}=\cup_{r\leq r_{0}\text{ dyadic }}H_{r}\); then for any dyadic \(r\leq r_{0}\) and any \(r\)-tube \(T\) through some \(x\in X\), we either have \(T\in\mathcal{T}^{\prime}_{x,r}\), which means \(T\cap G|_{x}\setminus B_{1}|_{x}=\emptyset\), or the negation of (8.34) holds. In either case, we get
\[\nu(T\cap G|_{x}\setminus B_{1}|_{x})\leq K^{d+1}\cdot\frac{r^{\sigma+\eta}}{ r_{0}^{\sigma+\eta-(k-1)}}. \tag{8.35}\]
We have (8.35) for dyadic \(r\leq r_{0}\), but it also holds for all \(r\leq r_{0}\) at the cost of introducing a multiplicative factor of \(2^{\sigma+\eta}\leq 2^{k+1}\) on the RHS of (8.35). Thus, \((\mu,\nu)\) have \((\sigma+\eta,2^{k+1}\cdot K^{d}r_{0}^{-(\sigma+\eta-(k-1))})\)-thin tubes on \(G\setminus B_{1}\) down from scale \(r_{0}\). Now we move to upper-bounding \(\mu\times\nu(B_{1})\). By (8.30) and (8.34), we have \(H_{r}=\emptyset\) for all \(r>r_{0}K^{-d/\eta}\), and so if \(K\geq K_{0}\) from Claim, then
\[\mu\times\nu(B_{1})\leq\sum_{r\leq r_{0}K^{-d/\eta}\text{ dyadic }}\mu\times\nu(H_{r})\leq\sum_{r\leq r_{0}K^{-d/\eta}\text{ dyadic }}2\left(\frac{r}{r_{0}}\right)^{\eta}\leq C_{\eta}K^{-d}.\]
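Here the final inequality is a geometric series; concretely, writing the dyadic values as \(r=2^{-j}r^{\prime}\) with \(r^{\prime}\) the largest dyadic value \(\leq r_{0}K^{-d/\eta}\), one admissible choice is \(C_{\eta}=2/(1-2^{-\eta})\):
\[\sum_{r\leq r_{0}K^{-d/\eta}\ \text{dyadic}}2\left(\frac{r}{r_{0}}\right)^{\eta}\leq 2K^{-d}\sum_{j\geq 0}2^{-j\eta}=\frac{2}{1-2^{-\eta}}\,K^{-d}.\]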
Let \(K_{0}\) be the maximum of the value of \(K_{0}\) from Claim, \(2C_{\eta}\), and \(2^{k+1}\). Since \(d\geq 2\), we get \(\mu\times\nu(B_{1})\leq\frac{1}{2}K^{-1}\) and \((\mu,\nu)\) have \((\sigma+\eta,K^{d+1}r_{0}^{-(\sigma+\eta-(k-1))})\)-thin tubes on \(G\setminus B_{1}\) down from scale \(r_{0}\). We can analogously find \(B_{2}\subset X\times Y\)
with \(\mu\times\nu(B_{2})\leq\frac{1}{2}K^{-1}\) such that \((\nu,\mu)\) have \((\sigma+\eta,K^{d+1}r_{0}^{-(\sigma+\eta-(k-1))})\)-thin tubes on \(G\setminus B_{2}\) down from scale \(r_{0}\), and so \(B=B_{1}\cup B_{2}\) would be a good choice. Now we turn to proving the Claim.
_Proof of Claim_. We will choose \(\eta=\min\{\frac{1}{2}(6+\frac{15(d-1)}{s-\max(\sigma,k-1)})^{-2},\frac{1}{5} \varepsilon^{2}\}\), where \(\varepsilon\) is obtained from Theorem 1.11. From Remark 1.12 and the continuity of the function \((s,\sigma,k)\mapsto(s-\max(\sigma,k-1))^{-1}\), we see that \(\eta(\sigma,s,\kappa,k,d)\) is bounded away from zero on any compact subset of \(\{(\sigma,s,\kappa,k,d):\max(\sigma,k-1)<s\leq k\leq d-1\}\).
Suppose that the Claim is false. Let \(\mathbf{X}=\{x\in X:\nu(H_{r}|_{x})\geq\left(\frac{r}{r_{0}}\right)^{\eta}\}\). Then \(\mu(\mathbf{X})\geq\left(\frac{r}{r_{0}}\right)^{\eta}\).
Recall that for \(x\in X\), the fiber \(H_{r}|_{x}\) is covered by \(\mathcal{T}^{\prime}_{x,r}\), which is a set of cardinality \(\lesssim\frac{r^{-(\sigma+\eta)}}{r_{0}^{-(\sigma+\eta-(k-1))}}\). Let
\[\mathcal{T}_{x}=\{T\in\mathcal{T}^{\prime}_{x,r}:\nu(T\cap H_{r}|_{x})\geq \frac{r^{\sigma+3\eta}}{r_{0}^{\sigma+3\eta-(k-1)}}\},\qquad Y_{x}=(H_{r}|_{x })\cap\bigcup\mathcal{T}_{x}.\]
Then \(\nu(Y_{x})\geq\left(\frac{r}{r_{0}}\right)^{\eta}-\left(\frac{r}{r_{0}}\right) ^{2\eta}\geq\left(\frac{r}{r_{0}}\right)^{2\eta}\) for all \(x\in\mathbf{X}\). Furthermore, for every \(T\in\mathcal{T}_{x}\), we have
\[\frac{r^{\sigma+3\eta}}{r_{0}^{\sigma+3\eta-(k-1)}}\leq\nu(T\cap Y_{x})\leq \frac{r^{\sigma-\eta}}{r_{0}^{\sigma-\eta-(k-1)}}. \tag{8.36}\]
The upper bound follows from \(Y_{x}\subset H_{r}|_{x}\subset G|_{x}\), (8.30), and \(K\leq\left(\frac{r}{r_{0}}\right)^{-\eta}\). In fact, we have in general,
\[\nu(T^{(\rho)}\cap Y_{x})\leq\left(\frac{r}{r_{0}}\right)^{-\eta}\frac{\rho^{\sigma}}{r_{0}^{\sigma-(k-1)}},\qquad\rho\in[r,1],\ T\in\mathcal{T}_{x}.\]
We also take the time to state the thin plates assumption:
\[\nu(W^{(\rho)}\cap Y_{x})\leq\left(\frac{r}{r_{0}}\right)^{-\eta}\left(\frac{\rho}{r_{0}}\right)^{\kappa},\qquad\rho\in[r,1],\ W\text{ is a }(\rho,k)\text{-plate}.\]
Since \(\cup\mathcal{T}_{x}\) covers \(Y_{x}\), we get by the upper bound in (8.36), \(|\mathcal{T}_{x}|\gtrsim\frac{r^{-\sigma+\eta}}{r_{0}^{-\sigma+3\eta+(k-1)}} \nu(Y_{x})\geq\frac{r^{-\sigma+3\eta}}{r_{0}^{-\sigma+3\eta+(k-1)}}\). Hence, \(\mathcal{T}_{x}\) is a \((r,\sigma,r_{0}^{-(\sigma-(k-1))}\left(\frac{r}{r_{0}}\right)^{-5\eta})\)-set and \((r,\kappa,r_{0}^{-\kappa}\left(\frac{r}{r_{0}}\right)^{-5\eta},k-1)\)-set for each \(x\in\mathbf{X}\).
Let \(\gamma=\frac{15\eta}{s-\max(\sigma,k-1)}\). Call a tube \(T\in\mathcal{T}_{x}\) concentrated if there is a ball \(B_{T}\) with radius \(\left(\frac{r}{r_{0}}\right)^{\gamma}\) such that
\[\nu(T\cap B_{T}\cap Y_{x})\geq\frac{1}{3}\cdot\nu(T\cap Y_{x}). \tag{8.37}\]
Suppose that there is \(\mathbf{X}^{\prime}\subset\mathbf{X}\) with \(\mu(\mathbf{X}^{\prime})\geq\mu(\mathbf{X})/2\) such that for each \(x\in\mathbf{X}^{\prime}\), at least half the tubes of \(\mathcal{T}_{x}\) are non-concentrated. Since \(\mu(\mathbf{X}^{\prime})\geq\frac{1}{2}\mu(\mathbf{X})/2\geq\frac{1}{2}\left( \frac{r}{r_{0}}\right)^{2\eta}\) and \(\mu\) is Frostman with constant \(K\leq\left(\frac{r}{r_{0}}\right)^{-\eta}\), we can find a \((r,\sigma,\left(\frac{r}{r_{0}}\right)^{-3\eta})\)-set \(P\subset\mathbf{X}^{\prime}\). For each \(x\in\mathbf{X}^{\prime}\), the set of non-concentrated tubes \(\mathcal{T}_{x}^{\prime}\subset\mathcal{T}_{x}\) is a \((r,\sigma,2r_{0}^{-(\sigma-(k-1))}\left(\frac{r}{r_{0}}\right)^{-5\eta})\)-set and \((r,\kappa,2r_{0}^{-\kappa}\left(\frac{r}{r_{0}}\right)^{-5\eta},k-1)\)-set. Let \(\mathcal{T}=\cup_{x\in P}\mathcal{T}_{x}^{\prime}\). By Lemma 2.8, since \(d(X,Y)\geq K^{-1}\), we have that \(\mathcal{T}\) is contained in the \(O(K)\cdot r_{0}\)-neighborhood of \(H\). Now, we apply Theorem 1.11 with \(\overline{r}_{0}:=\min(O(K)\cdot r_{0},1)\). Since \(K\leq\left(\frac{r}{r_{0}}\right)^{-\eta}\) and \(\sigma\leq k\), we still have that for each \(x\in\mathbf{X}^{\prime}\), the set of non-concentrated tubes \(\mathcal{T}_{x}^{\prime}\) is a \((r,\sigma,2\overline{r}_{0}^{-(\sigma-(k-1))}\left(\frac{r}{\overline{r}_{0}} \right)^{-7\eta})\)-set and \((r,\kappa,2\overline{r}_{0}^{-\kappa}\left(\frac{r}{\overline{r}_{0}}\right)^ {-7\eta},k-1)\)-set. At this point, let us remark that implicit constants are dominated by \(\left(\frac{r}{\overline{r}_{0}}\right)^{-\eta}\geq K^{\eta}\) if \(K\geq K_{0}(\eta)\) is chosen large enough.
If \(\eta\leq\varepsilon^{2}/4\), where \(\varepsilon\) is obtained from Theorem 1.11, then
\[|\mathcal{T}|\geq\frac{r^{-2\sigma-2\sqrt{\eta}}}{\overline{r}_{0}^{-2(\sigma -(k-1))-2\sqrt{\eta}}}\geq\frac{r^{-2\sigma-\sqrt{\eta}}}{r_{0}^{-2(\sigma-( k-1))-\sqrt{\eta}}}.\]
In other words, we get a gain of \(\left(\frac{r}{r_{0}}\right)^{-\sqrt{\eta}}\), which means a two-ends argument gives an immediate contradiction. Specifically, by (8.36) and (8.37), we have for each non-concentrated \(T\in\mathcal{T}\), \(\nu\times\nu(\{(x,y):x,y\in T,d(x,y)\geq\left(\frac{r}{r_{0}}\right)^{\gamma} \})\geq\frac{2}{3}\nu(T\cap Y_{x})^{2}\geq\frac{r^{2\sigma+6\eta}}{r_{0}^{2 \sigma+6\eta-2(k-1)}}\). Thus, by Fubini, there exists a pair \((x,y)\) with \(d(x,y)\geq\left(\frac{r}{r_{0}}\right)^{\gamma}\) such that \(x,y\in T\) for \(\gtrsim\frac{r^{2\sigma+6\eta}}{r_{0}^{2\sigma+6\eta-2(k-1)}}|\mathcal{T}| \geq\left(\frac{r}{r_{0}}\right)^{-\sqrt{\eta}+6\eta}\) many tubes \(T\in\mathcal{T}\). However, since \(d(x,y)\geq\left(\frac{r}{r_{0}}\right)^{\gamma}\), we have that \(x,y\) can only lie in \(\lesssim\left(\frac{r}{r_{0}}\right)^{-(d-1)\gamma}\) many essentially distinct \(2r\)-tubes. Since \(\sqrt{\eta}-6\eta\geq(d-1)\gamma\), we get a contradiction.
Now we focus on the concentrated case: assume there is a subset \(\mathbf{X}^{\prime}\subset\mathbf{X}\) with \(\mu(\mathbf{X}^{\prime})\geq\mu(\mathbf{X})/2\) such that at least half of the tubes in \(\mathcal{T}_{x}\) are concentrated for all \(x\in\mathbf{X}^{\prime}\). This case is where we use the fact that \(\nu\) is a \(s\)-dimensional measure. Let \(\mathcal{T}_{x}^{\prime}\) denote the concentrated tubes and \(\{B_{T}:T\in\mathcal{T}_{x}^{\prime}\}\) denote the corresponding heavy \(\left(\frac{r}{r_{0}}\right)^{\gamma}\)-balls. Because the family \(\mathcal{T}_{x}\) has \(K\)-overlap on \(\mathrm{spt}(\nu)\), the set
\[H^{\prime}=\{(x,y):x\in\mathbf{X}^{\prime},y\in T\cap B_{T}\cap Y_{x}\text{ for some }T\in\mathcal{T}_{x}^{\prime}\}\]
has measure
\[(\mu\times\nu)(H^{\prime})\gtrsim K^{-1}\cdot\mu(\mathbf{X}^{\prime}) \cdot\inf_{x\in\mathbf{X}^{\prime}}|\mathcal{T}^{\prime}_{x}|\cdot\inf_{x\in \mathbf{X}^{\prime},T\in\mathcal{T}^{\prime}_{x}}\nu(T\cap B_{T}\cap Y_{x})\\ \gtrsim\left(\frac{r}{r_{0}}\right)^{2\eta}\cdot\frac{r^{-\sigma+ 3\eta}}{r_{0}^{-(\sigma-3\eta-(k-1))}}\cdot\frac{r^{\sigma+3\eta}}{r_{0}^{ \sigma+3\eta-(k-1)}}=\left(\frac{r}{r_{0}}\right)^{8\eta}.\]
Notice that if \((x,y)\in H^{\prime}\), then there is a tube \(T(x,y)\in\mathcal{T}^{r}\) containing \(x,y\) such that
\[\nu(B(y,2(r/r_{0})^{\gamma})\cap T(x,y))\gtrsim\frac{r^{\sigma+3\eta}}{r_{0}^{ \sigma+3\eta-(k-1)}}.\]
Thus, \(\nu\) can't be too concentrated near \(y\):
\[\nu(B(y,r))\leq K\cdot r^{s}\leq\frac{1}{2}\nu(B(y,2\left(\frac{r}{r_{0}} \right)^{\gamma})\cap T(x,y)),\]
assuming \(4\eta<s-\sigma\) and \(k-1<s\). (The relevant inequalities are \(K\leq\left(\frac{r}{r_{0}}\right)^{-\eta}\) and \(r^{s-\sigma-3\eta}\leq r_{0}^{s-\sigma-3\eta}\leq r_{0}^{k-1-\sigma-3\eta}\).)
Therefore, for each \((x,y)\in H^{\prime}\), we can choose a dyadic number \(r\leq\xi(x,y)\leq(r/r_{0})^{\gamma}\) such that
\[\nu(A(y,\xi(x,y),2\xi(x,y))\cap T(x,y))\geq\left(\frac{r}{r_{0}}\right)^{ \sigma+4\eta}\left(\frac{\xi(x,y)}{(r/r_{0})^{\gamma}}\right)^{\eta}r_{0}^{k},\]
where the annulus \(A(y,\xi,2\xi):=B(y,2\xi)\setminus B(y,\xi)\). (One remark: [19] used dyadic pigeonholing at this step, but we can't do this because then we would introduce a \(\log r_{0}^{-1}\) factor. Fortunately, we are allowed to introduce the decaying tail \(\left(\frac{\xi(x,y)}{(r/r_{0})^{\gamma}}\right)^{\eta}\), which is summable in \(\xi(x,y)\).)
Then, recalling that \((\mu\times\nu)(H^{\prime})\gtrsim\left(\frac{r}{r_{0}}\right)^{7\eta}\), we can further find \(r\leq\xi\leq\left(\frac{r}{r_{0}}\right)^{\gamma}\) such that
\[(\mu\times\nu)(H^{\prime\prime})\geq\left(\frac{r}{r_{0}}\right)^{8\eta}\left( \frac{\xi(x,y)}{(r/r_{0})^{\gamma}}\right)^{\eta},\text{ where }H^{\prime\prime}=\{(x,y)\in H^{ \prime}:\xi(x,y)=\xi\}\subset G.\]
By Fubini, we can find \(y\in Y\) such that \(\mu(H^{\prime\prime}|^{y})\geq\left(\frac{r}{r_{0}}\right)^{8\eta}\left(\frac {\xi(x,y)}{(r/r_{0})^{\gamma}}\right)^{\eta}\). Then by construction, \(H^{\prime\prime}|^{y}\) can be covered by a collection of tubes \(\mathcal{T}_{y}\subset\mathcal{T}^{r}\) containing \(y\) that satisfy
\[\nu(A(y,\xi,2\xi)\cap T)\geq\nu(A(y,\xi(x,y),2\xi(x,y))\cap T(x,y))\geq\left( \frac{r}{r_{0}}\right)^{\sigma+4\eta}\left(\frac{\xi(x,y)}{(r/r_{0})^{\gamma} }\right)^{\eta}r_{0}^{k}.\]
Finally, we claim that \(\mathcal{T}_{y}\) contains a subset \(\mathcal{T}^{\prime}_{y}\) whose directions are separated by \(\geq(r/\xi)\), such that \(|\mathcal{T}^{\prime}_{y}|\gtrsim\mu(H^{\prime\prime}|^{y})\cdot r^{\eta}\cdot \left(\frac{\xi r_{0}}{r}\right)^{\sigma}r_{0}^{-k}\) if \(\xi>\frac{r}{r_{0}}\), and \(|\mathcal{T}^{\prime}_{y}|\gtrsim\mu(H^{\prime\prime}|^{y})\cdot r^{\eta}\cdot\left(\frac{\xi}{r}\right)^{k}\) if \(r<\xi<\frac{r}{r_{0}}\).
Indeed, if \(\xi>\frac{r}{r_{0}}\), then any \(r/\xi\)-tube \(\mathbf{T}\) containing \(y\) has
\[\mu(\mathbf{T}\cap H^{\prime\prime}|^{y})\leq\mu(\mathbf{T}\cap G|^{y})\leq K \cdot\left(\frac{r}{\xi r_{0}}\right)^{\sigma}r_{0}^{k}\leq\left(\frac{r}{r_{0 }}\right)^{-\eta}\cdot\left(\frac{r}{\xi r_{0}}\right)^{\sigma}r_{0}^{k}.\]
If \(\xi<\frac{r}{r_{0}}\), then any \(r/\xi\)-tube \(\mathbf{T}\) containing \(y\) lies in the union of \((\frac{r}{\xi r_{0}})^{k}\) many \(r_{0}\)-tubes, and so
\[\mu(\mathbf{T}\cap H^{\prime\prime}|^{y})\leq\mu(\mathbf{T}\cap G|^{y})\leq K \cdot\left(\frac{r}{\xi r_{0}}\right)^{-k}r_{0}^{k}\leq\left(\frac{r}{r_{0}} \right)^{-\eta}\cdot\left(\frac{r}{\xi}\right)^{k}.\]
Thus, if \(\xi>\frac{r}{r_{0}}\), then it takes \(\gtrsim\mu(H^{\prime\prime}|^{y})\cdot r^{\eta}\cdot\left(\frac{\xi r_{0}}{r} \right)^{\sigma}r_{0}^{-k}\) many \((r/\xi)\)-tubes to cover \(H^{\prime\prime}|^{y}\), and perhaps even more to cover \(\cup\mathcal{T}_{y}\). We may now choose \(\mathcal{T}_{y}^{\prime}\subset\mathcal{T}_{y}\) to be a maximal subset with \((r/\xi)\)-separated directions to prove the claim for \(\xi>\frac{r}{r_{0}}\). A similar argument holds for \(\xi<\frac{r}{r_{0}}\).
Finally, let's first assume \(\xi>\frac{r}{r_{0}}\). Since the tubes of \(\mathcal{T}_{y}^{\prime}\) have bounded overlap on \(\mathbb{R}^{d}\setminus B(y,\xi)\), we obtain
\[\left(\frac{r}{r_{0}}\right)^{\sigma+13\eta}\left(\frac{\xi}{(r/ r_{0})^{\gamma}}\right)^{2\eta}\cdot\left(\frac{\xi r_{0}}{r}\right)^{\sigma}r_{0}^ {-k}\\ \lesssim\inf_{T\in\mathcal{T}_{y}^{\prime}}\nu(A(y,\xi,2\xi)\cap T )\cdot|\mathcal{T}_{y}^{\prime}|\lesssim\nu(B(y,2\xi))\leq C\cdot(2\xi)^{s}.\]
We will obtain a contradiction if we show the opposite inequality holds, for \(\gamma=\frac{15\eta}{s-\max(\sigma,k-1)}\). Since \(2\eta+\sigma<s\) and \(\xi\leq(\frac{r}{r_{0}})^{\gamma}\), it suffices to check \(\xi=(\frac{r}{r_{0}})^{\gamma}\).
If \(\xi<\frac{r}{r_{0}}\), then we obtain
\[\left(\frac{r}{r_{0}}\right)^{\sigma+13\eta}\left(\frac{\xi}{(r/ r_{0})^{\gamma}}\right)^{2\eta}\cdot\left(\frac{\xi r_{0}}{r}\right)^{k}r_{0}^{-k}\\ \lesssim\inf_{T\in\mathcal{T}_{y}^{\prime}}\nu(A(y,\xi,2\xi)\cap T )\cdot|\mathcal{T}_{y}^{\prime}|\lesssim\nu(B(y,2\xi))\leq C\cdot(2\xi)^{s}.\]
Again, since \(2\eta+k-1<s\), it suffices to check \(\xi=\frac{r}{r_{0}}\).
This proves the result.
Proof of Theorem 8.1.: By Propositions 7.3 (with \(\frac{r_{0}}{K_{2}}\) for \(r_{0}\)) and 8.2, there exists a set \(B_{0}\subset X\times Y\) with \(\mu\times\nu(B_{0})\lesssim K_{1}^{-1}\) such that \((\mu,\nu)\) and \((\nu,\mu)\) have \((0,K_{1}^{N}r_{0}^{k-1})\)-thin tubes on \(B_{0}^{c}\) down from scale \(r_{0}\), and \((\mu,\nu)\) and \((\nu,\mu)\) have \((\kappa,K_{1}\left(\frac{r_{0}}{K_{2}}\right)^{-\kappa})\)-thin \(k\)-plates on \((A\cup B_{0})^{c}\). Then iterate Proposition 8.3 applied to a uniform \(\eta(\sigma,s,\kappa,k,d)\). So initially we have \(K=\max(K_{1}^{N}K_{2}^{\kappa},K_{0}(\eta,k))\), and after each iteration, \(K\) becomes \(K^{d+1}\). After iterating \(\lesssim\eta^{-1}\) many times and letting \(B_{1}\subset X\times Y\) be the union of the \(B\)'s outputted from the Proposition (so \(\mu\times\nu(B_{1})\lesssim K^{-1}\leq K_{1}^{-1}\)), we find that \((\mu,\nu)\) and \((\nu,\mu)\) have \((\sigma,K^{(d+1)\eta^{-1}}r_{0}^{k-1})\)-thin tubes on \((A\cup B_{0}\cup B_{1})^{c}\). Then we can take \(B:=B_{0}\cup B_{1}\) to be our desired set.
### Proof of Theorem 1.13, general case
We will prove Theorem 1.13, which we restate here.
**Theorem 8.5**.: _Let \(k\in\{1,2,\cdots,d-1\}\), \(k-1<\sigma<s\leq k\), and \(\varepsilon>0\). There exist \(N,K_{0}\) depending on \(\sigma,s,k\), and \(\eta(\varepsilon)>0\) (with \(\eta(1)=1\)) such that the following holds. Fix \(r_{0}\leq 1\), and \(K\geq K_{0}\). Let \(\mu,\nu\) be \(\sim 1\)-separated \(s\)-dimensional measures with constant \(C_{\mu},C_{\nu}\) supported on \(E_{1},E_{2}\), which lie in \(B(0,1)\). Assume that \(|\mu|,|\nu|\leq 1\). Let \(A\) be the pairs of \((x,y)\in E_{1}\times E_{2}\) that lie in some \(K^{-1}\)-concentrated \((r_{0},k)\)-plate. Then there exists a set \(B\subset E_{1}\times E_{2}\) with \(\mu\times\nu(B)\lesssim K^{-\eta}\) such that for every \(x\in E_{1}\) and \(r\)-tube \(T\) through \(x\), we have_
\[\nu(T\setminus(A|_{x}\cup B|_{x}))\lesssim\frac{r^{\sigma}}{r_{0}^{\sigma-(k- 1)+N\varepsilon}}K^{N}.\]
_The implicit constant may depend on \(C_{\mu},C_{\nu},\sigma,s,k\)._
**Remark 8.6**.: _Note that in Theorem 8.1, we demand the stronger conclusion \(\mu\times\nu(B)\lesssim K^{-1}\)._
The idea is to apply Theorem 8.1 at different scales. As a start, if \(\varepsilon=1\), then we can directly apply Theorem 8.1 with \(K_{1}=K_{2}=K\) (and thus we may take \(\eta(1)=1\)).
We may assume \(\varepsilon=\frac{1}{M}\) for some \(M\). Let \(N\) be the large constant in Lemma 7.5, and let \(\eta_{n}=(N+2)^{n-M}\). For \(1\leq n\leq M\), let \(A_{n}\) be the pairs of \((x,y)\in E_{1}\times E_{2}\) that lie in some \(K^{-\eta_{n}}\)-concentrated \((r_{0}^{n\varepsilon},k)\)-plate. We remark that \(A_{M}=A\).
**Lemma 8.7**.: _Fix \(n\geq 1\). There exists a set \(B_{n}\subset A_{n}\) with \(\mu\times\nu(B_{n})\lesssim K^{-\eta_{n}}\) such that for every \(x\in E_{1}\) and \(r\)-tube \(T\) through \(x\) that intersects \(A_{n}|_{x}\), we have_
\[\nu(T\setminus(A_{n+1}|_{x}\cup B_{n}|_{x}))\lesssim\frac{r^{\sigma}}{r_{0}^{n \varepsilon(\sigma-(k-1))+N\varepsilon}}K^{N}.\]
Proof.: By Lemma 2.8, there exists an absolute constant \(C\) such that every \(r\)-tube through some \((x,y)\in A_{n}\) lies in some \(K^{-\eta_{n}}\)-concentrated \((Cr_{0}^{n\varepsilon},k)\)-plate. We can find a collection \(\mathcal{H}\) of essentially distinct \(K^{-\eta_{n}}\)-concentrated \((2Cr_{0}^{n\varepsilon},k)\)-plates such that each \(K^{-\eta_{n}}\)-concentrated \((Cr_{0}^{n\varepsilon},k)\)-plate is contained within some element of \(\mathcal{H}\). By Lemma 7.5, \(|\mathcal{H}|\lesssim K^{N\eta_{n}}\). By construction, every \(r\)-tube through some \((x,y)\in A_{n}\) is contained in some member of \(\mathcal{H}\). Apply Theorem 8.1 to each \(H\in\mathcal{H}\) with measures \(\mu|_{H},\nu|_{H}\) and \(K_{1}\to K^{\eta_{n+1}}\), \(K_{2}\to 2Cr_{0}^{-\varepsilon}\), and \(r_{0}\to r_{0}^{n\varepsilon}\) to obtain a set \(B_{H}\) with \(\mu\times\nu(B_{H})\lesssim K^{-\eta_{n+1}}\). Let \(B_{n}=\cup_{H\in\mathcal{H}}B_{H}\), and then \(\mu\times\nu(B_{n})\leq K^{N\eta_{n}}\cdot K^{-\eta_{n+1}}<K^{-\eta_{n}}\) since \((N+1)\eta_{n}<\eta_{n+1}\).
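The exponent bookkeeping in this final inequality is immediate from the definition \(\eta_{n+1}=(N+2)\eta_{n}\) (a quick check, using \(K\geq 1\)):
\[K^{N\eta_{n}}\cdot K^{-\eta_{n+1}}=K^{(N-(N+2))\eta_{n}}=K^{-2\eta_{n}}\leq K^{-\eta_{n}}.\]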
Proof of Theorem 8.5.: Let \(B=\cup_{n=1}^{M}B_{n}\); then \(\mu\times\nu(B)\leq K^{-\eta_{0}}\). Fix an \(r\)-tube \(T\) and \(x\in E_{1}\). Let \(n\leq M-1\) be the largest number such that \(T\) passes through points in \(A_{n}|_{x}\). Then by Lemma 8.7, we have \(\nu(T\setminus(A_{n+1}|_{x}\cup B|_{x}))\lesssim\frac{r^{\sigma}}{r_{0}^{n\varepsilon(\sigma-(k-1))+N\varepsilon}}K^{N}\). If \(n<M-1\), then \(T\cap A_{n+1}|_{x}=\emptyset\). In any case, we have \(\nu(T\setminus(A|_{x}\cup B|_{x}))\lesssim\frac{r^{\sigma}}{r_{0}^{\sigma-(k- 1)+N\varepsilon}}K^{N}\), completing the proof of Theorem 8.5.
## 9 Corollaries of Radial Projection Estimates
We prove a variant of Corollary 1.1.
**Proposition 9.1**.: _Fix \(s\in(k-1,k]\) and \(\eta>0\). Let \(\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})\) be measures with \(\mathcal{E}_{s}(\mu),\mathcal{E}_{s}(\nu)<\infty\) and \(\sim 1\)-separated supports. Suppose that \(\mu(H)=\nu(H)=0\) for each \(k\)-plane \(H\in\mathbb{A}(\mathbb{R}^{d},k)\). Then for \(\mu\)-almost all \(x\), for all sets \(Y\) of positive \(\nu\)-measure,_
\[\dim_{H}(\pi_{x}Y)\geq s-\eta.\]
Proof.: The proof is standard and follows [27, Proof of Proposition 6.9]. By Lemma 2.30, after passing to subsets of nearly full measure and replacing \(s\) by an arbitrary \(s^{\prime}<s\), we may assume that \(\mu(B_{r}),\nu(B_{r})\lesssim r^{s}\) for all \(r\in(0,1]\).
Fix \(\varepsilon>0\). By a compactness argument, there exists \(r_{0}>0\) such that \(\mu(H),\nu(H)<\varepsilon\) for all \((r_{0},k)\)-plates \(H\). When we apply Theorem 8.1, for \(\varepsilon>0\) sufficiently small the set \(A=\emptyset\). Thus, there exists \(B\subset X\times Y\) with \(\mu\times\nu(B)\lesssim\varepsilon\) such that for every \(x\in X\) and \(r\)-tube \(T\) through \(x\), we have
\[\nu(T\setminus B|_{x})\lesssim_{\eta,\varepsilon,s}r^{s-\eta}.\]
Thus, there is a set \(X\) with \(\mu(X)>1-O(\varepsilon)\) such that if \(x\in X\), then
\[\dim_{H}(\pi_{x}Y)\geq s-\eta\text{ for all }Y\text{ with }\nu(Y)\geq O(\varepsilon).\]
Taking \(\varepsilon\to 0\) completes the proof.
Using this, we prove Corollary 1.2.
**Corollary 9.2**.: _Let \(s\in(d-2,d]\). Then there exists \(\varepsilon(s,d)>0\) such that the following holds. Let \(\mu,\nu\) be Borel probability measures on \(\mathbb{R}^{d}\) with disjoint supports that satisfy \(\mathcal{E}_{s}(\mu),\mathcal{E}_{s}(\nu)<\infty\) and \(\dim_{H}(\operatorname{spt}(\nu))<s+\varepsilon(s,d)\). Further, assume that \(\mu,\nu\) don't simultaneously give full measure to any affine \((d-1)\)-plane \(H\subset\mathbb{R}^{d}\). Then there exist restrictions of \(\mu,\nu\) to subsets of positive measure (which we keep denoting \(\mu,\nu\)) such that the following holds. For almost every affine 2-plane \(W\subset\mathbb{R}^{d}\) (with respect to the natural measure on the affine Grassmannian), if the sliced measures \(\mu_{W}\), \(\nu_{W}\) on \(W\) are non-trivial, then they don't simultaneously give full measure to any line. In other words,_
\[(\gamma_{d,2}\times\mu)\{(V,x):\mu_{V,x}(\ell)\nu_{V,x}(\ell)=|\mu_{V,x}||\nu _{V,x}|>0\text{ for some }\ell\in\mathbb{A}(V+x,1)\}=0\]
_where we parametrize affine 2-planes as \(V+x\), for \(x\in\mathbb{R}^{d}\) and \(V\) in the Grassmannian \(\operatorname{Gr}(d,2)\) with the rotationally invariant Haar measure \(\gamma_{d,2}\)._
Proof.: First, if \(\mu(H)>0\) for some affine \((d-1)\)-plane \(H\), then \(\nu(H^{c})>0\) where \(H^{c}\) denotes the complement of \(H\) in \(\mathbb{R}^{d}\). By restricting \(\mu\) to \(H\) and \(\nu\) to \(H^{c}\) (and calling the results \(\mu,\nu\)), we see that the sliced measures \(\mu_{W}\) and \(\nu_{W}\) can't give full mass to any line \(\ell\) for any affine \((d-1)\)-plane \(W\), for the simple reason that \(\mu_{W}(\ell)>0\) forces \(\ell\subset H\), and \(\nu_{W}(\ell)>0\) forces \(\ell\subset H^{c}\). Likewise, we are done if
\(\nu(H)>0\) for some affine \((d-1)\)-plane \(H\subset\mathbb{R}^{d}\). Thus, assume \(\mu(H)=\nu(H)=0\) for all affine \((d-1)\)-planes \(H\).
With this assumption, the remainder of the proof is nearly identical to the proof of Proposition 6.8 in [27], except using Proposition 9.1 instead of [27, Proposition 6.9]. One can take \(\varepsilon(s,d)\) to be arbitrarily close to \(s-(d-2)\).
Finally, we can deduce Theorem 1.1 from either Proposition 9.1 or Corollary 9.2; see [19, Section 4] for details. The only case not yet considered in this paper is when either \(\mu\) or \(\nu\) gives positive mass to a \(k\)-plane. But this special case was considered in [19, Section 4] (briefly, if \(X\) gives positive mass to some \(k\)-plane, then radial projections become orthogonal projections and then we apply Kaufman's projection theorem; if \(Y\) gives positive mass to some \(k\)-plane \(H\), then for \(x\notin H\), we have \(\dim_{H}(\pi_{x}(Y))=\dim_{H}(Y)\).)
## Appendix A Proof of Balog-Szemeredi-Gowers
By a standard covering argument (e.g. see Section 3 of [12]), Theorem 2.27 follows from the case \(\delta=0\), which we prove below.
**Theorem A.1** (refined Theorem 4.1 of [30]).: _Let \(K\geq 1\) be a parameter. Let \(A,B\) be finite subsets of \(\mathbb{R}^{n}\), and let \(P\subset A\times B\) satisfy \(|P|\geq K^{-1}|A||B|\). Suppose that \(|A\overset{P}{+}B|\leq K(|A||B|)^{1/2}\), where \(A\overset{P}{+}B=\{a+b:(a,b)\in P\}\). Then one can find subsets \(A^{\prime}\subset A,B^{\prime}\subset B\) with \(|A^{\prime}|\geq\frac{1}{16K^{2}}|A|,|B^{\prime}|\geq\frac{1}{16K^{2}}|B|\) such that \(|A^{\prime}+B^{\prime}|\leq 2^{12}K^{8}(|A||B|)^{1/2}\) and \(|P\cap(A^{\prime}\times B^{\prime})|\geq\frac{|A||B|}{16K^{2}}\)._
Proof.: We follow the exposition in [24].
**Claim.** There exist subsets \(A^{\prime}\subset A,B^{\prime}\subset B\) with \(|P\cap(A^{\prime}\times B^{\prime})|\geq\frac{|A||B|}{16K^{2}}\), such that for each \(a\in A^{\prime},b\in B^{\prime}\), there are \(\geq\frac{|A||B|}{2^{12}K^{5}}\) many pairs \((a^{\prime},b^{\prime})\in A\times B\) such that \((a,b^{\prime})\), \((a^{\prime},b^{\prime})\), and \((a^{\prime},b)\in P\).
Assuming the claim, we will see how the theorem follows. First, we get \(|A^{\prime}||B^{\prime}|\geq|P\cap(A^{\prime}\times B^{\prime})|\geq\frac{|A||B|}{16K^{2}}\). Since \(|A^{\prime}|\leq|A|\) and \(|B^{\prime}|\leq|B|\), we get \(|A^{\prime}|\geq\frac{|A|}{16K^{2}}\) and \(|B^{\prime}|\geq\frac{|B|}{16K^{2}}\).
Next, for \(a\in A^{\prime},b\in B^{\prime}\), we have
\[a+b=(a+b^{\prime})-(a^{\prime}+b^{\prime})+(a^{\prime}+b).\]
Thus, there are \(\geq|A||B|2^{-12}K^{-5}\) many solutions to \(a+b=x-y+z\) with \(x,y,z\in A\overset{P}{+}B\). Since \(|A\overset{P}{+}B|\leq K(|A||B|)^{1/2}\), we get \(|A^{\prime}+B^{\prime}|\lesssim\frac{K^{3}(|A||B|)^{3/2}}{|A||B|2^{-12}K^{-5}}= 2^{12}K^{8}|A|^{1/2}|B|^{1/2}\).
Now, we prove the claim. For convenience, we can prune \(P\) to satisfy \(|P|=K^{-1}|A||B|\) (this is not necessary but will make the proof look nicer). Treat \((A\cup B,P)\) as a bipartite graph with an edge between \(a\in A\) and \(b\in B\) if \((a,b)\in P\). Then we want to find \(A^{\prime},B^{\prime}\) such that there are many paths of length \(3\) between any \(a\in A^{\prime},b\in B^{\prime}\).
The average degree of a vertex in \(A\) is \(K^{-1}|B|\). Thus, if we delete the vertices in \(A\) with degree \(\leq\frac{1}{2}K^{-1}|B|\), then at least \(\frac{1}{2K}|A||B|\) many edges remain. Let \(E\) be the set of edges. For \(v\in A\cup B\), let \(N(v)\) be the set of neighbors of \(v\).
Now consider a uniformly random vertex \(v\in B\). On average, it has \(\frac{|E|}{|B|}\geq\frac{1}{2K}|A|\) many neighbors.
Now, we say \((a,a^{\prime})\in A^{2}\) is bad if \(|N(a)\cap N(a^{\prime})|<\frac{1}{128K^{3}}|B|\). For \(v\in B\), let \(\text{Bad}_{v}\) be the set of bad pairs in \(N(v)^{2}\). There are \(\binom{|A|}{2}\) many pairs in \(A\), so (expectation is taken over uniformly chosen \(v\in B\))
\[\mathbb{E}[|\text{Bad}_{v}|]<\binom{|A|}{2}\cdot\frac{1}{128K^{3}}<\frac{|A|^{ 2}}{256K^{3}}.\]
If \(A_{bad,v}\) is the set of vertices of \(A\) that lie in at least \(\frac{|A|}{32K^{2}}\) many pairs of \(B_{v}\), then
\[\mathbb{E}[|A_{bad,v}|]\leq\frac{2\mathbb{E}[|B_{v}|]}{|A|/(32K^{2})}<\frac{|A |}{4K}.\]
Finally, let \(A_{v}=N(v)\setminus A_{bad,v}\). Then by linearity of expectation,
\[\mathbb{E}[|A_{v}|]=\mathbb{E}[|N(v)|]-\mathbb{E}[|A_{bad,v}|]>\frac{|A|}{2K} -\frac{|A|}{4K}=\frac{|A|}{4K}.\]
Thus, there exists \(v\in B\) such that \(|A_{v}|>\frac{|A|}{4K}\). Then, let \(A^{\prime}=A_{v}\) and
\[B^{\prime}=\{w\in B:|N(w)\cap A^{\prime}|\geq\frac{|A|}{16K^{2}}.\]
Let \(E(X,Y)\) be the number of edges between \(X\) and \(Y\). We first check that \(E(A^{\prime},B^{\prime})\geq\frac{|A||B|}{16K^{2}}\). Indeed, since every vertex of \(A\) has degree \(\geq\frac{|B|}{2K}\), we have
\[|E(A^{\prime},B)|\geq\frac{|A^{\prime}||B|}{2K}\geq\frac{|A||B|^{2}}{8K^{2}}.\]
On the other hand, every vertex in \(B\setminus B^{\prime}\) corresponds to fewer than \(\frac{|A|}{16K^{2}}\) many edges of \(A^{\prime}\), so \(|E(A^{\prime},B\setminus B^{\prime})|\leq\frac{|A||B|^{2}}{16K^{2}}\). Hence, \(|E(A^{\prime},B^{\prime})|\geq\frac{|A||B|^{2}}{16K^{2}}\).
Finally, for any \(v\in A^{\prime}\), \(w\in B^{\prime}\), we know that \(w\) has at least \(\frac{|A|}{16K^{2}}\) many neighbors in \(A^{\prime}\), and fewer than \(\frac{|A|}{32K^{2}}\) of those form a bad pair with \(w\). For the remaining \(\geq\frac{|A|}{32K^{2}}\) vertices \(v^{\prime}\) that do not form a bad pair with \(w\), there are \(\geq\frac{|B|}{128K^{3}}\) many vertices \(w^{\prime}\in B\) that are common neighbors of \(v,v^{\prime}\). Thus, we get at least \(\frac{|A|}{32K}\cdot\frac{|B|}{128K^{3}}=\frac{|A||B|}{2^{12}K^{5}}\) many paths \((v,w^{\prime},v^{\prime},w)\) between \(v\) and \(w\).
|
2309.06680 | STUPD: A Synthetic Dataset for Spatial and Temporal Relation Reasoning | Understanding relations between objects is crucial for understanding the
semantics of a visual scene. It is also an essential step in order to bridge
visual and language models. However, current state-of-the-art computer vision
models still lack the ability to perform spatial reasoning well. Existing
datasets mostly cover a relatively small number of spatial relations, all of
which are static relations that do not intrinsically involve motion. In this
paper, we propose the Spatial and Temporal Understanding of Prepositions
Dataset (STUPD) -- a large-scale video dataset for understanding static and
dynamic spatial relationships derived from prepositions of the English
language. The dataset contains 150K visual depictions (videos and images),
consisting of 30 distinct spatial prepositional senses, in the form of object
interaction simulations generated synthetically using Unity3D. In addition to
spatial relations, we also propose 50K visual depictions across 10 temporal
relations, consisting of videos depicting event/time-point interactions. To our
knowledge, no dataset exists that represents temporal relations through visual
settings. In this dataset, we also provide 3D information about object
interactions such as frame-wise coordinates, and descriptions of the objects
used. The goal of this synthetic dataset is to help models perform better in
visual relationship detection in real-world settings. We demonstrate an
increase in the performance of various models over 2 real-world datasets
(ImageNet-VidVRD and Spatial Senses) when pretrained on the STUPD dataset, in
comparison to other pretraining datasets. | Palaash Agrawal, Haidi Azaman, Cheston Tan | 2023-09-13T02:35:59Z | http://arxiv.org/abs/2309.06680v2 | # STUPD: A Synthetic Dataset for Spatial and Temporal Relation Reasoning
###### Abstract
Understanding relations between objects is crucial for understanding the semantics of a visual scene. It is also an essential step in order to bridge visual and language models. However, current state-of-the-art computer vision models still lack the ability to perform spatial reasoning well. Existing datasets mostly cover a relatively small number of spatial relations, all of which are static relations that do not intrinsically involve motion. In this paper, we propose the Spatial and Temporal Understanding of **P**repositions **D**ataset (STUPD) - a large-scale video dataset for understanding static and dynamic spatial relationships derived from prepositions of the English language. The dataset contains 150K visual depictions (videos and images), consisting of 30 distinct spatial prepositional senses, in the form of object interaction simulations generated synthetically using Unity3D. In addition to spatial relations, we also propose 50K visual depictions across 10 temporal relations, consisting of videos depicting event/time-point interactions. To our knowledge, no dataset exists that represents temporal relations through visual settings. In this dataset, we also provide 3D information about object interactions such as frame-wise coordinates, and descriptions of the objects used. The goal of this synthetic dataset is to help models perform better in visual relationship detection in real-world settings. We demonstrate an increase in the performance of various models over 2 real-world datasets (ImageNet-VidVRD and Spatial Senses) when pretrained on the STUPD dataset, in comparison to other pretraining datasets.
## 1 Introduction
Identifying relationships between objects are crucial for semantic understanding of the visual world. However, current state-of-the-art computer vision models still find it challenging to understand relationships [1; 2; 3; 4; 5; 6]. For instance, even for simple relations in 2D pixel space such as "left", "right", "above" and "below", Cho et al. [2] found a large gap between upper-bound accuracy and the performance of generative transformers. Compared to an upper-bound accuracy of 99.3%, the average accuracy of 3 models was only 24.7%, with the best model achieving 51.2%.
In human languages, relational concepts are conveyed using prepositions, which are words used "_to show a relationship in space or time"_[7]. Examples of prepositions include "above", "before" and "with". Existing computer vision datasets cover English parts-of-speech such as nouns/objects [8; 9], verbs/actions [10; 11; 12], adjectives/attributes [13; 14], etc. However, despite their importance, prepositions are significantly understudied in computer vision as a distinct class of concepts.
Prepositions may have one or more senses, which are distinct definitions of a word in different contexts. For example, the preposition "against" has 2 distinct spatial senses [15]. One refers to a situation where 2 objects are moving in opposite directions and the other where an object is leaning
on another. For simplicity, we will henceforth use the term "preposition" to refer to both prepositions (the words) and its senses (the definitions), except where clear distinctions are required. A detailed glossary of all terms introduced in this paper is included in the Appendix.
From Table 1, it can be observed that image datasets that contain hundreds to thousands of relation classes actually have fewer than 30 prepositions (an exception is the recent VSR dataset [16] which covers 65 prepositions). As for existing video datasets, only 6-8 prepositions are covered. Furthermore, datasets thus far contain only _static_ prepositions, which are prepositions that do not necessarily involve any motion, such as "above" and "behind". The vast majority of such examples come from very simple and intuitive preposition classes such as "on" or "near", which are easier to label by human annotators. None of the existing datasets include _dynamic_ prepositions, which are prepositions that intrinsically involve motion, such as "into", "onto", etc. Finally, existing datasets are also extremely imbalanced due to the long-tailed distribution of relationship occurrences.
This kind of highly restrictive relational domain in existing datasets is not an effective approach towards visual reasoning, because it only focuses on position, while ignoring many fundamental relational characteristics, such as relative speed, contact and physical forces of interaction. The prospect of the ability to distinguish between different spatial (as well as temporal) configurations with higher granularity, thus, makes it worthwhile to study the wider variety of prepositions for effective visual reasoning. Through this, datasets can be richer in information, and models would be able to differentiate between many related but different relational categories (such as "above" and "over"). A granular understanding of prepositional relations also allows for better understanding of language semantics, which is an equally important and complementary aspect of visual reasoning in the understanding of a scene.
Apart from spatial reasoning, understanding temporal relations is also a crucial component for visual reasoning. Many relations require understanding dynamics of interactions over time. Visual representation of temporal relationships is a challenging task because temporal concepts (such as time) are unintuitive to visualize. This is one of the reasons why temporal relations are heavily underrepresented in visual reasoning datasets. Without effectively understanding temporal relations, spatial relations remain isolated, and their progression cannot be understood. Thus spatial and temporal relations should be treated as equally important aspects of visual reasoning.
**Contributions.** To address these issues, we created the Spatial and Temporal Understanding of Prepositions Dataset (STUPD) as the first dataset to cover dynamic spatial prepositions and include temporal relations. The contributions of this paper are as follows:
1. **Comprehensive synthetic dataset for spatial relations**: This paper introduces a dataset consisting of 150,000 images and videos that capture 30 different spatial relations. The dataset incorporates physical interactions using a sophisticated physics engine coupled with diverse backgrounds.
2. **Comprehensive synthetic dataset for temporal relations**: In addition to the spatial relations dataset, this paper introduces a separate dataset comprising 50,000 sets of videos depicting 10 different temporal relationships. Through this, the paper also introduces a definitive framework for defining and distinguishing between different temporal relations, for future works to build on.
3. **Detailed 3D information and bounding box annotations**: To enhance the quality and usability of the dataset, each image and video in the dataset is accompanied by detailed 3D information and bounding box annotations.
4. **Effective pre-training dataset with real-world applicability**: The proposed datasets are primarily designed to serve as a highly effective pre-training resource for computer vision models. Pre-training on this dataset provides a solid foundation for subsequent fine-tuning on real-world datasets. Later in the paper, we demonstrate that pretraining on STUPD increases performance on real-world visual reasoning tasks.
## 2 Related Work
### Image Datasets
In recent years, image-based datasets have attempted to present spatial relationships through simple 2D object interactions [14; 18; 21]. However, 2D interactions restrict the scope of distinguishable visual relations. Synthetically generated datasets are becoming increasing popular as a way to bridge the information gap in image datasets through 3D spatial relations [3; 19]. An example is the CLEVR dataset [19] which consists of synthetic images with objects arranged in various configurations to promote generalization and systematic reasoning.
However, synthetic datasets in this domain do not provide three-dimensional information about object location or orientation, rendering the perceptual input provided as effectively two-dimensional. Some works such as Goyal et al [17] provide annotated synthetic 3D scenes. This allows models to better understand object interactions and distinguish between subtle visual relations such as impact and contact.
A common theme across different visual relation datasets is to mix complex actions and prepositional relations [26]. For instance, in the Action Genome dataset [11], the action "sitting" and preposition "on" is combined into a single dynamic relation "sitting on a sofa". However, actions themselves require a fundamental understanding of spatial relations, as put forth by Hua et al. [27], who argue that actions can be decomposed into chains of consecutive spatial relations between objects. Hence, relation understanding tasks should sit at the root of all other tasks that involve understanding more complex spatio-temporal relationships. Similarly, many datasets [16; 22] present a larger number of spatial relations, which are overlapping in meaning. For example, "below" and "beneath", or "adjacent to" and "beside". Both pairs includes different prepositions but are essentially describing the same preposition sense. Hence, the mixing of spatial relations with similar meanings results in redundant representations.
#### 2.1.1 Graph-based scene relation representation
Relations can be explicitly modeled as graphs [11; 14; 20; 22; 28], which can substitute the need for 3D information in a restricted manner. This form of representation can also allow multiple spatial relations to co-exist, which may be useful in understanding complex scenes. While these works have shown strong performance in identifying low-level object relationships, understanding of higher-order relationships are still not clearly understood through this approach.
### Video Datasets
Many spatial relations have a dynamic nature, meaning that they intrinsically involve motion (e.g. "onto"), which cannot be represented by image datasets. Various works have proposed video
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline
**Type** & **Dataset** & **Year** & \begin{tabular}{c} **3D** \\ **info?** \\ \end{tabular} & \begin{tabular}{c} **\# Preps** \\ \end{tabular} & \begin{tabular}{c} **Dyn?** \\ \end{tabular} & \begin{tabular}{c} **Tem?** \\ \end{tabular} &
\begin{tabular}{c} **Real/** \\ **Synth** \\ \end{tabular} & **Size** \\ \hline \hline Image & VSR [16] & 2022 & N & **65** & N & N & Real & 10K \\ Image & Liu et al. [3] & 2021 & N & 6 & N & N & Synth & 83K \\ Image & Rel3D [17] & 2020 & **Y** & 25 & N & N & Synth & 27.3K \\ Image & SpatialSense [18] & 2019 & N & 9 & N & N & Real & 11.5K \\ Image & CLEVR [19] & 2017 & N & 4 & N & N & Synth & 100K \\ Image & Visual Genome 50 [20] & 2017 & N & 21 & N & N & Real & 108K \\ Image & VRD [21] & 2016 & N & 24 & N & N & Real & 5K \\ Image & Scene Graphs [22] & 2015 & N & 29 & N & N & Real & 5K \\ \hline VR & iGibson 2.0 [23] & 2021 & Y & 6 & N & N & Synth & N/A \\ \hline Video & CATER [24] & 2020 & N & 7 & N & **Y** & Real & 5.5K \\ Video & Action Genome [11] & 2020 & N & 6 & N & N & Real & 1.75K \\ Video & VidOR [25] & 2019 & N & 8 & N & N & Real & 10K \\ \hline Video & **STUPD (ours)** & 2023 & **Y** & 40 & **Y** & **Y** & Synth & **200K** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of relations datasets. (Preps = Prepositions (relations), Dyn = Dynamic in nature, Tem = Temporal, Synth = Synthetically generated)
datasets [11; 23; 25], but they only cover the basic static positional prepositions (e.g. "behind", "above" and "below"). Shang et al. [25] have a few additional static prepositions such as "(facing) towards", but overall, dynamic spatial and also temporal prepositions are severely under-researched. The CATER dataset [24] covers just the 3 most basic temporal prepositions ("before", "during" and "after").
### Other related visual reasoning tasks
Various other tasks are related to visual relationship reasoning, which require the use of both spatial and temporal cues to match visual features with labels for objects and relations. This includes tasks such as video grounding [29; 30; 31] and visual question answering [32; 33]. Hence, many methods from visual relationship reasoning can be transferred to the above mentioned tasks and vice versa.
## 3 The STUPD Dataset
The STUPD dataset is a dataset for visual reasoning. It contains synthetically generated images and videos depicting spatial and temporal relations between objects. These relations are derived from the list of prepositions of the English language, which are words representing relations between different subjects within a sentence. The STUPD dataset provides 5000 images/videos for each preposition, resulting in 150,000 images and videos corresponding to spatial relations (referred to as **Spatial-STUPD**) and 50,000 collections of videos corresponding to temporal relations (referred to as **Temporal-STUPD**). The videos contains realistic interactions between objects of different kinds. The dataset is statistically balanced with respect to object combinations. The dataset can be used to pretrain models to perform visual reasoning better, and we demonstrate this in the paper.
### Predicate Vocabulary
The Prepositions Project (TPP) [34], a database of all prepositions in the English language, lists 373 prepositions in total. We use TPP as the source for our vocabulary, and select prepositions only from the two largest groups in TPP (spatial and temporal prepositions) for this paper. We first apply a structured filtering process on the list of all prepositions from TPP, the details of which are outlined in the appendix. Through the filtering process, we shortlisted **30 spatial prepositions and 10 temporal prepositions**. These prepositions act as predicate relations for our visual reasoning dataset. Spatial relation categories are divided into two subcategories - static spatial relations (relations that do not involve relative movement between objects) and dynamic spatial relations (relations that involve movement of the objects). We describe all these relation categories, along with their definitions and context of usage, in the appendix.
### Setting and structure
#### 3.2.1 Spatial dataset structure
Consider a spatial relation triplet _<subject, predicate, object>_. For each predicate (relation) in the STUPD dataset, _subject_ and _object_ are represented by a collection of 3D objects. These 3D object templates (also referred to as prefabs) were selected from the ShapeNet dataset [35], which contains high-quality annotated 3D templates of real-life objects. The detailed curation process of the prefabs used is explained in the appendix.
We group all object categories into 8 supercategories based on size and physical properties to simplify the types of interactions between different objects. These 8 supercategories are _small objects_ (everyday objects that are small enough to be easily maneuvered), _furniture, vehicles, person, large-scale grounded objects_ (large heavy objects that are usually grounded, e.g. buildings), _containers, track_ (roads and paths), and _tunnels_. The idea behind supercategories is that categories within a supercategory have similar behavior of physical interaction.
Overall, we curated 183 prefab instances varying across 45 object categories and 8 supercategories. An overview of the 3D prefabs used, along with other design choices are presented in the appendix.
It should be noted that in the STUPD dataset, the representation of relation triplets _<subject, predicate, object>_ has a slightly different meaning than in previous works. Certain predicate relations in our
vocabulary such as <_(moving) up>_ and <_(scattered) all over (the scene)>_ describe the relation between the predicate and the subject category (and not any object category). Hence, the <_object_> is empty for certain spatial relation categories. Note that subjects as well as objects can refer to multiple instances of the same category of prefabs.
#### 3.2.2 Temporal dataset structure.
Temporal predicates in the STUPD dataset depict a relation between 2 events (a stretch of time where something occurs) or time points (a single moment of time). Consider the temporal relation triplet <_Event/TimePoint A, relation, Event/TimePoint B>_. The challenging part of visual temporal relation representation is the visual depiction of events and time points. In this dataset, temporal relations are represented by means of videos, where events and time points are depicted using the spatial dataset generated. Each event is represented by a spatial relation (static or dynamic) that occurs over variable time spans. A static event simply means that there is an occurrence of a static relation a certain number of frames. On the other hand, time points are represented by single frame inside the temporal videos, and these are sampled from only static spatial events, since a single frame cannot represent the temporal nature of a dynamic spatial relation.
### Dataset characteristics
#### 3.3.1 Spatial-STUPD dataset characteristics
All static spatial relations are generated as single RGB images(frames)(\(f=1\)), while dynamic spatial relations are generated as a collection of \(f=30\) consecutive RGB images (frames), which can be combined together to create a video depicting object interactions with dynamic movement. We synthetically generate 5000 examples of each spatial relation using the Unity3D perception platform ([36]), which allows the use of a physics engine to emulate realistic physical interactions between different objects.
To ensure enough variance in the dataset, we randomize a variety of parameters of the generated images, such as the selection of the objects (in a constrained manner to only allow selective supercategory interactions, described above), the color of the objects, the distance between the objects, the relative position and rotation of the objects, the perspective of the camera, and even the background of the image. All visual relations in the STUPD dataset are with respect to the camera's perspective, hence removing any ambiguity of perspective. We provide annotations for each spatial interaction in the form of subject/object information (including category, supercategory, bounding box information and 3D coordinates), as well as the predicate relation category. Note that all spatial relations are independent of each other. Hence each spatial interaction corresponds to only one predicate relation category. Some examples of our dataset can be seen in Figure1 as well as in the appendix.
#### 3.3.2 Temporal-STUPD dataset characteristics
We generate pairs of videos of a constant length of \(W=150\) frames (referred to as the temporal window), where each video corresponds to the occurrence of a single event or time point. An important characteristic of temporal relations is the overlapping nature of temporal relation predicates. Event/TimePoint interactions can represent multiple temporal relations simultaneously. For example, consider _Event A_ which occurs just after _TimePoint B_. In this case, temporal triplets <_Event A, after, TimePoint B_> and <_Event A, around, TimePoint B_> both apply. Hence in the STUPD dataset, each temporal interaction may have multiple temporal relation categories associated. An overview of all temporal relations is presented in Figure 2.
### Statistics of the STUPD dataset
#### 3.4.1 Spatial-STUPD.
Our primary goal, through this dataset, is to create a well balanced dataset, with a wide and balanced variety of _subject-object_ interactions. Firstly, each spatial and temporal relation has 5000 datapoints each. As mentioned above, we constrain the interaction of supercategories to emulate real-world physical constraints and ensure similarity of size of _subject_ and _object_. During dataset generation, we adjust the number of examples generated for each subject/object supercategory pair based on the
total number of object categories interacting, so that individual category occurrences are more or less equal throughout the dataset. In Figure 3(a), we include the distribution of all supercategory occurrences in the STUPD dataset (including both subjects as well as objects). The frequencies are normalized by the number of prefab categories associated with each supercategory and presented as a fraction (percent). As can be seen, the normalized distribution of the majority of supercategories is more or less similar. A couple of observations are as follows.
1. Supercategories 'track' and 'tunnel' have lower frequencies because of their association with only a small number of spatial relations (such as _<(movement) along track>_ and _<subject (passing) through a tunnel>_.
2. It can be seen that the frequency of'small objects' is slightly lower than others. This is a conscious design choice, because of the size mismatch between other supercategories, having much larger sizes (such as buildings, vehicles, or furniture). We, however, try to maintain balance by including appropriate numbers of interactions between the larger objects within the small objects supercategory and other supercategories.
#### 3.4.2 Temporal-STUPD.
Since Events/Time Points are randomly sampled from Spatial-STUPD, the distribution of Events/Time Points is similar to that in Figure 3(a). In Figure 3(b), we illustrate the occurrence of different supercategories across the 50,000 data points. Each predicate has atleast 5000 occurrences. However, because of the overlap between many temporal relations, many temporal predicates occur more
Figure 1: Some examples of Spatial-STUPD, which contains 30 spatial relations. These relations can be divided into two categories - static (involving no motion) and dynamic (involving relative motion between the subject and object)
Figure 2: We propose 10 temporal relations representing interactions between different events or time points within a specified temporal window of \(W\) frames. Different temporal prepositions are used in specific contexts in English. For each relation, A, B, and/or C can be an event(E), time point(T) or either event or a time point(E/T). Each temporal relation can have multiple types of event/time point interactions. The translucent shade of certain events in the figure represents the possible variation in the point of occurrence.
frequently in the dataset. For instance, <_before_> is a subset of <_by_>, and hence <_by_> occurs whenever <_before_> occurs, but not necessarily vice versa. Similary, <while_> is a subset of <during_> (related to two events occuring simultaneously) and <since_> is a subset of <at> (related to an event occurring at a particular time instance).
## 4 Baselines
### Spatial-STUPD baselines
In this subsection, we aim to demonstrate that STUPD is an effective pretraining dataset for real world visual reasoning tasks. Ideally, visual reasoning models first pretrained on STUPD, and then transfered to real world datasets, should results in an increase in performance. To demonstrate the effect of STUPD on real world visual reasoning tasks, we choose two real-world visual reasoning datasets - the SpatialSense Dataset [18] (to demonstrate performance on static spatial relations) and ImageNet-VidVRD [37] (to demonstrate performance on dynamic spatial relations).
#### 4.1.1 Selection of baseline models
We choose six baselines to evaluate our dataset, inspired by the baselining approach following in [18]. These models include two simple models (_Language-based model_, which only takes verbal phrases as input, and _Coordinate-based model_, which only takes the coordinates of objects as input) and four deep-learning based model (_DRNet[38]_, _VIPCNN_[39], _PPRFCN_[40], and _VTransE_[41]). While the aforementioned deep-learning based models were specifically designed for visual relationship reasoning, the two simple models were chosen to highlight different aspects of Spatial-STUPD as well as other datasets. For example, the _Language-based_ model (which takes the subject and object phrases from a relation triplet, and predicts the predicate) highlights the statistical bias related to subject/object distribution, as is explained in detail in the following subsection. On the other hand, the _Coordinate-based_ model (which takes relative spatial coordinates of the subject and object, as well as their bounding box coordinates, and predicts the predicate) highlights the role of coordinate as well as bounding box information and bounding box, while being isolated from any visual features. Hence, through the selection of the various baselines, the role of various components of the dataset can be individually understood.
Additionally, a random baseline is also presented. The models are evaluated on a single label predicate classification task (unlike various previous approaches where the task is a binary classification task to evaluate if the relation triplet, when given as input, holds true). The architecture of the models are adjusted according to the task and dataset used. Further details can be found in the appendix. It should be noted that since the architecture of various models has been slightly adjusted to fit the task as well as training dataset, we refer to a model \(X\) as '_X-based_', to differentiate between the original proposed architecture and the model used in this paper.
Figure 3: Dataset statistics. (a) The occurrence of prefab categories is roughly consistent throughout the dataset. (b) The blue line represents the minimum number of temporal relation occurrence. A single temporal interaction can have multiple temporal relation predicates associated.
#### 4.1.2 Model performance on STUPD
First we train the baseline models on only the Spatial-STUPD dataset. The results are shown in Table2. We note the suboptimal accuracy on the _language-based_ model. This is infact, a positive outcome. Predicting the predicate based on only the subject and object category information represents imbalance and/or bias within the dataset. A well-balanced dataset should produce low accuracy on this task, as is seen in the accuracy results. Next, we observe the best performance on Spatial-STUPD is achieved through the _VTransE-based_ model, followed by the _coordinate-based_ model, which is a relatively simple model. This demonstrates the higher importance of spatial coordinate/bounding box information over visual features. We also observe the higher performance on dynamic predicates in comparison to static predicates, with the exception of the _DRNet-based_ model. This indicates that dynamic data is loaded with more information for spatial reasoning, hence establishing the need for datasets with dynamic information. On the other hand, _DRNet-based_[38] model outperforms all models on static data. This special suitability towards images rather than videos may be because of architectural design choices such as the isolation of bounding box feature maps.
#### 4.1.3 Comparison between different pretraining datasets
We propose STUPD primarily as an effective pretraining dataset before transfering on real-world dataset. To demonstrate the effect of pretraining a model on STUPD, we compare the results of pretraining on various datasets. For each of the two real-world datasets, we compare the performance with two pretraining datasets - ImageNet dataset [42](for the SpatialSense dataset)/KINETICS-400 dataset [43](for the ImageNet-VidVRD dataset), and the CLEVR dataset [19]. While the ImageNet/KINETICS-400 dataset serve as general large-scale pretraining datasets for many real-world tasks, the CLEVR dataset is a sythetic dataset with a similar setting as Spatial-STUPD. In general, one of the main purpose of any synthetic dataset is to aid models through additional data proxies for real-world settings, hence serving as effective pretraining options. The results of pretraining on Spatial-STUPD is compared with no pretraining (i.e. direct training on the real world dataset) and other pretraining datasets in Table 3 and Table 4. The details of the training tasks are included in the appendix.
It can be seen that Spatial-STUPD dataset, when used as a pretraining dataset for visual relationship reasoning tasks, improves performance on real-world datasets, especially for deep learning models. On the other hand, CLEVR does not lead to a significant increase in performance in comparison to from-scratch training in most cases. Finally, it can be seen that ImageNet (or KINETICS-400) pretraining infact does not help improve performance in any significant manner. Overall, STUPD is well aligned for various visual relation reasoning tasks, in comparison to other similar synthetic datasets, as well as very large general pretraining datasets like ImageNet/KINETICS.
The fact that ImageNet/KINETICS-400 pretraining does not lead to significant improvement in performance indicates the fact that higher quality visual features do not contribute towards visual relationship reasoning. Effective visual relationship reasoning is a result of other forms of data including bounding box information and relative spatial positioning. This can be confirmed by the performance of Coordinate-only model in the case of ImageNet-VidVRD training, in comparison to any pretraining. It can also be noticed that the jump in accuracy after pretraining is much more pronounced in the case of ImageNet-VidVRD training than SpatialSense training. This indicates the importance of dynamic information for effective visual relationship reasoning.
\begin{table}
\begin{tabular}{l l l l} \hline
**Model** & **Overall Accuracy** & **Static Accuracy** & **Dynamic Accuracy** \\ \hline Random & 3.34 & 3.34 & 3.34 \\ Language-based & 28.90 & 26.76 & 31.66 \\ Coordinate-based & 75.60 & 72.54 & 78.32 \\ VIPCNN-based & 64.24 & 61.52 & 70.37 \\ PPRFCN-based & 68.19 & 66.41 & 69.47 \\ VTransE-based & **76.58** & 72.22 & **80.39** \\ DRNet-based & 70.32 & **81.35** & 60.70 \\ \end{tabular}
\end{table}
Table 2: Visual reasoning performance trained on all 30 spatial relations in the Spatial-STUPD dataset. The values presented are accuracy metrics in percent.
### Temporal-STUPD baselines
There are two components of the Temporal-STUPD that constitute the dataset - time stamps (or the time at which events start or stop) and the sequence of images (visual features) accompanying them. Hence, in order to understand the contribution of each of these components, we run two types of models on the Temporal-STUPD dataset - a _Time-only_ model and a _Time+Image_ model. In the Time-only model, we give information on the starting and ending frames for all associated events/time points in order, in the form of a feature vector into series of fully connected layers with batch normalization. The task is modeled as a multi-label classification problem, where an input may correspond to multiple outputs. In the Time+Image model, we feed the videos in a 3D convolutional network with a ResNet18 backbone, and concatenate the final fully connected layer to the activations of the Time-Only model. We calculate two metrics - mean Average Precision and accuracy (a prediction is accurately predicted if all the labels are correctly identified, and no other label is classified as ground-truth). The results are shown in Table 5. We notice that information about the starting and ending time points of various events plays a major role in identifying the temporal relation. Adding information about the event nature does not add significant information.
In this paper, we do not demonstrate the effect of pretraining on Temporal-STUPD on the results of training on real-world datasets, since there are no real-world datasets yet that focus on temporal relations (to the best of our knowledge). We thus, encourage researchers to build on top of this work, with a focus on temporality.
## 5 Limitations and Future Work
The STUPD dataset was designed with simplicity in mind. A prepositional word can have multiple senses, sometimes with subtle differences in meaning or usage in different contexts. In the case of
\begin{table}
\begin{tabular}{l c c c} (ImageNet-VidVRD training) & \multicolumn{3}{c}{**Pretraining dataset**} \\ \hline
**Model** & **no pretraining** & **KINETICS-400** & **CLEVR** & **Spatial-STUPD** \\ \hline Random & 10.00 & 10.00 & 10.00 & 10.00 \\ Language-based & 54.35 & N/A & **55.25** & 54.71 \\ Coordinate-based & 54.49 & N/A & 52.11 & **54.79** \\ VipCNN-based & 50.68 & 50.54 & 58.44 & **86.95** \\ PPRFCN-based & 51.72 & 51.87 & 49.87 & **62.64** \\ VTransE-based & 56.60 & 56.88 & 64.64 & **73.97** \\ DRNet-based & 57.98 & 57.29 & 68.07 & **87.29** \\ \end{tabular}
\end{table}
Table 4: Effect of Spatial-STUPD pretraining on the ImageNet-VidVRD [37] dataset. The values presented presented are accuracy metrics in percent.
\begin{table}
\begin{tabular}{l c c c} \multicolumn{3}{c}{**Pretraining dataset**} \\ \hline
**Model** & **no pretraining** & **ImageNet** & **CLEVR** & **Spatial-STUPD** \\ \hline Random & 16.67 & 16.67 & 16.67 & 16.67 \\ Language-based & **43.13** & N/A & 43.04 & 42.91 \\ Coordinate-based & 47.45 & N/A & 47.62 & **49.59** \\ VipCNN-based & 41.17 & 41.94 & 41.11 & **44.28** \\ PPRFCN-based & 44.12 & 42.61 & 42.08 & **44.98** \\ VTransE-based & 49.81 & 49.85 & 46.98 & **50.84** \\ DRNet-based & 51.93 & 52.54 & 52.84 & **54.28** \\ \end{tabular}
\end{table}
Table 3: Effect of Spatial-STUPD pretraining on the SpatialSense [18] dataset. The values presented are accuracy metrics in percent.
\begin{table}
\begin{tabular}{l c c}
**Model** & **mAP** & **accuracy** \\ \hline Time only & 87.0\% & 78.3\% \\ Time+Image & 89.1\% & 79.2\% \\ \end{tabular}
\end{table}
Table 5: Baseline for Temporal-STUPD
spatial relations, we restrict context of usage by limiting subjects and objects to physical objects, thus allowing us to group different senses into a single preposition. Further works may focus on creating visual datasets to disambiguate between the subtle meanings of different senses of a preposition. Another dataset design choice was to limit the types of objects to at most 2 types (categories) per image, for simplicity. However, this somewhat limits with number of potential prepositions included, as some comparative prepositions require 3 types of objects in order to be depicted properly. An example is _as far as_, which depicts a comparison between two distances. This cannot be represented by a scene with interactions between only two objects.
In this paper, we treat spatial-STUPD and temporal-STUPD independently. However, future works should attempt training visual reasoning models to combine the knowledge from both subsets of STUPD and analyse properties of models that understand spatio-temporal reasoning effectively.
Finally, while 3D information is readily available in STUPD due to its synthetic nature, this was not utilized in this paper, primarily in order to compare the results with previous works. Future works may examine whether and how 3D information may help with certain reasoning tasks.
## 6 Conclusion
Static representations such as image based datasets are not sufficient for machine learning systems to fully understand spatial relations well. Spatial relations have many subtle characteristics such as relative movement, velocity, direction, orientation, which can only be fully justified through flexible dynamic representations such as synthetic based videos. In this paper, we introduced a novel dataset which aims to cover the subtle differences between different spatial relations through simple object interactions. Through various experiments, it is evident that the dynamic nature of senses helps model identify relations better. Our studies also demonstrate the nature of spatio-temporal learning in 3D deep learning models. It is observed that models initially rely more on spatial cues, but slowly learn about temporal cues as well, and the combination of spatio-temporal cues results in higher accuracy.
Although this dataset consists of simple object interactions, we hope that it can be used to make models understand more complex scene structures, such as nuanced contexts of preposition use in the English language, or for understanding the underlying dynamics of actions better in various action recognition tasks. |
2309.16493 | Efficient Hardware Implementation of Constant Time Sampling for HQC | HQC is one of the code-based finalists in the last round of the NIST post
quantum cryptography standardization process. In this process, security and
implementation efficiency are key metrics for the selection of the candidates.
A critical compute kernel with respect to efficient hardware implementations
and security in HQC is the sampling method used to derive random numbers. Due
to its security criticality, recently an updated sampling algorithm was
presented to increase its robustness against side-channel attacks.
In this paper, we pursue a cross layer approach to optimize this new sampling
algorithm to enable an efficient hardware implementation without comprising the
original algorithmic security and side-channel attack robustness.
We compare our cross layer based implementation to a direct hardware
implementation of the original algorithm and to optimized implementations of
the previous sampler version. All implementations are evaluated using the
Xilinx Artix 7 FPGA. Our results show that our approach reduces the latency by
a factor of 24 compared to the original algorithm and by a factor of 28
compared to the previously used sampler with significantly less resources. | Maximilian Schöffel, Johannes Feldmann, Norbert Wehn | 2023-09-28T14:57:48Z | http://arxiv.org/abs/2309.16493v1 | # Efficient Hardware Implementation of Constant Time Sampling for HQC
###### Abstract
HQC is one of the code-based finalists in the last round of the NIST post quantum cryptography standardization process. In this process, security and implementation efficiency are key metrics for the selection of the candidates. A critical compute kernel with respect to efficient hardware implementations and security in HQC is the sampling method used to derive random numbers. Due to its security criticality, recently an updated sampling algorithm was presented to increase its robustness against side-channel attacks.
In this paper, we pursue a cross layer approach to optimize this new sampling algorithm to enable an efficient hardware implementation without comprising the original algorithmic security and side-channel attack robustness.
We compare our cross layer based implementation to a direct hardware implementation of the original algorithm and to optimized implementations of the previous sampler version. All implementations are evaluated using the Xilinx Artix 7 FPGA. Our results show that our approach reduces the latency by a factor of 24 compared to the original algorithm and by a factor of 28 compared to the previously used sampler with significantly less resources.
HQC, PQC, code-based cryptography, KEM, sampling
## I Introduction
Quantum computers are expected to revolutionize sectors such as medicine, materials science, and artificial intelligence once they reach maturity with adequate computing power. However, they also pose serious threats on communication security, and it is expected that it is possible to break State of the Art (SoA) public key cryptography by the end of this decade [1]. Therefore, the United States National Institute for Standards and Technology (US NIST) is currently conducting a process to find new, quantum computer resistant cryptographic algorithms (Post-Quantum Cryptography or PQC) [2].
Hamming Quasi Cyclic (HQC) [3] is one of the code-based candidates for standardization that has advanced to the final round of the NIST PQC process. Compared to the already standardized lattice-based algorithm KYBER, implementations of code-based algorithms have both a larger computational complexity and larger memory footprint. However, viable alternatives to lattice-based algorithms are already required today in case that the relatively new lattice-based algorithms turn out to be insecure in future (Crypto Agility). As a result, the application requirements in environments such as the Industrial Internet of Things (IIoT), which are limited by computing power and available energy but have strict timing constraints, can often only be met by using dedicated hardware accelerators.
For such applications, several papers on hardware implementations have already been published. The authors of HQC proposed a High-Level Synthesis (HLS)-based design that outperforms the remaining code-based candidates in the final round of the NIST process [4]. A HW/SW co-design of HQC that targets IoT applications was proposed in [5], and the authors found that the memory and the sampling unit are the main contributors to the area requirement. Furthermore, they showed that sampling and polynomial multiplication are the main drivers of the HQC latency. One of these bottlenecks, the polynomial multiplication, was addressed by the LEAP multiplier in [6].
Secure and efficient sampling algorithms to derive random numbers are still a subject of research, and a detailed study of different algorithms and protection measures with respect to power side channels was conducted in [7]. While their focus was on algorithms rather than hardware implementations which ensure robustness against power SCAs, the authors also provided a hardware implementation, which, however, requires significant computation time and was designed primarily for BIKE, the other code-based candidate in the NIST process.
Recently, the authors of HQC introduced an updated, more secure sampling procedure based on Sendrier's modified Fisher-Yates approach [8] as a response to successful timing SCAs on the HQC sampling procedure [9].
In conclusion, the focus of research so far has been mainly on polynomial ring multiplication. But for the other critical factor - the new sampling algorithm - an efficient solution has yet to be explored. Existing implementations are either prone to timing SCAs or have high computation time.
In this paper, we present to the best of our knowledge the first cross-layer approach for this new sampler. We investigate how the interrelationships between the algorithmic layer, which provides the base for a timing SCA-resistant implementation, and the hardware implementation layer can be exploited to maximize implementation efficiency without comprising security. Our new contributions are:
1) A reduction of the computational complexity from
\(\mathcal{O}(n^{2})\) to \(\mathcal{O}(n)\) through a new approach to represent the polynomials during the sampling procedure.
2. A hardware implementation with a pipelining scheme that is robust against timing side channel attacks. The resource efficiency (latency / number of required FPGA resources) is increased by jointly considering the implementation of the sampling procedure and polynomial ring arithmetic.
3. A detailed comparison of implementation results of a standard implementation, our cross layer approach and optimized implementations of the previous sampler on a Xilinx Artix 7 FPGA.
Our new algorithm requires an update of the original HQC specification, as the random numbers derived from the same seed are not equal to the original algorithm. However, since standardization is still in progress, we encourage to adopt these changes as they do not add complexity to software implementations nor reduce security.
This paper is structured as follows. In Section II, a background of HQC and the sampling algorithm will be provided. In Section III, the algorithmic improvements will be explained. In Section IV, the hardware design will be presented. In Section V, the results of the hardware implementation will be provided, evaluated and compared with the SoA. In Section VI, we draw a conclusion.
## II Background
HQC [3] is a Key-Encapsulation Mechanism (KEM) and its security is based on the hardness of the syndrome decoding problem. Its design rational bases on concatenated Reed-Muller/Reed-Solomon code \(\mathcal{C}\) and erroneous codewords are generated by adding and multiplying random values on the secret message to hide the same. These arithmetic operations are performed on sparse (low hamming weight) and dense (high hamming weight) polynomials \(v\) in the ring \(\mathcal{R}=\mathbb{F}_{2}[X]/(X^{n}-1)\), and \(v\) can be represented in two different ways:
1. **Explicit representation:** A bit array \(v\) of length \(n=17669\) where each bit entry \(v_{i}\) represents the coefficient \(v_{i}\) in \(v=\sum_{i=0}^{n-1}v_{i}\cdot x^{i}\). The explicit representation is used for dense polynomials.
2. **Support representation:** An array \(c\) of length \(\omega\leq 75\), where each integer entry \(c_{i}\) represents the coordinate of a non-zero coefficient in the corresponding \(v\): \(v=\sum_{i=0}^{\omega-1}x^{c_{i}}\). The support representation is used for sparse polynomials.
HQC is available in three different parameter sets, in this work we refer to HQC-128. Besides the arithmetic in \(\mathcal{R}\), the major contributor to the computational complexity of HQC is the sampling procedure [5], where random error polynomials are generated through the expansion of a secretly, truly random generated seed using the Extendable Output Function (XOF) SHAKE [10]. For HQC to operate securely and correctly, the sampled error polynomials must fulfill two criteria. First, as the sampling procedure directly involves the computation of security critical values, a constant execution time (independent from the sampled values) is a crucial countermeasure against SCAs. In addition, the sampled errors must meet strict requirements regarding their Hamming weight \(\omega\), since a too low Hamming weight would allow attacks on the KEM, and a too large Hamming weight would mean that the original message could not be decrypted by the communication partners as it would exceed the error correction capability of \(\mathcal{C}\). Therefore, Algorithm 1 has been proposed by the HQC authors based on the previous works of Sendrier [8] to meet both requirements.
The algorithm operates with the error polynomials in their support form and, if required, converts them to explicit representation once the sampling is completed (Line 12). An array with \(\omega\) random words is generated through the expansion of \(seed\) with SHAKE in Line 1. This \(randomwords\) array is further processed in the \(mod\_loop\), which implements the "sampling with bias" as introduced by Sendrier [8]. In \(unique\_check\_loop\), each of the sampled values in \(support\) are compared with all previously sampled values. If there is a duplicated value, the \(found\) flag is set and \(support[i]\) receives the loop iterator \(i\) as a value instead. Because of the biased sample in Line 3 and the fact that \(unique\_check\_loop\) iterates backwards, the condition \(support[i]\geq i\) is always satisfied. These measures guarantee that the hamming weight of the resulting polynomial is exactly \(\omega\).
```
Data:\(n=17669,seed,\omega\) Result:\(v\in\mathcal{R}=\mathbb{F}_{2}[X]/(X^{n}-1)\) with hamming weight \(\omega\)
1\(randomwords\xleftarrow{\$}prng(seed,\omega)\);
2mod_loop:for\(0\leq i\leq\omega-1\)do
3\(support[i]\gets i+(randomwords[i]\mod(n-i))\);
4
5 end for
6
7\(\text{unique\_check\_loop:for}\ (\omega-1)\geq i\geq 0\)do
8\(found\gets 0\);
9for\((i+1)\leq j\leq\omega-1\)do
10\(found\gets compare(support[i],support[j])\);
11
12 end for
13\(support[i]\gets found\?\ i:support[i]\);
14
15 end for
16\(v\gets transform(support)\);
17
18return v
```
**Algorithm 1**Constant weight sampling algorithm based on [8] with modifications by the authors of HQC. Note that \(word\) refers to 32-bit values. The "\(\xleftarrow{\$}\)" operator refers to sampling bytes from a random distribution.
In general, sampling and ring arithmetic in HQC are used in the following sequences:
\[h\xleftarrow{\$}\mathcal{R}, \tag{1}\]
\[(x,y)\xleftarrow{\$}\mathcal{R}^{2}, \tag{2}\]
\[z\gets h\cdot y+x \tag{3}\]
where \(h\) is a dense polynomial which is directly sampled through seed expansion by using SHAKE, and \(x\) and \(y\) are sparse (error) polynomials with hamming weight \(\omega\) sampled with Algorithm 1.
## III Algorithmic Optimizations
In the following, we explain the algorithmic optimizations in our cross layer approach and evaluate each one in terms of its security impact, compatibility with the original algorithm, effect on hardware implementations, and drawbacks.
### _Reducing computational complexity: Uniqueness Check_
**Description:** The uniqueness check during the sampling procedure ensures that each of the sampled values occurs only once, thus guaranteeing the Hamming weight of the result. In the support representation in Algorithm 1, each element of the array holds the coordinate of a "1" coefficient in the polynomial. To ensure that the same coordinate only occurs once in the array, a comparison with all previously sampled coordinates in the array is required, as all of them could hold the same coordinate. This causes the computational complexity of the algorithm to be \(\mathcal{O}(\omega^{2})\).
**Improvement:** We propose to store the sampled values in explicit representation instead. There, a single comparison with the currently stored coefficient bit \(v_{i}\) at the sampled coordinate \(c_{i}\) is necessary to determine the uniqueness of this value, thus significantly reducing the complexity from \(\mathcal{O}(\omega^{2})\) to \(\mathcal{O}(\omega)\).
**Security:** On our target platform (FPGA), the polynomials are stored in Block Random Access Memory (BRAM) or Look Up Tables (LUTs) without any intermediate cache, thus guaranteeing a memory address independent access latency. On other platforms, this measure can induce side channel attack possibilities depending on the hardware architecture. For architectures with address dependent memory access latency secret information (in this case the location of the word inside the corresponding array) can be leaked. This is for example the case for software implementations on cache-based computer architectures.
**Hardware:** The major drawback of this method is that it increases the memory requirement for the polynomial by a factor of 15.7 (17669 bits instead of 1125 bits), thus memory has to be traded-off against runtime. Furthermore, for the subsequent polynomial multiplication in \(\mathcal{R}\) to be efficient, one of the polynomials (\(y\), see Equation 3) needs to be available in support representation.
**Compatibility:** This optimization is bit-true, thus it maintains seed compatibility.
### _Increasing Throughput: Enabling Efficient Pipelining_
**Description:** The algorithm can be further improved to enable an efficient pipelining scheme that increases the hardware throughput, i.e. number of coordinates sampled per time. In Algorithm 1, the mod_loop requires the first element, \(randomwords[0]\), to calculate \(support[0]\), whereas the unique_check_loop requires \(support[\omega-1]\) in its first iteration, thus introducing a data dependency between both loops which prohibits efficient pipelining.
**Improvement:** We suggest an implicit inversion of the \(randomwords\) array, where the first word returned by \(prng\) is used as \(randomwords[\omega-1]\) instead of \(randomwords[0]\). Using this approach, the mod_loop and unique_check_loop loops can be combined into a single loop, allowing efficient pipelining. Since SHAKE (the prng algorithm) returns random values sequentially, it is possible to operate directly on the returned word in the pipelined loop instead of waiting for the random array to be returned as a whole, as described in our improved Algorithm 2.
**Security:** Given that SHAKE is a cryptographically secure XOF [10], it holds that \(prng\) returns uniformly randomly distributed words. Hence, from a security perspective, the quality of the content in \(randomwords[i]\) and \(randomwords[i+1]\) is equivalent \(\forall i\) in the given context.
**Hardware:** Pipelining and concurrent execution can significantly reduce the time it takes to sample a polynomial. In addition, including random words and support computation in the same pipeline loop eliminates the BRAM required to store these arrays as a whole.
**Compatibility:** Seed compatibility is not given since the same seed yield to different sampled polynomials compared to the reference implementation supplied by the authors of HQC. Therefore, it requires an update of the original algorithmic specification, e.g. by replacing \(randomwords[i]\) in Line 3 in Algorithm 1 with \(randomwords[\omega-1-i]\).
```
Data: \(n=17669\), \(seed\), \(\omega\)
Result: \(v\in\mathcal{R}=\mathbb{F}_{2}[X]/(X^{n}-1)\) with Hamming weight \(\omega\)
1   for \((\omega-1)\geq i\geq 0\) do
2       \(randomwords[i]\leftarrow prng(seed,1)\);
3       \(support[i]\leftarrow i+(randomwords[i]\bmod(n-i))\);
4       if \(v[support[i]]=1\) then
5           \(v[i]=1\);
6       else
7           \(v[support[i]]=1\);
8       end if
9   end for
10  return \(v\)
```
**Algorithm 2** Our proposal for the sampling algorithm
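To make the combined loop concrete, the following is a minimal Python reference model of Algorithm 2. It is illustrative only: the hardware operates on the Keccak IP core and BRAM words, and the choice of SHAKE-256, the 32-bit little-endian word extraction, and the parameter defaults below are assumptions made for this sketch rather than the HQC specification.

```
# Python reference model of Algorithm 2 (illustrative; not the hardware design).
# Assumptions: SHAKE-256 as the XOF and one 32-bit little-endian word per
# coordinate; the actual HQC seed expansion differs in these details.
import hashlib

def sample_sparse(seed: bytes, n: int = 17669, w: int = 75):
    """Sample v in F_2[X]/(X^n - 1) with Hamming weight w, explicit form."""
    stream = hashlib.shake_256(seed).digest(4 * w)
    v = [0] * n
    for i in range(w - 1, -1, -1):             # implicit inversion of randomwords
        word = int.from_bytes(stream[4 * i:4 * i + 4], "little")
        pos = i + (word % (n - i))             # Line 3 of Algorithm 2
        if v[pos] == 1:                        # O(1) uniqueness check per iteration
            v[i] = 1                           # index i is still free: every earlier
        else:                                  # iteration only set indices > i
            v[pos] = 1
    return v

assert sum(sample_sparse(b"example-seed")) == 75
```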
### _Decreasing memory footprint: Joint consideration of the sampling and arithmetic in \(\mathcal{R}=\mathbb{F}_{2}[X]/(X^{n}-1)\)_
**Description:** In SoA implementations, sampling and the arithmetic in \(\mathcal{R}\) are treated as separate functions, without considering synergies between them to increase their efficiency (number of hardware resources and latency). Memories are used in the following way for these two functions (e.g. [4][5]):
1. \(h\) is sampled and stored in the Random Access Memory (RAM) (\(RAM0\)) in explicit representation.
2. \(x\) is sampled and stored in \(RAM1\) in explicit representation.
3. \(y\) is sampled and stored in \(RAM2\) in support representation.
4. \(z^{\prime}\), the product of h and y, is computed and stored in \(RAM3\). The product of \(h\) and \(y\) requires twice the storage of the explicit representation.
5. \(z\) is calculated by the modular reduction of \(z^{\prime}\) and adding \(x\).
Thus, four RAMs are required to execute this procedure. Notably, memory has been shown to be the major contributor to the area requirement of the HQC implementation [5].
**Improvement:** We propose the following procedure instead:
1. \(x\) is sampled and stored in \(RAM0\) in explicit representation.
2. \(y\) is sampled and stored in \(RAM1\) in explicit representation during the uniqueness check, and concurrently stored in \(RAM2\) in support representation. The support representation of \(y\) is later required for the dense-sparse polynomial multiplication.
3. \(h\) is sampled and stored in \(RAM1\), thus overwriting the no longer required explicit representation of \(y\).
4. \(z\) is computed and stored in \(RAM0\). Each intermediate result of \(z\) is reduced modulo \(X^{n}-1\), thereby only requiring storage for single explicit representation.
**Hardware:** This procedure has four advantages:
1. It eliminates the need for an additional RAM when sampling the sparse polynomial \(y\) in explicit representation (see Section III-A), since \(RAM1\) can be reused for \(h\) after sampling \(y\).
2. \(x\) can directly be stored inside the result RAM for \(z\), thus removing its requirement for an additional RAM.
3. The required memory size for \(z\) is halved.
4. The schoolbook approach in [4] and [5] multiplies \(h\cdot y\) wordwise, i.e. each memory word in \(h\) is multiplied by each coordinate in \(y\) based on a windowing method. The intermediate results are stored in \(RAM0\) and repeatedly added to the subsequent intermediate results (which corresponds to an XOR operation since the elements of the polynomial are in \(\mathbb{F}_{2}\)). This procedure can be used to implicitly add \(x\) during the multiplication steps by preloading \(x\) instead of zeros into \(RAM0\), thus eliminating the need for an additional computation step. This requires using the schoolbook approach instead of the optimized LEAP multiplier [6].
**Security:** This improvement does not affect the security in the hardware implementation.
## IV Hardware Design
In our cross-layer approach described in Section III, we improved the algorithmic layer to enable an efficient hardware implementation with respect to
1. **latency** reduction through the optimizations described in Sections III-A and III-B, and
2. **hardware resources** reduction through the optimizations described in Sections III-B and III-C.
In this section, we present the challenges and solutions of our hardware design, which incorporates the above optimizations.
### _Sampler_
The major challenge of a pipeline schedule for Algorithm 2 is to ensure that no secret information is leaked through the execution time. Here, the two critical operations are the modulo operation in Line 3 and the uniqueness check procedure, together with the constant-time measures the latter requires.
For fixed modulus, the Barrett Reduction [11] is often used in hardware implementations to ensure constant time calculation and resource efficiency. This replaces the division with two multiplications, subtractions and shifts using a precomputed reduction factor \(R\) instead, see Algorithm 3. However, the modulus of the sampling algorithm is not fixed, but is in the range \([n,n-\omega]\), and therefore requires the storage of \(\omega\) precomputed 18-bit values of \(R\). For the given range with \(\omega=75\), we observed the following dependency, which drastically reduces the required storage from \(18\cdot\omega\ bits\) to \(\omega+18\ bits\) at the cost of one addition and subtraction:
\[R_{i-1}=R_{i}-14+LUT[i], \tag{4}\]
where \(LUT\) is a precomputed one-bit array of length \(\omega\) and \(R_{0}=\left\lfloor\frac{2^{32}}{n-(\omega-1)}\right\rfloor\).
```
Data: \(W\), \(M\), \(R\leftarrow\lfloor\frac{2^{32}}{M}\rfloor\)
Result: \(X=W\bmod M\)
1   \(X\leftarrow W-M\cdot((W\cdot R)>>32)\);
2   if \(X\geq M\) then
3       \(X\leftarrow X-M\);
4   end if
5   return \(X\)
```
**Algorithm 3** Barrett reduction algorithm used in the hardware implementation based on [11]
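As a quick software cross-check (illustrative only; the constants are taken from the text, and the factors are indexed by modulus rather than by the paper's \(R_{i}\) numbering), the following Python sketch verifies the single conditional subtraction of Algorithm 3 and the one-bit-per-step structure exploited by Equation 4:

```
# Illustrative check of the variable-modulus Barrett reduction (Algorithm 3)
# and of the observation behind Equation 4: consecutive reduction factors for
# the moduli n, n-1, ..., n-(w-1) differ by 13 or 14, so a single base value
# plus one bit per step suffices to reconstruct all of them.
import random

n, w = 17669, 75
R = [(1 << 32) // (n - i) for i in range(w)]        # factor for modulus n - i

def barrett(W: int, M: int, R_M: int) -> int:
    X = W - M * ((W * R_M) >> 32)
    return X - M if X >= M else X                   # one conditional subtraction

assert all(R[i + 1] - R[i] in (13, 14) for i in range(w - 1))
assert all(r.bit_length() == 18 for r in R)         # each factor fits in 18 bits

for _ in range(1000):                               # spot-check against %
    i = random.randrange(w)
    W = random.getrandbits(32)
    assert barrett(W, n - i, R[i]) == W % (n - i)
```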
For the uniqueness checking procedure, we ensure a constant time implementation as follows. We assume that the polynomial \(v\) is stored in \(w\) words of length \(m_{w}\) in a BRAM that allows simultaneous reads and writes, and that one word holds multiple coordinates. If a new coordinate is to be written, the currently stored data must be read first to prevent overwriting. In a pipelined architecture, this means that in cycle \(i\), reads belonging to the current iteration are performed, and in cycle \(i+1\), the corresponding bit in the word is set and written to memory. This causes two challenges. First, a read-after-write hazard could occur if the same address is written and read in the same cycle (i.e., by two successive sampled values), which leads to reading obsolete data in the second access. We solve this by detecting and storing the last word written in a register, which is used for the next write if a hazard is detected, instead of the word read from memory.
Second, in most cases, if a non-unique value is found, it requires a write to a different memory word than the one previously read. However, stalling the pipeline to perform this required read would reveal secret information due to the non-constant execution time. Therefore, we locally keep a record of the memory words belonging to the first \(\omega\) coordinates and write them to memory at the end of the sampling procedure. The actual writing to memory is performed independently of the uniqueness of the respective values, so as not to leak information through the power consumption.
The operation principle of the pipelined sampler is presented in Figure 1. All stages are designed such that they are able to operate in parallel. The first stage of the pipeline implements the PRNG word generation. The words are generated by the SHAKE function, which works by permuting and then squeezing the internal state of the Keccak [10]. Our design uses an Intellectual Property (IP) core from the authors of Keccak to perform the state permutation [12]; the squeezing is executed on a 64-bit boundary. Three pipeline stages are used to calculate the Barrett Reduction, as we use a Karatsuba-based approach to efficiently map the multiplication to the DSP slices of our FPGA. The last two stages implement the uniqueness check and the memory accesses.
### _Arithmetic in \(\mathcal{R}\)_
The polynomial multiplication in \(\mathcal{R}\) is implemented using a similar approach as in [4] and [5], with modifications to incorporate our proposed improvements such that addition, multiplication and reduction are executed simultaneously. To reduce the hardware complexity, we do not read and add the intermediate results in the arithmetic module as in the cited works; instead, we implemented a modify access in our memory module that XORs the data currently stored in memory with the incoming data. This approach ensures that our hardware achieves performance (resource utilization, frequency and clock cycles) similar to that of the LEAP multiplier [6].
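As a bit-level illustration of this multiply-accumulate-reduce flow, the following Python sketch models \(z=h\cdot y+x\) in \(\mathcal{R}\) with \(x\) preloaded into the accumulator (Section III-C) and every intermediate result reduced immediately; the integer-as-bit-vector encoding is a software stand-in, not the wordwise BRAM datapath of the hardware.

```
# Illustrative bitwise model of z = h*y + x in R = F_2[X]/(X^n - 1): y is in
# support representation, x is preloaded into the accumulator (so its addition
# is implicit), and every shifted copy of h is reduced immediately, so one
# explicit representation of storage suffices.
import random

def mul_acc(h: int, y_support, x: int, n: int) -> int:
    mask = (1 << n) - 1
    acc = x                                    # preload x instead of zeros
    for j in y_support:
        t = h << j                             # degree < 2n - 1
        acc ^= (t & mask) ^ (t >> n)           # reduce on the fly: X^n = 1 in R
    return acc

# Sanity check against an unreduced schoolbook product, for a small n.
n = 101
for _ in range(200):
    h, x = random.getrandbits(n), random.getrandbits(n)
    supp = random.sample(range(n), 7)
    prod = 0
    for j in supp:
        prod ^= h << j
    assert mul_acc(h, supp, x, n) == ((prod & ((1 << n) - 1)) ^ (prod >> n)) ^ x
```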
## V Results
An objective comparison for hardware implementations in the US NIST PQC process is difficult because the US NIST only specified the Xilinx Artix 7 product family as target platform. However, the devices within this family still differ significantly in terms of available resources and speed level, and comparing results from different devices (as done several times in the SoA) is not objective. Therefore, in our comparison, the same device was used as in the original HQC implementation [4], Xilinx Artix xc7a100tfg256-1 FPGA with the Xilinx Vivado 2020.1 software version.
### _Comparison of sampler implementations_
In Table I we compare four different implementations to evaluate the efficiency of optimizations III-A and III-B. For all implementations, we considered the Hamming weight \(\omega=66\) as required during the key generation procedure of HQC128. _HW/SW_ and _HLS_ refer to implementations of the previously used sampling algorithm that is susceptible to timing SCAs, as described in [9]. _Original Alg._ and _New Alg._ present the results of our implementations of the unmodified new sampling procedure and of our optimized version, respectively. In all implementations, the hardware of the SHAKE module is not included. However, _HW/SW_ used the same IP block for it as the one employed in our design.
Fig. 1: Sketch of the sampling pipeline.

As shown in the table, the implementation of the new algorithm requires fewer clock cycles than the two preceding versions, while increasing the security via a constant-time implementation. The reason for this is that the old algorithm requires more random numbers to be sampled with SHAKE due to rejections, and that it operates on a non-optimal 24-bit rather than 32-bit boundary for its random values, which complicates the access pattern for typical 32-bit or 64-bit memories as used in [5]. Furthermore, the results in the table show that more than one order of magnitude (\(24\times\)) of latency decrease is achieved through our optimizations III-B and III-A over _Original Alg._.
### _Key Generation function_
A conclusive evaluation of the optimizations presented in this paper requires a view of the entire HQC cryptosystem, not just the sampler. Therefore, we included the joint design with the \(\mathcal{R}\) arithmetic module in the implementation of the HQC key generation function by [4]. For the integration, the sampler module has been further modified to support the SHAKE-based seed expansion to sample \(h\). The results of this integration are illustrated in Table II. As presented, our \(\mathcal{R}\) module provides resource efficiency (latency / number of resources) comparable to that of the LEAP multiplier [6]. In addition, optimization III-C leads to a notable reduction in the amount of memory required, which has previously been shown to be the largest area contributor in the Application Specific Integrated Circuit (ASIC) implementation of HQC [5]. When all optimizations are combined, the HQC key generation function is substantially faster than SoA implementations. Most importantly, the results show that our optimizations significantly reduce the performance gap between code-based and lattice-based algorithms, as shown by comparing the KYBER512 hardware implementation [13] to our design.
## VI Conclusion
In this work, we presented, to the best of our knowledge, the first in-depth study on the efficient implementation of the new, more secure HQC sampling procedure. We introduced novel methodologies to implement the sampling algorithm that substantially enhance the efficiency of its hardware implementation and reduce the algorithmic complexity from \(\mathcal{O}(n^{2})\) to \(\mathcal{O}(n)\). The changes were carefully evaluated not to induce any new side channel attack possibilities or to compromise the security of the implementation. Our results show that these improvements reduce the latency of the sampling algorithm by a factor of 24, while the hardware resources required remain comparable to those of the original sampling algorithm, particularly when implemented jointly with the ring arithmetic in HQC. Based on these changes, and the fact that HQC was already shown to be the most efficient code-based cryptosystem in the NIST process, the performance gap between code-based and lattice-based algorithms substantially decreases - from a factor of 10.5 to a factor of 1.7 in terms of execution time - by integrating our proposals into a SoA hardware design of the HQC-128 key generation function. To conclude, in terms of resource efficiency, HQC is a very competitive alternative to KYBER, should the latter turn out to be insecure in the future.
|
2307.16881 | On higher multiplicity hyperplane and polynomial covers for symmetry
preserving subsets of the hypercube | Alon and F\"uredi (European J. Combin. 1993) gave a tight bound for the
following hyperplane covering problem: find the minimum number of hyperplanes
required to cover all points of the n-dimensional hypercube {0,1}^n except the
origin. Their proof is among the early instances of the polynomial method,
which considers a natural polynomial (a product of linear factors) associated
to the hyperplane arrangement, and gives a lower bound on its degree, whilst
being oblivious to the (product) structure of the polynomial. Thus, their proof
gives a lower bound for a weaker polynomial covering problem, and it turns out
that this bound is tight for the stronger hyperplane covering problem.
In a similar vein, solutions to some other hyperplane covering problems were
obtained, via solutions of corresponding weaker polynomial covering problems,
in some special cases in the works of the fourth author (Electron. J. Combin.
2022), and the first three authors (Discrete Math. 2023). In this work, we
build on these and solve a hyperplane covering problem for general symmetric
sets of the hypercube, where we consider hyperplane covers with higher
multiplicities. We see that even in this generality, it is enough to solve the
corresponding polynomial covering problem. Further, this seems to be the limit
of this approach as far as covering symmetry preserving subsets of the
hypercube is concerned. We gather evidence for this by considering the class of
blockwise symmetric sets of the hypercube (which is a strictly larger class
than symmetric sets), and note that the same proof technique seems to only
solve the polynomial covering problem. | Arijit Ghosh, Chandrima Kayal, Soumi Nandi, S. Venkitesh | 2023-07-31T17:47:33Z | http://arxiv.org/abs/2307.16881v1 | # On higher multiplicity hyperplane and polynomial covers
###### Abstract
Alon and Furedi (European J. Combin. 1993) gave a tight bound for the following hyperplane covering problem: find the minimum number of hyperplanes required to cover all points of the \(n\)-dimensional hypercube \(\{0,1\}^{n}\) except the origin. Their proof is among the early instances of the _polynomial method_, which considers a natural polynomial (a product of linear factors) associated to the hyperplane arrangement, and gives a lower bound on its degree, whilst being oblivious to the (product) structure of the polynomial. Thus, their proof gives a lower bound for a _weaker_ polynomial covering problem, and it turns out that this bound is tight for the _stronger_ hyperplane covering problem.
In a similar vein, solutions to some other hyperplane covering problems were obtained, via solutions of corresponding weaker polynomial covering problems, in some special cases in the works of the fourth author (Electron. J. Combin. 2022), and the first three authors (Discrete Math. 2023). In this work, we build on these and solve a hyperplane covering problem for general symmetric sets of the hypercube, where we consider hyperplane covers with higher multiplicities. We see that even in this generality, it is enough to solve the corresponding polynomial covering problem. Further, this seems to be the limit of this approach as far as covering symmetry preserving subsets of the hypercube is concerned. We gather evidence for this by considering the class of _blockwise_ symmetric sets of the hypercube (which is a strictly larger class than symmetric sets), and note that the same proof technique seems to only solve the polynomial covering problem.
Notations.\(\mathbb{R}\) denotes the set of all real numbers, \(\mathbb{Z}\) denotes the set of all integers, \(\mathbb{N}\) denotes the set of all nonnegative integers, and \(\mathbb{Z}^{+}\) denotes the set of all positive integers. \([a,b]\) denotes the closed interval of all integers between \(a\) and \(b\); further, we denote \([n]\coloneqq[1,n]\). \(\mathbb{R}[\mathbb{X}]\) denotes the polynomial ring over the field \(\mathbb{R}\) and a collection of indeterminates \(\mathbb{X}\), where either there are \(n\) indeterminates \(\mathbb{X}=(X_{1},\ldots,X_{n})\), or there are \(N=n_{1}+\cdots+n_{k}\) indeterminates partitioned into \(k\) blocks as \(\mathbb{X}=(\mathbb{X}_{1},\ldots,\mathbb{X}_{k})\) with each \(\mathbb{X}_{j}=(X_{j,1},\ldots,X_{j,n_{j}})\).
## 1 Introduction and overview
We will work over the field \(\mathbb{R}\), and consider the \(n\)-variate polynomial ring \(\mathbb{R}[\mathbb{X}]\). A classic result by Alon and Furedi [1] states that any collection of (affine) hyperplanes1 in \(\mathbb{R}^{n}\), whose union
contains every point of the hypercube (or Boolean cube) \(\{0,1\}^{n}\) except the all-zeros point \(0^{n}\coloneqq(0,\ldots,0)\), must have at least \(n\) hyperplanes. This lower bound is also tight, attained by the collection of hyperplanes defined by the equations: \(X_{i}=1,\,i\in[n]\).2 Further, the lower bound proof by [1] is among the early instances of the _polynomial method_ in combinatorics. Note that the union of any finite collection of hyperplanes in \(\mathbb{R}^{n}\), as a set of points, is exactly equal to the zero set of the product of the affine linear polynomials defining the individual hyperplanes. So the lower bound on the number of hyperplanes follows from a lower bound on the degree of this _product polynomial_.
Footnote 2: This result of Alon and Furedi [1] is, in fact, true over any field \(\mathbb{F}\), and not just for \(\mathbb{F}=\mathbb{R}\).
An interesting point to note in the lower bound proof by [1] is that the polynomial method is oblivious to the _product structure_ of the polynomials corresponding to collections of hyperplanes, or _any other structural property_ of polynomials, and is only sensitive to the degree of the polynomials. In other words, we may as well consider a _polynomial covering problem_ satisfying the same vanishing conditions - find the minimum degree of a polynomial, _among all unstructured polynomials_, that vanish at every point of \(\{0,1\}^{n}\) except \(0^{n}\) - and the proof by the polynomial method goes through. Therefore, in hindsight, it is amazing that the lower bound for the _weaker_ polynomial covering problem is, in fact, tight for the _stronger_ hyperplane covering problem. In this work, we are interested in further exploring this power of the polynomial method in giving tight bounds for some hyperplane covering problems by simply considering the corresponding weaker polynomial covering problems.
In order to describe our motivations as well as our results, let us first fix some terminologies and notations. We will identify a hyperplane \(H\) in \(\mathbb{R}^{n}\) with its defining affine linear polynomial \(H(\mathbb{X})\). Let \(t\geq 1\), \(\ell\in[0,t-1]\), and consider any subset \(S\subsetneq\{0,1\}^{n}\). We define
* a \((t,\ell)\)-exact hyperplane cover for \(S\) to be a finite collection of hyperplanes (considered as a multiset) in \(\mathbb{R}^{n}\) such that each point in \(S\) is contained in at least \(t\) hyperplanes, and each point in \(\{0,1\}^{n}\setminus S\) is contained in exactly \(\ell\) hyperplanes.
* a \((t,\ell)\)-exact polynomial cover for \(S\) to be a nonzero polynomial that vanishes at each point in \(S\) with multiplicity3 at least \(t\), and vanishes at each point in \(\{0,1\}^{n}\setminus S\) with multiplicity exactly \(\ell\). Footnote 3: We say that a polynomial \(P\) vanishes at a point \(a\) with multiplicity at least \(t\) if all the derivatives of \(P\) having order at most \(t-1\) vanish at \(a\). We will give a formal definition in the Preliminaries (Section 2.3).
Let \(\mathsf{EHC}_{n}^{(t,\ell)}(S)\) denote the minimum size of a \((t,\ell)\)-exact hyperplane cover for \(S\), and let \(\mathsf{EPC}_{n}^{(t,\ell)}(S)\) denote the minimum degree of a \((t,\ell)\)-exact polynomial cover for \(S\). In these notations, [1] show the following.
**Theorem 1.1** ([1]).: \(\mathsf{EHC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus\{0^{n}\})=\mathsf{EPC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus\{0^{n}\})=n\)_._
It is obvious from the definitions that, in general, we have \(\mathsf{EHC}_{n}^{(t,\ell)}(S)\geq\mathsf{EPC}_{n}^{(t,\ell)}(S)\). For completeness, we give a quick proof in Appendix A.1. In the present work, we are broadly concerned with the following question.
**Question 1.2**.: _Given a proper subset \(S\subsetneq\{0,1\}^{n}\) and integers \(t\geq 1,\,\ell\in[0,t-1]\), under what conditions can we say that \(\mathsf{EHC}_{n}^{(t,\ell)}(S)=\mathsf{EPC}_{n}^{(t,\ell)}(S)\)?_
### Motivation
The present work could be considered a sequel to earlier works by the fourth author [20], and the first three authors [14]. The work [20] relies heavily on the polynomial method using Alon's Combinatorial Nullstellensatz [1] (also see Buck, Coley, and Robbins [1], and Alon and Tarsi [1]), and the work [14] relies heavily on a recent _multiplicity extension_ of the Combinatorial Nullstellensatz given by Sauermann and Wigderson [21]. The problems of concern, in the two earlier works as well as in the present work, belong to a larger class of questions that have been of interest for a long time, and have rich literature. We mention some of these related works in Section 1.4.
Let us now detail the primary motivations for our present work.
* As a multiplicity extension of Theorem 1.1 for the polynomial covering problem, Sauermann and Wigderson [21] determined the following. **Theorem 1.3** ([21]). _For \(t\geq 1\), \(\ell\in[0,t-1]\), we have_ \[\mathsf{EPC}_{n}^{(t,\ell)}(\{0,1\}^{n}\setminus\{0^{n}\})=\begin{cases}n+2t-2 &\text{if }\,\ell=t-1,\\ n+2t-3&\text{if }\,\ell<t-1\leq\left\lfloor\frac{n+1}{2}\right\rfloor.\end{cases}\]
* In a remarkable development, using techniques different from the polynomial method, Clifton and Huang [1] proved the following bounds for the hyperplane covering problem. **Theorem 1.4** ([1]). _For all \(n\geq 3\), \(t\geq 4\), we have_ \[n+t+1\leq\mathsf{EHC}_{n}^{(t,0)}(\{0,1\}^{n}\setminus\{0^{n}\})\leq n+ \binom{t}{2}.\] _Further, for \(n\geq 2\), \(t=2,3\), we have \(\mathsf{EHC}_{n}^{(t,0)}(\{0,1\}^{n}\setminus\{0^{n}\})=n+\binom{t}{2}\)._
* We say a subset \(S\subseteq\{0,1\}^{n}\) is symmetric if \(S\) is closed under permutations of coordinates. Note that the Hamming weight of any \(x\in\{0,1\}^{n}\) is defined by \(|x|=|\{i\in[n]:x_{i}=1\}|\). Thus, the subset \(S\) is symmetric if and only if \[x\in S,\,y\in\{0,1\}^{n},\,|y|=|x|\quad\implies\quad y\in S.\] For any symmetric set \(S\subseteq\{0,1\}^{n}\), we define \(W_{n}(S)=\{|x|:x\in S\}\). It is immediate that a symmetric set \(S\) is determined by the corresponding set \(W_{n}(S)\). Also for \(i\in[0,n]\), let \(W_{n,i}=[0,i-1]\cup[n-i+1,n]\), and define the symmetric set \(T_{n,i}\subseteq\{0,1\}^{n}\) by \(W_{n}(T_{n,i})=W_{n,i}\).4 Footnote 4: Here we have \(W_{n,0}=\emptyset\) and \(T_{n,0}=\emptyset\). The fourth author [20] gave a combinatorial characterization of \(\mathsf{EPC}_{n}^{(1,0)}(S)\) for all symmetric sets \(S\subsetneq\{0,1\}^{n}\), as well as a partial result towards answering Question 1.2 in this setting. The characterization is in terms of a simple combinatorial measure. For any symmetric set \(S\subseteq\{0,1\}^{n}\), define \[\mu_{n}(S) =\max\{i\in[0,\lceil n/2\rceil]:W_{n,i}\subseteq W_{n}(S)\},\] and \[\Lambda_{n}(S) =|W_{n}(S)|-\mu_{n}(S).\] Further, denote \(\overline{\mu}_{n}(S)\coloneqq\mu_{n}(\{0,1\}^{n}\setminus S)\) and \(\overline{\Lambda}_{n}(S)\coloneqq\Lambda_{n}(\{0,1\}^{n}\setminus S)\).
**Theorem 1.5** ([20]).:
1. _For any symmetric set_ \(S\subsetneq\{0,1\}^{n}\)_, we have_ \[\mathsf{EPC}_{n}^{(1,0)}(S)=\Lambda_{n}(S).\]
2. _For any symmetric set_ \(S\subsetneq\{0,1\}^{n}\) _such that_ \(W_{n,2}\not\subseteq W_{n}(S)\)_, we have_ \[\mathsf{EHC}_{n}^{(1,0)}(S)=\mathsf{EPC}_{n}^{(1,0)}(S) =\Lambda_{n}(S)\] \[=\begin{cases}|W_{n}(S)|&\text{if }\,W_{n,1}\not\subseteq W_{n}(S),\\ |W_{n}(S)|-1&\text{if }\,W_{n,1}\subseteq W_{n}(S).\end{cases}\]
3. \(\mathsf{EHC}_{n}^{(1,0)}(T_{n,2})=\mathsf{EPC}_{n}^{(1,0)}(T_{n,2})=2=|W_{n,2} |-\mu_{n}(T_{n,2})=\Lambda_{n}(T_{n,2})\)_._
It is interesting, and important for further discussions, to note the constructions that imply the equalities in Theorem 1.5.
**Example 1.6** ([20]).:
1. Let \(S\subsetneq\{0,1\}^{n}\) be a symmetric set. By the proof of Theorem 1.5(a) [20, Proposition 6.1], for every \(a\in\{0,1\}^{n}\setminus S\), there exists a polynomial \(Q_{a}(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) such that \(\deg(Q_{a})\leq\Lambda_{n}(S)\), \(Q_{a}|_{S}=0\), and \(Q_{a}(a)=1\). Then choose scalars \(\beta_{a}\in\mathbb{R},\,a\in\{0,1\}^{n}\setminus S\) such that the polynomial \(Q(\mathbb{X})\coloneqq\sum_{a\in\{0,1\}^{n}\setminus S}\beta_{a}Q_{a}(\mathbb{ X})\) satisfies \(\deg(Q)\leq\Lambda_{n}(S)\), \(Q|_{S}=0\), and \(Q(b)\neq 0\) for all \(b\in\{0,1\}^{n}\setminus S\). So the polynomial \(Q(\mathbb{X})\) witnesses the equality in Theorem 1.5(a). Note that the set of scalars \(B\coloneqq\{\beta_{a}:a\in\{0,1\}^{n}\setminus S\}\) can always be chosen so that \(Q(\mathbb{X})\) satisfies the above required conditions. For instance, consider a subfield of \(\mathbb{R}\) defined by \(\widehat{\mathbb{Q}}\coloneqq\mathbb{Q}\big{(}\{Q_{a}(b):a,b\in\{0,1\}^{n} \setminus S\}\big{)}\).5 It then follows that \(\mathbb{R}\) is an infinite dimensional \(\widehat{\mathbb{Q}}\)-vector space. So we can choose \(B\) to be any \(\widehat{\mathbb{Q}}\)-linearly independent subset of \(\mathbb{R}\) of size \(2^{n}-|S|\). Footnote 5: For any \(B\subsetneq\mathbb{R}\), the notation \(\mathbb{Q}(B)\) denotes the smallest subfield of \(\mathbb{R}\) that contains \(\mathbb{Q}\) and \(B\). This subfield exists and is unique, by elementary field theory.
2. Let \(S\subsetneq\{0,1\}^{n}\) be a symmetric set such that \(W_{n,2}\not\subseteq W_{n}(S)\). If \(W_{n,1}\not\subseteq W_{n}(S)\), then the collection of hyperplanes \(\{H^{\prime}_{t}(\mathbb{X}):t\in W_{n}(S)\}\), defined by \(H^{\prime}_{t}(\mathbb{X})\coloneqq\sum_{i=1}^{n}X_{i}-t\), \(t\in W_{n}(S)\) witnesses equality in Theorem 1.5(b). If \(W_{n,1}=\{0,n\}\subseteq W_{n}(S)\), note that the hyperplane \(H^{*}_{(1,1)}(\mathbb{X})\coloneqq\sum_{i=1}^{n-1}X_{i}-(n-1)X_{n}\) satisfies \(H^{*}_{(1,1)}(x)=0\) for \(x\in\{0,1\}^{n}\) if and only if \(x\in\{0^{n},1^{n}\}\), that is, \(x\in T_{n,1}\). Then the collection of hyperplanes \(\{H^{*}_{(1,1)}(\mathbb{X})\}\sqcup\{H^{\prime}_{t}(\mathbb{X}):t\in W_{n}(S )\setminus\{0,n\}\}\) witnesses the equality in Theorem 1.5(b).
3. The collection of hyperplanes \(\{H^{*}_{(2,1)}(\mathbb{X}),H^{*}_{(2,2)}(\mathbb{X})\}\), where \(H^{*}_{(2,1)}(\mathbb{X})\coloneqq\sum_{i=1}^{n-1}X_{i}-(n-3)X_{n}+1\) and \(H^{*}_{(2,2)}(\mathbb{X})\coloneqq\sum_{i=1}^{n-2}X_{i}-(n-2)X_{n-1}\), witnesses the equality in Theorem 1.5(c).
Further, the following was conjectured in [20], appealing to Theorem 1.5(b) and (c).
**Conjecture 1.7** ([20]).: _For any symmetric set \(S\subsetneq\{0,1\}^{n}\) such that \(W_{n,2}\subseteq W_{n}(S)\), we have_
\[\mathsf{EHC}_{n}^{(1,0)}(S)=|W_{n}(S)|-2,\]
_and therefore, \(\mathsf{EHC}_{n}^{(1,0)}(S)>\mathsf{EPC}_{n}^{(1,0)}(S)\) if \(W_{n,2}\subsetneq W_{n}(S)\)._
1. Aaronson, Groenland, Grzesik, Kielak, and Johnston [1] considered the problem of determining \(\mathsf{EHC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus S)\) for general nonempty subsets \(S\subseteq\{0,1\}^{n}\), and obtained the following.
**Theorem 1.8** ([1]).: _For any nonempty subset \(S\subseteq\{0,1\}^{n}\), we have_
\[\mathsf{EHC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus S)\geq n-\lfloor\log_{2}|S|\rfloor.\]
Improving upon Theorem 1.8, [10] bounded \(\mathsf{EPC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S)\) for all \(t\geq 1\), in a more abstract sense by introducing a combinatorial measure called _index complexity_. For any subset \(S\subseteq\{0,1\}^{n}\), \(|S|>1\), the index complexity of \(S\) is defined to be the smallest positive integer \(r_{n}(S)\) such that for some \(I\subseteq[n]\), \(|I|=r_{n}(S)\), there is a point \(u\in S\) such that for each \(v\in S\), \(v\neq u\), we get \(v_{i}\neq u_{i}\) for some \(i\in I\), that is, the point \(u\) can be _separated from all other points_ in \(S\) in the coordinates in \(I\). (The index complexity of a singleton set is defined to be zero.)
The improvement to Theorem 1.8 was achieved via the following two results.
**Proposition 1.9** ([10]).: _For any nonempty subset \(S\subseteq\{0,1\}^{n}\), we have_
\[r_{n}(S)\leq\lfloor\log_{2}|S|\rfloor.\]
**Theorem 1.10** ([10]).: _For any nonempty subset \(S\subseteq\{0,1\}^{n}\) and \(t\geq 1\), we have_
\[\mathsf{EPC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S)\geq n-r_{n}(S)+2t-2.\]
* Returning to the context of symmetric sets, note that for any symmetric set \(S\subseteq\{0,1\}^{n}\), the complement \(\{0,1\}^{n}\setminus S\) is also symmetric. Further, we say a symmetric set \(S\) is a layer if \(|W_{n}(S)|=1\). The first three authors [10] answered Question 1.2 in the affirmative for the complement of any layer \(S\), and for all \(t\geq 1\), \(\ell=t-1\). In particular, this improves Theorem 1.3. **Theorem 1.11** ([10]).: _For any layer \(S\subsetneq\{0,1\}^{n}\) with \(W_{n}(S)=\{w\}\), and any \(t\geq 1\), we have_ \[\mathsf{EHC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S)=\mathsf{EPC}_{n}^{(t,t-1)}( \{0,1\}^{n}\setminus S)=\max\{w,n-w\}+2t-2.\] The following construction by [10] is important throughout our discussion. For completeness, we give a proof in Appendix B. **Lemma 1.12** ([10]).: _For \(i\in[0,\lceil n/2\rceil]\), the collection of hyperplanes \(\{H_{(i,j)}^{*}(\mathbb{X}):j\in[i]\}\) defined by_
\[H_{(i,j)}^{*}(\mathbb{X})=\sum_{k=1}^{n-j}X_{k}-(n-2i+j)X_{n-j+1}-(i-j),\quad j \in[i],\]
_satisfies the following._
* _For every_ \(a\in T_{n,i}\)_, there exists_ \(j\in[i]\) _such that_ \(H_{(i,j)}^{*}(a)=0\)_._
* \(H_{(i,j)}^{*}(b)\neq 0\) _for every_ \(b\in\{0,1\}^{n}\setminus T_{n,i}\)_,_ \(j\in[i]\)_._
A construction that implies the equality in Theorem 1.11 is then immediate. **Example 1.13** ([10]).: Let \(S\subsetneq\{0,1\}^{n}\) be a layer with \(W_{n}(S)=w\), and \(t\geq 1\). Let \(w^{\prime}=\min\{w,n-w\}\). Denote \(H_{0}^{\circ}(\mathbb{X})=X_{1}\), \(H_{1}^{\circ}(\mathbb{X})=X_{1}-1\). Then the collection of hyperplanes
\[\{H_{(w^{\prime},j)}^{*}(\mathbb{X}):j\in[w^{\prime}]\}\ \sqcup\bigsqcup_{ \ell\in[t-1]}\{H_{0}^{\circ}(\mathbb{X}),H_{1}^{\circ}(\mathbb{X})\}\qquad \qquad\text{(disjoint union, as a multiset)}\]
witnesses the equality in Theorem 1.11.
**Remark 1.14**.: Appealing to Theorem 1.5, Theorem 1.11, and the definition of index complexity, it will be interesting ahead to note that for a layer \(S\subsetneq\{0,1\}^{n}\) with \(W_{n}(S)=w\), we have6
Footnote 6: We will understand the index complexity of symmetric sets in more detail in Section 3.2.
\[\overline{\Lambda}_{n}(S)=\max\{w,n-w\}=n-r_{n}(S),\]
and Theorem 1.11, in fact, shows that for the layer \(S\) and for all \(t\geq 1\), we have
\[\mathsf{EHC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S) =\mathsf{EHC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus S)+2t-2\] \[=\overline{\Lambda}_{n}(S)+2t-2\] \[=n-r_{n}(S)+2t-2\] \[=\mathsf{EPC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus S)+2t-2=\mathsf{ EPC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S).\]
Further, [10] disprove Conjecture 1.7, pertaining to the remaining case '\(W_{n,2}\subsetneq W_{n}(S)\)', by providing a counterexample.
In the present work, we will build upon some of the above results.
### Our results: higher multiplicity hyperplane covers
As mentioned earlier, we are broadly interested in understanding when Question 1.2 has an answer in the affirmative. In the present work, we will obtain some such characterizations when \(t\geq 1,\;\ell=t-1\), for some structured subsets of the hypercube; specifically, we will consider symmetric sets, as well as a _block generalization_ of symmetric sets. Strictly speaking, we will also have some nondegeneracy conditions in some characterizations.
#### Proof technique
We also have a common proof technique for our results, which is simple and similar to the approach adopted in the earlier works [1, 2, 3, 10]. To summarize the technique, consider a subset \(S\subsetneq\{0,1\}^{n}\) (with a suitable structure, as we detail later), and suppose we would like to determine \(\mathsf{EHC}_{n}^{(t,t-1)}(S)\). Via the polynomial method, we first obtain a lower bound for the _weaker_ polynomial covering problem, say \(\mathsf{EPC}_{n}^{(t,t-1)}(S)\geq L_{t}\) (for some \(L_{t}\geq 1\)). We then construct a hyperplane cover to obtain an upper bound \(\mathsf{EHC}_{n}^{(t,t-1)}(S)\leq L_{t}\) for the _stronger_ hyperplane covering problem. Thus, we immediately have the inequalities
\[L_{t}\geq\mathsf{EHC}_{n}^{(t,t-1)}(S)\geq\mathsf{EPC}_{n}^{(t,t-1)}(S)\geq L _{t},\]
which gives a tight characterization.
#### Some fundamental hyperplane families
Before we detail our results, let us fix the notations for some fundamental hyperplane families which will appear repeatedly in this work.
1. For each \(t\in[0,n]\), define \(H_{t}^{\prime}(\mathbb{X})=\sum_{i=1}^{n}X_{i}-t\). Further, for any \(W\subseteq[0,n]\), let \(\mathcal{H}_{W}^{\prime}(\mathbb{X})=\{H_{t}^{\prime}(\mathbb{X}):t\in W\}\).
2. For each \(i\in[0,\lceil n/2\rceil]\), \(j\in[i]\), as defined in Lemma 1.12, we have \[H^{*}_{(i,j)}(\mathbb{X})=\sum_{k=1}^{n-j}X_{k}-(n-2i+j)X_{n-j+1}-(i-j).\] Further, let \(\mathcal{H}^{*}_{i}(\mathbb{X})=\{H^{*}_{(i,j)}(\mathbb{X}):j\in[i]\}\).
3. Define \(H^{\circ}_{0}(\mathbb{X})=X_{1}\) and \(H^{\circ}_{1}(\mathbb{X})=X_{1}-1\). Further, let \(\mathcal{H}^{\circ m}(\mathbb{X})=\bigsqcup_{\ell=1}^{m}\{H^{\circ}_{0}( \mathbb{X}),H^{\circ}_{1}(\mathbb{X})\}\) (disjoint union, as a multiset), for any \(m\geq 1\).
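Since the family \(\mathcal{H}^{*}_{i}(\mathbb{X})\) in item 2 is used throughout, the following small Python script (illustrative only, and not a substitute for the proof in Appendix B) checks the two properties of Lemma 1.12 by brute force for small \(n\):

```
# Brute-force check of Lemma 1.12 for small n: every point of T_{n,i} is a zero
# of some H*_{(i,j)}, and no point outside T_{n,i} is a zero of any of them.
from itertools import product
from math import ceil

def H_star(i, j, x, n):
    # H*_{(i,j)}(x) = sum_{k=1}^{n-j} x_k - (n - 2i + j) x_{n-j+1} - (i - j)
    return sum(x[:n - j]) - (n - 2 * i + j) * x[n - j] - (i - j)

for n in range(2, 9):
    for i in range(ceil(n / 2) + 1):
        W_ni = set(range(i)) | set(range(n - i + 1, n + 1))   # the weight set W_{n,i}
        for x in product((0, 1), repeat=n):
            vals = [H_star(i, j, x, n) for j in range(1, i + 1)]
            if sum(x) in W_ni:                                # x lies in T_{n,i}
                assert any(v == 0 for v in vals)
            else:
                assert all(v != 0 for v in vals)
```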
#### 1.2.1 Warm-up: Index complexity of symmetric sets
We have seen that [10] obtain a lower bound on the polynomial covering problem for any general subset of the hypercube, in terms of index complexity (Theorem 1.10), by employing the polynomial method. Further, they show that in the case of a single layer of the hypercube, this lower bound is tight (Theorem 1.11). As a consequence, it is also seen that the index complexity of a single layer can be expressed in terms of the combinatorial measure \(\Lambda_{n}\) introduced in [11] (also see Theorem 1.5 and Remark 1.14). To summarise, we have the following.
**Proposition 1.15** (Implicit in [10]).: _For a layer \(S\subsetneq\{0,1\}^{n}\) with \(W_{n}(S)=\{w\}\), we have_
\[\overline{\Lambda}_{n}(S)=n-r_{n}(S)=\max\{w,n-w\}.\]
Such an equality is no longer true for general symmetric sets. We can, in fact, precisely understand the general case combinatorially. We introduce some terminology before we proceed.
For any \(a\in[-1,n-1]\), \(b\in[1,n+1]\), \(a<b\), denote the set of weights \(I_{n,a,b}=[0,a]\cup[b,n]\), and we say a peripheral interval is the symmetric set \(J_{n,a,b}\subseteq\{0,1\}^{n}\) defined by \(W_{n}(J_{n,a,b})=I_{n,a,b}\).7 We will consider _inner and outer approximations_ of a symmetric set.
Footnote 7: Here, we have the convention \([0,-1]=[n+1,n]=\emptyset\).
Let \(S\subseteq\{0,1\}^{n}\) be a symmetric set.
* If \(S\subsetneq\{0,1\}^{n}\), then the inner interval of \(S\), denoted by in-int\((S)\), is defined to be the peripheral interval \(J_{n,a,b}\subseteq\{0,1\}^{n}\) of maximum size such that \(J_{n,a,b}\subseteq S\). Further, we define in-int\((\{0,1\}^{n})=J_{n,\lfloor n/2\rfloor,\lfloor n/2\rfloor+1}\).
* Let \(\mathcal{O}(S)\) be the collection of all peripheral intervals \(J_{n,a,b}\) such that \(S\subseteq J_{n,a,b}\) and \(I_{n,a,b}=W_{n}(J_{n,a,b})\) has minimum size. It is easy to check that there exists either a unique peripheral interval \(J_{n,a,b}\in\mathcal{O}(S)\), or exactly a pair of peripheral intervals \(J_{n,a,b},J_{n,n-b,n-a}\in\mathcal{O}(S)\) such that the quantity \(|a+b-n|\) is minimum. The outer interval of \(S\), denoted by out-int\((S)\), is defined by \[\text{out-int}(S)=\begin{cases}J_{n,a,b}&\text{if $J_{n,a,b}$ is the unique minimizer of $|a+b-n|$},\\ J_{n,a,b}&\text{if $J_{n,a,b},J_{n,n-b,n-a}$ are minimizers of $|a+b-n|$, and $a>n-b$}.\end{cases}\] We will elaborate a bit on the definitions in the Preliminaries (Section 2.4), and further discuss the uniqueness (and therefore, well-definedness) of inner and outer intervals, along with some illustrations, in Appendix C. Now define \[\text{in}_{n}(S) =(\min\{a,n-b\}+1)+|W_{n}(S)\setminus W_{n,\min\{a,n-b\}+1}| \text{where $J_{n,a,b}=\text{in-int}(S)$},\] and \[\text{out}_{n}(S) =a+n-b+1=|I_{n,a,b}|-1 \text{where $J_{n,a,b}=\text{out-int}(S)$}.\]
Towards understanding the index complexity of general symmetric sets, we obtain the following important relation between inner and outer intervals of symmetric sets.
**Proposition 1.16**.: _For any nonempty symmetric set \(S\subseteq\{0,1\}^{n}\), we have_
\[\operatorname{in}_{n}(\{0,1\}^{n}\setminus S)+\operatorname{out}_{n}(S)\geq n.\]
_Further, equality holds if and only if either \(S\) or \(\{0,1\}^{n}\setminus S\) is a peripheral interval._
We are now ready to characterize the index complexity of symmetric sets.
**Proposition 1.17**.: _For any nonempty symmetric set \(S\subseteq\{0,1\}^{n}\), we have \(r_{n}(S)=\operatorname{out}_{n}(S)\)._
Also, the following is trivial, by definitions.
**Fact 1.18** (By definitions).: _For any symmetric set \(S\subsetneq\{0,1\}^{n}\), we have \(\Lambda_{n}(S)=\operatorname{in}_{n}(S)\)._
The following is then an immediate corollary of Proposition 1.16, Proposition 1.17, and Fact 1.18.
**Corollary 1.19**.: _For any nonempty symmetric set \(S\subseteq\{0,1\}^{n}\), we have_
\[\overline{\Lambda}_{n}(S)\geq n-r_{n}(S).\]
_Further, equality holds if and only if either \(S\) or \(\{0,1\}^{n}\setminus S\) is a peripheral interval._
Note that if \(S\subseteq\{0,1\}^{n}\) is a layer, then \(\{0,1\}^{n}\setminus S\) is a peripheral interval, and hence Corollary 1.19 recovers Proposition 1.15.
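For small \(n\), Corollary 1.19 can be checked directly by exhaustive search over weight sets, using the definitions of \(\Lambda_{n}\), \(\mu_{n}\), and index complexity verbatim; the Python sketch below is illustrative only.

```
# Brute-force check of Corollary 1.19 for n = 5: for every nonempty symmetric
# set S (encoded by its weight set W), Lambda_n of the complement of S is at
# least n - r_n(S).
from itertools import combinations, product
from math import ceil

def Lambda(n, W):                  # Lambda_n of the symmetric set with weight set W
    mu = max(i for i in range(ceil(n / 2) + 1)
             if set(range(i)) | set(range(n - i + 1, n + 1)) <= W)
    return len(W) - mu

def index_complexity(n, S):        # r_n(S), directly from the definition
    if len(S) == 1:
        return 0
    for r in range(n + 1):
        for I in combinations(range(n), r):
            for u in S:
                if all(any(v[i] != u[i] for i in I) for v in S if v != u):
                    return r

n = 5
cube = list(product((0, 1), repeat=n))
for bits in range(1, 1 << (n + 1)):                      # nonempty weight sets
    W = {t for t in range(n + 1) if (bits >> t) & 1}
    S = [x for x in cube if sum(x) in W]
    assert Lambda(n, set(range(n + 1)) - W) >= n - index_complexity(n, S)
```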
#### 1.2.2 Covering symmetric sets
Note that [10] had disproved Conjecture 1.7, but not characterized \(\operatorname{\mathsf{EHC}}_{n}^{(1,0)}(S)\) for all symmetric sets \(S\subsetneq\{0,1\}^{n}\). We obtain this characterization here, and more, with our first main result. (In particular, this answers a question of the fourth author [25, Open Problem 36].) Our first main result extends Theorem 1.5 and Theorem 1.11, and answers Question 1.2 in the affirmative for symmetric sets, with \(t\geq 1,\,\ell=t-1\). As a proof attempt, for a general symmetric set, we may directly apply Theorem 1.10 (which was obtained in [10] by the polynomial method), and then attempt to find a tight construction. This would require a precise understanding of the index complexity of symmetric sets, which we obtain in Proposition 1.17. However, the lower bound thus obtained is _weak_. It turns out that the tight lower bound is larger, and the gap is, in fact, exactly captured by Corollary 1.19!
For convenience, we will state the result in terms of complements of symmetric sets (which are also symmetric). This will be an important distinction in an extended setting, which we consider later.
**Theorem 1.20**.: _For any nonempty symmetric set \(S\subseteq\{0,1\}^{n}\) and \(t\geq 1\), we have_
\[\operatorname{\mathsf{EHC}}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S)= \operatorname{\mathsf{EPC}}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S)=\overline{ \Lambda}_{n}(S)+2t-2.\]
Interestingly, we obtain the tight bound in Theorem 1.20 since our instantiation of the polynomial method turns out to be _stronger_ than that in the proof of Theorem 1.10 by [10]. This relative strength is also captured exactly by Corollary 1.19!
**Remark 1.21**.: The proof of Theorem 1.20 will, in fact, show that for any nonempty symmetric set \(S\subseteq\{0,1\}^{n}\) and \(t\geq 1\), we have
\[\mathsf{EHC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S) =\mathsf{EHC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus S)+2t-2\] \[=\overline{\Lambda}_{n}(S)+2t-2\] \[=\mathsf{EPC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus S)+2t-2=\mathsf{ EPC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus S).\]
A simple generalization of Example 1.13 gives a construction implying the equality in Theorem 1.20.
**Example 1.22**.: Let \(S\subseteq\{0,1\}^{n}\) be a nonempty symmetric set, and \(t\geq 1\). Then the collection of hyperplanes
\[\mathcal{H}_{\overline{\mu}_{n}(S)}^{*}(\mathbb{X})\sqcup\mathcal{H}_{W_{n}( \{0,1\}^{n}\setminus S)\setminus W_{n,\overline{\mu}_{n}(S)}}^{\prime}( \mathbb{X})\sqcup\mathcal{H}^{\circ(t-1)}(\mathbb{X})\]
witnesses the equality in Theorem 1.20.
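For small \(n\), one can verify by brute force that the multiset in Example 1.22 is indeed a \((t,t-1)\)-exact hyperplane cover of \(\{0,1\}^{n}\setminus S\); the following Python sketch (illustrative only, with arbitrarily chosen weight sets) performs this check.

```
# Brute-force check, for small n, that the multiset of hyperplanes given in
# Example 1.22 is a (t, t-1)-exact hyperplane cover of {0,1}^n \ S.
from itertools import product
from math import ceil

def check_cover(n, W_S, t):
    W_comp = set(range(n + 1)) - W_S
    mu_bar = max(i for i in range(ceil(n / 2) + 1)
                 if set(range(i)) | set(range(n - i + 1, n + 1)) <= W_comp)
    W_mu = set(range(mu_bar)) | set(range(n - mu_bar + 1, n + 1))
    planes = []                                        # each hyperplane as a function
    for j in range(1, mu_bar + 1):                     # the family H*_{mu_bar}
        planes.append(lambda x, j=j: sum(x[:n - j])
                      - (n - 2 * mu_bar + j) * x[n - j] - (mu_bar - j))
    for s in W_comp - W_mu:                            # the family H'
        planes.append(lambda x, s=s: sum(x) - s)
    for _ in range(t - 1):                             # t - 1 copies of {X_1, X_1 - 1}
        planes.append(lambda x: x[0])
        planes.append(lambda x: x[0] - 1)
    for x in product((0, 1), repeat=n):
        hits = sum(1 for h in planes if h(x) == 0)
        assert (hits >= t) if sum(x) in W_comp else (hits == t - 1)

for t in (1, 2, 3):
    check_cover(6, {0, 2, 3}, t)                       # arbitrary symmetric S
    check_cover(5, {1, 4}, t)
```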
#### 1.2.3 Covering special \(k\)-_wise symmetric_ sets
Fix a positive integer \(k\geq 1\), and consider the hypercube \(\{0,1\}^{N}\) as a product of \(k\) hypercubes \(\{0,1\}^{N}=\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{k}}\) (and so \(N=n_{1}+\cdots+n_{k}\)). We would like to extend the notion of symmetric sets to subsets in \(\{0,1\}^{N}\) which also respect the structure of \(\{0,1\}^{N}\) as a _product of \(k\) blocks_. We define a subset \(S\subseteq\{0,1\}^{N}\) to be a \(k\)-wise grid if \(S=S_{1}\times\cdots\times S_{k}\), where each \(S_{i}\subseteq\{0,1\}^{n_{i}}\) is symmetric. Further, we say \(S=S_{1}\times\cdots\times S_{k}\) is a \(k\)-wise layer if each \(S_{i}\) is a layer. Then we define a general \(k\)-wise symmetric set to be a union of an arbitrary collection of \(k\)-wise layers.
Note that every \(k\)-wise grid \(S_{1}\times\cdots\times S_{k}\) is a \(k\)-wise symmetric set, as given by
\[S=\bigsqcup_{\text{layer }L_{i}\subseteq S_{i},\,i\in[k]}(L_{1}\times \cdots\times L_{k}),\]
but the converse is not true. For instance, the complement of a \(k\)-wise layer \(L_{1}\times\cdots\times L_{k}\) is \(k\)-wise symmetric, as given by
\[\{0,1\}^{N}\setminus(L_{1}\times\cdots\times L_{k})=\bigsqcup_{\begin{subarray}{c}\emptyset\neq I\subseteq[k]\\ \text{layer }\widetilde{L}_{i}\subseteq\{0,1\}^{n_{i}},\,\widetilde{L}_{i}\neq L_{i},\,i\in I\\ \text{layer }L_{i}^{\prime}\subseteq\{0,1\}^{n_{i}},\,i\not\in I\end{subarray}}\bigg{(}\prod_{i\in I}\widetilde{L}_{i}\bigg{)}\times\bigg{(}\prod_{i\not\in I}L_{i}^{\prime}\bigg{)},\]
which is clearly not a \(k\)-wise grid.
#### Covering complements of \(k\)-wise grids
Our second main result extends Theorem 1.20 to complements of \(k\)-wise grids, thus answering Question 1.2 in the affirmative in this case.
**Theorem 1.23**.: _For any nonempty \(k\)-wise grid \(S=S_{1}\times\cdots\times S_{k}\subseteq\{0,1\}^{N}\) and \(t\geq 1\), we have_
\[\mathsf{EHC}_{N}^{(t,t-1)}(\{0,1\}^{N}\setminus S)=\mathsf{EPC}_{N}^{(t,t-1)}( \{0,1\}^{N}\setminus S)=\sum_{i=1}^{k}\overline{\Lambda}_{n_{i}}(S_{i})+2t-2.\]
**Remark 1.24**.: The proof of Theorem 1.23 will, in fact, show that for any nonempty \(k\)-wise grid \(S=S_{1}\times\cdots\times S_{k}\subseteq\{0,1\}^{N}\) and \(t\geq 1\), we have
\[\mathsf{EHC}_{N}^{(t,t-1)}(\{0,1\}^{N}\setminus S) =\mathsf{EHC}_{N}^{(1,0)}(\{0,1\}^{N}\setminus S)+2t-2\] \[=\sum_{i=1}^{k}\overline{\Lambda}_{n_{i}}(S_{i})+2t-2\] \[=\mathsf{EPC}_{N}^{(1,0)}(\{0,1\}^{N}\setminus S)+2t-2=\mathsf{ EPC}_{N}^{(t,t-1)}(\{0,1\}^{N}\setminus S).\]
A construction that implies the equality in Theorem 1.23 is a block extension of Example 1.22.
**Example 1.25**.: Let \(S=S_{1}\times\cdots\times S_{k}\subseteq\{0,1\}^{N}\) be a nonempty \(k\)-wise grid, and \(t\geq 1\). Then the collection of hyperplanes
\[\bigg{(}\bigsqcup_{j=1}^{k}\big{(}\mathcal{H}^{*}_{\overline{\mu}_{n_{j}}(S_{j})}(\mathbb{X}_{j})\sqcup\mathcal{H}^{\prime}_{W_{n_{j}}(\{0,1\}^{n_{j}}\setminus S_{j})\setminus W_{n_{j},\overline{\mu}_{n_{j}}(S_{j})}}(\mathbb{X}_{j})\big{)}\bigg{)}\sqcup\mathcal{H}^{\circ(t-1)}(\mathbb{X}_{1})\]
witnesses the equality in Theorem 1.23.
#### A special case: covering subcubes and their complements
Here we consider the special case of \(2\)-wise grids, where one of the blocks is a _full_ hypercube. The results we mention here are immediate from previous results, and hence we simply mention them without repeating the proofs.
By a subcube of a hypercube \(\{0,1\}^{n}\), we mean a subset of the form \(\{0,1\}^{I}\times\{a\}\), where \(I\subseteq[n]\) and \(a\in\{0,1\}^{[n]\setminus I}\). Since we are now concerned with polynomials with vanishing conditions on a subcube, without loss of generality, we will assume that the subcube is \(\mathcal{Q}_{m}\coloneqq\{0,1\}^{m}\times\{0^{n-m}\}\), for some \(m\in[0,n]\). This is true since we can permute coordinates, as well as introduce translations of variables in any polynomial without changing the degree of the polynomial. Further, we will assume that \(1\leq m\leq n-1\). So \(\mathcal{Q}_{m}\) is a \(2\)-wise grid, where we consider the product \(\{0,1\}^{n}=\{0,1\}^{m}\times\{0,1\}^{n-m}\).
Covering complements of subcubes. As a consequence of Theorem 1.23, we immediately get the following about covering complements of subcubes.
**Corollary 1.26**.: _For any \(1\leq m\leq n-1\) and \(t\geq 1\), we have_
\[\mathsf{EHC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus\mathcal{Q}_{m})=\mathsf{EPC}_ {n}^{(t,t-1)}(\{0,1\}^{n}\setminus\mathcal{Q}_{m})=n-m+2t-2.\]
In this case, Example 1.25 simplifies to the following.
**Example 1.27**.: Let \(1\leq m\leq n-1\) and \(t\geq 1\). Then the collection of hyperplanes
\[\{X_{m+1}-1,\ldots,X_{n}-1\}\sqcup\mathcal{H}^{\circ(t-1)}(\mathbb{X})\]
witnesses the equality in Corollary 1.26.
A variant of Corollary 1.26 can be obtained by considering arbitrary symmetric sets in the _second block_. For a symmetric set \(S\subseteq\{0,1\}^{n-m}\), denote \(\mathcal{Q}_{m}(S)=\{0,1\}^{m}\times S\).
**Corollary 1.28**.: _For \(1\leq m\leq n-1\), any nonempty symmetric set \(S\subseteq\{0,1\}^{n-m}\), and \(t\geq 1\), we have_
\[\mathsf{EHC}_{n}^{(t,t-1)}(\{0,1\}^{n}\setminus\mathcal{Q}_{m}(S))=\mathsf{EPC}_ {n}^{(t,t-1)}(\{0,1\}^{n}\setminus\mathcal{Q}_{m}(S))=\overline{\Lambda}_{n-m}( S)+2t-2.\]
In this case, Example 1.25 simplifies to the following.
**Example 1.29**.: Let \(1\leq m\leq n-1\), \(S\subseteq\{0,1\}^{n-m}\) be a nonempty symmetric set, and \(t\geq 1\). Also denote \(\mathbb{X}=(\mathbb{X}^{\prime},\mathbb{X}^{\prime\prime})\) with \(\mathbb{X}^{\prime}=(X_{1},\ldots,X_{m}),\,\mathbb{X}^{\prime\prime}=(X_{m+1}, \ldots,X_{n})\). Then the collection of hyperplanes
\[\mathcal{H}^{*}_{\overline{\mu}_{n-m}(S)}(\mathbb{X}^{\prime\prime})\sqcup\mathcal{H}^{\prime}_{W_{n-m}(\{0,1\}^{n-m}\setminus S)\setminus W_{n-m,\overline{\mu}_{n-m}(S)}}(\mathbb{X}^{\prime\prime})\sqcup\mathcal{H}^{\circ(t-1)}(\mathbb{X}^{\prime\prime})\]
witnesses the equality in Corollary 1.28.
Covering subcubes. It turns out that covering subcubes is easier than covering their complements. In fact, we can even consider a more general case - with an arbitrary symmetric set in the second block, as in Corollary 1.28, as well as more general multiplicities. We give a quick proof here.
**Proposition 1.30**.: _For \(1\leq m\leq n-1\), any symmetric set \(S\subseteq\{0,1\}^{n-m}\), and \(t\geq 1,\,\ell\in[0,t-1]\), we have_
\[\mathsf{EHC}_{n}^{(t,\ell)}(\{0,1\}^{m}\times S)=\mathsf{EHC}_{n-m}^{(t,\ell)} (S).\]
_In particular, for any nonempty symmetric set \(S\subseteq\{0,1\}^{n-m}\) and \(t\geq 1\), we have_
\[\mathsf{EHC}_{n}^{(t,t-1)}(\{0,1\}^{m}\times S)=\mathsf{EHC}_{n-m}^{(t,t-1)}(S )=\Lambda_{n-m}(S)+2t-2.\]
Proof.: Denote the indeterminates \(\mathbb{X}=(\mathbb{X}^{\prime},\mathbb{X}^{\prime\prime})\) with \(\mathbb{X}^{\prime}=(X_{1},\ldots,X_{m}),\,\mathbb{X}^{\prime\prime}=(X_{m+1},\ldots,X_{n})\). Let \(\mathcal{H}(\mathbb{X})=\{h_{1}(\mathbb{X}),\ldots,h_{q}(\mathbb{X})\}\) be a \((t,\ell)\)-exact hyperplane cover for \(\{0,1\}^{m}\times S\) with \(q=|\mathcal{H}|=\mathsf{EHC}_{n}^{(t,\ell)}(\{0,1\}^{m}\times S)\). Now let \(\mathcal{H}^{\prime\prime}(\mathbb{X}^{\prime\prime})=\mathcal{H}(0^{m}, \mathbb{X}^{\prime\prime})=\{h_{1}(0^{m},\mathbb{X}^{\prime\prime}),\ldots,h_ {q}(0^{m},\mathbb{X}^{\prime\prime})\}\). Then it is immediate that \(\mathcal{H}^{\prime\prime}(\mathbb{X}^{\prime\prime})\) is a \((t,\ell)\)-exact hyperplane cover for \(S\subsetneq\{0,1\}^{n-m}\). This implies \(\mathsf{EHC}_{n}^{(t,\ell)}(\{0,1\}^{m}\times S)\geq\mathsf{EHC}_{n-m}^{(t, \ell)}(S)\).
Conversely, let \(\mathcal{H}(\mathbb{X}^{\prime\prime})=\{h_{1}(\mathbb{X}^{\prime\prime}), \ldots,h_{q}(\mathbb{X}^{\prime\prime})\}\) be a \((t,\ell)\)-exact hyperplane cover for \(S\subsetneq\{0,1\}^{n-m}\) with \(q=|\mathcal{H}(\mathbb{X}^{\prime\prime})|=\mathsf{EHC}_{n-m}^{(t,\ell)}(S)\). Then again, it is immediate that \(\overline{\mathcal{H}}(\mathbb{X}^{\prime},\mathbb{X}^{\prime\prime})\coloneqq \mathcal{H}(\mathbb{X}^{\prime\prime})\) is a \((t,\ell)\)-exact hyperplane cover for \(\{0,1\}^{m}\times S\). This implies \(\mathsf{EHC}_{n}^{(t,\ell)}(\{0,1\}^{m}\times S)\leq\mathsf{EHC}_{n-m}^{(t, \ell)}(S)\). Thus, we have proved the first identity.
The second identity then follows immediately from Theorem 1.20.
### Our results: higher multiplicity polynomial covers
Let us now look at a few instances where we can solve the polynomial covering problem in broader generality, but not the hyperplane covering problem. In fact, in this extended setting, we will also need some _nondegeneracy conditions_ to obtain a clean combinatorial characterization.
Consider the hypercube \(\{0,1\}^{N}=\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{k}}\). Recall that we will now work with the indeterminates \(\mathbb{X}=(\mathbb{X}_{1},\ldots,\mathbb{X}_{k})\), where \(\mathbb{X}_{j}=(X_{j,1},\ldots,X_{j,n_{j}})\) are the indeterminates for the \(j\)-th block. Let \(t\geq 1,\,\ell\in[0,t-1]\), and consider any subset \(S\subseteq\{0,1\}^{N}\). We define
* a \((t,\ell)\)-block exact hyperplane cover for \(S\) to be a \((t,\ell)\)-exact hyperplane cover \(\mathcal{H}(\mathbb{X})\) (in \(\mathbb{R}^{N}\)) for \(S\) such that \[|\mathcal{H}(a,\mathbb{X}_{j})|=|\mathcal{H}(\mathbb{X})|,\quad\text{for every }a\in\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{j-1}}\times\{0,1\}^{n_{j+1} }\times\cdots\times\{0,1\}^{n_{k}},\,j\in[k].\]
* a \((t,\ell)\)-block exact polynomial cover for \(S\) to be a nonzero polynomial \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) such that 1. the polynomial \(P(\mathbb{X})\) vanishes at each point in \(S\) with multiplicity at least \(t\), 2. for each \(j\in[k]\), and every point \((a,\widetilde{a})\in\{0,1\}^{N}\setminus S\) with \(a\in\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{j-1}}\times\{0,1\}^{n_{j+1}} \times\cdots\times\{0,1\}^{n_{k}},\,\widetilde{a}\in\{0,1\}^{n_{j}}\), the polynomial \(P(a,\mathbb{X}_{j})\) vanishes at \(\widetilde{a}\) with multiplicity exactly \(\ell\).
Let \(\mathsf{b}\text{-}\mathsf{EHC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\) denote the minimum size of a \((t,\ell)\)-block exact hyperplane cover for \(S\), and let \(\mathsf{b}\text{-}\mathsf{EPC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\) denote the minimum degree of a \((t,\ell)\)-block exact polynomial cover for \(S\). It is obvious from the definitions that, in general, we have \(\mathsf{b}\text{-}\mathsf{EHC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\geq \mathsf{b}\text{-}\mathsf{EPC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S).\) For completeness, we give a quick proof in Appendix A.2. Further, it is trivial from the definitions that \(\mathsf{b}\text{-}\mathsf{EHC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\geq\mathsf{ EHC}^{(t,\ell)}_{N}(S)\) and \(\mathsf{b}\text{-}\mathsf{EPC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\geq\mathsf{ EPC}^{(t,\ell)}_{N}(S)\). A blockwise variant of Question 1.2 that we will consider is the following.
**Question 1.31**.: _Given a proper subset \(S\subsetneq\{0,1\}^{N}\) and integers \(t\geq 1,\,\ell\in[0,t-1]\), under what conditions can we say that \(\mathsf{b}\text{-}\mathsf{EHC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)=\mathsf{b} \text{-}\mathsf{EPC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\)?_
Unfortunately, we are unable to answer Question 1.31 in the generality that we consider; in fact, we suspect that the answer could be negative. Instead, we can solve simply the blockwise polynomial covering problem.
**Question 1.32**.: _Given a proper subset \(S\subsetneq\{0,1\}^{N}\) and integers \(t\geq 1,\,\ell\in[0,t-1]\), under what conditions can we (combinatorially) characterize \(\mathsf{b}\text{-}\mathsf{EPC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\)?_
#### 1.3.1 Covering _pseudo downward closed_ (PDC) \(k\)-wise symmetric sets
Our proof technique extends further to a more general class of \(k\)-wise symmetric sets to give a characterization for the blockwise polynomial covering problem, that is, we answer Question 1.32. In fact, the tight polynomial construction for this characterization hints that in this generality, the answers to Question 1.2 and Question 1.31 could be negative.
Consider the two obvious total orders \(\leq\) and \(\leq^{\prime}\) on \(\mathbb{N}\) defined by
\[0<1<2<3<\cdots\quad\text{and}\quad 0>^{\prime}1>^{\prime}2>^{\prime}3>^{\prime}\cdots\]
Let \(\mathscr{T}=\{\leq,\leq^{\prime}\}\). For any \(S\subseteq\{0,1\}^{N}\) and \(j\in[k]\), let \(S_{j}\subseteq\{0,1\}^{n_{j}}\) denote the projection of \(S\) onto the \(j\)-th block. Consider any \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\). It is immediate that each \(S_{j}\) is symmetric, \(S_{1}\times\cdots\times S_{k}\) is a \(k\)-wise grid, and \(S\subseteq S_{1}\times\cdots\times S_{k}\). Further, denote
\[W_{(n_{1},\ldots,n_{k})}(S)=\{(|x_{1}|,\ldots,|x_{k}|):(x_{1},\ldots,x_{k})\in S\}.\]
Then clearly, \(W_{(n_{1},\ldots,n_{k})}(S)\subseteq W_{n_{1}}(S_{1})\times\cdots\times W_{n_{k }}(S_{k})\). For each \(j\in[k]\), we consider an arbitrarily chosen total order \(\leq_{j}\!\in\mathscr{T}\) on \(W_{n_{j}}(S_{j})\), say denoted by \(W_{n_{j}}(S_{j})=\{w_{j,0}<_{j}\cdots<_{j}w_{j,q_{j}}\}\), and
further for each \(z_{j}\in[0,q_{j}]\), define the symmetric set \([S]_{j,z_{j}}\subseteq\{0,1\}^{n_{j}}\) by \(W_{n}([S]_{j,z_{j}})=\{w_{j,0}<_{j}\cdots<_{j}w_{j,z_{j}}\}\).
We define a \(k\)-symmetric set \(S\subseteq\{0,1\}^{N}\) to be pseudo downward closed (PDC) if for every \((w_{1,z_{1}},\ldots,w_{k,z_{k}})\in W_{(n_{1},\ldots,n_{k})}(S)\) we have \(W_{n_{1}}([S]_{1,z_{1}})\times\cdots\times W_{n_{k}}([S]_{k,z_{k}})\subseteq W _{(n_{1},\ldots,n_{k})}(S)\), that is, \(W_{(n_{1},\ldots,n_{k})}(S)\) is _downward closed_8 in \(W_{n_{1}}(S_{1})\times\cdots\times W_{n_{k}}(S_{k})\) under the partial order induced by \(\leq_{1},\ldots,\leq_{k}\). Further, let
Footnote 8: For any poset \((P,\leq)\), a subset \(D\subseteq P\) is downward closed if for any \(x\in D\) we have \(y\in D\) for all \(y\in P\), \(y\leq x\).
\[\mathcal{N}(S)=\{(z_{1},\ldots,z_{k})\in\mathbb{N}^{k}:(w_{1,z_{1}},\ldots,w_{ k,z_{k}})\in W_{(n_{1},\ldots,n_{k})}(S)\}.\]
It is clear that \(\mathcal{N}(S)\) is downward closed in \(\mathbb{N}^{k}\). Also let \(\mathsf{E}^{(\mathrm{out})}(S)\) denote the set of all minimal elements of the complement set \(\mathbb{N}^{k}\setminus\mathcal{N}(S)\) with respect to the natural partial order on \(\mathbb{N}^{k}\).
It is quite easy to check that the complement of a PDC \(k\)-symmetric set is again PDC \(k\)-symmetric. We defer the proof to the Appendix. Our third main result generalizes Theorem 1.23, but solves only the block polynomial covering problem, that is, answers Question 1.32. Note that in this generality, the combinatorial characterization that we have is nicer to describe in terms of complements.9
Footnote 9: This is why, for consistency, we have retained the description in terms of complements throughout this work.
**Theorem 1.33**.: _For any nonempty PDC \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\) and \(t\geq 1\), we have_
\[\mathsf{b}\text{-}\mathsf{EPC}^{(t,t-1)}_{(n_{1},\ldots,n_{k})}(\{0,1\}^{N} \setminus S)=\max_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)} \bigg{\{}\sum_{j\in[k]:z_{j}\geq 1}\overline{\Lambda}_{n_{j}}([S]_{j,z_{j}-1}) \bigg{\}}+2t-2.\]
**Remark 1.34**.: The proof of Theorem 1.33 will also show that for any nonempty PDC \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\) and \(t\geq 1\), we have
\[\mathsf{b}\text{-}\mathsf{EPC}^{(t,t-1)}_{(n_{1},\ldots,n_{k})}(\{0,1\}^{N}\setminus S) =\mathsf{b}\text{-}\mathsf{EPC}^{(1,0)}_{(n_{1},\ldots,n_{k})}(\{0,1\}^{N}\setminus S)+2t-2\] \[=\max_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)}\bigg{\{}\sum_{j\in[k]:z_{j}\geq 1}\overline{\Lambda}_{n_{j}}([S]_{j,z_{j}-1})\bigg{\}}+2t-2.\]
A construction that implies the equality in Theorem 1.33 can be adapted from Example 1.25 as follows.
**Example 1.35**.: For any fundamental family of hyperplanes \(\mathcal{H}(\mathbb{X})=\{H_{1}(\mathbb{X}),\ldots,H_{p}(\mathbb{X})\}\) defined in Section 1.2, let us abuse notation and also denote the corresponding product polynomial by \(\mathcal{H}(\mathbb{X})=H_{1}(\mathbb{X})\cdots H_{p}(\mathbb{X})\). Let \(S\subseteq\{0,1\}^{N}\) be a nonempty PDC \(k\)-wise symmetric set, and \(t\geq 1\). Assuming notations as in Example 1.25, for each \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)\), define
\[\mathcal{H}_{S,(z_{1},\ldots,z_{k})}(\mathbb{X})=\prod_{j\in[k]:z_{j}\geq 1}\Big(\mathcal{H}^{*}_{\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})}(\mathbb{X}_{j})\cdot\mathcal{H}^{\prime}_{W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{j,z_{j}-1})\setminus W_{n_{j},\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})}}(\mathbb{X}_{j})\Big).\]
Now consider a subfield of \(\mathbb{R}\) defined by \(\widehat{\mathbb{Q}}=\mathbb{Q}\big{(}\mathcal{H}_{S,(z_{1},\ldots,z_{k})}(b):b \in\{0,1\}^{N}\), \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)\big{)}\). It follows that \(\mathbb{R}\) is an infinite dimensional \(\widehat{\mathbb{Q}}\)-vector space. Choose any \(\widehat{\mathbb{Q}}\)-linearly independent subset \(\{\lambda_{S,(z_{1},\ldots,z_{k})}:(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{ out})}(S)\}\subseteq\mathbb{R}\). Then the polynomial
\[\bigg{(}\sum_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)}\lambda_{S,(z _{1},\ldots,z_{k})}\mathcal{H}_{S,(z_{1},\ldots,z_{k})}(\mathbb{X})\bigg{)} \cdot\mathcal{H}^{\circ(t-1)}(\mathbb{X}_{1})\]
witnesses the equality in Theorem 1.33.
#### Covering \(k\)-wise grids
Note that both \(k\)-wise grids and their complements are special cases of PDC \(k\)-wise symmetric sets. So our first two main results (Theorem 1.20, and Theorem 1.23 via Corollary 1.36) are, in fact, corollaries of our third main result (Theorem 1.33). Further, the tight example of a polynomial cover mentioned in Example 1.35 specializes to the tight examples of hyperplane covers mentioned in Example 1.22 and Example 1.25.
Theorem 1.23 characterizes the hyperplane and polynomial covering problems for complements of \(k\)-wise grids. In this case, appealing to Theorem 1.33, we get the following corollary, which shows that the blockwise variants of our covering problems are equivalent to the usual _non-blockwise_ covering problems.
**Corollary 1.36**.: _For any nonempty \(k\)-wise grid \(S=S_{1}\times\cdots\times S_{k}\subseteq\{0,1\}^{N}\) and \(t\geq 1\), we have_
\[\mathsf{b-EPC}^{(t,t-1)}_{(n_{1},\ldots,n_{k})}(\{0,1\}^{N}\setminus S) =\mathsf{EPC}^{(t,t-1)}_{N}(\{0,1\}^{N}\setminus S)\] \[=\sum_{j=1}^{k}\overline{\Lambda}_{n_{j}}(S_{j})+2t-2\] \[=\mathsf{EHC}^{(t,t-1)}_{N}(\{0,1\}^{N}\setminus S)=\mathsf{b- EHC}^{(t,t-1)}_{(n_{1},\ldots,n_{k})}(\{0,1\}^{N}\setminus S).\]
Further, when it comes to covering \(k\)-wise grids (and not their complements), we get the following as a corollary of Theorem 1.33.
**Corollary 1.37**.: _For any \(k\)-wise grid \(S=S_{1}\times\cdots\times S_{k}\subsetneq\{0,1\}^{N}\) and \(t\geq 1\), we have_
\[\mathsf{b-EPC}^{(t,t-1)}_{(n_{1},\ldots,n_{k})}(S)=\max\big{\{}\Lambda_{n_{j}}( S_{j}):j\in[k]\big{\}}+2t-2.\]
A construction that implies the equality in Corollary 1.37 is a special case of Example 1.35.
**Example 1.38**.: Let \(S=S_{1}\times\cdots\times S_{k}\subsetneq\{0,1\}^{N}\) be a nonempty \(k\)-wise grid, and \(t\geq 1\). For each \(j\in[k]\), define
\[\mathcal{H}_{S_{j}}(\mathbb{X}_{j})=\mathcal{H}^{*}_{\mu_{n_{j}}(S_{j})}(\mathbb{X}_{j})\cdot\mathcal{H}^{\prime}_{W_{n_{j}}(S_{j})\setminus W_{n_{j},\mu_{n_{j}}(S_{j})}}(\mathbb{X}_{j}),\]

so that, on \(\{0,1\}^{n_{j}}\), the polynomial \(\mathcal{H}_{S_{j}}(\mathbb{X}_{j})\) vanishes exactly at the points of \(S_{j}\), and \(\deg(\mathcal{H}_{S_{j}})=\Lambda_{n_{j}}(S_{j})\).
Now consider a subfield of \(\mathbb{R}\) defined by \(\widehat{\mathbb{Q}}=\mathbb{Q}\big{(}\mathcal{H}_{S_{j}}(b):b\in\{0,1\}^{n_ {j}}\), \(j\in[k]\big{)}\). It follows that \(\mathbb{R}\) is an infinite dimensional \(\widehat{\mathbb{Q}}\)-vector space. Choose any \(\widehat{\mathbb{Q}}\)-linearly independent subset \(\{\lambda_{1},\ldots,\lambda_{k}\}\subseteq\mathbb{R}\). Then the polynomial
\[\bigg{(}\sum_{j=1}^{k}\lambda_{j}\mathcal{H}_{S_{j}}(\mathbb{X}_{j})\bigg{)} \cdot\mathcal{H}^{\circ(t-1)}(\mathbb{X}_{1})\]
witnesses the equality in Corollary 1.37.
#### 1.3.2 Partial results on other multiplicity polynomial covers
Let us now mention a couple of results on \((t,0)\)-exact polynomial covers. The first result concerns the polynomial covering problem for the _Hamming ball_, which is a symmetric set defined by a set of weights of the form \([0,w]\).
**Proposition 1.39**.: _For \(w\in[1,n-1]\), let \(S\subsetneq\{0,1\}^{n}\) be the symmetric set defined by \(W_{n}(S)=[0,w-1]\). Then for any \(t\in\left[2,\lfloor\frac{n+3}{2}\rfloor\right]\), we have_
\[\mathsf{EPC}_{n}^{(t,0)}(S)=w+2t-3.\]
_Further, the answer to Question 1.2 is negative, in general._
The second result concerns the polynomial covering problem for a single layer. Surprisingly, in this case, our proof employs basic analytic facts about coordinate transformations of polynomials, but we do not know of a proof via the polynomial method.
**Proposition 1.40**.: _For any layer \(S\subsetneq\{0,1\}^{n}\) with \(W_{n}(S)=\{w\}\), and \(t\geq 1\), we have_
\[\mathsf{EPC}_{n}^{(t,0)}(S)=t.\]
#### 1.3.3 Cool-down: Index complexity of PDC \(k\)-wise symmetric sets
We conclude this subsection by noting that the index complexity, which is a weaker notion for the blockwise covering problems that we consider, can be characterized to a good extent, even in the generality of PDC \(k\)-wise symmetric sets. Note that for symmetric sets \(S,S^{\prime}\subseteq\{0,1\}^{n}\) with \(S^{\prime}\subseteq S\), if \(J_{n,a,b}=\text{out-int}(S)\), then \(J_{n,a,b}=\text{out-int}(S^{\prime})\) if and only if \(\{a,b\}\subseteq W_{n}(S^{\prime})\). This turns out to be an important structural feature that we will work with.
Assume the block decomposition of the hypercube \(\{0,1\}^{N}=\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{k}}\). Now let \(S\subseteq\{0,1\}^{N}\) be a nonempty PDC \(k\)-wise symmetric set. Further, for each \(j\in[k]\), consider \(S_{j}\subseteq\{0,1\}^{n_{j}}\) (the \(j\)-th projection of \(S\)) and let \(J_{n_{j},a_{j},b_{j}}=\text{out-int}(S_{j})\). We define \(S\) to be outer intact if for every \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\text{in})}(S)\) and \(j\in[k]\), we have \(J_{n_{j},a_{j},b_{j}}=\text{out-int}([S]_{j,z_{j}})\). Equivalently, \(S\) is outer intact if and only if
\[\{a_{j},b_{j}\}\subseteq W_{n_{j}}([S]_{j,z_{j}})\text{ for each }j\in[k], \quad\text{for every }(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\text{in})}(S).\]
**Proposition 1.41**.: _For any nonempty outer intact PDC \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\), we have_
\[r_{N}(S)=\sum_{j=1}^{k}r_{n_{j}}(S_{j})=\sum_{j=1}^{k}\text{out}_{n_{j}}(S_{j }).\]
An important special case of Proposition 1.41 is for a \(k\)-wise layer, which is trivially outer intact PDC. As an immediate corollary of Proposition 1.41 and Proposition 1.15, we get the following.
**Corollary 1.42**.: _For any \(k\)-wise layer \(S=S_{1}\times\cdots\times S_{k}\subseteq\{0,1\}^{N}\) with \(W_{n_{j}}(S_{j})=\{w_{j}\},\,j\in[k]\), we have_
\[r_{N}(S)=\sum_{j=1}^{k}\min\{w_{j},n_{j}-w_{j}\}.\]
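As a quick numerical illustration of Corollary 1.42: for \(k=2\), \(n_{1}=4\), \(n_{2}=5\), and the \(2\)-wise layer \(S\) with \(w_{1}=1\) and \(w_{2}=3\), we get

\[r_{9}(S)=\min\{1,3\}+\min\{3,2\}=3.\]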
Proposition 1.41 shows that the index complexity is sensitive only to the blockwise projections, but Theorem 1.33 (for any PDC \(k\)-wise symmetric set) shows that the characterization of the polynomial covering problem is more sensitive to the _specific PDC structure_. This adds to our observation that our polynomial method argument is stronger than simply giving a lower bound in terms of index complexity.
### Related work
In addition to the works that motivated our results, there is a plethora of literature on hyperplane covering problems and related questions, over both the reals as well as finite fields. Even more, the polynomial method itself has been subject to intense investigation in the last few decades. We mention here a sample from this vast literature that we believe is most relevant to our present work.
#### Hyperplane covering problems
* Alon, Bergmann, Coppersmith, and Odlyzko studied a _balancing problem_ for sets of binary vectors, which admits a simple reformulation as a hyperplane covering problem. An extension of this problem to higher order complex roots of unity, which takes the form of a polynomial covering problem, was studied by Hegedus [14].
* Kos, Meszaros, and Ronyai [15] extended the result of Alon and Furedi [1] to the case where the vanishing constraints at every point of the hypercube have multiplicities depending on the individual coordinates of the point. The question in [1] itself was extracted by Barany from the work of Komjath [13].
* Linial and Radhakrishnan [16] considered the notion of an _essential hyperplane cover_ for the hypercube, which is a minimal family of hyperplanes that are sufficiently _oblique_, and such that every coordinate influences at least one hyperplane. They gave an upper bound of \(\lfloor n/2\rfloor+1\) and a lower bound of \(\Omega(\sqrt{n})\). Saxton [17] gave a tight bound of \(n+1\) in the special case wherein the coefficients of all the variables in the affine linear polynomials representing the hyperplanes are restricted to be nonnegative. Recent breakthroughs by Yehuda and Yehudayoff [18], and Araujo, Balogh, and Mattos [1] have improved the lower bound to \(n^{5/9-o(1)}\).
* Covering problems of a similar flavour over finite fields and finite grids have been studied by Jamison [15], Brouwer [13], Ball [14], Zanella [14], Ball and Serra [1], Blokhuis [1], and Bishnoi, Boyadzhiyska, Das and Meszaros [1], to name a few.
#### The polynomial method
* One of the simplest ways to formally encapsulate the polynomial method is via a classical algebraic object called the _finite-degree Zariski closure_. It was defined by Nie and Wang [12] in the context of combinatorial geometry over finite fields, who studied bounds on its size for arbitrary subsets of the hypercube. However, it had been studied implicitly even earlier by, for instance, Wei [13], Heijnen and Pellikaan [15], Keevash and Sudakov [16], and Ben-Eliezer, Hod, and Lovett [1]. Attempts to characterize the finite-degree Zariski closures of symmetric sets of the hypercube were done in the works of Hegedus [14, 15], the fourth author [17], as well as Srinivasan and the fourth author [18] (and also implicitly in Bernasconi and Egidi [1]).
* A stronger notion than finite-degree Zariski closure is another algebraic object called the _affine Hilbert function_. The affine Hilbert functions of all layers of the hypercube over all fields were determined by Wilson [19]. Further, Bernasconi and Egidi [1] determined the affine Hilbert functions of all symmetric sets of the hypercube over the reals. This was extended to the setting of larger grids by the fourth author [20].
* An even stronger notion than affine Hilbert functions is yet another algebraic object called the _Grobner basis_, along with the associated collection of _standard monomials_. Anstee, Ronyai, and Sali [1], and Friedl and Ronyai [13] studied the standard monomials for any subset of the hypercube in terms of a combinatorial phenomenon called _order shattering_. Felszeghy, Rath, and Ronyai [13] characterized the standard monomials of all symmetric sets of the hypercube via a _lex game_. Hegedus and Ronyai [13, 14], and Felszeghy, Hegedus, and Ronyai [13] characterized the Grobner basis for special cases of symmetric sets of the hypercube.
### Organization of the paper
In Section 2, we begin by covering some preliminaries, as well as set up some terminologies and notations. In Section 3, we will obtain characterizations of index complexity for the symmetry preserving subsets that we are interested in. This covers our warmup results (Section 1.2.1) and our cooldown results (Section 1.3.3). In Section 4, we will prove our third main result - a characterization for the blockwise polynomial covering problem (Theorem 1.33 in Section 1.3.1). We will also note that our first main result (Theorem 1.20 in Section 1.2.2), our second main result (Theorem 1.23 in Section 1.2.3), as well as all other results in Section 1.2.2, Section 1.2.3, and Section 1.3.1 are corollaries of Theorem 1.33. In Section 5, we will prove our partial results on other higher multiplicity polynomial covers (Section 1.3.2). Finally, in Section 6, we conclude with a discussion on some open questions.
## 2 Preliminaries
In this section, we will refresh some essential preliminary notions, as well as set up terminologies and notations.
### Posets
Let \((P,\leq)\) be a poset, that is, \(\leq\) is a partial order on a nonempty set \(P\). For a subset \(S\subseteq P\), we denote \(\min_{\leq}(S)\) to be the set of all minimal elements of \(S\), and \(\max_{\leq}(S)\) to be the set of all maximal elements, that is,
\[\min_{\leq}(S) =\{a\in S:(b\in S,\,b\leq a)\implies b=a\},\] \[\max_{\leq}(S) =\{a\in S:(b\in S,\,b\geq a)\implies b=a\}.\]
Further, we define the sets of outer extremal elements and inner extremal elements of \(S\), respectively, by
\[\mathsf{E}^{(\text{out})}_{\leq}(S) =\min_{\leq}(P\setminus S),\] \[\text{and}\quad\mathsf{E}^{(\text{in})}_{\leq}(S) =\max_{\leq}(S).\]
A subset \(S\subseteq P\) is defined to be downward closed if
\[a\in S,\,b\in P,\,b\leq a\quad\implies\quad b\in S.\]
For two posets \((P_{1},\leq_{1})\) and \((P_{2},\leq_{2})\), the product poset is the poset \((P_{1}\times P_{2},\leq)\), where \(\leq\) is defined by
\[(a_{1},a_{2})\leq(b_{1},b_{2})\text{ if and only if }a_{1}\leq_{1}b_{1}\text{ and }a_{2}\leq_{2}b_{2}.\]
We also say \(\leq\) is the induced order on \(P_{1}\times P_{2}\).
If we consider the obvious total order \(\leq\) on \(\mathbb{N}\) given by \(0<1<2<3<\cdots\), then the induced order on \(\mathbb{N}^{k}\) is called the natural order on \(\mathbb{N}^{k}\).
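As a small illustration of these notions, consider \(\mathbb{N}^{2}\) with the natural order. The set \(D=\{(0,0),(1,0),(0,1),(1,1),(2,0)\}\) is downward closed, and

\[\mathsf{E}^{(\text{in})}_{\leq}(D)=\max_{\leq}(D)=\{(1,1),(2,0)\},\qquad\mathsf{E}^{(\text{out})}_{\leq}(D)=\min_{\leq}(\mathbb{N}^{2}\setminus D)=\{(0,2),(2,1),(3,0)\},\]

whereas \(\{(0,0),(1,1)\}\) is not downward closed, since it contains \((1,1)\) but not \((1,0)\).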
### Symmetry preserving subsets of the hypercube
We are interested in hyperplane and polynomial covering problems for some structured subsets of the hypercube \(\{0,1\}^{n}\), where the _structures_ that we are concerned with are specified by invariance under the action of some subgroups of the symmetric group \(\mathfrak{S}_{n}\).
Symmetric sets. Let \(S\subseteq\{0,1\}^{n}\). We say \(S\) is symmetric if
\[(x_{1},\ldots,x_{n})\in S\text{ and }\sigma\in\mathfrak{S}_{n}\quad\implies \quad(x_{\sigma(1)},\ldots,x_{\sigma(n)})\in S.\]
It follows immediately that \(S\) is symmetric if and only if
\[x\in S,\,y\in\{0,1\}^{n},\text{ and }|y|=|x|\quad\implies\quad y\in S.\]
In this case, we denote \(W_{n}(S)=\{|x|:x\in S\}\subseteq[0,n]\). So the symmetric set \(S\) is completely determined by \(W_{n}(S)\). If \(|W_{n}(S)|=1\), then we say \(S\) is a layer. It is immediate that a subset of the hypercube is symmetric if and only if it is a union of some collection of layers.
Two combinatorial measures. For any \(x\in\{0,1\}^{n}\), the Hamming weight of \(x\) is defined by \(|x|=|\{i\in[n]:x_{i}=1\}|\). For any subset of coordinates \(I\subseteq[n]\), we denote \(x_{I}=(x_{i}:i\in I)\in\{0,1\}^{I}\). We require a simple combinatorial measure defined in [1]. For a subset \(S\subseteq\{0,1\}^{n}\), the index complexity is defined by
\[r_{n}(S)=\min\{|I|:I\subseteq[n],\,\text{there exists }a\in S\text{ such that }b_{I}\neq a_{I}\text{ for all }b\in S,\,b\neq a\}.\]
So \(r_{n}(S)\) is the minimum number of coordinates required to _separate some element in \(S\) from all other elements in \(S\)_.
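For instance, let \(S\subseteq\{0,1\}^{4}\) be the layer of Hamming weight \(1\). Taking \(a=(1,0,0,0)\) and \(I=\{1\}\), every \(b\in S\) with \(b\neq a\) has \(b_{1}=0\neq a_{1}\), so \(r_{4}(S)\leq 1\); and since \(|S|>1\), the empty set of coordinates separates no element of \(S\) from the others. Hence \(r_{4}(S)=1\).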
An important symmetric set that we will need consists of elements with Hamming weights in an _initial interval_ of weights or a _final interval_ of weights. For \(i\in[0,n]\), define \(W_{n,i}=[0,i-1]\cup[n-i+1,n]\), and the symmetric set \(T_{n,i}\subseteq\{0,1\}^{n}\) by \(W_{n}(T_{n,i})=W_{n,i}\). We also require another combinatorial measure, that is specific to symmetric sets, defined in [15]. For any symmetric set \(S\subseteq\{0,1\}^{n}\), define
\[\mu_{n}(S) =\max\{i\in[0,\lceil n/2\rceil]:W_{n,i}\subseteq W_{n}(S)\},\] \[\text{and}\quad\Lambda_{n}(S) =|W_{n}(S)|-\mu_{n}(S).\]
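As a small illustration of these two measures: if \(n=4\) and \(W_{4}(S)=\{0,2,4\}\), then \(W_{4,1}=\{0,4\}\subseteq W_{4}(S)\) while \(W_{4,2}=\{0,1,3,4\}\not\subseteq W_{4}(S)\), so \(\mu_{4}(S)=1\) and \(\Lambda_{4}(S)=3-1=2\).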
Further, we denote \(\overline{\mu}_{n}(S)\coloneqq\mu_{n}(\{0,1\}^{n}\setminus S)\) and \(\overline{\Lambda}_{n}(S)\coloneqq\Lambda_{n}(\{0,1\}^{n}\setminus S)\). We will also need a simple fact about the invariance of the above two combinatorial measures under _complementation of coordinates_. It follows straightforwardly from the definitions, and we give a proof in Appendix D.
**Fact 2.1**.: _Let \(S\subseteq\{0,1\}^{n}\) be a symmetric set, and \(\widetilde{S}\) be the image of \(S\) under the coordinate transformation \((X_{1},\ldots,X_{n})\mapsto(1-X_{1},\ldots,1-X_{n})\)._
1. _If_ \(S\neq\{0,1\}^{n}\)_, then_ \(\Lambda_{n}(\widetilde{S})=\Lambda_{n}(S)\)_._
2. _If_ \(S\neq\emptyset\)_, then_ \(r_{n}(\widetilde{S})=r_{n}(S)\)_._
_Blockwise_ **symmetric sets.** Now fix a _block decomposition_ of the hypercube as \(\{0,1\}^{N}=\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{k}}\). Let \(S\subseteq\{0,1\}^{N}\). We say \(S\) is \(k\)-wise symmetric if
\[\begin{array}{ll}&\big{(}(x_{1,1},\ldots,x_{1,n_{1}}),\ldots,(x_{k,1},\ldots,x_{k,n_{k}})\big{)}\in S\text{ and }(\sigma_{1},\ldots,\sigma_{k})\in \mathfrak{S}_{n_{1}}\times\cdots\times\mathfrak{S}_{n_{k}}\\ \implies&\big{(}(x_{1,\sigma_{1}(1)},\ldots,x_{1,\sigma_{1}(n_{1})}),\ldots,(x_{k,\sigma_{k}(1)},\ldots,x_{k,\sigma_{k}(n_{k})})\big{)}\in S.\end{array}\]
It follows immediately that \(S\) is \(k\)-wise symmetric if and only if
\[\begin{array}{ll}&(x_{1},\ldots,x_{k})\in S,\,(y_{1},\ldots,y_{k})\in\{0,1\}^{N},\text{ and }|y_{i}|=|x_{i}|\text{ for all }i\in[k]\\ \implies&(y_{1},\ldots,y_{k})\in S.\end{array}\]
In this case, we denote \(W_{(n_{1},\ldots,n_{k})}(S)=\{(|x_{1}|,\ldots,|x_{k}|):(x_{1},\ldots,x_{k})\in S\}\subseteq[0,n_{1}]\times\cdots\times[0,n_{k}]\). So the \(k\)-wise symmetric set \(S\) is completely determined by \(W_{(n_{1},\ldots,n_{k})}(S)\). For each \(j\in[k]\), let \(S_{j}\subseteq\{0,1\}^{n_{j}}\) denote the \(j\)-th projection of \(S\), that is, \(S_{j}=\{x_{j}\in\{0,1\}^{n_{j}}:(x_{1},\ldots,x_{k})\in S\}\). So we clearly have \(W_{(n_{1},\ldots,n_{k})}(S)\subseteq W_{n_{1}}(S_{1})\times\cdots\times W_{n_{k}}(S_{k})\). We say \(S\) is a \(k\)-wise grid if \(W_{(n_{1},\ldots,n_{k})}(S)=W_{n_{1}}(S_{1})\times\cdots\times W_{n_{k}}(S_{k})\). We say \(S\) is a \(k\)-wise layer if \(|W_{(n_{1},\ldots,n_{k})}(S)|=1\), or equivalently, each \(S_{j}\) is a layer. It is immediate that a subset of a hypercube is \(k\)-wise symmetric if and only if it is a union of some collection of \(k\)-wise layers.
Consider the two obvious total orders \(\leq\) and \(\leq^{\prime}\) on \(\mathbb{N}\) defined by
\[0<1<2<3<\cdots\quad\text{and}\quad 0>^{\prime}1>^{\prime}2>^{\prime}3>^{\prime}\cdots\]
Let \(\mathscr{T}=\{\leq,\leq^{\prime}\}\). Let \(S\subseteq\{0,1\}^{N}\) be \(k\)-wise symmetric. Fix arbitrary total orders \(\leq_{j}\in\mathscr{T}\) on \(W_{n_{j}}(S_{j})\) for each \(j\in[k]\), and consider the induced partial order \(\preceq\) on \(W_{n_{1}}(S_{1})\times\cdots\times W_{n_{k}}(S_{k})\). We define \(S\) to be pseudo downward closed (PDC) if \(W_{(n_{1},\ldots,n_{k})}(S)\) is downward closed in \(W_{n_{1}}(S_{1})\times\cdots\times W_{n_{k}}(S_{k})\). Further, for all \(j\in[k]\), enumerate \(W_{n_{j}}(S_{j})=\{w_{j,0}<_{j}\cdots<_{j}w_{j,q_{j}}\}\), and for each \(z_{j}\in[0,q_{j}]\), define the symmetric set \([S]_{j,z_{j}}\subseteq\{0,1\}^{n_{j}}\) by \(W_{n_{j}}([S]_{j,z_{j}})=\{w_{j,0}<_{j}\cdots<_{j}w_{j,z_{j}}\}\). Then define
\[\mathcal{N}(S)=\{(z_{1},\ldots,z_{k})\in\mathbb{N}^{k}:(w_{1,z_{1}},\ldots,w_{k,z_{k}})\in W_{(n_{1},\ldots,n_{k})}(S)\}.\]
It is immediate that the following are both equivalent conditions to \(S\) being PDC.
* \(\mathcal{N}(S)\) is downward closed in \(\mathbb{N}^{k}\) with respect to the natural order (also denoted by \(\leq\)).
* \(W_{n_{1}}([S]_{1,z_{1}})\times\cdots\times W_{n_{k}}([S]_{k,z_{k}})\subseteq W_ {(n_{1},\ldots,n_{k})}(S)\) for each \((z_{1},\ldots,z_{k})\in\mathcal{N}(S)\).
We will also need two simple indexing sets in our results. We denote
\[\begin{array}{ll}\mathsf{E}^{(\text{out})}(S)&\coloneqq\mathsf{E}^{(\text{ out})}_{\leq}(\mathcal{N}(S))=\{(z_{1},\ldots,z_{k})\in\mathbb{N}^{k}:(w_{1,z_{1}}, \ldots,w_{k,z_{k}})\in\mathsf{E}^{(\text{out})}_{\leq}(W_{(n_{1},\ldots,n_{k})}(S ))\},\\ \mathsf{E}^{(\text{in})}(S)&\coloneqq\mathsf{E}^{(\text{in})}_{\leq}(\mathcal{N}( S))=\{(z_{1},\ldots,z_{k})\in\mathbb{N}^{k}:(w_{1,z_{1}},\ldots,w_{k,z_{k}})\in \mathsf{E}^{(\text{in})}_{\leq}(W_{(n_{1},\ldots,n_{k})}(S))\}.\end{array}\]
### Polynomials, multiplicities, hyperplanes, and covers
We will work with the polynomial ring \(\mathbb{R}[\mathbb{X}]\), where \(\mathbb{X}=(X_{1},\ldots,X_{n})\) are the indeterminates. We are interested in higher order vanishing properties of polynomials. Let \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\). For any \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}^{n}\), denote \(|\alpha|=\alpha_{1}+\cdots+\alpha_{n}\). We will denote the \(\alpha\)-th order partial derivative of \(P(\mathbb{X})\) by \(\partial^{\alpha}P(\mathbb{X})\), that is,
\[\partial^{\alpha}P(\mathbb{X})\coloneqq\frac{\partial^{|\alpha|}P(\mathbb{X })}{\partial X_{1}^{\alpha_{1}}\cdots\partial X_{n}^{\alpha_{n}}}.\]
For any \(t\geq 0\) and \(a\in\mathbb{R}^{n}\), we define the multiplicity of \(P(\mathbb{X})\) at \(a\) as follows: we define \(\operatorname{mult}(P(\mathbb{X}),a)\geq t\) if \(\partial^{\alpha}P(a)=0\), for all \(\alpha\in\mathbb{N}^{n}\), \(|\alpha|<t\). Therefore, we get \(\operatorname{mult}(P(\mathbb{X}),a)=t\) if \(\operatorname{mult}(P(\mathbb{X}),a)\geq t\) and \(\partial^{\alpha}P(a)\neq 0\) for some \(\alpha\in\mathbb{N}^{n}\) with \(|\alpha|=t\).
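As a quick illustration, consider \(P(X_{1},X_{2})=X_{1}X_{2}\). At the origin, \(P\) and both first-order partial derivatives vanish while \(\partial^{(1,1)}P=1\neq 0\), so \(\operatorname{mult}(P(\mathbb{X}),(0,0))=2\); similarly \(\operatorname{mult}(P(\mathbb{X}),(1,0))=1\) and \(\operatorname{mult}(P(\mathbb{X}),(1,1))=0\).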
An affine hyperplane in \(\mathbb{R}^{n}\) is any set of the form \(K+v\), where \(K\subseteq\mathbb{R}^{n}\) is a vector subspace with \(\dim(K)=n-1\), and \(v\in\mathbb{R}^{n}\). In the rest of the paper, we will drop the adjective 'affine' and simply refer to these as hyperplanes. A set \(H\subseteq\mathbb{R}^{n}\) is a hyperplane if and only if \(H=\mathcal{Z}(P)\coloneqq\{a\in\mathbb{R}^{n}:P(a)=0\}\) for some nonzero polynomial \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) with \(\deg(P)=1\). In fact, we will identify \(H\) with its defining affine linear polynomial, and denote \(P(\mathbb{X})\) by \(H(\mathbb{X})\). So according to the context (which will be obvious), \(H(\mathbb{X})\) will either denote the hyperplane as a subset of \(\mathbb{R}^{n}\) or the defining affine linear polynomial. Similarly, if \(\mathcal{H}(\mathbb{X})=\{H_{1}(\mathbb{X}),\ldots,H_{k}(\mathbb{X})\}\) is a family of hyperplanes, we may also abuse notation and denote the corresponding defining polynomial by \(\mathcal{H}(\mathbb{X})=H_{1}(\mathbb{X})\cdots H_{k}(\mathbb{X})\). For our concern, the family \(\mathcal{H}(\mathbb{X})\) will be a multiset, and \(|\mathcal{H}(\mathbb{X})|\) will denote the multiset cardinality of the family, that is, the number of hyperplanes counted with repetition.
We are interested in covering10 subsets of the hypercube \(\{0,1\}^{n}\) by polynomials and families of hyperplanes. Let \(S\subsetneq\{0,1\}^{n}\), and consider _multiplicity parameters_\(t\geq 1\), \(\ell\in[0,t-1]\). We define
Footnote 10: We say a polynomial \(P\) covers a point \(a\) if \(a\in\mathcal{Z}(P)\). Similarly, we say \(P\) covers a point \(a\) with multiplicity at least \(t\) if \(a\in\mathcal{Z}^{t}(P)\).
* a nonzero polynomial \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) to be a \((t,\ell)\)-exact polynomial cover for \(S\) if \[\operatorname{mult}(P(\mathbb{X}),a)\geq t\quad\text{for all $a\in S$},\] \[\text{and}\quad\operatorname{mult}(P(\mathbb{X}),b)=\ell\quad \text{for all $b\in\{0,1\}^{n}\setminus S$}.\]
* a finite multiset of hyperplanes \(\mathcal{H}(\mathbb{X})\) in \(\mathbb{R}^{n}\) to be a \((t,\ell)\)-exact hyperplane cover for \(S\) if \[|\{H(\mathbb{X})\in\mathcal{H}(\mathbb{X}):H(a)=0\}|\geq t \quad\text{for all $a\in S$},\] \[\text{and}\quad|\{H(\mathbb{X})\in\mathcal{H}(\mathbb{X}):H(b)=0\}|=\ell \quad\text{for all $b\in\{0,1\}^{n}\setminus S$}.\] This implies that \(\mathcal{H}(\mathbb{X})\) is also a \((t,\ell)\)-exact polynomial cover for \(S\).
Let \(\mathsf{EHC}_{n}^{(t,\ell)}(S)\) denote the minimum size of a \((t,\ell)\)-exact hyperplane cover for \(S\), and let \(\mathsf{EPC}_{n}^{(t,\ell)}(S)\) denote the minimum degree of a \((t,\ell)\)-exact polynomial cover for \(S\). The definitions immediately imply that \(\mathsf{EHC}_{n}^{(t,\ell)}(S)\geq\mathsf{EPC}_{n}^{(t,\ell)}(S)\).
A covering result. In the results of Alon and Furedi [1] (Theorem 1.1), as well as Sauermann and Wigderson [24] (Theorem 1.3), there is nothing sacrosanct about the origin; one could instead choose to avoid any single point. We will use these versions of the results, and therefore state them here.
**Theorem 2.2** ([1]).: _For any \(a\in\{0,1\}^{n}\), we have_
\[\mathsf{EHC}_{n}^{(1,0)}(\{0,1\}^{n}\setminus\{a\})=\mathsf{EPC}_{n}^{(1,0)}(\{0, 1\}^{n}\setminus\{a\})=n.\]
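The bound in Theorem 2.2 is witnessed by a standard construction: for \(a=0^{n}\), the \(n\) hyperplanes \(X_{1}+\cdots+X_{n}=j\), \(j\in[n]\), cover every point of \(\{0,1\}^{n}\setminus\{0^{n}\}\) (a point of Hamming weight \(w\geq 1\) lies on exactly the hyperplane with \(j=w\)) and avoid \(0^{n}\), so they form a \((1,0)\)-exact hyperplane cover of size \(n\). The case of an arbitrary point \(a\) follows by replacing \(X_{i}\) with \(1-X_{i}\) for each coordinate \(i\) with \(a_{i}=1\).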
**Theorem 2.3** ([13]).: _For all \(t\geq 1\), \(\ell\in[0,t-1]\), and any \(a\in\{0,1\}^{n}\), we have_
\[\mathsf{EPC}_{n}^{(t,\ell)}(\{0,1\}^{n}\setminus\{a\})=\begin{cases}n+2t-2& \text{if }\,\ell=t-1,\\ n+2t-3&\text{if }\,\ell<t-1\leq\lfloor\frac{n+1}{2}\rfloor.\end{cases}\]
Nondegenerate polynomial and hyperplane covers for the _blockwise_ hypercube. Fix a block decomposition of the hypercube \(\{0,1\}^{N}=\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{k}}\). We will work with the polynomial ring \(\mathbb{R}[\mathbb{X}]\), where \(\mathbb{X}=(\mathbb{X}_{1},\ldots,\mathbb{X}_{k})\), and \(\mathbb{X}_{j}\) is the set of indeterminates for the \(j\)-th block.
We are interested in covering subsets of the hypercube \(\{0,1\}^{N}\) by polynomials and families of hyperplanes. In this context, our proof techniques work under some nondegeneracy conditions. Let \(S\subsetneq\{0,1\}^{N}\), and consider _multiplicity parameters_\(t\geq 1\), \(\ell\in[0,t-1]\). We define
* a nonzero polynomial \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) to be a \((t,\ell)\)-block exact polynomial cover for \(S\) if 1. for every point \(a\in S\), we have \(\operatorname{mult}(P(\mathbb{X}),a)\geq t\). 2. for each \(j\in[k]\), and every point \((a,\widetilde{a})\in\{0,1\}^{N}\setminus S\) with \(a\in\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{j-1}}\times\{0,1\}^{n_{j+1}} \times\cdots\times\{0,1\}^{n_{k}}\), \(\widetilde{a}\in\{0,1\}^{n_{j}}\), we have \(\operatorname{mult}(P(a,\mathbb{X}_{j}),\widetilde{a})=\ell\).
* a finite multiset of hyperplanes \(\mathcal{H}(\mathbb{X})\) in \(\mathbb{R}^{N}\) to be a \((t,\ell)\)-block exact hyperplane cover for \(S\) if 1. for every \(a\in S\), we have \(|\{H(\mathbb{X})\in\mathcal{H}(\mathbb{X}):H(a)=0\}|\geq t\). 2. for every \(b\in\{0,1\}^{N}\setminus S\), we have \(|\{H(\mathbb{X})\in\mathcal{H}(\mathbb{X}):H(b)=0\}|=\ell\). 3. for each \(j\in[k]\), and every \(a\in\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{j-1}}\times\{0,1\}^{n_{j+1}} \times\cdots\times\{0,1\}^{n_{k}}\), we have \(|\mathcal{H}(a,\mathbb{X}_{j})|=|\mathcal{H}(\mathbb{X})|\). (In other words, no two hyperplanes in the family _collapse_ into one, upon restriction to any single block.)
Let \(\mathsf{b}\text{-}\mathsf{EHC}_{(n_{1},\ldots,n_{k})}^{(t,\ell)}(S)\) denote the minimum size of a \((t,\ell)\)-block exact hyperplane cover for \(S\), and let \(\mathsf{b}\text{-}\mathsf{EPC}_{(n_{1},\ldots,n_{k})}^{(t,\ell)}(S)\) denote the minimum degree of a \((t,\ell)\)-block exact polynomial cover for \(S\). The definitions immediately imply that every \((t,\ell)\)-block exact hyperplane cover is also a \((t,\ell)\)-block exact polynomial cover, and so \(\mathsf{b}\text{-}\mathsf{EHC}_{(n_{1},\ldots,n_{k})}^{(t,\ell)}(S)\geq \mathsf{b}\text{-}\mathsf{EPC}_{(n_{1},\ldots,n_{k})}^{(t,\ell)}(S)\). For completeness, we give a quick proof in Appendix A.2. Further, it is trivial that \(\mathsf{b}\text{-}\mathsf{EHC}_{(n_{1},\ldots,n_{k})}^{(t,\ell)}(S)\geq\mathsf{ EHC}_{N}^{(t,\ell)}(S)\) and \(\mathsf{b}\text{-}\mathsf{EPC}_{(n_{1},\ldots,n_{k})}^{(t,\ell)}(S)\geq\mathsf{ EPC}_{N}^{(t,\ell)}(S)\).
### Peripheral intervals, and inner and outer intervals of symmetric sets
For any \(a\in[-1,n-1]\), \(b\in[1,n+1]\), \(a<b\), denote the set of weights \(I_{n,a,b}=[0,a]\cup[b,n]\), and we say a peripheral interval is the symmetric set \(J_{n,a,b}\subseteq\{0,1\}^{n}\) defined by \(W_{n}(J_{n,a,b})=I_{n,a,b}\). Here, we have the convention \([0,-1]=[n+1,n]=\emptyset\). In other words, a peripheral interval \(J_{n,a,b}\) could be either (i) _one-sided_, that is, one or both of the weight intervals \([0,a]\), \([b,n]\) could be empty (\(a=-1\)
or \(b=n+1\) or both), or (ii) _two-sided_, that is, both the weight intervals \([0,a]\), \([b,n]\) are nonempty (\(a\geq 0\) and \(b\leq n\)).
Now let \(S\subseteq\{0,1\}^{n}\) be a symmetric set.
* If \(S\subsetneq\{0,1\}^{n}\), then the inner interval of \(S\), denoted by in-\(\text{int}(S)\), is defined to be the peripheral interval \(J_{n,a,b}\subseteq\{0,1\}^{n}\) of maximum size such that \(J_{n,a,b}\subseteq S\). It is easy to check that in-\(\text{int}(S)\) is unique. Further, we define \(\text{in-int}(\{0,1\}^{n})=J_{n,\lfloor n/2\rfloor,\lfloor n/2\rfloor+1}\).
* Let \(\mathcal{O}(S)\) be the collection of all peripheral intervals \(J_{n,a,b}\) such that \(S\subseteq J_{n,a,b}\) and \(I_{n,a,b}=W_{n}(J_{n,a,b})\) has minimum size. It is easy to see that \(\mathcal{O}(S)\) can contain several peripheral intervals; the following is an example. **Example 2.4**.: Let \(n\) be even, and choose \(W_{n}(S)=\{w\in[0,n]:w\text{ is even}\}\). Then for any even \(w\in[0,n-2]\), we have \(I_{n,w,w+2}=[0,w]\cup[w+2,n]\), \(|I_{n,w,w+2}|=n\) and \(S\subseteq J_{n,w,w+2}\). Further, for any peripheral interval \(J_{n,a,b}\supseteq S\), it is immediate that \(|b-a|\leq 2\), and so \(|I_{n,a,b}|\geq n\). Thus, \(\mathcal{O}(S)=\{J_{n,w,w+2}:w\in[0,n-2]\text{ is even}\}\). Moving on, consider the function \(\lambda_{S}:\mathcal{O}(S)\to\mathbb{N}\) defined by \[\lambda_{S}(J_{n,a,b})=|a+b-n|,\quad\text{for all $J_{n,a,b}\in\mathcal{O}(S)$.}\] It is easy to check that the minimizer of \(\lambda_{S}\) is either a unique peripheral interval \(J_{n,a,b}\), or exactly a pair of peripheral intervals \(\{J_{n,a,b},J_{n,n-b,n-a}\}\). The outer interval of \(S\), denoted by out-\(\text{int}(S)\), is defined by \[\text{out-int}(S)=\begin{cases}J_{n,a,b}&\text{if $J_{n,a,b}$ is the unique minimizer of $\lambda_{S}$,}\\ J_{n,a,b}&\text{if $\{J_{n,a,b},J_{n,n-b,n-a}\}$ are minimizers of $\lambda_{S}$ and $a>n-b$.}\end{cases}\] Therefore, out-\(\text{int}(S)\) is unique.
We will discuss more on uniqueness of inner and outer intervals, and look at some illustrations, in Appendix C. Now define
\[\text{in}_{n}(S) =(\min\{a,n-b\}+1)+|W_{n}(S)\setminus W_{n,\min\{a,n-b\}+1}| \text{where $J_{n,a,b}=\text{in-int}(S)$,}\] \[\text{and out}_{n}(S) =a+n-b+1=|I_{n,a,b}|-1 \text{where $J_{n,a,b}=\text{out-int}(S)$.}\]
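For a concrete example, take \(n=5\) and the symmetric set \(S\) with \(W_{5}(S)=\{0,4\}\). The largest peripheral interval contained in \(S\) is \(\text{in-int}(S)=J_{5,0,6}\), with weight set \(\{0\}\), and the smallest peripheral interval containing \(S\) is \(\text{out-int}(S)=J_{5,0,4}\), with weight set \(\{0,4,5\}\). Consequently,

\[\text{in}_{5}(S)=0+|\{0,4\}|=2\qquad\text{and}\qquad\text{out}_{5}(S)=0+5-4+1=2.\]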
**Remark 2.5**.: Let \(J_{n,a,b}\subseteq\{0,1\}^{n}\) be a peripheral interval. It is trivially true that in-\(\text{int}(J_{n,a,b})=\text{out-int}(J_{n,a,b})=J_{n,a,b}\). Further, it is easy to check that
\[\text{in-int}(\{0,1\}^{n}\setminus J_{n,a,b}) =\begin{cases}\emptyset&\text{if $a\geq 0$, $b\leq n$,}\\ J_{n,-1,a+1}&\text{if $b=n+1$,}\\ J_{n,b-1,n+1}&\text{if $a=-1$,}\end{cases}\] \[\text{and}\quad\text{out-int}(\{0,1\}^{n}\setminus J_{n,a,b}) =\begin{cases}J_{n,-1,a+1}&\text{if $a\geq n-b$,}\\ J_{n,b-1,n+1}&\text{if $a<n-b$.}\end{cases}\]
Therefore,
\[\text{in}_{n}(J_{n,a,b}) =\max\{a,n-b\}+1, \text{in}_{n}(\{0,1\}^{n}\setminus J_{n,a,b}) =b-a-1,\] \[\text{out}_{n}(J_{n,a,b}) =a+n-b+1, \text{out}_{n}(\{0,1\}^{n}\setminus J_{n,a,b}) =\min\{n-a,b\}-1.\]
The following interesting and important observations are immediate from Remark 2.5, and the definitions.
**Observation 2.6**.:
1. _For any peripheral interval_ \(J_{n,a,b}\subseteq\{0,1\}^{n}\)_, we have_ \[\mathrm{in}_{n}(J_{n,a,b})+\mathrm{out}_{n}(\{0,1\}^{n}\setminus J_{n,a,b})= \mathrm{in}_{n}(\{0,1\}^{n}\setminus J_{n,a,b})+\mathrm{out}_{n}(J_{n,a,b})=n.\]
2. _For any symmetric set_ \(S\subseteq\{0,1\}^{n}\)_, we have_ \(S=\mathrm{in-int}(S)=\mathrm{out-int}(S)\) _if and only if either_ \(S\) _or_ \(\{0,1\}^{n}\setminus S\) _is a peripheral interval._
## 3 Index complexity of symmetric and PDC \(k\)-wise symmetric sets
### Inner and outer intervals of symmetric sets
Let us first prove Proposition 1.16, which relates the inner and outer intervals of symmetric sets. We will give two proofs, one combinatorial and another via the polynomial method. We mention the statement again, for convenience.
**Proposition 1.16**.: _For any nonempty symmetric set \(S\subseteq\{0,1\}^{n}\), we have_
\[\mathrm{in}_{n}(\{0,1\}^{n}\setminus S)+\mathrm{out}_{n}(S)\geq n.\]
_Further, equality holds if and only if either \(S\) or \(\{0,1\}^{n}\setminus S\) is a peripheral interval._
First proof.: We note that the assertion is immediately true, by Observation 2.6(a), if either \(S\) or \(\{0,1\}^{n}\setminus S\) is a peripheral interval.
Now suppose \(S\subseteq\{0,1\}^{n}\) is some nonempty symmetric set. Let \(J_{n,a,b}=\mathrm{out-int}(S)\). So by definition, we get \(\mathrm{out}_{n}(S)=\mathrm{out}_{n}(J_{n,a,b})\). It is, therefore, enough to prove \(\mathrm{in}_{n}(\{0,1\}^{n}\setminus S)\geq\mathrm{in}_{n}(\{0,1\}^{n}\setminus J _{n,a,b})\).
Let \(J_{n,a^{\prime},b^{\prime}}=\mathrm{in-int}(\{0,1\}^{n}\setminus S)\), and
\[M_{n}(S)\coloneqq\{w\in W_{n}(\{0,1\}^{n}\setminus S):\min\{a^{\prime},n-b^{ \prime}\}+1\leq w\leq\max\{n-a^{\prime},b^{\prime}\}-1\}.\]
So, by definition, \(\mathrm{in}_{n}(\{0,1\}^{n}\setminus S)=\min\{a^{\prime},n-b^{\prime}\}+1+|M_ {n}(S)|\). Also, since \(J_{n,a,b}=\mathrm{out-int}(S)\), we get
\[[0,n]\setminus I_{n,a,b}\subseteq I_{n,\min\{a^{\prime},n-b^{\prime}\},\max \{n-a^{\prime},b^{\prime}\}}\sqcup M_{n}(S).\]
This immediately gives
\[\mathrm{in}_{n}(\{0,1\}^{n}\setminus J_{n,a,b}) \leq|I_{n,\min\{a^{\prime},n-b^{\prime}\},\max\{n-a^{\prime},b^{ \prime}\}}|+|M_{n}(S)|\] \[=\min\{a^{\prime},n-b^{\prime}\}+1+|M_{n}(S)|\] \[=\mathrm{in}_{n}(\{0,1\}^{n}\setminus S). \tag{1}\]
It is clear that equality is attained exactly when \(\mathrm{in}_{n}(\{0,1\}^{n}\setminus S)=\mathrm{in}_{n}(\{0,1\}^{n}\setminus J _{n,a,b})\). By (1), this means equality is attained exactly when \(\mathrm{in}_{n}(\{0,1\}^{n}\setminus J_{n,a,b})=\min\{a^{\prime},n-b^{\prime} \}+1+|M_{n}(S)|\). This happens exactly when \(S=J_{n,a,b}=J_{n,a^{\prime},b^{\prime}}\), that is, \(S=\mathrm{in-int}(S)=\mathrm{out-int}(S)\). By Observation 2.6(b), this is equivalent to either \(S\) or \(\{0,1\}^{n}\setminus S\) being a peripheral interval.
_Second proof._ Let \(J_{n,a,b}=\text{out-int}(S)\). By the minimality of size of \(I_{n,a,b}\), we have \(\{a,b\}\subseteq W_{n}(S)\). Without loss of generality, assume \(a\geq n-b\). Define \(P(\mathbb{X})=X_{1}\cdots X_{a}(X_{a+1}-1)\cdots(X_{a+n-b+1}-1)\). We clearly have \(1^{a}0^{n-a}\in S\), and \(P(1^{a}0^{n-a})\neq 0\). Now consider any \(x\in S,\,x\neq 1^{a}0^{n-a}\). We have three cases.
1. \(|x|=a,\,x\neq 1^{a}0^{n-a}\). Then there exists \(i\in[1,a]\) such that \(x_{i}=0\), and so \(P(x)=0\).
2. \(|x|<a\). Then there exists \(i\in[1,a]\) such that \(x_{i}=0\), and so \(P(x)=0\).
3. \(|x|>a\), which means \(|x|\geq b\), since \(S\subseteq J_{n,a,b}\). So \(|\{i\in[n]:x_{i}=0\}|<n-b+1\). This implies that there exists \(i\in[a+1,a+n-b+1]\) such that \(x_{i}=1\), and so \(P(x)=0\).
Consider the family of hyperplanes
\[\mathsf{h}(\mathbb{X})\coloneqq\mathcal{H}^{\star}_{\overline{\mu}_{n}(S)}( \mathbb{X})\sqcup\mathcal{H}^{\prime}_{W_{n}(\{0,1\}^{n}\setminus S)\setminus W _{n,\overline{\mu}_{n}(S)}}(\mathbb{X}).\]
By Lemma 1.12, we have \(\mathcal{H}^{\star}_{\overline{\mu}_{n}(S)}(x)=0\) if and only if \(|x|\in W_{n,\overline{\mu}_{n}(S)}\). Further, by definition, we have \(\mathcal{H}^{\prime}_{W_{n}(\{0,1\}^{n}\setminus S)\setminus W_{n,\overline{ \mu}_{n}(S)}}(x)=0\) if and only if \(|x|\in W_{n}(\{0,1\}^{n}\setminus S)\setminus W_{n,\overline{\mu}_{n}(S)}\). Thus, we have \(\mathsf{h}(x)=0\) if and only if \(x\in\{0,1\}^{n}\setminus S\).
So we conclude that the polynomial \(P\mathsf{h}(\mathbb{X})\) satisfies \(P\mathsf{h}(1^{a}0^{n-a})\neq 0\), and \(P\mathsf{h}(x)=0\) for all \(x\in\{0,1\}^{n}\), \(x\neq 1^{a}0^{n-a}\). Therefore, by Theorem 2.2, we get \(\deg(P\mathsf{h})\geq n\). Now by the definitions, we also have
\[\deg(P) =a+n-b+1=\text{out}_{n}(S),\] \[\text{and}\quad\deg(\mathsf{h}) =\overline{\mu}_{n}(S)+|W_{n}(\{0,1\}^{n}\setminus S) \setminus W_{n,\overline{\mu}_{n}(S)}|=\text{in}_{n}(\{0,1\}^{n}\setminus S).\]
Hence,
\[\text{in}_{n}(\{0,1\}^{n}\setminus S)+\text{out}_{n}(S)=\deg(\mathsf{h})+\deg (P)=\deg(P\mathsf{h})\geq n.\]
By the definition above, \(\deg(P)=a+n-b+1=\text{out}_{n}(S)\), and therefore we have shown that \(\deg(\mathsf{h})\geq b-a-1\). But, again by the definition above, we have \(\deg(\mathsf{h})=\overline{\mu}_{n}(S)+|W_{n}(\{0,1\}^{n}\setminus S)\setminus W _{n,\overline{\mu}_{n}(S)}|\). Thus, we have equality exactly when \(\overline{\mu}_{n}(S)+|W_{n}(\{0,1\}^{n}\setminus S)\setminus W_{n,\overline{ \mu}_{n}(S)}|=b-a-1\), where \(J_{n,a,b}=\text{out-int}(S)\). This is true if and only if \(S=J_{n,a,b}\). By Observation 2.6(b), this is equivalent to either \(S\) or \(\{0,1\}^{n}\setminus S\) being a peripheral interval.
### Index complexity of symmetric sets
We will now proceed to prove Proposition 1.17 which characterizes the index complexity of symmetric sets. We will need a definition and a technical lemma. Let \(p\in\{0,1\}^{n}\) and denote \(I_{0}(p)\coloneqq\{i\in[n]:p_{i}=0\},\,I_{1}(p)\coloneqq\{i\in[n]:p_{i}=1\}\). So \(|I_{0}(p)|=n-|p|\) and \(|I_{1}(p)|=|p|\). For any \(I_{0}\subseteq I_{0}(p),\,I_{1}\subseteq I_{1}(p)\), we define the separation of \(p\) with respect to \((I_{0},I_{1})\), denoted by \(\operatorname{sep}(p,I_{0},I_{1})\subseteq\{0,1\}^{n}\), to be the maximal symmetric set such that for every \(x\in\operatorname{sep}(p,I_{0},I_{1})\), we have \(x_{I_{0}\sqcup I_{1}}\neq p_{I_{0}\sqcup I_{1}}\). We will refer to the special case \(\operatorname{sep}(p)\coloneqq\operatorname{sep}(p,I_{0}(p),I_{1}(p))\) as simply the separation of \(p\).
**Remark 3.1**.: It follows by definition that \(|p|\not\in W_{n}(\operatorname{sep}(p,I_{0},I_{1}))\), for any \(I_{0}\subseteq I_{0}(p)\), \(I_{1}\subseteq I_{1}(p)\).
**Lemma 3.2**.: _For any \(p\in\{0,1\}^{n}\), and \(I_{0}\subseteq I_{0}(p),\,I_{1}\subseteq I_{1}(p)\), we have_
\[\operatorname{sep}(p,I_{0},I_{1})=J_{n,|I_{1}|-1,n-|I_{0}|+1}.\]
_In particular, we have \(\operatorname{sep}(p)=J_{n,|p|-1,|p|+1}\)._
Proof.: Without loss of generality, assume \(p=1^{a}0^{n-a}\), and \(I_{1}=[1,u]\subseteq[1,a]\), \(I_{0}=[n-v+1,n]\), for some \(u\in[0,a]\), \(v\in[0,n-a]\). So \(|I_{1}|=u\), \(|I_{0}|=v\). We observe the following.
1. For any \(x=1^{a^{\prime}}y\) with \(a^{\prime}\geq u\) and \(y\in\{0,1\}^{n-a^{\prime}}\), we have \(x_{I_{1}}=p_{I_{1}}=1^{u}\).
2. For any \(x=y0^{b^{\prime}}\) with \(b^{\prime}\geq v\) and \(y\in\{0,1\}^{n-b^{\prime}}\), we have \(x_{I_{0}}=p_{I_{0}}=0^{v}\).
Combining the above two observations, we get that for any \(x=1^{a^{\prime}}y0^{b^{\prime}}\) with \(a^{\prime}\geq u,\,b^{\prime}\geq v\) and \(y\in\{0,1\}^{n-a^{\prime}-b^{\prime}}\), we have \(x_{I_{0}\cup I_{1}}=p_{I_{0}\cup I_{1}}\). Since \(\mathrm{sep}(p,I_{0},I_{1})\) is a symmetric set, this implies that
\[[u,n-v]\cap W_{n}(\mathrm{sep}(p,I_{0},I_{1}))=\emptyset,\quad\text{that is}, \quad\mathrm{sep}(p,I_{0},I_{1})\subseteq J_{n,u-1,n-v+1}.\]
Now consider any \(x\in J_{n,u-1,n-v+1}\). We have two cases.
1. \(|x|\leq u-1\). Since \(|I_{1}|=u\), there exists \(i\in I_{1}\) such that \(x_{i}=0\), but \(p_{i}=1\).
2. \(|x|\geq n-v+1\). Since \(|I_{0}|=v\), there exists \(i\in I_{0}\) such that \(x_{i}=1\), but \(p_{i}=0\).
Hence, we conclude that \(J_{n,u-1,n-v+1}\subseteq\mathrm{sep}(p,I_{0},I_{1})\).
We are now ready to prove Proposition 1.17. We mention the statement again, for convenience.
**Proposition 1.17**.: _For any nonempty symmetric set \(S\subseteq\{0,1\}^{n}\), we have \(r_{n}(S)=\mathrm{out}_{n}(S)\)._
Proof.: Let \(J_{n,a,b}\) be the outer interval of \(S\). So \(\mathrm{out}_{n}(S)=a+n-b+1\). By the minimality of size of \(I_{n,a,b}\), we have \(\{a,b\}\subseteq W_{n}(S)\). So, in particular, \(p\coloneqq 1^{a}0^{n-a}\in S\). Without loss of generality, assume \(a\geq n-b\). Now consider any \(x\in S,\,x\neq 1^{a}0^{n-a}\). We have three cases.
1. \(|x|=a\), \(x\neq 1^{a}0^{n-a}\). Then there exists \(i\in[1,a]\) such that \(x_{i}=0\), but \(p_{i}=1\).
2. \(|x|<a\). Then there exists \(i\in[1,a]\) such that \(x_{i}=0\), but \(p_{i}=1\).
3. \(|x|>a\), which means \(|x|\geq b\), since \(S\subseteq J_{n,a,b}\). So \(|\{i\in[n]:x_{i}=0\}|<n-b+1\). This implies there exists \(i\in[a+1,a+n-b+1]\) such that \(x_{i}=1\), but \(p_{i}=0\).
Thus, in all three cases, there exists \(i\in[1,a+n-b+1]\) such that \(x_{i}\neq p_{i}\). Hence, we conclude that \(r_{n}(S)\leq a+n-b+1=\mathrm{out}_{n}(S)\).
Now, in order to prove the reverse inequality, let \(p\in S\) and \(I\subseteq[n],\,r_{n}(S)=|I|\) such that for every \(x\in S,\,x\neq p\), we have \(x_{I}\neq p_{I}\). Further, let \(I_{0}=I\cap I_{0}(p)\) and \(I_{1}=I\cap I_{1}(p)\). By definition of index complexity, Lemma 3.2, and Remark 3.1, we get
\[W_{n}(S)\setminus\{|p|\}\subseteq W_{n}(\mathrm{sep}(p,I_{0},I_{1}))=I_{n,|I_ {1}|-1,n-|I_{0}|+1}. \tag{2}\]
Also, trivially, we have \(|I_{1}|\leq|p|\leq n-|I_{0}|\). Since \(J_{n,a,b}\) is the outer interval of \(S\), we have exactly one of the two cases, by (2) and the minimality of size of \(I_{n,a,b}\).
1. \(|p|=a\) and \(W_{n}(S)\setminus\{|p|\}\subseteq I_{n,a-1,b}\subseteq I_{n,|I_{1}|-1,n-|I_{0}|+1}\). So \(|I_{1}|\geq a\), \(|I_{0}|\geq n-b+1\).
2. \(|p|=b\) and \(W_{n}(S)\setminus\{|p|\}\subseteq I_{n,a,b+1}\subseteq I_{n,|I_{1}|-1,n-|I_{0}|+1}\). So \(|I_{1}|\geq a+1\), \(|I_{0}|\geq n-b\).
In either of the two cases, we finally get
\[r_{n}(S)=|I|=|I_{0}|+|I_{1}|\geq a+n-b+1=\mathrm{out}_{n}(S).\qed\]
**Remark 3.3**.: We also note from the proof of Proposition 1.17, that if \(J_{n,a,b}\) is the outer interval of the symmetric set \(S\), with \(a\geq n-b\), then the set \(I=[1,a+n-b+1]\) satisfies \(|I|=r_{n}(S)\), and the point \(p=1^{a}0^{n-a}\) is such that for every \(x\in S,\,x\neq 1^{a}0^{n-a}\), we have \(x_{I}\neq(1^{a}0^{n-a})_{I}\). On the other hand, if \(a<n-b\), then these choices change to \(I=[b-a,n]\) and \(p=1^{b}0^{n-b}\).
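To illustrate Remark 3.3, take \(n=4\) and \(W_{4}(S)=\{1,2\}\). Then \(\text{out-int}(S)=J_{4,2,5}\), so \(a=2\geq n-b=-1\), and the remark yields \(I=[1,2]\) and \(p=1100\): any \(x\in S\) with \(x_{\{1,2\}}=(1,1)\) has weight at least \(2\), hence equals \(p\). Thus \(r_{4}(S)=\text{out}_{4}(S)=2\), in accordance with Proposition 1.17.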
### Index complexity of PDC \(k\)-wise symmetric sets
Let us now proceed to prove Proposition 1.41, which characterizes the index complexity of PDC \(k\)-wise symmetric sets. We mention the statement again, for convenience.
**Proposition 1.41**.: _For any nonempty outer intact PDC \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\), we have_
\[r_{N}(S)=\sum_{j=1}^{k}r_{n_{j}}(S_{j})=\sum_{j=1}^{k}\mathrm{out}_{n_{j}}(S_{ j}).\]
Proof.: For each \(j\in[k]\), let \(J_{n_{j},a_{j},b_{j}}\) be the outer interval of \(S_{j}\), and further, as indicated in Remark 3.3, let
\[\big{(}p^{(j)},I^{(j)}\big{)}=\begin{cases}\big{(}1^{a_{j}}0^{n_{j}-a_{j}}, \big{[}1,a_{j}+n_{j}-b_{j}+1\big{]}\big{)}&\text{if $a_{j}\geq n_{j}-b_{j}$},\\ \big{(}1^{b_{j}}0^{n_{j}-b_{j}},\big{[}b_{j}-a_{j},n_{j}\big{]}\big{)}&\text{ if $a_{j}<n_{j}-b_{j}$},\end{cases}\]
satisfy the definition of index complexity \(r_{n_{j}}(S_{j})\). Now consider any \((z_{1},\dots,z_{k})\in\mathsf{E}^{(\mathrm{in})}(S)\). Since \(S\) is outer intact, for each \(j\in[k]\), we have the following.
* \(J_{n_{j},a_{j},b_{j}}\) is the outer interval of \([S]_{j,z_{j}}\).
* \(p^{(j)}\in[S]_{j,z_{j}}\).
* \(p^{(j)}\) and \(I^{(j)}\) satisfy the definition of index complexity \(r_{n_{j}}([S]_{j,z_{j}})\), as indicated in Remark 3.3.
Define \(p=(p^{(1)},\dots,p^{(k)})\in S\) and \(I=I^{(1)}\sqcup\dots\sqcup I^{(k)}\). Now consider any \(x=(x^{(1)},\dots,x^{(k)})\in S,\,x\neq p\). So there exists \(j\in[k]\) such that \(x^{(j)}\neq p^{(j)}\). Since \(x\in S\), there exists \((z_{1},\dots,z_{k})\in\mathsf{E}^{(\mathrm{in})}(S)\) such that \(x\in[S]_{1,z_{1}}\times\dots\times[S]_{k,z_{k}}\), and so \(x^{(j)}\in[S]_{j,z_{j}}\). Then by the choice of \(I^{(j)}\), we get \(x^{(j)}_{I^{(j)}}\neq p^{(j)}_{I^{(j)}}\). Thus, we have \(r_{N}(S)\leq|I|=\sum_{j=1}^{k}|I^{(j)}|=\sum_{j=1}^{k}r_{n_{j}}(S_{j})\).
To prove the reverse inequality, now suppose \(p=(p^{(1)},\dots,p^{(k)})\in S\) and \(I=I^{(1)}\sqcup\dots\sqcup I^{(k)}\subseteq[N]\) satisfy the definition of index complexity \(r_{N}(S)\). Let \((z_{1},\dots,z_{k})\in\mathsf{E}^{(\mathrm{in})}(S)\) such that \(p\in[S]_{1,z_{1}}\times\dots\times[S]_{k,z_{k}}\). Fix any \(j\in[k]\). Consider any \(y\in[S]_{j,z_{j}},\,y\neq p^{(j)}\), and let \(x=(p^{(1)},\dots,p^{(j-1)},y,p^{(j+1)},\dots,p^{(k)})\). Then \(x\in[S]_{1,z_{1}}\times\dots\times[S]_{k,z_{k}}\subseteq S\) and \(x\neq p\). This implies \(x_{I}\neq p_{I}\), which means \(y_{I^{(j)}}\neq p^{(j)}_{I^{(j)}}\). Thus, we get \(|I^{(j)}|\geq r_{n_{j}}([S]_{j,z_{j}})=r_{n_{j}}(S_{j})\). Hence, \(r_{N}(S)=|I|=\sum_{j=1}^{k}|I^{(j)}|\geq\sum_{j=1}^{k}r_{n_{j}}(S_{j})\).
The final equality in the statement is then immediate from Proposition 1.17.
## 4 Covering PDC \(k\)-wise symmetric sets
Let us now prove our third main result (Theorem 1.33). We mention the statement again, for convenience. Recall that we work with the indeterminates \(\mathbb{X}=(\mathbb{X}_{1},\ldots,\mathbb{X}_{k})\), where \(\mathbb{X}_{j}=(X_{j,1},\ldots,X_{j,n_{j}})\) are the indeterminates for the \(j\)-th block.
**Theorem 1.33**.: _For any nonempty PDC \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\) and \(t\geq 1\), we have_
\[\mathsf{b}\text{-}\mathsf{EPC}^{(t,t-1)}_{(n_{1},\ldots,n_{k})}(\{0,1\}^{N} \setminus S)=\max_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\text{out})}(S)}\bigg{\{} \sum_{j\in[k]:z_{j}\geq 1}\overline{\Lambda}_{n_{j}}([S]_{j,z_{j}-1})\bigg{\}}+2t-2.\]
Proof.: Let us first prove the lower bound. Let \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) be a \((t,t-1)\)-block exact polynomial cover for \(\{0,1\}^{N}\setminus S\). Fix any \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\text{out})}(S)\). Note that for any \(j\in[k]\), we have \(z_{j}\geq 1\) if and only if \([S]_{j,z_{j}-1}\neq\emptyset\). So, without loss of generality, we assume \(z_{j}\geq 1\) for all \(j\in[k]\). It is now enough to show that
\[\deg(P)\geq\sum_{j=1}^{k}\overline{\Lambda}_{n_{j}}([S]_{j,z_{j}-1})+2t-2.\]
Consider any \(j\in[k]\), and let
\[\overline{\mu}_{j}\coloneqq\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})=\max\{i\in[0,\lceil n_{j}/2\rceil]:W_{n_{j},i}\subseteq W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{j,z_{j}-1})\}.\]
So either \(\overline{\mu}_{j}\in W_{n_{j}}([S]_{j,z_{j}-1})\) or \(n_{j}-\overline{\mu}_{j}\in W_{n_{j}}([S]_{j,z_{j}-1})\). Without loss of generality, suppose \(\overline{\mu}_{j}\in W_{n_{j}}([S]_{j,z_{j}-1})\). Then clearly, \(|W_{n_{j}}([S]_{j,z_{j}-1})\setminus\{\overline{\mu}_{j}\}|=n_{j}-|W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{j,z_{j}-1})|\). Also clearly, \(1^{\overline{\mu}_{j}}0^{n_{j}-\overline{\mu}_{j}}\in[S]_{j,z_{j}-1}\). Define
\[Q(\mathbb{X})=P(\mathbb{X})\cdot\prod_{j=1}^{k}\mathcal{H}^{\prime}_{W_{n_{j }}([S]_{j,z_{j}-1})\setminus\{\overline{\mu}_{j}\}}(\mathbb{X}_{j}).\]
Recall that we have \(W_{(n_{1},\ldots,n_{k})}(S)=\{(|x^{(1)}|,\ldots,|x^{(k)}|):x\in S\}\). Further, recall that we have \(W_{(n_{1},\ldots,n_{k})}([S]_{1,z_{1}-1}\times\cdots\times[S]_{k,z_{k}-1}) \coloneqq W_{n_{1}}([S]_{1,z_{1}-1})\times\cdots\times W_{n_{k}}([S]_{k,z_{k} -1})\). Consider any \(x=(x^{(1)},\ldots,x^{(k)})\in\{0,1\}^{N}\). We have the following cases.
1. \((|x^{(1)}|,\ldots,|x^{(k)}|)=(\overline{\mu}_{1},\ldots,\overline{\mu}_{k})\). So we have * \(\mathrm{mult}(P(x^{\prime}_{(j)},\mathbb{X}_{j}),x^{(j)})=t-1\), where \(x=(x^{\prime}_{(j)},x^{(j)})\), for every \(j\in[k]\). * \(\mathcal{H}^{\prime}_{W_{n_{j}}([S]_{j,z_{j}-1})\setminus\{\overline{\mu}_{j} \}}(x^{(j)})\neq 0\), for every \(j\in[k]\). This implies \(\mathrm{mult}(Q(\mathbb{X}),x)=t-1\). Note that this is where we need \(P(\mathbb{X})\) to be a \((t,t-1)\)-block exact polynomial cover and not just a \((t,t-1)\)-exact polynomial cover for \(\{0,1\}^{N}\setminus S\).
2. \((|x^{(1)}|,\ldots,|x^{(k)}|)\in W_{(n_{1},\ldots,n_{k})}([S]_{1,z_{1}-1}\times\cdots\times[S]_{k,z_{k}-1})\setminus\{(\overline{\mu}_{1},\ldots,\overline{\mu}_{k})\}\). So we have * \(\mathrm{mult}(P(x^{\prime}_{(j)},\mathbb{X}_{j}),x^{(j)})=t-1\), where \(x=(x^{\prime}_{(j)},x^{(j)})\), for every \(j\in[k]\). * There exists \(j\in[k]\) such that \(|x^{(j)}|\neq\overline{\mu}_{j}\), and so \(\mathcal{H}^{\prime}_{W_{n_{j}}([S]_{j,z_{j}-1})\setminus\{\overline{\mu}_{j}\}}(x^{(j)})=0\). This implies \(\mathrm{mult}(Q(\mathbb{X}),x)\geq t\).
* \((|x^{(1)}|,\ldots,|x^{(k)}|)\not\in W_{(n_{1},\ldots,n_{k})}([S]_{1,z_{1}-1}\times \cdots\times[S]_{k,z_{k}-1})\). So \(\operatorname{mult}(P(\mathbb{X}),x)\geq t\), and this implies \(\operatorname{mult}(Q(\mathbb{X}),x)\geq t\).
Thus, \(Q(\mathbb{X})\) is a \((t,t-1)\)-exact polynomial cover for \(\{0,1\}^{N}\setminus L\), where \(L=L_{1}\times\cdots\times L_{k}\) is a \(k\)-wise layer given by \(W_{n_{j}}(L_{j})=\{\overline{\mu}_{j}\}\), \(j\in[k]\). So Theorem 1.10 and Corollary 1.42 imply
\[\deg(Q)\geq N-r_{N}(L)+2t-2=N-\sum_{j=1}^{k}\overline{\mu}_{j}+2t-2. \tag{3}\]
Further, by construction, we have
\[\deg(Q) =\deg(P)+\sum_{j=1}^{k}\big{(}n_{j}-|W_{n}(\{0,1\}^{n_{j}}\setminus [S]_{j,z_{j}-1})|\big{)}\] \[=\deg(P)+N-\sum_{j=1}^{k}|W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{ j,z_{j}-1})|. \tag{4}\]
From (3) and (4), we get
\[\deg(P)\geq\sum_{j=1}^{k}\big{(}|W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{j,z_{ j}-1})|-\overline{\mu}_{j}\big{)}+2t-2=\sum_{j=1}^{k}\overline{\Lambda}_{n_{j}} ([S]_{j,z_{j}-1})+2t-2.\]
This completes the proof of the lower bound.
Let us now show that the construction in Example 1.35 attains the lower bound we just proved. Recall that Example 1.35 defines a polynomial
\[\mathsf{h}_{S}(\mathbb{X})\coloneqq\bigg{(}\sum_{(z_{1},\ldots,z_{k})\in \mathsf{E}^{(\operatorname{out})}(S)}\lambda_{S,(z_{1},\ldots,z_{k})}\mathcal{ H}_{S,(z_{1},\ldots,z_{k})}(\mathbb{X})\bigg{)}\cdot\mathcal{H}^{\circ(t-1)}( \mathbb{X}_{1}),\]
where, for each \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\operatorname{out})}(S)\), we have
\[\mathcal{H}_{S,(z_{1},\ldots,z_{k})}(\mathbb{X})=\prod_{j\in[k]:z_{j}\geq 1}\Big(\mathcal{H}^{*}_{\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})}(\mathbb{X}_{j})\cdot\mathcal{H}^{\prime}_{W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{j,z_{j}-1})\setminus W_{n_{j},\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})}}(\mathbb{X}_{j})\Big),\]
and further, \(\{\lambda_{S,(z_{1},\ldots,z_{k})}:(z_{1},\ldots,z_{k})\in\mathsf{E}^{( \operatorname{out})}(S)\}\subseteq\mathbb{R}\) is a \(\widehat{\mathbb{Q}}\)-linearly independent subset of \(\mathbb{R}\), where the subfield \(\widehat{\mathbb{Q}}\coloneqq\mathbb{Q}\big{(}\mathcal{H}_{S,(z_{1},\ldots,z_{ k})}(b):b\in\{0,1\}^{N}\), \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\operatorname{out})}(S)\big{)}\). Firstly, note that since \(\mathcal{H}^{\circ(t-1)}(\mathbb{X}_{1})=X_{1}^{t-1}(X_{1}-1)^{t-1}\), we clearly get \(\operatorname{mult}(\mathcal{H}^{\circ(t-1)}(\mathbb{X}_{1}),x)=t-1\) for all \(x\in\{0,1\}^{N}\). Now fix any \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\operatorname{out})}(S)\), and consider any \(x\in\{0,1\}^{N}\). We observe
\[\mathcal{H}_{S,(z_{1},\ldots,z_{k})}(x)\neq 0\] \[\iff \mathcal{H}^{*}_{\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})}(x^{(j)})\cdot\mathcal{H}^{\prime}_{W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{j,z_{j}-1})\setminus W_{n_{j},\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})}}(x^{(j)})\neq 0,\quad\text{for all }j\in[k]:z_{j}\geq 1\] \[\iff x^{(j)}\not\in\{0,1\}^{n_{j}}\setminus[S]_{j,z_{j}-1},\quad\text{for all }j\in[k]:z_{j}\geq 1\] \[\iff x\in\bigg(\prod_{j\in[k]:z_{j}\geq 1}[S]_{j,z_{j}-1}\bigg)\times\bigg(\prod_{j\in[k]:z_{j}=0}\{0,1\}^{n_{j}}\bigg).\]
Now it is easy to check that
\[\{0,1\}^{N}\setminus S=\bigcap_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}( S)}\bigg{(}\{0,1\}^{N}\biggm{\backslash}\biggm{(}\prod_{j\in[k]:z_{j}\geq 1}[S]_{j,z_{j}-1 }\bigg{)}\times\bigg{(}\prod_{j\in[k]:z_{j}=0}\{0,1\}^{n_{j}}\bigg{)}\bigg{)}.\]
So by the \(\widehat{\mathbb{Q}}\)-linear independence of \(\{\lambda_{S,(z_{1},\ldots,z_{k})}:(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)\}\subseteq\mathbb{R}\), we get
\[\sum_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)}\lambda_{S,(z_{1},\ldots,z_{k})}\mathcal{H}_{S,(z_{1},\ldots,z_{k})}(x)=0\] \[\iff \mathcal{H}_{S,(z_{1},\ldots,z_{k})}(x)=0,\qquad\text{ for all }(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)\] \[\iff x\not\in\bigg(\prod_{j\in[k]:z_{j}\geq 1}[S]_{j,z_{j}-1}\bigg)\times\bigg(\prod_{j\in[k]:z_{j}=0}\{0,1\}^{n_{j}}\bigg),\qquad\text{ for all }(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)\] \[\iff x\in\bigcap_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)}\bigg(\{0,1\}^{N}\setminus\bigg(\prod_{j\in[k]:z_{j}\geq 1}[S]_{j,z_{j}-1}\bigg)\times\bigg(\prod_{j\in[k]:z_{j}=0}\{0,1\}^{n_{j}}\bigg)\bigg)\] \[\iff x\in\{0,1\}^{N}\setminus S.\]
Thus, we have
* \(\mathrm{mult}(\mathsf{h}_{S}(\mathbb{X}),x)\geq t\) if \(x\in\{0,1\}^{N}\setminus S\).
* \(\mathrm{mult}(\mathsf{h}_{S}(x_{(j)},\mathbb{X}_{j}),x^{(j)})=t-1\), where \(x=(x_{(j)},x^{(j)})\), for each \(j\in[k]\).
Hence, \(\mathsf{h}_{S}(\mathbb{X})\) is a \((t,t-1)\)-block exact polynomial cover for \(\{0,1\}^{N}\setminus S\).
Now for any \((z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}(S)\), we have
\[\deg(\mathcal{H}_{S,(z_{1},\ldots,z_{k})}) =\sum_{j\in[k]:z_{j}\geq 1}\bigg(\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})+|W_{n_{j}}(\{0,1\}^{n_{j}}\setminus[S]_{j,z_{j}-1})\setminus W_{n_{j},\overline{\mu}_{n_{j}}([S]_{j,z_{j}-1})}|\bigg)\] \[=\sum_{j\in[k]:z_{j}\geq 1}\overline{\Lambda}_{n_{j}}([S]_{j,z_{j}-1}).\]
Hence,
\[\deg(\mathsf{h}_{S})=\max_{(z_{1},\ldots,z_{k})\in\mathsf{E}^{(\mathrm{out})}( S)}\bigg{\{}\sum_{j\in[k]:z_{j}\geq 1}\overline{\Lambda}_{n_{j}}([S]_{j,z_{j} -1})\bigg{\}}+2t-2,\]
that is, \(\mathsf{h}_{S}(\mathbb{X})\) attains the lower bound.
## 5 Partial results for \((t,0)\)-exact polynomial covers
Let us have a few notations before we proceed. For any \(\alpha\in\mathbb{N}^{n}\), we denote \(\alpha!\coloneqq\alpha_{1}!\cdots\alpha_{n}!\). Further, for any \(a\in\mathbb{R}^{n}\), we define the polynomial \((\mathbb{X}-a)^{\alpha}\coloneqq(X_{1}-a_{1})^{\alpha_{1}}\cdots(X_{n}-a_{n})^{\alpha_{n}}\). Therefore, for any \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) and \(a\in\mathbb{R}^{n}\), the Taylor's expansion of \(P\) about the point \(a\) is
\[P(\mathbb{X})=\sum_{0\leq|\alpha|\leq\deg(P)}\frac{(\partial^{\alpha}P)(a)}{ \alpha!}(\mathbb{X}-a)^{\alpha}.\]
Let us first prove Proposition 1.39. We mention the statement again, for convenience.
**Proposition 1.39**.: _For \(w\in[1,n-1]\), let \(S\subsetneq\{0,1\}^{n}\) be the symmetric set defined by \(W_{n}(S)=[0,w-1]\). Then for any \(t\in\left[2,\left\lfloor\frac{n+3}{2}\right\rfloor\right]\), we have_
\[\mathsf{EPC}_{n}^{(t,0)}(S)=w+2t-3.\]
_Further, the answer to Question 1.2 is negative, in general._
Proof.: Let \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) be a \((t,0)\)-exact polynomial cover for \(S\). Consider the restricted polynomial \(\widetilde{P}(X_{1},\ldots,X_{w})\coloneqq P(X_{1},\ldots,X_{w},0^{n-w})\). Then \(\widetilde{P}(1^{w})=P(1^{w}0^{n-w})\neq 0\), since \(1^{w}0^{n-w}\not\in S\). Further, for any \(x\in\{0,1\}^{w},\,x\neq 1^{w}\), we have \(|x|\leq w-1\), which means \(x0^{n-w}\in S\), and so \(\operatorname{mult}(\widetilde{P}(X_{1},\ldots,X_{w}),x)\geq\operatorname{ mult}(P(\mathbb{X}),x0^{n-w})\geq t\). Thus, \(\widetilde{P}(X_{1},\ldots,X_{w})\) is a \((t,0)\)-exact polynomial cover for \(\{0,1\}^{w}\setminus\{1^{w}\}\). So by Theorem 2.3, we get \(\deg(\widetilde{P})\geq w+2t-3\). Hence, \(\deg(P)\geq\deg(\widetilde{P})\geq w+2t-3\).
In order to show that the lower bound is tight, let \(Q(X_{1},\ldots,X_{w})\in\mathbb{R}[\mathbb{X}]\) be a \((t,0)\)-exact polynomial cover of the symmetric set \(\{0,1\}^{w}\setminus\{1^{w}\}\), with \(\deg(Q)=w+2t-3\), as ensured by Theorem 2.3. Further, let \(\alpha=Q(1^{w})\neq 0\). Now define a polynomial
\[\widetilde{Q}(\mathbb{X})=\sum_{1\leq i_{1}<\cdots<i_{w}\leq n}Q(X_{i_{1}}, \ldots,X_{i_{w}}).\]
Then clearly, \(\deg(\widetilde{Q})=\deg(Q)=w+2t-3\). Now consider any \(x\in\{0,1\}^{n}\). We observe the following.
* If \(|x|\leq w-1\), then for any \(1\leq i_{1}<\cdots<i_{w}\leq n\), we have \(\operatorname{mult}(Q(X_{i_{1}},\ldots,X_{i_{w}}),(x_{i_{1}},\ldots,x_{i_{w}}))\geq t\), since \((x_{i_{1}},\ldots,x_{i_{w}})\neq 1^{w}\). As the multiplicity of a sum at a point is at least the minimum of the multiplicities of its summands, this gives \[\operatorname{mult}(\widetilde{Q}(\mathbb{X}),x)\geq t.\]
* If \(|x|\geq w\), let \(\{j_{1},\ldots,j_{u}\}=\{i\in[n]:x_{i}=1\}\). Every term \(Q(x_{i_{1}},\ldots,x_{i_{w}})\) with \(\{i_{1},\ldots,i_{w}\}\not\subseteq\{j_{1},\ldots,j_{u}\}\) vanishes, since the corresponding point \((x_{i_{1}},\ldots,x_{i_{w}})\) has weight at most \(w-1\). Then we have \[\widetilde{Q}(x)=\sum_{\begin{subarray}{c}1\leq i_{1}<\cdots<i_{w}\leq n\\ \{i_{1},\ldots,i_{w}\}\subseteq\{j_{1},\ldots,j_{u}\}\end{subarray}}\alpha\ =\binom{u}{w}\alpha\neq 0.\]
Thus, \(\widetilde{Q}(\mathbb{X})\) is a \((t,0)\)-exact polynomial cover for \(S\). Hence, \(\widetilde{Q}(\mathbb{X})\) attains the lower bound.
Let us now give an example showing that the answer to Question 1.2 is negative in general, that is, \(\mathsf{EHC}_{n}^{(t,0)}(S)>\mathsf{EPC}_{n}^{(t,0)}(S)\) in general. Let \(t=2\), \(n=3>2t-3\). Let \(S\subseteq\{0,1\}^{3}\) be the symmetric set defined by \(W_{n}(S)=\{0,1\}\). So \(S=\{(0,0,0),(1,0,0),(0,1,0),(0,0,1)\}\). We have \(\mathsf{EPC}_{3}^{(2,0)}(S)=3\) (since \(w=2\) in this case). Now suppose there are three hyperplanes \(\{h_{1},h_{2},h_{3}\}\) that form a \((2,0)\)-exact hyperplane cover for \(S\). Since \((0,0,0)\in S\) must be covered at least \(2\) times, without loss of generality, let \(h_{1}(0,0,0)=h_{2}(0,0,0)=0\). So \(h_{1}(X_{1},X_{2},X_{3})=a_{1}X_{1}+a_{2}X_{2}+a_{3}X_{3},\,h_{2}(X_{1},X_{2},X_{3})=b_{1}X_{1}+b_{2}X_{2}+b_{3}X_{3}\), for some \(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3}\in\mathbb{R}\). Now if some \(h_{j}\) with \(h_{j}(0,0,0)=0\) vanished at two of the points \((1,0,0),(0,1,0),(0,0,1)\), say \(h_{1}(1,0,0)=h_{1}(0,1,0)=0\), this would give \(h_{1}(1,1,0)=h_{1}(1,0,0)+h_{1}(0,1,0)=0\), which contradicts \(\{h_{1},h_{2},h_{3}\}\) being a \((2,0)\)-exact hyperplane cover for \(S\), since \((1,1,0)\not\in S\). So we conclude that \(|\mathcal{Z}(h_{j})\cap\{(1,0,0),(0,1,0),(0,0,1)\}|\leq 1\) whenever \(h_{j}(0,0,0)=0\). Now each of the three points \((1,0,0),(0,1,0),(0,0,1)\in S\) must be covered at least \(2\) times, which requires at least \(6\) incidences; but at least two of the hyperplanes pass through \((0,0,0)\) and contribute at most \(1\) incidence each, while the remaining hyperplane contributes at most \(3\), for a total of at most \(5\). This makes it impossible for each point in \(\{(1,0,0),(0,1,0),(0,0,1)\}\) to be covered \(2\) times by \(\{h_{1},h_{2},h_{3}\}\). Hence \(\mathsf{EHC}_{3}^{(2,0)}(S)>3\).
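To complement the impossibility argument for hyperplanes, the matching polynomial cover of degree \(3\) can be checked by direct computation. The sketch below is ours and is only illustrative: the witness \(Q(X,Y)=XY(1-X-Y)\) is one explicit \((2,0)\)-exact polynomial cover of \(\{0,1\}^{2}\setminus\{(1,1)\}\) of degree \(3\) (our own choice, not necessarily the one produced by Theorem 2.3), and its symmetrization is verified to cover \(S\) exactly as in the proof of Proposition 1.39.

```python
from itertools import combinations, product
from sympy import symbols, diff

X = symbols('x1 x2 x3')

# Symmetrization of the (assumed) witness Q(X, Y) = X*Y*(1 - X - Y):
Q_tilde = sum(X[i]*X[j]*(1 - X[i] - X[j]) for i, j in combinations(range(3), 2))

def vanishing_order(expr, point, max_order=4):
    """mult(expr, point): smallest |alpha| with a nonzero partial derivative."""
    for order in range(max_order + 1):
        for alpha in product(range(order + 1), repeat=3):
            if sum(alpha) != order:
                continue
            d = expr
            for var, k in zip(X, alpha):
                if k:
                    d = diff(d, var, k)
            if d.subs(dict(zip(X, point))) != 0:
                return order
    return max_order + 1

S = [pt for pt in product((0, 1), repeat=3) if sum(pt) <= 1]
for pt in product((0, 1), repeat=3):
    m = vanishing_order(Q_tilde, pt)
    if pt in S:
        assert m >= 2, (pt, m)   # covered points: multiplicity at least t = 2
    else:
        assert m == 0, (pt, m)   # uncovered points: nonzero value
print("degree-3 (2,0)-exact polynomial cover of S verified")
```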
Now let us prove Proposition 1.40. We mention the statement again, for convenience.
**Proposition 1.40**.: _For any layer \(S\subsetneq\{0,1\}^{n}\) with \(W_{n}(S)=\{w\}\), and \(t\geq 1\), we have_
\[\mathsf{EPC}_{n}^{(t,0)}(S)=t.\]
Proof.: Let \(P(\mathbb{X})\in\mathbb{R}[\mathbb{X}]\) be a \((t,0)\)-exact polynomial cover for \(S\). Fix any \(a\in S\). So there exists an invertible linear map \(T:\mathbb{R}^{n}\to\mathbb{R}^{n}\) (or equivalently, an invertible change of coordinates) such that the polynomial \(\widetilde{P}(\mathbb{X})\coloneqq P(T(\mathbb{X}-a)+a)\) is a \((t,0)\)-exact polynomial cover for \(\{a\}\). Also, \(\deg(\widetilde{P})=\deg(P)\). Now we have the Taylor expansion of \(\widetilde{P}\) about \(a\) as
\[\widetilde{P}(\mathbb{X})=\sum_{0\leq|\alpha|\leq\deg(P)}\frac{(\partial^{ \alpha}\widetilde{P})(a)}{\alpha!}(\mathbb{X}-a)^{\alpha}.\]
Since \(\widetilde{P}(\mathbb{X})\) is a \((t,0)\)-exact polynomial cover for \(\{a\}\), we have \((\partial^{\alpha}\widetilde{P})(a)=0\) for \(|\alpha|<t\). This gives
\[\widetilde{P}(\mathbb{X})=\sum_{t\leq|\alpha|\leq\deg(P)}\frac{(\partial^{ \alpha}\widetilde{P})(a)}{\alpha!}(\mathbb{X}-a)^{\alpha},\]
which implies \(\deg(P)\geq t\). Further, this lower bound is tight; for instance, the polynomial \(P(\mathbb{X})\coloneqq\big{(}\sum_{i=1}^{n}X_{i}-w\big{)}^{t}\) witnesses the lower bound.
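The witness can also be checked mechanically for a small instance; the sketch below is ours, with \(n=4\), \(w=2\), \(t=3\) chosen purely for illustration.

```python
from itertools import product
from sympy import symbols, diff

n, w, t = 4, 2, 3                       # illustrative instance (our choice)
X = symbols('x1:5')                     # x1, ..., x4
P = (sum(X) - w) ** t                   # the degree-t witness from the proof

def vanishing_order(expr, point, max_order):
    """mult(expr, point): smallest |alpha| with a nonzero partial derivative."""
    for order in range(max_order + 1):
        for alpha in product(range(order + 1), repeat=n):
            if sum(alpha) != order:
                continue
            d = expr
            for var, k in zip(X, alpha):
                if k:
                    d = diff(d, var, k)
            if d.subs(dict(zip(X, point))) != 0:
                return order
    return max_order + 1

for pt in product((0, 1), repeat=n):
    m = vanishing_order(P, pt, t)
    if sum(pt) == w:
        assert m >= t, (pt, m)          # layer points: multiplicity at least t
    else:
        assert m == 0, (pt, m)          # off-layer points: P(pt) = (|pt| - w)^t != 0
print("(t,0)-exact polynomial cover of the layer verified; deg(P) =", t)
```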
## 6 Conclusion and open questions
In this work, we have proved Theorem 1.33, which also subsumes our other main results (Theorem 1.20 and Theorem 1.23). We note that Theorem 1.33 characterizes the tight bound for the \((t,t-1)\)-block exact polynomial cover, but in the special cases of Theorem 1.20 and Theorem 1.23, our tight example specializes to the tight example for the \((t,t-1)\)-exact hyperplane cover. Therefore, as seen in the earlier work of Alon and Furedi [1] as well as initial attempts in [12, 1], solving the _weaker_ polynomial covering problem by the polynomial method indeed solves the _stronger_ hyperplane covering problem in these settings.
Some of the obvious questions that seem beyond the proof technique employed in this work are the following.
**Question 6.1**.: _In the broad generality of the polynomial covering problem considered for PDC blockwise symmetric sets, is the degeneracy condition necessary? More precisely, for any nonempty PDC \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\) and \(t\geq 1\), is it true that \(\mathsf{b}\text{-}\mathsf{EPC}_{(n_{1},\ldots,n_{k})}^{(t,t-1)}(S)=\mathsf{ EPC}_{N}^{(t,t-1)}(S)\)?_
_We believe this could be true._
**Question 6.2**.: _How do we solve the hyperplane covering problem considered for PDC blockwise symmetric sets? In other words, do Theorem 1.20 and Theorem 1.23 extend to the setting of Theorem 1.33 for the exact hyperplane cover version? More precisely, for any nonempty PDC \(k\)-wise symmetric set \(S\subseteq\{0,1\}^{N}\) and \(t\geq 1\), is it true that \(\mathsf{b}\text{-}\mathsf{EPC}_{(n_{1},\ldots,n_{k})}^{(t,t-1)}(S)=\mathsf{b} \text{-}\mathsf{EHC}_{(n_{1},\ldots,n_{k})}^{(t,t-1)}(S)\)?_
_Our work shows that our proof technique can not possibly extend to prove this. We therefore believe this may not be true._
**Question 6.3**.: _Characterize the index complexity of all nonempty PDC \(k\)-wise symmetric sets. In other words, obtain the characterization without requiring the outer intact condition in Proposition 1.41._
## Appendix A Covering by hyperplanes and polynomials
Let us give quick proofs of some simple statements that broadly assert that a hyperplane cover is a stronger notion than a polynomial cover.
### A.1 Exact hyperplane and polynomial covers: \(\mathsf{EHC}_{n}^{(t,\ell)}\geq\mathsf{EPC}_{n}^{(t,\ell)}\)
Consider \(S\subsetneq\{0,1\}^{n}\), and \(t\geq 1\), \(\ell\in[0,t-1]\). Let us prove that \(\mathsf{EHC}_{n}^{(t,\ell)}(S)\geq\mathsf{EPC}_{n}^{(t,\ell)}(S)\). Let \(\mathcal{H}(\mathbb{X})=\{H_{1}(\mathbb{X}),\ldots,H_{k}(\mathbb{X})\}\) be a \((t,\ell)\)-exact hyperplane cover for \(S\). We have
* \(|\{i\in[k]:H_{i}(a)=0\}|\geq t\) for all \(a\in S\).
* \(|\{i\in[k]:H_{i}(b)=0\}|=\ell\) for all \(b\in\{0,1\}^{n}\setminus S\).
Now consider the polynomial \(\mathcal{H}(\mathbb{X})=H_{1}(\mathbb{X})\cdots H_{k}(\mathbb{X})\).
1. Fix any \(a\in S\) and \(\alpha\in\mathbb{N}^{n}\), \(|\alpha|\leq t-1\). We have by the product rule for derivatives, \[(\partial^{\alpha}\mathcal{H})(a)=\sum_{\begin{subarray}{c}\gamma^{(1)},\ldots,\gamma^{(k)}\in\mathbb{N}^{n}\\ \gamma^{(1)}+\cdots+\gamma^{(k)}=\alpha\end{subarray}}\binom{\alpha}{\gamma^{(1)},\ldots,\gamma^{(k)}}(\partial^{\gamma^{(1)}}H_{1})(a)\cdots(\partial^{\gamma^{(k)}}H_{k})(a).\] For each \(\gamma^{(1)},\ldots,\gamma^{(k)}\in\mathbb{N}^{n}\) with \(\gamma^{(1)}+\cdots+\gamma^{(k)}=\alpha\), since \(|\{i\in[k]:H_{i}(a)=0\}|\geq t\) and \(|\gamma^{(1)}|+\cdots+|\gamma^{(k)}|=|\alpha|\leq t-1\), there exists \(i\in[k]\) such that \(\gamma^{(i)}=0^{n}\) and \(H_{i}(a)=0\). This implies \((\partial^{\gamma^{(1)}}H_{1})(a)\cdots(\partial^{\gamma^{(k)}}H_{k})(a)=0\). Thus, \((\partial^{\alpha}\mathcal{H})(a)=0\).
2. Fix any \(b\in\{0,1\}^{n}\setminus S\). Since \(|\{i\in[k]:H_{i}(b)=0\}|=\ell\), by the argument above, we get \((\partial^{\beta}\mathcal{H})(b)=0\) for every \(\beta\in\mathbb{N}^{n}\), \(|\beta|\leq\ell-1\). Now recall that the collection \(\mathcal{H}(\mathbb{X})\) is a multiset, and suppose we alternatively represent \(\mathcal{H}(\mathbb{X})=\{(F_{1}(\mathbb{X}))^{(m_{1})},\ldots,(F_{v}(\mathbb{ X}))^{(m_{v})}\}\), where \(F_{1}(\mathbb{X}),\ldots,F_{v}(\mathbb{X})\) are distinct, and \((F_{u}(\mathbb{X}))^{(m_{u})}\) (for \(u\in[v]\)) indicates \(m_{u}\) many copies of \(F_{u}(\mathbb{X})\). Then the condition \(|\{i\in[k]:H_{i}(b)=0\}|=\ell\) implies that there exists a subset \(U\subseteq[v]\) such that \(\sum_{u\in U}m_{u}=\ell\), and \(F_{u}(b)=0\) exactly when \(u\in U\). Further, by definition, we also have the inequality \(\ell\leq t-1<k\). This means \(U\subsetneq[v]\), and \(F_{u^{\prime}}(b)\neq 0\) for all \(u^{\prime}\in[v]\setminus U\). Without loss of generality, we may assume \(U=[v^{\prime}]\) for some \(v^{\prime}\in[0,v-1]\). So we have the following. * \(F_{u}(b)=0\) if \(u\in[v^{\prime}]\), and \(F_{u}(b)\neq 0\) if \(u\in[v^{\prime}+1,v]\). * \(\sum_{u=1}^{v^{\prime}}m_{u}=\ell\). Now for each \(u\in[v^{\prime}]\), since \(F_{u}(\mathbb{X})\) is an affine linear polynomial, let \(i_{u}\in[n]\) be the least such that \(\text{coeff}(X_{i_{u}},F_{u})\neq 0\). Define \(\nu=\sum_{u=1}^{v^{\prime}}m_{u}e_{i_{u}}\). Then we get \[(\partial^{\nu}\mathcal{H})(b)=\prod_{u=1}^{v^{\prime}}(\text{coeff}(X_{i_{u}},F_{u}))^{m_{u}}\neq 0,\] where \(|\nu|=\sum_{u=1}^{v^{\prime}}m_{u}=\ell\). Thus, \(\mathcal{H}(\mathbb{X})\) is a \((t,\ell)\)-exact polynomial cover for \(S\). This completes the proof.
### A.2 Block exact hyperplane and polynomial covers: \(\mathsf{b}\mathsf{-EHC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}\geq\mathsf{b}\mathsf{-EPC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}\)
Let \(\{0,1\}^{N}=\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{k}}\). Consider \(S\subsetneq\{0,1\}^{N}\), and \(t\geq 1\), \(\ell\in[0,t-1]\). Let us prove that \(\mathsf{b}\mathsf{-EHC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\geq\mathsf{b}\mathsf{-EPC}^{(t,\ell)}_{(n_{1},\ldots,n_{k})}(S)\). Let \(\mathcal{H}(\mathbb{X})=\{H_{1}(\mathbb{X}),\ldots,H_{m}(\mathbb{X})\}\) be a \((t,\ell)\)-block exact hyperplane cover for \(S\) (we write \(m\) for the number of hyperplanes, to keep it distinct from the number of blocks \(k\)). We have
* \(|\{i\in[m]:H_{i}(a)=0\}|\geq t\) for all \(a\in S\).
* \(|\{i\in[m]:H_{i}(b)=0\}|=\ell\) for all \(b\in\{0,1\}^{N}\setminus S\).
* for each \(j\in[k]\), and every \(a\in\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{j-1}}\times\{0,1\}^{n_{j+1} }\times\cdots\times\{0,1\}^{n_{k}}\), we have \(|\mathcal{H}(a,\mathbb{X}_{j})|=|\mathcal{H}(\mathbb{X})|\).
Now consider the polynomial \(\mathcal{H}(\mathbb{X})=H_{1}(\mathbb{X})\cdots H_{m}(\mathbb{X})\).
* repeating the argument as in Appendix A.1(a), we can show that \((\partial^{\alpha}\mathcal{H})(a)=0\) for any \(a\in S\) and \(\alpha\in\mathbb{N}^{N}\), \(|\alpha|\leq t-1\). So \(\mathrm{mult}(\mathcal{H}(\mathbb{X}),a)\geq t\) for all \(a\in S\).
* Fix any \(b\in\{0,1\}^{N}\setminus S\). Again, repeating the argument as in Appendix A.1(b), we can show that \((\partial^{\beta}\mathcal{H})(b)=0\) for every \(\beta\in\mathbb{N}^{N}\), \(|\beta|\leq\ell-1\). Further, now fix any \(j\in[k]\), and denote \(b=(b^{\prime},\widetilde{b})\), where \(b^{\prime}\in\{0,1\}^{n_{1}}\times\cdots\times\{0,1\}^{n_{j-1}}\times\{0,1\}^{n_{j+1}}\times\cdots\times\{0,1\}^{n_{k}}\), \(\widetilde{b}\in\{0,1\}^{n_{j}}\). This immediately gives \(\partial^{\widetilde{\beta}}\mathcal{H}(b^{\prime},\mathbb{X}_{j})|_{\widetilde{b}}=0\) for every \(\widetilde{\beta}\in\mathbb{N}^{n_{j}}\), \(|\widetilde{\beta}|\leq\ell-1\). Further, we have \(|\mathcal{H}(b^{\prime},\mathbb{X}_{j})|=|\mathcal{H}(\mathbb{X})|\). This implies \[|\{H(b^{\prime},\mathbb{X}_{j})\in\mathcal{H}(b^{\prime},\mathbb{X}_{j}):H(b^{\prime},\widetilde{b})=0\}|=|\{H(\mathbb{X})\in\mathcal{H}(\mathbb{X}):H(b)=0\}|=\ell.\] Now repeating the argument as in Appendix A.1(b) over the hypercube \(\{0,1\}^{n_{j}}\), for the point \(\widetilde{b}\in\{0,1\}^{n_{j}}\), we can show that there exists \(\nu\in\mathbb{N}^{n_{j}}\), \(|\nu|=\ell\) such that \(\partial^{\nu}\mathcal{H}(b^{\prime},\mathbb{X}_{j})|_{\widetilde{b}}\neq 0\). So \(\mathrm{mult}(\mathcal{H}(b^{\prime},\mathbb{X}_{j}),\widetilde{b})=\ell\).
Thus, \(\mathcal{H}(\mathbb{X})\) is a \((t,\ell)\)-block exact polynomial cover for \(S\). This completes the proof.
## Appendix B An exact hyperplane cover: proof of Lemma 1.12
Let us give a proof of the exact hyperplane cover for the symmetric set \(T_{n,i}\subseteq\{0,1\}^{n}\), \(i\in[0,\lceil n/2\rceil]\), constructed by [10], given in Lemma 1.12. We mention the statement again, for convenience.
**Lemma 1.12**.: _For \(i\in[0,\lceil n/2\rceil]\), the collection of hyperplanes \(\{H^{*}_{(i,j)}(\mathbb{X}):j\in[i]\}\) defined by_
\[H^{*}_{(i,j)}(\mathbb{X})=\sum_{k=1}^{n-j}X_{k}-(n-2i+j)X_{n-j+1}-(i-j),\quad j \in[i],\]
_satisfies the following._
* _For every_ \(a\in T_{n,i}\)_, there exists_ \(j\in[i]\) _such that_ \(H^{*}_{(i,j)}(a)=0\)_._
* \(H^{*}_{(i,j)}(b)\neq 0\) _for every_ \(b\in\{0,1\}^{n}\setminus T_{n,i}\)_,_ \(j\in[i]\)
Proof.: For any \(a\in\{0,1\}^{n}\), denote \(I_{0}(a)=\{t\in[n]:a_{t}=0\}\) and \(I_{1}(a)=\{t\in[n]:a_{t}=1\}\). Consider any \(a\in T_{n,i}\). So \(|a|\in[0,i-1]\cup[n-i+1,n]\). We have two cases.
1. \(|a|\in[0,i-1]\). Then \(|I_{0}(a)|\geq n-i+1\). Let \(t_{0}\) be the \((n-i+1)\)-th element in \(I_{0}(a)\). This means \(t_{0}\in[n-i+1,n]\), which implies that there exists \(j\in[i]\) such that \(t_{0}=n-j+1\). So \(a_{n-j+1}=a_{t_{0}}=0\). Further, by definition of \(t_{0}\), we get \[|a_{[1,n-j]}|=|a_{[1,n-j+1]}|=(n-j+1)-(n-i+1)=i-j.\] Thus, \(H^{*}_{(i,j)}(a)=(i-j)-(n-2i+j)\cdot 0-(i-j)=0\).
2. \(|a|\in[n-i+1,n]\). Then \(|I_{1}(a)|\geq n-i+1\). Let \(t_{1}\) be the \((n-i+1)\)-th element in \(I_{1}(a)\). This means \(t_{1}\in[n-i+1,n]\), which implies that there exists \(j\in[i]\) such that \(t_{1}=n-j+1\). So \(a_{n-j+1}=a_{t_{1}}=1\). Further, by definition of \(t_{1}\), we get \[|a_{[1,n-j]}|=|a_{[1,n-j+1]}|-1=(n-i+1)-1=n-i.\] Thus, \(H^{*}_{(i,j)}(a)=(n-i)-(n-2i+j)\cdot 1-(i-j)=0\).
Now consider any \(b\in\{0,1\}^{n}\setminus T_{n,i}\). So \(|b|\in[i,n-i]\). Fix any \(j\in[i]\). We have two cases.
1. \(b_{n-j+1}=0\). Then \(|b_{[1,n-j]}|\in[|b|-(j-1),\,|b|]\subseteq[i-j+1,\,n-i]\), and so \[H^{*}_{(i,j)}(b)\in[i-j+1-(i-j),\,n-i-(i-j)]=[1,\,n-2i+j],\] which implies \(H^{*}_{(i,j)}(b)\geq 1\).
2. \(b_{n-j+1}=1\). Then \(|b_{[1,n-j]}|\in[|b|-j,\,|b|-1]\subseteq[i-j,\,n-i-1]\), and so \[H^{*}_{(i,j)}(b)\in[i-j-(n-2i+j)-(i-j),\,n-i-1-(n-2i+j)-(i-j)]=[2i-n-j,\,-1],\] which implies \(H^{*}_{(i,j)}(b)\leq-1\).
This completes the proof.
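Since both properties are finitary, they can also be confirmed exhaustively for small parameters. The sketch below is ours; the choice \(n=7\) is arbitrary.

```python
from itertools import product
from math import ceil

def H_star(i, j, a, n):
    # H*_{(i,j)}(a) = sum_{k=1}^{n-j} a_k - (n - 2i + j) * a_{n-j+1} - (i - j)
    return sum(a[:n - j]) - (n - 2 * i + j) * a[n - j] - (i - j)

n = 7
for i in range(0, ceil(n / 2) + 1):
    for a in product((0, 1), repeat=n):
        in_T = sum(a) <= i - 1 or sum(a) >= n - i + 1        # a in T_{n,i}
        zeros = [j for j in range(1, i + 1) if H_star(i, j, a, n) == 0]
        if in_T:
            assert zeros, (i, a)        # (a): some H*_{(i,j)} vanishes at a
        else:
            assert not zeros, (i, a)    # (b): no H*_{(i,j)} vanishes at b
print("Lemma 1.12 verified exhaustively for n =", n)
```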
## Appendix C Inner and outer intervals of symmetric sets
Here we discuss some ancillary details about inner and outer intervals of symmetric sets.
### Uniqueness of inner and outer intervals
Let \(S\subseteq\{0,1\}^{n}\) be a symmetric set. It is immediate that \(\text{in-int}(\emptyset)=\text{out-int}(\emptyset)=J_{n,-1,n+1}=\emptyset\). It is also immediate that \(\text{in-int}(\{0,1\}^{n})=\text{out-int}(\{0,1\}^{n})=J_{n,\lfloor n/2\rfloor, \lfloor n/2\rfloor+1}=\{0,1\}^{n}\).
Now consider a nonempty symmetric set \(S\subsetneq\{0,1\}^{n}\). We note the following.
* There exists \(w\in[0,n]\) such that \(w\not\in W_{n}(S)\), that is, \(W_{n}(S)\subseteq[0,w-1]\cup[w+1,n]\). Let \[a=\max\{a^{\prime}\in[0,w-1]:[0,a^{\prime}]\subseteq W_{n}(S)\}\qquad\text{ with the convention }\max(\emptyset)=-1,\] \[b=\min\{b^{\prime}\in[w+1,n]:[b^{\prime},n]\subseteq W_{n}(S)\}\qquad\text{ with the convention }\min(\emptyset)=n+1.\] Then it follows immediately by definition that \(\text{in-int}(S)=J_{n,a,b}\), and is therefore unique (a small computational sketch of this appears at the end of this subsection).
* Recall that \(\mathcal{O}(S)\) is the collection of all peripheral intervals \(J_{n,a,b}\) such that \(S\subseteq J_{n,a,b}\) and \(I_{n,a,b}=W_{n}(J_{n,a,b})\) has minimum size. Now consider the function \(\lambda_{S}:\mathcal{O}(S)\to\mathbb{N}\) defined by \[\lambda_{S}(J_{n,a,b})=|a+b-n|,\quad\text{for all }J_{n,a,b}\in\mathcal{O}(S).\] We observe a simple property of the minimizer of \(\lambda_{S}\). **Observation C.1**.: _The minimizer of \(\lambda_{S}\) is either a unique peripheral interval \(J_{n,a,b}\), or exactly a pair of peripheral intervals \(\{J_{n,a,b},J_{n,n-b,n-a}\}\)._
Proof.: Suppose the minimizer of \(\lambda_{S}\) is not unique, that is, there are two distinct minimizers \(J_{n,a,b}\), \(J_{n,a^{\prime},b^{\prime}}\in\mathcal{O}(S)\). Then by definition of \(\mathcal{O}(S)\), we already have \(|I_{n,a,b}|=|I_{n,a^{\prime},b^{\prime}}|\), which implies \(a-b=a^{\prime}-b^{\prime}\). So there exists \(h\in\mathbb{Z}\) such that \(a^{\prime}=a+h\), \(b^{\prime}=b+h\). Further, by the minimization of \(\lambda_{S}\), we have \(|a+b-n|=|a^{\prime}+b^{\prime}-n|\), which yields two cases.
* \(a+b-n=a^{\prime}+b^{\prime}-n\), that is, \(a+b=a^{\prime}+b^{\prime}\). This implies \(h=0\), and so \(a^{\prime}=a\), \(b^{\prime}=b\).
* \(a+b-n=n-a^{\prime}-b^{\prime}\), that is, \(a+b=2n-(a^{\prime}+b^{\prime})\). This implies \(h=n-(a+b)\), and so \(a^{\prime}=n-b\), \(b^{\prime}=n-a\).
This completes the proof.
Recall that \(\text{out-int}(S)\) is defined by
\[\text{out-int}(S)=\begin{cases}J_{n,a,b}&\text{if }J_{n,a,b}\text{ is the unique minimizer of }\lambda_{S},\\ J_{n,a,b}&\text{if }\{J_{n,a,b},J_{n,n-b,n-a}\}\text{ are minimizers of }\lambda_{S}\text{ and }a>n-b.\end{cases}\]
Thus, by Observation C.1, it is immediate that \(\text{out-int}(S)\) is unique.
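The maxima and minima in the first bullet above translate directly into a short routine. The sketch below is ours; it assumes, consistently with the discussion above, that \(J_{n,a,b}\) denotes the peripheral interval with weight set \([0,a]\cup[b,n]\), and it picks \(w\) as the least missing weight (any missing weight gives the same answer).

```python
def inner_interval(W, n):
    """Return (a, b) with in-int(S) = J_{n,a,b}, given W = W_n(S) as a set of weights."""
    if not W:
        return -1, n + 1                          # in-int of the empty set
    if W == set(range(n + 1)):
        return n // 2, n // 2 + 1                 # in-int of the full hypercube
    w = min(set(range(n + 1)) - W)                # some weight missing from W_n(S)
    a = max((ap for ap in range(0, w) if set(range(0, ap + 1)) <= W), default=-1)
    b = min((bp for bp in range(w + 1, n + 1) if set(range(bp, n + 1)) <= W),
            default=n + 1)
    return a, b

# Example: n = 6 and W_6(S) = {0, 1, 5, 6} gives in-int(S) = J_{6,1,5}.
print(inner_interval({0, 1, 5, 6}, 6))            # (1, 5)
```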
### Illustrations of inner and outer intervals
Let us illustrate some examples of inner and outer intervals. Figure 1 shows two typical symmetric sets - _one-sided_ and _two-sided_ - and their inner and outer intervals.
Figure 1: (a) a _two-sided_ symmetric set \(S\), and (b) a _one-sided_ symmetric set \(S^{\prime}\)
Note that Figure 1 is a typical illustration. The inner and outer intervals are special when \(S\) itself is either a peripheral interval or the complement of a peripheral interval. Figure 2 shows the inner and outer intervals of a _two-sided_ peripheral interval and its complement.
Figure 3 shows the inner and outer intervals of a _one-sided_ peripheral interval and its complement. Note that the complement of a _one-sided_ peripheral interval is again a _one-sided_ peripheral interval.
Figure 2: (a) a _two-sided_ peripheral interval \(J_{n,a,b}\), and (b) the complement of \(J_{n,a,b}\)
## Appendix D Invariance of \(\Lambda_{n}\) and \(r_{n}\) under complementation
Let us quickly prove Fact 2.1. One could do so by carefully following the definitions; instead, let us give a proof using inner and outer intervals. Let \(S\subseteq\{0,1\}^{n}\) be a symmetric set, and \(\widetilde{S}\) be the image of \(S\) under the coordinate transformation \((X_{1},\ldots,X_{n})\mapsto(1-X_{1},\ldots,1-X_{n})\). This implies
\[W_{n}(\widetilde{S})=\{n-w:w\in W_{n}(S)\}.\]
So we get the following observations.
* If \(\text{out-int}(S)=J_{n,a,b}\), then \(\text{out-int}(\widetilde{S})=J_{n,n-b,n-a}\).
* If \(\text{in-int}(S)=J_{n,a,b}\), then \(\text{in-int}(\widetilde{S})=J_{n,n-b,n-a}\).
Then, using Proposition 1.17 and Fact 1.18 completes the proof of Fact 2.1.
|
2309.04929 | Learning-based Incentive Mechanism for Task Freshness-aware Vehicular
Twin Migration | Vehicular metaverses are an emerging paradigm that integrates extended
reality technologies and real-time sensing data to bridge the physical space
and digital spaces for intelligent transportation, providing immersive
experiences for Vehicular Metaverse Users (VMUs). VMUs access the vehicular
metaverse by continuously updating Vehicular Twins (VTs) deployed on nearby
RoadSide Units (RSUs). Due to the limited RSU coverage, VTs need to be
continuously online migrated between RSUs to ensure seamless immersion and
interactions for VMUs with the nature of mobility. However, the VT migration
process requires sufficient bandwidth resources from RSUs to enable online and
fast migration, leading to a resource trading problem between RSUs and VMUs. To
this end, we propose a learning-based incentive mechanism for migration task
freshness-aware VT migration in vehicular metaverses. To quantify the freshness
of the VT migration task, we first propose a new metric named Age of Twin
Migration (AoTM), which measures the time elapsed of completing the VT
migration task. Then, we propose an AoTM-based Stackelberg model, where RSUs
act as the leader and VMUs act as followers. Due to incomplete information
between RSUs and VMUs caused by privacy and security concerns, we utilize deep
reinforcement learning to learn the equilibrium of the Stackelberg game.
Numerical results demonstrate the effectiveness of our proposed learning-based
incentive mechanism for vehicular metaverses. | Junhong Zhang, Jiangtian Nie, Jinbo Wen, Jiawen Kang, Minrui Xu, Xiaofeng Luo, Dusit Niyato | 2023-09-10T03:36:56Z | http://arxiv.org/abs/2309.04929v1 | # Learning-based Incentive Mechanism for Task Freshness-aware Vehicular Twin Migration
###### Abstract
Vehicular metaverses are an emerging paradigm that integrates extended reality technologies and real-time sensing data to bridge the physical space and digital spaces for intelligent transportation, providing immersive experiences for Vehicular Metaverse Users (VMUs). VMUs access the vehicular metaverse by continuously updating Vehicular Twins (VTs) deployed on nearby RoadSide Units (RSUs). Due to the limited RSU coverage, VTs need to be continuously migrated online between RSUs to ensure seamless immersion and interactions for VMUs, given their mobility. However, the VT migration process requires sufficient bandwidth resources from RSUs to enable online and fast migration, leading to a resource trading problem between RSUs and VMUs. To this end, we propose a learning-based incentive mechanism for migration task freshness-aware VT migration in vehicular metaverses. To quantify the freshness of the VT migration task, we first propose a new metric named Age of Twin Migration (AoTM), which measures the time elapsed in completing the VT migration task. Then, we propose an AoTM-based Stackelberg model, where RSUs act as the leader and VMUs act as followers. Due to incomplete information between RSUs and VMUs caused by privacy and security concerns, we utilize deep reinforcement learning to learn the equilibrium of the Stackelberg game. Numerical results demonstrate the effectiveness of our proposed learning-based incentive mechanism for vehicular metaverses.
Metaverse, vehicular twin, Stackelberg game, Age of Information, deep reinforcement learning.
## I Introduction
The rapid advancement of immersive communication, such as Virtual Reality (VR), Augmented Reality (AR), and ubiquitous Artificial Intelligence (AI), has given rise to the vehicular metaverse. Vehicular metaverses are expected to lead the revolution in intelligent transportation systems by seamlessly blending virtual and physical spaces, making it possible to provide immersive services for Vehicular Metaverse Users (VMUs) (i.e., drivers and passengers within vehicles) [1]. Vehicular Twins (VTs) are highly accurate virtual hybrid replicas that cover the entire life cycle of vehicles and VMUs [2]. The VTs are updated by sensing data from the surrounding environment to achieve physical-virtual synchronization [3]. Through VTs, VMUs can access the vehicular metaverse to enjoy a wide range of metaverse applications, such as AR navigation, virtual education, and virtual games [2, 4].
To ensure seamless immersive experiences for VMUs in the vehicular metaverse, resource-limited vehicles offload latency-sensitive and computation-intensive tasks of updating VTs to nearby edge servers in RoadSide Units (RSUs) [2]. However, due to the limited coverage of RSUs and the mobility of vehicles, each VT has to be migrated from the current RSU to another to provide uninterrupted immersive services for VMUs. Therefore, the task freshness of the VT migration, i.e., the time it takes to complete the VT migration, is critical to VMUs. To ensure VT migration efficiency, VMUs need to purchase sufficient resources from RSUs for facilitating VT migration, especially bandwidth resources. Without loss of generality, the Metaverse Service Provider (MSP) is set as the manager of RSUs, which is the sole provider of bandwidth resources during VT migration. The MSP aims to optimize its bandwidth selling price and maximize revenue from resource trading with incomplete information. Existing work has been conducted to optimize resource pricing and allocation based on the incentive mechanism in the metaverse [5, 6, 7]. The authors in [5] formulated a Stackelberg game joint user association and resource pricing. The authors in [6] proposed a hierarchical game-theoretic approach to study a reliable coded distributed computing scheme in vehicular metaverses. However, they ignore the VT migration issue caused by the mobility of vehicles. Therefore, it is still challenging in tackling the resource trading problem in VT migration.
To address the above challenges, in this paper, we propose a new metric named Age of Twin Migration (AoTM) according to the concept of Age of Information (AoI). Considering that VMUs may be reluctant to disclose their private information during VT migration due to privacy and security concerns, we propose a learning-based incentive mechanism between the MSP and VMUs. The main contributions are summarized as follows:
* To quantify the freshness of the VT migration task, we propose a new metric named AoTM according to the concept of AoI for vehicular metaverses and apply it to evaluate the immersion of VMUs.
* To improve VT migration efficiency under information incompleteness, we formulate the Stackelberg game between the MSP and VMUs, in which the MSP acts as the leader and VMUs act as followers.
* We utilize Deep Reinforcement Learning (DRL) to solve the Stackelberg game under incomplete information. Numerical results demonstrate that the proposed learning-based scheme can converge to the Stackelberg equilibrium and outperform baseline schemes.
## II System Model
As shown in Fig. 1, edge-assisted remote rendering as a key technology is applied in vehicular metaverses [5]. To construct VTs for lower-latency and ultra-reliable metaverse services, such as AR navigation, e-commerce, and virtual games, the large-scale rendering tasks are offloaded to nearby edge servers in RSUs with abundant resources (i.e., storage, bandwidth, and computing) [2]. However, due to the dynamic mobility of vehicles and the limited service coverage of RSUs [1], VTs must be migrated from the source RSUs to the destination RSUs for realizing fully immersive metaverse services. We provide more details of the system model as follows:
* **MSP:** The MSP as the manager of RSUs can schedule resources of RSUs to provide necessary resources (e.g., computing and bandwidth) for VMUs [5]. After being authorized, the MSP can manage a number of communication channels between the source RSUs and the destination RSUs [5]. Besides, the MSP leverages sensing data (e.g., traffic conditions and vehicle locations) sent by VMUs to update VTs for providing ultra-reliable and real-time metaverse services for VMUs.
* **VTs:** VTs are the digital replicas deployed in RSUs. They cover the life cycle of vehicles and VMUs and act as intelligent assistants managing metaverse applications [2]. In addition, VTs can also analyze and predict their VMUs' behavior through a pre-trained machine learning model. Note that we consider that each VMU has a corresponding VT and the VT can be transmitted in the form of blocks during migration.
* **VMUs:** Without loss of generality, VMUs refer to drivers and passengers within vehicles. The widespread use of VR, AR, and spatial audio devices enables VMUs to enjoy metaverse services through Head-Mounted Displays (HMDs) as well as AR windshields and side windows [1]. Additionally, smart sensors on VMUs (e.g., cameras, Inertial Measurement Units (IMU) suits) collect and send sensing data (e.g., driver fatigue level and vehicle locations) to the MSP for VT synchronization [2].
## III Problem Formulation
In this section, to quantify the freshness of the VT migration task, we first propose a new metric named AoTM, which can evaluate the immersion of VMUs. Then, we design a Stackelberg game model between the MSP and VMUs for VT migration and analyze the game to prove the existence and the uniqueness of Stackelberg equilibrium among the MSP and VMUs [5, 8]. In this paper, we consider that one MSP and a set \(\mathcal{N}=\{1,\dots,n,\dots,N\}\) of \(N\) VMUs participate in VT migration and all VTs of VMUs need to be migrated.
### _Age of Twin Migration_
AoI has been widely utilized to quantify data freshness at the destination [9]. It is defined as the time elapsed since the latest received update was generated at its source, which is a promising metric to improve the performance of time-critical services [10]. Similarly, in vehicular metaverses, to quantify the freshness of the VT migration task, we propose a new metric named AoTM according to the concept of the AoI, which is defined as the time elapsed between the last successfully received VT block and the generation of the first VT block in the VT migration.
We consider that the Orthogonal Frequency Division Multiple Access (OFDMA) technology is applied in the system [5], which ensures that all communication channels occupied by the source RSU and the destination RSU are orthogonal. For VMU \(n\in\mathcal{N}\), given the purchased bandwidth \(b_{n}\in(0,+\infty)\) from the MSP, the achievable task transmission rate between the source RSU and the destination RSU is \(\gamma_{n}=b_{n}\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)\), where \(\rho\), \(h^{0}\), \(d\), \(\varepsilon\), and \(N_{0}\) represent the transmitter power of the source RSU, the unit channel power gain, the distance between the source RSU and the destination RSU, the path-loss coefficient, and the average noise power, respectively [5].
\[A_{n}=\frac{D_{n}}{\gamma_{n}}, \tag{1}\]
where, following the pre-copy live migration strategy in [11], the total migrated VT data \(D_{n}\) includes the information of system configuration (e.g., CPU and GPU), historical memory data, and real-time states of VMU \(n\).
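For a feel of the scale of (1), the following sketch (ours) evaluates \(\gamma_{n}\) and the AoTM using the channel parameters listed later in Section V; the dBm-to-watt and MB-to-bit conversions (with \(1\,\mathrm{MB}=10^{6}\) bytes) are our assumptions about the intended units.

```python
import math

def aotm(D_n_bits, b_n_hz, rho_dbm=40, h0_db=-20, d_m=500, eps=2, N0_dbm=-150):
    """AoTM = D_n / gamma_n with gamma_n = b_n * log2(1 + rho*h0*d^-eps / N0), Eq. (1)."""
    rho = 10 ** (rho_dbm / 10) * 1e-3         # transmit power in watts
    h0 = 10 ** (h0_db / 10)                   # unit channel power gain
    n0 = 10 ** (N0_dbm / 10) * 1e-3           # average noise power in watts
    snr = rho * h0 * d_m ** (-eps) / n0
    gamma_n = b_n_hz * math.log2(1 + snr)     # achievable rate, bit/s
    return D_n_bits / gamma_n                 # seconds to complete the migration

# A 200 MB twin migrated over 10 MHz of purchased bandwidth takes roughly 4 s.
print(aotm(D_n_bits=200 * 8e6, b_n_hz=10e6))
```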
### _Stackelberg Game_
In VT migration, the MSP is the sole bandwidth resource holder and VMUs rely on bandwidth resources provided by the MSP to migrate VTs between RSUs. As a result, a monopoly market is formed, in which the MSP, as the monopolist, has the pricing power over bandwidth and VMUs need to respond to the price by deciding how much bandwidth to purchase. To be specific, when the selling price of bandwidth is low, VMUs may be willing to purchase more bandwidth for enhancing immersive experiences. Conversely, VMUs are reluctant to purchase when the selling price is high, resulting in poor task freshness. Therefore, the selling price of bandwidth has a significant impact on the immersion of VMUs.
Fig. 1: A learning-based incentive mechanism framework for VT migration.
To maximize the MSP's profit and maintain its monopoly power, the Stackelberg game can provide a powerful game theoretical model that has been widely used by the monopolist to strategically set the price. The Stackelberg game between the MSP and VMUs consists of two stages. In the first stage, the MSP as the leader decides the selling price of bandwidth for its maximum utility. In the second stage, each VMU as a follower determines the bandwidth demand to maximize its utility. Note that the second stage of the game can be formulated as a competitive game [6].
#### III-B1 Utility formation in the VT migration
The utility of VMU \(n\) is the difference between the profit corresponding to its immersion and its cost of purchasing bandwidth. A higher AoTM negatively impacts the immersive experience of a VMU, decreasing its immersion [6]. Following [12], the immersion function of VMU \(n\) obtained from the MSP is defined as \(G_{n}=\alpha_{n}\ln\left(1+1/A_{n}\right)\), where \(\alpha_{n}>0\) is the unit profit for the immersion of VMU \(n\). Therefore, the utility function of VMU \(n\) is
\[U_{n}(b_{n})=G_{n}-p\cdot b_{n}, \tag{2}\]
where \(p>0\) is the unit selling price of bandwidth. In the follower stage, each VMU \(n\) maximizes its revenue \(U_{n}(b_{n})\) by deciding the best bandwidth demand to purchase. Thus, the problem of maximizing the utility of VMU \(n\) is formulated as
\[\begin{split}\textbf{Problem 1:}\max_{b_{n}}&\;U_{n}(b_{n}) \\ &\text{s.t.}&\;b_{n}>0.\end{split} \tag{3}\]
For the MSP, its utility is the difference between the sum of bandwidth fees paid by all VMUs and the transmission cost for VT migration tasks, which is affected by the unit selling price of bandwidth and bandwidth demands of VMUs. Thus, the utility of the MSP is
\[U_{s}(p)=\sum_{n=1}^{N}(p\cdot b_{n}-C\cdot b_{n}), \tag{4}\]
where \(C>0\) is the unit transmission cost of bandwidth for executing the VT migration task, which is proportional to the amount of bandwidth sold to the VMUs. In the first stage, considering that the bandwidth sold by the MSP has a maximum bandwidth \(B^{max}\) and the maximum bandwidth pricing \(p^{max}\), the MSP maximizes its revenue by deciding a selling price that ensures the total bandwidth sales do not exceed \(B^{max}\) and the bandwidth price does not exceed \(p^{max}\). Thus, the problem of maximizing the utility of the MSP is formulated as
\[\begin{split}\textbf{Problem 2:}\max_{p}&\;U_{s}(p) \\ &\text{s.t.}&\;0<\sum_{n=1}^{N}b_{n}\leq B^{max},\\ &\;b_{n}>0,\;\forall n\in\{1,\ldots,N\},\\ &\;0<C\leq p\leq p^{max}.\end{split} \tag{5}\]
#### III-B2 Stackelberg equilibrium analysis
The Stackelberg game is formulated by combining **Problem 2** and **Problem 1**. We seek the Stackelberg equilibrium to obtain the optimal solution to the formulated game. In the Stackelberg equilibrium, the MSP's utility is maximized considering that the VMUs make bandwidth demand strategies based on the best response, and neither the MSP nor any VMU can improve the individual utility by deviating from their strategies [5, 6]. The Stackelberg equilibrium is defined as follows:
**Definition 1**.: _(Stackelberg Equilibrium): We denote \(\boldsymbol{b}^{*}=\{b_{n}^{*}\}_{n=1}^{N}\) and \(p^{*}\) as the optimal bandwidth demand strategy vector and the optimal unit bandwidth selling price, respectively. Then, the strategies \((\boldsymbol{b}^{*}=\{b_{n}^{*}\}_{n=1}^{N},p^{*})\) can be Stackelberg equilibrium if and only if the following set of inequalities is strictly satisfied:_
\[\left\{\begin{array}{l}U_{s}\left(\boldsymbol{b}^{*},p^{*}\right)\geq U_{s }\left(\boldsymbol{b}^{*},p\right),\\ U_{n}\left(b_{n}^{*},\boldsymbol{b_{-n}^{*}},p^{*}\right)\geq U_{n}\left(b_{n},\boldsymbol{b_{-n}^{*}},p^{*}\right),\;\forall n\in\mathcal{N}.\end{array}\right. \tag{6}\]
In the following, we adopt the backward induction method to prove the Stackelberg equilibrium [5].
**Theorem 1**.: _The sub-game perfect equilibrium in the VMUs' subgame is unique._
Proof.: We derive the first-order derivative and the second-order derivative of \(U_{n}(b_{n})\) with respect to \(b_{n}\) as follows:
\[\begin{split}\frac{\partial U_{n}(b_{n})}{\partial b_{n}}&=\frac{\alpha_{n}\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)}{D_{n}+b_{n}\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)}-p,\\ \frac{\partial^{2}U_{n}(b_{n})}{\partial b_{n}^{2}}&=-\frac{\alpha_{n}\bigg{(}\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)\bigg{)}^{2}}{\bigg{(}D_{n}+b_{n}\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)\bigg{)}^{2}}<0.\end{split} \tag{7}\]
As the first-order derivative of \(U_{n}(b_{n})\) has a unique zero point, and the second-order derivative of \(U_{n}(b_{n})\) is negative, the VMU's utility function \(U_{n}(b_{n})\) is strictly concave with respect to \(b_{n}\). Then, based on the first-order optimality condition, i.e., \(\frac{\partial U_{n}(b_{n})}{\partial b_{n}}=0\), we can obtain the best response function of VMU \(n\), given by
\[b_{n}^{*}=\frac{\alpha_{n}}{p}-\frac{D_{n}}{\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)}. \tag{8}\]
Therefore, the sub-game perfect equilibrium in the VMUs' subgame is unique.
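The stationarity computation behind (8) is easy to reproduce symbolically; the short sketch below is ours, writing \(L=\log_{2}(1+\rho h^{0}d^{-\varepsilon}/N_{0})\).

```python
import sympy as sp

b, p, alpha, D, L = sp.symbols('b p alpha D L', positive=True)   # L = log2(1 + SNR)
U = alpha * sp.log(1 + b * L / D) - p * b                        # U_n(b_n), Eq. (2)

d1 = sp.diff(U, b)
b_star = sp.solve(sp.Eq(d1, 0), b)[0]
print(sp.simplify(b_star))            # equivalent to alpha/p - D/L, i.e. Eq. (8)
print(sp.diff(U, b, 2))               # -alpha*L**2/(D + b*L)**2 < 0, so U_n is concave
```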
**Theorem 2**.: _There exists a unique Stackelberg equilibrium \((\boldsymbol{b}^{*},p^{*})\) in the formulated game._
Proof.: Based on **Theorem 1**, the MSP as the leader in the Stackelberg game knows that there exists a unique Nash equilibrium among VMUs under any given value of \(p\). Therefore, the MSP can maximize its utility by choosing the optimal \(p\). By substituting (8) into (4), we have
\[U_{s}=\sum_{n=1}^{N}(p-C)\Bigg{(}\frac{\alpha_{n}}{p}-\frac{D_{n}}{\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)}\Bigg{)}. \tag{9}\]
Then, by taking the first-order derivative and the second-order derivative of \(U_{s}(p)\) with respect to \(p\), respectively, we have
\[\begin{split}\frac{\partial U_{s}(p)}{\partial p}&=\sum_{n=1}^{N}\Bigg{(}-\frac{D_{n}}{\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)}+\frac{\alpha_{n}C}{p^{2}}\Bigg{)},\\ \frac{\partial^{2}U_{s}(p)}{\partial p^{2}}&=\sum_{n=1}^{N}-\frac{2C\cdot\alpha_{n}}{p^{3}}<0.\end{split} \tag{10}\]
Since the first-order derivative of \(U_{s}(p)\) has a unique zero point, i.e., \(p^{*}=\sqrt{\frac{C\log_{2}\left(1+\frac{\rho h^{0}d^{-\varepsilon}}{N_{0}}\right)\sum_{n=1}^{N}\alpha_{n}}{\sum_{n=1}^{N}D_{n}}}\), and the second-order derivative of \(U_{s}(p)\) is negative, \(U_{s}(p)\) is strictly concave, indicating that the MSP has a unique optimal solution to the formulated game [8]. Based on the optimal strategy of the MSP, the VMUs' optimal strategies can be obtained [6]. Therefore, the Stackelberg equilibrium can be obtained uniquely in the formulated game.
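Under complete information the equilibrium is therefore available in closed form, which is the baseline the learning scheme is later compared against. The sketch below is ours; the numerical values (and the assumption that the constraints in (5) are not binding) are purely illustrative and are not tuned to reproduce the figures in Section V.

```python
import math

L = 38.5                       # log2(1 + SNR); assumed channel quality
alphas = [50, 40]              # illustrative immersion coefficients
D = [20, 10]                   # illustrative VT data sizes (same units as b_n * L)
C = 5                          # unit transmission cost

p_star = math.sqrt(C * L * sum(alphas) / sum(D))                 # leader's optimal price
b_star = [a / p_star - d / L for a, d in zip(alphas, D)]          # followers' responses, Eq. (8)
U_msp = (p_star - C) * sum(b_star)                                # Eq. (4) at the equilibrium
U_vmu = [a * math.log(1 + bi * L / d) - p_star * bi
         for a, bi, d in zip(alphas, b_star, D)]
print(round(p_star, 2), [round(bi, 2) for bi in b_star], round(U_msp, 2))
```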
## IV Learning-based Incentive Mechanism with Incomplete Information
In this section, we first introduce the DRL algorithm. Then, we describe how to transform the Stackelberg game into a learning task. Specifically, we model the Stackelberg game between the MSP and VMUs as a Partially Observable Markov Decision Process (POMDP) and design a DRL-based learning algorithm to explore the optimal solution to the Stackelberg model, where the MSP is the learning agent.
### _Deep Reinforcement Learning for Stackelberg Game_
Due to the competitive effect, each VMU only has its local information which is incomplete in the game and determines the bandwidth strategies in a fully non-cooperative manner [5]. DRL can be utilized to learn an optimal policy from past experiences based on the current state and the given reward without knowing any prior information. Here are the details of the DRL formulation.
#### IV-A1 State space
At the current game round \(k\in\mathcal{K}=\{0,\ldots,k,\ldots,K\}\), the state space is defined as a union of the current MSP's pricing strategy and VMUs' bandwidth demand strategies, which is denoted as \(S_{k}\triangleq\left\{p_{k},\mathbf{b}_{k}\right\}.\)
#### IV-A2 Partially observable policy
To tackle the non-stationary problem in the DRL system for facilitating VT migration, we formulate the partially observable space for VT migration. The MSP agent can only make decisions according to its local observation of the environment. We define the observation space \(o_{k}\) of the MSP at the current game round \(k\) as a union of its historical pricing strategies and VMUs' bandwidth demand strategies for past \(L\) rounds, given by
\[o_{k}\triangleq\left\{p_{k-L},\mathbf{b}_{k-L},p_{k-L+1},\mathbf{b}_{k-L+1},\ldots,p_{ k-1},\mathbf{b}_{k-1}\right\}. \tag{11}\]
Note that \(p_{k-L}\) and \(\mathbf{b}_{k-L}\) can be generated randomly during the initial stage when \(k<L\). We consider historical information because it enables the MSP agent to learn how its strategy changes impact the game result of the current time slot. When receiving an observation \(o_{k}\), the MSP agent needs to take a pricing action \(p_{k}\) to maximize its utility. Given the lower bound cost \(C\) and the upper bound price \(p^{max}\) for the pricing action, the action space can be represented as \(p_{k}\in[C,p^{max}]\), and the MSP's policy can be represented as \(\pi_{\mathbf{\theta}}\left(p_{k}\mid o_{k}\right)\rightarrow[C,p^{max}]\). Note that we use a neural network to represent the policy \(\pi_{\mathbf{\theta}}\) and the value function \(V_{\pi_{\mathbf{\theta}}}(\cdot)\), where \(\mathbf{\theta}\) is the neural network parameter.
#### IV-A3 Reward
After the state transition, the MSP would gain a reward based on the current state \(S_{k}\) and the corresponding action \(p_{k}\). The reward function of the MSP can be defined as
\[R(S_{k},p_{k})=\begin{cases}1,\,U_{s}^{k}\geq U_{best}^{k},\\ 0,\,U_{s}^{k}<U_{best}^{k},\end{cases} \tag{12}\]
where \(U_{s}^{k}\) is the current utility of the MSP in (4) and \(U_{best}^{k}\) is the highest utility that the MSP has obtained until round \(k\).
#### IV-A4 Value function
Given a policy \(\pi_{\mathbf{\theta}}\), the value function \(V_{\pi_{\mathbf{\theta}}}(S)\) can measure the expected return when starting in \(S\) and following \(\pi_{\mathbf{\theta}}\) thereafter [13], which is defined as
\[V_{\pi_{\mathbf{\theta}}}(S)\triangleq\hat{\mathbb{E}}_{\pi_{\mathbf{\theta}}}\left[ \sum_{k=0}^{K}\gamma^{k}R\left(S_{k},p_{k}\right)\mid S_{0}=S\right], \tag{13}\]
where \(\hat{\mathbb{E}}_{\pi_{\mathbf{\theta}}}(\cdot)\) is the expected value of a random variable given that the MSP agent follows the policy \(\pi_{\mathbf{\theta}}\), and \(\gamma\in[0,1]\) is the reward discounting factor to reduce the weights as the time step increases.
#### IV-A5 Actor-critic framework design
We leverage the popular actor-critic framework and the Proximal Policy Optimization (PPO) method for policy iteration [8]. Following [13], at each training iteration, we randomly sample experiences from the replay buffer to update the network parameter. Then, _Generalized Advantage Estimation_[14] is used to compute a variance-reduced advantage function estimator \(A(S,p)\) that utilizes a learned state-value function \(V_{\pi_{\mathbf{\theta}}}(S)\). Since the policy and the value function share the same parameter \(\mathbf{\theta}\) of the neural network, the loss function consists of the policy surrogate \(L^{CLIP}\left(\mathbf{\theta}\right)\) and the value function error term \(L^{VF}\left(\mathbf{\theta}\right)\). Finally, to update the policy and the value function, we utilize stochastic gradient ascent to maximize the objective function as follows:
\[\mathbf{\theta}_{e+1}=\arg\max_{\mathbf{\theta}_{e}}\frac{1}{|I|}\sum_{|I|}\hat{\mathbb{ E}}_{k}\Big{[}L^{CLIP}_{k}\left(\mathbf{\theta}_{e}\right)-cL^{VF}_{k}\left(\mathbf{ \theta}_{e}\right)\Big{]}, \tag{14}\]
\[\begin{split} L^{CLIP}_{k}(\mathbf{\theta}_{e})=\hat{\mathbb{E}}_{k} \bigg{[}&\min\Bigl{(}r_{k}(\mathbf{\theta}_{e})A(S_{k},p_{k}),\\ &\quad\quad\quad\quad\quad f_{clip}\left(r_{k}(\mathbf{\theta}_{e}) \right)A(S_{k},p_{k})\Bigr{)}\bigg{]},\end{split} \tag{15}\]
\[L^{VF}_{k}(\mathbf{\theta}_{e})=\Bigl{(}V_{\pi_{\mathbf{\theta}_{e}}}(S_{k})-V_{k}^{ targ}\Bigr{)}^{2}, \tag{16}\]
where
\[r_{k}(\mathbf{\theta}_{e})=\frac{\pi_{\mathbf{\theta}_{e}}(p_{k}|o_{k})}{\pi_{\mathbf{ \theta}_{e}^{old}}(p_{k}|o_{k})}, \tag{17}\]
\[\begin{split} A\left(S_{k},p_{k}\right)=&-V_{\pi_{ \mathbf{\theta}_{e}}}\left(S_{k}\right)+\sum_{l=k}^{K-1}\gamma^{l-k}R(S_{l},p_{l}) \\ &+\gamma^{K-k}V_{\pi_{\mathbf{\theta}_{e}}}\left(S_{K}\right),\end{split} \tag{18}\]
and
\[f_{clip}(r_{k}(\mathbf{\theta}_{e}))=\begin{cases}1-\epsilon,\,r_{k}(\mathbf{\theta}_{e}) <1-\epsilon,\\ 1+\epsilon,\,r_{k}(\mathbf{\theta}_{e})>1+\epsilon,\\ r_{k}(\mathbf{\theta}_{e}),\,1-\epsilon\leq r_{k}(\mathbf{\theta}_{e})\leq 1+ \epsilon.\end{cases} \tag{19}\]
Here, \(V_{k}^{targ}\) is the total discounted reward from time step \(k\) until the end of the episode, \(\mathbf{\theta}_{e}\) and \(\mathbf{\theta}_{e+1}\) are the policy parameters in episodes \(e\) and \(e+1\), \(\mathbf{\theta}_{e}^{old}\) represents the policy parameter for sampling in episode \(e\), \(c\) is a loss coefficient of the value function, \(r_{k}\) is the importance ratio, and \(I\) is the batch size of sampled experiences for calculating policy gradients.
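Read per batch, (14)-(19) amount to a clipped surrogate plus a value-error penalty; the numpy sketch below is ours, and the clip width \(0.2\) and coefficient \(c=0.5\) are assumed values.

```python
import numpy as np

def ppo_objective(ratio, advantage, value, value_target, eps=0.2, c=0.5):
    """Batch estimate of L^CLIP - c * L^VF, cf. Eqs. (14)-(16) and (19).

    ratio        : pi_theta(p_k | o_k) / pi_theta_old(p_k | o_k), Eq. (17)
    advantage    : A(S_k, p_k), Eq. (18)
    value        : V_{pi_theta}(S_k)
    value_target : total discounted reward from step k (V_k^targ)
    """
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)             # f_clip, Eq. (19)
    l_clip = np.minimum(ratio * advantage, clipped * advantage)
    l_vf = (value - value_target) ** 2
    return np.mean(l_clip - c * l_vf)                          # quantity maximized in Eq. (14)

rng = np.random.default_rng(0)                                 # tiny example batch
print(ppo_objective(rng.uniform(0.8, 1.2, 4), rng.normal(size=4),
                    rng.normal(size=4), rng.normal(size=4)))
```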
### _Algorithm Details_
Motivated by the above analysis, the proposed DRL algorithm details are illustrated in **Algorithm 1**. The time complexity of the proposed DRL algorithm is determined by the multiplication operations in a fully connected deep neural network [8], which can be expressed as \(\mathcal{O}\left(\sum_{f=1}^{F}\epsilon_{f}\epsilon_{f-1}\right)\), where \(\epsilon_{f}\) is the number of neural units in layer \(f\) and \(F\) is the number of hidden layers.
```
1  Initialize max round in an episode \(K\), number of episodes \(E\), batch size \(I\) and network parameter \(\mathbf{\theta}\);
2  for Episode \(e\in 1,\dots,E\) do
3      Reset environment state \(S_{0}\) and replay buffer \(\mathcal{BF}\);
4      for Round \(k\in 0,\dots,K\) do
5          MSP observes a state \(S_{k}\) and updates its observation \(o_{k-1}\) into \(o_{k}\);
6          Input \(o_{k}\) into MSP's actor policy \(\pi_{\mathbf{\theta}_{e}}\) and determine the current price strategy \(p_{k}\);
7          VMUs determine bandwidth demands through (8);
8          Update \(S_{k}\) into \(S_{k+1}\) and calculate reward \(R_{k}\) for the MSP through (12). Then, update \(U_{best}^{k}\) when a higher reward is obtained;
9          Store transition \((o_{k},p_{k},R_{k},o_{k+1})\) into \(\mathcal{BF}\);
10         if \(k\%\,|I|==0\) then
11             for \(m\in 1,\dots,M\) do
12                 Sample a random mini-batch of data with a size \(|I|\) from \(\mathcal{BF}\) to update the actor and critic through (14);
13             end for
14         end if
15     end for
16 end for
```
**Algorithm 1** Proposed DRL-based Solution for VT Migration
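To make the interaction in Algorithm 1 concrete, here is a minimal Python sketch (ours) of the underlying game environment: the MSP posts a price, the VMUs reply with the best response (8), and the reward follows (12). The actor-critic update is omitted and replaced by a random placeholder policy; the bandwidth cap is enforced by a crude projection, and all numeric values are illustrative assumptions.

```python
import math
import random

class VTMigrationEnv:
    """One Stackelberg round: the MSP posts a price, VMUs respond via Eq. (8)."""

    def __init__(self, alphas, D, L=38.5, C=5.0, p_max=50.0, B_max=50.0):
        self.alphas, self.D, self.L = alphas, D, L
        self.C, self.p_max, self.B_max = C, p_max, B_max
        self.U_best = -math.inf

    def step(self, p):
        p = min(max(p, self.C), self.p_max)                       # keep the price feasible
        b = [max(a / p - d / self.L, 1e-9) for a, d in zip(self.alphas, self.D)]
        scale = min(1.0, self.B_max / sum(b))                     # crude cap on total bandwidth
        b = [scale * bi for bi in b]
        U_msp = (p - self.C) * sum(b)                             # Eq. (4)
        reward = 1.0 if U_msp >= self.U_best else 0.0             # Eq. (12)
        self.U_best = max(self.U_best, U_msp)
        return b, U_msp, reward

env = VTMigrationEnv(alphas=[50, 40], D=[20, 10])
history = []                                    # rolling observation o_k, Eq. (11)
for k in range(100):
    p_k = random.uniform(env.C, env.p_max)      # placeholder for the actor policy
    b_k, U_k, r_k = env.step(p_k)
    history = (history + [(p_k, tuple(b_k))])[-4:]   # keep the last L = 4 rounds
print(env.U_best)
```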
## V Numerical Results
In this section, we evaluate the performance of the VT migration system for vehicular metaverses and the proposed DRL-based incentive mechanism through simulation experiments. We first describe the experimental settings, followed by the experimental results and analysis.
### _Experiment Settings_
We consider that there is one MSP and the number of VMUs \(N\in[1,6]\). Each VT has the data size \(D_{n}\in[100,300]\,(\mathrm{MB})\) and the immersion coefficient \(\alpha_{n}\in[5,20]\). The MSP's maximum bandwidth, transmission cost, and maximum selling price are set to \(50\mathrm{MHz}\), \(5\), and \(50\), respectively. As for the RSU parameters, the transmitter power of the source RSU \(\rho\) is \(40\mathrm{dBm}\), the unit channel power gain \(h^{0}\) is \(-20\mathrm{dB}\), the distance between the RSUs \(d\) is \(500\mathrm{m}\), the path-loss coefficient \(\varepsilon\) is \(2\), and the average noise power \(N_{0}\) is \(-150\mathrm{dBm}\). The parameters of the DRL are selected through fine-tuning. Specifically, we set \(L=4\), \(D=20\), \(E=500\), \(K=100\), \(M=10\), and \(lr=0.00001\) during experiments. Both hidden layers of the neural network have \(64\) nodes.
### _Experiment Results_
Figure 2 shows the convergence of the proposed DRL-based incentive mechanism when there are two VMUs. We set \(\alpha_{1}=\alpha_{2}=5\), \(D_{1}=200\mathrm{MB}\), \(D_{2}=100\mathrm{MB}\), and cost \(C=5\). As shown in Fig. 2(a), the game return of each episode converges to the maximum number of rounds, which indicates that the MSP eventually chooses the optimal strategy in every round. In Fig. 2(b), the utility of the MSP converges to its value at the Stackelberg equilibrium. Therefore, the DRL-based incentive mechanism under incomplete information performs as well as the Stackelberg game with complete information.
Fig. 2: Convergence of DRL-based incentive mechanism.
Figure 3 shows the performance of the proposed DRL-based incentive mechanism. In Fig. 3(a) and Fig. 3(b), we study the influence of the unit transmission cost. Specifically, we study the unit transmission cost by changing it from \(5\) to \(9\) and consider that there are two VMUs whose VT data sizes are \(200\mathrm{MB}\) and \(100\mathrm{MB}\), and whose immersion coefficients are both \(5\). From Fig. 3(a) and Fig. 3(b), we can see that both the utilities and strategies of the MSP and VMUs in the optimal solutions of the proposed scheme approach the Stackelberg equilibrium, which demonstrates that the proposed scheme can find the optimal solution under incomplete information. As the unit transmission cost increases, the pricing of the MSP also increases in Fig. 3(a). For example, when the unit transmission cost is \(5\), the MSP sets the price at \(25\) to incentivize VMUs to perform VT migration. However, when the unit transmission cost is \(9\), a higher price of \(34\) will be set. In Fig. 3(b), we can observe that the total bandwidth strategy of VMUs decreases when the unit transmission cost increases. For example, when the unit transmission cost is \(6\), VMUs purchase bandwidth resources of \(27.9\), while they only purchase bandwidth resources of \(23.4\) when the unit transmission cost is \(8\). Both the utilities of the MSP and VMUs significantly decrease due to the high cost of transmission in Fig. 3(a) and Fig. 3(b). The reason is that when the transmission cost is high, the MSP would increase the bandwidth price due to the cost consideration, leading to a decrease in the bandwidth purchased by VMUs because of the high price. Furthermore, we compare the proposed DRL-based scheme with random and greedy schemes. In the random scheme, the MSP determines the price randomly in each game round, while in the greedy scheme, the MSP determines the best price by selecting from past game rounds. In Fig. 3(a), we can find that our proposed scheme outperforms the baseline schemes.
Next, we study the impacts of the number of VMUs in Fig. 3(c) and Fig. 3(d). We set the data size of the VT as \(100\mathrm{MB}\), and the immersion coefficient \(\alpha_{n}\) is \(5\). As shown in Fig. 3(c), the utility of the MSP increases when the number of VMUs increases. For example, the utility of the MSP is \(7.03\) when there are only two VMUs. When the number of VMUs increases to \(6\), the MSP can obtain a higher utility of \(20.35\). Note that the price of the MSP remains unchanged initially and increases later. The reason is that when there are fewer VMUs, the bandwidth resources of the MSP are sufficient, but when the number of VMUs is too large, the bandwidth of the MSP becomes insufficient. Therefore, the MSP needs to increase the price of bandwidth to limit the purchase of excessive bandwidth by VMUs. As shown in Fig. 3(d), the average bandwidth purchased by VMUs remains unchanged at first and decreases later. Due to the competition among VMUs, the average utility of VMUs decreased by \(12.8\%\) as the number of VMUs increases from \(2\) to \(6\).
## VI Conclusion
In this paper, we proposed a learning-based incentive mechanism for task freshness-aware VT migration in vehicular metaverses. To quantify the task freshness of the VT migration, we proposed a new metric called AoTM according to the concept of the AoI. Then, we formulated the resource trading problem between the MSP and VMUs as a Stackelberg game. Furthermore, we utilized DRL to solve the game under incomplete information. Finally, numerical results demonstrate the effectiveness of the proposed mechanism. In the future, we will adopt more effective immersive metrics in conjunction with AoTM to better evaluate the immersion of VMUs and may develop a prototype system to evaluate our framework. Besides, we aim to extend our model to scenarios with multiple MSPs and VMUs.
|
2309.12187 | Bounded point derivations on Campanato spaces | Let $X$ be a compact subset of the complex plane and $x \in X$. A necessary
and sufficient condition is given in terms of Hausdorff contents for the
existence of a bounded point derivation at $x$ on the space of vanishing
Campanato functions that are analytic in a neighborhood of $X$. This
generalizes many known conditions for the existence of bounded point
derivations on other function spaces. | Evan Abshire, Stephen Deterding | 2023-09-21T15:52:01Z | http://arxiv.org/abs/2309.12187v1 | # Bounded point derivations on Campanato spaces
###### Abstract
Let \(X\) be a compact subset of the complex plane and \(x\in X\). A necessary and sufficient condition is given in terms of Hausdorff contents for the existence of a bounded point derivation at \(x\) on the space of vanishing Campanato functions that are analytic in a neighborhood of \(X\). This generalizes many known conditions for the existence of bounded point derivations on other function spaces.
## 1 Introduction
This paper concerns questions about the smoothness of functions at boundary points of subsets of the complex plane. Let \(X\) be a compact subset of the complex plane and let \(R(X)\) denote the uniform closure of rational functions with poles off \(X\). \(R(X)\) is widely studied in the theory of complex approximation; for example, Runge's theorem states that if \(A(X)\) denotes the space of functions that are analytic in a neighborhood of \(X\), then \(R(X)=A(X)\); that is, every analytic function on \(X\) can be uniformly approximated by rational functions with poles off \(X\). Every function in \(R(X)\) is differentiable at an interior point of \(X\), but in general, the functions in \(R(X)\) are not differentiable at the boundary points of \(X\); however, in many cases the functions in \(R(X)\) possess a greater degree of smoothness than what otherwise would be expected.
One such example is that \(R(X)\) may admit a bounded point derivation at a boundary point \(x\). For a non-negative integer \(t\), we say that \(R(X)\) admits a \(t\)-th order bounded point derivation at \(x\) if there exists a constant \(C>0\) such that
\[|f^{(t)}(x)|\leq C||f||_{\infty}\]
for all rational functions \(f\) with poles off \(X\). Here \(||\cdot||_{\infty}\) denotes the uniform norm on \(X\).
Bounded point derivations play an important role in the theory of rational approximation. Suppose that \(\{f_{j}\}\) is a sequence of rational functions with poles off \(X\) that converges to a limit function \(f\) on \(X\). If \(x\) is
an interior point of \(X\) then the sequence of derivatives \(\{f^{\prime}_{j}(x)\}\) converges to \(f^{\prime}(x)\); however, if \(x\) is a boundary point then the sequence of derivatives might not converge at all. Nevertheless, if \(R(X)\) admits a bounded point derivation at \(x\), then the sequence of derivatives will converge and one can define a derivative for \(f\) at \(x\) as \(f^{\prime}(x)=\lim\limits_{j\to\infty}f^{\prime}_{j}(x)\).
Bounded point derivations can be defined for other spaces of functions as well. Let \(X\) be a compact subset of \(\mathbb{C}\) and let \(U\) be an open subset of \(\mathbb{C}\). Some spaces on which bounded point derivations have been studied include \(A_{\alpha}(U)\), the space of little Lipschitz functions of order \(\alpha\) that are analytic on \(U\)[7], \(A_{0}(X)\), the space of VMO functions that are analytic on \(X\)[2], and \(A^{s}(U)\), the functions on \(U\) in the small negative Lipschitz space that are analytic on \(U\)[8]. The focus of this paper is on bounded point derivations on Campanato spaces. Campanato spaces include spaces of Lipschitz functions, functions of bounded mean oscillation, and functions in the negative Lipschitz space as special cases and thus can be used to generalize these results.
It is conjectured that for each space there are necessary and sufficient conditions for the existence of bounded point derivations given in terms of an appropriate capacity. Since there is no general theory, the conditions must be verified on a case by case basis; however, since the Campanato spaces contain \(A_{\alpha}(U)\), \(A_{0}(X)\), and \(A^{s}(U)\) determining the conditions for the existence of bounded point derivations in the space of analytic functions on the Campanato spaces verifies the conditions for these other spaces as well. Proving the following theorem is thus the principal focus of this paper. (See Section 2 for relevant definitions.)
**Theorem 1**.: Suppose \(t\) is a non-negative integer, \(1\leq p<\infty\), and let \(\lambda\geq 0\) also satisfy \(2-p<\lambda<2+p\). Let \(X\) be a compact subset of \(\mathbb{C}\) and let \(A_{p,\lambda}(X)\) denote the space of functions in the vanishing Campanato space \(V\mathscr{L}^{p,\lambda}(\mathbb{C})\) that are also analytic in a neighborhood of \(X\). Let \(A_{n}(x)\) denote the annulus \(\{z:2^{-(n+1)}\leq|z-x|\leq 2^{-n}\}\). Then \(A_{p,\lambda}(X)\) admits a \(t\)-th order bounded point derivation at \(x\) if and only if
\[\sum_{n=1}^{\infty}2^{(t+1)n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}(x)\setminus X )<\infty,\]
where \(M_{*}^{1+\frac{\lambda-2}{p}}\) denotes lower \((1+\frac{\lambda-2}{p})\)-dimensional Hausdorff content.
Theorem 1 is similar to other existence theorems for bounded point derivations. Lord and O'Farrell [7, Theorem 1.2] proved the following theorem for the case of Lipschitz approximation.
**Theorem 2**.: Suppose \(U\subseteq\mathbb{C}\) is bounded and open, \(0<\alpha<1\) and \(t\) is a non-negative integer. Let \(A_{\alpha}(U)\) denote the space of functions in the little Lipschitz class of order \(\alpha\) that are analytic on \(U\), \(x\in\partial U\), and let \(A_{n}(x)\) denote the annulus \(\{z:2^{-(n+1)}\leq|z-x|\leq 2^{-n}\}\). Then \(A_{\alpha}(U)\) admits a \(t\)-th order bounded point derivation at \(x\) if and only if
\[\sum_{n=1}^{\infty}2^{(t+1)n}M_{*}^{1+\alpha}(A_{n}(x)\setminus U)<\infty,\]
where \(M_{*}^{1+\alpha}\) denotes lower \((1+\alpha)\)-dimensional Hausdorff content.
The second author proved the following theorem for the case of BMO approximation [2, Theorem 1].
**Theorem 3**.: Let \(X\) be a compact subset of \(\mathbb{C}\) with the property that every relatively open subset of \(X\) has positive area, let \(t\) be a non-negative integer and let \(A_{0}(X)\) denote the space of \(VMO(\mathbb{C})\) functions that are analytic on a neighborhood of \(X\). Choose \(x\in\partial X\) and let \(A_{n}(x)\) denote the annulus \(\{2^{-(n+1)}\leq|z-x|\leq 2^{-n}\}\). Then \(A_{0}(X)\) admits a \(t\)-th order bounded point derivation at \(x\) if and only if
\[\sum_{n=1}^{\infty}2^{(t+1)n}M_{*}^{1}(A_{n}(x)\setminus X)<\infty,\]
where \(M_{*}^{1}\) denotes lower \(1\)-dimensional Hausdorff content.
O'Farrell has also proven a similar theorem involving negative Lipschitz classes [8, Theorem 3.7].
**Theorem 4**.: Suppose \(0<\beta<1\) and \(s=\beta-1\). Let \(U\subseteq\mathbb{C}\) be a bounded open set and \(x\in\partial U\). Let \(A_{x}^{s}(U)\) denote the functions on \(U\) in the small negative Lipschitz space that are analytic on some neighborhood of \(x\) and let \(A_{n}(x)\) denote the annulus \(\{2^{-(n+1)}\leq|z-x|\leq 2^{-n}\}\). Then \(A_{x}^{s}(U)\) admits a \(t\)-th order bounded point derivation at \(x\) if and only if
\[\sum_{n=1}^{\infty}2^{n(t+1)}M_{*}^{\beta}(A_{n}(x)\setminus U)<\infty,\]
where \(M_{*}^{\beta}\) denotes lower \(\beta\)-dimensional Hausdorff content.
In the next section, it will be demonstrated that these three theorems are special cases of Theorem 1. In particular, Theorem 2 is the case of \(\lambda=2+p\alpha\), Theorem 3 is the case of \(\lambda=2\), and Theorem 4 is the case of \(\lambda=2+ps\).
## 2 Campanato Spaces
Campanato spaces (also called Morrey-Campanato spaces) were introduced by Campanato in 1963 [1] and generalize spaces of functions of bounded mean oscillation. Let \(f\in L^{1}_{loc}(\mathbb{C})\) and let \(B\) be a ball with radius \(r\). If \(|B|\) denotes the area of \(B\), then the mean value of \(f\) on \(B\), which is denoted by \(f_{B}\) is given by
\[f_{B}=\frac{1}{|B|}\int_{B}|f|dA.\]
Let \(1\leq p<\infty\) and \(\lambda\geq 0\). The Campanato seminorm, which we denote by \([f]_{\mathscr{L}^{p,\lambda}}\), generalizes the mean oscillation of \(f\) and is given by
\[[f]_{\mathscr{L}^{p,\lambda}}=\sup_{B}\left(\frac{1}{r^{\lambda}}\int_{B}|f(z)-f_{B }|^{p}dA\right)^{\frac{1}{p}}\]
where the supremum is taken over all balls \(B\) in \(\mathbb{C}\). Equivalently (See [10, Lemma 5.6.1] for proof.) the Campanato seminorm can also be given by
\[[f]_{\mathscr{L}^{p,\lambda}}=\sup_{B}\left(\frac{1}{r^{\lambda}}\inf_{c\in \mathbb{C}}\int_{B}|f(z)-c|^{p}dA\right)^{\frac{1}{p}}.\]
In both definitions of the Campanato seminorm, up to a constant multiple, the supremum can be taken over squares with side length \(r\) instead of balls of radius \(r\). The Campanato space \(\mathscr{L}^{p,\lambda}(\mathbb{C})\) is the space of \(L^{p}\) functions with finite Campanato seminorms. That is,
\[\mathscr{L}^{p,\lambda}(\mathbb{C})=\{f\in L^{p}(\mathbb{C}):[f]_{\mathscr{L} ^{p,\lambda}}<\infty\}.\]
\(\mathscr{L}^{p,\lambda}(\mathbb{C})\) is a Banach space with norm given by
\[||f||_{\mathscr{L}^{p,\lambda}}=[f]_{\mathscr{L}^{p,\lambda}}+||f||_{p},\]
where \(||f||_{p}\) denotes the \(L^{p}\) norm. We also note that if \(f\in L^{p}(\mathbb{C})\), then \(f\in\mathscr{L}^{p,\lambda}(\mathbb{C})\) if for each ball \(B\) there exists a constant \(c(B)\) such that
\[\int_{B}|f(z)-c(B)|^{p}dA(z)\leq C(f)r^{\lambda},\]
,
where the constant \(C(f)\) depends only on \(f\).
An important feature of Campanato spaces is the following coincidence of spaces that comes from the Campanato embedding property [10, Theorem 5.5.1]. If \(p,p_{1},\lambda,\lambda_{1}\) are such that \(1\leq p,p_{1}<\infty\), \(0\leq\lambda,\lambda_{1}<\infty\), and \(\frac{\lambda_{1}-2}{p_{1}}=\frac{\lambda-2}{p}\) then \(\mathscr{L}^{p,\lambda}(\mathbb{C})=\mathscr{L}^{p_{1},\lambda_{1}}(\mathbb{C})\).
The significance of the Campanato spaces is that they contain several well known function spaces as special cases. For \(p\in[1,\infty)\), the case of \(\lambda=2\) is \(\mathrm{BMO}(\mathbb{C})\) the space of functions of bounded mean oscillation, the case \(2<\lambda\leq 2+p\) is the space of Lipschitz continuous functions \(\mathrm{Lip}_{\alpha}(\mathbb{C})\), where \(\alpha=\frac{\lambda-2}{p}\), and the case of \(\lambda<2\) corresponds to Morrey spaces [10, Theorem 5.7.1]. Another coincidence with \(\lambda<2\) occurs with the negative
Lipschitz space \(\mathrm{Lip}_{\beta}(\mathbb{C})\), where \(\beta=\frac{\lambda-2}{p}[9]\). It should be noted that \(\mathscr{L}^{p,\lambda}(\mathbb{C})\) consists only of constant functions when \(\lambda>p+2\).
We now define the vanishing Campanato spaces to serve as generalizations of the space of functions of vanishing mean oscillation. Given \(f\in L^{p}(\mathbb{C})\) and \(\delta>0\) let
\[\Omega^{p,\lambda}_{f}(\delta)=\sup_{B}\left\{\left(\frac{1}{r^{\lambda}}\int _{B}|f(z)-f_{B}|^{p}dA\right)^{\frac{1}{p}}:\ \mathrm{radius}\ B\leq\delta\right\}\]
where the supremum is taken over all balls \(B\subseteq\mathbb{C}\) with \(\mathrm{radius}\leq\delta\). Let \(V\mathscr{L}^{p,\lambda}(\mathbb{C})\) denote the subspace of functions in \(\mathscr{L}^{p,\lambda}(\mathbb{C})\) with the property that \(\Omega^{p,\lambda}_{f}(\delta)\to 0\) as \(\delta\to 0\). \(V\mathscr{L}^{p,\lambda}(\mathbb{C})\) are the vanishing Campanato spaces. As with the Campanato spaces, up to a constant multiple, the supremum can be taken over squares instead of balls in the definition of the vanishing Campanato spaces.
Like the Campanato spaces, the vanishing Campanato spaces include some well known function spaces as special cases. When \(\lambda=2\), \(V\mathscr{L}^{p,\lambda}(\mathbb{C})\) coincides with \(VMO(\mathbb{C})\), the space of functions of vanishing mean oscillation. Likewise when \(2<\lambda\leq 2+p\), \(V\mathscr{L}^{p,\lambda}(\mathbb{C})\) coincides with the little Lipschitz class \(\mathrm{lip}_{\alpha}(\mathbb{C})\) where \(\alpha=\frac{\lambda-2}{p}\).
If \(X\) is a compact set with the property that every relatively open subset of \(X\) has positive area, then we define \(\mathscr{L}^{p,\lambda}(X)=\{f|_{X}:f\in\mathscr{L}^{p,\lambda}(\mathbb{C})\}\) and \(V\mathscr{L}^{p,\lambda}(X)=\{f|_{X}:f\in V\mathscr{L}^{p,\lambda}(\mathbb{C})\}\). Let \([f]_{\mathscr{L}^{p,\lambda}(X)}=\inf[F]_{\mathscr{L}^{p,\lambda}}\), where the infimum is taken over all functions \(F\) such that \(F=f\) on \(X\). \([f]_{\mathscr{L}^{p,\lambda}(X)}\) is a seminorm on \(\mathscr{L}^{p,\lambda}(X)\), which vanishes only at the constant functions. If we let \(||f||_{\mathscr{L}^{p,\lambda}(X)}=[f]_{\mathscr{L}^{p,\lambda}(X)}+||f||_{L^{ p}(X)}\), then \(||f||_{\mathscr{L}^{p,\lambda}(X)}\) defines a norm on \(\mathscr{L}^{p,\lambda}(X)\).
Let \(A_{p,\lambda}(X)\) denote the space of \(V\mathscr{L}^{p,\lambda}(\mathbb{C})\) functions that are analytic in a neighborhood of \(X\). Suppose \(x\) is a point on the boundary of \(X\). Then \(A_{p,\lambda}(X)\) admits a \(t\)-th order bounded point derivation at \(x\) if there is a constant \(C\) such that
\[|f^{(t)}(x)|\leq C||f||_{\mathscr{L}^{p,\lambda}(X)}\]
for all functions \(f\in A_{p,\lambda}(X)\).
## 3 Hausdorff Content
It is conjectured that for each function space, the conditions for the existence of bounded point derivations are given in terms of a certain capacity. The appropriate capacity for studying bounded point derivations on \(A_{p,\lambda}(X)\) is lower \((1+\frac{\lambda-2}{p})\)-dimensional Hausdorff content, which is defined as follows. A measure function is
an increasing function \(h(t)\), \(t\geq 0\) such that \(h(t)\to 0\) as \(t\to 0\). Given a measure function \(h\) and a set \(E\subseteq\mathbb{C}\), let
\[M^{h}(E)=\inf\sum_{j}h(r_{j}),\]
where the infimum is taken over all countable coverings of \(E\) by squares with side length \(r_{j}\). Let \(\alpha>0\). The lower \(\alpha\)-dimensional Hausdorff content of \(E\) is denoted by \(M^{\alpha}_{*}(E)\) and defined by
\[M^{\alpha}_{*}(E)=\sup M^{h}(E)\]
where the supremum is taken over all measure functions \(h\) with \(h(t)\leq t^{\alpha}\) and \(t^{-\alpha}h(t)\to 0\) as \(t\to 0^{+}\). Furthermore up to a constant multiplicative bound, the infimum can be taken over dyadic squares, or balls of radius \(r_{j}\). It follows directly from the definition that lower \(\alpha\)-dimensional Hausdorff content is monotone; that is, if \(E\subseteq F\) then \(M^{\alpha}_{*}(E)\leq M^{\alpha}_{*}(F)\). Another property of Hausdorff content is that if \(B_{r}\) is a ball of radius \(r\) then \(M^{\alpha}_{*}(B_{r})=r^{\alpha}\).
We now review Frostman's lemma (See [3, pg.62] for proof.), which is a key result in relating Hausdorff content and measure.
**Lemma 5**.: Let \(h\) be a measure function and let \(K\subseteq\mathbb{C}\) be a set with positive lower \(\alpha\)-dimensional Hausdorff content. Then there is a Borel measure \(\nu\) with support on \(K\) such that
1. \(\nu(B)\leq Ch(r)\) for all balls \(B\) with radius \(r\).
2. \(\nu(K)\geq M^{\alpha}_{*}(K)\).
## 4 Preliminary Results
In this section, we prove some key lemmas that will be used in the proof of Theorem 1. Our first result is of independent interest, as it extends a result of Kaufman [6, Theorem (b)] to Campanato spaces.
**Theorem 6**.: Suppose \(1\leq p<2\) and \(2-p<\lambda\leq 2+p\). Let \(S\) be a compact set of positive \(1+\frac{\lambda-2}{p}\)-measure. Then there is a function \(g\) analytic off \(S\) in \(\mathscr{L}^{p,\lambda}(\mathbb{C})\) with Taylor expansion \(z^{-1}+\ldots\) at infinity.
_Note that the condition that \(S\) has positive \(1+\frac{\lambda-2}{p}\)-measure is satisfied by subsets of \(\mathbb{C}\) with positive area._
Proof.: The proof follows that in [6]. To simplify computations in the proof we will rewrite \(1+\frac{\lambda-2}{p}\) as \(\frac{p+\lambda-2}{p}\). By Frostman's lemma there is a measure \(\nu\) supported on \(S\) such that \(\nu(B(z,r))\leq Cr^{\frac{p+\lambda-2}{p}}\) for every ball \(B\) of radius \(r>0\). Let
\[g(z)=\int(\zeta-z)^{-1}d\nu(\zeta).\]
Then \(g\) is analytic off \(S\) and \(g(z)=z^{-1}+\ldots\). To prove that \(g\in\mathscr{L}^{p,\lambda}(\mathbb{C})\) let \(B=B(w,r)\) be a ball centered at \(w\) with radius \(r\) and \(B^{*}=B(w,2r)\). Now define
\[g_{1}(z)=\int_{B^{*}}(\zeta-z)^{-1}d\nu(\zeta)\]
and let \(g_{2}(z)=g(z)-g_{1}(z)\).
Let \(q=\frac{p}{p-1}\). Then it follows from Holder's inequality and Fubini's theorem that
\[\int_{B}|g_{1}(z)|^{p}dA(z) \leq\int_{B}\left(\int_{B^{*}}|\zeta-z|^{-p}d\nu(\zeta)\right) \left(\int_{B^{*}}1d\nu(\zeta)\right)^{\frac{p}{q}}dA(z)\] \[\leq Cr^{\frac{p+3-2}{q}}\int_{B^{*}}\int_{B}|\zeta-z|^{-p}dA(z) d\nu(\zeta)\] \[\leq Cr^{\frac{p+3-2}{q}}r^{\frac{p+3-2}{p}}r^{2-p}=Cr^{\lambda}.\]
Moreover,
\[\int_{B}|g_{2}(z)-g_{2}(w)|^{p}dA(z) =\int_{B}\left|\int_{\mathbb{C}\setminus B^{*}}[(\zeta-z)^{-1}-( \zeta-w)^{-1}]d\nu(\zeta)\right|^{p}dA(z)\] \[\leq\int_{B}\left(\int_{\mathbb{C}\setminus B^{*}}|\zeta-w|^{-1} d\nu(\zeta)\right)^{p}dA(z)\] \[\leq(2r)^{p}\int_{B}\left(\int_{\mathbb{C}\setminus B^{*}}|\zeta -w|^{-2}d\nu(\zeta)\right)^{p}dA(z)\] \[\leq(2r)^{p}\pi r^{2}\left((2r)^{-2}(2r)^{\frac{p+3-2}{p}}\right) ^{p}=Cr^{\lambda}\]
Thus
\[\int_{B}|g(z)-g_{2}(w)|^{p}dA(z)\leq Cr^{\lambda}\]
and hence \(g\in\mathscr{L}^{p,\lambda}(\mathbb{C})\).
Next we verify some important properties of a function that is crucial to the proof of the necessity of the criterion in Theorem 1.
**Lemma 7**.: Let \(X\) be a compact subset of \(\mathbb{C}\) and suppose \(1\leq p<2\) and \(2-p<\lambda\leq 2+p\). Let \(A_{n}\) denote the annulus \(\{2^{-(n+1)}\leq|z|\leq 2^{-n}\}\) and suppose \(\nu_{n}\) is a measure on \(A_{n}\setminus X\) with the following properties:
1. \(\nu_{n}(B_{r})\leq\epsilon_{n}r^{1+\frac{\lambda-2}{p}}\) for all balls \(B\) with radius \(r\).
2. \(\int\nu_{n}=C\epsilon_{n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)\).
Let \(t\) be a non-negative integer and define
\[f_{n}(z)=\int\left(\frac{\zeta}{|\zeta|}\right)^{t+1}\frac{d\nu_{n}(\zeta)}{ \zeta-z}.\]
Let \(f_{n,B}=\frac{1}{|B|}\int_{B}|f_{n}|dA\). Then
1. \([f_{n}]_{\mathcal{L}^{p,\lambda}}\leq C\epsilon_{n}\).
2. If \(B\subseteq A_{k}\) is a ball of radius \(r\) and \(k\neq n-1,n\), or \(n+1\), then \[\left(\frac{1}{r^{\lambda}}\int_{B}|f_{n}(z)|^{p}dA\right)^{\frac{1}{p}}\leq C \epsilon_{n}2^{n(1+\frac{\lambda-2}{p})}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n} \setminus X).\]
3. If \(B\subseteq A_{k}\) is a ball of radius \(r\) and \(k\neq n-1,n\), or \(n+1\), then \[\left(\frac{1}{r^{\lambda}}\int_{B}|f_{n,B}|^{p}dA\right)^{\frac{1}{p}}\leq C \epsilon_{n}2^{n(1+\frac{\lambda-2}{p})}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n} \setminus X).\]
4. If \(||\cdot||_{p}\) denotes the \(L^{p}\) norm on \(\mathbb{C}\), then \[||f_{n}||_{p}\leq C\epsilon_{n}2^{n(1+\frac{\lambda-2}{p})}M_{*}^{1+\frac{ \lambda-2}{p}}(A_{n}\setminus X).\]
Proof.: As in the previous proof, to simplify calculations we will write \(1+\frac{\lambda-2}{p}\) as \(\frac{p+\lambda-2}{p}\). Let \(q=\frac{p}{p-1}\). The proof of the first proposition is done is the same way as the proof of Theorem 6. To prove the second proposition we first note that it follows from Holder's inequality that
\[\int_{B}|f_{n}(z)|^{p}dA(z) \leq\int_{B}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|}\right)^{ p}dA(z) \tag{1}\] \[=\int_{B}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|^{\frac{2-p}{ p}}|\zeta-z|^{\frac{p+\lambda-2}{p}}}\right)^{p}dA(z)\] \[\leq\int_{B}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|^{2-\lambda }}\right)\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|^{\frac{p+\lambda-2}{p-1} }}\right)^{\frac{p}{q}}dA(z).\]
We first evaluate the second inner integral. Because \(\nu_{n}\) has support on \(A_{n}\setminus X\) and \(k\neq n-1,n,\) or \(n+1,\) it follows that \(|\zeta-z|\geq 2^{-(n+2)}\) and hence
\[\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|^{\frac{n+\lambda-2}{p- 2}}}\right)^{\frac{p}{q}}\leq C2^{n(p+\lambda-2)}\left(\epsilon_{n}M_{*}^{ \frac{p+\lambda-2}{p}}(A_{n}\setminus X)\right)^{\frac{p}{q}}.\]
For the remaining two integrals, we can use Fubini's theorem to interchange the order of integration. Hence
\[\int_{B}|f_{n}(z)|^{p}dA(z) \leq C2^{n(p+\lambda-2)}\left(\epsilon_{n}M_{*}^{\frac{p+\lambda- 2}{p}}(A_{n}\setminus X)\right)^{\frac{p}{q}}\int\int_{B}|\zeta-z|^{\lambda-2} dA(z)d\nu_{n}(\zeta)\] \[\leq C2^{n(p+\lambda-2)}\left(\epsilon_{n}M_{*}^{\frac{p+\lambda- 2}{p}}(A_{n}\setminus X)\right)^{\frac{p}{q}}r^{\lambda}\epsilon_{n}M_{*}^{ \frac{p+\lambda-2}{p}}(A_{n}\setminus X).\]
Thus
\[\left(\frac{1}{r^{\lambda}}\int_{B}|f_{n}(z)|^{p}dA(z)\right)^{ \frac{1}{p}} \leq C2^{n(\frac{p+\lambda-2}{p})}\left(\epsilon_{n}M_{*}^{\frac{ p+\lambda-2}{p}}(A_{n}\setminus X)\right)^{\frac{1}{q}}\left(\epsilon_{n}M_{*}^{ \frac{p+\lambda-2}{p}}(A_{n}\setminus X)\right)^{\frac{1}{p}}\] \[=C\epsilon_{n}2^{n(\frac{p+\lambda-2}{p})}M_{*}^{\frac{p+\lambda- 2}{p}}(A_{n}\setminus X).\]
Similarly, we prove the third proposition. We first observe that
\[\int_{B}|f_{n,B}|^{p}dA =\int_{B}\left|\frac{1}{|B|}\int_{B}f_{n}(z)dA(z)\right|^{p}dA\] \[\leq Cr^{2-2p}\left(\int_{B}\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|} dA(z)\right)^{p}.\]
Then by applying Holder's inequality we obtain
\[r^{2-2p}\left(\int_{B}\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|}dA(z)\right)^{p} \leq r^{2-2p}\left(\int_{B}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta -z|}\right)^{p}dA(z)\right)\left(\int_{B}1dA\right)^{\frac{p}{q}}\] \[=C\int_{B}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|}\right)^{p} dA(z).\]
Since this is the same integral as in (1), repeating the same calculations yields
\[\left(\frac{1}{r^{\lambda}}\int_{B}|f_{n,B}|^{p}dA(z)\right)^{ \frac{1}{p}}\leq C\epsilon_{n}2^{n(\frac{p+\lambda-2}{p})}M_{*}^{\frac{p+ \lambda-2}{p}}(A_{n}\setminus X).\]
To prove the fourth proposition, we use the same techniques as before. By Holder's inequality
\[\int_{X}|f_{n}|^{p}dA(z) \leq\int_{X}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|}\right)^{p} dA(z)\] \[\leq\int_{X}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|^{p}}\right) \left(\int 1d\nu_{n}\right)^{\frac{p}{q}}dA(z)\] \[=C\left(\epsilon_{n}M_{*}^{\frac{p+\lambda-2}{p}}(A_{n}\setminus X )\right)^{\frac{p}{q}}\int_{X}\left(\int\frac{d\nu_{n}(\zeta)}{|\zeta-z|^{p}} \right)dA(z).\]
Then by Fubini's theorem,
\[\int_{X}|f_{n}|^{p}dA(z)\leq C\left(\epsilon_{n}M_{*}^{\frac{p+\lambda-2}{p}}( A_{n}\setminus X)\right)^{\frac{p}{q}}\int\int_{X}\frac{1}{|\zeta-z|^{p}}dA(z)d\nu_{n} (\zeta).\]
Let \(B_{r}(\zeta)\) be a ball of radius \(r\) centered at \(\zeta\) such that \(X\subseteq B_{r}(\zeta)\). Then since \(p<2\),
\[\int\int_{X}\frac{1}{|\zeta-z|^{p}}dA(z)d\nu_{n} \leq\int\int_{B_{r}(\zeta)}\frac{1}{|\zeta-z|^{p}}dA(z)d\nu_{n}(\zeta)\] \[\leq C\epsilon_{n}M_{*}^{\frac{p+\lambda-2}{p}}(A_{n}\setminus X).\]
Thus
\[\left(\int_{X}|f_{n}|^{p}dA(z)\right)^{\frac{1}{p}} \leq C\left(\epsilon_{n}M_{*}^{\frac{p+\lambda-2}{p}}(A_{n} \setminus X)\right)^{\frac{1}{q}}\left(\epsilon_{n}M_{*}^{\frac{p+\lambda-2}{p }}(A_{n}\setminus X)\right)^{\frac{1}{p}}\] \[=C\epsilon_{n}M_{*}^{\frac{p+\lambda-2}{p}}(A_{n}\setminus X).\]
## 5 Proof of Sufficiency
We first prove the case of sufficiency in Theorem 1.
**Theorem 8**.: Suppose that \(X\) is a compact subset of \(\mathbb{C}\) and let \(A_{n}(x)\) denote the annulus \(\{2^{-(n+1)}\leq|z-x|\leq 2^{-n}\}\). Let \(t\) be a non-negative integer, \(1\leq p<\infty\), and let \(\lambda\geq 0\) also satisfy \(2-p<\lambda<2+p\). If
\[\sum_{n=0}^{\infty}2^{(t+1)n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}(x)\setminus X )<\infty,\]
then \(A_{p,\lambda}(X)\) admits a \(t\)-th order bounded point derivation at \(x\).
Proof.: We first choose \(f\in A_{p,\lambda}(X)\) so that \([f]_{\mathscr{L}^{p,\lambda}(X)}=1\). Choose \(K_{n}\subseteq A_{n}\setminus X\) so that \(f\) is analytic on \(A_{n}\setminus K_{n}\). Fix \(n\) and let \(\{Q_{j}\}\) be a covering of \(K_{n}\) by dyadic squares with no overlap except at the boundaries
and let \(r_{j}\) denote the side length of \(Q_{j}\). Let \(Q_{j}^{*}=\frac{3}{2}Q_{j}\), the square with side length \(\frac{3}{2}r_{j}\) and the same center as \(Q_{j}\), and let \(D_{n}=\bigcup Q_{j}^{*}\). Then it follows from the Cauchy integral formula that
\[f^{(t)}(x)=\frac{t!}{2\pi i}\sum_{n}\int_{\partial D_{n}}\frac{f(z)}{(z-x)^{t+ 1}}dz.\]
For each square \(Q_{j}\) construct a smooth function \(\phi_{j}\) with support on \(Q_{j}^{*}\) such that \(||\nabla\phi_{j}||_{\infty}\leq Cr_{j}^{-1}\), and \(\sum_{j}\phi_{j}=1\) on a neighborhood of \(\bigcup Q_{j}\). Such a construction is found in [4, Theorem 3.1]. Let \(\phi=1-\sum_{j}\phi_{j}\). Then \(\phi=1\) on \(\partial D_{n}\) and by Green's Theorem,
\[\left|\frac{t!}{2\pi i}\int_{\partial D_{n}}\frac{f(z)}{(z-x)^{t+ 1}}dz\right| =\left|\frac{t!}{2\pi i}\int_{\partial D_{n}}\frac{f(z)\phi(z)}{ (z-x)^{t+1}}dz\right|\] \[=\left|\frac{t!}{\pi}\int_{D_{n}}\frac{f(z)}{(z-x)^{t+1}}\frac{ \partial\phi}{\partial\overline{z}}dA\right|\] \[\leq\frac{t!}{\pi}\sum_{j}\left|\int_{Q_{j}^{*}}\frac{f(z)}{(z-x) ^{t+1}}\frac{\partial\phi_{j}}{\partial\overline{z}}dA\right|.\]
Moreover, \(\int_{Q_{j}^{*}}(z-x)^{-(t+1)}\frac{\partial\phi_{j}}{\partial\overline{z}}dA =\int_{\partial Q_{j}^{*}}(z-x)^{-(t+1)}\phi_{j}(z)dz=0\), and hence
\[\sum_{j}\left|\int_{Q_{j}^{*}}\frac{f(z)}{(z-x)^{t+1}}\frac{\partial\phi_{j}}{ \partial\overline{z}}dA\right|\leq\sum_{j}\int_{Q_{j}}\frac{|f(z)-f_{Q_{j}^{*} }|}{|z-x|^{t+1}}\left|\frac{\partial\phi_{j}}{\partial\overline{z}}\right|dA.\]
Thus by Holder's inequality,
\[\sum_{j}\int_{Q_{j}^{*}}\frac{|f(z)-f_{Q_{j}^{*}}|}{|z-x|^{t+1}} \left|\frac{\partial\phi_{j}}{\partial\overline{z}}\right|dA \leq\sum_{j}2^{n(t+1)}\left(\int_{Q_{j}^{*}}|f(z)-f_{Q_{j}^{*}}|^ {p}dA\right)^{\frac{1}{p}}\left(\int_{Q_{j}^{*}}\left|\frac{\partial\phi_{j}}{ \partial\overline{z}}\right|^{q}dA\right)^{\frac{1}{q}}\] \[\leq C2^{n(t+1)}\sum_{j}r_{j}^{1+\frac{\lambda-2}{p}}\left(\frac{ 1}{r^{\lambda}}\int_{Q_{j}^{*}}|f(z)-f_{Q_{j}^{*}}|^{p}dA\right)^{\frac{1}{p}}.\]
Since \(f\in V\mathscr{L}^{p,\lambda}(\mathbb{C})\) it follows that \(h(t)=t^{1+\frac{\lambda-2}{p}}\Omega_{f}^{p,\lambda}(\frac{3}{2}t)\) is admissible for \(M_{*}^{1+\frac{\lambda-2}{p}}(K_{n})\). Hence by taking the infimum over all such covers \(\{Q_{j}\}\) we have that
\[\left|\frac{t!}{2\pi i}\int_{\partial D_{n}}\frac{f(z)}{(z-x)^{t+1}}dz\right| \leq C2^{n(t+1)}M_{*}^{1+\frac{\lambda-2}{p}}(K_{n}).\]
Since \(M_{*}^{1+\frac{\lambda-2}{p}}\) is monotone, it follows that
\[|f^{(t)}(x)| \leq C\sum_{n=1}^{\infty}2^{n(t+1)}M_{*}^{1+\frac{\lambda-2}{p}}(K_{ n})\] \[\leq C\sum_{n=1}^{\infty}2^{n(t+1)}M_{*}^{1+\frac{\lambda-2}{p}}(A_ {n}\setminus X)\] \[\leq C.\]
Now suppose \(g\in A_{p,\lambda}(X)\) is analytic on \(X\) and let \(f=\frac{g}{|g|_{\mathscr{L}^{p,\lambda}(X)}}\). Then \([f]_{\mathscr{L}^{p,\lambda}(X)}=1\) and hence \(|f^{(t)}(x)|\leq C\). Thus \(|g^{(t)}(x)|\leq C[g]_{\mathscr{L}^{p,\lambda}(X)}\) and \(A_{p,\lambda}(X)\) admits a \(t\)-th order bounded point derivation at \(x\).
## 6 Proof of Necessity
Finally, we prove the case of necessity in Theorem 1
**Theorem 9**.: Suppose \(X\) is a compact subset of \(\mathbb{C}\) and let \(A_{n}(x)\) denote the annulus \(\{2^{-(n+1)}\leq|z-x|\leq 2^{-n}\}\). Let \(t\) be a non-negative integer, \(1\leq p<\infty\), and let \(\lambda\geq 0\) also satisfy \(2-p<\lambda<2+p\). If \(A_{p,\lambda}(X)\) admits a \(t\)-th order bounded point derivation at \(x\), then
\[\sum_{n=1}^{\infty}2^{(t+1)n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}(x)\setminus X )<\infty.\]
Proof.: We will verify the theorem by proving the contrapositive. Furthermore, we can assume that \(x=0\) and that \(X\) is contained entirely within the closed unit disk. First suppose that \(p<2\). If
\[\sum_{n=1}^{\infty}2^{n(t+1)}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)=\infty,\]
then we can find a decreasing sequence \(\epsilon_{n}\to 0\) such that
\[\sum_{n=1}^{\infty}2^{n(t+1)}\epsilon_{n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n} \setminus X)=\infty,\]
and \(2^{n(t+1)}\epsilon_{n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)\leq 1\) for all \(n\).
By Frostman's lemma, for each \(n\in\mathbf{N}\) there exists a positive measure \(\nu_{n}\) supported on \(A_{n}\setminus X\) such that
1. \(\nu_{n}(B)\leq\epsilon_{n}r^{1+\frac{\lambda-2}{p}}\) for all balls \(B\) with radius \(r\).
2. \(\int\nu_{n}=C\epsilon_{n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)\).
Let
\[f_{n}(z)=\int\left(\frac{\zeta}{|\zeta|}\right)^{t+1}\frac{dv_{n}(\zeta)}{\zeta- z}.\]
It follows from Lemma 7 that \(f_{n}\) is analytic off \(A_{n}\), \([f_{n}]_{\mathscr{L}^{p,\lambda}}\leq C\epsilon_{n}\) and \(f_{n}\in A_{p,\lambda}(X)\). Moreover,
\[f_{n}^{(t)}(0)=t!\int\frac{d\nu_{n}(\zeta)}{|\zeta|^{t+1}},\]
and hence \(f_{n}^{(t)}(0)\geq C2^{n(t+1)}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)\). For each \(m\in\mathbb{N}\), choose \(M\) so that
\[1\leq\sum_{n=m}^{M}2^{n(t+1)}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)\leq 2\]
and let
\[g_{m}(z)=\sum_{n=m}^{M}f_{n}(z).\]
It follows that there is a nonzero constant for which \(g_{m}^{(t)}(0)\) is bounded below for all \(m\). We wish to show that \(||g_{m}||_{\mathscr{L}^{p,\lambda}(X)}\to 0\) as \(m\to\infty\).
Let \(B\) be a ball of radius \(r\) contained in the annulus \(A_{k}=\{z:2^{-(k+1)}\leq|z|\leq 2^{k}\}\) and choose \(f_{n}\) with \(m\leq n\leq M\). Then it follows from Proposition 1 of Lemma 7 that \([f_{n}]_{\mathscr{L}^{p,\lambda}}\leq C\epsilon_{n}\). However, if \(k\neq n-1,n\), or \(n+1\), then
\[\left(\frac{1}{r^{\lambda}}\int_{B}|f_{n}(z)|^{p}dA\right)^{\frac{1}{p}}\leq C \epsilon_{n}2^{n(1+\frac{\lambda-2}{p})}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n} \setminus X)\]
and
\[\left(\frac{1}{r^{\lambda}}\int_{B}|f_{n,B}|^{p}dA\right)^{\frac{1}{p}}\leq C \epsilon_{n}2^{n(1+\frac{\lambda-2}{p})}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n} \setminus X).\]
Hence
\[[g_{m}]_{\mathscr{L}^{p,\lambda}} \leq\sum_{n=m}^{M}[f_{n}]_{\mathscr{L}^{p,\lambda}}\] \[\leq 3C\epsilon_{m}+\sum_{n=m}^{M}\left(\frac{1}{r^{\lambda}}\int_{ B}|f_{n}(z)|^{p}dA\right)^{\frac{1}{p}}+\left(\frac{1}{r^{\lambda}}\int_{B}|f_{n,B}|^{p} dA\right)^{\frac{1}{p}}\] \[\leq 3C\epsilon_{m}+C\sum_{n=m}^{M}\epsilon_{n}2^{n(1+\frac{ \lambda-2}{p})}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X).\]
In addition, by Proposition 4 of Lemma 7 we have that
\[||g_{m}||_{p} \leq\sum_{n=m}^{M}||f_{n}||_{p}\] \[\leq C\sum_{n=m}^{M}\epsilon_{n}2^{n(1+\frac{\lambda-2}{p})}M_{* }^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X).\]
Therefore it follows that
\[||g_{m}||_{\mathscr{L}^{p,\lambda}} \leq 3C\epsilon_{m}+C\sum_{n=m}^{M}\epsilon_{n}2^{n(1+\frac{ \lambda-2}{p})}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)\] \[\leq 3C\epsilon_{m}+C2^{-m(t-\frac{\lambda-2}{p})}\sum_{n=m}^{M} \epsilon_{n}2^{n(t+1)}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}\setminus X)\] \[\leq 3C\epsilon_{m}+C2^{-m(t-\frac{\lambda-2}{p})}.\]
Hence \(||g_{m}||_{\mathscr{L}^{p,\lambda}}\to 0\) as \(m\rightarrow\infty\) but \(g_{m}^{(t)}(0)\) is bounded below by a nonzero constant for all \(m\). Hence \(A_{p,\lambda}\) does not admit a \(t\)-th order bounded point derivation at \(0\), which proves the theorem when \(p<2\).
We prove the case when \(p\geq 2\) by using the fact that the Campanato space \(\mathscr{L}^{p,\lambda}(X)\) coincides with the Campanato space \(\mathscr{L}^{p_{1},\lambda_{1}}(X)\) when \(\frac{\lambda_{1}-2}{p_{1}}=\frac{\lambda-2}{p}\). Given \(p\geq 2\) and \(\lambda>2\), choose \(\lambda_{1}\) so that \(2+\frac{\lambda-2}{p}\leq\lambda_{1}<2+\frac{2(\lambda-2)}{p}\) and let \(p_{1}=p\left(\frac{\lambda_{1}-2}{\lambda-2}\right)\). Then \(1\leq p_{1}<2\) and \(\frac{\lambda-2}{p}=\frac{\lambda_{1}-2}{p_{1}}\). Thus the result of Theorem 9 for the pair \(p\) and \(\lambda\) follows from the result with the pair \(p_{1}\) and \(\lambda_{1}\).
If \(p\geq 2\) and \(\lambda<2\), choose \(\lambda_{1}\) so that \(2+\frac{2(\lambda-2)}{p}<\lambda_{1}\leq 2+\frac{\lambda-2}{p}\) and let \(p_{1}=p\left(\frac{2-\lambda_{1}}{2-\lambda}\right)\). Then \(1\leq p_{1}<2\) and \(\frac{\lambda-2}{p}=\frac{\lambda_{1}-2}{p_{1}}\). Thus the result of Theorem 9 for the pair \(p\) and \(\lambda\) follows from the result with the pair \(p_{1}\) and \(\lambda_{1}\).
When \(\lambda=2\), the Campanato space \(\mathscr{L}^{p,2}(X)\) coincides with \(\text{BMO}(X)\) for all \(p\), so the result of Theorem 9 for \(p\geq 2\) follows from the result with \(p<2\) in this case.
## 7 An Example
Let \(A_{n}=\{z\in\mathbb{C}:\frac{1}{2^{n+1}}\leq|z|\leq\frac{1}{2^{n}}\}\). For a given value of \(n\in\mathbb{N}\), \(A_{n}\) represents an annulus of the complex plane centered at the origin of the complex plane. From each annulus, an open disk is removed, with the constraint that the deleted disks may not sit on the edge of two annuli. More precisely, let \(D_{n}\) denote the open disk deleted from \(A_{n}\). A roadrunner set \(X\) is defined as \(X=\bigcup_{n=1}^{\infty}[A_{n}\setminus D_{n}]\). See Figure 1.
**Theorem 10**.: There exists a roadrunner set \(X\) such that \(A_{p,\lambda}(X)\) admits a bounded point derivation at \(0\) for every value of \(\lambda\) and \(p\), where \(1\leq p<\infty\), \(p-2<\lambda<p+2\), and \(\lambda\geq 0\).
Proof.: Let \(r_{n}=\frac{1}{n!}\), where \(r_{n}\) is the radius of each deleted disk \(D_{n}\), and let \(\epsilon>0\) be arbitrary. It is known that \(A_{p,\lambda}(X)\) admits a bounded point derivation at \(0\) if \(\sum_{n=1}^{\infty}4^{n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}(x)\setminus X)<\infty\). From the ratio test, it follows that if
\[\lim_{n\to\infty}\left|\frac{4^{n+1}M_{*}^{*}(A_{n+1}(x)\setminus X)}{4^{n}M_ {*}^{*}(A_{n}(x)\setminus X)}\right|<1,\]
then \(A_{p,\lambda}(X)\) admits a bounded point derivation at \(0\) for every value of \(\lambda\) and \(p\). Since \(\lim_{n\to\infty}\frac{1}{n+1}=0\), then it follows that \(\lim_{n\to\infty}\frac{1}{(n+1)^{\epsilon}}=0\). For the given roadrunner set, it follows from the ratio test that
\[\lim_{n\to\infty}\left|\frac{4^{n+1}(M_{*}^{*}(A_{n+1}(x)\setminus X)}{4^{n}( M_{*}^{*}(A_{n}(x)\setminus X)}\right|=\lim_{n\to\infty}\left|\frac{4r_{n+1}^{ \epsilon}}{r_{n}{}^{\epsilon}}\right|=\lim_{n\to\infty}\left|\frac{4(n!)}{(n+1 )!}\right|=\lim_{n\to\infty}\left|\frac{4}{n+1}\right|=0.\]
Since the ratio test yields \(0\) for any arbitrary positive number \(\epsilon\), regarding the given roadrunner set, this implies that \(\sum_{n=1}^{\infty}4^{n}M_{*}^{1+\frac{\lambda-2}{p}}(A_{n}(x)\setminus X)<\infty\) for every value of \(p\) and \(\lambda\).
|
2302.14684 | Exploring 3D community inconsistency in human chromosome contact
networks | Researchers developed chromosome capture methods such as Hi-C to better
understand DNA's 3D folding in nuclei. The Hi-C method captures contact
frequencies between DNA segment pairs across the genome. When analyzing Hi-C
data sets, it is common to group these pairs using standard bioinformatics
methods (e.g., PCA). Other approaches handle Hi-C data as weighted networks,
where connected node represent DNA segments in 3D proximity. In this
representation, one can leverage community detection techniques developed in
complex network theory to group nodes into mesoscale communities containing
similar connection patterns. While there are several successful attempts to
analyze Hi-C data in this way, it is common to report and study the most
typical community structure. But in reality, there are often several valid
candidates. Therefore, depending on algorithm design, different community
detection methods focusing on slightly different connectivity features may have
differing views on the ideal node groupings. In fact, even the same community
detection method may yield different results if using a stochastic algorithm.
This ambiguity is fundamental to community detection and shared by most complex
networks whenever interactions span all scales in the network. This is known as
community inconsistency. This paper explores this inconsistency of 3D
communities in Hi-C data for all human chromosomes. We base our analysis on two
inconsistency metrics, one local and one global, and quantify the network
scales where the community separation is most variable. For example, we find
that TADs are less reliable than A/B compartments and that nodes with highly
variable node-community memberships are associated with open chromatin.
Overall, our study provides a helpful framework for data-driven researchers and
increases awareness of some inherent challenges when clustering Hi-C data into
3D communities. | Dolores Bernenko, Sang Hoon Lee, Ludvig Lizana | 2023-02-28T15:53:06Z | http://arxiv.org/abs/2302.14684v1 | # Exploring 3D community inconsistency in human chromosome contact networks
###### Abstract
Researchers have developed chromosome capture methods such as Hi-C to better understand DNA's 3D folding in nuclei. The Hi-C method captures contact frequencies between DNA segment pairs across the genome. When analyzing Hi-C data sets, it is common to group these pairs using standard bioinformatics methods (e.g., PCA). Other approaches handle Hi-C data as weighted networks, where connected node pairs represent DNA segments in 3D proximity. In this representation, one can leverage community detection techniques developed in complex network theory to group nodes into mesoscale communities containing nodes with similar connection patterns. While there are several successful attempts to analyze Hi-C data in this way, it is common to report and study the most typical community structure. But in reality, there are often several valid candidates. Therefore, depending on algorithm design, different community detection methods focusing on slightly different connectivity features may have differing views on the ideal node groupings. In fact, even the same community detection method may yield different results if using a stochastic algorithm. This ambiguity is fundamental to community detection and shared by most complex networks whenever interactions span all scales in the network. This is known as community inconsistency. This paper explores this inconsistency of 3D communities in Hi-C data for all human chromosomes. We base our analysis on two inconsistency metrics, one local and one global, and quantify the network scales where the community separation is most variable. For example, we find that TADs are less reliable than A/B compartments and that nodes with highly variable node-community memberships are associated with open chromatin. Overall, our study provides a helpful framework for data-driven researchers and increases awareness of some inherent challenges when clustering Hi-C data into 3D communities.
## I Introduction
Chromosomes' three-dimensional (3D) folded structure is critical to understanding genetic processes and genome evolution. The discovery of these 3D structures relied on the analysis of pan-genome-wide chromosome capture data, represented by a pairwise interaction matrix called Hi-C map [1; 2; 3]. These maps reveal substructures of different scales, including the dichotomous division into A (active) versus B (inactive) compartments, determined from principal component analysis (PCA) [1], and smaller-scale topologically associated domains (TADs) identified using the Arrowhead algorithm [3]. Several molecular biologists find these structures appealing because they define DNA regions with correlated gene expression and epigenetic modifications. In addition, their borders enrich binding sites for architectural proteins such as CTCF (11-zinc finger protein CCCTC-binding factor) in humans and CP190 in _Drosophila melanogaster_[4].
As researchers delved deeper into this crucial topic, they discovered that TADs act as shielded 3D domains with more internal than external contacts, similar to the definition of "communities" in network science [5; 6; 7], and that some TADs are nested or partially overlapped. Also, studies of A/B compartments showed cross-scale organization as they split into six subclasses (A1, A2, B1, etc.) [8; 3]. Furthermore, the authors of this paper developed a network-community-detection method that considers the average contact frequency between distant DNA segments based on their sequence distance. This method revealed a spectrum of mesoscale 3D communities in Hi-C data [9; 10], ranging from A/B compartments to TADs. All these findings point to chromosomes as having a complex multi-scale structure.
Like Hi-C networks, most complex networks have a blend of overlapping communities at different scales, making community detection challenging. This complex
ity complicates finding statistically significant communities, as most community detection methods rely on the objective function called modularity [6; 7]. In this function, the scale is defined as the value of a "resolution parameter" [11], and most community detection algorithms select its value rather arbitrarily without a principled guideline (see Ref. [12] for the relationship between the resolution parameter and a parameter from another established community detection framework called the stochastic block model). Recently, some researchers (including one of the authors of this paper) have directly acknowledged this fact and tried extracting informative structural properties based on the ensemble of cross-scale "inconsistently" detected communities [13; 14; 15].
This paper utilizes two representative inconsistency metrics14, one local and one global, to quantitatively assess the scales that provide the most reliable 3D communities in the Hi-C data. The reliability is based on the multi-scale "landscape" of Hi-C communities. In Sec.II, we present the Hi-C data and the associated weighted networks. We also describe our inconsistency analysis framework and community detection method, and how we assign chromatin states to each Hi-C bin by calculating folds of enrichment ratios. In Sec.III, we present our findings and conclude our paper in Sec. IV.
Footnote 1: We emphasize that we are using the terminology for weighted networks since we use the weighted version of the Hi-C interaction map. For binary networks, they are reduced to the conventional version: \(A_{ij}\) is either 0 or 1 representing absence or presence of the edge, and \(k_{i}\) is the number of neighbouring nodes to \(i\) (the degree).
## II Methods
### Transforming Hi-C data as weighted network
We use the same Hi-C intra-chromosomal contact map as our previous series of studies [9; 10] [human cell line GM12878 (B-lymphoblastoid) [3; 16]]. Also, as before [9; 10], we use the MAPQG0 data set at the 100 kilobase-pair (kb) resolution and normalize the interaction map with the Knight-Ruiz (KR) matrix balancing [17]. As a result, we treat each 100 kb chromatin locus as the minimal unit, or "node", and the normalized interaction weights between nodes \(i\) and \(j\) as weighted edges, using network science terminology [5].
### Network community detection and inconsistency
One of the most popular ways to detect network communities [6; 7]--densely-connected substructures--is to maximize the objective function called modularity1
Footnote 1: We emphasize that we are using the terminology for weighted networks since we use the weighted version of the Hi-C interaction map. For binary networks, they are reduced to the conventional version: \(A_{ij}\) is either 0 or 1 representing absence or presence of the edge, and \(k_{i}\) is the number of neighbouring nodes to \(i\) (the degree).
\[\mathcal{M}=\frac{1}{2m}\sum_{i\neq j}\left[\left(A_{ij}-\gamma P_{ij}\right) \delta\left(g_{i},g_{j}\right)\right]\,. \tag{1}\]
Here, \(A_{ij}\) denotes the adjacency matrix elements corresponding to the interaction weights between nodes \(i\) and \(j\) (\(A_{ij}=0\) indicates no edge), and \(P_{ij}\) is the expected edge weight based on _a priori_ information. The most popular choice is when considering only the overall tendency of node-node interaction \(P_{ij}=k_{i}k_{j}/(2m)\), where \(k_{i}\) represents node \(i\)'s strength (the sum of its weights) and \(m\) is a normalization constant ensuring that \(-1\leq\mathcal{M}\leq 1\). Finally, \(g_{i}\) is the community index of node \(i\) and \(\delta\) is the Kronecker delta. A key parameter in our study is the resolution parameter \(\gamma\), which controls the overall community scale [11].
In principle, maximizing the modularity function with respect to all of the possible community divisions, encoded as \(\{g_{i}\}\) in Eq. (1), is a mathematically well-defined deterministic concept. However, due to the computational limitation imposed by the problem, it is prohibitively difficult to find the exact solution, e.g., from the comprehensive enumeration of the network divisions. Therefore, most network community detection algorithms rely on various types of approximations or parameter restrictions. Many algorithms take a stochastic approach to sample the community partitions, just as in standard Monte Carlo [18]. One example is the Louvain-type algorithms [19; 20] we use here (detailed in Sec. II.C).
Although stochastic approaches like Louvain have been successful in terms of speed and accuracy in many community detection applications, their stochastic nature may produce _multiple_ results that sometimes include _inconsistent_ elements. Researchers tend to work around this inconsistency [21; 22] by choosing the most consistent, or reproducible, network partition. However, one of the authors of this paper has turned this inconsistency into an advantage, using it to probe network structural information [13; 14] at both global and local levels (Figs. 1 and 2). In particular, by studying inconsistency measures one may pinpoint scale regimes or specific node collections that are the most statistically reliable (at the global level) or flexible (at the local level). For a detailed theoretical framework, we defer to Ref. [14]. But below, we remind the reader of the essential parts used in this analysis.
One metric we study is partition inconsistency (PaI). It quantifies the global degree of inconsistency among community partitions in the entire network (Fig. 1). PaI is based on a recently developed similarity measure [23]\(S_{\alpha\beta}\) between community configurations \(\alpha\) and \(\beta\). The PaI value \(\Omega(\geq 1)\) indicates the effective number of independent configurations. A small (large) \(\Omega\) value represents more consistent (inconsistent) regimes, respectively. Using PaI, one extracts the most statistically reliable ranges of community scales by focusing on the (local) minima of \(\Omega\), in particular, alongside another meaningful evidence of stable communities: the number of communities stays flat at a specific integer value [illustrated in Fig. 1b)].
While PaI describes the network's global inconsistency, we use another metric, membership inconsistency (MeI), to quantify local (individual-node) inconsistencies (Fig. 2). MeI represents the effective number of inde
pendent communities for a specific node across different community configurations. As shown in Fig. 2b), the MeI values properly detect the functionally flexible or "bridge" nodes participating in different modules2. In Sec. III, we use PaI and MeI to study global and local community inconsistencies of Hi-C maps and relate to these metrics to other biological data.
Footnote 2: The MeI measure introduced in Ref. [14] is a more principled and improved measure than the original “companionship inconsistency (CoI)” measure first introduced in Ref. [13], by considering the possibility of more than two community memberships.
### GenLouvain method
The stochastic community detection method we utilize throughout our work is version 2.1 of GenLouvain [20] (see [https://github.com/GenLouvain/GenLouvain](https://github.com/GenLouvain/GenLouvain) for the latest version). GenLouvain is a variant of the celebrated Louvain algorithm [19], which is one of the most widely used algorithms and is popular due to its speed and established packages in various programming languages. Starting from single-node communities, the algorithm accepts or rejects trial merging processes based on the modularity change in a greedy fashion. To determine the community stability across network scales, we run GenLouvain several times, at least 100, for each resolution parameter \(\gamma\) and then calculate the global (PaI) and local (MeI) inconsistency metrics.
### Cross-scale node-membership correlations
We use GenLouvain to produce an ensemble of community partitions from Hi-C data, for fixed scale parameters \(\gamma\). However, some of these partitions seems correlated. To better understand these correlations, we use a graphical embedding technique designed to illustrate high-dimensional data on a 2D plane. Specifically, we use t-SNE (t-distributed stochastic neighbor embedding) [24].
t-SNE is a general framework that aggregates data points based on some distance metric. While there are several choices, we use the so-called correlation distance \(D\), which is common for random vectors and defined as
\[D=1-r(\mathbf{u},\mathbf{v}), \tag{2}\]
where \(r(\mathbf{u},\mathbf{v})\) is the correlation between the vectors \(\mathbf{u}\) and \(\mathbf{v}\), conventionally defined as
\[r(\mathbf{u},\mathbf{v})=\frac{(\mathbf{u}-\bar{\mathbf{u}})\cdot(\mathbf{v}- \bar{\mathbf{v}})}{||(\mathbf{u}-\bar{\mathbf{u}})||_{2}||(\mathbf{v}-\bar{ \mathbf{v}})||_{2}}, \tag{3}\]
Figure 2: Illustration of the membership inconsistency (MeI). (a) The co-membership composition for node \(i\) in configurations \(\alpha\) and \(\beta\). From this co-membership structure, we calculate the MeI value \(\Psi_{i}\) as in Ref. [14]. (b) Example distribution of MeI for a small network.
Figure 1: Illustration of partition inconsistency (PaI). (a) A community ensemble composed of two configurations \(\alpha\) and \(\beta\). The similarity measure [23] quantifies the degree of similarity between the community configurations. Based on this measure, we calculate the PaI value \(\Omega\) as in Ref. [14]. (b) Cross-scale PaI curve (varying resolution parameter \(\gamma\)) alongside the average number of communities. Shaded areas show ranges of statistically reliable community scales.
where \(\bar{u}\) and \(\bar{v}\) are the mean of the elements (so that \(\bar{\mathbf{u}}=\bar{u}\times[1,1,1,\ldots]\)) and \(||\cdots||_{2}\) is the Euclidean norm. According to Eq. (2), \(D=0\) if they are perfectly correlated (\(r=1\)) and \(D\approx 1\) if they are uncorrelated (\(r\approx 0\)).
In our analysis, we create these vectors from 100 GenLouvain runs. Each vector is a Boolean representation of the node-community membership for one node (each element is either 1 or 0) depending on whether the node belongs to a specific community at a particular GenLouvain iteration. For our analysis, we used scikit-learn[25]. To reach the best visualization results, we tuned several parameters : perplexity: 20, early exaggeration: 8,initialization: random, and number of iterations: 1356.
### Chromatin states and enrichment
In Results (Sec. III), we analyze the inconsistency measures PaI and MeI in terms of chromatin states derived from an established chromatin division[26] that we downloaded from the ENCODE database[27]. This data set constitutes a list of start and stop positions associated with chromatin states called peaks. These peaks result from integrating several biological data sets, e.g., ChIP and RNA-seq, with a multivariate hidden Markov model (HMM). The authors[26] use 15 "HMM states" (S1-S15): active promoter (S1), weak promoter (S2), inactive/poised promoter (S3), strong enhancer (S4 and S5), weak/poised enhancer (S6 and S7), insulator (S8), transcriptional transition (S9), transcriptional elongation (S10), weakly transcribed (S11), Polycomb-repressed (S12), heterochromatin (S13), and repetitive/copy number variation (S14 and S15).
The start and stop regions for these 15 HMM do not match perfectly with the Hi-C bins. To classify every Hi-C bin into one of these HMM states, we calculate the folds of enrichment (FE) relative to a chromosome-wide average according to the following steps.
1. Count the number of peaks \(k_{X}\) per bin, where \(X=\mathrm{S1},\ldots,\mathrm{S15}\). Because some peaks span multiple bins, we only count the peak starts.
2. Calculate the peak frequency's expected value using the hypergeometric test (chromosome-wide sampling without replacement). The expected number of X peaks per bin is calculated as \(\bar{k}^{\prime}_{X}=K_{X}\times(n/N)\), where, \(n\) is the number of peaks of any state in a bin, \(N\) is the total number of peaks per chromosome, and \(K_{X}\) is the total number of peaks for state \(X\).
3. Calculate the folds of enrichment \(\mathrm{FE}_{X}\) for each HMM state \(X\) per bin by dividing the observed by the expected peak number, \(\mathrm{FE}_{X}=k_{X}/\bar{k}^{\prime}_{X}\).
We note each Hi-C bin can be enriched in several chromatin states. Based on enrichment, we divide Hi-C bins into five groups (A-D) if \(\mathrm{FE}_{X}>1\):
(A) Promoters: \(X=\) S1 and S2. (B) Enhancers: \(X=\) S4, S5, S6, and S7. (C) Transcribed regions: \(X=\) S9, S10, and S11. (D) Heterochromatin and other repressive states: \(X=\) S3, S13, S14, and S15. (E) Insulators: \(X=\) S8 Apart from these five groups, we assign bins that are not enriched in any state to the category "NA".
## III Results
### Local community inconsistency
We illustrated the local inconsistency associated with a single Hi-C map in Fig. 3. This map depicts the number of contacts between all 100 kb DNA-segment pairs (KR normalized) in human chromosome 10. Along the diagonal, we highlight the GenLouvain-derived communities[20],
Figure 3: Membership inconsistency analysis for \(\gamma=0.75\). The horizontal axis represents the genomic position along human chromosome 10. (a) Community membership of DNA segments shown as rectangles along the DNA sequence. Rectangles of the same color enclose DNA segments that are members of the same community (a single configuration). (b) Hi-C data is depicted as a heat map of contact frequencies between DNA-segment pairs. Rectangles along the main diagonal represent DNA segments’ community membership from panel (a). (c) Nodes’ membership inconsistency measured across 100 GenLouvain realizations. The dark-blue plot shows Mel scores of DNA segments in a chromosome (the network’s nodes). The background of this panel shows the color-coded community membership from panel (a).
at resolution parameter \(\gamma=0.75\). Squares sharing colors have the same community membership. These colors are better illustrated in the stripe above the map, showing how communities appear along the linear DNA sequence. We note that some scattered segments have the same color, which indicates that communities assemble DNA segments in 3D proximity, not only 1D adjacent neighbors. This contrasts the conventional notion of TADs, which comprise contiguous DNA stretches. To separate notations, we denote unbroken units of DNA stretches in a community as _domains_.
It is essential to realize that the domains and communities in Figs. 3a) and 3b) represent a single configuration, or partition, of Hi-C network communities at one specific resolution parameter value \(\gamma\). Since GenLouvain uses a stochastic maximization algorithm, we expect to find other partitions if running it several times on the same data set, some of which may differ substantially. To quantify this variability, we generated 100 independent network partitions and calculated the local inconsistency measure Mel[14] that quantifies how many different community configurations a single node effectively belongs to. We plot the Mel profile along chromosome 10 in Fig. 3c). This profile shows that about half of the domains do not change community membership (the median value of Mel = 1.02), whereas the rest show significantly more variability (MeI \(\approx\) 4). We also note that the MeI score is relatively uniform within each domain and that sharp MeI transitions occur near domain boundaries.
Based on previous work[14; 22], we anticipate that the MeI profile changes with the network scale. Therefore, we scanned through a wide range of community scales, extracted 3D communities, and calculated the Mel profile. We show the result from such a sweep in Fig. 4a), where each MeI profile is associated with one \(\gamma\) value. We note that some DNA regions have low Mel scores (\(\gamma>0.6\)), which indicates that nodes in those regions mostly appear in the same communities for most \(\gamma\) val
Figure 4: Membership inconsistency (MeI) across network scales (\(0.5\leq\gamma\leq 0.9\)) on human chromosome 10. (a) Mel profile for each resolution parameter \(\gamma\). Below the waterfall plot, we draw blue squares that enclose DNA segments with low Mel scores across all scales. (b) Distribution of MeI scores shown for three network scales (\(\gamma=0.75,0.80,0.85\)). We separated the distributions into A,..., and E groups (and ’NA’) highlighting the nodes’ chromatin state. Each distribution is shown as a boxen plot with a center line marking the median. (c) Median Mel across \(\gamma\) and chromatin groups.
ues. We indicate this as colored rectangles below the MeI profiles. But other DNA regions show the opposite behavior. These regions contain nodes that often do not appear in the same communities, which results in high and variable MeI values. Overall, the local node inconsistency grows as \(\gamma\) becomes larger.
### Local inconsistency and chromatin states
To appreciate the MeI variations from a biological perspective, we analyzed them relative to local chromatin states. As outlined in the Methods (Sec. II.E), we use five states and calculate the folds of enrichment for each node. We denote the chromatin states as promoters (A), enhancers (B), transcribed regions (C), heterochromatin and other repressive states (D), and insulators (E).
Below the MeI profile in Fig. 4, we show boxen plots for three \(\gamma\) values. Each subplot illustrates the distribution of MeI scores associated with each chromatin group (A-E); 'NA' represents nodes that are not enriched in any chromatin type. If following the MeI medians (horizontal lines), we note that groups A-C have consistently higher values than the rest. In panel (c), we explore this observation more thoroughly and plot the median MeI for several \(\gamma\) values. The lines show that MeI grows with \(\gamma\) and that groups A-C are more inconsistent than the chromosome-wide average (denoted 'all'). This result suggests that nodes flagged as active chromatin have more variable node-community memberships.
### Cross-scale global inconsistency
The previous subsection analyzed the cross-scale local inconsistency measure (MeI) for chromosome 10. Here, we extend the inconsistency analysis to all human chromosomes using the global inconsistency PaI instead of MeI (PaI yields one number per \(\gamma\) value instead of a chromosome-wide profile). The PaI score measures the effective number of independent network partitions (see Sec. II). By the mathematical construction of PaI [14], if there is no special scale of communities, the "null-model" behavior of PaI as a function of \(\gamma\) would be as follows. As \(\gamma\) gets larger from \(\gamma=0\), the PaI value start to increase as the average number of communities increases enough to form a certain level of inconsistency (for \(\gamma=0\), PaI trivially vanishes because there cannot be any inconsistency for the single community composed of all of the nodes). On the other extreme case of \(\gamma\to\infty\), each individual node tends to form its own singleton community, so again there is no inconsistency, or PaI becomes zero. Therefore, if there is no particular characteristic scale of communities, the PaI curves against \(\gamma\) would be single-peaked ones without any nontrivial behavior such as local minima. In reality, there are characteristic scales of communities, where PaI reach its local minima [14], which indicate the most meaningful community scale. As our results show, the Hi-C communities also exhibit such characteristic scales. To better understand this metric, we revisit chromosome 10 before analyzing all human chromosomes.
We plot the PaI values for chromosome 10 in Fig. 5 as violet circles. When \(\gamma\) is small (\(<0.5\)), we note that PaI has a plateau extending over several \(\gamma\) values. Such a plateau is ideal for stable community partitions (see Fig. 1). However, this case is trivial because the community comprises the entire network (PaI = 1, thus one effective community). Next, if \(\gamma\) increases above 0.5, PaI starts to fluctuate, which indicates that partitions become more variable. Notably, the growing trend stops at \(\gamma\approx 0.6\), and PaI decays to eventually reach a local minimum at \(\gamma\approx 0.65\). This local minimum represents relatively stable communities, hinted by the small effective number of independent partitions at that scale. As we increase \(\gamma\) above 0.65, the community structure becomes less and less stable, along with the rapidly growing number of communities (the green circles). However, asymptotically, the number of independent community ensembles grows slightly less than the number of communities per ensemble. For example, at \(\gamma=0.9\), there are 1.75 independent ensembles (effective), each of which is composed of 25 communities. As a final remark, it is essential to realize that the growing trend of PaI with \(\gamma\) does not necessarily imply the lack of intrinsic organizational scales. Instead, it indicates fuzzy scale transitions where we observe a short range of stable communities at the local PaI minimum.
When examining the PaI curve for chromosome 10, we noticed one significant local minimum with a relatively stable community partition. Next, we ask if similar inconsistency patterns appear across all human chromosomes. To this end, we plotted PaI against \(\gamma\) for chromosomes 1-22 and X in Fig. 6.
We found several commonalities. First, similar to chromosome 10, most PaI curves have at least one local min
Figure 5: Global inconsistency measured by PaI (the violet circles) for human chromosome 10, along with the average number of communities (the green circles) across a range of \(\gamma\) values with the small (smaller than the symbols themselves for all cases) error bars representing the standard deviation.
imum and maximum. Some chromosomes even have two minima (e.g., chromosomes 1, 3, 9, 14, etc.), which indicate multiple stable scales of communities. Also, when \(\gamma\) becomes large enough, the network enters the multi-community regime. Second, as \(\gamma\) grows, so does PaI. This growth indicates that community structures become increasingly inconsistent. Although it is natural to observe higher inconsistency for larger numbers of communities as there are more possible combinations. As future work, it would be informative to check its scaling behavior across different chromosomes and classify chromosomes based on the functional shapes of PaI and the number of communities.
### Node membership correlations
When analyzing the PaI and MeI scores across \(\gamma\), we noted that the Hi-C networks exhibit relatively few independent partitions (e.g., \(\mathrm{Pal}_{\mathrm{chr10}}<3\)), and that each node belongs to just a few communities (median MeI \(<4\)). This suggests that the community partitions are correlated. To better understand these correlations, we use a stochastic embedding technique called t-SNE that projects high-dimensional data clusters on a 2D plane (see Sec. II.D). In our case, the data set is the community membership per node over 100 GenLouvain runs.
We show the t-SNE analysis in Fig. 7 for three \(\gamma\) values, where each filled circle represents a network node (again, using chromosome 10). The closer two circles appear in the plot, the more correlated their node-community memberships are.
Figure 6: Chromosome-wide global community inconsistency measured by PaI. Each panel shows the values of the PaI metric (the violet circles) and the average number of communities (the green circles) across a range of \(\gamma\) values with the small (smaller than the symbols themselves for all cases) error bars representing the standard deviation.
As we increase \(\gamma\), we note that node clusters split and that some circles become isolated. We interpret this as indicating that the ensemble of network partitions grows with \(\gamma\) and that its members become increasingly dissimilar.
While the clustering is identical in panels a) and b), we color-coded them differently to highlight specific features. In a), the colors represent the local inconsistency score (MeI). We note that nodes having high MeI tend to separate from nodes with low MeI. We also see that the low MeI nodes have relatively stronger correlations, thereby forming more distinct clusters. Panel a) also indicates that nodes with low MeI have similar node-community memberships.
In panel b), the color-coding illustrates the chromatin type. To simplify the analysis, we consider two large chromatin groups--Euchromatin and Heterochromatin--instead of the five we used before (Sec. II.E). These two large groups reflect the traditional division into open and closed chromatin that is associated with active transcription and repression, respectively. In terms of our previous definitions, we form the following two groups:
Euchromatin: A, B, and C, Heterochromatin: D
Note that we disregarded group E ("Insulators") as it is associated with boundaries rather than long chromatin stretches, such as Eu- and Heterochromatin. Also, while most nodes belong to either Eu- or Heterochromatin, some are enriched in both types3. We call this group "mixed". Finally, there is yet another node group that does not enrich any of the two chromatin types ("NA").
Footnote 3: Imagine a Venn diagram with two large circles portraying the chromatin enrichment for each node. While most nodes separate into either Eu- or Heterochromatin, the diagram shows a significant overlap where some nodes are enriched in both chromatin types. These nodes belong to the mixed group.
Figure 7: Node-community membership correlations across three resolution parameters (\(\gamma=0.65,0.75,0.85\)) visualized with t-SNE dimension reduction. Nodes are represented as colored circles. Those having correlated community memberships over several GenLouvain realizations tend to cluster. In contrast, nodes that repel each other belong less likely to the same community. In (a), the colors indicate MeI scores (see legend under each plot). In (b) chromatin types (Eu- or Heterochromatin).
In panel b), we observe that community membership correlations are associated with chromatin type. For example, when \(\gamma=0.75\), Euchromatin and Mixed nodes separate from the large cluster and form new sub-groups. The nodes in these subgroups generally have high MeI scores indicating a larger variability in their community memberships.
Overall, panels (a) and (b) show a scale-dependent separation between communities associated with active and inactive DNA regions (red and blue nodes repel each other). This separation resembles the A/B compartmentalization but for small-scale 3D structures. Furthermore, the MeI score suggests that these structures have multiple independent ways to assemble if formed from Euchromatin nodes. This observation hints at higher structural variability of the accessible genome, which may reflect the dynamic nature of gene expression processes.
## IV Conclusions
There is a growing awareness that Hi-C networks have a complex scale-dependent community structure. While some communities have a hierarchical or nested organization, others form a patchwork of partially overlapping communities. In this regard, Hi-C networks do not represent exceptions. Instead, they belong to the norm: most complex networks show convoluted multi-scale behaviors whenever competing organization principles shape the network structure. These principles force some nodes into ambiguous community memberships, making network partitioning challenging.
To better understand the scales where this might cause problems when clustering Hi-C data, we have analyzed the node-community variability over an ensemble of network partitions and estimated the ensemble's size. We have found that it typically grows as we zoom in to the network. However, this trend has significant breaks where the ensemble size drops at some specific network scales. This drop narrows the distribution of possible network partitions. We hypothesize that these minima represent the most common partitions of the average 3D chromosome organization (over a cell population).
Moreover, we have found nodes that belong to several communities when calculating the node-community membership variability. These ambiguous nodes act as bridges and are associated with specific chromatin types. For example, we have found the highest variability for nodes classified as enriched in active chromatin. This finding contrasts with inactive (or repressed) chromatin nodes, which typically exhibit a relatively more consistent community organization. One explanation is Euchromatin's somewhat higher physical flexibility when exploring the nuclear 3D proximity in search of other DNA regions to form functional contacts (see a recent review [28]). An alternative explanation is that fuzzy node-community memberships reflect significant cell-to-cell variations. While some physical interactions might be stable in one cell, they may be absent in another. Therefore, as Hi-C maps portray the average contact frequency over many cells, this variability may manifest in ambiguous nodes.
Finally, a recent study investigated the challenges in finding reliable communities in Hi-C data [29]. That work aimed to map out the landscape of feasible network partitions in Hi-C networks and found that the width of the landscape is scale-dependent. Our study takes a more node-centric view, where we calculated the local inconsistency of individual nodes and discovered that some nodes have fuzzy node-community memberships. Both studies highlight that finding reliable communities in Hi-C data is challenging, especially on some scales. One root cause is that Hi-C networks are almost entirely connected (with many weak links). Under these circumstances, we expect that Hi-C networks have several community divisions that cannot be distinguished without additional data, such as gene expression or epigenetic profiles. This fundamental problem suggests that there is a significant likelihood of disagreement on the ideal network division between any community-finding or data-clustering methods. This challenge has likely contributed to debates on the actual differences between TADs and sub-TADs [30; 31].
###### Acknowledgements.
The authors thank Daekyung Lee for providing the Python codes to calculate the inconsistency measures [14]. S.H.L. was supported by the National Research Foundation (NRF) of Korea Grants Nos. NRF-2021R1C1C1004132 and NRF-2022R1A4A1030660. LL acknowledges financial support from the Swedish Research Council (Grant No. 2021-04080).
|
2309.03934 | The Monodromic Axion-Photon Coupling | We consider the general form of the axion coupling to photons in the
axion-Maxwell theory. On general grounds this coupling takes the form of a
monodromic function of the axion, which we call $g(a)$, multiplying the
Chern-Pontryagin density $F \widetilde{F}$ of the photon. We show that the
non-linearity of $g(a)$ is a spurion for the shift symmetry of the axion. In
this context, when $g(a) \neq \mathbb{Z}a$, the linearized coupling of the
axion $g'(a)$ is not quantized and there is a correlated mass term for the
axion. Singularities in $g(a)$ due to the fast rearrangement of degrees of
freedom are shown to have corresponding cusps and singularities in the axion
potential. We derive the general form of $g(a)$ for the QCD axion, axions with
perturbatively broken shift symmetries and axions descending from extra
dimensions. In all cases, we show that there is a uniform general form of the
monodromic function $g(a)$ and it is connected to the axion potential. | Prateek Agrawal, Arthur Platschorre | 2023-09-07T18:00:00Z | http://arxiv.org/abs/2309.03934v2 | # The Monodromic Axion-Photon Coupling
###### Abstract
We consider the general form of the axion coupling to photons in the axion-Maxwell theory. On general grounds this coupling takes the form of a monodromic function of the axion, which we call \(g(a)\), multiplying the Chern-Pontryagin density \(F\widetilde{F}\) of the photon. We show that the non-linearity of \(g(a)\) is a spurion for the shift symmetry of the axion. In this context, when \(g(a)\neq\mathbb{Z}a\), the linearized coupling of the axion \(g^{\prime}(a)\) is not quantized and there is a correlated mass term for the axion. Singularities in \(g(a)\) due to the fast rearrangement of degrees of freedom are shown to have corresponding cusps and singularities in the axion potential. We derive the general form of \(g(a)\) for the QCD axion, axions with perturbatively broken shift symmetries and axions descending from extra dimensions. In all cases, we show that there is a uniform general form of the monodromic function \(g(a)\) and it is connected to the axion potential.
###### Contents
* 1 Introduction
* 2 Axion-Maxwell Theory
* 2.1 Quantization
* 2.2 Symmetries of \(\mathcal{L}_{\rm axMax}\)
* 2.3 General properties of \(g(a)\)
* 3 The QCD axion
* 4 Perturbative PQ breaking
* 4.1 The effective potential
* 4.2 The effective axion-photon coupling
* 5 Axions from extra dimensions and \(g(a)\)
* 5.1 5D instantons
* 5.2 Calculation of \(V(a)\)
* 5.3 Calculation of \(g(a)\)
* 6 Discussion
* A Axion-Euler-Heisenberg
* B Kaluza-Klein calculation
* B.1 The axion potential
* B.2 The effective axion-photon coupling
## 1 Introduction
The axion-Maxwell Lagrangian describes the low-energy physics of one of the most compelling new physics candidates, the axion, and its experimentally important coupling to photons. The discovery of the axion-photon interaction will not just be a discovery of a new particle, but can provide deep insights into the structure of the standard model. The QCD axion elegantly explains the non-observation of CP violation in the strong sector [1; 2; 3]. Axions can solve the cosmological puzzle of dark matter [4; 5; 6] and may appear as dark energy [7; 8]. The axion-photon coupling can provide access to the fundamental unit of electric charge [9; 10; 11; 12] and test simple models of Grand Unification [13]. Axions have a strong interplay with ideas in quantum gravity [14; 15] and string theory [16; 17; 18].
A large part of this wealth of information derives from the special nature of the axion-photon coupling and the associated symmetries and redundancies. In this work, we derive the general form of this coupling that is ideally suited to study the quantization of the axion-photon coupling, the physics of axion domain walls and strings and the symmetry structure in axion-Maxwell theory.
We argue that the general low-energy axion-Maxwell Lagrangian takes the form,
\[\mathcal{L}_{\rm axMax}=-\frac{1}{4e^{2}}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}F_{a}^{ 2}\partial_{\mu}a\partial^{\mu}a-V(a)+\frac{g(a)}{16\pi^{2}}F_{\mu\nu}\widetilde {F}^{\mu\nu}\,. \tag{1}\]
For convenience we have chosen a basis where \(F_{a}\), the fundamental period of the axion, as well as the electromagnetic gauge coupling \(e\) are included in the kinetic term. The function \(g(a)\) is a monodromic function, defined by the property,
\[g(a+2\pi)=g(a)+2\pi n\,. \tag{2}\]
The integer \(n\) is the monodromic charge of the function \(g(a)\). This property of the monodromic function arises from the discrete gauge symmetry of the axion, \(a\to a+2\pi\), under which the path integral weight, \(e^{iS}\), is required to be invariant.
It is an extremely important fact that the monodromy of \(g(a)\) does not imply that the perturbative coupling of the axion to photons around the CP conserving point \(a=0\) is quantized. Indeed, the coupling for canonically normalized fields,
\[g_{a\gamma\gamma}=\frac{\alpha_{\rm em}}{\pi F_{a}}g^{\prime}(a)|_{a=0}, \tag{3}\]
which can be an arbitrary number for a non-linear monodromic function \(g(a)\).
This resolves a small puzzle in the QCD axion coupling to photons, as was noted in [9] and further discussed in [10]. On one hand, we usually justify the non-quantized couplings of the axion by invoking the mixing with the pion. On the other hand, for all values of the axion the pion remains heavy and can stay integrated out, leaving an apparent non-monodromic function. The resolution to the non-quantization of the coupling therefore should appear in the low-energy axion-Maxwell theory without needing to invoke the pion. Indeed, this is achieved by a monodromic non-linear function \(g(a)\).
The form of the coupling \(g(a)\) generated by the anomaly between \(U(1)_{\sf PQ}\) and \(U(1)_{\rm em}^{2}\) is \(g(a)=na\). This form of \(g(a)\) is protected by the continuous shift symmetry of the axion, which also protects the axion from getting a mass. Both the potential \(V(a)\) and a non-linear \(g(a)\) are therefore spurions for the axion continuous shift symmetry breaking [9; 10]. This gives a precise sense in which the deviation from quantization of axion couplings and the generation of a mass are linked. Thus, while the monodromy of \(g(a)\) follows from topology, the special case of \(g(a)=na\) additionaly requires the presence of a continuous global shift symmetry.
In general we expect the size of the two spurions for the same symmetry to be commensurate. If the axion-photon coupling is nearly quantized, then we can express the degree of non-quantization as
\[g(a)-na=zf(a), \tag{4}\]
with \(z\ll 1\) and \(f(a)\) an \(O(1)\) periodic function. The estimate for the mass of the axion is
\[m^{2}\sim z\frac{\Lambda^{4}}{F_{a}^{2}}\,. \tag{5}\]
where \(\Lambda\) is the UV cutoff of the effective theory consistent with the coupling \(g(a)\) (e.g. for the QCD axion \(\Lambda\simeq\Lambda_{\rm QCD}\)). We emphasize that this is a heuristic estimate and the actual correlation may be different in specific examples. However this correlation highlights the point that if there is an axion that is parametrically lighter than its naively expected mass, that also corresponds to a coupling to photons that is very nearly an integer. Similarly, if an axion coupled to photons picks up a mass it generically also picks up a non-linear \(g(a)\) coupling to photons [10].
The general periodic function \(f(a)\) in equation (4) can be expanded in Fourier modes. In some cases only the first few terms in the expansion dominate. This is simply the expected contribution from axion-dependent perturbative corrections to non-topological quantities like \(\alpha_{\rm em}\). However, in many cases, including the case of the QCD axion, the final form of \(g(a)\) requires the sum over the entire Fourier tower, and it is interesting that a closed form for \(g(a)\) can be derived.
The functional coupling \(g(a)\) elucidates many interesting physics points. As mentioned above, it captures the correct monodromy in the axion-Maxwell Lagrangian when all other fields can be integrated out for all values of the axion. In cases where this is not possible (e.g. when some particles become light at some value of the axion field) \(g(a)\) also captures fast rearrangement of degrees of freedom through its singularities at isolated points. This correlates with cusps and singularities in the axion potential at the same point, and interesting dynamics induced on an axion domain wall.
Phenomenologically, the full non-linear form of \(g(a)\) is most relevant for scenarios where the axion traverses an \(O(1)\) fraction of its field range. This is certainly true for axion strings and domain walls, and sharp features in \(g(a)\) can affect axion emission from these objects. It can also be true for dense axion objects, like axion miniclusters or superradiant axion clouds surrounding rotating black holes.
The fact that in many simple models the whole Fourier tower needs to be summed up to get the relevant \(g(a)\) highlights another interesting point. For effective field theories involving compact fields the standard polynomial basis might not be the most convenient basis to work in.
This paper is organised as follows. In section 2, the general properties of \(g(a)\) and the symmetries of the axion-Maxwell Lagrangian in the presence of \(g(a)\) are discussed, together with the connection between the mass and non-quantization of \(g(a)\). Section 3 discusses the QCD axion and the corresponding axion-photon coupling. In section 4, the important case of perturbative shift symmetry breaking is introduced and shown to share many features of the QCD axion. The final section 5 is entirely devoted to axion potentials and photon couplings in the presence of a tower of states.
## 2 Axion-Maxwell Theory
In this section, we discuss the general properties and symmetries of the axion-Maxwell Lagrangian \(\mathcal{L}_{\rm axMax}\) in the presence of an effective axion-photon coupling \(g(a)\).
\[\mathcal{L}_{\rm axMax}=-\frac{1}{4e^{2}}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}F_{a }^{2}\partial_{\mu}a\partial^{\mu}a-V(a)+\frac{g(a)}{16\pi^{2}}F_{\mu\nu} \widetilde{F}^{\mu\nu}\,. \tag{5}\]
Here \(F_{a}\) is the fundamental period of the axion, and we have normalized the gauge field such that the electric charge of the electron is -1.
### Quantization
The function \(g(a)\) is a monodromic function, defined by the property,
\[g(a+2\pi)=g(a)+2\pi n\,. \tag{6}\]
Here \(n\) is the monodromic charge of the function \(g(a)\), which is usually taken to be an integer in order for the path integral weight \(e^{iS}\) to be invariant under the identification \(a\equiv a+2\pi\).
To be more precise, the quantization of the monodromy depends on the global structure of the gauge group [19; 20]. If the smallest allowed representation has physical electric charge \(eq\), then the electromagnetic instanton number \(I=\frac{1}{16\pi^{2}}\int F\widetilde{F}\) is valued in \(\frac{\mathbb{Z}}{q^{2}}\). In such a model, the monodromic
charge \(n\) can take values in \(q^{2}\mathbb{Z}\). Correspondingly, colourless magnetic monopoles can have a physical minimum magnetic charge \(q_{m}=\frac{2\pi}{eq}\) by Dirac quantization.
The quantization of the monodromic charge \(n\) can also be shown by several connected topological arguments similar to Dirac's argument for quantization of electric charge in \(U(1)\) gauge theory. In a theory with a \(U(1)\) gauge field and an axion discrete gauge shift symmetry, both magnetic monopoles and axion strings exist as twisted sectors. In quantum field theory the cores of these objects may be singular, but in presence of gravity these singularities will be behind a horizon.
Consider a magnetic monopole with minimum magnetic charge \(q_{m}=\frac{2\pi}{eq}\) scattering on a trajectory through an axion string loop [21]. For the purposes of this thought experiment, it does not matter if the axion has a mass or not. Along the trajectory, the monopole sees a monodromy of the axion as \(g(\theta+2\pi)-g(\theta)\). Through the Witten effect, this implies that the monopole electric charge shifts by \(\Delta q_{e}=-\frac{q_{m}e^{2}}{4\pi^{2}}(g(\theta+2\pi)-g(\theta))=-\frac{ne}{q}\). By Dirac-Zwanziger quantization of dyons \(\Delta q_{e}\in eq\mathbb{Z}\) and therefore the monodromic charge \(n\in q^{2}\mathbb{Z}\).
There is another argument for the quantization in this example that does not directly rely on the Witten effect. As the monopole traverses the axion string loop, electric charge is carried from the magnetic monopole to the axion string by the Goldstone-Wilczek current [22], which is a purely bulk effect following from the axion-photon coupling. The current can be integrated to calculate the total charge exchanged between the monopole and the axion string (see e.g. [12]). The charge carriers on the axion strings are zero modes of bulk fields (e.g. PQ fermions), and therefore are also quantized.
In the remainder of this paper we shall focus on theories in which the smallest unit of charge is that of the electron such that \(g(a)\) has integer monodromy, but we shall briefly return to this issue when we discuss the QCD axion and the allowed standard model representations. Results for other minimal charges can be recovered by the appropriate multiplication.
### Symmetries of \(\mathcal{L}_{\rm axMax}\)
We begin by reviewing the symmetry structure of this theory (see e.g. [23; 24; 25; 26] for further details) in the limit of a massless axion and \(g(a)=a\).
In fact, in the even simpler limit where the axion coupling to the photon is turned off, we have the following symmetry structure. The Maxwell theory is well-known to have two global one-form symmetries, the electric \(U(1)_{e}^{(1)}\) and magnetic \(U(1)_{m}^{(1)}\), under which Wilson lines and 't Hooft lines transform respectively [27]. These symmetries act on the photon and the dual photon by a shift by a closed one-form,
\[A\xrightarrow{U(1)_{e}^{(1)}}A+c^{(1)},\quad dc^{(1)}=0\,, \tag{3}\] \[\tilde{A}\xrightarrow{U(1)_{m}^{(1)}}\tilde{A}+\tilde{c}^{(1)},\quad d\tilde{c}^{(1)}=0\,. \tag{4}\]
Equivalently, non-contractible Wilson and 't Hooft loops transform by a \(U(1)\) phase under the respective symmetries. In the absence of charged matter, these symmetries above are clearly symmetries of the Maxwell Lagrangian. In the real world we know the electric symmetry to be emergent below the electron mass, and it is strongly believed that the magnetic symmetry will also be broken completely [28].
The massless axion Lagrangian has an ordinary global \(U(1)_{\mathsf{PQ}}^{(0)}\) symmetry, the usual continuous shift symmetry of the massless axion,
\[a\xrightarrow{U(1)_{\mathsf{PQ}}^{(0)}}a+c^{(0)},\quad dc^{(0)} =0\,, \tag{5}\]
as well as a two-form symmetry \(U(1)^{(2)}\) which measures the axion winding number, under which axion string worldsheets are charged. This symmetry is a shift symmetry of the dual two-form field \(B\),
\[B\xrightarrow{U(1)^{(2)}}B+c^{(2)},\quad dc^{(2)}=0\,. \tag{6}\]
or equivalently a phase rotation of a non-contractible axion string worldsheet. The symmetry structure at this level is thus,
\[U(1)^{(0)}_{\sf PQ}\times U(1)^{(2)}\times U(1)^{(1)}_{e}\times U(1)^{(1)}_{m}\,. \tag{7}\]
A linear axion-photon coupling in the Lagrangian introduces mixed anomalies between the \(U(1)^{(0)}_{\sf PQ}\) and the one-form symmetries of Maxwell theory, as well as an ABJ anomaly [29; 30] for the axion shift symmetry,
\[\partial_{\mu}j^{\mu}_{\sf PQ}=\frac{1}{16\pi^{2}}F_{\mu\nu}\widetilde{F}^{\mu \nu}\,. \tag{8}\]
Therefore, from this point of view it is somewhat mysterious which symmetry is formally responsible for protecting the axion from getting a mass. One argument could be that if we are working on \(\mathbb{R}^{4}\), then there are no Abelian instantons and the RHS does not produce any physical effect. In particular, we can define the PQ charge on a fixed time slice,
\[{\sf Q}=\int d^{3}x\left(j^{0}_{\sf PQ}-\frac{1}{8\pi^{2}}\epsilon^{ijk}A_{i} \partial_{j}A_{k}\right)\,. \tag{9}\]
The charge \({\sf Q}\) defined on \(\mathbb{R}^{3}\) is gauge invariant, so it looks like we can rescue the shift symmetry of the axion if we are content to work on \(\mathbb{R}^{4}\).
However, this argument is a bit too quick. We cannot use the same argument for potential UV contributions to the axion mass. The topology of spacetime seen by the Abelian gauge field can change in the UV, both in extra-dimensional theories and 4D theories, a simple example being the 't Hooft-Polyakov monopole. It will be much more useful to find a symmetry and associated spurions that parametrize both UV and IR mass generation effects on general manifolds. This is especially valuable for the case of axions where we expect at least quantum gravitational effects to generate a mass.
On a general manifold there does not exist a gauge-invariant charge \({\sf Q}\). This is most easily seen if our spatial slice is \(S^{1}\times S^{2}\) with magnetic flux \(m=\frac{1}{2\pi}\int_{S^{2}}F_{23}\) on the \(S^{2}\). Performing a large gauge transformation \(A_{1}\to A_{1}+2\pi\) on the compact \(S^{1}\) shifts the charge \(Q\) by \(m\). The operator implementing the \(U(1)_{\sf Q}\) symmetry, \(\exp(i\alpha{\sf Q})\) with \(\alpha\in[0,2\pi)\) is not gauge-invariant under this transformation. A more modern viewpoint is that the introduction of topologically non-trivial backgrounds can be captured by turning \(U(1)^{(0)}_{\sf PQ}\) into an unbroken non-invertible symmetry [23; 26; 31]. In the cases that such background fluxes or instantons become dynamical, the symmetry is explicitly broken and the axion is expected to get a mass. This is the case when the axion in addition to the photon is also coupled to a non-Abelian gauge theory or when magnetic monopoles are dynamical.
In the presence of a general effective axion-photon coupling \(g(a)\), the conservation equation (8) of \(U(1)^{(0)}_{\sf PQ}\) is modified to
\[\partial_{\mu}j^{\mu}_{\sf PQ}=\frac{g^{\prime}(a)}{16\pi^{2}}F_{\mu\nu} \widetilde{F}^{\mu\nu} \tag{10}\]
and no such conserved PQ charge (equation (9)) exists unless \(g^{\prime}(a)\) is an integer. General forms of \(g^{\prime}(a)\) therefore explicitly break \(U(1)^{(0)}_{\sf PQ}\). It is this sense in which \(g(a)\) can parametrize both the UV dynamical topology changes as well as other dynamical sources of \(U(1)^{(0)}_{\sf PQ}\) breaking. The non-linearity of \(g(a)\) therefore acts as a spurion for the \(U(1)^{(0)}_{\sf PQ}\) shift symmetry.
### General properties of \(g(a)\)
We have seen that the general non-linear function \(g(a)\) breaks the (non-invertible) axion shift symmetry. Therefore, we expect a general connection between \(g(a)\) and the potential for the axion \(V(a)\). Indeed, \(g^{\prime}(a)\notin\mathbb{Z}\) implies a mass for the axion. Similarly, a potential for the axion \(V(a)\) and a quantized axion-photon coupling will flow to a non-quantized \(g(a)\) with the same monodromy.
In the examples considered in this paper, the connection between the potential \(V(a)\) and axion-photon coupling \(g(a)\) is best provided by the repackaging of a real parameter \(z\) together with the axion \(a\) into a complex quantity,
\[\mathcal{Z}=ze^{ia}\,. \tag{11}\]
The real and imaginary parts of powers of \(\mathcal{Z}\) respectively contribute to the CP even potential \(V(a)\) and CP odd effective axion-photon coupling \(g(a)\), providing the connection between the two. The duality \(\mathcal{Z}\to\frac{1}{\mathcal{Z}}\) leaves both the potential and \(g(a)\) invariant. Such a repackaging of the parameters in the case of instanton contributions to the axion potential \(z=e^{-S_{\rm inst}}\) was already noted in contributions to the superpotential in [32].
Common to these examples is a prototypical axion-photon coupling \(g(a)\) that can be expressed as a contour integral,
\[g(a)={\rm Im}\int_{C}\frac{d\mathcal{Z}}{\mathcal{Z}}\frac{1-\mathcal{Z}}{1+ \mathcal{Z}}\,, \tag{12}\]
where the contour \(C\) is an arc at radius \(z\) of angular size \(a\). The monodromic charge \(n\) can be extracted from equation (12) by the poles of the integrand that are included in the closed contour at radius \(z\). The poles for this particular function are located at \(\mathcal{Z}=0\) and \(\mathcal{Z}=-1\) with respective residues \(1\) and \(-2\), giving a monodromic charge that is \(n={\rm sign}(1-z)\).
The effective axion-photon coupling can be extracted from equation (12) by performing the contour integral over the arc \(C\),
\[g(a)=2\arctan\left(\frac{1-z}{1+z}\tan\frac{a}{2}\right)+2\pi{\rm sign}(1-z) \Theta(a-\pi)\,, \tag{13}\]
where \(\Theta\) is the Heaviside function. The full profile of \(g(a)\) is plotted for several relevant parameter values in figure 1. The function \(g(a)\) can be decomposed into a monodromic part \(na\) and a periodic part, the latter captures the explicit breaking of the continuous axion shift symmetry.
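As an illustrative numerical check (our own sketch, not part of the original derivation), the contour representation in equation (12) can be integrated directly along the arc \(\mathcal{Z}=ze^{i\theta}\) and compared with the closed form in equation (13), including the monodromy \(2\pi\,{\rm sign}(1-z)\):

```python
# Sketch: numerically evaluate the contour form of g(a) and compare with the
# closed form; on the arc Z = z*exp(i*theta), dZ/Z = i dtheta, so the imaginary
# part reduces to a real integral of Re[(1 - Z)/(1 + Z)].
import numpy as np
from scipy.integrate import quad

def g_contour(a, z):
    integrand = lambda t: ((1 - z*np.exp(1j*t)) / (1 + z*np.exp(1j*t))).real
    return quad(integrand, 0.0, a)[0]

def g_closed(a, z):
    return (2*np.arctan((1 - z)/(1 + z)*np.tan(a/2))
            + 2*np.pi*np.sign(1 - z)*np.heaviside(a - np.pi, 1.0))

for z in (0.5, 2.0):
    for a in (1.0, 2.0, 4.0):                 # values below and above a = pi
        assert abs(g_contour(a, z) - g_closed(a, z)) < 1e-6
    # monodromy: g(a + 2*pi) - g(a) = 2*pi*sign(1 - z)
    a = 1.0
    assert abs(g_contour(a + 2*np.pi, z) - g_contour(a, z)
               - 2*np.pi*np.sign(1 - z)) < 1e-6
```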
The feature most relevant to current experiments is the slope of the effective axion-photon coupling around the minimum \(a=0\) of the potential,
\[g^{\prime}(0)=\frac{1-z}{1+z}\,. \tag{14}\]
Under the transformation \(z\to\frac{1}{z}\), the slope and monodromy of \(g(a)\) swap signs, which is a reflection of the \(\mathcal{Z}\to 1/\mathcal{Z}\) duality mentioned above.
There are three values of the real parameter \(z\) that are interesting. At the points \(z=\{0,\infty\}\), \(g^{\prime}(a)\in\mathbb{Z}\) and the axion shift symmetry is restored. In our examples, the axion potential also vanishes for these values of \(z\). The function \(g(a)\) does not have a well-defined limit as \(z\to 1\), it changes discontinuously across \(z=1\). In this limit, \(g^{\prime}(0)=k\in\mathbb{Z}\), but the axion shift symmetry is not restored, and the monodromy is not equal to \(k\).
Common to our examples will be the restoration of a \(\mathbb{Z}_{2}\) discrete symmetry at \(z=1\), which has an anomaly with electromagnetism. The anomaly is captured in the low-energy effective theory by \(g(a)\) changing discontinuously across \(z=1\). Furthermore, the different profiles \(\lim\limits_{z\to 1^{-}}g(a)\) and \(\lim\limits_{z\to 1^{+}}g(a)\)
are both discontinuous at the point \(a=\pi\), describing a fast rearrangement of degrees of freedom and restoration of a \(U(1)\) symmetry. This discontinuity in \(g(a)\) at \(a=\pi\) is reproduced at the same point by a singularity in the potential \(V(a)\) or its derivatives.
## 3 The QCD axion
We study QCD in the two flavour approximation \(N_{f}=2\) coupled to the axion with Lagrangian
\[\mathcal{L} =\frac{1}{2}F_{a}^{2}(\partial a)^{2}-\frac{1}{4e^{2}}F_{\mu\nu}F ^{\mu\nu}-\frac{1}{2g_{s}^{2}}\text{Tr}\left(G_{\mu\nu}G^{\mu\nu}\right)\] \[\quad+\sum_{i=1}^{2}\overline{\Psi}_{i}\left(i\not{D}-m_{i} \right)\Psi_{i}+\frac{Na}{8\pi^{2}}\text{Tr}\left(G_{\mu\nu}\widetilde{G}^{ \mu\nu}\right)+\frac{Ea}{16\pi^{2}}F_{\mu\nu}\widetilde{F}^{\mu\nu}, \tag{10}\]
where \(F_{a}\) is the fundamental period of the axion, \(E\) is the primordial anomaly of \(U(1)_{\sf PQ}\) with \(U(1)_{\rm em}\) and \(N\in\frac{1}{2}\mathbb{Z}\) is the anomaly coefficient of \(U(1)_{\sf PQ}\) with QCD.1
Footnote 1: Here we have used the unfortunate standard convention making \(N\) in general half-integer.
The condition on \(E\) in order for the axion to have \(2\pi\) periodicity depends on the chosen subgroup \(\Gamma=1,\mathbb{Z}_{2},\mathbb{Z}_{3}\) or \(\mathbb{Z}_{6}\) of the standard model gauge group \(SU(3)\times SU(2)\times U(1)/\Gamma\) [20]. In this paper we take \(\Gamma=\mathbb{Z}_{6}\) in order for the electron to have the minimum quantum of electric charge. With this choice, a sufficient condition for axion \(2\pi\) periodicity is
\[E-\frac{2N}{3}\in\mathbb{Z}\,. \tag{11}\]
The axion-gluon coupling explicitly breaks the PQ shift symmetry of the axion. We therefore expect the low-energy effective axion theory to have a potential \(V(a)\) and generate an effective axion-photon coupling \(g(a)\). The symmetries, phases and domain walls of this theory have been well-studied using the chiral Lagrangian [33; 34; 35; 36] and anomaly matching [37; 38].
Figure 1: The effective axion-photon coupling \(g(a)\) for the prototypical example in equation (14) at values \(z=\{0.01,0.5,0.99,1.01,2,100\}\) showing that \(g(a)\) jumps across \(z=1\) and further becomes discontinuous as \(z\to 1^{\pm}\) at \(a=\pi\).
The mass for the axion and its coupling to photons have been calculated at high precision [39],
\[m_{a}^{2}=\frac{m_{\pi}^{2}f_{\pi}^{2}N^{2}}{F_{a}^{2}}\left(\frac{4}{z+\frac{1}{ z}+2}+\ldots\right),\qquad g_{a\gamma\gamma}=\frac{\alpha_{\rm em}}{\pi F_{a}} \left(E-\frac{5}{3}N-\frac{1-z}{1+z}N+\ldots\right), \tag{11}\]
where \(z=\frac{m_{u}}{m_{d}}\) measures the isospin breaking of \(SU(2)_{V}\) and \(\ldots\) denote higher order terms in the chiral Lagrangian. Note that as is conventional we have written the coupling \(g_{a\gamma\gamma}\) in the canonical basis for both axions and photons.
The usual explanation for the irrational contribution proportional to \(\frac{1-z}{1+z}\) is that it arises from the mixing with the pion. This is certainly true, but raises a minor puzzle. In the effective theory, we can integrate out the pion and for all values of the axion, the pion degree of freedom is heavy and the EFT is valid. Therefore, the quantization of the monodromy of the axion-photon coupling should be visible in the effective theory.
The resolution to this puzzle has been discussed in [9] and [10] and arises exactly through the monodromic function \(g(a)\). As we will show below, the axion coupling to photons can be packaged in this functional form, such that \(g(a)\) has integer monodromy under the axion discrete gauge symmetry, but \(g^{\prime}(0)\) can be irrational. We review the calculation of the axion potential in the Chiral Lagrangian and derive the form of \(g(a)\) relevant for the QCD axion below.
The effective Lagrangian for the photon, the QCD axion \(a\) and the pion \(\pi^{0}\), can be written as,
\[\mathcal{L}=\frac{F_{a}^{2}}{2}\ (\partial a)^{2}+\frac{f_{\pi}^{2}}{2}\ ( \partial\pi^{0})^{2}-V(a,\pi^{0})+\left(E-\frac{5}{3}N\right)\frac{a}{16\pi^{2}} F\tilde{F}+\frac{\pi^{0}}{16\pi^{2}}F\tilde{F}\,, \tag{12}\]
with a potential \(V(a,\pi^{0})\) given by
\[V(a,\pi^{0})=f_{\pi}^{2}m_{\pi}^{2}\left(1-\cos\frac{2Na}{2}\cos\pi^{0}+\frac{ 1-z}{1+z}\sin\frac{2Na}{2}\sin\pi^{0}\right)\,. \tag{13}\]
In this basis, the two discrete gauge symmetries involving the axion and the pion are implemented by \((a,\pi^{0})\to(a+2\pi,\pi^{0}+2N\pi)\) and \(\pi^{0}\to\pi^{0}+2\pi\). The potential has characteristic eigenvector directions which reverse roles when the sign of \(1-z\) flips, which will be important to our discussion throughout.
We would like to study the low-energy limit of this theory. In the limit that \(f_{\pi}\ll F_{a}\), the pion is much more massive than the axion and can be integrated out. If this can be done consistently at every value of the axion \(a\), then axion domain walls are completely describable within the effective field theory. There can in general be additional domain walls (perhaps metastable or unstable) that also involve rearrangements of heavy degrees of freedom or new massless states appearing on the domain walls. These domain walls are not described completely within the EFT. We show that the function \(g(a)\) captures this exact behaviour.
An axion domain wall \(a\to a+2\pi\) in this particular basis of the potential (equation (13)) requires a pion domain wall \(\pi^{0}\to\pi^{0}+n\pi\) with \(n\in 2N\mathbb{Z}\). For \(0\leq z<1\), the most energetically favourable domain wall is \(\pi^{0}\to\pi^{0}-2N\pi\) with the \(\pi^{0}\to\pi^{0}+2N\pi\) domain wall having an additional tension \(\Delta T\propto\sqrt{|\frac{1-z}{1+z}|}\). For \(1<z<\infty\), the roles of the two domain walls are reversed.
For any \(z\geq 0\), to first order in \(\frac{f_{\pi}}{F_{a}}\), we can integrate out the pion using its equation of motion,
\[\frac{\partial V}{\partial\pi^{0}}=0\implies\pi^{0}=-\arctan\left(\frac{1-z }{1+z}\tan\frac{2Na}{2}\right)-\pi{\rm sign}(1-z)\sum_{k=1}^{2N}\Theta\left(a -(2k-1)\frac{\pi}{2N}\right)\,. \tag{14}\]
Here the \(\Theta\) Heaviside-function ensures that the axion domain wall is smooth and heavy pion degrees of freedom are not excited.
This yields the effective Lagrangian as,
\[\mathcal{L}=\frac{F_{a}^{2}}{2}(\partial a)^{2}-V(a)+\frac{g(a)}{16\pi^{2}}F_{\mu \nu}\tilde{F}^{\mu\nu}, \tag{10}\]
with
\[V(a) =-f_{\pi}^{2}m_{\pi}^{2}\sqrt{1-\frac{4z}{\left(1+z\right)^{2}} \sin^{2}\left(\frac{2Na}{2}\right)}\,, \tag{11}\] \[g(a) =Ea-\frac{5}{3}Na-\arctan\left(\frac{1-z}{1+z}\tan\frac{2Na}{2} \right)-\pi\text{sign}(1-z)\sum_{k=1}^{2N}\Theta\left(a-(2k-1)\frac{\pi}{2N} \right)\,. \tag{12}\]
We see the prototypical example of the function \(g(a)\) - it has a monodromy under axion shift symmetry given by
\[g(a+2\pi)=g(a)+2\pi\left(E-\frac{2N}{3}-N\left(1+\text{sign}(1-z)\right)\right)\,. \tag{13}\]
The monodromic charge \(\left(E-\frac{2N}{3}-N\left(1+\text{sign}(1-z)\right)\right)\in\mathbb{Z}\) by equation (10). For the specific choice \(\frac{E}{N}=\frac{8}{3}\), the monodromy vanishes when \(0\leq z<1\) and is \(2N\) when \(z>1\).
The slope around the axion minimum for a generic \(z\) is irrational and the axion-photon coupling at this point is given by
\[g_{a\gamma\gamma}=\frac{\alpha_{\text{em}}}{\pi F_{a}}g^{\prime}(0)=\frac{ \alpha_{\text{em}}}{\pi F_{a}}\left(E-\frac{5}{3}N-\frac{1-z}{1+z}N\right)\,. \tag{14}\]
Note that both the potential \(V(a)\) and \(g(a)\) depend on the axion-pion mixing parameter \(z\) and that \(g(a)\) is quantized exactly in the limit \(z\to\{0,\infty\}\) when the mass vanishes. Under the transformation \(z\leftrightarrow\frac{1}{z}\), the effective potential is left unaltered and the monodromy of \(g(a)\) changes by \(2N\).
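As a quick consistency check (our own numerical sketch, with illustrative values of \(N\) and \(z\)), one can verify that minimizing the two-field potential \(V(a,\pi^{0})\) above over \(\pi^{0}\) reproduces the effective potential \(V(a)\), up to the \(a\)-independent constant \(f_{\pi}^{2}m_{\pi}^{2}\) that the effective expression drops:

```python
# Sketch: minimize V(a, pi0) over pi0 at fixed a and compare with the closed-form
# effective potential (shifted by the constant f_pi^2 m_pi^2 that it drops).
import numpy as np
from scipy.optimize import minimize_scalar

f2m2, N, z = 1.0, 1, 0.48            # units f_pi^2 m_pi^2 = 1; illustrative N, z
eps = (1 - z)/(1 + z)

def V_two_field(a, pi0):
    return f2m2*(1 - np.cos(N*a)*np.cos(pi0) + eps*np.sin(N*a)*np.sin(pi0))

def V_eff_plus_const(a):             # -f^2 m^2 sqrt(1 - 4z/(1+z)^2 sin^2(Na)) + f^2 m^2
    return f2m2*(1 - np.sqrt(1 - 4*z/(1 + z)**2*np.sin(N*a)**2))

for a in np.linspace(0.1, 2*np.pi - 0.1, 7):
    res = minimize_scalar(lambda p: V_two_field(a, p),
                          bounds=(-2*np.pi, 2*np.pi), method="bounded")
    assert abs(res.fun - V_eff_plus_const(a)) < 1e-6
```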
In the isospin restoring limit \(z\to 1\), the potential (equation (11)) has an additional \(\mathbb{Z}_{2}\subset SU(2)_{V}\) pion parity \((-1)^{N_{\pi}}\) symmetry that sends \(\pi^{0}\to-\pi^{0}\) and the tension difference between the domain walls \(\pi^{0}\to\pi^{0}\pm 2N\pi\) goes to zero. The profiles and monodromies for \(\lim\limits_{z\to 1-}g(a)\) and \(\lim\limits_{z\to 1+}g(a)\) differ as this \(\mathbb{Z}_{2}\) is broken by the Wess-Zumino-Witten term. Additionally, in the limit \(z\to 1\), the pion shift symmetry is restored at \(a=\pi\) and the pion becomes massless. The potential \(V(a)\) has a corresponding cusp at this point and \(g(a)\) is discontinuous due to the massless pion jump. This cusp and discontinuity reflect an accidental restoration of the pion shift symmetry at \(a=\frac{\pi}{2N}\) for \(N_{f}=2\) at this order in the chiral Lagrangian and are resolved at higher orders [40].
In the next few sections, we shall see several physical systems with the same prototypical \(g(a)\) but different potentials \(V(a)\).
## 4 Perturbative PQ breaking
It is very instructive to compare the breaking of the axion shift symmetry by QCD effects to a perturbative form of shift symmetry breaking. The simplest such model is a massive charged Dirac fermion \(\Psi\) coupled to a \(U(1)\) gauge field and an axion \(a\) coupled through a chiral mass term, resulting in a Lagrangian of the form
\[\mathcal{L}=i\overline{\Psi}\not{D}\Psi-f\overline{\Psi}e^{ia\gamma_{5}}\Psi -m_{\Psi}\overline{\Psi}\Psi\,. \tag{15}\]
This Lagrangian has an axion shift symmetry \(a\to a+c\) and \(\Psi\to e^{-i\frac{c}{2}\gamma_{5}}\Psi\), which is explicitly broken by the mass term \(m_{\Psi}\). We therefore expect to generate both a \(g(a)\) and \(V(a)\) and the low-energy effective Lagrangian should be of the form equation (1). We shall see that this simple model captures a lot of the features of the QCD axion.
### The effective potential
In order to calculate \(g(a)\) and \(V(a)\) in this simple model, we compute the effective action by integrating out the massive fermion,
\[iS_{\rm eff}={\rm Tr}\ln\left[i\left(i\not{D}-m_{\Psi}-fe^{ia\gamma_{5}}\right) \right]. \tag{4.2}\]
This yields an effective potential for a constant axion \(a\) as
\[V(a)=2i{\rm Tr}\ln\left[\partial^{2}+m_{\Psi}^{2}+2m_{\Psi}f\cos a+f^{2}\right]. \tag{4.3}\]
This is simply the Coleman-Weinberg potential following from a particle with an effective mass
\[m(a)^{2}=m_{\Psi}^{2}+2m_{\Psi}f\cos a+f^{2}=(m_{\Psi}+f)^{2}\left(1-\frac{4}{ \frac{1}{z}+z+2}\sin^{2}\left(\frac{a}{2}\right)\right)\,, \tag{4.4}\]
where we have defined the parameter
\[z=\frac{f}{m_{\Psi}}\,. \tag{4.5}\]
The Coleman-Weinberg potential \(V(a)\) is
\[V(a)=-c_{1}m(a)^{2}-\frac{m(a)^{4}}{16\pi^{2}}\ln\frac{m(a)^{2}}{c_{2}^{2}}\,, \tag{4.6}\]
where \(c_{1}\) and \(c_{2}\) are renormalization scheme-dependent quantities.
This potential shares many features of the QCD axion. For instance, the potential becomes axion independent when \(z\to\{0,\infty\}\) as this is when either \(f\) or \(m_{\Psi}\) are zero, the latter being the shift symmetry restoring limit in which the axion can be rotated into a \(F\widetilde{F}\) term.
In the limit \(z=1\), the effective mass of the particle \(m(a)\) vanishes at the chiral symmetry restoring point \(a=\pi\) and should not be integrated out. This is reflected by a singularity in \(V^{\prime\prime\prime\prime}\) at \(a=\pi\).
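A minimal numerical illustration of two of the statements above (with arbitrarily chosen scheme constants \(c_{1},c_{2}\) and units \(m_{\Psi}=1\); the checked statements do not depend on these choices):

```python
# Sketch: (i) the a-dependence of the Coleman-Weinberg potential disappears as
# z -> 0, and (ii) at z = 1 the effective fermion mass m(a) vanishes at a = pi.
import numpy as np

c1, c2, m_psi = 1.0, 1.0, 1.0        # arbitrary scheme constants and mass unit

def m2(a, z):
    f = z*m_psi
    return m_psi**2 + 2*m_psi*f*np.cos(a) + f**2                            # eq. (4.4)

def V(a, z):
    return -c1*m2(a, z) - m2(a, z)**2/(16*np.pi**2)*np.log(m2(a, z)/c2**2)  # eq. (4.6)

a = np.linspace(0, 2*np.pi, 400)
amplitude = lambda z: np.ptp(V(a, z))                 # peak-to-peak a-dependence

assert amplitude(1e-3) < 1e-2*amplitude(0.5)          # shift symmetry restored as z -> 0
assert abs(m2(np.pi, 1.0)) < 1e-12                    # massless point at z = 1, a = pi
```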
### The effective axion-photon coupling
Just as with the QCD axion, one expects to generate an effective axion-photon coupling \(g(a)\) from shift symmetry breaking. Such terms can be calculated by first taking a derivative of the effective action (equation (4.2)) with respect to \(a\) to obtain
\[\frac{\delta S_{\rm eff}}{\delta a}=-{\rm Tr}\left(\frac{\gamma_{5}fe^{ia \gamma_{5}}}{i\not{D}-m_{\Psi}-fe^{ia\gamma_{5}}}\right)={\rm Tr}\left(\frac{ \gamma_{5}f(ie^{ia\gamma_{5}}\not{D}+m_{\Psi}e^{ia\gamma_{5}}+f)}{\not{D}^{2} +m(a)^{2}}\right)\,. \tag{4.7}\]
The trace is both over spinor indices as well as the implicit momentum integrals and this time we keep both a constant axion \(a\) and a constant field strength \(F_{\mu\nu}\), such that \(\left(\not{D}\right)^{2}=D^{2}-\frac{1}{2}\sigma_{\mu\nu}F^{\mu\nu}\) with \(\sigma_{\mu\nu}=\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}]\). By matching equation (4.7) with the same derivative of the low-energy effective axion-Maxwell action (Eq. (2.1)), we find \(g^{\prime}(a)\).
Since \(g^{\prime}(a)\) is CP even, we expand equation (4.7) to second order in \(F\) and keep only CP odd terms as
\[\frac{\delta S_{\rm eff}}{\delta a}|_{\rm odd}=\frac{1}{4}{\rm Tr}\left(\gamma _{5}\frac{f^{2}+m_{\Psi}f\cos a}{\left(D^{2}+m(a)^{2}\right)^{3}}\sigma^{\mu \nu}F_{\mu\nu}\sigma^{\alpha\beta}F_{\alpha\beta}\right)\,. \tag{4.8}\]
The trace can now be reduced by using the gamma matrix identity \({\rm Tr}\left(\gamma^{5}\sigma_{\mu\nu}\sigma_{\alpha\beta}\right)=-i4\epsilon _{\mu\nu\alpha\beta}\) and inserting a trace over four momenta yields
\[\frac{\delta S_{\rm eff}}{\delta a}|_{\rm odd}=2i\int\frac{d^{4}p}{(2\pi)^{4 }}\frac{f^{2}+m_{\Psi}f\cos a}{\left(p^{2}-m(a)^{2}\right)^{3}}F\widetilde{F}\,. \tag{4.9}\]
The momentum integrals are convergent and rewriting in terms of the order parameter \(z\) yields
\[\frac{\delta S_{\rm eff}}{\delta a}|_{\rm odd}=\frac{1}{16\pi^{2}}\left(\frac{z^{2 }+z\cos a}{1+2z\cos a+z^{2}}\right)F\widetilde{F}\,. \tag{4.10}\]
Comparing this result to the effective low-energy action of the axion (equation (2.1)) allows for the matching
\[g^{\prime}(a)=\frac{z^{2}+z\cos a}{1+2z\cos a+z^{2}}\,. \tag{4.11}\]
Integrating this function yields the effective low energy axion-photon coupling as
\[g(a)=\frac{1}{2}a-\arctan\left(\frac{1-z}{1+z}\tan\frac{a}{2}\right)-{\rm sign }(1-z)\pi\Theta(a-\pi)\,. \tag{4.12}\]
Similar to the potential, \(g(a)\) captures many features of the symmetries of the Lagrangian in its fully summed form. For instance, in the limit \(z=\infty\) or \(m_{\Psi}=0\), \(g(a)\) becomes \(aF\widetilde{F}\). When \(z=0\), the effective coupling to photons vanishes as \(f=0\). Thus in both cases \(g(a)\) becomes of the form \(\mathbb{Z}a\) when the axion becomes massless as predicted by general symmetry arguments.
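A short numerical cross-check (our own sketch) that the closed form above is an antiderivative of \(g^{\prime}(a)\) in equation (4.11), and that it reproduces the limits just quoted:

```python
# Sketch: compare a finite-difference derivative of g(a), eq. (4.12), with the
# closed-form g'(a) of eq. (4.11), and check the z -> 0 and z -> infinity limits.
import numpy as np

def gprime(a, z):                          # eq. (4.11)
    return (z**2 + z*np.cos(a))/(1 + 2*z*np.cos(a) + z**2)

def g(a, z):                               # eq. (4.12)
    return (a/2 - np.arctan((1 - z)/(1 + z)*np.tan(a/2))
            - np.sign(1 - z)*np.pi*np.heaviside(a - np.pi, 1.0))

a = np.linspace(0.1, 3.0, 50)              # stay away from the step at a = pi
for z in (0.3, 2.5):
    numerical = (g(a + 1e-6, z) - g(a - 1e-6, z))/2e-6
    assert np.allclose(numerical, gprime(a, z), atol=1e-5)

assert np.allclose(gprime(a, 1e-8), 0.0, atol=1e-7)   # z -> 0: coupling vanishes
assert np.allclose(gprime(a, 1e8), 1.0, atol=1e-7)    # z -> infinity: g'(a) -> 1
```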
Similar to the QCD axion, in the limit \(z\to 1\), there is an apparent restoration of a \(\mathbb{Z}_{2}\) symmetry that acts on the fields as
\[\Psi(t,x)\to\gamma^{0}\Psi(t,-x)\qquad A_{\mu}(t,x)\to(-1)^{\mu}A_{\mu}(t,-x) \qquad a(t,x)\to a(t,-x)\,, \tag{4.13}\]
where \((-1)^{\mu}=1\) if \(\mu=t\) and \(-1\) otherwise. This symmetry leaves the Lagrangian invariant in this limit up to the change of the Chern-Pontryagin density \(F\widetilde{F}\). Correspondingly, the profiles for \(\lim\limits_{z\to 1^{-}}g(a)\) and \(\lim\limits_{z\to 1^{+}}g(a)\) differ and the monodromies are respectively \(0\) and \(1\). Both profiles of \(g(a)\) also have a discontinuous jump at the chiral symmetry restoring point \(a=\pi\) where the fermion becomes massless.
## 5 Axions from extra dimensions and \(g(a)\)
A class of interesting axions is those that descend from gauge theories and higher form fields in extra dimensions. These are particularly motivated both due to the fact that they arise generically in string compactifications as well as due to high quality global symmetry that descends from a gauge symmetry in the bulk.
In these models, axion potentials arise from charged objects wrapping internal cycles, which appear as instantons in the 4D theory, see e.g. [41; 42; 43; 32; 44]. Alternatively, this potential can be thought of as arising from the axion dependence of a KK tower of states which undergoes spectral flow as axion \(a\to a+2\pi\). A similar effect arises from a tower of dyonic states in axion-Maxwell theory [45].
In this section we bridge the relation between these two sources for the axion potential through an instructive example. In doing so, we show that massive charged fermions with additional compact degrees of freedom coupled to the axion generate a similar axion potential \(V(a)\) and an effective axion-photon coupling \(g(a)\).
In appendix A, we reformulate the results of this section in terms of an effective worldline formalism of a charged massive 4D fermion with additional compact degrees of freedom coupled to the axion. In doing so, we derive the effective axion-Euler-Heisenberg Lagrangian to all orders in constant \(a\) and \(F\).
### 5D instantons
We consider a \(U(1)\) gauge theory with gauge field \(A\) in 5D Euclidean space (\(g_{\mu\nu}=\delta_{\mu\nu}\)) with a massive charged fermion \(\Psi\), with the fifth dimension \(y\) compactified on a circle of radius \(R\),
\[S=\int d^{4}x\int dy\,\left[-\frac{1}{4e^{2}}F_{MN}F^{MN}-\Psi^{\dagger}\left( \not{D}+m\right)\Psi\right]\,. \tag{5.1}\]
The axion is identified with a Wilson loop \(\int dyA\) around the compact extra dimension in almost axial gauge as
\[A_{5}(x,y)=\frac{a(x)}{2\pi R}\,. \tag{5.2}\]
In any theory with a compact dimension, the modes of the particle can be understood in terms of a tower of states (KK modes). In the present theory, this leads to a description of the 5D fermion as a tower of electrically charged massive 4D fermions with axion-dependent masses.
An alternative and more useful representation for our purposes is in terms of winding modes of the fermion around the compact dimension. Non-local loops of the fermion around the compact dimension appear as instanton effects in 4D, giving a mass to the axion. In such a formulation, the axion dependence of the theory can be put into a twisted boundary condition for the fermions [46],
\[\Psi(y+2\pi nR)\simeq e^{in(\pi-a)}\Psi(y)\,, \tag{5.3}\]
in which we have also given the fermion additional anti-periodic boundary conditions to align the minimum of the potential with \(a=0\).
The Green's function on the compactified space \(G\) in the presence of an axion can similarly be decomposed as a sum over twisted flat space Green's functions \(D_{F}\) as
\[G(x,y)=\sum_{n=-\infty}^{\infty}e^{in(a+\pi)}D_{F}(x,y+2\pi nR)\,. \tag{5.4}\]
Thus, only fermion propagators that loop around the extra dimension are sensitive to an axion background, and can hence generate a potential for the axion and an effective axion-photon coupling \(g(a)\) (see footnote 2). This effect is suppressed by the small spacelike propagator for heavy \(\Psi\) to loop around the extra dimension, \(z=e^{-2\pi Rm}\), such that the instanton contributions to the axion effective action can be packaged in the complex quantity,
\[\mathcal{Z}=e^{-2\pi Rm}e^{ia}\,. \tag{5.5}\]
The real and imaginary parts of powers of \(\mathcal{Z}\) respectively contribute to the CP even potential \(V(a)\) and CP odd effective axion-photon coupling \(g(a)\). Similar contributions were noted to the superpotential in [32].
The equivalence between a tower of states (e.g. dyons) and instantons is exactly given by the equivalence between the KK mode and winding modes formulation [47] (of which equation (5.4) is a special example). The relation between the two is provided by Poisson resummation [45],
\[\sum_{n=-\infty}^{\infty}s\left(n-\frac{a}{2\pi}\right)=\sum_{k=-\infty}^{ \infty}e^{-ika}S(k)\qquad,\qquad S(k)=\int_{-\infty}^{\infty}dx\ e^{-i2\pi kx} s(x)\,. \tag{5.6}\]
We proceed to calculate both the potential \(V(a)\) and effective axion-photon coupling \(g(a)\) by evaluating the effective action (Eq. (101)) after integrating out the massive charged fermions in the winding mode basis. An alternative derivation of both \(V(a)\) and \(g(a)\) using KK modes can be found in the appendix B.
### Calculation of \(V(a)\)
In order to obtain an effective action for the axion and photons, we integrate out the fermions to obtain
\[e^{S_{\text{eff}}[a,A]}=\int D\Psi D\overline{\Psi}\ e^{S[a,A,\Psi]}\,. \tag{102}\]
This yields an effective action
\[S_{\text{eff}}[a,A]=S[a,A]+\text{Tr}\left(\log\left(-\not{D}-m\right)\right)\,. \tag{103}\]
A simple way to calculate \(V(a)\) is to take a derivative of the effective action (103) with respect to a constant axion \(a\) and set the photon field to zero. This yields
\[\left(2\pi R\right)\frac{\delta S_{\text{eff}}}{\delta a}\supset-\text{Tr} \left(\gamma_{5}G\right)\,. \tag{104}\]
The compact Green's function \(G\) can be expanded in terms of twisted flat space Green's functions as
\[\text{Tr}\left(\gamma_{5}G\right)=\sum_{n=-\infty}^{\infty}e^{in(a+\pi)}\text {Tr}\left(i\gamma_{5}\frac{e^{n2\pi R\partial_{5}}}{\not{D}+m}\right)\,. \tag{105}\]
Multiplying top and bottom by the same factor, one arrives at
\[\text{Tr}\left(\gamma_{5}G\right)=\sum_{n=-\infty}^{\infty}e^{in(a+\pi)}\text {Tr}\left(i\gamma_{5}\frac{e^{n2\pi R\partial_{5}}(\not{D}-m)}{\left(\not{D} \right)^{2}-m^{2}}\right)\,. \tag{106}\]
This allows us to calculate the 4D potential by equation (104) as
\[\frac{\partial V}{\partial a}=4\sum_{n=-\infty}^{\infty}e^{in(a+\pi)}\int \frac{dp^{5}}{(2\pi)^{5}}\frac{p_{5}e^{in2\pi Rp_{5}}}{p^{2}+(p_{5})^{2}+m^{2} }\,. \tag{107}\]
Note that this average of momentum in 5D is non-zero due to the discrete nature of the momenta. Integrating these out3 yields
Footnote 3: The \(n=0\) term is axion independent and vanishes due to CP symmetry.
\[\frac{\partial V}{\partial a}=-4\sum_{n=1}^{\infty}(-1)^{n}\sin\left(na\right) \int\frac{dp^{4}}{(2\pi)^{4}}e^{-n|2\pi R|\sqrt{p^{2}+m^{2}}}\,. \tag{108}\]
Integrating over momenta and with respect to the axion \(a\) yields the effective potential
\[V(a)=\frac{m^{2}}{(2\pi R)^{2}}\sum_{n=1}^{\infty}\frac{1}{\pi^{2}n^{3}}e^{-n 2\pi|Rm|}(-1)^{n}\cos\left(na\right)\left(1+\frac{3}{2\pi|Rm|n}+\frac{3}{\left( 2\pi Rmn\right)^{2}}\right)\,. \tag{109}\]
This potential is known as the one generated by a four-dimensional particle with a rotor degree of freedom coupled to the axion [45] and was also discussed in various limits in [46; 47; 48; 49]. Various other representations of this potential are recorded in the appendices in equation (A.12) and equation (B.2).
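As a small numerical check (our own sketch) of the momentum integral that takes equation (108) into equation (109), the 4D radial integral can be evaluated directly:

```python
# Sketch: check  4 * Int d^4p/(2pi)^4  exp(-c*sqrt(p^2+m^2))
#               = m^2/(pi^2 c^2) * exp(-c*m) * (1 + 3/(c*m) + 3/(c*m)^2),
# which is the integral appearing in eq. (108) with c = 2*pi*R*n (taking m > 0);
# summed over n and integrated in a, it builds up eq. (109).
import numpy as np
from scipy.integrate import quad

def lhs(c, m):
    # 4D Euclidean measure: d^4p = 2*pi^2 * p^3 dp (S^3 surface area is 2*pi^2)
    radial = quad(lambda p: p**3*np.exp(-c*np.sqrt(p**2 + m**2)),
                  0, np.inf, epsabs=0)[0]
    return 4*(2*np.pi**2/(2*np.pi)**4)*radial

def rhs(c, m):
    return m**2/(np.pi**2*c**2)*np.exp(-c*m)*(1 + 3/(c*m) + 3/(c*m)**2)

for c, m in [(2*np.pi, 0.7), (2*np.pi, 1.3)]:
    assert abs(lhs(c, m) - rhs(c, m)) < 1e-6*rhs(c, m)
```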
We see that for the 5D instantons, the spurion is parametrized by the parameter \(z=e^{-2\pi Rm}\) with the symmetry \(z\leftrightarrow\frac{1}{z}\) leaving the potential invariant and implementing the \(-m\) to \(m\) domain wall.
At the symmetric point \(z=1\), the 5D fermion becomes massless and the Lagrangian has an apparent \(\mathbb{Z}_{2}\) (5D parity) symmetry, which is broken by the topological Chern-Simons term. In this limit, at the point \(a=\pi\), the lightest 4D fermion in the tower becomes massless and a 4D \(U(1)\)-chiral symmetry is restored, meaning that this fermion should not have been integrated out. This is reflected by a singularity of \(V^{\prime\prime}\) at \(a=\pi\).
### Calculation of \(g(a)\)
In this section, we calculate the effective axion-photon coupling resulting from a charged massive fermion with additional compact degrees of freedom coupled to the axion. The existence of such a \(g(a)\) was already well-known in the context of finite temperature field theory in various dimensions see e.g. [50; 51] and references therein.
We can calculate \(g(a)\), the effective axion-photon coupling, by taking a derivative of the effective action (equation (102)) with respect to the axion, which will contain a term of the form
\[i\frac{g^{\prime}(a)}{16\pi^{2}}F\tilde{F}\subset(2\pi R)\frac{\delta S_{\rm eff }}{\delta a}\,. \tag{106}\]
The calculation of \(g(a)\) will proceed along similar lines as in section 4. We keep both a background constant axion \(a\) and a constant background field of the zero KK mode of the photon \(F_{\mu\nu}^{(0)}\) which we simply write as \(F_{\mu\nu}\). In these circumstances, \(\left(\not{D}\right)^{2}=D^{2}-\frac{1}{2}\sigma_{\mu\nu}F^{\mu\nu}\).
Returning to the effective action in equation (101) and keeping only the relevant CP odd terms as
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}|_{\rm odd}=mi\sum_{n=-\infty}^{ \infty}e^{in(a+\pi)}{\rm Tr}\left(\gamma_{5}\frac{e^{n2\pi R\partial_{5}}}{ \left(\not{D}\right)^{2}-m^{2}}\right)\,. \tag{107}\]
Expanding this to second order in \(F\), one obtains
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}|_{\rm odd}=i\frac{m}{4}\sum_{n=- \infty}^{\infty}e^{in(a+\pi)}{\rm Tr}\left(\gamma_{5}\frac{e^{n2\pi R\partial_ {5}}}{D^{2}-m^{2}}\sigma^{\mu\nu}F_{\mu\nu}\sigma^{\alpha\beta}F_{\alpha\beta }\right)\,. \tag{108}\]
Using the identity \({\rm Tr}\left(\gamma^{5}\sigma_{\mu\nu}\sigma_{\alpha\beta}\right)=-4\epsilon_ {\mu\nu\alpha\beta}\) one arrives at
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}|_{\rm odd}=2mi\sum_{n=-\infty}^{ \infty}e^{in(a+\pi)}\int\frac{dp^{5}}{(2\pi)^{5}}\frac{e^{in2\pi Rp^{5}}}{ \left((p^{5})^{2}+p^{2}+m^{2}\right)^{3}}F\widetilde{F}\,. \tag{109}\]
The integrals over momenta are convergent. Performing the integrals yields
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}|_{\rm odd}=\frac{i}{32\pi^{2}}{ \rm sign}(m)\sum_{n=-\infty}^{\infty}e^{in(a+\pi)}e^{-2\pi|Rmn|}F\widetilde{F}\,. \tag{110}\]
By comparing with equation (106), we find that
\[g^{\prime}(a)=\frac{{\rm sign}(m)}{2}\sum_{n=-\infty}^{\infty}e^{in(a+\pi)}e ^{-2\pi|Rmn|}\,. \tag{111}\]
This sum can be explicitly calculated and yields a \(g^{\prime}(a)\) of the form
\[g^{\prime}(a)=\frac{1}{2}\frac{\sinh 2\pi Rm}{\cosh 2\pi Rm+\cos a}\,. \tag{112}\]
Integrating this with respect to \(a\) yields our final result for \(g(a)\) and adding a primordial \(\pm\frac{1}{2}\)-level Chern-Simons term (see footnote 2) yields
\[g(a)=\pm\frac{1}{2}a+\arctan\left(\frac{1-z}{1+z}\tan\left(\frac{a}{2}\right) \right)+\pi\text{sign}(1-z)\Theta(a-\pi)\,. \tag{5.22}\]
In the limit \(R\to\infty\), this reproduces the well-known result,
\[g(a)=\frac{1}{2}\left(\pm 1+\frac{m}{|m|}\right)a\,. \tag{5.23}\]
Similar to the QCD axion, there is an apparent \(\mathbb{Z}_{2}\) (5D parity) restoration as \(m\to 0\) or \(z\to 1\) in equation (5.1). The profiles and monodromies of \(\lim\limits_{z\to 1^{-}}g(a)\) and \(\lim\limits_{z\to 1^{+}}g(a)\) differ however due to the gauge-parity anomaly. In the same limit \(z\to 1\), there is a jump in both profiles of \(g(a)\) at \(a=\pi\) due to the lightest fermion in the tower becoming massless.
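As a small self-contained cross-check (our own numerical sketch), the truncated winding sum in equation (111) can be compared against the closed form (112), and the non-linear part of equation (5.22) can be differentiated numerically back to it:

```python
# Sketch: the winding-mode sum for g'(a) resums to the closed form, and the
# arctan + step part of the final g(a) differentiates back to that closed form.
import numpy as np

def gprime_sum(a, Rm, nmax=200):           # truncated version of eq. (111)
    n = np.arange(-nmax, nmax + 1)
    return 0.5*np.sign(Rm)*np.sum(np.exp(1j*n*(a + np.pi))
                                  * np.exp(-2*np.pi*np.abs(Rm*n))).real

def gprime_closed(a, Rm):                  # eq. (112)
    return 0.5*np.sinh(2*np.pi*Rm)/(np.cosh(2*np.pi*Rm) + np.cos(a))

def g_nonlinear(a, Rm):                    # eq. (5.22) without the +-a/2 term
    z = np.exp(-2*np.pi*Rm)
    return (np.arctan((1 - z)/(1 + z)*np.tan(a/2))
            + np.pi*np.sign(1 - z)*np.heaviside(a - np.pi, 1.0))

for Rm in (0.2, -0.2):                     # both signs of the 5D mass
    for a in (0.5, 1.5, 2.5):
        assert abs(gprime_sum(a, Rm) - gprime_closed(a, Rm)) < 1e-8
        numerical = (g_nonlinear(a + 1e-6, Rm) - g_nonlinear(a - 1e-6, Rm))/2e-6
        assert abs(numerical - gprime_closed(a, Rm)) < 1e-5
```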
In the 5D theory, a domain wall describing the \(-m\to m\) transition has a massless chiral fermion on it and describes anomaly inflow consistent with our 4D analysis.
## 6 Discussion
We have considered the general properties of the monodromic axion-photon coupling \(g(a)\) and the symmetries of the low-energy axion-Maxwell Lagrangian in the presence of such a coupling. We argued that the non-quantization of \(g^{\prime}(a)\) is a spurion for the axion shift symmetry. The connection between the axion potential and this coupling has been considered and supported by several examples, including the QCD axion, perturbative shift symmetry breaking and fermions with additional compact degrees of freedom. In all such cases, a prototypical monodromic function \(g(a)\) was derived and could be expanded in terms of a linear monodromic function with the same monodromic charge as \(g(a)\) and a periodic function. In some cases only the first few terms in the expansion of the periodic function dominated. However, in many simple models the whole Fourier tower needs to be summed up to get the relevant \(g(a)\). In such cases \(g(a)\) captured the rearrangement of heavy degrees of freedom through its singularities at isolated points. This correlated with cusps and singularities in the axion potential.
There are a number of model-building applications of this formulation. Instead of building effective field theories with polynomial axion couplings, more general non-linear couplings can arise naturally through the \(g(a)\) portal. This may open interesting avenues for constructing more general natural potentials for axions. Phenomenologically, the full non-linear form of \(g(a)\) is most relevant for scenarios where the axion traverses an \(O(1)\) fraction of its field range. This is certainly true for axion strings and domain walls, and sharp features in \(g(a)\) can affect axion emission from these objects. It can also be true for dense axion objects, like axion miniclusters or superradiant axion clouds surrounding rotating black holes.
It will be interesting to study the effective photon coupling for mesons in the chiral Lagrangian, e.g. for the pion \(g(\pi^{0})\) after integrating out \(\eta^{\prime}\). It has been shown [52] in the context of one-flavor QCD at large \(N\) that degrees of freedom rearrange on the \(\eta^{\prime}\to\eta^{\prime}+2\pi\) domain wall, leading to a fractional quantum Hall droplet and a potential jump in \(g(\eta^{\prime})\). It would be interesting to see this physics captured within the effective field theory.
Lastly, several contributions to \(g(a)\) could be considered in the presence of CP-odd sources of axion shift symmetry breaking such as magnetic monopoles (with fermions) and non-Abelian instantons.
_Note added_: While this manuscript was being finalized we became aware of other studies appearing today [53; 54] which also consider quantization of the axion-gauge couplings. The main focus of these works is different from the non-linear coupling to photons highlighted in this paper.
###### Acknowledgements.
We would like to thank Mario Reig for useful comments on the draft and John March-Russell, Michael Nee and Thomas Harvey for helpful discussions. PA is supported by the STFC under Grant No. ST/T000864/1. AP is supported by a STFC Studentship No. 2397217 and Prins Bernhard Cultuurfondsbeurs No. 40038041 made possible by the Pieter Beijer fonds and the Data-Piet fonds.
## Appendix A Axion-Euler-Heisenberg
In this appendix we connect our results back to those of a 4D worldline particle with a compact additional degree of freedom coupled to the axion. We do this using the Schwinger proper time formalism and consider a fermionic particle with electrically charged translational modes \(x^{\mu}\) and a compact additional degree of freedom \(q\) which is coupled to the axion \(a\). In doing so, we show how a potential \(V(a)\) (equation (101)) and effective axion-photon coupling \(g(a)\) (equation (102)) arise in such a worldline formalism. This will provide us with the low-energy effective axion-Maxwell field theory to all orders in \(a\) and \(F\) by studying the axion-Euler-Heisenberg Lagrangian resulting from integrating out such fermions. Axial couplings to the worldlines of particles have been studied in [55; 56]. The effective Euler-Heisenberg Lagrangian following from loops of fermions with couplings to non-compact pseudoscalar particles has been studied in [57].
We can derive the 4D worldline formalism of a fermion with additional compact degrees of freedom coupled to the axion by starting from the 5D effective action (100), repeated here for completeness,
\[S\supset\mathrm{Tr}\left(\ln\left(-\not{D}-m\right)\right)\,. \tag{102}\]
The presence of a pseudoscalar (the axion) implies that the Euclidean effective action has both a real and imaginary part as the operator \(\not{D}\) no longer has a positive definite spectrum. For this reason, the contributions to the effective action are split into a real and imaginary part as
\[S\supset\mathrm{Tr}\left(\ln\left|\not{D}+m\right|\right)+i\mathrm{Tr}\left( \mathrm{Arg}\left(-\not{D}-m\right)\right)\,. \tag{103}\]
For our purposes, this split is done by taking a derivative of the effective action (102) with respect to \(m^{2}\) as
\[\frac{dS}{dm^{2}}\supset\frac{1}{2}\mathrm{Tr}\left(\frac{1}{-\not{D}^{2}+m^{ 2}}\right)-\frac{1}{2m}\mathrm{Tr}\left(\frac{\not{D}}{-\not{D}^{2}+m^{2}} \right)\,. \tag{104}\]
The first term in equation (104) can be reformulated using standard techniques [58; 59] in terms of a worldline effective action for a fermion with 4 translation degrees of freedom \(x^{\mu}\) and one additional compact degree of freedom \(q\) as
\[S\supset-\frac{1}{2}\int_{0}^{\infty}\frac{ds}{s}e^{-sm^{2}}\int Dq\,e^{-\int_{0}^{s}d\tau\left(\frac{1}{4}\dot{q}^{2}-i\dot{q}\frac{a}{2\pi R}\right)}\int Dx\,e^{-\int_{0}^{s}d\tau\left(\frac{1}{4}\dot{x}^{2}-i\dot{x}_{\mu}A^{\mu}\right)}\mathrm{Spin}[x,A]\,. \tag{105}\]
Here we have isolated the compact fifth degree of freedom \(q\) of the fermion that is coupled to the axion and the spin factor is given by
\[\mathrm{Spin}[x,A]=\mathrm{Tr}P\mathrm{exp}\left[-\frac{i}{4}[\gamma^{\mu}, \gamma^{\nu}]\int_{0}^{s}d\tau F_{\mu\nu}(x(\tau))\right]\,. \tag{106}\]
From the effective action it is clear that this term will generate an effective potential \(V(a)\) for the axion \(a\) and an effective Euler-Heisenberg Lagrangian, which shall be calculated in the next section. This effective potential was studied for loops of bosonic particles with additional rotor degree of freedoms coupled to the axion in reference [45].
Importantly however, the second term in equation (A.3) does not vanish when an axial coupling is present. This term generates an effective axion photon coupling. We thus see that loops of fermionic particles with an additional compact degree of freedom \(q\) and charged translation modes \(x^{\mu}\) can generate an effective \(F\widetilde{F}\) coupling when this additional degree of freedom is coupled to the axion.
We now proceed to calculate both the real and imaginary contributions (Eq. (A.2)) to the effective action resulting in an effective axion-Euler-Heisenberg Lagrangian. We shall do this calculation in terms of the KK mode decomposition with frequencies \(\omega_{n}=\frac{\pi}{2\pi R}(2n+1)\) and \(n\in\mathbb{Z}\).
The first term in equation (A.3) can be rewritten using Schwinger-proper time as
\[\frac{1}{2}\text{Tr}\left(\frac{1}{-\not{D}^{2}+m^{2}}\right)=\frac{1}{2}\int _{0}^{\infty}ds\;e^{-sm^{2}}\text{Tr}\left[\langle x|e^{s\not{D}^{2}}|x\rangle \right]\,.\] (A.6)
We proceed by splitting the covariant derivative \((\not{D})^{2}=(\not{D}_{4})^{2}+(\not{D}_{5})^{2}\) into the 4D covariant derivative \(\not{D}_{4}\) and the covariant derivative over the fifth dimension \(\not{D}_{5}\), and we take the axion \(a\) and the field strength \(F\) to be constant.
The trace over the 4D covariant derivative in the presence of a constant field strength \(F\) can be calculated using the well-known identity in 4D [60, 61],
\[\text{Tr}\left[\langle x|e^{\not{D}_{4}^{2}s}|x\rangle\right]=\frac{1}{64\pi^{ 2}}\frac{F\widetilde{F}}{\text{Im}\cosh{(sX)}}\text{Tr}\left[\exp\left(-\frac{ s}{2}\sigma_{\mu\nu}F^{\mu\nu}\right)\right]\,,\] (A.7)
and the trace identity as
\[\text{Tr}\left[\exp\left(-\frac{s}{2}\sigma_{\mu\nu}F^{\mu\nu}\right)\right]= 4\;\text{Re}\cosh{sX}\,,\] (A.8)
with \(X\) given by
\[X\equiv\sqrt{\frac{1}{2}F^{2}+\frac{i}{2}F\widetilde{F}}\,.\] (A.9)
We can now calculate the action \(S\) by plugging these identities into equation (A.6) and integrating with respect to \(m^{2}\) (equation (A.3)) to obtain the well-known formula for the Euler-Heisenberg Lagrangian. In case of a constant axion \(a\) and field strength \(F_{\mu\nu}\) this is
\[\mathcal{L}\supset-\frac{1}{32\pi^{2}}\int_{0}^{\infty}\frac{ds}{s}e^{-sm^{2}}\frac{\text{Re}\cosh{sX}}{\text{Im}\cosh{sX}}F\widetilde{F}\frac{1}{(2\pi R)}\sum_{n=-\infty}^{\infty}e^{-\left(\omega_{n}-\frac{a}{2\pi R}\right)^{2}s}\,.\] (A.10)
By expanding this formula in powers of \(s\), we can find an alternative integral representation for the potential of the axion. Observe that to second order in \(s\),
\[\frac{\text{Re}\cosh{sX}}{\text{Im}\cosh{sX}}F\widetilde{F}=\frac{4}{s^{2}}+ \frac{2}{3}F^{2}+\mathcal{O}(s^{2})\,.\] (A.11)
We recognize the first term as the vacuum energy contribution to the Euler-Heisenberg Lagrangian. This yields an alternative integral representation for the potential of the axion of the form
\[V(a)=\frac{1}{8\pi^{2}}\int_{0}^{\infty}\frac{ds}{s^{3}}e^{-sm^{2}}\sum_{n=-\infty}^{\infty}e^{-\left(\omega_{n}-\frac{a}{2\pi R}\right)^{2}s}\,,\] (A.12)
and can be rewritten in terms of instanton supressed contributions using Poisson resummation [45].
We now proceed to calculate the imaginary part of the effective action (equation (A.2)) by calculating the contribution of the second term in equation (A.3). In doing so, we recover an alternative representation for \(g(a)\) and complete the axion-Euler-Heisenberg Lagrangian.
The second term in equation (A.3) can be calculated in a similar manner using Schwinger proper time
\[-\frac{1}{2m}\text{Tr}\left(\frac{\not{D}}{-\not{D}^{2}+m^{2}} \right)=-\frac{1}{2m}\int_{0}^{\infty}ds\ e^{-sm^{2}}\text{Tr}\left[\langle x |\gamma_{5}\left(\partial_{5}-i\frac{a}{2\pi R}\right)e^{\not{D}^{2}s}|x \rangle\right]\,.\] (A.13)
This term is non-zero due to the discrete nature of the momenta of the additional degree of freedom.
This term can now be straightforwardly calculated by again splitting the covariant derivative and using \((\not{D}_{4})^{2}=D^{2}-\frac{1}{2}\sigma_{\mu\nu}F^{\mu\nu}\) and the trace identity
\[\text{Tr}\left[\gamma_{5}\text{exp}\left(-\frac{s}{2}\sigma_{ \mu\nu}F^{\mu\nu}\right)\right]=-4\ \text{Im}\cosh sX\,.\] (A.14)
Using this latter trace identity and our expression for the 4D propagator (equation (A.7)), we can calculate the imaginary contribution to the effective action as
\[\frac{d\mathcal{L}}{dm^{2}}\supset\frac{i}{32\pi^{2}m}\int_{0}^{ \infty}ds\ e^{-sm^{2}}F_{\mu\nu}\tilde{F}_{\mu\nu}\sum_{n=-\infty}^{\infty} \left(\omega_{n}-\frac{a}{2\pi R}\right)e^{-\left(\omega_{n}-\frac{a}{2\pi R} \right)^{2}s}\,.\] (A.15)
Performing the integral with respect to \(s\) and then the integral with respect to \(m\), one obtains the contribution to the effective action
\[\mathcal{L}\supset\frac{i}{16\pi^{2}}F_{\mu\nu}\widetilde{F}_{ \mu\nu}\sum_{n=-\infty}^{\infty}\arctan\left(\frac{m}{\omega_{n}-\frac{a}{2 \pi R}}\right).\] (A.16)
By taking a derivative with respect to \(a\), we arrive at the expression in equation (B.8) and thus find another representation of \(g(a)\) as an infinite series.
The sum of equations (A.10) and (A.16) provides us with an alternative worldline formulation resulting in an effective axion-Euler-Heisenberg Lagrangian providing us with the effective low energy axion Maxwell theory Lagrangian to all orders in the axion and photon.
## Appendix B Kaluza-Klein calculation
In this appendix we re-derive the axion potential \(V(a)\) (equation (5.14)) and the effective axion coupling \(g(a)\) (equation (5.22)) for a tower of fermionic states in the Kaluza-Klein decomposition basis. We take the fermion to satisfy anti-periodic boundary conditions such that \(\omega_{n}=\frac{\pi}{2\pi R}\left(2n+1\right)\) in order to align the minimum of the potential with \(a=0\). The axion potential for other boundary conditions can always be obtained by shifting the axion \(a\).
### The axion potential
The axion gains an effective potential from interactions with the fermion. We can calculate this effective potential by taking the axion \(a\) to be constant,
\[V(a)=-2\sum_{n=-\infty}^{\infty}\int\frac{d^{4}p}{(2\pi)^{4}}\ln \left(\left(\omega_{n}-\frac{a}{2\pi R}\right)^{2}+p^{2}+m^{2}\right).\] (B.1)
The potential in equation (B.1) is well-known [48] and the resulting potential for the axion is
\[V(a)=-2\int\frac{d^{4}p}{(2\pi)^{4}}\ln\left(1+e^{-4\pi RE_{p}} +2e^{-2\pi RE_{p}}\cos a\right).\] (B.2)
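For completeness, the step from (B.1) to (B.2) uses the standard Kaluza-Klein resummation (a brief reminder, not spelled out in the original text), with \(E_{p}=\sqrt{p^{2}+m^{2}}\):
\[\sum_{n=-\infty}^{\infty}\ln\left[\left(\omega_{n}-\frac{a}{2\pi R}\right)^{2}+E_{p}^{2}\right]=\ln\left(\cosh(2\pi RE_{p})+\cos a\right)+(a\text{-independent})=\ln\left(1+e^{-4\pi RE_{p}}+2e^{-2\pi RE_{p}}\cos a\right)+(a\text{-independent})\,.\]
The first equality follows, for instance, by differentiating both sides with respect to \(a\) and using the standard cotangent sum; the \(a\)-independent pieces only shift the vacuum energy and are dropped in (B.2).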
### The effective axion-photon coupling
We can calculate \(g(a)\), the effective axion-photon coupling, by taking a derivative of the effective action (equation (102)) with respect to the axion, which will contain a term of the form
\[i\frac{g^{\prime}(a)}{16\pi^{2}}F\tilde{F}\subset(2\pi R)\frac{\delta S_{\rm eff }}{\delta a}\,. \tag{104}\]
We keep both a constant axion \(a\) and constant zero KK mode of the photon \(F_{\mu\nu}\).
Proceeding with the calculation, from equation (102), we have that
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}\supset-{\rm Tr}\left(i\gamma^{5} \frac{1}{\not{D}+m}\right)\,. \tag{105}\]
For a constant axion we can expand the denominator as
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}\supset-{\rm Tr}\left(i\gamma^{5} \frac{\not{D}-m}{D^{2}-\frac{1}{2}\sigma^{\mu\nu}F_{\mu\nu}-m^{2}}\right)\,. \tag{106}\]
We proceed to expand the denominator to order \(F^{2}\) as
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}\supset-\frac{1}{4}{\rm Tr}\left( i\gamma^{5}\left(\not{D}-m\right)\left(\frac{1}{D^{2}-m^{2}}\right)^{3}\sigma^{ \mu\nu}F_{\mu\nu}\sigma^{\alpha\beta}F_{\alpha\beta}\right)\,. \tag{107}\]
Using the identity \({\rm Tr}\left(\gamma^{5}\sigma_{\mu\nu}\sigma_{\alpha\beta}\right)=-4\epsilon _{\mu\nu\alpha\beta}\) and ignoring higher order \(F\) contributions, we find that
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}\supset\frac{2im}{2\pi R}F\tilde{F} \sum_{n=-\infty}^{\infty}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{1}{\left(\left( \omega_{n}-\frac{a}{2\pi R}\right)^{2}+k^{2}+m^{2}\right)^{3}}\,. \tag{108}\]
Performing the momentum integral yields
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}\supset\frac{im}{2\pi R}F\tilde{F} \sum_{n=-\infty}^{\infty}\frac{1}{16\pi^{2}}\frac{1}{\left(\omega_{n}-\frac{a }{2\pi R}\right)^{2}+m^{2}}\,. \tag{109}\]
The frequency sum can be done by a method of images as
\[\frac{1}{2\pi R}\sum_{n=-\infty}^{\infty}\frac{1}{\left(\omega_{n}-\frac{a}{2 \pi R}\right)^{2}+m^{2}}=\frac{1}{2|m|}\frac{\sinh\left(2\pi R|m|\right)}{ \cosh\left(2\pi Rm\right)+\cos\left(a\right)}\,. \tag{110}\]
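For reference, the image sum used here is the standard lattice identity (not spelled out in the text): for real \(b\) and \(c>0\),
\[\sum_{n=-\infty}^{\infty}\frac{1}{(n+b)^{2}+c^{2}}=\frac{\pi}{c}\,\frac{\sinh(2\pi c)}{\cosh(2\pi c)-\cos(2\pi b)}\,,\]
applied with \(b=\frac{1}{2}-\frac{a}{2\pi}\) and \(c=R|m|\), using \(\omega_{n}-\frac{a}{2\pi R}=\frac{1}{R}\left(n+\frac{1}{2}-\frac{a}{2\pi}\right)\) and \(\cos(\pi-a)=-\cos a\).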
Plugging equation (110) into equation (109), one obtains
\[(2\pi R)\frac{\delta S_{\rm eff}}{\delta a}\supset i\frac{\sinh\left(2\pi R| m|\right)}{\cosh\left(2\pi Rm\right)+\cos\left(a\right)}\frac{{\rm sign}(m)}{32 \pi^{2}}F\tilde{F}\,. \tag{111}\]
By comparing this expression with equation (104), we see that
\[g^{\prime}(a)=\frac{1}{2}\frac{\sinh\left(2\pi Rm\right)}{\cosh\left(2\pi Rm\right)+\cos\left(a\right)}\,, \tag{112}\]
which implies
\[g(a)=\arctan\left(\tanh\left(\pi Rm\right)\tan\left(\frac{a}{2}\right)\right)+ \pi{\rm sign}(m)\Theta(a-\pi)\,. \tag{113}\]
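One can check directly (a one-line verification, not in the original text) that this expression integrates equation (112):
\[\frac{d}{da}\arctan\left(\tanh(\pi Rm)\tan\frac{a}{2}\right)=\frac{\frac{1}{2}\tanh(\pi Rm)}{\cos^{2}\frac{a}{2}+\tanh^{2}(\pi Rm)\sin^{2}\frac{a}{2}}=\frac{1}{2}\frac{\sinh\left(2\pi Rm\right)}{\cosh\left(2\pi Rm\right)+\cos\left(a\right)}\,,\]
while the step function only contributes at \(a=\pi\), where the branch of the arctangent jumps.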
This function has the correct \(\frac{1}{2}\)-level Chern-Simons term. |
2309.04083 | Short words of infinite order | Given an infinite linear group with a finite set of generators, we show that
the shortest word length of an element of infinite order has an upper bound
that depends only on the number of generators and the degree. This provides a
quantification of the Burnside problem for linear groups. In degree two, an
explicit bound is computed using an exceptional connection to reflection
groups. | Junho Peter Whang | 2023-09-08T02:50:16Z | http://arxiv.org/abs/2309.04083v1 | # Short words of infinite order
###### Abstract.
Given an infinite linear group with a finite set of generators, we show that the shortest word length of an element of infinite order has an upper bound that depends only on the number of generators and the degree. This provides a quantification of the Burnside problem for linear groups. In degree two, an explicit bound is computed using an exceptional connection to reflection groups.
###### Contents
* 1 Introduction
* 2 Finite Burnside sets
* 3 Stokes matrices
* 4 Reflective representations
## 1. Introduction
### Main results
The first main result of this paper is the following.
**Theorem 1.1**.: _Given integers \(r,n\geq 1\), there exists an integer \(\ell=\ell(r,n)\geq 0\) such that, for any finite set \(S\) of \(r\) matrices in \(\operatorname{GL}_{n}(\mathbb{C})\) generating a group \(G\) of infinite order, there is an element of infinite order in \(G\) with word length \(\leq\ell\) in \(S\)._
Here, the word length of an element \(g\in G\) in \(S\) is the minimum integer \(k\geq 0\) such that \(g=s_{1}\cdots s_{k}\) for some \(s_{1},\ldots,s_{k}\in S\cup S^{-1}\). The nontrivial content of Theorem 1.1 lies in the case where \(S\) consists only of torsion elements. We point out a loose resemblance to the systolic inequality in Riemannian geometry: by work of Gromov [8], the length of the shortest noncontractible loop in an essential closed Riemannian manifold has an upper bound that depends only on the dimension and the volume of the manifold. Theorem 1.1 is inspired by the classical Burnside problem for linear groups. Recall that, by Schur's theorem [16], a complex linear representation of a finitely generated group \(F\) has finite image if and only if the image of each element of \(F\) is a torsion element. A natural question is whether torsionness of image for only a small part of \(F\) is enough to ensure that a given representation is finite. This leads us to the following definition.
**Definition 1.2**.: Let \(F\) be a group, and let \(\mathcal{C}\) be a class of group homomorphisms with domain \(F\). Let \(L\subseteq F\) be a subset. We say that \(L\) is a _Burnside set_ for \(\mathcal{C}\) if the following are equivalent for every \(\rho\in\mathcal{C}\):
1. \(\rho(\gamma)\) is torsion for all \(\gamma\in L\).
2. \(\rho\) has finite image.
A motivating example is where \(F=\pi_{1}(\Sigma)\) is the fundamental group of a surface \(\Sigma\) of genus \(g\) with \(n\) punctures, and \(L\) is the set of its simple loops. If \(3g+n-3>0\), the simple loops form a sparse yet infinite collection of elements in the fundamental group. Patel-Shankar-Whang [17] showed that, for \(g>0\), the simple loops form a Burnside set for the class of semisimple degree \(2\) representations of \(F\) over \(\mathbb{C}\). This was used in the proof of the \(p\)-curvature conjecture in rank \(2\) for generic curves _loc. cit._, and also in the classification of \(\operatorname{SL}_{2}(\mathbb{C})\)-local systems with finite mapping class group orbits by Biswas-Gupta-Mj-Whang [2]. (On the other hand, Koberda-Santharoubane [10] showed that the set of simple loops cannot be a Burnside set for _all_ semisimple representations of a given surface group over \(\mathbb{C}\).) The following is a reformulation of Theorem 1.1.
**Theorem 1.3**.: _For any finitely generated group \(F\) and integer \(n\geq 1\), there exists a finite Burnside set for \(\operatorname{Hom}(F,\operatorname{GL}_{n}(\mathbb{C}))\)._
We prove Theorem 1.3 by combining Schur's theorem with Laurent's solution [11] of Lang's \(\mathbb{G}_{m}\) conjecture (See also [14, 15]) and Procesi's work [12] on invariant theory of matrices. (We mention in passing that Lang's \(\mathbb{G}_{m}\) conjecture was also used in [4] to show that anisotropic linear groups that are boundedly generated must be virtually solvable.) Our proof is nonconstructive, and it raises the interesting problem of explicitly constructing finite Burnside sets for \(\operatorname{Hom}(F,\operatorname{GL}_{n}(\mathbb{C}))\). For \(n=1\) or more generally for the class of abelian representations of \(F\), it is trivial that any finite generating set of \(F\) provides a Burnside set. Our second result is a solution to this problem in the simplest nontrivial case, where \(n=2\).
**Theorem 1.4**.: _Let \(F\) be a group generated by a finite set \(S\). The set of elements of word length \(\leq 3|S|\) in \(S\) is a Burnside set for \(\operatorname{Hom}(F,\operatorname{GL}_{2}(\mathbb{C}))\)._
In fact, we produce an explicit finite set of words in \(S\) that forms a Burnside set for \(\operatorname{Hom}(F,\operatorname{GL}_{2}(\mathbb{C}))\). To prove Theorem 1.4, we first introduce and study reflective representations, which are representations \(F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\) that send the standard free generators of \(F_{r}\) to orthogonal reflections (Definition 4.1). Theorem 1.4 can be deduced from Theorem 1.5 below, using an exceptional correspondence established by Fan-Whang [7] between free group representations of degree \(2\) and reflective representations of degree \(4\).
**Theorem 1.5**.: _Let \(F_{r}=\langle\gamma_{1},\dots,\gamma_{r}\rangle\) be a free group of rank \(r\). Then_
\[L_{r}=\{\gamma_{i_{1}}\dots\gamma_{i_{u}}:1\leq i_{1}<\dots<i_{u}\leq r\text{ and }1\leq u\leq r\}\]
_is a Burnside set for the class of all semisimple reflective representations of \(F_{r}\)._
Note that the set \(L_{r}\) defined above is independent of the degrees of the reflective representations. Theorem 1.5 is closely related to the fact that a Coxeter group is finite if and only if it admits a positive definite cosine matrix. The tools involved in our proof are classical: aside from Schur's theorem, we make use of the Coxeter identity, Kronecker's theorem on roots of unity, Sylvester's criterion on definiteness of matrices, and a Galois trick employed earlier in [17].
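As a toy illustration (not taken from the paper), consider \(r=2\) and two reflections of \(\mathbb{R}^{2}\), \(v\mapsto v-2(v,u_{i})u_{i}\), in unit vectors \(u_{1},u_{2}\) with \((u_{1},u_{2})=\cos\theta\). Their product is a rotation by \(2\theta\), with
\[\operatorname{tr}\rho(\gamma_{1}\gamma_{2})=2\cos 2\theta=4\cos^{2}\theta-2,\]
so \(\rho(\gamma_{1}\gamma_{2})\) has finite order precisely when \(\theta\in\pi\mathbb{Q}\), in which case the dihedral group generated by the two reflections is finite. Already in rank \(2\), then, finiteness is detected by the single additional word \(\gamma_{1}\gamma_{2}\in L_{2}\), in line with Theorem 1.5.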
Theorem 1.1 is related to the following result of Breuillard-Gelander [3] on the Tits alternative: if \(G\) is a finitely generated linear group that is not virtually solvable, then there is a constant \(m=m(G)\) such that, for any finite generating set \(S\) of \(G\), there exist two elements \(a,b\in G\), each with word length \(\leq m\) in \(S\), that are independent, i.e., generate a free nonabelian subgroup of rank \(2\) in \(G\). Note that our Theorem 1.1 yields a weaker conclusion (existence of a single "independent" short word) from a weaker hypothesis (\(G\) need only be finitely generated and infinite). In addition, if the degree is fixed, then our upper bound on word length depends on the order of the chosen generating set but is otherwise independent of the group, while the bound in [3] depends on the group but not on the generating set.
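To make the content of Theorem 1.1 concrete, the following minimal sketch (purely illustrative, not part of the paper; all names are hypothetical) enumerates words of increasing length in a given finite set of matrices and reports the first word certified to have infinite order by the sufficient criterion that some eigenvalue lies off the unit circle.

```python
from functools import reduce
import itertools
import numpy as np

def certainly_infinite_order(g, tol=1e-9):
    # Sufficient (not necessary) criterion: a finite-order element of GL_n(C)
    # has all eigenvalues on the unit circle.
    return bool(np.any(np.abs(np.abs(np.linalg.eigvals(g)) - 1.0) > tol))

def shortest_certified_word(gens, max_len=6):
    # Enumerate words in the generators and their inverses by length.
    alphabet = [(i + 1, g) for i, g in enumerate(gens)]
    alphabet += [(-(i + 1), np.linalg.inv(g)) for i, g in enumerate(gens)]
    for length in range(1, max_len + 1):
        for word in itertools.product(alphabet, repeat=length):
            mat = reduce(np.matmul, [g for _, g in word])
            if certainly_infinite_order(mat):
                return [idx for idx, _ in word]
    return None

# Two order-4 elements of SL_2(R) whose product is hyperbolic (infinite order).
a = np.array([[0.0, -1.0], [1.0, 0.0]])
b = np.array([[0.0, -2.0], [0.5, 0.0]])
print(shortest_certified_word([a, b]))  # a word of length 2, e.g. [1, 2]
```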
### Applications to surface group representations
A fruitful observation is that, for a natural presentation of the free group \(F_{r}\) as the fundamental group of \(\mathbb{C}-\{p_{1},\ldots,p_{r}\}\) for some distinct marked points \(p_{1},\ldots,p_{r}\) in the complex plane, the elements of \(L_{r}\) in Theorem 1.5 correspond to simple loops. Thus, there exists a finite Burnside set consisting of simple loops for the class of semisimple reflective representations of \(\pi_{1}(\mathbb{C}-\{p_{1},\ldots,p_{r}\})\). By passing to a double cover and utilizing the exceptional correspondence of [7], we can show that if \(\Sigma\) is a surface of positive genus with at most two punctures, then there is an explicitly determined Burnside set consisting of _finitely many_ simple loops for the class of all semisimple degree \(2\) representations of \(\pi_{1}(\Sigma)\) over \(\mathbb{C}\) (Corollary 4.6); this provides an effective strengthening of [17, Theorem 1.2] for \(\Sigma\). In addition, Theorem 1.5 yields a new case of the \(p\)-curvature conjecture, when combined with Shankar's theorem [18] that vector bundles with flat connection on a generic curve with almost all \(p\)-curvatures vanishing must have finite monodromy along simple loops.
**Theorem 1.6**.: _Let \(C\) be the complement of a finite generic set of points in the affine line \(\mathbb{A}^{1}\). The \(p\)-curvature conjecture holds for vector bundles with flat connection on \(C\) whose monodromy representations are reflective._
The \(p\)-curvature conjecture of Grothendieck-Katz is a type of local-to-global principle for differential equations. It states that a system of linear differential equations on an algebraic variety over \(\mathbb{C}\) admits a full set of algebraic solutions over \(\mathbb{C}\) if and only if it does so modulo \(p\) (or, equivalently, has vanishing \(p\)-curvature) for all but finitely many prime numbers \(p\); see [9] for a reference.
Theorem 1.6 applies in particular to the following Fuchsian system of differential equations, encountered in two-dimensional topological field theory, whose monodromy representations define the so-called monodromy groups of Frobenius manifolds (See [6, Section 5], in particular Equation (5.31b) and surrounding paragraphs). Fix a skew-symmetric matrix \(A\in\mathfrak{so}(r)\), and let \(p_{1},\ldots,p_{r}\in\mathbb{C}\) be distinct points. For \(1\leq k\leq r\), let \(E_{k}\) be the \(r\times r\) matrix with unique nonzero entry \(1\) in the \((k,k)\)th place, and let \(\mathbb{I}\) be the identity matrix. Consider the following Fuchsian system for the vector valued function \(Y(z)\):
\[\frac{d}{dz}Y=\sum_{k=1}^{r}\frac{E_{k}(\frac{1}{2}\mathbb{I}-A)}{z-p_{k}}Y.\]
Assuming generic choices of the points \(p_{i}\), Theorem 1.6 shows that the \(p\)-curvature conjecture holds for the system above.
### Organization of the paper
This paper is organized as follows. In Section 2, we prove Theorem 1.3. In Section 3, we introduce the notion of Stokes matrices and their associated representations, and prove an analogue of Theorem 1.5 in that setting. In Section 4, we prove Theorems 1.5 and 1.4 using Stokes matrices and the exceptional correspondence established in [7].
### Acknowledgments
I thank Peter Sarnak and Alexander Lubotzky for enlightening discussions and comments. This work was supported by the Samsung Science and Technology Foundation under Project Number SSTF-BA2201-03.
## 2. Finite Burnside sets
### Burnside-Schur theory
Our goal in Section 2.1 is to show (Proposition 2.4) that the finiteness of a representation \(\rho:F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\) is essentially equivalent to the quasiunipotency of the elements in the image of \(\rho\), modulo considerations of global semisimplicity. We first recall the statements of some relevant results, including Schur's theorem.
**Theorem 2.1** (Schur [16]).: _Let \(\Gamma\leq\operatorname{GL}_{n}(\mathbb{C})\) be a finitely generated group. If \(\gamma\) has finite order for every \(\gamma\in\Gamma\), then \(\Gamma\) is finite._
**Theorem 2.2** (Jordan's theorem, see [5]).: _There exists an effective constant \(N(n)\) such that, for any finite group \(G\leq\operatorname{GL}_{n}(\mathbb{C})\), there exists a normal abelian subgroup \(H\) of index \(\leq N(n)\) in \(G\)._
**Theorem 2.3** (Bass [1]).: _Let \(G\) be a subgroup of \(\operatorname{GL}_{n}(\mathbb{C})\) that acts irreducibly on \(\mathbb{C}^{n}\). If the set of traces of all elements in \(G\) is a bounded subset of \(\mathbb{C}\), then \(G\) is bounded, i.e., conjugate to a subgroup of \(U(n)\)._
Recall that an element \(g\in\operatorname{GL}_{n}(\mathbb{C})\) is said to be _quasiunipotent_ if every eigenvalue of \(g\) is a root of unity. By combining the above ingredients, we deduce the following result which, in essence, allows us to reduce the problem of finiteness of a representation \(\rho:F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\) to an infinite system of trigonometric Diophantine equations in infinitely many unknowns (eigenvalues of \(\rho(\gamma)\) as \(\gamma\) ranges over all elements of \(F_{r}\)). This will be used in our proof of Theorem 1.3.
**Proposition 2.4**.: _There is an effectively determined finite set \(U_{r,n}\subseteq F_{r}\) such that a representation \(\rho:F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\) has finite image if and only if_
1. \(\rho(\gamma)\) _has finite order for every_ \(\gamma\in U_{r,n}\)_, and_
2. \(\rho(\gamma)\) _is quasiunipotent for every_ \(\gamma\in F_{r}\)_._
Proof.: By Theorem 2.2, there is an effective constant \(N(n)\geq 1\) such that any finite subgroup of \(\operatorname{GL}_{n}(\mathbb{C})\) has a normal abelian subgroup of index \(\leq N(n)\). Let us choose a finite subset \(T_{r,n}\) of \(F_{r}\) such that, for every group homomorphism \(\varphi:F_{r}\to G\) where \(G\) is a finite group of order at most \(N(n)\), there is a subset of \(T_{r,n}\) that generates \(\ker(\varphi)\). We claim that the set
\[U_{r,n}=T_{r,n}\cup\{[a,b]:a,b\in T_{r,n}\}\]
satisfies the conclusions of the proposition. Indeed, let \(\rho:F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\) be a representation satisfying conditions (1) and (2). If \(\rho\) is irreducible, then \(\rho\) is unitarizable by Theorem 2.3 and condition (2). It follows that every \(\rho(\gamma)\) has finite order for \(\gamma\in F_{r}\), showing that \(\rho\) has finite image by Theorem 2.1.
It remains to treat the case where \(\rho\) is reducible. The semisimplification \(\rho^{s}\) of \(\rho\) has finite image, by arguing as above. Now, by Theorem 2.2 we can choose a subset \(S_{\rho}\subseteq T_{r,n}\) such that its image under \(\rho^{s}\) generates a normal abelian subgroup of index \(\leq N(n)\) in \(\rho^{s}(F_{r})\). In particular, \(S_{\rho}\) generates a finite index subgroup of \(F_{r}\). We may assume that \(\rho(F_{r})\) consists of block upper triangular matrices (with each diagonal block corresponding to an irreducible summand of \(\rho^{s}\)), and moreover that \(\rho(S_{\rho})\) belongs to the group of upper triangular matrices. Given any \(a,b\in S_{\rho}\)
it follows that the commutator \([\rho(a),\rho(b)]\) is a unipotent matrix and hence has finite order if and only if it is the identity matrix. Our choice of \(U_{r,n}\) and the hypothesis on \(\rho\) implies therefore that the restriction of \(\rho\) to \(\langle S_{\rho}\rangle\) is abelian, and hence \(\rho(\langle S_{\rho}\rangle)\) is finite since \(\rho(\gamma)\) has finite order for every \(\gamma\in S_{\rho}\subseteq T_{r,n}\subseteq U_{r,n}\). Since \(\langle S_{\rho}\rangle\) has finite index in \(F_{r}\), we conclude that \(\rho\) has finite image.
### Finite Burnside sets
We turn to the proof of Theorem 1.3. In addition to Proposition 2.4, we shall need Laurent's solution of Lang's \(\mathbb{G}_{m}\) conjecture and Procesi's theory on invariants of \(n\times n\) matrices.
Let \(\mathbb{G}_{m}=\mathbb{C}^{\times}\) denote the multiplicative group. Let \(n\geq 1\) be an integer. By a torsion point on \(\mathbb{G}_{m}^{n}=(\mathbb{C}^{\times})^{n}\) we shall mean a point \((\zeta_{1},\dots,\zeta_{n})\in\mathbb{G}_{m}^{n}\) all of whose coordinates are roots of unity. Given a closed subvariety \(Y\) of \(\mathbb{G}_{m}^{n}\), by a torsion point on \(Y\) we shall mean a point of \(Y\) that is a torsion point of \(\mathbb{G}_{m}^{n}\).
**Theorem 2.5** (Lang's \(\mathbb{G}_{m}\) conjecture, Laurent [11]).: _Let \(Y\) be a closed subvariety of \(\mathbb{G}_{m}^{n}\). Then the Zariski closure of the set of torsion points on \(Y\) is a finite union of effectively determined torsion cosets of linear subtori of \(\mathbb{G}_{m}^{n}\)._
Let now \(r,n\geq 1\) be integers. Let \(\operatorname{Mat}_{n}\simeq\mathbb{A}^{n^{2}}\) be the affine scheme parametrizing \(n\times n\) matrices. The group \(\operatorname{GL}_{n}\) acts on \(\operatorname{Mat}_{n}^{r}\) by simultaneous conjugation. Let \(X_{i}\) denote the \(i\)th matrix variable, whose \(n\times n\) coordinates form elements generating the coordinate ring \(\mathbb{Q}[\operatorname{Mat}_{n}^{r}]\) of \(\operatorname{Mat}_{n}^{r}\). It is clear that regular functions of the form \(\operatorname{tr}(X_{i_{1}}\dots X_{i_{j}})\) are \(\operatorname{GL}_{n}\)-invariant. Procesi shows that in fact all \(\operatorname{GL}_{n}\)-invariant regular functions on \(\operatorname{Mat}_{n}^{r}\) are polynomial combinations of such trace functions.
**Theorem 2.6** (Procesi [12]).: _The ring \(\mathbb{Q}[\operatorname{Mat}_{n}^{r}]^{\operatorname{GL}_{n}}\) is finitely generated by elements of the form \(\operatorname{tr}(X_{i_{1}}\dots X_{i_{j}})\) with \(j\leq 2^{n}-1\) and \(i_{k}\in\{1,\dots,r\}\) for \(k=1,\dots,j\)._
We now restate and prove Theorem 1.3.
**Theorem 1.3**.: _For any finitely generated group \(F\) and integer \(n\geq 1\), there exists a finite Burnside set for \(\operatorname{Hom}(F,\operatorname{GL}_{n}(\mathbb{C}))\)._
Proof.: It is enough to consider the case where \(F=F_{r}\) is a free group of rank \(r\geq 1\). It is also enough to show that a finite Burnside set exists for \(\operatorname{Hom}(F_{r},\operatorname{SL}_{n}(\mathbb{C}))\) for all \(r,n\geq 1\), since we have an embedding \(\operatorname{GL}_{n}(\mathbb{C})\to\operatorname{SL}_{n+1}(\mathbb{C})\) given by
\[g\mapsto\begin{bmatrix}g&0\\ 0&\det(g)^{-1}\end{bmatrix}.\]
Let \(\gamma_{1},\dots,\gamma_{r}\) be a set of free generators of \(F_{r}\). For \(\ell\in\mathbb{Z}_{\geq 0}\), let us write \(L(\ell)\) for the subset of \(F_{r}\) consisting of those \(\gamma\in F_{r}\) of word length at most \(\ell\) in \(S=\{\gamma_{1},\dots,\gamma_{r}\}\).
Let \(X=X(F_{r},\operatorname{SL}_{n})=\operatorname{SL}_{n}^{r}\,/\!\!/\,\operatorname{GL}_{n}\) be the \(\operatorname{SL}_{n}\)-character variety of \(F_{r}\). It is constructed as the invariant-theoretic quotient of \(\operatorname{SL}_{n}^{r}\) by the diagonal conjugation action of \(\operatorname{GL}_{n}\). By Theorem 2.6 or Hilbert's basis theorem, \(X\) is an integral scheme of finite type over \(\mathbb{Q}\). For each \(w\in F_{r}\), let us write \(s_{1}(w),\dots,s_{n}(w)\in\mathbb{Q}[X]\) for the regular functions on \(X\) given by the coefficients of the characteristic polynomial of the image of \(w\). More precisely,
\[\det(\lambda-\rho(w))=\lambda^{n}+\sum_{i=1}^{n}s_{i}(\rho(w))\lambda^{n-i}.\]
Let us introduce the following notation. For each \(\ell\geq 0\), let us define the scheme \(Y_{\ell}\) as the fiber product
\[Y_{\ell}=(\mathbb{G}_{m}^{n})^{L(\ell)}\times_{(\mathbb{A}^{n})^{L(\ell)}}X,\]
where the arrow \(X\to(\mathbb{A}^{n})^{L(\ell)}\) is given by \(\rho\mapsto(s_{1}(\rho(w)),\dots,s_{n}(\rho(w)))_{w\in L(\ell)}\), and the arrow \((\mathbb{G}_{m}^{n})^{L(\ell)}\to(\mathbb{A}^{n})^{L(\ell)}\) is given by sending each point of \((\mathbb{G}_{m}^{n})^{L(\ell)}\), denoted \((e_{1}(w),\dots,e_{n}(w))_{w\in L(\ell)}\), to the elementary symmetric polynomials in the entries:
\[\left(\sum_{i=1}^{n}e_{i}(w),\cdots,\prod_{i=1}^{n}e_{i}(w)\right)_{w\in L( \ell)}.\]
Note that there is an action of \((S_{n})^{L(\ell)}\) on \((\mathbb{G}_{m}^{n})^{L(\ell)}\) by obvious permutations, with respect to which the bottom horizontal arrow is equivariant. Similarly, we have \((S_{n})^{L(\ell^{\prime})\setminus L(\ell)}\)-invariant projections
\[\pi_{\ell^{\prime},\ell}:(\mathbb{G}_{m}^{n})^{L(\ell^{\prime})}\to(\mathbb{G }_{m}^{n})^{L(\ell)}\]
for \(\ell^{\prime}\geq\ell\), inducing invariant morphisms \(\pi_{\ell^{\prime},\ell}:Y_{\ell^{\prime}}\to Y_{\ell}\).
By Procesi's Theorem (Theorem 2.6), there is an effectively determined \(\ell_{0}\geq 0\) such that \(X\to(\mathbb{A}^{n})^{L(\ell)}\) is a closed immersion for all \(\ell\geq\ell_{0}\). Thus, for \(\ell\geq\ell_{0}\) the morphism \(Y_{\ell}\to(\mathbb{G}_{m}^{n})^{L(\ell)}\) is a closed immersion. We shall choose \(\ell_{0}\) so large that in fact \(L(\ell_{0})\) contains the set \(U_{r,n}\) constructed in Proposition 2.4. For \(\ell\geq\ell_{0}\), let \(Z_{\ell}\) denote the Zariski closure of the set of torsion points on the closed subscheme \(Y_{\ell}\subseteq(\mathbb{G}_{m}^{n})^{L(\ell)}\). By Theorem 2.5, \(Z_{\ell}\) is an effectively determined finite union of torsion cosets of linear subtori in \((\mathbb{G}_{m}^{n})^{L(\ell)}\). For \(\ell^{\prime}\geq\ell\geq\ell_{0}\), the morphism \(\pi_{\ell^{\prime},\ell}:Y_{\ell^{\prime}}\to Y_{\ell}\) sends \(Z_{\ell^{\prime}}\) into \(Z_{\ell}\). Let us write
\[Z_{\ell}^{\prime}=\pi_{\ell,\ell_{0}}(Z_{\ell})\subset Z_{\ell_{0}}\quad\text{for all}\quad\ell\geq\ell_{0}.\]
Then we have a descending chain \(Z_{\ell_{0}}=Z_{\ell_{0}}^{\prime}\supseteq Z_{\ell_{0}+1}^{\prime}\supseteq Z _{\ell_{0}+2}^{\prime}\supseteq\dots\) of finite unions of torsion cosets of algebraic subtori in \((\mathbb{G}_{m}^{n})^{L(\ell_{0})}\), which must eventually stabilize. Let
\[Z=\bigcap_{i=0}^{\infty}Z_{\ell_{0}+i}^{\prime}\]
and let us choose \(t\geq\ell_{0}\) such that \(Z_{\ell}^{\prime}=Z\) for all \(\ell\geq t\). If \(x\) is a torsion point of \(Z\), then
\[\pi_{\ell,\ell_{0}}^{-1}(x)\cap Z_{\ell}\]
contains a torsion point for every \(\ell\geq\ell_{0}\), because the projections \(\pi_{\ell,\ell_{0}}\) are group homomorphisms. But \(\pi_{\ell,\ell_{0}}^{-1}(x)\cap Z_{\ell}\) is finite, by our hypothesis on \(\ell_{0}\); indeed, \((\mathbb{G}_{m}^{n})^{L(\ell)}\to(\mathbb{A}^{n})^{L(\ell)}\) has finite fibers and the image of any \(y\in\pi_{\ell,\ell_{0}}^{-1}(x)\cap Z_{\ell}\) in \(X\subset(\mathbb{A}^{n})^{L(\ell)}\) is determined by the image of \(x\) in \(X\subset(\mathbb{A}^{n})^{L(\ell_{0})}\). It follows that some (and hence every) point in \(\pi_{\ell,\ell_{0}}^{-1}(x)\cap Z_{\ell}\) is a torsion point in \((\mathbb{G}_{m}^{n})^{L(\ell)}\) for every \(\ell\geq\ell_{0}\). Consequently, if \(\rho\in X\) is in the image of \(Z\), then \(\rho(\gamma)\) is quasiunipotent for all \(\gamma\in F_{r}\). In particular, by Proposition 2.4, for any representation \(\rho:F_{r}\to\operatorname{SL}_{n}(\mathbb{C})\), if \(\rho(\gamma)\) has finite order for all \(\gamma\in L(t)\), then \(\rho\) has finite image. This proves the theorem.
## 3. Stokes matrices
### Stokes matrices
Fix an integer \(r\geq 1\). Let us write \([r]=\{1,\ldots,r\}\), and endow it with the usual linear ordering. We shall endow any subset \(I\subset[r]\) with the induced ordering. Given an \(r\times r\) matrix \(a=[x_{ij}]\) and \(I\subseteq[r]\), we shall write \(a_{I}=[x_{ij}]_{i,j\in I}\) for the corresponding submatrix consisting of the entries whose coordinates belong to \(I\). For each \(i\in[r]\), let \(E_{i}\) be the \(r\times r\) matrix having unique nonzero entry \(1\) in the \((i,i)\)th place. Thus, \(\mathbb{I}=\sum_{i=1}^{r}E_{i}\) is the identity matrix.
**Definition 3.1**.: A _Stokes matrix_ of dimension \(r\) is an \(r\times r\) upper triangular unipotent matrix. Let \(V(r)\) denote the space of Stokes matrices of dimension \(r\).
Note that, if \(s\in V(r)\) is a Stokes matrix, then the submatrix \(s_{I}\) is a Stokes matrix of dimension \(|I|\) for any \(I\subseteq[r]\). Below, let \(F_{r}\) denote the free group of rank \(r\) with free generators \(\gamma_{1},\ldots,\gamma_{r}\). For \(I=\{i_{1}<\cdots<i_{|I|}\}\subseteq[r]\), we shall write \(\gamma_{I}=\gamma_{i_{1}}\cdots\gamma_{i_{|I|}}\). Given a Stokes matrix, we can define a representation \(\rho:F_{r}\to\operatorname{GL}_{r}\) as follows.
**Definition 3.2** (Associated representation).: Let \(k\) be a field, and let \(s\in V(r)(k)\).
1. Let \(\rho_{s}:F_{r}\to\operatorname{GL}_{r}(k)\) be the representation such that \[\rho_{s}(\gamma_{i})=\mathbb{I}-E_{i}(s+s^{T})\] for \(i=1,\ldots,r\).
2. Let \(U_{0}^{s}\) denote the kernel of \(s+s^{T}\) on \(U=k^{r}\), and let \(W^{s}=U/U_{0}^{s}\).
3. Write \(\tilde{\rho}_{s}\) for the representation \(F_{r}\to\operatorname{GL}(W^{s})\) induced by \(\rho_{s}\).
**Lemma 3.3**.: _Let \(k\) be a field. Given \(s\in V(r)(k)\), we have the following._
1. _(Coxeter identity) We have_ \(\rho(\gamma_{[r]})=-s^{-1}s^{T}\)_._
2. _The symmetric pairing_ \(s+s^{T}\) _on_ \(k^{r}\) _is preserved by_ \(\rho_{s}\)_._
3. _For any_ \(I\subseteq[r]\)_, we have_ \(\det(\lambda-\rho_{s}(\gamma_{I}))=(\lambda-1)^{r-|I|}\det(\lambda+s_{I}^{-1} s_{I}^{T})\)_._
4. \(U_{0}^{s}\) _is an invariant subspace of_ \(U\) _on which_ \(F_{r}\) _acts trivially via_ \(\rho_{s}\)_._
5. \(\tilde{\rho}_{s}\) _preserves the nondegenerate symmetric pairing on_ \(W^{s}\) _induced by_ \(s+s^{T}\)_._
Proof.: We note that, for each \(k\), the product \(s(1-E_{1}(s+s^{T}))\cdots(1-E_{k}(s+s^{T}))\) has the same first \(k\) rows as \(-s^{T}\) and the same last \(r-k\) rows as \(s\). In particular, we have \(s\rho(\gamma_{[r]})=-s^{T}\), proving (1). To prove (2), it suffices to show that
\[\rho_{s}(\gamma_{k})^{T}(s+s^{T})\rho_{s}(\gamma_{k})=s+s^{T}\]
for each \(i=1,\ldots,r\). By definition,
\[\rho_{s}(\gamma_{k})^{T}(s+s^{T})\rho_{s}(\gamma_{k}) =(1-E_{k}(s+s^{T}))^{T}(s+s^{T})(1-E_{k}(s+s^{T}))\] \[=(1-(s+s^{T})E_{k})(s+s^{T})(1-E_{k}(s+s^{T}))=s+s^{T}\]
noting that \(E_{k}(s+s^{T})E_{k}=2E_{k}\). This gives (2). To prove (3), let \(v_{1},\ldots,v_{r}\) denote the standard basis vectors of \(U=k^{r}\). Let \(I=\{i_{1}<\cdots<i_{|I|}\}\). Note that \(\rho_{s}(\gamma_{i})\) for each \(i\in I\) preserves \(U_{I}:=\operatorname{Span}(v_{i}:i\in I)\), and acts trivially on the quotient \(U/U_{I}\). Since the action of \(\rho_{s}(\gamma_{I})\) on \(U_{I}\) is given by \(-s_{I}^{-1}s_{I}^{T}\) by (1), we obtain (3). Parts (4) and (5) are clear.
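The identities (1) and (2) are also easy to confirm numerically; the following minimal sketch (illustrative only, not part of the paper) builds \(\rho_{s}(\gamma_{i})=\mathbb{I}-E_{i}(s+s^{T})\) for a random Stokes matrix and checks the Coxeter identity and the invariance of the pairing.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 5
s = np.eye(r) + np.triu(rng.normal(size=(r, r)), k=1)  # random Stokes matrix
B = s + s.T                                             # the symmetric pairing

def rho(i):
    # The generator rho_s(gamma_i) = I - E_i (s + s^T).
    E = np.zeros((r, r))
    E[i, i] = 1.0
    return np.eye(r) - E @ B

coxeter = np.linalg.multi_dot([rho(i) for i in range(r)])

# (1) Coxeter identity: rho_s(gamma_[r]) = -s^{-1} s^T
assert np.allclose(coxeter, -np.linalg.solve(s, s.T))
# (2) Every generator preserves the pairing s + s^T.
for i in range(r):
    g = rho(i)
    assert np.allclose(g.T @ B @ g, B)
```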
**Definition 3.4**.: Let \(K\) be a field complete with respect to a nontrivial absolute value \(|\cdot|\). We say that a subgroup \(G\leq\operatorname{GL}_{n}(K)\) is _bounded_ if it lies in a compact subgroup of \(\operatorname{GL}_{n}(K)\), with respect to the topology induced by \(|\cdot|\).
**Proposition 3.5**.: _Let \(K\) be a field complete with respect to a nontrivial absolute value \(|\cdot|\). Let \(s\in V(r)(K)\) be a Stokes matrix. If every eigenvalue of \(\rho_{s}(\gamma_{I})\) has absolute value \(1\) for every \(I\subseteq[r]\), then the semisimplification of \(\rho_{s}\) has bounded image._
Proof.: Let \(s=[x_{ij}]\in V(r)(K)\) be such that every eigenvalue of \(\rho_{s}(\gamma_{I})\) has absolute value \(1\) for every \(I\subseteq[r]\). By applying Lemma 3.3(3) with \(I=\{i,j\}\), we see that
\[\lambda^{2}+(2-x_{ij}^{2})\lambda+1=\det(\lambda+s_{I}^{-1}s_{I}^{T})=(\lambda-\zeta_{1})(\lambda-\zeta_{2})\]
for some \(\zeta_{1},\zeta_{2}=\zeta_{1}^{-1}\in K\) with absolute value \(1\) (for the unique absolute value on \(K(\zeta_{1})\) extending the one on \(K\)). If \(|\cdot|\) is nonarchimedean with valuation ring \(\mathcal{O}\), then \(x_{ij}^{2}-2=\zeta_{1}+\zeta_{2}\in\mathcal{O}\) and hence \(x_{ij}\in\mathcal{O}\). This shows \(\operatorname{Im}(\rho_{s})\leq\operatorname{GL}_{r}(\mathcal{O})\) and we are done. Thus, it remains to treat the case where \(|\cdot|\) is archimedean. We may assume that \(K=\mathbb{R}\) or \(\mathbb{C}\) and that \(|\cdot|\) is the usual absolute value on \(\mathbb{C}\). In this case, \(\zeta_{1}=\bar{\zeta}_{2}\in\mathbb{C}\) and \(x_{ij}^{2}=2+2\Re(\zeta_{1})\geq 0\) and hence \(x_{ij}\in\mathbb{R}\) for each \(1\leq i<j\leq r\). This shows that \(s\in V(r)(\mathbb{R})\), and we may henceforth assume that \(K=\mathbb{R}\). Next, we claim that \(s+s^{T}\) is positive semidefinite. We recall the following generalization of Sylvester's criterion:
**Fact 3.6**.: _Suppose \(a\) is an \(r\times r\) real symmetric matrix such that \(\det(a_{I})\geq 0\) for every \(I\subseteq[r]\). Then \(a\) is positive semidefinite._
It thus suffices to show that \(\det((s+s^{T})_{I})\geq 0\) for each \(I\subseteq[r]\). We have
\[\det((s+s^{T})_{I})=\det(s_{I}+s_{I}^{T})=\det(\mathbb{I}+s_{I}^{-1}s_{I}^{T}).\]
By Lemma 3.3(3), the right hand side is a product of numbers of the form \(1-\zeta\) where \(\zeta\) is an eigenvalue of \(\rho_{s}(\gamma_{I})\). Moreover, the nonreal factors \(1-\zeta\) come in conjugate pairs since \(s_{I}\) has real entries. Since the eigenvalues of \(\rho_{s}(\gamma_{I})\) have absolute value \(1\), it follows that \(\det((s+s^{T})_{I})\geq 0\). Thus \(s+s^{T}\) is positive semidefinite.
By Lemma 3.3(5), the associated representation \(\tilde{\rho}_{s}\) on \(W^{s}\) preserves the positive definite symmetric pairing induced by \(s+s^{T}\). It follows that \(\operatorname{Im}(\tilde{\rho}_{s})\) lies in a compact subgroup of \(\operatorname{GL}(W^{s})\), and is in particular semisimple. The semisimplification of \(\rho_{s}\), being the direct sum of \(\tilde{\rho}_{s}\) with \(\dim U_{0}^{s}\) copies of the trivial representation, is therefore bounded.
**Corollary 3.7**.: _Let \(k\) be a field of characteristic zero. Let \(s\in V(r)(k)\) be a Stokes matrix. If \(\rho_{s}(\gamma_{I})\) has finite order for every \(I\subseteq[r]\), then the semisimplification of \(\rho_{s}\) has finite image._
Proof.: First, we may assume that \(k\) is algebraically closed. Let us write \(s=[x_{ij}]\). By Lemma 3.3(3) with \(I=\{i,j\}\), we see by arguing as in the proof of Proposition 3.5 that every \(x_{ij}\) is an algebraic integer, and hence the eigenvalues of \(\rho_{s}(\gamma)\) are all algebraic integers for all \(\gamma\in F_{r}\). Moreover, we have \(\operatorname{Im}(\rho_{s})\subseteq\operatorname{GL}_{r}(\bar{\mathbb{Q}})\), and hence we may assume that \(k=\bar{\mathbb{Q}}\). Now, for each embedding \(\sigma:\bar{\mathbb{Q}}\hookrightarrow\mathbb{C}\), it follows by Proposition 3.5 that the image of
\[\tilde{\rho}_{s}^{\sigma}=\sigma\circ\tilde{\rho}_{s}:F_{r}\to\operatorname{ GL}(W^{s})\to\operatorname{GL}(W^{s}\otimes\mathbb{C})\]
is bounded, and in particular \(\tilde{\rho}_{s}^{\sigma}(\gamma)\) is semisimple with all eigenvalues of absolute value \(1\) for all \(\gamma\in F_{r}\). We thus deduce that for any \(\gamma\in F_{r}\) the eigenvalues of \(\rho_{s}(\gamma)\) are algebraic integers all of whose conjugates have absolute value \(1\), and hence are roots of unity by Kronecker's theorem. Since \(\tilde{\rho}_{s}(\gamma)\) is semisimple, it follows that \(\tilde{\rho}_{s}(\gamma)\) has finite order for every \(\gamma\in F_{r}\), and therefore \(\tilde{\rho}_{s}\) has finite image by Schur's theorem. Since
the semisimplification of \(\rho_{s}\) is a direct sum of \(\tilde{\rho}_{s}\) and \(\dim U_{0}\) copies of the trivial representation, the desired result follows.
## 4. Reflective representations
### Reflective representations
Our aim here is to prove Theorem 1.5. Let us first introduce the notion of reflective representations. Let \((\cdot,\cdot)\) be the standard symmetric bilinear form on \(\mathbb{C}^{n}\). Let \(q(x_{1},\ldots,x_{n})=x_{1}^{2}+\cdots+x_{n}^{2}\) be the associated quadratic form. Let \(S(n)=\{v\in\mathbb{C}^{n}:q(v)=1\}\) denote the (algebraic) unit sphere.
**Definition 4.1**.: Let \(F_{r}=\langle\gamma_{1},\ldots,\gamma_{r}\rangle\) be a free group of rank \(r\). Given a sequence \(u=(u_{1},\ldots,u_{r})\in S(n)^{r}\), the _reflective representation_\(\rho_{u}:F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\) associated to \(u\) is the representation such that
\[\rho(\gamma_{i})(v)=v-2(v,u_{i})u_{i}\]
for all \(v\in\mathbb{C}^{n}\) and \(i=1,\ldots,r\). We call a representation \(\rho=F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\)_reflective_ if \(\rho=\rho_{u}\) for some \(u\in S(n)^{r}\).
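Note (a one-line check, implicit in the definition) that each generator maps to an involution: since \((u_{i},u_{i})=q(u_{i})=1\),
\[\rho_{u}(\gamma_{i})^{2}(v)=v-4(v,u_{i})u_{i}+4(v,u_{i})(u_{i},u_{i})u_{i}=v,\]
so \(\rho_{u}(\gamma_{i})\) is the orthogonal reflection in the hyperplane \(u_{i}^{\perp}\).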
The above terminology is motivated by the fact that, if the associated vectors \(u_{1},\ldots,u_{r}\) lie in \(\mathbb{R}^{n}\), then the image of \(F_{r}\) in \(\operatorname{O}(n,\mathbb{R})\) is a reflection group. For later use, it will be useful to set up some terminology for more general quadratic spaces. Our conventions and notations are imported from [7]. Let \(k\) be a field of characteristic zero. Let \(V\) be a finite-dimensional vector space over \(k\), also viewed as an affine space over \(k\). Let \(q\) be a nondegenerate quadratic form on \(V\). We define the _unit sphere_\(S(q)\) to be the affine hypersurface \(S(q)=\{u\in V:q(u)=1\}\). The orthogonal group \(\operatorname{O}(q)\) of \(q\) is the automorphism group of the quadratic space \((V,q)\). In other words,
\[\operatorname{O}(q)=\{g\in\operatorname{GL}(V):q(gv)=q(v)\text{ for all }v\in V\}\leq \operatorname{GL}(V).\]
The special orthogonal group of \(q\) is \(\operatorname{SO}(q)=\operatorname{O}(q)\cap\operatorname{SL}(V)\). Let
\[\operatorname{Cl}(q)=\operatorname{Cl}^{0}(q)\oplus\operatorname{Cl}^{1}(q)\]
be the \(\mathbb{Z}/2\)-graded Clifford algebra associated to \((V,q)\), and let \(j:V\to\operatorname{Cl}^{1}(V)\) be the canonical embedding. We denote by \(\alpha\) the automorphism of \(\operatorname{Cl}(q)\) induced by the automorphism \(v\to-v\) of the quadratic space \((V,q)\).
We define the pin group \(\operatorname{Pin}(q)\) as the subgroup of \(\operatorname{Cl}^{\times}(q)\) generated by \(j(S(q))\). We can write \(\operatorname{Pin}(q)=\operatorname{Pin}^{0}(q)\sqcup\operatorname{Pin}^{1}(q)\) where \(\operatorname{Pin}^{i}(q)=\operatorname{Pin}(q)\cap\operatorname{Cl}^{i}(q)\) for \(i=0,1\). By definition, the spin group is \(\operatorname{Spin}(q)=\operatorname{Pin}^{0}(q)\). We have a natural surjective morphism \(\pi:\operatorname{Pin}(q)\to\operatorname{O}(q)\) which sends \(g\in\operatorname{Pin}(q)\) to the orthogonal transformation \(\pi(g)\in\operatorname{O}(q)\) of \(V\) given by
\[\pi(g)(v)=\alpha(g)\,v\,g^{-1}\quad\text{for all}\quad v\in V.\]
The image of \(\operatorname{Spin}(q)\) under \(\pi\) is equal to \(\operatorname{SO}(q)\). The following definition was introduced in [7].
**Definition 4.2**.: Let \((V,q)\) be a nondegenerate quadratic space over a field \(k\) of characteristic zero, and let \(r\geq 1\) be an integer.
1. The _moduli space of \(r\) points on \(S(q)\)_ is \(A(r,q)=S(q)^{r}\mathbin{/\!\!/}\operatorname{SO}(q)\).
2. The _moduli space of \(r\) unoriented points on \(S(q)\)_ is \(A^{\prime}(r,q)=S(q)^{r}\mathbin{/\!\!/}\operatorname{O}(q)\).
Here, \(\operatorname{SO}(q)\) and \(\operatorname{O}(q)\) act diagonally on \(S(q)^{r}\).
For \((V,q)=(\mathbb{C}^{n},x_{1}^{2}+\cdots+x_{n}^{2})\), we recover \(S(n)=S(q)\), and we shall write \(\mathrm{O}(n)=\mathrm{O}(q)\), \(\mathrm{Cl}(n)=\mathrm{Cl}(q)\), \(A(r,n)=A(r,q)\), etc.
**Proposition 4.3**.: _The morphism \(S(n)^{r}\to\mathrm{Hom}(F_{r},\mathrm{GL}_{n})\) given by construction of the associated reflective representation \(u\mapsto\rho_{u}\) descends to a morphism_
\[\Phi:A^{\prime}(r,n)\to X(F_{r},\mathrm{GL}_{n})=\mathrm{Hom}(F_{r},\mathrm{ GL}_{n})\,/\!\!/\,\mathrm{GL}_{n}\,.\]
Proof.: Since \(A^{\prime}(r,n)\) is the categorical quotient of \(S(n)^{r}\) by \(\mathrm{O}(n)\), it suffices to show that if \(u,u^{\prime}\in S(n)^{r}\) are \(\mathrm{O}(n)\)-equivalent then they have the same image in \(X(F_{r},\mathrm{GL}_{n})\). But if \(u=g\cdot u^{\prime}\), then
\[\rho_{u}(\gamma_{i})(v)=v-2(v,u_{i})u_{i}=v-2(v,g\cdot u_{i}^{\prime})g\cdot u _{i}^{\prime}=g(\rho_{u^{\prime}}(\gamma_{i})(g^{-1}v))\]
for all \(v\in V\) and \(i=1,\ldots,r\). This gives us the desired result.
**Proposition 4.4**.: _Let \(r\geq 1\) be an integer._
1. _We have a chain of closed immersions_ \[A^{\prime}(r,1)\hookrightarrow A^{\prime}(r,2)\hookrightarrow\cdots \hookrightarrow A^{\prime}(r,r)\] _induced by the standard embedding of quadratic spaces_ \[(\mathbb{C}^{m},x_{1}^{2}+\cdots+x_{m}^{2})\to(\mathbb{C}^{n},x_{1}^{2}+\cdots +x_{n}^{2})\] _for_ \(m\leq n\)_. For_ \(r\leq m\)_, we have isomorphisms_ \(A^{\prime}(r,r)\simeq A^{\prime}(r,m)\)_._
2. _We have an isomorphism_ \(A^{\prime}(r,r)\to V(r)\) _given by taking_ \(u=[u_{1},\ldots,u_{r}]\) _to the unique Stokes matrix_ \(s=s(u)\) _such that_ \([2(u_{i},u_{j})]=s+s^{T}\)_._
3. _Given integers_ \(r\geq 1\) _and_ \(1\leq m\leq n\)_, the diagram_ _commutes, where the bottom horizontal arrow is induced by the standard block diagonal embedding_ \(\mathrm{GL}_{m}\to\mathrm{GL}_{n}\)_._
4. _Under the isomorphism \(A^{\prime}(r,r)\simeq V(r)\) of (2), the map \(V(r)\to X(F_{r},\operatorname{GL}_{r})\) sending \(s\) to the class of \(\rho_{s}\) agrees with the morphism \(\Phi\) of Proposition 4.3; that is, the corresponding diagram is commutative._

Proof.: (1) and (2) follow from the invariant theory of the orthogonal group ([13, Chapter 11 §2.1] and [19, Chapter 2 §9]). See [7, Proposition 5.3] for details. (3) is obvious. To prove (4), it suffices to prove the commutativity on the dense open subset of \(A^{\prime}(r,r)\) parametrizing those \(u\in A^{\prime}(r,r)\) represented by a sequence \((u_{1},\ldots,u_{r})\in S(r)^{r}\) with \(u_{1},\ldots,u_{r}\) linearly independent. But in this case, we see that \(\rho_{s(u)}\) is conjugate to \(\rho_{u}\) via the change-of-basis matrix \([u_{1}\cdots u_{r}]\).
We now restate and prove Theorem 1.5.
**Theorem 1.5**.: _Let \(F_{r}=\langle\gamma_{1},\ldots,\gamma_{r}\rangle\) be a free group of rank \(r\). Then_
\[L_{r}=\{\gamma_{i_{1}}\ldots\gamma_{i_{u}}:1\leq i_{1}<\cdots<i_{u}\leq r\text { and }1\leq u\leq r\}\]
_is a Burnside set for the class of all semisimple reflective representations of \(F_{r}\)._
Proof.: Let \(\rho:F_{r}\to\operatorname{GL}_{n}(\mathbb{C})\) be a semisimple reflective representation, with associated vectors \(u_{1},\dots,u_{r}\in S(n)\). By Proposition 4.4, there exist \(u_{1}^{\prime},\dots,u_{r}^{\prime}\in S(r)\) such that \([u_{1},\dots,u_{r}]=[u_{1}^{\prime},\dots,u_{r}^{\prime}]\) in \(A^{\prime}(r,r)\). Let \(s\in V(r)(\mathbb{C})\) be the unique Stokes matrix such that \(s+s^{T}=[2(u_{i}^{\prime},u_{j}^{\prime})]\). Let \(m=\max\{r,n\}\). The class of \(\rho_{s}\) in \(X(F_{r},\operatorname{GL}_{m})\) is the same as the class of \(\rho\). Since \(\rho\) is semisimple, it follows that \(\rho\) is isomorphic to the semisimplification of \(\rho_{s}\). Theorem 1.5 then follows from Corollary 3.7.
### Exceptional isomorphism
In [7], an exceptional isomorphism between the moduli space of points \(A(r+1,4)\) defined in Section 4 and the character variety \(X(F_{r},\operatorname{SL}_{2})=\operatorname{SL}_{2}^{r}\,/\!\!/\,\operatorname{SL}_{2}\) was established. We will make use of this isomorphism in the derivation of Theorem 1.4 from Theorem 1.5.
Let \(V=\operatorname{Mat}_{2}\) be the space of \(2\times 2\) matrices with the quadratic form \(q=\det\) given by the determinant. Let \(S(q)=\{v\in V:q(v)=1\}=\operatorname{SL}_{2}\). The Clifford algebra \(\operatorname{Cl}(q)\) associated to \((V,q)\) can be identified with the \(\mathbb{Z}/2\)-graded algebra
\[M=M^{0}\oplus M^{1}=\operatorname{Mat}_{2}^{2}\oplus\operatorname{Mat}_{2}^{2}\iota\]
where \(\iota(a,b)=(b,a)\iota\) for all \((a,b)\in\operatorname{Mat}_{2}^{2}\). We have an embedding \(j:V\to M^{1}\) given by \(j(x)=(x,\bar{x})\iota\) where \(\bar{x}\) denotes the adjugate of \(x\), namely,
\[\text{if }x=\begin{bmatrix}x_{11}&x_{12}\\ x_{21}&x_{22}\end{bmatrix}\text{ then }\bar{x}=\begin{bmatrix}x_{22}&-x_{12}\\ -x_{21}&x_{11}\end{bmatrix}\text{.}\]
Under the above identification, the spin group \(\operatorname{Spin}(q)=\langle j(S(q))\rangle\cap M^{0}\) is identified with \(\operatorname{SL}_{2}^{2}\). The conjugation action of \(\operatorname{Spin}(q)\) on \(j(V)\) factors through the surjection \(\pi:\operatorname{Spin}(q)\to\operatorname{SO}(4)\), and note that we have
\[(a,b)\cdot(u,u^{-1})\iota=(a,b)(u,u^{-1})\iota(a^{-1},b^{-1})=(aub^{-1},(aub^{ -1})^{-1})\iota\]
for every \((a,b)\in\operatorname{SL}_{2}^{2}\) and \(u\in\operatorname{SL}_{2}\). Thus, we have
\[S(q)^{r+1}\,/\!\!/\,\operatorname{SO}(q)=S(q)^{r+1}\,/\!\!/\,\operatorname{Spin}(q)=\operatorname{SL}_{2}^{r+1}\,/\!\!/\,\operatorname{SL}_{2}^{2}\]
where the action of \(\operatorname{SL}_{2}^{2}\) on \(\operatorname{SL}_{2}^{r+1}\) is given by
\[(a,b)\cdot(u_{0},\dots,u_{r})=(au_{0}b^{-1},\dots,au_{r}b^{-1})\]
for every \((a,b)\in\operatorname{SL}_{2}^{2}\) and \((u_{0},\dots,u_{r})\in\operatorname{SL}_{2}^{r+1}\). In light of this and the fact that the quadratic spaces \((\mathbb{C}^{4},x_{1}^{2}+\dots+x_{4}^{2})\) and \((V,q)\) are isomorphic over \(\mathbb{C}\), we obtain the exceptional isomorphism
\[A(r+1,4)=S(4)^{r+1}\,/\!\!/\,\operatorname{SO}(4)=S(4)^{r+1}\,/\!\!/\,\operatorname{Spin}(4)\simeq S(q)^{r+1}\,/\!\!/\,\operatorname{Spin}(q)\simeq\operatorname{SL}_{2}^{r+1}\,/\!\!/\,\operatorname{SL}_{2}^{2}\to\operatorname{SL}_{2}^{r}\,/\!\!/\,\operatorname{SL}_{2}=X(F_{r},\operatorname{SL}_{2})\]
where the arrow \(\operatorname{SL}_{2}^{r+1}/\!\!/\operatorname{SL}_{2}^{2}\to\operatorname{SL}_{2}^{r}/\!\!/\operatorname{SL}_{2}\) is given by
for \(i=0,\ldots,r\). Note that we have
\[\rho(\gamma_{i}) =\rho(\delta_{i-1}\delta_{i})=\rho(\delta_{i-1})\rho(\delta_{i})\] \[=(u_{i-1},u_{i-1}^{-1})\iota(u_{i},u_{i}^{-1})\iota=(u_{i-1},u_{i-1 }^{-1})(u_{i}^{-1},u_{i})\] \[=(u_{i-1}u_{i}^{-1},(u_{i-1}u_{i}^{-1})^{-1})\]
for \(i=1,\ldots,r\). Thus, the restriction
\[\rho_{u}|_{F_{r}}:F_{r}\to\operatorname{Spin}(q)=\operatorname{SL}_{2}\times \operatorname{SL}_{2}\]
gives rise to a pair \(\rho_{u}^{\prime},\rho_{u}^{\prime\prime}:F_{r}\rightrightarrows\operatorname {SL}_{2}\) of representations by two projections to \(\operatorname{SL}_{2}\), where concretely \(\rho_{u}^{\prime}(\gamma_{i})=u_{i-1}u_{i}^{-1}\) and \(\rho_{u}^{\prime\prime}(\gamma_{i})=(u_{i-1}u_{i}^{-1})^{-1}\) for \(i=1,\ldots,r\). The isomorphism
\[A(r+1,4)\simeq A(r+1,q)\simeq X(F_{r},\operatorname{SL}_{2})\]
is induced by the assignment \(u\mapsto\rho_{u}^{\prime}\).
_Remark_.: We can describe the assignment in the inverse direction as follows. Given a representation \(\rho^{\prime}:F_{r}\to\operatorname{SL}_{2}(\mathbb{C})\), we define
\[u=(u_{0},\ldots,u_{r})\in S(q)^{r+1}=\operatorname{SL}_{2}^{r+1}\]
by setting \(u_{0}=\mathbb{I}\) and inductively letting \(u_{i}=\rho^{\prime}(\gamma_{i})^{-1}u_{i-1}\) for each \(i=1,\ldots,r\). The class of \(u\) in \(S(q)^{r+1}/\!\!/\operatorname{SO}(q)\) is well-defined and depends only on the class of \(\rho^{\prime}\) in \(X(F_{r},\operatorname{SL}_{2})\), and provides the inverse map \(X(F_{r},\operatorname{SL}_{2})\to A(r+1,q)\simeq A(r+1,4)\).
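For completeness, one checks directly, using the relation \(\rho_{u}^{\prime}(\gamma_{i})=u_{i-1}u_{i}^{-1}\) recorded above, that this assignment indeed inverts \(u\mapsto\rho_{u}^{\prime}\):
\[\rho_{u}^{\prime}(\gamma_{i})=u_{i-1}u_{i}^{-1}=u_{i-1}\bigl(\rho^{\prime}(\gamma_{i})^{-1}u_{i-1}\bigr)^{-1}=u_{i-1}u_{i-1}^{-1}\rho^{\prime}(\gamma_{i})=\rho^{\prime}(\gamma_{i}),\qquad i=1,\ldots,r.\]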
### Degree \(2\) representations
We now discuss the derivation of Theorem 1.4 from Theorem 1.5. First, we record a basic lemma on Burnside sets.
**Lemma 4.5**.: _Let \(F\) be a free group of finite rank. Let \(G\) and \(H\) be groups._
1. _If_ \(G\to H\) _is a surjective morphism with finite kernel, a Burnside set for_ \(\operatorname{Hom}(F,H)\) _is a Burnside set for_ \(\operatorname{Hom}(F,G)\) _and vice versa._
2. _A Burnside set for the union_ \(\operatorname{Hom}(F,G)\cup\operatorname{Hom}(F,H)\) _is a Burnside set for_ \(\operatorname{Hom}(F,G\times H)\)_, and vice versa._
Proof.: (1) A Burnside set for \(\operatorname{Hom}(F,H)\) is a Burnside set for \(\operatorname{Hom}(F,G)\), since a morphism \(F\to G\) has finite image if and only if the composition \(F\to G\to H\) has finite image. A Burnside set for \(\operatorname{Hom}(F,G)\) is a Burnside set for \(\operatorname{Hom}(F,H)\), since a morphism \(F\to H\) lifts to a morphism \(F\to G\), which has finite image if and only if the original morphism has finite image. (2) This follows from the fact that a morphism \(F\to G\times H\) has finite image if and only if each of its components \(F\to G\) and \(F\to H\) has finite image.
**Theorem 1.4**.: _Let \(F\) be a group generated by a finite set \(S\). The set of elements of word length \(\leq 3|S|\) in \(S\) is a Burnside set for \(\operatorname{Hom}(F,\operatorname{GL}_{2}(\mathbb{C}))\)._
Proof.: It suffices to treat the case where \(F=F_{r}\) is a free group on \(r\) generators \(\gamma_{1},\ldots,\gamma_{r}\). We will explicitly construct a finite set of words in \(S=\{\gamma_{1},\ldots,\gamma_{r}\}\) that provides a Burnside set for \(\operatorname{Hom}(F_{r},\operatorname{GL}_{2}(\mathbb{C}))\). First, note that the morphisms
\[\operatorname{SL}_{2}(\mathbb{C})\xrightarrow{\operatorname{pr}}\operatorname {PSL}_{2}(\mathbb{C})\quad\text{and}\quad\operatorname{GL}_{2}(\mathbb{C}) \xrightarrow{(\det,\operatorname{pr})}\mathbb{C}^{\times}\times\operatorname {PSL}_{2}(\mathbb{C})\]
are surjective with finite fibers. In light of Lemma 4.5, we see that if \(L\subseteq F_{r}\) is a finite Burnside set for \(\operatorname{Hom}(F,\operatorname{SL}_{2}(\mathbb{C}))\) that contains \(\{\gamma_{1},\ldots,\gamma_{r}\}\), then \(L\) is a
Burnside set for \(\operatorname{Hom}(F,\operatorname{GL}_{2}(\mathbb{C}))\). It therefore suffices to construct a Burnside set for \(\operatorname{Hom}(F,\operatorname{SL}_{2}(\mathbb{C}))\). We divide our analysis into the loci
\[\operatorname{Hom}(F,\operatorname{SL}_{2}(\mathbb{C}))=\operatorname{Hom}(F, \operatorname{SL}_{2}(\mathbb{C}))^{\operatorname{red}}\cup\operatorname{Hom}(F,\operatorname{SL}_{2}(\mathbb{C}))^{\operatorname{ss}}\]
of reducible representations and of semisimple representations. We first claim that the set
\[L^{\prime}=\{\gamma_{1},\dots,\gamma_{r}\}\cup\{[\gamma_{i},\gamma_{j}]:1\leq i <j\leq r\}\]
is a Burnside set for \(\operatorname{Hom}(F,\operatorname{SL}_{2}(\mathbb{C}))^{\operatorname{red}}\). Indeed, suppose that \(\rho:F_{r}\to\operatorname{SL}_{2}(\mathbb{C})\) is reducible and \(\rho(\gamma)\) has finite order for every \(\gamma\in L^{\prime}\). By reducibility of \(\rho\), we see that \(\rho([F_{r},F_{r}])\) is unipotent. It follows that \(\rho([\gamma_{i},\gamma_{j}])\), being of finite order, must be the identity matrix for each \(1\leq i<j\leq r\). It follows that the image of \(\rho\) is abelian, and hence finite since \(\gamma_{1},\dots,\gamma_{r}\in L^{\prime}\). Note that each \(\gamma\in L^{\prime}\) has word length \(\leq 4\) (and \(L^{\prime}=\{\gamma_{1}\}\) if \(r=1\)).
It remains to find a finite Burnside set for the locus of semisimple representations \(\operatorname{Hom}(F_{r},\operatorname{SL}_{2}(\mathbb{C}))^{\operatorname{ss}}\). Given \(\rho\in\operatorname{Hom}(F_{r},\operatorname{SL}_{2}(\mathbb{C}))^{ \operatorname{ss}}\), let \(u\in S(q)^{r+1}\) be a sequence of vectors such that \(\rho=\rho^{\prime}_{u}\) using the notation of Section 4.2. Then we see that \(\rho\) has finite image if and only if \(\rho_{u}:W_{r}\to\operatorname{Pin}(q)\) has finite image. Since the map \(\operatorname{Pin}(q)\to\operatorname{O}(q,\mathbb{C})\simeq\operatorname{O}( 4,\mathbb{C})\) is surjective with finite kernel (the latter may be verified by explicit computation), we conclude by Lemma 4.5 that \(\rho_{u}\) has finite image if and only if the composition \(\bar{\rho}_{u}:W_{r}\to\operatorname{O}_{n}(\mathbb{C})\) has finite semisimplification. But \(\bar{\rho}_{u}\) is a reflective representation, and by Theorem 1.5 (or rather its proof) it has finite semisimplification if and only if \(\rho_{u}(\gamma)\) has finite order for all \(\gamma\in L^{\prime\prime\prime}\), where
\[L^{\prime\prime\prime}=\{\delta_{i_{1}}\cdots\delta_{i_{u}}:0\leq i_{1}< \cdots<i_{u}\leq r\text{ and }1\leq u\leq r+1\}.\]
The last condition is satisfied if and only if \(\rho(\gamma)\) has finite order for every \(\gamma\in L^{\prime\prime}\), where \(L^{\prime\prime}=\{\delta^{2}:\delta\in L^{\prime\prime\prime}\}\subseteq F_{r}\). Finally, note that every \(\gamma\in L^{\prime\prime}\) has word length at most \(3r\) in \(\gamma_{1},\dots,\gamma_{r}\).
**Corollary 4.6**.: _Let \(\Sigma\) be a surface of genus \(g>0\) with \(n\leq 2\) punctures. There is an explicit finite Burnside set \(L\) consisting of simple loops on \(\Sigma\) for the class of all semisimple representations \(\pi_{1}(\Sigma)\to\operatorname{GL}_{2}(\mathbb{C})\)._
Proof.: We may assume without loss of generality that \(\Sigma\) has at least one puncture. Let \(r=2g+n-1\), so that \(\pi_{1}(\Sigma)\simeq F_{r}\). Note that \(\Sigma\) is a hyperelliptic double covering of the orbifold surface \(\bar{\Sigma}\) of genus \(0\) with \(r+1=2g+n\) orbifold points of index \(2\) and one puncture. The orbifold fundamental group of \(\bar{\Sigma}\) is
\[W_{r}=\langle\delta_{0},\dots,\delta_{r}|\delta_{0}^{2},\dots,\delta_{r}^{2}\rangle\]
in which \(\pi_{1}(\Sigma)=F_{r}\) sits as the index \(2\) subgroup generated by \(\gamma_{1},\dots,\gamma_{r}\), where we define \(\gamma_{i}=\delta_{i-1}\delta_{i}\) for \(i=1,\dots,r\). Since the elements of the set \(L^{\prime\prime\prime}\) considered in the proof of Theorem 1.5 above correspond to simple loops on \(\bar{\Sigma}\), we see that the set \(L^{\prime\prime}=\{\delta^{2}:\delta\in L^{\prime\prime\prime}\}\subseteq \pi_{1}(\Sigma)\) consists of simple loops or their squares on the double cover \(\Sigma\). Since \(L^{\prime\prime}\) thus constructed is a Burnside set for the semisimple representations \(F_{r}\to\operatorname{GL}_{2}(\mathbb{C})\), the claim follows.
|
2309.14290 | Automated Market Makers for Cross-chain DeFi and Sharded Blockchains | In this paper we provide an execution framework for Automated Market Maker
(AMM) to be deployed across independent blockchain platforms as well as
concurrent sharding within the same blockchain platform. The framework provides
economic incentives to participate through a mechanism that guarantee fixed
prices across pairwise liquidity pools. | Mohsen Pourpouneh, Kurt Nielsen, Jesper Balman Gravgaard | 2023-09-25T16:53:37Z | http://arxiv.org/abs/2309.14290v2 | # Automated Market Makers for Cross-chain DeFi and Sharded Blockchains+
###### Abstract
In this paper we provide an execution framework for an Automated Market Maker (AMM) to be deployed across independent blockchain platforms as well as concurrent sharding within the same blockchain platform. The framework provides economic incentives to participate through a mechanism that guarantees fixed prices across pairwise liquidity pools.
_Keywords:_ DeFi, Automated Market Makers, Sharded Blockchains, Cross-chain DeFi.
## 1 Introduction
Automated Market Makers (AMMs) are an important part of DeFi and a major driver of the Web 3.0 adoption across different blockchain platforms. The initial motivation of AMMs was the opportunity to exchange crypto assets without direct interaction and matching of buyers and sellers. This significantly reduces the complexity of the solution while, at the same time, improving the security of the AMM as a truly decentralized application. Note that an AMM solution is essentially one or more smart contracts safeguarded by the underlying blockchain platform. Ethereum
has been the most used blockchain platform for AMM solutions, and token bridges as well as second layer blockchains have broadened the uptake. Recent developments take this one step further and run AMMs across independent blockchain platforms, i.e., cross-chain DeFi. This opens up a set of new challenges, such as the challenge of representing states and, hence, data and tokens across independent blockchain platforms. Another challenge is to mimic the economic incentives cooked into the Ethereum version of AMMs. This is primarily the arbitrage opportunity that comes from the "all or nothing" execution, i.e. atomic execution of the entire AMM solution "one user at a time". As an example, with Uniswap a user can swap asset \(A\) to asset \(B\) and then swap asset \(B\) to asset \(C\) and then potentially swap asset \(C\) back to asset \(A\) without other users interfering. Sometimes this "multi-swap" returns a profit to the user and, since the public ledger allows anyone to constantly monitor the AMM solution, this type of arbitrage is essentially free. This is, however, only feasible because of the atomic execution and sequential use of the AMM solution and cannot be transferred to a cross-chain AMM solution without additional built-in incentives.
The cross-chain situation is similar to the most efficient sharding model, which essentially operates like independent blockchains where transactions are automatically off-loaded across different shards in an ideal way that favors unlimited parallelisation, i.e. asynchronous and concurrent execution. Although very few blockchain networks come with such truly scalable sharding, there are a few, such as the Partisia Blockchain1. Examples of other sharded blockchains are Elastico Luu et al. (2016), OmniLedger Kokoris-Kogias et al. (2018), and RapidChain Zamani et al. (2018).
Footnote 1: [https://partisiablockchain.gitlab.io/documentation/](https://partisiablockchain.gitlab.io/documentation/)
In this paper we present a way to adjust AMMs (using the Uniswap V2 Adams et al. (2020) as an illustrative example) to work across independent blockchains and shards and still allow for guaranteed multi-swap exchanges and arbitrage opportunities. This is done without the non-scalable sequential use of the entire AMM solution. The model introduces a "lock-swap" mechanism that guarantees a user fixed prices (as relative pairwise exchange prices) for a given swap. Such a lock-swap creates a "virtual liquidity pool" that other users face if they enter the AMM solution after a lock-swap has been placed. The virtual liquidity pool provides worse relative prices to the user than those given to the user holding the lock-swap, hence the mechanism favors first movers. Unlike the Ethereum sequential use, the "lock-swap" minimizes the impact on the entire AMM solution such that assets can be exchanged in parallel and across independent blockchains and shards - also in liquidity pools with one or more lock-swaps.
The lock-swap is an option where "instant-swap" is the default. While capturing and representing states across independent blockchains is time consuming, the use of lock-swap may become the default in that setting. On the other hand, with a scalable blockchain like Partisia Blockchain, where transactions are executed as fast as information flows through the network, instant-swap may be sufficient. Also, with an efficient market the multi-swap arbitrage opportunities may anyway only exist for very short periods. Finally, unlike Ethereum, the lock-swap is a simple add-on feature, whereas the sequential use of AMMs on Ethereum is cooked into the very foundation, i.e., the Ethereum blockchain platform. This might be relevant for the uptake of AMMs outside of the blockchain
ecosystem, as built-in "free" arbitrage opportunities may not be acceptable to financial regulators.
## 2 Uniswap core model
Decentralized exchanges (DeX) are smart contracts that allow users to swap assets without the need for any third party or broker. A well-studied category of DeXs is the so-called automated market makers (AMMs). To execute swaps, AMMs use a bonding curve that determines the prices of the assets (with respect to each other) purely based on the amount of assets (liquidity) locked in the smart contract. In its simplest form, an AMM has two reserve pools, also known as _liquidity pools_, for a pair of assets, say \(A\) and \(B\). Users can exchange the asset \(A\) for asset \(B\) by depositing the \(A\) asset in the smart contract and receiving the \(B\) asset in return. Anyone can add liquidity to these pools by depositing \(A\) and \(B\) assets to each pool. To incentivize the liquidity providers, the pool charges a fee for every swap being executed, which is distributed (proportionally) among those who provided liquidity. The AMMs use a deterministic pricing rule to provide the exchange ratio (i.e., the price) between the two assets, hence they are also called "Constant Function Market Makers" Angeris and Chitra (2020). Uniswap Adams (2020) is one of the most prominent implementations of such models, which uses a constant product function to determine the price between the pair of assets. Other examples include Stable Swap Egorov (2019), and Balancer Martinelli and Mushegian (2019).
The contract curve design for the core of Uniswap requires that at any time the reserves of two pools satisfy a _constant product_ formula. That is, let \(A_{t}\) and \(B_{t}\) be the balances of the two pools at any time \(t\), and \(\Delta A>0\) be a swap for the \(A\) asset in exchange for the \(B\) asset. Then the balances of the two pools (in the absence of swapping fees) should change in such a way that, \((A_{t}+\Delta A)(B_{t}-\Delta B)=A_{t}\times B_{t}\). This is illustrated in Figure 1.
Figure 1: Uniswap V2 curve and a swap of size \(\Delta A\).
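For concreteness, the constant-product quote can be written as a short Python function. This is an illustrative sketch of ours; the helper name `swap_out` and the numeric example are placeholders, not part of the Uniswap code base.

```python
def swap_out(dx, x, y):
    """Constant-product quote: units of Y paid out when dx units of X are added to an (x, y) pool."""
    dy = y - (x * y) / (x + dx)
    # invariant check: (A_t + dA)(B_t - dB) = A_t * B_t, up to floating-point error
    assert abs((x + dx) * (y - dy) - x * y) < 1e-9
    return dy

print(swap_out(20, 100, 10))   # ~1.667 units of B for 20 units of A against a <100, 10> pool
```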
## 3 The basic idea
As an example consider two shards. On the first shard there is a pool for exchanging \(A\) to \(B\) assets. Let \(\langle A^{1},B^{1}\rangle=\langle 100,10\rangle\) denote the (actual) balances of the \(A\) and \(B\) assets on shard 1. On the second shard there is a pool for exchanging \(B\) and \(C\) assets. Let \(\langle B^{2},C^{2}\rangle=\langle 200,20\rangle\) denote the balances of the \(B\) and \(C\) assets on shard 2. A user wants to swap 20 of the \(A\) asset for \(C\) asset. We assume that the user is only interested in either getting the \(C\) asset or keeping the \(A\) asset. In what follows, we consider two scenarios, one for the case that the user executes the swap by simply executing the swaps one by one; and the other using a lock mechanism.
### Naive approach
By submitting the swap for \(\Delta A=20\), the user expects to get \(\Delta B=1.67\) of the \(B\) asset, and to exchange it for \(\Delta C=0.165\) of the \(C\) asset.
Consider a scenario where the user successfully swaps the \(\Delta A=20\) on shard 1 and gets 1.67 of the \(B\) asset. Meanwhile, another user submits a swap for 1.67 of the \(B\) asset, which (in the absence of fees) updates the AMM back to \(\langle A^{1},B^{1}\rangle=\langle 100,10\rangle\). Therefore, after these two swaps the AMM state is the same as before the first swap.
Now, assume that (after the user swaps \(\Delta A\) to \(\Delta B\)) the ratio of the \(\langle B^{2},C^{2}\rangle\) pool on shard 2 changes to, say, \(\langle B^{2},C^{2}\rangle=\langle 250,16\rangle\). This can be due to a swap that is being executed on shard 2 on the \(\langle B^{2},C^{2}\rangle\) pool. This implies that the user, with the input \(\Delta B=1.67\), only gets \(\Delta C=0.1\) of the \(C\) asset, which is a lot less than the expected 0.165 of the \(C\) asset. Therefore, the user either has to accept the 0.1 of the \(C\) asset, or he must trade the \(\Delta B\) back to the \(A\) asset. However, due to the changes in shard 1, an input of size \(\Delta B=1.67\) implies \(\Delta A=14.3\). Therefore, the user is stuck between getting \(\Delta C=0.1\) of the \(C\) asset or \(\Delta A=14.3\) of the \(A\) asset in return for the 20 of the initial \(A\) asset.
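The figures above can be reproduced with a few lines of Python. This is a numerical sketch of ours using the constant-product quote; the helper name is a placeholder.

```python
def swap_out(dx, x, y):
    # constant-product quote: output of Y for dx units of X added to an (x, y) pool
    return y - (x * y) / (x + dx)

dB = swap_out(20, 100, 10)                # shard 1: 20 A -> ~1.667 B
print(round(swap_out(dB, 200, 20), 3))    # expected on shard 2: ~0.165 C
print(round(swap_out(dB, 250, 16), 3))    # after shard 2 has moved to <250, 16>: only ~0.106 C
print(round(swap_out(dB, 10, 100), 1))    # or trade the B back on shard 1 (<100, 10>): ~14.3 A
```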
### Scalable AMM
We assume that the user submits a multi-swap of size \(\langle\Delta A=20,\Delta C\rangle\), with the minimum acceptable \(\widetilde{\Delta C}\) of the \(C\) assets. Further, we assume that no lock-swap has been submitted by any user so far, hence the balances of the virtual AMM are the same as those of the actual AMM. That is, \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle A^{1},B^{1}\rangle\). Since the \(A\) and \(C\) tokens are on different shards, the multi-swap is executed by a lock-swap for \(A\to B\) on shard 1, followed by another lock-swap for \(B\to C\) on shard 2.
\[\langle\Delta A,\Delta C\rangle\rightarrow\underbrace{\langle\Delta A,\Delta B \rangle}_{\text{lock-swap}}\text{ and }\underbrace{\langle\Delta B,\Delta C\rangle}_{\text{lock-swap}}\]
In what follows we describe the steps to execute such a multi-swap, using two lock-swaps on shard 1 and 2.
1. The lock-swap on shard 1 yields \(\Delta B=1.67\) for the given \(\Delta A=20\). Lock the \(\Delta A\) and \(\Delta B\) assets on shard 1. However, as the lock-swap is not executed yet (and might be cancelled
later), the balances of the actual pool remain the same, i.e., \(\langle A^{1},B^{1}\rangle=\langle 100,10\rangle\); however, the lock-swap changes the balances of the virtual pool, which keeps track of the liquidity as if the lock-swap had been executed, i.e., \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle 120,8.33\rangle\).
2. Perform a lock-swap of \(\Delta B=1.67\) for \(\Delta C\) on shard 2 (the steps are similar to those of the lock-swap for \(\Delta A\) to \(\Delta B\)). For the above example this yields \(\Delta C=0.165\) of the \(C\) assets. * If \(\Delta C\geq\widetilde{\Delta C}\), then all locks need to be resolved. The first lock is resolved by adding the \(\Delta A\) assets to the \(A\) pool on shard 1. This updates the actual balances to \(\langle A^{1},B^{1}\rangle=\langle 120,8.33\rangle\), while keeping the balances of the virtual pool untouched (since the lock-swap is already executed on the virtual pool). Given the output, i.e., \(\Delta B\), the assets are added to the \(B^{2}\) pool on shard 2, the \(\Delta C\) is returned to the user, and the pool updates to \(\langle B^{2},C^{2}\rangle=\langle 201.67,19.835\rangle\). Note that in this case the user gets at least \(\widetilde{\Delta C}\). * If \(\Delta C<\widetilde{\Delta C}\), then both lock-swaps must be cancelled. This implies that the locked amounts are removed/added back on the virtual pools, i.e., \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle 100,10\rangle\), whereas the actual balances of the pool remain untouched (since the lock-swap was not executed on the actual AMM). Note that, unlike the naive approach, in this case the user gets 20 of the \(A\) assets back. Both outcomes are illustrated numerically in the sketch below.
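The following sketch (ours) reproduces the lock-swap bookkeeping above; the threshold `dC_min` stands in for the user's \(\widetilde{\Delta C}\) and is a placeholder.

```python
def swap_out(dx, x, y):
    # constant-product quote on an (x, y) pool
    return y - (x * y) / (x + dx)

A1, B1 = 100.0, 10.0               # actual <A, B> pool on shard 1
B2, C2 = 200.0, 20.0               # actual <B, C> pool on shard 2
dA = 20.0
dB = swap_out(dA, A1, B1)          # ~1.667 B locked on shard 1
A1L, B1L = A1 + dA, B1 - dB        # virtual shard-1 pool: <120, 8.33>
dC = swap_out(dB, B2, C2)          # ~0.165 C locked on shard 2

dC_min = 0.16                      # placeholder for the user's minimum acceptable amount
if dC >= dC_min:                   # execute: actual pools catch up with the virtual ones
    A1, B1 = A1L, B1L
    B2, C2 = B2 + dB, C2 - dC      # -> <201.67, 19.835>, and dC is paid to the user
else:                              # cancel: roll the virtual pool back, the user keeps 20 A
    A1L, B1L = A1, B1
print(round(dC, 3), (round(A1, 2), round(B1, 2)), (round(B2, 2), round(C2, 3)))
```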
The above example shows the execution of a multi-swap when no swap is executed after the lock-swap. Next we consider two cases, where an instant-swap is executed after the lock-swap.
### Swap in the same direction as the lock-swap
Consider the case where an instant-swap is submitted by a user after the lock-swap. That is, let the first transaction be a multi-swap for \(A\) to \(C\) assets with \(tx_{1}=\langle\Delta A=20,\Delta C\rangle\), and the second transaction be an instant-swap of the form \(tx_{2}=\langle\overline{\Delta A}=10,\overline{\Delta B}\rangle\), i.e., exchanging 10 of the \(A\) asset for the \(B\) asset. The following steps take place:
1. The first transaction is a lock-swap, therefore it locks 1.67 of the \(B\) asset on shard 1, and the incoming 20 of the \(A\) asset from the user. This keeps the balances of the actual AMM the same, and updates the virtual AMM to \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle 120,8.33\rangle\). Figure 2, shows the state of the curves before and after the lock-swap.
2. The second swap is an instant-swap; as the pool is partially locked by transaction 1, we consider two situations: executing \(tx_{2}\) with the lock-swap and without the lock-swap. The output asset will be the minimum of the two. That is, * Swap without \(tx_{1}\), i.e., swap on the actual AMM. In this case, \(\Delta B_{1}=0.9\). * Swap with \(tx_{1}\), i.e., swap on the virtual AMM. In this case \(\Delta B_{2}=0.64\). Take \(\Delta B=\min\{\Delta B_{1},\Delta B_{2}\}=0.64\) as the output of the instant-swap. Since the swap is an instant-swap, it is executed on the actual pool; therefore, the balances of the actual AMM update to \(\langle A^{1},B^{1}\rangle=\langle 110,9.36\rangle\), and the virtual AMM to \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle 130,7.69\rangle\). Figure 3 shows the state of the curves after the instant-swap.
3. Given that transaction 1 locks the pool, to resolve the lock-swap there are two possible cases: either transaction 1 can be completed (executed) with respect to the minimum acceptable by the user (i.e., \(\widetilde{\Delta C}\)), or it must be cancelled. * \(tx_{1}\) is executed. That is, exchanging \(\Delta B\) for the \(C\) asset on shard 2 yields at least \(\widetilde{\Delta C}\). In this case all the locks need to be resolved. Let \(\langle B^{2}_{L},C^{2}_{L}\rangle=\langle B^{2}+\Delta B,C^{2}-\Delta C\rangle\). The actual pool updates to \(\langle A^{1},B^{1}\rangle=\langle 130,7.69\rangle\), as the locked liquidities are added/removed from the pools. Note that this is similar to the case where the lock-swap was executed in the first place. As a result, the two curves (i.e., the curves corresponding to the \(\langle A^{1},B^{1}\rangle\) and \(\langle A^{1}_{L},B^{1}_{L}\rangle\) pools) are the same. Finally, \(\Delta B\) is added to the actual AMM of the \(\langle B,C\rangle\) pair, and the \(\Delta C\) is returned to the user, i.e., \(\langle B^{2},C^{2}\rangle=\langle B^{2}+\Delta B,C^{2}-\Delta C\rangle\). * \(tx_{1}\) is cancelled. In this case the state of the pool remains the same as after transaction \(tx_{2}\), i.e., \(\langle A^{1},B^{1}\rangle=\langle 110,9.36\rangle\).
Figure 2: Illustration of the instant-swap and lock-swap.
However, since the lock-swap is cancelled, the liquidities that were added due to the lock-swap must be removed; hence the state is updated to \(\langle A_{L}^{1},B_{L}^{1}\rangle=\langle 130-20,7.69+1.67\rangle\). Therefore this is similar to the case where the lock-swap did not exist in the first place. As a result, the two curves (i.e., the curves corresponding to the \(\langle A^{1},B^{1}\rangle\) and \(\langle A_{L}^{1},B_{L}^{1}\rangle\) pools) are the same.
### Swap in the opposite direction as the lock-swap
Consider the case where an instant-swap is submitted by a user after the lock-swap. That is, let the first transaction be a multi-swap for \(A\) to \(C\) assets with \(tx_{1}=\langle\Delta A=20,\Delta C\rangle\), and the second transaction be an instant-swap of the form \(tx_{2}=\langle\overline{\Delta A},\overline{\Delta B}=3\rangle\), i.e., exchanging 3 of the \(B\) asset for the \(A\) asset. The following steps take place:
1. The first transaction is a lock-swap, therefore it locks 1.67 of the \(B\) asset on shard 1, and the incoming 20 of the \(A\) asset from the user. This keeps the balances of the actual AMM untouched, and updates the virtual AMM to \(\langle A_{L}^{1},B_{L}^{1}\rangle=\langle 120,8.33\rangle\).
2. The second swap is an instant-swap; as the pool is partially locked by transaction 1, we consider two situations: executing \(tx_{2}\) with the lock-swap and without the lock-swap. The output asset will be the minimum of the two. That is, * Swap without \(tx_{1}\), i.e., swap on the actual AMM. In this case \(\Delta A_{1}=23.08\). * Swap with \(tx_{1}\), i.e., swap on the virtual AMM. In this case \(\Delta A_{2}=31.76\). Take \(\Delta A=\min\{\Delta A_{1},\Delta A_{2}\}=23.08\).
Figure 3: State of the curves after an instant-swap.
Since the swap is an instant-swap, it is executed on the actual pool; therefore, the balances of the actual AMM update to \(\langle A^{1},B^{1}\rangle=\langle 76.92,13\rangle\), and the virtual AMM to \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle 96.92,11.33\rangle\).
3. Given that transaction 1 locks the pool, to resolve the lock-swap there are two possible cases: either transaction 1 can be completed (executed) with respect to the minimum acceptable by the user (i.e., \(\widetilde{\Delta C}\)), or it must be cancelled. * \(tx_{1}\) is executed. That is, exchanging \(\Delta B\) for the \(C\) asset on shard 2 yields at least \(\widetilde{\Delta C}\). In this case all the locks need to be resolved. Let \(\langle B^{2}_{L},C^{2}_{L}\rangle=\langle B^{2}+\Delta B,C^{2}-\Delta C\rangle\). The actual pool updates to \(\langle A^{1},B^{1}\rangle=\langle 96.92,11.33\rangle\), as the locked liquidities are added/removed from the pools. Note that this is similar to the case where the lock-swap was executed in the first place. As a result, the two curves (i.e., the curves corresponding to the \(\langle A^{1},B^{1}\rangle\) and \(\langle A^{1}_{L},B^{1}_{L}\rangle\) pools) are the same. Finally, \(\Delta B\) is added to the actual AMM of the \(\langle B,C\rangle\) pair, and the \(\Delta C\) is returned to the user, i.e., \(\langle B^{2},C^{2}\rangle=\langle B^{2}+\Delta B,C^{2}-\Delta C\rangle\). * \(tx_{1}\) is cancelled. In this case the state of the pool remains the same as after transaction \(tx_{2}\), i.e., \(\langle A^{1},B^{1}\rangle=\langle 76.92,13\rangle\). However, since the lock-swap is cancelled, the liquidities that were added due to the lock-swap must be removed; hence the state is updated to \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle 96.92-20,11.33+1.67\rangle\). Therefore this is similar to the case where the lock-swap did not exist in the first place. As a result, the two curves (i.e., the curves corresponding to the \(\langle A^{1},B^{1}\rangle\) and \(\langle A^{1}_{L},B^{1}_{L}\rangle\) pools) are the same. Both instant-swap cases are illustrated numerically in the sketch below.
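The following sketch (ours) checks the "minimum of the two quotes" rule used in both of the cases above for an instant-swap against a pool with a pending 20-\(A\) lock-swap; the helper name is a placeholder.

```python
def swap_out(dx, x, y):
    # constant-product quote on an (x, y) pool
    return y - (x * y) / (x + dx)

A, B = 100.0, 10.0                             # actual pool
AL, BL = 120.0, 10.0 - swap_out(20, 100, 10)   # virtual pool after the lock-swap: <120, 8.33>

# Same direction (10 A -> B): quotes ~0.91 (actual) and ~0.64 (virtual); the user gets 0.64.
print(round(min(swap_out(10, A, B), swap_out(10, AL, BL)), 2))

# Opposite direction (3 B -> A): quotes ~23.08 (actual) and ~31.76 (virtual); the user gets 23.08.
print(round(min(swap_out(3, B, A), swap_out(3, BL, AL)), 2))
```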
## 4 Formal description
In this section we consider the general setup for the scalable swap, with two pools on two different shards. Let \(A\) and \(B\) be a pair of assets on the first shard with balances \(\langle A^{1},B^{1}\rangle\), and \(B\), \(C\) be a pair of assets with a pool on the second shard with balances \(\langle B^{2},C^{2}\rangle\). The aim is to execute a multi-swap of size \(\Delta A\) from \(A\) to \(C\), such that it guarantees an output of at least size \(\widetilde{\Delta C}\) of the \(C\) asset. To guarantee the output we introduce the "locking" mechanism, which allows the pool to lock a part of the liquidity. The locking mechanism affects the liquidity of the pool; however, as the swap might be cancelled later, we cannot immediately update the state of the pool. Therefore, we let \(\langle A^{1}_{L},B^{1}_{L}\rangle\) denote the state of the pool after a lock-swap. Initially, set \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle A^{1},B^{1}\rangle\). In what follows we describe the procedure for any swap of the form \(\Delta A\rightarrow\Delta B\). Note that such a swap can be either a lock-swap or an instant-swap, and further it can be a part of a multi-swap. To simplify the notation we ignore the swap fees in what follows.
Protocol 1 describes the case of an instant-swap. In case the pool is not partially locked, the swap is executed in a similar way as in Uniswap V2. However, in case the pool is partially locked, the protocol considers the amount of the exchanged asset from both the actual pool and the virtual pool and pays the user the minimum of the two; finally, the states of the curves are updated accordingly.
Protocol 2 describes the case of a lock-swap. Since the swap is a lock-swap, it affects the state of the virtual AMM, while the actual balances remain unchanged until the lock is resolved.
```
1:Input: Asset \(\Delta A\)
2:if pool is not partially-locked then
3: Compute \(\Delta B=B^{1}-\frac{A^{1}\times B^{1}}{A^{1}+\Delta A}\)
4:else
5: Compute \(\Delta B_{1}=B^{1}-\frac{A^{1}\times B^{1}}{A^{1}+\Delta A}\) and \(\Delta B_{2}=B^{1}_{L}-\frac{A^{1}_{L}\times B^{1}_{L}}{A^{1}_{L}+\Delta A}\)
6: Set \(\Delta B=\min\{\Delta B_{1},\Delta B_{2}\}\)
7:Pay \(\Delta B\) of the \(B\) asset to the user.
8:Set \(\langle A^{1},B^{1}\rangle=\langle A^{1}+\Delta A,B^{1}-\Delta B\rangle\).
9:Set \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle A^{1}_{L}+\Delta A,B^{1}_{L}-\Delta B\rangle\).
```
**Protocol 1** Instant-swap for \(A\to B\) swap
```
1:Input: Asset \(\Delta A\)
2:if pool is not partially-locked then
3: Compute \(\Delta B=B^{1}-\frac{A^{1}\times B^{1}}{A^{1}+\Delta A}\)
4:else
5: Compute \(\Delta B=B^{1}_{L}-\frac{A^{1}_{L}\times B^{1}_{L}}{A^{1}_{L}+\Delta A}\)
6:Lock \(\Delta A\) and \(\Delta B\).
7:Set \(\langle A^{1},B^{1}\rangle=\langle A^{1},B^{1}\rangle\) (the actual balances are unchanged).
8:Set \(\langle A^{1}_{L},B^{1}_{L}\rangle=\langle A^{1}_{L}+\Delta A,B^{1}_{L}-\Delta B\rangle\).
```
**Protocol 2** Lock-swap for \(A\to B\) swap
Finally, Protocol 9 shows the procedure for resolving a lock. Note that there are two possible cases: either the user's criteria are met and the swap can be executed, or the user's criteria are not met and the lock must be cancelled. In case the lock is executed, the liquidity in the actual pool must be updated so that it reflects the execution of the swap. In case the lock is cancelled, the amounts in the virtual pool must be modified so that the swap is undone.
The aforementioned protocols formally describe the procedure for a swap of the form \(A\to B\); Appendix A.1 gives the corresponding protocols for a swap of the form \(B\to A\).
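To make the bookkeeping concrete, the following self-contained Python sketch (ours, ignoring fees) combines Protocols 1 and 2 with the lock-resolution step for a single pair pool. The class and method names are placeholders, and the quote rule used for a lock-swap on a partially locked pool follows our reading of Protocol 2 (the quote is taken against the virtual pool).

```python
class Pool:
    def __init__(self, a, b):
        self.a, self.b = float(a), float(b)      # actual balances <A, B>
        self.al, self.bl = float(a), float(b)    # virtual balances <A_L, B_L>
        self.locks = []                          # pending lock-swaps as (dA, dB) pairs

    @staticmethod
    def _quote(da, x, y):
        # constant-product quote: output of Y for da units of X added to an (x, y) pool
        return y - (x * y) / (x + da)

    def instant_swap(self, da):
        # Protocol 1: pay the minimum of the actual and virtual quotes,
        # then update both the actual and the virtual balances.
        db = min(self._quote(da, self.a, self.b), self._quote(da, self.al, self.bl))
        self.a, self.b = self.a + da, self.b - db
        self.al, self.bl = self.al + da, self.bl - db
        return db

    def lock_swap(self, da):
        # Protocol 2: quote against the virtual pool, lock the amounts,
        # and update only the virtual balances.
        db = self._quote(da, self.al, self.bl)
        self.al, self.bl = self.al + da, self.bl - db
        self.locks.append((da, db))
        return db

    def resolve(self, lock_index, execute):
        # Lock resolution: on execute, apply the locked amounts to the actual pool;
        # on cancel, undo them on the virtual pool.
        da, db = self.locks.pop(lock_index)
        if execute:
            self.a, self.b = self.a + da, self.b - db
        else:
            self.al, self.bl = self.al - da, self.bl + db


if __name__ == "__main__":
    pool = Pool(100, 10)              # shard-1 <A, B> pool from Section 3
    db = pool.lock_swap(20)           # lock-swap of 20 A -> ~1.67 B locked
    out = pool.instant_swap(10)       # instant-swap of 10 A -> the min rule gives ~0.64 B
    pool.resolve(0, execute=True)     # executing the lock leaves <A, B> = <130, 7.69>
    print(round(db, 2), round(out, 2), round(pool.a, 2), round(pool.b, 2))
```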
## 5 Adding fees
In Section 4 we describe the protocol without swapping fees. In this section we extend the model to include fees. Let \((1-\gamma)\) denote the swapping fee. Therefore, for any swap (either an instant-swap or a lock-swap) of size \(\Delta A\) to any other asset, the actual size of the trade is \(\gamma\Delta A\). In the case of instant-swaps, the fees are added to the pool after the swap is executed. For lock-swaps, the fees are added to the pool after the lock is executed. Note that in case a lock-swap is cancelled, the user still has to pay the fees. Appendix A.2 describes the protocol with the fees included.
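As a minimal sketch of this convention (ours; the function name and the fee value \(1-\gamma=0.3\%\) are placeholders), the fee-adjusted quote simply replaces \(\Delta A\) by \(\gamma\Delta A\) in the constant-product formula:

```python
def swap_out_with_fee(dx, x, y, gamma=0.997):
    # only gamma*dx trades against the curve; the fee portion is added to the pool
    # once the swap (or the lock) is executed
    return y - (x * y) / (x + gamma * dx)

print(swap_out_with_fee(20, 100, 10))   # slightly less than the fee-free ~1.667
```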
## 6 Extension to more than two pairs
In this section, we extend our model so that one can execute a multi-swap which involves more than two pairs of assets. For simplicity, we assume 4 assets, \(A\), \(B\), \(C\), and \(D\). Let \(\langle A^{1},B^{1}\rangle\) denote the balances of the \(A\) and \(B\) assets on shard 1, \(\langle B^{2},C^{2}\rangle\) the balances of the \(B\) and \(C\) assets on shard 2, and \(\langle C^{3},D^{3}\rangle\) the balances of the \(C\) and \(D\) assets on shard 3. The aim is to execute a multi-swap of size \(\Delta A\) from \(A\) to \(D\), such that it guarantees an output of at least size \(\widetilde{\Delta D}\). Since the \(A\) and \(D\) tokens are on different shards, the multi-swap is executed by two lock-swaps for \(A\to B\) on shard 1, and \(B\to C\) on shard 2, followed by a lock-swap for \(C\to D\) on shard 3.
\[\langle\Delta A,\Delta D\rangle\rightarrow\underbrace{\langle\Delta A,\Delta B \rangle}_{\text{lock-swap}}\text{ and }\underbrace{\langle\Delta B,\Delta C \rangle}_{\text{lock-swap}}\text{ and }\underbrace{\langle\Delta C,\Delta D \rangle}_{\text{lock-swap}}\]
In this case, we assume that there is a virtual AMM for \(\langle A^{1},B^{1}\rangle\) and \(\langle B^{2},C^{2}\rangle\). The execution is similar to what was described for the case of two shards. That is, first a lock-swap is executed on shard 1; then, given the amount of \(\Delta B\), a lock-swap is executed on shard 2. Finally, if the amount of the \(\Delta D\) asset in the last lock-swap meets the criteria specified by the user, all the swaps are executed and the locks are resolved on every shard; if the criteria are not met, then all the lock-swaps are cancelled and the locked liquidities are released back to the pools. A compact sketch of this chained execution is given below.
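The sketch below (ours) chains lock-swaps along a path of pools and resolves them all-or-nothing; the third pool's balances \(\langle 150,15\rangle\) and the threshold `dD_min` are placeholders.

```python
def swap_out(dx, x, y):
    # constant-product quote on an (x, y) pool
    return y - (x * y) / (x + dx)

path = [(100.0, 10.0), (200.0, 20.0), (150.0, 15.0)]   # <A,B>, <B,C>, <C,D> pools
amount, locks = 20.0, []
for x, y in path:                  # place one lock-swap per shard along the path
    out = swap_out(amount, x, y)
    locks.append((amount, out))
    amount = out

dD_min = 0.01                      # placeholder for the user's minimum acceptable dD
execute = amount >= dD_min         # either execute all locks, or cancel all of them
print(round(amount, 4), execute)
```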
## 7 Conclusion
The primary purpose of the proposed swapping mechanism is to advance AMM and DeFi across and beyond the current blockchain ecosystem. Within the blockchain ecosystem, the mechanism provides an execution framework that guarantees fixed prices across liquidity pools independent of whether the liquidity pools sit on completely independent blockchains or shards. In addition, the guaranteed fixed prices are a feature rather than a consequence of the underlying blockchain, a property that may improve the adoption of DeFi in traditional finance.
Another challenge and obstacle for a broad uptake of AMM solutions within and beyond the blockchain ecosystem is frontrunning. On Ethereum and similar blockchain platforms, the AMM transactions are transparent to all but are added to the blockchain consensus model by one or more actors, such as "mempool operators" or "block producers". The problem is that these actors can delay and frontrun a valuable transaction. This is clearly a problem that needs to be solved and an important research agenda. The recent introduction of advanced privacy-preserving technologies to blockchain, however, provides a solution. A prime example of this is the Partisia Blockchain, where generic zero-knowledge technologies keep transactions hidden until they are fully executed. Hereby, the arbitrage opportunity is effectively addressed2.
Footnote 2: For more see the template smart contract for “Instant swap” on Partisia Blockchain [https://gitlab.com/partisiablockchain/language/example-contracts](https://gitlab.com/partisiablockchain/language/example-contracts). Template smart contracts for keeping the “lock” hidden until it has been executed will soon be available via the same link.
|
2309.13514 | Superconductivity emerging from density-wave-like order in a correlated
kagome metal | Unconventional superconductivity (USC) in a highly correlated kagome system
has been theoretically proposed for years, yet the experimental realization is
hard to achieve. The recently discovered vanadium-based kagome materials, which
exhibit both superconductivity and charge density wave (CDW) orders, are
nonmagnetic and weakly correlated, thus unlikely host USC as theories proposed.
Here we report the discovery of a chromium-based kagome metal, CsCr$_3$Sb$_5$,
which is contrastingly characterised by strong electron correlations,
frustrated magnetism, and characteristic flat bands close to the Fermi level.
Under ambient pressure, it undergoes a concurrent structural and magnetic phase
transition at 55 K, accompanying with a stripe-like $4a_0$ structural
modulation. At high pressure, the phase transition evolves into two
transitions, probably associated with CDW and antiferromagnetic
spin-density-wave orderings, respectively. These density-wave (DW)-like orders
are gradually suppressed with pressure and, remarkably, a superconducting dome
emerges at 3.65-8.0 GPa. The maximum of the superconducting transition
temperature, $T_\mathrm{c}^{\mathrm{max}}=$ 6.4 K, appears when the DW-like
orders are completely suppressed at 4.2 GPa, and the normal state exhibits a
non-Fermi-liquid behaviour, reminiscent of USC and quantum criticality in
iron-based superconductors. Our work offers an unprecedented platform for
investigating possible USC in a correlated kagome system. | Yi Liu, Zi-Yi Liu, Jin-Ke Bao, Peng-Tao Yang, Liang-Wen Ji, Si-Qi Wu, Qin-Xin Shen, Jun Luo, Jie Yang, Ji-Yong Liu, Chen-Chao Xu, Wu-Zhang Yang, Wan-Li Chai, Jia-Yi Lu, Chang-Chao Liu, Bo-Sen Wang, Hao Jiang, Qian Tao, Zhi Ren, Xiao-Feng Xu, Chao Cao, Zhu-An Xu, Rui Zhou, Jin-Guang Cheng, Guang-Han Cao | 2023-09-24T00:22:12Z | http://arxiv.org/abs/2309.13514v2 | # Superconductivity emerged from density-wave order in a kagome bad metal
###### Abstract
Unconventional superconductivity (USC) in a highly correlated kagome system has been theoretically proposed for years [1, 2, 3, 4, 5], yet the experimental realization is hard to achieve [6, 7]. The recently discovered vanadium-based kagome materials [8], which exhibit both superconductivity [2, 10, 11] and charge density wave (CDW) orders [12, 13, 14], are nonmagnetic [2, 8] and weakly correlated [15], thus unlikely host USC as theories proposed. Here we report the discovery of a chromium-based kagome bad metal, CsCr\({}_{3}\)Sb\({}_{5}\), which is contrastingly characterised by significant electron correlations and frustrated magnetism. Successive phase transitions at \(\sim\)54 K with stripe-like 4\(a_{0}\) structural modulations are observed, probably associated with CDW and antiferromagnetic spin-density-wave (SDW) orderings. Under moderately high pressures of 4-8 GPa, these density-wave orders are suppressed and, remarkably, superconductivity emerges with a maximum \(T_{\rm c}\) of 6.4 K. A quantum critical point at \(P_{\rm c}\approx\) 4 GPa is revealed, by which non-Fermi-liquid behaviours show up, reminiscent of USC in iron-based superconductors [16, 17, 18]. The electronic structure calculations indicate that the electron filling is close to the characteristic flat bands of the kagome lattice. Our work offers an unprecedented platform for investigating the mechanism of USC in a correlated kagome system.
## 1 Introduction
Materials with two-dimensional kagome lattice are featured with geometric frustration and characteristic electronic structures, from which various intriguing quantum states may emerge [19]. Among them, experimental realization of USC (here USC refers to superconductivity in which Cooper pairs are not primarily bound together by electron-phonon interactions but, instead, by spin fluctuations nearby magnetic order, according to refs. [20, 21]) in a kagome lattice is highly valuable [1, 2, 3, 4, 5, 6, 7]. The vanadium-based kagome materials \(A\)V\({}_{3}\)Sb\({}_{5}\) (\(A\) = K, Rb, Cs) recently discovered [8] present many exotic phenomena and/or states including superconductivity [2, 10, 11, 22, 1], unusual charge order [12, 13, 14, 23, 24, 25, 26], anomalous Hall effect [27], pair density wave [28], and electronic nematicity [29]. Nevertheless, this class of materials are most likely phonon-mediated conventional superconductors [3], in line with the nonmagnetic nature with relatively weak electron correlations [8, 15, 2]. Correlated kagome materials such as Mn\({}_{3}\)Sn [31], Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\)[32], and FeGe [33], on the other hand, generally bear robust magnetism hampering the appearance of USC. Here we report a new chromium-based kagome material, CsCr\({}_{3}\)Sb\({}_{5}\), which uniquely hosts both strong electron correlations and fragile density-wave (DW) orders. Under pressures, USC emerges in the proximity of a quantum critical point (QCP) at which the DW orders just disappear above zero temperature.
## 2 Physical properties
Single crystals of CsCr\({}_{3}\)Sb\({}_{5}\) were grown via a self-flux method [34, 2]. The as-grown crystals were characterized by single-crystal X-ray diffractions (XRD) and energy dispersive X-ray (EDX) spectroscopy (Extended Data Fig. 5a). The samples are typically thin flakes with silver metallic luster and with hexagonal morphology and, furthermore, they are stable in air. The chemical composition is of stoichiometric CsCr\({}_{3}\)Sb\({}_{5}\) within the measurement errors. The single-crystal XRD at room temperature (Extended Data Fig. 5b,c) indicate that CsCr\({}_{3}\)Sb\({}_{5}\) crystallizes in a hexagonal lattice with the space group of \(P6/mmm\) (Fig. 1a and Extended Data Table 1). The Cr atoms form a two-dimensional (2D) kagome net with Sb1 atoms located at the center of the hexagons. This Cr\({}_{3}\)Sb plane is sandwiched by the honeycomb-like layers of Sb2, and the resultant sandwiched layers of [Cr\({}_{3}\)Sb\({}_{5}\)]\({}^{-}\) are separated by layers of Cs\({}^{+}\) ions. Therefore, CsCr\({}_{3}\)Sb\({}_{5}\) is isostructural to \(A\)V\({}_{3}\)Sb\({}_{5}\) (ref. [8]).
The in-plane resistivity \(\rho_{ab}(T)\) (Fig. 1b) is nearly temperature-independent above \(\sim\)150 K with an absolute value of resistivity of 1.4 m\(\Omega\) cm, while the out-of-plane resistivity \(\rho_{c}(T)\) is semiconducting-like above 55 K. The resistivity anisotropy \(\rho_{c}/\rho_{ab}\) is as high as \(\sim\)60, indicating a quasi-2D electronic property. In the 2D-conduction scenario, the parameter \(k_{\rm F}l\), where \(k_{\rm F}\) and \(l\) are the Fermi wavevector and mean free path, respectively, is estimated to be close to unity [35], suggestive of a bad metal [36] for CsCr\({}_{3}\)Sb\({}_{5}\).
From the \({\rm d}\rho/{\rm d}T\) curve (Fig. 1c), one can identify at least two characteristic temperatures at \(T_{1}=\) 56.2 K and \(T_{2}=\) 52.7 K. The corresponding anomalies can also be found in the data of specific heat (Fig. 1d), magnetoresistance (Extended Data Fig. 6b-d), and Hall coefficient (Extended Data Fig. 6e,f). The specific heat starts to jump at \(T_{1}\) and, after climbing the peak, a shoulder appears at \(T_{2}\). The magnetic susceptibility data (Fig. 1e,f) show a drop below \(T_{1}\) and a subtle anomaly at \(T_{2}\) (see below).
Figure 1: **Crystal structure and physical properties of CsCr\({}_{3}\)Sb\({}_{5}\).****a**, Unit cell of the high-temperature hexagonal phase of CsCr\({}_{3}\)Sb\({}_{5}\). **b**, In-plane and out-of-plane resistivity as functions of temperature \(T\). **c**, A close-up of \(\rho_{ab}(T)\) (left axis) and d\(\rho_{ab}\)/d\(T\) versus \(T\) (right axis), which defines the characteristic temperatures, \(T_{1}\) and \(T_{2}\), as marked by the dashed vertical lines. **d**, Temperature dependence of specific heat (\(C\)). The upper inset shows the close-up at low temperatures with different scales, and the bottom inset highlights the \(C(T)\) peaks around 54 K. **e**, Temperature dependence of magnetic susceptibility \(\chi\) with different field directions. The inset plots \(1/(\chi-\chi_{0})\) versus \(T\), in which green dashed lines denote the Curie-Weiss fit. **f**, A close-up of \(\chi(T)\) highlighting the anomalies at \(T_{1}\) and \(T_{2}\).
These results indicate successive phase transitions in which the electronic and magnetic states change significantly.
The magnetic susceptibility drop (Fig. 1f) starts at \(T_{1}\) and completes at \(T_{2}\), with \(\Delta\chi_{ab}=5\times 10^{-4}\) emu mol\({}^{-1}\) for \(H\perp c\) and \(\Delta\chi_{c}=1\times 10^{-4}\) emu mol\({}^{-1}\) for \(H\parallel c\). In the temperature range of \(\sim\)40 K \(<T<T_{2}\), \(\chi_{ab}\) is nearly \(T\)-independent, while \(\chi_{c}\) increases obviously with decreasing \(T\). If there were no transition at \(T_{1}\), then the \(\chi(T)\) curve would approximately follow the dash-dotted lines in Fig. 1f. In this circumstance, a downward (slightly upward) kink at \(T_{2}\) for \(H\perp c\) (\(H\parallel c\)) can be detected. Note that the low-\(T\) susceptibility upturn, which comes from extrinsic paramagnetic impurities or imperfections because of a small effective moment involved, diminishes (exaggerates) the kink for \(H\perp c\) (\(H\parallel c\)). Therefore, the putative kink-like anomaly in \(\chi_{ab}(T)\) suggests an antiferromagnetic (AFM) transition with the moments lying in the \(ab\) plane.
The existence of local moments is manifested by the high-\(T\) susceptibility data, which show a Curie-Weiss-paramagnetic behaviour (Fig. 1e) obeying the formula \(\chi(T)=\chi_{0}+C/(T-\theta_{\rm CW})\), where \(C\) is the Curie constant and \(\theta_{\rm CW}\) is the Curie-Weiss temperature. The data fitting yields \(\chi_{0}\approx 7.8\times 10^{-4}\) emu mol\({}^{-1}\) and \(\theta_{\rm CW}\approx-340\) K for the two field directions. The Curie constant appears to be different, \(C\) = 0.56 (1.04) emu K mol\({}^{-1}\) for \(H\perp c\) (\(H\parallel c\)). The Pauli-paramagnetic susceptibility is then estimated to be \(\sim 1.2\times 10^{-3}\) emu mol\({}^{-1}\) with the consideration of atomic core diamagnetism. The effective magnetic moment turns out to be \(\mu_{\rm eff}\approx 1.2\ (1.7)\ \mu_{\rm B}\)/Cr for \(H\perp c\) (\(H\parallel c\)). The large negative value of \(\theta_{\rm CW}\) indicates strong AFM interactions between the Cr magnetic moments. Note that the magnetic frustration index[37], \(f=|\theta_{\rm CW}|/T_{2}\approx\) 6.5, is moderately large, suggesting significant magnetic frustrations that commonly exist in a magnetic kagome lattice.
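As a consistency check (our arithmetic, using the standard Curie-law relation with three Cr atoms per formula unit), the quoted moments follow directly from the fitted Curie constants:
\[\mu_{\rm eff}=\sqrt{\frac{8C}{3}}\,\mu_{\rm B}\approx\begin{cases}1.2\,\mu_{\rm B}/\mathrm{Cr}, & C=0.56\ \mathrm{emu\,K\,mol^{-1}}\ (H\perp c),\\ 1.7\,\mu_{\rm B}/\mathrm{Cr}, & C=1.04\ \mathrm{emu\,K\,mol^{-1}}\ (H\parallel c).\end{cases}\]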
The \(C(T)\) data (Fig. 1d) give information on the strength of electron correlations. In the low-\(T\) limit, where the phonon contribution is nearly negligibly small, the \(C/T\) value is reasonably close to the electronic specific-heat coefficient, hence \(\gamma_{\rm exp}\approx\) 120 mJ K\({}^{-2}\) mol\({}^{-1}\). In fact, there is a small upturn below \(\sim\)3 K in the \(C/T\) vs. \(T\) plot, which is unlikely due to a Schottky anomaly (the latter generally appears below 1 K). Thus we tentatively explain this phenomenon in terms of electron-electron interactions, which give the contribution \(C_{ee}\sim-T\)ln\(T\) (ref. [38]). The data fitting with the formula \(C/T=\gamma+\beta T^{2}-\eta\)ln\(T\) yields a slightly larger value of \(\gamma_{\rm exp}=138\pm 7\) mJ K\({}^{-2}\) mol\({}^{-1}\).
## 3 Structural modulations
To understand the phase transitions at \(\sim\)55 K, we performed the single-crystal XRD down to 40 K. As shown in Fig. 2a,b and Extended Data Fig. 5a-c, satellite reflections appear at 40 and 50 K, in addition to the primary Bragg diffractions in the \({\bf a}^{*}{\bf b}^{*}\) plane. At first sight, these satellite spots seem to be related to a symmetry-related triple-\({\bf Q}\) modulation vector that corresponds to a \(4\times 4\) superlattice based on the original hexagonal lattice. However, the intensities of the satellite spots are significantly unequal and some of them (related to a specific \({\bf Q}_{i}\) vector) are even absent in the diffraction pattern (Fig. 2c,h). Therefore, the six-fold rotation symmetry of the hexagonal lattice is broken. Alternatively, the diffraction pattern in Fig. 2a,c can be interpreted as a single-\({\bf Q}\) orthorhombic lattice (\(Cmmm\), \(a=a_{0}\), \(b=\sqrt{3}a_{0}\), \(c=c_{0}\); \({\bf Q}_{\rm{ort}}\approx\) (0, 0.5, 0)) with a three-fold rotation twinning (Fig.2h). Then, the inequivalent intensity of the satellite reflections can be naturally explained with different fractions of twin domains. Notably, some
Figure 2: \(|\)**Structural modulations in CsCr\({}_{3}\)Sb\({}_{5}\).****a,b**, Reconstructed (\(hk2\)) planes of reflections at 40 and 70 K, respectively, with unit vectors \(\mathbf{a}^{*}\) and \(\mathbf{b}^{*}\) marked. **c**, A close-up around the main reflection (\(2\bar{1}2\)) in **a**, highlighting the satellite reflections that are indexed by one single **Q**-vector with six twin domains. **d**, Cut along the blue dashed line marked in **c**. **e**, Reconstructed (\(1kl\)) plane of reflections at 40 K. **f**, Line cuts along A and B marked in **e**. **g**, Crystal structure of CsCr\({}_{3}\)Sb\({}_{5}\) viewed along the \(c\) axis. The original hexagonal unit cell is marked by solid lines. The \(C\)-centered orthorhombic/monoclinic and \(a_{0}\times 4a_{0}\) unit cells are marked with dashed lines. **h**, Schematic illustrations of the six monoclinic pseudo-orthohexagonal twin domains in the reciprocal lattice space projected along [001] direction. \(\mathbf{a}^{*}_{0}\) and \(\mathbf{b}^{*}_{0}\) are the original unit cell. \(\mathbf{a}^{*}_{(i)}\) and \(\mathbf{b}^{*}_{(i)}\) (\(i=1-6\)) represent the lattice units of six individual monoclinic twin domains. Domains 1, 2 and 3 are connected by the three-fold rotation along \(\mathbf{c}^{*}\). Domains 1 and 4 (the same for other pairs of domains) are correlated by a two-fold rotation along \(\mathbf{b}^{*}\). The filled red, green and purple circles refer to satellite reflections attributed to \(\mathbf{Q}_{(\mathbf{1},\mathbf{4})}\),\(\mathbf{Q}_{(\mathbf{2},\mathbf{5})}\) and \(\mathbf{Q}_{(\mathbf{3},\mathbf{6})}\), respectively. The satellite reflections of domain 1 and 4 are overlapped. The empty circles denote the absence of mixed-order satellite reflections between any two \(\mathbf{Q}_{\mathbf{i}}\) vectors in a hexagonal unit cell.
mixed-order satellite spots between any two \(\mathbf{Q}_{i}\) vectors are absent, indicating that these three \(\mathbf{Q}_{i}\) vectors are actually independent, coming from different domains. Such a single-\(\mathbf{Q}\) modulation is also observed at 55 K (Extended Data Fig. 7j-l), which is 2.3 K higher than \(T_{2}\), indicating that the structural change happens below \(T_{1}\).
In the \(\mathbf{b}^{*}\mathbf{c}^{*}\) plane, no additional satellite spots along the \(\mathbf{c}^{*}\) direction are detected (Fig. 2e and Extended Data Fig. 7d-g), in conformity with the \(\theta-2\theta\) scan result shown in Extended Data Fig. 5d. Nevertheless, diffraction splittings are observed especially at high diffraction angles, which are clearly seen from line cuts along \(\mathbf{c}^{*}\) (Fig. 2f). This indicates a monoclinic distortion (\(C2/m\), \(a=a_{0}\), \(b\approx\sqrt{3}a_{0}\), \(c=c_{0}\), and \(\alpha>90^{\circ}\)) where the two-fold rotation along \(\mathbf{b}^{*}\) is also broken (Extended Data Fig. 5h,i).
As conduction electrons generally couple with the underlying lattice, the structural modulations point to a CDW instability below \(T_{1}\). The CDW ordering is supported by the resistivity increase and susceptibility drop just below \(T_{1}\), because CDW ordering generally opens an energy gap. In contrast, the resistivity decreases steeply at \(T_{2}\) without remarkable structural change. This result agrees with the magnetic ordering at \(T_{2}\) where the magnetic scattering is expected to be reduced. With the magnetic susceptibility analysis above, it is reasonable to conclude that an AFM SDW order forms below \(T_{2}\).
## 4 USC under pressures
As revealed above, at ambient pressure CsCr\({}_{3}\)Sb\({}_{5}\) is characterised as a correlated bad metal with successive phase transitions at \(T_{1}\) and \(T_{2}\), probably associated with CDW and SDW transitions, respectively. Upon applying pressure to 1.8 GPa, the resistivity decreases markedly, accompanied by enhanced metallicity (Fig. 3a). The sharp peak around 54 K is broadened, and \(T_{1}\) and \(T_{2}\) are reduced to \(\sim\)49 and \(\sim\)39 K, respectively (Extended Data Fig. 8a-c). With increasing pressures, \(T_{2}\) decreases faster than \(T_{1}\). As a result, at 4 GPa \(T_{2}\) cannot be identified (while \(T_{1}\) is reduced to 20 K) and, simultaneously, a superconducting transition emerges at 5.8 K instead. The superconducting transitions at 4 GPa \(\leq P\leq\) 8 GPa are clearly seen in Fig. 3b, and bulk superconductivity is confirmed by the ac magnetic susceptibility (\(\chi^{\prime}\)) measurement (Fig. 3c and Extended Data Fig. 9a-c). The highest superconducting transition temperature of \(T_{\rm c}^{\rm onset}=\) 6.4 K is achieved at 4.2 GPa. For \(P\geq\) 10 GPa, no superconductivity can be observed down to 1.6 K (note that the \(\sim\)30% drop at around 3.5 K is not intrinsic, as the diamagnetic signal from the sample totally disappears down to the lowest temperature, and we attribute it to the pressure-induced superconductivity in the tiny remaining flux Sb on the sample's surface).
Figure 3e shows the superconducting transitions under different magnetic fields at 4.2 GPa (other data are given in Extended Data Fig. 8d-f). As expected, \(T_{\rm c}\) shifts to lower temperatures with increasing fields. Here we used the criterion of 50% normal-state resistivity for determining \(T_{\rm c}(H)\), from which the upper critical fields \(\mu_{0}H_{\rm c2}(T)\) are derived (Fig. 3f). The \(\mu_{0}H_{\rm c2}(T)\) data can be well described by the Ginzburg-Landau equation, \(\mu_{0}H_{\rm c2}(T)=\mu_{0}H_{\rm c2}(0)[1-(T/T_{\rm c})^{2}]\). The best fits yield the zero-temperature \(\mu_{0}H_{\rm c2}(0)\) values for each pressure. The extracted \(\mu_{0}H_{\rm c2}(0)\) values are 11.95 and 14.34 T for \(P=\) 4.0 and 4.2 GPa, respectively, which exceed the Pauli limit of \(\mu_{0}H_{\rm P}\sim 1.84T_{\rm c}\) (in tesla). Note that the field direction was not determined in the present high-pressure measurement.
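For reference, using \(T_{\rm c}^{\rm onset}=6.4\) K at 4.2 GPa, the comparison with the Pauli limit reads (our arithmetic)
\[\mu_{0}H_{\rm P}\approx 1.84\,T_{\rm c}\approx 1.84\times 6.4\ \mathrm{T}\approx 11.8\ \mathrm{T}<\mu_{0}H_{\rm c2}(0)=14.34\ \mathrm{T}.\]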
Figure 3: **Superconductivity emerged from density-wave orders in CsCr\({}_{3}\)Sb\({}_{5}\).****a,b**, \(\rho(T)\) curves under high pressures. The arrows and asterisks mark \(T_{1}\) and \(T_{2}\), respectively, probably associated with CDW and SDW transitions. **c**, Temperature dependence of ac susceptibility, \(\chi^{\prime}\), under high pressures. A piece of superconducting Pb was placed together with the sample, serving as a reference material. **d**, Electronic \(P-T\) phase diagram. QCP and USC denote quantum critical point and unconventional superconductivity, respectively. **e**, Superconducting transitions under magnetic fields at 4.2 GPa. **f**, Upper critical field as a function of temperature. **g**, Power \(\alpha\) (left axis) and the coefficient \(A\) (right axis) as functions of pressure (see the text for details). **h**, The relative upper critical field to the Pauli-limited field (left axis) and residual resistivity (right axis) as functions of applied pressures.
With the above high-pressure results, the electronic \(T-P\) phase diagram is established (Fig. 3d). As pressure increases, both DW orders are suppressed, and the SDW-related \(T_{2}\) appears to be more sensitive to pressures, reminiscent of the suppression of SDW in iron-based superconductors [18]. At 4.0 GPa, only the CDW order survives, coexisting with superconductivity. An asymmetric superconducting dome shows up near the DW region, resembling those of iron-based superconductors [16, 17] and other unconventional superconducting systems of CrAs [39] and MnP [40]. The highest \(T_{\rm c}\) is achieved at 4.2 GPa together with the enhancement of \(H_{\rm c2}(0)\) (Fig. 3h), where the DW order temperatures are suppressed to absolute zero, suggesting quantum criticality in the system.
One of the most important hallmarks of quantum criticality is non-Fermi-liquid behaviour at around the QCP, which is indeed observed from the normal-state resistivity near \(T_{\rm c}\). The data fitting (Extended Data Fig. 8g) with the power law \(\rho=\rho_{0}^{\prime}+CT^{\alpha}\) shows that, at \(\sim\)4 GPa, \(\alpha\) goes to \(\sim\)1.0 from \(\sim\)2.0 (Fig. 3g), suggesting the breakdown of the Fermi-liquid state [38]. The detailed variations of \(\alpha\) are also displayed in the coloured contour plot (Fig. 3d), in which the non-Fermi-liquid regime is marked. Alternatively, assuming a Fermi-liquid scenario in the low-temperature limit, the data fitting with the formula \(\rho=\rho_{0}+AT^{2}\) (Extended Data Fig. 8h) yields a dramatic enhancement of the coefficient \(A\) (Fig. 3g) and an anomaly of \(\rho_{0}\) (Fig. 3h) at \(\sim\)4 GPa. All these observations indicate a QCP at \(P_{\rm c}\approx\) 4 GPa.
## 5 DFT Calculations
To understand the basic electronic structure of CsCr\({}_{3}\)Sb\({}_{5}\), we performed first-principles density-functional-theory (DFT) calculations for the high-temperature hexagonal phase. The calculated band structure (Fig. 4a) shows metallic behaviour with two bands crossing the Fermi level, \(E_{\rm F}\). As a result, a hole-type Fermi surface (FS) around the \(\Gamma\)A line and an electron-type FS around the ML line are derived (Fig. 4d). Both FS sheets are quasi-two-dimensional, which explains the large resistivity anisotropy. The FS topology is very different from that of CsV\({}_{3}\)Sb\({}_{5}\)[2, 13], primarily due to the different electron filling of the Cr- or V-\(3d\) orbitals. Indeed, the Fermi level lies only 0.06 eV below the nearest flat band in CsCr\({}_{3}\)Sb\({}_{5}\), whereas it is \(\sim\)0.5 eV above the two van Hove points along the ML line. Thus, the CDW in CsCr\({}_{3}\)Sb\({}_{5}\) should have a different origin. Overall, the \(E_{\rm F}\) of CsCr\({}_{3}\)Sb\({}_{5}\) is raised by 0.5-0.7 eV because Cr has one more electron than V does (Extended Data Fig. 10d). Under a pressure of 5 GPa, interestingly, the flat bands move up (Extended Data Fig. 10e), and some bandwidths are broadened, which could roughly explain the suppression of the DW orders with applied pressure.
The electronic states at around \(E_{\rm F}\) are mostly contributed by the Cr-\(3d\) orbitals (Fig. 4b). Among them, Cr-\(3d_{xz/yz}\) dominates the states at \(E_{\rm F}\), which is also seen in Extended Data Fig. 10a-c. The total density of states at \(E_{\rm F}\) is \(D(E_{\rm F})=\) 8.1 states eV\({}^{-1}\) fu\({}^{-1}\). This gives a theoretical value of the Pauli-paramagnetic susceptibility, \(\chi_{\rm P}^{T}=\mu_{\rm B}^{2}D(E_{\rm F})=2.6\times 10^{-4}\) emu mol\({}^{-1}\), about 1/5 of the experimental value estimated above. Meanwhile, the bare \(D(E_{\rm F})\) value corresponds to an electronic specific-heat coefficient of \(\gamma_{0}=\frac{1}{3}\pi^{2}k_{\rm B}^{2}D(E_{\rm F})=19.1\) mJ K\({}^{-2}\) mol\({}^{-1}\), which is only 1/6\(\sim\)1/7 of the experimental value. This result suggests a large electron-mass renormalization due to strong correlation effects.
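As a quick numerical check of the two estimates above, a minimal Python sketch converting \(D(E_{\rm F})\) into \(\chi_{\rm P}\) and \(\gamma_{0}\); the only input taken from the text is \(D(E_{\rm F})\), and the per-formula-unit value is scaled by Avogadro's number to obtain molar quantities:

```python
# Sketch: bare-band Pauli susceptibility and Sommerfeld coefficient from D(E_F).
import numpy as np

D_EF_eV = 8.1                    # states / eV / formula unit (DFT result quoted above)
eV = 1.602176634e-19             # J
k_B = 1.380649e-23               # J / K
mu_B_cgs = 9.2740100783e-21      # erg / G
N_A = 6.02214076e23              # 1 / mol

D_EF_J = D_EF_eV / eV            # states / J / f.u.
D_EF_erg = D_EF_eV / (eV * 1e7)  # states / erg / f.u. (1 J = 1e7 erg)

chi_P = mu_B_cgs**2 * D_EF_erg * N_A                # emu / mol
gamma_0 = (np.pi**2 / 3) * k_B**2 * D_EF_J * N_A    # J / K^2 / mol

print(f"chi_P   ~ {chi_P:.2e} emu/mol")             # ~2.6e-4
print(f"gamma_0 ~ {gamma_0 * 1e3:.1f} mJ/K^2/mol")  # ~19.1
```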
Figure 4: **Electronic structure of CsCr\({}_{3}\)Sb\({}_{5}\) by DFT calculations.****a,b**, Band structure (**a**) and density of states (DOS) (**b**) calculated with spin-orbit coupling. **c**, The Brillouin zone with high-symmetry points marked. **d**, The merged Fermi-surface (FS) sheets (left), FS slices at \(k_{z}\) = 0 (middle) and \(\pi\) (right). The blue and red colours denote the hole-type and electron-type FS sheets, respectively, corresponding to the two coloured bands in **a**.
## 6 Concluding Remarks
Above we demonstrate that CsCr\({}_{3}\)Sb\({}_{5}\) exhibits strikingly different properties in comparison with its structurally analogous compound CsV\({}_{3}\)Sb\({}_{5}\) (Extended Data Table 2). Here we make a summary as follows:
i) At ambient pressure, CsCr\({}_{3}\)Sb\({}_{5}\) is a correlated bad metal with remarkably enhanced electronic specific-heat coefficient and Pauli susceptibility, undergoing successive phase transitions associated with CDW/SDW orders. The strong correlations as well as complex DW orders could originate from the electron filling close to the characteristic flat bands of the kagome lattice.
ii) The CDW order in CsCr\({}_{3}\)Sb\({}_{5}\) is very different from those in \(A\)V\({}_{3}\)Sb\({}_{5}\) (refs. [12, 3, 13]). Firstly, it shows a single-\(\mathbf{Q}\) modulation with a \(1\times 4\) supercell based on a pseudo-hexagonal lattice. Secondly, neither doubling nor quadrupling along the \(c\) axis is detected. Lastly, the DW phase is distorted from hexagonal to monoclinic, which breaks more symmetry elements.
iii) In CsCr\({}_{3}\)Sb\({}_{5}\), there exist Cr local spins with AFM interactions among them. Below \(T_{2}\), the Cr-spins order antiferromagnetically, forming the SDW state. Note that the structural distortion already occurs below \(T_{1}\) which releases the geometric frustration.
iv) Under high pressures, the DW orders are gradually suppressed, and dome-like USC emerges near the DW QCP. This is in contrast with the \(A\)V\({}_{3}\)Sb\({}_{5}\) family, which exhibits superconductivity already at ambient pressure and whose superconductivity does not disappear with applied pressure [6, 41, 42].
USC typically emerges from a spin- and/or charge-ordered state in a correlated electron system, as exemplified in cuprates, iron-based pnictides/chalcogenides, and heavy-fermion materials [20, 21]. Nevertheless, so far there is still no consensus on the mechanism of USC [44], although spin fluctuations are generally considered as a common pairing glue [20]. The realization of USC in the kagome bad metal CsCr\({}_{3}\)Sb\({}_{5}\) supplies a unique example, which may shed light on the mechanism of USC. Note that the theoretically proposed USC in kagome systems [2, 3, 4] actually appears in the vicinity of van Hove filling, which is not the case for the present work.
Many issues remain open for future studies. As far as experiments are concerned, for example, whether USC can be realized at ambient pressure via chemical doping/substitution deserves further investigation. In addition, gate-voltage regulation might be an effective means of tuning the physical properties, considering the quasi-2D nature of this material. The nature of the phase transitions at \(\sim\)54 K, especially the form of the magnetic structure, is also of special interest given its intimate relationship with the observed USC. Neutron diffraction measurements are highly desirable for this purpose.
## References
* [1] Ko, W.-H., Lee, P. A. & Wen, X.-G. Doped kagome system as exotic superconductor. _Phys. Rev. B_**79**, 214502 (2009). URL [https://link.aps.org/doi/10.1103/PhysRevB.79.214502](https://link.aps.org/doi/10.1103/PhysRevB.79.214502).
* [2] Yu, S.-L. & Li, J.-X. Chiral superconducting phase and chiral spin-density-wave phase in a Hubbard model on the kagome lattice. _Phys. Rev. B_**85**, 144402 (2012). URL [https://link.aps.org/doi/10.1103/PhysRevB.85.144402](https://link.aps.org/doi/10.1103/PhysRevB.85.144402).
* [3] Wang, W.-S., Li, Z.-Z., Xiang, Y.-Y. & Wang, Q.-H. Competing electronic orders on kagome lattices at van Hove filling. _Phys. Rev. B_**87**, 115135 (2013). URL [https://link.aps.org/doi/10.1103/PhysRevB.87.115135](https://link.aps.org/doi/10.1103/PhysRevB.87.115135).
* [4] Kiesel, M. L., Platt, C. & Thomale, R. Unconventional Fermi surface instabilities in the kagome Hubbard model. _Phys. Rev. Lett._**110**, 126405 (2013). URL [https://link.aps.org/doi/10.1103/PhysRevLett.110.126405](https://link.aps.org/doi/10.1103/PhysRevLett.110.126405).
* [5] Zhou, Y., Kanoda, K. & Ng, T.-K. Quantum spin liquid states. _Rev. Mod. Phys._**89**, 025003 (2017). URL [https://link.aps.org/doi/10.1103/RevModPhys.89.025003](https://link.aps.org/doi/10.1103/RevModPhys.89.025003).
* [6] Norman, M. R. Colloquium: Herbertsmithite and the search for the quantum spin liquid. _Rev. Mod. Phys._**88**, 041002 (2016). URL [https://link.aps.org/doi/10.1103/RevModPhys.88.041002](https://link.aps.org/doi/10.1103/RevModPhys.88.041002).
* [7] Kelly, Z. A., Gallagher, M. J. & McQueen, T. M. Electron doping a kagome spin liquid. _Phys. Rev. X_**6**, 041007 (2016). URL [https://link.aps.org/doi/10.1103/PhysRevX.6.041007](https://link.aps.org/doi/10.1103/PhysRevX.6.041007).
* [8] Ortiz, B. R. _et al._ New kagome prototype materials: discovery of KV\({}_{3}\)Sb\({}_{5}\), RbV\({}_{3}\)Sb\({}_{5}\), and CsV\({}_{3}\)Sb\({}_{5}\). _Phys. Rev. Materials_**3**, 094407 (2019). URL [https://link.aps.org/doi/10.1103/PhysRevMaterials.3.094407](https://link.aps.org/doi/10.1103/PhysRevMaterials.3.094407).
* [9] Ortiz, B. R. _et al._ CsV\({}_{3}\)Sb\({}_{5}\): A \(\mathbb{Z}_{2}\) topological kagome metal with a superconducting ground state. _Phys. Rev. Lett._**125**, 247002 (2020). URL [https://link.aps.org/doi/10.1103/PhysRevLett.125.247002](https://link.aps.org/doi/10.1103/PhysRevLett.125.247002).
* [10] Ortiz, B. R. _et al._ Superconductivity in the \(\mathbb{Z}_{2}\) kagome metal KV\({}_{3}\)Sb\({}_{5}\). _Phys. Rev. Mater._**5**, 034801 (2021). URL [https://link.aps.org/doi/10.1103/PhysRevMaterials.5.034801](https://link.aps.org/doi/10.1103/PhysRevMaterials.5.034801).
* [11] Yin, Q. _et al._ Superconductivity and normal-state properties of kagome metal RbV\({}_{3}\)Sb\({}_{5}\) single crystals. _Chin. Phys. Lett._**38**, 037403 (2021). URL [https://dx.doi.org/10.1088/0256-307X/38/3/037403](https://dx.doi.org/10.1088/0256-307X/38/3/037403).
* [12] Jiang, Y.-X. _et al._ Unconventional chiral charge order in kagome superconductor KV\({}_{3}\)Sb\({}_{5}\). _Nature Materials_**20**, 1353-1357 (2021). URL [https://doi.org/10.1038/s41563-021-01034-y](https://doi.org/10.1038/s41563-021-01034-y).
* [13] Ortiz, B. R. _et al._ Fermi surface mapping and the nature of charge-density-wave order in the kagome superconductor CsV\({}_{3}\)Sb\({}_{5}\). _Phys. Rev. X_**11**, 041030 (2021). URL [https://link.aps.org/doi/10.1103/PhysRevX.11.041030](https://link.aps.org/doi/10.1103/PhysRevX.11.041030).
* [14] Liang, Z. _et al._ Three-dimensional charge density wave and surface-dependent vortex-core states in a kagome superconductor CsV\({}_{3}\)Sb\({}_{5}\). _Phys. Rev. X_**11**, 031026 (2021). URL [https://link.aps.org/doi/10.1103/PhysRevX.11.031026](https://link.aps.org/doi/10.1103/PhysRevX.11.031026).
* [15] Zhao, J., Wu, W., Wang, Y. & Yang, S. A. Electronic correlations in the normal state of the kagome superconductor \({\rm KV}_{3}{\rm Sb}_{5}\). _Phys. Rev. B_**103**, L241117 (2021). URL [https://link.aps.org/doi/10.1103/PhysRevB.103.L241117](https://link.aps.org/doi/10.1103/PhysRevB.103.L241117).
* [16] Colombier, E., Bud'ko, S. L., Ni, N. & Canfield, P. C. Complete pressure-dependent phase diagrams for \({\rm SrFe}_{2}{\rm As}_{2}\) and \({\rm BaFe}_{2}{\rm As}_{2}\). _Phys. Rev. B_**79**, 224518 (2009). URL [https://link.aps.org/doi/10.1103/PhysRevB.79.224518](https://link.aps.org/doi/10.1103/PhysRevB.79.224518).
* [17] Shibauchi, T., Carrington, A. & Matsuda, Y. A quantum critical point lying beneath the superconducting dome in iron pnictides. _Annual Review of Condensed Matter Physics_**5**, 113-135 (2014). URL [https://doi.org/10.1146/annurev-conmatphys-031113-13392](https://doi.org/10.1146/annurev-conmatphys-031113-13392).
* [18] Dai, P. Antiferromagnetic order and spin dynamics in iron-based superconductors. _Rev. Mod. Phys._**87**, 855-896 (2015). URL [https://link.aps.org/doi/10.1103/RevModPhys.87.855](https://link.aps.org/doi/10.1103/RevModPhys.87.855).
* [19] Yin, J.-X., Lian, B. & Hasan, M. Z. Topological kagome magnets and superconductors. _Nature_**612**, 647-657 (2022). URL [https://doi.org/10.1038/s41586-022-05516-0](https://doi.org/10.1038/s41586-022-05516-0).
* [20] Scalapino, D. J. A common thread: The pairing interaction for unconventional superconductors. _Rev. Mod. Phys._**84**, 1383-1417 (2012). URL [https://link.aps.org/doi/10.1103/RevModPhys.84.1383](https://link.aps.org/doi/10.1103/RevModPhys.84.1383).
* [21] Stewart, G. R. Unconventional superconductivity. _Advances in Physics_**66**, 75-196 (2017). URL [https://doi.org/10.1080/00018732.2017.1331615](https://doi.org/10.1080/00018732.2017.1331615).
* [22] Zhong, Y. _et al._ Nodeless electron pairing in \({\rm CsV}_{3}{\rm Sb}_{5}\)-derived kagome superconductors. _Nature_**617**, 488-492 (2023). URL [https://doi.org/10.1038/s41586-023-05907-x](https://doi.org/10.1038/s41586-023-05907-x).
* [23] Neupert, T., Denner, M. M., Yin, J.-X., Thomale, R. & Hasan, M. Z. Charge order and superconductivity in kagome materials. _Nat. Phys._**18**, 137-143 (2022). URL [https://link.aps.org/doi/10.1103/PhysRevMaterials.6.074802](https://link.aps.org/doi/10.1103/PhysRevMaterials.6.074802).
* [24] Kang, M. _et al._ Twofold van hove singularity and origin of charge order in topological kagome superconductor \({\rm CsV}_{3}{\rm Sb}_{5}\). _Nat. Phys._**18**, 301-308 (2022). URL [https://doi.org/10.1038/s41567-021-01451-5](https://doi.org/10.1038/s41567-021-01451-5).
* [25] Zhao, H. _et al._ Cascade of correlated electron states in the kagome superconductor \({\rm CsV}_{3}{\rm Sb}_{5}\). _Nature_**599**, 216-221 (2021). URL [https://doi.org/10.1038/s41586-021-03946-w](https://doi.org/10.1038/s41586-021-03946-w).
* [26] Mielke, C. _et al._ Time-reversal symmetry-breaking charge order in a kagome superconductor. _Nature_**602**, 245-250 (2022). URL [https://doi.org/10.1038/s41586-021-04327-z](https://doi.org/10.1038/s41586-021-04327-z).
* [27] Yang, S.-Y. _et al._ Giant, unconventional anomalous Hall effect in the metallic frustrated magnet candidate, \({\rm KV}_{3}{\rm Sb}_{5}\). _Science Advances_**6**, eabb6003 (2020). URL [https://www.science.org/doi/abs/10.1126/sciadv.abb6003](https://www.science.org/doi/abs/10.1126/sciadv.abb6003).
* [28] Chen, H. _et al._ Roton pair density wave in a strong-coupling kagome superconductor. _Nature_**599**, 222-228 (2021). URL [https://doi.org/10.1038/s41586-021-03983-5](https://doi.org/10.1038/s41586-021-03983-5).
* [29] Nie, L. _et al._ Charge-density-wave-driven electronic nematicity in a kagome superconductor. _Nature_**604**, 59-64 (2022). URL [https://doi.org/10.1038/s41586-022-04493-8](https://doi.org/10.1038/s41586-022-04493-8).
* [30] Jiang, K. _et al._ Kagome superconductors AV\({}_{3}\)Sb\({}_{5}\) (A = K, Rb, Cs). _National Science Review_**10**, nwac199 (2022). URL [https://doi.org/10.1093/nsr/nwac199](https://doi.org/10.1093/nsr/nwac199).
* [31] Nakatsuji, S., Kiyohara, N. & Higo, T. Large anomalous Hall effect in a non-collinear antiferromagnet at room temperature. _Nature_**527**, 212-215 (2015). URL [https://doi.org/10.1038/nature15723](https://doi.org/10.1038/nature15723).
* [32] Liu, E. _et al._ Giant anomalous Hall effect in a ferromagnetic kagome-lattice semimetal. _Nature Physics_**14**, 1125-1131 (2018). URL [https://doi.org/10.1038/s41567-018-0234-5](https://doi.org/10.1038/s41567-018-0234-5).
* [33] Teng, X. _et al._ Discovery of charge density wave in a kagome lattice antiferromagnet. _Nature_**609**, 490-495 (2022). URL [https://doi.org/10.1038/s41586-022-05034-z](https://doi.org/10.1038/s41586-022-05034-z).
* [34] Liu, Y. _et al._ Enhancement of superconductivity and suppression of charge-density wave in As-doped CsV\({}_{3}\)Sb\({}_{5}\). _Phys. Rev. Mater._**6**, 124803 (2022). URL [https://link.aps.org/doi/10.1103/PhysRevMaterials.6.124803](https://link.aps.org/doi/10.1103/PhysRevMaterials.6.124803).
* [35] Johnston, D. C. The puzzle of high temperature superconductivity in layered iron pnictides and chalcogenides. _Advances in Physics_**59**, 803-1061 (2010).
* [36] Emery, V. J. & Kivelson, S. A. Superconductivity in bad metals. _Phys. Rev. Lett._**74**, 3253-3256 (1995). URL [https://link.aps.org/doi/10.1103/PhysRevLett.74.3253](https://link.aps.org/doi/10.1103/PhysRevLett.74.3253).
* [37] Ramirez, A. Strongly geometrically frustrated magnets. _Annual Review of Materials Science_**24**, 453-480 (1994).
* [38] Lohneysen, H. v., Rosch, A., Vojta, M. & Wolfle, P. Fermi-liquid instabilities at magnetic quantum phase transitions. _Rev. Mod. Phys._**79**, 1015-1075 (2007). URL [https://link.aps.org/doi/10.1103/RevModPhys.79.1015](https://link.aps.org/doi/10.1103/RevModPhys.79.1015).
* [39] Wu, W. _et al._ Superconductivity in the vicinity of antiferromagnetic order in CrAs. _Nat. Commun._**5**, 1-5 (2014). URL [https://doi.org/10.1038/ncomms6508](https://doi.org/10.1038/ncomms6508).
* [40] Cheng, J.-G. _et al._ Pressure induced superconductivity on the border of magnetic order in MnP. _Phys. Rev. Lett._**114**, 117001 (2015). URL [https://link.aps.org/doi/10.1103/PhysRevLett.114.117001](https://link.aps.org/doi/10.1103/PhysRevLett.114.117001).
* [41] Zheng, L. _et al._ Emergent charge order in pressurized kagome superconductor CsV\({}_{3}\)Sb\({}_{5}\). _Nature_**611**, 682-687 (2022). URL [https://doi.org/10.1038/s41586-022-05351-3](https://doi.org/10.1038/s41586-022-05351-3).
* [42] Zhang, Z. _et al._ Pressure-induced reemergence of superconductivity in the topological kagome metal CsV\({}_{3}\)Sb\({}_{5}\). _Phys. Rev. B_**103**, 224513 (2021). URL [https://link.aps.org/doi/10.1103/PhysRevB.103.224513](https://link.aps.org/doi/10.1103/PhysRevB.103.224513).
* [43] Chen, K. Y. _et al._ Double superconducting dome and triple enhancement of \(T_{c}\) in the kagome superconductor CsV\({}_{3}\)Sb\({}_{5}\) under high pressure. _Phys. Rev. Lett._**126**, 247001 (2021). URL [https://link.aps.org/doi/10.1103/PhysRevLett.126.247001](https://link.aps.org/doi/10.1103/PhysRevLett.126.247001).
* [44] Norman, M. R. The challenge of unconventional superconductivity. _Science_**332**, 196-200 (2011). URL [https://doi.org/10.1126/science.1200181](https://doi.org/10.1126/science.1200181).
**Acknowledgments** We thank Y.T. Song for the assistance in the low-temperature single-crystal X-ray diffraction measurements. This work is supported by the National Natural Science Foundation of China (grant nos. 12050003, 12004337, 12025408, 11921004, 11834016, 12204298, 12274364, and 12274369), the National Key R&D Program of China (grant nos. 2022YFA1403202, 2021YFA1400200), Beijing Natural Science Foundation (grant no. Z190008), the Key R&D Program of Zhejiang Province, China (grant no. 2021C01002), the Strategic Priority Research Program of CAS (grant no. XDB33000000), the Users with Excellence Program of Hefei Science Center CAS (grant no. 2021HSC-UE008), and the Outstanding Member of Youth Promotion Association of CAS (grant no. Y2022004). The high-pressure experiments were carried out at the Cubic Anvil Cell (CAC) station of Synergic Extreme Condition User Facility (SECUF).
**Author Contributions** G.-H.C. coordinated the work, co-conceived the experiments with Y.L., and interpreted the result in discussion with J.-G.C., J.-K.B., Y.L., X.-F.X, and C.C. The high-pressure experiments were performed by Z.-Y.L., P.-T.Y., and B.-S.W. under the leadership of J.-G.C. J.-K.B contributed the structural analysis with the help from J.-Y.L. The theoretical calculations were made by L.-W.J., C.-C.X., H.J., and C.C. Crystals were grown by Y.L., W.-L.C., J.-Y.L., and C.-C.L. The ambient-pressure measurements were done by Y.L., W.-Z.Y., Q.T., Z.R., and Z.-A.X. The paper was written by G.-H.C., J.-G.C., J.-K.B., Y.L., and Z.-Y.L. All co-authors made comments on the manuscript.
**Competing interests** The authors declare no competing financial interests.
**Correspondence and requests for materials** should be addressed to Guang-Han Cao and Jin-Guang Cheng.
**Data availability** The data shown in the main figures are provided in the Source data.
## Methods
**Crystal Growth and Characterization**
Single crystals of CsCr\({}_{3}\)Sb\({}_{5}\) were grown using a self-flux method from the constituent elements Cs (Alfa 99.999\(\%\)), Cr (Alfa 99.99\(\%\)), and Sb (Aladdin 99.999\(\%\)). A eutectic composition in the CsSb\(-\)CsSb\({}_{2}\) quasi-binary system was employed as the flux. The mixture of Cs, Cr, and Sb was loaded into an alumina crucible, which was then sealed in a Ta tube by arc welding under an argon atmosphere. The Ta tube was protected from oxidation by sealing it in an evacuated silica ampoule. The sample-loaded assembly was heated in a furnace to 850-900 \({}^{\circ}\)C, held for 18 h, and subsequently cooled slowly at a rate of 2-4 \({}^{\circ}\)C/h to 500-600 \({}^{\circ}\)C. Thin crystalline flakes can be found in the melts. The harvested crystals are stable in air, with sizes up to 0.5 \(\times\) 0.5 \(\times\) 0.02 mm\({}^{3}\).
Single-crystal XRD was carried out on a Bruker D8 Venture diffractometer with Mo K\(\alpha\) radiation. A piece of CsCr\({}_{3}\)Sb\({}_{5}\) crystal with dimensions of \(0.018\times 0.168\times 0.172\) mm\({}^{3}\) was mounted on the sample holder using polybutene oil. A flow of cryogenic helium gas was used to cool the crystal from room temperature to 40 K first, and then to warm it up to 50 and 70 K successively. We also measured another piece of the crystal at 55, 58, and 70 K. A full data set was collected at each temperature. The data reduction, including integration and scaling, was done with the commercial software package APEX4. The reconstructed images in the reciprocal space from the raw frames were produced using the reciprocal unit vectors of the hexagonal lattice with the software CrysAlis\({}^{\rm Pro}\) (CrysAlis Pro Version 171.40.53, Rigaku Oxford Diffraction). The crystal structure of CsCr\({}_{3}\)Sb\({}_{5}\) was initially solved by SUPERFLIP\({}^{1}\) and then refined against the structure factor \(F\) in JANA2006\({}^{2}\). We also performed a \(\theta-2\theta\) scan on a PANalytical X-ray diffractometer (Model EMPYREAN) with monochromatic CuK\(\alpha_{1}\) radiation, which generates the \((00l)\) diffraction pattern. The chemical composition of the as-grown crystals was determined using EDX spectroscopy on a scanning electron microscope (Hitachi S-3700N) equipped with an Oxford Instruments X-Max spectrometer.
**Physical Property Measurements**
The measurements of electrical resistivity, magneto-resistivity, Hall effect, and specific heat were carried out on a physical property measurement system (PPMS-9, Quantum Design). The resistivity was measured by a standard four-terminal method using silver paste for making the electrodes. For the measurement of \(\rho_{c}(T)\), the electrodes were made on both sides of the crystal, and the current electrodes were considerably larger than the voltage ones, such that the electric current flows essentially homogeneously along the \(c\) axis. The Hall coefficient and magnetoresistance with the magnetic field parallel to the \(c\) axis were measured simultaneously on a nearly square-shaped crystal with a six-electrode configuration. The resistance and Hall signals were obtained by symmetrizing and antisymmetrizing, respectively, the data collected in reversed magnetic fields. Specific heat was measured using the thermal relaxation method on dozens of crystals (total mass 0.10 mg). The samples were glued on the heat-capacity puck with N grease. The data of the addenda were measured in advance. The measurements were carried out three times at each temperature, and the final \(C(T)\) data were obtained by averaging.
The magnetic measurements were performed on a magnetic property measurement system (MPMS-3, Quantum Design). Samples with a total mass of 0.10 mg were carefully mounted on the sample holder. For the measurements with the field perpendicular to the \(c\) axis, pieces of crystals were attached to the quartz
paddle with a little N grease. For the case with field parallel to the \(c\) axis, an additional high-purity quartz plate was used to hold the samples. The quartz plate was stuck to the quartz paddle with GE varnish. The assembly without samples was measured in advance as addenda.
The high-pressure experiments were carried out at the CAC station of SECUF at Huairou, Beijing. A standard four-probe method was used for resistivity measurements under high pressure in the CAC. The sample was hung inside a Teflon capsule filled with glycerol pressure transmitting medium (PTM). The three-axis compression geometry, together with the adoption of a liquid PTM, ensures excellent pressure homogeneity. The pressure values in the CAC were estimated from the pressure-loading force calibration curve predetermined at room temperature. The mutual-induction method was used for the ac magnetic susceptibility measurements in the CAC. Several thin pieces of the sample, together with a piece of Pb serving as the pressure marker and superconducting reference, were placed inside handmade primary/secondary coils of about 50 turns each. The primary coil was driven by an ac current of 1 mA at 317.7 Hz, while the output signal from the secondary coil was measured with a Stanford SR830 lock-in amplifier. Details about the sample assembly and pressure calibration of the CAC can be found elsewhere[3].
**Electronic Structure Calculations**
The DFT-based first-principles calculations were performed using the Vienna ab initio Simulation Package[4]. The Kohn-Sham wave functions were treated with the projector augmented-wave method[5]. The exchange-correlation energy was calculated with a Perdew-Burke-Ernzerhof-type functional[6]. The energy cutoff of the plane-wave basis was set to 450 eV, and a \(\Gamma\)-centered 12\(\times\)12\(\times\)6 k-point mesh was employed in the self-consistent calculations. The experimental room-temperature crystal structure was adopted for the ambient-pressure calculations. As for the high-pressure calculations, the lattice constants and atomic coordinates were fully relaxed, leading to the calculated structural parameters \(a\) = 5.325 Å, \(c\) = 8.683 Å, and \(z(\mathrm{Sb}2)=0.2479\). The Fermi-surface sheets were obtained by constructing the tight-binding Hamiltonian with Cr-3\(d\) and Sb-5\(p\) orbitals using the maximally localized Wannier function (MLWF) method[7].
## References
* [1] Palatinus, L. & Chapuis, G. SUPERFLIP - a computer program for the solution of crystal structures by charge flipping in arbitrary dimensions. _Journal of Applied Crystallography_**40**, 786-790 (2007). URL [https://doi.org/10.1107/S0021889807029238](https://doi.org/10.1107/S0021889807029238).
* [2] Petricek, V., Dusek, M. & Palatinus, L. Crystallographic computing system JANA2006: General features. _Zeitschrift fur Kristallographie-Crystalline Materials_**229**, 345-352 (2014). URL [https://doi.org/10.1515/zkri-2014-1737](https://doi.org/10.1515/zkri-2014-1737).
* [3] Cheng, J. G. _et al._ Integrated-fin gasket for palm cubic-anvil high pressure apparatus. _Rev. Sci. Instrum._**85**, 093907 (2014). URL [https://doi.org/10.1063/1.4896473](https://doi.org/10.1063/1.4896473).
* [4] Kresse, G. & Furthmuller, J. Efficient iterative schemes for \(abinitio\) total-energy calculations using a plane-wave basis set. _Phys. Rev. B_**54**, 11169 (1996). URL [https://link.aps.org/doi/10.1103/PhysRevB.54.11169](https://link.aps.org/doi/10.1103/PhysRevB.54.11169).
* [5] Blochl, P. E. Projector augmented-wave method. _Phys. Rev. B_**50**, 17953 (1994). URL [https://link.aps.org/doi/10.1103/PhysRevB.50.17953](https://link.aps.org/doi/10.1103/PhysRevB.50.17953).
* [6] Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. _Phys. Rev. Lett._**77**, 3865 (1996). URL [https://link.aps.org/doi/10.1103/PhysRevLett.77.3865](https://link.aps.org/doi/10.1103/PhysRevLett.77.3865).
* [7] Marzari, N. & Vanderbilt, D. Maximally localized generalized Wannier functions for composite energy bands. _Phys. Rev. B_**56**, 12847 (1997). URL [https://link.aps.org/doi/10.1103/PhysRevB.56.12847](https://link.aps.org/doi/10.1103/PhysRevB.56.12847).
Figure 5: **Extended Data Fig. 1 Characterizations of CsCr\({}_{3}\)Sb\({}_{5}\) crystals**. **a**, Optical photographs (left and middle top), an SEM image (right top), and the typical EDX spectrum. **b,c**, Reconstructed single-crystal XRD patterns of (\(hk0\)) and (\(h0l\)) reflection planes, respectively, at 298 K. **d**, XRD \(\theta-2\theta\) scan at 300 and 20 K. **e**, The \((00l)\) reflections at different temperatures. **f**, Relative lattice parameter \(c/c_{300\mathrm{K}}\) as a function of temperature, showing a phase transition at \(T=50\pm 5\) K.
Figure 6: **Extended Data Fig. 2 Electrical transport properties of CsCr\({}_{3}\)Sb\({}_{5}\) crystals**. **a**, \(\rho_{ab}(T)\) data in heating and cooling modes. The inset shows the four-electrode measurement configuration. **b,c**, Magneto-resistivity as functions of field along (**b**) and perpendicular to (**c**) the crystallographic \(c\) axis. **d**, Magnetoresistance (\([\rho(H)-\rho(0)]/\rho(0)\%\)) versus \(T\). **e**, Hall resistivity as a function of magnetic fields at various temperatures. **f**, Temperature dependence of Hall coefficient, \(R_{\rm H}\). Anomalies at \(T_{1}=\) 56.2 K and \(T_{2}=\) 52.7 K as marked by the dashed vertical lines in **d,f**.
Figure 7: **Extended Data Fig. 3 More information on the structural transition and modulations in CsCr\({}_{3}\)Sb\({}_{5}\).****a-c**, Reconstructed (\(hk0\)) planes of reflections at 40, 50, and 70 K, respectively, with unit vectors \(\mathbf{a}^{*}\) and \(\mathbf{b}^{*}\) marked. **d-f**, Reconstructed (\(0kl\)) planes of reflections at 40, 50, and 70 K, respectively. **g**, Reconstructed (\(\bar{1}kl\)) plane of reflections up to the highest resolution achieved by the data set. **h**, Illustration of monoclinic distortion in the reciprocal space to interpret the observed diffraction pattern in **g**. **i** Possible group-subgroup graph with the number of twin domains based on the qualitative analysis on satellite and main reflections for the structural modulation at 40 K. **j,l**, (\(hk2\)) planes of XRD reflections at 55 and 70 K, respectively, for another piece of the crystal. **k**, A close-up of the marked area in **j**.
Figure 8: **Extended Data Fig. 4 Additional high-pressure transport measurement data and their analyses for CsCr\({}_{3}\)Sb\({}_{5}\) crystals.****a-c**, Expanded plots of \(\rho(T)\) (left axis) and d\(\rho\)/d\(T\) (right axis) at \(P=1.8\), 3.7, and 4.0 GPa, respectively. **d-f**, Superconducting resistive transitions in magnetic fields at \(P=4.0\), 5.7, and 8.0 GPa, respectively. **g**, Log-log plot for the \(\rho(T)\) relations at various pressures, which gives the power \(\alpha\) in the formula \(\rho=\rho_{0}^{\prime}+CT^{\alpha}\). **h**, Fittings of low-\(T\) resistivity at various fields within Fermi-liquid scenario, which yield the coefficient \(A\) and \(\rho_{0}\) as described in the main text.
Figure 9: **Extended Data Fig. 5 Additional high-pressure magnetic susceptibility measurements for CsCr\({}_{3}\)Sb\({}_{5}\) crystals.****a**, Temperature dependence of ac susceptibility \(\chi^{\prime}\) at zero field as well as under small magnetic fields for suppressing superconductivity of the reference material Pb. **b,c**, Close-ups of **a** highlighting the superconducting transitions at 5.05 GPa (**b**) and 7.07 GPa (**c**).
Figure 10: **Extended Data Fig. 6 Additional information on the band structure of nonmagnetic hexagonal CsCr\({}_{3}\)Sb\({}_{5}\).** **a-c**, Band structure projected onto different classes of Cr-\(3d\) orbitals. The symbol size weights the occupation fractions. **d**, Comparison of the band structure with that of CsV\({}_{3}\)Sb\({}_{5}\). Flat bands are highlighted with transparent stripes. **e**, Comparison of the band structure at 0 and 5 GPa.
\begin{table}
\begin{tabular}{l l} \hline Chemical formula & CsCr\({}_{3}\)Sb\({}_{5}\) \\ Formula weight & 897.6 g/mol \\ X-ray wavelength & 0.71073 Å \\ Crystal system & Hexagonal \\ Space group & \(P6/mmm\) \\ & \(a\) = 5.4909(3) Å, \(\alpha\) = 90\({}^{\circ}\) \\ Unit-cell dimensions & \(b\) = 5.4909(3) Å, \(\beta\) = 90\({}^{\circ}\) \\ & \(c\) = 9.2417(5) Å, \(\gamma\) = 120\({}^{\circ}\) \\ Volume & 241.31(2) Å\({}^{3}\) \\ \(Z\) & 1 \\ Density (calculated) & 6.1771 g/cm\({}^{3}\) \\ Absorption coefficient & 20.646 mm\({}^{-1}\) \\ \(F\)(000) & 382 \\ Crystal size & 0.172 \(\times\) 0.168 \(\times\) 0.018 mm\({}^{3}\) \\ \(\theta\) range for data collection & 2.2\({}^{\circ}\) to 31.54\({}^{\circ}\) \\ Index ranges & \(-8\leq h\leq 8\), \(-8\leq k\leq 8\), \(-13\leq l\leq 13\) \\ Reflections collected & 10522 \\ Independent reflections & 207 [\(R_{\rm int}\) = 0.0532] \\ Completeness to \(\theta\) = 31.54\({}^{\circ}\) & 100\% \\ Refinement method & \(F\) \\ Data / restraints / parameters & 207 / 0 / 11 \\ Goodness - of - fit & 3.10 \\ Final \(R^{*}\) indices [\(I>3\sigma(I)\)] & \(R_{\rm obs}\) = 0.0283, \(wR_{\rm obs}\) = 0.0467 \\ \(R\) indices [all data] & \(R_{\rm all}\) = 0.0350, \(wR_{\rm all}\) = 0.0477 \\ Extinction coefficient & NA \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{l}{Largest diff. peak and hole} & 3.80 and \(-\)2.21 eÅ\({}^{-3}\) \\ \hline \multicolumn{1}{c}{\(*\)\(R\) = \(\Sigma\)\(||F_{o}\)\(|\)\(-\)\(|F_{c}\)\(||\)\(/\Sigma\)\(|\)\(F_{o}\)\(|\), \(wR=\Sigma\)\([w(|\)\(F_{o}\)\(|^{2}-|\)\(F_{c}\)\(|^{2})^{2}]\)\(/\Sigma\)\([w(|\)\(F_{o}\)\(|^{4})]\)\({}^{1/2}\) and \(w=1/(\sigma^{2}(F)+0.0001F^{2})\).} \\ \hline Label & \(x\) & \(y\) & \(z\) Occupancy & \(U_{\rm eq^{*}}\) & \(U_{11}\) & \(U_{22}\) & \(U_{33}\) & \(U_{12}\) & \(U_{13}\) & \(U_{23}\) \\ Cs1 & 0 & 0 & 0 & 1 & 22(1) & 21(1) & 23(1) & 10(1) & 0 & 0 \\ Sb1 & 0 & 0 & 0.5000 & 1 & 15(1) & 8(1) & 8(1) & 29(1) & 4(1) & 0 & 0 \\ Sb2 & 1/3 & 2/3 & 0.2624(1) & 1 & 15(1) & 15(1) & 16(1) & 7(1) & 0 & 0 \\ Cr1 & 0.5 & 0 & 0.5 & 1 & 14(1) & 12(1) & 15(2) & 17(2) & 7(1) & 0 & 0 \\ \hline \multicolumn{1}{c}{\(*\)\(U_{\rm eq}\) is defined as one third of the trace of the orthogonalized \(U_{ij}\) tensor. The anisotropic displacement factor exponent takes the form: \(-2\pi^{2}[h2a*2U_{11}+...+2hka*b*U_{12}]\).} \\ \hline Bond & & Distance (Å) \\ Cr-Cr & & 2.7455(5)\(\times\)4 \\ Cr-Sb & & 2.7455(3)\(\times\)2 \\ Cr-Sb & & 2.7081(7)\(\times\)4 \\ Cs-Sb & & 3.9914(6)\(\times\)12 \\ \hline \end{tabular}
\end{table}
Table 1: Crystallographic data of CsCr\({}_{3}\)Sb\({}_{5}\) at 298 K by the structural refinement of the single-crystal XRD. The unit of the equivalent isotropic and anisotropic displacement parameters is 0.001 Å\({}^{2}\). |
2309.03707 | A Probabilistic Semi-Supervised Approach with Triplet Markov Chains | Triplet Markov chains are general generative models for sequential data which
take into account three kinds of random variables: (noisy) observations, their
associated discrete labels and latent variables which aim at strengthening the
distribution of the observations and their associated labels. However, in
practice, we do not have at our disposal all the labels associated to the
observations to estimate the parameters of such models. In this paper, we
propose a general framework based on a variational Bayesian inference to train
parameterized triplet Markov chain models in a semi-supervised context. The
generality of our approach enables us to derive semi-supervised algorithms for
a variety of generative models for sequential Bayesian classification. | Katherine Morales, Yohan Petetin | 2023-09-07T13:34:20Z | http://arxiv.org/abs/2309.03707v1 | # A Probabilistic Semi-Supervised Approach with Triplet Markov Chains
###### Abstract
Triplet Markov chains are general generative models for sequential data which take into account three kinds of random variables: (noisy) observations, their associated discrete labels and latent variables which aim at strengthening the distribution of the observations and their associated labels. However, in practice, we do not have at our disposal all the labels associated to the observations to estimate the parameters of such models. In this paper, we propose a general framework based on a variational Bayesian inference to train parameterized triplet Markov chain models in a semi-supervised context. The generality of our approach enables us to derive semi-supervised algorithms for a variety of generative models for sequential Bayesian classification.
Katherine Morales, Yohan Petetin
Generative Models; Variational Inference; Semi-Supervised Learning; Triplet Markov Chains.
## 1 Introduction
This paper focuses on semi-supervised learning for sequential Bayesian classification in general generative models. Let us start by recalling the principle of Bayesian classification and semi-supervised estimation.
### Sequential Bayesian classification
We denote as \(\mathbf{x}_{T}=(x_{0},\ldots,x_{T})\) a sequence of observed random variables (r.v.) and \(\mathbf{z}_{T}=(z_{0},\ldots,z_{T})\) a sequence of latent r.v. We also introduce a sequence of labels \(\mathbf{y}_{T}=(y_{0},\ldots,y_{T})\) associated to the previous sequence \(\mathbf{x}_{T}\). We will assume that \(x_{t}\in\mathbb{R}^{d_{x}}\) and \(z_{t}\in\mathbb{R}^{d_{z}}\), while the label \(y_{t}\) is discrete, so \(y_{t}\in\Omega=\{\omega_{1},\ldots,\omega_{C}\}\). As far as notations are concerned, we do not distinguish between r.v. and their realizations. For example, \(\mathbf{x}_{T}\) can represent a noisy grayscale image while \(\mathbf{y}_{T}\) represents the original black and white image. For this application, the latent variable \(z_{t}\) can be used to govern the conditional distribution of the noise given the original label.
When the labels associated to \(\mathbf{x}_{T}\) are not observed, the objective associated to Bayesian classification consists in computing, for all \(t\), the posterior distributions
\[p(y_{t}|\mathbf{x}_{T})=\frac{\sum_{\mathbf{y}_{0:t-1},\mathbf{y}_{t+1:T}}\int p(\mathbf{y}_{T},\mathbf{x}_{T},\mathbf{z}_{T})\mathrm{d}\mathbf{z}_{T}}{\sum_{\mathbf{y}_{T}}\int p(\mathbf{y}_{T},\mathbf{x}_{T},\mathbf{z}_{T})\mathrm{d}\mathbf{z}_{T}}. \tag{1}\]
Consequently, we first need to define a parameterized model \(p_{\theta}(\mathbf{x}_{T},\mathbf{y}_{T},\mathbf{z}_{T})\) which aims at describing the r.v. involved in the problem and from which it is possible to estimate \(\theta\) and next to compute (1) at a reasonable computational cost.
The estimation of \(\theta\) (i.e. the learning step) can be realized from sequences where we have at our disposal \((\mathbf{x}_{T},\mathbf{y}_{T})\) (supervised learning) or only \(\mathbf{x}_{T}\) (unsupervised learning). This general problem arises in many fields, such as speech recognition [1], natural language processing [2], and activity recognition [3].
### Semi-Supervised learning
The problem we consider in this paper is a little bit different. In many real-world applications, it is expensive or impossible to obtain labels for the entire sequence due to various reasons such as the high cost of labeling, the lack of expertise, or the lack of time. So from now on we assume that we have at our disposal a sequence of observations \(\mathbf{x}_{T}\) with partially observed labels and that i) we want to train relevant generative models; and ii) we look for estimating the missing labels associated to each sequence. In other words, decomposing a sequence of labels \(\mathbf{y}_{T}\) as
\[\mathbf{y}_{T}=(\mathbf{y}_{T}^{\mathcal{L}},\mathbf{y}_{T}^{\mathcal{U}}),\]

where \(\mathbf{y}_{T}^{\mathcal{L}}=\{y_{t}\}_{t\in\mathcal{L}}\) (resp. \(\mathbf{y}_{T}^{\mathcal{U}}=\{y_{t}\}_{t\in\mathcal{U}}\)) denotes the observed (resp. the unobserved) labels (so \(\mathcal{L}\) (resp. \(\mathcal{U}\)) denotes the time indices of observed (resp. unobserved) labels), we now look for estimating \(\theta\) from \((\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})\), and next computing, for all \(t\in\mathcal{U}\),

\[p(y_{t}|\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})=\frac{\sum_{y_{s},\,s\in\mathcal{U}\setminus\{t\}}\int p(\mathbf{y}_{T},\mathbf{x}_{T},\mathbf{z}_{T})\mathrm{d}\mathbf{z}_{T}}{\sum_{y_{s},\,s\in\mathcal{U}}\int p(\mathbf{y}_{T},\mathbf{x}_{T},\mathbf{z}_{T})\mathrm{d}\mathbf{z}_{T}}.\]
### Scope of the paper
In this paper, we show that it is possible to propose a very general framework for semi-supervised learning in sequential generative models. In particular, we show that it is sufficient to consider a generative model in which the triplet process \(\{z_{t},x_{t},y_{t}\}_{t\geq 0}\) is Markovian. As we will see later, such a general generative model encompasses well-known generative models such as the Variational Sequential Labeler (VSL) [4]
or the Semi-supervised Variational Recurrent Neural Network (SVRNN) [5]. Due to this general interpretation, we are able to propose new generative models that outperform the previous ones for some semi-supervised learning tasks. The estimation of these general models is based on an adaptation of the variational Bayesian framework [6] for sequential and partially observed data.
The paper is organised as follows. In section 2, we recall the general triplet Markov chain (TMC) model [7, 8] and the principle of variational inference. In section 3, we show that our previous general models can be estimated with modified variational inference techniques in the context of semi-supervised learning. We next show that this general inference technique encompasses popular semi-supervised learning algorithms and we propose our Deep TMC model. Finally, in section 4, we compare different approaches for the image segmentation problem.
## 2 Background
This section introduces the TMC model and the principle of variational Bayesian inference techniques.
### Triplet Markov Chains
Let us first start with the pair process \(\{x_{t},y_{t}\}_{t\geq 0}\). A popular model for describing this process is the hidden Markov chain model in which the sequences of label is assumed to be Markovian and the (noisy) observations are independent given the labels. In addition, the observation at a current time only depends on the label at the same time. In other words,
\[p(\mathbf{x}_{T},\mathbf{y}_{T})\stackrel{{\mathrm{HMC}}}{{=}}p(y _{0})\prod_{t=1}^{T}p(y_{t}|y_{t-1})\prod_{t=0}^{T}p(x_{t}|y_{t}).\]
However, this model may be poor in practice due to the Markovian assumption on the label. A simple way to relax it is to consider the pairwise Markov chain (PMC) model in which the pair \(\{x_{t},y_{t}\}_{t\geq 0}\) is assumed to be Markovian,
\[p(\mathbf{x}_{T},\mathbf{y}_{T})\stackrel{{\mathrm{ PMC}}}{{=}}p(y_{0},x_{0})\prod_{t=1}^{T}p(y_{t},x_{t}|y_{t-1},x_{t-1}).\]
Even if this model is a direct generalization of the previous HMC model, it may be also unsatisfying in practice. The reason why is that it relies on the particular choice of the transition distribution \(p(y_{t},x_{t}|y_{t-1},x_{t-1})\) which may be difficult in practice. A simple way to address this problem is to introduce a new latent process \(\{z_{t}\}_{t\geq 0}\) which aims at making the previous distribution more robust. By denoting \(v_{t}=(z_{t},x_{t},y_{t})\), the general TMC model satisfies
\[p(\mathbf{z}_{T},\mathbf{x}_{T},\mathbf{y}_{T})=p(\mathbf{v}_{T})\stackrel{{ \mathrm{TMC}}}{{=}}p(v_{0})\prod_{t=1}^{T}p(v_{t}|v_{t-1}) \tag{2}\]
and so, relies on the transition distribution
\[p(v_{t}|v_{t-1})=p(z_{t},x_{t},y_{t}|z_{t-1},x_{t-1},y_{t-1}).\]
It encompasses the previous PMC and HMC models and also leads to a general class of generative models. For example, it is possible to first describe the distribution of the latent variable, then that of the label given the latent variable, and finally that of the noisy observation given the latent variable and the label. In this case, the transition distribution reads as
\[p(v_{t}|v_{t-1})=p(z_{t}|v_{t-1})p(y_{t}|z_{t},v_{t-1})p(x_{t}|y_{t},z_{t},v_{t -1}).\]
Even if the nature of the previous distributions is standard, the distribution of interest
\[p(\mathbf{x}_{T},\mathbf{y}_{T})=\int p(\mathbf{z}_{T},\mathbf{x}_{T}, \mathbf{y}_{T})\mathrm{d}\mathbf{z}_{T}\]
can be complex. In order to be used in practice, we have to first choose a parameterized transition distribution \(p_{\theta}(v_{t}|v_{t-1})\) and then estimate the parameter \(\theta\) from partially labelled observations. This estimation step relies on the variational inference framework that we now recall.
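Before turning to inference, a minimal numpy sketch of ancestral sampling from a TMC with the factorization above; the Gaussian and Bernoulli choices and all coefficients are illustrative placeholders, not the models studied later in the paper:

```python
# Sketch: ancestral sampling of (z_t, y_t, x_t) from a toy TMC transition
# p(z_t|v_{t-1}) p(y_t|z_t, v_{t-1}) p(x_t|y_t, z_t, v_{t-1}).
import numpy as np

rng = np.random.default_rng(0)
T, C = 100, 2                      # sequence length, number of classes

z = np.zeros(T); y = np.zeros(T, dtype=int); x = np.zeros(T)
z[0] = rng.normal(); y[0] = rng.integers(C); x[0] = rng.normal(y[0], 0.5)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for t in range(1, T):
    z[t] = 0.9 * z[t - 1] + 0.1 * x[t - 1] + 0.3 * rng.normal()   # p(z_t | v_{t-1})
    p1 = sigmoid(2.0 * z[t] + (2 * y[t - 1] - 1))                 # p(y_t = 1 | z_t, v_{t-1})
    y[t] = rng.binomial(1, p1)
    x[t] = 0.5 * y[t] + 0.2 * z[t] + 0.25 * rng.normal()          # p(x_t | y_t, z_t, v_{t-1})
```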
### Variational Bayesian Inference
In this section, we give up the temporal aspect of the problem and we only consider an observation \(x\) and a latent r.v. \(z\); the distribution \(p_{\theta}(x,z)=p(z)p(x|z)\) is assumed to be known and we want to estimate \(\theta\) from a realization \(x\). A popular estimator is the Maximum-Likelihood (ML) estimate \(\tilde{\theta}=\arg\max_{\theta}p_{\theta}(x)\) due to its statistical properties [9, 10]. However, a direct maximization of \(p_{\theta}(x)\) is not always possible, particularly in models with latent variables where the likelihood \(p_{\theta}(x)=\int p_{\theta}(x,z)\mathrm{d}z\) is not computable. In the variational inference framework, a variational lower bound called evidence lower bound (ELBO) on the log-likelihood is optimized in order to estimate the parameters \(\theta\)[6]. This variational lower bound relies on the introduction of a parameterized variational distribution \(q_{\phi}(z|x)\) which aims at mimicking the true posterior \(p_{\theta}(z|x)\) and which is parameterized by a set of parameters \(\phi\); the ELBO reads
\[\tilde{Q}(\theta,\phi)=-\int\log\left(\frac{q_{\phi}(z|x)}{p_{\theta}(x,z)} \right)q_{\phi}(z|x)\mathrm{d}z \tag{3}\]
and satisfies, for all \((\theta,\phi)\)
\[\log(p_{\theta}(x))\geq\tilde{Q}(\theta,\phi).\]
Equality holds if \(q_{\phi}(z|x)=p_{\theta}(z|x)\). In this particular case, the alternating maximization w.r.t. \(\theta\) and \(q_{\phi}\) of the ELBO, \(\tilde{Q}(\theta,\phi)\), coincides with the EM algorithm [11].
Variational inference consists in maximizing \(\tilde{Q}(\theta,\phi)\) with respect to \((\theta,\phi)\) for a given class of distributions \(q_{\phi}\).
The choice of the variational distribution \(q_{\phi}(z|x)\) is critical; \(q_{\phi}(z|x)\) should be close to \(p_{\theta}(z|x)\) but should also be chosen in such a way that the associated ELBO can be exactly computed or easily approximated while remaining differentiable w.r.t. \((\theta,\phi)\). A simple way to approximate \(\tilde{Q}(\theta,\phi)\) with a Monte Carlo method is to use the reparametrization trick [12], which consists in choosing a parametric distribution \(q_{\phi}(z|x)\) such that a sample \(z^{(i)}\sim q_{\phi}(z|x)\) can be written as a differentiable function of \(\phi\).
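As an illustration of this trick in the static case, a short PyTorch sketch of a one-sample reparameterized ELBO estimate for a diagonal Gaussian \(q_{\phi}(z|x)\); the `encoder`, `decoder` and `prior_logpdf` callables are placeholders standing for the parameterized distributions:

```python
# Sketch: one-sample reparameterized ELBO estimate with a diagonal Gaussian
# q_phi(z|x) = N(mu(x), diag(sigma(x)^2)); encoder/decoder are placeholder nets.
import torch

def elbo_one_sample(x, encoder, decoder, prior_logpdf):
    mu, log_sig = encoder(x)                     # variational parameters phi
    eps = torch.randn_like(mu)
    z = mu + torch.exp(log_sig) * eps            # z = g_phi(eps, x): differentiable in phi
    log_q = torch.distributions.Normal(mu, torch.exp(log_sig)).log_prob(z).sum(-1)
    log_p_xz = decoder(x, z) + prior_logpdf(z)   # log p_theta(x|z) + log p(z)
    return log_p_xz - log_q                      # single-sample estimate of the ELBO
```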
## 3 Semi-supervised variational inference for TMCs
Let us now turn back to the TMC model (2) described by a parameterized transition distribution \(p_{\theta}(v_{t}|v_{t-1})\).
### Variational inference
In order to estimate \(\theta\) from partially labelled observations \((\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})\), we now want to maximize the likelihood
\[p_{\theta}(\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})=\sum_{y_{s},\,s\in \mathcal{U}}\int p_{\theta}(\mathbf{z}_{T},\mathbf{x}_{T},\mathbf{y}_{T}) \mathrm{d}\mathbf{z}_{T}\]
which is not computable in the general case. We thus adapt the variational Bayesian framework of section 2.2, where \(x\leftarrow(\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})\) and \(z\leftarrow(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}})\).
The ELBO (3) now reads
\[Q(\theta,\phi)=-\sum_{y_{s},\,s\in\mathcal{U}}\int q_{\phi}( \mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x}_{T},\mathbf{y}_{T}^{ \mathcal{L}})\times\] \[\log\left(\frac{q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{ U}}|\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})}{p_{\theta}(\mathbf{z}_{T}, \mathbf{x}_{T},\mathbf{y}_{T})}\right)\mathrm{d}\mathbf{z}_{T}. \tag{4}\]
Let us now discuss on the computation of (4). First, it is worthwhile to remark that it does not depend on the choice of the generative model: any parameterized TMC model can be used since \(p_{\theta}(\mathbf{z}_{T},\mathbf{x}_{T},\mathbf{y}_{T})\) is known. So its computation only depends on the choice of the variational distribution \(q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x}_{T},\mathbf{y }_{T}^{\mathcal{L}})\). It can be factorized in two ways. For sake of clarity, we will omit the initial distribution of the variables at time \(t=0\). The first factorization coincides with
\[q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x}_ {T},\mathbf{y}_{T}^{\mathcal{L}})= \prod_{t=1}^{T}q_{\phi}(z_{t}|\mathbf{z}_{t-1},\mathbf{y}_{t-1}, \mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}})\times\] \[\prod_{t\in\mathcal{U}}^{T}q_{\phi}(y_{t}|\mathbf{y}_{t-1}, \mathbf{z}_{t},\mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}}), \tag{5}\]
(remember that \(\mathbf{y}_{t-1}=(\mathbf{y}_{t-1}^{\mathcal{U}},\mathbf{y}_{t-1}^{\mathcal{ L}})\)) while the second one coincides with
\[q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x }_{T},\mathbf{y}_{T}^{\mathcal{L}})= \prod_{t=1}^{T}q_{\phi}(z_{t}|\mathbf{z}_{t-1},\mathbf{y}_{t}, \mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}})\times\] \[\prod_{t\in\mathcal{U}}^{T}q_{\phi}(y_{t}|\mathbf{y}_{t-1}, \mathbf{z}_{t-1},\mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}}). \tag{6}\]
As we explained in the previous section, the choice of the variational distribution depends on the generative model. In particular here, it has to take into account the factorization of the parameterized transition distribution \(p_{\theta}(v_{t}|v_{t-1})\).
It remains to compute the ELBO (4), which is nothing more than an expectation according to \(q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})\). We thus propose to use a Monte-Carlo approximation based on the reparametrization trick in order to obtain a differentiable approximation \(\hat{Q}(\theta,\phi)\) of \(Q(\theta,\phi)\). More precisely, we use the classical reparameterization trick to sample sequentially according to the continuous distribution \(q_{\phi}(z_{t}|\mathbf{z}_{t-1},\mathbf{y}_{t-1},\mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}})\) (or \(q_{\phi}(z_{t}|\mathbf{z}_{t-1},\mathbf{y}_{t},\mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}})\)), while we use the Gumbel-Softmax (G-S) trick [13, 14] to sample according to \(q_{\phi}(y_{t}|\mathbf{y}_{t-1},\mathbf{z}_{t},\mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}})\) (or \(q_{\phi}(y_{t}|\mathbf{y}_{t-1},\mathbf{z}_{t-1},\mathbf{x}_{T},\mathbf{y}_{t+1:T}^{\mathcal{L}})\)) since the labels are discrete.
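For reference, a minimal sketch of the G-S relaxation used to draw differentiable samples of the discrete labels; the `logits` argument stands for the (unnormalized log-) output of the corresponding \(q_{\phi}(y_{t}|\cdot)\):

```python
# Sketch: Gumbel-Softmax (concrete) relaxation of a categorical sample,
# keeping y_t differentiable w.r.t. the logits of q_phi(y_t|.).
import torch

def gumbel_softmax_sample(logits, tau=0.5):
    # Gumbel noise: g = -log(-log(U)), U ~ Uniform(0, 1)
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel) / tau, dim=-1)  # relaxed one-hot vector
```

In practice, the relaxed one-hot vector is annealed towards a hard categorical sample by decreasing the temperature \(\tau\) during training.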
### Particular semi-supervised algorithms for TMCs
As we have seen, semi-supervised algorithms for TMC depend on two key ingredients: the generative model described by the transition distribution \(p_{\theta}(v_{t}|v_{t-1})\) which has an impact on the performance of the model for a specific task (classification, prediction, detection, or generation), and the choice of the variational distribution \(q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x}_{T},\mathbf{y }_{T}^{\mathcal{L}})\). Actually, our general framework encompasses two existing solutions and we propose a new one.
#### 3.2.1 Variational Sequential Labeler (VSL)
The VSL [4] is a semi-supervised learning model for sequential data and it has originally been proposed for the sequence labeling tasks in natural language processing. It can be seen as a particular case of the general framework we derived in section 3. This particular setting coincides with
\[p_{\theta}(v_{t}|v_{t-1})\stackrel{{\mathrm{VSL}}}{{=}}p_{\theta}( y_{t}|z_{t})p_{\theta}(z_{t}|x_{t-1},z_{t-1})p_{\theta}(x_{t}|z_{t}), \tag{7}\]
while the associated variational distribution satisfies factorization (5) with
\[q_{\phi}(z_{t}|\mathbf{z}_{t-1},\mathbf{y}_{t-1},\mathbf{x}_{T}, \mathbf{y}_{t+1:T}^{\mathcal{L}})=q_{\phi}(z_{t}|\mathbf{x}_{T}), \tag{8}\] \[q_{\phi}(y_{t}|\mathbf{y}_{t-1},\mathbf{z}_{t},\mathbf{x}_{T}, \mathbf{y}_{t+1:T}^{\mathcal{L}})=p_{\theta}(y_{t}|z_{t}). \tag{9}\]
In this case, the ELBO (4) reduces to
\[Q(\theta,\phi)\stackrel{{\rm VSL}}{{=}}\sum_{t\in \mathcal{L}}\int q_{\phi}(z_{t}|\mathbf{x}_{T})\log p_{\theta}(y_{t}|z_{t})\mathrm{ d}z_{t}+\] \[\sum_{t=0}^{T}\int q_{\phi}(z_{t}|\mathbf{x}_{T})\Bigg{[}\log p_{ \theta}(x_{t}|z_{t})-\] \[\log\left(\frac{q_{\phi}(z_{t}|\mathbf{x}_{T})}{p(z_{t}|x_{t-1}, z_{t-1})}\right)\Bigg{]}\mathrm{d}z_{t}. \tag{10}\]
It can be observed that it consists of two terms and that the previous assumptions enable us to interpret it as an expectation according to \(q_{\phi}(\mathbf{z}_{T}|\mathbf{x}_{T})\). Thus, it is not necessary to sample discrete variables according to the G-S trick. Moreover, a regularization term \(\beta\) can be introduced in the second part of the ELBO in order to encourage good performance on labeled data while leveraging the context of the noisy image during reconstruction. While this model simplifies the inference, it should be noted that in the generative process, the observation \(x_{t}\) is conditionally independent of its associated label, which may not be adapted to some applications.
#### 3.2.2 Semi-supervised Variational Recurrent Neural Network (SVRNN)
The generative model used in the SVRNN can also be seen a particular version of the TMC model where the latent variable \(z_{t}\) consists of the pair \(z_{t}=(z_{t}^{\prime},h_{t})\). The associated transition distribution reads:
\[p_{\theta}(v_{t}|v_{t-1})\!=\!p_{\theta}(y_{t}|v_{t-1})p_{\theta}(z_{t}|y_{t}, v_{t-1})p_{\theta}(x_{t}|y_{t},z_{t},\!v_{t-1})\!, \tag{11}\]
where
\[p_{\theta}(y_{t}|v_{t-1}) = p_{\theta}(y_{t}|h_{t-1}),\] \[p_{\theta}(z_{t}|y_{t},v_{t-1}) = \delta_{f_{\theta}(z_{t}^{\prime},y_{t},x_{t},h_{t-1})}(h_{t})\! \!\times\!p_{\theta}(z_{t}^{\prime}|y_{t},h_{t-1}),\] \[p_{\theta}(x_{t}|y_{t},z_{t},\!v_{t-1}) = p_{\theta}(x_{t}|y_{t},z_{t}^{\prime},h_{t-1}),\]
and where \(f_{\theta}\) is a deterministic function parameterized by a Recurrent Neural Network (RNN), for example.
On the other hand, the variational distribution \(q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})\) satisfies the factorization (6) with
\[q(z_{t}^{\prime}|\mathbf{z}_{t-1},\mathbf{y}_{t},\mathbf{x}_{T}, \mathbf{y}_{t+1:T}^{\mathcal{L}})=q_{\phi}(z_{t}^{\prime}|x_{t},y_{t},h_{t-1}),\] \[q(y_{t}|\mathbf{y}_{t-1},\mathbf{z}_{t-1},\mathbf{x}_{T}, \mathbf{y}_{t+1:T}^{\mathcal{L}})=q_{\phi}(y_{t}|x_{t},h_{t-1}),\]
but their final ELBO does not coincide with (4). The reason why is that they derive it from the static case and add a penalization term that encourages \(p_{\theta}(y_{t}|v_{t-1})\) and \(q_{\phi}(y_{t}|x_{t},h_{t-1})\) to be close to the empirical distribution of the data.
#### 3.2.3 Deep TMCs
We finally present a very general TMC model from which one can apply any of the previous techniques. The set of parameters \((\theta,\phi)\) can be described by any differentiable flexible function \(\psi(\cdot)\). In particular, we consider the case where the parameters are produced by a (deep) neural network.
Due to the different factorizations of the generating (resp. variational) distributions, we consider a general notation \(p_{\theta}(x_{t}|\cdot)\), \(p_{\theta}(z_{t}|\cdot)\) and \(p_{\theta}(y_{t}|\cdot)\) (resp. \(q_{\phi}(y_{t}|\cdot)\), \(q_{\phi}(z_{t}|\cdot)\)) in order to avoid presenting a specific dependence between variables. These dependencies are specified for each model and are presented in the previous sections.
Let \(\zeta(y_{t};\cdot)\) and \(\varsigma(y_{t};\cdot)\) (resp. \(\lambda(x_{t};\cdot)\); and \(\eta(z_{t};\cdot)\), \(\tau(z_{t};\cdot)\)) be two probability distributions on \(\Omega\) (resp. probability density functions on \(\mathbb{R}^{d_{x}}\); and \(\mathbb{R}^{d_{z}}\)). The general model is described by:
\[p_{\theta}(v_{t}|v_{t-1}) =p_{\theta}(x_{t}|\cdot)p_{\theta}(z_{t}|\cdot)p_{\theta}(y_{t}| \cdot),\] \[p_{\theta}(x_{t}|\cdot) =\lambda(x_{t};\psi_{px}(\cdot)), \tag{12}\] \[p_{\theta}(z_{t}|\cdot) =\eta(z_{t};\psi_{pz}(\cdot)),\] (13) \[p_{\theta}(y_{t}|\cdot) =\zeta(y_{t};\psi_{py}(\cdot)), \tag{14}\]
(remember that \(\psi_{px}(\cdot)\) denotes the parameters of the distribution \(p_{\theta}(x_{t}|\cdot)\) and can depend on \(v_{t-1}\) or \((v_{t-1},z_{t},y_{t})\) or \((v_{t-1},y_{t})\), etc... according to the original factorization of \(p_{\theta}(v_{t}|v_{t-1})\)). Finally, the variational distribution is given by
\[q_{\phi}(z_{t}|\cdot) =\tau(z_{t};\psi_{qz}(\cdot)), \tag{15}\] \[q_{\phi}(y_{t}|\cdot) =\varsigma(y_{t};\psi_{qy}(\cdot)). \tag{16}\]
The parameters \(\theta\) (resp. \(\phi\)) are derived from neural networks \((\psi_{px},\psi_{pz},\psi_{py})\) (resp. \((\psi_{qz},\psi_{qy})\)). Note that in the VSL model, \(\psi_{qy}\) is no longer needed since the assumption \(q_{\phi}(y_{t}|z_{t})=p_{\theta}(y_{t}|z_{t})\) is made.
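As an illustration, a minimal PyTorch sketch of such \(\psi\) networks (two hidden layers with ReLU and linear/softplus/sigmoid outputs, as in the experiments below); the class names and layer sizes are our own choices, not the exact architecture of the paper:

```python
# Sketch: psi(.) networks producing Gaussian parameters (mu, sigma) or a
# Bernoulli parameter rho from their conditioning variables.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PsiGaussian(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.mu = nn.Linear(d_hidden, d_out)            # linear output
        self.sig = nn.Linear(d_hidden, d_out)           # softplus output

    def forward(self, inp):
        h = self.net(inp)
        return self.mu(h), F.softplus(self.sig(h))      # (mu, sigma > 0)

class PsiBernoulli(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 1))

    def forward(self, inp):
        return torch.sigmoid(self.net(inp))             # rho in (0, 1)
```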
## 4 Simulations
In this section, we present the results of the proposed models on semi-supervised binary image segmentation. Our goal is to recover the segmentation of a binary image (\(\Omega=\{\omega_{1},\omega_{2}\}\)) from the noisy observations \(\mathbf{x}_{T}\) when a partial segmentation \(\mathbf{y}_{T}^{\mathcal{L}}\) is available.
In particular, \(\zeta(y_{t};\cdot)\) (resp. \(\varsigma(y_{t};\cdot)\)) is set as a Bernoulli distribution with parameter \(\rho_{py,t}\) (resp. \(\rho_{qy,t}\)). As for the distribution \(\lambda(x_{t};\cdot)\) (resp. \(\eta(z_{t};\cdot)\) and \(\tau(z_{t};\cdot)\)), we set it as a Gaussian distribution with parameters \([\mu_{px,t},\mathrm{diag}(\sigma_{px,t})]\) (resp. \([\mu_{pz,t},\mathrm{diag}(\sigma_{pz,t})]\) and \([\mu_{qz,t},\mathrm{diag}(\sigma_{qz,t})]\)), where \(\mathrm{diag}(\cdot)\) denotes the diagonal matrix deduced from the values of \(\sigma_{\cdot,t}\).
### Deep mTMC
In our simulations, we consider three particular cases of the deep TMC model. We start with the deep minimal TMC (d-mTMC) [15], where the choice of parameters describes the transition:
\[p_{\theta}(v_{t}|v_{t-1})\stackrel{{\rm mT}}{{=}}p_{\theta}(y_{t}|y_ {t-1})p_{\theta}(z_{t}|z_{t-1})p_{\theta}(x_{t}|y_{t},z_{t}). \tag{17}\]
This model thus assumes a Markovian distribution for the labels, while the latent variables aim at learning the distribution of the noise given the label and the latent variable. In order to capture temporal dependencies in the input data and to have an efficient computation of the variational distribution for the d-mTMC model, we use a deterministic function to generate \(\tilde{h}_{t}\) which takes as input \((x_{t},y_{t},z_{t},\tilde{h}_{t-1})\). The variational distribution \(q_{\phi}(\mathbf{z}_{T},\mathbf{y}_{T}^{\mathcal{U}}|\mathbf{x}_{T},\mathbf{y}_{T}^{\mathcal{L}})\) then satisfies the factorization (6) with \(q_{\phi}(z_{t}|x_{t},y_{t},\tilde{h}_{t-1})\) and \(q_{\phi}(y_{t}|x_{t},\tilde{h}_{t-1})\).
In this case, the parameters are given by:
\[[\mu_{px,t},\sigma_{px,t}]=\psi_{px}(y_{t},z_{t}),\] \[[\mu_{pz,t},\sigma_{pz,t}]=\psi_{pz}(z_{t-1}),\] \[\rho_{py,t}=\psi_{py}(y_{t-1}),\] \[[\mu_{qz,t},\sigma_{qz,t}]=\psi_{qz}(x_{t},y_{t},\tilde{h}_{t-1}),\] \[\rho_{qy,t}=\psi_{qy}(x_{t},\tilde{h}_{t-1}).\]
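Continuing the sketch above (and reusing the hypothetical `PsiNet` helper), one recurrence step of the d-mTMC variational distribution could look as follows; the dimensions and the use of a plain `nn.RNNCell` are assumptions made for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of the d-mTMC variational recurrence; assumes PsiNet above.
d_x, d_z, d_h = 1, 8, 16
rnn_cell = nn.RNNCell(input_size=d_x + 1 + d_z, hidden_size=d_h)

psi_qy = PsiNet(in_dim=d_x + d_h, hidden_dim=32, out_dim=1,
                out_activation=torch.sigmoid)                 # rho_{qy,t}
psi_qz = PsiNet(in_dim=d_x + 1 + d_h, hidden_dim=32, out_dim=2 * d_z)  # [mu, log sigma]

def variational_step(x_t, y_t, z_t, h_prev):
    # q_phi(y_t | x_t, h_{t-1}) and q_phi(z_t | x_t, y_t, h_{t-1})
    rho_qy = psi_qy(x_t, h_prev)
    mu_logsig = psi_qz(x_t, y_t, h_prev)
    mu_qz, log_sig_qz = mu_logsig.chunk(2, dim=-1)
    # deterministic update of the recurrent state from (x_t, y_t, z_t)
    h_t = rnn_cell(torch.cat([x_t, y_t, z_t], dim=-1), h_prev)
    return rho_qy, mu_qz, log_sig_qz, h_t
```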
### Experiments settings
We used the Binary Shape Database [16] and focused on both _cattle_-type and _camel_-type images. To transform these images into a 1-D signal ( \(\mathbf{x}_{T}\) ), we used a Hilbert-Peano filling curve [17]. To evaluate the models presented in Section 3.2, we introduced non-linear blurring to highlight their ability to learn and correct for signal corruption. More precisely, we generated an artificial noise for the _cattle_-type by generating \(x_{t}\) according to
\[x_{t}|y_{t},x_{t-1}\sim\mathcal{N}\Big{(}\sin(a_{y_{t}}+x_{t-1});\sigma^{2} \Big{)}, \tag{18}\]
where \(a_{\omega_{1}}=0\), \(a_{\omega_{2}}=0.4\) and \(\sigma^{2}=0.25\). We now consider the _camel_-type image which is corrupted with a stationary multiplicative noise (non-elementary noise) given by
\[x_{t}|y_{t},z_{t}\sim\mathcal{N}\left(a_{y_{t}};b_{y_{t}}^{2}\right)*z_{t}, \tag{19}\]
where \(z_{t}\sim\mathcal{N}(0,1)\), \(a_{\omega_{1}}=0\), \(a_{\omega_{2}}=0.5\) and \(b_{\omega_{1}}=b_{\omega_{2}}=0.2\).
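For concreteness, a minimal NumPy sketch of the two corruption processes (18)-(19) is given below; the initial value \(x_{0}=0\) and the toy 1-D label signal are assumptions of the sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cattle_noise(y, sigma2=0.25, a=(0.0, 0.4)):
    """Non-linear, auto-regressive corruption: x_t ~ N(sin(a_{y_t} + x_{t-1}), sigma^2)."""
    x = np.zeros(len(y))
    prev = 0.0                       # assumed initial value x_0 = 0
    for t, label in enumerate(y):
        x[t] = rng.normal(np.sin(a[label] + prev), np.sqrt(sigma2))
        prev = x[t]
    return x

def camel_noise(y, a=(0.0, 0.5), b=(0.2, 0.2)):
    """Stationary multiplicative corruption: x_t = N(a_{y_t}, b_{y_t}^2) * z_t with z_t ~ N(0,1)."""
    z = rng.normal(0.0, 1.0, size=len(y))
    g = rng.normal([a[l] for l in y], [b[l] for l in y])
    return g * z, z

y = rng.integers(0, 2, size=1000)    # toy binary label signal
x_cattle = cattle_noise(y)
x_camel, z = camel_noise(y)
```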
The generated images are presented in Fig.1(a) and Fig.2(a), respectively. More details about the image generation process are available in [15]. Additionally, we randomly selected pixels \(y_{t}\in\mathbf{y}_{T}^{\mathcal{L}}\), with a percentage of the pixels being labeled and the rest considered unobserved (_e.g._ Fig.1(c) and Fig.2(c) ).
Each model was trained using stochastic gradient descent to optimize the associated negative ELBO, with the Adam optimizer [18]. The neural networks \(\psi_{(\cdot)}\)'s were designed with two hidden layers using rectified linear units and appropriate output activations (linear, softplus, or sigmoid). To ensure a fair comparison, we matched the total number of parameters of all models to be approximately equal. As a result, the number of hidden units for each hidden layer differs for each model. In fact, the SVRNN, mTMC, and VLS models have 22, 25, and 41 hidden units, respectively. We used an RNN cell to generate \(\tilde{h}_{t}\) (resp. \(h_{t}\)) for the d-mTMC (resp. SVRNN) model. In the VLS model, we used the parameterization approach for \(q_{\phi}(z_{t}|\mathbf{x}_{T})\) presented in [4], which involves using a bi-directional RNN cell. We also set the regularization term to \(0.1\).
### Results
The performance of the models is evaluated in terms of the error rate (ER) of the reconstruction of the unobserved pixels. Table 1 presents the error rates obtained for reconstructing unobserved pixels on different images. The notation _image_\(\%\) is used to indicate the specific image and the percentage of unobserved labels in the image. As shown in the table, the d-mTMC consistently outperforms the VLS and the SVRNN, achieving a lower error rate in each case.
Additionally, we observe that when dealing with elementary noise, the performance of the VLS model is superior to that of SVRNN. However, this capability is lost as we increase the percentage of unobserved labels, even with elementary noise.
Moreover, our algorithm achieves superior performance with a more complex noise (the _camel_-type image). Fig. 2 displays the performance of our proposed algorithm compared to the VLS and the SVRNN on the _camel_-type image with \(60\%\) of unobserved labels.
## 5 Conclusion
In this paper, we have presented a general framework for semi-supervised estimation in generative models. In particular, by considering the TMC model, we have shown that it is possible to obtain a wide variety of generative models and to estimate them in the common framework of variational inference in the case where only a part of the observations are labelled. Our experiments demonstrate the effectiveness of the proposed approach in achieving state-of-the-art performance on the task of image segmentation.
|
2308.00127 | DiviML: A Module-based Heuristic for Mapping Neural Networks onto
Heterogeneous Platforms | Datacenters are increasingly becoming heterogeneous, and are starting to
include specialized hardware for networking, video processing, and especially
deep learning. To leverage the heterogeneous compute capability of modern
datacenters, we develop an approach for compiler-level partitioning of deep
neural networks (DNNs) onto multiple interconnected hardware devices. We
present a general framework for heterogeneous DNN compilation, offering
automatic partitioning and device mapping. Our scheduler integrates both an
exact solver, through a mixed integer linear programming (MILP) formulation,
and a modularity-based heuristic for scalability. Furthermore, we propose a
theoretical lower bound formula for the optimal solution, which enables the
assessment of the heuristic solutions' quality. We evaluate our scheduler in
optimizing both conventional DNNs and randomly-wired neural networks, subject
to latency and throughput constraints, on a heterogeneous system comprised of a
CPU and two distinct GPUs. Compared to na\"ively running DNNs on the fastest
GPU, the proposed framework can achieve more than 3$\times$ lower latency
and up to 2.9$\times$ higher throughput by automatically leveraging both data
and model parallelism to deploy DNNs on our sample heterogeneous server node.
Moreover, our modularity-based "splitting" heuristic improves the solution
runtime up to 395$\times$ without noticeably sacrificing solution quality
compared to an exact MILP solution, and outperforms all other heuristics by
30-60% solution quality. Finally, our case study shows how we can extend our
framework to schedule large language models across multiple heterogeneous
servers by exploiting symmetry in the hardware setup. Our code can be easily
plugged in to existing frameworks, and is available at
https://github.com/abdelfattah-lab/diviml. | Yassine Ghannane, Mohamed S. Abdelfattah | 2023-07-31T19:46:49Z | http://arxiv.org/abs/2308.00127v2 | # DiviML: A Module-based Heuristic for Mapping Neural Networks onto Heterogeneous Platforms
###### Abstract
Datacenters are increasingly becoming heterogeneous, and are starting to include specialized hardware for networking, video processing, and especially deep learning. To leverage the heterogeneous compute capability of modern data-centers, we develop an approach for compiler-level partitioning of deep neural networks (DNNs) onto multiple interconnected hardware devices. We present a general framework for heterogeneous DNN compilation, offering automatic partitioning and device mapping. Our scheduler integrates both an exact solver, through a mixed integer linear programming (MILP) formulation, and a modularity-based heuristic for scalability. Furthermore, we propose a theoretical lower bound formula for the optimal solution, which enables the assessment of the heuristic solutions' quality. We evaluate our scheduler in optimizing both conventional DNNs and randomly-wired neural networks, subject to latency and throughput constraints, on a heterogeneous system comprised of a CPU and two distinct GPUs. Compared to naively running DNNs on the fastest GPU, the proposed framework can achieve more than 3\(\times\) lower latency and up to 2.9\(\times\) higher throughput by automatically leveraging both data and model parallelism to deploy DNNs on our sample heterogeneous server node. Moreover, our modularity-based "splitting" heuristic improves the solution runtime up to 395\(\times\) without noticeably sacrificing solution quality compared to an exact MILP solution, and outperforms all other heuristics by 30-60% solution quality. Finally, our case study shows how we can extend our framework to schedule large language models across multiple heterogeneous servers by exploiting symmetry in the hardware setup. Our code can be easily plugged in to existing frameworks, and is available at [https://github.com/abdelfattah-lab/diviml](https://github.com/abdelfattah-lab/diviml).
+
Footnote †: Thanks to TATA Consultancy Services (TCS) for funding support, and Dr. Rekha Singal for insightful discussion and feedback.
## I Introduction
Deep neural networks (DNNs) have emerged as an important computing paradigm making significant breakthroughs in many fields. However, DNNs are both computationally-intensive and memory-hungry, leading to a major hardware restructuring of modern datacenters to keep up with this growing compute demand. GPUs are becoming commonplace, FPGAs have been included by companies like Microsoft [1], and custom DNN accelerators such as Google's TPU [2] are continuously being developed. DNNs themselves are composed of a growing list of diverse building blocks such as convolutions, matrix-multiplications, element-wise operations, non-linear functions and shape transformations. Each of those primitives exhibits different vectorization patterns, sparsity and quantization tolerance and so may be suitable for implementation on different hardware accelerators [3, 4].
In addition to hardware heterogeneity, DNN topologies are becoming evermore irregular and complex thanks to their automated design through neural architecture search (NAS) [5]. NAS has demonstrated considerable success in creating DNN architectures that are highly efficient in terms of computational resource usage [6, 7, 8]. However, the irregular topologies it generates can be challenging to efficiently schedule on heterogeneous systems. In fact, in its most simple form, with no resource usage constraints or batching, the problem of mapping and scheduling a set of tasks with dependence is a classical NP-Hard problem [9]. Finding scalable and efficient methods for mapping such complex DNN computational graphs on heterogeneous systems is becoming more and more important to meet latency and throughput requirements imposed by modern DNNs and hardware platforms during inference.
Even though this scheduling problem has been previously explored in the context of traditional computing [10, 11], few works investigate the challenges associated with neural network models. In this paper, we investigate the scheduling of irregular DNN topologies onto heterogeneous hardware platforms with different latency and throughput requirements, under different batching conditions, and leveraging the _module-based_ nature of DNNs to significantly improve the speed and quality of our automatic scheduler. Many have used randomly-wired neural networks (RWNNs) [12] to represent NAS-designed DNNs in the context of scheduling [13], and we follow suit. Our scheduler operates on a coarse-grained computational graph of DNNs that is available through domain-specific frameworks such as PyTorch [14] or TVM [15]. Our goal is to create a fast heterogeneous scheduling plugin that can be easily integrated into these DNN frameworks to leverage heterogeneous computing platforms.
To achieve this goal, we curate a set of DNNs from the vision domain, both manually-designed ones such as ResNet [16], and NAS-found DNNs represented by an assortment of RWNNs. We investigate the scheduling of these DNNs on a sample heterogeneous computing platform with two GPUs and a CPU, and we demonstrate a considerable improvement compared to many past heuristic baselines. Our key algorithmic contribution is a fast DNN splitting heuristic, MILP-SPLIT, that detects and schedules each DNN module
separately then combines the schedules in either an optimal or quasi-optimal fashion depending on the nature of the connection between modules. MILP-SPLIT also comes with a theoretical lower bound for the optimal solution, which facilitates the evaluation of the scheduling quality. Our contributions are enumerated below:
1. We formalize the problem of partitioning and scheduling a DNN onto interconnected hardware devices in a heterogeneous computing system. We leverage both model and data parallelism to handle two core optimization objectives; latency and throughput.
2. We propose a novel linear mathematical programming model which is, to the best of our knowledge, the first scheduling problem formulation capable of handling both model and data parallelism for batched DNN execution.
3. We introduce MILP-SPLIT: a splitting heuristic to schedule complex modular DNNs. Alongside, we perform a rigorous theoretical analysis of the implications of modularity and inter-module communication channels on the performance of our heuristic, via the proposal of a lower-bound formula.
4. We evaluate our algorithms on computer-vision DNN benchmarks, on both mainstream DNNs and randomly wired neural networks. Compared to a single device, we achieve more than \(3\times\) lower latency and \(2.9\times\) higher throughput. Compared to heuristics from prior work, we achieve 30-60% better solution quality, and up to 395\(\times\) speedup compared to an exact solution.
## II Related Work
On the topic of general software partitioning, there exists previous work regarding heterogeneous compilation [10]. In particular, Polly-Acc offers an automatic heterogeneous compute compiler targeting CPU-GPU systems where at the compiler IR level, interesting compute kernels are detected, extracted, and modeled, and whose execution strategy is described as a schedule tree [11]. AMAP is an online adaptive decision algorithm to determine if the improvement from running a function in hardware outweighs the overhead of transferring the parameters [17], whereas [18] proposes a dynamic program scheduling approach based on the sampled energy-delay product during tentative runs. Our approach, in contrast, is performed statically during compilation, is specifically tailored for deep learning architectures, and leverages coarse graph-level descriptions of DNNs.
Under the scope of DNN based partitioning, many existing research endeavors focus solely on training [19, 20]. Alpa automates the search for pipeline-parallel schedules for DNN training on homogeneous multi-node GPU clusters. ParDNN introduces a graph slicing heuristic which forms primary clusters, the first iterative critical paths of the graph, and secondary clusters, the single nodes or remaining paths, and optimizes for load balancing during training [21]. Chen et al. [22] propose heuristic methods to optimize latency based on Heterogeneous-Earliest-Finish-Time (HEFT) and Critical-Path for mapping and scheduling DNNs on accelerators consisting of function units such as matrix multiplication or lookup tables. Unlike these approaches that were specific to DNN training, our scheduling algorithm is geared towards low-latency and high-throughput inference.
Liu et al. [23] restrict their scope to the DenseNet architecture and give an exact and efficient algorithm for its scheduling on a heterogeneous system. However, this approach is tailored for the particular topology of the DenseNet graph and is consequently difficult to generalize to broader model architectures. We propose a more general cut-based heuristic, which also takes advantage of the dynamic programming paradigm and can significantly speed up the mixed integer linear programming (MILP) solving. Additionally, Mirhoseini et al. [24] propose a reinforcement learning approach to DNN mapping for both training and inference latency optimization. However, this approach suffers from a lack of generalization, requiring manually set, load-specific parameters, and has training times ranging from 12 to 27 hours. In comparison, our approach focuses on inference, handles batched inputs and strives for efficiency by leveraging modularity while maintaining some optimality guarantees. Finally, SERENITY achieves memory-aware scheduling of irregularly wired neural networks on a single device by resorting to graph rewriting and divide-and-conquer approaches [25]. We focus instead on latency and throughput optimization on multiple heterogeneous devices, taking into account each device's memory constraints.
## III Problem statement and system description
Our approach is based on a coarse-grained representation of computational graphs that is commonly used in deep learning compilers. We present a compile-time mapping and scheduling framework for DNNs on heterogeneous hardware systems. The scheduler's modeling is general and agnostic to back-ends, its only limitation being what is supported by different compilers' back-ends. Figure 1 illustrates how the partitioner is integrated in a DNN compilation pipeline. It is capable of reading an input consisting of a hardware system configuration and any intermediate representation (IR) of a DNN, and outputs the
Fig. 1: Our heterogeneous scheduling framework.
appropriate mapping on the system via existing compilation backends, and its corresponding schedule. An optional clustering step prepares the DNN graph for mapping by reducing the number of task inputs to the mapping algorithms. A prime example is the fusion of convolution, batch normalization, and the ReLU activation function.
### _Problem Formulation_
We represent DNNs as a weighted directed acyclic graph (DAG), with the edges denoting data dependencies and nodes representing a DNN task (e.g. a convolutional or linear operation). If two tasks with data dependencies are mapped onto the same processor, the communication between them is implemented through data sharing in device memory and no communication cost is incurred. Each processor may execute several tasks, but each task has to be assigned to exactly one processor, in which it is entirely executed without preemption. Formally, let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be the DAG where \(\mathcal{V}\) denotes the set of tasks and \(\mathcal{E}\) represents the set of edges. Each edge \((i,j)\in\mathcal{E}\) defines a precedence relation between the tasks \(i,j\in\mathcal{V}\), and is weighted by the size of the source task's output. A task cannot be executed unless all of its predecessors (parents) have been processed and all relevant data is available. Each task \(i\in\mathcal{V}\) is assigned the following constants: \((wm_{i})\) the data size allocated for the DNN task weights, \((im_{i})\) the input tensor size and \((om_{i})\) the output tensor's size. As for our hardware system on which we map the DNN, we model it as a tuple of sets \(\mathcal{H}=(\mathcal{K},\mathcal{M},\beta)\). \(\mathcal{K}\) denotes the set of devices in our system. The two remaining sets are descriptors of the hardware system. \(\mathcal{M}:\mathcal{K}\rightarrow\mathbb{R}^{+}\) is the memory capacity for each single processor and \(\beta:\mathcal{K}^{2}\rightarrow\mathbb{R}^{+}\) the communication bandwidth between linked chips--it is null if there is no link. If tasks \(i\) and \(j\) are executed on different compute nodes \(h,k\) ; \(h\neq k\), and \((i,j)\in\mathcal{E}\), a communication time \(om_{i}/\beta_{h,k}\) is incurred.
The objective of this task scheduling problem is to allocate and schedule the tasks onto the compute nodes such that the overall completion time (latency) is minimized. We link the dataflow graph and the hardware via a map \(t:(\mathcal{V},\mathcal{K})\rightarrow\mathbb{R}^{+}\), which assigns to each task and device pair its corresponding latency. We finally add to our formulation the possibility of batching and throughput optimization. Hence we augment our problem description with a map \(\mathcal{B}:\mathcal{K}\to 2^{\mathbb{N}}\) that assigns to each device the subset of batch sizes supported. \(t\) now describes the latency of each possible batch of similar tasks \(i\in\mathcal{V}\) for each device and is redefined as \(t:\mathcal{V}\times\mathcal{K}\times\mathcal{B}(\mathcal{K})\rightarrow\mathbb{ R}^{+}\). The objective is now to find for a set of \(\mathcal{L}\) graph inputs the optimal mapping and scheduling of the tasks into different batches, while respecting the dependency within a single graph and the underlying resource constraints. Finally, we define the notion of a schedule. Let \(\mathcal{S}:\mathcal{V}\times[1,\ldots,\mathcal{L}]\rightarrow\mathcal{K} \times\mathbb{R}^{+}\) be a map which assigns each task to a device and a starting time. \(\mathcal{S}\) is a schedule if and only if \(\mathcal{S}\) respects precedence and no overlap (no two distinct batches can overlap on the same device) criteria, i.e. for every \((i,j)\in\mathcal{E}\), \(l\in[1,\ldots,\mathcal{L}]\):
\[\mathcal{S}(i,l)_{2}+1_{\mathcal{S}(i,l)_{1}\neq\mathcal{S}(j,l)_{1}}\cdot om_{i}/\beta_{\mathcal{S}(i,l)_{1},\mathcal{S}(j,l)_{1}}\leq\mathcal{S}(j,l)_{2}\]
The problem statement becomes:
**Mapping and Scheduling problem**

**Input**: Objective function (latency/throughput) \(f\), \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), \(\mathcal{H}=(\mathcal{K},\mathcal{M},\beta)\), \(t\), \(\mathcal{B}\), \(\mathcal{L}\).

**Output**: A schedule \(\mathcal{S}:\mathcal{V}\times[1,\ldots,\mathcal{L}]\rightarrow\mathcal{K}\times\mathbb{R}^{+}\) which optimizes \(f\).
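To make the problem instance concrete, a minimal sketch of the input data \((\mathcal{G},\mathcal{H},t,\mathcal{B},\mathcal{L})\) is given below; the field names are illustrative assumptions and do not come from the released code.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple, Set, List

@dataclass
class TaskGraph:
    nodes: Set[str]                                       # V: DNN tasks
    edges: Set[Tuple[str, str]]                           # E: (i, j) precedence edges
    wm: Dict[str, float] = field(default_factory=dict)    # weight memory per task
    im: Dict[str, float] = field(default_factory=dict)    # input tensor size
    om: Dict[str, float] = field(default_factory=dict)    # output tensor size

@dataclass
class Hardware:
    devices: List[str]                                    # K
    memory: Dict[str, float]                              # M: capacity per device
    bandwidth: Dict[Tuple[str, str], float]               # beta: link bandwidth (0 if no link)

@dataclass
class Instance:
    graph: TaskGraph
    hw: Hardware
    latency: Dict[Tuple[str, str, int], float]            # t(task, device, batch size)
    batches: Dict[str, Set[int]]                          # B: batch sizes supported per device
    L: int                                                # number of graph inputs to schedule
```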
## IV Algorithmic approaches
In this section, we demonstrate our exact scheduling approach based on solving an MILP problem. Linear programming has been effective in solving communication constrained DAG scheduling problems for tractable instances [26]. Our contributions for the exact MILP formulation are twofold: First, we incorporate memory and batching constraints into our formulation, which are commonly encountered in deep learning workloads, and we integrate our scheduler into a graph partitioning routine that we rigorously analyze to ensure the quality of its results. However, the problem of scheduling DNNs is NP-Hard, making it intractable to find exact solutions for large graph sizes. Our second contribution addresses this issue. We take advantage of the inherent modularity in most DNNs to create fast solving schemes that are either optimal or provide strong approximation guarantees.
### _MILP Problem Representation_
We introduce a novel formulation of the problem as an MILP model that explicitly considers the option of batching, where a device can process multiple inputs simultaneously. By incorporating batching, our formulation is better suited to capture the characteristics of modern deep learning workloads, which often involve a large number of inputs that can be efficiently processed in batches. Our approach enables us to find optimal solutions that balance the trade-offs between computation and communication costs while respecting batching and memory constraints. We add to the notation introduced earlier the following binary decision variables: \(x_{i,j,l}\) which encodes if the DNN task \(i\) corresponding to the \(l\)-th input is mapped to a device \(j\). Meanwhile, \(b_{i,j,l}\) describes if tasks of kind \(i\) running on \(j\) form a batch of size \(l\), and \(d_{i_{1},i_{2},l_{1},l_{2}}=1\) iff task \(i_{1}\) from input \(l_{1}\) is scheduled before \(i_{2}\) from input \(l_{2}\). We also consider the continuous variables: \(s_{i,j}\) the starting time for processing the batch of \(i\) tasks on \(j\), and \(C\) the total latency. The objective function \(f\) is equal to \(C\) in the latency optimization scenario or \(\mathcal{L}/C\) when optimizing for throughput. Now, we can write the mixed integer linear program, with the objective to minimize \(C\), and whose constraints are as follows: Condition 1 asserts that each task is assigned to a single machine:
\[\sum_{u\in\mathcal{K}}x_{i,u,l}=1;\ \ i\in\mathcal{V},\ \ l=1,\ldots,\mathcal{L} \tag{1}\]
Condition 2 ensures that each task finishes within the reported latency :
\[s_{i,u}+\sum_{l\in\mathcal{B}_{u}}b_{i,u,l}\cdot t_{i,u,l}\leq C;\ \ i\in\mathcal{V},\ \ u\in\mathcal{K} \tag{2}\]
Condition 3 is the condition expressing the dependency and communication constraint:
\[\begin{split} s_{i,u}&+\sum_{p\in\mathcal{B}_{u}}b_{i, u,p}\cdot t_{i,u,p}+(om_{i}/\beta_{u,v})\cdot(x_{j,v,l}+x_{i,u,l}-1)\\ &\leq s_{j,v};\ \ j\in\mathcal{V},\ \ i\in par(j),\ \ u,v\in \mathcal{K},\ \ l=1,\ldots,\mathcal{L}\end{split} \tag{3}\]
Condition 4 ensures that the batch decomposition adds up correctly to the total number of items in the batch:
\[\sum_{u\in\mathcal{K}}\sum_{l\in\mathcal{B}_{u}}l\cdot b_{i,u,l}=\mathcal{L}; \ \ i\in\mathcal{V} \tag{4}\]
The following condition 5 ensures that only supported batch sizes are chosen:
\[\begin{split} b_{i,u,l}&=1\ \ \text{iff}\sum_{l^{\prime} \in[1\ldots\mathcal{L}]}x_{i,u,l^{\prime}}=l;\\ i&\in\mathcal{V},\ \ u\in\mathcal{K},\ \ l=1, \ldots,\mathcal{L}\end{split} \tag{5}\]
In its form above, it is not a linear equation but we can linearize it via the big M method[27].
Condition 6 holds the memory constraint under the supposition that all data should be preemptively moved:
\[\begin{split}\sum_{i\in\mathcal{V}}((im_{i}+om_{i})\sum_{l\in[1 \ldots\mathcal{L}]}x_{i,u,l}+wm_{i}\sum_{l\in\mathcal{B}_{u}}b_{i,u,l})\\ &\leq\mathcal{M}_{u};u\in\mathcal{K}\end{split} \tag{6}\]
Condition 7 ensures no overlap of device usage between different batches. We linearize it similarly to condition 5:
\[\begin{split}\begin{cases}s_{i,u}+\sum\limits_{p\in\mathcal{B}_{u}}b_{i,u,p}\cdot t_{i,u,p}-s_{j,u}\leq 0\\ \text{or}\\ s_{j,u}+\sum\limits_{p\in\mathcal{B}_{u}}b_{j,u,p}\cdot t_{j,u,p}-s_{i,u}\leq 0\\ \text{if}\ \ x_{i,u,l_{1}}=x_{j,u,l_{2}}=1;\\ i,\ j\in\mathcal{V},\ \ u\in\mathcal{K},\ \ i\neq j,\ l_{1},\ l_{2}=1,\ldots,\mathcal{L}\end{cases}\end{split} \tag{7}\]
An optimization of the formulation of the MILP is to restrict constraint 7 to pairs of tasks (i, \(l_{1}\)) and (j, \(l_{2}\)) which do not belong to the same batch graph or are not part of a path in the DAG. The system remains equivalent to the original as the other constraints from 7 are enforced by the dependency constraint 3. Eliminating these redundant constraints requires computing the transitive closure of the graph, which can be obtained efficiently with Purdom's algorithm [28].
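As a hedged illustration of how such a formulation can be set up in code, the sketch below encodes only constraints (1) and (2) together with the makespan objective, for a single input (\(\mathcal{L}=1\), no batching), using the open-source PuLP modeler and the `Instance` structure sketched earlier; the authors do not state which modeling library or solver they use, and a complete model would also need constraints (3)-(7).

```python
import pulp

def build_milp(inst):
    prob = pulp.LpProblem("dnn_mapping", pulp.LpMinimize)
    V, K = inst.graph.nodes, inst.hw.devices

    x = pulp.LpVariable.dicts("x", [(i, u) for i in V for u in K], cat="Binary")
    s = pulp.LpVariable.dicts("s", [(i, u) for i in V for u in K], lowBound=0)
    C = pulp.LpVariable("makespan", lowBound=0)
    prob += C                                             # objective: minimize latency

    for i in V:                                           # (1) one device per task
        prob += pulp.lpSum(x[(i, u)] for u in K) == 1
    for i in V:                                           # (2) every assigned task finishes by C
        for u in K:
            prob += s[(i, u)] + inst.latency[(i, u, 1)] * x[(i, u)] <= C
    return prob, x, s, C
```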
### _Milp-SPLIT: Leveraging Graph Modularity_
#### IV-B1 Single-channel modularity
The presence of highly connected clusters is a prevalent feature in many DNN graph structures. An example is shown in Figure 1(a). This characteristic can be leveraged by the scheduler to partition the global problem into independent sub-problems consisting of weakly communicating modules. This approach is particularly useful when dealing with graphs that consist of modules linked to one another, such as ResNets [16], Inception [29], or especially RWNNs [12] that are composed of several instances of sequentially linked random graph modules.
A straightforward method to identify these modules involves detecting articulation points or bridges in the graph, which correspond to vertices or edges whose removal disconnects the undirected graph, grouping tasks between them into the same module, and solving each subproblem independently. However, this approach can lead to suboptimal solutions as it does not account for communication costs through bridges and may result in inconsistent assignments of articulation points across modules. Fortunately, a dynamic programming solution exists to address these issues. To obtain an optimal global solution for the whole graph, we compute the optimal schedule for each module for every possible input-device and output-device pairings, and we combine the resulting building blocks into the best configuration. As a preprocessing step, we transform articulation points that are not endpoints of bridges into bridge edges by introducing a dummy node and a zero-cost edge between them. We also add an additional constraint that mandates the mapping of these two vertices to the same device in the global solution as is illustrated in Figure 1(b). From now on, we refer to bridges as "communication channels".
Formally, let \(\mathcal{G}(\mathcal{V},\mathcal{E})\) be a DAG with a single input and output. We denote by \(\mathcal{I}(\mathcal{Q},\mathcal{F})\) the graph obtained by reducing every module into a single vertex, where \(\mathcal{Q}\) is a partition of \(\mathcal{V}\) into a set of disjoint modules and \(\mathcal{F}:=\{(u,v)\in\mathcal{Q}^{2}|\ \ \exists x\in u\ \exists y\in v\ \ (x,y)\in \mathcal{E}\}\). In particular, if \(\mathcal{Q}\) is defined as the set of vertex modules, then \(\mathcal{I}\) is a path, and we can enumerate \(\mathcal{Q}\) as the set \([1,\ldots,|\mathcal{Q}|]\), and through this ordering we can obtain a dynamic programming problem formulation. For a given module \(M_{t}\in\mathcal{Q}\) and a pair of devices \(u,v\in\mathcal{K}\) onto which the input and output of \(M_{t}\) are mapped, if we denote by \(OPT(M_{t},u,v)\) the solution of the corresponding module subproblem, the recursion can be written as:
\[\begin{split} dp(M_{t},u,v)&=min_{u^{\prime},v^{ \prime}\in\mathcal{K}}\Big{(}dp(M_{t-1},u^{\prime},v^{\prime})\\ &+com(t,v^{\prime},u)\Big{)}+OPT(M_{t},u,v)\end{split}\]
The effectiveness of the proposed splitting method is influenced by the number and size balance of the extracted modules. The complexity of the approach can be expressed as \(O(|\mathcal{K}|^{2}|\mathcal{Q}|\mathbb{T})\), where \(\mathbb{T}\) represents a runtime bound for each module. This complexity analysis assumes a specific cutting strategy, but can be generalized to arbitrary cuts, where \(\mathcal{I}\) becomes a multigraph.
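A compact sketch of this dynamic program is shown below; `solve_module` stands in for the per-module MILP solve \(OPT(M_{t},u,v)\) and `com` for the channel communication cost, both of which are assumptions of the sketch rather than functions from the released code.

```python
import itertools

def combine_modules(modules, devices, solve_module, com):
    # dp[(u, v)] = best latency of the prefix whose last module has its
    # input mapped to device u and its output mapped to device v.
    dp = {(u, v): solve_module(modules[0], u, v)
          for u, v in itertools.product(devices, repeat=2)}
    for t in range(1, len(modules)):
        new_dp = {}
        for u, v in itertools.product(devices, repeat=2):
            best_prefix = min(dp[(up, vp)] + com(t, vp, u)
                              for up, vp in itertools.product(devices, repeat=2))
            new_dp[(u, v)] = best_prefix + solve_module(modules[t], u, v)
        dp = new_dp
    return min(dp.values())
```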
#### IV-B2 Multi-channel modularity
Modularity is an important property of graphs that enables exact solving for the scheduling problem on large graphs using a divide-and-conquer approach. However, many graphs can not be split into distinct modules of comparable size that communicate through a _single_ input-output channel. In such cases, it may still be possible to decompose the graph into balanced modules that communicate through _multiple_ edges, and solve for each subgraph independently. Figure 1(a) shows an example with 1 and 2 channels. Identifying the modules boils down to computing the \(k-\)edge connected components [30] where \(k-1\) is the number of channels. Although this approach may result in a loss of optimality, it can significantly improve runtime without
a significant reduction in quality. In the case of partitioning a large graph into multichannel communicating modules, it is desirable to compute a lower bound on the optimal solution to evaluate the quality of the MILP-SPLIT (or other) heuristic, especially when solving for the entire graph is not tractable.
In order to express the lower bound for a DAG \(\mathcal{G}(\mathcal{V},\mathcal{E})\) that can be split into multichannel communicating modules, we first define for a fixed \(T\subseteq\mathcal{V}\) and for every node \(u\) in \(\mathcal{G}\) the set of nodes \(dep(u)_{T}=\{v\in T\mid\text{there is a path from $u$ to $v$}\}\), which we will refer to as the dependency set of \(u\), and the set of nodes \(pre(u)_{T}=\{v\in T\mid\text{there is a path from $v$ to $u$}\}\), and which we will refer to as the predecessor set of \(u\) (as shown in Figure 1(d)). Let \(M_{1},\dots,M_{|\mathcal{Q}|}\) be a decomposition of \(\mathcal{G}\) into such modules, where \(\bigcup_{1\leq t\leq|\mathcal{Q}|}M_{t}=\mathcal{V}\). We denote by \(\mathcal{G}_{s}=\bigcup_{s\leq t\leq|\mathcal{Q}|}M_{t}\). Our approach is to proceed inductively by computing the lower bound in a recursive manner, and using the following remark:
**Remark**.: _Let \(c\) denote the number of channels, and \((I_{t})_{1\leq t\leq c}\) and \((O_{t})_{1\leq t\leq c}\) denote respectively the set of vertices in the communication channels between \(M_{1}\) and \(\mathcal{G}_{2}\) for which the edges are in-going and out-going, i.e., the inputs of \(\mathcal{G}_{2}\) and the outputs of \(M_{1}\). For any valid scheduling of the whole graph, there exists a \(t^{\prime}\) such that the subgraph induced on \(dep(I_{t^{\prime}})_{\mathcal{G}_{2}}\) is completely scheduled after \(M_{1}\), and there exists a \(t^{\prime\prime}\) such that \(pre(O_{t^{\prime\prime}})_{M_{1}}\) is completely scheduled before \(\mathcal{G}_{2}\)._
Hence, if we denote by \(OPT\) the function mapping subgraphs of \(\mathcal{G}\) onto their optimal schedule, then we obtain the pair of inequalities:
\[OPT(\mathcal{V})\geq OPT(M_{1})+min_{u\in\text{inputs}}(OPT(dep(I_{u})_{ \mathcal{G}_{2}}))\]
and
\[OPT(\mathcal{V})\geq OPT(\mathcal{G}_{2})+min_{v\in\text{outputs}}(OPT(pre(O_{v })_{M_{1}}))\]
The lower bound of the problem is obtained as the maximum value among the right-hand sides of the inequalities. This lower bound can be immediately extended to the batched throughput scenario by observing that the partial ordering defined earlier for dependency, predecessor, and module subgraphs applies to scheduling the minimal batch size that can be executed on each device. Specifically, it is necessary to schedule a minimum portion of the input to maintain the specified constraints via the communication channels outlined in the remark. However, we can do better; let \(M_{1}\) and \(dep(I_{t^{\prime}})_{\mathcal{G}_{2}}\) be defined as in the remark; then if \(\mathcal{L}\) is the total input batch to be processed and \(b\) any batch size supported on every device, then there is at least a batch of \(\mathcal{L}-b+1\) that needs to be processed through \(dep(I_{t^{\prime}})_{\mathcal{G}_{2}}\) after scheduling a load \(b\) of \(M_{1}\). The same reasoning holds between \(OPT(pre(O_{v})_{M_{1}})\) and \(\mathcal{G}_{2}\), and recursively throughout the graph. These bound computations can be accomplished efficiently using the presented recursive formula, which lends itself well to parallelization due to the independent nature of the subproblems considered.
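A rough sketch of how this recursive bound can be computed is given below; `opt`, `dep`, `pre`, and `channels` are placeholder routines (assumptions of the sketch) returning, respectively, the optimal latency of a subgraph, the dependency and predecessor subgraphs defined above, and the channel endpoints between the first module and the rest of the graph.

```python
def lower_bound(modules, opt, dep, pre, channels):
    # modules: list [M_1, ..., M_Q]; recursion over the suffix G_s.
    if len(modules) == 1:
        return opt(modules[0])
    M1, rest = modules[0], modules[1:]
    lb_rest = lower_bound(rest, opt, dep, pre, channels)
    inputs, outputs = channels(M1, rest)
    # OPT(V) >= OPT(M1) + min_u OPT(dep(I_u)_{G2})
    lb1 = opt(M1) + min(opt(dep(I, rest)) for I in inputs)
    # OPT(V) >= OPT(G2) + min_v OPT(pre(O_v)_{M1}), with OPT(G2) relaxed recursively
    lb2 = lb_rest + min(opt(pre(O, M1)) for O in outputs)
    return max(lb1, lb2)
```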
## V Evaluation
We evaluate our mapping and scheduling framework on mainstream DNN models, a set of computer vision neural networks popular in the field of image classification, from the _Torchvision_ model library, and on randomly wired neural networks (RWNNs) also performing image classification tasks [12]. We focus more on the latter because the topological irregularity of RWNNs makes it more difficult to have a good intuition on what a good mapping and scheduling should look like, thus necessitating automated algorithms. We choose representatives from three random graph models (Erdős-Rényi, Watts-Strogatz and Barabási-Albert), with parameters chosen corresponding to the seeds which achieved the best accuracy in prior work [12]: we sample 6 models generated with parameters WS(4, 0.75), ER(0.2) and BA(5), and with module size \(N\in\{10,32\}\). We consider systems comprised of a CPU (Intel Xeon (Skylake) CPU 2.00GHz) and two different GPUs (Nvidia Tesla T4 and A100 GPUs) connected by a 16-lane PCIe 4.0 link to represent a typical heterogeneous system--relative speeds are shown in Table II. The complete pipeline of our scheduler's evaluation setup for the aforementioned networks starts with a Pytorch model. To convert it into a coarse grain DAG, we use the torch.fx [31] symbolic tracer and in particular the Interpreter class. This class is responsible for executing an FX graph, which represents the dataflow graph of DNN inference on a node-by-node basis. By overriding the node run method, we can individually measure the performance of executing each computational node on
Fig. 2: Modularity in DNN graphs. **sdep** : all paths within a module stem from (converge toward) at least one input (output). **wdep** : module inputs and outputs are randomly sampled for their dependencies.
different backends by invoking the appropriate routine on the respective node, thus creating our DAG while simultaneously benchmarking each operation on every device.
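A minimal sketch of this per-node benchmarking step with `torch.fx` is shown below; the toy model, the CPU-only timing, and the simplified synchronization handling are assumptions of the sketch, not the exact profiling code used in the paper.

```python
import time
import torch
import torch.fx

class ProfilingInterpreter(torch.fx.Interpreter):
    """Executes an FX graph node by node, recording the wall-clock time of each node."""
    def __init__(self, gm):
        super().__init__(gm)
        self.node_times = {}

    def run_node(self, n):
        start = time.perf_counter()
        result = super().run_node(n)
        if torch.cuda.is_available():
            torch.cuda.synchronize()      # only relevant when nodes run on a GPU
        self.node_times[n.name] = time.perf_counter() - start
        return result

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
gm = torch.fx.symbolic_trace(model)       # coarse-grained dataflow graph
interp = ProfilingInterpreter(gm)
interp.run(torch.randn(1, 3, 32, 32))
print(interp.node_times)
```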
Our experiments perform a thorough comparison of our exact MILP solution, our modularity-based splitting heuristic (MILP-SPLIT), and a large number of established baselines from prior work, introduced in Section V-A. We present our findings when optimizing solely for latency (Section V-B) using model parallelism, and when optimizing for throughput (Section V-C) using both data and model parallelism. In both cases, we evaluate the solution quality and cost for Torchvision models, for single-module RWNNs, and for multi-module RWNNs. Our findings demonstrate the superiority and practicality of MILP-SPLIT compared to existing baseline algorithms, and the fidelity of our estimated lower bound.
### _Baselines and Heuristics_
We compare our MILP solver and MILP-SPLIT against popular scheduling algorithms and general purpose optimization heuristics which have shown success in DAG scheduling contexts or graph problems more generally.
* MET: the Minimum Execution Time algorithm is a list-based scheduling algorithm that schedules tasks based on their minimum execution time to minimize the latency of a DAG. We extend the MET algorithm to the batched throughput optimization by selecting the best batch-device combination for each task.
* Greedy: is a greedy heuristic that considers the overall latency for scheduled tasks so far when scheduling the current task.
* HEFT: the Heterogeneous Earliest Finish Time [32] algorithm is an effective approach for scheduling tasks in a heterogeneous computing environment. It assigns tasks to processing nodes with different processing speeds to minimize overall execution time, using two phases to prioritize tasks based on estimated finish times.
* Simulated Annealing (SA) [33]: is a stochastic optimization heuristic algorithm that draws inspiration from statistical mechanics concepts and has been widely used in various optimization problems, including scheduling, for example, to minimize latency [34, 35, 36].
* Biased (1+1) EA: We implement a biased version of the (1+1) EA [37] as an additional approximation heuristic. Also known as the random hill climbing algorithm, it is one of the most basic evolutionary algorithms but has been surprisingly efficient in practice [38, 39]. We call the (1+1) EA biased when the initialisation is not randomly sampled but chosen in a greedy manner, by assigning each task to the device on which it runs fastest.
**Fitness function**: Here we give a succinct formulation of our problem as an objective function and an integer-string search space, which are adopted by two of our search heuristics: (1+1) EA and SA. We encode the mapping solution as a string of integers, wherein each integer in the string signifies a distinct identifier of the device to which a node is mapped. The position of each integer in the string corresponds to the layers of the DNN, arranged in a breadth-first topological ordering. Finally, the fitness function adopted for the latency (throughput) optimization problem corresponds to the latency (throughput) of a sampled mapping with a breadth-first topological ordering.
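A compact sketch of the biased (1+1) EA over this integer-string encoding is given below; `evaluate` (returning the latency of a mapping under the breadth-first topological ordering) and `best_device` are assumed helper functions, not part of the released code.

```python
import random

def one_plus_one_ea(tasks, devices, evaluate, best_device, iters=10000, p_mut=None):
    p_mut = p_mut or 1.0 / len(tasks)
    parent = [best_device(i) for i in tasks]          # biased (greedy) initialisation
    parent_fit = evaluate(parent)                     # lower latency = better fitness
    for _ in range(iters):
        child = [random.choice(devices) if random.random() < p_mut else g
                 for g in parent]
        child_fit = evaluate(child)
        if child_fit <= parent_fit:                   # accept ties to keep drifting
            parent, parent_fit = child, child_fit
    return parent, parent_fit
```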
### _Latency Optimization_
Figure 3 evaluates our scheduler to optimize latency for mainstream Torchvision models. There are no real improvements for DNNs with little to no parallelism, such as AlexNet, ResNet, or VGG: the optimal schedule is usually the one where all tasks are mapped to the best performing device (A100 GPU). However, for models with higher parallelism, the improvements from MILP and MILP-SPLIT are significantly higher--more than 100% and 150% for Inception v3 and GoogLeNet respectively. Both MILP and MILP-SPLIT converge to the optimal solution for all Torchvision models without a substantial increase in runtime, thanks to the simplicity and regularity of these DNNs.
Next, we evaluate RWNNs which we expect to be a significantly more challenging workload. In our first experiment in Figure 4, we schedule a _single_ module on our heterogeneous system, optimized for latency. Compared to simply running the RWNN module on the best device, there is a major \(\sim\)2\(\times\) improvement in overall latency from fully-utilizing our heterogeneous system with a CPU and 2 GPUs. When comparing across different scheduling algorithms, MILP converges to the optimal solution and is 22%-26% better than the best available heuristic on equivalent runtimes. However, with RWNNs featuring multiple modules, ten in our experiment, solving MILP on the whole model is more difficult for the solver and is exponentially slower. This motivates the use of MILP-SPLIT for those more realistic multi-module RWNNs that are representative of DNNs created by NAS.
To evaluate MILP-SPLIT, we stack multiple RWNN modules to represent realistic NAS-discovered models. In this case, each module is generated using the ER(0.2) model and may include multiple communication channels to connect to the next module. As indicated by our lower bounds formulation
\begin{table}
\begin{tabular}{c c c c} \hline \hline & CPU & GPU (T4) & GPU (A100) \\ \hline Torchvision & 223.10 (29\(\times\)) & 12.16 (1.6\(\times\)) & 7.80 (1\(\times\)) \\ RWNNs & 183.39 (7.10\(\times\)) & 32.58 (1.26\(\times\)) & 25.84 (1\(\times\)) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Relative speed in milliseconds (ms) on experiment devices, averaged over our evaluated DNNs.
\begin{table}
\begin{tabular}{c|c c c|c c} \hline \hline & \multicolumn{3}{c|}{**sdep**} & \multicolumn{3}{c}{**wdep**} \\ Modules & MILP & SPLIT & factor & MILP & SPLIT & factor \\ \hline
5 & 82.69s & 2.26s & 37x & 129.08s & 2.45s & 53x \\
10 & 232.24s & 4.83s & 48x & 271.66s & 5.00s & 54x \\
20 & 1907.12s & 13.49s & 141x & **5850.37s** & 14.81s & 395x \\ \hline \hline \end{tabular}
\end{table} TABLE III: Speedup of the splitting heuristic for the latency optimization of RWNN models with [5, 10, 20] modules.
(Section IV-B1), the density of nodes and edges that are accessible from the endpoints of communication edges can significantly impact the quality of the splitting heuristic and the accuracy of the corresponding lower bound. Therefore, we evaluate our splitting heuristic using two different scenarios for the topology of communication edges. In the first scenario, module inputs and outputs are randomly sampled for their dependencies, while in the second scenario, all paths within a module stem from (converge toward) at least one input (output). We refer to these scenarios as the "weakly dependent" scenario (**wdep**) and the "strongly dependent" scenario (**sdep**), respectively, and examples are shown in Figures 3(e) and 3(c).
Based on the results presented in Table I, it can be observed that our splitting heuristic (MILP-SPLIT) exhibits a solution that is in close proximity to the optimal solution. Additionally, this heuristic outperforms all other scheduling methods considered in this study by a significant margin, as it is \(\sim\)30% better compared to the best heuristic baseline. Table III highlights that the MILP-SPLIT heuristic provides a substantial improvement (37\(\times\)-395\(\times\)) in runtime compared to MILP when both scheduling algorithms reach their best solution. Also shown in Table I is our lower bound (LBound), which offers a convenient means of obtaining a quick performance guarantee for the splitting heuristic. Our observations indicate that for the **wdep** models, the LBound is closer to the true optimum than for the **sdep** models, where it tends to be more pessimistic. This difference is attributed to the lower bound computation which considers complete overlap in scheduling separate paths originating from each module output. This is more likely to hold in an optimal schedule for the **wdep** scenario, where the distribution of these paths is more evenly spread compared to the **sdep** scenario, where a specific endpoint's emanating paths cover all the predecessor or dependency subgraphs--this phenomenon is also the reason why MILP-SPLIT is closer to the optimum on **sdep** graphs. Our results show that MILP-SPLIT is a viable and high-quality heuristic that offers lower-bound guarantees on quality.
### _Throughput Optimization_
We consider throughput optimization in the low-latency inference regime, where we batch B inputs (e.g. 128) and we find the fastest way to run that batch using the available devices. Successive inputs are queued together in groups of B before going to the hardware system for execution. This realistically mimics how inference is done in datacenters where low latency is critical to respond to user requests promptly.
Figures 5, 6, and Table IV show our throughput optimization results attained with our framework via batching. bMET, bGreedy and bHEFT are the batched equivalents of the corresponding heuristics. In this case, we have a batch of inputs B queued for processing, and our scheduler can further decompose this batch into \(\nicefrac{{B}}{{4}}\), \(\nicefrac{{B}}{{2}}\), and \(\nicefrac{{3B}}{{4}}\) when allocating
inputs to different devices. This enables our scheduler to leverage both model and data parallelism when mapping the DNN workload onto the hardware system. Unlike the latency objective, the MILP solving on the whole graph does not terminate within a 2 hours deadline, even for single RWNN modules or for regular networks with high model parallelism such as inception-based DNNs. Consequently, MILP-SPLIT outperforms naive MILP solving both in terms of scheduling quality and runtime. It is worth noting that since MILP cannot reach the optimum solution for a single RWNN module, MILP-SPLIT provides only an approximate solution for each of its module schedules. However, our splitting heuristic achieves up to \(\sim\)60% better performance than the best-performing heuristic baseline with equivalent running times. Results reported in Table IV are based on 600s deadlines for MILP-SPLIT and for other search heuristics, EA and SA. Moreover, Figure 7 provides a more detailed view of the solution quality over time, illustrating the challenge of solving the scheduling problem on the entire graph using MILP with numerous communication channels.
\begin{table}
\begin{tabular}{c c c c c c c c|c} \hline \hline
**Model** & **BD** & **bMET** & **bGreedy** & **bHEFT** & **(1+1) EA biased** & **SA** & **MILP** & **MILP-SPLIT** & **UBound** \\ \hline
1-chan & 54 & 56 & 74 & 75 & 84 & 87 & 114 & **135** & 164 \\ \hline sdep, 2-chan & 48 & 50 & 67 & 66 & 75 & 78 & 95 & **119** & 180 \\ sdep, 3-chan & 49 & 51 & 68 & 70 & 78 & 81 & 116 & **129** & 196 \\ sdep, 4-chan & 47 & 48 & 65 & 67 & 76 & 79 & 73 & **126** & 209 \\ \hline wdep, 2-chan & 51 & 53 & 76 & 75 & 86 & 87 & 89 & **137** & 182 \\ wdep, 3-chan & 49 & 52 & 73 & 73 & 82 & 85 & 72 & **137** & 181 \\ wdep, 4-chan & 47 & 50 & 72 & 74 & 82 & 84 & 65 & **138** & 207 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Throughput for RWNNs consisting of 10 modules. Results reported in images-per-second (imgs/s). Best and second best results are highlighted in red (bold) and blue respectively.
Fig. 5: Inference throughput for a batch of B=128 inputs on Torchvision models on a heterogeneous platform.
Fig. 8: Multi-node heterogeneous system for GPT-3 inference.
Fig. 7: Solution quality (throughput) over time for MILP, MILP-SPLIT and heuristics on 10 modules RWNNs.
## VI Case Study: GPT-3 Inference on Distributed Heterogeneous Compute Nodes
As DNN models continue to grow in size, it has become necessary to look beyond single-node servers and extend our formulation to more complex setups. In this case study, we investigate the use of our scheduler for a large language model (LLM), GPT-3 [40], on a distributed heterogeneous platform as shown in Figure 8. This model belongs to a category of deep neural networks that exhibits notable modularity, as it is predominantly constructed by stacking transformer modules [41]. In contrast to our earlier analysis of RWNNs, GPT-3 modules exhibit high regularity, but the complexity of the problem stems from the larger search space of hardware device options. To counterbalance that, a key aspect of our analysis revolves around the exploitation of **symmetry** in the hardware platform to restrict the search space size without sacrificing solution quality. Our preliminary results on LLM scheduling only consider a single decoding step as a test workload.
As reported in Table V, we consider two ways to schedule our GPT-3 graph. "Single node" utilizes our MILP solvers to schedule one-sixth of the GPT-3 graph on a single node, then replicates that schedule for the rest of GPT-3 on the remaining 5 nodes. We consider this a competitive baseline because it automates the common practice of manually partitioning LLM inference to fit a single compute node then scaling up the number of nodes. "Multi node" exposes the entire compute system (all 6 nodes) to our MILP-SPLIT solver, but we employ _symmetry-breaking_ techniques to compress the solution space of the explored schedules, allowing us to find a high-quality schedule in reasonable time. Symmetries arise from the fact that each schedule \(S\) represents a set of equivalent solutions \(E_{S}\), where any element within this set can be derived from \(S\) by permuting device mappings while maintaining the same overall latency. In our approach, we introduce additional constraints to our MILP formulation, enforcing a partial ordering of certain variables (e.g. #batches, #tasks, time utilization) between identical devices within a node or between nodes. For example, we can ensure that the number of tasks assigned to node \(i\) is always less than or equal to that of node \(j\) for \(0\leq i<j<6\) in our example system (Fig. 8). This retains all non-isomorphic solutions in our search space whilst compressing it by \(\sim 4^{6}6!=2.9\times 10^{6}\), where the \(6!\) and \(4^{6}\) factors represent inter- and intra-node symmetry respectively.
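As a hedged illustration, one such inter-node symmetry-breaking constraint (ordering the number of tasks assigned to identical nodes) could be added to the earlier PuLP sketch as follows; the grouping of devices into identical nodes and the variable layout are assumptions of the sketch.

```python
import pulp

def add_node_symmetry_breaking(prob, x, tasks, nodes):
    # nodes: list of device lists, each sub-list being one identical server node
    loads = [pulp.lpSum(x[(i, u)] for i in tasks for u in node) for node in nodes]
    for a, b in zip(loads, loads[1:]):
        prob += a <= b          # node k carries no more tasks than node k+1
    return prob
```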
Furthermore, our experimental results demonstrate that the choice of symmetry-breaking criterion can significantly impact the quality of the solution. This can be attributed to the phenomenon of premature convergence. If the symmetry-breaking constraints overly restrict the problem or generate a compressed space whose topology is not regular enough, the solver may settle for a locally optimal solution instead of exploring other potentially superior regions of the solution space either located outside of the compressed space or harder to access with the solver's intrinsic optimization heuristics due to the irregularity of the new space. We hypothesize that utilizing #batches as the symmetry-breaking criterion tends to be overly restrictive, discouraging the solver from performing batch rearrangements that would contradict the ordering constraints, thus resulting in relatively smaller improvements over MILP-SPLIT without symmetries. On the other hand, despite the discrete nature of task variables and the continuous nature of utilization time variables, both variables are coarser-grained than #batches, thus yielding comparable performance and surpassing the baseline schedule by \(\sim\)31% and the single node MILP-SPLIT by \(\sim\)10%. Our results lay the foundations towards multi-node heterogeneous scheduling leveraging MILP-SPLIT, and we aim to further explore this topic in future work.
## VII Conclusion
We presented a general framework that leverages both data and model parallelism to schedule DNNs on heterogeneous hardware systems. Our algorithmic approaches focused on an exact MILP solution, and a splitting heuristic, MILP-SPLIT, to utilize modularity within both conventional and randomly-wired DNNs. Our results on both throughput and latency optimization demonstrated more than 30-60% improvement compared to the best, and most widely-used heuristics, and MILP-SPLIT was up to \(\sim\)395\(\times\) faster than a full MILP solution. Finally, we extended our scheduler to larger multi-node heterogeneous server deployments by showcasing improved scheduling of GPT-3 by exploiting symmetries in the hardware system. In the future, we aim to expand our framework to explore more efficient methods for scheduling large DNNs on distributed systems, to handle DNN training, and to include pre- and post-processing portions of a deep learning workload.
|
2301.13450 | Learning Vision-based Robotic Manipulation Tasks Sequentially in Offline
Reinforcement Learning Settings | With the rise of deep reinforcement learning (RL) methods, many complex
robotic manipulation tasks are being solved. However, harnessing the full power
of deep learning requires large datasets. Online-RL does not suit itself
readily into this paradigm due to costly and time-taking agent environment
interaction. Therefore recently, many offline-RL algorithms have been proposed
to learn robotic tasks. But mainly, all such methods focus on a single task or
multi-task learning, which requires retraining every time we need to learn a
new task. Continuously learning tasks without forgetting previous knowledge
combined with the power of offline deep-RL would allow us to scale the number
of tasks by keep adding them one-after-another. In this paper, we investigate
the effectiveness of regularisation-based methods like synaptic intelligence
for sequentially learning image-based robotic manipulation tasks in an
offline-RL setup. We evaluate the performance of this combined framework
against common challenges of sequential learning: catastrophic forgetting and
forward knowledge transfer. We performed experiments with different task
combinations to analyze the effect of task ordering. We also investigated the
effect of the number of object configurations and density of robot
trajectories. We found that learning tasks sequentially helps in the
propagation of knowledge from previous tasks, thereby reducing the time
required to learn a new task. Regularisation based approaches for continuous
learning like the synaptic intelligence method, although helpful in mitigating
catastrophic forgetting, have shown only limited transfer of knowledge from
previous tasks. | Sudhir Pratap Yadav, Rajendra Nagar, Suril V. Shah | 2023-01-31T07:06:03Z | http://arxiv.org/abs/2301.13450v1 | Learning Vision-based Robotic Manipulation Tasks Sequentially in Offline Reinforcement Learning Settings
###### Abstract
With the rise of deep reinforcement learning (RL) methods, many complex robotic manipulation tasks are being solved. However, harnessing the full power of deep learning requires large datasets. Online-RL does not suit itself readily into this paradigm due to costly and time-consuming agent-environment interaction. Therefore recently, many offline-RL algorithms have been proposed to learn robotic tasks. But mainly, all such methods focus on a single task or multi-task learning, which requires retraining every time we need to learn a new task. Continuously learning tasks without forgetting previous knowledge combined with the power of offline deep-RL would allow us to scale the number of tasks by adding them one after another. In this paper, we investigate the effectiveness of regularisation-based methods like synaptic intelligence for sequentially learning image-based robotic manipulation tasks in an offline-RL setup. We evaluate the performance of this combined framework against common challenges of sequential learning: catastrophic forgetting and forward knowledge transfer. We performed experiments with different task combinations to analyze the effect of task ordering. We also investigated the effect of the number of object configurations and density of robot trajectories. We found that learning tasks sequentially helps in the propagation of knowledge from previous tasks, thereby reducing the time required to learn a new task. Regularisation-based approaches for continual learning like the synaptic intelligence method, although helpful in mitigating catastrophic forgetting, have shown only limited transfer of knowledge from previous tasks.
## I Introduction
Robots now have the capability to learn many individual manipulation tasks using deep Reinforcement Learning (RL), such as pick-and-place [1], peg-in-hole [23], cloth folding [14], and tying rope knots [17]. Multi-task RL has also been applied successfully to learn robotic manipulation tasks [8, 10]. The number of tasks and the task-data distribution are kept fixed in the case of multi-task RL. Therefore, the agent has to be trained from scratch whenever it needs to learn a new task, even if there is a substantial overlap between tasks. Scaling this approach to learn all manipulation tasks on par with humans is not feasible. Humans use the experience of previous tasks when learning a new task and do not need to learn from the start. The sequential (or continual) learning approach tries to address this problem by providing a framework where an agent can learn new tasks one after another without starting from scratch. We use offline-RL as the base framework to learn a single image-based robotic-manipulation task and then use a regularisation-based continual learning approach for learning tasks sequentially. This combined framework forms the main contribution of this work.
### _Related Work_
Most work in sequential task learning is focused on classification-based tasks using typical classification datasets such as MNIST, CIFAR and their variations [7, 15]. Some works in the continual reinforcement learning setup use Atari games [12]. Others try to extend continual RL to GYM environments [2]. Recent work in [20] uses offline RL for solving manipulation tasks using image observations alone. While that work focuses on generalizing to novel initial conditions, it does not attempt sequential task learning. On the other hand, our work is about learning tasks sequentially with only the current task's data available for learning. Some very recent works try to apply continual RL to robotic manipulation tasks [22, 4]. [22] introduces a continual learning benchmark for robotic manipulation tasks and gives baselines for major continual learning methods over these tasks in online RL settings using the soft actor-critic (SAC) method [9]. However, that work focuses on online continual RL with a low-dimensional observation space, such as joint and task space data, as it assumes full access to the simulator, whereas our work focuses on offline RL with a high-dimensional observation space (images) for sequential learning of robotic manipulation tasks.
In the sequential learning setup based on deep RL, neural networks (NN) are sensitive to changes in the data distribution. Hence, their accuracy on previous tasks drops significantly when they are trained on a new task. This problem is actively studied under the name of catastrophic forgetting. More broadly, the problem of **stability-plasticity** exists in every connectionist model of memory and computation. This means
Fig. 1: Block Diagram of SAC-CQL-SI method for Sequential Learning
the network needs to be flexible enough to accommodate new information and simultaneously not forget previous information, as discussed in [6]. Many solutions have been suggested to mitigate this problem. We place these under two major categories: architectural and penalty-based. In the architectural type of solutions, relevant changes are made in the neural network architecture without changing the loss function. For example, the progressive neural network [19] uses multiple parallel paths with lateral connections, and policy distillation [18] distills the policy learned by a larger network into a smaller one without loss of performance. On the other hand, penalty-based methods put penalties on neural network parameters so that they stay close to the solution of the previous task. Two important works in this regard are Elastic Weight Consolidation (EWC) [12] and Synaptic Intelligence [24]. EWC gives a regularisation-based solution for catastrophic forgetting, but the computation of the importance of parameters is not local. In this paper we use the approach proposed in Synaptic Intelligence [24] because of its local measure of importance for synapses (the weights of the neural network); the local nature of the computation helps keep the solution independent of the particularities of the problem, making it more general.
To the best of our knowledge, this is the first work investigating sequential learning for image-based robotic manipulation tasks in offline RL settings. In this paper, we focus on two sequential learning challenges: catastrophic forgetting and forward knowledge transfer. We analyse the effect of task ordering and the number of object configurations on forgetting and forward knowledge transfer between tasks.
## II Learning Image Based Robotic Manipulation Tasks Sequentially
In this section, we formulate our RL agent and environment interaction setup to learn robotic manipulation tasks. We then discuss the problem of sequential task learning and present an approach to solve this problem.
### _RL formulation for Learning Image Based Robotic Manipulation Tasks_
Agent-environment interaction is formally defined by the Markov Decision Process (MDP) concept. A Markov Decision Process is a discrete-time stochastic control process. In RL, we formally define the MDP as a tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{P},r,\gamma\rangle\). Here, \(\mathcal{S}\) is a finite set of states, \(\mathcal{A}\) is a finite set of actions, \(\mathcal{P}\) is the state transition probability matrix, \(r\) is the reward for a given state-action pair and \(\gamma\) is the discount factor. A stochastic policy is defined as a distribution over actions given the states, i.e., the probability of taking each action for every state: \(\pi(\mathbf{a}|\mathbf{s})=\mathbb{P}[\mathcal{A}_{t}=\mathbf{a}|\mathcal{S}_{t}=\mathbf{s}]\).
**RL formulation:** We formulate the vision-based robotic manipulation tasks using the deep RL framework as below.
* **Environment:** It consists of a WidowX 250 five-axis robot arm equipped with a gripper. We place a table in front of the robot. Every task consists of an object placed on the table, which needs to be manipulated to complete the task successfully. We place a camera in the environment in an eye-to-hand configuration.
* **State:** The state \(\mathbf{s}_{t}\) represents the RGB image of the environment captured at time step \(t\). We capture images of size \(48\times 48\times 3\).
* **Action:** We define the action at the time step \(t\) as a 7 dimensional vector \(\mathbf{a}_{t}=\begin{bmatrix}\Delta X_{t}&\Delta O_{t}&g_{t}\end{bmatrix}^ {\top}\). Here, \(\Delta X_{t}\in\mathbb{R}^{3}\), \(\Delta O_{t}\in\mathbb{R}^{3}\), \(g_{t}\in\{0,1\}\) denotes the change in position, change in orientation, and gripper command (open/close), respectively at time step \(t\).
* **Reward:** The reward \(r(\mathbf{s}_{t},\mathbf{a}_{t})\in\{0,1\}\) is a binary variable which is equal to 1 if the task is successful and 0, otherwise. Reward is given at each time step.
The reward is kept simple and not shaped according to the tasks, so that the same reward framework can be used when scaling to a large number of tasks. Also, giving the reward at each time step, instead of at the end of the episode, makes the sum of rewards during an episode dependent on the number of time steps. Therefore, if the agent completes a task in fewer steps, its total reward for that episode will be higher.
### _Sequential Learning Problem and Solution_
We define the sequential task learning problem as follows. The agent is required to learn \(N\) tasks, with the condition that the tasks are given to the agent sequentially and not simultaneously. Therefore, when the agent is learning to perform a particular task, it can only access the data of the current task. This learning process resembles how a human learns. Let a sequence of robotic manipulation tasks \(T_{1},T_{2},\ldots,T_{N}\) be given. We assume that each task has the same type of state and action space. Each task has its own data in the typical offline reinforcement learning format \(\langle\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{r}_{t},\mathbf{s}_{t+1}\rangle\). The agent has to learn a policy \(\pi\), a mapping from state to action, for every task. If we naively train a neural network in this fashion, the problem of catastrophic forgetting will occur, which means performance on the previous task will decrease drastically as soon as the neural network starts learning a new task.
We use the regularisation-based approach presented in [24] to mitigate the problem of catastrophic forgetting. Figure 1 provides the overall framework we use to solve this problem. Each task's data is given one by one to the algorithm, which then starts training for the current task. First, a mini-batch is sampled from the current-task data and passed to the SAC-CQL algorithm (described in the next section), which then calculates the critic (Q) loss and the actor (policy) loss. If the task index is greater than one, we add a quadratic regularisation term as defined in [24] to the actor loss to reduce forgetting. These losses are then used to update the neural networks that represent the policy (actor network) and the Q-value function (critic networks). After the current task is successfully learned, the next task's data arrives, and this process is repeated until all tasks are learned.
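As a rough structural illustration of this loop (not the authors' code), the procedure of Figure 1 can be summarised by the Python sketch below; the helper names `sample_batch`, `sac_cql_step` and the three `si_*` callbacks are purely illustrative, and a concrete version of the SI bookkeeping is sketched in Sec. III-B.

```python
def train_sequentially(task_datasets, sample_batch, sac_cql_step, si_penalty,
                       si_after_step, si_after_task, steps_per_task=100_000, c=1.0):
    """Structural sketch of the loop in Fig. 1 (all names are illustrative).
    sample_batch(dataset) yields a mini-batch of <s, a, r, s'> tuples;
    sac_cql_step(batch, extra_policy_loss) performs one SAC-CQL update and
    accepts an extra term added to the policy loss; the si_* callbacks do the
    synaptic-intelligence bookkeeping."""
    for task_idx, dataset in enumerate(task_datasets):
        for _ in range(steps_per_task):          # fixed compute budget per task
            batch = sample_batch(dataset)
            extra = c * si_penalty() if task_idx > 0 else 0.0   # eq. (8)
            sac_cql_step(batch, extra)
            si_after_step()                      # accumulate path integrals
        si_after_task()                          # consolidate importances
```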
## III Integrating Sequential Task Learning with Offline RL
In this section, we discuss the SAC-CQL [13] offline algorithm and its implementation details. We then discuss the SI regularisation method for continual learning and provide details to integrate these methods to learn sequential tasks.
### _SAC-CQL algorithm for Offline RL_
There are two frameworks, namely online and offline learning, to train an RL agent. In the case of an online-RL training framework, an RL agent interacts with the environment to collect experience, updates itself (trains), interacts again, and so on. In simple terms, the environment is always available for the RL agent to evaluate itself and improve further. This interaction loop is repeated for many episodes during training until the RL agent gets good enough to perform the task successfully. In offline RL settings, by contrast, we collect data once and are no longer required to interact with the environment. This data can be collected by executing a hand-designed policy or can be obtained by a human controlling the robot (human demonstration). The data is a sequence of \(\langle\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{r}_{t},\mathbf{s}_{t+1}\rangle\) tuples.
In recent years, SAC (soft actor-critic) [9] has emerged as the most robust algorithm for training RL agents in continuous action spaces (when the action is a real vector), which is typically the case in robotics. SAC is an off-policy, entropy-based actor-critic method for continuous-action MDPs. Entropy-based methods add an entropy term to the existing optimisation goal of maximising expected reward: in addition to maximising expected reward, the RL agent also needs to maximise the entropy of the overall policy. This helps make the policy inherently exploratory and prevents it from getting stuck in a local minimum. Haarnoja _et al._ [9] define the RL objective in maximum-entropy RL settings as in (1).
\[J(\pi)=\sum_{t=0}^{T}\mathbb{E}_{(\mathbf{s}_{t},\mathbf{a}_{t})\sim\rho_{ \pi}}[r(\mathbf{s}_{t},\mathbf{a}_{t})+\alpha\mathcal{H}(\pi(\cdot|\mathbf{s }_{t}))]. \tag{1}\]
Here, \(\rho_{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})\) denotes the joint distribution of states and actions over all trajectories the agent could take, and \(\mathcal{H}(\pi(\cdot|\mathbf{s}_{t}))\) is the entropy of the policy for state \(\mathbf{s}_{t}\) as defined in (2).
\[\mathcal{H}(\pi(\cdot|\mathbf{s}_{t}))=\mathbb{E}[-\text{log}(f_{\pi}(\cdot| \mathbf{s}_{t}))]. \tag{2}\]
Here, \(\pi(\cdot|\mathbf{s}_{t})\) is a probability distribution over actions and \(f_{\pi}(\cdot|\mathbf{s}_{t})\) is the density function of the policy \(\pi\). \(\alpha\) is the temperature parameter controlling the entropy in the policy.
SAC provides an actor-critic framework where the policy is represented separately by the actor, and the critic only helps in improving the actor, thus limiting its role to training. We use CNNs to represent both the actor and the critic; instead of using a single Q-value network for the critic, we use two Q-value networks and take their minimum to better estimate the Q-value, as proposed in [21]. To stabilize learning, we use two more neural networks to represent target Q-values, one for each critic network, as described in DQN [16]. \(\phi\), \(\theta_{1}\), \(\theta_{2}\), \(\hat{\theta}_{1}\) and \(\hat{\theta}_{2}\) represent the parameters of the policy network, the two Q-value networks and the two target Q-value networks of the critic, respectively. Therefore, in total we use 5 CNNs to implement the SAC algorithm.
Our CNN architecture is similar to [20] except for the multi-head part, which is a single-layer neural network for each head. The Q-value network takes a state and an action as input and directly outputs the Q-value. We use a _tanh-Gaussian_ policy, as used in [20]. Since we use a stochastic policy, the policy network takes the state as input and outputs the mean and standard deviation of a Gaussian distribution for each action dimension. The action is then sampled from this distribution and passed through the _tanh_ function to bound actions between \((-1,1)\). Equation (3) defines the target Q-value, which is then used in (4) to calculate the Q-loss for each critic network. Equation (5) defines the policy loss for the actor network. These losses are then used to update the actor and critic networks using the Adam [11] optimisation algorithm.
\[\hat{Q}_{\hat{\theta}_{1},\hat{\theta}_{2}}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})=\mathbf{r}_{t}\\ +\gamma\mathbb{E}_{(\mathbf{s}_{t+1}\sim\mathcal{D},\,\mathbf{a}_{t+1}\sim\pi_{\phi}(\cdot|\mathbf{s}_{t+1}))}[\\ \text{min}[Q_{\hat{\theta}_{1}}(\mathbf{s}_{t+1},\mathbf{a}_{t+1}),Q_{\hat{\theta}_{2}}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})]\\ -\alpha\text{log}(\pi_{\phi}(\mathbf{a}_{t+1}|\mathbf{s}_{t+1}))] \tag{3}\]
\[J_{Q}(\theta_{i})=\frac{1}{2}\mathbb{E}_{(\mathbf{s}_{t},\mathbf{a}_{t})\sim\mathcal{D}}[(\hat{Q}_{\hat{\theta}_{1},\hat{\theta}_{2}}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})-Q_{\theta_{i}}(\mathbf{s}_{t},\mathbf{a}_{t}))^{2}]. \tag{4}\]
\[J_{\pi}(\phi)=\mathbb{E}_{(\mathbf{s}_{t}\sim\mathcal{D}, \mathbf{a}_{t}\sim\pi_{\phi}(\cdot|\mathbf{s}_{t}))}[\alpha\text{log}(\pi_{\phi }(\mathbf{a}_{t}|\mathbf{s}_{t}))\\ -\text{ min}[Q_{\theta_{1}}(\mathbf{s}_{t},\mathbf{a}_{t}^{\pi}),Q _{\theta_{2}}(\mathbf{s}_{t},\mathbf{a}_{t}^{\pi})]] \tag{5}\]
Here, \(i\in\{1,2\}\), \(\mathbf{a}_{t}^{\pi}\) is the action sampled from policy \(\pi_{\phi}\) for state \(\mathbf{s}_{t}\), and \(\mathcal{D}\) represents the current task's data. For offline RL, we use the non-Lagrange version of the conservative Q-learning (CQL) approach proposed in [13], as it only requires adding a regularisation loss to already well-established continuous-action RL methods like soft actor-critic. This loss function is defined in (6).
\[J_{Q}^{\text{total}}(\theta_{i})=J_{Q}(\theta_{i})\\ +\alpha_{\text{cql}}\mathbb{E}_{\mathbf{s}_{t}\sim\mathcal{D}}[ \text{log}\sum_{\mathbf{a}_{t}}\text{exp}(Q_{\theta_{i}}(\mathbf{s}_{t}, \mathbf{a}_{t}))-\mathbb{E}_{\mathbf{a}_{t}\sim\mathcal{D}}[Q_{\theta_{i}}( \mathbf{s}_{t},\mathbf{a}_{t})]] \tag{6}\]
Here, \(i\in\{1,2\}\), and \(\alpha_{\text{cql}}\) controls the amount of CQL loss added to the Q-loss to penalize actions that are too far away from the existing trajectories, thus keeping the policy conservative in the sense of exploration.
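For concreteness, a minimal sketch of how losses (3)-(6) can be computed for one critic is given below. It is written with PyTorch purely for illustration (the paper does not state its framework); the network interfaces `q_net`, `target_q1`, `target_q2` and `policy`, the hyperparameter values, and the use of uniformly sampled actions to approximate the log-sum-exp in (6) are all assumptions, not the authors' implementation.

```python
import torch

def sac_cql_critic_loss(q_net, target_q1, target_q2, policy, batch,
                        gamma=0.99, alpha=0.1, alpha_cql=1.0, n_samples=10):
    """Sketch of eqs. (3), (4), (6) for one critic.  `batch` holds tensors
    s, a, r, s_next; `policy(s)` returns a tanh-squashed action and its
    log-probability; all hyperparameter values are illustrative."""
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    with torch.no_grad():                                  # target of eq. (3)
        a_next, logp_next = policy(s_next)
        q_min = torch.min(target_q1(s_next, a_next), target_q2(s_next, a_next))
        y = r + gamma * (q_min - alpha * logp_next)
    q_pred = q_net(s, a)
    bellman = 0.5 * ((y - q_pred) ** 2).mean()             # eq. (4)
    # CQL regulariser of eq. (6): log-sum-exp of Q over (here, uniformly)
    # sampled actions minus the Q-value of the dataset actions.
    rand_a = 2 * torch.rand(n_samples, *a.shape, device=a.device) - 1
    q_rand = torch.stack([q_net(s, rand_a[i]) for i in range(n_samples)])
    cql_term = (torch.logsumexp(q_rand, dim=0) - q_pred).mean()
    return bellman + alpha_cql * cql_term

def sac_policy_loss(q1, q2, policy, s, alpha=0.1):
    """Sketch of eq. (5): the actor maximises the min of the two Q-values
    minus the entropy penalty."""
    a_pi, logp = policy(s)
    q_min = torch.min(q1(s, a_pi), q2(s, a_pi))
    return (alpha * logp - q_min).mean()
```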
### _Applying Synaptic Intelligence in Offline RL_
Synaptic intelligence is a regularisation based algorithm proposed in [24] for sequential task learning. It regularises the loss function of a task with a quadratic loss function as defined in (7) to reduce catastrophic forgetting.
\[L_{\mu}=\sum_{k}\Omega_{k}^{\mu}(\tilde{\phi}_{k}-\phi_{k})^{2} \tag{7}\]
Here, \(L_{\mu}\) is the SI loss for the current task being learned with index \(\mu\), \(\phi_{k}\) is the \(k\)-th weight of the policy network, and
\(\tilde{\phi}_{k}\) is the reference weight corresponding to the policy network parameters at the end of the previous task. \(\Omega_{k}^{\mu}\) is the per-parameter regularisation strength; for more details on how to calculate \(\Omega_{k}^{\mu}\), refer to [24]. The SI algorithm penalizes neural network weights based on their contributions to the change in the overall loss function. Weights that contributed more to the previous tasks are penalized more and thus do not deviate much from their original values, while the other weights are free to learn new tasks. SI defines the importance of a weight as the sum of its gradients over the training trajectory, as this approximates its contribution to the reduction of the overall loss function. We use a similar approach to apply SI to offline RL as presented in [22]. Although the authors of [22] did not use SI or offline RL, the approach is similar for applying any regularisation-based continual learning method to the actor-critic RL framework. We regularise the actor to reduce forgetting of previous tasks while learning new tasks using offline reinforcement learning. We add the quadratic loss defined in [24] to the policy-loss term in the SAC-CQL algorithm, so that the overall policy loss becomes as described in (8)
\[J_{\pi}^{\text{total}}(\phi)=J_{\pi}(\phi)+cL_{\mu} \tag{8}\]
Here, \(c\) is the regularisation strength. Another aspect of continual learning is finding a way to provide the current task index to the neural network. There are many approaches to tackle this problem, from one-hot encoding to recognizing the task from context. We chose the most straightforward option of a multi-head neural network: each head of the neural network represents a separate task, so we simply select the head corresponding to the given task. For training each task we keep a fixed compute budget of 100k gradient steps.
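The bookkeeping behind \(\Omega_{k}^{\mu}\) and the penalty in (7)-(8) can be sketched as follows. This is an illustrative NumPy rendering of the method of [24] for a flat parameter vector, not the authors' code; the damping constant `xi` and the interface are assumptions.

```python
import numpy as np

class SynapticIntelligence:
    """Minimal sketch of the SI bookkeeping of [24]."""

    def __init__(self, n_params, xi=0.1):
        self.xi = xi
        self.omega = np.zeros(n_params)    # running path integral, current task
        self.Omega = np.zeros(n_params)    # per-parameter importance in eq. (7)
        self.ref = np.zeros(n_params)      # phi_tilde: weights after last task
        self.start = np.zeros(n_params)    # weights at start of current task

    def accumulate(self, grad, delta_phi):
        # Call after every gradient step: grad is the loss gradient, delta_phi
        # the parameter update actually applied; their product approximates the
        # per-weight contribution to the drop in the loss.
        self.omega -= grad * delta_phi

    def end_task(self, phi):
        # Consolidate importances and move the reference point.
        self.Omega += self.omega / ((phi - self.start) ** 2 + self.xi)
        self.omega[:] = 0.0
        self.ref = phi.copy()
        self.start = phi.copy()

    def penalty(self, phi, c=1.0):
        # c * L_mu, the quadratic surrogate added to the policy loss in eq. (8).
        return c * np.sum(self.Omega * (self.ref - phi) ** 2)
```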
## IV Experiments, Results and Discussion
In this section we first discuss the RL environment setup and provide details of data collection for offline RL. Further, we evaluate the performance of SI with a varying number of object configurations and densities for different task orderings.
### _Experimental Setup_
Our experimental setup is based on a simulated environment, Roboverse, used in [20]. It is a GYM-like [3] environment based upon the open-source physics simulator PyBullet [5]. We collected data for three tasks using this simulated environment.
**Object Space and Tasks:** We define the object space as a subset of the workspace of the robot where the target object of the task is to be placed. In our case, it is a rectangular area on the table in front of the robot. The target object is randomly placed in the object space when initializing the task. We selected the following three tasks, which share some similarities, for all our experiments.
1) _Press Button:_ A button is placed in the object space. The objective of the task is to press the button. This is the easiest task, as the robot only needs to learn to reach the object.
2) _Pick Shed:_ The objective of this task is to pick up the object successfully. Thus, the robot also needs to learn to close the gripper in addition to reaching the object. Figure 2(a) shows the object space of the pick-shed task.
3) _Open Drawer:_ The objective of this task is to open the drawer.
**Data Collection:** For each task we collect 6 datasets by varying the area (40, 360 and 1000 cm\({}^{2}\)) and density (10 and 20 object configurations per cm\({}^{2}\)) of the object space. Each episode consists of 20 steps, and each step is a typical tuple \(\langle\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{r}_{t},\mathbf{s}_{t+1}\rangle\) used in reinforcement learning. We use simple but accurate policies to collect data. The accuracy of these data-collection policies is above \(80\%\). Figure 2(b) shows how the reward is distributed across the object space for the pick-shed task. Each dot represents a trajectory, and the color represents the total reward for that trajectory. It can be seen that when the object is placed closer to the robot, the reward is high, as the task is completed in fewer steps, while it becomes low as the object moves away.
### _Empirical Results and Analysis_
We performed a total of 72 experiments. We performed sequential learning on two-task sequences (doublets). Six doublets are possible using the data collected for three tasks. These are button-shed, button-drawer, shed-button, shed-drawer, drawer-shed, and drawer-button. For each doublet sequence, we perform 2 sets of experiments, one with SI regularisation and another without SI regularisation. Each set contains 6 experiments obtained by varying the area and density of the object space. Apart from these 72 experiments, we also trained the agent for single tasks using SAC-CQL for reference baseline performance to evaluate forward transfer. We do behaviour cloning for the initial 5k steps to learn faster, as we have a limited compute budget. We use the metrics mentioned in [22] for evaluating the performance of a continual learning agent. Each task is trained for \(\Delta=100K\) steps. The total number of tasks in a sequence is \(N=2\). Total steps \(T=2\cdot\Delta\). The \(i\)-th task is trained for \(t\in[(i-1)\cdot\Delta,i\cdot\Delta]\).
**Task Accuracy:** We evaluate the agent after every 1000 training steps by sampling 10 trajectories from the environment for each task. The accuracy of the agent for a task is defined as the number of successful trajectories out of those 10 trails. Figure 3 shows the accuracy of three task
Fig. 2: Object space and reward distribution for the pick-shed task with an area of \(1000\) cm\({}^{2}\) and a density of 20 object configurations per cm\({}^{2}\). (a) Object space. (b) Reward distribution (cumulative reward along sampled trajectories) of the pick-shed task (area=1000, density=20). The red semicircle on top represents the robot location.
sequences (button-shed, button-drawer, drawer-button) over the complete training period of 200k steps for the area size of \(40\,cm^{2}\) with densities of 10 and 20 object configurations per \(cm^{2}\). The top row represents sequential learning with SI, while the bottom row represents sequential learning without SI. SI is found to work better, as evidenced by the overlapping Task-1 and Task-2 accuracies. We observed that SI was most helpful in the button-shed doublet due to the overlapping nature of these tasks, as both require reaching the object. This shows the benefit of using SI for overlapping tasks.
**Forgetting:** It measures the decrease in accuracy of a task as we train more tasks and is defined as \(F_{i}:=p_{i}(i\cdot\Delta)-p_{i}(T)\). Here, \(p_{i}(t)\in[0,1]\) is the success rate of task \(i\) at time \(t\). Figure 4 shows the forgetting of Task-1 after training Task-2. We can see that SI performed better or equally well in all cases. In fact, in some cases, like button-shed, forgetting is negative, which means the performance of Task-1 improved after training on Task-2. This indicates a transfer of knowledge between the two tasks. This phenomenon is not seen in the case of sequential learning without SI. This clearly indicates that SI helps in reducing catastrophic forgetting. No significant trends are observed with variation of the object-space area, but forgetting increased with the increase in object-space density. This might be due to the limited compute budget (100K steps) per task, as tasks with larger area and density would require more training to show good results.
**Forward Transfer:** It measures knowledge transfer by comparing the performance of a given task when trained individually versus when the task is learned after the network has already been trained on previous tasks, and is defined as
\[FT_{i}:=\frac{\text{AUC}_{i}-\text{AUC}^{b}_{i}}{1-\text{AUC}^{b}_{i}}, \tag{9}\]
where \(\text{AUC}_{i}=\frac{1}{\Delta}\int_{(i-1)\cdot\Delta}^{i\cdot\Delta}p_{i}(t)\,\text{d}t\) represents the area under the accuracy curve of task \(i\) and \(\text{AUC}^{b}_{i}=\frac{1}{\Delta}\int_{0}^{\Delta}p_{i}^{b}(t)\,\text{d}t\) represents the area under the curve of the reference baseline task, with \(p_{i}^{b}(t)\) the reference baseline performance. Figure 5 shows the forward transfer for Task-2 after the network is first trained on Task-1. We use single-task training performance as the reference for Task-2 while evaluating forward transfer. We observed that in most cases, training without SI gives a better transfer ratio than training with SI. This may be due to the high value of the SI regularisation strength (which is set to 1 in all cases), which restricts the movement of weights away from the solution of the previous task. This can also be noticed in the form of reduced accuracy levels of Task-2 in Figure 3: the accuracy levels of Task-2 are lower compared to its non-SI counterpart. Although a high regularisation strength helps in reducing catastrophic forgetting, it also hinders the ability to learn a new task, thus reducing forward transfer. This highlights the stability-plasticity problem: any method that tries to make learning more stable to reduce forgetting inadvertently also restricts the flexibility of the connectionist model to learn a new task.
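The two metrics above are straightforward to compute from the logged success rates; a small illustrative sketch (not the authors' evaluation code) is given below, assuming accuracies are recorded at evenly spaced evaluation points as described earlier.

```python
import numpy as np

def forgetting(acc_task_i, end_of_task_idx):
    """F_i = p_i(i*Delta) - p_i(T): drop in the success rate of task i between
    the end of its own training window and the end of all training."""
    return acc_task_i[end_of_task_idx] - acc_task_i[-1]

def forward_transfer(acc_task_i, acc_baseline):
    """Eq. (9), with the integrals approximated by the mean of evenly spaced
    accuracy samples over the task's training window."""
    auc = float(np.mean(acc_task_i))
    auc_b = float(np.mean(acc_baseline))
    return (auc - auc_b) / (1.0 - auc_b)
```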
**Training Time:** Apart from these metrics, we observed that the agent requires on average 14k, 10k, and 16k steps to achieve its first success on Task-2 when trained directly, sequentially without SI, and sequentially with SI, respectively. This means that the agent learns the task faster when trained sequentially without SI regularisation, but a little slower when trained sequentially with SI regularisation, than when training the task directly. This shows another benefit of sequential learning over single-task learning.
Another interesting observation was made in the case of the shed-button (area 360, density 20) doublet. While training on Task-1 (pick shed), the agent showed some success on Task-2 (press button) even before achieving any success on Task-1 itself. This might be due to the nature of the tasks, as
Fig. 3: Task accuracy for tasks button-shed, button-drawer and drawer-button (area=360, density=10,20). Top row is with SI
the trajectory of the press-button task is common to the other tasks. Therefore, the agent has a tendency to acquire knowledge for similar tasks. This may also be the result of behaviour cloning for the initial 5k steps, where the agent tries to mimic the data-collection policy for a few initial training steps. Also, we observed that increasing the object-space area helps in knowledge transfer, which can be seen from the increase in average forward transfer with area size.
## V Conclusion and Future Work
We investigated catastrophic forgetting and forward knowledge transfer for sequentially learning image-based robotic manipulation tasks by combining a continual learning approach with an offline RL framework. We used SAC-CQL as the offline deep RL algorithm with synaptic intelligence (SI) to mitigate catastrophic forgetting. A multi-headed CNN was used to provide knowledge of the current task index to the neural network. We performed a series of experiments with different task combinations and with varying numbers of object configurations and densities. We found that SI is useful for reducing forgetting but shows only limited forward transfer of knowledge.
We also found that the ordering of tasks significantly affects the performance of sequential task learning. Therefore, tasks may be chosen so that the previous task helps in learning the next one as the complexity of tasks increases. This calls for exploring curriculum learning for sequential tasks. The experiments also suggest the importance of prior knowledge for continual learning. An agent trained only with state-action pairs from a large number of diverse tasks (even without reward) may provide better prior knowledge. Future work will also focus on training tasks for more steps to explore more interesting patterns.
Fig. 4: Forgetting Matrix. Top row is with SI regularisation, bottom row is without regularisation
Fig. 5: Forward Transfer Matrix |
2306.17691 | Tabulating Absolute Lucas Pseudoprimes | In 1977, Hugh Williams studied Lucas pseudoprimes to all Lucas sequences of a
fixed discriminant. These are composite numbers analogous to Carmichael numbers
and they satisfy a Korselt-like criterion: $n$ must be a product of distinct
primes and $p_i - \delta_{p_i} | n - \delta_n $ where $\delta_n$ is a Legendre
symbol with the first argument being the discriminant of the Lucas sequence.
Motivated by tabulation algorithms for Carmichael numbers, we give algorithms
to tabulate these numbers and provide some asymptotic analysis of the
algorithms. We show that there are only finitely many absolute Lucas
pseudoprimes $n = \prod_{i = 1}^k p_i$ with a given set of $k-2$ prime factors.
We also provide the first known tabulation for discriminant $5$. | Chloe Helmreich, Jonathan Webster | 2023-06-30T14:18:18Z | http://arxiv.org/abs/2306.17691v1 | ###### Abstract
In 1977, Hugh Williams studied Lucas pseudoprimes to all Lucas sequences of a fixed discriminant. These are composite numbers analogous to Carmichael numbers and they satisfy a Korselt-like criterion: \(n\) must be a product of distinct primes and \(p_{i}-\delta_{p_{i}}|n-\delta_{n}\) where \(\delta_{n}\) is a Legendre symbol with the first argument being the discriminant of the Lucas sequence. Motivated by tabulation algorithms for Carmichael numbers, we give algorithms to tabulate these numbers and provide some asymptotic analysis of the algorithms. We show that there are only finitely many absolute Lucas pseudoprimes \(n=\prod_{i=1}^{k}p_{i}\) with a given set of \(k-2\) prime factors. We also provide the first known tabulation up to \(2^{64}\) for discriminant \(5\).
**TABULATING ABSOLUTE LUCAS PSEUDOPHIMES**
**Chloe Helmreich**
_Department of Mathematical Sciences, Butler University, Indianapolis, IN, USA_
[email protected]
**Jonathan Webster**
_Department of Mathematical Sciences, Butler University, Indianapolis, IN, USA_
[email protected]
## 1 Introduction
A base \(a\) Fermat pseudoprime is a composite integer \(n\) such that
\[a^{n-1}-1\equiv 0\pmod{n}.\]
It is well known that Carmichael numbers are the composite integers for which that congruence holds for all \(a\) such that \((a,n)=1\). Korselt showed that such a number \(n\) is a product of \(k>2\) distinct primes \(p_{1},p_{2},\ldots,p_{k}\) and \(p_{i}-1|n-1\). The least example is \(561=3\cdot 11\cdot 17\). From a computational view, Fermat's Little Theorem was a step into primality testing, and Carmichael numbers are a roadblock to this being a successful test. There are two notable approaches to overcoming this obstacle. The first is by strengthening the Fermat test by considering the factors arising from a difference of squares factorization of \(a^{n-1}-1\) (e.g. [17]). A second approach combines a Fermat test with a seemingly conflicting test based on Lucas sequences. An example of this would be the Baillie-PSW test [2, 1], which is what GMP currently implements [9]. Another example would be Grantham's Frobenius pseudoprimes [10]. The pseudoprimes to the Lucas sequences are our motivating interest.
Since Carmichael numbers inform us about the reliability of the Fermat test, it would make sense to examine the analogous numbers for Lucas sequences. These
numbers are, perhaps, less well-known. H.C. Williams showed that these numbers also satisfy a Korselt-like criterion [21]. Using this result as a starting point, we continue a study of these numbers from an algorithmic point of view with an aim of tabulating them. Our key contributions are as follows:
1. We prove theorems establishing finiteness and boundedness conditions. The versions of these theorems for Carmichael numbers were initially proved by Beeger for a prime \(P\) and generalized by Duparc for \(P\) being composite [4, 5].
2. We provide an algorithmic interpretation of these theorems in the spirit of [15, 18]. In particular, the bounds on two primes are \(O(P^{2})\) and \(O(P^{3})\) but we can find both primes after creating only \(O(P(\log P)^{2})\) candidates.
3. We implemented the algorithms in C++ and tabulated all absolute Lucas pseudoprimes less than \(2^{64}\) using discriminant 5.
Since the first Lucas sequence used in the Baillie-PSW test is that of the Fibonacci sequence1, our computations will deal with Lucas sequences having discriminant 5. However, our results apply to any family (by discriminant) of Lucas sequences.
Footnote 1: Technically, there are at least 8 different ways to choose the specific parameters for the Lucas sequence. Method \(A\), \(A^{*}\), \(B\), \(B^{*}\) all use a Lucas sequence with \(d=5\) as the first check. Method \(A\), which GMP implements, uses the parameters \((1,-1)\), e.g. the Fibonacci sequence.
The rest of the paper is organized as follows. Section 2 gives the background on Lucas sequences, defines what absolute Lucas pseudoprimes are, and concludes with the Korselt-like criterion. Section 3 establishes the new theorems providing bounds that may be used for algorithmic purposes. Section 4 is a comment on how we will account for asymptotic cost. Sections 5 and 6 state algorithms for tabulating these numbers and provide some asymptotic analysis; these two sections are bifurcated by a "small" input size vs a "large" input size. Finally, section 7 addresses the practical issues with the implementation and provides some statistics on the tabulation.
## 2 Lucas Sequences
There are many equivalent definitions of the Lucas \(U\)-sequence. We state two of them and encourage the reader to consult standard sources (such as [14, 20]) for a more robust account. First, they may be defined by expressions involving roots of a certain polynomial:
\[U_{n}=U_{n}(A,B)=(\alpha^{n}-\beta^{n})/(\alpha-\beta),\]
where \(\alpha,\beta\) are the zeros of \(x^{2}-Ax+B\), and \(A\), \(B\) are relatively prime integers with \(A>0\). Let the discriminant be \(d=A^{2}-4B\). Alternatively, we may define
these sequences with a recurrence relation:
\[U_{0}(A,B)=0,U_{1}(A,B)=1,\text{ and }U_{n}(A,B)=AU_{n-1}(A,B)-BU_{n-2}(A,B).\]
This latter definition is used to derive identities that allow efficient computation of \(U_{n}(A,B)\pmod{m}\) for large \(n\) with an algorithm akin to square-and-multiply [13]. We will frequently suppress the \(A,B\) notation and will be implicitly working with a given family of Lucas sequences all with the same discriminant.
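For illustration, one simple way to realize such an \(O(\log n)\) computation (not necessarily the identities of [13], but an equivalent approach) is to exponentiate the companion matrix of \(x^{2}-Ax+B\) modulo \(m\). The Python sketch below is illustrative only.

```python
def lucas_u_mod(A, B, n, m):
    """Compute U_n(A, B) mod m by binary exponentiation of the companion
    matrix M = [[A, -B], [1, 0]], using M^n (1, 0)^T = (U_{n+1}, U_n)^T."""
    def mul(X, Y):
        return [[(X[0][0]*Y[0][0] + X[0][1]*Y[1][0]) % m,
                 (X[0][0]*Y[0][1] + X[0][1]*Y[1][1]) % m],
                [(X[1][0]*Y[0][0] + X[1][1]*Y[1][0]) % m,
                 (X[1][0]*Y[0][1] + X[1][1]*Y[1][1]) % m]]
    result = [[1, 0], [0, 1]]                 # identity matrix
    base = [[A % m, (-B) % m], [1 % m, 0]]
    e = n
    while e > 0:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result[1][0]                       # U_n mod m

# Example: for the Fibonacci parameters (A, B) = (1, -1) and p = 7 we have
# (5|7) = -1, so U_8 should vanish mod 7; indeed U_8 = 21.
assert lucas_u_mod(1, -1, 8, 7) == 0
```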
**Theorem 1** (Analog of Fermat's Little Theorem).: _If \(p\) is an odd prime and \(p\nmid dB\), then_
\[U_{p-\left(\frac{d}{p}\right)}(A,B)\equiv 0\pmod{p}.\]
As with Fermat's Little Theorem, the contrapositive of this theorem can be used to detect that an integer is composite. However, one can find composite numbers that the contrapositive of the above theorem does not detect, which motivates the following definition.
**Definition 1**.: An _\((A,B)\)-Lucas pseudoprime_ is a composite integer \(n\) satisfying
\[U_{n-\delta_{n}}(A,B)\equiv 0\pmod{n}\]
where \(\delta_{n}\) is the Jacobi symbol \(\left(\frac{d}{n}\right)\).
For example, the Fibonacci pseudoprimes (A081264) are \((1,-1)\)-Lucas pseudoprimes. The first 15 are: 323, 377, 1891, 3827, 4181, 5777, 6601, 6721, 8149, 10877, 11663, 13201, 13981, 15251, and 17119.
**Definition 2**.: An _absolute Lucas pseudoprime (to the discriminant \(d\))_ is a composite integer \(n\) satisfying
\[U_{n-\delta_{n}}(A,B)\equiv 0\pmod{n}\]
for all \(A,B\) with \(d=A^{2}-4B\) and \((n,dB)=1\), where \(\delta_{n}\) is the Jacobi symbol \(\left(\frac{d}{n}\right)\).
The numbers \(323,6601,6721,11663,\) and 17119 are absolute Lucas pseudoprimes from the above 15 Fibonacci pseudoprimes. This can be checked with a Korselt-like criterion.
**Theorem 2** (Williams' Criterion [21]).: _A composite number \(n\) is an absolute Lucas pseudoprime if and only if \(n\) is squarefree and \(p-\delta_{p}|n-\delta_{n}\) for all prime divisors \(p\) of \(n\)._
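As an illustration, the criterion is easy to test directly for small \(n\) coprime to \(2d\) (a simplifying restriction adopted only in this sketch); note, for instance, that \(323=17\cdot 19\) passes because \(n-\delta_{n}=324\) is divisible by \(17-\delta_{17}=18\) and \(19-\delta_{19}=18\). The following Python sketch is illustrative only and is not the authors' C++ code.

```python
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, by the standard reciprocity algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                      # quadratic reciprocity swap
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_absolute_lucas_pseudoprime(n, d=5):
    """Williams' criterion: n squarefree, composite, and (p - (d|p)) | (n - (d|n))
    for every prime p | n.  Trial-division factoring, so only for small n."""
    if n < 9 or gcd(n, 2 * d) != 1:      # restrict to odd n coprime to d
        return False
    factors, m, p = [], n, 3
    while p * p <= m:
        if m % p == 0:
            factors.append(p)
            m //= p
            if m % p == 0:               # square factor: not squarefree
                return False
        else:
            p += 2
    if m > 1:
        factors.append(m)
    if len(factors) < 2:                 # n must be composite
        return False
    delta_n = jacobi(d, n)
    return all((n - delta_n) % (q - jacobi(d, q)) == 0 for q in factors)

# 323 is an absolute Lucas pseudoprime for d = 5, but the Fibonacci
# pseudoprime 377 = 13 * 29 is not.
assert is_absolute_lucas_pseudoprime(323) and not is_absolute_lucas_pseudoprime(377)
```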
If \(d=1\), the absolute Lucas pseudoprimes are Carmichael numbers and the divisibility statement in Williams' Korselt-like criterion becomes \((p-1)|(n-1)\). In the algorithms for tabulating Carmichael numbers, it was common to need the Carmichael function \(\lambda(n)\). We will need a similar function but only state what values it takes for square-free numbers, which is our only concern.
**Definition 3**.: Let \(n\) be a product of distinct primes. That is, \(n=\prod_{i=1}^{r}p_{i}\). Then define \(\lambda_{d}(n)=\operatorname{lcm}(p_{1}-\delta_{p_{1}},\ldots,p_{r}-\delta_{p_{ r}})\).
If \(d=1\), \(\lambda_{1}(n)\) is the Carmichael function. While the asymptotic behavior of \(\lambda_{1}(n)\) has been well-studied (e.g. [7, 8]), we know of no results on \(\lambda_{d}(n)\) for \(d\neq 1\).
## 3 Boundedness Theorems
To tabulate Carmichael numbers with \(k\) prime factors, the general strategy is to start with a composite number, called a _preproduct_, with \(k-1\) (the "large" case) or \(k-2\) (the "small" case) prime factors and find the remaining one or two prime factors which we will usually call \(q\) and \(r\). This strategy is enabled by theorems that limit both the number and size of primes that may complete the preproduct. We show that the Korselt-like criterion may be used to get analogous boundedness and finiteness results.
**Theorem 3** (See Proposition 1 of [15]).: _Let \(n\) be an absolute Lucas pseudoprime less than \(B\) with \(k>2\) prime factors._
1. _Let_ \(r<k\) _and put_ \(P_{r}=\prod_{i=1}^{r}p_{i}\)_. Then_ \(p_{r+1}<(B/P_{r})^{1/(k-r)}\) _and_ \(p_{r+1}-\delta_{p_{r+1}}\) _is relatively prime to_ \(p_{i}\) _for all_ \(i\leq r\)_._
2. _Let_ \(P_{k-1}=\prod_{i=1}^{k-1}p_{i}\)_. Then_ \(P_{k-1}p_{k}\equiv\delta_{P_{k-1}}\delta_{p_{k}}\pmod{\lambda_{d}(P_{k-1})}\) _and_ \(p_{k}-\delta_{p_{k}}\) _divides_ \(P_{k-1}-\delta_{P_{k-1}}\)_._
3. _Each_ \(p_{i}\) _satisfies_ \(p_{i}<\sqrt{n}<\sqrt{B}\)_._
Proof.: These follow from \(p_{i}-\delta_{p_{i}}|n-\delta_{n}\).
The theorem requires \(k>2\); we will address the case of \(k=2\) below. The requirement that \(p_{r+1}-\delta_{p_{r+1}}\) is relatively prime to \(p_{i}\) for all \(i\leq r\) is stronger than the square-free requirement in the Korselt-like criterion. We call a square-free composite number \(P_{r}\)_admissible_ if all the primes satisfy the divisibility requirement of Theorem 3.1. Further, we say \(P_{r}\) is _bounds admissible_ (with respect to \(B\)) if it also satisfies the inequality in Theorem 3.1. For example, every prime number is admissible, but only primes less than \(B^{1/3}\) are bounds admissible.
When \(d=1\), the admissible numbers are also called cyclic (in the group theory sense) numbers. In [6], Erdos proved that the counting function of cyclic numbers is asymptotic to
\[\frac{e^{-\gamma}B}{\log\log\log B},\]
where \(\gamma\approx 0.5772\ldots\) is the Euler-Mascheroni constant. We believe that his proof holds for \(d\neq 1\) due to a formal replacement of various "1's" in the proof with an
appropriate Jacobi symbol. However, this \(\log\log\log B\) plays no significant role in the analysis that follows, so we do not attempt to prove this result.
**Theorem 4** (See Proposition 2 of [15]).: _Let \(n\) be an absolute Lucas pseudoprime of the form \(n=Pqr\) with \(q\) and \(r\) primes, \(q<r\), and \(P>1\). Then, there are integers \(1\leq D<P<C\) such that with \(\Delta=CD-P^{2}\),_
\[q-\delta_{q} =\frac{(P-\delta_{P})(\delta_{q}P+\delta_{r}D)}{\Delta}, \tag{1}\] \[r-\delta_{r} =\frac{(P-\delta_{P})(\delta_{r}P+\delta_{q}C)}{\Delta},\] (2) \[\frac{(p-1)P^{2}-2P}{p+1} <CD<\frac{(p+3)P^{2}+2P}{p+1}. \tag{3}\]
_where \(p\) is the largest prime dividing \(P\)._
Proof.: Since
\[q-\delta_{q}|Pqr-\delta_{P}\delta_{q}\delta_{r}=Pqr-Pr\delta_{q}+Pr\delta_{q} -\delta_{P}\delta_{q}\delta_{r}\]
it follows that \(q-\delta_{q}|Pr-\delta_{P}\delta_{r}\). Similarly, \(r-\delta_{r}|Pq-\delta_{P}\delta_{q}\). So that we define positive integers
\[D=\frac{Pq-\delta_{P}\delta_{q}}{r-\delta_{r}}\quad\text{and}\quad C=\frac{ Pr-\delta_{P}\delta_{r}}{q-\delta_{q}}.\]
satisfying \(1\leq D<P<C\). We have
\[C(q-\delta_{q})=P\left(\frac{Pq-\delta_{P}\delta_{q}}{D}+\delta_{r}\right)- \delta_{P}\delta_{r}\]
so that
\[CD(q-\delta_{q})=P^{2}q-P\delta_{P}\delta_{q}+PD\delta_{r}-D\delta_{P}\delta_{ r}.\]
Further,
\[(CD-P^{2})(q-\delta_{q}) =P^{2}\delta_{q}-P\delta_{P}\delta_{q}+PD\delta_{r}-D\delta_{P} \delta_{r}\] \[=(P-\delta_{P})(\delta_{q}P+\delta_{r}D).\]
Note that \(\Delta=CD-P^{2}\neq 0\), so that
\[q-\delta_{q}=\frac{(P-\delta_{P})(\delta_{q}P+\delta_{r}D)}{\Delta}.\]
and similarly
\[r-\delta_{r}=\frac{(P-\delta_{P})(\delta_{r}P+\delta_{q}C)}{\Delta}.\]
Note that \(p+1\leq q-\delta_{q}\) so
\[p+1\leq q-\delta_{q}=\frac{(P-\delta_{P})(\delta_{q}P+\delta_{r}D)}{\Delta}.\]
So,
\[|CD-P^{2}|<\frac{(P+1)(P+D)}{p+1}<\frac{2P(P+1)}{p+1}\]
implies
\[-\frac{2P(P+1)}{p+1}+P^{2}<CD<\frac{(2P)(P+1)}{p+1}+P^{2}\]
which is equivalent to
\[\frac{(p-1)P^{2}-2P}{p+1}<CD<\frac{(p+3)P^{2}+2P}{p+1}.\]
**Corollary 1**.: _There are only finitely many absolute Lucas pseudoprimes with \(k>2\) prime factors assuming a set of \(k-2\) of the prime factors are fixed._
**Corollary 2**.: _With the notation above, \(q<2(P+1)^{2}\) and \(r<(P+1)^{3}\)._
An interpretation of the above corollary would imply that \(O(P^{2}\log P)\) arithmetic operations are required to use a sieve of Eratosthenes to find candidate primes \(q\) for \(P\). This, in turn, requires \(\Omega(P^{2}\log P)\) arithmetic operations to find \(r\), because at least \(O(1)\) arithmetic operations are required for a given pair \(P\) and \(q\). We will see below that we can do much better than this.
## 4 Model of Computation
It is common to measure the asymptotic cost of an algorithm in either bit operations or arithmetic operations. Informally, asymptotic notation (especially big-\(O\)) is often used as a way to give guidance about the run-time of implemented algorithms. Our theorem statements will count the number of candidates created for \(q\) or \(r\), but our exposition may speak more loosely as if this were measuring time. The theorems could be viewed as the arithmetic cost of creating \(q\) and \(r\) without testing whether they are prime. In that case, we could multiply these asymptotic results by the asymptotic cost of primality testing to obtain an asymptotic result measured in arithmetic operations. However, this result would not provide much guidance for the run-time of an implementation because primality testing is often not the bottleneck. For example, it is often the case that \(q\) and \(r\) may be rejected with \(O(1)\) arithmetic operations: they may be too big, they may not be integers, they may not satisfy certain other divisibility statements, or they may be small enough to be in a look-up table. So, it could be the case that the average cost is \(O(1)\) arithmetic operations. Our implementation uses strong Fermat tests with the bases \(\{2,3,5,7,11\}\) and this is sufficient to prove primality for all 32-bit
integers [12]. Whenever complete or partial factorizations of \(n-1\) or \(n+1\) are known there are fast primality tests2 (see Sections 4.1 and 4.2 of [3] or [20] for more details). As we will see below, it is often the case that we know a complete or partial factorization of \(q-\delta_{q}\) or \(r-\delta_{r}\) and so these tests would be helpful. Given the variety of approaches that are available, we believe that it is best to provide asymptotic arguments in terms of the counts of candidates \(q\) and \(r\) rather than the more traditional bit or arithmetic operations. For empirical evidence supporting this, see Example 1 where about 15.6 million candidate primes are created and the algorithm only invoked a primality test 68 times.
Footnote 2: It is perhaps fitting for this work that these tests are also inspired by Édouard Lucas and many of the variants bear his name.
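For reference, the strong Fermat (Miller-Rabin) test with the fixed base set mentioned above is short to state; the sketch below is illustrative Python rather than the authors' C++ implementation.

```python
def is_prime_small(n):
    """Strong Fermat test to the bases {2, 3, 5, 7, 11}; as noted above, this
    set of bases is enough to prove primality for all 32-bit integers [12]."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = (x * x) % n
            if x == n - 1:
                break
        else:
            return False
    return True
```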
## 5 Algorithms for small preproducts
In [15, 18], Carmichael numbers are constructed of the form \(n=Pqr\). Here we briefly sketch the prior approach in order to show that a comparable tabulation algorithm exists for absolute Lucas pseudoprimes. The inequality on \(D\) found in Theorem 4 may be used in a for-loop. Following the approach of [15], we use the inequality in Theorem 4.3 to construct valid \(C\) for the inner for-loops. With \(C\) and \(D\), one can construct \(q\) and \(r\), and perform the required checks. Following the approach of [18], we use the numerator of Theorem 4.1 and construct all possible divisors. These divisors are efficiently obtained via the use of some variant of the sieve of Eratosthenes. With \(D\) and \(\Delta\), one can construct \(C\) and \(r\), and perform the required checks. Before a more thorough explanation, we deal with the smallest possible preproduct \(P=1\). This situation is unique to these numbers and cannot arise with Carmichael numbers.
### \(P=1\)
A complete tabulation must account for the case that \(n=p_{1}p_{2}\). In [21], it is proved that this only happens when \(p_{1}=p_{2}-2\), \((d|p_{1})=-1\), and \((d|p_{2})=1\). Therefore, it suffices to tabulate twin primes in fixed residue classes. For example, with \(d=5\) we need the primes that are \(17,19\pmod{30}\). A straightforward implementation of the sieve of Eratosthenes finds these in \(O(B^{1/2}\log\log B)\) arithmetic operations. There are other sieving methods that can improve the time by up to a factor of \((\log\log B)^{3}\) [19]. Whether a faster variant is used or not, this component of the computation contributes only to a lower-order term in the overall asymptotic cost of tabulation. Henceforth, we assume that there are always \(k>2\) prime factors in our construction.
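For \(d=5\) this case is easy to make concrete: for such a twin pair \(n-\delta_{n}=(p+1)^{2}\), which is divisible by both \(p-\delta_{p}=p+1\) and \((p+2)-\delta_{p+2}=p+1\), so every qualifying pair yields an absolute Lucas pseudoprime. The following plain-sieve sketch (illustrative Python, not the authors' implementation) lists all such \(n=p(p+2)\) up to a bound.

```python
def twin_prime_pseudoprimes(bound):
    """All absolute Lucas pseudoprimes n = p(p+2) <= bound for d = 5, i.e.
    twin primes with p = 17 (mod 30); a sieve to sqrt(bound) suffices since
    p < sqrt(n) <= sqrt(bound)."""
    limit = int(bound ** 0.5) + 2
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [p * (p + 2) for p in range(17, limit - 1, 30)
            if sieve[p] and sieve[p + 2] and p * (p + 2) <= bound]

# The first two are 323 = 17*19 and 11663 = 107*109, matching the list above.
assert twin_prime_pseudoprimes(20000)[:2] == [323, 11663]
```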
### \(Cd\) method
The first approach follows Pinch's method of constructing \(CD\) pairs. To do so, a doubly nested for-loop creates \(D\) satisfying \(1\leq D<P\). The inequality found in Theorem 4.3 sets the bounds for \(C\) in the inner for-loop. In the inner loop, we check that the number implied by Theorem 4 is an absolute Lucas pseudoprime. That is, we first check that \(q\) and \(r\) are integral; second, that the divisibility statements in Theorem 2 hold for all primes; and lastly, we check that both \(q\) and \(r\) are prime. The ordering of those checks is not required from the point of view of the asymptotic cost of the algorithm, but was chosen to delay the most expensive checks until last.
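A direct, deliberately naive rendering of this loop is sketched below in Python; it reuses `jacobi()` and `is_absolute_lucas_pseudoprime()` from the earlier sketch together with `is_prime_small()` (which is only a proof of primality in the 32-bit range), and is meant only to make the structure concrete. The actual tabulation is done in C++ with the optimizations described in Section 7.

```python
def cd_method(P, p_largest, d=5):
    """Naive sketch of the CD method: P is an odd admissible preproduct and
    p_largest its largest prime factor."""
    found = set()
    delta_P = jacobi(d, P)
    lo = ((p_largest - 1) * P * P - 2 * P) // (p_largest + 1)
    hi = ((p_largest + 3) * P * P + 2 * P) // (p_largest + 1)
    for D in range(1, P):
        # C ranges so that CD stays inside the interval of Theorem 4.3 and C > P.
        for C in range(max(P + 1, lo // D), hi // D + 2):
            Delta = C * D - P * P
            if Delta == 0:
                continue
            for dq in (1, -1):              # guesses for the Jacobi symbols of q and r
                for dr in (1, -1):
                    num_q = (P - delta_P) * (dq * P + dr * D)
                    num_r = (P - delta_P) * (dr * P + dq * C)
                    if num_q % Delta or num_r % Delta:
                        continue            # q or r would not be integral
                    q = dq + num_q // Delta
                    r = dr + num_r // Delta
                    if q <= p_largest or r <= q:
                        continue
                    if not (is_prime_small(q) and is_prime_small(r)):
                        continue
                    if jacobi(d, q) != dq or jacobi(d, r) != dr:
                        continue            # guessed symbols must be consistent
                    n = P * q * r
                    if is_absolute_lucas_pseudoprime(n, d):
                        found.add(n)
    return sorted(found)

# Smallest example with a prime preproduct: P = 7 recovers 6601 = 7 * 23 * 41.
assert 6601 in cd_method(7, 7)
```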
**Theorem 5**.: _The number of \(CD\) pairs used to tabulate all absolute Lucas pseudoprimes of the form \(Pqr\) is \(\Theta(P_{k-3}P\log P)\subset O(P^{2-\frac{1}{k-2}}\log P)\)._
Proof.: We start with the inequality found in the proof of Theorem 4 that bounds the length of the interval around \(P^{2}\):
\[|CD-P^{2}|<\frac{2P(P+1)}{p+1}<2P_{k-3}(P+1).\]
So, the interval length is bounded by \(4P_{k-3}(P+1)\). Now, the total number of \(C\) values created for each \(D\) is given by
\[\sum_{D=1}^{P-1}\left\lfloor\frac{4P_{k-3}(P+1)}{D}\right\rfloor=\Theta(P_{k-3 }P\log P).\]
Since \(P_{k-3}\) may be bounded by \(P^{1-\frac{1}{k-2}}\) (see Theorem 3.1), this gives a bound of \(O(P^{2-\frac{1}{k-2}}\log P)\).
Due to the absolute value on the inequality above, double the work is required. For each \(CD\) pair, two cases are considered. This implies that this should be about four times slower than the \(CD\) method for the Carmichael case. Since this constant is ignored in the asymptotic analysis, the result is the same as Theorem 4 from [18].
### \(D\Delta\) method
The second method is to construct the divisors of \((P-\delta_{P})(\delta_{q}P+\delta_{r}D)\). Because of the Jacobi symbol \(\delta_{r}\), the magnitude of \((\delta_{q}P+\delta_{r}D)\) can be any integer in \([1,2P-1]\) (except \(P\)). The symbol \(\delta_{q}\) allows these divisors to be positive or negative. So there are a total of \(4\) different cases to consider. Our implementation considers divisors of numbers in the interval \([1,2P-1]\), and for each integer we construct a set of positive divisors and a set of negative divisors (they are the same set; the algorithm just treats the two cases differently). Thus, we implicitly account for all \(4\) possible choices of Jacobi symbols. For each of the four separate cases, we constructed \(C\) by first checking that it was integral. Next, we created \(q\) and \(r\) using the appropriate symbols. The Korselt-like criterion and the primality of \(r\) and \(q\) were then verified.
**Theorem 6**.: _The number of \(D\Delta\) pairs used to tabulate all absolute Lucas pseudoprimes of the form \(Pqr\) is \(O(\tau(P-\delta_{P})\left(P\log P\right))\)._
Proof.: For every \(P\), we run through \(D\) on the interval \([1,P-1]\). Then count the number of divisors of \((P-\delta_{P})(\delta_{q}P+\delta_{r}D)\).
\[\sum_{D<P-1}\tau\left((P-\delta_{P})(\delta_{q}P+\delta_{r}D)\right)\] \[< \tau(P-\delta_{P})\left(\sum_{D<P-1}\tau(\delta_{q}P+\delta_{r}D )\right)\] \[< 2\tau(P-\delta_{P})\left(\sum_{n<2P}\tau(n)\right)\] \[= 2\tau(P-\delta_{P})\left(2P\log 2P+(2\gamma-1)2P+O(\sqrt{2P})\right)\] \[= 4\tau(P-\delta_{P})\left(P\log P+(2\gamma+\log 2-1)P+O(\sqrt{2P})\right)\] \[= O(\tau(P-\delta_{P})\left(P\log P\right))\]
The second inequality follows from the fact that the quantity \((\delta_{q}P+\delta_{r}D)\) can be either positive or negative and ranges in values from \(1\) to \(2P-1\). The former accounts for the \(2\) that appears and the latter accounts for the change in the summation.
As with the \(CD\) method, this is the same asymptotic result as Theorem 5 from [18] but with an implied constant that is \(4\) times larger.
**Example 1**.: Let \(P=11\cdot 13\cdot 17\cdot 19=46189\), then there are eight absolute Lucas pseudoprimes for \(d=5\) of the form \(Pqr\).
1. \(P\cdot 57349\cdot 331111621=877079242172199781\)
2. \(P\cdot 709\cdot 4093501=134053974841501\)
3. \(P\cdot 1009\cdot 378901=17658567813601\)
4. \(P\cdot 230941\cdot 29144629=310883829596647021\)
5. \(P\cdot 2161\cdot 231589=23115923797681\)
6. \(P\cdot 23\cdot 83=88174801\)
7. \(P\cdot 161659\cdot 577351=4311003447437401\)
8. \(P\cdot 1459\cdot 2251=983368161419501\)
The divisor method requires checking about \(7.8\) million \(D\Delta\) pairs. However, the \(CD\) method requires the construction of about \(4.83\) billion \(CD\) pairs. By prioritizing all other checks first, the \(D\Delta\) method used only \(68\) primality checks (and \(16\) were required to get the above output).
## 6 Algorithms for large preproducts
### Distinguishing "large" from "small"
So far, the only approach to find \(n<B\) has been to construct a preproduct \(P=P_{k-2}\) and use Theorem 4 to find the remaining two primes in time that is essentially linear in \(P\). This approach has the benefit that it is not dependent on \(k\) or \(\lambda_{d}(P)\). However, as \(P\) grows in size (with respect to \(B\)) it is more and more likely to create absolute Lucas pseudoprimes outside the tabulation bound. We may discard these but there is no obvious way to improve the asymptotic cost and only generate the \(q=p_{k-1}\) and \(r=p_{k}\) of the correct sizes. At some point it will be more efficient to exhaustively generate the candidate \(q\) values. We have two bounds on \(q\): \(Pq^{2}<B\) implies \(q<(B/P)^{1/2}\) and \(q<2(P+1)^{2}\). Assuming that \(q\) is generated with the use of a sieve or by a look-up table of precomputed primes, the cost will be roughly linear in the length of the interval (differing by \(\log B\) factors depending on the method used). These two bounds equalize around \(P=B^{1/3}\). For the "large" case, we will assume that \(P>X>B^{1/3}\) where \(X\) is some chosen cross-over point. We will construct \(q\) by exhaustive search for primes in the interval \((p_{k-2},\sqrt{B/P})\subset(p_{k-2},\sqrt{B/X})\subset[1,B^{1/3})\). With \(q\), we now know \(P_{k-1}=Pq\) and \(\lambda_{d}(P_{k-1})\). With this information we can analyze the cost of finding \(r=p_{k}\). The difficulty with getting an asymptotic estimate of the total cost of the tabulation of the "large case" is that not much is known about the asymptotic behavior of \(\lambda_{d}(P_{k-1})\). For example, if \(\lambda_{d}(P_{k-1})\) were within a fixed constant multiple \(\ell\) of \(P_{k-1}\), then there would only be \(2\ell\) candidate values of \(p_{k}\) to check. However, there is no reason to believe that this could happen. Since \(\lambda_{1}(n)\) can be very small with respect to \(n\), it would be reasonable to believe that \(\lambda_{d}(n)\) has the same property.
### Finding \(p_{k}\) given \(P_{k-1}\)
There are many approaches for finding \(p_{k}\) given \(P_{k-1}\). We describe what we did and discuss some valid options that were not implemented.
Using Theorem 3.2, we know
\[p_{k}\equiv\delta_{P_{k-1}}\delta_{p_{k}}P_{k-1}^{-1}\pmod{\lambda_{d}(P_{k- 1})}\]
which means that there are two residue classes \(r_{1},r_{2}\) modulo \(\lambda_{d}(P_{k-1})\) to consider. The number of candidates to be considered in this arithmetic progression is
\[\min\left\{\left\lceil\frac{P_{k-1}-\delta_{P_{k-1}}}{\lambda_{d}(P_{k-1})} \right\rceil,\left\lceil\frac{B}{P_{k-1}\lambda_{d}(P_{k-1})}\right\rceil \right\}.\]
The first term comes from the fact that \((p_{k}-\delta_{p_{k}})|(P_{k-1}-\delta_{P_{k-1}})\) trivially implies \(p_{k}-\delta_{p_{k}}<P_{k-1}-\delta_{P_{k-1}}\). The second term comes from the fact that \(P_{k-1}p_{k}<B\) and we compute the greatest multiple of \(\lambda_{d}(P_{k-1})\) for which the inequality holds. This is all we implemented.
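A literal rendering of this procedure might look like the following (illustrative Python only; the actual implementation is in C++ and organised differently, as described below). It reuses `jacobi()`, `is_prime_small()` and `is_absolute_lucas_pseudoprime()` from the earlier sketches and assumes \(\gcd(P_{k-1},\lambda_{d}(P_{k-1}))=1\), as holds for admissible preproducts.

```python
from math import lcm, prod

def complete_large_preproduct(primes, B, d=5):
    """Given the k-1 primes of a preproduct P_{k-1}, generate candidates for
    the last prime p_k in the two residue classes modulo lambda_d(P_{k-1})
    and keep the n = P_{k-1} * p_k <= B satisfying Williams' criterion."""
    P = prod(primes)
    delta_P = jacobi(d, P)
    lam = lcm(*[p - jacobi(d, p) for p in primes])      # Definition 3
    inv = pow(P, -1, lam)                               # P_{k-1}^{-1} mod lambda_d
    results = []
    for delta_pk in (1, -1):
        pk = (delta_P * delta_pk * inv) % lam
        if pk == 0:
            pk = lam
        while P * pk <= B and pk - delta_pk <= P - delta_P:
            if (pk > max(primes) and is_prime_small(pk)
                    and jacobi(d, pk) == delta_pk
                    and (P - delta_P) % (pk - delta_pk) == 0
                    and is_absolute_lucas_pseudoprime(P * pk, d)):
                results.append(P * pk)
            pk += lam                                   # step through the progression
    return sorted(results)

# With primes [7, 23] (so lambda_d = lcm(8, 24) = 24) this recovers 6601.
assert complete_large_preproduct([7, 23], 10**4) == [6601]
```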
The above approach is primarily based on the fact that creating these candidates in arithmetic progression is "fast" and memory efficient. However, it is unlikely to be an asymptotically optimal choice. This is because the worst-case arises when \(\lambda_{d}(P_{k-1})\) is really small. In which case, one should probably view the problem as integer factorization rather than one of sieving in an arithmetic progression. That is, the real goal is to find factors of \(P_{k-1}-\delta_{P_{k-1}}\). On this view, the congruence
\[p_{k}\equiv\delta_{P_{k-1}}\delta_{p_{k}}P_{k-1}^{-1}\pmod{\lambda_{d}(P_{k- 1})}\]
can happen to make the factoring problem easier. This happens whenever \(\lambda_{d}(P_{k-1})\) is large enough (see results on _divisors in residue classes_ and section 4.2.3 of [3]). When \(\lambda_{d}(P_{k-1})\) is particularly small, then testing candidates in arithmetic progression could be worse than trial division because there would be \(O(P_{k-1}/\lambda_{d}(P_{k-1}))=O(P_{k-1})\) candidates to check. Trial division would only check \(O(\sqrt{P_{k-1}})\) candidates and this is among the slowest of factoring algorithms. Any asymptotically faster integer factorization algorithm will find candidates for \(p_{k}\) in an asymptotically superior way.
## 7 Implementation, Statistics, and Questions
In section 5.3, we required divisors of integers in the interval \([1,2P-1]\). Since the goal was to invoke these tabulation methods for all admissible \(P<X\), one possible option was to have a precomputed factor-table of all integers less than \(2X\). This single table could be used to check the admissibility of \(P\) and find the factors of \(P-\delta_{P}\) and \(\delta_{q}P-\delta_{r}D\) for any \(P<X\). However, this table would be very space intensive. Instead, we opted for two incremental sieves, which use only \(O(\sqrt{P})\) space. If \(X\) is chosen as suggested in Section 6.1, this is \(O(B^{1/6})\) space. One sieve was used to find admissible \(P\) and it always stored the factors of \(P-1\) and \(P+1\) so that the factors of \(P-\delta_{P}\) would be accessible. For any admissible \(P\), another incremental sieve was instantiated to factor integers in \([1,2P-1]\) for the \(\delta_{q}P-\delta_{r}D\) term. We used MPI to have this run in parallel, and striped the work by counting admissible \(P\).
For \(d=5\), we chose \(X=6\cdot 10^{6}\). For every \(P<X\), we used a hybrid approach combining both the \(CD\) method and the \(D\Delta\) method; that is, for a given \(D\) we choose the inner loop that would create fewer candidate values. The program computed all possible \(n=Pqr\) and we used post-processing to eliminate \(n>2^{64}\). Our choice of \(X\) means that there are no cases for \(k=3\) that need to be treated as large. We wrote 8 distinct programs for the large case (one for each \(3<k<12\)). There are two obvious ways to implement these programs. The first is to have \(k\) incremental sieves. Each sieve is instantiated to find primes starting at the prior incremental sieve's prime and going as large as allowed by the inequality in
Theorem 3.1. While this is very space efficient, it seemed like there would be a lot of overhead. Instead, we used a precomputed list of primes in the interval \([1,\sqrt{B/X})\). If \(X>B^{1/3}\), this requires \(O(B^{1/3})\) storage. For each \(k>3\), we keep track of \(k-1\) pointers in the array. At each level, we make sure that the implied product is bounds admissible. And at the \(k-2\) level, we also ensure that the product exceeds \(X\).
### Timing information for "small" preproducts
We implemented the \(D\Delta\) method, the \(CD\) method, and a hybrid approach. All three programs were run on a single thread of a Xeon Phi 7210 1.30 GHz coprocessor. The timing considered two different cases to highlight the strengths of each approach. The first case was \(P<X\) with \(P\) admissible. The second case limited \(P\) to being prime. As expected, the \(CD\) method is superior on prime inputs. For the admissible preproducts, the timing information seems to confirm that the \(CD\) method does not scale as well, as would be expected from Theorem 5.
The hybrid approach appears not to offer an advantage on the prime preproducts. The overhead of updating the sieve and divisor list is more expensive than the \(CD\) method when \(D\) is large (relative to \(P\)). If this overhead could be avoided3, we believe that the hybrid method would be better (see section 3.3 of [18]). But, due to our specific implementation, it was not possible to dynamically turn off the incremental sieve. A novel variant of the incremental sieve that increments in a backwards direction would be required. With these two sieves running simultaneously (one for the interval \([P+1,2P-1]\) that runs forward and one for \([1,P-1]\) that runs backwards), it would have been possible to detect when the divisor counts were consistently larger than the number of \(C\) values. At that point, we could turn off the incremental sieve and finish the computation with only the \(CD\) method.
Footnote 3: If a global look-up table had been employed, then the overhead is the look-up. We need the ability to dynamically turn off the incremental sieve.
The tables below show the timing data (in seconds) for the \(D\Delta\), \(CD\), and hybrid methods for all pre-products up to varying bounds.
\begin{tabular}{c|c|c|c} Admissible pre-product bound & \(D\Delta\) & \(CD\) & Hybrid \\ \hline \(1\cdot 10^{3}\) & 2 & 5 & 1 \\ \(2\cdot 10^{3}\) & 7 & 32 & 6 \\ \(3\cdot 10^{3}\) & 17 & 96 & 14 \\ \(4\cdot 10^{3}\) & 33 & 213 & 25 \\ \(5\cdot 10^{3}\) & 52 & 399 & 40 \\ \(6\cdot 10^{3}\) & 77 & 653 & 59 \\ \(7\cdot 10^{3}\) & 107 & 981 & 82 \\ \(8\cdot 10^{3}\) & 142 & 1433 & 109 \\ \(9\cdot 10^{3}\) & 183 & 1968 & 141 \\ \(10\cdot 10^{3}\) & 231 & 2646 & 176 \\ \end{tabular}
We also implemented the small case to only run on bound admissible pre-products. For example, let \(B=10^{15}\). Then, it took 196 seconds to find the completions for preproducts in \([10^{5}-10^{3},10^{5}]\). And it only took 150 seconds for the interval \([10^{5},10^{5}+10^{3}]\). Even though the inputs on the first interval are smaller than the inputs on the second interval, the number of admissible preproducts decreases because primes are no longer bounds admissible.
### Timing information for "large" preproducts
In the next two tables, we see some timing data on the same machine for finding absolute Lucas pseudoprimes as described in Section 6.2. The first table shows the timings for finding all such numbers with a fixed number of prime factors and the second table shows the timings when a cross-over of \(X=B^{.35}\) is chosen. As expected, the timing impact of having a cross-over is seen more clearly in the smaller \(k\) values than the larger \(k\) values. As \(k\) gets larger, it becomes very rare that a product of \(k-2\) primes will be less than \(X\). Having the cross-over, as noted above, has an impact on the memory requirements if the computation assumes the existence of a look-up table. We did not measure the impact that storage might have for these relatively small bounds (in comparison to \(2^{64}\)). One would probably need to abandon a look-up table approach if a cross-over was not used.
\begin{table}
\begin{tabular}{c|c|c|c|c|} Bound & \(k=4\) & \(k=5\) & \(k=6\) & \(k=7\) \\ \hline \(10^{10}\) & 0.2 & 40.2 & 0.01 & - \\ \(10^{11}\) & 1.1 & 0.8 & 0.2 & 0.01 \\ \(10^{12}\) & 5.4 & 4.9 & 1.4 & 0.2 \\ \(10^{13}\) & 27.7 & 30.5 & 12.3 & 2.3 \\ \(10^{14}\) & 140.6 & 200.2 & 84.8 & 20 \\ \(10^{15}\) & 664.1 & 1122.1 & 611.7 & 185 \\ \end{tabular}
\end{table}
Table 1: Timing without a Crossover
\begin{table}
\begin{tabular}{c|c|c|c|c|} Bound & \(k=4\) & \(k=5\) & \(k=6\) & \(k=7\) \\ \hline \(10^{10}\) & 0.1 & 0.1 & 0.02 & - \\ \(10^{11}\) & 0.6 & 0.7 & 0.2 & 0.01 \\ \(10^{12}\) & 3.2 & 4.7 & 1.4 & 0.2 \\ \(10^{13}\) & 17.2 & 29.5 & 11.7 & 1.9 \\ \(10^{14}\) & 92.5 & 173.3 & 91.6 & 20 \\ \(10^{15}\) & 496.7 & 964.6 & 681.8 & 184.6 \\ \end{tabular}
\end{table}
Table 2: Timing with a cross-over chosen as \(X=B^{.35}\)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(B\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & Total & \(\alpha\) \\ \hline \(10^{3}\) & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \hline \(10^{4}\) & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 0.1192 \\ \hline \(10^{5}\) & 1 & 7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 & 0.1806 \\ \hline \(10^{6}\) & 9 & 22 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 34 & 0.2552 \\ \hline \(10^{7}\) & 24 & 50 & 24 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0.2857 \\ \hline \(10^{8}\) & 64 & 102 & 89 & 18 & 1 & 0 & 0 & 0 & 0 & 0 & 274 & 0.3047 \\ \hline \(10^{9}\) & 159 & 189 & 249 & 106 & 7 & 0 & 0 & 0 & 0 & 0 & 710 & 0.3168 \\ \hline \(10^{10}\) & 414 & 356 & 512 & 358 & 71 & 0 & 0 & 0 & 0 & 0 & 1711 & 0.3233 \\ \hline \(10^{11}\) & 1053 & 633 & 1008 & 1040 & 316 & 17 & 0 & 0 & 0 & 0 & 4067 & 0.3281 \\ \hline \(10^{12}\) & 2734 & 1110 & 1857 & 2703 & 1268 & 180 & 3 & 0 & 0 & 0 & 9855 & 0.3328 \\ \hline \(10^{13}\) & 7301 & 2038 & 3344 & 6226 & 4174 & 966 & 59 & 0 & 0 & 0 & 0 & 24108 & 0.3371 \\ \hline \(10^{14}\) & 19674 & 3737 & 5649 & 13287 & 12078 & 4288 & 490 & 6 & 0 & 0 & 0 & 59209 & 0.3409 \\ \hline \(10^{15}\) & 53561 & 6754 & 9462 & 26821 & 31472 & 15721 & 2844 & 138 & 1 & 0 & 0 & 146774 & 0.3444 \\ \hline \(10^{16}\) & 146953 & 12215 & 15639 & 51121 & 76397 & 50690 & 13280 & 1201 & 22 & 0 & 0 & 367518 & 0.3478 \\ \hline \(10^{17}\) & 407779 & 22004 & 25186 & 94748 & 173721 & 148482 & 53529 & 7338 & 287 & 0 & 0 & 933074 & 0.3512 \\ \hline \(10^{18}\) & 1142128 & 39974 & 0 & 0 & 0 & 0 & 191645 & 37528 & 2501 & 37 & 0 & 1142128 & \\ \hline \(10^{19}\) & 3220913 & 73298 & 0 & 0 & 0 & 0 & 621182 & 165609 & 17013 & 526 & 5 & 3220913 & \\ \hline \(2^{64}\) & 4247414 & 86228 & 0 & 0 & 0 & 0 & 839627 & 240259 & 27438 & 1004 & 10 & 4247414 & \\ \hline \end{tabular}
\end{table}
Table 3: Values of \(C(B)\) and \(C(k,B)\)
### Comparison to Carmichael numbers
In a follow-up report on tabulating Carmichael numbers to \(10^{21}\), Richard Pinch provided information comparable to our Table 3 in his Table 2 [16]. After \(10^{9}\), the count of our numbers exceeds the count of Carmichael numbers. Letting \(\alpha\) be as in the table above, the least order of magnitude for which \(\alpha>1/3\) is \(10^{13}\), but for Carmichael numbers it is \(10^{15}\). It might be reasonable to believe that there are more of these numbers than Carmichael numbers because the presence of products of twin primes plays a significant role in this count. However, if one ignores this column, the counts in this tabulation are always less than the Carmichael number counterparts. We are not entirely sure why this is, but one factor is that primes dividing \(d\) are not admissible. Since the actual asymptotic behavior of Carmichael numbers is still subject to many open questions (e.g. [11]), we believe that the asymptotic counts of these numbers would be subject to the same problems.
**Acknowledgements.** The first author thanks Butler Summer Institute and the second author thanks the Holcomb Awards Committee for financial support of the project. We both thank Anthony Gurovski for his initial contributions which included a tabulation up to \(10^{17}\) for \(d=5\).
|
2309.07207 | EarthPT: a time series foundation model for Earth Observation | We introduce EarthPT -- an Earth Observation (EO) pretrained transformer.
EarthPT is a 700 million parameter decoding transformer foundation model
trained in an autoregressive self-supervised manner and developed specifically
with EO use-cases in mind. We demonstrate that EarthPT is an effective
forecaster that can accurately predict future pixel-level surface reflectances
across the 400-2300 nm range well into the future. For example, forecasts of
the evolution of the Normalised Difference Vegetation Index (NDVI) have a
typical error of approximately 0.05 (over a natural range of -1 -> 1) at the
pixel level over a five month test set horizon, out-performing simple
phase-folded models based on historical averaging. We also demonstrate that
embeddings learnt by EarthPT hold semantically meaningful information and could
be exploited for downstream tasks such as highly granular, dynamic land use
classification. Excitingly, we note that the abundance of EO data provides us
with -- in theory -- quadrillions of training tokens. Therefore, if we assume
that EarthPT follows neural scaling laws akin to those derived for Large
Language Models (LLMs), there is currently no data-imposed limit to scaling
EarthPT and other similar `Large Observation Models.' | Michael J. Smith, Luke Fleming, James E. Geach | 2023-09-13T18:00:00Z | http://arxiv.org/abs/2309.07207v2 | # EarthPT: a foundation model for Earth Observation
###### Abstract
We introduce EarthPT - an Earth Observation (EO) pretrained transformer. EarthPT is a 700 million parameter decoding transformer foundation model trained in an autoregressive self-supervised manner and developed specifically with EO use-cases in mind. We demonstrate that EarthPT is an effective forecaster that can accurately predict future pixel-level surface reflectances across the 400-2300 nm range well into the future. For example, forecasts of the evolution of the Normalised Difference Vegetation Index (NDVI) have a typical error of approximately 0.05 (over a natural range of \(-1\to 1\)) at the pixel level over a five month test set horizon, out-performing simple phase-folded models based on historical averaging. We also demonstrate that embeddings learnt by EarthPT hold semantically meaningful information and could be exploited for downstream tasks such as highly granular, dynamic land use classification. Excitingly, we note that the abundance of EO data provides us with - in theory - quadrillions of training tokens. Therefore, if we assume that EarthPT follows neural scaling laws akin to those derived for Large Language Models (LLMs), there is currently no data-imposed limit to scaling EarthPT and other similar "Large Observation Models."
## 1 Introduction
Deep learning's current 'hot topics' are foundation models in the vein of EleutherAI's GPT-NeoX, OpenAI's GPT-\(N\) models, DeepMind's Chinchilla, and the RWKV Foundation's eponymous model [1; 2; 3; 4; 5]. These remarkably simple models contain a few standard deep learning building blocks and are trained by repeatedly predicting the next item in a sequence. Surprisingly, these models' performances scale with dataset and model size via a simple power law [6; 7]. Even more astoundingly, at a certain scale of data and compute, these models display 'emergent abilities' such as apparent knowledge of arithmetic, law, geography, and history [e.g. 8]. In March 2022 a team at Google DeepMind discovered that - optimally - the size of these foundation models should be scaled in roughly equal proportion to the size of the dataset used to train them [4]. Smith and Geach [9] demonstrated that this implies that the current constraint on state-of-the-art textual foundation model performance is dataset size, and not model size as previously thought. Although we are running out of useful high quality textual data to train foundation models, there remains an untapped abundance of high quality data in other domains [10; 11]. Smith and Geach [9] argue that astronomy is one such domain, and we argue here that remote sensing data sets, and in particular Earth Observation (EO) spatial and temporal data, can also be used as an additional non-textual data mode to aid in the training of ever larger, more generalist, and more performant foundation models.
Here we demonstrate that EO imaging data can be used to train a sizable transformer model in the spirit of large language modelling. To this end we train a Chinchilla-optimal 700M parameter decoding transformer model on 14B tokens of EO data in the form of multispectral time series for just over one hundred million individual pixels. The time series are analogous to word and sentence sequences in textual models, but in this case represent surface-level (solar) reflectance values
measured in a number of passbands across the 400-2300 nm spectral range - i.e. the wavelengths corresponding to traditional 'optical' EO imagery.
Single pixel time series are commonly used in remote sensing to train transformer and self-attention based networks on supervised tasks (e.g. 12; 13). However, currently few works apply these models in a self-supervised manner. Those that do are typically limited to very short - or even single step - time series inputs (e.g. 14; 15). The closest approach in the literature to EarthPT is perhaps Tseng et al. (16). They show that an encoding transformer model (i.e. 17) is capable of learning semantically meaningful embeddings from remote sensing time series. Their model is trained on a relatively small dataset comprised of 21.5M tokens arranged into chunks of shape [time,channel] \(\equiv\)[12; 19]. Tseng et al. (16) note their model's capability despite its small size. Our work has a diametrical and complementary purpose; we aim to demonstrate that a transformer model trained on EO data is capable of scaling to the extent that we have seen in the natural language domain, with similar potential for wide utilisation and impact. In particular, we demonstrate that EarthPT can accurately forecast reflectance values well into the future, thus providing a method to predict - and therefore an opportunity to mitigate - future events associated with environmental threats such as drought.
## 2 Methods
This section describes the datasets that we use to train EarthPT and the hyperparameters and training routine of our chosen decoding transformer architecture.
Training imagery.ClearSky is a proprietary deep learning algorithm that accurately predicts the equivalent of European Space Agency Sentinel-2 imagery products across 10 spectral bands: Blue, Green, Red, Red Edge 1-4, NIR, and SWIR 1 and 2. The input data for ClearSky are Sentinel-1 C-band Synthetic Aperture Radar (SAR) imagery at 10 m/pixel resolution (18). SAR imagery is impervious to cloud cover, and the ClearSky algorithm allows us to construct full multispectral imagery time series of Sentinel-2 equivalent reflectances uninterrupted by cloud. In this work we generate ClearSky inferred imagery for an area of interest in the UK defined by a \(100\times 100\) km region corresponding to the TL square of the British National Grid (BNG) reference system. We define the training and validation set time series to range from January 2015 to December 2022, and the test set time series to range from January 2023 to May 2023. The time series are sampled at the same cadence as the observing pattern of Sentinel-1, which for this location is five days on average.
Preprocessing.We recompose the observation arrays into a set of float16 NumPy (19) arrays of shape [index,time,channel], where index corresponds to the flattened spatial index of a \(10\times 10\) km\({}^{2}\) BNG tile, time corresponds to the date of observation, and channel corresponds to the individual spectral bands and the date embedding bands of the current and next observation. The date embedding is calculated via the equation \(\hat{t}=\left(\sin\left(2\pi t/365\right),\cos\left(2\pi t/365\right)\right),\) where \(t\) is the date of the observation in days since 1st January of the year of observation. The spectral band reflectances (originally on a 0-10,000 scale) are normalised as \(\hat{v}=v/500-1,\) which keeps them approximately in the range \([-1,1]\). We treat each temporal observation as a separate 'token', and therefore the TL training set (a subset of the full UK data set) comprises approximately 100B tokens. Once constructed, we can efficiently access these data structures at train time via memory-mapping.
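A minimal NumPy sketch of this preprocessing step is given below; the date embedding and reflectance normalisation follow the formulas in the text, while the array shapes and random data are purely illustrative.

```python
import numpy as np

def date_embedding(day_of_year):
    """Cyclic embedding of the observation date (days since 1 January), as in the text."""
    angle = 2.0 * np.pi * np.asarray(day_of_year) / 365.0
    return np.stack([np.sin(angle), np.cos(angle)], axis=-1)

def normalise_reflectance(v):
    """Rescale reflectances from the 0-10,000 scale, as in the text."""
    return np.asarray(v, dtype=np.float32) / 500.0 - 1.0

# Illustrative assembly of a float16 array of shape [index, time, channel]:
# 10 spectral bands plus date embeddings for the current and next observation.
n_pix, n_t = 4, 8
reflectance = np.random.randint(0, 2_000, size=(n_pix, n_t, 10))
days = np.linspace(0.0, 364.0, n_t)
current = np.broadcast_to(date_embedding(days), (n_pix, n_t, 2))
next_obs = np.broadcast_to(date_embedding(np.roll(days, -1)), (n_pix, n_t, 2))
tokens = np.concatenate([normalise_reflectance(reflectance), current, next_obs],
                        axis=-1).astype(np.float16)
```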
Transformer architecture.EarthPT is based on the autoregressive transformer architecture described in Radford et al. (20), with some alterations to accommodate our non-textual dataset. In place of the usual word embedding routine we use a multilayer perceptron to embed the input data so that it has the same dimensionality as the time embedding vector. To provide the model with a knowledge of the time of observation, we feed the network an additional pair of float embeddings corresponding to the date of the current and next observation. We train EarthPT in the usual autoregressive way, by repeatedly predicting the next observation in a given set of time series. We train using the Adam optimiser (21), and use the Huber loss (22). We trained a range of model sizes from 10M to 700M trainable parameters, and we present the hyperparameters for all our models in Appendix B. The remainder of this paper focuses on our largest EarthPT model, EarthPT-700M.
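The following is a heavily simplified PyTorch sketch of this set-up: an MLP embedding of continuous tokens, causal self-attention, and a next-observation Huber loss. The module choices, layer sizes and channel counts are placeholders and do not reflect the actual EarthPT configuration.

```python
import torch
import torch.nn as nn

class TinyObservationTransformer(nn.Module):
    """Decoder-only transformer over continuous tokens (toy configuration)."""
    def __init__(self, n_channels=14, n_bands=10, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Sequential(                  # MLP in place of a word embedding
            nn.Linear(n_channels, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_bands)      # predict the next observation's bands

    def forward(self, x):                            # x: [batch, time, channel]
        seq_len = x.shape[1]
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                       device=x.device), diagonal=1)
        h = self.blocks(self.embed(x), mask=causal)  # causal self-attention
        return self.head(h)

model = TinyObservationTransformer()
optimiser = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.HuberLoss()

x = torch.randn(8, 16, 14)                           # toy [batch, time, channel] data
pred = model(x)[:, :-1]                               # prediction for the next time step
target = x[:, 1:, :10]                                # next observation's spectral bands
loss = loss_fn(pred, target)
loss.backward()
optimiser.step()
optimiser.zero_grad()
```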
In lieu of a domain-specific neural scaling law we use the Chinchilla neural scaling law as a convenient rule-of-thumb to decide our dataset size. This law suggests that a compute-optimal decoding transformer model should be trained roughly following the scaling \(D\sim 20N,\) where \(N\) is the
number of parameters in the model, and \(D\) is the number of tokens in the training set [4]. This corresponds to 14B tokens for our 700M parameter model. To this end we train EarthPT-700M on 8 A100 PCIe 40GB GPUs for a cumulative total of 90,000 steps of 160,000 tokens each, i.e. 560 A100-hours of computation time.
## 3 Results
We find that our EarthPT models share similar training behaviour with traditional LLMs; further details of training runs can be found in Appendix B. In this section we describe how EarthPT-700M performs on the task of forecasting remote sensing data.
Analogously to how autoregressive language models can be used to generate text, we can use EarthPT to generate (i.e. forecast) future remote sensing data, in this case the pixel-wise surface reflectance across the optical-infrared. In Figure 1 we show EarthPT forecasts for four representative remote sensing indices: Normalised Difference Vegetation Index (NDVI), Normalised Difference Water Index (NDWI), Bare Soil Index (BSI), and Green Chlorophyll Vegetation Index (GCVI). These represent time streams of a single pixel selected from the TL tile. Forecasting starts on the 1st of January 2023 and runs to May 2023. We compare the forecast to the measured values of these indices across this interval, which the model has not 'seen'. For brevity we show a single pixel here; forecasting can be scaled across all pixels to generate predicted imagery products.
We can quantify performance by assessing the residual between the forecasted and measured value of the parameter of interest (e.g. NDVI) as a function of look-ahead time. Figure 2 shows the median L1 error for \(\sim\)\(10^{6}\) pixels in BNG tile TL63, up to five months into the future. This is compared to a prediction based on a phase-folded model which comprises an average annual time series constructed from 7 years of historical data. We find that EarthPT has a median L1 error across all time of 0.05 and the folded model has a median L1 error of 0.08, noting that NDVI has a natural range of \(-1\to 1\). We can conclude that EarthPT out-performs a phase-folded model consistently over the forecast window, delivering actionable predictions on key remote sensing indices (such as NDVI) that could be used, for example, in the prediction of drought conditions well in advance [23].
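A sketch of the phase-folded baseline used for comparison, as we read it (an annual climatology built by averaging historic years in day-of-year bins), and of the median L1 error against it; the bin width, array names and toy data are illustrative.

```python
import numpy as np

def phase_folded_forecast(history, doy_history, doy_future, n_bins=73):
    """Average historic values in day-of-year bins (73 five-day bins as an
    illustrative choice) and read the forecast off the resulting climatology."""
    history = np.asarray(history, dtype=float)
    bins_hist = (np.asarray(doy_history) * n_bins // 366).astype(int)
    climatology = np.array([history[bins_hist == b].mean() if np.any(bins_hist == b)
                            else np.nan for b in range(n_bins)])
    bins_future = (np.asarray(doy_future) * n_bins // 366).astype(int)
    return climatology[bins_future]

def median_l1(forecast, observed):
    """Median absolute error, as used to compare forecasts here."""
    return float(np.median(np.abs(np.asarray(forecast) - np.asarray(observed))))

# Toy usage: 7 years of a noisy seasonal NDVI-like signal, 150-day forecast horizon
doy_hist = np.tile(np.arange(365), 7)
ndvi_hist = 0.5 + 0.3 * np.sin(2 * np.pi * doy_hist / 365) + 0.05 * np.random.randn(doy_hist.size)
baseline = phase_folded_forecast(ndvi_hist, doy_hist, np.arange(150))
```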
Figure 1: Predictions of some common remote sensing indicators for a randomly chosen pixel within the UK National Grid TL tile. We condition EarthPT on ClearSky time series from 1st January 2015 to 1st January 2023, with outputs after this divergence date constituting a long-term forecast to be compared to the unseen observations.
Figure 2: Median L1 error and interquartile ranges of NDVI predictions for 1M pixels in the TL63 tile. EarthPT long-term forecasts out-perform a simple phase-folded model based on historical averages out to a horizon of five months.
## 4 Future Work
Foundation models are notoriously flexible, and so one can envision myriad downstream tasks. In the field of geospatial data analysis, we can consider how EarthPT could be deployed for land cover classification. To illustrate, we generate representation embeddings by extracting the outputs of the penultimate neuronal layer and obtain the embedding of a pixel's time series by simply taking the mean of all of its output embeddings (one embedding is output at each time step). Each embedding has a dimensionality of 1280, but we can visualise them by projecting onto a two-dimensional manifold. We use principal component analysis (PCA) as our projection technique [24]. Figure 3 shows a selection of emergent remote sensing indices (introduced above) for a set of embeddings of time series across 2022. By colour-coding the projected embedding space we see that it has a physically meaningful organisation, with coherent structure of, for example, the time-averaged NDVI, BSI, RGB, etc. If we were to cluster and calibrate the embedding space with semantically meaningful labels (e.g. crop type, growth stage, event) this could be used to create a dynamic and highly granular land cover classifier. Furthermore, we anticipate that fine-tuning with EarthPT-learnt embeddings will be beneficial for a range of downstream tasks [see for example 17, 25]. One could imagine training EarthPT to produce a single embedding space for all EO (and other) multi-modal data types [26]. This would be a remarkably powerful tool for interpreting remote sensing data, where we foresee diverse applications in a range of sectors, from agriculture to insurance and beyond.
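A minimal sketch of the embedding-projection step described above, with random arrays standing in for EarthPT outputs; PCA from scikit-learn is used as in the text.

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-in for EarthPT outputs: one 1280-d vector per time step per pixel.
n_pixels, n_steps, d_embed = 2000, 64, 1280
step_embeddings = np.random.randn(n_pixels, n_steps, d_embed).astype(np.float32)

# One embedding per pixel: the mean of its per-step output embeddings.
pixel_embeddings = step_embeddings.mean(axis=1)

# Two-dimensional projection for the scatter plots; colour-coding the points by
# NDVI, BSI, RGB, etc. would then reveal any structure in the embedding space.
xy = PCA(n_components=2).fit_transform(pixel_embeddings)
```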
While useful as a rule-of-thumb, the Chinchilla scaling laws may not be suitable for EO datasets, and so follow-up work will derive a specific scaling law for our ClearSky dataset. This in turn will give us a solid theoretical grounding for further scaling of EarthPT, allowing us to train a significantly larger model. For example, with our ClearSky model for the UK we have access to 4.3T (trillion) tokens that could be used to train EarthPT, and when considering larger geographic coverage we theoretically have access to over a quadrillion tokens. Compute cost aside, we could safely train a 50T parameter model on this data, assuming that our model scaling roughly follows the Chinchilla scaling law. This 50T parameter model would be around three orders of magnitude larger than the current largest optimally-trained models [4, 27]. Consequently, unlike traditional LLMs, EarthPT and other similar 'Large Observation Models' are far from their theoretical data limit [9, 28].
## 5 Conclusions
Inspired by the recent explosion of interest in LLMs, we present an Earth Observation foundation model trained on time series taken from our ClearSky generative algorithm. Our EarthPT Large Observation Model is capable of forecasting surface level optical reflectance (and therefore a wide range of common remote sensing indices) at the pixel level, months into the future. EarthPT can also produce semantically meaningful embeddings for an input time series, and we show that these capture useful information that could be exploited for land cover classification, amongst other downstream tasks. We are developing these applications and improving and extending EarthPT as part of ongoing R&D. Excitingly, the number of tokens available for training is of order \(10^{15}\), so we are not currently data constrained. If neural scaling laws hold, then improving EarthPT (and similar Large Observation Models) is a solved problem: it is a simple matter of scaling data and compute.
Figure 3: EarthPT embeddings for the two million pixel time series located on the TL63 and TL64 BNG tiles. We colour each scatter plot with a different set of emergent remote sensing index values. ‘RGB’ is the colour of a pixel in that part of the embedding space at the height of the summer of 2022. ‘Mean’ is the mean of a given index across the 2022 calendar year, and ‘std’ is the standard deviation of the index across the year. ‘NDVI peak’ is the time of the year corresponding to maximum NDVI; darker values are in the winter, and lighter values are in the summer. Note the coherent structure in the projected embedding space.
## Acknowledgements
This project is part-funded by the UK Government through the UK Shared Prosperity Fund. Cornwall Council has been chosen by Government as a Lead Authority for the fund and is responsible for monitoring the progress of projects funded through the UK Shared Prosperity Fund in Cornwall and the Isles of Scilly.
## Data and code availability
Please contact Aspia Space directly for data and model access at [email protected].
|
2309.06119 | Resource Adequacy and Capacity Procurement: Metrics and Decision Support
Analysis | Resource adequacy studies typically use standard metrics such as Loss of Load
Expectation and Expected Energy Unserved to quantify the risk of supply
shortfalls. This paper critiques present approaches to adequacy assessment and
capacity procurement in terms of their relevance to decision maker interests,
before demonstrating alternatives including risk-averse metrics and
visualisations of wider risk profile. This is illustrated with results for a
Great Britain example, in which the risk profile varies substantially with the
installed capacity of wind generation. This paper goes beyond previous
literature through its critical discussion of how current practices reflect
decision maker interests; and how decision making can be improved using a
broader range of outputs available from standard models. | Chris J. Dent, Nestor Sanchez, Aditi Shevni, Jim Q. Smith, Amy L. Wilson, Xuewen Yu | 2023-09-12T10:34:05Z | http://arxiv.org/abs/2309.06119v1 | # Resource Adequacy and Capacity Procurement: Metrics and Decision Support Analysis
###### Abstract
Resource adequacy studies typically use standard metrics such as Loss of Load Expectation and Expected Energy Unserved to quantify the risk of supply shortfalls. This paper critiques present approaches to adequacy assessment and capacity procurement in terms of their relevance to decision maker interests, before demonstrating alternatives including risk-averse metrics and visualisations of wider risk profile. This is illustrated with results for a Great Britain example, in which the risk profile varies substantially with the installed capacity of wind generation. This paper goes beyond previous literature through its critical discussion of how current practices reflect decision maker interests; and how decision making can be improved using a broader range of outputs available from standard models.
## 1 Introduction
Power system resource adequacy (RA) is the field of managing the risk that there will be insufficient supply resource to meet electricity demand. Studies vary as to the precise class of events in scope, but RA is generally taken to encompass the balance of resources without considering the fine detail of system
operation (see [1] for a survey of current issues). This remains a topic of great interest, due to the need to maintain an appropriate level of system reliability as the profile of resources evolves towards a lower carbon portfolio - and hence it is a key topic in assessing the performance of energy technologies within the system, and how technologies relevant to the net zero transition complement each other (or not) at whole power system level.
The area of RA also provides an excellent example of wider issues in project and policy appraisal, and in decision analysis, in that relevant decisions typically involve balancing capital investment costs (which are relatively concrete) against future reliability of the system (quantification of which is both more uncertain and not a cashflow that can immediately be incorporated in a monetised cost-benefit analysis).
RA studies typically use a standard set of risk metrics to quantify system reliability: Loss of Load Expectation (LOLE), the expected (in the statistical sense) duration of shortfall in the future year or season under study; Expected Energy Unserved (EEU), the expected volume of energy demand not supplied; or, if going beyond these, further summary statistics such as System Average Interruption Duration/Frequency Index (SAIDI/SAIFI) [2]. Formal decision analysis for capacity procurement tends to involve either a risk level target set with respect to one of these statistics, or a Cost-Benefit Analysis (CBA) in terms of a per-MWh Value of Lost Load (VOLL) multiplied by EEU [3, 4] - historically, less formally justified standards in terms of a deterministic measure of the margin of installed capacity over peak demand were common [1], though these are now less widespread due to increased use of probabilistic analysis and the difficulty in including new technologies such as renewables without this.
There are issues in principle with the use of such summary statistics, which can have substantial practical consequences.
* Decision makers are likely to be risk averse, and interested in variability of outcome in individual years, as well as an average over possible outcomes. Expected value indices such as LOLE and EEU by definition do not reflect this, and thus decisions based on current decision analysis approaches might not properly reflect the concerns of decision makers.
* One single-number index cannot capture the whole risk profile of a system, and if the mix of supply and demand changes there may be changes in risk profile that these indices do not capture. There is nothing fundamentally wrong with the use of summary statistics, but they must be customised to and reflect the needs of decision makers.
* Standard approaches do not specify who the decision maker is. For
instance, stakeholders (including ultimate decision makers in governments) tend to be very risk averse about electricity security of supply, due to the consequences for society and the economy if confidence in the electricity system decreases.
* The disruption costs of events are often assessed on a narrow basis of direct costs to customers, and not considering points such as wider societal and economic confidence in a robust electricity supply.
* It may not be natural to trade off investment and disconnection costs on the same (usually monetary) scale, i.e. they may not be _commensurate_[5] (the formal decision analysis term for whether quantities can naturally be compared on the same numerical scale). For instance, many customers are unlikely to be indifferent between having supply and being paid monetary compensation, though this is implicitly assumed in many analyses.
Other works have looked for instance at use of multiple metrics [1], inclusion of risk-averse metrics within optimization problems [6], and construction of probability distributions of _ex post_ outcome metrics ([7] and references therein, and the Belgian standard references within the international comparison at [8]). This paper, however, goes beyond previous literature through its critical discussion of how current practices reflect decision maker interests; and how decision making can be improved using a wider range of outputs available from standard risk model structures. For a general reference on relevant principles of decision analysis on which this work is based, including risk aversion, see [9].
This paper will first describe and critique present approaches to adequacy assessment and capacity procurement in Section 2, before presenting in Section 3 alternatives such as risk-averse metrics and broader visualisations of risk profile to support decision makers. This is illustrated with results for Great Britain, considering a range of wind generation capacities. Section 4 provides an extended discussion of issues of application and relevance to wider decision appraisal, and finally Section 5 presents summary conclusions.
## 2 Standard picture of decision analysis for capacity procurement
### Risk modelling for resource adequacy
This section provides a brief overview of the resource adequacy modelling on which results presented, based on a GB system supplied by wind and conventional generation. This is satisfactory for our purpose of demonstrating use of model outputs in supporting decision making, where the key point is how a changing penetration of wind energy changes the overall risk profile. Issues of extension to other resources such as storage and interconnection to other systems, and uncertainty management arising from limited data on extremes and complexity of systems spanning multiple countries, will be discussed in Section 4.
We denote random variables with uppercase and constants with lowercase. \(X_{t},Y_{t}\) and \(D_{t}\) denote available conventional and renewable generation, and demand, respectively at time \(t\) in the future period under study; then the surplus \(Z_{t}=X_{t}+Y_{t}-D_{t}\). This section is consistent with standard references such as [2, 10], but expressed in slightly different notation.
#### 2.1.1 Non-sequential model
Let the period for which an assessment is being made (e.g. a future peak season) be divided into \(n\) hours. LOLE, the expected number of hourly shortfalls in the period is1:
Footnote 1: For simplicity, an hourly time step is used, as per available GB data – in N America this ‘hourly’ LOLE is usually called LOLH. We will also use data from the GB peak winter (Nov-Mar) season – as daily peak demands in winter are much higher than at other times of year, and very low renewable output can occur at times of very high demand, the peak demand season dominates annual shortfall risk in GB. This observation would not apply if a large renewable capacity shifts the time of year at which the highest values of (demand - renewables) occur – however, as this paper is primarily about the choice of model _outputs_, its conclusions are also broadly relevant to such systems.
\[\text{LOLE}=\mathbb{E}\left[\sum_{t=1}^{n}\mathbb{I}(Z_{t}<0)\right]=\sum_{t=1 }^{n}\mathbb{P}(Z_{t}<0), \tag{1}\]
and the EEU is defined as:
\[\text{EEU}=\mathbb{E}\left[\sum_{t=1}^{n}\max\{0,-Z_{t}\}\right]. \tag{2}\]
This is commonly referred to as a non-sequential model, as unless there are technologies such as storage which explicitly link the system states at different times, the terms in the sum may be evaluated separately, and the times re-ordered without affecting the result.
For statistical modelling, it is often more convenient to work in a _time-collapsed_ picture with the time-collapsed variable \(Z\) representing surplus at a randomly chosen point in time. For a system that does not have storage or other technologies which link time periods, the LOLE is then specified as
\[\text{LOLE}=n\mathbb{P}(Z<0)=n\mathbb{P}(X<D^{\prime}), \tag{3}\]
where \(D^{\prime}=D-Y\) is the time-collapsed net demand; an analogous formula applies for EEU.
The most common means of estimating the distribution of \(D_{t}-Y_{t}\) is to use the empirical historic data directly in predictive risk calculations, sometimes referred to as _hindcast_ [11, 2]. The hindcast estimate of LOLE conditional on a particular historic weather year \(y\) is then
\[\text{LOLE}_{y}=\sum_{\tau\in T_{y}}\mathbb{P}(X_{\tau}<d_{\tau}-y_{\tau}) \tag{4}\]
where \(T_{y}\) is the set of times in historic year \(y\), \(d_{\tau}-y_{\tau}\) is written in lower case as it is historic data rather than a random variable, and _historic_ times are indexed by \(\tau\). LOLE conditional on no particular weather year is then usually estimated as \(\text{LOLE}=(1/n_{Y})\sum_{y}\text{LOLE}_{y}\), where \(n_{Y}\) is the number of years of data (and similarly for EEU). As discussed later, there can be substantial uncertainty in any estimate of the unconditional risk level, as the estimate of the mean will often be dominated by a small proportion of the historic years.
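As an illustration of the calculation in Eqs. (3) and (4), the sketch below builds the distribution of available conventional capacity by convolving independent two-state units (a capacity outage probability table) and evaluates hindcast LOLE and EEU against an hourly net-demand trace; the resolution, unit data and demand values are toy numbers, not those used later in the paper.

```python
import numpy as np

def capacity_outage_table(unit_caps_mw, availabilities):
    """PMF of total available conventional capacity for independent two-state
    units, built by convolution (1 MW resolution; integer capacities assumed)."""
    pmf = np.array([1.0])
    for cap, a in zip(unit_caps_mw, availabilities):
        new = np.zeros(len(pmf) + cap)
        new[:len(pmf)] += (1.0 - a) * pmf      # unit on outage
        new[cap:cap + len(pmf)] += a * pmf     # unit available: capacity shifted up by cap
        pmf = new
    return pmf                                 # pmf[c] = P(available capacity = c MW)

def hindcast_lole_eeu(pmf, net_demand_mw):
    """Hindcast LOLE (hours) and EEU (MWh) over an hourly net-demand trace."""
    caps = np.arange(len(pmf))
    lole = eeu = 0.0
    for d in net_demand_mw:
        short = d - caps
        m = short > 0
        lole += pmf[m].sum()                    # P(X < d)
        eeu += float((pmf[m] * short[m]).sum()) # E[max(0, d - X)]
    return lole, eeu

# Toy example: 60 x 500 MW units at 90% availability, a flat 26 GW net demand
pmf = capacity_outage_table([500] * 60, [0.9] * 60)
print(hindcast_lole_eeu(pmf, [26_000] * 100))
```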
#### 2.1.2 Time sequential model
For model outputs beyond the standard expected value indices, such as the distribution of energy unserved, or the distribution of Loss of Load Duration (LOLD, the random variable of which LOLE is the mean) a time sequential model would be required. LOLD is specified as
\[\text{LOLD}=\sum_{t=1}^{n}\mathbb{I}(Z_{t}<0), \tag{5}\]
and the energy unserved (EU) as
\[\text{EU}=\sum_{t=1}^{n}\max\{0,-Z_{t}\}. \tag{6}\]
A wide range of other possible outputs may also be calculated, the usual mechanism for doing so being Monte Carlo simulation.
Stochastic process models must then be estimated for \(X_{t}\) and \((D_{t},Y_{t})\). In practice, again a common way to proceed is to use a hindcast estimate for the process of demand and wind, i.e.
\[\text{LOLD}_{y}=\sum_{\tau\in T_{y}}\mathbb{I}(X_{\tau}<d_{\tau}-y_{\tau}) \tag{7}\]
and a similar expression applies for EU. The aggregate available conventional capacity is usually specified as a sum of stochastic process models for each individual unit, with the unit models commonly (as here) being two-state Markov birth-death processes.
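A minimal Monte Carlo sketch of this time-sequential picture follows: each unit's hourly availability is drawn from a two-state Markov chain (a discrete-time stand-in for the birth-death process), and LOLD and EU are accumulated per simulated year. All parameter values are toy numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_unit(n_hours, mttf, mttr):
    """Hourly availability of one unit as a two-state Markov chain
    (a discrete-time approximation to the birth-death process)."""
    p_fail, p_repair = 1.0 / mttf, 1.0 / mttr
    state = rng.random() < mttf / (mttf + mttr)   # start from the stationary distribution
    out = np.empty(n_hours, dtype=bool)
    for t in range(n_hours):
        out[t] = state
        state = (rng.random() > p_fail) if state else (rng.random() < p_repair)
    return out

def simulate_year(unit_caps, mttfs, mttrs, net_demand):
    """One Monte Carlo draw of loss-of-load duration (h) and energy unserved (MWh)."""
    n_hours = len(net_demand)
    available = np.zeros(n_hours)
    for cap, mttf, mttr in zip(unit_caps, mttfs, mttrs):
        available += cap * simulate_unit(n_hours, mttf, mttr)
    shortfall = np.maximum(0.0, np.asarray(net_demand) - available)
    return (shortfall > 0).sum(), shortfall.sum()

# Distribution of outcomes over many simulated 'years' (toy system, 100-hour trace)
samples = [simulate_year([500] * 60, [2000] * 60, [50] * 60, [26_000] * 100)
           for _ in range(200)]
lold_draws, eu_draws = np.array(samples).T
```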
### Decision analysis for capacity procurement
It is common to look at capacity procurement for a single future year, which for simplicity we will do here. This is the approach taken in the GB capacity market, where a target risk level is set based on a cost-benefit analysis (CBA) for the future year considered [12], or might represent a long run equilibrium problem [13].
The standard CBA can then be expressed as an optimization problem [3]:
\[\text{min}\qquad c(R)+[\text{VOLL}]\times[\text{EEU}](R) \tag{8}\]
over the possible sets \(R\) of capacity-providing resources [3]. Procurement cost \(c\) and EEU are both functions of \(R\). This is commonly simplified [4] to
\[\text{min}\qquad[\text{CONE}]\times r+[\text{VOLL}]\times[\text{EEU}]_{r}, \tag{9}\]
on the assumption that the additional procured capacity simply shifts the probability distribution of surplus/deficit by the mean available capacity \(r\) from the addition2. In practice, this assumption implies that the addition should be small compared to the resource already present (generally the case in capacity markets, where the volume of new capacity is usually limited); and does not contain renewable generation or storage for which independence between existing and additional units does not apply. Here Cost of New Entry (CONE) and VOLL take fixed values, and \([\text{EEU}]_{r}\) is the expected energy unserved if the volume of capacity procured is \(r\); it is straightforward to generalise this to non-constant CONE and VOLL.
Footnote 2: This is justified either through convolution of an independent addition with a distribution of deficit that is approximately exponential in shape in the relevant region [14], or through the Central Limit Theorem if a small independent addition is made to a large background of independent units.
### Consequences of choice of VOLL
At the value of \(r\) which minimises (9),
\[[\text{LOLE}]_{r}=\frac{[\text{CONE}]}{[\text{VOLL}]} \tag{10}\]
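As a concrete illustration of this relationship, the following sketch sweeps the procured capacity \(r\) for a toy deficit distribution, with purely illustrative CONE and VOLL values (not the GB figures); the LOLE at the cost-minimising \(r\) ends up near CONE/VOLL.

```python
import numpy as np

# Illustrative values only: CONE in currency/MW/year, VOLL in currency/MWh.
CONE, VOLL = 50_000.0, 6_000.0
HOURS = 8760

# Toy stand-in for the risk model: hourly deficit (demand minus available supply, MW);
# positive values are shortfalls.
rng = np.random.default_rng(1)
deficit = rng.normal(loc=-8_000.0, scale=3_000.0, size=HOURS)

def eeu(r_mw):
    """Expected energy unserved (MWh/year) after adding r MW of firm capacity."""
    return float(np.maximum(0.0, deficit - r_mw).sum())

def lole(r_mw):
    """Hours per year with a shortfall after adding r MW."""
    return float((deficit > r_mw).sum())

r_grid = np.arange(0.0, 6_000.0, 50.0)
costs = CONE * r_grid + VOLL * np.array([eeu(r) for r in r_grid])
r_star = float(r_grid[np.argmin(costs)])
print(r_star, lole(r_star), CONE / VOLL)   # LOLE at the optimum sits near CONE/VOLL
```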
There are various challenges to taking this as an 'optimal' solution for the real world, even if EEU is regarded as a sufficient summary statistic of risk profile.
* Studies usually implicitly assume that the control room can disconnect the amount of load required and no more, with perfect foresight. This is not the case in practice, which implies an increase in energy unserved.
* An average VOLL across all customers is usually used, i.e. the interests of customers who are relatively indifferent to disconnection are treated interchangeably with interests of customers who are more averse to disconnection, even if there is no discrimination as to who is disconnected involuntarily. If one accepts the idea of monetising unserved energy using VOLL, should customers' interests be averaged in this way, or should more weight placed on the interests of customers who are more inconvenienced by being disconnected?
* As in the introduction, one needs to consider whether reliability and procurement costs are commensurate, i.e. whether they can be compared on the same numerical scale.
This standard optimization picture in (9) typically recommends a level of reliability similar to the GB standard of 3 hours/year LOLE, whereas a system that unreliable would probably be deemed politically unacceptable. For instance, in GB, a system margin warning can be a major news story [15], whereas this lowest level of system warning actually means that some hours ahead of real time the operator was not certain of having their usual real time operating headroom, i.e. nowhere near a shortfall in real time - if actual real time shortfalls happened in a substantial proportion of colder winters, the reaction in public debate would be stronger still.
Using a higher (not average) VOLL, based on the second bullet above, would push the reliability standard more towards a level that would be considered acceptable in this wider sense but would not consider wider issues of societal confidence in the electricity supply. Indeed if such factors beyond individual customer damage are considered important, then for use in comparing different capacity portfolios VOLL might be chosen such that (9)
gives an acceptable level of system-level reliability, rather than being based on customer surveys. It is certainly the case that making different judgments associated with the first two bullets can change the 'optimal' level of reliability from (9) very substantially.
## 3 Beyond the conventional framework
### Background and multicriteria formulation
Most formal decision analysis frameworks for capacity procurement assume the approach described in the previous section, i.e. monetising future reliability in terms of VOLL multiplied by expected energy unserved. Clearly capital costs are naturally in terms of money, though there may be uncertainty in the monetary sum, or one may wish to use a capacity price curve (i.e. as more is procured, the unit cost of capacity increases) rather than a fixed CONE - we have already discussed whether this can naturally be compared with reliability on the same numerical scale. Moreover, expected monetary return is rarely a utility function that reflects decision makers' interests, particularly for mitigation of rare high impact events, and thus introducing a degree of risk aversion seems natural.
The simplest evolution of (9) would be to extend this framework to a multi-criteria decision question, seeking a Pareto frontier on which EEU cannot be improved without disbenefit in terms of cost, and vice versa. However, for this case mapping the Pareto frontier is in fact equivalent to a sensitivity analysis on VOLL, so results for it are not presented here.
### Data for examples
The following sections include results from an exemplar based on the Great Britain (GB) system. The standard approaches described previously are used for risk calculations. A comparison between scenarios of different installed wind capacities is made by using the same portfolio of available conventional capacity for each scenario; and for each scenario of installed wind capacity shifting the supply-demand balance to give a common EEU of 3 GWh/year. For instance, one might hypothesise that at higher penetrations of renewable generation, for a given value of standard indices such as LOLE or EEU, greater variability of supply leads to greater variability of outcome. This comparison is controlled in the sense that it looks at overall risk profile for a range of scenarios which have the same value of the headline EEU.
The scenario of installed conventional capacity is based on one originally
provided by National Grid ESO, with a small random element added to each capacity due to the commercial sensitivity of the raw data. For sequential models, availability probabilities from NGESO are supplemented with mean repair time data from the IEEE Reliability Test System [16].
GB demand data for the 12 peak (winter) seasons 2005-17 are used, with an estimate of historic available embedded renewable capacity added back on. Demand data are rescaled to a common system scenario according to historic values of the Average Cold Spell (ACS) Peak statistic, with the given scenario defined by an ACS value.
The wind generation data used are from [17], and combine historic reanalysis wind speed data with a future scenario of what is connected to the system. Hourly capacity factors (CFs) for onshore and offshore wind in the 'near term' wind fleet from [17] are used, and for a given scenario of onshore and offshore wind capacity connected to the system these hourly CFs are multiplied by the respective GW installed capacities and added to give a total hourly available capacity. The division between onshore and offshore for a given level of total installed wind capacity is based on projections in [18].
Thus our exemplar system is generally representative of the GB system and will suffice for our illustrative purpose, however for a fully applied GB study one would need to use data that are specialised to the particular decision question considered.
### Risk averse metrics
#### 3.3.1 Background
It is possible to define alternative single-number metrics which give a measure of risk aversion, for example the well known (Conditional) Value at Risk (VaR and CVaR) [19], with respect to model outputs such as the constructed distribution of energy unserved. This is defined for a random variable \(U\) as
\[[\text{CVaR}]_{\alpha}=\mathbb{E}[U|U>u^{\prime}] \tag{11}\]
where \(P(U\geq u^{\prime})=1-\alpha\) for a risk threshold parameter \(\alpha\). Thus the mean of \(U\) is the special case \([\text{CVaR}]_{0}\), and if \(U\) is the energy unserved then CVaR is a generalisation of the standard EEU index - while one can use other mappings of EU and LOLD to give risk averse utility functions, this would not have the same attractive property of generalising the expected value indices.
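An empirical estimate of VaR and CVaR from simulated draws of energy unserved might look as follows; the sample data are illustrative only.

```python
import numpy as np

def var_cvar(samples, alpha):
    """Empirical VaR and CVaR of a random variable U: u' with P(U >= u') = 1 - alpha,
    and E[U | U > u']."""
    u = np.sort(np.asarray(samples, dtype=float))
    u_prime = float(np.quantile(u, alpha))          # VaR at level alpha
    tail = u[u > u_prime]
    cvar = float(tail.mean()) if tail.size else u_prime
    return u_prime, cvar

# Illustrative draws of energy unserved (e.g. from a time-sequential simulation).
eu_samples = np.random.default_rng(2).lognormal(mean=0.5, sigma=1.0, size=10_000)
print(var_cvar(eu_samples, 0.0))    # alpha = 0 approximately recovers the mean (EEU)
print(var_cvar(eu_samples, 0.95))   # risk-averse tail measure
```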
CVaR has the further beneficial property of convexity when embedded in a wide range of optimization problems [19]. We do note however that where
it is not necessary to embed within an optimization model, there can be difficulties in communicating CVaR results outside the specialist community, particularly as, for low degrees of risk aversion, the risk threshold \(u^{\prime}\) will sit in the other tail of the distribution of \(U\) from the one of interest. Another disadvantage is that CVaR values with different \(\alpha\) are not directly comparable, despite having the same dimensions.
Calculating risk-averse indices will in general require time-sequential modelling. The exception would be to work with VaR and CVaR with respect to the snapshot LOLP or Expected Power Unserved (i.e. the probability of a shortfall, or expected shortfall, at a randomly chosen point in time [14]). This would, however, need to be interpreted carefully. For instance, VaR with respect to snapshot LOLP may have a useful interpretation in terms of the expected number of hours with surplus below a given level, but other combinations of VaR/CVaR with LOLP or EPU might not be so interpretable.
#### 3.3.2 Example
Fig. 1 shows CVaR with respect to EU as a function of \(\alpha\), for a range
Figure 1: CVaR as a function of the risk aversion parameter \(\alpha\) for a range of installed wind capacities.
of installed wind capacities. As in all examples, for a controlled experiment the scenarios of different installed wind capacity have the same EEU. As anticipated in the section on data, as the wind capacity increases, the CVaR for a given \(\alpha\) also increases. This is consistent with a hypothesis that greater variability of supply would lead to greater variability of outcome - however the effect is not very large, and the next section will explore how CVaR as a summary statistic does not reveal the most striking change in the risk profile at higher wind capacities.
### Visualisations for decision makers
Instead of attempting to define metrics in this way, one could instead provide visualisations to decision makers of the consequences of particular decisions in different scenarios of planning background, and let them decide on that basis how much capacity to procure. Even if there is still a preference of working with summary statistics for formal decision analysis, there is value in supplementing this with a wider range of visualisations to understand more broadly the system's risk profile, or the consequences of results from formal optimization models.
This is an attractive idea in principle, though to go with such visualisations one needs the necessary skills to use them well - both on the part of the analysts, in terms of how to create the visualisations, and on the part of decision makers, for whom some technical statistical understanding may be necessary. On the other hand, visualisations such as these may be more tangible and easier to communicate than summary statistics such as CVaR. Appropriate use of scenarios and supporting narratives can help here, potentially using an interactive dashboard to provide model outputs and narratives of consequences for decisions under consideration.
Fig. 2 provides an example of this for a scenario with fixed EEU of 3 GWh/year, and for a range of installed wind capacities. There is substantial variation with installed wind capacity of the distribution of EU in our experiment, but rather less variation in the distribution of LOLD. However, while the former explains the variation of CVaR with respect to EU (CVaR being a summary statistic of the distribution of EU), the difference between scenarios of different wind capacities is less striking in the CVaR results than in the underlying probability distribution.
One can look further at the overall risk profile through the distributions of number of days with a shortfall, and of the probability distribution of EU within a day conditional on that day having a shortfall. It is clear from the lower two panels of Fig. 2 that at higher wind penetration the same EEU is made up (in a probabilistic sense) of fewer days of greater
shortfall. The variation with wind capacity of these two distributions is rather greater than that of the distributions of LOLD and EU - this difference may well be material for decision making, demonstrating how one might go in to considerable detail of risk model outputs to understand fully how the risk profile is changing.
There are important caveats on these results as quantitative calculations for the real GB system. We have carried out a calculation using a standard approach to illustrate important issues for decision support analysis in the model-world, but for a fully applied study one would need to specialise the statistical estimation to the relevant scenario, and to consider whether the modelling assumptions made give meaningful results on low probability events with respect to the real world.
Figure 2: Scenario with fixed EEU = 3 GWh/year
## 4 Discussion
### Statistical issues
In addition to this paper's technical work exploring the range of model outputs available from standard calculations and how these might be used, there are also important statistical and uncertainty management issues which must be considered. These include estimation of model inputs given the limited number of examples of extreme conditions in the historic record, estimates of the availability properties of conventional generation at such times, and modelling of very complex continental-scale systems.
It is common in reliability analysis to have limited direct historic data on failure events, particularly in an area such as resource adequacy where there is a specific external driver of stress events (i.e. the weather). The consequent uncertainty in estimation of model outputs tends to increase in systems with high renewables penetrations, where the highest values of (demand minus renewables) are from times combining sufficiently high demand with low renewable resource - this tends to concentrate risk in a smaller number of events as compared to a system where risk is driven by the highest values of demand.
Strikingly, these conditions that drive risk at high renewable penetrations would otherwise be regarded as extreme in any one location, where they would appear to be a standard and benign winter day with fairly low temperature and little wind. This dominance of risk profile by a limited number of years will manifest itself in different ways depending on the way the demand and renewable resource interact, and how much storage is connected to the system; for an example with a high storage penetration see [20].
This effect is illustrated for the example of this paper in Fig. 3. At low penetrations of wind generation, 2005 does not make a substantial contribution to the calculated EEU, as peak demand was quite modest that winter. However, due to the very calm conditions on certain days of fairly high demand, at high wind penetrations this becomes the most significant winter by far in driving the outcome of the risk calculation.
As a further illustration based on the experiment in this paper, Fig. 4 presents the same analysis as Fig. 2b, except with the data from winter 2005-6 omitted3. At high wind penetrations, the estimated probability of
very severe outcomes is then much lower than if data from the outlier winter 05-06 are included.
A particular manifestation of this will be in estimating tail event metrics such as VaR or CVaR of energy unserved with a high threshold - for instance in assessing what a 1 in \(n\) year risk level might be. Even if one assumes that past weather is statistically representative of the relevant future year,
Figure 4: Probability distribution of Energy Unserved (same experiment as Fig. 2b except that data from 2005-6 are omitted).
Figure 3: Proportion of estimated EEU arising from 2005 data.
by definition one will pick up a relevant weather year on average once every \(n\) years. Historic data will thus be very sparse, and estimates of such metrics (not conditional on particular weather) will be speculative.
Another key issue of uncertainty management is interconnection across very wide (continental scale) areas. As GB is an island system, this manifests as undersea connections to Ireland, Norway and the main continental European system. The level of interconnection between GB and other systems is forecast to rise rapidly - for instance one scenario study by the system operator has a minimum of 20 GW of connections to other systems, against a GB peak demand that is currently a little over 50 GW, making interconnector support a key consideration in evaluating adequacy risk [21]. However there will be considerable uncertainty in statistical characterisation of available support from other systems, due to the volume of model assumptions and data involved in continental scale assessments, and because, when carrying out a study, knowledge of other systems is almost certainly poorer than knowledge of one's own. The need to bring interconnection into assessments is recognised in other systems, as discussed in [1].
### Decision support for capacity procurement
There is broad recognition that planning for system adequacy needs to recognise the changing needs of present and future systems with high renewable penetrations [1]. In particular, it is necessary to recognise that if the profile of supply and demand changes in a system, then it may be difficult to track this using a small set of single-number metrics - this paper has demonstrated by numerical experiment how this can be the case in a system with increasing penetration of renewable generation. Decision support approaches must both be transparent to the decision maker and not oversimplify the situation.
Historically, resource adequacy standards have tended to be set in terms of a single metric, usually a variant of LOLE - either the expected number of days on which there is a shortfall (the classic LOLE standard in N America being 0.1 days/year LOLE [22]), or the expected number of hours of shortfall (e.g. the GB standard of 3 hours/year LOLE; this metric is usually referred to in N America as Loss of Load Hours, LOLH, though in Europe is more commonly referred to as LOLE).
There is thus a tension between the need to reflect the increasing complexity of patterns of supply and demand, and the natural desire to have simple, transparent criteria for use in supporting procurement decisions. The structure of working in this more complex environment would be simpler if there is a single 'controlling mind' (i.e. a point of decision making in a single organisation, maybe with the decision vested in a single individual) which is
able to take judgments as to how to balance different aspects of risk profile in a decision on capacity procurement. In that case, standard approaches to uncertainty management and multi-criteria decision analysis could then be used to handle the issues described.
However when there is an industry and policy need for a clearly defined standard, one must seek appropriate compromises which reflect the changing nature of power systems and are sufficiently grounded in the relevant decision science; there is unlikely to be one definitive best approach here applicable in all situations. A further consideration is what actually _can_ be estimated confidently - in particular, in many systems it may be possible to make confident estimates of risk profile conditional on a particular weather year and on a statistical characterisation of available support from interconnectors, these being the principal sources of uncertainty in risk model outputs.
One basis for a solution could therefore be to set a public facing risk target, for use in capacity procurement, conditional on a single given weather and interconnector scenario, or an assumption that the available time series data characterise fully the range of future possibilities that might be faced. This would allow performance of alternative resource portfolios to be compared, but it might be necessary to check against other possible interconnection or weather scenarios that the outcome is likely to be robust. It may further be necessary to adjust either the risk target or scenario choice from time to time, in order to maintain an appropriate level of risk as the portfolio of technologies develops. A related challenge is that supply capacity is not a simple additive commodity of the kind that is traded in standard auctions; a fully discussion of the consequence of this may be found in [3].
It may also be necessary to make some special consideration of very extreme situations. If it is deemed infeasible to assign probabilities to events which are too sparse in the historic record, then these might be treated through scenario analysis - however, if this means that there is a probability model used for some classes of event, but the events which really matter are excluded from this model, there might be a logical gap in the framework.
Another situation might be where weather beyond a certain degree of severity introduces distinct failure modes which are not seen at all in less extreme, but still severe, conditions, for instance those of Texas in 2021 [23]. A further example is the severe cold and snow in GB in 1962-63 [24], which would bring very substantial changes in demand patterns over an extended period of months if it occurred again. Apart from any difficulties with assigning probabilities or mean return times, such conditions might require bespoke studies of the effect of temperature and snowfall on demand and the power system, going well beyond the modelling that suffices for more normal circumstances.
Finally, we note that the transparency of metrics and visualisations for decision makers is a significant issue. Standard indices such as LOLE and EEU are sometimes regarded outside the specialist modelling community as being hard to understand, and inside that specialist community there is recognition of limits on the information that they contain. It is likely that non-specialists will find some additional metrics such as CVaR more difficult still to understand, so it is vital to take appropriate care in communicating the information that they contain. It may be, however, that visualisations of probability distributions are easier to communicate in that they contain all the information on estimated statistics of a particular quantity, though initially some might find the overt presentation of probability distributions intimidating.
### Relation to wider issues of project and policy appraisal
The questions discussed in this paper should also be seen in the wider context of decision analysis and project appraisal. Indeed this area of resource adequacy and capacity procurement often provides a very good exemplar of wider issues, in that with a fairly simple system model (at least for a single power system area), it presents a range of subtle statistical issues.
The master guide for such analysis in the public and regulated sector in GB is the HM Treasury Green Book [25]; while this is explicitly about appraisal and evaluation of policies and projects in central government, it is very influential on decision analysis in wider contexts. One key principle is that categories of assessment in an appraisal should be monetised if possible; if monetisation is not possible they should be quantified on a different scale, and if quantification is not possible they should be assessed qualitatively.
There is no dispute that RA risk is subject to quantification, albeit with significant caveats on whether confident estimates can be made without conditioning on assumptions about weather and interconnector support. However, as mentioned earlier in this article, monetising RA risk is much more doubtful - and the standard monetisation in terms of expected energy unserved multiplied by a (fixed, survey based, averaged over customers) per MWh value of lost load tends to recommend an unacceptably low level of reliability. In addition to this top-down argument, there are bottom-up arguments against averaging the interests of all customers, including that disconnections do not discriminate between customers with different interests.
One could go beyond the standard monetisation by making the VOLL dependent on the depth and length of shortfall, and more generally by eliciting
decision maker judgments as to how the economic value should be considered; or by considering the variability of economic damage about the mean. However, this still runs into problems of scope: does one include only direct economic damage, or does one wish to include in the decision analysis wider issues such as the need for broad economic and societal confidence in a reliable electricity supply? The latter is likely to be of interest to high-level decision makers.
This in turn is a wider example of the issue in appraisals of managing imprecisely defined concepts such as social value, environmental capital and cultural value; energy security may be regarded as an example of the former. There will usually then be uncertainty arising from questions of scope, and of the way that any monetisation is conceptualised. This means that uncertainty in monetisations goes beyond considerations of input, parameter and modelling uncertainty to uncertainty in the conceptualisation (essentially that equally reasonable and expert people might come up with different broad approaches). The problems with bringing such disparate criteria together into a single number score have been noted previously in an energy context by Hammond and co-authors [26, 27]. A very prominent example in the wider GB infrastructure sector is the business case underpinning the present HS2 high speed rail project, in which many factors are combined into a single monetary CBA [28].
Going back to the specific case of resource adequacy, consequence is clearly quantifiable, and relative economic consequence between different scenarios is likely also to be quantifiable - but on the latter, questions of scope might mean that this is naturally a multicriteria comparison. The problem comes with the final step of bringing everything together into a single line item of money, when procurement costs and reliability are not fully commensurate. In this and a very wide range of case studies in other domains, we would recommend the following:
* If monetisation is used, it should come with a sufficiently broad quantification of uncertainty, including the consequences of social value and similar concepts being imprecisely defined quantities;
* In parallel with monetisation, a multicriteria analysis should be performed, recognising where quantities are not fully commensurate, i.e. they cannot naturally be brought together in the same line item of money;
* Visualisations such as a red-amber-green representation of the different options against the range of criteria may be very helpful in presentation to decision makers.
## Conclusions
This paper has explored use of a range of risk model outputs as a basis for resource adequacy assessment and capacity procurement, including extensions to standard decision analysis pictures such as risk averse metrics and wider visualisations of risk profile. For the GB example considered, as the capacity of renewables connected to the system increases, not all changes to the risk profile are captured by expected value metrics such as LOLE and EEU. If a single summary statistic is required, then CVaR with respect to the distribution of energy unserved or loss of load duration is attractive due to the way CVaR generalises these standard expected value metrics - however this may not reveal other aspects of risk profile such as how the same annual aggregate model output can be made up of a larger/smaller number of less/more severe days.
Overall, these results provide a strong argument for finding ways to balance objectives in a transparent way that recognises the interests of decision makers, and we provide examples of how this might be done, along with an extended discussion of their use in practical application - a particular challenge is the specification of transparent standards when it is not natural to construct a monovariate utility function. For the necessary combination of broad uncertainty management with decision analysis, a possible framework is a Bayes network-based decision support system as described in [29]; any alternative would require similar functionality for combining a wide range of uncertainties.
## Acknowledgments
The authors acknowledge discussions with A. Dobbie, H. Wynn, S. Zachary, and members of the IEEE RAWG and the RA activities of ESIG, EPRI and G-PST. They also acknowledge meetings in which Colin Gibson and Andrew Wright provided advice about decision maker interests based on their industry experience.
Author Chris Dent further expresses gratitude to the late Colin Gibson for exchanges throughout CD's career in energy systems, from which he learned much about power system planning and operation.
The authors acknowledge the following funding for this work: CJD, AS, JQS, ALW and XY from the Alan Turing Institute 'Towards Turing 2.0' programme under EPSRC Grant EP/W037211/1; CJD, JQS and ALW from the Turing Fellow project 'Managing Uncertainty in Government Modelling'; NS a PhD scholarship from the Mexican Conacyt funding council. CJD,
JQS and ALW also acknowledge Turing Fellowships from the Turing Institute; and CJD and JQS would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Mathematical and Statistical Foundations of Data Driven Engineering when work on this paper was undertaken (supported by EPSRC Grant Number EP/R014604/1), associated with which CJD was partially supported by a grant from the Simons Foundation. CJD further acknowledges a grant from the International Centre for Mathematical Sciences under its Knowledge Exchange Catalyst scheme to work with the Global Power System Transformation consortium.
|
2309.13272 | Natural Language Processing for Requirements Formalization: How to
Derive New Approaches? | It is a long-standing desire of industry and research to automate the
software development and testing process as much as possible. In this process,
requirements engineering (RE) plays a fundamental role for all other steps that
build on it. Model-based design and testing methods have been developed to
handle the growing complexity and variability of software systems. However,
major effort is still required to create specification models from a large set
of functional requirements provided in natural language. Numerous approaches
based on natural language processing (NLP) have been proposed in the literature
to generate requirements models using mainly syntactic properties. Recent
advances in NLP show that semantic quantities can also be identified and used
to provide better assistance in the requirements formalization process. In this
work, we present and discuss principal ideas and state-of-the-art methodologies
from the field of NLP in order to guide the readers on how to create a set of
rules and methods for the semi-automated formalization of requirements
according to their specific use case and needs. We discuss two different
approaches in detail and highlight the iterative development of rule sets. The
requirements models are represented in a human- and machine-readable format in
the form of pseudocode. The presented methods are demonstrated on two
industrial use cases from the automotive and railway domains. It shows that
using current pre-trained NLP models requires less effort to create a set of
rules and can be easily adapted to specific use cases and domains. In addition,
findings and shortcomings of this research area are highlighted and an outlook
on possible future developments is given. | Viju Sudhi, Libin Kutty, Robin Gröpler | 2023-09-23T05:45:19Z | http://arxiv.org/abs/2309.13272v1 | # Natural Language Processing for Requirements Formalization: How to Derive New Approaches?
###### Abstract
It is a long-standing desire of industry and research to automate the software development and testing process as much as possible. In this process, requirements engineering (RE) plays a fundamental role for all other steps that build on it. Model-based design and testing methods have been developed to handle the growing complexity and variability of software systems. However, major effort is still required to create specification models from a large set of functional requirements provided in natural language. Numerous approaches based on natural language processing (NLP) have been proposed in the literature to generate requirements models using mainly syntactic properties. Recent advances in NLP show that semantic quantities can also be identified and used to provide better assistance in the requirements formalization process. In this work, we present and discuss principal ideas and state-of-the-art methodologies from the field of NLP in order to guide the readers on how to create a set of rules and methods for the semi-automated formalization of requirements according to their specific use case and needs. We discuss two different approaches in detail and highlight the iterative development of rule sets. The requirements models are represented in a human- and machine-readable format in the form of pseudocode. The presented methods are demonstrated on two industrial use cases from the automotive and railway domains. It shows that using current pre-trained NLP models requires less effort to create a set of rules and can be easily adapted to specific use cases and domains. In addition, findings and shortcomings of this research area are highlighted and an outlook on possible future developments is given.
Keywords:Requirements Engineering (RE) Requirements Formalization Requirements Modeling Requirements Analysis Natural Language Processing (NLP) Semantic Role Labeling.
## 1 Introduction
Requirements engineering (RE) plays a fundamental role for the software development and testing process. There are usually many people involved in this process, such as the customer or sponsor, the users from different areas of expertise, the development and testing team, and those responsible for the system architecture. Therefore,
requirements are intentionally written in a textual form in order to be understandable for all stakeholders of the software product to be developed. However, this also means that requirements have an inherently informal character due to the ambiguity and diversity of human language. This makes it difficult to automatically analyze the requirements for further processing. There are many different processes that build on them, such as requirements verification and test case generation (see Fig. 1).
In order to handle the growing complexity and variability of software systems, model-based design and testing methods have been developed [3]. Especially for safety-critical systems such as in the automotive, railway and aerospace domains, extensive testing based on the requirements is necessary. However, the manual creation of requirements models from natural language requirements is time-consuming and error-prone, and also requires a lot of expert knowledge. This is especially true in agile software development which involves continuous improvements and many changes to requirements.
Numerous approaches based on natural language processing (NLP) have been proposed in the literature to generate requirements models using mainly syntactic properties [39; 20]. Recent advances in NLP show that semantic quantities can also be identified and used to provide better assistance in the formalization of unrestricted natural language requirements [28; 13]. However, most studies propose concrete solutions that work well only for a very specific environment.
Figure 1: Requirements as the basis for the software development and testing process.

The aim of this work is to focus on principal ideas and state-of-the-art methodologies from the field of NLP to automatically generate requirements models from natural language requirements. We iteratively derive a set of rules based on NLP information in order to guide readers on how to create their own set of rules and methods according to their specific use cases and needs. In particular, we want to investigate the question: _How to derive new approaches for requirements formalization using natural language processing?_ We highlight and discuss the necessary stages of an NLP-based pipeline towards the generation of requirements models. We present and discuss two approaches in detail: (i) _a dependency and part-of-speech-based approach_, and (ii) _a semantic-role-based approach_. This significantly extends and enhances our previous work [16]. The approaches are demonstrated on two industrial use cases: a battery charging approval system from the automotive domain and a propulsion control system in the railway domain. The requirements models are represented in a human- and machine-readable format in the form of pseudocode [17]. In summary, this work aims to provide more general and long-lasting instructions on how to develop new approaches for NLP-based requirements formalization.
In Section 2, we present our proposed NLP-based pipeline, introduce the use cases, and present the two different approaches for NLP-based requirements formalization. In Section 3, we review the state of the literature on the various NLP methods for requirements formalization. In Section 4, we discuss some general findings and shortcomings in this area of research and provide an outlook on possible future developments. Finally, Section 5 concludes our work.
## 2 Methodology
In this section, we propose an NLP-based pipeline for requirements formalization. We first give a brief overview of the use cases and then present and discuss the two different approaches in detail and demonstrate the iterative development of rule sets.
The automatic generation of requirements models from functional requirements written in unrestricted natural language is highly complex and needs to be divided into several steps. Therefore, our pipeline consists of several stages, as illustrated in Figure 2. The different stages of the pipeline are described in more detail below.
**Stage 1: Preprocessing.** We start with preprocessing the requirements. According to our use cases, we perform data preparation, where we clean up the raw text, and resolve pronouns. This stage need not be limited to these steps and should be flexibly adapted to the style and domain of the requirements at hand.
**Stage 2: Decomposition.** In order to extract individual actions, the preprocessed requirements are decomposed into clauses. Industrial requirements tend to be complex in certain cases with multiple sentences combined together to convey a particular system behavior. We decompose each requirement sentence at certain conjunctions and linking words (_if_, _while_, _until_, _and_, _or_, etc.), assuming that each clause contains a single action. Multi-line and multi-paragraph requirements should also be decomposed into clauses.
**Stage 3: Entity detection.** In this stage, we use the syntactic and semantic information of each clause to identify the desired model entities, such as _signals_, _components_, and _parameters_. We construct a rule-based mapping that is iteratively derived from
the considered requirements of our specific use cases. Further, we map comparison words to _operator_ symbols (\(<\), \(>\), \(==\), etc.) using a dictionary. These rules can be easily adapted to different use cases and domains by following the same procedure as described below.
**Stage 4: Model formation.** In the final stage of the pipeline, we assemble the retrieved information to form _relations_ for each clause. These can either be assignments of the form signal(parameter*) with optional parameter, or conditional statements of the form signal() operator parameter. Then we combine them according to the derived logical operators, and assign them to specific _blocks_ (_if_, _then_, _else_, etc.). This yields a requirements model for each requirement.
The whole pipeline can be tailored according to the use case and the desired output. For example, no components (actors) are explicitly mentioned in the requirements we consider. Therefore, we have omitted this entity and assume that this information is given. The decomposition and entity detection stages are particularly based on information provided by NLP methods. We present two approaches to handle these stages: (i) _a dependency- and part-of-speech-based approach_ (Section 2.2), which utilizes the grammatical constructs of the requirement text, and (ii) _a semantic-role-based approach_ (Section 2.3), which makes use of the semantic roles given by a pre-trained model. Both approaches work with unrestricted natural language and can be further refined and adapted to different styles and domains according to the needs of the use case.
### Use cases
We demonstrate the derivation of our approaches using functional requirements from two different industrial use cases. The first use case is a battery charging approval system provided by AKKA from the automotive domain. The use case describes a system for charging approval of an electric vehicle in interaction with a charging station. In total, AKKA has provided 14 separate requirement statements. The requirements are used for a model-based software development and implementation of the functionality in an electronic control unit (ECU). More details about the industrial use case can be found in [15, 16].

Figure 2: Proposed pipeline for requirements formalization.
The second industrial use case is a propulsion control system (PPC) provided by Alstom from the railway domain. The PPC is part of a large, complex, safety-critical system. It handles the control of the entire propulsion system, including both control software as well as the electrical functions. Alstom has provided 31 requirements which do not follow a prescribed format in order not to focus on syntax when writing them. The system architecture and software modules are modeled in Matlab Simulink using a model-based design approach. More information about this use case is given in [35, 17].
Note that we show the requirements in a generalized form, as the data we use is confidential. For demonstration purposes, we have also partially modified the original requirements.
### Dependency- and Part-of-Speech-Based Approach
One possible approach to arrive at formal requirements models is to investigate the grammar of the natural language requirements. For instance, one can make use of dependency tags, part-of-speech tags, or combine both kinds of syntactic information to arrive at the entities for the desired requirements models.
For example, consider the requirement phrase _"if the temperature is larger than t_batt_max"_. When we parse the dependency and POS tags of this phrase with the help of toolkits like spaCy12, we arrive at a directed graph as shown in Fig. 3. It illustrates that the _root_ verb of the phrase is "_is_" (from which the graph emerges) and shows how each word is associated with this _root_ verb. These associations or dependencies are represented by different tags. For example, "_temperature_" is the nominal subject (_nsubj_) of the _root_, "_t_batt_max_" is the prepositional object (_pobj_) of the _root_, etc.
Figure 3: Dependency and POS tags for an exemplary requirement phrase.
Similarly, we can find the POS tag of each word in the graph, e.g. "_temperature_" is a _noun_, "_larger_" is an _adjective_, etc. Some dependency and POS tags are presented in Table 1. We suggest the reader to gather an overview of the dependency and POS tags from the generic framework presented as universal dependencies 34.
Footnote 3: [https://universaldependencies.org/u/dep/index.html](https://universaldependencies.org/u/dep/index.html)
Footnote 4: [https://universaldependencies.org/u/pos/index.html](https://universaldependencies.org/u/pos/index.html)
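To make the rules discussed below easier to reproduce, the following sketch shows how such a parse can be obtained in practice. It assumes spaCy with a small English pipeline installed; the exact tags may vary slightly between model versions.

```
# A minimal sketch (assumed setup: spaCy with the en_core_web_sm model installed);
# it prints the dependency tag, POS tag and syntactic head of every token.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("if the temperature is larger than t_batt_max")

for token in doc:
    print(f"{token.text:12} dep={token.dep_:8} pos={token.pos_:6} head={token.head.text}")
```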
With this background on dependencies and POS tags, we further discuss how we tailor our pipeline with a rule base depending on the syntactic structure of the requirement text. For better comprehensibility for readers, the requirements are presented in the order of growing complexity.
**Req. 1:** The error state is 'E_BROKEN', if the temperature of the battery is larger than t_batt_max.

As an entry point to the discussion, consider a simple requirement, Req. 1. Here, the requirement has two primitive actions - one that talks about the "_error state_" and another that talks about the condition when this "_error state_" occurs. Similar to this requirement, individual industrial requirements are often composed of multiple actions, making them inherently difficult to process with information extraction toolkits. Hence, as shown in Fig. 2, we propose a decomposition step to initially decompose a long complex requirement into clauses which individually explain primitive actions.
**Decomposition (conditions).** To decompose this requirement into individual clauses, we can use the conditional keyword "_if_" as the boundary. This yields us the following clauses:
* The error state is 'E_BROKEN'
* if the temperature of the battery is larger than t_batt_max
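A minimal sketch of this decomposition step is given below; the keyword list and the splitting pattern are illustrative assumptions and only cover splitting at conditional keywords, not the full set of decomposition rules.

```
# A minimal sketch of decomposition at conditional keywords; the keyword list and
# the splitting pattern are illustrative assumptions, not the exact rule base.
import re

SPLIT_PATTERN = r"[,;]?\s+(?=(?:if|while|until|when)\b)"

def decompose(requirement: str) -> list[str]:
    clauses = re.split(SPLIT_PATTERN, requirement.strip().rstrip("."))
    return [clause.strip(" ,") for clause in clauses if clause.strip(" ,")]

print(decompose("The error state is 'E_BROKEN', if the temperature of the battery "
                "is larger than t_batt_max."))
# -> ["The error state is 'E_BROKEN'", 'if the temperature of the battery is larger than t_batt_max']
```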
\begin{table}
\begin{tabular}{l l} \hline \hline
**Dependency tag** & **Description** \\ \hline nsubj & Nominal subject - does the action \\ root & Root of the sentence - the action \\ dobj & Direct object - on which the action is done \\ mark & Marker - marks a clause as subordinate to another \\ conj & Conjunct - indicates a coordinating conjunction \\ \hline
**Part-of-speech tag** & **Description** \\ \hline NOUN & Nouns denote a person, place, thing, animal or idea \\ ADJ & Adjectives modify nouns \\ ADV & Adverbs modify verbs \\ DET & Determiners indicate the reference of the nouns \\ SCONJ & Subordinating conjunction \\ \hline \hline \end{tabular}
\end{table}
Table 1: A few commonly used dependency and POS tags
**Entity detection.** For the first clause, we extract the subject of the clause as _"error state"_ and the object as _"E_BROKEN"_. The root verb in the clause is _"is"_ which decides how to form the relation. By mapping the subject (with an inflection of the root verb) to the _signal_ and the object as the _parameter_, we end up with the relation set_error_state(E_BROKEN). In the second clause, with similar rules, we extract the subject of the clause as _"temperature of the battery"_ and the object as _"t_batt_max"_.
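The following sketch illustrates, under the assumption that spaCy tags the clause as described, how the subject and object can be pulled out and joined into a relation string; the "set_" prefix and the name normalisation are simplified placeholders rather than the full rule base.

```
# A minimal sketch (assuming spaCy) of mapping subject/object to a relation string;
# the "set_" prefix and the normalisation are illustrative simplifications.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_relation(clause: str) -> str:
    doc = nlp(clause)
    subj = next((t for t in doc if t.dep_ in ("nsubj", "nsubjpass")), None)
    obj = next((t for t in doc if t.dep_ in ("dobj", "pobj", "attr")), None)
    signal = "_".join(t.text.lower() for t in subj.subtree
                      if t.pos_ not in ("DET", "PUNCT")) if subj else ""
    parameter = obj.text if obj is not None else ""
    return f"set_{signal}({parameter})"

# roughly yields set_error_state(E_BROKEN) for the first clause of Req. 1
print(extract_relation("The error state is 'E_BROKEN'"))
```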
**Operator detection.** However, the root verb occurs in conjunction with a comparative term _"larger"_. The appropriate operator for this word can be fetched with the help of a hyperlinked thesaurus like Roget's Thesaurus5. This yields us the symbol "\(>\)" indicating the quality "greatness", see Table 2. The occurrence of a comparative term differentiates how we form the relation for this clause from the previous clause. The relation formed from this clause will be: temperature_of_battery() > t_batt_max.
Footnote 5: [https://sites.astro.caltech.edu/](https://sites.astro.caltech.edu/)\(\sim\)pls/roget/
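A sketch of such a lookup is given below; the word lists are abbreviated stand-ins for the thesaurus categories in Table 2 and would be populated from the thesaurus in practice.

```
# A minimal sketch of the word-to-operator mapping; the word lists are abbreviated
# and would in practice be filled from the thesaurus categories of Table 2.
OPERATOR_WORDS = {
    ">": ("larger", "greater", "exceed", "above", "over", "high"),
    "<": ("smaller", "less", "below", "at most", "no more than"),
    "==": ("equal", "match", "reach", "amount to"),
}

def detect_operator(clause: str):
    text = clause.lower()
    for symbol, words in OPERATOR_WORDS.items():
        if any(word in text for word in words):
            return symbol
    return None

print(detect_operator("if the temperature of the battery is larger than t_batt_max"))  # '>'
```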
Now consider a slightly different requirement with a different formulation in the second clause, as shown in Req. 2.
**Decomposition (root conjunctions).** In this clause, the conjunction _"and"_ occurs. Unless we decompose this conjunction, the clause by itself does not address a single primitive action. In this case, the conjunction _"and"_ connects the two root verbs _"is"_ (before _"larger"_) and _"is"_ (before _"smaller"_). Such root conjunctions can be decomposed by considering the conjunction as the boundary. This yields us the following clauses:
* if the temperature of the battery is larger than t_batt_max
* it is smaller than t_max
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Quantity** & **Words** & **Operator** \\ \hline Superiority & exceed, pass, larger, greater, over, above,... & \(>\) \\ Greatness & excessive, high, extensive, big, enlarge,... & \(>\) \\ \hline Inferiority & smaller, less, not pass, minor, be inferior,... & \(<\) \\ Smallness & below, decrease, limited, at most, no more than,... & \(<\) \\ \hline Sameness & equal, match, reach, come to, amount to,... & == \\ \hline \hline \end{tabular}
\end{table}
Table 2: Detection of comparison operators using Roget’s Thesaurus

**Pronoun resolution.** The first clause is similar to the one presented in the previous requirement. However, the second clause presents a new challenge. The pronoun _"it"_ needs to be resolved before proceeding to entity detection. We propose a simple pronoun resolution step to find the third person pronouns (singular: _it_, plural: _they_) by replacing each occurrence of a pronoun with the farthest subject. In this clause, we replace _"it"_ with _"the temperature of the battery"_. This yields the clause _"if the temperature of the battery is smaller than t_max"_. This is again similar to the discussed clauses and is handled with the same rules.
A further check on the grammatical number of the pronoun and its antecedent is advised, if the requirement quality is in doubt. The pronouns without an antecedent (called pleonastic pronouns) should not be resolved. We also assume first person or second person pronouns hardly occur in industrial requirements.
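The sketch below illustrates this replacement step; treating the first (i.e. farthest preceding) non-pronoun subject as the antecedent and re-joining the tokens with plain spaces are simplifying assumptions.

```
# A minimal sketch (assuming spaCy) of the pronoun resolution step: third person
# pronouns are replaced with the farthest preceding subject phrase of the sentence.
import spacy

nlp = spacy.load("en_core_web_sm")

def resolve_pronouns(sentence: str) -> str:
    doc = nlp(sentence)
    subjects = [t for t in doc if t.dep_ in ("nsubj", "nsubjpass") and t.pos_ != "PRON"]
    if not subjects:
        return sentence
    antecedent = " ".join(t.text for t in subjects[0].subtree)  # farthest subject phrase
    tokens = [antecedent if t.lower_ in ("it", "they") else t.text for t in doc]
    return " ".join(tokens)

print(resolve_pronouns("if the temperature of the battery is larger than t_batt_max "
                       "and it is smaller than t_max"))
```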
**Req. 3:** The error state is 'E_BROKEN', if the temperature of the battery is larger than t_batt_max and t_max.
Now consider a requirement which has the following clause, as given in Req. 3. Unlike the clause in the previous requirement, this clause has a conjunction _"and"_ between two noun phrases _"t_batt_max"_ and _"t_max"_.
**Decomposition (noun phrases).** This demands a slightly more complex noun phrase decomposition. Unlike the other decomposition steps described above, this step further requires extracting the subject phrase of the first noun phrase and prefixing it to the latter. This yields us the following clauses:
* if the temperature of the battery is larger than t_batt_max
* if the temperature of the battery is larger than t_max
These clauses are identical to the discussed clauses and hence, the entities are extracted with the same rules.
**Req. 4:** The error state is 'E_BROKEN', if the temperature of the battery is between t_batt_max and t_max.
Another clause is shown in Req. 4, with a connector "_between_" in the text.
**Decomposition (connectors like "between").** To decompose such clauses, we can replace _between A and B_ with the construct _greater than A and less than B_. This however, assumes the real values of \(A\) to be less than \(B\). Although the assumption is true in most cases, we advise to further validate the decomposition, e.g. by integrating a user feedback loop. With the above assumption, the decomposition of this clause results in the following clauses:
* if the temperature of the battery is greater than t_batt_max
* if the temperature of the battery is less than t_max
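A sketch of this rewriting rule is shown below; as noted above, it assumes that the first value is the lower bound, so a user confirmation step would be advisable, and it only handles single-token bounds such as variable names.

```
# A minimal sketch of the "between A and B" rewriting rule; it assumes A < B and
# only handles single-token bounds such as variable names.
import re

def rewrite_between(clause: str) -> str:
    return re.sub(r"\bbetween\s+(\S+)\s+and\s+(\S+)",
                  r"greater than \1 and less than \2", clause)

print(rewrite_between("if the temperature of the battery is between t_batt_max and t_max"))
# -> if the temperature of the battery is greater than t_batt_max and less than t_max
```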
**Req. 5:** The charging approval shall be given if the connection with the charging station is _active_.
**Req. 6:** The charging approval shall _not_ be given if the connection with the charging station is _inactive_.
Requirements can also contain negations. Consider the pair of requirements 5 and 6, which when decomposed yield the following clauses:
* _R5C1_: The charging approval shall be given
* _R5C2_: if the connection with the charging station is active
* _R6C1_: The charging approval shall not be given
* _R6C2_: if the connection with the charging station is inactive
**Entity detection (negation handling).** The corresponding clauses of these requirements contradict each other, i.e. _R5C1_ and _R6C1_ contradict each other as well as _R5C2_ and _R6C2_. In _R5C1_, there is no explicit negation. Hence, we can handle this clause just as any other clause explained before. However, the root verb in _R6C1 "be"_ occurs in conjunction with a _"not"_. This can be addressed by introducing a negation operator (a logical **not**) in the relation. For example, _R5C1_ yields _give_charging_approval()_ and _R6C1_ yields _not_give_charging_approval()_.
Handling the second clauses of the requirements poses a different challenge. In _R5C2_, the word _active_ occurs, while in _R6C2_, the antonym of this word _inactive_ occurs. We propose to assume the first occurrence of a word qualifying such boolean expressions as _boolean true_ and if its antonym is cited in a different requirement, it can be assigned as _boolean false_. The antonyms can be identified using a hierarchical thesaurus like WordNet6. We assume this way of approaching negations is more intuitive than looking at a single word and inferring its sentiment. The sentiment of a word does not necessarily help us define the boolean value of the word.
Footnote 6: [http://wordnetweb.princeton.edu/perl/webwn](http://wordnetweb.princeton.edu/perl/webwn)
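The antonym lookup can be sketched as follows; it assumes NLTK with the WordNet corpus downloaded, and the returned set depends on the WordNet senses of the word.

```
# A minimal sketch (assuming NLTK and a downloaded WordNet corpus) of the antonym
# lookup used to pair words such as "active"/"inactive" as boolean true/false.
from nltk.corpus import wordnet as wn

def antonyms(word: str) -> set:
    result = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            result.update(a.name() for a in lemma.antonyms())
    return result

print(antonyms("active"))  # typically includes 'inactive' and 'passive'
```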
In the light of the above discussion, we present an overview of a few rules we use to extract the entities in the following tables. The dependency and POS tags are first mapped to the syntactic entities _subject_, _object_ and _predicate_, see Table 3. Then these syntactic entities are mapped to the model entities _signal_ and _parameter_ according to specific rules, see Table 4. The _operator_ is identified from the ADJ denoted by \(\mathrm{op}(\mathrm{ADJ})\) in the table. The rules can be extended e.g. by considering the type of action (nominal, boolean, simple, etc.) and verb types (transitive, dative, prepositional, etc.).
**Discussion.** Although a custom rule base exploiting dependency and POS tags is well suited for a particular use case, it demands a tremendous effort to generalize the rules for requirements across domains varying in nature and style. With each
new style of requirement, the rule base may need to be updated, leading to further effort and deliberation. We argue this by considering the number of dependency and POS tags, which are 37 dependency tags and 17 POS tags according to the revised universal dependencies [25].
### Semantic-Role-Based Approach
We can also formalize requirements by extracting the semantic roles in the requirement text. The semantic roles represent the relation of the words or phrases to the main verb present in the sentence and describe the conceptual meaning behind them.

For example, consider a simple requirement phrase, "_The system shall close the valve_". As illustrated in Fig. 4, _"the system"_ is semantically the _agent_ of the action _"close"_. Further, this action is performed on _"the valve"_ which is semantically its _patient_. In general, the _agent_ is the one who initiates the action and the _patient_ is the one on whom the action is performed. These roles are semantic properties, in contrast to _subject_ and _object_ used in Section 2.2, which are syntactic properties. Some semantic roles are presented in Table 5.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Syntactic entities** & **Signal** & **Operator** & **Parameter** \\ \hline subject, predicate & predicate\_subject & - & - \\ subject, predicate, object & predicate\_subject & - & object \\ ADJ, subject, predicate, object & predicate\_ADJ & - & object \\ \hline subject, ADJ, object & subject & op(ADJ) & object \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mapping of syntactic entities to model entities
Figure 4: Semantic roles for an exemplary requirement phrase.
\begin{table}
\begin{tabular}{l l} \hline \hline
**DEP/POS tags (constituents)** & **Syntactic entities** \\ \hline nsubj, nsubjpass & subject \\ pobj, dobj & object \\ root, root + NOUN, & predicate \\ auxpass/root + ADP/ADV/ADJ & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mapping of DEP/POS tags to syntactic entities
Semantic Role Labeling (SRL) is the task of assigning these roles to the phrases or words that describe their semantic meaning. SRL is one of the leading tasks in natural language processing [27]. It can be used for applications such as question answering, data mining etc. We use SRL for requirements formalization by identifying the roles to form the desired requirements models. We use a state-of-the-art pre-trained deep learning BERT model [34] by AllenNLP7. This model generates frames, each containing a set of arguments related to a verb in the sentence. The arguments are defined based on English Propbank defined by Bonial et al. [6]. Each argument describes the semantic relation of a word to the verb with which it is associated. The arguments can be either numbered or functional. Table 6 shows some of the arguments and the corresponding roles.
Footnote 7: [https://demo.allennlp.org/semantic-role-labeling](https://demo.allennlp.org/semantic-role-labeling)
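The frames used below can be obtained with the AllenNLP predictor as sketched here; the required packages and the hosted model path are assumptions that may change between releases.

```
# A minimal sketch (assuming the allennlp and allennlp-models packages) of querying
# the pre-trained BERT-based SRL model; the model URL is the publicly hosted
# checkpoint at the time of writing and may differ for newer releases.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz"
)
result = predictor.predict(sentence="The system shall close the valve.")

for frame in result["verbs"]:
    # each frame lists the verb and a bracketed description of its arguments
    print(frame["verb"], "->", frame["description"])
```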
We will now demonstrate how we handle the decomposition and entity detection stages with the SRL roles. This can help in considerably reducing the number of rules compared to the first approach while handling even more complex requirements. It is worth noting that there are only 24 SRL arguments (5 numbered and 19 functional), and we argue that even an exhaustive rule base would only demand a combination of some of these different arguments.

\begin{table}
\begin{tabular}{l l} \hline \hline
**Role** & **Description** \\ \hline Agent & Who/what performed the action \\ Patient & Who/what was the action performed on \\ Location & Where the action was performed \\ Time & When the action was performed \\ Instrument & What was used to perform the action \\ Beneficiary & Who/what is benefited by the action \\ \hline \hline \end{tabular}
\end{table}
Table 5: List of basic semantic roles

\begin{table}
\begin{tabular}{l l l} \hline \hline & **Argument** & **Role** \\ \hline \multirow{4}{*}{Numbered} & ARG0 & Agent \\ & ARG1 & Patient \\ \cline{1-1} & ARG2 & Instrument/beneficiary \\ \cline{1-1} & ARG3 & Start point/beneficiary \\ \cline{1-1} & ARG4 & End point \\ \hline \multirow{4}{*}{Functional} & ARGM-LOC & Location \\ \cline{1-1} & ARGM-TMP & Time \\ \cline{1-1} & ARGM-NEG & Negation \\ \cline{1-1} & ARGM-PRD & State \\ \hline \hline \end{tabular}
\end{table}
Table 6: Basic arguments based on English Propbank
**Req. 7:** The maximum power shall be limited to [G_Max] and the event "High device temperature" shall be indicated when the device temperature exceeds [T_Hi] \({}^{\text{o}}\)C.
Consider Req. 7. This requirement has three desired clauses, with two statements based on one condition.
**Preprocessing.** In the requirement, we have the name of an event _"High device temperature"_ within quotes which describes what happens when this event occurs in the system. In certain other cases, the use case has longer event names which themselves contain semantic roles. To help the SRL model distinguish the event names from the rest of the requirement text, we preprocess the requirement by replacing the occurrences of event names in quotes with abbreviated notations like _"E1"_. We also eliminate square brackets and units around variables, for example _"G_Max"_ and _"T_Hi"_ in Req. 7.
The pre-trained SRL model also retrieves frames for modal verbs like _"be"_ or _"shall"_ which may not have any related arguments. To devise a rule set that works with almost all frames, we discard frames with just one argument.
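A sketch of this preprocessing is given below; the regular expressions are illustrative and would be adapted to the quoting and unit conventions of the requirements at hand.

```
# A minimal sketch of the preprocessing step: quoted event names are replaced by
# placeholders (E1, E2, ...) and square brackets and temperature units are removed.
import re

def preprocess(requirement: str):
    events = {}

    def substitute(match):
        placeholder = f"E{len(events) + 1}"
        events[placeholder] = match.group(1)
        return placeholder

    requirement = re.sub(r'"([^"]+)"', substitute, requirement)   # "High device temperature" -> E1
    requirement = re.sub(r"\[([^\]]+)\]", r"\1", requirement)     # [G_Max] -> G_Max
    requirement = re.sub(r"\s*°C\b", "", requirement)             # drop the °C unit
    return requirement, events
```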
Figure 5: SRL frames for Req. 7.

**Decomposition.** Unlike the decomposition described in Section 2.2 which utilized dependency and POS tags, here we discuss the decomposition of complex requirements based on the detected SRL frames. As shown in Fig. 5, we obtain a total of three frames from the pre-trained SRL model applied on Req. 7. The spans (obtained by adding up the requirement phrases belonging to each role in the frame) of these detected frames yield us the following decomposed clauses:
* the maximum power shall limited to G_Max
* the event E1 shall indicated
* when the device temperature exceeds T_Hi
In the second clause, we have the role _ARGM-TMP_, which describes the condition of the action. Here, _ARGM-TMP_ has the desired conditional clause as its span, which tells us that this condition is related to this particular action. But when we look at the requirement, we know that the condition clause applies to both of the other clauses. To avoid a wrong formation of the output model, we ignore _ARGM-TMP_ roles with a full clause as span and only consider _ARGM-TMP_ roles with a single word like _"when"_ or with a unit of time.
**Entity detection.** The first frame is associated with the verb _"limited"_. We have _"The maximum power"_ as _ARG1_ (describing what the action is performed on) and _"to G_Max"_ as _ARG2_ (an instrument). We can map the verb together with _ARG1_ as the signal and _ARG2_ as the parameter. Note that we use the lemma (basic form) of the verb to form the signal. Also, the stop words in the arguments are eliminated. This yields us the relation for this clause as: limit_maximum_power(G_Max). From this relation, we form the construct V_ARG1(ARG2) which can be further used if we get similar frames.
The second frame, after avoiding the argument _ARGM-TMP_, follows a similar construct as the first frame only without the optional parameter. We also perform back mapping of the abbreviated event name _"E1"_ with its actual event name yielding the relation: indicate_event_high_device_temperature().
The third frame is a conditional clause which is identified by the argument _ARGM-TMP_ with a span of _"when"_. Here, we have _"the device temperature"_ as _ARG0_ (the agent) and _"T_Hi"_ as _ARG1_ (the patient). We map _ARG0_ as the signal and _ARG1_ as the parameter.
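The construct can be implemented roughly as sketched below; the argument spans are passed as plain strings, and the stop-word list and lower-casing are simplifying assumptions rather than the full rule set.

```
# A minimal sketch of forming a V_ARG1(ARG2) relation from one SRL frame; the
# stop-word list and the normalisation are illustrative, not the full rule set.
STOP_WORDS = {"the", "a", "an", "shall", "be", "to"}

def normalise(span: str) -> str:
    return "_".join(w.lower() for w in span.split() if w.lower() not in STOP_WORDS)

def relation_from_frame(verb_lemma: str, arg1: str, arg2: str = "") -> str:
    return f"{verb_lemma}_{normalise(arg1)}({normalise(arg2)})"

print(relation_from_frame("limit", "The maximum power", "to G_Max"))
# -> limit_maximum_power(g_max)
```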
**Operator detection.** Additionally, a comparison word _"exceeds"_ occurs in the role \(V\) of this clause. We detect the corresponding operator by looking it up in Roget's Thesaurus, similar to the first approach. In this case, the word _"exceeds"_ gives the symbol "\(>\)". Thus, this clause translates to the relation device_temperature() > T_Hi. We build a general construct for similar frames with operator as _ARG0 V ARG1_.
**Req. 8:** The device fuel pump shall not be activated until the fuel level falls below [L_Fp].
Consider another requirement, Req. 8, with negation and the conditional keyword _"until"_. SRL frames obtained from the pre-trained model are shown in Fig. 6.
**Entity detection (negation handling).** The first frame follows the same construct as before leaving the argument _ARG2_ optional. However, we also have the argument _ARGM-NEG_ indicating a negation in the clause which should be considered while forming the relation. In this case, we can modify the previous construct V_ARG1() to form a new construct not V_ARG1(). This finally yields the relation not activate_device_fuel_pump().
Though the second frame indicates a conditional clause, it is difficult to identify since no arguments have been assigned to the word _"until"_. To recognize it as a condition, we apply additional rules such as keyword identification. Considering the arguments, we have the role _ARG4_, which indicates an end point that can be mapped to a parameter, and _ARG1_, which can be mapped to a signal.
**Operator detection.** To find the operator symbol, the verb text alone is not enough in this case, as _"falls"_ by itself does not determine the operator. We therefore apply one extra rule, i.e., we consider the next word in the requirement in addition to the verb text. The span text is then _"falls below"_, which gives the symbol "\(<\)".

This particular frame thus leads to fuel_level() < L_Fp, resulting in the construct ARG1 V ARG4.
**Req. 9:** The device fuel pump shall be deactivated within 3s and shall be closed when the fuel level exceeds [L_Fp].
SRL can handle time constraints and some complex decompositions as well. To demonstrate this, we modify the previous requirement as shown in Req. 9. Figure 7 shows the identified SRL frames.
**Decomposition.** Decomposing this requirement at keywords like _"and"_ and _"when"_ would lead to wrongly formed clauses, as the second clause would not state what shall be closed. The SRL frames correctly capture this, and the following correct clauses are formed by considering the span text of each frame:
* The device fuel pump shall deactivated within 3s
* The device fuel pump shall closed
* when the fuel level exceeds L_Fp
Figure 6: SRL frames for Req. 8.
The first and the second frame follow the same construct as before, i.e., V_ARG1(), and the third frame follows ARG0 V ARG4. In the first frame, we have identified _ARGM-TMP_ with a time constraint. This temporal behavior can be detected by some rule, e.g. whether the span text contains a number and a unit of time, or from the keyword _"within"_. However, we leave the discussion of modeling non-functional properties such as temporal behavior to future work.
In Table 7 we have summarized some rules for mapping the arguments to model entities. The first three lines in the table show some cases for assignments, whereas the two last lines show cases for conditions. Conditions are identified using the argument _ARGM-TMP_. The _operator_ is identified from the \(V\) role denoted by op(V) in the table.
**Discussion.** Using SRL information, we need much easier and less rules as compared to the first approach. This makes it easy to adapt this approach according to a specific use case and needs. However, in a few cases we found that SRL was not working properly. For further improvements, one could of course combine both approaches using the POS and dependency tags as well as SRL labels.
Similar to the first approach, the underlying deep learning models are evolving quickly. They are trained with new and better annotated datasets, resulting in better generation of models with state-of-the-art performance and higher accuracy. The above mentioned constructs and rules are based on the specific model (current state of the art [34]) we used to extract SRL arguments. This might not work completely with future, better models, as they would extract more information and these constructs/rules could become outdated. So it might be necessary to change the rules when other models are used.

\begin{table}
\begin{tabular}{l l l l} \hline \hline
**SRL arguments** & **Signal** & **Operator** & **Parameter** \\ \hline ARG1 (patient), V & V\_ARG1 & - & - \\ ARG1 (patient), V, ARG2 (instrument) & V\_ARG1 & - & ARG2 \\ ARG1 (patient), V, ARGM-PRD (state) & V\_ARG1 & - & ARGM-PRD \\ \hline ARG0 (agent), V, ARG1 (patient) & ARG0 & op(V) & ARG1 \\ ARG1 (agent), V, ARG4 (end points) & ARG1 & op(V) & ARG4 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Mapping of SRL arguments to model entities

Figure 7: SRL frames for Req. 9.
### Model Formation
Once the necessary entities are extracted from the requirement clauses, we head to the last stage in the pipeline, model formation. As mentioned above, we form a relation for each clause. In particular, assignments are of the form signal(parameter*) with optional parameter, and conditional statements are of the form signal() operator parameter. To generate requirement models from these relations, we use a simple domain-specific language (DSL) with abstract logical blocks. We extend the modalities of our previous work [17] following the Temporal Action Language (TeAL) of Li et al. [21]. It maps the relations to blocks according to identified keywords. Conditional statements starting with _if/when/while_ are mapped to an if-block, those starting with _until_ to an until-block. Assignments without an introductory keyword are mapped to a then-block and those starting with _else/otherwise_ to an else-block. When there is no conditional statement given in the requirement, we map all assignments to a statement-block without using any keywords. Conjunctions (_and/or_) identified between relations are also accommodated in these DSL blocks. Though this mapping appears rather trivial, our aim is to make this translation simple and flexible, so that ways are open for integration with other sophisticated languages. Obviously, our DSL is very similar to the FRETISH language [14]. The resulting models can also be further transformed into Matlab Simulink models or UML sequence diagrams, depending on what the end user desires.
For example, the model for Req. 1 is of the form:
```
if(temperature_of_battery()>t_batt_max) then(set_error_state(E_BROKEN))
```
Similarly, the more complex requirement defined as Req. 7 will yield the following model:
```
if(device_temperature()>T_Hi) then(limit_maximum_power(G_Max) and indicate_event_high_device_temperature())
```
**Req. 10:** The maximum power shall be limited to [G_Max] when the device temperature exceeds [T_Hi] \({}^{\circ}\)C, otherwise indicate the error "Maximum power exceeded", until the device temperature falls below [T_Norm] \({}^{\circ}\)C.
To demonstrate how we build the DSL with all the logical blocks, see Req. 10. This requirement has an if-block, its corresponding then-block, an else-block and an until-block. The formal model for this requirement is of the following form:
```
if(device_temperature()>T_Hi) then(limit_maximum_power(G_Max)) else(indicate_error_maximum_power_exceeded()) until(device_temperature()<T_Norm)
```
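Below is a sketch of how relations, labelled with the keyword that introduced them, could be grouped into these blocks; the keyword map and the joining with _and_ are simplifications of the mapping described above. Applied to the relations of Req. 7, it reproduces the if/then model shown earlier.

```
# A minimal sketch of the model formation step; relations are grouped into DSL
# blocks by their introducing keyword, with None standing for plain assignments.
BLOCK_OF = {"if": "if", "when": "if", "while": "if",
            "until": "until", "else": "else", "otherwise": "else", None: "then"}

def form_model(labelled_relations):
    """labelled_relations: list of (keyword, relation) pairs in requirement order."""
    blocks = {}
    for keyword, relation in labelled_relations:
        blocks.setdefault(BLOCK_OF.get(keyword, "then"), []).append(relation)
    return "\n".join(f"{name}({' and '.join(rels)})" for name, rels in blocks.items())

print(form_model([("if", "device_temperature() > T_Hi"),
                  (None, "limit_maximum_power(G_Max)"),
                  (None, "indicate_event_high_device_temperature()")]))
```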
**Discussion.** There exist many different, more or less common textual and graphical representations of behavioral models, for example temporal logic, cause-effect graphs, UML diagrams (sequence diagrams, class diagrams, activity diagrams, etc.), OCL constraints, Petri nets, or Matlab Simulink models. Initially, we planned to use sequence diagrams as the output format since they are very common and can be represented textually and graphically8. However, we realized that some parts of the requirements, such as _until_-conditions or temporal behavior, cannot be handled easily. For more complex requirements, the overview gets lost due to many nested fragments. Pseudocode has the advantage of being human- and machine-readable. It is a kind of more abstract, intermediate format; one has to define a mapping for the keywords for further processing. We see this as an advantage, since this mapping can be adapted to different use cases and needs. It includes all information to be mapped to other model formats.
Footnote 8: [https://sequencediagram.org/](https://sequencediagram.org/)
## 3 Existing Methodologies
In this section, we want to give a brief overview of existing methodologies for requirements formalization.
### Controlled Natural Language (CNL)
The use of a Controlled Natural Language (CNL) is an old and common approach to enable automatic analysis. The requirements need to be written in a restricted vocabulary, following specific templates (also called patterns or boilerplates). Commonly used templates include the SOPHIST template [32], the EARS template [26] and the FRETISH template [14].
Early NLP methods used dictionaries and a large set of rules to extract syntactic and semantic information such as syntax trees and semantic roles to form models [31, 8]. More recently, Allala et al. [2] exploit templates to represent requirements and utilize Stanford CoreNLP to create test cases and meta-modeling for validation. The FRET tool can be used to map the structured requirement with the help of templates and convert them into formal models [11]. Giannakopoulou et al. [14] use
FRETISH requirements and derive temporal logic formulas from them. However, these approaches use rule sets rather than any kind of NLP method.
### Dependency and Part of Speech (POS)
As explained in Section 2.2, part-of-speech (POS) tags categorize words according to their grammatical properties, and dependency tags identify grammatical structure of the sentence. Different libraries like NLTK, spaCy and Stanford CoreNLP can be used for POS tagging and dependency parsing. Dependency annotation can also be done using Universal Dependencies Guidelines on software requirements [18].
Pre-defined patterns are applied to dependency trees to identify subtrees which help in the generation of cause-effect graphs [12]. The Unstructured Information Management Architecture (UIMA) framework has been used to extract POS tags and determine phrases to help identify actions and conditions to form models from use case descriptions [36]. Fischbach et al. [12] also construct a few patterns with a combination of POS and dependency tags. Koscinski et al. [20] use Stanford CoreNLP to assign dependency tags and use pre-defined patterns to identify syntactical categories from which requirements models can be generated with a small number of rules. Fritz et al. [13] use the combination of both POS and dependency tags to identify semantic frames to map requirements into an existing structure. The latter two papers are very similar to our first approach, but use different categories and rules.
### Named Entity Recognition (NER)
Phrases from sentences can be classified into some predefined categories known as named entities. NER is the task of assigning these entities. Commonly used categories are person, organization, location, etc., but these categories are not helpful for requirements analysis. To perform NER for requirement texts, one needs to define new categories and manually annotate a dataset to train ML-based approaches. Annotating a dataset is a laborious and time-consuming task. Also, the definition of categories can vary depending on the requirement type. Malik et al. [24] and Herwanto et al. [19] defined 10 and 5 entities, respectively, for software requirement specifications and privacy related tasks. In both approaches, the dataset is annotated manually. Multiple checks with multiple human annotators are carried out to get better quality data. Nayak et al. [28] have trained an NER model for creating expressions with 9 entity categories, which is then used to formalize requirements.
### Semantic Role Labeling (SRL)
As described in Section 2.3, in Semantic Role Labeling (SRL), words or phrases in a sentence are assigned labels that indicate their semantic role in the sentence, such as the agent, the patient, etc. Semantic frames have been widely used for requirements analysis in recent years. From simple dependency parsing rules [33] to machine learning models [4, 34], much has been used to extract semantic roles and frames.
Sengupta et al. [33] use the Stanford Dependency manual to create rules that help extract basic arguments for requirement analysis, whereas Fritz et al. [13] use spaCy to extract POS and dependency tags from which they create rules to extract semantic roles. Rules can also be created to recognize semantic relations based on the lexeme (word form) and its variants [5].
FrameNet was utilized to determine semantic relationships between requirements with inheritance knowledge [1]. VerbNet and other open source resources were used to perform semantic analysis for specification representation [23].
Recently, machine learning techniques have also been considered for semantic role extraction. Wang et al. [38] use the SRL model of the CogComp NLP pipeline to generate domain models with a rule set. Mate Tools [4] has been a commonly used ML model for SRL which uses linear logistic regression. Diamantopoulos et al. [10] extended Mate tools' semantic role labeler by training it with additional lexical attributes as features, and used it for mapping of requirements to formal representations. The latter work is similar to our second approach but we utilize a much more recent SRL model with very flexible rules.
### Translation and Back-Translation
Translation of requirement texts to some kind of model directly in one step is not an easy task. As seen from all the above mentioned methods, it requires multiple steps and multiple methods to form the output model. We can use the concept of Neural Machine Translation (NMT)/Code Generation to translate requirements to the desired pseudocode. Code generation has been used to convert complex text into programming language code using machine learning models [22]. It can also be used to convert text into an Abstract Syntax Tree (AST) which represents the syntactic structure of the desired code [30]. This would require the manual creation of a large dataset with requirement texts and their corresponding pseudocode or AST, which is again a laborious and time-consuming task.
Neural Machine Translation can also be used to back-translate output models to requirement text. Tools like OpenNMT have been used to convert temporal logic into natural language requirements [9]. A set of rules can also be applied to create functional natural language requirements from data models [7]. A systematic survey on the generation of textual requirements specifications from models can be found in [29].
## 4 Discussion
In view of the many recent developments in this area of research, the journey towards the development of mature requirements formalization techniques has only just begun. Since the aim of this work is to present principal ideas on this research topic, we also want to discuss some findings and shortcomings that we identified during our investigations. The tremendous survey of Zhao et al. [39] has already identified some
key findings for the NLP-supported requirements engineering process, namely (i) a huge gap between the state of the art and the state of the practice, i.e., an insufficient industrial evaluation, (ii) little evidence of industrial adoption of the proposed tools, (iii) the lack of shared RE-specific language resources, and (iv) the lack of NLP expertise in RE research to advise on the choice of NLP technologies. We agree that these findings also apply to the requirements formalization process.
### Lessons Learned
The international XIVT project1 addressed the challenge of automatically analyzing and extracting requirements in order to minimize the time for testing highly configurable, variant-rich systems. During the three-year project period, we had many interesting talks and discussions with research and industry partners on this research area. The approaches presented in this work are the result of countless internal and external discussions, which shows that it is not an easy task and very manifold, but also a very interesting topic.
Footnote 1: [https://www.xivt.org/](https://www.xivt.org/)
**Generality.** It is very tempting to try to develop a single algorithm that is general enough to handle a wide range of natural language requirements. But just as natural language is diverse, so too would such an algorithm have to be. Requirements are formulated by such a wide variety of people with different motivations, experience and expert knowledge, from different branches and with different social backgrounds, that it is currently impossible to create one algorithm for the huge variety of styles of unstructured natural language. The many existing approaches and our own developments show that it is reasonable to automate this process to a high degree for a specific use case, domain and writing style. In short: _Don't worry about being specific!_
**Requirement quality.** The human language is full of misunderstandings, ambiguities, inconsistent and incomplete formulations. When developing a rule set we were strongly biased by the given requirements. We kept trying to improve the rules to make them more and more accurate. In discussion with our industry partners, we realized that some parts of the requirements seemed to be somewhat poorly written. After a short reformulation, our algorithm could handle them much more easily. This shows that the algorithmic treatment of the formalization process reveals many minor issues that need to be fixed by the requirements engineers, and thus also contributes to a higher quality of the requirements. In short: _Don't always blame the model!_
**Degree of automation.** For a long time during our development, we planned to build a standard input/output tool that would generate the requirements models fully automatically without asking the user for any input or confirmation. However, we have found that we are unable to properly process the data if some information within the pipeline is missing. Additionally, our industry partners have requested that there always be a quick verification option that prompts the user for missing or uncertain information. So if even humans need to review many details to formalize a set of requirements, how should an algorithm be able to generate correct and complete models fully automatically? The fact that requirements analysis is a highly interactive process should also be taken into account when implementing the tool. It is much more desirable for industry to have assistance in creating correct requirements models than to have a fully automated algorithm that produces some erroneous models. In the end, the required working time of an engineer decides whether the review of automatically generated models is preferable to the manual creation process. That is, the algorithms need to be highly efficient to have a chance of being used by industry; therefore, we believe that an interactive, performant user interface is essential. In addition, the information obtained from user input can also be used to improve the underlying ML algorithms. In short: _Go for user interaction!_
**Availability of data.** A crucial obstacle in this research area is that the data situation for freely available requirements with corresponding models is very poor. One would need a large corpus of these data from different use cases to compare different algorithms. Unless a reliable and sufficiently large amount of ground truth exists, a benchmark of different approaches and tools is not possible [37]. Even if one takes the time in a group of scientists to analyze, label, and model requirements, detailed discussions with industry stakeholders are essential and time-consuming. Consider the quote of Steve Jobs: _"A lot of times, people don't know what they want until you show it to them"10_. Thus, the success of ML-trained algorithms relies on having a large amount of labeled and unlabeled data sets available. In short: _Share your data!_
Footnote 10: Business Week 12 May 1998, [https://www.oxfordreference.com/view/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00005922](https://www.oxfordreference.com/view/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00005922)
**Availability of tools.** Similarly, the availability of ready-to-use algorithms and tools is very limited. Only a few tools are publicly available [39], and most of them are in a prototypical stage. This makes it very difficult for industry to identify useful tools for the application to their use cases. In short: _Go for GitHub!_
**Evolution.** Most approaches in the literature deal with a fixed set of requirements. Likewise, most freely available requirements specifications are provided in a single fixed document (with a few exceptions11). However, this is completely contrary to the highly iterative and evolving process of requirements elicitation. The corresponding requirements engineering is just as dynamic as the software development itself. On the other hand, we observe that NLP models have evolved very rapidly in recent years. Since the publication of the BERT models, there has been an exploding amount of research about the application and further training of these models. Researchers are hunting for better and better models with higher accuracy; algorithms from one or two years ago are already outdated and need to be revised when used for further development. In short: _Everything evolves!_
### Possible Future Developments
From our perspective, there are a number of emerging, sophisticated NLP methods and models that can be applied to the requirements formalization process. Although it is very difficult to look into the distant future of NLP research and predict developments, we do see some direct follow up to current approaches and can envision some future developments.
**Named Entity Recognition (NER).** The advantage of NER is the direct identification of model entities when they are used as labels. However, the labeled dataset must be very large to achieve reasonable accuracy. To reduce the intensive labelling work, one could use pre-trained models (e.g. provided by AllenNLP) and fine-tune them with newly annotated datasets with new categories.
**Semantic Role Labeling (SRL).** The current SRL models already have an acceptable accuracy. However, we also found that the predictions can change suddenly if just one word or comma is changed. The model may not work as well for domain-specific wording or the writing style of requirements. Therefore, the improvement in accuracy could be studied if the SRL models are pre-trained with some labeled requirements data (a starting point could be [10]).
**Question-Answer Driven Semantic Role Labeling (QA-SRL).** As mentioned above, requirements analysis is highly interactive and needs various inputs from the user. It would be a new level of automation if an algorithm itself were able to formulate questions and process the given answers to clarify missing or uncertain information for generating the model.
**Transfer learning (TL).** It is very successful in other fields of NLP research to train a model on some related tasks with a larger amount of data and then to fine-tune it for the actual task with much less labeled data. We could imagine using models that have already been trained for requirements classification or ambiguity detection.
**Online learning.** Current approaches use NLP models that are trained once and do not change according to user feedback. It would be much more convenient for the user if the algorithm learned from the input and suggested better results for subsequent requirements. This could be done by some sort of post-training of the models or by more sophisticated continuous learning methods. This could also be useful for a completely new use case and domain, where the algorithm learns the rules from the beginning and one could avoid implementing a rule set at all.
**Translation/back-translation.** Directly translating requirements into models with a large ML model may be far in the future or may simply not be reasonable. But there are interesting developments in the research area of translating text to code that could be helpful in formalizing requirements as well. The other direction of generating boilerplates sounds much simpler, but also needs to be studied in detail. We can imagine that in the future a coexistence of requirements and models will be possible, where hand-written requirements are transformed into models and transformed back
into structured text that can be used for further processing - a kind of "speaking models".
**Semantic similarity.** As shown in our previous work [17], semantic similarity between requirements and product design descriptions (if available) is helpful for generating more concrete requirements models, i.e., identifying signal and parameter names from the design specification instead of generating abstract entity names. However, identifying the correct Boolean value (antonym detection), for example, is still a difficult task in NLP research. Moreover, requirements formalization and quality analysis are strongly interlinked. Semantic similarity can be useful for consistency checking, e.g. for identifying identical signals and parameters used with slightly different terms within the set of requirements, see [38] for a first approach.
**Auxiliary documents.** Requirements do not fall from the sky. Usually, an extensive elicitation phase takes place in advance. One could make use of preparatory documents such as interviews, meeting notes, etc., or implementation artifacts such as issue descriptions (bug reports, feature requests, etc.). They can provide context, different formulations, and missing information. It would also be interesting to explore how to support the entire requirements development process. One would need to study the process in great detail, track and analyze the documents, and identify some successful workflows that can be assisted by NLP approaches (a starting point could be the FRET tool [14]).
## 5 Conclusion
In this work, we have presented and discussed principal ideas and state-of-the-art methodologies from the field of NLP to automatically generate requirements models from natural language requirements. We have presented an NLP pipeline and demonstrated the iterative development of rules based on NLP information in order to guide readers on how to create their own set of rules and methods according to their specific use cases and needs. We have studied two different approaches in detail. The first approach, using dependency and POS tags, shows good results but is somewhat inflexible in adapting the underlying rule set for different use cases and domains. The second approach, using semantic roles, shows similarly good results, but requires less effort to create a set of rules and can be easily adapted to specific use cases and domains. The use of a human- and machine-readable format for the requirements models appears suitable for easy reading comprehension and at the same time for automatic further processing. The results show that the current pre-trained NLP models are suitable to automate the requirements formalization process to a high degree.
Furthermore, we provided an overview of the literature and recent developments in this area of research and discussed some findings and shortcomings (lessons learned) that we identified throughout the development of our approaches. Finally, we suggested some possible future developments in NLP research for an improved requirements formalization process.
#### Acknowledgments.
This research was funded by the German Federal Ministry of Education and Research (BMBF) within the ITEA projects XIVT (grant no. 01IS18059E) and SmartDelta (grant no. 01IS21083E). We thank AKKA Germany GmbH and Alstom for providing industrial use cases for the demonstration of the presented methods. We are grateful to Prof. Holger Schlingloff from Fraunhofer FOKUS/HU Berlin for his valuable recommendations.
|
2309.14509 | DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme
Long Sequence Transformer Models | Computation in a typical Transformer-based large language model (LLM) can be
characterized by batch size, hidden dimension, number of layers, and sequence
length. Until now, system works for accelerating LLM training have focused on
the first three dimensions: data parallelism for batch size, tensor parallelism
for hidden size and pipeline parallelism for model depth or layers. These
widely studied forms of parallelism are not targeted or optimized for long
sequence Transformer models. Given practical application needs for long
sequence LLM, renewed attentions are being drawn to sequence parallelism.
However, existing works in sequence parallelism are constrained by
memory-communication inefficiency, limiting their scalability to long sequence
large models. In this work, we introduce DeepSpeed-Ulysses, a novel, portable
and effective methodology for enabling highly efficient and scalable LLM
training with extremely long sequence length. DeepSpeed-Ulysses at its core
partitions input data along the sequence dimension and employs an efficient
all-to-all collective communication for attention computation. Theoretical
communication analysis shows that whereas other methods incur communication
overhead as sequence length increases, DeepSpeed-Ulysses maintains constant
communication volume when sequence length and compute devices are increased
proportionally. Furthermore, experimental evaluations show that
DeepSpeed-Ulysses trains 2.5x faster with 4x longer sequence length than the
existing method SOTA baseline. | Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, Yuxiong He | 2023-09-25T20:15:57Z | http://arxiv.org/abs/2309.14509v2 | DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
###### Abstract
Computation in a typical Transformer-based large language model (LLM) can be characterized by batch size, hidden dimension, number of layers, and sequence length. Until now, system works for accelerating LLM training have focused on the first three dimensions: data parallelism for batch size, tensor parallelism for hidden size and pipeline parallelism for model depth or layers. These widely studied forms of parallelism are not targeted or optimized for long sequence Transformer models. Given practical application needs for long sequence LLM, renewed attentions are being drawn to sequence parallelism. However, existing works in sequence parallelism are constrained by memory-communication inefficiency, limiting their scalability to long sequence large models. In this work, we introduce DeepSpeed-Ulysses, a novel, portable and effective methodology for enabling highly efficient and scalable LLM training with extremely long sequence length. DeepSpeed-Ulysses at its core partitions input data along the sequence dimension and employs an efficient all-to-all collective communication for attention computation. Theoretical communication analysis shows that whereas other methods incur communication overhead as sequence length increases, DeepSpeed-Ulysses maintains constant communication volume when sequence length and compute devices are increased proportionally. Furthermore, experimental evaluations show that DeepSpeed-Ulysses trains 2.5x faster with 4x longer sequence length than the existing method SOTA baseline.
## 1 Introduction
Training large models with long sequences is becoming very important across the board from generative AI to models for scientific discovery. On the generative AI side, conversational AI, knowledge-rich long document summarization and video generation require reasoning over long contexts in spatial and temporal domains. For example, multimodal foundation models such as ones that process speech, images and waveforms concurrently require long context reasoning over high dimensional inputs with long sequences. Similarly, chapter- and book-level summarization (estimated at tens and hundreds of thousands of words) is of great importance in conversational AI and abstractive summarization tasks (Beltagy et al., 2020; Kryscinski et al., 2022; MosaicML, 2023) and has been shown to benefit from long sequence training (Xiong et al., 2023; Peng et al., 2023; Touvron et al., 2023). The debut of ChatGPT (and subsequent similar open source and "product" LLM brands) has pushed chat applications to the forefront of modern AI, making them more relevant than ever before. Processing long sequences is crucial for supporting longer histories in chat applications (Touvron et al., 2023).
Long sequence length is equally critical for AI for science, opening doors for a better understanding of structural biology, health care, climate and weather forecasting (Nguyen et al., 2023) and large molecular simulation (Zvyagin et al., 2022). For instance, by adapting large language models to gene sequences, we can create language models that can learn the evolutionary patterns of genomes using simple alphabets and extremely long sequences (the human genome
has 6.4 billion letters) [22]. In health care, a diagnostic predictive model conditioned on an entire patient care record requires the context of long sequences [14, 15].
Despite the emerging importance of long sequence length for both generative AI and AI for science, existing large model training systems and the underlying parallelism technologies (data, tensor, pipeline, sequence parallelism) are limited in their ability to support efficient long sequence training. Two challenges with existing parallelism approaches come to the fore. First, existing parallelism approaches such as data, tensor and pipeline parallelism cannot address scaling along the sequence dimension. Second, existing sequence parallelism approaches are not effective because of memory-communication inefficiencies. Furthermore, existing approaches have limited usability, requiring intrusive and error-prone code refactoring.
In this paper, we introduce DeepSpeed-Ulysses (or Ulysses, a very long novel), a simple, portable, and effective methodology for enabling highly efficient and scalable LLM training with extremely long sequence lengths. DeepSpeed-Ulysses partitions individual samples along the sequence dimension among participating GPUs. Then right before the attention computation, it employs all-to-all communication collective on the partitioned queries, keys and values such that each GPU receives the full sequence but only for a non-overlapping subset of the attention heads. This allows the participating GPUs to compute attention for different attention heads in parallel. Finally, DeepSpeed-Ulysses employs another all-to-all to gather the results along the attention heads while re-partitioning along the sequence dimension.
In this work, we put forward the following contributions of DeepSpeed-Ulysses to advance state of the art in long sequence parallelism:
* DeepSpeed-Ulysses trains Transformer models 4x larger sequence lengths than existing systems, while enabling training with sequences with over a million tokens.
* Communication reduction of over 10x compared to existing systems, resulting in throughput improvements of up to 2.5x, and sustained throughput of over 175 TFlops/GPU (over 54% of hardware peak).
* Fully general and implementation agnostic attention: DeepSpeed sequence parallelism (Ulysses) supports dense as well as sparse attention, and it works with efficient attention implementations such as FlashAttention v2 [16].
* Support for massive model training: DeepSpeed sequence parallelism works together with ZeRO-3 to not only support large sequence lengths but also massive model sizes.
* Easy-to-use and portable, requiring minimal code changes to the existing training frameworks.
In subsequent sections, we provide background and related work, a detailed discussion of DeepSpeed sequence parallelism core design, communication complexity analysis, experimental evaluation and comparison with existing work.
## 2 Background and Related Work
In this section, we present a brief overview of the Transformer architecture, the modes of parallelism used to accelerate Transformer training, and a discussion of work closely related to our approach.
### Background
This section briefly introduces the Transformer architecture and highlights different modes of parallelism for deep neural networks in general and the Transformer model in particular. This brief discussion is followed by a specific focus on closely related work.
#### 2.1.1 Transformer Architecture
Shown in Figure 1 is a sketch of the building blocks of a typical multihead attention Transformer architecture [23]. It consists of input sequences which are projected into queries (\(Q\)), keys (\(K\)) and values (\(V\)) embeddings. _QKV_ are typically 3D tensors of size \(N\times b\times d\), where \(N\) is the sequence length, \(b\) is the micro batch size and \(d\) is the hidden dimension. The \(QKV\) tensors are fed to the attention block, a central component of the Transformer model. Outputs of the attention are inputs to the multilayer perceptron (MLP) or position-wise feed-forward block of the Transformer architecture.
The attention block followed by the MLP block is replicated multiple times to form an encoder, a decoder or an encoder-decoder Transformer network.
#### 2.1.2 Mode of Parallelism
Data parallelism [Dean et al., 2012] is the de facto method of accelerating neural network training and has been applied widely with different neural network architectures and applications. Data parallelism in its simplest form partitions input data across the sample or batch dimension while replicating model parameters across compute devices. Data parallelism is effective when the batch size is sufficiently large to hide communication cost in compute. However, it is limited when the model is large and model parameter replication across devices is practically infeasible. ZeRO [Rajbhandari et al., 2020, 2021] optimization addresses this problem by partitioning model parameters across available compute devices. Moreover, large batch sizes are known to impact model quality [Keskar et al., 2016].
It is worth noting that our proposed approach is orthogonal to both data parallelism and ZeRO; it can be used with both methods. Also, by leveraging sequence parallelism to keep the global batch size at a reasonable size on large systems, we effectively ameliorate the impact of large batch size on model convergence. Sequence parallelism serves two purposes in this regard. First, sequence parallelism can accelerate time to solution for the same (already explored) long sequence length; in other words, sequence parallelism reduces the iteration time in proportion to the additional compute resources. Second, sequence parallelism enables longer sequence training or continual pretraining where the training context length gradually increases over time [Xiong et al., 2023]. Consider a real-world scenario of large-scale training on 1024 GPUs. The initial exploratory or pretraining setup of a (proxy) LLM has a sequence length of 8192 (8K) and a micro batch size of 1 per GPU (thus, an 8 million token global batch size). A simple change to improve the quality of the pretrained model requires a change of sequence length from 8K to 32K, which would result in a global batch size of approximately 32 million tokens. However, increasing the global batch size is not an option due to the negative impact on model quality. Therefore, sequence parallelism comes in handy as a system optimization technique with no requirement for laborious hyperparameter search. In this scenario, sequence parallelism allows large batch sizes to be split across multiple GPUs without increasing the global batch size, regardless of the sequence length.
Tensor [Shoeybi et al., 2019] and pipeline parallelism [Narayanan et al., 2019, Huang et al., 2018, Narayanan et al., 2021] are two other popular methods for large scale training. Collectively, tensor and pipeline parallelism are called model parallelism and are targeted at compute operators in large models. In contrast to data parallelism, model parallelism is used when models are too large (as is the case for many LLMs) and cannot be fully replicated across data parallel ranks. Tensor parallelism splits compute operators (i.e., attention and MLPs) within a layer and pipeline parallelism splits the model in a depth-wise (layer-wise) fashion. 3D parallelism [Team and Majumder, 2020, Smith et al., 2022] combines data parallelism, tensor parallelism and pipeline parallelism to achieve higher throughput in comparison to its three constituent components, at the cost of extensive code rewrite and productivity overhead [Wang et al., 2023].
### Related Work
For a broad overview and survey of distributed training methods for deep neural networks, please see [Ben-Nun and Hoefler, 2019]. These methods are broadly categorized into data and model parallelism as described above. However, all existing parallel methods are limited in dealing with the intermediate activation memory overhead associated with extremely long sequences.
Figure 1: Multi-head attention Transformer
While recent works in sequence parallelism address the memory overhead, they are lacking in communication efficiency, thus limited in scaling capability. Similar to our work, all existing works in sequence parallelism partition the input data along sequence dimension but differ in what input projections are partitioned and how partitions are aggregated and communicated for attention computation.
The authors in [11] (henceforward called _ColAI-SP_) introduce ring self-attention, a ring-like communication collective in which query projections are local whereas key and value projections are transmitted in a ring-style fashion to compute global attention, resulting in communication complexity linear in the message size, \(M\). The Megatron-LM sequence parallelism [13] approach is tightly integrated with Megatron tensor parallelism. Megatron-LM partitions the sequence along the sequence dimension and applies allgather and reduce-scatter collectives to aggregate the _QKV_ projections for attention computation. Communication complexity analysis shows that, unlike our approach, the Megatron-LM sequence parallelism communication volume increases linearly with the message size (\(M\)) regardless of the number of compute devices. DeepSpeed-Ulysses, on the other hand, keeps the communication volume constant by increasing GPUs proportionally to the message size or sequence length; see Section 3.2 for more details.
Table 1 summarizes how DeepSpeed-Ulysses differs from other existing methods. DeepSpeed-Ulysses has a communication efficiency advantage over the other two methods. It also benefits from leveraging ZeRO [14] optimization for model parameter partitioning across both sequence and data parallel groups. DeepSpeed-Ulysses supports different kinds of attention and is easy to use. Megatron-LM sequence parallelism is tightly integrated with Megatron-LM tensor parallelism, limiting both its memory efficiency and ease of use. _ColAI-SP_ requires a different (specific) kind of attention and is not easy to use. It is not clear how well _ColAI-SP_ ring self-attention generalizes to other attention types and mechanisms.
There are related works on sparse Transformers, particularly focusing on full-attention approximation, such as sparse attention [15, 16, 17]. There are also recent works on single-GPU memory- and compute-efficient attention. A popular example in this category is Flash attention [15, 16], which leverages known techniques such as tiling and recomputation for compute and memory efficiency. These works are orthogonal to ours and were leveraged accordingly.
## 3 DeepSpeed-Ulysses Core Design
### System Design
| Method | Comm complexity | Activation memory efficiency | Parameter memory efficiency | Attention agnostic | Ease of use |
| --- | --- | --- | --- | --- | --- |
| ColAI-SP [11] | \(O(M)\) | ✓ | x | x | x |
| Megatron-SP [13] | \(O(M)\) | ✓ | x | ✓ | x |
| **DS-Ulysses** | \(O(M/P)\) | ✓ | ✓ | ✓ | ✓ |

Table 1: Comparison of our work (DS-Ulysses) to other sequence parallelism methods.

Figure 2: DeepSpeed sequence parallelism (DeepSpeed-Ulysses) design

Figure 2 shows the core design of DeepSpeed-Ulysses. As with the known transformer architecture, the design consists of input sequences \(N\) partitioned across \(P\) available devices. Each local \(N/P\) partition is projected into queries (\(Q\)), keys (\(K\)) and values (\(V\)) embeddings. Next, the _QKV_ embeddings are gathered into global _QKV_ through highly optimized all-to-all collectives between the participating compute devices. Following the all-to-all collective, attention is computed per head in the form:
\[\text{Output context}=\mathrm{Softmax}\left(QK^{T}/\sqrt{d}\right)V \tag{1}\]
After the attention computation, another all-to-all collective transforms the output context tensor of the attention computation back to sequence (_N/P_) parallel form for the subsequent operators (MLP MatMul, layer norm, etc.) in the remaining modules of the transformer layer block.
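For illustration, a minimal PyTorch sketch of this sequence-to-head re-partitioning is given below. It assumes that `torch.distributed` is already initialized with \(P\) ranks and that both the sequence length and the number of attention heads are divisible by \(P\); the helper names and tensor layout are our own illustration rather than the actual DeepSpeed implementation.

```python
# Illustrative sketch of DeepSpeed-Ulysses-style attention (not the actual
# implementation). Assumes torch.distributed is initialized with P ranks,
# and that the sequence length N and the head count are divisible by P.
import torch
import torch.distributed as dist

def seq_to_head_parallel(x, group=None):
    """All-to-all: [N/P, b, heads, d_head] -> [N, b, heads/P, d_head]."""
    P = dist.get_world_size(group)
    inp = [t.contiguous() for t in torch.chunk(x, P, dim=2)]   # split heads
    out = [torch.empty_like(t) for t in inp]
    dist.all_to_all(out, inp, group=group)                      # exchange shards
    return torch.cat(out, dim=0)                                # full sequence locally

def head_to_seq_parallel(x, group=None):
    """Inverse all-to-all: [N, b, heads/P, d_head] -> [N/P, b, heads, d_head]."""
    P = dist.get_world_size(group)
    inp = [t.contiguous() for t in torch.chunk(x, P, dim=0)]    # split sequence
    out = [torch.empty_like(t) for t in inp]
    dist.all_to_all(out, inp, group=group)
    return torch.cat(out, dim=2)                                # all heads locally

def ulysses_attention(q, k, v, attn_fn, group=None):
    """q, k, v: local [N/P, b, heads, d_head] shards; attn_fn: any attention."""
    q, k, v = (seq_to_head_parallel(t, group) for t in (q, k, v))
    ctx = attn_fn(q, k, v)                   # full-sequence attention on heads/P heads
    return head_to_seq_parallel(ctx, group)  # back to the sequence-parallel layout
```

Because the attention itself is computed on full sequences for a subset of heads, `attn_fn` can be any dense or sparse attention kernel.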
### Communication Analysis
What distinguishes DeepSpeed-Ulysses from the other existing long-sequence approaches is our much smaller aggregate communication volume and overall better scalability with increasing degree of sequence parallelism compared to existing solutions, as demonstrated by the communication volume analysis below:
On modern clusters with intra-node NVSwitch interconnect and inter-node fat tree IB topology, the communication volume transmitted per link for an all-to-all of aggregate message size \(M\) over \(P\) GPUs is _M/P_. For a transformer model with hidden size h, sequence length N, and parallelism degree P, DeepSpeed-Ulysses performs an all-to-all for the _QKV_ projections with an aggregate message size of _3Nh_ before the attention computation, and another all-to-all for the output context projection with a size of _Nh_ for each transformer layer. Therefore, DeepSpeed sequence parallelism incurs an aggregate communication volume per link of _4Nh/P_ (i.e., a complexity of _O(N/P)_). Note that this communication volume is constant when both \(N\) and \(P\) are increased proportionally.
In contrast, existing approaches like Megatron-LM incur a communication volume that increases linearly with N regardless of P, resulting in a communication complexity of _O(N)_. For instance, Megatron-LM performs two all-gathers with a message volume of _Nh_ and two reduce-scatters with a volume of _Nh_ for each transformer layer. However, the cost of each all-gather and reduce-scatter of size \(M\) remains \(M\) even as \(P\) increases, instead of _M/P_. Therefore, Megatron-LM sequence parallelism incurs a communication volume per link of _4Nh_, which is \(P\) times larger than that of DeepSpeed sequence parallelism. This allows DeepSpeed sequence parallelism to enable training with extremely long sequences while achieving significantly higher training efficiency compared to the existing approaches. Our evaluation results match this analysis.
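To make the difference concrete, the short calculation below compares the two per-link volumes for a few illustrative configurations (example numbers only, not measurements):

```python
# Per-link communication volume per transformer layer (in elements),
# following the analysis above; numbers are illustrative, not measurements.
def ulysses_volume(N, h, P):
    return 4 * N * h / P      # all-to-alls over 3Nh + Nh, each costing M/P per link

def megatron_sp_volume(N, h, P):
    return 4 * N * h          # two all-gathers + two reduce-scatters of size Nh each

N, h = 262_144, 4_096         # example sequence length and hidden size
for P in (8, 64, 256):
    ratio = ulysses_volume(N, h, P) / megatron_sp_volume(N, h, P)
    print(f"P={P}: Ulysses/Megatron-SP volume ratio = {ratio:.4f}")
# The ratio is 1/P: the Ulysses per-link volume shrinks as GPUs are added,
# while the Megatron-LM SP volume stays constant.
```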
### Memory Efficiency
While DeepSpeed sequence parallelism reduces the activation memory when training with longer sequences, it does not impact the memory consumed by the model states. Therefore, to support large sequence length training with a large language model, DeepSpeed sequence parallelism is integrated with ZeRO-3. ZeRO Redundancy Optimizer Stage 3 (ZeRO-3) (Rajbhandari et al., 2020, 2021) is a memory optimization technique for training large models. Unlike the classic data parallel training of neural networks where model states are replicated across data parallel ranks, ZeRO-3 optimizes memory usage by partitioning model states across data parallel ranks. However, with sequence parallelism, training data can be considered in both batch (sample) and sequence dimensions and the associated parallel groups combined to form a larger group for ZeRO parallelism. Therefore, we extend ZeRO-3 partitioning to combination of data parallel and sequence parallel ranks. In other words, in DeepSpeed sequence parallelism, ZeRO partitions model states across both sequence and data parallel group and collects per rank partitions (allgather) when they are needed. Similarly, gradients are reduced across both data and sequence parallel ranks for parameter update. ZeRO support allows for huge memory savings in both sequence and data dimensions and enables scaling not just to large sequence lengths but also to large models.
### General and Attention Agnostic Solution
The DeepSpeed implementation of the distributed attention module is general enough to support any attention: e.g., self-attention, cross-attention and causal attention in both their dense and sparse counterparts, and their various optimized kernels that support long sequences at the local attention level, such as different versions of FlashAttention. The general property of DeepSpeed-Ulysses stems from the modular nature of its core design: an attention-centric sequence parallelism design. Prior to the attention computation, parallelism is over the sequence dimension (N/P partitions); during the attention computation, parallelism is over heads, with full attention per head but fewer heads per device. Thus, the attention computation can be replaced with any type of attention mechanism, e.g., dense attention and various forms of sparse attention.
## 4 Evaluation
We evaluate DeepSpeed-Ulysses (DeepSpeed Sequence) on GPT (Radford et al., 2019), a foundation model for many NLP tasks, on up to 256 A100 GPUs. Our evaluations are five-fold: i) sequence length scalability, ii) throughput for dense attention and comparison with an existing system, iii) throughput with sparse attention and comparison with an existing system, iv) a parallel scaling study and v) a convergence study of DeepSpeed sequence parallelism. We discuss and present evaluations from each of these categories next.
### Sequence Length Scalability
The first set of experiments is strong scaling of sequence length up to 1 million tokens on a 1.2 billion parameter GPT model. Results of this evaluation are shown in Figure 3. DeepSpeed sequence parallelism allows increasing the sequence length linearly with the number of GPUs, and maintains similar computation throughput across different sequence lengths at the appropriate GPU counts.
### Dense Attention Evaluation
Next, we evaluate DeepSpeed sequence parallelism on 7 billion (7B) and 30 billion (30B) parameter GPT dense attention models and compare against Megatron-LM's sequence parallelism on 32 and 64 A100 GPUs respectively. The results of these evaluations are shown in Figures 4 and 5.
We compare DeepSpeed sequence parallelism with Megatron-LM for 7B and 30B models running various sequence lengths. For our evaluation we chose the sequence parallelism degree and micro-batch size that produced the best performance (measured as throughput or TFLOPs) for both DeepSpeed sequence parallelism and Megatron-LM; we call these the optimal (batch size-sequence length) configurations. For DeepSpeed sequence parallelism, we always use ZeRO parallelism degrees of 32 and 64 for the 7B and 30B models, respectively.
Figure 3: DeepSpeed sequence parallelism strong scalability evaluation at different sequence lengths and GPU counts

Figures 4 and 5 show that DeepSpeed sequence parallelism consistently outperforms Megatron-LM for the sequence lengths that can be run with both. In addition, DeepSpeed sequence parallelism can run longer sequences than Megatron-LM. The performance advantages of DeepSpeed sequence parallelism are two-fold: (1) DeepSpeed sequence parallelism in combination with ZeRO-3 fits more samples than Megatron-LM because of the memory optimization, leading to higher throughput; (2) DeepSpeed sequence parallelism benefits from efficient all-to-all communication relative to the all-gather communication applied in Megatron-LM sequence parallelism.
### Sparse Attention Evaluation
Similarly, we evaluate DeepSpeed sequence parallelism on 7 billion and 30 billion parameter sparse attention models and benchmark against Megatron-LM sequence parallelism. Results of our evaluation are shown in Figures 6 and 7. We observe similar trends with sparse attention as with the dense attention experiments. We observe more than 2x throughput performance of DeepSpeed sequence parallelism compared to Megatron-LM. For memory saving, DeepSpeed sequence parallelism leveraging ZeRO-3 scales to 4x longer sequence lengths than Megatron-LM.
DeepSpeed sequence parallelism outperforms Megatron-LM for the sequence lengths that can be run with both. In fact, the current DeepSpeed throughput is bottlenecked by the local sparse attention implementation, and as a result DeepSpeed throughput decreases as the sequence length increases. We expect this gap in performance between DeepSpeed and Megatron-LM to increase further for larger sequence lengths as we improve the performance of the local sparse attention implementation in the future.
Furthermore, we conduct parallel scaling studies of DeepSpeed-Ulysses along two axes. First, we fix the sequence length at 131,072 tokens and increase the GPU count from 64 to 256. Second, we increase the GPU count proportionally to the increase in sequence length. The results of these experiments are shown in Tables 2 and 3, respectively. For both evaluations, we used the GPT-7B dense model at a global batch size of 8. The tables show the iteration time in milliseconds as well as the achieved throughput measured in per-GPU TFLOPs. Table 2 can be interpreted as strong scaling and shows that execution time decreases almost linearly as we increase the GPU count. Table 3, on the other hand, is a form of weak scaling (not in the traditional sense) with the caveat that attention computation, a function of sequence length, is quadratic in complexity. In other words, as we increase the sequence length, the work increases quadratically.
The slight decrease in throughput as we increase the communication workload (that is, sequence length or GPU count) can be attributed to communication overhead. This overhead notwithstanding, we observe good scaling at high percentages of theoretical peak GPU performance across the two studies. These good scaling results indicate good parallel efficiency of DeepSpeed-Ulysses.
### Convergence Study
Lastly, Figure 8 shows the convergence of a 1.3 billion parameter GPT model at 32K sequence length on 8 A100 GPUs, with the sequence parallelism degree set to 4 for both DeepSpeed-Ulysses and Megatron-LM sequence parallelism. For DeepSpeed sequence parallelism, we evaluate convergence with different ZeRO stages. DeepSpeed sequence parallelism is a purely system optimization technique that enables training of long sequence Transformer models; thus there is no (negative) impact on the quality of trained models. This assertion is validated through experiments and is shown in Figure 8.
| Seqlen | GPUs | Time (ms) | TFLOPs |
| --- | --- | --- | --- |
| 65536 | 64 | 9676.76 | 161.36 |
| 131072 | 128 | 17052.51 | 157.41 |
| 262144 | 256 | 33486.5 | 147.4 |

Table 3: Parallel scaling study with varying sequence length
Figure 5: Evaluation of DeepSpeed-Ulysses and Megatron LM on 30B parameter model with dense attention (64 GPUs)
## 5 Conclusion
In conclusion, we present the memory- and communication-efficient DeepSpeed Sequence as an enabling technology for long sequence large Transformer training. DeepSpeed Sequence enables sequence parallelism across GPUs (and, by extension, other AI accelerators), parallelizing the sequence across all components of the Transformer model, including streamlined support for SOTA Flash (dense and sparse) attention. Training with DeepSpeed Sequence allows both model size and sequence length to scale nearly indefinitely, unbounded by single-GPU memory limitations and at a high fraction of peak compute performance.
|
2301.13444 | Rethinking Soft Label in Label Distribution Learning Perspective | The primary goal of training in early convolutional neural networks (CNN) is
the higher generalization performance of the model. However, as the expected
calibration error (ECE), which quantifies the explanatory power of model
inference, was recently introduced, research on training models that can be
explained is in progress. We hypothesized that a gap in supervision criteria
during training and inference leads to overconfidence, and investigated that
performing label distribution learning (LDL) would enhance the model
calibration in CNN training. To verify this assumption, we used a simple LDL
setting with recent data augmentation techniques. Based on a series of
experiments, the following results are obtained: 1) State-of-the-art KD methods
significantly impede model calibration. 2) Training using LDL with recent data
augmentation can have excellent effects on model calibration and even in
generalization performance. 3) Online LDL brings additional improvements in
model calibration and accuracy with long training, especially in large-size
models. Using the proposed approach, we simultaneously achieved a lower ECE and
higher generalization performance for the image classification datasets
CIFAR10, 100, STL10, and ImageNet. We performed several visualizations and
analyses and witnessed several interesting behaviors in CNN training with the
LDL. | Seungbum Hong, Jihun Yoon, Bogyu Park, Min-Kook Choi | 2023-01-31T06:47:19Z | http://arxiv.org/abs/2301.13444v1 | # Rethinking Soft Label in
###### Abstract
The primary goal of training in early convolutional neural networks (CNN) is the higher generalization performance of the model. However, as the expected calibration error (ECE), which quantifies the explanatory power of model inference, was recently introduced, research on training models that can be explained is in progress. We hypothesized that a gap in supervision criteria during training and inference leads to overconfidence, and investigated that performing label distribution learning (LDL) would enhance the model calibration in CNN training. To verify this assumption, we used a simple LDL setting with recent data augmentation techniques. Based on a series of experiments, the following results are obtained: 1) State-of-the-art KD methods significantly impede model calibration. 2) Training using LDL with recent data augmentation can have excellent effects on model calibration and even in generalization performance. 3) Online LDL brings additional improvements in model calibration and accuracy with long training, especially in large-size models. Using the proposed approach, we simultaneously achieved a lower ECE and higher generalization performance for the image classification datasets CIFAR10, 100, STL10, and ImageNet. We performed several visualizations and analyses and witnessed several interesting behaviors in CNN training with the LDL.
## 1 Introduction
The supervision of a convolutional neural network (CNN) using a hard label has been very successful in most image classification problems [1; 2; 3]. However, for CNNs trained with hard labels, [4] analyzed the overconfidence of network predictions that arises as the number of weights of the network increases. To handle this phenomenon, [4] proposed the expected calibration error (ECE) to estimate the confidence of the model, and several approaches for calibrating the overconfidence of deep learning models were suggested, but improvements in model calibration were not accompanied by gains in generalization performance. Recently, several studies have shown that data augmentation is effective for model generalization as well as calibration [5; 6], but the results are not significant in terms of generalization performance.
Label distribution learning (LDL) is designed for effective training through label distributions when the types of labels for supervision are difficult to define discretely, and approaches the label generation (or enhancement) process as an optimization problem [20; 21]. Typically, LDL has been applied to applications that include inherent label ambiguity, such as facial age estimation, head pose estimation, facial emotion estimation, multi-label learning, partial multi-label learning, and video summarization [21; 22]. From an LDL perspective, label smoothing [14; 17; 19] is considered a subset of LDL. We
are inspired by the basic concepts of LDL and assume that LDL can potentially overcome the discrepancy between one-hot label-based training and maximum confidence-based testing. To introduce this idea in a simple way, we exploited soft labels from teacher networks as a baseline distributed label. To learn the label distribution online, differing from the former optimization approach, we simultaneously applied recent data augmentation techniques that merge labels and data during training.
By applying the on/offline label distribution learning scenarios, we simultaneously obtained an improvement in model generalization and calibration without additional regularization or architecture modification. The left section of Figure 1 shows examples of the difference between hard label and LDL-based supervision for cat recognition. The graphs at the right of Figure 1 show the reliability diagram [4] when different settings of modern KD approaches are applied to the same CNN model on CIFAR100. To verify the strength of LDL for model generalization and calibration, we performed a series of image classification tasks on datasets such as CIFAR10, 100 [23], STL10 [24], and ImageNet [25]. Based on this series of experiments with image classification, we confirmed that most recent KDs cause severe overconfidence, which impedes model calibration, and that even a simple LDL approach can achieve better classification accuracy while suppressing model overconfidence.
## 2 Related Works
**Model calibration.** ECE is an error measure that quantifies whether the prediction of a neural network accurately estimates the true likelihood of the input data for the trained classes [4; 40]. In [4], temperature scaling was proposed as an effective way to correct overconfidence, and the reliability diagram was used to visualize model confidence for CNNs. In [30], various structural dropout methods and experiments on the drop rates of each method were applied to CNN models to analyze the correlation between model accuracy and ECE. VWCI [31] reduced the ECE and improved the recognition performance by defining a confidence integration loss as a probabilistic regularization term derived from a Bayesian model using multiple inferences based on stochastic depth and dropout. It has also been reported that the AvUC loss based on uncertainty estimation in the model aids model calibration [32]. In addition, model training with mixup augmentation has been demonstrated to be effective for model calibration. However, it did not achieve much in improving the generalization performance of the model relative to its calibration benefit [5; 6].
**Label smoothing and label distribution learning.** Label smoothing was proposed to soften the hard label during training according to a given coefficient, to prevent overconfidence and improve generalization performance [17; 18]. In [18], the authors analyzed the effect of label smoothing on deep neural network training by visualizing the penultimate fully connected layer of deep neural networks. According to the analysis results, there is evidence that a teacher network trained with label smoothing can invalidate the effect of student model training in the KD scenario. In recent studies [19; 28], the effect of label smoothing on teacher networks in the KD scenario was analyzed in more detail to extend the results of [18]. In [19], a quantification method showing that label smoothing erases meaningful information in the teacher network logits was proposed. In [28], the relationship between KD and label smoothing was analyzed from the bias-variance perspective.
Figure 1: **The difference between the supervision using the hard label (blue box) and using the soft label (red box) for image classification.** An illustration of the LDL for the cat class at the left is shown, and the reliability diagrams at the right show comparisons of the traditional hard label-based and LDL-based classification accuracy and ECE. Our LDL-based training successfully achieved better classification accuracy and lower ECE simultaneously.
From the LDL perspective, label smoothing can be regarded as a possible solution for LDL through constant softening of the hard label. We included label smoothing in our comparisons as one of the baselines to suppress overconfidence [18].
**Knowledge distillation.** Since the introduction of KD [7], a vast number of approaches for knowledge distillation have been proposed [8-16, 26, 27]. FitNet [8] proposed a KD method that makes the feature maps of the student similar to those of the teacher network. In recent years, various approaches have been proposed from the perspective of representation learning, such as RKD [10], which achieves transfer learning through geometric relations over the output of the model, CRD using metric learning [12], mutual learning-based KD [13], self-supervised learning-based KD [15], and weighted soft label-based KD [16]. Among the variations of KD, the born-again network [9], which achieves transfer learning through repetitive training of student models without the use of a teacher network, and, similar to [9], teacher-free variations of KD [14, 29] were also introduced. [18] and [19] explain the relationship between KD and label smoothing of teacher networks through empirical experiments. We argue that KD should be described in terms of LDL rather than label smoothing. We have observed that modern KDs spoil model calibration to improve generalization performance.
## 3 On/Offline Label Distribution Learning (LDL)
In this section, we briefly introduce the notations and approaches for KD and label smoothing (Section 3.1), which are basic prerequisites for our LDL-based approaches. Subsequently, on and offline approaches for LDL are described in Sections 3.2 and 3.3.
### Preliminaries
**Knowledge distillation.** When the weight \(w\) of the last fully connected layer for the \(i\)th feature input \(x\) is given, and the softmax output for the \(k\)th class is \(p_{i}^{k}=\frac{e^{(x_{i})^{T}w_{k}}}{\sum_{l=1}^{L}e^{(x_{i})^{T}w_{l}}}\), the softened output of the neural network is given by [7].
\[\bar{p}_{i}^{k}(x)=\frac{e^{((x_{i})^{T}w_{k})/\tau}}{\sum_{l=1}^{L}e^{((x_{i} )^{T}w_{l})/\tau}}, \tag{1}\]
where \(\tau\) is a temperature scaling parameter that determines the degree of softening, and the total loss function is \(L=(1-\lambda)L_{CE}(y,p_{\theta_{s}})+\lambda L_{CE}(\bar{p}_{\theta_{t}}, \bar{p}_{\theta_{s}})\) for the teacher model \(\theta_{t}\) and the student model \(\theta_{s}\), where \(y\) is the one-hot label and \(L_{CE}(p,y)=-\sum_{k=1}^{K}y^{k}\log(p^{k})\).
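As a reference for this notation, a minimal PyTorch sketch of the temperature-scaled KD objective could look as follows (variable names and default hyperparameters are our own choices; no \(\tau^{2}\) rescaling is applied, following the formula above):

```python
# Minimal sketch of the KD objective: (1 - lambda) * CE(y, p_s) +
# lambda * CE(softened teacher, softened student), per Eq. (1).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, tau=4.0, lam=0.9):
    # hard-label cross entropy L_CE(y, p_theta_s)
    ce = F.cross_entropy(student_logits, targets)
    # softened teacher/student distributions with temperature tau
    p_t = F.softmax(teacher_logits / tau, dim=1)
    log_p_s = F.log_softmax(student_logits / tau, dim=1)
    # cross entropy between softened teacher and softened student outputs
    soft_ce = -(p_t * log_p_s).sum(dim=1).mean()
    return (1.0 - lam) * ce + lam * soft_ce
```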
**Label smoothing.** Label smoothing of the hard label \(y_{i}\) for the same feature input is given as follows:
\[\tilde{y}_{i}^{k}=(1-\alpha)y_{i}^{k}+\frac{\alpha}{K-1}, \tag{2}\]
where \(\alpha\) is the smoothing coefficient, and the probabilities for each class except the hard target corresponding to the \(k\)th class are evenly distributed as \(\alpha/(K-1)\).
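A small sketch of this smoothing operation, following the textual description above (the non-target classes receive \(\alpha/(K-1)\) and the target keeps \(1-\alpha\)), could be:

```python
# Sketch of label smoothing for a batch of one-hot labels of shape [batch, K].
import torch

def smooth_labels(y: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Target class keeps 1 - alpha; the other K-1 classes share alpha evenly."""
    K = y.size(1)
    return (1.0 - alpha) * y + (alpha / (K - 1)) * (1.0 - y)

y = torch.eye(5)[torch.tensor([0, 2])]   # two one-hot labels, K = 5
print(smooth_labels(y, alpha=0.1))
```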
**Expected calibration error.** The ECE for estimating the confidence of neural networks proposed in [4] is estimated over \(N_{test}\) samples, where \(p_{i}\) is the softmax output inferred for each test sample and the index with the maximum probability in the output is \(\hat{c}_{i}=\operatorname*{argmax}_{k}(p_{i}^{k})\).
\[ECE=\sum_{m=1}^{M}\frac{|H_{m}|}{N_{test}}\left|\frac{1}{|H_{m}|}\sum_{i\in H_{m}}\left(1(\hat{c}_{i}=c_{i})-p_{i}\right)\right|, \tag{3}\]
where \(H_{m}\) is the set of indices of test samples whose confidence falls into the \(m\)th of \(M\) interval bins \(((m-1)/M,m/M]\). Typically, ECE is measured with a histogram of bin size 0.1 by setting \(M=10\).
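For reference, ECE as in Eq. (3) can be computed by a straightforward histogram binning of the predicted confidences; the sketch below is our own minimal version:

```python
# Minimal sketch of ECE: bin samples by confidence and accumulate the
# weighted gap between per-bin accuracy and per-bin average confidence.
import numpy as np

def expected_calibration_error(probs, labels, M=10):
    """probs: [N_test, K] softmax outputs; labels: [N_test] true class ids."""
    conf = probs.max(axis=1)             # confidence of the predicted class
    pred = probs.argmax(axis=1)          # predicted class \hat{c}_i
    correct = (pred == labels).astype(float)
    ece = 0.0
    for m in range(M):
        lo, hi = m / M, (m + 1) / M
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()      # accuracy in bin H_m
            avg_conf = conf[in_bin].mean()    # average confidence in bin H_m
            ece += in_bin.mean() * abs(acc - avg_conf)
    return ece
```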
**Criterion of LDL with cross entropy loss.** The main objective of cross entropy loss with LDL perspective is given by:
\[L_{CE}(p,z)=-\sum_{k=1}^{K}z^{k}\log(p^{k}), \tag{4}\]
where \(z\) is a label vector that satisfies \(\sum_{j}z_{j}=1\). Label smoothing or a soft label given by the output of a teacher network can be regarded as a specific solution for \(z\) (\(z=\tilde{y}(\alpha)\) or \(z=\bar{p}_{\theta_{t}}\)). We reformulated the problem \(\text{argmax}_{\theta}(E[h_{D_{train}}(x,y;\theta)]-E[h_{D_{test}}(x,y;\theta)])\) of finding the CNN with the maximum generalization performance on a specific image classification dataset \(D\ni\{D_{train},D_{test}\}\) as follows:
\[\operatorname*{argmax}_{\theta,Z}E[h_{D_{train}}(x,z;\theta)]-E[h_{D_{test}}( x,y;\theta)], \tag{5}\]
where \(Z_{D_{train}}\ni\{z_{1},...,z_{N_{train}}\}\) is a set of new labels for all training data. Equation (5) can be solved as an optimization problem of finding the optimal label for each input data pair \((x_{i},z_{i})\), as in the previously proposed LDL approaches [20; 21]. Since traditional approaches cannot update \(z\) and \(\theta\) simultaneously during deep neural network training, we applied simple but effective on/offline approaches. For simplicity, we used a basic KD setting in which a teacher network generates a new label set \(Z\) for target (student) neural network training. The offline setting generates \(Z\) as soft labels from the output of the teacher network. \(Z\) is not updated during training, but some variations are possible depending on how the teacher networks are ensembled. The online setting continuously transforms \(Z\) while updating \(\theta\). Since it is difficult to presume an optimal \(Z\), we generated diverse labels with modern data augmentation techniques. As with the offline setting, there are several variations depending on how the teacher networks are ensembled and how labels are generated. Figure 2 shows the different types of training configurations for the baselines and LDL.
### Offline LDL
We simplified the problem by using the KD setting, with the teacher output as the new label, to extract feasible solutions for each sample pair \((x_{i},z_{i};\theta)\). The offline LDL is illustrated in the green box in Figure 2, such that the set of sample pairs \((X_{D_{train}},Z_{D_{train}})\) with \(X_{D_{train}}\ni\{x_{1},...,x_{N_{train}}\}\) is fixed during the training process. The cross-entropy loss for training with the label generated by the teacher \(\theta_{t}\) and the student model to be trained \(\theta_{s}\) is as follows:
\[L_{CE}(p,\bar{z})=-\sum_{k=1}^{K}\bar{z}_{i}^{k}\log(p_{i,\theta_{s}}^{k}), \tag{6}\]
where \(\bar{z}\) is defined by way of generating new labels. We used simple variations of \(\bar{z}=f(\cdot)\) for offline LDL (see Figure 2): soft label \(f(x_{i}^{k};\theta_{t})\) (Off-LDL, #4), soft label with teacher ensemble \(\bar{z}=\frac{1}{N}\sum_{n=1}^{N}f(x_{i}^{k};\theta_{t,n})\), where \(N\) is the number of teacher models (Off-En-LDL, #5), and linear combination of soft labels from multiple teachers \(-\sum_{n=1}^{N}\sum_{k=1}^{K}\bar{z}_{i,\theta_{t,n}}^{k}\log(p_{i,\theta_{s}} ^{k})\) (Off-MT-LDL, #6).
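A minimal sketch of how the offline targets \(\bar{z}\) for the three variants could be produced and used is given below; the function names are ours, and the teacher models are assumed to be already trained and frozen:

```python
# Sketch of offline LDL target generation (Off-LDL, Off-En-LDL, Off-MT-LDL)
# and the corresponding cross-entropy loss of Eq. (6). Illustrative only.
import torch
import torch.nn.functional as F

@torch.no_grad()
def offline_targets(x, teachers, mode="single"):
    """Generate fixed soft labels z_bar for a batch x from trained teachers."""
    outs = [F.softmax(t(x), dim=1) for t in teachers]
    if mode == "single":        # Off-LDL: one teacher's soft label
        return outs[0]
    if mode == "ensemble":      # Off-En-LDL: average of teacher soft labels
        return torch.stack(outs).mean(dim=0)
    return outs                 # Off-MT-LDL: keep all, sum the CE terms in the loss

def offline_ldl_loss(student_logits, z_bar):
    """Cross entropy against a distributed label."""
    log_p = F.log_softmax(student_logits, dim=1)
    if isinstance(z_bar, list):                       # multi-teacher linear combination
        return sum(-(z * log_p).sum(dim=1).mean() for z in z_bar)
    return -(z_bar * log_p).sum(dim=1).mean()
```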
Figure 2: **Schematic of the training configurations.** We used abbreviations to simplify the notation: #1 learning from scratch (Vanilla), #2 label smoothing (LS), #3 knowledge distillation (KD), #4 soft label (Off-LDL), #5 teacher ensemble for soft label (Off-En-LDL), #6 linear combination with multiple soft labels (Off-MT-LDL), #7 soft label with data augmentation (On-LDL), #8 soft label using data augmentation with teacher ensemble (On-En-LDL), #9 linear combination of multiple soft labels using data augmentation (On-MT-LDL). The red box represents the existing training methods, the green box represents the offline approaches, and the blue box represents the online approaches.
### Online LDL
To reflect the objective of Equation (5) during training, it is necessary to update \(Z\) and \(\theta\) simultaneously. We applied recent data augmentation techniques that allow online LDL to be adopted in a simple way. Among the proposed data augmentation techniques, some manipulate the input data and labels together. A generalized form of an augmentation technique that considers data and labels together is
\[\hat{x}=\mathrm{M}_{1}\otimes x_{1}+...+\mathrm{M}_{P}\otimes x_{P}\] \[\hat{y}=\lambda_{1}y_{1}+...+\lambda_{P}y_{P}, \tag{7}\]
where \(\hat{x}\) is an augmented sample mixing up to \(P\) samples with mixed label \(\hat{y}\) and \(\sum_{i=1}^{P}\lambda_{i}=1\). \(\mathrm{M}\) is a blending mask of the same width and height as the data, satisfying \(\sum_{i=1}^{P}\mathrm{M}_{i}(u,v)=1\), where \(u\) and \(v\) indicate the pixel location. Each augmentation algorithm stochastically determines the location and size of each sample used for blending, mainly following a uniform distribution, and \(\mathrm{M}\) is defined differently for each augmentation technique. Typically, when \(P=2\), \(x_{1}\) is the target image and, for CutOut [3], \(x_{2}\) is an all-zero image. We exploited mixup [33], CutMix [34], and RICAP [35] for data augmentation, and online LDL with data augmentation is as follows:
\[L_{CE}(\hat{p},\hat{z})=-\sum_{k=1}^{K}\hat{z}_{i}^{k}\log(\hat{p}_{i,\theta_{ s}}^{k}), \tag{8}\]
where \(\hat{p}_{i}^{k}=\frac{e^{(\hat{x}_{i})^{T}\omega_{k}}}{\sum_{l=1}^{K}e^{(\hat{x}_{i})^{T}\omega_{l}}}\). The same data augmentation is applied to the teacher models to produce the enhanced label \(\hat{z}_{i}^{k}\); the mixed label \(\hat{y}\) itself is not used for training. Similar to the offline approaches corresponding to #5 and #6 in Figure 2, online LDL can be easily extended. The enhanced label from the augmentation-based teacher ensemble is obtained as \(\hat{z}=\frac{1}{N}\sum_{n=1}^{N}f(\hat{x}_{i}^{k};\theta_{t,n})\) (On-En-LDL, #8), and the linear combination of the augmentation-based soft labels from multiple teachers is given as \(-\sum_{n=1}^{N}\sum_{k=1}^{K}\hat{z}_{i,\theta_{t,n}}^{k}\log(\hat{p}_{i,\theta_{s}}^{k})\) (On-MT-LDL, #9).
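The corresponding online variant can be sketched as follows; the CutMix-style box sampling and the Beta(1, 1) mixing coefficient are simplified, illustrative choices rather than the exact settings of [34].

```python
# Sketch of online LDL (On-LDL / On-En-LDL): the input is augmented, the teacher(s)
# relabel the augmented sample (Eq. 8), and the mixed one-hot label y-hat is not used.
import torch
import torch.nn.functional as F

def cutmix_like(x):
    """Blend each image with a shuffled partner inside one random box (P = 2 in Eq. 7)."""
    b, _, h, w = x.shape
    perm = torch.randperm(b, device=x.device)
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    ch, cw = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    x1, x2 = max(cx - cw // 2, 0), min(cx + cw // 2, w)
    x_hat = x.clone()
    x_hat[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]
    return x_hat

def online_ldl_loss(x, student, teachers):
    x_hat = cutmix_like(x)
    with torch.no_grad():                      # enhanced label from the teacher ensemble
        z_hat = torch.stack([F.softmax(t(x_hat), dim=1) for t in teachers]).mean(dim=0)
    log_p = F.log_softmax(student(x_hat), dim=1)
    return -(z_hat * log_p).sum(dim=1).mean()
```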
## 4 Experimental Results
**Experimental setting.** We performed a series of experiments on the CIFAR10, 100 [23], and STL10 [24] datasets to verify the performance of on/offline LDL. First, the major experiments were performed with teacher-student configurations of the widely used ResNet architectures [3]. We divided ResNet into small and large networks according to model size: small models include ResNet20 (0.27M), 56 (0.85M), and 110 (1.7M), and large models include ResNet18 (11.18M), 50 (23.51M), and 200 (62.62M). For CIFAR10, 100, and STL10, training ran for a total of 240 epochs with an initial learning rate of 0.05, scaled by 0.1 at epochs 150, 180, and 210. The weight decay was set to \(5.0\times 10^{-4}\) and the batch size to 64. We also tested a long training scenario with the basic training configuration, under the assumption that LDL can achieve better performance when the number of training pairs of samples and labels is large, especially in online LDL. For long training, a total of 350 epochs was trained, starting with an initial learning rate of 0.05, and 0.1 learning rate scaling was applied at 150, 200, 250, and 300 epochs2. In all ensemble (En) and multiple teacher (MT) settings, ResNet20, 32, 44, 56, and 110 were used together as teachers, trained with the same learning schedule. We measured image classification accuracy and ECE [4] to evaluate the performance. Visualization of reliability diagrams is provided to intuitively check the strength of the model calibration of each network, in the same way as in [4].
Footnote 2: The long training is marked with ’+’.
**LDL with data augmentation.** The augmentation algorithms applied for On-LDL are mixup [33], CutMix [34], and RICAP [35]; the default hyper-parameters for data augmentation follow the original implementation of each algorithm. Table 1 shows the recognition results of on/offline LDL for each data augmentation technique. All three algorithms performed rather poorly on small networks but achieved significant improvements on large networks. For long training, we only report the LDL result of the best-performing augmentation technique. All training was performed with 3 different random seeds. We omit the variances of the calibration scores, as the differences are not significant. We selected CutMix [34] or RICAP [35] as the data augmentation for the remaining LDL experiments. Similar to the results of [5], the mixup augmentation did not yield a significant accuracy improvement, although a model calibration effect could be seen in some cases. In contrast, CutMix and RICAP achieved steady performance improvements and better model calibration. Offline LDL achieved the best accuracy and model calibration on ResNet20, while online LDL improved significantly as the network size increased. With ResNet50, up to 1.9% higher accuracy and better model calibration were achieved compared with data augmentation alone.
**CIFAR10 and STL10.** Table 2 shows the evaluation results of LDL training on the CIFAR10 and STL10 datasets. On these two relatively small datasets, we did not evaluate the large networks, as the small networks already provided sufficient performance improvement. On both CIFAR10 and STL10, online LDL utilizing multiple teacher networks showed improvements in accuracy and model calibration. On CIFAR10, an accuracy increase of up to 1.6% and an ECE reduction of up to about 80% were obtained with ResNet56 compared to the vanilla model. On STL10, an accuracy increase of 3.39% and an ECE improvement of about 85% were obtained with ResNet20 compared to the vanilla model. On both CIFAR10 and STL10, RICAP achieved better performance than CutMix.
**CIFAR100.** Table 3 shows the evaluation results of LDL on CIFAR100. In the upper part of Table 3, the improvement in generalization performance is relatively insignificant for small networks. We hypothesized that a large number of weights is required to sufficiently learn the LDL-based label variants and therefore evaluated the small and large networks simultaneously on the CIFAR100 dataset. When the model has a large number of weights, it trains well on the label variations of the new label distribution, and sufficient performance improvement and model calibration can be achieved without multiple teachers. We obtained two main observations from the experiments with the three benchmarks: 1) the classification accuracy is mainly determined by the number of weights with LDL approaches, and the influence of the number of parameters in the teacher model is not noticeable; 2) the effectiveness of model calibration steadily improves regardless of the number of parameters in each model. We also compared the proposed LDL approaches with the SOTA KD methods [12; 15; 16] on the CIFAR100 dataset. Table 3 shows the comparison results with small and large student networks in terms of classification accuracy and ECE. In most cases, LDL-based methods achieved higher generalization performance and lower ECE simultaneously. Figure 3 shows this phenomenon more dramatically: when the number of parameters in the student network is small, the SOTA KD methods show a very high ECE score, have no explanatory power for model confidence,
\begin{table}
\begin{tabular}{c c c c c} \hline Teacher & ResNet110 & ResNet110 & ResNet200 & ResNet200 \\ Student (\# param) & ResNet20 (0.27M) & ResNet56 (0.85M) & ResNet11 (11.83M) & ResNet50 (0.235M) \\ \hline Vanilla & 69.32\(\pm\)02.70/0.70 & 72.28\(\pm\)0.09/0.123 & 77.83\(\pm\)0.35/0.080 & 78.04\(\pm\)0.17/0.07 \\ Vanilla [33] & 67.29\(\pm\)04.10/1.72/ & 13.06\(\pm\)10.11 & 78.71\(\pm\)0.29/0.131 & 79.73\(\pm\)0.51/0.059 \\ Vanilla [34] & 67.23\(\pm\)02.08/0.75 & 73.99\(\pm\)0.16/0.066 & 80.32\(\pm\)0.18/0.045 & 81.73\(\pm\)0.07/0.038 \\ Vanilla [35] & 68.64\(\pm\)0.12/0.070 & 73.95\(\pm\)0.17/0.027 & 80.10\(\pm\)0.05/0.039 & 81.47\(\pm\)0.34/0.03/0.039 \\ \hline \hline Off-LDL & **69.94\(\pm\)09.10/0.51** & 73.59\(\pm\)0.60/0.085 & 78.67\(\pm\)0.1/0.060 & 79.79\(\pm\)0.37/0.087 \\ On-LDL [33] & 68.42\(\pm\)02.10/1.30 & 73.73\(\pm\)0.67/0.115 & 79.76\(\pm\)0.3/1.012 & 81.03\(\pm\)0.01/0.051 \\ On-LDL [34] & 68.25\(\pm\)06.0/0.073 & 74.44\(\pm\)0.16/0.060 & 81.26\(\pm\)0.1/0.043 & 83.09\(\pm\)0.05/**0.030** \\ On-LDL [35] & 68.90\(\pm\)00.05/0.063 & 74.87\(\pm\)0.1/0.025 & 80.57\(\pm\)0.04/0.056 & 81.64\(\pm\)0.06/0.039 \\ On-LDL+ & 69.41\(\pm\)01.07/0.073 & **75.56\(\pm\)0.28/0.022** & **81.76\(\pm\)0.25/0.034** & **83.57\(\pm\)0.05**/0.038 \\ \hline \end{tabular}
\end{table}
Table 1: Classification accuracy (%) and ECE of vanilla and each LDL setup for CIFAR100 depending on each data augmentation methods [33; 34; 35].
\begin{table}
\begin{tabular}{c|c c|c} \hline & CIFAR10 & \multicolumn{2}{c|}{STL10} \\ \hline & Model & Model & Model \\ Method & ResNet20 & ResNet56 & ResNet20 & ResNet56 \\ \hline Vanilla & 92.56\(\pm\)0.10/0.033 & 93.88\(\pm\)0.08/0.038 & 83.44\(\pm\)0.10/0.067 & 84.15\(\pm\)0.31/0.074 \\ Label smoothing & 92.41\(\pm\)0.25/0.052 & 93.72\(\pm\)0.19/0.063 & 83.54\(\pm\)0.22/0.11 & 84.35\(\pm\)0.01/0.091 \\ KD (\(\alpha\)=0.1\(\tau\)=3) [7] & 92.56\(\pm\)0.0/0.032 & 93.94\(\pm\)0.03/0.038 & 83.95\(\pm\)0.56/0.070 & 84.62\(\pm\)0.1/0.070 \\ \hline Off-LDL & 92.62\(\pm\)0.18/0.028 & 93.96\(\pm\)0.16/0.031 & 84.14\(\pm\)0.42/0.055 & 84.57\(\pm\)0.13/0.054 \\ Off-En-LDL & 92.60\(\pm\)0.17/0.031 & 94.07\(\pm\)0.01/0.032 & 83.40\(\pm\)0.01/0.067 & 84.03\(\pm\)0.17/0.069 \\ Off-MT-LDL & 93.05\(\pm\)0.28/0.022 & 94.15\(\pm\)0.17/0.022 & 84.16\(\pm\)0.23/0.054 & 84.52\(\pm\)0.09/0.060 \\ \hline On-LDL & 92.20\(\pm\)0.08/0.009 & 94.04\(\pm\)0.09/0.013 & 85.87\(\pm\)0.19/0.12/0.032 & 86.24\(\pm\)0.37/0.029 \\ On-LDL+ & 93.33\(\pm\)0.02/0.008 & 94.32\(\pm\)0.05/0.011 & 85.98\(\pm\)0.04/0.023 & 86.61\(\pm\)0.15/0.029 \\ On-En-LDL & 93.25\(\pm\)0.06/0.016 & **94.48\(\pm\)0.09/0.016 & 86.45\(\pm\)0.17/0.030 & 86.92\(\pm\)0.42/0.034 \\ On-En-LDL & 93.14\(\pm\)0.005/0.006 & 94.44\(\pm\)0.07/0.067 & 86.44\(\pm\)0.14/0.08 & **87.08\(\pm\)0.06/0.009** \\ On-En-LDL+ & 93.71\(\pm\)0.02/0.017 & 94.46\(\pm\)0.27/0.017 & 86.82\(\pm\)0.30/0.030 & **87.13\(\pm\)0.08/0.037** \\ On-MT-LDL+ & **94.03\(\pm\)0.08/0.006** & 94.41\(\pm\)0.2/0.006 & **86.93\(\pm\)0.2/0.08** & 86.92\(\pm\)0.08/0.010 \\ \hline \end{tabular}
\end{table}
Table 2: Classification accuracy and ECE for small size ResNets for CIFAR10 and STL10.
and result in overconfidence. LDL techniques were able to achieve better model calibration compared to SOTA KDs as the number of parameters in the student network increased. Classification accuracy showed up to 2.86% improvement in ResNet200 compared to the best performing CRD+KD, and an improvement of at least 60% up to 88% compared to the SOTA KD methods in model calibration3.
Footnote 3: Types of teacher networks, long training results of SOTA KD methods, loss curves, and additional experimental results and analysis are in the supplementary material.
**ImageNet.** We evaluated on the ImageNet dataset [25] to verify the LDL methods on a large-scale dataset. For this evaluation, we set up multiple teacher and student network configurations. The learning scheduler followed the configuration of ?, and the on/offline LDL methods were tested. As shown in Table 4, similarly to the other datasets, a tendency of simultaneously improved accuracy and model calibration was observed compared to vanilla training and label smoothing.
**Other model architectures.** To validate the effect of LDL on various architectures, we performed an evaluation with a ResNet200 teacher and ResNeXt [36], DenseNet [37], and DLA [38] student networks on CIFAR100. Table 5 lists the accuracy and model calibration improvements obtained with the LDL approaches. Compared to the vanilla model, for these other types of architectures an accuracy improvement of about 3-5% and an ECE reduction of 20-66% were achieved.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Teacher & \multicolumn{3}{c}{ResNet110} \\ Student & ResNet20 & ResNet56 & ResNet110 \\ \hline Vanilla & 69.32±0.270/0.070 & 72.28±0.090/1.23 & 73.88±0.150/1.31 \\ Label smoothing & 69.43±0.20/0.053 & 72.87±0.170/**0.020** & 73.90±0.160/0.051 \\ \hline KD (\(\alpha\)=0.17,7.53) [7] & 71.05±0.270/0.071 & 72.76±0.11/1.18 & 73.60±0.160/1.35 \\ CRD [12] & 71.05±0.270/0.060 & 74.82±0.11/10.07 & 76.04±0.04/0.121 \\ CRD+KD [12] & **71.29±0.23**±0.129 & 75.38±0.27/10.27 & 76.67±0.46/0.125 \\ SSKD [15] & 71.00±0.040/1.122 & 74.88±0.14/10.19 & 75.73±0.17/1.18 \\ WSL [16] & 71.72±0.26/1.032 & 75.08±0.19/1.033 & 76.00±0.52/0.130 \\ \hline Off-MT-LDL & 70.75±0.15/**0.019** & 74.8±0.04/0.023 & 76.35±0.18/**0.016** \\ On-LDL+ & 69.41±0.17/0.073 & **75.54±0.24**/0.029 & **77.28±0.20**/0.025 \\ \hline \hline Teacher & \multicolumn{3}{c}{ResNet200} \\ Student & ResNet18 & ResNet50 & ResNet200 \\ \hline Vanilla & 77.83±0.55/0.080 & 78.57±0.35/0.107 & 79.47±0.58/0.101 \\ Label smoothing & 78.59±0.06/0.085 & 78.90±0.04/0.042 & 79.52±0.037 \\ \hline KD (\(\alpha\)=0.17,7.53) [7] & 77.73±0.16/0.067 & 79.12±0.45/10.13 & 80.10±0.23/0.100 \\ CRD+KD [12] & 80.41±0.14/0.108 & 80.34±0.07/0.118 & 81.58±0.20/1.10 \\ WSL [16] & 79.91±0.03/0.111 & 80.25±0.12/10.14 & 76.00±0.52/0.130 \\ \hline On-LDL & 81.26±0.11/0.043 & 83.92±0.05/**0.030** & 83.83±0.28/0.038 \\ On-LDL+ & **81.76±0.25/0.034** & **83.57±0.05**/0.038 & **84.44±0.13**/0.039 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification accuracy and ECE of the LDL and the SOTA KDs for the CIFAR 100. Among the LDL techniques, the performance of the models that achieved the highest accuracy or the lowest ECE were recorded.
Figure 3: **Performance change by methods according to small and large student models for CIFAR100.** The left graph shows the accuracy difference with the vanilla training by the number of weights of the student model, and the right graph shows the ECE difference with the vanilla training. The LDL technique shows continuous improvement in model correction as the accuracy increases, and the effect increases as the model size increases. On the other hand, SOTA KD methods have a steady improvement in accuracy, but rather impede model reliability.
**Why does LDL work?** Based on the evaluation results on the four datasets, LDL is observed to have an excellent effect on generalization performance and model calibration for CNN training. Figure 4 plots, on the test set, the average confidence of ResNet18 and ResNet50 for the ground-truth class on the x-axis against the per-class F1 score on the y-axis. With one-hot label-based training, or with offline LDL using only the soft labels of a teacher trained on one-hot labels, most classes have an average confidence of 0.9 or higher regardless of their F1 score. Applying data augmentation or label smoothing alleviates this over-confidence, but in the case of label smoothing the confidence distribution has an excessively large variance as a result of the forced softening. Online LDL appears to achieve effective model calibration by balancing the distribution of output confidence. Figure 5 plots the top 10 highest-probability classes for the best-performing class of each training method on CIFAR100. Not only does the training methodology change the best-performing class, but the distribution of output confidence for each class is also very different. The over-softening of label smoothing and the over-confidence of one-hot labels are also observed here. Figure 6 shows examples of soft labels obtained from the teacher network after the input sample undergoes data augmentation during training on ImageNet. We observed that the distribution of labels supervising the student networks differs significantly from the area-proportional CutMix label.
**Penultimate layer output visualization.** We plot the activations of the penultimate layer to visualize the effect of LDL on the feature representation, as in [18, 19]. 'beaver' and 'otter' are semantically similar classes, whereas 'dolphin' is a semantically different class. Interestingly, LDL is observed to play an appropriate role in separating semantically similar classes. Looking at the first row of Figure 7, online LDL with long training produces an activation geometry that is more effective for classifying semantically similar classes. Similarly, for the relationship between 'man', 'woman', and 'sunflower', although less prominent than in the previous example, online LDL produces geometries that are more efficient for classification than the other methods.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Teacher & ResNet152 & ResNet152 & ResNet152 & ResNet152 \\ Student & ResNet158 & ResNet50 & ResNet101 & ResNet152 \\ Vanilla & 70.3998.54, 0.014 & 75.9692.81, 0.032 & 77.4379.72, 0.045 & 78.2984.14, 0.050 \\ Label smoothing & 70.4089.52, 0.102 & 76.5693.12, 0.070 & 78.3694.05, 0.058 & 78.8294.30, 0.036 \\ \hline Off-LDL & **71.63/90.46**, **0.013** & 77.27/**93.61**, 0.021 & 78.72/**94.30**, 0.024 & 79.16/94.55, **0.021** \\ On-LDL & 69.03/88.91, 0.078 & 75.8492.99, **0.017** & **78.97/**94.27**, **0.016** & **79.51/**94.72**, 0.022 \\ On-LDL & 69.43/89.20, 0.062 & **77.30/**93.54**, 0.023 & 78.2984.28, 0.022 & 79.25/94.49, **0.021** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Classification accuracy (top1 / top5) and ECE for ImageNet dataset depending on each label enhancement setup.
Figure 4: **Plotting the relationship between the F1 score and the average output confidence for the GT class of the model according to the training methodology.** The average output confidence for each of the 100 classes of CIFAR100 and different shapes are marked for each F1 score. Values in parentheses are the accuracy and ECE of each method.
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & \multicolumn{3}{c}{Student (\# param)} \\ & ResNext29, 4x64d (27.1M) [36] & DenseNet121 (6.95M) [37] & DLA (16.29M) [38] \\ \hline Vanilla & 80.30\(\pm\)0.14/0.050 & 79.6720.260.081 & 77.27\(\pm\)0.50/0.099 \\ Label smoothing & 80.73\(\pm\)0.37/0.151 & 79.81\(\pm\)0.28/0.044 & 79.07\(\pm\)0.33/0.039 \\ KD (\(\alpha\)=0.1,\(T\)=3) [7] & 80.38\(\pm\)0.34/0.052 & 79.53\(\pm\)0.10/0.080 & 77.67\(\pm\)0.21/0.101 \\ \hline Off-LDL & 80.59\(\pm\)0.10/**0.039** & 80.29\(\pm\)0.20/0.062 & 78.12\(\pm\)0.06/0.081 \\ On-LDL & **84.16\(\pm\)0.13/**0.059** & **83.35\(\pm\)0.21/0.028** & **82.93\(\pm\)0.08/0.033** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation of different types of architectures.
## 5 Conclusion
We observed and analyzed the effects of label smoothing, KD, and data augmentation on classification accuracy and model confidence from the perspective of label distribution learning. Although the current approach has limitations, in that the label generation method is relatively simple and the study is limited to image classification, the experiments and visualizations on four datasets show that LDL-based training can simultaneously improve model accuracy and calibration. As future work, we plan to verify the utility of LDL on other tasks and to analyze LDL on theoretical grounds such as class-aware approaches and risk minimization [41].
Figure 5: **The top 10 high-probability classes according to the best-performing classes by training methodology.** For each method, the F1 score for the best performing class was recorded next to the class name and overall accuracy was recorded next to the method name.
Figure 6: **Examples of supervision by CutMix and LDL.** These examples from ImageNet are label distribution and input sample pairs obtained from a teacher network after CutMix augmentation.
Figure 7: **Visualization of penultimate layer’s activation.** The first row is the training samples of the ’beaver’, ’otter’, and ’dolphin’ classes, and the second row is the test samples. The third row is the training samples of the ’man’, ’woman’, and ’sunflower’ classes, and the last row is the test samples. All plots are visualization of the activation of ResNet50. |
2309.11471 | Noise-Crypt: Image Encryption with Non-linear Noise, Hybrid Chaotic
Maps, and Hashing | To secure the digital images over insecure transmission channels, a new image
encryption algorithm Noise-Crypt is proposed in this paper. Noise-Crypt
integrates non-linear random noise, hybrid chaotic maps, and SHA-256 hashing
algorithm. The utilized hybrid chaotic maps are the logistic-tent and the
logistic-sine-cosine map. The hybrid chaotic maps enhance the pseudorandom
sequence generation and selection of substitution boxes, while the
logistic-sine-cosine map induces non-linearity in the algorithm through random
noise. This deliberate inclusion of noise contributes to increased resistance
against cryptanalysis. The proposed scheme has been evaluated for several
security parameters, such as differential attacks, entropy, correlation, etc.
Extensive evaluation demonstrates the efficacy of the proposed scheme, with
almost ideal values of entropy of 7.99 and correlation of -0.0040. Results of
the security analysis validate the potency of the proposed scheme in achieving
robust image encryption. | Laiba Asghar, Fawad Ahmed, Muhammad Shahbaz Khan, Arshad Arshad, Jawad Ahmad | 2023-09-20T17:11:35Z | http://arxiv.org/abs/2309.11471v1 | # Noise-Crypt: Image Encryption with Non-linear Noise, Hybrid Chaotic Maps, and Hashing
###### Abstract
To secure the digital images over insecure transmission channels, a new image encryption algorithm Noise-Crypt is proposed in this paper. Noise-Crypt integrates non-linear random noise, hybrid chaotic maps, and SHA-256 hashing algorithm. The utilized hybrid chaotic maps are the logistic-tent and the logistic-sine-cosine map. The hybrid chaotic maps enhance the pseudorandom sequence generation and selection of substitution boxes, while the logistic-sine-cosine map induces non-linearity in the algorithm through random noise. This deliberate inclusion of noise contributes to increased resistance against cryptanalysis. The proposed scheme has been evaluated for several security parameters, such as differential attacks, entropy, correlation, etc. Extensive evaluation demonstrates the efficacy of the proposed scheme, with almost ideal values of entropy of 7.99 and correlation of -0.0040. Results of the security analysis validate the potency of the proposed scheme in achieving robust image encryption.
random noise, hybrid chaotic maps, hash, substitution, image encryption
## I Introduction
With the advancement of network technology, the volume of data exchanged over insecure networks has increased tremendously [1]. The ubiquity of such insecure data exchanges has given rise to an increase in cyber-attacks, which give intruders unauthorized access to digital images and disclose sensitive information [2]. These security concerns highlight the need to develop effective image encryption algorithms. Several cryptographic algorithms have been developed to ensure data confidentiality and integrity. However, image data has specific characteristics, such as containing a large amount of information and having high redundancy. Hence, encryption schemes like AES that are widely used for generic data encryption are not well suited for images [3]. A secure encryption algorithm must have the properties of diffusion and confusion: confusion is a change in pixel location, whereas diffusion refers to a change in grey-level intensities [4].
Chaos theory plays a critical role in image encryption due to its inherent characteristics, i.e., sensitivity to control parameters, nonlinearity, ergodicity, etc. Researchers prefer to integrate chaos into image encryption algorithms to increase the security of the encryption schemes [5]. Chaotic maps introduce non-linearity into encryption schemes, making them difficult for intruders to crack [6, 7]. Traditionally, chaotic maps are categorized into one-dimensional and multi-dimensional maps. One-dimensional chaotic maps are simple but have a limited key space [8]. Multi-dimensional maps have a large chaotic range but complex structures [9]. To minimize these disadvantages, researchers employ hybrid approaches that combine various maps to improve 1D maps [10].
In chaotic encryption schemes, another important component is substitution based on substitution boxes (S-boxes). In traditional S-box substitution techniques, the mapping is bijective, which means that each pixel of the original image is always replaced by the same S-box element. Such a technique is not effective for highly correlated data [11, 12], so S-box substitution alone is not enough. A good encryption system should be able to scramble the image effectively and should also be difficult to crack [13]. In this regard, introducing random noise into an encryption algorithm enhances its unpredictability, complicating potential decryption attempts by adversaries. This added layer of randomness disrupts patterns, making it challenging to discern any underlying structure or sequence. As a result, the security robustness of the encryption scheme is significantly bolstered. Another important component of encryption techniques is the use of a hash function. An ideal hash function produces distinct outputs for different inputs and generates a fixed-size output for any given input [14, 15, 16].
This paper presents a new image encryption method that uses chaotic S-box substitution, adds random noise, and includes a hash function. This approach not only ensures efficient encryption of the plaintext image but also enhances its resistance against unauthorized decryption attempts.
Main contributions of this paper are:
1. A novel image encryption scheme Noise-Crypt has been proposed. Noise-Crypt leverages random noise, which is generated through a hybrid chaotic map. This noise induces non-linearity and improves entropy of the proposed scheme making it more suitable for grayscale image encryption.
2. Integration of different chaotic maps to create improved hybrid maps. The logistic and tent maps have been combined to make a logistic-tent map, and the logistic,
sine, and cosine maps have been combined to create a logistic-sine-cosine map. This increased the chaotic region of traditional maps.
3. The SHA-256 hash function has been employed to ensure that any slight change in the plaintext image affects the cipher images and keys, enhancing the overall security and integrity of our encryption approach.
## II The Proposed Scheme - Noise-Crypt
Noise-Crypt uses the properties of chaos and hashing to encrypt images. Random noise is generated to introduce additional randomness and improve the entropy. The scheme can be broken down into two algorithms: Algorithm 1 generates the keys for S-box selection and bit-XOR operations using two hybrid chaotic maps, whereas Algorithm 2 describes the encryption process, comprising S-box substitution, random noise, and bit-XOR operations to generate the cipher image.
### _Steps_
1. Read plain image of size MxN.
2. Generate the hash sh_P of the plain image using the SHA-256 algorithm, take the first 11 characters of the hash, and convert them to a decimal number d.
3. Set dd = d/10^14 to ensure the number lies in the [0,1] range.
4. Set the initial value x0 to dd and iterate the logistic-tent map to generate a key sequence K of length (1, i), where i = M*N. Equation (1) represents the logistic-tent map mathematically: \[x_{n+1}=\begin{cases}\left(rx_{n}(1-x_{n})+(4-r)\,x_{n}/2\right)\bmod 1,&x_{n}<0.5\\ \left(rx_{n}(1-x_{n})+(4-r)\,(1-x_{n})/2\right)\bmod 1,&x_{n}\geq 0.5\end{cases} \tag{1}\] where parameter r \(\in\) (0, 4].
5. Multiply all the values of the chaotic sequence by 10^14 for precision. Use round(K) to round the values to the nearest integer.
6. Take mod(K, 3) on all values so that no value is greater than 2, giving 3 possible values for selecting among the 3 S-boxes. Use the reshape operation reshape(K, M, N), where M and N are the dimensions of the image, so that the chaotic sequence has the same size as the input image.
7. Calculate MSB and LSB of each pixel so that MSB corresponds to xth row and LSB corresponds to yth column of the S-box.
8. Generate the AES, Hussain, and Gray S-boxes for substitution. The S-box is selected randomly by the chaotic sequence K, and the pixel value I(i,j) is substituted with the selected S-box value Sn(i,j), where n is the index of the selected S-box.
Fig. 1: The Proposed Encryption Scheme
9. Divide the S\({}_{\text{image}}\) into fixed ZxZ blocks.
10. Initiate the logistic-tent map again to generate another key sequence K2 of length (1, Z), where Z is the length of the block; the initial value x0 is equal to dd. Equation (2) represents the logistic-tent map mathematically: \[x_{n+1}=\begin{cases}\left(rx_{n}(1-x_{n})+(4-r)\,x_{n}/2\right)\bmod 1,&x_{n}<0.5\\ \left(rx_{n}(1-x_{n})+(4-r)\,(1-x_{n})/2\right)\bmod 1,&x_{n}\geq 0.5\end{cases} \tag{2}\] where parameter r \(\in\) (0, 4].
11. Multiply all the values of the chaotic sequence by 10^14 for precision. Use round(K2) to round the values to the nearest integer.
12. Take mod(K2, 256) on all values so that no value is greater than 255 and they match the pixel value range. Use the reshape operation reshape(K2, Z, Z).
13. The first block b1 of S\({}_{\text{image}}\) (1:Z, 1:Z) is Bit X-ORed with the K2. \(bitxor(block1,K2)\).
14. The next block b2 is bit X-ORed with the output of the previous block, and the same procedure is followed for all remaining blocks: \(bitxor\) (\(Subimage\)\(ith\)\(block\), \(Xored\)\(image\)\(ith-1\)\(block\))
15. This step gives us the Xored image X\({}_{\text{image}}\).
16. Initiate the logistic-sine-cosine map to generate random noise RN of length (1, MxN); the initial value x0 is equal to dd. The logistic-sine-cosine map can be represented mathematically as (3): \[x_{n+1}=\cos\left(\pi\left(4rx_{n}(1-x_{n})+(1-r)\sin(\pi x_{n})-0.5\right)\right) \tag{3}\] where parameter r \(\in\) [0, 1].
17. Multiply all the values of the random noise by 10^14 for precision. Use round(RN) to round the values to the nearest integer.
18. Take mod(RN, 256) on all values so that no value is greater than 255 and all values match the pixel value range for the bit X-OR operation.
19. Use the reshape operation reshape(RN, M, N) to reshape the random noise into an MxN matrix.
20. The random noise RN and the XORed image Ximage are bitwise XORed to obtain the ciphertext image.
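A condensed NumPy sketch of the overall pipeline is given below. It is only illustrative: the three S-boxes are random 8-bit permutations standing in for the AES, Hussain, and Gray S-boxes, the block size and map parameters r1, r2 are assumed values within the stated chaotic ranges, and M and N are taken to be multiples of Z.

```python
import hashlib
import numpy as np

def logistic_tent(x0, r, n):                  # Eqs. (1)-(2)
    x, seq = x0, np.empty(n)
    for i in range(n):
        branch = (4 - r) * x / 2 if x < 0.5 else (4 - r) * (1 - x) / 2
        x = (r * x * (1 - x) + branch) % 1.0
        seq[i] = x
    return seq

def logistic_sine_cosine(x0, r, n):           # Eq. (3)
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = np.cos(np.pi * (4 * r * x * (1 - x) + (1 - r) * np.sin(np.pi * x) - 0.5))
        seq[i] = x
    return seq

def noise_crypt_encrypt(img, r1=3.9, r2=0.7, Z=16):
    M, N = img.shape
    d = int(hashlib.sha256(img.tobytes()).hexdigest()[:11], 16)
    dd = d / 1e14                             # plaintext-dependent initial condition in [0, 1]
    sboxes = [np.random.RandomState(s).permutation(256) for s in range(3)]  # placeholder S-boxes
    K = np.mod(np.round(logistic_tent(dd, r1, M * N) * 1e14), 3).astype(int).reshape(M, N)
    S = np.empty_like(img)
    for i in range(M):                        # chaotic S-box selection and substitution
        for j in range(N):
            msb, lsb = img[i, j] >> 4, img[i, j] & 0x0F
            S[i, j] = sboxes[K[i, j]][16 * msb + lsb]
    K2 = np.mod(np.round(logistic_tent(dd, r1, Z * Z) * 1e14), 256).astype(np.uint8).reshape(Z, Z)
    X, prev = S.copy(), K2
    for bi in range(0, M, Z):                 # chained block-wise bit-XOR
        for bj in range(0, N, Z):
            X[bi:bi + Z, bj:bj + Z] = np.bitwise_xor(S[bi:bi + Z, bj:bj + Z], prev)
            prev = X[bi:bi + Z, bj:bj + Z]
    RN = np.mod(np.round(np.abs(logistic_sine_cosine(dd, r2, M * N)) * 1e14), 256)
    RN = RN.astype(np.uint8).reshape(M, N)    # non-linear random noise layer
    return np.bitwise_xor(X, RN)
```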
## III Results and Analysis
We have compared the proposed algorithm with approaches from the relevant literature. Several security analysis metrics, such as the cipher image histogram, entropy, contrast, correlation, energy, homogeneity, NPCR, and UACI, are presented in this section to assess the effectiveness of the proposed chaotic encryption approach. All tests are run on plaintext images and their associated cipher images.
### _Encrypted Image Analysis_
The most common security analysis is the comparison of the histograms of both the original and encrypted images. A histogram shows the distribution of pixel brightness in an image, representing how many pixels have each intensity level [17]. This gives insights into the image's brightness and contrast. When we encrypt a picture, we change its pixel values to make the image secure and hard to read. By looking at the histogram, we can see how the encryption changes the distribution of pixel intensities.
Fig. 2: Results of Noise-Crypt; (a-b) Cameraman image with its histogram, (c-d) Encrypted image with its histogram.
The encrypted images display histograms that are evenly distributed, showing that the proposed algorithm effectively hides the content of the original image and stands strong against attacks based on histogram analysis. We can measure this using the chi-square (\(\chi\)2) test. A lower chi-square value indicates a more uniform distribution. The chi-square is mathematically defined in equation (4):
\[\chi^{2}=\sum_{i=0}^{255}\frac{(f_{i}-\epsilon)^{2}}{\epsilon}\,,\qquad\text{where }\epsilon=\frac{M\times N}{256} \tag{4}\]
Where \(f_{i}\) represents the cipher image's histogram value at index \(i\), and M and N are the height and width of the image. Figure 2 shows the histograms of the plain and cipher images.
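As an illustration, the chi-square statistic of equation (4) can be computed from the cipher image histogram as sketched below, assuming an 8-bit grayscale image stored as a NumPy array.

```python
import numpy as np

def chi_square(cipher):
    """Chi-square of Eq. (4): lower values indicate a more uniform histogram."""
    M, N = cipher.shape
    expected = M * N / 256.0
    hist, _ = np.histogram(cipher, bins=256, range=(0, 256))
    return np.sum((hist - expected) ** 2 / expected)
```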
### _Contrast_
Contrast analysis measures the intensity difference between a pixel and its neighbor over the entire image. A high contrast value indicates a secure algorithm: when an image is encrypted, its randomness increases and, in response, the contrast reaches a very high value [19]. Greater contrast implies more randomly placed pixels in the image. Mathematically, contrast is expressed in equation (5):
\[\begin{split} Contrast=\sum_{i,j}|i-j|^{2}p(i,j)\end{split} \tag{5}\]
Where i and j are intensity levels.
### _Entropy_
Entropy measures the amount of uncertainty in a dataset. It refers to the degree of unpredictability or fluctuation in pixel values of images. A higher entropy number implies greater uncertainty or unpredictability, whereas a lower entropy value indicates greater predictability and less randomness.
Mathematically entropy is represented as (6):
\[\begin{split} H=\sum_{i=0}^{N-1}p(i)\times\log_{2}\frac{1}{p(i)} \end{split} \tag{6}\]
where \(N=\) gray level count, and \(p(i)=\) pixel likelihood having value \(i\).
Information entropy measures the randomness or unpredictability of information. In a chaotic encryption scheme, the closer the entropy value is to 8, the more chaotic it is [18]. The ideal entropy can be determined using equation (7):
\[\begin{split} H_{ideal}=\sum_{i=0}^{255}\frac{1}{256}\times\log _{2}\frac{1}{256}=8\end{split} \tag{7}\]
A low entropy value is advantageous for attackers, and cipher images with low entropy are vulnerable to cryptanalysis and brute-force attacks.
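A short sketch of the entropy computation of equation (6) for an 8-bit cipher image follows; values approaching the ideal value of 8 bits/pixel indicate a well-scrambled image.

```python
import numpy as np

def entropy(cipher):
    """Shannon entropy of Eq. (6) in bits per pixel for an 8-bit image."""
    hist, _ = np.histogram(cipher, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log2(0)
    return -np.sum(p * np.log2(p))
```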
### _Correlation_
In image encryption, "correlation" means how pixel values in an image relate to each other. It's best if nearby pixels in the encrypted image don't have predictable or similar values. This makes the encrypted image look random. If two encrypted images have a correlation value close to 0, they don't share obvious patterns and are considered very different from each other.
Mathematically, the correlation coefficient can be represented as follows:
\[corr\,C=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N}\left(P(i,j)-E(P)\right)\left(C(i,j)-E(C)\right)}{\sqrt{\sum_{i=1}^{N}\sum_{j=1}^{N}\left(P(i,j)-E(P)\right)^{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(C(i,j)-E(C)\right)^{2}}} \tag{8}\]
Where \(P(i,j)\) and \(C(i,j)\) are the pixel values of the plain and cipher images at a given location, and \(E(P)\) and \(E(C)\) are their expected (mean) values.
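A generic sketch of equation (8) is given below; the two example calls (plain-versus-cipher image, or horizontally adjacent pixels of the cipher image) are illustrative uses, and a value near 0 for the cipher image indicates decorrelated data.

```python
import numpy as np

def corr_coeff(a, b):
    """Correlation coefficient of Eq. (8) between two equally sized arrays."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

# Example uses (illustrative):
# corr_coeff(plain, cipher)                      # plain vs cipher image
# corr_coeff(cipher[:, :-1], cipher[:, 1:])      # horizontally adjacent cipher pixels
```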
### _Homogeneity_
Homogeneity is a derived statistic of the GLCM matrix. Gray Level Co-occurrence Matrix (GLCM) is a matrix that describes spatial relationship between intensity levels of pixels at a given offset in an image. Homogeneity describes the similarity between adjacent pixel values. Encryption algorithms with low homogeneity values are considered secure because the goal of encryption is to have high randomness and high homogeneity indicates more uniformity and less randomness.
Mathematically homogeneity can be written as (9):
\[Homogeneity=\sum_{i,j}\frac{P(i,j)}{1+\left|i-j\right|} \tag{9}\]
Where P(i,j) is the pixel value at ith row and jth column.
### _Energy_
Energy, a statistic derived from the GLCM, is also called uniformity. It represents the sum of the squared elements in the GLCM [20]. Algorithms with low energy values are considered secure.
Mathematically energy can be represented as (10):
\[Energy=\sum_{i,j}P(i,j)^{2} \tag{10}\]
Where i and j are two adjacent intensity levels.
### _NPCR and UACI_
NPCR (number of pixels change rate) calculates the difference between two ciphertext images: one is the ciphertext of the original plaintext image, and the other is the ciphertext of a slightly tampered plaintext image. This measure shows how well an encryption method stands up to differential attacks.
Mathematically, NPCR is given as:
\[NPCR=\frac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}D(i,j)\times 100\% \tag{11}\]
Where \(D=\begin{cases}0\text{ {if} }\text{ {{\cal C}}_{2}}(i,j)=\text{ {{\cal C}}_{1}}(i,j)\\ 1\text{ {if} }\text{ {{\cal C}}_{2}}(i,j)\neq\text{ {{\cal C}}_{1}}(i,j)\end{cases}\)
M represents the height and N is the width of the image. C1 is the encrypted version of the original image, and C2 is the encrypted version of that image when just 1 bit is altered. A higher NPCR means the encryption method is more secure, as it shows high sensitivity to minor changes in pixel values. For our method, the NPCR is 99.58%. Another measure, Unified Average Change Intensity (UACI), is used in image encryption to find out the average intensity difference between two encrypted images that come from the same original image with a slight one-bit difference.
\[UACI=\frac{1}{M\times N}\left[\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\mid C_{1}(i,j)-C_{2}(i,j)\mid}{2^{B}-1}\right]\times 100\% \tag{12}\]
In this context, M stands for the height and N for the width of the image. C1 represents the encrypted version of the original image, while C2 is the encrypted version when just a single bit of the original image is altered. Additionally, B represents the number of bits in each image pixel.
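The two measures can be computed together as sketched below for a pair of 8-bit cipher images; the implementation follows the definitions above.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI (in %) between two cipher images of one-bit-different plaintexts."""
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)
    return npcr, uaci
```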
## IV Conclusion
This paper introduced and evaluated a novel encryption scheme that combines hybrid chaotic maps, random noise, and the SHA-256 hashing algorithm. The chaotic map-generated random noise increased the non-linearity of the algorithm. To further improve security, the key parameters for the chaotic maps were drawn directly from the original image, ensuring that keys were both more secure and dependent on the plaintext. Our security evaluations revealed that our image encryption technique, which leverages S-Box selection and chaotic map-driven random noise, outperformed others, particularly concerning statistical security metrics.
|
2308.16751 | Modeling terahertz emissions from energetic electrons and ions in foil
targets irradiated by ultraintense femtosecond laser pulses | Terahertz (THz) emissions from fast electron and ion currents driven in
relativistic, femtosecond laser-foil interactions are examined theoretically.
We first consider the radiation from the energetic electrons exiting the
backside of the target. Our kinetic model takes account of the coherent
transition radiation due to these electrons crossing the plasma-vacuum
interface as well as of the synchrotron radiation due to their deflection and
deceleration in the sheath field they set up in vacuum. After showing that both
mechanisms tend to largely compensate each other when all the electrons are
pulled back into the target, we investigate the scaling of the net radiation
with the sheath field strength. We then demonstrate the sensitivity of this
radiation to a percent-level fraction of escaping electrons. We also study the
influence of the target thickness and laser focusing. The same sheath field
that confines most of the fast electrons around the target rapidly sets into
motion the surface ions. We describe the THz emission from these accelerated
ions and their accompanying hot electrons by means of a plasma expansion model
that allows for finite foil size and multidimensional effects. Again, we
explore the dependencies of this radiation mechanism on the laser-target
parameters. Under conditions typical of current ultrashort laser-solid
experiments, we find that the THz radiation from the expanding plasma is much
less energetic -- by one to three orders of magnitude -- than that due to the
early-time motion of the fast electrons. | E. Denoual, L. Bergé, X. Davoine, L. Gremillet | 2023-08-31T14:13:10Z | http://arxiv.org/abs/2308.16751v2 | Modeling terahertz emissions from energetic electrons and ions in foil targets irradiated by ultraintense femtosecond laser pulses
###### Abstract
Terahertz (THz) emissions from fast electron and ion currents driven in relativistic, femtosecond laser-foil interactions are examined theoretically. We first consider the radiation from the energetic electrons exiting the backside of the target. Our kinetic model takes account of the coherent transition radiation due to these electrons crossing the plasma-vacuum interface as well as of the synchrotron radiation due to their deflection and deceleration in the sheath field they set up in vacuum. After showing that both mechanisms tend to largely compensate each other when all the electrons are pulled back into the target, we investigate the scaling of the net radiation with the sheath field strength. We then demonstrate the sensitivity of this radiation to a percent-level fraction of escaping electrons. We also study the influence of the target thickness and laser focusing. The same sheath field that confines most of the fast electrons around the target rapidly sets into motion the surface ions. We describe the THz emission from these accelerated ions and their accompanying hot electrons by means of a plasma expansion model that allows for finite foil size and multidimensional effects. Again, we explore the dependencies of this radiation mechanism on the laser-target parameters. Under conditions typical of current ultrashort laser-solid experiments, we find that the THz radiation from the expanding plasma is much less energetic - by one to three orders of magnitude - than that due to the early-time motion of the fast electrons.
## I Introduction
Intense sources of terahertz (THz) radiation are drawing growing interest as their oscillation period, of the order of a few picoseconds, makes them ideally suited for the study of numerous phenomena evolving on similar time scales [1]. Their main direct applications include medical imaging [2], molecular spectroscopy [3; 4], tomography [5], and modification of condensed matter properties [6; 7; 8], to cite only a few. While intense lasers offer promising prospects for developing compact, ultrashort THz sources, the main challenge nowadays is to produce broadband THz pulses with mJ-level energies as is required for various uses [9; 10; 11; 12]. This is a nontrivial task as the most widely explored THz generation mechanisms, namely, optical rectification in asymmetric crystals [13; 14; 15] or photoionization of gases by two-color, moderate-intensity (), femtosecond laser pulses [16; 17; 18], are to date limited to tens of \(\mu\)J THz pulse energies and field strengths.
A more auspicious approach is to irradiate gaseous targets at relativistic laser intensities (\(I_{L}>10^{18}\,\mathrm{W}\,\mathrm{cm}^{-2}\)). In this regime, it has been demonstrated that coherent transition radiation (CTR) from wakefield-accelerated relativistic electron bunches at the rear plasma boundary can lead to intense THz emissions, characterized by a few \(100\,\mu\)J energy yield and field strength [19; 20]. Such a radiation is coherent because the typical dimensions of the electron bunches (\(\sim 1-5\,\mu\)m) are smaller than the THz radiation wavelengths (\(>10-100\,\mu\)m). Consequently, the THz pulse energy essentially scales as the square number of fast electrons, which makes it a potentially very efficient mechanism [21].
CTR also operates in relativistic laser-solid interactions, whereby, compared to gas targets, it benefits from a stronger absorption of the laser energy into MeV-range electrons, and hence from an increased number of radiating particles [22; 23; 24; 25; 26; 27; 28; 29; 30]. However, because different acceleration mechanisms are at play [31], these energetic electrons are generally characterized by a much larger (\(\sim 100\times\)) angular divergence than those generated by laser wakefields, which translates into a broader CTR emission cone. Yet, owing to its high density (\(\sim 10^{19-21}\,\mathrm{cm}^{-3}\)), the hot-electron population does not only radiate via CTR when exiting a solid foil.
The latter mechanism indeed assumes that the fast electrons propagate ballistically across the plasma-vacuum interface whilst most of them actually get reflected in the strong charge-separation field that they set up in vacuum [32; 33; 34]. This results in an additional coherent, synchrotron-type radiation (CSR) of polarity opposite to that of CTR [28; 29]. An additional complication follows from the fraction of fast electrons that are able to escape the target, and thus just emit a single burst of CTR. The net THz radiation resulting from those combined processes, CTR and CSR, will be referred to as CTSR [Fig. 1(top)].
The sheath electric field induced by the hot electrons
on both sides of the target subsequently sets into motion the surface ions, a process widely known as target normal sheath acceleration (TNSA) in the context of relativistic laser-plasma interactions [35, 36, 37]. Because of their highest charge-to-mass ratio, the protons, generally present in the form of contaminants, react the fastest to that field, reaching velocities \(\sim 0.1c\) (\(c\) is the speed of light) on \(\sim 1\,\mathrm{ps}\) timescales. The resultant expanding plasma at the target backside, composed of accelerated ions and electrostatically trapped hot electrons, comprises two charge-separation regions: one negatively charged at its outer boundary, and one positively charged around its inner boundary [36]. Their time-varying properties lead to a dipole-type, low-frequency radiation [26, 27, 38, 39], henceforth labeled plasma expansion radiation (PER) [Fig. 1(bottom)].
Here, we develop two models to estimate the THz radiated spectra and energy yields from the two aforementioned mechanisms, CTSR and PER, based, respectively, on the fast-electron dynamics alone at the unperturbed target backside and, on slower time scales, fast-electron-induced ion acceleration into vacuum. Compared to previous related works [21, 27, 28, 29], we propose a unified kinetic treatment of the destructively interfering CTR and CSR as resulting from the beam electrons' trajectories in vacuum. Moreover, we model PER using a refined description of ion acceleration, notably allowing for the time-decreasing surface charge density in the expanding sheath and for the hot-electron cooling in thin foil targets. We expect our modeling to be mostly valid in the case of micrometer-range foil targets driven by relativistic femtosecond laser pulses. Our main prediction is that, under such conditions, the energy radiated by the sole fast electrons via CTR and CSR should exceed that due to plasma expansion by at least tenfold, and more in the likely case of a percent-level fraction of escaping electrons. Our work therefore settles a lingering debate on the dominant THz radiation process in ultrashort-pulse laser experiments [24, 25, 26, 27, 29, 38, 39].
We start this paper by outlining the framework of our study in Sec. II.1. In Sec. II.2, we recall the basics of far-field radiation from a charged particle moving in vacuum near a perfect conductor, using the method of the image charge. Section II.3 then characterizes the radiation from a single electron exiting a perfect conductor and experiencing a constant electric field in vacuum. In Sec. II.4, this problem is generalized to the case of an energy-angle distributed electron beam originating from the laser-irradiated side of the target. The respective integral expressions of CTR and CSR are then detailed. In Sec. II.5, we specify the various parameters of the model, of relevance to femtosecond laser-foil interactions.
Section III.1 presents the main spectral features of CTR, CSR and CTSR in a typical ultrashort laser-foil configuration. A major finding is that CTR and CSR closely compensate each other in the THz domain, when all fast electrons are made to reflect back into the target. The scaling of the CTSR yield with the sheath field strength is examined in Sec. III.2, while the possibly dominant contribution of even a small (\(\sim 1\,\%\)) fraction of escaping electrons is discussed in Sec. III.3. The sensitivity of the net THz radiation to target thickness and laser focusing is addressed in Secs. III.4 and III.5.
Section IV next considers the radiation arising from the plasma expansion. The general formalism of our approach is presented in Sec. IV.1. After detailing the underlying model of TNSA in Sec. IV.2, we derive the in
Figure 1: The two major THz radiation mechanisms in relativistic laser-foil interactions considered in this work. (top) First, coherent transition and synchrotron radiations are generated by the laser-accelerated electrons crossing the target backside and being reflected (or not) in the sheath field they set up in vacuum. (bottom) Subsequently, the nonneutral layers at the edges of the rear-side expanding plasma emit a dipole-type radiation.
tegral form of the PER energy spectra in Sec. IV.3. The dependencies of PER on the system's parameters are investigated in Secs. IV.4 and IV.5. Finally, Sec. V gathers our conclusions.
## II Coherent transition and synchrotron radiations from fast electrons
### Framework of the study
The system investigated consists of a thin (\(d\sim 1\,\mu\)m) solid foil impacted by an ultraintense (\(I_{L}\sim 10^{20}\,\)W cm\({}^{-2}\)) and ultrashort (\(\tau_{L}\sim 30\) fs) laser pulse of wavelength \(\lambda_{L}\sim 1\)\(\mu\)m, focused to a few-\(\mu\)m spot size (\(w_{L}\)). We suppose that the target is fully ionized within a few optical cycles, hence turning into a plasma of overcritical electron density \(n_{e0}\gg n_{c}\) (where \(n_{c}=4\pi^{2}\epsilon_{0}m_{e}c^{2}/e^{2}\lambda_{L}^{2}\) is the critical density, \(m_{e}\) the electron mass, \(e\) the elementary charge and \(\epsilon_{0}\) the vacuum permittivity), treated hereafter as a perfect conductor with sharp boundaries.
The relativistic electrons driven by the laser pulse at the target front side form a bunch of typical length \(c\tau_{L}\sim 10\,\mu\)m, width \(w_{L}\), number density \(n_{h}\sim n_{c}\) and average Lorentz factor \(\langle\gamma_{h}\rangle\). They are assumed to propagate ballistically on their first pass through the material. Upon crossing the backside of the target, they undergo an abrupt change in permittivity which causes transition radiation [40; 41]. Wavelengths larger than the bunch size, i.e., lying in the THz domain, are emitted coherently [42; 43; 21].
When exiting the target, the hot electrons induce a strong electric sheath field, parallel to the surface normal and of typical strength \(E_{0}\sim(m_{e}c\omega_{0}/e)\sqrt{\langle\gamma_{h}\rangle(n_{h}/n_{c})}\sim 1 0^{12-13}\,\)V m\({}^{-1}\)[36; 37; 44], which pulls the vast majority of the accelerated electrons back into the foil [34]. The synchrotron emission that they generate while experiencing the sheath field is another source of coherent THz radiation. One can anticipate, as will be examined in detail below, that this deceleration-induced radiation will interfere destructively with CTR, hereafter interpreted as resulting from the apparent sudden acceleration of the electrons at the conductor's surface [41].
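As a quick consistency check of this estimate, the scaling can be evaluated numerically; the laser wavelength and hot-electron parameters used below are illustrative values only.

```python
import math

# E_0 ~ (m_e c w_0 / e) * sqrt(<gamma_h> n_h / n_c), evaluated for illustrative parameters.
m_e, c, e = 9.109e-31, 2.998e8, 1.602e-19     # SI units
lambda_L = 1.0e-6                              # laser wavelength (m)
omega_0 = 2 * math.pi * c / lambda_L
gamma_h, nh_over_nc = 5.0, 1.0                 # assumed mean Lorentz factor and density ratio
E_0 = (m_e * c * omega_0 / e) * math.sqrt(gamma_h * nh_over_nc)
print(f"E_0 ~ {E_0:.1e} V/m")                  # ~7e12 V/m, within the quoted 1e12-1e13 range
```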
The problem of CTR from laser-generated electron beams crossing at constant velocity a plasma-vacuum boundary has already been widely addressed theoretically in multiple frequency domains [42; 21; 43]. However, with the exception of Refs. [28; 29], the simultaneous modeling of CTR and CSR due to the non-ballistic motion of beam electrons in vacuum has received little attention so far. As we will see further on, when almost all of these electrons are made to reflux into the target, CTR and CSR produce THz fields with very similar spectra, yet of opposite polarity, which therefore tend to cancel each other out. The net THz radiation is thus determined by an integral over the electron trajectories in vacuum, which depends on the initial electron energy and propagation angle.
This radiation, as will be shown in Sec. III.3, is also highly sensitive to the non-compensated CTR from the fraction of high-energy electrons able to escape the target permanently. These, believed to make up at most a few percent of the whole hot-electron population [34], can escape the target at near the speed of light and thus contribute to a single uncompensated CTR flash [28; 29]. The interplay of those mechanisms, and their variations with the laser-target parameters, will be thoroughly examined in Secs. III.4 and III.5.
Before proceeding, two limitations of our modeling are already worth mentioning. First, we will consider only the first excursion of the fast electrons into vacuum, and thus neglect their subsequent THz emissions while they bounce back and forth across the target [30; 25; 31]. Accordingly, the following estimates of the energy radiated from the sole fast electrons should be considered as lower values. Second, by assuming a target of infinite transverse size, we will discard the multiple THz emissions that are expected to arise from the laser axis and the target edges when the fast electrons recirculate transversely across a finite-width target [45; 30].
### Radiation from an electron accelerated in vacuum near a perfect conductor
The electromagnetic field radiated in vacuum by an accelerated charged particle can be split into two components [46; 47]: a _velocity field_ that rapidly decays in space as \(R^{-2}\) and an _acceleration field_ that decays as \(R^{-1}\), where \(R=|\mathbf{r}-\mathbf{r}_{p}(t)|\) is the distance from the particle, located at \(\mathbf{r}_{p}(t)\), to the detector, located at \(\mathbf{r}\). In the far-field limit, the detected field reduces to the acceleration field [46; 47],
\[\mathbf{E}_{\rm acc}(\mathbf{r},t)=\frac{q}{4\pi c\varepsilon_{0}}\left[\frac{\mathbf{ \hat{R}}\times\left\{(\mathbf{\hat{R}}-\mathbf{\beta}_{p})\times\dot{\mathbf{\beta}}_{p} \right\}}{R\big{(}1-\mathbf{\beta}_{p}\cdot\mathbf{\hat{R}}\big{)}^{3}}\right]_{t_{\rm ret}}\,, \tag{1}\]
where \(\mathbf{\beta}_{p}(t)=\dot{\mathbf{r}}_{p}(t)/c\) is the normalized velocity of the particle, \(\dot{\mathbf{\beta}}_{p}(t)\) its normalized acceleration, \(q\) its charge, \(\varepsilon_{0}\) the permittivity of free space and \(\mathbf{\hat{R}}=(\mathbf{r}-\mathbf{r}_{p}(t))/R\) the unit direction of observation. The operator \(\left[\,\cdot\,\right]_{t_{\rm ret}}\) evaluates its argument at the retarded time \(t_{\rm ret}\), implicitly defined by \(t_{\rm ret}=t-R(t_{\rm ret})/c\).
When the particle moves in vacuum in the vicinity of a conductor, Eq. (1) must be corrected so that the tangential component of the electric field vanishes at the conductor surface (assumed planar). Microscopically, this arises because of the additional field generated by polarization surface currents [48]. In the perfect conductor approximation, the field induced in vacuum by those currents is identical to that from a virtual particle of charge \(-q\) and trajectory symmetric to that of the real particle with respect to the conductor surface [49; 41] (see Fig. 2). The total radiation from the particle in the presence of the conductor is therefore
\[\mathbf{E}_{\rm rad}(\mathbf{r},t)=\mathbf{E}_{\rm acc}^{-}(\mathbf{r},t)+\mathbf{E}_{\rm acc}^{+}( \mathbf{r},t)\,, \tag{2}\]
where \(\mathbf{E}^{-}_{\rm acc}(\mathbf{r},t)\) and \(\mathbf{E}^{+}_{\rm acc}(\mathbf{r},t)\) represent the acceleration fields generated by, respectively, the real (superscript \({}^{-}\)) and image (superscript \({}^{+}\)) particles. Both fields are evaluated at their own retarded times, \(t^{-}_{\rm ret}\) or \(t^{+}_{\rm ret}\), depending on the positions of the particle and observer.
The power radiated by the particle and its image charge per unit solid angle reads [46; 47]
\[\frac{dP_{\rm rad}(\mathbf{\hat{r}},t)}{d\Omega}=c\varepsilon_{0}\left|\left[R^{-} \mathbf{E}^{-}_{\rm acc}\right]_{t^{-}_{\rm ret}}+\left[R^{+}\mathbf{E}^{+}_{\rm acc} \right]_{t^{+}_{\rm ret}}\right|^{2}\,, \tag{3}\]
where we introduce the direction of observation \(\mathbf{\hat{r}}=(\sin\theta\cos\Psi,\sin\theta\sin\Psi,\cos\theta)=\mathbf{r}/r\), the polar angle \(\theta\), the azimuthal angle \(\Psi\), and \(\mathrm{d}\Omega=\sin\theta\mathrm{d}\theta\Psi\). The total energy radiated per unit solid angle can be expressed as [46; 47]:
\[\frac{\partial\mathcal{E}_{\rm rad}(\mathbf{\hat{r}})}{\partial\Omega}=\int_{- \infty}^{\infty}\frac{\partial P_{\rm rad}(\mathbf{\hat{r}},t)}{\partial\Omega} \mathrm{d}t\equiv\int_{0}^{\infty}\frac{\partial^{2}\mathcal{I}_{\rm rad}( \mathbf{\hat{r}},\nu)}{\partial\nu\partial\Omega}\mathrm{d}\nu\,, \tag{4}\]
where the second equality defines \(\partial^{2}\mathcal{I}_{\rm rad}/\partial\nu\partial\Omega\), the energy radiated per unit solid angle and frequency (\(\nu\)) interval. Upon combining Eqs. (1)-(4), performing the change of variable \(t\mapsto t_{\rm ret}\) and assuming that the accelerated particle remain far away from the observer, i.e., \(R(t)\simeq r-\mathbf{\hat{r}}\cdot\mathbf{r}_{p}(t)\), one obtains [46; 47]
\[\frac{\partial^{2}\mathcal{I}_{\rm rad}}{\partial\nu\partial\Omega}(\mathbf{\hat{r }},\nu)=\frac{q^{2}}{8\pi^{2}\varepsilon_{0}c}\left|\int\frac{\mathbf{\hat{r}} \times[(\mathbf{\hat{r}}-\mathbf{\beta}_{p}^{-})\times\hat{\mathbf{\beta}}_{p}^{-}]}{(1- \mathbf{\beta}_{p}^{-}\cdot\mathbf{\hat{r}})^{2}}e^{-2i\pi\nu(t-\mathbf{\hat{r}}\cdot\bm {r}_{p}^{-}(t)/c)}-\frac{\mathbf{\hat{r}}\times[(\mathbf{\hat{r}}-\mathbf{\beta}_{p}^{+}) \times\hat{\mathbf{\beta}}_{p}^{+}]}{(1-\mathbf{\beta}_{p}^{+}\cdot\mathbf{\hat{r}})^{2} }e^{-2i\pi\nu(t-\mathbf{\hat{r}}\cdot\mathbf{r}_{p}^{+}(t)/c)}\,\mathrm{d}t\right|^{2}\,. \tag{5}\]
This expression describes the full energy spectrum radiated by a charged particle moving in vacuum in the vicinity of a perfect conductor. In the following section, we will decompose this radiation into two components, namely, transition radiation and synchrotron radiation.
### Transition and synchrotron radiations from an electron exiting and coming back into a perfect conductor
The generic scenario addressed in our study is sketched in Fig. 2. It consists of an electron (i) laser-accelerated at the front side of a foil target, (ii) traveling ballistically through it, (iii) crossing its rear side, (iv) being reflected in vacuum by the sheath field established by the hot electrons and (v) re-crossing the target's rear surface. Because of plasma shielding, stages (i) and (ii) of the electron motion do not lead to outgoing radiation from the backside of the target. By contrast, stages (iii) and (v), corresponding to apparent sudden accelerations of the electron and its image charge, give rise to two consecutive flashes of transition radiation [40; 41], at the exit (\(t=t_{\rm e}\)) and return (\(t=t_{\rm r}\)) times, while stage (iv) generates synchrotron-type radiation over the time interval \(t_{\rm e}<t<t_{\rm r}\).
To identify the transition- and synchrotron-type components in Eq. (5), where the time integral is performed over the particle trajectory in vacuum, it is convenient to express the normalized velocities (\(\mathbf{\beta}_{p}^{\pm}\)) of the electron and its image charge as
\[\mathbf{\beta}_{p}^{\pm}(t)=\begin{cases}\mathbf{\beta}_{\rm e}^{\pm}\left[\frac{t-t_{\rm e}}{\delta t}+1\right]&t_{\rm e}-\delta t<t\leq t_{\rm e}\,,\\ \mathbf{\beta}^{\pm}(t)&t_{\rm e}<t<t_{\rm r}\,,\\ \mathbf{\beta}_{\rm r}^{\pm}\left[\frac{t_{\rm r}-t}{\delta t}+1\right]&t_{\rm r}\leq t<t_{\rm r}+\delta t\,,\\ 0&\text{otherwise}\,,\end{cases} \tag{6}\]
where \(\mathbf{\beta}_{\rm e}^{\pm}\equiv\mathbf{\beta}^{\pm}(t_{\rm e})\) and \(\mathbf{\beta}_{\rm r}^{\pm}\equiv\mathbf{\beta}^{\pm}(t_{\rm r})\) denote the velocities at the exit and return times, respectively, and \(\delta t\) is an infinitesimal time interval.
Noting that [46]
\[\frac{\mathbf{\hat{r}}\times[(\mathbf{\hat{r}}-\mathbf{\beta}_{p}^{\pm})\times\dot{\mathbf{\beta}}_{p}^{\pm}]}{\left(1-\mathbf{\beta}_{p}^{\pm}\cdot\mathbf{\hat{r}}\right)^{2}}=\frac{\mathrm{d}}{\mathrm{d}t}\left[\frac{\mathbf{\hat{r}}\times(\mathbf{\hat{r}}\times\mathbf{\beta}_{p}^{\pm})}{1-\mathbf{\beta}_{p}^{\pm}\cdot\mathbf{\hat{r}}}\right]\,, \tag{7}\]
we can perform an integration by parts in Eq. (5) and simplify the resulting expression using the vector identity \([\mathbf{\hat{r}}\times(\mathbf{\hat{r}}\times\mathbf{a})]\cdot[\mathbf{\hat{r}}\times(\mathbf{ \hat{r}}\times\mathbf{b})]=(\mathbf{\hat{r}}\times\mathbf{a})\cdot(\mathbf{\hat{r}}\times\mathbf{b})\). We then obtain
\[\frac{\partial^{2}\mathcal{I}_{\rm rad}}{\partial\nu\partial\Omega}(\mathbf{\hat{r}},\nu) =\frac{q^{2}}{8\pi^{2}\varepsilon_{0}c}\Bigg|2i\pi\nu\int_{t_{\rm e}-\delta t}^{t_{\rm r}+\delta t}\mathbf{\hat{r}}\times\left[\mathbf{\beta}_{p}^{-}(t)e^{-i\Theta^{-}(t)}-\mathbf{\beta}_{p}^{+}(t)e^{-i\Theta^{+}(t)}\right]\mathrm{d}t\] \[+\left[\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{p}^{-}(t)}{1-\mathbf{\beta}_{p}^{-}(t)\cdot\mathbf{\hat{r}}}e^{-i\Theta^{-}(t)}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{p}^{+}(t)}{1-\mathbf{\beta}_{p}^{+}(t)\cdot\mathbf{\hat{r}}}e^{-i\Theta^{+}(t)}\right]_{t_{\rm e}-\delta t}^{t_{\rm r}+\delta t}\Bigg|^{2}\,, \tag{8}\]
where \(\Theta^{\pm}(t)=2\pi\nu[t-\mathbf{\hat{r}}\cdot\mathbf{r}_{p}^{\pm}(t)/c]\). Since, from Eqs. (6), \(\mathbf{\beta}_{p}(t_{\rm r}+\delta t)=\mathbf{\beta}_{p}(t_{\rm e}-\delta t)=0\), the boundary value terms
vanish and the full intensity distribution radiated by the electron is therefore given by
\[\frac{\partial^{2}\mathcal{I}_{\mathrm{TSR}}}{\partial\nu\partial \Omega}(\mathbf{\hat{r}},\nu)=\frac{q^{2}\nu^{2}}{2\varepsilon_{0}c}\] \[\times\left|\int_{t_{\mathrm{e}}}^{t_{\mathrm{r}}}\mathbf{\hat{r}} \times\left[\mathbf{\beta}_{p}^{-}(t)e^{-i\Theta^{-}(t)}-\mathbf{\beta}_{p}^{+}(t)e^{ -i\Theta^{+}(t)}\right]\,\mathrm{d}t\right|^{2}\,, \tag{9}\]
where the acronym TSR stands for transition and synchrotron radiation. Its coherent version (CTSR) in the case of a compact electron bunch will be addressed below.
The intensity distribution of transition radiation, \(\partial^{2}\mathcal{I}_{\mathrm{TR}}/\partial\nu\partial\Omega\), is obtained from Eq. (5) by integrating by parts over the intervals \(t_{\mathrm{e}}-\delta t<t<t_{\mathrm{e}}\) and \(t_{\mathrm{r}}<t<t_{\mathrm{r}}+\delta t\) and taking the limit \(\delta t\to 0\). Only the boundary value term then remains, leading to a variant of the well-known Ginzburg formula [41]
\[\frac{\partial^{2}\mathcal{I}_{\mathrm{TR}}}{\partial\nu\partial\Omega}(\mathbf{\hat{r}},\nu)=\frac{q^{2}}{8\pi^{2}\varepsilon_{0}c}\] \[\times\left|\left[\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{e}}^{-}}{1-\mathbf{\beta}_{\mathrm{e}}^{-}\cdot\mathbf{\hat{r}}}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{e}}^{+}}{1-\mathbf{\beta}_{\mathrm{e}}^{+}\cdot\mathbf{\hat{r}}}\right]e^{-i\Theta_{\mathrm{e}}}\right.\] \[\left.-\left[\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{r}}^{-}}{1-\mathbf{\beta}_{\mathrm{r}}^{-}\cdot\mathbf{\hat{r}}}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{r}}^{+}}{1-\mathbf{\beta}_{\mathrm{r}}^{+}\cdot\mathbf{\hat{r}}}\right]e^{-i\Theta_{\mathrm{r}}}\right|^{2}\,, \tag{10}\]
where \(\Theta_{\mathrm{e,r}}\equiv 2\pi\nu(t_{\mathrm{e,r}}-\mathbf{\hat{r}}\cdot\mathbf{r}_{ \mathrm{e,r}}/c)\).
The synchrotron component of the radiation is obtained by integrating Eq. (5) by parts over the time interval \(t_{\mathrm{e}}<t<t_{\mathrm{r}}\):
\[\frac{\partial^{2}\mathcal{I}_{\mathrm{SR}}}{\partial\nu\partial\Omega}(\mathbf{\hat{r}},\nu)=\frac{q^{2}}{8\pi^{2}\varepsilon_{0}c}\Bigg|2i\pi\nu\int_{t_{\mathrm{e}}}^{t_{\mathrm{r}}}\mathbf{\hat{r}}\times\left[\mathbf{\beta}_{p}^{-}(t)e^{-i\Theta^{-}(t)}-\mathbf{\beta}_{p}^{+}(t)e^{-i\Theta^{+}(t)}\right]\mathrm{d}t\] \[+\left[\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{r}}^{-}}{1-\mathbf{\beta}_{\mathrm{r}}^{-}\cdot\mathbf{\hat{r}}}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{r}}^{+}}{1-\mathbf{\beta}_{\mathrm{r}}^{+}\cdot\mathbf{\hat{r}}}\right]e^{-i\Theta_{\mathrm{r}}}\] \[-\left[\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{e}}^{-}}{1-\mathbf{\beta}_{\mathrm{e}}^{-}\cdot\mathbf{\hat{r}}}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\mathrm{e}}^{+}}{1-\mathbf{\beta}_{\mathrm{e}}^{+}\cdot\mathbf{\hat{r}}}\right]e^{-i\Theta_{\mathrm{e}}}\Bigg|^{2}\,. \tag{11}\]
Of course, one recovers the full spectrum (9) by summing the terms within the squared vertical bars in Eqs. (10) and (11). The above expression is interesting in showing that whatever the evolution of \(\mathbf{\beta}^{\pm}(t)\) during \(t_{\mathrm{e}}<t<t_{\mathrm{r}}\), the synchrotron radiation generates a spectrum increasingly resembling that due to transition radiation when \(t_{\mathrm{r}}-t_{\mathrm{e}}\equiv\Delta t_{r}\to 0\). This result will be demonstrated numerically in the next sections.
In the following, for simplicity, the electric sheath field acting on the electron will be assumed uniform, stationary and parallel to the target surface normal, \(\mathbf{E}_{0}=E_{0}\mathbf{\hat{z}}\) (with \(E_{0}>0\)). The electron trajectory can be easily calculated using the Hamiltonian
\[\mathcal{H}=m_{e}c^{2}\gamma(t)-e\Phi(z(t))\,, \tag{12}\]
where \(\gamma\equiv\sqrt{1+p^{2}/(m_{e}c)^{2}}\) is the electron Lorentz factor and \(\Phi=-E_{0}z\) the electrostatic potential.
The electron crosses the target backside (\(z=0\)) at time \(t=t_{\mathrm{e}}\) and transverse position \(\mathbf{r}_{\mathrm{e,\perp}}\) (neglecting the \(\delta t\) interval) with momentum \(\mathbf{p}_{\mathrm{e}}=p_{\mathrm{e}}(\sin\psi\cos\varphi,\sin\psi\sin\varphi, \cos\psi)\), where \(\psi\) and \(\varphi\) correspond to the polar and azimuthal angles, respectively. Since \(\Phi\) only depends on \(z\), \(\mathcal{H}\) is a constant of motion. Moreover, \(\gamma(t_{\mathrm{r}})=\gamma(t_{\mathrm{e}})\equiv\gamma_{\mathrm{e}}\) and \(\mathbf{p}_{\perp}(t)=\mathbf{p}_{\mathrm{e,\perp}}\) (the subscript \({}_{\perp}\) refers to the surface normal plane).
Introducing the normalized momentum \(\mathbf{u}\equiv\mathbf{p}/m_{e}c\equiv\gamma\mathbf{\beta}\), normalized position \(\mathbf{\bar{r}}\equiv eE_{0}\mathbf{r}/m_{e}c^{2}\), relative position \(\mathbf{\bar{\xi}}=\mathbf{\bar{r}}-\mathbf{\bar{r}}_{\mathrm{e,\perp}}\), normalized time \(\bar{t}\equiv eE_{0}t/m_{e}c\) and relative time \(\bar{\tau}=\bar{t}-\bar{t}_{\mathrm{e}}\), one obtains
\[\gamma(\bar{\tau})=\sqrt{\left(u_{\mathrm{e,z}}-\bar{\tau}\right) ^{2}+\gamma_{\perp}^{2}}\,, \tag{13}\] \[\beta_{z}(\bar{\tau})=\left(u_{\mathrm{e,z}}-\bar{\tau}\right)/ \gamma(\bar{\tau})\,,\] (14) \[\mathbf{\beta}_{\perp}(\bar{\tau})=\mathbf{u}_{\mathrm{e,\perp}}/\gamma( \bar{\tau})\,,\] (15) \[\bar{\xi}_{z}(\bar{\tau})=\gamma_{\mathrm{e}}-\gamma(\bar{\tau})\,,\] (16) \[\mathbf{\bar{\xi}}_{\perp}(\bar{\tau})=\frac{\mathbf{u}_{\mathrm{e,\perp} }}{2}\ln\left(\frac{\left(\gamma_{\mathrm{e}}+u_{\mathrm{e,z}}\right)\left[ \gamma(\bar{\tau})-u_{\mathrm{e,z}}+\bar{\tau}\right]}{\left(\gamma_{\mathrm{e}}-u _{\mathrm{e,z}}\right)\left[\gamma(\bar{\tau})+u_{\mathrm{e,z}}-\bar{\tau} \right]}\right)\,, \tag{17}\]
for \(0<\bar{\tau}<\overline{\Delta}t_{\mathrm{r}}\), where \(\overline{\Delta}t_{\mathrm{r}}\equiv\bar{t}_{\mathrm{r}}-\bar{t}_{\mathrm{e}}=2u_{ \mathrm{e,z}}\) is the (normalized) time spent by the electron in vacuum, and \(\gamma_{\perp}=\sqrt{1+u_{\mathrm{e,\perp}}^{2}}\).
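These closed-form trajectories are all that is needed to evaluate the radiation integrals of the next subsection; as a quick illustration, the following sketch (not taken from the paper's solver; the exit momentum is purely illustrative) tabulates Eqs. (13)-(17) numerically.

```python
# A minimal sketch (not the paper's solver): direct evaluation of the
# normalized electron trajectory in the uniform sheath field, Eqs. (13)-(17).
# The exit momentum below is illustrative only.
import numpy as np

def vacuum_trajectory(u_perp, u_z, n_steps=1000):
    """Normalized trajectory for 0 < tau_bar < 2 u_z, transverse motion along x."""
    gamma_perp = np.sqrt(1.0 + u_perp**2)
    gamma_e = np.sqrt(1.0 + u_perp**2 + u_z**2)
    tau = np.linspace(0.0, 2.0 * u_z, n_steps)          # tau_bar = e E0 (t - t_e) / m_e c
    gamma = np.sqrt((u_z - tau)**2 + gamma_perp**2)     # Eq. (13)
    beta_z = (u_z - tau) / gamma                        # Eq. (14)
    beta_x = u_perp / gamma                             # Eq. (15)
    xi_z = gamma_e - gamma                              # Eq. (16)
    xi_x = 0.5 * u_perp * np.log(                       # Eq. (17)
        (gamma_e + u_z) * (gamma - u_z + tau)
        / ((gamma_e - u_z) * (gamma + u_z - tau)))
    return tau, gamma, (beta_x, beta_z), (xi_x, xi_z)

# Electron exiting with |u| = 10 at 30 degrees from the target normal (illustrative).
tau, gamma, beta, xi = vacuum_trajectory(u_perp=10 * np.sin(np.radians(30)),
                                         u_z=10 * np.cos(np.radians(30)))
print(gamma[0], gamma[-1])   # gamma_e is recovered at both the exit and return times
```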
Figure 2: Transition and synchrotron radiations from a relativistic electron confined at the target-vacuum interface. The electron trajectory starts at the left (laser-irradiated) side of the target, at time \(t_{\mathrm{f}}\) and position \(\mathbf{r}_{\mathrm{f}}\). The initial electron momentum is \(\mathbf{p}_{\mathrm{e}}=p_{\mathrm{e}}(\sin\psi\cos\varphi,\sin\psi\sin\varphi,\cos\psi)\) where \(\psi\) and \(\varphi\) are the polar and azimuthal angles, respectively. After traveling ballistically through the target (red dashed curve), the electron crosses its rear side at time \(t_{\mathrm{e}}\) and position \(\mathbf{r}_{\mathrm{e}}\). In vacuum, it is then reflected by the longitudinal sheath field \(\mathbf{E}_{0}\), assumed homogeneous and stationary, and re-enters the target at time \(t_{\mathrm{r}}\) and position \(\mathbf{r}_{\mathrm{r}}\) (blue solid curve). The color gradient represents the electrostatic potential profile \(\Phi(z)\), see Sec. II.3. The trajectory of the associated image charge is plotted as a grey dotted curve. The observation direction is \(\mathbf{\hat{r}}=(\sin\theta\cos\Psi,\sin\theta\sin\Psi,\cos\theta)=\mathbf{r}/r\), where \(\theta\) and \(\Psi\) are the corresponding polar and azimuthal angles. In this 2D representation, for simplicity, the observer is taken to lie in the plane of particle motion (i.e. \(\Psi=\varphi\)).
The full (TSR) and synchrotron-only (SR) radiation spectra are computed by substituting the above expressions into Eq. (9) and Eq. (11), respectively. Furthermore, given our choice of \(\mathbf{E}_{0}\), we have the relations \(\mathbf{\beta}_{\rm r}^{\pm}=\mathbf{\beta}_{\rm e}^{\mp}\) and \(\beta_{\rm r,z}^{\pm}=-\beta_{\rm e,z}^{\pm}\), allowing the TR spectrum (10) to be recast as
\[\frac{\partial^{2}{\cal I}_{\rm TR}}{\partial\nu\partial\Omega}( \mathbf{\hat{r}},\nu)=\frac{q^{2}}{2\pi^{2}\varepsilon_{0}c}\cos^{2}\left(\frac{ \Theta_{\rm e}-\Theta_{\rm r}}{2}\right)\] \[\times\left|\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\rm e}^{-}}{1- \mathbf{\beta}_{\rm e}^{-}\cdot\mathbf{\hat{r}}}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{ \rm r}^{-}}{1-\mathbf{\beta}_{\rm r}^{-}\cdot\mathbf{\hat{r}}}\right|^{2}\,. \tag{18}\]
### Full radiation from an energy-angle-distributed electron beam
The frequency-angle spectrum radiated by an electron bunch exiting a perfect conductor and experiencing a uniform electric field can be readily evaluated by summing Eq. (9) over \(N_{h}\gg 1\) particles (labeled by the subscript \(l\)) and taking an ensemble average:
\[\frac{\partial^{2}{\cal I}_{\rm CTSR}}{\partial\nu\partial\Omega}(\mathbf{\hat{r} },\nu)=\left\langle\frac{q^{2}\nu^{2}}{2\varepsilon_{0}c}\bigg{|}\sum_{l=1}^{ N_{h}}\big{[}\mathbf{A}_{l}^{-}-\mathbf{A}_{l}^{+}\big{]}(\mathbf{\hat{r}},\nu)\bigg{|}^{2} \right\rangle\,, \tag{19}\]
where
\[\mathbf{A}_{l}^{\pm}(\mathbf{\hat{r}},\nu)=\int_{t_{\rm e,l}}^{t_{\rm r,l}}\big{[}\mathbf{\hat{r}}\times\mathbf{\beta}_{p,l}^{\pm}(t)\big{]}\,e^{-i\Theta_{l}^{\pm}(t)}\,\mathrm{d}t\,. \tag{20}\]
We recall that the acronym CTSR is for _coherent_ transition and synchrotron radiation.
In the realistic case of a finite-size, energy-angle distributed electron bunch, the full radiation pattern is determined by the relative phase shifts between the fields emitted along the particle trajectories in vacuum. In the following, each electron trajectory is taken to originate from the _front side_ (\(z=-d\)) of the target, where (leaving out the subscript \(l\)) it is parameterized by the injection time \(t_{\rm f}\), initial transverse position \(\mathbf{r}_{\rm f,\perp}\) and initial normalized momentum \(\mathbf{u}_{\rm f}=\mathbf{u}_{\rm e}=\mathbf{p}_{\rm e}/m_{e}c\) of the electron. After traveling ballistically through the target, the electron reaches its rear side (\(z=0\)) at time \(t_{\rm e}=t_{\rm f}+d/c\beta_{\rm f,z}\) and transverse position \(\mathbf{r}_{\rm e,\perp}=\mathbf{r}_{\rm f,\perp}+\mathbf{\beta}_{\rm f,\perp}d/\beta_{\rm f,z}\).
In order to pass to the continuous limit, we introduce \(F(\mathbf{r},t,\mathbf{u})\), the flux probability function characterizing the electron beam at the front side of the target, normalized as
\[\iiint F(\mathbf{r}_{\rm f,\perp},t_{\rm f},\mathbf{u}_{\rm f})\,\mathrm{d}^{2}\mathbf{r} _{\rm f,\perp}\,\mathrm{d}t_{\rm f}\,\mathrm{d}^{3}\mathbf{u}_{\rm f}=1\,. \tag{21}\]
This probability function is taken in the form of a separable function, \(F(\mathbf{r}_{\rm f,\perp},t_{\rm f},\mathbf{u}_{\rm f})=f(\mathbf{r}_{\rm f,\perp})h(t_{\rm f})g(\mathbf{u}_{\rm f})\), where the distribution functions \(f\), \(h\) and \(g\) will be specified in Sec. II.5. The discrete sum in Eq. (19) can be replaced by integrals over time, transverse position and momentum (see Appendix A):
\[\frac{\partial^{2}{\cal I}_{\rm CTSR}}{\partial\nu\partial\Omega} (\mathbf{\hat{r}},\nu)=\frac{q^{2}\nu^{2}}{2\varepsilon_{0}c}N_{h}^{2}\Big{|} \iiint f(\mathbf{r}_{\rm f,\perp})h(t_{\rm f})g(\mathbf{u}_{\rm f})\] \[\times\left[\mathbf{A}^{-}(\mathbf{\hat{r}},\nu)-\mathbf{A}^{+}(\mathbf{\hat{r}}, \nu)\right]\,\mathrm{d}^{2}\mathbf{r}_{\rm f,\perp}\,\mathrm{d}t_{\rm f}\,\mathrm{ d}^{3}\mathbf{u}_{\rm f}\Big{|}^{2}\,. \tag{22}\]
This formula can further be simplified by rewriting the phase term in Eq. (20) as
\[\Theta^{\pm}(t) =2\pi\nu\left[t-\mathbf{\hat{r}}\cdot\mathbf{r}^{\pm}(t)/c\right]\] \[=2\pi\bar{\nu}\left[(\bar{\tau}+\bar{t}_{\rm e})-\mathbf{\hat{r}}\cdot(\mathbf{\bar{\xi}}^{\pm}(\bar{\tau})+\bar{\mathbf{r}}_{\rm e,\perp})\right]\] \[=\Theta_{\rm e}+2\pi\bar{\nu}\left[\bar{\tau}-\mathbf{\hat{r}}\cdot\mathbf{\bar{\xi}}^{\pm}(\bar{\tau})\right]\,, \tag{23}\]
where \(\bar{\nu}\equiv\nu m_{e}c/eE_{0}\) is the normalized frequency and where
\[\Theta_{\rm e} =2\pi\bar{\nu}\left[\bar{t}_{\rm e}-\mathbf{\hat{r}}\cdot\bar{\mathbf{r} }_{\rm e,\perp}\right]\] \[=2\pi\bar{\nu}\left(\bar{t}_{\rm f}-\mathbf{\hat{r}}\cdot\mathbf{\bar{r} }_{\rm f}\right)+2\pi\bar{\nu}\bar{d}\left(1-\mathbf{\hat{r}}\cdot\mathbf{\beta}_{\rm f,\perp}\right)/\beta_{\rm f,z} \tag{24}\]
characterizes the propagation of the electron through the foil. Performing the change of variable \(t\mapsto\bar{\tau}\) and factoring out \(\Theta_{\rm e}\) from Eq. (20) yields
\[\frac{\partial^{2}{\cal I}_{\rm CTSR}}{\partial\nu\partial\Omega} (\mathbf{\hat{r}},\nu)=\frac{q^{2}\bar{\nu}^{2}}{2\varepsilon_{0}c}N_{h}^{2} \left|\int g(\mathbf{u})F(\mathbf{\hat{r}},\nu,\mathbf{u})\right.\] \[\times\left.\left[\mathbf{\bar{A}}^{-}(\mathbf{\hat{r}},\nu)-\mathbf{\bar{A}} ^{+}(\mathbf{\hat{r}},\nu)\right]\,\mathrm{d}^{3}\mathbf{u}\right|^{2}\,, \tag{25}\]
with \(F(\mathbf{\hat{r}},\nu,\mathbf{u})\) the form factor of the electron beam defined by
\[F(\mathbf{\hat{r}},\nu,\mathbf{u})=\iint f(\mathbf{r}_{\rm f,\perp})h(t_{\rm f})e^{-i\Theta _{\rm e}}\mathrm{d}t_{\rm f}\,\mathrm{d}^{2}\mathbf{r}_{\rm f,\perp}\,, \tag{26}\]
and where
\[\mathbf{\bar{A}}^{\pm}(\mathbf{\hat{r}},\nu)=\int_{0}^{\overline{\Delta t}_{\rm r}}\big{[}\mathbf{\hat{r}}\times\mathbf{\beta}^{\pm}(\bar{\tau})\big{]}\,e^{-2i\pi\bar{\nu}\big{[}\bar{\tau}-\mathbf{\hat{r}}\cdot\mathbf{\bar{\xi}}^{\pm}(\bar{\tau})\big{]}}\,\mathrm{d}\bar{\tau}\,. \tag{27}\]
Equations (25)-(27), supplemented with Eqs. (13)-(17), give the energy-angle spectrum of radiation from an electron beam exiting a perfect conductor and being reflected by a stationary and homogeneous electric sheath field. This spectrum depends not only on the distribution functions \(f(\mathbf{r}_{\perp})\), \(h(t)\) and \(g(\mathbf{u})\) characterizing the beam source at the target front side, but also on the finite target thickness, which determines the longitudinal and transverse spreading of the beam after its passage through the foil.
As explained in Sec. II.3, the respective contributions of CTR (\(\partial^{2}{\cal I}_{\rm CTR}/\partial\nu\partial\Omega\)) and CSR (\(\partial^{2}{\cal I}_{\rm CSR}/\partial\nu\partial\Omega\)) to the total spectrum can be distinguished in the time integral involved in \(\mathbf{\bar{A}}^{\pm}(\mathbf{r},\nu)\) [Eq. (27)], namely,
\[\frac{\partial^{2}\mathcal{I}_{\rm CTR}}{\partial\nu\partial \Omega}(\mathbf{\hat{r}},\nu) =\frac{N_{h}^{2}q^{2}}{2\pi^{2}\varepsilon_{0}c}\left|\int g(\mathbf{u})F( \theta,\nu,\mathbf{u})\bar{\mathbf{A}}_{\rm TR}\left(\mathbf{\hat{r}},\nu\right)\,{\rm d}^{3 }\mathbf{u}\right|^{2}\,, \tag{28}\] \[\frac{\partial^{2}\mathcal{I}_{\rm CSR}}{\partial\nu\partial \Omega}(\mathbf{\hat{r}},\nu) =\frac{N_{h}^{2}q^{2}}{2\pi^{2}\varepsilon_{0}c}\left|\int g(\mathbf{ u})F(\theta,\nu,\mathbf{u})\Bigg{\{}i\pi\bar{\nu}\Big{[}\bar{\mathbf{A}}^{-}(\mathbf{ \hat{r}},\nu)-\bar{\mathbf{A}}^{+}(\mathbf{\hat{r}},\nu)\Big{]}-\bar{\mathbf{A}}_{\rm TR} \left(\mathbf{\hat{r}},\nu\right)\Bigg{\}}\,{\rm d}^{3}\mathbf{u}\right|^{2}\,, \tag{29}\]
where \(\bar{\mathbf{A}}_{\rm TR}\left(\mathbf{\hat{r}},\nu\right)\) characterizes the full transition radiation of a particle in the vicinity of the perfect conductor [stages (iii) and (v) in Fig. 2] and is defined by:
\[\bar{\mathbf{A}}_{\rm TR}\left(\mathbf{\hat{r}},\nu\right) =\cos\left(\frac{\Theta_{\rm e}-\Theta_{\rm r}}{2}\right)\] \[\times\left[\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\rm e}^{-}}{1- \mathbf{\beta}_{\rm e}^{-}\cdot\mathbf{\hat{r}}}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{ \rm r}^{-}}{1-\mathbf{\beta}_{\rm r}^{-}\cdot\mathbf{\hat{r}}}\right]e^{-i\Theta_{\rm e }}\,, \tag{30}\]
where \(\Theta_{\rm r}=\Theta_{\rm e}+2\pi\bar{\nu}\left[\overline{\Delta t}_{\rm r}-\mathbf{\hat{r}}\cdot\mathbf{\bar{\xi}}\left(\overline{\Delta t}_{\rm r}\right)\right]\).
The numerical evaluation of Eqs. (25), (28) and (29) is challenging because of the four nested integrals involved, or even seven if one wishes to compute the total radiated energy (integrated over observation angles (\(\theta,\Psi\)) and frequencies \(\nu\)). To simplify, we will assume hereafter that the system is axisymmetric with respect to the target normal (\(\mathbf{\hat{z}}\) axis). The integration over (\(\mathbf{\hat{r}},\nu\)) then reduces to an integration over (\(\theta,\nu\)) due to invariance of \(\partial^{2}\mathcal{I}/\partial\nu\partial\Omega\) over \(\Psi\). In this work, the calculation over the angle of observation, frequency and momentum is parallelized on multiple GPUs, using a 2D kernel (\(\theta\times\nu,u\times\psi\times\varphi\)) to efficiently distribute the workload. The innermost time integration [Eq. (27)] is performed over 1000 time steps through a type-2 nonuniform Fourier transform, as defined in Refs. [50; 51; 52]. Numerical integrals over (\(u,\psi,\varphi\)) are computed using \(512\times 128\times 64\) points for each (\(\nu,\theta\)) pair. Radiated spectra are discretized over 160\(\times\)90 points in (\(\nu,\theta\)) space, amounting to a total of \(\sim 6\times 10^{10}\) nonuniform Fourier transforms. This makes parametric scans quite computationally expensive despite the GPU parallelization.
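To make the innermost step explicit, the sketch below evaluates the reduced amplitudes \(\mathbf{\bar{A}}^{\pm}\) of Eq. (27) and the corresponding single-electron spectrum [Eq. (9)/(25) without the beam average] by a plain Riemann sum. It is a brute-force, single-CPU stand-in for the type-2 nonuniform Fourier transform described above, and the electron momentum, observation angle and frequency grid are illustrative choices.

```python
# Sketch: single-electron TSR spectrum via brute-force quadrature of Eq. (27),
# standing in for the type-2 nonuniform Fourier transform of the actual solver.
# Units are normalized (the overall factor q^2 nu_bar^2 / 2 eps0 c is omitted).
import numpy as np

def tsr_single_electron(u_perp, u_z, theta, nu_bars, n_steps=1000):
    # Real-electron trajectory, Eqs. (13)-(17), transverse motion along x.
    gamma_perp, gamma_e = np.sqrt(1 + u_perp**2), np.sqrt(1 + u_perp**2 + u_z**2)
    tau = np.linspace(0.0, 2.0 * u_z, n_steps)
    gamma = np.sqrt((u_z - tau)**2 + gamma_perp**2)
    zeros = np.zeros_like(tau)
    beta = np.column_stack([u_perp / gamma, zeros, (u_z - tau) / gamma])
    xi_x = 0.5 * u_perp * np.log((gamma_e + u_z) * (gamma - u_z + tau)
                                 / ((gamma_e - u_z) * (gamma + u_z - tau)))
    xi = np.column_stack([xi_x, zeros, gamma_e - gamma])
    # Image charge: trajectory mirrored through the conductor plane z = 0.
    mirror = np.array([1.0, 1.0, -1.0])
    beta_img, xi_img = beta * mirror, xi * mirror
    r_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])   # observer in the x-z plane
    dtau = tau[1] - tau[0]
    spec = []
    for nb in nu_bars:
        ph = np.exp(-2j * np.pi * nb * (tau - xi @ r_hat))[:, None]
        ph_img = np.exp(-2j * np.pi * nb * (tau - xi_img @ r_hat))[:, None]
        A_minus = np.sum(np.cross(r_hat, beta) * ph, axis=0) * dtau       # Eq. (27)
        A_plus = np.sum(np.cross(r_hat, beta_img) * ph_img, axis=0) * dtau
        spec.append(nb**2 * np.sum(np.abs(A_minus - A_plus)**2))          # Eq. (9)/(25)
    return np.array(spec)

nu_bars = np.linspace(0.01, 2.0, 200)   # normalized frequencies nu m_e c / e E_0
spectrum = tsr_single_electron(u_perp=5.0, u_z=8.66, theta=np.radians(60), nu_bars=nu_bars)
```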
### Model parameters
To solve the previous formulas, the distribution functions defining the (axisymmetric) electron beam source need to be specified. The spatial and temporal profiles of the beam are assumed to be Gaussian,
\[f(r) =\frac{4\ln 2}{\pi w_{L}^{2}}e^{-4\ln 2\left(r/w_{L}\right)^{2}}\,, \tag{31}\] \[h(t) =\frac{2\sqrt{\ln 2}}{\sqrt{\pi}\tau_{L}}e^{-4\ln 2\left(t/\tau_{L} \right)^{2}}\,, \tag{32}\]
where the full-width-at-half-maximum diameter (\(w_{L}\)) and duration (\(\tau_{L}\)) are taken equal to those characterizing the intensity profile of the laser drive. The above profiles lead to the form factor
\[F(\theta,\nu,\mathbf{u})= \exp\!\left(-(\pi^{2}/4\ln 2)\nu^{2}\left[\tau_{L}^{2}+(w_{L}/c)^{ 2}\sin^{2}\theta\right]\right)\] \[\times \exp\!\left(-2i\pi\nu d\left(1-\mathbf{\hat{r}}_{\perp}\cdot\mathbf{ \beta}_{\perp}\right)/c\beta_{z}\right). \tag{33}\]
We further assume that the momentum distribution function of the beam is a separable function of absolute momentum \(u\) and direction angles (\(\psi,\varphi\)), i.e., \(g(\mathbf{u}){\rm d}^{3}\mathbf{u}=g_{u}(u)g_{\psi}(\psi)g_{\varphi}(\varphi)u^{2}\sin \psi\,{\rm d}u\,{\rm d}\psi\,{\rm d}\varphi\). Inspired by Ref. [21], we use
\[g_{u}(u) =\frac{1}{2\Delta u^{3}}\exp(-u/\Delta u)\,, \tag{34}\] \[g_{\psi}(\psi) =\frac{1}{\Delta\psi^{2}}\frac{e^{-\frac{1}{2}\left(\sin\psi/ \Delta\psi\right)^{2}}}{1-e^{-1/2\Delta\psi^{2}}}\cos\psi\,,\] (35) \[g_{\varphi}(\varphi) =1/2\pi\,, \tag{36}\]
where \(\Delta u\) and \(\Delta\psi\equiv\sin\Psi_{h}\) represent characteristic spreads in momentum and polar angle, respectively. The above distributions are normalized such that
\[\int_{0}^{\infty}g_{u}(u)\,u^{2}\,{\rm d}u=\int_{0}^{\pi/2}g_{\psi}(\psi)\,\sin \psi\,{\rm d}\psi=1\,. \tag{37}\]
The momentum spread \(\Delta u\) is chosen so that the average Lorentz factor
\[\langle\gamma_{h}\rangle=\int_{0}^{\infty}g_{u}(u)u^{2}\sqrt{u^{2}+1}\,{\rm d}u \sim\sqrt{9\Delta u^{2}+1} \tag{38}\]
equals the ponderomotive potential of the (linearly polarized) laser pulse, \(\gamma_{L}=\sqrt{1+a_{L}^{2}/2}\), where \(a_{L}\) is the dimensionless laser field strength [53].
The number of hot electrons carried by the beam is estimated from
\[N_{h}\simeq\frac{\eta_{h}\mathcal{E}_{L}}{m_{e}c^{2}\langle\gamma_{h}-1\rangle }=\frac{\mathcal{E}_{\rm beam}}{\langle\mathcal{E}_{h}\rangle}\,, \tag{39}\]
where \(\mathcal{E}_{L}=I_{L}(w_{L}^{2}\tau_{L}/8)(\pi/\ln 2)^{3/2}\) is the laser pulse energy, \(\langle\mathcal{E}_{h}\rangle=m_{e}c^{2}\langle\gamma_{h}-1\rangle\) the mean electron kinetic energy, \(\eta_{h}\) the laser-to-hot-electron energy conversion efficiency, and \(\mathcal{E}_{\rm beam}=\eta_{h}\mathcal{E}_{L}\) the total kinetic beam energy. Based on experimental measurements [54; 55], a constant value \(\eta_{h}=0.2\) will be assumed in the following.
Introducing \(n_{\rm hr0}\) as the hot-electron density at the target backside, the strength of the sheath field is expected to be [32; 36]
\[E_{0}\simeq\alpha_{E_{0}}\sqrt{\frac{n_{\rm hr0}\langle\mathcal{E}_{h}\rangle}{\varepsilon_{0}}}\,, \tag{40}\]
where \(\alpha_{E_{0}}\leq 1\) is a scaling factor expressing that the effective average field strength experienced by the hot electrons in vacuum is lower than its maximum value at \(z=0\), \(E_{\rm max}=\sqrt{n_{\rm hr0}\langle\mathcal{E}_{h}\rangle/\varepsilon_{0}}\). We will take \(\alpha_{E_{0}}=0.5\) as a fiducial parameter. Moreover, \(n_{\rm hr0}\) can be related to the hot-electron density at the front side, \(n_{\rm hf0}\), defined by
\[n_{\rm hf0}\simeq\frac{\eta_{h}I_{L}}{m_{e}c^{3}\langle\gamma_{h}-1\rangle}\,, \tag{41}\]
through
\[n_{\rm hr0}=n_{\rm hf0}\left(\frac{w_{L}}{w_{h}(d)}\right)^{2}\,. \tag{42}\]
The transverse size of the beam, \(w_{h}(z)\), after ballistic propagation over a depth \(z\) through the target is estimated [56] as
\[w_{h}(z)\simeq w_{L}\sqrt{1+\left(\frac{2z\tan\Psi_{h}}{w_{L}}\right)^{2}}\,. \tag{43}\]
The sheath field \(E_{0}\) is obtained from Eqs. (40)-(43).
## III Phenomenology of CTSR and Parametric Dependencies
### Interplay of transition and synchrotron radiations
As a first illustration of our modeling, we consider the case of a \(2\,\mu\)m thick target exposed to an intense laser pulse characterized by \(a_{L}=15\), \(\lambda_{L}=1\,\mu\)m, \(\tau_{L}=30\,\)fs and \(w_{L}=5\,\mu\)m. These parameters yield a peak pulse intensity \(I_{L}=3.1\times 10^{20}\,\)W cm\({}^{-2}\) and pulse energy \(\mathcal{E}_{L}=2.8\,\)J. The beam divergence is taken to be \(\Psi_{h}=30^{\circ}\). For these parameters (together with \(\eta_{h}=0.2\)), one obtains \(\mathcal{E}_{\rm beam}\simeq 0.56\,\)J, \(\mathcal{E}_{h}\simeq 4.9\,\)MeV, \(N_{h}\simeq 7.03\times 10^{11}\) and \(E_{0}\simeq 6.93\times 10^{12}\,\)V m\({}^{-1}\).
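These figures follow directly from Eqs. (38)-(43); as a sanity check, the minimal sketch below recomputes them for the quoted laser parameters (physical constants are rounded, so the last digits may differ slightly from those quoted above).

```python
# Sketch: cross-checking the quoted beam and sheath-field numbers from the
# model formulas, Eqs. (38)-(43), for the parameters of Fig. 3 (SI units).
import numpy as np

e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12
a_L, w_L, tau_L, d, psi_h, eta_h = 15.0, 5e-6, 30e-15, 2e-6, np.radians(30), 0.2
I_L = 3.1e24                                   # peak intensity, W/m^2 (3.1e20 W/cm^2)

E_laser = I_L * (w_L**2 * tau_L / 8) * (np.pi / np.log(2))**1.5      # pulse energy
gamma_h = np.sqrt(1 + a_L**2 / 2)              # ponderomotive scaling for <gamma_h>
E_h = m_e * c**2 * (gamma_h - 1)               # mean hot-electron energy (J)
N_h = eta_h * E_laser / E_h                    # Eq. (39)
n_hf0 = eta_h * I_L / (m_e * c**3 * (gamma_h - 1))                   # Eq. (41)
w_hd = w_L * np.sqrt(1 + (2 * d * np.tan(psi_h) / w_L)**2)           # Eq. (43)
n_hr0 = n_hf0 * (w_L / w_hd)**2                # Eq. (42)
E_0 = 0.5 * np.sqrt(n_hr0 * E_h / eps0)        # Eq. (40) with alpha_E0 = 0.5

print(f"E_laser = {E_laser:.2f} J, <E_h> = {E_h/e/1e6:.2f} MeV")
print(f"N_h = {N_h:.2e}, E_0 = {E_0:.2e} V/m")
# Expected output, consistent with the text: ~2.8 J, ~4.9 MeV, ~7e11, ~6.9e12 V/m.
```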
Figure 3(a) shows the angular distribution of the full energy spectrum (in \(\mu\)J ps sr\({}^{-1}\) units) for \(0.1\leq\nu\leq 40\,\)THz, while Figs. 3(b) and (c) display the spectra (in mJ ps sr\({}^{-1}\) units) due to only CTR and CSR, respectively. One can see that the full spectrum is peaked around \(\nu\simeq 10-25\,\)THz and \(\theta\simeq 60-90^{\circ}\). By contrast, CTR and CSR give rise separately to almost identical - but much more intense - THz spectra concentrated at lower frequencies and angles.
The latter result can be readily understood upon noting that the first (second) half of the electron trajectory in vacuum, before (resp. after) its turning point, can be viewed, given its femtosecond timescale (\(t\sim m_{e}\gamma_{\rm e}c/eE_{0}\sim 1\,\)fs), as a sudden deceleration along \(\hat{\mathbf{z}}\) (resp. acceleration towards \(-\hat{\mathbf{z}}\)) with respect to THz-range synchrotron radiation. Since, by contrast, transition radiation at the exit (return) time corresponds to a sudden apparent acceleration (resp. deceleration), the two radiation mechanisms generate very similar THz fields but of opposite polarity and so almost exactly cancel out, especially at low frequencies (\(\lesssim 10\,\)THz here) and angles (\(\lesssim 40^{\circ}\)). Mathematically speaking, this description merely amounts to taking \(\Delta t_{\rm r}\to 0\) in Eq. (27) so that Eq. (29) converges to Eq. (28). It is to be noted that the average time spent by the electrons in vacuum, \(\langle\Delta t_{\rm r}\rangle\), can be computed exactly as
\[\langle\overline{\Delta t}_{\rm r}\rangle =\iint g_{u}(u)\,g_{\psi}(\psi)\,\overline{\Delta t}_{\rm r}\,u^{2}\sin\psi\,{\rm d}u\,{\rm d}\psi \tag{44}\] \[=-3\Delta u\left(1+\coth\frac{X_{\psi}^{2}}{2}\right)\left(X_{\psi}^{-1}D\left(X_{\psi}\right)-1\right)\,, \tag{45}\]
where \(D(x)=e^{-x^{2}}\int_{0}^{x}e^{t^{2}}\,{\rm d}t\) is the Dawson function, \(X_{\psi}\equiv 1/(\sqrt{2}\,\Delta\psi)\) and \(\overline{\Delta t}_{\rm r}=2u_{z}=2u\cos\psi\). For the parameters of Fig. 3, one finds \(\langle\Delta t_{\rm r}\rangle\simeq 4\,\)fs.
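The closed form (45) is straightforward to verify; the sketch below compares it with a direct quadrature over the distributions (34)-(35) and converts the result to femtoseconds using the sheath field \(E_{0}\simeq 6.9\times 10^{12}\,\)V m\({}^{-1}\) estimated above (SciPy's `dawsn` supplies the Dawson function; the value of \(\Delta u\) is the one implied by Eq. (38) for \(a_{L}=15\)).

```python
# Sketch: checking the closed form (45) for the mean vacuum excursion time
# against direct quadrature over the momentum distribution, Eqs. (34)-(35).
import numpy as np
from scipy.special import dawsn
from scipy.integrate import dblquad

du = np.sqrt((1 + 15.0**2 / 2 - 1.0) / 9.0)     # Delta u from <gamma_h> = sqrt(9 du^2 + 1)
dpsi = np.sin(np.radians(30.0))                 # Delta psi = sin(Psi_h)
X = 1.0 / (np.sqrt(2.0) * dpsi)

# Closed form, Eq. (45)
dt_closed = -3 * du * (1 + 1 / np.tanh(X**2 / 2)) * (dawsn(X) / X - 1)

# Direct quadrature of <2 u cos(psi)> over g_u(u) g_psi(psi) u^2 sin(psi) du dpsi
g_u = lambda u: np.exp(-u / du) / (2 * du**3)
g_psi = lambda p: np.exp(-0.5 * (np.sin(p) / dpsi)**2) * np.cos(p) \
                  / (dpsi**2 * (1 - np.exp(-0.5 / dpsi**2)))
dt_quad, _ = dblquad(lambda p, u: 2 * u * np.cos(p) * g_u(u) * u**2 * g_psi(p) * np.sin(p),
                     0, 60 * du, 0, np.pi / 2)
print(dt_closed, dt_quad)                        # both ~17 in normalized units

E_0 = 6.93e12                                    # V/m, sheath field estimated above
print(dt_closed * 9.109e-31 * 2.998e8 / (1.602e-19 * E_0) * 1e15, "fs")   # ~4 fs
```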
Such a strong interplay of CTR and CSR therefore precludes their separate treatment; otherwise, the low-frequency part of the spectrum would be inaccurately modeled and, worse, the radiated energy would be unphysically overestimated. Indeed, while the integration of \(\partial^{2}\mathcal{I}_{\rm CTSR}/\partial\nu\partial\Omega\) over \(0.1<\nu<40\,\)THz and \(\Omega\) gives a total radiated energy of \(\mathcal{E}_{\rm CTSR}\simeq 4.86\times 10^{-3}\,\mathcal{E}_{\rm beam}\simeq 2.7\,\)mJ, the CTR and CSR spectra, both scaling as \(N_{h}^{2}\), contain alone an energy much exceeding the total beam energy, i.e., \(\mathcal{E}_{\rm CTR}\simeq\mathcal{E}_{\rm CSR}\simeq 88\,\mathcal{E}_{\rm beam}\).
Besides the high-pass filtering effect caused by the ultrashort reflection timescale of the electrons in vacuum, the form factor \(F(\theta,\nu,\mathbf{u})\) that characterizes the spatiotemporal coherence of the entire beam acts as a low-pass filter. The high-frequency shape of the spectrum can be approximately captured by the attenuation factor \(\eta_{\rm coh}\) defined by
\[\eta_{\rm coh}(\theta,\nu) =\left|\int g(\mathbf{u})F(\theta,\nu,\mathbf{u})\,{\rm d}^{3}\mathbf{u}\right|^{2}\] \[=C_{\rm coh}\,e^{-(\pi^{2}/2\ln 2)\nu^{2}\left[\tau_{L}^{2}+(w_{L}/c)^{2}\sin^{2}\theta\right]}\,, \tag{46}\]
where
\[C_{\rm coh} =\left|\int g(\mathbf{u})\,e^{-2i\pi\bar{\nu}\bar{d}\left[1-\mathbf{\hat{r}}\cdot\mathbf{\beta}_{\perp}\right]/\beta_{z}}\,{\rm d}^{3}\mathbf{u}\right|^{2} \tag{47}\]
quantifies the coherence of the beam after its propagation through the target of thickness \(d\). If the target is sufficiently thin, the propagation effects can be neglected and since \(\int g(\mathbf{u}){\rm d}^{3}\mathbf{u}=1\), one has \(C_{\rm coh}\simeq 1\). In this case, the contour lines of the low-pass filter can be computed as a function of \(\theta\) and \(\nu\) through
\[\eta_{\rm coh}(\theta,\nu)\simeq e^{-(\pi^{2}/2\ln 2)\nu^{2}\left[\tau_{L}^{2}+(w_{L}/c)^{2}\sin^{2}\theta\right]}\,. \tag{48}\]
As seen in Fig. 3(a), the radiated spectrum assumes the shape of a band located, for our parameters, between \(\nu\simeq 10\,\)THz and the approximate upper bound
\[\nu_{c}(\theta)\simeq\sqrt{\frac{-2\ln 2\,\ln\eta_{\rm coh}}{\pi^{2}\left[\tau_{L}^{2}+(w_{L}/c)^{2}\sin^{2}\theta\right]}}\,. \tag{49}\]
This expression is plotted as a white dashed line in Fig. 3(a) for \(\eta_{\rm coh}(\theta,\nu_{c})=3\times 10^{-3}\), leading to \(25\leq\nu_{c}(\theta)\leq 30\,\)THz.
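A minimal sketch of this estimate, assuming the thin-target limit \(C_{\rm coh}\simeq 1\) and the Fig. 3 parameters:

```python
# Sketch: coherence cutoff of Eq. (49) for the Fig. 3 parameters,
# taking C_coh ~ 1 (thin target) and eta_coh = 3e-3 as in the text.
import numpy as np

c, tau_L, w_L, eta_coh = 2.998e8, 30e-15, 5e-6, 3e-3
theta = np.radians(np.linspace(0, 90, 7))
nu_c = np.sqrt(-2 * np.log(2) * np.log(eta_coh)
               / (np.pi**2 * (tau_L**2 + (w_L / c)**2 * np.sin(theta)**2)))
print(nu_c / 1e12)   # ~30 THz at theta = 0, decreasing to ~26 THz at 90 degrees
```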
### Influence of the sheath field
Equation (40) represents a crude expression of the sheath field strength which, in reality, varies both with space and time [32, 36]. To examine the dependency of the THz radiation spectrum on \(E_{0}\), we perform a parametric study over \(\alpha_{E_{0}}\).
Increasing the sheath field strength shortens the electron excursion time in vacuum: the CTR and CSR fields then compensate each other even better so that the energy radiated by CTSR is further reduced. Conversely, decreasing \(E_{0}\) allows the particles to propagate over larger distances and radiate more efficiently.
Figure 4 shows the variation in the radiated energy (integrated over solid angles and the \(0.1-40\) THz frequency range) with \(\alpha_{E_{0}}\leq 1\), for the same laser-plasma parameters as in Fig. 3. The energies radiated through CTR and CSR, taken separately, decrease when lowering \(\alpha_{E_{0}}\) whilst the total radiated energy rapidly rises (by \(\sim 10\times\) from \(\alpha_{E_{0}}=1\) to \(\alpha_{E_{0}}=0.5\), and by \(\sim 20\times\) from \(\alpha_{E_{0}}=0.5\) to \(\alpha_{E_{0}}=0.2\)). As expected, CSR tends to vanish when \(\alpha_{E_{0}}\to 0\) so that CTR then accounts for all of the radiated energy. Yet, the electrons then only radiate coherently while exiting the target because they are greatly spread out upon re-entering the foil. When \(\alpha_{E_{0}}\to 1\), by contrast, they are reflected so rapidly into the target that they remain packed enough to emit two almost coincident CTR bursts [stages (iii) and (v) in Fig. 2] of same polarity (the first from the accelerating real particles, the second from the decelerating image particles), the total energy of which is four times that of the initial CTR burst.
Of course, \(\alpha_{E_{0}}\) is just a fitting parameter, introduced for simplicity to capture the overall effect of the highly nonstationary sheath field [32, 44]. Precautions must therefore be taken when analyzing the radiated energy for very weak sheath fields. Notably, below \(\alpha_{E_{0}}\simeq 0.2\), the hot electrons spend so much time (\(\gtrsim 10\,\)fs) in vacuum before returning to the target that a sizable (\(>0.1\)) fraction of the beam energy is predicted to be radiated away through CTSR. Even worse, below \(\alpha_{E_{0}}\simeq 0.1\), the total emitted energy approaches that potentially radiated by all accelerated electrons through CTR alone, entailing non-physically high (\(>\mathcal{E}_{\rm beam}\)) radiated energies. A proper treatment of the problem for such weak sheath fields would therefore require a self-consistent description of the backreaction of radiation on the electron dynamics, as is done, _e.g._, in accelerator physics [57]. This is a challenging task, well exceeding the scope of the present
Figure 4: Radiated THz energy (integrated over solid angles and \(0.1-40\,\)THz frequencies) as a function of the sheath-field parameter \(\alpha_{E_{0}}\), as defined by Eq. (40). The blue solid curve, in \(\log_{10}\,\)scale, represents the full radiated energy while the red dashed and dotted curves plot the energy yields of CTR and CSR, respectively. The laser-plasma parameters are those used in Fig. 3 (\(a_{L}=15\)). The reference case \(\alpha_{E_{0}}=0.5\) corresponds to a field \(E_{0}\simeq 6.9\times 10^{12}\,\)V m\({}^{-1}\).
semianalytical model.
### Contribution of escaping ballistic electrons
As seen previously, very efficient compensation of the radiated THz fields takes place when the fast electrons are rapidly pulled back into the target, as happens under the considered interaction conditions. We have so far assumed that all of the fast electrons are drawn back into the target, yet it is well known that a higher-energy fraction of them can escape the sheath potential [33; 34]. If these escaping electrons are numerous enough, their uncompensated transition radiation may dominate the total radiation yield. Here, we will assess the impact on the overall THz radiation of an increasing fraction (\(\eta_{\rm esc}\), considered as a free parameter) of escaping electrons.
To this purpose, we introduce the cutoff energy \(u_{c}(\eta_{\rm esc})\equiv p_{c}(\eta_{\rm esc})/m_{e}c\) beyond which the electrons escape the target, which fulfills
\[\eta_{\rm esc} =\int_{u_{\rm c}}^{\infty}g_{u}(u)u^{2}\mathrm{d}u\] \[=e^{-u_{\rm c}/\Delta u}(2\Delta u^{2}+u_{\rm c}^{2}+2u_{\rm c} \Delta u)/2\Delta u^{2}\,, \tag{50}\]
where \(\eta_{\rm esc}\) is usually estimated to be in the percent range [34]. To simplify, we assume that the escaping electrons (characterized by \(u\geq u_{\rm c}\)) keep on propagating ballistically after exiting the target backside, and hence only emit a single burst of CTR [stage (iii) in Fig. 2]. Equations (25), (28) and (29) can thus be recast as (\(\bar{\nu}\equiv\nu m_{e}c/eE_{0}\)):
\[\frac{\partial^{2}\mathcal{I}_{\rm CTSR}}{\partial\nu\partial\Omega}(\mathbf{\hat{r}},\nu) =\frac{N_{h}^{2}q^{2}}{2\pi^{2}\varepsilon_{0}c}\left|\int_{0}^{u_{\rm c}}\iint_{\psi,\varphi}g(\mathbf{u})F(\theta,\nu,\mathbf{u})\,i\pi\bar{\nu}\Big{[}\bar{\mathbf{A}}^{-}(\mathbf{\hat{r}},\nu)-\bar{\mathbf{A}}^{+}(\mathbf{\hat{r}},\nu)\Big{]}\mathrm{d}^{3}\mathbf{u}\right.\] \[\left.\qquad\qquad+\frac{1}{2}\,\int_{u_{\rm c}}^{\infty}\iint_{\psi,\varphi}g(\mathbf{u})F(\theta,\nu,\mathbf{u})\left[\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\rm e}^{-}}{1-\mathbf{\beta}_{\rm e}^{-}\cdot\mathbf{\hat{r}}}-\frac{\mathbf{\hat{r}}\times\mathbf{\beta}_{\rm e}^{+}}{1-\mathbf{\beta}_{\rm e}^{+}\cdot\mathbf{\hat{r}}}\right]e^{-i\Theta_{\rm e}(\mathbf{u})}\,\mathrm{d}^{3}\mathbf{u}\right|^{2}\,, \tag{51}\]
\[\frac{\partial^{2}\mathcal{I}_{\rm CTR}}{\partial\nu\partial\Omega}(\mathbf{\hat{r}},\nu) =\frac{N_{h}^{2}q^{2}}{2\pi^{2}\varepsilon_{0}c}\left|\int_{0}^{u_{\rm c}}\iint_{\psi,\varphi}g(\mathbf{u})F(\theta,\nu,\mathbf{u})\bar{\mathbf{A}}_{\rm TR}\left(\mathbf{\hat{r}},\nu\right)\,\mathrm{d}^{3}\mathbf{u}\right|^{2}\,, \tag{52}\]
\[\frac{\partial^{2}\mathcal{I}_{\rm CSR}}{\partial\nu\partial \Omega}(\mathbf{\hat{r}},\nu) =\frac{N_{h}^{2}q^{2}}{2\pi^{2}\varepsilon_{0}c}\left|\int_{0}^{u_ {\rm c}}\iint_{\psi,\varphi}g(\mathbf{u})F(\theta,\nu,\mathbf{u})\!\left\{i\pi\bar{ \nu}\Big{[}\bar{\mathbf{A}}^{-}(\mathbf{\hat{r}},\nu)-\bar{\mathbf{A}}^{+}(\mathbf{\hat{r}}, \nu)\Big{]}-\bar{\mathbf{A}}_{\rm TR}\left(\mathbf{\hat{r}},\nu\right)\right\}\mathrm{ d}^{3}\mathbf{u}\right|^{2}\,. \tag{53}\]
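The cutoff momentum \(u_{\rm c}\) entering the integration limits above follows from inverting Eq. (50); in terms of \(y=u_{\rm c}/\Delta u\) the condition reads \(e^{-y}(1+y+y^{2}/2)=\eta_{\rm esc}\), which a one-dimensional root find handles. A minimal sketch (the value of \(\Delta u\) is the one implied by Eq. (38) for \(a_{L}=15\)):

```python
# Sketch: inverting Eq. (50) for the escape cutoff momentum u_c.
import numpy as np
from scipy.optimize import brentq

def u_cut(eta_esc, delta_u):
    # Solve exp(-y) (1 + y + y^2/2) = eta_esc for y = u_c / delta_u.
    f = lambda y: np.exp(-y) * (1 + y + 0.5 * y**2) - eta_esc
    return delta_u * brentq(f, 0.0, 100.0)

delta_u = 3.54                      # Delta u such that <gamma_h> matches a_L = 15, cf. Eq. (38)
for eta in (0.01, 0.05, 0.10):
    print(eta, u_cut(eta, delta_u))
```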
Figure 5 shows the evolution of the radiation yield in the \(0.1-40\) THz frequency range when \(\eta_{\rm esc}\) increases from \(0\,\%\) to \(10\,\%\). We observe that the CTR yield progressively diminishes with \(\eta_{\rm esc}\) while the CSR yield decreases twice as fast.
The decreasing CTR yield is due to the fact that, as \(\eta_{\rm esc}\) rises, fewer and fewer electrons emit CTR at time \(t=t_{\rm r}\) while the number of electrons emitting CTR at \(t=t_{\rm e}\) remains unchanged. Similarly, the CSR yield drops because fewer particles are decelerated. Therefore, when increasing \(\eta_{\rm esc}\), one source of CTR is preserved [stage (iii) in Fig. 2] whereas the whole source of CSR is degraded
Figure 5: Radiated THz energy (integrated over solid angles and \(0.1-40\) THz frequencies) as a function of the fraction of escaping electrons \(\eta_{\rm esc}\). The blue solid curve, in \(\log_{10}\) scale, represents the full radiated energy while the red dashed and dotted curves plot the energy yields of CTR and CSR, respectively. The laser-plasma parameters are those used in Fig. 3 (\(a_{L}=15\)). The radiated spectra of cases (a)-(c) are plotted in Fig. 6.
[stage (iv) of Fig. 2], which explains the trends seen in Fig. 5.
In parallel, the total (CTSR) THz yield weakly varies (reaching a slight minimum near \(\eta_{\mathrm{esc}}\simeq 1\,\%\)) for \(\eta_{\mathrm{esc}}\lesssim 2\,\%\) but rapidly rises beyond this value, by \(\sim 65\times\) between \(\eta_{\mathrm{esc}}=1\,\%\) and \(\eta_{\mathrm{esc}}=10\,\%\). Note that our neglect of radiative losses becomes questionable at \(\eta_{\mathrm{esc}}=10\,\%\), for which the yield attains \(\mathcal{E}_{\mathrm{CTSR}}\simeq 0.2\,\mathcal{E}_{\mathrm{beam}}\).
Figure 6 displays the frequency-angle spectra associated with \(\eta_{\mathrm{esc}}=1\,\%\), \(5\,\%\) and \(10\,\%\), corresponding, respectively, to markers (a), (b) and (c) in Fig. 5. When \(\eta_{\mathrm{esc}}\) rises from zero, the spectrum develops an increasingly prominent low-frequency structure, with a large-angle tail around \(\theta\simeq 40-70^{\circ}\), quite above the electron beam's divergence (\(\Psi_{h}=30^{\circ}\)). For thin (\(d=2\,\mu\)m) targets, the hierarchy between CTSR from the confined electrons and CTR from the escaping electrons is reversed at around \(\eta_{\mathrm{esc}}\simeq 1\,\%\) [Fig. 6(a)], leading to a doubly peaked spectrum, yet containing a comparable integrated energy (\(\simeq 3.5\times 10^{-3}\,\mathcal{E}_{\mathrm{beam}}\simeq 2.0\,\mathrm{mJ}\)) as for \(\eta_{\mathrm{esc}}=0\%\) (Fig. 3). When \(\eta_{\mathrm{esc}}>1\%\) [Fig. 6(b,c)], the low-frequency signal from the escaping electrons largely prevails.
### Influence of the target thickness
As the electron beam propagates ballistically through the target, it spreads transversely and longitudinally depending on its momentum distribution function. The increased electron dilution at the backside of a thicker target also entails a weaker sheath field, thus allowing the electrons to propagate in vacuum over larger distances and longer times. As seen in Sec. III.2, this tends to reduce the respective yields and frequency ranges of CTR and CSR but also, and above all, to hamper the field cancellation of these two emissions, hence causing a net increase in the CTSR yield.
To illustrate this behavior, we plot in Fig. 7 the total THz spectra obtained for \(d=20\,\mu\)m and an escaping electron fraction ranging from \(1\,\%\) to \(10\,\%\). The main observations are that the radiated energy is \(\sim 16\times\) higher than at \(d=2\,\mu\)m and that the spectrum remains essentially unchanged up to \(\eta_{\mathrm{esc}}\simeq 2-3\,\%\). This means that the THz radiation is less sensitive to the (unknown) fraction of escaping electrons in thicker foils. When \(\eta_{\mathrm{esc}}=10\,\%\), the signal from the recirculating electrons is admittedly less intense than the (lower-frequency) one due to the escaping electrons, but still visible. Moreover, compared to the case of \(d=2\,\mu\)m and \(\eta_{\mathrm{esc}}=0\,\%\), the spectra associated with \(d=20\,\mu\)m have shifted to lower angles (\(\sim 25-45^{\circ}\)) and frequencies (\(\sim 10\,\)THz). This behavior results from the lower coherence of the electron beam (characterized by \(C_{\mathrm{coh}}\), see Sec. III.1) at the rear side of the target. To illustrate the effect of coherence loss, we overlay in Fig. 7 the cutoff frequency \(\nu_{c}(\theta,d)\) (white dashed line) as obtained by numerically solving Eqs. (46) and (47) for \(\eta_{\mathrm{coh}}=3\times 10^{-3}\). This cutoff frequency captures fairly well the high-\(\nu\) shape of the CTSR spectrum.
The prediction of a THz yield increasing with the target thickness between \(d=2\,\mu\)m and \(d=20\,\mu\)m and for \(\eta_{\mathrm{esc}}=1-5\,\%\) [Figs. 6(a,b) and 7(a,b)] may seem dubious given the absence of supporting experimental evidence, but should be considered along with the \(\eta_{\mathrm{esc}}=10\,\%\) case. The latter indeed suggests that, for a given fraction of escaping electrons \(\eta_{\mathrm{esc}}\), the THz yield will eventually drop with the target thickness as CTSR, which is increasingly dominated by CTR when \(\eta_{\mathrm{esc}}>1\,\%\), weakens due to degraded beam coherence.
### Influence of the laser focusing
We now examine how the THz radiation depends on the laser pulse intensity (\(\propto\,a_{L}^{2}\)) at fixed laser energy (\(\propto\,a_{L}^{2}w_{L}^{2}\)). To do so, we vary the laser field strength \(a_{L}\) while adjusting the laser spot size \(w_{L}\,\propto\,1/a_{L}\) and keeping the laser-to-hot-electron conversion efficiency fixed to \(\eta_{h}=0.2\). Since \(\langle\gamma_{h}\rangle\,\propto\,a_{L}\), lowering \(a_{L}\) causes the number of hot electrons \(N_{h}\,\propto\,\mathcal{E}_{L}/\langle\gamma_{h}\rangle\) to rise, leading to stronger separate CTR and CSR yields. Predicting the
Figure 6: Frequency-angle spectra for various fractions of escaping electrons: (a) \(\eta_{\mathrm{esc}}=1\,\%\), (b) \(\eta_{\mathrm{esc}}=5\,\%\) and (c) \(\eta_{\mathrm{esc}}=10\,\%\), corresponding to cases (a)-(c) in Fig. 5. The other laser-plasma parameters are those used in Fig. 3 (\(a_{L}=15\)). The cutoff frequency \(\nu_{c}(\theta)\) is obtained from Eqs. (49) and (47) with \(\eta_{\mathrm{coh}}=3\times 10^{-3}\).
net effect on CTSR, though, is not straightforward as it is extremely sensitive to the normalized return time \(\langle\overline{\Delta t_{r}}\rangle\propto\Delta u\propto a_{L}\), as shown by Eqs. (25) - (27) and Secs. III.1 and III.2.
Figure 8 plots the THz radiated spectra for (top) \(d=2\,\mu\)m and (bottom) \(d=20\,\mu\)m, the laser pulse energy being set to \(\mathcal{E}_{L}=2.8\,\)J, the laser amplitude to \(a_{L}=5\), and thus the waist to \(w_{L}=15\,\mu\)m. Panels (b,c) [(e,f)] are to be compared directly to their \(a_{L}=15\), \(w_{L}=5\,\mu\)m equivalents in Fig. 6(a,b) [resp. Fig. 7(a,b)]. For \(d=2\,\mu\)m, and assuming full refluxing (\(\eta_{\rm esc}=0\,\%\)), the total radiated energy (\(\mathcal{E}_{\rm CTSR}\simeq 5.2\times 10^{-3}\,\mathcal{E}_{\rm beam}\)) is almost identical to that achieved at \(a_{L}=15\), yet the radiation is more sensitive to the escaping electrons: their contribution becomes overwhelmingly dominant even when their fraction is as low as 1 %, in which case the radiation yield is more than doubled (\(\mathcal{E}_{\rm CTSR}\simeq 1.3\times 10^{-2}\,\mathcal{E}_{\rm beam}\)). Again, when \(\eta_{\rm esc}\) is sufficiently low, increasing the target thickness from \(d=2\,\mu\)m to \(d=20\,\mu\)m significantly enhances the radiated energy, albeit to a lesser degree than at \(a_{L}=15\). The contribution of the recirculating electrons is then still significant at \(\eta_{\rm esc}=1\,\%\), but no longer so at \(\eta_{\rm esc}=5\,\%\). Note, however, that for both \(d=2\,\mu\)m and \(d=20\,\mu\)m, the model's predictions become questionable when \(\eta_{\rm esc}\gtrsim 5\,\%\) since \(\mathcal{E}_{\rm CTSR}\) then exceeds \(\sim 0.3\,\mathcal{E}_{\rm beam}\).
Interestingly, the \(\sim 10^{-2}\) beam-to-THz conversion efficiency predicted for \(a_{L}=5\) and \(\eta_{\rm esc}=1\,\%\) appears to be roughly consistent with the \(\sim 1\,\)mJ yield reported in Ref. [26] under comparable interaction conditions (\(a_{L}\simeq 5\), \(\tau_{L}\simeq 30\,\)fs, \(w_{L}\simeq 10\,\mu\)m), but with a thicker foil (\(d=5\,\mu\)m) and a lower cutoff frequency (\(\nu<9\,\)THz).
Overall, our model pinpoints the critical influence of some of its parameters, notably the sheath field strength and the fraction of escaping electrons. Yet, to our knowledge, most experiments to date have characterized THz emissions from laser-foil interactions over restricted frequency and angular ranges [26, 25], with the notable exception of Ref. [58], where the full THz distribution was diagnosed, but with an obliquely incident laser pulse and without any reported variation of the target thickness.
## IV Radiation from the expanding plasma
We now address the THz radiation that is subsequently emitted by the electron-proton plasma expanding due to the fast-electron-induced sheath field. Compared to previous efforts [26, 27, 28, 29, 59], our modeling will hinge on a more realistic description of the plasma acceleration, taking account of the time-decreasing areal charge at the accelerated ion front and of the adiabatic electron cooling taking place in thin foils.
### General formalism
A convenient formula to describe the far-field plasma expansion radiation (PER) is [60, 61]
\[\frac{\partial^{2}\mathcal{I}_{\rm PER}}{\partial\nu \partial\Omega}(\mathbf{\hat{r}},\nu)=\frac{1}{8\pi^{2}\varepsilon_{0}c^{3}}\\ \times\left|\iint\mathbf{\hat{r}}\times\left[\frac{\partial\mathbf{j}( \mathbf{r}^{\prime},t)}{\partial t}\right]_{t_{\rm ret}}e^{-2i\pi\nu t}\mathrm{d} t\mathrm{d}^{3}\mathbf{r}^{\prime}\right|^{2}\,, \tag{54}\]
where \(\mathbf{j}\) is the plasma current density which will be estimated below. We will assume, for simplicity, that the plasma is accelerated along the target normal only. Thus, \(\mathbf{j}(\mathbf{r},t)=j(\mathbf{r},t)\mathbf{\hat{z}}\) and the radiated spectrum associated with Eq. (54) reads
\[\frac{\partial^{2}\mathcal{I}_{\rm PER}}{\partial\nu\partial \Omega}(\theta,\nu)=\frac{\sin^{2}\theta}{8\pi^{2}\varepsilon_{0}c^{3}}\\ \times\left|\iint\left[\frac{\partial j}{\partial t}(\mathbf{r}^{ \prime},t)\right]_{t_{\rm ret}}e^{-2i\pi\nu t}\,\mathrm{d}t\,\mathrm{d}^{3} \mathbf{r}^{\prime}\right|^{2}\,. \tag{55}\]
Performing the change of variable \(t\to t_{\rm ret}\) leads to
\[\frac{\partial^{2}\mathcal{I}_{\rm PER}}{\partial\nu\partial\Omega}(\theta,\nu)=\frac{\sin^{2}\theta}{8\pi^{2}\varepsilon_{0}c^{3}}\\ \times\left|\int\!\!\!\int\frac{\partial j}{\partial t}(\mathbf{r}^{\prime},t^{\prime})e^{-2i\pi\nu\left(t^{\prime}-\mathbf{\hat{r}}\cdot\mathbf{r}^{\prime}/c\right)}\,\mathrm{d}t^{\prime}\,\mathrm{d}^{3}\mathbf{r}^{\prime}\right|^{2}\,. \tag{56}\]
This formula is the basis of the PER model constructed below.
### Plasma expansion in the isothermal and adiabatic regimes
To evaluate the current density \(j(\mathbf{r},t)\), we need to describe the dynamics of the expanding plasma. To do so, we start by considering the one-dimensional (1D) model proposed in Ref. [36], which applies to a collisionless plasma accelerated towards vacuum by the sheath field created by an isothermal electron population. According to this model, the plasma ion profile, initialized as a step function, retains a sharp front located at \(z_{\rm f,iso}(t)\) and moving at the velocity
\[\dot{z}_{\rm f,iso}(t)\equiv\beta_{\rm f,iso}(t)c\simeq 2c_{s0}\ln\left(\tau+ \sqrt{\tau^{2}+1}\right)\,, \tag{57}\]
where \(c_{s0}\simeq\sqrt{\langle\mathcal{E}_{h}\rangle/m_{p}}\) is the ion acoustic velocity, \(m_{p}\) the proton mass, \(\tau=\omega_{pi}t/\sqrt{2e_{N}}\) the normalized time, \(\omega_{pi}=\sqrt{n_{\rm hr0}e^{2}/m_{p}\varepsilon_{0}}\) the ion plasma frequency, \(e_{N}\equiv\exp(1)\), and \(n_{\rm hr0}\) the initial electron density at the target rear side. We estimate \(n_{\rm hr0}\) as in Eq. (42) but consider in addition a folding term (\(1+c\tau_{L}/2d\)) to describe the electron accumulation in thin foils during the laser irradiation [62]:
\[n_{\rm hr0}=\frac{\eta_{h}I_{L}}{m_{e}c^{3}\langle(\gamma_{h}-1)\beta_{z}\rangle}\left(\frac{w_{h}(0)}{w_{h}(d)}\right)^{2}\left(1+\frac{c\tau_{L}}{2d}\right)\,. \tag{58}\]
When the longitudinal extent \(\sim c\tau_{L}\) of the fast electron bunch is short compared to the target thickness, i.e., when \(c\tau_{L}\ll 2d\), the folding term approaches unity and the electron density is that given by Eq. (42). Conversely, when \(c\tau_{L}\gtrsim 2d\), the fast electrons recirculate several times before the pulse ends and the folding term increases the electron density accordingly.
The approximation of isothermal hot electrons ceases to be valid when the rarefaction waves coming from the two sides of the foil have converged to its center, which occurs at a time \(t_{\rm ad}\simeq d/2c_{\rm s0}\)[63]. The adiabatic cooling experienced by the fast electrons at \(t>t_{\rm ad}\) precipitates the decay of the sheath field, so that the ions eventually reach a maximum velocity. In a 1D geometry, this
maximum velocity is predicted to be [63]
\[\dot{z}_{\rm f,ad}\simeq 2c_{\rm s0}\ln\left(0.32d/\lambda_{D0}+4.2\right)\,, \tag{59}\]
where
\[\lambda_{D0}=c_{\rm s0}/\omega_{pi}=\sqrt{\varepsilon_{0}\langle\mathcal{E}_{h} \rangle/n_{\rm hr0}e^{2}} \tag{60}\]
is the initial Debye length of the hot electrons.
To model the dynamic transition between the isothermal and adiabatic (cooling) expansion regimes, we use the simple interpolation formula proposed in Ref. [64]:
\[\dot{z}_{\rm f,1D}(t)\simeq\left[\dot{z}_{\rm f,iso}^{-2}(t)+\dot{z}_{\rm f,ad }^{-2}\right]^{-1/2}. \tag{61}\]
As the time-dependent sheath field (as seen by the front ions) fulfills \(\ddot{z}_{\rm f,1D}(t)=eE_{\rm f,\,1D}(t)/m_{p}\), we deduce
\[E_{\rm f,1D}(t)=\frac{m_{p}}{e}\frac{\ddot{z}_{\rm f,iso}(t)}{\left[1+\dot{z}_{ \rm f,iso}^{2}(t)/\dot{z}_{\rm f,ad}^{2}\right]^{3/2}}\,. \tag{62}\]
Next, in order to consider the effect of the transverse spreading of the hot electrons, we note that the electric field initially scales as \(\sqrt{n_{\rm hr0}}\) in the purely 1D case [36], and so apply for \(t>\tau_{L}\) a correction factor \(\sqrt{n_{\rm hr}(t)/n_{\rm hr0}}=w_{h}(d)/w_{h}(d+c(t-\tau_{L}))\) to \(E_{\rm f,1D}\) above. Finally, following Refs. [64] and [65], we take into account the expected weakening of the sheath field when the ion front has moved a distance \(z_{\rm f}\) greater than its transverse extent \(w_{h}(d)\) [\(w_{h}(z)\) being the transverse electron beam size given by Eq. (43)]. The sheath field is then expected to decay in time as \(z_{\rm f}(t)^{-2}\) in a realistic 3D geometry [64; 65].
All these considerations motivate the following approximate expression of the sheath field
\[E_{\rm f,3D}(t) =\begin{cases}E_{\rm f,1D}(t)\left[1+\frac{z_{\rm f,3D}^{2}(t)}{w_{h}^{2}(d)}\right]^{-1}&,\ t<\tau_{L}\\ \frac{E_{\rm f,1D}(t)\,w_{h}(d)}{w_{h}(d+c(t-\tau_{L}))}\left[1+\frac{z_{\rm f,3D}^{2}(t)}{w_{h}^{2}(d)}\right]^{-1}&,\ t\geq\tau_{L}\end{cases} \tag{63}\] \[\ddot{z}_{\rm f,3D}(t) =\frac{e}{m_{p}}E_{\rm f,3D}(t)\,. \tag{64}\]
The above equations are solved numerically to obtain the time-varying position, velocity and acceleration of the (fastest) front ions.
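A possible way to carry out this integration is sketched below: the 1D field of Eq. (62) is built from the isothermal law (57), corrected as in Eq. (63), and Eq. (64) is advanced with a semi-implicit Euler step. The numerical inputs are illustrative placeholders meant to be evaluated from Eqs. (43) and (58)-(60) for the configuration of interest; this is not the solver used for Fig. 9.

```python
# Sketch: integration of the front-ion motion, Eqs. (57) and (61)-(64).
# All numerical inputs below are illustrative placeholders.
import numpy as np

c = 2.998e8
e_over_mp = 1.602e-19 / 1.673e-27
c_s0 = 2.2e7             # ion acoustic velocity sqrt(<E_h>/m_p), m/s (placeholder)
omega_pi = 1.0e14        # ion plasma frequency, s^-1 (placeholder)
d, w_L, psi_h = 2e-6, 5e-6, np.radians(30)
tau_L = 30e-15

w_h = lambda z: w_L * np.sqrt(1 + (2 * z * np.tan(psi_h) / w_L)**2)   # Eq. (43)
w_hd = w_h(d)
lambda_D0 = c_s0 / omega_pi                                           # Eq. (60)
v_ad = 2 * c_s0 * np.log(0.32 * d / lambda_D0 + 4.2)                  # Eq. (59)

def E_f1D(t):
    """Sheath field seen by the front ions in the mixed 1D model, Eq. (62)."""
    tau = omega_pi * t / np.sqrt(2 * np.e)
    v_iso = 2 * c_s0 * np.log(tau + np.sqrt(tau**2 + 1))              # Eq. (57)
    a_iso = 2 * c_s0 * omega_pi / (np.sqrt(2 * np.e) * np.sqrt(tau**2 + 1))
    return (a_iso / e_over_mp) / (1 + (v_iso / v_ad)**2)**1.5

# Semi-implicit Euler integration of Eq. (64) with the 3D corrections of Eq. (63)
dt, n_steps = 1e-17, 100_000
z_f, v_f = 0.0, 0.0
for i in range(n_steps):
    t = i * dt
    E = E_f1D(t) / (1 + (z_f / w_hd)**2)
    if t >= tau_L:
        E *= w_hd / w_h(d + c * (t - tau_L))
    v_f += e_over_mp * E * dt
    z_f += v_f * dt
print("beta_f,3D =", v_f / c)   # saturated front velocity
```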
As an example, Fig. 9 compares the normalized proton front velocities, \(\beta_{\rm f}=\dot{z}_{\rm f}/c\), as obtained from the above models. The laser-plasma parameters (\(a_{L}=15\), \(w_{L}=5\,\mu\)m, \(\tau_{L}=30\,\)fs, \(d=2\,\mu\)m) are the same as in Fig. 3. Plotted are the maximum adiabatic velocity [63]\(\dot{z}_{\rm f,ad}/c\), the unbounded ion velocity in the isothermal regime, \(\dot{z}_{\rm f,iso}/c\), and the ion velocity in the mixed isothermal/adiabatic regime without (\(\dot{z}_{\rm f,1D}/c\)) or with (\(\dot{z}_{\rm f,3D}/c\)) 3D corrections [64]. The rapid saturation of the ion velocity (reaching a maximum value \(\beta_{\rm f,3D}\simeq 0.18\) by \(t\simeq 0.1\,\)ps) that is predicted by the 3D adiabatic model is particularly manifest.
### Evaluation of the plasma current density
The plasma current density \(j(\mathbf{r},t)\) (resulting from both electron and ion contributions) involved in Eq. (56) is inferred from the charge density \(\rho(\mathbf{r},t)\) via the 1D charge conservation equation \(\partial_{t}\rho+\partial_{z}j=0\). As described in Ref. [36], the density profile of the expanding plasma exhibits two charge-separation regions: one around the mobile ion front (\(z\simeq z_{\rm f}\)) with negative areal charge density \(-\sigma_{\rm f}(t)\) and one around the initial plasma surface (\(z\simeq 0\)) with positive areal charge density \(+\sigma_{\rm f}(t)\). We will neglect the longitudinal extent of those nonneutral layers and model \(\rho(\mathbf{r},t)\) as a sum of two charged planes, centered at \(z\simeq z_{\rm f}(t)\) and \(z=0\),
\[\rho(\mathbf{r},t)=\sigma_{\rm f}(t)\left[\delta(z)-\delta(z-z_{\rm f}(t))\right] f_{\perp,{\rm PE}}(\mathbf{r}_{\perp})\,. \tag{65}\]
Here \(f_{\perp,{\rm PE}}(\mathbf{r}_{\perp})\) describes the transverse density profile of the expanding plasma. In principle, this profile should vary in time since the plasma expands both in the longitudinal and transverse directions. For simplicity, we consider a constant transverse distribution of Gaussian shape and same width as that of the recirculating electron beam:
\[f_{\perp,{\rm PE}}(\mathbf{r}_{\perp})=e^{-4\ln 2\,(r_{\perp}/w_{h}(d))^{2}}\,. \tag{66}\]
In a 1D geometry, the time-varying areal charge density is linked to the sheath field through \(\sigma_{\rm f}(t)=\varepsilon_{0}E_{\rm f,1D}(t)\). For simplicity, we assume that this relation remains approximately valid in the 3D expansion regime, i.e.,
\[\sigma_{\rm f}(t)=\varepsilon_{0}E_{\rm f,3D}(t)\,, \tag{67}\]
Figure 9: Time evolution of the ion front velocity as predicted by various models. Green dotted dashed curve: isothermal model [\(\beta_{\rm iso,1D}\) from Eq. (57)]. Blue dashed curve: mixed 1D isothermal/adiabatic model [\(\beta_{\rm f,1D}\) from Eq. (61)]. Orange solid curve: mixed 3D isothermal/adiabatic regime [\(\beta_{\rm f,3D}\) from Eq. (64)]. Black dashed curve: maximum velocity in the adiabatic regime [\(\beta_{\rm f,ad}\) from Eq. (59)]. The laser-plasma parameters are those used in Fig. 3.
with \(E_{\rm f,3D}(t)\) as defined by Eq. (63). This leads to the current density
\[j(\mathbf{r},t)=j_{\parallel}(z,t)f_{\perp,\rm PE}(\mathbf{r}_{\perp})\,, \tag{68}\]
where
\[j_{\parallel}(z,t) =-\int_{-\infty}^{z}\frac{\partial\rho(\mathbf{r},t)}{\partial t}\mathrm{d}z\] \[=-\dot{\sigma}_{\rm f}(t)\left[\mathcal{H}_{0}-\mathcal{H}_{z_{\rm f}(t)}\right]-\sigma_{\rm f}(t)\dot{z}_{\rm f}(t)\delta_{z_{\rm f}(t)}\,. \tag{69}\]
Here, \(\mathcal{H}_{z_{\rm f}(t)}\equiv\mathcal{H}\left(z-z_{\rm f}(t)\right)\) and \(\delta_{z_{\rm f}(t)}\equiv\delta\left(z-z_{\rm f}(t)\right)\) denote, respectively, the Heaviside and Dirac delta functions centered on \(z=z_{\rm f}(t)\). Finally, introducing the transverse spatial Fourier transform
\[\mathcal{F}[f_{\perp,\rm PE}](k_{x},k_{y}) =\iint_{-\infty}^{\infty}\mathrm{d}x\mathrm{d}y\,f_{\perp,\rm PE }(x,y)\] \[\times e^{-2i\pi(k_{x}x+k_{y}y)}\] \[=\pi\frac{w_{h}^{2}(d)}{4\ln 2}e^{-\pi^{2}\frac{w_{h}^{2}(d)}{4 \ln 2}(k_{x}^{2}+k_{y}^{2})}\,, \tag{70}\]
we can rewrite Eq. (56) as
\[\frac{\partial^{2}\mathcal{I}_{\rm PER}}{\partial\nu\partial\Omega}(\theta,\nu)=\frac{w_{h}^{4}(d)\sin^{2}\theta}{2(8\ln 2)^{2}\varepsilon_{0}c^{3}}\,e^{-(\pi^{2}/2\ln 2)\left(w_{h}(d)\,\nu\sin\theta/c\right)^{2}}\] \[\times\left|\int\mathcal{F}\left[\frac{\partial j_{\parallel}}{\partial t^{\prime}}\right]\left(\frac{-\nu\cos\theta}{c}\right)e^{-2i\pi\nu t^{\prime}}\mathrm{d}t^{\prime}\right|^{2}\,. \tag{71}\]
where \(\mathcal{F}\left[\frac{\partial j_{\parallel}}{\partial t^{\prime}}\right]\) is the spatial (along \(z\)) Fourier transform of the time derivative \(\partial j_{\parallel}/\partial t\), evaluated analytically at \(k_{z}=-\nu\cos\theta/c\) as a function of \(z_{\rm f}(t)\) and \(\sigma_{\rm f}(t)\) (See Appendix B). The time integral in the above expression is then computed numerically.
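Without reproducing the analytic expression of Appendix B, the same quantity can be obtained by Fourier-transforming Eq. (69) in \(z\) and differentiating in time numerically; the sketch below does this for a toy front trajectory and areal charge, standing in for the output of the sheath-field integration of the previous subsection.

```python
# Sketch: brute-force evaluation of the PER spectrum, Eq. (71). The z-Fourier
# transform of j_par follows from Eq. (69); its time derivative is taken
# numerically here instead of using the analytic form of Appendix B. The
# front trajectory and areal charge below are toy placeholders.
import numpy as np

eps0, c, ln2 = 8.854e-12, 2.998e8, np.log(2)
w_hd = 5.5e-6                                      # w_h(d), illustrative

t = np.linspace(0.0, 2e-12, 4000)
dt = t[1] - t[0]
t0, v_max, sigma0 = 1e-13, 0.2 * c, eps0 * 5e12    # toy saturation time, velocity, charge
z_f = v_max * (t - t0 * (1 - np.exp(-t / t0)))     # toy front position
v_f = v_max * (1 - np.exp(-t / t0))                # toy front velocity
sigma_f = sigma0 * np.exp(-t / t0)                 # toy areal charge, mimicking Eq. (67)

def per_spectrum(theta, nu):
    # Assumes cos(theta) != 0 so that the z-transform evaluation point is finite.
    k_z = -nu * np.cos(theta) / c
    ph = np.exp(-2j * np.pi * k_z * z_f)
    Fj = (-np.gradient(sigma_f, t) * (1 - ph) / (2j * np.pi * k_z)
          - sigma_f * v_f * ph)                    # z-Fourier transform of Eq. (69)
    integrand = np.gradient(Fj, t) * np.exp(-2j * np.pi * nu * t)
    I_t = np.sum(integrand) * dt                   # time integral in Eq. (71)
    pref = w_hd**4 * np.sin(theta)**2 / (2 * (8 * ln2)**2 * eps0 * c**3)
    atten = np.exp(-(np.pi**2 / (2 * ln2)) * (w_hd * nu * np.sin(theta) / c)**2)
    return pref * atten * abs(I_t)**2

nus = np.linspace(0.5e12, 20e12, 40)
dI = [per_spectrum(np.radians(80), nu) for nu in nus]   # energy per (Hz sr), toy inputs
```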
In the following sections, we study the variations in the energy yield and spectra of PER with the main laser-plasma parameters.
### Influence of the laser focusing
We first perform a scan over the laser pulse intensity (or, equivalently, degree of laser focusing) at fixed laser pulse energy (\(\mathcal{E}_{L}=2.8\,\mathrm{J}\)) and duration (\(\tau_{L}=30\,\mathrm{fs}\)), by keeping \(a_{L}w_{L}\) constant. Figure 10 plots the evolution of the energy radiated via PER (\(\mathcal{E}_{\rm PER}\), integrated over \(0.1\leq\nu\leq 40\,\mathrm{THz}\)) as a function of \(a_{L}\) (in the range \(5\leq a_{L}\leq 15\)) and for different target thicknesses (\(2\leq d\leq 20\,\mu\mathrm{m}\)). As in Sec. III, the laser-to-hot-electron conversion efficiency is set to \(\eta_{\rm h}=0.2\).
First, one observes that the predicted yields lie in the \(3\times 10^{-5}-1.7\times 10^{-4}\,\mathcal{E}_{\rm beam}\) range, _i.e._, at least one order of magnitude below the previous estimates of the energy radiated by the fast electrons alone. All curves show a monotonic increase with \(a_{L}\). In detail, for \(d=2\,\mu\mathrm{m}\), \(\mathcal{E}_{\rm PER}\) is enhanced by a factor of \(\sim 2.5\) when \(a_{L}\) is increased from \(5\) to \(15\). When \(d=5\,\mu\mathrm{m}\) (\(d=10\,\mu\mathrm{m}\)), the enhancement factor is \(\sim 2.2\) (resp. \(\sim 1.7\)).
As displayed in Fig. 11, intensifying the laser field (from \(a_{L}=5\) to \(a_{L}=15\)) by narrowing its spot size (from \(w_{L}=15\,\mu\mathrm{m}\) to \(5\,\mu\mathrm{m}\)) broadens and shifts the THz spectrum to higher THz frequencies (from \(\sim 5-15\,\mathrm{THz}\) to \(\sim 10-25\,\mathrm{THz}\)) because of a faster plasma expansion. This effect is reinforced by the accompanying decrease in transverse width of the sheath field \(w_{h}(d)\) [see Eq. (43)], which weakens the attenuation factor associated with the Fourier transform of \(f_{\perp,\rm PE}\) in Eq. (71). The two spectra, however, are maximized at large angles \(\theta\sim 70-90^{\circ}\), as is typical for nonrelativistic dipole-like radiation.
### Influence of the target thickness
We now vary the target thickness \(d\) at fixed laser parameters (\(a_{L}=15\), \(w_{L}=5\,\mu\mathrm{m}\), \(\tau_{L}=30\,\mathrm{fs}\), \(\mathcal{E}_{L}=2.8\,\mathrm{J}\)). Figure 12 plots, as a function of \(d\geq 2\,\mu\mathrm{m}\), the maximum kinetic energy reached by the fastest protons as well as the corresponding THz energy yield. The general trend is that of a monotonic decrease in the ion kinetic energy with the target thickness, from \(\mathcal{E}_{\rm f}\simeq 15\,\mathrm{MeV}\) at \(d=2\,\mu\mathrm{m}\) down to \(\simeq 5\,\mathrm{MeV}\) at \(d=20\,\mu\mathrm{m}\). These results are consistent with the measurements reported in Refs. [62, 66, 67].
Closely following that trend, the model predicts a continuous drop in the radiated energy from \(\mathcal{E}_{\rm PER}\simeq 1.7\times 10^{-4}\,\mathcal{E}_{\rm beam}\simeq 95\,\mu \mathrm{J}\) at \(d=2\,\mu\mathrm{m}\) down to \(3.4\times 10^{-5}\,\mathcal{E}_{\rm beam}\simeq 19\,\mu\mathrm{J}\) at \(d\simeq 20\,\mu\mathrm{m}\). Figure 10 further shows that this decrease in \(\mathcal{E}_{\rm PER}\) at larger \(d\) is less pronounced when \(a_{L}\) is reduced: \(\mathcal{E}_{\rm PER}\) indeed drops by \(\sim 5\times\) at \(a_{L}=15\) and by \(\sim 3\times\) at \(a_{L}=5\).
Modifying the thickness of the foil also strongly impacts the frequency-angle spectrum of PER.
Figure 10: THz energy yield (integrated over \(0.1-40\,\mathrm{THz}\)) of PER as a function of the laser field strength \(a_{L}\) (right axis), for various target thicknesses \(d\). The product \(a_{L}w_{L}\) is kept fixed to ensure a constant laser pulse energy \(\mathcal{E}_{L}=2.8\,\mathrm{J}\) (or, equivalently, a constant electron beam energy \(\mathcal{E}_{\rm beam}=0.56\,\mathrm{J}\)). The blue dashed curve plots \(w_{L}\) (left axis). Apart from \(a_{L}\) and \(w_{L}\), the laser-plasma parameters are those used in Fig. 3.
Figures 13(a)-(c) display the spectra obtained for \(d=5\)\(\mu\)m, 10 \(\mu\)m and 20 \(\mu\)m, respectively. The \(d=2\,\mu\)m case is depicted in Fig. 11(b). We notice that, as the target is made thicker, the spectrum, initially distributed in a wide frequency range (\(5\,\mathrm{THz}\lesssim\nu\lesssim 30\,\mathrm{THz}\)), shrinks to a much narrower bandwidth while shifting to smaller frequencies (\(1\,\mathrm{THz}\lesssim\nu\lesssim 6\,\mathrm{THz}\)). As expected, though, its angular profile remains unchanged, with a broad maximum around \(\sim 70-90^{\circ}\).
A major prediction of our modeling is that the THz energy radiated from the plasma expansion should be about one to three orders of magnitude lower than that from the fast electrons only. The dominance of the latter is expected to be particularly marked for a fraction of escaping electrons \(\eta_{\mathrm{esc}}\gtrsim 5\%\) (compare Figs. 5 and 12).
## V Conclusions
In summary, we have developed a novel theoretical model of far-field THz emissions in relativistic ultrashort laser-foil interactions, assuming a two-stage scenario. In the first stage, we consider the radiation resulting from the fast electrons alone at the target backside, during their first and brief (lasting only a few fs) excursion into vacuum. In this phase, the vast majority of them are reflected into the target by the sheath field they have themselves created. The THz emission then originates from the combination of coherent transition/synchrotron radiations from the recirculating electrons, as well as from coherent transition radiation from the escaping electrons. In the second stage, all of the confined fast electrons are assumed to have relaxed to a thermal distribution that drives, through the sheath field, the acceleration of the target ions. The dynamics of the nonneutral layers at the edges of the expanding plasma then gives rise to a dipole-type radiation.
To our knowledge, our unified kinetic model of CTR and CSR, based on the image charge method and integrated over an ensemble of single-particle trajectories, is the first of its kind. Its predictions highlight the critical importance of treating simultaneously these two mechanisms, which tend to interfere destructively in the THz domain. Their degree of imperfect cancellation, which determines the energy yield and spectrum of the net (CTSR) radiation, depends on the details of the electron trajectories, notably the extent of their excursion in vacuum - a function of the electron momentum and sheath field strength - and the amount of escaping (assumed ballistic) electrons.
We have examined the sensitivity of this radiation to the main laser-target parameters, varying these around a reference setup characterized by \(a_{L}=15\), \(\tau_{L}=30\,\mathrm{fs}\), \(w_{L}=5\,\mu\)m and \(\eta_{h}=0.2\), corresponding to a (fixed) electron beam energy of \(\mathcal{E}_{\mathrm{beam}}=0.56\,\mathrm{J}\). An important finding is that a fraction of escaping electrons as low as \(\eta_{\mathrm{esc}}\simeq 1\,\%\) - a value consistent with expectations [34] - may suffice to shape the THz spectrum radiated from
Figure 11: THz energy yield (integrated over \(0.1-40\,\mathrm{THz}\)) of PER for laser field strengths (a) \(a_{L}=5\) and (b) \(a_{L}=15\). The product \(a_{L}w_{L}\) is kept fixed to ensure a constant laser pulse energy \(\mathcal{E}_{L}=2.8\,\mathrm{J}\) (or, equivalently, a constant electron beam energy \(\mathcal{E}_{\mathrm{beam}}=0.56\,\mathrm{J}\)). Apart from \(a_{L}\) and \(w_{L}\), the laser-plasma parameters are those used in Fig. 3.
Figure 12: Kinetic energy of the fastest ions (dashed red curve) and energy yield of PER integrated over \(0.1-40\,\mathrm{THz}\) (solid blue curve) as a function of the target thickness \(d\geq 2\,\mu\)m. Apart from \(d\), the laser-plasma parameters are those used in Fig. 3 (\(a_{L}=15\)).
thin (\(d=2\,\mu\)m) foils. At this threshold value, the spectrum is shifted to relatively low frequencies (\(\nu\lesssim 5\,\)THz) compared to the case of full electron refluxing, though containing about the same integrated energy (\(\simeq 4\times 10^{-3}\,\mathcal{E}_{\rm beam}\simeq 2\,\)mJ). The latter, however, is predicted to rise tenfold or more should the escaping electron fraction reach \(\sim 5\%\). The minimum fraction of escaping electrons needed to govern the radiation is an increasing function of the target thickness and degree of laser focusing. For a \(20\,\mu\)m thick foil and a \(5\,\mu\)m laser spot size (\(a_{L}=15\)), this threshold fraction is found to lie between 5% and \(10\,\%\), quite a bit above the expected amount (\(\sim 1\,\%\)) of those electrons [34]. Consequently, the contribution of the confined electrons, characterized by higher frequencies (\(\nu\simeq 5-20\,\)THz), may then well dominate the total spectrum when using not-so-thin targets and tightly focused laser pulses. Near the threshold between the two regimes, the THz spectrum will exhibit two distinct bands at low and high frequencies. More generally, the angular distribution of the radiation is broad and tends to peak outside the emission cone of the fast electrons, especially in thin foils or when CTR from escaping electrons prevails.
Of course, those trends should be considered qualitative as they depend on a number of coupled parameters, which themselves depend on the detailed experimental conditions. In particular, all throughout, we have set to \(20\,\%\) the fraction of the laser pulse energy carried by the fast electrons, and to \(30^{\circ}\) the angular spread of the latter, even though those quantities are likely affected by the laser intensity and spot size. Moreover, to ensure the tractability of our already computationally heavy model, we have assumed that the fast electrons propagate ballistically across the solid foil (which implies a small enough target thickness) and that the sheath field that confines most of them is both uniform and stationary. Finally, our model being restricted to the first excursion of the fast electrons into vacuum, it discards the subsequent, possibly significant, CTSR-type bursts as the electrons recirculate through the foil and, in so doing, progressively decelerate and scatter.
These limitations notwithstanding, our modeling indicates that when CTR and CSR insufficiently compensate each other, as occurs for a large enough fraction of escaping electrons or a weak enough sheath field, the radiated energy may approach, or even exceed, the electron beam energy. This result suggests that, under such conditions, one should describe the self-consistent effect of the collective radiation on the electron dynamics. This complex problem can only be quantitatively addressed through three-dimensional, fully kinetic numerical simulations, ideally equipped with a far-field radiation diagnostic [68].
Our model for the THz radiation (PER) emitted at later times as a result of ion acceleration no longer relies on a kinetic treatment of the accelerated particles, because of the daunting complexity of the problem, but rather on a simplified description of the space- and time-varying current density distribution in the expanding plasma. Specifically, we approximate the charge-separation regions to two infinitely thin disks of opposite areal charge, located at the moving ion front and the initial target surface, and describe their dynamics by adapting the plasma expansion model proposed in Ref. [64], itself built upon several previous works [65, 36, 36]. Compared to previous attempts at estimating the THz emission from accelerated ions [26, 27, 28, 29], our description takes account of the time-varying areal charge of the nonneutral layers and goes beyond the customarily considered isothermal electron limit by considering the transition to the adiabatic regime, as is relevant to micron-thick targets.
Our computations, conducted over parameter ranges similar to those considered for CTSR - notably assuming the same driving laser pulses -, predict that the THz energy radiated during the plasma expansion phase should be at least one order of magnitude below that emitted earlier by the fast electrons. Overall, for the interaction conditions considered, the beam-to-THz conversion efficiency of PER varies in the \(\sim 10^{-5}-10^{-4}\) range and is a decreasing function of the target thickness. It also rises with the degree of laser focusing, albeit more and more slowly as the foil is made thicker (for \(2\leq d\leq 20\,\mu\)m).
Figure 13: Energy-angle spectra of PER for (a) \(5\,\mu\)m, (b) \(10\,\mu\)m and (c) \(20\,\mu\)m thick targets. The laser parameters are those of Fig. 3.
As expected, PER produces increasingly low frequencies, and over an increasingly narrow range, when the ion acceleration diminishes, that is, when the target thickness or the laser intensity are reduced. This radiation is mainly emitted at large angles (\(>50^{\circ}\)), yet this feature alone should not suffice to allow PER to be discerned over the intense background due to CTSR.
Our results evidently call for numerical and experimental validations. Regarding the latter, it should be noted that most experiments to date have characterized the THz emissions over restricted frequency and angular ranges [25; 26]. A notable exception is Ref. [58], where the full THz distribution was diagnosed but with an obliquely incident laser pulse and where no variation in the target thickness was reported. Recording detailed frequency-angle spectra that can be compared with the theoretical spectra would be highly desirable in order to better tune the model parameters and support its predictions.
## Appendix A Discrete to continuous description of the energy spectrum
We detail here the calculation steps between Eq. (19) and Eq. (22) to obtain a continuous description of the energy spectrum radiated by a set of charged particles obeying a given distribution function. We start from the discrete expression of the energy spectrum [see Eq. (19)] and introduce, for each particle (labeled by the subscript \(l\)), the amplitude \(\mathbf{A}_{l}\) and the complex phase \(\phi_{l}\) such that
\[\frac{\partial^{2}\mathcal{I}_{\mathrm{rad}}}{\partial\nu\partial \Omega}(\theta,\nu) =\left\langle\frac{q^{2}\nu^{2}}{2\varepsilon_{0}c}\middle|\sum_{l= 1}^{N_{h}}\left[\mathbf{A}_{l}^{-}(\mathbf{\hat{r}},\nu)-\mathbf{A}_{l}^{+}(\mathbf{\hat{r}}, \nu)\right]\right|^{2}\right\rangle\] \[=\left\langle\frac{q^{2}\nu^{2}}{2\varepsilon_{0}c}\middle|\sum_{ l=1}^{N_{h}}\mathbf{A}_{l}e^{i\phi_{l}}\right|^{2}\right\rangle\,, \tag{10}\]
where \(\langle\cdot\rangle\) indicates an ensemble average. We then introduce a continuous description \(\mathbf{A}_{\mathbf{r},t,\mathbf{u}}(\mathbf{\hat{r}},\nu)\) and \(\phi_{\mathbf{r},t,\mathbf{u}}(\mathbf{\hat{r}},\nu)\) of the terms \(\mathbf{A}_{l}\) and \(\phi_{l}\) by imposing that for every particle,
\[\mathbf{A}_{l}e^{i\phi_{l}}=\mathbf{A}_{\mathbf{r}_{l},t_{l},\mathbf{u}_{l}}(\mathbf{\hat{r}},\nu )\exp\left(i\phi_{\mathbf{r}_{l},t_{l},\mathbf{u}_{l}}(\mathbf{\hat{r}},\nu)\right)\,,\]
where \(\mathbf{r}_{l}\), \(t_{l}\) and \(\mathbf{u}_{l}\) represent the position, time and initial momentum of the accelerated particle. The sum of this quantity over \(N_{h}\) particles sampling the \((\mathbf{r},t,\mathbf{u})\) space can be expressed as
\[\left\langle\left|\sum_{l=1}^{N_{h}}\mathbf{A}_{l}e^{i\phi_{l}}\right|^{2}\right\rangle =\left\langle\left|\iiint_{\mathbf{r},t,\mathbf{u}}\mathbf{A}_{\mathbf{r},t,\mathbf{u}}(\mathbf{\hat{r }},\nu)e^{i\phi_{\mathbf{r},t,\mathbf{u}}(\mathbf{\hat{r}},\nu)}\times\left(\sum_{l=1}^{N _{h}}\delta(\mathbf{r}-\mathbf{r}_{l})\delta(t-t_{l})\delta(\mathbf{u}-\mathbf{u}_{l})\right) \mathrm{d}^{2}\mathbf{r}\,\mathrm{d}t\,\mathrm{d}^{3}\mathbf{u}\right|^{2}\right\rangle. \tag{11}\]
Next, we introduce the distribution function \(F(\mathbf{r},t,\mathbf{u})\) that fulfills
\[\left\langle\sum_{l=1}^{N_{h}}\delta(\mathbf{r}-\mathbf{r}_{l})\delta(t-t_{l})\delta( \mathbf{u}-\mathbf{u}_{l})\right\rangle\to N_{h}F(\mathbf{r},t,\mathbf{u})\,. \tag{12}\]
We hereafter assume that the distribution function can be taken in the form of a separable function, \(F(\mathbf{r},t,\mathbf{u})=f(\mathbf{r})h(t)g(\mathbf{u})\).
Finally, we substitute Eq. (12) into Eq. (11) and make use of \(\mathbf{A}_{\mathbf{r},t,\mathbf{u}}(\mathbf{\hat{r}},\nu)\exp\left(i\phi_{\mathbf{r},t,\mathbf{u}}(\mathbf{\hat{r}},\nu)\right)=\mathbf{A}^{-}(\mathbf{\hat{r}},\nu)-\mathbf{A}^{+}(\mathbf{\hat{r}},\nu)\) to obtain the expression of the energy radiated coherently by an ensemble of \(N_{h}\gg 1\) electrons,
\[\frac{\partial^{2}\mathcal{I}_{\mathrm{rad}}}{\partial\nu \partial\Omega}(\theta,\nu) =\frac{q^{2}\nu^{2}}{2\varepsilon_{0}c}N_{h}^{2}\left|\iiint_{\bm {r},t,\mathbf{u}}f(\mathbf{r})h(t)g(\mathbf{u})\right.\] \[\quad\quad\quad\times\left[\mathbf{A}^{-}(\mathbf{\hat{r}},\nu)-\mathbf{A}^ {+}(\mathbf{\hat{r}},\nu)\right]\,\mathrm{d}^{2}\mathbf{r}\,\mathrm{d}t\,\mathrm{d}^{3} \mathbf{u}\right|^{2}. \tag{13}\]
Equation (13) is identical to Eq. (22).
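As a quick numerical sanity check of this discrete-to-continuous step, one can compare the coherent sum of Eq. (10) over sampled particles with the distribution-function integral of Eq. (13); the single-particle amplitude, phase and distribution below are toy choices of ours, used only to illustrate the coherent \(N_{h}^{2}\) scaling.

```
import numpy as np

rng = np.random.default_rng(0)
N = 200_000                                   # number of sampled particles

# Toy single-particle amplitude and phase as functions of a scalar 'momentum' variable u.
A = lambda u: 1.0 / (1.0 + u**2)
phi = lambda u: 0.3 * u

# Discrete coherent sum over particles drawn from g(u) = standard normal [cf. Eq. (10)].
u_l = rng.normal(0.0, 1.0, N)
discrete = np.abs(np.sum(A(u_l) * np.exp(1j * phi(u_l))))**2

# Continuous description: N^2 |int g(u) A(u) exp(i phi(u)) du|^2 [cf. Eq. (13)].
u = np.linspace(-8.0, 8.0, 4001)
g = np.exp(-u**2 / 2.0) / np.sqrt(2.0 * np.pi)
continuous = N**2 * np.abs(np.trapz(g * A(u) * np.exp(1j * phi(u)), u))**2

print(discrete / continuous)                  # -> close to 1 for large N (coherent ~N^2 scaling)
```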
## Appendix B Spatial Fourier transform of \(\partial_{l}j_{\parallel}\)
The 1D current density defined by Eq. (69) involves Heaviside and Dirac delta functions. Its time derivative can be written as a sum of three components:
\[\partial j_{\parallel}(z,t)/\partial t= -\ddot{\sigma}(t)\left[\mathcal{H}_{0}-\mathcal{H}_{z_{\mathrm{f}}(t)}\right]\] \[-\left[2\dot{\sigma}(t)\dot{z}_{\mathrm{f}}(t)+\sigma(t)\ddot{z}_{\mathrm{f}}(t)\right]\delta_{z_{\mathrm{f}}(t)}\] \[+\sigma(t)\dot{z}_{\mathrm{f}}^{2}(t)\delta^{\prime}_{z_{\mathrm{f}}(t)}\,, \tag{14}\]
where \(\partial\delta(z-z_{\mathrm{f}}(t))/\partial t=-\dot{z}_{\mathrm{f}}(t)\delta^{\prime}(z-z_{\mathrm{f}}(t))\).
The spatial Fourier transform of \(\partial j_{\parallel}(z,t)/\partial t\) that appears in the expression of the PER spectrum, Eq. (71), can then be evaluated analytically upon noting that each of the above components admits a closed-form Fourier transform:
\[\mathcal{F}\left[\partial j_{\parallel}(z,t)/\partial t\right](k_{z})=\] \[\quad\quad-\ddot{\sigma}(t)\mathcal{F}\left[\mathcal{H}_{0}-\mathcal{H}_{z_{\mathrm{f}}(t)}\right](k_{z})\] \[\quad\quad-[2\dot{\sigma}(t)\dot{z}_{\mathrm{f}}(t)+\sigma(t)\ddot{z}_{\mathrm{f}}(t)]\mathcal{F}\left[\delta_{z_{\mathrm{f}}(t)}\right](k_{z})\] \[\quad\quad+\sigma(t)\dot{z}_{\mathrm{f}}^{2}(t)\mathcal{F}\left[\delta^{\prime}_{z_{\mathrm{f}}(t)}\right](k_{z})\,, \tag{20}\]
where
\[\mathcal{F}\left[\mathcal{H}_{0}-\mathcal{H}_{z_{\mathrm{f}}(t)}\right](k_{z})=z_{\mathrm{f}}(t)\operatorname{sinc}\left(k_{z}z_{\mathrm{f}}(t)\right)\exp\Bigl{(}-2i\pi k_{z}z_{\mathrm{f}}(t)/2\Bigr{)}, \tag{21}\] \[\mathcal{F}\left[\delta_{z_{\mathrm{f}}(t)}(z)\right](k_{z})=\exp\Bigl{(}-2i\pi k_{z}z_{\mathrm{f}}(t)\Bigr{)}, \tag{22}\] \[\mathcal{F}\left[\delta^{\prime}_{z_{\mathrm{f}}(t)}(z)\right](k_{z})=2i\pi k_{z}\exp\Bigl{(}-2i\pi k_{z}z_{\mathrm{f}}(t)\Bigr{)}\,. \tag{23}\]
Equation (20) is then injected into Eq. (71) where it is evaluated at the wavenumber \(k_{z}=-\nu\cos\theta/c\).
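The three closed forms above are elementary and can be checked symbolically; the short sympy sketch below is our own verification, using the normalized convention \(\operatorname{sinc}(x)=\sin(\pi x)/(\pi x)\), and reproduces Eqs. (21)-(23).

```
import sympy as sp

z, kz, zf = sp.symbols('z k_z z_f', positive=True)

# Eq. (21): transform of the slab indicator H(z) - H(z - z_f)
slab = sp.integrate(sp.exp(-2 * sp.I * sp.pi * kz * z), (z, 0, zf))
closed = sp.sin(sp.pi * kz * zf) / (sp.pi * kz) * sp.exp(-sp.I * sp.pi * kz * zf)  # = z_f sinc(k_z z_f) e^{-i pi k_z z_f}
print(sp.simplify((slab - closed).rewrite(sp.exp)))      # -> 0

# Eqs. (22)-(23): transforms of delta(z - z_f) and of its derivative
kernel = sp.exp(-2 * sp.I * sp.pi * kz * z)
print(sp.integrate(sp.DiracDelta(z - zf) * kernel, (z, -sp.oo, sp.oo)))      # exp(-2*I*pi*k_z*z_f)
print(sp.integrate(sp.DiracDelta(z - zf, 1) * kernel, (z, -sp.oo, sp.oo)))   # 2*I*pi*k_z*exp(-2*I*pi*k_z*z_f)
```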
|
2309.16478 | A Bernstein theorem for ancient solution to symplectic mean curvature
flow | We proved a Bernstein theorem for ancient solution to symplectic mean
curvature flow via the complex phase map . | Xiangzhi Cao | 2023-09-28T14:42:45Z | http://arxiv.org/abs/2309.16478v1 | # A Bernstein theorem for ancient solution to symplectic mean curvature flow
###### Abstract
We prove a Bernstein theorem for ancient solutions to the symplectic mean curvature flow via the complex phase map.
_Keywords and phrases_: Bernstein problem, ancient solution, symplectic mean curvature flow.
_MSC 2010_: 53C24, 53E10
## 1 Introduction
Let \(X_{0}:M^{n}\to N^{n+m}\) be an isometric immersion from an \(n\)-dimensional oriented Riemannian submanifold \(M^{n}\) to the Riemannian manifold \(N^{n+m},n\geq 2,m\geq 1\). The mean curvature flow (MCF) is a one-parameter family of smooth immersions \(X:\,M^{n}\times[-T,0]\rightarrow\mathbb{R}^{n+m},T>0\), which satisfies the following evolution equation:
\[\left\{\begin{array}{l}\frac{\partial}{\partial t}X(x,t)=H(x,t),\quad x\in M ^{n},t\in[-T,0],\\ X(\cdot,0)=X_{0},\end{array}\right.\]
where \(H(p,t)\) is the mean curvature vector of \(X\left(M^{n},t\right)\subset\mathbb{R}^{n+m}\). It is well known that self-shrinkers model type I singularities and translating solitons model type II singularities of the mean curvature flow. One can refer to the classical papers [7, 8, 9, 10, 11, 12] for codimension one mean curvature flow, and also to Huisken's four lectures [14, 16, 15, 13]. One can refer to [1, 28, 29, 4, 26, 21] for higher codimension mean curvature flow.
It is interesting to study special mean curvature flows, such as the Lagrangian mean curvature flow, the symplectic mean curvature flow, and the hyper-Lagrangian mean curvature flow in hyper-Kahler manifolds. Qiu [23] proved a rigidity result via the complex phase map. We state it here.
**Theorem 1** (cf. Qiu [23], Theorem 2).: _Let \(X:\Sigma^{2}\to\mathbb{R}^{4}\) be a 2-dimensional complete translating soliton with nonpositive normal curvature. Assume that the image of the complex phase map is contained in a regular ball in \(\mathbb{S}^{2}\), i.e., a geodesic ball \(B_{R}(q)\) disjoint from the cut locus of \(q\) and \(R<\frac{\pi}{2}\), then \(\Sigma\) has to be an affine plane._
As a corollary, in the case of complete Lagrangian translating solitons with nonpositive normal curvature, Corollary 1 in [23] is equivalent to [6, Theorem 2]. Han and Sun [6] proved the nonexistence of translating solitons with nonnegative sectional curvature for the almost calibrated Lagrangian mean curvature flow when the function \(\theta\) is bounded from below.
The setup in this paragraph follows [5, Theorem 1.6]. Without loss of generality, we assume that the origin \(o\in\mathbb{R}^{n+m}\) lies in \(\Sigma^{n}\). Let \(\bar{B}_{R}^{n+m}\) be a closed Euclidean ball of radius \(R\) centered at \(o\) and \(B_{R,T}(o)=\bar{B}_{R}^{n+m}\times[-T,0]\subset\mathbb{R}^{n+m}\times(-\infty,+\infty)\) be a cylindrical domain in space-time. Consider \(\Sigma_{T}\) as the space-time domain
\[\{(X(p,t),t)\mid p\in M,t\in[-T,0]\}\subset\mathbb{R}^{n+m}\times(-\infty,+ \infty).\]
Finally, we define the space-time domain \(D_{R,T}(o)=\Sigma_{T}\cap B_{R,T}(o)\). \(D_{R,T}(o)\) is compact since \(\Sigma_{t}\) can be written as a complete graph for each \(t\).
Inspired by the papers [23, 18, 6], we state our results in the case \(n=2\), \(m=2\).
**Theorem 2**.: _Let \(X:\Sigma^{2}\times[-T,0]\to\mathbb{R}^{4}\) be a solution to mean curvature flow with nonpositive normal curvature. Assume that the image of the complex phase map is contained in a regular ball in \(\mathbb{S}^{2}\), i.e., a geodesic ball \(B_{R}(q)\) disjoint from the cut locus of \(q\) and \(R<\frac{\pi}{2}.\) Assume that there exist a positive constant \(C_{J}\) and a nonnegative constant \(C_{H}\) such that \(|dJ|\leq C_{J}|H|\) and \(|\vec{H}(p,t)|\leq C_{H}\) for any point in \(\Sigma_{T}\). Then there exists a constant \(C\) which is independent of \(R\) and \(T\) such that_
\[\sup_{D_{R/2,T/2}(o)}\frac{|H|}{b-\psi\circ J}\leq C\left(\frac{1}{R}+\frac{1} {\sqrt{R}}+\frac{1}{\sqrt{T}}\right),\]
_where \(b\) is a constant such that \(\sup_{\mathcal{M}_{T}}\psi\circ J\leq 1-c<b<1\)._
**Remark 1**.: This condition \(|dJ|\leq C_{J}|H|\) is satisfied by the Lagrangian mean curvature flow in the two-dimensional case.
**Question 1**.: _It seems that the proof can be carried over to the case that \(X:\Sigma^{2n}\times[-T,0]\to\mathbb{R}^{4n}\) is a solution to the mean curvature flow with nonpositive normal curvature. We expect, however, that some gaps would need to be filled in this case._
**Corollary 1.1**.: _Let \(X:\Sigma^{2}\times(-\infty,0]\to\mathbb{R}^{4}\) be an ancient solution to 2-dimensional mean curvature flow with nonpositive normal curvature. Assume that the image of the complex phase map is contained in a regular ball in \(\mathbb{S}^{2}\). Assume that there exist a positive constant \(C_{J}\) and a nonnegative constant \(C_{H}\) such that \(|dJ|\leq C_{J}|H|\) and \(|\vec{H}(p,t)|\leq C_{H}\) for any point in \(\Sigma_{\infty}\), then \(\Sigma_{t}\) has to be an affine plane for any \(t\in(-\infty,0]\)._
**Remark 2**.: As is well known, self-shrinkers and translating solitons are examples of ancient solutions to the mean curvature flow, so Corollary 1.1 has wide applicability.
Let \(X:\Sigma^{2}\to\mathbb{R}^{4}\) be a translating soliton, which is defined by
\[H=V^{\perp}\]
where \(V\) is a fixed vector in \(\mathbb{R}^{4}\).
**Corollary 1.2**.: _Let \(X:\Sigma^{2}\to\mathbb{R}^{4}\) be a translating soliton with nonpositive normal curvature. Assume that the image of the complex phase map is contained in a regular ball in \(\mathbb{S}^{2}\). Assume that there exists a positive constant \(C_{J}\) such that \(|dJ|\leq C_{J}|V^{\perp}|\) at any point in \(\Sigma\). Then \(\Sigma\) has to be an affine plane._
Let \(X:\Sigma^{2}\to\mathbb{R}^{4}\) be a self-shrinker, which is defined by
\[H=-\frac{X^{\perp}}{2}\]
where \(X^{\perp}\) is the projection of \(X\) onto the normal bundle of \(\Sigma\).
**Corollary 1.3**.: _Let \(X:\Sigma^{2}\to\mathbb{R}^{4}\) be a self-shrinker with nonpositive normal curvature. Assume that the image of the complex phase map is contained in a regular ball in \(S^{2}\). Assume that there exist a positive constant \(C_{J}\) and a nonnegative constant \(C_{H}\) such that \(|dJ|\leq C_{J}|X^{\perp}|\) and \(|X|\leq C_{H}\) at any point in \(\Sigma\). Then \(\Sigma\) has to be an affine plane._
In the Lagrangian case, we know (cf. [23]) that \(|dJ|^{2}=|H|^{2}\). We can then deduce the following from Corollary 1.1.
**Theorem 3**.: _Let \(X:\Sigma^{2}\times[-T,0]\to\mathbb{R}^{4}\) be a solution to 2-dimensional Lagrangian mean curvature flow with nonpositive normal curvature. Assume that there exist a positive constant \(C_{J}\) and a nonnegative constant \(C_{H}\) such that \(|dJ|\leq C_{J}|H|\) and \(|\vec{H}(p,t)|\leq C_{H}\) for any point in \(\mathcal{M}_{T}\). If the cosine of the Lagrangian angle of the initial surface has a positive lower bound, then \(\Sigma\) has to be an affine plane._
**Corollary 1.4**.: _Let \(X:\Sigma^{2}\times(-\infty,0]\rightarrow\mathbb{R}^{4}\) be an ancient solution to the 2-dimensional Lagrangian mean curvature flow with nonpositive normal curvature. Assume that there exists a nonnegative constant \(C_{H}\) such that \(|\vec{H}(p,t)|\leq C_{H}\) for any point in \(\mathcal{M}_{\infty}\). If the cosine of the Lagrangian angle of the initial surface has a positive lower bound, then \(\Sigma_{t}\) has to be an affine plane for any \(t\in(-\infty,0]\)._
**Remark 3**.: As noted in [23], the fact that the cosine of the Lagrangian angle of the initial surface has a positive lower bound is preserved along the Lagrangian mean curvature flow. Moreover, it implies that the image of the complex phase map is contained in a regular ball in \(S^{2}\), i.e., a geodesic ball \(B_{R}(q)\) disjoint from the cut locus of \(q\) with \(R<\frac{\pi}{2}\).
Since for a translating soliton the condition \(|\vec{H}(p,t)|\leq C_{H}\) is automatically satisfied, we can derive Corollary 1 in [23] and [6, Theorem 2] from Corollary 1.4.
**Corollary 1.5**.: _Let \(X:\Sigma^{2}\to R^{4}\) be a complete Lagrangian translating soliton with nonpositive normal curvature. If the cosine of the Lagrangian angle has a positive lower bound, then \(\Sigma\) has to be an affine plane._
## 2 Preliminary
We first fix notation and recall some basic facts about the two-dimensional Lagrangian mean curvature flow, the two-dimensional symplectic mean curvature flow and the complex phase map. At the end, we give some lemmas used in the proofs of this paper.
Let \(X_{0}:\Sigma^{2}\to M^{4}\) be an isometric immersion from a 2-dimensional oriented Riemannian manifold \(\Sigma\) to the Riemannian manifold \(M\). The mean curvature flow (MCF) is a one-parameter family of smooth immersions \(X:\,\Sigma^{2}\times[-T,0]\to M,T>0,\) which satisfies the following evolution equation:
\[\left\{\begin{array}{ll}\frac{\partial}{\partial t}X(x,t)=H(x,t),\quad x\in \Sigma,t\in[-T,0],\\ X(\cdot,0)=X_{0},\end{array}\right.\]
In the case where \(n=2,m=2\), the Kahler angle of \(\Sigma\) in the Kahler-Einstein surface \(M\) is denoted by \(\alpha\). The surface is called a symplectic surface if \(\cos\alpha>0\), a Lagrangian surface if \(\cos\alpha=0\), and a holomorphic curve if \(\cos\alpha=1.\) If the initial surface is symplectic, then the flow is also symplectic as long as it exists (cf. [2, 3, 27] or [6]). This fact is proved via the maximum principle applied to the Kahler angle function \(\alpha\). If the initial surface is Lagrangian, then the flow preserves the Lagrangian property (cf. [24, 25]). This fact is
proved via the maximum principle applied to the Lagrangian angle function \(\theta\), whose definition can be found in [6].
Next, we recall the definition of the complex phase map of a hyper-Lagrangian submanifold \(L^{2n}\) of a hyperkahler manifold \(M^{4n}\). Let \(J_{1},J_{2},J_{3}\) be three almost complex structures of \(M\), where \(J_{3}=J_{1}J_{2},J_{1}J_{2}=-J_{2}J_{1}\). The submanifold \(L\) is called a hyper-Lagrangian submanifold of \(M\) if there is an almost complex structure \(\hat{J}=\sum\limits_{\alpha=1}^{3}\lambda_{\alpha}J_{\alpha}\) such that the associated symplectic 2-form \(\Omega_{\hat{J}}\) vanishes on \(L\). Then the complex phase map is defined as
\[J:L\rightarrow\mathbb{S}^{2},\quad x\mapsto J(x):=(\lambda_{1},\lambda_{2}, \lambda_{3}).\]
Let \(L^{2n}\) be a hyper-Lagrangian submanifold of a hyperkahler manifold \(M^{4n}\). We denote the second fundamental form and the mean curvature vector by \(B\) and \(H\), respectively.
**Lemma 1** (cf. [17] or [20]).: _There exists a smooth function \(\eta(r,t):\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) supported on \([-R,R]\times[-T,0]\) which has the following properties:_
_(1) \(\eta(r,t)\equiv 1\) on \([-R/2,R/2]\times[-T/2,0]\) and \(0\leq\eta\leq 1\)._
_(2) \(\eta(r,t)\) is decreasing if \(r\geq 0\), i.e., \(\partial_{r}\eta\leq 0\)._
_(3) \(\left|\partial_{r}\eta\right|/\eta^{a}\leq C_{a}/R,\left|\partial_{r}^{2}\eta \right|/\eta^{a}\leq C_{a}/R^{2}\) when \(0<a<1\)._
_(4) \(\left|\partial_{t}\eta\right|/\eta^{a}\leq C_{a}/T\) when \(0<a<1\)._
Next, we consider the mean curvature flow from a closed surface \(\Sigma\) in a hyperkahler 4-manifold M, i.e, \(X:\,\Sigma^{2}\times[-T,0]\to M,T>0\), which satisfies the following evolution equation:
\[\left\{\begin{array}{l}\frac{\partial}{\partial t}X(x,t)=H(x,t),\quad x\in \Sigma,t\in[-T,0],\\ X(\cdot,0)=X_{0},\end{array}\right. \tag{2.1}\]
For this kind of flow, it is proved in [19] that if the image of \(J\) lies in a hemisphere of \(\mathbb{S}^{2}\), then this remains so under the mean curvature flow. It is also proved in [19] that the mean curvature flow of a hyper-Lagrangian submanifold \(L\) in \(M\) preserves the hyper-Lagrangian condition, and that its complex phase map solves the harmonic map heat equation.
**Theorem 4** (cf. [19] ).: _The complex phase maps of the mean curvature flow (2.1), \(J:\Sigma_{t}\longrightarrow\mathbb{S}^{2}\) form an evolving harmonic map heat flow, i.e.,_
\[\frac{\partial J}{\partial t}=\tau(J),\]
_where \(\tau(J)\) is the tension field of \(J\) with respect to the induced metric \(g_{t}\) on \(\Sigma_{t}\)._
**Lemma 2** (cf. [30] or [22]).: _For mean curvature flow in Euclidean space, we have_
\[\Delta|H|^{2}-\partial_{t}|H|^{2}\geq 2|\nabla H|^{2}-2|H|^{2}|B|^{2} \tag{2.2}\]
_where \(H\) and \(B\) are the mean curvature vector and the second fundamental form of the mean curvature flow, respectively._
Obviously, inequality (2.2) holds for the MCF (2.1) if the ambient manifold is the hyper-Kahler manifold \(\mathbb{R}^{4}\).
## 3 The proof
Proof.: We can view \(\Sigma\) as a hyper-Lagrangian submanifold in the hyper-Kahler manifold \(\mathbb{R}^{4}\) with respect to some almost complex structure. We denote the second fundamental form and the mean curvature vector by \(B\) and \(H\). We use the notation in [23]. Let \(J\) be the complex phase map of \(\Sigma\). Let \(\rho\) be the distance function on \(S^{2}\), and \(h\) the Riemannian metric of \(S^{2}\). Define \(\psi=1-\cos\rho\); then \(Hess(\psi)=(\cos\rho)h\). Following a standard trick in geometric analysis, we need to compute
\[\left(\frac{\partial}{\partial t}-\Delta\right)\frac{|H|^{2}}{(b-\psi\circ J) ^{2}}\]
Let \(\phi=\frac{|H|^{2}}{(b-\psi\circ J)^{2}}\). A direct calculation shows that
\[\nabla\phi=\frac{\nabla|H|^{2}}{(b-\psi\circ J)^{2}}+\frac{2|H|^{2}\nabla\psi \circ J}{(b-\psi\circ J)^{3}}.\]
Similarly we can compute
\[\Delta\phi=\frac{\Delta|H|^{2}}{(b-\psi\circ J)^{2}}+\frac{4\left\langle\nabla \psi\circ J,\nabla|H|^{2}\right\rangle}{(b-\psi\circ J)^{3}}+\frac{2|H|^{2} \Delta\psi\circ J}{(b-\psi\circ J)^{3}}+\frac{6|\nabla\psi\circ J|^{2}|H|^{2} }{(b-\psi\circ J)^{4}}.\]
By Qiu [23], we know that
\[\Delta\psi\circ J=2\cos\rho|dJ|^{2}|H|^{2}-2|H|^{2}\partial_{t}\psi\circ J\]
By the above computations, we obtain
\[\Delta\phi= \frac{2|\nabla H|^{2}+\partial_{t}|H|^{2}-2|B|^{2}|H|^{2}}{(b- \psi\circ J)^{2}}+\frac{4\left\langle\nabla\psi\circ J,\nabla|H|^{2}\right\rangle }{(b-\psi\circ J)^{3}}\] \[+\frac{2\cos\rho|dJ|^{2}|H|^{2}-2|H|^{2}\partial_{t}\psi\circ J} {(b-\psi\circ J)^{3}}+\frac{6|\nabla\psi\circ J|^{2}|H|^{2}}{(b-\psi\circ J)^ {4}}.\]
On the other hand, the time derivative of \(\phi\) is given by
\[\partial_{t}\phi=\frac{\partial_{t}|H|^{2}}{(b-\psi\circ J)^{2}}-\frac{2|H|^{2} \partial_{t}\psi\circ J}{(b-\psi\circ J)^{3}}.\]
We continue the calculation as
\[\Delta\phi= \frac{2|\nabla H|^{2}-2|H|^{2}|B|^{2}}{(b-\psi\circ J)^{2}}+\frac{4\left\langle\nabla\psi\circ J,\nabla|H|^{2}\right\rangle}{(b-\psi\circ J)^{3}}\] \[+\frac{2\cos\rho|dJ|^{2}|H|^{2}}{(b-\psi\circ J)^{3}}+\frac{6|\nabla\psi\circ J|^{2}|H|^{2}}{(b-\psi\circ J)^{4}}+\partial_{t}\phi.\]
Note that the following relations hold:
\[\frac{2|\nabla H|^{2}}{(b-\psi\circ J)^{2}}+\frac{2|\nabla\psi\circ J|^{2}|H|^{2}}{(b-\psi\circ J)^{4}}\geq\frac{4|\nabla H||\nabla\psi\circ J||H|}{(b-\psi\circ J)^{3}},\] \[\frac{2\left\langle\nabla|H|^{2},\nabla\psi\circ J\right\rangle}{(b-\psi\circ J)^{3}}+\frac{4|H|^{2}|\nabla\psi\circ J|^{2}}{(b-\psi\circ J)^{4}}=\frac{2\langle\nabla\psi\circ J,\nabla\phi\rangle}{(b-\psi\circ J)}.\]
Hence, we get
\[\Delta\phi-\partial_{t}\phi\geq 2(1-b)\frac{|dJ|^{2}|H|^{2}}{((b-\psi\circ J) )^{3}}+\frac{2\langle\nabla\psi\circ J,\nabla\phi\rangle}{(b-\psi\circ J)}\]
Fix a point \((p_{0},0)\in\Sigma^{2}\times[-T,0]\) such that \(X\left(p_{0},0\right)\) is the origin \(o\) of \(\mathbb{R}^{4}\). Let \(\eta\) be the function constructed in Lemma 1. We use a cut-off function supported on \(D_{R,T}(o)\) given by \(\psi(X(p,t)):=\eta(r(X),t)\), where \(r(X):=|X|\) is the distance function on \(\mathbb{R}^{4}\).
Let \(L:=-2\nabla\psi\circ J/(b-\psi\circ J)\). We can calculate
\[\Delta(\psi\phi)+\left\langle L,\nabla(\psi\phi)\right\rangle-2 \left\langle\frac{\nabla\psi}{\psi},\nabla(\psi\phi)\right\rangle-\partial_{t }(\psi\phi)\] \[= \psi\left(\Delta\phi-\partial_{t}\phi\right)+\phi\left(\Delta \psi-\partial_{t}\psi\right)+\left\langle\psi L,\nabla\phi\right\rangle+ \left\langle\phi L,\nabla\psi\right\rangle-2\frac{|\nabla\psi|^{2}}{\psi}\phi\] \[\geq 2(1-b)\psi\frac{|dJ|^{2}|H|^{2}}{(b-\psi\circ J)^{3}}+\phi\left( \Delta\psi-\partial_{t}\psi\right)+2\frac{\left\langle\nabla\psi\circ J, \nabla\psi\right\rangle}{b-\psi\circ J}\phi-2\frac{|\nabla\psi|^{2}}{\psi}\phi.\]
Note that \(D_{R,T}(o)\) is compact, since any time slice \(\Sigma_{t}\) can be written as an entire graph. Hence \(\psi\phi\) attains its maximum at some point \(X\left(p_{1},t_{1}\right)\) in \(D_{R,T}(o)\). At this point, we have
\[\nabla(\psi\phi)=0,\quad\Delta(\psi\phi)\leq 0,\quad\partial_{t}(\psi\phi)\geq 0.\]
Hence, we obtain
\[2\psi(1-b)\frac{|dJ|^{2}|H|^{2}}{(b-\psi\circ J)^{3}} \leq 2\phi\frac{\langle\nabla\psi\circ J,\nabla\psi\rangle}{b-\psi\circ J}+2\phi\frac{|\nabla\psi|^{2}}{\psi}+\phi\left(\partial_{t}\psi-\Delta\psi\right)\] \[=I+II+III.\]
Note that the following holds:
\[\left|\nabla\psi\right|^{2}=\left|\partial_{r}\eta\right|^{2}\left|\nabla r\right| ^{2}\leq n\left|\partial_{r}\eta\right|^{2}.\]
By [17], we know that
\[\left|\nabla\psi\circ J\right|\leq\left|dJ\right|\]
Where \(C_{1}=(\frac{v_{1}}{2-v_{1}})^{\frac{5}{2}}\).
By using Young's inequality and the properties of \(\eta\), we can estimate \(I\) as follows:
\[I \leq 2\phi\frac{\left|\nabla\psi\circ J\right|}{b-\psi\circ J}| \nabla\psi|\] \[\leq 2\phi\frac{\left|dJ\right|}{b-\psi\circ J}|\nabla\psi|\] \[\leq\frac{\varepsilon}{4}\psi\frac{\left|H\right|^{\frac{8}{3}} \left|dJ\right|^{\frac{4}{3}}}{(b-\psi\circ J)^{4}}+\frac{C(\varepsilon)| \nabla\psi|^{4}}{\psi^{3}}\] \[\leq\frac{\varepsilon}{4}\psi\frac{\left|H\right|^{\frac{8}{3}} \left|dJ\right|^{\frac{4}{3}}}{(b-\psi\circ J)^{4}}+\frac{n^{2}C(\varepsilon) \left|\partial_{r}\eta\right|^{4}}{\psi^{3}}\] \[\leq\frac{\varepsilon}{4}\psi\frac{\left|H\right|^{\frac{8}{3}} \left|dJ\right|^{\frac{4}{3}}}{(b-\psi\circ J)^{4}}+\frac{C(\varepsilon,n)}{R ^{4}},\]
where \(\varepsilon>0\) is an arbitrary constant, \(C(\varepsilon)\) and \(C(\varepsilon,n)\) are constants depending only on \(\varepsilon\) and \(n\). Similarly, as in [17], we can calculate by using Young's inequality and the property of \(\eta\),
\[II=2\phi\frac{\left|\nabla\psi\right|^{2}}{\psi}\leq\frac{\varepsilon}{4}\psi \phi^{2}+\frac{C(\varepsilon,n)}{R^{4}}.\]
Now we assume \(\left|\vec{H}(p,t)\right|\leq C_{H}\). Since \(\partial_{r}\eta\leq 0\), we have
\[\Delta\psi=\left(\Delta r\right)\left(\partial_{r}\eta\right)+\left|\nabla r \right|^{2}\left(\partial_{r}^{2}\eta\right)\geq\left(C_{H}+\frac{n}{r}\right) \left(\partial_{r}\eta\right)-n\left|\partial_{r}^{2}\eta\right|.\]
Hence we obtain for the second term of \(III\) in the same way as [17],
\[-\phi\Delta\psi\leq\frac{\varepsilon}{4}\psi\phi^{2}+C(\varepsilon,n)\left( \frac{1}{R^{4}}+\frac{1}{R^{2}}\right).\]
(Note that we may assume \(R/2\leq r\) for the second inequality, since \(\partial_{r}\eta\equiv 0\) for \(r\leq R/2\).)
As for the first term of \(III\), as in [17] we have
\[\phi\left(\partial_{t}\psi\right)\leq\frac{\varepsilon}{4}\psi\phi^{2}+C \left(\varepsilon,C_{H}\right)\left(\frac{1}{R^{2}}+\frac{1}{T^{2}}\right).\]
Note that
\[|dJ|^{2}\geq|B|^{2}\geq\frac{|H|^{2}}{2}\,,\]
where the second inequality follows from the Cauchy-Schwarz inequality, since \(H\) is the trace of \(B\) over the two-dimensional domain. Combining the above estimates, we finally obtain
\[\frac{1}{2}(1-b)(b-\psi\circ J)\psi\phi^{2}\leq\frac{\varepsilon}{4}\psi\frac{| H|^{\frac{8}{3}}|dJ|^{\frac{4}{3}}}{(b-\psi\circ J)^{4}}+\frac{3\epsilon}{4} \psi\phi^{2}+C\left(\varepsilon,n,C_{H}\right)\left(\frac{1}{R^{4}}+\frac{1}{R ^{2}}+\frac{1}{T^{2}}\right).\]
Recalling our assumption
\[|dJ|\leq C_{J}|H|\,,\]
and noting that
\[\psi\circ J<b\,,\]
we can take a sufficiently small \(\varepsilon\) such that
\[\frac{1}{2}(1-b)(b-\psi\circ J)-\varepsilon>0.\]
Then we have
\[(\psi\phi)^{2}\leq\psi\phi^{2}\leq C\left(\frac{1}{R^{4}}+\frac{1}{R^{2}}+ \frac{1}{T^{2}}\right).\]
Since \(\psi\equiv 1\) on \(D_{R/2,T/2}(o)\),
\[\sup_{D_{R/2,T/2}(o)}\frac{|H|}{b-\psi\circ J}\leq C\left(\frac{1}{R}+\frac{1 }{\sqrt{R}}+\frac{1}{\sqrt{T}}\right).\]
This completes the proof of Theorem 2.
|
2308.00133 | A Suite of Fairness Datasets for Tabular Classification | There have been many papers with algorithms for improving fairness of
machine-learning classifiers for tabular data. Unfortunately, most use only
very few datasets for their experimental evaluation. We introduce a suite of
functions for fetching 20 fairness datasets and providing associated fairness
metadata. Hopefully, these will lead to more rigorous experimental evaluations
in future fairness-aware machine learning research. | Martin Hirzel, Michael Feffer | 2023-07-31T19:58:12Z | http://arxiv.org/abs/2308.00133v1 | # A Suite of Fairness Datasets for Tabular Classification
###### Abstract
There have been many papers with algorithms for improving fairness of machine-learning classifiers for tabular data. Unfortunately, most use only very few datasets for their experimental evaluation. We introduce a suite of functions for fetching 20 fairness datasets and providing associated fairness metadata. Hopefully, these will lead to more rigorous experimental evaluations in future fairness-aware machine learning research.
## 1 Introduction
Many people share the goal of making artificial intelligence fairer to those affected by it. There is extensive debate about which fairness interventions are appropriate and effective to achieve this goal. This debate should be informed, at least in part, by rigorous experimental evaluation. Rigorous experiments can help stakeholders make more informed choices among existing fairness interventions, as well as help researchers invent better ones. Unfortunately, most papers about fairness interventions evaluate them on at most a handful of datasets. This is because historically, it was hard to find and fetch datasets relevant to fairness, as well as associate them with fairness metadata, such as favorable labels or protected attributes.
\begin{table}
\begin{tabular}{l l r r r r r l l} \hline \hline
**name** & **origin** & **\#rows** & **\#cols** & **any** & **any** & **\#la-** & **target** & **favorable** & **protected attributes** \\ & & & & **cat.** & **mis.** & **bels** & **name** & **labels** & **(first)** & **(second)** \\ \hline ricci & OpenML & 118 & 5 & yes & no & 2 & promotion & Promotion & race & \\ tae & OpenML & 151 & 5 & no & no & 3 & class\_attribute & 3 & whether\_\_\_\_\_\_\_\_\_ \\ heart\_disease & OpenML & 303 & 13 & no & no & 2 & target & 1 & age \\ student\_math & OpenML & 395 & 32 & yes & no & 2 & g3\_ge\_10 & 1 & sex & age \\ student\_por & OpenML & 649 & 32 & yes & no & 2 & g3\_ge\_10 & 1 & sex & age \\ credig & OpenML & 1,000 & 20 & yes & no & 2 & class & good & personal\_\_\_\_\_\_ & age \\ titanic & OpenML & 1,309 & 13 & yes & yes & 2 & survived & 1 & sex \\ us\_crime & OpenML & 1,994 & 102 & yes & no & 2 & crimeg70pct & 0 & blackgtgtpct & \\ compas\_violent & ProPublica & 4,020 & 51 & yes & yes & 2 & two\_year\_recid & 0 & sex & race \\ nlsy & OpenML & 4,908 & 15 & yes & no & 2 & income\^{}6g617 & 1 & age & gender \\ compas & ProPublica & 6,172 & 51 & yes & yes & 2 & two\_year\_recid & 0 & sex & race \\ speeddating & OpenML & 8,378 & 122 & yes & yes & 2 & match & 1 & same\_er & importa\_\_\_\_\_\_\_ \\ nursery & OpenML & 12,960 & 8 & yes & no & 5 & class & spec\_prior & parents & \\ meps19 & AHRQ & 16,578 & 1,825 & yes & no & 2 & UTILIIZATION & 1 & RACE & \\ meps21 & AHRQ & 17,052 & 1,936 & yes & no & 2 & UTILIIZATION & 1 & RACE & \\ meps20 & AHRQ & 18,849 & 1,825 & yes & no & 2 & UTILIIZATION & 1 & RACE & \\ law school & OpenML & 20,800 & 11 & yes & no & 2 & upspagag & TRUE & race1 & \\ default\_credit & OpenML & 30,000 & 24 & no & no & 2 & default\_pay\_\_\_\_\_\_\_\_ & 0 & sex & \\ bank & OpenML & 45,211 & 16 & yes & no & 2 & class & 1 & age & \\ adult & OpenML & 48,842 & 14 & yes & yes & 2 & class & -50K & race & sex \\ \hline \hline \end{tabular}
\end{table}
Table 1: Static information about the datasets. Column ‘origin’ specifies from where the data is downloaded. Columns ‘#rows’ and ‘#cols’ give the shape of X. Columns ‘any cat.’ and ‘any mis.’ indicate whether X has any categorical columns and any missing values, respectively. Column ‘#labels’ shows the number of unique values in y and ‘target name’ shows the name of y. Columns ‘favorable labels’ and ‘protected attributes’ are part of the fairness metadata. For details see [https://github.com/IBM/lale/blob/master/examples/demo_fairness_datasets.ipynb](https://github.com/IBM/lale/blob/master/examples/demo_fairness_datasets.ipynb).
### Related Work
OpenML [8] provides thousands of datasets ready for machine learning experiments, but does not identify which of them are relevant to fairness and does not provide fairness metadata. AIF360 [2] provides functions for fetching 8 fairness datasets along with metadata, but requires using a special class or a multi-level pandas index. Quy et al. [7] describe 15 fairness datasets, but do not provide code for fetching them, do not provide machine-readable metadata for them, and some of their datasets are difficult to obtain. We applaud OpenML, AIF360, and Quy et al. for getting most of the way towards a suite of fairness datasets and build upon their work to take the last missing step.
### Contribution
This paper describes a suite of 20 Python functions to fetch 20 datasets along with fairness metadata (see Table 1). It focuses on tabular data with classification targets, which is the most well-studied setting. (Other settings also have merit but are beyond the scope of this paper.) To make these functions easy to use, they simply return data in pandas format [6] along with fairness metadata in JSON format.
**Minimally process the data.** Our functions perform only limited preprocessing, because preprocessing impacts fairness and can be difficult to invert. That said, some preprocessing already happened at source before downloading, beyond our control. Where the prediction target is not yet categorical, our functions discretize it. Where necessary, our functions drop the feature column from which the discretized target was derived. There are some other cases where our functions drop additional feature columns because they are not useful. Finally, some feature columns lack a meaningful name and our functions rename them, e.g. from "v1" to "age". See the code for details.
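As a generic illustration of the kind of minimal preprocessing just described (this is not the library's actual code; the "v1" to "age" rename comes from the text, the grade threshold from the student datasets' target name in Table 1, and the raw grade column name "g3" is assumed), such a step can be written in a few lines of pandas:

```
import pandas as pd

def minimally_preprocess(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.Series]:
    """Sketch of the preprocessing style described in the text: rename an unhelpful
    column, discretize a numeric target, and drop the column the target came from."""
    df = df.rename(columns={"v1": "age"})      # give a meaningless column name a real one
    y = (df["g3"] >= 10).astype(int)           # discretize the grade into the binary target g3_ge_10
    X = df.drop(columns=["g3"])                # drop the feature the target was derived from
    return X, y
```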
**Provide fairness metadata**. Each of the functions returns a JSON object with fairness metadata. Figure 1 shows an example. The fairness metadata comprises a list of favorable labels (i.e., favorable values in \(y\)) and a list of protected attributes (i.e., column names in \(x\)). For each protected attribute, it gives a list of either ranges or values that indicate membership in the privileged group. In practice, features and labels relevant to fairness considerations are subject to interpretation and should be determined through careful consultation with stakeholders. Hence, we opted for a simple format that is easy to change.
**How to use the functions**. First install the Lale library [1] by doing pip install lale. Then you can call the functions as illustrated by the following two lines of Python code for the creditg dataset:
```
1importlale.lib.aif360
2X,yfairness_info=late.lib.aif360.fetch_creditg_df()
```
After this code, \(x\) and \(y\) contain the features and labels of the data, represented as a pandas dataframe and series [6], and fairness_info contains the metadata, represented as a JSON object as illustrated in Figure 1. At this point, you can use your favorite library to split and preprocess the data, make predictions, evaluate metrics, and perhaps mitigate bias. A popular choice for many of these tasks would be the sklearn library [3]. While our dataset fetching functions are part of the Lale library [1], you do not need to use Lale to process their results. On the other hand, Lale contains additional code that uses the metadata, including bias mitigators and fairness metrics.
balanced. Subplot 'data_di' shows the symmetric disparate impact [5]; it is the ratio of the favorable rates for the unprivileged and privileged groups. Higher disparate impact values mean the data is more fair, with values under 0.8 usually considered unfair [5]. The remaining three subplots show averages from 5-fold cross-validation experiments with a popular and well-performing classifier, XGBoost [4], with error bars showing standard deviations. Subplot 'xgb_di' shows that while bias in the data does not always exactly equal bias in predictions of a classifier trained on the data, the trends are similar across the 20 datasets. Subplot 'xgb_eo' shows equal opportunity difference, which is the difference of true positive rates between the unprivileged and privileged groups, with zero indicating perfect fairness. Subplot 'xgb_ba' shows balanced accuracy, which is the average recall for the all classes, where higher values are better and the best value is 1. Despite using 5-fold cross-validation, the classifier overfit a couple of datasets with 100% balanced accuracy.
## 4 Conclusion
We hope our functions for fetching fairness datasets are useful and we welcome contributions to their open-source code. Ideally, future papers with experimental evaluations of fairness interventions will use at least 20, if not more, datasets.
Figure 3: Metrics characterizing the datasets. Subplot ‘data_ci’ shows the class imbalance of the data based on binarizing y using the metadata. Subplot ‘data_di’ shows the symmetric disparate impact of the data. Subplots ‘xgb_di’, ‘xgb_eo’, and ‘xgb_ba’ show the symmetric disparate impact, equal opportunity difference, and balanced accuracy of predictions from XGBoost. |
2309.10009 | A Resolution of the Monopole Problem in the R_h=ct Universe | Spontaneous symmetry breaking in grand unified theories is thought to have
produced an exceedingly large number of magnetic monopoles in the early
Universe. In the absence of suppression or annihilation, these very massive
particles should be dominating the cosmic energy budget today, but none has
ever been found. Inflation was invented in part to dilute their number, thereby
rendering their density undetectable by current instruments. Should the
inflationary paradigm not survive, however, the ensuing disagreement between
theory and observation would constitute a cosmological `monopole problem' and
create further tension for any extension to the standard model of particle
physics. But as is also true for all horizon problems, a monopole overabundance
emerges only in cosmologies with an initial period of deceleration. We show
that the alternative Friedmann-Lemaitre-Robertson-Walker cosmology known as the
R_h=ct universe completely eliminates all such anomalies rather trivially and
naturally, without the need for an inflated expansion. We find that the
monopole energy density today would be completely undetectable in R_h=ct.
Evidence continues to grow that the zero active mass condition from general
relativity ought to be an essential ingredient in LCDM. | Fulvio Melia | 2023-09-18T04:38:31Z | http://arxiv.org/abs/2309.10009v1 | # A Resolution of the Monopole Problem in the \(R_{\rm h}=ct\) Universe
###### Abstract
Spontaneous symmetry breaking in grand unified theories is thought to have produced an exceedingly large number of magnetic monopoles in the early Universe. In the absence of suppression or annihilation, these very massive particles should be dominating the cosmic energy budget today, but none has ever been found. Inflation was invented in part to dilute their number, thereby rendering their density undetectable by current instruments. Should the inflationary paradigm not survive, however, the ensuing disagreement between theory and observation would constitute a cosmological'monopole problem' and create further tension for any extension to the standard model of particle physics. But as is also true for all horizon problems, a monopole overabundance emerges only in cosmologies with an initial period of deceleration. We show that the alternative Friedmann-Lemaitre-Robertson-Walker cosmology known as the \(R_{\rm h}=ct\) universe completely eliminates all such anomalies rather trivially and naturally, without the need for an inflated expansion. We find that the monopole energy density today would be completely undetectable in \(R_{\rm h}=ct\). Evidence continues to grow that the zero active mass condition from general relativity ought to be an essential ingredient in \(\Lambda\)CDM.
keywords: FLRW spacetime, The \(R_{\rm h}=ct\) universe, monopole problem +
## 1 Introduction
At the core of the standard model of particle physics is the unification of electromagnetism and the weak nuclear force into a single SU(2) \(\times\) U(1) symmetry [12; 59; 64], which is broken at low energies because of a Higgs field [9; 18; 13] that acquired a vacuum expectation value [1] some \(10^{-11}\) seconds after the Big Bang [36].
But there are many theoretical reasons to expect an even bigger--or grand--unification of all known elementary particle forces (except gravity), into a so-called Grand Unified Theory (GUT), as first proposed by Georgi and Glashow in 1974 [11]. These include: (i) the fact that the complex structure of particles in the standard model, comprising three different gauge symmetries and a wide assortment of particle properties, is more simply explained within a grand unified scheme; and (ii) that the interaction strengths are not fixed constants. They vary with energy in such a way that they meet when extrapolated to \(\sim 10^{16}\) GeV [51].
Georgi and Glashow achieved this unification quite simply using only the SU(5) non-Abelian gauge symmetry and one additional Higgs scalar field. If this second Higgs field also gained a vacuum expectation value--presumably at a temperature \(kT\sim 10^{15-16}\) GeV--the GUT symmetry would have been broken into the SU(2) and U(1) symmetries of the electroweak theory and a separate SU(3) symmetry describing the strong nuclear force.
Since then, other GUTs have been proposed and today the SU(5) version is not unique in describing all strong and electroweak forces using one grand unified scheme, relying instead on different gauge groups, such as SO(10) [10] and E(6) [14]. Nevertheless, all of them have one key property in common--the creation of 't Hooft-Polyakov monopoles when the GUT symmetry is spontaneously broken [63; 53; 55]. This happens because whenever a U(1) symmetry remains after a gauge symmetry is broken by a Higgs field, so-called stable 'hedgehog' Higgs configurations also persist [21; 54]. But a U(1) gauge symmetry is required for electromagnetism, so any GUT that unifies all three forces other than gravity must always have stable 'hedgehogs,' i.e., 't Hooft-Polyakov monopoles.
These features are examples of a topological defect, or a topological soliton [27], and are _extremely stable_, given that they cannot be turned continuously into the uniform vacuum state. GUT monopoles might therefore have created a problem for cosmology when the GUT symmetry was broken if the Universe began with a temperature \(kT\gg 10^{16}\) GeV and cooled soon after
the Big Bang. They would have appeared physically as quanta of energy localized within small volumes, interacting as massive particles with their environment.
As we shall see in § 2, a monopole problem arises in standard Big Bang cosmology because even simple calculations suggest that so many of them would have been created that the cosmic dynamics today would be completely overwhelmed by their gravitational influence, which is clearly not the case. The earliest estimates assumed that the monopole density was at some time in thermal equilibrium [67; 55], but even other approaches avoiding this assumption [16; 8] arrived at similar conclusions.
The favored explanation for the absence of a GUT monopole presence today is inflation, introduced in various guises and for a variety of reasons [62; 20; 24], but specifically to solve the monopole, horizon and flatness problems by Alan Guth in 1981 [15]. In this picture, the early Universe was dominated by a scalar field with a potential producing a de Sitter type of accelerated expansion which simply diluted the monopole density away to insignificant levels, before resettling back to its standard expansion driven by matter, radiation and (as we now know) an unknown form of dark energy. In order for this to work, the inflationary expansion had to occur after the creation of the GUT monopoles (or during it, as originally thought).
But even after three decades of development, we still do not have a complete picture of how inflation is supposed to have worked. Many would categorize it as more of a general idea than a specific, well-understood theory. In recent years, there have been several indications that its foundational tenets are simply not consistent with the data. For example, in light of the latest _Planck_ measurements [52], significant doubts have been raised concerning its assumed initial conditions [19].
Even more compellingly, a careful re-analysis of the temperature anisotropies in the CMB has shown to a high degree of confidence that the primordial power spectrum \(P(k)\) has a hard cutoff, \(k_{\rm min}=(3.14\pm 0.36)\times 10^{-4}\) Mpc\({}^{-1}\)[47, 48, 60], which creates significant tension with all slow-roll inflationary potentials. In order to correctly account for \(P(k)\), the inflationary potential must dominate over the field's kinetic energy, but then the presence of \(k_{\rm min}\) inhibits inflation from producing an expansion with a sufficient number of e-folds to simultaneously solve the horizon problem [26].
Enough concerns have now been leveled at the general inflationary paradigm that we should seriously consider whether it could have happened at all. At this point, we could either resort to a drastic simplification and merely claim
that monopoles were never produced, which would then shift the problem to the standard model of particle physics, or instead seek an alternative cosmology in which the monopole problem never emerges.
Indeed, we shall demonstrate in this paper that the alternative Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology, known as the \(R_{\rm h}=ct\) universe [49; 39], trivially and _naturally_ eliminates the GUT monopole problem completely. And it does so while also obviating, or significantly mitigating, all the other conflicts and inconsistencies in \(\Lambda\)CDM (see, e.g., Table 2 in ref. [35] and the more complete discussion in ref. [39]), including all horizon problems that plague the standard model, to which the monopole anomaly is closely related.
## 2 Topological Defects and the Monopole Problem
We have no reason to believe that the temperature in the Universe was initially lower than \(10^{16}\) GeV at the Big Bang. It is therefore reasonable to assume that the Universe must have undergone a phase transition as it cooled, corresponding to a critical temperature \(kT_{\rm c}\) of order the unification energy scale. Above \(T_{\rm c}\), the Higgs scalar field, which acts as an order parameter for the symmetry breaking, had zero expectation value, so no monopoles were present. They formed once \(T\) dropped below \(T_{\rm c}\), however, and were extremely stable [22; 65; 6].
The basic idea behind the formation of GUT monopoles follows the approach first taken by Kibble in 1976 [21], and later generalized to include its applicability to grand unified theories [54; 57]. The detailed mechanism depends on whether the phase transition was second-order (or weakly first-order), producing large fluctuations, or whether it was strongly first-order, with an associated supercooling progression. The expected monopole density, however, is very similar in both cases.
At a fundamental level, one can reasonably argue that the Higgs field during the phase transition could not have been correlated over distances greater than some characteristic scale \(\xi\). This is easy to understand in the context of general relativity, given that the transition would have occurred over a finite time, so the Higgs field at two spacetime points separated beyond their causal limit would have settled into vacuum expectation values independently of each other, producing a domain structure with a characteristic size \(\xi\) for each domain.
But the Higgs field had to be continuous, so it would have interpolated smoothly between two adjacent domains. Nevertheless, there is a probability \(p\) (not much smaller than 1) that the scalar field orientation ended up being topologically nontrivial at the intersection point of several independent domains, so a monopole or antimonopole would have formed there. One therefore estimates that the density of GUT monopoles created at the phase transition must have been
\[n_{\rm m}(t_{\rm GUT})\sim p\xi^{-3}\;, \tag{1}\]
with \(p\sim 1/10\) in typical grand unified theories [54].
At least in the case of a second-order (or weakly first-order) phase transition, \(\xi\) might have become very large as \(T\) approached \(T_{\rm c}\), but even for this situation the domain structure of the Higgs field could not have violated the causality limit [16, 8]. The largest correlation length one could contemplate is thus the gravitational (or Hubble) radius,
\[R_{\rm h}(t_{\rm GUT})\equiv\frac{c}{H(t_{\rm GUT})}\;, \tag{2}\]
where \(H(t_{\rm GUT})\) is the Hubble parameter at the time the phase transition took place (see ref. [37] for a discussion of causal limits based on the gravitational horizon in cosmology). And so we conclude from this that, in a second-order (or weakly first-order) transition, the initial GUT monopole density in the comoving frame would have been set by the constraint \(\xi\leq R_{\rm h}(t_{\rm GUT})\), implying that
\[n_{\rm m}(t_{\rm GUT})\geq pR_{\rm h}(t_{\rm GUT})^{-3}\;. \tag{3}\]
If the GUT transition was first-order, the emergence of a vacuum expectation value for the Higgs field would have proceeded via the nucleation of stable bubbles within an unstable, unbroken medium. These bubbles would have expanded at lightspeed to fill the Universe with the stable phase [3]. GUT monopoles would then have been created where the bubbles collided and merged, based on the same Kibble mechanism described above. The principal difference between these two outcomes would have been the temperature at which the transition was finalized, which was presumably \(T_{\rm c}\) in the former, but somewhat lower in the latter, given that the phase with unbroken symmetry would have persisted until the bubble nucleation was complete. A somewhat lower temperature for the first-order transition implies a later time, so the predicted density of monopoles in this case would
have been
\[n_{\rm m}(t_{\rm bubble})\geq pR_{\rm h}(t_{\rm bubble})^{-3}\;, \tag{4}\]
with \(t_{\rm bubble}>t_{\rm GUT}\). Such differences, however, have no material impact on our discussion, so we shall henceforth simply assume that the initial GUT monopole density was generically given by Equation (3).
Once created, the abundance of monopole-antimonopole pairs might have been reduced by annihilations, but this process would have been very inefficient due to the very large monopole mass and the rapid cosmic expansion. As such, the density of GUT monopoles in the comoving frame would have been hardly reduced at all below the estimate given in Equation (3) [67, 55, 5, 54].
In anticipation of our proposed solution to the monopole anomaly, let us now examine why this monopole abundance creates a significant problem for standard Big Bang cosmology. Adopting the flat \(\Lambda\)CDM model, we may set the expansion factor \(a(t)\) equal to 1 today. Then, with \(kT_{\rm c}=10^{16}\) GeV, the equation
\[T_{\rm c}=(1+z_{\rm GUT})T_{0}\;, \tag{5}\]
in terms of today's CMB temperature, \(T_{0}=2.72548\pm 0.00057\) K, yields the redshift, \(z_{\rm GUT}\sim 4.3\times 10^{28}\), at which the GUT phase transition would have taken place. The corresponding expansion factor was
\[a(t_{\rm GUT})=\frac{1}{1+z_{\rm GUT}}\sim 2.3\times 10^{-29}\;. \tag{6}\]
In the standard model, the Hubble parameter may be written
\[H(a)=H_{0}\sqrt{\Omega_{\rm m}a^{-3}+\Omega_{\rm r}a^{-4}+\Omega_{\Lambda}}\;, \tag{7}\]
which we evaluate using the _Planck_ optimized parameters, including the Hubble constant \(H_{0}=67.4\pm 0.5\) km s\({}^{-1}\) Mpc\({}^{-1}\), and normalized densities \(\Omega_{\rm m}=0.315\pm 0.007\) (matter), \(\Omega_{\rm b}=0.0377\pm 0.0002\) (baryons), \(\Omega_{\rm r}=5.370\pm 0.001\times 10^{-5}\) (radiation) and \(\Omega_{\Lambda}=0.685\pm 0.015\) (cosmological constant) [52]. Thus, we find that the gravitational (or Hubble) radius at the GUT scale (Eq. 2) must have been
\[R_{\rm h}(t_{\rm GUT})\sim 3.9\times 10^{-28}\;{\rm cm}\;. \tag{8}\]
If we now assume that the comoving density of GUT monopoles has remained more or less constant for \(t\geq t_{\rm GUT}\), we find that their physical
density today would be
\[n_{\rm m}(t_{0})\sim p\left[\frac{a(t_{\rm GUT})}{R_{\rm h}(t_{\rm GUT})}\right] ^{3}\;, \tag{9}\]
which yields
\[n_{\rm m}(t_{0})\sim 21\left(\frac{p}{0.1}\right)\;{\rm m}^{-3}\;. \tag{10}\]
By comparison, the proton density is estimated to be
\[n_{\rm H}(t_{0})\approx\frac{\Omega_{\rm b}\rho_{\rm c}}{m_{\rm H}}\sim 0.2\;{ \rm m}^{-3}\;, \tag{11}\]
where \(\rho_{\rm c}\equiv 3H_{0}^{2}/8\pi G\) is the critical mass density and \(m_{\rm H}\) is the mass of the hydrogen atom.
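As a rough cross-check of the numbers quoted above, the following short sketch reproduces Equations (5)-(11) with the Planck parameters listed in the text; it is illustrative only, taking \(R_{\rm h}(t_{\rm GUT})\) directly from Equation (8) rather than re-deriving it, and assuming \(p=0.1\) as in Equation (10).

```python
import math

# Planck-optimized parameters quoted in the text (taken as given)
H0 = 67.4 * 1.0e5 / 3.0857e24    # Hubble constant in s^-1 (67.4 km/s/Mpc)
Omega_b = 0.0377                 # baryon density parameter
T0 = 2.72548                     # CMB temperature today [K]
kB = 8.617333e-5                 # Boltzmann constant [eV/K]
kTc = 1.0e25                     # GUT scale in eV (10^16 GeV)

# Eqs. (5)-(6): redshift and expansion factor at the GUT transition
z_GUT = kTc / (kB * T0) - 1.0    # ~4.3e28
a_GUT = 1.0 / (1.0 + z_GUT)      # ~2.3e-29

# Eqs. (8)-(10): monopole density today, with R_h(t_GUT) taken from Eq. (8)
p = 0.1                          # monopoles per correlation volume (assumed)
Rh_GUT = 3.9e-28                 # cm
n_m_today = p * (a_GUT / Rh_GUT) ** 3              # cm^-3
print(f"n_m(t0) ~ {n_m_today * 1.0e6:.0f} m^-3")   # ~21 m^-3

# Eq. (11): proton density today for comparison
G = 6.674e-8                     # gravitational constant [cm^3 g^-1 s^-2]
m_H = 1.6735e-24                 # hydrogen mass [g]
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)          # critical density [g cm^-3]
n_H_today = Omega_b * rho_c / m_H                  # cm^-3
print(f"n_H(t0) ~ {n_H_today * 1.0e6:.1f} m^-3")   # ~0.2 m^-3
```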
A comparison of Equations (10) and (11) shows clearly why we have a monopole problem in cosmology, since the magnetic monopole density would not only be much larger than that of baryons today but, when coupled with their enormous mass difference (i.e., a factor of \(\sim 10^{16}\)), implies that the monopole contribution to the energy density of the Universe would be ridiculously high, certainly well beyond any reasonable upper limit placed by ongoing searches for these topological defects [54, 58].
A careful inspection of the argument leading up to Equation (10) would reveal that the standard model fails because it predicts a decelerated expansion at early times (i.e., \(\dot{a}\sim t^{-1/2}\); see Eq. 7 with \(H(a)\equiv\dot{a}/a\)), greatly inhibiting the rate at which the volume per monopole grew in comparison with the size of the visible Universe, i.e., \(\dot{R}_{\rm h}\). The monopole problem is thus caused by the same flaw that gives rise to the various horizon problems in standard cosmology [32, 36].
The accelerated spurt during inflation is supposed to have overcome this deficiency by greatly expanding the physical volume per monopole and thereby hugely diluting their density to undetectable levels today. But if inflation were to eventually go away, as some of the observations are now suggesting, we would be left with a significant conflict between the standard model of particle physics and the current standard model of cosmology. In the next section, we shall demonstrate how this impasse--like all the other current problems in \(\Lambda\)CDM--is completely and _naturally_ removed by the alternative FLRW cosmology known as the \(R_{\rm h}=ct\) universe.
## 3 The \(R_{\rm h}=ct\) universe
The \(R_{\rm h}=ct\) cosmology has been under development for over fifteen years [29; 49; 39]. As of today, more than 27 different kinds of observation have been used in comparative studies between this model and \(\Lambda\)CDM, at both high and low redshifts, employing a broad range of integrated and differential measures, such as the luminosity and angular diameter distances, the redshift-dependent expansion rate, and the redshift-age relationship. In all of the tests completed and published thus far, \(R_{\rm h}=ct\) has accounted for the data at least as well as the standard model, and often much better. A recent compilation of the papers reporting this work may be found in Table 2 of ref. [35]. A more complete description of this model, including its foundational underpinnings, may be found in [39].
Briefly, the original motivation for this model was the emergence of a very unusual 'coincidence' in the cosmological data, suggesting that the apparent (or gravitational) horizon in the Universe was equal to the light-travel distance since the Big Bang [29; 30; 37]. It is very straightforward to convince oneself that, given the various periods of acceleration and deceleration in the standard model, this equality can only happen once in the entire history of the Universe, and yet it is happening right now, at time \(t_{0}\), just when we happen to be looking [49]. Of course, the probability for such a chance coincidence is thus effectively zero if \(\Lambda\)CDM is the correct cosmology. It is well known that the standard model suffers from several inexplicable coincidences, but this one is arguably the worst.
The simplest 'solution' to this conundrum is that the equality \(R_{\rm h}=ct\) (hence the eponymous name for the model) must be true at all times, not just at this instant. Then it wouldn't matter when the observations are made, since the same condition would be valid at all times \(t\) smaller than, or larger than, \(t_{0}\).
This equality, however, implies (via the Friedmann equations) that the cosmos expands at a constant rate, with an expansion factor \(a(t)=t/t_{0}\). This contrasts with the variable expansion rate predicted by \(\Lambda\)CDM, so the earliest work with this hypothesis has revolved around the acquisition of empirical evidence supporting this unexpected scenario. Needless to say, the general degree of success enjoyed by the standard model over the past few decades makes it difficult to believe that the history of the Universe could be adequately accounted for by such a different paradigm. And yet, test after test, now including over 27 different types of data, at both high
and low redshifts, based on measurements of the luminosity distance, or the angular diameter distance, or the redshift-dependent Hubble expansion rate or (perhaps most spectacularly) the redshift-time dependence, have all shown that the observations quite compellingly favor \(R_{\rm h}=ct\) over the standard model. A quick perusal of the aforementioned Table 2 in ref. [35] would show that the 'score' is effectively 27 to 0 in favor of the former model.
And this compilation does not include the most recent comparative tests based on the latest JWST observations [17; 7; 50; 2] showing that the timeline for the formation of structure predicted by \(\Lambda\)CDM in the early Universe is strongly disfavored, while that predicted by \(R_{\rm h}=ct\) matches the data almost exactly [45; 46].
The successful empirical support it has received from this body of observational work has motivated a deeper exploration of its origin and viability. As we now understand it, \(R_{\rm h}=ct\) is essentially \(\Lambda\)CDM, but with a critical added constraint to its total equation-of-state, known as the zero active mass condition in general relativity, i.e., \(\rho+3p=0\), where \(\rho\) and \(p\) are, respectively, the total energy density and pressure in the cosmic fluid. Again, it is straightforward to see this from the Friedmann equations when one imposes the constraint \(R_{\rm h}=ct\), which in turn implies that \(a(t)=t/t_{0}\).
But more recent theoretical work appears to show that this condition may be necessary for the proper usage of the FLRW metric in a cosmic setting [43; 44]. Evidence is growing that the choice of lapse function, \(g_{tt}=1\), in the FLRW metric precludes any possibility of an accelerated expansion, given that it permits no time dilation in the accelerated frame relative to a local free-falling frame. This is still work in progress, awaiting further independent confirmation. If correct, this fundamental underpinning explains why the zero active mass equation-of-state must produce an expansion profile with \(a(t)=(t/t_{0})\) at all redshifts, including the early Universe, where the monopole problem emerges.
The theoretical support this model now receives is extensive, impacting every area in which \(\Lambda\)CDM has a major problem or inconsistency. For example, the \(R_{\rm h}=ct\) universe completely eliminates the CMB temperature [32] and electroweak [36] horizon problems, without the need for inflation. It solves the cosmic entropy problem [41], and provides a natural explanation for the origin of the cutoff \(k_{\rm min}\) in the primordial power spectrum [38]. It also completely and naturally removes the time compression problem in \(\Lambda\)CDM, in which galaxies [33] and quasars [31] would otherwise appear far too early in its history. Quite remarkably, the \(R_{\rm h}=ct\) cosmology even explains the
origin of rest mass energy [40]. In addition to these notable successes, several other applications and conflict resolutions have also been reported in both the primary and secondary literature.
In this paper, we address one of the few remaining topics yet to be broached in this comparative study--i.e., the third and final original motivation for the introduction of inflation back in the early 1980's. The monopole problem has been invoked on countless occasions as important phenomenological support for the existence of an inflaton scalar field in the early Universe. But as we have noted elsewhere in this paper, the observations now appear to be retreating from this paradigm, creating a growing schism between the current standard model of particle physics and \(\Lambda\)CDM. In the next section, however, we shall demonstrate how the \(R_{\rm h}=ct\) universe completely removes the monopole problem naturally and elegantly, adding to its long list of accomplishments discussed above.
## 4 GUT Monopoles in the \(R_{\rm h}=ct\) universe
As noted earlier in § 2, horizon problems emerge only in cosmologies with an early period of decelerated expansion, such as \(\Lambda\)CDM [32]. Horizon problems, which are closely related to an overabundance of magnetic monopoles, therefore do not even emerge in a model such as \(R_{\rm h}=ct\), whose expansion never decelerated.
This is very easy to demonstrate quantitatively. In this alternative cosmology, we have \(a(t)=t/t_{0}\) and \(R_{\rm h}(t)=ct\), so the monopole density in Equation (9) simply becomes
\[n_{\rm m}(t_{0})\sim\frac{p}{R_{\rm h}(t_{0})^{3}}\;, \tag{12}\]
regardless of when the GUT phase transition occurred. In other words, since the initial density of magnetic monopoles in grand unified theories was expected to be of order \(p\) per Hubble volume, it would have remained of order \(p\) per Hubble volume throughout history, including today. Needless to say, this density is completely undetectable, given that \(R_{\rm h}(t_{0})\sim 1.4\times 10^{28}\) cm, and monopoles would have zero influence on the expansion dynamics.
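A minimal numerical illustration of Equation (12), assuming \(p=0.1\) and the value \(R_{\rm h}(t_{0})\sim 1.4\times 10^{28}\) cm quoted above, makes the contrast with the \(\Lambda\)CDM estimate explicit:

```python
p = 0.1                 # monopoles per Hubble volume (assumed, as above)
Rh_today = 1.4e28       # gravitational radius today [cm], as quoted in the text
n_m_rhct = p / Rh_today**3
print(f"n_m(t0) ~ {n_m_rhct:.1e} cm^-3")   # ~3.6e-86 cm^-3, i.e. of order one monopole per Hubble volume
```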
Thus, the \(R_{\rm h}=ct\) universe trivially solves the horizon and monopole problems because the observable Universe today was always causally connected from the very beginning. Whatever density of monopoles was created at the GUT transition would have remained constant in time because the
comoving volume filling the Hubble sphere at \(t_{\rm GUT}\) in this cosmology would also fill the entire visible Universe today.
## 5 Conclusion
With a resolution of the monopole problem we have discussed in this paper, all of the difficulties inflation was designed to overcome have now been eliminated in the context of \(R_{\rm h}=ct\). This model not only accounts for the data at both low and high redshifts generally better than \(\Lambda\)CDM, it also obviates the need for additional exotic mechanisms to solve problems that may not have been real to begin with.
Looking to the future, several observational campaigns will directly test the predictions of \(\Lambda\)CDM versus \(R_{\rm h}=ct\), promising to unambiguously reject one or the other (or perhaps both) of these models. Chief among them will be the measurement of redshift drift [61]--a clear determination of whether the cosmic expansion is accelerating or not. This effect merely requires the validity of the cosmological principle, and produces a temporal change in the redshift of fixed comoving sources if the expansion of the Universe is variable [4; 56; 28]. The Extremely Large Telescope high resolution spectrometer (ELT-HIRES) [25] will facilitate measurements in the redshift range \(2\leq z\leq 5\), while the Square Kilometer Phase 2 Array (SKA) [23] will do the same for \(z\leq 1\). Complementary observations may also be feasible with 21 cm experiments, e.g., the Canadian Hydrogen Intensity Mapping Experiment (CHIME) [66]. The \(R_{\rm h}=ct\) universe predicts zero drift at all redshifts [34; 42], so the required outcome of these campaigns is essentially just a yes/no answer, which might be achievable with a baseline of just five to ten years.
This is but one of many new and improved measurements that should revolutionize our view of cosmic history. Hopefully, the work we have reported in this paper will help to set the stage for the meaningful interpretation of the new data, and help us to clearly identify the correct cosmological model.
## Acknowledgements
I am grateful to Amherst College for its support through a John Woodruff Simpson Lectureship. I am also grateful to the anonymous referee for their careful, expert reading of this manuscript. |
2309.03713 | Word segmentation granularity in Korean | This paper describes word segmentation granularity in Korean language
processing. From a word separated by blank space, which is termed an eojeol, to
a sequence of morphemes in Korean, there are multiple possible levels of word
segmentation granularity in Korean. For specific language processing and corpus
annotation tasks, several different granularity levels have been proposed and
utilized, because the agglutinative languages including Korean language have a
one-to-one mapping between functional morpheme and syntactic category. Thus, we
analyze these different granularity levels, presenting the examples of Korean
language processing systems for future reference. Interestingly, the
granularity by separating only functional morphemes including case markers and
verbal endings, and keeping other suffixes for morphological derivation results
in the optimal performance for phrase structure parsing. This contradicts
previous best practices for Korean language processing, which has been the de
facto standard for various applications that require separating all morphemes. | Jungyeul Park, Mija Kim | 2023-09-07T13:42:05Z | http://arxiv.org/abs/2309.03713v1 | # Word segmentation granularity in Korean
###### Abstract
This paper describes word segmentation granularity in Korean language processing. From a word separated by blank space, which is termed an eojeol, to a sequence of morphemes in Korean, there are multiple possible levels of word segmentation granularity in Korean. For specific language processing and corpus annotation tasks, several different granularity levels have been proposed and utilized, because the agglutinative languages including Korean language have a one-to-one mapping between functional morpheme and syntactic category. Thus, we analyze these different granularity levels, presenting the examples of Korean language processing systems for future reference. Interestingly, the granularity by separating only functional morphemes including case markers and verbal endings, and keeping other suffixes for morphological derivation results in the optimal performance for phrase structure parsing. This contradicts previous best practices for Korean language processing, which has been the de facto standard for various applications that require separating all morphemes.
**keywords**: word segmentation granularity, morphological segmentation, agglutinative language, evaluation
## 1 Introduction
Morphological analysis for Korean has been based on an eojeol, which has been considered as a basic segmentation unit in Korean delimited by white blank spaces in a sentence. Almost all of the
language processing systems and language data sets previously developed for Korean have utilized this eojeol as a fundamental unit of analysis. Given that Korean is an agglutinative language, joining content and functional morphemes of words is very productive and the number of their combinations is exponential. We can treat a given noun or verb as a stem (also content) followed by several functional morphemes in Korean. Some of these morphemes can, sometimes, be assigned their own syntactic category. Let us consider the sentence in (1).
The corresponding morphological analysis is also provided in Figure 1. _Unggaro_ ('Ungaro') is a content morpheme (a proper noun) and a postposition _-ga_ (nominative) is a functional morpheme. They form together a single eojeol (or word) _unggaro-ga_ ('Ungaro+nom'). For the sake of convenience, we add a figure dash (-) at the beginning of functional morphemes, such as _-ga_ (nom) to distinguish between content and functional morphemes. The nominative case markers _-ga_ or _-i_ may vary depending on the previous letter -- vowel or consonant. A predicate _naseo-eoss-da_ also consists of the content morpheme _naseo_ ('become') and its functional morphemes, _-eoss_ ('past') and _-da_ ('decl'), respectively.
1. _peurangseu-ui segye-jeok-i-n_ _uisang dijaineo emmanuel unggaro-ga silnae_ France-gen world class-rel fashion designer Emanuel Ungaro-nom interior _jangsik-yong jikmul dijaineo-ro naseo-eoss-da_. decoration textile designer-ajt become-past-decl 'The world-class French fashion designer Emanuel Ungaro became an interior textile designer.'
Every approach for Korean language processing has decided how to separate _sequences_ of morphemes into component parts, ranging from eojeols, a basic word-like unit, all the way down to a complete morphological parse. These decisions have been, for the most part, argued as either linguistically or technically motivated, with little or no interest in exploring an alternative. The choice does have some impact on the performance of algorithms in various tasks, such as part of speech (POS) tagging, syntactic parsing and machine translation. In the study, we analyze different granularity levels previously proposed and utilized for Korean language processing. In accordance with these analyzing works, we present the results of language processing applications using different segmentation granularity levels for future reference. To the best of the authors' knowledge,
this is the first time that different granularity levels in Korean have been compared and evaluated against each other. This would contribute to fully understanding the current status of various granularity levels that have been developed for Korean language processing. Specifically the main goal of this paper is to diagnose the current state of natural language processing in Korean by tracing its development procedures and classifying them into five steps. Additionally, this paper aims to clearly explicate and evaluate the challenges unique to Korean language processing, with the objective of contributing to the improvement of various methodologies in this field. To this end, after presenting previous work in Section 2, the study introduces the segmentation granularity in Korean by classifying them into five different levels with a linguistic perspective as well as a natural language processing perspective in Section 3, and presents several application for Korean language processing using the five segmentation granularity levels by comparing them each other in Section 4. Finally, Section 5 concludes the discussion.
## 2 Previous work
Different granularity levels have been proposed mainly due to varying different syntactic analyses in several previously proposed Korean treebank datasets: KAIST (Choi, Han, Han, and Kwon, 1994), Penn (Han, Han, Ko, Palmer, and Yi, 2002), and Sejong. While segmentation granularity, which we deal with, is based on morphological analysis, the syntactic theories are implicitly presented
Figure 1: Morphological analysis and part of speech (POS) tagging example in the Sejong corpus: NN* are nouns, JK* are case markers and postpositions, V* are verbs, and E* are verbal endings.
in the corpus for Korean words. Figure 2 summarizes the syntactic trees which can be represented in Korean treebanks for different segmentation granularity levels. Korean TAG grammars (Park, 2006) and CCG grammars (Kang, 2011) described Korean by separating case markers. Most work on language processing applications for Korean, such as phrase structure parsing and machine translation, uses the sentence with all morphemes separated (Choi, Park, and Choi, 2012; Park, Hong, and Cha, 2016; Kim and Park, 2022). The Penn Korean treebank introduced a tokenization scheme for Korean, while the KAIST treebank separates functional morphemes such as postpositions and verbal endings. Note that there are no functional tags (_i.e._, -sbj or -ajt) in the KAIST treebank.
Syllable-based granularity (_e.g._, _se_\(\sqcup\)_gye_\(\sqcup\)_jeok_\(\sqcup\)_i_, 'world-class') (Yu, Kulkarni, Lee, and Kim, 2017; Choi, Kim, Seol, and Lee, 2017) and even character-based granularity using the Korean alphabet (_s_\(\sqcup\)_e_\(\sqcup\)_g_\(\sqcup\)_ye_\(\sqcup\)_j_\(\sqcup\)_eo_\(\sqcup\)_k_\(\sqcup\)_i_) (Stratos, 2017; Song and Park, 2020) have also been proposed, where \(\sqcup\) indicates a blank space. They incorporate sub-word information to alleviate data sparsity, especially in neural models. Dealing with sub-word level granularity using syllables and characters does not reflect our linguistic intuition. We describe granularity based on a linguistically motivated approach in this paper, in which each segment is a meaningful morphological unit.
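To make these sub-word granularities concrete, the sketch below (an illustration only, not code from the cited systems) splits a Korean word into syllable-level units and into Korean-alphabet (jamo) units using the standard Unicode Hangul syllable decomposition; the example word _segyejeokin_ ('world-class') is taken from the sentence in (1).

```python
def syllables(word):
    """Syllable-level granularity: one unit per Hangul syllable block."""
    return list(word)

def jamo(word):
    """Korean-alphabet (jamo) granularity via the Unicode Hangul syllable formula."""
    LEADS  = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
    VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
    TAILS  = ["", "ㄱ", "ㄲ", "ㄳ", "ㄴ", "ㄵ", "ㄶ", "ㄷ", "ㄹ", "ㄺ", "ㄻ", "ㄼ", "ㄽ",
              "ㄾ", "ㄿ", "ㅀ", "ㅁ", "ㅂ", "ㅄ", "ㅅ", "ㅆ", "ㅇ", "ㅈ", "ㅊ",
              "ㅋ", "ㅌ", "ㅍ", "ㅎ"]
    units = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                     # precomposed Hangul syllable
            lead, rest = divmod(code, 588)        # 588 = 21 vowels * 28 tails
            vowel, tail = divmod(rest, 28)
            units += [LEADS[lead], VOWELS[vowel]] + ([TAILS[tail]] if tail else [])
        else:
            units.append(ch)                      # non-Hangul characters pass through
    return units

word = "세계적인"                  # segyejeokin, 'world-class'
print(syllables(word))             # ['세', '계', '적', '인']
print(jamo(word))                  # ['ㅅ', 'ㅔ', 'ㄱ', 'ㅖ', 'ㅈ', 'ㅓ', 'ㄱ', 'ㅇ', 'ㅣ', 'ㄴ']
```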
## 3 Definition of segmentation granularity
The annotation guidelines for Universal Dependencies stipulate each syntactic word, which is an atom of syntactic analysis, as a basic unit of dependency annotation (Nivre, de Marneffe, Ginter, Hajic, Manning, Pyysalo, 2020). This stipulation presupposes that there must be a separate autonomous level of syntax and morphology. One of the features of agglutinative languages is that there is a one-to-one mapping between suffixes and syntactic categories, indicating that each suffix must have an appropriate syntactic category to which it belongs. More specifically, nouns can have individual markers indicating case, number, possessive, etc., whose orders are fixed. Thus, we can regard any given noun or verb as a stem followed by several inflectional or derivational morphemes. The number of slots
Figure 2: Different syntactic analyses using different segmentation granularities. POS labels are omitted.
for a given part of a category may be pretty high. In addition, an agglutinating language adds information such as negation, passive voice, past tense, honorific degree to the verb form. That is, in an agglutinating language, verbal morphemes are added to the root of the verb to convey various grammatical features, such as negation, passive voice, past tense, and honorific degree. One of the characteristics in Korean is to use a system of honorifics to express the hierarchical status and familiarity of participants in conversation with respect to the subject, object or the interlocutor. This system plays a great role in Korean grammar. When a speaker uses honorific expression, we can figure out the social relationship between the speaker, the interlocutor, and the object in the subject position at the same time. This honorific system is reflected in the honorific markers attached to the nouns, and verbal endings to the verb.
Such a complex and rich morphological system in agglutinative languages poses numerous challenges for natural language processing. The key obstacle lies in a voluminous number of word forms that can be derived from a single stem. Word form analysis involves breaking a given surface word form into a sequence of morphemes in an order that is admissible in Korean. However, several difficulties may arise in dividing these sequences of morphemes into appropriate units. This paper describes the segmentation granularity procedures that could influence the performance of algorithms in various tasks and the various analyses that have been adopted in Korean. First of all, we define five different levels of segmentation granularity for Korean, which have been independently proposed in previous work as different segmentation units. While Levels 1 (eojeols as they are), 2 (tokenization - separating words and symbols process), and 5 (separating all morphemes process) are due to technical reasons, Levels 3 (separating case markers process) and 4 (separating verbal endings process) are based on linguistic intuition.
### Level 1: eojeols
As described previously, most Korean language processing systems and corpora have used the eojeol as a fundamental unit of analysis. For example, the Sejong corpus, the most widely-used corpus
for Korean, uses the eojeol as the basic unit of analysis.1 The Sejong corpus was first released in 2003 and was continually updated until 2011. The project produced the largest corpus for the Korean language. It includes several types of corpora: historical, contemporary, and parallel text. Contents of the Sejong corpus represent a variety of sources: newswire data and magazine articles on various subjects and topics, several book excerpts, and scraped texts from the Internet. The Sejong corpus consists of the morphologically (part of speech tagged), the syntactically (treebank), and the lexical-semantically annotated text as well as a list of Korean words as dictionaries based on part of speech categories. Figure 3 shows an example of the Sejong corpus for the sentence in (1).
Footnote 1: The Ministry of Culture and Tourism in Korea launched the 21st Century Sejong Project in 1998 to promote Korean language information processing. The project has its name from Sejong the Great who conceived and led the invention of _hangul_, the writing system of the Korean language.
We define eojeols, as in the Sejong corpus, as granularity Level 1. The rationale of this segmentation granularity in Korean language processing is simply to use the word as it is in the surface form, in which the word is separated by a blank space in the sentence (that is, in a manner of what you see is what you get). Most morphological analysis systems have been developed based on eojeols (Level 1) as input and can yield morphologically analyzed results, in which a single eojeol can contain several morphemes. The dependency parsing systems described in Oh and Cha (2013) and Park, Kawahara, Kurohashi, and Choi (2013) used eojeols as input tokens to represent dependency relationships between eojeols. Oh, Han, Park, and Cha (2011) presented a system which predicts phrase-level syntactic labels for eojeols based on the sequence of morphemes in the eojeol. Most interestingly, Petrov, Das, and McDonald (2012) proposed Universal POS tags for Korean based on the eojeol, and Stratos, Collins, and Hsu (2016) worked on POS tagging accordingly. Taking these basic trends into consideration, the study defines eojeols as Level 1. Recently released KLUE (Korean Language Understanding Evaluation) also used the eojeol as a fundamental unit of analysis (Park et al., 2021).
Figure 3: Example of the Sejong treebank annotation for the sentence in (1), showing the bracketed phrase structure (S, NP-SBJ, NP-MOD, NP-AJT, VP, etc.) over POS-tagged morphemes (NNP, JKS, NNG, XSN, VCP, ETM, and so on).
### Level 2: separating words and symbols
The process of tokenization in the Korean language has often been overlooked, primarily because eojeols have traditionally been used as the basic unit of analysis. However, it has come to our attention that certain corpora have started adopting an English-like tokenization approach, which results in preprocessed words within these corpora. One example is the Penn Korean treebank (Han et al., 2002), in which punctuation marks are separated from words.3 This segmentation granularity, especially in the Penn-treebank style corpus, focuses on multilingual processing, where Penn treebanks include English (Marcus, Marcinkiewicz, and Santorini, 1993; Taylor, Marcus, and Santorini, 2003), Chinese (Xue, Xia, Chiou, and Palmer, 2005), Arabic (Maamouri and Bies, 2004) and Korean (Han et al., 2002). The Penn Korean treebank follows the tokenization scheme that has been used in the other languages of the Penn treebanks, as shown in Figure 4. The most distinctive feature in Level 2 is that punctuation marks are all separated from the original word (tokenized).
Footnote 3: While the Penn Korean treebank separates all punctuation marks, quotation marks are the only symbols that are separated from words in the Sejong treebank to distinguish between the quoted clause and the main sentence in the tree structure. We also note that among the existing corpora for Korean, only the Sejong treebank separates quotation marks from the word. Other Sejong corpora including the morphologically analyzed corpus do not separate the quotation marks, and still use the eojeol as a basic analysis unit.
We define the tokenization by separating words and symbols as a granularity Level 2. Chung and Gildea (2009) used a granularity Level 2 for a baseline tokenization system for a machine translation system from Korean into English where they proposed an unsupervised tokenization method to improve the machine translation result. Figure 4 illustrates that the punctuation marker has been separated
Figure 4: Example of the Penn Korean treebank where the punctuation mark is separated from the word (tokenized): N* are nouns, PA* are case markers and postpositions, V* are verbs, and E* are verbal endings.
from the verb _deosbut-i-eoss-da_ ('added') and assigned its own category, with the marker being designated as sfn in the Penn Korean treebank. In addition, the tokenization scheme of the sentence follows a method similar to that of English. That is, the syntactic unit _3-wol-mal-kka-ji_ ('until the end of March') is traditionally treated as one eojeol, but in Level 2, this unit is tokenized as three different units, _3-wol_ ('March'), _mal_ ('end') and _kka-ji_ ('until'), mirroring the English tokenization of until the end of March. As mentioned in Level 1, most Korean language processing systems have used an eojeol as their basic unit of analysis, resulting in a single eojeol containing several different morphemes, which is a prominent feature of Level 1. According to this principle, we can easily identify that the subject noun phrase _geu-eun_ forms one eojeol consisting of a stem _geu_ and a topic marker _eun_. In the same way, the verb phrase _deosbut-i-eoss-da_ ('added') forms one eojeol with a root _deosbut_, a passive suffix _-i_, a past tense marker _-eoss_ and a verb ending marker _-da_.
Park, Nam, Kim, Hahm, Hwang, and Choi (2014) also used this granularity to develop Korean FrameNet lexicon units by using the cross-lingual projection method from the Korean translation of the English Propbank (Palmer, Gildea, and Kingsbury, 2005). Universal Dependencies (Nivre, de Marneffe, Ginter, Goldberg, Hajic, Manning, McDonald, Petrov, Pyysalo, Silveira, Tsarfaty, and Zeman, 2016; Nivre et al., 2020) contains two Korean dependency treebanks, namely the GSD treebank (McDonald, Nivre, Quirmbach-Brundage, Goldberg, Das, Ganchev, Hall, Petrov, Zhang, Tackstrom, Bedini, Bertomeu, 2013) and the KAIST treebank (Choi et al., 1994; Chun, Han, Hwang, and Choi, 2018), which also use the tokenization scheme by separating words and punctuation marks.
Recently, Park and Kim (2023) insisted that the functional morphemes in Korean should be treated as part of a word in Korean categorial grammars, with the result that their categories for detailed morphemes do not require to be assigned individually in a syntactic level, and also that it would be more efficient to assign the syntactic categories on the fully inflected lexical word derived by the lexical rule of the morphological processes in the lexicon.
### Level 3: separating case markers
From a purely linguistic perspective, postpositions as functional morphemes in Korean convey grammatical cases (_e.g._, nominative or accusative), adverbial relations (spatial, temporal or directional), semantic roles and conjunctives by combining with the lexical words. We may separately indicate them as case marker, adverbial postposition, auxiliary postposition, and conjunctive postposition, respectively, though we generally term them as postpositions or case markers, depending on the authors. In linguistics, a marker also refers to a free or bound morpheme indicating the grammatical function of the word, phrase or sentence. For the sake of convenience, the paper uses case markers as a term for covering them. Case markers are immediately attached following a noun or pronoun. They are used to indicate the grammatical roles of a noun in a sentence such as subject, object, complement or topic.
First of all, _-i_ and _-ga_ are nominative case markers whose form depends on whether the stem ends with a vowel or consonant. When the honorific subject is used, this nominative case marker will be replaced by the honorific marker _-kkeseo_, instead of _-i_ or _-ga_. An honorific is marked to encode the relative social status of the interlocutors. A major feature of this honorific system is typically to convey the same message in both honorific and familiar forms. Korean honorifics are added to nouns, verbs, and adjectives. Similarly to this nominative case marker, the honorific dative case marker _-kke_ will be used instead of the familiar dative case marker _-ege_. The rest of the markers are used to express the adverbial relations such as directional, temporal, spatial including source and destination, and accompaniment. All of these markers attached to the noun stem cannot be duplicated, showing complementary distribution. As shown in the example (2), the nominative case marker _-ga_ cannot be together with the instrumental case marker _-ro_ in (2b), and cannot collocate also with the dative case marker _-ege_ in (2c).
* a. _holangi-ga sanab-da_. tiger-nom fierce-decl 'A tiger is fierce.'
* b. *_holangi-ga-ro sanab-da_. tiger-nom-'to' fierce-decl ('A tiger is fierce.')
* c. *_holangi-ga-ege sanab-da_. tiger-nom-dat fierce-decl ('A tiger is fierce.')
From a natural language processing perspective, the Sejong corpus has been criticized for the scope of the case marker, in which only the final noun (usually the lexical anchor) in the noun phrase is a modifier of the case marker. For example, _Emmanuel Ungaro-ga_ in the Sejong corpus is annotated as (NP (NP _Emmanuel_) (NP _Ungaro-ga_)), in which only _Ungaro_ is a modifier of _-ga_ ('nom'). While there are several debates on whether a noun or a case marker is the modifier in Korean, for example as described in Ko (2010), this is beyond the scope of the paper. The Penn Korean treebank does not explicitly represent this phenomenon. It just groups a noun phrase together: _e.g._, (NP _Emmanuel Ungaro-ga_), which seems to be treated superficially as a simple compound noun phrase. Collins' preprocessing for parsing the Penn treebank adds intermediate NP terminals for the noun phrase (Collins, 1997; Bikel, 2004), and so NPs in the Penn Korean treebank will have a similar NP structure to the Sejong corpus (Chung, Post, and Gildea, 2010). To fix the problem in the previous treebank annotation scheme, other annotation schemes have been introduced in corpora and lexicalized grammars to correctly represent the scope of the case marker. Park (2006) considered case markers as independent elements within the formalism of Tree adjoining grammars (Joshi, Levy, and Takahashi, 1975). Therefore, he defined case markers as auxiliary trees to be adjoined to a noun phrase. In contrast to case markers, verbal endings in the inflected forms of predicates are still part of the eojeol, and they are represented as initial trees in Korean TAG grammars. The lemma of the predicate and its verbal endings are dealt with as inflected forms instead of separating functional morphemes (Park, 2006). This idea goes back to Maurice Gross's lexicon grammars in the 1970s (Gross, 1975) and his students who worked on a descriptive analysis of Korean, in which the number of predicates in Korean could be fixed by generating all possible inflected forms: _e.g._, Pak (1987); Nho (1992); Nam (1994); Shin (1994); Park (1996); Chung (1998); Han (2000).
### Level 4: separating verbal endings
From a purely linguistic perspective, Korean verbs are formed through an agglutinating process by adding various endings to the stem. Korean is widely known to have a great many verbal endings between this stem and the final verbal ending. More specifically, the verbal endings in Korean are well known to be complex in their syntactic structures in the sense that they carry much of the functional load for grammatical categories such as sentence mood, tense, voice, aspect, honorification, conjunction, etc.: for example, _inter alia_, tense (Hwang, 2003), grammatical voice (Park, 2007), interaction of tense-aspect-mood marking with modality (Song, 1998), evidentiality (Lim, 2008), and interrogativity (Lim, 2011). Additional endings can be used to denote various semantic connotations. That is, a huge number of grammatical functions are achieved by adding various verbal endings to verbs. The number can also vary depending on the theoretical analyses, naturally differing in their functions and meanings. These endings, of course, do not change the argument structures of a predicate. A finite verb in Korean can have up to seven suffixes as its endings, whose order is fixed. As mentioned in the previous section, the Korean honorific system can also be reflected in verbs with honorific forms. When a speaker expresses respect toward the entity in a subject or indirect object position, the honorific marker _-(eu)si_ is attached to the verb stem, as in _sanchaegha-si_ from the stem _sanchaegha_ ('take a walk'). The suffixes denoting tense, aspect, modality, formality and mood follow the honorific.
Unlike the markers attached to nouns, Korean verbal endings are added to the verb stem in a specific order, depending on the tense, mood, and politeness level of the sentence, as illustrated in (3). The verb stem _sanchaegha_ ('to take a walk') can be followed by the honorific _-si_ in (3a). The two suffixes indicating an honorific and past tense can be attached to the verb stem in (3b). One more additional suffix of retrospective aspect is added in the example in (3c). If the order of a past suffix and honorific suffix is changed in the verbal endings, the sentence would be ungrammatical, as in (3d).
* a. _halabeoji-kkeseo jamsi sanchaegha-si-n-da_. Grandfather-nom-hon for a while take a walk-hon-pres-decl 'Grandfather takes a walk for a moment.'
* b. _halabeoji-kkeseo jamsi sanchaegha-sy-eoss-da_. Grandfather-hon for a while take a walk-hon-past-decl 'Grandfather took a walk for a moment.'
* c. _halabeoji-kkeseo jamsi sanchaegha-sy-eoss-deon jeog-i iss-da_. Grandfather-hon once go for a walk-hon-past-asp experience-cop be-decl 'Grandfather once went for a walk for a moment.'
* d. *_halabeoji-kkeseo jamsi sanchaegha-eoss-sy-deon jeog-i iss-da_. Grandfather-hon once go for a walk-past-hon-asp experience-cop be-decl 'Grandfather once went for a walk for a moment.'
Government and Binding (GB) theory (Chomsky, 1981, 1982) has also been applied to Korean syntactic analyses, in which the entire sentence depends on verbal endings, as described in Figure 5 for _naseo-eoss-da_ ('became'). This means that the functional morpheme _-eoss_ is assigned its own syntactic category T(ense), and the verbal ending _-da_, a C(omplementizer) attached in the final position, determines the whole syntactic category CP in Korean.
From the natural language processing perspective, the KAIST treebank (Choi et al., 1994), one of the earliest Korean treebanks, introduced this type of analysis.
Figure 5: GB theory for Korean syntactic analyses, in which the entire sentence depends on verbal endings
It is the KAIST treebank representation that we adapt as granularity Level 4. While the KAIST treebank separates case markers and verbal endings from their lexical morphemes, punctuation marks are not separated and remain part of the preceding morphemes, as in the Sejong treebank. Therefore, strictly speaking, one could judge that the KAIST treebank is not granularity Level 4 by our definition, because we separate punctuation marks. In addition, while it also represents derivational morphology in the treebank annotation (_i.e._, a copula analyzed as _segye-jeok_ \(\sqcup\) _-i_ \(\sqcup\) _-n_ ('world-class') in the KAIST treebank), we separate only verbal endings (_i.e._, _segye-jeok-i_ \(\sqcup\) _-n_).
### Level 5: separating all morphemes
Many downstream applications for Korean language processing are based on the granularity Level 5, in which all morphemes are separated: POS tagging (Jung, Lee, and Hwang, 2018; Park and Tyers, 2019), phrase-structure parsing (Choi et al., 2012; Park et al., 2016; Kim and Park, 2022) and statistical machine translation (SMT) (Park et al., 2016; Park, Dugast, Hong, Shin, and Cha, 2017), among others, where the applications take the fully morpheme-separated sequence, instead of the surface sentence segmented by blank spaces, as input for language processing. A morpheme-based annotation scheme proposed in Park and Tyers (2019) for POS tagging has been extended to dependency parsing (Chen, Jo, Yao, Lim, Silfverberg, Tyers, and Park, 2022) and named-entity recognition (Chen, Lim, and Park, 2023), achieving state-of-the-art results. Figure 6 shows examples of the downstream application process: constituent parsing using the Sejong treebank and machine translation from Korean into English. In these applications, the sentence is typically converted into the sequence of morphemes to be parsed or translated. They mostly implement granularity Level 5 to avoid the problems of data sparsity and unknown words, because the number of possible types combined in longer segmentation granularities, such as the eojeol, can increase exponentially. Such a morpheme-based analysis of the word can be generated by a morphological analysis system. Therefore, most POS tagging systems can produce segmentation granularity Level 5. Separating these morphemes is straightforward from such morphological analysis results. For instance, as shown in Figure 6, in Level 5, the phrase _segyejeokin_ ('world-class'), which also includes derivational morphemes, is treated as a sequence of four separated morphemes _segye-jeok-i-n_ instead
of one surface segment as input for language processing. Specifically, this phrase is assigned four different categories: an NNG (common noun for _segye_), XSN (nominal derivational affix for _jeok_), VCP (copula for _i_) and ETM (adnominal affix for _n_), respectively. These categories consist of the word stem, two derivational morphemes, and an inflectional morpheme, resulting in a derived form that functions as a modifier in this sentence.
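As a concrete illustration of how the five granularity levels relate to one another, the following toy sketch (not the actual preprocessing code of the systems cited above) converts a Sejong-style morpheme analysis into token sequences for Levels 1-5; the assumption that case markers carry J* tags, verbal endings E* tags, and symbols S* tags follows the POS conventions shown in Figure 1.

```python
# Sejong-style analyses: (surface eojeol, [(morpheme, POS tag), ...]).
# Tags follow Figure 1: NN*/XSN nominal morphemes, VCP copula, J* case markers,
# V* verbs, E* verbal endings, S* symbols.
EOJEOLS = [
    ("세계적인", [("세계", "NNG"), ("적", "XSN"), ("이", "VCP"), ("ㄴ", "ETM")]),
    ("웅가로가", [("웅가로", "NNP"), ("가", "JKS")]),
    ("나섰다.",  [("나서", "VV"), ("었", "EP"), ("다", "EF"), (".", "SF")]),
]

# POS-tag prefixes that open a new token at each granularity level.
SPLIT_BEFORE = {
    2: ("S",),             # Level 2: separate symbols
    3: ("S", "J"),         # Level 3: also separate case markers/postpositions
    4: ("S", "J", "E"),    # Level 4: also separate verbal endings
}

def tokenize(eojeol, morphs, level):
    if level == 1:                        # Level 1: the eojeol as it is
        return [eojeol]
    if level == 5:                        # Level 5: every morpheme on its own
        return [m for m, _ in morphs]
    tokens = []
    for m, tag in morphs:
        if tokens and not tag.startswith(SPLIT_BEFORE[level]):
            tokens[-1] += m               # stay attached to the previous morpheme
        else:
            tokens.append(m)              # open a new token
    return tokens

for level in range(1, 6):
    print(level, [t for e, ms in EOJEOLS for t in tokenize(e, ms, level)])

# Caveat: tokens for Levels 2-4 are rebuilt by plain concatenation, ignoring the
# morphophonological contractions (e.g., 나서 + 었 -> 나섰) that a real converter
# from the morpheme-analyzed corpus has to restore.
```

For this fragment, Level 4 yields _segye-jeok-i_ \(\sqcup\) _-n_ \(\sqcup\) _unggaro_ \(\sqcup\) _-ga_ \(\sqcup\) _naseo_ \(\sqcup\) _-eoss_ \(\sqcup\) _-da_ \(\sqcup\) ., whereas Level 5 further splits the derived copular form into _segye_ \(\sqcup\) _-jeok_ \(\sqcup\) _-i_.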
### Discussion
Figure 7 summarizes an example of each segmentation granularity level. For our explanatory purpose, we use the following sentence in (1): _segye-jeok-i-n... unggaro-ga... naseo-eoss-da_. ('The world-class... Ungaro became...'). The advantage of Level 1 is that it has many linguistic
Figure 6: Example of downstream application processes
resources to work with. The main weakness of Level 1 is that it requires segmentation, including the tokenization process, which has been a main problem in Korean language processing. While Level 2 has appeared more frequently, especially in recent Universal Dependencies (UD)-related resources, and Levels 3 and 4 propose a more linguistically pertinent analysis, they do not mitigate the segmentation problem. Level 5 has practical merits from a processing perspective. However, the eventual problem of recombining segmented morphemes, for example in the generation step of machine translation, still remains, and it has not been discussed much yet.
Figure 7: Five levels of segmentation granularity in Korean and their POS annotation.
## 4 Diagnostic analysis
In this section, we present several applications for Korean language processing using the proposed segmentation granularity levels to compare them with each other. We use the default options that each system provides. For the experiments, we convert all data sets into each segmentation granularity. We utilize a 90-10 split of the Sejong treebank for training and evaluation in POS tagging and syntactic parsing, and the training and evaluation data sets for Korean-English machine translation provided by Park et al. (2016).
Firstly, Table 1 shows the number of tokens, the ratio of morphologically complex words (MCW), which are made up of two or more morphemes, and the number of immediate non-terminal (NT) nodes (the number of monomorphemic and complex word patterns) in the entire Sejong treebank. The immediate NT nodes signify the POS labels of the tokens, which can be eojeols, morphemes or symbols depending on the segmentation granularity.
### Language processing tasks
**Word segmentation, morphological analysis and POS tagging.** Word segmentation, morphological analysis and POS tagging for Korean require detection of morpheme boundaries. We use UDPipe, a trainable pipeline (Straka, Hajic, and Strakova, 2016), to perform tokenizing and POS tagging tasks. The current experimental setting achieved the state-of-the-art word segmentation and POS tagging results for Korean (Park and Tyers, 2019). Each trained POS tagging model assigns POS labels to the tokens of its granularity level. For example, a model should generate _segye+jeok+i+n_ for morpheme boundaries and nng+xsn+vcp+etm as a single POS label in Level 1 for _segyejeokin_ ('world-class'), or nng in Level 5 for _segye_ ('world'). We present the f1 score (\(2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}\)) for
| | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 |
|---|---|---|---|---|---|
| Token | 370,729 | 436,448 | 577,153 | 752,654 | 829,506 |
| MCW | 0.7881 | 0.6451 | 0.2939 | 0.0934 | 0 |
| Immediate NT | 4,318 | 2,378 | 1,228 | 526 | 45 |

Table 1: The number of tokens, the ratio of the morphologically complex words (MCW) and the number of immediate non-terminals (NT) in the corpus
word segmentation evaluation using precision and recall described in (1), and the accuracy score for POS tagging evaluation as in (2).
\[\begin{split}\text{precision}=\frac{\text{\# of relevant word segments}\cap\text{\# of retrieved word segments}}{\text{\# of retrieved word segments}}\\ \text{recall}=\frac{\text{\# of relevant word segments}\cap\text{\# of retrieved word segments}}{\text{\# of relevant word segments}}\end{split} \tag{1}\]
\[\begin{split}\text{accuracy}=\frac{\text{correct \# of POS tagging labels}}{\text{total \# of POS tagging labels}}\end{split} \tag{2}\]
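A minimal sketch of how Equations (1) and (2) can be computed, assuming that gold and predicted word segmentations are compared as character spans over the same underlying text and that POS labels are compared per eojeol (illustrative only; the reported scores follow the setup described above):

```python
def spans(segments):
    """Map a list of word segments onto character spans of the underlying text."""
    out, start = set(), 0
    for seg in segments:
        out.add((start, start + len(seg)))
        start += len(seg)
    return out

def segmentation_f1(gold_segments, pred_segments):
    """Eq. (1): F1 of precision and recall over word segments."""
    gold, pred = spans(gold_segments), spans(pred_segments)
    correct = len(gold & pred)
    if correct == 0:
        return 0.0
    precision = correct / len(pred)
    recall = correct / len(gold)
    return 2 * precision * recall / (precision + recall)

def pos_accuracy(gold_tags, pred_tags):
    """Eq. (2): accuracy over POS labels (one composite label per eojeol, as in Level 1)."""
    return sum(g == p for g, p in zip(gold_tags, pred_tags)) / len(gold_tags)

# Level 5 segmentation of 나섰다. ('became.'): gold vs. an under-segmented prediction
print(segmentation_f1(["나서", "었", "다", "."], ["나서", "었다", "."]))    # ~0.57
print(pos_accuracy(["NNP+JKS", "VV+EP+EF+SF"], ["NNP+JKS", "VV+EF+SF"]))   # 0.5
```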
**Syntactic parsing.** Using granularity Level 5 has been the de facto standard for Korean phrase structure parsing (Choi et al., 2012; Park et al., 2016; Kim and Park, 2022). We train and evaluate the Berkeley parser (Petrov et al., 2006; Petrov and Klein, 2007) with the different granularity levels. The Berkeley parser uses the probabilistic CFG with latent annotations previously proposed in Matsuzaki et al. (2005), and performs a series of split and merge cycles of non-terminal nodes to maximize the likelihood of a treebank. It still shows relatively good parsing results. We keep the structure of the Sejong treebank, while the terminal nodes and their immediate NTs vary depending on the granularity level. We provide gold POS labels as input instead of predicting them during parsing, in order to keep the original word boundaries. This allows us to evaluate parsing results with the same number of terminals for all granularity levels. We present the bracketing f1 score, based on the precision and recall in (3), computed with EVALB (Black et al., 2011) for parsing evaluation.
\[\begin{split}\text{precision}=\frac{\text{\# of relevant constituents}\cap\text{\# of retrieved constituents}}{\text{\# of retrieved constituents}}\\ \text{recall}=\frac{\text{\# of relevant constituents}\cap\text{\# of retrieved constituents}}{\text{\# of relevant constituents}}\end{split} \tag{3}\]
**Machine translation.** Using granularity Level 5 has been the de facto standard for machine translation for Korean (Park et al., 2016, 2017). We use the Moses statistical machine translation system (Koehn et al., 2007) with the different granularity levels for Korean to train the phrase-based translation model, with minimum error rate training (Och, 2003) during validation. We present the BLEU (BiLingual Evaluation Understudy) score (Papineni, Roukos, Ward, and Zhu, 2002) for evaluation.
### Results and discussion
The direct interpretation of task results between the different granularity levels would be difficult because the levels of representation are different (_e.g._, the number of lexical tokens differs in Table 1). To compare experiment results, (1) we report segmentation results, where Level 1 does not require any segmentation. (2) We convert all POS tagging results into Level 1 based on the eojeol after training and predicting results for each segmentation granularity level. Therefore, the presented POS tagging accuracy is based on Level 1 eojeols as in previous work on POS tagging (Cha, Lee, and Lee, 1998; Hong, 2009; Na, 2015). (3) We convert syntactic parsing results into morpheme-based Level 5 as in previous work on phrase structure parsing (Choi et al., 2012; Park et al., 2016; Kim and Park, 2022). Although the Berkeley parser can predict POS labels during parsing, we provide gold POS labels, i.e., the correct POS labels from the test dataset, as input for the parsing system to keep the original morpheme boundaries. After parsing sentences for each segmentation granularity level, we convert the parsing results into Level 5. (4) For machine translation, we translate Korean sentences in different segmentation granularity into English, where there is no
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Level 1 & Level 2 & Level 3 & Level 4 & Level 5 & \\ \hline Segmentation & **100.00** & 95.43 & 94.31 & 93.05 & 90.15 & (F\({}_{1}\)) \\ POS tagging & 83.18 & 86.28 & 89.21 & 92.82 & **96.01** & (Acc) \\ Syntactic parsing & 76.69 & 77.50 & 81.54 & **84.64** & 82.23 & (F\({}_{1}\)) \\ Machine translation & 5.86 & 6.87 & 7.64 & 7.85 & **7.98** & (bleu) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiment results on POS tagging, syntactic parsing and machine translation based on different segmentation granularity levels. For comparison purposes, we convert POS tagging results into Level 1 and syntactic parsing results into Level 5. Translation direction is Korean into English.
different segmentation granularity. We use multi-bleu.perl provided by Moses (Koehn et al., 2007) to evaluate the translation results.4
Footnote 4: [https://github.com/moses-smt/mosesdecoder](https://github.com/moses-smt/mosesdecoder)
All results based on the different segmentation granularity levels are reported in Table 2. The interpretation of the segmentation results is straightforward: no tokenization is required in Level 1, and progressively more tokenization is required up to Level 5. From POS tagging to MT, we provide a gold-segmented sequence to evaluate each task. The POS tagging results indicate that the fully morpheme-based analysis (Level 5) outperforms the other granularity levels, which conforms to previous results on morphological analysis and POS tagging for Korean (Park and Tyers, 2019). As we described, the fine-grained granularity that separates all morphemes (Level 5) has been utilized for downstream applications such as machine translation for Korean, and it shows the best performance in the BLEU score (Papineni et al., 2002). Whereas phrase structure parsing has also used the fully morpheme-separated representation (Level 5) as input in previous parsing systems (Choi et al., 2012; Park et al., 2016; Kim and Park, 2022), the granularity that separates only functional morphemes, including case markers and verbal endings, while keeping other affixes used for morphological derivation (Level 4) outperforms Level 5. Modern statistical parsers have used markovization annotation of non-terminal nodes to elaborate context-free grammar rules for parsing, either using manual heuristics (Johnson, 1998; Klein and Manning, 2003) or machine learning techniques (Petrov et al., 2006; Petrov and Klein, 2007). Parsing performance in these statistical parsers is directly related to the size and the quality of the CFG rules generated by such annotation schemes of non-terminal nodes. The other explanation for Level 4's parsing performance is the linguistic soundness of its segmentation of the word, in which the immediate non-terminal nodes represent the actual part-of-speech information of the word together with its adjoined functional morphemes. Linguistic information of this kind might help to improve the representation of the treebank grammar that is implied by the parsing system.
## Conclusion
This study addresses word segmentation granularity for Korean language processing. There are multiple possible word segmentation granularity levels in Korean, ranging from the word to individual morphemes, and for specific language processing and annotation tasks, several different granularity levels have been proposed and developed. This is because agglutinative languages, including Korean, can have a one-to-one mapping between a functional morpheme and a syntactic category, even though the annotation guidelines for Universal Dependencies typically regard the basic unit of dependency annotation as a syntactic word. We have presented five different levels of segmentation granularity in Korean, and we have analyzed and compared these levels of granularity using Korean language processing applications. Previous work on Korean language processing has not explicitly mentioned which level of segmentation granularity is used, and this makes it difficult to properly compare results between systems. As described, these different levels of segmentation granularity exist mainly because various Korean treebanks represent their syntactic structure differently. These treebanks also use different segmentations of words depending on their linguistic and computational requirements. While a certain segmentation granularity may be well suited to some linguistic phenomena or applications, we need to find the segmentation granularity level that best fits our requirements and expectations for Korean language processing.
## Acknowledgement
The work started when Jungyeul Park was at the University of Arizona during 2016-2017. The authors thank Mike Hammond, Francis Tyers and Shane Steinert-Threlkeld for their discussion on the earlier version of this manuscript, and Eric VanLieshout for proofreading. The authors also thank anonymous reviewers who have generously provided valuable feedback. |
2309.16450 | New Perspectives on Torsional Rigidity and Polynomial Approximations of
z-bar | We consider polynomial approximations of z-bar to better understand the
torsional rigidity of polygons. Our main focus is on low degree approximations
and associated extremal problems that are analogous to Polya's conjecture for
torsional rigidity of polygons. We also present some numerics in support of
Polya's Conjecture on the torsional rigidity of pentagons. | Adam Kraus, Brian Simanek | 2023-09-28T13:59:10Z | http://arxiv.org/abs/2309.16450v1 | # New perspectives on torsional rigidity and polynomial approximations of Z-bar
###### Abstract.
We consider polynomial approximations of \(\bar{z}\) to better understand the torsional rigidity of polygons. Our main focus is on low degree approximations and associated extremal problems that are analogous to Polya's conjecture for torsional rigidity of polygons. We also present some numerics in support of Polya's Conjecture on the torsional rigidity of pentagons.
**Keywords:** Torsional Rigidity, Bergman Analytic Content, Symmetrization
**Mathematics Subject Classification:** Primary 41A10; Secondary 31A35, 74P10
## 1. Introduction
Let \(\Omega\subseteq\mathbb{C}\) be a bounded and simply connected region whose boundary is a Jordan curve. We will study the _torsional rigidity_ of \(\Omega\) (denoted \(\rho(\Omega)\)), which is motivated by engineering problems about a cylindrical beam with cross-section \(\Omega\). One can formulate this quantity mathematically for simply connected regions by the following variational formula of Hadamard type
\[\rho(\Omega):=\sup_{u\in C_{0}^{1}(\Omega)}\frac{4\left(\int_{\Omega}u(z)dA( z)\right)^{2}}{\int_{\Omega}|\nabla u(z)|^{2}dA(z)}, \tag{1}\]
where \(dA\) denotes area measure on \(\Omega\) and \(C_{0}^{1}(\bar{\Omega})\) denotes the set of all continuously differentiable functions on \(\Omega\) that vanish on the boundary of \(\Omega\) (see [16] and also [2, 12]). The following basic facts are well known and easy to verify:
* for any \(c\in\mathbb{C}\), \(\rho(\Omega+c)=\rho(\Omega)\),
* for any \(r\in\mathbb{C}\), \(\rho(r\Omega)=|r|^{4}\rho(\Omega)\),
* if \(\Omega_{1}\) and \(\Omega_{2}\) are simply connected and \(\Omega_{1}\subseteq\Omega_{2}\), then \(\rho(\Omega_{1})\leq\rho(\Omega_{2})\),
* if \(\mathbb{D}=\{z:|z|<1\}\), then \(\rho(\mathbb{D})=\pi/2\).
There are many methods one can use to estimate the torsional rigidity of the region \(\Omega\) (see [13, 17]). For example, one can use the Taylor coefficients for a conformal bijection between the unit disk and the region (see [16, pages 115 & 120] and [17, Section 81]), the Dirichlet spectrum for the region (see [16, page 106]), or the expected lifetime of a Brownian Motion (see [1, Equations 1.8 and 1.11] and [9]). These methods are difficult to apply in general because the necessary information is rarely available.
More recently, Lundberg et al. proved that since \(\Omega\) is simply connected, it holds that
\[\rho(\Omega)=\inf_{f\in A^{2}(\Omega)}\int_{\Omega}|\bar{z}-f|^{2}dA(z), \tag{2}\]
where \(A^{2}(\Omega)\subseteq L^{2}(\Omega,dA)\) is the Bergman space of \(\Omega\) (see [7]). The right-hand side of (2) is the square of the Bergman analytic content of \(\Omega\), which is the distance from \(\bar{z}\) to
\(A^{2}(\Omega)\) in \(L^{2}(\Omega,dA)\). This formula was subsequently used extensively in [8] to calculate the approximate torsional rigidity of various regions. To understand their calculations, let \(\{p_{n}\}_{n=0}^{\infty}\) be the sequence of Bergman orthonormal polynomials, which are orthonormal in \(A^{2}(\Omega)\). By [5, Theorem 2] we know that \(\{p_{n}(z;\Omega)\}_{n\geq 0}\) is an orthonormal basis for \(A^{2}(\Omega)\) (because \(\Omega\) is a Jordan domain) and hence
\[\rho(\Omega)=\int_{\Omega}|z|^{2}dA-\sum_{n=0}^{\infty}|\langle 1,wp_{n}(w) \rangle|^{2}\]
(see [8]). Thus, one can approximate \(\rho(\Omega)\) by calculating
\[\rho_{N}(\Omega):=\int_{\Omega}|z|^{2}dA-\sum_{n=0}^{N}|\langle 1,wp_{n}(w) \rangle|^{2}\]
for some finite \(N\in\mathbb{N}\). Let us use \(\mathcal{P}_{n}\) to denote the space of polynomials of degree at most \(n\). Notice that \(\rho_{N}(\Omega)\) is the square of the distance from \(\bar{z}\) to \(\mathcal{P}_{N}\) in \(L^{2}(\Omega,dA)\). For this reason, and in analogy with the terminology of Bergman analytic content, we shall say that \(\operatorname{dist}(\bar{z},\mathcal{P}_{N})\) is the _Bergman \(N\)-polynomial content_ of the region \(\Omega\). The calculation of \(\rho_{n}(\Omega)\) is a manageable task in many applications, as was demonstrated in [8]. One very useful fact is that \(\rho_{N}(\Omega)\geq\rho(\Omega)\), so these approximations are always overestimates (a fact that was also exploited in [8]).
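To make the definition concrete, the following Python sketch (ours, not code from [8]) estimates the squared Bergman \(N\)-polynomial content by Monte Carlo sampling and a complex least-squares fit rather than by the orthonormal-polynomial recursion; on the unit disk, where \(\bar{z}\) is orthogonal to every analytic polynomial, each estimate should come out near \(\pi/2\).

```python
import numpy as np

def rho_N_estimate(z_samples, area, N):
    """Squared L^2(Omega, dA) distance from conj(z) to polynomials of degree <= N,
    estimated from points drawn uniformly from Omega."""
    z = np.asarray(z_samples)
    A = np.vander(z, N + 1, increasing=True)      # columns 1, z, ..., z^N
    target = np.conj(z)
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    residual = target - A @ coeffs
    return area * np.mean(np.abs(residual) ** 2)  # Monte Carlo quadrature

# Uniform samples in the unit disk via rejection sampling.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200000, 2))
pts = pts[(pts ** 2).sum(axis=1) < 1]
z = pts[:, 0] + 1j * pts[:, 1]

for N in (0, 1, 3, 5):
    print(N, rho_N_estimate(z, np.pi, N))         # each value is close to pi/2
```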
Much of the research around torsional rigidity of planar domains focuses on extremal problems and the search for maximizers under various constraints. For example, Saint-Venant conjectured that among all simply connected Jordan regions with area \(1\), the disk has the largest torsional rigidity. This conjecture has since been proven and is now known as Saint-Venant's inequality (see [15], [16, page 121], and also [2, 14]). It has been conjectured that the \(n\)-gon with area \(1\) having maximal torsional rigidity is the regular \(n\)-gon (see [15]). This conjecture remains unproven for \(n\geq 5\). It was also conjectured in [8] that among all right triangles with area \(1\), the one that maximizes torsional rigidity is the isosceles right triangle. This was later proven in a more general form by Solynin in [18]. Additional results related to optimization of torsional rigidity can be found in [11, 19, 20].
The formula (2) tells us that maximizing \(\rho\) within a certain class of Jordan domains means finding a domain on which \(\bar{z}\) is not well approximable by analytic functions (see [6]). This suggests that the Schwarz function of a curve is a relevant object. For example, on the real line, \(f(z)=z\) satisfies \(f(z)=\bar{z}\) and hence we can expect that any region that is always very close to the real line will have small torsional rigidity. Similar reasoning can be applied to other examples and one can interpret (2) as a statement relating the torsional rigidity of \(\Omega\) to a similarity between \(\Omega\) and an analytic curve with a Schwarz function. Some of the results from [12] are easily understood by this reasoning.
The quantities \(\rho_{N}\), defined above, suggest an entirely new class of extremal problems related to torsional rigidity. In this vein, we formulate the following conjecture, which generalizes Polya's conjecture:
**Conjecture 1.1**.: For any \(n,N\in\mathbb{N}\) with \(n\geq 3\), the convex \(n\)-gon of area \(1\) that maximizes the Bergman \(N\)-polynomial content is the regular \(n\)-gon.
We will see by example why we need to include convexity in the hypotheses of this conjecture (see Theorem 2.2 below).
The most common approach to proving conjectures of the kind we have mentioned is through symmetrization. Indeed, one can prove Polya's Conjecture and the St. Venant Conjecture through the use of Steiner symmetrization. This process chooses a line \(\ell\) and then replaces the intersection of \(\Omega\) with every perpendicular \(\ell^{\prime}\) to \(\ell\) by a line segment contained in \(\ell^{\prime}\), centered on \(\ell\), and having length equal to the 1-dimensional Lebesgue measure of \(\ell^{\prime}\cap\Omega\). This procedure results in a new region \(\Omega^{\prime}\) with \(\rho(\Omega^{\prime})\geq\rho(\Omega)\). Applications of this method and other symmetrization methods to torsional rigidity can be found in [18].
The rest of the paper presents significant evidence in support of Conjecture 1.1 and also the Polya Conjecture for \(n=5\). The next section will explain the reasoning behind Conjecture 1.1 by showing that many optimizers of \(\rho_{N}\) exhibit as much symmetry as possible, though we will see that Steiner symmetrization does not affect \(\rho_{N}\) the same way it affects \(\rho\). In Section 3, we will present numerical evidence in support of Polya's Conjecture for pentagons by showing that among all equilateral pentagons with area 1, the one with maximal torsional rigidity must be very nearly the regular one.
## 2. New Conjectures and Results
Let \(\Omega\) be a simply connected Jordan region in the complex plane (or the \(xy\)-plane). Our first conjecture asserts that there is an important difference between the Bergman analytic content and the Bergman \(N\)-polynomial content. We state it as follows.
**Conjecture 2.1**.: For each \(N\in\mathbb{N}\), there is an \(n\in\mathbb{N}\) so that among all \(n\)-gons with area 1, \(\rho_{N}\) has no maximizer.
We will provide evidence for this conjecture by proving the following theorem, which shows why we included the convexity assumption in Conjecture 1.1.
**Theorem 2.2**.: _Among all hexagons with area \(1\), \(\rho_{1}\) and \(\rho_{2}\) have no maximizer._
Before we prove this result, let us recall some notation. We define the moments of area for \(\Omega\) as in [8] by
\[I_{m,n}:=\int_{\Omega}x^{m}y^{n}dxdy,\hskip 56.905512ptm,n\in\mathbb{N}_{0}.\]
In [8] it was shown that if the centroid of \(\Omega\) is 0, then
\[\rho_{1}(\Omega)=4\frac{I_{2,0}I_{0,2}-I_{1,1}^{2}}{I_{2,0}+I_{0,2}} \tag{3}\]
(see also [4]).
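The following short sketch (an illustration we add here, not code from [8]) evaluates formula (3) numerically from Monte Carlo estimates of the centered moments of area; the unit-area equilateral triangle is only an example region.

```python
import numpy as np

def rho_1_from_samples(x, y, area):
    # Center so that the centroid is at the origin, as formula (3) requires.
    x, y = x - x.mean(), y - y.mean()
    I20, I02, I11 = area * np.mean(x**2), area * np.mean(y**2), area * np.mean(x*y)
    return 4 * (I20 * I02 - I11**2) / (I20 + I02)

rng = np.random.default_rng(1)
side = np.sqrt(4 / np.sqrt(3))                      # equilateral triangle of area 1
A, B, C = np.array([0.0, 0.0]), np.array([side, 0.0]), np.array([side/2, side*np.sqrt(3)/2])

# Uniform samples inside the triangle via reflected barycentric coordinates.
u, v = rng.random(200000), rng.random(200000)
flip = u + v > 1
u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
pts = A + np.outer(u, B - A) + np.outer(v, C - A)

print(rho_1_from_samples(pts[:, 0], pts[:, 1], area=1.0))
```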
One can write down a similar formula for \(\rho_{2}(\Omega)\), which is the content of the following proposition.
**Proposition 2.3**.: _Let \(\Omega\) be a simply connected, bounded region of area 1 in \(\mathbb{C}\) whose centroid is at the origin. Then_
\[\rho_{2}(\Omega)=4\Big{(}I_{0,4}I_{1,1}^{2}-4I_{1,1}^{4}-2I_{0,3}I_{1,1}I_{1, 2}+I_{0,2}^{3}I_{2,0}+I_{0,3}^{2}I_{2,0}+4I_{1,2}^{2}I_{2,0}-I_{1,1}^{2}I_{2,0 }^{2}-I_{0,2}^{2}(I_{1,1}^{2}+2I_{2,0}^{2})-6I_{1,1}I_{1,2}I_{2,1}-2I_{0,3}I_{ 2,0}I_{2,1}+I_{2,0}I_{2,1}^{2}+2I_{1,1}^{2}I_{2,2}+2I_{0,3}I_{1,1}I_{3,0}-2I_{1,1}I_{2,1}I_{3,0}+I_{0,2}(4I_{2,1}^{2}+(I_{1,2}-I_{3,0})^{2}+I_{2,0}(-I_{0,4}+ 6I_{1,1}^{2}+I_{2,0}^{2}-2I_{2,2}-I_{4,0}))+I_{1,1}^{2}I_{4,0}\Big{)}\Big{/} \Big{(}(I_{0,3}+I_{2,1})^{2}+(I_{1,2}+I_{3,0})^{2}+(I_{0,2}+I_{0,4}+4I_{1,1}^{2 }+(I_{0,2}-I_{2,0})^{2}-2I_{2,2}-I_{4,0})\Big{)}\]
Proof.: It has been shown in [8] that
\[|\langle 1,wp_{2}(w)\rangle|^{2}=\begin{vmatrix}c_{0,0}&c_{0,1}&c_{0,2}\\ c_{1,0}&c_{1,1}&c_{1,2}\\ c_{0,1}&c_{0,2}&c_{0,3}\end{vmatrix}\bigg{/}\left(\begin{vmatrix}c_{0,0}&c_{1, 0}\\ c_{0,1}&c_{1,1}\end{vmatrix}\cdot\begin{vmatrix}c_{0,0}&c_{0,1}&c_{0,2}\\ c_{1,0}&c_{1,1}&c_{1,2}\\ c_{2,0}&c_{2,1}&c_{2,2}\end{vmatrix}\right) \tag{4}\]
where
\[c_{m,n}=\langle z^{m},z^{n}\rangle=\int_{\Omega}z^{m}\bar{z}^{n}dA(z)\]
We can then write
\[\rho_{2}(\Omega) =\rho_{1}(\Omega)-|\langle 1,wp_{2}(w;\Omega)\rangle|^{2}\] \[=4\frac{I_{2,0}I_{0,2}-I_{1,1}^{2}}{I_{2,0}+I_{0,2}}-|\langle 1,wp _{2}(w;\Omega)\rangle|^{2}\]
If one calculates \(|\langle 1,wp_{2}(w;\Omega)\rangle|^{2}\) using (4), one obtains the desired formula for \(\rho_{2}(\Omega)\).
Proof of Theorem 2.2.: We will rely on the formula (3) in our calculations. To begin, fix \(a>0\) and construct a triangle with vertices \((-\epsilon,0)\), \((\epsilon/2,\frac{\epsilon\sqrt{3}}{2})\), and \((-\epsilon/2,\frac{\epsilon\sqrt{3}}{2})\), where \(\epsilon=\frac{2}{3a\sqrt{3}}\). Consider also the set of points \(S=\{(-a/2,\frac{a\sqrt{3}}{2}),(a,0),(-a/2,-\frac{a\sqrt{3}}{2})\}\). To each side of our triangle, append another triangle whose third vertex is in the set \(S\), as shown in Figure 1. Let this resulting "windmill" shaped region be denoted by \(\Gamma_{a}\).
To calculate the moments of area, we first determine the equations of the lines that form the boundary of this region. Starting with \(C_{1}(x,a)\) in the 3rd quadrant and moving clockwise
Figure 1. _The region \(\Gamma_{a}\) from the proof of Theorem 2.2._
we have:
\[C_{1}(x,a) =x\sqrt{3}-\frac{6(2x+a)}{2\sqrt{3}+9a^{2}}\] \[C_{2}(x,a) =\frac{3a(2+3ax\sqrt{3})}{-4\sqrt{3}+9a^{2}}\] \[C_{3}(x,a) =\frac{3a(2+3ax\sqrt{3})}{4\sqrt{3}-9a^{2}}\] \[C_{4}(x,a) =-x\sqrt{3}+\frac{6(2x+a)}{2\sqrt{3}+9a^{2}}\] \[C_{5}(x,a) =\frac{3(x-a)}{\sqrt{3}-9a^{2}}\] \[C_{6}(x,a) =\frac{-3(x-a)}{\sqrt{3}-9a^{2}}\]
To determine \(\rho_{1}(\Gamma_{a})\), we calculate the terms \(I_{2,0},I_{0,2}\), and \(I_{1,1}\) with boundaries determined by the lines given above. Thus for \(m,n\in\{0,1,2\}\) we have
\[I_{m,n}(\Gamma_{a}) =\int_{-\epsilon}^{\epsilon/2}\int_{C_{1}}^{C_{4}}x^{m}y^{n}dydx+ \int_{\epsilon/2}^{a}\int_{C_{6}}^{C_{5}}x^{m}y^{n}dydx+\int_{-a/2}^{-\epsilon }\int_{C_{3}}^{C_{4}}x^{m}y^{n}dydx\] \[\qquad\qquad+\int_{-a/2}^{-\epsilon}\int_{C_{1}}^{C_{2}}x^{m}y^{n }dydx\]
These are straightforward double integrals and after some simplification, we obtain
\[\rho_{1}(\Gamma_{a})=4\frac{I_{2,0}I_{0,2}-I_{1,1}^{2}}{I_{2,0}+I_{0,2}}=\frac {1}{162}\left(3\sqrt{3}+\frac{4}{a^{2}}+27a^{2}\right)\]
and
\[\rho_{2}(\Gamma_{a})=\frac{1}{1620}\left(3\sqrt{3}+\frac{4}{a^{2}}+27a^{2}(1+ 90/(27a^{4}-6\sqrt{3}a^{2}+4))\right),\]
where we used the formula from Proposition 2.3 to calculate this last expression. Notice that we have constructed \(\Gamma_{a}\) so that the area of \(\Gamma_{a}\) is \(1\) for all \(a>0\). Thus, as \(a\to\infty\), it holds that \(\rho_{j}(\Gamma_{a})\to\infty\) for \(j=1,2\).
Theorem 2.2 has an important corollary, which highlights how the optimization of \(\rho_{N}\) is fundamentally different than the optimization of \(\rho\).
**Corollary 2.4**.: _Steiner symmetrization need not increase \(\rho_{1}\) or \(\rho_{2}\)._
Proof.: If we again consider the region \(\Gamma_{a}\) from the proof of Theorem 2.2, we see that if we Steiner symmetrize this region with respect to the real axis, then the symmetrized version is a thin region that barely deviates from the real axis (as \(a\) becomes large). Thus, \(\bar{z}\) is approximately equal to \(z\) in this region and one can show that \(\rho_{1}\) of the symmetrized region remains bounded as \(a\to\infty\). Since \(\rho_{2}\leq\rho_{1}\), the same holds true for \(\rho_{2}\).
Our next several theorems will be about triangles. For convenience, we state the following basic result, which can be verified by direct computation.
**Proposition 2.5**.: _Let \(\Delta\) be the triangle with vertices \((x_{1},y_{1}),(x_{2},y_{2}),(x_{3},y_{3})\) and centroid \(\vec{c}\). Then_
\[\vec{c}=\left(\frac{x_{1}+x_{2}+x_{3}}{3},\frac{y_{1}+y_{2}+y_{3}}{3}\right)\]
For the following results, we define the _base_ of an isosceles triangle as the side whose adjacent sides have equal length to each other. In the case of an equilateral triangle, any side may be considered as the base.
Here is our first open problem about maximizing \(\rho_{N}\) for certain fixed collections of triangles.
**Problem 2.6**.: For each \(N\in\mathbb{N}\) and \(a>0\), find the triangle with area 1 and fixed side length \(a\) that maximizes \(\rho_{N}\).
Given the prevalence of symmetry in the solution to optimization problems, one might be tempted to conjecture that the solution to Problem 2.6 is the isosceles triangle with base \(a\). This turns out to be true for \(\rho_{1}\), but it is only true for \(\rho_{2}\) for some values of \(a\). Indeed, we have the following result.
**Theorem 2.7**.: _(i) Among all triangles with area 1 and a fixed side of length \(a\), the isosceles triangle with base \(a\) maximizes \(\rho_{1}\)._
_(ii) Let \(t_{*}\) be the unique positive root of the polynomial_
\[999x^{4}/64-93x^{3}-664x^{2}-5376x-9216.\]
_If \(0<a\leq t_{*}^{1/4}\), then among all triangles with area 1 and a fixed side of length \(a\), the isosceles triangle with base \(a\) maximizes \(\rho_{2}\). If \(a>t_{*}^{1/4}\), then among all triangles with area 1 and a fixed side of length \(a\), the isosceles triangle with base \(a\) does not maximize \(\rho_{2}\)._
Proof.: Let \(\hat{\Omega}\) be an area-normalized triangle with fixed side length \(a\).
We begin by proving part (i). As \(\rho_{1}\) is rotationally invariant, we may position \(\hat{\Omega}\) so that the side of fixed length is parallel to the \(y\)-axis as seen in Figure 2. Denote vertex \(\hat{A}\) as the origin, \(\hat{B}\) as the point \((0,a)\), and \(\hat{C}\) as the point \((-2/a,\lambda)\).
Notice as \(\lambda\) varies, the vertex \(\hat{C}\) stays on the line \(x=-\frac{2}{a}\) in order to preserve area-normalization. If we define
\[T_{x}:=\int_{\hat{\Omega}}xdA\hskip 42.679134pt\text{and}\hskip 42.679134ptT_{y}:= \int_{\hat{\Omega}}ydA \tag{5}\]
Figure 2. _An area-normalized triangle \(\hat{\Omega}\) with variable \(\lambda\) and fixed base length \(a\)._
then we may translate our triangle to obtain a new triangle \(\Omega\) with vertices \(A\), \(B\), and \(C\) given by
\[A =(-T_{x},-T_{y})\] \[B =(-T_{x},a-T_{y})\] \[C =\left(-\frac{2}{a}-T_{x},\lambda-T_{y}\right)\]
which has centroid zero (see Figure 3).
If we define
\[\ell_{1}=\lambda-\frac{\lambda a}{2}\left(x+\frac{2}{a}\right),\qquad\qquad \ell_{2}=\lambda+\frac{a^{2}-a\lambda}{2}\left(x+\frac{2}{a}\right),\]
by recalling our formula for the moments of area, we have
\[I_{m,n}(\Omega)=\int_{-\frac{2}{a}-T_{x}}^{-T_{x}}\int_{\ell_{1}}^{\ell_{2}}x^ {m}y^{n}dydx\]
We can now calculate \(\rho_{1}\) using (3) to obtain
\[\rho_{1}(\Omega)=\frac{2a^{2}}{3(4+a^{2}(a^{2}-a\lambda+\lambda^{2}))} \tag{6}\]
By taking the first derivative with respect to \(\lambda\) of equation (6) we obtain
\[\frac{d}{d\lambda}\left[\rho_{1}(\Omega)\right]=\frac{2a^{4}(a-2\lambda)}{3(4 +a^{2}(a^{2}-a\lambda+\lambda^{2}))^{2}}\]
Thus, the only critical point is when \(\lambda=\frac{a}{2}\), when the \(y\)-coordinate of the vertex \(\hat{C}\) is at the midpoint of our base.
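This critical-point computation is easy to check symbolically; the sketch below is an addition of ours and not part of the original argument.

```python
import sympy as sp

a, lam = sp.symbols("a lambda", positive=True)
rho1 = 2 * a**2 / (3 * (4 + a**2 * (a**2 - a*lam + lam**2)))   # formula (6)

print(sp.solve(sp.diff(rho1, lam), lam))                        # [a/2]
second = sp.simplify(sp.diff(rho1, lam, 2).subs(lam, a / 2))
print(second)   # negative for every a > 0, so lambda = a/2 is a maximum
```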
To prove part (ii), we employ the same method, but use the formula from Proposition 2.3. After a lengthy calculation, we find a formula
\[\frac{d}{d\lambda}\left[\rho_{2}(\Omega)\right]=\frac{P(\lambda)}{Q(\lambda)}\]
Figure 3. _The area-normalized triangle \(\Omega\) as pictured is a translation of \(\hat{\Omega}\), with variable \(\lambda\), fixed base length \(a\), and whose centroid is the origin._
for explicit polynomials \(P\) and \(Q\). The polynomial \(Q\) is always positive, so we will ignore that when finding critical points. By inspection, we find that we can write
\[P(\lambda)=(\lambda-a/2)S(\lambda),\]
where \(S(\lambda+a/2)\) is an even polynomial. Again by inspection, we find that every coefficient of \(S(\lambda+a/2)\) is negative, except the constant term, which is
\[999a^{20}/64-93a^{16}-664a^{12}-5376a^{8}-9216a^{4}.\]
Thus, if \(0<a<t_{*}^{1/4}\), then this coefficient is also negative and therefore \(S(\lambda+a/2)\) does not have any positive zeros (and therefore does not have any real zeros since it is an even function of \(\lambda\)). If \(a>t_{*}^{1/4}\), then \(S(\lambda+a/2)\) does have a unique positive zero and it is easy to see that it is a local maximum of \(\rho_{2}\) (and the zero of \(P\) at \(a/2\) is a local minimum of \(\rho_{2}\)).
We remark that the number \(t_{*}^{1/4}\) from Theorem 2.7 is approximately \(1.86637\ldots\) and \(\sqrt{3}=1.73205\ldots\), so Theorem 2.7 does not disprove Conjecture 1.1. In Figure 4, we have plotted \(\rho_{2}\) as a function of \(\lambda\) when \(a=1\). We see the maximum is attained when \(\lambda=1/2\). In Figure 5, we have plotted \(\rho_{2}\) as a function of \(\lambda\) when \(a=3\). We see that \(\lambda=3/2\) is a local minimum and the maximum is actually attained when \(\lambda=3/2\pm 0.86508\ldots\).
Figure 4. \(\rho_{2}\) of a triangle with area \(1\) and fixed side length \(1\).
We can now prove the following corollary, which should be interpreted in contrast to Corollary 2.4.
**Corollary 2.8**.: _Given an arbitrary triangle of area 1, Steiner symmetrization performed parallel to one of the sides increases \(\rho_{1}\). Consequently, the equilateral triangle is the unique maximum for \(\rho_{1}\) among all triangles of fixed area._
Proof.: We saw in the proof of Theorem 2.7 that if a triangle has any two sides not equal, then we may transform it in a way that increases \(\rho_{1}\). The desired result now follows from the existence of a triangle that maximizes \(\rho_{1}\).
We can also consider a related problem of maximizing \(\rho_{N}\) among all triangles with one fixed angle. To this end, we formulate the following conjecture, which is analogous to Problem 2.6. If true, it would be an analog of results in [18] for the Bergman \(N\)-polynomial content.
**Conjecture 2.9**.: For an \(N\in\mathbb{N}\) and \(\theta\in(0,\pi)\), the triangle with area 1 and fixed interior angle \(\theta\) that maximizes \(\rho_{N}\) is the isosceles triangle with area 1 and interior angle \(\theta\) opposite the base.
The following theorem provides strong evidence that Conjecture 2.9 is true.
**Theorem 2.10**.: _Among all triangles with area 1 and fixed interior angle \(\theta\), the isosceles triangle with interior angle \(\theta\) opposite the base maximizes \(\rho_{1}\) and \(\rho_{2}\)._
Proof.: Let \(\Omega\) be an area-normalized triangle with fixed interior angle \(\theta\), centroid zero, and side length \(a\) adjacent to our angle \(\theta\). As \(\rho_{N}\) is rotationally invariant, let us position \(\Omega\) so that the side of length \(a\) runs parallel to the \(x\)-axis. First, let us consider the triangle \(\hat{\Omega}\) which is a translation of \(\Omega\), so that the corner of \(\hat{\Omega}\) with angle \(\theta\), say vertex \(A\), lies at the origin. Define \((T_{x},T_{y})\) as in (5). By translating the entire region \(\hat{\Omega}\) by its centroid we attain the previously described region \(\Omega\), now with centroid zero, as pictured in Figure 6.
Figure 5. \(\rho_{2}\) of a triangle with area \(1\) and fixed side length \(3\).
Our region \(\Omega\) is now a triangle with centroid \(0\) having vertices \(A\), \(B\), and \(C\) given by
\[A =(-T_{x},-T_{y})\] \[B =(-a-T_{x},-T_{y})\] \[C =\left(\frac{-2}{a\tan\theta}-T_{x},\frac{2}{a}-T_{y}\right)\]
We can now use (3) to calculate
\[\rho_{1}(\Omega)=\frac{2a^{2}}{3a^{4}-6a^{2}\cot\theta+12\csc^{2}\theta} \tag{7}\]
By taking the first derivative with respect to \(a\) of equation (7) we obtain
\[\frac{d}{da}\left[\rho_{1}(\Omega)\right]=\frac{-4a(a^{4}-4\csc^{2}\theta)}{3( a^{4}-2a^{2}\cot\theta+4\csc^{2}\theta)^{2}}\]
Thus, the only critical point of \(\rho_{1}\) is \(a=\sqrt{2\csc\theta}\) and this point is a local maximum. We conclude our proof by observing that \(a=\sqrt{2\csc\theta}\) is the side length of the area-normalized isosceles triangle with interior angle \(\theta\) opposite the base.
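As with Theorem 2.7, this critical point can be checked symbolically; the following sketch is our addition and not part of the original proof.

```python
import sympy as sp

a, theta = sp.symbols("a theta", positive=True)
rho1 = 2 * a**2 / (3*a**4 - 6*a**2*sp.cot(theta) + 12/sp.sin(theta)**2)  # formula (7)

a_star = sp.sqrt(2 / sp.sin(theta))      # side length of the isosceles triangle
print(sp.simplify(sp.diff(rho1, a).subs(a, a_star)))   # 0, so a_star is a critical point
```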
The calculation for \(\rho_{2}\) follows the same basic strategy, albeit handling lengthier calculations. In this case, we calculate
\[\frac{d}{da}\left[\rho_{2}(\Omega)\right]=\frac{Q_{\theta}(a)}{P_{\theta}(a)}\]
for explicitly computable functions \(Q_{\theta}\) and \(P_{\theta}\), which are polynomials in \(a\) and have coefficients that depend on \(\theta\). The function \(P_{\theta}(a)\) is positive for all \(a>0\), so the zeros will be the zeros of \(Q_{\theta}\). One can see by inspection that \(Q_{\theta}(\sqrt{2\csc\theta})=0\), so let us consider
\[S_{\theta}(a)=Q_{\theta}(a+\sqrt{2\csc\theta}).\]
Then \(S(0)=0\) and there is an obvious symmetry to these triangles that shows the remaining real zeros of \(S\) must come in pairs with a positive zero corresponding to a negative one. Thus, it suffices to rule out any positive zeros of \(S_{\theta}\). This is done with Descartes' Rule of Signs, once we notice that all coefficients of \(S_{\theta}\) are negative. For example, one finds that the coefficient of \(a^{7}\) in \(S_{\theta}(a)\) is equal to
\[-12288\csc^{7}\theta\left(140\cos(4\theta)-3217\cos(3\theta)+25010\cos(2\theta)-82016\cos(\theta)+70136\right)\]
Figure 6. _A triangle \(\Omega\) with fixed angle \(\theta\), variable side length \(a\), area 1, and centroid zero._
One can plot this function to verify that it is indeed negative for all \(\theta\in[0,\pi]\). Similar elementary calculations can be done with all the other coefficients in the formula for \(S_{\theta}(a)\), but they are too numerous and lengthy to present here.
The end result is the conclusion that \(a=\sqrt{2\csc\theta}\) is the unique positive critical point of \(\rho_{2}\) and hence must be the global maximum, as desired.
We can now prove the following result, which is a natural follow-up to Corollary 2.8.
**Corollary 2.11**.: _The equilateral triangle is the unique maximum for \(\rho_{2}\) among all triangles of fixed area._
Proof.: We saw in the proof of Theorem 2.10 that if a triangle has any two sides not equal, then we may transform it in a way that increases \(\rho_{2}\). The desired result now follows from the existence of a triangle that maximizes \(\rho_{2}\).
The same proof shows that Corollary 2.8 is also a corollary of Theorem 2.10.
## 3. Numerics on Torsional Rigidity for Pentagons
Here we present numerical evidence in support of Polya's conjecture for pentagons. In particular, we will consider only equilateral pentagons and show that in this class, the maximizer of torsional rigidity must be very close to the regular pentagon (see [3] for another computational approach to a similar problem). Our first task is to show that to every \(\theta,\phi\in(0,\pi)\) satisfying
\[(1-\cos(\theta)-\cos(\phi))^{2}+ (\sin(\theta)-\sin(\phi))^{2}\leq 4,\] \[\cos(\theta)\leq 1-\cos(\phi),\]
there exists a unique equilateral pentagon of area \(1\) with adjacent interior angles \(\theta\) and \(\phi\) (where the uniqueness is interpreted modulo rotation, translation, and reflection).
To see this, construct a pentagon with one side being the interval \([0,1]\) in the real axis. Form two adjacent sides of length \(1\) with interior angles \(\phi\) and \(\theta\) by choosing vertices \(V_{1}=(\cos(\theta),\sin(\theta))\) and \(V_{2}=(1-\cos(\phi),\sin(\phi))\). Our conditions imply that \(V_{1}\) lies to the left
Figure 7. A pentagon constructed with vertices \(V_{1}\), \(V_{2}\), and interior angles \(\theta\), \(\phi\) as described below. There is exactly one point on the perpendicular bisector of \(\overline{V_{1}V_{2}}\) for which our pentagon is equilateral.
of \(V_{2}\) and the distance between \(V_{1}\) and \(V_{2}\) is less than or equal to \(2\). Thus, if we join each of \(V_{1}\) and \(V_{2}\) to an appropriate point on the perpendicular bisector of the segment \(\overline{V_{1}V_{2}}\), we complete our equilateral pentagon with adjacent angles \(\theta\) and \(\phi\) (see Figure 7). Obtaining the desired area is now just a matter of rescaling.
Using this construction, one can write down the coordinates of all five vertices, which are simple (but lengthy) formulas involving basic trigonometric functions in \(\theta\) and \(\phi\). It is then a simple matter to compute a double integral and calculate the area of the resulting pentagon, rescale by the appropriate factor and thus obtain an equilateral pentagon with area \(1\) and the desired adjacent internal angles. One can then compute \(\rho_{N}\) for arbitrary \(N\in\mathbb{N}\) using the method of [8] to estimate the torsional rigidity of such a pentagon.
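The following sketch (ours, not the authors' code) carries out this construction numerically: it places the base on \([0,1]\), builds \(V_{1}\), \(V_{2}\) and the remaining vertex on the perpendicular bisector, and rescales the pentagon to area 1.

```python
import numpy as np

def equilateral_pentagon(theta, phi):
    """theta, phi in radians; returns five vertices of a unit-area equilateral pentagon."""
    p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    v1 = np.array([np.cos(theta), np.sin(theta)])        # unit side adjacent to p0
    v2 = np.array([1.0 - np.cos(phi), np.sin(phi)])      # unit side adjacent to p1
    d = np.linalg.norm(v2 - v1)
    normal = np.array([-(v2 - v1)[1], (v2 - v1)[0]]) / d  # points away from the base
    apex = 0.5 * (v1 + v2) + np.sqrt(1.0 - (d / 2)**2) * normal
    verts = np.array([p0, p1, v2, apex, v1])
    x, y = verts[:, 0], verts[:, 1]                       # shoelace area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return verts / np.sqrt(area)                          # rescale to area 1

print(equilateral_pentagon(np.radians(108), np.radians(108)))  # regular pentagon
```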
Theoretically, this is quite simple, but in practice this is a lengthy calculation. We were able to compute \(\rho_{33}(\Omega)\) for a large collection of equilateral pentagons \(\Omega\). Note that all interior angles in the regular pentagon are equal to \(108\) degrees. We discretized the region \(\theta,\phi\in[105,110]\) (in degrees) and calculated \(\rho_{33}\) for each pentagon in this discretization. The results showed a clear peak near \((\theta,\phi)=(108,108)\), so we further discretized the region \(\theta,\phi\in[107.5,108.5]\) (in degrees) into \(400\) equally spaced grid points and computed \(\rho_{33}\) for each of the \(400\) pentagons in our discretization. We then interpolated the results linearly and the resulting plot is shown as the orange surface in Figure 8.
The blue surface in Figure 8 is the plane at height \(0.149429\), which is the (approximate) torsional rigidity of the area-normalized regular pentagon calculated by Keady in [10]. Recall that every \(\rho_{N}\) is an overestimate of \(\rho\), so any values of \(\theta\) and \(\phi\) for which \(\rho_{33}\) lies below this plane cannot correspond to the pentagon that maximizes \(\rho\). Thus, if we take the value \(0.149429\) from [10] as the exact value of the torsional rigidity of the regular pentagon with area \(1\), we see that among all equilateral pentagons, the maximizer of \(\rho\) will need to have two adjacent angles within approximately one third of one degree of \(108\) degrees. This is extremely close
Figure 8. \(\rho_{33}\) _for a selection of equilateral pentagons with area \(1\) having angles close to those of the regular pentagon._
to the regular pentagon, and of course the conjecture is that the regular pentagon is the maximizer.
### Acknowledgements
The second author graciously acknowledges support from the Simons Foundation through collaboration grant 707882.
|
2310.00351 | Neuroadaptation in Physical Human-Robot Collaboration | Robots for physical Human-Robot Collaboration (pHRC) systems need to change
their behavior and how they operate in consideration of several factors, such
as the performance and intention of a human co-worker and the capabilities of
different human-co-workers in collision avoidance and singularity of the robot
operation. As the system's admittance becomes variable throughout the
workspace, a potential solution is to tune the interaction forces and control
the parameters based on the operator's requirements. To overcome this issue, we
have demonstrated a novel closed-loop-neuroadaptive framework for pHRC. We have
applied cognitive conflict information in a closed-loop manner, with the help
of reinforcement learning, to adapt to robot strategy and compare this with
open-loop settings. The experiment results show that the closed-loop-based
neuroadaptive framework successfully reduces the level of cognitive conflict
during pHRC, consequently increasing the smoothness and intuitiveness of
human-robot collaboration. These results suggest the feasibility of a
neuroadaptive approach for future pHRC control systems through
electroencephalogram (EEG) signals. | Avinash Singh, Dikai Liu, Chin-Teng Lin | 2023-09-30T12:16:24Z | http://arxiv.org/abs/2310.00351v1 | # Neuroadaptation in Physical Human-Robot Collaboration
###### Abstract
Robots for physical Human-Robot Collaboration (pHRC) systems need to change their behavior and how they operate in consideration of several factors, such as the performance and intention of a human co-worker and the capabilities of different human-co-workers in collision avoidance and singularity of the robot operation. As the system's admittance becomes variable throughout the workspace, a potential solution is to tune the interaction forces and control the parameters based on the operator's requirements. To overcome this issue, we have demonstrated a novel closed-loop-neuroadaptive framework for pHRC. We have applied cognitive conflict information in a closed-loop manner, with the help of reinforcement learning, to adapt to robot strategy and compare this with open-loop settings. The experiment results show that the closed-loop-based neuroadaptive framework successfully reduces the level of cognitive conflict during pHRC, consequently increasing the smoothness/intuitiveness of human-robot collaboration. These results suggest the feasibility of a neuroadaptive approach for future pHRC control systems through electroencephalogram (EEG) signals.
physical human robot collaboration, reinforcement learning, cognitive conflict, deep learning, electroencephalogram
## I Introduction
Shared control, or shared autonomy, plays an important role in collaborative technologies. Collaborative robots, or robots for physical Human-Robot Collaboration (pHRC) [1], are one such example. pHRC occurs when a human and a robot are in contact and exchange forces to accomplish a common task. In such a scenario, the human is always in complete or partial control of the robot's motions. Examples of pHRC systems include a cobot for material handling [2], a human-robot system for homokinetic joint assembly [3], a wearable robot tested for lifting and holding tasks [4], a hand rehabilitation device [5], a lower-limb exoskeleton [6], and an exoskeleton to rehabilitate the shoulder and elbow [7]. The collaborative robotics field is currently undergoing fundamental paradigm shifts in both research and applications [8]. One of the most common directions in pHRC is the human-centered design of robot mechanics and control [9]. However, this approach is limited in its understanding of the human co-worker's needs, which stems mainly from subjective opinion [10]. These pHRC systems do not take into account that every human co-worker is unique in skills, knowledge, and experience. Interaction dynamics that are suitable for one human co-worker might be uncomfortable for another. In current pHRC settings [11, 12, 13], humans usually adapt over time to any gaps between their expectations and the robot's behavior [14]. When such adaptation reaches a stagnant stage where further improvement is not possible, it can create a significant delay in the actual process, and performance will drop until the human co-worker has adapted to the robot. All existing pHRC systems [3, 9, 12, 15] share this common requirement of adaptation by the human co-worker for close, safe, and dependable physical interaction in a shared workspace. As we reach a point where such robots need to be carefully designed around the human co-worker's needs, we now ask: what if such adaptation were handled by the robot itself rather than by the human co-worker? A pHRC system that can safely sense, reason, learn, and act while working in a shared workspace with a human is a sound aim. However, designing a pHRC system that can adapt to a human co-worker's needs, changes, and performance requires a better understanding of the human co-worker's feelings and thoughts during collaboration with a robot. To design a pHRC system that couples control, planning, and learning with human needs, performance, and capability in real time, understanding the changes in the human co-worker's cognitive state may provide the best source of information for tuning and adapting the robot in pHRC settings. Recent advances in wearable physiological sensors such as the electromyogram (EMG), heart-rate variability (HRV), the electroencephalogram (EEG), and the photoplethysmogram (PPG) have given rise to new paradigms for understanding changes in a human's cognitive state during human-robot interaction [16]. Researchers now use them in different settings to better understand human co-workers' states, including stress [17, 18], attention [19], mental fatigue [20], engagement [21], and workload [22], in human-robot collaboration and interaction environments. Information about such cognitive states becomes important if a pHRC task places undue mental workload, stress, or mental fatigue on human co-workers.
However, these cognitive states are not well suited to understanding the human co-worker's covert cognitive state as influenced by collaborating with a robot. Moreover, mental fatigue, stress, and workload develop gradually and do not necessarily reflect the ongoing changes in the human co-worker's feelings. Therefore, tracking this continuously changing cognitive state and the associated feelings while collaborating with a robot remains highly desirable.
One such relevant cognitive factor here is cognitive conflict. This occurs if the predicted outcome of a human co-worker's
desired action while collaborating with a robot does not match the robot's eventual action. Cognitive conflict has been utilized as an essential cognitive feature for successfully adapting systems to human feelings in different environmental settings [23, 24]. Such adaptation to human feelings is generally known as neuroadaptation [25]. Several works on neuroadaptation [26, 27, 28, 29] have employed cognitive conflict-based features to interface with a robot and correct the robot's mistakes. However, most of these works are limited in that the human co-worker simply observes the robot's action from a distance, i.e., non-contact interaction on a computer screen or via a keyboard/joystick [29], rather than exchanging forces and coupled motions as in pHRC settings. Furthermore, these works only focus on correcting a robot's mistakes after a task is completed rather than truly neuroadapting to the human co-worker's feelings while the task is being conducted [30]. In our previous work [31], we demonstrated the possibility of detecting cognitive conflict in pHRC settings and successfully decoded the human co-worker's feelings about the robot's actions. We also established that cognitive conflict can be detected more readily as the degrees of the robot's movement increase, such as moving from one-dimensional (1D) to two-dimensional (2D) settings [31]. Continuing this line of work, this paper presents a novel neuroadaptive framework for pHRC that adapts to the human co-worker's feelings while physically collaborating with a robot in real time within a short duration of 15 minutes, as shown in Figure 1.
## II Materials and Methods
### _Experiment design_
A pHRC robot called the ANBOT [32] was used as a platform to run this experiment. It consists of a UR10 (Universal Robots) controlled with an admittance-based controller via a force/torque sensor mounted between the arm and the tool. We used a 32-channels EEG device called MOVE (Brain Product GmbH) to record the brain activity, with a sampling rate of 1000Hz. The electrodes were placed on the scalp with an impedance below 50 and accordingly to the 10-20 international system [33].
Before starting the experiment (see setup in Figure 2), we recorded baseline data with the participants sitting still, first with open eyes and then with closed eyes. Each baseline lasts about one minute, and the experiment is divided into two parts: open-loop (without neuroadaptive framework) and closed-loop (with neuroadaptive framework). In the open-loop condition, participants explored the shared workspace. They were explicitly asked to stretch joint three of the UR10 (robot elbow) and test the proximity of singularity. The damping variables \(\bar{\sigma}_{0}\) and \(\bar{\sigma}_{1}\) are in bounded to the range [0.35, 0.45], when the robot is approaching a singular configuration and [0.25, 0.45] when it is moving away from the singularity following Carmichael et al. [15].
Similarly, in the closed-loop conditions, the participants operate the ANBOT while, concurrently, a reinforcement learning algorithm is setting and adjusting \(\bar{\sigma}_{1}\) automatically, while \(\bar{\sigma}_{0}\) is fixed and equal to 0.25. \(\bar{\sigma}_{1}\) bounded to the range [0.35, 0.45], when the robot is approaching a singular
Fig. 1: A schematics diagram for the Neuroadaptive framework for pHRC.
configuration and bounded to [0.25, 0.45], when moving away from the singularity similar to the open-loop condition. This ensures that there is always some damping when approaching singularity (fast to slow), while there can be no damping to slow damping if leaving singularity. At the end of both the conditions, we asked the participants to fill a questionnaire. In this questionnaire, we asked the participants' preference for the two conditions. Each open and closed-loop condition took about 15 minutes, excluding the time to prepare the participant for an experiment.
### _Participants_
The cohort for this experiment consisted of fourteen healthy participants - three females and eleven males, aged between 18 and 42. Two participants were, however, excluded from the EEG data analysis because the data was corrupted. The two excluded participants are, nevertheless, included in the questionnaire about the open-loop condition. Before participating in the study, each participant was given a full explanation of the experimental procedure, and each provided informed consent. Ethics approval was issued by the Human Research Ethics Committee of the University of Technology Sydney, Australia. The experiment was conducted in a large room by a male experimenter. None of the participants had a history of neurological or psychological disorders, which could have affected the experiment results. All the participants were allowed to wear glasses for corrected vision.
### _Methodology_
The EEG data were processed offline to see whether a cognitive conflict existed for specific conditions. We used the Lab-Streaming Layer (LSL)1 to synchronize the data with event markers defining the conditions. The event markers were sent for different values of \(\sigma_{i}\), particularly for \(\sigma_{i}<\sigma_{q}\), \(\sigma_{i}<(\sigma_{1}-\sigma_{0})/2\), \(\sigma_{i}<(\sigma_{1}-\sigma_{0})/4\) and \(\sigma_{i}<(\sigma_{1}-\sigma_{0})/8\). The EEG data were first resampled to 250 Hz and filtered with a band-pass filter in the range of 2-50 Hz. The process then employed Artifact Subspace Reconstruction (ASR) [34] to automatically reject bad channels, followed by independent component analysis (ICA) [35]. We next applied ADJUST [36] to further clean the data from artifacts. Epochs were then extracted to compute the Power Spectral Density (PSD). Epochs were extracted for each condition from the corresponding event marker to 400 ms after it. The resultant epoched data were divided into three segments: the first 1/3, the second 1/3 and the third 1/3. The PSD of the recorded open-eyes baseline was also computed after these data went through the same pipeline. The baseline PSD was then subtracted from the PSD of each condition for each participant. The result is a normalized PSD for each participant that allows for the considerable variation in PSD between the three time-segments.
Footnote 1: [https://github.com/sccn/labstreaminglayer](https://github.com/sccn/labstreaminglayer)
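For readers who want to reproduce a simplified version of this offline pipeline, the sketch below uses generic SciPy filtering on synthetic data; ASR, ICA and ADJUST are omitted, and all array sizes and event markers are illustrative assumptions rather than the study's recordings.

```python
import numpy as np
from scipy import signal

fs_raw, fs = 1000, 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 60 * fs_raw))        # 32 channels, 60 s of synthetic data
event_samples_raw = np.array([5000, 15000, 30000])  # hypothetical event markers

# Resample 1000 Hz -> 250 Hz, then band-pass 2-50 Hz.
eeg = signal.resample_poly(eeg, up=1, down=4, axis=1)
sos = signal.butter(4, [2, 50], btype="bandpass", fs=fs, output="sos")
eeg = signal.sosfiltfilt(sos, eeg, axis=1)

# 0-400 ms epochs relative to each (resampled) event marker.
events = event_samples_raw // 4
epoch_len = int(0.4 * fs)
epochs = np.stack([eeg[:, e:e + epoch_len] for e in events])   # (n_events, 32, 100)

# Welch PSD per epoch and channel, minus a stand-in baseline PSD.
freqs, psd = signal.welch(epochs, fs=fs, nperseg=epoch_len, axis=-1)
baseline_psd = psd.mean(axis=0)
normalized_psd = psd - baseline_psd
print(freqs.shape, normalized_psd.shape)
```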
### _Cognitive Conflict Deep Deterministic Policy Gradient (CC-DDPG) model_
Motivated by the human goal-directed learning process, reinforcement learning models are based on the interaction between an agent and its environment, balancing exploration and exploitation to maximize the return (i.e., cumulative reward). In this paper, a neuroadaptive framework for pHRC was developed based on deep reinforcement learning (DRL) [37] (see Figure 3). DRL can generate control policies directly from the robot's states for pHRC. In pHRC, the robot's state consists of the magnitudes of the position, the velocity, and the force applied by the participant while collaborating. Theoretically, the state-space for each variable is unbounded in this case. Solving such DRL problems requires an algorithm for continuous action spaces, which led to the choice of the deep deterministic policy gradient (DDPG) [38] algorithm to learn environments with a continuous flow of states and actions.
Let the actor be \(\pi_{\mu}(a_{t}|s_{t})\) and the critic be \(Q(s_{t},a_{t}|\theta)\). The actor takes \(s_{t}\) and produces the action \(a_{t}\), while the critic takes the action \(a_{t}\) together with \(s_{t}\) and produces \(Q(s_{t},a_{t})\), which is trained to minimize the reward-prediction error (RPE) as follows:
\[\delta_{t}=r_{t}+\gamma Q(s(t+1),\pi(s(t+1))|\theta)-Q(s_{t},a_{t}\ |\theta) \tag{1}\]
where loss function theta is as follows:
\[L=1/N\ \sum_{i}(y_{i}-Q(s_{i},a_{i}\ |\theta))^{2} \tag{2}\]
While the actor following the policy gradient theorem as follows:
\[\nabla_{\mu}\,\pi(s,a)=E_{s\sim\rho}\left[\nabla_{a}\,Q(s,a|\theta)\,\nabla_{\mu}\,\pi(s|\mu)\right] \tag{3}\]
Fig. 2: Experiment Design. A participant is controlling pHRC in a singularity experiment wearing a wired EEG cap.
\[\nabla_{w}\,\pi(s_{i})\approx\frac{1}{N}\sum_{i}\nabla_{a}\,Q(s,a|\theta)\big|_{s=s_{i},\,a=\pi(s_{i})}\,\nabla_{w}\,\pi(s)\big|_{s=s_{i}} \tag{4}\]
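A minimal PyTorch sketch of one DDPG update corresponding to equations (1)-(4) is given below; the network architectures, dimensions and hyperparameters are illustrative assumptions, not the settings used for the CC-DDPG controller.

```python
import copy
import torch
import torch.nn as nn

state_dim, action_dim, gamma = 8, 1, 0.99
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_target, critic_target = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s_next):
    # Critic update: minimise the reward-prediction error of eqs. (1)-(2).
    with torch.no_grad():
        a_next = actor_target(s_next)
        y = r + gamma * critic_target(torch.cat([s_next, a_next], dim=1))
    q = critic(torch.cat([s, a], dim=1))
    critic_loss = nn.functional.mse_loss(q, y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: ascend the deterministic policy gradient of eqs. (3)-(4).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    # Soft updates of the target networks are omitted for brevity.

batch = 32
ddpg_step(torch.randn(batch, state_dim), torch.randn(batch, action_dim),
          torch.randn(batch, 1), torch.randn(batch, state_dim))
```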
### _Reward function with Cognitive Conflict_
The reward setting of the reinforcement learning (RL) algorithm is an essential component of effective model training. An improper reward setting relates directly to a failure of the model. Russell et al. [39] give an example: when the reward of a vacuum cleaner is set to "absorb dust", vacuum cleaners will earn rewards by "spraying dust" and then "absorbing dust". A sparse reward may also lead to inefficient exploration and learning, slow iteration, and even difficulty in converging. We need to model the participant's feeling of cognitive conflict, as measured directly from the brain, as a reward to optimize the robot's action. We have formulated the reward function as follows:
\[r=r^{\prime}+r_{CNN} \tag{5}\]
where \(r\) represents the reward from cognitive conflict for each action performed between a participant and the pHRC system. \(r_{CNN}\) describes the comprehensive evaluation of the participant's feeling derived from the 32-channel EEG over the 1.2 s period.
The \(r_{CNN}\) term used a convolutional neural network (CNN) based on DeepConvNet [40] to classify three levels of cognitive conflict. The model was trained on three classes using previously collected cognitive conflict data [41], where the three classes were the normal (no-conflict) condition, the slow-conflict condition, and the sudden-conflict condition. We assigned \(r_{CNN}\) values of 100, 50, and -100 for normal, slow-conflict, and sudden-conflict classifications, respectively, after each classification over the 1.2 s of a trial. The data used for DeepConvNet involved minimal pre-processing. The acquired EEG data were filtered with a band-pass filter in the range of 2 to 50 Hz before extracting 1.2 s epochs in real time by buffering 1200 samples using LSL. Similarly, the robot data used for CC-DDPG consisted of the human-robot collaboration forces, position, end-effector velocities, and sigma values. These data were sampled at 125 Hz. See the Supplementary results for more detail about the DeepConvNet model, its hyperparameters, and the training data results.
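The mapping from the classifier output to the reward of equation (5) can be summarized in a few lines; in the sketch below the task-reward term standing in for \(r^{\prime}\) is a hypothetical placeholder, since only the cognitive-conflict term is specified above.

```python
R_CNN = {"normal": 100.0, "slow_conflict": 50.0, "sudden_conflict": -100.0}

def total_reward(task_reward, predicted_class):
    """task_reward is a hypothetical stand-in for r'; predicted_class is the
    DeepConvNet label for the current 1.2 s EEG epoch."""
    return task_reward + R_CNN[predicted_class]

print(total_reward(0.0, "sudden_conflict"))   # -100.0
```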
## III Results
We have evaluated the neuroadaptive framework for pHRC with fourteen participants performing a singularity task [31] under conditions with the neuroadaptive framework (closed-loop) and without it (open-loop). The task is to freely move and stretch joint three of the UR10 (the robot's elbow)2 and test the proximity of singularity. These participants had minimal experience with robot control, as reported in the questionnaire (see Figure 4).
Fig. 3: Schematic model structure for neuroadaptive framework using CC-DDPG.
### _Comparison in behavioral results for open and closed-loop-neuroadaptive frameworks_
Figure 4(A) shows that about 86% of participants did not know the type of task they were going to perform before the experiments, 64% had never used any robotic arms before and did not know how a robotic arm should work. Figure 4(B) indicates how participants rated the importance of their feelings on the questions of smoothness - 69%, responsiveness - 69%, control - 79%, frustration - 53%, and safety 89%, while using a closed-loop neuroadaptive framework condition compared to an open-loop condition.
We divided the data into three equal time segments: T1 (first 1/3), T2 (second 1/3), and T3 (third 1/3). From the divided data, we then extracted and processed the forces applied by the participants during pHRC and the acceleration of the robotic arm in the open and closed-loop conditions, as shown in Figure 5. A repeated-measure ANOVA was conducted to compare 1 (force data) x 3 (time-segments) and 1 (acceleration data) x 3 (time-segments) for the open and closed-loop conditions. There was a significant difference in force between the three time-segments for the open-loop condition (F (2, 9154) = 77.938, p =.000) and for the closed-loop condition (F (2, 14562) = 27.776, p =.000). An LSD post-hoc test on time-segments for the open-loop condition revealed that there was a significant difference from T1 to T2 (p =.000), T1 to T3 (p =.000), and T2 to T3 (p =.001). Similarly, the LSD post-hoc test on time-segments under the closed-loop condition also revealed a significant difference from T1 to T2 (p =.000), T1 to T3 (p =.000), and T2 to T3 (p =.000).
A repeated-measure ANOVA was also conducted to compare 1 (acceleration data) x 3 (time-segments) for the open and closed-loop conditions. There was a significant difference in acceleration between the three time-segments for the open-loop condition (F (2, 8096) = 15.768, p =.000) as well as for the closed-loop condition (F (2, 12108) = 20.301, p =.000). An LSD post-hoc test on time-segments for the open-loop condition revealed a significant difference from T1 to T2 (p =.000), T1 to T3 (p =.007), and T2 to T3 (p =.004). Similarly, LSD post-hoc testing on time-segments for the closed-loop condition also revealed a significant difference from T1 to T2 (p =.008), T1 to T3 (p =.000), and T2 to T3 (p =.000).
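For reference, a repeated-measures ANOVA of this form (one within-subject factor with levels T1/T2/T3) can be run as in the following sketch; the data frame, subject IDs and column names are synthetic stand-ins, not the study data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
records = [{"subject": s, "segment": seg, "force": rng.normal(10 + 2 * k, 1.0)}
           for s in range(12)
           for k, seg in enumerate(["T1", "T2", "T3"])]
df = pd.DataFrame(records)

# One within-subject factor (time segment), one observation per subject and level.
result = AnovaRM(df, depvar="force", subject="subject", within=["segment"]).fit()
print(result)
```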
### _The comparison between EEG results for open and closed-loop-neuroadaptive framework_
To understand whether the participants' brain dynamics exhibit any differences when collaborating with the pHRC both with (closed-loop) and without (open-loop) the neuroadaptive framework, we also divided the data into three time-segments (T1, T2, and T3) and extracted cognitive conflict-related neuro-markers based on power-spectral density (PSD) for both conditions, as presented in Figures 6 (A) and 6 (C). A repeated-measure ANOVA was conducted to compare 4 (power bands) x 3 (time-segments) for the closed-loop setting. There was no significant difference in PSD between power bands (F (3, 30) = 0.747, p =.532), but there was a significant difference in PSD between time-segments (F (2, 20) = 4.431, p =.026). An LSD post-hoc test on time-segments (T1, T2, and T3) for all bands revealed that there was a significant difference in PSD from T1 to T2 (p =.043), T1 to T3 (p =.030), and T2 to T3 (p =.039) for delta bands, and from T1 to T2 (p =.050), T1 to T3 (p =.029), and T2 to T3 (p =.055) for theta bands. However, there was no significant difference in PSD from T1 to T2 (p =.130), T1 to T3 (p =.122), and T2 to T3 (p =.682) for alpha bands, or from T1 to T2 (p =.109), T1 to T3 (p =.117), and T2 to T3 (p =.978) for beta bands.
A repeated-measures ANOVA was also conducted to compare PSD for 4 (power bands) x 3 (time-segments) for the open-loop setting. There was no significant difference in PSD between power bands (F (3, 30) = 0.247, p = .863), but there was a significant difference in PSD between time-segments (F (2, 20) = 4.050, p = .033). LSD post-hoc testing on time-segments (T1, T2, and T3) for all bands revealed no significant difference in PSD from T1 to T2 (p = .073), T1 to T3 (p = .065), or T2 to T3 (p = .350) for delta bands; from T1 to T2 (p = .081), T1 to T3 (p = .066), and T2 to T3 (p = .243) for theta bands; from T1 to T2 (p = .156), T1 to T3 (p = .149), and T2 to T3 (p = .643) for alpha bands; or from T1 to T2 (p = .136), T1 to T3 (p = .140), and T2 to T3 (p = .882) for beta bands.
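As an illustration of how such band-limited PSD neuro-markers can be extracted per time-segment, the following Python sketch applies Welch's method to a single EEG channel (e.g., Fz). The 500 Hz sampling rate, the band edges, and the synthetic data are assumptions; this is not the processing pipeline used in the study.

```python
# A minimal sketch of extracting band-limited PSD neuro-markers from one EEG
# channel (e.g., Fz) per time segment with Welch's method. The 500 Hz sampling
# rate, the band edges, and the synthetic data are assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def band_powers(eeg: np.ndarray, fs: float = 500.0) -> dict:
    """Return the mean PSD in each band for a 1-D EEG signal."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-s Welch windows
    return {name: pxx[(f >= lo) & (f < hi)].mean()
            for name, (lo, hi) in BANDS.items()}


def per_segment_powers(eeg: np.ndarray, fs: float = 500.0) -> list:
    """Split the recording into thirds (T1, T2, T3) and compute band powers."""
    return [band_powers(seg, fs) for seg in np.array_split(eeg, 3)]


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fz = rng.standard_normal(500 * 180)  # 3 minutes of synthetic "EEG"
    for name, powers in zip(["T1", "T2", "T3"], per_segment_powers(fz)):
        print(name, {k: round(v, 4) for k, v in powers.items()})
```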
Fig. 4: The questionnaire results from participants comparing closed-loop and open-loop neuroadaptive frameworks. (A) The results show the participants’ experience toward the pHRC on a binary scale on three questions: “Q1: Have you ever used a robot arm before?”; “Q2: Do you know how robot arms work?”; and “Q3: Do you know what robot kinematic singularity is?”; (B) The results show the participants’ experience toward pHRC’s smoothness, responsiveness, control, frustration, and safety on the five-point Likert scale (with standard deviation).
In addition to looking at only one EEG channel (Fz) in the frontal lobe following previous work [24], we observed PSD topoplots utilizing all the EEG channels as shown in Figures 6 (B) and 6 (D). We also compared topoplots from the participant(s) at three time-segments (T1, T2, and T3) for two conditions. A repeated-measure ANOVA was also conducted to compare PSD for topoplots in 4 (power bands) x 3 (time-segments) for closed-loop settings. As the data in Figure 6(D) indicate, there was a significant difference between power bands (F (3, 90) = 67.995, p =.000) as well as time-segments (F (2, 60) = 24.657, p =.000) for topoplots. An LSD post-hoc test on time-segments (T1, T2, and T3) for all bands revealed that there was significant difference in PSD for topoplots from T1 to T2 (p =.032), T1 to T3 (p =.000), and T2 to T3 (p =.000) for delta bands; T1 to T2 (p =.031), T1 to T3 (p =.000), and T2 to T3 (p =.000) for theta bands; T1 to T3 (p =.003), and T2 to T3 (p =.003) for alpha bands, and T1 to T2 (p =.002), T2 to T3 (p =.017) for beta bands. However, there was no significant difference in PSD for topoplots from T1 to T2 (p =.338) in alpha bands or T1 to T3 (p =.459) in beta bands.
Similarly, a repeated-measures ANOVA was conducted to compare 4 (power bands) x 3 (time-segments) in the open-loop setting in PSD for topoplots. As illustrated in Figure 6(B) (topoplots), there was a significant difference between power bands (F (3, 90) = 75.328, p = .000) as well as between time-segments (F (2, 60) = 6.501, p = .003) in PSD for topoplots. LSD post-hoc testing on time-segments (T1, T2, and T3) for all bands revealed a significant difference in PSD for topoplots from T1 to T3 (p = .000) and T2 to T3 (p = .000) in delta bands, and from T1 to T3 (p = .000) and T2 to T3 (p = .000) in theta bands. However, there was no significant difference in PSD for topoplots from T1 to T2 (p = .354) in delta bands; T1 to T2 (p = .074) in theta bands; T1 to T2 (p = .088), T1 to T3 (p = .547), and T2 to T3 (p = .455) in alpha bands; and T1 to T2 (p = .629), T1 to T3 (p = .115), and T2 to T3 (p = .114) for beta bands.
To further evaluate the brain dynamics, we utilized Independent Component Analysis (ICA) [35] with dipole-fitting [42] to determine whether cognitive conflict originates from the anterior cingulate cortex (ACC) [43]. We found this is indeed the case, as shown in Figure 7, which clearly indicates that cognitive conflict originates in the ACC region of the brain, as verified by the dipole-fitting analysis and Talairach coordinates. It is important to note that this ACC origin is the same for both the open and closed-loop conditions.
Fig. 5: Force and acceleration while participants control the robotic arm in the open and closed-loop conditions. (A) The force applied by the participants over the three time-segments (T1, T2, and T3) in the open-loop condition; (B) The acceleration of the robotic arm over the three time-segments in the open-loop condition; (C) The force applied by the participants over the three time-segments in the closed-loop condition; (D) The acceleration of the robotic arm over the three time-segments in the closed-loop condition. (Bar plots indicate mean \(\pm\) SEM. Statistical analysis using a two-way repeated-measures ANOVA. *p<0.05, **p<0.005, ***p<0.0005.)
### _Reinforcement Learning results from the closed-loop-neuroadaptive framework_
To reveal the efficacy of the cognitive-conflict deep deterministic policy gradient (CC-DDPG) algorithm, we evaluated a representative participant's actor and critic costs. We also assess the action taken by the CC-DDPG algorithm for all participants for neuroadaptive pHRC. As Figure 8 (A) illustrates, throughout 100 trials (i.e., games), the actor cost sharply decreases as we approach 40 trials, shows a slight increase beyond this up to 50 trials, is followed by a sharp decrease up to 90 trials, and a slight increase
in the last 20 trials. Similarly, for the critic cost as plotted in Figure 8 (B), the cost decreases up to around 50 trials, followed by a slight increase before a decrease at 70 trials. Then, a sharp decrease up to 80 trials ensues, followed by a slight increase and then a decline over the next 20 trials. Overall, both actor and critic costs decrease over the trials, but initial convergence is achieved at around 40 trials.
In normal conditions, without reinforcement learning (RL), the standard sigma values are set between 0.35 (\(\sigma_{1}\)) and 0.45 (\(\sigma_{2}\)), and CC-DDPG is required to decide a value that minimizes cognitive conflict. As we show in Figure 8(C), participant preference generally moves more toward the higher
Fig. 6: Power Spectral Density at delta, theta, alpha, and beta for the open and closed-loop conditions within the participant at three time-segments. (A) Boxplot in the open-loop condition at times T1, T2, and T3 at channel ‘Fz’; (B) Topoplots in the open-loop condition at times T1, T2, and T3; (C) Topoplots in the closed-loop condition at times T1, T2, and T3; (D) Boxplot in the closed-loop condition at times T1, T2, and T3 at channel ‘Fz’. (Bar plots indicate mean ± SEM. Statistical analysis using a two-way repeated-measures ANOVA. *p<0.05, **p<0.005, ***p<0.0005.)
Fig. 7: Origination of cognitive conflict. Anterior Cingulate Cortex (ACC) and relative dipole positions in three views.
sigma values, which also vary for each participant at different instances of the games. When compared with the actor and critic costs around trials 40 and 100, the sigma values approach \(\sigma_{2}\).
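To make the role of the sigma values concrete, the following hypothetical Python sketch shows one way a scalar CC-DDPG action could be mapped onto the admissible range \([\sigma_{1},\sigma_{2}]\) and how a reward could be formed from a reduction in a cognitive-conflict marker. The function names, the choice of marker, and the reward scaling are illustrative assumptions, not the implementation used in this work.

```python
# A hypothetical sketch of how a scalar CC-DDPG action could be mapped onto the
# admissible sigma range and how a reward could be formed from a reduction in a
# cognitive-conflict marker (e.g., frontal theta PSD). Names and scaling are
# assumptions, not the implementation used in this work.
import numpy as np

SIGMA_1, SIGMA_2 = 0.35, 0.45  # admissible sigma range stated in the text


def action_to_sigma(action: float) -> float:
    """Map an actor output in [-1, 1] to a sigma value in [SIGMA_1, SIGMA_2]."""
    action = float(np.clip(action, -1.0, 1.0))
    return SIGMA_1 + (action + 1.0) / 2.0 * (SIGMA_2 - SIGMA_1)


def conflict_reward(prev_marker: float, new_marker: float) -> float:
    """Positive reward when the cognitive-conflict marker decreases."""
    return prev_marker - new_marker


if __name__ == "__main__":
    print(action_to_sigma(0.8))        # close to SIGMA_2, as in Fig. 8(C)
    print(conflict_reward(2.1, 1.6))   # conflict decreased -> positive reward
```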
## IV Discussion
The present work demonstrates a novel closed-loop neuroadaptive framework for pHRC. The results show that cognitive conflict is an effective neuro-marker that captures information about how human co-workers feel while collaborating with the robot, and that this information can be used for neuroadaptation. The system developed in this paper successfully demonstrates its effectiveness, as seen in the PSD changes in brain dynamics and topoplots. The results further support the efficacy of the neuroadaptive framework through the human co-workers' behavior, reflected in the force and acceleration data recorded by the robot's sensors, in the participants' questionnaire feedback, and in the output of the CC-DDPG algorithm.
In the work presented here, most participants had not been exposed to this or similar mechanisms before; they were familiar with robot interaction, but not of a pHRC kind. Therefore, they show almost equal attitudes towards both the open and closed-loop conditions, as recorded by the questionnaires, in terms of smooth and responsive control in pHRC settings. However, participants prefer the safety and control of the closed-loop neuroadaptive framework over the open-loop condition, potentially due to its continuous adaptation. Another potential reason for the better perceived safety and control in the closed-loop condition, compared to its open-loop counterpart, might be the participants' perception of how the robot reacts to sudden changes in sigma values: these require a continuous change in strategy, hence encouraging participants to feel safer and more in control. This is also in alignment with previous studies [44] of pHRC systems, where novice participants felt less resistance toward control, and thus more in control, due to continuous changes in the impedance level of the robot.
It was interesting to note that participants also felt highly frustrated in the closed-loop setting, which seems understandable because the CC-DDPG algorithm continuously prompted them with changes in order to learn about their feelings. A similar trend is visible in the force and acceleration applied by participants while collaborating with the robotic arm. It seems clear that, during the adaptation process, participants were able to control the robot much more smoothly, and that flexible hand movement allows more stable acceleration. It can be argued that such a reduction in the applied force over time and the change in acceleration are due to participant learning. If that were the case, however, the open-loop condition should have delivered similar findings, which was not depicted in the results. There is indeed a learning process involved, although it is robotic rather than human, based on the human's feelings with the help of the neuroadaptive framework.
Fig. 8: The results from the cognitive conflict-based CC-DDPG algorithm. (A) Actor cost over 100 trials or episodes of reinforcement learning; (B) Critic cost over 100 trials or episodes of reinforcement learning; (C) Actions taken by CC-DDPG for each participant separately over the 100 trials or episodes of reinforcement learning.
Looking closely at the human brain with the help of EEG, it became even more apparent that our hypothesis on neuroadaptation was correct. The result from the PSD depicts a reduction in power over time (T1, T2, and T3) for the delta and theta bands. The delta power band is known mainly for modulation while monitoring and adapting to external stimuli, as shown by [45]. The reduction in the theta band notably originated in the ACC areas of the brain [7], known to be related to a reduction in observation error in the environment. Following the theory of cognitive conflict [46], our results suggest that, as the robot shapes the participants' roles, the theta power originating in ACC areas is reduced. This is further evident in the open-loop results on the theta band, where similar findings are not observed because there was no neuroadaptation procedure in place. The decrease in the delta and theta bands is also observable in the topoplots, particularly focused around the frontal-central areas, which reflect the ACC [43] region of the brain; this aligns with previous findings on cognitive conflict [23, 47].
Another interesting point is that the alpha and beta power bands demonstrate a similar effect in both the open and closed-loop conditions. The reason could be that they are not generally known to be related to a participant's cognitive states, such as cognitive conflict. Alpha power relates to changes in the level of attention [48] or stimulus novelty [49], while beta power is known to reflect changes in cognitive processing [50]. These factors do not apply directly to pHRC settings, particularly for the experiment performed here. Consequently, no changes were observed.
The closed-loop neuroadaptive framework was further supported by the CC-DDPG results, reflected in the actor and critic costs while training the model and tuning the robot's sigma values. Over the total duration of the trials, as adaptation improved, the actor and critic costs decreased. As per the DDPG algorithm [38], this is a sign that the reinforcement learning (RL) model is learning about the environment and adapting to its condition to maximize the reward, represented here as a reduction in cognitive conflict.
Although this work shows how neuroadaptation can be achieved using cognitive conflict in a pHRC setting, it still suffers from several limitations.
* The current setup used a 32-channel wet sensors-based MOVE system. Due to its long setup time and the common problem of gel drying out, the system is suitable only for the lab environment. We believe it should be possible to reproduce the results using off-the-shelf portable, wireless, dry-EEG devices such as Mindo [51].
* Synchronization of the whole system is also a challenge for hardware integration, especially if cognitive conflict is the target. The classification algorithm, the information streamed by the pHRC system, and the commands issued to the robot together introduce a delay in the closed-loop condition. There is also a potential delay in processing all this information and sending it over the LSL system. We estimated the total latency to be approximately 200-300ms, which might delay the EEG signals and, subsequently, the cognitive conflict information. For future work that focuses on the neuroadaptation framework, dedicated synchronization hardware should be used.
* The participants who took part in the experiment were limited in number and age range (18-42 years) and do not represent a general sample of the population that operates pHRC systems. For future work, a broader age population will be recruited for such experiments to make sure that age, gender, and experience do not influence the results.
* Finally, the tasks involved in this work can induce boredom in some participants. However, clearly defined tasks that require more complex physical human-robot interaction might not result in boredom. For example, in [24], the participant is required to complete a task that also produces a score, thereby gamifying the interaction. We believe that the gamified setup can also be applied to the system we propose.
|
2309.12636 | Heterogeneous Rank Beamforming for Industrial Communications | This paper proposes a novel hardware beamforming architecture, which is
capable of utilizing a different number of Radio Frequency (RF) chains in
different parts of the bandwidth. It also shows that a proportional fairness
scheduler will effectively utilize the high rank part of the bandwidth in a
multi-user setting, thus operating more efficiently and effectively than
classical beamforming schemes. | Andrea Bedin, Akshay Jain, Andrea Zanella, Karthik Upadhya | 2023-09-22T06:09:01Z | http://arxiv.org/abs/2309.12636v1 | # Heterogeneous Rank Beamforming for Industrial Communications
###### Abstract
This paper proposes a novel hardware beamforming architecture, which is capable of utilizing a different number of Radio Frequency (RF) chains in different parts of the bandwidth. It also shows that a proportional fairness scheduler will effectively utilize the high rank part of the bandwidth in a multi-user setting, thus operating more efficiently and effectively than classical beamforming schemes.
beamforming, mimo, resource allocation, scheduling
## I Introduction
Wireless communication systems have emerged as a fundamental enabler for Industry 4.0, promising rapid transformation of traditional industrial environments into smart and interconnected ecosystems. Specifically, the integration of wireless technologies holds the potential to revolutionize the way industrial processes are managed, monitored, and optimized. Enabling real-time data exchange between pieces of industrial equipment, as well as remote control and maintenance of the machines by the operators, the use of wireless communication in industrial environments provides several advantages, such as reduced costs, improved productivity, and increased flexibility. Concretely, applications such as Digital twins, Collaborative robots and telepresence are expected to be serviced through wireless communications.
Tab. I lists the requirements for Digital twins and cooperative robots applications. This list reports the values defined by the European Sixth Generation (6G) flagship project Hexa-X [1]. The latency requirements for these applications can reach as low as sub-millisecond values, which is highly challenging to achieve in traditional wireless networks. This tight requirement is motivated by the fact that digital twins and cooperative robots require real-time interaction and decision-making capabilities. Any communication delay can lead to safety hazards and performance degradation. In addition, the reliability requirements for these applications are also very stringent, with an error rate as low as \(10^{-9}\). This level of reliability is necessary to ensure that the robots operate accurately and safely without causing any harm to the environment or humans.
While these requirements might seem extreme for today's factories, where robots typically move at limited speeds, they are crucial for enabling much faster operations while maintaining safety and control in future manufacturing plants. Tab. II reports a more detailed list of the Uplink (UL) and Downlink (DL) requirements for telepresence and Virtual Reality (VR), as defined by the Hexa-X project [1]. Here we can observe the heterogeneity of the requirements for different traffic categories within the same service. For example, we can observe that the haptic interactions, which require delays as low as \(1\)ms but are not highly demanding in terms of bitrate, must coexist with video streaming, which is, instead, very bandwidth-intensive. While this coexistence could be handled by deploying multiple networks with different capabilities, this would entail much higher Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) for the network operator. Therefore, it is crucial from an economic standpoint to meet all of these requirements in a single network.
In an effort to meet increasing demand for higher data rates, lower latency and reliable communication in such challenging environments, wireless communication technology has evolved significantly over the years. Many studies have been carried out on different aspects such as massive Multiple Input Multiple Output (mMIMO), beamforming techniques, Medium Access Control (MAC), etc [2, 3]. Of particular interest for this work are the beamforming techniques [4, 5, 6, 7]. In recent years, there has been a focus on hybrid and analog beamforming strategies, which are oriented towards sparse channels such as those observed in indoor environments [8]. However, the channel observed in industrial environments is different, given the very complex and rich
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline
**Use Case** & **Traffic type** & **Latency** & **Reliability** & **Data Rate** \\ \hline Teleoperation & Haptic UL & \(5\) ms & \(10^{-1}\) & \(1\)-\(4\) kbps \\ & Haptic DL & \(1\)-\(50\) ms & \(10^{-1}\) & \(1\)-\(4\) kbps \\ & Audio UL & \(10\) ms & \(10^{-1}\) & \(5\)-\(512\) kbps \\ & Video UL & \(10\) ms & \(10^{-3}\) & \(1\)-\(100\) Mbps \\ \hline Immersive VR & Haptic UL & \(1\)-\(10\) ms & \(10^{-3}\) & \(1\)-\(4\) kbps \\ & Haptic DL & \(1\)-\(10\) ms & \(10^{-3}\) & \(1\)-\(4\) kbps \\ & Audio UL & \(20\) ms & \(10^{-1}\) & \(5\)-\(512\) kbps \\ & Video UL & \(10\)-\(20\) ms & \(10^{-3}\) & \(1\)-\(100\) Mbps \\ \hline \end{tabular}
\end{table} TABLE II: Telepresence requirements
\begin{table}
\begin{tabular}{c|c|c|c} \hline
**Use Case** & **Latency** & **Reliability** & **Data Rate** \\ \hline Cooperative Robots & \(1\)-\(50\) ms & \(10^{-9}\) & kbps \\ Digital Twins & \(0.1\) - \(100\) ms & \(10^{-2}\) - \(10^{-6}\) & \(1\)-\(10\) Gbps \\ \hline \end{tabular}
\end{table} TABLE I: Requirements for selected applications
multipath [9, 10]. Therefore the reliability of existing wireless systems in such environments is degraded, and their ability to exploit the rich multipath propagation is severely limited. On the other hand, due to the large number of antennas needed to overcome the path loss of millimeter wave (mmWave), the choice of moving towards analog or hybrid beamforming, motivated by the complexity and power consumption of the Analog to Digital Converters (ADCs) required to build digital beamforming devices, might seem inevitable. We therefore observe a clear need for an alternative solution that is capable of effectively exploiting the highly rich multipath channel, as well as maintaining manageable costs and complexity.
Addressing this ambitious objective, in this paper we propose:
* A novel hardware architecture [11], described in Sec. II, that supports the trade-off between bandwidth and rank of the channel. Such an architecture, as discussed in Sec. III, is slightly more complex than the classical hybrid beamforming architecture, but still significantly simpler, cheaper and more energy efficient than a fully digital beamforming architecture. This aims to overcome the limitations of analog beamforming in at least part of the bandwidth with a limited increase in costs.
* An analysis of the performance of Proportional Fairness (PF) resource allocation for multiple users equipped with the proposed architecture, presented in Sec. IV, showing that in a multi-user environment a large part of the communication can be performed exploiting digital beamforming, even if the users do not operate with digital beamforming on the _full_ band.
* An update to the 3rd Generation Partnership Project (3GPP) signaling, described in Sec. V, that enables support for this technology with a small number of additional parameters in the User Equipment (UE) configuration and resource allocation messages.
## II Beamforming architectures
### _Classical beamforming architectures_
The classical fully connected hybrid beamforming architecture, depicted in Fig. 1, is widely used in the industry for mmWave communications.
However, this architecture has some limitations that can impact its effectiveness in modern wireless communication systems. One of the main limitations is the need for long and complex beam training and refinement procedures to establish and maintain a connection. Before transmitting the signal, the system needs to determine the optimal beam direction and shape, which involves sweeping through different beam directions and measuring the channel response. This process can take a long time, and it needs to be performed for each user within the system, so that it can become a significant overhead especially when the number of users is large. Additionally, these beam training and refinement procedures can cause excess delay in time-sensitive packets, as communication is not possible while the system is training the beam. Secondly, due to the limitations in Channel State Information (CSI) acquisition capability (i.e., the inability of the system to acquire the full channel matrix, but rather only to measure the power for a specific beam), modern wireless communication systems often make use of a small codebook of pencil beams to limit the number of channel measurements [12]. These beams concentrate the array gain in a single spatial direction and are effective in maximizing throughput for a line of sight scenario with limited reflections, where the signal can travel directly from the transmitter to the receiver without obstructions and with little multipath. However, when operating in an environment with rich multipath, using pencil beams comes at a cost. By focusing on a single multipath component, pencil beams sacrifice spatial diversity. Hence, when the selected component is blocked by an obstacle, this strategy is likely to experience a complete signal loss, and therefore not be able to communicate until a time consuming beam training is performed.
The naive solution to this issue is to use a fully digital beamforming architecture. The Radio Frequency (RF) frontend of such an architecture is depicted in Fig. 2. Note that, for the sake of compactness, in the following we will refer to an RF frontend that enables the implementation of digital beamforming as a digital beamforming architecture. This architecture allows the receiver to collect the full signal information for each antenna, and then perform the combining in the digital domain. While the exact digital beamforming algorithms are outside the scope of this paper, it is clear that this scheme provides some advantages, including but not limited to:
* The ability to combine signals from different antennae with different coefficients at different frequencies. This allows for the coherent combining of the multipath components therefore providing a much higher beamforming gain.
* The possibility to acquire full CSI from just the demodulation reference signals, without long and costly beam training operations. This enables overhead-free beamforming, as well as the ability to design better and more complex beams. It also makes it possible to efficiently
Fig. 1: Classical fully connected hybrid beamforming hardware architecture.
Fig. 2: Classical fully digital beamforming hardware architecture.
perform localization and sensing tasks by observing the channel estimates, without impacting the communication.
* The ability to perform spatial multiplexing, i.e., to send individual data streams to different receivers at the same time and frequency, separating the data stream by appropriately combining the signal from each antenna.
Unfortunately, this architecture may not be feasible, especially on the UE side, due to cost and power consumption constraints. Additionally, even though in a multi-user environment the resource allocation may be localized in frequency, a UE implementing this architecture typically receives the whole bandwidth at all times with the same capabilities, even when the data meant for it occupies only a small portion of the entire bandwidth.
For example, let us consider the resource grid depicted in Fig. 3. Here the horizontal axis represents time, the vertical axis frequency, and the squares the Resource Blocks (RBs) that can be assigned to users. The RBs are colored according to the user they are allocated to. Let us now consider user \(1\) (orange). We can note how, out of the \(150\) available RBs, it is exploiting only \(28\), which corresponds to roughly \(18.7\%\). With a classical beamforming system the user would be receiving the signal for all the RBs, thus wasting \(81.3\%\) of the data acquired. Note that, despite being more expensive and complex, even the fully digital beamforming architecture suffers from this inefficiency.
To overcome the limitations and inefficiencies discussed in this section, we propose a new architecture described in the following.
### _Proposed architecture_
We require a system that is both wideband, to support high data rates, and high rank, to better exploit the channel diversity, while maintaining practical costs and power consumption. While a fully digital Multiple Input Multiple Output (MIMO) architecture would satisfy the former constraints, it fails on the latter, i.e., its implementation is inherently costly and power-hungry due to the need for multiple high-speed ADCs. Hence, we propose a new architecture [11], depicted in Fig. 4.
This architecture comprises \(N\) antennas connected via a certain number of classical analog beamforming RF chains (marked in yellow in the figure), which operate on the full bandwidth \(B_{A}\) of the system. In particular, the signal from each antenna is amplified by a Low Noise Amplifier (LNA), phase shifted and then added together in the analog domain. Subsequently, the combined signal is down-converted and digitized by a single high-speed ADC. In addition to the analog beamforming system, for each antenna we implement an additional dedicated RF chain (marked in light blue) to implement digital beamforming on a smaller portion of \(B_{A}\). In particular, the signal from each antenna can be extracted after the front-end LNAs, individually down-converted to baseband and filtered with a Low Pass Filter (LPF). When the switch in Fig. 4 is in position B, one analog RF chain is deactivated and its ADC is repurposed to digitalize the multiplexing of all the low bandwidth individual antenna signals, in order to obtain a low bandwidth digital beamforming chain. More specifically, since \(N\) antennas are multiplexed into a single ADC that can support a bandwidth \(B_{A}\), the digital beamforming chain operates on a bandwidth \(B_{D}=\frac{B_{A}}{N}\). For the sake of simplicity, in this paper we will focus on the case with 2 RF chains, one implementing only analog beamforming, while the other that can be repurposed to implement digital beamforming. Furthermore, we note that the mixers belonging to the digital beamforming chain do not necessarily need to be fed with the same Local Oscillator (LO) frequency of the analog beamforming, therefore we can assume the center frequency of the digital beamforming part is anywhere within or outside the band of the analog beamforming. Note that, the proposed adaptive architecture can entail any number of RF chains, of which any subset can be converted to digital beamforming, clearly with higher cost, complexity, and power consumption. Moreover, though the proposed architecture is for the receiver side, a similar architecture can be replicated for a transmitter, thus the results of this study might extend also to uplink transmissions.
We note that, with this method, we can allocate the samples of the ADC to acquire a smaller part of the bandwidth with higher rank, thus reducing the number of wasted samples used by classical architectures to digitize the full bandwidth even when data is localized in frequency. Moreover, this ease of acquisition can enable better and more complete CSI estimation, which can better capture the rich multipath of industrial environments. This allows for more sophisticated beam design strategies as compared to the classic pencil beam, which can better exploit the channel diversity.
Fig. 4: Proposed hardware architecture. The wideband analog beamforming and the narrowband fully-digital beamforming blocks are highlighted in yellow and blue, respectively.
Fig. 3: Example resource grid.
To summarize, in this paper we assume that the proposed architecture can operate in two modes:
* _Hybrid mode_, where the switch is in position A, and both ADCs are connected to an analog beamforming chain and operate on the full bandwidth \(B_{A}\).
* _Heterogeneous mode_, where the switch is in position B. Here the first ADC is still connected to the analog beamforming chain, and operates on the full bandwidth \(B_{A}\), whereas the second ADC is multiplexed between all the antennas, and therefore operates in digital beamforming on a reduced bandwidth \(\frac{B_{A}}{N}\).
Given the proposed architecture, it is apparent that, due to the low bandwidth requirements of the digital beamforming RF chains and the re-purposing of the ADC meant for the analog beamforming chain, the complexity, power consumption and costs are not significantly increased. These aspects are better analyzed in Sec. III.
## III Complexity, power consumption and cost analysis
In this section, the power consumption and complexity of the proposed architecture are discussed and compared with those of the classical hybrid beamforming and fully digital beamforming architectures. We consider a system with a \(28\)GHz carrier and \(400\)MHz bandwidth, and select off-the-shelf components for the comparison. These were chosen through a thorough web search, filtering for the characteristics required to meet the system specifications. The search result is then sorted by price and the cheapest component is chosen.
It should be noted that the selected components are just examples to illustrate the main design trade-offs, and they might not entirely reflect the final cost and power consumption of an integrated device. This exercise however can give important insights into the costs and complexity associated with the hardware aspects of the proposed architecture.
The components selected for the comparison, as well as their main characteristics, are listed in the following:1
Footnote 1: All prices refer to those listed in the DigiKey website as of September 2023, for a quantity of 25 pieces.
* The mixer is the Mini-Circuits MDB-54H+ [13]. It operates in the \(20\)-\(50\)GHz frequency range, and requires an LO power of \(15\)dBm (roughly \(32\)mW). Its price is $30.66.
* The ADC is the Texas Instruments ADS5403 [14]. It is capable of sampling at a rate of up to \(500\)Msps with a \(12\)bit resolution, which is sufficient to handle the required maximum bandwidth of \(400\)MHz. It has a total power dissipation of \(1\)W. At the time of writing (May 2023), its price is $152.61.
* The LNA is the Mini-Circuits PMA3-313GLN+ [15]. It operates between \(26.5\) and \(31\)GHz with a gain of \(18\)dB. It is designed for a \(4\)V power supply with a biasing current of \(78\)mA, for a total power consumption of \(312\)mW. Its price is $33.43.
Note that, since modern communication systems typically use IQ sampling, each RF chain requires \(2\) mixers and \(2\) ADCs. All other components (multiplexer, phase shifters and filters) are cheap and have low power consumption.
Let us now consider a system with \(32\) antennas. The required number of components to build such a system is listed in Tab. III. Assuming the LO generation has an efficiency of \(50\%\), the power needed to generate the reference signal for each mixer is \(2\times 32\)mW = \(64\)mW. From this, we can compute the power consumption of each system, as listed in Tab. IV.
It is clear from Tab. IV that the power consumption due to the ADCs is far more significant than that of the mixers. As a consequence, the power consumption of the fully digital architecture, which requires as many ADCs as the number of antennas, is more than \(5\) times larger than that of the hybrid beamforming architecture. Instead, the proposed architecture, which can reuse the same ADC for multiple antennas, has only slightly larger power consumption (\(+30\)%) with respect to the hybrid architecture. It should be noted that, in industrial applications, UEs that need high performance communications typically do not suffer from power constraints. They are in fact typically mounted on robots that consume hundreds or thousands of Watts, making the transceiver power consumption negligible. Instead, the limiting factor is the thermal output of the device, as this needs to be kept within operating temperatures. In the example above, the hybrid and proposed architectures can be cooled with a passive heatsink of limited size, whereas the fully digital architecture is likely to require a large heatsink. The latter might be problematic in terms of space constraints. Additionally, it might result in the inclusion of an active cooling component, such as a fan or a water pump, which can be an issue as they suffer from wear, especially in the harsh conditions of manufacturing plants, and therefore require active maintenance and are prone to cause disruptions when they get damaged.
Another factor to consider is the cost of the components. Clearly, the costs listed above are for low quantities, and are not the final production costs. Moreover, such a system would be most likely integrated in a few chips, instead of being composed of individual parts for each components. However, for the sake of this evaluation we will assume that the cost ratios between the components are similar to those of the final implementation. This is a reasonable assumption for a rough estimate, assuming the dimensional factors of the macro-components are somehow maintained in
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline
**Architecture** & **Mixers** & **ADCs** & **LNAs** & **Total** \\ \hline Hybrid & \(256\)mW & \(4\)W & \(10\)W & \(14.26\)W \\ Proposed & \(4.1\)W & \(4\)W & \(10\)W & \(18.1\)W \\ Fully digital & \(4.1\)W & \(64\)W & \(10\)W & \(78.1\)W \\ \hline \end{tabular}
\end{table} TABLE IV: Comparison of the power consumption of each architecture.
\begin{table}
\begin{tabular}{c|c|c|c} \hline
**Architecture** & **\# of mixers** & **\# of ADCs** & **\# of LNAs** \\ \hline Hybrid & \(4\) & \(4\) & \(32\) \\ Proposed & \(64\) & \(4\) & \(32\) \\ Fully digital & \(64\) & \(64\) & \(32\) \\ \hline \end{tabular}
\end{table} TABLE III: Components required by each type of architecture.
their integrated version, and considering that the cost of the Integrated Circuits (ICs) depends on their footprint in silicon. From the number of components indicated in Tab. III we can hence estimate an indicative cost of each architecture, as shown in Tab. V.
We can observe that the cost of the proposed architecture is about twice that of the classical hybrid beamforming, whereas the fully digital architecture is more than \(7\) times more expensive. In future factories, where we expect a huge number of connected devices, such difference might have a very significant CAPEX impact. Moreover the relaxed cooling requirements of the proposed architecture will also reduce OPEX. While an in depth estimation of CAPEX and OPEX is beyond the scope of this study, it should be clear that the proposed architecture is significantly cheaper than a fully digital beamforming solution.
Finally, we consider the complexity associated with the various architectures. To roughly quantify the complexity of the processing required to use such architectures, we consider the data rate of the samples generated by the systems. In particular, the hybrid beamforming architecture has \(4\) ADCs generating \(500\) million \(12\)-bit samples per second, for a total of \(24\)Gbps. The proposed architecture, having the same number of ADCs, will generate the same amount of data to be processed by the baseband. In contrast, the fully digital beamforming architecture uses \(64\) ADCs, aggregating to a total data rate of \(384\)Gbps. If we assume that the processing time scales linearly with the number of bits generated by the ADCs, the fully digital beamforming architecture would consequently require \(16\) times more processing power compared to the proposed and hybrid architectures. This would also lead to a \(16\)-fold increase in the power consumption in the digital domain for the fully digital beamforming architecture. In addition, we observe that not all of the signal processing algorithms employed in modern receivers have linear complexity, therefore the gap in the required processing capabilities and power consumption will likely be even larger.
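The arithmetic behind Tabs. III-V and the data-rate comparison above can be reproduced with a short script. The following Python sketch uses the per-component figures quoted in this section (LO power of \(2\times 32\)mW per mixer, \(1\)W per ADC, \(312\)mW per LNA, and the listed unit prices in USD); it is purely illustrative, and small rounding differences with respect to the tables may appear.

```python
# A small sketch reproducing the component-count arithmetic behind Tabs. III-V
# and the ADC data-rate comparison, using the per-part figures quoted in this
# section. Illustrative only: small rounding differences may appear.
N_ANT = 32
ARCH = {  # (mixers, ADCs, LNAs) per architecture, as in Tab. III
    "Hybrid":        (4,         4,         N_ANT),
    "Proposed":      (2 * N_ANT, 4,         N_ANT),
    "Fully digital": (2 * N_ANT, 2 * N_ANT, N_ANT),
}
P_MIXER, P_ADC, P_LNA = 0.064, 1.0, 0.312      # W per part
C_MIXER, C_ADC, C_LNA = 30.66, 152.61, 33.43   # USD per part
ADC_GBPS = 0.5e9 * 12 / 1e9                    # 500 Msps x 12 bit = 6 Gbps

for name, (mix, adc, lna) in ARCH.items():
    power = mix * P_MIXER + adc * P_ADC + lna * P_LNA
    cost = mix * C_MIXER + adc * C_ADC + lna * C_LNA
    print(f"{name:14s} power {power:6.2f} W   cost ${cost:9.2f}   "
          f"baseband load {adc * ADC_GBPS:6.1f} Gbps")
```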
Hence, we have shown that, in terms of cost, complexity and power consumption, the proposed architecture is clearly an interesting middle ground between the classical hybrid beamforming and the complex fully digital beamforming architectures. Next, in Sec. IV we demonstrate the benefits of the proposed architecture in a multi-user resource allocation scenario.
## IV Multi-user allocation for partial band MIMO
### _Proportional Fairness Resource Allocation_
This section discusses the use of the proposed architecture for communication in a multi-user setting. We assume that the Base Station (BS) is equipped with a fully digital (or at least high rank) architecture capable of fully exploiting the rank of the channel. The UEs are equipped with the proposed architecture which is also capable of exploiting the full channel rank in the digital beamforming part of the bandwidth.
Specifically, we assume that there are \(U\) users with an architecture consisting of \(N\) antennas sharing the total bandwidth \(B_{A}\). Recalling the resource allocation example shown in Fig. 3, we can see that user \(1\) (in orange) would potentially benefit from digital beamforming in the central RBs for the first four slots, then in RBs \(2\) and \(3\) for the following 4 slots, and in the top RBs for the rest of the time. This is however impractical, as it requires re-configuring the LO of the digital chains every few slots. Such re-configuration can take from tens of \(\mu\)s to some ms [16]. To avoid this issue, we propose that at the time of connection the BS informs the UE whether to use the second RF chain for digital beamforming and on which part of the band. Subsequently, at the time of MAC scheduling, the BS preferentially schedules each UE on the frequencies where it performs digital beamforming.
In the example, the BS could instruct user \(2\) to perform digital beamforming in RBs \(0\) and \(1\), user \(1\) on RBs \(4\) and \(5\), and user \(3\) on RBs \(8\) and \(9\). It would then schedule the data according to the new resource grid in Fig. 5. In this case, the data would fit entirely within the band where the UEs perform digital beamforming. With this allocation, the first RF chain, which operates with analog beamforming, still spends \(81.3\%\) of its samples to acquire the area covered with the black strips, but the second RF chain, operating in digital beamforming, only acquires \(30\) RBs, out of which \(28\) are intended for that user, thus only wasting \(6.7\%\) of the ADC samples. In total, the "efficiency" of the proposed architecture, intended as the fraction of ADC samples on frequency bands carrying data, can be estimated as
\[100-\frac{81.3+6.7}{2}=56\% \tag{1}\]
while that of the classical fully connected hybrid beamforming architecture in the same scenario would be \(18.7\)%, thus confirming that the proposed architecture is significantly more efficient than the classical one when UEs' data are allocated to sub portions of the whole bandwidth using digital beamforming.
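The ADC-sample "efficiency" in (1) is simply the fraction of acquired samples that fall on RBs carrying the UE's own data, averaged over the RF chains. The following short Python sketch reproduces the numbers of the example in Figs. 3 and 5; it is illustrative only.

```python
# A short sketch of the ADC-sample "efficiency" in (1): the fraction of
# acquired samples that fall on RBs carrying the UE's own data, averaged over
# the RF chains. The numbers reproduce the example of Figs. 3 and 5.
def chain_efficiency(rbs_with_own_data: int, rbs_acquired: int) -> float:
    return rbs_with_own_data / rbs_acquired


def architecture_efficiency(per_chain) -> float:
    """per_chain: list of (RBs with own data, RBs acquired) per RF chain."""
    return 100.0 * sum(chain_efficiency(d, a) for d, a in per_chain) / len(per_chain)


# Classical hybrid beamforming: both chains span all 150 RBs, 28 carry data.
print(architecture_efficiency([(28, 150), (28, 150)]))   # ~18.7 %
# Proposed: analog chain as above, digital chain acquires 30 RBs, 28 with data.
print(architecture_efficiency([(28, 150), (28, 30)]))    # 56 %
```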
Note that the analog RF chain is still necessary to decode the resource block allocation. Practically, there is control information sent by the BS that will reside outside the digital beamforming RBs. In order to decode this information, the analog RF will be needed. This is illustrated in Fig. 6, which
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline
**Architecture** & **Mixers** & **ADCs** & **LNAs** & **Total** \\ \hline Hybrid & $122.64 & $610.44 & $1069.76 & $1802.84 \\ Proposed & $1962.24 & $610.44 & $1069.76 & $3642.44 \\ Fully digital & $1962.24 & $9767.04 & $1069.76 & $12799.04 \\ \hline \end{tabular}
\end{table} TABLE V: Comparison of the indicative cost of each architecture.
shows an example spectrogram of a 5G downlink signal with a single user. Here it can be observed that some control signals, such as the Synchronization Signal Block (SSB) and Physical Downlink Control Channel (PDCCH), reside outside the RBs used by the UE for receiving data. Moreover, the analog RF chain will also be needed in case the data do not fit within the RBs used for digital beamforming, and have to be partially allocated to other RBs. This case is illustrated in Fig. 7.
Next, to analyze the performance of the proposed architecture from the aspect of spectral efficiency, let us assume that all the users are equal, and experience the following:
* A Spectral Efficiency (SE) of \(C_{A}\) bit/s/Hz when using the analog RF chain in _heterogeneous mode_.
* A SE of \(C_{D}\) bit/s/Hz when using the digital RF chain in _heterogeneous mode_.
* A SE of \(C_{H}\) bit/s/Hz when using the _hybrid mode_.
It is important to state that, in typical situations, we have:
\[C_{A}\leq C_{H}\leq C_{D}. \tag{2}\]
Assuming that the assignment of the digital beamforming subband is performed with minimum overlap, since each user can perform digital beamforming on \(\frac{1}{N}B_{A}\), the fraction of bandwidth that has at least one user with digital beamforming on it is given by:
\[\zeta_{D}=\min\left(1,\frac{U}{N}\right). \tag{3}\]
If all UEs are configured to perform digital beamforming in a part of the band, the maximum average SE achievable with digital beamforming enabled can be expressed as:
\[C_{max}^{(D)}=(1-\zeta_{D})C_{A}+\zeta_{D}C_{D}=C_{A}+\zeta_{D}(C_{D}-C_{A}). \tag{4}\]
Comparing this to the SE of the classical hybrid beamforming system, we observe that
\[C_{max}^{(D)}<C_{H}\iff\zeta_{D}(C_{D}-C_{A})<C_{H}-C_{A}. \tag{5}\]
Recalling (2), the term \(C_{D}-C_{A}\) is always positive, therefore we have
\[C_{max}^{(D)}<C_{H}\iff\zeta_{D}<\frac{C_{H}-C_{A}}{C_{D}-C_{A}}. \tag{6}\]
Moreover, from (2) we also conclude that \((C_{D}-C_{A})\geq(C_{H}-C_{A})\), therefore
\[0<\frac{C_{H}-C_{A}}{C_{D}-C_{A}}\leq 1, \tag{7}\]
which assures the resulting value of \(\zeta_{D}\) is meaningful. We can now replace the definition of \(\zeta_{D}\) in (6) to obtain:
\[\frac{U}{N}<\frac{C_{H}-C_{A}}{C_{D}-C_{A}}\Leftrightarrow U<\frac{N(C_{H}-C _{A})}{C_{D}-C_{A}}. \tag{8}\]
Under this condition, it is always better to utilize the two RF chains for hybrid beamforming. This also allows us to write the overall maximum achievable average SE as a function of the number of users, which is given as:
\[C_{max}=\begin{cases}C_{H}&\text{if}\:U<\frac{N(C_{H}-C_{A})}{C_{D}-C_{A}};\\ C_{A}+\frac{U}{N}(C_{D}-C_{A})&\text{if}\:\frac{N(C_{H}-C_{A})}{C_{D}-C_{A}}< U<N;\\ C_{D}&\text{otherwise}.\end{cases} \tag{9}\]
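The piecewise bound in (9) can be evaluated directly. The following minimal Python sketch computes \(C_{max}\) as a function of the number of users for the parameter values used in the numerical results below (\(N=32\), \(C_{A}=1\), \(C_{H}=1.5\), \(C_{D}=4\) bit/s/Hz); it is an illustration of the three regions, not part of the original simulator.

```python
# A minimal sketch evaluating the upper bound in (9) as a function of the
# number of users, with example parameter values. Illustrative only.
def c_max(num_users: int, n_ant: int = 32,
          c_a: float = 1.0, c_h: float = 1.5, c_d: float = 4.0) -> float:
    threshold = n_ant * (c_h - c_a) / (c_d - c_a)      # ~5.33 users here
    if num_users < threshold:
        return c_h                                      # hybrid mode is better
    if num_users < n_ant:
        return c_a + num_users / n_ant * (c_d - c_a)    # partial digital coverage
    return c_d                                          # whole band covered digitally


if __name__ == "__main__":
    for u in (2, 5, 6, 16, 32, 50):
        print(u, round(c_max(u), 3))
```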
This function is illustrated in Fig. 8. The cyan line represents the maximum SE achievable when operating in _hybrid mode_, whereas the red line represents the maximum performance of the _heterogeneous mode_. The solid line represents the value of \(C_{max}\), which is the maximum of the two.
From the figure, we can clearly identify the three regions corresponding to the cases in (9). In the first part, the SE of the _hybrid mode_ dominates, because there are not enough users to have digital beamforming on a significant portion of the bandwidth when operating in _heterogeneous mode_. In the central region, the number of users is sufficient for the
Fig. 8: Maximum achievable SE as a function of the number of users.
Fig. 6: Example 5G downlink spectrogram with SSB and PDCCH information located outside the RBs designated for digital beamforming.
Fig. 7: Example resource grid with proposed architecture, where the data does not fit entirely within the RBs defined for digital beamforming.
combination of analog and digital beamforming of the _heterogeneous mode_ to achieve a better SE than the _hybrid mode_. However, the sub-channels used for digital beamforming are not enough to cover the whole available bandwidth, so that \(C_{max}<C_{D}\). Finally, in the third region, all portions of the band are used by at least one UE with digital beamforming, therefore potentially achieving fully digital MIMO across the whole bandwidth. These considerations refer to the maximum achievable rate. However, there is no guarantee that the actual system performance will be close to that bound. In particular, the bound is achievable only if all RBs that can be used for digital beamforming are actually allocated to the users that were given those channels to perform digital beamforming.
To verify that it is possible to exploit such capability in a more realistic scenario, we consider a system implementing Orthogonal Frequency Division Multiple Access (OFDMA) with \(R\) RBs, and receivers with \(N\) antennas. For the sake of simplicity, we assume that an RB lasts \(1\)s and has a bandwidth of \(1\)Hz, thus the capacity of an RB is equal to the spectral efficiency. When operating in _heterogeneous mode_, the second RF chain of user \(u\in\{1,...,U\}\) performs digital beamforming in \(\alpha=\left\lfloor\frac{R}{N}\right\rfloor\) RBs, and the allocations to different users are either orthogonal or completely overlapping. In particular, user \(u\) performs digital beamforming on RBs \(r\in\mathcal{D}_{u}\), where
\[\mathcal{D}_{u}=\big\{(u\alpha\bmod R),\,(u\alpha+1\bmod R),\,\ldots,\,(u\alpha+\alpha-1\bmod R)\big\}. \tag{10}\]
We also define the set of users performing digital beamforming in RB \(r\) as
\[\mathcal{U}_{r}=\left\{u:r\in\mathcal{D}_{u}\right\}. \tag{11}\]
For each user, we keep an estimate of its average rate \(C_{u}\). After each slot, the estimate is updated as \(C_{u}\leftarrow\gamma C_{u}+(1-\gamma)\bar{C}_{u}\), where \(\bar{C}_{u}\) is the total rate experienced by that user in the slot, which, under the considered assumptions, equals the amount of data transferred in that slot, and \(\gamma\in(0,1)\) is a forgetting factor. The allocation process is depicted in Fig. 9. We assume the arrivals, depicted in the figure as orange arrows, happen between time slots. At each slot, we iterate over the RBs in the order shown by the cyan arrows and assign each RB \(r\) to the UE \(u\) that satisfies the following constraints:
* Its buffer is not empty.
* It has the highest PF weight \(W=\frac{C_{r}}{C_{u}}\).
In _hybrid mode_ we consider \(C_{r}=C_{H}\), whereas for the _heterogeneous mode_ we consider
\[C_{r}=\begin{cases}C_{D},&\text{if }r\in\mathcal{D}_{u},\\ C_{A},&\text{otherwise}.\end{cases} \tag{12}\]
If multiple UEs satisfy these constraints, we choose one at random. When an RB is assigned to a UE, the UE's queue is decreased by \(C_{r}\).
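For concreteness, the following Python sketch re-implements the allocation process just described in _heterogeneous mode_: the digital-beamforming sets \(\mathcal{D}_{u}\) follow (10), the per-RB rate follows (12), and each RB goes to the backlogged UE with the highest PF weight. It is a simplified illustration (ties are broken deterministically rather than at random, and the memory factor and traffic parameters are example values), not the simulator used for the results below.

```python
# A simplified re-implementation of the PF allocation process (heterogeneous
# mode): D_u follows Eq. (10), the per-RB rate follows Eq. (12), and each RB is
# assigned to the backlogged UE with the highest PF weight. Ties are broken by
# first index; GAMMA and the traffic parameters are example values.
import numpy as np

R, N, U = 640, 32, 20                  # RBs per slot, antennas, users
C_A, C_D = 1.0, 4.0                    # SE with analog / digital beamforming
GAMMA, LAM, BUF = 0.9, 100.0, 1000.0   # PF memory, mean arrivals (bits), buffer
SLOTS, ALPHA = 200, R // N

rng = np.random.default_rng(0)
D = [{(u * ALPHA + k) % R for k in range(ALPHA)} for u in range(U)]   # Eq. (10)
c_rb = np.array([[C_D if r in D[u] else C_A for u in range(U)]
                 for r in range(R)])                                  # Eq. (12)
queue, avg_rate, total = np.zeros(U), np.full(U, 1e-3), 0.0

for _ in range(SLOTS):
    queue = np.minimum(queue + rng.uniform(0.0, 2 * LAM, U), BUF)     # arrivals
    served = np.zeros(U)
    for r in range(R):
        weight = np.where(queue > 0, c_rb[r] / avg_rate, -np.inf)     # PF weight
        u_star = int(np.argmax(weight))
        if not np.isfinite(weight[u_star]):
            continue                                                  # all queues empty
        grant = min(queue[u_star], c_rb[r, u_star])
        queue[u_star] -= grant
        served[u_star] += grant
    avg_rate = GAMMA * avg_rate + (1 - GAMMA) * served                # PF update
    total += served.sum()

print("mean aggregate rate per slot:", total / SLOTS)
```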
We assume each node \(n\), independently of the others, generates a random number \(v_{n}\) of bytes at every slot. Let \(F_{v}(a)=\Pr[v_{n}\leq a]\) be the Cumulative Distribution Function (CDF) of \(v_{n}\), identical for all nodes. Now, let us focus on the set \(\mathcal{D}_{u}\) of RBs assigned to a certain user \(u\). Let \(V_{u}=\sum_{n\in\mathcal{U}_{r}}v_{n}\) be the aggregate traffic generated by all the nodes that are pre-assigned to the RBs in \(\mathcal{D}_{u}\). Let \(F_{V_{u}}(\cdot)\) be the CDF of \(V_{u}\), which can be computed from \(F_{v}(\cdot)\) with standard methods. For the sake of analysis, we also assume that:
1. At the beginning of the slot, all nodes in \(\mathcal{U}_{r}\) have empty queues.
2. Their average throughput is such that the PF algorithm will preferably assign the RBs in \(\mathcal{D}_{u}\) to these nodes, before considering other nodes which would use those RBs with analog beamforming.
Let \(X_{u}\) be the number of RBs in \(D_{u}\) assigned to users that can perform digital beamforming. We hence have that
\[X_{u}\leq\min\left(\alpha,\left\lceil\frac{V_{u}}{C_{D}}\right\rceil\right), \tag{13}\]
and its expectation can be computed by deriving the Probability Density Function (PDF) of \(V_{u}\) from the finite difference of its CDF and averaging over all possible values \(h\) of \(X_{u}\) up to \(\alpha\), to obtain
\[\mathrm{E}[X_{u}]=\sum_{h=1}^{\alpha-1}h\left(F_{V_{u}}(hC_{D})-F_{V_{u}}((h-1)C_{D})\right)+\alpha\left(1-F_{V_{u}}((\alpha-1)C_{D})\right). \tag{14}\]
An estimate of the total system throughput can hence be obtained as
\[\tilde{C}(U,F_{v})=\min\left(G,\;\sum_{u=1}^{N}\left(\mathrm{E}[X_{u}]C_{D}+(\alpha-\mathrm{E}[X_{u}])C_{A}\right)\right) \tag{15}\]
where \(G\) is the overall traffic generated by the \(U\) nodes in a slot, \(u>U\Rightarrow E[X_{u}]=0\), i.e. users that are not in the system do not generate traffic, and the summation is up to \(N\) because a user \(u^{\prime}>N\) will share the sub-band with user \(u=u^{\prime}\bmod N\).
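The estimate in (13)-(15) can be evaluated numerically once \(F_{V_{u}}\) is specified. The following Python sketch implements (14) and (15) for the uniform-arrival model used in the results below, where \(F_{V_{u}}\) becomes an Irwin-Hall CDF; the closed-form CDF used here is the standard Irwin-Hall expression, and the script is illustrative only.

```python
# A sketch evaluating the estimate in (14)-(15) with an Irwin-Hall CDF for
# F_Vu (sum of |U_r| uniform arrivals on [0, 2*Lambda]). Illustrative only.
from math import comb, factorial, floor


def irwin_hall_cdf(x: float, n: int) -> float:
    """CDF of the sum of n i.i.d. Uniform(0, 1) random variables."""
    if x <= 0:
        return 0.0
    if x >= n:
        return 1.0
    return sum((-1) ** k * comb(n, k) * (x - k) ** n
               for k in range(floor(x) + 1)) / factorial(n)


def expected_x_u(f_vu, alpha: int, c_d: float) -> float:
    """Eq. (14): expected number of digitally beamformed RBs in D_u."""
    e = sum(h * (f_vu(h * c_d) - f_vu((h - 1) * c_d)) for h in range(1, alpha))
    return e + alpha * (1.0 - f_vu((alpha - 1) * c_d))


def estimated_rate(users: int, n_ant: int, alpha: int, lam: float,
                   c_a: float, c_d: float) -> float:
    """Eq. (15), with G the mean total traffic offered per slot."""
    g = users * lam
    total = 0.0
    for u in range(n_ant):                              # one term per sub-band
        n_sharing = len(range(u, users, n_ant))         # |U_r| for this sub-band
        if n_sharing == 0:
            ex = 0.0                                    # E[X_u] = 0 when u > U
        else:
            f_vu = lambda a, n=n_sharing: irwin_hall_cdf(a / (2 * lam), n)
            ex = expected_x_u(f_vu, alpha, c_d)
        total += ex * c_d + (alpha - ex) * c_a
    return min(g, total)


if __name__ == "__main__":
    for u in (5, 20, 40):
        print(u, round(estimated_rate(u, n_ant=32, alpha=20, lam=50.0,
                                      c_a=1.0, c_d=4.0), 1))
```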
### _Results_
In this section, we present the results for the system discussed so far with \(R=640\) RBs and \(N=32\) antennas. The digital beamforming SE is \(C_{D}=4\) bit/s/Hz and the analog beamforming SE is \(C_{A}=1\) bit/s/Hz. When operating in _hybrid mode_, the SE is \(C_{H}=1.5\) bit/s/Hz. Further, each user has a buffer of \(1000\)bits, and at each slot it generates a random number of bits between \(0\) and \(2\Lambda\), where \(\Lambda\) is the average generation rate. With this
Fig. 9: Allocation process.
assumption, \(F_{V_{u}}(a)\) is the CDF of an Irwin-Hall distribution with parameter \(n=|\mathcal{U}_{r}|\), evaluated at \(\frac{a}{2\Lambda}\).
Fig. 10 shows the performance achieved with a data generation rate of \(\Lambda=500\)bps. This rate is sufficient to saturate the total capacity with a few users. In particular, Fig. 10(a) shows the aggregate rate of the base station as a function of the number of UEs, as well as the maximum achievable rate \(C_{max}\) and the estimated rate \(\tilde{C}(U,F_{v})\). Here we can observe that below \(5\) users the _hybrid mode_ performs better than the _heterogeneous mode_. We note that, in this case, we have \(\frac{N(C_{H}-C_{A})}{C_{D}-C_{A}}=5.33\), therefore the limit of \(5\) UEs corresponds to what is predicted by the upper bound. We can also observe that after this point the digital beamforming curve tightly follows the bound, confirming that a PF scheduler can fully exploit the capabilities of the proposed architecture.
Fig. 10(b) shows the average rate observed by each user. Here we can again observe that the _hybrid mode_ performs better only with fewer than \(5\) users, and the _heterogeneous mode_ provides a much higher rate for a large number of users.
In Fig. 10, we can also observe how the proposed estimate is close to the real rate for a loaded system. This is expected as, for a large traffic, \(X_{u}\) is likely to be close to \(\alpha\), and therefore it is expected that most RBs are allocated to digital beamforming.
Next, Fig. 11 shows the results for an application rate of \(\Lambda=100\)bps. In Fig. 11(a) we can see that, in _hybrid mode_, for \(U\leq 9\) the rate is limited by the application generation rate. In _heterogeneous mode_ instead, this happens for \(U\leq 17\), meaning that the proposed scheme is able to support almost twice the number of users at the full rate. In Fig. 11(b) we can also observe that the average rate observed by each user drops more rapidly for the hybrid beamforming scheme. Thus, the hybrid beamforming system can support only \(10\) UEs with \(90\)% of the required rate, whereas the proposed scheme can support up to \(21\) UEs for the same required rate. In this case, the estimate turns out to be conservative compared to the actual rate. This is due to the assumption of an empty queue (A1), which in this case is unrealistic.
Further, in Fig. 12 we can observe the results for an application rate of \(\Lambda=50\)bps. From Fig. 12(a) it can be observed that the maximum capacity \(C_{max}\) achievable with the _heterogeneous mode_ is not saturated even at \(50\) UEs, whereas the hybrid beamforming capacity is saturated at \(U=19\). Despite not saturating \(C_{max}\), in Fig. 12(b) we can observe that for \(U\geq 38\) the system is unable to support the aggregate generation rate of the UEs. This suggests that in this case the scheduler is unable to fully exploit the digital beamforming, and ends up allocating some RBs to analog beamforming. In this situation the estimate is again close to the simulated value, as assumption A1 is more realistic due to the low traffic generated by the individual UEs.
Fig. 13 shows in cyan the fraction of RBs allocated to users performing analog beamforming. It also shows in orange the percentage of such allocations that are due to the lack of a digital beamforming UE with data to send, rather than to a higher PF weight of the analog beamforming UE. Indeed, we can see that \(100\)% of the analog allocations are made because there is no data from the digital beamforming users. We recall that this corresponds to assumption A2, thus validating the proposed theoretical framework.
Lastly, Fig. 14 shows the average fraction of unused RBs as a function of \(U\) for \(\Lambda=50\)bps. Here we can observe that for \(U\geq 19\) the hybrid beamforming system needs to use all the available RBs. This is consistent with the fact that we observe an average rate drop for the UEs after that point. In contrast, at \(U=19\) the proposed method uses only \(59\%\) of the RBs, thus saving \(41\%\) of the transmission resources. Assuming a constant power spectral density across the allocated RBs, this can directly result in a saving in the transmission power.
## V 3GPP signaling for the proposed architecture
In this section, we analyze the signaling requirements for realizing an end-to-end system based on the proposed architecture. We also present the potential modifications of the current 3GPP standard that are needed to utilize the proposed
Fig. 11: Rate achieved by the user and system for \(\Lambda=100\)bps.
Fig. 12: Rate achieved by the user and system for \(\Lambda=50\)bps.
Fig. 10: Rate achieved by the user and system for \(\Lambda=500\)bps.
architecture. Specifically, we need to be able to perform the following tasks:
1. Notify the BS of the UEs' capability of trading some analog chains for digital beamforming, as well as the number of antennas they can use in the digital beamforming chain.
2. Notify the UEs of which configuration should be used based on the status of the network (e.g., number of users and traffic pattern).
3. Update the UEs' configurations as the network conditions change.
4. Notify the resource allocation information for utilizing different capabilities in different parts of the band.
### _Capabilities reporting_
Let us now consider Task _T.1_. In 3GPP, UE capabilities are reported in the Radio Resource Control (RRC) [17] message, and in particular in a specific Information Element (IE) called _UE-NR-Capability_. The information about the RF capabilities of the UE is contained in the _rf-Parameters_ IE; more specifically, the fields related to the MIMO and beamforming capabilities are located in the _mimo-ParametersPerBand_ IE inside the _BandNR_ IE.
To advertise the UE capability of performing the rank-bandwidth tradeoff for some RF chains we introduce a new IE called _analogdigitalRcap_ in the _mimo-ParametersPerBand_ IE, as described in Msg. 1. The newly added IE is described in Msg. 2.
The fields in the _ABCap_ IE have the following meaning:
* _numfexanalogchains_ is the number of RF chains that are exclusively capable of performing analog beamforming.
* _numtradablechains_ is the number of tradable RF chains. Concretely, these are the RF chains that are capable of performing analog beamforming on the full bandwidth or digital beamforming on a part of the bandwidth.
* _tradcapab_ contains the information on the rank-bandwidth tradeoff that such RF chains can perform. In particular, if the \(b\)-th bit is set to \(1\) the UE can use the tradable RF chains to acquire the signal from \(2^{b+1}\) antennas with bandwidth \(\frac{B_{A}}{2^{b+1}}\).
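As an illustration of how the proposed _tradcapab_ bit field could be interpreted, the following sketch decodes it into the supported (number of antennas, bandwidth) options. The function name and decoding loop are illustrative only; they are not part of the 3GPP specification or of the proposed IE definition.

```python
# Hypothetical helper, for illustration only: interpret the proposed
# tradcapab bit field.  If bit b is set, the tradable RF chains can acquire
# the signal from 2**(b+1) antennas over a bandwidth of B_A / 2**(b+1).
def decode_tradcapab(tradcapab: int, full_bandwidth_hz: float):
    options = []
    b = 0
    while (tradcapab >> b) != 0:
        if (tradcapab >> b) & 1:
            num_antennas = 2 ** (b + 1)
            options.append((num_antennas, full_bandwidth_hz / num_antennas))
        b += 1
    return options

# Example: bits 0 and 2 set -> 2 antennas over B_A/2, or 8 antennas over B_A/8.
print(decode_tradcapab(0b101, full_bandwidth_hz=400e6))
```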
### _UE configuration_
We now investigate the signaling to perform Tasks _T.2_ and _T.3_. These tasks are again realized via the RRC message [20], specifically in an IE named _ServingCellConfig_. This message contains the list _downlinkBWP-ToAddModList_ which contains a set of BandWidth Part (BWP) configurations, of the format _BWP-Downlink_ to be used by the UE. To configure the UE to perform digital beamforming in a specific part of the band, we propose to change this message as described in Msg. 3. In particular, we add the IE _bwp-Tradchain_, which is described in Msg. 4.
The new fields added in the _BWP-Tradchain_ format are the following:
* _genericParameters_ is the field describing the location and bandwidth on which digital beamforming should be performed. It is of the standard format _BWP_.
* _tradcapuse_ informs the UE of what tradeoff should be used, and follows the same format of the _tradcapab_ field in the newly proposed _ABCap_ IE.
* _numtradechains_ informs the UE of how many of the tradable RF chains need to be configured to perform digital beamforming.
### _Resource allocation_
The resource allocation information (Task _T.4_) to be provided to the UEs is specified in the
Fig. 14: Fraction of unused RBs for \(\Lambda=50\)bps.
Fig. 13: Fraction of RBs allocated to analog beamforming and reason of the allocation for \(\Lambda=50\)bps.
Downlink Control Information (DCI) message [19]. In particular, the downlink data allocation is specified in DCI format 1_1. In this message, the _Frequency domain resource assignment_ and _Time domain resource assignment_ fields specify where the data is located in the resource grid. These parameters do not need to be updated, and the allocation can be done as usual. The decision of whether to use the analog or digital beamforming chain will be implicit, i.e., the assigned resource blocks that fall within the digital beamforming BWP will be decoded with digital beamforming. However, there is still the need to add some parameters to the DCI, as we need to specify a different number of layers and Modulation and Coding Scheme (MCS) for the digital beamforming part. We therefore propose to add the fields listed in Tab. VI to DCI format 1_1.
## VI Conclusions
In this paper, we proposed a novel beamforming architecture, which enables the use of a heterogeneous rank across the bandwidth, i.e., a different number of RF chains operating in different portions of the band. We have shown that such an architecture is significantly cheaper and has a much lower power consumption compared with a fully digital beamforming architecture, while maintaining roughly the same performance in a multi-user setting. Moreover, we have shown that such good performance can be achieved with a classical PF scheduler, thus making the scheme easy to implement on existing products. Finally, we proposed an update to the 3GPP standard that would allow for the implementation of such a system with a few additional parameters.
|
2309.08108 | Foundation Model Assisted Automatic Speech Emotion Recognition:
Transcribing, Annotating, and Augmenting | Significant advances are being made in speech emotion recognition (SER) using
deep learning models. Nonetheless, training SER systems remains challenging,
requiring both time and costly resources. Like many other machine learning
tasks, acquiring datasets for SER requires substantial data annotation efforts,
including transcription and labeling. These annotation processes present
challenges when attempting to scale up conventional SER systems. Recent
developments in foundational models have had a tremendous impact, giving rise
to applications such as ChatGPT. These models have enhanced human-computer
interactions including bringing unique possibilities for streamlining data
collection in fields like SER. In this research, we explore the use of
foundational models to assist in automating SER from transcription and
annotation to augmentation. Our study demonstrates that these models can
generate transcriptions to enhance the performance of SER systems that rely
solely on speech data. Furthermore, we note that annotating emotions from
transcribed speech remains a challenging task. However, combining outputs from
multiple LLMs enhances the quality of annotations. Lastly, our findings suggest
the feasibility of augmenting existing speech emotion datasets by annotating
unlabeled speech samples. | Tiantian Feng, Shrikanth Narayanan | 2023-09-15T02:19:03Z | http://arxiv.org/abs/2309.08108v1 | # Foundation Model Assisted Automatic Speech Emotion Recognition:
###### Abstract
Significant advances are being made in speech emotion recognition (SER) using deep learning models. Nonetheless, training SER systems remains challenging, requiring both time and costly resources. Like many other machine learning tasks, acquiring datasets for SER requires substantial data annotation efforts, including transcription and labeling. These annotation processes present challenges when attempting to scale up conventional SER systems. Recent developments in foundational models have had a tremendous impact, giving rise to applications such as ChatGPT. These models have enhanced human-computer interactions including bringing unique possibilities for streamlining data collection in fields like SER. In this research, we explore the use of foundational models to assist in automating SER from transcription and annotation to augmentation. Our study demonstrates that these models can generate transcriptions to enhance the performance of SER systems that rely solely on speech data. Furthermore, we note that annotating emotions from transcribed speech remains a challenging task. However, combining outputs from multiple LLMs enhances the quality of annotations. Lastly, our findings suggest the feasibility of augmenting existing speech emotion datasets by annotating unlabeled speech samples.
Tiantian Feng\({}^{1}\), Shrikanth Narayanan\({}^{1}\)
\({}^{1}\)Signal Analysis and Interpretation Laboratory, University of Southern California, Los Angeles, USA
**Index Terms:** Speech, Emotion recognition, Foundation model, Large Language Model
## 1 Introduction
Speech emotion recognition (SER) has benefited considerably from using large-scale pre-trained speech models [1, 2, 3, 4], offering substantial performance improvements over conventional SER systems that primarily depend on low-level acoustic descriptors (e.g., speech prosody and spectral information). These advances in emotion recognition open up opportunities for widespread applications in healthcare and virtual assistants, transforming our ways of connecting, engaging, and interacting with the world. However, success in deploying SER models in real-world applications requires the acquisition of high-quality annotations to speech samples, which is often expensive, time-consuming, and privacy-unfriendly.
One typical labeling step in SER datasets involves transcribing the speech content. For example, IEMOCAP [5], one of the most popular SER testbeds, obtained professional transcriptions of the audio dialogues using a commercial service. Such a process often requires training transcribers on transcription guidelines, creating considerable R&D costs. The advent of Amazon's Mechanical Turk [6] (MTurk) has substantially increased the efficiency of transcribing services by providing a marketplace where human workers perform such tasks for pay. However, it still demands many MTurk hours to transcribe the audio conversations, leading to significant costs. In addition, MTurk may not be a viable option when the data collection poses significant privacy risks and must be annotated in-house, which is a standard practice mandated by Institutional Review Boards (IRBs) for sensitive human subject data [7].
Furthermore, SER dataset often requires emotion labeling. A standard emotion labeling process involves instructing multiple human annotators to assess the emotional content of the speech sample in terms of emotional descriptors. Similar to transcribing, the emotion annotation procedure yields substantial costs in hiring multiple annotators to ensure authentic appraisal of a speech sample. Moreover, utilizing services such as MTurk for emotion annotation would raise notable privacy risks. Therefore, curating the SER dataset remains a challenging task, particularly for institutions that encounter resource constraints and comply with strict regulatory guidelines.
The emergence of foundation models [8] has delivered promising speech recognition and language reasoning performance, bringing unique opportunities to facilitate SER data curation. For example, Whisper [4] is designed for automatic speech recognition (ASR), trained on thousands of hours of audio data from the Internet. This model delivers remarkable zero-shot ASR performance, demonstrating its enormous potential for deployment as a transcription service. Along with the advancements in automatic transcription, large language models (LLMs) like GPT4 [9] offer human-level text reasoning and comprehension capabilities, positioning them as candidates for reducing the need for human involvement in emotion annotation.
In this paper, we report comprehensive experiments on the use of foundation models to assist the curation of speech emotion recognition datasets, covering transcription, emotion annotation, and augmentation. Our study focuses on exploring modeling approaches that require a single V100-equivalent GPU, ensuring ease of reproducibility. In summary, our contributions are listed as follows:
* Our work represents one of the early studies on the use of the foundation model to assist SER dataset curation covering three critical factors: **transcribing, emotion annotation**, and **augmentation**.
* Our experiments study Whisper and MMS as transcribing annotators, where we find that existing foundation model systems provide transcriptions that are beneficial for SER training.
* We investigate using multiple open-source LLMs as emotion annotators, revealing that emotion annotation remains challenging for LLMs. Moreover, combining limited human annotations with LLM output substantially improves the SER training.
* We explore data augmentation using the foundation model-assisted annotations, leading to increases in SER performance.
## 2 Related Works
### Speech Recognition Models
Self-supervised learning (SSL) is a rapidly emerging research area for speech representation learning. This learning approach enables pre-training speech models on unlabeled audio; the pre-trained models are then fine-tuned with labeled speech samples for speech-related tasks. One recent popular model in this category is the Massively Multilingual Speech (MMS) [10] model released by Meta, which is pre-trained on 491K hours of speech. In contrast, Whisper by OpenAI [4] adopts a weakly supervised learning approach, with objectives covering tasks such as voice activity detection, language identification, and speech recognition. The training of this model is conducted using a dataset comprising 680k hours of labeled speech data.
### Large Language Models
Large language models like ChatGPT have demonstrated remarkable performance in language reasoning tasks. However, GPT4 or ChatGPT requires the user to upload the speech content to a remote server for prompting. This creates considerable privacy risks in sensitive settings and applications. Instead, we decided to explore foundation models that can operate on a single GPU, including the LLaMa 2 family [11], the Falcon family [12], and Flan-T5 XXL [13]. We want to highlight that several prior works [14, 15] have investigated the ability of LLMs to annotate ground-truth or ASR-generated transcriptions. However, most of these works consider conventional SER modeling architectures (e.g., ResNet-50). Moreover, they do not incorporate ASR-generated transcriptions in SER modeling and experiment with only a limited set of LLMs.
## 3 Method
### Foundation Model Assisted Annotation
Our automatic annotation framework is presented in Fig 1. Given an unlabeled speech sample, we first propose to obtain the speech content using foundation speech recognition models. This work investigates two recent ASR models, Whisper-Large V2 and MMS, that offer the most competitive results. After obtaining the ASR-generated transcripts, we directly send them to the large language models. Our LLMs include LLaMa 2 families, Falcon families, and Flan-T5 XXL. The details about the foundation models used in this study and their approximate model size can be found in Table 1. The obtained emotion labels and transcripts are used for SER training.
### A Bag of Tricks in Prompt Engineering
We investigate and compare several tricks in prompt engineering.
**Base Prompt** Our prompt design is similar to [15], where instructing the LLMs to annotate the spoken utterance delivers decent zero-shot performance. In addition, we instruct the LLMs to choose emotions from five categories: neutral, sad, happy, angry, and other. This strategy constrains the LLMs to output more deterministic labels, and we introduce the option of "other" so that low-confidence responses can be filtered out before SER modeling. In summary, our prompt template is:
What is the emotion of this utterance? "Everything is not working!"
Options: -neutral -sad -angry -happy -other ANSWER: **sad**
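For illustration, this base-prompt annotation step can be sketched as follows. Here, `query_llm` is a placeholder for whichever LLM backend is used (it is not an API defined in the paper), and the parsing rule that falls back to "other" is an assumption made only for this example.

```python
# Sketch of base-prompt emotion annotation.  `query_llm` is a placeholder for
# the chosen LLM backend (e.g., a text-generation pipeline) and is not an API
# defined in the paper.
LABELS = ["neutral", "sad", "angry", "happy", "other"]

def build_prompt(utterance: str) -> str:
    return (
        f'What is the emotion of this utterance? "{utterance}"\n'
        "Options: -neutral -sad -angry -happy -other ANSWER:"
    )

def annotate(utterance: str, query_llm) -> str:
    response = query_llm(build_prompt(utterance)).lower()
    for label in LABELS:          # keep the first option found in the answer
        if label in response:
            return label
    return "other"                # unparsable answers are treated as "other"
```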
**Multiple-LLMs Agreement** It is known that relying on the response from a single LLM could yield biased language reasoning [16]. To mitigate this concern, we propose to ensemble the outputs from multiple LLMs, collecting the wisdom of multiple reasoners.
**LLMs + Human Feedback** One critical lesson we learned from prior research is that LLMs exhibit limited zero-shot capabilities in annotating emotions from speech. Consequently, we contend that human evaluation may remain essential. However, instead of relying on multiple human raters for a majority agreement, we propose that assessing the agreement between the LLM annotations and the feedback of a single human rater is sufficient for quality control.
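The two aggregation rules above can be sketched as follows; the tie-breaking choice (falling back to "other") and the exact filtering rule are illustrative assumptions, since the paper does not spell out these corner cases.

```python
from collections import Counter

def majority_vote(llm_labels):
    """Majority label across several LLMs; without a strict majority we fall
    back to 'other' (an illustrative tie-breaking choice)."""
    label, count = Counter(llm_labels).most_common(1)[0]
    return label if count > len(llm_labels) / 2 else "other"

def keep_sample(llm_labels, human_label):
    """Keep a sample for SER training only if the LLM majority agrees with the
    single human rater."""
    return majority_vote(llm_labels) == human_label

# Example: three LLM annotations checked against one human rater.
print(keep_sample(["sad", "sad", "neutral"], human_label="sad"))  # True
```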
### Emotion Recognition Modeling
The complete model architecture is illustrated in Fig. 1. Our SER model includes speech and text backbones to extract the corresponding embeddings. Specifically, we utilize Whisper-Small [4] and MMS-300M [10] as the speech backbones and RoBERTa as the text backbone. We do not experiment with Whisper-Large as the speech backbone, as it requires prohibitively large GPU capacities for our setting. The output of the backbone models is subsequently fed into weighted averaging layers to combine the hidden outputs from all encoder layers. The weighted output is then passed through a cross-attention layer to obtain the multimodal representation for SER.
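A minimal PyTorch sketch of this fusion head is shown below. The hidden size, number of attention heads, and mean-pooling choice are assumptions for illustration rather than the paper's exact hyper-parameters, and only the speech backbone's layer outputs are averaged here for brevity.

```python
import torch
import torch.nn as nn

class CrossModalSER(nn.Module):
    """Illustrative fusion head: learnable layer averaging + cross-attention."""

    def __init__(self, num_layers: int, dim: int = 768, num_classes: int = 4):
        super().__init__()
        # Learnable weights to average the hidden states of all encoder layers.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, speech_layers, text_emb):
        # speech_layers: (num_layers, batch, time, dim); text_emb: (batch, len, dim)
        w = torch.softmax(self.layer_weights, dim=0)
        speech_emb = (w[:, None, None, None] * speech_layers).sum(dim=0)
        # Text tokens attend to the layer-averaged speech frames.
        fused, _ = self.cross_attn(text_emb, speech_emb, speech_emb)
        return self.classifier(fused.mean(dim=1))   # mean-pool, then classify
```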
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Foundation Model** & **Input** & **Annotation** & **\# Parameters** \\ \hline
**MMS-1B** & Speech & Transcription & 1000M \\
**Whisper Large V2** & Speech & Transcription & 1,550M \\ \hline
**LLaMa 2-7B** & Text & Emotion & 7B \\
**LLaMa 2-13B** & Text & Emotion & 13B \\
**Falcon-7B** & Text & Emotion & 7B \\
**Falcon-40B** & Text & Emotion & 40B \\
**Flan-T5 XXL** & Text & Emotion & 11B \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of foundation models used in this work.
Figure 1: Our proposed foundation model-assisted automatic SER framework. The speech is first transcribed to text and is subsequently fed to LLMs to annotate categorical emotions. Our SER modeling framework involves a text and speech backbone to extract corresponding embeddings, which are then passed through a cross-attention layer to obtain the multimodal representations to predict emotion labels.
## 4 Datasets
Table 2 displays data statistics for the four datasets included in our work. Due to the existence of imbalanced label distribution within the dataset, we decided to keep the four most frequently presented emotions for all the datasets, as recommended in [17, 18, 19, 20]. We acknowledge that this inclusion criterion trivializes the automatic emotion annotation, but it ensures fair comparisons when having multiple datasets with different emotions. The emotion annotation results reported in our experiments will likely decrease in practice.
**IEMOCAP**[5] contains multi-modal recordings of human interactions from 10 subjects evenly distributed between males and females.
**Multimodal EmotionLines Dataset (MELD)**[21] contains more than 13,000 utterances from the Friends TV series. Each utterance is labeled with one of seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise, and Fear. We map Joy to the happy emotion and keep Anger, Sadness, and Neutral in the experiments.
**MSP-Improv**[22] corpus is developed to investigate naturalistic emotions elicited from improvised situations. The corpus comprises audio and visual data collected from 12 individuals, with an equal number of subjects from both male and female participants.
**MSP-Podcast**[23] is collected from podcast recordings, with 610 speakers in the training set, 30 in the development set, and 50 in the test set.
## 5 Experiment Details
### Foundation Model Assisted Annotation
We apply MMS-1B and Whisper Large V2 to obtain the ASR output. Since LLMs with more than 10B parameters exceed most GPU memory capacities, we decided to load LLMs over 10B using float16 instead of float32. In addition, we load Falcon-40B in 8-bit precision. We use a temperature of 0.02 in all prompting experiments, as a lower temperature results in more deterministic output. We use the checkpoints of all foundation models from HuggingFace [24].
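As an illustration of the reduced-precision loading and low-temperature prompting described above, a sketch using the HuggingFace transformers library is given below. The model identifier and generation arguments are examples only; the authors' exact loading code is not given in the paper.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Example: load an over-10B-parameter LLM (Flan-T5 XXL) in float16 and prompt
# it with the base prompt at a low temperature.
model_id = "google/flan-t5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = ('What is the emotion of this utterance? "Everything is not working!" '
          "Options: -neutral -sad -angry -happy -other ANSWER:")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5, do_sample=True, temperature=0.02)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```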
### Emotion Recognition Modeling
We apply a 5-fold and a 6-fold evaluation on the IEMOCAP and MSP-Improv datasets, respectively, where each session is regarded as a unique test fold. In contrast, we use the standard training, validation, and test splits of the MELD and MSP-Podcast datasets. We use the RoBERTa [25] model as the text backbone, while we compare MMS-300M and Whisper-Small as speech backbones. We pair MMS-300M with MMS-1B ASR output and Whisper-Small with Whisper Large V2 ASR output in SER modeling. Specifically, we set the batch size to 32, the learning rate to 0.0001, and the maximum number of training epochs to 30, and we truncate utterances to 15 seconds in baseline emotion recognition training. We use the ground-truth transcriptions in the test set for fair comparisons. We use the checkpoints of the backbone models from HuggingFace [24].
## 6 Transcription Results
### Does SER benefit from ASR using Foundation Model?
This section compares SER training using ASR-generated transcriptions with training using ground-truth (human) transcriptions. As the MSP-Improv and MSP-Podcast datasets do not have transcriptions from human experts, we conduct SER training on them using only ASR output from the selected foundation models. The results in Table 3 demonstrate that the foundation models provide transcriptions that lead to consistent performance increases compared to speech-only modeling. Moreover, we can identify that ASR-generated output delivers competitive SER performance compared to ground-truth transcripts. It is worth noting that our proposed SER training using ASR output from foundation models considerably outperforms conventional SER systems such as Dialogue RNN [26] and CNN-attention [27].
### Does SER vary with different Foundation Models?
We further compare SER performance using ASR output from Whisper-Large V2 and MMS-1B, as illustrated in Figure 2. The findings indicate that SER using ASR output provides consistent benefits over speech-only modeling approaches. However, we notice that SER with Whisper-Large V2 transcripts consistently outperforms SER with MMS-1B transcripts. To identify the cause of this performance difference, we inspect the WER of these two models on the IEMOCAP and MELD datasets, which have ground-truth transcriptions, as shown in Table 5. The WER indicates that Whisper Large V2 yields better speech recognition than MMS-1B on our experimental datasets. However, we can observe that the WER is still fairly large on both datasets, consistent with the findings in [28]. Therefore, we proceed with the remaining experiments for LLM emotion annotation using Whisper Large V2.
## 7 Emotion Annotations
### How does base prompt perform compared to prior works?
Table 4 shows the SER training performance leveraging the emotion annotations using each individual LLM. Similar to previous work,
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Datasets** & **Neutral** & **Happy** & **Sad** & **Angry** & **Total** \\ \hline
**IEMOCAP** & 1,708 & 1,636 & 1,084 & 1,103 & 5,531 \\
**MELD** & 6,436 & 2,308 & 1,002 & 1,607 & 9045 \\
**MSP-Improv** & 3,477 & 2,644 & 885 & 792 & 7,798 \\
**MSP-Podcast** & 20,986 & 12,060 & 2,166 & 2,712 & 37,924 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of dataset statistics used in this work.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Datasets** & **Input** & **Transcription** & **UAR(\%)** \\ \hline \multirow{3}{*}{**IEMOCAP**} & Speech & - & 67.45 \\ & Speech+Text & Ground-truth & **73.87** \\ & Speech+Text & Whisper-Large V2 & 71.78 \\ \hline \multirow{3}{*}{**MELD**} & Speech & - & 48.55 \\ & Speech+Text & Ground-truth & **56.31** \\ & Speech+Text & Whisper-Large V2 & 54.32 \\ \hline \multirow{3}{*}{**MSP-Improv**} & Speech & - & 63.23 \\ & Speech+Text & Whisper-Large V2 & **65.44** \\ \hline \multirow{3}{*}{**MSP-Podcast**} & Speech & - & 60.82 \\ & Speech+Text & Whisper-Large V2 & **63.19** \\ \hline \hline \end{tabular}
\end{table}
Table 3: SER performances using transcriptions.
Figure 2: Comparisons between two foundation models in transcribing.
we identify that LLMs struggle to provide correct emotion labels for SER training, leading to a 10-20% decrease in performance compared to SER training using ground-truth emotion labels. Moreover, larger LLMs provide better emotion labels, with Falcon-40B yielding the best overall emotion annotations for SER training.
### Can majority vote of multi-LLMs improve annotation?
Based on the individual performance of emotional annotation shown in Table 4, we decide to apply the majority votes of emotion annotations from Flan-T5 XXL, LLaMa2-13B, and Falcon-40B as the emotion labels. The results indicate that aggregating majority votes from multi-LLMs enhances the quality of emotion annotation. However, this improvement is only marginal, leading to a 1-2\(\%\) increase in SER performance. This observation suggests that relying on LLMs alone, even when considering input from multi-LLMs, yields unsatisfactory labels compared to conventional human labeling methods.
### Would adding limited involvement of human annotation benefit emotion annotation?
The last column in Table 4 reports the performance of SER when adding human feedback (HF) to the annotation process. As MELD does not provide individual annotator labels, we exclude this dataset from this experiment. It is obvious that integrating limited human feedback can lead to substantial improvements in SER training. Our hypothesis is that the text modality often provides ambiguous information for determining emotion labels, so LLMs are prone to erroneous estimations of the expressed emotion when given this single modality. Limited inspection of audio samples by human annotators offers a disambiguation step that increases label quality.
### How different are emotion annotations using transcriptions between ground-truth and ASR output?
Table 6 reports the SER training comparisons using emotion labels inferred from ground-truth and ASR transcriptions. We report results on the datasets that include ground-truth transcriptions. Interestingly, the results in Table 6 show that ASR transcriptions, even with fairly large WER, lead to SER performance comparable to that obtained with ground-truth transcriptions. Moreover, LLMs with HF consistently outperform LLM-only annotation. In future studies, it is worth investigating why erroneous ASR output can yield emotion reasoning comparable to that from clean ground-truth transcriptions.
## 8 Augmentation
This section explores the ability of our proposed automated labeling framework to augment an existing training dataset. We choose the multiple-LLMs agreement and LLMs with human feedback to provide emotion labels from ASR transcriptions, as these two approaches yield higher SER performance. We select MSP-Podcast and MELD as the augmentation datasets, as these two datasets originate from Internet sources. This experimental setup is similar to the previous work in [15], and the results are reported in Table 7. The comparison aligns with the prior finding of [15] that augmenting IEMOCAP data with MELD using multi-LLM labeling improves the performance. However, this finding does not hold when the training data is MSP-Improv. Moreover, augmenting SER training with MSP-Podcast using multi-LLM labeling consistently decreases the SER performance. On the other hand, we discover that augmenting data using LLM labeling with even limited human feedback consistently improves the SER performance, highlighting the importance of human feedback in emotional reasoning.
## 9 Conclusion
In this paper, we explored the use of foundation models to assist the curation of SER datasets in transcription, emotion annotation, and augmentation. Our study focuses on open-source models that require only a single, widely accessible V100-equivalent GPU. Our study demonstrates that foundational models can generate transcriptions that enhance the performance of SER systems relying solely on speech data, although the WERs remain fairly large. Furthermore, we observe that annotating emotions from transcribed speech remains a challenging task, even when combining outputs from multiple LLMs. Lastly, our findings suggest the feasibility of augmenting existing speech emotion datasets by annotating unlabeled speech samples using a two-stage annotation process that includes limited human feedback. In summary, our results highlight the importance of keeping a human in the loop when annotating emotion labels from speech signals. Our future work will use multi-modal approaches, rather than LLMs alone, to assist automatic emotion annotation.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Datasets** & **Transcription** & **Multi-LLMs** & **LLMs+HF** \\ \hline \multirow{2}{*}{**IEMOCAP**} & Ground truth & 50.36 & 59.08 \\ & Whisper Large V2 & 51.60 & 60.19 \\ \hline \multirow{2}{*}{**MELD**} & Ground truth & 55.69 & N.A. \\ & Whisper Large V2 & 53.90 & N.A. \\ \hline \hline \end{tabular}
\end{table}
Table 6: SER (UAR) comparisons with annotations using ground-truth and Whisper transcriptions. HF means human feedback.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Datasets** & **Flan-T5 XXL** & **LLaMa2-7B** & **LLaMa2-13B** & **Falcon-7B** & **Falcon-40B** & **Multi-LLMs** & **Multi-LLMs+HF** \\ \hline
**IEMOCAP** & 49.60 & 43.87 & 46.29 & 43.68 & 51.16 & 51.60 & 60.19 \\
**MELD** & 36.73 & 43.87 & 43.85 & 46.96 & 47.62 & 53.90 & NA \\
**MSP-Improv** & 44.97 & 38.12 & 41.68 & 37.71 & 44.87 & 46.05 & 50.06 \\
**MSP-Podcast** & 51.20 & 47.23 & 48.12 & 43.25 & 48.11 & 52.59 & 53.54 \\ \hline \hline \end{tabular}
\end{table}
Table 4: SER (UAR) with emotion annotation from LLMs. The transcription is ASR output from Whisper Large V2. HF is human feedback.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Datasets** & **Augmentation** & **Multi-LLMs** & **LLMs+Human** \\ \hline \multirow{2}{*}{**IEMOCAP**} & MELD & 72.60 \(\uparrow\) & N.A. \\ & MSP-Podcast & 69.29 \(\downarrow\) & **72.62 \(\uparrow\)** \\ \hline \multirow{2}{*}{**MSP-Improv**} & MELD & 65.05 \(\downarrow\) & N.A. \\ & MSP-Podcast & 64.31 \(\downarrow\) & **66.68 \(\uparrow\)** \\ \hline \hline \end{tabular}
\end{table}
Table 7: SER performance with augmentation. \(\uparrow\) indicates an increase in SER performance using augmentation.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Datasets** & **Whisper Large V2** & **MMS-1B** \\ & Processed & Original & Processed & Original \\ \hline
**IEMOCAP** & 12.21 & 24.84 & 26.76 & 51.46 \\
**MELD** & 37.87 & 46.23 & 55.78 & 71.28 \\ \hline \hline \end{tabular}
\end{table}
Table 5: WER (word error rate) in transcriptions. Processed transcripts consider only lowercase and remove punctuation. |
2309.17028 | Robin Hood model versus Sheriff of Nottingham model: transfers in
population dynamics | We study the problem of transfers in a population structured by a continuous
variable corresponding to the quantity being transferred. The model takes the
form of an integro-differential equation with kernels corresponding to the
specific rules of the transfer process. We focus our interest on the
well-posedness of the Cauchy problem in the space of measures. We characterize
transfer kernels that give a continuous semiflow in the space of measures and
derive a necessary and sufficient condition for the stability of the space
$L^1$ of integrable functions. We construct some examples of kernels that may
be particularly interesting in economic applications. Our model considers blind
transfers of economic value (e.g. money) between individuals. The two models
are the ``Robin Hood model'', where the richest individual unconditionally
gives a fraction of their wealth to the poorest when a transfer occurs, and the
other extreme, the ``Sheriff of Nottingham model'', where the richest
unconditionally takes a fraction of the poorest's wealth. Between these two
extreme cases is a continuum of intermediate models obtained by interpolating
the kernels. We illustrate those models with numerical simulations and show
that any small fraction of the ``Sheriff of Nottingham'' in the transfer rules
leads to a segregated population with extremely poor and extremely rich
individuals after some time. Although our study is motivated by economic
applications, we believe that this study is a first step towards a better
understanding of many transfer phenomena occurring in the life sciences. | Quentin Griette, Pierre Magal | 2023-09-29T07:28:30Z | http://arxiv.org/abs/2309.17028v2 | # Robin Hood model versus Sheriff of Nottingham model: transfers in population dynamics
###### Abstract
We study the problem of transfers in a population structured by a continuous variable corresponding to the quantity being transferred. The model is of Boltzmann type with kernels corresponding to the specific rules of the transfer process. We focus our interest on the well-posedness of the Cauchy problem in the space of measures and \(L^{1}\). We characterize transfer kernels that give a continuous semiflow in the space of measures and \(L^{1}\). We construct some examples of kernels that may be particularly interesting in economic applications. Our model considers blind money transfers between individuals. The two models are the "Robin Hood model", where the richest individual unconditionally gives a fraction of their wealth to the poorest when a transfer occurs, and the other extreme, the "Sheriff of Nottingham model", where the richest unconditionally takes a fraction of the poorest's wealth. Between these two extreme cases is a continuum of intermediate models obtained by interpolating the kernels. We illustrate those models with numerical simulations and show that any small fraction of the "Sheriff of Nottingham" in the transfer rules leads to a segregated population with extremely poor and extremely rich individuals after some time. Although our study is motivated by economic applications, we believe that this study is a first step towards a better understanding of many transfer phenomena occurring in the life sciences.
**Keywords:** transfer processes, population dynamics, cooperative and competitive transfers.
## 1 Introduction
Transfer phenomena arise in many natural processes. For example, one may think of mass or energy exchanges between particles, transfers of proteins or DNA between cells and bacteria, or simply transfers of money or assets in the economy. We refer to the book by Bellouquid and Delitala [3] and the review papers by Bellomo, Li, and Maini [2], and Bellomo and Delitala [1].
Their mathematical formulations have a long history. A fundamental mathematical modeling approach was proposed by Boltzmann, in which interacting particles are viewed as members of a continuum of population density. In these models, the transfer of physical quantities from one particle to another is modulated by a kernel function that specifies the transfer process. Many examples have been explored, and reviews can be found in Perthame [10] and Villani [11].
This article studies models inspired by the life sciences with potential economic applications. In Magal and Webb [9] and Magal [8], the authors introduced a model devoted to transfers of genetic material in which parent cells exchange genetic material to form their offspring. Here we use a similar idea to model the transfer of wealth between individuals. Before we introduce the mathematical concepts rigorously, let us explain the idea of our model in a few words. We study a population of individuals who possess a certain transferable quantity (for example, money) and exchange it according to a rule expressed for two individuals chosen randomly in the population.
Let \(I\subset\mathbb{R}\) be a closed interval of \(\mathbb{R}\). Let \(\mathcal{M}(I)\) be the space of measures on \(I\). It is well known that \(\mathcal{M}(I)\) endowed with the norm
\[\|u\|_{\mathcal{M}(I)}=\int_{I}|u|(dx),\forall u\in\mathcal{M}(I),\]
is a Banach space.
In the above definition of the norm, the non-trivial part is to define a unique pair of finite positive measures \(u^{+}\) and \(u^{-}\) such that
\[u(dx)=u^{+}(dx)-u^{-}(dx).\]
In order to define the absolute value of a measure, one may first realize that the space of measures \(\mathcal{M}(I)\) is defined as
\[\mathcal{M}(I)=\mathcal{M}_{+}(I)-\mathcal{M}_{+}(I),\]
where \(\mathcal{M}_{+}(I)\) is the set of positive measures on \(I\).
Then by the Hahn decomposition theorem (see Theorem A.2), the positive part \(u^{+}(dx)\) and the negative part \(u^{-}(dx)\) are uniquely defined. The absolute value of \(u\) is then defined by
\[|u(dx)|=u^{+}(dx)+u^{-}(dx).\]
The nontrivial part of defining the norm of a measure is thus its absolute value. The interested reader can find more results and references in Appendix A.
An alternative to define the norm of a measure is the following (see Proposition A.8)
\[\|u\|_{\mathcal{M}(I)}=\sup_{\phi\in\operatorname{BC}(I):\|\phi\|_{\infty}\leq 1 }\int_{I}\phi(x)u(dx),\forall u\in\mathcal{M}(I).\]
A finite measure on an interval \(I\subset\mathbb{R}\) (bounded or not) is, therefore, a bounded linear form on \(\operatorname{BC}(I)\), the space of bounded and continuous functions from \(I\) to \(\mathbb{R}\). Since \(\mathcal{M}(I)\) endowed with its norm is a Banach space, we deduce that \(\mathcal{M}(I)\)
is a closed subset of \(\mathrm{BC}(I)^{\star}\). When the interval \(I\) is not compact, Example A.9 shows that
\[\mathcal{M}(I)\neq BC(I)^{\star}.\]
Let \(T:\mathcal{M}_{+}(I)\to\mathcal{M}_{+}(I)\) be a transfer operator. Let \(\tau>0\) be the rate of transfers. We assume the time between two transfers follows an exponential law with a mean value of \(1/\tau\). Two individuals will be involved once the transfer time has elapsed, so the transfer rate will have to be doubled (i.e, equal to \(2\tau\)). The model of transfers is an ordinary differential equation on the space of positive measures
\[\partial_{t}u(t,dx)=2\tau\,T\big{(}u(t,.)\big{)}(dx)-2\tau\,u(t,dx),\forall t \geq 0. \tag{1.1}\]
The equation (1.1) should be complemented with the measure-valued initial distribution
\[u(0,dx)=\phi(dx)\in\mathcal{M}_{+}(I). \tag{1.2}\]
A solution of (1.1), will be a continuous function \(u:[0,+\infty)\to\mathcal{M}_{+}(I)\), satisfying the fixed point problem
\[u(t,dx)=\phi(dx)+\int_{0}^{t}2\tau\,T\big{(}u(\sigma,.)\big{)}(dx)-2\tau\,u( \sigma,dx)d\sigma, \tag{1.3}\]
where the above integral is a Riemann integral in \((\mathcal{M}(I),\|.\|_{\mathcal{M}(I)})\).
To prove the positivity of the solution (1.1), one may prefer to use the fixed point problem
\[u(t,dx)=e^{-2\tau t}\phi(dx)+\int_{0}^{t}e^{-2\tau(t-s)}2\tau\,T\big{(}u( \sigma,.)\big{)}(dx)d\sigma, \tag{1.4}\]
which is equivalent to (1.3).
For each \(t\geq 0\),
\[u(t,dx)\in\mathcal{M}_{+}(I),\]
is understood as a measure-valued population density at time \(t\).
Let us recall that \(\mathcal{B}(I)\), the \(\sigma\)-algebra generated by all the open subsets of \(I\), is called the **Borel \(\sigma\)-algebra**. A subset \(A\) of \(I\) that belongs to \(\mathcal{B}(I)\) is called a **Borel set**. For any Borel subset \(A\subset I\), the quantity
\[\int_{A}u(t,dx)=u(t,A),\]
is the number of individuals having their transferable quantity \(x\) in the domain \(A\subset I\). Therefore,
\[\int_{I}u(t,dx)=u(t,I),\]
is the total number of individuals at time \(t\).
The operator of transfer \(T:\mathcal{M}_{+}(I)\to\mathcal{M}_{+}(I)\) is defined by
\[T\left(u\right)(dx):=\left\{\begin{array}{ll}\frac{B(u,u)(dx)}{\int_{I}u( \mathrm{d}\sigma)},&\mbox{ if }u\in M_{+}(I)\setminus\left\{0\right\},\\ 0,&\mbox{ if }u=0,\end{array}\right.\]
where \(B:\mathcal{M}(I)\times\mathcal{M}(I)\to\mathcal{M}(I)\) is a bounded bi-linear map defined by
\[B(u,v)(dx):=\iint_{I^{2}}K(dx,x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2}),\]
or equivalently, defined by
\[B(u,v)(A):=\iint_{I^{2}}K(A,x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2}),\]
for each Borel set \(A\subset I\), and each \(u,v\in\mathcal{M}(I)\).
Based on the examples of kernels presented in Section 2, we can make the following assumption.
**Assumption 1.1**.: _We assume the kernel \(K\) satisfies the following properties_
1. _For each Borel set_ \(A\in\mathcal{B}(I)\)_, the map_ \((x_{1},x_{2})\mapsto K(A,x_{1},x_{2})\) _is Borel measurable._
2. _For each_ \((x_{1},x_{2})\in I\times I\)_, the map_ \(A\in\mathcal{B}(I)\mapsto K(A,x_{1},x_{2})\) _is a probability measure._
3. _For each_ \((x_{1},x_{2})\in I\times I\)_,_ \[\int_{I}x\,K(dx,x_{1},x_{2})=\frac{x_{1}+x_{2}}{2}.\]
The plan of the paper is the following. In Section 2, we present the Robin Hood (RH) model, which corresponds to cooperative transfers, and the model of the Sheriff of Nottingham (SN), which corresponds to competitive transfers. These two examples illustrate the problem and will show how the rules at the individual level can be expressed using a proper kernel \(K\). In Section 3, we prove that under Assumption 1.1, the operator \(B\) maps \(\mathcal{M}(I)\times\mathcal{M}(I)\) into \(\mathcal{M}(I)\) and is a bounded bi-linear operator. Due to the boundedness of \(B\), we will deduce that (1.1)-(1.2) generates a unique continuous semiflow on the space of positive measures \(\mathcal{M}_{+}(I)\). In Section 4, we consider the restriction of the system (1.1)-(1.2) to \(L^{1}_{+}(I)\). In Section 5, we run individual-based stochastic simulations of the mixed RH and SN model. The paper is complemented with an Appendix A in which we present some results about measure theory.
## 2 Examples of transfer kernels
Constructing a transfer kernel is a difficult problem. In this section, we propose two examples of transfer models. Some correspond to existing examples in the literature, while others seem new.
### Robin Hood model: (the richest give to the poorest)
The poorest gains a fraction \(f\) of the difference \(|x_{2}-x_{1}|\), and the richest loses a fraction \(f\) of the difference \(|x_{2}-x_{1}|\)
\[K_{1}(\mathrm{d}x,x_{1},x_{2}):=\frac{1}{2}\left(\delta_{x_{2}-f(x_{2}-x_{1}) }(\mathrm{d}x)+\delta_{x_{1}-f(x_{1}-x_{2})}(\mathrm{d}x)\right),\]
where \(f\in(0,1)\) is fixed.
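A direct computation confirms that \(K_{1}\) satisfies condition (iii) of Assumption 1.1:
\[\int_{I}x\,K_{1}(\mathrm{d}x,x_{1},x_{2})=\frac{1}{2}\Big(x_{2}-f(x_{2}-x_{1})\Big)+\frac{1}{2}\Big(x_{1}-f(x_{1}-x_{2})\Big)=\frac{x_{1}+x_{2}}{2},\]
since the two terms involving \(f\) cancel.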
In Figure 1, we explain why we need to consider a mean value of two Dirac masses in the kernel \(K_{1}\). Consider a transfer between two individuals, and consider \(x_{1}\) and \(x_{2}\) (respectively \(y_{1}\) and \(y_{2}\)) the values of the transferable quantities before transfer (respectively after transfer). When we define \(K_{1}\) we need to take a mean value, because we can not distinguish if \(x_{1}\) and \(x_{2}\) are ancestors of \(y_{1}\) or \(y_{2}\). In other words, we can not distinguish between the two cases: 1) \(x_{1}\to y_{1}\) and \(x_{2}\to y_{2}\); and 2) \(x_{1}\to y_{2}\) and \(x_{2}\to y_{1}\); (where \(x\to y\) means \(x\) becomes \(y\)).
In Figure 1, we use the following equalities
\[y_{1}=x_{2}-f(x_{2}-x_{1})=x_{1}+(1-f)(x_{2}-x_{1}),\]
and
\[y_{2}=x_{2}-(1-f)(x_{2}-x_{1})=x_{1}+f(x_{2}-x_{1}).\]
The values \(y_{1}\) and \(y_{2}\) are the same after a given transfer if we choose \(f\) or \(1-f\). In other words, we cannot distinguish whether \(x_{1}\) and \(x_{2}\) are ancestors of \(y_{1}\) or \(y_{2}\). This explains the mean value of Dirac masses in the kernel \(K_{1}\). It follows that this kind of kernel preserves the support of the initial distribution, so we can restrict to any closed bounded interval \(I\subset\mathbb{R}\). This transfer kernel corresponds to the one proposed to build the recombination operator in Magal and Webb [9]. An extended version with friction was proposed by Hinow, Le Foll, Magal, and Webb [6].
Here we can compute \(B_{1}(u,v)\) explicitly when \(I=\mathbb{R}\), \(u\in L^{1}(\mathbb{R})\) and \(v\in L^{1}(\mathbb{R})\). Indeed let \(\varphi\in C_{c}(I)\) be a compactly supported test function. Then
\[\begin{aligned}
\int_{\mathbb{R}}\varphi(x)B_{1}(u,v)(\mathrm{d}x)&=\int_{x_{1}\in\mathbb{R}}\int_{x_{2}\in\mathbb{R}}\int_{x\in\mathbb{R}}\varphi(x)K_{1}(\mathrm{d}x,x_{1},x_{2})\,u(x_{1})\mathrm{d}x_{1}\,v(x_{2})\mathrm{d}x_{2}\\
&=\iint_{\mathbb{R}\times\mathbb{R}}\frac{1}{2}\Big(\varphi\big(x_{2}-f(x_{2}-x_{1})\big)+\varphi\big(x_{1}-f(x_{1}-x_{2})\big)\Big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}\\
&=\frac{1}{2}\left(\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1-f)x_{2}+fx_{1}\big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}+\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1-f)x_{1}+fx_{2}\big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}\right)\\
&=\frac{1}{2}\left(\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1-f)x_{2}+fx_{1}\big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}+\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1-f)x_{2}+fx_{1}\big)u(x_{2})v(x_{1})\mathrm{d}x_{1}\mathrm{d}x_{2}\right).
\end{aligned}\]
Figure 1: _The figure shows a transfer between two individuals, where \(x_{1}\) and \(x_{2}\) (respectively \(y_{1}\) and \(y_{2}\)) denote the values of the transferable quantities before (respectively after) the transfer. We plot a transfer whenever \(x_{2}>x_{1}\), with \(f\in[0,1/2]\) (on the left-hand side) and \((1-f)\in[1/2,1]\) (on the right-hand side). We observe that the values \(y_{1}\) and \(y_{2}\) are the same on both sides._
Next, by using the change of variable
\[\left\{\begin{array}{l}x_{1}=x-(1-f)\sigma\\ x_{2}=x+f\sigma\end{array}\right.\Leftrightarrow\left\{\begin{array}{l} \sigma=x_{2}-x_{1}\\ x=(1-f)x_{2}+fx_{1}\end{array}\right.\]
we obtain
\[\int_{\mathbb{R}}\varphi(x)B_{1}(u,v)(\mathrm{d}x) =\frac{1}{2}\left(\int_{\mathbb{R}}\varphi\big{(}x\big{)}\int_{ \mathbb{R}}u(x-(1-f)\sigma)v(x+f\sigma)\mathrm{d}\sigma\mathrm{d}x\right.\] \[\left.+\int_{\mathbb{R}}\varphi\big{(}x\big{)}\int_{\mathbb{R}}u (x+f\sigma)v(x-(1-f)\sigma)\mathrm{d}\sigma\mathrm{d}x\right),\]
and we obtain
\[B_{1}(u,v)(x)=\frac{1}{2}\left(\int_{\mathbb{R}}u(x-(1-f)\sigma)v\left(x+f \sigma\right)\mathrm{d}\sigma+\int_{\mathbb{R}}v(x-(1-f)\sigma)u\left(x+f \sigma\right)\mathrm{d}\sigma\right).\]
We conclude that the transfer operator restricted to \(L^{1}_{+}(\mathbb{R})\) is defined by
\[T_{1}(u)(x)=\left\{\begin{array}{l}\frac{\int_{\mathbb{R}}u(x-(1-f)\sigma)u \left(x+f\sigma\right)\mathrm{d}\sigma}{\int_{\mathbb{R}}u(x)\mathrm{d}x},\ \mathrm{if}\ u\in L^{1}_{+}(\mathbb{R})\setminus\left\{0\right\},\\ 0,\ \mathrm{if}\ u=0.\end{array}\right.\]
**Remark 2.1**.: _One may also consider the case where the fraction transferred varies as a function of the distance between the poorest and the richest before the transfer. This problem was considered by Hinow, Le Foll, Magal and Webb [6], with the kernel_
\[K_{1}(\mathrm{d}x,x_{1},x_{2}):=\frac{1}{2}\left(\delta_{x_{2}-f(|x_{2}-x_{1}|) (x_{2}-x_{1})}(\mathrm{d}x)+\delta_{x_{1}-f(|x_{2}-x_{1}|)(x_{1}-x_{2})}( \mathrm{d}x)\right),\]
_where \(f:[0,+\infty)\to[0,1]\) is a continuous function._
### Sheriff of Nottingham model: (the poorest give to the richest)
The poorest loses a fraction \(f\) of the difference \(|x_{2}-x_{1}|\), and the richest gains a fraction \(f\) of the difference \(|x_{2}-x_{1}|\)
\[K_{2}(\mathrm{d}x,x_{1},x_{2}):=\frac{1}{2}\left(\delta_{x_{2}+f(x_{2}-x_{1})} (\mathrm{d}x)+\delta_{x_{1}+f(x_{1}-x_{2})}(\mathrm{d}x)\right),\]
where \(f\in(0,1)\) is fixed.
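As for \(K_{1}\), a direct computation shows that \(K_{2}\) also satisfies condition (iii) of Assumption 1.1:
\[\int_{I}x\,K_{2}(\mathrm{d}x,x_{1},x_{2})=\frac{1}{2}\Big(x_{2}+f(x_{2}-x_{1})\Big)+\frac{1}{2}\Big(x_{1}+f(x_{1}-x_{2})\Big)=\frac{x_{1}+x_{2}}{2},\]
so the mean of the transferable quantity is conserved by each transfer.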
This kind of kernel expands the support of the initial distribution to the whole real line; therefore, we cannot restrict to bounded intervals \(I\subset\mathbb{R}\).
In Figure 2, we explain why we need to consider a mean value of two Dirac masses in the kernel \(K_{2}\). Indeed, we have
\[y_{2}=x_{2}+f(x_{2}-x_{1})=x_{1}+(1+f)(x_{2}-x_{1}),\]
and
\[y_{1}=x_{2}-(1+f)(x_{2}-x_{1})=x_{1}-f(x_{2}-x_{1}).\]
Therefore, the transferable quantities \(y_{1}\) and \(y_{2}\) after a given transfer are the same with \(f\) or \(1+f\).
Here again we can compute \(B_{2}(u,v)\) explicitly when \(I=\mathbb{R}\), \(u\in L^{1}(\mathbb{R})\) and \(v\in L^{1}(\mathbb{R})\). Indeed let \(\varphi\in C_{c}(I)\) be a compactly supported test function. Then
\[\begin{aligned}
\int_{\mathbb{R}}\varphi(x)B_{2}(u,v)(\mathrm{d}x)&=\int_{x_{1}\in\mathbb{R}}\int_{x_{2}\in\mathbb{R}}\int_{x\in\mathbb{R}}\varphi(x)K_{2}(\mathrm{d}x,x_{1},x_{2})\,u(x_{1})\mathrm{d}x_{1}\,v(x_{2})\mathrm{d}x_{2}\\
&=\iint_{\mathbb{R}\times\mathbb{R}}\frac{1}{2}\Big(\varphi\big(x_{2}+f(x_{2}-x_{1})\big)+\varphi\big(x_{1}+f(x_{1}-x_{2})\big)\Big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}\\
&=\frac{1}{2}\left(\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1+f)x_{2}-fx_{1}\big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}+\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1+f)x_{1}-fx_{2}\big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}\right)\\
&=\frac{1}{2}\left(\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1+f)x_{2}-fx_{1}\big)u(x_{1})v(x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}+\iint_{\mathbb{R}\times\mathbb{R}}\varphi\big((1+f)x_{2}-fx_{1}\big)u(x_{2})v(x_{1})\mathrm{d}x_{1}\mathrm{d}x_{2}\right).
\end{aligned}\]
Next, by using the change of variable
\[\left\{\begin{array}{l}x_{1}=x-(1+f)\sigma\\ x_{2}=x-f\sigma\end{array}\right.\Leftrightarrow\left\{\begin{array}{l} \sigma=x_{2}-x_{1}\\ x=(1+f)x_{2}-fx_{1}\end{array}\right.\]
Figure 2: _In the figure, the two values before transfers are \(x_{1}\) and \(x_{2}\), and the two values after transfer are \(y_{1}\) and \(y_{2}\). We plot a transfer whenever \(x_{2}>x_{1}\), and \(f\in[0,1]\) (on the left hand side), and \(f\) is replaced by \(1+f\) (on the right hand side). We observe that the values \(y_{1}\) and \(y_{2}\) are the same on both sides._
we obtain
\[\int_{\mathbb{R}}\varphi(x)B_{2}(u,v)(\mathrm{d}x) =\frac{1}{2}\left(\int_{\mathbb{R}}\varphi\big{(}x\big{)}\int_{ \mathbb{R}}u(x-(1+f)\sigma)v(x-f\sigma)\mathrm{d}\sigma\mathrm{d}x\right.\] \[\left.+\int_{\mathbb{R}}\varphi\big{(}x\big{)}\int_{\mathbb{R}}u( x-f\sigma)v(x-(1+f)\sigma)\mathrm{d}\sigma\mathrm{d}x\right),\]
and we obtain
\[B_{2}(u,v)(x)=\frac{1}{2}\left(\int_{\mathbb{R}}u(x-(1+f)\sigma)v\left(x-f \sigma\right)\mathrm{d}\sigma+\int_{\mathbb{R}}v(x-(1+f)\sigma)u\left(x-f \sigma\right)\mathrm{d}\sigma\right).\]
We conclude that the transfer operator restricted to \(L^{1}_{+}(\mathbb{R})\) is defined by
\[T_{2}(u)(x)=\left\{\begin{array}{l}\frac{\int_{\mathbb{R}}u( x-(1+f)\sigma)u\left(x-f\sigma\right)\mathrm{d}\sigma}{\int_{\mathbb{R}}u(x) \mathrm{d}x},\text{ if }u\in L^{1}_{+}(\mathbb{R})\setminus\left\{0\right\},\\ 0,\text{ if }u=0.\end{array}\right.\]
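To make the two transfer rules concrete at the individual level, the following sketch simulates a finite population in which each event is a Robin Hood transfer with probability \(1-p_{SN}\) and a Sheriff of Nottingham transfer with probability \(p_{SN}\), which corresponds to interpolating the two kernels. It mirrors, but is not identical to, the individual-based stochastic simulations of Section 5; the parameter values below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def transfer_step(x, f=0.3, p_sn=0.1):
    """Pick two distinct individuals and apply one RH or SN transfer."""
    i, j = rng.choice(len(x), size=2, replace=False)
    poor, rich = (i, j) if x[i] <= x[j] else (j, i)
    gap = x[rich] - x[poor]
    if rng.random() < p_sn:      # Sheriff of Nottingham: the poorest gives
        x[poor] -= f * gap
        x[rich] += f * gap
    else:                        # Robin Hood: the richest gives
        x[poor] += f * gap
        x[rich] -= f * gap
    return x

x = rng.uniform(0.0, 1.0, size=1000)   # initial distribution of wealth
for _ in range(50_000):
    x = transfer_step(x)
# Both rules conserve the total wealth, so the mean is unchanged.
print(x.mean(), x.min(), x.max())
```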
## 3 Understanding (1.1) in the space of measures
Before anything, we need to define \(B(u,v)(dx)\) whenever \(u\) and \(v\) are finite measures on \(I\).
**Theorem 3.1**.: _Let Assumption 1.1 be satisfied. Define_
\[B(u,v)(A):=\iint_{I^{2}}K(A,x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2}), \tag{3.1}\]
_for each Borel set \(A\subset I\), and each \(u,v\in\mathcal{M}(I)\). Then \(B\) maps \(\mathcal{M}(I)\times\mathcal{M}(I)\) into \(\mathcal{M}(I)\), and satisfies the following properties_
1. \[\|B(u,v)\|_{\mathcal{M}(I)}\leq\|u\|_{\mathcal{M}(I)}\|v\|_{\mathcal{M}(I)}, \forall u,v\in\mathcal{M}(I).\]
2. \[B(u,v)\in M_{+}(I),\forall u,v\in M_{+}(I),\]
3. \[\int_{I}B(u,v)(\mathrm{d}x)=\int_{I}u(\mathrm{d}x_{1})\int_{I}v(\mathrm{d}x_{ 2}),\forall u,v\in M_{+}(I).\]
4. \[\int_{I}xB(u,v)(\mathrm{d}x)=\frac{\int_{I}x_{1}u(\mathrm{d}x_{1})\int_{I}v( \mathrm{d}x_{2})+\int_{I}u(\mathrm{d}x)\int_{I}x_{2}v(\mathrm{d}x_{2})}{2}, \forall u,v\in M_{+}(I).\]
5. _For each integer_ \(n\geq 0\)_,_ \[\int_{I}x^{n}B(u,v)(dx)=\int_{I\times I}\int_{I}x^{n}K(dx,x_{1},x_{2})\,u(dx_{ 1})\,v(dx_{2}),\forall u,v\in M_{+}(I).\]
Proof.: Let \(u\in\mathcal{M}(I)\), \(v\in\mathcal{M}(I)\) and define
\[w(\mathrm{d}x):=B(u,v)(\mathrm{d}x)\]
by (3.1). Let \((A_{n})_{n\in\mathbb{N}}\) a collection of pairwise disjoint Borel-measurable sets in \(I\). We want to prove that
\[w\left(\bigcup_{n\in\mathbb{N}}A_{n}\right)=\sum_{n\in\mathbb{N}}w(A_{n}). \tag{3.2}\]
We have
\[w\left(\bigcup_{n\in\mathbb{N}}A_{n}\right) =\iint_{I^{2}}K\left(\bigcup_{n\in\mathbb{N}}A_{n},x_{1},x_{2} \right)u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[=\iint_{I^{2}}\sum_{n\in\mathbb{N}}K(A_{n},x_{1},x_{2})u(\mathrm{ d}x_{1})v(\mathrm{d}x_{2}).\]
In order to change the order of summation between the integral and the sum, we will use Fubini's Theorem [4, Vol. I Theorem 3.4.4 p.185] and Tonelli's Theorem [4, Vol. I Theorem 3.4.5 p.185].
We consider \(P\subset\mathcal{B}(I^{2})\) the support of the positive part of \(u\otimes v\) (as given by [4, Theorem 3.1.1 p. 175]). That is
\[\mathbb{1}_{\,P}u\otimes v\in M_{+}(I^{2}),\text{ and }-\mathbb{1}_{\,P^{c}}u \otimes v\in M_{+}(I^{2}),\]
where \(\mathbb{1}_{\,P}\) (respectively \(\mathbb{1}_{\,P^{c}}\)) is the indicator functions of \(P\), that is \(\mathbb{1}_{\,P}(x)=1\) if \(x\in P\) else \(\mathbb{1}_{\,P}(x)=0\) (respectively the indicator function of \(P^{c}=I\setminus P\) the complement set of \(P\)).
We consider the maps defined for all \(n\in\mathbb{N}\), and all \(x_{1},x_{2}\in I\),
\[f_{n}(x_{1},x_{2}):=\mathbb{1}_{\,P}K(A_{n},x_{1},x_{2}), \tag{3.3}\]
and
\[f(n,x_{1},x_{2}):=f_{n}(x_{1},x_{2}).\]
Then by Assumption 1.1-(i), for each integer \(n\in\mathbb{N}\) the map \(f_{n}\) is Borel measurable (i.e., measurable with respect to the Borel \(\sigma\)-algebra), and for each Borel set \(B\subset\mathbb{R}\), we have
\[f^{-1}(B)=\bigcup_{n\in\mathbb{N}}\{n\}\times f_{n}^{-1}(B).\]
Consequently, \(f\) is measurable for the \(\sigma\)-algebra \(\mathcal{P}(\mathbb{N})\otimes\mathcal{B}(I^{2})\), the smallest \(\sigma\)-algebra in \(\mathcal{P}(\mathbb{N}\times I^{2})\) that contains all rectangles \(\mathcal{N}\times B\) where \(\mathcal{N}\in\mathcal{P}(\mathbb{N})\) and \(B\in\mathcal{B}(I^{2})\).
Let \(c\) be the counting measure on \(\mathbb{N}\), \(c:=\sum_{n\in\mathbb{N}}\delta_{n}\). By Tonelli's Theorem [4, Vol. I Theorem 3.4.5 p.185], since \(f\) is nonnegative, and \(c\) and \((u\otimes v)_{+}\) are nonnegative \(\sigma\)-finite measures, and
\[\int_{I^{2}}\int_{\mathbb{N}}f(n,x_{1},x_{2})c(\mathrm{d}n)(u \otimes v)_{+}(\mathrm{d}x_{1}\mathrm{d}x_{2})\] \[=\int_{I^{2}}\sum_{n\in\mathbb{N}}\mathbb{1}_{\,P}(x_{1},x_{2})K( A_{n},x_{1},x_{2})(u\otimes v)_{+}(\mathrm{d}x_{1}\mathrm{d}x_{2})\]
\[=\iint_{I^{2}}\mathbbm{1}_{P}(x_{1},x_{2})K\left(\bigcup_{n\in \mathbb{N}}A_{n},x_{1},x_{2}\right)u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[\leq\iint_{I^{2}}\mathbbm{1}_{P}(x_{1},x_{2})K\left(I^{2},x_{1}, x_{2}\right)u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[=(u\otimes v)(P)<+\infty.\]
We conclude that \(f\in L^{1}(c\otimes(u\otimes v)_{+})\) and therefore by Fubini's Theorem [4, Vol. I Theorem 3.4.4 p.185] we have
\[\int_{I^{2}}\int_{\mathbb{N}}f(n,x_{1},x_{2})c(\mathrm{d}n)(u \otimes v)_{+}(\mathrm{d}x_{1}\mathrm{d}x_{2})\] \[=\int_{\mathbb{N}}\int_{I^{2}}f(n,x_{1},x_{2})c(\mathrm{d}n)(u \otimes v)_{+}(\mathrm{d}x_{1}\mathrm{d}x_{2}).\]
This means that
\[\iint_{I^{2}}\sum_{n\in\mathbb{N}}\mathbbm{1}_{P}K(A_{n},x_{1}, x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[=\sum_{n\in\mathbb{N}}\iint_{I^{2}}\mathbbm{1}_{P}K(A_{n},x_{1}, x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2}).\]
By similar arguments we can show that
\[\iint_{I^{2}}\sum_{n\in\mathbb{N}}(-\mathbbm{1}_{P^{e}})K(A_{n}, x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[=\sum_{n\in\mathbb{N}}\iint_{I^{2}}(-\mathbbm{1}_{P^{e}})K(A_{n}, x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2}).\]
We conclude that
\[\iint_{I^{2}}\sum_{n\in\mathbb{N}}K(A_{n},x_{1},x_{2})u(\mathrm{d }x_{1})v(\mathrm{d}x_{2})\] \[=\iint_{I^{2}}\sum_{n\in\mathbb{N}}(\mathbbm{1}_{P}+\mathbbm{1}_{ P^{e}})K(A_{n},x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[=\iint_{I^{2}}\sum_{n\in\mathbb{N}}\mathbbm{1}_{P}K(A_{n},x_{1}, x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[\quad+\iint_{I^{2}}\sum_{n\in\mathbb{N}}\mathbbm{1}_{P^{e}}K(A_{n },x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[=\sum_{n\in\mathbb{N}}\iint_{I^{2}}\mathbbm{1}_{P}K(A_{n},x_{1},x_{ 2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[\quad+\sum_{n\in\mathbb{N}}\iint_{I^{2}}\mathbbm{1}_{P^{e}}K(A_{n },x_{1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\] \[=\sum_{n\in\mathbb{N}}\iint_{I^{2}}K(A_{n},x_{1},x_{2})u(\mathrm{ d}x_{1})v(\mathrm{d}x_{2})\] \[=\sum_{n\in\mathbb{N}}w(A_{n}).\]
We have proved (3.2) for any family of pairwise disjoint Borel sets \((A_{n})\), hence \(w\) is a measure. Moreover \(w\) is finite because
\[\int_{I}|w|(\mathrm{d}x)=|w|(I) \leq\iint_{I^{2}}K(I,x_{1},x_{2})|u\otimes v|(\mathrm{d}x_{1} \mathrm{d}x_{2})\] \[=\iint_{I^{2}}|u|(\mathrm{d}x_{1})|v|(\mathrm{d}x_{2})\] \[=\|u\|_{\mathcal{M}(I)}\|v\|_{\mathcal{M}(I)}.\]
We have proved that
\[\|B(u,v)\|_{\mathcal{M}(I)}\leq\|u\|_{\mathcal{M}(I)}\|v\|_{\mathcal{M}(I)}.\]
Hence \(B(u,v)\) is a continuous bilinear map on \(\mathcal{M}(X)\).
Property (ii) is immediate, since \(K(A,x_{1},x_{2})\geq 0\) for every Borel set \(A\), so that \(B(u,v)(A)\geq 0\) whenever \(u,v\in M_{+}(I)\). To prove (iii), we use the definition (3.1) of \(B(u,v)\):
\[\begin{array}{ll}\int_{I}B(u,v)(\mathrm{d}x)=B(u,v)(I)&=\iint_{I^{2}}K(I,x_ {1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\\ &=\iint_{I^{2}}u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\\ &=\int_{I}u(\mathrm{d}x)\int_{I}v(\mathrm{d}x),\end{array}\]
because \(K(I,x_{1},x_{2})=1\) by assumption.
To prove (iv), we use Fubini's theorem applied to the formula (3.1) of \(B(u,v)\):
\[\begin{array}{ll}\int_{I}xB(u,v)(\mathrm{d}x)&=\iint_{I^{2}}\int_{I}xK(dx,x_ {1},x_{2})u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\\ &=\iint_{I^{2}}\frac{x_{1}+x_{2}}{2}u(\mathrm{d}x_{1})v(\mathrm{d}x_{2})\\ &=\frac{\int_{I}x_{1}u(\mathrm{d}x_{1})\int_{I}v(\mathrm{d}x_{2})+\int_{I}u( \mathrm{d}x_{1})\int_{I}x_{2}v(\mathrm{d}x_{2})}{2},\end{array}\]
because \(\int_{I}xK(dx,x_{1},x_{2})=\frac{x_{1}+x_{2}}{2}\) by assumption. Property (v) follows in the same way, applying Fubini's theorem to (3.1) with \(x^{n}\) in place of \(x\). \(\blacksquare\)
As a consequence of Theorem 3.1, the map \(B\) is a bounded and bi-linear operator from \(\mathcal{M}(I)\times\mathcal{M}(I)\) to \(\mathcal{M}(I)\). Moreover \(B\) maps \(M_{+}(I)\times M_{+}(I)\) into \(M_{+}(I)\). To investigate the Lipschitz property of \(T:M_{+}(I)\to M_{+}(I)\), it is sufficient to observe that (here for short we replace \(\|.\|_{\mathcal{M}(I)}\) by \(\|.\|\))
\[\begin{aligned}
\|T(u)-T(v)\|&=\left\|\,\|u\|^{-1}B(u,u-v)+\big(\|u\|^{-1}-\|v\|^{-1}\big)B(u,v)+\|v\|^{-1}B(u-v,v)\,\right\|\\
&\leq\|u\|^{-1}\|B(u,u-v)\|+\big|\,\|u\|^{-1}-\|v\|^{-1}\,\big|\,\|B(u,v)\|+\|v\|^{-1}\|B(u-v,v)\|\\
&\leq 2\,\|u-v\|+\big|\,\|u\|^{-1}-\|v\|^{-1}\,\big|\,\|u\|\,\|v\|\\
&=2\,\|u-v\|+\big|\,\|v\|-\|u\|\,\big|\\
&\leq 3\,\|u-v\|,
\end{aligned}\]
therefore we obtain the following proposition.
**Proposition 3.2**.: _Let Assumption 1.1 be satisfied. The operator \(T\) maps \(M_{+}(I)\) into itself, and \(T\) satisfies the following properties:_
1. \(T:M_{+}(I)\to M_{+}(I)\) _is Lipschitz continuous._
2. \(T\) _is positively homogeneous. That is,_ \[T(\lambda u)=\lambda T(u),\forall\lambda\geq 0,\forall u\in M_{+}(I).\]
3. \(T\) _preserves the total mass of individuals. That is,_ \[\int_{I}T(u)(dx)=\int_{I}u(dx),\forall u\in M_{+}(I).\]
4. \(T\) _preserves the total mass of transferable quantity. That is,_ \[\int_{I}xT(u)(dx)=\int_{I}xu(dx),\forall u\in M_{+}(I).\]
Therefore we obtain the following theorem.
**Theorem 3.3**.: _Let Assumption 1.1 be satisfied, and consider the Cauchy problem_
\[\partial_{t}u(t,dx)=2\tau\,T\big{(}u(t)\big{)}(dx)-2\tau\,u(t,dx), \tag{3.4}\]
_with_
\[u(0,dx)=\phi(dx)\in M_{+}(I). \tag{3.5}\]
_The Cauchy problem generates a unique continuous homogeneous semiflow \(t\to S(t)\phi\) on \(M_{+}(I)\). That is_
1. _(Semiflow property)_ \[S(0)\phi=\phi\text{ and }S(t)S(s)\phi=S(t+s)\phi,\forall t,s\geq 0,\forall\phi \in M_{+}(I).\]
2. _(Continuity) The map_ \((t,\phi)\to S(t)\phi\) _is a continuous map from_ \([0,+\infty)\times M_{+}(I)\) _to_ \(M_{+}(I)\)_._
3. _(Homogeneity)_ \[S(t)\lambda\phi=\lambda S(t)\phi,\forall t\geq 0,\forall\lambda\geq 0,\forall \phi\in M_{+}(I).\]
4. _(Preservation of the total mass of individuals) The total mass of individuals is preserved_ \[\int_{I}S(t)(\phi)(dx)=\int_{I}\phi(dx),\forall t\geq 0,\forall\lambda\geq 0, \forall\phi\in M_{+}(I).\]
5. _(Preservation of the total mass of transferable quantity) The total mass of transferable quantity is preserved_ \[\int_{I}xS(t)(\phi)(dx)=\int_{I}x\phi(dx),\forall t\geq 0,\forall\lambda\geq 0,\forall\phi\in M_{+}(I).\]
6. _(From transfer rate \(1/2\) to any transfer rate \(\tau>0\)) If we define_ \(S^{*}(t)\) _the semi-flow generated by (_3.4_)-(_3.5_) whenever_ \(\tau=1/2\)_, then_ \[S(t)=S^{\star}\left(2\tau t\right),\forall t\geq 0.\]
**Remark 3.4**.: _Let \(\mathcal{F}:\mathcal{M}(I)\to\mathbb{R}\) be a positive bounded linear form on \(\mathcal{M}(I)\). We can consider for example_
\[\mathcal{F}(u)=\int_{I}f(x)u(dx),\]
_where \(f:I\to\mathbb{R}\) a bounded and positive continuous map on \(I\)._
_Then \((t,\phi)\mapsto U(t)\phi\), defined on \([0,+\infty)\times M_{+}(I)\) by_
\[U(t)\phi=\frac{S(t)\phi}{1+\int_{0}^{t}\mathcal{F}(S(\sigma)\phi)d\sigma},\]
_is the unique solution of the Cauchy problem_
\[u^{\prime}(t)=2\tau\,T\big{(}|u(t)|\big{)}(dx)-2\tau\,u(t,dx)-\mathcal{F}(u(t) )u(t,dx),\]
_with_
\[u(0,dx)=\phi(dx)\in M_{+}(I).\]
_More detailed arguments can be found in Magal and Webb [9], and Magal [7]._
**Remark 3.5**.: _The rate of transfers \(\tau(x)\) may vary as a function of \(x\), the transferable quantity. In that case, we obtain the following model_
\[\partial_{t}u(t,x)=T\big{(}2\tau(.)\,u(t,.)\big{)}(x)-2\tau(x)u(t,x),\text{ for }x\in\mathbb{R},\]
_with_
\[u(0,dx)=\phi(dx)\in\mathcal{M}(I).\]
## 4 Understanding (1.1) in \(L^{1}(\mathbb{R})\)
Recall that a Borel subset \(A\in\mathcal{B}(I)\) is said to be **negligible** if and only if \(A\) has a null Lebesgue measure. Then, thanks to the Radon-Nikodym Theorem A.6, we have the following characterization of kernels that define a bilinear mapping from \(L^{1}(I)\times L^{1}(I)\) to \(L^{1}(I)\).
**Proposition 4.1**.: _Let Assumption 1.1 be satisfied. Then we have_
\[B(u,v)\in L^{1}(I)\text{ for all }(u,v)\in L^{1}(I)\times L^{1}(I)\]
_if, and only if, for each negligible subset \(A\in\mathcal{B}(I)\), the set_
\[\mathcal{N}(A):=\left\{(x_{1},x_{2})\in I\times I\,:\,K(A,x_{1},x_{2})\neq 0 \right\},\]
_is negligible. Equivalently, we have_
\[\iint_{I\times I}K(A,x_{1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}=0,\]
_whenever \(A\in\mathcal{B}(I)\) is negligible._
Proof.: For simplicity, here we call \(\mathcal{L}\) the one-dimensional Lebesgue measure, that is to say \(\mathcal{L}(A)=\int_{A}\mathrm{d}x\); and \(\mathcal{L}^{2}:=\mathcal{L}\otimes\mathcal{L}\) the two-dimensional Lebesgue measure in \(\mathbb{R}^{2}\).
Let \(u,v\in L^{1}(I)\) be given. Suppose that
\[\mathcal{L}^{2}(\mathcal{N}(A))=0,\]
for each \(A\in\mathcal{B}(I)\) with \(\mathcal{L}(A)=0\). Then by definition (see (3.1)),
\[B(u,v)(A)=\iint_{I\times I}K(A,x_{1},x_{2})u(x_{1})\mathrm{d}x_{1}v(x_{2}) \mathrm{d}x_{2}=0,\]
since \((x_{1},x_{2})\mapsto K(A,x_{1},x_{2})\) is equal to zero \(\mathcal{L}^{2}\)-almost everywhere in \(\mathbb{R}^{2}\) by assumption. Therefore \(B(u,v)\) is absolutely continuous with respect to the Lebesgue measure \(\mathcal{L}\), and by the Radon-Nikodym Theorem A.6, we can find a function \(f\in L^{1}(I)\) such that
\[B(u,v)(dx)=f(x)\mathrm{d}x,\]
which is equivalent to
\[B(u,v)\in L^{1}(I).\]
Conversely, assume that \(B(u,v)\in L^{1}(I)\) for any \((u,v)\in L^{1}(I)^{2}\). If \(I\) is bounded then \(1\in L^{1}(I)\), so taking \(u=v=1\), and \(B(1,1)(\mathrm{d}x)=f(x)\mathrm{d}x\) with \(f\in L^{1}(I)\) gives
\[B(1,1)(A)=\iint_{I\times I}K(A,x_{1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}= \int_{A}f(x)\mathrm{d}x=0,\]
whenever \(\mathcal{L}(A)=0\), and we are done.
Let us consider the case when \(I\) is not bounded. Assume that \(A\in\mathcal{B}(I)\) is negligible. Define
\[u_{n}(x)=v_{n}(x)=\mathbb{1}_{[-n,n]\cap I}(x)\]
where \(x\mapsto\mathbb{1}_{E}(x)\) is the indicator function of the set \(E\).
Then, we have by assumption,
\[B(u_{n},v_{n})=f_{n}(x)\mathrm{d}x\]
for some \(f_{n}\in L^{1}(I)\).
Moreover
\[B\big{(}u_{n},v_{n}\big{)}(A)=\iint_{(I\cap[-n,n])\times(I\cap[-n,n])}K(A,x_{ 1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}=\int_{A}f_{n}(x)\mathrm{d}x=0,\]
thus
\[\mathcal{L}^{2}\big{(}\mathcal{N}(A)\cap[-n,n]^{2}\big{)}=0\text{ for all }n\in \mathbb{N}.\]
Finally, since we have an increasing sequence of subsets, we obtain
\[\mathcal{L}^{2}\big{(}\mathcal{N}(A)\big{)}=\mathcal{L}^{2}\left(\mathcal{N}(A)\cap \bigcup_{n\in\mathbb{N}}[-n,n]^{2}\right)=\lim_{n\to+\infty}\mathcal{L}^{2}\left( \mathcal{N}(A)\cap[-n,n]^{2}\right)=0,\]
and the proof is completed.
Since the norm in \(\mathcal{M}(I)\) coincides with the \(L^{1}\) norm for an \(L^{1}\) function, we deduce that \(T\) maps \(L^{1}_{+}(I)\) into itself, and the following statements are consequences of Theorem 3.3 and Proposition 4.1.
**Theorem 4.2**.: _Let Assumption 1.1 be satisfied, and consider the Cauchy problem_
\[\partial_{t}u(t,x)=2\tau\,T\big{(}u(t)\big{)}-2\tau\,u(t,x), \tag{4.1}\]
_with_
\[u(0,x)=\phi(x)\in L^{1}_{+}(I). \tag{4.2}\]
_The Cauchy problem (4.1)-(4.2), generates a unique semiflow which is the restriction of \(S(t)\) to \(L^{1}_{+}(I)\). We deduce that_
\[S(t)L^{1}_{+}(I)\subset L^{1}_{+}(I),\forall t\geq 0,\]
_and the semiflow \(t\to S(t)\phi\) restricted to \(L^{1}_{+}(I)\) satisfies the following properties:_
1. _(Continuity) The map_ \((t,\phi)\to S(t)\phi\) _is a continuous map from_ \([0,+\infty)\times L^{1}_{+}(I)\) _to_ \(L^{1}_{+}(I)\)_._
2. _(Preservation of the total mass of individuals) The total mass of individuals is preserved_ \[\int_{I}S(t)(\phi)(x)dx=\int_{I}\phi(x)dx,\forall t\geq 0,\forall\phi\in L^{1}_{+}(I).\]
3. _(Preservation of the total mass of transferable quantity) The total mass of transferable quantity is preserved_ \[\int_{I}xS(t)(\phi)(x)dx=\int_{I}x\phi(x)dx,\forall t\geq 0,\forall\phi\in L^{1}_{+}(I).\]
**Example 4.3** (Robin Hood model).: _Let \(K(\mathrm{d}x,x_{1},x_{2})=K_{1}(\mathrm{d}x,x_{1},x_{2})=\frac{1}{2}\big{(} \delta_{x_{2}-f(x_{2}-x_{1})}(\mathrm{d}x)+\delta_{x_{1}-f(x_{1}-x_{2})}( \mathrm{d}x)\big{)}\). If \(A\in\mathcal{B}(I)\) has zero Lebesgue measure, then we have:_
\[\mathcal{N}(A) =\{(x_{1},x_{2})\in I\times I\,:\,K(A,x_{1},x_{2})>0\}\] \[=\{(x_{1},x_{2})\,:\,x_{2}-f(x_{2}-x_{1})=y\in A\text{ or }x_{1}-f(x_{1}-x_{2})=z\in A\}\] \[=\bigg{\{}(x_{1},x_{2})\,:\,x_{1}=\frac{1-f}{1-2f}z-\frac{f}{1-2f }y\text{ and }\] \[\qquad x_{2}=\frac{1-f}{1-2f}y-\frac{f}{1-2f}z\text{ and }\big{(}y\in A\text{ or }z\in A\big{)}\bigg{\}}\] \[=\bigg{\{}\bigg{(}\frac{1-f}{1-2f}z-\frac{f}{1-2f}y,\frac{1-f}{1-2 f}y-\frac{f}{1-2f}z\bigg{)}\,:\,y\in A,z\in I\bigg{\}}\] \[\quad\cup\bigg{\{}\bigg{(}\frac{1-f}{1-2f}z-\frac{f}{1-2f}y,\frac {1-f}{1-2f}y-\frac{f}{1-2f}z\bigg{)}\,:\,y\in I,z\in A\bigg{\}}\,.\]
_The two sets above have zero Lebesgue measure because they are the image of \(A\times I\) and \(I\times A\) by a linear invertible transformation. Therefore \(\mathcal{N}(A)\) has zero Lebesgue measure and we can apply Proposition 4.1._
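To make the action of this kernel concrete, the following minimal sketch (Python, not part of the original analysis) evaluates \(B(u,v)\) for \(K_{1}\) on finitely supported (atomic) measures and checks the two conservation laws of Proposition 3.2. The identification \(T(u)=B(u,u)/\|u\|_{\mathcal{M}(I)}\) used here is an assumption suggested by the decomposition in the Lipschitz estimate above, and the value of \(f\) is purely illustrative.

```python
import numpy as np

f = 0.3  # transfer fraction; illustrative value, assumed to lie in (0, 1/2)

def B_robin_hood(u, v):
    """Action of B(., .) for the Robin Hood kernel K_1 on atomic measures.

    A measure is represented as a list of (position, weight) atoms; this is a
    sketch for finitely supported measures only, not a general implementation.
    """
    out = []
    for (x1, a) in u:
        for (x2, b) in v:
            w = 0.5 * a * b
            out.append((x2 - f * (x2 - x1), w))  # first Dirac mass of K_1
            out.append((x1 - f * (x1 - x2), w))  # second Dirac mass of K_1
    return out

def total_mass(m):
    return sum(w for (_, w) in m)

def first_moment(m):
    return sum(x * w for (x, w) in m)

rng = np.random.default_rng(1)
u = list(zip(rng.uniform(0.0, 10.0, 5), rng.uniform(0.1, 1.0, 5)))

# Assumed form T(u) = B(u, u) / ||u||; both conservation laws then hold.
Tu = [(x, w / total_mass(u)) for (x, w) in B_robin_hood(u, u)]
print(np.isclose(total_mass(Tu), total_mass(u)))      # total mass of individuals
print(np.isclose(first_moment(Tu), first_moment(u)))  # total transferable quantity
```

The same check applies verbatim to the Sheriff of Nottingham kernel \(K_{2}\) of Example 4.4, since its two displacements are obtained from those of \(K_{1}\) by replacing \(f\) with \(-f\).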
**Example 4.4** (Sheriff of Nottingham model).: _Let \(K(\mathrm{d}x,x_{1},x_{2})=K_{2}(\mathrm{d}x,x_{1},x_{2})=\frac{1}{2}\big{(} \delta_{x_{2}+f(x_{2}-x_{1})}(\mathrm{d}x)+\delta_{x_{1}+f(x_{1}-x_{2})}( \mathrm{d}x)\big{)}\). If \(A\in\mathcal{B}(\mathbb{R})\) has zero Lebesgue measure, then we have:_
\[\mathcal{N}(A) =\{(x_{1},x_{2})\in I\times I\,:\,K(A,x_{1},x_{2})>0\}\] \[=\{(x_{1},x_{2})\,:\,x_{2}+f(x_{2}-x_{1})=y\in A\text{ or }x_{1}+f(x_{1}-x_{2})=z\in A\}\] \[=\left\{\left(\frac{1+f}{1+2f}z+\frac{f}{1+2f}y,\frac{1+f}{1+2f}y +\frac{f}{1+2f}z\right)\,:\,y\in A,z\in I\right\}\] \[\quad\cup\left\{\left(\frac{1+f}{1+2f}z+\frac{f}{1+2f}y,\frac{1+f }{1+2f}y+\frac{f}{1+2f}z\right)\,:\,y\in I,z\in A\right\}.\]
_The two sets above have zero Lebesgue measure because they are the image of \(A\times I\) and \(I\times A\) by a linear invertible transformation. Therefore \(\mathcal{N}(A)\) has zero Lebesgue measure and we can apply Proposition 4.1._
Similarly, the mixed Robin Hood and Sheriff of Nottingham model also defines a bilinear mapping from \(L^{1}\times L^{1}\) to \(L^{1}\).
**Example 4.5** (Distributed Robin Hood or Sheriff of Nottingham models).: _The kernel of the distributed Robin Hood model consists in replacing the Dirac mass centered at \(0\) by \(x\to g(x)\in L^{1}(I)\), a probability density centered at \(0\). That is_
\[K_{3}(\mathrm{d}x,x_{1},x_{2})=\frac{1}{2}\bigg{\{}g\left(x-[x_{2}-f(x_{2}-x_{ 1})]\right)+g\left(x-[x_{1}-f(x_{1}-x_{2})]\right)\bigg{\}}\,\mathrm{d}x.\]
_Similarly, the kernel of the distributed Sheriff of Nottingham model is the following_
\[K_{4}(\mathrm{d}x,x_{1},x_{2}):=\frac{1}{2}\bigg{\{}g\left(x-[x_{2}+f(x_{2}-x_ {1})]\right)+g\left(x-[x_{1}+f(x_{1}-x_{2})]\right)\bigg{\}}\,\mathrm{d}x.\]
_Let \(K(\mathrm{d}x,x_{1},x_{2})=K(x,x_{1},x_{2})\mathrm{d}x\) with \(K(x,x_{1},x_{2})\in L^{1}(I)\) for any \((x_{1},x_{2})\in I\times I\). Examples are the distributed Robin Hood model, distributed Sheriff of Nottingham model, and distributed mixed Robin Hood and Sheriff of Nottingham model. If \(A\in\mathcal{B}(I)\) has zero Lebesgue measure, then we have automatically_
\[K(A,x_{1},x_{2})=\int_{A}K(x,x_{1},x_{2})\mathrm{d}x=0\text{ for any }(x_{1},x_{2})\in I\times I,\text{ so }\mathcal{N}(A)=\varnothing.\]
_Therefore we can apply Proposition 4.1._
## 5 Numerical simulation
We introduce \(p\in[0,1]\), the population's redistribution fraction. The parameter \(p\) is also the probability of applying the Robin Hood (RH) model during a transfer between two individuals. Otherwise, we use the Sheriff of Nottingham (SN) model with the probability \(1-p\). In that case, the model is the following
\[\partial_{t}u(t,dx)=2\tau\,\left[p\,T_{1}\big{(}u(t)\big{)}(dx)+(1-p)\,T_{2} \big{(}u(t)\big{)}(dx)\right]-2\tau\,u(t,dx), \tag{5.1}\]
with
\[u(0,dx)=\phi(dx)\in M_{+}(I). \tag{5.2}\]
In Figures 3-7, we run an individual-based simulation of the model (5.1)-(5.2) (a sketch of this procedure is given below). Such simulations are stochastic. We first choose a pair of individuals at random, with the waiting times between transfer events following an exponential law with average \(1/\tau\). We then choose the RH model with probability \(p\) and the SN model with probability \(1-p\), and apply the transfer rules described in section 2. To connect this problem with our description in the space of measures, we can consider an initial distribution that is a sum of Dirac masses.
\[\phi(dx)=\sum_{i=1}^{N}\delta_{x_{i}}(dx),\]
in which \(x_{i}\) is the value of the transferable quantity for individual \(i\) at \(t=0\).
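The following minimal sketch (Python) illustrates the individual-based procedure described above. The transfer rules are those of Examples 4.3 and 4.4; the convention adopted for the event times (a single global clock whose waiting times are exponential with mean \(1/(N\tau)\), so that each individual undergoes roughly \(2\tau\) transfers per unit time, in line with the rate \(2\tau\) in (5.1)) is our reading of the loose description above, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0, p=0.5, f=0.1, tau=1.0, t_end=100.0):
    """Individual-based simulation of the mixed RH/SN model (5.1)-(5.2).

    x0   : initial transferable quantities, one entry per individual
    p    : probability of a Robin Hood (RH) transfer, else Sheriff of Nottingham (SN)
    f    : transfer fraction
    tau  : each individual experiences about 2*tau transfers per unit time (assumed convention)
    """
    x = np.array(x0, dtype=float)
    n = len(x)
    t = 0.0
    while True:
        t += rng.exponential(1.0 / (n * tau))   # global clock for the next transfer event
        if t > t_end:
            return x
        i, j = rng.choice(n, size=2, replace=False)   # random pair of individuals
        d = x[i] - x[j]
        if rng.random() < p:   # RH: x_i -> x_i - f(x_i - x_j), x_j -> x_j - f(x_j - x_i)
            x[i] -= f * d
            x[j] += f * d
        else:                  # SN: x_i -> x_i + f(x_i - x_j), x_j -> x_j + f(x_j - x_i)
            x[i] += f * d
            x[j] -= f * d

# Small illustrative population (the figures in this section use 100 000 individuals).
x_final = simulate(rng.uniform(0.0, 10.0, 1_000), p=0.5)
print(x_final.mean())   # the mean transferable quantity is conserved exactly by each transfer
```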
Figure 5: _In this figure, we use \(p=0.5\) (i.e. \(50\%\) RH model and \(50\%\) SN model), \(f_{1}=f_{2}=0.1\), \(1/\tau=1\) years. We start the simulations with \(100\,000\) individuals. The figures (a) (b) (c) (d) are respectively the initial distribution at time \(t=0\), and the distribution \(10\) years, \(50\) years and \(100\) years._
Figure 6: _In this figure, we zoom on the distribution for \(t=100\) in Figure 5 (d). The figure on the right-hand side corresponds to the yellow region in the left figure._
Figure 3 corresponds to the full RH model, namely \(p=1\). In that case, the population density converges to a Dirac mass centered at the mean value: everyone will ultimately have the same amount of transferable quantity.
Whenever the value of \(p\) is strictly less than \(1\), the simulations can be summed up by saying that "there is always a sheriff in town". In Figures 5-7, the unit of the \(x\)-axis changes from (a) to (d). We can see that some rich individuals always become richer and richer. The SN model also induces competition between the poorest individuals, and the population ends up after \(100\) years with a lot of debt. In other words, the richest individuals become richer, while the poorest become poorer. Changing the value of the parameter \(p\in[0,1)\) does not alter this qualitative behaviour: it seems that it is only a matter of time before we end up with a very segregated population. We observe a difference for
Figure 8: _In this figure, we zoom on the distribution for \(t=100\) in Figure 7 (d). The figure on the right-hand side corresponds to the yellow region in the left figure._
Figure 7: _In this figure, we use \(p=0\) (i.e. \(0\%\) RH model and \(100\%\) SN model), \(f_{1}=f_{2}=0.1\), \(1/\tau=1\) years. We start the simulations with \(100\,000\) individuals. The figures (a) (b) (c) (d) are respectively the initial distribution at time \(t=0\), and the distribution \(10\) years, \(50\) years and \(100\) years._
the richest of two orders of magnitude between the cases \(p=0.5\) and \(p=0\). We conclude by observing that the smaller \(p\) is, the richer the wealthiest individuals become.
## Appendix A Spaces of measures
Let \(X\) be a Polish space, that is, a complete metric space \((X,d)\) which is separable (i.e., there exists a countable dense subset). As an example for \(X\) one may consider any closed subset of \(\mathbb{R}^{n}\) endowed with the standard metric \(d(x,y)=\|x-y\|\) induced by \(\|.\|\) a norm on \(\mathbb{R}^{n}\).
Recall that the Borel \(\sigma\)-algebra of \(X\) is the set \(\mathcal{B}(X)\subset\mathcal{P}(X)\) (the \(\sigma\)-algebra generated by the open subsets of \(X\)) of all parts of \(X\) that can be obtained by countable union, countable intersection, and difference of open sets [4, Vol II Chap 6 section 6.3].
We define \(\mathcal{M}(X)\), the space of measures on \(X\), starting with the positive measures. A map \(\mu:\mathcal{B}(X)\to\mathbb{R}^{+}\) is a **positive measure** if it is **\(\sigma\)-additive** (or **countably additive**). That is,
\[\mu\left(\bigcup_{n\in\mathbb{N}}B_{n}\right)=\sum_{n\in\mathbb{N}}\mu(B_{n}),\]
for any countable collection of disjoint Borel sets \(B_{n}\in\mathcal{B}(X)\) (where the empty set may occur infinitely many times). In the following, a countably additive measure will be called a **Borel measure**.
A positive measure is **finite** if
\[\mu(X)<+\infty.\]
A **signed** measure \(\mu\) is the difference between two positive measures
\[\mu=\mu^{+}-\mu^{-}\]
where \(\mu^{+}\) and \(\mu^{-}\) are both positive finite measures.
**Definition A.1**.: _The set \(\mathcal{M}(X)\) is the space of all the signed finite measures \(\mu\)._
Given a signed measure \(\mu\), the Hahn decomposition theorem [4, Vol. I Theorem 3.1.1 p. 175] gives a decomposition of the space \(X\) into two subsets \(X^{+}\) and \(X^{-}\) on which \(\mu\) has constant sign.
**Theorem A.2** (Hahn decomposition).: _Let \(\mu\) be a signed measure on a measurable space \((X,\mathcal{B}(X))\). Then, there exist disjoint sets \(X^{+},X^{-}\in\mathcal{B}(X)\) such that \(X^{+}\cup X^{-}=X,\) and for all \(A\in\mathcal{B}(X)\), one has_
\[\mu(A\cap X^{-})\leq 0\text{ and }\mu(A\cap X^{+})\geq 0.\]
Considering for example \(\mu=\delta_{0}-\delta_{2}\) with \(X=\{0,1,2\}\), we deduce that the Hahn decomposition is not unique in general. But the Hahn decomposition allows us to define the _positive part_\(\mu^{+}\) and the _negative part_\(\mu^{-}\) of a signed measure \(\mu\):
\[\mu^{-}(A):=-\mu(A\cap X^{-})\text{ and }\mu^{+}(A):=\mu(A\cap X^{+}),\text{ for all }A\in\mathcal{B}(X).\] (A.1)
Let us prove that \(\mu^{+}\) is uniquely defined, the proof for \(\mu^{-}\) being similar. Indeed, consider \(\widetilde{X}^{+}\cup\widetilde{X}^{-}=X\) another Hahn decomposition for \(\mu\). Then we have
\[\mu(X^{+}\cap\widetilde{X}^{-})=0,\text{ and }\mu(\widetilde{X}^{+}\cap X^{-} )=0,\]
since both quantities are simultaneously positive and negative.
Therefore we have
\[\mu(X^{+}\cap A) =\mu\bigg{(}X^{+}\cap\bigg{(}(A\cap\widetilde{X}^{+})\cup(A\cap \widetilde{X}^{-})\bigg{)}\bigg{)}\] \[=\mu\big{(}A\cap\widetilde{X}^{+}\cap X^{+}\big{)}+\mu\big{(}A\cap \widetilde{X}^{-}\cap X^{+}\big{)}\] \[=\mu\big{(}A\cap\widetilde{X}^{+}\cap X^{+}\big{)}+\mu\big{(}A\cap \widetilde{X}^{+}\cap X^{-}\big{)}\] \[=\mu\bigg{(}\widetilde{X}^{+}\cap\bigg{(}(A\cap X^{+})\cup(A\cap X ^{-})\bigg{)}\bigg{)}=\mu(\widetilde{X}^{+}\cap A),\]
where the two middle equalities use \(\mu(A\cap\widetilde{X}^{-}\cap X^{+})=\mu(A\cap\widetilde{X}^{+}\cap X^{-})=0\).
This shows that \(\mu^{+}\) defined by (A.1) is unique (i.e. \(\mu^{+}\) is independent of the Hahn decomposition).
The _total variation_ of \(\mu\) (see [4, Vol. I Definition 3.1.4 p.176]) is
\[|\mu|=\mu^{+}+\mu^{-}.\]
The space \(\mathcal{M}(X)\) of signed finite measures over \(X\) is a Banach space endowed with the _total variation norm_
\[\|\mu\|_{\mathcal{M}(X)}:=\int_{X}|\mu|(\mathrm{d}x).\]
We refer again to Bogachev [4, Vol. I Theorem 4.6.1] for this result.
First, we check that the positive part, negative part and total variation are continuous on \(\mathcal{M}(X)\).
**Lemma A.3**.: _Let \((X,\mathcal{B}(X))\) be a measurable space. The maps \(\mu\mapsto\mu^{+}\), \(\mu\mapsto\mu^{-}\) and \(\mu\mapsto|\mu|\) are 1-Lipschitz continuous on \(\mathcal{M}(X)\) equipped with \(\|\cdot\|_{\mathcal{M}(X)}\). That is,_
\[\|\mu_{1}^{+}-\mu_{2}^{+}\|_{\mathcal{M}(X)}\leq\|\mu_{1}-\mu_{2}\|_{ \mathcal{M}(X)},\]
\[\|\mu_{1}^{-}-\mu_{2}^{-}\|_{\mathcal{M}(X)}\leq\|\mu_{1}-\mu_{2}\|_{ \mathcal{M}(X)},\]
\[\||\mu_{1}|-|\mu_{2}|\|_{\mathcal{M}(X)}\leq\|\mu_{1}-\mu_{2}\|_{\mathcal{M}( X)}.\]
Proof.: Let \(\mu_{1},\mu_{2}\in\mathcal{M}(X)\) be given. We introduce the Hahn decompositions of \(X\) with respect to \(\mu_{1}\) and \(\mu_{2}\), respectively: \(X=:X_{1}^{+}\cup X_{1}^{-}\) and \(X=:X_{2}^{+}\cup X_{2}^{-}\), so that \(X_{1}^{+}\) is the support of \(\mu_{1}^{+}\), \(X_{1}^{-}\) is the support of \(\mu_{1}^{-}\), \(X_{2}^{+}\) is the support of \(\mu_{2}^{+}\), and \(X_{2}^{-}\) is the support of \(\mu_{2}^{-}\).
We also introduce the Hahn decomposition of \(X\) for \(|\mu_{1}|-|\mu_{2}|\), \(X=:Y^{+}\cup Y^{-}\). Then,
\[\||\mu_{1}|-|\mu_{2}|\|_{\mathcal{M}(X)} =\big{(}|\mu_{1}|-|\mu_{2}|\big{)}^{+}(X)+\big{(}|\mu_{1}|-|\mu_{2} |\big{)}^{-}(X)\] \[=|\mu_{1}|(Y^{+})-|\mu_{2}|(Y^{+})+|\mu_{2}|(Y^{-})-|\mu_{1}|(Y^{-})\] \[=\mu_{1}^{+}(Y^{+})+\mu_{1}^{-}(Y^{+})-\mu_{2}^{+}(Y^{+})-\mu_{2} ^{-}(Y^{+})\] (A.2) \[+\mu_{2}^{+}(Y^{-})+\mu_{2}^{-}(Y^{-})-\mu_{1}^{+}(Y^{-})-\mu_{1} ^{-}(Y^{-}).\] (A.3)
We decompose further \(Y^{+}=(Y^{+}\cap X_{1}^{+})\cup(Y^{+}\cap X_{1}^{-})\) to obtain
\[\mu_{1}^{+}(Y^{+})+\mu_{1}^{-}(Y^{+})-\mu_{2}^{+}(Y^{+})-\mu_{2} ^{-}(Y^{+}) =\mu_{1}(Y^{+}\cap X_{1}^{+})-\mu_{1}(Y^{+}\cap X_{1}^{-})\] \[-|\mu_{2}|(Y^{+}\cap X_{1}^{+})-|\mu_{2}|(Y^{+}\cap X_{1}^{-}),\] (A.4)
and
\[\mu_{1}(Y^{+}\cap X_{1}^{+})-|\mu_{2}|(Y^{+}\cap X_{1}^{+}) =\mu_{1}(Y^{+}\cap X_{1}^{+})-\mu_{2}^{+}(Y^{+}\cap X_{1}^{+})- \mu_{2}^{-}(Y^{+}\cap X_{1}^{+})\] \[\leq\mu_{1}(Y^{+}\cap X_{1}^{+})-\mu_{2}^{+}(Y^{+}\cap X_{1}^{+}) +\mu_{2}^{-}(Y^{+}\cap X_{1}^{+})\] \[=\mu_{1}(Y^{+}\cap X_{1}^{+})-\mu_{2}(Y^{+}\cap X_{1}^{+})\] \[\leq|\mu_{1}-\mu_{2}|(Y^{+}\cap X_{1}^{+}),\]
similarly
\[-\mu_{1}(Y^{+}\cap X_{1}^{-})-|\mu_{2}|(Y^{+}\cap X_{1}^{-}) =-\mu_{1}(Y^{+}\cap X_{1}^{-})-\mu_{2}^{+}(Y^{+}\cap X_{1}^{-})- \mu_{2}^{-}(Y^{+}\cap X_{1}^{-})\] \[\leq-\mu_{1}(Y^{+}\cap X_{1}^{-})+\mu_{2}^{+}(Y^{+}\cap X_{1}^{-} )-\mu_{2}^{-}(Y^{+}\cap X_{1}^{-})\] \[=\mu_{2}(Y^{+}\cap X_{1}^{-})-\mu_{1}(Y^{+}\cap X_{1}^{-})\] \[\leq|\mu_{1}-\mu_{2}|(Y^{+}\cap X_{1}^{-}),\]
so finally (A.4) becomes
\[(|\mu_{1}|-|\mu_{2}|)\,(Y^{+}) =\mu_{1}^{+}(Y^{+})+\mu_{1}^{-}(Y^{+})-\mu_{2}^{+}(Y^{+})-\mu_{2} ^{-}(Y^{+})\] (A.5) \[\leq|\mu_{1}-\mu_{2}|(Y^{+}\cap X_{1}^{+})+|\mu_{1}-\mu_{2}|(Y^{+ }\cap X_{1}^{-})\] \[=|\mu_{1}-\mu_{2}|(Y^{+}).\]
By a similar argument using this time the decomposition \(Y^{-}=(Y^{-}\cap X_{2}^{+})\cup(Y^{-}\cap X_{2}^{-})\), we obtain
\[(|\mu_{1}|-|\mu_{2}|)\,(Y^{-}) =\mu_{2}^{+}(Y^{-})+\mu_{2}^{-}(Y^{-})-\mu_{1}^{+}(Y^{-})-\mu_{1} ^{-}(Y^{-})\] (A.6) \[\leq|\mu_{1}-\mu_{2}|(Y^{-}\cap X_{2}^{+})+|\mu_{1}-\mu_{2}|(Y^{ -}\cap X_{2}^{-})\] \[=|\mu_{1}-\mu_{2}|(Y^{-}).\]
Finally, combining (A.5) and (A.6) into (A.2)-(A.3), we have
\[\||\mu_{1}|-|\mu_{2}|\|_{\mathcal{M}(X)} \leq|\mu_{1}-\mu_{2}|(Y^{+})+|\mu_{1}-\mu_{2}|(Y^{-})\] \[=|\mu_{1}-\mu_{2}|(X)\] \[=\|\mu_{1}-\mu_{2}\|_{\mathcal{M}(X)}.\]
We have proved that \(\mu\mapsto|\mu|\) is \(1\)-Lipschitz. Since \(\mu^{+}=\frac{1}{2}\big{(}|\mu|+\mu\big{)}\) and \(\mu^{-}=\frac{1}{2}\big{(}|\mu|-\mu\big{)}\), both \(\mu\mapsto\mu^{+}\) and \(\mu\mapsto\mu^{-}\) are also \(1\)-Lipschitz. The proof is completed.
We have the following lemma.
**Lemma A.4**.: _Let \((X,\mathcal{B}(X))\) be a measurable space. The subset \(\mathcal{M}_{+}(X)\) is a positive cone of \(\mathcal{M}(X)\). That is,_
* \(\mathcal{M}_{+}(X)\) _is a closed and convex subset of_ \(\mathcal{M}(X)\)_._
* \(\lambda\,m\in\mathcal{M}_{+}(X),\,\forall\lambda\geq 0,\forall m\in\mathcal{M}_ {+}(X)\)_._
* \(\mathcal{M}_{+}(X)\cap-\mathcal{M}_{+}(X)=\big{\{}0_{\mathcal{M}(X)}\big{\}}\)_._
Proof.: Proof of (i). By Lemma A.3, the map \(\mu\mapsto\mu^{-}\) is continuous, and
\[\mathcal{M}_{+}(X)=\{\mu\in\mathcal{M}(X)\,:\,\mu^{-}=0\}.\]
The property (ii) is trivial, since \((\lambda m)(A)=\lambda m(A),\forall A\in\mathcal{B}(X)\).
Proof of (iii). Let \(\mu\in\mathcal{M}_{+}(X)\cap-\mathcal{M}_{+}(X)\). We observe that \(\mu\in\mathcal{M}_{+}(X)\) implies \(\mu^{-}=0\). Next \(\mu\in-\mathcal{M}_{+}(X)\) is equivalent to \(-\mu\in\mathcal{M}_{+}(X)\), and it follows that \((-\mu)^{-}=\mu^{+}=0\). We conclude that \(\mu=\mu^{+}-\mu^{-}=0\), and (iii) is proved.
When \(\mu\in\mathcal{M}(X)\) is a given measure (not necessarily finite), one can define the space of integrable functions quotiented by the equivalence \(\mu\)-almost everywhere, \(L^{1}(X,\mu)\). It is a Banach space [4, Vol. I Theorem 4.1.1 p.250] equipped with the norm
\[\|f\|_{L^{1}(X,\mu)}=\int_{X}|f(x)||\mu|(\mathrm{d}x).\]
For each \(f\in L^{1}(X,\mu)\), the product measure \(m(dx)=f(x)\mu(\mathrm{d}x)\) is defined by
\[m(A)=\int_{A}f(x)\mu(\mathrm{d}x),\forall A\in\mathcal{B}(X),\]
and this measure satisfies
\[\|m\|_{\mathcal{M}(X)}=\int_{X}|f(x)||\mu|(\mathrm{d}x)=\|f\|_{L^{1}(X,\mu)}.\]
It follows from its Banach space property, that \(L^{1}(X,\mu)\) is a closed subspace of \(\mathcal{M}(X)\). Remark that it is still true when \(X=I\) is an interval and \(\mu(\mathrm{d}x)=\mathrm{d}x\) is the Lebesgue measure, in which case \(L^{1}(X,\mu)=L^{1}(I)\) is the usual space of \(L^{1}\) functions.
Let us recall the Radon-Nikodym Theorem for signed measures [4, Vol. I Theorem 3.2.2 p.178]. We first recall the notion of absolute continuity [4, Vol. I Definition 3.2.1 (i) p.178].
**Definition A.5** (Absolute continuity).: _Let \((X,\mathcal{B}(X))\) be a measurable space, and \(\mu,\nu\in\mathcal{M}(X)\) be two signed measures. The measure \(\nu\) is **absolutely continuous** with respect to \(\mu\) (notation: \(\nu\ll\mu\)) if for any Borel subset \(A\in\mathcal{B}(X),\)\(|\mu|(A)=0\) implies \(|\nu|(A)=0\)._
**Theorem A.6** (Radon-Nikodym).: _Let \((X,\mathcal{B}(X))\) be a measurable space and \(\mu,\nu\in\mathcal{M}(X)\). The measure \(\nu\) is absolutely continuous with respect to \(\mu\) if, and only if, there exists a \(\mu\)-integrable function \(f\in L^{1}(X,\mu)\), such that_
\[\nu(A)=\int_{A}f(x)\mu(\mathrm{d}x),\forall A\in\mathcal{B}(X).\]
Next, we consider the following formula
\[\|u\|_{\mathcal{M}(X)}=\sup_{\phi\in C(X):\|\phi\|_{\infty}\leq 1} \int_{X}\phi(x)u(dx),\forall u\in\mathcal{M}(X).\]
where \(X\) is a Polish space.
An equivalent statement is proved in [4, Vol.II Theorem 7.9.1 p.108] with far more general assumptions.
Here, we give a more elementary proof when \(X\) is Polish. We rely on the Borel-regularity of Borel measures that we recall first. The following statement is exactly [4, Vol. I Theorem 1.4.8 p.30] when \(X\subset\mathbb{R}^{n}\), and in general it is an easy consequence of the fact that all Borel measures are Radon in a Polish space [4, Vol. II Theorem 7.1.7 p.70].
**Theorem A.7** (Approximations of Borel measures).: _Let \((X,d)\) be a Polish space, and let \(\mu\) be a Borel measure on \(X\). Then, for any Borel set \(B\subset X\), and any \(\varepsilon>0\), there exists an open subset \(U_{\varepsilon}\subset X\), and a compact subset \(K_{\varepsilon}\subset X\), such that_
\[K_{\varepsilon}\subset B\subset U_{\varepsilon},\text{ and }\mu\left(U_{ \varepsilon}\backslash K_{\varepsilon}\right)\leq\varepsilon.\]
Now we have the following result.
**Proposition A.8**.: _Let \((X,d)\) be a Polish space. For any measure \(\mu\in\mathcal{M}(X)\), we have_
\[\|\mu\|_{\mathcal{M}(X)}=\sup_{\phi\in C(X)\,:\,|\phi|\leq 1}\int_{X}\phi(x) \mu(\mathrm{d}x).\]
Proof.: Let \(\mu^{+}\) and \(\mu^{-}\) be the positive and negative part of \(\mu\) and \(X^{+},X^{-}\) the support of \(\mu^{+}\) and \(\mu^{-}\), respectively. By Theorem A.7 applied to \(|\mu|\), there exists \(K_{\varepsilon}^{+}\subset X^{+}\subset U_{\varepsilon}^{+}\) with \(K_{\varepsilon}^{+}\) compact and \(U_{\varepsilon}^{+}\) open such that
\[|\mu|(U_{\varepsilon}^{+}\backslash K_{\varepsilon}^{+})\leq\frac{ \varepsilon}{4},\]
so
\[\begin{array}{ll}\mu^{+}(X^{+})=|\mu|(X^{+})&=|\mu|\big{(}K_{\varepsilon}^{+} \cup(X^{+}\backslash K_{\varepsilon}^{+})\big{)}\\ &\leq\mu^{+}(K_{\varepsilon}^{+})+|\mu|\big{(}U_{\varepsilon}^{+}\backslash K_{ \varepsilon}^{+}\big{)}\\ &\leq\mu^{+}(K_{\varepsilon}^{+})+\frac{\varepsilon}{4},\end{array}\]
that is, \(\mu^{+}(K_{\varepsilon}^{+})\geq\mu^{+}(X^{+})-\frac{\varepsilon}{4}\).
Similarly we can find \(K_{\varepsilon}^{-}\) compact and \(U_{\varepsilon}^{-}\) open such that
\[|\mu|(U_{\varepsilon}^{-}\backslash K_{\varepsilon}^{-})\leq\frac{ \varepsilon}{4},\text{ so }\mu^{-}(K_{\varepsilon}^{-})\geq\mu^{-}(X^{-})-\frac{ \varepsilon}{4}.\]
Recall that the distance between a point \(x\) and a subset \(B\subset X\) is defined as
\[d(x,B)=\inf_{y\in B}|x-y|.\]
Consider
\[d_{+}=\min_{y\not\in U_{\varepsilon}^{+}}d(y,K_{\varepsilon}^{+})>0,\text{ and }d_{-}=\min_{y\not\in U_{\varepsilon}^{-}}d(y,K_{\varepsilon}^{-})>0.\]
Define \(d=\min(d_{-},d_{+})\). Then
\[\phi^{+}(x)=\rho\bigg{(}\text{dist}(x,K_{\varepsilon}^{+})/d\bigg{)},\text{ and }\phi^{-}(x)=\rho\bigg{(}\text{dist}(x,K_{\varepsilon}^{-})/d\bigg{)},\]
where \(\rho\) is the truncation map
\[\rho(u)=\left\{\begin{array}{ll}e^{u^{2}/(u^{2}-1)},\text{ if }|u|<1,\\ 0,\text{if }|u|\geq 1.\end{array}\right.\]
By definition we have \(\phi^{+}(x)\) and \(\phi^{-}(x)\) are continuous maps, and
\[\phi^{+}(x)\left\{\begin{array}{ll}=0,\text{ if }x\not\in U_{\varepsilon}^{+}, \\ =1,\text{if }x\in K_{\varepsilon}^{+}\\ \in[0,1],\text{ otherwise},\end{array}\right.\text{ and }\phi^{-}(x)\left\{ \begin{array}{ll}=0,\text{ if }x\not\in U_{\varepsilon}^{-},\\ =1,\text{if }x\in K_{\varepsilon}^{-}\\ \in[0,1],\text{ otherwise}.\end{array}\right.\]
Consider \(\phi(x):=\phi^{+}(x)-\phi^{-}(x)\), then we have
\[\int_{X}\phi(x)\mu(\mathrm{d}x) =\int_{X}\phi^{+}(x)\mu(\mathrm{d}x)-\int_{X}\phi^{-}(x)\mu( \mathrm{d}x)\] \[=\int_{K_{\varepsilon}^{+}}\phi^{+}(x)\mu(\mathrm{d}x)+\int_{U_{ \varepsilon}^{+}\setminus K_{\varepsilon}^{+}}\phi^{+}(x)\mu(\mathrm{d}x)\] \[-\int_{K_{\varepsilon}^{-}}\phi^{-}(x)\mu(\mathrm{d}x)-\int_{U_{ \varepsilon}^{-}\setminus K_{\varepsilon}^{-}}\phi^{-}(x)\mu(\mathrm{d}x)\] \[\geq\mu(K_{\varepsilon}^{+})-\int_{U_{\varepsilon}^{+}\setminus K _{\varepsilon}^{+}}\phi^{+}(x)|\mu|(\mathrm{d}x)-\mu(K_{\varepsilon}^{-})- \int_{U_{\varepsilon}^{-}\setminus K_{\varepsilon}^{-}}\phi^{-}(x)|\mu|( \mathrm{d}x)\] \[\geq\mu^{+}(K_{\varepsilon}^{+})+\mu^{-}(K_{\varepsilon}^{-})- \frac{\varepsilon}{2}\] \[\geq\mu^{+}(X^{+})-\frac{\varepsilon}{4}+\mu^{-}(X^{-})-\frac{ \varepsilon}{4}-\frac{\varepsilon}{2}\] \[=|\mu|(X)-\varepsilon=\|\mu\|_{\mathcal{M}(X)}-\varepsilon.\]
Since \(\varepsilon>0\) is arbitrary, we have proved that
\[\sup_{\phi\in C(X)\,:\,\|\phi\|_{\infty}\leq 1}\int_{X}\phi(x)\mu(\mathrm{d}x)\geq \|\mu\|_{\mathcal{M}(X)}.\]
The converse inequality follows from the comparison of integrals \(\int_{X}\phi(x)\mu(\mathrm{d}x)\leq\|\phi\|_{\infty}\int_{X}|\mu|(\mathrm{d}x)\leq\|\mu\|_{\mathcal{M}(X)}\) whenever \(\|\phi\|_{\infty}\leq 1\). Proposition A.8 is proved.
**Example A.9** (A bounded linear form that is not a measure).: _The space of measures on a non-compact metric space \((X,d)\) is not the dual space of the bounded continuous functions (the bounded sequences in the present case). Indeed, consider the following example (taken from the book of Bogachev [4]): \(X=\mathbb{N}\) endowed with the standard metric \(d(n,m)=|n-m|\). Due to the additivity property of measures, any measure on \(\mathbb{N}\) acts as a linear form. That is,_
\[\mu(f)=\int_{\mathbb{N}}f(n)\mu(dn)=\sum_{n=1}^{\infty}\mu_{n}f_{n},\]
_whenever \(f\in l^{\infty}\left(\mathbb{N},\mathbb{R}\right)\), the space of bounded sequences, which is a Banach space endowed with the standard supremum norm \(\|f\|_{\infty}=\sup_{n\geq 1}|f_{n}|\)._
_Next, if we consider the linear form_
\[x^{\star}(f)=\lim_{n\to\infty}f_{n},\]
_defined for the converging sequences. By the Hahn-Banach theorem, \(x^{\star}\) has a continuous extension to the space of bounded sequences (endowed with the standard supremum norm), and this extension is not a measure. Therefore the dual space \(l^{\infty}\left(\mathbb{N},\mathbb{R}\right)^{\star}\) is larger than \(\mathcal{M}(X)\), the space of measures on \(X=\mathbb{N}\)._
|
2309.16336 | Transient fading X-ray emission detected during the optical rise of a
tidal disruption event | We report on the SRG/eROSITA detection of ultra-soft ($kT=47^{+5}_{-5}$ eV)
X-ray emission ($L_{\mathrm{X}}=2.5^{+0.6}_{-0.5} \times 10^{43}$ erg s$^{-1}$)
from the tidal disruption event (TDE) candidate AT 2022dsb $\sim$14 days before
peak optical brightness. As the optical luminosity increases after the eROSITA
detection, then the 0.2--2 keV observed flux decays, decreasing by a factor of
$\sim 39$ over the 19 days after the initial X-ray detection. Multi-epoch
optical spectroscopic follow-up observations reveal transient broad Balmer
emission lines and a broad He II 4686A emission complex with respect to the
pre-outburst spectrum. Despite the early drop in the observed X-ray flux, the
He II 4686A complex is still detected for $\sim$40 days after the optical peak,
suggesting the persistence of an obscured, hard ionising source in the system.
Three outflow signatures are also detected at early times: i) blueshifted
H$\alpha$ emission lines in a pre-peak optical spectrum, ii) transient radio
emission, and iii) blueshifted Ly$\alpha$ absorption lines. The joint evolution
of this early-time X-ray emission, the He II 4686A complex and these outflow
signatures suggests that the X-ray emitting disc (formed promptly in this TDE)
is still present after optical peak, but may have been enshrouded by optically
thick debris, leading to the X-ray faintness in the months after the
disruption. If the observed early-time properties in this TDE are not unique to
this system, then other TDEs may also be X-ray bright at early times and become
X-ray faint upon being veiled by debris launched shortly after the onset of
circularisation. | A. Malyali, A. Rau, C. Bonnerot, A. J. Goodwin, Z. Liu, G. E. Anderson, J. Brink, D. A. H. Buckley, A. Merloni, J. C. A. Miller-Jones, I. Grotova, A. Kawka | 2023-09-28T10:48:16Z | http://arxiv.org/abs/2309.16336v1 | # Transient fading X-ray emission detected during the optical rise of a tidal disruption event
###### Abstract
We report on the _SRG_/eROSITA detection of ultra-soft (\(kT=47^{+5}_{-5}\) eV) X-ray emission (\(L_{\rm X}=\)2.5\({}^{+0.6}_{-0.5}\times 10^{43}\) erg s \({}^{-1}\)) from the tidal disruption event (TDE) candidate AT 2022dsb \(\sim\)14 days before peak optical brightness. As the optical luminosity increases after the eROSITA detection, then the 0.2-2 keV observed flux decays, decreasing by a factor of \(\sim\)39 over the 19 days after the initial X-ray detection. Multi-epoch optical spectroscopic follow-up observations reveal transient broad Balmer emission lines and a broad He ii 4686A emission complex with respect to the pre-outburst spectrum. Despite the early drop in the observed X-ray flux, the He ii 4686A complex is still detected for \(\sim\)40 days after the optical peak, suggesting the persistence of an obscured, hard ionising source in the system. Three outflow signatures are also detected at early times: i) blueshifted H\(\alpha\) emission lines in a pre-peak optical spectrum, ii) transient radio emission, and iii) blueshifted Ly\(\alpha\) absorption lines. The joint evolution of this early-time X-ray emission, the He ii 4686A complex and these outflow signatures suggests that the X-ray emitting disc (formed promptly in this TDE) is still present after optical peak, but may have been enshrouded by optically thick debris, leading to the X-ray faintness in the months after the disruption. If the observed early-time properties in this TDE are not unique to this system, then other TDEs may also be X-ray bright at early times and become X-ray faint upon being veiled by debris launched shortly after the onset of circularisation.
keywords: accretion, accretion discs - galaxies: nuclei - black hole physics - transients: tidal disruption events -
## 1 Introduction
The number of stellar tidal disruption event (TDE) candidates identified in recent years has greatly increased, largely fuelled by the increasing number of wide-field, high-cadence time-domain surveys operating across the electromagnetic spectrum. Although early theoretical work predicted TDEs to produce large amplitude, ultra-soft X-ray flares originating from the centres of galaxies (Rees, 1988) - consistent with the first TDE candidates identified by _ROSAT_(Trumper, 1982) in the 1990s (Bade et al., 1996; Grupe et al., 1999; Komossa & Greiner, 1999; Komossa & Bade, 1999; Greiner et al., 2000) - the majority of optically-selected TDE candidates do not show transient X-ray emission (van Velzen et al., 2021; Hammerstein et al., 2023). To explain the dearth of X-rays in these systems, it has been suggested that the optical emission is produced by the debris circularisation process instead of accretion (stream-stream collisions; Piran et al., 2015; Shiokawa et al., 2015), or that a large fraction of the X-ray emission is reprocessed to optical/UV bands by debris enveloping the nascent disc (Loeb & Ulmer, 1997; Lodato & Rossi, 2011; Miller, 2015; Metzger & Stone, 2016; Dai et al., 2018; Lu & Bonnerot, 2020).
Optical spectroscopic follow-up of optically bright TDEs has led to the classification of TDEs into different spectral types (e.g. Arcavi et al., 2014; Leloudas et al., 2019; van Velzen et al., 2021), depending on the emission lines seen in the spectra. These are i) ‘H’, which show transient broad Balmer emission lines, ii) ‘H+He’, showing transient broad Balmer emission lines and a broad emission complex around He ii 4686A, and iii) ‘He’, which show a transient broad He ii 4686A emission feature but no Balmer emission. An additional spectral TDE class not common in recent optically-selected TDE samples is the extreme coronal line emitters (ECLEs; Komossa et al., 2009; Wang et al., 2011, 2012), which show strong emission from high-ionisation coronal lines with respect to their narrow [O iii] 5007 A emission. A hard ionising source (photons with energy above 54 eV) is needed to produce the He ii emission seen in ‘He’ and ‘H+He’ TDEs (herein collectively referred to as He-TDEs), yet the majority of TDE candidates even in these classes do not show transient X-ray emission (Hammerstein et al., 2023). As it is thought that these hard photons originate from the high-energy tail of the newly formed disc, the combination of the X-ray faintness and the He ii emission in
these systems has been suggested as evidence for 'obscured accretion' (Leloudas et al., 2019), where an accretion disc has formed in these systems, but its high-energy emission gets reprocessed into the optical band by an optically-thick gaseous envelope. Several TDE candidates have also shown broad He ii lines close to peak optical brightness (Blagorodnova et al., 2017; Nicholl et al., 2020; Wevers et al., 2022), which under the assumption of an obscured accretion-driven origin, suggests efficient circularisation of the debris into a disc post-disruption.
Here, we report on multi-wavelength observations of the TDE candidate AT 2022dsb, which shows a factor of 39 decrease in its 0.2-2 keV observed flux during the optical rise. Section 2 describes the discovery of AT 2022dsb, whilst sections 3 and 4 detail multi-wavelength observations of the system and their analysis, respectively. In section 5, we review previous X-ray observations of TDEs at early times and compare these with AT 2022dsb. The implications of our observational campaign are discussed in section 6, and our conclusions in section 7. All magnitudes are reported in the AB system and corrected for Galactic extinction using \(A_{\rm V}=0.62\) mag, obtained from Schlafly & Finkbeiner (2011), \(R_{\rm V}=3.1\) and a Cardelli extinction law (Cardelli et al., 1989). The effective wavelength for each filter was retrieved from the SVO Filter Profile Service1. All dates and times will be reported in universal time (UT).
Footnote 1: [http://svo2.cab.inta-csic.es/theory/fps/](http://svo2.cab.inta-csic.es/theory/fps/)
## 2 Discovery
AT 2022dsb/ eRASSt J154221.6-224012 was independently discovered by the extended ROentgen Survey with an Imaging Telescope Array (eROSITA; Predehl et al., 2021), the soft X-ray instrument on board the _Spektrum-Roentgen-Gamma_ (SRG; Sunyaev et al., 2021) observatory, during a systematic search for TDE candidates in the fifth eROSITA All-Sky Survey (eRASS5), when it was observed on 2022-02-17 as a new, bright (0.2-2 keV observed flux of \(\sim 3\times 10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\)), ultra-soft (section 4.1) X-ray point source. Using the _eROSITA Science Analysis Software_ pipeline (eSASS2; Brunner et al., 2022), the source was localised to (RAJ\({}_{2000}\), Dec\({}_{2000}\))=(15h42m21.6s,-22\({}^{\circ}\)40'12.1''), with a \(1\sigma\) positional uncertainty of \(1.9\arcsec\) (68% confidence), consistent with the galaxy ESO 583-G004 at \(z=0.0235\) (Fig. 1). No X-ray source had been detected within \(30\arcsec\) of this position in any of the previous four eRASS, with a \(3\sigma\) upper limit on the 0.2-2 keV band flux of \(5\times 10^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\), assuming the same spectral model fitted to the eRASS5 spectrum (Liu et al., 2022). The last non-detection by eROSITA occurred \(\sim\)6 months before the eRASS5 detection.
Footnote 2: Version: eSASSusers_211214.
AT 2022dsb was later publicly classified on 2022-03-02 as a TDE candidate in the TNS report TNSCR-2022-584 (Fulton et al., 2022), after the discovery and reporting of optical transient emission (associated to the nucleus of the host galaxy ESO 583-G004) initially by ASAS-SN on 2022-03-01 in Stanek & Kochanek (2022), and then by both the Asteroid Terrestrial Impact Last Alert System (ATLAS; Tonry et al., 2018), and the Zwicky Transient Facility (ZTF; Bellm et al., 2019; Graham et al., 2019) by the ALeRCE alert broker (Forster et al., 2021).
### Host galaxy
A pre-outburst optical spectrum of ESO 583-G004 was taken in 2002 during the Six-Degree Field (6dF; Jones et al., 2009) galaxy survey. A recent analysis of the narrow emission lines in this optical spectrum classified the system as a type II AGN (Chen et al., 2022), according to the criteria presented in Kewley et al. (2001). However, the pre-outburst _AllWISE_(Wright et al., 2010; Mainzer et al., 2014) colour of the host, \(W1-W2=0.00\pm 0.03\) mag, suggests that its mid-infrared emission is dominated by the galaxy light, instead of the luminous emission from a dusty torus surrounding an AGN (Stern et al., 2012; Assef et al., 2018). ESO 583-G004 may have hosted a low-luminosity AGN prior to the outburst of AT 2022dsb, similar to other TDE candidates (e.g. ASASSN-14li, Holoien et al., 2016; AT 2019qiz, Nicholl et al., 2020).
We fitted the DESI Legacy DR10 (Dey et al., 2019) archival photometry of the host galaxy (\(g\), \(i\), \(W1\), \(W2\), \(W3\) and \(W4\) bands3) with the stellar population inference tool Prospector(Johnson et al., 2021), which uses a python wrapper (Foreman-Mackey et al., 2014) of the Flexible Stellar Population Synthesis code (Conroy & Gunn, 2010) for generating the SEDs of stellar populations. The SED model includes both stellar and nebular emission, as well as dust attenuation and emission, and adopts a Chabrier initial mass function (IMF; Chabrier, 2003); the free parameters are the total stellar mass of the galaxy (\(M_{\star}\), the sum of both living and remnant stars), the metallicity (\(\log(Z/Z_{\odot})\)), the age of the galaxy (\(t_{\rm age}\)), the decay timescale under an exponentially declining star formation model (\(\tau_{\rm SF}\)), and the host galaxy dust extinction (\(A_{\rm V}\)). Posterior distributions were sampled using the dynamic nested sampler (Skilling, 2004, 2006; Higson et al., 2019) dynesty(Speagle, 2020; Koposov et al., 2023), with the posterior model shown in Fig. 2 and the parameter estimates in Table 1. From the inferred \(M_{\star}=7_{-3}^{+4}\times 10^{10}\) M\({}_{\odot}\)and using the rela
Figure 1: Finder chart for AT 2022dsb (DESI LS DR10 \(g\)-band image). The red circle denotes the \(3\sigma\) uncertainty on the eROSITA source position in eRASSS5, whilst the dark orange star marks the _Gaia_ DR3 optical centre of the host galaxy ESO 583-G004.
tion between \(M_{\rm BH}\) and \(M_{\star}\) in Reines & Volonteri (2015), we infer \(\log[M_{\rm BH}/M_{\odot}]=7.3^{+0.2}_{-0.3}\).
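For reference, the black hole mass quoted above can be reproduced with the short sketch below (Python). The relation coefficients (\(\alpha\simeq 7.45\), \(\beta\simeq 1.05\)) are our recollection of the AGN-based Reines & Volonteri (2015) scaling and should be checked against that paper; the stellar mass is the value from the SED fit above.

```python
import numpy as np

# Assumed coefficients of the Reines & Volonteri (2015) M_BH-M_* relation for local AGN:
# log10(M_BH / M_sun) ~ alpha + beta * log10(M_* / 1e11 M_sun); verify against the paper.
alpha, beta = 7.45, 1.05
log_mstar = np.log10(7e10)                    # stellar mass from the SED fit (M_sun)
log_mbh = alpha + beta * (log_mstar - 11.0)
print(f"log10(M_BH/M_sun) ~ {log_mbh:.1f}")   # ~7.3, consistent with the value quoted above
```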
## 3 Observations and Data Reduction
After the initial eROSITA discovery, additional X-ray observations of AT 2022dsb were obtained with _XMM-Newton_ (section 3.1.2) and _Swift_ XRT (section 3.1.3); collectively, these sample the X-ray emission from a TDE pre-peak, near-peak and post-peak optical brightness. The optical, UV and radio evolution was also monitored with ground-based photometry (section 3.2.1), _Swift_ UVOT (section 3.2.2) and the Australia Telescope Compact Array (ATCA; section 3.4), respectively. The full multi-wavelength lightcurve of AT 2022dsb is depicted in Fig. 3, and a comparison of the optical lightcurve with other TDE candidates is shown in Fig. 4. A log of all X-ray observations and the inferred fluxes is presented in Table 2, whilst the optical and UV photometry can be found in Table A1.
### X-ray
#### 3.1.1 eROSITA
The position of AT 2022dsb was observed by eROSITA during the first four eRASS, (denoted eRASS1, eRASS2, eRASS3, and eRASS4, respectively) on 2020-02-27, 2020-08-26, 2021-02-11, and 2021-08-20. During eRASS5, eROSITA first observed AT 2022dsb on 2022-02-17, scanning over its position seven times over the following day, with each visit separated by four hours (Fig. 5). Using the eSASS task SRCTOOL, we generated source and background spectra by extracting counts from a source aperture of radius 30'', and a background annulus of inner and outer radii 90'' and 240'', respectively, with both apertures centred on the eROSITA position of AT 2022dsb. The same source and background apertures were used to generate a 0.2-2 keV lightcurve using SRCTOOL, shown in Fig. 5. AT 2022dsb is clearly detected above background by eROSITA in each of the seven observations within eRASS5 (i.e. is persistently bright, instead of showing a 'one-off' short flaring), providing a lower limit of 1 day on the duration of X-ray emission at early-times in AT 2022dsb. A log of X-ray observations is presented in Table 2.
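As a schematic of the area scaling that underlies such source/background extraction, the net count rate follows from weighting the background counts by the ratio of the extraction areas. The sketch below (Python) uses the aperture radii quoted above, but the counts and exposure are invented for illustration and are not the eROSITA values.

```python
import numpy as np

r_src, r_in, r_out = 30.0, 90.0, 240.0        # source circle and background annulus radii (arcsec)
area_ratio = r_src**2 / (r_out**2 - r_in**2)  # circle area / annulus area

src_counts, bkg_counts, exposure = 42, 310, 160.0   # illustrative numbers only (counts, counts, s)
net_rate = (src_counts - bkg_counts * area_ratio) / exposure
net_rate_err = np.sqrt(src_counts + bkg_counts * area_ratio**2) / exposure
print(f"net rate = {net_rate:.3f} +/- {net_rate_err:.3f} cts/s")
```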
#### 3.1.2 XMM-Newton
Additional observations of AT 2022dsb were performed with _XMM_ (P.I. Z. Liu), with the first taking place \(\sim\)19 days after the eRASS5 detection on 2022-03-08, and then \(\sim\)173 days after this on 2022-08-29; observations were carried out in imaging mode with the medium filter. To reduce and analyse the _XMM_ data, we used HEASOFT (version 6.29), the _XMM_ Science Analysis Software (SAS) (version 20211130_0941), and the latest calibration data files. Calibrated event files were generated from the Observation Data Files (ODF) using emproc and epproc for the MOS and PN cameras, respectively, and periods of high particle background during each observation were filtered out following the _XMM_ Science Operation Centre recommended procedures. This resulted in 18.0 ks and 13.3 ks exposures for the first and second observations, respectively. Source spectra were extracted from a circle of radius 20'', centred on the _Gaia_ EDR3 (Gaia Collaboration et al., 2021) position of ESO 583-G004, whilst background spectra were extracted from an annulus with inner and outer radii 76'' and 144'' respectively. Only events with PATTERN<=4 and FLAG==0 were extracted for PN, whilst PATTERN<=12 was applied for MOS1 and MOS2.
#### 3.1.3 Swift XRT
AT 2022dsb was further monitored in the 0.3-10 keV band with the XRT instrument (Burrows et al., 2005) on-board the _Neil Gehrels Swift_ observatory (Gehrels et al., 2004)4. XRT observations commenced on 2022-03-05, \(\sim\)16 days after the eRASS5 observation, and were performed in photon counting mode. These were then analysed with the online XRT product building tool provided by the UK Swift Science Data Centre (UKSSDC; Evans et al., 2007, 2009). AT 2022dsb was not detected in any of the XRT observations, with 3\(\sigma\) upper limits on the 0.3-2 keV count rates computed using the method presented in Kraft et al. (1991). These were then converted to 0.2-2 keV fluxes using webPIMMS5, where we adopted the spectral model inferred from our BXA fit to the eRASS5 spectrum.
Footnote 4: P.I. for _Swift_ observations: A. Malyali, P. Charalampopoulos, J. Hinkle, I. Lypova.
Footnote 5: [https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl](https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl)
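The XRT non-detections above rely on the Bayesian Poisson limits of Kraft et al. (1991). A minimal sketch of that construction (flat prior on the non-negative source counts, upper limit obtained by inverting the posterior cumulative distribution) is given below; the counts, background and exposure are illustrative and are not the values used in this work.

```python
import numpy as np
from scipy.special import gammaincc
from scipy.optimize import brentq

def kraft_upper_limit(n_obs, b_exp, cl=0.9987):
    """Upper limit on source counts following Kraft, Burrows & Nousek (1991).

    Flat prior on s >= 0; posterior density p(s) ~ exp(-(s+b)) (s+b)^n / n!.
    Its normalisation over s >= 0 equals Q(n+1, b), the regularised upper
    incomplete gamma function.
    """
    norm = gammaincc(n_obs + 1, b_exp)
    cdf = lambda s: (norm - gammaincc(n_obs + 1, s + b_exp)) / norm
    s_max = 10.0 * (n_obs + b_exp + 5.0)          # generous bracket for the root
    return brentq(lambda s: cdf(s) - cl, 0.0, s_max)

s_ul = kraft_upper_limit(n_obs=3, b_exp=1.2)      # illustrative counts only
print(f"3-sigma limit: {s_ul:.2f} counts -> {s_ul / 2000.0:.2e} cts/s for a 2 ks exposure")
```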
### Photometry
#### 3.2.1 Ground-based photometry
We obtained ATLAS \(o\) and \(c\) band (Tonry et al., 2018) lightcurves of AT 2022dsb using the online forced photometry server (Smith et al., 2020; Shingles et al., 2021). For late-time observations (MJD\(>\)59635), we performed a weighted rebin of the lightcurve into
\begin{table}
\begin{tabular}{c c} \hline Parameter & 68\% CR \\ \hline \(M_{\bullet}/10^{10}M_{\odot}\) & \(6.9^{+3.9}_{-0.3}\) \\ \(\log[Z/Z_{\odot}]\) & \(-0.7^{+0.3}_{-0.3}\) \\ \(A_{\rm V}\)/mag & \(0.62^{+0.08}_{-0.06}\) \\ \(t_{\rm age}\)/Gyr & \(5.4^{+0.2}_{-0.2}\) \\ \(\tau_{\rm SF}\)/Gyr & \(0.2^{+0.2}_{-0.1}\) \\ \hline \end{tabular}
\end{table}
Table 1: Host galaxy properties inferred via SED fitting to the archival photometry, with CR denoting the credible region for a parameter.
Figure 2: SED fit to the LS DR10 photometry of the host galaxy of AT 2022dsb. The observed photometry is plotted with black edged circular markers, whilst the posterior model and model photometry is shown in blue (solid line represents the median, shaded region encloses the middle 90% of the posterior) and red, respectively.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline MJD & MJD\({}_{\rm start}\) & MJD\({}_{\rm stop}\) & Instrument & ObsID & log[\(F_{\rm 0.2-2keV,abs}\)] & log[\(F_{\rm 0.2-2keV,unabs}\)] & log[\(L_{\rm 0.2-2keV}\)] \\ \hline
59627.939 & 59627.439 & 59628.439 & eROSITA & eRASS5 & \(-12.46^{+0.07}_{-0.08}\) & \(-10.79^{+0.10}_{-0.14}\) & \(43.34^{+0.10}_{-0.14}\) \\
59643.115 & 59643.004 & 59643.226 & XRT & 00015054002 & \(<\)-12.69 & \(<\)-11.11 & \(<\)43.02 \\
59646.226 & 59646.088 & 59646.363 & EPIC PN & XMM1 & \(-14.05^{+0.05}_{-0.06}\) & \(-13.50^{+0.10}_{-0.10}\) & \(40.62^{+0.10}_{-0.10}\) \\
59649.772 & 59649.702 & 59649.842 & XRT & 00015054003 & \(<\)-12.71 & \(<\)-11.12 & \(<\)-43.01 \\
59656.156 & 59656.150 & 59656.161 & XRT & 00015054005 & \(<\)-12.62 & \(<\)-11.03 & \(<\)43.10 \\
59663.969 & 59663.968 & 59663.969 & XRT & 00015054006 & \(<\)-10.81 & \(<\)-9.22 & \(<\)-44.90 \\
59668.128 & 59668.030 & 59668.225 & XRT & 00015054007 & \(<\)-12.65 & \(<\)-11.07 & \(<\)43.06 \\
59673.808 & 59673.805 & 59673.811 & XRT & 00015054008 & \(<\)-12.26 & \(<\)-10.67 & \(<\)43.45 \\
59677.316 & 59677.178 & 59677.453 & XRT & 00015054009 & \(<\)-12.12 & \(<\)-10.53 & \(<\)43.60 \\
59683.209 & 59683.205 & 59683.214 & XRT & 00015054010 & \(<\)-12.38 & \(<\)-10.80 & \(<\)43.33 \\
59696.480 & 59696.206 & 59696.755 & XRT & 00015054011 & \(<\)-12.53 & \(<\)-10.94 & \(<\)43.19 \\
59701.425 & 59701.251 & 59701.598 & XRT & 00015054012 & \(<\)-12.55 & \(<\)-10.96 & \(<\)43.16 \\
59704.749 & 59704.513 & 59704.986 & XRT & 00015054013 & \(<\)-12.51 & \(<\)-10.93 & \(<\)43.20 \\
59729.976 & 59729.051 & 59730.902 & XRT & 00015054014 & \(<\)-12.27 & \(<\)-10.68 & \(<\)43.44 \\
59735.808 & 59735.801 & 59735.816 & XRT & 00015054015 & \(<\)-12.39 & \(<\)-10.80 & \(<\)43.32 \\
59820.520 & 59820.376 & 59820.664 & EPIC PN & XMM2 & \(-14.18^{+0.08}_{-0.08}\) & \(-13.38^{+0.16}_{-0.15}\) & \(40.75^{+0.16}_{-0.15}\) \\
59855.517 & 59855.146 & 59855.88 & XRT & 00015054016 & \(<\)-12.91 & \(<\)-11.32 & \(<\)-42.81 \\ \hline \end{tabular}
\end{table}
Table 2: X-ray lightcurve of AT 2022dsb. \(F_{\rm 0.2-2keV,abs}\) and \(F_{\rm 0.2-2keV,unabs}\) are the observed (not corrected for Galactic absorption) and unabsorbed 0.2–2 keV band fluxes in units of \(\rm erg\,cm^{-2}\,s^{-1}\). log[\(L_{\rm 0.2-2keV}\)] is inferred from \(F_{\rm 0.2-2keV,unabs}\). MJD is computed from the midpoint of MJD\({}_{\rm start}\) and MJD\({}_{\rm stop}\). The fluxes have been estimated from the best fitting model (Table 4), with the 3\(\sigma\) upper limits on the count rates converted to fluxes using the best fitting model to the eRASS5 spectrum.
Figure 3: 0.2–2 keV X-ray (top) and optical-UV (bottom) evolution of AT 2022dsb. Solid markers denote \(>3\sigma\) detections in each epoch, whereas translucent triangles mark \(3\sigma\) upper limits. The vertical red line marks the eROSITA observation of AT 2022dsb, whilst the vertical orange line marks the inferred time of optical peak on MJD 59641 (section 4.2), \(>14\) days after the eRASS5 detection.
1 day intervals. To improve the sampling of the lightcurve around peak optical brightness, no such rebinning was performed for observations taken during the optical rise and the early part of the decay (59620 \(<\) MJD \(<\) 59635). To remove epochs of low quality photometry in the ATLAS lightcurve, we discarded datapoints where the semi-major axis of the fitted PSF model was greater than 3 pixels (1.86'' per pixel).
In addition, \(g\) and \(r\)-band lightcurves6 were generated using the ZTF forced photometry service (Masci et al., 2019), which were then calibrated using the method developed by Miller et al. (in prep.) for the ZTF Bright Transient Survey7. No significant optical variability is seen in the ZTF lightcurves before the 2022 outburst, and we note that the ZTF observations do not sample the rise and peak optical brightness (Fig. 3).
Footnote 6: This is generated from science images that have already been reference image subtracted
Footnote 7: [https://github.com/BrightTransientSurvey/ztf_forced_phot](https://github.com/BrightTransientSurvey/ztf_forced_phot)
#### 3.2.2 Swift UVOT
Over the course of the _Swift_ monitoring campaign, AT 2022dsb was observed by the UVOT (Roming et al., 2005) instrument across all filters (\(V\), \(B\), \(U\), \(UVW1\), \(UVM2\) and \(UVW2\)), although the number of filters used varied between each observation (see photometry in Table A1). In this work, we use observations performed only in the \(UVW1\), \(UVM2\) and \(UVW2\) filters, since the lightcurve sampling is highest in these bands, and the optical coverage is already provided by ATLAS and ZTF. We first downloaded the level 2 UVOT sky images from the UK Swift Science Data Centre, before computing aperture photometry on these with the uvotsource task (HEASOFT v6.29, CALDB v20201215), using a 5'' radius source aperture, and a nearby, source-free circular aperture of radius 15'' for the background. Lastly, the recommended Small Scale Sensitivity check8 was completed.
Footnote 8: [https://swift.gsfc.nasa.gov/analysis/uvot_digest/sss_check.html](https://swift.gsfc.nasa.gov/analysis/uvot_digest/sss_check.html)
Footnote 9: Proposal ID CON2022A-001, PI: M. Salvato.
### Optical spectroscopy
The first follow-up optical spectrum of AT 2022dsb was obtained on 2022-02-26 (MJD= 59636, \(-\)5 days before optical peak), using the FLOYDS spectrograph mounted on the 2m Las Cumbres Observatory (LCO; Brown et al., 2013)1 telescope at Haleakala Observatory. Data processing and spectrum extraction were performed by the automatic FLOYDS pipeline at LCO (further details on the spectroscopic data reduction are presented in section B). This spectrum (Fig. 6) shows transient broad Balmer emission lines (H\(\alpha\), H\(\beta\)), a broad emission complex around 4600A, and a blue continuum, with the transient nature confirmed through comparison to archival and late-time optical spectra. In addition, narrow emission lines (H\(\alpha\), H\(\beta\), [N ii] 6548A and 6583 A, and the high-ionisation lines [O iii] 4959A and 5007 A), as well as several host galaxy absorption features, are clearly present. No strong blue continuum or broad emission lines were seen in a pre-outburst optical spectrum taken on 2002-04-15 during the 6dF Galaxy Survey (6dFGS; Jones et al., 2009). Over the 140 days of spectroscopic monitoring of AT 2022dsb after optical peak, the strength of the broad emission lines and the blue continuum relative to the host galaxy decreases (Fig. 6). Zoom-in plots on the evolution of the H\(\alpha\) and He ii complexes are presented in Fig. 11.
Footnote 1: Proposal ID CON2022A-001, PI: M. Salvato.
### Radio
#### 3.4.1 Archival
The Karl G. Jansky Very Large Array Sky Survey (VLASS; Lacy et al., 2020) observed the coordinates of AT2022dsb on 2020-11-03 and 2018-02-15, approximately 1 and 4 years prior to the detection of the transient event. There is no source present at the location of AT2022dsb in either of these observations, with a 3\(\sigma\) upper limit of 507\(\mu\)Jy and 419\(\mu\)Jy at 3 GHz for the 2020 and 2018 observations respectively.
#### 3.4.2 Follow-up
We observed the coordinates of AT2022dsb on three occasions with the Australia Telescope Compact Array (ATCA) between 2022 March and November (project C3334, PI Anderson/Goodwin). Observations were taken in the 4-cm band with the dual 5.5 and
Figure 4: Comparison of the ATLAS \(o\)-band lightcurve of AT 2022dsb (red stars) with the \(g\)-band lightcurves of ZTF-selected TDEs (blue markers) reported in Hammerstein et al. (2023). TDEs of similar peak absolute magnitudes are highlighted in non-dark blue colours, and we include data from the ‘faint and fast’ TDE iPTF-16fnI (Blagorodnova et al., 2017), and the ‘faint and slow’ TDE cRASJ074426.3+291606 (the faintest optically-bright TDE observed to date, J0744; Malyali et al., 2023b).
Figure 5: cRASSS lightcurve of AT 2022dsb in the 0.2-2 keV band. The blue markers denote the source count rates (corrected for vignetting), whilst the grey markers show the estimated background count rates. \(t-t_{\rm{HASS},0}\) is the time relative to the start of eROSITA’s observations of AT 2022dsb in eRASSS (MJD=59627.439). AT 2022dsb is persistently bright over the day-long monitoring window in eRASSS.
9 GHz receiver. Further, more detailed radio spectral monitoring of AT2022dsb is being carried out and will be published in a follow-up paper (Goodwin et al., in prep.). Because of the early eROSITA detection of AT 2022dsb, then the ATCA observations presented here represent one of the earliest radio detections of a TDE.
The ATCA data were reduced using the Common Astronomy Software Application (CASA v 5.6.3; The CASA Team et al. 2022) using standard procedures including flux and bandpass calibration with PKS 1934-638 and phase calibration with PKS 1514-241. The target field was imaged using the CASA task tclean with an image size of 4000 pixels and a cellsize of 0.3 arcsec at 5.5 GHz and an image size of 4000 pixels and a cellsize of 0.2 arcsec at 9 GHz. In all observations a point source was detected at the location of AT2022dsb. The flux density of the point source was extracted in the image plane using the CASA task imfit by fitting an elliptical Gaussian fixed to the size of the synthesized beam. A summary of the ATCA observations is given in Table 3 and the 5.5 GHz lightcurve of AT2022dsb is plotted in Figure 7 along with a selection of other radio-detected TDEs for comparison. Both the variability of the detected 5.5 GHz and 9 GHz radio emission and that the initial detection is above the 3\(\sigma\) VLASS 3 GHz upper limits years prior to the TDE suggest that the radio emission is likely related to the transient event and is not purely host galaxy emission. Although this VLASS upper limit is at a lower frequency than the ATCA observations (5.5 and 9.0 GHz), the ATCA spectrum is steep in the first epoch, so a spectral turnover would be needed in order to match the VLASS upper limit, which is not consistent with the host galaxy emission (that should be steep).
## 4 Data Analysis
### X-ray spectral fitting
The X-ray spectra were analysed using the Bayesian X-ray Analysis software (BXA; Buchner et al. 2014), which connects the nested sampling algorithm UltraNest (Buchner, 2021) with the fitting environment CIAO/Sherpa (Fruscione et al., 2006). The eROSITA and _XMM_ PN spectra were fitted in the 0.2-8 keV and 0.2-10 keV range, respectively. A joint fit of the source and background spectra was
Figure 6: Optical spectroscopic evolution of AT 2022dsb. The phase of the observation with respect to the inferred optical peak (MJD=59640.9\({}^{+0.5}_{-0.4}\)) is shown on the right hand side above each spectrum. Black, orange and red spectra were obtained using LCO/FLOYDS, NTT/EFOSC2 and SALT/RSS, respectively. The archival spectrum of the host galaxy is presented in Fig. 10.
Figure 7: 5.5 GHz radio luminosity of AT2022dsb (red stars) compared to a selection of other radio-detected thermal TDEs (AT 2019azh, Goodwin et al. 2022a; AT 2020bj, Goodwin et al. 2022b; AT 2019dsg, Cendes et al. 2021; ASASSN-14li, Alexander et al. 2016; ASASSN-15oi, Horesh et al. 2021; CNSS J0019+00, Anderson et al. 2020; XMMSL1 J0740-85, Alexander et al. 2017; IGR J12580+0134, Irwin et al. 2018; AT2020vwl, Goodwin et al. 2023; AT2018hyz, Cendes et al. 2022). The horizontal axis indicates the time since first detection of the source at optical or X-ray wavelengths.
\begin{table}
\begin{tabular}{l l l l l} \hline MJD & Date & Array & Frequency & Flux density \\ & & config. & (GHz) & (\(\mu\)Jy) \\ \hline
59661 & 2022-03-23 & 6A & 5.5 & \(593\pm 19\) \\
59661 & 2022-03-23 & 6A & 9 & \(536\pm 17\) \\
59819 & 2022-08-28 & 6D & 5.5 & 211\(\pm\)11 \\
59819 & 2022-08-28 & 6D & 9 & 152\(\pm\)10 \\
59912 & 2022-11-29 & 6C & 5.5 & 171\(\pm\)8 \\
59912 & 2022-11-29 & 6C & 9 & 127\(\pm\)6 \\ \hline \end{tabular}
\end{table}
Table 3: ATCA radio observations of AT 2022dsb.
performed, using the C-statistic for fitting (Cash, 1976), and modelling the background using the principal component analysis (PCA) technique described in Simmonds et al. (2018). The Galactic absorption is modelled with a total (HI and H\({}_{2}\)) Galactic hydrogen column density of \(1.73\times 10^{21}\) cm\({}^{-2}\)(Willingale et al., 2013), cosmic abundances from Wilms et al. (2000) and cross sections from Verner et al. (1996).
Each of the eROSITA and _XMM_ PN spectra was fitted with the following source models, commonly used to fit the X-ray spectra of TDEs: i) zbbody: redshifted blackbody, ii) zpowerlaw: redshifted power-law, iii) zbremsstrahlung: redshifted thermal bremsstrahlung. To assess the goodness of fit and to compare between the different fitted models, we use the Akaike Information Criterion (\(AIC\)), defined as \(AIC=2k-2\ln\hat{\mathcal{L}}\), where \(k\) is the number of free parameters in the fitted model and \(\hat{\mathcal{L}}\) the estimated maximum likelihood from the spectrum fitting; the lower the value of the AIC, the better the fit to the spectrum. An overview of the spectral fit parameters is listed in Table 4.
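As a concrete illustration of this model-selection step, the short sketch below computes and ranks the AIC for each candidate model. Only the AIC definition is taken from the text; the numbers of free parameters and the maximum log-likelihoods are invented placeholders (the real values come from the BXA/UltraNest fits).

```python
def aic(k, max_lnL):
    """Akaike Information Criterion: AIC = 2k - 2 ln(L_hat).
    Lower values indicate a preferred model."""
    return 2 * k - 2 * max_lnL

# Placeholder maximum log-likelihoods for one spectrum (not the measured values).
fits = {
    "zbbody":          {"k": 2, "max_lnL": -182.5},
    "zpowerlaw":       {"k": 2, "max_lnL": -181.9},
    "zbremsstrahlung": {"k": 2, "max_lnL": -181.4},
}

scores = {name: aic(f["k"], f["max_lnL"]) for name, f in fits.items()}
best = min(scores, key=scores.get)
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} AIC = {score:7.1f}   dAIC = {score - scores[best]:5.1f}")
```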
The eRASS5 spectrum, obtained \(\sim\)14 days before the optical peak, is ultra-soft, and can be well fitted by the thermal bremsstrahlung model with temperature \(kT_{\rm brems}=71^{+8}_{-5}\) eV (Fig. 8), or a blackbody with temperature \(kT_{\rm bb}=47^{+5}_{-5}\) eV; such temperatures are consistent with the X-ray spectra of other X-ray bright thermal TDEs (e.g. Saxton et al., 2020). This corresponds to a 0.2-2 keV observed flux, \(F_{\rm X,\,obs}=(3.4^{+0.6}_{-0.5})\times 10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\), and a 0.2-2 keV unabsorbed flux, \(F_{\rm X,\,unabs}=(1.6^{+0.4}_{-0.4})\times 10^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\)(\(L_{\rm X}=\)\(2.5^{+0.6}_{-0.5}\times 10^{43}\) erg s\({}^{-1}\)).
The first _XMM_ PN spectrum, taken \(\sim\)19 days after the eRASS5 spectrum, is harder, and can be best-fit by a power-law with photon index \(2.7^{+0.3}_{-0.3}\). The eRASS5 to XMM spectral hardening is also accompanied by a factor of \(\sim\)39 decrease in \(F_{\rm X,\,obs}\) to \((8.9^{+2.3}_{-1.8})\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\). The early-time spectral energy distribution (SED) evolution between the eROSITA and XMM observations is plotted in Fig. 9. At \(\sim\)173 (\(\sim\)154) days after the eRASS5 (first XMM observation), the second _XMM_ observation shows a power-law slope consistent with the first _XMM_ observation with photon index \(3.5^{+0.5}_{-0.5}\), as well as a similar observed 0.2-2 keV flux of \((6.6^{+1.3}_{-1.1})\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\). The 0.2-2 keV fluxes in these _XMM_ observations are below the 3\(\sigma\) upper limit inferred from the stacked eROSITA observations from its first four all-sky surveys (section 2).
The photon index of \(\Gamma\sim 2.7\) in the first _XMM_ observation is much softer than the photon indices of X-ray bright AGN; for example, Nandra & Pounds (1994) characterised a sample of continuum slopes of Seyfert galaxies by a Gaussian with mean photon index \(1.95\pm 0.15\). In addition, it is also softer than the hard X-ray emission from an advection-dominated accretion flow (ADAF; e.g. Narayan et al., 1998) of slope \(\lesssim 2\) (Gu & Cao, 2009), yet harder than the spectra of thermal TDEs (Saxton et al., 2021). With a 0.2-2 keV luminosity of \(\sim 4\times 10^{40}\) erg s\({}^{-1}\) and with no major change in flux in the 0.2-2 keV band between the two _XMM_ epochs, we consider the _XMM_ source spectra to be likely dominated by diffuse X-ray emission from within the circumnuclear environment of the host galaxy (i.e. unrelated to the TDE-triggered accretion episode onto the SMBH). This is in part motivated by the host galaxy of AT 2022dsb likely hosting a LLAGN prior to its 2022 outburst (recall the mid-infrared colour of \(W1-W2\sim 0\) and previous type II AGN classification for its host galaxy; section 2), with past X-ray observations of nearby LLAGN also suggesting the presence of hot, diffuse plasma within a few hundred parsecs of the nucleus (Flohic et al., 2006), and with \(\log[L_{\rm 0.5-2keV}]\) and \(\log[L_{\rm 2-10keV}]\) luminosities in the ranges of \(40.2\pm 1.3\) and \(39.9\pm 1.3\) (Gonzalez-Martin et al., 2009).
A similar scenario may also have been present in the TDE candidate ASASSN-15oi (Gezari et al., 2017), where two _XMM_ spectra, taken \(\sim\)80 days and \(\sim\)230 days after optical discovery, were best fitted by a two-component model consisting of a blackbody with \(kT=47\pm 3\) eV and a power-law with \(\Gamma=2.5\pm 0.8\). As ASASSN-15oi
Figure 8: BXA fitted models to the convolved X-ray spectra from eROSITA (top, 14 days before optical peak) and _XMM_ (bottom two plots, \(\sim\)5 days and \(\sim\)180 days after optical peak). Black and grey markers represent source and scaled background spectra, respectively, with the background component not originating from the TDE host galaxy. The solid red line denotes the median model, whilst the shaded red band encloses 68% of the posterior. The preferred model (Table 4) for the eRASS5 spectrum is thermal bremsstrahlung with \(kT_{\rm brems}=71^{+8}_{-6}\) eV, whilst it is a power-law for the _XMM_ spectra, with \(\Gamma\) of \(2.7^{+0.3}_{-0.3}\) and \(3.5^{+0.5}_{-0.5}\), respectively. The unconvolved models and spectra are presented in Fig. 9.
brightened in the X-rays over the \(\sim 160\) days between these spectra, only the normalisation of the blackbody component increased, without a significant change in \(kT\) or \(\Gamma\). This would require a fair amount of model fine-tuning if the power-law component originated from Compton up-scattered TDE disc photons, and the disc luminosity varied over time. Instead, this may be more easily explained if the disc emission evolves independently of the lower luminosity diffuse host emission (as also suggested in Gezari et al., 2017), with the latter being emitted at much larger physical length scales than the X-ray emission from the TDE disc.
If the _XMM_ source spectra are dominated by the diffuse emission within the host galaxy, then the soft X-ray emission dominating the eRASS5 spectrum may have been obscured by optically thick material (see section 6 for further discussion on its possible origin). Assuming that the TDE disc has spectral properties in its first _XMM_ spectrum similar to eRASS5 (\(kT_{\rm BB}\sim 50\) eV) and a similar blackbody normalisation, then an increase in the neutral hydrogen column density along our line-of-sight to \(>4.9\times 10^{21}\) cm\({}^{-2}\) would be capable of causing a 0.2--2 keV flux drop by a factor of at least 39 (between these two spectra), or a He ii column density of \(1.6\times 10^{21}\) cm\({}^{-2}\) if the flux drop is due to photoionisation of He ii (Fig. 10), when modelling He ii absorption using the xspec model ISMabs (Gatuzz et al., 2015). The conversion of these column densities into an estimate of the mass of a debris envelope obscuring the disc is complicated by the unknown ionisation fraction of helium in the debris; an alternate approach to constrain the reprocessor's properties at early times is presented in section 6.
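The type of calculation behind Fig. 10 can be illustrated as follows: for an assumed \(kT\approx 50\) eV blackbody, the band-integrated flux suppression is the ratio of the unabsorbed to absorbed 0.2–2 keV flux for a given absorbing column. The sketch below uses a crude hydrogenic photoionisation cross-section scaling (\(\sigma\propto E^{-3}\)) rather than the ISMabs model used in the text, so it illustrates the method only and is not expected to reproduce the quoted column densities exactly.

```python
import numpy as np

def planck_E(E_keV, kT_keV):
    """Blackbody energy flux density (arbitrary normalisation), B_E ~ E^3 / (exp(E/kT) - 1)."""
    return E_keV**3 / np.expm1(E_keV / kT_keV)

def sigma_H(E_keV):
    """Crude hydrogenic photoionisation cross-section [cm^2]: ~6.3e-18 (E / 13.6 eV)^-3."""
    return 6.3e-18 * (E_keV / 0.0136) ** -3.0

def flux_drop(N_H, kT_keV=0.05, band=(0.2, 2.0), n=2000):
    """Return F_unabs / F_obs in the band for a neutral hydrogen column N_H [cm^-2]."""
    E = np.linspace(band[0], band[1], n)
    bb = planck_E(E, kT_keV)
    absorbed = bb * np.exp(-N_H * sigma_H(E))
    return np.trapz(bb, E) / np.trapz(absorbed, E)

for N_H in (1e21, 3e21, 5e21, 1e22):
    print(f"N_H = {N_H:.0e} cm^-2  ->  F_unabs/F_obs = {flux_drop(N_H):.1f}")
```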
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline MJD & ObsID & \multicolumn{2}{c}{zbbody} & \multicolumn{2}{c}{zpowerlaw} & \multicolumn{2}{c}{zbremsstrahlung} \\ \hline & & AIC & \(kT\) [eV] & AIC & \(\Gamma\) & AIC & \(kT_{\rm brems}\) [eV] \\ \hline
59627.939 & eRASS5 & 367.6 & \(47^{+5}_{-5}\) & 367.9 & \(7.7^{+0.2}_{-0.3}\) & 366.8 & \(71^{+6}_{-6}\) \\
59646.226 & XMM1 & 9908.3 & \(219^{+28}_{-24}\) & 9893.8 & \(2.7^{+0.3}_{-0.3}\) & 9899.7 & \(768^{+346}_{-173}\) \\
59820.520 & XMM2 & 7925.8 & \(150^{+26}_{-23}\) & 7918.0 & \(3.5^{+0.3}_{-0.5}\) & 7921.1 & \(403^{+155}_{-96}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: X-ray spectral fit results. The Akaike Information Criterion (\(AIC\)) column estimates the goodness of fit, with a lower \(AIC\) representing a better fit.
Figure 9: Early SED evolution of AT 2022dsb. The red and orange markers show the SED at the time of the eRASS5 detection (MJD=59627, 14 days before optical peak) and first _XMM_ observation \(\sim\)19 days later (MJD=59646, 5 days after optical peak). The dotted and solid lines in the X-ray band-pass (grey region) denote the observed and unabsorbed best fitting spectral models. The two blackbody curves (blue) passing through the ATLAS \(o\)-band data point on MJD=59627 are at temperatures of \(\log[T/\rm K]=4.1\) and 4.56, the minimum and maximum temperatures inferred from fitting a single temperature blackbody to the multi-band photometry of a ZTF-selected TDE population (Hammerstein et al., 2023).
### Photometric analysis
Following Malyali et al. (2023b), the ATLAS \(o\)-band lightcurve was fitted with a half-Gaussian rise, exponential decay model, as described in van Velzen et al. (2019), and plotted in Fig. 11. Only the \(o\)-band photometry was fitted here since it provides the best sampling of the rise and decay of the optical emission, and only photometry in the range \(59600<\)MJD\(<59750\) was used. The inferred rise and decay timescales are \(\sigma=7.9^{+0.4}_{-0.4}\) days and \(\tau=21.0^{+0.6}_{-0.6}\) days, respectively, with the lightcurve peaking at MJD=59640.9\({}^{+0.5}_{-0.4}\); this value is used as the reference time for the optical peak of AT 2022dsb in this work. The peak inferred \(F_{\nu}\) is \(320^{+7}_{-6}\)\(\mu\)Jy, corresponding to \(\nu L_{\nu}\sim 2\times 10^{42}\) erg s\({}^{-1}\). To help understand when the eRASS5 detection occurs during the evolution of the TDE, it is also valuable to define a start time for the optical rise based on the fitted model here. If one considers this to be when the optical flux is \(\sim\)1% (\(\sim 0.03\)%) of the optical peak, then this would occur at MJD\({}_{\rm start}\sim\)59617 (\(\sim 59609\)).
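For reference, the lightcurve model used here (half-Gaussian rise before peak, exponential decay after it; van Velzen et al. 2019) can be written down and fitted in a few lines. The sketch below generates synthetic \(o\)-band photometry from the best-fitting parameters quoted above purely for illustration; only the functional form is taken from the text, while the fitting choices (scipy curve_fit, initial guesses, noise level) are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def rise_decay(t, F_peak, t_peak, sigma, tau):
    """Half-Gaussian rise for t < t_peak, exponential decay for t >= t_peak."""
    t = np.asarray(t, dtype=float)
    rise  = F_peak * np.exp(-(t - t_peak) ** 2 / (2.0 * sigma ** 2))
    decay = F_peak * np.exp(-(t - t_peak) / tau)
    return np.where(t < t_peak, rise, decay)

# Synthetic o-band photometry (MJD, flux in microJy, flux error), for illustration only.
mjd  = np.arange(59605.0, 59750.0, 3.0)
flux = rise_decay(mjd, 320.0, 59640.9, 7.9, 21.0) + np.random.normal(0.0, 15.0, mjd.size)
err  = np.full_like(mjd, 15.0)

p0 = (300.0, 59640.0, 8.0, 20.0)            # rough initial guesses
popt, pcov = curve_fit(rise_decay, mjd, flux, p0=p0, sigma=err, absolute_sigma=True)
print("F_peak, t_peak, sigma, tau =", np.round(popt, 1))
```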
To obtain a more physically-motivated estimate of the start time of the event, we also fitted the multi-band photometry (ATLAS \(o\) and \(c\), ZTF \(g\) and \(r\), _Swift_ \(UVW1\), \(UVM2\) and \(UVW2\) bands; Fig. 12) with the TDE module (Mockler et al., 2019) of the Modular Open Source Fitter for Transients (MOSFIT; Guillochon et al., 2018), using the nested sampler dynesty for posterior sampling (Speagle, 2020). The free parameters of this model are the black hole mass (\(M_{\rm BH}\)), the mass of the disrupted star (\(M_{\star}\)), the scaled impact parameter (\(b\)), the efficiency of converting accretion luminosity into optical luminosity (\(\epsilon\)), a normalising factor and exponent for the photosphere radius (\(R_{\rm ph,0}\), \(l_{\rm ph}\)), a viscous delay timescale (\(T_{\rm viscous}\)) and the time of first mass fallback to pericentre (MJD\({}_{\rm fb}\)). Although most of the inferred parameter estimates are dominated by systematics (Table 5), we note that the MOSFIT modelling does suggest a lower mass SMBH for the disruption, with \(\log(M_{\rm BH}/M_{\odot})=6.4^{+0.2}_{-0.2}\pm 0.2\), as compared to the black hole mass derived from the \(M_{\rm BH}-M_{\star}\) relation (Reines and Volonteri, 2015) using \(M_{\star}\) from the host galaxy SED analysis (section 2). The inferred MJD\({}_{\rm fb}=59610.2^{+6.6}_{-6.3}\pm 15\) is consistent with the start time for the optical rise inferred above.
### Spectroscopic analysis
We used a modified version of the Python quasar fitting code (PyQSOFit; Guo et al., 2018) to fit the optical spectra of AT 2022dsb after de-reddening the Galactic foreground contribution. First, we fitted the emission line free regions of each of the optical spectra with a power-law to estimate the continuum contribution to the spectrum, before dividing the observed spectrum by this continuum component to obtain a normalised spectrum. We note that we do not attempt to model or subtract the host galaxy component here, such that the modelled continuum involves both TDE and host galaxy emission. All spectral fitting made use of the python package lmfit(Newville
\begin{table}
\begin{tabular}{c c c} \hline Parameter & Value & Systematic Error \\ \hline \(\log(M_{\rm BH}/M_{\odot})\) & \(6.4^{+0.2}_{-0.2}\) & \(\pm 0.2\) \\ \(\log(M_{\star}/M_{\odot})\) & \(0.2^{+0.7}_{-0.3}\) & \(\pm 0.66\) \\ \(b\) & \(1.0^{+0.3}_{-0.2}\) & \(\pm 0.35\) \\ \(\log(\epsilon)\) & \(-2.0^{+0.5}_{-0.5}\) & \(\pm 0.68\) \\ \(\log(R_{\rm ph,0})\) & \(-0.1^{+0.3}_{-0.3}\) & \(\pm 0.4\) \\ \(l_{\rm ph}\) & \(1.0^{+0.3}_{-0.2}\) & \(\pm 0.2\) \\ \(\log(T_{\rm viscous}/{\rm days})\) & \(-0.9^{+0.71}_{-1.2}\) & \(\pm 0.1\) \\ MJD\({}_{\rm fb}\) & \(59610.2^{+6.6}_{-6.3}\) & \(\pm 15\) \\ \hline \end{tabular}
\end{table}
Table 5: Posterior medians and 1\(\sigma\) credible regions inferred from the MOSFIT TDE lightcurve fitting. The estimated systematic errors on each estimate are taken from Mockler et al. (2019).
Figure 11: Half-Gaussian rise, exponential decay model (red) fitted to the ATLAS \(o\)-band photometry (orange markers). The shaded red bands denote the credible region enclosed by the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles of the posterior. The datapoint with \(F\sim 300\)\(\mu\)Jy at MJD=59600 is consistent with zero flux at the 1\(\sigma\) level (i.e. it is not precursor emission to the main flare).
Figure 10: The amplitude of the observed flux drop in the 0.2–2 keV band (\(F_{\rm X,unabs}/F_{\rm X,obs}\)) due to absorption by neutral hydrogen (top) and He ii (bottom), with each curve representing a different blackbody temperature (\(kT\sim 0.05\) keV for AT 2022dsb). The sensitivity of \(F_{\rm X,unabs}/F_{\rm X,obs}\) to absorption depends on the balance between the ionisation potential of the absorber and \(kT\). The horizontal red dashed line corresponds to the flux drop by a factor of 39 at early times seen in AT 2022dsb, corresponding to an \(N_{\rm H}\) (\(N_{\rm HeII}\)) of \(4.9\times 10^{21}\) cm\({}^{-2}\) (\(1.6\times 10^{21}\) cm\({}^{-2}\)).
et al., 2014) and the Markov Chain Monte Carlo (MCMC) sampler emcee (Foreman-Mackey et al., 2013).
After normalising by the continuum, we fitted each of the narrow emission lines in the H\(\alpha\) complex (H\(\alpha\), [N ii] 6548\(\rm\AA\) and 6583\(\rm\AA\)) with a single Gaussian, and forced each of these to be of the same width. The broad H\(\alpha\) component was fitted with a single Gaussian (Fig. 13). Due to the possible presence and blending of emission from H\(\beta\), He ii 4686\(\rm\AA\), N iii 4640\(\rm\AA\), H\(\gamma\) and Fe ii (multiplets 37, 38) within the broad He ii complex, it is not straightforward to constrain the evolution of each of these possible components; we therefore examine the more isolated broad H\(\alpha\) emission line here.
In the first LCO FLOYDS spectrum obtained \(\sim\)5 days before optical peak, the full-width half max (FWHM) of the broad H\(\alpha\) line is \(10400\pm 400\) km s\({}^{-1}\), and its centroid is blueshifted to \(\lambda_{\rm rest}=\)6530\(\pm\)7 \(\rm\AA\), corresponding to a velocity of \(-1600\pm 300\) km s\({}^{-1}\). In the second optical spectrum obtained \(\sim\)1 day before optical peak, the FWHM is \(10500\pm 300\) km s\({}^{-1}\), consistent with the earlier spectrum, but the velocity offset is \(200\pm 400\) km s\({}^{-1}\). At later times, the H\(\alpha\) emission line is not clearly seen above the host galaxy continuum emission.
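A minimal version of the H\(\alpha\)-complex fit described above, together with the conversion of the fitted broad-line centroid and width into a velocity offset and FWHM, might look like the sketch below. It uses lmfit GaussianModel components (one broad plus three narrow Gaussians with a tied narrow width, as in the text), but the wavelength grid, amplitudes and noise are placeholders rather than the actual continuum-normalised spectra.

```python
import numpy as np
from lmfit.models import GaussianModel

C_KMS = 2.998e5
HA_REST = 6562.8  # Angstrom

# One broad Gaussian for H-alpha plus three narrow lines sharing a common width.
model = (GaussianModel(prefix='br_') + GaussianModel(prefix='naha_')
         + GaussianModel(prefix='nii48_') + GaussianModel(prefix='nii83_'))

params = model.make_params(br_center=HA_REST, br_sigma=80, br_amplitude=50,
                           naha_center=HA_REST, naha_sigma=3, naha_amplitude=5,
                           nii48_center=6548.0, nii48_amplitude=2,
                           nii83_center=6583.5, nii83_amplitude=5)
params['nii48_sigma'].expr = 'naha_sigma'   # tie the narrow-line widths together
params['nii83_sigma'].expr = 'naha_sigma'

# wave, norm_flux = ...  (continuum-normalised spectrum; placeholders below)
wave = np.linspace(6350.0, 6750.0, 800)
norm_flux = 1.0 + model.eval(params, x=wave) + np.random.normal(0, 0.02, wave.size)

result = model.fit(norm_flux - 1.0, params, x=wave)
cen = result.params['br_center'].value
fwhm_kms = 2.355 * result.params['br_sigma'].value / cen * C_KMS
v_off = (cen - HA_REST) / HA_REST * C_KMS
print(f"broad H-alpha FWHM ~ {fwhm_kms:.0f} km/s, centroid offset ~ {v_off:.0f} km/s")
```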
## 5 Early-Time X-ray emission in TDEs
In the following section, we briefly review the literature on early-time X-ray observations of TDEs, and compare the X-ray transient seen in AT 2022dsb (Fig. 3) with the wider TDE population.
The majority of the X-ray selected TDE population known prior to the launch of eROSITA was first discovered at a time when wide-field, high-cadence optical surveys were still relatively limited compared with the current generation, with respect to their depth, cadence, sky coverage, difference-imaging capabilities (important for nuclear transients), and ease-of-access to their optical lightcurves. Largely as a result of this, only a handful of the X-ray selected TDE candidates showed transient optical/UV emission (Saxton et al., 2020), which was only ever identified after the initial X-ray discovery, and typically only through _Swift_ UVOT follow-up. Furthermore, of the systems with detected transient optical/UV emission, only the decaying phase of the TDE lightcurve was sampled in the UV (see discussion in Section 9 of the recent review by Saxton et al. 2020), thus the early-time X-ray evolution (during the initial optical/UV brightening) of these X-ray selected systems remains unknown. Whilst the launch of eROSITA has seen a vast increase in the number of X-ray bright TDE candidates (e.g. Malyali et al., 2021, 2023a,b,c; Liu et al., 2023; Homan et al., 2023), the majority of the TDE candidate population shows no transient optical emission. In a sample of eROSITA-selected TDE candidates (Sazonov et al., 2021), only four systems display both transient X-ray and optical emission, but the detections of flaring X
Figure 12: MOSFIT model fits to the multi-band photometry of AT 2022dsb, with colour scheme following Fig. 3. The black and red lines mark the estimated median time of first mass fallback (MJD\({}_{\rm fb}=59610\pm 6\)) and the eRASS5 coverage of AT 2022dsb, occurring only \(\sim\)17 days later. A zoom-in on the ATLAS \(o\)-band difference photometry sampling the optical rise, and which is used for constraining MJD\({}_{\rm fb}\), is presented in Fig. 11.
Figure 13: Example fit to the H\(\alpha\) complex, with the continuum normalised spectrum in black and the best fitting model in red. The grey band denotes a region of telluric absorption which was masked during fitting. The centroid of the broad H\(\alpha\) is shifted by \(-1600\pm 300\) km s\({}^{-1}\) with respect to the rest frame of the host galaxy.
ray emission associated with the TDE always occur after the optical peak.
Although the number of optically-selected TDE candidates has rapidly increased over the last decade, the majority of these have X-ray observations commencing at earliest near to, or after, peak optical brightness. This may in part stem from the very high discovery rate of transients in the latest generation of optical surveys, meaning that astronomers generally wait until close to peak optical brightness for an optical transient to become a strong TDE candidate, and only then trigger X-ray follow-up observations. Despite this, there are still six optically-bright TDEs with X-ray observations starting before optical maximum (Table 6 and Fig. 14), where this list was obtained via visual inspection of the joint X-ray and optical lightcurves of the TDEs presented in both the ZTF TDE sample (Hammerstein et al., 2023), and those in the recent TDE review paper by Gezari (2021). All of these TDEs are of H+He type13, with the exception of AT 2019ahk, which only shows transient broad Balmer emission lines. For each of these systems, the 0.3-2 keV XRT lightcurves were generated and downloaded from the UKSSDC as in section 3.1.3. These were then converted to 0.2-2 keV lightcurves using webPIMMS, assuming a redshifted blackbody spectrum with \(kT=50\) eV (similar to other X-ray bright TDEs; Saxton et al. 2020), absorbed by a Galactic \(N_{\rm H}\) along the line-of-sight to the TDE taken from Willingale et al. (2013); lower \(kT\) values for each TDE here would lead to higher estimated 0.2-2 keV fluxes.
Footnote 13: This is likely due current TDE follow-up strategies, since if a He complex is detected in a follow-up optical spectrum obtained during the optical rise, then there’s a stronger indication that the event is a TDE at these early times, and a higher likelihood of Swift XRT and UVOT observations being triggered with high urgency to monitor the evolution of the system.
Comparing the X-ray lightcurve of AT 2022dsb with the other TDEs with pre-peak X-ray observations (Fig. 14), it is clear that the early-time transient X-ray emission in AT 2022dsb has never been observed before across all known TDEs with well sampled optical peaks. Of the TDEs in Table 6, only AT 2019ahk has a significant detection of soft X-ray emission before the observed optical maximum, with the system being detected for the first time \(\sim\)3 days before optical peak. AT 2019azh then remains at approximately a constant \(L_{\rm X}\) over the following \(\sim\)40 days after the first significant detection (Fig. 14), and thus shows a vastly different X-ray evolution to AT 2022dsb.
The eRASS5 observation is not the earliest X-ray observation of a TDE in terms of the phase (number of days observed before optical maximum), with AT 2018dyb, AT 2019ahk, AT 2019azh and AT 2020zso all having been observed at earlier phases than AT 2022dsb. Furthermore, each of the earliest time observations for each system should also have been able to detect AT 2022dsb-like X-ray emission, given the 3\(\sigma\) upper limits on \(L_{\rm X}\) were lower than the \(L_{\rm X}\) of the eRASS5 observation of AT 2022dsb. However, the optical-UV lightcurves of these TDEs evolve differently to AT 2022dsb, with respect to their peak luminosities and rise timescales. This is not unexpected, since these systems may span a range of different black hole masses, stellar masses and impact parameters. For example, the rise timescale of AT 2022dsb is inferred to be \(\sim\)8 days, whereas it is \(\sim\)31 days for iPTF15af (van Velzen et al., 2020) (Table 6). This complicates a clean comparison between these systems, and the task of understanding how early on in a TDE's evolution an AT 2022dsb-like X-ray transient may be observable. If one considers the normalised phase (phase divided by the estimated rise timescale; Table 6), then the eRASS5 observation of AT 2022dsb represents the second earliest X-ray observation of a TDE showing a transient He ii emission complex, with only AT 2020zso being observed at an earlier stage of the lightcurve.
Whilst the detection of the early-time X-ray transient in AT 2022dsb certainly benefitted from eROSITA serendipitously scanning over it a day before the first optical detection, this cannot be the sole factor in this discovery given the XRT coverage of other TDEs described above (i.e. there were observations that were early and deep enough to detect a source with spectral properties similar to AT 2022dsb, but they did not because of physical differences between these systems). The fact that the X-ray emission could have been observed in other TDEs but was not, particularly for AT 2020zso (the TDE with observations performed at the earliest normed phase), suggests that the assumptions made when converting the observed 0.3-2 keV XRT count rate into an unabsorbed 0.2-2 keV flux (corrected only for Galactic absorption), and then a 0.2-2 keV intrinsic luminosity, may have been oversimplified and require further consideration. For example, an additional absorber along the line-of-sight to the TDE disc (such as from stellar debris ejected during the circularisation process, which may be optically thick or thin depending on the observer's viewing angle to the system, or from neutral hydrogen in the host galaxy unrelated to the TDE), or a lower effective temperature of the disc emission (e.g. due to a retrograde black hole spin), would lead to larger estimated unabsorbed flux upper limits from the XRT observations, and might explain the previous non-detections of X-ray emission in these systems. The early-time X-ray transient seen in AT 2022dsb is likely not a universally observable feature in TDEs.
## 6 Discussion
The unique observational feature of AT 2022dsb with respect to the wider population of optically-selected TDEs is its early-time transient X-ray emission detected by eROSITA (see Table 8 for a summary of the key events in the evolution of AT 2022dsb). From the physical modelling of the multi-band photometry (section 4.2), the time of first mass fallback to pericentre after the disruption is estimated to be MJD\(\sim 59610\) (\(\sim\)31 days before optical peak), meaning that the eROSITA discovery of ultra-soft X-ray emission on MJD 59627 occurs only 17 days after this. As the optical emission brightens in the system, the observed X-ray emission in the 0.2-2 keV band decays over a 19 day period by a factor of \(\sim\)39 (Fig. 3). This joint X-ray-to-optical evolution has not been observed before in a TDE candidate. Although the observed X-ray emission rapidly dims during the optical rise, AT 2022dsb shows a broad He ii complex which persists for at least \(\sim\)38 days after optical peak, and was first detected 5 days before peak (Fig. 6). Several outflow signatures are also present during the early stages of this TDE, in the form of blueshifted H\(\alpha\) at \(-1600\pm 300\) km s\({}^{-1}\), first observed at \(P=-5\) days (section 4.3), radio transient emission at \(P=20\) days (Fig. 7), and blueshifted Ly\(\alpha\) absorption lines at \(\sim\)3000 km s\({}^{-1}\), observed at \(P=54\) days (Engelthaler & Maksym, 2023, Engelthaler et al., in prep.). Importantly, other past X-ray observations of He-TDEs before optical peak have not detected X-ray emission at a similar \(L_{\rm X}\) (Table 6), despite the observations being carried out at a similar phase to the eRASS5 observation of AT 2022dsb, and also having upper limits on \(L_{\rm X}\) lower than for the eRASS5 detection of AT 2022dsb; each of the He TDEs in this sample has also been reported to show outflow signatures in observations performed around optical peak (Table 7).
The early X-ray emission detected by eROSITA likely comes from an accretion disc that has recently been assembled through circularisation of the earliest-arriving gas in the fallback stream (Bonnerot et al., 2021). We rule out the early X-ray transient being produced
by shock breakout emission from the surface of the star after being maximally compressed at pericentre (Carter and Luminet, 1983; Guillochon et al., 2009; Stone et al., 2013; Yalinewich et al., 2019), as the predicted timescales of \(\mathcal{O}\)(minutes) for these flares are far shorter than what is observed in the eRASS5 observation (\(>\)1 day; Fig. 5). We also disfavour the X-ray transient being caused by accretion disc cooling (e.g. Cannizzaro et al., 2021), since the disc temperature should increase as the optical emission brightens if the optical light curve traces the accretion rate at early times, or Lense-Thirring driven precession of the newly formed disc (Stone and Loeb, 2012; Franchini et al., 2016), as no rebrightening episodes are detected over the X-ray follow-up campaign (Fig. 3). Regardless of origin, the high-energy tail of this early hard ionising source also likely drives the ionisation of He ii and the formation of the He ii complex in the optical spectra.
Around 14 days before the optical peak, it is likely that we have an unobscured view onto the nascent disc, which is initially surrounded by an envelope of gas formed through shocks during the circularisation process, as found in simulations (Bonnerot et al., 2021). While this envelope is initially of low enough density for the disc emission to promptly emerge, the envelope mass increases over time due to feeding by the outflowing gas in the system. As a result, the X-ray emission may be more efficiently absorbed over time, leading to the observed drop-off in the X-ray emission, and a reprocessing-driven optical brightening. To quantitatively test this scenario, we follow Roth et al. (2016) by modelling the envelope as a sphere of inner and outer radii \(R_{\rm in}\) and \(R_{\rm out}\), containing a mass of gas \(M_{\rm env}\) distributed according to a density profile \(\rho\propto R^{-2}\), which is irradiated by an inner source of luminosity \(L\). The effective optical depth \(\tau_{\rm eff}\) relevant for X-ray absorption is then given by their equation 27, relying on He ii photoionisation being the dominant absorption process and using solar composition. Here, we further assume that the envelope mass increases with time as \(M_{\rm env}=\dot{M}_{\rm env}t\) due to feeding by early returning debris. The time at which the envelope is able to absorb the
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Name & MJD\({}_{\rm peak}\) & \(L_{\rm X}^{\rm 1st}\) [erg s\({}^{-1}\)] & \(P\) [d] & \(\sigma\) [d] & \(P/\sigma\) \\ \hline AT2018dyb & \(58340.7^{+1.4}_{-0.3}\) & \(<7\times 10^{42}\) & \(-23.1\) & \(31.6^{+2.3}_{-0.7}\) & \(-0.7\) \\ AT2019ahk & \(58548.3^{+0.8}_{-0.8}\) & \(<3\times 10^{42}\) & \(-33.5\) & \(20.0^{+0.5}_{-0.5}\) & \(-1.7\) \\ iPTF15af & \(57061.0^{+7.8}_{-1.8}\) & \(<2\times 10^{43}\) & \(-11.7\) & \(31.6^{+1.5}_{-2.1}\) & \(-0.4\) \\ AT2019azh & \(58558.6^{+3.6}_{-1.0}\) & \(<1\times 10^{42}\) & \(-13.8\) & \(20.0^{+2.4}_{-0.3}\) & \(-0.7\) \\ AT2019qiz & \(58761.4^{+0.6}_{-0.6}\) & \(<1\times 10^{42}\) & \(-8.3\) & \(6.3^{+0.1}_{-0.1}\) & \(-1.3\) \\ AT2020zso & \(59188.0^{+1.4}_{-0.4}\) & \(<4\times 10^{43}\) & \(-15.0\) & \(6.9^{+0.2}_{-0.2}\) & \(-2.2\) \\ AT2022dsb & \(59640.9^{+0.4}_{-0.4}\) & \(2.5^{+0.6}_{-0.5}\times 10^{43}\) & \(-13.5\) & \(7.9^{+0.2}_{-0.4}\) & \(-1.7\) \\ \hline \end{tabular}
\end{table}
Table 6: Properties of TDEs with X-ray observations pre-optical peak. MJD\({}_{\rm peak}\) is the inferred optical peak, \(L_{\rm X}^{\rm 1st}\) is the inferred 3\(\sigma\) upper limit on the 0.2–2 keV luminosity from the earliest X-ray observation (by _Swift_ XRT) and \(P\) denotes the phase of the first X-ray observation relative to MJD\({}_{\rm peak}\). \(\sigma\) is the inferred rise timescale from a half-Gaussian fitted to the optical lightcurve (section 4.2). MJD\({}_{\rm peak}\) and \(\sigma\) were taken from van Velzen et al. (2020) for iPTF15af, AT 2018dyb, AT 2019ahk, AT 2019azh and AT 2019qiz, and from Hammerstein et al. (2023) for AT 2020zso.
\begin{table}
\begin{tabular}{l l l} \hline \hline Name & Outflow properties & Reference \\ \hline AT2018dyb & H\(\alpha\) blueshifted by \(\sim\)700 km s\({}^{-1}\) in spectrum obtained \(\sim\)24 days before peak (although H\(\alpha\) redshifted after optical peak). & Leloudas et al. (2019) \\ AT2019ahk & Photosphere expanding at \(\sim\)2700 km s\({}^{-1}\) via modelling photometric rise, assuming constant temperature. & Holoien et al. (2019) \\ iPTF15af & No H\(\alpha\) detected in first optical spectrum near peak. Si iv blueshifted by \(\sim\)6000 km s\({}^{-1}\) in HST spectrum obtained \(\sim\)28 days post-peak. & Blagorodnova et al. (2019) \\ AT2019azh & Transient radio emission detected \(\sim\)10 days before optical peak. & Goodwin et al. (2022a) \\ AT2019qiz & Photosphere expanding at \(\sim\)2200 km s\({}^{-1}\) via modelling photometric rise, assuming constant temperature. Inferred outflow velocity \(\lesssim\)5000 km s\({}^{-1}\) from asymmetric H\(\alpha\) line in spectrum \(\sim\)9 days before peak. & Nicholl et al. (2020) \\ AT2020zso & Photosphere expanding at \(\sim\)2900 km s\({}^{-1}\) via modelling photometric rise, assuming constant temperature. & Wevers et al. (2022) \\ \hline \end{tabular}
\end{table}
Table 7: Reported early-time outflow signatures for the TDEs with pre-peak X-ray observations listed in Table 6.
\begin{table}
\begin{tabular}{l l} \hline \hline MJD & Event \\ \hline \(\sim\)59610 & Time of first mass fallback to pericentre (section 4.2). \\
59627 & eRASS5 detection. \\
59628 & First 3\(\sigma\) detection of optical emission in the ATLAS \(o\)-band. \\
59636 & Broad blueshifted H\(\alpha\) and an emission complex detected around He ii 4686Å, in the first follow-up optical spectrum of AT 2022dsb. \\
59641 & Peak in the observed optical flux (section 4.2). \\
59646 & First XMM observation, 0.2–2 keV observed flux drop by a factor of \(\sim\)39 relative to eRASS5. \\
59661 & Detection of radio transient emission with ATCA in first radio follow-up observation. \\
59693 & First detection of outflow at 3000 km s\({}^{-1}\) from Ly\(\alpha\) absorption (FWHM\(\sim\) 14000 km s\({}^{-1}\)) in first HST spectrum (Engelthaler and Maksym, 2023). \\ \hline \end{tabular}
\end{table}
Table 8: Key events in the early evolution of AT 2022dsb.
inner X-ray radiation is obtained by solving \(\tau_{\rm eff}(t)=1\), which gives
\[t_{\rm abs}=24\,{\rm d}\,\left(\frac{L}{10^{43}\,{\rm erg\,s^{-1}}} \right)^{5/19}\left(\frac{\dot{M}_{\rm env}}{M_{\odot}\,{\rm yr^{-1}}}\right)^{-1}\] \[\left(\frac{R_{\rm in}}{10^{14}\,{\rm cm}}\right)^{20/19}\left( \frac{R_{\rm out}}{10^{15}\,{\rm cm}}\right)^{16/19}, \tag{1}\]
where the luminosity \(L\simeq 10^{43}\,{\rm erg\,s^{-1}}\) is determined from that of the detected X-rays. This time is approximately consistent with the 19 day delay between the eRASS5 detection and first _XMM_ observation, for an envelope feeding rate \(\dot{M}_{\rm env}\approx M_{\odot}\,{\rm yr^{-1}}\), comparable to the debris fallback rate of a typical TDE near peak (e.g. Rossi et al., 2021). Within this interpretation, the early-time X-ray detection presented in this paper provides a new way to constrain physical properties such as the feeding rate and size of the envelope, and the luminosity of the obscured accretion disc, which are crucial to improve our theoretical understanding of these systems.
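Evaluating equation (1) numerically for the fiducial values quoted above (\(L\approx 10^{43}\) erg s\({}^{-1}\), \(\dot{M}_{\rm env}\approx 1\,M_{\odot}\) yr\({}^{-1}\), \(R_{\rm in}=10^{14}\) cm, \(R_{\rm out}=10^{15}\) cm) is straightforward; the sketch below simply encodes the published scaling, so it only illustrates how \(t_{\rm abs}\) responds to the assumed envelope parameters.

```python
def t_abs_days(L=1e43, Mdot_env=1.0, R_in=1e14, R_out=1e15):
    """Envelope absorption timescale of equation (1), in days.
    L in erg/s, Mdot_env in Msun/yr, radii in cm."""
    return (24.0
            * (L / 1e43) ** (5.0 / 19.0)
            * (Mdot_env / 1.0) ** (-1.0)
            * (R_in / 1e14) ** (20.0 / 19.0)
            * (R_out / 1e15) ** (16.0 / 19.0))

print(t_abs_days())                      # fiducial values -> 24 days
print(t_abs_days(Mdot_env=1.3))          # a higher feeding rate shortens t_abs
print(t_abs_days(R_out=3e15))            # a larger envelope lengthens it
```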
The origin of the early-time outflows seen in some TDEs is still unclear (see discussion in Goodwin et al., 2022), but is thought to be due to either a stream-stream collision-induced outflow (CIO; e.g. Lu & Bonnerot, 2020), debris unbound by accretion luminosity (Metzger & Stone, 2016) or a radiatively-driven disc wind (Lodato & Rossi, 2011; Miller, 2015). The data set in this work does not allow us to distinguish between these mechanisms for AT 2022dsb, since each scenario would initially lead to an increased density of gas and optical depth along our line-of-sight as the fallback rate increases over time14.
Footnote 14: Future spectroscopic monitoring of TDEs in the UV may distinguish between these two origins.
Figure 14: Early-time X-ray lightcurves of TDEs with observations pre-optical peak, with markers following the same definition as for Fig. 3. The dotted red lines mark the phase of the eRASS5 observation of AT 2022dsb and inferred 0.2–2 keV luminosity, whilst the dark orange solid line marks the eRASS5 observation at the time of its normed phase, \(P/\sigma\) (Table 6). MJDs are defined with respect to the inferred optical peaks in Table 6. The eROSITA observation clearly represents the earliest X-ray detection of a TDE to date, and although there are XRT observations of these other TDEs at a comparable phase, only AT 2020zso has been observed at an earlier \(P/\sigma\).
Importantly, if the observed early-time evolution in AT 2022dsb is not unique to this system, then other TDEs may also be X-ray bright at early times, and become X-ray faint only when veiled by outflowing debris launched shortly after the onset of circularisation.15 Given that the existing models described above predict the launching of outflows which would subtend large solid angles on the sky, as seen by the disrupting black hole, a large fraction of optically-bright TDEs may therefore be X-ray faint when followed up in the weeks-to-months after optical peak (unless viewed at angles peering through an optically thin funnel in the reprocessor; Metzger & Stone, 2016; Dai et al., 2018; Lu & Bonnerot, 2020).
Footnote 15: Alternatively, if the X-ray emission is above a critical luminosity, then all of the He ii may be ionised to He iii, enabling the X-ray emission to escape the system (Metzger & Stone, 2016).
## 7 Summary
We reported on multi-wavelength observations of the TDE candidate AT 2022dsb, whose main properties can be summarised as follows:
1. eROSITA detected ultra-soft (\(kT_{\rm BB}\sim\) 45 eV) X-ray emission (0.2-2 keV \(L_{\rm X}\sim 3\times 10^{43}\) erg s\({}^{-1}\)) from a TDE \(\sim\)14 days before optical peak. The eROSITA detection precedes the first 3\(\sigma\) detection in the optical, and occurs only \(\sim\)17 days after the inferred time of first mass fallback to pericentre.
2. An _XMM_ follow-up observation 19 days after this eROSITA detection revealed a drop in the observed 0.2-2 keV flux by a factor of 39; during this period, the optical emission brightened to a maximum. A second _XMM_ observation \(\sim\)173 days after the eROSITA detection showed a 0.2-2 keV flux and spectral properties consistent with this first _XMM_ observation. No further X-ray emission was significantly detected above background by _Swift_ XRT monitoring observations in the following \(\sim\)200 days after the eROSITA detection. Thus without the early-time eROSITA observation, AT 2022dsb would likely have been classified as an 'optically bright, X-ray quiet' TDE.
3. Follow-up optical spectra showed a broad emission complex around the He ii 4686A, broad H\(\alpha\) emission and a strong blue continuum in the early-time spectra. The He ii complex is clearly present in the spectra taken \(\sim\)5 days before optical peak, and is still detected \(\sim\)38 days after optical peak (even after the large amplitude X-ray dimming). The strength of these features with respect to the host galaxy emission decreases over the spectroscopic follow-up campaign.
4. Multiple outflow signatures are detected in the system at early times (transient radio emission with ATCA, first detected \(\sim\)20 days post-optical peak; blueshifted broad H\(\alpha\) emission at \(\sim\)1600 km s\({}^{-1}\), detected \(\sim\)5 days before optical peak, and blueshifted broad Ly\(\alpha\) absorption at \(\sim-3000\) km s\({}^{-1}\), detected \(\sim\)54 days after optical peak).
5. The combination of these observed features suggests that outflows launched at early times may boost the density of the material enshrouding the nascent disc, leading to an increased amount of reprocessing of the high-energy disc emission. This causes an early drop-off in the observed X-ray flux whilst the optical brightens.
6. If the observed early-time properties are not unique to this system, then other TDEs may be X-ray bright at early times, and become X-ray faint when veiled by outflowing stellar debris. The X-ray vs optically bright nature of a TDE is also time dependent at early times.
The early-time X-ray emission from TDEs may be monitored in greater detail with the next-generation of time-domain missions, such as the _Einstein Probe_ (_EP_; Yuan et al., 2018), scheduled for launch in late 2023, or through early follow-up of candidates identified with the _Ultraviolet Transient Astronomy Satellite_ (_ULTRASAT_; Sagiv et al., 2014) and the Vera Rubin Observatory (Ivezic, 2019). High cadence X-ray monitoring observations of such early X-ray transients may provide a new way to constrain the mass feeding rate and nature of the reprocessing envelope in TDEs in future work.
## Acknowledgements
AM is grateful to the generosity of Curtin University for hosting his visit, where parts of this work were completed. AM thanks the _XMM_, _Swift_ and _NICER_ teams for approving the ToO requests. AM acknowledges support by DLR under the grant 50 QR 2110 (XMM_NuTra, PI: Z. Liu). This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452.
This work is based on data from eROSITA, the soft X-ray instrument aboard SRG, a joint Russian-German science mission supported by the Russian Space Agency (Roskosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum fur Luft- und Raumfahrt (DLR). The SRG spacecraft was built by the Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE).
The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nuernberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tubingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig Maximilians Universitat Munich also participated in the science preparation for eROSITA.
The eROSITA data shown here were processed using the eSASS software system developed by the German eROSITA consortium.
The authors acknowledge support for obtaining the LCO/FLOYDS spectroscopy by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311.
This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
The Australia Telescope Compact Array is part of the Australia Telescope National Facility16 which is funded by the Australian Government for operation as a National Facility managed by CSIRO. We acknowledge the Gomeroi people as the Traditional Owners of the Observatory site.
Footnote 16: [https://ror.org/@5qajvd42](https://ror.org/@5qajvd42)
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID 2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID 2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data were supported by
NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
Some of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT) under the programme 2021-2-LSP-001 (PI: DAHB). Polish participation in SALT is funded by grant No. MEiN nr2021/WK/01.
NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program "The Emergence of Cosmological Structures" Grant XDB0900000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant 12120101003, 11433005).
The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The Asteroid Terrestrial-impact Last Alert System (ATLAS) project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
This work was supported by the Australian government through the Australian Research Council's Discovery Projects funding scheme (DP200102471).
## Data Availability
A public release of the entire eRASS1 data taken within the German half of the eROSITA sky is anticipated for Q4 2023. eRASS2-5 data are expected to be released at a later stage. The Swift data is available to download through the UK Swift Science Data Centre website17. The _XMM_ data will become public after the proprietary period expires (2023-09-21). Publicly available ATLAS data can be accessed through the ATLAS forced photometry service18. Publicly available ZTF data can be accessed through the ZTF forced photometry service19.
Footnote 17: [https://www.swift.ac.uk/archive/index.php](https://www.swift.ac.uk/archive/index.php)
Footnote 18: [https://fallingstar-data.com/forcedphot/](https://fallingstar-data.com/forcedphot/)
Footnote 19: [https://irsa.ipac.caltech.edu/Missions/ztf.html](https://irsa.ipac.caltech.edu/Missions/ztf.html)
NTT/EFOSC2 spectroscopy has been obtained under the program IDs 108.220C.012 (PI. C. Inserra) and 109.23JL.001 (PI. I. Grotova). All optical spectra are publicly available. ATCA data are stored in the Australia Telescope Online Archive20, and will become publicly accessible 18 months from the date of observation.
Footnote 20: [https://atoa.atnf.csiro.au/](https://atoa.atnf.csiro.au/)
|
2301.13478 | A survey on the Hausdorff dimension of intersections | Let $A$ and $B$ be Borel subsets of the Euclidean $n$-space with $\dim A +
\dim B > n$. This is a survey on the question: what can we say about the
Hausdorff dimension of the intersections $A\cap (g(B)+z)$ for generic
orthogonal transformations $g$ and translations by $z$. | Pertti Mattila | 2023-01-31T08:58:19Z | http://arxiv.org/abs/2301.13478v3 | # A survey on the Hausdorff dimension of intersections
###### Abstract.
Let \(A\) and \(B\) be Borel subsets of the Euclidean \(n\)-space with \(\dim A+\dim B>n\). This is a survey on the question: what can we say about the Hausdorff dimension of the intersections \(A\cap(g(B)+z)\) for generic orthogonal transformations \(g\) and translations by \(z\).
Key words and phrases:Hausdorff dimension, intersection, projection, energy integral, Fourier transform 2000 Mathematics Subject Classification: Primary 28A75
## 1. Introduction
1 The books [M5] and [M6] contain most of the required background information and the proofs of some of the results discussed below.
Footnote 1: This survey is based on the talk I gave in Karoly Simon’s 60+1 birthday conference in Budapest in June 2022.
Let \(\mathcal{L}^{n}\) stand for the Lebesgue measure on the Euclidean \(n\)-space \(\mathbb{R}^{n}\) and let \(\dim\) stand for the Hausdorff dimension and \(\mathcal{H}^{s}\) for \(s\)-dimensional Hausdorff measure. For \(A\subset\mathbb{R}^{n}\), denote by \(\mathcal{M}(A)\) the set of Borel measures \(\mu\) with \(0<\mu(A)<\infty\) and with the compact support \(\operatorname{spt}\mu\subset A\).
We let \(O(n)\) denote the orthogonal group of \(\mathbb{R}^{n}\) and \(\theta_{n}\) its Haar probability measure. The main fact needed about the measure \(\theta_{n}\) is the inequality:
\[\theta_{n}(\{g\in O(n):|x-g(z)|<r\})\lesssim(r/|z|)^{n-1}\text{ for }x,z\in \mathbb{R}^{n},r>0. \tag{1.1}\]
This is quite easy, in fact trivial in the plane.
Let \(A\) and \(B\) be Borel subsets of \(\mathbb{R}^{n}\) with Hausdorff dimensions \(s=\dim A\) and \(t=\dim B\). What can we say about the Hausdorff dimensions of the intersections of \(A\) and typical rigid motions of \(B\)? More precisely, of \(\dim A\cap(g(B)+z)\) for almost all \(g\in O(n)\) and for \(z\in\mathbb{R}^{n}\) in a set of positive Lebesgue measure. Optimally one could hope that this dimension is given by the bigger of the numbers \(s+t-n\) and \(0\), which happens when smooth surfaces meet in a general position.
The problem on the upper bound is much easier than on the lower bound. Let
\[V_{z}=\{(x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{n}:x=y+z\},\ z\in\mathbb{R}^{n}, \tag{1.2}\]
be the \(z\) translate of the diagonal in \(\mathbb{R}^{n}\times\mathbb{R}^{n}\), and let \(\pi\) be the projection \(\pi(x,y)=x\). Then
\[A\cap(g(B)+z)=\pi((A\times g(B))\cap V_{z}), \tag{1.3}\]
and it follows from a Fubini-type inequality for Hausdorff dimension, [M5, Theorem 7.7], that for any \(g\in O(n)\),
\[\dim A\cap(g(B)+z)\leq\dim(A\times B)-n\text{ for almost all }z\in\mathbb{R}^{n}, \tag{1.4}\]
provided \(\dim(A\times B)\geq n\). We have always \(\dim(A\times B)\geq\dim A+\dim B\) and the equation \(\dim(A\times B)=\dim A+\dim B\) holds if, for example, \(0<\mathcal{H}^{s}(A)<\infty,0<\mathcal{H}^{t}(B)<\infty\), and one of the sets has positive lower density, say
\[\theta_{*}^{s}(A,x)=\liminf_{r\to 0}r^{-s}\mathcal{H}^{s}(A\cap B(x,r))>0 \text{ for }\mathcal{H}^{s}\text{ almost all }x\in A. \tag{1.5}\]
Even the weaker condition that the Hausdorff and packing dimensions of \(A\) agree suffices, see [M5], pp. 115-116. Then we have
\[\dim A\cap(g(B)+z)\leq\dim A+\dim B-n\text{ for almost all }z\in\mathbb{R}^{n}, \tag{1.6}\]
provided \(\dim A+\dim B\geq n\). Without some extra condition this inequality fails badly: for any \(0\leq s\leq n\) there exists a Borel set \(A\subset\mathbb{R}^{n}\) of dimension \(s\) such that \(\dim A\cap f(A)=s\) for all similarity maps \(f\) of \(\mathbb{R}^{n}\). This was proved by Falconer in [F3], see also Example 13.19 in [M5] and the further references given there.
We have the lower bound for the dimension of intersections if we use larger transformation groups, for example similarities:
**Theorem 1.1**.: _Let \(A\) and \(B\) be Borel subsets of \(\mathbb{R}^{n}\) with \(\dim A+\dim B>n\). Then for every \(\varepsilon>0\),_
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(rg(B)+z)\geq\dim A+\dim B-n- \varepsilon\})>0,\]
_for almost all \(g\in O(n)\) and almost all \(r>0\)._
If \(A\) and \(B\) have positive and finite Hausdorff measure, \(\varepsilon\) is not needed. This theorem was proved in the 1980s independently by Kahane [K] and in [M2]. More generally, Kahane proved that the similarities can be replaced by any closed subgroup of the general linear group of \(\mathbb{R}^{n}\) which is transitive outside the origin. He gave applications to multiple points of stochastic processes.
There are many special cases where the equality \(\dim A\cap(g(B)+z)=\dim A+\dim B-n\) holds typically. The case where one of the sets is a plane, initiated by Marstrand in [M], has been studied a lot, see discussions in [M5, Chapter 10] and [M6, Chapter 6], and [MO] for a more recent result. More generally, one of the sets can be rectifiable, see [M2].
The main open problem is: what conditions on the Hausdorff dimensions or measures of \(A\) and \(B\) guarantee that for \(\theta_{n}\) almost all \(g\in O(n)\),
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(g(B)+z)\geq\dim A+\dim B-n\}) >0, \tag{1.7}\]
or perhaps for all \(\varepsilon>0\),
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(g(B)+z)\geq\dim A+\dim B-n- \varepsilon\})>0? \tag{1.8}\]
If one of the sets is a Salem set, that is, it supports a measure with an optimal Fourier decay allowed by its Hausdorff dimension, then (1.8) holds without dimensional restrictions, see [M4]. I expect (1.8) to be true for all Borel subsets \(A\) and \(B\) of \(\mathbb{R}^{n}\).
Below I shall discuss some partial results on this question. I shall also say something about the exceptional sets of transformations.
In this survey I shall concentrate on Hausdorff dimension and general Borel sets. For remarks and references about related results on other dimensions, see [M5, Section 13.20] and [M6, Section 7.3]. There is a rich literature on various questions about intersections of dynamically generated and related sets. For recent results and further references, see [S1], [Wu], [Y]. For probabilistic sets, see [SS] and its references.
## 2. Projections and plane intersections
This topic can be thought of as a study of integral-geometric properties of fractal sets and Hausdorff dimension. Let us briefly review some of the basic related results on projections and plane sections. This was started by Marstrand in [M] in the plane. His main results in general dimensions are the following. Let \(G(n,m)\) be the Grassmannian of linear \(m\)-dimensional subspaces of \(\mathbb{R}^{n}\) and \(P_{V}:\mathbb{R}^{n}\to V\) the orthogonal projection onto \(V\in G(n,m)\). Let also \(\gamma_{n,m}\) be the orthogonally invariant Borel probability measure on \(G(n,m)\).
**Theorem 2.1**.: _Let \(A\subset\mathbb{R}^{n}\) be a Borel set. Then for almost all \(V\in G(n,m)\),_
\[\dim P_{V}(A)=\dim A\text{ if }\dim A\leq m, \tag{2.1}\]
_and_
\[\mathcal{H}^{m}(P_{V}(A))>0\text{ if }\dim A>m. \tag{2.2}\]
**Theorem 2.2**.: _Let \(n-m\leq s\leq n\) and let \(A\subset\mathbb{R}^{n}\) be \(\mathcal{H}^{s}\) measurable with \(0<\mathcal{H}^{s}(A)<\infty\). Then for almost all \(V\in G(n,m)\),_
\[\mathcal{H}^{n-m}(\{u\in V^{\perp}:\dim(A\cap(V+u))=s+m-n\})>0, \tag{2.3}\]
_and for almost all \(V\in G(n,m)\) and for \(\mathcal{H}^{s}\) almost all \(x\in A\),_
\[\dim(A\cap(V+x))=s+m-n. \tag{2.4}\]
One can sharpen these results by deriving estimates on the Hausdorff dimension of the exceptional sets of the planes \(V\). For the first part of Theorem 2.1 this was first done by Kaufman in [Ka] in the plane, then in [M1] in higher dimensions. For the second part of Theorem 2.1 the exceptional set estimates were proven by Falconer in [F1]. Thus we have, recall that \(\dim G(n,m)=m(n-m)\):
**Theorem 2.3**.: _Let \(A\subset\mathbb{R}^{n}\) be a Borel set with \(s=\dim A\). Then_
\[\dim\{V\in G(n,m):\dim P_{V}(A)<\dim A\}\leq s-m+m(n-m)\text{ if }s\leq m, \tag{2.5}\]
_and_
\[\dim\{V\in G(n,m):\mathcal{H}^{m}(P_{V}(A))=0\}\leq m-s+m(n-m)\text{ if }s>m. \tag{2.6}\]
These inequalities are sharp by the examples in [KM] (and their modifications), but the proof for (2.5) also gives the upper bound \(t-m+m(n-m)\) if \(\dim A\) on the left hand side is replaced by \(t,0\leq t\leq\dim A\). Then for \(t<\dim A\) this is not always sharp, see the discussion in [M6, Section 5.4].
For the plane sections Orponen proved in [Or1], see also [M6, Theorem 6.7], the exceptional set estimate (which of course is sharp, as (2.6) is):
**Theorem 2.4**.: _Let \(n-m\leq s\leq n\) and let \(A\subset\mathbb{R}^{n}\) be \(\mathcal{H}^{s}\) measurable with \(0<\mathcal{H}^{s}(A)<\infty\). Then there is a Borel set \(E\subset G(n,m)\) such that_
\[\dim E\leq n-m-s+m(n-m)\]
_and for \(V\in G(n,m)\setminus E\),_
\[\mathcal{H}^{n-m}(\{u\in V^{\perp}:\dim(A\cap(V+u))=s+m-n\})>0. \tag{2.7}\]
We can also ask for exceptional set estimates corresponding to (2.4). We proved with Orponen [MO] the following:
**Theorem 2.5**.: _Let \(n-m\leq s\leq n\) and let \(A\subset\mathbb{R}^{n}\) be \(\mathcal{H}^{s}\) measurable with \(0<\mathcal{H}^{s}(A)<\infty\). Then the set \(B\) of points \(x\in\mathbb{R}^{n}\) with_
\[\gamma_{n,m}(\{V\in G(n,m):\dim A\cap(V+x)=s+m-n\})=0\]
_has dimension \(\dim B\leq n-m\)._
Very likely, the bound \(n-m\) is not sharp. When \(m=1\), probably the sharp bound should be \(2(n-1)-s\) in accordance with Orponen's sharp result for radial projections in [Or2].
Another open question is whether there could be some sort of non-trivial estimate for the dimension of the exceptional pairs \((x,V)\).
## 3. Some words about the methods
The methods in all cases use Frostman measures. Suppose that the Hausdorff measures \(\mathcal{H}^{s}(A)\) and \(\mathcal{H}^{t}(B)\) are positive. Then there are \(\mu\in\mathcal{M}(A)\) and \(\nu\in\mathcal{M}(B)\) such that \(\mu(B(x,r))\leq r^{s}\) and \(\nu(B(x,r))\leq r^{t}\) for \(x\in\mathbb{R}^{n},r>0\). In particular, for \(0<s<\dim A\) and \(0<t<\dim B\) there are \(\mu\in\mathcal{M}(A)\) and \(\nu\in\mathcal{M}(B)\) such that \(I_{s}(\mu)<\infty\) and \(I_{t}(\nu)<\infty\), where the \(s\) energy \(I_{s}(\mu)\) is defined by
\[I_{s}(\mu)=\iint|x-y|^{-s}\,d\mu x\,d\mu y.\]
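The passage from the Frostman condition to finite energy is a routine dyadic estimate: if \(\mu(B(x,r))\leq r^{s}\) for all \(x\in\mathbb{R}^{n}\) and \(r>0\), then for \(0<u<s\),
\[\int|x-y|^{-u}\,d\mu y\leq\sum_{k=0}^{\infty}2^{(k+1)u}\mu(B(x,2^{-k}))+\mu(\mathbb{R}^{n})\leq 2^{u}\sum_{k=0}^{\infty}2^{-k(s-u)}+\mu(\mathbb{R}^{n})<\infty\]
uniformly in \(x\), whence \(I_{u}(\mu)<\infty\).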
Then the goal is to find intersection measures \(\lambda_{g,z}\in\mathcal{M}(A\cap(g(B)+z))\) such that
\[\operatorname{spt}\lambda_{g,z}\subset\operatorname{spt}\mu\cap(g( \operatorname{spt}\nu)+z), \tag{3.1}\]
\[\int\lambda_{g,z}(\mathbb{R}^{n})\,d\mathcal{L}^{n}z=\mu(\mathbb{R}^{n})\nu( \mathbb{R}^{n})\text{ for }\theta_{n}\text{ almost all }g\in O(n), \tag{3.2}\]
\[\iint I_{s+t-n}(\lambda_{g,z})\,d\mathcal{L}^{n}z\,d\theta_{n}g\lesssim I_{s} (\mu)I_{t}(\nu). \tag{3.3}\]
This would give (1.8): by (3.2), for \(\theta_{n}\) almost all \(g\) the measures \(\lambda_{g,z}\) are non-trivial for a set of \(z\) of positive Lebesgue measure, by (3.3) and Fubini's theorem \(I_{s+t-n}(\lambda_{g,z})<\infty\) for almost all \((g,z)\), and by (3.1) \(\lambda_{g,z}\) is carried by \(A\cap(g(B)+z)\); hence \(\dim A\cap(g(B)+z)\geq s+t-n\) on such a set, and letting \(s\uparrow\dim A\), \(t\uparrow\dim B\) gives (1.8).
There are two closely related methods to produce these measures. The first, used in [M2], is based on (1.3): the intersections \(A\cap(g(B)+z)\) can be realized as level sets of the projections \(S_{g}\):
\[S_{g}(x,y)=x-g(y),\ x,y\in\mathbb{R}^{n}, \tag{3.4}\]
\[A\cap(g(B)+z)=\pi((A\times g(B))\cap S_{g}^{-1}\{z\}),\ \pi(x,y)=x. \tag{3.5}\]
Notice that the map \(S_{g}\) is essentially the orthogonal projection onto the \(n\)-plane \(\{(x,-g(x)):x\in\mathbb{R}^{n}\}.\)
Thus one slices (disintegrates) \(\mu\times g_{\#}\nu\) (\(g_{\#}\nu\) is the push-forward) with the planes \(V_{z}=\{(x,y):x=y+z\},z\in\mathbb{R}^{n}.\) For this to work, one needs to know that
\[S_{g\#}(\mu\times\nu)\ll\mathcal{L}^{n}\ \text{for}\ \theta_{n}\ \text{almost}\ \text{all}\ g\in O(n). \tag{3.6}\]
This is usually proved by establishing the \(L^{2}\) estimate
\[\iint S_{g_{\#}}(\mu\times\nu)(x)^{2}\,dx\,d\theta_{n}g\lesssim 1, \tag{3.7}\]
which, by Plancherel's formula, is equivalent to
\[\iint|\mathcal{F}(S_{g\#}(\mu\times\nu))(x)|^{2}\,dx\,d\theta_{n}g\lesssim 1, \tag{3.8}\]
where \(\mathcal{F}\) stands for the Fourier transform.
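For the projections \(S_{g}\) this Fourier transform has a simple product form: since \(S_{g}(x,y)=x-g(y)\) and \(g\) is orthogonal,
\[\mathcal{F}(S_{g\#}(\mu\times\nu))(\xi)=\iint e^{-2\pi i\xi\cdot(x-g(y))}\,d\mu x\,d\nu y=\widehat{\mu}(\xi)\widehat{\nu}(-g^{-1}(\xi)),\]
an identity which is used again in Section 5.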
The second method, used in [K], is based on convolution approximation. Letting \(\psi_{\varepsilon},\varepsilon>0,\) be a standard approximate identity, set \(\nu_{\varepsilon}=\psi_{\varepsilon}*\nu\) and
\[\nu_{g,z,\varepsilon}(x)=\nu_{\varepsilon}(g^{-1}(x-z)),\ x\in\mathbb{R}^{n}. \tag{3.9}\]
Then the plan is to show that when \(\varepsilon\to 0,\) the measures \(\nu_{g,z,\varepsilon}\mu\) converge weakly to the desired intersection measures.
No Fourier transform is needed to prove Theorem 1.1, but the proofs of all theorems discussed below, except Theorems 6.1 and 6.2, rely on the Fourier transform defined by
\[\widehat{\mu}(x)=\int e^{-2\pi ix\cdot y}\,d\mu y,\ x\in\mathbb{R}^{n}.\]
The basic reason for its usefulness in this connection is the formula
\[I_{s}(\mu)=\iint|x-y|^{-s}\,d\mu x\,d\mu y=c(n,s)\int|\widehat{\mu}(x)|^{2}|x| ^{s-n}\,dx, \tag{3.10}\]
which is a consequence of Parseval's formula and the fact that the distributional Fourier transform of the Riesz kernel \(k_{s},k_{s}(x)=|x|^{-s},\) is a constant multiple of \(k_{n-s}.\)
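Indeed, writing \(I_{s}(\mu)=\int k_{s}*\mu\,d\mu\), Parseval's formula together with \(\widehat{k_{s}}=c(n,s)k_{n-s}\) gives
\[I_{s}(\mu)=\int\widehat{k_{s}}(x)|\widehat{\mu}(x)|^{2}\,dx=c(n,s)\int|x|^{s-n}|\widehat{\mu}(x)|^{2}\,dx.\]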
The decay of the spherical averages,
\[\sigma(\mu)(r)=r^{1-n}\int_{S(r)}|\widehat{\mu}(x)|^{2}\,d\sigma_{r}^{n-1}x,r >0,\]
of \(\mu\in\mathcal{M}(\mathbb{R}^{n}),\) where \(\sigma_{r}^{n-1}\) is the surface measure on the sphere \(S(r)=\{x\in\mathbb{R}^{n}:|x|=r\},\) often plays an important role. By integration in polar coordinates, if \(\sigma(\mu)(r)\lesssim r^{-t}\) for \(r>0\) and for some \(t>0,\) then \(I_{s}(\mu)<\infty\) for \(0<s<t.\) Hence the best decay we can hope for under the finite \(s\) energy assumption (or the Frostman assumption \(\mu(B(x,r))\leq r^{s}\)) is \(r^{-s}\). This is true when \(s\leq(n-1)/2,\) see [M6, Lemma 3.5], but false otherwise.
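In fact, by (3.10) and integration in polar coordinates,
\[I_{s}(\mu)=c(n,s)\int_{0}^{\infty}r^{s-n}\int_{S(r)}|\widehat{\mu}(x)|^{2}\,d\sigma_{r}^{n-1}x\,dr=c(n,s)\int_{0}^{\infty}r^{s-1}\sigma(\mu)(r)\,dr,\]
so the decay \(\sigma(\mu)(r)\lesssim r^{-t}\) for \(r>1\), combined with the trivial bound \(\sigma(\mu)(r)\lesssim\mu(\mathbb{R}^{n})^{2}\), gives \(I_{s}(\mu)<\infty\) for \(0<s<t\).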
The decay estimates for \(\sigma(\mu)(r)\) have been studied by many people, discussion can be found in [M6, Chapter 15]. The best known estimates, due to Wolff, [W],
when \(n=2\) (the proof can also be found in [12, Chapter 16]) and to Du and Zhang, [4], in the general case, are the following: Let \(\mu\in\mathcal{M}(\mathbb{R}^{n})\) with \(\mu(B(x,r))\leq r^{s}\) for \(x\in\mathbb{R}^{n},r>0\). Then for all \(\varepsilon>0,r>1\),
\[\sigma(\mu)(r)\lesssim\begin{cases}r^{-(n-1)s/n+\varepsilon}\text{ for all }0<s<n,\\ r^{-(n-1)/2+\varepsilon}\text{ if }(n-1)/2\leq s\leq n/2,\\ r^{-s+\varepsilon}\text{ if }0<s\leq(n-1)/2.\end{cases} \tag{3.11}\]
The essential case for the first estimate is \(s>n/2\), otherwise the second and third are better. Up to \(\varepsilon\) these estimates are sharp when \(n=2\). When \(n\geq 3\) the sharp bounds are not known for all \(s\), see [4] for discussion and the most recent examples. As mentioned above, the last bound is always sharp.
## 4. The first theorem
If one of the sets has dimension bigger than \((n+1)/2\) we have the following theorem. It was proved in [12], see also [12, Theorem 13.11] or [12, Theorem 7.4]:
**Theorem 4.1**.: _Let \(s\) and \(t\) be positive numbers with \(s+t>n\) and \(s>(n+1)/2\). Let \(A\) and \(B\) be Borel subsets of \(\mathbb{R}^{n}\) with \(\mathcal{H}^{s}(A)>0\) and \(\mathcal{H}^{t}(B)>0\). Then_
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(g(B)+z)\geq\dim A+\dim B-n\})>0 \tag{4.1}\]
_for almost all \(g\in O(n)\)._
The proof is based on the slicing method. The key estimate is
\[\mu\times\mu(\{(x,y):r\leq|x-y|\leq r+\delta\})\lesssim I_{s}(\mu)\delta r^{s-1} \tag{4.2}\]
if \(\mu\in\mathcal{M}(\mathbb{R}^{n}),0<\delta\leq r\) and \((n+1)/2\leq s<n\). This is combined with the inequality (1.1).
The inequality (4.2) is obtained with the help of the Fourier transform, and that is the only place in the proof of Theorem 4.1 where the Fourier transform is needed.
One problem of extending Theorem 4.1 below the dimension bound \((n+1)/2\) is that the estimate (4.2) then fails, at least in the plane by [12, Example 4.9] and in \(\mathbb{R}^{3}\) by [10].
In Section 7 we discuss estimates on the exceptional sets of orthogonal transformations. The proof of Theorem 7.1 gives another proof for Theorem 4.1 but under the stronger assumption \(s+t>n+1\). On the other hand, Theorem 6.3 below holds with the assumption \(s+(n-1)t/n>n\) but under the additional condition of positive lower density. Of course, \(s+(n-1)t/n>n\) is sometimes stronger and sometimes weaker than \(s>(n+1)/2,s+t>n\). For example, consider these in the plane. When \(s=t\), the first one says \(s>4/3\) and the second one \(s>3/2\). On the other hand, when \(s\) is slightly bigger than \(3/2\), the first requires \(t\) to be at least about \(1\), but the second allows \(t=1/2\).
Theorem 4.1 says nothing in \(\mathbb{R}^{1}\), and there is nothing to say: in [12] I constructed compact sets \(A,B\subset\mathbb{R}\) such that \(\dim A=\dim B=1\) and \(A\cap(B+z)\) contains at most one point for any \(z\in\mathbb{R}\). The \(n\)-fold Cartesian powers of \(A\) and \(B\) yield
the corresponding examples in \(\mathbb{R}^{n}\), that is, just with translations we get nothing in general.
Donoven and Falconer proved in [DF] an analogue of Theorem 4.1 for the isometries of the Cantor space. They didn't need any dimensional restrictions. They used martingales to construct the desired random measures with finite energy integrals on the intersections.
## 5. The projections \(S_{g}\)
We now discuss a bit more the projections \(S_{g}\), recall (3.4). They are particular cases of restricted projections, which recently have been studied extensively, see [M6, Section 5.4], [M9] and [GGGHMW] and the references given there. Restricted means that we are considering a lower dimensional subspace of the Grassmannian \(G(2n,n)\). For the full Grassmannian we have Marstrand's projection theorem 2.1.
As mentioned above, to prove Theorem 4.1 one first needs to know (3.6) when \(s+t>n\) and \(s>(n+1)/2\) and \(\mu\) and \(\nu\) have finite \(s\) and \(t\) energies. A simple proof using spherical averages is given in [M6, Lemma 7.1]. This immediately yields the weaker result: with the assumptions of Theorem 4.1, for almost all \(g\in O(n)\),
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:A\cap(g(B)+z)\neq\emptyset\})>0, \tag{5.1}\]
because (5.1) is equivalent to \(\mathcal{L}^{n}(S_{g}(A\times B))>0\). Even for this I don't know if the assumption \(s>(n+1)/2\) is needed.
Let us first look at general Borel subsets of \(\mathbb{R}^{2n}\):
**Theorem 5.1**.: _Let \(A\subset\mathbb{R}^{2n}\) be a Borel set. If \(\dim A>n+1\), then \(\mathcal{L}^{n}(S_{g}(A))>0\) for \(\theta_{n}\) almost all \(g\in O(n)\)._
This was proved in [M9]. That paper also contains dimension estimates for \(S_{g}(A)\) when \(\dim A\leq n+1\) and estimates on the dimension of exceptional sets of transformations \(g\). In particular, if \(n\leq\dim A\leq n+1\), then
\[\dim S_{g}(A)\geq\dim A-1\text{ for }\theta_{n}\text{ almost all }g\in O(n). \tag{5.2}\]
The bound \(n+1\) in Theorem 5.1 is sharp. This was shown by Harris in [H]. First, (5.2) is sharp. The example for \(\dim A=n\) is simply the diagonal \(D=\{(x,x):x\in\mathbb{R}^{n}\}\). To see this suppose that \(g\in O(n)\) is such that \(\det g=(-1)^{n+1}\), which is satisfied by half of the orthogonal transformations. Then by some linear algebra \(g\) has a non-zero fixed point, whence the kernel of \(x\mapsto S_{g}(x,x)\) is non-trivial, so \(\dim S_{g}(D)\leq n-1\). Taking the Cartesian product of \(D\) with a one-dimensional set of zero \(\mathcal{H}^{1}\) measure, we obtain \(A\) with \(\dim A=n+1\) and \(\mathcal{L}^{n}(S_{g}(A))=0\), which proves the sharpness.
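The linear algebra behind this is a short eigenvalue count: the eigenvalues of \(g\in O(n)\) are \(1\) (with multiplicity \(a\)), \(-1\) (with multiplicity \(b\)) and complex conjugate pairs on the unit circle (contributing \(2c\)), so \(a+b+2c=n\) and \(\det g=(-1)^{b}\). If \(\det g=(-1)^{n+1}\), then \(b\equiv n+1\pmod 2\), hence \(a\equiv n-b\equiv 1\pmod 2\), so \(a\geq 1\) and \(g\) fixes some \(x_{0}\neq 0\); consequently \(S_{g}(tx_{0},tx_{0})=tx_{0}-g(tx_{0})=0\) for all \(t\in\mathbb{R}\).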
But this only gives an example \(A\) of dimension \(n+1\) for which \(\mathcal{L}^{n}(S_{g}(A))=0\) for \(g\in O(n)\) with measure \(1/2\). Is there a counter-example that works for almost all \(g\in O(n)\)?
Here are the basic ingredients of the proof of Theorem 5.1. They were inspired by Oberlin's paper [O].
Let \(0<n+1<s<\dim A\) and \(\mu\in\mathcal{M}(A)\) with \(I_{s}(\mu)<\infty\), and let \(\mu_{g}\in\mathcal{M}(S_{g}(A))\) be the push-forward of \(\mu\) under \(S_{g}\). The Fourier transform of \(\mu_{g}\) is given by
\[\widehat{\mu_{g}}(\xi)=\widehat{\mu}(\xi,-g^{-1}(\xi)).\]
By fairly standard arguments, using also the inequality (1.1), one can then show that for \(R>1\),
\[\iint_{R\leq|\xi|\leq 2R}|\widehat{\mu}(\xi,-g^{-1}(\xi))|^{2}\,d\xi\,d\theta_{n }g\lesssim R^{n+1-s}. \tag{5.3}\]
This is summed over the dyadic annuli, \(R=2^{k},k=1,2,\dots\). The sum converges since \(s>n+1\). Hence for \(\theta_{n}\) almost all \(g\in O(n)\), \(\mu_{g}\) is absolutely continuous with \(L^{2}\) density, and so \(\mathcal{L}^{n}(S_{g}(A))>0\).
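Explicitly, summing (5.3) over the annuli \(R=2^{k}\) gives
\[\iint_{|\xi|\geq 1}|\widehat{\mu_{g}}(\xi)|^{2}\,d\xi\,d\theta_{n}g\lesssim\sum_{k=0}^{\infty}2^{k(n+1-s)}<\infty\]
since \(s>n+1\), while the integral over \(|\xi|\leq 1\) is trivially bounded because \(|\widehat{\mu_{g}}|\leq\mu(\mathbb{R}^{2n})\).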
For product sets we can improve this, which is essential for the applications to intersections:
**Theorem 5.2**.: _Let \(A,B\subset\mathbb{R}^{n}\) be Borel sets. If \(\dim A+(n-1)\dim B/n>n\) or \(\dim A+\dim B>n\) and \(\dim A>(n+1)/2\), then \(\mathcal{L}^{n}(S_{g}(A\times B))>0\) for \(\theta_{n}\) almost all \(g\in O(n)\)._
The case \(\dim A>(n+1)/2\) is a special case of Theorem 4.1, recall (5.1). The proof of the case \(\dim A+(n-1)\dim B/n>n\) is based on the spherical averages and the first estimate of (3.11). Here is a sketch.
Let \(0<s<\dim A,0<t<\dim B\) and \(\varepsilon>0\) such that \(s+(n-1)t/n-\varepsilon>n\), and let \(\mu\in\mathcal{M}(A),\nu\in\mathcal{M}(B)\) with \(\mu(B(x,r))\leq r^{s},\nu(B(x,r))\leq r^{t}\) for \(x\in\mathbb{R}^{n},r>0\). Let \(\lambda_{g}=S_{g\#}(\mu\times\nu)\in\mathcal{M}(S_{g}(A\times B)).\) Then \(\widehat{\lambda}_{g}(\xi)=\widehat{\mu}(\xi)\widehat{\nu}(-g^{-1}(\xi))\). By (3.11) we have
\[\begin{split}&\iint|\widehat{\lambda}_{g}(\xi)|^{2}\,d\xi\,d \theta g=\int|\widehat{\mu}(\xi)|^{2}\sigma(\nu)(|\xi|)\,d\xi\\ &\lesssim\int|\widehat{\mu}(\xi)|^{2}|\xi|^{-(n-1)t/n+\varepsilon }\,d\xi=cI_{n-(n-1)t/n+\varepsilon}(\mu)\lesssim I_{s}(\mu)<\infty.\end{split} \tag{5.4}\]
This gives Theorem 5.2.
In fact, for some results on the intersections below we again need absolute continuity as in (3.6), and in the case \(s+(n-1)t>n\) the quantitative estimate: if \(s+(n-1)t>n,\mu,\nu\in\mathcal{M}(\mathbb{R}^{n})\) and \(\mu(B(x,r))\leq r^{s},\nu(B(x,r))\leq r^{t}\) for \(x\in\mathbb{R}^{n},r>0\), then
\[\iint S_{g\#}(\mu\times\nu)(x)^{2}\,dx\,d\theta_{n}g\lesssim 1, \tag{5.5}\]
with the implicit constant independent of \(\mu\) and \(\nu\). The arguments described above give this too.
## 6. Level sets and intersections
The estimate (5.5) can be used to derive information on the Hausdorff dimension of the level sets of \(S_{g}\), and hence, by (3.5), of intersections. We shall first discuss a more general version of this principle: a quantitative projection theorem leads to
estimates of the Hausdorff dimension of level sets. This is also how in [M5, Chapter 10] the proof for Marstrand's section theorem 2.2 runs.
We consider the following general setting. Let \(P_{\lambda}:\mathbb{R}^{n}\to\mathbb{R}^{m},\lambda\in\Lambda\), be orthogonal projections, where \(\Lambda\) is a compact metric space. Suppose that \(\lambda\mapsto P_{\lambda}x\) is continuous for every \(x\in\mathbb{R}^{n}\). Let also \(\omega\) be a finite non-zero Borel measure on \(\Lambda\). We denote by \(D(\mu,\cdot)\) the Radon-Nikodym derivative of a measure \(\mu\) on \(\mathbb{R}^{m}\).
**Theorem 6.1**.: _Let \(s>m\). Suppose that there exists a positive number \(C\) such that \(P_{\lambda\sharp}\mu\ll\mathcal{L}^{m}\) for \(\omega\) almost all \(\lambda\in\Lambda\) and_
\[\iint D(P_{\lambda\sharp}\mu,u)^{2}\,d\mathcal{L}^{m}u\,d\omega\lambda<C \tag{6.1}\]
_whenever \(\mu\in\mathcal{M}(B^{n}(0,1))\) is such that \(\mu(B(x,r))\leq r^{s}\) for \(x\in\mathbb{R}^{n},r>0\)._
_If \(A\subset\mathbb{R}^{n}\) is \(\mathcal{H}^{s}\) measurable, \(0<\mathcal{H}^{s}(A)<\infty\) and \(\theta_{*}^{s}(A,x)>0\) (recall (1.5)) for \(\mathcal{H}^{s}\) almost all \(x\in A\), then for \(\omega\) almost all \(\lambda\in\Lambda\),_
\[\mathcal{L}^{m}(\{u\in\mathbb{R}^{m}:\dim P_{\lambda}^{-1}\{u\}\cap A=s-m\})>0. \tag{6.2}\]
For an application to intersections we shall need the following product set version of Theorem 6.1. There \(P_{\lambda}:\mathbb{R}^{n}\times\mathbb{R}^{p}\to\mathbb{R}^{m},\lambda\in \Lambda,m<n+p\), are orthogonal projections with the same assumptions as before.
**Theorem 6.2**.: _Let \(s,t>0\) with \(s+t>m\). Suppose that there exists a positive number \(C\) such that \(P_{\lambda\sharp}(\mu\times\nu)\ll\mathcal{L}^{m}\) for \(\omega\) almost all \(\lambda\in\Lambda\) and_
\[\iint D(P_{\lambda\sharp}(\mu\times\nu),u)^{2}\,d\mathcal{L}^{m}u\,d\omega \lambda<C \tag{6.3}\]
_whenever \(\mu\in\mathcal{M}(B^{n}(0,1)),\nu\in\mathcal{M}(B^{p}(0,1))\) are such that \(\mu(B(x,r))\leq r^{s}\) for \(x\in\mathbb{R}^{n},r>0\), and \(\nu(B(y,r))\leq r^{t}\) for \(y\in\mathbb{R}^{p},r>0\)._
_If \(A\subset\mathbb{R}^{n}\) is \(\mathcal{H}^{s}\) measurable, \(0<\mathcal{H}^{s}(A)<\infty\), \(B\subset\mathbb{R}^{p}\) is \(\mathcal{H}^{t}\) measurable, \(0<\mathcal{H}^{t}(B)<\infty\), \(\theta_{*}^{s}(A,x)>0\) for \(\mathcal{H}^{s}\) almost all \(x\in A\), and \(\theta_{*}^{t}(B,y)>0\) for \(\mathcal{H}^{t}\) almost all \(y\in B\), then for \(\omega\) almost all \(\lambda\in\Lambda\),_
\[\mathcal{L}^{m}(\{u\in\mathbb{R}^{m}:\dim P_{\lambda}^{-1}\{u\}\cap(A\times B )=s+t-m\})>0. \tag{6.4}\]
I don't know if the assumptions on positive lower density are needed.
I give a few words about the proof of Theorem 6.1. First, notice that \(D(P_{\lambda\sharp}(\mu),u)\) is given by
\[D(P_{\lambda\sharp}\mu,u)=\lim_{\delta\to 0}\mathcal{L}^{m}(B(0,1))^{-1} \delta^{-m}\mu(\{y:|P_{\lambda}(y)-u|\leq\delta\}).\]
Let \(\mu\) be the restriction of \(\mathcal{H}^{s}\) to a subset of \(A\) so that \(\mu\) satisfies the Frostman \(s\) condition. Then (6.1) is applied to the measures
\[\mu_{a,r,\delta}=r^{-s}T_{a,r\sharp}(\mu_{\delta}\llcorner B(a,r))\in\mathcal{M}(B(0,1)),\ a\in\mathbb{R}^{n},r>0,\delta>0,\]
where \(\mu_{\delta}(B)=\delta^{-n}\int_{B}\mu(B(x,\delta))\,d\mathcal{L}^{n}x\), \(T_{a,r}(x)=(x-a)/r\) is the blow-up map and \(\mu_{\delta}\llcorner B(a,r)\) is the restriction of \(\mu_{\delta}\) to \(B(a,r)\). This leads for almost all \(x\in A,\lambda\in\Lambda\)
to
\[\lim_{r\to 0}\liminf_{\delta\to 0}r^{-t}\delta^{-m}\mu(\{y\in B(x,r):|P_{\lambda}(y-x)| \leq\delta\})=0, \tag{6.5}\]
which is a Frostman type condition along the level sets of the \(P_{\lambda}\). With some further work it leads to (6.2). The proof of Theorem 6.2 is similar.
Theorem 6.2 together with the quantitative version of Theorem 5.2 and with (3.5) can be applied to the projections \(S_{g}\) to obtain the following result on the Hausdorff dimension of intersections:
**Theorem 6.3**.: _Let \(s,t>0\) with \(s+(n-1)t/n>n\) and let \(A\subset\mathbb{R}^{n}\) be \(\mathcal{H}^{s}\) measurable with \(0<\mathcal{H}^{s}(A)<\infty\), and let \(B\subset\mathbb{R}^{n}\) be \(\mathcal{H}^{t}\) measurable with \(0<\mathcal{H}^{t}(B)<\infty\). Suppose that \(\theta_{*}^{s}(A,x)>0\) for \(\mathcal{H}^{s}\) almost all \(x\in A\) and \(\theta_{*}^{t}(B,y)>0\) for \(\mathcal{H}^{t}\) almost all \(y\in B\). Then for \(\theta_{n}\) almost all \(g\in O(n)\),_
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(g(B)+z)=s+t-n\})>0. \tag{6.6}\]
Again, I don't know if the positive lower density assumptions are needed for the lower bound \(s+t-n\). As mentioned before they are needed for the upper bound.
## 7. Exceptional set estimates
Recall the exceptional set estimates for orthogonal projections and for intersections with planes from Chapter 2. Now we discuss some similar results from [M7] for intersections.
First we have an exceptional set estimate related to Theorem 4.1. But we need a bit stronger assumption: the sum of the dimensions is required to be bigger than \(n+1\), rather than just one dimension bigger than \((n+1)/2\). Recall that the dimension of \(O(n)\) is \(n(n-1)/2\).
**Theorem 7.1**.: _Let \(s\) and \(t\) be positive numbers with \(s+t>n+1\). Let \(A\) and \(B\) be Borel subsets of \(\mathbb{R}^{n}\) with \(\mathcal{H}^{s}(A)>0\) and \(\mathcal{H}^{t}(B)>0\). Then there is \(E\subset O(n)\) such that_
\[\dim E\leq n(n-1)/2-(s+t-(n+1))\]
_and for \(g\in O(n)\setminus E\),_
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(g(B)+z)\geq s+t-n\})>0. \tag{7.1}\]
The proof is based on the Fourier transform and the convolution approximation mentioned in Section 3. Instead of \(\theta_{n}\) one uses a Frostman measure \(\theta\) on the exceptional set \(E\): for some \(\alpha>(n-1)(n-2)/2\), \(\theta(B(g,r))\leq r^{\alpha}\) for all \(g\in O(n)\) and \(r>0\). Then for \(x,z\in\mathbb{R}^{n}\setminus\{0\},r>0\),
\[\theta(\{g:|x-g(z)|<r\})\lesssim(r/|z|)^{\alpha-(n-1)(n-2)/2}, \tag{7.2}\]
which replaces the inequality (1.1).
In the case where one of the sets has small dimension we have the following improvement of Theorem 7.1:
**Theorem 7.2**.: _Let \(A\) and \(B\) be Borel subsets of \(\mathbb{R}^{n}\) and suppose that \(\dim A\leq(n-1)/2\). If \(0<u<\dim A+\dim B-n\), then there is \(E\subset O(n)\) with_
\[\dim E\leq n(n-1)/2-u\]
_such that for \(g\in O(n)\setminus E\),_
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(g(B)+z)\geq u\})>0. \tag{7.3}\]
The last decay estimate in (3.11) of spherical averages is essential for the proof. The reason why the assumption \(\dim A\leq(n-1)/2\) leads to a better result is that the last estimate in (3.11) is stronger than the others. For \(\dim A>(n-1)/2\) the inequalities (3.11) would only give weaker results with \(u\) replaced by a smaller number, see [M7, Section 4].
If one of the sets supports a measure with sufficiently fast decay of the averages \(\sigma(\mu)(r)\), we can improve the estimate of Theorem 7.1. Then the results even hold without any rotations provided the dimensions are big enough. In particular, we have the following result in case one of the sets is a Salem set. By definition, \(A\) is a Salem set if for every \(0<s<\dim A\) there is \(\mu\in\mathcal{M}(A)\) such that \(|\widehat{\mu}(x)|^{2}\lesssim|x|^{-s}\). A discussion on Salem sets can be found, for example, in [M6], Section 3.6.
**Theorem 7.3**.: _Let \(A\) and \(B\) be Borel subsets of \(\mathbb{R}^{n}\) and suppose that \(A\) is a Salem set. Suppose that \(0<u<\dim A+\dim B-n\)._
_(a) If \(\dim A+\dim B>2n-1\), then_
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(B+z)\geq u\})>0. \tag{7.4}\]
_(b) If \(\dim A+\dim B\leq 2n-1\), then there is \(E\subset O(n)\) with_
\[\dim E\leq n(n-1)/2-u\]
_such that for \(g\in O(n)\setminus E\),_
\[\mathcal{L}^{n}(\{z\in\mathbb{R}^{n}:\dim A\cap(g(B)+z)\geq u\})>0. \tag{7.5}\]
Could this hold for general sets, perhaps in the form that \(\dim E=0\), if \(\dim A+\dim B>2n-1\)? It follows from Theorem 2.4 that this is true if one of the sets is a plane. In \(\mathbb{R}^{2}\) a bit stronger question reads: if \(s+t>2\) and \(A\) and \(B\) are Borel subsets of \(\mathbb{R}^{2}\) with \(\mathcal{H}^{s}(A)>0\) and \(\mathcal{H}^{t}(B)>0\), is there \(E\subset O(2)\) with \(\dim E=0\), if \(s+t\geq 3\), and \(\dim E\leq 3-s-t\), if \(s+t\leq 3\), such that for \(g\in O(2)\setminus E\),
\[\mathcal{L}^{2}(\{z\in\mathbb{R}^{2}:\dim A\cap(g(B)+z)\geq s+t-2\})>0?\]
## 8. Some relations to the distance set problem
There are some connections of this topic to Falconer's distance set problem. For general discussion and references, see for example [M6]. Falconer showed in [F2] that for a Borel set \(A\subset\mathbb{R}^{n}\) the distance set \(\{|x-y|:x,y\in A\}\) has positive Lebesgue measure if \(\dim A>(n+1)/2\). We had the same condition in Theorem 4.1. Also for distance sets it is expected that \(\dim A>n/2\) should be enough.
When \(n=2\) Wolff [W] improved \(3/2\) to \(4/3\) using (3.11). Observe that when \(\dim A=\dim B\), the assumption \(\dim A+\dim B/2>2\) in Theorem 6.3 becomes
\(\dim A>4/3\) and it is the same as Wolff's. This is no coincidence: both results use Wolff's estimate (3.11).
The proofs of distance set results often involve the distance measure \(\delta(\mu)\) of a measure \(\mu\) defined by
\[\delta(\mu)(B)=\mu\times\mu(\{(x,y):|x-y|\in B\}),\ B\subset{\mathbb{R}}.\]
The crucial estimate (4.2) means that \(\delta(\mu)\) is absolutely continuous with bounded density if \(I_{(n+1)/2}(\mu)<\infty\). Hence it yields Falconer's result. As mentioned before we cannot hope to get bounded density when \(s<(n+1)/2\), at least when \(n=2\) or \(3\). In many of the later improvements one verifies absolute continuity with \(L^{2}\) density. For example, Wolff showed that \(\delta(\mu)\in L^{2}({\mathbb{R}})\), if \(I_{s}(\mu)<\infty\) for some \(s>4/3\). To do this he used decay estimates for the spherical averages \(\sigma(\mu)(r)\) and proved (3.11) for \(n=2\). The proofs of the most recent, and so far the best known, distance set results in [DZ], [GIOW], [DGOWWZ] and [DIOWZ] are quite involved using deep harmonic analysis techniques; restriction and decoupling. In the plane the result of [GIOW] says that the distance set of \(A\) has positive Lebesgue measure if \(\dim A>5/4\). See Shmerkin's survey [S2] for the distance set and related problems.
Distance measures are related to the projections \(S_{g}\) by the following:
\[\iint D(S_{g\#}(\mu\times\nu))(z)^{2}\,d{\mathcal{L}}^{n}z\,d\theta_{n}g=c \int\delta(\mu)(t)\delta(\nu)(t)t^{1-n}\,dt, \tag{8.1}\]
at least if \(\mu\) and \(\nu\) are smooth functions with compact support, see [M9, Section 5.2].
By an example in [GIOW], when \(n=2\), \(I_{s}(\mu)<\infty\) for some \(s<4/3\) is not enough for \(\delta(\mu)\) to be in \(L^{2}\); because of (8.1), it is probably not enough for \(S_{g\#}(\mu\times\mu)\) to be in \(L^{2}\) either. But in [GIOW] it was shown that if \(I_{s}(\mu)<\infty\) for some \(s>5/4\), there is a complex valued modification of \(\mu\) with good \(L^{2}\) behaviour. In higher even dimensions similar results were proven in [DIOWZ] with \(n/2+1/4\) in place of \(5/4\). Could those methods be used to show, for instance, that if \(n=2\) and \(\dim A=\dim B>5/4\), then \(\mathcal{L}^{2}(S_{g}(A\times B))>0\) for almost all \(g\in O(2)\)?
|
2309.10753 | Generalized Cactus and Structural Controllability of Switched Linear
Continuous-Time Systems | This paper explores the structural controllability of switched linear
continuous-time systems. It first identifies a gap in the proof for a pivotal
criterion for the structural controllability of switched linear systems in the
literature. To address this void, we develop novel graph-theoretic concepts,
such as multi-layer dynamic graphs, generalized stems/buds, and generalized
cacti, and based on them, provide a comprehensive proof for this criterion. Our
approach also induces a new, generalized cactus based graph-theoretic criterion
for structural controllability. This not only extends Lin's cactus-based
graph-theoretic condition to switched systems for the first time, but also
provides a lower bound for the generic dimension of controllable subspaces. | Yuan Zhang, Yuanqing Xia, Aming Li | 2023-09-19T16:54:16Z | http://arxiv.org/abs/2309.10753v2 | # Structural Controllability of Switched Continuous and Discrete Time Linear Systems
###### Abstract
This paper explores the structural controllability of switched continuous and discrete time linear systems. It identifies a gap in the proof for a pivotal criterion for structural controllability of switched continuous time systems in the literature. To address this void, we develop novel graph-theoretic concepts, such as multi-layer dynamic graphs, generalized stems/buds, and generalized cactus configurations, and based on them, provide a comprehensive proof for this criterion. Our approach also induces a new, generalized cactus based graph-theoretic criterion for structural controllability. This not only extends Lin's cactus-based graph-theoretic condition to switched systems for the first time, but also provides a lower bound for the generic dimension of controllable subspaces (which is conjectured to be exact). Finally, we present extensions to reversible switched discrete-time systems, which lead to not only a simplified necessary and sufficient condition for structural controllability, but also the determination of the generic dimension of controllable subspaces.
Structural controllability, switched systems, generalized stems and buds, generalized cactus, controllable subspaces
## I Introduction
Switched systems represent a class of hybrid systems where multiple subsystems are regulated by switching laws. This dynamic switching among subsystems can enrich the control strategies, often resulting in superior control performance compared to non-switched systems [1]. For instance, scenarios arise where stabilization through a constant feedback controller remains unattainable, yet becomes feasible through transitions between distinct constant feedback controllers [2]. Due to its significance in both practical applications and theoretical exploration, the study of switched systems has garnered substantial attention [3, 4, 5].
Controllability and observability are two fundamental concepts that are prerequisites for the design of switched systems. Extensive investigations have been dedicated to these concepts [3, 4, 6, 7, 8]. Notably, it has been revealed that the reachable and controllable sets of switched continuous-time systems exist as subspaces within the total space, with complete characterizations established in [4]. However, for discrete-time systems, the reachable and controllable sets do not necessarily manifest as subspaces [9]. Geometric characterizations, ensuring these sets to span the entire space, were introduced in [6]. Simplified criteria for reversible systems can be found in [8].
It is important to note that the aforementioned outcomes rely on algebraic computations centered around precise values of system matrices. In practical scenarios, these exact system parameters might be elusive due to modeling inaccuracies or parameter uncertainties [10]. When only the zero-nonzero patterns of system matrices are accessible, Liu et al. [11] introduced the concept of _structural controllability_ for _switched systems_, aligning with the generic controllability concept initiated by [12]. In [11], several equivalent criteria, grounded in colored union graphs, were proposed to assess structural controllability. Notably, one criterion involves an input accessibility condition and a generic rank condition, distinguished by its simplicity and elegance. This criterion extends naturally from the structural controllability criterion for linear time-invariant (LTI) systems [13]. Exploiting this resemblance, elegant outcomes regarding optimal input selections for LTI systems [14] have been extended to switched systems [15, 16]. Liu et al.'s work has also stimulated research exploration into (strong) structural controllability for other class of time-varying systems, such as linear parameter varying systems [17], temporal networks [18, 19].
Nevertheless, in this paper, we identify a gap in the proof of the criterion's sufficiency. Regrettably, this gap seems unaddressed if we follow the original research thread in [11] (refer to Section II). Nonetheless, we establish the correctness of the said criterion by providing a rigorous and comprehensive proof for it. Our proof relies on novel graph-theoretic concepts, including multi-layer dynamic graphs, generalized stems, generalized buds, and generalized cactus configurations. Our approach also yields a new criterion for structural controllability based on generalized stem-bud structures. Notably, this extends Lin's cactus-based graph-theoretic condition for structural controllability [20] to switched systems for the first time. This criterion also induces a lower bound for the generic dimension of controllable subspaces, which we conjecture to be exact. Lastly, we extend these results to reversible switched discrete-time systems. This not only yields simplified necessary and sufficient conditions for structural controllability but also enables us to determine the generic dimension of controllable subspaces.
The rest of this paper is organized as follows. Section II provides some basic preliminaries and the motivation of this paper by identifying a gap in the existing literature. Section III presents a new generalized cactus configuration based criterion for structural controllability and establishes the correctness of the existing one. Extensions to reversible switched discrete-time systems are given in Section IV. The last section concludes this paper.
## II Preliminaries and Motivation
### _Controllability of switched systems_
Consider a switched continuous time linear system whose dynamics is governed by [5]
\[\dot{x}(t)=A_{\sigma(t)}x(t)+B_{\sigma(t)}u_{\sigma(t)}(t), \tag{1}\]
where \(x(t)\in\mathbb{R}^{n}\) is the state, \(\sigma(t):[0,\infty)\rightarrow\{1,...,N\}\) is the switching signal that can be designed, \(u_{\sigma(t)}(t)\in\mathbb{R}^{m_{\sigma(t)}}\) is the piecewise continuous input, \(m_{i}\in\mathbb{N}\), \(i=1,...,N\). \((A_{i},B_{i})\) is called a subsystem of system (1), \(i=1,...,N\). \(\sigma(t)=i\) implies the subsystem \((A_{i},B_{i})\) is activated as the system realization at time instant \(t\). We may use the matrix set \((A_{i},B_{i})|_{i=1}^{N}\) to denote the switched system (1).
**Definition 1** ([4]): A state \(x\in\mathbb{R}^{n}\) is said to be controllable, if there exists a finite \(t_{f}\), a switching signal \(\sigma(t):[0,t_{f})\rightarrow\{1,...,N\}\)
and an input \(u(t)\in\mathbb{R}^{m_{\sigma(t)}}\), \(t\in[0,t_{f})\), such that \(x(0)=x\) and \(x(t_{f})=0\). The controllable set (controllable subspace) of system (1) is the set of controllable states. System (1) is said to be controllable, if its controllable set is \(\mathbb{R}^{n}\).
It is noted that if we change '\(x(0)=x\) and \(x(t_{f})=0\)' to '\(x(0)=0\) and \(x(t_{f})=x\)' in Definition 1, then the concept of 'reachability' is obtained. For switched continuous time systems and reversible switched discrete time systems (see Section IV), their reachability and controllability always coincide [6, 4].
**Lemma 1** ([4]): _The switched system (1) is controllable, if and only if the following controllability matrix \(\mathcal{R}\) has full row rank:_
\[\mathcal{R}=\left[A_{i_{n}}^{j_{n}}A_{i_{n-1}}^{j_{n-1}}\cdots A_{i_{1}}^{j_{1}}B_{i_{1}}\big|_{i_{1},\ldots,i_{n}=1,\ldots,N}^{j_{1},\ldots,j_{n}=0,\ldots,n-1}\right].\]
_Here, each item \(A_{i_{n}}^{j_{n}}\cdots A_{i_{1}}^{j_{1}}B_{i_{1}}\) constitutes sub-columns of \(\mathcal{R}\). Moreover, the controllable subspace of system (1) is the column space spanned by \(\mathcal{R}\), denoted as \(\mathrm{span}\,\mathcal{R}\), and its dimension equals \(\mathrm{rank}\,\mathcal{R}\)._
A structured matrix is a matrix with either fixed zero entries or free entries that can take values independently (the latter are called nonzero entries). The _generic rank_ of a structured matrix (or a polynomial of structured matrices), given by \(\mathrm{grank}\), is the maximum rank it can achieve as a function of parameters for its nonzero entries. It turns out that the generic rank is also the rank this matrix can achieve for almost all values of its nonzero entries. When only the zero-nonzero patterns of matrices \((A_{i},B_{i})|_{i=1}^{N}\) are available, that is, \((A_{i},B_{i})|_{i=1}^{N}\) are structured matrices, system (1) is called a structured system. \((\tilde{A}_{i},\tilde{B}_{i})|_{i=1}^{N}\) is a realization of \((A_{i},B_{i})|_{i=1}^{N}\), if \(\tilde{A}_{i}\) (\(\tilde{B}_{i}\)) is obtained by assigning some particular values to the nonzero entries of \(A_{i}\) (\(B_{i}\)), \(i=1,...,N\).
**Definition 2** ([11]): _A structured system (1) is said to be structurally controllable, if there is a realization \((\tilde{A}_{i},\tilde{B}_{i})|_{i=1}^{N}\) of \((A_{i},B_{i})|_{i=1}^{N}\), such that \((\tilde{A}_{i},\tilde{B}_{i})|_{i=1}^{N}\) is controllable._
From Lemma 1, if a (structured) system is structurally controllable, then almost all its realizations are controllable. This generic property makes the concept appealing, particularly for large-scale network systems [10].
### _Graph-theoretic preliminaries_
A directed graph (digraph) is denoted by \(\mathcal{G}=(V,E)\), where \(V\) is the vertex set, and \(E\subseteq V\times V\) is the edge set. A _subgraph_ of \(\mathcal{G}\) is a graph \(\mathcal{G}_{s}=(V_{s},E_{s})\) such that \(V_{s}\subseteq V\) and \(E_{s}\subseteq E\), and is called a subgraph induced by \(V_{s}\), if \(E_{s}=(V_{s}\times V_{s})\cap E\). We say \(\mathcal{G}_{s}\)_covers_\(V_{s}^{\prime}\subseteq V\) if \(V_{s}^{\prime}\subseteq V_{s}\), and \(\mathcal{G}_{s}\)_spans_\(\mathcal{G}\) if \(V_{s}=V\). An edge from \(i_{1}\) to \(i_{2}\), given by \((i_{1},i_{2})\), is called an _ingoing edge_ of vertex \(i_{2}\), and an _outgoing edge_ of vertex \(i_{1}\). A sequence of successive edges \((i_{1},i_{2}),(i_{2},i_{3}),...,(i_{k-1},i_{k})\) is called a _walk from vertex \(i_{1}\) to vertex \(i_{k}\)_. Such a walk \(p\) is either denoted by the sequence of edges it contains, i.e., \(p=(e_{1},e_{2},...,e_{k-1})\), where \(e_{j}=(i_{j},i_{j+1})\), or the sequence of vertices it passes, i.e., \(p=(i_{1},i_{2},...,i_{k})\). Vertex \(i_{1}\) is called the _tail_ (initial vertex), denoted as \(\mathrm{tail}(p)\), and vertex \(i_{k}\) is called the _head_ (terminal vertex), denoted as \(\mathrm{head}(p)\). The _length_ of a walk \(p\), given by \(|p|\), is the edges it contains (counting repeated edges). A walk without repeated vertices is called a _path_. A walk from a vertex to itself is called a loop. If the head (or tail) of a loop is the only repeated vertex when traversing along its way, this loop is called a _cycle_.
Two typical graph-theoretic representations of system (1) are introduced. For the \(i\)th subsystem, the _system digraph_ is \(\mathcal{G}_{i}=(V_{Xi}\cup V_{Ui},E_{XXi}\cup E_{UXi})\), where the state vertices \(V_{Xi}=\{v_{1}^{i},...,v_{n}^{i}\}\), the input vertices \(V_{Ui}=\{v_{n+1}^{i},...,v_{n+m_{i}}^{i}\}\), the state edges \(E_{XXi}=\{(v_{k}^{i},v_{j}^{i}):A_{i,jk}\neq 0\}\), and the input edges \(E_{UXi}=\{(v_{n+k}^{i},v_{j}^{i}):B_{i,jk}\neq 0,k=1,...,m_{i}\}\). The _colored union graph_ is \(\mathcal{G}_{c}=(X\cup U,E_{XX}\cup E_{UX})\), obtained by superimposing all subsystem digraphs on the common state vertex set \(X=\{v_{1},...,v_{n}\}\) and input vertex set \(U\), i.e., \(E_{XX}=\{(v_{k},v_{j}):A_{i,jk}\neq 0\ \text{for some}\ i\}\) and \(E_{UX}=\{(v_{n+k},v_{j}):B_{i,jk}\neq 0\ \text{for some}\ i\}\). Notice that multiple edges are allowable in \(E_{XX}\) (and \(E_{UX}\)), and to distinguish them, we assign the _color index_ \(i\) to the edge \((v_{k},v_{j})\) (resp. \((v_{n+k},v_{j})\)) corresponding to \(A_{i,jk}\neq 0\) (\(B_{i,jk}\neq 0\)), \(i\in\{1,...,N\}\). An edge \((v_{k},v_{j})\) with color index \(i\) is also denoted by \(e_{kj}^{i}\). A _stem_ is a path from some \(u\in U\) to some \(v\in X\) in \(\mathcal{G}_{c}\).
**Definition 3**: _A state vertex \(v\in X\) is said to be input-reachable, if there is a path from an input vertex \(u\in U\) to \(v\) in \(\mathcal{G}_{c}\)._
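As a small illustration (not from the paper; the names and data layout below are ad hoc), the colored union graph can be stored as a list of colored edges built from the zero-nonzero patterns of \((A_{i},B_{i})|_{i=1}^{N}\), and a breadth-first search then checks whether every state vertex is input-reachable in the sense of Definition 3. The pattern used is that of Example 1 in the next subsection.

```python
# Minimal sketch: colored union graph from zero-nonzero patterns + input-reachability check.
from collections import deque

def colored_union_graph(A_pats, B_pats):
    """Edges (tail, head, color); state vertex j is ('x', j), the k-th input of
    subsystem i is ('u', i, k).  A_pats[i-1][j][k] != 0 marks the edge v_k -> v_j."""
    n = len(A_pats[0])
    edges = []
    for i, (A, B) in enumerate(zip(A_pats, B_pats), start=1):
        for j in range(n):
            for k in range(n):
                if A[j][k]:
                    edges.append((('x', k), ('x', j), i))
            for k in range(len(B[j])):
                if B[j][k]:
                    edges.append((('u', i, k), ('x', j), i))
    return edges

def all_states_input_reachable(edges, n):
    adj = {}
    for tail, head, _ in edges:
        adj.setdefault(tail, []).append(head)
    queue = deque(v for v in adj if v[0] == 'u')   # start from input vertices
    reached = set()
    while queue:
        for w in adj.get(queue.popleft(), []):
            if w not in reached:
                reached.add(w)
                queue.append(w)
    return all(('x', j) in reached for j in range(n))

A1 = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]
A2 = [[0, 0, 0], [0, 0, 0], [1, 0, 0]]
B1 = [[1], [0], [0]]
B2 = [[0], [0], [0]]
print(all_states_input_reachable(colored_union_graph([A1, A2], [B1, B2]), 3))  # True
```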
**Definition 4** ([11]): _In the colored union graph \(\mathcal{G}_{c}\), \(k\) edges are said to be S-disjoint if their heads are all distinct and if all the edges that have the same tail have different color indices._
The following lemma reveals the relation between \(S\)-disjoint edges and \(\mathrm{grank}[A_{1},...,A_{N},B_{1},...,B_{N}]\).
**Lemma 2** ([11]): _There are \(n\)\(S\)-disjoint edges in \(\mathcal{G}_{c}\), if and only if \(\mathrm{grank}[A_{1},...,A_{N},B_{1},...,B_{N}]=n\)._
### _Motivation of this paper_
Liu et al. [11] propose a criterion for the structural controllability of system (1). This criterion says system (1) is structurally controllable, if and only if two conditions hold: (i) every state vertex is input-reachable in \(\mathcal{G}_{c}\), and (ii) \(\mathrm{grank}[A_{1},...,A_{N},B_{1},...,B_{N}]=n\) (see [11, Theorem 9]).
The necessity of conditions (i) and (ii) is relatively straightforward. The sufficiency, however, is not. In the proof for the sufficiency of conditions (i) and (ii), the authors of [11] intended to show that if the switched system (1) is not structurally controllable and condition (i) holds, then condition (ii) cannot hold, i.e., \(\mathrm{grank}[A_{1},...,A_{N},B_{1},...,B_{N}]<n\). To achieve this, the authors argued that if for every matrix pair \((\tilde{A},\tilde{B})\), where \(\tilde{A}=\sum_{i=1}^{N}\bar{u}_{i}\tilde{A}_{i}\), \(\tilde{B}=\sum_{i=1}^{N}\bar{u}_{i}\tilde{B}_{i}\), \((\tilde{A}_{i},\tilde{B}_{i})\) can be any realization of \((A_{i},B_{i})\), and \(\bar{u}_{1},...,\bar{u}_{N}\in\mathbb{R}\), there is a nonzero vector \(q\) such that \(q\tilde{A}=0\) and \(q\tilde{B}=0\), then \(\mathrm{grank}[A_{1},...,A_{N},B_{1},...,B_{N}]=n\) cannot hold. However, this claim is not necessarily true. The following counter-example demonstrates this.
**Example 1**: _Consider a switched system with \(N=2\), whose subsystem parameters are (\(a_{21},a_{31},b_{1}\in\mathbb{R}\)):_
\[A_{1}=\left[\begin{array}{ccc}0&0&0\\ a_{21}&0&0\\ 0&0&0\end{array}\right],B_{1}=\left[\begin{array}{ccc}b_{1}\\ 0\\ 0\end{array}\right],\]
\[A_{2}=\left[\begin{array}{ccc}0&0&0\\ 0&0&0\\ a_{31}&0&0\end{array}\right],B_{2}=\left[\begin{array}{ccc}0\\ 0\\ 0\end{array}\right].\]
_Then, \(\mathrm{grank}[A_{1},A_{2},B_{1},B_{2}]=3=n\). However, for every realization and every matrix pair \((\tilde{A},\tilde{B})\) constructed as above, the vector \(q=[0,\ \bar{u}_{2}a_{31},\ -\bar{u}_{1}a_{21}]\) (or \(q=[0,1,0]\) when this vector vanishes) is nonzero and satisfies \(q\tilde{A}=0\) and \(q\tilde{B}=0\). Hence the implication used in [11] does not hold._
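A brief numerical illustration of this point (not from the paper; a numpy sketch with arbitrary random parameter values, so the printed ranks equal the generic ones with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
a21, a31, b1 = rng.standard_normal(3)
A1 = np.array([[0, 0, 0], [a21, 0, 0], [0, 0, 0.0]])
A2 = np.array([[0, 0, 0], [0, 0, 0], [a31, 0, 0.0]])
B1 = np.array([[b1], [0], [0.0]])
B2 = np.zeros((3, 1))

# Generic rank of [A1, A2, B1, B2] is n = 3.
print(np.linalg.matrix_rank(np.hstack([A1, A2, B1, B2])))          # 3

# Any fixed combination (Abar, Bbar) admits a nonzero left annihilator q.
u1, u2 = rng.standard_normal(2)
Abar, Bbar = u1 * A1 + u2 * A2, u1 * B1 + u2 * B2
q = np.array([0.0, u2 * a31, -u1 * a21])
print(np.allclose(q @ Abar, 0), np.allclose(q @ Bbar, 0))           # True True

# Yet a few columns of the switched controllability matrix of Lemma 1 already span R^3.
R = np.hstack([B1, A1 @ B1, A2 @ B1])
print(np.linalg.matrix_rank(R))                                     # 3
```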
This observation motivates us to develop new criteria for structural controllability of switched systems, and meanwhile, to rigorously establish the sufficiency (and necessity) of conditions (i) and (ii).
## III Generalized cactus configuration and structural controllability
This section presents a novel generalized stem-bud based criterion for the structural controllability of switched systems. Based on it, the sufficiency of conditions (i) and (ii) is established. Key to this new criterion is the introduction of some novel graph-theoretic concepts, detailed in the first two subsections.
### _Multi-layer dynamic graph_
Given a subspace \(\mathcal{V}\subseteq\mathbb{R}^{n}\), let \(\Gamma_{A}\mathcal{V}=\sum_{i=0}^{n-1}A^{i}\mathcal{V}\). Let \(\langle A|B\rangle=\sum_{i=0}^{n-1}A^{i}\,\mathrm{Im}\,B\) be the controllable subspace of \((A,B)\), with \(\mathrm{Im}\,B\) being the subspace spanned by the columns of \(B\). From [4], the controllable subspace of system (1), denoted by \(\mathbf{\Omega}\), can be iteratively expressed as
\[\mathbf{\Omega}_{1}=\sum_{i=1}^{N}\langle A_{i}|B_{i}\rangle,\ \mathbf{\Omega}_{i}=\sum_{k=1}^{N}\Gamma_{A_{k}}\mathbf{\Omega}_{i-1},i=2,...,n.\]
Then, \(\mathbf{\Omega}=\mathrm{span}\mathcal{R}=\mathbf{\Omega}_{n}\).
Inspired by [4], we define the nested subspaces \(\{\mathbf{\Phi}_{i}|_{i=0}^{\infty}\}\) as
\[\begin{array}{l}\mathbf{\Phi}_{0}=\mathrm{Im}\ B_{1}+\mathrm{Im}\ B_{2}+\cdots+\mathrm{Im}\ B_{N},\\ \mathbf{\Phi}_{j}=\sum_{i=1}^{N}A_{i}\mathbf{\Phi}_{j-1},\ j=1,\ldots,\infty.\end{array}\]
Construct \(\{\mathbf{W}_{j}|_{j=-1}^{\infty}\}\) as \(\mathbf{W}_{-1}=\emptyset\), \(\mathbf{W}_{j}=\mathbf{W}_{j-1}+\mathbf{\Phi}_{j}\) for \(j=0,1,\cdots,\infty\). This implies \(\mathbf{W}_{j}=\sum_{i=0}^{j}\mathbf{\Phi}_{i}\) (\(j=0,1,\cdots,\infty\)), \(\mathbf{W}_{0}\subseteq\mathbf{W}_{1}\subseteq\cdots\subseteq\mathbf{W}_{\infty}\), and \(\mathbf{W}_{\infty}=\mathbf{\Omega}\). It turns out that \(\mathbf{W}_{j}=\mathbf{W}_{j-1}+\sum_{i=1}^{N}A_{i}\mathbf{W}_{j-1}\). Therefore, if for some \(j\), it holds \(\mathbf{W}_{j}=\mathbf{W}_{j+1}\), then \(\mathbf{W}_{j+2}=\mathbf{W}_{j+1}+\sum_{i=1}^{N}A_{i}\mathbf{W}_{j+1}=\mathbf{W}_{j}+\sum_{i=1}^{N}A_{i}\mathbf{W}_{j}=\mathbf{W}_{j+1}\), leading to \(\mathbf{W}_{k}=\mathbf{W}_{j}=\mathbf{\Omega}\) for any \(k\geq j\). This means there exists some \(l_{0}\leq n\), such that \(\mathbf{W}_{\bar{l}}=\mathbf{\Omega}\) for any \(\bar{l}\geq l_{0}\). It is worth noting that the difference between the nested subspaces \(\{\mathbf{\Phi}_{j}|_{j=0}^{\infty}\}\) and those in [4, Sec. 4.3] lies in that \(\mathbf{\Phi}_{j}\) does not necessarily contain \(\mathbf{\Phi}_{j-1}\), which is important for reducing the redundant edges in constructing the associated dynamic graphs shown below.
Corresponding to \(\{\mathbf{\Phi}_{i}|_{i=0}^{\infty}\}\), define the matrix series \(\{\Gamma_{j}|_{j=0}^{\infty}\}\) as
\[\Gamma_{0}=[B_{1},...,B_{N}],\]
\[\Gamma_{j}=[A_{i}\Gamma_{j-1}|_{i=1}^{N}],j=1,2...\]
Moreover, let
\[W_{0}=\Gamma_{0},W_{j}=[W_{j-1},\Gamma_{j}],j=1,...\infty\]
Following the above analysis, if \(\mathrm{rank}W_{j}=\mathrm{rank}W_{j+1}\) for some \(j\), then \(\mathrm{rank}W_{k}=\mathrm{rank}W_{j}\) for any \(k\geq j\). Since all the above relations hold for any numerical matrices, they must hold when '\(\mathrm{rank}\)' is replaced by 'grank' (the corresponding matrices \((A_{i},B_{i})\) become structured ones).
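A minimal sketch of this \(\Gamma_{j}/W_{j}\) iteration (not from the paper; the function name is ours) evaluates the ranks at a random realization, so that they equal the generic ranks with probability one. The number of columns grows like \(N^{j}\), so this is meant only for small examples.

```python
import numpy as np

def generic_dim_controllable_subspace(A_list, B_list):
    """Iterate Gamma_j = [A_i Gamma_{j-1}], W_j = [W_{j-1}, Gamma_j] until the rank stabilizes."""
    n = A_list[0].shape[0]
    Gamma = np.hstack(B_list)                  # Gamma_0 = [B_1, ..., B_N]
    W = Gamma                                  # W_0
    while True:
        r = np.linalg.matrix_rank(W)
        if r == n:
            return r
        Gamma = np.hstack([A @ Gamma for A in A_list])
        W_next = np.hstack([W, Gamma])
        if np.linalg.matrix_rank(W_next) == r:  # rank has stabilized
            return r
        W = W_next

# For Example 1 the controllable subspace is generically the whole space (dimension 3).
rng = np.random.default_rng(1)
a21, a31, b1 = rng.standard_normal(3)
A1 = np.array([[0, 0, 0], [a21, 0, 0], [0, 0, 0.0]])
A2 = np.array([[0, 0, 0], [0, 0, 0], [a31, 0, 0.0]])
B1 = np.array([[b1], [0], [0.0]])
B2 = np.zeros((3, 1))
print(generic_dim_controllable_subspace([A1, A2], [B1, B2]))   # 3
```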
In what follows, we construct the dynamic graphs \(\{\hat{\mathcal{G}}_{i}:i=0,1,\cdots,\bar{l}\}\) associated with \(\{W_{i}|_{i=0}^{\bar{l}}\}\) iteratively, with \(\bar{l}\geq l_{0}\doteq n-\mathrm{grank}\,\Gamma_{0}\). In the construction, we shall use the state vertex notation \(v_{\#,\circ}^{\star\bullet}\), in which the superscripts \(\star\), \(\bullet\) denote the subsystem and copy indices, while the subscripts \(\#,\circ\) indicate its index in \(X\) and the layer index, respectively; similarly for the input vertex notation \(v_{\#t,\circ}^{\star\bullet}\), in which the extra '\(t\)' indicates which subsystem this input vertex is copied from ('layer' and 'copy' shall be explained subsequently). At the beginning, \(\hat{\mathcal{G}}_{0}=(\hat{V}_{0},\hat{E}_{0})\) with \(\hat{V}_{0}=\hat{V}_{U0}\cup\hat{V}_{X0}\), \(\hat{V}_{X0}=\{v_{j0}^{00}:j=1,...,n\}\), \(\hat{V}_{U0}=\{v_{n+k,j0}^{00}:j=1,...,N,k=1,...,m_{j}\}\), and \(\hat{E}_{0}=\{(v_{n+k,j0}^{00},v_{p0}^{00}):B_{j,pk}\neq 0,j=1,...,N,k=1,...,m_{j}\}\). For \(i\geq 1\), \(\hat{\mathcal{G}}_{i}\) is obtained by adding vertices \(\hat{V}_{i}=\hat{V}_{Ui}\cup\hat{V}_{Xi}\) and the associated edges \(\hat{E}_{i}\cup\hat{E}_{i,i-1}\) to \(\hat{\mathcal{G}}_{i-1}\), where \(\hat{V}_{Xi}=\{v_{ji}^{kt}:j=1,...,n,k=1,...,N,t=1,...,N^{i-1}\}\), \(\hat{V}_{Ui}=\{v_{n+j,ti}^{kl}:k,t=1,...,N,j=1,...,m_{t},l=1,...,N^{i-1}\}\), and \(\hat{E}_{i}=\{(v_{n+k,ti}^{pl},v_{qi}^{pl}):p,t=1,...,N,l=1,...,N^{i-1},k=1,...,m_{t},B_{t,qk}\neq 0\}\). The edge set \(\hat{E}_{i,i-1}\) is defined as
\[\hat{E}_{i,i-1}=\bigcup_{\scriptsize\begin{array}{c}k=1,...,N,\\ A_{k,pj}\neq 0\end{array}}\left\{(v_{ji}^{k1},v_{p,i-1}^{11}),(v_{ji}^{k2},v_{p,i-1}^{12}),\cdots,(v_{ji}^{kN^{i-1}},v_{p,i-1}^{NN^{i-2}})\right\},\]
that is, for each \(k\in\{1,...,N\}\), there is exactly one edge from the \(t\)th vertex of \(\{v_{ji}^{k1},\cdots,v_{ji}^{kN^{i-1}}\}\) to the \(t\)th vertex of \(\hat{V}_{X,i-1}=\{v_{p,i-1}^{11},v_{p,i-1}^{12},\cdots,v_{p,i-1}^{1N^{i-2}},\cdots,v_{p,i-1}^{NN^{i-2}}\}\) whenever \(A_{k,pj}\neq 0\), for \(t=1,...,N^{i-1}\), where for \(i=1\) we set \(v_{p,0}^{q1}=v_{p0}^{00}\) for \(q=1,...,N\) and multiple edges are disabled (i.e., \(\hat{E}_{1,0}=\{(v_{j1}^{k1},v_{p0}^{00}):k=1,...,N,A_{k,pj}\neq 0\}\)). For an edge \(e=(v_{ji}^{kt},v_{p,i-1}^{k^{\prime}t^{\prime}})\in\hat{E}_{i,i-1}\), the weight \(w(e)=A_{k,pj}\), and for an edge \(e=(v_{n+k,ti}^{pl},v_{qi}^{pl})\in\hat{E}_{i}\), its weight \(w(e)=B_{t,qk}\).
We call the subgraph of \(\hat{\mathcal{G}}_{\bar{l}}\) induced by \(\hat{V}_{i}\) the \(i\)th layer, \(i=0,...,\bar{l}\). As can be seen, in the \(i\)th layer (\(i\geq 1\)), we copy the state vertex set \(\{v_{1}^{j},...,v_{n}^{j}\}\) of the \(j\)th subsystem \(N^{i-1}\) times (thus each set is called a copy), and each copy of them is connected to a copy of input vertices of the \(1,...,N\)-th subsystems. Moreover, between two successive layers, there are edges (corresponding to nonzero entries of \(A_{k}\)) from each copy of state vertices (\(N^{i-1}\) copies in total) of the \(k\)th subsystem in the \(i\)th layer to _one and only one_ copy of state vertices of the \(1,2,...,N\)th subsystems in the \((i-1)\)th layer, \(k=1,...,N\), \(i=1,...,\bar{l}\). Since \(\hat{\mathcal{G}}_{\bar{l}}\) is constructed layer by layer in this dynamic manner, we call it a multi-layer dynamic graph (MDG). For \(J\subseteq\hat{V}_{U}\) and \(I\subseteq\hat{V}_{X0}\), a collection of vertex-disjoint paths from \(J\) to \(I\) in \(\hat{\mathcal{G}}_{\bar{l}}\) is called a \(J-I\) linking, and its weight \(w(L)\) is the product of the weights of all edges it contains. Given a \(J-I\) linking \(L=\{p_{1},...,p_{k}\}\), fix orderings of \(J\) and \(I\), let \(p_{i}\) be the path starting from the \(i\)th element of \(J\), and let \(\pi(i)\) be the position of \(\mathrm{head}(p_{i})\) in the ordering of \(I\),
\(i=1,...,k\). Then, we obtain a permutation \(\pi\doteq(\pi(1),...,\pi(k))\) of \((1,2,...,k)\), and denote its sign by \(\mathrm{sign}(\pi)\in\{1,-1\}\); we also write \(\mathrm{sign}(L)\doteq\mathrm{sign}(\pi)\). Recall that the sign of a permutation is defined as \((-1)^{q}\), where \(q\) is the number of transpositions required to transform the permutation into the identity.
Let \(A_{i}(j,p)\) be the \((j,p)\)th entry of \(A_{i}\), and \(B_{i}(j,p)\) defined similarly. Observe that the \((t_{k},t_{0}^{\prime})\)th entry of \(A_{i_{k}}A_{i_{k-1}}\cdots A_{i_{1}}B_{i_{0}}\) is the sum of all products in the form of
\[A_{i_{k}}(t_{k},t_{k-1})A_{i_{k-1}}(t_{k-1},t_{k-2})\cdots A_{i_{1}}(t_{1},t_{0})B_{i_{0}}(t_{0},t_{0}^{\prime}), \tag{2}\]
where \(k\leq\bar{l}\), \(t_{k},t_{k-1},...,t_{0}=1,...,n\), \(i_{k},...,i_{0}=1,...,N\), and each involved entry in the product is nonzero. We call such a nonzero term a product term in \(W_{\bar{l}}\). By the construction of \(\hat{\mathcal{G}}_{\bar{l}}\), for each product term (2), there exists a path
\[p\doteq(v_{n+t_{0}^{\prime},i_{0},k}^{i_{1}j_{1}},v_{t_{0},k}^{i_{1}j_{1}},v_{t_{1},k-1}^{i_{2}j_{2}},\cdots,v_{t_{k-1},1}^{i_{k}j_{k}},v_{t_{k},0}^{00}) \tag{3}\]
with head in \(\hat{V}_{X0}\) and tail in \(\hat{V}_{U}\) of \(\hat{\mathcal{G}}_{\bar{l}}\), where \(j_{1},...,j_{k}\) are the corresponding copy indices in the \(k\)th,\(\cdots,1\)st layers, respectively; and vice versa, that is, any \(\hat{V}_{U}-\hat{V}_{X0}\) path, say \(p\), corresponds to a _unique_ product term in \(W_{\bar{l}}\).
Denote by \(W_{\bar{l}}(I,J)\) the submatrix of \(W_{\bar{l}}\) given by rows corresponding to \(I\) and columns corresponding to \(J\). Based on the above relation, there are one-one correspondences between rows of \(W_{\bar{l}}\) and the set \(\hat{V}_{X0}\), and between columns of \(W_{\bar{l}}\) and the set \(\hat{V}_{U}\). The following proposition relates the determinant of a square sub-matrix \(W_{\bar{l}}(I,J)\) to the \(J-I\) linkings of \(\hat{\mathcal{G}}_{\bar{l}}\).
**Proposition 1**: _Let \(W_{\bar{l}}(I,J)\) be a square-sub matrix of \(W_{\bar{l}}\). Then_
\[\det W_{\bar{l}}(I,J)=\sum_{L:\ J-I\ \mathrm{linking\ with\ size}\ |I|\ \mathrm{in}\ \hat{\mathcal{G}}_{\bar{l}}}\mathrm{sign}(L)w(L), \tag{4}\]
_where \(J\) and \(I\) are regarded as subsets of \(\hat{V}_{U}\) and \(\hat{V}_{X0}\), respectively._
**Proof:** Based on the above analysis, the entry of \(W_{\bar{l}}\) at the position \((v_{t_{k},0}^{00},v_{n+t_{0}^{\prime},i_{0},k}^{i_{1}j_{1}})\), with \(v_{t_{k},0}^{00}\in\hat{V}_{X0}\) and \(v_{n+t_{0}^{\prime},i_{0},k}^{i_{1}j_{1}}\in\hat{V}_{U}\), is expressed as
\[W_{\bar{l}}(v_{t_{k},0}^{00},v_{n+t_{0}^{\prime},i_{0},k}^{i_{1}j_{1}})=\sum_{p}w(p), \tag{5}\]
where the summation is taken over all paths \(p\) in \(\hat{\mathcal{G}}_{\bar{l}}\) from \(v_{n+t_{0}^{\prime},i_{0},k}^{i_{1}j_{1}}\) to \(v_{t_{k},0}^{00}\), i.e., all paths in the form of (3). With the understanding of \(J\subseteq\hat{V}_{U}\) and \(I\subseteq\hat{V}_{X0}\), by substituting (5) into the expression of the determinant, we have
\[\det W_{\bar{l}}(I,J)=\sum_{\pi}\mathrm{sign}(\pi)\prod_{v\in I}W_{\bar{l}}(v,\pi(v)), \tag{6}\]
where the summation is taken over all bijections \(\pi:I\to J\) such that \(\prod_{v\in I}W_{\bar{l}}(v,\pi(v))\neq 0\).
Notice that each nonzero \(\prod_{v\in I}W_{\bar{l}}(v,\pi(v))\) is the product of weights of \(|I|\) paths, each path from \(\pi(i)\in J\) to \(i\in I\), given by \(p_{\pi(i),i}\). Let \(P_{\pi}\) be the collection of such \(|I|\) paths. If two paths \(p_{\pi(i),i},p_{\pi(i^{\prime}),i^{\prime}}\in P_{\pi}\) intersect at a vertex \(v\), let \(w\) and \(\sigma\) be respectively the path from \(\pi(i)\) to \(v\) and the path from \(v\) to \(i\) in \(p_{\pi(i),i}\), and \(w^{\prime}\) and \(\sigma^{\prime}\) be the path from \(\pi(i^{\prime})\) to \(v\) and the path from \(v\) to \(i^{\prime}\) in \(p_{\pi(i^{\prime}),i^{\prime}}\). Then, two new paths are constructed by connecting \(w\) with \(\sigma^{\prime}\) and \(w^{\prime}\) with \(\sigma\), and the remaining paths remain unchanged. This produces a new collection of \(|I|\) paths, denoted by \(P_{\pi^{\prime}}\), with \(\pi^{\prime}(i)=\pi(i^{\prime})\) and \(\pi^{\prime}(i^{\prime})=\pi(i)\), leading to \(\mathrm{sign}(\pi^{\prime})=-\mathrm{sign}(\pi)\), while \(\prod_{p_{i}\in P_{\pi^{\prime}}}w(p_{i})=\prod_{p_{i}\in P_{\pi}}w(p_{i})\). It is easy to see that the correspondence between such \(P_{\pi}\) and \(P_{\pi^{\prime}}\) is one-to-one among all collections of \(|I|\) paths from \(J\) to \(I\) that are not vertex-disjoint. Consequently, all collections of \(|I|\) paths from \(J\) to \(I\) that are not vertex-disjoint will cancel out in (6). This leads to (4). \(\Box\)
**Remark 1**: _A direct corollary of Proposition 1 is that the generic dimension of controllable subspaces \(\mathrm{grank}\,\mathcal{R}\) is no more than the maximum size of a \(\hat{V}_{U}-\hat{V}_{X0}\) linking in \(\hat{\mathcal{G}}_{\bar{l}}\), \(\bar{l}\geq l_{0}\)._
### _Generalized stem, bud, and cactus walking/configuration_
Now we introduce some new graph-theoretic notions, namely, _generalized stem, generalized bud, generalized cactus configuration, and generalized cactus walking_, which extend the corresponding graph-theoretic concepts from LTI systems to switched systems. These extensions are crucial to our results.
**Definition 5** (Generalized stem): _A subgraph \(\mathcal{G}_{s}\!=\!(V_{s},E_{s})\) of the colored union graph \(\mathcal{G}_{c}\) is said to be a generalized stem, if \(\mathcal{G}_{s}\) satisfies:_
* _There is only one input vertex_ \(u\in V_{s}\) _and no cycle;_
* _Each state vertex_ \(x\in V_{s}\backslash\{u\}\) _has exactly one ingoing edge (thus_ \(|E_{s}|=|V_{s}|-1\)_);_
* _All edges of_ \(E_{s}\) _are S-disjoint._
**Definition 6** (Generalized bud): _A subgraph \(\mathcal{G}_{s}=(V_{s},E_{s})\) of the colored union graph \(\mathcal{G}_{c}\) is said to be a generalized bud, if \(\mathcal{G}_{s}\) satisfies:_
* _There is no input vertex in_ \(V_{s}\) _and only one cycle;_
* _Every state vertex_ \(x\in V_{s}\) _is input-reachable in_ \(\mathcal{G}_{c}\)_;_
* _Each state vertex_ \(x\in V_{s}\) _has exactly one ingoing edge (thus_ \(|E_{s}|=|V_{s}|\)_);_
* _All edges of_ \(E_{s}\) _are S-disjoint._
Fig. 2 presents an example of a generalized stem and a generalized bud. It can be verified that when \(\mathcal{G}_{c}\) is the system digraph of an LTI system (i.e., without being colored), the generalized stem collapses to a conventional stem, and the generalized bud collapses to a cycle which is input-reachable.1
Footnote 1: It should be mentioned that in the original definition, a bud includes an edge that connects a cycle with a vertex out of this cycle [20]. We do not include this edge in extending this definition for the sake of description simplicity.
**Definition 7**: _A subset \(X_{s}\subset X\) is said to be covered by a generalized cactus configuration, if there is a collection of vertex-disjoint generalized stems and generalized buds of \(\mathcal{G}_{c}\) whose vertex sets jointly cover \(X_{s}\)._ Given a walk \(p=(e_{1},...,e_{k})\), with \(e^{c}\) denoting the edge \(e\) with its direction reversed, the reversed walk of \(p\) is denoted
by \(p^{c}=(e_{k}^{c},...,e_{1}^{c})\), i.e., obtained by reversing the direction of this walk. Given two walks \(p_{1}=(v_{i_{1}},...,v_{i_{k}})\) and \(p_{2}=(v_{j_{1}},...,v_{j_{k^{\prime}}})\), their _first intersection vertex_, given by \(\operatorname{ins}(p_{1},p_{2})\), is the first vertex of \(p_{1}\) that also lies on \(p_{2}\), i.e., \(\operatorname{ins}(p_{1},p_{2})=v_{i_{k^{\star}}}\), with \(k^{\star}=\min\{q:i_{q}=j_{q^{\prime}}\ \text{for some}\ q^{\prime}\in\{1,...,k^{\prime}\}\}\). Note that \(\operatorname{ins}(p_{1},p_{2})=\operatorname{ins}(p_{2},p_{1})\). It follows that if \(p_{1}\) and \(p_{2}\) are vertex disjoint, then \(\operatorname{ins}(p_{1},p_{2})=\emptyset\). In addition, if \(\operatorname{ins}(p_{1},p_{2})=v_{i_{k^{\star}}}\neq\emptyset\), we use \(\operatorname{Pins}(p_{1}\setminus p_{2})\) to denote the sub-walk of \(p_{1}\) from \(v_{i_{1}}\) to \(\operatorname{ins}(p_{1},p_{2})\), i.e., \(\operatorname{Pins}(p_{1}\setminus p_{2})=(v_{i_{1}},v_{i_{2}},...,v_{i_{k^{\star}}})\). Similarly, \(\operatorname{Pins}(p_{2}\setminus p_{1})\) denotes the sub-walk of \(p_{2}\) from \(v_{j_{1}}\) to \(\operatorname{ins}(p_{2},p_{1})\).
An input-state walk is a walk of \(\mathcal{G}_{c}\) with head in \(X\) and tail in \(U\). For an input-state walk \(p=(v_{j_{1}},v_{j_{2}},...,v_{j_{k^{\prime}}})\), it is easy to see that there is a unique path \((v_{j_{1},k^{\prime}-1}^{\mathfrak{s}},v_{j_{2},k^{\prime}-2}^{\mathfrak{s}},...,v_{j_{k^{\prime}-1},1}^{\mathfrak{s}},v_{j_{k^{\prime}},0}^{00})\) from \(\hat{V}_{U}\) to \(\hat{V}_{X0}\) in \(\hat{\mathcal{G}}_{\bar{l}}\), where \(\bar{l}\geq|p|\), and \(\mathfrak{s}\) denotes the omitted copy indices. We call such a path the _MDG-path_ of \(p\), and use \(\hat{p}\) to denote the MDG-path of \(p\) (in the corresponding dynamic graph \(\hat{\mathcal{G}}_{\bar{l}}\) for any \(\bar{l}\geq|p|\)). _All notations introduced for \(p\) (such as \(\operatorname{Pins}(\cdot)\)) are also valid for \(\hat{p}\)_. Specifically, the color index of an edge \(e\in\hat{E}_{i,i-1}\) is the color index \(k\) of the corresponding edge of \(\mathcal{G}_{c}\).
For two walks \(p_{1}\) and \(p_{2}\), if \(\operatorname{head}(p_{1})=\operatorname{tail}(p_{2})\), \(p_{1}\lor p_{2}\) denotes the walk obtained by appending \(p_{2}\) to \(p_{1}\). For notation simplicity, we write \((p_{1}\lor p_{2})\lor p_{3}\) as \(p_{1}\lor p_{2}\lor p_{3}\). If \(\operatorname{tail}(p)=\operatorname{head}(p)\), define
\[p^{\wedge k}\doteq\underbrace{p\lor p\lor\cdots\lor p}_{k\text{ times}}.\]
**Definition 8**: _A collection of input-state walks \(\{p_{1},...,p_{k}\}\) is called a generalized cactus walking (with size \(k\)), if the corresponding MDG-paths \(\{\hat{p}_{1},...,\hat{p}_{k}\}\) form a linking in the MDG \(\hat{\mathcal{G}}_{l}\), with \(\hat{l}\geq\max\{|p_{1}|,...,|p_{k}|\}\). The head of a generalized cactus walking is the set of heads of its walks._
**Remark 2**: _For an LTI system, say \((A_{i},B_{i})\), a set \(V_{Xi}^{\prime}\subseteq V_{Xi}\) is said to be covered by a cactus configuration [20], if (i) every vertex \(v\in V_{Xi}^{\prime}\) is input-reachable in \(\mathcal{G}_{i}\), and (ii) \(V_{Xi}^{\prime}\) is covered by a collection of vertex-disjoint stems and cycles of \(\mathcal{G}_{i}\). Suppose \(V_{Xi}^{\prime}\) is covered by a cactus configuration. As we shall show, this cactus configuration can naturally introduce \(|V_{Xi}^{\prime}|\) input-state walks that form a generalized cactus walking with size \(|V_{Xi}^{\prime}|\)[22]. This is why the terminologies _generalized cactus configuration_ in Definition 7 and _generalized cactus walking_ in Definition 8 are used.
The following property of walks is useful to show the vertex-disjointness of their corresponding MDG-paths.
**Lemma 3**: _For any two input-state walks \(p_{i}\) and \(p_{j}\) (\(i\neq j\)), their corresponding MDG-paths \(\hat{p}_{i},\hat{p}_{j}\) are vertex-disjoint, if either \(p_{i}\) and \(p_{j}\) are vertex-disjoint, or if they intersect, then \(\operatorname{Pins}(p_{i}^{c}\setminus p_{j}^{c})\) and \(\operatorname{Pins}(p_{j}^{c}\setminus p_{i}^{c})\) have distinct color indices._
**Proof:** If \(p_{i}\) and \(p_{j}\) are vertex-disjoint in \(\mathcal{G}_{c}\), then \(\hat{p}_{i}\) and \(\hat{p}_{j}\) are obviously so in \(\hat{\mathcal{G}}_{\bar{l}}\) by definition. If \(\operatorname{ins}(\hat{p}_{i}^{c},\hat{p}_{j}^{c})\neq\emptyset\), or equivalently, \(\hat{p}_{i}\) and \(\hat{p}_{j}\) intersect, then we have \(\operatorname{clo}(\operatorname{Pins}(\hat{p}_{i}^{c}\setminus\hat{p}_{j}^{c}))=\operatorname{clo}(\operatorname{Pins}(\hat{p}_{j}^{c}\setminus\hat{p}_{i}^{c}))\), which follows from the fact that all outgoing edges from the same copy of the state vertex set are injected into the same copy of the state vertex set in the lower layer. As a result, \(\operatorname{clo}(\operatorname{Pins}(p_{i}^{c}\setminus p_{j}^{c}))=\operatorname{clo}(\operatorname{Pins}(p_{j}^{c}\setminus p_{i}^{c}))\), which contradicts the fact that \(\operatorname{Pins}(p_{i}^{c}\setminus p_{j}^{c})\) and \(\operatorname{Pins}(p_{j}^{c}\setminus p_{i}^{c})\) have distinct color indices. \(\square\)
The following result reveals that each generalized stem (bud) can induce a generalized cactus walking, which is crucial for Theorem 1.
**Proposition 2**: _The following statements are true:_
_(1) If \(\mathcal{G}_{c}\) contains a generalized stem that covers \(X_{s}\subseteq X\), then there is a generalized cactus walking whose head is \(X_{s}\)._
_(2) If \(\mathcal{G}_{c}\) contains a generalized bud that covers \(X_{s}\subseteq X\), then there is a generalized cactus walking whose head is \(X_{s}\), and the length of the shortest walk in it can be arbitrarily large._
**Proof:** For a generalized stem, denoted as \(\mathcal{G}_{\mathrm{stem}}\), suppose it consists of vertices \(u,v_{1},...,v_{k}\), where \(u\) is the unique input vertex. Since there is no cycle in \(\mathcal{G}_{\mathrm{stem}}\) and each state vertex has only one ingoing edge, there is a unique path from \(u\) to each of \(v_{1},...,v_{k}\), denoted by \(p_{uv_{1}},...,p_{uv_{k}}\), respectively. We are to show that \(\{p_{uv_{1}},...,p_{uv_{k}}\}\) is a generalized cactus walking. To this end, for any two \(p_{uv_{i}}\) and \(p_{uv_{j}}\) (\(i\neq j\)), consider a schedule of two walkers along the paths \(p_{uv_{j}}^{c}\) and \(p_{uv_{i}}^{c}\) satisfying the following rules:_
* _At the time_ \(t=0\)_, they are located at_ \(v_{j}\) _and_ \(v_{i}\)_, respectively;_
* _The walker starting from_ \(v_{j}\) _(called walker_ \(j\)_; similarly for_ \(v_{i}\)_) that is located at vertex_ \(v_{k_{1}}\) _at time_ \(t\) _must move along the path_ \(p_{uv_{j}}^{c}\) _(resp._ \(p_{uv_{i}}^{c}\)_) to a neighboring vertex_ \(v_{k_{2}}\) _such that_ \((v_{k_{1}},v_{k_{2}})\) _is an edge of_ \(p_{uv_{j}}^{c}\) _(resp._ \(p_{uv_{i}}^{c}\)_);_
* _In case a walker reaches_ \(u\) _at time_ \(t\)_, it will leave_ \(u\) _at time_ \(t+1\)_._
_It can be seen that if the walker \(j\) located at vertex \(v_{k_{1}}\) at time \(t\), then the corresponding MDG-path \(\hat{p}_{uv_{j}}\) will pass through \(v_{
Step 1: Suppose there are \(s\in\mathbb{N}\) generalized stems, given by \(\mathcal{G}^{1}_{\rm stem},...,\mathcal{G}^{s}_{\rm stem}\), and \(d\in\mathbb{N}\) generalized buds, given by \(\mathcal{G}^{1}_{\rm bud},...,\mathcal{G}^{d}_{\rm bud}\), all of them vertex-disjoint. Suppose the vertex set of \(\mathcal{G}^{i}_{\rm bud}\) is \(\{v_{i1},...,v_{id_{i}}\}\), for \(i=1,...,d\) (hence \(d_{i}\) is the number of vertices in \(\mathcal{G}^{i}_{\rm bud}\)), and for \(i=1,...,s\), the vertex set of \(\mathcal{G}^{i}_{\rm stem}\) is \(\{v^{\prime}_{i0},v^{\prime}_{i1},...,v^{\prime}_{is_{i}}\}\), in which \(v^{\prime}_{i0}\) is the unique input vertex (hence \(s_{i}\) is the number of _state_ vertices in \(\mathcal{G}^{i}_{\rm stem}\)). Denote the unique path from \(v^{\prime}_{i0}\) to \(v^{\prime}_{ik}\) in \(\mathcal{G}^{i}_{\rm stem}\) by \(p_{v^{\prime}_{i0}v^{\prime}_{ik}}\), \(i=1,...,s\), \(k=1,...,s_{i}\). It is not difficult to see that we can extend the union of \(\{\mathcal{G}^{i}_{\rm stem}\}_{i=1}^{s}\) and \(\{\mathcal{G}^{i}_{\rm bud}\}_{i=1}^{d}\) to a subgraph \(\mathcal{G}_{\rm cact}\) of \(\mathcal{G}_{c}\) such that there is a unique path (in \(\mathcal{G}_{\rm cact}\)) from some input vertex \(v_{i0}\in U\) to every state vertex of the cycle in \(\mathcal{G}^{i}_{\rm bud}\), for each \(i\in\{1,...,d\}\). Without loss of generality, assume that \(\{\mathcal{G}^{i}_{\rm bud}:i=1,...,d\}\) are partially ordered as \(\mathcal{G}^{1}_{\rm bud}\preceq\mathcal{G}^{2}_{\rm bud}\preceq\cdots\preceq\mathcal{G}^{d}_{\rm bud}\) such that in \(\mathcal{G}_{\rm cact}\), there is no edge starting from \(\mathcal{G}^{k}_{\rm bud}\) to \(\mathcal{G}^{j}_{\rm bud}\) if \(k>j\). Moreover, assume that \(v_{i1}\) is the head of the shortest path (in \(\mathcal{G}_{\rm cact}\)) from the input vertex \(v_{i0}\) to a vertex of the cycle of \(\mathcal{G}^{i}_{\rm bud}\), and denote the path from \(v_{i0}\) to \(v_{i1}\) in \(\mathcal{G}_{\rm cact}\) by \(p_{v_{i0}\cdots v_{i1}}\), for \(i=1,...,d\). The cycle from \(v_{i1}\) to \(v_{i1}\) in \(\mathcal{G}^{i}_{\rm bud}\) is denoted by \(p_{v_{i1}\cdots v_{i1}}\), and the shortest path from \(v_{i1}\) to \(v_{ik}\) is denoted by \(p_{v_{i1}\cdots v_{ik}}\), \(k=1,...,d_{i}\). Then, construct a collection of walks \(\{p^{q_{i}}_{v_{i0}v_{ik}}:i=1,...,d,\;k=1,...,d_{i}\}\) as
\[p^{q_{i}}_{v_{i0}v_{ik}}=p_{v_{i0}\cdots v_{i1}}\lor p^{\wedge q_{i}}_{v_{i1}\cdots v_{i1}}\lor p_{v_{i1}\cdots v_{ik}},\]
where \(q_{1},...,q_{d}\) are defined as follows:
\[q_{1}=\max\{s_{1},...,s_{s}\},\qquad q_{i}=\max\{|p^{q_{i-1}}_{v_{(i-1)0}v_{(i-1)1}}|,...,|p^{q_{i-1}}_{v_{(i-1)0}v_{(i-1)d_{i-1}}}|\},\quad i=2,...,d.\]
We are to show that the collection of walks \(\{p_{v^{\prime}_{i0}v^{\prime}_{ik}}:i=1,...,s,\;k=1,...,s_{i}\}\cup\{p^{q_{i}}_{v_{i0}v_{ik}}:i=1,...,d,\;k=1,...,d_{i}\}\) constructed above forms a generalized cactus walking. Due to the vertex-disjointness of \(\{\mathcal{G}^{i}_{\rm stem}\}_{i=1}^{s}\), it is obvious that the MDG-paths \(\{\hat{p}_{v^{\prime}_{i0}v^{\prime}_{ik}}:i=1,...,s,\;k=1,...,s_{i}\}\) are vertex-disjoint. The vertex-disjointness of the MDG-paths \(\{\hat{p}^{q_{i}}_{v_{i0}v_{ik}}:k=1,...,d_{i}\}\) within each generalized bud \(\mathcal{G}^{i}_{\rm bud}\) has been demonstrated in Proposition 2 for any \(q_{i}\geq 0\). Given \(j\in\{1,...,d\}\), the disjointness between any path in \(\{\hat{p}^{q_{j}}_{v_{j0}v_{jk}}:k=1,...,d_{j}\}\), say \(\hat{p}_{j}\), and any path in \(\{\hat{p}_{v^{\prime}_{i0}v^{\prime}_{ik}}:i=1,...,s,\;k=1,...,s_{i}\}\cup\{\hat{p}^{q_{i}}_{v_{i0}v_{ik}}:i=1,...,j-1,\;k=1,...,d_{i}\}\), say \(\hat{p}_{j^{\prime}}\), is demonstrated as follows. Observe that the number \(q_{j}\) of repeated cycles \(p_{v_{j1}\cdots v_{j1}}\) in \(p_{j}\) is no less than \(|p_{j^{\prime}}|\) by the construction of \(\{q_{i}:i=1,...,d\}\). As a result, each of the first \(|p_{j^{\prime}}|\) vertices in \(\hat{p}^{c}_{j}\) is different from any vertex in \(\hat{p}^{c}_{j^{\prime}}\). Since \(\hat{p}^{c}_{j^{\prime}}\) terminates at the \(|\hat{p}_{j^{\prime}}|\)th layer in the corresponding MDG, the last \(|\hat{p}_{j}|-|\hat{p}_{j^{\prime}}|\) vertices of \(\hat{p}^{c}_{j}\) lie in different layers from any vertex of \(\hat{p}^{c}_{j^{\prime}}\), and hence cannot intersect it. Therefore, \(\hat{p}_{j}\) and \(\hat{p}_{j^{\prime}}\) are vertex-disjoint. Taken together, we obtain that \(L\doteq\{\hat{p}_{v^{\prime}_{i0}v^{\prime}_{ik}}:i=1,...,s,\;k=1,...,s_{i}\}\cup\{\hat{p}^{q_{i}}_{v_{i0}v_{ik}}:i=1,...,d,\;k=1,...,d_{i}\}\) is a linking with \({\rm head}(L)=\hat{V}_{X0}\) in the MDG \(\hat{\mathcal{G}}_{\bar{l}}\), where \(\bar{l}\) is the largest length of a path in \(L\).
Step 2: We are to demonstrate that \({\rm sign}(L)w(L)\) cannot be canceled out in (4), in which \(I=\hat{V}_{X0}\), \(J={\rm tail}(L)\), and \(W_{\bar{l}}\) (as well as \(\hat{\mathcal{G}}_{\bar{l}}\)) corresponds to the structured switched system associated with \(\mathcal{G}_{\rm cact}\) (i.e., preserving the edges in \(\mathcal{G}_{\rm cact}\) and removing those not in \(\mathcal{G}_{\rm cact}\)). Let \(S_{i}\subseteq\hat{V}_{U}\) and \(T_{i}\subseteq\hat{V}_{X0}\) be respectively the set of tails and heads of paths in \(L_{i}\doteq\{\hat{p}^{q_{i}}_{v_{i0}v_{ik}}:k=1,...,d_{i}\}\), and \(S^{\prime}_{i}\subseteq\hat{V}_{U}\) and \(T^{\prime}_{i}\subseteq\hat{V}_{X0}\) be respectively the set of tails and heads of paths in \(L^{\prime}_{i}\doteq\{\hat{p}_{v^{\prime}_{i0}v^{\prime}_{ik}}:k=1,...,s_{i}\}\). Then, \(w(L)=\prod_{i=1}^{d}w(L_{i})\prod_{i=1}^{s}w(L^{\prime}_{i})\). By Proposition 2 and Step 1, as well as the construction of the MDG \(\hat{\mathcal{G}}_{\bar{l}}\), if there is a path from a vertex \(v\in S^{\prime}_{i}\) (resp. \(v\in S_{i}\)) to a vertex \(v^{\prime}\in T^{\prime}_{i}\) (resp. \(v^{\prime}\in T_{i}\)), then this path is the unique path from \(v\) to \(v^{\prime}\) in \(\hat{\mathcal{G}}_{\bar{l}}\). In addition, there is no path from any \(v^{\prime\prime}\in S^{\prime}_{i}\backslash\{v\}\) (resp. \(v^{\prime\prime}\in S_{i}\backslash\{v\}\)) to \(v^{\prime}\), and no path from \(v\) to any \(v^{\prime\prime}\in T^{\prime}_{i}\backslash\{v^{\prime}\}\) (resp. \(v^{\prime\prime}\in T_{i}\backslash\{v^{\prime}\}\)).
(if such an edge does not exist, then no generalized stem can be found). Add \(e_{1}\) to \(E_{\rm stem}\). Then, find all edges \(e_{2}\in E_{2}\doteq\{e\in E_{S}:{\rm tail}(e)={\rm head}(e_{1})\}\), and add them to \(E_{\rm stem}\). Next, add all edges \(e_{3}\in E_{3}\doteq\{e\in E_{S}:{\rm tail}(e)={\rm head}(e_{2}),e_{2}\in E_{2}\}\) to \(E_{\rm stem}\). Repeat this procedure until the set \(E_{i+1}=\{e\in E_{S}:{\rm tail}(e)={\rm head}(e_{i}),e_{i}\in E_{i}\}\) is empty for some \(i\) (where \(E_{1}=\{e_{1}\}\)). Let \(V_{\rm stem}\) be the set of end vertices of edges in \(E_{\rm stem}\). Then, it can be verified that all three conditions in Definition 5 are satisfied for \(\mathcal{G}_{\rm stem}=(V_{\rm stem},E_{\rm stem})\). Hence, \(\mathcal{G}_{\rm stem}\) is a generalized stem. Other possible generalized stems can be constructed from \(E_{S}\backslash E_{\rm stem}\) in a similar way.
After finding all generalized stems from \(E_{S}\), let \(E_{S}^{0}\) be the subset of \(E_{S}\) obtained by removing all edges belonging to the generalized stems. A generalized bud \(\mathcal{G}_{\rm bud}=(V_{\rm bud},E_{\rm bud})\) can then be constructed as follows. Pick an arbitrary \(e_{1}\in E_{S}^{0}\) and add it to \(E_{\rm bud}\). Let \(E_{1}=\{e_{1}\}\) and \(E_{S}^{1}=E_{S}^{0}\backslash E_{1}\). Find \(E_{2}=\{e\in E_{S}^{1}:{\rm head}(e)={\rm tail}(e_{1}),\ {\rm or}\ {\rm tail}(e)={\rm head}(e_{1})\}\), add \(E_{2}\) to \(E_{\rm bud}\), and let \(E_{S}^{2}=E_{S}^{1}\backslash E_{2}\). Next, find \(E_{3}=\{e\in E_{S}^{2}:{\rm head}(e)={\rm tail}(e_{2}),\ {\rm or}\ {\rm tail}(e)={\rm head}(e_{2}),e_{2}\in E_{2}\}\), add \(E_{3}\) to \(E_{\rm bud}\), and update \(E_{S}^{3}=E_{S}^{2}\backslash E_{3}\). Repeat this procedure until the set \(E_{i+1}=\{e\in E_{S}^{i}:{\rm head}(e)={\rm tail}(e_{i}),\ {\rm or}\ {\rm tail}(e)={\rm head}(e_{i}),e_{i}\in E_{i}\}\) is empty. Let \(V_{\rm bud}\) be the set of end vertices of edges in \(E_{\rm bud}\). It turns out that there is at least one cycle in \(\mathcal{G}_{\rm bud}=(V_{\rm bud},E_{\rm bud})\). If not, since every vertex \(x\in V_{\rm bud}\) has exactly one ingoing edge in \(E_{\rm bud}\), the above-mentioned procedure could not terminate. On the other hand, if there were two or more cycles in \(\mathcal{G}_{\rm bud}\), then at least one vertex would have two or more ingoing edges, a contradiction to the S-disjointness condition. Therefore, \(\mathcal{G}_{\rm bud}\) is a generalized bud. Other generalized buds can be found similarly.
(c)\(\Rightarrow\)(a): From Proposition 1, if system (1) is structurally controllable, i.e., \({\rm grank}W_{n}=n\), then there is a \(\hat{V}_{U}-\hat{V}_{X0}\) linking of size \(n\) in \(\hat{\mathcal{G}}_{n}\). By the construction of \(\hat{\mathcal{G}}_{n}\), every state vertex \(v_{i0}^{00}\in\hat{V}_{X0}\) is the head of a path from \(\hat{V}_{U}\), which leads to the necessity of condition (a.i). Moreover, there are \(n\) vertex-disjoint edges between \(\hat{V}_{U}\cup\hat{V}_{X1}\) and \(\hat{V}_{X0}\), which implies the necessity of condition (a.ii) via Lemma 2.
Since (b)\(\Rightarrow\)(c) has been proved in Theorem 1, the equivalence among (a), (b) and (c) follows from the above analysis immediately. \(\Box\)
**Remark 3**: _The proof '(a)\(\Leftrightarrow\)(b)' above implies that a generalized cactus configuration covering \(X\) can be uniquely determined by a set of \(n\) S-disjoint edges, and vice versa (given that all \(v\in X\) are input-reachable)._
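As a quick illustration of the stem-growing procedure used in the proof of '(a)\(\Rightarrow\)(b)' above, the following Python sketch grows one generalized stem from a given set of S-disjoint edges. It is our own illustrative rendering, not code from the paper: edges are assumed to be stored as `(tail, head)` pairs, all names are arbitrary, and the analogous bud-growing loop is omitted.

```python
def grow_generalized_stem(E_S, inputs):
    """E_S: set of S-disjoint edges as (tail, head) pairs; inputs: the input vertex set U."""
    seeds = [e for e in E_S if e[0] in inputs]
    if not seeds:
        return None                                          # no generalized stem rooted in E_S
    e1 = seeds[0]
    E_stem, frontier = {e1}, {e1[1]}                          # frontier = heads reached so far
    remaining = set(E_S) - {e1}
    while True:
        grown = {e for e in remaining if e[0] in frontier}    # edges whose tails were just reached
        if not grown:
            return E_stem
        E_stem |= grown
        frontier = {e[1] for e in grown}
        remaining -= grown
```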
Below we provide two examples to illustrate Theorem 2.
**Example 2**: _Consider a switched system with \(N=2\) subsystems and \(n=7\) state variables. The colored union graph \(\mathcal{G}_{c}\) is given in Fig. 3(a). It turns out that \(\mathcal{G}_{c}\) can be spanned by the union of a generalized stem Fig. 3(b) and a generalized bud Fig. 3(c). From Theorem 2, this switched system is structurally controllable._
**Example 3**: _Consider a switched system with \(N=2\) and \(n=10\). The colored union graph \(\mathcal{G}_{c}\) is given in Fig. 4(a), where there is no multiple edge (thus the subsystem digraphs \(\mathcal{G}_{1}\) and \(\mathcal{G}_{2}\) can be uniquely extracted from it). Let \((A_{1},B_{1})\) and \((A_{2},B_{2})\) be respectively the system matrices corresponding to \(\mathcal{G}_{1}\) (solid edges) and \(\mathcal{G}_{2}\) (dotted edges). It can be verified that \({\rm grank}[A_{1},A_{2},B_{1},B_{2}]=9\), implying that this system is not structurally controllable via Theorem 2. Furthermore, Fig. 4(b) shows that the vertices \(\{v_{1},...,v_{6},v_{9},v_{10}\}\) can be covered by a generalized cactus configuration. It follows from Corollary 1 that \({\rm grank}\mathcal{R}\geq 8\). A lower bound of \({\rm grank}\mathcal{R}\geq 6\) can also be obtained from Fig. 4(c), which shows that \(\mathcal{G}_{c}\) contains a _conventional_ cactus configuration covering \(\{v_{1},...,v_{5},v_{9}\}\). A further inspection of the MDG \(\hat{\mathcal{G}}_{\bar{l}}\) yields that any \(\hat{V}_{U}-\hat{V}_{X0}\) linking in \(\hat{\mathcal{G}}_{\bar{l}}\) for any \(\bar{l}\geq 2\) has a size of at most \(8\). Combining this with the lower bound from Fig. 4(b), we obtain \({\rm grank}\mathcal{R}=8\)._
Example 3 shows that the lower bound for \({\rm grank}\mathcal{R}\) provided by Corollary 1 can be tighter than the one provided by \({\rm grank}[A_{1},...,A_{N},B_{1},...,B_{N}]\). Motivated by this, we conjecture that the maximum size of state vertices that can be covered by a generalized cactus configuration in \(\mathcal{G}_{c}\) and the maximum size of a \(\hat{V}_{U}-\hat{V}_{X0}\) linking in \(\hat{\mathcal{G}}_{l}\) (\(\bar{l}\geq n\)) always coincide. If this conjecture is true, then Corollary 1 can determine the exact generic dimension of controllable subspaces of switched systems. Notice that when \({\rm grank}\mathcal{R}=n\), or when \(N=1\) (i.e., for LTI systems), this conjecture is true [21, 23].
## IV Extension to discrete-time systems
We extend the previous results to reversible switched discrete-time systems. This leads to not only simplified necessary and sufficient conditions for the structural controllability, but also the determination of the generic dimension of controllable subspaces.
Consider a switched discrete-time system governed by
\[x(k+1)=A_{\sigma(k)}x(k)+B_{\sigma(k)}u(k), \tag{7}\]
where \(\sigma:\{0,1,...\}\rightarrow\{1,2,...,N\}\) is the switching path, \(x(k)\in\mathbb{R}^{n}\), \(u(k)\in\mathbb{R}^{m_{\sigma(k)}}\). Like in the continuous-time case, \((A_{i},B_{i})\) is called the subsystem of (7), \(i=1,...,N\). In a switching path \(\sigma\), \(\sigma(k)=i\) implies that subsystem \((A_{i},B_{i})\) is chosen as the system realization.
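For concreteness, a minimal numpy sketch of the dynamics in (7) is given below. It is not part of the paper; the matrices, the switching path, and the input sequence are placeholders, and subsystem indices are 0-based in the code while the paper uses \(1,...,N\).

```python
import numpy as np

def simulate(A, B, sigma, u, x0):
    """A, B: lists of subsystem matrices (A_i, B_i); sigma: switching path; u: input sequence."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x]
    for k, i in enumerate(sigma):                # at step k, subsystem (A_i, B_i) is active
        x = A[i] @ x + B[i] @ np.asarray(u[k], dtype=float)
        trajectory.append(x)
    return trajectory
```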
**Definition 9**: _[_6_]_ _System (7) is said to be controllable, if for any \(x\in\mathbb{R}^{n}\), there exists a finite \(M\), a switching path \(\sigma(k):\{0,...,M-1\}\rightarrow\{1,...,N\}\), and input \(u(k)\in\mathbb{R}^{m_{\sigma(k)}}\) for \(k=0,...,M-1\), such that \(x(0)=x\) and \(x(M)=0\)._
Like [8, 24], we assume that system (7) is reversible, i.e., \(A_{i}\) is non-singular for \(i=1,...,N\). According to [25], it is possible to represent any causal discrete-time system (input-output) through a reversible state variable representation. Moreover, sampled data systems are naturally reversible. Therefore, reversible system representation is very general and applicable to a wide range of systems. On the other
Fig. 4: (a): \(\mathcal{G}_{c}\) (no multiple edges) of a switched system with \(N=2\) and \(n=10\). (b): a generalized cactus configuration in \(\mathcal{G}_{c}\) (depicted in bold red lines). (c): a conventional cactus configuration in \(\mathcal{G}_{c}\) (bold red lines).
Fig. 3: (a): \(\mathcal{G}_{c}\) of a switched system with \(N=2\) and \(n=7\). No multiple edges exist in \(\mathcal{G}_{c}\). (b) and (c): a generalized stem and a generalized bud that span \(\mathcal{G}_{c}\).
hand, as shown in [6, Theorems 1 and 2], criteria for controllability of non-reversible systems require checking an infinite number of switching paths. For the class of reversible systems, in contrast, controllability can be characterized as follows.
**Lemma 4**: _For a reversible system (7), it is controllable if and only if the following controllability matrix \(\mathcal{R}\) has full row rank:_
\[\mathcal{R}=\left[A_{i_{n}}^{j_{n}}A_{i_{n-1}}^{j_{n-1}}\cdots A_{i_{1}}^{j_{1}}B_{i_{1}}\right]_{i_{1},\ldots,i_{n}=1,\ldots,N}^{j_{1},\ldots,j_{n}=0,\ldots,n-1}\]
_Moreover, the dimension of the controllable subspaces is \(\mathrm{rank}\mathcal{R}\)._
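A brute-force numerical check of Lemma 4 is straightforward for very small \(n\) and \(N\): assemble the blocks of \(\mathcal{R}\) for a random realization of the structured matrices and take the matrix rank, which (with probability one) equals the generic dimension. The enumeration below grows exponentially and is only meant as a sanity-check sketch; function and variable names are our own.

```python
import itertools
import numpy as np

def controllability_matrix(A, B, n):
    """A, B: lists of the N subsystem matrices; returns the matrix R of Lemma 4."""
    N = len(A)
    blocks = []
    for i_seq in itertools.product(range(N), repeat=n):       # indices i_1, ..., i_n
        for j_seq in itertools.product(range(n), repeat=n):   # exponents j_1, ..., j_n in {0, ..., n-1}
            M = B[i_seq[0]]
            for i, j in zip(i_seq, j_seq):                     # builds A_{i_n}^{j_n} ... A_{i_1}^{j_1} B_{i_1}
                M = np.linalg.matrix_power(A[i], j) @ M
            blocks.append(M)
    return np.hstack(blocks)

# np.linalg.matrix_rank(controllability_matrix(A, B, n)) then estimates grank R,
# provided A, B carry independent random entries on their fixed zero patterns.
```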
Structural controllability of the discrete-time system (7) is defined in the same way as it is for the continuous-time system (1). Observe that the controllability matrix in Lemma 4 is of the same form as that in Lemma 1. This means the methods and results in the previous section can be directly applied to reversible switched discrete-time systems. However, by leveraging the reversible structure, some deeper insights can be obtained. In what follows, all graph-theoretic notions and notations are the same as in the continuous-time case.
**Theorem 3**: _The generic dimension of controllable subspaces of the reversible system (7) equals the number of input-reachable state vertices in \(\mathcal{G}_{c}\)._
**Proof:** Let \(X_{\mathrm{re}}\subseteq X\) be the input-reachable state vertex set in \(\mathcal{G}_{c}\). By the construction of \(W_{\bar{l}}\) (\(\bar{l}\geq n\)), any linking from \(\hat{V}_{U}\) to \(\hat{V}_{X0}\) has a size upper bounded by \(|X_{\mathrm{re}}|\). From Proposition 1, \(\mathrm{grank}\mathcal{R}\) is no more than the maximum size of a \(\hat{V}_{U}-\hat{V}_{X0}\) linking in \(\hat{\mathcal{G}}_{\bar{l}}\), and is thus upper bounded by \(|X_{\mathrm{re}}|\).
Since system (7) is reversible, \(\mathrm{grank}A_{i}=n\) for \(i=1,...,N\). It follows that there are \(n\) S-disjoint edges consisting of only edges from \(E_{XX}\) in \(\mathcal{G}_{c}\). From the proof '(a)\(\Rightarrow\)(b)' of Theorem 2, \(\mathcal{G}_{c}\) contains a collection of vertex-disjoint cycles covering \(X\). Notice that all vertices of a cycle are either input-reachable simultaneously, or not input-reachable simultaneously. Therefore, \(X_{\mathrm{re}}\) can be covered by a collection of input-reachable vertex-disjoint cycles (generalized buds). It follows from Corollary 1 that \(\mathrm{grank}W_{\bar{l}}\geq|X_{\mathrm{re}}|\), i.e., \(\mathrm{grank}\mathcal{R}\geq|X_{\mathrm{re}}|\). Taking both bounds together, we obtain \(\mathrm{grank}\mathcal{R}=|X_{\mathrm{re}}|\). \(\Box\)
**Corollary 2**: _The reversible system (7) is structurally controllable, if and only if each \(x\in X\) is input-reachable in \(\mathcal{G}_{c}\)._
**Proof:** Immediate from Theorem 3.
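Corollary 2 reduces the structural controllability test for reversible systems to a plain reachability computation on \(\mathcal{G}_{c}\). A possible breadth-first-search sketch is given below, assuming \(\mathcal{G}_{c}\) is stored as an adjacency list; by Theorem 3 the returned count equals \(\mathrm{grank}\,\mathcal{R}\). The data layout and names are our own assumptions.

```python
from collections import deque

def input_reachable_state_count(adj, inputs, states):
    """adj: dict mapping each vertex of G_c to its list of out-neighbours."""
    seen, queue = set(inputs), deque(inputs)
    while queue:
        v = queue.popleft()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen & set(states))   # = grank R by Theorem 3; equals n iff structurally controllable
```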
## V Conclusions
In this paper, we have investigated the structural controllability of switched continuous-time systems and reversible switched discrete-time systems. By introducing new graph-theoretic concepts, namely multi-layer dynamic graphs, generalized stems, generalized buds, generalized cactus configurations, and generalized cactus walkings, we provide a novel criterion for structural controllability based on generalized cactus configurations, which extends Lin's graph-theoretic condition to switched systems for the first time. This not only fixes a gap in the existing literature regarding the proof of a pivotal criterion for structural controllability, but also yields a lower bound for the generic dimension of controllable subspaces (which is conjectured to be exact). Additionally, we present extensions to reversible switched discrete-time systems, leading to the determination of the generic dimension of controllable subspaces. Our future work will consist of proving or disproving the conjecture made in this paper and finding a computationally efficient way to compute the maximum number of state vertices that can be covered by a generalized cactus configuration.
## Acknowledgment
Feedback and suggestions from Prof. B. M. Chen of The Chinese University of Hong Kong on this manuscript are highly appreciated.
|
2309.16322 | Multi-Granularity Click Confidence Learning via Self-Distillation in
Recommendation | Recommendation systems rely on historical clicks to learn user interests and
provide appropriate items. However, current studies tend to treat clicks
equally, which may ignore the assorted intensities of user interests in
different clicks. In this paper, we aim to achieve multi-granularity Click
confidence Learning via Self-Distillation in recommendation (CLSD). Due to the
lack of supervised signals in click confidence, we first apply self-supervised
learning to obtain click confidence scores via a global self-distillation
method. After that, we define a local confidence function to adapt confidence
scores at the user group level, since the confidence distributions can be
varied among user groups. With the combination of multi-granularity confidence
learning, we can distinguish the quality of clicks and model user interests
more accurately without involving extra data and model structures. The
significant improvements over different backbones on industrial offline and
online experiments in a real-world recommender system prove the effectiveness
of our model. Recently, CLSD has been deployed on a large-scale recommender
system, affecting over 400 million users. | Chong Liu, Xiaoyang Liu, Lixin Zhang, Feng Xia, Leyu Lin | 2023-09-28T10:29:51Z | http://arxiv.org/abs/2309.16322v1 | # Multi-Granularity Click Confidence Learning via Self-Distillation in Recommendation
###### Abstract.
Recommendation systems rely on historical clicks to learn user interests and provide appropriate items. However, current studies tend to treat clicks equally, which may ignore the assorted intensities of user interests in different clicks. In this paper, we aim to achieve multi-granularity Click confidence Learning via Self-Distillation in recommendation (CLSD). Due to the lack of supervised signals in click confidence, we first apply self-supervised learning to obtain click confidence scores via a global self-distillation method. After that, we define a local confidence function to adapt confidence scores at the user group level, since the confidence distributions can be varied among user groups. With the combination of multi-granularity confidence learning, we can distinguish the quality of clicks and model user interests more accurately without involving extra data and model structures. The significant improvements over different backbones on industrial offline and online experiments in a real-world recommender system prove the effectiveness of our model. Recently, CLSD has been deployed on a large-scale recommender system, affecting over 400 million users.
Recommender Systems, Self-Distillation, Click Confidence
model with varied emphases on different samples. The advantages of CLSD are summarized as follows: First, we apply self-supervised learning to distinguish the quality of clicks via self-distillation. Second, we generate sample-level confidence scores from the teacher model without involving extra data. Finally, we can learn more accurate and reliable user interests by considering the distinction of confidence levels between samples.
We evaluate the performance of CLSD on real-world recommendation feed scenarios. CLSD achieves significant enhancements on both offline and online experiments, which verifies the effectiveness and universality of CLSD. The main contributions of our method are concluded as follows:
* We emphasize the significance of click confidence, and propose our click confidence learning model. To the best of our knowledge, we are the first to utilize multi-granularity click confidence learning via self-distillation for CTR prediction.
* We explore a global distillation method and a local adaption module to achieve simple but efficient confidence learning. CLSD is a universal method with the model structure unchanged, which is easy to deploy in real-world recommendation systems.
* The significant improvements on both offline and online evaluations further prove the effectiveness of CLSD. Moreover, CLSD has been deployed on a real-world recommender system serving over 400 million users.
## 2. Related Work
**CTR Prediction.** CTR prediction is increasingly essential to many real-world recommendation systems, and various methods (Han et al., 2015; Chen et al., 2016; Chen et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019) have been proposed to enhance model performance. Among these methods, Logistic regression (LR) (Li et al., 2019) and Factorization machine (FM) (Li et al., 2019) focus on low-order feature interactions. They are both time-saving due to the linear model structure. Google develops a Wide&Deep (Bird et al., 2016) learning system, combining the advantages of both the linear shallow and deep models. After that, many deep learning techniques have been used in CTR prediction. The self-attention technique is the core of AutoInt (Li et al., 2019), which can learn high-order feature interactions automatically. DFN (Li et al., 2019) utilizes a transformer to learn knowledge from both positive and negative feedback. Besides, some methods (Chen et al., 2016; Chen et al., 2016; Li et al., 2019; Li et al., 2019; Li et al., 2019) concentrate on how to extract user preferences from historical sequence behaviors.
**Self-Distillation.** Self-distillation research focuses on learning knowledge from the model itself (Chen et al., 2016; Chen et al., 2016; Li et al., 2019). Some studies learn knowledge from the input data, e.g., input features (Li et al., 2019) and class labels (Li et al., 2019). (Li et al., 2019) shrinks the size of convolutional neural networks via self-distillation to enhance performance. (Li et al., 2019) progressively distills the model's knowledge to soften hard labels. (Li et al., 2019) focuses on averaging model weights instead of predictions to achieve self-distillation. Only limited studies utilize self-distillation in recommendation. (Li et al., 2019) introduces the knowledge distillation concept into GCN-based recommendation. In CLSD, we adopt self-distillation for confidence learning in CTR prediction.
## 3. Model
### Preliminary
The overall structure of CLSD is presented in Figure 2. Before introducing our model, we introduce some basic notations. CTR prediction tasks aim to predict the next clicked items based on abundant sample features, which are organized into multiple fields. For each sample pair \((u,i)\) in the training dataset \(\mathcal{D}\), \(\mathbf{x}\) denotes the corresponding features, and the label \(y\in\{0,1\}\) denotes whether the user \(u\) clicks the item \(i\). The vector \(\mathbf{x}\) with multi-field features can be denoted as \(\mathbf{x}=[\mathbf{x}_{1},\dots,\mathbf{x}_{j},\dots,\mathbf{x}_{k}]\), where \(\mathbf{x}_{j}\) represents the \(j\)-th field of \(\mathbf{x}\). CTR prediction aims to model the probability \(p=f(\mathbf{x})\) that a user will click an item in a given context, where \(f(\cdot)\) denotes the model network. The universal loss function of CTR prediction is the negative log-likelihood loss supervised by the labels:
\[L_{\text{ori}}=-\sum_{(u,i)}(y\log(p)+(1-y)\log(1-p)). \tag{1}\]
### Global Granularity Distillation
Due to the lack of supervised signals in click confidence, we introduce a self-supervised method to achieve confidence learning. With the observation that noisy positive instances tend to have lower CTR prediction scores and bring larger loss than true positive instances (Li et al., 2019), we suppose that reliable CTR prediction scores of instances can reflect the user click confidence. Instead of directly using the model prediction, we involve a teacher model to provide more reliable and efficient scores. Current self-supervised learning works provide numerous methods to obtain teacher models, i.e., a pre-trained large teacher model (Li et al., 2019), a previous model version with the best performance in the past epochs (Li et al., 2019), or an exponential moving average of previous models (Li et al., 2019). However, these methods always need to store a set of additional model parameters in memory, which brings extra memory costs. To avoid the above problems and simplify the training process, we achieve self-distillation by directly choosing the model's current state to be its own teacher.
Figure 2. The model structure of CLSD.
Firstly, we warm up the model with the original training objective \(L_{ori}\). After that, we regard the model generated from the last batch as the teacher for the current model, which can also alleviate the model forgetting issue (Krizhevsky et al., 2014). Then, the predictions of the teacher model are utilized to represent the confidence scores for the corresponding positive samples. Since we suppose the teacher model is a global-level user-interest learning ensemble, the predictions of the teacher model can reflect the intensities of user interests and confidence scores for different positive samples. Thus, we scale the original loss with confidence scores for positive samples, which can be formulated as:
\[L_{global}=-\sum_{(u,i)}\left((1+\tilde{p})\,y\log(p)+(1-y)\log(1-p)\right). \tag{2}\]
\(\tilde{p}\in[0,1]\) is the prediction of the teacher model, serving as the confidence score.
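A minimal PyTorch-style sketch of Eq. (2) is given below. It is not the authors' code: we realize the "current model as its own teacher" idea by detaching the current prediction, and the small constant inside the logarithms is a numerical safeguard of our own.

```python
import torch

def global_gd_loss(logits, labels):
    """logits, labels: 1-D float tensors for one batch (labels are 0/1)."""
    p = torch.sigmoid(logits)
    p_teacher = p.detach()                 # self-distillation: current model acts as its own teacher
    weight = 1.0 + p_teacher               # confidence-scaled weight for clicks, always >= 1
    eps = 1e-8                             # numerical safeguard (our addition)
    loss = -(labels * weight * torch.log(p + eps)
             + (1.0 - labels) * torch.log(1.0 - p + eps))
    return loss.mean()
```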
Our global granularity distillation step, Global-GD for short, has the following characteristics: 1) Different from Focal loss (Krizhevsky et al., 2014), which down-weights the loss for well-classified samples, we up-weight the loss for well-classified samples and down-weight the loss for misclassified samples, since click labels in recommendation systems are relatively uncertain with different confidence levels, while labels in CV are unambiguous. Thus, we suppose users are actually more interested in their positive samples with higher predictions provided by a stable and well-trained teacher model. 2) Our confidence scores scale the original loss only for positive samples. In this paper, we do not focus on the intensities of dislikes for negative samples, so we only modify the original loss of positive samples. Besides, we keep the scaled weights of positive samples no less than 1, due to the scarcity of positive samples compared with negative ones. 3) We dynamically adapt the original loss at the sample level based on confidence scores. Instead of binary values \(\{0,1\}\), we treat the intensities of user interests in samples as continuous values in the range \([0,1]\), which is relatively reasonable and explainable. 4) Compared with current studies (Krizhevsky et al., 2014) that use dwell time to assess click quality, we utilize a self-distillation approach to avoid involving additional data, which may lead to extra noise and the seesaw phenomenon. Note that our Global-GD can match any self-distillation method. To save computation and memory costs, we achieve self-distillation via the current-model method.
### Local Granularity Adaption
The Global-GD mentioned above scales the original loss over the entire dataset from a general perspective. In this section, we will further analyze the impact of confidence learning at the user group level and define a local granularity adaption module.
**Group Granularity Analysis.** Upon observing that confidence scores can vary among users, we analyze the prediction distribution of confidence scores \(\tilde{p}\) at the user level and find that the different action patterns among user groups lead to a skewed distribution of \(\tilde{p}\). These user groups can be user profiles (e.g., gender, age), user interactive status (e.g., user activeness), or simply each user regarded as a single group (i.e., the individual level). In our real-world business scenarios, the impact of age groups on the final results is particularly significant. As shown in Figure 3 (right), the normalized CTR collected from our offline dataset gradually increases with age. The trend of the confidence scores \(\tilde{p}\) in Figure 3 (left) is similar to the normalized CTR. With our Global-GD module, the weights assigned to older users are higher due to their tendency to have higher \(\tilde{p}\), leading to a model increasingly biased towards this age group. Therefore, prior user information can influence the prediction distribution and introduce unfairness into Global-GD.
**Local Adaptive Module.** In this part, we introduce an adaptive module to alleviate the bias caused by Global-GD. Concretely, we propose an Adaptive Gate, AdGate for short, to represent the imbalance of prior user information with personalized parameters, and inject the adaptive results into the Global-GD. We denote the input features of the AdGate as adaptive features \(x_{local}\), which can be user profiles (e.g., age, location, gender, and activeness). Since we observe the remarkable influence of the user's age on our industrial datasets, we directly take the age feature as \(x_{local}\) in this paper to illustrate the effect of AdGate. Notice that \(x_{local}\) can be any prior feature that brings inevitable bias to the distribution of the real CTR or the predicted CTR, as shown in Figure 3. Then, we formulate the output of AdGate as below:
\[p_{local}=\sigma(AdGate(x_{local})) \tag{3}\]
where AdGate is a simple Multi-Layer Perceptron (MLP) structure in this paper and can be directly extended to various complicated structures. The Sigmoid function, \(\sigma(x)=1/(1+e^{-x})\), controls the output \(p_{local}\in[0,1]\). Here, we introduce \(p_{local}\) to separately formulate the prediction of prior user information (i.e., age), which can alleviate the biased influence of \(x_{local}\) on the training process of the backbone model as well as the Global-GD.
### Final Objective.
Finally, we combine the global granularity distillation and the local granularity adaption to obtain the final loss function:
\[L_{final}=-\sum_{(u,i)}\left((1+\alpha\tilde{p})\,y\log(p+p_{local})+(1-y)\log(1-p-p_{local})\right). \tag{4}\]
\(\alpha\in[0,\infty)\) is a hyper-parameter that controls the influence of Global-GD. Due to the relatively low ratio of positive samples among all samples, the weight of positive samples is kept no less than 1. To guarantee the performance of the teacher model, CLSD first warms up via \(L_{ori}\), which empirically takes about \(1/3\) of the training epochs, and then updates via \(L_{final}\).
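The local adaption of Eq. (3) and the final objective of Eq. (4) can be sketched in the same style. The hidden size of the AdGate MLP, the clamp that keeps the logarithm well defined, and all names below are illustrative assumptions rather than the deployed configuration.

```python
import torch
import torch.nn as nn

class AdGate(nn.Module):
    """Small MLP over prior features (e.g., age) producing p_local in [0, 1]."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x_local):
        return torch.sigmoid(self.mlp(x_local)).squeeze(-1)

def clsd_loss(logits, labels, p_local, alpha=1.0, eps=1e-8):
    p = torch.sigmoid(logits)
    p_teacher = p.detach()                       # teacher confidence score from Global-GD
    weight = 1.0 + alpha * p_teacher             # Eq. (4) weight for positive samples
    q = (p + p_local).clamp(max=1.0 - eps)       # clamp is our numerical safeguard
    return -(labels * weight * torch.log(q + eps)
             + (1.0 - labels) * torch.log(1.0 - q + eps)).mean()
```

Keeping AdGate as a separate small network lets the prior-feature bias be absorbed by \(p_{local}\) rather than by the backbone, which matches the motivation given in the group granularity analysis.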
## 4. Experiments
### Experiment Setup
**Datasets & Settings.** We conduct offline evaluations on two feed recommender systems, i.e., _Subscriptions_ and _TopStory_. The detailed
Figure 3. Distribution of \(\tilde{p}\) and CTR with age.
statistics of two offline datasets are shown in Table 1. For both datasets, we consider interactions in the first few days as train sets and the last day's interactions as test sets. In experiments, all our models and baselines are optimized by Adam with the learning rate 0.003. The batch sizes are 256 for all models.
**Baselines.** We compare our method with eight models widely used in industry: (1) Wide&Deep (He et al., 2017). With the development of deep models, Google achieved great improvements by combining a wide (or shallow) network and a deep one; Wide&Deep is a general learning framework that can obtain the advantages of both wide and deep networks. (2) DeepFM (He et al., 2017). DeepFM extends Wide&Deep by substituting LR with FM to precisely model second-order feature interactions. (3) DCN (Wang et al., 2019). DCN introduces a cross-network, which supports more feature-crossing modes than FM; by controlling the number of layers, the cross-network can efficiently learn low-dimensional feature interactions. (4) AFM. AFM extends FM by learning the importance of different feature interactions with an attention network. (5) AutoInt (Wang et al., 2019). AutoInt automatically models the high-order interactions of input features by using self-attention networks. (6) xDeepFM (He et al., 2017). xDeepFM captures high-order interactions through its core module, the Compressed Interaction Network (CIN), which takes an outer product of a stacked feature matrix in a vector-wise way. (7) DIEN (Wang et al., 2019). DIEN introduces an interest evolving layer to capture the changing trend of user interest. (8) CAN (Chen et al., 2019). CAN proposes a Co-Action Network to effectively utilize the information of different feature pairs.
### Offline Evaluation
To verify the effectiveness and universality of CLSD, we apply CLSD to eight backbone models for offline evaluation. From Table 2, we can find that: 1) CAN performs better than other baselines, which indicates the Co-Action Network can effectively capture feature interaction for training. Moreover, our CLSD can continuously improve the performance of CAN, since we can distinguish the intensities of user interests in clicks. 2) CLSD can enhance the performance over eight backbone models, achieving 0.36% (on average) improvements on _Subscriptions_ and 0.34% improvements on _TopStory_ in terms of AUC. The significant improvements (t-test with \(p<0.01\)) prove the universality and efficiency of CLSD, which is complementary to various model structures and continuously enhances performance.
### Online Evaluation
**General Performance.** To verify the online performance of CLSD, we deploy CLSD on four online scenarios with differences in domains and data scales. Concretely, we conduct A/B tests on Subscriptions Article (with 43.8 million users for 5 days), Subscriptions Video (with 3.7 million users for 4 days), Top Story Article (with 4.3 million users for 8 days), and Top Story Video (with 3 million users for 10 days). We mainly focus on two online metrics: average click number per capita (ACN) and average dwell time (ADT). From Table 3, we can observe that: 1) CLSD improves both ACN and ADT on all four online scenarios. The relative improvements, ranging from about 1.2% to 2.1%, are remarkable for such stable and mature online recommendation scenarios with sufficient features, enormous samples, and advanced models. 2) The concurrent improvements of ACN and ADT imply that CLSD can accurately and efficiently capture user interests, which further enhances user satisfaction in our recommender scenarios.
**Comparison of different objectives.** We also compare the performance of CLSD with other objectives, i.e., DT-Reweight, which uses a dwell-time function to reweight the original loss (Wang et al., 2019), focal loss (Wang et al., 2019), and a multi-task learning (MTL) loss with an additional dwell-time loss based on Eq. 1. Online results on Subscriptions Video are presented in Table 4. We can observe that: 1) Compared with the above three objectives, CLSD achieves the best online performance in both ACN and ADT, which indicates that CLSD can avoid the seesaw phenomenon of traditional reweighted objectives by focusing on click confidence learning. Since focal loss gives priority to hard samples, it may pay exceeding attention to weak interests or even casual clicks, leading to inferior performance. 2) With the combination of
| Dataset | Instances | Fields | Features |
| --- | --- | --- | --- |
| Subscriptions | 762M | 65 | 319M |
| TopStory | 614M | 57 | 237M |

Table 1. Dataset statistics of two offline corpuses from real-world applications.
| Online Scenario | ACN | ADT |
| --- | --- | --- |
| Subscriptions Article | +2.116% | +1.403% |
| Subscriptions Video | +1.976% | +1.984% |
| Top Story Article | +1.158% | +1.400% |
| Top Story Video | +1.444% | +1.665% |

Table 3. Online A/B Test (t-test with p<0.01).
| Model | Subscriptions (Origin) | Subscriptions (+CLSD) | TopStory (Origin) | TopStory (+CLSD) |
| --- | --- | --- | --- | --- |
| W&D | 0.7662 | 0.7695 | 0.7874 | 0.7901 |
| DeepFM | 0.7679 | 0.7708 | 0.7898 | 0.7929 |
| DCN | 0.7681 | 0.7701 | 0.7892 | 0.7918 |
| AFM | 0.7675 | 0.7709 | 0.7881 | 0.7906 |
| AutoInt | 0.7659 | 0.7702 | 0.7883 | 0.7907 |
| xDeepFM | 0.7687 | 0.7710 | 0.7904 | 0.7933 |
| DIEN | 0.7680 | 0.7699 | 0.7884 | 0.7907 |
| CAN | 0.7702 | 0.7725 | 0.7910 | 0.7940 |

Table 2. Offline Evaluation on AUC (t-test with p<0.01).
CLSD and the corresponding objectives, CLSD can consistently improve the performance of DT-Reweight and MTL, which further proves the universality and effectiveness of CLSD.
### Ablation Study
As shown in Table 5, we conduct an offline ablation study on four baselines on _Subscriptions_ to clarify the effect of different components of CLSD, i.e., the global granularity distillation and the local granularity adaption. Generally, the global component contributes more to the final performances, which implies the power of confidence learning via self-distillation. Also, the local component can adapt the loss at the user group level to further enhance the performance.
### Hyper-Parameter Analysis
In this section, we analyze the impact of the critical hyper-parameter \(\alpha\) in Eq. 4, which controls the influence of CLSD during the training process.
Table 6 shows the experiment results of CLSD with different \(\alpha\) on _Subscriptions_. We can observe that a small \(\alpha\) can bring a slight performance improvement on AUC. The performance improvement continuously increases with the increment of \(\alpha\) and reaches the best at around \(\alpha=1\). With a larger \(\alpha\), the model excessively focuses on adapting the training process upon click confidence and ignores the original objective, leading to worse performance.
## 5. Conclusion
In this work, we propose a simple but effective multi-granularity click confidence learning method via self-distillation for CTR prediction. CLSD achieves remarkable improvements on both offline and online experiments with different backbones. Also, we analyze the distribution of our confidence scores for a deeper understanding. Besides, CLSD has been deployed on a real-world recommender system serving over 400 million users. In the future, we will explore more sophisticated local adaption methods in CLSD.
|
2309.06131 | Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning
Strategies are not Better than Random Selection | Search methods based on Pretrained Language Models (PLM) have demonstrated
great effectiveness gains compared to statistical and early neural ranking
models. However, fine-tuning PLM-based rankers requires a great amount of
annotated training data. Annotating data involves a large manual effort and
thus is expensive, especially in domain specific tasks. In this paper we
investigate fine-tuning PLM-based rankers under limited training data and
budget. We investigate two scenarios: fine-tuning a ranker from scratch, and
domain adaptation starting with a ranker already fine-tuned on general data,
and continuing fine-tuning on a target dataset. We observe a great variability
in effectiveness when fine-tuning on different randomly selected subsets of
training data. This suggests that it is possible to achieve effectiveness gains
by actively selecting a subset of the training data that has the most positive
effect on the rankers. This way, it would be possible to fine-tune effective
PLM rankers at a reduced annotation budget. To investigate this, we adapt
existing Active Learning (AL) strategies to the task of fine-tuning PLM rankers
and investigate their effectiveness, also considering annotation and
computational costs. Our extensive analysis shows that AL strategies do not
significantly outperform random selection of training subsets in terms of
effectiveness. We further find that gains provided by AL strategies come at the
expense of more assessments (thus higher annotation costs) and AL strategies
underperform random selection when comparing effectiveness given a fixed
annotation cost. Our results highlight that ``optimal'' subsets of training
data that provide high effectiveness at low annotation cost do exist, but
current mainstream AL strategies applied to PLM rankers are not capable of
identifying them. | Sophia Althammer, Guido Zuccon, Sebastian Hofstätter, Suzan Verberne, Allan Hanbury | 2023-09-12T11:17:42Z | http://arxiv.org/abs/2309.06131v1 | Annotating Data for Fine-Tuning a Neural Ranker? Current Active Learning Strategies are not Better than Random Selection
###### Abstract.
Search methods based on Pretrained Language Models (PLM) have demonstrated great effectiveness gains compared to statistical and early neural ranking models. However, fine-tuning PLM-based rankers requires a great amount of annotated training data. Annotating data involves a large manual effort and thus is expensive, especially in domain specific tasks. In this paper we investigate fine-tuning PLM-based rankers under limited training data and budget. We investigate two scenarios: fine-tuning a ranker from scratch, and domain adaptation starting with a ranker already fine-tuned on general data, and continuing fine-tuning on a target dataset.
We observe a great variability in effectiveness when fine-tuning on different randomly selected subsets of training data. This suggests that it is possible to achieve effectiveness gains by actively selecting a subset of the training data that has the most positive effect on the rankers. This way, it would be possible to fine-tune effective PLM rankers at a reduced annotation budget. To investigate this, we adapt existing Active Learning (AL) strategies to the task of fine-tuning PLM rankers and investigate their effectiveness, also considering annotation and computational costs. Our extensive analysis shows that AL strategies do not significantly outperform random selection of training subsets in terms of effectiveness. We further find that gains provided by AL strategies come at the expense of more assessments (thus higher annotation costs) and AL strategies underperform random selection when comparing effectiveness given a fixed annotation cost. Our results highlight that "optimal" subsets of training data that provide high effectiveness at low annotation cost do exist, but current mainstream AL strategies applied to PLM rankers are not capable of identifying them.
PLM-based rankers, domain adaptation, active learning
## 1. Introduction
Search methods based on Pre-trained Language Models (PLM) have shown great effectiveness gains compared to common statistical models and early neural methods [14, 21, 22, 38, 56]. These language models are pre-trained for language representation learning on a background corpus; they are then further trained for a specific task - a process commonly referred to as fine-tuning. Typically, PLM rankers are created through the fine-tuning of a PLM to the ranking task (and possibly, to a specific domain). The fine-tuning of PLM rankers typically requires a great amount of labelled training data. This can often be a challenge when considering search tasks with no or little training data available. Data annotation typically requires a large manual effort and thus is expensive, especially in domain-specific tasks where annotators should be domain experts. In real-life settings, the annotation and computational budget1 is often limited, especially for start-ups or in domain-specific contexts.
Footnote 1: With annotation budget we refer to the amount of money set aside for paying annotators; with computational budget, to the amount of money set aside for paying the computation costs arising from the training/fine-tuning of the PLM rankers. These costs may include hardware and energy costs, or the purchase of cloud solutions.
In this paper we focus on the problem of fine-tuning PLM rankers under limited training data and budget. There are alternative directions one may take to deploy a PLM ranker in a specific task for which no or limited training data is available. These include for example the zero-shot application of PLM rankers trained on another, resource-rich, retrieval task or domain [55, 61], the learning with few-shot examples [16], and approaches based on pseudo-labelling [59]. However the effectiveness of these approaches depends on the relatedness of the fine-tuning task or the pre-training domain of the language model to the target retrieval task [60]; thus their generalization capabilities remain unclear. Therefore performing domain adaptation by fine-tuning the PLM ranker on the target task with annotated training data (the setting investigated in this paper) remains favourable for a (reliable) high effectiveness [13].
It is unclear however how much annotated training data is required for training an effective PLM ranker. Furthermore, in the presence of a budget constraint that restricts the amount of data that can be annotated for training, it is unclear whether it is possible to select training data to minimise annotation cost while maximising ranker effectiveness.
In this paper, (1) we investigate how the amount of labelled data used for fine-tuning a PLM ranker impacts its effectiveness, (2) we adapt active learning (AL) strategies to the task of training PLM rankers, (3) we propose a budget-aware evaluation schema including aspects of annotation and computation cost, (4) we conduct an extensive analysis of AL strategies for training PLM rankers investigating the trade-offs between effectiveness, annotation budget and computational budget. We do this in the context of three common PLM ranker architectures: cross-encoders (MonoBERT (Moro et al., 2017)), single representation bi-encoders (DPR (Zhu et al., 2018)) and multi-representation bi-encoders (ColBERT (Moro et al., 2019)), and two scenarios:
* **Scratch**: the PLM is pre-trained on a background corpus, but has yet to be fine-tuned to the target ranking task and dataset;
* **Re-Train**: domain adaptation of the PLM ranker is performed. The PLM is pre-trained on a background corpus and fine-tuned to a ranking task and a specific dataset, but further fine-tuning has yet to be performed to transfer the ranker to another dataset and, possibly, a ranking task with characteristics that differ from those of the first fine-tuning process.
To investigate the effect of the amount of labelled data on the effectiveness of PLM rankers, we select incremental amounts of data to fine-tune a ranker. Our empirical results show that the size of the dataset available for fine-tuning the PLM ranker greatly influences the effectiveness of the ranker. While, somewhat unsurprisingly, we find that in general more training data leads to higher effectiveness, we also find large variability in effectiveness between different randomly selected training sets of the same size. Furthermore we find that, for some training sizes, the best random selection run significantly outperforms the worst one. This shows that there are subsets of the training data which lead to significant improvements within the same training data size.
This variability motivates us to investigate whether we can select those "high-yield" samples using Active Learning strategies. The intuition is that a good selection strategy would lead to a smaller amount of data to be annotated, and thus a lower annotation cost, while still producing a highly effective ranker. Selection of training data has been extensively investigated in AL for machine learning. Here, common active selection strategies are based on uncertainty or diversity criteria (Zhu et al., 2018; Li et al., 2018; Li et al., 2019). We thus adapt representative methods that implement these criteria to the context of fine-tuning PLM rankers. We evaluate the representative active selection strategies in terms of their effectiveness for fine-tuning PLM rankers on different training data sizes and compare the strategies to random selection of training data as baseline. For both scenarios the active selection strategies do not offer statistically significant improvements compared to random selection. For certain scenarios and PLM rankers we find varying beneficial selection strategies, however no selection strategy shows consistent and robust higher effectiveness than random selection. In addition, the adoption of active learning requires extra computation compared to random selection.
Since our goal is not to minimize the training data size per se, but rather the total cost of fine-tuning PLM rankers, we revisit the results in light of a budget-aware evaluation we introduce in this paper. This evaluation includes aspects of annotation cost as well as cost of computing resources. With this, we find that the annotations are the main cost factor. Since the selection methods require a different number of assessments to annotate a training set of a certain size, we compare the number of assessments to the effectiveness of the PLM rankers for random and active selection strategies. This reveals that the (marginal, if any) effectiveness gains provided by AL strategies come at the expense of more assessments (thus higher annotation costs) and AL strategies under-perform random selection when comparing both effectiveness and associated cost.
We publish our code at: _github.com/sophiaalthammer/al-rankers_.
## 2. Related Work
**Effect of Data Size on PLM Rankers.** Previous studies have observed that fine-tuning PLM rankers on subsets of the available training data decreases search effectiveness and, similarly, that increasing the size of the training data tends to improve search effectiveness. These types of observations and preliminary findings are reported for MS MARCO (Makrini et al., 2018; Makrini et al., 2018; Zhu et al., 2018; Li et al., 2019; Li et al., 2019) and in the case of domain adaptation (Zhu et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019). However, these conditions have never been systematically evaluated, which we do in our study. For example, Nogueira et al. (Nogueira et al., 2019) observe the variability in effectiveness when fine-tuning a MonoBERT ranker on subsets of different size (1k, 2.5k, 10k); however they do not systematically investigate this variability, nor they study the effect of larger subsets or different ranking architectures. Mokrii et al. (Makrini et al., 2018) investigate transfer learning for MonoBERT rankers first fine-tuned on MS MARCO and then transferred to question answering tasks in a zero-shot and full training setting, where the source and target domain hold large training sets. They also investigate the effect of training on subsets of the training data and find that the more training queries, the higher the effectiveness. Zhang et al. (Zhang et al., 2019) investigate domain transfer of BERT cross-encoders in a small data regime where they transfer MonoBERT from web search (trained on MS Marco) to small domain specific retrieval tasks. Interestingly they find that small in-domain training data sometimes decreases search effectiveness compared to the zero-shot application of MonoBERT.
**Active Learning for Information Retrieval.** Active Learning aims to minimize the annotation cost associated with the acquisition of training labels while maximizing the effectiveness of the trained model. Uncertainty (Zhu et al., 2018) and diversity-based (Li et al., 2019) strategies form the bulk of AL methods that have been proposed and extensively validated across a variety of learning tasks and datasets. In this paper, we adapt methods belonging to these two strategies.
Specifically, we explore the use of Active Learning for selecting data for the fine-tuning of PLM rankers. Active Learning has been used in Information Retrieval across a number of tasks and settings (Zhu et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), but never before in the context of PLM ranker fine-tuning.
Of particular interest for this paper are the methods of Cai et al. (Cai et al., 2018) and Xu et al. (Xu et al., 2019), that we describe next, because we adapt them to our task of fine-tuning a PLM ranker. Cai et al. (Cai et al., 2018) transfer a learning-to-rank (LTR) model trained on one target domain to another source domain. For the domain adaptation training they propose to use the Query-by-Committee algorithm for active selection of queries in the target and in the source domain as well as mixing the training sets of the target and source domain. For the domain adaptation of a LTR model, QBC reaches a higher retrieval effectiveness with less training data than the random selection
strategy. Xu et al. (2020) investigate different diversity-based active learning strategies for updating query relevance scoring and propose a combination of diversity and density based selection.
A variation of the AL setting that has shown success in certain domain-specific tasks is that of continuous active learning (Zhou et al., 2019; Zhang et al., 2020; Zhang et al., 2020), where documents are iteratively retrieved by actively learning for one specific query, typically aiming for total recall (Xu et al., 2020). For the task of technology assisted review (TAR), Yang et al. (2020) propose a TAR cost framework, however this framework focuses on cost modeling for reviewing one specific query.
Despite previous successes in the use of AL strategies in the context of search and ranking, AL strategies have not been studied for PLM rankers. The AL strategies proposed in previous work are not directly applicable to PLM rankers - however in Section 5 we propose adaptations of these methods to our task of interest.
## 3. Considered PLM rankers
In the considered cross-encoder model, MonoBERT, the query and passage text are concatenated, encoded with BERT, and the CLS representation is scored with a linear layer \(W\) on top of the encoding:
\[s=W\;\text{BERT}(\text{CLS};q;\text{SEP};p;\text{SEP})_{CLS} \tag{1}\]
where SEP is the separator token and \(s\) is the final score of passage \(p\) for query \(q\). Empirical findings show MonoBERT reaches a high re-ranking effectiveness (Zhang et al., 2020), however each passage needs to be encoded at query time and therefore this architecture is computationally resource-heavy and is characterized by high query latency (Yang et al., 2020). For the same reason, this ranker is commonly used only in top-\(k\) re-ranking settings, and not for retrieval (i.e., scoring the whole collection for each query).
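A minimal sketch of the scoring function in Eq. 1, assuming the Hugging Face `transformers` API and a `bert-base-uncased` checkpoint (the checkpoint used for MonoBERT in Section 7.2); this illustrates the architecture and is not the authors' training code.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
W = torch.nn.Linear(encoder.config.hidden_size, 1)  # scoring head of Eq. 1

def monobert_score(query: str, passage: str) -> torch.Tensor:
    # The tokenizer builds [CLS] query [SEP] passage [SEP] automatically.
    enc = tokenizer(query, passage, truncation=True, max_length=256, return_tensors="pt")
    cls = encoder(**enc).last_hidden_state[:, 0]  # CLS representation
    return W(cls).squeeze(-1)                     # relevance score s
```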
DPR (Zhu et al., 2020) encodes the query and passages independently. The relevance of a passage \(p\) to a query \(q\) is estimated using the dot-product between the CLS token representation \(q\) and that of \(p\):
\[s=\text{BERT}(\text{CLS};q;\text{SEP})_{CLS}\cdot\text{BERT}(\text{CLS};p; \text{SEP})_{CLS} \tag{2}\]
The independence of query and passage encoding and dot-product relevance scoring make it possible to pre-compute and store the passage representations in the index and enable efficient retrieval at query time with approximate nearest neighbor search (Xu et al., 2020; Zhang et al., 2020).
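Eq. 2 translates into two independent encodings and a dot product. The sketch below again assumes the `transformers` API; a single shared encoder is used for brevity, whereas DPR-style models typically use separate query and passage encoders, and passage vectors would in practice be pre-computed and indexed for approximate nearest neighbour search.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

def cls_vector(text: str) -> torch.Tensor:
    enc = tokenizer(text, truncation=True, max_length=200, return_tensors="pt")
    return encoder(**enc).last_hidden_state[:, 0]  # CLS representation

def dpr_score(query: str, passage: str) -> torch.Tensor:
    # Dot product of the two independently encoded CLS vectors (Eq. 2).
    return (cls_vector(query) * cls_vector(passage)).sum(-1)
```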
The ColBERT (Zhang et al., 2020) method delays the interaction between the query and passage to after the encoding by computing the relevance score as the sum of the maximum similarity scores between all token representations of the query and passage:
\[s=\sum_{j}\max_{i}\left[\text{BERT}(\text{CLS};q;\text{SEP})_{j}\cdot\text{ BERT}(\text{CLS};p;\text{SEP})_{i}\right] \tag{3}\]
As for single representation bi-encoder methods, also in ColBERT the passage representation can be pre-computed offline and thus the query processing is sped up. Empirical results show that ColBERT achieves a competitive effectiveness compared to MonoBERT (Zhang et al., 2020).
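Eq. 3 can be sketched as a token-level similarity matrix followed by a max over passage tokens and a sum over query tokens. Real ColBERT additionally applies a linear projection to a smaller dimension, vector normalisation and query augmentation, which are omitted here; the encoder and tokenizer names follow the sketches above.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

def colbert_score(query: str, passage: str) -> torch.Tensor:
    q = encoder(**tokenizer(query, return_tensors="pt")).last_hidden_state[0]    # [Lq, d]
    p = encoder(**tokenizer(passage, return_tensors="pt")).last_hidden_state[0]  # [Lp, d]
    sim = q @ p.T                         # token-to-token similarities [Lq, Lp]
    # MaxSim: best-matching passage token per query token, summed over the query (Eq. 3).
    return sim.max(dim=1).values.sum()
```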
## 4. Training scenarios & annotation modeling
We consider two scenarios for training the PLM rankers: **Scratch**, training from scratch starting with a PLM, and **Re-Train**, domain fine-tuning after ranking/retrieval fine-tuning of the PLM has already occurred. These are common scenarios encountered in the practical application of PLM rankers to search problems.
In **Scratch** our objective is to train a PLM ranker "from scratch", i.e., without having already performed any fine-tuning on a retrieval task. There are many reasons why this scenario could occur in the practical deployment of PLM rankers: for example, no suitable labelled data corresponding to the ranking task may be available, or the data that is available may be protected by a license that prevents its use within a product (e.g., the MS Marco dataset). We model this first scenario by starting from a pre-trained BERT model (Zhou et al., 2019; Zhang et al., 2020) and training the ranker on the MS Marco dataset, a large-scale web search collection commonly used to train these rankers. Note that in our experiments we assume that no labels are available for the dataset, and labels are iteratively collected (in a simulated setting) within the AL cycle.
In **Re-Train** our goal is to adapt a PLM ranker to a specific retrieval task (potentially in a specific domain). Here we assume that the PLM ranker has already undergone fine-tuning on a high-resource retrieval task (e.g., using the common MS Marco dataset), and the goal is to further fine-tune the ranker with additional data on a different retrieval task or data domain. This is common in domain-specific IR settings. The assumption is that the initial fine-tuning on the non-target retrieval task or domain data still contributes substantially to the effectiveness of the ranker, especially when the target data available for fine-tuning is limited. We model this second scenario by starting from a ranker fine-tuned on MS Marco and fine-tuning the ranker for a domain-specific retrieval task. In our experiments, we choose to validate the models using the retrieval task and datasets associated with health-oriented web search in the medical domain. We choose this task due to the availability of the TripClick dataset (Zhang et al., 2020), a large-scale training and test set for this task. This dataset has similar characteristics to MS Marco (e.g., query length, sparse judgments). In contrast to other domain adaptation approaches (Beng et al., 2020), we do not mix the training sets of the source and the target domain, in order to (i) separate the effects of mixing the training sets from those of the active domain adaptation strategies, and (ii) study PLM ranker development and deployment strategies that are in line with the green IR principles of reuse and recycle (Yang et al., 2020).
In order to model the real-life process of incremental annotation and training we incrementally increase our fine-tuning set \(D\). The details of this incremental process are depicted in Algorithm 1. We start with an empty set \(D=\{\}\) and in each iteration a subset \(S\) of the whole training set \(T\) (\(S\subset T\)) is selected to be added to \(D\). We model the annotation process by attaining the labels from the training set qrels and adding the samples to the fine-tuning set (\(D=D\cup S\)). Then we train the PLM ranker on the updated set \(D\) and, based on random or active selection strategies, we select the next subset to annotate and add it to the training set.
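The incremental process can be summarised in a few lines of pseudocode. Here `select`, `annotate` and `train_ranker` stand for a selection strategy (random or one of the active strategies of Section 5), the simulated labelling from the training qrels, and the PLM fine-tuning routine; they are placeholders rather than concrete implementations.

```python
def incremental_finetuning(T, iterations, select, annotate, train_ranker):
    """Sketch of Algorithm 1: iteratively select, label, and train."""
    D, ranker = [], None                      # fine-tuning set starts empty
    pool = list(T)                            # unlabelled training queries
    for i in range(iterations):
        S = select(pool, D, ranker)           # choose the next subset to annotate
        D.extend(annotate(S))                 # attach labels from the qrels
        pool = [q for q in pool if q not in S]
        ranker = train_ranker(D)              # retrain the PLM ranker on the updated D
    return ranker, D
```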
## 5. Active selection strategies
We consider three active selection strategies to identify training data for labelling: uncertainty-based selection (Zhou et al., 2019; Zhang et al., 2020; Zhang et al., 2020), query-by-committee (QBC) (Beng et al., 2020), and diversity-based selection (Xu et al., 2020). We consider random selection as a baseline selection strategy. Next, we describe the active selection strategies and how we adapt them for fine-tuning PLM rankers.
### Uncertainty-based selection
The uncertainty-based selection strategy selects samples by measuring the model's (ranker) uncertainty in the scores it produced
and then selecting the samples with the least confidence (Bordes and McAllester, 2017; McAllester et al., 2018). Uncertainty-based strategies are commonly applied to classification problems, and often the score provided by the classifier is used as a direct indication of uncertainty: scores are in the range \([0,1]\), the decision boundary is set to \(0.5\), and the confidence in the classification is measured as a function of the distance to the decision boundary (the closer, the less confident) (Kal
and also annotated. This implies that the number of assessments differs from the training data size (the number of queries), e.g., one query sample in the training data can account for 10 assessments required when the first relevant passage is found at rank 10.
The total annotation cost then is the sum of the number of assessments for all the queries added to the training set. Formally, let \(A(i)\) be the number of assessments needed to create the training data of iteration \(i\), \(A_{h}\) be the number of assessments an annotator finishes in one hour and \(A_{C}\) be the cost for an annotator per hour2. Then, the total annotation cost at iteration \(i\) is computed as:
Footnote 2: Note that certain search tasks or domain may require multiple annotators to examine the same sample: in this case \(A_{C}\) would be the sum of the hourly rates associated to all the annotators.
\[C_{A}(i)=\frac{A(i)}{A_{h}}\cdot A_{C} \tag{5}\]
**Computational Costs.** Next we model the computational costs involved in executing the active selection strategies. These strategies usually require both CPU- and GPU-based computation, which typically incur different costs and which we therefore account for separately. Let \(H_{GPU}(i)\) be the accumulated number of GPU hours needed for training a PLM ranker for iteration \(i\) and \(G_{h}\) be the cost of running a GPU for one hour. Then, the total computational cost at iteration \(i\) is computed as:
\[C_{C}(i)=H_{GPU}(i)\cdot G_{h}+H_{CPU}\cdot C_{h}\cdot(i-1) \tag{6}\]
with \(H_{CPU}\) the number of CPU hours needed for computing the selection strategy and \(C_{h}\) the cost of one hour CPU.
**Total Cost.** Finally, the total cost at iteration \(i\) can then be computed using Equations 5 and 6:
\[C(i) =C_{A}(i)+C_{C}(i)\] \[=\frac{A(i)}{A_{h}}\cdot A_{C}+H_{GPU}(i)\cdot G_{h}+H_{CPU} \cdot C_{h}\cdot(i-1) \tag{7}\]
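Equations 5-7 reduce to a few arithmetic operations. The sketch below uses the cost parameters reported later in the experimental setup (Section 7.4: 75 assessments per hour, an annotator rate of 50 US$ per hour, 3.06 US$ per GPU hour and 0.40 US$ per CPU hour) as defaults; the number of CPU hours per selection round is a placeholder value.

```python
def total_cost(assessments, gpu_hours, iteration,
               A_h=75, A_C=50.0, G_h=3.06, C_h=0.40, H_cpu=1.0):
    """Total budget at iteration i (Eq. 7): annotation cost (Eq. 5) plus computational cost (Eq. 6)."""
    annotation_cost = (assessments / A_h) * A_C                      # Eq. 5
    compute_cost = gpu_hours * G_h + H_cpu * C_h * (iteration - 1)   # Eq. 6
    return annotation_cost + compute_cost                            # Eq. 7

# Example: 100k assessed pairs and 40 GPU hours at iteration 5.
print(total_cost(assessments=100_000, gpu_hours=40, iteration=5))
```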
## 7. Experimental Setup
Next we describe the experimental setup we have devised to study the AL strategies for PLM ranker fine-tuning illustrated above. We develop our investigation along the following three lines of inquiry:
* **RQ1**: What is the effect of the size of the labelled training data on the effectiveness of PLM rankers?
* **RQ2**: How do different active selection strategies influence the effectiveness of PLM rankers?
* **RQ3**: What is the effect of using an active selection strategy to fine-tune a PLM ranker under a constrained budget?
### Passage Collection & Query Sets
For **Scratch**, we use the MS Marco passage collection (Cordes and others, 2018). MS Marco is based on sampled Bing queries and contains 8.8 million passages; its training set contains 530\(k\) training triplets. We use the training portion for fine-tuning and evaluate on the TREC DL 2019 (Kumar et al., 2019) and 2020 (Kumar et al., 2020) with nDCG@10.
For **Re-Train**, we use the TripClick dataset. This dataset contains real user queries and click-based annotations. It consists of 1.5 million passages and 680\(k\) training queries. Test queries are divided with respect to their frequency into three sets of 1,750 queries each: Head, Torso, and Tail. For the Head queries a DCTR (Rasman et al., 2017) click model was used to create relevance signals from the click labels. We evaluate on the Head DCTR and the Torso Raw test sets, as in related work (Kumar et al., 2019; Denton et al., 2019; Denton et al., 2019).
### PLM ranker details
We train MonoBERT, ColBERT and DPR using training triplets with a RankNet loss (Denton et al., 2019). The triplets consist of the query, a relevant and an irrelevant passage; negative passages are taken from the top 1000 BM25 negatives. We train DPR and ColBERT with a batch size of 100, while we use a batch size of 32 for MonoBERT due to its high computational requirements. We train all models for 200 epochs with a learning rate of \(7\times 10^{-6}\) and we use early stopping. For training, we impose a maximum input length of 30 tokens for the query and 200 tokens for the passage; this setting truncates only a few outlier samples in the dataset but provides computational advantages for batching.
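For reference, one common triplet formulation of the RankNet objective is shown below: the probability that the relevant passage outranks the sampled BM25 negative is modelled as a sigmoid of the score margin, and its negative log-likelihood is minimised. Whether exactly this binary-label variant is used cannot be read off the text, so treat it as an illustrative sketch.

```python
import torch
import torch.nn.functional as F

def ranknet_triplet_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    # P(relevant ranked above negative) = sigmoid(s+ - s-); minimise its negative log.
    return -F.logsigmoid(pos_scores - neg_scores).mean()
```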
In **Scratch** we perform fine-tuning from scratch; as underlying PLMs we use DistilBERT (Derbst et al., 2017) for DPR and ColBERT and the bert-base-uncased model (Kumar et al., 2019) for MonoBERT both provided by Huggingface. We choose these models as starting point so that they match the fine-tuned models for **Re-Train**. In **Re-Train** we start with PLM rankers fine-tuned on MS Marco. For DPR we start from TASB (Denton et al., 2019), trained with knowledge distillation and topic-aware sampling; for ColBERT from a ColBERT DistilBERT model trained with knowledge distillation; for MonoBERT from a bert-base-uncased model solely trained on MS Marco.
For MonoBERT and ColBERT, we report results in a re-ranking context, i.e. using these PLM rankers to re-rank the top 1,000 results retrieved by BM25. For DPR, we instead consider a retrieval setting, where all the collection is scored and then only the top 1,000 are used for evaluation. However, the findings we observe for DPR in the retrieval setting are similar to those we obtained for the same PLM in a re-ranking setting (not reported here). We decided to report retrieval results for DPR, rather than re-ranking as for the other two PLM, because DPR is more commonly used for retrieval (while the other two for re-ranking).
### Active learning details
As foundational experiment we train the PLM rankers on different subsets of the training data of differing sizes; as size, we explore the values \([1k,5k,10k,20k,50k,100k,200k]\) for MS Marco and \([1k,5k,10k,20k,50k]\) for TripClick. We repeat these experiments 4 times with different random seeds for sampling the subsets, so that each time we train on different subsets with the same size and we can measure variance.
In our experiments we use random selection as a baseline and increase the training set incrementally. We run the random baseline 4 times with different random seeds.
For the active learning process, we increase the training subsets incrementally as denoted in Algorithm 1. In each iteration we train the PLM ranker from scratch to exclude a potential bias from incrementally training a ranker. In the first iteration we randomly select the first subset with the same random selection across the different active learning strategies. For uncertainty and diversity selection one could select the first batch with the selection strategy, however for QBC this is not possible because different committee members for selection are not available in the first iteration. Therefore we do random selection in the first iteration to be able to fairly compare across the three strategies.
For fast and resource-efficient active selection, we train the PLM rankers for 15 epochs and use the resulting ranker for active selection. For the sake of evaluation, and in order to compare effectiveness at different iterations, we afterwards resume training up to the full 200 epochs.
For the uncertainty-selection and QBC strategies, we score the BM25 top 100 passages of each training query and use these passages for actively selecting the queries for annotation. For the QBC selection strategy we use the same hyper-parameters as Cai et al. (Cai et al., 2018); we use 2 members in the committee and train each member on 80% of the subset available at each iteration for training. We choose the size of the training subsets so that each 80% portion aligns with the other training set sizes.
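A rough sketch of the Query-by-Committee step described above: each of the two committee members (each trained on 80% of the current set) scores the BM25 top-100 passages of every unlabelled query, and the queries with the highest disagreement are sent for annotation. The variance-based disagreement measure below is an illustrative choice, and `member.score` is a placeholder; the exact disagreement measure follows Cai et al. and is not spelled out here.

```python
import numpy as np

def qbc_select(candidates, committee, bm25_top100, k):
    disagreement = {}
    for q in candidates:
        # scores has shape [n_members, n_passages]
        scores = np.stack([member.score(q, bm25_top100[q]) for member in committee])
        disagreement[q] = scores.var(axis=0).mean()   # committee disagreement proxy
    # annotate the k queries the committee disagrees on most
    return sorted(candidates, key=lambda q: disagreement[q], reverse=True)[:k]
```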
For **Scratch** we add \(5,000\) training samples to the training set in each iteration. For **Re-Train** we add \(5,000\) samples in each of the first 2 iterations, until the training size is 10k; from then on we add \(10,000\) samples per iteration in order to decrease computational cost.
### Costs for Budget-Aware Evaluation
For computing the annotation cost, for each triplet added to the training we store the rank of the first relevant document in the ranked list generated by the PLM ranker trained in the previous iteration. Since we do not have a trained PLM ranker in the first iteration, we start with the initial ranking provided by BM25. For the random baseline, we also use the initial BM25 ranking for computing the annotation effort.
We conduct our experiments on servers equipped with NVIDIA A40 GPUs and measure the GPU and CPU hours spent in the training of the PLM ranker and the execution of the selection strategies.
For the computational cost, we refer to common cloud computing costs3 and set \(G_{h}=3.060\$\) and \(C_{h}=0.40\$\). For the number of annotations per hour \(A_{h}\) we rely on estimates from Althammer et al. (Althammer et al., 2018) who conducted an annotation campaign on TripClick test set. Here annotators needed 47.7 seconds to annotate a query-passage pair on average, which corresponds to 75 assessments per hour. For the annotation cost per hour, \(A_{C}\), we assume \(50US\$\) as hourly rate of a domain expert annotator. We also have developed a small web tool4 to let the reader customise computation and annotation costs and the number of assessments per hour; then the reader can observe how the results presented in terms of budget-aware evaluation change according to different cost settings.
Footnote 3: From [https://aws.amazon.com/ec2/pricing/on-demand/](https://aws.amazon.com/ec2/pricing/on-demand/). Costs valid as of 02 January 2023. GPU costs refer to a p3.2xlarge instance and CPU costs to an al.4xlarge instance.
## 8. Results
### RQ1: Effect of Size of Training Data
We visualize the effect of training data size on the effectiveness of PLM rankers for **Scratch** (Figure 1(a)) and for **Re-Train** (Figure 1(b)). The boxplots visualize the range of effectiveness when the PLM ranker is trained on different subsets of the same size.
In both cases, it is observed that as the size of the training data increases, nDCG@10 improves for all three PLM rankers. When considering effectiveness across PLM rankers, it is worth comparing ColBERT and MonoBERT. Recall from the literature that MonoBERT outperforms ColBERT on MS Marco when both are trained on the whole MS Marco training data (Mikolov et al., 2016), and the same holds for TripClick (Zhou et al., 2017). However, in our experiments, we observe that ColBERT outperforms MonoBERT for smaller training data sizes. MonoBERT eventually becomes better than ColBERT, but only once more than 10,000 training samples are used in the **Re-Train** scenario. In the **Scratch** scenario the two rankers become largely indistinguishable when the training data reaches 50,000 samples, and eventually MonoBERT takes the lead thereafter (not shown in the figure).
In **Scratch**, the improvement in effectiveness with increasing training size is particularly remarkable for small training subsets. For example, the improvement from adding 4k training samples, going from 1k to 5k samples, is between 18% and 63% of the median nDCG@10. Noting the wide scale of the y-axis from 0.2 to 0.7 nDCG@10, we observe a large variability when training PLM rankers on limited data. This is particularly the case for MonoBERT, where we find a large difference between maximum and minimum nDCG@10, from 19 points (0.27-0.46 nDCG@10) for 1k samples to 7 points (0.53-0.60) for 10k samples. The worst and the best MonoBERT runs obtained are statistically different for train sizes 1k and 5k. For DPR, the interquartile range is up to a difference of 5 nDCG@10 points (0.38-0.43 for 10k), thus 50% of the effectiveness points are within a range of 5 nDCG@10. A substantial variability in the effectiveness of DPR is observed when trained on 50k samples. The best and the worst runs for DPR are statistically different for 5k and 10k samples. It is noteworthy that the boxplots for 5k samples overlap in part with those for 10k, and similarly the 10k with those for 20k. This means that specific subsets of training data of size 5k (10k) can reach the same effectiveness obtained when training the ranker on double the amount of data: 10k (20k).
For **Re-Train** (Figure 1(b)) we also notice variability in search effectiveness; yet, we observe a relatively smaller variability compared to **Scratch**. The differences between the worst and best runs for each training data size are not statistically significant in this scenario. We suspect that this smaller variability in effectiveness is due to starting from an already fine-tuned PLM ranker instead of training from scratch. Although our empirical results suggest a smaller variability, we still see overlaps of the boxplots, especially between 10k and 20k samples: that is, the same or even better effectiveness could have been reached with half the training data.
These results suggest that it is possible to select subsets of training data that would "speed-up" the learning: in other words, some
Figure 1. Boxplots of nDCG@10 effectiveness on TREC DL 2020 (**Scratch**, Figure 1(a)) and on TripClick Head DCTR test (**Re-Train**, Figure 1(b)), visualizing the variability of training on different training sample sizes. PLM rankers are trained on subsets of the respective sets (MS Marco/TripClick) with different sizes. To measure variability, for each training data size we repeat random sampling 4 times.
subsets of training data can achieve the same or even higher effectiveness as using double the amount of data. This thus serves as a motivation for this paper: is it possible to identify "_high-yield_" training subsets so as to spare annotation costs but yet obtain high effectiveness? To this aim, we investigate the effectiveness of active learning strategies, which we discuss next.
### RQ2: Effectiveness of Active Selection
In Table 1 we report the effectiveness of the active learning strategies from Section 5, along with the random selection baseline, when used for training MonoBERT, ColBERT and DPR across different amounts of training data in scenario **Scratch** and **Re-Train**.
For the random selection baseline we report the mean effectiveness when randomly sampling and training on different subsets of the same size multiple times - we perform four random selections for each training size. Note that because the AL selection strategies are deterministic, there is only one result for each strategy at a certain training size, not multiple runs as for the Random baseline.
Various AL strategies outperform the Random baseline at most training data sizes in **Scratch**. However, these effectiveness gains are not consistent throughout all training sizes: there is no single AL strategy that always performs better than the others and, importantly, that always outperforms the random selection baseline. For example, on TREC DL 2020 the uncertainty-based selection for DPR reaches the highest effectiveness when training with 20k samples, but effectiveness drops noticeably when training with 50k samples. Furthermore, effectiveness gains across all methods are not statistically significant, nor are the improvements substantial. When evaluating the PLM rankers on MS Marco Dev, we find similar results: there are varying, non-statistically significant improvements of the AL strategies over the Random baseline; due to space constraints we do not report these measures here.
The effectiveness results are more consistent across methods and training data sizes in scenario **Re-Train**. Random outperforms all AL selection strategies when using DPR. The QBC strategy reaches slightly higher effectiveness than random selection when ColBERT is used; however, none of the improvements are significant despite the large number of test queries in the TripClick Head and Torso test sets. No statistical significance is found even when the worst random selection run is considered in place of the mean of the random runs.
In summary, we found that for the task of fine-tuning PLM rankers, there is no single active learning selection strategy that consistently and significantly delivers higher effectiveness compared to a random selection of the training data. This is a surprising and interesting result. Active learning has been shown to be effective in natural language tasks (Srivastava et al., 2017), also for methods that rely on PLM models (Krizhevsky et al., 2014): yet, popular AL methods do not work in the context of PLM rankers. However, RQ1 shows that there are subsets of the training data that, when used for fine-tuning PLM rankers, deliver noticeably higher effectiveness than others - but AL methods are unable to identify those high-yield training samples.
### RQ3: Budget-aware Evaluation
Since the goal of actively selecting training data is to minimize the annotation cost, we investigate the active selection strategies in the context of constrained budgets. For this, we use the budget-aware evaluation of Section 5.3, which accounts for the number of assessments needed to annotate the training data as well as the computational cost of the training and selection.
We visualize the effectiveness and associated costs at different training set sizes for the AL strategies for the three PLM rankers in Figure 2(a) for **Scratch** on TREC DL 2020 and in Figure 2(b) for **Re-Train** on TripClick Head DCTR. The lines and the left y-axis refer to the rankers' effectiveness, measured as nDCG@10. The bars and the right y-axis refer to the total cost computed with the budget-aware evaluation. The bars are stacked (the annotation and
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multicolumn{1}{c}{**nDCG@10**} & \multicolumn{10}{c|}{**Scratch: MS Marco**} & \multicolumn{10}{c|}{**Re-Train: TripClick**} \\ \multicolumn{1}{c}{} & \multicolumn{10}{c|}{**TREC DL 2019**} & \multicolumn{10}{c|}{**TREC DL 2020**} & \multicolumn{10}{c|}{**Head test DCTR**} & \multicolumn{10}{c|}{**Torso test raw**} \\ \multicolumn{1}{c}{**Train data size**} & **0** & **5**k** & **10**k** & **20**k** & **50**k** & **0** & **5**k** & **10**k** & **20**k** & **50**k** & **0** & **5**k** & **10**k** & **20**k** & **50**k** & **0** & **5**k** & **10k** & **20k** & **50k** \\ \hline
0 & BM25 & 501 & & & & 475 & & & & 140 & & & & & 206 & & & & \\ \hline
**MonoBERT (re-rank BM25 top 1,000)** & & & & & & & & & & & & & & & & & \\ \multirow{4}{*}{1 Random} & 0.51 & 5935 & 6.272 & 6430 & 6705 &.041 &.5590 &.5871 & 6418 & 6552 &.036 &.1715 &.1833 &.1941 &.2129 &.036 &.2279 &.2352 &.2426 & **.2710** \\
2 & QBC & **6193** & 6157 & 6246 & **6728** & 5507 &.5844 & **6443** & 6630 & **.1731** &.1835 & **.2065** &.2059 &.2046 &.2328 &.2423 &.2679 \\
4 & Uncertainty & 6118 & 6232 & 6588 & 6509 & **5875** & **5873** & 6336 &.6595 & - & **.1920** &.1981 & **.2190** & - & **.2356** &.2672 &.2705 \\
5 & Diversity & 5925 & **6341** & 6448 & 6640 &.5407 & **6237** & **6338** & **6670** & - &.1837 &.1933 &.2123 & - &.2294 & **.2450** &.2650 \\ \hline \multicolumn{1}{c}{**ColBERT (re-rank BM25 top 1,000)**} & & & & & & & & & & & & & & & & & \\ \multirow{4}{*}{6 Random} & 352 & 6176 & **6385** & 6352 & 6614 &.246 &.5944 &.6091 &.6291 &.6577 &.155 & **.1675** &.1770 &.1813 &.1912 &.227 & **.2300** & **.2351** & **.2397** &.2475 \\
7 & QBC & 6192 & 6297 & **6541** & **6680** & 5813 & 6159 & **6511** & **6758** &.1302 & **.1791** & **.1860** & **.1962** &.1558 &.2273 &.2292 &.2360 \\
9 & Uncertainty & 6257 & 6034 & 6089 & 6370 & **5987** & 6076 & 6001 &.6246 & - &.1645 &.1536 &.1753 & - &.2190 &.1909 &.2274 \\
10 & Diversity & **6271** & 6239 & 6402 & 6644 &.5912 & 6038 & 6211 & 6363 & - &.1645 &.1811 &.1957 & - &.2187 &.2362 & **.2481** \\ \hline \multicolumn{1}{c}{**DPR (full retrieval)**} & & & & & & & & & & & & & & & & & & & \\ \multirow{4}{*}{11 Random} & 0.0 & 3674 & **4390** & 4457 & 5006 & 0.0 &.3225 & 3789 & 4190 & 4757 &.1398 & **.1459** & **.1516** & **.1621** & **.200** & **.1837** & **.1745** & **.1924** & **.2023** \\
12 & QBC & 3465 & 4343 & 4628 & **5079** & 3023 & 3849 & 4090 & 4534 &.0849* &.1043 &.1368 &.1603 &.0895 &.1122 &.1312 &.1440 \\
14 & Uncertainty & **3961** & 4067 & 4255 & 3757* & **3660** & 3733 & **4476** & 4254 &.1060 &.1165 &.1283 &.1336 &.0907 &.0946 &.1030 &.1031 \\
15 & Diversity & 3713 & 4086 & 4593 & 4750 &.3437 & 4030 & 4198 & **4998** &.1059 &.1150 &.1163 &.1458 &.0907 &.1041 &.1217 &.1473 \\ \hline \end{tabular}
\end{table}
Table 1. nDCG@10 effectiveness across different amounts of training data for **Scratch** on TREC DL 2019 & 2020 and for **Re-Train** on TripClick Head DCTR & Torso Raw. Bold numbers denote the highest effectiveness for each PLM ranker and training size. Statistically significant differences to the random selection baseline (Random) are denoted with \({}^{*}\) (paired t-test; \(p<0.05\), Bonferroni correction with n=3). For **Scratch**: no consistently best performing method and no statistically significant difference to Random. For **Re-Train**: for DPR, Random is consistently best; all statistically significant differences to Random are significantly lower. '-' indicates no result at that training size.
computational cost), but since with our cost settings the annotation cost greatly exceeds the computational cost, the bars for the GPU and CPU costs are not visible. In all figures the blue line denotes the effectiveness of Random, with the blue shade representing the range measured between the worst and best random selection runs (recall that random selection was ran four times, and Random is the mean effectiveness of these runs).
A first observation is that the main cost factor is the annotation cost, and hence the number of assessments needed to create the training data, which largely outweighs the computational cost. Because of this, in Figures 3(a) (**Scratch**) and 3(b) (**Re-Train**) we further visualise the effectiveness of the AL strategies relative to the number of assessments needed to reach that effectiveness.
Next, we analyse the results for **Scratch** (Figures 2(a) and 3(a)). For MonoBERT, the active selection strategies often provide higher effectiveness than Random when more than 10k samples are available - these effectiveness gains are however not significant. Nonetheless, QBC and diversity require a lower budget than the Random baseline, with savings of up to 15k$ when 50k query-document samples are collected. We note that the uncertainty-based strategy provides similar effectiveness to Random (especially from 20k samples), with no cost savings.
For ColBERT, QBC consistently provides higher effectiveness than Random, however at a much higher cost. For example, when 50k training samples are selected, using QBC costs nearly $200k more than Random, requiring annotations for roughly 200k more query-document pairs. In fact, when approximately the same budget/number of annotations are used, QBC and Random obtain the same effectiveness (in Figure 3(a) compare the last point of Random with the third last point of QBC). Aside from QBC, all other active selection strategies deliver similar or lower effectiveness than Random, at the same or higher cost.
For DPR, the uncertainty-based strategy consistently delivers lower effectiveness than the baseline. QBC and diversity-based selection do provide effectiveness gains when the training data is in the range of 10k to 40-45k samples. For QBC, however, these gains come at a large budget expense: for 30k samples the QBC selection requires 90k$ more annotation budget than Random. The diversity-based strategy instead does deliver some cost savings compared to Random. For example, for Random to reach the same effectiveness as BM25, about 600k annotations are needed, while diversity delivers the same level of effectiveness with only 420k annotations. However, we note that using more annotations with diversity-based sampling does not necessarily translate into a more effective model: going from 600k to about 750k annotations deteriorates the search effectiveness of the ranker.
Looking across PLM rankers, we observe that while the annotation costs across selection strategies are relatively similar for MonoBERT, they are higher for QBC than all other strategies when ColBERT and DPR are considered.
Overall, the selection strategies show relatively unstable effectiveness, the effectiveness can even decrease when training data increases. This is particularly the case for uncertainty selection for DPR: for example, its effectiveness decreases by from 0.45 to 0.37 when the amount of training data doubles from 20k to 40k.
We next analyse the results for scenario **Re-Train**. While some selection strategies provide gains over random selection, these gains largely depend on which PLM is used and the training size (Figure 2(b)). Nevertheless, despite the specific gains in effectiveness, all active selection strategies require more assessments, and thus a
Figure 3. nDCG@10 vs. number of assessed query-passage pairs on TREC DL 2020 (**Scratch**, Figure 3(a)) and on TripClick Head DCTR (**Re-Train**, Figure 3(b)). The number of assessments per sample is measured by the rank of the highest relevant passage during selection. For the Random baseline the blue line denotes the mean; the blue shaded area denotes the range between maximum and minimum effectiveness versus the mean number of assessments. Selection strategies are not consistently more effective when considering the number of assessments needed to annotate the training samples.
Figure 2. nDCG@10 (lines, left y-axis) and stacked annotation and computational cost (bars, right y-axis) for different training data sizes on TREC DL 2020 (**Scratch**, Figure 2(a)) and on TripClick Head DCTR (**Re-Train**, Figure 2(b)). For Random, the blue line denotes the mean; the shaded area denotes the range between minimum and maximum effectiveness. Good results would be expected to lie between the mean and the maximum of Random, bad results between the mean and the minimum. For the stacked cost, only the annotation cost is visible since it greatly exceeds the computational cost.
higher budget, to reach the same level of effectiveness obtained when using random selection (Figure 3(b)).
For MonoBERT, uncertainty selection exhibits (non-significant) improvements when training data is less than 30k. In fact, for small amounts of training data, uncertainty sampling does provide some cost savings: for example MonoBERT with uncertainty sampling needs about 65,000 query-passage pairs assessments to obtain the same effectiveness obtained with random selection with \(\approx 100k\) assessed pairs. However, this effect is lost when the training data size increases further, with the budget required by uncertainty sampling becoming similar (or more in some instances of random selection) to that of Random to obtain the same level of effectiveness. All other active selection strategies, when used with MonoBERT, deliver either lower effectiveness than Random, or higher costs. This is the case particularly for QBC. In fact, although there is one setting in which QBC delivers major cost savings to reach the same effectiveness of the random baseline (QBC achieves nDCG@10 higher than 0.2 using a sensibly lower amount of annotated query-passage pairs), cost savings are not consistent across all training data sizes and larger sizes correspond to a higher number of assessments required compared to Random.
For ColBERT, QBC and diversity selection outperform the baseline from training data sizes of 20k onward. This however comes with a considerable increase in the number of query-passage pairs to be assessed, and thus of the annotation cost. For example, with a training subset of 30k, random selection costs about $200,000 while diversity selection costs nearly double that - but the increase in search effectiveness is marginal. It is interesting to compare these results with those obtained for scenario **Scratch**. While in both scenarios uncertainty selection shows effectiveness losses when more training data is added, and QBC is associated with higher costs, diversity selection performs differently: it provides similar effectiveness for a similar cost in **Scratch**, and a marginal effectiveness improvement at a much higher cost in **Re-Train**.
For DPR, all selection strategies underperform random selection, with the exception of QBC, which provides marginal improvements when the training subset is larger than 30k, but at the expense of a higher budget. The budget-aware evaluation, in fact, shows that all selection strategies require more query-passage pairs to be assessed (higher cost) than the random selection baseline to reach the same search effectiveness (and some strategies cannot even achieve that effectiveness). An example is QBC, which requires 730,000 assessments to reach the same effectiveness obtained by Random with just \(\approx 250k\) assessments.
In summary, in answer to RQ3, we found that the use of the investigated active selection strategies does not deliver consistent budget savings. In our experiments, the budget is largely dominated by the assessment cost and all active selection strategies tend to require a higher amount of query-passage pairs to be annotated than random selection. Even in contexts where assessment is very cheap, active selection would not provide budget savings because more assessments are required for active selection than for random selection. We do note that there are cases where specific active selection strategies provide similar search effectiveness than random selection at a reduced cost. However, these cases occur for specific choices of selection strategy, PLM ranker and training subset size and thus are unlikely to generalise in practice.
## 9. Conclusion
We investigated fine-tuning PLM rankers under limited data and budget. For this, we adapted several active selection strategies, representing different key approaches in active learning that have been shown effective in many natural language processing tasks. Surprisingly, we found that for the task of fine-tuning PLM rankers no AL strategy consistently and significantly outperformed random selection of training data. However we found that there are subsets of the training data which lead to significantly higher effectiveness than others, thus we see it as an important open challenge to be able to automatically identify those training samples. Similarly, our budget-aware evaluation showed that the investigated AL strategies do not deliver consistent _budget savings_ since they require a higher amount of assessments than random selection.
One limitation of our study is that the estimation of annotation costs relies on sparse annotations of the training set. Potentially, the required number of assessments could be lower, since another relevant passage - that is not marked as relevant in the data - could be found earlier in the ranked list. We argue, however, that this should affect all selection strategies and does not benefit one strategy particularly.
Another limitation is the way uncertainty was computed in our experiments. Uncertainty estimation in Information Retrieval is a fundamental but largely unexplored problem (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2015), especially for rankers based on PLMs (Krizhevsky et al., 2015; Krizhevsky et al., 2015). Attempts have been made to exploit uncertainty in relevance estimation for traditional statistical models such as language models and BM25 (Krizhevsky et al., 2015; Krizhevsky et al., 2015), but in these works the actual estimation of uncertainty is based on assumptions and heuristics such as to be related to similarities or covariance between term occurrences (Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015), to follow the Dirichlet distribution (Krizhevsky et al., 2015), or to be computed based on score distributions obtained through query term resampling (Krizhevsky et al., 2015). Recent attempts have been made to model uncertainty for neural rankers, for example Transformer Pointer Generator Network (T-PGN) model (Krizhevsky et al., 2015), or Cohen et al.'s (Cohen et al., 2016) efficient uncertainty and calibration modelling strategies based on Monte-Carlo drop-out (Krizhevsky et al., 2015), but these are not readily applicable to the PLM ranker architectures we consider. In future work we plan to adapt and investigate these uncertainty estimations.
Finally we also highlight that we only considered common baseline active learning methods. More sophisticated AL methods exist (Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015), including alternating between selection types like in AcTune, which alternates active learning and self-training (Krizhevsky et al., 2015), and Augmented SBERT which alternates random selection and kernel density estimation based selection (Krizhevsky et al., 2015). However, each of these approaches present specific challenges to be adapted to ranking. We also were interested to understand the promise AL has for PLM-based rankers, and provide a framework, inclusive of evaluation methodologies and baselines, in which these more advanced methods could be studied.
|
2309.09202 | Examining psychology of science as a potential contributor to science
policy | The psychology of science is the least developed member of the family of
science studies. It is growing, however, increasingly into a promising
discipline. After a very brief review of this emerging sub-field of psychology,
we call for it to be invited into the collection of social sciences that
constitute the interdisciplinary field of science policy. Discussing the
classic issue of resource allocation, this paper tries to indicate how prolific
a new psychological conceptualization of this problem would be. Further, from a
psychological perspective, this research will argue in favor of a more
realistic conception of science which would be a complement to the existing one
in science policy. | Arash Mousavi, Reza Hafezi, Hasan Ahmadi | 2023-09-17T08:07:25Z | http://arxiv.org/abs/2309.09202v1 | # Examining psychology of science as a potential contributor to science policy
###### Abstract
The psychology of science is the least developed member of the family of science studies. It is growing, however, increasingly into a promising discipline. After a very brief review of this emerging sub-field of psychology, we call for it to be invited into the collection of social sciences that constitute the interdisciplinary field of science policy. Discussing the classic issue of resource allocation, this paper tries to indicate how prolific a new psychological conceptualization of this problem would be. Further, from a psychological perspective, this research will argue in favor of a more realistic conception of science which would be a complement to the existing one in science policy.
Keywords: Psychology of science, Philosophy of science, Science and technology policymaking, Social science.
## 1 Introduction
The conventional wisdom holds that science and technology policy is an interdisciplinary enterprise, a crossroad of the social sciences. It seems, however, that some disciplines within the family of social sciences have so far had more opportunities than others to show their potential in the field. Amongst these lucky disciplines, economics has a noteworthy position. For newcomers to the field (from backgrounds other than economics), an impression arises almost immediately, upon first contact with this crossroad of ideas: a flavor of 'economics' dominates the problems, concerns, methodology, and theoretical content of the field. This experience turns out to be not so surprising when one considers that a self-organized and rather unbiased sample of scholars working within the discipline contains a majority of nearly 60% of researchers with backgrounds in economics (see: (Fagerberg & Verspagen)).
The relative dominance of economics, however, is not bad news for newcomers with other backgrounds. Instead, it shows a vast expanse of virgin potentialities for other disciplines to play their own game within the field. The recent history, indeed, has recorded good examples of the revelation of such potentialities. Appreciating the prolific role that Keith Pavitt played in the field, Daniele Archibugi reminds us that:
"Some of the fundamental contributions to our understanding of innovation have come from scholars who, like Pavitt, have no formal training in economics. Economics needed to import fresh blood from other disciplines such as engineering, management sciences, natural sciences, history and philosophy of science and knowledge to understand the determinants and impact of technological change" (Archibugi, 2001).
By importing this fresh blood from other disciplines, the traditional issues in science and technology policy can be reformulated in new ways. This can help in finding novel solutions for them. New perspectives can also create their own set of issues and problems. In addition, there is always the possibility of introducing new conceptual and theoretical frameworks applying insights from different disciplines into the field.
For centuries, formal rules were identified based on available information, while in the information era, scientists should cope with information overload. In the past, the focal question was "how to collect data" and now the main challenge has been changed to "how to manage/ analyze data". Then psychology of science attempts to uncover how scientists' minds are adapting to a new world with too much information (Webster, 2012).
Before the field called itself "psychology of science" and recognized itself as an independent scientific branch, researchers had already shown interest in decoding intelligence, cognition, and personality, the main targets of the psychology of science. For example, Eiduson and Beckman (Eiduson & Beckman, 1973) analyzed scientists in comparison with other people in terms of personality, demographics, and biological traits. However, the findings of such studies were not reliable, and the existence of obvious differences is hardly accepted.
Archibugi's list of examples includes most of the well-known contributing disciplines (Archibugi, 2001). It lacks, though, some less familiar fields of enormous potential. In what follows, we will examine one of these disciplines, a newly emerging sub-field of psychology, the psychology of science, to see how fruitful it would be to invite it into the family of social sciences that contribute to the interdisciplinary field of science policy. We exclude technology policy and issues relating to scientists in industry because psychological research on these topics is still scarce.
After a brief introduction to the psychology of science, we will try to bring into view samples of traditional science policy problems by looking at them from a psychological perspective. Applying some psychological techniques, our challenge will be to find out whether these problems can be addressed in more productive ways. Last but not least, we will seek to propose
some unprecedented issues that can be raised from this new psychological point of view and may help to a more comprehensive science policy.
## 2 The Psychology of Science: a Nascent Discipline
Within the family of science studies (meta-sciences), psychology of science is the youngest member. Far behind such fully established disciplines as philosophy, sociology, and history of science, the psychology of science is still in its earlier stages of development. It does not enjoy yet Joseph Matarazzo's criteria for the establishment of a new field (i.e., its own association or society, journal, postdoctoral training programs, etc.) (Stone, Weiss, Matarazzo, Miller, & Rodin, 1987). Amongst the reasons for this comparative delay, two may be more important. First, the long-lasting image of scientists as some species of comprehensive super-humans with no passions or emotions meddling into their timeless process of self-sufficient rationality did not leave many open doors for psychologists to consider themselves as relevant components in the studies on science. Even these allegedly pure activities of science (discovery, theory evaluation, and so on) have been viewed until relatively recently as exclusive subjects of normative investigations of 'logicians' rather than descriptive issues to be studied by psychologists (see Thagard, 1993)). Second, even now much of the psychology of science is dormant, latent, and implicit. Indeed, there are more than a few talented minds in psychology who are conducting research on scientific thought, behavior, interest, talent, and creativity. They just do not identify themselves with or are not aware of the term 'psychology of science' (G. J. Feist, 2006b).
For about 30 years after the end of the 1930s, when the psychology of science was officially born, it experienced a period of silence (Yanhui & Jianshan, 2019). It became popular in the 1980s, as represented by (Brannigan & Wanner, 1983; Campbell, 1982; Grover, 1981; Simonton, 1988; Tweney, Doherty, & Mynatt, 1981). A Psychology of Science Conference was held in 1986 at Memphis State University, where research groups were formed and the basis of the psychology of science was discussed. After 2000, with "The Psychology of Science and the Origins of the Scientific Mind" by Feist (G. J. Feist, 2008) and "Psychology of Science: Implicit and Explicit Processes" edited by Proctor and Capaldi (Proctor & Capaldi, 2012), the concepts and directions of this new member of the family of science studies were clarified.
Among the very rare reviews of the discipline, is the work done by Gregory Feist and Michael Gorman (G. J. Feist & Gorman, 1998) (revised and extended in (G. J. Feist, 2008)) in which they propose an integrative and organizing model for the psychology of science. This model (see figure 1) summarizes the main factors that lie at the foundation of scientific interest, talent, and achievement. The circles in the model indicate the five major domains in which the current literature is developing: biological, developmental, cognitive, personality, and social psychology of science. The size of this essay does not permit us to provide a detailed survey of each of these sub-disciplines. For each of them, therefore, it may suffice to identify the most important issues and problems and some samples of its findings. This section draws mainly upon Gregory Feist's works (G. J. Feist, 2006a, 2006b, 2006c, 2008; G. J. Feist & Gorman, 1998)
\(\bullet\)_Biological psychology of science_ One of the most interesting issues in psychology of science concerns the biological and genetic roots of scientific talent. Lots of efforts have been devoted to this area to find out whether some unique configurations of genetic factors can explain mathematical genius and creativity. There is also a lively discourse on the role gender plays in science: Are there differences between males and females in mathematical ability? Do men and women produce scientific works at different rates? Is there a gender difference in the quality of these works?
\(\bullet\)_Developmental psychology of science_ Psychologists of science have long been curious about the shape of the curve that describes the relationship between scientific productivity and age. Feist and Gorman's conclusion is that a consensus exists over this issue: "there is a curvilinear relationship between age and productivity, with the peak generally occurring in one's late 30's or early 40's" (G. J. Feist & Gorman, 1998). Some other problems in this domain include: Does producing works early predict later levels of productivity? Are older scientists more resistant to scientific revolutions than younger ones? What role do family members or teachers play in promoting scientific interest? Does being trained by an eminent scientist predict eventual eminence? What role does birth order or religious background play in scientific success or interest?
\(\bullet\)_Cognitive psychology of science_ Most of the research done in the biological and developmental psychology of science has been essentially statistical in nature. To become a well-developed and paradigmatic discipline, however, the psychology of science needs to foster conceptual frameworks
and theories of its own. For this purpose, the cognitive psychology of science has already revealed itself as the most promising area within the discipline. Relying on epistemological insights provided by philosophers of science and also taking advantage of computational terminology and techniques supplied by the researchers working in the field of artificial intelligence, cognitive psychologists of science have begun creating testable models which all try to simulate the basic scientific tasks. These tasks, as Brewer and Mishra suggest, are generally of three kinds: (a) Understanding and evaluating scientific information; (b) Generating new scientific knowledge, and; (c) Disseminating scientific knowledge (Bechtel, Graham, & Balota, 1998).
\(\bullet\)_Personality psychology of science_ Four fundamental topics in this area of research are (a) Consistent personality differences between scientists and non-scientists; (b) Consistent personality differences between eminent and less eminent scientists; (c) Consistent personality differences among scientists of different theoretical persuasions; and finally (d) The directional influence of personality on scientific behavior. Empirical literature over the last 60 years, for example, have converged on a description of scientists as "more conscientious, driven, introverted, emotionally stable, and controlled compared with non-scientists" (G. J. Feist & Gorman, 1998).
\(\bullet\)_Social psychology of science_ Gordon Allport, the renowned American psychologist, defines social psychology as "an attempt to understand and explain how the thought, feeling, and behavior of individuals are influenced by the actual, imagined or implied presence of others" (Allport, 1985). As we are increasingly getting aware of non-cognitive and highly social aspects of scientific practice, the social psychology of science is becoming more relevant as a useful approach to the study of scientists. This sub-discipline has so far provided us with insightful explanations for how new ideas develop, are communicated, are evaluated, and become pervasive within a group of scientists. Applying the well-established theories of social psychology, researchers have been led to deeper understandings concerning such issues as the tensions between orthodoxy and heterodoxy, the issue of peer review and quality monitoring in science, citation patterns, and scientific teamwork. These are, however, only a small proportion of potentialities inherent in the social psychology of science as well as the psychology of science as a whole.
\(\bullet\)_Scientific reasoning_ The concept of "scientific reasoning" is still an open question among researchers, not because no one has attempted to define it, but because it has been studied from different perspectives by various researchers from different scientific fields. The concept of scientific reasoning can be studied from psychological and sociological viewpoints; here we investigate how it is defined through the lens of the psychology of science.
In simple words, reasoning is defined as the explanation of a phenomenon, which is the focal and desired goal of scientific inquiry. From the psychological viewpoint, as Zimmerman noted,
causal reasoning and explanation reflect this conceptualization (Koslowski, 2013; Zimmerman, 2000, 2007). For example, explanation is reflected in the study of an event in order to identify its cause.
As this paper aims to review and summarize existing literature, we distinguish some main views of scientific reasoning and explanations from the psychology of science perspective. These include the following:
1. To detect causal mechanisms: these types of studies focus on a research question that examines the causal event in a specified situation. For example, in such studies, researchers attempt to identify the causal relationship between event X and event Y based on Humean indices such as priority, contiguity, and covariation (Macnabb, 2019). Examples of such studies include (Gopnik, Schulz, & Schulz, 2007; Haim & Benson, 1998; Koslowski & Masnick, 2002; Koslowski & Thompson, 2002; D. Kuhn et al., 1988; Schulz & Gopnik, 2004).
2. The importance of background information: as Koslowski (Koslowski, 2013) noted, applying identified formal rules will fail unless background information about the explanation and causal mechanism is taken into account. This points out that scientists form explanations based on relevant alternatives, rather than all possible alternatives (Boyd, 1989; Darden, 2006; Fine, 1984; Koslowski, 2013; T. S. Kuhn, 2012; Lipton). Note that, specifically, formal rules viewed from a psychological perspective are characterized differently from the Humean indices mentioned above (for more information about psychological formal rules see: (Bonawitz & Lombrozo, 2012; Lombrozo, 2007; Samarapungavan, 1992)).
3. Science differs in theory and in practice: formal rules cannot support perfect science, so the philosophy of science has proposed descriptions of scientific practice based on background information. Inference to the best explanation is an accepted criterion for describing sound scientific practice; it argues that an "explanation" will be agreed upon if it overcomes competing alternative explanations by providing more logical and detailed causal relations than the others (Harman, 1965; Lipton, 2003; Magnani, 2011; Proctor & Capaldi, 2006). In this manner, plausibility is a key factor in accepting the best explanation.
4. In practice, science tends to produce good outcomes rather than to guarantee them: in many scientific fields, the relevant background information needed to form an alternative hypothesis does not exist. In other words, it might not yet have been discovered, proved, or obtained. For example, physicists have not yet proposed a comprehensive model that explains quantum mechanics perfectly, and no definitive treatment for the COVID-19 pandemic has currently been proposed. In such circumstances, explanations are accepted because they produce good outcomes, rather than guarantee them. Consistency is a crucial index for accepting or rejecting a formal rule.
\(\bullet\)_Scientific personality_ One of the many building blocks of scientific thought and behavior is personality. Career interests in general, and scientific career interest and talent in particular, stem from personality and individual differences in thought and behavior (Feist, 2013).
What does _personality_ mean? Individual differences exist in the way people think about past changes to their personality traits (Cochran & Haas, 2020). Personality is that pattern of characteristic thoughts, feelings, and behaviors that distinguishes one person from another and that persists over time and situation (Heinstrom, 2013). Personality is a "pattern of relatively permanent traits and unique characteristics that give both consistency and individuality to a person's behavior" (J. Feist & Feist, 2009). Personality influences how people interact with their environment and interpret the particular meaning of the situations created by the environment (John, Naumann, & Soto, 2008).
Personality traits exist as a multilevel hierarchical structure and are relatively stable over the course of life (Reason, 1994). After 50 years of personality research, there is common agreement in the field that five basic dimensions can be used to describe differences in cognitive, affective, and social behavior. This is the basis for the five-factor model of personality (Heinstrom, 2013). The five dimensions are depicted in Table 1.
The Five-Factor Model (FFM) comprises five bipolar factors: openness (imaginative - down-to-earth), conscientiousness (well organized - disorganized), extraversion (outgoing - reserved), agreeableness (trusting - suspicious), and neuroticism (anxious - calm), each placed on a continuum. All five factors are distributed normally across the population (G. J. Feist, 2008).
_Co-authorship_ Science and technology policy academics and evaluators use co-authorship as a proxy for research collaboration despite knowing better. We know better because anecdotally we understand that an individual might be listed as an author on a particular publication for numerous reasons other than research collaboration. Yet because of the accessibility and other advantages of bibliometric data, co-authorship is continuously used as a proxy for research collaboration (Ponomariov & Boardman, 2016).
In recent decades there has been growing interest in the nature and scale of scientific collaboration. Studies into co-authorship have taken two different approaches. The first one attempts to analyze the reasons why authors collaborate and the consequences of such a decision.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Personality dimension** & **High level** & **Low level** \\ \hline Neuroticism & sensitive, nervous & secure, confident \\ \hline Extraversion & outgoing, energetic & shy, withdrawn \\ \hline Openness to experience & inventive, curious & cautious, conservative \\ \hline Agreeableness & friendly, compassionate & competitive, outspoken \\ \hline Conscientiousness & efficient, organized & easy-going, careless \\ \hline \end{tabular}
\end{table}
Table 1: Personality dimensions and the poles of traits they form (_Heinstrom_, 2013)
The second approach is based on the idea that co-authorship creates a social network of researchers. In recent years, the collaboration between scientists increased. This collaboration can be formal (joint papers, the guidance of doctoral dissertations, and participation in research groups) or informal (arising from the comments of colleagues, reviewers, and editors) (Acedo, Barroso, Casanueva, & Galan, 2006).
_Human research participation_ Research refers to a class of scientific activities designed to develop or contribute to generalizable knowledge. The term "human research", then, refers to research that involves human subjects. Research is a public trust that must be ethically conducted, trustworthy, and socially responsible if the results are to be valuable (Fakruddin et al., 2013).
Over the past few decades, academic discussions in the broad contexts of public engagement in science policy, discourse, and research have taken a "participatory turn" (Jasanoff, 2005). For the last 15 years, Peter Reason has been developing a democratic model of research in which researchers organize a group of people to study a phenomenon of concern to them, regardless of their educational background. Labeled 'human inquiry', this process has always had more theoretical justification than actual description (Reason, 1994).
## 3 A Psychological Perspective on Science Policy
Browsing the existing literature on science policy, one cannot resist Lundvall and Borras's conclusion that "the major issues in science policy are about allocating sufficient resources to science, to distribute them wisely between activities, to make sure that resources are used efficiently and contribute to social welfare" (Lundvall & Borras, 2005). These issues are all of the economic characters. In fact, these issues are the main focus of a relatively new branch of economics, namely the 'economics of science.' Introducing this sub-field of economics, it is interesting that, Ed. Steinmueller inscribes almost the same issues that Lundvall and Borras ((Lundvall & Borras, 2005)) ascribe to science policy. He writes: "Determining the principles governing the allocation of resources to science as well as the management and consequences of the use of these resources are the central issues of the economics of science" (Pavitt, Steinmueller, Calvert, & Martin, 2000). They correspond with two simple but fundamental questions:
1. The question of _why_, i.e., why do we fund science?
2. The question of _how_, i.e., how do (or should) we fund science?1
Footnote 1: A third question may be put forward: _how much_ should we spend on science as compared with other public sectors? This question has received a good deal of economists' attention. We do not have enough space to discuss it here, though.
The most noteworthy answer to the first question has been provided by the 'simple economics' of Richard Nelson (Nelson, 1959) and Kenneth Arrow (Arrow, 1972). Science, they argue, possesses a set of properties which is adequate for considering it to be an instance of the 'public
goods.' Like any other public commodity, it faces, therefore, the challenge of 'appropriability' and needs governmental support.
The second question though has not been as straightforward to answer as the first one. Nor it has been possible to find a purely economic answer for it. Economists, in this case, owe a substantial debt to a sociologist, Robert Merton, for providing them with a good starting point. Science, as Merton demonstrates, is a highly competitive contest over the goal of _priority of discovery_. One who scores in this game is rewarded in varied forms by the scientific community (Merton in a series of articles; cited in (Stephan, 1996)) and by the broader society as well. Dasgupta and David were the first to realize the economic importance of this reward system (Dasgupta & David, 1987). While the scientists are driven and occupied by this non-market-based race over incentives, they are serving the economic requirement to the public disclosure of knowledge (Pavitt et al., 2000; Stephan, 1996). Priority, therefore, is (and ought to be) a good criterion for the allocation of resources to science.
These are not the whole work done in response to the main challenges of science policy. Nonetheless, they may represent the sort of rationality that is dominant in the field and pave the way for us to return to our primary question in this essay, i.e., how can a psychological perspective contribute to reformulating or even finding novel solutions for the main problems in science policy?
The mainstream economic view of science is simple. In this view, the market is the general paradigm for all modern social organizations and science is just one special case of this generic structure, a marketplace of ideas (see: (Mirowski & Sent, 2002)). The internal logic of the production of scientific knowledge, as we saw above, may be different from other market-based institutions, but the result from an economic point of view on resource allocation is the same: those who produce more receive more. Scientific 'productivity' is the most important variable in this view and its measurement is critical for policy decisions over resource allocation.
From a psychological point of view, scientific 'productivity' is not a 'given' variable which should exclusively determine policy decisions. It is, as we saw throughout section 2 above, a function of at least five categories of psychological variables. A scientist's level of productivity is determined by her biological and developmental history, by her cognitive capacities, by her personality, and by the people who are present around her. Even if we are restricted within the single goal of increasing scientific productivity and even if we have in hand the single tool of fund allocation, there are many more opportunities for us to expend our money than simply enriching the strongest links.
Biological psychologists of science are able, for example, to show how effective investing in a campaign against the established stereotype of 'mathematics as a masculine enterprise' would be for the mathematical productivity of the next generation of women (see (Benbow, 1988)). Developmental psychologists of science can help policymakers realize the immense importance of subsidizing the popularization of characters like John Nash in movies like 'A Beautiful Mind' (Nasar, 2011) for the example-making process of our teenagers and, therefore, for the aggregate scientific productivity of the next generation. Cognitive
psychologists have considerable capability to provide science policy with programs that would enhance scientists' self-consciousness while performing their cognitive tasks. Personality psychologists can perform a complementing role in employment filter mechanisms in science which has traditionally made schooling success the only indicator of the potential for scientific contribution (see (Pavitt et al., 2000)). Finally, social psychologists of science can adjust policy maker's understanding of the very measurements of productivity by providing complicated and systematic analyses of publishing and citation patterns.
Our list of examples can go on to include all psychological determinants (positive and negative) of scientific productivity. The underlying logic though is quite simple. A funding regime based on merely 'economic' considerations is a 'posterior' regime. It takes the _status quo_ of scientific production as given and tries to retain and improve it. This system of science governance has lots of shortcomings. It bears, for example, a sort of Matthew effect which diminishes diversity in science and leaves few resources for important scientific activities other than 'production' such as disseminating scientific results (Dasgupta & David, 1987). A psychological approach can make this regime more complete by adding to it an 'anterior' perspective which creates the possibility of enhancing productivity by manipulating its determinants.
## 4 Beyond the Economic Paradigm
There is more to psychology of science than focusing only on scientific productivity. Scientific production is important but just one aspect of scientific life. There is more also for science policy to be concerned about besides merely economic issues like resource allocation. Economics, by its nature, makes a thing-like picture of science, a commodity, which in turn results from a production process conducted by some semi-robotic species called a scientist. This is useful but not the whole picture. Mirowski and Sent are right when they write:
"It is a commonplace observation that economics love the individual; it is just real people that they cannot be bothered about. A way once added that economists also profess to love Science; it is just _real scientists_ that make them nervous" (Mirowski & Sent, 2002)
It is precisely the life-world, _Lebenswelt_, of these _real scientists_ that needs to be studied from within. Science, as a unique sort of social life with its own set of shared practices, beliefs, values, institutions, and structures of interaction, requires the researchers who study it to do more than report raw data. For this complicated world to continue to live healthily, policymakers must enhance their understanding of the very notion of 'the health of science.' The psychology of science, with its methodological complexity (including both objective and hermeneutic methods) and its novel and promising theories1, can play a great part in this process.
## 5 Directions of the psychology of science:
As noted by Gregory D. Webster, in the first decade of the 21st century the psychology of science attracted a steadily rising number of researchers. Figure 2 schematically presents the changes in Google Scholar hit counts from 2000 to 2009 (Webster, 2012).
As presented in Figure 2, publications and citations related to "psychology of science" rose from 66 in 2000 to 189 in 2009. Moreover, linear regression analysis showed that the rising trend was largely linear (Webster, 2012).
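For readers who wish to reproduce this kind of trend analysis, a minimal sketch is given below; only the 2000 and 2009 values are quoted above, so the intermediate yearly counts in the example are illustrative placeholders rather than Webster's data.

```python
# Minimal sketch of the trend analysis: least-squares fit of yearly "psychology of
# science" hit counts. Only the 2000 and 2009 values (66 and 189) are taken from the
# text; the intermediate counts below are illustrative placeholders.
import numpy as np

years = np.arange(2000, 2010)
hits = np.array([66, 75, 88, 99, 112, 124, 138, 152, 170, 189])

slope, intercept = np.polyfit(years, hits, deg=1)
predicted = slope * years + intercept
r_squared = 1 - np.sum((hits - predicted) ** 2) / np.sum((hits - hits.mean()) ** 2)

print(f"yearly increase ~ {slope:.1f} hits/year, R^2 = {r_squared:.3f}")
```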
To provide an updated and comprehensive outlook, we investigated the decade from 2010 to 2019 based on the Google Scholar database. The search strategy was to retrieve publications whose titles contained the exact phrase "psychology of science". After a refining process to eliminate outliers, 32 items were detected. Figure 3 represents publication frequency over time and by type from 2010 to 2019.
The review revealed that the psychology of science still needs more attention and is growing slowly. Moreover, a significant portion of the published works in the last decade were books (10 out of 32), which shows that there are still debates about focal concepts and theoretical foundations. In addition, a new research line was observed, called the "social psychology of science". The social psychology of science is aimed at investigating the social aspects of scientific reasoning, personality, etc., rather than studying scientists as individuals. The core idea is that social factors influence researchers dramatically. Some examples are: (Holtz, 2016; Johnson, 2018; Paletz; Purkhardt, 2015).
## 7 Conclusion
As a subfield of psychology, the psychology of science is still in its infancy. It is growing rapidly though and this rapid growth, the essay suggests, is providing a unique opportunity for science policy-makers and analysts to expand their understanding of science toward new horizons. The management of aggregate scientific behavior of a nation, including quantity and quality of scientific production, can be improved by adding to the knowledge base of managers a systematic comprehension of biological, developmental, cognitive, personality, and social variables that influence scientific behavior. The introspective and participation methods in psychology can add to the statistical and objective investigations of science the possibility of understanding and interpreting the internal environment of science as it is. This equips policymakers with some sort of conservatism which may be useful against periodical waves of radical change.
From the meta-science viewpoint, the psychology of science is totally different from the classic psychology of science. The psychology of science, as an emerging branch of meta-science, aims to use psychological research to uncover how scientists create explanations and to study their characteristics in terms of cognition and personality, especially in comparison with the non-scientist members of society. In the 21st century, the psychology of science distinguished itself from the classic psychology of science and developed emerging branches such as the "social psychology of science".
The social psychology of science challenges standard approaches that attempt to analyze scientists individually in terms of cognition, personality, intelligence, etc., while ignoring social interactions, roles, and cultural factors.
The psychology of science has not yet been recognized as a formal discipline in developing societies. Although scholars with backgrounds in philosophy of science, psychology, sociology, and related fields have made some attempts, these are far from idealized psychology of science research. There are still limitations on developing the psychology of science in developing societies. First, and perhaps most importantly, the discourse of the psychology of science has not been formed and the related concepts are not yet clear. Second, as a developing field, it needs particular research methods, at least to some extent. Third, the scientific community of the psychology of science relies on a multi-disciplinary approach that brings together disciplines such as philosophy of science, psychology, sociology, and science policymaking to form an emerging inter-disciplinary research field. There are very few scholars who have sufficient knowledge of all of these fields. Our initial
recommendation is to establish an independent scientific journal to create a forum for the exchange of views and to initiate a psychology of science scientific community.
**Ethical statement:**
This article does not contain any studies with human participants (i.e., questionnaires, surveys, interviews, etc.) or animals performed by any of the authors.
|
2309.12144 | Towards a minimal description of dynamical correlation in metals | Dynamical correlations and non-local contributions beyond static mean-field
theories are of fundamental importance for describing the electronic structure
of correlated metals. Their effects are usually described with many-body
approaches and non-local dynamical self-energies. We suggest here a class of
simple model self-energies that are a generalization of the static DFT +
Hubbard approach. This formulation, for simplicity called DFT+U({\omega})+V,
provides an intuitive physical picture, a lightweight implementation, and
displays very good agreement with experimental data. | Marco Vanzini, Nicola Marzari | 2023-09-21T15:04:03Z | http://arxiv.org/abs/2309.12144v1 | # Towards a minimal description of dynamical correlation in metals
###### Abstract
Dynamical correlations and non-local contributions beyond static mean-field theories are of fundamental importance for describing the electronic structure of correlated metals. Their effects are usually described with many-body approaches and non-local dynamical self-energies. We suggest here a class of simple model self-energies that are a generalization of the static DFT + Hubbard approach. This formulation, for simplicity called DFT+\(U(\omega)+V\), provides an intuitive physical picture, a lightweight implementation, and displays very good agreement with experimental data.
Computational physics offers an invaluable tool to test our understanding of materials [1; 2; 3]. Several calculated quantities can be compared to the outcome of experiments, yielding often quantitative, besides qualitative, agreement. In particular, first-principles approaches are particularly appealing as they do not rely on any adjustable parameter. The standard method of choice is density-functional theory (DFT) [4; 5; 6], a powerful and popular mean-field theory that often yields reliable ground-state properties. Even if available in principle, it is practically difficult to get information on electronic excited states [7; 8]. The latter can be measured in angle-resolved photoemission experiments (ARPES) [9; 10], and are a primary source of information on the electronic structure of materials [11]. To obtain an accurate description of electronic spectra [12], different and advanced theories are needed, even for simple systems [13; 14]. Two of these, GW [15; 16; 17; 18; 19] and DMFT, although often considered as alternative routes, [20; 21; 22; 23], find their common root in many-body perturbation theory [24; 25; 26]. Together with their refinements and extensions, they are the state-of-the-art tools in first-principles computations. A key class of compounds for which these methods should be applied are correlated materials [27; 28], that can offer several promising technological applications [29; 30]; usually, these are systems with partially filled \(d\) or \(f\) orbitals. DFT+\(U\)[31; 32; 33] and its extensions [34] aimed to include a static Hubbard interaction \(U\) in a mean-field way. Its value, that can be tuned to get additional insight, can also be determined ab-initio [35; 36; 37; 38; 39; 40; 41; 42]. In reality, the Hubbard terms mostly remove the self-interaction error [43; 44] from DFT [45] and often improve the description of electronic interactions in insulators [46; 47]. In metals, however, dynamical correlation effects are essential [48]. Two examples we will consider are the perovskites SrVO\({}_{3}\) and LaNiO\({}_{3}\): the first is a paradigmatic correlated metal, while the second is the only nickelate that doesn't undergo a metal-insulator transition at low temperature. At the quasiparticle (QP) level, correlation is responsible for the reduction of the bandwidth close to the Fermi surface, with respect to, _e.g._, Kohn-Sham DFT states. The ratio between the true and the mean-field bandwidth is a measure of the effective mass \(m^{*}\), which is of fundamental importance for physical understanding and technological applications. This quantity is usually slightly overestimated in DMFT [49; 50], and considerably underestimated in GW [50; 51; 52; 53]. The latter does not capture the importance of localized physics [54], which is often recovered, at least conceptually, by the inclusion of vertex corrections. The former, instead, does not include non-diagonal contributions in the self energy, which can play an important role. Both effects are taken into account by GW+DMFT [55] that yields very good agreement with experiments [50]. Beyond QPs, ARPES experiments reveal additional features in the form of _satellites_[56; 57]. These are purely many-body structures rooted in the quantum correlation between electrons. By definition, they are missing in static approaches like DFT or DFT+\(U\), but are found in both DMFT and GW. Their interpretation as Hubbard bands [58] or plasmonic excitations [51; 54] is still under debate [59; 60]. 
In fact, the physics of both SrVO\({}_{3}\) and LaNiO\({}_{3}\), which is ruled by dynamical and non-local interactions and is very well reproduced by the (computationally expensive) GW+DMFT, is already contained in the simpler but still expensive GW, which qualitatively reproduces most of the experimental findings for these metals. Therefore, it is tempting to simplify, and at the same time model and refine the GW approximation [61; 62; 63; 64; 17; 65; 66; 18; 19; 17; 16; 19; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63] when applied to localized states [54]. This approach is inspired by Refs. [64; 65], where it has been shown that a localized version of COHSEX, the static version of GW, can be identified with the simpler DFT+\(U\). However, as mentioned above, dynamical screening effects (and, second, non-local interactions) cannot be discarded in correlated metals [66], and even DFT+DMFT calculations should include a dynamic \(U(\omega)\), interestingly related to the GW interaction [39], to account at the same time for both band renormalization and satellites [67; 68]. In this work, we propose a form for the self energy that keeps the
full frequency-dependence of GW as well as the simplicity and transparency of DFT+\(U\) or DFT+\(U\)+\(V\)[41; 34]. Due to their similarities, we have called the resulting approach DFT+\(U(\omega)(+V)\).
## II Theory
The GW approximation for the exchange-correlation part of the self energy \(\Sigma_{\rm xc}\) can be regarded as the first term in a perturbative expansion in \(G\) of the Hedin equation for \(\Sigma\)[26; 15]. Both the Green's function \(G\) and the screened interaction \(W\) are built on top of the eigensolutions of a mean field Hamiltonian \(\hat{h}_{0}\), like the DFT one. The latter yields, for each single-particle eigenstate \(\psi_{s}(\mathbf{x})\equiv\langle\mathbf{x}|s\rangle\)[131], an eigenenergy \(\varepsilon_{s}\), solution of the equation \(\hat{h}_{0}\ket{s}=\varepsilon_{s}\ket{s}\), and, at \(T=0\), an occupation number \(n_{s}=\theta(\mu_{0}-\varepsilon_{s})\), with \(\mu_{0}\) the Fermi energy. Regarding \(W\), an RPA approximation is usually considered [26]: the bare Coulomb interaction \(v\) is screened by neutral excitations (labelled by \(\lambda\)) of energy \(\omega_{\lambda}=E_{\lambda}-E_{0}\) (with \(E_{\lambda}\) an eigenenergy of the many-body Hamiltonian) and strength \(W_{\lambda}^{\rm p}\), obtained as interband transitions between occupied and empty states [69; 70; 71; 72]. With the spectral decompositions for the single particle Green's function \(G\) and for the polarization part of the screened interaction \(W_{\rm p}=W-v\),
\[G\left(\mathbf{x},\mathbf{x}^{\prime},\omega\right)=\sum_{s}\frac{\psi_{ s}(\mathbf{x})\psi_{s}^{*}(\mathbf{x}^{\prime})}{\omega-\varepsilon_{s}+i\eta\,{\rm sign }(\varepsilon_{s}-\mu)} \tag{1}\] \[W_{\rm p}\left(\mathbf{x},\mathbf{x}^{\prime},\omega\right)=\sum_{ \lambda\neq 0}\frac{2\omega_{\lambda}W_{\lambda}^{\rm p}\left(\mathbf{x},\mathbf{x}^{ \prime}\right)}{\omega^{2}-\left(\omega_{\lambda}-i\eta\right)^{2}}, \tag{2}\]
(with \(\eta\) a small positive number), the GW self energy \(\Sigma_{\rm xc}^{\rm GW}\), which is the convolution between \(G\) and \(W=v+W_{\rm p}\)
\[\Sigma_{\rm xc}^{\rm GW}\left(\mathbf{x},\mathbf{x}^{\prime},\omega\right)=i\int \frac{d\omega^{\prime}}{2\pi}G\left(\mathbf{x},\mathbf{x}^{\prime},\omega+\omega^{ \prime}\right)W\left(\mathbf{x},\mathbf{x}^{\prime},\omega^{\prime}\right)e^{i\omega^ {\prime}\eta},\]
can be expressed analytically as:
\[\Sigma_{\rm xc}^{\rm GW}\left(\mathbf{x},\mathbf{x}^{\prime},\omega\right) =\sum_{s}\psi_{s}(\mathbf{x})\psi_{s}^{*}(\mathbf{x}^{\prime})\Big{\{} \!-\!n_{s}v\left(\mathbf{x},\mathbf{x}^{\prime}\right)+\] \[\!+\!\!\sum_{\lambda\neq 0}\!\Big{[}\frac{n_{s}}{\omega- \varepsilon_{s}+\omega_{\lambda}-i\eta}\!+\!\frac{1-n_{s}}{\omega-\varepsilon _{s}-\omega_{\lambda}+i\eta}\Big{]}W_{\lambda}^{\rm p}\left(\mathbf{x},\mathbf{x}^{ \prime}\right)\!\Big{\}}.\]
The first line is the static Fock exchange contribution, that acts only on the occupied states for which \(n_{s}\neq 0\), while the second line represents the correlation part of the self energy, in which occupied and empty states are treated symmetrically.
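To make this pole structure concrete, the following minimal numerical sketch evaluates a diagonal matrix element of \(\Sigma_{\rm xc}^{\rm GW}(\omega)\) for a toy set of mean-field energies, occupations and neutral-excitation energies; all parameters are illustrative (they do not describe SrVO\({}_{3}\) or LaNiO\({}_{3}\)), and the matrix elements of \(v\) and \(W_{\lambda}^{\rm p}\) are taken state-independent for simplicity.

```python
# Toy evaluation of the analytic GW self energy above (a diagonal matrix element).
# All parameters are illustrative; v_me and Wp_me stand for the (state-independent,
# for simplicity) matrix elements of the bare interaction and of the pole strengths.
import numpy as np

eta = 0.05                             # small positive broadening (eV)
eps = np.array([-1.0, 0.5, 2.0])       # mean-field energies eps_s (eV)
occ = np.array([1.0, 0.0, 0.0])        # occupations n_s at T = 0
omega_l = np.array([3.0, 8.0])         # neutral-excitation energies omega_lambda (eV)
v_me = 1.0                             # bare-interaction matrix element (toy value, eV)
Wp_me = np.array([0.4, 0.2])           # pole strengths W_lambda^p (toy values, eV)

def sigma_xc(omega):
    """Static exchange plus frequency-dependent correlation poles."""
    sig = 0.0 + 0.0j
    for e_s, n_s in zip(eps, occ):
        sig += -n_s * v_me  # Fock exchange: occupied states only
        sig += np.sum(n_s * Wp_me / (omega - e_s + omega_l - 1j * eta)
                      + (1 - n_s) * Wp_me / (omega - e_s - omega_l + 1j * eta))
    return sig

for w in np.linspace(-10.0, 10.0, 5):
    s = sigma_xc(w)
    print(f"omega = {w:6.1f} eV   Re Sigma = {s.real:+.3f}   Im Sigma = {s.imag:+.3f}")
```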
_The GW self-energy matrix elements._ The matrix elements of the GW self energy between two generic states \(\ket{\alpha}\) and \(\ket{\beta}\) can be expressed in terms of the matrix elements of the \(\lambda\) component of the polarization part of the Coulomb interaction \(W_{\lambda,\alpha\beta}^{\rm p,ss}\) and of the bare interaction \(v_{\alpha\beta}^{\rm ss}\), both generically defined as \(f_{ad}^{bc}=\bra{ab}\hat{f}\ket{cd}=\int d^{3}xd^{3}x^{\prime}\phi_{a}^{*}(\mathbf{x})\phi_{c}(\mathbf{x})f\left(\mathbf{x},\mathbf{x}^{\prime}\right)\phi_{b}^{*}(\mathbf{x}^{\prime})\phi_{d}(\mathbf{x}^{\prime})\)[26]:
\[\bra{\alpha}\hat{\Sigma}_{\rm xc}^{\rm GW}\left(\omega\right) \ket{\beta}=\sum_{s}\!\Big{\{}\!-\!n_{s}v_{\alpha\beta}^{ss}\!+\] \[\!+\!\!\sum_{\lambda\neq 0}\left[\frac{n_{s}}{\omega- \varepsilon_{s}+\omega_{\lambda}-i\eta}+\frac{1-n_{s}}{\omega-\varepsilon_{s} -\omega_{\lambda}+i\eta}\right]W_{\lambda,\alpha\beta}^{\rm p,ss}\Big{\}}.\]
Gathering together the \(n_{s}\) terms reconstructs the matrix element of the fully screened interaction \(W\) plus a correction:
\[\bra{\alpha}\hat{\Sigma}_{\rm xc}^{\rm GW}\left(\omega\right) \ket{\beta}=\sum_{s}\!\Big{\{}\!-\!n_{s}W_{\alpha\beta}^{ss}\left(\omega- \varepsilon_{s}\right)+\] \[\!\!+\sum_{\lambda\neq 0}\frac{W_{\lambda,\alpha\beta}^{\rm p,ss}}{ \omega-\varepsilon_{s}-\omega_{\lambda}+i\eta}\Big{\}}. \tag{3}\]
_COHSEX._ The COHSEX approximation [72; 15; 17; 73], from which DFT+\(U\) can be derived [64; 65], is a static approximation to GW, and it can be obtained from the latter when only \(\omega=\varepsilon_{s}\) is retained in the frequency argument of \(W\). Its matrix element reads:
\[\bra{\alpha}\hat{\Sigma}_{\rm xc}^{\rm COHSEX}\ket{\beta}=\sum_{s}\!\Big{\{}\! -\!n_{s}W_{\alpha\beta}^{ss}\left(0\right)+\sum_{\lambda\neq 0}\frac{W_{\lambda, \alpha\beta}^{\rm p,ss}}{-\omega_{\lambda}+i\eta}\Big{\}}.\]
The latter term involves a sum over all possible transitions. However, via the spectral representation of \(W_{\rm p}\), Eq. (2), it can be resummed to yield the simple quantity \(\frac{1}{2}W_{\alpha\beta}^{\rm p,ss}(0)\), so that the expression above assumes the expected scissor-like [61] form, that pushes up and down empty and occupied states respectively:
\[\bra{\alpha}\hat{\Sigma}_{\rm xc}^{\rm COHSEX}\ket{\beta}=\sum_{s}\!\Big{\{} \Big{(}\frac{1}{2}-n_{s}\Big{)}W_{\alpha\beta}^{ss}\left(0\right)-\frac{1}{2} v_{\alpha\beta}^{ss}\Big{\}}, \tag{4}\]
particularly suited for the link to DFT+\(U\) (see below).
_GW as a dynamical COHSEX._ Motivated by the form of Eq. (4), it is useful to recast the GW matrix element in Eq. (3) as:
\[\bra{\alpha}\hat{\Sigma}_{\rm xc}^{\rm GW}\left(\omega\right) \ket{\beta}=\sum_{s}\!\Big{\{}\Big{(}\frac{1}{2}-n_{s}\Big{)}W_{\alpha\beta}^{ ss}\left(\omega-\varepsilon_{s}\right)\!-\!\frac{1}{2}v_{\alpha\beta}^{ss}\!+\] \[\!+\!\sum_{\lambda\neq 0}\frac{\left(\omega-\varepsilon_{s}\right)W_{ \lambda,\alpha\beta}^{\rm p,ss}}{\left(\omega-\varepsilon_{s}\right)^{2}-\left( \omega_{\lambda}-i\eta\right)^{2}}\Big{\}}. \tag{5}\]
The terms in the first line constitute a natural dynamic generalization of COHSEX, Eq. (4), to which the whole expression reduces once the approximation \(\omega=\varepsilon_{s}\) is considered. The last term, in fact, is a dynamical correction that goes to zero in that limit, but it is not obvious how to express it as a closed expression in the matrix elements of \(\hat{W}\) and \(\hat{v}\) only. However, in the spirit of having a simple final expression, by multiplying and dividing it by \(2\omega_{\lambda}\),
and replacing the \(\omega_{\lambda}\) at the denominator by a constant \(\omega_{0}\), whose value we will discuss in the following, we can make use of the exact spectral decomposition of Eq. (2) to get:
\[\langle\alpha|\,\hat{\Sigma}_{\rm xc}\left(\omega\right)|\beta \rangle=\sum_{s}\Bigl{\{}\Bigl{[}\frac{1}{2}\Bigl{(}1+\frac{\omega-\varepsilon _{s}}{\omega_{0}}\Bigr{)}-n_{s}\Bigr{]}\times\\ \times W^{ss}_{\alpha\beta}\left(\omega-\varepsilon_{s}\right)- \frac{1}{2}\Bigl{(}1+\frac{\omega-\varepsilon_{s}}{\omega_{0}}\Bigr{)}v^{ss}_ {\alpha\beta}\Bigr{\}}. \tag{6}\]
The introduction of \(\omega_{0}\), which plays an analogous role to the one introduced in [70], is, together with the localization process described below, the main approximation of this work; however, through this, we achieve the goal of expressing the GW self energy matrix element in a form that is the straightforward dynamical generalization of Eq. (4).
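The quality of this replacement can be checked directly: for a single-pole (plasmon-pole-like) \(W\), choosing \(\omega_{0}\) equal to the pole energy makes Eq. (6) coincide with the exact GW matrix element of Eq. (3), so the error introduced by \(\omega_{0}\) is controlled by the spread of the \(\omega_{\lambda}\) around it. A minimal numerical sketch of this check, with toy parameters, follows.

```python
# Toy check of the omega_0 replacement: for a single-pole (plasmon-pole-like) W and
# omega_0 equal to the pole energy, the model of Eq. (6) reproduces the exact GW
# matrix element of Eq. (3) (up to the broadening eta). All parameters are illustrative.
import numpy as np

eta = 1e-3
eps = np.array([-1.0, 1.5])      # mean-field energies eps_s
occ = np.array([1.0, 0.0])       # occupations n_s
omega_l = 4.0                    # single neutral-excitation energy
v_me, Wp_me = 1.0, 0.5           # bare-interaction and pole-strength matrix elements

def W_of(omega):
    """Fully screened interaction W(omega) = v + W_p(omega) for a single pole."""
    return v_me + 2.0 * omega_l * Wp_me / (omega**2 - (omega_l - 1j * eta) ** 2)

def sigma_eq3(omega):
    return sum(-n * W_of(omega - e) + Wp_me / (omega - e - omega_l + 1j * eta)
               for e, n in zip(eps, occ))

def sigma_eq6(omega, omega_0=omega_l):
    return sum((0.5 * (1.0 + (omega - e) / omega_0) - n) * W_of(omega - e)
               - 0.5 * (1.0 + (omega - e) / omega_0) * v_me
               for e, n in zip(eps, occ))

for w in (-3.0, 0.7, 5.0):
    print(f"omega = {w:5.1f}   |Eq.(3) - Eq.(6)| = {abs(sigma_eq3(w) - sigma_eq6(w)):.2e}")
```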
_Localization in the correlated subspace:_ We have not specified so far the states that should undergo the action of \(\Sigma\). In this work, in the same spirit as DMFT, we restrict ourselves to correlated metals, where it is possible to identify a certain manifold of _correlated_ states. These are often non-dispersing low-energy bands, not fully occupied nor fully empty, with a strong \(d\) or \(f\) character, that are not quantitatively well described within DFT. We call this subspace \(\mathcal{C}\), and we describe it by a set of localized orbitals \(\{\ket{I,m}\}\), where \(I\) identifies the atom and \(m\) the orbital quantum numbers [34], \(\mathcal{C}=\cup_{I}\mathcal{C}_{I}\). For the purpose of the present derivation, it is not important whether these are atomic orbitals, Wannier functions or other. However, if they were Wannier functions [74], they could be built from a larger set of bands, \(\mathcal{W}\supseteq\mathcal{C}\), to improve their localization. For instance, if \(\mathcal{C}\) is a certain \(d\) manifold, Wannier functions can be built from \(\mathcal{C}\) itself (what is called the \(d\) _model_) or from a larger manifold that contains all bands with which the \(d\) states are mostly entangled (often \(p\) bands, hence the name \(dp\) model) [75].
Following Refs. [64; 65], each state \(\ket{s}\) can be decomposed into a purely itinerant component \(\ket{s^{\rm it}}\) and a localized contribution \(\sum_{I,m\in\mathcal{C}}\ket{I,m}\bra{I,m}\). We assume that any matrix element of any Coulomb interaction involving an itinerant and a localized contribution be zero. Thus, the matrix element of the self energy between the states \(\ket{I,m}\) and \(\ket{J,m^{\prime}}\in\mathcal{C}\) reads:
\[\langle I,m|\,\hat{\Sigma}_{\rm xc}\left(\omega\right)|J,m^{ \prime}\rangle=\sum_{s,Km_{1},Lm_{2}\in\mathcal{C}}\langle K,m_{1}|s\rangle \left\langle s|L,m_{2}\right\rangle\times\\ \times\Bigl{\{}\Bigl{[}\frac{1}{2}\Bigl{(}1+\frac{\omega- \varepsilon_{s}}{\omega_{0}}\Bigr{)}-n_{s}\Bigr{]}W^{Lm_{2},Km_{1}}_{Im,Jm^{ \prime}}\left(\omega-\varepsilon_{s}\right)+\\ -\frac{1}{2}\Bigl{(}1+\frac{\omega-\varepsilon_{s}}{\omega_{0}} \Bigr{)}v^{Lm_{2},Km_{1}}_{Im,Jm^{\prime}}\Bigr{\}}. \tag{7}\]
Of this general formula we will consider the simplest situation in which only the diagonal elements of the interactions are retained [34], \(W^{Lm_{2},Km_{1}}_{Im,Jm^{\prime}}(\omega)\approx\delta_{KI}\delta_{m_{1}m} \delta_{LJ}\delta_{mm^{\prime}}W^{Jm^{\prime},Im}_{Im,Jm^{\prime}}(\omega)\), and similarly for \(v\).
_On-site contributions:_ To further simplify [77], we define \(U^{I}(\omega)\) as the average of those matrix elements on the site of the correlated atom \(I\) (_e.g._, vanadium or nickel):
\[U^{I}\left(\omega\right):=\frac{1}{N_{I}^{2}}\sum_{mm^{\prime}\in\mathcal{C}_ {I}}W^{Im^{\prime},Im}_{Im,Im^{\prime}}\left(\omega\right), \tag{8}\]
and analogously for \(U^{I}_{\infty}\), defined as the average of the matrix elements of the bare Coulomb interaction \(v\), with \(N_{I}\) the total number of \(m\) states for the atom \(I\), see Fig. 2. This definition is very similar to the one usually adopted in cRPA [39], the difference stemming from the fact that here we do not discard any excitation channel, but the full RPA \(\epsilon^{-1}(\omega)\) is retained (see Fig. 1 and Fig. 3). With the introduction of \(U^{I}(\omega)\) in place of the tensor \(W^{Im^{\prime},Im}_{Im,Im^{\prime}}(\omega)\), we greatly simplify the site and orbital dependency of the on-site \(J=I\) self energy. In fact, introducing the matrix elements of the mean-field density matrix and Hamiltonian,
\[\sum_{s}n_{s}\ket{s}\bra{s}=\hat{\gamma}_{0};\qquad\sum_{s}\varepsilon_{s} \ket{s}\bra{s}=\hat{h}_{0},\]
in the localized basis:
\[n^{IJ}_{mm^{\prime}}=\left\langle I,m\right|\hat{\gamma}_{0}\left|J, m^{\prime}\right\rangle=\sum_{s}n_{s}\left\langle I,m|s\right\rangle\left\langle s |J,m^{\prime}\right\rangle \tag{9}\] \[\varepsilon^{IJ}_{mm^{\prime}}=\left\langle I,m\right|\hat{h}_{0} \left|J,m^{\prime}\right\rangle=\sum_{s}\varepsilon_{s}\left\langle I,m|s \right\rangle\left\langle s|J,m^{\prime}\right\rangle,\]
the self energy Eq. (7) becomes (when \(J=I\)):
\[\left\langle I,m\right|\hat{\Sigma}_{\text{xc}}\left(\omega \right)\left|I,m^{\prime}\right\rangle=\sum_{m^{\prime\prime}}\\ \left\{\frac{1}{2}\Big{(}\delta_{mm^{\prime\prime}}+\frac{\omega \delta_{mm^{\prime\prime}}-\varepsilon^{II^{\prime}}_{mm^{\prime\prime}}}{ \omega_{0}}\Big{)}-n^{II}_{mm^{\prime\prime}}\right\}\times\\ \times\left\langle I,m^{\prime\prime}\right|U^{I}(\omega-\hat{h} _{0})\left|I,m^{\prime}\right\rangle+\\ -\frac{1}{2}\Big{(}\delta_{mm^{\prime}}+\frac{\omega\delta_{mm^ {\prime}}-\varepsilon^{II^{\prime}}_{mm^{\prime}}}{\omega_{0}}\Big{)}U^{I}_{ \infty}, \tag{10}\]
where \(U^{I}(\omega-\hat{h}_{0})\) should be interpreted as a power expansion series, and the validity of the identity \(\mathbf{1}_{\mathcal{C}_{I}}=\sum_{m^{\prime\prime}}\left|I,m^{\prime\prime} \right\rangle\left\langle I,m^{\prime\prime}\right|\) depends on the disentanglement of the \(\mathcal{C}_{I}\) submanifold from the rest of the bands. In this situation, to a good approximation the off-diagonal elements of both matrices \(n^{II}_{mm^{\prime}}\) and \(\varepsilon^{II}_{mm^{\prime}}\) in \(\mathcal{C}_{I}\) can be considered negligible with respect to the diagonal ones [52, 78], as we have verified for both SrVO\({}_{3}\) and LaNiO\({}_{3}\). As a consequence, the self-energy matrix \(\left\langle I,m\right|\hat{\Sigma}_{\text{xc}}\left(\omega\right)\left|I,m^{ \prime}\right\rangle\equiv\Sigma^{I}_{\text{xc}\,mm^{\prime}}\left(\omega\right)\) becomes diagonal and reads:
\[\Sigma^{I}_{\text{xc}\;m}\left(\omega\right)=\left[\frac{1}{2} \Big{(}1+\frac{\omega-\varepsilon^{I}_{m}}{\omega_{0}}\Big{)}-n^{I}_{m}\right] U^{I}(\omega-\varepsilon^{I}_{m})+\\ -\frac{1}{2}\Big{(}1+\frac{\omega-\varepsilon^{I}_{m}}{\omega_{0 }}\Big{)}U^{I}_{\infty}, \tag{11}\]
with the shorthands \(n^{I}_{m}\) and \(\varepsilon^{I}_{m}\) for \(n^{II}_{mm}\) and \(\varepsilon^{II}_{mm}\) respectively. Finally, the self-energy operator assumes the form of a projector operator onto the correlated subspace \(\mathcal{C}\) only:
\[\hat{\Sigma}_{\text{xc}}\left(\omega\right)=\sum_{I,m}\Sigma^{I}_{\text{xc},m }\left(\omega\right)\left|I,m\right\rangle\left\langle I,m\right|. \tag{12}\]
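A minimal numerical sketch of Eq. (11) is given below; the single-pole model interpolating between \(U(0)\) and \(U_{\infty}\), and all parameter values, are illustrative assumptions rather than the ab-initio \(U(\omega)\) of a real material.

```python
# Minimal sketch of the on-site self energy of Eq. (11) for one correlated orbital.
# U(omega) is modelled with a single pole interpolating between the statically
# screened U(0) = U0 and the bare U_inf; all parameter values are illustrative.
import numpy as np

U_inf, U0 = 16.0, 3.5        # bare and statically screened on-site interactions (eV)
omega_p = 15.0               # effective screening (pole) energy (eV)
omega_0 = omega_p            # constant omega_0 entering Eq. (11)
eta = 0.1

def U_of(omega):
    """Single-pole model: U(0) = U0 and U(omega -> infinity) = U_inf."""
    return U_inf + (U0 - U_inf) * omega_p**2 / (omega_p**2 - (omega + 1j * eta) ** 2)

def sigma_onsite(omega, eps_m, n_m):
    """Diagonal element Sigma^I_xc,m(omega) of Eq. (11)."""
    pref = 0.5 * (1.0 + (omega - eps_m) / omega_0)
    return (pref - n_m) * U_of(omega - eps_m) - pref * U_inf

# a t2g-like orbital at eps_m = 0.3 eV with occupation n_m = 1/3
for w in np.linspace(-4.0, 4.0, 5):
    s = sigma_onsite(w, 0.3, 1.0 / 3.0)
    print(f"omega = {w:5.1f} eV   Re Sigma = {s.real:+.3f}   Im Sigma = {s.imag:+.3f}")
```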
_Inclusion of inter-site terms:_ To complete our minimal description of correlated metals, off-site effects beyond the on-site \(U^{I}(\omega)\) shall be taken into account, combining \(\left\langle I,m\right|\hat{\Sigma}(\omega)\left|I,m\right\rangle\) with the matrix element of the self energy between different but neighboring atoms \(I\) and \(J\), \(\left\langle I,m\right|\hat{\Sigma}\left|J,m^{\prime}\right\rangle\), with \(J\neq I\). In this case, the matrices \(n^{IJ}_{mm^{\prime}}\) and \(\varepsilon^{IJ}_{mm^{\prime}}\) connect states of different atoms (_e.g._, the transition-metal \(d\) states and the ligand \(p\) states), hence their physics lies intrinsically in the cross terms (in general, at fixed \(I\) and \(J\), these are not even square matrices in \(m\) and \(m^{\prime}\)). Although a dynamical approach is possible, the simplifications we have implemented above cannot hold anymore, and the resulting expression would not be transparent. On the other hand, taking the _static_ COHSEX approximation \(\omega=\varepsilon_{s}\) in Eq. (7) and keeping only the diagonal elements of the interactions results in:
\[\left\langle I,m\right|\hat{\Sigma}_{\text{xc}}\left|J,m^{\prime} \right\rangle=\\ =\Big{[}\frac{1}{2}\delta_{IJ}\delta_{mm^{\prime}}-n^{IJ}_{mmm^{ \prime}}\Big{]}W^{Jm^{\prime},Im}_{Im,m^{\prime}}\left(0\right)-\frac{1}{2} \delta_{IJ}\delta_{mm^{\prime}}v^{Jm^{\prime},Im}_{Im,m^{\prime}}.\]
The on-site \(J=I\) terms, that would yield the static DFT+\(U\), have been replaced by the frequency-dependent self energy of Eq. (11). The \(J\neq I\) terms, instead, contribute as:
\[\left\langle I,m\right|\hat{\Sigma}_{\text{xc}}\left|J,m^{\prime}\right\rangle \overset{I\neq J}{=}-n^{IJ}_{mm^{\prime}}W^{Jm^{\prime},Im}_{Im,jm^{\prime}} \left(0\right).\]
Analogously to \(U^{I}(\omega)\), we average the matrix \(W^{Jm^{\prime},Im}_{Im,jm^{\prime}}(0)\), defining the off-site Hubbard interactions:
\[V^{IJ}:=\frac{1}{N_{I}N_{J}}\sum_{mm^{\prime}}W^{Jm^{\prime},Im}_{Im,jm^{\prime \prime}}\left(\omega=0\right), \tag{13}\]
which represents the strength of the fully screened Coulomb interaction between atoms \(I\) and \(J\) at zero frequency. Finally, this contribution to the self-energy correction reads:
\[\hat{v}_{V}:=-\sum_{I\neq J}V^{IJ}\sum_{mm^{\prime}}n^{IJ}_{mm^{\prime}}\left|I, m\right\rangle\left\langle J,m^{\prime}\right| \tag{14}\]
in agreement with the off-site generalization of DFT+\(U\) proposed in [34], but with a natural definition of \(V\) as a screened many-body quantity. With this additional term, that gathers the most important static but inter-site contributions discarded in the derivation of DFT+\(U(\omega)\), we aim to cover the same non-local physics [80, 79] included via a screened exchange term in SEx-DDMFT [81, 82] or the full GW+DMFT [53]. Here, however, both the static inter-site as well as the dynamical on-site terms have the same GW origin (and both do contain non-local physics). Furthermore, we have exactly disentangled inter- from on-site contributions, and the fact that the former is static and the latter dynamic descends here from a simplicity argument: while frequency-dependence naturally arises in the on-site part, it is less straightforward in the inter-site contribution.
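The sketch below illustrates, with toy matrices, how \(V^{IJ}\) of Eq. (13) is obtained as an orbital average of the static screened interaction and how it enters the inter-site block \(\left\langle I,m\right|\hat{v}_{V}\left|J,m^{\prime}\right\rangle=-V^{IJ}n^{IJ}_{mm^{\prime}}\) of Eq. (14).

```python
# Toy illustration of the inter-site ingredients, Eqs. (13)-(14): V^{IJ} as the
# orbital average of the static screened interaction between two neighbouring atoms,
# and the corresponding block of the correction operator. Shapes and values are toy.
import numpy as np

rng = np.random.default_rng(0)
N_I, N_J = 5, 3                               # e.g. d states on atom I, p states on atom J

# W^{Jm',Im}_{Im,Jm'}(omega = 0): static screened matrix elements (toy values)
W0_IJ = 0.6 + 0.1 * rng.random((N_I, N_J))

# Eq. (13): orbital average
V_IJ = W0_IJ.sum() / (N_I * N_J)

# n^{IJ}_{mm'}: inter-site block of the mean-field density matrix (toy values)
n_IJ = 0.05 * rng.standard_normal((N_I, N_J))

# Eq. (14): block <I,m| v_V |J,m'> = -V^{IJ} n^{IJ}_{mm'} for the pair (I, J)
v_V_block = -V_IJ * n_IJ

print(f"V^IJ = {V_IJ:.3f} (same units as W)")
print(v_V_block)
```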
_The Hartree term:_ To treat the whole of the electron-electron interactions on the same footing, the mean-field Hartree contribution should also be expressed in terms of the localized orbitals [64, 65]. In this representation, the density reads \(n_{\text{loc}}(\mathbf{x}):=\sum_{Km_{1},Lm_{2}}\psi_{K,m_{1}}(\mathbf{x})n^{KL}_{m_{1}m_{2}}\psi^{\dagger}_{L,m_{2}}(\mathbf{x})\), and the matrix element of the Hartree operator is:
\[\left\langle I,m\right|\hat{v}^{\text{loc}}_{\text{H}}\left|J,m^{ \prime}\right\rangle=\int d^{3}x\,\psi^{*}_{Im}(\mathbf{x})v^{\text{loc}}_{\text{H}}( \mathbf{x})\psi_{Jm^{\prime}}(\mathbf{x})=\\ =\int d^{3}xd^{3}x^{\prime}\,\psi^{*}_{Im}(\mathbf{x})n_{\text{loc}}( \mathbf{x}^{\prime})v(\mathbf{x},\mathbf{x}^{\prime})\psi_{Jm^{\prime}}(\mathbf{x})=\\ =\sum_{Km_{1},Lm_{2}}n^{KL}_{m_{1}m_{2}}v^{Lm_{2},Jm^{\prime}}_{Im, Km_{1}},\]
which becomes the simpler \(\left\langle I,m\right|\hat{v}^{\mathrm{loc}}_{\mathrm{H}}\left|J,m^{\prime}\right\rangle=\delta_{IJ}\delta_{mm^{\prime}}\big{\{}n^{I}U_{\infty}^{I}+\sum_{K\neq I}n^{K}V_{\infty}^{IK}\big{\}}\) within the same approximations as above, with \(n^{I}=\sum_{m\in\mathcal{C}_{I}}n_{mm}^{II}\) the total occupancy of the correlated manifold \(\mathcal{C}_{I}\). As expected, the Hartree term describes electrostatic physics (\(\delta_{IJ}\)), depending on both the total charge on the same site \(I\) (\(n^{I}\)) and on different sites \(K\neq I\) (\(n^{K}\)), which interact through the on-site (\(U^{I}\)) and off-site (\(V^{IK}\)) bare (\(\infty\)) interactions.
_Self-interaction correction:_ The diagonal matrix element of the full self energy \(\hat{\Sigma}^{I}=\hat{v}_{\mathrm{H}}^{\mathrm{loc},I}+\hat{\Sigma}_{\mathrm{xc}}^{I}\) (Hartree _and_ exchange-correlation) acting on states in \(\mathcal{C}_{I}\) of the atom \(I\) can thus be written as:
\[\Sigma_{m}\left(\omega\right)=(n-n_{m})U_{\infty}+\sum_{K\neq I} V_{\infty}^{IK}n^{K}+\\ +\bigg{[}\frac{1}{2}\Big{(}1+\frac{\omega-\varepsilon_{m}}{ \omega_{0}}\Big{)}-n_{m}\bigg{]}\left(U(\omega-\varepsilon_{m})-U_{\infty} \right)\!, \tag{15}\]
where the superscript \(I\) is implied everywhere. In the infinite-frequency limit \(\omega\to\infty\), when particles do not have time to polarize, the previous expression becomes static and real, and it reduces to \((n-n_{m})U_{\infty}\) (neglecting the \(V\)-part, whose interpretation is as above and is clearly self-interaction free). Its orbital dependence removes the self-interaction error of a naive Hartree-like contribution \(nU_{\infty}\): when the state \(\left|m\right\rangle\) is empty, this term describes the action of \(n\) particles interacting through the bare Coulomb interaction \(U_{\infty}\). Conversely, if \(\left|m\right\rangle\) is occupied, only \(n-1\) particles do enter the self energy [83]. Thus, as expected in the infinite-frequency limit [84], the full self energy tends to a self-interaction-free, orbital-dependent Hartree-Fock-like term:
\[\Sigma_{m}\left(\omega\right)\stackrel{{\omega\to\infty}}{{ \longrightarrow}}v_{m}^{\mathrm{Hx}}\equiv(n-n_{m})U_{\infty}+\sum_{K\neq I} V_{\infty}^{IK}n^{K} \tag{16}\]
This observation allows us to identify the second line of Eq. (15), that is complex valued, frequency dependent, and non-zero for finite frequencies only, with the purely correlation part of the self energy.
_Double counting:_ The self energy in Eq. (15) should be used perturbatively as a correction to the \(\mathcal{C}\) part of the spectrum of \(\hat{h}_{0}\). When considering the state \(\left|m\right\rangle\), all electron-electron interaction effects are accounted for by \(\Sigma_{m}\left(\omega\right)\), Eq. (15), and Eq. (14) for the off-site contributions. However, these are also included in \(\hat{h}_{0}\), although only at the mean-field level. The double-counting correction \(\hat{v}_{\mathrm{DC}}\), which is static, local and orbital-independent, aims exactly at removing the latter. Discussing the different expressions for \(\hat{v}_{\mathrm{DC}}\) proposed in the literature [85; 86; 31; 87; 88] is beyond the scope of this paper; we mention here that for metals the around mean-field (AMF) form \(v_{\mathrm{DC}}^{\mathrm{AMF}}=U(n-\bar{n})\)[31], with \(\bar{n}=n/N\), should be preferred [88] over the most popular fully localized limit (FLL) \(v_{\mathrm{DC}}^{\mathrm{FLL}}=U(n-\frac{1}{2})\)[89]. In a dynamical approach like the present one, one can naturally identify at least two static values for \(U\): the fully-screened \(U(0)\) and the bare \(U_{\infty}\). The latter is the interaction strength between non-interacting electrons, and it is the one to be chosen when we remove mean-field terms. An intriguing way to define it is to take it as the orbital average of the purely static (_i.e._, what survives in the limit \(\omega\to\infty\)), purely local (_i.e._, on-site) part of \(\hat{\Sigma}^{I}+\hat{v}_{V}\), namely \(\big{\langle}(n-n_{m})U_{\infty}+\sum_{K\neq I}V_{\infty}^{IK}n^{K}\big{\rangle}_{m}\). Therefore:
\[v_{\mathrm{DC}}\equiv v_{0}^{\mathrm{Hxc}}=U_{\infty}(n-\bar{n})+\sum_{K\neq I }V_{\infty}^{IK}n^{K}\]
The first term is nothing but the around mean-field (AMF) form of the double-counting term [31], best suited for metals, but with \(U_{\infty}\), the bare interaction, in place of \(U\), at variance with methods that employ a single value for \(U\). The \(V\) contribution is the same as the one that stems from the double-counting energy \(E_{\mathrm{DC}}^{V}=\frac{1}{2}\sum_{I\neq J}V^{IJ}n^{I}n^{J}\) proposed in [34], but with \(V\to V_{\infty}\). The \(U\) term is very similar to the one in Eq. (16). Their difference \(U_{\infty}(\bar{n}-n_{m})\) can be viewed as the purely correlation contribution \(v_{m}^{c}\) in the mean-field Hamiltonian, whose orbital-dependency exactly balances the one in \(v_{m}^{\mathrm{Hx}}\) to yield an orbital-independent \(v_{0}^{\mathrm{Hxc}}\). In other words, the previous formulation can be viewed as an additional source of dynamical correlations - namely the second line of Eq. (15) - on top of the static \(v_{m}^{c}=U_{\infty}(\bar{n}-n_{m})\), already accounted for in \(\hat{h}_{0}\). Finally, the on-site self-energy correction \(\Delta\Sigma_{m}\left(\omega\right):=\Sigma_{m}\left(\omega\right)-v_{\mathrm{ DC}}\) can be written as:
\[\Delta\Sigma_{m}\left(\omega\right)=(\bar{n}-n_{m})U_{\infty}+\\ +\bigg{[}\frac{1}{2}\Big{(}1+\frac{\omega-\varepsilon_{m}}{ \omega_{0}}\Big{)}-n_{m}\bigg{]}\left(U(\omega-\varepsilon_{m})-U_{\infty} \right)\!, \tag{17}\]
and it contains purely exchange and correlation contributions. This is the main result of this work.
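To make the structure of Eq. (17) explicit, the following minimal Python sketch evaluates \(\Delta\Sigma_{m}(\omega)\) for a toy single-pole model of \(U(\omega)\); the model and all numerical values are placeholders, not the ab initio interaction used in this work.

```python
import numpy as np

def delta_sigma(omega, U_of_omega, U_inf, omega0, eps_m, n_m, n_bar):
    """On-site self-energy correction of Eq. (17):
    DeltaSigma_m(w) = (n_bar - n_m) U_inf
        + [ 1/2 (1 + (w - eps_m)/omega0) - n_m ] * ( U(w - eps_m) - U_inf ).
    `U_of_omega` is a callable returning the (complex) screened interaction in eV."""
    static_part = (n_bar - n_m) * U_inf
    prefactor = 0.5 * (1.0 + (omega - eps_m) / omega0) - n_m
    return static_part + prefactor * (U_of_omega(omega - eps_m) - U_inf)

def U_toy(w, U0=1.0, U_inf=19.0, w_pl=14.0, eta=0.5):
    """Toy single-pole model of U(omega) (NOT the ab initio RPA result):
    interpolates between a screened U(0) ~ U0 and the bare U_inf."""
    return U_inf + (U0 - U_inf) * w_pl**2 / (w_pl**2 - (w + 1j * eta) ** 2)

w_grid = np.linspace(-3.0, 3.0, 7)
vals = delta_sigma(w_grid, U_toy, U_inf=19.0, omega0=5.0, eps_m=0.0, n_m=1/6, n_bar=1/6)
print(np.round(vals, 3))
```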
Figure 2: (Color online) \(U(\omega)\) in SrVO\({}_{3}\) (blue), and in LaNiO\({}_{3}\) (red); the real part is continuous and the imaginary part is dashed. The Wannier functions have been built from the \(d\) and \(p\) states. The two horizontal lines identify the static value of the bare Coulomb interaction \(U_{\infty}\), 19.16 and 25.51 eV respectively.
_Physical content:_ The advantage of having explicit and compact expressions is, beyond simplicity, understanding. In the static limit of \(\omega=\varepsilon_{s}\) GW reduces to COHSEX, which leads to DFT+\(U\) within the above procedure [65]. In the same limit \(\Delta\Sigma_{m}\left(\omega\right)\), Eq. (17), goes to \(\left(\frac{1}{2}-n_{m}\right)U(0)-\left(\frac{1}{2}-\bar{n}\right)U_{\infty}\), which has the same orbital dependence as DFT+\(U\) in the AMF formulation, \(v_{m}^{\rm AMF}=\left(\frac{1}{2}-n_{m}\right)U-\left(\frac{1}{2}-\bar{n}\right)U\)[26]; there, a single value of \(U\) averages the two \(U(0)\) and \(U_{\infty}\). In its simplest formulation, the AMF DFT+\(U\) functional can also be written as \(v_{m}^{\rm AMF}=\left(\bar{n}-n_{m}\right)U\), namely the first term in Eq. (17), with \(U\to U_{\infty}\). Eq. (17) can thus be interpreted as the unscreened DFT+\(U\) result, which removes the self-interaction error from \(\hat{h}_{0}\), modified by correlation for finite values of \(\omega\). In the limit of frequencies close to \(\varepsilon_{m}\) a perturbative expansion of the QP equation \(E_{m}^{\rm QP}=\varepsilon_{m}+\Delta\Sigma_{m}(E_{m}^{\rm QP})\) can be performed, resulting in \(E_{m}^{\rm QP}=\varepsilon_{m}+Z\left[\left(\frac{1}{2}-n_{m}\right)U(0)-\left(\frac{1}{2}-\bar{n}\right)U_{\infty}\right]\), with the renormalization factor \(Z=(1-\partial\Delta\Sigma/\partial\omega|_{\varepsilon_{m}})^{-1}\approx(1+(U_{\infty}-U(0))/2\omega_{0})^{-1}\). As expected, frequency dependence counteracts the non-locality of DFT+\(U\), by reducing the value of \(U(0)\) and \(U_{\infty}\): the more efficient the screening, _i.e._, the larger the difference between \(U_{\infty}\) and \(U(0)\), the larger the reduction of \(U\), with \(\omega_{0}\) the typical energy over which screening takes place. In addition, the strong renormalization of the correlated low-energy bands is due to the linear dependence of the real part of the self energy in the vicinity of the Fermi level. Such a structure stems from the pre-factor \(-\frac{\omega-\varepsilon_{m}}{2\omega_{0}}\), always negative as \(\Re U(\omega)<U_{\infty}\) in the low-energy regime. This allows us to identify \(\omega_{0}\) - which sets the bandwidth reduction in a \(U(\omega)\) approach - with the subplasmon energy needed to reproduce the outcome of a GW result when using a plasmon-pole model in GW itself [51]. With such an identification, we recover a parameter-free theory again.
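The quasiparticle equation quoted above can also be solved beyond the linearized \(Z\)-factor estimate; the short sketch below does so by damped fixed-point iteration for a toy self-energy correction with the linear-in-\(\omega\) structure of Eq. (17). The coefficients are invented for illustration and carry no physical meaning.

```python
def solve_qp(eps_m, dsigma, mix=0.5, tol=1e-8, max_iter=500):
    """Solve the quasiparticle equation E = eps_m + Re[DeltaSigma(E)] by damped
    fixed-point iteration, and compare with the linearized estimate
    E ~ eps_m + Z * DeltaSigma(eps_m), with Z = (1 - dDeltaSigma/domega)^(-1)."""
    E = eps_m
    for _ in range(max_iter):
        E_new = eps_m + dsigma(E).real
        if abs(E_new - E) < tol:
            break
        E = (1.0 - mix) * E + mix * E_new
    h = 1e-4  # finite-difference step for the derivative at eps_m
    dS = (dsigma(eps_m + h).real - dsigma(eps_m - h).real) / (2.0 * h)
    Z = 1.0 / (1.0 - dS)
    E_linear = eps_m + Z * dsigma(eps_m).real
    return E, E_linear, Z

# Toy correction: constant shift plus a nearly linear omega dependence (made-up numbers)
dsig = lambda w: complex(-1.2 - 0.9 * w + 0.05 * w**2, 0.0)
E_qp, E_lin, Z = solve_qp(0.0, dsig)
print(f"E_QP = {E_qp:.3f} eV, linearized = {E_lin:.3f} eV, Z = {Z:.2f}")
```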
The simplicity of Eq. (17) and the resulting clarity in the physics should not hide the simplifications needed to obtain it, namely the introduction of \(\omega_{0}\) in passing from Eq. (5) to Eq. (6) and the drop of any coupling between \(\mathcal{C}\) and all other states in the self energy. The most visible practical consequence is the lack of a strict time-ordered structure in the resulting self energy. In fact, for the exact \(\mathrm{G}_{0}\mathrm{W}_{0}\) self energy, \(\mathrm{Im}\left\langle r\right|\hat{\Sigma}_{G_{0}\mathrm{W}_{0}}(\omega)\left|r^{\prime}\right\rangle=\sum_{s}\left[\theta(\omega-\varepsilon_{s})-n_{s}\right]\mathrm{Im}W_{rr^{\prime}}^{ss}(\omega-\varepsilon_{s})\). Summing over all states \(s\) results in the expected Fermi liquid behaviour \(\mathrm{Im}\Sigma(\omega)\sim-\mathrm{sign}(\omega-\mu)(\omega-\mu)^{2}\) in the vicinity of \(\mu\)[90]. With Eq. (17), however, \(\mathrm{Im}\Delta\Sigma_{m}\left(\omega\right)=\left[\frac{1}{2}\big{(}1+\frac{\omega-\varepsilon_{m}}{\omega_{0}}\big{)}-n_{m}\right]\mathrm{Im}U(\omega-\varepsilon_{m})\), a compact function of \(\omega\). In a way, the sum of \(\theta\) functions centered on \(\varepsilon_{s}\) has reduced to the single linear function \(\frac{1}{2}\big{(}1+\frac{\omega-\varepsilon_{m}}{\omega_{0}}\big{)}\), which still returns \(1/2\) for \(\omega=\varepsilon_{m}\) and, together with \(\mathrm{Im}U(\omega)\sim-|\omega|\), yields a quadratic behaviour of the self energy, positive for \(\omega<\varepsilon_{m}-2\omega_{0}(\frac{1}{2}-n_{m})\) and negative for \(\omega>\varepsilon_{m}\). This is a proper Fermi liquid time-ordered self energy (with a renormalized \(\mu\)), in a perfectly half-filled metal with \(n_{m}=1/2\). However, such a solution, besides being artificial, would not reproduce the asymmetrical treatment of occupied and empty states that we see, _e.g._, in the position of the two satellites of SrVO\({}_{3}\). It is the scissor-like action that Eq. (17) inherits from DFT+\(U\) that restores this asymmetry [53], at the price of a non-time-ordered self-energy in the interval \([\varepsilon_{m}-2\omega_{0}(\frac{1}{2}-n_{m}),\varepsilon_{m}]\). This issue, practically overcome by a large enough value of the regularizing parameter \(\eta\) in Eqs. (1) and (2), reflects the contrasting effects of metallicity (the frequency-dependence) and localization (the scissor action of DFT+\(U\)) already present in GW, but enhanced and clarified by the simple self energy of Eq. (17).
Figure 3: (Color online) Real (continuous) and imaginary (dashed) parts of \(U(\omega)\) for SrVO\({}_{3}\), built from different sets of Wannier functions: the \(d\) states only (blue) and the \(d\) and \(p\) states (red). The two horizontal lines identify the static value of the bare Coulomb interaction \(U_{\infty}\) in the \(d\) and \(dp\) model, \(15.10\) eV and \(19.16\) eV, respectively. For comparison, we also show the cRPA result for Wannier functions in the \(dp\) model with the \(t_{2g}-t_{2g}\) transitions removed (green lines).
Figure 4: (Color online) Density of screening modes \(\rho(\omega)=\mathrm{Im}U(\omega)/\omega^{2}\) in SrVO\({}_{3}\) (blue), and in LaNiO\({}_{3}\) (red). The vertical lines highlight the relevant screening mode \(\omega_{0}\) we use in this work for each material, \(\omega_{0}\approx 5\) eV and \(\omega_{0}\approx 3\) eV respectively.
## Application to metallic perovskites
To test the approach presented here, we study the correlated bands of two paradigmatic perovskites, SrVO\({}_{3}\) and LaNiO\({}_{3}\), that have already been extensively studied both experimentally and theoretically.
### Experimental findings
SrVO\({}_{3}\) crystallizes in a cubic structure, with a paramagnetic metallic ground state. ARPES results from SrVO\({}_{3}\) show two main features around the Fermi energy: a dispersing band, whose width is \(\sim\) 0.45 \(\div\) 0.7 eV [91; 92], and an incoherent feature at \(-\)2 \(\div\) \(-\)1.5 eV, with a weak dispersion of 0.1 eV but a dispersing intensity that finds its maximum at the \(\Gamma\) point [91; 52]. The importance of this satellite - interpreted as a lower Hubbard band - over the quasiparticle has been questioned in [93], where it has been shown that its intrinsic weight [94; 14] could be much less strong than previously thought, because of possible oxygen vacancies in the samples [60] and surface contributions that add up in the incoherent part of the spectrum [92; 95; 96]. Also what was interpreted as an upper Hubbard band (see below), between 2.7 and 3.5 eV [57], lies in the region of the \(e_{g}\) states, and would not be thus visible in IPES experiments [52; 53].
LaNiO\({}_{3}\), at variance with other rare-earth nickelates that exhibit metal-insulator transitions when cooled down to low temperatures, always displays a paramagnetic metallic ground state [97]. Its crystal structure is rhombohedral R\(\bar{3}\)c, but as a first approximation (\(t=0.97\)[98], \(\beta=90.41^{\circ}\)[99]) we will consider the high-temperature undistorted cubic structure Pm\(\bar{3}\)m of lattice parameter \(a=7.2887a_{0}\), for simplicity but also to be as close as possible to the perfectly cubic compound SrVO\({}_{3}\). Bulk LaNiO\({}_{3}\) was studied by PES in [100] and ARPES in [99], showing a momentum dependent mass renormalization at the Fermi energy. From thermal properties,
Figure 5: (Color online) PBE density of states of SrVO\({}_{3}\), in gray. The \(m\)-resolved projected DOS are shown for V, \(t_{2g}\) and \(e_{g}\) states, and O, orthogonal and longitudinal (with respect to the V \(e_{g}\) states) orbitals, see text. The empty states at high energy are mainly Sr \(4d\) states.
Figure 8: (Color online) PBE band structure of cubic LaNiO\({}_{3}\), highlighting the percentage of \(e_{g}\) character.
Figure 6: (Color online) PBE band structure of SrVO\({}_{3}\), highlighting the percentage of \(t_{2g}\) character.
Figure 7: (Color online) PBE density of states of cubic LaNiO\({}_{3}\), in gray. The \(m\)-resolved projected DOS are shown for Ni and O. The empty states at high energy are mainly La \(4f\) (the peak at \(\sim\) 3.5 eV) and \(5d\) states.
it is expected \(m^{*}/m\sim 10\), while from PES as well as from mean-field calculations in the cubic structure, it is around 3 [99]. This value is very anisotropic and strongly depends on the path in the Brillouin zone. In fact, for the path shown in [94] and using the cubic structure as a starting point, \(m^{*}/m\sim 7\)[101] (\(m^{*}/m=3.1\pm 0.5\) if the rhombohedral structure is employed).
### Theoretical state-of-the-art approaches
For both systems, a non-magnetic Kohn-Sham solution within the PBE functional [102] yields a metallic ground state [103; 104; 105], in agreement with experiments. In both perovskites the low-energy valence band is mainly constituted by oxygen \(2p\) states hybridized with the \(3d\) states of vanadium and nickel respectively, see Fig. 5 and Fig. 7. The crystal field splits both \(p\) and \(d\) states, gathering them into different groups depending on the relative orientation of the orbitals: two \(d\) orbitals (\(e_{g}\)) have lobes pointing towards the oxygens (one \(p\) orbital per atom, which we call _longitudinal_), and are thus more hybridized and dispersing. The other three \(d\) (\(t_{2g}\)) and six \(p\) (_orthogonal_) orbitals are less overlapping, more localized and at lower energy. Hybridization between the transition metal \(3d\) states and oxygen ligands plays a fundamental role at the Fermi level. While the \(p\) bands get completely filled, the nominal occupancy of the \(d\) states is different for the two systems: \(t_{2g}^{1}e_{g}^{0}\) in SrVO\({}_{3}\) and \(t_{2g}^{6}e_{g}^{1}\) in LaNiO\({}_{3}\). As correlation mainly affects partially filled orbitals, it is larger for the \(t_{2g}\) bands in SrVO\({}_{3}\)[106; 107] and for the \(e_{g}\) bands in LaNiO\({}_{3}\)[100]. In fact, at the DFT level, these manifolds are not quite in agreement with the experimental results: the \(t_{2g}\) bandwidth for SrVO\({}_{3}\) is 2.5 \(\div\) 2.6 eV at the DFT level, twice as large as the measured one; analogously for the \(e_{g}\) band of LaNiO\({}_{3}\), which is too dispersing.
For both systems, the low-energy physics requires a dynamical treatment of correlation beyond and alternative to hybrids or DFT+\(U\)[98; 108; 109]. SrVO\({}_{3}\), being the prototype of correlated metals, has been extensively studied by DFT+DMFT [110; 49; 75; 93; 96; 111] and GW+DMFT [50; 52; 53; 107]. While a GW calculation doesn't reduce the band width enough (by a factor of 0.7-0.8 instead of 0.5, \(m^{*}/m\sim 1.3\), bandwidth of around 2 eV) [50; 51; 52; 53], LDA+DMFT does the opposite, to 0.9 eV (\(m^{*}/m\sim 2.2\)[49]). The joint frequency-dependence and non-locality of GW+DMFT yields an acceptable value of 1.2 eV (0.5 eV in the occupied part) [50], as does a localized version of GW, 1.3 eV [54]. Both GW and GW+DMFT place a high-energy electron-gas plasmon peak at \(\sim 16\div 17\) eV [52], seen in the experimental EELS spectra of the related compound SrTiO\({}_{3}\)[112]. Around the main \(t_{2g}\) bands, GW shows a \(t_{2g}\) excitation between 2 and 4 eV [51; 52; 54], though hidden by \(e_{g}\) bands, and a lower satellite between \(\sim-2\) and \(-3\) eV [51; 53; 54]. GW+DMFT displays a lower Hubbard band (LHB) at \(-1.6\) eV [52], at higher energy than LDA+DMFT with a static \(U\), that places the Hubbard bands at \(-1.8\) eV and \(+3\) eV [49]. Using the value of \(U=3.5\) eV from cRPA, the LHB is barely a shoulder within LDA+DMFT. Finally, GW+EDMFT is shown to be not that different from \(G_{0}W_{0}\)[60], pushing towards the interpretation of the upper and lower Hubbard bands as plasmonic features. The intensity modulation of the LHB shown in experiments is confirmed.
In LaNiO\({}_{3}\), self-consistent GW yields \(m^{*}/m\sim 1.3\)[113], in a good direction but definitely not large enough. On the contrary, DMFT [94; 88] reproduces well the mass enhancement, \(m^{*}/m\sim 3\), and displays kinks in the \(e_{g}\) bands at \(\sim-0.2\) eV. These kinks stem from the onset of non-Fermi liquid behaviour in the self energy, which is
Figure 10: (Color online) The \(t_{2g}\) manifold of SrVO\({}_{3}\) in PBE (white dotted), PBE+\(V\) (gray dashed-dotted) and PBE+\(V\) + \(U(\omega)\) (color map).
Figure 9: (Color online) The \(t_{2g}\) manifold of SrVO\({}_{3}\) in DFT-PBE (white dotted) and PBE+\(U(\omega)\) (color map). For comparison, we have also included results from [50], namely GW, DFT+DMFT and GW+DMFT.
no longer linear (parabolic) in its real (imaginary) part.
### DFT+\(U(\omega)\) workflow
In both perovskites we can obtain both a set of Wannier functions and the screened interaction from the PBE mean field solution \(\{\varepsilon_{n\mathbf{k}},\psi_{n\mathbf{k}}(\mathbf{x})\}\), as explained above. We choose to build a set of maximally localized Wannier functions \(\{\left|I,m\right>\}\)[74; 114] from the full set of \(p\) and \(d\) states, due to their large hybridization, even if the self-energy correction is applied only to the partially filled submanifold of the \(d\) bands, the \(t_{2g}\) states in SrVO\({}_{3}\) and the \(e_{g}\) in LaNiO\({}_{3}\).
The occupation and Hamiltonian matrices are built following Eq. (9), and the RPA \(U(\omega)\) from Eq. (8), centered on V and Ni respectively. The very localized \(4f\) states of La do not contribute much to screening, thus \(U(\omega)\) is similar for the two systems, see Fig. 2. As explained above, the parameter \(\omega_{0}\) is a neutral excitation energy that sets the characteristic energy of screening. In SrVO\({}_{3}\), we consider the value \(\omega_{0}=5\) eV, as this is the plasmon-pole energy [17] able to reproduce the full GW result [51]. It is also the most relevant excitation at low energies [53] that can be inferred from the density of screening modes, \(\rho(\omega)=\text{Im}U(\omega)/\omega^{2}\)[68] (the other one, at 2 eV, is responsible for the subplasmon satellites rather than band renormalization, see below); see also Fig. 4. Analogously, we have performed plasmon-pole GW calculations for LaNiO\({}_{3}\) and compared with a full frequency calculation [113], obtaining \(\omega_{0}=3\) eV. Again, this is the relevant screening mode at low energy, see Fig. 4. It is interesting that the two \(\omega_{0}\) values correspond to each other, once we stretch the LaNiO\({}_{3}\) frequency axis in such a way that the two main plasmon peaks superimpose (dashed red line in Fig. 4). We stress that taking for \(\omega_{0}\) the plasmon-pole energy of a GW calculation is a well-defined procedure that, although possibly time-consuming, keeps this approach parameter-free and fully ab-initio.
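The selection of \(\omega_{0}\) from the density of screening modes can be illustrated with a few lines of Python; the two-mode form of \(\mathrm{Im}\,U(\omega)\) below is a toy stand-in for the computed RPA interaction, and the 8 eV window separating the low-energy mode from the main plasmon is an arbitrary choice made for this sketch.

```python
import numpy as np

def screening_mode_density(w, ImU):
    """Density of screening modes rho(w) = |Im U(w)| / w^2 (cf. Fig. 4); its
    low-energy maximum identifies the mode omega_0 driving the band renormalization."""
    w = np.asarray(w, dtype=float)
    return np.abs(ImU) / w**2

# Toy Im U(omega): one low-energy mode plus the main plasmon (illustrative only)
w = np.linspace(0.5, 30.0, 600)
ImU = -(2.0 * np.exp(-((w - 5.0) / 1.0) ** 2) + 40.0 * np.exp(-((w - 14.0) / 2.0) ** 2))
rho = screening_mode_density(w, ImU)
low = w < 8.0                      # arbitrary low-energy window for this toy
omega0 = w[low][np.argmax(rho[low])]
print(f"low-energy screening mode omega_0 ~ {omega0:.2f} eV")
```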
With these ingredients at hand, we can build the self-energy correction from Eq. (17), use it perturbatively as a one-shot correction [60] to the Kohn-Sham states:
\[E_{n\mathbf{k}}(\omega)=\varepsilon_{n\mathbf{k}}+\sum_{I,m}\left|\langle I,m|n\mathbf{k} \rangle\right|^{2}\Delta\Sigma_{m}(\omega),\]
and evaluate the spectral function:
\[A_{n\mathbf{k}}(\omega)=-\frac{1}{\pi}\operatorname{sign}(\omega-\mu)\operatorname {Im}\frac{1}{\omega-E_{n\mathbf{k}}(\omega)},\]
with the new chemical potential \(\mu\) set by conserving the Fermi wavevector, as suggested in [53]. In fact, as in G\({}_{0}\)W\({}_{0}\), the chemical potential is not conserved by the renormalization brought by the self energy. Another method to set the Fermi energy is by counting the electrons from the integrated DOS; for these systems, the two methods differ by \(\sim 0.1\) eV (see also [50]).
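A minimal sketch of the two equations above is given below; the small broadening \(\eta\) added to the denominator is our own regularization choice for this sketch, and the single-orbital toy input is purely illustrative.

```python
import numpy as np

def corrected_energy(omega, eps_nk, proj_nk_m, dsigma_m):
    """E_nk(omega) = eps_nk + sum_m |<I,m|nk>|^2 DeltaSigma_m(omega)."""
    omega = np.asarray(omega, dtype=float)
    return eps_nk + sum(p * ds(omega) for p, ds in zip(proj_nk_m, dsigma_m))

def spectral_function(omega, eps_nk, proj_nk_m, dsigma_m, mu, eta=0.1):
    """A_nk(omega) = -(1/pi) sign(omega - mu) Im[1 / (omega - E_nk(omega))];
    the broadening eta is an assumption made here for numerical stability."""
    omega = np.asarray(omega, dtype=float)
    E = corrected_energy(omega, eps_nk, proj_nk_m, dsigma_m)
    sgn = np.sign(omega - mu)
    denom = omega - E + 1j * eta * sgn
    return -sgn / np.pi * np.imag(1.0 / denom)

# Toy usage: one correlated orbital with a crude time-ordered-like complex shift
w = np.linspace(-4.0, 4.0, 801)
A = spectral_function(w, eps_nk=-0.9, proj_nk_m=[0.8],
                      dsigma_m=[lambda x: 0.45 - 0.15j * np.sign(x)], mu=0.0)
print(f"quasiparticle peak near {w[np.argmax(A)]:.2f} eV")
```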
### SrVO\({}_{3}\)
We have applied the protocol described above to the \(t_{2g}\) manifold of SrVO\({}_{3}\). The main action of the \(U(\omega)\) self energy on the PBE solution is to considerably shrink the coherent part of the manifold, from a DFT value of \(2.49\) to \(1.11\) eV in the overall \(t_{2g}\) bandwidth. In particular, the bottom of the \(t_{2g}\) manifold now lies at \(-0.48\) eV (from a DFT value of \(-0.96\) eV), in very good agreement with both experiments and GW+DMFT calculations, see Table 1 and Fig. 9. This result clearly improves what can be obtained at the GW level, where the renormalization is only \(0.5\) eV in the whole \(t_{2g}\) manifold [51]. However, this self energy should not be considered (only) as an approximated GW, in the same way in which LDA+U is not (just) an approximated COHSEX. The additional ingredient is the localization procedure, which _adds_, rather than removes, physics. In fact, we have added this piece of physics directly in the expression of
\begin{table}
\begin{tabular}{c c c c c} & bw [eV] & \(Z\) & LS [eV] & US [eV] \\ \hline exp. [96] & & & \(-1.6\) & \\ exp. [91] & \(0.44\) & \(\sim 0.5\) & \(-1.5\) & \\ exp. [92] & \(\sim 0.7\) & \(0.55\) & \(-1.5\) & \\ \hline DFT-PBE & \(0.958\) & \(1\) & \(\varnothing\) & \(\varnothing\) \\ DFT+GW [51; 53] & \(0.8\) & \(0.77\) & \(-2\) & \(2.2\) \\ DFT+DMFT [96] & \(0.47\) & & \(-2\) & \(2.5\) \\ DFT+GW+DMFT [53] & \(0.5\) & \(0.5\) & \(-1.6\) & \(2\) \\ DFT+GW+DMFT [50] & \(0.6\) & \(0.6\) & \(-1.5\) & \(2.5\) \\ DFT+GW+EDMFT [60] & & & \(-1.7\) & \(2.8\) \\ \hline DFT+\(U(\omega)\) & \(0.48\) & \(0.50\) & \(-1.10\) & \(3.42\) \\ DFT+\(V_{\text{LR}}+U(\omega)\) & \(0.60\) & \(0.62\) & \(-1.34\) & \(3.18\) \\ DFT+\(V+U(\omega)\) & \(0.49\) & \(0.51\) & \(-1.15\) & \(3.37\) \\ \end{tabular}
\end{table}
Table 1: Bandwidth of the occupied \(t_{2g}\) manifold (bw), renormalization factor \(Z\), lower (LS) and upper (US) satellites in SrVO\({}_{3}\). Experimental results, state-of-the-art theories and three different flavours of this work are compared.
Figure 11: (Color online) The \(t_{2g}\) manifold of SrVO\({}_{3}\) in PBE+\(V+U(\omega)\) (color map), on a wider energy range.
GW, yielding a self energy that contains both effects. Along the same lines, Ref. [54] has artificially selected only the local part of the GW self-energy, which indeed yields the experimental renormalization. Finally, from this point of view one would also interpret the DMFT correction to the GW solution as the addition of localized vertex diagrams that restore a greater locality in the physics of GW.
The effective mass resulting from DFT+\(U(\omega)\) is \(m^{*}/m=2.00\) in the occupied part of the spectrum, corresponding to a renormalization factor \(Z_{U(\omega)}=0.50\), in agreement with the value obtained from the derivative of the self energy and DFT+DMFT results [50]. A renormalization factor smaller than one reflects the loss of electronic charge, typical of non-conservative self energies such as \(\mathrm{G_{0}W_{0}}\), and the emergence of satellites. In fact, from Fig. 11 we can observe that part of the spectral weight is transferred to high energy structures. Among others, two, at \(-14.6\) and \(+16.7\) eV, slightly dispersing, stem from the large plasmon in \(U(\omega)\) at \(14\) eV, in agreement with usual GW calculations [52; 53] and EELS experiments on the isostructural material SrTiO\({}_{3}\)[112]. Closer to the Fermi energy, a non-dispersing lower satellite appears at \(-1.09\) eV, at slightly too high energy with respect to experiments and DMFT results. The misplacement of this feature is most likely inherited from GW which, although usually very good in describing QPs [18], has a tendency to miss the exact positions of satellites [26]. However, the dispersing intensity of the satellite, which is largest at \(\Gamma\), is in agreement with experimental results and previous findings [52; 91; 115]. The intensity of this peak is extremely small with respect to DMFT calculations. However, refined photoemission experiments have revealed that this feature may have been previously overestimated [93; 96; 60; 91]. Finally, in the empty parts of the spectrum, there is a noticeable satellite at \(3.41\) eV, which will however be covered by \(e_{g}\) bands [53].
It is easy to show that these two satellites stem from the lowest energy peak of \(U(\omega)\), at \(\sim 2\) eV, the one removed in cRPA. Moreover, although the position of the two satellites may be overestimated in energy, their relative distance, \(4.5\) eV, is the same as the one obtained in GW+EDMFT [60] or the simpler DMFT [96], which is again roughly the same as the one obtained in GW [51]. These two observations push towards the interpretation of these satellites as sub-plasmons due to intraband excitations rather than Hubbard bands [60]. Finally, note that DFT+\(U(\omega)\) yields an asymmetric position, with respect to the quasiparticle band, for the two pairs of satellites, the low- (at \(-1.1\) and \(3.4\) eV) and the high-energy ones (at \(-14.6\) and \(+16.7\) eV). This asymmetry, confirmed by other theories, is missed by GW.
_Inclusion of Hubbard \(V\):_ An intersite Hubbard \(V\) parameter can be calculated _ab-initio_ from linear-response theory, \(V=V_{\mathrm{LR}}\)[41; 42; 34]. It results in a negligible value when considering nearest vanadium atoms. While the effects of \(V_{\mathrm{LR}}^{\mathrm{V-Sr}}=0.11\) eV and \(V_{\mathrm{LR}}^{\mathrm{O-Sr}}=0.37\) eV are tiny on the band structure, a relevant modification goes with the inclusion of \(V_{\mathrm{LR}}^{\mathrm{V-O}}=1.69\) eV [132][133]. This is the major expected inter-site effect, as vanadium is surrounded by six oxygen atoms, as pointed out in [116]. The correlated \(t_{2g}\) manifold widens from \(2.49\) to \(3.19\) eV, an effect mirrored by the non-local part of \(GW\)[53]. The overall effect is a larger dispersion of the bands due to an enhancement of the V-O bonding. When \(\hat{h}_{\mathrm{DFT+V}}\) is used as the starting mean-field Hamiltonian [80], the resulting renormalization is too weak, as the bottom of the \(t_{2g}\) bands goes to \(-0.6\) eV. We suppose that this drawback is due to using an energy-related parameter in a spectrum-related approach. More fundamentally, all the parameters introduced are effective quantities that come with a specific prescription for their use. Mixed approaches might be powerful, but not consistent. In fact, the definition of \(V\) in Eq. (13) is through the RPA \(\hat{W}(\omega=0)\) and, at the level of the more widespread \(U\), it is well known that \(U_{\mathrm{RPA}}<U_{\mathrm{cRPA}}<U_{\mathrm{LR}}\). In the case of SrVO\({}_{3}\), in fact, \(U_{\mathrm{RPA}}=U(\omega=0)=1.04\) eV and \(U_{\mathrm{LR}}=7.65\) eV (in passing, we note that all these values are neither intrinsic nor universal, but depend on the localized orbital manifolds chosen [41]; these large differences arise also from the fact that \(V_{\mathrm{LR}}\) aims to correct self-interaction in the energy functional [45], while \(U_{\mathrm{RPA}}/U_{\mathrm{cRPA}}\) address spectral properties). To evaluate the \(V\) of Eq. (13), we hence propose to use the simplified formula \(V=\frac{U}{U_{\mathrm{LR}}}V_{\mathrm{LR}}\), where the linear response quantities are evaluated through the procedure of [41; 42; 34], \(U\) is the average of the on-site matrix elements of \(\hat{W}(\omega=0)\) and \(V\) is the unknown corresponding off-site average. The underlying assumption is of course that \(U\) and \(V\) are proportionally related to each other in different theories. With the values written above, we get \(V=0.23\) eV, a much smaller value than \(V_{\mathrm{LR}}\); the change in the spectrum is minimal, as seen in Fig. 10, with a renormalization factor \(Z_{V}=1.03\) close to one: inter-site effects are important but not fundamental for the \(t_{2g}\) manifold, which explains the early successes of base DMFT in reproducing these features. The bottom of the quasiparticle band now lies at \(-0.49\) eV, and the full \(t_{2g}\) bandwidth is \(1.107\) eV. This implies an effective mass \(m^{*}/m=1.96\) corresponding to a renormalization factor \(Z=0.51\) in the occupied part of the manifold, still in perfect agreement with state-of-the-art calculations [50; 53] and experimental findings [92; 96]. Note that this is also equal to the product \(Z_{V}\times Z_{U(\omega)}=1.03\times 0.50\)[54], which again highlights the separability, to a good approximation, of dynamical and non-local effects.
On the other hand, it should be noted that the latter are already present in the \(U(\omega)\) only self-energy that, at zero frequency, reduces to DFT+U and hence already contains much of the non-locality of COHSEX [53]. Therefore, it would be more correct to talk about dynamical on-site and static off-site interactions. Finally, note that the
satellites gain some spectral weight and their positions go to \(-1.15\) and \(3.36\) eV, in a good direction towards agreement with other theories.
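As a quick arithmetic check of the two relations used above, \(V=(U/U_{\mathrm{LR}})V_{\mathrm{LR}}\) and \(Z\approx Z_{V}\times Z_{U(\omega)}\), with the SrVO\({}_{3}\) values quoted in the text (nothing more than a consistency check):

```python
# Rescaling the linear-response inter-site parameter to the RPA scale,
# V = (U_RPA / U_LR) * V_LR, and the approximate factorization Z ~ Z_V * Z_U(omega).
U_RPA, U_LR, V_LR = 1.04, 7.65, 1.69   # eV, SrVO3 values quoted above
V = U_RPA / U_LR * V_LR
Z_V, Z_Uw = 1.03, 0.50
print(f"V = {V:.2f} eV              (text: 0.23 eV)")
print(f"Z_V * Z_U(omega) = {Z_V * Z_Uw:.3f} (text: Z = 0.51)")
```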
### LaNiO\({}_{3}\)
In contrast to SrVO\({}_{3}\), the \(e_{g}\) bands, usually considered as the correlated manifold in LaNiO\({}_{3}\), are not well separated from the rest of the valence bands (see Fig. 8). The effects of the self energy, Eq. (17), will therefore spread over the whole valence manifold, according to the weight of the \(e_{g}\) character of the different Kohn-Sham states, so that high-energy QPs are also affected (see Fig. 12).
We will focus on the low-energy region, where a precise knowledge of the effective mass is most needed and detailed ARPES measurements are available. As in the case of SrVO\({}_{3}\) and as discussed in the theory section, the main effect of the self energy of Eq. (17) is to increase the effective mass by weakening the dispersing character of the correlated bands or, equivalently, enhancing their localization. This is shown in particular for the two experimental paths we consider: the first [94] is in the \(k_{y}\) direction, with \(k_{x}=\pi/2a\) and \(k_{z}=0.7\pi/a\), with \(a\) the pseudo-cubic lattice vectors obtained when considering a rhombohedral structure; the second path [99] is in the \(\Gamma\)X direction.
For the latter case, the value of the renormalization can be derived from the different slopes of the DFT and DFT+\(U(\omega)\) bands that cross the Fermi level along \(\Gamma\)X. We obtain an effective mass \(m^{*}/m=3.5\), corresponding to a renormalization \(Z=0.3\). For the other path, we take the renormalization as the ratio between the distances of the parabola extrema from the Fermi level, and we get \(Z=0.134\) and \(m^{*}/m=7.5\). This \(k\)-dependent renormalization is confirmed by other approaches and, more importantly, by ARPES experiments [99], as can be seen from the inset of Fig. 13 and the circles in Fig. 14. In particular, the bottom of the parabolic band has been measured to be \(50\) meV away from the Fermi level, and we get \(54\) meV with the present approach. The kink behavior of the experimental band in Fig. 14 at \(-2\) eV is also captured, to some extent, by this approach. The \(Z=0.3\) renormalization around the \(\Gamma\) point is confirmed by ARPES [99] and DMFT results (on the same cubic structure we use [88]), as well as the kink feature. Analogously to GW results [113], the two upper bands of mostly \(e_{g}\) character are now decoupled from the lower bands. However, due to the localization procedure, the renormalization is much stronger, and goes from an overall GW reduction of \(1.2\) eV [113] to a DFT+\(U(\omega)\) reduction of \(2.7\) eV. The downshift of the rest of the valence manifold is also reproduced, from about half an eV in GW to \(0.75\) eV here.
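The two renormalization estimates used above (slope ratio at the Fermi crossing, and ratio of parabola depths) can be written down explicitly; in the usage example the DFT depth of the parabolic band (\(\sim 0.40\) eV) is inferred from the quoted \(Z=0.134\) and is not stated in the text.

```python
import numpy as np

def z_from_slopes(k, e_dft, e_corr, npts=3):
    """Renormalization from the ratio of Fermi velocities along a k-path:
    Z ~ v_corr / v_DFT (and m*/m ~ 1/Z), with each slope taken from a linear
    fit around the Fermi-level crossing (E_F = 0)."""
    def slope(e):
        i = int(np.argmin(np.abs(e)))
        sel = slice(max(i - npts, 0), min(i + npts + 1, len(e)))
        return np.polyfit(k[sel], e[sel], 1)[0]
    return slope(e_corr) / slope(e_dft)

def z_from_parabola_depth(depth_dft, depth_corr):
    """Renormalization from the ratio of the parabola-bottom distances to E_F."""
    return depth_corr / depth_dft

# The 0.40 eV DFT depth below is an inferred, illustrative value (see lead-in).
Z = z_from_parabola_depth(depth_dft=0.40, depth_corr=0.054)
print(f"Z = {Z:.3f}, m*/m = {1.0 / Z:.1f}")
```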
In Fig. 12 we can note the lower intensity of the \(e_{g}\) bands with respect to the others. In fact, a dynamical renormalization of these bands goes together with the transfer of electronic charge to incoherent features. However, in contrast to the previous perovskite, here there are no important satellites at low energy. That can be understood from an analysis of the self energy and, in particular, the different behaviors of \(U(\omega)\) for the two systems (see Fig. 2, and also the dielectric functions of Fig. 1). The high-energy features are instead similar for the two systems, which share the same average electronic density: for SrVO\({}_{3}\) the Wigner-Seitz radius is \(r_{s}=1.31\) and the plasma frequency is \(\omega_{\rm P}=31.56\) eV, while for LaNiO\({}_{3}\) \(r_{s}=1.25\) and \(\omega_{\rm P}=33.61\) eV; these translate into the same main loss peak at around \(30\) eV (Fig. 1). In the low energy regime, instead, the two systems do differ: in fact, in \(\mathrm{Im}\,U(\omega)\), the distinct peak at \(\sim 2\) eV of SrVO\({}_{3}\) does not have a clear counterpart in LaNiO\({}_{3}\). One could surmise that its true counterpart could be the shoulder at \(3\) eV; however, the latter should rather be matched with the shoulder at \(5\) eV for SrVO\({}_{3}\). This can be seen in different ways: first, these shoulders are responsible for the renormalization of the correlated bands; they are the plasmon-pole-model energies to be used in a GW calculation; they correspond to each other in the density of the screening modes (Fig. 4), once the energy scales are stretched in such a way as to have the two main plasmons
Figure 12: (Color online) Top panel, \(p\) and \(d\) bands of LaNiO\({}_{3}\) in DFT+\(U(\omega)\) (color map), and DFT (white dotted). Bottom panel, same, on a wider energy range.
in the same position. More deeply, they have the same physical origin as inter-band transitions, while the 2 eV peak of SrVO\({}_{3}\) can be considered as an intra-band Drude term (see, _e.g._, the divergence of the imaginary part of the macroscopic dielectric function in Fig. 1). The latter further screens electron-hole excitations, resulting in a smaller value of \(U(\omega=0)=1.04\) eV, to be compared to \(U(\omega=0)=1.59\) eV in LaNiO\({}_{3}\). Therefore, the lack of a strong intra-band excitation in LaNiO\({}_{3}\) seems to be the reason for the absence of a visible low energy plasmon.
On the high-energy side, instead, non-dispersing plasmons do show up, stemming from the main excitation in \(U(\omega)\) at \(\sim 10\) eV. As expected, due to the different structure of \(U(\omega)\) in the two perovskites, plasmons are here closer to their quasiparticles, at \(\sim-12\) and \(+10\) eV respectively (see Fig. 12, bottom panel).
#### Inclusion of Hubbard V
We can include intersite interactions via Eq. (14) also for LaNiO\({}_{3}\). As for the value of \(V\) between nickel and oxygen, a linear-response calculation yields \(V_{\rm LR}=1.19\) eV, together with \(U_{\rm LR}=10.77\) eV. Including the intersite \(V\) term results in a slightly weaker renormalization of the \(e_{g}\) bands; in particular, the bottom of the parabola would now be at 83 meV. As in the case of SrVO\({}_{3}\), \(V_{\rm LR}\) is not the one prescribed by Eq. (13); to get the latter in a simple way, we employ the proportionality relation \(V=\frac{U}{U_{\rm LR}}V_{\rm LR}\), with \(U=U(\omega=0)=1.5920\) eV, to obtain \(V=0.176\) eV. With such a value inserted in \(\hat{h}_{\rm DFT+\it V}\) as a starting point Hamiltonian, we obtain 64 meV for the vertex of the parabolic band, and no significant modification along \(\Gamma\)X.
An exact quantitative agreement with experiments for LaNiO\({}_{3}\) is beyond the scope of this paper, as it would require taking into account at least the rhombohedral structure of the crystal. The already very good reproduction of experimental features is notable, as is the momentum-dependent renormalization of the \(e_{g}\) bands, \(m^{*}/m=3.5\) around \(\Gamma\) and \(m^{*}/m=6.3\) around the parabolic band at the Fermi level. In addition, the good match with the DMFT results of Ref. [88] along \(\Gamma\)X again highlights the power of such a streamlined approach.
## VI Conclusions
Electronic correlations are known to play an important role in solids with partially filled \(d\) or \(f\) orbitals, where the atomic, localized physics competes with the dispersive nature of the solid-state bands. These effects are usually considered to be captured by strong vertex corrections to GW, leading to highly sophisticated approaches like GW+DMFT. However, for the cases studied, the same localization features can be included at the GW level
Figure 14: (Color online) The band structure of LaNiO\({}_{3}\) along the direction \(\Gamma\)X in DFT-PBE (white dotted) and DFT+\(U(\omega)\) (color map). The cyan circles are the ARPES experiments from [99] as reproduced in [88].
Figure 13: (Color online) The band structure of LaNiO\({}_{3}\) around the Fermi level in DFT-PBE (white dotted) and DFT+\(U(\omega)\) (color map). In the inset, the experimental results from Ref. [94] and, in white, the DFT result in the rhombohedral structure.
itself, using localized basis functions and suppressing cross contributions with plane-wave-like terms. Moreover, in order to get a simple and transparent framework, we have proposed a self-energy expression, Eq. (17), which contains both the plasmonic physics of GW and the scissor action of COHSEX, and can be thought of as a dynamical generalization of DFT+U. The application of this self-energy to the correlated manifold of two metallic perovskites shows the power of the present approach, yielding results in extremely good agreement with both state-of-the-art theories and experiments. In particular, it can predict the renormalization of the band at the Fermi level without adjustable parameters; perhaps equally important, the simplicity of the self-energy allows a transparent understanding of the processes involved in these systems, as well as a lightweight implementation and fairly negligible computational costs, best suited for materials discovery [1], material characterization and technological applications.
## Acknowledgements
The authors would like to thank David O'Regan, Tommaso Chiarotti and Mario Caserta for fruitful discussions.
## Appendix
### Renormalization factor
In the examples above we have shown how the main feature brought by the \(U(\omega)\) self-energy of Eq. (17) is a renormalization of the bands around the Fermi level. As stated above, this is \(Z=(1-\partial\Delta\Sigma/\partial\omega|_{\varepsilon_{m}})^{-1}\approx(1+( U_{\infty}-U(0))/2\omega_{0})^{-1}\). It is interesting to note that this same relation approximately holds also for the bandwidth renormalization introduced in Ref. [70]. There, \(\omega_{0}\) plays an analogous but different role, as it is an average excitation energy which sets the important screening to renormalize, at the one-particle level, the low-energy bands:
\[\omega_{0}=\frac{\int_{0}^{+\infty}d\omega\omega\text{Im}U_{\text{cRPA}}( \omega)/\omega^{2}}{\int_{0}^{+\infty}d\omega\text{Im}U_{\text{cRPA}}(\omega) /\omega^{2}} \tag{18}\]
As a result, the renormalization \(Z_{B}=\exp\frac{1}{\pi}\int_{0}^{+\infty}d\omega\,\text{Im}U_{\text{cRPA}}( \omega)/\omega^{2}\) is weaker than the full \(Z\), and an additional DMFT calculation is responsible for further shrinking the band. In our approach, instead, a single renormalization with the full RPA \(U(\omega)\) and a different \(\omega_{0}\) account for the whole reduction of the band. However, the physics is similar (renormalization due to coupling with bosons) and thus it is not surprising that our renormalization formula \(Z\approx(1+(U_{\infty}-U(0))/2\omega_{0})^{-1}\) holds also in that case. This is shown in the comparison of Fig. 15 for the different materials studied in Ref. [70]. In fact, the expression proposed in Ref. [70], Eq. (18), reduces to \(Z=e^{-x}\), with \(x:=(U_{\infty}-U(0))/2\omega_{0}\), while ours is \(Z=1/(1+x)\), and the two are asymptotically equal for small values of \(x\), as shown in Fig. 16.
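The comparison between the two renormalization formulas, and the moment definition of \(\omega_{0}\) in Eq. (18), can be checked numerically with the following sketch; the single-mode \(\mathrm{Im}\,U\) is a toy input, not the cRPA result of Ref. [70].

```python
import numpy as np

def omega0_moment(w, ImU):
    """Average screening energy of Eq. (18) on a uniform positive-frequency grid:
    omega_0 = int dw w |ImU|/w^2 / int dw |ImU|/w^2 (the grid spacing cancels)."""
    rho = np.abs(ImU) / w**2
    return np.sum(w * rho) / np.sum(rho)

# Compare the two formulas as functions of x = (U_inf - U(0)) / (2 omega_0)
x = np.linspace(0.0, 3.0, 301)
Z_this_work = 1.0 / (1.0 + x)
Z_ref70 = np.exp(-x)                       # bandwidth renormalization of Ref. [70]
print("max |difference| for x < 0.5:",
      round(float(np.max(np.abs(Z_this_work[x < 0.5] - Z_ref70[x < 0.5]))), 3))

# Toy omega_0 from a single screening mode centered near 5 eV (illustrative only)
w = np.linspace(0.1, 30.0, 600)
ImU_toy = -w * np.exp(-((w - 5.0) / 1.5) ** 2)
print("toy omega_0 =", round(float(omega0_moment(w, ImU_toy)), 2), "eV")
```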
### Computational details
For the calculation of the PBE ground state and the Wannier functions \(\{|m\rangle\}\) we have used the open-source code Quantum ESPRESSO, version 6.4.1 [117; 118],
Figure 16: (Color online) The two formulas for \(Z_{B}\), in dark blue the one from Ref. [70] and in light blue \((1+(U_{\infty}-U(0))/2\omega_{0})^{-1}\), as a function of \(x:=(U_{\infty}-U(0))/2\omega_{0}\), for the different materials considered in Ref. [70] (gray vertical lines).
interfaced to Wannier90, version 3.0.0 [119], with ultrasoft PBE pseudopotentials [120] from the Materials Cloud SSSP library [134]. For \(U(\omega)\), we have employed the open-source code abinit, version 8.10.1 [121], where cRPA calculations are already implemented [77, 122] using projected local orbitals Wannier functions [75]. We have used PAW atomic data [123] with PBE potentials from [135].
SrVO\({}_{3}\):Strontium vanadate crystallizes in an undistorted perovskite structure [124], with a simple cubic unit cell \(Pm3m\) of experimental lattice constant \(a=3.842\AA\)[125]. For the calculation of \(U(\omega)\), we consider a shifted Monkhorst-Pack \(6\times 6\times 6\) grid [126] with a Fermi-Dirac smearing of 0.1 eV, 75 bands, a cut-off of 12 Ha for the wavefunctions, and 200 frequencies. We use a cut-off energy of 6 Ha for the dielectric tensor \(\epsilon(\omega)\) and 20 Ha for \(U(\omega)\).
LaNiO\({}_{3}\):At low temperature, the crystal structure of the paramagnetic lanthanum nickelate is \(R\bar{3}c\), with a slight distortion with respect to a perfect cube [127, 128], with lattice constant \(a=5.433\AA\), and pseudocubic length of \(3.842\AA\). To better compare with SrVO\({}_{3}\), we consider the high-temperature undistorted cubic structure Pm\(\bar{3}\)m [129] with lattice parameter \(a=3.857\AA\). For evaluating \(U(\omega)\), we have taken a shifted Monkhorst-Pack \(8\times 8\times 8\) grid with Fermi-Dirac smearing of 0.01 eV, 70 bands, a cut-off of 29 Ha for the wavefunctions, and 200 frequencies. The cut-off energy for the dielectric tensor \(\epsilon(\omega)\) is 5 Ha and 9 Ha for \(U(\omega)\).
|
2309.05438 | Towards Content-based Pixel Retrieval in Revisited Oxford and Paris | This paper introduces the first two pixel retrieval benchmarks. Pixel
retrieval is segmented instance retrieval. Like semantic segmentation extends
classification to the pixel level, pixel retrieval is an extension of image
retrieval and offers information about which pixels are related to the query
object. In addition to retrieving images for the given query, it helps users
quickly identify the query object in true positive images and exclude false
positive images by denoting the correlated pixels. Our user study results show
pixel-level annotation can significantly improve the user experience.
Compared with semantic and instance segmentation, pixel retrieval requires a
fine-grained recognition capability for variable-granularity targets. To this
end, we propose pixel retrieval benchmarks named PROxford and PRParis, which
are based on the widely used image retrieval datasets, ROxford and RParis.
Three professional annotators label 5,942 images with two rounds of
double-checking and refinement. Furthermore, we conduct extensive experiments
and analysis on the SOTA methods in image search, image matching, detection,
segmentation, and dense matching using our pixel retrieval benchmarks. Results
show that the pixel retrieval task is challenging to these approaches and
distinctive from existing problems, suggesting that further research can
advance the content-based pixel-retrieval and thus user search experience. The
datasets can be downloaded from
\href{https://github.com/anguoyuan/Pixel_retrieval-Segmented_instance_retrieval}{this
link}. | Guoyuan An, Woo Jae Kim, Saelyne Yang, Rong Li, Yuchi Huo, Sung-Eui Yoon | 2023-09-11T13:21:26Z | http://arxiv.org/abs/2309.05438v1 | # Towards Content-based Pixel Retrieval in Revisited Oxford and Paris
###### Abstract
This paper introduces the first two pixel retrieval benchmarks. Pixel retrieval is segmented instance retrieval. Like semantic segmentation extends classification to the pixel level, pixel retrieval is an extension of image retrieval and offers information about which pixels are related to the query object. In addition to retrieving images for the given query, it helps users quickly identify the query object in true positive images and exclude false positive images by denoting the correlated pixels. Our user study results show pixel-level annotation can significantly improve the user experience. Compared with semantic and instance segmentation, pixel retrieval requires a fine-grained recognition capability for variable-granularity targets. To this end, we propose pixel retrieval benchmarks named PROxford and PRParis, which are based on the widely used image retrieval datasets, ROxford and RParis. Three professional annotators label 5,942 images with two rounds of double-checking and refinement. Furthermore, we conduct extensive experiments and analysis on the SOTA methods in image search, image matching, detection, segmentation, and dense matching using our pixel retrieval benchmarks. Results show that the pixel retrieval task is challenging for these approaches and distinctive from existing problems, suggesting that further research can advance content-based pixel retrieval and thus the user search experience. The datasets can be downloaded from this link.
## 1 Introduction
Image retrieval is a long-standing and fundamental computer vision task and has achieved remarkable advances. However, because the retrieved ranking list contains false positive images and the true positive images contain complex co-occurring backgrounds, users may find it difficult to identify the query object from the ranking list. In this paper, we conduct a user study and show that providing pixel-level annotations can help users better understand the retrieved results. Therefore, this paper introduces the pixel retrieval task and its first benchmarks. Pixel retrieval is defined as searching for the pixels that depict the query object in the database. More specifically, it requires the machine to recognize, localize, and segment the query object in database images at run time, as shown in Figure 1.
Similar to semantic segmentation, which works as an extension of classification and provides pixel-level category information to the machines, pixel retrieval is an extension of image retrieval. However, pixel retrieval differs from existing semantic segmentation [11, 62, 21] in two aspects: the fine-grained particular instance recognition and
Figure 1: Example scenarios of image retrieval and pixel retrieval for the same query image. Pixel retrieval offers pixel-level annotation (red outlines) on the target object. Our user study shows that pixel retrieval can significantly improve the user experience (Sec. 3). Yellow boxes in the searched results indicate the ground truth ones. You can check our user study from this link. To start the user study, please enter any character into the “unique Prolific ID” blank.
the variable-granularity recognition.
On the one hand, pixel retrieval asks the machine to consider fine-grained information to segment the same instance as the query, _e.g_., to segment the particular query building in street images that contain many similar buildings. This is different from existing semantic segmentation [11] and instance segmentation [21, 62]. Semantic segmentation only requires category-level information, _e.g_., to segment all the buildings in the street images. On top of semantic segmentation, instance segmentation additionally requires demarcating individual instances, _e.g_., segmenting all the buildings and giving the boundary of each building separately. However, instance segmentation does not distinguish the differences among the buildings [62, 21, 4].
On the other hand, pixel retrieval requires adjusting the recognition granularity as needed. The query image can be the whole building or only a part of the building. The search engine should understand the intention of the query and adjust the segmentation granularity on demand. This differs from existing segmentation benchmarks [62, 7, 19, 8, 10], where the recognition granularity is fixed in advance. Therefore, the pixel retrieval task is supplementary to semantic and instance segmentation, requiring recognition and segmentation with fine-grained and variable-granularity properties, which are also fundamental visual abilities of humans.
In order to promote the study of pixel retrieval, we create the pixel retrieval benchmarks Pixel-Revisited-Oxford (PROxford) and Pixel-Revisited-Paris (PRParis) on top of the famous image retrieval benchmarks Revisited-Oxford (ROxford) and Revisited-Paris (RParis) [30, 31, 33]. There are three reasons to use ROxford and RParis as our base benchmarks. Firstly, they are notoriously difficult and can better reflect the search engines' performance. Secondly, each query in these datasets has up to hundreds of positive images, so they are suitable for evaluating the fine-grained recognition ability. Thirdly, every positive image is guaranteed to be identifiable by people without considering any contextual visual information [33].
We provide segmentation labels for a total of 5,942 images in ROxford and RParis. To ensure the label quality, three professional annotators independently label the query-index pairs and then refine and check the labels. The annotators are aged between 26 and 32 and have worked full-time on annotation for over two years. We then design new metrics, mAP@50:5:95 and mAP, to evaluate the pixel retrieval performance (Section 2).
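The exact definitions of these metrics are given in the corresponding section; purely as an illustration, the sketch below shows one plausible way an IoU-thresholded retrieval mAP could be computed, with a COCO-style threshold sweep. The formulation (a hit is a true positive whose mask IoU exceeds the threshold, with AP normalized by the number of ground-truth positives) is an assumption of this sketch, not the benchmark's official definition.

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between two boolean masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union > 0 else 0.0

def average_precision(ranked_hits, n_relevant):
    """Retrieval AP for a ranked list of 0/1 hit flags, normalized by the
    number of ground-truth positive images."""
    hits = np.asarray(ranked_hits, dtype=float)
    if n_relevant == 0:
        return 0.0
    precision_at_k = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precision_at_k * hits).sum() / n_relevant)

def pixel_retrieval_map(ranked_ious, is_positive, thresholds=np.arange(0.5, 1.0, 0.05)):
    """One plausible IoU-thresholded retrieval AP: at threshold t a ranked image
    counts as a hit only if it is a true positive AND its predicted-mask IoU >= t;
    the APs are then averaged over the 0.5:0.05:0.95 threshold sweep."""
    n_rel = int(sum(is_positive))
    aps = [average_precision([pos and iou >= t
                              for iou, pos in zip(ranked_ious, is_positive)], n_rel)
           for t in thresholds]
    return float(np.mean(aps))

# Toy ranked list for one query: (predicted-mask IoU with the GT mask, positive flag)
ious = [0.82, 0.10, 0.61, 0.55, 0.00]
positives = [True, False, True, True, False]
print(f"IoU-swept mAP-style score: {pixel_retrieval_map(ious, positives):.3f}")
```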
We provide an extensive comparison of State-Of-The-Art (SOTA) methods in related fields, including image search, detection, segmentation, and dense matching with our benchmarks. The experiments yield several interesting findings. For example, we find that the SOTA spatial verification methods [6, 28] assign a high inlier count to some true query-index pairs while matching the wrong regions. We find the dense and pixel-level approaches [25, 52] helpful for the pixel retrieval task. Most importantly, our results show that pixel retrieval is difficult and further research is needed to advance the user experience on the content-based search task.
Our contributions are as follows:
* We introduced the pixel retrieval task and provided the first two landmark pixel retrieval benchmarks, PROxford and PRParis. Three professional annotators labeled, refined, and checked the labels.
* We conducted a user study and showed that pixel-level annotation can significantly improve the user experience.
* We performed extensive experiments with SOTA methods in image search, detection, segmentation, and dense matching. Our experiment results can be used as the baselines for future study.
## 2 Content-based pixel retrieval
### Why Revisited Oxford and Paris?
We design the first content-based pixel retrieval benchmarks, PROxford and PRParis, directly on top of the famous image retrieval benchmarks Revisited-Oxford (ROxford) and Revisited-Paris (RParis) [30, 31, 33]. Oxford [30] and Paris [31] were introduced by Philbin _et al_. in 2007 and 2008, respectively. Their images were obtained from Flickr by searching text tags for famous landmarks of Oxford University and Paris. Radenovic _et al_. [33] refined the annotations and added more difficult queries in 2018; the refined datasets are called ROxford and RParis.
We choose ROxford and RParis because they are among the most popular image retrieval benchmarks. Many well-known image retrieval methods are evaluated on them, from the traditional methods like RootSIFT [2], VLAD [13], and ASMK [48], to the recent deep learning based methods like R-MAC [48], GeM [34], and DELF [28].
These datasets are ideal data sources for our pixel retrieval benchmarks, thanks to several properties. Firstly, compared to other famous datasets like the image matching dataset Phototourism [14] and the dense matching dataset MegaDepth [18], the positive image pairs in ROxford and RParis have severe viewpoint changes, occlusions, and illumination changes. The new queries added by Radenovic _et al_. [33] have cropped regions that cause extreme zoom changes with respect to the positive database images. These properties make ROxford and RParis notoriously difficult. Secondly, each query image has up to hundreds of positive database images, while other datasets, such as UKBench [27] and Holiday [12], only have 4 to 5 positive images for each query. A large number of
challenging positive images are suitable for evaluating fine-grained recognition ability.
The Google Landmark Dataset (GLD) [55] encompasses more landmarks than ROxford and RParis. However, ROxford and RParis outshine GLD in labeling quality. Notably, they stand as distinct benchmarks for contrasting machine and human recognition prowess.
It is known that people cannot easily recognize an object if it changes its pose significantly [32], but we do not know where the limit is. ROxford and RParis are **the only existing datasets that can reflect the human ability to identify objects** in the landmark domain to the best of our knowledge. Every positive image in ROxford and RParis is checked by five annotators independently based on the image appearance, and all the unclear cases are excluded [33]. This kind of annotation has two benefits. Firstly, although these benchmarks are difficult, the positive images are guaranteed to be identifiable by people without considering any contextual visual information [33]. This shows the possibility of enabling the machine to recognize these positive images by analyzing only the visual cues in the given query-index image pair. Secondly, these datasets can be used to compare human and machine recognition performance; human-level recognition performance should identify all the positive images. Although the classification performance (the top 5 accuracy) of machines on ImageNet has surpassed that of humans [37], the SOTA identification ability for first-seen objects in ROxford and RParis is still far from human-level [17, 1, 6].
### From image retrieval to pixel retrieval
In the same spirit in which semantic segmentation works as an extension of classification and provides pixel-level category information to the machines, pixel retrieval is an extension of image retrieval. It offers information about which pixels or regions are related to the query object. This task is very helpful when only a small region of the positive image corresponds to the query. Such situations frequently happen in many image retrieval applications, such as web search [33, 16, 20], medical image analysis [24, 5, 57], geographical information systems [61, 63, 42], and so on. We discuss the related applications in Section 3. Distinguishing and segmenting first-seen objects is also one basic function of the human visual system [43]; it is meaningful to understand and automate this ability.
Some previous works also noticed the importance of localizing the query object in the searched image. They have tried to combine image search and object localization [16, 20, 40]. However, due to the lack of a challenging pixel retrieval benchmark, they show only the qualitative result instead of the quantitative performance. Pixel-level labeling and quality assurance are arduous. In this work, 5,942 images are labeled, refined, and checked by three professional annotators. We hope this benchmark can boost and encourage future research on pixel-level retrieval.
We also compare our pixel-retrieval benchmark with segmentation, image matching, and dense matching benchmarks in the supplementary material.
### Pixel-level annotation
**Images to annotate.** ROxford and RParis each contain 70 queries. These queries are divided into 26 and 25 query groups in ROxford and RParis, respectively, based on visual similarity; queries in the same query group share the same ground-truth index image list. In total, there are 1,985 and 3,957 images to annotate for our PROxford and PRParis, respectively.
**Mask annotation.** Figure 2 shows our labeling process. Researchers with a computer vision background first annotate the target object in each query image. Each annotator for our new benchmark observes all the queries with masks in a query group and labels the segmentation mask for the images in the ground-truth list. Annotators are asked to identify the query object in the labeling image first and then label all the pixels depicting the target object. We show the query masks and the labeling instruction details in the supplementary materials.
**Objectivity.** To ensure the pixel retrieval task and our benchmark are objectively defined, we adopt two approaches. Firstly, we use query masks to distinctly identify the target objects and segregate them from the background (_e.g.,_ the sky), occlusions (_e.g.,_ other buildings), and the remaining part of the same building if the object is only a small part of it. These masks guide the removal of background and indicate the query boundary. Secondly, by examining the query with masks, our annotators reach a consensus on the target object and its boundary, thereby avoiding disagreement about our query intention. This consensus-based approach is a common method for reducing subjectivity in recognition tasks; it is also employed in the original ROxford and RParis benchmarks, where voting is used to determine the final ground truth for each query [33].
We retain small-sized occlusion objects, like windows
Figure 2: Labeling process (please zoom in for details).
and fences, during annotation. While this may involve subjective judgments regarding what qualifies as a small-sized occlusion, it is worth noting that well-known semantic segmentation datasets like VOC [8] and COCO [19] also involve subjective elements, such as identifying objects on a table as a table or the background behind the bike wheel as a bike. Such subjectivities are inevitable, given the difficulty of removing them. Nonetheless, they do not diminish the usefulness of benchmarks as reliable metrics for evaluating state-of-the-art methods. We include in the supplementary materials our mask rules, all the queries with masks, and our consensus checking.
**Quality assurance.** To improve the annotation quality, the labeling of every query-index image pair is performed by three professional annotators following three steps: 1) annotate; 2) refine + inspect; 3) refine + inspect, as shown in Figure 2. The three annotators are aged between 26 and 32 years old and have worked on annotation full-time for over 2 years. Their work has been qualified in many annotation projects.
### Evaluation metrics
**Pixel retrieval from the database.** Pixel retrieval aims to find all the pixels depicting the query object in the large-scale database images. An ideal pixel retrieval algorithm should achieve image ranking, reranking, localization, and segmentation simultaneously. To the best of our knowledge, there is no existing pixel retrieval metric yet. Detection and segmentation tasks usually use mIoU and mAP@50:5:95 as the standard measurements [36]. Image retrieval methods commonly use mAP as the metric [33]. We combine them to evaluate the ranking, localization, and segmentation performance in pixel retrieval. Each ground-truth image in the ranking list is treated as a true positive (TP) only if its detection or segmentation Intersection over Union (IoU) is larger than a threshold \(n\). The rest of the AP and mAP calculation follows the traditional image-search mAP. Note that the mAP calculation methods in image search and traditional segmentation [8] are different; image search focuses more on ranking. As in the detection and segmentation fields, the threshold \(n\) is set from 0.5 to 0.95, with step 0.05. The average of the scores under these thresholds is the final metric mAP@50:5:95. It is desirable to report both detection and segmentation mAP@50:5:95 for methods that can generate pixel-level results; high segmentation performance does not necessarily lead to high localization performance, as shown in Sec. 5. We follow the medium and hard protocols of ROxford and RParis [33], with and without the 1\(M\) distractors.
**Pixel retrieval from ground-truth query-index image pairs.** We can use existing ranking/reranking methods and treat the remaining process as one-shot detection/segmentation. In this case, the detection or segmentation performance is evaluated using the mean of mIoU of all the queries, where mIoU is the mean of the IoUs for all the ground-truth index images. We do not consider the false pairs because the ranking metric mAP well reflects the influence of false pairs in the ranking list.
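A minimal sketch of how the headline metric could be computed is given below. It is only an illustration of the procedure described above, not the official evaluation code: the helper names are ours, and the AP formula is the simplified, non-interpolated image-search version.

```python
import numpy as np

def average_precision(ranked_hits, num_gt):
    # Image-search style AP: mean of the precision values at each true-positive
    # rank (simplified; the revisited-Oxford protocol uses a smoothed precision).
    precisions, tp = [], 0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / num_gt if num_gt else 0.0

def pixel_retrieval_map(queries, thresholds=np.arange(0.50, 0.96, 0.05)):
    # `queries`: list of dicts with
    #   'ranking': ordered list of (is_ground_truth, iou) pairs down the ranking
    #              list, iou being the detection or segmentation IoU of the
    #              predicted region against the annotated mask (0 for negatives);
    #   'num_gt' : number of ground-truth index images for that query.
    per_threshold = []
    for t in thresholds:
        aps = []
        for q in queries:
            hits = [is_gt and iou >= t for is_gt, iou in q['ranking']]
            aps.append(average_precision(hits, q['num_gt']))
        per_threshold.append(np.mean(aps))
    return float(np.mean(per_threshold))  # mAP@50:5:95
```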
## 3 Applications of pixel retrieval
Pixel retrieval requires the machine to recognize, localize, and segment a particular first-seen object, which is one of the fundamental abilities of the human visual system. It is useful for many applications. In this section, we first show that it can significantly improve the user experience in web search. We then discuss how pixel retrieval can help image-level ranking techniques. Finally, we introduce some other applications that may also benefit from pixel retrieval.
**Web search user experience improvement.** Modern image retrieval techniques focus on improving the image-level ranking performance of hard cases, such as images under extreme lighting conditions, novel views, or complicated occlusions. However, users may not easily perceive a hard case as a true positive, even if it is at the top of the ranking list. We claim that pixel-level annotation can significantly improve the user experience on the web search application.
To see how pixel-level annotation improves the user experience on image search, we ran a user study where users were asked to find images that contain a given target among candidate images in two different conditions; the one with pixel-level annotations (_i.e.,_ Pixel retrieval) and the other with no annotations (_i.e.,_ Image retrieval). We recruited 40 participants on Prolific1 and compared the time taken to complete the task between the two conditions.
Footnote 1: prolific.co
Participants were asked to complete 16 questions in total, where eight of them were Pixel retrieval and the other eight were Image retrieval. We divided the participants into four groups and counterbalanced the type of questions (Figure 3). For each question, participants were given a query image and 12 candidate images. There were three true positives and nine false positives in the candidate images, and we randomly chose ground truth images of other queries as false positives. We shuffled the order of the candidate images and asked participants to choose three images that contain the query image (_i.e.,_ true positives) among them. Figure 1 shows one of the 16 questions. You can check our user study from this link. To start the user study, please enter any character into the "unique Prolific ID" blank. Anonymity is guaranteed.
Our results show that participants completed the task faster when the pixel-level annotations were presented (mean=37.07s, std=49.76s) than when no annotations were presented (mean=53.71s, std=80.08s). The difference between the two conditions is statistically significant (t-test, p-value=0.00091), and participants responded that it was helpful to see annotations while completing the task (mean=6.375/7, std=0.89).
**Other applications.** Image retrieval techniques have been applied to many applications, such as medical diagnosis and geographical information systems (GIS). Pixel-level retrieval is also desirable for these applications. For example, medical and geographical images are usually huge, and doctors and GIS experts are interested in retrieving the regions of particular structures or landmarks from the whole images in the database [57, 5, 24, 63].
Pixel retrieval can also help image matting [54, 56, 60]. Current image matting techniques rely on the user's click to confirm the target matting region [54, 56, 60]. Our pixel retrieval provides a new interaction method: deciding the target object based on the query example. This query-based interaction can significantly reduce user effort in situations where many images depict the same object [41].
## 4 Experiment
We evaluate the performance of state-of-the-art (SOTA) methods in multiple fields on our new pixel retrieval benchmarks. Our new pixel retrieval task is a visual object recognition problem. It requires the search engine to automate the human visual system's ability to identify, localize, and segment an object under illumination and viewpoint changes. It can be seen as a combination of image retrieval, one-shot detection, and one-shot segmentation. We introduce these related tasks and their SOTA methods in this section, and we implement these SOTA methods and discuss their results in Section 5.
### Localization in retrieval
Some pioneering works [16, 20, 40] in image retrieval emphasized the importance of localization and tried to combine the retrieval and detection methods. However, due to the lack of a standard pixel retrieval benchmark, these pioneering works only showed qualitative results instead of quantitative comparisons. In this paper, we implement and compare the SOTA localization-related retrieval methods on our new benchmark dataset. They can be divided into two categories: spatial verification (SP) and detection.
SP [40, 23, 2, 28, 6] is one of the most popular reranking approaches in image retrieval. It is also known as image matching [14]; SP and stereo task in Image Matching Challenge (IMC) [14] share the same pipeline and theory except for the final evaluation step. In this work, we selected the local features and matching hyperparameters with the best retrieval performance on ROxford and RParis, which contain more challenging cases than datasets in IMC.
SP compares the spatial configurations of the visual words in two images. Theoretically, it can achieve verification and localization simultaneously. However, the image-level ranking performance cannot fully reflect the SP accuracy or localization performance. In the hard positive cases, _e.g._, where many repeated patterns exist in the background, even though SP generates a high inlier number and ranks an image on top of the ranking list, the matched visual words can be wrong due to the repeated patterns. Our pixel retrieval benchmark can not only evaluate the localization performance, but also better reflect the SP accuracy and be helpful for future SP studies.
Researchers mainly focus on generating better local features to improve SP performance. Classical local features include SIFT [23], SURF [3], and RootSIFT [2]. Recently, DELF [28] and DELG [6] local features, which are learned from the large landmark training set [55], have achieved the SOTA SP results. We evaluate the SP performance with SIFT, DELF, and DELG features on our new benchmark datasets in this paper.
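For concreteness, a minimal SP sketch with OpenCV SIFT features is shown below. It is an illustrative baseline only, not the tuned configuration used in our experiments; the inlier keypoints it returns are what can be rendered as the union-of-matches prediction mask visualized in Figure 4.

```python
import cv2
import numpy as np

def sift_spatial_verification(query_gray, index_gray, ratio=0.8):
    # Match SIFT features between two grayscale images, verify with RANSAC, and
    # return the inlier count plus the inlier locations in the index image.
    sift = cv2.SIFT_create()
    kq, dq = sift.detectAndCompute(query_gray, None)
    ki, di = sift.detectAndCompute(index_gray, None)
    if dq is None or di is None:
        return 0, np.empty((0, 2))

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = []
    for pair in matcher.knnMatch(dq, di, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            matches.append(pair[0])                 # Lowe's ratio test
    if len(matches) < 4:                            # homography needs >= 4 points
        return len(matches), np.empty((0, 2))

    src = np.float32([kq[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ki[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if inlier_mask is None:
        return 0, np.empty((0, 2))
    inliers = dst.reshape(-1, 2)[inlier_mask.ravel() == 1]
    return int(inlier_mask.sum()), inliers
```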
Another localization-related image search approach is to directly apply detection methods [16, 20, 35, 36, 53, 44, 45]. Faster-RCNN [36] and the SSD detector [22], fine-tuned on a huge manually boxed landmark dataset [45], achieve the SOTA detection-based retrieval results [45]. Detect-to-retrieve (D2R) [45] uses these fine-tuned models to detect several landmark regions in a database image and uses aggregation methods like the Vector of Locally Aggregated Descriptors (VLAD) [13] and the Aggregated Selective Match Kernel (ASMK) [46] to represent each region. To better check the effect of the aggregation methods, we also implement the Mean aggregation (Mean), which simply represents each region using the mean of its local descriptors. The region with the highest similarity to the query is taken as the target region. We evaluate the combination of different detectors and aggregation methods on our pixel retrieval benchmarks.
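A rough sketch of the Mean aggregation and region-scoring step is given below; it is illustrative only, with hypothetical inputs standing in for the detector boxes and local descriptors produced by the actual D2R pipeline.

```python
import numpy as np

def mean_aggregate(local_descriptors):
    # Represent one detected region by the L2-normalized mean of its local descriptors.
    v = np.asarray(local_descriptors, dtype=np.float32).mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def best_region(query_descriptor, region_descriptor_sets):
    # Score every detected landmark region against the (normalized) query
    # descriptor and return the index and score of the most similar region,
    # which is taken as the predicted target region.
    q = np.asarray(query_descriptor, dtype=np.float32)
    q = q / (np.linalg.norm(q) + 1e-12)
    scores = [float(q @ mean_aggregate(r)) for r in region_descriptor_sets]
    best = int(np.argmax(scores))
    return best, scores[best]
```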
### One-shot detection and segmentation
We can treat pixel retrieval as combining image retrieval and one-shot detection and segmentation. We test the performance of these approaches.
The Vision Transformer for Open-World Localization
Figure 3: Design of the study on web search user experience. Image retrieval refers to a setting where no annotations are provided, whereas Pixel retrieval refers to a setting where pixel-level annotations are provided. 40 participants were divided into four groups and we counterbalanced the type of questions across the groups. Numbers 1 to 16 indicate the 16 questions.
(OWL-ViT) [26] is a vision transformer model trained on the large-scale 3.6 billion images in LiT dataset [59]. It has shown the SOTA performance on several tasks including one-shot detection. The One-Stage one-shot Detector (OS2D) combines and refines the traditional descriptor matching and spatial verification pipeline in image search to do the one-shot detection. It achieves impressive detection performance in several domains, _e.g._, retail products, buildings, and logos. We test these two detection methods on our new benchmarks.
The Hypercorrelation Squeeze Network (HSNet) [25] is one of the most famous few-shot segmentation methods. It finds multi-level feature correlations for a new class. The Mining model (Mining) [58] exploits the latent novel classes during the offline training stage to better tackle the new classes in the testing time. The Self-Support Prototype model (SSP) [9] generates the query prototype in testing time and uses the self-support matching to get the final segmentation mask. The self-support matching is based on one of the classical Gestalt principles [15]: pixels of the same object tend to be more similar than those of different objects. It achieves the SOTA few-shot segmentation results on multiple datasets. We evaluate these three methods on our new pixel retrieval benchmarks.
### Dense matching
Different from image matching (SP in this paper), which calculates the transformation between two images of the same object from different views, dense matching focuses on finding dense pixel correspondence. We check if we can use the SOTA dense matching methods to correctly find the correspondence points for pixels in the query image and achieve our pixel retrieval target.
GLUNet [50] and RANSAC-flow [39] are popular among the many well-known dense matching methods. Recently, Truong _et al_. have shown that the warp consistency objective (WarpC) [52] and the GOCor module [49] can further improve the performance and achieve a new SOTA. Another popular method is PDC-Net [51]. It can predict the uncertainty of the matching pixels. This uncertainty can be useful for our pixel retrieval task, which is sensitive to outliers. We test the original GLUNet, GLUNet with WarpC (WarpC-GLUNet), GLUNet with the GOCor module (GOCor-GLUNet), and PDC-Net in Table 1.
### Experiment detail
We try our best to find the best possible result for each method on our novel benchmark. The retrieval localization methods employed in this study, including image matching (SP in this paper) and D2R, were configured to achieve optimal performance on ROxford and RParis. These methods rely on precise localization to enhance image retrieval performance. Thus, we adopt the same experimental configurations in our similar pixel retrieval benchmark. Similarly, dense matching methods, which encompass geometric and semantic matching tasks, are expected to operate directly on our pixel retrieval benchmark, as per task definitions. We evaluate its geometric models with the best performance on MegaDepth [18] and ETH3D [38], datasets that feature actual building images, rendering them the ideal valid sets for our benchmark. The difference is that our dataset contains more extreme viewpoints and illumination changes. Moreover, we evaluate the performance of semantic models to see if including semantic information can enhance rigid body recognition in our benchmarks. We refrained from fine-tuning the segmentation methods as there is no segmentation training set pertaining to the building domain to the best of our knowledge. Our comprehensive experimental findings can be employed as baseline metrics for future comparisons. We include the detailed experimental configurations for each method in the supplementary materials and intend to make them, along with their codes, publicly available.
## 5 Results and discussion
We report the results of pixel retrieval from ground-truth image pairs (mean of mIoU) for all the above mentioned methods in Table 1. We choose one to two representative methods for each field and show their qualitative results in Figure 4. To evaluate the performance of pixel retrieval from database, we combine these methods with SOTA image level ranking and reranking methods: DELG and hypergraph propagation (HP) [1]. We show their final mAP@50:5:95 in Table 2.
Although SP achieves impressive image-level retrieval results [6, 28], it shows suboptimal performance on pixel retrieval. We observe some true positive pairs where SP gives a high inlier number but matches the wrong regions. For example, in the first easy case in Figure 4, SP with DELG features generates 19 inliers, but none of the inliers are in the target object region. Note that an inlier count of 19 is high, and only 4 false-positive images are ranked ahead of this easy case in the final DELG reranking list [6]. This is not to say DELG is bad; in fact, its matching results are quite good in most cases. We choose this striking example only to show that image-level ranking performance is not enough to reflect the SP accuracy. Our pixel retrieval benchmarks can be used to evaluate the locations of the features matched by SP.
For SP, both deep-learning features DELF and DELG significantly outperform the SIFT features. Interestingly, although DELG shows better image retrieval performance [6] than DELF, it is slightly inferior to DELF in the pixel retrieval task. One reason might be that, though DELG generates more matching inliers for the positive pairs than DELF, these inliers tend to lie in a small region and do not reflect the location or size of the target object. Improving SP performance at both the image and pixel level can be a practical research topic.
Although detect-to-retrieve (D2R) [45] is inferior to SP in image retrieval [6, 28, 33], it shows better performance than SP on our pixel-level retrieval benchmarks. We conjecture that the detection models tend to cover the whole building more than SP does. Our benchmark is helpful for checking this conjecture and designing better pixel retrieval models in future works. The results for the region detector and the aggregation method follow the trend seen in image search [45]. The VLAD and ASMK aggregation methods significantly improve over the Mean aggregation. The Faster-RCNN-based detector shows better performance than the SSD-based one.
For dense matching methods, GLU-Net using warp consistency or GOCor module and PDC-Net show better results than other models. This trend is similar to that in the dense matching benchmark Megadepth [18].
The segmentation methods significantly outperform other methods in terms of the mean of segmentation mIoU. However, their detection mIoU results are not so impressive. They tend to predict the entire foreground, which contains the target building, as shown in the SSP line of Figure 4. Among the segmentation methods, SSP shows better segmentation than others, showing its self-support approach is helpful for finding more related pixels.
Another interesting finding is that a better image ranking mAP does not necessarily bring a better pixel retrieval mAP@50:5:95, as shown in Table 2. The reason might be that the image search techniques rank some hard cases high, but the detection methods do not localize the query object well in them.
It is interesting to note that segmentation and dense
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Medium} & \multicolumn{4}{c|}{Hard} \\ \cline{2-9} & \multicolumn{2}{c|}{PROxf} & \multicolumn{2}{c|}{PRPar} & \multicolumn{2}{c|}{PROxf} & \multicolumn{2}{c|}{PRParPar} \\ \cline{2-9} & D & S & D & S & D & S & D & S \\ \hline \multicolumn{9}{|c|}{Localization methods in retrieval} \\ \hline SIFT+SP [30] & 10.5 & 3.9 & 14.0 & 5.1 & 7.1 & 2.4 & 12.4 & 4.3 \\ DELF+SP [28] & 14.5 & **5.5** & 21.3 & **7.5** & 9.4 & **4.1** & 16.7 & **5.5** \\ DELG+SP [6] & 13.8 & 5.2 & 18.6 & 7.2 & 8.9 & 2.9 & 13.6 & 4.9 \\ D2R [45]+Resnet-50-Faster-RCNN+Mean & 20.2 & - & 29.6 & - & 16.7 & - & 27.4 & - \\ D2R [45]+Resnet-50-Faster-RCNN+VLAD [13] & 25.8 & - & 37.5 & - & **21.6** & - & 35.5 & - \\ D2R [45]+Resnet-50-Faster-RCNN+ASMK [47] & **26.3** & - & **38.5** & - & **21.6** & - & **35.6** & - \\ D2R [45]+Mobilenet-V2-SSD+Mean & 19.7 & - & 25.9 & - & 20.1 & - & 27.9 & - \\ D2R [45]+Mobilenet-V2-SSD+VLAD [13] & 23.1 & - & 33. & - & 20.9 & - & 33.6 & - \\ D2R [45]+Mobilenet-V2-SSD+ASMK [47] & 22.4 & - & 34.0 & - & 20.8 & - & 33.1 & - \\ \hline \multicolumn{9}{|c|}{One-shot detection and segmentation methods} \\ \hline OWL-VIT (LiT) [26] & 11.4 & - & 18.0 & - & 6.3 & - & 15.0 & - \\ OS2D-v2-trained [29] & 10.5 & - & 13.7 & - & 11.7 & - & 14.3 & - \\ OS2D-v1 [29] & 7.0 & - & 8.5 & - & 8.7 & - & 9.2 & - \\ OS2D-v2-init [29] & 13.6 & - & 15.4 & - & 14.0 & - & 15.1 & - \\ SSP (COCO) + ResNet50 [9] & 19.2 & 34.5 & 31.1 & 48.7 & 15.1 & 25.3 & 29.8 & **41.7** \\ SSP (VOC) + ResNet50 [9] & 19.7 & 34.3 & 31.4 & **48.8** & 16.1 & **26.1** & 30.3 & 40.4 \\ HSNet (COCO) + ResNet50 [25] & 23.4 & 32.8 & 37.4 & 41.9 & 21.0 & 25.7 & 34.7 & 36.5 \\ HSNet (VOC) + ResNet50 [25] & 21.0 & 29.8 & 31.4 & 39.7 & 17.1 & 23.2 & 29.7 & 34.9 \\ HSNet (FSS) + ResNet50 [25] & **30.5** & **35.7** & **39.4** & 40.2 & **22.7** & 25.1 & **34.7** & 32.8 \\ Mining (VOC) + ResNet50 [58] & 18.3 & 30.5 & 29.6 & 42.7 & 15.1 & 21.4 & 28.1 & 34.3 \\ Mining (VOC) + ResNet101 [58] & 18.1 & 28.6 & 29.5 & 40.0 & 14.2 & 20.4 & 28.2 & 34.4 \\ \hline \multicolumn{9}{|c|}{Dense matching methods} \\ \hline GLUNet-Geometric [50] & 18.1 & 13.2 & 22.8 & 15.2 & 7.7 & 4.6 & 13.3 & 7.8 \\ PDCNet-Geometric [51] & 29.1 & 24.0 & 30.7 & 21.9 & 20.4 & 15.7 & 20.6 & 12.6 \\ GOCor-GLUNet-Geometric [49] & 30.4 & **26.0** & 33.4 & 25.6 & 20.8 & **16.0** & 19.8 & 13.3 \\ Warp-GLUNet-Geometric (megadepth) [52] & **31.3** & 25.4 & 36.6 & **27.3** & **21.9** & 15.8 & 26.4 & 17.3 \\ Warp-GLUNet-Geometric (megadepth_stage1) [52] & 23.5 & 19.3 & 28.1 & 20.7 & 13.2 & 8.9 & 17.0 & 10.9 \\ GLUNet-Semantic [50] & 18.5 & 14.4 & 22.4 & 15.6 & 8.7 & 5.6 & 12.8 & 7.8 \\ Warp-GLUNet-Semantic [52] & 27.5 & 21.4 & **36.8** & 25.7 & 18.5 & 11.9 & **28.3** & **17.6** \\ \hline \end{tabular}
\end{table}
Table 1: Results of pixel retrieval from ground truth query-index image pairs (% mean of mIoU) on the PROxf/PRPar datasets with both Medium and Hard evaluation protocols. D and S indicate detection and segmentation results respectively. **Bold** number indicates the best performance in each field; **red** number indicates the best performance throughout all fields.
matching methods have demonstrated superior mIoU results compared to matching-based and detection-based retrieval methods, despite not being originally designed for retrieval tasks. However, to effectively tackle the pixel retrieval task, these methods must work in conjunction with image search techniques. While dense matching and segmentation methods are better suited for identifying target object areas, they may not achieve fine-grained recognition. In contrast, existing retrieval methods tend to identify certain textures or corners but lack the ability to capture the entire object's shape. Without a reliable benchmark, retrieval methods may simply associate an object and its context to improve image
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{PROxf} & \multicolumn{2}{c|}{PROxf+R1M} & \multicolumn{2}{c|}{PRPar} & \multicolumn{2}{c|}{PRPar+R1M} \\ \hline & & M & H & M & H & M & H & M & H \\ \hline \multicolumn{8}{|c|}{Image retrieval: DELG initial ranking [6]} \\ \hline \multicolumn{8}{|c|}{Image level mAP} & 76.3 & 55.6 & 63.7 & 37.5 & 86.6 & 72.4 & 70.6 & 46.9 \\ \hline \multirow{4}{*}{Pixel retrieval} & DELG + SP [6] & 6.1 & 6.3 & 5.8 & 6.7 & 10.9 & 8.0 & 10.5 & 7.8 \\ & D2R+Faster-RCNN+ASMK [45] & 29.6 & 22.5 & 28.8 & 19.1 & 26.3 & 25.6 & 23.7 & 20.5 \\ & OWL-VIT [26] & 13.1 & 8.1 & 12.8 & 7.2 & 8.3 & 12.7 & 7.6 & 11.4 \\ & SSP [9] & **37.3** & 34.6 & **36.6** & 29.9 & **47.0** & **43.1** & **44.5** & **37.1** \\ & WarpCGLUNet [52] & 34.3 & **36.8** & 33.9 & **34.9** & 33.9 & 28.8 & 32.9 & 27.1 \\ \hline \multicolumn{8}{|c|}{Image retrieval: DELG initial ranking [6] + HP reranking [1]} \\ \hline \multicolumn{8}{|c|}{Image level mAP} & 85.7 & 70.3 & 78.0 & 60.0 & 92.6 & 83.3 & 86.6 & 72.7 \\ \hline \multirow{4}{*}{Pixel retrieval} & DELG + SP [6] & 6.4 & 7.2 & 6.2 & 7.5 & 10.7 & 6.0 & 10.7 & 5.9 \\ & D2R+Faster-RCNN+ASMK [45] & 30.1 & 23.5 & 30.5 & 22.0 & 26.3 & 25.3 & 25.7 & 24.9 \\ & OWL-VIT [26] & 12.3 & 6.6 & 12.1 & 13.6 & 7.9 & 7.6 & 7.9 & 7.8 \\ \cline{1-1} & SSP [9] & **33.0** & 29.7 & **35.7** & **30.5** & **46.4** & **37.2** & **45.6** & **37.2** \\ \cline{1-1} & WarpCGLUNet [52] & 31.2 & **32.6** & 31.5 & 31.7 & 34.1 & 27.3 & 34.3 & 28.1 \\ \hline \end{tabular}
\end{table}
Table 2: Results of pixel retrieval from database (% mean of mAP@50:5:95) on the PROxf/PRPar datasets and their large-scale versions PROxf+1M/PRPar+1M, with both Medium (M) and Hard (H) evaluation protocols. **Bold** indicates the best performance using the same image ranking list; **red** indicates the best performance in two ranking lists. **green** lines show the image level mAPs of the ranking lists.
Figure 4: Qualitative comparison of the SOTA methods in different fields on the pixel retrieval benchmarks. Blue masks represent the prediction results of each method. For SP and WarpCGLUNet, we consider the union of all the matching points as the prediction masks. We also show the inlier numbers for the SP method. Pixel retrieval is challenging for existing methods and further research is needed.
level performance, leading to low localization and segmentation results, as we discussed above. We did our best to prepare our new benchmark so that it can provide a valuable evaluation for novel methods targeting pixel retrieval, which requires fine-grained and variable-granularity detection and segmentation. Moreover, we find pixel retrieval challenging. The current best mAP@50:5:95 in PROxford and PRParis at medium setting without distractors are only 37.3 and 47.0.
## 6 Future works
We present a novel task termed "Pixel Retrieval." This task mandates segmentation but transitions from a semantic directive to the content-based one, thus bypassing semantic vagueness. Concurrently, it demands large-scale, instance-level recognition--a subject frequently explored by the retrieval community. This innovative task poses several unique challenges, some of which we outline below:
### Enhancing accuracy
For a superior user experience, it's vital to embrace methods, workflows, and datasets that bolster accuracy. Our findings illustrate that segmentation and dense matching methods are beneficial, especially when an image ranking list is provided using existing retrieval techniques. Beyond merely superimposing segmentation over retrieval, a compelling approach would be to rank images based on the results of the segmentation. Further insights and experimental outcomes in this regard are available on our website.
Although the introduction of new datasets, even those echoing the landmarks in our benchmarks, is commendable, it's pivotal to articulate their application to discern the sources of performance enhancements. If PROxford/PRParis and ROxford/RParis are employed as benchmarks, it's crucial to ensure the consistent usage of the same training set. Given the public accessibility of our ground truth files, it's imperative to prevent any unintended data leaks during training.
### Scalability and speed
A major challenge lies in scaling the algorithms and augmenting the retrieval speed. Techniques like segmentation and dense matching, which compute for every pair, inherently lag in speed when compared to retrieval methods such as ASMK and D2R. Therefore, swift methods that can cater to extensive scales are highly sought after.
### Innate visual recognition and The significance of training data
The prevalent trend in research is to amass expansive training or fine-tuning sets closely aligned with test instances--certainly a commendable approach. However, intriguingly, humans exhibit an innate ability to discern instances in query images. Our annotators, despite being unfamiliar with European landmarks, could effortlessly segment target objects in each positive image, even when subjected to extreme lighting and perspective alterations. What fuels this innate recognition? Is it purely due to extensive prior exposure, or are there underlying mechanisms at play? How pivotal is the training dataset in replicating human-like content-based segmentation, especially when semantic influences are excluded? These questions beckon exploration.
## 7 Conclusion
We introduced the first landmark pixel retrieval benchmark datasets, _i.e._, PROxford and PRParis, in this paper. To create these benchmarks, three professional annotators labeled, refined, and checked the segmentation masks for a total of 5,942 image pairs. We executed the user study and found that pixel-level annotation can significantly improve the user experience on web search; pixel retrieval is a practical task. We did extensive experiments to evaluate the performance of SOTA methods in multiple fields on our pixel retrieval task, including image search, detection, segmentation, and dense matching. Our experiment results show that pixel retrieval is challenging and further research is needed.
|
2310.00423 | Ratios of Hidden-Charm Compact Pentaquark Decay Widths in Quark-Diquark
Model | A number of resonances comparable with a hypothesis of hidden-charm
pentaquark is observed by the LHCb Collaboration. We interpret these narrow
resonances as compact hidden-charm diquark-diquark-antiquark systems. Within
this assumption, an interplay between the charmonium and open-charm modes is
considered. Ratios of such modes for non-strange pentaquarks are obtained and
discussed. | Alexandra A. Dobrynina, Alexander Ya. Parkhomenko, Alexey V. Zinchenko | 2023-09-30T16:02:51Z | http://arxiv.org/abs/2310.00423v1 | # Ratios of Hidden-Charm Compact Pentaquark Decay Widths in Quark-Diquark Model
###### Abstract
A number of resonances compatible with the hypothesis of hidden-charm pentaquarks has been observed by the LHCb Collaboration. We interpret these narrow resonances as compact hidden-charm diquark-diquark-antiquark systems. Within this assumption, an interplay between the charmonium and open-charm modes is considered. Ratios of such modes for non-strange pentaquarks are obtained and discussed.
## 1 Introduction
At present, the production, properties, and decays of bottom baryons are intensively studied both experimentally and theoretically. Of special interest are the \(\Lambda_{b}\)-, \(\Xi^{0}_{b}\)- and \(\Xi^{-}_{b}\)-baryons, which decay weakly and for which many decay modes have been found experimentally [1]. The \(\Lambda_{b}\)-baryon is a bound state of a heavy \(b\)-quark and a pair of light \(u\)- and \(d\)-quarks. Its mass and lifetime are \(m_{\Lambda_{b}}=5619.51\pm 0.23\) MeV and \(\tau_{\Lambda_{b}}=(1.466\pm 0.010)\times 10^{-12}\) sec, respectively [1], and such a large lifetime is due to weak interactions. More than 40 decay modes with branching fractions exceeding \(10^{-6}\) have been found experimentally [1]. Two exotic resonances, \(P_{\psi}^{N}(4380)^{+}\) and \(P_{\psi}^{N}(4450)^{+}\), consistent with the pentaquark interpretation, were originally found in the \(\Lambda_{b}\to p+J/\psi+K^{-}\) decay by the LHCb Collaboration [2]. Later, in the same channel with higher statistics, the LHCb found three narrow resonances: \(P_{\psi}^{N}(4312)^{+}\), \(P_{\psi}^{N}(4440)^{+}\), and \(P_{\psi}^{N}(4457)^{+}\), while the existence of the broad one, \(P_{\psi}^{N}(4380)^{+}\), remains under question [3]. Evidence for the original pentaquark resonances was also announced in the \(\Lambda_{b}\to p+\pi^{-}+J/\psi\) decay by the LHCb Collaboration [4]. Evidence for a resonance consistent with the strange \(P_{\psi s}^{\Lambda}(4459)^{0}\) pentaquark was reported by the LHCb Collaboration [5] in the \(\Xi^{-}_{b}\to\Lambda+J/\psi+K^{-}\) decay of the \(\Xi^{-}_{b}\)-baryon, the \(SU(3)_{F}\)-partner of \(\Lambda_{b}\). Unfortunately, the spin-parities of all these resonances are not yet determined, and theoretical speculations about their quantum numbers and binding mechanisms are still debatable (see, for example, the latest reviews on this topic [6, 7, 8]). Note that several dynamical models of pentaquarks have been suggested: the baryon-meson model (molecular pentaquark), the triquark-diquark model, the diquark-diquark-antiquark model, etc. For example, in the diquark-diquark-antiquark model [9,
10], the dynamics is determined by the interaction of the light diquark \([q_{2}q_{3}]\), the heavy diquark \([cq_{1}]\), and the \(c\)-antiquark, where \(q_{i}\) is one of the light \(u\)-, \(d\)- or \(s\)-quarks, as shown in Fig. 1. While the calculation of the mass spectrum in this model has been carried out and the experimentally observed resonances can be successfully identified with theoretically calculated states, the hidden-charm pentaquark decay mechanism has not been worked out completely. Here, we give arguments and qualitative estimates of a possible mechanism similar to the one suggested for decays of hidden-charm tetraquarks in [11].
## 2 Double Well Potential in Tetraquarks
L. Maiani, A. D. Polosa, and V. Riquer suggested in [11] the hypothesis that a tetraquark can plausibly be represented by two diquarks in a double-well potential separated by a barrier. In this case, there are two length scales: the diquark radius \(R_{Qq}\) and the tetraquark radius \(R_{4q}\), which are assumed to be well separated, with a ratio that can be estimated as \(\lambda=R_{4q}/R_{Qq}\geq 3\). Tunneling transitions of quarks result in tetraquark strong decays. They have also claimed that the diquark radius \(R_{Qq}\) in a tetraquark can be different from the diquark radius \(R_{Qq}^{\rm baryon}\) in a baryon. An increase in experimental resolution and statistics is crucial to support or disprove this hypothesis.
Let us start from the decays of hidden-charm tetraquarks into two \(D\)-mesons, taking the \(X(3872)\) as an example. The diquark-antidiquark system \(([cq][\bar{c}\bar{q}])\) can rearrange itself into a pair of color singlets by exchanging quarks through a tunneling transition. The small overlap between constituent quarks in different wells suppresses the direct annihilation of the quark-antiquark pair. So, a two-stage process should occur within this mechanism: first, the light quark and antiquark switch between the two wells and, second, the quark-antiquark pairs obtained evolve into their color-singlet components (two \(D\)-mesons). Including diquark spins (subscripts), consider the states [11]:
\[\Psi_{\cal D}^{(1)}=[cu]_{0}(x)\,[\bar{c}\bar{u}]_{1}(y),\quad\Psi_{\cal D}^{ (2)}={\cal C}\Psi_{\cal D}^{(1)}=[cu]_{1}(y)\,[\bar{c}\bar{u}]_{0}(x), \tag{1}\]
with \({\cal C}\) being the charge conjugation operator. After Fierz rearrangements of color and spin indices and assuming quarks to be non-relativistic particles, in evident meson notations one
Figure 1: The hidden-charm pentaquark in the diquark-diquark-antiquark model used for getting the mass spectrum in [9, 10].
obtains:
\[\Psi^{(1)}_{\cal D} =A\,D^{0}\bar{\mathbf{D}}^{*0}-B\,\mathbf{D}^{*0} \bar{D}^{0}+iC\,\mathbf{D}^{*0}\mathbf{\times}\bar{\mathbf{D}}^{*0},\] \[\Psi^{(2)}_{\cal D} =B\,D^{0}\bar{\mathbf{D}}^{*0}-A\,\mathbf{D}^{*0 }\bar{D}^{0}-iC\,\mathbf{D}^{*0}\mathbf{\times}\bar{\mathbf{D}}^{*0},\]
where \(A\), \(B\), and \(C\) are non-perturbative coefficients associated to barrier penetration amplitudes for different total spins of \(u\) and \(\bar{u}\) light quarks.
The other possible decay channel of hidden-charm tetraquarks is to a charmonium and light meson. The tunneling transition of light quarks is as follows:
\[X_{u}\sim\frac{1}{\sqrt{2}}\left[\Psi^{(1)}_{\cal D}+\Psi^{(2)}_{\cal D}\right] =\frac{A+B}{\sqrt{2}}\left[D^{0}\bar{\mathbf{D}}^{*0}-\mathbf{D}^{*0}\bar{D}^{0}\right], \tag{2}\]
while the tunneling transition of heavy quarks with finite masses:
\[X_{u}\sim a\,i\mathbf{J}/\mathbf{\psi}\mathbf{ \times}\left(\mathbf{\omega}+\mathbf{\rho}^{0}\right). \tag{3}\]
For the tunneling amplitude in the leading semiclassical approximation, one has \({\cal A}_{M}\sim e^{-\sqrt{2ME}\ell}\), where \(E\) and \(\ell\) are the barrier height and extension. For the constituent quark masses, \(m_{q}\) and \(m_{c}\), \(E=100\) MeV and \(\ell=2\) fm [11], one can estimate the ratio of amplitudes squared to be:
\[R=[a/(A+B)]^{2}\sim\left({\cal A}_{m_{c}}/{\cal A}_{m_{q}}\right)^{2}\sim 10^ {-3}. \tag{4}\]
With the decay momenta \(p_{\rho}\simeq 124\) MeV and \(p_{D}\simeq 2\) MeV [11], the decay width ratio has the following estimate:
\[\frac{\Gamma(X(3872)\to J/\psi\,\rho)}{\Gamma(X(3872)\to D\,\bar{D}^{*})}= \frac{p_{\rho}}{p_{D}}\,R\sim 0.1. \tag{5}\]
Its comparison with existing experimental data [1]:
\[B_{\rm exp}(X(3872)\to J/\psi\,\rho)=(3.8\pm 1.2)\%,\qquad B_{\rm exp}(X(3872) \to D\,\bar{D}^{*})=(37\pm 9)\%, \tag{6}\]
shows the excellent agreement, \(R_{\rm exp}\simeq 0.1\), but one should remember that the coefficients associated to barrier penetration amplitudes are non-perturbative quantities and require a more detail information about a potential shape and parameters entering the potential.
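The order-of-magnitude estimates (4) and (5) are easy to check numerically. In the short sketch below we take constituent quark masses \(m_{q}\simeq 300\) MeV and \(m_{c}\simeq 1500\) MeV (our assumption; only the barrier parameters \(E=100\) MeV and \(\ell=2\) fm are taken from [11]), together with the decay momenta quoted above:

```python
import math

hbar_c = 197.327                       # MeV * fm

def amplitude(mass_MeV, E_MeV=100.0, ell_fm=2.0):
    # Leading semiclassical barrier-penetration amplitude A_M ~ exp(-sqrt(2 M E) l).
    return math.exp(-math.sqrt(2.0 * mass_MeV * E_MeV) * ell_fm / hbar_c)

m_q, m_c = 300.0, 1500.0               # assumed constituent quark masses (MeV)
R = (amplitude(m_c) / amplitude(m_q)) ** 2
print(R)                               # ~ 2e-3, i.e. of order 10^-3 as in Eq. (4)

p_rho, p_D = 124.0, 2.0                # decay momenta (MeV) quoted in the text
print(p_rho / p_D * R)                 # ~ 0.1, as in Eq. (5)
```

The result depends exponentially on the assumed constituent masses, so it should be read only as an order-of-magnitude consistency check.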
Figure 2: The hidden-charm pentaquark decay to the charmed baryon and charmed meson.
## 3 Double Well Potential in Pentaquarks
In the case of pentaquarks, a similar hypothesis can be formulated: a pentaquark can be represented by a heavy diquark and a heavy triquark in a double-well potential separated by a barrier [10], as shown in Figs. 2 and 3. There are two triquark-diquark representations:
\[\Psi_{1}^{D} =\frac{1}{\sqrt{3}}\left[\frac{1}{\sqrt{2}}\,\epsilon_{ijk}\bar{c }^{i}\left[\frac{1}{\sqrt{2}}\,\epsilon^{jlm}c_{l}q_{m}\right]\right]\left[ \frac{1}{\sqrt{2}}\,\epsilon^{knp}q_{n}^{\prime}q_{p}^{\prime\prime}\right] \equiv\left[\bar{c}\left[cq\right]\right]\left[q^{\prime}q^{\prime\prime} \right], \tag{7}\] \[\Psi_{2}^{D} =\frac{1}{\sqrt{3}}\left[\frac{1}{\sqrt{2}}\,\epsilon_{ikj}\bar{c }^{i}\left[\frac{1}{\sqrt{2}}\,\epsilon^{knp}q_{n}^{\prime}q_{p}^{\prime\prime }\right]\right]\left[\frac{1}{\sqrt{2}}\,\epsilon^{jlm}c_{l}q_{m}\right] \equiv\left[\bar{c}\left[q^{\prime}q^{\prime\prime}\right]\right]\left[cq \right], \tag{8}\]
where all the diquarks are assumed to be \(\bar{3}\)-color states. From the color algebra, these states are related, \(\Psi_{2}^{D}=-\Psi_{1}^{D}\), but other internal dynamical properties can be different. The color connection of quarks in \(\Psi_{1}^{D}\) is used for getting the mass spectrum in [10]. The color structure of \(\Psi_{2}^{D}\) is suitable for studying the pentaquark strong decays. This is employed in the Dynamical Diquark Model of multiquark exotic hadrons [12, 13, 14]. The color-singlet combinations are meson-baryon alternatives:
\[\Psi_{1}^{H} =\left(\frac{1}{\sqrt{3}}\,\bar{c}^{i}c_{i}\right)\left[\frac{1}{ \sqrt{6}}\,\epsilon^{jkl}q_{j}q_{k}^{\prime}q_{l}^{\prime\prime}\right]\equiv \left(\bar{c}c\right)\left[qq^{\prime}q^{\prime\prime}\right],\] \[\Psi_{2}^{H} =\left(\frac{1}{\sqrt{3}}\,\bar{c}^{i}q_{i}\right)\left[\frac{1}{ \sqrt{6}}\,\epsilon^{jkl}c_{j}q_{k}^{\prime}q_{l}^{\prime\prime}\right]\equiv \left(\bar{c}q\right)\left[cq^{\prime}q^{\prime\prime}\right],\] \[\Psi_{3}^{H} =\left(\frac{1}{\sqrt{3}}\,\bar{c}^{i}q_{i}^{\prime}\right)\left[ \frac{1}{\sqrt{6}}\,\epsilon^{jkl}c_{j}q_{k}q_{l}^{\prime\prime}\right]\equiv \left(\bar{c}q^{\prime}\right)\left[cqq^{\prime\prime}\right],\] \[\Psi_{4}^{H} =\left(\frac{1}{\sqrt{3}}\,\bar{c}^{i}q_{i}^{\prime\prime}\right) \left[\frac{1}{\sqrt{6}}\,\epsilon^{jkl}c_{j}q_{k}q_{l}^{\prime}\right]\equiv \left(\bar{c}q^{\prime\prime}\right)\left[cqq^{\prime}\right].\]
Of these four states, only two, \(\Psi_{1}^{H}\) and \(\Psi_{2}^{H}\), satisfy the heavy-quark-symmetry condition [10]. The light \([q^{\prime}q^{\prime\prime}]\)-diquark is transmitted intact, retaining its spin quantum number, from the \(b\)-baryon to the pentaquark. Keeping the color of the light diquark unchanged, a contraction of the two Levi-Civita tensors entering the triquark gives:
\[\Psi_{1}^{D}=-\frac{\sqrt{3}}{2}\left[\Psi_{1}^{H}+\Psi_{2}^{H}\right]. \tag{9}\]
Figure 3: The hidden-charm pentaquark decay to the charmonium and light baryon.
The color reconnection alone is not enough to reexpress the pentaquark operator as a direct product of the meson and baryon operators. The spins of the quarks and diquarks should be projected onto definite hadronic spin states. One needs to know the Dirac structure of the pentaquark operators to undertake the Fierz transformations in the Dirac space under the assumption that the quarks are non-relativistic. Let us exemplify this by considering the \(P_{\psi}^{N}(4312)^{+}\) pentaquark. The diquark-diquark-antiquark operators with spinless heavy and light diquarks are [10]:
\[\Psi_{1}^{H(1)}(x,y) =\frac{1}{3}\left(\tilde{c}^{i}(x)\,\sigma_{2}\right)(c_{i}(y)\, \sigma_{2}\,q_{k}(y))\,d_{0}^{k}(x), \tag{10}\] \[\Psi_{2}^{H(1)}(x,y) =\frac{1}{3}\left(\tilde{c}^{i}(x)\,\sigma_{2}\right)(c_{k}(y)\, \sigma_{2}\,q_{i}(y))\,d_{0}^{k}(x). \tag{11}\]
For the lowest lying pentaquark, \(q=u\) and \(d_{0}=[u\,C\,\gamma_{5}\,d]\), being scalar diquark. For simplicity, all the quarks are considered in the non-relativistic limit. After the Fierz transformation of the Pauli matrices and suppressing position dependence of the fields, they can be rewritten in terms of hadrons:
\[\Psi_{1}^{H(1)}=-\frac{i}{\sqrt{2}}\left[a\,\eta_{c}+b\left(\mathbf{ \sigma\,J/\psi}\right)\right]p,\quad\Psi_{2}^{H(1)}=-\frac{i}{\sqrt{2}} \left[A\,\bar{D}^{0}+B\left(\mathbf{\sigma\,\bar{D}^{*0}}\right) \right]\Lambda_{c}^{+}. \tag{12}\]
Here, \(A\) and \(B\) (\(a\) and \(b\)) are non-perturbative coefficients associated with barrier penetration amplitudes for the light (heavy) quark. They are equal in the limit of the naive Fierz coupling. The decays of the pentaquark into the \(D\)-meson and charmed baryon and into a charmonium and light baryon through the tunneling transition are shown in Figs. 2 and 3.
Similarly, diquark-diquark-antiquark operators containing heavy diquark with the spin \(S_{hd}=1\) and light diquark with \(S_{ld}=0\):
\[\mathbf{\Psi}_{1}^{H(2)}(x,y) =\frac{1}{3}\left(\tilde{c}^{i}(x)\,\sigma_{2}\right)(c_{i}(y)\, \sigma_{2}\,\mathbf{\sigma\,q}_{k}(y))\,d_{0}^{k}(x), \tag{13}\] \[\mathbf{\Psi}_{2}^{H(2)}(x,y) =\frac{1}{3}\left(\tilde{c}^{i}(x)\,\sigma_{2}\right)(c_{k}(y)\, \sigma_{2}\,\mathbf{\sigma\,q}_{i}(y))\,d_{0}^{k}(x). \tag{14}\]
Being the direct product of a spinor and a vector, they need to be separated into two states with spins \(J=1/2\) and \(J=3/2\). For \(P_{\psi}^{N}(4312)^{+}\) interpreted as a \(J^{P}=3/2^{-}\) pentaquark [9, 10], the decompositions in terms of hadrons are as follows:
\[\mathbf{\Psi}_{1}^{H(3/2)} =\frac{i\sqrt{2}}{3}\left\{b^{\prime}\,\mathbf{J/\psi}-2 ic^{\prime}\left[\mathbf{\sigma\times J/\psi}\right]\right\}p, \tag{15}\] \[\mathbf{\Psi}_{2}^{H(3/2)} =-\frac{i\sqrt{2}}{3}\left\{B^{\prime}\,\bar{\mathbf{D} }^{*0}-2iC^{\prime}\left[\mathbf{\sigma\times\bar{D}^{*0}}\right] \right\}\Lambda_{c}^{+}. \tag{16}\]
So, \(P_{\psi}^{N}(4312)^{+}\) decays mainly either to the \(J/\psi\,p\) final state, in which it was observed, or to \(\Lambda_{c}^{+}\,\bar{D}^{*0}\).
The tunneling amplitude in the leading semiclassical approximation has a similar exponential behavior as for tetraquarks: \({\cal A}_{M}\sim e^{-\sqrt{2ME}\ell}\), where \(E\) and \(\ell\) are the barrier height and extension. For the constituent quark masses, \(m_{u}\) and \(m_{c}\), and keeping the same values as for tetraquarks,
\(E=100\) MeV and \(\ell=2\) fm [11], the ratio of amplitudes squared has the same order of magnitude as (4):
\[R_{\rm penta}=\frac{|b^{\prime}|^{2}+4|c^{\prime}|^{2}}{|B^{\prime}|^{2}+4|C^{ \prime}|^{2}}\sim\left(\frac{{\cal A}_{m_{c}}}{{\cal A}_{m_{u}}}\right)^{2} \sim 10^{-3}\sim R. \tag{17}\]
With the decay momenta \(p_{p}\simeq 660\) MeV and \(p_{\Lambda_{c}}\simeq 200\) MeV, being comparable to each other, one can get the ratio of pentaquark decay widths:
\[\frac{\Gamma(P_{\psi}^{N}(4312)^{+}\to J/\psi\,p)}{\Gamma(P_{\psi}^{N}(4312)^{ +}\to\Lambda_{c}^{+}\,\bar{D}^{*0})}=\frac{p_{p}}{p_{\Lambda_{c}}}\,R_{\rm penta }\sim 10^{-3}. \tag{18}\]
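The decay momenta used above follow from the standard two-body phase-space formula. A short numerical check with rounded PDG masses (our input values) and the order-of-magnitude estimate (17) reads:

```python
import math

def two_body_momentum(M, m1, m2):
    # Momentum of either daughter in the rest frame of a parent of mass M.
    return math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)

# Rounded masses in MeV.
M_P, m_p, m_Jpsi = 4312.0, 938.0, 3097.0
m_Lc, m_Dst = 2286.0, 2007.0

p_p = two_body_momentum(M_P, m_p, m_Jpsi)     # ~ 660 MeV
p_Lc = two_body_momentum(M_P, m_Lc, m_Dst)    # ~ 200 MeV

R_penta = 1.0e-3                              # order-of-magnitude estimate, Eq. (17)
print(p_p, p_Lc, p_p / p_Lc * R_penta)        # width ratio of order 10^-3, Eq. (18)
```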
If this approach is correct, \(P_{\psi}^{N}(4312)^{+}\) should also be searched for in the \(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\,\bar{D}^{*0}\,K^{-}\) decay, with good chances to be observed. This analysis can also be applied to decays of the \(P_{\psi s}^{\Lambda}(4459)^{0}\) pentaquark, which we leave for a future publication.
## 4 Conclusions
The quark-diquark approach used for pentaquarks works quite successfully in predicting the masses of heavy baryons and doubly-heavy exotic hadrons. The decay widths of tetraquarks with hidden charm or bottom can be explained within the quark-diquark model by the presence of a barrier between the heavy diquark and antidiquark. Similarly, the decay widths of pentaquarks with hidden charm or bottom can be explained within the quark-diquark model by the presence of a barrier between the heavy diquark and the heavy triquark. If this approach is correct, the \(P_{\psi}^{N}(4312)^{+}\) pentaquark should also be searched for in the \(\Lambda_{b}^{0}\to\Lambda_{c}^{+}\,\bar{D}^{*0}\,K^{-}\) decay mode, with good chances to be found.
## Acknowledgments
AP would like to thank Prof. Ahmed Ali for useful discussions. A. D. and A. P. are supported by the Russian Science Foundation (Project No. 22-22-00877, [https://rscf.ru/project/22-22-00877/](https://rscf.ru/project/22-22-00877/)). A. Z. is supported by the Russian Foundation for Basic Research (Project No. 20-32-90205).
|
2309.14535 | A relativistic quantum broadcast channel | We investigate the transmission of classical and quantum information between
three observers in a general globally hyperbolic spacetime using a quantum
scalar field as a communication channel. We build a model for a quantum
broadcast channel in which one observer (sender) wishes to transmit (classical
and quantum) information to two other observers (receivers). They possess some
localized two-level quantum system (a qubit) that can interact with the quantum
field in order to prepare an input or receive the output of this channel. The
field is supposed to be in an arbitrary quasifree state, the three observers
may be in arbitrary states of motion, and no choice of representation of the
field canonical commutation relations is made. The interaction of the field and
qubits is such that it allows us to obtain the map that describes this channel
in a non-perturbative manner. We conclude by analyzing the rates at which
information can be transmitted through this channel and by investigating
relativistic causality effects on such rates. | Ian Bernardes Barcellos, André G. S. Landulfo | 2023-09-25T21:20:08Z | http://arxiv.org/abs/2309.14535v2 | # A relativistic quantum broadcast channel
###### Abstract
We investigate the transmission of classical and quantum information between three observers in a general globally hyperbolic spacetime using a quantum scalar field as a communication channel. We build a model for a quantum broadcast channel in which one observer (sender) wishes to transmit (classical and quantum) information to two other observers (receivers). They possess some localized two-level quantum system (a qubit) that can interact with the quantum field in order to prepare an input or receive the output of this channel. The field is supposed to be in an arbitrary quasifree state, the three observers may be in arbitrary states of motion, and no choice of representation of the field canonical commutation relations is made. The interaction of the field and qubits is such that it allows us to obtain the map that describes this channel in a non-perturbative manner. We conclude by analyzing the rates at which information can be transmitted through this channel and by investigating relativistic causality effects on such rates.
pacs: 03.67.-a,03.67.Hk, 04.62.+v
## I Introduction
Network information theory is the area of knowledge that studies classical communication problems involving multiple parts. Here, the word "classical" stands not only for the fact that the information being transmitted is classical (bits) but also for the physical systems in which such information is encoded, i.e., systems that can be described by some area of classical physics (such as Electromagnetism). One particular case of interest is the broadcast channel, where typically one sender wishes to transmit information to multiple receivers (like radio and TV stations broadcasting their signals, for example).
Nowadays, one of the main goals of quantum information theory is to extend several results of information theory to the quantum world [1; 2], investigating any new features or advantages that can arise when one uses quantum systems to encode, process, and transmit information. The quantum network information theory comprises the studies of communication protocols using quantum systems to convey classical (bits) or quantum (qubits) information. In particular, the classical broadcast channels can be extended to the so-called _quantum broadcast channels_, where one sender transmits classical or quantum input information to many receivers using a quantum system as a communication channel with quantum outputs [3; 4; 5].
Such communication scenarios are very suitable for analyzing how relativistic effects can influence one's ability to communicate using quantum channels. This could be due to the existence of nontrivial spacetime structures such as black hole event horizons, Cauchy horizons, and causal horizons arising from the relativistic relative motion between senders and receivers or even due to the expansion of spacetime [6].
In order to consistently analyze quantum information theory in general spacetimes, one should use quantum field theory in curved spacetimes (QFTCS) [7]. This approach was used by several authors to analyze the communication process in relativistic settings, with particular attention being paid to Minkowski [8; 9; 10; 20], Schwarzschild [21; 22; 23; 24], or asymptotically flat cosmological spacetimes [25; 26; 27]. However, only recently [28] a communication model valid in general globally hyperbolic spacetimes and in which the parts that convey information can move in arbitrary worldlines and interact with the quantum field (used as communication channel) only in the vicinity of its worldlines was developed (and, since then, other works in this context have emerged as, for instance, Ref. [29]). This is interesting for two reasons: firstly, it allows the analysis of information exchange between more general observers, not only observers following orbits of some Killing field (which does not even exist in spacetimes lacking timelike symmetries). Secondly, the model studied in [28] allows one to investigate the outputs of the quantum communication in a nonperturbative manner and thereby is suitable to investigate both the causality as well as the communication between parts lying in early and future asymptotic regions (limits that would invalidate results obtained by perturbative methods).
In the present paper, we generalize the analysis of [28]. This is done by constructing a model for a classical-quantum as well as entanglement-assisted classical-quantum and quantum-quantum broadcast channels. We consider an arbitrary globally hyperbolic spacetime in which one observer (Alice) wants to convey classical (or quantum) information to two receivers (Bob and Charlie) using a quantum scalar field as a communication channel. The three observers will use two-level quantum systems (qubits) to locally interact with the quantum field in or
der to send or receive information. The observers may be in arbitrary states of motion, the interaction between the detectors and the field is similar to the one given by the Unruh-DeWitt model [30], and the field may initially be in an arbitrary quasifree state [7]. We suppose, however, that the two levels of each qubit have the same energy. This model is interesting because the evolution of the system can be computed exactly, and therefore we will obtain nonperturbative results for the communication rates associated with such a broadcast channel. As we will see, causality in the information exchange is explicitly manifest in our results.
This work is organized as follows. In Sec. II we will present the quantization procedure of a free scalar field on a globally hyperbolic spacetime as well as the class of states we will be using. In Sec. III we describe the interaction between the qubits and the field and determine the quantum map that relates the information Alice wants to convey to the final joint state of Bob's and Charlie's qubits. In Sec. IV we investigate the rates at which information can be transmitted using this broadcast channel, as well as the influence of the spacetime curvature or relative motion of observers in the communication process. In Sec. V we give our final remarks. We assume metric signature \((-+++)\) and natural units in which \(c=\hbar=G=k_{B}=1\), unless stated otherwise.
## II Field Quantization
Let us consider a free, real scalar field \(\phi\) propagating in an arbitrary four-dimensional globally hyperbolic spacetime \((\mathcal{M},g)\), where \(\mathcal{M}\) denotes the four-dimensional spacetime manifold and \(g\) its Lorentzian metric. Let the spacetime be foliated by Cauchy surfaces \(\Sigma_{t}\) labeled by the real parameter \(t\). The field is described by the action
\[S\equiv-\frac{1}{2}\int_{\mathcal{M}}\epsilon_{\mathcal{M}}\left(\nabla_{a} \phi\nabla^{a}\phi+m^{2}\phi^{2}+\xi R\phi^{2}\right), \tag{1}\]
where \(\epsilon_{\mathcal{M}}=\sqrt{-\mathfrak{g}}dx^{0}\wedge\cdots\wedge dx^{3}\) is the spacetime volume 4-form, \(m\) is the field mass, \(\xi\in\mathbb{R}\), \(R\) is the scalar curvature, \(\nabla_{a}\) is the torsion-free covariant derivative compatible with the metric \(g\), and \(\mathfrak{g}\equiv\det(g_{\mu\nu})\) in some arbitrary coordinate system \(\{x^{\mu}\}\). The extremization of the action (1) gives rise to the Klein-Gordon equation
\[(-\nabla^{a}\nabla_{a}+m^{2}+\xi R)\phi=0. \tag{2}\]
In the canonical quantization procedure, we promote the real field \(\phi\) to an operator1 that satisfies the "equal-time" canonical commutation relations (CCR)
Footnote 1: Rigorously, an operator-valued distribution.
\[[\phi(t,\mathbf{x}),\phi(t,\mathbf{x}^{\prime})]_{\Sigma_{t}}=[\pi(t, \mathbf{x}),\pi(t,\mathbf{x}^{\prime})]_{\Sigma_{t}}=0, \tag{3}\]
\[[\phi(t,\mathbf{x}),\pi(t,\mathbf{x}^{\prime})]_{\Sigma_{t}}=i\delta^{3}( \mathbf{x},\mathbf{x}^{\prime}), \tag{4}\]
where \(\mathbf{x}\equiv(x^{1},x^{2},x^{3})\) are spatial coordinates on \(\Sigma_{t}\) and \(\pi(x)\) is the conjugate momentum defined as
\[\pi\equiv\frac{\delta S}{\delta\dot{\phi}}\,, \tag{5}\]
with the notation \(\dot{\phi}\equiv\partial_{t}\phi\). In addition, we may formally write the canonical Hamiltonian of the field as
\[H_{\phi}(t)\equiv\int_{\Sigma_{t}}d^{3}\mathbf{x}\,\left(\pi(t,\mathbf{x}) \dot{\phi}(t,\mathbf{x})-\mathcal{L}[\phi,\nabla_{a}\phi]\right), \tag{6}\]
with
\[d^{3}\mathbf{x}\equiv dx^{1}\wedge dx^{2}\wedge dx^{3} \tag{7}\]
and
\[\mathcal{L}[\phi,\nabla_{a}\phi]\equiv-\frac{1}{2}\sqrt{-\mathfrak{g}}\left( \nabla_{a}\phi\nabla^{a}\phi+m^{2}\phi^{2}+\xi R\phi^{2}\right) \tag{8}\]
being the Lagrangian density.
To find a representation of the CCR, Eqs. (3) and (4), we define an antisymmetric bilinear map \(\sigma\) acting on the space \(\mathcal{S}^{\mathbb{C}}\) of complex solutions of Eq. (2) as
\[\sigma(\psi_{1},\psi_{2})\equiv\int_{\Sigma_{t}}\epsilon_{\Sigma}\,n^{a}\left[ \psi_{2}\nabla_{a}\psi_{1}-\psi_{1}\nabla_{a}\psi_{2}\right], \tag{9}\]
where \(\epsilon_{\Sigma}\) represents the proper-volume 3-form on the Cauchy surface \(\Sigma_{t}\) and \(n^{a}\) its future-directed normal unit vector. It allows us to define the Klein-Gordon product as
\[\langle\psi_{1},\psi_{2}\rangle\equiv-i\,\sigma(\overline{\psi}_{1},\psi_{2}), \tag{10}\]
and, although this product is not positive definite on \(\mathcal{S}^{\mathbb{C}}\), we may choose any subspace \(\mathcal{H}\subset\mathcal{S}^{\mathbb{C}}\) (the so-called _one-particle Hilbert space_) such that: **(i)**\(\mathcal{S}^{\mathbb{C}}\simeq\mathcal{H}\oplus\overline{\mathcal{H}}\);2**(ii)** the Klein-Gordon product is positive definite on \(\mathcal{H}\), thus making \((\mathcal{H},\langle\cdot,\cdot\rangle)\) a Hilbert space;3**(iii)** given any \(u\in\mathcal{H}\) and \(v\in\overline{\mathcal{H}}\), \(\langle u,v\rangle=0\). (See [7] for details.) The Hilbert space that comprises the field states is defined as the symmetric Fock space \(\mathfrak{F}_{s}(\mathcal{H})\) and the quantum field operator is formally defined as
Footnote 2: For the sake of mathematical precision, we note that one must first suitably Cauchy-complete \(\mathcal{S}^{\mathbb{C}}\) for this decomposition to be valid.
Footnote 3: After its completion with respect to the norm induced by \(\langle\cdot,\cdot\rangle\).
\[\phi(t,\mathbf{x})\equiv\sum_{j}\left[u_{j}(t,\mathbf{x})a(\overline{u}_{j})+ \overline{u}_{j}(t,\mathbf{x})a^{\dagger}(u_{j})\right], \tag{11}\]
where \(\{u_{j}\}\) comprise an orthonormal basis for \(\mathcal{H}\) and \(a(\overline{u})/a^{\dagger}(v)\) are the usual annihilation/creation operators associated with the modes \(u/v\), respectively. They satisfy the commutation relations
\[\left[a(\overline{u}),a^{\dagger}(v)\right]=\langle u,v\rangle I, \tag{12}\]
with \(I\) being the identity operator on \(\mathfrak{F}_{s}(\mathcal{H})\). The vacuum state associated with this representation of the CCR is the normalized vector \(|0\rangle\) that satisfies \(a(\overline{u})|0\rangle=0\) for every mode \(u\in\mathcal{H}\).
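As a standard illustration of this construction (the familiar flat-spacetime case, recalled here only for concreteness), take \((\mathcal{M},g)\) to be Minkowski spacetime foliated by constant-\(t\) surfaces, with \(\xi R=0\). A natural choice of \(\mathcal{H}\) is the span of the positive-frequency plane waves
\[u_{\mathbf{k}}(t,\mathbf{x})=\frac{e^{-i\omega_{\mathbf{k}}t+i\mathbf{k}\cdot\mathbf{x}}}{\sqrt{(2\pi)^{3}2\omega_{\mathbf{k}}}},\qquad\omega_{\mathbf{k}}=\sqrt{|\mathbf{k}|^{2}+m^{2}},\]
which satisfy \(\langle u_{\mathbf{k}},u_{\mathbf{k}^{\prime}}\rangle=\delta^{3}(\mathbf{k}-\mathbf{k}^{\prime})\) and \(\langle u_{\mathbf{k}},\overline{u}_{\mathbf{k}^{\prime}}\rangle=0\), so that properties **(i)**-**(iii)** hold and the associated vacuum \(|0\rangle\) is the usual Minkowski vacuum. In a general curved spacetime no such preferred choice exists, which motivates the algebraic viewpoint recalled below.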
In order to make it mathematically well-defined, the quantum field operator must be defined as an operator-valued distribution. To this end, let \(\mathcal{S}\subset\mathcal{S}^{\mathbb{C}}\) be the space of real solutions of Eq. (2) whose restrictions to Cauchy surfaces have compact support and \(K:\mathcal{S}\rightarrow\mathcal{H}\) be the projection operator that takes the positive-norm part of any \(\psi\in\mathcal{S}\). If \(C_{0}^{\infty}(\mathcal{M})\) denotes the set of all smooth compactly-supported real functions on \(\mathcal{M}\), we define the map \(E:C_{0}^{\infty}(\mathcal{M})\rightarrow\mathcal{S}\) acting on some _test function_\(f\in C_{0}^{\infty}(\mathcal{M})\) as
\[Ef(x)\equiv Af(x)-Rf(x), \tag{13}\]
where \(Af\) and \(Rf\) are the advanced and retarded solutions of the Klein-Gordon equation with source \(f\), respectively. Hence, they satisfy
\[P(Af)=P(Rf)=f, \tag{14}\]
with \(P\equiv-\nabla^{a}\nabla_{a}+m^{2}+\xi R\) representing the Klein-Gordon differential operator.
Now, for each test function \(f\in C_{0}^{\infty}(\mathcal{M})\), we define a _smeared quantum field operator_ by
\[\phi(f)\equiv i\left[a(\overline{KEf})-a^{\dagger}(KEf)\right], \tag{15}\]
which satisfies the covariant version of the CCR,
\[[\phi(f_{1}),\phi(f_{2})]=-i\Delta(f_{1},f_{2})I, \tag{16}\]
where
\[\Delta(f_{1},f_{2})\equiv\int_{\mathcal{M}}\epsilon_{\mathcal{M}}f_{1}(x) Ef_{2}(x) \tag{17}\]
for all \(f_{1},f_{2}\in C_{0}^{\infty}(\mathcal{M})\). As shown in [7], Eq. (15) can be obtained by formally integrating Eq. (11) weighted by the test function \(f\), i.e.,
\[\phi(f)=\int_{\mathcal{M}}\epsilon_{\mathcal{M}}\,\phi(x)f(x). \tag{18}\]
The above construction has the downside that there are infinitely many choices of \(\mathcal{H}\) satisfying properties **(i)**-**(iii)** listed below Eq. (10) and their respective Fock spaces are, in general, unitarily inequivalent. As discussed in [28], this issue can be avoided through the algebraic approach to quantum field theory (QFT). For more details, see Refs. [7; 31].
In this work, we will focus on a particular class of states: the _quasifree states_, defined as follows. Given a real inner product \(\mu:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}\) satisfying
\[|\sigma(\varphi_{1},\varphi_{2})|^{2}\leq 4\mu(\varphi_{1},\varphi_{1})\mu( \varphi_{2},\varphi_{2}), \tag{19}\]
for all \(\varphi_{1},\varphi_{2}\in\mathcal{S}\), we define a quasifree state \(\omega_{\mu}\) associated with \(\mu\) by the relation
\[\omega_{\mu}\left[W(Ef)\right]\equiv e^{-\mu(Ef,Ef)/2}, \tag{20}\]
for all \(f\in C_{0}^{\infty}(\mathcal{M})\), where the so-called _Weyl operators_\(W(Ef)\) are defined by
\[W(Ef)\equiv e^{i\phi(f)}\,,\;f\in C_{0}^{\infty}(\mathcal{M}). \tag{21}\]
The vacuum, n-particle, and thermal states are examples of quasifree states.
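As a simple example (a standard fact recalled here for later convenience, using the conventions of Eqs. (12), (15), and (20)), consider the vacuum \(|0\rangle\) associated with a given choice of \(\mathcal{H}\). Since \(\langle 0|\phi(f)^{2}|0\rangle=\langle KEf,KEf\rangle\) by Eqs. (12) and (15), the Gaussian character of the vacuum gives
\[\langle 0|W(Ef)|0\rangle=e^{-\langle KEf,KEf\rangle/2},\]
i.e., the vacuum is the quasifree state with \(\mu(Ef,Ef)=\|KEf\|^{2}\). For this state, the quantity \(\nu_{B}\) defined in Eq. (49) below reduces to \(\nu_{B}=e^{-2\|KEf_{B}\|^{2}}\), so it is controlled by the norm of the positive-frequency part of the solution generated by Bob's smearing function.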
## III The quantum broadcast channel
A typical broadcast communication scenario involves the transmission of information between one station (sender) and several receivers who will decode the information independently. Let us consider a model in which one observer, Alice, wants to transmit separate information to two other observers, Bob and Charlie, using the quantum field \(\phi\) as a broadcast channel. Suppose that the field is initially in some quasifree state \(\omega_{\mu}\)4. Suppose also that the three observers follow arbitrary trajectories in the curved spacetime and that each one of them possesses a two-level quantum system that may interact with the quantum field at their will. The two-dimensional Hilbert spaces associated with Alice's, Bob's, and Charlie's qubits are denoted by \(\mathcal{H}_{A}\), \(\mathcal{H}_{B}\), and \(\mathcal{H}_{C}\), respectively.
Footnote 4: We note, however, that the results from this section apply to any algebraic state \(\omega\) which satisfies \(\omega\left[W(Ef)\right]\in\mathbb{R}^{+}\) for all \(f\in C_{0}^{\infty}(\mathcal{M})\).
Figure 1: The Figure depicts the quantum broadcast protocol being used. The dashed lines display the worldlines of the sender, Alice (A, red), and receivers, Bob and Charlie (B and C, blue). The solid lines in each worldline depict the interaction interval of each observer's qubit with the quantum field. Here, \(\Sigma_{t_{1}}\) and \(\Sigma_{t_{2}}\) represent two Cauchy surfaces of the spacetime.

The communication setup, illustrated by Fig. 1, is as follows: In order to transmit information to Bob and Charlie, Alice prepares her qubit in some initial quantum state \(\rho_{-\infty}^{A}\) and switches on its interaction with the field for a finite time interval \(\Delta t_{A}\) (measured by the parameter \(t\)). To measure the information imprinted by Alice on the field's state, Bob and Charlie initially prepare their qubits in suitable states \(\rho_{-\infty}^{B}\) and \(\rho_{-\infty}^{C}\) and then switch on their qubits' interaction with the field for finite time intervals \(\Delta t_{B}\) and \(\Delta t_{C}\), respectively. For the sake of simplicity, we will consider here the case where
* **(QB1)** Bob lets his qubit interact with the field only after Alice finishes her transmission;
* **(QB2)** Charlie lets his qubit interact with the field only after Bob finishes his measurement process.
Such communication setup is implemented by means of the Hamiltonian
\[H(t)\equiv H_{\phi}(t)+H_{\rm int}(t), \tag{22}\]
where \(H_{\phi}\) is the field Hamiltonian in Eq. (6) and \(H_{\rm int}\) is the Hamiltonian that describes the interaction between each qubit and the field which, in the interaction picture, is given by
\[H_{\rm int}^{\rm I}(t)\equiv\sum_{j}\epsilon_{j}(t)\int_{\Sigma_{t}}d^{3}{\bf x }\sqrt{-{\bf g}}\;\psi_{j}(t,{\bf x})\phi(t,{\bf x})\otimes\sigma_{j}^{\rm z}, \tag{23}\]
where \(j\in\{A,B,C\}\), with \(A\), \(B\), and \(C\) labeling Alice's, Bob's, and Charlie's qubit, respectively. Here, \(\sigma_{j}^{\rm z}\) is one of the Pauli matrices \(\left\{\sigma_{j}^{\rm x},\sigma_{j}^{\rm y},\sigma_{j}^{\rm z}\right\}\) associated with qubit \(j\); \(\psi_{j}(t,{\bf x})\) is a smooth real function satisfying \(\psi_{j}|_{\Sigma_{t}}\in C_{0}^{\infty}\left(\Sigma_{t}\right)\) for all \(t\), which models the finite range of interaction between qubit \(j\) and the field (i.e., the interaction occurs only at some vicinity of each qubit worldline); and \(\epsilon_{j}(t)\) is a smooth and compactly-supported real _coupling function_ modeling the finite-time coupling of qubit \(j\) with the field. Each coupling function has support
\[{\rm supp}\;\epsilon_{j}=\left[T_{j}^{i},T_{j}^{f}\right], \tag{24}\]
where \(T_{j}^{i}\) and \(T_{j}^{f}\) represent the time (with respect to the parameter \(t\)) in which each qubit interaction with the field is switched-on and -off, respectively. Here, we denote \(\Delta t_{j}\equiv T_{j}^{f}-T_{j}^{i}\). Thus, the hypotheses **(QB1)** and **(QB2)** previously listed can be summarized as
\[T_{C}^{i}\geq T_{B}^{f}\geq T_{B}^{i}\geq T_{A}^{f}. \tag{25}\]
The interaction between each qubit and the field given by Eq. (23) is very similar to the Unruh-DeWitt model [30]. However, we assumed that the two levels of each qubit have the same (zero) energy. As we shall see, this assumption allows us to calculate the evolution operator of the system and trace out the field degrees of freedom in a nonperturbative manner, thus making this model interesting to investigate both the causality in the information exchange process as well as the communication between parts lying in early and future asymptotic spacetime regions. We note that one could also have given an energy gap \(2\,\delta_{j}\) for each qubit \(j\) in \(z\)-direction by adding \(H_{j}=\delta_{j}\sigma_{j}^{\rm z}\) to the total Hamiltonian in Eq. (22) and still keep the model exactly solvable. This would change it to
\[H=H_{\phi}+H_{A}+H_{B}+H_{C}+H_{\rm int}, \tag{26}\]
but would keep the interaction Hamiltonian in the interaction picture, Eq. (23), unchanged. Hence, all the results we will describe below would remain the same.
The interaction-picture time-evolution operator at late times, associated with the foliation \(\Sigma_{t}\), can be written as the time-ordered expression
\[U\equiv T\exp\left[-i\int_{-\infty}^{\infty}dt\,H_{\rm int}^{\rm I}(t)\right]. \tag{27}\]
It can be computed nonperturbatively by using the Magnus expansion [32]
\[U=\exp\left[\sum_{n=1}^{\infty}\Omega_{n}\right]\!, \tag{28}\]
where
\[\Omega_{1}=-i\int_{-\infty}^{\infty}dt\,H_{\rm int}^{\rm I}(t)\;, \tag{29}\]
\[\Omega_{2}=-\frac{1}{2}\int_{-\infty}^{\infty}\!dt\int_{-\infty}^{t}dt^{\prime }[H_{\rm int}^{\rm I}(t)\,,\,H_{\rm int}^{\rm I}(t^{\prime})], \tag{30}\]
\[\Omega_{3}=\frac{i}{6}\int_{-\infty}^{\infty}\!\!dt\int_{-\infty}^{t}\!\!dt^{\prime}\int_{-\infty}^{t^{\prime}}\!\!dt^{\prime\prime}\left([H_{\rm int}^{\rm I}(t),[H_{\rm int}^{\rm I}(t^{\prime}),H_{\rm int}^{\rm I}(t^{\prime\prime})]]+[H_{\rm int}^{\rm I}(t^{\prime\prime}),[H_{\rm int}^{\rm I}(t^{\prime}),H_{\rm int}^{\rm I}(t)]]\right), \tag{31}\]
and so on. By using Eqs. (18), (23), and (29), we get
\[\Omega_{1}=-i\sum_{j}\phi(f_{j})\otimes\sigma_{j}^{\rm z}, \tag{32}\]
where we have defined
\[f_{j}(t,{\bf x})\equiv\epsilon_{j}(t)\psi_{j}(t,{\bf x}). \tag{33}\]
Now, by making use of Eqs. (18) and (23) together with Eqs. (16), (25), and (30) we can cast \(\Omega_{2}\) as
\[\Omega_{2}= i\Xi I-\frac{i}{2}\Delta(f_{A},f_{B})\sigma_{A}^{\rm z}\otimes \sigma_{B}^{\rm z}-\frac{i}{2}\Delta(f_{A},f_{C})\sigma_{A}^{\rm z}\otimes\sigma _{C}^{\rm z}\] \[-\frac{i}{2}\Delta(f_{B},f_{C})\sigma_{B}^{\rm z}\otimes\sigma_{C}^ {\rm z}, \tag{34}\]
where \(\Xi\) is the c-number
\[\Xi\equiv\frac{1}{2}\sum_{j}\int_{-\infty}^{\infty}dt\;\epsilon_{j}(t)\int_{- \infty}^{t}\;dt^{\prime}\epsilon_{j}(t^{\prime})\Delta_{j}(t,t^{\prime}),\]
with
\[\Delta_{j}(t,t^{\prime})\equiv\int_{\Sigma_{t}}d^{3}{\bf x}\sqrt{-{\bf g}}\int_{\Sigma_{t^{\prime}}}d^{3}{\bf x}^{\prime}\sqrt{-{\bf g}^{\prime}}\,\psi_{j}(t,{\bf x})\Delta(x,x^{\prime})\psi_{j}(t^{\prime},{\bf x}^{\prime}),\]
and we recall that \([\phi(x),\phi(x^{\prime})]\equiv-i\Delta(x,x^{\prime})I\) is the unsmeared version of Eq. (16). Finally, since \([H^{\rm I}_{\rm int}(t),H^{\rm I}_{\rm int}(t^{\prime})]\) is proportional to the identity, we get
\[\Omega_{k}=0\ \ \mbox{for}\ k\geq 3. \tag{35}\]
Using the Zassenhaus formula
\[e^{A+B}=e^{A}e^{B}e^{-\frac{1}{2}[A,B]}, \tag{36}\]
valid whenever \([A,B]\) is proportional to the identity, together with Eqs. (28), (32), (34), and (35) we obtain the following unitary evolution operator:
\[U=e^{i\Xi}e^{-i\phi(f_{C})\otimes\sigma_{C}^{\rm z}}e^{-i\phi(f_{B})\otimes\sigma_{B}^{\rm z}}e^{-i\phi(f_{A})\otimes\sigma_{A}^{\rm z}}. \tag{37}\]
Now that we have the exact evolution operator \(U\), we can use it to evolve the initial state of the 3 qubit + field system and then trace out the field and Alice's qubit degrees of freedom. This procedure allows us to obtain the final state of Bob's and Charlie's qubits after the communication protocol has ended. This is the state that they will measure to recover the information that Alice has sent. Explicitly, the final Bob+Charlie state is given by
\[\rho^{BC}\equiv\mathrm{tr}_{\phi,A}\left(U\rho^{A}_{-\infty}\otimes\rho^{B}_{ -\infty}\otimes\rho^{C}_{-\infty}\otimes\rho_{\omega}U^{\dagger}\right), \tag{38}\]
where \(\rho^{j}_{-\infty}\) and \(\rho_{\omega}\) are the initial states of qubit \(j\) and the field, respectively.
To compute the trace in Eq. (38), let us cast the operators in Eq. (37) as
\[e^{-i\phi(f_{j})\otimes\sigma_{j}^{\rm z}}=\cos\left[\phi(f_{j})\right]-i\sin\left[\phi(f_{j})\right]\otimes\sigma_{j}^{\rm z}, \tag{39}\]
where
\[\cos\left[\phi(f_{j})\right]\equiv\frac{1}{2}\left[W(Ef_{j})+W(-Ef_{j})\right] \tag{40}\]
and
\[\sin\left[\phi(f_{j})\right]\equiv\frac{1}{2i}\left[W(Ef_{j})-W(-Ef_{j})\right], \tag{41}\]
where \(W(Ef)\) is defined in Eq. (21). By plugging Eqs. (37) and (39) into Eq. (38) and then taking the partial traces on \(\phi\) and \(A\), a direct calculation yields
\[\rho^{BC} =(\Gamma_{ccccc}+\Gamma_{ccccc})\rho^{BC}_{-\infty}\] \[+(\Gamma_{ccccc}+\Gamma_{ssccs})\sigma^{x}_{B}\rho^{BC}_{-\infty }\sigma^{x}_{B}\] \[+(\Gamma_{ccsscc}+\Gamma_{ssssss})\sigma^{x}_{C}\rho^{BC}_{- \infty}\sigma^{x}_{B}\] \[+(\Gamma_{csssc}+\Gamma_{ssssss})\sigma^{x}_{B}\otimes\sigma^{x} _{C}\rho^{BC}_{-\infty}\sigma^{x}_{B}\otimes\sigma^{x}_{C}\] \[+[(\Gamma_{ccccscc}+\Gamma_{ssssccs})\sigma^{x}_{B}\rho^{BC}_{- \infty}\sigma^{x}_{C}+\mathrm{h.c.}]\] \[-[(\Gamma_{csscc}+\Gamma_{ssssccc})\rho^{BC}_{-\infty}\sigma^{x} _{B}\otimes\sigma^{x}_{C}+\mathrm{h.c.}] \tag{42}\] \[+[(\Gamma_{ccccs}-\Gamma_{ssccccc})(\sigma^{x}_{A})\rho^{BC}_{- \infty}\rho^{BC}_{-\infty}\sigma^{x}_{B}+\mathrm{h.c.}]\] \[+[(\Gamma_{ccccsccs}-\Gamma_{sssscc})(\sigma^{x}_{A})\rho^{BC}_{- \infty}\rho^{BC}_{-\infty}\sigma^{x}_{B}+\mathrm{h.c.}]\] \[+[(\Gamma_{csscss}-\Gamma_{sssscc})(\sigma^{x}_{A})\rho^{BC}_{- \infty}\sigma^{x}_{B}\rho^{BC}_{-\infty}\sigma^{x}_{B}\otimes\sigma^{x}_{C}+ \mathrm{h.c.}]\] \[+[(\Gamma_{cssscs}-\Gamma_{sssscc})(\sigma^{x}_{A})\rho^{AC}_{- \infty}\sigma^{x}_{B}\rho^{BC}_{-\infty}\sigma^{x}_{B}\otimes\sigma^{x}_{C}+ \mathrm{h.c.}]\] \[+[(\Gamma_{cssscs}-\Gamma_{sssscc})(\sigma^{x}_{A})\rho^{AC}_{- \infty}\sigma^{x}_{C}\rho^{BC}_{-\infty}\sigma^{x}_{B}\otimes\sigma^{x}_{C}+ \mathrm{h.c.}],\]
where h.c. stands for Hermitian conjugation, and we have defined
\[\rho^{BC}_{-\infty}\equiv\rho^{B}_{-\infty}\otimes\rho^{C}_{-\infty}, \tag{43}\]
\[\langle\sigma^{\rm z}_{A}\rangle_{\rho^{A}_{-\infty}}\equiv\mathrm{tr}\left(\sigma^{\rm z}_{A}\rho^{A}_{-\infty}\right), \tag{44}\]
and
\[\Gamma_{\alpha\beta\gamma\delta\epsilon\zeta}\equiv\omega_{\mu} \big{(}\mathcal{F}_{\alpha}[\phi(f_{A})]\mathcal{F}_{\beta}[\phi(f _{B})]\mathcal{F}_{\gamma}[\phi(f_{C})]\] \[\times\mathcal{F}_{\delta}[\phi(f_{C})]\mathcal{F}_{\epsilon}[ \phi(f_{B})]\mathcal{F}_{\zeta}[\phi(f_{A})]\big{)}, \tag{45}\]
with \(\alpha,\beta,\gamma,\delta,\epsilon,\zeta\in\{c,s\}\), \(\mathcal{F}_{c}(x)\equiv\cos x\), and \(\mathcal{F}_{s}(x)\equiv\sin x\). We note that we have written the algebraic field state \(\omega_{\mu}\) as a density matrix with \(\mathrm{tr}\left[\rho_{\omega}W(Ef)\right]\equiv\omega_{\mu}\left[W(Ef)\right]\). Furthermore, we have used the fact that the expected value of odd functions of the field operator vanishes since we are assuming that \(\omega_{\mu}\) is a quasifree state (a consequence of Wick's theorem).
Now, each \(\Gamma_{\alpha\beta\gamma\delta\epsilon\zeta}\) in Eq. (42) can be evaluated by substituting Eqs. (40) and (41) in Eq. (45) and then using the identity
\[W(Ef_{1})W(Ef_{2})=e^{\frac{i}{2}\Delta(f_{1},f_{2})}W[E(f_{1}+f_{2})], \tag{46}\]
for all \(f,f_{1},f_{2}\in C^{\infty}_{0}(\mathcal{M})\), to simplify the product of the Weyl operators. By substituting these coefficients in Eq.(42) one finds the explicit form of the state \(\rho^{BC}\), which is given in Eq. (101) of Appendix A. The expression in Eq. (101) allows one to write the final joint state for Bob's and Charlie's qubits given any initial state configuration for the 3 qubits+field.
To define a quantum broadcast channel, we must choose suitable initial states for Bob and Charlie qubits in order to obtain a quantum map relating the initial state of Alice's qubit \(\rho^{A}_{-\infty}\) (which encodes the messages) to the final states that will be probed by them (to decode the messages). Since Bob only performs measurements in his own two-level system, we calculate the expression for the reduced state of his qubit, i.e.,
\[\rho^{B}\equiv\mathrm{tr}_{C}\left(\rho^{BC}\right). \tag{47}\]
Taking the trace in Eq. (101) relative to Charlie's degrees of freedom, we obtain
\[\rho^{B} =\frac{1}{2}\left(1+\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]\right)\rho^{B}_{-\infty}\] \[+\frac{1}{2}\left(1-\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]\right)\sigma^{\rm z}_{B}\rho^{B}_{-\infty}\sigma^{\rm z}_{B} \tag{48}\] \[+\frac{i}{2}\nu_{B}\sin\left[2\Delta(f_{A},f_{B})\right]\langle\sigma^{\rm z}_{A}\rangle_{\rho^{A}_{-\infty}}\left[\rho^{B}_{-\infty},\sigma^{\rm z}_{B}\right],\]
where
\[\nu_{B}\equiv\omega_{\mu}\left(W[E(2f_{B})]\right)=e^{-2\mu(KEf_{B},KEf_{B})}, \tag{49}\]
with \(\mu\) being the inner product associated with the field quasifree state \(\omega_{\mu}\) as in Eq. (20). Note that it is the last
term in Eq. (48) that contains the information encoded by Alice, and thus it will be useless for Bob to choose the eigenstates \(|0\rangle_{B}\) and \(|1\rangle_{B}\) of \(\sigma_{B}^{\rm z}\) as his initial state \(\rho_{-\infty}^{B}\), since this term would vanish. Furthermore, since \(\sigma_{B}^{\rm z}\) commutes with the interaction Hamiltonian, he won't recover any information either if he performs projective measurements on this basis. To choose a suitable state \(\rho_{-\infty}^{B}\) that maximizes the chances of success in their communication, suppose for simplicity that Alice encodes a pair of messages in states \(\rho_{-\infty+}^{A}\) and \(\rho_{-\infty-}^{A}\), which will be decoded by Bob using a set of projective measurements in the \(x\)-direction,
\[\{F_{+}^{B}\equiv|+\rangle_{BB}\langle+|\,,\;F_{-}^{B}\equiv|-\rangle_{BB}\langle-|\}, \tag{50}\]
where \(\sigma_{B}^{x}|\pm\rangle_{B}=\pm|\pm\rangle_{B}\). From Eq. (48), we conclude that the probability that Bob measures \(l=\pm\) given that Alice has encoded the message \(k=\pm\) in \(\rho_{-\infty k}^{A}\) is
\[p(l|k)\equiv\mathrm{tr}\left(F_{l}^{B}\rho_{k}^{B}\right)=\frac{1}{2}(1+l\nu_{ B}\Lambda_{k}), \tag{51}\]
where
\[\Lambda_{k}\equiv 2\mathfrak{R}\left\{\beta_{B}\left(\cos[2\Delta(f_{A},f_{B})]-i\langle\sigma_{A}^{\rm z}\rangle_{\rho_{-\infty k}^{A}}\sin[2\Delta(f_{A},f_{B})]\right)\right\}\]
and \(\beta_{B}\equiv{}_{B}\langle 0|\rho_{-\infty}^{B}|1\rangle_{B}\). From these two equations, we see that it is the second term \(\Lambda_{k}\) that contains the information encoded by Alice on her qubit state, and thus we are motivated to choose a state \(\rho_{-\infty}^{B}\) that makes \(\beta_{B}\) a pure imaginary number, which will make the first term of \(\Lambda_{k}\) vanish while maximizing the amplitude of the second term. This motivates us to choose
\[\rho_{-\infty}^{B}\equiv|y_{+}\rangle_{BB}\langle y_{+}|, \tag{52}\]
where
\[|y_{+}\rangle_{B}\equiv\frac{1}{\sqrt{2}}\left(|0\rangle_{B}+i|1\rangle_{B}\right) \tag{53}\]
is an eigenvector of \(\sigma_{B}^{y}\) with eigenvalue \(+1\) (in this case, \(\beta_{B}=-i/2\)). With this choice, we can write Eq. (51) as
\[p(l|k)=\frac{1}{2}\left(1-l\,\nu_{B}\langle\sigma_{A}^{\rm z}\rangle_{\rho_{-\infty k}^{A}}\sin[2\Delta(f_{A},f_{B})]\right). \tag{54}\]
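As a quick numerical sanity check of Eqs. (48) and (54) (an illustration added here; the values chosen for \(\nu_{B}\) and \(\Delta(f_{A},f_{B})\) are arbitrary placeholders, since the true values depend on the field state, the trajectories, and the switching functions), one can build \(\rho^{B}\) explicitly and verify that the \(x\)-direction measurement statistics reproduce the closed form above. A minimal sketch in Python:

```python
import numpy as np

# Pauli matrix sigma_z, x-basis projectors, and Bob's initial state |y_+>
sz = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
F = {+1: np.outer(plus, plus.conj()), -1: np.outer(minus, minus.conj())}
y_plus = np.array([1, 1j], dtype=complex) / np.sqrt(2)
rho_B0 = np.outer(y_plus, y_plus.conj())

# Placeholder values for the channel parameters (see the caveat above)
nu_B, D_AB = 0.8, 0.3

def bob_state(mean_sz_A):
    """rho^B of Eq. (48) for a given <sigma_A^z> encoded by Alice."""
    c, s = np.cos(2 * D_AB), np.sin(2 * D_AB)
    comm = rho_B0 @ sz - sz @ rho_B0
    return (0.5 * (1 + nu_B * c) * rho_B0
            + 0.5 * (1 - nu_B * c) * sz @ rho_B0 @ sz
            + 0.5j * nu_B * s * mean_sz_A * comm)

for k in (+1, -1):        # Alice encodes <sigma_A^z> = k
    for l in (+1, -1):    # Bob projects onto the sigma_x eigenstate labeled l
        p_num = np.trace(F[l] @ bob_state(k)).real
        p_eq54 = 0.5 * (1 - l * nu_B * k * np.sin(2 * D_AB))
        print(k, l, round(p_num, 6), round(p_eq54, 6))  # the two columns agree
```

The resulting one-shot error probability is \(\tfrac{1}{2}\left(1-\nu_{B}|\sin[2\Delta(f_{A},f_{B})]|\right)\), which goes to \(1/2\) (no information) when \(\Delta(f_{A},f_{B})=0\), in accordance with causality.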
Now we turn our attention to Charlie. The final reduced state for his qubit is
\[\rho^{C}\equiv\mathrm{tr}_{B}\left(\rho^{BC}\right). \tag{55}\]
Taking the trace in Eq. (111) relative to Bob's degrees of freedom and using Eq. (52) we obtain
\[\rho^{C} =\frac{1}{2}\left(1+\nu_{C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right]\right)\rho_{-\infty}^{C}\] \[+\frac{1}{2}\left(1-\nu_{C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right]\right)\sigma_{C}^{\rm z}\rho_{-\infty}^{C}\sigma_{C}^{\rm z}\] \[+\frac{i}{2}\nu_{C}\sin\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right]\langle\sigma_{A}^{\rm z}\rangle_{\rho_{-\infty}^{A}}\left[\rho_{-\infty}^{C},\sigma_{C}^{\rm z}\right], \tag{56}\]
where
\[\nu_{C}\equiv\omega_{\mu}\left(W[E(2f_{C})]\right)=e^{-2\mu(KEf_{C},KEf_{C})}. \tag{57}\]
To obtain Eq. (56), we explicitly used the choice in Eq. (52), which implies that \(\langle\sigma_{B}^{\rm z}\rangle_{\rho_{-\infty}^{B}}\equiv\mathrm{tr}\left(\sigma_{B}^{\rm z}\rho_{-\infty}^{B}\right)=0\). By a completely similar reasoning as the one used to choose Bob's initial state, we are motivated to choose Charlie's initial qubit state as
\[\rho_{-\infty}^{C}\equiv|y_{+}\rangle_{CC}\langle y_{+}|, \tag{58}\]
where \(\sigma_{C}^{y}|y_{+}\rangle_{C}=|y_{+}\rangle_{C}\).
Now, the quantum broadcast channel is completely characterized by a linear, completely positive and trace-preserving (CPTP) quantum map \(\mathcal{E}\) which takes \(\rho_{-\infty}^{A}\) into a final state \(\rho^{BC}\), i.e.,
\[\rho^{BC}=\mathcal{E}(\rho_{-\infty}^{A}). \tag{59}\]
By substituting the initial states of Bob's and Charlie's qubits given in Eqs. (52) and (58) into Eq. (111), we find the explicit expression for the quantum broadcast channel \(\mathcal{E}\). For the sake of clarity, due to its lengthy expression, we write its explicit form in Eq. (111) of Appendix A.
For later use, we will denote the reduced channels \(\mathcal{E}_{B}:A\to B\), \(\mathcal{E}_{C}:A\to C\) by
\[\mathcal{E}_{B}(\rho_{-\infty}^{A}) \equiv\mathrm{tr}_{C}\left[\mathcal{E}(\rho_{-\infty}^{A})\right], \tag{60}\] \[\mathcal{E}_{C}(\rho_{-\infty}^{A}) \equiv\mathrm{tr}_{B}\left[\mathcal{E}(\rho_{-\infty}^{A})\right], \tag{61}\]
respectively. It then follows from Eqs. (111), (60), and (61) that they can be explicitly written as
\[\mathcal{E}_{B}(\rho_{-\infty}^{A}) =\frac{1}{2}I_{B}+\frac{\nu_{B}}{2}\cos\left[2\Delta(f_{A},f_{B})\right]\sigma_{B}^{y}\] \[-\frac{\nu_{B}}{2}\sin\left[2\Delta(f_{A},f_{B})\right]\langle\sigma_{A}^{\rm z}\rangle_{\rho_{-\infty}^{A}}\sigma_{B}^{x} \tag{62}\]
and
\[\mathcal{E}_{C}(\rho_{-\infty}^{A})=\frac{1}{2}I_{C}\] \[\quad+\frac{\nu_{C}}{2}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right]\sigma_{C}^{y} \tag{63}\] \[\quad-\frac{\nu_{C}}{2}\sin\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right]\langle\sigma_{A}^{\rm z}\rangle_{\rho_{-\infty}^{A}}\sigma_{C}^{x}.\]
Given an initial state \(\rho_{-\infty}^{A}\) prepared by Alice on her qubit, these expressions for \(\mathcal{E}_{B}\) and \(\mathcal{E}_{C}\) determine the final local states of Bob's and Charlie's qubit, respectively.
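Before turning to communication rates, it is worth noting that Eq. (62), extended linearly to arbitrary \(2\times 2\) inputs, indeed defines a completely positive, trace-preserving map. This can be checked directly from its Choi matrix; the short sketch below (added for illustration, again with placeholder values standing in for \(\nu_{B}\) and \(\Delta(f_{A},f_{B})\)) performs this check numerically.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
nu_B, D_AB = 0.8, 0.3  # placeholder values

def E_B(rho_A):
    """Reduced channel of Eq. (62), extended linearly to any 2x2 matrix."""
    c, s = np.cos(2 * D_AB), np.sin(2 * D_AB)
    return (np.trace(rho_A) * (0.5 * I2 + 0.5 * nu_B * c * sy)
            - 0.5 * nu_B * s * np.trace(sz @ rho_A) * sx)

# Choi matrix J = sum_{a,b} E_B(|a><b|) (tensor) |a><b|
J = np.zeros((4, 4), dtype=complex)
for a in range(2):
    for b in range(2):
        Eab = np.zeros((2, 2), dtype=complex)
        Eab[a, b] = 1.0
        J += np.kron(E_B(Eab), Eab)

print(np.allclose(J, J.conj().T))                    # Hermitian
print(np.linalg.eigvalsh(J).min() >= -1e-12)         # positive semidefinite (CP)
print(np.allclose(J[0:2, 0:2] + J[2:4, 2:4], I2))    # trace preserving
```

The same construction works verbatim for \(\mathcal{E}_{C}\) of Eq. (63), with \(\nu_{B}\) replaced by \(\nu_{C}\) and the additional factor \(\cos[2\Delta(f_{B},f_{C})]\).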
## IV Achievable communication rates
Now that we have constructed a model for a relativistic quantum broadcast channel, we can investigate at which rates classical and quantum information can be reliably transmitted by Alice to Bob and Charlie. We first review a few protocols for quantum broadcast communication published in the literature and then we investigate the achievable rates for our quantum broadcast channel \(\mathcal{E}\) defined in Eq. (59).
### Unassisted classical communication
Let us begin with the investigation of unassisted transmission of classical information. We follow the protocol present in [3], where more details can be found. We evaluate achievable rates for our model and then we discuss how causality is explicitly manifest in our results.
Suppose Alice wishes to transmit a common message \(m\in M\) intended for both receivers while sending additional personal messages \(m_{B}\in M_{B}\) and \(m_{C}\in M_{C}\) intended for Bob and Charlie, respectively. Each message is chosen from one of the following sets,
\[M=\{1,\cdots,|M|\}\;,\;\;M_{j}=\{1,\cdots,|M_{j}|\}, \tag{64}\]
with \(j\in\{B,C\}\) and \(|M|\) denoting the cardinality of \(M\). Since the broadcast channel \(\mathcal{E}\) is noisy, Alice needs to do a suitable block coding on the possible messages and then make \(n\) independent uses of the channel in order to be able to reliably convey the information. More precisely, Alice maps each message triple \((m_{B},m,m_{C})\) to a codeword \(x^{n}(m_{B},m,m_{C})\) which is then associated with a quantum state \(\rho_{x^{n}(m_{B},m,m_{C})}^{A_{n}}\) defined in the space \(\mathcal{H}_{A}^{\otimes n}\). Then, she transmits \(\rho_{x^{n}(m_{B},m,m_{C})}^{A_{n}}\) by making \(n\) independent uses of the channel \(\mathcal{E}\). The output of the channel is the state
\[\rho_{x^{n}(m_{B},m,m_{C})}^{B_{n}C_{n}}\equiv\mathcal{E}^{\otimes n}\left( \rho_{x^{n}(m_{B},m,m_{C})}^{A_{n}}\right) \tag{65}\]
defined on \(\mathcal{H}_{B}^{\otimes n}\otimes\mathcal{H}_{C}^{\otimes n}\). To decode the message, Bob chooses a positive-operator valued measure (POVM) \(\{F_{m_{B},m}^{B_{n}}\,|\,(m_{B},m)\in M_{B}\times M\}\) which acts on the system \(B_{n}\). Similarly, Charlie chooses a POVM \(\{G_{m,m_{C}}^{C_{n}}\,|\,(m,m_{C})\in M\times M_{C}\}\) which acts on the system \(C_{n}\). We say that an error has occurred when at least one message is incorrectly decoded. Hence, the error probability associated with the transmission of the triple \((m_{B},m,m_{C})\) is
\[p_{e}(m_{B},m,m_{C})\equiv 1-\mathrm{tr}\left[\left(F_{m_{B},m}^{B_{n}}\otimes G _{m,m_{C}}^{C_{n}}\right)\rho_{x^{n}(m_{B},m,m_{C})}^{B_{n}C_{n}}\right]\!.\]
The transmission rates associated with each message are defined as
\[R\equiv\frac{1}{n}\log_{2}|M|\,,\;\;R_{j}\equiv\frac{1}{n}\log_{2}|M_{j}|. \tag{66}\]
These rates essentially measure how many bits of classical information are sent per channel use. If, given an \(\epsilon>0\), the average probability of error \(\overline{p}_{e}\) is bounded by \(\epsilon\), i.e.,
\[\overline{p}_{e}\equiv\frac{1}{|M_{B}||M||M_{C}}\sum_{m_{B},m,m_{C}}p_{e}(m_{B },m,m_{C})\leq\epsilon, \tag{67}\]
the _classical-quantum_ broadcast channel coding protocol described above is said to be a \((n,R_{B},R,R_{C},\epsilon)\) code. We say that a rate triple \((R_{B},R,R_{C})\) is achievable if given \(\epsilon,\delta>0\) there exists a \((n,R_{B}-\delta,R-\delta,R_{C}-\delta,\epsilon)\) code for sufficiently large \(n\). Hence, saying that a rate triple is achievable means that classical information can be reliably transmitted at rates arbitrarily close to them.
The achievable rates depend highly on the coding and decoding techniques chosen by the sender and receivers. The best known achievable rate region for general broadcast channels is attained through the so-called _Marton coding scheme_. Following [3], we investigate here the quantum version of this protocol.
Suppose for simplicity that no common message is meant to be sent, i.e., let us consider a \((R_{B},0,R_{C})\) quantum broadcast channel. In this scenario, one strategy they can use is the _Marton coding scheme_, where one chooses two correlated random variables \(U\) and \(V\), with joint probability distribution denoted by \(p\) and reduced probability distributions denoted by \(p_{U}\) and \(p_{V}\). Such a pair of random variables is usually referred to as _binning variables_. Then, for each \(m_{B}\in M_{B}\) and \(m_{C}\in M_{C}\), one generates codewords \(u^{n}(m_{B})\) and \(v^{n}(m_{C})\) according to the reduced probability distributions \(p_{U}(u)\) and \(p_{V}(v)\). Next, the codewords are mixed together into a single codeword \(x^{n}(m_{B},m_{C})\) according to a deterministic function \(x=f\big{(}u,v\big{)}\). With this approach, it follows that a rate pair \((R_{B},R_{C})\) is achievable if it satisfies [3]
\[0 \leq R_{B} \leq I(U;B)_{\sigma}, \tag{68}\] \[0 \leq R_{C} \leq I(V;C)_{\sigma},\] (69) \[R_{B} + R_{C} \leq I(U;B)_{\sigma}+I(V;C)_{\sigma}-I(U,V)_{\sigma}, \tag{70}\]
where
\[I(X;Y)_{\rho}\equiv S(X)+S(Y)-S(XY) \tag{71}\]
is the mutual information of a state \(\rho^{XY}\), with
\[S(\alpha)=-\mathrm{tr}\left(\rho^{\alpha}\log\rho^{\alpha}\right),\]
\(\alpha=X,Y\), being the von Neumann entropy of \(\rho^{\alpha}\). Here, \(\rho^{X}=\mathrm{tr}_{Y}\rho^{XY}\) and \(\rho^{Y}=\mathrm{tr}_{X}\rho^{XY}\). The states \(\sigma\) in Eqs. (68)-(70) are obtained by suitably (partially) tracing out the degrees of freedom of the density matrix
\[\sigma^{UVBC}\equiv\sum_{u,v}p(u,v)|u\rangle\langle u|^{U}\otimes|v\rangle \langle v|^{V}\otimes\mathcal{E}\left(\rho_{f(u,v)}^{A}\right), \tag{72}\]
with \(p(u,v)\) being the joint probability distribution of the random variables \(U\) and \(V\).
We begin our analysis by deriving bounds for the achievable rates through the Marton coding scheme applied to our relativistic quantum broadcast channel. To evaluate Eq. (68), we take partial traces relative to \(V\) and \(C\) in Eq. (72), obtaining
\[\sigma^{UB}\equiv\sum_{u}p_{U}(u)|u\rangle\langle u|^{U}\otimes\omega_{u}^{B}, \tag{73}\]
where we have written \(p(u,v)=p_{V|U}(v|u)p_{U}(u)\), whereas
\[\omega_{u}^{B}\equiv\sum_{v}p_{V|U}(v|u)\mathcal{E}_{B}\left(\rho_{f(u,v)}^{A} \right). \tag{74}\]
A state like \(\sigma^{UB}\) in Eq. (73) is called a _classical-quantum state_. For this class of states, a straightforward calculation shows that [2]
\[I(U;B)_{\sigma}=S\left[\sum_{u}p_{U}(u)\omega_{u}^{B}\right]-\sum_{u}p_{U}(u)S \left[\omega_{u}^{B}\right]. \tag{75}\]
In order to compute \(\omega_{u}^{B}\) and its von Neumann entropy, let us decompose the initial state of Alice's qubit in terms of Bloch vectors, i.e.,
\[\rho_{f(u,v)}^{A}=\frac{1}{2}\left(I_{A}+\mathbf{r}_{f(u,v)}\cdot\mathbf{\sigma}_{ A}\right), \tag{76}\]
where \(\mathbf{r}_{f(u,v)}\equiv(x_{f(u,v)},y_{f(u,v)},z_{f(u,v)})\), \(I_{A}\) is the identity in \(\mathcal{H}_{A}\), \(\mathbf{\sigma}_{A}\equiv(\sigma_{A}^{\mathrm{x}},\sigma_{A}^{\mathrm{y}},\sigma_ {A}^{\mathrm{z}})\), and \(|\mathbf{r}_{f(u,v)}|^{2}=x_{f(u,v)}^{2}+y_{f(u,v)}^{2}+z_{f(u,v)}^{2}\leq 1\). From Eqs. (62), (74), and (76) we get
\[\omega_{u}^{B} =\frac{1}{2}I_{B}+\frac{\nu_{B}}{2}\cos\left[2\Delta(f_{A},f_{B} )\right]\sigma_{B}^{\mathrm{y}}\] \[-\overline{z}_{u}\frac{\nu_{B}}{2}\sin\left[2\Delta(f_{A},f_{B}) \right]\sigma_{B}^{\mathrm{x}}, \tag{77}\]
where \(\overline{z}_{u}\equiv\sum_{v}p_{V|U}(v|u)z_{f(u,v)}\), and thus we can further write
\[\omega^{B}\equiv\sum_{u}p_{U}(u)\omega_{u}^{B} =\frac{1}{2}I_{B}+\frac{\nu_{B}}{2}\cos\left[2\Delta(f_{A},f_{B} )\right]\sigma_{B}^{\mathrm{y}}\] \[-\overline{z}\frac{\nu_{B}}{2}\sin\left[2\Delta(f_{A},f_{B}) \right]\sigma_{B}^{\mathrm{x}}, \tag{78}\]
where \(\overline{z}\equiv\sum_{u,v}p(u,v)z_{f(u,v)}\).
Now, by using standard diagonalization, we find that \(\omega_{u}^{B}\) has eigenvalues \(p_{u}^{B}\) and \(1-p_{u}^{B}\), where
\[p_{u}^{B}\equiv\frac{1}{2}+\frac{\nu_{B}}{2}\sqrt{\overline{z}_{u}^{2}\sin^{2 }\left[2\Delta(f_{A},f_{B})\right]+\cos^{2}\left[2\Delta(f_{A},f_{B})\right]}, \tag{79}\]
whereas \(\omega^{B}\) has eigenvalues \(p^{B}\) and \(1-p^{B}\), with
\[p^{B}\equiv\frac{1}{2}+\frac{\nu_{B}}{2}\sqrt{\overline{z}^{2}\sin^{2}\left[2 \Delta(f_{A},f_{B})\right]+\cos^{2}\left[2\Delta(f_{A},f_{B})\right]}. \tag{80}\]
Therefore, we can now write Eq. (75) as
\[I(U;B)_{\sigma}=H\left(p^{B}\right)-\sum_{u}p_{U}(u)H\left(p_{u}^{B}\right), \tag{81}\]
where \(H(x)\equiv-x\log_{2}x-(1-x)\log_{2}(1-x)\), \(x\in[0,1]\). Following similar steps, we can show that
\[I(V;C)_{\sigma}=H\left(p^{C}\right)-\sum_{v}p_{V}(v)H\left(p_{v}^{C}\right), \tag{82}\]
where
\[p_{v}^{C} \equiv\frac{1}{2}+\frac{\nu_{C}}{2}|\cos\left[2\Delta(f_{B},f_{C} )\right]| \tag{83}\] \[\quad\times\sqrt{\overline{z}_{v}^{2}\sin^{2}\left[2\Delta(f_{A}, f_{C})\right]+\cos^{2}\left[2\Delta(f_{A},f_{C})\right]}\]
with \(\overline{z}_{v}\equiv\sum_{u}p_{U|V}(u|v)z_{f(u,v)}\), and
\[p^{C} \equiv\frac{1}{2}+\frac{\nu_{C}}{2}|\cos\left[2\Delta(f_{B},f_{C})\right]| \tag{84}\] \[\quad\times\sqrt{\overline{z}^{2}\sin^{2}\left[2\Delta(f_{A},f_{C})\right]+\cos^{2}\left[2\Delta(f_{A},f_{C})\right]}.\]
Now, let us note that \(H(x)\) is a monotonically decreasing function when \(x\geq 1/2\). From Eqs. (79) and (80), we have
\[p_{u}^{B}\leq\frac{1}{2}+\frac{\nu_{B}}{2} \tag{85}\]
and
\[p^{B}\geq\frac{1}{2}+\frac{\nu_{B}}{2}|\cos\left[2\Delta(f_{A},f_{B})\right]|, \tag{86}\]
and thus it follows that
\[H(p_{u}^{B})\geq H\left(\frac{1}{2}+\frac{\nu_{B}}{2}\right) \tag{87}\]
and
\[H(p^{B})\leq H\left(\frac{1}{2}+\frac{\nu_{B}}{2}\left|\cos\left[2\Delta(f_{A}, f_{B})\right]\right|\right). \tag{88}\]
As a result, from Eq. (81), we conclude that
\[I(U;B)_{\sigma}\leq\mathcal{C}(\mathcal{E}_{B}), \tag{89}\]
where
\[\mathcal{C}(\mathcal{E}_{B})\equiv H\left(\frac{1}{2}+\frac{\nu_{B}}{2}\left| \cos\left[2\Delta(f_{A},f_{B})\right]\right|\right)-H\left(\frac{1}{2}+\frac{ \nu_{B}}{2}\right) \tag{90}\]
is the classical capacity of the reduced channel \(\mathcal{E}_{B}\), given in Eq. (62), as shown in [28]. We note that the upper bound in Eq. (89) can be attained if we choose random variables \(U,V=\{0,1\}\) with \(p(u,v)=1/4\) for all \(u,v\), associated with Bloch vectors \(\mathbf{r}_{f(0,0)}=\mathbf{r}_{f(0,1)}=\left(0,0,+1\right)\) and \(\mathbf{r}_{f(1,0)}=\mathbf{r}_{f(1,1)}=\left(0,0,-1\right)\). By using such choices together with Eq. (68), we conclude that Alice can reliably convey classical information to Bob at rates arbitrarily close to \(\mathcal{C}(\mathcal{E}_{B})\).
Similarly, we can show from Eqs. (82)-(84) that
\[I(V;C)_{\sigma}\leq\mathcal{C}(\mathcal{E}_{C}), \tag{91}\]
where
\[\mathcal{C}(\mathcal{E}_{C}) \equiv H\left(\frac{1}{2}+\frac{\nu_{C}}{2}\left|\cos\left[2\Delta(f_{B },f_{C})\right]\cos\left[2\Delta(f_{A},f_{C})\right]\right|\right)\] \[-H\left(\frac{1}{2}+\frac{\nu_{C}}{2}\left|\cos\left[2\Delta(f_{ B},f_{C})\right]\right|\right), \tag{92}\]
is the classical capacity of the reduced channel \(\mathcal{E}_{C}\) given in Eq. (63). The upper bound can be attained, e.g., if we choose random variables \(U,V=\{0,1\}\) with \(p(u,v)=1/4\) for all \(u,v\), associated with Bloch vectors \(\mathbf{r}_{f(0,0)}=\mathbf{r}_{f(1,0)}=\left(0,0,+1\right)\) and \(\mathbf{r}_{f(0,1)}=\mathbf{r}_{f(1,1)}=\left(0,0,-1\right)\)
Hence, from Eq. (69), we conclude that Alice can reliably convey classical information to Charlie as well at rates arbitrarily close to \(\mathcal{C}(\mathcal{E}_{C})\).
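The bounds above are straightforward to evaluate numerically. The sketch below (an illustration we add, with arbitrary sample values standing in for \(\nu_{B}\), \(\nu_{C}\), and the commutators \(\Delta(f_{i},f_{j})\)) computes \(\mathcal{C}(\mathcal{E}_{B})\) and \(\mathcal{C}(\mathcal{E}_{C})\) from Eqs. (90) and (92) and checks that the binning choice described above indeed attains \(I(U;B)_{\sigma}=\mathcal{C}(\mathcal{E}_{B})\) in Eq. (81):

```python
import numpy as np

def H2(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

# Sample values; the true ones are fixed by the field state, the observers'
# trajectories, and their switching/smearing functions.
nu_B, nu_C = 0.8, 0.7
D_AB, D_AC, D_BC = 0.3, 0.4, 0.2

# Classical capacities of the reduced channels, Eqs. (90) and (92)
C_B = H2(0.5 + 0.5 * nu_B * abs(np.cos(2 * D_AB))) - H2(0.5 + 0.5 * nu_B)
nu_C_eff = nu_C * abs(np.cos(2 * D_BC))
C_C = (H2(0.5 + 0.5 * nu_C_eff * abs(np.cos(2 * D_AC)))
       - H2(0.5 + 0.5 * nu_C_eff))

# Marton bound I(U;B), Eq. (81), for p(u,v) = 1/4 and z_{f(u,v)} = (-1)^u,
# so that |zbar_u| = 1 and zbar = 0 in Eqs. (79)-(80).
p_uB = 0.5 + 0.5 * nu_B                              # Eq. (79) with zbar_u = ±1
pB = 0.5 + 0.5 * nu_B * abs(np.cos(2 * D_AB))        # Eq. (80) with zbar = 0
I_UB = H2(pB) - H2(p_uB)

print(C_B, I_UB)   # identical: the rate C(E_B) is achievable
print(C_C)
```

In particular, setting \(\Delta(f_{A},f_{B})=0\) or \(\Delta(f_{A},f_{C})=0\) makes the corresponding capacity vanish, which is the causality statement of Eq. (93).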
It is important to highlight that causality is explicitly manifest in the bounds on the achievable rates. First, we note that the achievable rates \(R_{B}\) between Alice and Bob are bounded by \(\mathcal{C}(\mathcal{E}_{B})\), which does not depend on the interaction between Charlie's qubit and the quantum field. This should indeed be the case since, from hypothesis **(QB2)** in Sec. III, Charlie cannot influence the communication between Alice and Bob, as he does not perform any actions before Bob finishes his measurement process. Furthermore, the presence of the commutator \(\Delta(f_{B},f_{C})\) in Eq. (92) indicates that when Bob and Charlie let their qubits interact with the quantum field in causally connected regions of the spacetime, noise from Bob's actions can influence the rate \(R_{C}\) of communication between Alice and Charlie. Additionally, we note that whenever \(\Delta(f_{A},f_{j})=0\), we have
\[\mathcal{C}(\mathcal{E}_{j})=0 \tag{93}\]
for \(j=B,C\). Hence, when Alice and Bob (Charlie) interact with the field in causally disconnected regions of the spacetime, the achievable rate in Eq. (68) (or Eq. (69)) will reduce to \(R_{B}=0\) (or \(R_{C}=0\)).
To this day, no one has been able to prove that the Marton rate region given by Eqs. (68)-(70) is optimal for general broadcast channels, not even in the classical case. However, it is generally conjectured that the Marton rate region indeed represents the full capacity region of general broadcast channels. If this is the case, then our analysis shows that causality will not be violated when transmitting classical information, no matter which communication protocol is chosen.
### A father protocol for quantum broadcast channels
Let us now turn our attention to the communication of quantum information. Following [5], we present a father protocol for entanglement-assisted quantum communication through quantum broadcast channels that can be used to investigate at which rates Alice can send classical or quantum information to Bob and Charlie when they share an unlimited supply of entanglement. Then, we show how this protocol can be adapted to investigate communication rates for classical information transmission using entanglement as well as for unassisted quantum communication.
Let us suppose that Alice has access to two quantum systems \(T_{A}\) and \(T_{A^{\prime}}\) while Bob and Charlie possess similar quantum systems \(T_{B}\) and \(T_{C}\), respectively. All systems possess the same dimension \(d_{T_{A}}\equiv\mathrm{dim}\mathcal{H}_{T_{A}}\). Suppose further that Alice shares maximally entangled states with both Bob and Charlie:
\[\left|\Phi^{T_{A}T_{k}}\right\rangle=\frac{1}{\sqrt{d_{T_{A}}}}\sum_{i=0}^{d_{ T_{A}}-1}\left|i\right\rangle_{T_{A}}\otimes\left|i\right\rangle_{T_{k}}, \tag{94}\]
where the above state is defined on \(\mathcal{H}_{T_{A}}\otimes\mathcal{H}_{T_{k}}\), with \(k=B,C\), and \(\{\left|i\right\rangle_{T_{\alpha}}\}\) is an orthonormal set of vectors on \(\mathcal{H}_{T_{\alpha}}\), \(\alpha=A,B,C\).
In order to study the transmission of quantum information, we first note that whenever Alice is able to transmit the entanglement she shares with some reference system to each receiver, she will be able to send arbitrary quantum states to each of them. Hence, suppose that Alice possesses two quantum systems \(A_{1}\) and \(A_{2}\) respectively entangled with reference systems \(R_{1}\) and \(R_{2}\) and that these systems are in states \(\left|\Phi^{A_{j}R_{j}}\right\rangle\) defined on \(\mathcal{H}_{A_{j}}\otimes\mathcal{H}_{R_{j}}\) for \(j=1,2\)5. Her goal is to send her share of \(\left|\Phi^{A_{1}R_{1}}\right\rangle\) and \(\left|\Phi^{A_{2}R_{2}}\right\rangle\) to Bob and Charlie, respectively.
Footnote 5: As a result, the quantum state being transmitted by Alice to each receiver is \(\rho^{A_{j}}\equiv\mathrm{tr}_{R_{j}}|\Phi^{A_{j}R_{j}}\rangle\langle\Phi^{A_{j}R_{j}}|\), \(j=1,2\).
The initial global state of the system is
\[\left|\varphi\right\rangle\equiv\left|\Phi^{A_{1}R_{1}}\right\rangle\left|\Phi ^{A_{2}R_{2}}\right\rangle\left|\Phi^{T_{A}T_{B}}\right\rangle\left|\Phi^{T_{A ^{\prime}}T_{C}}\right\rangle \tag{95}\]
and we will denote \(\rho_{\varphi}\equiv\left|\varphi\right\rangle\left\langle\varphi\right|\). In order to use the quantum channel \(\mathcal{E}\) to share her entanglement with \(R_{1}\) and \(R_{2}\) with Bob and Charlie (and hence convey quantum information), Alice uses a CPTP map \(\mathcal{C}:\mathcal{H}_{A_{1}}\otimes\mathcal{H}_{A_{2}}\otimes\mathcal{H}_{T_{A}}\otimes\mathcal{H}_{T_{A^{\prime}}}\rightarrow\mathcal{H}_{A}^{\otimes n}\) to encode her shares of the quantum systems \(T_{A}\), \(T_{A^{\prime}}\), \(A_{1}\), and \(A_{2}\) into a state of \(n\) qubits. The global state then reads
\[\bar{\rho}^{A_{n}R_{1}R_{2}T_{B}T_{C}}\equiv\left(\mathcal{C}\otimes I^{R_{1}R _{2}T_{B}T_{C}}\right)\left(\rho_{\varphi}\right), \tag{96}\]
where \(I^{R_{1}R_{2}T_{B}T_{C}}\) is the identity operator of the joint system \(R_{1}R_{2}T_{B}T_{C}\). Next, by making \(n\) independent uses of the channel \(\mathcal{E}\), Alice sends her total encoded state to Bob and Charlie, which results in the global state
\[\omega^{B_{n}C_{n}R_{1}R_{2}T_{B}T_{C}}\equiv\left(\mathcal{E}^{\otimes n} \otimes I^{R_{1}R_{2}T_{B}T_{C}}\right)\left(\bar{\rho}^{A_{n}R_{1}R_{2}T_{B} T_{C}}\right). \tag{97}\]
Bob and Charlie decode their share of the global state by using the CPTP maps \(\mathcal{D}_{B}:\mathcal{H}_{B}^{\otimes n}\otimes\mathcal{H}_{T_{B}} \rightarrow\mathcal{H}_{B^{\prime}}\) and \(\mathcal{D}_{C}:\mathcal{H}_{C}^{\otimes n}\otimes\mathcal{H}_{T_{C}} \rightarrow\mathcal{H}_{C^{\prime}}\), respectively. Hence, the final global state is
\[\zeta^{B^{\prime}C^{\prime}R_{1}R_{2}}\equiv\left(\mathcal{D}_{C}\otimes \mathcal{D}_{B}\otimes I^{R_{1}R_{2}}\right)\left(\omega^{B_{n}C_{n}R_{1}R_{2}T_ {B}T_{C}}\right). \tag{98}\]
We define the _entanglement-assisted quantum communication rates_ as
\[\widetilde{Q}_{B}\equiv\frac{1}{n}\log_{2}d_{A_{1}}\;,\;\;\widetilde{Q}_{C} \equiv\frac{1}{n}\log_{2}d_{A_{2}}, \tag{99}\]
where \(d_{A_{j}}\equiv\mathrm{dim}\mathcal{H}_{A_{j}}\) and \(j=1,2\). These rates of quantum communication measure how many qubits are being sent per channel use.
The communication process is considered successful if, given a small \(\epsilon>0\), we have
\[\left\|\zeta^{B^{\prime}C^{\prime}R_{1}R_{2}}-\rho_{\varphi}^{B^{\prime}C^{\prime }R_{1}R_{2}}\right\|_{1}\leq\epsilon, \tag{100}\]
where
\[\left\|\mathcal{O}\right\|_{1}\equiv\mathrm{tr}\left(\sqrt{\mathcal{O}^{ \dagger}\mathcal{O}}\right) \tag{101}\]
is the trace norm of an operator \(\mathcal{O}\). Here, \(\rho_{\varphi}^{B^{\prime}C^{\prime}R_{1}R_{2}}\) is the analog of the initial state in the composite system \(B^{\prime}C^{\prime}R_{1}R_{2}\), i.e., given the initial state in Alice's laboratory
\[\rho_{\varphi}^{A_{1}A_{2}R_{1}R_{2}}\equiv\left|\Phi^{A_{1}R_{1}}\right\rangle \left\langle\Phi^{A_{1}R_{1}}\right|\otimes\left|\Phi^{A_{2}R_{2}}\right\rangle \left\langle\Phi^{A_{2}R_{2}}\right|, \tag{102}\]
we define
\[\rho_{\varphi}^{B^{\prime}C^{\prime}R_{1}R_{2}}\equiv\left(\mathcal{I}^{A_{1 }\to B^{\prime}}\otimes\mathcal{I}^{A_{2}\to C^{\prime}}\right)\left(\rho_{ \varphi}^{A_{1}A_{2}R_{1}R_{2}}\right), \tag{103}\]
where \(\mathcal{I}^{A_{1}\to B^{\prime}}\) (or \(\mathcal{I}^{A_{2}\to C^{\prime}}\)) is the identity map between the quantum systems \(A_{1}\) (or \(A_{2}\)) and \(B^{\prime}\) (or \(C^{\prime}\)).
The communication protocol described here is called an \((n,\widetilde{Q}_{B},\widetilde{Q}_{C},\epsilon)\) code if it satisfies Eq. (100) for every input state \(\rho_{\varphi}^{A_{1}A_{2}R_{1}R_{2}}\). Again, we say that a rate pair \(\left(\widetilde{Q}_{B},\widetilde{Q}_{C}\right)\) is achievable if given any \(\epsilon,\delta>0\) there exists an \((n,\widetilde{Q}_{B}-\delta,\widetilde{Q}_{C}-\delta,\epsilon)\) code for sufficiently large \(n\).
Now, given a general broadcast channel \(\mathcal{E}:A\to BC\) and an arbitrary mixed state \(\rho^{AA_{1}A_{2}}\) defined on \(\mathcal{H}_{A}\otimes\mathcal{H}_{A_{1}}\otimes\mathcal{H}_{A_{2}}\), it can be shown [5] that an entanglement-assisted quantum rate pair \((\widetilde{Q}_{B},\widetilde{Q}_{C})\) is achievable if
\[0\leq\widetilde{Q}_{B} \leq\frac{1}{2}I(A_{1};B)_{\sigma}, \tag{104}\] \[0\leq\widetilde{Q}_{C} \leq\frac{1}{2}I(A_{2};C)_{\sigma},\] (105) \[\widetilde{Q}_{B}+\widetilde{Q}_{C} \leq\frac{1}{2}[I(A_{1};B)_{\sigma}+I(A_{2};C)_{\sigma}-I(A_{1}; A_{2})_{\sigma}], \tag{106}\]
where the mutual information quantities are evaluated relative to the state
\[\sigma^{A_{1}A_{2}BC}\equiv\left(\mathcal{E}\otimes I^{A_{1}A_{2}}\right) \left(\rho^{AA_{1}A_{2}}\right). \tag{107}\]
In addition to the entanglement-assisted quantum communication, the father protocol presented here can be adapted to obtain achievable rates for entanglement-assisted classical communication and for unassisted quantum communication, as we shall see in the next two sections.
### Unassisted quantum communication
We note that the father protocol presented above can be modified to describe quantum communication unassisted by entanglement simply by ignoring the existence of the quantum systems \(T_{A},T_{A^{\prime}},T_{B}\), and \(T_{C}\) and following the exact same procedure. As shown in [5], given an arbitrary mixed state \(\rho^{AA_{1}A_{2}}\) defined on \(\mathcal{H}_{A}\otimes\mathcal{H}_{A_{1}}\otimes\mathcal{H}_{A_{2}}\), it follows that the following unassisted quantum rate region is achievable:
\[0\leq Q_{B}\leq I(A_{1}\rangle B)_{\sigma}, \tag{108}\] \[0\leq Q_{C}\leq I(A_{2}\rangle C)_{\sigma}, \tag{109}\]
where \(\sigma\) is given by Eq. (107) and
\[I(A\rangle B)\equiv S(B)-S(AB)\]
is the quantum coherent information between systems \(A\) and \(B\).
To analyze if Alice can send entanglement (and, as a result, an arbitrary state \(\rho^{A}\)) to Bob through the broadcast channel, let us note that we may purify the mixed state \(\rho^{AA_{1}A_{2}}\) by adding an environment system \(E\) such that
\[\rho^{AA_{1}A_{2}}=\mathrm{tr}_{E}\left(|\psi^{AA_{1}A_{2}E}\rangle\langle\psi ^{AA_{1}A_{2}E}|\right), \tag{110}\]
where \(|\psi^{AA_{1}A_{2}E}\rangle\in\mathcal{H}_{A}\otimes\mathcal{H}_{A_{1}} \otimes\mathcal{H}_{A_{2}}\otimes\mathcal{H}_{E}\) is a pure state. Let us decompose it as
\[|\psi^{AA_{1}A_{2}E}\rangle=\sum_{a=0}^{1}\sum_{a_{1}=0}^{1}\sum_{a_{2}=0}^{1}\sum_{e=0}^{d-1}c_{aa_{1}a_{2}e}|a\rangle_{A}|a_{1}\rangle_{A_{1}}|a_{2}\rangle_{A_{2}}|e\rangle_{E}, \tag{111}\]
where \(|a\rangle_{A}\), \(|a_{1}\rangle_{A_{1}}\), and \(|a_{2}\rangle_{A_{2}}\) are eigenstates of \(\sigma_{A}^{\rm z}\), \(\sigma_{A_{1}}^{\rm z}\), and \(\sigma_{A_{2}}^{\rm z}\), respectively. Furthermore, \(\{|e\rangle_{E}\}\) is some orthonormal basis for \(\mathcal{H}_{E}\), with \(d\equiv\mathrm{dim}\mathcal{H}_{E}\) being as large as needed, and
\[\sum_{a=0}^{1}\sum_{a_{1}=0}^{1}\sum_{a_{2}=0}^{1}\sum_{e=0}^{d-1}|c_{aa_{1}a _{2}e}|^{2}=1. \tag{112}\]
By defining
\[|\zeta_{a}\rangle_{A_{1}A_{2}E}\equiv\sum_{a_{1}=0}^{1}\sum_{a_{2}=0}^{1} \sum_{e=0}^{d-1}c_{aa_{1}a_{2}e}|a_{1}\rangle_{A_{1}}|a_{2}\rangle_{A_{2}}|e \rangle_{E} \tag{113}\]
and
\[\zeta_{aa^{\prime}}^{A_{1}A_{2}}\equiv\mathrm{tr}_{E}\left(|\zeta_{a}\rangle_{A_{1}A_{2}E}\,{}_{A_{1}A_{2}E}\langle\zeta_{a^{\prime}}|\right), \tag{114}\]
we can write Eq. (110) as
\[\rho^{AA_{1}A_{2}}=\sum_{a=0}^{1}\sum_{a^{\prime}=0}^{1}\zeta_{aa^{\prime}}^{A_ {1}A_{2}}\otimes|a\rangle_{AA}\langle a^{\prime}|. \tag{115}\]
By using Eq. (115) in Eq. (107) and taking the partial trace over \(C\) and \(A_{2}\) we obtain
\[\sigma^{A_{1}B}=\sum_{a=0}^{1}\zeta_{aa}^{A_{1}}\otimes\mathcal{E}_{B}\left(|a \rangle_{AA}\langle a|\right), \tag{116}\]
where \(\zeta_{aa}^{A_{1}}\equiv\mathrm{tr}_{A_{2}}(\zeta_{aa}^{A_{1}A_{2}})\) and we have used the fact that
\[\mathcal{E}_{B}\left(|a\rangle_{AA}\langle a^{\prime}|\right)=\delta_{aa^{ \prime}}\mathcal{E}_{B}\left(|a\rangle_{AA}\langle a|\right), \tag{117}\]
which can be proven by a direct calculation using Eq. (62). We define now the density matrices
\[\mathfrak{S}_{a}^{B}\equiv\mathcal{E}_{B}\left(\left|a\right\rangle_{AA}\!\left\langle a \right|\right), \tag{118}\]
and
\[\tau_{a}^{A_{1}}\equiv\left\|\zeta_{a}\right\|^{-2}\zeta_{aa}^{A_{1}}, \tag{119}\]
with \(\left\|\zeta_{a}\right\|^{2}\equiv\mathrm{tr}(\zeta_{aa}^{A_{1}})\). This allows us to rewrite Eq. (116) as
\[\sigma^{A_{1}B}=\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}\tau_{a}^{A_{1}} \otimes\mathfrak{S}_{a}^{B}, \tag{120}\]
and we note that \(\mathrm{tr}(\mathfrak{S}_{a}^{B})=\mathrm{tr}(\tau_{a}^{A_{1}})=1\) and
\[\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}=\sum_{a=0}^{1}\sum_{a_{1}=0}^{1}\sum_{a_{2}=0}^{1}\sum_{e=0}^{d-1}\left|c_{aa_{1}a_{2}e}\right|^{2}=1. \tag{121}\]
Hence, we have shown that \(\sigma^{A_{1}B}\) is a separable state, which implies that the reduced channel from Alice to Bob lies in the class of the _entanglement-breaking channels_. As shown in [33], the coherent information is non-positive for separable states like \(\sigma^{A_{1}B}\), i.e.,
\[I(A_{1}\rangle B)_{\sigma}\leq 0. \tag{122}\]
Following similar steps, one can also show that \(I(A_{2}\rangle C)_{\sigma}\leq 0\). As a result, the achievable rate region given by Eqs. (108) and (109) reduces to
\[Q_{B}=Q_{C}=0. \tag{123}\]
It should be noted that it is not known, to this day, if the region defined by Eqs. (108)-(109) characterizes the full capacity region for general quantum broadcast channels. If this is the case, our analysis implies that Alice cannot send qubits to the receivers without prior shared entanglement. Since the reduced channels \(\mathcal{E}_{B}\) and \(\mathcal{E}_{C}\) are entanglement-breaking, Alice cannot transmit the needed entanglement to establish quantum communication by using only the quantum broadcast channel \(\mathcal{E}\).
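The entanglement-breaking character of \(\mathcal{E}_{B}\) can also be seen numerically. Using the states \(\mathfrak{S}_{a}^{B}=\mathcal{E}_{B}(|a\rangle_{AA}\langle a|)\) obtained from Eq. (62), the sketch below (an added illustration with the same placeholder parameters as before) builds the separable state \(\sigma^{A_{1}B}\) of Eq. (120) for the particular input in which \(A\) and \(A_{1}\) are maximally entangled (so that \(\|\zeta_{0}\|^{2}=\|\zeta_{1}\|^{2}=1/2\) and \(\tau_{a}^{A_{1}}=|a\rangle\langle a|\)) and confirms that the coherent information \(S(B)-S(A_{1}B)\) is nonpositive:

```python
import numpy as np

def S_vn(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
nu_B, D_AB = 0.8, 0.3  # placeholder values
c, s = np.cos(2 * D_AB), np.sin(2 * D_AB)

# S_a^B = E_B(|a><a|), a = 0, 1, from Eq. (62) with <sigma_A^z> = +1, -1
S0 = 0.5 * I2 + 0.5 * nu_B * c * sy - 0.5 * nu_B * s * sx
S1 = 0.5 * I2 + 0.5 * nu_B * c * sy + 0.5 * nu_B * s * sx

# Separable state sigma^{A1 B} of Eq. (120) with ||zeta_a||^2 = 1/2
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)
sigma_A1B = 0.5 * np.kron(P0, S0) + 0.5 * np.kron(P1, S1)
sigma_B = 0.5 * (S0 + S1)

coh_info = S_vn(sigma_B) - S_vn(sigma_A1B)
print(coh_info <= 1e-12, coh_info)   # nonpositive, as in Eq. (122)
```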
### Entanglement-assisted quantum communication
We have seen in Sec. IV.1 that Alice can reliably transmit classical messages to Bob and Charlie through the quantum broadcast channel \(\mathcal{E}\) provided that their interactions with the quantum field are causally connected. On the other hand, we have seen in Sec. IV.3 that (probably) Alice can never convey qubits to Bob or Charlie if they do not share prior entanglement. Now, we investigate if this limitation changes if the three observers perform an entanglement-assisted quantum communication protocol as described in Sec. IV.2. In this scenario, we recall that Eqs. (104)-(106) give an achievable (entanglement-assisted) quantum rate region that we shall investigate now.
We begin by deriving upper bounds for this region. Recall that the information bounds are evaluated with respect to the final global state \(\sigma^{A_{1}A_{2}BC}\) given by Eq. (107). Following the procedure described in Sec. IV.3, we can take partial traces over Charlie's qubit space \(C\) and system \(A_{2}\) and write the reduced state \(\sigma^{A_{1}B}\equiv\mathrm{tr}_{A_{2},C}(\sigma^{A_{1}A_{2}BC})\) in a separable form given by Eq. (120). Then, by using the concavity of the von Neumann entropy, together with its additivity for product states [2], we find that
\[S(\sigma^{A_{1}B}) \geq\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}S(\tau_{a}^{A_{1}} \otimes\mathfrak{S}_{a}^{B})\] \[=\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}S(\tau_{a}^{A_{1}})+ \sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}S(\mathfrak{S}_{a}^{B})\] \[\geq\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}S(\mathfrak{S}_{a} ^{B}). \tag{124}\]
Hence, by the definition of quantum mutual information, we have
\[I(A_{1};B)_{\sigma} =S(\sigma^{A_{1}})+S(\sigma^{B})-S(\sigma^{A_{1}B})\] \[\leq S(\sigma^{A_{1}})+S(\sigma^{B})-\sum_{a=0}^{1}\left\|\zeta_{ a}\right\|^{2}S(\mathfrak{S}_{a}^{B})\] \[\leq S(\sigma^{B})-\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}S( \mathfrak{S}_{a}^{B}). \tag{125}\]
A direct calculation using Eqs. (118) and (62) yields
\[\mathfrak{S}_{0}^{B} =\frac{1}{2}I_{B}+\frac{\nu_{B}}{2}\cos\left[2\Delta(f_{A},f_{B} )\right]\!\sigma_{B}^{y}\] \[-\frac{\nu_{B}}{2}\sin\left[2\Delta(f_{A},f_{B})\right]\!\left\| \zeta_{0}\right\|^{2}\sigma_{B}^{x}, \tag{126}\] \[\mathfrak{S}_{1}^{B} =\frac{1}{2}I_{B}+\frac{\nu_{B}}{2}\cos\left[2\Delta(f_{A},f_{B} )\right]\!\sigma_{B}^{y}\] \[+\frac{\nu_{B}}{2}\sin\left[2\Delta(f_{A},f_{B})\right]\!\left\| \zeta_{1}\right\|^{2}\sigma_{B}^{x}, \tag{127}\]
and hence, as \(\sigma^{B}=\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}\mathfrak{S}_{a}^{B}\), we get
\[\sigma^{B} =\frac{1}{2}I_{B}+\frac{\nu_{B}}{2}\cos\left[2\Delta(f_{A},f_{B})\right]\sigma_{B}^{y}\] \[-\frac{\nu_{B}}{2}\sin\left[2\Delta(f_{A},f_{B})\right]\left(\left\|\zeta_{0}\right\|^{2}-\left\|\zeta_{1}\right\|^{2}\right)\sigma_{B}^{x}. \tag{128}\]
By standard diagonalization, we can show that the eigenvalues of \(\mathfrak{S}_{a}^{B}\) are \(p_{a}^{B}\) and \(1-p_{a}^{B}\), where
\[p_{a}^{B}\equiv\frac{1}{2}+\frac{\nu_{B}}{2}\sqrt{\left\|\zeta_{a}\right\|^{4} \sin^{2}\left[2\Delta(f_{A},f_{B})\right]+\cos^{2}\left[2\Delta(f_{A},f_{B}) \right]}. \tag{129}\]
Similarly, the eigenvalues of \(\sigma^{B}\) are \(p^{B}\) and \(1-p^{B}\), where
\[p^{B}\equiv\frac{1}{2}+\frac{\nu_{B}}{2}\sqrt{\zeta_{01}^{4}\sin^{2}\left[2 \Delta(f_{A},f_{B})\right]+\cos^{2}\left[2\Delta(f_{A},f_{B})\right]}, \tag{130}\]
with \(\zeta_{01}^{2}\equiv\left\|\zeta_{0}\right\|^{2}-\left\|\zeta_{1}\right\|^{2}\). This implies that the RHS of Eq. (125) can be written as
\[S(\sigma^{B})-\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}S(\mathfrak{S}_{a}^{B})= H(p^{B})-\sum_{a=0}^{1}\left\|\zeta_{a}\right\|^{2}H(p_{a}^{B}), \tag{131}\]
with \(H(x)\) defined below Eq. (81). Following similar steps as the ones in Sec. IV.1, we note that \(H(x)\) is a monotonically decreasing function for \(x\geq 1/2\). Hence, as
\[\frac{1}{2}\leq p_{a}^{B}\leq\frac{1}{2}+\frac{\nu_{B}}{2} \tag{132}\]
and
\[p^{B}\geq\frac{1}{2}+\frac{\nu_{B}}{2}|\cos{[2\Delta(f_{A},f_{B})]}|, \tag{133}\]
we get by Eqs. (125) and (131) that
\[I(A_{1};B)_{\sigma}\leq\mathcal{C}(\mathcal{E}_{B}), \tag{134}\]
where \(\mathcal{C}(\mathcal{E}_{B})\) is given by Eq. (90). By an analogous reasoning, we can show that
\[I(A_{2};C)_{\sigma}\leq\mathcal{C}(\mathcal{E}_{C}), \tag{135}\]
where \(\mathcal{C}(\mathcal{E}_{C})\) is given by Eq. (92).
Hence, we can conclude that any entanglement-assisted rate pair satisfying Eqs. (104)-(106) must satisfy the upper bounds
\[\widetilde{Q}_{B} \leq\frac{1}{2}\,\mathcal{C}(\mathcal{E}_{B}), \tag{136}\] \[\widetilde{Q}_{C} \leq\frac{1}{2}\,\mathcal{C}(\mathcal{E}_{C}), \tag{137}\]
i.e., the individual rates are bounded by half of the classical capacities of the reduced channels \(\mathcal{E}_{B}\) and \(\mathcal{E}_{C}\), respectively. In fact, we can show that both bounds are attainable (although not simultaneously) by making different choices of the input state \(\rho^{AA_{1}A_{2}}\). To see this, let us choose
\[\rho^{AA_{1}A_{2}}\equiv|\Phi^{AA_{1}}\rangle\langle\Phi^{AA_{1}}|\otimes \rho^{A_{2}}, \tag{138}\]
where \(\rho^{A_{2}}\) is arbitrary and \(|\Phi^{AA_{1}}\rangle\) is the maximally entangled state
\[|\Phi^{AA_{1}}\rangle\equiv\frac{1}{\sqrt{2}}\sum_{a=0}^{1}|a\rangle_{A}|a \rangle_{A_{1}}. \tag{139}\]
For this particular state, Eq. (120) can be written as
\[\sigma^{A_{1}B}=\frac{1}{2}\sum_{a=0}^{1}|a\rangle_{A_{1}A_{1}}\langle a| \otimes\mathfrak{S}_{a}^{B}, \tag{140}\]
which implies that
\[I(A_{1};B)_{\sigma}=\mathcal{C}(\mathcal{E}_{B}), \tag{141}\] \[I(A_{2};C)_{\sigma}=0,\] (142) \[I(A_{1};A_{2})_{\sigma}=0. \tag{143}\]
In view of Eqs. (104)-(106), this implies that Alice will be able to convey quantum information to Bob at a rate arbitrarily close to \(\widetilde{Q}_{B}=\mathcal{C}(\mathcal{E}_{B})/2\) when they initially share unlimited amounts of entanglement. Note that this is in contrast with the unassisted case discussed in Sec. IV.3. Similarly, by interchanging \(A_{1}\) and \(A_{2}\) in Eq. (138), we show that Alice will be able to transmit quantum states to Charlie at a rate arbitrarily close to \(\widetilde{Q}_{C}=\mathcal{C}(\mathcal{E}_{C})/2\) when they communicate assisted by shared entanglement.
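Equation (141) can be checked with the same elementary numerics used in the previous illustrations: for the input of Eq. (138), the state \(\sigma^{A_{1}B}\) of Eq. (140) has \(I(A_{1};B)_{\sigma}=\mathcal{C}(\mathcal{E}_{B})\). A short sketch (again with placeholder parameter values):

```python
import numpy as np

def H2(x):
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def S_vn(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
nu_B, D_AB = 0.8, 0.3
c, s = np.cos(2 * D_AB), np.sin(2 * D_AB)

S0 = 0.5 * I2 + 0.5 * nu_B * c * sy - 0.5 * nu_B * s * sx   # E_B(|0><0|)
S1 = 0.5 * I2 + 0.5 * nu_B * c * sy + 0.5 * nu_B * s * sx   # E_B(|1><1|)
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)

sigma_A1B = 0.5 * np.kron(P0, S0) + 0.5 * np.kron(P1, S1)   # Eq. (140)
I_mut = S_vn(0.5 * (P0 + P1)) + S_vn(0.5 * (S0 + S1)) - S_vn(sigma_A1B)

C_B = H2(0.5 + 0.5 * nu_B * abs(c)) - H2(0.5 + 0.5 * nu_B)  # Eq. (90)
print(I_mut, C_B, 0.5 * I_mut)  # I(A1;B) = C(E_B); achievable rate is half of it
```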
Furthermore, initial tripartite entangled states \(\rho^{AA_{1}A_{2}}\) will, in general, lead to simultaneously nonvanishing rate pairs provided that sender and receivers interact with the field in causally connected regions of spacetime. In contrast, in view of the upper bounds in Eqs. (136)-(137), we can see that whenever Alice, Bob, and Charlie try to communicate being in causally disconnected regions, we have that \(\Delta(f_{A},f_{j})=0\), with \(j=B,C\), and the entanglement-assisted quantum region reduces to
\[\widetilde{Q}_{B}=\widetilde{Q}_{C}=0. \tag{144}\]
Although an expression for the full capacity region is not known, this result suggests that whenever Alice and Bob/Charlie interact with the field in causally disconnected regions of spacetime, no quantum information can be sent from her to them, not even with unlimited prior shared entanglement.
## V Conclusions
In this paper, we have built a quantum broadcast channel by using a bosonic quantum field in a general globally hyperbolic spacetime. In this context, we have explored relativistic effects on the communication of classical and quantum information in a covariant manner, where the parts conveying the information are moving in arbitrary states of motion with the field being assumed to be in an arbitrary (quasifree) state.
To construct the quantum broadcast channel, we have considered that Alice (the sender) prepares some input state \(\rho^{A}_{-\infty}\) for her qubit and switches on its interaction with the field for a finite time. After that, Bob (the first receiver) lets his qubit interact with the field for a finite time interval, thus obtaining a final state possibly containing information encoded by Alice. Similarly, after Bob finishes his measurement, Charlie performs an interaction between his qubit and the quantum field to try to recover information imprinted by Alice in the field state. We were able to trace the field degrees of freedom nonperturbatively and showed that suitable initial states for Bob's and Charlie's qubits can be chosen in order to maximize the signaling between Alice and the receivers. This procedure defines a fully relativistic quantum broadcast channel \(\mathcal{E}\).
With this channel, we were able to investigate at which rates Alice can reliably convey classical and quantum information to Bob and Charlie. By considering first a
scenario where the three observers do not share prior entanglement, we found that Alice can reliably convey classical information to both Bob and Charlie, and we determined the rates at which she can perform this task. However, we have shown that the broadcast channel presented here breaks entanglement and thus Alice cannot convey quantum information to Bob and Charlie following an unassisted strategy. Nevertheless, we have shown that this situation changes when they perform entanglement-assisted quantum communication. In this scenario, we were able to find rates that Alice can achieve when sending qubits to the receivers, provided that they initially share entangled states.
We were also able to show that all rates that were analyzed here vanish when the interactions between qubits and field occur in causally disconnected regions, an effect that is manifest in all expressions bounding the classical and quantum rates of communication even with the use of quantum resources like entanglement. Thus, our investigation provides good evidence that causality is preserved throughout the communication process, reinforcing the fundamental principles of relativistic physics.
## Appendix A Full expression for the quantum broadcast channel map
As discussed in Sec. III, each \(\Gamma_{\alpha\beta\gamma\delta\epsilon\zeta}\) coefficient defined in Eq. (45) can be evaluated by using Eqs. (40) and (41) together with the product relation given by Eq. (46). Then, we substitute these coefficients in Eq. (42), obtaining
\[\rho^{BC} =\frac{1}{4}(1+\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]+\nu_{ C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right])\rho^{BC}_{ -\infty}\] \[+\frac{1}{4}(1-\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]+\nu_{ C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right])\sigma^{ \mathrm{z}}_{B\rho-\infty}\sigma^{\mathrm{z}}_{B}\] \[+\frac{1}{4}(1+\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]-\nu_{ C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right]) \sigma^{\mathrm{z}}_{C}\rho^{\mathrm{z}}_{-\infty}\sigma^{\mathrm{z}}_{C}\] \[+\frac{1}{4}(1-\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]-\nu_{ C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right]) \sigma^{\mathrm{z}}_{B}\otimes\sigma^{\mathrm{z}}_{C}\rho^{\mathrm{z}}_{- \infty}\sigma^{\mathrm{z}}_{B}\otimes\sigma^{\mathrm{z}}_{C}\] \[+\frac{i\nu_{B}}{4}\sin\left[2\Delta(f_{A},f_{B})\right]\langle \sigma^{\mathrm{z}}_{A}\rangle_{\rho^{\mathrm{z}}_{-\infty}}[\rho^{BC}_{- \infty}+\sigma^{\mathrm{z}}_{C}\rho^{BC}_{-\infty}\sigma^{\mathrm{z}}_{C}, \sigma^{\mathrm{z}}_{B}]\] \[+\frac{i\nu_{C}}{4}\sin\left[2\Delta(f_{A},f_{C})\right]\cos \left[2\Delta(f_{B},f_{C})\right]\langle\sigma^{\mathrm{z}}_{A}\rangle_{\rho^ {\mathrm{z}}_{-\infty}}[\rho^{BC}_{-\infty}+\sigma^{\mathrm{z}}_{B}\rho^{BC}_{ -\infty}\sigma^{\mathrm{z}}_{B},\sigma^{\mathrm{z}}_{C}] \tag{10}\] \[+\frac{i\nu_{C}}{8}\left(\rho^{BC}_{-\infty}-\sigma^{\mathrm{z}}_{ B}\rho^{BC}_{-\infty}\sigma^{\mathrm{z}}_{B}-\sigma^{\mathrm{z}}_{C}\rho^{BC}_{- \infty}\sigma^{\mathrm{z}}_{C}+\sigma^{\mathrm{z}}_{B}\otimes\sigma^{\mathrm{z }}_{C}\rho^{\mathrm{z}}_{-\infty}\sigma^{\mathrm{z}}_{B}\otimes\sigma^{\mathrm{ z}}_{C}\right)\] \[+\frac{i\lambda^{*}_{c}}{8}\left(\{\rho^{BC}_{-\infty},\sigma^{ \mathrm{z}}_{B}\otimes\sigma^{\mathrm{z}}_{C}\}-\sigma^{\mathrm{z}}_{B}\rho^{ BC}_{-\infty}\sigma^{\mathrm{z}}_{C}-\sigma^{\mathrm{z}}_{C}\rho^{BC}_{- \infty}\sigma^{\mathrm{z}}_{B}\right)\] \[+\frac{i\lambda^{*}_{s}}{8}\langle\sigma^{\mathrm{z}}_{A}\rangle_ {\rho^{\mathrm{z}}_{-\infty}}[\rho^{BC}_{-\infty}-\sigma^{\mathrm{z}}_{C}\rho^{ BC}_{-\infty}\sigma^{\mathrm{z}}_{C},\sigma^{\mathrm{z}}_{B}]+\frac{i\bar{\Lambda}^{*}_{s}}{8} \langle\sigma^{\mathrm{z}}_{A}\rangle_{\rho^{\mathrm{z}}_{-\infty}}[\rho^{BC}_{ -\infty}-\sigma^{\mathrm{z}}_{B}\rho^{BC}_{-\infty}\sigma^{\mathrm{z}}_{B}, \sigma^{\mathrm{z}}_{C}]\] \[+\frac{i\nu_{C}}{4}\cos\left[2\Delta(f_{A},f_{C})\right]\sin\left[ 2\Delta(f_{B},f_{C})\right]\left([\rho^{BC}_{-\infty},\sigma^{\mathrm{z}}_{B} \otimes\sigma^{\mathrm{z}}_{C}]+\sigma^{\mathrm{z}}_{B}\rho^{BC}_{-\infty} \sigma^{\mathrm{z}}_{C}-\sigma^{\mathrm{z}}_{C}\rho^{BC}_{-\infty}\sigma^{ \mathrm{z}}_{B}\right)\] \[-\frac{\nu_{C}}{4}\sin\left[2\Delta(f_{A},f_{C})\right]\sin\left[ 2\Delta(f_{B},f_{C})\right]\langle\sigma^{\mathrm{z}}_{A}\rangle_{\rho^{ \mathrm{z}}_{-\infty}}\langle\rho^{BC}_{-\infty}-\sigma^{\mathrm{z}}_{C}\rho^{ BC}_{-\infty}\sigma^{\mathrm{z}}_{C},\sigma^{\mathrm{z}}_{B}\rangle,\]
where we have defined the following coefficients:
\[\Lambda^{\pm}_{c} \equiv\nu^{+}_{BC}\cos\left[2\Delta(f_{A},f_{B}+f_{C})\right]\pm \nu^{-}_{BC}\cos\left[2\Delta(f_{A},f_{B}-f_{C})\right], \tag{11}\] \[\Lambda^{\pm}_{s} \equiv\nu^{+}_{BC}\sin\left[2\Delta(f_{A},f_{B}+f_{C})\right]\pm \nu^{-}_{BC}\sin\left[2\Delta(f_{A},f_{B}-f_{C})\right],\] (12) \[\nu_{j} \equiv\omega_{\mu}\left(W[E(2f_{j})]\right),\] (13) \[\nu^{\pm}_{BC} \equiv\omega_{\mu}\left(W[E(2f_{B}\pm 2f_{C})]\right). \tag{14}\]
As discussed in Sec. III, we are motivated to fix the initial states for Bob's and Charlie's qubits as given in Eqs. (52) and (58). We write these states in terms of their Bloch vectors, i.e.,
\[\rho_{-\infty}^{j}=\frac{I_{j}+\sigma_{j}^{\mathrm{y}}}{2}, \tag{66}\]
where \(j=B,C\). By substituting Eq. (66) in Eq. (61), and by using the standard commutation relations of the Pauli matrices, we obtain the following expression describing the quantum broadcast channel map:
\[\mathcal{E}(\rho_{-\infty}^{A}) =\frac{1}{16}(1+\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]+\nu_ {C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right] )\left(I_{B}+\sigma_{B}^{\mathrm{y}}\right)\otimes\left(I_{C}+\sigma_{C}^{ \mathrm{y}}\right) \tag{67}\] \[+\frac{1}{16}(1-\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]+\nu_ {C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right] )\left(I_{B}-\sigma_{B}^{\mathrm{y}}\right)\otimes\left(I_{C}+\sigma_{C}^{ \mathrm{y}}\right)\] \[+\frac{1}{16}(1+\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]-\nu _{C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right] )\left(I_{B}+\sigma_{B}^{\mathrm{y}}\right)\otimes\left(I_{C}-\sigma_{C}^{ \mathrm{y}}\right)\] \[+\frac{1}{16}(1-\nu_{B}\cos\left[2\Delta(f_{A},f_{B})\right]-\nu _{C}\cos\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{B},f_{C})\right] )\left(I_{B}-\sigma_{B}^{\mathrm{y}}\right)\otimes\left(I_{C}-\sigma_{C}^{ \mathrm{y}}\right)\] \[-\frac{\nu_{B}}{4}\sin\left[2\Delta(f_{A},f_{B})\right]\{\sigma_{ A}^{\mathrm{y}}\}_{\rho_{-\infty}^{A}}\left(\sigma_{B}^{\mathrm{x}}\otimes I_{C} )-\frac{\nu_{C}}{4}\sin\left[2\Delta(f_{A},f_{C})\right]\cos\left[2\Delta(f_{ B},f_{C})\right]\{\sigma_{A}^{\mathrm{x}}\}_{\rho_{-\infty}^{A}}\left(I_{B} \otimes\sigma_{C}^{\mathrm{x}}\right)\] \[+\frac{\Lambda_{c}^{\mathrm{x}}}{8}\sigma_{B}^{\mathrm{y}} \otimes\sigma_{B}^{\mathrm{y}}-\frac{\Lambda_{c}^{\mathrm{x}}}{8}\left(\sigma _{B}^{\mathrm{x}}\otimes\sigma_{C}^{\mathrm{y}}\right)-\frac{\Lambda_{s}^{ \mathrm{x}}}{8}(\sigma_{A}^{\mathrm{x}})_{\rho_{-\infty}^{A}}\left(\sigma_{B}^{ \mathrm{x}}\otimes\sigma_{C}^{\mathrm{y}}\right)-\frac{\Lambda_{s}^{\mathrm{ x}}}{8}(\sigma_{A}^{\mathrm{x}})_{\rho_{-\infty}^{A}}\left(\sigma_{B}^{\mathrm{y}} \otimes\sigma_{C}^{\mathrm{x}}\right)\] \[-\frac{\nu_{C}}{4}\cos\left[2\Delta(f_{A},f_{C})\right]\sin\left[2 \Delta(f_{B},f_{C})\right]\left(\sigma_{B}^{\mathrm{x}}\otimes\sigma_{C}^{ \mathrm{x}}\right)-\frac{\nu_{C}}{4}\sin\left[2\Delta(f_{A},f_{C})\right]\sin \left[2\Delta(f_{B},f_{C})\right]\{\sigma_{A}^{\mathrm{x}}\}_{\rho_{-\infty}^{A }}\left(\sigma_{B}^{\mathrm{x}}\otimes\sigma_{C}^{\mathrm{y}}\right).\]
By taking partial traces relative to each qubit, one recovers Eqs. (62) and (63).
|
2309.04726 | Eigenvalues of some classes of signed complete graphs | In this work, we discuss some properties of the eigenvalues of some classes
of signed complete graphs. We also obtain the form of characteristic polynomial
for these graphs. | Prajnanaswaroopa S | 2023-09-09T09:07:23Z | http://arxiv.org/abs/2309.04726v1 | # Eigenvalues of some classes of signed complete graphs
###### Abstract
In this work, we discuss some properties of the eigenvalues of some classes of signed complete graphs. We also obtain the form of characteristic polynomial for these graphs.
## Introduction
Signed graphs were introduced by Frank Harary [3]. Signed graphs have become highly useful tools for analyzing several real-life networks, typically social networks [2]. A signed graph is a simple loopless graph together with a function defined from the set of edges to the set \(\{-1,1\}\). The spectra of signed graphs have been discussed in [6], [7]. The spectra of complete signed graphs have been discussed at some length in [1]. Here, we find the eigenvalues of the adjacency matrix associated to the signed complete graph \(G^{\prime}\) whose negative edges induce a graph \(G\) of order \(n\) consisting of a union of \(k\) cliques of order \(h\), such that an \((h-p)\)-clique is common to all the \(k\) cliques, while the \(p\) remaining vertices of each of the \(k\) cliques are disjoint from those of the other cliques. In other words, if \(G\) is the induced graph formed by the negative edges, and if we label the vertices of the disjoint cliques as \(v_{ij}\), \(i\in\{1,2,\ldots,p\}\) and \(j\in\{1,2,\ldots,k\}\), then the vertices \(v_{ml}\), where \(m\in\{h-p+1,h-p+2,\ldots,h\}\) and \(l\in\{1,2,\ldots,k\}\), form a clique in the graph \(G\). We call the induced graph \(G\), with parameters \(n,h,p\). The adjacency matrix of such a graph, using a suitable labelling of its vertices, can be given by:
\[\begin{pmatrix}K_{p}&O&\cdots&O&X_{p}\\ O&K_{p}&\cdots&O&X_{p}\\ O&O&\ddots(k-1)-times&O&X_{p}\\ O&O&&\cdots&K_{p}&X_{p}\\ X&O&&\cdots&O&K_{h}\end{pmatrix}\]
where \(O\) is the zero matrix and \(X_{p}\) are the first \(p\) rows of a matrix \(X\) given by
\[X=\begin{pmatrix}J&O&\cdots&O\\ J&O&\cdots&O\\ \vdots&\vdots&(k-1)-times&\vdots\\ J&O&\cdots&O\\ J&O&\cdots&O\end{pmatrix}\]
with \(J_{p,(h-p)}\) being the all ones matrix of order \(p\times(h-p)\) given by
\[J=\begin{pmatrix}1&1&\dots&1\\ 1&1&\dots&1\\ \vdots&\vdots&\vdots&\vdots\\ 1&1&\dots&1\end{pmatrix}\]
As the adjacency matrix of a signed complete graph whose negative edges induce a graph \(G\) is the same as the Seidel adjacency matrix of the graph \(G\), and since the Seidel adjacency matrix of a graph with adjacency matrix \(A\) of order \(n\) is defined as \(J_{n}-I_{n}-2A\), the matrix whose spectrum we wish to find is
\[\begin{pmatrix}-K_{p}&J&\dots&J&X_{1}^{\prime}\\ J&-K_{p}&\dots&J&X_{2}^{\prime}\\ J&J&\ddots(k-1)-times&J&X_{3}^{\prime}\\ J&J&\dots&-K_{p}&X_{4}^{\prime}\\ X_{5}^{\prime}&J&\dots&J&-K_{h}\end{pmatrix}\]
with \(X_{i}^{\prime}\) being the \(i\)-th \(p\) rows of the matrix \(X^{\prime}\) given in block form by:
\[X^{\prime}=\begin{pmatrix}-J_{(k-1)p,h-p}&J_{(k-1)p,p}\end{pmatrix}\]
. We denote the main matrix by \(S(G)\) with respect to parameters \(n,h,p\).
### Main Theorems
**Lemma 1**.: _The eigenvalues of the matrix \(X=aI_{n}+bJ_{n}\) are given by \(bn+a\) with multiplicity \(1\) and \(a\) with multiplicity \(n-1\)._
Proof.: The proof is quite straightforward. We observe that \(bJ_{n}\) has rank \(1\), and hence a single non-zero eigenvalue. We also observe that \((1\ 1\ 1\dots\ 1)^{T}\) is an eigenvector of \(bJ_{n}\) with corresponding eigenvalue \(bn\). The eigenvectors of \(bJ_{n}\) are also eigenvectors of \(X\), as \(aI_{n}\) is a scalar matrix. Therefore, as the only eigenvalue of \(aI_{n}\) with respect to any of its eigenvectors is \(a\), the eigenvalues of \(X\) are \(a\) with multiplicity \(n-1\) and \(bn+a\) with multiplicity \(1\). For more explicit clarity, we note that the eigenvectors of \(X\) are \((1\ 1\ \dots\ 1)^{T}\) with eigenvalue \(nb+a\), and \((1\ 0\ 0\dots\ -1)^{T},(0\ 1\ 0\dots\ -1)^{T},\dots,(0\ 0\ \dots\ 1\ -1)^{T}\), each with eigenvalue \(a\).
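Lemma 1 is easy to verify numerically. The following is a minimal sketch assuming numpy is available; the values of \(a\), \(b\), \(n\) are arbitrary and chosen only for illustration.

```
import numpy as np

a, b, n = 2.5, -0.75, 6
X = a * np.eye(n) + b * np.ones((n, n))
eig = np.sort(np.linalg.eigvalsh(X))
# Lemma 1: eigenvalue a with multiplicity n-1, and bn + a with multiplicity 1.
expected = np.sort(np.array([a] * (n - 1) + [b * n + a]))
assert np.allclose(eig, expected)
```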
**Theorem 1**.: _Let \(A\) be a square matrix of order \(k\) having constant row sum \(r\) and having the same eigenvectors as \(aI+bJ\). Then the eigenvalues of the matrix \(M\) defined by_
\[\begin{pmatrix}A&bJ_{k}&\dots&bJ_{k}\\ bJ_{k}&A&\dots&bJ_{k}\\ \vdots&\vdots&(n-times)&\vdots\\ bJ_{k}&\dots&\dots&A\end{pmatrix}\]
_are given by \(r+bk(n-1)\) with multiplicity \(1\), \(r-bk\) with multiplicity \(n-1\) and \(d\) with multiplicity \(n(k-1)\), where \(d\) is the eigenvalue of \(A\) with respect to the eigenvectors other than \((1\ 1\ 1\dots\ 1)^{T}\)._
Proof.: Taking the previous lemma as an inspiration, we can construct eigenvectors for \(M\) as follows. Let \(\vec{j}_{i}\) denote the all-ones vector of order \(i\). Let the eigenvectors of \(A\) other than \(\vec{j}_{k}\) be labelled as \(e_{1},e_{2},\dots,e_{k-1}\). Then the eigenvectors of \(M\) are \(\vec{j}_{kn}\), \((\vec{j}_{k}\ 0\ \dots\ -\vec{j}_{k})^{T},(0\ \vec{j}_{k}\ 0\dots\ -\vec{j}_{k})^{T},\dots,(0\ 0\ \dots\ \vec{j}_{k}\ -\vec{j}_{k})^{T}\), and \((e_{1}\ 0\ 0\ \dots 0)^{T},(e_{2}\ 0\ \dots\ 0)^{T},\dots,(e_{k-1}\ 0\ \dots\ 0)^{T},(0\ e_{1}\ 0\dots\ 0)^{T},\dots,(0\ 0\ \dots\ e_{k-1})^{T}\). The corresponding eigenvalues are then \(r+(bkn-bk)=r+bk(n-1)\) with multiplicity \(1\) (for the eigenvector \(\vec{j}_{kn}\)), \(r-bk\) with multiplicity \(n-1\) (for the eigenvectors \((\vec{j}_{k}\ 0\ \dots\ -\vec{j}_{k})^{T},(0\ \vec{j}_{k}\ 0\dots\ -\vec{j}_{k})^{T},\dots,(0\ 0\ \dots\ \vec{j}_{k}\ -\vec{j}_{k})^{T}\)), and lastly the eigenvalue of \(A\) corresponding to the eigenvectors \(e_{i}\) with multiplicity \(n(k-1)\). That the vectors given above are actually eigenvectors can be easily verified by multiplication and by using the properties of \(A\) and \(bJ_{k}\).
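A small numerical sanity check of Theorem 1 is given below (a minimal sketch assuming numpy; we take \(A=cI_{k}+dJ_{k}\), which shares the eigenvectors of \(aI+bJ\), and the particular numbers are purely illustrative).

```
import numpy as np

k, n = 4, 3                      # block order k, number of diagonal blocks n
c, d, b = 1.0, 0.5, -0.3
A = c * np.eye(k) + d * np.ones((k, k))      # row sum r = c + d*k; other eigenvalue is c
M = np.kron(np.eye(n), A) + np.kron(np.ones((n, n)) - np.eye(n), b * np.ones((k, k)))
r = c + d * k
# Theorem 1: r + bk(n-1) once, r - bk with multiplicity n-1, and c with multiplicity n(k-1).
expected = [r + b * k * (n - 1)] + [r - b * k] * (n - 1) + [c] * (n * (k - 1))
assert np.allclose(np.sort(np.linalg.eigvalsh(M)), np.sort(expected))
```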
**Lemma 2**.: _We have \(X^{\prime}J_{h}=J_{h}X^{\prime T}=(2p-h)J_{h}\), where_
\[X^{\prime}=\begin{pmatrix}-J_{(k-1)p,h-p}&J_{(k-1)p,p}\end{pmatrix}\]
Proof.: As \(X^{\prime}\) is a matrix having constant row and column sums, the entries of the matrix \(X^{\prime}J_{h}\) consist only of the row sum of \(X^{\prime}\), because \(J_{h}\) consists only of \(1\)s. Similarly, the entries of the matrix \(J_{h}X^{\prime T}\) consist only of the column sum of \(X^{\prime T}\), which equals the row sum of \(X^{\prime}\). As any row of \(X^{\prime}\) contains \(-1\) exactly \(h-p\) times and \(1\) exactly \(p\) times, the row sum is \(p-(h-p)=2p-h\), from which the lemma follows.
**Lemma 3**.: _Let \(A\), \(D\) be square matrices of arbitrary orders. The determinant of the block matrix_
\[M=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\]
_with invertible matrix \(D\) is given by \(|M|=|D||A-BD^{-1}C|=\frac{\left|\,|D|A-B\cdot adj(D)\cdot C\,\right|}{|D|^{m-1}}\), where \(adj(D)\) is the adjugate (or adjoint) of \(D\) and \(m\) is the order of \(A\)._
Proof.: The proof uses the Schur complement of the matrix \(D\) in the matrix \(M\). In other words, since \(D\) is invertible, performing block Gaussian elimination on \(M\) gives the factorization
\[M=\begin{pmatrix}I&B\\ O&D\end{pmatrix}\begin{pmatrix}A-BD^{-1}C&O\\ D^{-1}C&I\end{pmatrix},\]
where \(I\) and \(O\) denote identity and zero matrices of the appropriate orders. The determinant of each factor is the product of the determinants of its diagonal blocks [5], so \(|M|=|D|\cdot|A-BD^{-1}C|\). The expression involving the adjugate follows by noting that \(D^{-1}=\frac{adj(D)}{|D|}\).
**Lemma 4**.: _If \(A_{n}\) is the adjacency matrix of the complete graph \(K_{n}\), then adjugate of \(M=-A_{n}-\lambda I_{n}\) has the form_
\[\begin{pmatrix}C_{p}(n-1)&(1-\lambda)^{n-2}&\dots&(1-\lambda)^{n-2}\\ (1-\lambda)^{n-2}&C_{p}(n-1)&\dots&(1-\lambda)^{n-2}\\ \vdots&\vdots&\ddots&\vdots\\ (1-\lambda)^{n-2}&\dots&\dots&C_{p}(n-1)\end{pmatrix}\]
_, where \(C_{p}(n)=(1-\lambda)^{n-1}(1-n-\lambda)\) is the characteristic polynomial of \(M\) (or negative of adjacency matrix of the complete graph of order \(n\))._
Proof.: As \(M=(1-\lambda)I_{n}-J_{n}\) is invertible (for generic \(\lambda\)), we again use the property that \(adj(M)=|M|\cdot M^{-1}\). To calculate the inverse of \(M\), we use the Sherman-Morrison formula [4]. By the formula, we have \((X+uv^{T})^{-1}=X^{-1}-\frac{X^{-1}uv^{T}X^{-1}}{1+v^{T}X^{-1}u}\), where \(u\) and \(v\) are vectors and \(1+v^{T}X^{-1}u\neq 0\). Here, we can take \(u=(1\ 1\ \dots\ 1)^{T}\) and \(v=(-1\ -1\ \dots\ -1)^{T}\), each with \(n\) entries, so that \(uv^{T}=-J_{n}\) and \(X=(1-\lambda)I_{n}\). Then, we get
\[M^{-1} =\frac{1}{1-\lambda}I_{n}-\frac{\frac{1}{1-\lambda}I_{n}uv^{T} \frac{1}{1-\lambda}I_{n}}{1+\frac{-n}{1-\lambda}}\] \[=\frac{1}{1-\lambda}I_{n}-\frac{\frac{1}{(1-\lambda)^{2}}(-J_{n})}{ \frac{(1-\lambda)-n}{1-\lambda}}\] \[=\frac{1}{(1-\lambda)(1-\lambda-n)}\left[(1-\lambda-n)I_{n}+J_{n}\right]\]
This implies that the adjugate then becomes \(|M|\cdot M^{-1}=C_{p}(n)\cdot\frac{1}{(1-\lambda)(1-\lambda-n)}\left[(1-\lambda-n)I_{n}+J_{n}\right]=(1-\lambda)^{n-2}(1-\lambda-n)I_{n}+(1-\lambda)^{n-2}J_{n}=(1-\lambda)^{n-2}\left((2-\lambda-n)-1\right)I_{n}+(1-\lambda)^{n-2}J_{n}=\left(C_{p}(n-1)-(1-\lambda)^{n-2}\right)I_{n}+(1-\lambda)^{n-2}J_{n}\). This matrix, when expanded, at once gives the desired result.
**Lemma 5**.: _If \(K_{h}\) denotes the adjacency matrix of the complete graph on \(h\) vertices, then we have \(X^{\prime}\cdot adj(-K_{h}-\lambda I_{h})\cdot X^{\prime T}=\left[\left(C_{p}(h-1)-(1-\lambda)^{h-2}\right)h+(2p-h)^{2}(1-\lambda)^{h-2}\right]J_{n-h}\), where_
\[X^{\prime}=\begin{pmatrix}-J_{(k-1)p,h-p}&J_{(k-1)p,p}\end{pmatrix}\]
_as before._
Proof.: The proof is straightforward multiplication, using Lemmas 2 and 4.
Proof.: In this case, the matrix has the form
\[\begin{pmatrix}-K_{n-h}&X\\ X^{T}&-K_{h}\end{pmatrix}\]
with
\[X=(v\quad J_{n-h,h-1})\]
and \(v\) is the vector \((-1,-1,\ldots,-1)^{T}\). First, let us find the characteristic polynomial.
**Theorem 2**.: _The spectrum of the matrix \(S(G)\) with parameters \(n,h,p\) is given by the roots of the polynomial \(F(\lambda)=(1-2p-\lambda)^{\frac{n-h}{p}-1}(1-\lambda)^{n-2-\frac{n-h}{p}}s\), where \(s=-\lambda^{3}-(2h-n+2p-3)\lambda^{2}-(2h^{2}-2(h-1)n+2(h-2)p-4h+3)\lambda+2h ^{2}-(2h-1)n-2(2h^{2}-2hn-h+1)p-2h+4(h-n)p^{2}+1\). In particular, it has the eigenvalue \(1\) with multiplicity at least \(n-2-\frac{n-h}{p}\), and the eigenvalue \(2p-1\) with multiplicity at least \(\frac{n-h}{p}-1\)._
Proof.: The Seidel matrix \(S(G)\) is given by:
\[\begin{pmatrix}-K_{p}&J&\ldots&J&X^{\prime}_{1}\\ J&-K_{p}&\ldots&J&X^{\prime}_{2}\\ J&J&\ddots(k-1)-times&J&X^{\prime}_{3}\\ J&J&\ldots&-K_{p}&X^{\prime}_{4}\\ X^{\prime}_{5}&J&\ldots&J&-K_{h}\end{pmatrix}\]
, with \(X^{\prime}_{i}\) being the \(i\)-th \(p\) rows of the matrix \(X^{\prime}\) given in block form by:
\[X^{\prime}=\begin{pmatrix}-J_{(k-1)p,h-p}&J_{(k-1)p,p}\end{pmatrix}\]
. We proceed to calculate the characteristic polynomial of the matrix \(S(G)\). This is nothing but the determinant of the matrix \(S(G)-\lambda I_{n}\). In matrix form, this is
\[\begin{pmatrix}-K_{p}-\lambda I_{p}&J&\ldots&J&X^{\prime}_{1}\\ J&-K_{p}-\lambda I_{p}&\ldots&J&X^{\prime}_{2}\\ J&J&\ddots(k-1)-times&J&X^{\prime}_{3}\\ J&J&\ldots&-K_{p}-\lambda I_{p}&X^{\prime}_{4}\\ X^{\prime}_{5}&J&\ldots&J&-K_{h}-\lambda I_{h}\end{pmatrix}\]
. By using Lemma 3, the determinant can be written as \(\frac{\left|\,|-K_{h}-\lambda I_{h}|M-X^{\prime}\cdot adj(-K_{h}-\lambda I_{h})X^{\prime T}\,\right|}{|-K_{h}-\lambda I_{h}|^{n-h-1}}\), where \(M\) is the block matrix formed by the first \(n-h\) rows and columns, given by
\[\begin{pmatrix}-K_{p}-\lambda I_{p}&J&\ldots&J\\ J&-K_{p}-\lambda I_{p}&\ldots&J\\ J&J&\ddots(k-1)-times&J\\ J&J&\ldots&-K_{p}-\lambda I_{p}\end{pmatrix}\]
. Taking cognizance of the fact that \(|-K_{h}-\lambda I_{h}|=C_{p}(h)\) and, from Lemma 5, \(X^{\prime}\cdot adj(-K_{h}-\lambda I_{h})\cdot X^{\prime T}=YJ_{n-h}\), where \(Y=\left(C_{p}(h-1)-(1-\lambda)^{h-2}\right)h+(2p-h)^{2}(1-\lambda)^{h-2}\), we get the determinant as \(\frac{|C_{p}(h)M-YJ_{n-h}|}{(C_{p}(h))^{n-h-1}}\). Then, the determinant becomes, in block form:
\[\frac{1}{(C_{p}(h))^{n-h-1}}\left|\begin{smallmatrix}C_{p}(h)[-K_{p}-\lambda I_{p}]-YJ_{p}&J_{p}(C_{p}(h)-Y)&\dots&J_{p}(C_{p}(h)-Y)\\ J_{p}(C_{p}(h)-Y)&C_{p}(h)[-K_{p}-\lambda I_{p}]-YJ_{p}&\dots&J_{p}(C_{p}(h)-Y)\\ J_{p}(C_{p}(h)-Y)&J_{p}(C_{p}(h)-Y)&\ddots(k-1)-times&J_{p}(C_{p}(h)-Y)\\ J_{p}(C_{p}(h)-Y)&J_{p}(C_{p}(h)-Y)&\dots&C_{p}(h)[-K_{p}-\lambda I_{p}]-YJ_{p}\end{smallmatrix}\right|\]
The above matrix has a similar form to the one in Theorem 1, with \(A=C_{p}(h)[-K_{p}-\lambda I_{p}]-YJ_{p}\) and \(b=C_{p}(h)-Y\). Therefore, as the eigenvalues are \(C_{p}(h)(n-h-2p+1-\lambda)+Y(h-n)\) with multiplicity \(1\), \(C_{p}(h)(1-2p-\lambda)\) with multiplicity \(\frac{n-h}{p}-1\), and \(C_{p}(h)(1-\lambda)\) with multiplicity \(n-h-\frac{n-h}{p}\), the determinant will be equal to \(\frac{1}{(C_{p}(h))^{n-h-1}}[(C_{p}(h)(n-h-2p+1-\lambda)+Y(h-n))(C_{p}(h)(1-2p-\lambda))^{\frac{n-h}{p}-1}(C_{p}(h)(1-\lambda))^{n-h-\frac{n-h}{p}}]\). Simplifying the expression using the form of \(C_{p}(h)=(1-\lambda)^{h-1}(1-\lambda-h)\), we get \(F(\lambda)=(1-2p-\lambda)^{\frac{n-h}{p}-1}(1-\lambda)^{n-2-\frac{n-h}{p}}s\), where \(s=-\lambda^{3}-(2h-n+2p-3)\lambda^{2}-(2h^{2}-2(h-1)n+2(h-2)p-4h+3)\lambda+2h^{2}-(2h-1)n-2(2h^{2}-2hn-h+1)p-2h+4(h-n)p^{2}+1\). The expression \(F(\lambda)\) is therefore the characteristic polynomial of \(S(G)\). Therefore, the roots of the cubic polynomial \(s\) fully determine the remaining spectrum of \(S(G)\), as the eigenvalues \(1\) and \(2p-1\) are already known with their minimum multiplicities \(n-2-\frac{n-h}{p}\) and \(\frac{n-h}{p}-1\) from the expression.
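Theorem 2 can be checked numerically. The sketch below (a minimal illustration assuming numpy; the helper name `seidel_matrix` and the parameters \(k,h,p\) are ours and purely illustrative) builds \(S(G)\) directly from the clique structure and inspects its spectrum.

```
import numpy as np

def seidel_matrix(k, h, p):
    # S(G) = J - I - 2A, where A is the adjacency matrix of the negative-edge
    # graph G: k cliques of order h sharing a common (h - p)-clique.
    n = (k - 1) * p + h
    shared = list(range(k * p, n))                 # the h - p common vertices
    A = np.zeros((n, n))
    for i in range(k):
        clique = list(range(i * p, (i + 1) * p)) + shared
        for u in clique:
            for v in clique:
                if u != v:
                    A[u, v] = 1
    return np.ones((n, n)) - np.eye(n) - 2 * A

k, h, p = 3, 5, 2                                  # illustrative parameters
n = (k - 1) * p + h
eig = np.linalg.eigvalsh(seidel_matrix(k, h, p))
# Theorem 2: eigenvalue 1 with multiplicity at least n - 2 - (n - h)/p,
# eigenvalue 2p - 1 with multiplicity at least (n - h)/p - 1, and the
# remaining eigenvalues given by the roots of the cubic s.
print(np.round(np.sort(eig), 4))
print(np.sum(np.isclose(eig, 1)), np.sum(np.isclose(eig, 2 * p - 1)))
```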
## Conclusion
In this paper, we have used the block matrix technique, the Sherman-Morrison formula and an eigenvector reconstruction method to compute the spectrum and characteristic polynomial of certain signed complete graphs. The method as such can have broad applications in spectral graph theory, and the spectra of signed complete graphs obtained here can be used further in various applications.
|
2309.06657 | Statistical Rejection Sampling Improves Preference Optimization | Improving the alignment of language models with human preferences remains an
active research challenge. Previous approaches have primarily utilized
Reinforcement Learning from Human Feedback (RLHF) via online RL methods such as
Proximal Policy Optimization (PPO). Recently, offline methods such as Sequence
Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have
emerged as attractive alternatives, offering improvements in stability and
scalability while maintaining competitive performance. SLiC refines its loss
function using sequence pairs sampled from a supervised fine-tuned (SFT)
policy, while DPO directly optimizes language models based on preference data,
foregoing the need for a separate reward model. However, the maximum likelihood
estimator (MLE) of the target optimal policy requires labeled preference pairs
sampled from that policy. DPO's lack of a reward model constrains its ability
to sample preference pairs from the optimal policy, and SLiC is restricted to
sampling preference pairs only from the SFT policy. To address these
limitations, we introduce a novel approach called Statistical Rejection
Sampling Optimization (RSO) that aims to source preference data from the target
optimal policy using rejection sampling, enabling a more accurate estimation of
the optimal policy. We also propose a unified framework that enhances the loss
functions used in both SLiC and DPO from a preference modeling standpoint.
Through extensive experiments across three diverse tasks, we demonstrate that
RSO consistently outperforms both SLiC and DPO on evaluations from both Large
Language Model (LLM) and human raters. | Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, Jialu Liu | 2023-09-13T01:07:25Z | http://arxiv.org/abs/2309.06657v2 | # Statistical rejection sampling improves preference optimization
###### Abstract
Improving the alignment of language models with human preferences remains an active research challenge. Previous approaches have primarily utilized Reinforcement Learning from Human Feedback (RLHF) via online RL methods such as Proximal Policy Optimization (PPO). Recently, offline methods such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have emerged as attractive alternatives, offering improvements in stability and scalability while maintaining competitive performance. SLiC refines its loss function using sequence pairs sampled from a supervised fine-tuned (SFT) policy, while DPO directly optimizes language models based on preference data, foregoing the need for a separate reward model. However, the maximum likelihood estimator (MLE) of the target optimal policy requires labeled preference pairs sampled from that policy. DPO's lack of a reward model constrains its ability to sample preference pairs from the optimal policy, and SLiC is restricted to sampling preference pairs only from the SFT policy. To address these limitations, we introduce a novel approach called _Statistical Rejection Sampling Optimization_ (RSO) that aims to source preference data from the target optimal policy using rejection sampling, enabling a more accurate estimation of the optimal policy. We also propose a unified framework that enhances the loss functions used in both SLiC and DPO from a preference modeling standpoint. Through extensive experiments across three diverse tasks, we demonstrate that RSO consistently outperforms both SLiC and DPO on evaluations from both Large Language Model (LLM) and human raters.
## 1 Introduction
Recent advancements in Large Language Models (LLMs) have unlocked unprecedented capabilities in diverse tasks, from programming to creative writing. Models such as GPT-3 (Brown et al., 2020), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), and PaLM 2-L (Anil et al., 2023) that are pre-trained on large unlabeled corpus have showcased remarkable powers in tackling both zero-shot (Radford et al., 2019) and few-shot (Brown et al., 2020; Chowdhery et al., 2022) tasks. Their performance and alignment with user intent can be further improved through supervised fine-tuning (SFT). A specific strategy that has garnered attention in this context is Instruction Tuning (Wei et al., 2021). By leveraging datasets enriched with instructions and corresponding human-authored completions, this approach has demonstrated enhanced generalization and model usability (Chung et al., 2022). Despite its promise, achieving desired model outputs across various domains remains a challenge.
Reinforcement Learning with Human Feedback (RLHF) has been introduced as a promising approach to enhance the alignment of large language models with human preferences (Stiennon et al., 2020). RLHF introduces notable complexities into the training process. Integrating a reward model and frequently sampling from the policy during training often demand substantial memory, thereby limiting the maximum feasible size of a model given memory constraints. The inclusion of roll-outs in the training loop can also slow down the optimization process, as it necessitates continuous model sampling and decoding. Further, understanding how to tune the PPO process calls for specialized knowledge.
Recognizing these challenges, recent research has pioneered alternatives to RLHF. Notable among these are RRHF (Yuan et al., 2023), SLiC (Zhao et al., 2022; 2023) and DPO (Rafailov et al., 2023). These methodologies aim to more effectively align LLMs with human preferences while avoiding the complexities of reinforcement learning. Given supervised fine-tuning data \(\mathcal{D}_{\text{sft}}=\{(x,y_{\text{ref}})\}\) and preference data \(\mathcal{D}_{\text{hf}}=\{(x,y_{w},y_{l})\}\) where output text \(y_{w}\) is preferred over \(y_{l}\) on the same input text \(x\), RLHF first trains a supervised fine-tuned model and a reward model, after which it applies a reinforcement learning algorithm such as PPO to maximize the reward. On the other hand, RL-free algorithms fit preference data in various ways. For example, RRHF uses a trained reward model or human raters to compute rewards \(\{r_{i}\}\) for multiple sequences generated from the same prompt \(x\), and then applies a ranking loss plus a supervised fine-tuning loss.
\[\mathcal{L}(\theta)=\sum_{r_{i}<r_{j}}\max{(0,\pi_{\theta}(y_{i}|x)-\pi_{ \theta}(y_{j}|x))}-\lambda\log{\pi_{\theta}(y_{\text{ref}}|x)}.\]
They study various ways of sampling sequences such as beam search, online sampling, iterative updates. Similarly, for sampled and labeled preference pairs, SLiC uses a contrastive ranking calibration loss plus a regularization loss
\[\mathcal{L}(\theta)=\max{(0,\delta-\log{\pi_{\theta}(y_{w}|x)}+\log{\pi_{ \theta}(y_{l}|x)})}-\lambda\log{\pi_{\theta}(y_{\text{ref}}|x)}, \tag{1}\]
where \(\delta\) is a positive margin and \(\pi_{\theta}\) is the learnable conditional probability function given by a language model. Intuitively speaking, SLiC penalizes the model when \(\frac{\pi_{\theta}(y_{w}|x)}{\pi_{\theta}(y_{l}|x)}<\exp(\delta)\) and thus encourages a large likelihood ratio between positive and negative outputs. Instead of directly fitting on human preference data, SLiC proposed to sample pairs from the SFT policy and get them labeled by a pairwise reward-ranking model. They show that a pairwise reward-ranking model, sampling from the SFT policy, and the proposed loss result in clear gains on a summarization dataset that are consistent as model size scales up.
Although RRHF and SLiC have been shown to be scalable alternatives to PPO, they lack theoretical understanding. DPO analyzes RLHF's objective function in the form of KL-regularized reward maximization, and analytically solves for the optimal policy induced by a reward function. Based on the Bradley-Terry (BT) model (Bradley and Terry, 1952), DPO proposes a maximum likelihood estimator (MLE) to fit on human preference data directly. It establishes a theoretical foundation that connects the language model with the preference model, casting policy estimation as a density estimation problem from the labeled response pairs. Mathematically, DPO expresses the human preference probability in terms of only the optimal policy \(\pi^{*}\) and the reference policy \(\pi_{\text{sft}}\):
\[p^{*}(y_{1}\succ y_{2}|x)=\frac{1}{1+\exp\left(\beta\log\frac{\pi^{*}(y_{2}|x)}{\pi_{\text{sft}}(y_{2}|x)}-\beta\log\frac{\pi^{*}(y_{1}|x)}{\pi_{\text{sft}}(y_{1}|x)}\right)} \tag{2}\]
In the above equation, \(\pi^{*}\) is the function to be estimated. Empirically, one can leverage observed preference pairs to approximate \(p^{*}(y_{1}\succ y_{2}|x)\). To estimate \(\pi^{*}\) as a density estimation problem, the optimal way is to fit a policy model on collected preference pairs sampled from \(\pi^{*}\). However, DPO uses the collected human preference data from other policies directly in all its experiments and lacks a study of the effect of sampling. Although they propose to sample pairs from the SFT policy and have them labeled by humans, this is still not strictly an MLE for the preference model, due to the mismatch between the sampling distribution and \(\pi^{*}\). In reality, it is very challenging to obtain human preference pairs directly sampled from \(\pi^{*}\).
In this work, we address the above issues by constructing preference pairs from the approximated \(\pi^{*}\). We illustrate our pipeline in Figure 1. Starting from a human preference dataset \(\mathcal{D}_{\text{sf}}\) collected from other policies, we first train a pairwise reward-ranking model, then apply a statistical rejection sampling algorithm to generate response pairs sampled from optimal policy by using SFT policy and the pairwise reward-ranking model. After that, we label the sampled response pairs by the reward model. Then we fit the model on labeled pairs via classification loss.
Our statistical rejection sampling refers to the technique of that name in statistics (Neal, 2003). In RLHF works (Bai et al., 2022; Stiennon et al., 2020; Touvron et al., 2023), rejection sampling usually refers to the best-of-N or top-k-over-N algorithm, where a batch of N completions is sampled from a language model policy and then evaluated by a reward model, returning the best one or the best k ones. In this paper we show that top-k-over-N is a special case of our statistical rejection sampling.
The contributions of this work are three-fold:
1. We propose a scalable and easy-to-implement framework to learn from human preference data. We provide a comprehensive recipe across different choices of loss functions and preference pair generation. We show the importance of the reward model instead of directly optimizing the model on the preference data.
2. Statistically, we unify DPO and SLiC by showing that they differ in the loss function used to fit the human preference data: DPO is a logistic regression on human preference data and SLiC is _almost_ equivalent to a support vector machine (SVM) with a hinge loss. We improve SLiC as the SVM counterpart of DPO.
3. We propose a statistical rejection sampling algorithm to sample pairs from the optimal policy and get them labeled by a pairwise reward-ranking model. The proposed sampling strategy is shown to be effective on three generative tasks.
## 2 Preliminaries
**Learning from Human Feedback.** Several works show the significant improvement of conditional language generation by learning from human feedback data. Such works include RLHF (Ziegler et al., 2019), RRHF (Yuan et al., 2023), SLiC (Zhao et al., 2023), and DPO (Rafailov et al., 2023). All algorithms take two inputs:
1. \(\pi_{\text{sft}}(y|x)\): a supervised fine-tuned policy (SFT), where \(x\) is the prompt and \(y\) is the response.
2. \(\mathcal{D}_{\text{hf}}=\{x^{(i)},y_{w}^{(i)},y_{l}^{(i)}\}_{i=1}^{N}\): a human preference dataset that distinguishes the better response from the worse given the same prompt.
**DPO, SLiC, RRHF and RLHF.** In the rapidly evolving landscape of model optimization based on human feedback data, various strategies have been developed. DPO employs a straightforward approach, fitting a logistic regression model directly to the human preference dataset \(\mathcal{D}_{\text{hf}}\) to achieve an optimized policy. In a similar vein, SLiC-direct employs a max margin loss function to fit human preference data, enhancing the performance of the model. SLiC-sample-rank, a variant of SLiC (Zhao et al., 2023), takes a two-step approach: it initially fits a pairwise reward-ranking model, denoted as \(\rho_{\psi}(x,y_{1},y_{2})\), to human preference data. Subsequently, response pairs are sampled from the SFT policy \(\pi_{\text{sft}}(y|x)\) and labeled according to \(\rho_{\psi}(x,y_{1},y_{2})\); the SLiC loss is then applied to these generated preference pairs (Zhao et al., 2023). Another strategy, RRHF, also generates multiple decoded sequences but distinguishes itself by ranking these sequences either manually by humans or through a reward model. The RRHF rank loss is then applied to refine the model based on these rankings. Lastly, RLHF diverges by first fitting a reward model and then employing the PPO algorithm to yield the final optimized policy (Schulman et al., 2017). Each of these approaches contributes
Figure 1: RSO first fits a pairwise reward-ranking model \(\rho_{\psi}(x,y_{1},y_{2})\) from human preference data. This model is later applied to generate preference pairs with candidates sampled from the optimal policy, followed by a preference optimization step to align sequence likelihood towards preferences.
uniquely to the field, offering a variety of methodologies for leveraging human feedback in model training.
**KL-Constrained Reward Maximization Objective.** Starting with a reward function \(r(x,y)\) and an input prompt distribution \(\mathcal{P}\), DPO and RLHF optimize the following objective:
\[\max_{\pi}\mathbb{E}_{x\sim\mathcal{P},y\sim\pi}\left[r(x,y)\right]-\beta\mathbb{D}_{KL}\left[\pi(y|x)||\pi_{\text{sft}}(y|x)\right] \tag{3}\]
**Optimal Policy.** Rafailov et al. (2023) showed that the optimal policy \(\pi_{r}(y|x)\) that maximizes the above objective is:
\[\pi_{r}(y|x)=\frac{1}{Z(x)}\pi_{\text{sft}}(y|x)\exp\left(\frac{1}{\beta}r(x,y)\right) \tag{4}\]
for all \(x\in\mathcal{P}\), where \(Z(x)=\sum_{y}\pi_{\text{sft}}(y|x)\exp\left(\frac{1}{\beta}r(x,y)\right)\) is the partition function. \(\beta\) controls the balance between exploitation and exploration. When \(\beta\to 0\), all probability mass will concentrate on the max reward with full exploitation. When \(\beta\to\infty\), the optimal policy will be the same as \(\pi_{\text{sft}}\) with full exploration. Rearranging Equation (4), we get
\[r(x,y)=\beta\log\frac{\pi_{r}(y|x)}{\pi_{\text{sft}}(y|x)}+\beta\log Z(x). \tag{5}\]
Equations (4) and (5) establish the relation between the optimal policy and the reward function: one can be inferred from the other. In reality, the final goal is to have a good policy for response generation, and \(\pi_{r}(y|x)\) is usually of more interest. The key is to effectively estimate \(\pi_{r}(y|x)\) from the human preference data.
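Restricted to a finite set of candidate responses, Equation (4) amounts to reweighting the SFT probabilities by \(\exp(r/\beta)\) and renormalizing. The sketch below is a minimal illustration (assuming numpy); the helper name and the probabilities and rewards are made up for illustration only.

```
import numpy as np

def optimal_policy_over_candidates(sft_probs, rewards, beta):
    # Equation (4) restricted to a finite candidate set:
    # pi_r(y|x) is proportional to pi_sft(y|x) * exp(r(x, y) / beta).
    w = np.asarray(sft_probs) * np.exp(np.asarray(rewards) / beta)
    return w / w.sum()

sft_probs = np.array([0.5, 0.3, 0.2])   # pi_sft over three candidate responses
rewards = np.array([0.1, 1.0, 2.0])
print(optimal_policy_over_candidates(sft_probs, rewards, beta=0.5))   # skews toward high reward
print(optimal_policy_over_candidates(sft_probs, rewards, beta=50.0))  # stays close to pi_sft
```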
**Preference Model.** Let the ground-truth reward function be \(r^{*}\); then the optimal policy \(\pi^{*}\) associated with \(r^{*}\) can be represented by Equation (4). For two responses \((y_{1},y_{2})\) from the same input \(x\), one can assume that
\[\mathbb{P}(y_{1}\succ y_{2}|x)=g(r^{*}(x,y_{1})-r^{*}(x,y_{2})), \tag{6}\]
where \(g:\mathbb{R}\to[0,1]\) is a monotonically non-decreasing function that converts the reward difference into winning probability. Specifically, if we set \(g\) as sigmoid function \(\sigma\), we get the Bradley-Terry (BT) model (Bradley and Terry, 1952). Reusing Equation (5), the reward advantage of \(y_{1}\) over \(y_{2}\) is
\[\delta_{r^{*}}(y_{1},y_{2},x,\pi^{*},\pi_{\text{sft}},\beta)\triangleq r^{*}( x,y_{1})-r^{*}(x,y_{2})=\beta\log\frac{\pi^{*}(y_{1}|x)}{\pi_{\text{sft}}(y_{1}|x )}-\beta\log\frac{\pi^{*}(y_{2}|x)}{\pi_{\text{sft}}(y_{2}|x)} \tag{7}\]
\[\mathbb{P}(y_{1}\succ y_{2}|x)=g(\delta_{r^{*}}(y_{1},y_{2},x,\pi^{*},\pi_{ \text{sft}},\beta)) \tag{8}\]
The above equation establishes a connection between the preference data and optimal policy. If we leverage the human preference data to represent \(\mathbb{P}(y_{1}\succ y_{2}|x)\), the estimation of \(\pi^{*}\) can be viewed as a density estimation problem from the preference data. We will discuss different ways of estimating \(\pi^{*}\) from the preference data in Section 3.1.
**Policy Estimation on Preference Pairs.** DPO estimates \(\pi^{*}\) by fitting the BT model on preference data directly using a logistic loss. SLiC estimates \(\pi^{*}\) in a contrastive manner, ensuring that the winner of the preference pair has a higher probability than the loser. In a statistical density estimation problem, the preference pairs should be generated from \(\pi^{*}\), the density to be estimated, while DPO uses preference pairs from some unknown distribution and SLiC-sample-rank uses preference pairs from \(\pi_{\text{sft}}\). Thus neither of the above approaches is the MLE of \(\pi^{*}\). This motivates us to develop an approach that can obtain preference pairs from \(\pi^{*}\).
**Reward Model.** Usually the reward model is a pointwise score assigned to a (prompt, response) pair. The model is trained based on the BT model (Ziegler et al., 2019; Bai et al., 2022). We argue that it is easier and more straightforward to train a pairwise reward model from (prompt, worse response, better response) triplets. Zhao et al. (2023) demonstrate that a pairwise reward model is preferred in RL-free learning. In this work we train the pairwise reward-ranking model \(\hat{\mathbb{P}}(y_{1}\succ y_{2}|x)=\rho_{\psi}(x,y_{1},y_{2})\) on \(\mathcal{D}_{p}\) to predict the probability that \(y_{1}\) is preferred over \(y_{2}\). For
summarization task, the input format is "[CONTEXT] document [SUMMARY A] positive summary [SUMMARY B] negative summary", and the output value is either "A" or "B". For AI assistant tasks, the input format is "[CONTEXT] conversation between human and assistant [RESPONSE A] positive response [RESPONSE B] negative response", and the output value is either "A" or "B".
We can induce the reward score of response \(y_{1}\) from the reward score of response \(y_{2}\) from winning probability according to Equation (6) with \(g=\sigma\):
\[r(x,y_{1})=\text{logit}(\mathbb{P}(y_{1}\succ y_{2}|x))+r(x,y_{2}),\]
where \(\text{logit}(x)=\log(\frac{x}{1-x})\). Suppose we have a baseline sequence \(y_{b}\) of reward score 0 and a fitted pairwise reward-ranking model \(\rho_{\psi}(x,y,y_{b})\), we have estimated reward score as
\[r_{\psi}(x,y)=\text{logit}(\rho_{\psi}(x,y,y_{b})). \tag{9}\]
In other words, "pointwise" reward score can be derived from a "pairwise" reward-ranking model with a baseline sequence. In practice, we choose a random decoded sequence from SFT policy as the baseline.
## 3 RSO Approach
### Statistical Estimation of the Optimal Policy \(\pi^{*}\)
Our proposed approach is illustrated in Figure 1. The inputs to our system are the SFT policy, the reward-ranking model, and prompts. First we sample responses from the optimal policy through a rejection sampling approach, then we fit a classification model on labeled preference pairs.
To study the effectiveness of our approach, we consider a few options on loss and preference dataset construction. Given a preference dataset \(\mathcal{D}_{p}=\{(x^{(i)},y_{w}^{(i)},y_{l}^{(i)})\}\), we can estimate \(\pi^{*}\) according to Equation (8). There are two aspects we need to consider for estimating \(\pi^{*}\):
1. Choice of loss function: Equation (8) is a simple binary classifier with \(\delta_{\tau^{*}}\) as logit. Two most common choices are logistic loss used in logistic regression and hinge loss used in support vector machine.
2. Choice of \(\mathcal{D}_{p}\): Equation (8) does not depend on the distribution of \(y_{1},y_{2}\) given \(x\). Thus we need to decide how to obtain \((x,y_{1},y_{2})\) triplets.
**Choice of loss function.** Given a preference dataset \(\mathcal{D}_{p}=\{(x^{(i)},y_{w}^{(i)},y_{l}^{(i)})\}\), we can fit a binary classifier according to Equation (8). DPO (Rafailov et al., 2023) proposed to use a sigmoid loss on the normalized likelihood (sigmoid-norm) to fit a logistic regression:
\[\mathcal{L}_{\text{sigmoid-norm}}\left(\pi_{\theta}|\pi_{\text{sft}},\mathcal{D}_{p}\right)=-\mathbb{E}_{(x,y_{w},y_{l})\sim\mathcal{D}_{p}}\left[\log\sigma\left(\gamma\log\frac{\pi_{\theta}\left(y_{w}|x\right)}{\pi_{\text{sft}}\left(y_{w}|x\right)}-\gamma\log\frac{\pi_{\theta}\left(y_{l}|x\right)}{\pi_{\text{sft}}\left(y_{l}|x\right)}\right)\right]. \tag{10}\]
In their original paper, \(\gamma=\beta\), and we extend this in the preference model as \(g(x)=\sigma(\frac{\gamma}{\beta}x)\). By doing this, we can fully control the steepness of the logit function for classification: the larger the \(\gamma\), the more we penalize mis-classified examples at the decision boundary. Another thing to notice is that we do not include any bias term in the logistic regression. This is due to the symmetry property of the preference data: if two rewards are the same, the winning probability should always be 0.5.
SLiC (Zhao et al., 2023) proposed to use a hinge loss as
\[\mathcal{L}_{\text{hinge}}\left(\pi_{\theta}|\mathcal{D}_{p}\right)=\mathbb{E} _{(x,y_{w},y_{l})\sim\mathcal{D}_{p}}\left[\max\left(0,1-\left[\gamma\log\pi_ {\theta}\left(y_{w}|x\right)-\gamma\log\pi_{\theta}\left(y_{l}|x\right)\right] \right)\right] \tag{11}\]
Note that we use \(1/\gamma\) as the margin \(\delta\) used in the original SLiC paper. This is equivalent to a hinge loss with logit \(\left(\gamma\log\pi_{\theta}\left(y_{w}|x\right)-\gamma\log\pi_{\theta}\left(y_{l}|x\right)\right)\). If we replace the logit by \(\frac{\gamma}{\beta}\delta_{r}(y_{1},y_{2},x,\pi_{\theta},\pi_{\text{sft}},\gamma)\), we introduce the hinge loss on normalized likelihood (hinge-norm):
\[\mathcal{L}_{\text{hinge-norm}}\left(\pi_{\theta}|\mathcal{D}_{p}\right)=\mathbb{E}_{(x,y_{w},y_{l})\sim\mathcal{D}_{p}}\left[\max\left(0,1-\left[\gamma\log\frac{\pi_{\theta}\left(y_{w}|x\right)}{\pi_{\text{sft}}\left(y_{w}|x\right)}-\gamma\log\frac{\pi_{\theta}\left(y_{l}|x\right)}{\pi_{\text{sft}}\left(y_{l}|x\right)}\right]\right)\right] \tag{12}\]
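For concreteness, the following is a minimal sketch (assuming numpy) of the per-example versions of the losses in Equations (10)-(12), written in terms of sequence log-probabilities under the policy and the SFT model; the function and argument names are ours, not the paper's.

```
import numpy as np

def sigmoid_norm_loss(logp_w, logp_l, sft_logp_w, sft_logp_l, gamma):
    # Equation (10): logistic loss on the normalized likelihood ratio.
    logits = gamma * ((logp_w - sft_logp_w) - (logp_l - sft_logp_l))
    return np.log1p(np.exp(-logits))        # -log sigmoid(logits)

def hinge_loss(logp_w, logp_l, gamma):
    # Equation (11): hinge loss on the (unnormalized) likelihood ratio.
    return np.maximum(0.0, 1.0 - gamma * (logp_w - logp_l))

def hinge_norm_loss(logp_w, logp_l, sft_logp_w, sft_logp_l, gamma):
    # Equation (12): hinge loss on the normalized likelihood ratio.
    logits = gamma * ((logp_w - sft_logp_w) - (logp_l - sft_logp_l))
    return np.maximum(0.0, 1.0 - logits)
```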
**Choice of preference data distribution.** Suppose we have access to the oracle preference data \(\mathcal{D}^{*}=\{(x^{(i)},y_{w}^{(i)},y_{l}^{(i)})\mid y_{w}^{(i)},y_{l}^{(i)}\sim\pi^{*}(y|x^{(i)})\}_{i=1}^{N^{*}}\); then we can directly fit an MLE on the dataset. In reality, we may not have access to such data, and instead have access to \(\mathcal{D}_{\text{hf}}=\{(x^{(i)},y_{w}^{(i)},y_{l}^{(i)})\mid y_{w}^{(i)},y_{l}^{(i)}\sim\pi_{\text{unk}}(y|x^{(i)})\}_{i=1}^{N_{\text{unk}}}\), where \(\pi_{\text{unk}}\) denotes some mixed unknown policies. The mixed unknown policies can include the SFT policy, the current RLHF policy, policies from other agents, or even web-mined preference pairs (Touvron et al., 2023). In our study, we use the human preference data sampled from other agents, such as OpenAI agents for the Reddit TL;DR dataset (Stiennon et al., 2020) and Anthropic agents for the AnthropicHH dataset (Bai et al., 2022).
Given \(\mathcal{D}_{\text{hf}}\), we consider the following three choices:
1. **direct**: directly fit the policy on \(\mathcal{D}_{\text{hf}}\) according to Equation (8). This is the approach used in DPO. In SLiC, they also proposed similar variant SLiC-direct, but with a different loss function.
2. **sft-sample-rank**: first train a reward-ranking model \(\rho_{\psi}(x,y_{1},y_{2})\) on \(\mathcal{D}_{\text{hf}}\). Then use \(\pi_{\text{sft}}(y|x)\) to sample response pairs and label them by \(\rho_{\psi}(x,y_{1},y_{2})\). This results in a preference dataset \(\mathcal{D}_{p}=\{(x^{(i)},y_{w}^{(i)},y_{l}^{(i)})\mid y_{w}^{(i)},y_{l}^{(i)}\sim\pi_{\text{sft}}(y|x^{(i)})\}_{i=1}^{N_{\text{tr}}}\). This is the same as the SLiC-sample-rank variant proposed in the SLiC paper.
3. **rso-sample-rank**: first train a reward-ranking model \(\rho_{\psi}(x,y_{1},y_{2})\) on \(\mathcal{D}_{\text{hf}}\). Then use \(\pi_{r_{\psi}}(y|x)\) induced by \(r_{\psi}\) according to Equation (4) to sample response pairs, where \(r_{\psi}(x,y)\) is induced from \(\rho_{\psi}(x,y_{1},y_{2})\). After that we label response pairs using the reward-ranking model to construct the preference dataset \(\mathcal{D}_{p}=\{(x^{(i)},y_{w}^{(i)},y_{l}^{(i)})\mid y_{w}^{(i)},y_{l}^{(i )}\sim\pi_{r_{\psi}}(y|x^{(i)})\}_{i=1}^{N_{\rho_{\psi}}}\) from the optimal policy \(\pi_{r_{\psi}}(y|x)\) via reward function \(r_{\psi}(x,y)\) according to Equation (4). The reward function \(r_{\psi}(x,y)\) is induced by a fitted reward model \(\rho_{\psi}(x,y_{1},y_{2})\) according to Equation (9).
Statistically speaking, since we are estimating \(\pi^{*}(y|x)\), it is desirable to draw samples from \(\pi^{*}(y|x)\). Rso-sample-rank is the best option in this direction because it can generate samples from \(\pi_{r_{\psi}}(y|x)\), which is closer to \(\pi^{*}(y|x)\) than the \(\pi_{\text{unk}}\) used in direct and the \(\pi_{\text{sft}}\) used in sft-sample-rank. However, sampling from \(\pi_{r_{\psi}}\) is not straightforward, and we propose a statistical rejection sampling approach to achieve this.
Figure 2: Statistical rejection sampling illustration. Although the output space of language models is a huge high-dimensional discrete space, we use a continuous 1D input space for illustration purpose. There are three curves in the figure: \(M\) times SFT policy, reward, optimal policy. The sample is first generated by SFT policy, then gets accepted or rejected depending on whether a uniform random variable locates in acceptance or rejection region. If the sample has high SFT policy probability but low optimal policy probability and reward score, it has a higher chance of being rejected.
```
from typing import List
import numpy as np

def conduct_rejection_sampling(response_candidates: List[str],
                               response_rewards: List[float],
                               num_samples: int,
                               beta: float):
  """Conducts rejection sampling guided by rewards.

  Args:
    response_candidates: response candidates from sft policy
    response_rewards: response rewards.
    num_samples: number of samples to sub-sample.
    beta: beta parameter in KL-constrained reward maximization objective.

  Returns:
    Rejection sampled sequences from the optimal policy.
  """
  candidates = {c: r for c, r in zip(response_candidates, response_rewards)}
  accepted = []
  while len(accepted) < num_samples:
    max_reward = max(candidates.values())
    to_remove = []
    for c, r in candidates.items():
      u = np.random.uniform()
      if u >= np.exp((r - max_reward) / beta):
        continue
      accepted.append(c)
      to_remove.append(c)
      if len(accepted) == num_samples:
        break
    for c in to_remove:
      candidates.pop(c)
  return accepted
```
**Algorithm 1** Statistical Rejection Sampling Algorithm in Python
### Statistical Rejection Sampling Algorithm
Statistical rejection sampling (Neal, 2003) is an efficient statistical technique to generate observations from a distribution. If we want to generate samples from a distribution with density \(\pi_{r_{\psi}}\), we can use \(\pi_{\text{sft}}\) as the proposal distribution. Then we follow these steps:
1. Generate \(y\sim\pi_{\text{sft}}(y|x)\) and \(u\sim U[0,1]\).
2. Let \(M=\min\{m\mid m\pi_{\text{sft}}(y|x)\geq\pi_{r_{\psi}}(y|x)\text{ for all }y\}\). If \(u<\frac{\pi_{r_{\psi}}(y|x)}{M\pi_{\text{sft}}(y|x)}\), then we accept \(y\). Otherwise, we reject \(y\) and redo the sampling.
Figure 2 is an illustration of the statistical rejection sampling approach. Starting from a proposal distribution \(\pi_{\text{sft}}\), we can draw samples from the optimal policy by rejecting more in the low-reward region and accepting more in the high-reward region.
**Statistical Rejection Sampling Algorithm.** We propose Algorithm 1 as an empirical version of the statistical rejection sampling algorithm. The derivation will be shown in Section A.1. Regarding Algorithm 1, we have the following theorem:
**Theorem 1**.: _Let \(N\) be the number of candidate responses, and \(r_{\text{max}}\) be the maximum reward among the candidate responses not yet accepted. As \(N\rightarrow\infty\), Algorithm 1 can generate \(K\) distinct samples
from \(\pi_{r_{\psi}}\) with expected acceptance rate \(\mathbb{E}_{y\sim\pi_{\theta}(y|x)}\left[\exp\left(\frac{1}{\beta}\cdot(r_{\psi}( x,y)-r_{\text{max}})\right)\right]\), where \(r_{\psi}(x,y)\) is defined in Equation (9)._
If \(\beta\rightarrow\infty\), each sample generated from the SFT policy will be accepted with probability 1. If \(\beta\to 0\), only the highest-reward response will be accepted and all other responses will be rejected. In the case of \(\beta\to 0\), Algorithm 1 will sample the \(K\) samples with the highest rewards, which coincides with the rejection sampling (top-k-over-N) referred to by Bai et al. (2022) and Touvron et al. (2023). \(\beta\) indicates how much we trust the reward model: if the reward model is very accurate and robust, we should set a small \(\beta\); otherwise, we should set a larger \(\beta\). In practice, we treat \(\beta\) as a hyper-parameter and pick one according to validation metrics.
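As a small usage sketch of the routine from Algorithm 1, the two limits discussed above can be observed directly; the candidate strings and rewards below are made up for illustration only.

```
import numpy as np

np.random.seed(0)
candidates = [f"response_{i}" for i in range(8)]
rewards = list(np.random.normal(size=8))

# Very small beta: essentially top-k-over-N; only the highest-reward candidates survive.
print(conduct_rejection_sampling(candidates, rewards, num_samples=4, beta=1e-3))
# Very large beta: acceptance probability close to 1, so the output is essentially SFT samples.
print(conduct_rejection_sampling(candidates, rewards, num_samples=4, beta=1e3))
```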
## 4 Related Work
Recent advancements in the pre-training of LLMs like GPT-3 (Brown et al., 2020), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), and PaLM 2-L (Anil et al., 2023) have greatly improved the ability to achieve tasks in a zero-shot or few-shot way (Radford et al., 2019; Chowdhery et al., 2022). However, improving performance in downstream tasks and better aligning with user intent can be achieved by fine-tuning these systems on datasets that include instructions and human-generated completions (Mishra et al., 2021; Sanh et al., 2021; Wei et al., 2021). Although instruction-based tuning has proven successful, it is often easier to gather human preferences on responses than it is to collect expert demonstrations. Consequently, later research has shifted toward training LLMs using human preferences. This approach has led to enhanced performance in various applications, including translation (Kreutzer et al., 2018), summarization (Stiennon et al., 2020), storytelling (Ziegler et al., 2019), and following instructions (Ramamurthy et al., 2022). The process usually includes training a reward model based on the Bradley-Terry (BT) model (Bradley and Terry, 1952) and reinforcement learning with human feedback (RLHF) to make the model perform better according to the reward function. RLHF (Ziegler et al., 2019) optimizes for maximum reward by interacting with a reward model using reinforcement learning algorithms such as PPO (Schulman et al., 2017). Despite the success of RLHF, the fine-tuning of large language models with reinforcement learning remains a practical challenge due to its instability, reward hacking, and scalability (Zhao et al., 2023; Rafailov et al., 2023). This work provides an effective RL-free way to optimize for human preference feedback with theoretical guarantees.
Recent works have proposed theoretically-justified approaches to optimizing relative preferences without relying on RL (Zhao et al., 2023; Yuan et al., 2023; Rafailov et al., 2023). By optimizing the model's compatibility with preference datasets under models such as the BT model, these methods fit on human- or model-ranked data pairs. SLiC (Zhao et al., 2023) proposed to first train a pairwise reward-ranking model, then label pairs generated from the supervised fine-tuned (SFT) model using the reward model, and finally train the model using a contrastive calibration loss (Zhao et al., 2022) and a regularization fine-tuning loss. RRHF (Yuan et al., 2023) assumes access to a list of responses and their reward values for the same input, and uses a zero-margin likelihood contrastive loss. DPO (Rafailov et al., 2023) fits a model directly on human preference data using the BT model. SLiC and RRHF lack theoretical understanding, and DPO does not optimally estimate the proposed policy density. To address the aforementioned issues, this work unifies the SLiC and DPO approaches from a choice-of-loss perspective with theoretical understanding, and proposes an improved way of collecting preference pairs via statistical rejection sampling.
Statistical rejection sampling (Neal, 2003) is a statistical approach used to generate samples from a target distribution using a proposal distribution. With the terminology "rejection sampling", Bai et al. (2022) propose to select the top \(k\) highest-reward sampled candidates to collect human preference datasets. More recently, Touvron et al. (2023) propose to use the same approach together with PPO, which is shown to improve RLHF. Their use of "rejection sampling" means taking the \(k\) decoded sequences with the best rewards among the samples, which is not the same concept as the one used in statistics. In this work, we use statistical rejection sampling in the statistical sense and show that the existing approach is a special case of our algorithm. We also provide theoretical guarantees and show that the approach can deliver an MLE estimation for human preference data.
## 5 Experiments
**Tasks.** We study RSO on three different open-ended text generation datasets: Reddit TL;DR summarization (Stiennon et al., 2020), CNN/DailyMail (Hermann et al., 2015), and the Anthropic Helpful and Harmless (AnthropicHH) dialogue dataset (Bai et al., 2022).
The Reddit TL;DR summarization dataset contains fine-tuning data \(\mathcal{D}_{\text{sft}}\) and human feedback data \(\mathcal{D}_{\text{hf}}\), along with their SFT and RLHF model decodes, which we use for comparison with our models. \(\mathcal{D}_{\text{sft}}^{\text{tldr}}\) is a filtered version of the Reddit TL;DR dataset (Volske et al., 2017). It contains 117k/6k/6k examples in train, validation and test splits. \(\mathcal{D}_{\text{hf}}^{\text{tldr}}\) consists of 64k human preferences on decodes from multiple models.
The CNN/DailyMail dataset contains online news articles paired with multi-sentence summaries. \(\mathcal{D}_{\text{sft}}^{\text{cnndm}}\) has 287k/13k/11k examples in the train, validation, and test splits. We use this dataset to test the cross-task generalization of the different approaches, assuming no access to any target or preference texts of the CNN/DailyMail dataset during training. Starting from an SFT model trained on \(\mathcal{D}_{\text{sft}}^{\text{tldr}}\), we further optimize the SFT policy using \(\mathcal{D}_{\text{hf}}^{\text{tldr}}\), and we evaluate performance using the target texts of the validation split of \(\mathcal{D}_{\text{sft}}^{\text{cnndm}}\).
AnthropicHH is a dialogue dataset in which \(x\) is a conversation between a human query and an AI assistant. Each example ends with a pair of responses generated by an unknown large language model, along with a preference label denoting the human-preferred response according to helpfulness and harmlessness. The original dataset has 161k/9k examples in the train and test splits. Each example has a task tag and two responses, a better one and a worse one; the tasks cover helpfulness and harmlessness, which conflict with each other. We use the helpfulness task in our experimental setting, and for SFT we use the positive (better) response as the SFT target.
**Method** Starting from a T5-large (770M) (Raffel et al., 2020) SFT policy and a T5-XXL (11B) pairwise reward-ranking model1, we consider the nine settings discussed in Section 3.1, i.e., all combinations of the loss functions \((\mathcal{L}_{\text{sigmoid-norm}},\mathcal{L}_{\text{hinge}},\mathcal{L}_{\text{hinge-norm}})\) and the preference data distributions (direct, sft-sample-rank, rso-sample-rank). Note that the DPO approach of the original paper is the same as sigmoid-norm direct in our setting, and the SLiC approach of the original paper is almost the same as hinge-sft-sample-rank in our setting, with two small differences. First, the original SLiC paper uses a calibration loss (first term in Equation (1)) plus a regularization fine-tuning loss (second term in Equation (1)); we found that the regularization loss does not significantly improve the final metrics (discussed in Appendix A.5) and thus dropped it to follow the DPO recipe. Second, SLiC uses a tournament-style procedure to rank candidates in a list: for example, given four candidates \(c_1,c_2,c_3,c_4\), it first ranks \(c_1,c_2\) and \(c_3,c_4\), and then ranks \(winner(c_1,c_2)\) against \(winner(c_3,c_4)\). Given \(m\) candidates, the ranking model is called \(m-1\) times and \(m-1\) positive/negative pairs are produced. In our case, we only use the first round of the tournament ranking, which yields strictly random samples from the SFT policy; given \(m\) candidates, we generate \(m/2\) pairs. We discuss this sampling difference in Section 5.2. Unless specifically mentioned, we set \(\beta=0.5\) and \(\gamma=0.05\). For the statistical rejection sampling algorithm (Algorithm 1), we first sample 64 response candidates from the SFT policy and then subsample 8 of them. For SFT sampling, we start with 64 sampled response candidates and randomly subsample 8 without replacement. We use batch size 128 and learning rate 1e-5 with the Adafactor optimizer (Shazeer and Stern, 2018). For each run, we pick the checkpoint with the highest reward score according to the reward-ranking model.
Footnote 1: We find that smaller T5 ranking/reward models do not converge reliably in our setup.
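As a rough guide to the nine settings, the sketch below writes out the three losses as we read them from the notation above (Equations (1) and (11) are not reproduced in this excerpt); the argument names are ours, \(\gamma\) is the scale set above, the "norm" variants act on SFT-normalized sequence log-probabilities, the unit margin in the hinge variants is an assumption, and the exact parameterization in Section 3.1 may differ.

```python
import numpy as np

def sigmoid_norm_loss(logp_w, logp_l, logp_w_sft, logp_l_sft, gamma=0.05):
    # DPO-style logistic-regression loss on SFT-normalized log-probabilities.
    margin = gamma * ((logp_w - logp_w_sft) - (logp_l - logp_l_sft))
    return np.logaddexp(0.0, -margin)          # -log sigmoid(margin)

def hinge_loss(logp_w, logp_l, gamma=0.05):
    # SLiC-style hinge (support-vector-machine) loss on raw log-probabilities.
    return np.maximum(0.0, 1.0 - gamma * (logp_w - logp_l))

def hinge_norm_loss(logp_w, logp_l, logp_w_sft, logp_l_sft, gamma=0.05):
    # Hinge loss applied to SFT-normalized log-probabilities.
    margin = gamma * ((logp_w - logp_w_sft) - (logp_l - logp_l_sft))
    return np.maximum(0.0, 1.0 - margin)

# logp_w / logp_l: policy log-probabilities of the preferred / rejected response;
# logp_*_sft: the corresponding SFT-policy log-probabilities.
print(sigmoid_norm_loss(-10.0, -12.0, -11.0, -11.0))
```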
**Evaluation** We use three different approaches to evaluate performance, all of them side-by-side comparisons of the generated sequence against the original SFT target. We report win rates from three systems: the trained T5-XXL pairwise reward-ranking model (Reward Model), PaLM 2-L few-shot side-by-side (AutoSxS), and human side-by-side (HumanSxS).
For AutoSxS, we use PaLM 2-L few-shot in-context learning to judge 8 decoded samples, 4 of them with the order of the two responses flipped. The label has three choices: A, B, and tie, with scores 1, 0, and 0.5, respectively. To ensure robustness, we use the average score to determine the win or loss only if its magnitude exceeds 0.35. The AutoSxS prompts are reported in Appendix A.3. The purpose of AutoSxS is to prevent
the artificially high reward scores produced by the Reward Model due to reward hacking on learned policies. Since the policy is trained using the information in the pairwise reward-ranking model, a higher win rate on the reward-ranking model does not necessarily mean a better policy. AutoSxS has been demonstrated to be effective and consistent in Rafailov et al. (2023) using GPT-4 as a zero-shot rater. In this work, we replace GPT-4 with PaLM 2-L for our evaluation using few-shot prompts; the quality of PaLM 2-L on similar tasks has been shown to be close to that of human raters (Lee et al., 2023; Shu et al., 2023). A systematic study of the consistency and quality of AutoSxS is beyond the scope of this work. The human evaluation details are discussed in Section 5.3.
### Performance comparison on three tasks
The comparison results are shown in Table 1. Compared with DPO (sigmoid-norm with "direct" pairs), the RSO variants improve both the reward-model and AutoSxS win rates on all three tasks. From the table we can also conclude that "sft-sample-rank" brings consistent gains over "direct", and "rso-sample-rank" brings further gains on top of "sft-sample-rank". The explanation is that when we fit the language model on preference data, the closer the data distribution is to that of the optimal policy, the closer the estimator is to the real MLE. Let \(\mathbb{D}(\cdot,\cdot)\) denote any distribution distance and \(\mathcal{D}^{a}_{p}\) the preference data generated by policy \(a\); then we have
\[\mathbb{D}(\mathcal{D}^{\text{rso-sample-rank}}_{p},\mathcal{D}^{\pi^{*}}_{p} )\leq\mathbb{D}(\mathcal{D}^{\text{sft-sample-rank}}_{p},\mathcal{D}^{\pi^{* }}_{p})\leq\mathbb{D}(\mathcal{D}^{\text{direct}}_{p},\mathcal{D}^{\pi^{*}}_{p})\]
Thus "rso-sample-rank" results in the best performance.
Regarding the loss function, we observe that sigmoid-norm and hinge-norm perform similarly, while hinge shows some extent of reward hacking, with a higher reward-model win rate but a lower auto-rater win rate. This is because the hinge loss trusts the reward function too much without considering the SFT policy, as shown in Equation (11).
We showcase an example with responses from different policies on Reddit TL;DR and AnthropicHH tasks in Figure 3 and Figure 4, respectively.
Figure 3: Example summaries generated by SFT, SLiC, DPO, and RSO policies for a Reddit post. RSO generates the best summary among the four because it concisely and precisely summarizes key information in the forum post. Salient details are bolded.
### RSO ablation
**Effect of \(\gamma\) and \(\beta\) in RSO** To study the effect of \(\gamma\), we fix the statistical rejection sampling \(\beta=0.5\) and vary \(\gamma=0.005,0.05,0.5\) in the loss function on the Reddit TL;DR dataset. Figure 5(a) shows that \(\gamma=0.05\) provides the optimal win rate.
To study the effect of \(\beta\) in rejection sampling, we fix \(\gamma=0.05\) in the loss function and vary \(\beta=0,0.05,0.5,5\) on the Reddit TL;DR dataset. Figure 5(b) shows that \(\beta=0.5\) provides the optimal win rate. Notice that \(\beta=0\) coincides with the rejection sampling algorithm used in Bai et al. (2022) and Touvron et al. (2023). Thus it is not always best to select top-k-over-N: the quality of the reward model may be limited, and we should not blindly trust the reward without regularization in the constrained reward optimization (Equation (3)).
**Preference pairs sampling and ranking** To better understand the effect of tournament ranking and statistical rejection sampling, we compare different sampling strategies. Since we first sample 64 responses from the SFT policy and then keep 8 responses by statistical rejection sampling, it is natural to ask: why not use all 64 samples in the calibration? In SLiC, tournament ranking is used, which introduces a bias toward higher-reward sequences, and we would like to understand its effect. Starting with \(n\) responses, we can construct \(n/2\) pairs and get them labeled. We call this
\begin{table}
\begin{tabular}{l l l c c} \hline \hline
Approach & \multicolumn{2}{c}{Ablation} & \multicolumn{2}{c}{Metrics} \\
 & Loss & Preference Pair & Reward Model (\%) & AutoSxS (\%) \\ \hline \hline
\multicolumn{5}{c}{**Reddit TL;DR**} \\ \hline
DPO & sigmoid-norm & direct & 84.35 & 67.72 \\
 & sigmoid-norm & sft-sample-rank & 88.63 & 69.02 \\
 & sigmoid-norm & rso-sample-rank & **92.37** & **71.86** \\
SLiC\({}_{\text{direct}}\) & hinge & direct & 86.92 & 60.54 \\
SLiC\({}_{\text{sample-rank}}\) & hinge & sft-sample-rank & 89.75 & 67.04 \\
 & hinge & rso-sample-rank & **93.36** & **69.26** \\
 & hinge-norm & direct & 83.93 & 66.63 \\
 & hinge-norm & sft-sample-rank & 88.04 & 68.46 \\
 & hinge-norm & rso-sample-rank & **92.80** & **70.84** \\ \hline \hline
\multicolumn{5}{c}{**CNN/DailyMail**} \\ \hline
DPO & sigmoid-norm & direct & 61.31 & 37.36 \\
 & sigmoid-norm & sft-sample-rank & 62.72 & 38.63 \\
 & sigmoid-norm & rso-sample-rank & **69.38** & **39.71** \\
SLiC\({}_{\text{direct}}\) & hinge & direct & 64.18 & 33.63 \\
 & hinge & sft-sample-rank & 67.16 & 33.21 \\
 & hinge & rso-sample-rank & **71.62** & **35.46** \\
 & hinge-norm & direct & 60.04 & 33.91 \\
 & hinge-norm & sft-sample-rank & 61.77 & 40.63 \\
 & hinge-norm & rso-sample-rank & **69.82** & **42.18** \\ \hline \hline
\multicolumn{5}{c}{**AnthropicHH**} \\ \hline
DPO & sigmoid-norm & direct & 51.63 & 24.01 \\
 & sigmoid-norm & sft-sample-rank & 85.09 & 39.56 \\
 & sigmoid-norm & rso-sample-rank & **86.94** & **40.98** \\
SLiC\({}_{\text{direct}}\) & hinge & direct & 35.95 & 15.69 \\
 & hinge & sft-sample-rank & 80.82 & 30.66 \\
 & hinge & rso-sample-rank & **82.21** & **32.56** \\
 & hinge-norm & direct & 49.55 & 22.89 \\
 & hinge-norm & sft-sample-rank & 82.40 & 35.96 \\
 & hinge-norm & rso-sample-rank & **84.44** & **38.58** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Compare different methods to leverage human feedback data on three tasks. “direct” preference pair directly comes from human preference data. “sft-sample-rank” preference pair uses SFT policy to sample 4 pairs for each training example input text and get them labeled by the pairwise reward model. “rso-sample-rank” preference pair uses rejection sampling to sample 4 pairs for each training example input text and get them labeled by the pairwise reward model. Both pairwise reward model and few-shot PaLM 2-L win rate against SFT target text are reported. RSO shows improvement on three tasks over DPO and SLiC with a clear margin.
approach "first-round-rank". We can also continue the tournament until a winner is decided, for a total of \(n-1\) pairs (each game eliminates one response); we call this approach "tournament-rank". We use the sigmoid-norm loss and conduct an ablation study over six settings (Table 2). We observe that tournament ranking brings consistent gains across settings on the reward model, but it does not improve the AutoSxS win rate in the rso-8-sample case. Rso-8-sample-first-round-rank turns out to be the optimal choice according to the AutoSxS metric, which means it is not always beneficial to sample many responses or to conduct tournament ranking.
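A toy sketch of the two pairing schemes discussed above; the `score` callable is a hypothetical stand-in for the pairwise reward-ranking model, and the sketch assumes the number of candidates is a power of two.

```python
import random

def rank_pair(a, b, score):
    # Return (winner, loser) according to the pairwise preference callable `score`.
    return (a, b) if score(a, b) else (b, a)

def first_round_rank(cands, score):
    # n candidates -> n/2 labeled pairs (strictly random pairings).
    random.shuffle(cands)
    return [rank_pair(cands[i], cands[i + 1], score) for i in range(0, len(cands), 2)]

def tournament_rank(cands, score):
    # Single-elimination tournament: n candidates -> n - 1 labeled pairs.
    random.shuffle(cands)
    pairs, alive = [], list(cands)
    while len(alive) > 1:
        winners = []
        for i in range(0, len(alive), 2):
            w, l = rank_pair(alive[i], alive[i + 1], score)
            pairs.append((w, l))
            winners.append(w)
        alive = winners
    return pairs

cands = list(range(8))
score = lambda a, b: a > b                     # toy preference: larger value wins
print(len(first_round_rank(cands[:], score)))  # 4 pairs
print(len(tournament_rank(cands[:], score)))   # 7 pairs
```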
Figure 4: Example responses generated by SFT, SLiC, DPO, and RSO policies for a Human-Assistant dialogue on the AnthropicHH dataset. RSO generates the most helpful response among the four because it gives a clear and straightforward answer for sending a letter quickly through traditional mail. In contrast, SFT repeats information about email rather than answering the question about traditional mail, while SLiC and DPO are vague and repetitive. Salient details are bolded.
Figure 5: Effect of hyper-parameters in loss functions and statistical rejection sampling algorithm.
### Human Evaluation Results
We conduct side-by-side human evaluation using Amazon Mechanical Turk. Given a document and three responses generated from direct, sft-sample-rank, and rso-sample-rank, raters are asked to assign a pointwise overall quality score to each response and to choose the best one. Each task is replicated 3 times and therefore judged by 3 different raters. To eliminate bias, we anonymize all the models and randomly shuffle the order of responses for each task. We aggregate pointwise metrics by averaging the ratings across all replicas, and we aggregate the choice metric by majority vote. The rating tasks are shown in Appendix A.4. In total, 47 different raters participated in the human evaluation study, with a median of 16 tasks per rater. The human evaluation results are shown in Table 3. Rso-sample-rank is better than direct and sft-sample-rank for all loss functions and tasks evaluated, with clear margins. RSO (sigmoid-norm rso-sample-rank) is chosen as preferred more than twice as often as DPO (sigmoid-norm direct) on both tasks. Comparing the two loss functions (sigmoid-norm and hinge-norm), there is no clear conclusion on which one is better when applying rso-sample-rank; thus the improved SLiC loss and the original DPO loss perform similarly.
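A toy sketch of the aggregation described above; the system names, rating scale, and data layout are placeholders.

```python
from collections import Counter
from statistics import mean

def aggregate_task(replicas):
    """Aggregate the 3 raters' judgments for one task.
    Each replica is (ratings, choice), e.g. ({'direct': 4, 'sft-sample-rank': 3,
    'rso-sample-rank': 5}, 'rso-sample-rank')."""
    systems = replicas[0][0].keys()
    avg_quality = {s: mean(rep[0][s] for rep in replicas) for s in systems}   # pointwise: average
    preferred, _ = Counter(rep[1] for rep in replicas).most_common(1)[0]      # choice: majority vote
    return avg_quality, preferred

print(aggregate_task([
    ({'direct': 4, 'sft-sample-rank': 3, 'rso-sample-rank': 5}, 'rso-sample-rank'),
    ({'direct': 3, 'sft-sample-rank': 3, 'rso-sample-rank': 4}, 'rso-sample-rank'),
    ({'direct': 4, 'sft-sample-rank': 2, 'rso-sample-rank': 4}, 'direct'),
]))
```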
\begin{table}
\begin{tabular}{l r r} \hline
\multicolumn{1}{c}{Ablation} & \multicolumn{2}{c}{Metrics} \\
Preference Pair & Reward Model (\%) & AutoSxS (\%) \\ \hline \hline
sft-8-sample-first-round-rank & 88.63 & 68.51 \\
sft-8-sample-tournament-rank & 90.69 & 68.57 \\
rso-8-sample-first-round-rank & 92.37 & **71.86** \\
rso-8-sample-tournament-rank & **93.35** & 71.69 \\
sft-64-sample-first-round-rank & 88.91 & 68.84 \\
sft-64-sample-tournament-rank & 91.14 & 71.08 \\ \hline
\end{tabular}
\end{table}
Table 2: Comparison among different preference pairs sampling and ranking approaches on the Reddit TL;DR dataset. “sft-” means generating preference pairs from SFT policy and “rso-” means generating preference pairs from optimal policy via statistical rejection sampling algorithm. “k-sample” means we randomly sample \(k\) response candidates and then construct labeled preference pairs. For \(n\) sampled response candidates, “first-round-rank” means only use \(n/2\) pairs in policy optimization, while “tournament-rank” use \(n-1\) pairs (all the comparisons in a single-elimination tournament) in policy optimization. Four pairs made of eight candidates generated by statistical rejection sampling show the best AutoSxS metrics among all settings considered.
\begin{table}
\begin{tabular}{l c c c} \hline
Loss & Preference Pair & Chosen as Preferred & Quality \\ \hline
\multicolumn{4}{c}{**Reddit TL;DR**} \\ \hline
sigmoid-norm & direct & 21\% & 3.84 \\
sigmoid-norm & sft-sample-rank & 10\% & 3.74 \\
sigmoid-norm & rso-sample-rank & **48\%** & **4.02** \\ \hline
hinge-norm & direct & 21\% & 3.80 \\
hinge-norm & sft-sample-rank & 11\% & 3.68 \\
hinge-norm & rso-sample-rank & **46\%** & **3.97** \\ \hline
\multicolumn{4}{c}{**AnthropicHH**} \\ \hline
sigmoid-norm & direct & 15\% & 3.04 \\
sigmoid-norm & sft-sample-rank & 22\% & 3.21 \\
sigmoid-norm & rso-sample-rank & **31\%** & **3.37** \\ \hline
hinge-norm & direct & 13\% & 3.33 \\
hinge-norm & sft-sample-rank & 22\% & 3.56 \\
hinge-norm & rso-sample-rank & **33\%** & **3.60** \\ \hline
\end{tabular}
\end{table}
Table 3: 3-way human evaluation to compare three ways of constructing preference pairs: direct, sft-sample-rank, and rso-sample-rank. Rso-sample-rank shows a clear improvement over direct and sft-sample-rank for both loss functions and both tasks, being chosen as preferred more frequently and receiving higher quality ratings. The hinge-norm (improved SLiC loss as support vector machine) and sigmoid-norm (DPO loss as logistic regression) losses perform comparably on preference pairs generated by rso-sample-rank.
## Conclusion
In this paper, we proposed the RSO recipe to train large language models from human feedback. Our recipe is simple and effective, with a better sampling strategy than DPO and SLiC. We unified the loss functions used in DPO and SLiC from the preference optimization perspective, with one acting as logistic regression and the other as a support vector machine. We demonstrated the strength of our approach on three tasks with comprehensive numerical experiments and analysis. Future work may include studying RSO on larger models, larger-scale decoding samples, other language generation tasks, other reward functions, and/or non-human feedback.
|
2309.16444 | Quasi-local masses and cosmological coupling of black holes and
mimickers | Motivated by the recent heated debate on whether the masses of local objects,
such as compact stars or black holes (BHs), may be affected by the large-scale,
cosmological dynamics, we analyze the conditions under which, in a general
relativity framework, such a coupling small/large scales is allowed. We shed
light on some controversial arguments, which have been used to rule out the
latter possibility. We find that the cosmological coupling occurs whenever the
energy of the central objects is quantified by the quasi-local Misner-Sharp
mass (MS). Conversely, the decoupling occurs whenever the MS mass is fully
equivalent to the (nonlocal) Arnowitt-Deser-Misner (ADM) mass. Consequently,
for singular BHs embedded in cosmological backgrounds, like the
Schwarzschild-de Sitter or McVittie solutions, we show that there is no
cosmological coupling, confirming previous results in the literature.
Furthermore, we show that nonsingular compact objects couple to the
cosmological background, as quantified by their MS mass. We conclude that
observational evidence of cosmological coupling of astrophysical BHs would be
the smoking gun of their nonsingular nature. | Mariano Cadoni, Riccardo Murgia, Mirko Pitzalis, Andrea P. Sanna | 2023-09-28T13:55:06Z | http://arxiv.org/abs/2309.16444v3 | # Quasi-local masses and cosmological coupling of black holes and mimickers
###### Abstract
Motivated by the recent heated debate on whether the masses of local objects, such as compact stars or black holes (BHs), may be affected by the large-scale, cosmological dynamics, we analyze the conditions under which, in a general relativity framework, such a coupling between small and large scales is allowed. We shed light on some controversial arguments, which have been used to rule out the latter possibility. We argue that the actual observational quantity at play is the quasi-local Misner-Sharp (MS) mass, and we find that the cosmological coupling occurs whenever the energy of the central object is quantified by it. Conversely, the decoupling occurs whenever the MS mass is fully equivalent to the (nonlocal) Arnowitt-Deser-Misner (ADM) mass. Consequently, for singular BHs embedded in cosmological backgrounds, like the Schwarzschild-de Sitter or McVittie solutions, we show that there is no cosmological coupling, confirming previous results in the literature. Furthermore, we show that nonsingular compact objects couple to the cosmological background, as quantified by their MS mass. We conclude that observational evidence of cosmological coupling of astrophysical BHs would be the smoking gun of their nonsingular nature.
## I Introduction
Recently, there has been renewed interest in an old question of general relativity (GR): are small-scale, local systems, like planets, stars or compact objects/black holes (BHs), affected by the large-scale dynamics of the cosmological background they are embedded in?
The first known attempt to consistently answer this question dates back to McVittie [1], who found a solution of Einstein's field equations describing a point-like object embedded in a spatially-flat Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime. However, the issue was far from being settled, as the extremely nontrivial physics involved in this embedding entails conceptual and interpretative problems (see, _e.g._, Refs. [2; 3; 4; 5; 6]).
As a result, a considerable body of work has ensued over the years, with, however, contradictory results (for an incomplete list, see, \(e.g.\), Refs. [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] and references therein). The main conceptual obstacle is the huge separation of scales between local inhomogeneities, whose characteristic scale is their virial radius \(\sim GM\), and the large-scale cosmological dynamics, occurring instead at the Hubble radius \(\sim H^{-1}\). Although there seems to be general agreement on the negligible impact of the cosmological expansion on small-scale1 Newtonian systems [35; 37; 68; 69], the issue is still open for local, relativistic bodies, like BHs.
Footnote 1: However, this is not necessarily the case with large-scale structures [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67].
Notice that the coupling could be a rather natural feature of highly-compact gravitational systems. Naively, the mass/radius relation of a BH would suggest that, if lengths are affected by the cosmological expansion, so should masses be.
In recent years, the debate has been rekindled by developments based on the theoretical work of Croker and collaborators [70; 71; 72]. Through a perturbative approach and an averaging procedure, they derived the Friedmann equations by varying the gravitational action. In this way, they show that the pressure in the interior of BHs/compact objects contributes actively to the energy density sourcing the cosmological equations. The conservation of the stress-energy tensor then implies the presence of a coupling of these objects with the cosmological expansion, which should manifest as a significant shift in their masses. Their model also predicts that the masses of local objects should vary with the scale factor \(a\) according to the power law \(M(a)\propto a^{k}\)[70]. This formula was then tested in Ref. [73] against an observational sample of supermassive BHs at the centre of elliptical galaxies at different redshifts. Such objects are quiescent, \(i.e.\), they undergo negligible accretion or mergers, so that the dataset is not sensibly affected by growth channels other than the supposed coupling mechanism with the expanding background. This set of data showed a preference for \(k\sim 3\)[73; 74]. The conclusion of the authors of Ref. [73] is that BHs may be the source of dark energy.
However, this claim and the underlying theoretical framework have faced significant criticism. Even if the underlying framework could be flawed from the beginning [75], most criticism has been directed at the concept of coupling itself. On the one hand, the substantial separation in scales between local and cosmological systems makes such coupling implausible [76; 77]. On the other hand, the equation of state of matter inside a BH, which is typically taken to be dust, is unable to mimic dark energy [78; 79]. Moreover, current observational constraints
on the slope parameter \(k\), which captures the mass-redshift dependence, are highly controversial, as they heavily depend on the astrophysical probes employed in the analysis [80; 81; 82; 83].
These critiques cast again doubts on the feasibility and validity of the proposed coupling between cosmological dynamics and the masses of BHs/compact objects. However, in Ref. [74], we and collaborators built a solid general relativistic framework that enabled us to describe the coupling of local inhomogeneities with the cosmological background in full generality, as well as to recover the expression of the mass-shift from Refs. [70; 73].
Independently of the intricate situation on the observational side, the theoretical question that we want to address in the present paper is: in which conditions does the cosmological expansion affect dynamical quantities, such as the BH masses?
We give a precise answer to this question by working on a solid theoretical ground. Previously, the debate was biased by the use of the nonlocal Arnowitt-Deser-Misner (ADM) mass to quantify the energy pertaining to local objects. The starting point of this work, instead, is the identification of the quasi-local Misner-Sharp (MS) mass as the most appropriate quantity to determine the energy of local compact objects, and to investigate their cosmological coupling.
The MS mass is covariantly defined, and it reduces to the ADM mass at asymptotically-flat infinity. Therefore, it can be identified as the actual astrophysical observable. We then use the MS mass to compute the energy of various cosmologically-embedded solutions: Schwarzschild-de Sitter, McVittie, Sultana-Dyer (SD), and nonsingular BHs sourced by anisotropic fluids.
We find that when the energy of the central object is described using the MS mass, the cosmological coupling becomes manifest, while the decoupling occurs whenever the energy of the central object is equivalent to its ADM mass in an asymptotically-flat spacetime, as in singular BH models. We explicitly show that when the energy of the nonembedded object can be quantified everywhere by the MS mass, but not by the ADM one, the presence of cosmological coupling is inevitable. That is the case of nonsingular BHs and other compact objects.
The paper is structured as follows.
In Section II, we discuss the role played by the different definitions of mass for BHs/compact objects in cosmology, and provide a brief overview of the key properties of the MS mass.
In Section III, we revisit the singular Schwarzschild-de Sitter and McVittie solutions, demonstrating that the mass of the central local object does not couple to the cosmological dynamics.
In Section IV, we examine the cosmological embedding of nonsingular BHs/compact objects: we firstly revisit the SD solution, then discuss the most general case of compact objects with anisotropic sources, and finally the isotropic case. We explicitly show that the cosmological coupling is quantified by the MS mass, and we conclude that any observational evidence of cosmological coupling could be a smoking gun for the nonsingular nature of astrophysical BHs.
We present our conclusions in Section V.
## II Definitions of mass for compact objects and cosmological coupling
Answering the question posed in Section I, about the mass growth of compact objects due to cosmological expansion, is complicated by the fact that there are two relevant formal definitions for the mass of a static, spherically-symmetric compact object: (\(i\)) the MS mass, a quasi-local quantity characterizing the energy inside a sphere of a given radius; (\(ii\)) the ADM mass, a nonlocal quantity defined in terms of a surface integral at spatial infinity.
Static eternal BHs embedded in a cosmological background are usually described by neglecting the cosmological asymptotics and resorting to asymptotic flatness instead, where all observables can be precisely identified and quantified in terms of surface integrals at spatial infinity [84]. In this case, one can safely use the ADM mass. For spacetimes with different asymptotics, like the FLRW ones, the identification and interpretation of such nonlocal observables become much more involved.
The key issue in this type of problem is not purely kinematic, like, \(e.g.\), that concerning the cosmological redshift of distances in cosmology, but fully dynamic. As such, it implies some explicit or implicit assumption about how the small-scale, inhomogeneous dynamics of the compact object is related to the large-scale, homogeneous and isotropic cosmological background dynamics. The usual assumption is that there exists a scale of _decoupling_, which is essentially justified by the huge separation of scales between the heaviest known galactic BH (\(\sim 10^{-3}\) pc) and the Hubble radius (\(\sim 10^{10}\) pc) [77].
The use of the ADM mass to characterize the energy of cosmologically-embedded BHs is fully justified only if one accepts the assumption of the decoupling of scales. This is the only case where a cosmologically-embedded BH can be safely treated as an eternal, asymptotically-flat object.
The decoupling of scales assumption is physically justified only for the Schwarzschild-de Sitter solution, where one has a globally-defined, static, radial coordinate to safely define the \(r\to 0\) and \(r\to\infty\) limits. In other cases, such as, \(e.g.\), the McVittie solution, the \(r\to 0\) and \(r\to\infty\) limits use different radial coordinates, related by a time-dependent coordinate transformation.
Finally, another strong limitation of the ADM mass is that it correctly quantifies the energy of the compact objects only in the case of astrophysical bodies in which the stress-energy tensor is zero outside, such as singular BHs. This is not the case for nonsingular BHs [85].
We thus argue that, in the most general case, the actual physical observable is the quasi-local MS mass. In the
following, we will briefly review its basic properties.
### Basic features of the Misner-Sharp mass
Depending on the asymptotics of a given spacetime, there are several ways to quantify the energy of a gravitational system. As already stated, in asymptotically-flat spacetimes (or, more in general, for spacetimes with a timelike asymptotic boundary, like, \(e.g.\), anti de Sitter), the key observables can be unambiguously quantified, through the ADM decomposition, as _nonlocal_ quantities defined at the boundary of the spacetime, the so-called "hair" of classical, singular Kerr-Newman solutions [86].
For BHs not in vacuum, like nonsingular BHs, this identification is less straightforward, even if the manifold is asymptotically flat (see, \(e.g.\), Refs. [85; 87] and references therein). In these cases, there is a different definition which better encapsulates the _local_ properties that the energy of a gravitational system should satisfy: the Hawking-Hayward quasi-local mass [88; 89], which, for spherically-symmetric spacetimes, reduces to the MS mass [90]. In a generic asymptotically-flat spacetime, the ADM and MS masses coincide only at spatial infinity, namely \(M_{\rm ADM}=\lim_{r\to\infty}M_{\rm MS}\). They are fully equivalent outside the compact object only if the stress-energy tensor vanishes outside of the object.
On the contrary, the MS mass can be defined _covariantly_ also for non-asymptotically flat and non-stationary spacetimes. Moreover, as it encodes the _local_ properties of the energy of a given gravitational system, it can be considered as the physical mass tested with astrophysical observations. For this reason, it represents the most appropriate tool to investigate the possible cosmological coupling of local objects embedded in cosmological backgrounds.
For a spherically-symmetric spacetime with a metric of the general form2
Footnote 2: Throughout the paper, we shall use natural units in which \(c=\hbar=k_{\rm B}=1\). Latin indices \(a,b=1,2\) denote the time and radial coordinates. We use Greek indices to denote four-dimensional spacetime coordinates. \({\rm d}\Omega\) is the line element of the two-sphere.
\[{\rm d}s^{2}=h_{ab}(x){\rm d}x^{a}{\rm d}x^{b}+r(x)^{2}{\rm d}\Omega^{2}\,, \tag{1}\]
the MS mass is defined as
\[M_{\rm MS}=\frac{r(x)}{2G}\left[1-h^{ab}(x)\nabla_{a}r(x)\nabla_{b}r(x)\right]\,. \tag{2}\]
Given that the MS mass is defined in a covariant way, all the physical results based on its use are coordinate-independent, while its explicit form rests, of course, on the particular gauge chosen. In the following, we consider the systems of coordinates that are mostly adopted when discussing the embedding of spherical objects in cosmological backgrounds. One system is given by Lemaitre coordinates (\(t,\,r,\,\theta,\,\phi\))
\[{\rm d}s^{2}=-e^{\alpha(t,r)}{\rm d}t^{2}+e^{\beta(t,r)}{\rm d}r^{2}+R(t,r)^{ 2}{\rm d}\Omega^{2}\,, \tag{3}\]
where \(\alpha\), \(\beta\) and \(R\) all depend on the radial and time coordinates. Note also that this metric generalizes the ones written in isotropic coordinates
\[{\rm d}s^{2}=-e^{\alpha(t,r)}{\rm d}t^{2}+e^{\tilde{\beta}(t,r)}\left({\rm d} r^{2}+r^{2}{\rm d}\Omega^{2}\right)\,, \tag{4}\]
that have also been frequently adopted to discuss the cosmological embedding of compact objects.
However, in order to discuss the MS mass, it is more convenient to use \(R(t,\,r)\) as the radial coordinate. Through a straightforward change of coordinates (see Ref. [91]), one can recast the metric (3) into the form
\[{\rm d}s^{2}=-A(T,R){\rm d}T^{2}+B(T,R){\rm d}R^{2}+R^{2}{\rm d}\Omega^{2}\,, \tag{5}\]
with the relations
\[A =\left(e^{\alpha}-e^{\beta}\frac{\dot{R}^{2}}{R^{\prime 2}} \right)F^{2}\,; \tag{6a}\] \[B =\frac{e^{\alpha+\beta}}{R^{\prime 2}\left(e^{\alpha}-e^{\beta} \frac{\dot{R}^{2}}{R^{\prime 2}}\right)}\,, \tag{6b}\]
where the dot and prime stand for derivation with respect to \(t\) and \(r\), respectively. \(F\) is an integration function entering the time-coordinate transformation, it is required to guarantee that \({\rm d}T\) is an exact differential [91]. Using Eq. (5), the MS mass reads as
\[M_{\rm MS} =\frac{R}{2G}\left(1-g^{\mu\nu}\nabla_{\mu}R\nabla_{\nu}R\right) \tag{7}\] \[=\frac{R}{2G}\left(1-g^{RR}\right)=\frac{R}{2G}\left(1-B^{-1}\right)\,. \tag{8}\]
Using Eq. (6b), in the gauge (3), it becomes
\[M_{\rm MS}=\frac{R}{2G}\left(1+\dot{R}^{2}e^{-\alpha}-R^{\prime 2}e^{-\beta} \right)\,. \tag{9}\]
In the following sections we will use this formula to compute the mass of cosmologically-embedded compact objects.
## III Embedding point-like objects and perfect fluid stars in cosmological backgrounds
In this Section we will use the MS mass to reproduce already-known results regarding the nonexistence of a cosmological coupling of standard singular BHs. By doing so, we will clarify the physical reasons behind the absence of the small-/large- scale coupling in two well-known solutions that were recently reconsidered to advocate against the ubiquity of the cosmological coupling [77]. The gist of their argument is that there is
a complete separation between the scales pertaining to local objects (like BHs) and the dynamics of the cosmological background, such that the mass of the central BH can be approximately identified with its ADM mass. First of all, let us note that the ADM mass cannot be properly defined in non-asymptotically flat spacetimes, such as those corresponding to cosmologically embedded objects. Secondly, as previously stressed, the ADM mass is a _nonlocal_ quantity, thereby unable to quantify local effects, such as the coupling. Finally, the separation of scales presented in [77] involves rather questionable limits on small and large scales, due to the use of time-dependent radial coordinates. The two limits truly represent separated scales only if we consider the small-scale local dynamics at times for which the scale factor does not change significantly. If this is not the case, the notion of small- and large-scale limits would instead depend on time. The only case in which one can truly show that the two limits give neatly separated scales at all times is the Schwarzschild-de Sitter solution, since one can define coordinates in which the spacetime is static.
We shall show that, for point-like objects, the separation of scales and the resulting decoupling emerges naturally when considering the MS mass of the solutions mentioned above.
### Schwarzschild-de Sitter solution
The simplest known example of an embedding of a compact object (mass-particle) in a cosmological background is the Schwarzschild-de Sitter metric, which is a vacuum solution of Einstein's equations with a positive cosmological constant. A peculiarity of this metric is that it can be written in the static patch
\[\begin{split}\mathrm{d}s^{2}=&-\left(1-\frac{2Gm} {r}-H^{2}r^{2}\right)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{1-\frac{2Gm}{r}- H^{2}r^{2}}\\ &+r^{2}\mathrm{d}\Omega^{2}\,.\end{split} \tag{10}\]
As already noted, here we have a clear separation between the small scales, where the solution reduces to the Schwarzschild one with an ADM mass \(m\), and the large scales, where we instead have the de Sitter asymptotics. The decoupling can be readily seen using the MS mass given by Eq. (9), instead of a more intricate change of coordinates as done in Ref. [77]. A straightforward calculation yields indeed
\[M_{\mathrm{MS}}=m+\frac{H^{2}}{2G}r^{3}\,. \tag{11}\]
The first term is the ADM mass of the Schwarzschild BH, while the second term is simply the mass contribution due to the constant cosmological density over a volume \(r^{3}\). There is no trace of the growth of \(m\) due to the expanding cosmological background given by the Hubble parameter \(H=\dot{a}/a\), namely there is no cosmological coupling.
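This algebra can be checked symbolically; the short sympy sketch below (our notation, assuming sympy is available) evaluates Eq. (8) for the static metric (10) and reproduces Eq. (11).

```python
import sympy as sp

r, G, m, H = sp.symbols('r G m H', positive=True)

# Schwarzschild-de Sitter in the static patch, Eq. (10): g^{RR} = f(r)
f = 1 - 2*G*m/r - H**2*r**2

# Quasi-local Misner-Sharp mass, Eq. (8): M_MS = (r / 2G) (1 - g^{RR})
M_MS = sp.simplify(r/(2*G)*(1 - f))

print(M_MS)                                          # m + H**2*r**3/(2*G)
print(sp.simplify(M_MS - (m + H**2*r**3/(2*G))))     # 0: Eq. (11) is recovered
```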
### McVittie solution
The McVittie spacetime [1] represents a generalization of the Schwarzschild-de Sitter spacetime to a generic FLRW model. It was the first exact solution of GR which allowed for the embedding of spherically-symmetric objects in a generic cosmological background. It is based on some assumptions, the most important ones being a perfect, isotropic and spherically-symmetric fluid as a source, and the absence of fluxes of matter/energy into/away from the central object. Moreover, the metric is required to reduce to the Schwarzschild one, written in isotropic coordinates, when expressed in terms of radial coordinate of the observer, \(\hat{r}=ar\). It thus has the same singularity at the origin.
In the coordinates used in Eq. (3), it reads as
\[\begin{split}\mathrm{d}s^{2}=&-\frac{\left(1-\frac {Gm(t)}{2r}\right)^{2}}{\left(1+\frac{Gm(t)}{2r}\right)^{2}}\mathrm{d}t^{2}\\ &+a^{2}\left(1+\frac{Gm(t)}{2r}\right)^{4}\left(\mathrm{d}r^{2}+ r^{2}\mathrm{d}\Omega^{2}\right)\,,\end{split} \tag{12}\]
where \(a\) is the scale factor and \(m(t)=m_{0}/a(t)\), from Einstein's equations and from the requirement of absence of radial fluxes.
We identify the areal radius as
\[R(t,r)\equiv a(t)\,r\left(1+\frac{Gm_{0}}{2ra(t)}\right)^{2}\,. \tag{13}\]
Writing this solution in the gauge (5) yields
\[\begin{split}\mathrm{d}s^{2}=&-\left(1-\frac{2Gm_{0 }}{R}-H^{2}R^{2}\right)F^{2}\,\mathrm{d}T^{2}\\ &+\frac{\mathrm{d}R^{2}}{1-\frac{2Gm_{0}}{R}-H^{2}R^{2}}+R^{2}\, \mathrm{d}\Omega^{2}\,,\end{split} \tag{14}\]
which is very similar to the Schwarzschild-de Sitter solution, the only differences being the factor \(F\) in the \(g_{TT}\) component, and the fact that \(H\) is not restricted to describe a de Sitter cosmological background. This similarity translates also in the behavior of the MS mass, where again we have a complete separation of scales
\[M_{\mathrm{MS}}=\frac{R}{2G}\left(\frac{2Gm_{0}}{R}+H^{2}R^{2}\right)=m_{0}+ \frac{H^{2}}{2G}R^{3}\,. \tag{15}\]
We again identify two independent terms: a contribution of the ADM mass of the BH, and a purely cosmological term. No coupling is present, as expected, since the density sourcing the McVittie solution is purely cosmological, _i.e._, \(\rho=3H^{2}/8\pi G\), while the contribution of the ADM mass of the central object can only be accounted for by inserting by hand the usual Dirac delta distribution.
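The same check can be carried out directly in the isotropic form (12), applying the quasi-local mass definition with the areal radius (13); the sympy sketch below uses our own symbol names and is only meant to verify Eq. (15).

```python
import sympy as sp

t, r, G, m0 = sp.symbols('t r G m_0', positive=True)
a = sp.Function('a', positive=True)(t)
H = sp.diff(a, t)/a

mu = G*m0/(2*a*r)                       # G m(t)/(2r) with m(t) = m0/a(t)
ealpha = (1 - mu)**2/(1 + mu)**2        # e^{alpha} in the McVittie metric (12)
ebeta  = a**2*(1 + mu)**4               # e^{beta}: conformal factor of the spatial part
R = a*r*(1 + mu)**2                     # areal radius, Eq. (13)

# Misner-Sharp mass, Eq. (9)
M_MS = R/(2*G)*(1 + sp.diff(R, t)**2/ealpha - sp.diff(R, r)**2/ebeta)

# Check against Eq. (15): M_MS = m0 + H^2 R^3 / (2G)
print(sp.simplify(M_MS - (m0 + H**2*R**3/(2*G))))    # -> 0
```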
## IV Coupling of compact objects: local anisotropic and isotropic sources
In Section III we showed that, for solutions of Einstein's equations describing the cosmological embedding of the Schwarzschild black hole, there is no cosmological coupling.
On the contrary, when the impact of small-scale anisotropies is taken into account, it is possible to have nontrivial solutions describing compact objects/BHs [92; 93; 94] circumventing the Penrose theorem [95], and enabling the construction of nonsingular solutions [96; 97; 98; 99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109]. In this Section, we will consider several scenarios of cosmological embedding of compact objects: the SD solution, solutions sourced by anisotropic fluids, solutions sourced by isotropic fluids, and charged singular solutions.
### The Sultana-Dyer solution
The SD solution [30] is another exact solution describing a BH embedded in a spatially flat FLRW. It was found by conformally transforming the Schwarzschild metric with the goal of changing the Schwarzschild global timelike Killing vector into a conformal Killing one. The conformal transformation also allows the spacetime to be nonsingular at \(r=0\).
Despite being problematic, due to the fluid becoming tachyonic at late times near the horizon [30; 35], it is still interesting for our purposes.
The metric is essentially the McVittie one (12), with an important difference: the mass \(m\) appearing in the metric is now a constant
\[\mathrm{d}s^{2}=-\frac{\left(1-\frac{Gm_{0}}{2r}\right)^{2}}{\left(1+\frac{Gm _{0}}{2r}\right)^{2}}\mathrm{d}t^{2}+a^{2}\left(1+\frac{Gm_{0}}{2r}\right)^{4} \left(\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega^{2}\right)\,. \tag{16}\]
The fact that the mass does not depend on \(a\) naturally introduces fluxes, making the source anisotropic. In fact, the source of the SD solution is a combination of two non-interacting perfect fluids, one in the form of an ordinary massive dust and the other of a null dust [30; 35]. It is well-known that such a combination can be recast as a single anisotropic fluid [110]. As we shall see, this is the origin of the coupling with the cosmological background.
We now compute the MS mass (9) of the SD solution. We first identify
\[R\equiv ar\left(1+\frac{Gm_{0}}{2r}\right)^{2}\,, \tag{17}\]
with which, using Eq. (6b) and reading \(e^{\alpha}\) and \(e^{\beta}\) from Eq. (16), we get
\[B=\frac{1-\frac{2Gam_{0}}{R}}{\left(1-\frac{2Gam_{0}}{R}\right)^{2}-H^{2}R^{2}}\,. \tag{18}\]
Therefore, the MS mass (9) reads as
\[M_{\mathrm{MS}}=a\,m_{0}+\frac{H^{2}R^{3}}{2G\left(1-\frac{2Gam_{0}}{R}\right) }\,. \tag{19}\]
The first term represents the coupling of the mass of the solution with the cosmological background, and it is consistent with the linearly-scaling universal coupling term derived for the first time in Ref. [74] for generic anisotropic fluids. The second term cannot be interpreted as a pure cosmological contribution, due to the presence, in the denominator, of a term depending on \(a\,m_{0}\). The latter encodes the interaction between the small and large scales, as a physical consequence of the accretion flow of cosmic fluid onto the central object, due to the presence of nonzero fluxes in the source. Note that similar results hold for the class of exact models analyzed in Ref. [35], devised to correct the problems of the SD metric, as well as for the solutions considered in Ref. [111].
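The expression (19) can be verified with the same symbolic computation, now with a constant mass parameter in the metric (16) and the areal radius (17); again the symbol names are ours.

```python
import sympy as sp

t, r, G, m0 = sp.symbols('t r G m_0', positive=True)
a = sp.Function('a', positive=True)(t)
H = sp.diff(a, t)/a

nu = G*m0/(2*r)                         # m0 is now constant (no 1/a factor), Eq. (16)
ealpha = (1 - nu)**2/(1 + nu)**2
ebeta  = a**2*(1 + nu)**4
R = a*r*(1 + nu)**2                     # areal radius, Eq. (17)

# Misner-Sharp mass, Eq. (9)
M_MS = R/(2*G)*(1 + sp.diff(R, t)**2/ealpha - sp.diff(R, r)**2/ebeta)

# Check against Eq. (19): a*m0 + H^2 R^3 / (2G (1 - 2 G a m0 / R))
target = a*m0 + H**2*R**3/(2*G*(1 - 2*G*a*m0/R))
print(sp.simplify(M_MS - target))       # -> 0
```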
### Compact objects sourced by anisotropic fluids
In Ref. [74] it was shown that the metric parametrization (\(\eta\) is the conformal time)
\[\mathrm{d}s^{2}=a^{2}(\eta)\left[-e^{\alpha(\eta,r)}\mathrm{d}\eta^{2}+e^{ \beta(\eta,r)}\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega^{2}\right]\,, \tag{20}\]
representing the cosmological embedding of a generic compact object sourced by an anisotropic fluid, allows one to describe the coupling of GR BHs/horizonless configurations to the cosmological background. The stress-energy tensor pertaining to the source has the form \(T^{\mu}_{\nu}=\mathrm{diag}\left(-\rho,\,p_{\parallel},\,p_{\perp},\,p_{\perp}\right)\).
Einstein's equations and stress-energy tensor conservation give:
\[e^{-\beta(r,\eta)}=g(r)a^{r\alpha^{\prime}}\,; \tag{21a}\] \[\frac{\dot{a}^{2}}{a^{2}}\left(3-r\alpha^{\prime}\right)e^{-\alpha}+\frac{1-e^{-\beta}+r\beta^{\prime}e^{-\beta}}{r^{2}}=8\pi Ga^{2}\rho\,; \tag{21b}\] \[\frac{e^{-\beta}+re^{-\beta}\alpha^{\prime}-1}{r^{2}}+e^{-\alpha}\left(-2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}}\right)=8\pi Ga^{2}p_{\parallel}\,; \tag{21c}\] \[\dot{\rho}+\frac{\dot{a}}{a}\left(3\rho+3p_{\parallel}+r\,p_{\parallel}^{\prime}\right)=0\,, \tag{21d}\]
where a dot now means derivation with respect to \(\eta\). The remaining equation, stemming from the conservation of the stress-energy tensor, is used to compute \(p_{\perp}\).
The field equations allow for a regime in which \(\dot{\alpha}=0\), which is the only one in which compact objects can be consistently embedded in a cosmological background (see Refs. [74; 112; 113]). It describes the _absence_ of fluxes, unlike the case analyzed in Section IV.1.
This set-up is suitable to describe both the large- (cosmological) and the small- (inhomogeneity) scale dynam
ics, and it allows for a non-zero interaction term between these two scales3.
Footnote 3: It is worth noting that the cosmological embedding realized using the metric (20) is more general than the one used by McVitite in Eq. (12). In our case the metric is required to reduce to the static metric of the local compact object at any fixed instant of time. Conversely, in the McVitite parametrization, such reduction must happen by passing to the observer radial coordinate \(\hat{r}=ar\).
The cosmological mass coupling is immediately manifest when computing the density through Einstein's equations and integrating it over a reference cosmological volume (see Ref. [74] for further details)
\[\begin{split} M(\eta)&=4\pi a^{3}(\eta)\int_{0}^{L} \mathrm{d}r\,r^{2}\,\rho(r,\eta)\\ &=\frac{4\pi}{3}\rho_{1}a^{3}L^{3}\,e^{-\alpha(L)}+M(a_{i})\frac{ a}{a_{i}}\left[1-e^{-\beta_{0}(L)}a^{k_{L}}\right]\,,\end{split} \tag{22}\]
where \(L\) is the scale of a particular compact objects, while
\[k_{L}\equiv k(L)=r\alpha^{\prime}(r)\bigg{|}_{r=L}\,. \tag{23}\]
Here we defined \(M(a_{i})\equiv a_{i}L/2G\), which is the mass of the object computed at the coupling epoch. This expression defines the proper Schwarzschild radius \(a_{i}L=2GM(a_{i})\) at this reference time.
The first term in Eq. (22) corresponds to a purely cosmological contribution, which depends on the cosmological background energy density \(\rho_{1}\). It is therefore expected to be relevant only beyond the transition scale to homogeneity and isotropy. It does not play a role at the typical scales \(L\) of the compact object. As here we are not interested in whether and how the small-scale dynamics affects the large-scale cosmological one, another long-standing problem commonly known as "cosmological backreaction" [55; 56; 57; 58; 59], we can safely neglect this term in the following discussion.
The second term in Eq. (22) represents a "universal cosmological Schwarzschild mass". The cosmological coupling emerges as a linear dependence between the mass of the object and the scale factor \(a\). Note that this is the same universal coupling term found for the SD solution (see Eq. (19)). It has here a geometric origin in terms of the local curvature generated by the compact object [74]. Finally, the last term encodes model-dependent corrections to the universal term.
Note that, for standard Schwarzschild BHs, the sum of the second and third terms in Eq. (22) is identically zero, and we are left with the purely cosmological contribution.
Eq. (22) can also be derived from the general definition of the MS mass. Using \(R=ar\), from Eq. (9) one gets
\[\begin{split} M_{\mathrm{MS}}&=\frac{ar}{2G}\left[1+\frac{\dot{a}^{2}}{a^{2}}r^{2}e^{-\alpha}-e^{-\beta}\right]\\ &=\frac{4\pi}{3}\rho_{1}a^{3}r^{3}e^{-\alpha}+\frac{ar}{2G}\left[1-e^{-\beta_{0}(r)}a^{k(r)}\right]\,,\end{split} \tag{24}\]
where, in the last step, we made use of the Friedmann equation \(3\dot{a}^{2}/a^{2}=8\pi Ga^{2}\rho_{1}\), with \(\rho_{1}=\rho_{1}(\eta)\) being the density of the cosmological fluid sourcing the background. To obtain the last element we also used Eq. (21a). Eq. (24) is equivalent to Eq. (22) when evaluated at the radius of the compact object \(r=L\).
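The step from the first to the second line of Eq. (24) only uses Eq. (21a), with \(g(r)=e^{-\beta_{0}(r)}\) and \(k(r)=r\alpha^{\prime}(r)\), together with the Friedmann equation quoted above; a minimal sympy sketch of this bookkeeping (our notation) is:

```python
import sympy as sp

eta, r, G = sp.symbols('eta r G', positive=True)
a = sp.Function('a', positive=True)(eta)
alpha = sp.Function('alpha')(r)                     # static profile (dot-alpha = 0 regime)
g = sp.Function('g', positive=True)(r)              # integration function of Eq. (21a)

rho1 = 3*sp.diff(a, eta)**2/(8*sp.pi*G*a**4)        # Friedmann: 3 a'^2/a^2 = 8 pi G a^2 rho1
ebeta_inv = g*a**(r*sp.diff(alpha, r))              # e^{-beta} from Eq. (21a)

R = a*r
M_MS = R/(2*G)*(1 + sp.diff(a, eta)**2/a**2*r**2*sp.exp(-alpha) - ebeta_inv)   # first line of (24)

target = sp.Rational(4, 3)*sp.pi*rho1*a**3*r**3*sp.exp(-alpha) \
         + a*r/(2*G)*(1 - g*a**(r*sp.diff(alpha, r)))                          # second line of (24)
print(sp.simplify(M_MS - target))                   # -> 0
```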
Notice that the cosmological coupling is present independently of the underlying cosmological background (for instance, it is present also in a de Sitter background). That is because \(\rho_{1}\) can be freely specified, and it determines the scale factor _regardless_ of the dynamics of the small-scale inhomogeneities4. If we impose the solution at constant time to be Schwarzschild-de Sitter, we have no coupling, \(i.e.\), we have a complete separation between scales. Let us finally recall that, differently from the SD and other solutions, here the coupling is not due to some accretion onto compact objects, since we imposed the absence of radial fluxes.
Footnote 4: This is strictly true only if one neglects the backreaction (see the discussion above)
### Compact objects sourced by isotropic fluids
The case of nonsingular compact objects sourced by isotropic fluids can be considered as a particular case of the previously discussed anisotropic fluid, where \(p_{\perp}=p_{\parallel}=p\). The only difference is that the conservation of the stress-energy tensor now gives the additional equation \(p^{\prime}+\alpha^{\prime}\left(\rho+p\right)/2=0\). This makes the system (21) more constrained, so that, as shown by McVittie [1], it does not allow for smooth solutions describing the cosmological embedding of compact objects; only the standard FLRW cosmological solutions are allowed. On the contrary, in the derivation of the main results of Section IV.2, _i.e._, Eqs. (22) and (24), we did not exploit the conservation equations for the stress-energy tensor. Therefore, they still hold true also in the case of isotropic compact objects, provided that the cosmologically-embedded solutions exist. This could be, for instance, the case of a non-smooth cosmological embedding of a local compact object. Thus, we expect nonsingular compact objects sourced by isotropic fluids to couple to the cosmological evolution in the same way as their anisotropic counterparts.
### Charged singular solution embedded in a FLRW background
The cosmological embedding of singular solutions can also be extended to the charged Reissner-Nordstrom (RN) solution of GR. There has been some work devoted to the charged generalization of the McVittie spacetime, the so-called Shah-Vaidya solution (see Refs. [11; 27; 33; 41; 47] and references therein). Even in this case, the stress-energy tensor is anisotropic, but any flux onto/away from the central object is absent. One might attempt to naively apply the general results of this section to the cosmological embedding of the RN BH, given that the stress-energy tensor is nonzero outside of the horizon, implying that the MS and ADM mass are not identical in the BH exterior. However, there is a problem in the definition of the quasi-local mass, due to the divergence associated to the electric field at \(r=0\).
In the presence of an electric field, the gravitational potential scales as \(1/r^{2}\), due to the energy density of the electric field scaling as \(r^{-4}\). As a consequence, there is an extra term in the quasi-local energy, scaling as \(1/r\), which is ill-defined at \(r=0\). This feature is interpreted as a repulsive effect due to the intensity of the electric field in the vicinity of the singularity (for further details, see, \(e.g.\), Section VIII in Ref. [90], or Section III in Ref. [114]).
Alternatively, one can quantify the energy of a charged solution with the ADM mass, which is anyhow undefined in non-asymptotically flat spacetimes, and moreover does not capture the coupling to the cosmological background, as discussed in detail throughout this paper. We thus conclude that a clear description of the coupling between a charged singular spacetime embedded in a cosmological background is quite problematic. Moreover, from a purely phenomenological point of view, the presence of an electromagnetic charge is astrophysically irrelevant.
## V Conclusions
In this paper we have shown that the longstanding debate on the theoretical status of the coupling of BHs/compact astrophysical objects to the cosmological background can be finally put on solid ground, if the physically observable mass of the compact object is identified as the MS mass. By doing so, we have explicitly demonstrated that singular BHs cannot couple to the large-scale cosmological dynamics.
We have also shown that the cosmological coupling is not only allowed, but quite natural, for generic compact objects sourced both by isotropic fluids and local anisotropies, like, \(e.g.\), nonsingular BHs. The energy of these systems has a quasi-local nature, so that it can be correctly quantified by the MS mass (instead of the nonlocal ADM mass). In this case, we have found that the mass of the object is intrinsically linked to the scale factor \(a\) by a universal linearly-scaling leading term, with a geometric origin in terms of the local curvature of spacetime.
From a purely theoretical point of view, there are two main issues that still need further investigation: \((i)\) to fully understand the cosmological coupling of singular objects sourced by local anisotropies, but characterized by an ill-defined MS mass (which prevents a direct application of the results of Section IV); this is the case of the RN BH briefly discussed in this paper; \((ii)\) to generalize the framework in order to encompass rotating compact objects (see, \(e.g.\), Refs. [13; 14; 15] for earlier works on the embedding of the Kerr metric in a FLRW cosmology). Note, however, that including rotation is expected to have a non-negligible impact only on the subleading non-universal term in Eq. (22), without affecting at all our conclusions about the leading universal linear coupling term and the decoupling of singular BHs.
As described in Section I, the situation remains rather complicated on the observational side, mainly due to the lack of clear observational results. New sets of data are needed in order to validate the theoretical predictions for the exponent \(k\). Our theoretical analysis shows quite clearly that eternal BHs, \(i.e.\), objects with event horizons, are characterized either by \(k=0\) if they are singular objects, or by \(k=1\) if they are regular. It seems that GR cannot allow for other possibilities. Observational evidence of a nonzero cosmological coupling would thus be the smoking gun of the nonsingular nature of actual astrophysical black holes. Conversely, a clear detection of \(k=0\) would imply that nonsingular GR BHs are hardly compatible with observations.
## VI Acknowledgements
We thank Valerio Faraoni and Francesca Lepori for very interesting and fruitful discussions.
|
2309.07549 | A single layer representation of the scattered field for multiple
scattering problems | The scattering of scalar waves by a set of scatterers is considered. It is
proven that the scattered field can be represented as the integral of a density
over an arbitrary smooth surface enclosing the scatterers. This is a
generalization of the series expansion over spherical harmonics and spherical
Bessel functions for spherical geometries. It allows an extension of the Fast
Multiple Algorithm to non spherical domains. | Didier Felbacq, Anthony Gourdin, Emmanuel Rousseau | 2023-09-14T09:20:57Z | http://arxiv.org/abs/2309.07549v1 | # A single layer representation of the scattered field for multiple scattering problems
###### Abstract
The scattering of scalar waves by a set of scatterers is considered. It is proven that the scattered field can be represented as the integral of a density over an arbitrary smooth surface enclosing the scatterers. This is a generalization of the series expansion over spherical harmonics and spherical Bessel functions for spherical geometries. It allows an extension of the Fast Multipole Algorithm to non-spherical domains.
_Keywords_: scattering theory, scalar waves, integral representation
## 1 Introduction and setting of the problem
We consider the scattering of scalar waves by a set of obstacles in \(\mathbb{R}^{p},\,p=2,3\), in the harmonic regime with a time dependence of \(e^{-i\omega t}\). When the number of obstacles is very large, solving the scattering problem requires an efficient algorithm, such as the Fast Multipole Method [1]. In the present work, we show that the scattered field can be represented by an integral supported on a surface enclosing the obstacles. This approach allows a drastic reduction of the number of unknowns and an extension of the Fast Multipole Method. The theoretical results are illustrated by simple numerical examples in the last section.
Let us specify a few notations. The unit sphere of \(\mathbb{R}^{p}\) is denoted \(S^{p-1}\). For \(\mathbf{x}\in\mathbb{R}^{p}\), we denote \(x=\left|\mathbf{x}\right|,\,\hat{x}=\mathbf{x}/x\) and \(k=\omega/c\). We denote \(\mathcal{H}_{a}u=\Delta u+k^{2}au\), where the potential \(a\) belongs to \(L^{\infty}(\mathbb{R}^{p})\). The fundamental solution \(g^{+}\) of the Helmholtz equation: \(\mathcal{H}_{1}g^{+}=\delta_{0}\) with outgoing wave condition is: \(g^{+}(\mathbf{x})=-\frac{1}{4\pi x}e^{ikx}\) for \(p=3\) and \(g^{+}(\mathbf{x})=-\frac{i}{4}H_{0}^{(1)}(kx)\) for \(p=2\). The Green function with the incoming wave condition is denoted \(g^{-}(\mathbf{x})\). Explicitly: \(g^{-}(\mathbf{x})=-\frac{1}{4\pi x}e^{-ikx}\) for \(p=3\), and \(g^{-}(\mathbf{x})=-\frac{i}{4}H_{0}^{(2)}(kx)\) for \(p=2\). The functions \(H_{0}^{(1)}\) and \(H_{0}^{(2)}\) are the Hankel functions of first and second type [2].
Let us consider the following time-harmonic scattering problem. Let \(\Omega\) be a bounded domain of \(\mathbb{R}^{p}\) with boundary \(\partial\Omega=\Gamma\), containing a collection of scatterers (see Figure 1). The scatterers are characterized by a potential \(a\) such that \(a-1\) has a compact support \(K\subset\Omega\). For a given incident field \(u^{\rm inc}(x)\) satisfying the Helmholtz equation: \(\mathcal{H}_{1}u^{\rm inc}=0\), the scattering problem consists in finding the scattered field \(u^{s}(\mathbf{x})\) such that the total field \(u=u^{\rm inc}+u^{s}\) satisfies:
\[\mathcal{H}_{a}u=0\,,\]
and \(u^{s}\) satisfies a radiation condition at infinity: \(\partial_{n}u^{s}-iku^{s}=o\left(x^{-1}\right)\) and \(u^{s}(\mathbf{x})=O\left(x^{-1}\right)\) when \(x\to\infty\).
This scattering problem has a unique solution, as stated in the following lemma:
**Lemma 1**.: _The scattered field \(u^{s}\) exists and is unique. There is a linear operator \(\mathcal{T}\), the scattering amplitude, relating \(u^{\rm inc}\) to \(u^{s}\): \(u^{s}=\mathcal{T}(u^{\rm inc})\)._
Proof.: \(\mathcal{H}_{a}(u^{s})=(\mathcal{H}_{1}-\mathcal{H}_{a})(u^{\rm inc})\) and \(\mathcal{V}\equiv\mathcal{H}_{1}-\mathcal{H}_{a}\) is null outside the compact region \(K\). Then : \(\mathcal{H}_{1}(u^{s})=\mathcal{V}(u^{s})+\mathcal{V}(u^{\rm inc})\) and thus: \(u^{s}=(1-\mathcal{G}_{1}\mathcal{V})^{-1}\mathcal{G}_{1}\mathcal{V}(u^{\rm inc})\) where the inverse operator \(\mathcal{G}_{1}=\mathcal{H}_{1}^{-1}\) is an integral convolution operator with kernel \(g^{+}\).
The existence of the resolvent operator is classical although rather subtle (see for instance [3, 4]). This provides a decomposition of the total field in the form:
\[u=u^{\rm inc}+\mathcal{T}(u^{\rm inc}).\]
Let \(B_{e}=B(O,R_{e})\) be the smallest ball with center \(O\) containing \(\Omega\) and \(B_{i}=B(O,R_{i})\) the largest ball, with center \(O\), contained in \(K\). A modal expansion for the scattered
Figure 1: Sketch of the scattering problem under study.
field is valid outside \(B_{e}\):
\[u^{s}(\mathbf{x})=\left\{\begin{array}{l}\sum_{n,m}u^{s}_{nm}h^{(1)}_{n}(kx)Y^{m }_{n}(\hat{x})\mbox{ for }p=3\\ \sum_{n}u^{s}_{n}H^{(1)}_{n}(kx)e^{in\theta}\mbox{ for }p=2\end{array} \right.,\,x>R_{e}. \tag{1}\]
Here, \(h^{(1)}_{n}\) is the spherical Hankel function of first type and order \(n\)[2] and \(\theta\) is the polar angle of \(\mathbf{x}\) in \(\mathbb{R}^{2}\).
Whether the functions defined by these series can be extended inside the ball is a difficult problem known as the Rayleigh hypothesis; it was essentially solved in the 1980s [8, 9].
Note that, if there is only one scatterer, i.e. \(a\) is constant inside \(K\), there is also a representation of the field inside \(K\) by a series in the following form:
\[u(\mathbf{x})=\left\{\begin{array}{l}\sum_{n,m}u^{s}_{nm}j_{n}(kx)Y^{m}_{n}( \hat{x})\mbox{ for }p=3\\ \sum_{n}u^{s}_{n}J_{n}(kx)e^{in\theta}\mbox{ for }p=2\end{array}\right.,\,x<R_{i}. \tag{2}\]
Here, \(J_{n}\) (resp. \(j_{n}\)) is the Bessel (resp. spherical Bessel) function of order \(n\)[2].
When \(\overline{\Omega}=K\) and the boundary \(\Gamma\) is a sphere, both series can be matched on \(\Gamma\), which leads to an explicit form of the scattering coefficients. By considering the traces of the field on the boundary, one can obtain a pseudo-differential operator relating the coefficient of the incident field to that of the scattered field. In the case where \(\Gamma\) is not a sphere and \(K\) is a proper subset of \(\Omega\), this approach can be extended by using an integral representation of the fields: this is the purpose of this work. A pioneering work in that direction can be found in [5].
## 2 Integral representations of the incident and scattered fields
Our aim is to obtain a representation of the incident and scattered fields as an integral supported by \(\Gamma\). Let us first specify some notations.
For \(u\in H^{1}(\Omega)\) (the Sobolev space of function of \(L^{2}(\Omega)\) with gradient in \(L^{2}(\Omega)^{2}\), see [3, chap. 2] for more results on Sobolev spaces), the interior traces [3, chap. 2] of \(u\) and its normal derivative on \(\Gamma\) are denoted by:
\[\gamma^{-}(u)=\left.u\right|_{\Gamma},\,\gamma^{-}(\partial_{n}u)=\left. \partial_{n}u\right|_{\Gamma} \tag{3}\]
For fields belonging to \(H^{1}_{\rm loc}(\Omega\setminus\mathbb{R}^{p})\), we denote the exterior traces by:
\[\gamma^{+}(u)=\left.u\right|_{\Gamma},\,\gamma^{+}(\partial_{n}u)=\left. \partial_{n}u\right|_{\Gamma}. \tag{4}\]
Given a field \(u\in H^{1}_{\rm loc}(\mathbb{R}^{p})\), we denote \([f]_{\Gamma}\) the jump of \(f\) across \(\Gamma\), i.e.:
\[[u]_{\Gamma}=\gamma^{+}(u)-\gamma^{-}(u)\mbox{ and }[\partial_{n}u]_{\Gamma}= \gamma^{+}(\partial_{n}u)-\gamma^{-}(\partial_{n}u). \tag{5}\]
In order to have an integral representation of the fields, we state a result concerning the incident field. To do so, we first need a technical lemma. Note that in the following, the proofs of the results are given for \(p=3\) and can be easily adapted for \(p=2\) (or, in fact, any other dimension \(>1\)).
**Lemma 2**.: _Given \(\sigma\in H^{-1/2}(\Gamma)\), define:_
\[J[\sigma](\hat{x})=\int_{\Gamma}\sigma(\mathbf{x}^{\prime})e^{-ik\hat{x}\cdot \mathbf{x}^{\prime}}d\mathbf{x}^{\prime}. \tag{6}\]
_Assume that \(k^{2}\) is not an eigenvalue of \(-\Delta\) inside \(\Omega\) with Dirichlet boundary conditions on \(\Gamma\). If \(J[\sigma](\hat{x})=0\), then \(\sigma=0\). Moreover \(J\) defines a bijection: \(H^{-1/2}(\Gamma)\to L^{2}(S^{2})\)._
Proof.: Consider the unique function \(v\) satisfying \(\mathcal{H}_{1}v=0\) in \(\Omega\cup(\mathbb{R}^{3}\setminus\overline{\Omega})\) and the boundary conditions:
\[[v]_{\Gamma}=0,\,[\partial_{n}v]_{\Gamma}=\sigma.\]
Then \(v\) is represented in the following integral form: \(v(\mathbf{x})=\int_{\Gamma}g^{+}(\mathbf{x}-\mathbf{x}^{\prime})\sigma( \mathbf{x}^{\prime})d\mathbf{x}^{\prime}\). Outside \(B_{e}\), \(v\) can be expanded in spherical harmonics:
\[v(\mathbf{x})=\sum_{mn}v_{nm}h_{n}^{(1)}(kx)Y_{n}^{m}(\hat{x}).\]
The spherical Hankel functions have the following asymptotic forms [2] as \(x\to\infty\):
\[h_{n}^{(1)}(kx) \equiv\sqrt{\frac{\pi}{2x}}H_{n+1/2}^{(1)}(x)\sim_{x\to\infty} \frac{e^{i(kx-(n+1)\pi/2)}}{kx}, \tag{7}\] \[h_{n}^{(2)}(kx) \equiv\sqrt{\frac{\pi}{2x}}H_{n+1/2}^{(2)}(x)\sim_{x\to\infty} \frac{e^{-i(kx-(n+1)\pi/2)}}{kx}. \tag{8}\]
From these relations, we deduce that the following asymptotic behavior holds: \(v(\mathbf{x})\sim\frac{e^{ikx}}{kx}w(\hat{x})\) with
\[w(\hat{x})=\sum_{nm}v_{nm}e^{-i(n+1)\pi/2}Y_{n}^{m}(\hat{x}).\]
Besides, using the asymptotic form of the Green function:
\[g^{\pm}(\mathbf{x}-\mathbf{x}^{\prime})\sim_{x\to\infty}-\frac{e^{\pm ikx}}{4\pi x}e^{\mp ik\hat{x}\cdot\mathbf{x}^{\prime}}, \tag{9}\]
we obtain:
\[v(x)\sim_{x\to\infty}\frac{e^{ikx}}{kx}J[\sigma](\hat{x}),\]
and thus \(J[\sigma](\hat{x})=w(\hat{x})\). Consequently, the nullity of \(J[\sigma](\hat{x})\) implies that of \(w\) and thus that of \(v\) in \(\mathbb{R}^{3}\setminus\Omega\). Therefore \(\gamma_{1}(v)=0\) and \(\gamma_{2}(v)\) is non zero iff \(v\) is a solution of the Dirichlet problem, therefore \(v=0\) everywhere by the hypothesis on \(k^{2}\), and thus \(\sigma=0\). The surjectivity is obtained as follows. Take a function \(\phi\in L^{2}(S^{2})\) and expand it in spherical harmonics:
\[\phi(\hat{x})=\sum_{nm}\phi_{nm}Y_{n}^{m}(\hat{x}).\]
Then construct a field:
\[u(\mathbf{x})=\sum_{nm}\phi_{nm}h_{n}^{(1)}(kx)Y_{n}^{m}(\hat{x}).\]
For a fixed \(x>0\), this series converges in \(L^{2}(S^{2})\), since \((h_{n}^{(1)}(kx))_{n}\) is a bounded sequence for every \(x>0\). Finally, define the field \(\tilde{u}\) equals to \(u\) outside \(\Omega\) and satisfying:
\[\mathcal{H}_{1}\tilde{u}=0\text{ in }\Omega,\gamma_{1}(\tilde{u})=\gamma_{1}(u) \text{ on }\Gamma.\]
Then \(\tilde{u}\) satisfies: \(\mathcal{H}_{1}\tilde{u}=[\partial_{n}\tilde{u}]_{\Gamma}\delta_{\Gamma}\) and it holds:
\[\tilde{u}(\mathbf{x})=\int_{\Gamma}[\partial_{n}\tilde{u}]_{\Gamma}g^{+}( \mathbf{x}-\mathbf{x}^{\prime})d\mathbf{x}^{\prime},\,\mathbf{x}\in\mathbb{R} ^{3}\setminus\Omega.\]
Therefore, we obtain the existence of \(\sigma=[\partial_{n}\tilde{u}]_{\Gamma}\in H^{-1/2}(\Gamma)\).
Using this lemma, we are now in a position to prove the following result that provides a representation of the incident field as an integral over \(\Gamma\):
**Theorem 1**.: _The incident field can be represented in the form:_
\[u^{\mathrm{inc}}(\mathbf{x})=i\int_{\Gamma}\sigma^{\mathrm{inc}}(\mathbf{x}^ {\prime})\Im(g^{+}(\mathbf{x}-\mathbf{x}^{\prime}))dr^{\prime},\,\mathbf{x} \in\mathbb{R}^{p}\]
_where \(\sigma^{\mathrm{inc}}\) belongs to \(H^{-1/2}(\Gamma)\)._
Proof.: The incident field can be expanded in spherical harmonics in the form \(u^{\mathrm{inc}}=u^{+}+u^{-}\), where:
\[u^{+}(\mathbf{x})=\frac{1}{2}\sum_{nm}i_{nm}h_{n}^{(1)}(kx)Y_{n}^{m}(\hat{x}),\,u^{-}(\mathbf{x})=\frac{1}{2}\sum_{nm}i_{nm}h_{n}^{(2)}(kx)Y_{n}^{m}(\hat{x}).\]
Using the asymptotic forms of the spherical Hankel functions (7,8) we obtain the existence of two functions \(u_{\infty}^{\pm}(\hat{x})\) defined on \(S^{2}\) and such that:
\[u^{\mathrm{inc}}(x)\sim\frac{e^{ikx}}{kx}u_{\infty}^{+}(\hat{x})+\frac{e^{-ikx }}{kx}u_{\infty}^{-}(\hat{x})\]
Explicitly, these functions are given by:
\[u_{\infty}^{+}(\hat{x})=\frac{1}{2}\sum_{nm}i_{nm}e^{-i(n+1)\pi/2}Y_{n}^{m}( \hat{x}),\,u_{\infty}^{-}(\hat{x})=\frac{1}{2}\sum_{nm}i_{nm}e^{i(n+1)\pi/2}Y_ {n}^{m}(\hat{x}).\]
Since:
\[e^{-i(n+1)\pi/2}=(-1)^{n+1}e^{i(n+1)\pi/2}\text{ and }Y_{n}^{m}(-\hat{x})=(-1)^ {n}Y_{n}^{m}(\hat{x}),\]
we have that:
\[u_{\infty}^{-}(-\hat{x})=\frac{1}{2}\sum_{nm}i_{nm}e^{i(n+1)\pi/2 }Y_{n}^{m}(-\hat{x})=\frac{1}{2}\sum_{nm}i_{nm}(-1)^{n}e^{i(n+1)\pi/2}Y_{n}^{m }(\hat{x})\] \[=-\frac{1}{2}\sum_{nm}i_{nm}e^{-i(n+1)\pi/2}Y_{n}^{m}(\hat{x})=-u _{\infty}^{+}(\hat{x}).\]
Consider now the field \(\tilde{u}^{\mathrm{inc}}\), defined by the following integral:
\[\tilde{u}^{\mathrm{inc}}(\mathbf{x})=\frac{1}{2}\int_{\Gamma}\left[\sigma^{ \mathrm{inc}}(\mathbf{x}^{\prime})g^{+}(\mathbf{x}-\mathbf{x}^{\prime})- \sigma^{\mathrm{inc}}(\mathbf{x}^{\prime})g^{-}(\mathbf{x}-\mathbf{x}^{\prime })\right]d\mathbf{x}^{\prime}.\]
Due to the continuity properties of a single layer potential [6], it satisfies \(\mathcal{H}_{1}\tilde{u}^{\mathrm{inc}}=0\). Using (9), it is given asymptotically as \(x\to\infty\), by:
\[\tilde{u}^{\mathrm{inc}}(\mathbf{x})\sim\frac{e^{ikx}}{kx}\int_{\Gamma}\sigma^{ \mathrm{inc}}(\mathbf{x}^{\prime})e^{-ik\hat{x}\cdot\mathbf{x}^{\prime}}d \mathbf{x}^{\prime}-\frac{e^{-ikx}}{kx}\int_{\Gamma}\sigma^{\mathrm{inc}}( \mathbf{x}^{\prime})e^{ik\hat{x}\cdot\mathbf{x}^{\prime}}d\mathbf{x}^{\prime}\]
Therefore, the existence of \(\sigma^{\mathrm{inc}}\) satisfying: \(u^{+}_{\infty}(\hat{x})=\int_{\Gamma}\sigma^{\mathrm{inc}}(\mathbf{x}^{\prime })e^{-ik\hat{x}\cdot\mathbf{x}^{\prime}}d\mathbf{x}^{\prime}\) follows from lemma (2) and the second relation:
\[u^{-}_{\infty}(\hat{x})=-\frac{e^{-ikx}}{kx}\int_{\Gamma}\sigma^{\mathrm{inc}} (\mathbf{x}^{\prime})e^{ik\hat{x}\cdot\mathbf{x}^{\prime}}d\mathbf{x}^{\prime}\]
is fulfilled thanks to:
\[u^{-}_{\infty}(\hat{x})=-u^{+}_{\infty}(-\hat{x})=-\int_{\Gamma}\sigma^{ \mathrm{inc}}(\mathbf{x}^{\prime})e^{ik\hat{x}\cdot\mathbf{x}^{\prime}}d \mathbf{x}^{\prime}.\]
We conclude from Rellich lemma [3, p. 74] that \(u^{\mathrm{inc}}=\tilde{u}^{\mathrm{inc}}\), since both fields satisfy the same equation and the same asymptotic behavior at infinity. The integral expression follows by noting that \(g^{+}\) and \(g^{-}\) are complex conjugated functions.
Our next result is that the scattered field can also be represented by an integral over \(\Gamma\):
**Theorem 2**.: _There exists \(\sigma^{s}\in H^{-1/2}(\Gamma)\) such that:_
\[u^{s}(\mathbf{x})=\int_{\Gamma}\sigma^{s}(\mathbf{x}^{\prime})g^{+}(\mathbf{x }-\mathbf{x}^{\prime})d\mathbf{x}^{\prime},\mathbf{x}\in\mathbb{R}^{p}\setminus\Omega. \tag{10}\]
Proof.: Consider the field \(\tilde{u}^{s}\) that is equal to \(u^{s}\) outside \(\Omega\) and that satisfies the following problem inside \(\Omega\):
\[\mathcal{H}_{1}\tilde{u}^{s}=0\mbox{ inside }\Omega,\;\tilde{u}^{s}\big{|}_{ \Gamma}=\left.u^{s}\right|_{\Gamma}.\]
Over \(\mathbb{R}^{p}\), it satisfies: \(\mathcal{H}_{1}\tilde{u}^{s}=[\partial_{n}\tilde{u}^{s}]_{\Gamma}\delta_{\Gamma}.\) Given the outgoing wave condition at infinity, this gives:
\[u^{s}(\mathbf{x})=\int_{\Gamma}[\partial_{n}\tilde{u}^{s}]_{\Gamma}g^{+}( \mathbf{x}-\mathbf{x}^{\prime})d\mathbf{x}^{\prime},\]
and consequently the existence of the density \(\sigma^{s}=[\partial_{n}\tilde{u}^{s}]_{\Gamma}\) belonging to \(H^{-1/2}(\Gamma)\).
We can now deduce the following representation result for the total field:
**Corollary 1**.: _The total field \(u\) can be written in the form_
\[u(\mathbf{x})=u^{\mathrm{tot},+}(\mathbf{x})+u^{\mathrm{tot},-}(\mathbf{x}), \tag{11}\]
_where_
\[u^{\mathrm{tot},+}(\mathbf{x})=\int_{\Gamma}\sigma^{+}(\mathbf{x}^{\prime})g^ {+}(\mathbf{x}-\mathbf{x}^{\prime})d\mathbf{x}^{\prime},\,u^{\mathrm{tot},-}( \mathbf{x})=\int_{\Gamma}\sigma^{-}(\mathbf{x}^{\prime})g^{-}(\mathbf{x}- \mathbf{x}^{\prime})dx^{\prime}, \tag{12}\]
_and \(\sigma^{+},\sigma^{-}\) belong to \(H^{-1/2}(\Gamma)\)._
Proof.: This is a direct consequence of theorem 1 and theorem 2. Putting the scattered field and the incident field together, we obtain:
\[\sigma^{+}=[\partial_{n}\tilde{u}^{s}]_{\Gamma}+\frac{1}{2}\sigma^{\rm inc},\, \sigma^{-}=-\frac{1}{2}\sigma^{\rm inc}. \tag{13}\]
## 3 Discussion and a numerical example
### A numerical example
Let us consider the scattering of an electromagnetic plane wave in \(E_{||}\) polarization by a collection of cylinders contained in a domain whose cross section \(\Omega\) is bounded by an astroid \(\Gamma\) (cf. Figure 2).
For numerical purposes, the length unit is defined by the wavelength and we choose \(\lambda=1\). The astroid contains \(N=4657\) rods with relative permittivity \(12\). We use the multiple scattering approach described in [7].
We assume that the rods, at positions \(({\bf x}_{k})\), are small enough that they can each be characterized by only one scattering coefficient \(s_{q}^{0}\)[7]. We denote \({\bf x}=(x_{1},x_{2})\). At this step the scattered field is represented, everywhere outside the cylinders, by a sum over the rods in the form:
\[u^{s}({\bf x})=\sum_{q=1}^{N}s_{q}^{0}H_{0}^{(1)}(k|{\bf x}-{\bf x}_{q}|). \tag{14}\]
Figure 2: One quarter of the domain \(\Omega\): it is a smooth astroid filled with small dielectric rods.
The coefficients \(\hat{s}=(s_{q}^{0})_{1,\ldots,N}\) are determined from the multiple scattering theory [7]. The scattering coefficient \(s_{q}^{0}\) is related to the local incident field \(u_{q}^{\rm inc,loc}\) through the scattering amplitude \(t_{q}^{0}\): \(s_{q}^{0}=t_{q}^{0}u_{q}^{\rm inc,loc}\). For a circular dielectric rod of radius \(r_{q}\) and relative permittivity \(\varepsilon_{q}=\nu_{q}^{2}\), it holds:
\[t_{q}^{0}=-\frac{J_{1}(kr_{q})J_{0}(k\nu_{q}r_{q})-\nu_{q}J_{0}(kr_{q})J_{1}(k \nu_{q}r_{q})}{H_{1}^{(1)}(kr_{q})J_{0}(k\nu_{q}r_{q})-\nu_{q}H_{0}^{(1)}(kr_{q} )J_{1}(k\nu_{q}r_{q})}.\]
Figure 4: Map of the normalized modulus of the total field outside the astroid, computed by a direct summation over all the scatterers contained inside \(\Omega\).
Figure 3: Normalized Discrete Fourier Transform coefficients of the sequence of coefficients (\(s_{k}\)).
The local incident field is:
\[u_{q}^{\rm inc,loc}=u^{\rm inc}({\bf x}_{q})+\sum_{j\neq q}s_{j}^{0}H_{0}^{(1)}(k|{\bf x}_{q}-{\bf x}_{j}|). \tag{15}\]
Therefore, it holds:
\[s_{q}^{0}=t_{q}^{0}\left(u^{\rm inc}({\bf x}_{q})+\sum_{j\neq q}s_{j}^{0}H_{0}^{(1)}(k|{\bf x}_{q}-{\bf x}_{j}|)\right). \tag{16}\]
Let us denote \(\overline{\overline{t}}\) the diagonal matrix defined by \(\overline{\overline{t}}=diag(t_{1}^{0},\ldots,t_{N}^{0})\) and \(\overline{\overline{h}}\) the matrix with entries \(h_{ij}=H_{0}^{(1)}(k|{\bf x}_{i}-{\bf x}_{j}|)\) for \(i\neq j\) and \(h_{ii}=0\). The coefficients \(\hat{s}\) are obtained by solving the system:
\[\left((\overline{\overline{t}})^{-1}-\overline{\overline{h}}\right)\hat{s}= \hat{u}^{\rm inc} \tag{17}\]
where \(\hat{u}^{\rm inc}=(u^{\rm inc}({\bf x}_{q}))_{1,\ldots,N}\).
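As a concrete illustration of Eqs. (14)–(17), the following Python sketch assembles and solves the multiple scattering system for monopole scatterers. It is a minimal, hedged implementation: the function names are ours, NumPy/SciPy are assumed, and no attempt is made at the efficiency that would be needed for thousands of rods.

```python
import numpy as np
from scipy.special import hankel1, jv

def t0_dielectric_rod(k, radius, eps):
    """Monopole scattering amplitude t_q^0 of a circular dielectric rod (see the formula above)."""
    nu = np.sqrt(eps)
    num = jv(1, k*radius)*jv(0, k*nu*radius) - nu*jv(0, k*radius)*jv(1, k*nu*radius)
    den = hankel1(1, k*radius)*jv(0, k*nu*radius) - nu*hankel1(0, k*radius)*jv(1, k*nu*radius)
    return -num / den

def solve_multiple_scattering(k, positions, t0, u_inc):
    """Solve ((t)^-1 - h) s = u_inc, Eq. (17), for the coefficients s_q^0."""
    N = len(positions)
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    d_safe = np.where(d == 0.0, 1.0, d)               # avoid the Hankel singularity on the diagonal
    h = np.where(np.eye(N, dtype=bool), 0.0, hankel1(0, k*d_safe))
    return np.linalg.solve(np.diag(1.0/t0) - h, u_inc)

def scattered_field(k, positions, s, x):
    """Evaluate u^s(x), Eq. (14), at an observation point x outside the rods."""
    r = np.linalg.norm(x - positions, axis=-1)
    return np.sum(s * hankel1(0, k*r))

# Illustrative usage on a hypothetical geometry: N rods at `positions` (shape (N, 2)),
# plane-wave incidence u^inc(x) = exp(-i k x_2) as in the second example of the paper.
# t0 = np.array([t0_dielectric_rod(k, 0.015, 10.0)] * N)
# u_inc = np.exp(-1j * k * positions[:, 1])
# s = solve_multiple_scattering(k, positions, t0, u_inc)
```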
The point is to be able to represent the scattered field by means of a single layer potential as explained in section (2). The unicity of \(\sigma^{s}\) being ensured by theorem (1), it can be determined by solving an integral equation of the first kind by imposing
\[\int_{\Gamma}\sigma^{s}({\bf x}^{\prime})H_{0}^{(1)}(k|{\bf x}-{\bf x}^{\prime}|)d{\bf x}^{\prime}=u^{s}({\bf x}),\ {\bf x}\in\Gamma.\]
In order to do so numerically, that is, to obtain a discretized version of the density \(\sigma^{s}\), we simply write a discrete version of the integral:
\[u_{P}^{s}({\bf x})=\sum_{p=1}^{P}\sigma_{p}^{s}H_{0}^{(1)}(k|{\bf x}-{\bf y}_{ p}|), \tag{18}\]
Figure 5: Map of the normalized modulus of the total field outside the astroid, computed by using the single layer representation of the scattered field.
and the points \((\mathbf{y}_{p})\) are put uniformly on \(\Gamma\). On \(\Gamma\), the scattered field can be written:
\[u_{p}^{s}\equiv u^{s}(\mathbf{y}_{p}^{\prime})=\sum_{q=1}^{N}s_{q}^{0}H_{0}^{(1) }(k|\mathbf{y}_{p}^{\prime}-\mathbf{x}_{q}|),\,\mathbf{y}_{p}^{\prime}\in\Gamma,p=1,\ldots,P.\]
The second set of points \((\mathbf{y}_{p}^{\prime})\) is different from \((\mathbf{y}_{p})\) in order to avoid the \(0\) singularity of the Hankel function. In matrix form, this reads as:
\[\overline{\overline{H}}\hat{s}=\hat{u}^{s} \tag{19}\]
where \(\hat{u}^{s}=(u_{p}^{s})\). Let us remark that \(\overline{\overline{H}}\) is a \(P\times N\) matrix. Then a square linear system is obtained by writing:
\[\sum_{p=1}^{P}\sigma_{p}^{s}H_{0}^{(1)}(k|\mathbf{y}_{p^{\prime}}^{\prime}- \mathbf{y}_{p}|)=u_{p^{\prime}}^{s},\,p^{\prime}=1\ldots P.\]
In matrix form, this can be written:
\[\overline{\overline{I}}\hat{\sigma}^{s}=\hat{u}^{s}, \tag{20}\]
where: \(\hat{\sigma}^{s}=(\sigma_{p}^{s})\) and \(\overline{\overline{I}}\) is a \(P\times P\) matrix with entries \(I_{pp^{\prime}}=H_{0}^{(1)}(k|\mathbf{y}_{p^{\prime}}^{\prime}-\mathbf{y}_{p}|)\). Finally, we obtain a matrix \(\overline{\overline{B}}\) relating \(\hat{s}\) to \(\hat{\sigma}^{s}\):
\[\overline{\overline{B}}=(\overline{\overline{I}})^{-1}\overline{\overline{H}}. \tag{21}\]
Figure 6: Modulus of the total field on a curve deduced from \(\Gamma\) by a homothety of ratio 1.3. The red curve corresponds to the total field reconstructed by means of the single layer representation and the blue circles correspond to the total field computed by a direct summation over all the scatterers.
This matrix is a \(P\times N\) matrix.
Heuristically, the number \(P\) can be determined by recalling that the function \(\sigma^{s}\) is periodic, since it is defined on a closed bounded curve. Consequently, the computation of the DFT of \((\sigma^{s}_{p})_{p\in\{1,\ldots P\}}\) can indicate whether the approximation is good, by checking the decay of the Fourier coefficients. This is exemplified in Figure 3, where we have computed the DFT of the finite sequence \((\sigma^{s}_{p})\). It is important to have this criterion, since the discrete values of the density \((\sigma^{s}_{p})\) do not themselves exhibit a decaying behavior with \(P\). The final value is \(P=160\).
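The discretization described above (Eqs. (18)–(21) together with the DFT criterion) can be sketched as follows. This is an illustrative, unoptimized fragment assuming NumPy/SciPy; the choice of collocation points on \(\Gamma\) is left to the user.

```python
import numpy as np
from scipy.special import hankel1

def single_layer_density(k, y, y_prime, u_s):
    """Solve Eq. (20): sum_p sigma_p H_0^(1)(k|y'_{p'} - y_p|) = u^s(y'_{p'})."""
    d = np.linalg.norm(y_prime[:, None, :] - y[None, :, :], axis=-1)
    return np.linalg.solve(hankel1(0, k*d), u_s)       # P x P collocation matrix

def reconstruct_field(k, y, sigma, x):
    """Single layer representation u_P^s(x) of Eq. (18)."""
    r = np.linalg.norm(x - y, axis=-1)
    return np.sum(sigma * hankel1(0, k*r))

def dft_decay(sigma):
    """Normalized DFT moduli of the discrete density, cf. Figure 3: once P is large
    enough they decay, which is the heuristic used to choose P."""
    spec = np.abs(np.fft.fft(sigma))
    return spec / spec.max()
```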
We are able to reconstruct the diffracted field with very good precision. In Figure 4 we have plotted a map of the total field outside the region containing the scatterers, obtained by summing the contributions of the dielectric rods, and in Figure 5 the reconstructed field. Both fields have been normalized so that their maximal value is equal to 1, in order to have the same color scale. For a more direct comparison, in Figure 6 we have plotted the total field on a curve deduced from \(\Gamma\) by a homothety of ratio 1.3. We stress that, thanks to this approach, the representation of the scattered field now requires only 160 terms instead of 4657.
### Extension of the Fast Multipole Method
It is important to remark that the single layer representation involves a surface \(\Gamma\), enclosing the scatterers, that can be chosen at will. By this we mean that, given a set of scatterers and any smooth enough surface \(\Gamma\) enclosing this set, the field scattered by this set and the incident field can be represented by an integral over \(\Gamma\). This result is in fact a generalization of the expansion over spherical harmonics and spherical Bessel functions to an arbitrary surface. As a consequence, it is possible to split a given set of scatterers into several subsets, apply multiple scattering theory to each smaller subset, and then use the single layer representations to couple the subsets to each other. In order to do so, an iterative algorithm is to be used. Let us be more specific: we put \(p=2\) and consider simply two subsets \(O^{1}=\cup_{j=1}^{N^{1}}\Omega^{1}_{j}\) and \(O^{2}=\cup_{j=1}^{N^{2}}\Omega^{2}_{j}\) and two surfaces \(\Gamma_{1}\) and \(\Gamma_{2}\) enclosing respectively \(O^{1}\) and \(O^{2}\) (see Figure 7). The incident field \(u^{\rm inc}\) illuminates \(O^{1}\) and \(O^{2}\). As in the preceding section, we assume for simplicity that the wavelength is large enough that the obstacle \(\Omega^{\alpha}_{j}\) (\(\alpha=1,2\)) can be considered to be a point at coordinate \({\bf x}^{\alpha}_{j}\), and that the field scattered by \(\Omega^{\alpha}_{j}\) reads as:
\[u^{s,\alpha}_{j}({\bf x})=s^{0,\alpha}_{j}H^{(1)}_{0}(k|{\bf x}-{\bf x}^{ \alpha}_{j}|)\]
For each scatterer \(\Omega^{1}_{j}\in O^{1}\), the incident field is the sum of the "true" incident field \(u^{\rm inc}({\bf x}^{1}_{j})\) and the field coming from the other subset \(O^{2}\) and given by the discrete single layer representation:
\[u^{s,2}({\bf x}^{1}_{j})=\sum_{p=1}^{P_{2}}\sigma^{s,2}_{p}({\bf y}_{p})H^{(1 )}_{0}(k|{\bf x}^{1}_{j}-{\bf y}_{p}|). \tag{22}\]
Of course, there is the same set of relations obtained by making the switching \(1\leftrightarrow 2\). Here the local incident field is therefore:
\[u_{j}^{\rm inc,loc}=u^{\rm inc}({\bf x}_{j}^{1})+u^{s,2}({\bf x}_{j}^{1}). \tag{23}\]
We denote the diagonal matrix \(\overline{\overline{t}^{\alpha}}=diag(t_{1}^{0,\alpha},\ldots,t_{N_{\alpha}}^ {0,\alpha})\), \(\hat{u}^{s,2}=(u^{s,2}(x_{j}^{1}))\) (resp. \(\hat{u}^{s,1}=(u^{s,1}({\bf x}_{j}^{2}))\)) and \(\hat{u}^{\rm inc,\alpha}=(u^{\rm inc}({\bf x}_{j}^{\alpha}))\). As in the preceding section, the operator that relates the scattering coefficients \(\hat{s}^{\alpha}=(s_{j}^{0,\alpha})\) to the discretized density \(\hat{\sigma}^{s,\alpha}=(\sigma_{p}^{s,\alpha})\) is denoted by \(\overline{\overline{B}}^{\alpha}\) (cf. (21)):
\[\hat{\sigma}^{\alpha}=\overline{\overline{B}}^{\alpha}\;\hat{s}^{\alpha}. \tag{24}\]
It is a \(P_{\alpha}\times N_{\alpha}\) matrix. Finally, \(\overline{\overline{R}}^{\alpha}\) is the matrix that relates \(\hat{\sigma}^{s,\alpha}\) to \(\hat{u}^{s,\alpha}\):
\[\hat{u}^{s,\alpha}=\overline{\overline{R}}^{\alpha}\;\hat{\sigma}^{\alpha}. \tag{25}\]
Explicitly (taking \(\alpha=2\)):
\[\hat{u}^{s,2}({\bf x}_{j}^{1})=\sum_{p=1}^{P_{2}}H_{0}^{(1)}(k|{\bf x}_{j}^{1}-{\bf y}_{p}^{2}|)\,\sigma_{p}^{s,2},\,j=1,\ldots N_{1}. \tag{26}\]
Therefore \(\overline{\overline{R}}^{2}\) has entries: \(R_{ij}^{2}=H_{0}^{(1)}(k|{\bf x}_{i}^{1}-{\bf y}_{j}^{2}|)\), \(i=1,\ldots N_{1},\,j=1,\ldots P_{2}\). It is a \(N_{1}\times P_{2}\) matrix. Finally, the linear system to be solved is:
\[\left[\left(\begin{matrix}(\overline{\overline{t}^{1}})^{-1}&0\\ 0&(\overline{\overline{t}^{2}})^{-1}\end{matrix}\right)-\left(\begin{matrix}\overline{\overline{h}}^{1}&\overline{\overline{R}}^{2}\overline{\overline{B}}^{2}\\ \overline{\overline{R}}^{1}\overline{\overline{B}}^{1}&\overline{\overline{h}}^{2}\end{matrix}\right)\right]\left(\begin{matrix}\hat{s}^{1}\\ \hat{s}^{2}\end{matrix}\right)=\left(\begin{matrix}\hat{u}^{\rm inc,1}\\ \hat{u}^{\rm inc,2}\end{matrix}\right). \tag{27}\]
Figure 7: Sketch of the scattering problem with two subsets.
The numerical gain here lies in the off-diagonal terms \(\overline{\overline{R}}^{\alpha}\overline{\overline{B}}^{\alpha}\). If we were to use the multiple scattering theory directly, we would have to compute these terms by coupling each cylinder directly to every other cylinder, which would involve the direct computation of the matrix with entries \(H_{0}^{(1)}(k|\mathbf{x}_{i}^{1}-\mathbf{x}_{j}^{2}|)\) of size \(N_{1}\times N_{2}\). Here, we have to compute directly the matrix \(\overline{\overline{h}}^{1}\) (resp. \(\overline{\overline{h}}^{2}\)) of size \(N_{1}\times N_{1}\) (resp. \(N_{2}\times N_{2}\)), and the coupling between the two subsets is then ensured by the terms \(\overline{\overline{R}}^{1}\overline{\overline{B}}^{1}\) and \(\overline{\overline{R}}^{2}\overline{\overline{B}}^{2}\). While these matrices are of course of size \(N_{2}\times N_{1}\) (resp. \(N_{1}\times N_{2}\)), they are obtained as the product of matrices of size \(N_{2}\times P_{1}\) and \(P_{1}\times N_{1}\) (resp. \(N_{1}\times P_{2}\) and \(P_{2}\times N_{2}\)).
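To make the bookkeeping of Eqs. (24)–(27) explicit, here is a schematic assembly of the two-subset system. It is only a sketch: the matrix names follow the text, the geometry (rod positions `x1`, `x2` and collocation points `y`, `y'` on \(\Gamma_{1}\), \(\Gamma_{2}\)) is left to the user, and no use is made of the fast algorithms that would be required for very large \(N\).

```python
import numpy as np
from scipy.special import hankel1

def h_self(k, x):
    """Intra-subset coupling matrix h (Eq. (17)): zero diagonal, Hankel couplings elsewhere."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    h = hankel1(0, k*np.where(d == 0.0, 1.0, d))
    return np.where(np.eye(len(x), dtype=bool), 0.0, h)

def B_map(k, x, y, y_prime):
    """B = I^{-1} H (Eqs. (21), (24)): scattering coefficients on the rods -> density on Gamma."""
    H = hankel1(0, k*np.linalg.norm(y_prime[:, None, :] - x[None, :, :], axis=-1))
    I = hankel1(0, k*np.linalg.norm(y_prime[:, None, :] - y[None, :, :], axis=-1))
    return np.linalg.solve(I, H)

def R_map(k, x_other, y):
    """R (Eqs. (25)-(26)): density on one Gamma -> field at the rods of the other subset."""
    return hankel1(0, k*np.linalg.norm(x_other[:, None, :] - y[None, :, :], axis=-1))

def solve_two_subsets(k, x1, x2, y1, yp1, y2, yp2, t1, t2, u1, u2):
    """Assemble and solve the block system of Eq. (27)."""
    B1, B2 = B_map(k, x1, y1, yp1), B_map(k, x2, y2, yp2)
    R1, R2 = R_map(k, x2, y1), R_map(k, x1, y2)        # R1: N2 x P1, R2: N1 x P2
    top = np.hstack([np.diag(1.0/t1) - h_self(k, x1), -R2 @ B2])
    bot = np.hstack([-R1 @ B1, np.diag(1.0/t2) - h_self(k, x2)])
    s = np.linalg.solve(np.vstack([top, bot]), np.concatenate([u1, u2]))
    return s[:len(x1)], s[len(x1):]
```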
Let us give a numerical example. We consider two subsets of dielectric rods contained in two elliptical domains, as depicted in Figure 8. The incident field is a plane wave \(u^{\mathrm{inc}}(\mathbf{x})=e^{-ikx_{2}}\). As in the first numerical example, the length unit is that of the wavelength and we choose \(\lambda=10\). There are 3234 rods with radius 0.015 and relative permittivity \(\varepsilon=10\) in each domain. The map of the field is given in Figure 9. We have computed the maps by using the single layer representation (left panel) and by using the multiple scattering theory for the entire set of rods (right panel). We have used \(P_{1}=P_{2}=100\) points to compute the single layer representations (22). The maps and the scattering coefficients agree to a precision below 0.3% (in \(L^{2}\) norm for the entire region covered by the map). In Figure 10, we have plotted the modulus of the scattered field on the red ellipse plotted in Figure 8. The fields coincide to 0.6%. On a laptop, the single layer approach is around 2 to 3 times faster than the direct multiple scattering approach. It is to be noted that the number of points \(P_{1,2}\) plays a negligible role in the total calculation time: reducing \(P_{1}=P_{2}\) to 30 does not change the calculation time beyond its fluctuations.
In conclusion, we have established a new way of representing the field scattered by a large collection of objects by using a single layer representation. The scattered field
Figure 8: Two elliptic subsets, each containing 3234 dielectric cylinders. The red ellipse indicates where the field is computed in Figure 10.
is characterized by a density supported by the boundary of a domain containing the scatterers. From a numerical point of view, the gain lies in the number of parameters needed to represent the field. Since the sources are supported by a region of codimension 1, much less information is needed, as compared to a volumetric representation (of
Figure 10: The curve in solid line corresponds to the field computed by using directly the single layer representations of the fields scattered by each subset (cf. (18)). The circles correspond to the field computed by summing over the contributions of each cylinder (cf. (14)).
Figure 9: Maps of the modulus field. On the left (a) panel, the field is computed by using the extended Fast Multipole Method, on the right (b) panel, it is computed by using directly the multiple scattering theory for the entire set of cylinders.
codimension 0). This results in a drastic reduction of the number of values required for representing the scattered field with a given precision. This result is a generalization of the representation of the field by spherical harmonics used in the Fast Multipole Method and extends this algorithm beyond the spherical geometry.
|
2309.12965 | One continuous parameter family of Dirac Lorentz scalar potentials
associated with exceptional orthogonal polynomials | We extend our recent works [ Int. J. Mod. Phys. A 38 (2023) 2350069-1] and
obtain one parameter $(\lambda)$ family of rationally extended Dirac Lorentz
scalar potentials with their explicit solutions in terms of $X_{m}$ exceptional
orthogonal polynomials. We further show that as the parameter $\lambda
\rightarrow 0$ or $-1$, we get the corresponding rationally extended Pursey and
the rationally extended Abraham-Moses type of scalar potentials respectively,
which have one bound state less than the starting scalar potentials. | Suman Banerjee, Rajesh Kumar Yadav | 2023-09-22T16:02:35Z | http://arxiv.org/abs/2309.12965v1 | One continuous parameter family of Dirac Lorentz scalar potentials associated with exceptional orthogonal polynomials
###### Abstract
We extend our recent works [_Int. J. Mod. Phys._ A 38 (2023) 2350069-1] and obtain one parameter (\(\lambda\)) family of rationally extended Dirac Lorentz scalar potentials with their explicit solutions in terms of \(X_{m}\) exceptional orthogonal polynomials. We further show that as the parameter \(\lambda\to 0\) or \(-1\), we get the corresponding rationally extended Pursey and the rationally extended Abraham-Moses type of scalar potentials respectively, which have one bound state less than the starting scalar potentials.
\({}^{a}\)Department of Physics, Sido Kanhu Murmu University, Dumka-814110, India.
## 1 Introduction
In non-relativistic quantum mechanics, after the discovery of \(X_{m}\)-exceptional orthogonal polynomials (EOPs) [1, 2, 3], several new potentials have been discovered whose bound state solutions are in terms of these EOPs [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. These potentials are generally rational extensions of the known conventional potentials and are hence also known as rationally extended (RE) potentials. Some of these RE potentials are exactly solvable, while a few are quasi-exactly or conditionally exactly solvable. Different approaches, such as the Darboux-Crum transformation (DBT), the Point Canonical transformation (PCT) and
Supersymmetry in quantum mechanics (SQM) etc. have been adopted to obtain these potentials and their solutions.
One obvious question is whether there are relativistic problems whose solutions are also in terms of these EOPs. As a first step in that direction, we recently [20] considered the Dirac equation with a Lorentz scalar potential and showed that the exact eigenfunctions for some of the Lorentz scalar potentials are indeed in terms of these EOPs. In particular, we showed that the exact solutions for Dirac potentials (with Lorentz scalar coupling) corresponding to the RE radial oscillator potential, the trigonometric Scarf potential and the generalized Poschl-Teller (GPT) potential can be obtained in terms of these EOPs. One related question is: are there strictly isospectral RE Dirac potentials corresponding to these three Dirac potentials? Further, are there corresponding RE Pursey and Abraham-Moses (AM) potentials with one bound state less? The purpose of this note is to answer these questions in the affirmative. In particular, starting from these three RE Dirac scalar potentials and using the formalism of SQM, we construct a one continuous parameter (\(\lambda\)) family of strictly isospectral Dirac potentials. Further, by taking the limits \(\lambda=0\) and \(-1\), we obtain the corresponding RE Pursey and RE AM potentials.
The plan of the paper is as follows. In section 2, we briefly discuss the general formalism explaining how using SQM formalism one can construct one continuous parameter family of strictly isospectral Dirac Lorentz scalar potentials. We also indicate how to construct the corresponding Pursey and AM potentials with one bound state less. In section 3, we apply the above formalism to the case of the RE radial oscillator potential and obtain the corresponding one parameter family of strictly isospectral Dirac Lorentz scalar potentials as well as the corresponding RE Pursey and RE AM potentials. In order to avoid unnecessary details, the results associated with the RE Scarf and RE GPT potentials are summarized in Tables 1, 2 and 3. Finally, we summarize the results in section 4.
## 2 Formalism
To set up the notations, let us consider one dimensional Dirac equation with a Lorentz scalar potential \(\phi(x)\) given by [21]
\[[i\gamma^{\mu}\partial_{\mu}-\phi(x)]\Psi_{n}(x,t)=0,\quad\mu=0,1 \tag{1}\]
where \(\Psi_{n}(x,t)\) is Dirac spinor defined in the matrix form as
\[\Psi_{n}(x,t)=\begin{bmatrix}\Psi_{n}^{(1)}(x,t)\\ \Psi_{n}^{(2)}(x,t)\end{bmatrix} \tag{2}\]
If we put \(\Psi_{n}(x,t)=\exp(-i\epsilon t)\Psi_{n}(x)\) (where \(\epsilon=\) energy associated with the spectrum) the above Dirac equation Eq. (1) takes the form
\[\gamma^{0}\epsilon\Psi_{n}(x)+i\gamma^{1}\frac{d}{dx}\Psi_{n}(x)-\phi_{n}(x) \Psi_{n}(x)=0\,. \tag{3}\]
We choose a \(2D\) representation of the gamma matrices [25] to directly cast the problem in one dimensional SUSY form i.e,
\[\gamma^{0}=\sigma_{x}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\qquad\gamma^{1}=i\sigma_{z}=\begin{bmatrix}i&0\\ 0&-i\end{bmatrix} \tag{4}\]
and
\[\Psi_{n}(x)=\begin{bmatrix}\Psi_{n}^{(1)}(x)\\ \Psi_{n}^{(2)}(x)\end{bmatrix}, \tag{5}\]
so that we have two coupled equations corresponding to the Dirac equation (3) given by
\[\hat{A}\Psi_{n}^{(1)}(x)=\epsilon\Psi_{n}^{(2)}(x)\quad\text{and}\quad\hat{A} ^{\dagger}\Psi_{n}^{(2)}(x)=\epsilon\Psi_{n}^{(1)}(x)\,, \tag{6}\]
where the operators,
\[\hat{A}=\frac{d}{dx}+\phi(x)\quad\text{and}\quad\hat{A}^{\dagger}=-\frac{d}{ dx}+\phi(x)\,. \tag{7}\]
Now the above Eq.(6) can be decoupled easily (in terms of \(H_{1}(=\hat{A}\hat{A}^{\dagger})\) and
\(H_{2}(=\hat{A}^{\dagger}\hat{A})\)) and written as
\[H_{2}\Psi_{n}^{(1)}(x)=\epsilon^{2}\Psi_{n}^{(1)}(x)\quad\text{and}\quad H_{1 }\Psi_{n}^{(2)}(x)=\epsilon^{2}\Psi_{n}^{(2)}(x)\,, \tag{8}\]
which are equivalent to two Schrodinger like equations namely
\[-\frac{d^{2}}{dx^{2}}\Psi_{n}^{(1)}(x)+V^{(1)}(x)\Psi_{n}^{(1)}(x)=\epsilon^{2 }\Psi_{n}^{(1)}(x)\,, \tag{9}\]
\[-\frac{d^{2}}{dx^{2}}\Psi_{n}^{(2)}(x)+V^{(2)}(x)\Psi_{n}^{(2)}(x)=\epsilon^{ 2}\Psi_{n}^{(2)}(x)\,, \tag{10}\]
with potential like terms
\[V^{(1)}(x)=\phi^{2}(x)-\phi^{{}^{\prime}}(x)\,, \tag{11}\]
and
\[V^{(2)}(x)=\phi^{2}(x)+\phi^{{}^{\prime}}(x)\,. \tag{12}\]
On comparing with the well known formalism of SQM [22], we see that there is supersymmetry in the problem: the scalar potential \(\phi(x)\) is just the superpotential of the SQM formalism, with \(V^{(1,2)}\) being the partner potentials. Thus the eigenvalues and the eigenfunctions of the two Hamiltonians \(H_{1}\) and \(H_{2}\) are related, except that one of them has an extra bound state at zero energy so long as \(\phi(x\rightarrow\pm\infty)\) have opposite signs. Without any loss of generality we always choose \(\phi(x)\) such that the ground state energy of \(H_{1}\) is zero. In that case the eigenfunctions and the eigenvalues of the two Hamiltonians are related as follows
\[\Psi_{n}^{(2)}(x)=[E_{n+1}^{(1)}]^{-1/2}A\Psi_{n+1}^{(1)}(x)\,, \tag{13}\]
\[\Psi_{n+1}^{(1)}(x)=[E_{n}^{(2)}]^{-1/2}A^{+}\Psi_{n}^{(2)}(x)\,, \tag{14}\]
\[E_{n}^{(2)}=E_{n+1}^{(1)}\,,\ \ E_{0}^{(1)}=0\,, \tag{15}\]
while the scalar potential \(\phi(x)\) is related to the zero energy ground state eigenfunction by
\[\phi(x)=-\frac{d}{dx}[\ln\Psi_{0}^{(1)}(x)] \tag{16}\]
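The pairing of spectra implied by Eqs. (13)-(15) is easy to verify numerically for any given scalar potential \(\phi(x)\). The sketch below is our own illustration (not part of the original derivation): it discretizes the two Schrodinger-like operators with potentials \(V^{(1)}=\phi^{2}-\phi^{\prime}\) and \(V^{(2)}=\phi^{2}+\phi^{\prime}\) by finite differences on a box and compares their eigenvalues.

```python
import numpy as np

def partner_spectra(phi_vals, x):
    """Finite-difference spectra of the partner operators built from
    V^(1) = phi^2 - phi' and V^(2) = phi^2 + phi' (Eqs. (11)-(12))."""
    dx = x[1] - x[0]
    dphi = np.gradient(phi_vals, dx)
    lap = (np.diag(-2.0*np.ones(x.size)) + np.diag(np.ones(x.size-1), 1)
           + np.diag(np.ones(x.size-1), -1)) / dx**2
    E1 = np.linalg.eigvalsh(-lap + np.diag(phi_vals**2 - dphi))   # spectrum of V^(1)
    E2 = np.linalg.eigvalsh(-lap + np.diag(phi_vals**2 + dphi))   # spectrum of V^(2)
    return np.sort(E1), np.sort(E2)

# Example with phi(x) = x (an oscillator-like scalar potential, chosen only for illustration):
# x = np.linspace(-10, 10, 2000); E1, E2 = partner_spectra(x, x)
# One finds E1[0] ~ 0 and E1[n+1] ~ E2[n], i.e. Eq. (15), up to discretization and box-size error.
```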
Finally, it is well known [22, 23, 24] that given any potential \(V^{(1)}(x)\) with at least one bound state, it is straight forward to construct one continuous parameter family of scalar potentials \(\phi(x,\lambda)\) (which are nothing but the super potentials in the present case) and hence one continuous parameter family of potentials \(V^{(1)}(x,\lambda)\). Here \(\lambda\) can take any value \(>0\) or \(<-1\). In particular, the corresponding one parameter family of scalar potentials \(\phi(x,\lambda)\) are given by [23]
\[\phi(x,\lambda)=\phi(x)+\frac{d}{dx}\ln[I(x)+\lambda]\,, \tag{17}\]
where \(I(x)\) in terms of the normalized ground state eigenfunction \(\Psi_{0}^{(1)}(x)\) is given by
\[I(x)=\int_{-\infty}^{x}[\Psi_{0}^{(1)}(y)]^{2}\,dy\,. \tag{18}\]
The corresponding one continuous parameter family of strictly isospectral potentials are
\[V^{(1)}(x,\lambda)=V^{(1)}(x)-2\frac{d^{2}}{dx^{2}}\ln[I(x)+\lambda]\,, \tag{19}\]
with the corresponding normalized ground state eigenfunctions being
\[\hat{\Psi}_{0}^{(1)}(\lambda,x)=\frac{\sqrt{\lambda(1+\lambda)}\Psi_{0}^{(1) }(x)}{\big{(}I(x)+\lambda\big{)}} \tag{20}\]
Thus the ground state wave functions corresponding to Eq.(17) can be expressed as
\[\hat{\Psi}_{0}(\lambda,x)=\begin{bmatrix}\hat{\Psi}_{0}^{(1)}(\lambda,x)\\ 0\end{bmatrix} \tag{21}\]
The normalized excited-state (\(n=0,1,2....\)) eigenfunctions are obtained as
\[\hat{\Psi}_{n+1}(\lambda,x)=\begin{bmatrix}\hat{\Psi}_{n+1}^{(1)}(\lambda,x)\\ \Psi_{n}^{(2)}(x)\end{bmatrix} \tag{22}\]
where,
\[\hat{\Psi}_{n+1}^{(1)}(\lambda,x)=\Psi_{n+1}^{(1)}(x)+\frac{1}{E_{n+1}^{(1)}} \bigg{(}\frac{I^{\prime}(x)}{I(x)+\lambda}\bigg{)}\bigg{(}\frac{d}{dx}+\phi(x )\bigg{)}\Psi_{n+1}^{(1)}(x). \tag{23}\]
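Equations (17)-(23) translate directly into a short numerical recipe: given the normalized ground state \(\Psi_{0}^{(1)}\) and the scalar potential \(\phi\) on a grid, one obtains \(I(x)\), \(\phi(x,\lambda)\) and the deformed ground state. The fragment below is a minimal sketch of this construction (our own names and grid conventions), assuming NumPy/SciPy.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def isospectral_family(psi0, phi, x, lam):
    """One-parameter deformation of the scalar potential, Eqs. (17)-(20).
    psi0: normalized ground-state wavefunction Psi_0^(1) on the grid x,
    phi:  scalar potential phi(x) on the same grid, lam: the parameter lambda (>0 or <-1)."""
    I = cumulative_trapezoid(psi0**2, x, initial=0.0)        # Eq. (18)
    phi_lam = phi + psi0**2 / (I + lam)                       # Eq. (17), since I'(x) = psi0(x)^2
    psi0_lam = np.sqrt(lam*(1.0 + lam)) * psi0 / (I + lam)    # Eq. (20)
    return phi_lam, psi0_lam
```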
Finally, as \(\lambda\to 0\) and \(-1\), we get the Pursey and the AM like scalar potentials respectively which are isospectral to \(V^{(2)}(x)\). We summarize the key features of the two potentials as below:
(a) **The Pursey potential:**
For \(\lambda=0\), we get the Pursey-like Dirac potential from Eq. (17),
\[\hat{\phi}(\lambda=0,x)=\phi^{[P]}(x)=\phi(x)+\frac{d}{dx}\ln[I(x)]\,. \tag{24}\]
The corresponding eigenfunctions are
\[\Psi_{n}^{[P]}(x) = \hat{\Psi}_{n+1}^{(1)}(\lambda=0,x) \tag{25}\] \[= \Psi_{n+1}^{(1)}(x)+\frac{1}{E_{n+1}^{(1)}}\bigg{(}\frac{I^{ \prime}(x)}{I(x)}\bigg{)}\bigg{(}\frac{d}{dx}+\phi(x)\bigg{)}\Psi_{n+1}^{(1)}( x),\]
while the energy eigenvalues for \(\lambda=0\) are
\[E_{n}^{[P]}=E_{n}^{(2)}. \tag{26}\]
(b) **The Abraham-Moses potential:**
For \(\lambda=-1\), we get the AM-like Dirac potential from Eq. (17),
\[\hat{\phi}(\lambda=-1,x)=\phi^{[AM]}(x)=\phi(x)+\frac{d}{dx}\ln[I(x)-1]\,. \tag{27}\]
The corresponding eigenfunctions are
\[\Psi_{n}^{[AM]}(x) = \hat{\Psi}_{n+1}^{(1)}(\lambda=-1,x)\] \[= \Psi_{n+1}^{(1)}(x)+\frac{1}{E_{n+1}^{(1)}}\bigg{(}\frac{I^{\prime }(x)}{I(x)-1}\bigg{)}\bigg{(}\frac{d}{dx}+\phi(x)\bigg{)}\Psi_{n+1}^{(1)}(x).\]
while the energy eigenvalues for \(\lambda=-1\) are
\[E_{n}^{[AM]}=E_{n}^{(2)}. \tag{29}\]
Proceeding in this way, for a system with \(n\) bound states, one can also obtain an \(n\)-parameter family of Lorentz scalar potentials by iterating the same procedure \(n\) times.
## 3 Examples
We now consider three different RE scalar potentials, namely the RE radial oscillator, RE Scarf-I and RE generalized Poschl-Teller potentials, and obtain the one-parameter family of corresponding Lorentz scalar potentials with their solutions in terms of \(X_{m}\) exceptional orthogonal polynomials. We discuss the radial oscillator case in detail and summarize the results for the other two cases in Tables 1, 2 and 3.
### RE Radial oscillator type Lorentz scalar potential
In this case, we consider the \(\phi(x)\rightarrow\phi_{m,ext}(r,\omega,\ell)\) given by [20]
\[\phi_{m,ext}(r,\omega,\ell)=\phi_{con}(r,\omega,\ell)+\phi_{m,rat}(r,\omega, \ell);\quad 0\leq r\leq\infty \tag{30}\]
where
\[\phi_{con}(r,\omega,\ell)=\frac{\omega r}{2}-\frac{(\ell+1)}{r};\quad\ell>0 \tag{31}\]
and
\[\phi_{m,rat}(r,\omega,\ell)=\omega r\bigg{[}\frac{L_{m-1}^{(\alpha)}(-z)}{L_{ m}^{(\alpha-1)}(-z)}-\frac{L_{m-1}^{(\alpha+1)}(-z)}{L_{m}^{(\alpha)}(-z)} \bigg{]}, \tag{32}\]
are the conventional and rational terms respectively. Here, \(L_{m}^{(\alpha)}(z)\) is the Laguerre polynomial, \(z=\frac{\omega r^{2}}{2}\) and \(\alpha=l+\frac{1}{2}\).
If we use this \(\phi_{m,ext}(r,\omega,\ell)\) in equation (11), we get the Schrodinger like equation (9),
where
\[V^{(1)}(x)\to V^{(1)}_{m,ext}(r,\omega,\ell)=V_{con}(r,\omega,\ell)+V_{rat,m}(r, \omega,\ell) \tag{33}\]
with terms
\[V_{con}(r,\omega,\ell)=\frac{1}{4}\omega^{2}r^{2}+\frac{\ell(\ell+1)}{r^{2}}- \omega(\ell+\frac{3}{2}) \tag{34}\]
and
\[V_{m,rat}(r,\omega,\ell) = -\omega^{2}r^{2}\frac{L^{(\alpha+1)}_{m-2}(-z)}{L^{(\alpha-1)}_{ m}(-z)}+2\omega(z+\alpha-1)\frac{L^{(\alpha)}_{m-1}(-z)}{L^{(\alpha-1)}_{m}(-z)} \tag{35}\] \[+ 2\omega^{2}r^{2}\bigg{(}\frac{L^{(\alpha)}_{m-1}(-z)}{L^{(\alpha -1)}_{m}(-z)}\bigg{)}^{2}-2m\omega,\quad 0<r<\infty\]
which is a well-known rationally extended radial oscillator potential. The solution of the Eq. (9) in terms of \(X_{m}\)-exceptional Laguerre polynomials \(\hat{L}^{(\alpha)}_{n+m}(z)\) is thus given by
\[\Psi^{(1)}_{n}(x)\rightarrow\Psi^{(1)}_{n,m,ext}(r,\omega,\ell)=N^{(\alpha)}_ {n,m}f_{m}(\alpha,z)\hat{L}^{(\alpha)}_{n+m}(z), \tag{36}\]
where, \(f_{m}(\alpha,z)=\frac{r^{\alpha+\frac{1}{2}}\exp\left(-\frac{i}{2}\right)}{L ^{(\alpha-1)}_{m}(-z)}\) and the normalization constant
\[N^{(\alpha)}_{n,m}=\bigg{[}\frac{n!\omega^{(\alpha+1)}}{2^{\alpha}(\alpha+n+m )\Gamma(\alpha+n)}\bigg{]}^{1/2} \tag{37}\]
for \(m=1,2,...\) and \(n=0,1,2,...\). In terms of the classical Laguerre polynomials, the expression of \(\hat{L}^{(\alpha)}_{n+m}(z)\) is given as [24]
\[\hat{L}^{(\alpha)}_{n+m}(z)=L^{(\alpha)}_{m}(-z)L^{(\alpha-1)}_{n}(z)+L^{( \alpha-1)}_{m}(-z)L^{(\alpha)}_{n-1}(z);\quad n\geq m. \tag{38}\]
If we compare Eqs. (11) and (12), we observe that the \(V^{(2)}(x)\to V^{(2)}_{m,ext}(r,\omega,\ell)\) is the partner potential of \(V^{(1)}_{m,ext}(r,\omega,\ell)\) and hence the second component of the eigenfunction \(\Psi_{n}(x)\rightarrow\Psi_{n,m,ext}(r,\omega,\ell)\) i.e, \(\Psi^{(2)}_{n}(x)\rightarrow\Psi^{(2)}_{n,m,ext}(r,\omega,\ell)\) can easily be obtained using Eq. (6) i.e,
\[\Psi^{(2)}_{n,m,ext}(r,\omega,\ell)=N^{(\alpha+1)}_{n,m}f_{m}(\alpha+1,z)\hat{ L}^{(\alpha+1)}_{n+m}(z). \tag{39}\]
The energy eigenvalues are
\[E^{(1)}_{n+1}=E^{(2)}_{n}=2(n+1)\omega\,,\ \ E^{(1)}_{0}=0\,. \tag{40}\]
Here \(n=0\) corresponds to the ground state solutions. Using the first component of the ground state Dirac eigenfunction and following Eqs.(18) and (17), one can obtain the
integral \(I(x)\to I_{m}(r,\omega,\ell)\) and hence get the one parameter isospectral family of Lorentz scalar potentials \(\hat{\phi}_{m,ext}(\lambda,r,\omega,\ell)\) for given values of \(m\) and \(\lambda\).
#### 3.1.1 Illustration for \(m=1\)
As an illustration, we consider the \(X_{1}\) case (\(m=1\)) for which the expression for the extended scalar potential (30) looks like
\[\phi_{1,ext}(r,\omega,\ell) = \phi_{con}(r,\omega,\ell)+\phi_{1,rat}(r,\omega,\ell)\] \[= \frac{\omega r}{2}-\frac{(\ell+1)}{r}+\frac{4\omega r}{(1+2\ell +\omega r^{2})(3+2\ell+\omega r^{2})}\]
The ground state Dirac eigenfunction is given by
\[\Psi^{(1)}_{0,1,ext}(r,\omega,\ell)=N^{(\alpha)}_{0,1}\Bigg{(}\frac{3+2\ell+ \omega r^{2}}{1+2\ell+\omega r^{2}}\Bigg{)}r^{(1+\ell)}\exp\left(-\frac{\omega r ^{2}}{4}\right). \tag{42}\]
Now to get the explicit expression for \(I_{1}(r,\omega,\ell)\), we fix the particular values of potential parameters (say \(\omega=3\) and \(\ell=1\)) and get
\[I_{1}(r,3,1)=-\frac{e^{-\frac{3r^{2}}{2}}\sqrt{\frac{6}{\pi}}r\left(5+10r^{2}+ 3r^{4}\right)}{5\left(1+r^{2}\right)}+\mbox{erf}\left(\sqrt{\frac{3}{2}}r \right), \tag{43}\]
where the \(\mbox{erf}(r)\) is the error function. Thus, the expression for the one parameter family (17) of rational radial oscillator type scalar potentials are given by
\[\hat{\phi}_{1,ext}(\lambda,r,3,1) = -\frac{\zeta(r)+\xi(r)(\lambda+\mbox{erf}\left(\sqrt{\frac{3}{2}}r \right))}{2r\left(\vartheta(r)-\Upsilon(r)(\lambda+\mbox{erf}\left(\sqrt{ \frac{3}{2}}r\right))\right)},\] \[\mbox{where,}\quad\zeta(r) = \sqrt{\frac{6}{\pi}}r(100+145r^{2}+195r^{4}+117r^{6}+27r^{8}),\] \[\xi(r) = 5e^{\frac{3r^{2}}{2}}(-20-9r^{2}+12r^{4}+9r^{6}),\] \[\vartheta(r) = \sqrt{\frac{6}{\pi}}r(5+3r^{2})(5+10r^{2}+3r^{4})\] \[\mbox{and}\quad\Upsilon(r) = 5e^{\frac{3r^{2}}{2}}(5+8r^{2}+3r^{4}). \tag{44}\]
The corresponding normalized ground state eigenfunction has the form
\[\hat{\Psi}^{(1)}_{0,1,ext}(\lambda,r,3,1)=\frac{\sqrt{\lambda(1+ \lambda)}\Psi^{(1)}_{0,1,ext}(r,3,1)}{[I_{1}(r,3,1)+\lambda]}, \tag{45}\]
where \(\Psi^{(1)}_{0,1,ext}(r,3,1)\) and \(I_{1}(r,3,1)\) are given by Eqs. (42) and (43) respectively. One can also obtain the normalized excited-state eigenfunctions
\[\hat{\Psi}^{(1)}_{n,1,ext}(\lambda,r,3,1)=\Psi^{(1)}_{n+1}(r,3,1)+ \frac{1}{E^{(1)}_{n+1}}\bigg{(}\frac{I^{\prime}_{1}(r,3,1)}{I_{1}(r,3,1)+ \lambda}\bigg{)}\bigg{(}\frac{d}{dr}+\phi_{1,ext}(r,3,1)\bigg{)}\Psi^{(1)}_{n+ 1,ext}(r,3,1). \tag{46}\]
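The closed forms above can be checked numerically. The sketch below (illustrative only; it assumes NumPy/SciPy and uses our own variable names) builds the ground state of Eq. (42) for \(\omega=3\), \(\ell=1\), normalizes it numerically, recovers \(I_{1}(r,3,1)\) of Eq. (43) by quadrature, and then forms \(\hat{\phi}_{1,ext}(\lambda,r,3,1)\) through Eq. (17), which should reproduce Eq. (44).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.special import erf

omega, ell = 3.0, 1.0
r = np.linspace(1e-6, 8.0, 4000)

# Ground state of Eq. (42), normalized numerically on (0, infinity)
psi0 = ((3 + 2*ell + omega*r**2) / (1 + 2*ell + omega*r**2)) * r**(ell + 1) * np.exp(-omega*r**2/4)
I_raw = cumulative_trapezoid(psi0**2, r, initial=0.0)
psi0 /= np.sqrt(I_raw[-1])
I_num = I_raw / I_raw[-1]                                  # Eq. (18)

# Closed form of Eq. (43); max |I_num - I_closed| sits at the level of the quadrature error
I_closed = (erf(np.sqrt(1.5)*r)
            - np.exp(-1.5*r**2) * np.sqrt(6/np.pi) * r * (5 + 10*r**2 + 3*r**4) / (5*(1 + r**2)))

# One-parameter potential via Eq. (17): phi_hat = phi + I'/(I + lambda), with I' = psi0^2
phi = omega*r/2 - (ell + 1)/r + 4*omega*r/((1 + 2*ell + omega*r**2)*(3 + 2*ell + omega*r**2))
lam = 1.0
phi_hat = phi + psi0**2 / (I_num + lam)
```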
For \(\lambda=0\), we get the RE Pursey type Lorentz scalar potentials i.e,
\[\phi^{[P]}_{1,ext}(r)=-\frac{\zeta(r)+\xi(r)(\mbox{erf}\left( \sqrt{\frac{3}{2}}r\right))}{2r\left(\vartheta(r)-\Upsilon(r)(\mbox{erf} \left(\sqrt{\frac{3}{2}}r\right))\right)}\]
with the corresponding eigenfunctions being
\[\Psi^{[P]}_{n,1,ext}(r,3,1)=\bigg{[}1+\frac{1}{E^{(1)}_{n+1}} \bigg{(}\frac{I^{\prime}_{1}(r,3,1)}{I_{1}(r,3,1)}\bigg{)}\bigg{(}\frac{d}{dr }+\phi_{1,ext}(r,3,1)\bigg{)}\bigg{]}\Psi^{(1)}_{n+1,1,ext}(r,3,1) \tag{48}\]
while using Eq.(26) and (40) the energy eigenvalues are given by
\[E^{[P]}_{n}=6(n+1);. \tag{49}\]
Similarly, for \(\lambda=-1\), we get the RE AM type Lorentz scalar potentials
\[\phi^{[AM]}_{1,ext}(r,3,1)=-\frac{\zeta(r)+\xi(r)(-1+\mbox{erf} \left(\sqrt{\frac{3}{2}}r\right))}{2r\left(\vartheta(r)-\Upsilon(r)(-1+\mbox{ erf}\left(\sqrt{\frac{3}{2}}r\right))\right)}\]
with the corresponding eigenfunctions being
\[\Psi^{[AM]}_{n,1,ext}(r,3,1)=\bigg{[}1+\frac{1}{E^{(1)}_{n+1}} \bigg{(}\frac{I^{\prime}_{1}(r,3,1)}{I_{1}(r,3,1)-1}\bigg{)}\bigg{(}\frac{d}{ dr}+\phi_{1,ext}(r,3,1)\bigg{)}\bigg{]}\Psi^{(1)}_{n+1,1,ext}(r,3,1). \tag{50}\]
while the energy eigenvalues are
\[E^{[AM]}_{n}=E^{[P]}_{n}=6(n+1). \tag{51}\]
The plots of \(\hat{\Phi}_{1,ext}(\lambda,r,3,1)\), \(\Phi^{[P]}(r,3,1)\), \(\Phi^{[AM]}(r,3,1)\) and the normalized ground state eigenfunctions \(\hat{\Psi}_{0,1,ext}^{(1)}(\lambda,r,3,1)\) are shown in fig. \(1(a)\) to \(1(c)\) for different values of \(\lambda\).
**Fig.1**: (a) _Plot of \(\hat{\phi}_{1,ext}(\lambda,r,3,1)\) vs. \(r\) for positive \(\lambda(0,0.05,0.1,1\) and \(\infty)\). REP type scalar potential is shown for \(\lambda=0\)._
**Fig.1**: (b) _Plots of \(\hat{\phi}_{1,ext}(\lambda,r,3,1)\) for negative \(\lambda(-\infty,-1.1,-1.01,-1.001\) and \(-1)\). The REAM type scalar potential is shown for \(\lambda=-1\)._
**Fig.1**: (c) _Normalized ground-state wavefunctions \(\frac{\hat{\Psi}_{0,1}^{(1)}(\lambda,r,3,1)}{r}\) for some potentials (with positive \(\lambda\))._
In Fig. 1(a), we observe that as \(\lambda\) decreases from \(+\infty\) to \(0\), the Dirac potential starts developing maxima and the corresponding peaks shift towards \(r=0\). On the other hand, in Fig. 1(b), as \(\lambda\) increases from \(-\infty\) to \(-1\), the depth of the curves gradually decreases, and they become flat as \(\lambda\rightarrow-\infty\).
### RE Scarf-I and RE GPT type Lorentz scalar potentials
Similar to the RE radial oscillator scalar potential case, the general forms of the RE Scarf-I and the RE GPT type Lorentz scalar potentials, with their solutions in terms of \(X_{m}\) Jacobi polynomials1 and the corresponding energy eigenvalues, are summarized in tabular form in Table 1. Using these, the expressions for \(\phi_{1,ext}(x,A,B)\), the ground state eigenfunctions \(\Psi_{0,1,ext}^{(1)}(x,A,B)\) and the integral \(I_{1}(x,A,B)\), for the special case \(m=1\) and fixed values of the parameters \(A\) and \(B\) (RE Scarf-I: \(A=4\), \(B=2\); RE GPT: \(A=2\), \(B=5\)), are shown in Table 2. The one parameter family of potentials \(\hat{\phi}_{1,ext}(\lambda,x,A,B)\) and the normalized ground state eigenfunctions \(\hat{\Psi}_{0,1,ext}^{(1)}(\lambda,x,A,B)\) corresponding to these two potentials are given in Table 3. The expressions for the corresponding RE Pursey and RE AM potentials for the same sets of potential parameters are also given in Table 3. The one parameter (\(\lambda\)) family of rationally extended scalar potentials for different \(\lambda\) and their corresponding ground state eigenfunctions for the RE Scarf-I and RE generalized Poschl-Teller type scalar potentials are shown in Fig. 2 and Fig. 3 respectively.
**Fig.2**: (a) _Plots of \(\hat{\phi}_{1,\text{ext}}(\lambda,x,4,2)\) vs. \(x\) for positive \(\lambda\)\((0,0.001,0.1,1\) and \(\infty)\). The REP scalar potential is shown for \(\lambda=0\)._
**Fig.2**: (b) _Plots of \(\hat{\phi}_{1,\text{ext}}(\lambda,x,4,2)\) vs. \(r\) for negative \(\lambda(-\infty,-1.001,-1.01\) and \(-1)\). The REAM scalar potential is shown for \(\lambda=-1\)._
**Fig.2**: (c) _Plots of normalized ground-state eigenfunctions \(\hat{\Psi}^{(1)}_{0,1,ext}(\lambda,x,4,2)\) corresponding one parameter family of scalar potentials (with positive \(\lambda\))shown in Fig. \(2(a)\)._
\begin{tabular}{|c|l|l|} \hline \(X_{m}\) Case & RE Scarf-I & RE GPT \\ \hline \(\phi_{\text{com.}}(x,A,B)\) & \(\begin{array}{l}A\tan x-B\sec x;\\ 0<B<A-1,\end{array}\) & \(\begin{array}{l}A\coth r-B\text{cosech}r;\\ B>A+1>1,\end{array}\) & \(\begin{array}{l}A\coth r-B\text{cosech}r;\\ B>A+1>1,\end{array}\) \\ \hline \(\phi_{m,rat.}(x,A,B)\) & \(\begin{array}{l}-\frac{(\beta-\alpha+m-1)}{2}z^{\prime}(x)\big{(}\frac{P_{m }^{(-\alpha-1,\beta+1)}(z(x))}{P_{m}^{(\alpha-2,\beta)}(z(x))}\big{)}\\ -\frac{P_{m}^{(-\alpha,\beta)}(z(x))}{P_{m}^{(\alpha-1,\beta-1)}(z(x))};\,z(x) =\sin x,\end{array}\) & \(\begin{array}{l}-\frac{(\beta-\alpha+m-1)}{2}z^{\prime}(r)\big{(}\frac{P_{m }^{(-\alpha-1,\beta+1)}(z(r))}{P_{m}^{(\alpha-2,\beta)}(z(r))}-\] \\ \frac{P_{m}^{(-\alpha,\beta)}(z(r))}{P_{m}^{(\alpha-1,\beta-1)}(z(r))};\,z(x) =\cosh r\\ \alpha=A-B-\frac{1}{2},\,\beta=A+B-\frac{1}{2}\end{array}\) & \(\begin{array}{l}\alpha=-A+B-\frac{1}{2},\,\beta=-A-B-\frac{1}{2}\end{array}\) \\ \hline \(\Psi_{n,m,ext}^{(1)}(x,A,B)\) & \(\begin{array}{l}N_{n,m}^{(1)}\frac{(1-x(x))^{\frac{(A-B)}{2}}(1+z(x))(\frac{ A+B}{2})}{P_{m}^{(\alpha-1,\beta-1)}(z(x))}\times\\ \hat{P}_{n+m}^{(\alpha,\beta)}(z(x))\end{array}\) & \(\begin{array}{l}N_{n,m}^{(1)}\frac{(z-1)^{\frac{(B-A)}{2}}(1+z)^{-\frac{(B+A) }{2}}}{P_{m}^{(\alpha-1,\beta-1)}(z(r))}\times\\ \hat{P}_{n+m}^{(\alpha,\beta)}(z(r))\end{array}\) & \(\begin{array}{l}\times\\ \hat{P}_{n+m}^{(\alpha,\beta)}(z(r))\end{array}\) & \(\begin{array}{l}\times\\ \end{array}\) \\ \hline \(E_{n}^{(1)}\) & \((A+n)^{2}-A^{2};\) & \(n=0,1,2..\) & \(A^{2}-(A-n)^{2};\) & \(n=0,1,2..\) \\ \hline \end{tabular}
Table 1: Expressions of Extended Lorentz scalar potentials (\(\phi_{m,ext}(x,A,B)=\phi_{\text{com.}}(x,A,B)+\phi_{m,rat.}(x,A,B)\)), first component of the eigenfunctions (\(\Psi_{n,m,ext}^{(1)}(x,A,B)\)) and their energy eigenvalues (\(E_{n}^{(1)}\)) corresponding to RE Scarf-I and RE GPT like potentials. Here \(\hat{P}_{n+m}^{(\alpha,\beta)}(z)\) is the \(X_{m}\)-exceptional Jacobi orthogonal polynomial.
\begin{tabular}{|c|c|c|} \hline \(X_{1}Case\) & RE Scarf-I & RE GPT \\ \hline \(\phi_{1,ext}(x,A,B)\) & \(\Big{(}4\tan(x)-2\sec(x)\) & \(\Big{(}2\coth r-5\text{cosech}r+10\sinh r\times\) \\ & \(+\)\(\frac{8\cos(x)}{-71+8\cos(2x)+64\sin(x)}\Big{)}\) & \(\big{(}\frac{1}{10\cosh r-5}\)\(-\)\(\frac{1}{10\cosh r-3}\big{)}\Big{)}\) \\ \hline \(\Psi_{0,1,ext}^{(1)}(x,A,B)\) & \((\frac{8}{3}\sqrt{\frac{10}{39\pi}})\frac{(1-\varepsilon(x))(1+\varepsilon(x) )^{2}}{P_{1}^{(\frac{1}{2},\frac{2}{2})}(\varepsilon(x))}\hat{P}_{0+1}^{( \frac{3}{2},\frac{11}{2})}(z(x))\) & \((21\sqrt{\frac{11}{2}})\frac{(\varepsilon-1)^{(\frac{11}{2})}(\varepsilon+1) ^{-(\frac{11}{2})}}{P_{1}^{(\frac{1}{2},-\frac{2}{2})}(\varepsilon(x))}\hat{ P}_{0+1}^{(\frac{3}{2},-\frac{11}{2})}(z(r))\) \\ \hline \(I_{1}(x,A,B)\) & \(\Big{(}\frac{1}{32760\pi(-7+4\sin(x))}(-114660(\pi\)\(+2x)\)\(+244608\cos(x)\)\(+19\cosh(3r)\)\(+\cosh(4r)\Big{)}\)\(\times\) \\ \(19\cosh(3r)\)\(+\cosh(4r)\Big{)}\) & \(\frac{\text{sech}^{(\frac{1}{2})}(\varepsilon)\tanh^{(\frac{1}{2})}(\varepsilon )}{(-32+64\cosh(r))}\) \\ \(-42\cos(9x)\)\(+65520(\pi\)\(+2x)\sin(x)\) & \\ \(-125216\sin(2x)\)\(+3416\sin(4x)\) & \\ \(+5984\sin(6x)\)\(+\)\(141\sin(8x))\Big{)}\) & \\ \hline \end{tabular}
Table 2: The \(X_{1}\) cases (\(m=1\)) for RE Scarf-I and RE GPT scalar potentials, their ground state eigenfunctions \(\Psi_{0,1,ext}^{(1)}(x,A,B)\) and integral \(I_{1}(x,A,B)\) for fixed values of the parameters (\(A=4,B=2\)) and (\(A=2,B=5\)) respectively.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(X_{1}Case\) & RE Scarf-I & RE GPT \\ \hline \multirow{4}{*}{\(\frac{2(G(x)+M(x)+16380(137+25(x))(x+2x))\sec(x)}{(H(x)-114660(x+2x)+1456D(x) \cos(x))(-9+4\sin(x))}\)} & \(2\coth(r)-5\text{cosech}(r)+\frac{4\sinh(r)}{3+4\cosh(r)(-4+5\cosh(r))}\) \\ & \(-\frac{2(G(x)+M(x)+16380(137+25(x))(x+2x+2x))\sec(x)}{(-9+4\sin(x))(H(x)-11460(x +2x+2x)+1456D(x)\cos(x)+131040x\sin(x))}\) & \(+\frac{198(3-10\cosh(r)^{2}\text{Concept}(r)\sinh^{2}\left(\frac{r}{2}\right)}{(- 1+2\cosh(r))(32\cosh^{1}\left(\frac{r}{2}\right)(-1+2\cosh(r))+4(r)\sinh^{2} \left(\frac{r}{2}\right)})};\) \\ & & \(Q(r)=85+1103\cosh(r)+178\cosh(2r)+19\cosh(3r)+\cosh(4r)\) \\ \hline \multirow{4}{*}{\(\frac{2\coth(r)}{\left(H(x)-11460(x+2x)+1456D(x)\cos(x))(-9+4\sin(x))}\)} & \(\frac{\sqrt{4}(1+x)}{\left(\frac{10}{2}(x)+1456D(x)\cos(x))(21\sqrt{\frac{r}{2 }}\right)}\) & \(\frac{\sqrt{4}(1+x)}{\left(\frac{10}{2}(x)+1456D(x)\cos(x))(21\sqrt{\frac{r}{2 }}\right)}\) \\ & with \(z(x)=\sin x\) & with \(z(r)=\cosh r\) \\ \hline \multirow{4}{*}{\(\frac{2\coth(r)}{\left(H(x)-11460(x+2x)+16380(137+25(x))(x+2x))\sec(x)}\)} & \multirow{4}{*}{\(2\coth(r)-5\text{cosech}(r)+\frac{4\sinh(r)}{3+4\cosh(r)(-4+5\cosh(r))}\)} \\ & & \(50540\cos(5x)+29003\cos(7x)+2345\cos(9x)+84\cos(11x)+312840\sin(2x)\), \\ & & \(84\cos(11x)+312840\sin(2x)\), \\ & \(S(x)=-35\cos(2x)-107\sin(x)+4\sin(3x)\), \\ & \(G(x)=-208840\sin(4x)-49335\sin(6x)+685\sin(8x)+177\sin(10x)\), \\ & \(H(x)=59696\cos(3x)+11984\cos(5x)-854\cos(7x)-\) \\ & \(42\cos(9x)+3416\sin(4x)+5984\sin(6x)+141\sin(8x)\), \\ & \(D(x)=168-172\sin(x)+45(\pi+2x)\tan(x)\) & \\ \hline \end{tabular}
\end{table}
Table 3: The expressions for \(\hat{\phi}_{1,ext}(\lambda,x,A,B)\), the RE Pursey and the RE AM scalar potentials and their corresponding ground state eigenfunctions (\(\hat{\Psi}^{(1)}_{0,ext}(\lambda,x,A,B)\)) for RE Scarf-I and RE GPT scalar potential for fixed values of the parameters (\(A=4,B=2\)) and (\(A=2,B=5\)) respectively.
**Fig.3**: (a) _Plots of \(\hat{\phi}_{1,ext}(\lambda,r,2,5)\) vs. \(r\) for positive \(\lambda(0,0.001,0.01,1\) and \(\infty)\). The REP potential is shown for \(\lambda=0\)._
**Fig.3**: (b) _Plots of \(\hat{\phi}_{1,ext}(\lambda,r,2,5)\) vs. \(r\) for negative \(\lambda(-\infty,-1.1,-1.01,-1.001\) and \(-1)\). The REAM scalar potential is shown for \(\lambda=-1\)._
**Fig.3**: (c) _Normalized ground-state eigenfunctions \(\hat{\Psi}^{(1)}_{0,1,ext}(\lambda,r,2,5)\) for some potentials (with positive \(\lambda\))shown in Fig. \(3(a)\)._
## 4 Summary and Conclusions
In this work, we have considered the one-dimensional Dirac equation with three different RE Lorentz scalar potentials, i.e. the radial oscillator, the Scarf-I and the GPT scalar potentials, and for all of them we have constructed a one continuous parameter family (\(\lambda\)) of strictly isospectral RE Lorentz scalar potentials and obtained their solutions in terms of \(X_{m}\)-exceptional orthogonal polynomials. The RE Pursey and RE AM cases are also discussed, corresponding to the special values \(\lambda=0\) and \(-1\) respectively. The ground and excited state solutions of all these strictly isospectral RE scalar potentials are obtained explicitly. For the special case of \(m=1\), we have graphically shown the behavior of the strictly isospectral potentials as a function of the continuous parameter \(\lambda\).
|
2309.04861 | Exploring Music Genre Classification: Algorithm Analysis and Deployment
Architecture | Music genre classification has become increasingly critical with the advent
of various streaming applications. Nowadays, we find it impossible to imagine
using the artist's name and song title to search for music in a sophisticated
music app. It is always difficult to classify music correctly because the
information linked to music, such as region, artist, album, or non-album, is so
variable. This paper presents a study on music genre classification using a
combination of Digital Signal Processing (DSP) and Deep Learning (DL)
techniques. A novel algorithm is proposed that utilizes both DSP and DL methods
to extract relevant features from audio signals and classify them into various
genres. The algorithm was tested on the GTZAN dataset and achieved high
accuracy. An end-to-end deployment architecture is also proposed for
integration into music-related applications. The performance of the algorithm
is analyzed and future directions for improvement are discussed. The proposed
DSP and DL-based music genre classification algorithm and deployment
architecture demonstrate a promising approach for music genre classification. | Ayan Biswas, Supriya Dhabal, Palaniandavar Venkateswaran | 2023-09-09T19:01:12Z | http://arxiv.org/abs/2309.04861v2 | # Exploring Music Genre Classification: Algorithm Analysis and Deployment Architecture
###### Abstract
Music genre classification has become increasingly critical with the advent of various streaming applications. Nowadays, we find it impossible to imagine using the artist's name and song title to search for music in a sophisticated music app. It is always difficult to classify music correctly because the information linked to music, such as region, artist, album, or non-album, is so variable. This paper presents a study on music genre classification using a combination of Digital Signal Processing (DSP) and Deep Learning (DL) techniques. A novel algorithm is proposed that utilizes both DSP and DL methods to extract relevant features from audio signals and classify them into various genres. The algorithm was tested on the GTZAN dataset and achieved high accuracy. An end-to-end deployment architecture is also proposed for integration into music-related applications. The performance of the algorithm is analyzed and future directions for improvement are discussed. The proposed DSP and DL-based music genre classification algorithm and deployment architecture demonstrate a promising approach for music genre classification.
Music Genre Classification, Feature Extraction, Deep Learning
## I Introduction
Aside from providing entertainment, music is one of the easiest ways for people to communicate, a way to share emotions, and a place to keep memories. Emotions can be expressed succinctly and effectively through music. Depending on their mood and objective, listeners select different music at different times. As Internet technology flourishes, more and more music is available on personal computers, in music libraries, and via the Internet. Systems that can automatically analyze music, for example by categorizing it, searching through it, and creating playlists, are crucial for managing music efficiently. Several studies have proposed that music mood can also be used to classify and recommend music. A number of existing mood-based music recommendation systems categorize a set of moods and map them into discrete regions in two or three dimensions. We have also included a section that presents a possible architecture for deploying the solution to mobile applications or the web, since it is often unclear to developers of music applications how, or whether, such research solutions should be deployed.
The remainder of this paper is organized as follows: Section II reviews the related works. Section III presents the overall approach, the deep-learning model, the algorithm and the discussion on the final results. The deployment architecture of the music genre classification algorithm has been discussed in Section IV. Finally, Section V draws the concluding remarks of our paper.
## II Related Works
Recently, there has been an increase in the attention given to analyzing audio to extract different kinds of information, specifically in relation to music and emotions. Research has focused on developing automated methods for classifying music according to its mood or emotional content. Some different proposed approaches are there, including the use of spectral and harmonic features to infer the mood of a music piece. These features have been linked to human perception of music and moods, and have been used to classify music according to different mood labels using neural networks. This literature review suggests that the use of spectral and harmonic features, along with neural network-based classification methods, can be a promising approach for classifying music according to mood.
Bhat et al. have proposed a number of different approaches to solving the problem in their work [1], including using spectral and harmonic features to infer the mood of a given music piece. In particular, features such as rhythm, harmony, spectral feature, and others have been studied in order to classify songs according to their mood. This has been based on Thayer's model, which proposes that certain features of music are linked to human perception of music and moods.
In this paper [2] by Kim et al., a probability-based music mood model and its application to a music recommendation system have been presented. In this approach three types of mood-based music recommendation players, for PC and mobile devices, and the web has been implemented. This paper also shows the analysis result of users' satisfaction and mood reappearance test after listening to music.
According to the paper [3] by Patel et al., sound is the central element of their system and can be characterized by its pitch, quality, and loudness. The fundamental tone together with its harmonics gives rise to the different musical notes, and the Fourier transform has been used to break musical tones into sinusoidal waves.
Tzanetakis et al. in [4] demonstrated that music genre classification can be done by manipulating three types of features that represent the texture, rhythm, and pitch of the music. They evaluated the effectiveness and significance of these features by training machine learning models using real-world audio collections in their research.
In prior literature, [3], the development of an algorithm for the identification of musical notes was presented, yet the crucial aspect of deployment architecture remained elusive. This study aims to bridge that gap by proffering a comprehensive deployment schema that ensures optimal performance and minimal error in the identification of musical notes. The succeeding sections of this paper are dedicated to delving into the intricacies of the proposed implementation, providing a detailed account of its deployment and execution.
## III Our Approach
### _Methodology_
The music signal feature classification begins by recording a music sound and obtaining the corresponding waveform [5]. The frequency of the notes within the music is identified by analyzing the duration of each note in the time domain. An averaging process is applied to reduce the number of samples and fluctuations. The envelope of the original signal can also be extracted. Subsequently, thresholding is performed to establish a threshold value for identifying the maximum peaks in the signal. A technique of dynamic adaptive thresholding [6], which adjusts the threshold value based on the number of peaks, can be used for this purpose. Next, a width interval is selected to facilitate further operations. The width interval is chosen such that a larger number of peaks can be condensed within a smaller length. The width interval increases as the sampling frequency increases, as it is an essential aspect of the sampling process. To find the sine waves in a signal, a Fourier transform is applied, and zero padding is used to minimize error and get the Discrete Fourier Transform (DFT) of the signal. The frequency of the musical notes is identified by analyzing the frequency of the resulting signal from the DFT.
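As a rough illustration of this pipeline, the sketch below (assuming a mono signal `x` sampled at `fs` Hz; the smoothing width, threshold rule, and FFT length are illustrative choices rather than the paper's exact settings) combines envelope averaging, thresholding, and a zero-padded DFT to locate the dominant note frequency.

```python
import numpy as np

def estimate_note_frequency(x, fs, smooth_width=256, n_fft=2**16):
    # Envelope extraction: moving-average smoothing of the rectified signal
    envelope = np.convolve(np.abs(x), np.ones(smooth_width) / smooth_width, mode="same")

    # Simple adaptive threshold: keep samples whose envelope exceeds a fraction of the peak
    threshold = 0.5 * envelope.max()
    active = x[envelope > threshold]

    # Zero-padded DFT to refine frequency resolution, then pick the strongest peak
    spectrum = np.abs(np.fft.rfft(active, n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```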
### _Feature Extraction from Music Samples_
The focus on extracting as many features [7, 8, 9] as possible is motivated by the fact that this can make the subsequent classification task more straightforward. Some common features that are extracted from audio signals include tonality, pitch, temporal energy, harmonicity, spectral centroid, and Mel-Frequency Cepstral Coefficients (MFCC) [10].
There are two main types of features that describe an audio signal: global descriptors and instantaneous descriptors. Global descriptors are computed for the entire signal and help identify steady patterns in the signal, such as the total energy of an audio clip or the emotional tone of a song.
Instantaneous descriptors provide information about the dynamic and temporal variations of a signal. These descriptors are obtained by dividing the signal into small segments, called frames, and then applying pre-processing techniques to each frame. The features calculated for each frame are usually related to time, spectral shape, harmonic, and energy. This paper focuses on extracting instantaneous descriptors for each frame, discussed in the work of [11] and [10].
#### Iii-A1 Pitch
Pitch is a metric that describes the regularity of a sound wave or the perceived fundamental frequency of the signal. The true frequency of the signal can be determined precisely, but it may not match the perceived pitch due to the presence of harmonics. To determine the pitch, the auto-correlation sequence (ACS) for a given frame of the signal is calculated using a specific formula as per equation (1).
\[r(m)=\frac{1}{N}\sum_{n=0}^{N-|m|-1}x(n+|m|)x(n) \tag{1}\]
where \(N\) is the length of the frame in samples and \(x\) is the input signal, such as a speech or audio signal.
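A minimal sketch of pitch estimation from this auto-correlation sequence is shown below; the 50-2000 Hz search band is an illustrative assumption.

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=50.0, fmax=2000.0):
    N = len(frame)
    # Biased auto-correlation r(m) = (1/N) * sum_n x(n + |m|) x(n), as in Eq. (1)
    acs = np.correlate(frame, frame, mode="full")[N - 1:] / N
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    # The perceived pitch corresponds to the lag of the strongest correlation peak
    best_lag = lag_min + np.argmax(acs[lag_min:lag_max])
    return fs / best_lag
```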
#### Iii-A2 Temporal Energy
The temporal energy E, which is a measure of the strength of the signal over a specific frame of time, is calculated by finding the average of the squared values of the signal over that frame. This is expressed mathematically in equation (2). The energy feature can be used to distinguish between voiced frames, which contain significant information about the signal, and unvoiced frames, which are typically silent or noise-like, by comparing the energy values to a fixed threshold value.
\[E=\frac{1}{N}\sum_{n=0}^{N-1}x^{2}(n) \tag{2}\]
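A short sketch of the frame-energy computation of Eq. (2) and the voiced/unvoiced decision described above; the threshold value is an illustrative assumption.

```python
import numpy as np

def frame_energy(frame):
    # Eq. (2): mean of the squared samples over the frame
    return np.mean(frame ** 2)

def is_voiced(frame, threshold=1e-4):
    # Frames whose energy exceeds a fixed threshold are treated as voiced
    return frame_energy(frame) > threshold
```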
Fig. 1: Architecture of the Feature Extractor and Classifier
Fig. 2: Plot of the pitch for all genres of music samples
#### Iii-B3 Tonality Measure
A significant amount of background noise or sensor noise can obscure the true tone of an audio or speech signal. Tonality is a metric that describes how much of the signal has a tone-like or noise-like quality. The Spectral Flatness Measure (SFM) is used to compute the tonality of each frame. It is defined as the ratio of the geometric mean to the arithmetic mean of the power spectrum P, as per equations (3) to (5).
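Equations (3) to (5) are not reproduced in this text; the standard Spectral Flatness Measure consistent with the description above, written here for the power-spectrum bins \(P(k)\) of a frame, is

\[SFM=\frac{\left(\prod_{k=0}^{N-1}P(k)\right)^{1/N}}{\frac{1}{N}\sum_{k=0}^{N-1}P(k)}\]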
#### Iii-B4 Spectral Centroid
The spectral centroid can be defined as the mean of the distribution of frequency components for a given frame of the signal. This mean can be calculated using either the linear frequency or the Bark-scale as parameters. The weights for each parameter (magnitude of FFT components) are applied according to Eq. (6)
\[SC=\frac{\sum\limits_{k=0}^{N-1}kX^{2}(k)}{\sum\limits_{k=0}^{N-1}X^{2}(k)} \tag{6}\]
\[SC_{b}=\frac{\sum\limits_{j=0}^{N-1}b_{j}(b_{j}-b_{j-1})X^{2}(j)}{\sum\limits_{ j=0}^{N-1}(b_{j}-b_{j-1})X^{2}(j)} \tag{7}\]
The signal's sound is affected by its spectral centroid. A higher spectral centroid indicates a brighter, happier sound, while a lower spectral centroid indicates a duller, gloomier sound. This is evident in Figure 5. The spectral centroid computed over the Bark scale, given in Eq. (7), is a psycho-acoustically adapted measure of this property.
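A minimal sketch of the linear-frequency spectral centroid of Eq. (6), assuming `X` holds the FFT magnitudes of one frame:

```python
import numpy as np

def spectral_centroid(X):
    power = X ** 2
    k = np.arange(len(X))
    # Eq. (6): power-weighted mean of the FFT bin indices
    return np.sum(k * power) / np.sum(power)
```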
#### Iii-B5 Harmonicity
Harmonicity features are a set of characteristics used to analyze the periodic properties of a signal. These features are based on two primary measures: the harmonicity ratio and the fundamental frequency. The
Fig. 4: Tonality measurement and its spectrogram for genre **disco**
Fig. 5: Tonality measurement and its spectrogram for genre **reggae**
Fig. 3: Temporal Energy measurement of a random music signal from GTZAN dataset
Fig. 6: Spectrograms of spectral centroids of two different genres of music samples
harmonicity ratio is a metric that reflects how regularly the signal oscillates, while the fundamental frequency is the frequency that gives the most coherent explanation of the signal's spectrum. The fundamental frequency is computed using Goldstein's algorithm [12], which utilizes a likelihood approximation method to obtain the fundamental frequency.
#### Iii-B6 Mel-Frequency Cepstral Coefficients
The MFCC, or Mel-Frequency Cepstral Coefficients [13], is a method for representing the shape of a spectrum using a limited number of coefficients. This method is based on the cepstrum, which is the Fourier transform of the logarithm of the spectrum. However, the MFCC uses a variation of the cepstrum that is calculated on Mel-frequency bands rather than the traditional Fourier spectrum. This variation, known as the Mel-cepstrum, is particularly effective at capturing the characteristics of the mid-frequency range of a signal. The calculation of the Mel-cepstrum is described by equation (8).
\[f_{mel}=2595\log_{10}\left(1+\frac{f}{700}\right) \tag{8}\]
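In practice the MFCCs can be computed with librosa, the library used for feature extraction later in this paper; the file name and the choice of 13 coefficients below are illustrative.

```python
import numpy as np
import librosa

# Hypothetical GTZAN clip; sr=None keeps the native sampling rate
y, sr = librosa.load("blues.00000.wav", sr=None)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, num_frames)
mfcc_mean = np.mean(mfcc, axis=1)                    # per-clip summary vector
```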
#### Iii-B7 Time Domain Zero Crossings
Zero-crossings in the time domain represent the noisiness of the signal. They are calculated using the sign function, which returns 0 for negative arguments and 1 for positive arguments. For a time-domain signal x[n], the time-domain zero crossings of frame t are calculated as per Eq. (9).
\[TDZC_{t}=\frac{1}{2}\sum_{n=1}^{M}|sign[x[n]]-sign[x[n-1]]| \tag{9}\]
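A short sketch of Eq. (9), assuming the sign function takes values in {-1, +1} so that the factor 1/2 yields one count per crossing:

```python
import numpy as np

def zero_crossings(frame):
    s = np.sign(frame)                       # -1, 0 or +1 per sample
    return 0.5 * np.sum(np.abs(np.diff(s)))  # sums |sign[n] - sign[n-1]| / 2 as in Eq. (9)
```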
### _Preprocessing of Dataset_
The GTZAN dataset [4] has been used for the work. The GTZAN dataset is a public domain dataset that consists of 1000 music signals. The music signals are of 30 seconds in duration. The music signals are divided into 10 genres. The genres are blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, and rock. The GTZAN dataset is divided into a training set and a test set. The training set consists of 800 music signals and the test set consists of 200 music signals.
### _Classification Algorithm_
The classification of music genres is a challenging task due to the inherent variability and subjectivity of music. In this study, we proposed a machine-learning algorithm for music genre classification using the GTZAN dataset. The algorithm is implemented using the Python programming language and several libraries such as numpy, pandas, os, librosa, sklearn, and keras.
The dataset consists of 1000 audio files of 10 different music genres, with 100 samples per genre. The dataset was preprocessed by extracting the filenames of the audio files, extracting the labels of the audio files, and encoding the labels using the LabelEncoder. The labels were then converted to a categorical format. The features of the audio files were extracted using the librosa library, which is a library for music and audio processing in Python. The Mel-Frequency Cepstral Coefficients (MFCCs) were used as the feature representation of the audio files. The MFCCs were extracted from the audio data and the mean of the MFCCs was taken across the time axis. The feature data was then converted to a numpy array. The classification model was built using the Keras library, which is a high-level neural networks API, written in Python and capable of running on top of TensorFlow. The model was implemented as a sequential model with two dense layers. The first dense layer had 256 neurons and the activation function used was ReLU. A 0.5 dropout rate was used for reducing overfitting. The second dense layer had 9 neurons, corresponding to the number of genres in the dataset, and the activation function used was softmax.
The model was compiled using the categorical cross-entropy
Fig. 8: Deep-Learning Model of the Classification System
Fig. 7: Block diagram of the classification system
loss function and the Adam optimizer. The model was trained using the feature data and labels, with a batch size of 40 and 20 epochs. The validation split was set to 10% to evaluate the performance of the model on unseen data during training. The proposed algorithm achieved an overall accuracy of 81% on the test data, demonstrating its effectiveness in classifying music genres. The algorithm can be further improved by using different feature representations, and by using more advanced neural network architectures such as convolutional neural networks.
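A condensed sketch of this classifier is given below; the random arrays stand in for the extracted per-clip mean MFCC vectors and genre labels, and the hyperparameters (256 units, 0.5 dropout, batch size 40, 20 epochs, 10% validation split) follow the description above.

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.utils import to_categorical

# Placeholder inputs standing in for the per-clip mean MFCC features and genre labels
features = np.random.randn(100, 40).astype("float32")
labels = ["blues", "classical", "country", "disco"] * 25

encoder = LabelEncoder()
y = to_categorical(encoder.fit_transform(labels))

model = Sequential([
    Dense(256, activation="relu", input_shape=(features.shape[1],)),
    Dropout(0.5),
    Dense(y.shape[1], activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(features, y, batch_size=40, epochs=20, validation_split=0.1)
```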
### _Results and Discussion_
The trained model was used to predict the test data and the classification report was generated using the metrics library. The classification report provides the precision, recall, f1-score, and support for each genre, which are useful in evaluating the performance of the model. The diagonal elements of the confusion matrix are highlighted in the heatmap, where the darker color represents the higher count of correctly classified observations. The off-diagonal elements represent the misclassification, whereas the lighter color represents the lower count of misclassification.
It is also important to note that the accuracy of the model can be calculated using the formula (correctly classified observations / total observations) and it can be computed using the diagonal elements of the matrix.
## IV Deployment Architecture
Deployment is an essential step in the development of any machine learning system. In this section, we propose a deployment architecture for a music genre classification system that utilizes the cloud services provided by Amazon Web Services (AWS). The proposed architecture is designed to be scalable, durable, and easy to access for users. The first component of the proposed architecture is Amazon S3 [14]. S3 is a fully managed object storage service that provides scalable and durable storage for audio files and metadata. This allows for easy management and retrieval of the data needed for training and inference. Amazon SageMaker [15] is the second component of the proposed architecture. SageMaker provides a fully managed platform for building, training, and deploying machine learning models. With SageMaker, we can train a model for music genre classification using the audio files and metadata stored in S3. Once the model is trained, it will be deployed to a SageMaker endpoint for inference. The endpoint can be accessed via an API, allowing users to submit audio files for classification. Amazon API Gateway [16] is used to create a RESTful API for the SageMaker endpoint, providing a convenient way for users to access the classification service. The classification results and metadata will be stored in Amazon DynamoDB [17], a fully managed NoSQL database service. DynamoDB provides high performance and scalability, making it well-suited for storing large amounts of data generated by a music genre classification system.
To enable fast and powerful search capabilities for the classification results and metadata, an Elasticsearch index will be created using Amazon Elasticsearch (OpenSearch [18]) Service. Elasticsearch is a popular search engine that is well-suited for handling large amounts of data. Finally, Amazon CloudFront [19] will be used to distribute the classification results and metadata to users. CloudFront is a content delivery network (CDN) that ensures low latency and high availability of the results, making it easy for users to access the classi
Fig. 9: Genre classification confusion matrix
fication results from anywhere in the world. In conclusion, the proposed architecture is designed to provide a scalable, durable, and easy-to-access music genre classification system using the cloud services provided by AWS. The architecture includes various services like S3, SageMaker, API Gateway, DynamoDB, Elasticsearch, and CloudFront. These services together enable the system to handle a large amount of data, train and deploy models effectively, and provide fast and accurate classification results to end users.
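A hedged sketch of the inference path through this architecture is shown below: an audio clip stored in S3 is sent to the SageMaker endpoint and the predicted genre is written to DynamoDB. The bucket, endpoint, and table names and the response format are hypothetical placeholders, not part of the proposed system's specification.

```python
import json
import boto3

s3 = boto3.client("s3")
runtime = boto3.client("sagemaker-runtime")
table = boto3.resource("dynamodb").Table("genre-classifications")  # hypothetical table name

# Fetch the raw audio bytes from S3 (hypothetical bucket and key)
audio_bytes = s3.get_object(Bucket="music-clips", Key="clips/track001.wav")["Body"].read()

# Invoke the deployed SageMaker endpoint for genre prediction
response = runtime.invoke_endpoint(
    EndpointName="music-genre-classifier",        # hypothetical endpoint name
    ContentType="application/octet-stream",
    Body=audio_bytes,
)
prediction = json.loads(response["Body"].read())  # assumed JSON response, e.g. {"genre": "jazz"}

# Persist the result for later search and retrieval
table.put_item(Item={"track_id": "track001", "genre": prediction["genre"]})
```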
## V Conclusion
The proposed DSP-based music genre classification system was found to be effective in classifying various types of music. The system was able to correctly classify different types of music with an accuracy of 80%. The proposed system can be used to classify different types of music in a real-time scenario, even when the music is played at different speeds. The proposed system can also be used to automatically generate playlists for users based on their music preferences, and it will help researchers to better understand the relationship between music and human emotions.
|
2309.10814 | Natural Language Embedded Programs for Hybrid Language Symbolic
Reasoning | How can we perform computations over natural language representations to
solve tasks that require symbolic and numeric reasoning? We propose natural
language embedded programs (NLEP) as a unifying framework for addressing
math/symbolic reasoning, natural language understanding, and instruction
following tasks. Our approach prompts a language model to generate full Python
programs that define functions over data structures which contain natural
language representations of structured knowledge. A Python interpreter then
executes the generated code and prints the output. Despite using a task-general
prompt, we find that this approach can improve upon strong baselines across a
range of different tasks including math and symbolic reasoning, text
classification, question answering, and instruction following. We found that
the generated programs are interpretable since they outline the exact reasoning
process followed by the program interpreter. | Tianhua Zhang, Jiaxin Ge, Hongyin Luo, Yung-Sung Chuang, Mingye Gao, Yuan Gong, Xixin Wu, Yoon Kim, Helen Meng, James Glass | 2023-09-19T17:54:21Z | http://arxiv.org/abs/2309.10814v2 | # Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning
###### Abstract
How can we perform computations over natural language representations to solve tasks that require symbolic and numeric reasoning? We propose _natural language embedded programs_ (NLEP) as a unifying framework for addressing math/symbolic reasoning, natural language understanding, and instruction following tasks. Our approach prompts a language model to generate full Python programs that define functions over data structures which contain natural language representations of structured knowledge. A Python interpreter then executes the generated code and prints the output. Despite using a task-general prompt, we find that this approach can improve upon strong baselines across a range of different tasks including math and symbolic reasoning, text classification, question answering, and instruction following. We further find the generated programs are often interpretable and enable post-hoc verification of the intermediate reasoning steps.1
Footnote 1: Source code for the project is available at [https://github.com/luohongyin/LangCode](https://github.com/luohongyin/LangCode).
Figure 1: An example NLEP generated by GPT-4 for the instruction "How many secretaries-general of United Nations are not from Europe?"
## 1 Introduction
Solving complex language tasks often requires performing computations over natural language representations. For language-based reasoning, chain-of-thought prompting (CoT; Wei et al., 2022) has emerged as a promising approach for surfacing the symbolic reasoning capabilities of large language models (LLMs). However, certain types of computations (e.g., arithmetic) are unnatural to perform in pure language space, and hence present difficulties for LLMs. General-purpose programming languages, on the other hand, provide convenient abstractions as well as predefined libraries and functions for natively implementing many types of symbolic computations, and there has been much recent work on interleaving program calls within CoT-style reasoning to extend the capabilities of LLMs. While promising, existing methods are generally limited to narrow types of tasks such as math and symbolic reasoning (Chen et al., 2022; Cai et al., 2023; Gao et al., 2023), simple API calling (Schick et al., 2023; Paranjape et al., 2023; Liang et al., 2023a), and database accessing (Cheng et al., 2022). These works moreover rely on task-specific prompts which are hard to generalize across datasets.
This work describes a task-general approach for combining the language-based reasoning capabilities of LLMs with symbolic computations enabled by the use of programs. Specifically, we prompt LLMs to generate _natural language embedded programs_ (NLEPs), which are fully executable Python programs containing appropriate package importing, structured natural language representations of knowledge, function definitions for problem solving, and response printing. The generated NLEP is then executed using a Python interpreter that captures the standard output of the program as the response. An example of an NLEP generated by GPT-4 is shown in Figure 1.
NLEPs use code as a scaffold to reason over natural language representations of knowledge. This makes our approach different from ToolFormer (Schick et al., 2023) and language model as tool maker (LATM; Cai et al., 2023), which instead use language as the scaffold and interleave API calls within natural language sentences during LLM generation. Compared to program-of-thought (PoT; Chen et al., 2022) and program aided language models (PAL; Gao et al., 2023), which mainly focus on math and symbolic problems, NLEPs utilize more flexible programming elements including packages, data types/structures, and functions. This design allows NLEP to solve more general tasks such as question answering over factual knowledge. Existing works also generally require task-specific prompts (e.g., to demonstrate tool usage). In contrast, we find that we can generate NLEPs for various tasks by feeding task-general demonstrations as prompts to an LLM.
Experiments across math and symbolic reasoning, question answering and instruction following, and text classification tasks demonstrate that NLEPs can potentially serve as a unifying framework for tackling a variety of tasks within the prompt-based learning framework. In particular, our results suggest that appropriately-prompted LLMs can make rich use of programming structures to tackle tasks that require a combination of language-based reasoning and symbolic computations.
## 2 Approach: NLEP Prompting
In this section we describe _natural language embedded programs_ (NLEPs) in more detail and present a simple prompting framework for NLEP generation. We also describe instantiations of NLEPs for different types of tasks.
**Natural language embedded programs (NLEPs).** An NLEP is a program containing both programming code and natural language. NLEPs use natural language in several different ways. First, it uses natural language comments to guide step-by-step program generation. Second, language is used to represent structured knowledge through Python's native data structures (e.g., dictionaries and lists). Finally, an NLEP uses language to print fluent responses to the user input by constructing a standard output string containing references to program variables.
The hybrid language-symbolic design of NLEP enables generalized problem solving for natural language, math, symbolic reasoning, and API calling tasks, which have traditionally been tackled by separate mechanisms. This approach combines the benefits of language-based reasoning with program synthesis: comments and knowledge in natural language improve program generation, while the structured/symbolic reasoning powered by program interpreters provides more accurate computations than would have been obtained via direct decoding from LLMs.
An example of an NLEP for answering a question is shown in Figure 1. In the generated program, each section is preceded by comments in natural language, and the defined counting function uses knowledge stored in a key-value dictionary (which itself is generated from GPT-4's internal knowledge) to find the correct answer. Finally, the answer is printed through a natural language response. In this example, we generated 5 independent NLEPs and found that they achieve 100% accuracy, compared to 60% for ChatGPT-4 and 40% GPT-4 API.
**NLEP structure.** More generally, each NLEP contains four sections: importing necessary libraries, defining variables containing structured knowledge, implementing problem-solving functions, and printing the response in natural language. Instead of providing direct solutions for each task, we guide the model to arrive at a solution following this four-step process. As an example, in Figure 2 an NLEP answers the question by constructing a structured knowledge dictionary containing the birthday and start date of the US presidents. To recognize the weekdays, the program utilizes predefined functions in the datetime package. The selected answers are stored in a list and then embedded into an output template. The NLEP also handles the situation when no answer is found. The correct answer is then printed by the NLEP.
**Task-general demonstration prompts.** As is standard in chain-of-thought prompting (Nye et al., 2021; Wei et al., 2022), our approach uses demonstration prompts for NLEP generation. However, unlike previous approaches our demonstrations are not task-specific. For example, for all classification tasks we consider we use the _same_ demonstration prompt (derived from SST2). Similarly, we use mostly the same prompt for our math and symbolic reasoning tasks. This task-general prompt is similar in spirit to zero-shot chain-of-thought prompting (Kojima et al., 2023) which adds a task-agnostic prompt ("Let's think step-by-step") to elicit the reasoning capabilities of LLMs in
Figure 2: A generated NLEP correctly answers the given question while ChatGPT-4 obtains an incorrect answer (link). This NLEP uses the date-weekday conversion tool in the datetime package, constructs structured knowledge about US presidents, implements a selection function, and outputs natural language responses depending on the function output.
a task-agnostic way. The prompts used for the various tasks are given in Table 1, and the exact prompts are given in Appendix B. In summary, we use 4 different demonstration prompts across 16 tasks, each of which works well within a task category. Thus, while the proposed method is not fully task-agnostic in the strictest sense of the term, it is still more flexible than previous approaches that combine program synthesis with chain-of-thought prompting (Chen et al., 2022; Gao et al., 2023), which use examples from the dataset to craft prompts.
**Programmatic reasoning for natural language understanding tasks.** Prior works on combining program synthesis with LLM-based reasoning have generally focused on math and symbolic reasoning tasks (Chen et al., 2022; Gao et al., 2023), and it has not been clear how such methods could be extended to address natural language understanding (NLU) tasks. We show that NLEPs can be straightforwardly extended to tackle more language-based tasks.
For question answering, we directly apply NLEP prompting where the target output is constructed by the generated programs as shown in previous figures. Classification tasks, on the other hand, are handled by a different type of NLEP consisting of a decision tree. Each node of the decision tree is annotated by a simple natural language sentence, and the Yes/No decisions at each node are handled in a zero-shot way by an entailment classifier, which has in general been shown to be an effective approach to zero-shot text classification (Obamuyide and Vlachos, 2018; Condoravdi et al., 2003; Ge et al., 2023). Concretely, given the tree we compute the entailment score between the input and the language description of each node and traverse the decision tree until a leaf node is reached. We emphasize that the topology of the tree and the language description of each node is generated by the prompted LLM. The demonstration prompt for classification tasks is given by a manually constructed example for SST2 (Wang et al., 2018). We find that this prompt can generate NLEPs containing sensible decision trees for various classification tasks without requiring task-specific examples. An example of the generated program and the corresponding decision tree is shown in Figure 3.
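A minimal sketch of this inference procedure is given below. The entailment scorer is instantiated with an off-the-shelf RoBERTa NLI checkpoint, and the toy two-node tree and the 0.5 decision threshold are illustrative assumptions, not the tree generated by GPT-4 in Figure 3.

```python
from transformers import pipeline

# Zero-shot entailment scorer backed by a RoBERTa NLI model
nli = pipeline("zero-shot-classification", model="roberta-large-mnli")

def entails(text, proposition, threshold=0.5):
    result = nli(text, candidate_labels=[proposition])
    return result["scores"][0] > threshold

# Toy decision tree: each internal node holds a natural-language proposition
tree = {
    "question": "This text expresses a positive feeling.",
    "yes": {"label": "joy"},
    "no": {
        "question": "This text expresses fear or anxiety.",
        "yes": {"label": "fear"},
        "no": {"label": "sadness"},
    },
}

def classify(text, node=tree):
    # Traverse the tree until a leaf (class label) is reached
    while "label" not in node:
        node = node["yes"] if entails(text, node["question"]) else node["no"]
    return node["label"]

print(classify("I can't stop smiling today!"))
```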
## 3 Experiments
We evaluate natural language embedded programs (NLEPs) on 16 tasks across three broad task categories. The tasks and corresponding prompts are summarized in Table 1.
**Math and symbolic reasoning** tasks include Tracking Shuffled Objects (7), Dyck Language, Word Sorting and Chinese Remainder Theorem from BigBench (Srivastava et al., 2023), Scheduling Meeting task from Cai et al. (2023), GSM-Hard benchmark of math word problems from Gao et al. (2023), and Game of 24 (Yao et al., 2023a). We use two examples for all tasks except for Game of
Figure 3: A decision tree structure generated within an NLEP for emotion classification based on task description using an example program for SST2 as the prompt. The branching of each node in the decision tree is decided by a RoBERTa (Liu et al., 2019) text entailment model according to the proposition constructed by the node description and the input text. Experiments show that this language-based decision tree generated by an NLEP outperforms GPT-3 and entailment-based multi-class prediction (Ge et al., 2023) without needing any task-specific examples (i.e., exemplars specific to the emotion classification dataset).
24, for which we applied a word sorting example to elicit stronger game-playing reasoning ability. The exact NLEP prompts we used are given in Appendix B.1 and B.2.
**Question answering** tasks include the StrategyQA (Geva et al., 2021), TruthfulQA (Lin et al., 2022), and VicunaQA (Chiang et al., 2023) benchmarks. StrategyQA requires models to answer multi-hop questions with "Yes" or "No". TruthfulQA and VicunaQA contain questions and instructions requiring free-form responses. The evaluation metrics on question answering focus on the accuracy, relevance, and factuality of the generated answers. The prompts in Appendix B.1 are used for StrategyQA. For TruthfulQA and VicunaQA, we added an example with a longer response to encourage more detailed response generation (Appendix B.3).
**Text classification** tasks includes tasks that require understanding of both natural language inputs and labels. We evaluate NLEP on movie-review classification (SST2; Socher et al., 2013), linguistic acceptance (COLA; Warstadt et al., 2019), emotion classification (Saravia et al., 2018), amazon review (Ni et al., 2019), hate speech detection (de Gibert et al., 2018), and stereotypes recognition (Sap et al., 2019). We use the prompts in Appendix B.1 for model-free classification. For decision tree generation, the prompts in Appendix B.4 are applied.
### Math and Symbolic Reasoning
We compare NLEP prompting with chain-of-thought (CoT; Wei et al., 2022), program-of-thought (PoT; Chen et al., 2022), and LLMs as tool makers (LATM; Cai et al., 2023). We also compare against tree-of-thought (ToT; Yao et al., 2023) on the Game of 24 benchmark, where ToT outperforms CoT by a significant margin (but requires many more calls to the LLM). We evaluate CoT and PoT with both task-general and task-specific demonstrations. Since LATM needs in-domain input-output pairs to create tools, we only report the results with task-specific examples for LATM.
**Task-general prompting**. For task-general prompts we use two examples as the in-context demonstration for the math and symbolic reasoning benchmarks (see Table 1 and Appendix B). For CoT, we present two examples with intermediate reasoning represented in natural language rather than as programs. Our task-general PoT implementation follows the same math and symbolic reasoning style as Chen et al. (2022) and Gao et al. (2023) but, as an ablation, omits the step-by-step programming scheme of NLEP.
**Task-specific prompting baselines.** We report the task-specific prompting performance as an "upper bound" for each task. For CoT, we use the same prompting settings (from 3 to 8-shot) adopted in previous studies (Cobbe et al., 2021; Cai et al., 2023; Fu et al., 2023). For PoT, we use the same in-context examples as in the task-specific CoT examples, but provide intermediate reasoning steps in Python code. On the GSM-Hard benchmark, we adopt the demonstrations (9-shot) for GSM8K used in Chen et al. (2022). For the Chinese Remainder Theorem and Scheduling Meeting benchmarks, we construct the in-context examples with the first three successful instances of task-general PoT. For LATM, we evaluate its performance on Tracking Shuffled Objects (7) using the provided
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Domain & Dataset & Split & Dataset Size & Prompt & Output format \\ \hline \multirow{8}{*}{Math and Symbolic Reasoning} & Tracking Shuffled Objects (7) & test & 250 & B.1 & Option \\ & Dyck Language & test & 250 & B.1 & Free Form \\ & Word Sorting & test & 250 & B.1 & Free Form \\ & Chinese Remainder Theorem & test & 250 & B.1 & Number \\ & Scheduling Meeting & test & 250 & B.1 & Free Form \\ & GSM-Hard & test & 1319 & B.1 & Number \\ & Game of 24 & test & 100 & B.2 & Free Form \\ \hline \multirow{4}{*}{Question Answering} & StrategyQA & dev & 229 & B.1 & Yes/No \\ & TruthfulQA & test & 817 & B.3 & Free Form \\ & VicunaQA & test & 80 & B.3 & Free Form \\ \hline \multirow{8}{*}{Text Classification} & SST2 & val & 872 & B.4 & Class \\ & Cola & val & 1.04k & B.4 & Class \\ \cline{1-1} & Emotion-Classification & val & 2k & B.4 & Class \\ \cline{1-1} & Amazon Review & val & 5k & B.4 & Class \\ \cline{1-1} & Hate-Speech & val & 478 & B.4 & Class \\ \cline{1-1} & Social Bias Frame & val & 16.7k & B.4 & Class \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary descriptions of the various tasks considered in this work.
tool2 and cite the results for other tasks from Cai et al. (2023). Implementation details are presented in Appendix C.
Footnote 2: [https://github.com/ctllllll/LLM-ToolMaker/blob/main/tools/tracking_shuffled_objects_seven_objects_json](https://github.com/ctllllll/LLM-ToolMaker/blob/main/tools/tracking_shuffled_objects_seven_objects_json)
Program synthesis approaches (PoT and NLEP) may sometimes generate non-executable programs if they lack task-specific programming demonstrations. For both approaches, on certain benchmarks we resample up to three additional programs if the returned program fails at execution. Since this condition is triggered only if program execution fails, there is no label leakage. We discuss this further in Section 4 and provide detailed results in Appendix A.
#### 3.1.1 Results
We show the main results of NLEP prompting on six math and symbolic reasoning tasks in Table 2. An example of NLEP generated for solving a Dyck language problem is shown in Figure 4(a).
**GPT-4 Results.** Among the three approaches which use task-general prompts, NLEP outperforms both CoT and PoT on 5 of 6 tasks. The large performance gap between NLEP and CoT suggests that programmatic reasoning can enable more accurate answers. Compared to PoT, NLEP achieves significantly higher average accuracy, especially on the Dyck Language (66.4%\(\rightarrow\)91.6%) and the Chinese Remainder Theorem (84.4%\(\rightarrow\)97.2%) tasks. On GSM-Hard, we confirm the same phenomenon discovered by Gao et al. (2023) where language does not further benefit the calculation accuracy with GPT-4.
NLEP also achieves comparable performance to task-specific, few-shot prompting methods. Notably, our method achieves the best performance on Tracking Shuffled Objects (7) and Dyck Language, and outperforms task-specific CoT on many benchmarks. On the Word Sorting benchmark, NLEP only fails on one instance where the input word sequence contains "steelmake" and GPT-4 automatically corrected it to "steelmaker". We find that the high scores of task-specific PoT on Word Sorting and Chinese Remainder Theorem come from the generally applicable programming code from the in-context demonstrations.
**GPT-3.5 Results.** We observe significant performance degradation with GPT-3.5, presumably due to its limited programming capabilities. However NLEP still achieves the best average performance, exhibiting significant improvement on 5 of 6 tasks over all baselines. On the Dyck Language benchmark, program-based strategies (PoT and NLEP with task-general prompts) failed to accomplish the problem without task-specific examples, highlighting the need for strong backbone LLMs.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c|c c c} \hline \hline & \multicolumn{6}{c|}{**GPT-4**} & \multicolumn{5}{c}{**GPT-3.5-Turbo**} \\ \cline{2-12}
**Tasks / Method** & \multicolumn{3}{c|}{(a) Task-Specific} & \multicolumn{3}{c|}{**(b) Task-General**} & \multicolumn{2}{c|}{(c) Task-Specific} & \multicolumn{3}{c}{**(d) Task-General**} \\ & CoT & PoT & LATM & **CoT** & **PoT** & **NLEP** & CoT & PoT & **CoT** & **PoT** & **NLEP** \\ \hline Tracking Shuffled Objects & 100.0 & 100.0 & 100.0 & 81.2 & 98.4 & **100.0** & 68.0 & 6.8 & 51.2 & 88.4 & 74.4 \\ Dyck Language & \(63.6^{\dagger}\) & \(60.8\) & \(87.5^{\dagger}\) & 39.6 & 66.4 & **91.6** & \(20.4^{\dagger}\) & \(28.4\) & 38.0 & 4.0 & 7.2 \\ Word Sorting & \(99.1^{\dagger}\) & 100.0 & 99.1 & 84.4 & 99.6 & 99.6 & \(59.2^{\dagger}\) & **100.0** & 75.2 & **100.0** & 99.6 \\ Chinese Remainder Theorem & \(0.0^{\dagger}\) & 100.0 & 100.0\({}^{\dagger}\) & 80.4 & 84.4 & 97.2 & \(0.0^{\dagger}\) & 100.0 & 0.0 & 72.4 & 96.4 \\ Scheduling Meeting & \(55.6^{\dagger}\) & 75.2 & 100.0\({}^{\dagger}\) & 82.8 & 85.2 & 93.2 & \(18.9^{\dagger}\) & 33.6 & 39.6 & 49.2 & 85.6 \\ GSM-Hard & \(57.4\) & 74.1 & – & 54.9 & 69.3 & 67.7 & 45.0 & 63.4 & 42.8 & 52.2 & 54.1 \\ \hline
**Average** & \(61.3\) & \(85.0\) & \(97.3\) & \(57.2\) & \(83.9\) & \(91.6\) & \(35.3\) & \(55.4\) & 41.1 & 61.0 & 69.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance on math and symbolic reasoning tasks with both task-specific and task-general demonstration prompts. \({}^{\dagger}\) stands for results from Cai et al. (2023). LATM results are not available for the GSM-Hard benchmark as it is hard to derive a generally applicable tool function for all test cases.
\begin{table}
\begin{tabular}{l|c c c c c|c c|c c c} \hline \hline \multirow{2}{*}{setting} & \multicolumn{8}{c|}{Task-Specific} & \multicolumn{8}{c}{Task-General} \\ & IO & CoT & \begin{tabular}{c} IO \\ (best of 100) \\ \end{tabular} & \begin{tabular}{c} CoT \\ (best of 100) \\ \end{tabular} & \begin{tabular}{c} IoT \\ (best of 100) \\ \end{tabular} & \begin{tabular}{c} ToT \\ (b=1) \\ \end{tabular} & \begin{tabular}{c} ToT \\ (b=5) \\ \end{tabular} & \begin{tabular}{c} ToT \\ (b=5) \\ \end{tabular} &
\begin{tabular}{c} NLEP (ours) \\ \end{tabular} \\ \hline Game of 24 (\%) & \(7^{\dagger}\) & \(4^{\dagger}\) & \(33^{\dagger}\) & 49\({}^{\dagger}\) & \(45^{\dagger}\) & \(74^{\dagger}\) & 52 & 66 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance on the Game of 24 benchmark. CoT and ToT stand for chain-of-thought (Wei et al., 2022) and tree-of-thought (Yao et al., 2023a) prompting respectively. \({}^{\dagger}\) shows the results from Yao et al. (2023a).
**Game of 24 results.** Table 3 shows the results on the challenging Game of 24 task from Yao et al. (2023). It is worth noting that the task-general NLEP surpasses all task-specific baselines except tree-of-thoughts (ToT) with a breadth of 5 (ToT b=5), which requires many more LLM calls. Our approach also surpasses the oracle setup of IO/CoT, which calculates the success rate of IO/CoT by considering the best of 100 samples for each instance. However, unlike ToT which requires in-context demonstrations for each decomposed sub-task, NLEP prompting achieves a significant performance gain over ToT (b=1) without requiring a computationally expensive multi-chain inference procedure.
### Question Answering and Instruction Following
We next apply NLEP prompting to tackle question answering and instruction following tasks requiring different answer forms: StrategyQA, TruthfulQA, and VicunaQA. StrategyQA tests for commonsense reasoning ability of language models and requires Yes/No answers, while TruthfulQA and VicunaQA have free-form responses.
**StrategyQA.** Experiment results are presented in Table 4. With GPT-4, NLEP achieves the best performance under the task-general prompt setting and is competitive with the task-specific CoT. With GPT-3.5, although the scores of code-based strategies decrease more than CoT (PoT: 18.4%, NLEP: 20.1%, task-general CoT: 10.5%, task-specific CoT: 10.1%), NLEP still exceeds PoT by a significant margin. An example of the output is shown in Figure 4(b).
\begin{table}
\begin{tabular}{l|c|c c c|c|c c c} \hline \hline & \multicolumn{4}{c|}{GPT-4} & \multicolumn{4}{c}{GPT-3.5-Turbo} \\ \cline{2-10} setting & Task-specific & \multicolumn{2}{c|}{Task-general} & \multicolumn{2}{c|}{Task-specific} & \multicolumn{2}{c}{Task-general} \\ & CoT & CoT & PoT & NLEP (ours) & CoT & CoT & PoT & NLEP (ours) \\ \hline StrategyQA & **81.7** & 78.6 & 68.6 & 81.2 & 71.6 & 68.1 & 50.2 & 61.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance on the StrategyQA benchmark. The experimental setup is the same as in Table 2. Note that LLMs do not always generate "Yes" or "No", and we only predict the "Yes" label if the "Yes" string is generated explicitly. See Appendices B.1 and C for implementation details.
Figure 4: NLEP generated for solving Dyck language and StrategyQA problems. For Dyck, the instruction is _“Complete the rest of the sequence, making sure that the parentheses are closed properly.”_ For StrategyQA, the instruction is _“Answer the question with yes or no.”_
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Foundation Model** & **Mode** & **True** & **Info** & **True * Info** \\ \hline \multirow{2}{*}{GPT-4} & Text & **76.01** & 97.55 & 73.56 \\ & NLEP & 75.76 & **99.63** & **75.40** \\ \hline \multirow{2}{*}{GPT-3.5-Turbo} & Text & 68.91 & 98.90 & 67.93 \\ & NLEP & 61.69 & 97.18 & 59.00 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of GPT-4 and GPT-3.5-Turbo on the TruthfulQA benchmark.
**TruthfulQA.** We also evaluate how NLEP prompting influences the factuality of question answering with the TruthfulQA benchmark (Lin et al., 2022). A fine-tuned GPT-3 model is applied for automatic scoring. In this experiment, we compare the vanilla auto-regressive text generation method against NLEP. Traditionally, such question answering tasks have been solved only with black-box language models without explicit symbolic computations due to the complexity of the test questions.
The results are shown in Table 5. With GPT-4, the truth score of the NLEP prompting strategy is close to that of standard LLM-based generation, while the informativeness score is higher. However, performance degrades significantly with GPT-3.5-Turbo, indicating a strong dependence on the programming ability of the underlying language model.
**VicunaQA.** Results on the VicunaQA benchmark are shown in Figure 5, where we follow the standard approach and evaluate the answers using GPT-4. We find that GPT-4 prefers its own generations, which are generally more detailed than GPT-3.5-Turbo and NLEP responses. To control for the bias due to response lengths, we also assess all responses without the requirement about details using another evaluation prompt. The evaluation prompts with and without the requirement on details is shown in Appendix D.1 and D.2.
As we demonstrate in Figure 5, this assessment leads to different results on GPT-4. After removing the detail requirement in the automatic scoring pipeline, NLEP achieves better performance. This suggests that NLEP can help GPT-4 generate accurate, factual, and relevant responses. However, human-generated programs for pretraining the GPT-4 models usually do not embed long pieces of natural language. As a result, the responses generated by NLEP have a limited level of detail. We also notice that NLEP improves GPT-3.5-Turbo under both detail assessment settings, since neither text and NLEP generated by GPT-3.5-Turbo reaches the detail level preferred by the GPT-4 scorer.
### Text Classification
Finally, we evaluate whether NLEPs can be applied to solve text classification tasks that have traditionally been difficult for pure program synthesis-based approaches. As discussed in section 2, we manually construct a decision tree NLEP for SST2 and use it as a prompt to guide GPT models to generate decision trees for other tasks only with task and label descriptions. An example input and output NLEP generated by GPT-4 for emotion classification is shown in Figure 3.
We compare NLEP against two baseline methods. Our first baseline uses the zero-shot classification method proposed in Ge et al. (2023) ("multi-class prompting"). This method uses the same entailment models but makes the prediction without the tree structure. Our second baseline asks a human expert to design a decision tree for each task also based on the SST-2 example. The results shown in Table 6 show that NLEP generated by GPT-4 outperforms multi-class prompting and human-generated tree baselines on most datasets.
Since we use the trees derived from SST2 to prompt the LLM for the classification tasks, it would be inappropriate to use these examples for SST2 itself. For SST2, we thus use an automatically generated decision tree for the CoLA task to prompt GPT-4 to generate a new tree for SST2. As
Figure 5: Automatic evaluation results of NLEP against standard LLM-based generation with different models. **# NLEP >Text** means that the % of NLEP responses containing more tokens than the baseline. **Detail** means if the evaluation metric considers details and response lengths. **Score** stands for the scores received by NLEP divided by the baseline scores (>100 means NLEP is better). **Win**, **tie**, and **lose** stand for the % of evaluation cases resulting in each category. **Length Bias** shows how much the evaluation pipeline prefers longer or shorter answers (lower means fairer, introduced in Appendix D.3).
shown in Table 7, the automatically generated tree matches the performance of the SST2 decision tree created by the authors.
**Model-free NLEP.** We also tried using the task-general prompt shown in Appendix B.1 to generate NLEPs that directly use programs to solve these tasks. These programs do not need any neural models and are hence very efficient (e.g., finishing the entire validation set in about 2 seconds on CPUs). The results can be found in Table 6 ("Model-free NLEP"). While not achieving the performance of entailment-based methods, the generated NLEPs significantly outperform random baselines, suggesting that this may be a viable approach for quickly extracting simple and interpretable classifiers from LLMs.
## 4 Discussion
Here we describe some additional experiments as well as limitations of our approach.
**Execution failures.** While the task-general PoT and NLEP lack programming demonstrations for the target task, GPT-4 in general is able to generate bug-free programs, as presented in Appendix A, Table 11. Notably, both PoT and NLEP obtain an execution error rate of 0 on the Tracking Shuffled Objects (7) and Word Sort tasks. The proposed NLEP prompting can even reduce the execution failures on the Dyck Language and GSM-Hard tasks relative to the task-specific PoT.
**Retries given execution failure.** One advantage of program synthesis approaches such as PoT and NLEP is that non-executable programs can be identified and filtered. This gives LLMs the chance to "self-correct" and generate new answers, and we take advantage of this in our math and symbolic reasoning tasks by generating up to three programs if there is an execution failure on certain benchmarks. (For a fair comparison, we apply this reattempting scheme to PoT as well.) We ablate on this mechanism in Appendix A, Tables 8, 10 and 11. Besides effectively reducing the execution error as presented in Table 11, these retries greatly enhance the reasoning accuracy. In particular, improvements of 12% and 15.6% are observed on the Chinese Remainder Theorem and the Scheduling Meeting tasks in Table 8(b). In this work we only experiment with extra retries at larger temperatures for diverse sampling and leave more advanced "self-correction" algorithms (e.g., those that make use of error messages (Cai et al., 2023; Hu et al., 2023)) for future work.
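The retry loop itself is simple; the following is a minimal sketch, assuming a hypothetical `generate_program` wrapper around the LLM call (the function name and temperature values are illustrative, not our exact implementation).

```python
# Minimal sketch of retry-on-execution-failure. `generate_program` is a
# hypothetical wrapper around the LLM call; temperatures are illustrative.
import subprocess, sys, tempfile

def run_with_retries(question, generate_program, max_attempts=3):
    temperature = 0.0
    for _ in range(max_attempts):
        program = generate_program(question, temperature=temperature)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(program)
            path = f.name
        proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
        if proc.returncode == 0:       # program executed: accept its printed answer
            return proc.stdout.strip()
        temperature = 0.8              # execution failed: retry with diverse sampling
    return None                        # all attempts raised execution errors
```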
**NLEP prompting requires strong LLMs.** The results in Table 2 and Table 9 of Appendix A show that the performances of CodeLlama-7b-Instruct (Roziere et al., 2023), GPT-3.5-Turbo, and GPT-4 differ widely on many tasks. For example, on the Dyck Language task, GPT-3.5-Turbo only achieves 7.2% accuracy while GPT-4 achieves 91.6% accuracy. TruthfulQA experiments also
\begin{table}
\begin{tabular}{l l|c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multicolumn{5}{c}{Performance (Num. Classes)} \\ \cline{3-8} & & cola (2) & emotion (6) & amazon (5) & hsd (2) & sbic (3) & **Average** \\ \hline \multirow{4}{*}{RoBERTa} & Multi-class Prompting & 65.87 & 49.2 & 33.31 & 67.78 & 52.99 & 53.83 \\ & Human-Generated Tree & **69.03** & 22.20 & 26.88 & 64.85 & **58.37** & 48.27 \\ & NLEP w/ GPT-3.5 & 56.66 & 35.1 & 33.46 & 67.36 & 38.25 & 46.17 \\ & NLEP w/ GPT-4 & 68.94 & **54.5** & **38.88** & **70.92** & 55.95 & **57.65** \\ \hline \multirow{4}{*}{DeBERTa} & Multi-class Prompting & 53.50 & 51.93 & 37.01 & 67.78 & 59.08 & 53.86 \\ & Human-Generated Tree & **69.22** & 32.15 & 33.00 & **72.18** & 55.02 & 52.31 \\ & NLEP w/ GPT-3.5 & 49.66 & 39.00 & 36.18 & 70.29 & 52.49 & 49.52 \\ & NLEP w/ GPT-4 & 68.36 & **55.4** & **40.2** & 70.08 & **59.68** & **58.74** \\ \hline None & Model-free NLEP w/o Tree & 69.13 & 40.55 & 25.76 & 59.62 & 37.63 & 46.54 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Zero-shot performance of different human-crafted and LLM-generated text classification schemes. The GPT-4 generated decision trees consistently exhibit significant improvement. For model-free NLEP, generated code can be executed on the entire validation set in 2 seconds and notably surpasses the random baseline, with cola notably matching the state-of-the-art performance.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & Model-free & RoBERTa-Manual & RoBERTa-Automatic & DeBERTa-Manual & DeBERTa-Automatic \\ \hline SST2 & 66.17 & 83.03 & 87.36 & 84.06 & 83.49 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance of manually crafted vs. generated decision trees on SST2. Both methods display comparable outcomes.
show that NLEP could _hurt_ the factuality of GPT-3.5-Turbo. These results show that in order to generate accurate responses and achieve performance improvements with NLEP, the underlying LLM has to have strong programming ability.
**Limitation of NLEP prompting.** In this work, we found that the NLEP prompts are not suitable for generating long-form natural language responses. Experimental results on VicunaQA show that most responses generated by NLEP prompting have fewer tokens than responses obtained from usual LLM generation. This feature is expected, because most naturally-occurring programs (on which the LLMs were pretrained) do not contain large chunks of natural language. Future work could consider incorporating (potentially synthetically generated) programs with longer-form natural language within the pretraining set to enable the application of NLEP to more involved NLG tasks.
## 5 Related Work
**Large language models for reasoning.** State-of-the-art LLMs (OpenAI, 2022; 2023; Touvron et al., 2023; Zeng et al., 2022) have shown very strong performance on complicated reasoning tasks, including commonsense (Geva et al., 2021), math (Cobbe et al., 2021), symbolic reasoning (Suzgun et al., 2022), and programming (Austin et al., 2021; Chen et al., 2021). Tackling such tasks with LLMs often requires prompting them with demonstrations that elicit their reasoning capabilities. Wei et al. (2022) proposed the chain-of-thought prompting technique, which encourages language models to generate answers step by step. Wang et al. (2022) found that self-consistency can further improve the performance of chain-of-thought reasoning. Kojima et al. (2023) discovered that LLMs can perform reasoning without any demonstrations by adding the incantation "Let's think step-by-step". Tree of thoughts (Yao et al., 2023) and graph of thoughts (Yao et al., 2023; Besta et al., 2023) were proposed to tackle tasks that require more complicated reasoning processes. These improved reasoning methods apply chain of thought as the atomic reasoning step but organize reasoning "chains" through more advanced mechanisms.
**Programs and tools.** Previous studies have found that some limitations of LLMs can be overcome by combining program synthesis techniques with prompt-based learning. Program of thoughts (Chen et al., 2022) and program-aided language models (Gao et al., 2023) both translate mathematical questions to equations and use the Python interpreter to ensure the correctness of the calculations. Hu et al. (2023) explore code prompting with both zero-shot and few-shot versions but simply use the task instruction as input under the zero-shot setup. Another line of related work for enabling LLMs to use tools is through interleaving API calls during LLM generation (Qin et al., 2023; Liang et al., 2023; Mialon et al., 2023; Tang et al., 2023). APIs can aid many tasks that are challenging for LLMs by providing tailored tools (e.g., calculators, search) that can solve specific tasks. Toolformer (Schick et al., 2023) addresses reasoning tasks by using predefined tools, and LLMs as tool makers (LATM) can implement functions solving a class of tasks based on few-shot examples (Cai et al., 2023). With these solutions, the correctness of a prediction can be ensured if the correct API is called and the correct inputs are selected. Existing works on combining program synthesis and tool usage with LLMs generally rely on task-specific prompts, in contrast to the more task-general prompt explored in the present work.
## 6 Conclusion
This work describes natural language embedded programs (NLEP), which flexibly combine natural language reasoning with program synthesis within prompt-based learning to tackle a variety of tasks. Our experiments demonstrate that NLEPs expand the scope of applications that can be addressed by program synthesis by more closely incorporating natural language during code generation.
## Acknowledgement
This research was supported by the Center for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technology Commission's InnoHK Scheme. |
2306.17732 | Observations of Magnetospheric Solar Wind Charge Exchange | The study of solar wind charge exchange (SWCX) emission is vital to both the
X-ray astrophysics and heliophysics communities. SWCX emission contaminates all
astrophysical observations in X-rays regardless of the direction. Ignoring this
contribution to X-ray spectra can lead to erroneous conclusions regarding the
astrophysical plasmas along the line of sight due to the similar spectral
distributions of SWCX and several common types of more distant astrophysical
plasmas. Since its discovery, literature has distinguished between diffuse SWCX
emission resulting from solar wind neutral interactions within the terrestrial
magnetosphere, called magnetospheric SWCX, and similar interactions occurring
more generally throughout the heliosphere, called heliospheric SWCX. Here, we
build upon previous work validating a modeling method for the heliospheric SWCX
contribution in X-ray spectra obtained with a medium resolution CubeSat
instrument named HaloSat at low ecliptic latitudes. We now apply this model to
a specially designed set of extended observations with the same instrument and
successfully separate the spectral contributions of the astrophysical
background and the heliospheric SWCX from the remaining contributions.
Specifically, we find significant excess emission for four observations in the
O VII emission line not explained by other sources, possibly indicative of
magnetospheric SWCX. We discuss these results in comparison with simulation
results publicly available through the Community Coordinated Modeling Center.
We also report an absorbed high-temperature component in two of the twelve
fields of view analyzed. | R. Ringuette, K. D. Kuntz, D. Koutroumpa, P. Kaaret, D. LaRocca, J. Richardson | 2023-06-30T15:22:06Z | http://arxiv.org/abs/2306.17732v1 | # Observations of Magnetospheric Solar Wind Charge Exchange
###### Abstract
The study of solar wind charge exchange (SWCX) emission is vital to both the X-ray astrophysics and heliophysics communities. SWCX emission contaminates all astrophysical observations in X-rays regardless of the direction. Ignoring this contribution to X-ray spectra can lead to erroneous conclusions regarding the astrophysical plasmas along the line of sight due to the similar spectral distributions of SWCX and several common types of more distant astrophysical plasmas. Since its discovery, literature has distinguished between diffuse SWCX emission resulting from solar wind-neutral interactions within the Earth's magnetosphere, called magnetospheric SWCX, and similar interactions occurring more generally throughout the heliosphere, called heliospheric SWCX. Here, we build upon previous work validating a modeling method for the heliospheric SWCX contribution in X-ray spectra obtained with a medium resolution CubeSat instrument named _HaloSat_ at low ecliptic latitudes. We now apply this model to a specially designed set of extended observations with the same instrument and successfully separate the spectral contributions of the astrophysical background and the heliospheric SWCX from the remaining contributions. Specifically, we find significant excess emission for four observations in the O VII emission line not explained by other sources, possibly indicative of magnetospheric SWCX. We discuss these results in comparison with simulation results publicly available through the Community Coordinated Modeling Center. We also report an absorbed high-temperature component in two of the twelve fields of view analyzed.
## 1 Introduction
Solar wind charge exchange (SWCX) emission was discovered when comets proved to be strong emitters in the ROSAT soft X-ray energy bands (Lisse, 1996). The emission is produced when highly charged ions in the solar wind (e.g. O7+) capture an electron from nearby neutrals (Cravens, 1997). The newly created ions are in an excited state and emit soft X-ray photons (≲ 2 keV) as they de-excite into the ground state. Besides comets, SWCX emission is produced in planetary exospheres, the Earth's magnetosheath and the heliosphere. The latter two components (named m-SWCX and h-SWCX from now on) are a challenge for X-ray astrophysicists, since they contribute a variable foreground with similar spectral characteristics as some astrophysical thermal plasmas (e.g. Local Hot Bubble, galactic halo) for CCD-like spectral resolution (Cox, 1998; Kuntz, 2018).
2309.10800 | Quantum Algorithm for Estimating Betti Numbers Using a Cohomology
Approach | Topological data analysis has emerged as a powerful tool for analyzing
large-scale data. High-dimensional data form an abstract simplicial complex,
and by using tools from homology, topological features could be identified.
Given a simplex, an important feature is so-called Betti numbers. Calculating
Betti numbers classically is a daunting task due to the massive volume of data
and its possible high-dimension. While most known quantum algorithms to
estimate Betti numbers rely on homology, here we consider the `dual' approach,
which is inspired by Hodge theory and de Rham cohomology, combined with recent
advanced techniques in quantum algorithms. Our cohomology method offers a
relatively simpler, yet more natural framework that requires exponentially less
qubits, in comparison with the known homology-based quantum algorithms.
Furthermore, our algorithm can calculate its $r$-th Betti number $\beta_r$ up
to some multiplicative error $\delta$ with running time $\mathcal{O}\big(
\log(c_r) c_r^2 / (c_r - \beta_r)^2 \delta^2 \big)$, where $c_r$ is the number
of $r$-simplex. It thus works best when the $r$-th Betti number is considerably
smaller than the number of the $r$-simplex in the given triangulated manifold. | Nhat A. Nghiem, Xianfeng David Gu, Tzu-Chieh Wei | 2023-09-19T17:44:53Z | http://arxiv.org/abs/2309.10800v2 | # Quantum Algorithm for Estimating Betti Numbers Using a Cohomology Approach
###### Abstract
Topological data analysis has emerged as a powerful tool for analyzing large-scale data. An abstract simplicial complex, in principle, can be built from data points, and by using tools from homology, topological features could be identified. Given a simplex, an important feature is called Betti numbers, which roughly count the number of 'holes' in different dimensions. Calculating Betti numbers exactly can be #P-hard, and approximating them can be NP-hard, which rules out the possibility of any generic efficient algorithms and unconditional exponential quantum speedup. Here, we explore the specific setting of a triangulated manifold. In contrast to most known quantum algorithms to estimate Betti numbers, which rely on homology, we exploit the 'dual' approach by cohomology, combining the Hodge theory and de Rham cohomology, as well as recent advancement in matrix inversion, multiplication, and block encoding. This cohomology approach requires exponentially fewer qubits than those known homology-based approaches. Our proposed algorithm can calculate its \(r\)-th Betti number \(\beta_{r}\) up to some multiplicative error \(\delta\) with running time \(\mathcal{O}\big{(}\log(c_{r})c_{r}/(c_{r}-\beta_{r})\delta\big{)}\), where \(c_{r}\) is the number of \(r\)-simplex. It thus works best in the regime when the \(r\)-th Betti number is considerably smaller than the number of the \(r\)-simplices and is exponentially faster than previous known methods.
## I Introduction
Topology and geometry are lasting and vibrant areas in mathematics. Despite being abstract, topology has laid the groundwork for many important tools that have been widely applied in science and engineering [1; 2; 3; 4; 5]. Among them, topological data analysis (TDA) is gaining much attention due to its utility in revealing important topological features of datasets, which, in reality, can be sensitive to noise or sampling errors. Persistent homology, which is built upon algebraic topology, is a powerful method that can probe the topological structure of the underlying dataset. Typically, a collection of high-dimensional data (i.e., vectors) forms an abstract simplicial complex in which the connectivity between data points is controlled by a quantity called length scale.
Given a simplicial complex, one is interested in its Betti numbers, which reveal the number of connected components, loops, holes, etc., of such a configuration. The Abelian property and linearity of homology allow problems to be formulated in a linear-algebra framework, which can be more convenient to work with practically. However, the massive dataset volume and its exponential growth in dimensionality induce a very large computational cost, and hence pose a severe hurdle to practical execution.
Figure 1: An (abstract) simplicial complex. Each point (red points), or 0-simplex, might be a vector in a very high dimension.
Lloyd, Garnerone, and Zanardi considered the problem of computing Betti numbers in the quantum setting, and their proposed algorithm (the so-called LGZ algorithm) was claimed to yield an exponential speedup compared to classical methods [6]. Their underlying approach essentially relies on (simplicial) homology: given a simplicial complex \(\Sigma\), the \(k\)-th Betti number is the rank of the \(k\)-th homology group \(H_{k}\). The rank of such a group (or equivalently, the dimension of such a space) is revealed by analyzing the spectrum of the so-called boundary map \(\partial\), which is a linear map between chain spaces. In Ref. [6], such analysis is then done by applying quantum techniques, such as quantum phase estimation [7] and sparse matrix simulation [8], leading to the LGZ quantum algorithm. Following Ref. [6], a series of works [9; 10; 11; 12] has substantially improved the running time of the original algorithm [6]. Most recently, a striking result from Ref. [13] clarified some assumptions and the performance of LGZ-like algorithms [6]. In particular, Schmidhuber and Lloyd [13] showed that computing Betti numbers is #P-hard in the general case and approximating them up to multiplicative error is NP-hard, ruling out generic efficient estimation of Betti numbers. The potentially exponential advantage of the quantum algorithm can only be obtained if the input is a specified complex instead of a list of vertices (which means that we have to build the complex based on the length scale). Given a specified complex \(S\), the most efficient quantum method (combining [9] and [11]) to estimate the \(k\)-th Betti number \(\beta_{k}\) to some multiplicative error \(\delta\) is
\[\mathcal{O}\Big{(}\frac{1}{\delta}n\kappa\sqrt{\frac{|c_{k}|}{\beta_{k}}} \Big{)},\]
where \(\kappa\) is the condition number of the so-called Dirac operator [13], \(n\) is the number of points (or vertices) in \(S\), and \(|c_{k}|\) is the number of \(k\)-simplices at the given length scale. As pointed out in [13], the best classical algorithm can compute the \(k\)-th Betti number at a complexity of \(\mathcal{O}\big{(}|c_{k}|^{3}\big{)}\). The exponential advantage is recovered in the regime where \(\beta_{k}\to|c_{k}|\) and \(k\sim\mathcal{O}(n)\). It is reasonable to question whether the regime \(\beta_{k}\to|c_{k}|\) is common in practice, as point clouds usually involve many points with sophisticated connectivity.
In this paper, inspired by the exciting development of recent quantum algorithms for TDA, we attempt to provide a 'dual' approach to [6] via cohomology. Another motivation for our method is the discrete Hodge theory, which offers a powerful tool in computational conformal geometry [4]. We focus on the scenario where a simplicial complex \(S\) is a uniform triangulation of some closed manifold. By uniform triangulation, we mean that the simplicial complex is built with uniform simplices. For instance, a uniformly triangulated 2-manifold is formed by equilateral triangles or 2-simplices properly gluing together. The same construction holds for arbitrary dimensions, i.e., by gluing higher dimensional objects. According to Schmidhuber and Lloyd [13], this is the case where an exponential speedup is possible, as the description of a simplicial complex is given beforehand instead of a set of vertices and pairwise distances.
As a brief summary, our 'dual' approach via cohomology is built upon de Rham cohomology and Hodge theory, as they provide a direct link between the \(k\)-th homology/cohomology group and a special group called the harmonic group. Cohomology assigns a real number to each simplex, and therefore, we can store all those numbers in a vector, which in turn can be stored using a logarithmic number of (qu)bits. Other enabling elements for our algorithm are the recent advances in quantum algorithmic techniques [14; 15; 16; 17; 18; 19; 20] that allow handling large matrix operations, such as multiplication, inversion, and polynomial transformations of matrices. We will see the surprising conclusion that this cohomology framework achieves its best performance in the regime \(\beta_{k}\ll|c_{k}|\), which is opposite to that of the LGZ algorithm. It is interesting to mention that many well-known surfaces, such as the sphere and the torus, have low genus, which corresponds to low Betti numbers.
This article is organized as follows. First, in Section II, we review some important quantum tools that serve as the main ingredients of our subsequent quantum algorithm. In Section III.1, we then give some background on de Rham cohomology and Hodge theory in the smooth and discrete settings, which contains the key insight behind our algorithm. Next, in Section III.2, we describe the classical procedure for calculating the \(k\)-th Betti number of a triangulated manifold. Subsequently, in Section IV, we outline the corresponding quantum algorithm for estimating the Betti number. The generalization to different Betti numbers and higher-dimensional triangulated manifolds is given in Section V, followed by further analysis and discussion in Section VI. We conclude in Section VII with a summary and remarks for future exploration.
## II Some preliminaries
In this section, we introduce the main quantum ingredients that are needed to construct our subsequent algorithm. We recapitulate the key results for brevity and leave out the details.
First, we define the block encoding.
**Definition 1** (Block Encoding Unitary): _[_18, 19, 20_]_ _Let \(A\) be some Hermitian matrix of size \(N\times N\) whose matrix norm \(\left|A\right|<1\). Let a unitary \(U\) have the following form:_
\[U=\begin{pmatrix}A&\cdot\\ \cdot&\cdot\end{pmatrix}.\]
_Then \(U\) is said to be a block encoding of matrix \(A\). Equivalently, we can write:_
\[U=\left|\mathbf{0}\right\rangle\left\langle\mathbf{0}\right|\otimes A+\cdots\]
where \(\left|\mathbf{0}\right\rangle\) denotes the first computational basis state in some larger Hilbert space (of, e.g., multiple qubits). It is quite obvious from the above definition that
\[A_{ij}=(\langle\mathbf{0}|\otimes\langle i|)U(\left|\mathbf{0}\right\rangle \otimes\left|j\right\rangle), \tag{1}\]
where \(A_{ij}\) refers to the entry of \(A\) at \(i\)-th row and \(j\)-th column.
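As a concrete (purely classical) illustration of Definition 1, the snippet below builds one standard block encoding, a unitary dilation, for a matrix of norm at most one; this is only a numerical sketch, whereas the encodings used later are assumed to come from sparse-access oracles rather than dense matrices.

```python
# Numerical sketch of a block encoding via a unitary dilation:
# U = [[A, sqrt(I - A A^dag)], [sqrt(I - A^dag A), -A^dag]], valid when ||A|| <= 1,
# so that A sits in the top-left block exactly as in Definition 1.
import numpy as np
from scipy.linalg import sqrtm

def block_encode(A):
    n = A.shape[0]
    I = np.eye(n)
    return np.block([[A, sqrtm(I - A @ A.conj().T)],
                     [sqrtm(I - A.conj().T @ A), -A.conj().T]])

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = 0.9 * (M + M.T) / np.linalg.norm(M + M.T, 2)   # Hermitian, norm < 1
U = block_encode(A)
print(np.allclose(U.conj().T @ U, np.eye(8), atol=1e-6))  # U is unitary
print(np.allclose(U[:4, :4], A))                           # A is the top-left block
```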
The unitary block encoding of some matrix \(A\) as described above allows us to apply the tool of the so-called quantum signal processing [18, 19, 20] to achieve arbitrary transformation of \(A\) with high efficiency. For instance, as shown in [18, 19], the method yields an optimal algorithm for simulating a Hamiltonian \(\hat{H}\), i.e., implementing the operator \(\exp(-i\hat{H}t)\) for arbitrary \(t\) given a black box access to entries of \(\hat{H}\). We refer the readers to these original works for details; instead, we delineate the necessary recipe for our subsequent algorithm.
**Lemma 1**: _[_18, 20_]_ _Given unitary block encoding of some matrix \(A\) of dimension \(n\), then it is possible to construct \(\exp(-iAt)\) up to accuracy \(\epsilon\) in time_
\[\mathcal{O}\big{(}d_{A}(t+poly\log(1/\epsilon))\big{)},\]
_where \(d_{A}\) is the required time complexity for preparing the unitary block encoding of \(A\)._
Furthermore, the following result for encoding a product of two matrices is useful later and is also proved in Appendix C. Then, we will describe the matrix multiplication and inversion that will be used below.
**Lemma 2** (Block Encoding of Product of Two Matrices [20]): _Given a unitary block encoding of two matrices \(A_{1}\) and \(A_{2}\), an efficient procedure exists to construct a unitary block encoding of their product \(A_{1}A_{2}\)._
**Lemma 3** (Efficient Matrix Application [17]): _Given a coherent oracle access to some \(s\)-sparse, Hermitian matrix \(\hat{H}\) of dimension \(n\times n\) with eigenvalues' norm bounded in the range \((1/\kappa,1)\), and a given \(n\times 1\) state \(\left|b\right\rangle\), then there is a unitary \(U_{H}\) that acts in the following way,_
\[U_{H}\left|0^{m}\right\rangle\left|b\right\rangle=\left|0^{m}\right\rangle \left(\hat{H}/s\right)\left|b\right\rangle+\left|\Phi_{\perp}\right\rangle,\]
_where \(\left|\Phi_{\perp}\right\rangle\) is some unimportant state (not properly normalized) that is orthogonal to \(\left|0^{m}\right\rangle\left(H/s\right)\left|b\right\rangle\), i.e., \(\left|0^{m}\right\rangle\left\langle 0^{m}\right|\otimes\mathbf{1}\left|\Phi_{ \perp}\right\rangle=0\), and \(m=\log(n)+1\). The unitary \(U_{H}\) runs in time_
\[\mathcal{O}\Big{(}\log(n),poly\big{(}\log(\frac{1}{\epsilon})\big{)}\Big{)},\]
_where \(\epsilon\) is the error tolerance._
**Lemma 4** (Matrix Inversion [16]): _Given an oracle access to some \(s\)-sparse, Hermitian matrix \(\hat{H}\) of dimension \(n\times n\) with eigenvalues' norm bounded in the range \((1/\kappa,1)\), and a given \(n\times 1\) state \(\left|b\right\rangle\). Then, there is a unitary \(U\) that acts as follows:_
\[U_{H}\left|0^{m}\right\rangle\left|b\right\rangle=\left|0^{m}\right\rangle \left(\hat{H}^{-1}/\alpha\right)\left|b\right\rangle+\left|\Phi_{\perp} \right\rangle,\]
_where \(\left|\Phi_{\perp}\right\rangle\) is some unimportant state (not properly normalized) that is orthogonal to \(\left|0^{m}\right\rangle\left(\hat{H}^{-1}/s\right)\left|b\right\rangle\), i.e., \(\left|0^{m}\right\rangle\left\langle 0^{m}\right|\otimes\mathbf{1}\left|\Phi_{\perp} \right\rangle=0\). The unitary \(U_{H}\) runs in time_
\[\mathcal{O}\Big{(}\kappa s\log(n)poly\big{(}\log(\frac{1}{\epsilon})\big{)} \Big{)},\]
_where \(\epsilon\) is the error tolerance, \(\kappa\) is the condition number of \(\hat{H}\), and \(\alpha\) is a (classically computable) constant that guarantees the normalization condition._
We further remark that while in [14; 16] the inverted matrix is assumed to be square and non-singular, some subsequent works, such as [20], were built upon an advanced technique called unitary block encoding (or quantum signal processing [18]), allowing the pseudo-inverse if \(\hat{H}\) is rectangular and/or has zero eigenvalues. We make an important remark that in Lemmas 3 and 4 the eigenvalues' norm is set to be within a fixed range. This can always be achieved by a trivial rescaling of the given matrix, and throughout this work we assume that such a rescaling has been made. Furthermore, we observe that the factor \(\alpha\) appearing in Lemma 4 depends only on the approximation that one chooses for, e.g., \(1/x\) in [16]. It means that two linear systems can, in principle, have the same factor \(\alpha\) as a result of the quantum linear solver (Lemma 4).
## III Classical framework
This section introduces key ingredients from differential geometry that underlie classical and quantum algorithms.
### de Rham Cohomology and Hodge Theory
De Rham cohomology is a very important tool that encompasses both algebraic and differential topology and can be used to probe the topological structure of smooth manifolds. While in homology, the main objects are spaces (or groups) of chains, in de Rham cohomology, the main objects are spaces of _forms_. The homology group is formed via the equivalence classes of closed chains, whereas in de Rham cohomology, the cohomology group is formed via the equivalence classes of closed forms. Hodge's theory is built on an important observation that each cohomology class has a canonical representative, the so-called harmonic form. A standard result is the Hodge decomposition theorem:
**Theorem 1** (Hodge Decomposition): _Suppose \(M\) is an \(n\)-dimensional closed Riemannian manifold, then_
\[\Omega_{k}=\mathrm{Img}(d^{k-1})\oplus\mathrm{Img}(\delta^{k+1})\oplus H^{k}_ {\Delta}(M), \tag{2}\]
_where \(\Omega_{k}\) is the space of \(k\)-forms, \(\mathrm{Img}\) denotes the image of a map, \(d^{k-1}\) is the exterior derivative map: \(\Omega_{k-1}(M)\rightarrow\Omega_{k}(M)\), \(\delta^{k+1}\) is the codifferential operator: \(\Omega_{k+1}\rightarrow\Omega_{k}\), and \(H^{k}_{\Delta}\) is the space of harmonic \(k\)-forms._
In other words, if \(\omega\in\Omega_{k}\), it can be expressed and decomposed as
\[\omega=\delta\Omega+d\eta+h, \tag{3}\]
where \(\eta\in\Omega_{k-1}\), \(\Omega\in\Omega_{k+1}\), and \(h\) has the special property that it vanishes under the action of both \(d\) and \(\delta\), i.e., \(dh=0\) and \(\delta h=0\). Most importantly, \(h\) is unique for each cohomology class, i.e., if \(\omega\) and \(\omega^{\prime}\) lie in the same cohomology class, then they have the same \(h\). Throughout this work, we may abuse notation by writing \(\delta\) and \(d\) without the superscript.
Generally, de Rham cohomology and Hodge theory work for the smooth setting. But the extension to the discrete setting can be achieved by simply replacing de Rham cohomology with simplicial cohomology by, e.g., identifying \(k\)-forms with \(k\)-cochains, and so on. The discrete version has been developed and applied extensively in real applications; see, e.g., Refs [3; 4], as \(d\) and \(\delta\) operators can be represented as a linear transformation on a set of corresponding simplices. We use the same notations for both cases, and we treat groups and vector spaces in the same manner, as a vector space is also an Abelian group, and we only work in the Abelian (commutative) setting.
Next, we state two important results that provide a useful insight into our algorithm, to be described below. As mentioned above, each cohomology class has a unique representative, and, therefore, it directly implies the correspondence between the two spaces (or groups), as stated in the two theorems below.
**Theorem 2**: _Given an \(n\)-dimensional closed Riemannian manifold. The \(k\)-th de Rham cohomology group is isomorphic to the harmonic \(k\)-form group_
\[H^{k}_{dR}(M)\approx H^{k}_{\Delta}(M). \tag{4}\]
Another standard and important result that we will employ is the following.
**Theorem 3**: _The de Rham cohomology group is isomorphic to the simplicial cohomology group_
\[H^{k}_{dR}(M)\approx H^{k}(M). \tag{5}\]
We remark that the above two theorems are standard in the areas of differential geometry and algebraic topology, which are explained in standard textbooks, e.g., see [21].
### Sketch of the Method for Calculating Betti Numbers
Theorems 2 and 3, plus the duality between simplicial homology and cohomology, show that all these groups are isomorphic, which reveals a potential approach to calculate Betti numbers of a given simplicial complex \(\sum\). Given some \(k\), the \(k\)-th Betti number is the rank of the \(k\)-th homology group, which is also the rank of the \(k\)-th cohomology group. If we regard them as vector spaces, the rank becomes the dimension. We remark that the harmonic forms also form a vector space; therefore, the dimension of such a space can be inferred if we know the maximum number of linearly independent vectors. The Hodge decomposition theorem, Eq. (2), allows us to find the harmonic form given an initial \(k\)-form \(\omega\), as \(h=\omega-d\eta-\delta\Omega\). In fact, the Hodge decomposition theorem works with arbitrary forms, as we will discuss in detail subsequently. The following Algorithm 1 summarizes the procedure to find the \(k\)-th Betti number.
```
Input: A set of \(M\) randomized \(k\)-forms \(\{\omega_{i}\}\in\Omega_{k}\) and a simplicial complex \(\sum\).
For each \(i\in\{1,\dots,M\}\), do:
  1. Compute the coexact term \(\delta\Omega_{i}\).
  2. Compute the exact term \(d\eta_{i}\).
  3. Find the harmonic form \(h_{i}=\omega_{i}-d\eta_{i}-\delta\Omega_{i}\).
Arrange \(\{h_{i}\}\) as a matrix \(\mathbb{H}\). Find the number \(\mathcal{K}\) of linearly independent harmonic forms among all \(\{h_{i}\}\), which is equivalently the rank of \(\mathbb{H}\).
Output: The \(k\)-th Betti number is \(\mathcal{K}\).
```
**Algorithm 1** Algorithm for Finding the \(k\)-th Betti Number
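For readers who prefer a concrete reference point, the following is a classical NumPy sketch of Algorithm 1 for \(k=1\), under the identity-metric assumption appropriate to a uniform triangulation (so that \(\delta\) is simply the transpose of \(d\)); the incidence matrices `d0` and `d1`, implementing \(d\) on 0-forms and 1-forms, are assumed to be supplied by the caller, and the least-squares solves and rank tolerance are illustrative choices.

```python
# Classical NumPy sketch of Algorithm 1 for the first Betti number, assuming
# an identity metric so that delta = d^T. d0 (c1 x c0) and d1 (c2 x c1) are
# the incidence matrices for d on 0-forms and 1-forms, supplied by the caller.
import numpy as np

def first_betti_number(d0, d1, num_forms=None, tol=1e-8):
    assert np.allclose(d1 @ d0, 0)                   # d o d = 0 for a valid complex
    c1 = d0.shape[0]
    M = num_forms or c1
    rng = np.random.default_rng(0)
    harmonics = []
    for _ in range(M):
        w = rng.standard_normal(c1)                  # random 1-form
        # Coexact term: solve d(delta Omega) = d w, i.e. (d1 d1^T) Omega = d1 w.
        Omega = np.linalg.lstsq(d1 @ d1.T, d1 @ w, rcond=None)[0]
        coexact = d1.T @ Omega
        # Exact term: solve delta(d eta) = delta w, i.e. (d0^T d0) eta = d0^T w.
        eta = np.linalg.lstsq(d0.T @ d0, d0.T @ w, rcond=None)[0]
        exact = d0 @ eta
        harmonics.append(w - exact - coexact)        # harmonic representative h_i
    H = np.column_stack(harmonics)
    return int(np.linalg.matrix_rank(H, tol=tol))    # beta_1 = rank of H
```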
In the next two sections, we describe how to execute the above procedure in the quantum setting and elaborate on additional details in the Appendix. For convention, throughout the text, we use \(\vec{x}\) to denote an arbitrary vector, \(|\vec{x}\rangle\) is the quantum state (or normalized vector) corresponding to \(\vec{x}\), and \(|\vec{x}|\) denotes its length, i.e., the \(l_{2}\)-norm. We remark that sometimes we make the following abuse of notation: we typically use \(|\mathbf{0}\rangle\) to denote necessary ancillas required to execute algorithms without specifying the actual number of qubits. That being said, if two registers of states being \(|\mathbf{0}\rangle\) are provided, they might not have the same dimension. The same convention holds for any \(|Garbage\rangle\) state since they are irrelevant to our quantum algorithm for computing the rank.
## IV Quantum Algorithm for Estimating Betti Numbers of Triangulated 2-Manifold
The classical algorithm 1, on which our quantum algorithm is based, can, in principle, be used to find all the Betti numbers. We shall see how its quantum version provides a speedup. To explain our quantum algorithm, we consider specifically the first Betti number for simplicity and illustration. The next section describes how to generalize to higher Betti numbers. The particular simplicial complex that we consider here is the triangulation of a 2-dim manifold, which means that it is composed of many 2-simplices glued together (see Fig. 1). As we will show, the only difference with higher Betti numbers is the specific entries or coefficients of linear equations. We remind readers that our goal now is to construct a quantum version of Algorithm 1. To improve the readability, we will construct each step one by one. We now begin with the computation of the coexact term and then the exact term.
### Computing Coexact Terms \(\delta\Omega\)
Let us denote the number of 0-simplices, 1-simplices and 2-simplices as \(c_{0},c_{1}\), and \(c_{2}\), respectively (and hence \(c_{r}\) denote the number of \(r\)-simplices). The first step in Algorithm 1 is to compute the coexact term \(\delta\Omega\). From equation (3) we have
\[d\omega(f)=d\delta\Omega(f), \tag{6}\]
where \(f\) is some 2-simplex. The above yields the following linear equation:
\[A\cdot\vec{\Omega}=B, \tag{7}\]
where \(A\) is a sparse matrix \(\in R^{c_{2}\times c_{2}}\) which encodes the linear action of \(d\delta\), \(\vec{\Omega}\) is a column vector containing all \(\{\Omega(f_{i})\}\), and \(B\) encodes the action of \(d\omega\). Both \(\vec{\Omega}\) and \(B\) are \(\in R^{c_{2}\times 1}\). We want to start from a 1-form vector \(\vec{\omega}\) (this is a vector of size \(c_{1}\times 1\) that contains the values of 1-form \(\omega\) on all 1-simplices), and then \(B\) can be written in a simpler
form as \(B=C\cdot\vec{\omega}\), where \(C\in R^{c_{2}\times c_{1}}\), which encodes explicitly the operation \(d\) with a known expression and is also sparse, and \(\vec{\omega}\in R^{c_{1}\times 1}\). Therefore, the above equation becomes
\[A\cdot\vec{\Omega}=C\cdot\vec{\omega}. \tag{8}\]
In order to find \(\vec{\Omega}\), in principle, we can use the quantum linear solver [14]. However, both \(A\) and \(C\) are not Hermitian, and do not share the same dimension, and therefore, in order to apply the method, we need to modify the system as follows [14; 22],
\[\begin{bmatrix}\mathbf{0}_{c_{2}\times c_{2}}&A\\ A^{T}&\mathbf{0}_{c_{2}\times c_{2}}\end{bmatrix}\cdot\begin{bmatrix}\mathbf{0}_ {c_{2}\times 1}\\ \vec{\Omega}\end{bmatrix}=\begin{bmatrix}\mathbf{0}_{c_{2}\times c_{2}}&C\\ C^{T}&\mathbf{0}_{c_{1}\times c_{1}}\end{bmatrix}\cdot\begin{bmatrix}\mathbf{0 }_{c_{2}\times 1}\\ \vec{w}\end{bmatrix}. \tag{9}\]
By doing so we obtain a new matrix that is square and Hermitian. We denote the equation for the new system as
\[A^{\prime}\cdot\vec{\Omega}^{\prime}=C^{\prime}\cdot\vec{\omega^{\prime}}, \tag{10}\]
where \(A^{\prime},C^{\prime}\in R^{c\times c}\) with \(c=c_{1}+c_{2}\), and \(\vec{\Omega}^{\prime}\) contains the solution \(\vec{\Omega}\) in its last \(c_{2}\) entries, and zeros in the remaining. On the right-hand side, \(\vec{\omega^{\prime}}\) contains the 1-form \(\omega\) in its last \(c_{1}\) entries. (We note that if one desires the dimensions \(c_{1}\) and \(c_{2}\) to be powers of 2, one can always pad 0's in the corresponding matrices and vectors above.) Then the solution has the explicit form:
\[\vec{\Omega}^{\prime}=A^{\prime-1}\cdot C^{\prime}\cdot\vec{\omega^{\prime}}, \tag{11}\]
Given access to entries of \(A\) and \(C\) (and hence, \(A^{\prime}\) and \(C^{\prime}\)), we can then solve for \(\vec{\Omega}^{\prime}\) using the quantum linear solver [14; 16]. We remark that in this case, aside from inverting a matrix, we also need to apply \(C^{\prime}\) to \(\vec{\omega^{\prime}}\). Such application is in fact a modification of the inverting eigenvalue technique originally introduced in [14] and was also encountered in [22]. More recently, a highly efficient technique for matrix multiplication was outlined in [17], which is built upon the Chebyshev polynomial approach in [16]. Here, we simply integrate the two steps together. We remark that we need to find the coexact term \(\delta\Omega\), which can be computed by another round of matrix application, \(\delta\vec{\Omega}=R_{1}\cdot\vec{\Omega}\), where \(R_{1}\) is a sparse matrix of size \(c_{1}\times c_{2}\) that is the discrete version of \(\delta\). In order to apply the same techniques, we made the same modification as above and obtain:
\[\delta\vec{\Omega}^{\prime} =R_{1}^{\prime}\cdot\vec{\Omega}^{\prime} \tag{12}\] \[=R_{1}^{\prime}\cdot A^{\prime-1}\cdot C^{\prime}\cdot\vec{\omega ^{\prime}}, \tag{13}\]
where \(R_{1}^{\prime}\) again is the isometry embedding of \(R_{1}\), in a similar form as in Eq. (11). Note that \(\delta\vec{\Omega}^{\prime}\) now contains the coexact values \(\delta\vec{\Omega}\) in its first \(c_{1}\) entries. Since \(w^{\prime}\) is arbitrary, we can safely work with its quantum state \(\ket{w^{\prime}}\) as the length does not matter. Suppose we are equipped with \(\ket{w^{\prime}}\), consecutive application of matrices and matrix inversion (see Eqn 13), which we denote as unitary \(U_{\delta\Omega}\), yields the following operation:
\[U_{\delta\Omega}\ket{w^{\prime}}\ket{\mathbf{0}}=\ket{\psi}=(|\delta\vec{\Omega}^{\prime}|/s_{\Omega})\cdot\ket{\mathbf{0}}\ket{\delta\vec{\Omega}^{\prime}}+\ket{\text{garbage}_{1}}, \tag{14}\]
where \(\ket{\text{garbage}_{1}}\) is some unimportant state that is orthogonal to \(\ket{\mathbf{0}}\ket{\delta\vec{\Omega}^{\prime}}\), and \(s_{\Omega}=s_{R_{1}^{\prime}}s_{C^{\prime}}\alpha\) is the product of the sparsities of \(R_{1}^{\prime}\) and \(C^{\prime}\) and the factor \(\alpha\), as the application of Lemma 3 introduces the sparsity of the matrix and the usage of Lemma 4 introduces the known factor \(\alpha\). We remark that, as we will show in Appendix A, the sparsity is known. We remind the reader that the notation \(\ket{\mathbf{0}}\) is an abbreviation for the extra registers that are required in the application of [16; 17].
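For concreteness, the symmetrized embedding of Eq. (9) and the corresponding zero-padding of the 1-form can be written classically as follows (a sketch only; on the quantum side these matrices are accessed through oracles rather than stored densely).

```python
# Sketch of the Hermitian embedding of Eq. (9): a rectangular or non-Hermitian
# matrix M is placed into the square Hermitian block matrix [[0, M], [M^T, 0]],
# and vectors are padded with leading zeros accordingly.
import numpy as np

def hermitian_embedding(M):
    r, c = M.shape
    return np.block([[np.zeros((r, r)), M],
                     [M.T, np.zeros((c, c))]])

def pad_vector(v, rows):
    return np.concatenate([np.zeros(rows), v])
```

The same embedding applied to \(A\), \(C\), and \(R_{1}\), together with zero-padding to a common dimension \(c\times c\) as described above, gives the primed quantities appearing in Eqs. (10)–(13).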
### Computing exact term \(d\eta\)
Now we proceed to the second step in Algorithm 1, which is computing the exact term \(d\eta\). This step is very similar to what we had for \(\delta\Omega\). From equation (3), we have:
\[\delta\omega(v)=\delta d\eta(v), \tag{15}\]
where \(v\) is some 0-simplex. The above yields the following linear equation:
\[K\cdot\vec{\eta}=D\cdot\vec{\omega}, \tag{16}\]
where \(K\) is a matrix that encodes the linear action of \(\delta d\). Proceeding similarly as in the \(\delta\Omega\) case, we embed the above system to a square \(c\times c\) system (note the dimension) and denote the resultant equation in the enlarged system as
\[\vec{d\eta^{\prime}}=R_{2}^{\prime}\cdot K^{\prime-1}\cdot D^{\prime}\cdot\vec{ \omega^{\prime}}, \tag{17}\]
where \(\vec{d\eta^{\prime}}\) contains the exact term \(d\eta\) in its first \(c_{1}\) entries. Application of matrix multiplication and matrix inversion in a similar manner to what we did in the previous part results in a unitary \(U_{d\eta}\) that, given \(\ket{w^{\prime}}\), yields the following:
\[U_{d\eta}\ket{w^{\prime}}\ket{\mathbf{0}}=\ket{\phi}=(|\vec{d\eta^{\prime}}|/s _{\eta})\cdot\ket{\mathbf{0}}\ket{\vec{d\eta^{\prime}}}+|\text{garbage}_{2} \rangle\,. \tag{18}\]
where again, we note that \(s_{\eta}\) is known.
### Preparing 1-form state \(\ket{w^{\prime}}\)
The above two subsections show that the first step in both is to prepare the 1-form state \(\ket{w^{\prime}}\), which is the state of dimension \(c\) that contains the real 1-form \(w\) in the last \(c_{1}\) entries. We note that there are \(M\) different initial 1-forms \(w_{j}\)'s, so we need to be able to efficiently prepare many initial states \(\ket{w_{j}^{\prime}}\)'s.
According to the input in Algorithm 1, we note that for any \(j\), entries of \(\ket{w_{j}^{\prime}}\) are randomly chosen. Therefore, we can pick a short-depth random unitary \(U_{w}\) of dimension \(c\times c\). For each index state \(\ket{j}\), we have that \(U_{w}\ket{j}\) is the \(j\)-th column of \(U_{w}\), which is also randomized. In order to filter out those top entries, i.e., making these first \(c-c_{1}\) entries to be zero and keeping the last \(c_{1}\) entries non-zero, we can multiply the vector \(U_{w}\ket{j}\) with a matrix \(A\) that is zero everywhere except those last \(c_{1}\) entries on the diagonal being 1. In other words, we have \(A_{ii}=0\) for \(i<c-c_{1}\) and \(A_{ii}=1\) for any \(i\geq c-c_{1}\). The matrix \(A\) is apparently row/column computable, which is equivalent to being coherently accessible. Therefore, we can apply Lemma 3 to obtain \(AU_{w}\ket{j}\), which is our desired 1-form state \(\ket{w_{j}^{\prime}}\). We have the following result:
**Lemma 5** (Preparation of \(\ket{w^{\prime}}\)): _Given an input state \(\ket{j}\) plus an ancillary register initialized in \(\ket{\mathbf{0}}\), then it is highly efficient to construct the unitary \(U_{W}\) that performs the following:_
\[U_{W}\ket{\mathbf{0}}\ket{j}=\ket{\mathbf{0}}\ket{w_{j}^{\prime}}+\ket{ \mathrm{Garbage}}, \tag{19}\]
_The running time of \(U_{W}\) is_
\[\mathcal{O}\Big{(}\log(c),poly(\log(1/\epsilon))\Big{)},\]
_where \(\ket{\mathrm{Garbage}}\) is orthogonal to \(\ket{\mathbf{0}}\ket{w_{j}^{\prime}}\) and \(\epsilon\) is the error tolerance._
**Important Remark:** In the following section, for brevity, we make a subtle convention that we focus on the subspace spanned by \(\ket{\mathbf{0}}\) in the first register, which means that any further stated operations on \(\ket{w_{j}^{\prime}}\) (plus possible ancillas) are actually controlled by the register \(\ket{\mathbf{0}}\) in Eq. (19) of Lemma 5. For convenience, we therefore suppress writing out this register unless necessary.
### Computing Harmonic Form \(h\)
Now we proceed to describe our algorithm to compute \(h\). We first note that while \(M\) can be arbitrarily large, we choose \(M=c_{1}\) in our algorithm, as it is sufficient for our purpose.
Suppose we begin with \(\ket{w_{j}^{\prime}}\ket{\mathbf{0}}\), where \(\ket{\mathbf{0}}\) is an (additional) ancillary system (we recall the convention regarding the working subspace that we set before). Basically, we append an extra \(\ket{\mathbf{0}}\) to the right of Eq. (19) and consider only the subspace spanned by the \(\ket{\mathbf{0}}\) that appears on the left of \(\ket{w_{j}^{\prime}}\) in Eq. (19). As we said previously, this can be done by executing any unitary on the first register of \(\ket{w_{j}^{\prime}}\ket{\mathbf{0}}\), controlled by the register \(\ket{\mathbf{0}}\) on the left. We then append an extra register initialized as:
\[\frac{1}{2}\big{(}\ket{00}-\ket{01}-\ket{10}+\ket{11}\big{)},\]
which can be prepared by two Hadamard gates acting on \(\left|1\right\rangle\left|1\right\rangle\). Our whole system is then:
\[\frac{1}{2}\Big{(}\left|00\right\rangle-\left|01\right\rangle-\left|10\right\rangle +\left|11\right\rangle\Big{)}\left|w_{j}^{\prime}\right\rangle\left|\mathbf{0} \right\rangle. \tag{20}\]
We note that the quantum 1-form \(\left|w_{j}^{\prime}\right\rangle\) has dimension \(c\), and the actual 1-form \(w_{j}\) is stored in the last \(c_{1}\) entries, and we need to relocate them to the top \(c_{1}\) entries so that we can do the subtraction properly (note that \(h=w-\delta\Omega-d\eta\)). A simple way to achieve this goal is to perform the basis permutation, as we only change the corresponding basis. However, it is unknown (at least to us) whether there is any universal and efficient way to realize arbitrary permutation. Instead, we propose a solution that relies on matrix multiplication. Let \(A\) be some symmetric matrix such that \(A_{ij}=1\) if \(i\leq c_{1},j>c-c_{1}\) (note that we need to assume \(c\geq 2c_{1}\), but this can always be made possible as we only need to enlarge the system and set the extra coefficients to be zero), and \(0\) otherwise. Then, a simple multiplication of \(A\) with \(\left|w_{j}^{\prime}\right\rangle\) will exactly relocate those last \(c_{1}\) entries. Since \(A\) is easily computable, applying Lemma 3 allows us to implement such quantum multiplication. We have the following lemma:
**Lemma 6**: _Given a quantum state \(\left|x^{\prime}\right\rangle\) of dimension \(c\) that is non-zero in the last \(c_{1}\) rows, and zero otherwise. Let \(\vec{x}\) be the vector with the same entries as \(\left|x^{\prime}\right\rangle\) but instead have those non-zero entries in the first \(c_{1}\) rows (with the same order of entries). Then, the following operation can be achieved efficiently:_
\[U_{C}\left|x^{\prime}\right\rangle\left|\mathbf{0}\right\rangle=\left|\vec{x }\right|\left|x\right\rangle\left|\mathbf{0}\right\rangle+\left|Garbage\right\rangle, \tag{21}\]
_where \(\left|Garbage\right\rangle\) is some state that is orthogonal to \(\left|x\right\rangle\left|\mathbf{0}\right\rangle\)._
Note that the state \(\left|x^{\prime}\right\rangle\) represents a normalized vector, so the sub-vector \(\vec{x}\) is guaranteed to have a norm less than unity. Now we specifically look at the part \(\left|00\right\rangle\left|w_{j}^{\prime}\right\rangle\left|\mathbf{0}\right\rangle\). Using the above Lemma, we can apply \(U_{C}\), which is controlled by the first two qubits being \(\left|00\right\rangle\), to the state \(\left|w_{j}^{\prime}\right\rangle\) so as to produce the shifted version of the vector \(\left|w_{j}^{\prime}\right\rangle\), for which we denote simply as \(\left|w_{j}\right\rangle\). (Without confusion, we will use the language that \(U_{c}\) is applied to \(\left|w_{j}^{\prime}\right\rangle\) controlled by \(\left|00\right\rangle\).) To be more specific, the state \(\left|00\right\rangle\left|w_{j}^{\prime}\right\rangle\) is transformed to:
\[\left|\phi_{1}\right\rangle=\left|00\right\rangle(\left|\vec{w_{j}}\right| \left|w_{j}\right\rangle\left|\mathbf{0}\right\rangle+\left|Garbage\right\rangle). \tag{22}\]
Next, we consider the part \(\left|01\right\rangle\left|w_{j}^{\prime}\right\rangle\left|\mathbf{0}\right\rangle\). We aim to apply \(U_{\delta\Omega}\) to \(\left|w_{j}^{\prime}\right\rangle\left|\mathbf{0}\right\rangle\) being controlled by the first two qubits being \(\left|01\right\rangle\). We then obtain the following state:
\[\left|\phi_{2}\right\rangle=\left|01\right\rangle\Big{(}|\delta\vec{\Omega}_{ j}^{\prime}|/s_{\Omega}\cdot|\delta\vec{\Omega}_{j}^{\prime}\rangle\left| \mathbf{0}\right\rangle+\left|\text{garbage}\right\rangle\Big{)}. \tag{23}\]
Similarly, we consider the part \(\left|10\right\rangle\left|w_{j}^{\prime}\right\rangle\left|\mathbf{0}\right\rangle\). We apply \(U_{d\eta}\) being controlled by the first two qubits being \(\left|10\right\rangle\) to obtain:
\[\left|\phi_{3}\right\rangle=\left|10\right\rangle\Big{(}|d\vec{\eta}_{j}^{ \prime}|/s_{\eta}\cdot|d\vec{\eta}_{j}^{\prime}\rangle\left|\mathbf{0}\right\rangle +\left|\text{garbage}\right\rangle\Big{)}. \tag{24}\]
For the last part \(\left|11\right\rangle\left|w_{j}^{\prime}\right\rangle\left|\mathbf{0}\right\rangle\), we simply flip the first bit in the register \(\left|\mathbf{0}\right\rangle\) controlled by the register \(\left|11\right\rangle\). In other words, we transform \(\left|11\right\rangle\left|w_{j}^{\prime}\right\rangle\left|\mathbf{0}\right\rangle\) into
\[\left|11\right\rangle\left|w_{j}^{\prime}\right\rangle\left|1\mathbf{\tilde{0} }\right\rangle,\]
where \(\mathbf{\tilde{0}}\) denotes one less \(0\) from \(\mathbf{0}\).
So far we have transformed the equation 20 into:
\[\left|\phi\right\rangle=\frac{1}{2}\Big{(}\left|\phi_{1}\right\rangle-\left| \phi_{2}\right\rangle-\left|\phi_{3}\right\rangle+\left|11\right\rangle\left|w_ {j}^{\prime}\right\rangle\left|1\mathbf{\tilde{0}}\right\rangle\Big{)}. \tag{25}\]
Denote by \(U_{\phi}\) the whole unitary process that transforms the initial state \(\left|\mathbf{0}\right\rangle\left|j\right\rangle\) (which actually began from Lemma 5) into the above state; explicitly, \(U_{\phi}=\left|00\right\rangle\!\left\langle 00\right|\otimes U_{C}\otimes I+\left|01\right\rangle\!\left\langle 01\right|\otimes U_{\delta\Omega}\otimes I+\left|10\right\rangle\!\left\langle 10\right|\otimes U_{d\eta}\otimes I+\left|11\right\rangle\!\left\langle 11\right|\otimes I\otimes X_{1}\). As a summary, by explicitly including the previously suppressed register and all the ancillas, we have:
\[U_{\phi}\left|\mathbf{0}\right\rangle\left|00\right\rangle\left|j\right\rangle \left|\mathbf{0}\right\rangle=\left|\mathbf{0}\right\rangle\left|\phi\right\rangle +\left|\text{Garbage}\right\rangle. \tag{26}\]
The reason why \(\left|\phi\right\rangle\) appears entangled with \(\left|\mathbf{0}\right\rangle\) on the r.h.s. is the convention that we made after Lemma 5, as everything was done being (additionally) controlled by \(\left|\mathbf{0}\right\rangle\) in Eq. (19).
Now we use a different procedure with the state \(\left|\mathbf{0}\right\rangle\left|00\right\rangle\left|i\right\rangle\left|\mathbf{0}\right\rangle\). Let \(s_{m}\) be some integer value that is greater than \(s_{\Omega}+s_{\eta}\), e.g., \(s_{m}=s_{\Omega}+s_{\eta}+1\). We use controlled rotation gates to transform the register \(\left|00\right\rangle\) to
\[\frac{1}{s_{m}}\left|00\right\rangle+\frac{s_{\Omega}}{s_{m}}\left|01\right \rangle+\frac{s_{\eta}}{s_{m}}\left|10\right\rangle+G\left|11\right\rangle,\]
where \(G\) in the above state refers to the required normalization factor, i.e.,
\[G=\sqrt{1-\frac{1}{s_{m}^{2}}-\frac{s_{\Omega}^{2}}{s_{m}^{2}}-\frac{s_{\eta}^ {2}}{s_{m}^{2}}}.\]
We again note that as the linear matrices that implement \(d\) and \(\delta\) are known, their sparsities are also known. The above state can be obtained from \(\left|00\right\rangle\) by the following procedure. We first use rotation gate to transform \(\left|00\right\rangle\) to:
\[\left|0\right\rangle\Big{(}\sqrt{1/s_{m}^{2}+(s_{\eta}/s_{m})^{2}}\left|0 \right\rangle+\sqrt{(s_{\Omega}/s_{m})^{2}+G^{2}}\left|1\right\rangle\Big{)}.\]
We then perform two controlled rotation gates. The first one is controlled by \(\left|0\right\rangle\), and the second is controlled by \(\left|1\right\rangle\) in the second register. More specifically, we transform:
\[\sqrt{1/s_{m}^{2}+(s_{\eta}/s_{m})^{2}}\left|00\right\rangle\to\sqrt{1/s_{m}^ {2}+(s_{\eta}/s_{m})^{2}}\Big{(}\frac{1/s_{m}}{\sqrt{1/s_{m}^{2}+(s_{\eta}/s_{ m})^{2}}}\left|00\right\rangle+\frac{s_{\eta}/s_{m}}{\sqrt{1/s_{m}^{2}+(s_{\eta}/s_{ m})^{2}}}\left|10\right\rangle\Big{)}, \tag{27}\]
and
\[\sqrt{(s_{\Omega}/s_{m})^{2}+G^{2}}\left|01\right\rangle\to\sqrt{(s_{\Omega} /s_{m})^{2}+G^{2}}\Big{(}\frac{s_{\Omega}/s_{m}}{\sqrt{(s_{\Omega}/s_{m})^{2 }+G^{2}}}\left|01\right\rangle+\frac{G}{\sqrt{(s_{\Omega}/s_{m})^{2}+G^{2}}} \left|11\right\rangle\Big{)}. \tag{28}\]
We thus obtain our desired two-qubit state and the other part, and we denote for convenience the whole process as \(U\), i.e.,
\[\ket{\Phi}\equiv U\left|\mathbf{0}\right\rangle\left|00\right\rangle\left|i\right\rangle\left|\mathbf{0}\right\rangle=\left|\mathbf{0}\right\rangle\Big{(}\frac{1}{s_{m}}\left|00\right\rangle+\frac{s_{\Omega}}{s_{m}}\left|01\right\rangle+\frac{s_{\eta}}{s_{m}}\left|10\right\rangle+G\left|11\right\rangle\Big{)}\left|i\right\rangle\left|\mathbf{0}\right\rangle. \tag{29}\]
We then make the following crucial observation:
\[\left\langle\Phi|\phi\right\rangle=\frac{1}{2s_{m}}\Big{(}|\vec{w_{j}}|\left\langle i|w_{j}\right\rangle-|\delta\vec{\Omega}^{\prime}_{j}|\left\langle i|\delta\vec{\Omega}^{\prime}_{j}\right\rangle-|d\vec{\eta}^{\prime}_{j}|\left\langle i|d\vec{\eta}^{\prime}_{j}\right\rangle\Big{)}, \tag{30}\]
which allows us to compute the components in the above combination of vectors,
\[\vec{w_{j}}-\left|\delta\vec{\Omega}^{\prime}_{j}\right|\cdot\left|\delta \vec{\Omega}^{\prime}_{j}\right\rangle-\left|d\vec{\eta}^{\prime}_{j}\right| \cdot\left|d\vec{\eta}^{\prime}_{j}\right\rangle,\]
which is exactly the harmonic form \(h_{j}\) associated with the initial 1-form \(w_{j}\). Therefore, the inner product \(\left\langle\Phi,\phi\right\rangle\) is exactly the \(i\)-th entry of the \(j\)-th harmonic form \(h_{j}\). By definition, the unitary \(U^{\dagger}U_{\phi}\) is exactly the unitary block encoding of \(\mathbb{H}\) (where \(\mathbb{H}\) was defined in algorithm 1), scaled down by a factor \(s_{m}\). We note that the scaling-down factor does not affect the subsequent algorithm, as the kernel of a scaled-down matrix is the same. In fact, in the Appendix, we will show that \(s_{m}\) is actually not large, of order unity only. Therefore, we omit the factor \(s_{m}\) in subsequent discussion.
Given the unitary encoding of \(\mathbb{H}\), it is trivial to obtain the unitary encoding of \(\mathbb{H}^{\dagger}\), as it is just a matter of transposition (note that \(\mathbb{H}\) is real). Therefore, it is straightforward to apply Lemma 2 to obtain the unitary block encoding of \(\mathbb{H}^{\dagger}\mathbb{H}\). Since all the forms \(w_{j}\)'s (and hence, their harmonic forms) are real, it is safe to use \(\mathbb{H}^{T}\mathbb{H}\) simply. Note we shall use \(\mathbb{H}^{T}\mathbb{H}\) instead of \(\mathbb{H}\) because \(\mathbb{H}\) is not generally symmetric, while \(\mathbb{H}^{T}\mathbb{H}\) is apparently symmetric. Lemma 1 then allows us to efficiently simulate \(\exp(-i\mathbb{H}^{T}\mathbb{H}t)\). The next goal is to estimate the dimension of \(Ker(\mathbb{H}^{T}\mathbb{H})\), where \(Ker\) refers to the kernel space. This is similar to what was done in Ref. [6], where the authors also employed the simulation of the boundary map combined with quantum phase estimation method [7] to extract the kernel of the corresponding boundary operator, which in turn reveals the Betti numbers. Basically, we first generate the following mixed state:
\[\rho=\frac{1}{c_{1}}\sum_{j=1}^{c_{1}}\left|j\right\rangle\left\langle j\right|,\]
by first preparing \(\sum_{j=1}^{c_{1}}\left|j\right\rangle/\sqrt{c_{1}}\) and using CNOT gates to copy the bitstring to another ancillary system initialized in \(\left|\mathbf{0}\right\rangle\). Tracing out either system yields \(\rho\). We can then run the quantum phase estimation algorithm with the unitary action being \(\exp(-i\mathbb{H}^{T}\mathbb{H}t)\) and the input state being \(\rho\). Note that kernel space corresponds to the eigenvectors with zero eigenvalues; therefore, for those eigenvectors, the quantum phase estimation algorithm will output the zero-bit string on the extra register that holds the outcome of phase estimation. Since the mixture \(\rho\) is uniform, the probability of measuring zeros on the extra register in the quantum phase estimation circuit is:
\[p_{0}=\frac{\dim Ker(\mathbb{H}^{T}\mathbb{H})}{c_{1}}. \tag{31}\]
Therefore, we can estimate \(\dim\ Ker(\mathbb{H}^{T}\mathbb{H})\) by repeating the algorithm to extract the measurement outcomes. We note that the phase estimation step also appeared in [6], and it was pointed out in [13] that quantum counting [23] can be used instead to estimate \(p_{0}\). In order to estimate \(p_{0}\) to additive accuracy \(\epsilon\), it takes \(\mathcal{O}(1/\epsilon)\) time steps by quantum counting. However, \(\dim Ker(\mathbb{H}^{T}\mathbb{H})\) is an integer, and we require the ability to estimate it to some multiplicative error \(\delta_{d}\). One can choose
\[\epsilon=\delta_{d}\frac{\dim Ker(\mathbb{H}^{T}\mathbb{H})}{c_{1}}\]
to achieve such a goal. As the last step in our algorithm, the following result shows that \(\dim\ Ker(\mathbb{H}^{T}\mathbb{H})\) is exactly \(\dim\ Ker(\mathbb{H})\).
**Lemma 7**: _Let \(X=A^{T}A\) be some matrix of size \(d\times d\) (\(A\) is not necessary to be square). Then Ker(\(X\)) = Ker(\(A\))._
_Proof:_ (\(\rightarrow\)) It is obvious that \(\text{Ker}(A)\subset\text{Ker}(X)\), since if \(Ax=0\) (i.e., \(x\in Ker(A)\)) then \(A^{T}Ax=0\).
(\(\leftarrow\)) Let \(x\in Ker(X)\), hence \(A^{T}Ax=0\). We consider the inner product
\[x^{T}A^{T}Ax=x^{T}(A^{T}Ax)=0.\]
But \(x^{T}A^{T}=(Ax)^{T}\), and, therefore, the above equality is equivalent to
\[(Ax)^{T}Ax=0.\]
The above product is the squared norm of the real vector \(Ax\), which is, by definition, equal to \(0\) if and only if \(Ax=0\). It implies that \(x\in Ker(A)\). Our proof is thus completed.
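As a quick numerical sanity check of Lemma 7 (purely illustrative and not part of the quantum algorithm itself), one can compare the kernel dimensions of a random rank-deficient matrix \(A\) and of \(A^{T}A\):

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately rank-deficient 8 x 6 matrix A of rank 4.
A = rng.standard_normal((8, 4)) @ rng.standard_normal((4, 6))
X = A.T @ A                                  # the matrix X = A^T A from Lemma 7

d = A.shape[1]
dim_ker_A = d - np.linalg.matrix_rank(A)     # dim Ker(A) = 6 - 4 = 2
dim_ker_X = d - np.linalg.matrix_rank(X)     # dim Ker(X) = 6 - 4 = 2
assert dim_ker_A == dim_ker_X                # Ker(X) and Ker(A) have the same dimension
```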
The result sheds some light on how the Betti numbers can be calculated. We remark that \(\dim Ker(\mathbb{H})\) is not the Betti number itself; rather, the rank of \(\mathbb{H}\), which equals \(c_{1}-\dim Ker(\mathbb{H})\), is the first Betti number \(\beta_{1}\). Therefore, the ability to find \(\dim Ker(\mathbb{H}^{T}\mathbb{H})\) (which is \(c_{1}-\beta_{1}\)) via the quantum counting method outlined earlier yields the first Betti number.
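The counting step can be mimicked classically as well. The sketch below is an illustrative NumPy stand-in, with a made-up low-rank matrix playing the role of \(\mathbb{H}\): it samples eigenvectors of \(\mathbb{H}^{T}\mathbb{H}\) uniformly, exactly as the maximally mixed input state \(\rho\) would behave under phase estimation, and then recovers \(\beta_{1}=c_{1}-\dim Ker(\mathbb{H}^{T}\mathbb{H})\). The resulting number only illustrates the arithmetic, not an actual Betti number of any complex.

```python
import numpy as np

rng = np.random.default_rng(1)

c1 = 12                                             # number of 1-simplices (toy value)
H = rng.standard_normal((8, 4)) @ rng.standard_normal((4, c1))   # rank-4 stand-in for H
X = H.T @ H                                         # c1 x c1, symmetric

eigvals = np.linalg.eigvalsh(X)
is_zero = np.isclose(eigvals, 0.0, atol=1e-8)       # eigenvalues belonging to the kernel

# Mimic phase estimation on the maximally mixed state: each eigenvector is drawn
# uniformly, and the phase register reads all-zeros exactly for zero eigenvalues.
shots = 20000
samples = rng.integers(0, c1, size=shots)
p0_hat = is_zero[samples].mean()                    # estimate of p_0 in Eq. (31)

dim_ker_hat = round(p0_hat * c1)                    # ~ dim Ker(H^T H) = c1 - beta_1
beta1_hat = c1 - dim_ker_hat                        # = rank(H) = 4 for this toy matrix
print(dim_ker_hat, beta1_hat)
```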
As a brief summary, our algorithm mainly employs the matrix inversion and matrix multiplication techniques to transform the initial 1-form \(\left|w^{\prime}\right\rangle\) to its harmonic representation. Quantum signal processing is then employed, combined with quantum phase estimation plus repeated measurements, to extract the Betti number. The main computational expenses come from matrix multiplication and inversion and from the repetition of measurements required to estimate Betti numbers with sufficient accuracy. For the matrix multiplication/inversion step, the matrices involved have dimension \(c\times c\); therefore, these steps take \(\mathcal{O}(\log(c))\) time (we ignore the sparsity dependence because it is very small, as we will see later). We emphasize the important point that, per Lemma 4, the running time also involves the condition number, which can be large in general. This issue has been resolved in [24], where the authors provided a generalized quantum inversion method that allows a matrix with a high condition number to be efficiently inverted. While the method in [24] induces a larger scaling on the sparsity, we note that the sparsity in our case is very small, as we will explicitly show in Appendix A. Therefore, we can safely integrate the result of [24] into our work without substantial further scaling. Further recall that \(c=c_{1}+c_{2}\leq 2\max(c_{0},c_{1},c_{2})\), where \(c_{0,1,2}\)'s are the number of 0-simplices, 1-simplices and 2-simplices in the given complex, respectively. It is straightforward to see that \(\mathcal{O}(\log(c))=\mathcal{O}(\log(n))\), where \(n\) is the number of points. We state our main result in the following theorem.
**Theorem 4** (Estimating 1st Betti number): _Let \(\sum\) be a simplicial complex with \(n\) points, corresponding to the triangulation of a 2-manifold. The 1st Betti number \(\beta_{1}\) can be estimated to multiplicative accuracy \(\delta_{d}\) in time_
\[\mathcal{O}\Big{(}\frac{\log(n)}{\delta_{d}}\cdot\frac{c_{1}}{(c_{1}-\beta_{1})}\Big{)}.\]
## Generalization to different Betti numbers and higher dimensional manifolds
Given a triangulated 2-manifold, aside from the first, there are also the zeroth and the second Betti numbers \(\beta_{0},\beta_{2}\). The zeroth Betti number is always one here, as we already assume that the given manifold is a connected graph that is triangulated. We have presented a quantum algorithm for the first Betti number. In order to compute \(\beta_{2}\), we first randomize a different 2-form \(\Omega\). We then deform them to the harmonic form by using the Hodge decomposition (2). The difference is that for a 2-manifold, there is no higher than 2-simplex; therefore, there will be no co-exact term. The decomposition is thus
\[\Omega=d\omega+h, \tag{32}\]
where \(\omega\) is a 1-form, and \(h\) is the corresponding harmonic 2-form of \(\Omega\). We then follow the same procedure as we did for calculating \(\beta_{1}\).
The next important and somewhat subtle point is about higher dimensional cases. As we have emphasized from the beginning, the underlying groundwork of our method is the discrete Hodge theory. In order to apply this, for example, using Eqn. 2, we need to be able to formulate the operator \(d\delta\) for all forms (see Eqn. 7). This formula depends on the dimension of the given manifold. The main reason is that, while the operator \(d\) is easily formulated in terms of a matrix given the simplex, the formulation of the operation \(\delta\) relies on the Poincare duality, which uses the concept of _dual cell_.
For a triangulated 2-manifold, a dual cell to an edge (1-simplex) is an edge. However, for a triangulated 3-manifold, a dual cell to an edge is no longer an edge but a face (2-simplex). It means that even for the same first Betti number \(\beta_{1}\), the matrix coefficients (see Eqn. 7) are different. Fortunately, the computation procedure is the same, as it still relies on deforming a given \(k\)-form to its harmonic part. We can proceed with the computation of arbitrary Betti numbers and in arbitrary dimensional manifold in exactly the same way as we did in the above algorithm, provided the matrix form of \(\delta\) (and hence \(d\delta\)) is given. Despite that there is no exact expression for \(\delta\), its matrix form can be efficiently calculated given the triangulation of the manifold (see Appendix B). Therefore, given a triangulated \(m\)-dimensional manifold \(\sum\) having \(n\) points, its \(r\)-th Betti number \(\beta_{r}\) can be estimated (with a multiplicative error \(\delta_{e}\)) with time complexity
\[\mathcal{O}\Big{(}\frac{\log(c_{r})}{\delta_{e}}\cdot\frac{c_{r}}{(c_{r}- \beta_{r})}\Big{)},\]
where \(c_{r}\) is the number of \(r\)-simplices in \(\sum\). In the low-\(r\) limit (\(r\ll n\)), \(\log(c_{r})\in\mathcal{O}(\log(n))\), whereas in the limit \(r\sim n\), \(\log(c_{r})\sim r\log(n)\). Our algorithm achieves its best performance when \(\beta_{r}\ll c_{r}\), which is opposite to that of the LGZ algorithm [6; 13].
Figure 2: Dual cell complexes in 2 and 3 dimensions. Left figure: In a triangulated 2-manifold, the dual to 2-simplex \([v_{i},v_{j},v_{k}]\) is a point o. The dual to an edge \([v_{i},v_{j}]\) is an edge \(o\bar{o}\). Right figure: In a triangulated 3-manifold, the dual to 3-simplex \([v_{i},v_{j},v_{k},v_{m}]\) is a point o. The dual to a face (2-simplex) \([v_{i},v_{j},v_{k}]\) is an edge \(o\bar{o}\).
## Further analysis and discussion
To make a comparison, we remind some of the results regarding the calculation of the first Betti number. As mentioned in [13], given a specified simplicial complex \(\sum\) with \(n\) points, the best classical algorithm to compute the first Betti number \(\beta_{1}\) takes time \(\mathcal{O}(c_{1}^{3})\), where \(c_{1}\) is the number of 1-simplices in \(\sum\). As also mentioned in [13], the best running time of the improved-LGZ algorithm to estimate Betti number to multiplicative error \(\delta\) is
\[\mathcal{O}\Big{(}\frac{1}{\delta}n\kappa\sqrt{\frac{c_{1}}{\beta_{1}}}\Big{)}. \tag{33}\]
The advantage of the improved-LGZ algorithm compared to the classical algorithm is obtained in the regime \(\beta_{1}\to c_{1}\) (and generally, \(\beta_{k}\to c_{k}\)), as they emphasize in their work [13]. More specifically, in such a regime for a triangulated 2-manifold as in our specific case, the classical running time is \(\mathcal{O}(n^{3})\), since for a triangulation of \(n\) points, the number of edges (1-simplices) is bounded by \(\mathcal{O}(n)\). The running time of the improved LGZ is (we ignore the \(\kappa\) factor) \(\mathcal{O}(n/\delta)\). Hence, their quantum algorithm yields a cubic speedup.
In the same regime, our algorithm turns out to be slower than both above approaches, as \(c_{1}^{2}\) will have the order \(\sim\mathcal{O}(n^{4})\). On the other hand, in the regime where \(\beta_{1}\) is small compared to \(c_{1}\), our algorithm will have a running time
\[\mathcal{O}\Big{(}\frac{\log(n)}{\delta}\Big{)},\]
which is an exponential speedup compared to both the classical algorithm and that of the quantum algorithms (LGZ and its improved version) [6; 13]. Such an interesting opposite performance is a somewhat unexpected outcome from the duality between cohomology and homology.
Another subtle point that we would like to discuss is the assumption that we made regarding the shape of the simplicial complex. In our work, we only deal with a uniformly triangulated manifold, i.e., all the composing simplices are similar in size, and the manifold is built by properly gluing them together. In some previous works, such as [6; 9; 10; 13], the setting is seemingly more general where the shape can be arbitrary, as two points only get connected if their distance is smaller than a known threshold. One may wonder if our assumption would severely limit the practicality of the outlined algorithm. The answer is no, as the topology of the underlying manifold only depends on the connectivity but not the actual distance between arbitrary two points. As an example, let us consider the 2-dimensional case. One can imagine that, given a triangle with arbitrary angles/lengths, it is deformable or topologically equivalent to an equilateral triangle. Therefore, the topological space formed by the union, or by gluing different triangles together, is topologically equivalent to the union of equilateral triangles, which form the uniformly triangulated manifold. Since they are topologically equivalent, their underlying topological properties are the same. Consequently, performing computation on the uniformly triangulated manifold is more convenient, as we discuss further in Appendix B, where we provide an explicit formula for the codifferential operator. Finally, we remark that, in general, Hodge theory (see Theorem 2) works with closed manifolds (in both smooth and discrete settings). Therefore, the specification of a given simplicial complex requires an extra criterion. In the 2d case that we worked out earlier, each edge (1-simplex) is supposed to be adjacent to two triangles (2-simplex). The generalization to higher dimensions is straightforward. If our initial configuration is an open triangulated manifold, e.g., with a boundary, then a simple trick is to double cover the simplicial complex while preserving the symmetry. In this way, we can recover the closeness condition and proceed with our algorithm. As a final remark, while our framework requires more specification of the complex to ensure the close property, it requires exponentially fewer qubits and can potentially achieve exponentially faster running time. We regard it as a tradeoff factor.
## VII Conclusion
Our work has provided a 'dual' approach to [6] and is built upon the (discrete) Hodge theory and de Rham cohomology. There are a few major reasons underlying the advantage of our approach compared to [6]. In [6], the authors basically quantized the homology approach, associating the chain group to a vector space and finding singular values/ vectors of the boundary map \(\partial\). The key step in [6] is the identification of the chain group as a vector space, whereby each simplex is represented by a basis state, which means that the vector space needs to be at least as large as the number of simplices contained in \(\sum\). By doing this way, the resource scales as a polynomial of \(n\). In reality, the high value of \(n\) is usually desired (large-scale analysis), which means that \(\mathrm{poly}(n)\) could be very high, and hence the algorithm induces a very high computational cost. The cohomology approach we have adopted here fits nicely in such a large-scale setting. We recall that in (simplicial) cohomology, a cochain is a map \(C\to R\) where \(C\) is some chain
group (space). Equivalently, we can imagine that each simplex is associated with a real number, and, therefore, we can have a more efficient way of storing our data, as we only need \(\sim\log(M)\) qubits to store an \(M\)-dimensional vector, which further reduces the resource needed for processing. Another major point is that in [6], it is required to generate the proper simplicial complex state, which contributes substantially to the computational cost and, in certain cases, has a high failing probability (for details, see [6]). Here, in the cohomology approach, the initial 1-form state can, in fact, be chosen arbitrarily, which is more convenient. We further make an important remark that usually, the first and second Betti numbers are important enough for us to infer the underlying structure of a dataset, which implies that for those low Betti numbers, our algorithm actually has running time \(\sim\mathcal{O}(\log(n))\). It is worth noting that our dual approach does not perform well in the regime of high Betti numbers, for which the LGZ algorithm works very well. Whether there is a mixed approach that can give rise to better performance in the intermediate regime is left for future exploration.
###### Acknowledgements.
This work was supported in part by the US Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704 (T.-C.W.), and by the National Science Foundation under Grants No. CMMI-1762287 (X.D.G.) and No. FAIN-2115095 (X.D.G.), as well as NIH 3R01LM012434-05S1 (X.D.G.) and NIH 1R21EB029733-01A1 (X.D.G.). We also acknowledge the support from a Seed Grant from Stony Brook University's Office of the Vice President for Research.
|
2309.04875 | Approximating ReLU on a Reduced Ring for Efficient MPC-based Private
Inference | Secure multi-party computation (MPC) allows users to offload machine learning
inference on untrusted servers without having to share their privacy-sensitive
data. Despite their strong security properties, MPC-based private inference has
not been widely adopted in the real world due to their high communication
overhead. When evaluating ReLU layers, MPC protocols incur a significant amount
of communication between the parties, making the end-to-end execution time
multiple orders slower than its non-private counterpart.
This paper presents HummingBird, an MPC framework that reduces the ReLU
communication overhead significantly by using only a subset of the bits to
evaluate ReLU on a smaller ring. Based on theoretical analyses, HummingBird
identifies bits in the secret share that are not crucial for accuracy and
excludes them during ReLU evaluation to reduce communication. With its
efficient search engine, HummingBird discards 87--91% of the bits during ReLU
and still maintains high accuracy. On a real MPC setup involving multiple
servers, HummingBird achieves on average 2.03--2.67x end-to-end speedup without
introducing any errors, and up to 8.64x average speedup when some amount of
accuracy degradation can be tolerated, due to its up to 8.76x communication
reduction. | Kiwan Maeng, G. Edward Suh | 2023-09-09T20:49:12Z | http://arxiv.org/abs/2309.04875v1 | # Approximating ReLU on a Reduced Ring for Efficient MPC-based Private Inference
###### Abstract
Secure multi-party computation (MPC) allows users to offload machine learning inference on untrusted servers without having to share their privacy-sensitive data. Despite their strong security properties, MPC-based private inference has not been widely adopted in the real world due to their high communication overhead. When evaluating ReLU layers, MPC protocols incur a significant amount of communication between the parties, making the end-to-end execution time multiple orders slower than its non-private counterpart.
This paper presents HummingBird, an MPC framework that reduces the ReLU communication overhead significantly by using only a subset of the bits to evaluate ReLU on a smaller ring. Based on theoretical analyses, HummingBird identifies bits in the secret share that are not crucial for accuracy and excludes them during ReLU evaluation to reduce communication. With its efficient search engine, HummingBird discards 87-91% of the bits during ReLU and still maintains high accuracy. On a real MPC setup involving multiple servers, HummingBird achieves on average 2.03-2.67\(\times\) end-to-end speedup without introducing any errors, and up to 8.64\(\times\) average speedup when some amount of accuracy degradation can be tolerated, due to its up to 8.76\(\times\) communication reduction.
## 1 Introduction
Machine learning (ML) inference often uses privacy-sensitive user data as an input feature. A model that predicts patients' disease by looking at their X-ray images [1] uses the patients' private X-ray data. Code auto-completion services like GitHub CoPilot [2] take in the user's proprietary code snippet to fill in the rest of the code. Smart home devices that take in the user's verbal command [3, 4, 5] collect the user's raw microphone inputs that can contain sensitive information. As ML models powering these services become larger and are often proprietary, an increasing trend is to host these models on a remote server owned by the service provider, to which the users send their input data. This emerging trend creates a dilemma for the users -- to use high-quality services empowered by large ML models, the users have to send their privacy-sensitive input data to a third party, risking potential privacy leakage.
Secure _multi-party computation_ (MPC; [6]) is gaining wide interest as a potential solution to this dilemma. MPC allows users to offload ML inference to third-party servers, without having to reveal their private data to the servers [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. In MPC, instead of sending their raw data, users send _secret shares_ of their data, from which the servers cannot infer the users' raw data. Without learning anything about the users' data, the servers run inference using the secret shares and send the result back to the users. Only the users, once they receive all the results from the servers, can retrieve the output of the inference. Figure 2 summarizes the high-level operation of an MPC-based private inference.
Despite their strong security guarantees, MPC-based private inference has not been widely adopted in the real world yet, due to their high runtime overheads. Even the most efficient MPC schemes [12, 15] experience multiple orders of magnitude slowdown over a non-private baseline. Unlike non-private inference that are usually computation- or memory-bound, the majority of the overhead in MPC comes from communications between parties during non-linear operations -- or most prominently, ReLU. In a particular setup we studied, ReLU was accountable for over 93% of the total overhead (Figure 1, leftmost bar), which is in line with observations from prior works [16]. To tackle this unique source of overhead, recent works concentrated on designing a faster algorithm for ReLU [18, 19, 10, 14, 11] or model architectures that use less number of ReLUs [16, 17, 20, 21, 22].
In this paper, we explore an orthogonal approach that accelerates _existing_ ReLU algorithms further by approximating the sign estimation process (_i.e._, _DReLU_). The key insight is that simply _guessing the sign_, unlike high-precision arithmetic operations, can still be done correctly by only looking at a small subset of bits on a smaller ring. We theoretically show that for a large family of ReLU protocols, discarding a carefully-selected amount of high-order and low-order bits of a secret share renders the final ReLU outcome equivalent to magnitude-based activation pruning, which is empirically known to have little effect on accuracy [23, 24, 25, 26, 27] if done properly (Section 3).
Based on the theoretical insight, we propose HummingBird,
Figure 1: Latency of running _512_ CIFAR10 inferences on ResNet18 with CrypTen [12] and our proposed framework, HummingBird. When 0.2% accuracy degradation is tolerated, HummingBird achieves a throughput of 87 samples/s (4.41\(\times\) over CrypTen). Details of the setup can be found in Section 5.
a framework that automatically selects a proper number of bits to discard for each ReLU layer and uses an optimized kernel to translate the reduced bits into an end-to-end speedup. HummingBird achieves 2.49-5.34\(\times\) end-to-end speedup on a typical LAN setup (Figure 1), and up to 8.64\(\times\) speedup on a network-constrained WAN setup over the popular CrypTen framework [12]. HummingBird is orthogonal to works that reduce the number of ReLUs [16, 17, 20, 21, 22] and can be used in synergy to further accelerate them. Below summarizes our contributions:
1. We theoretically show that only using a small subset of the bits of the secret shares is sufficient to keep the ReLU results close to the original result for a large family of MPC protocols. Specifically, we show that removing the majority of the high-order and low-order bits in the secret shares renders the result identical to activation pruning. The theoretical result serves as a stepping stone for HummingBird.
2. We propose an efficient search algorithm to decide how many high- and low-order bits to remove for each layer, and present an efficient search engine that performs the search on a lightweight simulation environment. Within a reasonable amount of time (several minutes to an hour), HummingBird finds a configuration that minimally impacts the model accuracy while significantly improving the communication overhead.
3. We implemented a runtime library as an extension to CrypTen [12] that can bring up to 8.64\(\times\) average end-to-end speedup and 8.76\(\times\) communication reduction with the configuration found by the search algorithm. We will open-source the entire codebase, including the search engine and the runtime library, upon paper publication.
## 2 Background and Motivation
### Private Inference with Multi-party Computation
With the rising concerns on data privacy in ML-based services, MPC-based private inference is gaining wide interest. Existing works on MPC-based inference can be broadly classified into either a _client-server_ setup or a _multi-server_ setup.
Client-server MPC [7, 8, 9, 14, 15, 28, 29] studies a setup where an MPC-based inference runs between a client holding data and a server holding a model. In this setup, the server runs most of the heavy computations, assuming that the client device is not powerful (_e.g._, smartphone or personal laptop) [30]. This setup provides strong security where the client does not need to worry about collusion. However, protocols targeting this setup are generally slower because they use a mixture of MPC and homomorphic encryption (HE). These protocols are often called 2PC [14] or hybrid [31] protocols as well.
Multi-server MPC [11, 12, 13, 18, 19, 32, 33, 34, 35, 36, 37] studies a setup where multiple non-colluding servers collaboratively run an MPC-based inference. Unlike client-server MPC where one of the parties (the server) does most of the computation, workloads are more balanced in this setup. While users can also act as one of the parties if they have enough computing power, it is more common to assume they do not participate. Instead, users simply offload the inference to multiple non-colluding servers [28] by generating and sending secret shares of their inputs (Figure 2). The servers performing MPC cannot learn about the users' input from the received secret shares unless they collude. In this setup, the model can be both shared between the parties or be private to one of the parties. If the model is private, participating parties except for the owner of the model use an encrypted model, and the execution is slower compared to when the model is shared.
Multi-server MPC is usually faster than the client-server MPC because it does not involve expensive HE operations -- a recent study [15] observed a 15\(\times\) difference between the two due to the HE operations. The major downside is that the user data are safe only when the involving parties do not collude [12]. This non-colluding assumption can be realized with policies and contracts between the parties. Many companies are forming an alliance [38] to explore and adopt MPC technologies, and some simple form of MPC is already being adopted in the industry [39].
**Evaluation of ReLU.** For all the MPC protocols, evaluating ReLU accounts for a significant portion of the overhead. ReLU is evaluated in several different ways: some of the popular approaches include the Goldreich-Micali-Wigderson (GMW) protocol [11, 12, 35, 40, 41], garbled circuits [7, 8, 9, 11, 28, 41], or a variant of SecureNN [18]'s protocol [13, 18, 19]. Among these, the GMW protocol is GPU-friendly [32] and is often used in GPU-based high-throughput systems [12, 32].
Many of the aforementioned protocols [11, 12, 13, 18, 19, 32, 40, 35] evaluate ReLU by first evaluating whether the secret is positive, _i.e._, \(x\geq 0\)?, and multiplying the boolean result by the original secret. Following prior works [13], we call this sign estimation operator \(DReLU\)1: \(\text{DReLU}(x)=1\) iff \(x\geq 0\) and \(0\) otherwise. With DReLU, ReLU is trivially:
Footnote 1: for derivative of ReLU
\[\text{ReLU}(x)=x\times\text{DReLU}(x). \tag{1}\]
Accelerating the DReLU operation can directly accelerate ReLU for these protocols [11, 12, 13, 18, 19, 32, 40, 35].
**Scope of HummingBird.** We describe and evaluate the idea of HummingBird on top of CrypTen [12], a GMW-based multi-server MPC framework developed and maintained by Meta.
Figure 2: Overview of a multi-server MPC protocol.
CrypTen is popular due to its high-speed GPU support [12] and has served as a foundation of several recent works [32, 43, 42].
While this paper is written around CrypTen, its idea is relevant to a wider range of works -- it is directly applicable to any other protocol that uses Equation 1 for ReLU and experiences a DReLU overhead that increases with the ring size (_i.e_., the number of bits in the secret share). All the other GMW-based systems [35, 32, 11] and other popular systems [18, 13, 14, 19] fall into this category. As in the original CrypTen paper, we assume an honest-but-curious adversary [12].
### Operation of CrypTen and GMW Protocol
**Notations.** Let \(x\in\mathbb{Z}/Q\mathbb{Z}\) be a secret value in an integer ring of size \(Q=2^{N}\). We denote \(p\) arithmetic secret shares of \(x\) as \(\langle x\rangle_{p}^{Q}\in\mathbb{Z}/Q\mathbb{Z}\), where \(\Sigma_{i=0}^{p-1}\langle x\rangle_{i}^{Q}\equiv x\ (\bmod\ Q)\). We simply denote the set of the shares as \(\langle x\rangle^{Q}=\{\langle x\rangle_{p}^{Q}\}\). For \(x\) represented in an \(N\)-bit signed integer representation (two's complement), we denote \(p\) binary secret shares of \(x\) as \(\langle x\rangle_{p}^{B}\), where \(\oplus_{i=0}^{p-1}\langle x\rangle_{i}^{B}=x\) for a bitwise XOR operation \(\oplus\). Throughout the paper, we assume an element in a ring of size \(2^{n}\) is always in an \(n\)-bit signed integer representation for any \(n\). We express bits from the \(m\)-th bit to the \(k-1\)-th bit in \(x\) (\(m\leq k\)) as \(x[k:m]\). For example, if \(x=1101\,1101_{2}\), \(x[5:1]=1110_{2}\). Note that the \(k\)-th bit is excluded. We treat the resulting \(x[k:m]\) as an element on a smaller ring \(\mathbb{Z}/2^{k-m}\mathbb{Z}\) unless stated otherwise. Similarly, we denote the \(k\)-th bit of \(x\) as \(x[k]\).
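The bit-slice notation can be reproduced with plain integer arithmetic; the small helper below (ours, purely illustrative and not CrypTen's API) checks the \(x=1101\,1101_{2}\) example.

```python
def bits(x: int, k: int, m: int) -> int:
    """Return x[k:m], i.e., bits m..k-1 of x, as an element of the 2^(k-m) ring."""
    return (x >> m) & ((1 << (k - m)) - 1)

x = 0b11011101
assert bits(x, 5, 1) == 0b1110   # matches the x[5:1] example above
```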
**Operation of CrypTen.** In CrypTen, users split their secret input \(x\in\mathbb{Z}/Q\mathbb{Z}\) into \(p\) arithmetic secret shares and send each share \(\langle x\rangle_{p}^{Q}\) to a different participating server \(P_{p}\). CrypTen can work with any number of \(p\geq 2\); when \(p=2\), secret shares can be easily generated by the client generating a random number \(r\) and setting \(\langle x\rangle_{0}^{Q}=x+r\), \(\langle x\rangle_{1}^{Q}=-r\). Floating-point values \(x_{f}\) are converted to an integer ring element \(x\) by multiplying with a large integer \(D\) and rounding (\(x=\lfloor Dx_{f}\rceil\)).
Addition or multiplication by a public value can be trivially done directly on arithmetic secret shares (_e.g_., \(\Sigma_{i=0}^{p-1}a\langle x\rangle_{i}^{Q}\equiv ax\ (\bmod\ Q)\)), allowing efficient linear operations (convolution or fully-connected layers) by a public weight. Addition between two secret shares can also be done trivially without additional overhead. Multiplication between secret shares adds more overheads because it requires communications between the parties and a set of random numbers called the Beaver triplets [44]. Beaver triplets can be generated and distributed by a trusted third-party (TTP) or using oblivious transfer [12]. We defer detailed explanations of these arithmetic operations to prior works [12], as it is not the focus of our optimization.
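As a concrete, purely illustrative plaintext mock-up of these share-wise operations (not CrypTen code), the following snippet splits a fixed-point secret into two additive shares on \(\mathbb{Z}/2^{64}\mathbb{Z}\) and checks that addition and scaling by a public constant act locally on the shares:

```python
import random

N = 64
Q = 1 << N           # ring size 2^64
D = 1 << 16          # fixed-point scale used to embed floats (x = round(D * x_f))

def share(x):
    r = random.randrange(Q)
    return [(x + r) % Q, (-r) % Q]          # 2-party additive shares

def reconstruct(shares):
    v = sum(shares) % Q
    return v - Q if v >= Q // 2 else v      # interpret as a signed N-bit value

x = round(D * 1.5)                           # secret 1.5 in fixed point
y = round(D * -0.25)
sx, sy = share(x), share(y)

# Addition and scaling by a public constant are purely local, share-wise operations.
s_sum   = [(a + b) % Q for a, b in zip(sx, sy)]
s_scale = [(3 * a) % Q for a in sx]

assert reconstruct(s_sum)   == x + y
assert reconstruct(s_scale) == 3 * x
```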
**Evaluating ReLU with GMW.** Non-linear operations, such as max pooling or ReLU, are much more complicated and expensive in MPC. Here, we describe in detail how the ReLU operation is evaluated with the Goldreich-Micali-Wigderson (GMW) protocol, which accounts for more than 93% of the total execution time (Figure 1) and is the focus of our paper.
CrypTen evaluates ReLU by separately evaluating DReLU (Equation 1). When DReLU is applied to a secret share \(\langle x\rangle^{Q}\), the output is a secret share of one (\(\langle 1\rangle^{Q}\)) if \(x\geq 0\) and \(\langle 0\rangle^{Q}\) otherwise. ReLU is evaluated by:
\[\text{ReLU}(\langle x\rangle^{Q})=\langle x\rangle^{Q}\times\text{DReLU}( \langle x\rangle^{Q}). \tag{2}\]
This requires a multiplication between secret shares and uses the aforementioned Beaver triplets.
Most of the overheads of ReLU come from estimating DReLU(\(\langle x\rangle^{Q}\)). Below, we explain how the GMW protocol evaluates DReLU. First, the arithmetic secret shares \(\langle x\rangle^{Q}\) are converted into binary secret shares \(\langle x\rangle^{B}\). The arithmetic-to-binary (A2B) conversion is done by each party \(P_{p}\) first generating binary secret shares of their arithmetic secret shares, \(\langle\langle x\rangle_{p}^{Q}\rangle^{B}\), and adding their binary shares \(\langle\langle x\rangle^{Q}\rangle_{p}^{B}\) locally [12]. As only bitwise operations like AND or XOR can be done on the binary shares, the addition of \(\langle\langle x\rangle^{Q}\rangle_{p}^{B}\) is performed using a series of AND and XOR operations, as it would be done by an adder circuit (_e.g_., carry-lookahead adder) [12]. After the conversion, the most significant bit (MSB; sign bit) of \(\langle x\rangle^{B}\) (which is \(\langle x\rangle^{B}[\text{N--1}]\) if \(Q=2^{N}\)) holds the binary secret share of \(0\) if \(x\) is positive and \(1\) if negative. Converting \(\langle x\rangle^{B}[\text{N--1}]\) back into arithmetic secret shares (binary-to-arithmetic; B2A) and subtracting it from a public value \(1\) gives us our desired DReLU(\(\langle x\rangle^{Q}\)) [12].
During the circuit addition, XOR can be done locally on each party, similarly to how addition can be done privately on arithmetic secret shares. However, AND, like multiplication between arithmetic secret shares, requires Beaver triplets and communications between the parties. For an \(N\)-bit secret \(x\), the circuit adder implementation requires \(O(logN)\) rounds of communication and \(O(N)\) bits communicated at each round, resulting in \(O(NlogN)\) total communication overheads. Usually, \(N\) is large (_e.g_., \(64\)[12]) to avoid arithmetic wrap-around errors [12, 28], and the communication overhead becomes the major bottleneck of DReLU [12, 11, 32].
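To make the cost structure concrete, the following two-party, single-bit mock-up (written for exposition only; the opening of \(d\) and \(e\) stands in for the actual round of communication) evaluates one AND gate on XOR shares using a Beaver triple:

```python
import random

def xor_share(bit):
    r = random.randrange(2)
    return [bit ^ r, r]

def beaver_and(xs, ys):
    # A dealer (e.g., a TTP) samples a triple a, b, c with c = a & b and shares it.
    a, b = random.randrange(2), random.randrange(2)
    sa, sb, sc = xor_share(a), xor_share(b), xor_share(a & b)

    # Each party masks its inputs; d and e are then opened (this is the communication).
    d = (xs[0] ^ sa[0]) ^ (xs[1] ^ sa[1])   # d = x ^ a
    e = (ys[0] ^ sb[0]) ^ (ys[1] ^ sb[1])   # e = y ^ b

    # Local post-processing; only party 0 adds the public term d & e.
    return [((d & e) if i == 0 else 0) ^ (d & sb[i]) ^ (e & sa[i]) ^ sc[i]
            for i in range(2)]

for x in (0, 1):
    for y in (0, 1):
        zs = beaver_and(xor_share(x), xor_share(y))
        assert zs[0] ^ zs[1] == x & y       # shares reconstruct to x AND y
```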
### Detailed Overhead Characterization
To study the bottleneck of GMW-based MPC protocols, we measured the major overheads of running CrypTen on two nodes with an A100 GPU, connected with a 10 Gbps LAN. More details of the setup can be found in Section 5. We ran ResNet18 [45] with CIFAR10 [46] dataset with a batch size of 512. We replaced the max pooling layer with average pooling as in prior works [30, 47] to concentrate on the ReLU overhead. While CrypTen (and our proposed optimizations) can be applied both to unencrypted and encrypted models [12], we assumed that the model is unencrypted and shared among parties, which makes the inference more efficient.
The leftmost bar in Figure 1 shows the measured overhead breakdown. First, we can see that the numbers are already quite efficient -- finishing an inference of 512 samples in only 26.82 seconds (19.1 samples/s) -- thanks to CrypTen's
efficient GPU support. However, the overhead is still significant. Especially, it can be observed that 93% of the overhead comes from ReLU layers. As we will show in Section 5, HummingBird reduces the total communication by 2.68-8.76\(\times\), resulting in up to 8.64\(\times\) end-to-end speedup (Figure 1).
Figure 3 further breaks down the large communication overhead incurred by the ReLU layer into different components. **Circuit** refers to the circuit adder explained in Section 2.2 during the A2B conversion (82.76%). Specifically, the AND operation inside the circuit adder incurs communication. **Mult** refers to the multiplication shown in Equation 2 that is done at the end between the secret share and the DReLU output (6.9%). **B2A** refers to the B2A conversion of the 1-bit DReLU output. Unlike the A2B counterpart that performs \(N\)-bit to \(N\)-bit conversion, B2A converts only one bit (indicating the sign) and is much cheaper (3.45%). **Others** are AND operations happening inside A2B other than what is captured by **Circuit** (6.9%). Evidently, the vast majority of the communication comes from the circuit adder during A2B conversion.
By reducing the number of bits used in DReLU, HummingBird directly optimizes **Circuit**, as its communication overhead is \(O(NlogN)\) with \(N\) bits (Section 2.2). HummingBird's optimization additionally improves **Others**, and HummingBird's efficient bitpacking library (Section 4.2) also accelerates **B2A**. **Mult** cannot be optimized with HummingBird.
## 3 Approximating DReLU with a Subset of Bits
The core idea of our optimization is to only use a small fraction of the bits in the secret shares to evaluate the sign of the secret (_i.e._, DReLU). Specifically, we will show that discarding a certain number of the most- and least-significant bits still allows for correctly estimating the sign. In other words, for a properly chosen \(k\) and \(m\) (\(k\geq m\)), only using \(\langle x\rangle^{Q}[k:m]\) to estimate DReLU still gives the correct sign most of the time. We leverage this fact and propose to use the following approximate equation instead of the exact Equation 2:
\[\text{ReLU}(\langle x\rangle^{Q})\approx\langle x\rangle^{Q}\times\text{DReLU} (\langle x\rangle^{Q}[k:m]). \tag{3}\]
Figure 4 summarizes the proposed approximation, where our unique components are highlighted in blue. For the GMW protocol, the approximation significantly improves the DReLU complexity from the original \(O(NlogN)\) with \(N\) bits into \(O((k-m)log(k-m))\), where \(k-m<<N\). The approximation will also benefit any other protocols whose DReLU overhead decreases with the number of bits [11, 13, 14, 18, 19, 32, 35].
### Correctness of the Approximate Algorithm
In this section, we first explain how the approximate algorithm works in more detail with an example. Then, we theoretically show that the ReLU results stay mostly unchanged if \(k\) and \(m\) are properly selected; in fact, we will show that the result becomes equivalent to performing a magnitude-based activation pruning after performing exact ReLU.
#### 3.1.1 Example Execution
We show how the approximate algorithm can still generate a mostly-correct result with an example in Figure 4. In this example, the user wants to evaluate ReLU on her secret input \(x=9\). The user first generates secret shares \(\langle x\rangle^{Q}=\{47,\ -38\}\) and sends each share to different parties, \(P_{0}\) and \(P_{1}\). Note that \(47-38=9\) retrieves the original secret value. Without our optimization, DReLU takes the two secret share values directly as an input and outputs secret shares that indicate the original secret's sign. As the secret (\(x=9\)) is positive in our example, the output will be \(\langle 1\rangle^{Q}\).
In the approximated algorithm, instead of using the shares (47 and -38) directly, each party extracts bits from \(k-1\) to \(m\) (highlighted in green for \(k=5\), \(m=2\)) and creates new secret shares \(\langle x\rangle^{Q}[k:m]=\{3,-2\}\). Note that the bit extraction can be done locally. The new secret shares can be considered as secret shares of \(3-2=1\) in a smaller ring of size \(2^{3}=8\). While the values of the secret shares and the secret value the shares encode all changed significantly (47 \(\rightarrow\) 3, -38 \(\rightarrow\) -2, 9 \(\rightarrow\) 1), note that the sign of the secret value (9 and 1) did not change. As the secret is still positive, DReLU will still output \(\langle 1\rangle^{Q}\), and the approximated ReLU result in this example will be _exactly the same_ with the precise output.
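As a purely illustrative aside (not part of the protocol), the example can be replayed with plain Python integers; the snippet below uses an 8-bit ring for compactness (CrypTen's actual ring is \(2^{64}\)) and reconstructs secrets only to check the outcome, which a real MPC execution never does.

```python
def slice_share(share, k, m, N):
    """Locally keep bits m..k-1 of a share living on a ring of size 2^N."""
    return ((share % (1 << N)) >> m) & ((1 << (k - m)) - 1)

def signed(v, n):
    """Interpret v as an n-bit two's-complement value."""
    return v - (1 << n) if v >= (1 << (n - 1)) else v

N, k, m = 8, 5, 2
shares = [47, -38]                         # secret x = 47 - 38 = 9

small = [slice_share(s, k, m, N) for s in shares]
print(small)                               # [3, 6], i.e., {3, -2} on the 2^3 ring

x_small = signed(sum(small) % (1 << (k - m)), k - m)
print(x_small >= 0)                        # True: DReLU still sees a positive secret
```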
The reason why the approximation works at a high level is (roughly) because the DReLU result does not change as long as the inequality relationship between the secret shares stays the same. For example, \(\langle x\rangle^{Q}=\{47,-38\}\) results in a DReLU output of \(\langle 1\rangle^{Q}\) because the positive secret share's absolute value (47) is larger than the negative share's (38). This relationship still holds even if we apply, _e.g._, modulo of 32 (equivalent to dropping high-order bits) or division by 4 (similar to dropping low-order bits) to both shares. In the next
Figure 4: Summary of the proposed approximate ReLU calculation. Our unique contributions are highlighted in blue.
Figure 3: Communication incurred by each part of ReLU.
section, we provide formal proof of this insight.
#### 3.1.2 Theoretical Analysis
In this section, we theoretically prove that the approximate ReLU result is equivalent to magnitude-based activation pruning after performing exact ReLU, with a properly-chosen \(k\) and \(m\) values. Our proof is in two steps: we first prove that (1) removing the \(k\)-th and higher bits from a secret share does not impact the output of DReLU with a carefully-chosen \(k\); then, we prove that (2) removing \(m\) low-order bits of a secret share is equivalent to magnitude-based activation pruning. We only show the proof for a 2-party case (_i.e._, \(p\in\{0,1\}\)) for simplicity; the proof can be extended to more parties trivially.
**Removing high-order bits.** First, we prove that removing \(N-k\) high-order bits of a secret share (_i.e._, using \(\langle x\rangle^{Q}[k:0]\) instead) does not change the DReLU output, if \(k\) is selected such that \(-2^{k-1}\leq x<2^{k-1}\) holds for all \(x\). The high-level idea of the proof is that \(\langle x\rangle^{Q}[k:0]\) can be seen as secret shares of \(x[k:0]\) in \(\mathbb{Z}/2^{k}\mathbb{Z}\), and hence the DReLU result will be the same if the most significant bit (MSB; sign bit) of \(x[k:0]\) is the same as the MSB of \(x\).
**Theorem 1**.: _Consider arithmetic secret shares of \(x\in\mathbb{Z}/Q\mathbb{Z}\), \(\langle x\rangle_{p}^{Q}\in\mathbb{Z}/Q\mathbb{Z}\) (\(p\in\{0,1\}\)). Assume \(\langle x\rangle_{p}^{Q}\) is represented in an \(N\)-bit signed integer representation. For \(k<N\), \(\mathrm{DReLU}(\langle x\rangle^{Q})=\mathrm{DReLU}(\langle x\rangle^{Q}[k:0])\) if \(-2^{k-1}\leq x<2^{k-1}\)._
Proof.: \(\langle x\rangle^{Q}[k:0]\) can be seen as secret shares of \(x[k:0]\) in \(\mathbb{Z}/2^{k}\mathbb{Z}\). This is because \(\langle x\rangle^{Q}[k:0]\equiv\langle x\rangle^{Q}\pmod{2^{k}}\) and \(x[k:0]\equiv x\pmod{2^{k}}\), and thus, applying \(\pmod{2^{k}}\) to both sides of
\[\langle x\rangle_{0}^{Q}+\langle x\rangle_{1}^{Q}\equiv x\pmod{2^{N}}\]
results in
\[\langle x\rangle_{0}^{Q}[k:0]+\langle x\rangle_{1}^{Q}[k:0]\equiv x[k:0] \pmod{2^{k}}.\]
Applying DReLU to \(\langle x\rangle^{Q}[k:0]\) on a smaller ring \(\mathbb{Z}/2^{k}\mathbb{Z}\) will simply output secret shares indicating whether its secret (\(x[k:0]\)) is positive. Thus, \(\mathrm{DReLU}(\langle x\rangle^{Q})=\mathrm{DReLU}(\langle x\rangle^{Q}[k:0])\) if and only if their secrets (\(x[k:0]\in\mathbb{Z}/2^{k}\mathbb{Z}\) and \(x\in\mathbb{Z}/Q\mathbb{Z}\)) have the same sign bits, _i.e._, \(x[k-1]=x[N-1]\). This is always the case if (but not only if) \(-2^{k-1}\leq x<2^{k-1}\).
In MPC frameworks, \(N\) is usually chosen to be much larger than what is needed to represent the range of \(x\) to avoid wrap-around during arithmetic computation [28, 12]. For example, CrypTen [12] uses \(N=64\), while a floating point representation \(x_{f}\) is converted into an integer ring element with \(x=\lfloor 2^{16}x_{f}\rfloor\). As intermediate activations (\(x_{f}\)) in a DNN are usually close to zero, \(x=\lfloor 2^{16}x_{f}\rfloor\) only occupies a small subset of the full range represented by \(N=64\). For the dataset we studied, \(k\) between 18-22 was sufficient for \(-2^{k-1}\leq x<2^{k-1}\) to always hold. The result indicates that 42-46 high-order bits (accounting for **66-72%**) of the secret shares can be safely discarded without causing **any** mathematical error. Unlike linear layers, DReLU does not cause any wrap-around errors and does not need to operate on a large ring.
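Theorem 1 is easy to check numerically; the sketch below (illustrative only, and reconstructing values that the protocol itself never opens) drops the high-order bits of random shares and confirms that the sign seen on the reduced ring never changes as long as \(-2^{k-1}\leq x<2^{k-1}\):

```python
import random

N, k = 16, 6                       # full ring 2^16, reduced ring 2^6
Q, Qk = 1 << N, 1 << k

def msb(v, n):
    """Sign bit of an n-bit two's-complement value."""
    return (v >> (n - 1)) & 1

for _ in range(100_000):
    x = random.randrange(-(Qk // 2), Qk // 2)      # -2^(k-1) <= x < 2^(k-1)
    r = random.randrange(Q)
    s0, s1 = (x + r) % Q, (-r) % Q                 # shares on the full ring

    full    = msb((s0 + s1) % Q, N)                # sign DReLU would see originally
    reduced = msb((s0 % Qk + s1 % Qk) % Qk, k)     # sign after dropping N-k high bits
    assert full == reduced
```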
**Removing low-order bits.** Next, we show that discarding \(m\) low-order bits in secret shares (_i.e._, using \(\langle x\rangle^{Q}[N:m]\)) before DReLU is equivalent to applying magnitude-based activation pruning after ReLU.
**Theorem 2**.: _Consider arithmetic secret shares of \(x\): \(\langle x\rangle_{p}^{Q}\in\mathbb{Z}/Q\mathbb{Z}\) in an \(N\)-bit signed integer representation. If each party removes \(m\) low-order bits of the secret shares and uses \(\langle x\rangle_{p}^{Q}[N:m]\in\mathbb{Z}/2^{N-m}\mathbb{Z}\) for DReLU evaluation, the ReLU output is equivalent to performing ReLU precisely and zeroing-out values below \(2^{m}\)._
Proof.: Note that \(\langle x\rangle^{Q}[N:m]=\lfloor\frac{\langle x\rangle^{Q}}{2^{m}}\rfloor\). Consequently,
\[\langle x\rangle_{0}^{Q}[N:m]+\langle x\rangle_{1}^{Q}[N:m]\] \[\equiv\lfloor\frac{\langle x\rangle_{0}^{Q}}{2^{m}}\rfloor+ \lfloor\frac{\langle x\rangle_{1}^{Q}}{2^{m}}\rfloor\] \[\equiv\begin{cases}\lfloor\frac{x}{2^{m}}\rfloor\pmod{2^{N-m}}, &\text{or}\\ \lfloor\frac{x}{2^{m}}\rfloor-1\pmod{2^{N-m}}.\end{cases}\]
In other words, \(\langle x\rangle^{Q}[N:m]\in\mathbb{Z}/2^{N-m}\mathbb{Z}\) are secret shares of either \(\lfloor\frac{x}{2^{m}}\rfloor\) or \(\lfloor\frac{x}{2^{m}}\rfloor-1\) in \(\mathbb{Z}/2^{N-m}\mathbb{Z}\). The sign of the former is always the same as \(x\) (here, for simplicity we consider zero as positive, which does not make any difference for ReLU), so applying DReLU yields the same sign as \(x\). The latter can cause the sign to flip if (1) \(0<x<2^{m}\) (\(\lfloor\frac{x}{2^{m}}\rfloor\) smaller than 1), or (2) \(\lfloor\frac{x}{2^{m}}\rfloor-1<-2^{N-m-1}\) (underflow).
In CrypTen, \(x\)'s range is usually much smaller compared to \(N\) for the second case to happen. The first case can actually cause an incorrect result, as DReLU will incorrectly consider secrets in \(0<x<2^{m}\) as negative and output secret shares of zero, which will cause the corresponding ReLU result to become zero. The behavior is equivalent to magnitude-based activation pruning with a threshold \(2^{m}\).
As many prior works [23, 24, 25, 26, 27] empirically showed, magnitude-based activation pruning degrades accuracy gracefully when used in moderation. Thus, a careful choice of \(m\) is expected to not harm the model accuracy significantly, having similar effects with prior works on magnitude-based pruning.
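The pruning interpretation of Theorem 2 can be checked numerically as well; the sketch below (again purely illustrative, reconstructing secrets only to compare outcomes) verifies that the approximate ReLU agrees with exact ReLU except that positive values below \(2^{m}\) may be zeroed out:

```python
import random

N, m = 16, 4
Q = 1 << N

def drelu_on_reduced_ring(s0, s1):
    v = ((s0 >> m) + (s1 >> m)) % (1 << (N - m))    # shares with m low bits dropped
    return 0 if (v >> (N - m - 1)) & 1 else 1       # 1 iff the reduced secret looks >= 0

for _ in range(100_000):
    x = random.randrange(-2**10, 2**10)
    r = random.randrange(Q)
    s0, s1 = (x + r) % Q, (-r) % Q

    approx = x * drelu_on_reduced_ring(s0, s1)      # Equation 3 (sign taken on fewer bits)
    exact  = max(x, 0)

    if x <= 0 or x >= (1 << m):
        assert approx == exact                      # unchanged outside (0, 2^m)
    else:
        assert approx in (0, x)                     # small positives may be pruned to 0
```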
**Comparison with traditional compression.** What we do is similar in spirit to compression or quantization methods [48] in that we aim to reduce the number of bits used. However, traditional compression/quantization aims to make the value of the compressed result close to the original value, _i.e._, \(\mathrm{Compress}(\langle x\rangle_{p}^{Q})\approx\langle x\rangle_{p}^{Q}\); as \(\langle x\rangle_{p}^{Q}\) are random values fully occupying the \(N\)-bit representation space (64-bit in CrypTen [12]), they cannot be compressed much without significantly distorting the result. In contrast, our proposed method does not preserve the values of the secret shares at
all (\(\langle x\rangle_{p}^{Q}[k:m]\neq\langle x\rangle_{p}^{Q}\)), but it instead ensures that the DReLU result would be similar before and after the bits are discarded (\(\text{DReLU}(\langle x\rangle^{Q}[k:m])\approx\text{DReLU}(\langle x\rangle^{Q})\)).
**Applicability to other protocols.** The proofs of Theorems 1 and 2 are not confined to GMW or CrypTen, as they do not assume any particular implementation of DReLU. Thus, they are directly applicable to any protocol that calculates DReLU to evaluate ReLU (_i.e_., uses Equation 1) and experiences a DReLU overhead that increases with the ring size. Prior works such as [13, 18, 19, 32, 35, 11] fall into this category.
The search space grows exponentially with the number of ReLU layers and quickly becomes intractable. With \(l\) ReLU layers and \(N\) possible bits that can be assigned to each layer, the combinations of possible bit assignments are already \(O(N^{l})\). To make matters worse, each ReLU layer has to choose \(k\) and \(m\) values that satisfy the number of assigned bits. For example, if one decides to retain 4 bits for all ReLU layers, each ReLU layer has to choose \(k\) and \(m\) from \((k,m)\in\{(4,0),(5,1),\ \ldots,\ (64,60)\}\), resulting in a total \(O(N^{l})\) possible choices. This leads to a combined \(O(N^{2l})\) search complexity.
HummingBird-\(b\) enumerates all possible bit assignments starting from the first ReLU layer in a depth-first-search (DFS) manner (Figure 6). To navigate through the exponential search space within a reasonable amount of time, HummingBird uses several optimizations: using locally-optimal \(k\) and \(m\) values, early stopping for unlikely paths, and allowing a coarser search.
First, to avoid the \(O(N^{l})\) complexity of finding a global optimum \(k\) and \(m\) values, HummingBird uses a local optimum for each layer instead. When a certain number of bits is assigned for a layer, the search engine immediately fixes the \(k\) and \(m\) values for all the other layers and finds the \(k\) and \(m\) values for the particular layer that gives the best validation accuracy. This is done by (1) fixing \(k\) and \(m\) with the already-found values for previous layers that already have been searched, (2) using \(k=N\), \(m=0\) (_i.e._, no bit discarded) for successive layers that haven't been searched yet, and (3) linearly enumerating all the possible \(k\) and \(m\) values that meet the assigned number of bits for the current layer. The process essentially finds a locally-optimal \(k\) and \(m\) for each layer while optimistically assuming that successive layers will not degrade the accuracy further. We empirically saw that the heuristic works well.
Even when we use the locally-optimal \(k\) and \(m\), navigating all the possible bit assignments with DFS still incurs \(O(N^{l})\) complexity. To further make the search tractable, we prune the search space early if a particular branch in the DFS is likely to yield suboptimal configurations. After assigning a certain number of bits to a layer, the search engine evaluates an optimistic accuracy to find a locally-optimal \(k\) and \(m\) (discussed in the previous paragraph). We immediately stop exploring branches whose optimistic accuracy is already worse than a predefined threshold (Figure 6, Early stop 1) or the best candidate found so far (Early stop 2). The insight is that if the optimistic accuracy is already bad, the actual accuracy of any configurations from this branch cannot be good. We also track the total number of bits assigned to each layer and immediately stop when it exceeds the budget (Early stop 3).
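A stripped-down sketch of this search loop is shown below (our own illustrative Python; `evaluate_accuracy`, the budget bookkeeping, and the per-layer granularity are stand-ins for HummingBird's actual search engine rather than its real interface):

```python
N = 64  # full ring width

def search(num_layers, total_bit_budget, acc_threshold, evaluate_accuracy):
    """DFS over per-layer bit assignments. A config is a list of (k, m) per layer;
    (0, 0) marks a layer whose ReLU is replaced by the identity (zero bits kept)."""
    best = {"acc": -1.0, "config": None}

    def pad(partial):
        # Layers not yet searched keep all bits (k = N, m = 0).
        return partial + [(N, 0)] * (num_layers - len(partial))

    def local_best_km(partial, bits):
        # Locally-optimal (k, m): scan every window of width `bits` for the current
        # layer while earlier layers keep their chosen (k, m) and later layers stay untouched.
        accs = [evaluate_accuracy(pad(partial + [(m + bits, m)])) for m in range(N - bits + 1)]
        m_best = max(range(len(accs)), key=accs.__getitem__)
        return accs[m_best], (m_best + bits, m_best)

    def dfs(partial, used_bits):
        if used_bits > total_bit_budget:                 # Early stop 3: budget exceeded
            return
        if len(partial) == num_layers:
            acc = evaluate_accuracy(partial)
            if acc > best["acc"]:
                best["acc"], best["config"] = acc, list(partial)
            return
        for bits in range(N, -1, -1):                    # number of bits kept for this layer
            if bits == 0:
                dfs(partial + [(0, 0)], used_bits)       # ReLU becomes an identity layer
                continue
            opt_acc, km = local_best_km(partial, bits)
            if opt_acc < acc_threshold:                  # Early stop 1: below the threshold
                continue
            if opt_acc < best["acc"]:                    # Early stop 2: worse than the best so far
                continue
            dfs(partial + [km], used_bits + bits)

    dfs([], 0)
    return best["acc"], best["config"]
```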
For additional efficiency, we allow performing a search at a larger granularity by grouping multiple ReLUs and making them share the same parameters. For models with a repeating block structure (_e.g._, ResNet [45]), a natural choice is to group the ReLUs within the same block. All these optimizations (using locally-optimal \(k\) and \(m\), early stopping, and ReLU grouping) combined together allow our search engine to find a good configuration usually within several minutes, making the search highly practical (Section 5.3).
When zero bit is assigned to a layer, that ReLU layer becomes an identity layer (_i.e._, input = output). HummingBird can be seen as a generalization of ReLU culling [16] which replaces a ReLU layer with an identity layer for performance.
#### 4.1.3 Model Finetuning
After we find a good configuration, we go through a model finetuning process to regain some of the accuracy drops. The finetuning process is simply done by re-training the model for a small number of epochs with the same training data, while using the approximate ReLU layers with the found parameters. The finetuning process helps the rest of the model to adapt to the approximate ReLU layers. We found that finetuning was not necessary for budgets near 1/8 and above as the approximation does not degrade the accuracy much; however, finetuning was essential for aggressive budgets below 1/8, where non-negligible accuracy drops occurred (Section 5.4).
Figure 6: Summary of the search algorithm of HummingBird. The search enumerates all possible bit assignments in a DFS manner. For each bit assignment, the locally-optimal \(k\) and \(m\) values are selected. Searching a particular path is immediately stopped when the optimistic accuracy of that path is already worse than the predefined threshold (Early stop 1) or the previous best accuracy found so far (Early stop 2), or when the search budget is exceeded (Early stop 3).
### Online Phase: Efficient DReLU on a Smaller Ring
Using the parameters (\(k\) and \(m\)) found for each ReLU layer, HummingBird uses the approximate ReLU in Equation 3 during online MPC inference. Note that \(k\) and \(m\) for each layer are selected during the offline phase using the validation data and are fixed during the online phase, not leaking any additional information about the online user data.
With the reduced number of bits, HummingBird speeds up the DReLU process, especially the circuit adder (Section 2.3), with mainly two optimizations. First, it runs a circuit of depth \(O(\lceil log(k-m)\rceil)\) instead of \(O(logN)\). Second, it efficiently packs and unpacks the subset of bits into a 64-bit tensor before and after each communication to reduce the overhead. While the circuit depth change only impacts the circuit adder overhead (**Circuit**; Section 2.3), the reduced communication due to bitpacking also improves **Mult** and **B2A** from Section 2.3.
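The packing idea itself is simple; the following pure-Python sketch (not HummingBird's optimized GPU kernel) fits as many \((k-m)\)-bit slices as possible into each 64-bit word before a communication round:

```python
def pack(values, b):
    """Pack b-bit values into 64-bit words (floor(64 / b) values per word)."""
    per_word, mask, words = 64 // b, (1 << b) - 1, []
    for i in range(0, len(values), per_word):
        w = 0
        for j, v in enumerate(values[i:i + per_word]):
            w |= (v & mask) << (j * b)
        words.append(w)
    return words

def unpack(words, b, n):
    per_word, mask, out = 64 // b, (1 << b) - 1, []
    for w in words:
        for j in range(per_word):
            if len(out) == n:
                return out
            out.append((w >> (j * b)) & mask)
    return out

vals = [5, 0, 63, 17, 42, 9]                         # e.g., 6-bit share slices (k - m = 6)
assert unpack(pack(vals, 6), 6, len(vals)) == vals   # all six values fit in a single word
```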
We implement HummingBird's online phase as an extension to the popular CrypTen [12] codebase with Python. The added code accounts for less than 2% of the total execution time.
## 5 Evaluation Results
In this section, we answer the following questions:
* How much faster is HummingBird in different settings?
* How much communication is reduced?
* What are the major overheads of HummingBird?
* How long is the search time?
* How important is each component of HummingBird?
### Evaluation Setup
**System setup.** We evaluate HummingBird in several representative setups. The first setup runs two parties on two nodes connected with a 10 Gbps LAN, each with one A100 GPU. The second setup is otherwise identical but uses a less powerful V100 GPU. Finally, the third setup runs two parties on two A100 GPUs on a single node; it represents an ideal setup where the network bandwidth is much higher. We do not model the overhead of generating Beaver triplets, assuming they are generated and stored offline [30] or sent by a trusted third-party (TTP) asynchronously. Unlike in a client-server MPC setup where the clients have limited storage [30], we assume the servers have enough storage to hold pre-generated triplets.
**Models and datasets.** Following prior works [17, 30, 47], we evaluated HummingBird with ResNet18 and ResNet50 [45], models that are easily supported with MPC with minimal modifications. Popular models like MobileNet [49] have components not suitable for MPC (_e.g._, ReLU6) and are not commonly used. We evaluated three different datasets: CIFAR10 [46], CIFAR100 [46], and TinyImageNet [50]. For CIFAR10, we replaced the max pooling with average pooling, following [30, 47]. For the rest, we simply removed max pooling (as average pooling did not work well), following [17]. The baseline accuracy for each model/dataset is summarized in Table 1; the numbers align with prior works [17].
HummingBird improves the end-to-end performance significantly. Without adding any errors (HummingBird-eco), HummingBird improves the average performance by 2.49\(\times\) and 1.90\(\times\) on A100 and V100, respectively. When some accuracy degradation is tolerated, the average performance improvement becomes 4.93\(\times\) and 3.04\(\times\) (-0.3%; HummingBird-8/64), and 5.34\(\times\) and 3.26\(\times\) (-1.2%; HummingBird-6/64), for A100 and V100, respectively.
The performance improvement is smaller on the less powerful V100 GPUs because the linear-layer computation (_e.g._, convolution), which HummingBird does not accelerate, is slower on V100. The discrepancy in performance improvement becomes larger with a tighter budget, as communication is no longer the sole bottleneck and computation time becomes more important.
**Performance improvement on different networks.** Figure 9 shows the average speedup across all the models/benchmarks for different network setups. High-BW represents an ideal setup with very high bandwidth; it is measured on two GPUs on a single node, connected with an up to 16 Tbps link [51]. LAN reports a setup where two nodes, each with a GPU, are connected with a 10 Gbps LAN. WAN reports an analytical projection assuming a 352 Mbps bandwidth, a WAN bandwidth number used in prior work [15]. To analyze the end-to-end performance in the WAN setup, we separately measured the communication time from the High-BW setup and scaled it according to the assumed bandwidth.
Figure 9 shows that, as expected, HummingBird's performance benefit becomes more notable as the network becomes more limited. Compared to the 2.49-5.34\(\times\) speedup on LAN, the High-BW setup enjoyed a smaller speedup of 2.03-4.12\(\times\), while the WAN setup enjoyed a larger speedup of 2.67-8.64\(\times\). The High-BW and LAN setups did not show a significant difference although their bandwidths differed by multiple orders of magnitude, because HummingBird was not able to fully utilize the bandwidth of High-BW anyway -- even though the High-BW setup could support up to 16 Tbps, the usage did not exceed 20 Gbps.
**Communication.** Figure 11 shows the total bytes communicated (bar plot) and the number of communication rounds (line plot). On average, HummingBird reduces the number of communication rounds by 1.12-1.56\(\times\), and reduces the total bytes communicated by 2.68-8.76\(\times\). Communication does not decrease proportionally with the budget and starts to saturate because there are communications that cannot be reduced by HummingBird (_e.g._, **Mult** from Figure 3).
**Overhead breakdown.** Figure 10 shows the overhead breakdown of CrypTen and HummingBird-8/64, both on A100 and V100 GPUs. The breakdown clearly shows that HummingBird reduces the communication overhead down to a point where the computation overhead becomes non-negligible again. With HummingBird-8/64, the portion of the communication overhead decreased from 93% to 78% (A100) and from 78% to 39% (V100), respectively. For high-performance GPUs like A100, the major bottleneck is still communication (78%); however, for less-powerful GPUs like V100, HummingBird shifts the major bottleneck to computation.
Figure 8: On V100 GPUs, HummingBird improves the end-to-end performance by 1.55–2.22\(\times\) (HummingBird-eco), 2.57–3.67\(\times\) (HummingBird-8/64), and 2.66–4.03\(\times\) (HummingBird-6/64) over CrypTen. Any accuracy degradation is shown above the bar.
Figure 10: Overhead breakdown of the baseline CrypTen and Hummingbird-8/64 on A100 and V100 GPUs. HummingBird reduces the communication overhead to a degree where the computation overhead is no longer negligible.
Figure 9: Speedup of HummingBird on different network setups. The bar shows the geometric mean across all the benchmarks. On WAN, HummingBird’s speedup reaches 2.67–8.64\(\times\).
The result also clearly shows why HummingBird's speedup is larger for A100 compared to V100. On V100, the computation overhead, which HummingBird does not accelerate, becomes the major bottleneck. With HummingBird, communication is not the sole bottleneck anymore, and future work would have to optimize both the computation and the communication to gain meaningful performance improvements.
### HummingBird Search Overhead
Table 2 summarizes the search time of HummingBird for different setups. In most cases, HummingBird was able to find a satisfactory configuration in a few minutes. When the dataset and the model were large (_e.g._, TinyImageNet with ResNet50), the search time became longer, sometimes reaching an hour. The search time can be further reduced by using a smaller validation set or using a coarser ReLU group.
### Ablation Studies
**Effectiveness of the search engine.** HummingBird's search engine finds bits to discard (_i.e._, \(k\) and \(m\)) for each ReLU group. A simple alternative approach would be to use the same \(k\) and \(m\) for all the ReLU layers. We found that such a naive alternative does not work well, incurring more than an 8% accuracy drop for the same search budget compared to HummingBird. Figure 12 visualizes the bits that are discarded (gray hatched) or retained (green) among the 64 bits for the two approaches with a budget of 8/64. Unlike the naive approach that discards the same bits for all the ReLU layers (Figure 12, left), HummingBird flexibly chooses to discard different amounts of bits in different positions (Figure 12, right), sometimes discarding more bits (G3) and sometimes fewer (G4). As different ReLU layers have different importance and characteristics, the search engine is crucial for achieving high accuracy.
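For illustration only, the sketch below shows one plausible greedy per-group search loop; it is not necessarily HummingBird's actual engine, and the `evaluate` helper (validation accuracy of a candidate configuration) and the candidate list are hypothetical.

```python
def search_group_configs(model, relu_groups, candidates, evaluate):
    """Greedily pick a (k, m) discard choice per ReLU group.

    `candidates` is assumed to already respect the retained-bit budget
    (e.g., 8 of 64 bits); `evaluate` returns validation accuracy for a config.
    """
    config = {}
    for group in relu_groups:                     # e.g., G1..G4 in Figure 12
        scored = [(evaluate(model, {**config, group: cand}), cand)
                  for cand in candidates]
        best_acc, best_cand = max(scored)         # keep the best-scoring choice
        config[group] = best_cand
    return config
```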
**Effectiveness of finetuning.** While finetuning was not necessary in cases where the search budget was reasonably large (_e.g._, HummingBird-8/64) and the accuracy degradation was already small, we found finetuning to be crucial when the search budget was small (_e.g._, HummingBird-6/64) and non-negligible accuracy degradation occurred after discarding bits. Table 3 shows the accuracy before and after finetuning for HummingBird-6/64. Finetuning improves the model accuracy by 0.95-7.05% depending on the dataset and the model.
Table 2: HummingBird's configuration search time.

| Dataset | Model | Search budget 8/64 | Search budget 6/64 |
| --- | --- | --- | --- |
| CIFAR10 | ResNet18 | 5m 34s | 4m 28s |
| CIFAR10 | ResNet50 | 6m 10s | 5m 47s |
| CIFAR100 | ResNet18 | 5m 37s | 4m 19s |
| CIFAR100 | ResNet50 | 18m 32s | 18m 34s |
| TinyImageNet | ResNet18 | 13m 1s | 11m 34s |
| TinyImageNet | ResNet50 | 42m 3s | 1h 8m |
Figure 11: Normalized bytes that need to be communicated (bar) and the number of communication rounds (line). HummingBird reduces the number of communication rounds by 1.12–1.56\(\times\) and total communicated bytes by 2.68–8.76\(\times\).
Table 3: Impact of finetuning (FT) on HummingBird-6/64.

| Dataset | Model | Before FT | After FT |
| --- | --- | --- | --- |
| CIFAR10 | ResNet18 | 90.09% | 91.04% |
| CIFAR10 | ResNet50 | 87.6% | 91.12% |
| CIFAR100 | ResNet18 | 73.04% | 75.57% |
| CIFAR100 | ResNet50 | 72.45% | 78.49% |
| TinyImageNet | ResNet18 | 60.21% | 64.79% |
| TinyImageNet | ResNet50 | 59.82% | 66.47% |
Figure 12: Retained (green) and discarded (grey hatched) bits for each ReLU group under different search strategies. The plot shows that HummingBird's search engine chooses different amounts and positions of bits for different ReLU groups.
## 6 Additional Related Works
### Alternative Approaches to Private Inference
There are multiple alternative approaches to realize private inference. Here, we briefly discuss those alternatives.
**Trusted execution environment (TEE).** TEEs [52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69] provide hardware-level protection that (1) allows a remote party to authenticate the software that is running on the hardware and (2) ensures the confidentiality and integrity of code and data inside the TEE. Users can send their private data to a remote server's TEE and run inference or training, while ensuring their data stay confidential. Following the initial proposals from academia [56, 57, 58], most major vendors have TEEs in their commercial products, including Intel SGX [52], ARM TrustZone [53], AMD SEV [54], RISC-V Keystone [55], and NVIDIA's recently announced confidential computing feature [70]. Moreover, TEEs for emerging heterogeneous accelerators are also being actively proposed [71, 72, 73, 74, 75, 76, 77, 78, 79, 80]. TEEs are efficient because they eliminate the need for any expensive HE or MPC operations, and are widely available in commodity off-the-shelf hardware. However, the security assurance from a TEE is generally considered to be weaker than cryptographic protection from HE/MPC. Although data inside a TEE should ideally be secure, TEE implementations may be vulnerable due to hardware/software bugs [81, 82] or side channels [83].
**Fully homomorphic encryption (FHE).** FHE is a cryptographic technique that allows certain computations directly on an encrypted ciphertext. Using FHE, servers can collect user data in a ciphertext form and run computation (_e.g._, DNN inference) directly on the ciphertext [84]. While the first HE schemes and systems were very slow, subsequent works accelerated HE-based private inference heavily on CPUs [85], GPUs [86, 87], FPGAs [88, 89, 90], and custom accelerators [91, 92, 93]. Similar to MPC, non-linear layers such as ReLU incur high overhead and are often approximated with high-degree polynomial functions [92]. While recent advances in algorithms and hardware accelerators significantly reduced the latency of FHE, the throughput is still limited: using CIFAR10 and ResNet20, recent studies report a throughput of 8 samples/s on a custom accelerator [93] and 0.7 samples/s on an A100 GPU [87], which are orders of magnitude less than what HummingBird achieves.
**Instance encoding.** Instance encoding [94] refers to a general concept where the client encodes the input in a statistical way, such that reconstructing the original input is hard while some useful downstream inference or training is still possible with the encoding. Similar concepts have been explored under many different names across different communities, including split inference [95, 96, 97, 98, 99, 100], split learning [101, 102, 103], vertical federated learning (vFL) [104, 105, 106], learnable encryption [107, 108, 109], private image publication [110, 111], _etc_. Instance encoding is usually efficient computation-wise, as no cryptographically-heavy operation is needed. However, these approaches lack a strong theoretical guarantee on their claimed privacy-enhancing properties [94, 112], unlike MPC or FHE which are shown to be cryptographically secure. A few recent studies provided a theoretical analysis of privacy through instance encoding, using tools like (metric) differential privacy [110, 111], Fisher information leakage [113, 114], or PAC theory [115]. Still, the theoretical guarantees are much weaker compared to MPC. For example, although instance encoding can make input reconstruction more difficult, it still leaks a non-trivial amount of information about private inputs.
### Additional Related Works on MPC
Section 2.1 summarizes popular client-server and multi-server MPC protocols. Many of these works simultaneously introduce orthogonal approaches to accelerate ReLU, which are often complementary to ours. Some of the popular approaches include replacing ReLU with an identity function [16, 17], replacing ReLU with a polynomial [8, 31, 116, 117, 118], and using a neural architecture search to find a model with less number of ReLUs [16, 20, 21]. As most of these works were not able to fully replace all the ReLUs, HummingBird will still be beneficial to these systems as well. Other works focused on applying MPC to more complex models other than CNNs, including Transformers [42, 43] and recommendation models [119]. HummingBird can still be applied to these works when they use ReLU [42, 119].
## 7 Conclusion
MPC-based private inference can allow users to run large models hosted on a remote server without worrying about their private data being leaked. However, running inference using MPC is very slow, due to the significant communication overhead it incurs. A majority (\(>93\%\)) of the overhead comes from the ReLU layers.
In this work, we theoretically show that most of the bits in the secret shares can be removed during ReLU evaluation with little to no impact on accuracy for many popular protocols. Leveraging the finding, we propose HummingBird, an efficient MPC framework that uses a reduced number of bits during the ReLU evaluation. HummingBird carefully selects the bits to retain for each layer and uses an efficient runtime library, reducing the communication overhead by up to \(8.76\times\) and achieving up to \(8.64\times\) end-to-end speedup over CrypTen.
|
2309.12781 | Multi-Agent Digital Twinning for Collaborative Logistics: Framework and
Implementation | Collaborative logistics has been widely recognised as an effective avenue to
reduce carbon emissions by enhanced truck utilisation and reduced travel
distance. However, stakeholders' participation in collaborations is hindered by
information-sharing barriers and the absence of integrated systems. We, thus,
in this paper address these barriers by investigating an integrated platform
that fosters collaboration through the integration of agents with digital twins.
Specifically, we employ a multi-agent system approach to integrate stakeholders
and physical mobile assets in collaborative logistics, representing them as
agents. We introduce a loosely-coupled system architecture that facilitates the
connection between physical and digital systems, enabling the integration of
agents with digital twins. Using this architecture, we implement the platform
(or testbed). The resulting testbed, comprising a physical environment and a
digital replica, is a digital twin that integrates distributed entities
involved in collaborative logistics. The effectiveness of the testbed is
demonstrated through a carrier collaboration scenario. This paper is among the
earliest few efforts to investigate the integration of agents and digital twin
concepts and goes beyond the conceptual discussion of existing studies to the
technical implementation of such integration. | Liming Xu, Stephen Mak, Stefan Schoepf, Michael Ostroumov, Alexandra Brintrup | 2023-09-22T10:46:45Z | http://arxiv.org/abs/2309.12781v2 | # AgentChat: Multi-Agent Collaborative Logistics for Carbon Reduction
###### Abstract
Heavy Goods Vehicles (HGVs) are the second largest source of greenhouse gas emissions in transportation, after cars and taxis. However, HGVs are inefficiently utilised, with more than one-third of their weight capacity not being used during travel. We, thus, in this paper address collaborative logistics, an effective pathway to enhance HGVs' utilisation and reduce carbon emissions. We investigate a multi-agent system approach to facilitate collaborative logistics, particularly carrier collaboration. We propose a simple yet effective multi-agent collaborative logistics (MACL) framework, representing key stakeholders as intelligent agents. Furthermore, we utilise the MACL framework in conjunction with a proposed system architecture to create an integrated collaborative logistics testbed. This testbed, consisting of a physical system and its digital replica, is a tailored cyber-physical system or digital twin for collaborative logistics. Through a demonstration, we show the utility of the testbed for studying collaborative logistics.
Collaborative Logistics, Multi-Agent System, Carbon Reduction, Cyber-Physical System, Testbed, Digital Twin
## I Introduction
Transportation is the largest source of greenhouse gas (GHG) emissions [1]. Among transportation modes, Heavy Goods Vehicles (HGVs) rank as the second-largest GHG emitter, behind cars and taxis. However, HGVs are currently utilised inefficiently, operating at around 60% of their weight capacity, and approximately 30% of the distance they travel carries no freight [2]. A slight improvement in HGVs' utilisation can lead to substantial and immediate reductions in carbon emissions.
Collaborative logistics [3][4] is an effective pathway to enhance HGVs' utilisation. This approach involves carriers collaborating through coalition to collectively fulfil delivery requests, achieving reduced collective cost and total travel distance through economies of scale. Prior studies have shown that horizontal collaboration can lead to cost reductions ranging from 4% to 46% [3][5].
Although collaboration in the private transportation sector through centralised platforms such as UberPool 1 has achieved widespread adoption, implementing collaboration in the logistics sector remains a challenge. This challenge can be attributed to two main barriers: 1) _Information Asymmetry_: Carriers operating in close proximity may lack information about whether they are transporting goods to similar destinations at similar times. 2) _Lack of Trusted Platforms_: Business secrecy concerns may discourage carriers from sharing data with centralised platforms despite collaboration opportunities. This paper further explores collaborative logistics, adopting a Multi-Agent System (MAS) approach to address these challenges.
Footnote 1: [https://www.uber.com/au/en/ride/uberpool/](https://www.uber.com/au/en/ride/uberpool/)
Although centralised platforms can help mitigate information asymmetry, they may not ensure neutrality and trustworthiness. An ideal solution is a decentralised, distributed platform operated collectively by participants, where planning and decision-making are distributed rather than delegated to central authorities. The MAS approach, commonly used for complex distributed systems, is well-suited for constructing such platforms [6].
To foster collaboration among carriers, a carrier needs to communicate with other potential carriers, often those located nearby. Figure 1 illustrates a simple scenario: a truck with 15 unloaded pallets sends a message to nearby carriers to propose co-loading collaboration. If successful, this collaboration could reduce five trips (T1: one way, T2 and T3: round trips) to just one one-way trip by T1. Such collaborative efforts can significantly enhance HGVs' utilisation and reduce operational costs, contributing to social and ecological good, particularly in view of the current climate emergency.
The main contributions of this paper can be summarised as follows:
* We introduce a simple yet effective agent framework called the Multi-Agent Collaborative Logistics framework (MACL) framework, employing a MAS approach to address collaborative logistics challenges.
Fig. 1: Illustration of a horizontal collaboration scenario.
* We present a system architecture suitable for building hybrid systems, such as cyber-physical systems and supply chain digital twins.
* By utilising the MACL framework and the proposed system architecture, we create an integrated collaborative logistics testbed and showcase its efficacy in facilitating collaborative logistics studies.
The rest of this paper is structured as follows. Section II describes related works. Section III introduces the MACL framework. Section IV details the design and development of the testbed. Section V discusses limitations and implications of this work. Finally, Section VI concludes this paper and briefly describes future work.
## II Related Work
We review related work in this section, including collaborative vehicle routing and MAS approaches.
### _Collaborative Vehicle Routing_
Collaborative logistics, sometimes known as horizontal collaboration in logistics, has garnered increasing attention in the past decade [3][7]. Among the various challenges within collaborative logistics, one of the central problems is the vehicle routing problem (VRP) [8][9], specifically the Collaborative Vehicle Routing Problem (CVRP). In CVRP, carriers collaborate by sharing their delivery requests to optimise routes, thereby reducing total travel distance and collective costs, and achieving environmental objectives [5].
Studies on enabling collaborative vehicle routing can be mainly categorised into either centralised planning or decentralised planning [5]. Centralised planning is characterised by the presence of a central authority with complete information. This distinguishes it from decentralised planning, where collaboration decisions are distributed among participants. In centralised planning methods, various optimisation techniques have been employed in the literature to find optimal collaborations, including metaheuristics [10][11] and greedy heuristics [12]. As observed by Cruijssen et al. [7] and supported by other works, such as [13] and [14], centralised collaborations can yield significant synergies, achieving up to 30% improvement in synergy values, and reduce road congestion and carbon emissions.
Centralised planning often requires gathering extensive collaboration-related data, which may raise privacy and competition concerns among coalition participants [3][5]. Consequently, decentralised planning balances collaboration, privacy, and autonomy. In a decentralised context, participants independently collaborate by selecting appropriate partners or requests, either with or without the involvement of a third party holding partial information. Numerous efforts have been made to facilitate decentralised collaboration, with or without auction mechanisms. Methods without auctions (e.g., [15][16]) tend to handle less complex collaborations compared to auction-based approaches such as [17][18][19][20]. However, even the auction-based approaches are often limited to relatively straightforward auction processes and may not effectively address realistic cases [5]. Since optimisation-based methods may not scale well to larger, real-world problems, reinforcement learning-based techniques (as seen in [4][21][22]) have been introduced to search for routing collaborations in a _scalable_ manner.
### _Multi-Agent System Approach_
The MAS approach [6][23] has demonstrated successful applications across various domains, including logistics and transportation [24][25]. This approach, designed to model distributed systems, is inherently well-suited for tackling challenges in collaborative logistics. In collaborative logistics, agents can represent different entities (e.g., central authorities, carriers, trucks, and shippers) and work together to achieve both economical and ecological objectives. Therefore, the MAS approach is a suitable choice for constructing the underlying architecture to facilitate collaborations. Despite its widespread use in transportation and logistics domains [24], its application in collaborative logistics remains limited, particularly when building complete systems based on this approach. An illustrative example is the work by Dai and Chen [26], where they presented a multi-agent framework for decentralised carrier collaboration, integrating an auction mechanism for managing transportation request outsourcing and acquisition. In this paper, we harness the MAS approach to develop a platform that enables carrier collaboration.
## III Multi-Agent Collaborative Logistics Framework
Existing studies have mainly focused on approaches to enable collaborative logistics, leaving a gap in the technical implementation of these solutions through system frameworks. This motivates our work in developing the Multi-Agent Collaborative Logistics (MACL) framework, representing stakeholders as software agents and facilitating communication through messaging. Our goal is to provide an overarching framework for building a third-party collaborative logistics platform, enabling different forms of collaboration among logistics participants.
### _Scenario_
Many transportation collaboration scenarios have been studied in existing literature [5]. This paper focuses on a specific scenario: carrier collaboration. In this scenario, carriers, which are the owners and operators of transportation equipment, work together to achieve cost reduction and enhanced efficiency. Shippers, the shipment owners, are excluded from this scenario due to its carrier-oriented focus. Therefore, this scenario involves a group of carriers, each owning a fleet of trucks, collaborating to serve a set of their customers. This scenario is outlined without considering the technical implementation, allowing for both centralised and decentralised approaches to be employed for its realisation.
### _Agent Decomposition_
Building upon the scenario outlined in the previous section, this MAS framework consists of five primary agent types, detailed as follows:
1. Orchestrator: The orchestrator is a computational or algorithmic agent responsible for coordinating collaborations. It conducts searches for collaborative solutions and optimises routes for the collection of delivery requests from carriers in the coalition.
2. Carrier: Carrier agents represent the carriers who possess delivery requests (shipments). Carriers have a pivotal role in collaboration within our scenario, managing the transportation process. They typically own multiple depots and operate a fleet of trucks. Carriers aim to efficiently utilise their trucks to achieve both economical and ecological goals.
3. Truck: Truck agents represent the trucks owned by carriers. They receive transport requests and follow specified routes to fulfil these requests.
4. Depot: Depot agents represent the depots, acting as bases of truck fleets. Carriers may have one or more depots. Trucks depart from their respective depots to fulfil requests and return to them after completing their assigned tasks.
5. Customer: Customer agents represent the recipients of the shipments, referred to as customers. Together, these five agent types represent the main stakeholders of the MACL framework, each undertaking a specific role and collaborating coherently to facilitate carrier collaboration.
### _Agent Organisation and Interaction_
While the previous section introduces the primary agents of the framework, this section presents their organisation and interactions. Agents can be organised in various structures to achieve goals. We present an agent organisation for the outlined collaborative logistics scenario, as shown in Figure 2. For clarity, the figure only includes a partial list of agents in each category. It is important to note that a real-world collaborative logistics scenario may involve a larger number of carriers and customers than what is depicted in Figure 2.
As shown in Figure 2, the proposed agent organisation exhibits a hierarchical structure, where agents are conceptually organised in a tree-like manner. The orchestrator, a computational agent, coordinates collaborations among carriers in the coalition. Carriers selectively share information with the orchestrator based on their operational objectives. Carriers manage depots, each with a fleet of assigned trucks. Trucks interact with their respective depots when they depart from and return to them at the beginning and end of delivery tasks. Carriers handle transportation requests from shippers and transport shipments to the recipients (i.e., customers). Notably, direct communication between carriers and customers is absent; instead, trucks engage with customers to manage shipment reception and confirmation. Customers, loosely linked to the system, react to incoming shipment arrival notifications.
This design results in a hierarchical agent organisation, which we employ to build a collaborative logistics testbed.
## IV An Integrated Collaborative Logistics Testbed
We proceed with constructing a platform based on the proposed MACL framework, which will serve as a testbed for collaborative logistics research. In subsequent sections, we will use "the platform" and "the testbed" interchangeably to denote this integrated collaborative logistics system. While the MACL framework enables the communication between distributed collaborative logistics stakeholders, we incorporate Digital Twin (DT) concepts to establish an integrated collaborative logistics platform. The resulting platform implements a basic digital twin system, comprising a physical system with a scene map for collaborative logistics assets and its digital counterpart that enables real-time monitoring and visualisation. Unlike a purely digital system, this platform combines both physical elements and digital systems. It can be deployed across geographically distributed machines and involves a fleet of physically moving and autonomously driving trucks. Further details about the platform's conceptual design, development, and testing will be discussed in the following sections.
### _Conceptual Design_
The design of such a collective, decentralised collaboration platform needs to ensure _neutrality_, _autonomy_, _privacy_ and _transparency_. The platform should act in an unbiased, fair manner to maintain neutrality among all participating carriers. Given that carrier collaboration involves a form of _coopetition_, where competing businesses cooperate for mutual interest, it is imperative to safeguard the independence and confidentiality of each carrier. Moreover, the platform should be transparent to foster trust among stakeholders. Participants should have visibility into its operations and behaviour, with convenient access to its state. These principles are further concretised through the following specific design guidelines:
1. _Decentralised Agent Deployment_: Carriers maintain autonomy by running their agents on their own machines. This decentralised approach allows carriers to retain control over their operations while engaging in collaborative logistics.
Fig. 2: An agent framework for carrier collaboration.
2. _Collective Resource Management_: Shared resources are collectively managed instead of being privatised. This ensures a fair and efficient distribution of resources among participants, enhancing collaboration and optimising resource utilisation.
3. _Real-time Visibility_: Stakeholders have access to real-time insights into the platform's status. This feature empowers stakeholders to monitor and assess the collaborative logistics system's performance, enabling timely informed decision-making and proactive interventions.
The platform is designed in alignment with these guidelines.
The resulting platform is conceptually composed of three main parts: a physical system, a digital system, and the connections between these two systems. The main components within each of these parts are illustrated in Figure 3. The physical system consists of a scene map, a collection of physical assets, and their representative software agents. The scene map depicts the physical environment where collaborative logistics operations unfold, as exemplified in Figure 4. In this map, the primary physical assets are robotic cars (robocars), simulating actual trucks used in transportation scenarios. These robocars follow predefined routes on the scene map to fulfil transportation tasks. Additionally, the map includes representations of carriers, depots, and customers, as illustrated in Figure 4, simulating real-world configurations. Each of these physical assets has a representative software agent, which acts on its behalf to interact with other assets.
The digital system is primarily used to provide users with visibility into the physical system. It mainly consists of a digital map, an agent chat room, and a control panel. Central to the digital system is the digital map -- a digital replica of the physical system's scene map. This digital map regularly synchronises with its physical counterpart, updating to visually represent ongoing events like real-time robocar locations. This provides visibility at the macro level of the platform, but not at the micro level. An agent chat room (ACR) is therefore added to the digital system. The primary purpose of the ACR is to monitor and display agent interactions, providing visibility into the internal interactions between collaborative agents. Through messaging, agents interact to collaborate or resolve conflicts, achieving a coherently collaborative logistics system. The ACR provides a visual representation of the entire messaging process, displaying sent messages and providing services for querying messages. Additionally, a configuration panel containing buttons to trigger and configure the physical system is added to the digital system.
The third part is the connections, comprising communication mechanisms or protocols that enable data exchange between the physical and digital systems. We consider two distinct communication modes: request-driven and event-driven communication. In the former, communication is stateless and unidirectional. This mode is commonly employed to retrieve resources or submit data. Examples include loading a web interface, querying data, or saving system configurations. This type of communication can be implemented either synchronously or asynchronously. The latter is event-driven, offering bi-directional, low-latency, high-frequency communication services. It is therefore particularly suitable for real-time use cases, such as immediate transmission of robocars' locations from the physical to the digital system.
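A minimal sketch of how the two modes might look on the physical-system side is given below; the endpoint URLs and message fields are assumptions, while `requests` and `websockets` are the standard Python libraries for the two patterns.

```python
import asyncio
import json

import requests
import websockets

API_URL = "http://digital-system.example/api"         # hypothetical request-driven endpoint
WS_URL = "ws://digital-system.example/ws/locations"   # hypothetical event-driven endpoint

def save_configuration(config: dict) -> None:
    """Request-driven: stateless, unidirectional (e.g., saving a system configuration)."""
    requests.post(f"{API_URL}/config", json=config, timeout=5)

async def stream_truck_location(truck_id: str, read_location) -> None:
    """Event-driven: bi-directional, low-latency (e.g., pushing a robocar's location)."""
    async with websockets.connect(WS_URL) as ws:
        while True:
            await ws.send(json.dumps({"truck": truck_id, "position": read_location()}))
            await asyncio.sleep(0.2)    # roughly five updates per second
```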
### _Implementation Overview_
The previous section describes the platform's conceptual design, focusing on its three main parts. This section presents its implementation details, including system architecture and the individual implementations of each component.
#### Iii-B1 System Architecture
In this section, we briefly describe the system architecture, focusing on its composition and component connections. Architecturally, the platform consists of two main sub-systems: the MACL-based physical system and the web-based digital system. Adhering to the design guidelines detailed in Section IV-A, these sub-systems should be independent, loosely-coupled, and capable of running on geographically distributed machines. Therefore, we devise a microservice architecture for this platform (see Figure 5).
As shown in Figure 5, the physical and the digital systems are connected via Websocket and RESTful API, which are two of the most commonly used architectural design practices.
Fig. 4: Illustration of a 5x5 scene map. The map has 25 available locations, including depots labelled as D1-D3 and customers labelled as C1-C9.
Fig. 3: Main conceptual components of the platform.
The two systems are isolated from each other, and their communication occurs through messaging or API calls. This architectural choice ensures that the two systems can be independently deployed, operated, and maintained, without concern for the other system's current state, internal logic, or implementation. Specifically, the servers (e.g., agent name server, Websocket server, and HTTP server) and agents (orchestrator, carriers, etc.) can run on separate machines. Each of these entities holds a distinct domain of responsibility, and together they collaborate to establish a loosely-coupled platform -- the _testbed_.
The physical system also contains physical assets, such as the physical (scene) map and robocars. The inclusion of these assets within the platform is facilitated through "physical association", signifying that collaborative logistics operates in a common physical environment, namely, the physical map. The digital system, apart from its backend servers, features a frontend interface designed for user interaction. The frontend is connected to the backend through either synchronous (HTTP) or asynchronous (AJAX) requests, modifying its content or updating data to the backend. Both the frontend and backend contain additional components, which, in conjunction with others, will be further detailed in the corresponding sections.
#### Iii-A2 The Physical Map
A physical environment, where relevant actors commonly operate, is crucial for the collaborative logistics testbed. However, our goal is not to replicate the entire realistic environment, which is challenging, and yet unnecessary and uneconomical. Instead, we create a simple yet effective simulated environment -- a physical map with a coordinate system that allows localisation and navigation. In this coordinate system, the map is divided into uniform grids, and intersections are marked using ArUco markers [27] -- a widely-used fiducial marker system in robotics (see Figure 6 for the ArUco marker examples generated by the 5x5_50 dictionary). ArUco markers are binary square fiducial markers that contain unique identification numbers, which can be used to identify each intersection. Their detection is simple and quick, yet reliable and robust. They can be generated automatically, have ample code space, and are free. Therefore, ArUco markers are ideal for marking locations in this case, especially considering their compatibility with the low-cost onboard camera used for marker recognition on the robocars.
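A minimal detection sketch using OpenCV's `cv2.aruco` module is shown below; the classic `detectMarkers` call is used, and note that OpenCV 4.7 and later wraps the same functionality in an `ArucoDetector` class, so the exact call may differ by version.

```python
import cv2

# The 5x5_50 dictionary matches the markers placed on the scene map.
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_50)

def detect_intersection_id(frame):
    """Return the ID of the first ArUco marker visible in a camera frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None:
        return None                   # no marker in view
    return int(ids.flatten()[0])      # grid location ID (0-24 on the 5x5 map)
```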
Moreover, given the map's division into a grid with designated IDs, this environment supports both localisation and navigation. Robocars can determine their locations on the map by identifying these markers and can find routes between two locations using basic pathfinding algorithms. Grid density determines the number of accessible locations: denser grids yield more locations and enhance localisation accuracy. Consequently, the grid combined with fiducial markers placed at each intersection effectively converts the physical map space into a discrete two-dimensional coordinate system.
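As an example of the basic pathfinding mentioned above, a breadth-first search over the grid coordinate system is sufficient; the 4-neighbour adjacency and the optional `blocked` set are assumptions for illustration.

```python
from collections import deque

def grid_route(start, goal, size=5, blocked=frozenset()):
    """Shortest 4-neighbour route between two (row, col) intersections on the grid."""
    parents, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            break
        row, col = node
        for nxt in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in parents and nxt not in blocked):
                parents[nxt] = node
                frontier.append(nxt)
    if goal not in parents:
        return None                      # no route (e.g., blocked intersections)
    path = [goal]
    while parents[path[-1]] is not None:
        path.append(parents[path[-1]])
    return path[::-1]
```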
The map itself can be constructed from materials that satisfy the following properties:
* _Anti-slip_: It is crucial to have a map surface that offers sufficient traction for the robocars to ensure optimal driving condition.
* _Non-reflective_: Specular reflections can interfere with low-level vision systems, particularly when detecting markers. A non-reflective surface with minimal reflectivity is essential.
* _Portable (optional)_: Although not mandatory, portability may be a requirement for the testbed. In such instances, careful material selection or design is essential. For example, using a rollable material or implementing an assemblable design can facilitate the testbed's transportation.
After evaluating several options, yoga mats emerged as a suitable choice that satisfies the desired characteristics for the physical map. They are also inexpensive and easily obtainable.
#### Iii-A3 MACL System
Section III-B presents the conceptual design of the MACL system. This section describes its concrete implementation.
This MACL system has mainly five types of agents, each exhibiting its own behaviour, as detailed in Section III-B. These agents can be categorised into two groups based on their mobility: mobile and stationary. In this testbed, the truck agents fall into the mobile category, driving trucks on the map to fulfil transportation requests, resulting in dynamic changes to their locations. The remaining agents are stationary and maintain fixed locations. To simulate the real-world logistics operations, we utilise self-driving robocars equipped with Raspberry Pi as the medium for running truck agents. These truck agents control the robocars' movements while facilitating direct communication with other agents to seek collaboration. The control of robocars' driving is achieved through the implementation of a line-guided driving algorithm. Other stationary agents can be executed on various computing
Fig. 5: System architecture of the platform.
Fig. 6: Examples of ArUco markers (5x5 bits).
platforms, including desktop or portable computers, and even devices like Raspberry Pi.
Moreover, it is imperative for these agents to possess the ability to run on diverse machines while interacting coherently. This entails that these agents must be capable of locating and communicating with one another over the network. The MACL system thus needs to include a search and discovery (S&D) service. This S&D service provides agents with naming and connection functionalities, allowing them to locate other agents using meaningful names instead of non-human-readable URIs. Consequently, agents can easily establish connections and foster smooth communication through this S&D service.
Utilising the S&D service's connection mechanism, agents can coordinate through messaging. In this process, agents receive messages and process them with appropriate handlers. Effective message parsing requires recipients to grasp both the structural format and semantics of the message. This includes comprehending sender information, message content, and potentially domain-specific data (such as delivery destination, lead time and available time window for delivery). Achieving semantic understanding relies on commonly recognised protocols and ontologies shared among participants.
All the aforementioned considerations are crucial for developing this MACL system. To avoid starting development from scratch, we adopt the osBrain 2 framework to create agents in this system. osBrain, a Python-based, general-purpose MAS framework, was originally developed by OpenSistemas for a real-time automated trading platform. By leveraging osBrain, we can expedite the development process and concentrate on designing and implementing higher-level functionalities, including agent behaviour, organisation, and interaction.
Footnote 2: [https://osbrain.readthedocs.io/en/stable/index.html](https://osbrain.readthedocs.io/en/stable/index.html)
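The snippet below, adapted from osBrain's documented request-reply pattern, sketches how a truck agent might notify a customer agent of its arrival; the agent names and message strings are illustrative rather than the testbed's exact code.

```python
from osbrain import run_agent, run_nameserver

def on_notice_of_arrival(agent, message):
    agent.log_info(f'received: {message}')
    return 'confirmation-of-arrival'

if __name__ == '__main__':
    ns = run_nameserver()                 # the S&D service agents register with
    customer = run_agent('C5')            # stationary agent
    truck = run_agent('T1')               # would run on a robocar's Raspberry Pi

    addr = customer.bind('REP', alias='delivery', handler=on_notice_of_arrival)
    truck.connect(addr, alias='C5')

    truck.send('C5', 'notice-of-arrival at (2, 3)')
    print(truck.recv('C5'))               # -> 'confirmation-of-arrival'
    ns.shutdown()
```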
#### Iv-B4 The Digital System
This section presents an overview of the implementation of the digital system. The digital system incorporates essential components that enhance visibility to their corresponding physical counterparts.
The physical system can achieve collaboration independently, even without reliance on the digital system. However, the visibility into its operations is limited. Stakeholders have local visibility, and there is a lack of transparency regarding its internal operations, including consensus-achieving among agents. This visibility requirement calls for a digital system capable of digitally representing the physical system and visualising its internal processes, particularly in real-time.
To provide convenient access to this system, we created a web-based digital system following the architecture presented in Section IV-B1. The developed system implements the three main conceptual components (Figure 3), namely the digital map, ACR, and configuration. The digital map and ACR are virtual counterparts of their respective physical components. We employed Websockets as communication channels for data exchange between the physical and digital systems. This design ensures that events occurring in the physical system are promptly reflected in the corresponding areas of the digital system. Specifically, by establishing Websockets connections, the digital map and ACR components allow for real-time monitoring of robocar movements and agent communications. The configuration component is used for controlling and configuring the physical system. It establishes connections with the physical system through request-driven communication channels, which can be implemented using techniques such as RESTful APIs and AJAX calls.
To facilitate rapid and clean development, Django 3 was employed as the foundational web framework, in conjunction with other compatible plugins and frontend frameworks, to develop this digital system. This system also incorporates auxiliary functionalities, such as data storage and a naming service, for additional support. The toolset used to implement this system is illustrated in Figure 7.
Footnote 3: [https://www.djangoproject.com/](https://www.djangoproject.com/)
### _Implementation Details and Showcase_
Following the guidelines described in the previous sections, we implemented a _demonstration_ testbed instead of a fully-fledged one, due to constraints in budget and physical space. This section details the implementation of the testbed and presents a collaborative logistics showcase.
#### Iv-C1 Environment Setting-up
The main components that undergo simplification or reduction are associated with the physical elements, specifically the physical map and the self-driving trucks. As described in Section IV-B2, yoga mats meet the three desired criteria. We aligned two black yoga mats vertically and stitched them together, with each measuring 130 cm \(\times\) 200 cm, to create a 200 cm \(\times\) 200 cm base material for constructing the map. For creating the self-driving trucks, we utilised low-cost educational robocar kits, specifically the SunFounder Picar-x, in conjunction with Raspberry Pi as the fundamental toolkit.
Fig. 7: Mindmap illustrating the testbed specification.
We assembled, configured, and tested multiple robocars for this showcase. Each robocar is equipped with a Raspberry Pi serving as its control centre and incorporates an LCD display to provide operational status updates. Additionally, every robocar integrates a three-channel grayscale sensor and a low-cost camera, both used for detecting the environment ahead of the robocar. An assembled robocar example, highlighting these components, is shown in Figure 8a. Due to hardware limitations, these robocars cannot perform small, sharp turns. Considering the constraints of the map size and robocar maneuverability, we divided the map into a uniform 5 \(\times\) 5 grid, resulting in a 5 \(\times\) 5 coordinate system with 25 different locations (see Figure 4 for an illustration). Each location is identified by an ArUco marker with a unique ID ranging from 0 to 24. These markers are printed on white paper, firmly attached to stiff paper, and then affixed to the map with adhesive. Each grid cell measures 45 cm \(\times\) 45 cm, resulting in a 180 cm \(\times\) 180 cm map area in the central region of the base map material. This design ensures efficient utilisation of space while allowing for effective robocar mobility.
Figure 8b shows the resulting physical map, materialising the conceptual map depicted in Figure 4. Annotations in this figure highlight essential elements and provide explanatory details. In particular, the numbers within this figure are the IDs assigned to the corresponding ArUco markers. The white lines on the map denote the designated driving routes, added for driving assistance. This physical map provides a tangible environment for investigating collaborative logistics.
#### Iii-B2 Self-Driving Trucks
Given the testbed's focus on collaboration, the trucks' self-driving capabilities are optional, not required. Any driving solution that successfully navigates from source to destination is considered acceptable. We developed a simple yet effective self-driving algorithm, specifically a line-guided driving algorithm, to allow trucks to autonomously navigate designated routes and achieve localisation without manual intervention. The algorithm uses the grayscale sensor located at the front of the robocars, as shown in Figure 8a, to detect lines and adjust steering accordingly. Additionally, the front camera of the robocars is utilised to identify the ArUco markers for localisation. Based on whether the detected location ID corresponds to the target customer, the truck either stops for delivery or continues driving. Due to the focus and space limitations of this article, we refrain from delving into the technical details of this driving algorithm and hardware configurations.
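A minimal control-step sketch is given below, assuming the SunFounder `picarx` Python API (`get_grayscale_data`, `set_dir_servo_angle`, `forward`); method names may differ across library versions, and the reflectance threshold is an assumed calibration value for the white-line-on-black-mat setup.

```python
from picarx import Picarx

px = Picarx()
LINE_THRESHOLD = 1000   # assumed calibration value: white tape reflects more than the mat
STEER_ANGLE = 25        # assumed steering angle in degrees

def follow_line_step(speed=10):
    """One control step of line-guided driving: steer towards the white guide line."""
    left, middle, right = px.get_grayscale_data()   # three-channel grayscale sensor
    if middle > LINE_THRESHOLD:        # line under the centre sensor: go straight
        px.set_dir_servo_angle(0)
    elif left > LINE_THRESHOLD:        # line drifting to the left: steer left
        px.set_dir_servo_angle(-STEER_ANGLE)
    elif right > LINE_THRESHOLD:       # line drifting to the right: steer right
        px.set_dir_servo_angle(STEER_ANGLE)
    px.forward(speed)
```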
#### Iii-B3 Development of the MACL system
Following the MACL framework described in Section IV-B3, we implemented a MACL system using Python and the osBrain framework. This implemented system consists of _sixteen_ agents, including an orchestrator, three trucks, three depots (each with one truck), and nine customers. Carrier agents described in the framework were omitted as the depots encompassed their functionalities. Additionally, we simplified the messaging process, reducing the number of interactions and minimising the data exchanged between agents. Messaging between agents is facilitated by the osBrain framework, specifically leveraging the ZeroMQ 4 messaging service to enable a request-reply communication pattern. This pattern enables effective agent message exchange and collaboration within this MACL system.
Footnote 4: [https://zeromq.org/](https://zeromq.org/)
We developed an agent nameserver using osBrain to provide the S&D service for agents. This service enables agents to conveniently
Fig. 8: Illustration of an assembled robocar and the physical map with annotations highlighting key components.
Fig. 9: Interaction flow enabled by the NDS.
find each other using assigned nicknames. To ensure a consistent "nickname" for nameservers, we implemented a naming directory service (NDS) for the digital system, following the REST architectural style. The NDS maintains an updated lookup table containing a collection of nickname-IP address pairs. The NDS works as a "phonebook" for this MACL system. It allows agents to submit a lookup request and retrieve the current IP address associated with the nameserver via its nickname, removing reliance on a potentially dynamic IP address that may change at every system startup. The NDS combined with the nameserver allows the system to be deployed on different machines under different networks. Competing stakeholders can thus run and manage their own agents independently, consistent with the second design principle discussed in Section IV-A. The interaction flow enabled by this NDS is depicted in Figure 9.
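The phonebook behaviour can be sketched with two REST calls; the endpoint paths and field names below are hypothetical, not the actual NDS routes.

```python
import requests

NDS_URL = "http://digital-system.example/nds"   # hypothetical NDS base URL

def publish_nameserver(nickname: str, ip: str, port: int) -> None:
    """A nameserver registers its current address under a stable nickname."""
    requests.post(f"{NDS_URL}/entries",
                  json={"nickname": nickname, "ip": ip, "port": port}, timeout=5)

def resolve_nameserver(nickname: str) -> tuple[str, int]:
    """An agent resolves the nickname to whatever IP the nameserver currently has."""
    response = requests.get(f"{NDS_URL}/entries/{nickname}", timeout=5)
    response.raise_for_status()
    entry = response.json()
    return entry["ip"], entry["port"]
```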
In this MACL system, the three truck agents were deployed on the three assembled robocars, respectively. These agents act as the robocars' decision-making components, control the robocars' hardware, and are responsible for interacting with other agents to facilitate collaboration.
#### Iv-B4 System Running and Showcase
We implemented a web interface for the digital system according to the design described in Section IV-B4. The dashboard, depicted in Figure 10, is this interface. It consists of two panels: "agentchat virtual" and "agent chat room". The left panel, agentchat virtual, displays a digital _replica_ of the physical environment (as shown in Figure 8b) on a 2-dimensional map. It includes several digital elements: 1) A 5 x 5 coordinate system, 2) Labels representing supply chain entities, 3) Routes of each truck before (dashed lines) and after (solid lines) collaboration, 4) Clickable legends for interactively showing or hiding selected elements, and 5) Control buttons for system configuration and launch. The right panel, agent chat room, initially appears empty with no messages until the system is running.
To run the system, a series of setup steps must first be performed, as illustrated in Figure 11. These steps include the following (a minimal launch sketch is given after the list):
1. Run the web server and allow _local_ access to the interface via a browser.
2. Launch the tunnelling service powered by ngrok 5 to enable _remote_ access to the digital system and its interface through a fixed URL 6. Footnote 5: [https://ngrok.com/](https://ngrok.com/)
3. Start the nameserver after the web server and tunnelling service are operating.
4. Update the nameserver's IP address (automatically detected) to the NDS via the configuration popup window as shown in Figure 10b.
5. Finally, run all agents. Truck agents need to establish connections with corresponding _active_ non-truck agents, non-truck agents (e.g., customer agents) thus must be executed first.
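The launch ordering can be sketched as follows; the script and module names (`run_nameserver.py`, `run_agents.py`, the NDS endpoint, the nameserver port) are hypothetical, while `manage.py runserver` and `ngrok http` are the standard Django and ngrok commands.

```python
import socket
import subprocess
import time

import requests

# Steps 1-2: web server (local access) and ngrok tunnel (remote access).
subprocess.Popen(["python", "manage.py", "runserver", "0.0.0.0:8000"])
subprocess.Popen(["ngrok", "http", "8000"])
time.sleep(5)                                    # crude wait for both services to start

# Step 3: start the osBrain nameserver (hypothetical helper script).
subprocess.Popen(["python", "run_nameserver.py"])
time.sleep(2)

# Step 4: publish the nameserver's detected IP to the NDS (hypothetical endpoint).
ns_ip = socket.gethostbyname(socket.gethostname())   # rough auto-detection
requests.post("http://localhost:8000/nds/entries",
              json={"nickname": "macl-ns", "ip": ns_ip, "port": 20000}, timeout=5)

# Step 5: run non-truck agents first, then truck agents (hypothetical scripts).
subprocess.Popen(["python", "run_agents.py", "--role", "non-truck"])
time.sleep(2)
subprocess.Popen(["python", "run_agents.py", "--role", "truck"])
```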
With all aforementioned steps completed, the system is ready for execution. When the "Run" button is clicked, the system starts to run. The status of both the physical system and its digital counterpart at the start, middle and end of the running process is shown in Figure 12. At the beginning, as shown in Figure 12a, the trucks (labelled T1, T2, and T3) are stationed at their respective depots: D1, D2, and D3. These depots are located at the bottom left, bottom right, and top right corners of both the physical and the digital maps. The maps are populated with nine customers, evenly distributed over the area. Depots and customers are distinguished by coloured labels (red, blue, and green) that denote their association with specific trucks. On the physical map, three white lines are marked to represent the routes after carrier collaboration. These lines guide the self-driving of the trucks. The routes before carrier collaboration are not drawn on the physical map for clarity. However, on the digital map, both the routes before and after carrier collaboration are visible, represented by dashed lines and solid lines, respectively, as presented in
Fig. 11: Prerequisite steps to run the system.
Fig. 10: Screenshots of the dashboard of the digital system at its initial and configuration status.
Figure 12b. When they receive messages containing delivery requests, the trucks illuminate their LCD display in blue and adjust their front cameras to face down around 30 degrees, indicating their readiness to fulfil the requests. Messages and related notifications are displayed in the ACR area and the upper right corner of the dashboard (see the right part of Figure 12b).
Fig. 12: The status of the physical system (left) and the screenshots of the dashboard of the digital system (right) at corresponding times, illustrating the testbed at the start, middle, and end of a run.
Fig. 13: Example messages displayed on LCDs.
During the delivery process, the trucks use their cameras to scan the area ahead, identifying ArUco markers to determine their locations. Upon reaching a customer's location, a truck pauses and sends a notice-of-arrival message to the customer, waiting for a response. Upon receiving a confirmation-of-arrival message from its customer, the truck starts a simulated offloading process, displaying a message on its LCD display (see Figure 13) and flickering the display. As shown in Figure 12c, the trucks are currently in the middle of fulfilling their delivery tasks. Specifically, T1 and T2 have arrived at the locations of their first customers, C5 and C4, respectively, and have successfully delivered products to them. T3 is approaching its second customer, C8, but has not yet completed the delivery. The digital system continually monitors and interprets the state of the physical system. As illustrated in Figure 12d, the trucks' locations are constantly tracked and represented as coloured circles on the digital map. The ongoing conversations between the trucks and their respective customers are displayed in the right panel of the dashboard. Additionally, milestone system events, such as T1 successfully delivering products to C5, are promptly conveyed via notifications located at the upper right corner of the dashboard.
After a short stop following successful deliveries, the trucks resume their journey, continuing along their assigned routes until all delivery assignments are fulfilled. Finally, they return to their respective depots. Figures 12e and 12f show the state of the physical and digital systems when all trucks have arrived at their depots. They send messages to their respective depots to announce their arrival and await the next fulfilment.
Coherence among distributed agents in this system is achieved through inter-agent messaging. Trucks, customers, and depots communicate to coordinate their activities. Trucks can also engage in dialogues to manage conflicts that might arise from simultaneous use of critical resources. For instance, when a truck is driving into a "single-track road" that accommodates only one vehicle at a time (like the road between coordinates (2, 1) and (2, 3)), the truck proactively informs nearby trucks about its road usage. Other trucks wait until the road is available, preventing potential conflicts and enhancing efficiency. This approach was employed in this testbed to avoid conflicts when using single-track roads on the map.
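A minimal sketch of this announce-and-wait convention is shown below; the message fields and the broadcast alias are illustrative, not the exact testbed protocol.

```python
import time

class RoadGuard:
    """Tracks which single-track roads nearby trucks have announced as occupied."""

    def __init__(self):
        self.busy_roads = set()

    def on_road_message(self, message):
        # Handler for road-usage announcements received from other trucks.
        if message["status"] == "entering":
            self.busy_roads.add(message["road"])
        else:
            self.busy_roads.discard(message["road"])

    def enter(self, truck, road):
        # Wait until the single-track road is free, then announce our own usage.
        while road in self.busy_roads:
            time.sleep(0.5)
        truck.send("nearby-trucks", {"road": road, "status": "entering"})

    def leave(self, truck, road):
        truck.send("nearby-trucks", {"road": road, "status": "clear"})
```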
## V Discussion and Implications
The demonstration of truck collaboration 7, despite its simplicity and directness, effectively showcases the suitability of the testbed for examining issues related to multi-agent collaborative logistics (MACL). Additionally, the demonstration also provides a tangible example of how the proposed MACL framework can be efficiently applied.
Footnote 7: This demo was accepted to be exhibited in AI UK 2023 ([https://ai-uk.turing.ac.uk](https://ai-uk.turing.ac.uk)) — the UK’s national showcase of data science and AI.
However, it is important to note that both the showcase and the testbed do come with certain limitations. The showcased collaboration tackles a relatively small-scale problem, containing only _three_ carriers, each with a single truck. While this three-carrier setup is a common practice in mainstream collaborative logistics research, it may not adequately capture the complexity of real-world scenarios involving large numbers of participants. Additionally, while the designed MACL agent organisation is suitable for demonstration purposes, it may not scale to larger collaboration scenarios. Consequently, the current testbed (along with its demonstrations) lacks the capability to study scalability-related issues.
The demonstration only includes a single instance of collaboration, where pre- and post-collaboration delivery routes are predetermined and displayed on the map. Therefore, this testbed is limited to demonstrating this specific "fixed" collaboration instance and lacks the flexibility to explore different collaboration instances.
These two limitations stem from the inherent capabilities of the current testbed, namely, the limited area of its physical map and the lack of dynamic navigation capabilities. While the first constraint is determined by real-world conditions that might be beyond the scope of research efforts, the second constraint can be addressed by redesigning the navigation system of the physical map. An attainable approach to achieve dynamic navigation is to use white tape to divide the map into uniform "city blocks", and then assign each intersection a distinct ID. Figure 14 shows an illustration of the physical map with such blocks. When determining the curvature and dimensions of these blocks, it is crucial to carefully consider the robocars' turning radius. Our future work will explore this map design to build a dynamic navigation system for this testbed, accommodating more and random instances of collaboration.
Despite its limitations, the testbed makes contributions not only to collaborative logistics but also to broader domains such as CPSs and Supply Chain Digital Twin (SCDT). Its communication facilities enable distributed, decentralised entities to effectively discover and connect to each other in a dynamic IP allocation environment, adaptable to various distributed, multi-entity communication applications. While the architecture in Figure 5 is tailored for this collaborative logistics testbed, it is somewhat general and can be used to build other CPSs. Essentially, the testbed exemplifies a form of CPS, combining digital (agents) and physical components (robocars and the map) to create intelligent, interconnected entities that facilitate
Fig. 14: Illustration of a physical map with uniformly partition blocks.
collaborative logistics.
This testbed also embodies the concept of SCDT using a MAS approach with a focus on logistics. The testbed includes a physical collaborative logistics environment and its virtual replica. The digital system monitors and has the capability to control the operations of the physical system. However, it is worth noting that the testbed implements the SCDT only at the architectural level, covering aspects such as connectivity and interaction. It lacks more detailed features such as advanced analytics and predictive capabilities.
## VI Conclusion and Future Work
Sustainable logistics has gained significant attention recently, largely driven by the ongoing green transitions. In this article, we tackle collaborative logistics, an effective pathway to immediately reduce carbon emissions in transportation. This is achieved through operational modifications, avoiding the need for large investments in transforming physical assets, such as electrifying trucks.
We introduced the Multi-Agent Collaborative Logistics (MACL) framework, a simple yet effective approach. This framework consists of a set of representative or algorithmic agents collaborating coherently to achieve logistics objectives in a cost-effective and sustainable manner. We proposed a system architecture and employed it to design and develop an integrated collaborative logistics testbed. The testbed, a specific CPS and SCDT, consists of a physical environment, including robocars, a physical map, and other tangible components, and a digital system that virtually replicates and monitors the physical environment. A demonstration involving sixteen agents, including three truck agents and nine customer agents, was conducted to showcase the effectiveness of the developed testbed.
Our future work will address the limitations highlighted in the previous section. This involves integrating dynamic navigation systems into the physical map, expanding its size to accommodate more available locations, and designing a robust and _scalable_ agent organisation. This will enable more robocars to operate concurrently. Additionally, we will explore _decentralised_ collaborative logistics by using trucks equipped with autonomous agents, where these agents can negotiate with each other to collaborate without a third party. This would be achieved by leveraging deep multi-agent reinforcement learning techniques and emerging revolutionary foundation models.
|
2309.03854 | General gravitational charges on null hypersurfaces | We perform a detailed study of the covariance properties of the symplectic
potential of general relativity on a null hypersurface, and of the different
polarizations that can be used to study conservative as well as leaky boundary
conditions. This allows us to identify a one-parameter family of covariant
symplectic potentials. We compute the charges and fluxes for the most general
phase space with arbitrary variations. We study five symmetry groups that arise
when different restrictions on the variations are included. Requiring
stationarity as in the original Wald-Zoupas prescription selects a unique
member of the family of symplectic potentials, the one of Chandrasekaran,
Flanagan and Prabhu. The associated charges are all conserved on non-expanding
horizons, but not on flat spacetime. We show that it is possible to require a
weaker notion of stationarity which selects another symplectic potential, again
in a unique way, and whose charges are conserved on both non-expanding horizons
and flat light-cones. Furthermore, the flux of future-pointing diffeomorphisms
at leading-order around an outgoing flat light-cone is positive and reproduces
a tidal heating plus a memory term. We also study the conformal conservative
boundary conditions suggested by the alternative polarization and identify
under which conditions they define a non-ambiguous variational principle. Our
results have applications for dynamical notions of entropy, and are useful to
clarify the interplay between different boundary conditions, charge
prescriptions, and symmetry groups that can be associated with a null boundary. | Gloria Odak, Antoine Rignon-Bret, Simone Speziale | 2023-09-07T17:12:24Z | http://arxiv.org/abs/2309.03854v3 | # General gravitational charges on null hypersurfaces
###### Abstract
We perform a detailed study of the covariance properties of the symplectic potential of general relativity on a null hypersurface, and of the different polarizations that can be used to study conservative as well as leaky boundary conditions. This allows us to identify a one-parameter family of covariant symplectic potentials. We compute the charges and fluxes for the most general phase space with arbitrary variations. We study five symmetry groups that arise when different restrictions on the variations are included. Requiring stationarity as in the original Wald-Zoupas prescription selects a unique member of the family of symplectic potentials, the one of Chandrasekaran, Flanagan and Prabhu. The associated charges are all conserved on non-expanding horizons, but not on flat spacetime. We show that it is possible to require a weaker notion of stationarity which selects another symplectic potential, again in a unique way, and whose charges are conserved on both non-expanding horizons and flat light-cones. Furthermore, the flux of future-pointing diffeomorphisms at leading-order around an outgoing flat light-cone is positive and reproduces a tidal heating plus a memory term. We also study the conformal conservative boundary conditions suggested by the alternative polarization and identify under which conditions they define a non-ambiguous variational principle. Our results have applications for dynamical notions of entropy, and are useful to clarify the interplay between different boundary conditions, charge prescriptions, and symmetry groups that can be associated with a null boundary.
###### Contents
* 1 Introduction
* 2 Null hypersurfaces
* 2.1 Foliations
* 2.2 Affine coordinates
* 3 Null symplectic potential
* 3.1 Phase space polarizations
* 4 Conservative boundary conditions and the variational principle
* 4.1 Dirichlet boundary conditions and their ambiguity
* 4.2 Conformal boundary conditions
* 5 Leaky boundary conditions and covariant phase space
* 5.1 Anomalies and class-III invariance
* 5.2 Anomalies of the boundary Lagrangians
* 5.3 Anomalies of the symplectic potentials
* 5.4 Boundary symmetry groups
* 6 Charges and fluxes
* 6.1 Wald-Zoupas conditions on null hypersurfaces
* 6.2 Stationarity on flat light-cones
* 6.3 Larger phase spaces
* 6.4 Charges
* 7 Addenda
* 7.1 Second-order perturbations around flat light-cones
* 7.2 Wald-Zoupas prescription with field-dependent diffeomorphisms
* 8 Conclusions
* A Internal Lorentz transformations
* A.1 Class-I
* A.2 Class-III
* A.3 Anomalies and NP representatives
* B Alternative polarizations
* C Closure of Lie brackets
* D Derivation of Damour's equation
## 1 Introduction
Chandrasekaran, Flanagan and Prabhu (CFP) characterized the symmetry group of general relativity on generic null hypersurfaces as an extension of the BMS group to include arbitrary diffeomorphisms and Weyl transformations of any 2d space-like cross-section [1].1 They then used the Wald-Zoupas (WZ) procedure [4] to prescribe charges for this symmetry group and study their flux-balance laws. The charges they obtained satisfy important properties, and are conserved on shear-free and expansion-free hypersurfaces, or equivalently non-expanding horizons (NEHs) in vacuum. The analysis was then specialized to NEHs and further advanced in [5], explaining for instance how the NEH area is the charge aspect associated with the global sector of the Weyl transformations. The same Weyl transformations play also a prominent role in the investigation of possible black hole soft hairs by Hawking, Perry and Strominger [6].
Footnote 1: The same group that can be obtained at null infinity relaxing the fall-off conditions in a way compatible with renormalization of the symplectic potential [2], and was in that context referred to as BMSW group. See also [3].
The CFP construction is based on the covariant phase space with a specific choice of symplectic potential, associated with Dirichlet boundary conditions, and on a specific set of restrictions of the variations, corresponding to a certain universal structure constructed along the guidelines of what has successfully been done at future null infinity (see e.g. [7] for a review). These restrictions clarify in particular the issue of ambiguous null boundary terms that was raised in [8]. It was then pointed out
in [9] that the CFP charges differ from what one would obtain following a procedure a la Brown-York by a term generated by an anomalous transformation under diffeomorphisms, thus bringing to the forefront the extension of the covariant phase space to include anomalies described in [10, 11] (see also [12, 13]). We show here that the anomaly term in the charges is crucial to make them covariant, and explain why this seemingly counterintuitive statement is actually natural.
A shortcoming of the CFP charges is that some of them are not conserved even in the absence of radiation, specifically those whose aspect is the area, since this grows on a flat light-cone. This limits their applicability to the study of physical processes: for instance, in a spherical collapse a flat light-cone bends into an event horizon, and this process would be poorly described by a charge that is already varying prior to any matter infalling. This raises the question of whether a different prescription for the charges exists that is free of this shortcoming. Indeed, the Noether charges as well as the Wald-Zoupas charges are not guaranteed to be unique, and may depend on a choice of polarization made in writing down the symplectic potential [14, 15, 16, 11, 17, 12, 13]. We show that there exists a different choice of polarization that leads to charges which are conserved on flat light-cones, as well as on shear-free and expansion-free hypersurfaces. The polarization we use was previously considered in [18], although with a restriction that spoiled its covariance. The application of the new charges to study dynamical processes of BH formation was anticipated in [19], see also [20] on this. It is the unique one that satisfies the Wald-Zoupas covariance condition together with a stationarity condition interpreted in a weaker sense than in the original WZ paper, which allows one to include both shear-free and expansion-free surfaces and flat light-cones. A further useful property of this potential is that the flux of future-pointing diffeomorphisms at leading-order around an outgoing flat light-cone is positive, and reproduces the tidal heating term plus a memory term.
We also show that this polarization leads to conformal boundary conditions on a null hypersurface that provide an alternative resolution to the boundary term ambiguities of [8] based on Dirichlet boundary conditions, and discuss the residual ambiguities that would be present in the corner terms.
One of the restrictions on the variations considered in the CFP paper concerns the inaffinity of the normal to the null hypersurface. This restriction plays an important role. Relaxing it reintroduces the ambiguities of [8] and prevents a complete implementation of the WZ procedure. Nonetheless, the authors considered the possibility that this variation may be physically relevant, and the question was left open of whether one could construct WZ charges in this larger phase space. We investigate this issue with a general analysis of the covariant phase space in which all restrictions on the variations are removed one by one, until we are left with the minimal condition that the boundary is preserved. As the restrictions are removed, the symmetry group is enlarged from the CFP group, first to a group which includes super-translations of arbitrary time dependence, then to the complete hypersurface diffeomorphisms, and finally to an extension of the hypersurface diffeomorphisms with one additional free function, which applies to the largest phase space.
Having identified the symmetry groups, we look at the symplectic potential for all phase spaces. We perform a systematic calculation of all anomalies and provide their interpretation. We identify a one-parameter family of symplectic potentials that satisfy the covariance condition in the larger phase spaces, a family that includes the conformal polarization. However, none of them satisfies the stationarity condition, neither in the original sense nor in the weaker sense. As a consequence, the new symmetries produce fluxes which don't vanish in stationary solutions such as non-expanding horizons and flat light-cones. So while it is interesting to notice that this extension of the phase space is possible, it remains unclear to us how it should be used in physical applications.
The stationarity condition can instead be satisfied for the whole family if the variation of the inaffinity is non-zero but fixed to be proportional to the variation of the expansion. The proportionality
parameter labels the members of the family, and when it vanishes the CFP choice is recovered. In all other cases one finds symmetry vector fields with a non-trivial metric dependence, which will be investigated elsewhere.
We complete the analysis by providing the general expression for the Noether charges for arbitrary variations and for a two-parameter family of polarizations that includes the covariant ones. The covariant ones are the only ones that are invariant under arbitrary choices such as rescaling of the null normal, change of embedding, and change of rigging vector. We explain under which conditions these charges satisfy the Wald-Zoupas prescription.
Our results strengthen on the one hand the value of the universal structure defined in [1], and enrich it by proposing an alternative symplectic potential with improved stationarity properties. On the other hand, they open the possibility of working with a much weaker universal structure while preserving the Wald-Zoupas covariance criterion, if the lack of stationarity can be dealt with.
At the technical level, a difference between our approach and [1] is that we take a spacetime description as our starting point, as opposed to the intrinsic description based on hypersurface tensors. Our complementary approach can be useful to provide a new angle on their analysis, and allows one to relate it to the Newman-Penrose (NP) formalism. This gives us useful tools for the systematic analysis of anomalies in the phase space. For instance, the geometry of a null hypersurface depends only on the equivalence class of normals under rescaling. But many quantities that appear in the phase space depend also on a specific choice of normal representative, as well as on an auxiliary rigging vector used to define a local projector on space-like cross-sections. Identifying the quantities which are independent of these two auxiliary and non-geometric characteristics is straightforward using internal transformations of the NP tetrad which have long since been tabulated, and are for instance denoted class-I and class-III in [21]. One of our technical results is to point out the direct link between lack of invariance under these transformations and anomalies in the covariant phase space. Another technical difference is that we don't use the notation for the covariant phase space a la Wald, where field variations are interpreted as tangent vectors, but rather the one in which field variations are treated as differential forms [22, 15, 11]. This makes it easier to distinguish the Lie derivative in field space from those in spacetime, which is the basic step to study anomalies. Some of the details we present are therefore mere translations of the results of [1] in the formalism based on spacetime tensors and on the exterior calculus notation for the covariant phase space. This makes for a paper longer than initially intended, but we hope that the dictionary it provides will be of use to navigate the literature.
As this paper was being completed we learned of a similar analysis by Venkatesa Chandrasekaran and Eanna Flanagan which has considerable overlap with our results and which is being submitted simultaneously to the arXiv [23].
We use mostly-plus spacetime signature. Greek letters are spacetime indices, and we will sometimes denote scalar products by a dot. When needed, lower-case Latin letters \(a,b,...\) are hypersurface indices, and upper-case Latin letters \(A,B,...\) are indices for 2d cross-sections of the hypersurface. In all cases, \((,)\) denotes symmetrization, \(\langle,\rangle\) trace-free symmetrization, and \([,]\) antisymmetrization. An arrow under a \(p\)-form means pull-back on \(\mathcal{N}\), \(\hat{=}\) means on-shell of the field equations, and \(\stackrel{{\mathcal{N}}}{{=}}\) means an equality valid at the null boundary only. We use units \(16\pi G=c=1\).
Null hypersurfaces
In this Section we review basic facts of null hypersurfaces. This is useful to fix the notation, but it will also allow us to highlight properties that are often scattered across the literature, and to provide somewhat of a dictionary. We will in particular review the distinction between intrinsic and extrinsic geometries, using spacetime covariant notation and offering the translation to hypersurface indices on the one hand and to Newman-Penrose (NP) notation on the other hand. We will recall the notion of class-III and class-I invariance to talk about quantities which are independent respectively of the choice of normal and of rigging vector. We will then recall Sachs' identification of constraint-free data and how it allows for a clear distinction between conservative and radiative or leaky boundary conditions, and finally recall some useful expressions that arise when working with the special coordinate system provided by affine coordinates.
We consider a null hypersurface \(\mathcal{N}\) defined by a cartesian equation \(\Phi=0\), and denote its normal
\[l_{\mu}:\stackrel{{\mathcal{N}}}{{=}}-f\partial_{\mu}\Phi,\qquad l ^{2}\,\stackrel{{\mathcal{N}}}{{=}}0, \tag{2.1}\]
with \(f>0\) as to have the vector future-pointing. The corresponding vector \(l^{\mu}\) is null and hypersurface orthogonal, hence it is also tangent to \(\mathcal{N}\) and geodetic,
\[l^{\mu}\nabla_{\mu}l^{\nu}\,\stackrel{{\mathcal{N}}}{{=}}\,kl^{ \nu}. \tag{2.2}\]
The hypersurface is thus naturally fibrated by null geodesics, and \(k=0\) if they are affinely parametrized. The chosen tangent vector is referred to as generator of \(\mathcal{N}\). We will assume that \(\mathcal{N}\) has topology \(I\times S\), where \(S=S^{2}\) and \(I\) is some interval in \(\mathbb{R}\). If \(I=\mathbb{R}\) in affine coordinates the hypersurface is called complete, and in this case all null geodesics extend indefinitely in both directions. It is called semi-complete if it extends indefinitely in one direction only, and has a boundary in the other direction caused for instance by the formation of caustics or crossings. We will often use adapted coordinates \((\Phi,x^{a})\), where \(x^{a}\), \(a=1,2,3\) are coordinates on the leaves of the \(\Phi\) foliation. The condition that \(\mathcal{N}\) is null then induces a partial gauge-fixing of the metric given by
\[g^{\Phi\Phi}:=g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi\,\stackrel{{ \mathcal{N}}}{{=}}0. \tag{2.3}\]
For space-like and time-like hypersurfaces, there is a canonical choice of normal with unit norm. This makes the normal independent of the embedding of the hypersurface, namely invariant under a change of parametrization \(\Phi\mapsto\Phi^{\prime}=\Phi F(x)\) with \(F\) smooth at \(\mathcal{N}\) that preserve the location of the boundary. No such preferred choice exists in the null case. The function \(f\) is thus arbitrary, and one has to check on a case by case basis whether a given quantity is independent of the embedding or not. The geometry of the hypersurface is on the other hand only sensitive to the equivalence class \([l=Al]\) of normals identified up to an arbitrary rescaling. This rescaling can be obtained in two independent ways. First, changing the choice of \(f\). Second, changing the embedding, which has the effect of multiplying \(f\) by \(F\).
For instance, the inaffinity \(k\) appearing in (2.2) is not a geometric property of a generic null hypersurface, since it depends on \(f\) and \(\Phi\). An explicit calculation using (2.1) gives in fact
\[k=\pounds_{l}\ln f-\frac{f}{2}\partial_{\Phi}g^{\Phi\Phi}, \tag{2.4}\]
written in adapted coordinates. It follows that \(k\) depends on the transversal derivative of the metric, namely it contains information about the extrinsic geometry. We can rewrite (2.4) in a more covariant
form if we parametrize an arbitrary extension of the normal as \(l=-fd\Phi+\Phi\,v\), then
\[k=\pounds_{l}\ln f-\frac{1}{2}\pounds_{n}l^{2}+\frac{1}{f}l\cdot v, \tag{2.5}\]
where \(n\) is any null vector such that \(n\cdot l=-1\). This expression is slightly misleading because it may give the impression that \(k\) depends on \(v\), but this dependence cancels out between the second and third term to give back (2.4).2 From this general expression one can also read off the special values when the extension is null everywhere (\(l^{2}=0\), \(v\neq 0\)), when it is hypersurface-orthogonal everywhere (\(v=0\)), or both. A typical example of the first special case is Kerr's principal null direction, hypersurface orthogonal only at the horizon. For the second special case, \(l\) is normal to a foliation that has a single null leaf, and for the third, \(l\) is normal to a null foliation.
Footnote 2: Independence of \(k\) from the extension of \(l\) can be also checked showing that \(l^{\prime}=l+\Phi v\) gives the same \(k\) as \(l\), and it means that different \(k\) imply different \(f\) at fixed spacetime metric.
This discussion is valid for a generic null hypersurface. If it is a Killing horizon in a spacetime with a global isometry, then the possibility of selecting as generator the preferred Killing vector of asymptotic unit-norm allows one to eliminate this arbitrariness and interpret \(k\) as surface gravity. If it is a non-expanding horizon, the arbitrariness in the normal can be reduced to constant rescaling. Every member of the equivalence class has \(k=0\) (not to be confused with the surface gravity of the NEH, which is the arbitrary parameter of the Weyl rescaling symmetry vector field). We recall that the definition of a NEH coincides with a shear-free and expansion-free null hypersurface in vacuum, but it is slightly more specific in the presence of matter, where it requires the stronger condition \(R_{\mu\nu}l^{\mu}\stackrel{{\mathcal{N}}}{{=}}\alpha l_{\nu}\) for some function \(\alpha\), as opposed to \(R_{\mu\nu}l^{\mu}l^{\nu}\stackrel{{\mathcal{N}}}{{=}}0\) satisfied by a shear-free and expansion-free null hypersurface. In this paper we will only deal with vacuum general relativity, hence we will use NEH as a synonym of a shear-free and expansion-free hypersurface.
The null vector \(n\) introduced in (2.5) is known as the rigging vector, and it is a convenient tool to work on null hypersurfaces. It allows one to use covariant expressions at all times and to avoid hypersurface indices, thus making the relation to spacetime objects transparent. It also allows one to use the Newman-Penrose (NP) formalism and the numerous results that have been derived in that language. To that end, we complete the pair \((l,n)\) to a doubly-null NP tetrad \((l,n,m,\bar{m})\) on \(\mathcal{N}\).
The downside of the rigging vector approach is its reliance on an arbitrary choice of auxiliary vector. But it is quite easy to check which quantities are independent of this choice. The arbitrariness is a 2-parameter family given by
\[n\to n+\bar{a}m+a\bar{m}+|a|^{2}l,\qquad m\to m+al,\qquad a\in\mathbb{C}. \tag{2.6}\]
Quantities which are invariant under (2.6) are independent of the choice of auxiliary rigging vector. For instance, it is easy to check that (2.5) is invariant. The map (2.6) is an internal Lorentz transformation of the NP tetrad that corresponds to the two translations of the ISO(2) little group stabilizing \(l\). We will refer to it as a class-I transformation (of the NP tetrad), following [21]. In this classification, class-II transformations are the two null translations of the ISO(2) little group stabilizing \(n\). They change \(l\) and disalign it from the normal to the hypersurface, and will not be considered in the rest of the paper. The remaining two internal transformations are the class-III spin-boost transformations
\[(l,n,m,\bar{m})\rightarrow(Al,A^{-1}n,e^{i\varphi}m,e^{-i\varphi}\bar{m}). \tag{2.7}\]
The boost transformation acts as a rescaling of the normal by an arbitrary real function \(A\). Therefore, quantities invariant under this boost are independent of the choices of \(f\) and of the embedding used when writing (2.1). This is not the case for the inaffinity, which transforms as \(k\to A(k+\pounds_{l}A)\).
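As a simple illustration of these statements, the defining scalar products of the NP tetrad can be checked symbolically for a flat-space tetrad; the Minkowski components below are an illustrative choice and are not tied to any particular hypersurface of the paper.

```python
import sympy as sp

# Illustrative flat-space check of the class-I (2.6) and class-III (2.7) transformations.
eta = sp.diag(-1, 1, 1, 1)                      # mostly-plus signature
dot = lambda u, v: (u.T * eta * v)[0]

s2 = sp.sqrt(2)
l  = sp.Matrix([1, 0, 0, 1]) / s2               # null generator
n  = sp.Matrix([1, 0, 0, -1]) / s2              # rigging vector, l.n = -1
m  = sp.Matrix([0, 1, sp.I, 0]) / s2            # complex dyad, m.mbar = 1
mb = m.conjugate()

a1, a2, A, phi = sp.symbols('a1 a2 A phi', real=True)
a, ab = a1 + sp.I * a2, a1 - sp.I * a2

# Class-I transformation (2.6): leaves l untouched
n1 = n + ab * m + a * mb + a * ab * l
m1 = m + a * l
print([sp.simplify(x) for x in (dot(n1, n1), dot(l, n1), dot(m1, m1), dot(m1, m1.conjugate()))])
# -> [0, -1, 0, 1]: the tetrad relations are preserved, so class-I invariant quantities
#    are independent of the choice of rigging vector.

# Class-III spin-boost (2.7): rescales l and n inversely, rotates m by a phase
l3, n3, m3 = A * l, n / A, sp.exp(sp.I * phi) * m
print(sp.simplify(dot(l3, n3)), sp.simplify(dot(m3, m3.conjugate())))
# -> -1, 1
```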
An important result of [1] is that the equivalence class
\[[l,k]=[Al,A(k+\pounds_{l}A)] \tag{2.8}\]
can be taken to be the universal background structure in the covariant phase space of metrics with a null hypersurface.3 That is, any two metrics with a null hypersurface admit a coordinate system in which they have the same (2.8). Elements in the universal structure (2.8) must thus be class-III invariant.
Footnote 3: This is referred to as universal boundary structure. We prefer the adjective background to emphasize that these are quantities that will not be varied in the phase space. The paper [1] also introduces a notion of ‘universal intrinsic structure’, based on purely intrinsic quantities, and which we do not consider here.
These internal Lorentz transformations are thus practical tools to discern the quantities that depend solely on the geometry of \({\cal N}\) from those that depend on additional structures or choices. To reiterate, class-I invariance means independence from the choice of rigging vector,4 and class-III invariance guarantees independence from rescaling the normal, namely from the choice of \(f\) and from reparametrizations \(\Phi\mapsto\Phi^{\prime}=\Phi F(x)\). One theme of this paper will be that lack of class I and class III invariance translates to anomalies in the covariant phase space.
Footnote 4: The extended structure of a null hypersurface plus a specific choice of normal is called Carollian structure in some literature, and the further extension including a specific choice of rigging vector a ‘ruled’ or rigged Carollian structure, see e.g. [24].
If one needs to select a specific rigging vector, there are two natural ways to do so, both common in the literature. The first is to require it to be parallel-transported along \(l\) on \({\cal N}\), see e.g. [25]. This choice is unique, and fixes the class-I transformation so that the NP spin-coefficient \(\pi\) vanishes.5 The second way is to require it to be adapted to a given \(2+1\) foliation of \({\cal N}\). This choice is again unique once the foliation is given. In this case the class-I transformation is fixed by setting to zero two of the three components of the pull-back of \(n\), thus making it hypersurface orthogonal within \({\cal N}\). It follows that the 2d planes spanned by \((m,\bar{m})\) integrate to the leaves of the foliation.
Footnote 5: Requiring \((m,\bar{m})\) to be also parallel-transported will further fix the spin part of the class-III so that the NP coefficient \(\epsilon\) is real. This is the same letter used below for the volume form, but being the first a scalar and the second a form no confusion should hopefully arise.
A volume form on \({\cal N}\) can be defined from the spacetime volume form \(\epsilon\) via
\[\epsilon=-l\wedge\epsilon_{{\cal N}}. \tag{2.9}\]
The conventional minus sign here follows from assuming \(l\) outgoing, and would be plus if incoming. The volume form \(\epsilon_{{\cal N}}\) is class-I invariant but not class-III invariant because it depends on \(f\). This formula defines actually an equivalence class of volume forms, related by adding any 3-form containing \(l\). A convenient representative of this equivalence class can be chosen using the rigging vector as
\[\epsilon_{{\cal N}}:=\underline{i_{n}\epsilon}=\frac{\sqrt{-g}}{f}d^{3}x, \tag{2.10}\]
where the second equality uses adapted coordinates \((\Phi,x^{a})\), and the arrow under the form means pull-back on \({\cal N}\). We will make this choice from now on. Written in this way, class-I invariance may not appear as obvious, but it follows from the pull-back and the fact that \(m^{\mu}\) is tangent to \({\cal N}\).
On \({\cal N}\), we also define the space-like area form
\[\epsilon_{S}:=i_{l}\epsilon_{{\cal N}}=i\underline{m}\wedge\bar{m},\qquad i_ {l}\epsilon_{S}=0,\qquad\qquad\epsilon_{{\cal N}}=-n\wedge\epsilon_{S}. \tag{2.11}\]
It is class-I invariant and defined independently of any choice of foliation of \(\mathcal{N}\). It satisfies
\[d\epsilon_{S}=\theta\epsilon_{\mathcal{N}}, \tag{2.12}\]
where \(\theta\) is the expansion of \(l\), as defined below. Notice that \(\epsilon_{S}\) so defined can contain components along the null direction, even if \(n\) is adapted to a foliation and \((m,\bar{m})\) are integrable. Choosing affine coordinates eliminates these components. From this equation one also derives the following useful identity,
\[(\pounds_{l}+\theta)X\,\epsilon_{\mathcal{N}}=d(X\epsilon_{S}). \tag{2.13}\]
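A quick way to derive this identity, using only \(i_{l}\epsilon_{S}=0\), (2.11) and (2.12):
\[d(X\epsilon_{S})=dX\wedge\epsilon_{S}+X\,d\epsilon_{S}=(\pounds_{l}X)\,\epsilon_{\mathcal{N}}+X\theta\,\epsilon_{\mathcal{N}},\]
where the first term is identified through its contraction with \(l\), namely \(i_{l}(dX\wedge\epsilon_{S})=(\pounds_{l}X)\epsilon_{S}\) together with \(i_{l}\epsilon_{\mathcal{N}}=\epsilon_{S}\), since a 3-form on \(\mathcal{N}\) is determined by its contraction with \(l\).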
The rigging vector is also handy to introduce a local projector on 2d space-like planes, given by
\[\gamma_{\mu\nu}:=g_{\mu\nu}+2l_{(\mu}n_{\nu)}=2m_{(\mu}\bar{m}_{\nu)}. \tag{2.14}\]
Its pull-back \(\underset{\leftarrow}{\gamma}{}_{\mu\nu}\), or \(\gamma_{ab}\) in hypersurface indices, coincides with the pull-back of the spacetime metric. This is the (degenerate) induced metric, whose null direction is given by \(l^{\mu}\) itself. The class-III invariant pair \((\underset{\leftarrow}{\gamma}{}_{\mu\nu},l^{\mu}\epsilon_{\mathcal{N}})\) contains six independent quantities, which are the analogue of the induced geometry in the non-degenerate case.
For the extrinsic geometry, we look at the pull-back of the gradient of the normal vector. This quantity gives the extrinsic curvature in the case of a space-like or time-like hypersurface. In the null case, \(W_{\mu}{}^{\nu}:=\nabla_{\underset{\leftarrow}{\mu}}l^{\nu}\) defines a purely hypersurface object, satisfying \(n^{\mu}W_{\mu}{}^{\nu}=0=W_{\mu}{}^{\nu}l_{\nu}\), and \(l^{\mu}W_{\mu}{}^{\nu}=kl^{\nu}\). It is related to the Weingarten map, which is the reason for the notation \(W\). The actual map is given using hypersurface indices as in [1, 9], but that definition is equivalent to ours in terms of covariant 4d indices. To see the geometric content of this map, it is convenient to use the rigging vector and decompose it as follows,
\[W_{\mu}{}^{\nu}:=\nabla_{\underset{\leftarrow}{\mu}}l^{\nu}\overset{\mathcal{N}}{=}\omega_{\underset{\leftarrow}{\mu}}l^{\nu}+\gamma_{\rho}^{\nu}B_{\underset{\leftarrow}{\mu}}{}^{\rho} \tag{2.15}\] \[=\big{(}(\bar{\alpha}+\beta)\bar{m}_{\underset{\leftarrow}{\mu}}-\epsilon n_{\underset{\leftarrow}{\mu}}\big{)}l^{\nu}-(\sigma\bar{m}_{\underset{\leftarrow}{\mu}}+\rho m_{\underset{\leftarrow}{\mu}})\bar{m}^{\nu}+\text{cc}.\]
The second line makes reference to the NP formalism (with mostly-plus signature, we use the conventions of [26]6), and the various tensors there appearing are:
Footnote 6: This formula can be found for example in references on the NP formalism (e.g. [21]). The NP formalism assumes an extension of \(l\) which is null everywhere, but since the derivative index is here pulled-back on \(\mathcal{N}\), it is valid for an arbitrary extension as well.
\[B_{\mu\nu}:=\gamma_{\mu}^{\rho}\gamma_{\nu}^{\sigma}\nabla_{\rho} l_{\sigma}=\frac{1}{2}\gamma_{\mu}^{\rho}\gamma_{\nu}^{\sigma}\pounds_{l}\gamma_{ \rho\sigma}\overset{\mathcal{N}}{=}\sigma_{\mu\nu}+\frac{1}{2}\gamma_{\mu\nu}\theta, \tag{2.16a}\] \[\sigma_{\mu\nu}:=\gamma_{\langle\mu}^{\rho}\gamma_{\nu\rangle}^{ \sigma}\nabla_{\rho}l_{\sigma}=-\bar{m}_{\mu}\bar{m}_{\nu}\sigma+cc,\qquad \theta:=2m^{(\mu}\bar{m}^{\nu)}\nabla_{\mu}l_{\nu}=-2\rho,\] (2.16b) \[\omega_{\mu}:=-\eta_{\mu}-kn_{\mu},\qquad\quad\eta_{\mu}:=\gamma_ {\mu}^{\rho}n^{\sigma}\nabla_{\rho}l_{\sigma}=-(\alpha+\bar{\beta})m_{\mu}+ cc,\qquad l^{\mu}\omega_{\mu}=k=2\text{Re}(\epsilon). \tag{2.16c}\]
Here \(B\) is the deformation tensor, whose antisymmetric part vanishes because \(l\) is hypersurface orthogonal at \(\mathcal{N}\), \(\sigma\) is the shear and \(\theta\) the expansion; \(\omega\) is the rotational 1-form of isolated and non-expanding horizons [26, 27], satisfying \(\omega\cdot l=k\); \(\eta\) is the connection 1-form on the normal time-like planes spanned by \((l,n)\), whereas the complementary quantity \(\alpha-\bar{\beta}\) is the 2-sphere connection of the covariant derivative \(\bar{\upsigma}\) used in NP calculus [28, 29]. \(\eta\) is sometimes called Hajicek 1-form [30], or twist, since it is related to the non-integrability of the normal planes via
\[\gamma_{\mu\nu}[n,l]^{\nu}=\eta_{\mu}-\gamma_{\mu}^{\nu}(l^{\rho}\nabla_{\rho} n_{\nu}-\partial_{\nu}\ln f). \tag{2.17}\]
The Weingarten map depends on a specific choice of normal and not on the equivalence class. It is nonetheless useful to describe the geometry of the null hypersurface. From (2.16a), we see that the shear and expansion are entirely determined by the induced metric and a choice of \(l\), so they are part of the intrinsic geometry. The dependence on the scaling of \(l\) can be eliminated if we look at the densitized expressions \(\sigma\epsilon_{\mathcal{N}}\) and \(\theta\epsilon_{\mathcal{N}}\) which are class-III invariant.
Transversal derivatives of the metric enter the inaffinity \(k\) and the twist \(\eta_{\mu}\). These quantities could be taken as the analogue of the extrinsic geometry, but they are ambiguous since they depend on the choice of \(l\) representative and not on the equivalence class. This dependence can be partially removed if we consider the following shifts,
\[\bar{k}:=k-l^{\mu}\partial_{\mu}\ln f=-\frac{f}{2}\partial_{\Phi}g^ {\Phi\Phi}, \tag{2.18}\] \[\bar{\eta}_{\mu}:=\eta_{\mu}+\gamma^{\nu}_{\mu}\partial_{\nu}\ln f =\gamma_{\mu\nu}([n,l]^{\nu}+l^{\rho}\nabla_{\rho}n^{\nu})=m_{\mu}(\bar{m}_{ \nu}[n,l]^{\nu}+\pi)+\mathrm{cc}, \tag{2.19}\]
where \(\pi\) here is one of the NP coefficients. \(\bar{\eta}_{\mu}\) and \(\bar{k}\epsilon_{\mathcal{N}}\) are invariant under changes of \(f\), but not under changes of embedding \(\Phi\to\Phi F(x)\). Therefore they are still not class-III invariant, but at least satisfy the weaker requirement of being independent of the choice of normal representative at fixed embedding.7 If we keep the embedding fixed, \(\bar{k}\epsilon_{\mathcal{N}}\) is fully unambiguous. However \(\bar{\eta}_{\mu}\) is not, because it inherits from \(\eta_{\mu}\) a dependence on the rigging vector, hence it is still not a genuine measure of the extrinsic geometry of \(\mathcal{N}\). In fact, even though the Weingarten map is independent of the choice of rigging vector, the decomposition we used on the right-hand side of (2.15) introduces a dependence on it: only \(\theta\), \(k\) and (the scalar contraction) \(\sigma\) are class-I invariant, whereas \(\sigma_{\mu\nu}\), \(\eta_{\mu}\) and \(\omega_{\mu}\) are not. For convenience, the transformation properties of all quantities are summarized in Table 1, with the details reported in Appendix A.
Footnote 7: This is consistent with the statement in [31] that a quantity like \(\bar{k}\epsilon_{\mathcal{N}}\) here is class-III boost invariant, because that paper works with a fixed 2+2 foliation.
The only case in which (the pull-back of) \(\bar{\eta}_{\mu}\) is class-I invariant is on a non-expanding horizon with \(k=0\). And in fact it characterizes the shape of a non-expanding horizon via the Noether charge construction [5]. To use it as a measure of the extrinsic geometry of a general \(\mathcal{N}\), one has to fix the class-I gauge freedom. If we do so taking \(n\) parallel transported by \(l\) the NP spin coefficient \(\pi\) vanishes and can identify the twist \(\bar{\eta}_{\mu}\) (or equivalently \(\eta_{\mu}\) with a gradient normal representative) with the non-integrability of the time-like planes, thanks to (2.19). Below we will however find it more convenient to fix instead \(n\) to be adapted to a foliation of \(\mathcal{N}\), and we will then show that \(\bar{\eta}_{\mu}\) determines the evolving Noether charges associated with the leaves of that foliation.
We conclude with two more remarks about the Weingarten map. First, its trace is given by
\[W:=W_{\underset{\leftarrow}{\mu}}^{\mu}=\nabla_{\mu}l^{\mu}+\frac{1}{2} \partial_{n}l^{2}=\theta+k, \tag{2.20}\]
and provides the boundary term for the variational principle with Dirichlet boundary conditions on a null hypersurface [32, 8, 33, 34], the equivalent of the Gibbons-Hawking-York term. The discrepancy between the trace of the Weingarten map and the divergence of the normal may look unfamiliar, but it would occur also in the time-like case if the normal \(\tau\) is not of unit-norm off the hypersurface: \(K=\nabla_{\mu}\tau^{\mu}+\frac{1}{2}\partial_{\tau}\tau^{2}\), where \(K_{\mu\nu}:=q^{\rho}_{\mu}\nabla_{\rho}\tau_{\nu}\).
Second, an alternative covariant construction of the Weingarten map can be given in terms of the "half-projector" \(\Pi_{\mu}{}^{\nu}:=\gamma^{\nu}_{\mu}-n_{\mu}l^{\nu}\), defining \(\tilde{W}_{\mu}{}^{\nu}:=\Pi_{\mu}{}^{\rho}\nabla_{\rho}l^{\nu}\). This tensor is rigging-vector dependent, but not its pull-back on the hypersurface. This pull-back is the definition used in [9], and coincides with (2.15). The trace also coincides with (2.20), namely \(\tilde{W}_{\mu}{}^{\mu}=W.\)
### Foliations
The volume form \(\epsilon_{\mathcal{N}}\) is not class-III invariant, and depends on the full spacetime metric determinant \(\sqrt{-g}\). On non-degenerate hypersurfaces choosing a unit-norm normal makes the volume form depend only on the determinant of the induced metric. The unit-norm option does not exist in the null case, but one can achieve a similar result introducing a \(2+1\) foliation of \(\mathcal{N}\). The foliation can be arbitrary, provided that its leaves are space-like. We take it to be defined by the level sets of some scalar function \(\lambda\), and denote \(x^{a}=(\lambda,x^{A})\) the coordinates adapted to it.
Note that if we take this choice together with the foliation defined by \(\Phi\), we obtain a spacetime coordinate system \((\Phi,\lambda,x^{A})\) adapted to a 2+2 foliation of spacetime (see e.g. [35]). Our choice of letters for these coordinates is meant to preserve generality of the formalism with respect to common applications. For example, to make the link with the Schwarzschild metric in retarded Bondi coordinates we would take \((\lambda,\Phi)=(u,r-2M)\) and \(\mathcal{N}\) is the white hole horizon, or using advanced time instead \((\lambda,\Phi)=(v,2M-r)\) and \(\mathcal{N}\) is the black hole horizon. We can also keep assuming \(l^{\mu}\) future pointing, namely \(g^{\Phi\lambda}<0\), without loss of generality. In the first case this leads to \(g^{ur}<0\), in the second case to \(g^{vr}>0\). Or if \(\mathcal{N}\) is a null cone in Minkowski in a doubly-null foliation, we can identify \(\Phi=u:=t-r\) and \(\lambda=v:=t+r\). Since \(\lambda\) is a (null) time, we will refer to \(\partial_{\lambda}\) as a time derivative, and use a dot to indicate it.
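As an illustration, the advanced-time identification can be checked with a short symbolic computation on the Schwarzschild metric in Eddington-Finkelstein coordinates; the code below is a minimal sketch and its variable names are ours, not notation used elsewhere in the paper.

```python
import sympy as sp

# Schwarzschild in advanced Eddington-Finkelstein coordinates (v, r, theta, varphi):
# ds^2 = -(1 - 2M/r) dv^2 + 2 dv dr + r^2 dOmega^2.
v, th, ph = sp.symbols('v theta varphi', real=True)
r, M = sp.symbols('r M', positive=True)
g = sp.Matrix([
    [-(1 - 2 * M / r), 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, r**2, 0],
    [0, 0, 0, r**2 * sp.sin(th)**2],
])
ginv = g.inv()

# Identify (lambda, Phi) = (v, 2M - r), so that dPhi = -dr:
g_PhiPhi = sp.simplify(ginv[1, 1])      # = 1 - 2M/r: vanishes at r = 2M, so Phi = 0 is a null hypersurface
g_PhiLam = sp.simplify(-ginv[0, 1])     # = -1 < 0: consistent with l future pointing, i.e. g^{vr} > 0
print(g_PhiPhi, g_PhiLam, g_PhiPhi.subs(r, 2 * M))
```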
In these coordinates,
\[\sqrt{-g}=-\frac{1}{g^{\Phi\lambda}}\sqrt{\gamma}, \tag{2.21}\]
where \(\gamma\) is the determinant of the space-like metric \(\gamma_{AB}\) on the 2d leaves. Hence,
\[\epsilon_{\mathcal{N}}=\frac{\sqrt{\gamma}}{l^{\lambda}}d\lambda d^{2}x. \tag{2.22}\]
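Explicitly, this follows from (2.10) and (2.21), once one notes that in the adapted coordinates \(l^{\lambda}=l^{\mu}\partial_{\mu}\lambda=-fg^{\Phi\lambda}\):
\[\epsilon_{\mathcal{N}}=\frac{\sqrt{-g}}{f}\,d^{3}x=-\frac{\sqrt{\gamma}}{fg^{\Phi\lambda}}\,d\lambda d^{2}x=\frac{\sqrt{\gamma}}{l^{\lambda}}\,d\lambda d^{2}x.\]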
We see that it is a completely intrinsic quantity, but it is still not class-III invariant and contains more information than the 2d area form \(\gamma\): it depends also on the extent of \(l\) via \(l^{\lambda}\). If we now choose \(f=-1/g^{\Phi\lambda}\), we obtain \(l^{\lambda}=1\) and
\[\epsilon_{\mathcal{N}}=\sqrt{\gamma}d\lambda d^{2}x. \tag{2.23}\]
\begin{table}
\begin{tabular}{c|c c|c c} quantity & Rigging-vector & Rescaling & Boost & Spin \\ & independence & independence & weight & weight \\ \hline \(\sigma_{\mu\nu}\) & ✗ & ✗ & 1 & 0 \\ \(\sigma\) & ✓ & ✗ & 1 & 2 \\ \(\theta\) & ✓ & ✗ & 1 & 0 \\ \(\epsilon_{\mathcal{N}}\) & ✓ & ✗ & \(-1\) & 0 \\ \(k\) & ✓ & ✗ & 1+inhom & 0 \\ \(\eta_{\mu}\) & ✗ & ✗ & 0+inhom & 0 \\ \(\alpha+\bar{\beta}\) & ✗ & ✗ & 0+inhom & -1 \\ \(\gamma_{\mu\nu}[l,n]^{\nu}\) & ✗ & ✓ & 0 & 0 \\ \end{tabular}
\end{table}
Table 1: _Behaviour under class-I and class-III transformations. Quantities that are not invariant under (2.7) can be characterized in terms of their boost and spin weights, respectively \(a\) and \(b\), defined by \(X\to A^{a}e^{ib\varphi}X\) (up to possible inhomogeneous terms) under (2.7). The boost weight can also be interpreted as a conformal weight, for instance in the case of future null infinity where the normal is the gradient of the conformal rescaling of the metric._
Notice that \(\lambda\) does not need to be a parameter along the null geodesics. In general after making these choices,
\[l^{a}=(1,-b^{A}),\qquad\underline{g_{ab}}=\left(\begin{array}{cc}\gamma_{AB}b^{ A}b^{B}&\gamma_{AB}b^{B}\\ &\gamma_{AB}\end{array}\right), \tag{2.24}\]
and the vector \(b^{A}\) acts as a shift vector for the 2+1 foliation defined by \(\lambda\). If we partially fix the coordinate gauge requiring that \(x^{A}\) are conserved along the generators, then we are setting the shift vector to zero, and \(l^{a}=(1,0,0)\). In terms of the spacetime metric, this partial gauge-fixing reads \(g^{\Phi A}\stackrel{{\mathcal{N}}}{{=}}0\). We refer to it as partial Bondi gauge, as in [36, 37]. The foliation-dependent choice \(f=-1/g^{\Phi\lambda}\) was referred to as 'canonical' normalization in [34], for its analogy with the ADM space-like case, since \(1/g^{\Phi\lambda}\) plays the role of lapse in the \(3+1\) decomposition with null slices [38].
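As a consistency check of (2.24), \(l^{a}=(1,-b^{A})\) is indeed the degenerate direction of the induced metric:
\[g_{\lambda b}l^{b}=\gamma_{AB}b^{A}b^{B}-\gamma_{AB}b^{B}b^{A}=0,\qquad g_{Ab}l^{b}=\gamma_{AB}b^{B}-\gamma_{AB}b^{B}=0,\]
as it must be, since the pull-back of the metric annihilates the generator.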
The simplification (2.23) gives the volume form a structure similar to that of non-degenerate hypersurfaces (albeit in terms of a codimension-2 determinant), and it is often used in the literature, e.g. [39]. It is valid only in the foliation chosen, but in the partial Bondi gauge it remains valid for any new foliation obtained by a super-translation \(\lambda^{\prime}=\lambda+T(x^{A})\). We will however not make this choice in the following, neither for \(f\) nor for the coordinates, and will keep fully general \(f\) and \(\epsilon_{\mathcal{N}}\).
A common choice of \(2+1\) foliation is the one induced by the intersections of \(\mathcal{N}\) with a space-like foliation. In this case the cross-sections of \(\mathcal{N}\) provide the boundary \(\partial\Sigma\) of each 3d space-like leaf \(\Sigma\). Let us denote by \(\tau\) the unit-norm normal to the space-like foliation, and parametrize the scalar product as follows,
\[l\cdot\tau\stackrel{{\mathcal{N}}}{{=}}-\frac{1}{\sqrt{2}}e^{- \hat{\beta}}. \tag{2.25}\]
The overall minus sign is due to the fact that both vectors are future pointing. The quantity \(\hat{\beta}\) has no geometric meaning per se, since it is not class-III invariant. It can be used to measure the change of geometric tilt between \(\mathcal{N}\) and \(\Sigma\) only if \(l\) is kept fixed. The unit-norm normal to the cross-section within \(T\Sigma\) is
\[\hat{r}^{\mu}\stackrel{{\mathcal{N}}}{{=}}\pm\sqrt{2}e^{\hat{ \beta}}q_{\nu}^{\mu}l^{\nu},\qquad q_{\mu\nu}:=g_{\mu\nu}+\tau_{\mu}\tau_{\nu}, \tag{2.26}\]
where the sign is plus if \(\mathcal{N}\) is the outgoing null hypersurface from the boundary of \(\Sigma\), and minus if it is the incoming one. It can be used to define a rigging vector adapted to \(\partial\Sigma\), which is given by
\[n=\frac{e^{\hat{\beta}}}{\sqrt{2}}(\tau\mp\hat{r}). \tag{2.27}\]
Now \((l,n)\) and \((\tau,\hat{r})\) provide two possible bases for the time-like plane normal to \(\partial\Sigma\). This change of basis is used to determine the corner terms required in the action by the variational principle.
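One can verify that (2.27) is indeed null and correctly normalized against \(l\): using \(\tau^{2}=-1\), \(\hat{r}^{2}=1\), \(\tau\cdot\hat{r}=0\), (2.25), and \(l\cdot\hat{r}\stackrel{{\mathcal{N}}}{{=}}\pm e^{-\hat{\beta}}/\sqrt{2}\) from (2.26),
\[n^{2}=\frac{e^{2\hat{\beta}}}{2}\big(\tau^{2}+\hat{r}^{2}\big)=0,\qquad n\cdot l=\frac{e^{\hat{\beta}}}{\sqrt{2}}\big(\tau\cdot l\mp\hat{r}\cdot l\big)=\frac{1}{2}(-1-1)=-1.\]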
### Affine coordinates
The fact that a null hypersurface is ruled by null geodesics endows it with a preferred class of foliations, in which \(\lambda\) is a parameter along the geodesics, \(l^{\mu}\partial_{\mu}\lambda=1\). To use this parameter as one of the coordinates, we fix an initial cross-section of \(\mathcal{N}\), say at \(\lambda=0\), define angular coordinates \(x^{A}\) there and then Lie-drag them along \(\mathcal{N}\). This defines a coordinate system with vanishing shift vector,
\[l^{a}=(1,0,0),\qquad\underline{g_{ab}}=\left(\begin{array}{cc}0&0\\ &\gamma_{AB}\end{array}\right). \tag{2.28}\]
These coordinates satisfy the partial Bondi gauge. We have \(l_{\mu}=(g_{\lambda\Phi},0,0,0)\) and \(g_{\Phi\lambda}=1/g^{\Phi\lambda}\), therefore this choice of tangent vector corresponds to the 'canonical normalization' for \(f\). This is an example of a situation in which \(f\) is metric-dependent. We can complete this partial gauge fixing on \(\mathcal{N}\) with a fourth condition, for instance redefining \(\Phi\) so that \(g_{\lambda\Phi}\stackrel{{\mathcal{N}}}{{=}}-1\). The metric now satisfies
\[g_{\lambda\lambda}=O(\Phi),\qquad g_{\Phi\lambda}=-1+O(\Phi),\qquad g_{\lambda A }=O(\Phi), \tag{2.29}\]
and it is fully gauge-fixed on \(\mathcal{N}\).
The coordinate system can be further specialized if we require the parameter to be affine, namely
\[l_{o}:=\frac{d}{d\lambda},\qquad l_{o}^{\mu}\nabla_{\mu}l_{o}^{\nu}\stackrel{{ \mathcal{N}}}{{=}}0. \tag{2.30}\]
This condition fixes the first-order extension of the metric component \(g_{\lambda\lambda}\) so that \(\partial_{\Phi}g_{\lambda\lambda}\stackrel{{\mathcal{N}}}{{=}}2 \partial_{\lambda}g_{\Phi\lambda}\).8 Since one can always choose the adapted coordinate \(\Phi\) such that \(g_{\Phi\lambda}=-1+O(\Phi)\), in that gauge we have \(g_{\lambda\lambda}=g^{\Phi\Phi}=O(\Phi^{2})\). 9 At this point,
Footnote 8: This follows from \(\Gamma^{\mu}_{\lambda\lambda}\stackrel{{\mathcal{N}}}{{=}}0\), which by invertibility of the metric is equivalent to \(2\partial_{\lambda}g_{\mu\lambda}-\partial_{\mu}g_{\lambda\lambda}\stackrel{{ \mathcal{N}}}{{=}}0\). This is identically satisfied by (2.28) for \(\mu=(\lambda,A)\), and thus reduces to the single equation given in the text.
Footnote 9: Notice that this would be a ‘generalized’ diffeomorphism, not invertible at the hypersurface, pretty much like going from static Schwarzschild coordinates to Eddington-Finkelstein is singular at the horizon.
\[g_{\lambda\lambda}=O(\Phi^{2}),\qquad g_{\Phi\lambda}=-1+O(\Phi),\qquad g_{ \lambda A}=O(\Phi), \tag{2.31}\]
and the rest of the metric is arbitrary. The condition of affinity can always be imposed via gauge-fixing, but we see that it is not a characteristic of the hypersurface coordinates alone, since it involves the first-order extension of the metric.
In the affine coordinate system, any normal vector in the equivalence class satisfies
\[l^{\mu}\stackrel{{\mathcal{N}}}{{=}}fl_{o}^{\mu} \tag{2.32}\]
and
\[k=l^{\mu}\partial_{\mu}\ln f. \tag{2.33}\]
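The latter follows in one line from (2.30) and (2.32), since only tangential derivatives of \(l\) are involved:
\[l^{\mu}\nabla_{\mu}l^{\nu}\stackrel{{\mathcal{N}}}{{=}}fl_{o}^{\mu}\nabla_{\mu}(fl_{o}^{\nu})=(\pounds_{l_{o}}f)\,l^{\nu}+f^{2}\,l_{o}^{\mu}\nabla_{\mu}l_{o}^{\nu}=(\pounds_{l}\ln f)\,l^{\nu}.\]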
Hence it is affine iff \(f\) is chosen constant in \(\lambda\), namely \(\pounds_{l}f=0\). Furthermore since \(\partial_{\Phi}g^{\Phi\Phi}\stackrel{{\mathcal{N}}}{{=}}0\), any extension of \(l\) with \(v\cdot l\stackrel{{\mathcal{N}}}{{=}}0\) satisfies \(\partial_{n}l^{2}\stackrel{{\mathcal{N}}}{{=}}0\), namely it is null at first-order off the hypersurface. We also recall that in affine coordinates \(\bar{k}=0\).
This coordinate system can be extended to a neighbourhood of \(\mathcal{N}\) as follows (see e.g. [5]). We shoot geodesics off \(\mathcal{N}\), and Lie drag \(x^{a}\) along them. Namely, we have
\[n_{o}^{\mu}=\frac{\partial}{\partial\Phi},\qquad n_{o}^{\mu}\nabla_{\mu}n_{o} ^{\nu}=k_{n_{o}}n_{o}^{\nu},\qquad\pounds_{n_{o}}x^{a}=0. \tag{2.34}\]
We can then completely fix the bulk coordinate gauge freedom if we require that \((i)\) \(n_{o}\) is null everywhere, \((ii)\) \(\Phi\) is affine (hence \(k_{n_{o}}=0\)), and \((iii)\) it is the gradient of the foliation of constant \(\lambda\) on \(\mathcal{N}\), namely \(n_{o}\stackrel{{\mathcal{N}}}{{=}}-d\lambda\). The last condition in particular means that \(n_{o}\) gives a choice of rigging vector for \(l_{o}\) adapted to the \(\lambda\) foliation. In terms of metric components, \((i)\) fixes \(g_{\Phi\Phi}=0\), then \((ii)\) requires \(\Gamma^{\mu}_{\Phi\Phi}=0\), which in turn implies \(\partial_{\Phi}g_{\Phi\mu}=0\). Finally, \((iii)\) fixes \(g_{\Phi\mu}\stackrel{{\mathcal{N}}}{{=}}(-1,0,0,0)\). The
resulting coordinates \((\lambda,\Phi,x^{A})\) are defined in a caustic-free open neighbourhood of \({\cal N}\), in which the metric reads
\[g_{\mu\nu}=\left(\begin{array}{ccc}\Phi^{2}F&-1&\Phi P_{A}\\ &0&0\\ &&\gamma_{AB}\end{array}\right),\qquad g^{\mu\nu}=\left(\begin{array}{ccc}0&-1 &0\\ &-\Phi^{2}(F-P^{2})&\Phi P^{A}\\ &&\gamma^{AB}\end{array}\right), \tag{2.35}\]
where \(F,P_{A}\) and \(\gamma_{AB}\) are arbitrary metric coefficients. With this gauge fixing, \(\bar{\eta}_{A}=\Gamma^{\Phi}_{A\Phi}=-P_{A}/2\) and \(\bar{k}=0\). The latter makes it manifest that the extrinsic geometry captured by the inaffinity, or more precisely by \(\bar{k}\), describes whether the hypersurface is described in affine coordinates or not, thus being faithful to its name. We stress that what makes \(\lambda\) an affine parameter on \({\cal N}\) is not so much \(g_{\lambda\Phi}=-1\) but \(g_{\lambda\lambda}=O(\Phi^{2})\). This coordinate system can always be reached, and if one restricts the residual diffeomorphisms to preserve it, the whole extension of \(\xi\) in the neighbourhood is fixed. On the other hand, if one relaxes it and requires only the minimal conditions (2.31), only the first order extension \(\hat{\xi}^{\Phi}\) is fixed, whereas \(\hat{\xi}^{\lambda}\) and \(\hat{\xi}^{A}\) remain arbitrary.
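The form of the inverse metric in (2.35) can be verified pointwise with a short symbolic computation; the component symbols below are placeholders for the arbitrary functions \(F\), \(P_{A}\) and \(\gamma_{AB}\), since only the algebraic inversion is being checked.

```python
import sympy as sp

# Symbolic check of the inverse metric in (2.35), coordinates ordered (lambda, Phi, x^1, x^2).
Phi, F = sp.symbols('Phi F')
P1, P2 = sp.symbols('P1 P2')
g11, g12, g22 = sp.symbols('gamma11 gamma12 gamma22')

gam = sp.Matrix([[g11, g12], [g12, g22]])
gam_inv = gam.inv()
P_lo = sp.Matrix([P1, P2])            # P_A
P_up = gam_inv * P_lo                 # P^A = gamma^{AB} P_B
P2sq = (P_lo.T * P_up)[0]             # P^2 = P_A P^A

g = sp.Matrix([
    [Phi**2 * F, -1, Phi * P1, Phi * P2],
    [-1, 0, 0, 0],
    [Phi * P1, 0, g11, g12],
    [Phi * P2, 0, g12, g22],
])

g_inv_claim = sp.Matrix([
    [0, -1, 0, 0],
    [-1, -Phi**2 * (F - P2sq), Phi * P_up[0], Phi * P_up[1]],
    [0, Phi * P_up[0], gam_inv[0, 0], gam_inv[0, 1]],
    [0, Phi * P_up[1], gam_inv[1, 0], gam_inv[1, 1]],
])

print(sp.simplify(g * g_inv_claim - sp.eye(4)))   # expected: zero matrix
```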
We can now choose an extension of \(l\) such that
\[l^{\mu}=fl^{\mu}_{o} \tag{2.36}\]
everywhere in the chart, and not only at \({\cal N}\). This is achieved taking \(v=\Phi Fd\lambda+P_{A}dx^{A}\). This extension is not hypersurface-orthogonal nor null nor geodesic, except at \({\cal N}\). But it satisfies \(l\cdot v\stackrel{{{\cal N}}}{{=}}0\) and it is thus null at first order around \({\cal N}\).
Summarizing, affine coordinates on \({\cal N}\) depend on extrinsic properties of the metric, and give us (2.31) and (2.33). The normal in these coordinates reads (2.32), is in general not null at first order, but this can be easily achieved choosing for instance the extension (2.36).10
Footnote 10: Another convenient extension is \(dl\stackrel{{{\cal N}}}{{=}}0\), which implies instead \(v=-df+\Phi v^{\prime}\), namely \(l=-d(f\Phi)+\Phi^{2}v^{\prime}\) is a gradient on \({\cal N}\), and \(v\cdot l\stackrel{{{\cal N}}}{{=}}-\pounds_{l}f\). Then choosing \(\Phi^{\prime}=f\Phi\) or more in general \(f\) time independent is also enough to have \(v\cdot l\stackrel{{{\cal N}}}{{=}}0\) hence \(\partial_{n}l^{2}\stackrel{{{\cal N}}}{{=}}0\). So taking \(dl\stackrel{{{\cal N}}}{{=}}0=\pounds_{l}f\) is a solution other than (2.36) to have \(\partial_{n}l^{2}\stackrel{{{\cal N}}}{{=}}0\).
## 3 Null symplectic potential
We start from the standard Einstein-Hilbert symplectic potential
\[\theta^{\mbox{\tiny EH}}=\frac{1}{3!}\theta^{\mbox{\tiny EH}\mu}\epsilon_{\mu \nu\rho\sigma}\ dx^{\nu}\wedge dx^{\rho}\wedge dx^{\sigma},\qquad\theta^{ \mbox{\tiny EH}\mu}=2g^{\rho[\sigma}\delta\Gamma^{\mu]}_{\rho\sigma}, \tag{3.1}\]
and consider the most general expression for its pull-back on a null hypersurface. This was computed for instance in [34],11 and reads
Footnote 11: With \(\omega\) here defined with the opposite sign, so as to match [26, 27]. We took the time to accurately translate notations to prove that it is indeed equivalent to the one computed in [32], including the corner term, thus answering the question left open in [34]. In doing so we realized that the statement in [34] that taking an extension with \(\partial_{n}l^{2}\neq 0\) produces an extra term \(\partial_{n}l^{2}n^{\mu}\delta l_{\mu}\) in \(\theta^{\mbox{\tiny EH}}\) was incorrect. We thank Laurent Freidel for pointing this out to us. The final expression generalizes the one of [8] which assumes \(\delta l^{\mu}=0\), and the one of [31] which assumes \(\delta l_{\mu}=-n^{\rho}l^{\sigma}\delta g_{\rho\sigma}l_{\mu}\), the latter implying that \(n_{\mu}\delta l^{\mu}=0\). It also generalizes the one of [10] – contrary to what is stated there – which assumes \(\delta l_{\mu}=0\), see (B.2). We will come back to these restrictions and their motivations below. We hope that no confusion arises because the same letter \(\theta\) appears as both the symplectic potential integrand and the expansion of \(l\). The risk should be reduced by the fact that the letter used for the symplectic potential always comes with labels such as \({}^{\mbox{\tiny EH}}\) or \({}^{\prime}\).
\[\underline{\theta}^{\mbox{\tiny EH}}=\big{[}(\sigma^{\mu\nu}+\frac{\theta}{2} \gamma^{\mu\nu})\delta\gamma_{\mu\nu}-2\omega_{\mu}\delta l^{\mu}+2\delta\left( \theta+k\right)\big{]}\epsilon_{\cal N}+d\vartheta^{\mbox{\tiny EH}}. \tag{3.2}\]
This expression holds for arbitrary variations on the null-hypersurface: the only restriction made is to preserve the null nature of the hypersurface, namely
\[l_{\mu}\delta l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0. \tag{3.3}\]
In particular, it is valid for a field-dependent \(f\), so that
\[\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,\delta\ln f\,l_{\mu} \tag{3.4}\]
doesn't need to vanish. If \(f\) is field-independent, \(\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\) or equivalently \(n^{\mu}\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\).
The expression (3.2) does not depend on rescalings of \(l\) nor on the choice of auxiliary vector \(n\). Independence from rescalings follows from its invariance under the class-III spin-boost transformations (2.7), and implies independence from changes of embeddings \(\Phi\to\Phi^{\prime}(\Phi)\). Independence from \(n\) follows from the invariance under class-I Lorentz transformations (2.6). See App. A for proofs. The invariance is a property of the full expression only; the individual quantities are not invariant, as summarized in Table 1. This is relevant for various considerations made below.
The variation of the inaffinity can be written in terms of \(\delta l_{\mu}\) and \(\delta l^{\mu}\):
\[\delta k =kn^{\mu}\delta l_{\mu}-2n^{\mu}\nabla_{(\mu}l_{\nu)}\delta l^{ \nu}-2n^{\mu}l^{\nu}\nabla_{(\mu}\delta l_{\nu)}+\frac{1}{2}n^{\mu}\nabla_{\mu }\delta l^{2}\] \[=(kn^{\mu}+\frac{1}{2}n^{\nu}\nabla_{\nu}l^{\mu}-\frac{1}{2}l^{ \mu}n^{\nu}\nabla_{\nu}-n^{\mu}l^{\nu}\nabla_{\nu})\delta l_{\mu}-(n^{\nu} \nabla_{\mu}l_{\nu}+\frac{1}{2}n^{\nu}\nabla_{\nu}l_{\mu}-\frac{1}{2}l_{\mu}n^ {\nu}\nabla_{\nu})\delta l^{\mu}. \tag{3.5}\]
Notice the presence of normal derivatives acting on the variations, which implies that \(\delta k\) varies even if we fix both \(l_{\mu}\) and \(l^{\mu}\) on the hypersurface:
\[\delta l^{\mu}=\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\qquad\Rightarrow\qquad\delta k=-\frac{1}{2}n^{\mu}\nabla_{\mu}(l_{\nu}l_{\rho}\delta g^{\nu\rho}). \tag{3.6}\]
This vanishes if we restrict the variations to preserve affine coordinates. Therefore \(\delta k\) is an independent variation because it captures the possibility of varying the metric between the forms (2.29) and (2.31), which are both consistent with fixing \(l_{\mu}\) and \(l^{\mu}\) on the hypersurface.
Finally, the corner term is [31, 34]
\[\vartheta^{\mbox{\tiny EH}}:=n^{\mu}\delta l_{\mu}\epsilon_{S}-i_{\delta l} \epsilon_{\mathcal{N}}=(n^{\mu}\delta l_{\mu}+n_{\mu}\delta l^{\mu})\,\epsilon _{S}-n\wedge i_{\delta l}\epsilon_{S}. \tag{3.7}\]
The last term vanishes if we pull back on a space-like cross-section with \(n\) adapted to it.
### Phase space polarizations
We would like to manipulate the RHS of (3.2) so to put it in the form
\[\underline{\theta}=\theta^{\prime}-\delta\ell+d\vartheta, \tag{3.8}\]
where \(\theta=\theta^{\mbox{\tiny EH}}\), and \(\theta^{\prime}=p\delta q\) for a given choice of polarization of the (kinematical) phase space on the boundary.12 This form will be useful to discuss two different but related contexts: the variational principle, and the definition of charges using covariant phase space methods.
If we attempt to interpret (3.2) as a symplectic potential in \(p\delta q\) form, we see that the \(q\)'s appearing are not independent, since \(\gamma_{\mu\nu}\) also determines \(\theta\) through (2.16a). To put it in diagonal form, we need two steps. The first is to observe that the variation of the volume form has two components,
\[\delta\epsilon_{\mathcal{N}}=\left(\frac{1}{2}\gamma^{\mu\nu}\delta\gamma_{\mu \nu}+n_{\mu}\delta l^{\mu}\right)\epsilon_{\mathcal{N}}, \tag{3.9}\]
as can be established starting from the identity
\[\frac{1}{2}g^{\mu\nu}\delta g_{\mu\nu}=\frac{1}{2}\gamma^{\mu\nu}\delta\gamma_ {\mu\nu}+n_{\mu}\delta l^{\mu}-n^{\mu}\delta l_{\mu}. \tag{3.10}\]
Using (3.9), the pull-back (3.2) can be rewritten in the form
\[\not{\varrho}^{\text{\tiny EH}}=\big{[}\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+ \pi_{\mu}\delta l^{\mu}+2\delta(\theta+k)\big{]}\epsilon_{\mathcal{N}}+ \theta\delta\epsilon_{\mathcal{N}}+d\vartheta^{\text{\tiny EH}}, \tag{3.11}\]
where
\[\pi_{\mu}:=-2\left(\omega_{\mu}+\frac{\theta}{2}n_{\mu}\right)=2\left(\eta_{ \mu}+\left(k-\frac{\theta}{2}\right)n_{\mu}\right). \tag{3.12}\]
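The two expressions in (3.12) agree by virtue of (2.16c):
\[-2\Big(\omega_{\mu}+\frac{\theta}{2}n_{\mu}\Big)=2\eta_{\mu}+2kn_{\mu}-\theta n_{\mu}=2\Big(\eta_{\mu}+\Big(k-\frac{\theta}{2}\Big)n_{\mu}\Big).\]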
The second step is to integrate the third term by parts in field space. This leads to
\[\not{\varrho}^{\text{\tiny EH}}=\theta^{\text{\tiny D}}-\delta\ell^{\text{ \tiny D}}+d\vartheta^{\text{\tiny EH}}, \tag{3.13}\]
where
\[\theta^{\text{\tiny D}}:=\sigma^{\mu\nu}\delta\gamma_{\mu\nu}\epsilon_{ \mathcal{N}}+\pi_{\mu}\delta l^{\mu}\epsilon_{\mathcal{N}}-(\theta+2k)\delta \epsilon_{\mathcal{N}}, \tag{3.14}\]
and
\[\ell^{\text{\tiny D}}:=-2W\epsilon_{\mathcal{N}}=-2(\theta+k)\epsilon_{ \mathcal{N}}=-2k\epsilon_{\mathcal{N}}-2d\epsilon_{S}. \tag{3.15}\]
The last equality follows from (2.12), and can be used to simplify the boundary term by reabsorbing its dependence on the expansion into the corner term, working with
\[\ell^{\text{\tiny D}\prime}=-2k\epsilon_{\mathcal{N}},\qquad\vartheta^{\text {\tiny EH}\prime}=\vartheta^{\text{\tiny EH}}+2\delta\epsilon_{S}. \tag{3.16}\]
We can now identify \(\theta^{\prime}=\theta^{\text{\tiny D}}\), which is in diagonal form \(p\delta q\) with \(q=(\underline{\gamma}_{\mu\nu},l^{\mu})\). The \(q\) terms only involve the intrinsic geometry, therefore the symplectic potential is in the form of a Dirichlet polarization, whence the D label. The diagonalization obtained involves the sum of three pairs of configuration variables and momenta that can be characterized as spin 2, 1 and 0, as discussed for instance in [31]. Denoting \(\varepsilon:=\sqrt{-g}/f\), we can write the conjugate momenta as densities,
\[\tilde{\pi}^{\mu\nu}:=\varepsilon\sigma^{\mu\nu},\qquad\tilde{\pi}_{\mu}:=2 \varepsilon\left(\eta_{\mu}+\left(k-\frac{\theta}{2}\right)n_{\mu}\right), \qquad\tilde{\pi}:=-\varepsilon\left(\theta+2k\right). \tag{3.17}\]
The first term in (3.14) can be equally written as
\[\sigma^{\mu\nu}\delta\gamma_{\mu\nu}\,\epsilon_{\mathcal{N}}=-\sigma_{\mu\nu }\delta(\gamma^{\mu\nu}\,\epsilon_{\mathcal{N}})=-\gamma_{\mu\nu}\delta \sigma^{\mu\nu}\epsilon_{\mathcal{N}}, \tag{3.18}\]
where \(\gamma^{\mu\nu}\,\epsilon_{\mathcal{N}}\) is manifestly conformal invariant. In other words, only the trace-less part of the metric perturbations enters here. This is the spin-2 pair.
The spin-1 pair has three components, two 'transverse' ones whose momentum is the twist, and a 'longitudinal' one proportional to \(n_{\mu}\delta l^{\mu}\). The longitudinal variation is there to compensate the
dependence of \(\gamma^{\mu\nu}\delta\gamma_{\mu\nu}\) on the choice of rigging vector. This dependence prevents the interpretation of \(\gamma^{\mu\nu}\delta\gamma_{\mu\nu}\) as the variation of the volume form, see (3.9), and can be removed if we restrict the variations to satisfy
\[n_{\mu}\delta l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\qquad \Rightarrow\qquad\frac{1}{2}\gamma^{\mu\nu}\delta\gamma_{\mu\nu}\epsilon_{ \mathcal{N}}\,\stackrel{{\mathcal{N}}}{{=}}\,\delta\epsilon_{ \mathcal{N}}. \tag{3.19}\]
This restriction can be achieved in two ways. The first is to choose the metric-dependence of \(f\) such that \(\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,-n^{\rho}l^{\sigma}\delta g_{\rho\sigma}\,l_{\mu}\), which makes \(n_{\mu}\delta l^{\mu}\) vanish identically; the second is to impose it directly as a restriction on the allowed variations.
Remarkably, there is a solution that does not require any restriction on the variations, but relies instead on an integration by parts in field space of the spin-0 term:
\[-(\theta+2k)\delta\epsilon_{\mathcal{N}}=-\delta[(\theta+2k)\epsilon_{\mathcal{N }}]+\delta(\theta+2k)\epsilon_{\mathcal{N}}. \tag{3.26}\]
This leads to
\[\underline{\theta}^{\text{\tiny EH}}=\theta^{\text{\tiny Conf}}-\delta\ell^{\text{\tiny Conf}}+d\vartheta^{\text{\tiny EH}}, \tag{3.27}\]
where now
\[\theta^{\text{\tiny Conf}}:=[\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+\pi_{\mu} \delta l^{\mu}+\delta(\theta+2k)]\epsilon_{\mathcal{N}}, \tag{3.28}\]
and
\[\ell^{\text{\tiny Conf}}:=-\theta\epsilon_{\mathcal{N}}=-d\epsilon_{S}. \tag{3.29}\]
The new boundary Lagrangian and symplectic potential are class-III invariant. This shows the type of valuable insights that can be obtained using the freedom of changing the potential via (3.8).
The decomposition (3.28) corresponds to a change of polarization in the phase space that identifies as configuration variables the conformal class of the 2d metric - equivalently the shear, recall (3.18) and the discussion around it -, the tangent vector, and the spin-0 momentum \(\theta+2k\) instead of the volume form. We will show that it leads to a Noether flux-balance law with no anomaly term. This is related to the fact that this choice makes the boundary Lagrangian unambiguous even without any restriction on the variations. Conformal boundary conditions appear thus to be better behaved, something argued for also in the time-like case [41]. This polarization was referred to as \(\theta^{\text{\tiny Y}}\) in [19], by analogy with York's conformal boundary conditions for the time-like case. A similar decomposition to (3.28) was considered also in [18], while looking for thermodynamical interpretations of the symplectic potential. However they included the additional restriction \(n_{\mu}\delta l^{\mu}=0\), which spoils class-I invariance.
There are other integrations by parts in field space that could be considered. One could change the spin-0 sector with different numerical factors. For instance, the investigations of black hole entropy by Chandrasekaran and Speranza (CS) in [10] motivate the choice
\[\ell^{\text{\tiny CS}}=-(k+2\theta)\epsilon_{\mathcal{N}}=\ell^{\text{\tiny D }}+k\epsilon_{\mathcal{N}}. \tag{3.30}\]
The motivation from black hole entropy will be briefly explained below, but notice that this choice is not class-III invariant. On the other hand, a change of polarization in the spin-1 pair seems of little use, since we cannot treat \(\eta_{\mu}\) as an independent configuration variable from the spin-2 pair. This is due to the fact that the constraint equations on \(\mathcal{N}\) relate \(\eta_{\mu}\) to the radiative data which are contained in the spin-2 pair. For completeness, we report in App. B an exploration of alternative polarizations and their boundary conditions.
In the rest of the paper we will study how changing the boundary Lagrangian as in the examples above affects the variational principle and the construction of gravitational charges. With the exception of (3.25), the boundary Lagrangians that we consider have the same functional dependence, and differ only by numerical factors. This is similar to what we had in the case of a time-like boundary [17], and allows us to treat all cases at once writing the boundary Lagrangian in parametric form as
\[\ell^{(b,c)}=-(bk+c\theta)\epsilon_{\mathcal{N}}. \tag{3.31}\]
A boundary Lagrangian of this family is class-III invariant for \(b=0\) and any value of \(c\). The specific examples described above correspond to:
\begin{tabular}{c|c c c c} & D & CFP & Conf & CS \\ \hline \(b\) & 2 & 0 & 0 & 1 \\ \(c\) & 2 & 2 & 1 & 2 \\ \end{tabular} (3.32)
The option (3.25) could be included by adding a third parameter, but we have seen that it is only a partial solution and will thus play a lesser role in the following. The symplectic potential corresponding to this family is
\[\theta^{(b,c)}=\big{[}\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+\pi_{\mu}\delta l^{ \mu}+(2-b)\delta k+(2-c)\delta\theta\big{]}\epsilon_{\mathcal{N}}-(bk+(c-1) \theta)\delta\epsilon_{\mathcal{N}}, \tag{3.33}\]
and we repeat for convenience of comparison the four particular cases discussed earlier:
\[\theta^{\text{D}}:=(\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+\pi_{\mu }\delta l^{\mu})\epsilon_{\mathcal{N}}-(\theta+2k)\delta\epsilon_{\mathcal{N }}, \tag{3.34a}\] \[\theta^{\text{CFPk}}:=\big{(}\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+ \pi_{\mu}\delta l^{\mu}+2\delta k\big{)}\epsilon_{\mathcal{N}}-\theta\delta \epsilon_{\mathcal{N}},\] (3.34b) \[\theta^{\text{Conf}}:=[\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+\pi_ {\mu}\delta l^{\mu}+\delta(\theta+2k)]\epsilon_{\mathcal{N}},\] (3.34c) \[\theta^{\text{CS}}:=\big{(}\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+ \pi_{\mu}\delta l^{\mu}+\delta k\big{)}\epsilon_{\mathcal{N}}-(k+\theta) \delta\epsilon_{\mathcal{N}}. \tag{3.34d}\]
Here \(\text{CFPk}\) stands for the extension of the CFP case to \(\delta k\neq 0\). Although both \(\theta^{\text{CFPk}}\) and \(\theta^{\text{Conf}}\) are III-invariant, only the latter is in diagonal form for the general case with \(\delta k\neq 0\), since in the former the volume form appears both as \(q\) and as \(p\). More generally, any potential with \(b=0\) is III-invariant, but only the one with \(c=1\) is in diagonal form. For \(\delta k=0\), both \(\theta^{\text{CFP}}\) and \(\theta^{\text{Conf}}\) are diagonal and class III-invariant for \(\delta A=0\).
So far we have assumed that \(\delta A=0\). This condition is satisfied if we restrict the class of allowed normals to satisfy \(\delta l^{\mu}\ \overset{\mathcal{N}}{=}\ 0\) or \(\delta l_{\mu}\ \overset{\mathcal{N}}{=}\ 0\). If one relaxes both conditions and allows for \(\delta A\neq 0\), then a class-III invariant boundary Lagrangian is no longer sufficient to have a class-III invariant symplectic potential, because of the contribution from \(\vartheta^{\text{EH}}\). A possibility would then be to absorb \(d\vartheta^{\text{EH}}\) in the definition of \(\theta^{\prime}\). This is possible of course, however it would spoil the idea that \(\theta^{\prime}\) should be in diagonal \(p\delta q\) form. Furthermore, we will see that there are other reasons to impose \(\delta l^{\mu}\ \overset{\mathcal{N}}{=}\ 0\) or \(\delta l_{\mu}\ \overset{\mathcal{N}}{=}\ 0\), as well as to keep \(\vartheta^{\text{EH}}\) out of \(\theta^{\prime}\). For this reason we keep \(\vartheta^{\text{EH}}\) in the corner term.
## 4 Conservative boundary conditions and the variational principle
In the study of the variational principle, one wants to find suitable boundary conditions that make the variation of the action vanish everywhere on-shell, including at the boundary. In this context, (3.8) is useful to identify the required boundary and corner terms to be added to the action for the allowed boundary conditions. Suppose that we find a decomposition like (3.8) with a certain \(\theta^{\prime}=p\delta q\), and such that adding \(\vartheta\) to the contribution coming from the part of the boundary complementary to \(\mathcal{N}\) we get a total variation, call it \(\delta c\). We can then conclude that the boundary conditions identified by \(\delta q=0\) provide a well-defined variational principle, once the action is supplemented with the boundary term \(\ell\) as well as the corner term \(c\).
In this Section we show how the different polarizations of the null symplectic potential give a variational principle with different boundary conditions. We first review two known but non-trivial facts about Dirichlet boundary conditions, namely that one has to fix one more condition than the intrinsic geometry [32], and that the resulting boundary terms are ambiguous [8]. We then show that the alternative conformal boundary conditions improve this problem.
### Dirichlet boundary conditions and their ambiguity
Dirichlet boundary conditions hold fixed the intrinsic geometry. In the case of a null hypersurface, we could take this to mean
\[\gamma^{\mu\rho}\gamma^{\nu\sigma}\delta\gamma_{\rho\sigma}\ \overset{\mathcal{N}}{=}\ 0,\qquad\delta l^{\mu}\ \overset{\mathcal{N}}{=}\ 0. \tag{4.1}\]
The first condition is class-III invariant, but the second only if \(\delta A=0\). Nonetheless they imply \(\delta\epsilon_{\mathcal{N}}\stackrel{{\mathcal{N}}}{{=}}0\) thanks to (3.9), therefore they fix entirely the intrinsic geometry. On-shell of these conditions (3.2) gives
\[\underline{\theta}^{\text{\tiny EH}}=-\delta\ell^{\text{\tiny D}}+d\vartheta^{\text{\tiny EH}}, \tag{4.2}\]
with \(\vartheta^{\text{\tiny EH}}=n^{\mu}\delta l_{\mu}\epsilon_{S}\). The first term on the RHS is a total variation, and can be eliminated if the boundary Lagrangian (3.15) is added to the initial action. This is the equivalent of the Gibbons-Hawking-York term, and can even be written in exactly the same form as the divergence of the normal using (2.20). Notice also that (4.1) also imply \(\delta\theta\stackrel{{\mathcal{N}}}{{=}}0\), hence the only relevant term in (3.15) is the inaffinity \(k\). It is then equivalent to work with this boundary Lagrangian or the alternative choice (3.16).
The second term on the RHS is not a total variation, but it can be shown that once it is added to the contribution coming from the rest of the boundary, one obtains a total variation [42, 43, 8, 33, 34]. For instance if the null boundary is joined to a space-like boundary \(\Sigma\),
\[\vartheta^{\text{\tiny EH}}+\vartheta^{\text{\tiny EH}}_{\Sigma}=-2\delta \hat{\beta}\epsilon_{S}, \tag{4.3}\]
where \(\hat{\beta}\) is defined by (2.25). This is a total variation under Dirichlet boundary conditions, since the first of (4.1) implies \(\delta\epsilon_{S}\stackrel{{\mathcal{N}}}{{=}}0\). It can thus be compensated by the corner Lagrangian
\[\ell^{\text{\tiny H}}:=2\hat{\beta}\epsilon_{S}. \tag{4.4}\]
Here H stands for Hayward. See [43, 8] for other examples of joints and their corner terms.
We see that the Dirichlet variational principle is not well-defined for the Einstein-Hilbert action with a null boundary, and one needs to supplement the action with the boundary terms \(\ell^{\text{\tiny D}}\) and \(\ell^{\text{\tiny H}}\), in analogy with what happens with other types of boundaries. This is the result that one typically finds in the literature [32, 8, 33]. The problem that was raised in [8] is that these boundary and corner terms are ambiguous and non-geometric: they involve quantities like \(k\) and \(\hat{\beta}\) which depend on the choice of normal representative and on changes of embedding. In our language, they are not class-III invariant. A solution proposed in [8] was to add the non-local boundary term \((\theta\ln\theta)\epsilon_{\mathcal{N}}\). An alternative solution that doesn't involve the non-local counterterm would be to work with class-III invariant quantities only.
As we have seen at the end of the previous Section, this can be achieved in various ways. One is to add the condition \(\delta k\stackrel{{\mathcal{N}}}{{=}}0\). The boundary Lagrangian for this variational principle is the class-III invariant choice \(\ell^{\text{\tiny CFP}}\) and it is unambiguous. In fact, it is spacetime exact, hence it can be reabsorbed in the corner term, leading to a variational principle without any boundary term along the null hypersurface. The problem raised in [8] is however not completely solved, because there is still the need for corner terms like (4.4), which maintain their ambiguity.
Next, let us consider the condition \(\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\). Adding it is quite natural, since the combination \(\delta l^{\mu}=\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) is equivalent to \(l_{\mu}\delta g^{\mu\nu}\stackrel{{\mathcal{N}}}{{=}}0\) and as such, manifestly class III-invariant. A boundary Lagrangian for the Dirichlet variational principle supplemented by \(\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) is given in (3.25), but as remarked there, this removes only the ambiguity under change of \(f\) at fixed embedding, and not under changes of embedding. This would then only be a partial resolution to the problem of ambiguities. The reason why we have class-III invariant boundary conditions but fail to have a fully class-III invariant boundary Lagrangian is that \(\ell^{D}\) transforms under class-III with an inhomogeneous term proportional to \(\pounds_{l}\ln A\), and its variation is zero under the above conditions.
As for the corner terms, the additional condition means that (3.7) vanishes. This does not remove the Hayward corner term (4.4) from the variational principle, since it is still needed to cancel a contribution to the variation coming from the space-like boundary. But it removes it in the case of a corner between two null boundaries with the same boundary conditions. In the latter case there is no corner ambiguity.
It follows that if we take both additional restrictions,
\[\gamma^{\mu\rho}\gamma^{\nu\sigma}\delta\gamma_{\rho\sigma}=\delta l^{\mu}= \delta l_{\mu}=\delta k\,\stackrel{{\mathcal{N}}}{{=}}\,0, \tag{4.5}\]
this strengthened Dirichlet variational principle is well-defined with no contribution from the null boundary, and all potential ambiguities reduced to a choice of normal at the corner between a null and a non-null boundary.
The importance of choosing the right boundary term goes beyond the variational principle. In [8], its relevance was discussed in the context of the 'action=complexity' proposal in AdS/CFT holography. Below, we will see how it affects the charges constructed with covariant phase space methods. In this application, only the ambiguity of the boundary Lagrangian matters, and not the one of the corner terms used in the variational principle.
### Conformal boundary conditions
Consider now the alternative polarization (3.28). This vanishes for the conformal boundary conditions
\[\delta\sigma^{\mu\nu}\,\stackrel{{\mathcal{N}}}{{=}}\,0,\qquad \delta l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0,\qquad\delta( \theta+2k)\,\stackrel{{\mathcal{N}}}{{=}}\,0. \tag{4.6}\]
The first two are equivalent to Dirichlet except for \(\delta\epsilon_{\mathcal{N}}\,\stackrel{{\mathcal{N}}}{{=}}\,0\), which is replaced by the third one above. Since \(\theta\) is intrinsic and \(k\) depends on transversal derivatives of the metric, the last condition is of Robin type. With respect to the general family (3.33), \(\theta^{\text{\tiny Conf}}\) corresponds to \(b=0\) and \(c=1\), and it is easy to see that it is the only possibility that would allow conservative boundary conditions with \(\delta\epsilon_{\mathcal{N}}\neq 0\). We then have
\[\underline{\theta}^{\text{\tiny EH}}=-\delta\ell^{\text{\tiny Conf}}+d\vartheta^{\text{\tiny EH}}. \tag{4.7}\]
The interesting remark is that this boundary Lagrangian is class-III invariant, hence geometric and not ambiguous. Changing polarization resolves the problem of ambiguity of the boundary Lagrangian without adding counter-terms or any of the additional restrictions \(\delta l_{\mu}=0\) or \(\delta k=0\) considered above. If we do include the \(\delta k=0\) restriction, then the conformal and Dirichlet polarizations boil down to the same boundary conditions, and their boundary Lagrangians (3.29) and (3.23) are indeed the same up to corner terms.
Next, we look at the corner terms. \(\vartheta^{\text{\tiny EH}}\) is the same as before, therefore we still have (4.3) when looking at the joint between a null and a space-like boundary. This is no longer a total variation, because (4.6) does not imply \(\delta\epsilon_{S}\,\stackrel{{\mathcal{N}}}{{=}}\,0\). Therefore, to have a well-defined variational principle we need to add the corner boundary condition
\[\delta\epsilon_{S}\,\stackrel{{ S}}{{=}}\,0. \tag{4.8}\]
The need for an additional condition on top of (4.6) seems reasonable because the expansion is the derivative of the 2d metric, hence one is missing an initial datum when providing boundary data in terms of the expansion. Upon doing so, the required action corner terms in the variational principle are the same as in the Dirichlet case, e.g. (4.4). Therefore even though these boundary conditions eliminate the ambiguity of the null boundary Lagrangian, they do not eliminate the ambiguity of the
corner terms, at least in so far as they are completed with (4.8). If we further add \(\delta l_{\mu}=0\) then as before we remove the need of corner terms between null boundaries.
The boundary Lagrangian \(\ell^{\mbox{\tiny Conf}}\) is spacetime exact, hence it could also be absorbed into a modified \(\vartheta\). In this case the conformal boundary conditions require no boundary Lagrangian at all. This however does not change the ambiguity of the corner terms, since one is adding a non-ambiguous term to the existing ambiguous one.
Having found a polarization with a class-III invariant boundary Lagrangian will be very useful for the construction of charges in the covariant phase space, which is what we turn to next. In that context, having corner terms in the action principle which are ambiguous is not important, because these enter neither the expression for the Noether current nor that for the Hamiltonian generator. However, from the point of view of the variational principle one may be interested in going further, and see whether there exists a completely unambiguous variational principle including the corner terms. A possibility would be to replace (4.8) with its conjugated corner variable, namely set \(\delta\hat{\beta}\stackrel{{ S}}{{=}}0\). No corner term in the action would then be needed. To that end, one should first study whether \(\delta(\theta+2k)\stackrel{{ S}}{{=}}0\) has any bearing on \(\hat{\beta}\). We leave further investigations of this idea for future work.
## 5 Leaky boundary conditions and covariant phase space
Sachs' identification of constraint-free data on a null hypersurface [44] can be used to cut a distinction between physical and gauge degrees of freedom, and in turn understand which part of the conservative boundary conditions can be relaxed in order to allow flux of physical degrees of freedom through the boundary. Such leaky boundary conditions are useful in order to construct a covariant phase space that describes the evolution of dynamical gravitational systems using the flux defined by the symplectic potential. As we have seen, the symplectic potential depends on the intrinsic and extrinsic geometry of the hypersurface, as well as on non-geometric quantities such as the choice of extension of the normal. Furthermore any split like (3.8) can introduce a dependence of the individual terms on the scaling of the normal and on the choice of rigging vector. This dependence shows up in the covariant phase space in the form of anomalous transformations of the fields, or anomalies for short. The exact nature of the anomalies depends on the leaky boundary conditions chosen. Different types have been explored in the literature, leading to different residual gauge transformations and thus different boundary symmetry groups. In this Section we characterize the different symmetry groups and compute their anomalies. In the following Section we will see how the anomalies enter the gravitational flux and the Noether charges, and study how the flux-balance laws are reorganized when changing polarization of the symplectic potential.
Let us first talk about the variations that enter the symplectic potential. The shear and expansion are determined by the induced metric, and the twist is also expected to be determined by the induced metric on-shell of Einstein's equations. Therefore, the symplectic potential contains substantially only four independent variations: \(\delta\gamma_{\mu\nu}\), \(\delta l^{\mu}\), \(\delta l_{\mu}\) and \(\delta k\). The radiative data are contained in the first variation, which should therefore be left free in leaky boundary conditions allowing for gravitational flux. The question is then what to do with the remaining three.
On the hypersurface, we can always choose coordinates such that the tangent vector \(l^{\mu}\) has the simple form (2.28). Therefore \(\delta l^{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) can be interpreted as a restriction of the phase space to variations preserving this choice of coordinates. Furthermore, \(l_{\mu}\) is now metric dependent, unless we fix the \(\Phi\) coordinate to have \(g_{\Phi\lambda}=-1\), or equivalently we take the 'canonical' normalization. With the first option, \(\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) is also a restriction of the phase space preserving a certain choice of coordinates.
Both coordinate choices are always achievable and don't restrict the physics of the system: they can be taken as part of the universal structure. Therefore it seems reasonable to impose both restrictions, and this is indeed the conclusion reached through the careful analysis done in [1].
The situation with \(\delta k\) is a bit more subtle, because one may expect that having eliminated the variations coming from \(f\), this is a genuine variation of the extrinsic geometry that contains physics. But as we see from (3.6), this variation captures the transversal derivative of \(l_{\mu}l_{\nu}\delta g^{\mu\nu}\), whose vanishing means that the coordinates are affine. Hence restricting \(\delta k\) to vanish or not means that the symmetry group of the covariant phase space preserves or not affine coordinates, namely metrics in the form (2.31) as opposed to (2.29). And this seems merely a gauge statement.
This question was also left partially open in [1]. In the main body of the paper \(\delta k\) is fixed to vanish, on the account that any two metrics with a null hypersurface \(\mathcal{N}\) can be made to have the same pair \((l^{\mu},k)\) via a diffeomorphism on \(\mathcal{N}\), suggesting that \(k\) should be taken as part of the universal background structure. Vanishing \(\delta k\) was also used in order to complete the Wald-Zoupas procedure and identify a non-ambiguous notion of charges. On the other hand, it was pointed out that the symplectic 2-form is not degenerate along any of the boundary diffeomorphisms. If one takes zero-modes of the symplectic 2-form with boundary terms included to be a definition of gauge, then every diffeomorphism of the boundary should be considered as a physical transformation. This motivates the investigation of an enlarged phase space in which \(k\) is allowed to vary, and possibly even \(l^{\mu}\). As we will see, enlarging the phase space affects the symmetry groups and the associated transformations in field space, in particular their anomalies.
### Anomalies and class-III invariance
The variations studied in Section 3 keep the boundary fixed and null: \(\delta\Phi=0\) and \(l_{\mu}\delta l^{\mu}\stackrel{{\mathcal{N}}}{{=}}0\), or \(\delta g^{\Phi\Phi}\stackrel{{\mathcal{N}}}{{=}}0\) in adapted coordinates. This means that \(\Phi\) and \(g^{\Phi\Phi}|_{\mathcal{N}}\) are fixed background structures, whereas the rest of the metric can be varied freely. The split between dynamical (namely varying) and background (namely fixed) structures introduces a delicate aspect in the construction of the phase space, that of anomalies, which we now review. We use \(\phi\) to denote a generic collection of dynamical fields, which we will specialize below to the spacetime metric.
We use the notation of [11] for the exterior calculus in field space. In particular, \(\delta\) is the exterior derivative in field space, \(I_{X}\) the inner product with a vector field
\[\hat{X}=\int X(\phi)\frac{\delta}{\delta\phi}, \tag{5.1}\]
and \(\delta_{X}=\delta I_{X}+I_{X}\delta\) is the Lie derivative. Recall to avoid any confusion that the corresponding quantities for the exterior calculus in spacetime are denoted \(d\), \(i_{\xi}\) and \(\pounds_{\xi}=di_{\xi}+i_{\xi}d\). We will often consider vector fields in field space whose components are spacetime diffeomorphisms, and which we denote by
\[\hat{\xi}:=\int\pounds_{\xi}\phi\frac{\delta}{\delta\phi}. \tag{5.2}\]
It follows from this definition that the field-space Lie derivative of a dynamical field \(\phi\) coincides with the spacetime Lie derivative, namely \(\delta_{\xi}\phi=\hat{\xi}(\phi)=\pounds_{\xi}\phi\), whereas for a background field \(\chi\) we have \(\delta_{\xi}\chi=\hat{\xi}(\chi)=0\). If \(\chi\) is not left invariant by the diffeomorphisms under consideration, \(\pounds_{\xi}\chi\neq 0\) and therefore \(\delta_{\xi}\) and \(\pounds_{\xi}\) have a different action, and the field \(\chi\) is thus non-covariant. More generally, we say that a functional \(F(\phi,\chi)\) in field space is covariant if the Lie derivatives coincide, \(\delta_{\xi}F=\pounds_{\xi}F\). This property is trivial for any functional that depends on the dynamical fields only, but may fail
for functionals that depend on background fields as well. The difference \(\delta_{\xi}-\pounds_{\xi}\) then measures the non-covariance.
It is also important to introduce the anomaly operator
\[\Delta_{\xi}:=\delta_{\xi}-\pounds_{\xi}-I_{\delta\xi}, \tag{5.3}\]
which coincides with the non-covariance for field-independent diffeomorphisms and for field-space scalar functionals. The third term in (5.3) is relevant when acting on functionals of the fields that are forms in field space, as for example on the symplectic potential: For a 1-form \(F(\phi,\chi)\delta\phi\) we have
\[\delta_{\xi}(F\delta\phi)=\partial_{\phi}F\delta_{\xi}\phi\delta\phi+F\delta \delta_{\xi}\phi=\partial_{\phi}F\pounds_{\xi}\phi\delta\phi+F\delta\pounds_{ \xi}\phi=\pounds_{\xi}(F\delta\phi)-\partial_{\chi}F\pounds_{\xi}\chi\delta \phi+F\pounds_{\delta\xi}\phi, \tag{5.4}\]
where we used \([\delta,\delta_{\xi}]=0\) in the first equality, and \([\delta,\pounds_{\xi}]=\pounds_{\delta\xi}\) in the last, and the definition \(I_{\delta\xi}\delta\phi:=\pounds_{\delta\xi}\phi\). From this formula we see that anomaly-freeness means
\[\partial_{\chi}F\pounds_{\xi}\chi=0. \tag{5.5}\]
Namely, \(F\) should either not depend on the background fields \(\chi\), or if it does, the symmetry group should be made only of symmetries of \(\chi\). The symmetry group is typically required to preserve some universal background structure. If this is described by the background fields \(\chi\), then the symmetry group coincides with those diffeomorphisms that are symmetries of \(\chi\), and there are no anomalies. But if the background structure is described by equivalence classes of background fields, the situation is different, because isometries of the background structure need not be symmetries of individual representative fields, which are the quantities entering (5.5). We also see that the notion of covariance given by matching Lie derivatives can be stated equivalently as
\[\partial_{\chi}F\pounds_{\xi}\chi=F\pounds_{\delta\xi}\phi. \tag{5.6}\]
It means that representatives of the equivalence class for which the symmetry group is not an isometry can still be allowed, provided they carry a specific field-dependence. Even though it seems natural to talk about covariance when the two Lie derivatives match, it is the notion of anomaly-freeness that carries the most direct interpretation in terms of independence from background structures. We will come back to this difference in Section 7.2 at the end.
On a null boundary, \(\Phi\) and \((g^{\Phi\Phi}:=g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi)|_{\cal N}\) are background fields. Their anomalies are
\[\Delta_{\xi}\Phi=-\pounds_{\xi}\Phi=-\xi^{\Phi},\qquad\Delta_{\xi}g^{\Phi\Phi} =-\pounds_{\xi}g^{\Phi\Phi}\;\stackrel{{\cal N}}{{=}}\;-\xi^{ \Phi}\partial_{\Phi}g^{\Phi\Phi}, \tag{5.7}\]
and vanish if we restrict attention to diffeomorphisms that satisfy \(\xi^{\Phi}\;\stackrel{{\cal N}}{{=}}\;0\). These are the tangent diffeomorphisms, and don't move the boundary. They can be parametrized as
\[\xi=\xi^{a}_{\cal N}\partial_{a}+\Phi\bar{\xi}^{\mu}\partial_{\mu}\in{\rm Diff }({\cal N}),\qquad\xi\cdot l\;\stackrel{{\cal N}}{{=}}\;0, \tag{5.8}\]
with \(\bar{\xi}\) smooth on \({\cal N}\). We will refer to \(\bar{\xi}^{\mu}\) as the _extension_ of the symmetry vector outside of the boundary and into the bulk, and to the specific component \(\bar{\xi}^{\Phi}\;\stackrel{{\cal N}}{{=}}\;-(f\Phi)^{-1}\xi\cdot l\) as the transversal extension. Since \(\pounds_{\xi}g^{\Phi\Phi}\;\stackrel{{\cal N}}{{=}}\;0\) for \(\xi\in T{\cal N}\), we can write covariantly
\[\hat{\xi}:=\int\pounds_{\xi}g_{\mu\nu}\frac{\delta}{\delta g_{\mu\nu}}, \tag{5.9}\]
without the need to treat separately \(g^{\Phi\Phi}\) and the dynamical components of the metric.
However, anomalies are present even for tangent diffeomorphisms if we have to deal with normal derivatives of the background fields. This is precisely the case at hand, since the pull-back of the symplectic potential depends on the normal 1-form. For a tangent diffeomorphism, we have
\[\Delta_{\xi}l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,-w_{\xi}l_{ \mu},\qquad\Delta_{\xi}l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,-w_ {\xi}l^{\mu},\qquad w_{\xi}:=(\pounds_{\xi}-\delta_{\xi})\ln f+\bar{\xi}^{\Phi}. \tag{5.10}\]
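As a sketch of where (5.10) comes from, assume \(l_{\mu}=f\partial_{\mu}\Phi\) (the overall sign is immaterial here) with \(\mathcal{N}\) at \(\Phi=0\), as in the adapted coordinates used above. Since \(\Phi\) is background, \(\delta_{\xi}l_{\mu}=(\delta_{\xi}f)\partial_{\mu}\Phi\), while for a tangent diffeomorphism parametrized as in (5.8)
\[\pounds_{\xi}l_{\mu}=(\pounds_{\xi}f)\partial_{\mu}\Phi+f\partial_{\mu}\xi^{\Phi}\,\stackrel{{\mathcal{N}}}{{=}}\,(\pounds_{\xi}f+f\bar{\xi}^{\Phi})\partial_{\mu}\Phi,\]
so that \(\Delta_{\xi}l_{\mu}=\delta_{\xi}l_{\mu}-\pounds_{\xi}l_{\mu}=-\big{(}(\pounds_{\xi}-\delta_{\xi})\ln f+\bar{\xi}^{\Phi}\big{)}l_{\mu}=-w_{\xi}l_{\mu}\) (the \(I_{\delta\xi}\) term does not contribute on field-space scalars such as \(l_{\mu}\)).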
We see that the anomaly \(w_{\xi}\) depends both on the choice of normal representative, through the non-covariance of \(f\), and on the diffeomorphism considered, through the transversal extension of the symmetry vector field, namely its \(\Phi\) component. As long as both quantities are arbitrary, one can choose them so that the anomalies vanish, for instance taking \(f=1\) and \(\bar{\xi}^{\Phi}=0\). However, while the choice \(f=1\) is always acceptable (but may not be the best choice to study a specific problem), \(\bar{\xi}^{\Phi}=0\) is not, because in most cases of interest this extension is fixed to a non-vanishing value determined by the parameters \(\xi^{a}_{\mathcal{N}}\). These cases include isometries and asymptotic symmetries at future null infinity; setting \(\bar{\xi}^{\Phi}=0\) would for instance exclude from the symmetries the possibility of a Killing vector, whose transversal extension is fixed and non-vanishing. More generally, asymptotic symmetries at future null infinity, as well as on a physical null hypersurface and on a non-expanding horizon, all require fixing the transversal extension of the symmetry vectors to a non-vanishing value determined by the parameters \(\xi^{a}\). We will review below why.
To give further intuition about the meaning of \(w_{\xi}\), consider the case of a non-null boundary. We still have (5.7), hence anomalies only appear for quantities like the normal, once we restrict attention to diffeomorphisms that are tangent to the boundary. For an arbitrary normal, (5.10) is also still valid. But if we choose a unit-norm normal, then \(w_{\xi}\) vanishes identically. If we recall that a unit-norm normal has the property of being independent of the embedding of the boundary, we see that anomalies arise not so much from the presence of a boundary, but rather from a foliation-dependence in its description. In other words, the equivalent of class-III invariance in the time-like case is achieved through invariance under \(\Phi\) reparametrization only, because \(f\) is fixed. Coming back to the case of a null boundary, there is no foliation-independent description, and no canonical normalization for the normal, hence anomalies become relevant.
For the sake of this paper, (5.10) is the main anomaly that we have to worry about, but not the only one. A second source of anomalies is the rigging vector, which is also a non-dynamical and background quantity. Its anomalies are less important in the end, but will appear in some intermediate calculations and it is useful to track them as well. For an arbitrary choice of rigging,
\[\Delta_{\xi}n_{\mu}=w_{\xi}n_{\mu}+Z_{\mu},\qquad Z\cdot l=Z\cdot n=0, \tag{5.11}\]
where the proportionality to \(w_{\xi}\) of the first term follows from \(l\cdot n=-1\), and the vector \(Z\) parametrizes the rigging anomaly. Its explicit form depends on the specific choice of \(n\), and we can leave it unspecified in the following. For instance, the projector \(\gamma_{\mu\nu}\) is manifestly class-III invariant but not class-I invariant. It has an anomaly determined by (5.11) as
\[\Delta_{\xi}\gamma_{\mu\nu}=2l_{(\mu}Z_{\nu)}. \tag{5.12}\]
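This follows directly from (5.10) and (5.11), assuming the standard completeness relation for the null rigging, \(\gamma_{\mu\nu}=g_{\mu\nu}+2l_{(\mu}n_{\nu)}\) (valid for \(l\cdot n=-1\)): since \(\Delta_{\xi}g_{\mu\nu}=0\),
\[\Delta_{\xi}\gamma_{\mu\nu}=2\Delta_{\xi}l_{(\mu}\,n_{\nu)}+2l_{(\mu}\Delta_{\xi}n_{\nu)}=-2w_{\xi}l_{(\mu}n_{\nu)}+2l_{(\mu}\big{(}w_{\xi}n_{\nu)}+Z_{\nu)}\big{)}=2l_{(\mu}Z_{\nu)}.\]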
The anomalies (5.10) and (5.11) correspond to the non-invariance under infinitesimal class-III and class-I transformations with parameters \(A=e^{-w_{\xi}}\simeq 1-w_{\xi}\) and \(a=m\cdot Z\), respectively. Further anomalies appear for quantities with a non-vanishing spin weight, since these depend on the background structure \(m\) associated with the choice of \(n\). However, these will not be relevant for us, since we will always compute
anomalies of quantities that can be expressed in terms of \(l\) and \(n\) alone. For these, it is easy to prove that class-I and class-III invariance implies anomaly-freeness, see Appendix A.3.
A subtle point to highlight is that anomaly-freeness requires class-III invariance in the general sense of a field-dependent rescaling. For instance, \(n^{\mu}\delta l_{\mu}\) is manifestly class-III invariant if the rescaling is field-independent, but not otherwise: \(n^{\mu}\delta l_{\mu}\to n^{\mu}\delta l_{\mu}-\delta\ln A\). It is in fact anomalous,
\[\Delta_{\xi}(n^{\mu}\delta l_{\mu})=-\Delta_{\xi}\delta\ln f=\delta w_{\xi}-w_ {\delta\xi}. \tag{5.13}\]
We remark for later use that this specific anomaly vanishes if the variations are restricted by \(\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\), since \(\xi\) is tangent to \(\mathcal{N}\). But it vanishes also if \(\delta l_{\mu}\neq 0\) provided that \(\delta l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\). This may not look obvious, but it follows from (5.10), and can be deduced also looking at (3.10).
Similarly, a quantity that is only partially class-III invariant like \(\bar{k}\epsilon_{\mathcal{N}}\) (we recall that it is independent of \(f\) but not invariant under reparametrizations of \(\Phi\)) is also anomalous,
\[\Delta_{\xi}(\bar{k}\epsilon_{\mathcal{N}})=-\pounds_{l}\bar{\xi}^{\Phi}\, \epsilon_{\mathcal{N}}. \tag{5.14}\]
### Anomalies of the boundary Lagrangians
As a first application of this formalism, we compute the anomaly of the boundary Lagrangians (3.31). Using (5.10) we find
\[\Delta_{\xi}\epsilon_{\mathcal{N}}=w_{\xi}\epsilon_{\mathcal{N}},\qquad\Delta _{\xi}\theta=-w_{\xi}\theta,\qquad\Delta_{\xi}\epsilon_{S}=0,\qquad\Delta_{ \xi}k=-(\pounds_{l}+k)w_{\xi}. \tag{5.15}\]
From the last one, we also deduce that
\[\pounds_{l}w_{\xi}=(\pounds_{\xi}-w_{\xi})k-\delta_{\xi}k. \tag{5.16}\]
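In more detail, since \(\Delta_{\xi}\) acts as a derivation on these products, (5.15) gives
\[\Delta_{\xi}(\theta\epsilon_{\mathcal{N}})=(-w_{\xi}\theta+\theta w_{\xi})\epsilon_{\mathcal{N}}=0,\qquad\Delta_{\xi}(k\epsilon_{\mathcal{N}})=\big{(}-(\pounds_{l}+k)w_{\xi}+kw_{\xi}\big{)}\epsilon_{\mathcal{N}}=-\pounds_{l}w_{\xi}\,\epsilon_{\mathcal{N}},\]
so only the inaffinity term contributes.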
Adding up these contributions we have
\[a_{\xi}^{\text{\tiny(b,c)}}:=\Delta_{\xi}\ell^{\text{\tiny(b,c)}}=b\pounds_{ l}w_{\xi}\,\epsilon_{\mathcal{N}}=b\,dw_{\xi}\wedge\epsilon_{S}, \tag{5.17}\]
where in the last equality we used (2.11) and the fact that \(w_{\xi}\) is only defined on \(\mathcal{N}\). As expected, any member with \(b\neq 0\) is not class-III invariant and it is anomalous. The family of covariant boundary Lagrangians is (3.31) with \(b=0\) and \(c\) arbitrary. This includes in particular the Conf and CFP choices. In the latter case, notice that the statement about covariance is valid also if \(\delta k\neq 0\). The anomalous Lagrangians include the Dirichlet choice with \(b=2\). Such anomalies would only vanish in the special case \(\pounds_{l}w_{\xi}=0\). Looking at (5.16), we see that this would occur for instance if the phase space is restricted to satisfy \(k=\delta k=0\). Finally for the excluded member (3.25), we have
\[a_{\xi}^{\text{\tiny D}}:=\Delta_{\xi}\ell^{\text{\tiny D}}=2\pounds_{l}\bar {\xi}^{\Phi}\,\epsilon_{\mathcal{N}}, \tag{5.18}\]
which captures explicitly its dependence on reparametrizations of \(\Phi\).
This result shows that Dirichlet boundary conditions require an anomalous boundary Lagrangian, whereas conformal boundary conditions admit a covariant one. The situation is the same if we move the expansion term to the corner and work with the primed boundary Lagrangians. In this case the conformal boundary Lagrangian vanishes and its covariance is obvious. Recalling the earlier discussion on ambiguities, we see that the anomaly here keeps track of the dependence of the Dirichlet boundary Lagrangian on non-geometric structures, hence of its ambiguity. This comes from its dependence on
the inaffinity and failure of being class-III invariant, and the problem is resolved switching to the conformal polarization instead.
Lagrangian anomalies appear in the study of central extensions of charge algebras. As shown in [10, 11], the Lagrangian anomaly can be used to compute the cocyle that appears on the right-hand side of the Barnich-Troessaert bracket [45]. This approach was applied in [10] to investigate whether one can obtain a central charge that would be relevant to understand black hole entropy as proposed in [46]. It was found that one can indeed reproduce the functional dependence of the entropy on the horizon's area, but with a wrong numerical factor. The right numerical factor would require as boundary Lagrangian \(k\epsilon_{\mathcal{N}}\) instead of \(2k\epsilon_{\mathcal{N}}\).14
Footnote 14: This alternative boundary Lagrangian corresponds to the boundary condition \(\delta k=(\theta+k)\delta\ln\varepsilon\). They do not impose \(\delta l^{\mu}=0\) however, instead the vector fields considered in [46, 10] stem from asymptotic symmetries of an auxiliary AdS\({}_{3}\) space that appears under a special coordinate transformation of the near-horizon geometry.
### Anomalies of the symplectic potentials
Next, we look at the anomalies of the symplectic potential. The standard symplectic potential \(\theta^{\text{\tiny EH}}\) is manifestly anomaly-free, since it depends only on the metric and its derivatives. So does its pull-back, since the anomaly operator commutes with taking the pull-back for tangent diffeomorphisms. Decomposing it as in (3.2) introduces the background structure given by the reference NP tetrad used, which captures the choice of normal representative and of rigging vector. This step does not introduce anomalies, since as we proved (3.2) is both class I and III invariant.
Anomalies can instead appear in the preferred choice of \(\theta^{\prime}\). From (3.8) we have
\[0=\Delta_{\xi}\theta^{\prime}-\Delta_{\xi}\delta\ell+d\Delta_{\xi}\vartheta. \tag{5.19}\]
Using this formula we can easily compute the anomaly for the family (3.31). The corner term is (3.7) in all cases, and its anomaly is given by
\[\Delta_{\xi}\vartheta^{\text{\tiny EH}}=2\Delta_{\xi}(n^{\mu}\delta l_{\mu}) \epsilon_{S}. \tag{5.20}\]
Using this and (5.17),
\[\Delta_{\xi}\theta^{(b,c)}=b\,dw_{\xi}\wedge\delta\epsilon_{S}+(b-2)d\Delta_{ \xi}(n^{\mu}\delta l_{\mu})\wedge\epsilon_{S}-2\theta\Delta_{\xi}(n^{\mu} \delta l_{\mu})\epsilon_{\mathcal{N}}. \tag{5.21}\]
The relation of this formula to the lack of class-III invariance is straightforwardly obtained with the replacement \(\ln A=-w_{\xi}\).
For conservative boundary conditions with \(\delta\epsilon_{S}=0\), the anomaly vanishes for \(b=2\) (the case of Dirichlet boundary conditions) if \(\theta=0\), and for any \(b\) and any \(\theta\) if \(\Delta_{\xi}(n^{\mu}\delta l_{\mu})\) vanishes, which we recall follows from either \(\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) or \(\delta l^{\mu}\stackrel{{\mathcal{N}}}{{=}}0\).
What if we want to impose leaky boundary conditions instead, with \(\delta\epsilon_{S}\neq 0\)? We need either \(b\) or \(dw_{\xi}\) to vanish. The family with \(b=0\) is covariant under a minimal set of restrictions:
\[\delta l^{\mu}\stackrel{{\mathcal{N}}}{{=}}0\quad\text{or}\quad\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\qquad\Rightarrow\qquad\Delta_{\xi}\theta^{(0,c)}=0. \tag{5.22}\]
These are the most generic conditions that guarantee that the symplectic potential is anomaly-free for all \(\xi\)'s, namely independent of the choice of normal representative. If \(b\neq 0\), we can use
\[dw_{\xi}\wedge\delta\epsilon_{S}=\pounds_{\delta l}w_{\xi}\epsilon_{\mathcal{ N}}+\pounds_{l}w_{\xi}\delta\epsilon_{\mathcal{N}}. \tag{5.23}\]
This vanishes if
\[\delta l^{\mu}=\delta k=k\,\stackrel{{\mathcal{N}}}{{=}}\,0\qquad \Rightarrow\qquad\Delta_{\xi}\theta^{\text{\tiny(b,c)}}=0, \tag{5.24}\]
however with these conditions the terms in \(b\) drop out completely. We conclude that the only relevant case is \(b=0\).
We see that the conformal polarization is the only choice of _diagonal_ symplectic potential that is anomaly-free upon imposing only the minimal condition that \(\delta f=0\). All other choices considered are either not in diagonal form, or require additional restrictions on the phase space. This makes the conformal polarization best suitable to study more general leaky boundary conditions without introducing anomalies. We will see below that anomalies in the symplectic potential spoil the integrability of Hamiltonian charges.
The anomaly of the symplectic potentials can also be derived summing up the anomalies of each spin pair, which we report here for completeness. The shear tensor \(\sigma^{\mu\nu}\) is neither class-I nor class-III invariant, and it carries both anomalies:
\[\Delta_{\xi}\sigma^{\mu\nu}=-w_{\xi}\sigma^{\mu\nu}+2l^{(\mu}\sigma^{\nu)\rho }Z_{\rho} \tag{5.25}\]
(whereas the NP scalar being rigging-independent only carries the first anomaly, \(\Delta_{\xi}\sigma=-w_{\xi}\sigma\)). Using this and (5.12) in the spin-2 pair of the symplectic potential, we find
\[\Delta_{\xi}(\sigma^{\mu\nu}\delta\gamma_{\mu\nu}\epsilon_{ \mathcal{N}}) =\Delta_{\xi}(\sigma^{\mu\nu}\epsilon_{\mathcal{N}})\delta\gamma _{\mu\nu}+\sigma^{\mu\nu}\epsilon_{\mathcal{N}}\Delta_{\xi}\delta\gamma_{\mu \nu}=2l^{\mu}\sigma^{\nu\rho}Z_{\rho}\delta\gamma_{\mu\nu}\epsilon_{\mathcal{ N}}\] \[=\delta l^{\mu}(\theta Z_{\mu}-2Z^{\rho}\nabla_{\nu}l_{\rho}) \epsilon_{\mathcal{N}}, \tag{5.26}\]
where we used \(\Delta_{\delta\xi}g_{\mu\nu}=0\) and the orthogonality properties of \(Z\). Next, we have
\[\Delta_{\xi}\pi_{\mu}=2(\partial_{\mu}+l_{\mu}\partial_{n})w_{\xi}+2Z^{\rho} \nabla_{\mu}l_{\rho}-\theta Z_{\mu}+2l_{\mu}(Z^{\rho}n^{\nu}+n^{\rho}Z^{\nu}) \nabla_{\nu}l_{\rho}, \tag{5.27}\]
so the spin-1 pair has anomaly
\[\Delta_{\xi}(\pi_{\mu}\delta l^{\mu}\epsilon_{\mathcal{N}}) =\Delta_{\xi}\pi_{\mu}\delta l^{\mu}\epsilon_{\mathcal{N}}-\pi\cdot l \,(\delta w_{\xi}-w_{\delta\xi})\epsilon_{\mathcal{N}}\] \[=(2\partial_{\mu}w_{\xi}+2Z^{\sigma}\nabla_{\mu}l_{\sigma}- \theta Z_{\mu})\delta l^{\mu}\epsilon_{\mathcal{N}}+(2k-\theta)\Delta_{\xi}( n^{\mu}\delta l_{\mu})\epsilon_{\mathcal{N}}. \tag{5.28}\]
where we used \(\pi\cdot l=\theta-2k\). The last term in the RHS depends on \(\delta l_{\mu}\) but vanishes for \(\delta l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\) as discussed below (5.13). Finally for the spin-0 part let us consider the cases of Dirichlet and conformal polarizations as examples.
\[-\Delta_{\xi}[(\theta+2k)\delta\epsilon_{\mathcal{N}}]=-(\theta+2k)\Delta_{ \xi}(n^{\mu}\delta l_{\mu})\epsilon_{\mathcal{N}}+2\pounds_{l}w_{\xi}\delta \epsilon_{\mathcal{N}}. \tag{5.29}\]
Adding up the three contributions we recover (5.21) for \(b=2\),
\[\Delta_{\xi}\theta^{\text{\tiny D}}=2dw_{\xi}\wedge\delta\epsilon_{S}-2\theta \Delta_{\xi}(n^{\mu}\delta l_{\mu})\epsilon_{\mathcal{N}}. \tag{5.30}\]
Switching to conformal polarization,
\[\Delta_{\xi}[\delta(\theta+2k)\epsilon_{\mathcal{N}}]=-(\theta+2k)\Delta_{\xi }(n^{\mu}\delta l_{\mu})\epsilon_{\mathcal{N}}-2(\pounds_{\delta l}w_{\xi}+ \pounds_{l}\Delta_{\xi}(n^{\mu}\delta l_{\mu}))\epsilon_{\mathcal{N}}. \tag{5.31}\]
In the last term we can use
\[\pounds_{l}\Delta_{\xi}(n^{\mu}\delta l_{\mu})\epsilon_{\mathcal{N}}=d\Delta_{ \xi}(n^{\mu}\delta l_{\mu})\wedge\epsilon_{S}. \tag{5.32}\]
Adding up we recover (5.21) for \(b=0\),
\[\Delta_{\xi}\theta^{\mbox{\tiny Conf}}=-2d[\Delta_{\xi}(n^{\mu}\delta l_{\mu}) \epsilon_{S}]. \tag{5.33}\]
This derivation allows one to appreciate that the potential anomaly coming from the term \(\delta k\) present in \(\theta^{\mbox{\tiny Conf}}\) cancels out with a contribution coming from the spin-1 term. The subtle point is that even though the anomaly of \(k\) does not vanish for \(\delta l^{\mu}=0\), the anomaly of \(\delta k\,\epsilon_{\cal N}\) does. As a consequence, it is crucial in order for the anomaly to be a boundary term that the spin-1 momentum is \(\pi_{\mu}\), and not \(\eta_{\mu}\) only, as in [18]. In fact the restriction \(n_{\mu}\delta l^{\mu}\ \stackrel{{{\cal N}}}{{=}}\,0\) they use is not class-I invariant, and introduces an anomaly in their symplectic potential that cannot be eliminated requiring \(\delta l_{\mu}\ \stackrel{{{\cal N}}}{{=}}\,0\). In conclusion, we state again that the condition for anomaly-freeness is \(\delta l_{\mu}\ \stackrel{{{\cal N}}}{{=}}\,0\) (and not some other non-covariant restriction), or alternatively the full \(\delta l^{\mu}\ \stackrel{{{\cal N}}}{{=}}\,0\).
### Boundary symmetry groups
In this Section we review the different boundary symmetry groups that have been considered in the literature, with emphasis on the different background structures kept fixed. We show how they can be derived in a simple way using \(\delta_{\xi}\) and \(w_{\xi}\). We highlight how each additional restriction on the variations affects the symmetry group, the extension of the symmetry vector fields, and the anomalies.
The minimal requirement that we consider is that the boundary should be a null surface, which restricts the variations to satisfy \(l_{\mu}\delta l^{\mu}\ \stackrel{{{\cal N}}}{{=}}\,0\). The residual diffeomorphisms that preserve the boundary and the condition that it is null must satisfy \(\pounds_{\xi}\Phi\ \stackrel{{{\cal N}}}{{=}}\,0\) and
\[l_{\mu}\delta_{\xi}l^{\mu}=l_{\mu}(\pounds_{\xi}l^{\mu}+\Delta_{\xi}l^{\mu})\ \stackrel{{{\cal N}}}{{=}}\,l_{\mu}l_{\nu}\pounds_{\xi}g^{\mu\nu} \ \stackrel{{{\cal N}}}{{=}}\,0. \tag{5.34}\]
These equations are solved by \(\xi^{\Phi}\ \stackrel{{{\cal N}}}{{=}}\,0\), namely any diffeomorphism tangent to the boundary. For later convenience, we parametrize the tangent vectors in affine coordinates as
\[\xi=\tau(\lambda,x^{A})\partial_{\lambda}+Y^{A}(\lambda,x^{B})\partial_{A}+\dots \tag{5.35}\]
The restriction of these vectors to \({\cal N}\) is arbitrary, hence they span the whole group \(\mbox{Diff}({\cal N})\).15 The dots here denote the extension of the vector field off the hypersurface, which is also arbitrary. This means that the anomaly (5.10) is also arbitrary.
Footnote 15: Here \(\tau\) is a free function and not the NP coefficient. NP notation will not be used in the rest of the paper.
As we will see below, the gravitational charges depend on the symmetry vectors and their derivatives, and therefore on the extension. More precisely, the charges turn out to depend on the anomaly (5.10), which is determined by the choice of \(f\) and the extension component \(\bar{\xi}^{\Phi}\). The relevant symmetry vector fields for the charges are thus \(\xi^{a}\partial_{a}+\Phi\bar{\xi}^{\Phi}\partial_{\Phi}\). They close under the Lie bracket and span the subgroup \(\mbox{Diff}({\cal N})\ltimes\mathbb{R}^{\cal N}\) parametrized by four free functions on \({\cal N}\). Therefore one can conclude that \(\mbox{Diff}({\cal N})\ltimes\mathbb{R}^{\cal N}\) is the symmetry group of the most general phase space with arbitrary metric variations on a given null boundary. This group appears also in [47]. At this stage, the physical relevance of a symmetry group not intrinsic to the boundary is unclear to us. It means in particular that the charges can have arbitrary values even though the restriction of the vector field to the boundary hypersurface vanishes.16 For this reason we think that this enlarged phase space does not provide a good handle
for the study of dynamical geometric properties of a null hypersurface.17
Footnote 17: A different viewpoint is taken in [23], where it is argued that this additional free parameter should be taken seriously as a characterization of the near-null hypersurface geometry.
Next, we add the restriction that the tangent vector be fixed, \(\delta l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\), as done for instance in [8]. The residual diffeomorphisms must satisfy
\[\delta_{\xi}l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0. \tag{5.36}\]
The analysis of these three conditions can be split in two cases, corresponding to null and space-like components:
\[n_{\mu}\delta_{\xi}l^{\mu}=n_{\mu}l_{\nu}\pounds_{\xi}g^{\mu\nu} -\delta_{\xi}\ln f\,\stackrel{{\mathcal{N}}}{{=}}\,0 \Rightarrow\quad\text{restricts the extension} \tag{5.37}\] \[m_{\mu}\delta_{\xi}l^{\mu}=m_{\mu}\pounds_{\xi}l^{\mu}\, \stackrel{{\mathcal{N}}}{{=}}\,0 \Rightarrow\quad\text{restricts the allowed diffeos} \tag{5.38}\]
To understand the first condition, observe that in adapted coordinates it contains the term \(\partial_{\Phi}\xi^{\Phi}\,\stackrel{{\mathcal{N}}}{{=}}\, \bar{\xi}^{\Phi}\). This equation fixes the extension component \(\bar{\xi}^{\Phi}\), and the symmetry group is thus \(\text{Diff}(\mathcal{N})\). Having fixed part of the extension however makes closure under the spacetime Lie bracket non-trivial, and an additional condition is needed. We will come back to this shortly. To understand the second restriction, it is easiest to take a coordinate system \((\lambda,\Phi,x^{A})\) with \(\lambda\) affine parameter, so that (2.31) holds. Then
\[\delta_{\xi}l^{\mu}=(\delta_{\xi}\ln f-\bar{\xi}^{\Phi})l^{\mu}-f\partial_{ \lambda}\xi^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\quad \Rightarrow\quad\left\{\begin{array}{ll}\mu=\Phi&\quad\xi\in T\mathcal{N} \\ \mu=\lambda&\quad\bar{\xi}^{\Phi}=\delta_{\xi}\ln f-\partial_{\lambda}\tau\\ \mu=A&\quad\partial_{\lambda}Y^{A}=0\end{array}\right. \tag{5.39}\]
The \(\lambda\) component shows that in these coordinates the extension is restricted via a function of the time derivative of \(\tau:=\xi^{\lambda}\) and of \(\delta_{\xi}f\).18 Comparing with (5.10), we see that the anomalies for these residual diffeos read
Footnote 18: It should be clear that the equation can not be solved taking \(f\) as a function of an arbitrary extension, because that would make the scaling of the normal dependent on the diffeomorphism considered.
\[w_{\xi}=\pounds_{\xi}\ln f-\dot{\tau}. \tag{5.40}\]
They are thus determined by the parameters of the symmetry vector field plus the choice of \(f\). The cross-section components \(A\) show that the tangential diffeomorphisms must be time-independent, \(\dot{Y}^{A}=0\). Namely they become the \(\text{Diff}(S)\) super-Lorentz transformations encountered at future null infinity in [48, 49]. In the following, we will refer to time-independent arbitrary diffeomorphisms of the cross-sections as super-Lorentz. On shell of (5.39), the symmetry vector fields (5.35) reduce to
\[\xi=\tau(\lambda,x^{A})\partial_{\lambda}+Y^{A}(x^{B})\partial_{A}-\Phi(\dot{ \tau}-\delta_{\xi}\ln f)\partial_{\Phi}+\dots, \tag{5.41}\]
where the dots include the part of the extension left arbitrary.
These vector fields however do _not_ close under the spacetime Lie bracket, see Appendix C for a proof. Closure is achieved only if we require \(\delta_{\xi}f=0\), or if \(\Delta_{\xi}f=0\). The first option is a priori more general, since it can be achieved without any further restriction on \(\xi\), simply by requiring \(\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\). Having done so, the vector fields are given by
\[\xi=\tau\partial_{\lambda}+Y^{A}\partial_{A}-\Phi\dot{\tau}\partial_{\Phi}+ \dots,\qquad\tau(\lambda,x^{A}),\quad Y^{A}=Y^{A}(x^{B}). \tag{5.42}\]
The extension \(\xi^{\Phi}\) no longer depends on \(f\), and is entirely determined by the parameters of the symmetry vector fields at \({\cal N}\). These vector fields close under the Lie bracket, and span the subgroup
\[{\rm Diff}_{l}({\cal N}):={\rm Diff}(S)\ltimes{\rm Diff}(\mathbb{R})^{S}\subset{\rm Diff }({\cal N}). \tag{5.43}\]
This is the 'little group' of diffeomorphisms preserving the null geodesic congruence spanned by \(l\) on \({\cal N}\). The semi-direct product structure follows from (C.2), and it means in particular that the identification of the group components with \(Y^{A}\) and \(\tau\) is not canonical, but relies on the choice of affine coordinates we made. This is a situation familiar from the BMS and CFP groups.
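The closure can also be checked with a short symbolic computation; the following is a minimal sympy sketch (with a single cross-section coordinate \(x\), keeping only the part of the extension displayed explicitly in (5.42) and dropping the arbitrary higher-order terms denoted by the dots), verifying that the bracket of two such vector fields is again of the form (5.42):

```python
import sympy as sp

# Coordinates (lambda, Phi, x): lambda along the null generators, Phi the
# boundary-defining function (N at Phi = 0), x a single cross-section coordinate.
lam, Phi, x = sp.symbols('lambda Phi x')
coords = (lam, Phi, x)

def xi(tau, Y):
    # Components (xi^lambda, xi^Phi, xi^x) of xi = tau d_lambda + Y d_x - Phi tau_dot d_Phi,
    # i.e. (5.42) truncated at first order in Phi.
    return [tau, -Phi * sp.diff(tau, lam), Y]

def bracket(u, v):
    # Spacetime Lie bracket [u, v]^mu = u^nu d_nu v^mu - v^nu d_nu u^mu.
    return [sp.expand(sum(u[n] * sp.diff(v[m], coords[n])
                          - v[n] * sp.diff(u[m], coords[n]) for n in range(3)))
            for m in range(3)]

tau1 = sp.Function('tau1')(lam, x)
tau2 = sp.Function('tau2')(lam, x)
Y1 = sp.Function('Y1')(x)
Y2 = sp.Function('Y2')(x)

br = bracket(xi(tau1, Y1), xi(tau2, Y2))
tau12, Y12 = br[0], br[2]   # parameters of the resulting vector field

print(sp.simplify(br[1] + Phi * sp.diff(tau12, lam)))  # 0: Phi-component equals -Phi*d_lambda(tau12)
print(sp.simplify(sp.diff(Y12, lam)))                  # 0: Y12 depends on x only (super-Lorentz)
```

The output confirms that the \(\Phi\)-component of the bracket is fixed by the new parameter \(\tau_{12}\) and that \(Y_{12}\) is time-independent, in line with the semi-direct structure (5.43).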
There is also another way to understand the importance of adding the condition \(\delta l_{\mu}\,\stackrel{{{\cal N}}}{{=}}\,0\). Without it, the four conditions (5.34) and (5.39) imply only three restrictions of the metric variations, because varying \(f\) does not affect the metric. Therefore the residual diffeomorphisms (5.41) do not correspond to a complete gauge fixing like (2.29). Imposing \(\delta l_{\mu}\,\stackrel{{{\cal N}}}{{=}}\,0\) turns the four conditions into four conditions on the metric variations, given by:
\[\delta l^{\mu}=\delta l_{\mu}\,\stackrel{{{\cal N}}}{{=}}\,0\quad \Rightarrow\quad l^{\mu}\delta g_{\mu\nu}\,\stackrel{{{\cal N}}}{{= }}\,0. \tag{5.44}\]
Notice that the last equation is class-III invariant, hence the symmetry group satisfying \(l^{\mu}\delta_{\xi}g_{\mu\nu}\,\stackrel{{{\cal N}}}{{=}}\,0\) depends only on the equivalence class of normals and not on a choice of representative. Diffeomorphisms preserving this condition satisfy \(l^{\mu}\pounds_{\xi}g_{\mu\nu}=0\). In affine coordinates, and restricting \(\xi\) to be tangential, we get
\[\xi^{\Phi}\,\stackrel{{{\cal N}}}{{=}}\,0,\qquad\partial_{\Phi} \xi^{\Phi}+\partial_{\lambda}\xi^{\lambda}\,\stackrel{{{\cal N}}}{{= }}\,0,\qquad\partial_{\lambda}\xi^{A}\,\stackrel{{{\cal N}}}{{=} }\,0, \tag{5.45}\]
that coincide with the restriction of (5.39) to \(\delta_{\xi}f=0\).19
Footnote 19: Notice that it is necessary to restrict upfront to tangential diffeomorphisms, otherwise one obtains the larger set of solutions with \(\xi^{\Phi}\,\stackrel{{{\cal N}}}{{=}}\,f(x^{A})\). We also point out that the weaker set \(l^{\mu}\delta g_{\mu\nu}=0\) misses \(\partial_{\lambda}\xi^{\mu}+\partial_{\Phi}\xi^{\Phi}=0\), namely (5.37).
Let us now see the effect of adding \(\delta k=0\) on top of the previous conditions. From (3.6) we have
\[\delta_{\xi}k=-\frac{1}{2}n^{\mu}\nabla_{\mu}(l_{\nu}l_{\rho}\pounds_{\xi}g^{ \nu\rho})\,\stackrel{{{\cal N}}}{{=}}\,0. \tag{5.46}\]
This equation involves the first derivatives off the hypersurface of the metric. What it does is to further restrict the diffeomorphisms to preserve the condition of affine coordinates, namely the metric in the form (2.31), as opposed to the more general form (2.29). It is easy to see that in affine coordinates the equation simplifies to
\[\ddot{\tau}=0. \tag{5.47}\]
This means that we can write \(\tau=T(x^{A})+\lambda W(x^{A})\) in terms of a supertranslation with parameter \(T\) and a Weyl transformation with parameter \(W\). We have thus recovered the symmetry group of [1],
\[G^{\rm CFP}:=({\rm Diff}(S)\ltimes\mathbb{R}^{S}_{W})\ltimes\mathbb{R}^{S}_{T}. \tag{5.48}\]
We will refer to this group as CFP from the authors of [1]. An alternative good name is BMSW group. Its vector fields read
\[\xi=T\partial_{\lambda}+Y^{A}\partial_{A}+W(\lambda\partial_{\lambda}-\Phi \partial_{\Phi})+\ldots. \tag{5.49}\]
in affine coordinates. As for the anomaly, it is still given by (5.40), which now reads
\[w_{\xi}=T\partial_{\lambda}\ln f+W(\lambda\partial_{\lambda}\ln f-1)+Y^{A} \partial_{A}\ln f. \tag{5.50}\]
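As a short check of the group structure (5.48), restricting the tangential bracket to vectors with \(\tau_{i}=T_{i}+\lambda W_{i}\) gives composed parameters

\[Y_{12}=[Y_{1},Y_{2}],\qquad W_{12}=Y_{1}[W_{2}]-Y_{2}[W_{1}],\qquad T_{12}=Y_{1}[T_{2}]-Y_{2}[T_{1}]+T_{1}W_{2}-T_{2}W_{1},\]

so the super-translations form an abelian ideal acted upon by both the super-Lorentz and the Weyl transformations, consistently with the nested semi-direct products in (5.48).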
To be precise, this is the symmetry group if \({\cal N}\) is complete. If it is semi-complete instead, super-translations must be dropped because they do not preserve the boundary of \({\cal N}\)[1]. This happens for instance if \({\cal N}\) is the boundary of a causal diamond [50] or a light-cone. If \({\cal N}\) has two boundaries, as for example when connecting two space-like surfaces, then \(\tau\) must vanish at two different values of \(\lambda\). This removes both super-translations and Weyl rescalings from the CFP group, which reduces in this case to \({\rm Diff}(S)\) only. The situation is different for the \({\rm Diff}_{l}({\cal N})\) group: having arbitrary time dependence in the super-translations, it survives on an \({\cal N}\) with two boundaries as \({\rm Diff}(S)\) together with the left-over super-translations satisfying \(\tau(\lambda_{0})=\tau(\lambda_{1})=0\) (which do form a group). This is a physical set-up in which the new group \({\rm Diff}_{l}({\cal N})\) is important.
If the covariant phase space is restricted to describe only NEHs, then the background structure can be strengthened even further to allow only for a constant rescaling of the normal, and then the group is restricted to constant \(W\)'s and conformal isometries of the sphere [5],
\[G^{\rm AKKL}:=({\rm SL}(2,{\mathbb{C}})\ltimes{\mathbb{R}}_{T}^{S})\times{ \mathbb{R}}_{W}^{+}. \tag{5.51}\]
We will refer to this group as AKKL from the authors of [5]. An alternative good name is NEH group.
Let us comment on the method we used to derive the symmetry groups \({\rm Diff}_{l}({\cal N})\) and \(G^{\rm CFP}\). This was based on identifying the diffeomorphisms that preserve the variations required to vanish, as opposed to the more common approach in the literature that consists in identifying the isometries of the background structure. But it is easy to prove the equivalence of the two approaches in this context. From (5.10), we see that \(\delta_{\xi}l^{\mu}\stackrel{{{\cal N}}}{{=}}0\) is equivalent to \(\pounds_{\xi}l^{\mu}\stackrel{{{\cal N}}}{{=}}w_{\xi}l^{\mu}\), namely the diffeomorphisms that preserve the equivalence class \([l^{\mu}=A^{\mu}]\). Same story for \(\delta_{\xi}l_{\mu}\stackrel{{{\cal N}}}{{=}}0\). For \(\delta_{\xi}k\stackrel{{{\cal N}}}{{=}}0\), we see from (5.15) that it is equivalent to \(\pounds_{\xi}k\stackrel{{{\cal N}}}{{=}}w_{\xi}(k+\pounds_{l}\ln w_{\xi})\), namely the diffeomorphisms that preserve the equivalence class (2.8). The three equations for the Lie derivatives of \(l^{\mu},l_{\mu}\) and \(k\) are indeed the conditions used in [1].
We also remark the importance of using affine coordinates in order to solve for the vector fields that satisfy the phase space restrictions. The conditions (5.36) and (5.55) are in fact complicated in arbitrary coordinates, and boil down to simple statements about time-independence in affine coordinates. More importantly, the \(\xi\)'s solving these equations in arbitrary coordinates depend explicitly on the metric, and can be characterized in a metric-independent way only in affine coordinates. What makes affine coordinates special is that they correspond to a gauge fixing whose preservation coincides with preserving the background structure. Otherwise one cannot describe the symmetry vector fields in a universal way, and must work with field-dependent diffeomorphisms.20
Footnote 20: This observation should be contrasted with the analysis in the CFP paper, where the vector fields were characterized in a metric-independent way independently of the choice of coordinates. We believe that the reason for this is that their characterization is done directly in terms of intrinsic quantities on the hypersurface only, and therefore in a metric-independent way (as shown by (5.44), it is only by looking at spacetime restrictions that the symmetry group can be seen as preserving a metric gauge-fixing). We also remark that when they construct a spacetime diffeomorphism representative of the symmetry, they _define_ it to match the intrinsic diffeomorphism when restricted to \({\cal N}\), see their (5.6). We can do the same here: once the metric-dependent vector fields are found in arbitrary coordinates, we can do an intrinsic diffeomorphism on \({\cal N}\) that maps them to the metric-independent one in affine coordinates.
How about the Robin-type condition (4.6)? In this case, (5.46) is replaced by
\[\delta_{\xi}k=-\frac{1}{2}\pounds_{l}(\nabla\cdot\xi), \tag{5.52}\]
which in affine coordinates becomes
\[-2\ddot{\tau}=\partial_{\lambda}(\tau\theta_{\lambda}+D_{A}Y^{A})=\dot{\tau} \theta_{\lambda}+\tau\dot{\theta}_{\lambda}+Y[\theta_{\lambda}], \tag{5.53}\]
where \(D_{A}\) denotes the derivative on the cross-section, and \(\theta_{\lambda}\) is the expansion of the affine normal. The RHS vanishes for a NEH, but not in general. The novelty with respect to the previous cases is the metric-dependence of the equation in affine coordinates. We don't know the general solution to this equation, but it is clear that it will be a metric-dependent function \(\tau=\tau(\lambda,T,W,Y;g)\), where \(T(x^{A})\) and \(W(x^{A})\) are integration constants. These vector fields are labelled by the same parameter space as (5.48), but they are field-dependent. This case will be studied elsewhere.
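To make the metric dependence explicit, here is a minimal formal sketch: integrating (5.53) once in \(\lambda\) gives

\[2\dot{\tau}+\tau\theta_{\lambda}=c(x^{A})-D_{A}Y^{A},\]

with \(c(x^{A})\) an integration function, i.e. a first-order linear equation whose solution

\[\tau(\lambda)=e^{-\frac{1}{2}\int_{\lambda_{0}}^{\lambda}\theta_{\lambda}d\lambda^{\prime}}\Big[T(x^{A})+\frac{1}{2}\int_{\lambda_{0}}^{\lambda}e^{\frac{1}{2}\int_{\lambda_{0}}^{\lambda^{\prime}}\theta_{\lambda}d\lambda^{\prime\prime}}\big(c(x^{A})-D_{A}Y^{A}\big)d\lambda^{\prime}\Big]\]

depends on the metric through \(\theta_{\lambda}\) and \(D_{A}\). On a NEH, \(\theta_{\lambda}=0\) and \(D_{A}Y^{A}\) is \(\lambda\)-independent, so this reduces to \(\tau=T+\lambda W\) with \(2W=c-D_{A}Y^{A}\), recovering the CFP form (5.47).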
We have shown what happens when one implements one by one the vanishing of \(n_{\mu}\delta l^{\mu}\), \(\delta l^{\mu}\), \(\delta l_{\mu}\) and \(\delta k\), and how it determines the symmetry groups \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\), \(\mathrm{Diff}(\mathcal{N})\), \(\mathrm{Diff}_{l}(\mathcal{N})\), \(G^{\mathrm{CFP}}\), and their anomalies. We proceeded in this specific order, because it is the one that appears the most useful to us, but with the same method one can consider any mixture of partial implementations. Let us briefly comment on a few of these partial alternatives.
The condition (5.47) can be derived also without imposing the condition \(\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\). To see that, we start from the general formula (3.5). Restricting to \(\delta l^{\mu}\stackrel{{\mathcal{N}}}{{=}}0\), we have
\[\delta k=\frac{1}{2}l^{\mu}l^{\nu}n^{\rho}\nabla_{\rho}\delta g_{\mu\nu}-n^{ \mu}l^{\nu}l^{\rho}\nabla_{\rho}\delta g_{\mu\nu}. \tag{5.54}\]
The residual diffeomorphisms preserving this condition must satisfy
\[\delta_{\xi}k=l^{\mu}l^{\nu}n^{\rho}(R^{\sigma}{}_{\mu\nu\rho}\xi_{\sigma}- \nabla_{\mu}\nabla_{\nu}\xi_{\rho})\stackrel{{\mathcal{N}}}{{=} }0. \tag{5.55}\]
The term in brackets can be recognized as a property of a Killing vector, but the allowed \(\xi\)'s are here more general since only a specific scalar contraction of that term is being imposed to vanish, and on the hypersurface only. The \(\xi\)'s solving this equation are complicated functions of the metric in general, but in affine coordinates it gives back (5.47). As explained earlier though, in the extended phase space with \(\delta l_{\mu}\neq 0\) the vector fields don't close under the spacetime Lie bracket, and don't correspond to the residual gauge fixings preserving (2.31).
Suppose now that we require \(\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) and/or \(\delta k=0\) without fixing \(l^{\mu}\). Imposing \(\delta_{\xi}l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) can only be solved if \(\delta f=0\), and this imposes no restriction on the symmetry vector fields. We have \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\) with an arbitrary extension \(\bar{\xi}^{\Phi}\) and general anomaly (5.10), and if we add \(n_{\mu}\delta_{\xi}l^{\mu}=0\) we have \(\mathrm{Diff}(\mathcal{N})\) with anomaly (5.40).
Preserving \(k\) on top of \(\delta_{\xi}l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) leads to
\[\delta_{\xi}k\stackrel{{\mathcal{N}}}{{=}}2l_{\rho}\nabla^{(\mu }\xi^{\rho)}n^{\nu}\nabla_{\mu}l_{\nu}-l^{\mu}l^{\nu}n^{\rho}\nabla_{\rho} \nabla_{\mu}\xi_{\nu}. \tag{5.56}\]
This equation is not class-III invariant, therefore the diffeomorphisms solving this equation depend on the choice of normal, and cannot be characterized in purely geometric terms. For instance for a null diffeo \(\xi=\tau\partial_{\lambda}\), we get \(k(\pounds_{l}+k)\tau/f=0\), which is solved by any \(\tau\) for \(k=0\), and by \(\dot{\tau}=(\dot{f}-k)\tau/f\) for \(k\neq 0\). The situation is similar if we preserve \(k\) without preserving \(l_{\mu}\): the above equation just becomes more complicated, and remains non-class-III invariant. We conclude that imposing \(\delta k\stackrel{{\mathcal{N}}}{{=}}0\) without \(\delta l^{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) leads to symmetry groups which depend on structures unrelated to the geometry of \(\mathcal{N}\).
For related leaky boundary conditions and symmetry groups see also [51, 52, 53, 54, 55, 56, 47, 57].
## 6 Charges and fluxes
Having discussed the different symmetry groups associated with a larger or smaller background structure, we now briefly review how the covariant phase space allows one to associate Noether charges
and Hamiltonian generators to these symmetries. In particular, we will discuss how the choice of polarization affects the definition of charges and their fluxes, and what is the role played by anomalies.
In the context of the covariant phase space, (3.8) represents the freedom in choosing the symplectic potential: \(\ell\) comes from the freedom of adding a boundary term to the Lagrangian without affecting the field equations, and \(\vartheta\) from the fact that the Lagrangian only determines the symplectic potential up to an exact 3-form. We refer to the choice of \(\theta\) that can be read directly from \(\delta L\) without any additional information as the standard, or bare, choice.21
Footnote 21: This bare choice can also be mathematically selected if one defines the potential using the homotopy operator [14, 58, 16].
If one starts from a covariant Lagrangian \(L\) and a covariant symplectic potential one obtains the following well-known formulas [59] for the Noether current \(j_{\xi}\),
\[j_{\xi}:=I_{\xi}\theta-i_{\xi}L\,\hat{=}\,dq_{\xi},\qquad dj_{\xi}\,\hat{=}\,0, \tag{6.1}\]
and for the infinitesimal Hamiltonian generator \(h_{\xi}\),
\[\not{\delta}h_{\xi}:=-I_{\xi}\omega=\delta I_{\xi}\theta-di_{\xi}\theta\,\hat {=}\,d(\delta q_{\xi}-q_{\delta\xi}-i_{\xi}\theta). \tag{6.2}\]
Here we used the exterior calculus notation as in [11], \(\omega:=\delta\theta\) is the symplectic 2-form current, \(\,\hat{=}\,\) means on-shell of the field equations and \(q_{\xi}\) is the Noether surface charge. A flux-balance law for this charge can be derived taking the pull-back of (6.1) along a lateral boundary (time-like or null). This eliminates the last term when \(\xi\) is restricted to a symmetry vector, since the latter is tangent to the boundary. It follows that \(I_{\xi}\theta\) is the flux determining the variation of the charge along that boundary.
There are however two problems with (6.1). The first is that the associated Noether charges may be of little practical use. For instance with the bare choice \(\theta=\theta^{\mbox{\tiny EH}}\) given by (3.1), the resulting Noether charges are given by the Komar 2-form [59]. These have notorious problems such as wrong numerical factors, and not being conserved even in the absence of radiation. The second problem is that they don't coincide in general with the Hamiltonian generators, as one can see from (6.2). This discrepancy is referred to as the problem of integrability of the infinitesimal generator, see e.g. [58, 60, 45, 49, 61].
The problems can be addressed using the Wald-Zoupas procedure [4], which aims at prescribing a (possibly unique) set of charges requiring them to coincide with the canonical generators when a physically identified flux vanishes. This preferred flux is selected from the equivalence class (3.8) based on covariance and physical criteria, for instance such that the charges are constant under conservative boundary conditions, or for perturbations around special solutions corresponding to stationary space-times. We then showed in [13] how this procedure can be extended to include corner contribution, anomalies and field-dependent diffeomorphisms. We also showed under which conditions the resulting WZ charges can be identified as Noether charges for a specific choice of boundary Lagrangian (see also [15, 10, 11, 12] on this).
Let us briefly recap some details of the WZ procedure as extended in [13]. We consider here only field-independent diffeomorphisms, because this is sufficient to understand the symmetry groups described in the previous Section. The case with field-dependent diffeomorphisms is discussed at the end. Starting from \(\theta=\theta^{\mbox{\tiny EH}}\) and a given hypersurface, we select a preferred symplectic potential \(\theta^{\prime}\) in the equivalence class (3.8) satisfying three criteria:
0. \(\delta\vartheta=0\), so that \(\omega^{\prime}=\omega\);
1. \(\Delta_{\xi}\theta^{\prime}=0\), so that the preferred potential is anomaly-free;
2. \(\theta^{\prime}\) is in the form \(p\delta q\) where: Case I: \(\delta q=0\) for conservative boundary conditions, which imply that \(\omega^{\prime}\) has vanishing pull-back on the boundary; Case II: \(p=0\) for points in phase space satisfying a useful notion of stationarity, and the pull-back of \(\omega^{\prime}\) is non-vanishing.
Ideally, these criteria should be enough to select a unique \(\theta^{\prime}\).22 If this fails, additional or revisited conditions should be considered. Notice that one typically imposes some boundary conditions also in case II, weaker than the conservative ones, and needed to preserve a certain boundary structure of physical relevance, for instance in order to characterize gravitational radiation. The boundary structure shared by a certain class of metrics is referred to as their _universal structure_. The conservative boundary conditions of case I are of the same type that are used in the variational principle. We refer to the generic, weaker set of boundary conditions of case II as leaky.
Footnote 22: This uniqueness may also require fixing field-space constant terms using a special solution as reference. In the following we neglect the discussion of these constant terms.
If the preferred \(\theta^{\prime}\) and its Lagrangian \(L^{\prime}=L+d\ell\) are covariant, then the formulas (6.1) and (6.2) are still valid with primes everywhere. The importance of the condition 2 is then clear: when \(\theta^{\prime}\) vanishes, the Noether charges coincide with the canonical generator for field-independent diffeomorphisms. Furthermore, they are automatically conserved in the subset of the phase space satisfying the conditions of case I or II. The new Noether charges are related to those of \(\theta\) by [15, 16, 11]
\[q^{\prime}_{\xi}=q_{\xi}+i_{\xi}\ell-I_{\xi}\vartheta, \tag{6.3}\]
and are sometimes referred to as boundary-improved, or improved for short, Noether charges.
However, there is a caveat. In spite of the covariance requirement of condition 1, the selection process may introduce anomalies. This happens if the preferred \(\theta^{\prime}\) is associated to a new Lagrangian \(L^{\prime}=L+d\ell\) whose boundary term is anomalous: \(a^{\prime}_{\xi}:=\Delta_{\xi}\ell\neq 0\). In this case the new charges do not satisfy Noether's theorem in its original form (6.1), because that relies on the covariance of \(L\), specifically on the fact that \(\delta_{\xi}L=\pounds_{\xi}L=di_{\xi}L\). If the boundary Lagrangian is anomalous we have instead \(\delta_{\xi}L^{\prime}=d(i_{\xi}L^{\prime}+a^{\prime}_{\xi})\) and the formula becomes
\[j^{\prime}_{\xi}:=I_{\xi}\theta^{\prime}-i_{\xi}L^{\prime}-a^{\prime}_{\xi}\,\hat{=}\,dq^{\prime}_{\xi},\qquad dj^{\prime}_{\xi}\,\hat{=}\,0. \tag{6.4}\]
Condition 2 is no longer sufficient to guarantee the conservation of the Noether charges \(q^{\prime}_{\xi}\). Even worse, for a generic anomaly \(a^{\prime}_{\xi}\) we are not even guaranteed that the charges are conserved for isometries. This potential problem is avoided thanks to condition 1. In fact, the pull-back of the Hamiltonian generators on the lateral boundary gives
\[-I_{\xi}\omega=\delta I_{\xi}\theta^{\prime}-(\delta_{\xi}-\pounds_{\xi}) \theta^{\prime}-di_{\xi}\theta^{\prime}\,\hat{=}\,\delta(dq^{\prime}_{\xi}+a^ {\prime}_{\xi})-(\delta_{\xi}-\pounds_{\xi})\theta^{\prime}-di_{\xi}\theta^{ \prime}. \tag{6.5}\]
Here we used condition 0 and \(d\theta^{\prime}\equiv 0\) (that follows since \(\theta^{\prime}\) is only defined after pull-back) in the first equality, and (6.4) in the second. If condition 1 holds the generator is integrable once the preferred flux is subtracted:
\[-I_{\xi}\omega+di_{\xi}\theta^{\prime}=\delta I_{\xi}\theta^{\prime}\,\hat{=} \,\delta(dq^{\prime}_{\xi}+a^{\prime}_{\xi}). \tag{6.6}\]
Furthermore, condition 1 also implies that the Lagrangian anomaly must be spacetime-exact, specifically that \(a^{\prime}_{\xi}=ds_{\xi}\) where \(\delta s_{\xi}=-A^{\prime}_{\xi}\) and \(A^{\prime}_{\xi}\) is the symplectic anomaly of the preferred \(\theta^{\prime}\)[11]. This makes it possible to define the WZ charges
\[q^{\rm WZ}_{\xi}:=q^{\prime}_{\xi}+s_{\xi}. \tag{6.7}\]
They satisfy the flux-balance laws
\[dq_{\xi}^{\text{\tiny WZ}}\,\hat{=}\,I_{\xi}\theta^{\prime},\qquad d\delta q_{ \xi}^{\text{\tiny WZ}}\,\hat{=}\,-\,I_{\xi}\omega+di_{\xi}\theta^{\prime}. \tag{6.8}\]
It follows that they are conserved _and_ provide Hamiltonian generators when \(\theta^{\prime}\) vanishes, be it for conservative boundary conditions, or leaky boundary conditions around stationary configurations.
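As a one-line check of the first relation in (6.8): pulling back (6.4) along the lateral boundary, the term \(i_{\xi}L^{\prime}\) drops since \(\xi\) is tangent (by the same argument used for (6.1)), and using \(a^{\prime}_{\xi}=ds_{\xi}\) one finds

\[dq^{\text{\tiny WZ}}_{\xi}=dq^{\prime}_{\xi}+ds_{\xi}\,\hat{=}\,I_{\xi}\theta^{\prime}-a^{\prime}_{\xi}+a^{\prime}_{\xi}=I_{\xi}\theta^{\prime}.\]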
This is the Wald-Zoupas prescription. We stress that its keystone is condition 1. Without condition 1, we would in fact be stuck with (6.4), without the possibility of using (6.5) to guarantee that the anomaly can be reabsorbed in the definition of the charge. With \(a^{\prime}_{\xi}\) (or even just part of it) still on the LHS of (6.4), stationarity of \(\theta^{\prime}\) would fail to give conserved charges, hence condition 2 would entirely lose its physical relevance.
We also stress that even if the selected \(\theta^{\prime}\) is covariant and the Lagrangian anomaly \(a^{\prime}_{\xi}\) drops out of the flux-balance laws in the end, anomalous transformations can still be present, since
\[I_{\xi}\theta^{\prime}=p\pounds_{\xi}q+p\Delta_{\xi}q. \tag{6.9}\]
This anomaly contribution is physically correct, because it is the right quantity to have so that background structures don't contribute to the flux: remember in fact that \(\delta_{\xi}l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\) but \(\pounds_{\xi}l^{\mu}\neq 0\), for instance.
A natural question at this point is whether the WZ charges (6.7) can always be interpreted as Noether charges like (6.3) for some boundary Lagrangian. The answer is yes iff \(s_{\xi}=-\Delta_{\xi}c\) for some local 2-form \(c\) constructed out of the fields and the background structure, and in this case the correct boundary Lagrangian is the anomaly-free choice \(\ell+dc\). So we have two approaches to constructing charges: the improved Noether charge, based on selecting a specific \(\theta^{\prime}\) and \(\ell\); and the WZ prescription, based on selecting only a preferred \(\theta^{\prime}\). The convergence of the two approaches is obtained when the WZ charges can be derived as improved Noether charges with the choice of \(\ell\) determined by a condition of covariance [13].
An important point to appreciate is that the charges are anomalous even if the symplectic potential is not. This is simply because the charges depend on the symmetry vector fields \(\xi\) which are generally anomalous. What one should require then is that the charge anomaly is sourced only by the \(\xi\)'s, namely that
\[\Delta_{\chi}q_{\xi}=\frac{\partial q_{\xi}}{\partial\xi}\Delta_{\chi}\xi=- \frac{\partial q_{\xi}}{\partial\xi}\pounds_{\chi}\xi=-q_{[\chi,\xi]}. \tag{6.10}\]
This is precisely what is guaranteed for the WZ charges thanks to the covariance requirements of symplectic potential and boundary Lagrangian.
It is also possible to drop condition 0, and consider a generalized WZ prescription based on 1 and 2 alone [15, 13]. This generalization will not be needed here, but it is necessary in order to obtain Brown-York charges at finite time-like boundaries with non-orthogonal corners [15, 17], and for the generalized angular momentum of \(\text{Diff}(S)\) at future null infinity [49]. The WZ charges are still given by (6.7), but where (6.3) has a non-trivial \(\vartheta\) term, and \(\omega^{\prime}\) replaces \(\omega\) in (6.8).
Summarizing, our viewpoint as put forward in [13] is that the crux of the WZ procedure is really condition 1 (or its alternative version as anomaly-freeness). Condition 0 can be dropped, and condition 2 should be interpreted as a framework rather than a unique set-up, meaning that different notions of conservative boundary conditions or stationarity conditions can be considered, in order to describe different physical problems.
### Wald-Zoupas conditions on null hypersurfaces
We now study which members of the family of symplectic potentials \(\theta^{\text{\tiny(b,c)}}\) in (3.33) satisfy the WZ conditions. We will recover the results of [1], show how they change for different polarizations, and how they can be extended to the relaxed phase space with \(\delta k\neq 0\).
Condition 0. Looking at (3.7), we see that it requires
\[\delta l^{\mu}=\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0. \tag{6.11}\]
These can be satisfied without any restriction on the dynamics, and correspond to the gauge fixing (5.44).
Condition 1. From Section 5.3 we know that the family with \(b=0\) and \(c\) arbitrary is covariant for either one of the two conditions in (6.11).
Condition 2, case I. We can distinguish two options for conservative boundary conditions. If we impose \(\delta\epsilon_{\mathcal{N}}=0\), then (3.33) vanishes for \(\delta l^{\mu}=\delta\gamma_{\mu\nu}=0\), which in turn imply \(\delta\theta=0\), and \((2-b)\delta k=0\). This gives us Dirichlet boundary conditions for \(b=2\), and strengthened Dirichlet conditions including \(\delta k=0\) for \(b\neq 2\). If we don't impose \(\delta\epsilon_{\mathcal{N}}=0\), then necessarily \(b=0\) and \(c=1\), and we find the conformal boundary conditions (4.6) of the York polarization.
Condition 2, case II. If we want stationarity to correspond to a shear and expansion-free surface, which we recall is equivalent to a NEH in vacuum, then we need
\[\big{(}\pi_{\mu}\delta l^{\mu}+(2-b)\delta k+(2-c)\delta\theta\big{)}\epsilon _{\mathcal{N}}-bk\delta\epsilon_{\mathcal{N}}=0. \tag{6.12}\]
The most general solution of this equation is
\[b=0,\qquad\delta l^{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0, \qquad\delta k\,\stackrel{{\mathcal{N}}}{{=}}\,\frac{c-2}{2} \delta\theta. \tag{6.13}\]
For \(c=2\), we recover the result of [1]: the symplectic potential
\[\theta^{\text{\tiny{CFP}}}=\sigma^{\mu\nu}\delta\gamma_{\mu\nu}\epsilon_{ \mathcal{N}}-\theta\delta\epsilon_{\mathcal{N}} \tag{6.14}\]
meets all the WZ criteria (and was argued in [1] to be unique under these conditions). For arbitrary \(c\), we find a 1-parameter family of covariant WZ potentials that satisfy the same stationarity condition,
\[\theta^{c}=\sigma^{\mu\nu}\delta\gamma_{\mu\nu}\epsilon_{\mathcal{N}}-(c-1) \theta\delta\epsilon_{\mathcal{N}}. \tag{6.15}\]
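To see how this comes about, note that on the constrained variations (6.13) the inaffinity and expansion terms of the \(b=0\) member of (3.33) cancel,

\[2\delta k+(2-c)\delta\theta=(c-2)\delta\theta+(2-c)\delta\theta=0,\]

so that (6.12) is satisfied and the symplectic potential collapses to the form (6.15).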
This includes the conformal polarization for \(c=1\). These potentials are associated with a phase space in which inaffinity is allowed to vary, but in a way fully constrained by the expansion via (6.13). These properties hold also if we further relax \(\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\). This defines a new phase space that could be interesting to further explore. The difficulty with this generalization of the CFP result is that the symmetry vector fields appear to be field-dependent and not universal, as we saw in Section 5.4.
### Stationarity on flat light-cones
The notion of stationarity as shear and expansion-free used above is solidly based on physical grounds: shear and expansion-free hypersurfaces capture the idea that no radiation is going through the surface, and include standard stationary examples such as non-expanding horizons and Killing horizons. However, this notion is not exhaustive of stationarity understood as lack of radiation, as there are
plenty of null hypersurfaces which possess shear and expansion even in the absence of gravitational waves. Consider for instance a light-cone in flat Minkowski space: it is expanding, hence the CFP flux (6.14) is non-zero, even though there is no actual dynamics taking place. This is an objectionable feature, which disconnects charge conservation from absence of radiation. It motivates the question of whether one can find a different potential leading to a vanishing flux on both non-expanding horizons and flat light-cones. This is not possible within the framework above, because the CFP symplectic potential is unique under the requirements of covariance and stationarity on NEHs. What we propose is to relax the notion of stationarity, from \(\theta^{\prime}=0\) to:
Case III: \(I_{\xi}\theta^{\prime}=0\) for _every_ symmetry vector field on the stationary solutions.
Namely, we require vanishing of the Noether flux, as opposed to vanishing of the symplectic flux. This requirement is weaker than \(\theta^{\prime}=0\), therefore the immediate consequence is a loss of uniqueness.23 While this appears bad at first sight, our point is that it can be compensated by the larger set of solutions that can be included. For the family of anomaly-free potentials,
Footnote 23: It is still stronger than the minimal requirement of stationarity for isometries, which is so weak that it is satisfied even by Komar.
\[I_{\xi}\theta^{c} =\big{[}\sigma^{\mu\nu}\delta_{\xi}\gamma_{\mu\nu}+\pi_{\mu} \delta_{\xi}l^{\mu}+2\delta_{\xi}k+(2-c)\delta_{\xi}\theta\big{]}\epsilon_{ \mathcal{N}}-(c-1)\theta\delta_{\xi}\epsilon_{\mathcal{N}}\] \[=\big{[}\sigma^{\mu\nu}\pounds_{\xi}\gamma_{\mu\nu}+\pi_{\mu} \pounds_{\xi}l^{\mu}+2(\pounds_{\xi}k-\pounds_{l}w_{\xi})+(2-c)\pounds_{\xi} \theta\big{]}\epsilon_{\mathcal{N}}-(c-1)\theta\pounds_{\xi}\epsilon_{\mathcal{ N}}. \tag{6.16}\]
On a NEH in vacuum \(\sigma_{\mu\nu}=\theta=0\) and \(\pounds_{\xi}\theta=0\) for tangent diffeomorphisms, and the flux reduces to
\[I_{\xi}\theta^{c}\stackrel{{\rm NEH}}{{=}}\big{(}\pi_{\mu}\delta_ {\xi}l^{\mu}+2\delta_{\xi}k\big{)}\epsilon_{\mathcal{N}}=\big{(}\pi_{\mu}\pounds _{\xi}l^{\mu}+2(\pounds_{\xi}k-\pounds_{l}w_{\xi})\big{)}\epsilon_{\mathcal{N}}. \tag{6.17}\]
This vanishes only for \(\delta_{\xi}l^{\mu}=\delta_{\xi}k=0\), namely for symmetry vector fields belonging to the CFP group. On the other hand, it vanishes for any \(c\), hence the weaker stationarity condition leaves an ambiguity in the choice of symplectic potential. But this ambiguity is eliminated because we can now extend the set of solutions that fulfill the stationarity property. Consider a flat light-cone. Its expansion does not vanish, hence it does not fit into the previous notion of stationarity. Only the shear vanishes, so the flux gives
\[I_{\xi}\theta^{c}\stackrel{{\rm lightcone}}{{=}}(2-c)\delta_{\xi }\theta\epsilon_{\mathcal{N}}-(c-1)\theta\delta_{\xi}\epsilon_{\mathcal{N}}=( 2-c)\pounds_{\xi}\theta\epsilon_{\mathcal{N}}-(c-1)\theta\pounds_{\xi} \epsilon_{\mathcal{N}}-\theta w_{\xi}\epsilon_{\mathcal{N}}. \tag{6.18}\]
We also recall that super-translations are not allowed on a semi-complete null surface, so the CFP group is here reduced to Weyl transformations and super-Lorentz. To evaluate the flux, we can exploit the fact that the potential is class-III invariant and make a convenient choice of normal. We take affine coordinate \(\lambda\) and
\[l=\lambda\partial_{\lambda},\qquad\Rightarrow\qquad\theta=2,\qquad\epsilon_{ \mathcal{N}}=\frac{\sqrt{-g}}{f}d^{3}x=\lambda d\lambda\wedge\stackrel{{ \circ}}{{\epsilon}}_{S}. \tag{6.19}\]
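A quick way to see the value \(\theta=2\) quoted here: on the flat light-cone the cross-sections are round spheres of radius \(\lambda\), i.e. \(\gamma_{AB}=\lambda^{2}q_{AB}\) with \(q_{AB}\) the unit-sphere metric, so in the conventions where \(\theta=\frac{1}{2}\gamma^{\mu\nu}\pounds_{l}\gamma_{\mu\nu}\),

\[\theta=\tfrac{1}{2}\,\lambda\partial_{\lambda}\ln\det(\lambda^{2}q_{AB})=2,\]

i.e. \(\theta=\lambda\theta_{\lambda}\) with affine expansion \(\theta_{\lambda}=2/\lambda\).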
Then the first term in (6.18) vanishes, and so does the anomaly, see (5.50): Only super-translations have non-vanishing anomaly for this choice of \(l\), but these are not part of the allowed symmetries. The only non-zero term is the second one, since \(\pounds_{\xi}\epsilon_{\mathcal{N}}\neq 0\) for any non-trivial \(W\) and \(Y\). The flux on a flat light-cone is thus \(I_{\xi}\theta^{c}=(1-c)\pounds_{\xi}\epsilon_{\mathcal{N}}\), and vanishes for arbitrary transformations only for the choice \(c=1\).
We conclude that the flux (6.16) vanishes for NEH _and also_ for a flat light cone if \(c=1\). Stationarity in the weaker sense of case III allows one to solve the problem of a non-vanishing flux on a flat light-cone.24 This process selects again a unique potential. It is the conformal one instead of the CFP one, and on the CFP phase space reads
Footnote 24: One may wonder whether this idea can be taken one step further to identify a symplectic potential with vanishing flux on shear-full hypersurfaces in flat spacetime. We don't know if this could be done, but it seems difficult, since even a non-radiative shear can introduce a dynamical evolution of the null hypersurface. The case of a light-cone is special in this sense because even if the area changes, the expansion has a constant representative.
\[\theta^{\text{Conf}}=(\sigma^{\mu\nu}\delta\gamma_{\mu\nu}+\delta\theta) \epsilon_{\mathcal{N}}. \tag{6.20}\]
### Larger phase spaces
We have seen that the minimal condition for covariance is \(\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\). Therefore the larger phase space defined by \(l_{\mu}\delta l^{\mu}=\delta l_{\mu}\,\stackrel{{\mathcal{N}}}{{=}}\,0\) admits a one-parameter family of covariant symplectic potentials. This space contains arbitrary variations of the three tangent components \(\delta l^{\mu}\) and of \(\delta k\), and its symmetry group is \(\text{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\). The stationarity condition is violated in both its original WZ definition and the weaker one of case III, see (6.12) and (6.17). Adding the restriction \(n_{\mu}\delta_{\xi}l^{\mu}=0\) alone reduces the symmetry group to \(\text{Diff}(\mathcal{N})\) but breaks covariance, because this condition is not class-I invariant. Adding instead \(\delta l^{\mu}=0\) reduces the group to \(\text{Diff}_{l}(\mathcal{N})\), covariance is preserved for \(b=0\) even without \(\delta l_{\mu}=0\), and stationarity is lost in both versions.
Turning specifically to the 'relaxed' CFP phase space with varying inaffinity considered in the Appendix of their paper, we find that the symmetry vector fields span the group \(\text{Diff}_{l}(\mathcal{N})\) given in (5.43), with \(\delta l_{\mu}=0\) required to have closure under the Lie bracket, while \(n_{\mu}\delta_{\xi}l^{\mu}=0\) is required to make the anomaly 'canonical', namely depend on the symmetry parameters and \(f\) but not on the extension, see (5.40). The first condition also guarantees that \(\theta^{\text{c}}\) with \(b=0\) is covariant. On the other hand the stationarity condition is violated for both the original WZ definition and the weaker definition of case III. In other words, the vector fields in \(\text{Diff}_{l}(\mathcal{N})\) which are not in \(G^{\text{CFP}}\) have a non-vanishing flux on a non-expanding horizon, given by \(I_{\xi}\theta=2(\pounds_{\xi}k-\pounds_{l}w_{\xi}-kw_{\xi})\epsilon_{\mathcal{N}}\). In affine coordinates this reduces to \(f\ddot{\tau}\), making it manifest that it vanishes only for the CFP vectors.
### Charges
In this Section we use the formula (6.3) to write the improved Noether charges for arbitrary variations, arbitrary \(\xi\) and any choice of \((b,c)\) in the boundary Lagrangian. We will give some general observations and then comment on the special features that occur when the additional restrictions are added, and specific values of \(b\) and \(c\) chosen, in parallel with the discussion on the fluxes of the previous Section. In particular we will explain how they relate to the WZ prescription, and when they can be identified as WZ charges.
In the following we take \(n\) adapted to the cross sections \(S\) of the null boundary on which we are evaluating the charges. This fixes the class-I ambiguity, and it is a choice useful to simplify various expressions. Namely,
\[n=\frac{1}{fg^{\lambda\Phi}}d\lambda \tag{6.21}\]
where \(\lambda\) is an arbitrary parameter labelling the cross-sections of a space-like foliation of \(\mathcal{N}\). We can
then write the pull-backs as follows. For the Komar charge,
\[q_{\xi}=-\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}\nabla^{\mu}\xi^{\nu}dx^{\rho} \wedge dx^{\sigma}\stackrel{{ S}}{{=}}2n_{\mu}l_{\nu}\nabla^{[\mu} \xi^{\nu]}\epsilon_{S}. \tag{6.22}\]
For the boundary Lagrangian,
\[i_{\xi}\ell^{\text{\tiny(b,c)}}=-(bk+c\theta)i_{\xi}\epsilon_{\mathcal{N}} \stackrel{{ S}}{{=}}(bk+c\theta)\xi\cdot n\,\epsilon_{S}. \tag{6.23}\]
For the corner symplectic potential,
\[I_{\xi}\vartheta^{\text{\tiny EH}}\stackrel{{ S}}{{=}}(n_{\mu} \delta_{\xi}l^{\mu}+n^{\mu}\delta_{\xi}l_{\mu})\epsilon_{S}=(n_{\mu}\pounds_{ \xi}l^{\mu}+n^{\mu}\pounds_{\xi}l_{\mu}+2w_{\xi})\epsilon_{S}, \tag{6.24}\]
where we used (5.10). The Lie derivatives satisfy the following identity,
\[n^{\mu}\pounds_{\xi}l_{\mu}+n_{\mu}\pounds_{\xi}l^{\mu}=2n_{\mu}l_{\nu}\nabla ^{[\mu}\xi^{\nu]}+2n_{\mu}\xi^{\nu}\nabla_{\nu}l^{\mu}. \tag{6.25}\]
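This identity follows by expanding the Lie derivatives, \(\pounds_{\xi}l_{\mu}=\xi^{\nu}\nabla_{\nu}l_{\mu}+l_{\nu}\nabla_{\mu}\xi^{\nu}\) and \(\pounds_{\xi}l^{\mu}=\xi^{\nu}\nabla_{\nu}l^{\mu}-l^{\nu}\nabla_{\nu}\xi^{\mu}\):

\[n^{\mu}\pounds_{\xi}l_{\mu}+n_{\mu}\pounds_{\xi}l^{\mu}=2n_{\mu}\xi^{\nu}\nabla_{\nu}l^{\mu}+\big(n^{\mu}l_{\nu}-l^{\mu}n_{\nu}\big)\nabla_{\mu}\xi^{\nu},\]

and the last bracket, being antisymmetric in the pair of indices, picks out precisely \(2n_{\mu}l_{\nu}\nabla^{[\mu}\xi^{\nu]}\).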
The first term on the RHS coincides with the pull-back of the Komar 2-form. The second is a contraction of the Weingarten map, thanks to the restriction of \(\xi\) to be tangent.
Adding up according to (6.3), we get
\[q_{\xi}^{\text{\tiny(b,c)}} =-2[n^{\mu}\xi^{\nu}(W_{\nu\mu}-\frac{1}{2}(bk+c\theta)g_{\mu\nu}) +w_{\xi}]\epsilon_{S} \tag{6.26}\] \[=-[2n^{\mu}\xi^{\nu}(W_{\nu\mu}-Wg_{\mu\nu})+\xi\cdot n((2-b)k+(2 -c)\theta)+2w_{\xi}]\epsilon_{S}\] \[=-[2\xi^{\mu}(\eta_{\mu}-\theta n_{\mu})+\xi\cdot n((2-b)k+(2-c) \theta)+2w_{\xi}]\epsilon_{S}.\]
These are the Noether charges for the full group \(\text{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\), and any polarization in the family (3.31). No restrictions except for \(l_{\mu}\delta l^{\mu}=0\). The \(c\)-term in the boundary Lagrangian is a total derivative and could have been moved to \(\vartheta\). This move leaves the charges invariant, because corner shifts in the boundary Lagrangian only matter if the shift is anomalous [12, 13], and \(\theta\epsilon_{\mathcal{N}}\) is not. Using \(\ell^{\text{\tiny D}}\) versus \(\ell^{\text{\tiny D}\prime}\) in the case of Dirichlet polarization, or \(\ell^{\text{\tiny Conf}}\) versus nothing in the case of conformal polarization, is irrelevant.
We can now make the earlier discussion on the need of a physical prescription for the charges concrete. First of all, they are in general not class-III invariant, and depend explicitly on the choice of normal representative taken. Secondly, they depend on the extension \(\bar{\xi}^{\Phi}\) of the symmetry vector fields through the anomaly \(w_{\xi}\). Therefore if this extension is a free parameter, the charges take values in the group \(\text{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\) and can be given an arbitrary value even if the intrinsic parameters on the hypersurface are kept fixed. Further problems appear if we look at their flux, which is given by
\[\underline{dq}_{\xi}^{\text{\tiny(b,c)}}\,\hat{=}\,I_{\xi}\theta^{\text{\tiny(b,c)}}-a_{\xi}^{\text{\tiny(b,c)}} \tag{6.27}\] \[=[\sigma^{\mu\nu}\pounds_{\xi}\gamma_{\mu\nu}+\pi_{\mu}\pounds_{\xi}l^{\mu}+(2-b)\pounds_{\xi}k-2\pounds_{l}w_{\xi}+(2-c)\pounds_{\xi}\theta-2\theta w_{\xi}]\epsilon_{\mathcal{N}}-[bk+(c-1)\theta]\pounds_{\xi}\epsilon_{\mathcal{N}}.\]
The problem with this flux is the same one that plagues the Komar charges: it can be non-zero even on a NEH or in Minkowski space for generic diffeomorphisms tangent to a generic null hypersurface! This is what we meant in the earlier discussion when we said that a generic version of the Noether theorem may be impractical, and one needs some additional input to reorganize it in a more useful way. To make this more precise, recall that the dynamical content of the flux-balance laws is the constraint equations, namely for a null hypersurface the Raychaudhuri and Damour equations, as discussed for instance in [18]. These can be derived from (6.27) for \(\xi\) tangent to the null geodesics
or to the cross-sections, respectively. The point is that for arbitrary \(b\) and \(c\) the terms in \(\dot{\theta}\) and \(\dot{\bar{\eta}}_{\mu}\) will appear scattered on both the LHS and the RHS, and that without phase space restrictions there will be gauge-dependent terms in both charge and flux that cancel out in the final equation. Let us now see how these problems are solved using the WZ prescription described above.
The first thing we want to comment on is the explicit appearance of the anomaly \(w_{\xi}\). This is responsible for the shift between these charges and a Brown-York-like expression based on the Weingarten map alone, as was found in [9] for the Dirichlet polarization.25 One of our initial motivations was to study whether this shift could be removed by changing polarization. As we can see from the general expression (6.26), this is not the case for the polarizations considered. They only affect the numerical coefficients that would give rise to the trace term \(W\), and not the anomaly contribution. But the anomaly term is actually very important: it leads to the area being the charge associated with a constant Weyl rescaling, arguably the most famous gravitational charge for horizons. More generally, it is crucial to guarantee that these charges are perfectly covariant for \(b=0\), because it compensates the non-class-III invariance of \(\eta_{\mu}\) and \(k\). To see this, we use (5.10) to rewrite the charges as
Footnote 25: See also [62, 63] for other Brown-York-like formulas on null boundaries.
\[q_{\xi}^{\text{\tiny(b,c)}}=-2\left[\xi\cdot\bar{\eta}-\xi\cdot n\left(\frac {c}{2}\theta+\frac{b}{2}k-\bar{k}\right)+\bar{\xi}^{\Phi}-\delta_{\xi}\ln f \right]\epsilon_{S}. \tag{6.28}\]
What we have done here is to expand the term \(\pounds_{l}\ln f\) in the anomaly and reabsorb it into the shifts of \(k\) and \(\eta_{\mu}\) to \(\bar{k}\) and \(\bar{\eta}_{\mu}\). Recall that the terms in \(\bar{k}\) and \(\bar{\eta}_{\mu}\) are partially invariant under a class-III transformation, namely they change only if the rescaling is induced by a reparametrization of \(\Phi\). But the same reparametrization changes also the anomaly, see (5.10)! The two changes perfectly compensate, making these terms fully class-III invariant. The only non-class-III invariant contributions are the terms in \(k\) and \(\delta_{\xi}l_{\mu}\), and this was to be expected from the results we obtained previously on the covariance of the symplectic potential. We conclude that the charges are class-III invariant for \(b=0\) and \(\delta l_{\mu}=0\). To complete the proof of covariance we need to check also for class-I invariance. This cannot be done for (6.26) because we wrote it in a fixed choice of rigging vector, but follows from the properties of the symplectic potential, and could be easily checked writing the more general formulas for the pull-backs with a non-adapted rigging.
We have seen the importance of the anomaly in establishing covariance of the charges. This can be stated in other words as follows: writing the charges in terms of geometric quantities such as those that enter the Weingarten map and its decomposition requires the use of non-dynamical fields, and the anomaly contribution is there to remove the non-dynamical dependence. One of the consequences of this result is that a Brown-York-like construction for the charges on null hypersurfaces cannot be covariant.
Let us now impose \(n_{\mu}\delta l^{\mu}=0\) as in [18]. Working in affine coordinates, so that \(\bar{k}=0\) and we can use the explicit formula (5.40), we obtain
\[q_{\xi}^{\text{\tiny(b,c)}}=-2\left[Y\cdot\bar{\eta}+\tau\left(\frac{c}{2} \theta_{\lambda}+\frac{b}{2}k\right)-\dot{\tau}\right]\epsilon_{S}. \tag{6.29}\]
These are the general charges for the group \(\text{Diff}(\mathcal{N})\). The above formula gives the illusion that they are covariant for \(b=0\) without \(\delta l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\), but this is wrong: covariance is spoiled by the non-class-I invariant restriction that one is making. This is true regardless of the value of \(b\) and \(c\), hence we conclude that charges for the symmetry group \(\text{Diff}(\mathcal{N})\) constructed in this way are not covariant. While it may be possible to construct other \(\text{Diff}(\mathcal{N})\) charges, we believe there is an intuitive reason
as to why they cannot be covariant. First, we have seen that the dependence of the charges on the first-order extension of \(\xi\) is crucial to remove the dependence on the embedding, which cannot be eliminated as in the non-null case by choosing the unit-norm normal. So the natural group of charges associated with a null hypersurface is \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\). Second, there is no canonical way to reduce this group to \(\mathrm{Diff}(\mathcal{N})\) because of the lack of a projector on a null hypersurface. One can pick a \(\mathrm{Diff}(\mathcal{N})\) subgroup by choosing a section of \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\) via the rigging vector, but the result will depend on the rigging vector, and hence be non-covariant and anomalous.
To continue the discussion on covariance, we take \(b=0\) and consider instead the restrictions \(\delta l_{\mu}=\delta l^{\mu}\stackrel{{\wedge}}{{=}}0\). We then have
\[q_{\xi}^{c}=-2\left[Y\cdot\bar{\eta}+\frac{c}{2}\tau\theta-\dot{\tau}\right] \epsilon_{S}. \tag{6.30}\]
This is a one-parameter family of covariant charges for the new group \(\mathrm{Diff}_{l}(\mathcal{N})\), explicitly class-III invariant. In particular: the super-Lorentz charge aspect is the shifted twist \(\bar{\eta}\), and we have charge aspects for the null diffeomorphisms and their first derivative given respectively by the expansion and the area.
If we add the CFP restriction \(\delta k\stackrel{{\wedge}}{{=}}0\), we can use \(\tau=T+\lambda W\) as in the form (5.49) of the symmetry vector fields, and write
\[q_{\xi}^{c}=-2\left[Y\cdot\bar{\eta}+c\,T\theta_{\lambda}+\left(\frac{c}{2} \lambda\theta_{\lambda}-1\right)W\right]\epsilon_{S}, \tag{6.31}\]
where \(\theta_{\lambda}\) denotes the expansion of the affine generator. This is a one-parameter family of covariant charges for the group \(G^{\mathrm{CFP}}\). We can also remark that the restriction (5.47) implies that \(\pounds_{l}\bar{\xi}^{\Phi}=0\), therefore the partially class-III invariant quantities \(\bar{k}\) and \(\bar{\eta}_{\mu}\) become fully class-III invariant, and accordingly, their anomaly vanishes. We have the same super-Lorentz charge aspect as before, and the null diffeomorphisms are now split into a super-translation charge aspect given by the expansion, and the Weyl charge aspect given by the area minus the expansion. The latter in particular reduces to the area on a NEH, and we see that this result holds for any \(c\). This is a general property: all charges are both \(b\) and \(c\) independent on a NEH. Requiring \(\mathcal{N}\) to be a NEH is a huge restriction on the phase space of general relativity, and the fact that a degeneracy in the symplectic flux and charges is introduced should not be surprising.
Now let us talk about stationarity. To understand the need for additional requirements to prescribe the charges, observe that this family of covariant charges contains the original Komar expression! This occurs for \(b=c=0\) and \(\delta l_{\mu}=\delta l^{\mu}=0\). Therefore covariance alone can only take us so far, and we still haven't solved the initial problem of little physical meaning. Hence the importance of the stationarity requirement. If we require it in the strong sense of Case II, then we must pick \(c=2\) in (6.31), and we recover in this way the CFP charges of [1]. In the special case of a NEH they are given by
\[q_{\xi}^{c}\stackrel{{\mathrm{NEH}}}{{=}}-2[Y\cdot\bar{\eta}-W] \epsilon_{S}. \tag{6.32}\]
They match those given in [5], and are conserved and independent of the polarization parameter \(c\) as stated above. A somewhat suboptimal feature of these charges is that we would like the reference solution in which they vanish to be flat spacetime, but this is not the case, since flat spacetime does not contain NEHs with compact cross-sections. What was done in [1] was to evaluate them for a Schwarzschild horizon of mass \(M\) and argue that they tend to zero in the limit \(M\to 0\). What flat spacetime does contain are light-cones: these are shear-free null hypersurfaces with compact cross-sections, but they are expanding. On
a flat light-cone we have \(\bar{\eta}_{\mu}=0\), and the charges are given by
\[q^{c}_{{\rm(W,Y)}}\stackrel{{ l.c.}}{{=}}-2[(c-1)W]\epsilon_{S}. \tag{6.33}\]
They are not conserved unless \(c=1\), in which case they vanish. Therefore if we require stationarity in the weaker sense of Case III, we select \(c=1\) and obtain charges that are not only conserved on both NEHs and flat light-cones, but also vanish on flat light-cones. The conformal polarization \(c=1\) not only has better stationarity properties, but it also makes it more natural to take flat spacetime as the reference solution for the charges.
The choice of charge with \(b=0\) and \(c=1\) has also an interesting consequence for the relation between its flux-balance law and the Raychaudhuri equation. Recall in fact that if one takes \(\theta\) as the charge, its flux \(\dot{\theta}\) is not monotonic, even if the null energy conditions are satisfied. When choosing \(b=0\) and \(c=1\) on the other hand, the Raychaudhuri equation is reorganized so that the charge \(2(1-\frac{1}{2}\lambda\theta_{\lambda})\epsilon_{S}\) has a monotonic flux if the null energy conditions are satisfied and the hypersurface is future complete [19]. This fact can be used for notions of dynamical entropy in the context of the generalized second law. Obtaining a monotonic flux from the Raychaudhuri equation is also studied in the recent work [64].
Finally, let us cover the relation between these improved Noether charges and the Wald-Zoupas charges. The covariance requirement for the symplectic potential has selected the family with \(b=0\). For this family the boundary Lagrangian is anomaly-free, hence we are in case \((a)\) of [13]: the covariant improved Noether charges are Wald-Zoupas charges, and there is no need for a corner shift. They can however be considered proper Wald-Zoupas charges only for those symmetry groups for which the stationarity condition is satisfied. This means \(G^{\rm CFP}\) and \(G^{\rm AKKL}\), while for the larger groups \({\rm Diff}_{l}({\cal N})\) and \({\rm Diff}({\cal N})\ltimes\mathbb{R}^{\cal N}\) the charges are well defined but do not satisfy the stationarity condition, either in the original sense of case II or in the weaker sense of case III.
## 7 Addenda
### Second-order perturbations around flat light-cones
In the previous Section, we identified covariant charges and fluxes which are conserved on a flat light-cone, and vanish exactly on each cross-section. This occurs for the special choice of polarization \(b=0\) and \(c=1\), and for symmetry vector fields belonging to the CFP group. We now study their evolution when the light-cone is perturbed by gravitational radiation. Since the charges and fluxes are covariant, we can use any normal representative, and we pick the choice (6.19) with constant expansion on flat light-cones and vanishing anomaly. The flux-balance law is
\[dq_{\xi}=(\sigma^{\mu\nu}\pounds_{\xi}\gamma_{\mu\nu}+\pounds_{\xi}\theta) \epsilon_{\cal N}, \tag{7.1}\]
where we remember that only super-Lorentz \(Y\) and Weyl super-translations \(W\) are allowed as symmetries. For a pure Weyl transformation,
\[dq_{W}=W(2\sigma^{2}_{\mu\nu}+\pounds_{l}\theta)\epsilon_{\cal N}. \tag{7.2}\]
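This follows from (7.1) by a short computation: the tangential part of a pure Weyl vector field (5.49) is \(W\lambda\partial_{\lambda}=Wl\) for the choice (6.19), and using the decomposition \(\pounds_{l}\gamma_{\mu\nu}=2\sigma_{\mu\nu}+\theta\gamma_{\mu\nu}\) of the pulled-back metric (in the conventions where \(\theta=\frac{1}{2}\gamma^{\mu\nu}\pounds_{l}\gamma_{\mu\nu}\)) together with the tracelessness of the shear,

\[\sigma^{\mu\nu}\pounds_{\xi}\gamma_{\mu\nu}+\pounds_{\xi}\theta=W\big(2\sigma^{2}_{\mu\nu}+\pounds_{l}\theta\big),\]

the terms in \(\partial_{\mu}W\) dropping because they are proportional to the pull-back of \(l_{\mu}\), which vanishes on \({\cal N}\).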
We now solve for \(\pounds_{l}\theta\) using the Raychaudhuri equation, in a perturbative expansion in the shear around the flat light-cone. We assume that the shear is infinitesimal, and write \(\theta=2+\theta_{1}+O(\sigma^{4})\). Linearizing the Raychaudhuri equation we find
\[\pounds_{l}\theta_{1}=-\theta_{1}-\sigma^{2}_{\mu\nu}, \tag{7.3}\]
whose solution is
\[\theta_{1}(\lambda,x^{A})=-\frac{1}{\lambda}\int_{\lambda_{0}}^{\lambda}\sigma_{ \mu\nu}^{2}(\lambda^{\prime},x^{A})d\lambda^{\prime}. \tag{7.4}\]
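As a consistency check, with the choice (6.19) one has \(\pounds_{l}=\lambda\partial_{\lambda}\) on scalars, and differentiating (7.4) gives

\[\lambda\partial_{\lambda}\theta_{1}=\frac{1}{\lambda}\int_{\lambda_{0}}^{\lambda}\sigma^{2}_{\mu\nu}\,d\lambda^{\prime}-\sigma^{2}_{\mu\nu}(\lambda)=-\theta_{1}-\sigma^{2}_{\mu\nu},\]

as required by (7.3).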
Here \(\lambda_{0}\) can be taken as the value of affine parameter after which the perturbation enters the light-cone, with the tip located at \(\lambda_{0}=0\). Plugging this result in (7.2) and integrating over a region \(\Delta{\cal N}\) of the null hypersurface, we get
\[\Delta q_{W}=\int_{\Delta{\cal N}}W\left(\lambda^{3}\sigma_{\lambda}^{2}+\int_{ \lambda_{0}}^{\lambda}{\lambda^{\prime}}^{2}\sigma_{\lambda^{\prime}}^{2}d \lambda^{\prime}\right)d\lambda\wedge\overset{\circ}{\epsilon}_{S}+O(\sigma^{ 4}), \tag{7.5}\]
where \(\sigma_{\lambda}:=\sigma/\lambda\) is the shear of the affinely parametrized normal. The flux is made of two pieces. The first one, proportional to the shear squared, represents the energy of weak gravitational waves entering the light-cone locally. It is a tidal heating term. The second piece is related to the gravitational waves which have entered the light-cone since \(\lambda_{0}\). Unlike the first term, this term is not local, and depends on the history of the gravitational waves which have entered the outgoing light-cone from \(\lambda=0\). Hence, even in the absence of local shear, we expect a variation of the charge if some weak gravitational waves have previously entered the outgoing light-cone. This is because, if that is the case, spacetime is no longer flat in the local surroundings, as some energy is localized inside the outgoing light-cone: the energy of the gravitational waves which have previously entered. Hence, this second term is a memory effect. Furthermore, the flux of the future-pointing diffeomorphisms is positive, so the charge increases, underlining the fact that gravitational waves carry positive energy. Monotonicity is a key property for a flux, and one more reason to appreciate the conformal polarization. Here we have proved it perturbatively, but it can be extended to all orders under the hypothesis of future completeness of the hypersurface [19].
Next, we take a pure super-Lorentz, so \(\xi\in TS\). In affine coordinates, \(\xi^{\mu}\partial_{\mu}=Y^{A}\partial_{A}\) and the charge density associated to this tangent vector is just given by \(q_{Y}=-2\bar{\eta}_{\mu}\xi^{\mu}\epsilon_{S}=Y^{A}P_{A}\epsilon_{S}\), with variation
\[dq_{Y}=d(Y^{A}P_{A}\epsilon_{S})=-2\pounds_{l}(\xi^{\mu}\bar{\eta}_{\mu}). \tag{7.6}\]
We can now compute the flux of the charge using our flux balance law. We make use of the linearized Raychaudhuri equation (7.3) to express the linearized expansion \(\theta_{1}\) in terms of the shear (7.4). For a tangent diffeomorphism \(\xi^{\mu}\partial_{\mu}=Y^{A}\partial_{A}\), we have for small perturbations around the flat light cone
\[I_{\xi}\theta^{\rm Conf} =\sigma^{\mu\nu}\pounds_{\xi}\gamma_{\mu\nu}\epsilon_{\cal N}+ \pounds_{\xi}\theta\epsilon_{\cal N}\] \[=2D_{\mu}(\sigma^{\mu\nu}\xi_{\nu})\epsilon_{\cal N}-2\xi^{\mu}D_ {\nu}\sigma^{\nu}_{\mu}\epsilon_{\cal N}-\left(\frac{1}{\lambda}\int_{0}^{ \lambda}\sigma^{\mu\nu}\xi^{\rho}D_{\rho}\sigma_{\mu\nu}d\lambda^{\prime} \right)\epsilon_{\cal N}+O(\theta_{1}^{2})\] \[=-2Y^{A}D_{B}\sigma^{B}_{A}\epsilon_{\cal N}+O(\sigma^{2}_{\mu \nu}), \tag{7.7}\]
where we disregarded the first term in equation (7.7) because it is a total divergence which does not contribute upon integration over the compact cross-sections. Therefore, at leading order, we find that the charge variation is given by the angular derivative of the shear along the cross-sections. We notice that the charge variation is proportional to the shear at leading order, not the square of the shear. The coefficient \(P_{A}\) appearing in the charge is the coefficient of the first-order expansion of \(g_{uA}\) in affine coordinates, and so it has the interpretation of an angular momentum for small perturbations around the flat background. Therefore the charge associated to the tangent diffeomorphisms is modified by the angular momentum of the weak gravitational waves crossing
the outgoing light cone. The equation relating the charge variation (7.6) to the flux is a linearization of the more general Damour equation, derived in the appendix.
These results open an interesting direction which we hope to pursue in future work and link to light-cone thermodynamics [65].
### Wald-Zoupas prescription with field-dependent diffeomorphisms
We have seen that if the symmetry vector fields include field-dependent diffeomorphisms, the notions of covariance defined by matching Lie derivatives and by anomaly-freeness can give different answers. The question is then which of the two should be used as condition 1 of the WZ prescription. In our previous paper [13] we used the matching of Lie derivatives. Following discussions with Chandrasekaran and Flanagan on the topics of the present paper, we were motivated to provide more details about this choice. In this Section we compare the two options, and show that in the end they are both valid, but with a different definition of charge bracket. We also briefly explain why this difference was in the end not important to understand the application of the formalism to the BMS group at future null infinity studied in [13].
We start from (6.5), which is still valid if \(\delta\xi\neq 0\). This formula suggests taking the matching-Lie-derivative option. In fact, if we require \((\delta_{\xi}-\pounds_{\xi})\theta^{\prime}=0\), then (6.6) is still valid, so we can proceed as before subtracting the preferred flux to obtain an integrable generator. Furthermore, condition 1 also implies that the Lagrangian anomaly must be spacetime-exact, specifically that \(a^{\prime}_{\xi}=ds_{\xi}\) where \(\delta s_{\xi}=-q^{\prime}_{\delta\xi}-A^{\prime}_{\xi}\)[13]. The WZ charges are then defined as in (6.7) with this new \(s_{\xi}\), and still satisfy the flux-balance laws (6.8). This notion of covariance appears thus naturally when talking about integrability of the charges.26
Footnote 26: Looking at (5.6), we see that the argument for integrability resonates with the ‘slicing’ approach proposed in [47].
Let us consider now requiring instead \(\Delta_{\xi}\theta^{\prime}=0\), which as we discussed in Section 7.2 is a simpler notion of background-independence. Furthermore, it is this property that is satisfied by the standard symplectic potential, \(\Delta_{\xi}\theta^{\text{\tiny EH}}=0\), while \(I_{\delta\xi}\theta^{\text{\tiny EH}}\neq 0\) in general. Imposing the anomaly-free condition, the term \(I_{\delta\xi}\theta^{\prime}\) appears in the RHS of (6.5), and the previous procedure no longer works if this is not zero. We can then attempt a charge definition subtracting this new term as well, so that (6.6) is replaced by
\[-\underbrace{I_{\xi}\omega}_{\xi}+I_{\delta\xi}\theta^{\prime}+di_{\xi}\theta ^{\prime}=\delta I_{\xi}\theta^{\prime}\,\hat{=}\,\delta(dq^{\prime}_{\xi}+a^ {\prime}_{\xi}). \tag{7.8}\]
However we are no longer guaranteed that \(a^{\prime}_{\xi}\) is spacetime exact. Therefore this condition alone is not sufficient to define the charges, and must be supplemented by the additional condition that
\[I_{\delta\xi}\theta^{\prime}=dX_{\delta\xi} \tag{7.9}\]
for some \(X_{\delta\xi}\). This additional property suffices to obtain WZ charges when the symmetry vectors are field dependent. The charges are still given by (6.7), this time with \(\delta s_{\xi}=-q^{\prime}_{\delta\xi}-A^{\prime}_{\xi}+X_{\delta\xi}\) (up to a closed form), and are as before conserved and Hamiltonian generators when \(\theta^{\prime}\) vanishes. Notice that (7.9) is guaranteed on-shell if the final boundary Lagrangian is covariant, since we are assuming that \(\delta\xi\) is a symmetry vector field. So one can rephrase the two independent prescriptions \(\Delta_{\xi}\theta^{\prime}=0\) and (7.9) also as \(a^{\prime}_{\xi}=0\) and \(A^{\prime}_{\xi}=0\) (up to a closed 2-form). We conclude that even if the notion of anomaly-freeness may appear more natural, it is less economical, in that it is not sufficient per se to guarantee the existence of the WZ charges, and one has to require also (7.9).
A further difference between the two procedures arises if we go beyond the flux-balance properties (6.8), and require also that the charges give a representation of the symmetry group. To that end we
look again at (6.10), which for \(\delta\xi\neq 0\) is replaced by
\[\Delta_{\chi}q_{\xi}=\frac{\partial q_{\xi}}{\partial\xi}\Delta_{\chi}\xi=\frac{ \partial q_{\xi}}{\partial\xi}(\delta_{\chi}-\pounds_{\chi})\xi=q_{\delta_{\xi} \chi}-q_{[\chi,\xi]}=q_{\Delta_{\chi}\xi}. \tag{7.10}\]
This is the property satisfied by a Noether charge whose background dependence comes only from \(\xi\), such as the Komar 2-form. Conversely, the anomaly of a generic Noether charge can be evaluated applying the anomaly operator to the flux formula (6.4), which gives [11]
\[\Delta_{\chi}q_{\xi}\,\hat{=}\,q_{\delta_{\xi}\chi}-q_{[\chi,\xi]}+I_{\xi}A_{ \chi}+i_{\xi}a_{\chi}. \tag{7.11}\]
The result (7.10) is then associated with a symplectic potential and boundary Lagrangian that are anomaly-free, consistently with the Komar example. If we take the matching-Lie derivatives requirement for the symplectic potential, we have instead \(\Delta_{\chi}q_{\xi}\,\hat{=}\,-\,q_{[\chi,\xi]}\). This different behaviour of the anomaly impacts the representation of the symmetry algebra via the Barnich-Troessaert bracket [45]. In the first case the original bracket needs to be modified to subtract the \(q_{\delta\xi}\) term, along the lines considered for instance in [11]. Whereas in the latter case the charge algebra is represented by the Barnich-Troessaert bracket without any modification. This whole discussion is valid up to the addition of closed 2-forms to the charge.
We stress that the relevance of this distinction occurs only when \(\delta\xi\) is a symmetry vector field. In the case of future null infinity the symmetry vector fields are field-independent, but it is typical to choose a field-dependent extension which preserves a choice of bulk coordinates. As a result, the Einstein-Hilbert symplectic potential is anomaly-free but not covariant in the sense of matching Lie derivatives,
\[\Delta_{\xi}\theta=0,\qquad(\delta_{\xi}-\pounds_{\xi})\theta=I_{\delta\xi} \theta\,\hat{=}\,dq_{\delta\xi}. \tag{7.12}\]
However, \(\delta\xi\) is not a symmetry vector field. In this context, we found it clearer to require \((\delta_{\xi}-\pounds_{\xi})\theta^{\prime}=0\) for the Wald-Zoupas potential defined at \(\mathcal{I}\). This is what we took as definition of covariance in [13] and it leads to the correct BMS charges.
## 8 Conclusions
In this paper we have presented a general analysis of the covariant phase space of general relativity on an arbitrary null hypersurface, with arbitrary variations of the metric allowed. We have computed the fluxes and charges for a two-parameter family of polarizations of the symplectic potential, studied their covariance and conservation properties, and explained their relation to the Wald-Zoupas prescription. We pointed out in particular the use of a weaker notion of stationarity that allows one to select a unique set of charges that are conserved, and vanish, on a flat light-cone, as opposed to the charges obtained following the strong stationarity condition. This general analysis should prove useful for many applications of gravitational physics on null hypersurfaces.
We have reviewed various symmetry groups that arise as variations are restricted and the universal structure is strengthened: \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\), \(\mathrm{Diff}(\mathcal{N})\), \(\mathrm{Diff}_{l}(\mathcal{N})\), \(G^{\mathrm{CFP}}\) and \(G^{\mathrm{AKKL}}\). We have highlighted the importance of the anomaly contribution in the expression for the charges. It is necessary to make them independent of the embedding, subtracting off the residual non-class-III invariance of \(\bar{k}\) and \(\bar{\eta}_{\mu}\), and to satisfy the stationarity requirements. This shows one side of the importance of using a prescription a la Wald-Zoupas for the charges, as opposed to the Komar expression or any Brown-York-like expression, which fail to be covariant. In particular we have shown that covariant charges
can be written for all groups except \(\mathrm{Diff}(\mathcal{N})\). The group \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\) appears naturally because charges need to depend on the first-order extension of the symmetry vector fields in order not to depend on the embedding of \(\mathcal{N}\) in spacetime. The intuitive reason why \(\mathrm{Diff}(\mathcal{N})\) does not admit covariant charges is that it arises as a section of \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\), and there is no canonical choice of section because of the lack of a projector on null hypersurfaces. The group \(\mathrm{Diff}_{l}(\mathcal{N})\) of time-independent super-Lorentz and super-translations of arbitrary time dependence can have interesting applications, for instance we pointed out that it allows non-trivial super-translations when \(\mathcal{N}\) has two boundaries, given for instance by intersecting initial and final space-like hypersurfaces. The group \(\mathrm{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\) lacks, on the other hand, the locality property that charges vanish when the parameters on the 3d boundary vanish, but it is nonetheless argued to be physically relevant in [23].
We have used a spacetime description for all quantities, and have found it convenient to describe everything using a NP tetrad. This introduces non-dynamical background quantities that have non-covariant transformation properties in the phase space, but we have shown that independence from the background structure can be easily kept under control. Quantities independent of the choice of NP tetrad are covariant, and can be identified from their invariance under a joint class-I and class-III transformation of the tetrad. Furthermore the extra structures are relevant to the Carroll literature, hence our formalism can be immediately used in that context.
Understanding covariance as independence from the choice of NP representative has immediate application to clarify the ambiguities of null boundary terms that arise if Dirichlet boundary conditions are imposed in a weak way. We have further discussed why reducing the ambiguity requires working with strengthened Dirichlet boundary conditions, whose meaning is to preserve a choice of affine coordinates on the boundary. Alternatively, the ambiguity can be reduced allowing the inaffinity to change but in a way fixed by the rate of change of the boundary area, namely by the expansion. This choice provides a definition of conformal boundary conditions on null hypersurfaces.
### Acknowledgements
We thank Alejandro Perez and Antony Speranza for many discussions on the topics of this paper, and Venkatesa Chandrasekaran and Eanna Flanagan for discussions and exchanges on our drafts. We acknowledge support from the John Templeton Foundation via the grant n.62312, as part of the project _The Quantum Information Structure of Spacetime (QISS)_. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation.
## Appendix A Internal Lorentz transformations
The behaviour of all geometric quantities under a class-I transformation (2.6) can be easily computed, or read off from [21] using the NP formalism. To that end, recall that
\[\sigma_{\mu\nu}=-\sigma\bar{m}_{\mu}\bar{m}_{\nu}+cc,\qquad\eta_{\mu}=-(\alpha+\bar{\beta})m_{\mu}+cc,\qquad\theta=-2\mathrm{Re}(\rho),\qquad k=2\mathrm{Re}(\epsilon).\] (A.1)
The fact that \(l\) is hypersurface-orthogonal hence geodesic fixes the two NP coefficients \(\kappa=0\) and \(\rho=-\theta/2\). Apart from this, the formulas are general.
### Class-I
Under (2.6), we have:
\[\gamma_{\mu\nu}\to\gamma_{\mu\nu}+2\left(a{l}_{(\mu}\bar{m}_{\nu)}+ \bar{a}{l}_{(\mu}m_{\nu)}+|a|^{2}{l}_{\mu}{l}_{\nu}\right),\qquad\epsilon_{ \mathcal{N}}\to\epsilon_{\mathcal{N}},\] (A.2) \[\sigma_{\mu\nu}\to\sigma_{\mu\nu}-2(\sigma\bar{a}\bar{m}_{(\mu}{ l}_{\nu)}+cc)-2\mathrm{Re}(\sigma\bar{a}^{2}){l}_{\mu}{l}_{\nu}\qquad\sigma\to \sigma,\qquad\theta\to\theta,\qquad k\to k,\] (A.3) \[\eta_{\mu}\to\eta_{\mu}-[a\bar{\sigma}+\bar{a}\,(k+\rho)]m_{\mu}+ [\bar{a}\eta\cdot m-a(a\bar{\sigma}+\bar{a}\,(k+\rho))]{l}_{\mu}+cc.\] (A.4)
We see that \(\eta_{\mu}\) is invariant on a non-expanding horizon iff \(k=0\).
We now check invariance of the pulled-back standard symplectic potential (3.2). We first notice that the corner term \(\vartheta^{\text{\tiny EH}}\) is invariant thanks to \(m^{\mu}\delta{l}_{\mu}=0\) and the pull-back. Of the bulk terms, the third and fourth are invariant. Plugging the above transformations into the first and second terms, and using \({l}_{\mu}\delta{l}^{\mu}=0\), we obtain
\[(\sigma^{\mu\nu}+\frac{\theta}{2}\gamma^{\mu\nu})\delta\gamma_{ \mu\nu}\to\mathrm{idem}+2[(a\bar{\sigma}+\bar{a}\rho)m_{\mu}\delta{l}^{\mu}+cc],\] (A.5) \[2(\eta_{\mu}+kn_{\mu})\delta{l}^{\mu}\to\mathrm{idem}-2[(a\bar{ \sigma}+\bar{a}\rho)m_{\mu}\delta{l}^{\mu}+cc],\] (A.6)
from which the invariance of \(\underline{\theta}^{\text{\tiny EH}}\) follows immediately. The result holds also if \(\delta a\neq 0\).
### Class-III
Under (2.7), we have
\[\gamma_{\mu\nu}\to\gamma_{\mu\nu},\qquad\epsilon_{\mathcal{N}} \to A^{-1}\epsilon_{\mathcal{N}},\qquad\sigma^{\mu\nu}\to A\,\sigma^{\mu\nu}, \qquad\theta\to A\,\theta,\qquad k\to A(k+\pounds_{l}\ln A),\] (A.7) \[\eta_{\mu}\to\eta_{\mu}-\gamma_{\mu}^{\nu}\nabla_{\nu}\ln A,\qquad \omega_{\mu}\to\omega_{\mu}+\partial_{\mu}\ln A+{l}_{\mu}\pounds_{n}\ln A.\] (A.8)
We now check invariance of the pulled-back standard symplectic potential (3.2). Using these transformations, we first derive
\[\vartheta^{\text{\tiny EH}}\to\mathrm{idem}-2\delta\ln A\,\epsilon_{S}.\] (A.9)
It is class-III invariant for a field-independent rescaling, but not for a field-dependent one. The first of the bulk terms is invariant. The others give
\[-2\omega_{\mu}\delta{l}^{\mu}\epsilon_{\mathcal{N}} \to\mathrm{idem}-2(k\delta\ln A+\pounds_{\delta l}\ln A)\epsilon_{ \mathcal{N}},\] (A.10) \[2\delta(\theta+k)\epsilon_{\mathcal{N}} \to\mathrm{idem}+2\Big{(}(\theta+k)\delta\ln A+\frac{1}{A}\delta (\pounds_{l}A)\Big{)}\epsilon_{\mathcal{N}}.\] (A.11)
Adding up, we obtain
\[\underline{\theta}^{\text{\tiny EH}}\to\mathrm{idem}+2(\pounds_{l}+\theta) \delta\ln A\,\epsilon_{\mathcal{N}}-2d(\delta\ln A\,\epsilon_{S}).\] (A.12)
This is manifestly invariant for a field-independent rescaling. For field-dependent ones invariance follows from the cancellation between the bulk and corner terms thanks to the identity (2.13).
### Anomalies and NP representatives
The background structure we use to describe a null hypersurface is a choice of NP tetrad. In this Appendix we prove that quantities that are independent of the choice of NP representative, namely invariant under both class I and III transformations, are also anomaly-free. Consider a generic functional \(F\) of the dynamical fields \(\phi=g_{\mu\nu}\) and the background fields \((\Phi,l_{\mu},n_{\mu})\). Anomaly-freeness with
respect to \(\Phi\) is achieved by restricting the diffeomorphisms to be tangent, so we assume this has been done in the following. The variation of \(F\) under a change of tetrad is
\[\delta_{(a,\alpha)}F=\frac{\partial F}{\partial l}\delta_{(a,\alpha)}l+\frac{ \partial F}{\partial n}\delta_{(a,\alpha)}n,\] (A.13)
where
\[\delta_{(a,\alpha)}l=\alpha l,\qquad\delta_{(a,\alpha)}n=-\alpha n+\bar{a}m+a \bar{m},\qquad a\ll 1,\qquad\alpha\ll 1,\] (A.14)
is the infinitesimal version of (2.6) and (2.7). This coincides with the anomalies (5.10) and (5.11) for \(\alpha=-w_{\xi}\) and \(a=m\cdot Z\). Taking these special values,
\[\delta_{(a,\alpha)}F=\frac{\partial F}{\partial l}\Delta_{\xi}l+\frac{ \partial F}{\partial n}\Delta_{\xi}n=\Delta_{\xi}F.\] (A.15)
Therefore the vanishing of the LHS implies that \(F\) is anomaly-free.
## Appendix B Alternative polarizations
Different choices of \(\theta^{\prime}\) can be obtained by integrating by parts in field space, writing \(p\delta q=\delta(pq)-q\delta p\) for one or more canonical pairs, or by integrating by parts on the hypersurface, thus moving terms in and out of \(\vartheta\). Not all such manipulations are useful when looking for admissible boundary conditions, because they may lead to a symplectic potential which is not in diagonal form, or whose \(\delta q\) are not independent. In the main text we restricted attention to changes of polarization in the spin-0 sector only. In this Appendix we present two more changes that affect the boundary Lagrangian. We will not consider integrations by parts in spacetime, namely the corner term \(\vartheta^{\text{\tiny EH}}\) is always the same. We can then start from (3.14).
Changing polarization in the spin-1 sector can be done using
\[\pi_{\mu}\delta l^{\mu}\,\epsilon_{\mathcal{N}}=\delta((\theta-2k)\epsilon_{ \mathcal{N}})-l^{\mu}\delta(\pi_{\mu}\epsilon_{\mathcal{N}}).\] (B.1)
This manipulation changes the boundary Lagrangian. It remains in the family (3.31), but with different numerical coefficients. As mentioned in the main text, the momentum \(\pi_{\mu}\) is determined in terms of the shear of the null hypersurface by the Einstein's equations. Therefore it cannot be specified independently from the shear, hence boundary conditions based on this polarization would be consistent only if the boundary equations of motion are satisfied.
For the second change, we observe that the spin-2 and spin-0 sectors can be written in terms of a single tensor, so that
(Equations (B.2)–(B.4), which introduce the densitized momentum \(\tilde{\Pi}^{\mu\nu}\) collecting the spin-2 and spin-0 sectors, are not reproduced here.)
One can also combine this with a change in the spin-1 sector via
\[(\eta_{\mu}-\theta n_{\mu})\delta l^{\mu}\epsilon_{\mathcal{N}}=\delta(\theta \epsilon_{\mathcal{N}})-l^{\mu}\delta\big{(}(\eta_{\mu}-\theta n_{\mu})\epsilon _{\mathcal{N}}\big{)},\] (B.5)
and get another element of the same family. If we now restrict the variations to \(\delta l^{\mu}=0\) we have
\[\not{\varrho}^{\text{\tiny EH}}=-\gamma_{\mu\nu}\delta\tilde{\Pi}^{\mu\nu}d^{ 3}x-\delta(\theta\epsilon_{\mathcal{N}})+d\vartheta^{\text{\tiny EH}}.\] (B.6)
This is reminiscent of the Neumann form of the symplectic potential in the non-null case; however, \(\Pi\) misses the \(\eta\) part of the extrinsic geometry. One could try to resolve this by rewriting in terms of \(W_{\mu\nu}\), which gives
\[\not{\varrho}^{\text{\tiny EH}}=[\gamma^{\mu\nu}\delta W_{\mu\nu}+\delta W+( 2\eta_{\mu}+kn_{\mu})\delta l^{\mu}+kn^{\mu}\delta l_{\mu}]\epsilon_{ \mathcal{N}}+d\vartheta^{\text{\tiny EH}}.\] (B.7)
So even if \(W_{\mu\nu}\) contains the \(\eta\) term missing in \(\Pi\), it drops out because \(\eta_{\mu}l_{\nu}\delta\gamma^{\mu\nu}=0\).
## Appendix C Closure of Lie brackets
In this Appendix we study the conditions under which symmetry vector fields are closed under the spacetime Lie bracket. We consider first the vector fields (5.41). First of all we check that they close as intrinsic vectors on \(\mathcal{N}\). Namely we define
\[\hat{\xi}^{\mu}:=\tau(\lambda,x^{B})\partial_{\lambda}+Y^{A}(x^{B})\partial_{ A}.\] (C.1)
Then we have
\[[\hat{\xi}_{1},\hat{\xi}_{2}]=\hat{\xi}_{12},\qquad\tau_{12}:=\tau_{1}\dot{ \tau}_{2}+Y_{1}[\tau_{2}]-(1\leftrightarrow 2),\qquad Y_{12}=[Y_{1},Y_{2}]_{S}.\] (C.2)
The algebra closes and has a semi-direct product structure with \(\text{Diff}(S)\) acting on the space \(\text{Diff}(\mathbb{R})^{S}\) of \(\text{Diff}(\mathbb{R})\)-valued functions as scalars.
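The closure relation (C.2) can be verified symbolically in a simple special case. The sketch below is our own illustration (not part of the original derivation): it restricts to a single cross-sectional coordinate \(x\), takes generic functions \(\tau_{1,2}(\lambda,x)\) and \(Y_{1,2}(x)\), and checks with sympy that the commutator of two vectors of the form (C.1) reproduces \(\tau_{12}\) and \(Y_{12}\) as in (C.2).

```python
import sympy as sp

lam, x = sp.symbols('lambda x')
tau1, tau2 = sp.Function('tau1')(lam, x), sp.Function('tau2')(lam, x)
Y1, Y2 = sp.Function('Y1')(x), sp.Function('Y2')(x)

def bracket(t1, Ya, t2, Yb):
    """Components of [xi1, xi2] for xi = tau(lam, x) d_lam + Y(x) d_x."""
    xi1 = lambda f: t1 * sp.diff(f, lam) + Ya * sp.diff(f, x)
    xi2 = lambda f: t2 * sp.diff(f, lam) + Yb * sp.diff(f, x)
    return sp.expand(xi1(t2) - xi2(t1)), sp.expand(xi1(Yb) - xi2(Ya))

tau12, Y12 = bracket(tau1, Y1, tau2, Y2)

# expected components from (C.2)
tau12_exp = (tau1 * sp.diff(tau2, lam) + Y1 * sp.diff(tau2, x)
             - tau2 * sp.diff(tau1, lam) - Y2 * sp.diff(tau1, x))
Y12_exp = Y1 * sp.diff(Y2, x) - Y2 * sp.diff(Y1, x)

print(sp.simplify(tau12 - tau12_exp))  # 0
print(sp.simplify(Y12 - Y12_exp))      # 0
```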
Next, recall that the condition \(n_{\mu}\delta_{\xi}l^{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) partially constrains the extension \(\xi\) of the intrinsic vectors \(\bar{\xi}\) off of \(\mathcal{N}\), specifically the component \(\xi^{\Phi}=\Phi\bar{\xi}^{\Phi}=\Phi(\delta_{\xi}\ln f-\dot{\tau})\). The fact that the extension is not arbitrary makes closure of the \(\xi\)'s under the spacetime Lie bracket not automatic. The non-trivial component to check is
\[[\xi_{1},\xi_{2}]^{\Phi} =\Phi((\pounds_{\xi_{1}}\delta_{\xi_{2}}-\pounds_{\xi_{2}}\delta_ {\xi_{1}})\ln f-\dot{\tau}_{[\xi_{1},\xi_{2}]})+O(\Phi^{2})\] \[=\Phi(\delta_{[\xi_{1},\xi_{2}]}\ln f-\dot{\tau}_{[\xi_{1},\xi_{2 }]}+(\pounds_{\xi_{1}}\delta_{\xi_{2}}-\pounds_{\xi_{2}}\delta_{\xi_{1}}- \delta_{[\xi_{1},\xi_{2}]})\ln f)+O(\Phi^{2}),\] (C.3)
so closure occurs for \(\delta_{\xi}f=0\). If \(\delta_{\xi}f\neq 0\), it is still possible to obtain closure, but only if \(f\) and \(\xi\) satisfy
\[(\pounds_{\xi_{1}}\delta_{\xi_{2}}-\pounds_{\xi_{2}}\delta_{\xi_{1 }}-\delta_{[\xi_{1},\xi_{2}]})\ln f =(\pounds_{\xi_{1}}\Delta_{\xi_{2}}-\pounds_{\xi_{2}}\Delta_{ \xi_{1}}-\Delta_{[\xi_{1},\xi_{2}]})\ln f\] \[=\pounds_{\xi_{1}}\left(\frac{\partial\ln f}{\partial g_{\mu\nu} }\right)\pounds_{\xi_{2}}g_{\mu\nu}-\pounds_{\xi_{2}}\left(\frac{\partial\ln f }{\partial g_{\mu\nu}}\right)\pounds_{\xi_{1}}g_{\mu\nu}.\] (C.4)
This is a non-trivial equation that cannot be satisfied for general \(f\) without restricting the diffeomorphisms. In particular we notice that it is satisfied if \(\Delta_{\xi}f=0\).
Adding the condition \(\delta_{\xi}l_{\mu}\stackrel{{\mathcal{N}}}{{=}}0\) eliminates the extra term, and the algebra closes. Notice that it closes for \(\tau\) an arbitrary function on \(\mathcal{N}\), namely for the group \(\text{Diff}_{l}(\mathcal{N})\) associated with the relaxed
CFP phase space with \(\delta_{\xi}k\) arbitrary, as well as for \(\text{Diff}(\mathcal{N})\) associated with the further relaxation of \(\delta_{\xi}l^{\mu}\) to \(n_{\mu}\delta_{\xi}l^{\mu}=0\) only. This result on the closure of the algebra may appear in tension with [66], where it was proved that the largest _corner_ subalgebra that closes at first order under the Lie bracket includes at most a linear dependence in time, as opposed to the arbitrary time dependence of \(\tau(\lambda,x^{A})\) here. The difference is that the corner subalgebra includes two independent sets of super-translations, corresponding to the two null times that tick off the corner. Restricting the diffeomorphisms to be tangent to \(\mathcal{N}\) eliminates from the algebra those elements that prevent an arbitrary time dependence.
As for the group \(\text{Diff}(\mathcal{N})\ltimes\mathbb{R}^{\mathcal{N}}\) mentioned in the paragraph after (5.35), the abelian character of the extra factor follows from the fact that vector fields labelled only by the first-order transversal extension \(\bar{\xi}^{\Phi}\) commute under the spacetime Lie bracket.
## Appendix D Derivation of Damour's equation
In this Appendix we give for convenience the derivation of Damour's equation in our notation. This equation is the tangential constraint on a null hypersurface, and its relevance is that it is the dynamical content of the flux-balance law for \(\bar{\eta}_{\mu}\). We fix a cross-section \(S\) of \(\mathcal{N}\) with the help of a rigging vector \(n_{\mu}\). We then consider a vector \(v\in TS\), and contract the Einstein's equations to look at the two tangential constraints:
\[G_{\mu\nu}l^{\mu}v^{\nu}=R_{\mu\nu}l^{\mu}v^{\nu}=v^{\nu}(\nabla_{\rho}\nabla_ {\nu}-\nabla_{\nu}\nabla_{\rho})l^{\rho}.\] (D.1)
For the second term we can use (2.20), and we choose \(l\) such that \(n^{\mu}\partial_{\mu}l^{2}=0\). For the first term, we expand
\[\nabla_{\nu}l^{\rho}=(\sigma^{\rho}_{\nu}+\frac{1}{2}\theta\gamma^{\rho}_{ \nu})-l^{\rho}\eta_{\nu}-kn_{\nu}l^{\rho}-A^{\rho}l_{\nu}+B^{\rho}l_{\nu},\] (D.2)
where \(A^{\rho}=\gamma^{\rho}_{\sigma}n^{\mu}\nabla_{\mu}l^{\sigma}\) and \(B^{\rho}=l^{\rho}n_{\sigma}n^{\mu}\nabla_{\mu}l^{\sigma}\). Then
\[v^{\nu}\nabla_{\rho}\nabla_{\nu}l^{\rho}=v^{\nu}\nabla_{\rho} \sigma^{\rho}_{\nu}+\frac{1}{2}v^{\rho}\nabla_{\nu}(\theta\gamma^{\nu}_{\rho} )-(\theta+k)v^{\nu}\eta_{\nu}-v^{\nu}l^{\rho}\nabla_{\rho}\eta_{\nu}-kv^{\nu} l^{\rho}\nabla_{\rho}n_{\nu}-A^{\rho}v^{\nu}(\sigma_{\nu\rho}+\frac{1}{2}\gamma_{ \nu\rho}\theta),\] (D.3)
where we used \(B^{\rho}v^{\nu}\nabla_{\rho}l_{\nu}=n_{\sigma}n^{\mu}\nabla_{\mu}l^{\sigma}(l ^{\rho}v^{\nu}\nabla_{\rho}l_{\nu})=0\). Next, we compute
\[v^{\nu}\nabla_{\rho}\sigma^{\rho}_{\nu}=v^{\nu}D_{\rho}\sigma^{\rho}_{\nu}+v^ {\nu}l^{\sigma}\sigma^{\rho}_{\nu}\nabla_{\sigma}n_{\rho}+A^{\rho}v^{\nu} \sigma_{\nu\rho}\] (D.4)
where we used a resolution of the identity in \(\rho\), and \(D_{\mu}v^{\nu}:=\gamma^{\rho}_{\mu}\nabla_{\rho}v^{\nu}\) is the 2d covariant derivative, and
\[\frac{1}{2}v^{\nu}\nabla_{\rho}(\theta\gamma^{\rho}_{\nu})=\frac{1}{2}v^{\rho }\partial_{\rho}\theta+\frac{1}{2}\theta v^{\nu}\pounds_{n}l_{\nu}+\frac{1}{2 }\theta v^{\nu}\pounds_{l}n_{\nu}\] (D.5)
Furthermore, we write that \(-v^{\nu}l^{\rho}\nabla_{\rho}\eta_{\nu}=-v^{\nu}\pounds_{l}\eta_{\nu}+v^{\nu}\eta_{\rho}\nabla_{\nu}l^{\rho}=-v^{\nu}\pounds_{l}\eta_{\nu}+v^{\nu}\eta_{\rho}(\sigma^{\rho}_{\nu}+\frac{1}{2}\theta\gamma^{\rho}_{\nu})\). If we consider the second term of this equality, it can be divided into two subterms: one proportional to the shear, which combines with the term \(v^{\nu}l^{\sigma}\sigma^{\rho}_{\nu}\nabla_{\sigma}n_{\rho}\) of (D.4), and a second term proportional to the expansion, which combines with the last term of (D.3) to give \(-\frac{1}{2}\theta v^{\nu}\pounds_{n}l_{\nu}\) and therefore cancels the last term of (D.5). Thus, we have
\[v^{\nu}\nabla_{\rho}\nabla_{\nu}l^{\rho}=v^{\nu}D_{\rho}\sigma^{\rho}_{\nu}+v^ {\nu}\sigma^{\rho}_{\nu}\pounds_{l}n_{\rho}+\frac{1}{2}v^{\nu}\partial_{\nu} \theta+\frac{1}{2}\theta v^{\nu}\pounds_{l}n_{\nu}-\theta v^{\nu}\pounds_{l}n _{\nu}-kv^{\nu}\pounds_{l}n_{\nu}\] (D.6)
Furthermore, we have that \(\pounds_{l}(v^{\mu}n_{\mu})=0\), so if we assume that \(n_{\mu}\pounds_{l}v^{\mu}=0\), we have that \(v^{\mu}\pounds_{l}n_{\mu}=0\) for any \(v\in TS\), and we get
\[v^{\nu}\nabla_{\rho}\nabla_{\nu}l^{\rho}=v^{\nu}D_{\rho}\sigma_{\nu}^{\rho}+ \frac{1}{2}v^{\nu}\partial_{\nu}\theta-\theta v^{\nu}\eta_{\nu}-v^{\nu} \pounds_{l}\eta_{\nu}\] (D.7)
and so by subtracting the piece \(v^{\nu}\nabla_{\nu}\nabla_{\rho}l^{\rho}\) we arrive at the Damour equation
\[T_{\mu\nu}l^{\mu}v^{\nu}\epsilon_{\mathcal{N}}=v^{\nu}D_{\rho}\sigma_{\nu}^{ \rho}\epsilon_{\mathcal{N}}-v^{\rho}\partial_{\rho}(k+\frac{1}{2}\theta) \epsilon_{\mathcal{N}}-v^{\mu}\pounds_{l}(\eta_{\mu}\epsilon_{\mathcal{N}}).\] (D.8)
|
2309.08901 | Combinatorial curvature flows for generalized hyperbolic circle packings | Generalized circle packings were introduced in \cite{Ba-Hu-Sun} as a
generalization of tangential circle packings in hyperbolic background geometry.
In this paper, we introduce the combinatorial Calabi flow, fractional
combinatorial Calabi flow and combinatorial $p$-th Calabi flow for generalized
hyperbolic circle packings. We establish several equivalent conditions
regarding the longtime behaviors of these flows. This provides effective
algorithms for finding the generalized circle packings with prescribed total
geodesic curvatures. | Te Ba, Chao Zheng | 2023-09-16T06:59:59Z | http://arxiv.org/abs/2309.08901v1 | # Combinatorial curvature flows for generalized hyperbolic circle packings
###### Abstract.
Generalized circle packings were introduced in [1] as a generalization of tangential circle packings in hyperbolic background geometry. In this paper, we introduce the combinatorial Calabi flow, fractional combinatorial Calabi flow and combinatorial \(p\)-th Calabi flow for generalized hyperbolic circle packings. We establish several equivalent conditions regarding the longtime behaviors of these flows. This provides effective algorithms for finding the generalized circle packings with prescribed total geodesic curvatures.
Key words and phrases:Combinatorial curvature flows; Generalized hyperbolic circle packings; Total geodesic curvatures; MSC (2020): 52C26, 53E99, 57Q15
## 1. Introduction
### Background
The notion of circle patterns was proposed in the work of Thurston [12] as a significant tool to study the hyperbolic structure on 3-manifolds. He introduced hyperbolic circle patterns on a triangulated surface with prescribed intersection angles. The induced polyhedral metric may produce conical singularities at the vertices. The classical discrete Gaussian curvature is introduced to describe the singularities at the vertices; it is defined as the difference between \(2\pi\) and the cone angle at a vertex. Motivated by the work of Hamilton [11], Chow-Luo [2] introduced the combinatorial Ricci flow on closed triangulated surfaces, which is a discrete analogue of Hamilton's Ricci flow. Under some combinatorial conditions, they proved that the combinatorial Ricci flow exists for all time and converges exponentially fast to Thurston's circle patterns on closed triangulated surfaces both in Euclidean and hyperbolic background geometry. Since then, combinatorial curvature flows have become an important approach for finding geometric structures on low-dimensional manifolds. See, for instance, combinatorial Yamabe flow [7], combinatorial Calabi flow [3, 4] and fractional combinatorial Calabi flow [13].
Recently, a new geometric quantity called “total geodesic curvature” was introduced by Nie [8] to measure the singularities of circle patterns in spherical background geometry. Nie [8] provided the existence and rigidity results for spherical circle patterns with respect to the total geodesic curvature. Motivated by Nie's work, Ba-Hu-Sun [1] investigated the existence and rigidity of the generalized hyperbolic circle packing (the intersection angle of two circles is zero) with respect to the total geodesic curvature. To find the generalized circle packings with prescribed total geodesic curvatures, they [1] further introduced the combinatorial Ricci flow and proved that its solution exists for all time and converges exponentially. In this paper, we introduce the combinatorial Calabi flow, fractional combinatorial Calabi flow and combinatorial \(p\)-th Calabi flow for generalized hyperbolic circle packings. We further prove the longtime existence and convergence for the solutions of these combinatorial curvature flows.
### Set up
We begin by recalling the notion of a pseudo ideal triangulation on surfaces, which was introduced in [9] and generalized in [1, 5]. Let \((S,T)\) be a connected closed surface with a triangulation \(T\). Let \(V\), \(E\), \(F\) be the vertex, edge and face sets of \(T\). For simplicity of notation, we use one index to denote a vertex and two indices to denote an edge (\(ij\) is the arc on \(S\) joining \(i\) and \(j\)). For each \(i\in V\), we use \(U(i)\) to denote a small open regular neighborhood of \(i\). We define
\[N(I):=\cup_{i\in I}U(i)\]
for each \(I\subset V\). Suppose \(I_{1},I_{2}\subset V\) satisfying \(I_{1}\cap I_{2}=\emptyset\). Set
\[S_{I_{1},I_{2}}=S\setminus(N(I_{1})\cup I_{2}).\]
Then \(S_{I_{1},I_{2}}\) is a connected surface with \(n\geq 0\) boundary components and \(m\geq 0\) punctures, where \(|I_{1}|=n\), \(|I_{2}|=m\). The intersection
\[T_{I_{1},I_{2}}=\{\sigma\cap S_{I_{1},I_{2}}|\sigma\in T\}\]
is called the pseudo ideal triangulation of \(S_{I_{1},I_{2}}\). We use \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) to denote the surface \(S_{I_{1},I_{2}}\) with a pseudo ideal triangulation \(T_{I_{1},I_{2}}\). The intersections
\[E_{I_{1},I_{2}}:=\{ij\cap S_{I_{1},I_{2}}|ij\in E\},\quad F_{I_{1},I_{2}}:=\{ijk \cap S_{I_{1},I_{2}}|ijk\in F\}\]
are called the edge and face set of \(T_{I_{1},I_{2}}\). The intersection of a face of \(F_{I_{1},I_{2}}\) and \(\partial S_{I_{1},I_{2}}\) is called a \(B\)-arc.
A generalized hyperbolic circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) is a map \(k:V\to\mathbb{R}_{+}\) satisfying
* \(k(i)<1\) if \(i\in I_{1}\),
* \(k(i)=1\) if \(i\in I_{2}\),
* \(k(i)>1\) if \(i\in I_{3}\),
where \(I_{3}=V\setminus(I_{1}\cup I_{2})\). The geometry of \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) is determined as follows:
* The length of edges of \(E_{I_{1},I_{2}}\) is defined by \(d:E_{I_{1},I_{2}}\to\mathbb{R}_{+}\), where \[d(ij)=\left\{\begin{aligned} &\operatorname{arctanh}k(i)+ \operatorname{arctanh}k(j),&& i,j\in I_{1},\\ &\operatorname{arccoth}k(i)+\operatorname{arccoth}k(j),&& i,j\in I_{3},\\ &\operatorname{arctanh}k(i)+\operatorname{arccoth}k(j),&& i\in I_{1},j\in I_{3},\\ &+\infty,&& i\text{ or }j\in I_{2},\end{aligned}\right.\]
* Each angle at the endpoints of the \(B\)-arcs is defined to be \(\pi/2\).
It is proved in [1] that the side lengths of \(B\)-arcs and angles of each face can be uniquely determined by the generalized hyperbolic circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) and (\(i\)), (\(ii\)).
Let us provide a brief explanation of the geometric meaning of \(d(ij)\). If \(i,j\in I_{1}\), \(d(ij)\) is the distance between the axes of the two hypercycles with curvatures \(k(i)\), \(k(j)\). If \(i,j\in I_{3}\), \(d(ij)\) is the distance between the centers of the two circles with curvatures \(k(i)\), \(k(j)\). If \(i\in I_{1}\) and \(j\in I_{3}\), \(d(ij)\) is the distance between the axis of the hypercycle with curvature \(k(i)\) and the center of the circle with curvature \(k(j)\). If \(i\) or \(j\in I_{2}\), \(d(ij)\) is the distance from the center of the horocycle (curvature \(1\)) to the center or axis of the other circle, horocycle, or hypercycle, which is \(+\infty\).
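For concreteness, the case distinction defining \(d(ij)\) can be evaluated numerically as in the sketch below. This is an illustration of ours (the function name and sample values are arbitrary); it uses \(\operatorname{arccoth}k=\operatorname{arctanh}(1/k)\) for \(k>1\) and returns \(+\infty\) as soon as a horocycle (\(k=1\)) is involved.

```python
import math

def edge_length(ki, kj):
    """Edge length d(ij) for a generalized hyperbolic circle packing metric:
    k < 1 (hypercycle), k = 1 (horocycle), k > 1 (circle)."""
    def half(k):
        if k < 1:
            return math.atanh(k)        # arctanh(k), hypercycle
        if k > 1:
            return math.atanh(1.0 / k)  # arccoth(k) = arctanh(1/k), circle
        return math.inf                 # horocycle
    return half(ki) + half(kj)

print(edge_length(0.5, 2.0))  # hypercycle + circle: finite
print(edge_length(1.0, 2.0))  # horocycle involved: inf
```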
Suppose \(k:V\to\mathbb{R}_{+}\) is a generalized hyperbolic circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\). Each \(f\in F_{I_{1},I_{2}}\) can be embedded into three mutually tangent hyperbolic circles (including horocycles and hypercycles). Here we cite [1, Figure 4] as an explanation, as shown in Figure
1. Then there exists a hyperbolic circle packing (with possibly horocycles or hypercycles) on \(S_{I_{1},I_{2}}\) induced by \(k\). Let \(C_{v}\) be the circle of this packing which centered at \(v\). The total geodesic curvature of \(k\) at \(v\in V\) is defined as the total geodesic curvature of \(C_{v}\). It can be calculated by
\[L_{v}=\int_{C_{v}}k(v)ds=l(v)k(v),\]
where \(l(v)\) is the length of \(C_{v}\). Note that each circle is not necessarily embedded in \(\mathbb{H}^{2}\), because \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) may have singularities at the vertices or edges when assigned a generalized hyperbolic circle packing metric.
**Theorem 1.1** ([1], Theorem 1.2).: Let \((S,T)\) be a connected closed surface with the vertex, face set \(V\), \(F\). Let \(F_{I}\) be the set of faces having at least one vertex in \(I\) for subset \(I\subset V\). Then there exists \(I_{1},I_{2}\subset V\) and a generalized hyperbolic circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) having the total geodesic curvature \(L_{1},\cdots,L_{|V|}\) on each vertex if and only if \((L_{1},\cdots,L_{|V|})\in\Omega\), where
\[\Omega=\left\{(L_{1},\cdots,L_{|V|})\in\mathbb{R}_{+}^{|V|}\,|\sum\nolimits_{i =1}^{|V|}L_{i}<\pi|F_{I}|\,\,\,\text{for each }I\subset V\right\}. \tag{1}\]
Moreover, the choice of \(I_{1},I_{2}\) and generalized hyperbolic circle packing metric is unique if it exists.
### Main results
Motivated by Ge's work on combinatorial Calabi flow [3, 4], Wu-Xu's work on fractional combinatorial Calabi flow [13] and Lin-Zhang's work on combinatorial \(p\)-th Calabi flow [6], we introduce the following combinatorial Calabi flow, fractional combinatorial Calabi flow and combinatorial \(p\)-th Calabi flow for generalized hyperbolic circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\). Set \(K_{i}=\ln k_{i}\).
**Definition 1.2**.: Let \(\widehat{L}\in\mathbb{R}_{+}^{|V|}\) be a given function defined on \(V\). The combinatorial Calabi flow for generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) is defined to be
\[\begin{cases}\frac{dK_{i}}{dt}=\Delta(L-\widehat{L})_{i},\\ K_{i}(0)=K_{0},\end{cases} \tag{2}\]
Figure 1. Three-circle configurations
where \(\Delta\) is the discrete Laplace operator defined by
\[\Delta f_{i}=-\sum_{j=1}^{|V|}\frac{\partial L_{i}}{\partial K_{j}}f_{j} \tag{3}\]
for any function \(f:V\rightarrow\mathbb{R}\).
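For illustration, the flow (2) can be integrated with an explicit Euler scheme once the curvature map \(K\mapsto L(K)\) and its Jacobian \(\partial L/\partial K\) are available. The following sketch is only a schematic discretization of ours; the callables `L_of_K` and `Jac_of_K` are assumed to be supplied by the user and are not implemented here.

```python
import numpy as np

def calabi_flow_euler(K0, L_of_K, Jac_of_K, L_target, dt=1e-2, n_steps=10**4):
    """Explicit Euler steps for dK/dt = Delta(L - L_hat), with
    Delta = -(dL/dK).  L_of_K(K) returns the vector L(K); Jac_of_K(K) returns
    the matrix dL/dK.  Both are user-supplied placeholders."""
    K = np.array(K0, dtype=float)
    for _ in range(n_steps):
        residual = L_of_K(K) - L_target
        if np.linalg.norm(residual) < 1e-10:
            break
        K -= dt * (Jac_of_K(K) @ residual)  # Delta(L - L_hat) = -(dL/dK)(L - L_hat)
    return K
```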
Set
\[\Lambda=(\Lambda_{ij})_{|V|\times|V|}=\frac{\partial(L_{1},...,L_{|V|})}{ \partial(K_{1},...,K_{|V|})}.\]
Equation (3) implies \(\Delta=-\Lambda\). By Lemma 2.2, the matrix \(\Lambda\) is symmetric and positive definite on \(\mathbb{R}^{|V|}\). There exists an orthogonal matrix \(Q\) such that
\[\Lambda=Q^{T}\cdot\text{diag}\{\lambda_{1},...,\lambda_{|V|}\}\cdot Q,\]
where \(\lambda_{1},...,\lambda_{|V|}\) are non-negative eigenvalues of the matrix \(\Lambda\). For any \(s\in\mathbb{R}\), the \(2s\)-th order fractional discrete Laplace operator \(\Delta^{s}\) is defined to be
\[\Delta^{s}=-\Lambda^{s}=-Q^{T}\cdot\text{diag}\{\lambda_{1}^{s},...,\lambda_{ |V|}^{s}\}\cdot Q. \tag{4}\]
Therefore, the fractional discrete Laplace operator \(\Delta^{s}\) is negative definite on \(\mathbb{R}^{|V|}\). In particular, if \(s=0\), then \(\Delta^{s}\) reduces to minus the identity operator; if \(s=1\), then \(\Delta^{s}\) reduces to the discrete Laplace operator \(\Delta=-\Lambda=-(\frac{\partial L_{i}}{\partial K_{j}})_{|V|\times|V|}\).
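The spectral definition (4) translates directly into a short numerical routine. The sketch below is ours and only checks the two limiting cases \(s=0\) and \(s=1\) on an arbitrary symmetric positive definite matrix.

```python
import numpy as np

def fractional_laplacian(Lam, s):
    """Delta^s = -Lam^s via the eigendecomposition (4); Lam is assumed
    symmetric positive (semi-)definite."""
    w, Q = np.linalg.eigh(Lam)        # Lam = Q diag(w) Q^T
    return -(Q * w**s) @ Q.T

Lam = np.array([[2.0, -0.5], [-0.5, 1.0]])
print(np.allclose(fractional_laplacian(Lam, 0), -np.eye(2)))  # s=0: minus identity
print(np.allclose(fractional_laplacian(Lam, 1), -Lam))        # s=1: -Lambda
```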
**Definition 1.3**.: Let \(\widehat{L}\in\mathbb{R}_{+}^{|V|}\) be a given function defined on \(V\). The fractional combinatorial Calabi flow for generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) is defined to be
\[\begin{cases}\frac{dK_{i}}{dt}=\Delta^{s}(L-\widehat{L})_{i},\\ K_{i}(0)=K_{0},\end{cases} \tag{5}\]
where \(\Delta^{s}\) is the fractional discrete Laplace operator defined by (4).
**Remark 1.4**.: If \(s=0\), the fractional combinatorial Calabi flow (5) is reduced to the combinatorial Ricci flow introduced by Ba-Hu-Sun [1]. If \(s=1\), the fractional combinatorial Calabi flow (5) is reduced to the combinatorial Calabi flow (2).
By Lemma 2.2, we have
\[\Delta f_{i}=-\Lambda f_{i}=\sum_{j\sim i}(-B_{ij})(f_{j}-f_{i})+A_{i}f_{i},\]
where \(B_{ij}=\frac{\partial L_{i}^{jk}}{\partial K_{j}}+\frac{\partial L_{i}^{jl}} {\partial K_{j}}\) defined by (8) and \(A_{i}=\frac{\partial}{\partial K_{i}}\left(\sum_{ijk}\text{Area}(\Omega_{ijk})\right)\) defined by (9). For any \(p>1\), we define the discrete \(p\)-th Laplace operator \(\Delta_{p}\) for generalized hyperbolic circle packing metrics by the following formula
\[\Delta_{p}f_{i}=\sum_{j\sim i}(-B_{ij})|f_{j}-f_{i}|^{p-2}(f_{j}-f_{i}), \tag{6}\]
where \(f:V\rightarrow\mathbb{R}\) is a function.
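A direct implementation of (6) on a weighted graph takes only a few lines. In the sketch below (our illustration), `B` stores the negative weights \(B_{ij}\) over unordered edges and `f` is a dictionary over the vertices; the vertex labels and values are arbitrary.

```python
def p_laplacian(f, B, p):
    """Discrete p-th Laplace operator (6):
    (Delta_p f)_i = sum_{j ~ i} (-B_ij) |f_j - f_i|^(p-2) (f_j - f_i),
    with B = {(i, j): B_ij}, B_ij < 0, over unordered edges and p > 1."""
    out = {i: 0.0 for i in f}
    for (i, j), b in B.items():
        d = f[j] - f[i]
        if d == 0.0:            # avoid 0**(p-2) issues when p < 2
            continue
        w = (-b) * abs(d) ** (p - 2)
        out[i] += w * d
        out[j] -= w * d         # the term seen from vertex j uses f_i - f_j
    return out

# toy usage on a triangle with vertices 0, 1, 2
print(p_laplacian({0: 1.0, 1: 2.0, 2: 4.0},
                  {(0, 1): -0.2, (1, 2): -0.3, (0, 2): -0.1}, p=3))
```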
**Definition 1.5**.: Let \(\widehat{L}\in\mathbb{R}_{+}^{|V|}\) be a given function defined on \(V\). The combinatorial \(p\)-th Calabi flow for generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) is defined to be
\[\begin{cases}\frac{dK_{i}}{dt}=(\Delta_{p}+A_{i})(L-\widehat{L})_{i},\\ K_{i}(0)=K_{0},\end{cases} \tag{7}\]
where \(\Delta_{p}\) is the discrete \(p\)-th Laplace operator defined by (6).
**Remark 1.6**.: If \(p=2\), then the discrete \(p\)-th Laplace operator (6) is reduced to the discrete Laplace operator (3) and hence the combinatorial \(p\)-th Calabi flow (7) is reduced to the combinatorial Calabi flow (2).
The main result of this paper is as follows, which gives the longtime existence and convergence for the solutions of the combinatorial Calabi flow (2), the fractional combinatorial Calabi flow (5) and the combinatorial \(p\)-th Calabi flow (7) for generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\).
**Theorem 1.7**.: Let \(\widehat{L}\in\mathbb{R}_{+}^{|V|}\) be a given function defined on \(V\). The following statements are equivalent:
**(1):**: \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\), where \(\Omega\) is defined by (1);
**(2):**: The solution of the combinatorial Calabi flow (2) exists for all time and converges exponentially fast to a unique generalized circle packing metric with the total geodesic curvature \(\widehat{L}\);
**(3):**: The solution of the fractional combinatorial Calabi flow (5) exists for all time and converges exponentially fast to a unique generalized circle packing metric with the total geodesic curvature \(\widehat{L}\);
**(4):**: The solution of the combinatorial \(p\)-th Calabi flow (7) exists for all time and converges to a unique generalized circle packing metric with the total geodesic curvature \(\widehat{L}\).
**Remark 1.8**.: Different from the combinatorial Calabi flow (\(p=2\)), we can not get the exponential convergence for the solution of the combinatorial \(p\)-th Calabi flow for \(p>1,p\neq 2\).
### Acknowledgements
The first author is supported by NSF of China (No.11631010). The authors would like to thank Xu Xu and Ze Zhou for helpful discussions.
## 2. Some useful Lemmas
Let \(C_{i},C_{j},C_{k}\) be three mutually tangent hyperbolic circles (with possibly horocycles or hypercycles). Denote \(\Omega_{ijk}\) as the region enclosed by three arcs between tangency points of \(C_{i}\), \(C_{j}\) and \(C_{k}\). Denote \(l_{i}\) as the length of the arc between two points of tangency of \(C_{i}\). Set \(L_{i}=l_{i}k_{i}\) and \(K_{i}=\ln k_{i}\).
**Lemma 2.1** ([1]).: Let \(L_{i}^{jk}\), \(K_{i}\) and \(\Omega_{ijk}\) be defined as above. Then
**(\(i\)):**: \(\frac{\partial L_{i}^{jk}}{\partial K_{j}}=\frac{\partial L_{j}^{ik}}{ \partial K_{i}}<0.\)
**(\(ii\)):**: \(\frac{\partial\,\mathrm{Area}(\Omega_{ijk})}{\partial K_{i}}<0.\)
**(\(iii\)):**: \(\frac{\partial L_{i}^{jk}}{\partial K_{i}}>0\).
For any two adjacent faces \(ijk\) and \(ijl\) sharing a common edge \(ij\), we set
\[B_{ij}=\frac{\partial L_{i}^{jk}}{\partial K_{j}}+\frac{\partial L_{i}^{jl}}{ \partial K_{j}}, \tag{8}\]
\[A_{i}=\frac{\partial}{\partial K_{i}}\left(\sum_{ijk}\text{Area}(\Omega_{ijk}) \right). \tag{9}\]
By Lemma 2.1, we have \(A_{i}<0\) and \(B_{ij}<0\). Set \(\Lambda_{A}=\text{diag}\{A_{1},...,A_{|V|}\}\) and \(\Lambda_{B}=((\Lambda_{B})_{ij})_{|V|\times|V|}\), where
\[(\Lambda_{B})_{ij}=\begin{cases}-\sum_{k\sim i}B_{ik},&j=i,\\ B_{ij},&j\sim i,\\ 0,&j\nsim i,j\neq i.\end{cases}\]
Then \(\Lambda_{A}\) is negative definite. For any \(x\in\mathbb{R}^{|V|}\), we have
\[x^{T}\Lambda_{B}x=\sum_{i,j=1}^{|V|}(\Lambda_{B})_{ij}x_{i}x_{j}=\sum_{i\sim j }(B_{ij}x_{i}x_{j})-\sum_{i\sim j}x_{i}^{2}B_{ij}=-\frac{1}{2}\sum_{i\sim j}B_ {ij}(x_{i}-x_{j})^{2}\geq 0,\]
which implies \(\Lambda_{B}\) is positive semi-definite.
**Lemma 2.2**.: The matrix \(\Lambda=\frac{\partial(L_{1},...,L_{|V|})}{\partial(K_{1},...,K_{|V|})}\) could be decomposed to be
\[\Lambda=-\Lambda_{A}+\Lambda_{B}.\]
As a result, the matrix \(\Lambda\) is symmetric and positive definite on \(\mathbb{R}^{|V|}\).
Proof.: By Lemma 2.1, \(\frac{\partial L_{i}^{jk}}{\partial K_{j}}=\frac{\partial L_{j}^{ik}}{\partial K_{i}}\) and hence \(\frac{\partial L_{i}}{\partial K_{j}}=\frac{\partial L_{j}}{\partial K_{i}}\).
**(1):**: If \(j\nsim i\) and \(j\neq i\), then \(\frac{\partial L_{i}}{\partial K_{j}}=0\).
**(2):**: If \(j\sim i\), then
\[\frac{\partial L_{i}}{\partial K_{j}}=\frac{\partial(\sum_{ijk}L_{i}^{jk})}{ \partial K_{j}}=\sum_{ijk}\frac{\partial L_{i}^{jk}}{\partial K_{j}}=\frac{ \partial L_{i}^{jk}}{\partial K_{j}}+\frac{\partial L_{i}^{jl}}{\partial K_{j}}.\]
Then \(\frac{\partial L_{i}}{\partial K_{j}}=B_{ij}\) by (8).
**(3):**: If \(j=i\), then
\[\frac{\partial L_{i}}{\partial K_{i}}= \sum_{ijk}\frac{\partial\left(\pi-L_{j}^{ik}-L_{k}^{ij}-\text{ Area}(\Omega_{ijk})\right)}{\partial K_{i}}\] \[= -\sum_{ijk}\left(\frac{\partial L_{i}^{jk}}{\partial K_{j}}+\frac{ \partial L_{i}^{jk}}{\partial K_{k}}\right)-\sum_{ijk}\frac{\partial\text{ Area}(\Omega_{ijk})}{\partial K_{i}}\] \[= -\sum_{j\sim i}\left(\frac{\partial L_{i}^{jk}}{\partial K_{j}}+ \frac{\partial L_{i}^{jl}}{\partial K_{j}}\right)-\frac{\partial}{\partial K _{i}}\left(\sum_{ijk}\text{Area}(\Omega_{ijk})\right),\]
where the first equality is due to the following formula obtained by Ba-Hu-Sun ([1], Lemma 2.10)
\[\operatorname{Area}(\Omega_{ijk})=\pi-L_{i}^{jk}-L_{j}^{ki}-L_{k}^{ij}.\]
Then \(\frac{\partial L_{i}}{\partial K_{i}}=-A_{i}-\sum_{j\sim i}B_{ij}\). Therefore, \(\Lambda=-\Lambda_{A}+\Lambda_{B}\). Q.E.D.
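The decomposition in Lemma 2.2 and the definiteness properties of \(\Lambda_{A}\) and \(\Lambda_{B}\) can be checked numerically on a toy graph. In the sketch below (our own; the values of \(A_{i}<0\) and \(B_{ij}<0\) are arbitrary placeholders, not computed from any circle packing), \(\Lambda_{B}\) is assembled edge by edge and the eigenvalues of \(\Lambda_{B}\) and of \(-\Lambda_{A}+\Lambda_{B}\) are inspected.

```python
import numpy as np

# triangle graph with 3 vertices and placeholder coefficients
A = np.diag([-0.7, -0.5, -0.9])                 # Lambda_A (negative definite)
B = {(0, 1): -0.2, (1, 2): -0.3, (0, 2): -0.1}  # B_ij = B_ji < 0

n = 3
LB = np.zeros((n, n))
for (i, j), b in B.items():
    LB[i, j] = LB[j, i] = b      # off-diagonal entries B_ij
    LB[i, i] -= b                # diagonal entries -sum_k B_ik
    LB[j, j] -= b

Lam = -A + LB
print(np.linalg.eigvalsh(LB))    # >= 0: positive semi-definite
print(np.linalg.eigvalsh(Lam))   # > 0: positive definite
```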
## 3. The proof of Theorem 1.7
We divide Theorem 1.7 into three theorems and prove them respectively.
**Theorem 3.1**.: Let \(\widehat{L}\in\mathbb{R}_{+}^{|V|}\) be a given function defined on \(V\). If the solution of the combinatorial Calabi flow (2) converges for any initial data, then \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\). Furthermore, if \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\), then the solution of the combinatorial Calabi flow (2) exists for all time and converges exponentially fast to a unique generalized circle packing metric with the total geodesic curvature \(\widehat{L}\).
Proof.: Suppose the solution \(K(t)\) of the combinatorial Calabi flow (2) converges to some \(\widehat{K}\) as \(t\to+\infty\). By the \(C^{1}\)-smoothness of \(L\), we have \(L(\widehat{K})=\lim_{t\to+\infty}L(K(t))\). By the mean value theorem, there exists a sequence \(\xi_{n}\in(n,n+1)\) such that
\[K_{i}(n+1)-K_{i}(n)=K_{i}^{\prime}(\xi_{n})=\Delta(L(K(\xi_{n}))-\widehat{L})_ {i}\to 0,\text{ as }n\to+\infty.\]
Combining with Lemma 2.2, we have \(L_{i}(\widehat{K})=\lim_{n\to+\infty}L_{i}(K(\xi_{n}))=\widehat{L}_{i}\) for all \(i\in V\). Then \(\widehat{K}\) is a generalized circle packing metric with the total geodesic curvature \(\widehat{L}\). By Theorem 1.1, \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\).
Conversely, if \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\), there exists a unique \(\widehat{K}\) with the total geodesic curvature \(\widehat{L}\) by Theorem 1.1. The following function
\[\mathcal{E}(K)=\int_{\widehat{K}}^{K}\sum_{i=1}^{|V|}(L_{i}-\widehat{L}_{i})dK _{i}.\]
is well-defined and strictly convex on \(\mathbb{R}^{|V|}\) by Lemma 2.2. Furthermore, \(\mathcal{E}(\widehat{K})=0,\ \nabla\mathcal{E}(\widehat{K})=0\) and \(\operatorname{Hess}\mathcal{E}>0\). This implies \(\lim_{||K||\to+\infty}\mathcal{E}(K)=+\infty\). Hence, \(\mathcal{E}(K)\) is proper and \(0=\mathcal{E}(\widehat{K})\leq\mathcal{E}(K)\). By direct calculations, we have
\[\frac{d\mathcal{E}(K(t))}{dt}=\sum_{i=1}^{|V|}\frac{\partial\mathcal{E}}{ \partial K_{i}}\frac{dK_{i}}{dt}=\sum_{i=1}^{|V|}(L-\widehat{L})_{i}\Delta(L- \widehat{L})_{i}=-(L-\widehat{L})^{T}\cdot\Lambda\cdot(L-\widehat{L})\leq 0\]
by Lemma 2.2, which implies \(0\leq\mathcal{E}(K(t))\leq\mathcal{E}(K(0))\). Thus the solution \(\{K(t)\}\) of the combinatorial Calabi flow (2) lies in a compact subset of \(\mathbb{R}^{|V|}\), which implies the solution of the combinatorial Calabi flow (2) exists for all time.
By Lemma 2.2, the matrix \(\Lambda^{2}\) is strictly positive definite on \(\mathbb{R}^{|V|}\). By the continuity of the eigenvalues of \(\Lambda^{2}\), there exists \(\lambda_{0}>0\) such that the eigenvalues \(\lambda\) of \(\Lambda^{2}\) satisfy \(\lambda>\lambda_{0}\) along the combinatorial Calabi flow (2). Therefore, for the combinatorial Calabi energy
\[\mathcal{C}(K)=||L-\widehat{L}||^{2}=\sum_{i=1}^{|V|}(L_{i}-\widehat{L}_{i})^{ 2}, \tag{10}\]
we have
\[\frac{d\mathcal{C}(K(t))}{dt}=\sum_{i=1}^{|V|}\frac{\partial\mathcal{C}}{\partial K _{i}}\frac{dK_{i}}{dt}=-2(L-\widehat{L})^{T}\cdot\Lambda^{2}\cdot(L-\widehat{L} )\leq-2\lambda_{0}\mathcal{C}(K(t)),\]
which implies \(\mathcal{C}(K(t))\leq e^{-2\lambda_{0}t}\mathcal{C}(K(0))\). Then
\[||K(t)-\widehat{K}||^{2}\leq C_{1}||L(t)-\widehat{L}||^{2}\leq C_{1}e^{-2 \lambda_{0}t}||L(0)-\widehat{L}||^{2}\leq C_{2}e^{-2\lambda_{0}t}\]
for some positive constants \(C_{1},C_{2}\). Q.E.D.
**Theorem 3.2**.: Let \(\widehat{L}\in\mathbb{R}_{+}^{|V|}\) be a given function defined on \(V\). If the solution of the fractional combinatorial Calabi flow (5) converges for any initial data, then \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\). Furthermore, if \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\), then the solution of the fractional combinatorial Calabi flow (5) exists for all time and converges exponentially fast to a unique generalized circle packing metric with the total geodesic curvature \(\widehat{L}\).
Proof.: The first part follows, arguing as in the proof of Theorem 3.1, from the fact that \(\Delta^{s}\) is negative definite on \(\mathbb{R}^{|V|}\). If \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\), there exists a unique \(\widehat{K}\) with the total geodesic curvature \(\widehat{L}\) by Theorem 1.1. By direct calculations, we have
\[\frac{d\mathcal{E}(K(t))}{dt}=\sum_{i=1}^{|V|}\frac{\partial\mathcal{E}}{ \partial K_{i}}\frac{dK_{i}}{dt}=\sum_{i=1}^{|V|}(L-\widehat{L})_{i}\Delta^{s} (L-\widehat{L})_{i}=-(L-\widehat{L})^{T}\cdot\Lambda^{s}\cdot(L-\widehat{L}) \leq 0\]
by Lemma 2.2, which implies \(0\leq\mathcal{E}(K(t))\leq\mathcal{E}(K(0))\). Combining with the properness of \(\mathcal{E}\), the solution \(\{K(t)\}\) of the fractional combinatorial Calabi flow (5) lies in a compact subset of \(\mathbb{R}^{|V|}\), which implies the solution of the fractional combinatorial Calabi flow (5) exists for all time and \(\mathcal{E}(K(t))\) converges. There exists a sequence \(\xi_{n}\in(n,n+1)\) such that as \(n\to+\infty\),
\[\mathcal{E}(K(n+1))-\mathcal{E}(K(n))=(\mathcal{E}(K(t))^{\prime }|_{\xi_{n}}=\nabla\mathcal{E}\cdot\frac{dK_{i}}{dt}|_{\xi_{n}}\] \[= \sum_{i=1}^{|V|}(L-\widehat{L})_{i}\Delta^{s}(L-\widehat{L})_{i}| _{\xi_{n}}=-(L-\widehat{L})^{T}\cdot\Lambda^{s}\cdot(L-\widehat{L})|_{\xi_{n} }\to 0.\]
Then \(\lim_{n\to+\infty}L_{i}(K(\xi_{n}))=\widehat{L}_{i}=L_{i}(\widehat{K})\) for all \(i\in V\). By \(\{K(t)\}\subset\subset\mathbb{R}^{|V|}\), there exists \(K^{*}\in\mathbb{R}^{|V|}\) and a subsequence of \(\{K(\xi_{n})\}\), still denoted as \(\{K(\xi_{n})\}\) for simplicity, such that \(\lim_{n\to\infty}K(\xi_{n})=K^{*}\), which implies \(L_{i}(K^{*})=\lim_{n\to+\infty}L_{i}(K(\xi_{n}))=L_{i}(\widehat{K})\). This further implies \(K^{*}=\widehat{K}\) by Theorem 1.1. Therefore, \(\lim_{n\to\infty}K(\xi_{n})=\widehat{K}\).
Set \(\Gamma(K)=\Delta^{s}(L-\widehat{L})\), then \(D\Gamma|_{K=\widehat{K}}\) is negative definite, which implies that \(\widehat{K}\) is a local attractor of (5). The conclusion follows from the Lyapunov Stability Theorem ([10], Chapter 5).
Q.E.D.
**Remark 3.3**.: One can also use the combinatorial Calabi energy (10) to prove the exponential convergence of the solution \(K(t)\) of the fractional combinatorial Calabi flow (5), which is similar to the proof of Theorem 3.1.
**Theorem 3.4**.: Let \(\widehat{L}\in\mathbb{R}_{+}^{|V|}\) be a given function defined on \(V\). If the solution of the combinatorial \(p\)-th Calabi flow (7) converges for any initial data, then \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\). Furthermore, if \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\), then the solution of the combinatorial \(p\)-th Calabi flow (7) exists for all time and converges to a unique generalized circle packing metric with the total geodesic curvature \(\widehat{L}\).
Proof.: Suppose the solution \(K(t)\) of the combinatorial \(p\)-th Calabi flow (7) converges to \(\widehat{K}\) as \(t\to+\infty\), then \(L(\widehat{K})=\lim_{t\to+\infty}L(K(t))\) by the \(C^{1}\)-smoothness of \(L\). Furthermore, there exists a sequence \(\xi_{n}\in(n,n+1)\) such that
\[K_{i}(n+1)-K_{i}(n)=K^{\prime}_{i}(\xi_{n})=(\Delta_{p}+A_{i})(L(K(\xi_{n}))- \widehat{L})_{i}\to 0,\text{ as }n\to+\infty.\]
Set \(\widetilde{L}=\lim_{n\to+\infty}(L(K(\xi_{n}))-\widehat{L})=L(\widehat{K})- \widehat{L}\), then
\[\lim_{n\to+\infty}\sum_{i=1}^{|V|}\widetilde{L}_{i}(\Delta_{p}+A_{i}) \widetilde{L}_{i}=0. \tag{11}\]
Since \(A_{i}<0\) by (9), we have
\[\sum_{i=1}^{|V|}\widetilde{L}_{i}A_{i}\widetilde{L}_{i}\leq 0. \tag{12}\]
By the following formula obtained by Lin-Zhang ([6], Lemma 5.5)
\[\sum_{i=1}^{|V|}f_{i}\Delta_{p}f_{i}=\frac{1}{2}\sum_{i=1}^{|V|} \sum_{j\sim i}B_{ij}|f_{j}-f_{i}|^{p}\]
for any \(f:V\to\mathbb{R}\), we have
\[\sum_{i=1}^{|V|}\widetilde{L}_{i}\Delta_{p}\widetilde{L}_{i}\leq 0 \tag{13}\]
by \(B_{ij}<0\) in (8). Combining (11), (12) and (13), we have \(\widetilde{L}=0\), i.e., \(L_{i}(\widehat{K})=\widehat{L}_{i}\) for all \(i\in V\). By Theorem 1.1, \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\).
Conversely, if \(\{\widehat{L}_{i}\}_{i\in V}\in\Omega\), there exists a unique \(\widehat{K}\) with the total geodesic curvature \(\widehat{L}\) by Theorem 1.1. By direct calculations, we have
\[\frac{d\mathcal{E}(K(t))}{dt}= \sum_{i=1}^{|V|}\frac{\partial\mathcal{E}}{\partial K_{i}}\frac{ dK_{i}}{dt}=\sum_{i=1}^{|V|}(L-\widehat{L})_{i}(\Delta_{p}+A_{i})(L-\widehat{L})_ {i}\] \[= \frac{1}{2}\sum_{i=1}^{|V|}\sum_{j\sim i}B_{ij}|(L-\widehat{L})_ {i}-(L-\widehat{L})_{j}|^{p}+(L-\widehat{L})^{T}\cdot\Lambda_{A}\cdot(L- \widehat{L})\] \[\leq 0\]
by \(A_{i}<0\) and \(B_{ij}<0\), which implies \(0\leq\mathcal{E}(K(t))\leq\mathcal{E}(K(0))\). Combining with the properness of \(\mathcal{E}\), the solution \(\{K(t)\}\) of the combinatorial \(p\)-th Calabi flow (7) lies in a compact subset of \(\mathbb{R}^{|V|}\), which implies the solution of the combinatorial \(p\)-th Calabi flow (7) exists for
all time and \(\mathcal{E}(K(t))\) converges. There exists a sequence \(\xi_{n}\in(n,n+1)\) such that as \(n\to+\infty\),
\[\mathcal{E}(K(n+1))-\mathcal{E}(K(n))= (\mathcal{E}(K(t))^{\prime}|_{\xi_{n}}=\nabla\mathcal{E}\cdot \frac{dK_{i}}{dt}|_{\xi_{n}}\] \[= \sum_{i=1}^{|V|}(L-\widehat{L})_{i}(\Delta_{p}+A_{i})(L-\widehat{ L})_{i}|_{\xi_{n}}\] \[= \frac{1}{2}\sum_{i=1}^{|V|}\sum_{j\sim i}B_{ij}|(L-\widehat{L})_{i }-(L-\widehat{L})_{j}|^{p}|_{\xi_{n}}+(L-\widehat{L})^{T}\cdot\Lambda_{A}\cdot( L-\widehat{L})|_{\xi_{n}}\] \[\to 0.\]
Then \(\lim_{n\to+\infty}L_{i}(K(\xi_{n}))=\widehat{L}_{i}=L_{i}(\widehat{K})\) for all \(i\in V\). By \(\{K(t)\}\subset\subset\mathbb{R}^{|V|}\), there exists \(K^{*}\in\mathbb{R}^{|V|}\) and a convergent subsequence \(\{K(\xi_{n_{k}})\}\) of \(\{K(\xi_{n})\}\) such that \(\lim_{k\to\infty}K(\xi_{n_{k}})=K^{*}\). Then \(L_{i}(K^{*})=\lim_{k\to+\infty}L_{i}(K(\xi_{n_{k}}))=L_{i}(\widehat{K})\). This implies \(K^{*}=\widehat{K}\) by Theorem 1.1. Therefore, \(\lim_{k\to\infty}K(\xi_{n_{k}})=\widehat{K}\).
We use Lin-Zhang's trick in [6] to prove \(\lim_{t\to\infty}K(t)=\widehat{K}\). Suppose otherwise; then there exist \(\delta>0\) and \(t_{n}\to+\infty\) such that \(|K(t_{n})-\widehat{K}|>\delta\). This implies \(\{K(t_{n})\}\subseteq\mathbb{R}^{|V|}\backslash B(\widehat{K},\delta)\), where \(B(\widehat{K},\delta)\) is the ball centered at \(\widehat{K}\) with radius \(\delta\). Hence, there exists \(C>0\) such that \(\mathcal{E}(K)\geq C>0\) for any \(K\in\mathbb{R}^{|V|}\backslash B(\widehat{K},\delta)\), and in particular \(\mathcal{E}(K(t_{n}))\geq C>0\). Since \(\mathcal{E}(K(t))\) converges and \(\lim_{k\to\infty}K(\xi_{n_{k}})=\widehat{K}\), we have \(\mathcal{E}(+\infty)=\lim_{k\to\infty}\mathcal{E}(K(\xi_{n_{k}}))=\mathcal{E}(\widehat{K})=0\). Hence, \(\lim_{n\to\infty}\mathcal{E}(K(t_{n}))=\mathcal{E}(+\infty)=0\). This is a contradiction. Q.E.D.
**Data availability statements** Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
|
2305.19501 | Improving VLT/SPHERE without additional hardware: Comparing quasi-static
correction strategies | Direct imaging is the primary technique currently used to detect young and
warm exoplanets and understand their formation scenarios. The extreme flux
ratio between an exoplanet and its host star requires the use of coronagraphs
to attenuate the starlight and create high contrast images. However, their
performance is limited by wavefront aberrations that cause stellar photons to
leak through the coronagraph and on to the science detector preventing the
observation of fainter extrasolar companions. The VLT/SPHERE instrument takes
advantage of its efficient adaptive optics system to minimize dynamical
aberrations to improve the image contrast. In good seeing conditions, the
performance is limited by quasi-static aberrations caused by slowly varying
aberrations and manufacturing defects in the optical components. The mitigation
of these aberrations requires additional wavefront sensing and control
algorithms to enhance the contrast performance of SPHERE. Dark hole algorithms
initially developed for space-based application and recently performed on
SPHERE calibration unit have shown significant improvement in contrast. This
work presents a status update of dark hole algorithms applied on SPHERE and the
results obtained during the on-sky tests performed on February 15th 2022. | Axel Potier, Zahed Wahhaj, Raphael Galicher, Johan Mazoyer, Pierre Baudoz, Gael Chauvin, Garreth Ruane | 2023-05-31T02:18:13Z | http://arxiv.org/abs/2305.19501v1 | # Improving VLT/SPHERE without additional hardware: Comparing quasi-static correction strategies.
###### Abstract
Direct imaging is the primary technique currently used to detect young and warm exoplanets and understand their formation scenarios. The extreme flux ratio between an exoplanet and its host star requires the use of coronagraphs to attenuate the starlight and create high contrast images. However, their performance is limited by wavefront aberrations that cause stellar photons to leak through the coronagraph and on to the science detector preventing the observation of fainter extrasolar companions. The VLT/SPHERE instrument takes advantage of its efficient adaptive optics system to minimize dynamical aberrations to improve the image contrast. In good seeing conditions, the performance is limited by quasi-static aberrations caused by slowly varying aberrations and manufacturing defects in the optical components. The mitigation of these aberrations requires additional wavefront sensing and control algorithms to enhance the contrast performance of SPHERE. Dark hole algorithms initially developed for space-based application and recently performed on SPHERE calibration unit have shown significant improvement in contrast. This work presents a status update of dark hole algorithms applied on SPHERE and the results obtained during the on-sky tests performed on February 15th 2022.
high contrast imaging, coronagraphs, exoplanets

Send correspondence to Axel Potier: [email protected]
## 1 Introduction
Direct imaging of exoplanets is currently performed with large ground-based telescopes equipped with state-of-the-art coronagraph instruments[1, 2, 3]. These instruments provide diffraction-limited point spread functions thanks to their extreme adaptive optics systems (XAO), while the coronagraph aims to attenuate the starlight to reveal the faint companions and/or circumstellar disks. Their capabilities are however limited by post-XAO wavefront residuals that enable starlight leakage through the coronagraph, hence creating bright stellar speckles on the science detector. Advanced post-processing techniques like angular (ADI)[4], spectral (SDI)[5], polarimetric (PDI)[6], and reference-star (RDI)[7] differential imaging are therefore used on raw images to further enhance detection capabilities by mitigating speckle noise in the images. These techniques are particularly efficient at removing speckles originating from instabilities and wavefront errors internal to the instruments, like the non-common-path aberrations (NCPAs). But the broadly used ADI technique requires 1-2 h sequences of observations and suffers from self-subtraction of astrophysical sources at small angular separations. Mitigating static and quasi-static speckles using focal plane wavefront sensors is now an important area of research[8, 9, 10, 11]. In this work, we present the first on-sky correction of the NCPAs on SPHERE with pair-wise probing (PWP) and electric field conjugation (EFC), originally developed for high-contrast imaging in stable environments like
space. In Sec. 2, we describe the algorithms used throughout this study and how they were implemented in the instrument. In Sec. 3, we demonstrate the algorithms on the internal source to investigate the contrast limitation and apply those solutions while observing on-sky. In Sec. 4, we demonstrate the correction loop on-sky in a half DH. In Sec. 5, we use PWP alone to calibrate the static stellar speckles in post-processing. Finally in Sec. 6, we discuss the combination of the different strategies and best practices to optimize the speckle calibration in accordance with the science objectives.
## 2 Method and Algorithms
PWP and EFC algorithms have been extensively described in the literature[12, 13, 14]. Their combination in closed-loop aims to minimize the speckle intensity in a region of the science image, hence called the dark hole (DH). First, PWP estimates the focal plane electric field (E-field) through a temporal modulation of the speckle field, analogously to conventional phase diversity techniques. PWP requires a high-order deformable mirror (DM) to introduce optical path differences (OPDs) called probes that coherently interfere with the speckle field we wish to correct. In the absence of a specific DM for this purpose, the OPDs are introduced while the AO runs in closed-loop by modifying the Shack-Hartmann (SH) reference slopes, assuming the sensor remains linear. Such an assumption might be invalid in the case of a pyramid wavefront sensor planned for the upgrade of both VLT/SPHERE[15] and Gemini/GPI[16]. The probes need to be chosen such that they create a different electric field at every location of the speckle field. As explained in a previous work[17], the first probe is here the poke of one individual actuator, whose phase-shift is mostly unaffected by the coronagraph masks. The second probe is then the individual poke of a second actuator in the direct neighborhood of the first one. The second actuator is chosen depending on the correction zone, as explained in the next paragraph. The algorithm requires each probe to be pushed and pulled, with an image recorded via the science detector for each DM shape. PWP therefore requires 4 images per E-field estimation (or per iteration when used in closed-loop with EFC).
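As a concrete illustration of this estimation step, the sketch below (Python/NumPy, not the actual SPHERE pipeline code) retrieves the focal-plane E-field pixel by pixel from the probe image pairs; the probe-field model `dE` and all array names are assumptions made for the example.

```python
# Minimal sketch of pair-wise probing (PWP). `I_plus` and `I_minus` are the
# images recorded with each probe pushed and pulled (shape: n_probes x n_pix),
# and `dE` is a model of the E-field produced by each probe (complex,
# n_probes x n_pix), assumed to come from the instrument model.
import numpy as np

def pwp_estimate(I_plus, I_minus, dE):
    # For each probe k: I_plus - I_minus = 4 Re(E conj(dE_k)), i.e. a linear
    # system in (Re E, Im E) at every pixel once the probe fields are known.
    n_probes, n_pix = dE.shape
    dI = (I_plus - I_minus) / 4.0
    E = np.zeros(n_pix, dtype=complex)
    for px in range(n_pix):
        A = np.column_stack([dE[:, px].real, dE[:, px].imag])
        x, *_ = np.linalg.lstsq(A, dI[:, px], rcond=None)
        E[px] = x[0] + 1j * x[1]
    return E
```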
EFC is the E-field counterpart of phase conjugation in AO. Knowing the E-field from any focal plane wavefront sensor (e.g. either the Self-Coherent Camera[18] or PWP can be used), we aim to create destructive interference in regions of the focal plane by applying the opposite of the estimated E-field through OPDs introduced with the DM in the pupil plane. The technique is powerful when applied in half the field of view (called half DH or HDH) since it enables the correction of speckles originating from phase and amplitude aberrations as well as the residual diffraction pattern from the coronagraph masks[17]. The algorithm inverts a Jacobian that describes the effect of each actuator movement on the E-field in the science detector plane. Here, the Jacobian is model-based, computed with Fourier optics from a simplified compact model of the system. The DH regions are chosen in agreement with the set of probes and depend on the science case: two actuators spread vertically (resp. horizontally) are chosen for top and bottom (resp. left and right) DH corrections. The DH presented here ranges from 100 mas to 650 mas from the optical axis. The inversion of the Jacobian is regularized with a smooth filter (Tikhonov regularization[19]) whose knee is defined at the 400th singular value (when sorted in descending order). We use a global loop control gain of 0.2 to ensure convergence.
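A corresponding sketch of the correction step is given below, assuming a real-valued Jacobian `G` obtained by stacking the real and imaginary parts of the model response of each actuator; the Tikhonov knee and the loop gain follow the values quoted above, while the stacking convention and the names are illustrative.

```python
# Minimal sketch of EFC with Tikhonov-regularized inversion of the Jacobian.
# `G` has shape (2*n_pix, n_act) and `E_est` is the PWP estimate in the DH.
import numpy as np

def efc_command(G, E_est, knee_index=400, gain=0.2):
    b = np.concatenate([E_est.real, E_est.imag])   # stack like the Jacobian
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    lam = s[min(knee_index, len(s) - 1)]           # knee at the 400th singular value
    filt = s / (s ** 2 + lam ** 2)                 # smooth Tikhonov filter
    a = Vt.T @ (filt * (U.T @ (-b)))               # least-squares solution of G a = -b
    return gain * a                                # command, e.g. mapped to SH reference slopes
```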
Focal plane wavefront sensing is performed in the H3 band (\(\lambda_{0}=1667\) nm, \(\Delta\lambda=54\) nm) with IRDIS[20] behind an Apodized Pupil Lyot Coronagraph[21] (APLC) whose central focal plane mask obstruction has a diameter of 185 mas. On-sky experiments were performed during the second half of the night of February 15th, 2022, starting at 5 a.m. UTC. The seeing averaged \(\sim\)0.8 arcsec at 500 nm throughout the night while the atmospheric coherence time remained around 10 ms. These good conditions enabled the use of the small pinhole for spatial filtering in front of the SH WFS[22]. Images with 64 s of exposure time were all acquired on HIP 57013 (RA=11 41 19.79, \(\delta\)= -43 05 44.40, R=5.5, H=5.5).
## 3 Strategy 1: NCPA Calibration Using the Internal Source Unit
### Internal turbulence and ultimate contrast
Building on previous demonstrations using the internal source[17], here we use a longer exposure time to improve on the estimate of the contrast floor. Our updated algorithm now enables the user to modify the science and probe image exposure times on the fly, which allowed us to dig a deeper DH and better understand the SPHERE limitations. We show in Fig. 1 the composite image (10 s exposure time) after 4 individual series of half DH corrections
in 4 quadrants of the field of view. The images are normalized by the maximum of the non-coronagraphic point spread function (PSF). Unlike the expectations to reach performance down to \(\sim 10^{-8}\) as in other in-air testbeds in stabilized environments[23, 24, 25, 14], the normalized intensity is limited by a smooth axi-symmetric halo whose level is \(\sim 10^{-6}\) at 200 mas. We also show the modulated component, i.e. the coherent part of the residuals that is sensed by PWP. It demonstrates the limitation is not caused by EFC or the inability of DM to correct for aberrations since the halo is not sensed by PWP. Some residual modulated speckles remain in each image quadrant and would be mitigated with additional iterations of PWP+EFC. The unmodulated component shown in Fig. 1 represents the difference between the total intensity and the modulated component to highlight residuals that are not sensed by PWP. The unmodulated light is mostly a smooth halo whose intensity levels in the four different quadrants are shown in Fig. 2. The halo is explained as the effect of fast internal turbulence in VLT/SPHERE at a level of a few nanometers rms that has been recently discovered using the ZELDA wavefront sensor, and likely caused by a warm motor located underneath the NIR channel[26]. The limit of performance
Figure 1: Internal calibration unit data. Left: Resulting image after half DH correction successively performed in the 4 quadrants of IRDIS field of view. Center: Remaining coherent speckle (also called modulated component) in the left image as sensed by PWP. Right: Residuals not sensed by PWP (Total intensity - Modulated component). The right image highlights the internal turbulent aberrations that cannot be sensed by PWP since the phase errors evolve too fast.
Figure 2: Internal calibration unit data. Normalized intensity of the unmodulated component (mainly the halo of internal turbulence) in the four image quadrants with respect to the angular separation from the optical axis.
foreseen by the former study (from \(10^{-5}\) to \(10^{-7}\) in the HODM influence function) is here confirmed by the experiment. The difference of unmodulated intensity between the left quadrant and the three others could be explained by ghosts all oriented in the same direction during instrument commissioning.
### On-sky performance applying the internal source correction
Our next test was to apply the DM settings that were determined _a priori_ using the internal source unit of SPHERE. Indeed, these settings do not require any telescope or instrument overheads during the night and can be acquired prior to any observation. The WFS slopes were recorded during the afternoon of February 15th, 2022, successively in the four image quadrants, with a process similar to that described in Sec. 3.1. The final WFS slopes were then recorded for each DH geometry and reapplied on sky a few hours later during the night observations. In Fig. 3, we compare the IRDIS images with the initial NCPA calibration and with the bottom DH correction calculated earlier on the internal source. The presented results are processed with a Gaussian high-pass filter whose standard deviation is equal to \(0.57\lambda/D\) in order to highlight the effect of localized static or quasi-static speckles in the images by reducing the effect of the smooth atmospheric turbulence halo.
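For reference, this filtering step amounts to subtracting a Gaussian-smoothed version of the image; the conversion of \(0.57\lambda/D\) into pixels depends on the IRDIS plate scale and is left as an input in the sketch below.

```python
# Minimal sketch of the Gaussian high-pass filtering used for display.
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass(img, sigma_pix):
    # Removing the smooth halo emphasizes localized (quasi-)static speckles.
    return img - gaussian_filter(img, sigma_pix)
```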
Figure 3: On-sky data (\(\times 10^{-6}\)) with 64s of exposure time. Images before (top left) and after bottom half DH correction calculated either on the internal source (top right) or with the PWP+EFC loop closed on-sky (bottom left). The image on the bottom right is the result of coherent differential imaging after only one on-sky iteration of PWP+EFC. The halo of atmospheric residuals have been filtered in all the images.
Qualitatively, the corrected image shows a slight decrease of the static speckle intensity in the DH region. We especially correct for the speckles induced by the diffraction pattern of the APLC coronagraph associated with the large telescope spiders, which can be considered as amplitude aberrations. Figure 4 shows the RMS of the image radial profiles (calculated as the standard deviation in azimuthal rings of size \(\lambda/2D\) in the DH regions versus the angular separation) for corrections performed in the 4 image quadrants. The curves show that the best gain in performance is obtained with the top DH, where an improvement of up to a factor of 3 is attained in this region. In the other regions, performance improvements are limited to a factor of 2 because the calibration on the internal source unit leads to 1) an insensitivity to quasi-static aberrations whose spatial distribution has evolved between the calibration calculated on the internal source and the application of this calibration on-sky[26], 2) a non-common path between the internal calibration unit and the telescope pupil, including a misalignment on the coronagraph focal plane mask as well as an insensitivity to amplitude aberrations located in the telescope light path, all resulting in the creation of new speckles during the on-sky observations, and 3) a discrepancy in the correction of the APLC diffraction pattern due to the inconsistency of the entrance pupil between internal-source and on-sky observations. The first limitation could be partly mitigated by calculating the correction reference slopes right before the on-sky operations. The third limitation could be solved with the introduction of a VLT-pupil-like mask in the optical path of the internal calibration unit. The second limitation can only be resolved by closing the correction loop on sky.
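A minimal sketch of how such 1-\(\sigma\) radial profiles can be computed is shown below; the ring width of \(\lambda/2D\) follows the text, while the array names and the separation grid are illustrative assumptions.

```python
# Minimal sketch of the contrast curves: standard deviation of the normalized
# intensity in annuli of width lambda/(2D) restricted to the DH region.
# `img` is the normalized-intensity image, `r` the separation map (same
# shape, in lambda/D units) and `mask` a boolean DH mask.
import numpy as np

def radial_rms(img, r, mask, r_max=20.0, dr=0.5):
    seps, rms = [], []
    for r0 in np.arange(0.0, r_max, dr):
        ring = mask & (r >= r0) & (r < r0 + dr)
        if np.any(ring):
            seps.append(r0 + dr / 2.0)
            rms.append(np.std(img[ring]))
    return np.array(seps), np.array(rms)
```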
## 4 Strategy 2: On-sky Closed-Loop
Starting from the original SPHERE calibration, we also closed the correction loop while observing HIP 57013. We applied PWP+EFC in the bottom D-shaped DH extending from 135 mas to 650 mas from the star. The exposure time of the probe images was set to 64 s for the sake of averaging the intensity of the turbulence halo below the
Figure 4: On-sky data. Top: 1-\(\sigma\) standard deviation of the normalized intensity obtained before and after DH correction of the NCPAs in the four image quadrants versus the distance from the star. Bottom: Gain in performance with respect to the current SPHERE calibration by using the DH calibration calculated on the internal source unit.
static speckle intensity. Assuming this halo is stable in time (i.e. the turbulence statistics are in steady state over two probe acquisitions in a row), it is then removed in the PWP algorithm when the difference of each pair of diversity images is calculated. In total, each image required 80 s with overheads, for a total of \(\sim\)400 s per iteration (4 images required for PWP and 1 image to confirm loop convergence). We present the resulting processed images after 4 iterations (i.e. \(\sim\)30 min) in Fig. 3. The resulting normalized intensity RMS at each iteration is also plotted in Fig. 5, showing that the static speckle intensity is continuously minimized in the DH and demonstrating the robustness of the algorithm. A factor of \(\sim\)2 in performance is gained between 165 mas and 270 mas, while we achieve a factor of \(\sim\)3 of improvement between 300 mas and 500 mas. Only a few stellar residuals persist in the science image at small separations after 4 iterations. If this signal is coherent with the starlight, we expect these speckles to be corrected with more PWP+EFC iterations or with a better calibration of the PWP diversity amplitude.
## 5 Strategy 3: Coherent Differential Imaging
One can also take advantage of the starlight coherence to extract the incoherent light, including any faint substellar companion, from the total intensity image. Indeed, PWP can be used alone to estimate the stellar speckle E-field. The squared absolute value of this E-field is then used as a reference image to be subtracted from the total intensity image. The same technique has been used to highlight the halo of internal turbulence in Fig. 2. The processed image is then equivalently called the unmodulated component, the incoherent component, or the result of coherent differential imaging (CDI). When applying PWP+EFC, this post-processing technique comes for free since PWP is already employed at each iteration. We therefore show the result of CDI after iteration 1 of PWP+EFC in Fig. 3. This means one single EFC iteration has already been performed in the bottom DH, followed by one PWP set of acquisitions to get the residual speckle E-field that is used for CDI. The resulting
Figure 5: On-sky data. Top: 1-\(\sigma\) standard deviation of the normalized intensity obtained at each iteration of the PWP+EFC on-sky pseudo-closed loop in the bottom DH. Bottom: Gain in performance with respect to the current SPHERE calibration (iteration 0).
CDI image is cleaned of most of the stellar speckles in the regions that are well sensed by PWP (top and bottom regions of the field of view for the probes chosen here), enabling a speckle correction over almost the entire field of view. The resulting normalized intensity RMS before and after CDI in the bottom DH is also plotted in Fig. 6. It demonstrates performance equivalent to the full bottom half DH correction in Sec. 4, but after less than \(\sim 15\) min of on-sky calibration. The CDI itself improves the performance by a factor of up to 3 in the bottom region.
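Once the PWP estimate is available, the post-processing step itself is a simple subtraction; a minimal sketch with illustrative array names is given below.

```python
# Minimal sketch of coherent differential imaging (CDI): the coherent stellar
# component |E_est|^2 is subtracted from the total intensity image, leaving
# the incoherent light (companions, disks, unmodulated halo).
import numpy as np

def cdi(I_total, E_est):
    return I_total - np.abs(E_est) ** 2
```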
## 6 Discussion and Conclusion
In this work, we have presented three different observing strategies using PWP and EFC. First, minimizing the stellar speckle intensity with a half DH correction performed on the instrument internal calibration unit enables a gain of up to a factor of 3 in contrast performance. The technique is robust since it is not affected by atmospheric turbulence, and it does not overlap with science operations because it uses the AO system during the day. It is also readily adaptable (if anticipated) to any target separation, coronagraph, or spectral bandwidth. However, we showed it has a limited field of view and presented the cause of its performance limitations. The second strategy of observation is to close the loop on-sky while directly observing the target of interest. This method provides the best performance at all angular separations because it is iterative and corrects for wavefront errors along the entire beam, but it relies on good observing conditions and a stable AO system and is also limited in field of view. The modulation of the science images with the probes in PWP also prevents a 100% science duty cycle. The third strategy of observation is to use the PWP estimations to apply CDI in post-processing. This increases the field of view for science and provides encouraging performance but requires PWP to be active during science operations.
In the future, these different strategies can be combined, depending on the science case. First, deeper contrast
Figure 6: Top: 1-\(\sigma\) standard deviation of the normalized high-pass filtered total intensity after one on-sky PWP+EFC iteration (dashed) and processing of the same image with CDI (continuous), calculated in the bottom DH. Bottom: Gain in performance of CDI with respect to the total intensity image at iteration 1.
in one particular region of the image can be achieved using a combination of strategies 1 and 2. Strategy 1 would start improving the instrument performance for free and reduce the required number of iterations for strategy 2, hence increasing the time dedicated to science observations. Strategy 3 can also be applied after strategies 1 and 2 by introducing PWP on the already corrected image to perform a final clean-up of the DH and potentially observe fainter objects. It would currently require about 5 min after a few on-sky iterations of PWP+EFC for an improvement factor of up to 10 in the half DH region. Strategy 3 could also be applied right after strategy 1; the speckle calibration would then require only 5 min of on-sky observation but with potentially degraded results in the DH region. For a broader field of view and for either the study of extended objects or the blind search for planets, strategy 3 could also be applied alone at a regular cadence, while the CDI results would be used as inputs for conventional ADI, SDI, PDI or RDI post-processing techniques. These different calibration scenarios will be investigated in ongoing studies of the optimal observing strategy for VLT/SPHERE.
###### Acknowledgements.
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
|
2308.16816 | A General Equivalence Theorem for Crossover Designs under Generalized
Linear Models | With the help of Generalized Estimating Equations, we identify locally
D-optimal crossover designs for generalized linear models. We adopt the
variance of parameters of interest as the objective function, which is
minimized using constrained optimization to obtain optimal crossover designs.
In this case, the traditional general equivalence theorem could not be used
directly to check the optimality of obtained designs. In this manuscript, we
derive a corresponding general equivalence theorem for crossover designs under
generalized linear models. | Jeevan Jankar, Jie Yang, Abhyuday Mandal | 2023-08-31T15:46:48Z | http://arxiv.org/abs/2308.16816v2 | # A General Equivalence Theorem for Crossover Designs under Generalized Linear Models
###### Abstract
With the help of Generalized Estimating Equations, we identify locally \(\mathbf{D}\)-optimal crossover designs for generalized linear models. We adopt the variance of parameters of interest as the objective function, which is minimized using constrained optimization to obtain optimal crossover designs. In this case, the traditional general equivalence theorem could not be used directly to check the optimality of obtained designs. In this manuscript, we derive a corresponding general equivalence theorem for crossover designs under generalized linear models.
**Keywords:** Approximate Design, Crossover Design, \(\mathbf{D}\)-Optimality,
Generalized Estimating Equations, General Equivalence Theorem.
## 1 Introduction
Crossover designs, also known as change-over or repeated measurement designs, are widely used in industrial, medical, and agricultural research. In crossover experiments, the effect of a treatment carries over into the periods following the period of its direct application. Crossover designs have been extensively studied in the literature [3, 12, 14]. They have recently been used to address problems outside of medical and agricultural research. In recent years, many corporate offices and organizations have adopted open
office spaces over traditional cubicle office spaces. Booking.com conducted an experiment to assess the efficiency of different office layouts; the case study was first reported in [20]. In the absence of literature on optimal crossover designs for generalized linear models (GLMs), traditional uniform designs are used. Uniform designs are optimal under a linear model, but they are no longer a good choice for non-Gaussian responses.
Over the years, optimal crossover designs for normal responses have been widely studied in the literature; however, there are several real-life examples where responses are not normal and are described by GLMs. Recently, [13] provided an algorithm to search for locally \(D\)-optimal crossover designs in the case of non-normal responses, and showed that optimal designs obtained for normal responses can be quite inefficient in the case of GLMs. However, there was no guarantee that the designs obtained by their algorithm were indeed optimal. In this manuscript, we derive a general equivalence theorem specifically for crossover designs under GLMs, which can be used to verify the optimality of proposed designs. Moreover, it provides an alternative that is faster and numerically more stable than the general algorithm proposed in [13].
The General Equivalence Theorem is an important tool in optimum experimental design, which has been widely used to check the optimality of designs in terms of the Fisher information matrix [2, 8, 9, 10, 11, 17, 28]. Nevertheless, the traditional equivalence theorem does not directly apply to checking the optimality of the obtained crossover designs. The optimal crossover designs under GLMs discussed in [13] are identified using generalized estimating equations (GEEs) and are based on the variance matrix of the parameters of interest. Since the variance matrix is asymptotically connected with the inverse of the Fisher information matrix, it is natural to derive a condition that can be used to check the optimality of designs (see Remark 1 for more details).
For illustration purposes, we consider two real-life motivating examples. First, we consider an experiment conducted at Booking.com to determine the optimal office design. In the supplementary material, we discuss another motivating example, an experiment conducted to investigate the effects of various dietary starch levels on milk production. [15] discussed this dietary example along with the data set used for analysis (for more details see [14]). The design used in both these examples is a \(4\times 4\) Latin Square design with four periods and four treatments.
The paper is organized as follows. To set ideas, we describe notation and definitions for crossover designs in Section 2. In Section 3, we propose and derive two different versions of the general equivalence theorem for crossover designs. More specifically, in Section 3.1 we use the variance of all parameter estimates as the objective function, and in Section 3.2 we use the variance of the treatment effects as the objective function. We present an illustration in Section 3.3 and a real-life motivating example in Section 4.
## 2 Notation and Preliminaries
Consider a crossover trial with \(t\) treatments, \(n\) subjects, and \(p\) periods. The responses obtained from these \(n\) subjects are denoted as \(\boldsymbol{Y_{1}},\ldots,\boldsymbol{Y_{n}}\), where the response from the \(j^{th}\) subject is \(\boldsymbol{Y_{j}}=(Y_{1j},\ldots,Y_{pj})^{\prime}\). Let \(\mu_{ij}\) denote the mean of the response \(Y_{ij}\). To fix ideas, first consider the following model (see equation (4.1) in [29], and [5] for the linear model), which models the marginal mean \(\mu_{ij}\) for a crossover trial as
\[\mathrm{g}(\mu_{ij})=\eta_{ij}=\lambda+\beta_{i}+\tau_{d(i,j)}+\rho_{d(i-1,j)}, \tag{1}\]
where \(i=1,\ldots,p;\;j=1,\ldots,n\); \(\lambda\) is the overall mean, \(\beta_{i}\) represents the effect of the \(i^{th}\) period, \(\tau_{s}\) is the direct effect due to treatment \(s\) and \(\rho_{s}\) is the carryover effect due to treatment \(s\), where \(s=1,\ldots,t\) and \(g\) is the link function.
In matrix notation, under baseline constraints \(\beta_{1}=\tau_{1}=\rho_{1}=0\) we have \(\boldsymbol{\beta}=(\beta_{2},\ldots,\beta_{p})^{\prime}\),\(\boldsymbol{\tau}=(\tau_{2},\ldots,\tau_{t})^{\prime}\) and \(\boldsymbol{\rho}=(\rho_{2},\ldots,\rho_{t})^{\prime}\), which defines the parameter vector \(\boldsymbol{\theta}=(\lambda,\beta^{\prime},\tau^{\prime},\rho^{\prime})^{\prime}\). The linear predictor corresponding to the \(j^{th}\) subject, \(\boldsymbol{\eta_{j}}=(\eta_{1j},\ldots,\eta_{pj})^{\prime}\), can be written as
\[\boldsymbol{\eta_{j}}\,=\,\boldsymbol{X_{j}}\boldsymbol{\theta}.\]
The corresponding design matrix \(\boldsymbol{X_{j}}\) can be written as \(\boldsymbol{X_{j}}=[\boldsymbol{1_{p}},\boldsymbol{P_{j}},\boldsymbol{T_{j}},\boldsymbol{F_{j}}]\), where \(\boldsymbol{P_{j}}\) is \(p\times(p-1)\) such that \(\boldsymbol{P_{j}}=[\boldsymbol{0}_{(p-1)1},\boldsymbol{I_{p-1}}]^{\prime}\); \(\boldsymbol{T_{j}}\) is a \(p\times(t-1)\) matrix with its \((i,s-1)^{th}\) entry equal to 1 if subject \(j\) receives the direct effect of the treatment \(s\) (\(\geq 2\)) in the \(i^{th}\) period and zero otherwise; \(\boldsymbol{F_{j}}\) is a \(p\times(t-1)\) matrix with its \((i,s-1)^{th}\) entry equal to 1 if subject \(j\) receives the carryover effect of the treatment \(s\) (\(\geq 2\)) in the \(i^{th}\) period and zero otherwise.
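A minimal sketch of how \(\boldsymbol{X_{j}}\) can be assembled for a given treatment sequence is shown below (Python/NumPy); the function name and the 1-based coding of treatments are our own conventions.

```python
# Minimal sketch of X_j = [1_p, P_j, T_j, F_j] under the baseline constraints
# beta_1 = tau_1 = rho_1 = 0. The sequence is given as 1-based treatment
# labels, e.g. (1, 2) for the sequence "AB".
import numpy as np

def design_matrix(sequence, t):
    p = len(sequence)
    ones = np.ones((p, 1))
    P = np.vstack([np.zeros((1, p - 1)), np.eye(p - 1)])   # period effects
    T = np.zeros((p, t - 1))                                # direct treatment effects
    F = np.zeros((p, t - 1))                                # carryover effects
    for i, trt in enumerate(sequence):
        if trt >= 2:
            T[i, trt - 2] = 1.0
        if i >= 1 and sequence[i - 1] >= 2:
            F[i, sequence[i - 1] - 2] = 1.0
    return np.hstack([ones, P, T, F])
```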
If the number of subjects \(n\) and the number of periods \(p\) are fixed, then the goal is to determine the number of subjects assigned to different treatment sequences through some optimality criterion. As the number of periods \(p\) is fixed, each treatment sequence will be of length \(p\) and a typical sequence can be written as \(\omega=(t_{1},\ldots,t_{p})^{\prime}\) where \(t_{i}\in\{1,\ldots,t\}\). Now let \(\boldsymbol{\Omega}\) be the set of all such sequences and \(n_{\omega}\) denote the number of subjects assigned to sequence \(\omega\). Then the total number of subjects \(n\) can be written as \(n=\Sigma_{\omega\in\boldsymbol{\Omega}}n_{\omega},n_{\omega}\geq 0\). A crossover design \(\xi\) in approximate theory is specified by the set \(\{p_{\omega},\omega\in\boldsymbol{\Omega}\}\), where \(p_{\omega}=n_{\omega}/n\) is the proportion of subjects assigned to treatment sequence \(\omega\). As denoted by Silvey [25], such a crossover design \(\xi\) can be written as follows:
\[\xi=\left\{\begin{array}{ccc}\omega_{1}&\omega_{2}&\ldots&\omega_{k}\\ p_{1}&p_{2}&\ldots&p_{k}\end{array}\right\}, \tag{2}\]
where \(k\) is the number of treatment sequences involved, \(\omega_{i}\) is the \(i\)th treatment sequence and \(p_{i}\) is the corresponding proportion of units allocated to that support point, such that \(\sum_{i=1}^{k}p_{i}=1\), for \(i=1,\ldots,k\). Note [13] observed
that, in the case of non-uniform allocations, only a few sequences have non-zero proportions. Hence in our illustrations, we consider \(\Omega\) to be the collection of only those sequences that have non-zero allocations.
Generalized estimating equations are quasi-likelihood equations that allow us to obtain quasi-likelihood estimators of the model parameters [21, 31]. In crossover trials, it is typical to assume that the observations from the same subject are correlated while the observations from different subjects are independent [14]. This dependency among repeated observations from the same subject can be modeled by the "working correlation" matrix \(\mathbf{C_{\alpha}}\), which is a function of the correlation coefficient \(\alpha\). If \(\mathbf{C_{\alpha}}\) is the true correlation matrix of \(\mathbf{Y_{j}}\), then from the definition of covariance we can write
\[Cov(\mathbf{Y_{j}})\,=\,\mathbf{D_{j}}^{1/2}\mathbf{C_{\alpha}}\mathbf{D_{j}}^{1/2},\]
where \(\mathbf{D_{j}}=diag\Big{(}Var(Y_{1j}),\ldots,Var(Y_{pj})\Big{)}\). Let us denote \(Cov(\mathbf{Y_{j}})\) by \(\mathbf{W_{j}}\). In [31] (equation (3.1)) it was shown that for repeated measurement models, the generalized estimating equations (GEE) are defined to be
\[\sum_{j=1}^{n}\frac{\partial\mathbf{\mu_{j}}^{\prime}}{\partial\mathbf{\theta}}\mathbf{W_ {j}}^{-1}\left(\mathbf{Y_{j}}-\mathbf{\mu_{j}}\right)=0,\]
where \(\mathbf{\mu_{j}}=\left(\mu_{1j},\ldots,\mu_{pj}\right)^{\prime}\) and the asymptotic variance for the GEE estimator \(\mathbf{\hat{\theta}}\) (see [31], equation (3.2)) is
\[\text{Var}(\mathbf{\hat{\theta}})\,=\,\left[\sum_{j=1}^{k}np_{j}\frac{\partial\bm {\mu_{j}}^{\prime}}{\partial\mathbf{\theta}}\mathbf{W_{j}}^{-1}\frac{\partial\mathbf{\mu_ {j}}}{\partial\mathbf{\theta}}\right]^{-1}=\mathbf{M}^{-1}, \tag{3}\]
where \(\frac{\partial\mathbf{\mu_{j}}^{\prime}}{\partial\mathbf{\theta}}=\mathbf{X_{j}}^{\prime }\text{diag}\left\{(g^{-1})^{\prime}(\eta_{1j}),\ldots,(g^{-1})^{\prime}(\eta _{pj})\right\}\) and \(j\) stands for the \(j^{th}\) treatment sequence. In Section 3, we will define \(\mathbf{M}\) explicitly for crossover designs. Later, we consider the situation where direct treatment effects are studied specifically.
Remark 1: Note that the subject effect term is not included in model (1). In this work, a GLM is used to describe the response and hence the Fisher information matrix depends on the model parameters. Since we are considering the local optimality approach, an educated guess of the subject effect, if it were included, would be needed. However, from a design point of view, the subject effect has to be treated as random. In model (1), the link function is used to model only the mean response and hence we are free to choose a variance-covariance matrix. So, instead of including a random subject effect in the model, we choose a working variance-covariance matrix through GEE to capture the effect of a subject (see Appendix A.3 in [32], [2], [13] and references therein).
_Remark 2_: The general equivalence theorem describes the optimality criteria in terms of the Fisher information matrix. The information matrix for optimal crossover designs under GLMs is defined as the inverse of the variance-covariance matrix of parameters of interest through GEE, which is easier to obtain and works similarly to the Fisher information matrix. Here we assume that the responses from a particular subject are mutually correlated, while the responses from different subjects are uncorrelated. According to [13], the obtained optimal designs are robust to the choices of such working correlation matrices.
As mentioned in [2], the general equivalence theorem can be viewed as a consequence of the result that the derivative of a smooth function over an unconstrained region is zero at its minimum. In this manuscript, we derive the general equivalence theorem for crossover designs by calculating the directional derivative of an objective function \(\Phi(\xi)\) expressed in terms of \(\mathbf{M}(\xi)\). Consider \(\bar{\xi_{i}}\) to be the design that puts unit mass at the point \(x_{i}\), i.e., the design supported only at \(x_{i}\), where \(i=1,2,\ldots,k\). Let \(\xi_{i}^{\prime}=(1-h)\xi+h\bar{\xi_{i}}\). Then the derivative of \(\Phi(\xi)\) in the direction \(\bar{\xi_{i}}\) or \(x_{i}\) in case of \(D\)-optimal criterion is
\[\phi(x_{i},\xi)=\lim_{h\to 0^{+}}\frac{1}{h}[\Phi(\xi_{i}^{\prime})-\Phi(\xi)]=-\lim_{h\to 0^{+}}\frac{1}{h}[\ln det(\mathbf{M}(\xi_{i}^{\prime}))-\ln det(\mathbf{M}(\xi))],\]
and \(\xi\) is \(D\)-optimal if and only if \(\min_{i}\phi(x_{i},\xi)=0\) and \(\phi(x_{i},\xi)=0\) whenever \(p_{\omega_{i}}>0\), i.e., the minimum occurs at the support points of the design.
In the case of crossover designs and estimates using generalized estimating equations, a different approach compared to the one mentioned above is needed as the design points are finite and pre-specified for crossover designs. We use the technique used in the supplement materials of [30]. Instead of using \(\xi_{i}^{\prime}=(1-h)\xi+h\bar{\xi_{i}}=\xi+h(\bar{\xi_{i}}-\xi)\), they used \(\mathbf{p_{r}}+u\mathbf{\delta_{i}^{(r)}}\), where \(\mathbf{p_{r}}\) and \(\mathbf{\delta_{i}^{(r)}}\) are defined below. Therefore, the directional derivative \(\phi(u,\mathbf{p_{r}})\) of the objective function is equal to \(\left.\frac{\partial\Phi(\mathbf{p_{r}}+u\mathbf{\delta_{i}^{(r)}})}{\partial u}\right| _{u=0}.\)
Here is the outline of the general equivalence theorem in the case of crossover designs. Note that \(0\leq p_{i}<1\) for \(i=1,\ldots,k\), and since \(\sum_{i=1}^{k}p_{i}=1\) we may assume without any loss of generality that \(p_{k}>0\). Define \(\mathbf{p_{r}}=(p_{1},\ldots,p_{k-1})^{\prime}\), and \(\Phi(\mathbf{p_{r}})=-\ln det(\mathbf{M}(p_{1},\ldots,p_{k-1},1-\sum_{i=1}^{k-1}p_{i}))\). Let \(\mathbf{\delta_{i}^{(r)}}=(-p_{1},\ldots,-p_{i-1},1-p_{i},-p_{i+1},\ldots,-p_{k-1 })^{\prime}\) for \(i=1,\ldots,k-1\). \(\mathbf{\delta_{i}^{(r)}}\) are defined in such a way that the determinant \(|(\mathbf{\delta_{1}^{(r)}},\ldots,\mathbf{\delta_{k-1}^{(r)}})|=p_{k}\neq 0\). Hence, \(\mathbf{\delta_{1}^{(r)}},\ldots,\mathbf{\delta_{k-1}^{(r)}}\) are linearly independent and thus can serve as the new basis of
\[\mathbf{S_{r}}=\{(p_{1},\ldots,p_{k-1})^{\prime}|\sum_{i=1}^{k-1}p_{i}<1,\text{and }p_{i}\geq 0,i=1,\ldots,k-1\}.\]
Note that negative \(\ln det\) is a convex function on a set of positive definite matrices. Hence, \(\boldsymbol{p_{r}}\) minimizes \(\Phi(\boldsymbol{p_{r}})\) if and only if along each direction \(\boldsymbol{\delta_{i}^{r}}\),
\[\left.\frac{\partial\Phi(\boldsymbol{p_{r}}+u\boldsymbol{\delta_{i}^{(r)}})}{\partial u}\right|_{u=0}\begin{cases}=0&\text{ if }p_{i}>0\\ \geq 0&\text{ if }p_{i}=0\end{cases}\]
## 3 Equivalence Theorems for Crossover Designs
As defined earlier, \(\boldsymbol{C_{\alpha}}\) is the "working correlation" matrix and hence is a positive definite and symmetric. So, there exists a square matrix \(\boldsymbol{R}\) such that \(\boldsymbol{C_{\alpha}}^{-1}=\boldsymbol{R}^{T}\boldsymbol{R}\). Then the inverse of the variance of the parameter estimates through GEEs is as follows:
\[\boldsymbol{M} = \sum_{j=1}^{k}np_{j}\frac{\partial\boldsymbol{\mu_{j}}^{\prime}} {\partial\boldsymbol{\theta}}\boldsymbol{W_{j}}^{-1}\frac{\partial\boldsymbol {\mu_{j}}}{\partial\boldsymbol{\theta}}=\sum_{j=1}^{k}np_{j}\boldsymbol{X_{j} }^{T}\boldsymbol{G_{j}}\boldsymbol{D_{j}}^{-\frac{1}{2}}\boldsymbol{C_{ \alpha}}^{-1}\boldsymbol{D_{j}}^{-\frac{1}{2}}\boldsymbol{G_{j}}\boldsymbol{ X_{j}} \tag{4}\]
where \(\boldsymbol{G_{j}}=\operatorname{diag}\left\{(g^{-1})^{\prime}(\eta_{1j}),\ldots,(g^{-1})^{\prime}(\eta_{pj})\right\}\). Equation (4) can be further simplified as
\[\boldsymbol{M}=\sum_{j=1}^{k}np_{j}(\boldsymbol{X_{j}}^{*})^{T}(\boldsymbol{X_ {j}}^{*}),\]
where \(\boldsymbol{X_{j}}^{*}=\boldsymbol{R}\boldsymbol{D_{j}}^{-\frac{1}{2}} \boldsymbol{G_{j}}\boldsymbol{X_{j}}\).
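For concreteness, the following sketch assembles \(\boldsymbol{M}\) for a Poisson response with log link and an AR(1) working correlation (the setting of the real example in Section 4); other links or variance functions only require changing the `inv_link`, `dmu_deta` and `var_fun` arguments, and all names are our own.

```python
# Minimal sketch of M = sum_j n p_j X_j*' X_j* with X_j* = R D_j^{-1/2} G_j X_j,
# written for a Poisson/log-link response, so (g^{-1})'(eta) = exp(eta) and
# Var(Y) = mu, with an AR(1) working correlation.
import numpy as np

def ar1_corr(p, alpha):
    # AR(1) working correlation matrix C_alpha with entries alpha^{|i-i'|}.
    idx = np.arange(p)
    return alpha ** np.abs(idx[:, None] - idx[None, :])

def info_matrix(X_list, props, theta, alpha, n=1,
                inv_link=np.exp, dmu_deta=np.exp, var_fun=lambda mu: mu):
    p = X_list[0].shape[0]
    C_inv = np.linalg.inv(ar1_corr(p, alpha))
    R = np.linalg.cholesky(C_inv).T          # any square root with R'R = C^{-1}
    M = np.zeros((X_list[0].shape[1],) * 2)
    for X, pj in zip(X_list, props):
        eta = X @ theta
        mu = inv_link(eta)
        G = np.diag(dmu_deta(eta))           # diag{(g^{-1})'(eta_ij)}
        D_inv_sqrt = np.diag(var_fun(mu) ** -0.5)
        Xs = R @ D_inv_sqrt @ G @ X          # X_j*
        M += n * pj * Xs.T @ Xs
    return M
```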
### Equivalence Theorem when Objective Function is Variance of Parameter Estimates
In this section, we present the equivalence theorem for crossover designs when the objective function is the determinant of the variance of the parameter estimates. We also discuss the special case of the theorem where only two treatment sequences are involved in the design.
**Theorem 1**: _(General Equivalence Theorem for Crossover Design when the objective function is \(|\boldsymbol{Var(\hat{\boldsymbol{\theta}})}|\)):_ _Consider the design \(\xi\) with \(k\) treatment sequences as defined in equation (2). Then,_
_(a) The set of optimal designs is convex._
_(b) The design \(\xi\) is \(D\)-optimal if and only if_
\[\text{trace}\left(\boldsymbol{X_{i}}^{*}\boldsymbol{M}(\xi)^{-1}\boldsymbol{X_{i}}^{*T}\right)\begin{cases}=m&\text{ if }p_{i}>0\\ \leq m&\text{ if }p_{i}=0,\end{cases}\]
_for each \(p_{i}\in[0,1]\), where \(p_{i}\) is the allocation corresponding to point \(\omega_{i}\) of design \(\xi\) for all \(i=1,2,\ldots,k\), and \(m\) is the number of parameters in \(\boldsymbol{\theta}\)._
Proof of Theorem 1:
Let \(k\) be the number of treatment sequences involved in the experiment and \(\xi\) be any design, then \(\Phi(\mathbf{M}(\xi))=-\ln det(\mathbf{M}(\xi))\).
Proof of (a):
Let \(\xi_{1}^{*}\) and \(\xi_{2}^{*}\) be optimal designs i.e.,
\[\Phi[\mathbf{M}(\xi_{1}^{*})]=\Phi[\mathbf{M}(\xi_{2}^{*})]= \min_{\xi}\Phi[\mathbf{M}(\xi)]\]
and let \(\xi^{*}=(1-\gamma)\xi_{1}^{*}+\gamma\xi_{2}^{*}\), for \(0\leq\gamma\leq 1\). \(\Phi[\mathbf{M}(\xi)]=-\ln det(\mathbf{M}(\xi))\) is convex on set of positive definite matrices [6]. Therefore,
\[\Phi[\mathbf{M}(\xi^{*})]\leq(1-\gamma)\Phi[\mathbf{M}(\xi_ {1}^{*})]+\gamma\Phi[\mathbf{M}(\xi_{2}^{*})]=\min_{\xi}\Phi[\mathbf{M}(\xi)],\]
which proves the optimality of \(\xi^{*}\).
Proof of (b):
We have \(\mathbf{p_{r}}=(p_{1},p_{2},\ldots,p_{k-1})^{\prime}\) and \(\mathbf{\delta_{1}^{(r)}}=(1-p_{1},-p_{2},\ldots,-p_{k-1})^{\prime}\),
\(\mathbf{\delta_{2}^{(r)}}=(-p_{1},1-p_{2},\ldots,-p_{k-1})^{{}^{\prime }},\ldots,\mathbf{\delta_{k-1}^{(r)}}=(-p_{1},-p_{2},\ldots,1-p_{k-1} )^{\prime}\).
Hence, \(\mathbf{p_{r}}+u\mathbf{\delta_{1}^{(r)}}=(p_{1}+u(1-p_{1}), (1-u)p_{2},\ldots,(1-u)p_{k-1})^{\prime}\),
\(\mathbf{p_{r}}+u\mathbf{\delta_{2}^{(r)}}=((1-u)p_{1},p_{2}+ u(1-p_{2}),\ldots,(1-u)p_{k-1})^{\prime}\),...,
\(\mathbf{p_{r}}+u\mathbf{\delta_{k-1}^{(r)}}=((1-u)p_{1},(1-u )p_{2},\ldots,p_{k-1}+u(1-p_{k-1}))^{\prime}\).
The determinant of \((\mathbf{\delta_{1}^{(r)}},\cdots,\mathbf{\delta_{k-1}^{(r)} })\) is equal to \(1-(p_{1}+p_{2}+\cdots+p_{k-1})=p_{k}\). Then for design with \(k\) treatment sequences we can write \(M\) as,
\[\mathbf{M}(\mathbf{p_{r}})=\sum_{j=1}^{k}np_{j}(\mathbf{X_{j}}^{*})^{T}(\mathbf{X_{j}}^{*})=np_{1}(\mathbf{X_{1}}^{*})^{T}(\mathbf{X_{1}}^{*})+np_{2}(\mathbf{X_{2}}^{*})^{T}(\mathbf{X_{2}}^{*})+\cdots\] \[+np_{k-1}(\mathbf{X_{k-1}}^{*})^{T}(\mathbf{X_{k-1}}^{*})+n\left(1-(p_{1}+p_{2}+\cdots+p_{k-1})\right)(\mathbf{X_{k}}^{*})^{T}(\mathbf{X_{k}}^{*})\]
For illustration purposes, consider the direction \(\mathbf{\delta_{1}^{(r)}}\); calculations for the other directions can be done similarly:
\[\Phi(\mathbf{p_{r}}+u\mathbf{\delta_{1}^{(r)}}) =-\ln det\bigg{[}\mathbf{M}\left(\{p_{1}+u(1-p_{1}),(1-u)p_{2}, \ldots,(1-u)p_{k-1}\}^{\prime}\right)\bigg{]}\] \[=-\ln det\bigg{[}n\left\{p_{1}+u(1-p_{1})\right\}(\mathbf{X_{1}}^{*})^{T}(\mathbf{X_{1}}^{*})\] \[+n\left\{(1-u)p_{2}\right\}(\mathbf{X_{2}}^{*})^{T}( \mathbf{X_{2}}^{*})+\cdots+n\left\{(1-u)p_{k-1}\right\}(\mathbf{X_{k-1}}^{*})^{T}(\mathbf{X_{k-1}}^{*})\] \[+n(1-u)\left\{1-(p_{1}+p_{2}+\cdots+p_{k-1})\right\}(\mathbf{X_{k}}^{*})^{T}(\mathbf{X_{k}}^{*})\bigg{]}\]
\[=-m\ln n-\ln det[\mathbf{M}(u,\mathbf{p_{r}})]=-m\ln n+\Phi^{(r)}(u),\]
where \(\mathbf{M}(u,\mathbf{p_{r}})=\frac{\mathbf{M}(\mathbf{p_{r}}+u\mathbf{\delta_{1}^{(r)}})}{n}\), and \(\Phi^{(r)}(u)=-\ln det[\mathbf{M}(u,\mathbf{p_{r}})]\).
The directional derivative of the above objective function along one specific direction for a design with \(k\) treatment sequences can be calculated as follows:
\[\phi(u,\mathbf{p_{r}})=\frac{\partial\Phi(\mathbf{p_{r}}+u\mathbf{\delta_{1}^{(r)}})}{ \partial u}=\lim_{h\to 0}\frac{1}{h}\left[\Phi^{(r)}(u+h)-\Phi^{(r)}(u)\right]\]
\[=-\lim_{h\to 0}\frac{1}{h}\biggl{\{}\ln det\left[\mathbf{M}(u+h,\mathbf{p_{r}}) \right]-\ln det\left[\mathbf{M}(u,\mathbf{p_{r}})\right]\biggr{\}}\]
\[=-\lim_{h\to 0}\frac{1}{h}\biggl{\{}\ln det\biggl{[}\mathbf{M}(u,\mathbf{p_{r}})+h(1- p_{1})\mathbf{X_{1}}^{*T}\mathbf{X_{1}}^{*}-hp_{2}\mathbf{X_{2}}^{*T}\mathbf{X_{2}}^{*}-\cdots\]
\[-hp_{k-1}\mathbf{X_{k-1}}^{*T}\mathbf{X_{k-1}}^{*}-h\left(1-(p_{1}\cdots+p_{k-1}) \right)\mathbf{X_{k}}^{*T}\mathbf{X_{k}}^{*}\biggr{]}det\mathbf{M}(u,\mathbf{p_{r}})^{-1} \biggr{\}}\]
\[=-\lim_{h\to 0}\frac{1}{h}\biggl{\{}\ln det\biggl{[}\mathbf{M}(u,\mathbf{p_{r}}) \mathbf{M}(u,\mathbf{p_{r}})^{-1}+h\left\{\mathbf{X_{1}}^{*T}\mathbf{X_{1}}^{*}-\mathbf{M}(\mathbf{p_{ r}})\right\}\mathbf{M}(u,\mathbf{p_{r}})^{-1}\biggr{]}\biggr{\}}\]
Using the approximation of determinant \(\det(\mathbf{I}+h\mathbf{A})=1+h\mbox{trace}(\mathbf{A})+\mathcal{O}(h^{2})\)[4] we get,
\[=-\lim_{h\to 0}\frac{1}{h}\biggl{\{}\ln\left(1+h\mbox{trace}\left[\left\{\bm {X_{1}}^{*T}\mathbf{X_{1}}^{*}-\mathbf{M}(\mathbf{p_{r}})\right\}\mathbf{M}(u,\mathbf{p_{r}})^{-1 }\right]+\mathcal{O}(h^{2})\right)\biggr{\}}\]
And using \(\ln(1+t)=t+\mathcal{O}(t^{2})\) we get,
\[=-\lim_{h\to 0}\frac{1}{h}\biggl{\{}h\mbox{trace}\left[(\mathbf{X_{1}}^{ *T}\mathbf{X_{1}}^{*}-\mathbf{M}(\mathbf{p_{r}}))\mathbf{M}(u,\mathbf{p_{r}})^{-1}\right]+ \mathcal{O}(h^{2})\biggr{\}}\] \[=-\mbox{trace}\biggl{[}(\mathbf{X_{1}}^{*T}\mathbf{X_{1}}^{*}-\mathbf{M}(\mathbf{ p_{r}}))\mathbf{M}(u,\mathbf{p_{r}})^{-1}\biggr{]}\] \[=\mbox{trace}\left(\mathbf{M}(\mathbf{p_{r}})\mathbf{M}(u,\mathbf{p_{r}})^{-1} \right)-\mbox{trace}\left(\mathbf{X_{1}}^{*}\mathbf{M}(u,\mathbf{p_{r}})^{-1}\mathbf{X_{1}}^{ *T}\right)\] \[\frac{\partial\Phi(\mathbf{p_{r}}+u\mathbf{\delta_{1}^{(r)}})}{\partial u }\Bigg{|}_{u=0}=m-\mbox{trace}\left(\mathbf{X_{1}}^{*}\mathbf{M}(\mathbf{p_{r}})^{-1}\mathbf{X_ {1}}^{*T}\right) \tag{5}\]
The proof follows by equating the above expression in equation (5) to zero.
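In practice, the condition of Theorem 1 can be checked numerically at a candidate design as sketched below; \(\boldsymbol{M}\) is taken here as the per-subject matrix (i.e., \(n=1\) in the earlier sketch), the matrices \(\boldsymbol{X_{i}}^{*}\) are the transformed design matrices used to build \(\boldsymbol{M}\), and the function and variable names are illustrative.

```python
# Minimal sketch of the optimality check in Theorem 1: at a candidate design,
# trace(X_i* M^{-1} X_i*') should equal m for every sequence with positive
# weight and be <= m otherwise.
import numpy as np

def check_theorem1(Xstar_list, props, M, tol=1e-6):
    m = M.shape[0]
    M_inv = np.linalg.inv(M)
    for Xs, pi in zip(Xstar_list, props):
        val = np.trace(Xs @ M_inv @ Xs.T)
        ok = abs(val - m) < tol if pi > 0 else val <= m + tol
        print(f"p = {pi:.4f}: trace = {val:.4f} (target {m}) -> {ok}")
```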
### Equivalence Theorem when Objective Function is Variance of Treatment Effect Estimates
As the main interest usually lies in estimating the direct treatment effect contrasts, instead of working with the full variance-covariance matrix of parameters estimate, in this section, we concentrate only on the variance of the
estimator of treatment effects \(\mathrm{Var}(\hat{\mathbf{\tau}})\) given as
\[\mathrm{Var}(\hat{\mathbf{\tau}})\,=\,\mathbf{H}\mathrm{Var}(\hat{\mathbf{\theta}})\mathbf{H}^{ \prime}, \tag{6}\]
where \(\mathbf{H}\) is a \((t-1)\times m\) matrix given by \([\mathbf{0}_{(t-1)1},\mathbf{0}_{(t-1)(p-1)},\mathbf{I}_{t-1},\mathbf{0}_{(t-1)(t-1)}]\) and \(m=p+2t-2\) is the total number of parameters in \(\mathbf{\theta}\). Below, we present the equivalence theorem for crossover designs when the objective function is the determinant of the variance of the treatment effect estimates, i.e., the determinant of the dispersion matrix.
**Lemma 1**: _Consider function \(f:\mathbb{R}_{>0}^{n}\rightarrow\mathbb{R}_{>0}\), such that \(f(\mathbf{x})=\frac{1}{\prod_{i=1}^{n}x_{i}}\) where \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})^{\prime}\in\mathbb{R}_{>0}^{n}\). Then \(f(\mathbf{x})\) is a strictly convex function._
Proof of Lemma 1:
Let \(H\) be the Hessian matrix, i.e., the matrix of second-order partial derivatives.
Then \(H=f(x)(D+qq^{\prime})\), where \(D\) is the diagonal matrix with elements \(1/(x_{1})^{2},\ldots,1/(x_{n})^{2}\) and \(q\) is the column vector with elements \(1/(x_{1}),\ldots,1/(x_{n})\).
The lemma follows as \(H\) is positive definite. An alternative proof is provided in the supplementary material.
**Theorem 2**: _General Equivalence Theorem for Crossover Design when objective function is \(|\mathbf{Var(\hat{\mathbf{\tau}})}|\): Consider the design \(\xi\) with \(k\) treatment sequences as defined in equation (2). Then,_
_(a) The set of optimal designs is convex._
_(b) The design \(\xi\) is \(D\)-optimal if and only if_
\[\text{trace}\left\{\boldsymbol{A}(\boldsymbol{X_{i}}^{*})^{T}(\boldsymbol{X_{i}}^{*})\right\}\begin{cases}=t-1&\text{ if }p_{i}>0\\ \leq t-1&\text{ if }p_{i}=0\end{cases}\]
_for each \(p_{i}\in[0,1]\), where \(\mathbf{A}=\mathbf{M}^{-1}\mathbf{H}^{\prime}\left(\mathbf{H}\mathbf{M}^{-1}\mathbf{H}^{\prime}\right) ^{-1}\mathbf{H}\mathbf{M}^{-1}\), \(p_{i}\) is the allocation corresponding to point \(\omega_{i}\) of design \(\xi\) for all \(i=1,2,\ldots,k\), and \(t\) is number of treatments._
Proof of Theorem 2:
Let \(k\) be the number of treatment sequences involved in the experiment and \(\xi\) be any design, then \(\Phi(\mathbf{M}(\xi))=\ln det(\mathbf{H}\mathbf{M}(\xi)^{-1}\mathbf{H}^{\prime})\).
Proof of (a):
Let \(\xi_{1}^{*}\) and \(\xi_{2}^{*}\) be optimal designs i.e.,
\[\Phi[\mathbf{M}(\xi_{1}^{*})]=\Phi[\mathbf{M}(\xi_{2}^{*})]=\min_{\xi}\Phi[\mathbf{M}(\xi )]\]
and let \(\xi^{*}=(1-\gamma)\xi_{1}^{*}+\gamma\xi_{2}^{*}\), for \(0\leq\gamma\leq 1\).
Since we are using the \(D\)-optimality criterion, it suffices to prove the following inequality (7) to establish the optimality of \(\xi^{*}\).
\[|\mathbf{HM}(\xi^{*})^{-1}\mathbf{H}^{\prime}|\leq(1-\gamma)|\mathbf{HM}(\xi_{1}^{*})^{-1} \mathbf{H}^{\prime}|+\gamma|\mathbf{HM}(\xi_{2}^{*})^{-1}\mathbf{H}^{\prime}|. \tag{7}\]
Since both \(\mathbf{M}(\xi_{1}^{*})\) and \(\mathbf{M}(\xi_{2}^{*})\) are positive definite, we can find a non-singular matrix \(\mathbf{O}\) such that \(\mathbf{M}(\xi_{1}^{*})=\mathbf{OO}^{T}\) and \(\mathbf{M}(\xi_{2}^{*})=\mathbf{O\Lambda}\mathbf{O}^{T}\), where \(\mathbf{\Lambda}=\text{diag}\{\lambda_{1},\ldots,\lambda_{m}\}\) is an \(m\times m\) diagonal matrix (see page 41 of [22]). In this situation, \(\mathbf{M}(\xi^{*})=\mathbf{O}((1-\gamma)\mathbf{I}+\gamma\mathbf{\Lambda})\mathbf{O}^{T}\). Then (7) is equivalent to
\[|\mathbf{G}((1-\gamma)\mathbf{I}+\gamma\mathbf{\Lambda})^{-1}\mathbf{G}^{T}|\leq(1-\gamma)|\bm {G}\mathbf{G}^{T}|+\gamma|\mathbf{G\Lambda}^{-1}\mathbf{G}^{T}|, \tag{8}\]
where \(\mathbf{G}=\mathbf{H}(\mathbf{O}^{T})^{-1}\). According to Theorem 1.1.2 in [9],
\[|\mathbf{G}((1-\gamma)\mathbf{I}+\gamma\mathbf{\Lambda})^{-1}\mathbf{G}^{T}|=\sum_{1\leq i_{1} <\cdots<i_{q}\leq m}|\mathbf{G}^{T}[i_{1},\ldots,i_{q}]|^{2}\prod_{l=1}^{q}\frac{ 1}{(1-\gamma)+\gamma\lambda_{i_{l}}},\]
where \(q=t-1\) is the number of rows of \(\mathbf{G}\) and \(\mathbf{G}^{T}[i_{1},\ldots,i_{q}]\) is the \(q\times q\) sub-matrix of \(\mathbf{G}^{T}\) consisting of the \(i_{1},\ldots,i_{q}\) rows of \(\mathbf{G}^{T}\). Similarly,
\[(1-\gamma)|\mathbf{G}\mathbf{G}^{T}|+\gamma|\mathbf{G\Lambda}^{-1}\mathbf{G}^{T}|=\sum_{1\leq i _{1}<\cdots<i_{q}\leq m}|\mathbf{G}^{T}[i_{1},\ldots,i_{q}]|^{2}\left(1-\gamma+ \gamma\prod_{l=1}^{q}\frac{1}{\lambda_{i_{l}}}\right).\]
Then (8) is true if
\[\prod_{l=1}^{q}\frac{1}{(1-\gamma)+\gamma\lambda_{i_{l}}}\leq 1-\gamma+\gamma \prod_{l=1}^{q}\frac{1}{\lambda_{i_{l}}}. \tag{9}\]
Since \(f(\mathbf{x})=\frac{1}{\prod_{l=1}^{q}x_{l}}\) is a convex function (from Lemma 1), we have \(f\left((1-\gamma)\mathbf{1}+\gamma\mathbf{\lambda}\right)\leq(1-\gamma)f(\mathbf{1})+\gamma f(\mathbf{\lambda})\), where \(\mathbf{\lambda}=(\lambda_{i_{1}},\cdots,\lambda_{i_{q}})\), and hence the result (9) follows.
Proof of (b):
\[\mathbf{M}(\mathbf{p_{r}}) =np_{1}(\mathbf{X_{1}}^{*})^{T}(\mathbf{X_{1}}^{*})+np_{2}(\mathbf{X_{2}}^{*})^{T}(\mathbf{X_{2}}^{*})+\cdots+np_{k-1}(\mathbf{X_{k-1}}^{*})^{T}(\mathbf{X_{k-1}}^{*})\] \[+n\left(1-(p_{1}+p_{2}+\cdots+p_{k-1})\right)(\mathbf{X_{k}}^{*})^{T}(\mathbf{X_{k}}^{*}).\]
\[\Phi(\mathbf{p_{r}}+u\mathbf{\delta_{1}^{(r)}}) =\Phi\left(\{p_{1}+u(1-p_{1}),(1-u)p_{2},\ldots,(1-u)p_{k-1}\}^{ \prime}\right)\] \[=\ln det\bigg{[}\mathbf{H}\bigg{\{}\mathbf{M}\left(\{p_{1}+u(1-p_{1}),(1- u)p_{2},\ldots,(1-u)p_{k-1}\}^{\prime}\right)\bigg{\}}^{-1}\mathbf{H}^{\prime}\bigg{]}\] \[=-(t-1)\ln n+\ln det\bigg{[}\mathbf{H}\bigg{\{}\{p_{1}+u(1-p_{1})\} \,(\mathbf{X_{1}}^{*})^{T}(\mathbf{X_{1}}^{*})\]
\[+\left\{(1-u)p_{2}\right\}({\boldsymbol{X_{2}}}^{*})^{T}({\boldsymbol{X_{2}}}^{*}) +\cdots+\left\{(1-u)p_{k-1}\right\}({\boldsymbol{X_{k-1}}}^{*})^{T}({ \boldsymbol{X_{k-1}}}^{*})\]
\[+\left(1-u\right)\left\{1-(p_{1}+p_{2}+\cdots+p_{k-1})\right\}({\boldsymbol{X_{k}}}^{*})^{T}({\boldsymbol{X_{k}}}^{*})\bigg{\}}^{-1}{\boldsymbol{H}}^{\prime}\bigg{]}\]
\[=-(t-1)\ln n+\ln det\left[{\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{ \boldsymbol{H}}^{\prime}\right]=-(t-1)\ln n+\Phi^{(r)}(u),\]
where now \(\Phi^{(r)}(u)=\ln det\left[{\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{ \boldsymbol{H}}^{\prime}\right]\).
Consider direction \({\boldsymbol{\delta_{1}^{(r)}}}\), then the directional derivative of the above objective function for a design with \(k\) treatment sequences can be calculated as follows:
\[\phi(u,{\boldsymbol{p_{r}}})=\tfrac{\partial\Phi({\boldsymbol{p_{r}}}+u{ \boldsymbol{\delta_{1}^{(r)}}})}{\partial u}=\lim_{h\to 0}\tfrac{1}{h}\left[ \Phi^{(r)}(u+h)-\Phi^{(r)}(u)\right]\]
\[=\lim_{h\to 0}\frac{1}{h}\bigg{\{}\ln det\left[{\boldsymbol{HM}}(u+h,{ \boldsymbol{p_{r}}})^{-1}{\boldsymbol{H}}^{\prime}\right]-\ln det\left[{ \boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{\boldsymbol{H}}^{\prime}\right] \bigg{\}}\]
\[=\lim_{h\to 0}\frac{1}{h}\bigg{\{}\ln det\left[{\boldsymbol{H}}\left\{(1-u-h){\boldsymbol{M}}({\boldsymbol{p_{r}}})+(u+h)({\boldsymbol{X_{1}}}^{*})^{T}({\boldsymbol{X_{1}}}^{*})\right\}^{-1}{\boldsymbol{H}}^{\prime}\right]\]
\[-\ln det\left[{\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{\boldsymbol{H}}^{\prime}\right]\bigg{\}}\]
\[=\lim_{h\to 0}\frac{1}{h}\bigg{\{}\ln det\left[{\boldsymbol{H}}\left\{{\boldsymbol{M}}(u,{\boldsymbol{p_{r}}})-h\left({\boldsymbol{M}}({\boldsymbol{p_{r}}})-({\boldsymbol{X_{1}}}^{*})^{T}({\boldsymbol{X_{1}}}^{*})\right)\right\}^{-1}{\boldsymbol{H}}^{\prime}\right]\]
\[-\ln det\left[{\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{\boldsymbol{H}}^{\prime}\right]\bigg{\}}\]
\[=\lim_{h\to 0}\frac{1}{h}\bigg{\{}\ln det\left[{\boldsymbol{H}}\left\{ [{\boldsymbol{M}}(u,{\boldsymbol{p_{r}}})]\left[{\boldsymbol{I}}-h{ \boldsymbol{M}}(u,{\boldsymbol{p_{r}}})^{-1}\left({\boldsymbol{M}}({ \boldsymbol{p_{r}}})-({\boldsymbol{X_{1}}}^{*})^{T}({\boldsymbol{X_{1}}}^{*}) \right)\right]\right\}^{-1}{\boldsymbol{H}}^{\prime}\right]\]
\[\times det\left[{\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{\boldsymbol{H}} ^{\prime}\right]^{-1}\bigg{\}}\]
\[=\lim_{h\to 0}\frac{1}{h}\bigg{\{}\ln det\left[{\boldsymbol{H}}\left\{ \left[{\boldsymbol{I}}-h{\boldsymbol{M}}(u,{\boldsymbol{p_{r}}})^{-1}\left({ \boldsymbol{M}}({\boldsymbol{p_{r}}})-({\boldsymbol{X_{1}}}^{*})^{T}({ \boldsymbol{X_{1}}}^{*})\right)\right]^{-1}\left[{\boldsymbol{M}}(u,{ \boldsymbol{p_{r}}})\right]^{-1}\right\}{\boldsymbol{H}}^{\prime}\right]\]
\[\times det\left[{\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{\boldsymbol{H}} ^{\prime}\right]^{-1}\bigg{\}}\]
Assuming \(h\) is sufficiently small, we use the binomial series expansion \(({\boldsymbol{I}}+h{\boldsymbol{X}})^{-1}=\sum_{i=0}^{\infty}(-h{\boldsymbol{X}})^{i}\) to obtain,
\[\phi(u,{\boldsymbol{p_{r}}})=\lim_{h\to 0}\frac{1}{h}\left\{\ln det\left[{ \boldsymbol{I}}+h{\boldsymbol{B}}+{\mathcal{O}}(h^{2})\right]\right\},\]
\[{\boldsymbol{B}}={\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}\left[{ \boldsymbol{M}}({\boldsymbol{p_{r}}})-({\boldsymbol{X_{1}}}^{*})^{T}({ \boldsymbol{X_{1}}}^{*})\right]{\boldsymbol{M}}(u,{\boldsymbol{p_{r}}})^{-1}{ \boldsymbol{H}}^{\prime}\left[{\boldsymbol{HM}}(u,{\boldsymbol{p_{r}}})^{-1}{ \boldsymbol{H}}^{\prime}\right]^{-1}.\]
Using \(\ln det\left[{\boldsymbol{I}}+h{\boldsymbol{B}}+{\mathcal{O}}(h^{2})\right]=h \mbox{trace}({\boldsymbol{B}})+{\mathcal{O}}(h^{2})\)[26],
\[\phi(u,{\boldsymbol{p_{r}}})=\mbox{trace}\bigg{\{}{\boldsymbol{HM}}(u,{ \boldsymbol{p_{r}}})^{-1}\left[{\boldsymbol{M}}({\boldsymbol{p_{r}}})-({ \boldsymbol{X_{1}}}^{*})^{T}({\boldsymbol{X_{1}}}^{*})\right]{\boldsymbol{M}}(u,{ \boldsymbol{p_{r}}})^{-1}{\boldsymbol{H}}^{\prime}\]
\[\times\left[\mathbf{HM}(u,\mathbf{p_{r}})^{-1}\mathbf{H^{\prime}}\right]^{-1}\right\}\]
\[\phi(u,\mathbf{p_{r}})|_{u=0} =\text{trace}\bigg{\{}\mathbf{HM}(\mathbf{p_{r}})^{-1}\left[\mathbf{M}(\mathbf{p_{r }})-(\mathbf{X_{1}}^{*})^{T}(\mathbf{X_{1}}^{*})\right]\mathbf{M}(\mathbf{p_{r}})^{-1}\mathbf{H^{ \prime}}\] \[\times\left[\mathbf{HM}(\mathbf{p_{r}})^{-1}\mathbf{H^{\prime}}\right]^{-1} \bigg{\}}\] \[=\text{trace}\bigg{\{}\mathbf{I}_{(t-1)}-\mathbf{HM}(\mathbf{p_{r}})^{-1}(\bm {X_{1}}^{*})^{T}(\mathbf{X_{1}}^{*})\mathbf{M}(\mathbf{p_{r}})^{-1}\mathbf{H^{\prime}}\] \[\times\left[\mathbf{HM}(\mathbf{p_{r}})^{-1}\mathbf{H^{\prime}}\right]^{-1} \bigg{\}}\] \[=(t-1)-\text{trace}\bigg{\{}\mathbf{HM}(\mathbf{p_{r}})^{-1}(\mathbf{X_{1}}^ {*})^{T}(\mathbf{X_{1}}^{*})\mathbf{M}(\mathbf{p_{r}})^{-1}\mathbf{H^{\prime}}\left(\mathbf{HM}( \mathbf{p_{r}})^{-1}\mathbf{H^{\prime}}\right)^{-1}\bigg{\}}\] \[=(t-1)-\text{trace}\bigg{\{}\left[\mathbf{M}^{-1}\mathbf{H^{\prime}}\left( \mathbf{HM}^{-1}\mathbf{H^{\prime}}\right)^{-1}\mathbf{HM}^{-1}\right](\mathbf{X_{1}}^{*})^{T }(\mathbf{X_{1}}^{*})\bigg{\}} \tag{10}\]
The proof follows by equating the above expression in equation (10) to zero.
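The condition of Theorem 2 can be checked in the same way; the sketch below also constructs the selection matrix \(\boldsymbol{H}\). As before, \(\boldsymbol{M}\) is the per-subject matrix and all names are illustrative.

```python
# Minimal sketch of the optimality check in Theorem 2, with the selection
# matrix H = [0, 0, I_{t-1}, 0] extracting the direct treatment effects.
import numpy as np

def H_matrix(p, t):
    return np.hstack([np.zeros((t - 1, 1)), np.zeros((t - 1, p - 1)),
                      np.eye(t - 1), np.zeros((t - 1, t - 1))])

def check_theorem2(Xstar_list, props, M, H, tol=1e-6):
    t1 = H.shape[0]                          # t - 1
    M_inv = np.linalg.inv(M)
    A = M_inv @ H.T @ np.linalg.inv(H @ M_inv @ H.T) @ H @ M_inv
    for Xs, pi in zip(Xstar_list, props):
        val = np.trace(A @ Xs.T @ Xs)
        ok = abs(val - t1) < tol if pi > 0 else val <= t1 + tol
        print(f"p = {pi:.4f}: trace = {val:.4f} (target {t1}) -> {ok}")
```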
### Illustration
To illustrate the results of the above general equivalence theorems, we consider the design space \(\{AB,BA\}\), which has \(k=2\) and \(p=2\). Since we are considering a local optimality approach, for illustration purposes we assume that the parameter values are \(\mathbf{\theta}=(\lambda,\beta_{2},\tau_{B},\rho_{B})^{\prime}=(0.5,-1.0,4.0,-2.0)^{\prime}\). Note that parameter values need to be assumed before the optimal proportions can be calculated. Considering the AR(1) correlation structure with \(\alpha=0.1\), i.e.,
\[\mathbf{C_{\alpha}}=\left(\alpha^{|i-i^{\prime}|}\right)=\left(\begin{array}{cc }1&\alpha\\ \alpha&1\end{array}\right),\]
for the assumed parameter values the optimal proportions are \(p_{1}=p_{2}=0.5.\)
The graph of the objective function, \(\Phi(p_{1})=-\ln det(\mathbf{M}(p_{1}))\) and its directional derivative \(\text{\emph{trace}}\left(\mathbf{X_{1}}^{*}\mathbf{M}(p_{1})^{-1}\mathbf{X_{1}}^{*T} \right)-m\) w.r.t \(p_{1}\in[0,1]\) are shown in Figure 1.
The graphs in Figure 1 verify that the minimum of the objective function is located at \(p_{1}=0.5\) and that the directional derivative is zero at \(p_{1}=0.5\). Using Theorem 1, we conclude that, for the assumed parameter values, the design
\[\xi=\left\{\begin{array}{cc}AB&BA\\ 0.5&0.5\end{array}\right\}\]
is the \(D\)-optimal design when the objective function is \(Var(\mathbf{\hat{\theta}}).\)
Considering \(Var(\mathbf{\hat{\tau}})\) as the objective function, the graph of the objective function \(\Phi(p_{1})=\ln det[\mathbf{HM}(p_{1})^{-1}\mathbf{H^{\prime}}]\) and its directional derivative \(\text{\emph{trace}}\left\{\mathbf{A}(\mathbf{X_{1}}^{*})^{T}(\mathbf{X_{1}}^{*})\right\}-(t-1)\) w.r.t. \(p_{1}\in[0,1]\) are shown in Figure 2.
The graphs in Figure 2 verify that the minimum of the objective function is located at \(p_{1}=0.177\) and that the directional derivative is zero at \(p_{1}=0.177\). Using Theorem 2, we conclude that, for the assumed parameter values, the design
\[\xi=\left\{\begin{array}{cc}AB&BA\\ 0.177&0.823\end{array}\right\}\]
is the \(D\)-optimal design when the objective function is \(Var(\hat{\boldsymbol{\tau}})\).
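The one-dimensional searches behind Figures 1 and 2 can be reproduced numerically, reusing the `design_matrix`, `info_matrix` and `H_matrix` helpers sketched above. For concreteness, a Poisson response with log link is assumed here; if the original illustration used a different link or response distribution, the corresponding `inv_link`, `dmu_deta` and `var_fun` arguments should be substituted before comparing the optimizer output with the proportions quoted above.

```python
# Minimal sketch of the search over p1 for the two-sequence design {AB, BA},
# reusing design_matrix, info_matrix and H_matrix from the earlier sketches.
# The Poisson/log-link assumption and all names are ours.
import numpy as np
from scipy.optimize import minimize_scalar

theta = np.array([0.5, -1.0, 4.0, -2.0])          # (lambda, beta_2, tau_B, rho_B)
X_AB = design_matrix((1, 2), t=2)                 # sequence AB
X_BA = design_matrix((2, 1), t=2)                 # sequence BA

def M_of(p1):
    return info_matrix([X_AB, X_BA], [p1, 1.0 - p1], theta, alpha=0.1)

# Criterion 1: minimize -ln det M(p1)  (D-optimality for all parameters).
res_theta = minimize_scalar(lambda p1: -np.linalg.slogdet(M_of(p1))[1],
                            bounds=(1e-6, 1 - 1e-6), method="bounded")

# Criterion 2: minimize ln det(H M(p1)^{-1} H')  (treatment effects only).
H = H_matrix(p=2, t=2)
res_tau = minimize_scalar(
    lambda p1: np.linalg.slogdet(H @ np.linalg.inv(M_of(p1)) @ H.T)[1],
    bounds=(1e-6, 1 - 1e-6), method="bounded")

print(res_theta.x, res_tau.x)   # optimal p1 under each criterion
```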
## 4 Real Example
In this section, we look at the application of the above equivalence theorems to a real-life example discussed earlier. We obtain the \(D\)-optimal designs by solving a system of equations given by the general equivalence theorems when the objective functions are \(Var(\hat{\boldsymbol{\theta}})\) and \(Var(\hat{\boldsymbol{\tau}})\), respectively.
Figure 1: Objective function and its directional derivative for designs with two treatment sequences.
Figure 2: Objective function and its directional derivative for designs with two treatment sequences.
### Work Environment Experiment
Consider the data obtained from the work environment experiment conducted at Booking.com [20]. In recent years, many corporate offices and organizations have adopted open office spaces over traditional cubicle office spaces. Since there were no previous studies examining the effects of office designs on workspaces, Booking.com conducted an experiment to assess the efficiency of different office layouts.
In this experiment, there were a total of \(n=288\) participants. Participants were divided into four groups \(G_{1},G_{2},G_{3},G_{4}\), with each group consisting of \(72\) participants. It is essentially a uniform crossover design with \(p=4\) periods and \(t=4\) treatments. Periods were named Wave 1, Wave 2, Wave 3, and Wave 4, where each Wave had a duration of \(2\) weeks. The four treatments involved in this experiment are office designs named \(A\) (Activity-Based), \(B\) (Open Plan), \(C\) (Team Offices), and \(D\) (Zoned Open Plan), as shown in Figure 3. During the experiment, each group was exposed to different treatments over different periods depending on its treatment sequence. In any given period, there was no interaction between subjects from different groups. A Latin square design (see, for example, [29]) of order four was used to determine the sequence of exposure so that no group was exposed to the conditions in the same order as any other group. The design is shown below in Table 1. A total of \(23\) covariates was recorded in the experiment, but we consider only the most important ones in our fitted model.
The images are reproduced from the manuscript [20], under Creative Commons Attribution license ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)).
For illustration purposes, we consider the response \(commit\)\(count\) to obtain the optimal crossover design for a Poisson response. The commit count is the number of commits submitted to the main git repository by each subject.
Figure 3: The four office designs involved in Work Environment Experiment
In the fitted model, we examine three primary predictors: \(area\), \(wave\), and \(carryover\). Here, \(area\) represents the direct treatment effect, \(wave\) denotes the period effect, and \(carryover\) represents the effect of the treatment from the previous period. To illustrate the local optimality approach, we assume specific parameter values \(\boldsymbol{\theta}=\) (2.0, 0.3, 0.8, \(-0.1\), \(-2.0\), 0.40, \(-2.0\), \(-1.0\), 0.3, \(-1.0)^{\prime}\), which lead to non-uniform allocations using the log link function and AR(1) correlation structure with \(\alpha=0.1\).
According to Theorem 3.1, the \(D\)-optimal design, i.e., the optimal proportions, can be obtained by solving the following system of equations instead of performing constrained optimization:
\[\text{trace}\left(\boldsymbol{X_{i}}^{*}\boldsymbol{M}(\boldsymbol{p_{r}})^{- 1}\boldsymbol{X_{i}}^{*T}\right)=10,\]
for \(i=1,2,3,4\). The resulting \(D\)-optimal design coincides with the one obtained through constrained optimization; the design
\[\xi=\left\{\begin{array}{llll}BADC&CDAB&DBCA&ACBD\\ 0.2375&0.2894&0.2246&0.2485\end{array}\right\}\]
is the \(D\)-optimal design when the objective function is \(Var(\boldsymbol{\hat{\theta}})\).
Similarly, according to Theorem 3.2, for the objective function \(Var(\boldsymbol{\hat{\tau}})\), the \(D\)-optimal design can be obtained by solving the following system of equations:
\[\text{trace}\left\{\left[\boldsymbol{M}(\boldsymbol{p_{r}})^{-1}\boldsymbol{ H}^{\prime}\left(\boldsymbol{HM}(\boldsymbol{p_{r}})^{-1}\boldsymbol{H}^{ \prime}\right)^{-1}\boldsymbol{HM}(\boldsymbol{p_{r}})^{-1}\right]( \boldsymbol{X_{i}}^{*})^{T}(\boldsymbol{X_{i}}^{*})\right\}=3,\]
for \(i=1,2,3,4\). Again, the resulting \(D\)-optimal design coincides with the one obtained through constrained optimization; the design
\[\xi=\left\{\begin{array}{llll}BADC&CDAB&DBCA&ACBD\\ 0.2900&0.2963&0.1734&0.2403\end{array}\right\}\]
is the \(D\)-optimal design.
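As a rough illustration of how such trace conditions can be checked numerically, the sketch below works with a deliberately simplified information matrix \(M(p)=\sum_i p_i X_i^{T}X_i\) rather than the GEE-based matrix used in this paper; the matrices `X`, the number of parameters `q`, and all numerical values are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Simplified sketch (NOT the paper's GEE information matrix): each sequence i
# contributes a hypothetical matrix X_i, and M(p) = sum_i p_i X_i^T X_i.
rng = np.random.default_rng(0)
q = 4                                                # number of parameters (illustrative)
X = [rng.normal(size=(6, q)) for _ in range(4)]      # hypothetical per-sequence matrices

def info_matrix(p):
    return sum(pi * Xi.T @ Xi for pi, Xi in zip(p, X))

def neg_log_det(p):                                  # D-criterion: maximise log det M(p)
    sign, logdet = np.linalg.slogdet(info_matrix(p))
    return -logdet if sign > 0 else np.inf

cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
res = minimize(neg_log_det, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4, constraints=cons)
p_opt = res.x

# Equivalence-theorem check: at the D-optimum, trace(X_i M(p)^{-1} X_i^T) equals q
# for every sequence with non-zero proportion (and is at most q otherwise).
Minv = np.linalg.inv(info_matrix(p_opt))
for i, Xi in enumerate(X):
    print(i, round(p_opt[i], 4), round(np.trace(Xi @ Minv @ Xi.T), 4))
```

For the designs reported above, the analogous check uses the paper's \(\boldsymbol{X_{i}}^{*}\) and \(\boldsymbol{M}(\boldsymbol{p_{r}})\) in place of these placeholders, with the trace equal to 10 for the objective \(Var(\hat{\boldsymbol{\theta}})\) and 3 for \(Var(\hat{\boldsymbol{\tau}})\).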
Remark 3: In [13], we study the effect of misspecification of the working correlation structure on the optimal design. We calculate optimal designs under two choices of the unknown parameters for a misspecified working correlation structure and compare them through their relative \(D\)-efficiency. The relative \(D\)-efficiency under the two parameter choices suggests that the effect of variance misspecification on the locally optimal designs is minimal. We also study the performance of the proposed locally optimal designs via a sensitivity study, in terms of the relative loss of efficiency incurred by choosing
\begin{table}
\begin{tabular}{|c|c c c c|} \hline Groups \(\Rightarrow\) & \(G_{1}\) & \(G_{2}\) & \(G_{3}\) & \(G_{4}\) \\ Period \(\Downarrow\) & (\(BADC\)) & (\(CDAB\)) & (\(DBCA\)) & (\(ACBD\)) \\ \hline Wave 1 & OPEN & TEAM & ZONE & ACT \\ Wave 2 & ACT & ZONE & OPEN & TEAM \\ Wave 3 & ZONE & ACT & TEAM & OPEN \\ Wave 4 & TEAM & OPEN & ACT & ZONE \\ \hline \end{tabular}
\end{table}
Table 1: Latin square design
assumed parameter values instead of the true parameter values. The relative loss of efficiency increases as we move away from the true parameter values; however, Fig. 6 in [13] suggests that this loss of efficiency does not exceed 2%. We also calculate the optimal designs with all 24 sequences, considering the AR(1) correlation structure and different values of \(\alpha\). We observe that in the case of non-uniform allocations, the optimal design has more zero than non-zero proportions, and these allocations do not vary much as \(\alpha\) changes, particularly for the sequences with zero allocations.
## 5 Summary and Conclusion
In many real-life experiments, uniform designs are typically used. Uniform designs are optimal in the case of a linear model, i.e., when the response is normally distributed. However, in situations where responses are non-normal, the optimal proportions are not necessarily uniform. In this manuscript, we derive an expression for the general equivalence theorem to check the optimality of identified locally \(D\)-optimal crossover designs for generalized linear models. The equivalence theorem provides a system of equations from which the optimal proportions can be calculated without performing constrained optimization of the objective function. We derive two versions of the general equivalence theorem, one with the objective function \(Var(\boldsymbol{\hat{\theta}})\) and the other with the objective function \(Var(\boldsymbol{\hat{\tau}})\). We illustrate the application of these equivalence theorems on two real-life examples and, by solving the system of equations, obtain the same set of optimal proportions as those obtained through constrained optimization. In our future work, we plan to use a Bayesian approach to avoid guessing the values of the unknown parameters.
_Funding:_ This research is in part supported by NSF Grant DMS-2311186.
_Conflict of Interests:_ There is no conflict of interest.
|
2309.10828 | Hidden ring crack in a rotating cylindrical shell under torsion | We consider the impact of a ring crack within a rotating hollow cylinder of
fixed height under axisymmetric (torsion) loading. The form of the displacement
is obtained from the equation of motion using the Fourier sin transform. The
displacement jump over the crack is obtained from the boundary condition on the
tangential stress, formulated as a singular integral equation which is solved
by the method of orthogonal polynomials. The stress intensity factors on the
opposing crack surfaces are calculated. The dependence of the crack extension
on the problem geometry is investigated, including the impact of the crack's
location, cylinder's height, torsion loading and rotation frequency. Possible
extensions of the model to cover fatigue cracking are considered. A practical
test to detect and locate cracks within a rotating cylinder is outlined. | Zinaida Zhuravlova, Igor Istenes, Daniel Peck, Yuriy Protserov, Nataly Vaysfeld | 2023-09-16T10:59:24Z | http://arxiv.org/abs/2309.10828v1 | # Hidden ring crack in a rotating cylindrical shell under torsion
###### Abstract
We consider the impact of a ring crack within a rotating hollow cylinder of fixed height under axisymmetric (torsion) loading. The form of the displacement is obtained from the equation of motion using the Fourier sin transform. The displacement jump over the crack is obtained from the boundary condition on the tangential stress, formulated as a singular integral equation which is solved by the method of orthogonal polynomials. The stress intensity factors on the opposing crack surfaces are calculated. The dependence of the crack extension on the problem geometry is investigated, including the impact of the crack's location, cylinder's height, torsion loading and rotation frequency. Possible extensions of the model to cover fatigue cracking are considered. A practical test to detect and locate cracks within a rotating cylinder is outlined.
## 1 Introduction
Rotational motion is a fundamental aspect of many technological systems, as it enables reliable, efficient conversion of energy and precise control of movement, amongst others. As a result, it forms the basis of many types of machinery, such as engines, turbines, and motors. It is often used when converting energy from one form to another, be it motion, mechanical work, or enabling power generation. Manufacturing processes like drilling, milling and lathe operations rely on controlled rotational motion to shape and modify the materials that shape our modern world.
It is therefore unsurprising that the study of general problems involving rotational motion, and the key features underlying them, remains an important area of modelling and study. The results of studies of the general laws of rotational motion in the theory of shells [4], the behaviour of waves under a rotational load in coupled fields [22], and the nonlinear dynamics of structures [7], underlie the modelling of special problems that arise in robotics (simulation of rotational movements of human limbs), energy (wind turbines converting wind energy into electricity using rotational movements), biomechanics (for modelling human joints, muscular mechanisms), and many more.
One of the simplest three-dimensional shapes to consider in such torsion problems is that of a cylinder, which has immediate applications to approximate more complex structures such as pipes, tubes, columns, shafts, and so forth, that are affected by stresses and strains. Modern studies of the wave fields of finite cylinders include both the analysis of bodies composed of different materials (for example, models of orthotropic bi-directional FG materials [5], hyperelastic materials [10], inter-layer fracture of carbon
nanotubes [32], etc), and their behaviour under various dynamic modes of loading [12, 14]. One of the essential factors affecting the stress state of the cylinder is the nature of the applied load. For example, it was shown in [25] that for a fixed wall thickness and stress gradient the gain in the ultimate loading capacity depends on the magnitude of the gradient, however it is only weakly dependent on the gradient direction and the pipe radii. Meanwhile, in [33] a weakly nonlinear analysis was conducted for localized necking of a hyperelastic solid cylinder under axial stretching, based on the exact theory of nonlinear elasticity.
In the case where a dynamic load is applied, modelling becomes much more complicated and requires new mathematical approaches. In [28], the impact of forced waves in a uniform waveguide with distributed and localized dynamic structures was considered, and the general patterns obtained. Meanwhile, in the context of mixture theory [9] considered the steady diffusion of an ideal fluid through a two-layer thick walled pre-stressed and fibre-reinforced hollow cylinder.
A special case of problems in rotating cylinders is when a small defect is present within the body, as it significantly increases the risk of crack development and the appearance of dangerous stresses. The importance of microstructure modelling for additively manufactured metal post-process simulations was demonstrated in [29]. Meanwhile, in [14] the issue of stress concentration near various types of crack within a cylinder was considered. This was achieved using the torsion problem for a circular cylinder containing an equilateral triangle opening and a line crack, with the solution obtained using the method of singular integral equations and the crack-cutting technique. The case of an external crack on a hollow cylinder under axisymmetric torsion has been considered by numerous authors, such as the case of a circumferential crack in [11], multiple cracks [3], and that of a lone crack in [13] (see also references therein), although only the latter accounted for cylinder rotation. In that paper, the mixed boundary value problem was reduced to a pair of dual series equations. They found that the magnitude of the stress intensity factor was heavily dependent on the crack location. Similarly, in [1] they demonstrated that small perturbations away from the case of a symmetric crack configuration (under pure mode I or II) produce mixed-mode conditions at the crack tips. This in turn may lead to an increase in the stress intensity factors and the energy release rates. The case of internal cracks within the cylinder has been studied, for example for a single crack [36], multiple cracks [31], and a numerical approach that can handle multiple internal and external cracks [20], however these approaches did not incorporate potential rotation of the cylinder.
In this paper, we seek to develop a general model for a rotating hollow cylinder of fixed height, containing an internal crack, under axisymmetric torsion. With the above results in mind, we take the problem geometry to allow a thorough investigation of the influence of the crack location on the stress intensity factors. The primary aim is to ensure the results can be utilized in practical application to the operation of turbines, and other machinery with rotating components. While for the case of turbines it is typically fracture of the blades that is the primary concern (see for example: [6] for gas turbines, [34] for wind turbines), damage to the central shaft can be more difficult to detect - even with routine inspection (for details of proposed inspection methodology, see e.g. [21, 27, 35]), and may interfere with indirect damage monitoring methods, such as vibration detection (see e.g. [15, 30] and references therein). Fractures within the central shaft however allow the possibility of local repair, provided they are detected. The presented model is therefore designed to investigate whether a practical test for detecting and locating cracks within cylindrical shafts (without stopping operation), as well as predicting their rate of extension (fast or fatigue), are possible, and to facilitate any related risk management.
The paper is organised as follows. The problem formulation is given in Sect. 2, with the cylinder geometry outlined, while the governing equations are stated and normalized. The form of the displacement in the cylinder is obtained in Sect. 3 using the Fourier sin transform. The jump of the displacement over the crack follows from expressing the boundary condition on the tangential stress in the form of a singular integral equation, which is then resolved using the method of orthogonal polynomials. This is used to obtain the stress intensity factor on the crack surfaces in Sect. 4. With the solution obtained, in Sect. 5 results are presented for the case of a small steel cylinder. First, the extension of an existing crack is examined in Sect. 5.2, with an investigation of the influence of the crack and cylinder geometry on quasi-static fracture growth. Extensions of the model to examine fatigue cracking are given. Meanwhile, in Sect. 5.3 the impact of the crack on the displacement within the cylinder is investigated. Whether a test to determine the presence and location of a crack within a cylinder can be created from the presented formulation is investigated. Finally, concluding remarks are given in Sect. 6.
## 2 Problem Formulation
### Governing equations
We consider a hollow elastic cylinder containing a ring crack in cylindrical coordinates (see Fig. 1). The cylinder's inner radius is a distance \(a_{0}\) from the origin, its outer radius is a distance \(a_{1}\), and its height is \(h\). The ring crack is located at height \(d\), with inner radius \(c_{0}\) and outer radius \(c_{1}\). The problem domain is therefore, in cylindrical coordinates \((R,\phi,Z)\), given by: \(a_{0}<R<a_{1}\), \(-\pi<\phi<\pi\), \(0<Z<h\). The cylinder is rotating with frequency \(\tilde{\omega}\), while the medium has wavespeed \(c\). Additional axisymmetric loading (torsion) is applied on the inner, \(P_{0}(Z)\), and outer, \(P_{1}(Z)\), surfaces of the cylinder.
The equation of motion takes the form
\[\frac{1}{R}\frac{\partial}{\partial R}\left(R\frac{\partial u}{\partial R} \right)-\frac{1}{R^{2}}u+\frac{\partial^{2}u}{\partial Z^{2}}=-\frac{\tilde{ \omega}^{2}}{c^{2}}u,\quad a_{0}<R<a_{1},\quad 0<Z<h, \tag{1}\]
where here \(u=u_{\phi}(R,Z)\) is the displacement, which will have a discontinuity over the crack surfaces.
The cylinder is assumed to be fixed at the bottom edge
\[u(R,0)=0,\quad a_{0}<R<a_{1}. \tag{2}\]
The upper edge of the cylinder is free from loading
\[\tau_{z\phi}(R,h)=0,\quad a_{0}<R<a_{1}, \tag{3}\]
while the cylindrical boundaries are under the tangential loading
\[\tau_{r\phi}(a_{i},Z)=P_{i}(Z),\quad 0<Z<h,\quad i=0,1, \tag{4}\]
where \(\tau_{z\phi}(R,Z)\), \(\tau_{r\phi}(R,Z)\) are the tangential stresses, while \(P_{i}(Z)\) is a prescribed (known) function.
Inside the cylinder, the crack results in a displacement jump (denoted by double brackets [.])
\[\llbracket u(R,d)\rrbracket=\tilde{\chi}(R),\quad c_{0}<R<c_{1}, \tag{5}\]
where the jump \(\llbracket u(R,d)\rrbracket=u(R,d-0)-u(R,d+0)\), while \(\tilde{\chi}(R)\neq 0\) is an unknown jump function to be computed as part of the solution. The tangential stress over the crack is such that
\[\llbracket\tau_{z\phi}(R,d)\rrbracket=0,\quad c_{0}<R<c_{1}. \tag{6}\]
Figure 1: Geometry and coordinate system for a cylinder with a circular crack.
### Normalization
We introduce the following normalization
\[r=\frac{R}{a_{1}-a_{0}},\quad z=\frac{Z}{h},\quad w(r,z)=\frac{u(R,Z)}{a_{1}-a_{0}}, \tag{7}\]
\[p_{i}(z)=\frac{P_{i}(Z)}{(a_{1}-a_{0})GF},\quad\omega=\frac{\tilde{\omega}}{\Omega},\quad\chi(r)=\frac{\tilde{\chi}(R)}{a_{1}-a_{0}},\]
where \(G\) is the shear modulus, \(F\) is the maximal applied load, and \(\Omega\) is the maximal frequency.
Under this normalization, the problem (1) - (6) can be expressed in the form
\[\left\{\begin{array}{l}\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial w}{\partial r}\right)-\frac{1}{r^{2}}w+\gamma^{2}\frac{\partial^{2}w}{\partial z^{2}}=-\Psi^{2}w,\quad\rho_{0}<r<\rho_{1},\quad 0<z<1,\\ w(r,0)=0,\quad\rho_{0}<r<\rho_{1},\\ \frac{\partial w}{\partial z}(r,1)=0,\quad\rho_{0}<r<\rho_{1},\\ \rho_{i}\frac{\partial w}{\partial r}(\rho_{i},z)-w(\rho_{i},z)=p_{i}(z),\quad 0<z<1,\quad i=0,1,\\ \llbracket w(r,\delta)\rrbracket=\chi(r),\quad\alpha<r<\beta,\\ \left\llbracket\frac{\partial w}{\partial z}(r,\delta)\right\rrbracket=0,\quad\alpha<r<\beta,\end{array}\right. \tag{8}\]
where
\[\gamma=\frac{a_{1}-a_{0}}{h},\quad\Psi^{2}=\frac{\omega^{2}\Omega^{2}(a_{1}-a _{0})^{2}}{c^{2}},\quad\rho_{i}=\frac{a_{i}}{a_{1}-a_{0}},\quad i=0,1,\]
\[\alpha=\frac{c_{0}}{a_{1}-a_{0}},\quad\beta=\frac{c_{1}}{a_{1}-a_{0}},\quad \delta=\frac{d}{h}.\]
## 3 Displacement within the cylinder
### The form of the displacement
To obtain the displacement function satisfying (8), we begin by reducing this to a 1D problem utilizing the finite Fourier sin transform with respect to the variable \(z\) (for details on this transform, see e.g. [8]). The transformed displacement takes the following form
\[w_{k}(r)=\int_{0}^{1}w(r,z)\sin(\lambda_{k}z)\,dz,\quad\lambda_{k}=\frac{\pi} {2}(2k-1),\quad k=1,2,3,\ldots, \tag{9}\]
with \(w(r,z)\) recovered utilizing the associated inverse
\[w(r,z)=2\sum_{k=1}^{\infty}w_{k}(r)\sin(\lambda_{k}z),\quad\lambda_{k}=\frac{ \pi}{2}(2k-1). \tag{10}\]
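As a quick numerical illustration of this transform pair (not part of the original derivation), the short sketch below projects a test profile satisfying the same boundary conditions, \(w(0)=0\) and \(w'(1)=0\), onto \(\sin(\lambda_{k}z)\) and rebuilds it from the truncated series; the test profile and the truncation level are arbitrary choices.

```python
import numpy as np

# Small numerical sketch of the transform pair (9)-(10): project a test profile
# on sin(lambda_k z), lambda_k = pi(2k-1)/2, then rebuild it from the series.
z = np.linspace(0.0, 1.0, 2001)
w = z * (2.0 - z)                      # satisfies w(0) = 0 and w'(1) = 0
K = 50
lam = 0.5 * np.pi * (2 * np.arange(1, K + 1) - 1)

def trapz(f, x):                       # simple trapezoid rule
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

# forward transform (9) and inverse series (10)
wk = np.array([trapz(w * np.sin(l * z), z) for l in lam])
w_rec = 2.0 * sum(c * np.sin(l * z) for c, l in zip(wk, lam))
print(np.max(np.abs(w - w_rec)))       # truncation error of the 50-term series
```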
Under transformation (9), the boundary value problem for the displacement (8) becomes
\[\left\{\begin{array}{l}\frac{d}{dr}\left(r\frac{dw_{k}}{dr} \right)-\left(\frac{1}{r}+\mu_{k}^{2}r\right)w_{k}=\gamma^{2}\lambda_{k}r\cos \left(\lambda_{k}\delta\right)\chi(r),\quad\rho_{0}<r<\rho_{1},\quad k=1,2,3, \ldots,\\ \rho_{i}\frac{dw_{k}}{dr}(\rho_{i})-w_{k}(\rho_{i})=p_{ik},\quad i=0,1, \quad k=1,2,3,\ldots,\end{array}\right. \tag{11}\]
where \(\mu_{k}=\sqrt{\gamma^{2}\lambda_{k}^{2}-\Psi^{2}}\), while \(p_{ik}\) is the transformed tangential loading.
The general solution to the transformed problem (11) can be expressed as
\[w_{k}(r)=A_{k}I_{1}\left(r\mu_{k}\right)+B_{k}K_{1}\left(r\mu_{k} \right)+\gamma^{2}\lambda_{k}\cos\left(\lambda_{k}\delta\right)\int_{\alpha}^ {\beta}\Phi_{k}(r,\eta)\eta\chi(\eta)\,d\eta,\quad k=1,2,3,\ldots, \tag{12}\]
where \(A_{k},\,B_{k}\) are unknown constants to be obtained from the boundary conditions (11)\({}_{2}\), while
\[\Phi_{k}(r,\eta)=-\int_{0}^{\infty}\frac{xJ_{1}\left(rx\right)J_{1}\left(\eta x \right)}{x^{2}+\mu_{k}^{2}}\,dx=-\left\{\begin{array}{ll}K_{1}\left(r\mu_{k} \right)I_{1}\left(\eta\mu_{k}\right),&\eta<r,\\ I_{1}\left(r\mu_{k}\right)K_{1}\left(\eta\mu_{k}\right),&\eta>r,\end{array}\right.\]
with \(J_{n}(.)\) denoting the Bessel function, and \(I_{n}(.)\), \(K_{n}(.)\) being the modified Bessel functions of the first and second kind respectively (see e.g. [2]).
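The closed form used here for \(\Phi_{k}(r,\eta)\) is a standard Lipschitz-Hankel-type integral, and it can be spot-checked numerically; the sketch below compares the direct (slowly convergent, oscillatory) integral with the product of modified Bessel functions for arbitrary illustrative values of \(r>\eta\) and \(\mu_{k}\).

```python
import numpy as np
from scipy.special import j1, i1, k1
from scipy.integrate import quad

# Numerical check (illustrative values) of the closed form for Phi_k(r, eta):
# -int_0^inf x J1(r x) J1(eta x) / (x^2 + mu^2) dx = -K1(r mu) I1(eta mu),  eta < r.
r, eta, mu = 3.2, 2.9, 1.7

integrand = lambda x: x * j1(r * x) * j1(eta * x) / (x**2 + mu**2)
# oscillatory, slowly decaying integrand: allow many subdivisions
val, _ = quad(integrand, 0.0, np.inf, limit=400)

print(-val, -k1(r * mu) * i1(eta * mu))   # the two numbers should agree
```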
Applying the inverse Fourier sin transform (10), the normalized displacement experienced by the cylinder immediately follows
\[w(r,z)=2\sum_{k=1}^{\infty}\left[F_{1k}(r)p_{1k}-F_{0k}(r)p_{0k}\right]\sin\left(\lambda_{k}z\right)-2\gamma^{2}\int_{\alpha}^{\beta}\left[\sum_{k=1}^{\infty}\lambda_{k}\cos\left(\lambda_{k}\delta\right)\sin\left(\lambda_{k}z\right)N_{k}\left(r,\eta\right)\right]\eta\chi(\eta)\,d\eta\] \[\quad+2\gamma^{2}\int_{\alpha}^{\beta}\left[\sum_{k=1}^{\infty}\lambda_{k}\cos\left(\lambda_{k}\delta\right)\sin\left(\lambda_{k}z\right)\Phi_{k}\left(r,\eta\right)\right]\eta\chi(\eta)\,d\eta, \tag{13}\]
where
\[F_{ik}(r)=\frac{1}{\Delta_{k}}\left[K_{2}\left(\rho_{i}\mu_{k}\right)I_{1} \left(r\mu_{k}\right)+I_{2}\left(\rho_{i}\mu_{k}\right)K_{1}\left(r\mu_{k} \right)\right],\quad i=0,1,\quad k=1,2,3,\ldots,\]
\[N_{k}(r,\eta)=\frac{1}{\Delta_{k}}\left[K_{2}(\rho_{1}\mu_{k})I_{1}(r\mu_{k})\left\{I_{2}(\rho_{0}\mu_{k})K_{1}(\eta\mu_{k})+K_{2}(\rho_{0}\mu_{k})I_{1}(\eta\mu_{k})\right\}\right.\]
\[\left.+I_{2}(\rho_{0}\mu_{k})K_{1}(r\mu_{k})\left\{K_{2}(\rho_{1}\mu_{k})I_{1}(\eta\mu_{k})+I_{2}(\rho_{1}\mu_{k})K_{1}(\eta\mu_{k})\right\}\right],\quad k=1,2,3,\ldots,\]
with
\[\Delta_{k}=I_{2}\left(\rho_{1}\mu_{k}\right)K_{2}\left(\rho_{0}\mu_{k} \right)-K_{2}\left(\rho_{1}\mu_{k}\right)I_{2}\left(\rho_{0}\mu_{k}\right), \quad k=1,2,3,\ldots.\]
While the form of the displacement has now been obtained, it is in terms of the still unknown jump function \(\chi(r)\), which must be computed.
### The jump of the displacement
#### 3.2.1 The singular integral equation
The unknown function \(\chi(r)\) is obtained from the remaining boundary condition \(\left\llbracket\frac{\partial w}{\partial z}(r,\delta)\right\rrbracket=0\) in (8). Substituting the displacement (13) into this condition, and introducing an auxiliary function \(t(\xi)\), \(\xi\in[-1,1]\), related to the jump \(\chi(r)\) through the change of variables (14), reduces the condition to a singular integral equation (15) for \(t(\xi)\). Its kernel contains a logarithmic singularity (see (17)) together with the regular parts \(l^{*}(s-\xi)\) and \(L(s,\xi)\), while its right-hand side involves the function
\[M^{*}(r)=-4\gamma\sum_{k=1}^{\infty}\left[F_{1k}^{*}(r)p_{1}k-F_{0k}^{*}(r)p_{0k} \right]\lambda_{k}\cos\left(\lambda_{k}\delta\right),\]
and
\[F_{lk}^{*}(r)=\frac{1}{\Delta_{k}}\left[K_{2}\left(\rho_{1}\mu_{k}\right)I_{0} \left(r\mu_{k}\right)-I_{2}\left(\rho_{1}\mu_{k}\right)K_{0}\left(r\mu_{k} \right)\right],\quad i=0,1,\]
\[N_{k}^{*}(r,\eta)=-\frac{1}{\mu_{k}^{2}\Delta_{k}}\left[K_{2}\left(\rho_{1}\mu _{k}\right)I_{1}\left(r\mu_{k}\right)\left\{I_{2}\left(\rho_{0}\mu_{k}\right) K_{1}\left(\eta\mu_{k}\right)+K_{2}\left(\rho_{0}\mu_{k}\right)I_{1}\left(\eta\mu_{k} \right)\right\}\right.\]
\[\left.+I_{2}\left(\rho_{0}\mu_{k}\right)K_{1}\left(r\mu_{k}\right)\left\{K_{2 }\left(\rho_{1}\mu_{k}\right)I_{1}\left(\eta\mu_{k}\right)+I_{2}\left(\rho_{1 }\mu_{k}\right)K_{1}\left(\eta\mu_{k}\right)\right\}\right],\]
with \(K(.)\) being the complete elliptic integral of the first kind (see e.g. [2]).
The problem of obtaining the jump function satisfying the boundary condition now consists of finding the solution \(t(\xi)\) of the singular integral equation (15), and inverting expression (14) to obtain \(\chi(r)\).
#### 3.2.2 Form of the function \(t\)
The solution to the singular integral equation (15) is sought utilizing the method of orthogonal polynomials first proposed in [24]. Accordingly, we seek the function \(t(\xi)\) in the form
\[t(\xi)=\frac{1}{\sqrt{1-\xi^{2}}}\sum_{n=0}^{\infty}\left(t_{n}+C\cdot t_{n}^{ C}\right)T_{n}(\xi), \tag{16}\]
where \(t_{n}\), \(C\), \(t_{n}^{C}\), with \(n=0,1,2,\ldots\), are unknown constants, while \(T_{n}(\xi)\) are Chebyshev polynomials of the first kind (see e.g. [2]).
Before inserting expression (16) into the singular integral equation (15), we observe the following spectral correspondence
\[\int_{-1}^{1}\ln|s-\xi|\,\frac{T_{n}(\xi)}{\sqrt{1-\xi^{2}}}\,d\xi=-\sigma_{n }T_{n}(s),\quad\sigma_{n}=\begin{cases}\pi\ln(2),&n=0,\\ \frac{\pi}{n},&n=1,2,\ldots.\end{cases} \tag{17}\]
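This spectral relation is easy to verify numerically; substituting \(\xi=\cos\theta\) removes the endpoint singularities, leaving only the mild logarithmic one. The values of \(n\) and \(s\) in the sketch below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the spectral relation (17). With xi = cos(theta) the
# left-hand side becomes int_0^pi ln|s - cos(theta)| cos(n theta) d(theta).
def lhs(n, s):
    f = lambda th: np.log(abs(s - np.cos(th))) * np.cos(n * th)
    val, _ = quad(f, 0.0, np.pi, points=[np.arccos(s)], limit=200)
    return val

def T(n, s):                       # Chebyshev polynomial of the first kind
    return np.cos(n * np.arccos(s))

s = 0.3
for n in range(4):
    sigma = np.pi * np.log(2.0) if n == 0 else np.pi / n
    print(n, lhs(n, s), -sigma * T(n, s))   # the two columns should match
```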
Therefore, multiplying (15) through by \(T_{n}(s)/\sqrt{1-s^{2}}\) and integrating over \(s\in[-1,1]\), we obtain
\[\tilde{t}_{m}+C\cdot\tilde{t}_{m}^{C}+\sum_{n=0}^{\infty}\left(\tilde{t}_{n}+ C\cdot\tilde{t}_{n}^{C}\right)A_{mn}=b_{m}+C\cdot b_{m}^{C},\quad m=0,1,2,\ldots, \tag{18}\]
where
\[\tilde{t}_{m}=\sqrt{\sigma_{m}\gamma_{m}}\,t_{m},\quad\tilde{t}_{m}^{C}=\sqrt{\sigma_{m}\gamma_{m}}\,t_{m}^{C},\quad b_{m}^{C}=\frac{\pi\nu}{\sqrt{\sigma_{m}\gamma_{m}}}e^{\frac{1}{2D}}J_{m}\left(\frac{1}{2\nu}\right),\quad\gamma_{m}=\begin{cases}\pi,&m=0,\\ \frac{\pi}{2},&m=1,2,\ldots,\end{cases}\]
\[A_{mn}=\frac{1}{\sqrt{\sigma_{m}\gamma_{m}\sigma_{n}\gamma_{n}}}\int_{-1}^{1}\frac{T_{m}(s)}{\sqrt{1-s^{2}}}\,ds\int_{-1}^{1}\left[l^{*}(s-\xi)+L(s,\xi)\right]\frac{T_{n}(\xi)}{\sqrt{1-\xi^{2}}}\,d\xi,\quad b_{m}=\frac{1}{\sqrt{\sigma_{m}\gamma_{m}}}\int_{-1}^{1}\frac{T_{m}(s)}{\sqrt{1-s^{2}}}\,ds.\]
The only unknowns to be solved for are the constant \(C\) and the constants \(t_{n}\), \(t_{n}^{C}\), \(n=0,1,2,\ldots\).
The value of the constant \(C\) follows from expanding the singular integral equation (15), and noting that the jump function vanishes at the crack tips, \(\chi(\alpha)=\chi(\beta)=0\) (see [31] for details)
\[C=-\sum_{n=0}^{\infty}\frac{\tilde{t}_{n}}{\sqrt{\sigma_{n}\gamma_{n}}}I_{n}\left(\frac{1}{2\nu}\right)\left[\sum_{m=0}^{\infty}\frac{\tilde{t}_{m}^{C}}{\sqrt{\sigma_{m}\gamma_{m}}}I_{m}\left(\frac{1}{2\nu}\right)\right]^{-1}. \tag{19}\]
Consequently, the constants \(t_{n}\), \(t_{n}^{C}\), \(n=0,1,2,\ldots\), now follow immediately from (18). This allows the displacement jump \(\chi(r)\) to be obtained, yielding a full description of the displacement within the cylinder.
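A minimal sketch of how this truncated solve can be organised is given below. Since (18) is linear in the pair \((\tilde{t}_{n},\tilde{t}_{n}^{C})\), one may solve the two systems corresponding to the right-hand sides \(b_{m}\) and \(b_{m}^{C}\) separately and then fix \(C\) from (19). The arrays `A`, `b`, `bC`, the values \(I_{n}(1/(2\nu))\) and the factors \(\sqrt{\sigma_{n}\gamma_{n}}\) are assumed to be precomputed for the specific kernel and loading; this illustrates the linear algebra only, not the authors' code.

```python
import numpy as np

# Sketch of the truncated solve of (18)-(19): with N+1 Chebyshev modes, solve
# the two linear systems for the "b" and "b^C" right-hand sides, then fix the
# constant C from the closure condition (19).
def solve_truncated(A, b, bC, In_half_nu, sg):
    # A: (N+1, N+1) matrix A_mn; b, bC: right-hand sides of (18);
    # In_half_nu: values I_n(1/(2*nu)); sg: sqrt(sigma_n * gamma_n).
    S = np.eye(len(b)) + A                 # (I + A) t~ = rhs, see (18)
    t_tilde = np.linalg.solve(S, b)
    tC_tilde = np.linalg.solve(S, bC)
    # closure (19): C chosen so that the jump vanishes at the crack tips
    C = -np.sum(t_tilde / sg * In_half_nu) / np.sum(tC_tilde / sg * In_half_nu)
    coeffs = (t_tilde + C * tC_tilde) / sg  # the coefficients t_n + C t_n^C of (16)
    return C, coeffs
```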
## 4 The stress intensity factor
We investigate the stress intensity factor experienced on the opposing crack faces, in order to determine the crack behaviour and potential for extension (see e.g. [23]). The normalization of this parameter is taken in line with (7) as
\[K_{III}=\frac{\tilde{K}_{III}\sqrt{a_{1}-a_{0}}}{F}, \tag{20}\]
for \(\tilde{K}_{III}\) the (dimensional) mode-III stress intensity factor, and \(F\) the maximal applied load.
The dimensionless stress intensity factors are then given by
\[K^{-}_{III}=\lim_{r\to c_{0}^{-}}\sqrt{2\pi(c_{0}-r)}\,\tau_{\phi z},\quad K^{+}_{III}=\lim_{r\to c_{1}^{+}}\sqrt{2\pi(r-c_{1})}\,\tau_{\phi z}.\]
Noting the form of the displacement (16), the tangential stress \(\tau_{\phi z}\) follows immediately from the stress - displacement relations in cylindrical coordinates. The stress intensity factor is thus obtained as (for full details of evaluating this limit, see [31])
\[K^{-}_{III}=\frac{G\sqrt{\pi}}{2\sqrt{c_{0}\nu}}\sum_{n=0}^{\infty}(-1)^{n} \left(t_{n}+C\cdot t_{n}^{C}\right),\quad K^{+}_{III}=\frac{G\sqrt{\pi}}{2 \sqrt{c_{1}\nu}}\sum_{n=0}^{\infty}\left(t_{n}+C\cdot t_{n}^{C}\right). \tag{21}\]
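Once the combined coefficients \(t_{n}+C\,t_{n}^{C}\) are available, (21) is a direct summation; a small sketch (with \(G\), \(c_{0}\), \(c_{1}\) and \(\nu\) assumed known from the problem data) is given below.

```python
import numpy as np

# Sketch of (21): stress intensity factors on the inner and outer crack surfaces
# from the combined Chebyshev coefficients t_n + C*t_n^C (array `coeffs`).
def sif(coeffs, G, c0, c1, nu):
    signs = (-1.0) ** np.arange(len(coeffs))
    k_minus = G * np.sqrt(np.pi) / (2.0 * np.sqrt(c0 * nu)) * np.sum(signs * coeffs)
    k_plus = G * np.sqrt(np.pi) / (2.0 * np.sqrt(c1 * nu)) * np.sum(coeffs)
    return k_minus, k_plus
```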
## 5 Results for a crack in a rotating cylinder
### Numerical scheme and simulation parameters
The system of equations is solved using an iterative scheme in a Python environment. The jump function is obtained by solving (18) - (19), inserting these constants into (16) to yield the function \(t(\xi)\), and then solving the inverse problem (14) to obtain \(\chi(r)\). The displacement, \(w(r,z)\), then follows immediately from (13) (noting the boundary conditions), while the stress intensity factor on the crack surfaces, \(K^{+/-}_{III}\), is obtained using (21).
In simulations, the constants defining the problem geometry and material constants are taken for a small steel cylinder, unless otherwise stated. The values used for the problem geometry are given in Table. 1a. To examine the influence of the crack location, we vary the height and inner radius of the fracture in four configurations, as outlined in Table. 1b.
For the prescribed loading, we assume that the cylinder is not experiencing loading on the outer surface (\(P_{1}(z)=0\)), with all loading occurring on the inner surface (\(P_{0}(z)\neq 0\)). For simplicity, we restrict ourselves to power-law loading, for example constant loading \(P_{0}(z)=F\) (with \(F\), the maximal applied load, given in Table. 1a), linear loading \(P_{0}(z)=Fz\), etc. Simulations considering piece-wise loading were also conducted, such that only a portion of the cylinder inner surface experienced loading, and the results were in line with those presented in the remainder of this section (for this reason, they are not included).
### The dependence of crack extension on process parameters
The first immediate concern when considering a crack within the rotating cylinder is the direct damage it may cause. This can primarily be considered in terms of the fracture extension, both in the case of a single event (fast fracture) and over multiple cycles (fatigue crack). Towards this end, we consider the normalized stress intensity factor (SIF) experienced on the crack surfaces (21), and their dependence on the fracture location.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline \(a_{0}\) [m] & \(a_{1}\) [m] & \(c_{1}\) [m] & \(h\) [m] & \(\Omega\) [Hz] & \(F\) [Pa] \\ \hline \(5.5\times 10^{-3}\) & \(7.5\times 10^{-3}\) & \(7\times 10^{-3}\) & \(5\times 10^{-3}\) & 150 & \(3\times 10^{5}\) \\ \hline \hline \end{tabular}
\begin{tabular}{c|c|c} \hline \hline Configuration & \(d\) [m] & \(c_{0}\) [m] \\ \hline Symmetric & \(h/2\) & \(a_{0}+(a_{1}-a_{0})/4\) \\ \hline Edge & \(h/10\) & \(a_{0}+(a_{1}-a_{0})/15\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **(a)** The value of constants used in simulations. **(b)** The four configurations of the crack location used in simulations (symmetric - symmetric, symmetric - edge, edge - symmetric and edge - edge). Here, the edge cases represent the minimal distance to the bottom of the cylinder and the inner surface for \(d\) and \(c_{0}\) respectively.
#### 5.2.1 Quasi-static fracture growth
Let us first consider the case of a single cycle/loading event. In this instance, the stress intensity factor can be utilized to predict fracture extension, through use of the energy release rate or similar criterion. When interpreting the results, the presented normalization scheme for \(K_{III}\) (20) will yield a normalized mode-I material toughness for steel of approximately \(K_{Ic}=7.45\) (assuming dimensional \(\tilde{K}_{Ic}=50\,\mathrm{MPa}\cdot\sqrt{\mathrm{m}}\)).
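The normalized toughness quoted here follows directly from (20) and the values in Table. 1a; the short check below only repeats that arithmetic.

```python
# Normalized mode-I toughness used when interpreting the plots (eq. (20)):
K_Ic_dim = 50e6          # Pa*sqrt(m), assumed dimensional toughness of steel
F = 3e5                  # Pa, maximal applied load (Table 1a)
a0, a1 = 5.5e-3, 7.5e-3  # m, inner and outer cylinder radii (Table 1a)
print(K_Ic_dim * ((a1 - a0) ** 0.5) / F)   # ~7.45
```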
The influence of the fracture inner and outer surface positions, \(c_{0}\) and \(c_{1}\) respectively, on the SIFs
Figure 2: Dependence of the dimensionless stress intensity factor on the inner (\(K_{III}^{-}\), square markers) and outer (\(K_{III}^{+}\), cross markers) fracture surfaces on the normalized crack **(a)**, **(b)** inner radius \(\alpha\), **(c)**, **(d)** outer radius \(\beta\). Results are shown for constant loading (unbroken lines) and linear loading (dashed lines), in the case of an **(a)**, **(c)** edge crack, **(b)**, **(d)** symmetric crack (see Table. 1b). All remaining parameters for the cylinder and crack geometry are taken as stated in Table. 1a.
are provided in Fig. 2. It is clear that for both parameters the impact of the crack is significantly higher when it is located at the edge of the domain (i.e. at the bottom of the cylinder) rather than at the center (compare (a), (c) and (b), (d)). The results also show that the normalized stress intensity factor increases with increasing \(c_{0}\), and decreases with decreasing \(c_{1}\), reflecting the fact that a higher stress intensity factor is experienced for a smaller crack surface under identical loading.
The dependence of the stress intensity factor on the fracture height, \(d\), and cylinder height, \(h\), are provided in Fig. 3. We again observe a larger impact of the crack in the edge cases (when the crack almost touches the inner cylinder surface), although it is far less pronounced. The results for the crack
Figure 3: Dependence of the dimensionless stress intensity factor on the inner (\(K_{III}^{-}\), square markers) and outer (\(K_{II}^{+}\), cross markers) fracture surfaces on the normalized **(a)**, **(b)** crack height \(\delta\), **(c)**, **(d)** cylinder height \(1/\gamma=h/(a_{1}-a_{0})\). Results are shown for constant loading (unbroken lines) and linear loading (dashed lines), in the case of an **(a)**, **(c)** edge crack, **(b)**, **(d)** symmetric crack (see Table. 1b). All remaining parameters for the cylinder and crack geometry are taken as stated in Table. 1a.
height indicate that the stress intensity factor is largest when the crack is located near the bottom of the cylinder, with the SIFs falling rapidly as \(d\) increases. Modifying the cylinder height \(h\) has the opposite effect, with \(K_{III}^{+/-}\) becoming negative for a sufficiently small cylinder, but increasing rapidly as the cylinder size increases.
Interestingly, numerical investigations by the authors demonstrated that the influence of the frequency of cylinder rotation, \(\omega\), on the stress intensity factor is almost negligible. This is likely due to the small cylinder size being considered, which alongside the high wave speed in the medium leads to the term \(\Psi\) being negligible in (8), thereby eliminating the rotation-induced effects from the formulation. It follows that the impact of the rotational frequency only becomes significant when considering larger structures.
It is clear from the above results that the crack location has a significant impact on the stress intensity factor experienced on the fracture walls. This is not unexpected, but does mean that the crack location must be determined in order to effectively predict the risk of fast fracture within the cylinder. The issue of determining the crack location is considered in more detail in Sect. 5.3.
#### 5.2.2 Potential applications to fatigue crack risk management
The presented formulation can easily be utilized to produce estimates regarding fatigue cracking over multiple loading cycles. This can be achieved by obtaining quasi-static predictions of the maximum and minimum stress intensity factor experienced during each individual loading cycle, and utilizing Paris' law to predict fracture growth during/between cycles (see for example [26] and references therein).
This can be utilized in a number of ways. For example, it could be used during the design of cylindrical components to predict an upper bound on the number of loading cycles for a fatigue crack to reach a certain length. This can then be used to inform the maximum permissible time between component inspections.
Alternatively, if a fracture was detected, the presented formulation could be utilized to provide an upper bound on the remaining 'safe' operation time, or a permissible timeline for repair work to take place. Such applications however require the ability to detect the crack within the cylinder, even though it is not visible.
### On detecting a crack within a rotating cylinder
One of the most important considerations for cracks within a cylinder is the ability to detect them. Being able to determine their location is also important, as this allows local fixes (repairs or strengthening) to be applied. We therefore consider tests which can be applied to a rotating cylinder, in order to determine the crack location.
Let us consider the effect of the crack on the displacement experienced at the top of the cylinder (\(z=1\)), again taking the case of a small steel cylinder. This point is chosen as it is the location where measurements are most easily taken. In order to account for different crack sizes we consider two different fracture geometries, which are outlined in Table. 2. All other parameters are taken in line with Table. 1.
The ratio of the normalized displacement experienced at this point, against that experienced in the absence of a crack, \(w^{*}\), is provided in Fig. 4 for a variety of crack locations and torsion loadings (for solution \(w^{*}\), see e.g. [31], or this can also be obtained by inserting \(\chi\equiv 0\) into the presented formulation). It can be seen that there is a clear quantitative effect of the crack on the displacement, with the crack leading to a small increase in displacement for fractures near to the upper surface, but a far larger increase in displacement when the crack is located in the bottom half of the cylinder. Although the crack location
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Configuration & \(c_{0}\) [m] & \(c_{1}\) [m] \\ \hline Smaller crack & \(6\times 10^{-3}\) & \(7\times 10^{-3}\) \\ \hline Larger crack & \(5.9\times 10^{-3}\) & \(7.1\times 10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The different sizes of crack, defined in terms of the inner and outer radius (\(c_{0}\) and \(c_{1}\) respectively), utilized in simulations in Sect. 5.3. Note that both cracks are centrally located within the cylinder, while the ‘smaller crack’ corresponds to the centrally located case considered in Sect. 5.2.
impacts the magnitude of the displacement, the outline of the crack is not directly visible within the observed displacement, even when it is near to the surface (\(\delta=0.9\)).
We therefore seek to use the quantitative impact of the crack on the displacement to determine its location. To this end, we evaluate the relative difference
\[f(r,\delta)=\frac{w(r,1;\delta)-w^{*}(r,1)}{w^{*}(r,1)}, \tag{22}\]
Figure 4: The ratio of the normalized displacement experienced at the top of a cylinder (\(z=1\)) with a crack, \(w(r,1)\), against that experienced in the absence of a crack, \(w^{*}(r,1)\). The crack is located at height \(\delta\) and radial position between \(2.75<r<3.75\). We show results for: **(a)**, **(c)** fixed linear loading on the inside surface \(P_{0}(z)\), and variable crack height \(\delta\), **(b)**, **(d)** fixed \(\delta=2/3\), and variable loading on the inside surface, for the case of a **(a)**, **(b)** smaller crack, **(c)**, **(d)** larger crack (see Table. 2). All remaining parameters are fixed as outlined in Table. 1 for the symmetrical case.
where \(w\) is the normalized displacement for a given crack height \(\delta\), while \(w^{*}\) is the displacement in the absence of a crack.
The value of \(f(\rho_{1},\delta)\), where \(\rho_{1}\) is the outer edge of the cylinder, is provided in Fig. 5 for various loadings. It can be seen that in the case of the larger crack there is an intersection of the curves for different loadings at approximately \(\delta=0.6\), however there is no such intersection for the smaller crack. Furthermore, as the fracture approaches the surface (\(\delta\to 1\)), the curves for different loadings almost coincide, and remain closely packed together. These results, and their clear sensitivity to the crack size, mean that it would be difficult to use this measurement to determine the vertical position \(\delta\) of a crack, and so we instead seek a clearer indicator.
In simulations it was found that, while the displacement experienced within the cylinder with a crack depends on \(r\), the relative difference is almost constant with respect to \(r\). Numerical investigations of \(f(r,\delta)\) by the authors found that it varies by less than 1% over the problem domain for all fixed values of \(\delta\) considered, irrespective of the loading applied.
Consequently, we can average the relative difference \(f(r,\delta)\) over \(r\) and consider the difference solely in terms of \(\delta\), yielding the difference function
\[\tilde{f}(\delta)=\frac{1}{\rho_{1}-\rho_{0}}\int_{\rho_{0}}^{\rho_{1}}f(r, \delta)\,dr, \tag{23}\]
where \(\rho_{0}\), \(\rho_{1}\) are the normalized inner and outer edges of the cylinder respectively. This now provides an averaged measure of the cracks' influence on the normalized displacement.
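In practice \(\tilde{f}(\delta)\) would be evaluated from displacements sampled on a radial grid; a minimal sketch of (22)-(23), with hypothetical arrays for the measured and crack-free displacements, is given below.

```python
import numpy as np

# Sketch of (22)-(23): relative difference between the measured top-surface
# displacement and the crack-free solution, averaged over the radial coordinate.
# w_top and w_star_top are assumed to be sampled on the same radial grid r.
def averaged_difference(r, w_top, w_star_top):
    f = (w_top - w_star_top) / w_star_top                    # eq. (22) at z = 1
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))   # trapezoid rule
    return integral / (r[-1] - r[0])                         # eq. (23)
```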
The results for \(\tilde{f}(\delta)\) under a variety of applied loadings are provided in Fig. 6, for both the smaller and larger crack. It can be seen that the presence of the crack increases the displacement observed at the top of the cylinder. This effect decreases as the crack approaches the top of the cylinder, as there is less impacted material between the crack and the measured region. Over most of the region considered, the crack's relative influence on the normalized displacement is almost linear, however we would not expect this to be the case as the crack approaches the surface (\(\delta\to 1\)) in the considered formulation. The case of very small (\(0<\delta\ll 1\)) and large (\(0<1-\delta\ll 1\)) values of \(\delta\) is left for future research.
One crucial aspect of \(\tilde{f}(\delta)\) observed in Fig. 6 is that, unlike the relative difference without averaging \(f(r,\delta)\) (22) (see Fig. 5), there is no intersection between the curves representing different loading regimes. This result provides a test which can be performed in order to detect a crack within the cylinder and determine its approximate vertical location. The cylinder is placed under a variety of loading regimes (at
Figure 5: The relative influence of the crack on the normalized displacement \(f(\rho_{1},\delta)\) (22) experienced at the top of the cylinder (\(z=1\)) on the outer edge (\(r=\rho_{1}\)), as a function of the crack location (normalized height \(\delta\)) for various loading regimes. Figures show the case of a **(a)**, **(b)** smaller crack, **(c)**, **(d)** larger crack (see Table. 2). All remaining parameters are fixed as outlined in Table. 1 for the symmetrical case.
least two, e.g. constant and linear), and the displacement at the top of the cylinder is measured in each case. The results are used to evaluate \(\tilde{f}\) (23), as the average over \(r\) of the relative difference between the measured displacement and the analytical solution in the absence of a crack. The obtained \(\tilde{f}\) can then be compared with Fig. 6 to determine whether a crack is present, and find the value of \(\delta\) which best fits the results.
To make this test effective, an approach to determining (or controlling for) the size of the crack is needed. This is not only because it influences the value of \(\tilde{f}\) observed in Fig. 6, but because the size of the crack, given in this formulation by the normalized inner and outer radii \(\alpha\) and \(\beta\) respectively, was also shown to play a key role in the crack stress intensity factor (see Fig. 2). It is possible that this could be achieved by utilizing the analysis of \(\tilde{f}\) (see Fig. 6), with the crack size being determined from the location of the intersection point for \(f(r,\delta)\) (see Fig. 5), or directly from the displacement distribution (see Fig. 4). Note as well that the presented results are for a steel cylinder, and accounting for material parameters will again add an additional layer of complexity.
Therefore, we can state that developing such a full test for the presence, location, and size of a crack within a cylinder requires a more detailed analysis. However, the presented results strongly indicate that one can be developed from the presented formulation, at least for a fixed cylinder geometry or material parameters.
## 6 Concluding remarks
The problem of a rotating hollow cylinder containing a crack was considered. The form of the displacement and traction were obtained, and an iterative scheme developed to solve the resulting system.
Numerical investigations for the case of a small steel cylinder were undertaken. It was demonstrated that:
* the presence of the crack has an influence on the displacement experienced by the cylinder, which is primarily dependent on the location of the crack;
* the stress intensity factor experienced on the crack surfaces is greater when the crack is located near to the edge of the cylinder (as opposed to being centrally located);
* the stress intensity factor was typically positively valued, and decreased with increasing crack size;
Figure 6: The relative influence of the cracks’ presence on the normalized displacement, \(\tilde{f}(\delta)\) (23), as a function of the location (normalized height \(\delta\)) of the crack within the cylinder. Figures show the case of a **(a)**, **(b)** smaller crack, **(c)**, **(d)** larger crack (see Table. 2). All remaining parameters are fixed as outlined in Table. 1 for the symmetrical case.
* rotation only played a negligible role for the small cylinder considered here, but may play a more prominent role for larger structures.
The presented formulation can be used to predict whether crack extension will occur once the fracture location is known. The quasi-static formulation can be utilized to determine whether fracture initiation is an immediate concern, or it can be coupled with fatigue models (e.g. Paris' law) to estimate the extent of fatigue cracking over time. These capabilities provide useful insight for risk management, for example when determining the number of cycles that can be 'safely' performed on a cracked cylinder, or the number of cycles that can be performed between inspections.
Whether the presented formulation can be used to detect and locate cracks within a cylinder was also investigated. The initial results indicate that the presence of a crack can be inferred from examining the displacement on the surface of the cylinder, and comparing the results with the case without a crack being present (see e.g. Fig. 4). It was shown that, for a fixed crack size, examining the displacement under multiple loading configurations (at least two) and considering the average of the relative difference compared to the case without a crack can be used to determine crack height \(\delta\) (see Fig. 6).
This investigation was however conducted for a single cylinder geometry, and further analysis is required to produce a general procedure. Most notably, one which can determine both the crack location (height and radial position), and size, which are necessary to predict the likelihood of fracture extension over time (see Sect. 5.2). One crucial advantage of such a test for crack location is that it can be conducted while the cylinder is undergoing rotation and loading (provided the prescribed loading can be induced), meaning that it could be employed while the cylinder is still 'in use', rather than needing to stop and remove the component. Locating the crack position also allows for targeted repairs (where possible) to be planned and performed with minimal disruption.
When developing such a test for locating a fracture within the cylinder, it may also be useful to simplify the problem by replacing the crack by a soft interface (see e.g. [17, 37]) across the entire cross-section. Comparison of the results with the presented formulation could be used to ensure accuracy of the analysis. This approximated problem may be used to develop a simplified test for locating the crack, but can also be utilized to examine additional important effects, such as the case of an imperfect interface (impact of force conservation)/ the edge effect [19]. Finally, the presence of a damage zone within the neighbourhood of the existing fracture can also be incorporated [16, 18], to better model the impact of the crack on the cylinder behaviour.
## Acknowledgements
The research is supported by European project funded by Horizon 2020 Framework Programme for Research and Innovation (2014-2020) (H2020-MSCA-RISE-2020) Grant Agreement number 101008140 EffectFact "Effective Factorisation techniques for matrix-functions: Developing theory, numerical methods and impactful applications".
|
2310.20650 | Addressing Limitations of State-Aware Imitation Learning for Autonomous
Driving | Conditional Imitation learning is a common and effective approach to train
autonomous driving agents. However, two issues limit the full potential of this
approach: (i) the inertia problem, a special case of causal confusion where the
agent mistakenly correlates low speed with no acceleration, and (ii) low
correlation between offline and online performance due to the accumulation of
small errors that brings the agent in a previously unseen state. Both issues
are critical for state-aware models, yet informing the driving agent of its
internal state as well as the state of the environment is of crucial
importance. In this paper we propose a multi-task learning agent based on a
multi-stage vision transformer with state token propagation. We feed the state
of the vehicle along with the representation of the environment as a special
token of the transformer and propagate it throughout the network. This allows
us to tackle the aforementioned issues from different angles: guiding the
driving policy with learned stop/go information, performing data augmentation
directly on the state of the vehicle and visually explaining the model's
decisions. We report a drastic decrease in inertia and a high correlation
between offline and online metrics. | Luca Cultrera, Federico Becattini, Lorenzo Seidenari, Pietro Pala, Alberto Del Bimbo | 2023-10-31T17:21:26Z | http://arxiv.org/abs/2310.20650v1 | # Addressing Limitations of State-Aware Imitation Learning for Autonomous Driving
###### Abstract
Conditional Imitation learning is a common and effective approach to train autonomous driving agents. However, two issues limit the full potential of this approach: (i) the inertia problem, a special case of causal confusion where the agent mistakenly correlates low speed with no acceleration, and (ii) low correlation between offline and online performance due to the accumulation of small errors that brings the agent in a previously unseen state. Both issues are critical for state-aware models, yet informing the driving agent of its internal state as well as the state of the environment is of crucial importance. In this paper we propose a multi-task learning agent based on a multi-stage vision transformer with state token propagation. We feed the state of the vehicle along with the representation of the environment as a special token of the transformer and propagate it throughout the network. This allows us to tackle the aforementioned issues from different angles: guiding the driving policy with learned stop/go information, performing data augmentation directly on the state of the vehicle and visually explaining the model's decisions. We report a drastic decrease in inertia and a high correlation between offline and online metrics.
## I Introduction
Autonomous driving is becoming a reality. To make this possible, several problems have to be solved, such as perception [1], planning [2], and forecasting [3]. A recent trend that has obtained remarkable results is to directly train driving agents from raw observations with Imitation Learning (IL) [4, 5], i.e. learning to mimic demonstrations from expert human drivers. In this way, the autonomous driving problem is tackled holistically, without having to rely on different heterogeneous modules.
Imitation learning, however, has some limitations. Since the driving capabilities are learned by behavioral cloning, IL models usually lack explicit causal understanding. Rather than rules, relations between patterns are learned, thus making the agent vulnerable to spurious correlations in the data. This phenomenon is known in the literature as _causal confusion_[6]. In particular, when training IL agents for automotive, there is evidence of a special case of causal confusion referred to as the _inertia problem_[5, 7, 8]. The inertia problem stems from a spurious correlation between low speed and no acceleration in the training data, making the driving agent likely to get stuck in a stationary state. As a consequence, when a state-aware agent halts (e.g. at a traffic light or in a traffic jam), it may not move again when it should. For state-awareness here we refer to any source of information that can inform the agent about its halted state, such as a state variable, either explicitly modeled or implicitly inferred, that encodes velocity.
A second issue that limits the applicability of IL is the gap between offline and online driving capabilities [9, 10]. Codevilla _et al._[10] showed that there is a low correlation between offline evaluation metrics (e.g. frame-wise Mean Squared Error in steer angle prediction) and the success rate in online driving benchmarks. In online driving, the output of the model influences future inputs, violating the i.i.d. assumption made by the learning framework [11]. Accumulation of small errors thus brings the vehicle into new states, never observed at training time [12]. Similarly to the inertia problem, this issue manifests itself the most in state-aware models: the more variables are observed by the model, such as ego-velocity or previous driving commands, the sparser the coverage of the training data gets, making it more likely to end up in under-represented configurations at driving time.
To summarize, IL agents suffer from ill-distributed training data that presents spurious correlations and domain shift compared to the test set. These issues make it particularly hard to train state-aware agents: using multiple input sources increases the chances of discovering unwanted correlations in the data or of observing under-represented inputs at inference time, for which the agent does not know how to act confidently [11, 12, 13]. In this paper, we address these difficulties in training state-aware IL models.
In the literature, some attempts to identify and solve these issues have been made. The inertia problem has been addressed by regularizing training through vehicle speed prediction [5], whereas [10] demonstrated the usefulness of two data augmentation approaches to improve offline and online driving capability correlation: first, augmenting the training set with lateral cameras, thus simulating a vehicle with an unusual trajectory, and second, perturbing the driving policy to record samples where the vehicle recovers from anomalous states. In this paper, we build on these ideas without collecting additional data. We propose an IL agent that propagates the state of the vehicle through the model and uses it as the core of a multi-task architecture. On the one hand, this allows us to explicitly train the model to avoid issues such as the inertia problem. On the other hand, this allows us to perform data augmentation on all the observed data, reducing the distribution shift between training samples and what the agent may see at driving time.
Our IL agent is designed as a hierarchical transformer model with state token propagation. The vehicle's state is encoded in a special token of a vision transformer [14] and is enriched with new information at each stage of the architecture. At first, we predict whether the vehicle must stop or go, directly tackling _inertia_. This information is passed to the next stage which predicts the driving commands (namely steer, throttle, and brake). Finally, the model leverages a differentiable Command Coherency Module (CCM), encouraging the model to correctly bring the vehicle to the desired future state by generating non-conflicting controls. The CCM is used only at training time and acts as a regularizer. Since our architecture is based on a transformer encoder [15], it heavily relies on attention. We leverage such attention to gain insights about what the model is focusing on to make its decisions (e.g., the vehicle's state or visual patterns), following the recent trend of designing explainable driving models [16, 17, 18].
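To make the state-token idea concrete, the snippet below is a deliberately minimal sketch (not the authors' architecture, which is multi-stage with token propagation across several transformer blocks and a CCM): the vehicle state is embedded as one extra token, processed together with visual tokens by a standard transformer encoder, and read out by a stop/go head whose output conditions a control head.

```python
import torch
import torch.nn as nn

# Minimal sketch of the state-token idea (illustrative only, not the paper's model).
class StateTokenDriver(nn.Module):
    def __init__(self, dim=256, n_heads=8, n_layers=4, state_dim=4):
        super().__init__()
        self.state_embed = nn.Linear(state_dim, dim)          # e.g. speed, steer, throttle, brake
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.stop_go_head = nn.Linear(dim, 2)                 # stage 1: stop / go
        self.control_head = nn.Linear(dim + 2, 3)             # stage 2: steer, throttle, brake

    def forward(self, visual_tokens, state):
        # visual_tokens: (B, N, dim) from any image backbone; state: (B, state_dim)
        s = self.state_embed(state).unsqueeze(1)              # (B, 1, dim) state token
        x = self.encoder(torch.cat([s, visual_tokens], dim=1))
        s_out = x[:, 0]                                       # propagated state token
        stop_go = self.stop_go_head(s_out)
        controls = self.control_head(torch.cat([s_out, stop_go.softmax(-1)], dim=-1))
        return stop_go, controls
```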
Interestingly, the ability to explain the model's decisions provides us with a better understanding of the inertia problem. Inertia makes an IL model halt and stay still whenever the speed of the vehicle is close to zero. However, it is hard to discriminate this phenomenon from other kinds of failures that make the vehicle stop indefinitely. For instance, if part of the environment is mistakenly interpreted as a crossing pedestrian or a red traffic light, the vehicle will wait indefinitely for the state of the surrounding environment to change. Whenever this happens, a different solution must be sought, one that strengthens the visual backbone of the model rather than its causal inference capabilities. By combining the model's attention with a retrieval-based explainability method, we are able to highlight these differences and isolate instances of inertia from backbone failures.
The main contributions of our paper are the following:
* We propose a state-aware conditional imitation learning model for autonomous driving. The model is multi-stage and exploits state propagation through different transformer layers breaking down the generation of driving commands into coarse to fine tasks.
* We specifically address issues in state-aware imitation learning such as the inertia problem and the offline/online performance gap. Inertia is drastically limited by state token propagation and multi-stage learning, whereas the correlation between online success rate and offline metrics is enforced via data augmentation on the vehicle's state.
* We propose a combination of the transformer's self attention with an ex-post semantic explainability method that we use for inspecting model failures. This points out interesting "hallucinations" of the visual backbone that cause behaviors mistakenly confused with inertia.
## II Related Works
Imitation Learning (IL) is based on the idea that, to learn a complex task, a model can observe the demonstrations of an expert performing it [19, 20]. This paradigm has been successfully applied to autonomous driving. One of the first approaches based on IL predicted steering commands for lane following and obstacle-avoiding tasks [21]. The task soon evolved into the so-called Conditional Imitation Learning (CIL), in which predictions are conditioned on high-level commands such as _turn_ or _go straight_. Several works followed this approach [4, 17, 22, 23, 24], also combining it with reinforcement learning [25, 26, 27].
To obtain better driving capabilities, several sensors and additional synthetic data are often used [4, 28, 29, 30, 31]. Prior work makes extensive use of environmental information in the form of semantic segmentations [32, 33, 26, 34], top-view maps [30], or both [35, 36, 37, 38]. Similarly, other methods leverage depth information [38, 23], LiDAR data [24, 32] or cues such as traffic light states [33, 26], lane position [26], and intersection presence [33, 26]. Such data has also been used in the form of _affordances_, low-dimensional representations of environmental attributes [22, 26, 39]. Differently from all the aforementioned methods, we rely on a purely RGB-based approach. Whereas these methods have access to environmental data, either as inputs or as additional sources of supervision, we assume access only to the RGB stream and the state of the vehicle (i.e., current speed, steer, acceleration, and brake), which is a direct consequence of the driving policy. A similar assumption is made in recent works such as [40, 5, 41].
Training a state-aware imitation learning agent hides some challenges [13]. Despite its simplicity and effectiveness, it breaks the i.i.d assumption made by any statistical supervised learning framework since current decisions influence future inputs [11, 12]. The main difficulty that needs to be addressed is trying to keep the model in a state close to what has been observed at training time [13]. When this does not happen, online errors tend to accumulate over time, generating less accurate behaviors [11, 42]. The effect is to have online capabilities that do not correlate with offline error metrics measured on a validation set [10], which makes the agent difficult to train. A solution to bridge this gap is to perform data augmentation. Codevilla _et al._[10] showed that collecting data from three different cameras while adding noise to the driving policy helps in recovering from unexpected scenarios. This however requires collecting hours of additional data. Image-level data augmentations such as changes in contrast, brightness and tone also have beneficial effects, especially for generalizing to similar scenarios with different conditions (e.g. weather) [5, 10]. Nonetheless, augmenting the pixel space has a limited effect on state-aware models, where predicted quantities are provided as input. Differently from prior work, we perform augmentation on the vehicle's state, injecting it into the model as a special token of a transformer [15]. Augmenting the state leads to a better coverage of the state space during training.
The presence of the state token allows us to address another well-known issue with imitation learning in automotive: the inertia problem [5, 7]. This has been addressed in literature by predicting the current speed of the vehicle [5] or via causal imitative learning [8], also based on speed prediction. A memory-based approach for retrieving previously observed scenarios has also been exploited recently [43]. The common speed-prediction solution proposed in [5] suffers from a high collision rate, likely due to overcompensation of inertia. Instead of making the network predict its current velocity, we leverage a multi-stage architecture, where a stop/go loss based on the actual causes for stopping (presence of pedestrians, traffic lights, other vehicles) conditions the command generation. In this way, we inform the model about external elements
that should be taken into account while driving. We find that this solution almost entirely eradicates inertia.
## III Overview
Imitation Learning (IL) trains an agent by observing a set of expert demonstrations to learn a policy [20]. In the simplest scenario, IL is a direct mapping from observations to actions [19]. In automotive, the expert is a driver, the policy is _"safe driving"_ and the demonstrations are a set of _(frame, driving-controls)_ pairs. In this paper, we address Conditional Imitation Learning (CIL), a variant of imitation learning where the policy must reflect a given high-level command, such as _turn right_ or _follow lane_. As in prior work (e.g. [4, 22, 17]), we divide our architecture into multiple branches, with separate heads learning command-specific policies. However, differently from prior work, we structure our model as a hierarchy of stages, each of which is dedicated to addressing different aspects of driving, as depicted in Fig. 1.
The proposed model is state-aware, in the sense that it takes as input the speed and the steer, acceleration and brake values predicted at the previous timestep. In principle, informing the model of the current state of the vehicle could ensure temporal smoothness and coherency in the driving policy (i.e., the predicted driving controls). In practice, this makes the model vulnerable to spurious correlations in the data, bringing out the _inertia problem_. To address this issue, we propose a multi-stage transformer model with state token propagation. We feed the vehicle state to the model as a special token of a vision transformer (ViT) [14]. Operatively speaking, the state token fulfills the same role as the _CLS_ token in standard ViTs. However, by enclosing vehicle measurements we can inject information into the model and let it correlate to relevant spatial features via self-attention. After each layer, the state token is enriched with spatial information and is decoded into coarse-to-fine driving commands, depending on the stage. The coarser of such commands is a decision on whether the vehicle should stop or go, thus explicitly addressing inertia. Injecting the state token into the model has the additional benefit of enabling data augmentation on the state values itself, addressing what is arguably the biggest limitation of imitation learning, i.e., the inability to perform well in previously unseen states [9] that is also responsible for the gap of accuracy between offline and online driving. We also introduce a regularizer that ensures coherency in the generated driving commands. This is different from similar solutions adopted in prior works, where speed is predicted to reduce inertia [5], but here we use it to reduce online-offline evaluation gap.
## IV Model
### _State Token Propagation_
Our model exploits a multi-stage transformer encoder architecture. The hierarchy of layers reflects a coarse-to-fine learning where each stage generates a different output. The rationale is that the \(i\)-th stage can inform stage \(i+1\) by taking the output of the encoder corresponding to the state token and propagating it as the new state token. To enrich the token with increasingly complex semantics, at each stage we decode it into a different output with a Feed Forward Network (FFN), specific for separate tasks.
We define our multi-task hierarchy as follows. The first stage predicts whether the agent should halt the car or keep it going. This is specifically thought to address the inertia problem. This stage does not produce any driving control and is expected to focus on traffic lights and other agents. The second stage generates the actual driving commands: throttle, brake, and steer. This second stage of the model should instead learn and understand road topology and ego-motion patterns. Thanks to the propagated state token, the generation of the driving commands is conditioned on the stop/go decision of the previous stage. The third and final stage is the command coherency module that acts as a regularizer, thus we use it only at training time. The initial state token is the embedding of steer, throttle, brake and speed at time \(t-1\).
To cope with the non-uniform distribution of vehicle states in the train set (see Sec. VII-A), we introduce a data augmentation strategy based on noise injection to perturb the state token. We inject a zero mean Gaussian noise with \(\sigma=0.1\) for driving controls, since they are all in \([0,1]\). For the speed, that takes values in \([0,10]\), we use \(\sigma=1\) instead.
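To make the augmentation step concrete, the sketch below perturbs a batch of vehicle states with the stated noise levels. The state ordering (steer, throttle, brake, speed), all tensor names, and the clipping of noisy values back to their valid ranges are our assumptions, not details taken from the original implementation.

```python
import torch

def augment_state(state, control_sigma=0.1, speed_sigma=1.0):
    """Perturb a batch of vehicle states with zero-mean Gaussian noise.

    Each row of `state` is assumed to hold (steer, throttle, brake, speed);
    driving controls live in [0, 1] and speed in [0, 10].
    """
    noise = torch.zeros_like(state)
    noise[:, :3] = torch.randn_like(state[:, :3]) * control_sigma  # steer, throttle, brake
    noise[:, 3] = torch.randn_like(state[:, 3]) * speed_sigma      # speed
    augmented = state + noise
    # clipping back to the valid ranges is our addition, not stated in the paper
    augmented[:, :3] = augmented[:, :3].clamp(0.0, 1.0)
    augmented[:, 3] = augmented[:, 3].clamp(0.0, 10.0)
    return augmented

states = torch.tensor([[0.10, 0.80, 0.0, 5.2],
                       [0.00, 0.00, 1.0, 0.0]])
print(augment_state(states))
```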
Fig. 1: A convolutional backbone extracts a feature map, which is fed to a multi-stage transformer architecture. The first stage (E1) takes the feature and a state token, which is propagated across the network. The output of E1 corresponding to the state token is decoded into a stop/go prediction with a Feed Forward Network (FFN). The second stage (E2) uses the propagated state to predict driving commands. Finally, the Command Coherency Module is used as a loss regularizer to ensure consistency between driving commands.
### _Pixel-State Attention_
Every stage of the model performs token-to-token attention, thanks to the transformer's self-attention. The advantages are twofold: on the one hand, prior work has shown that explicitly modeling attention improves driving capabilities [17, 40]; on the other hand, it provides a built-in interpretability mechanism that can be used to visually explain decisions.
In our model, the attention involves not only visual patches as in [17, 40] but also the state of the vehicle. First, the output of the convolutional backbone, i.e. a feature map \(f\) of size \(H_{f}\times W_{f}\times C\), is flattened into \(N=H_{f}\ \cdot\ W_{f}\) separate \(C\)-dimensional tokens, corresponding to \(1\times 1\) spatial patches in the feature map. Patches are then linearly projected into a \(D\)-dimensional space to adapt them to the input size of the transformer. The four scalar quantities that compose the state of the vehicle (_speed_, _steer_, _acceleration_ and _brake_) are lifted to a dimension of \(D/4\) and concatenated into the \(D\)-dimensional state token, which we refer to as \(x_{state}\). As in [14], a learnable positional embedding \(E_{pos}\) is added to all the \(N+1\) tokens. To summarize, the set of \(N+1\) tokens fed to the encoder is composed as follows:
\[z=[x_{state};f^{1}P_{v};\ldots;f^{N}P_{v}]+E_{pos} \tag{1}\]
where \(P_{v}\in\mathbb{R}^{C\times D}\) is the feature projection matrix, \(E_{pos}\in\mathbb{R}^{(N+1)\times D}\) and \(x_{state}\in\mathbb{R}^{1\times D}\).
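A minimal sketch of how the \(N+1\) tokens of Eq. (1) could be assembled is given below, with \(D=64\) and \(N=72\) taken from the architecture description in Sec. IV-D. Whether the four state scalars share a single lifting layer, as well as all module names, are our assumptions.

```python
import torch
import torch.nn as nn

class TokenBuilder(nn.Module):
    """Builds the N+1 input tokens of Eq. (1): one D-dimensional state token plus
    N projected feature-map patches, with a learnable positional embedding E_pos."""

    def __init__(self, c_feat=64, d_model=64, n_patches=72):
        super().__init__()
        self.proj = nn.Linear(c_feat, d_model, bias=False)   # plays the role of P_v
        self.lift = nn.Linear(1, d_model // 4)                # lifts each scalar to D/4
        self.pos_emb = nn.Parameter(torch.zeros(1, n_patches + 1, d_model))  # E_pos

    def forward(self, feat_map, state):
        # feat_map: (B, C, H_f, W_f); state: (B, 4) = (speed, steer, acceleration, brake)
        patches = feat_map.flatten(2).transpose(1, 2)         # (B, N, C), 1x1 spatial patches
        visual_tokens = self.proj(patches)                    # (B, N, D)
        state_token = self.lift(state.unsqueeze(-1)).flatten(1).unsqueeze(1)  # (B, 1, D)
        return torch.cat([state_token, visual_tokens], dim=1) + self.pos_emb  # (B, N+1, D)

tokens = TokenBuilder()(torch.randn(2, 64, 4, 18), torch.rand(2, 4))
print(tokens.shape)  # torch.Size([2, 73, 64])
```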
The self-attention carried out in every layer of the transformer is thus a pixel-state attention, where every pixel of the feature map can attend to each other plus the state token. This allows us to inspect at each stage which information is privileged by the model: when the state token carries relevant information from the previous stage (e.g., if the vehicle must stop), the model will give it high importance; vice-versa, if the image carries meaningful cues (e.g., an intersection) the model will focus on the interested pixels.
### _Command Coherency Module_
A possible cause for low correlation between off-line error and on-line driving performance [10] can be found in throttle, brake and steer being predicted independently. What is missing is the optimization of a common goal that brings the vehicle from one initial state to a desired one, considering all three quantities. Furthermore, individual biases may interfere with the quality of the overall policy.
To generate the appropriate driving behavior, the predicted commands must be compatible with each other. To this end, we introduce the Command Coherency Module (CCM). The CCM takes as input steer, throttle, brake and speed at time \(t\) and predicts the future speed at time \(t+1\). We first train the command coherency module on training measurements to learn how such quantities affect the speed of the vehicle. Once the module is trained, we freeze it and use it as a regularizer while training the driving agent. To implement the CCM, we use a lightweight multi-layer perceptron with three layers and ELU activations.
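A possible realization of the CCM is sketched below, mainly to illustrate its use as a frozen, training-time regularizer; the hidden width is our assumption since it is not specified.

```python
import torch
import torch.nn as nn

class CommandCoherencyModule(nn.Module):
    """Maps (steer, throttle, brake, speed) at time t to the predicted speed at t+1."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, commands_and_speed):
        return self.net(commands_and_speed)

ccm = CommandCoherencyModule()
# after pretraining on logged measurements, the module is frozen:
for p in ccm.parameters():
    p.requires_grad_(False)
print(ccm(torch.tensor([[0.1, 0.7, 0.0, 4.5]])))  # predicted speed at t+1
```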
Our CCM shares some traits with the speed prediction module of [5]. Here, the authors feed a frame-based estimate of the speed to the model. Instead of feeding the predicted speed as an additional input, we optimize it to regularize the outputs and conciliate the driving commands.
### _Architecture and Training_
The proposed model is composed of a shared convolutional backbone plus four parallel branches, one for each high-level command. The shared backbone consists of 5 convolutional layers with ELU activations. The first three layers have respectively 24, 36, and 48 \(5\times 5\) kernels with stride 2, followed by two other layers with 64 \(3\times 3\) filters with stride 1. Input images are resized to a \(200\times 88\) px, yielding a \(4\times 18\times 64\) feature map. After flattening we obtain \(N=72\) visual tokens. Each branch is a multi-stage transformer encoder with input size \(D=64\). We use \(3\) heads with a depth of \(4\) for each encoder stage.
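The stated backbone dimensions can be reproduced with the sketch below; the RGB input, the absence of padding, and the (height, width) ordering are our assumptions, chosen so that a 200×88 px image indeed yields a 4×18×64 feature map.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Shared convolutional backbone: three 5x5/stride-2 layers (24, 36, 48 filters)
    followed by two 3x3/stride-1 layers (64 filters), all with ELU activations."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, 3, stride=1), nn.ELU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ELU(),
        )

    def forward(self, x):
        return self.net(x)

feat = Backbone()(torch.randn(1, 3, 88, 200))  # input resized to 200x88 px
print(feat.shape)  # torch.Size([1, 64, 4, 18]) -> N = 4*18 = 72 visual tokens
```

Each of the four command-specific branches then processes the resulting tokens with transformer encoder stages of input size \(D=64\), 3 heads, and depth 4, as stated above.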
The first stage of the transformer takes the state token \(x_{state}\) along with the \(N\) visual tokens. The stage outputs \(N+1\) transformed tokens, among which the enriched state token is used to predict whether the vehicle should stop or go. To optimize the stop/go prediction we use an L1 loss:
\[\mathcal{L}_{SG}=\frac{|S_{TL}-\bar{S}_{TL}|+|S_{P}-\bar{S}_{P}|+|S_{o}-\bar{S }_{o}|}{3} \tag{2}\]
where \(S_{TL}\), \(S_{P}\), \(S_{o}\) represent intention signals in \([0,1]\)[5], respectively for traffic light stop, pedestrian stop, and stop due to other vehicles.
The second stage is in charge of generating driving commands. Similarly to the first stage, the propagated state token taken from the output of the stage is fed to a feed-forward regressor to predict steer, throttle and brake. We use an L1 loss for driving command prediction:
\[\mathcal{L}_{c}=|\alpha(s-\bar{s})|+|\beta(t-\bar{t})|+|\gamma(b-\bar{b})| \tag{3}\]
where \(s\in[0,1]\), \(t\in[0,1]\) and \(b\in[0,1]\) are respectively the predicted steer, throttle and brake values, \(\bar{s}\), \(\bar{t}\) and \(\bar{b}\) the corresponding ground truth values and \(\alpha\), \(\beta\), and \(\gamma\) are weights with values \(0.5,0.45\), and \(0.05\), as in [23]. For the Command Coherency Module, we also use an L1 loss. The CCM loss \(\mathcal{L}_{CCM}\), and the stop/go loss, denoted as \(\mathcal{L}_{SG}\), contribute to the total loss according to: \(\mathcal{L}_{Total}=\lambda\mathcal{L}_{c}+\kappa\mathcal{L}_{CCM}+\tau \mathcal{L}_{SG}\) where \(\lambda=0.8\), \(\kappa=0.1\) and \(\tau=0.1\). We train our model end to end with the Adam optimizer for \(100\) epochs with a batch size of \(64\) and a learning rate of \(0.0001\).
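The loss combination can be written compactly as in the sketch below; the batch reduction and tensor shapes are our choices, while the per-term weights follow Eqs. (2)–(3) and the stated values of \(\lambda\), \(\kappa\), and \(\tau\).

```python
import torch

def weighted_l1(pred, target, weight=1.0):
    return (weight * (pred - target)).abs()

def total_loss(stop_go_pred, stop_go_gt, cmd_pred, cmd_gt, speed_pred, speed_gt,
               lam=0.8, kappa=0.1, tau=0.1):
    """L_total = lam * L_c + kappa * L_CCM + tau * L_SG."""
    # Eq. (2): mean L1 over the three intention signals (traffic light, pedestrian, vehicles)
    l_sg = weighted_l1(stop_go_pred, stop_go_gt).mean(dim=-1)
    # Eq. (3): weighted L1 over (steer, throttle, brake) with alpha, beta, gamma
    w = torch.tensor([0.5, 0.45, 0.05])
    l_c = weighted_l1(cmd_pred, cmd_gt, w).sum(dim=-1)
    # CCM term: L1 between the CCM-predicted speed and the recorded speed at t+1
    l_ccm = weighted_l1(speed_pred, speed_gt).squeeze(-1)
    return (lam * l_c + kappa * l_ccm + tau * l_sg).mean()

loss = total_loss(torch.rand(8, 3), torch.rand(8, 3),
                  torch.rand(8, 3), torch.rand(8, 3),
                  torch.rand(8, 1), torch.rand(8, 1))
print(loss)
```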
## V Model Explainability
The self-attention of the transformer stages in our model allows us to inspect the behavior of the model, thus providing explanations for the predictions. We refer to this as _built-in explainability_. Since we have dedicated each stage of the model to different tasks, we can leverage such information to gain insights about what is important for different aspects of the learned policy. We combine the built-in explainability with _ex-post explainability_, i.e. an approach specifically designed to provide an additional interpretation of the model's behavior at inference time.
### _Built-in Explainability_
In both stages of the transformer model, we can obtain visual explanations in the form of attention maps. The maps are obtained by considering the attention between the state token and the image patches. The first stage provides information on what the model looks at for stop/go prediction, whereas the second identifies relevant image regions for a correct navigation.
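A sketch of how such a heatmap could be extracted from a stage's self-attention tensor is shown below; the (heads, N+1, N+1) layout and the state token sitting at index 0 are our assumptions.

```python
import torch

def state_attention_heatmap(attn, h_f=4, w_f=18):
    """Average over heads and keep the row 'state token attends to image patches',
    reshaped onto the feature-map grid for visualization."""
    attn = attn.mean(dim=0)            # (N+1, N+1)
    state_to_patches = attn[0, 1:]     # attention from the state token to the N patches
    heatmap = state_to_patches.reshape(h_f, w_f)
    return heatmap / (heatmap.max() + 1e-8)

print(state_attention_heatmap(torch.rand(3, 73, 73)).shape)  # torch.Size([4, 18])
```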
### _Ex-Post Semantic Explainability_
Built-in explainability only explains which regions are taken into account. However, it does not provide information about how these regions are interpreted by the model. We propose an ex-post semantic explainability that combines visual attention with k-NN search of image features.
We gather offline a set of \(m\) feature maps from the training set and collect the \(D\)-dimensional descriptors of each spatial location. In this way, we obtain a total of \(M=m\cdot N\) feature vectors, \(N\) being the number of image spatial patches. We denote the \(i\)-th feature in the set as \(y_{i}\). At inference time, we extract the feature map \(f\) of the input image and, for any spatial location of interest (e.g., the most attended ones by built-in attention), we perform a k-NN search with FAISS [44] using the \(L_{2}\) distance:
\[L=\textit{k-argmin}_{i=1:M}\|f_{p}-y_{i}\|_{2} \tag{4}\]
where \(f_{p}\) is the p-th feature vector of the input image (\(p\in\{1,\dots,N\}\)).
For each k-NN we reproject the feature back onto the original image and take the semantic segmentation of the corresponding region1. This allows us to inspect what the model is hallucinating by finding the dominant semantic category in the neighbors and allows us to interpret failures.
Footnote 1: Ground truth segmentations are available in the _NoCrash_ dataset [5]
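The retrieval step of Eq. (4) can be implemented with FAISS roughly as follows; the random arrays stand in for real training descriptors and test features, and the mapping from neighbor indices back to segmentation labels is only indicated in a comment.

```python
import numpy as np
import faiss

D_feat, m, N, k = 64, 1000, 72, 10

# offline: collect M = m * N patch descriptors from the training set and index them
bank = np.random.rand(m * N, D_feat).astype("float32")  # placeholder for real features
index = faiss.IndexFlatL2(D_feat)
index.add(bank)

# inference: descriptor f_p of the most-attended spatial location of the input image
f_p = np.random.rand(1, D_feat).astype("float32")
distances, neighbor_ids = index.search(f_p, k)           # k-argmin of the L2 distance

# neighbor_ids map back to (training image, spatial location) pairs, whose ground-truth
# segmentation gives the dominant semantic category among the k neighbors
print(neighbor_ids.shape)  # (1, 10)
```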
## VI Experimental results
### _Dataset_
For training and evaluating our model, we use the _Corl2017_ [45] and _NoCrash_ [5] datasets, both based on the Carla simulator [45]. The _Corl2017_ dataset has expert demonstrations driving across the same town with a set of different weather conditions. Testing is performed by driving in different conditions: same town and weather as training; same town and new weather; new town; new town and new weather. Testing also includes 4 tasks: go straight, one turn, navigation, navigation dynamic. The navigation tasks require driving between two distant waypoints and the dynamic scenario includes other vehicles and pedestrians. _NoCrash_ has been designed to evaluate advanced driving skills such as stopping at traffic lights, avoiding collisions and driving in dense traffic environments. The evaluation involves 25 episodes on three navigation tasks, spanning from an empty town scenario to a dense traffic one. _Corl2017_ has 657,601 frames, while _NoCrash_ has 1,279,738 frames, divided into frontal and two lateral cameras (\(-30^{\circ}\),\(+30^{\circ}\)). For both datasets, the agent must comply with a given high-level command among _go straight_, _turn right_, _turn left_ and _follow lane_. As in [5] we train on a subsample of 10% of the data, comprising 10 hours out of a total 100 hours of driving. Both datasets provide metadata including the current state of the autonomous vehicle and environmental information such as driving commands, high-level commands and position.
### _Results_
We report in Tab. I the results on the _Corl2017_ dataset. For a fair analysis, we compare our method directly against other RGB-based methods. We also report methods that leverage additional sources of supervision such as depth and semantic
segmentation or additional data to train the model. The results show that our method obtains better or on-par results when compared to other RGB-based models. Per-task success rates are in the supplementary material.
Compared to _Corl2017_, where traffic light violations and collisions are not considered, the _NoCrash_ benchmark is extremely more challenging since environmental cues must be taken into account. We report results in Tab. II. Our approach outperforms RGB methods, with the only exception of CILRS [5], which performs slightly better in some empty scenarios. In the more challenging scenarios with regular and dense traffic, our approach performs better than the competitors, highlighting the capacity of the model to interpret patterns relative to traffic lights and other agents.
In Tab. III we show the percentage of traffic light violations committed by our model. These results are computed on the task _Empty_ both for _Training Conditions_ and for _New Weather & New Town_. As a baseline, we also report the results for a Single Stage model, i.e. a simplified version of our approach without the first stage. This model is state-aware as the full model, but does not exploit the stop/go loss which we designed to prevent inertia. Interestingly, our model outperforms the single stage baseline by a large margin, showing the usefulness of the stop/go loss to correctly focus on traffic lights. At the same time, we significantly lower the traffic light violations compared to CILRS, despite it obtained a higher success rate in the empty tasks for _New Weather & New Town_ (see Tab. II). We attribute this difference to two factors: (i) CILRS' strong ResNet vision backbone yields better generalization across weather conditions; (ii) higher capacity of our model to focus on traffic lights thanks to attention and stop/go loss.
Attention plays an important role in identifying relevant cues. Since we employ transformer encoders in every stage of the model, we can visually inspect self-attention for every stage. We create heatmaps by reprojecting on the image the attention value relative to the state token against every visual token (Fig. 2). The heatmaps for the two stages reflect the tasks that are addressed at the corresponding levels: stop/go decision and driving command generation. The first stage focuses on small scene details such as traffic lights or pedestrians (additional qualitative examples for the first stage of our model are shown in Fig. 9), while the second stage attention is scattered and attends regions that are important for correct navigation such as intersections and roadsides.
## VII State Token and Inertia Problem
We inspect the relative importance of the state token and the image content. The state token emitted by the first stage is used to predict a stop/go decision with a dedicated loss. This makes the token carry useful information to the second stage, which is in charge of generating the actual driving commands. In the presence of a halt cue (e.g. red traffic light) encoded in the state token propagated to the second stage, the attention scheme of the second stage focuses on the state token rather than on the image patches. When the state token indicates that the vehicle can advance, the attention focuses instead on the image patches to generate appropriate driving commands. Fig. 3 shows examples of stage 2 attention, with values of the state token and of the whole image, accumulated for each visual token.
The stop/go loss has a great impact on driving performance. In Tab. IV we show the effect of removing such loss on the _NoCrash_ benchmark. In densely trafficked environments, the success rate is almost halved when removing the loss. Similar results are obtained with the single stage baseline. We also test a model trained using a random vector as state token (w/o ST), yet keeping the stop/go loss: success rate heavily drops,
Fig. 3: Importance of state token vs image tokens. The presence of a red traffic light is detected by the first transformer stage and encoded in the state token propagated to the second stage. In this case, the second transformed stage assigns a high attention value (red bar) to the state token. When restarting at the green light and turning, the image tokens (blue bar) gain importance.
Fig. 2: Each row shows visual attention for the two stages of the model w.r.t an input image. The two stages reflect important cues for the stop/go and command generation losses respectively.
especially in crowded environments. By feeding the state of the vehicle, the agent becomes aware of its speed and momentum, e.g. indicating whether and how a turn is taking place. This is hard to deduct from a single image.
Furthermore, the use of the state token and the stop/go loss has a direct effect on addressing the inertia problem since the first stage is explicitly trained to predict movement. Tab. V shows the failure rate due to inertia. As in [5] we identify the inertia problem when an agent is still for 8 seconds before timing out. Most failures of the single-stage baseline can be traced back to inertia and these are almost completely eliminated with the multi-stage model. Surprisingly, the NewWeather-Empty configuration in the _NoCrash_ benchmark exhibits the highest failure rate attributed to inertia (as indicated in Table V). In this context, when the vehicle comes to a halt, it becomes trapped in a stationary state due to inertia. Notably, in an empty scenario, the sole discernible visual cue is the traffic light. Conversely, in Regular or Dense scenarios, the dynamic nature of the environment allows the autonomous vehicle to break free from its stationary state by observing other vehicle behaviors and the dynamic surroundings, prompting a reevaluation of its decisions. In simple terms, the distance from a vehicle ahead or the dynamic behaviors of other agents can act as a trigger to escape from stall states caused by the inertia problem.
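For reference, a simple detector for this failure mode could look as follows; the control frequency and the speed threshold used to decide that the vehicle is still are our assumptions.

```python
def is_inertia_failure(speed_log, timed_out, fps=10, still_seconds=8, eps=1e-2):
    """Flags a failed episode as an inertia failure when the vehicle was (almost)
    still for the last `still_seconds` seconds before timing out, as in [5]."""
    if not timed_out:
        return False
    window = int(still_seconds * fps)
    tail = speed_log[-window:]
    return len(tail) == window and max(tail) < eps

# an episode that times out after standing still for the last 8 s at 10 Hz
print(is_inertia_failure([5.0] * 50 + [0.0] * 80, timed_out=True))  # True
```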
A more general analysis of the causes of failure is also provided in Tab. VI. The multi-stage model considerably reduces collisions with pedestrians and vehicles, compared to the single-stage baseline. Interestingly failures due to time out (which include inertia) are almost eliminated. Tab. V and Tab. VI indicate that, despite addressing in a very effective way the inertia problem, the model still suffers from a few inertia failures. We exploit the Ex-post Semantic Explainability approach presented in Sec. V to inspect 50 episodes of the _NoCrash_ benchmark where the inertia problem still occurs at traffic lights. In 56% of the cases where the vehicle is stuck at a green light, the \(k\) most similar features to the attended one contain a red traffic light, in 18% a pedestrian crossing, and in 3% a vehicle (Tab. VII).
In Fig. 4 we show the top 10 nearest samples of the image region with the highest attention value (first transformer stage). The first two rows show failure cases: the model correctly focuses on the traffic light but although it is green, the model maps it in a region of the latent space densely populated by red traffic lights. We also show a sample of correct driving, where the vehicle accelerates as soon as the light turns green: retrieved images all depict green lights. This suggests that what may appear as inertia might instead be confused with a failure of the backbone that mistakenly "hallucinates" halt cues.
Fig. 4: Top 10 neighbors for the highest scoring attention after a traffic light turns green. We show examples of both successful crossing of the traffic light (framed in green) and failed due to red light “hallucination” (framed in red).
Fig. 5: _NoCrash_ (left) Throttle distribution; (right) Steer distribution.
### _Online/Offline Evaluation and Noise Injection_
To address the offline/online shift, exhaustive coverage at training time of possible input configurations (observed environment + internal state) could be a solution, yet it is difficult to achieve. For instance, the _NoCrash_ dataset is unbalanced and throttle and steer values are extremely biased (Fig. 5). This limits the possibility of effectively inputting the vehicle state into the model at driving time. Our data augmentation strategy that injects noise on the state token (Sec. IV-A) is intended to address this limitation. We introduce a zero-mean Gaussian noise with \(\sigma=0.1\) on the driving controls (which are in \([0,1]\)) and with \(\sigma=1\) for speed. This has the effect of letting the model see at training time different combinations of state values. In Fig. 6 we show the joint distribution of steer and throttle values with and without noise injection. Two modes for throttle can be observed corresponding to the over-represented stationary and full-throttle scenarios. At the same time, steer has a Gaussian distribution centered at zero (indicating no steer). With noise injection, we get a more uniform distribution in the low-steer interval [-0.25, 0.25] across all throttle values. Also, higher steer values obtain a more uniform coverage.
In Fig. 7 we quantify the correlation between online success rate and offline validation MAE using the sample Pearson correlation coefficient, as done in [10]. We plot the results without using data augmentation via noise injection (corr: -0.64) and with (corr: -0.92). Despite not having a huge impact on the results in training conditions, as shown in Tab. IV, in generalization conditions noise injection brings noticeable benefits. From the plots in Fig. 7 it can be seen that without using data augmentation there are huge differences for similar MAE values (e.g., 20% success rate gap with a small difference of 0.0001 in MAE).
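The correlation values quoted above can be computed as in the snippet below; the listed MAE and success-rate numbers are purely illustrative and not taken from the paper.

```python
import numpy as np

def pearson(x, y):
    """Sample Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# illustrative values only: offline validation MAE vs. online success rate per checkpoint
mae = [0.021, 0.019, 0.018, 0.017, 0.016, 0.015]
success_rate = [0.42, 0.48, 0.55, 0.61, 0.70, 0.78]
print(pearson(mae, success_rate))  # strongly negative: lower MAE, higher success rate
```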
Another component that helps the agent act like the expert demonstrator is the Command Coherency Module (see Sec. IV-C). This module acts as a regularizer, encouraging the model to generate driving commands that are not in conflict with each other, and thus preventing unwanted behaviors at driving time. The necessity of the CCM also stems from the fact that maneuvers (e.g. a right turn) could be performed in different ways (e.g. slow and narrow or fast and wide turn). Results in Tab. IV confirm the usefulness of the CCM.
Fig. 6: Distribution of Steer-Throttle without (left) and with (right) Noise Injection on _NoCrash_. We have a better coverage of the space with Noise Injection.
Fig. 7: Pearson correlation between online success rate and offline validation MAE obtained by training the model multiple times without (left) and with (right) data augmentation on the state token. Dot size corresponds to different training epochs.
### _On the Command Coherency Module_
In the proposed architecture, the CCM module is responsible for the generation of non-conflicting throttle and brake commands at training time. The impact of this module on the coherency of throttle and brake pairs and ultimately on the accuracy of driving at test time is demonstrated in the experiments reported in Table IV that highlight a severe performance drop when training is conducted with the CCM disabled. To delve deeper into the effect of the CCM module, we log pairs of throttle and brake command values during a driving session executed twice, with CCM respectively enabled and disabled. For this experiment, we use an episode of the _NoCrash_ benchmark consisting of about 3000 frames. Values of pairs (throttle, brake) are shown in the scatter plots of Fig. 8. When the CCM is disabled, it can be noticed that a relevant portion of the outputs is characterized by both throttle and brake values greater than zero, meaning that the vehicle is both trying to accelerate and decelerate at the same time. By using the CCM, almost all these non-coherent configurations disappear and only one of the two commands at a time can take values significantly greater than zero.
## VIII Conclusions and Future Works
We addressed two major issues in training a state-aware model with imitation learning. First, the inertia problem has been dealt with using a multi-stage architecture with state token propagation, where the first stage learns to inform the next one about stop/go decisions. We report extremely low rates of inertia. Second, the offline/online gap has been bridged by performing data augmentation on the state token, significantly increasing the correlation between success rate and validation error. In addition, we also exploited built-in visual attention with a retrieval-based ex-post explainability to characterize failures. We found that what may appear as inertia might indeed be caused by a completely different problem, such as backbone hallucinations. In future works, we intend to exploit the hierarchical structure of the model to create an inspection chain to debug the model: whenever the explainability allows us to discover an issue, we can add a new level in the hierarchy to enrich the state token with new information and condition the generation of the driving commands. An interesting approach in this direction would be to study a model with different modules to be deployed in parallel rather than stacking them hierarchically. Furthermore, the proposed model could be easily improved to include additional sources of data such as depth and segmentation, e.g. concatenating them to the input image.
|
2309.15788 | Reconciling quantum and classical spectral theories of ultrastrong
coupling: Role of cavity bath coupling and gauge corrections | Focusing on the widely adopted Hopfield model with cavity dissipation, we
show how the linear spectrum of an ultrastrongly coupled cavity and a dipole
can be described either classically or quantum mechanically, but only when the
quantum model includes (i) corrections to maintain gauge invariance, and (ii) a
specific type of cavity bath coupling. We also show the impact of this bath
model on the quantum Rabi model, which has no classical analogue in ultrastrong
coupling. | Stephen Hughes, Chris Gustin, Franco Nori | 2023-09-27T17:08:27Z | http://arxiv.org/abs/2309.15788v2 | # Reconciling quantum and classical spectral theories of ultrastrong coupling:
###### Abstract
Focusing on the widely adopted Hopfield model with cavity dissipation, we show how the linear spectrum of an ultrastrongly coupled cavity and a dipole can be described either classically or quantum mechanically, but only when the quantum model includes (i) corrections to maintain gauge invariance, and (ii) a specific type of cavity bath coupling. We also show the impact of this bath model on the quantum Rabi model, which has no classical analogue in ultrastrong coupling.
_Introduction.--_Strong coupling between a single cavity mode and a dipole or two-level system (TLS) [1; 2; 3] can be well explained quantum mechanically or classically (or semiclassically) [4; 5; 6]. The characteristic signature of strong coupling is a splitting in the emitted spectrum by \(2g\), where \(g\) is the dipole-cavity coupling rate, which exceeds any losses in the system, e.g., \(g^{2}>\kappa^{2}/16\)[3] (or more strictly \(g^{2}>\kappa^{2}/8\)[7]), with \(\kappa\) the cavity decay rate. Quantum mechanically, this is referred to as _vacuum_ Rabi splitting, or in classical physics as normal-mode splitting.
Strong coupling has been observed in atoms [2], molecules [8; 9], quantum dots [10; 11], and circuit QED [12; 13], and is often considered a prerequisite for exploring _unique quantum effects_ when one moves beyond a weak excitation approximation or linear response [14; 15]. In a quantum description of cavity-TLS coupling, multiphoton effects manifest in an _anharmonic_ response [16; 17; 18; 19], which is not captured by the physics of two coupled classical harmonic oscillators (HOs). Nevertheless, a classical description of the emitted spectrum under weak excitation is an adequate description of the system, and one recovers a perfect quantum to classical correspondence of the light-matter system, albeit with a different interpretation. Indeed, the quantum interpretation of spontaneous emission can be described in terms of radiation reaction, vacuum fluctuations, or a mixture of both these effects [20; 21].
Quantum and classical descriptions of certain light-matter coupling have led to interesting interpretations and insights, including the difference between quantum and classical oscillations in phase qubits [22], classical pseudo-Rabi oscillations in flux qubits [22], and vacuum Rabi splitting as a manifestation of linear-dispersion theory [4].
Beyond strong coupling, recent interest in cavity-QED has turned to ultrastrong coupling (USC) [23; 24; 25; 26; 27; 28; 29; 30], where one cannot invoke a rotating wave approximation (RWA), typically when \(g\geq 0.1\omega_{0}\), where \(\omega_{0}\) is the dipole resonance frequency. The regime of USC presents some fascinating uniquely quantum concepts such as _virtual photons in the ground state_[28; 29; 30; 31]. Squeezed vacuum states, also with no classical analogue, occur in both bosonic and TLS emitter systems in the USC, which are described using the Hopfield model and the quantum Rabi model (QRM), respectively [29]. Exciton and many-emitter Dicke systems also take on the form of the Hopfield model, such as cavity coupling to 2D electron systems including Landau levels in THz cavities [32]. In the USC regime, these systems exhibit spectral signatures that reflect the nature of the quantum Hopfield model. On the other hand, the classical theory of coupled oscillators (including collectively coupled TLSs in the dilute thermodynamic limit) does not require a RWA, and it might be expected that the USC regime should also have a quantum to classical correspondence.
In probing the Hopfield regime, the optical spectrum is typically measured, and most features are argued as being quantum mechanical in nature, e.g., stemming from diamagnetic coupling terms. Yet, the spectral locations of the upper and lower polaritons can be matched with coupled classical HO theory [23; 33; 34]. In the presence of dissipation, however, a quantum-classical correspondence is not known. Although there exist some quantum Langevin approaches for simplified geometries [23], these are not suitable for calculating higher-order quantum correlation functions and arbitrary geometries. Furthermore, the physics of a cavity-coupled TLS is often said to reduce to cavity-HO physics if one neglects saturation effects [35; 36]. However, this is not true in the USC regime, even with linear response, since multiple photon states emerge already in excited states, and saturation effects are unavoidable in the USC regime due to the virtual excitation of the TLS, including the ground state.
The USC regime presents additional challenges for quantum field models, including: (i) gauge corrections because of a truncated Hilbert space [37; 38; 27], and (ii) the specific form of the system-bath interaction for the cavity mode matters [39; 40]. Since a classically-coupled HO model (expressed entirely in terms of classical electromagnetic fields) has no issues with gauge invariance, it is essential to seek out if and when such a quantum-classical correspondence can be made. This is not just motivated by fundamental physics reasons, but is practically important since many of the emerging USC experiments require some sort of modelling with classical Maxwell solvers [33, 41].
In this work, we first show that, for a lossless system, the spectral poles (resonances) of the Hopfield model precisely overlap a classical HO solution, and these deviate from the QRM as soon as one enters the USC regime. We then introduce a spectral theory of the dissipative Hopfield model, using a gauge-invariant master equation theory expressed in the multi-polar gauge, and show how it is possible to identify a specific form of the system-bath coupling that matches the classical solution.
We choose a common and established classical theory, based on a normal-mode expansion of the cavity Green function with phenomenological decay; a more rigorous approach could use quasinormal modes [42; 43]. Finally, we study the impact of this model on both the dissipative Hopfield model and the dissipative QRM, and show the striking differences between these two models for different \(\eta\equiv g/\omega_{c}\). Usually \(\eta>0.1\) is the criterion for the USC regime. A simple schematic of lossy cavity-QED systems is shown in Fig. 1.
_Theory.--_We first consider the interaction between a bosonic cavity mode, with creation (annihilation) operator \(a^{\dagger},a\), and a bosonic dipole, with creation (annihilation) operator \(b^{\dagger},b\). Neglecting bath losses for now, in the multi-polar gauge, and with the dipole approximation, the Hopfield model can be written as (\(\hbar=1\))
\[H_{\text{Hop}}=\omega_{c}a^{\dagger}a+\omega_{0}b^{\dagger}b+ig(a^{\dagger}-a )(b+b^{\dagger})+D(b+b^{\dagger})^{2}, \tag{1}\]
where \(\omega_{c}\) (\(\omega_{0}\)) is the cavity (atom) transition frequency, and \(D=\eta g\) is the diamagnetic amplitude [44]. Note that a naive truncation of the multipolar gauge Hamiltonian would render this Hopfield \(D\) term infinite, and one must account for the gauge invariance of the truncated single-mode model to obtain physical and correct results [38, 45].
Using a Bogoliubov transformation, we rewrite this Hamiltonian as \(H_{\text{Hop}}=\omega_{c}a^{\dagger}a+\tilde{\omega}_{0}\tilde{b}^{\dagger}\tilde{b}+i\tilde{g}(a^{\dagger}-a)(\tilde{b}+\tilde{b}^{\dagger})+D\), where \(\tilde{\omega}_{0}=\omega_{0}(1+4\eta^{2})^{0.5}\) and \(\tilde{g}^{2}=g^{2}/(1+4\eta^{2})^{0.5}\). Diagonalization yields two polariton poles [46]: \(\omega_{\pm}^{2}=\frac{1}{2}\left[\tilde{\omega}_{0}^{2}+\omega_{c}^{2}\pm\sqrt{(\tilde{\omega}_{0}^{2}-\omega_{c}^{2})^{2}+16\tilde{g}^{2}\tilde{\omega}_{0}\omega_{c}}\right]\). Assuming on-resonance conditions (\(\omega_{0}=\omega_{c}\)), then
\[\omega_{\pm}=\omega_{0}\sqrt{1+2\eta^{2}\pm 2\eta(1+\eta^{2})^{1/2}}. \tag{2}\]
To lowest order in the counter rotating-wave effects, i.e., to order \(\eta^{2}\) (Bloch-Siegert regime), we obtain \(\omega_{\pm}|_{\text{BS}}=\omega_{0}(1+\eta^{2}/2)\pm g\). If one neglects the diamagnetic term, then \(\omega_{\pm}^{0}=\omega_{0}(1\pm 2\eta)^{0.5}\), which is problematic for \(\eta\geq 0.5\).
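For reference, the exact polariton frequencies of Eq. (2) and their Bloch-Siegert approximation can be compared numerically with the short Python script below.

```python
import numpy as np

def polariton_freqs(omega0, eta):
    """On-resonance polariton frequencies of the lossless Hopfield model, Eq. (2),
    and their Bloch-Siegert (order eta^2) approximation."""
    root = np.sqrt(1.0 + eta**2)
    exact = (omega0 * np.sqrt(1.0 + 2.0 * eta**2 + 2.0 * eta * root),
             omega0 * np.sqrt(1.0 + 2.0 * eta**2 - 2.0 * eta * root))
    g = eta * omega0
    bs = (omega0 * (1.0 + eta**2 / 2.0) + g, omega0 * (1.0 + eta**2 / 2.0) - g)
    return exact, bs

for eta in (0.1, 0.3, 0.5):
    (wp, wm), (wp_bs, wm_bs) = polariton_freqs(1.0, eta)
    print(f"eta={eta}: exact ({wp:.4f}, {wm:.4f}), Bloch-Siegert ({wp_bs:.4f}, {wm_bs:.4f})")
```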
We can also compare this solution to the QRM, with
\[H_{\text{QRM}}=\omega_{c}a^{\dagger}a+\omega_{0}\sigma^{+}\sigma^{-}+ig(a^{ \dagger}-a)(\sigma^{+}+\sigma^{-}), \tag{3}\]
where \(\sigma^{+}\) (\(\sigma^{-}\)) is the creation (annihilation) operator for the TLS, which has important saturation effects. In this case, we have no diamagnetic term to consider as the relevant TLS term has no effect on energy differences with the TLS operators. For the QRM, we have an infinite set of anharmonic eigenenergies, which modifies the Jaynes-Cummings ladder states because of counter-rotating wave terms. Considering again a Bloch-Siegert regime (order \(\eta^{2}\) coupling) [40, 30], we obtain the poles of the lowest order polaritons \(\omega_{\pm}|_{\text{BS}}^{\text{QRM}}=\omega_{0}\pm g(1+\eta^{2}/4)^{0.5}\approx \omega_{0}\pm g\). These QRM pole resonances differ from the Hopfield model in the Bloch-Siegert regime, even with linear response.
Linear spectral shifts beyond those in the Jaynes-Cummings model are often termed vacuum Bloch-Siegert shifts [47], but below we quantify why there is nothing uniquely quantum about such resonance shifts in a Hopfield model. This is in contrast to the QRM, which becomes uniquely quantum in nature in the USC regime.
In classical electromagnetism, the bare polarizability volume of an oscillator is \(\alpha(\omega)=A_{0}\omega_{0}/(\omega_{0}^{2}-\omega^{2})\), with \(A_{0}=2d^{2}/\epsilon_{0}\) and \(d\) the dipole moment. Considering the emitter position \(\mathbf{r}_{0}\), the photonic Green function, under a single mode expansion (and assuming scalar fields) is [48]
\[G_{c}(\mathbf{r}_{0},\mathbf{r}_{0},\omega)=\frac{A_{c}\omega^{2}}{\omega_{c}^ {2}-\omega^{2}}, \tag{4}\]
where \(A_{c}=1/V_{\text{eff}}\epsilon_{b}\), with \(V_{\text{eff}}\) the effective mode volume and \(\epsilon_{b}\) the background dielectric constant. Embedding the HO dipole in the cavity, we obtain the exact polarizability:
\[\alpha(\omega)=\frac{A_{0}\omega_{0}}{\omega_{0}^{2}-\omega^{2}-(\omega_{0}/ \omega_{c})4g^{2}\omega^{2}/(\omega_{c}^{2}-\omega^{2})}, \tag{5}\]
where \(4g^{2}=d^{2}\omega_{c}/(2\epsilon_{0}V_{\text{eff}}\epsilon_{b})\). Considering the on-resonance case again, the classical poles are \(\omega_{\pm}^{G}=\omega_{\pm}\) [as in Eq. (2)], so _it is identical_ to the solution of the lossless Hopfield model. This is consistent with experimental results on ultrastrongly-coupled molecular vibrational dipoles in IR cavities, where the same classical-quantum correspondence was observed in the oscillator frequencies [49]. In a quantum picture, the blueshift is caused by the \(P^{2}\) term (or \(A^{2}\) term in the Coulomb gauge). However, in a classical picture, this blueshift is caused from the poles of the cavity-renormalized polarizability; thus, there is nothing uniquely quantum about this spectral blueshift. The blueshift is caused by counter rotating-wave effects, in both models.
This correspondence with the poles of the quantum Hopfield model and classical electromagnetism is partly known [33, 34], yet sometimes not recovered in quantum models. Moreover, in a linear optical material, the classical Green function of the hybrid system must have poles in the upper complex plane [50], even with linear gain [51]. The classical pole correspondence does not mean that there are no unique quantum effects in the Hopfield model, since the ground and excited states are squeezed states [31, 52].
Figure 1: Schematic of two systems in dissipative cavity-QED that can realize USC, including (a) an atom inside a cavity, which has a decay rate \(\kappa\), and (b) a planar cavity coupled to a collective emitter system. The emitters can be treated as a bosonic (Hopfield model) or as a TLS (QRM).
Indeed, for \(\omega_{c}=\omega_{0}\), the quantum ground state \(|0_{+}0_{-}\rangle\) has energy \(\omega_{0,0}=(\omega_{+}+\omega_{-})/2-\omega_{0}=\omega_{0}(1+\eta^{2})^{0.5}-\omega_{0}>0\). The ground state contains virtual photons and is an entangled state [53]. Despite this, there appear to be no unique quantum effects that affect the polariton eigenfrequencies.
What is less well known is to what degree the predicted optical spectra agree (or not) between the quantum and classical coupled mode theories in the USC regime, and how to describe such a regime with open-system master equations. Since all cavity-QED systems have dissipation and input-output channels, it is essential to model them as open quantum systems. Within the RWA, the vacuum Rabi doublets are well described classically or quantum mechanically [4], for both boson and TLS emitters. In the USC regime, things are much more subtle, and the quantum models have technical problems related to how to properly include dissipation as well as gauge correction terms (caused by material and cavity mode truncation).
In a classical light-matter model, we can include a heuristic cavity decay rate, \(\kappa\), in the cavity Green function, and derive the cavity-emitted spectrum as
\[S^{\text{Class}}=F(\mathbf{R})\left|\frac{E_{0}\,g^{2}\omega^{2}}{(\omega^{2} -\omega_{c}^{2}-i\omega\kappa)(\omega^{2}-\omega_{0}^{2})-4g^{2}\omega^{2}} \right|^{2}, \tag{6}\]
where \(E_{0}\) is the excitation field strength and \(F(\mathbf{R})\) is a geometric factor. The solution is non-Markovian, causal, and contains no RWA [48]. Also, this phenomenological approach to dissipation ensures a symmetric spectrum outside of the USC regime for a resonant cavity and TLS.
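Equation (6) is straightforward to evaluate numerically; the sketch below does so on a frequency grid, up to the geometric prefactor \(F(\mathbf{R})\), using, for illustration, \(\eta=0.5\) and \(\kappa=0.05g\).

```python
import numpy as np

def classical_spectrum(omega, omega_c, omega_0, g, kappa, E0=1.0):
    """Classical cavity-emitted spectrum of Eq. (6), up to the geometric factor F(R)."""
    num = E0 * g**2 * omega**2
    den = ((omega**2 - omega_c**2 - 1j * omega * kappa) * (omega**2 - omega_0**2)
           - 4.0 * g**2 * omega**2)
    return np.abs(num / den) ** 2

omega0 = 1.0
g = 0.5 * omega0            # eta = 0.5
kappa = 0.05 * g
w = np.linspace(0.2, 2.2, 4000) * omega0
S = classical_spectrum(w, omega0, omega0, g, kappa)
print(w[np.argmax(S)])      # one of the two polariton peaks of Eq. (2)
```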
In a quantum picture, to include cavity dissipation, we use an open-system approach [54, 40], at the level of a generalized master equation (GME) [55, 56, 40, 57],
\[\frac{\mathrm{d}}{\mathrm{dt}}\rho=-\frac{i}{\hbar}[H_{\text{S}},\rho]+\mathcal{ L}_{\text{G}}\rho+\frac{P_{c}}{2}\mathcal{D}[X^{-}]\rho, \tag{7}\]
where \(P_{c}\) is an incoherent pump term, with \(\mathcal{D}[O]\rho=2O\rho O^{\dagger}-\rho O^{\dagger}O-O^{\dagger}O\rho\), and the cavity dissipator is
\[\mathcal{L}_{\text{G}}\rho =\frac{1}{2}\sum_{\omega,\omega^{\prime}>0}\Gamma_{c}(\omega)[X^ {+}(\omega)\rho X^{-}(\omega^{\prime})-X^{-}(\omega^{\prime})X^{+}(\omega)\rho]\] \[+\Gamma_{c}(\omega^{\prime})[X^{+}(\omega)\rho X^{-}(\omega^{ \prime})-\rho X^{-}(\omega^{\prime})X^{+}(\omega)]. \tag{8}\]
The dressed-state operators, \(X^{\pm}\), are defined from
\[X^{+}(\omega)=\left\langle j|\Pi_{c}|k\right\rangle\left|j\right\rangle\left\langle k \right|, \tag{9}\]
where \(\omega=\omega_{k}-\omega_{j}>0\), \(X^{-}=(X^{+})^{\dagger}\), and \(\Pi_{c}\) is a cavity operator linear in the photon creation and destruction operators. We neglect atom/emitter decay channels, since these are typically negligible. The cavity decay rates are obtained from \(\Gamma_{c}(\omega)=2\pi J_{c}(\omega)\), where \(J_{c}(\omega)\) is the spectral bath function. Below, we use \(\Gamma_{c}=\kappa\), and show that this is sufficient to recover the classical spectral form if the appropriate \(\Pi_{c}\) operator can be identified.
The precise form of \(\Pi_{c}\) matters in the USC regime [40]. For example, one could choose \(\Pi_{c}=i(a^{\dagger}-a)\equiv P\), or \(\Pi_{c}=a^{\dagger}+a\equiv Q\), and obtain significantly different predictions, or any linear combination of these two. This is not the case with a RWA. Furthermore, in the USC regime, there is a gauge ambiguity for the electric field operator [37], because \(P\) represents the Coulomb gauge electric field, and we are using a system Hamiltonian in the dipole gauge. For a restricted TLS subspace, this ambiguity is corrected through the transformation \(a\rightarrow a^{\prime}=\mathcal{U}a\mathcal{U}^{\dagger}=a+i\eta\sigma_{x}\) [58], where \(\mathcal{U}=\exp(-i\eta(a+a^{\dagger})\sigma_{x})\) is the projected unitary operator [37, 59, 38], with \(\sigma_{x}=b+b^{\dagger}\) (Hopfield model) or \(\sigma_{x}=\sigma^{+}+\sigma^{-}\) (QRM). Thus, one must use \(a^{\prime}\) and \(a^{\prime\dagger}\) in the computation of the dissipators to ensure gauge invariance. The fact that the \(\Pi_{c}\) operator should consist only of bosonic \(a\) and \(a^{\dagger}\) operators in the Coulomb gauge is a consequence of photon loss being associated only with electromagnetic degrees of freedom, and not the TLS [38].
In the USC regime, the system has transition operators \(\left|j\right\rangle\left\langle k\right|\) which cause transitions between the dressed eigenstates of the system \(\left\{\left|j\right\rangle,\left|k\right\rangle\right\}\). For the cavity mode operator, these transitions are obtained from the dressed operators \(X^{\pm}\), and again these must be gauge corrected. To make the notation clearer, we can use \(X^{\pm}_{\text{GC}}\) to indicate that we are applying \(\Pi_{c}\) operators with _gauge corrections_. Thus, the cavity-emitted quantum spectrum is obtained from
\[S^{\text{QM}}\propto\text{Re}\left[\int_{0}^{\infty}d\tau e^{i\omega\tau}\int_ {0}^{\infty}\left\langle X^{-}_{\text{GC}}(t)X^{+}_{\text{GC}}(t+\tau)\right\rangle dt \right], \tag{10}\]
and calculations without gauge corrections simply use \(X^{\pm}\), i.e., without primed cavity operators in the computation of the dissipators and cavity-mode observables. We will show both solutions to better highlight the role of these gauge corrections, and also show how they are _required_ to recover
Figure 2: Hopfield GME results versus the classical model for \(\eta=0.5\), and \(\kappa=0.05g\), with three different bath models, (a) \(\Pi_{c}=P\), (b) \(\Pi_{c}=Q\), and (c) \(\Pi_{c}=(P+Q)/\sqrt{2}\) (\(\Pi_{c}=(P-Q)/\sqrt{2}\) gives identical results). Gauge-corrected results use \(X^{\pm}_{\text{GC}}\) and primed cavity operators for \(\Pi_{c}\). Only model (c), with gauge corrections (‘GC’), overlaps with the classical solution.
classical correspondence. In all GME calculations below, we use \(P_{c}\ll\kappa\), to ensure weak excitation, and the numerical results are carried out using Python and QuTiP [60, 61].
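As a minimal QuTiP sketch (not the full GME with dressed-state dissipators and gauge corrections), the dipole-gauge Hopfield Hamiltonian of Eq. (1) can be diagonalized numerically and its lowest transition frequency compared with Eq. (2); the Fock-space truncation is our choice and should be increased to check convergence.

```python
import numpy as np
from qutip import destroy, qeye, tensor

def hopfield_hamiltonian(omega_c, omega_0, eta, n_fock=12):
    """Dipole-gauge Hopfield Hamiltonian of Eq. (1), truncated to n_fock Fock states
    per oscillator (hbar = 1)."""
    g = eta * omega_c
    D = eta * g
    a = tensor(destroy(n_fock), qeye(n_fock))
    b = tensor(qeye(n_fock), destroy(n_fock))
    return (omega_c * a.dag() * a + omega_0 * b.dag() * b
            + 1j * g * (a.dag() - a) * (b + b.dag()) + D * (b + b.dag()) ** 2)

evals = hopfield_hamiltonian(1.0, 1.0, eta=0.5).eigenenergies()
print(evals[1] - evals[0])                              # numerical lower polariton
print(np.sqrt(1 + 2 * 0.25 - 2 * 0.5 * np.sqrt(1.25)))  # Eq. (2), lower branch
```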
It is important to stress that our gauge-corrected results are necessary to ensure gauge invariance. For example, we could also use a Hopfield model in the Coulomb gauge, where \(H_{\text{Hop}}^{\text{CG}}=\omega_{c}a^{\dagger}a+\omega_{0}b^{\dagger}b+ig\frac{\omega_{0}}{\omega_{c}}(b^{\dagger}-b)(a+a^{\dagger})+D(a+a^{\dagger})^{2}\), and then we could use unprimed operators for the cavity mode, where indicated above, specifically in \(X^{\pm}\) and \(P\). Note also that \(D\) must be the same in both gauges to ensure gauge invariance, which has been proven also for a dilute Dicke model [34]. For the QRM, the gauge-corrected system Hamiltonian in the Coulomb gauge is [37], \(H_{\text{QRM}}^{\text{CG}}=\omega_{c}a^{\dagger}a+\frac{\omega_{0}}{2}\{\sigma_{z}\cos(2\eta(a+a^{\dagger}))+\sigma_{y}\sin(2\eta(a+a^{\dagger}))\}\), which contains field operators to all orders. For simplicity, we use only the dipole gauge below, but we have checked that all results below are identical in the Coulomb gauge.
_Computed spectra.--_Figure 2 shows the classical and quantum solutions for the dissipative Hopfield model, with three types of bath coupling models: (a) \(\Pi_{c}=P\), (b) \(\Pi_{c}=Q\), and (c) \(\Pi_{c}=(P\pm Q)/\sqrt{2}\), where primed indices are used for the gauge-corrected models. We first choose \(\eta=0.5\) here, which is well into the USC regime, with \(\kappa=0.05g\). As can be seen, _we find very good agreement with the classical solution only when \(\Pi_{c}=(P\pm Q)/\sqrt{2}\) and only with gauge corrections._ To the best of our knowledge, this is the first time that such a solution and correspondence has been made, and these results also demonstrate the significant problem with choosing an arbitrary system-bath interaction form in the USC regime. It should be noted that such a correspondence does not necessarily indicate that this is the _correct_ choice of dissipation model. Rather, this result allows for unambiguous connection and comparison between quantum and classical heuristic models of dissipation, and is further evidence of the limitations of purely phenomenological approaches when treating losses in open quantum systems.
Next, we look at the role of this bath coupling model, for different \(\eta\), using \(\Pi_{c}=(P\pm Q)/\sqrt{2}\), for both the Hopfield model and the QRM. The spectral calculations are shown in Fig. 3, along with the classical solution. We see that the Hopfield model and QRM differ substantially in all USC regimes, and the QRM takes on multiple resonances when \(\eta\) is sufficiently large, even for weak excitation, as well as pronounced spectral asymmetries. Moreover, we find that the dissipative Hopfield model, with \(\Pi_{c}=(P+Q)/\sqrt{2}\), agrees very well with the classical two oscillator model at all coupling regimes shown. This is clearly not the case for the QRM, and such substantial differences should be easy to identify in experiments.
_Discussions and summary.--_We have shown how the optical spectra for a dissipative Hopfield model in USC can be described quantum mechanically or classically. To achieve correspondence with the classical dissipative result, quantum models must properly respect gauge invariance and implement the appropriate bath coupling operator. Without such a correspondence, any open-system master equations in this regime with _ad hoc_ system-bath interactions are ambiguous and can predict wildly differing spectra.
We have also clarified how the dissipative Hopfield model and QRM substantially differ, even for weak excitation, at all USC regimes, including the perturbative Bloch-Siegert regime. Thus, only the Hopfield model yields a classical correspondence under linear response, and this correspondence _only occurs with a careful treatment of quantum dissipation and gauge corrections._ While we used a normal mode expansion with heuristic broadening, this form is well established for high \(Q\) cavities outside the USC regime, and future work could improve such models using classical and quantized quasinormal mode theories [42, 43].
Figure 3: Dissipative (GME) Hopfield model (a-c) and QRM (d-f) versus the classical solution, for three values of \(\eta\). We only present the \(\Pi_{c}=(P+Q)/\sqrt{2}\) bath model and show GME results, with and without gauge corrections. Once again, only the Hopfield model with gauge corrections overlaps with the classical solution. At higher values of \(\eta\), the QRM also shows multiple resonances, and these are also substantially different with gauge corrections. The QRM fails to recover the classical solution in the USC regime.
One possible clue as to the significance of the \(\Pi_{c}\propto P\pm Q\) coupling can be seen by noting that the classical phenomenological loss model is phase-insensitive. In the quantum loss model, a choice of \(P\pm Q\) for the quadrature coupling to the bath is the only choice which gives equal coupling magnitude to each quadrature (i.e., is phase-insensitive in magnitude).
Broadly, these findings are important for a wide class of light-matter systems now emerging to study the USC regime, including lossy Landau systems and metallic systems [32; 33]. Apart from showing a direct classical correspondence for dissipative modes, our results can be used to guide open-system quantum models that are needed when observations are uniquely quantum in nature, e.g., in the QRM for any excitation including coherent excitation, and the Hopfield model excited with non-classical fields.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the National Research Council of Canada (NRC), the Canadian Foundation for Innovation (CFI), and Queen's University, Canada. S.H. acknowledges the Japan Society for the Promotion of Science (JSPS) for funding support through an Invitational Fellowship. F.N. is supported in part by Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant No. JPMJMS2061], the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06. We thank Jun Kono and Hideo Mabuchi for useful discussions.
|
2310.20644 | Persistence diagrams as morphological signatures of cells: A method to
measure and compare cells within a population | Cell biologists study in parallel the morphology of cells with the regulation
mechanisms that modify this morphology. Such studies are complicated by the
inherent heterogeneity present in the cell population. It remains difficult to
define the morphology of a cell with parameters that can quantify this
heterogeneity, leaving the cell biologist to rely on manual inspection of cell
images. We propose an alternative to this manual inspection that is based on
topological data analysis. We characterise the shape of a cell by its contour
and nucleus. We build a filtering of the edges defining the contour using a
radial distance function initiated from the nucleus. This filtering is then
used to construct a persistence diagram that serves as a signature of the cell
shape. Two cells can then be compared by computing the Wasserstein distance
between their persistence diagrams. Given a cell population, we then compute a
distance matrix that includes all pairwise distances between its members. We
analyse this distance matrix using hierarchical clustering with different
linkage schemes and define a purity score that quantifies consistency between
those different schemes, which can then be used to assess homogeneity within
the cell population. We illustrate and validate our approach to identify
sub-populations in human mesenchymal stem cell populations. | Yossi Bokor Bleile, Patrice Koehl, Florian Rehfeldt | 2023-10-31T17:12:01Z | http://arxiv.org/abs/2310.20644v1 | Persistence diagrams as morphological signatures of cells:
## Abstract
Cell biologists study in parallel the morphology of cells with the regulation mechanisms that modify this morphology. Such studies are complicated by the inherent heterogeneity present in the cell population. It remains difficult to define the morphology of a cell with parameters that can quantify this heterogeneity, leaving the cell biologist to rely on manual inspection of cell images. We propose an alternative to this manual inspection that is based on topological data analysis. We characterise the shape of a cell by its contour and nucleus. We build a filtering of the edges defining the contour using a radial distance function initiated from the nucleus. This filtering is then used to construct a persistence diagram that serves as a signature of the cell shape. Two cells can then be compared by computing the Wasserstein distance between their persistence diagrams. Given a cell population, we then compute a distance matrix that includes all pairwise distances between its members. We analyse this distance matrix using hierarchical clustering with different linkage schemes and define a purity score that quantifies consistency between those different schemes, which can then be used to assess homogeneity within the cell population. We illustrate and validate our approach to identify sub-populations in human mesenchymal stem cell populations.
## Author summary
Cells are the basic unit of life. Understanding how they grow, divide, die, and change shape is of central importance in many other areas of the life sciences. In this paper, we focus on the concept of shape and, more specifically, on how to compare the shapes of two cells. We characterise this shape with the cell contour supplemented by the position of its nuclei. We use topological data analysis to define a signature of that shape, generated from its persistence diagram, a structure that reflects the relative position of the nucleus with respect to segments of the contours. We compute the distance between two cells as the Wasserstein distance between their shape signature. Using this distance, we analyse populations of cells to help identify members with unusual shapes (usually referred to as outliers) as well as sub-populations. We validate
our approach to identify sub-populations within human mesenchymal stem cell populations that are known to be heterogeneous.
## 1 Introduction
Cells are the basic unit of life. Understanding how they grow, divide, die, and change shape is of central importance for immunology, cancer biology, pathology, tissue and organ morphogenesis during development, as well as for many other areas in the life sciences. In this paper, we focus on the concept of shape. The shape of a cell is defined by the geometrical constraints of the space it occupies and is determined by the external boundaries and positions of the internal components. The shape is the result of the mechanical balance of forces exerted on the cell membrane by intra-cellular components and the extra-cellular environment. It is a geometric property controlled by a variety of biochemical pathways. Cell biologists study in parallel the morphology of cells (their geometry) with the regulation mechanisms that modify this morphology. These studies are benefiting from recent advances in microscopy and image processing techniques. Current microscopes provide 2D images that make it possible to study cellular shapes, or more precisely 2D projections of cellular shapes. The question remains as to how to measure and compare those shapes. This paper focusses on a new technique for performing those analyses.
Our proposed method for 2D shape comparisons is motivated by a seminal paper by Engler et al. that demonstrated that the mechanical properties (Young's elastic modulus \(E\)) of the extracellular matrix direct the differentiation of human mesenchymal stem cells (hMSCs) [1]. While up- and down-regulation of genes and transcription factors takes up to several days or even weeks, experiments focused on the first 24 hours of hMSCs after seeding on a substrate showed a significant impact of matrix rigidity on the structural formation of acto-myosin stress fibers and quantified that by an order parameter \(S\) that could be used as an early morphological descriptor of mechano-directed stem cell differentiation [2]. Although this analysis was based on the filamentous structure of the cytoskeleton and its pattern formation, we aim to use the global cell morphology, in particular, the outline of the cellular cortex in two dimensions. Importantly, the hMSCs used in all these studies are primary cells, collected from the bone marrow of human individuals, and not an immortalised cell line. This leads to an intrinsic variety of the cell population that is expected to be further impacted by potential sub-populations of bone marrow fibroblasts (roughly 5%) [3, 4]. Our aim is to see if geometry alone allows us to identify those sub-populations within a sample of cells collected from the bone marrow.
A 2D shape is defined as a domain \(D\) in \(\mathbb{R}^{2}\), delimited by its boundary, \(\partial D\), often referred to as the contour of \(D\). In all our applications, we will take the contour to be a piecewise smooth or polygonal Jordan curve, that is, a simple closed curve in \(\mathbb{R}^{2}\). There are multiple geometric representations of such 2D shapes, leading to different methods for their characterisations. We briefly review three such representations.
In the _digital image_ representation, common to most real applications, raw data is provided in the form of 2D images (see Figure 1A). In essence, the data to be understood and compared is a collection of pixels. Traditional methods of comparing such images usually proceed in three steps. They first define a set of well-chosen landmarks or key points on the surfaces of the shapes, then assign "signatures" to these key points (coordinates in a parameterising domain), and finally determine a map maximising the correspondence of signatures (for a review, see [5]). With the increase in computing power and the large number of image data sets that are generated, these ideas are often studied in the context of deep learning, where the key points and signatures are learnt from large data sets. Deep learning has become the
predominant method used in 2D image analysis (see [6] for a review of applications to the analysis of medical images). However, its applicability requires access to large data sets. In many cases, limited numbers of images are available, either because they are expensive to produce or because they model a rare phenomenon. This is the case for the stem cell images considered in this paper. In addition, deep learning remains something of a black-box procedure for classification. Cell biologists seek to understand the interplay between the geometry of a cell and the biochemical processes that are responsible for this geometry. They need a finer and more mechanistic understanding of the processes that drive shape, requiring mathematical approaches.
A second representation of 2D shapes, which we refer to as _shape as planar contour_, is based on the curve describing the outer boundary of the shape (see Figure 1C). This is well suited to applications focused on the geometric configuration of a shape, where factors such as the colour or grey level of the interior are not relevant or available. Methods to model the similarity between two shapes given as planar contours have been based on defining a distance between two curves in the plane. The proposed distances include the Hausdorff and Frechet distances [7]. Other techniques are based on the Poisson equation [8], integral invariants [9], and an elastic shape distance on the energy required to elastically deform one boundary contour to the other [10, 11].
Methods based on shape as planar contour do not directly consider the interior of a shape, possibly discarding relevant information. A third approach, _shape as planar region,_ compares shapes using surface correspondences that take into account both the contour and the interior of the shape. Measures of similarity based on the distortion energy of a 2-dimensional correspondence taking one shape to another have been based on conformal [12, 13, 14] and quasi-conformal mappings [15, 16, 17]. These are of particular interest when aligning landmarks, special points of interest that lie on the boundary or in the interior of the shape. The Uniformization Theorem implies that conformal maps can be found that align up to 3 boundary landmarks in each of a pair of disk type shapes, or one in the interior and one on the boundary. Quasi-conformal maps allow the alignment of any number of landmarks [15, 17], and can also be used for shape alignment when there are holes in the interior of a shape [16]. When applied to studying cell shapes, they make it possible to take into account the positions of the nucleus, of actin filaments, and of reticulum endoplasmic in the interior of a cell, which are of special interest because they are visible in microscopy images.
Paraphrasing a recent review paper by D. Chitwood and colleagues, 'Shape is data and data is shape' [18]. As described above, shape is a signature of biological objects such as the cells discussed above, one that is significant for their biological functions. As such, shape characteristics are integral parts of the data that represent these biological objects. Conversely, there is a geometric structure within data that is referred
Fig 1: (A) Fluorescence microscopy image of a human mesenchymal stem cell (hMSC). (B) Fluorescence microscopy image of the corresponding nucleus. (C) Plot of the corresponding contour of that cell with the centre of the cell shown as a dot.
as the shape of data. Analysing the shape of data has become an essential branch of data science, known as _Topological Data Analysis_, or in short as TDA. TDA has its roots in the pioneering works of Robins [19], Edelsbrunner et al [20] and Zomorodian and Carlsson [21] in persistent homology and became popular with the publication of a landmark paper by G. Carlsson [22]. Since this paper was published, it has become ubiquitous in data science, with many applications in biology (see, for example, the review mentioned above, [18], and references therein illustrating applications in structural biology, evolution, cellular architecture, and neurobiology). TDA is particularly useful when the data are represented in the form of a graph, or network. As such, it proceeds by connecting data points to form a geometric complex structure whose topological behaviour is then used to analyse the data. Coming back to the fact that shape is data, a shape can itself be characterised through TDA, using, for example, the Euler characteristic transform to study the morphology of barley seeds [23].
In this paper, we introduce a new method for analysing the morphology of a cell that falls into the second category described above, namely with the cell represented by its contour together with one additional point \(C\), taken to be the center of mass of the cell nucleus. From TDA, we use _persistent homology_ to obtain a summary of the morphological features of the cell contour. We use the persistence of sub-level sets of the radial distance function from \(C\) and compute the corresponding persistence diagram (see the next section for a primer on persistent homology applied to analysing cell contours). As the contour of each cell is a closed, non-self-intersecting curve, we know that it consists of a single connected component and a single 1-cycle. These correspond to persistent cycles with infinite life (called _essential_ cycles) in dimension 0 and dimension 1, respectively. Hence, we combine the information from these two persistent cycles by pairing the birth of the essential connected component with the birth of the essential 1-cycle. A pair of cells is then compared by computing the _2-Wasserstein distance_ between their _persistence diagrams_, providing a measure of similarity between the two cells. We can then apply various clustering techniques to these similarity scores, to identify homogeneous populations of cells.
The paper is organised as follows. The next section introduces the concept of persistence homology applied to analysing the morphology of a cell, the construction of the persistence diagram of a cell contour, and the computation of the Wasserstein distance between two persistence diagrams. The Materials and Methods section gives information on the experimental data and implementations of the methods mentioned above. The Results section discusses the applications of this new method for identifying sub-populations among samples of human mesenchymal stem cells collected from bone marrow which may contain some bone marrow fibroblasts [3, 4]. We conclude with a discussion of future applications of persistence homology for comparing cell shapes.
## 2 Theory: persistence homology applied to analysing cell contours
### Persistent Homology on Contours
Given a microscopy image of a fixed and immuno-stained cell, we use a graph \(G\) to represent the boundary in 2 dimensions. This graph is a list of ordered vertices (pixel locations), \(V\), with edges, \(E\), between neighbouring vertices. Note that \(G\) is connected and every vertex has degree 2, so \(G\) consists of precisely one cycle. We extract morphological information using the persistence of connected components of the sub-level sets of a radial function from the centroid of the nucleus.
For a graph \(G\), we say that two vertices \(v_{1},v_{2}\) are in the same _equivalence class_, or
_connected component_, if there is a path \(\gamma\) from \(v_{1}\) to \(v_{2}\). For each connected component of \(G\), we choose a representative vertex \(v\) and denote the set of vertices \(v^{\prime}\) connected to \(v\) by \([v]\). We call the set \(\{\,[v]\,\text{ for }v\in G\}\) the _connected components_ of \(G\).
To use persistent homology, we need to define a filtration on \(G\).
**Definition 1** (Sub-level sets and sequence of graphs).: _Let \(f\) be a function from the vertices \(V\) of a graph \(G\) to \(\mathbb{R}\), and fix \(a\in\mathbb{R}\). The sublevel set \(G_{a}:=f^{-1}((-\infty,a])\) is the subgraph consisting of the set \(V_{a}\) of vertices \(v\) with \(f(v)\leq a\) and the set of edges \(E_{a}\) between any pair of neighbouring vertices that are both in \(V_{a}\). Note that for any_
\[a\leq b\in\mathbb{R}\]
_we have_
\[f^{-1}((-\infty,a])\subseteq f^{-1}((-\infty,b]),\]
_and the sub-level sets form a sequence of nested graphs._
**Remark 1**.: The above definition of sub-level sets is cell-wise constant, rather than a piecewise-linear one. The value assigned to a point on an edge is not its standard Euclidean distance in \(\mathbb{R}^{2}\) to the centre, but instead the maximum of the distances of the edge's two endpoints. This is not an issue, as the difference between these two values is bounded.
### Persistence Diagrams
Given a nested sequence of graphs \(G_{0}\subseteq G_{1}\subseteq\ldots\subseteq G_{\alpha}\) (in general \(G_{\alpha}=G\) the full graph), we can track the changes in connected components of the graphs as the filtering parameter varies. Consider some \(G_{\beta}\), and let \(C_{\beta}:=\left\{\left[v_{j}\right]^{\beta}\right\}_{j=1}^{n_{\beta}}\) be the set of connected components in \(G_{\beta}\). For each connected component of \(G_{\beta}\) we choose a canonical representative vertex, namely the vertex with the lowest function value. We say that a connected component \([v_{j}]\) is _born_ at time \(\beta\) if no vertex in \([v_{j}]\) belongs to a component of \(C_{\beta-1}\). We say \([v_{j}]\)_dies_ at \(\gamma\) if in \(G_{\gamma}\), \([v_{j}]\) becomes path connected to a component born before \([v_{j}]\). For any pair \(\beta\leq\gamma\) we obtain a map \(\mathfrak{A}_{\beta}^{\gamma}:C_{\beta}\to C_{\gamma}\), which is induced by the inclusion \(\iota_{\beta}^{\gamma}:G_{\beta}\to G_{\gamma}\).
**Remark 2**.: The map \(\mathfrak{A}_{\beta}^{\gamma}:C_{\beta}\to C_{\gamma}\)is obtained from the inclusion \(\iota_{\beta}^{\gamma}:G_{\beta}\to G_{\gamma}\) by
\[\mathfrak{A}_{\beta}^{\gamma}\left([v]\right):=\left[\iota_{\beta}^{\gamma}(v)\right],\]
which is a well-defined map.
The births and deaths of the connected components can be visualised in a _persistence diagram_.
**Definition 2** (Persistence Diagram).: _Let \(f\) be a function from a graph \(G\) to \(\mathbb{R}\), and let \(\mathfrak{G}=\{G_{a}\}_{a\in\mathbb{R}}\). Let \(C=\bigcup_{a\in\mathbb{R}}C_{a}\) be the set of connected components across the sequence of graphs \(\mathfrak{G}\). The persistence diagram, \(\mathfrak{D}(\mathfrak{G})\) of \(\mathfrak{G}\) is the multiset of points \((b_{j},d_{j})\in\mathbb{R}^{2}\), where \(b_{j}\) is the birth time of \([v_{j}]\in C\), and \(d_{j}\) its death time. A point with \(d_{j}=\infty\) is called an essential point, and the corresponding equivalence class an essential class._
We can also define these filtrations and persistence diagrams algebraically, including persistence modules, as in [24].
### Example
The _input contour_\(C\) (see Figure 2A), with the center of the nucleus marked, forms a graph. Using the center as a reference point, we construct a _radial distance function_ to the graph as follows: for vertices, we use the standard Euclidean distance to the center of the nucleus, and for edges, we take the maximum of the distances of their two endpoints. Vertices and edges whose radial distances are below a certain threshold (or 'time step'), form a sub-graph of \(C\) (Figure 2B). The _persistence diagram_ (Figure 2D), captures the changes in the connected components of the sequence or filtration of subgraphs of \(C\) obtained at increasing time values.
The relationship between the sequence of subgraphs and the persistence diagram is as follows. At \(t_{1}\), we see the birth of a single connected component, which has infinite life and corresponds to the point \((t_{1},\infty)\) in the diagram (where \(\infty\) is represented by being at the top of the diagram). At \(t_{2}\), there are no changes (no birth or death events). At \(t_{3}\), 3 connected components are born. At \(t_{4}\), a component born at \(t_{3}\) merges with another component (and hence dies), which corresponds to the point \((t_{3},t_{4})\). We also see the birth of 3 components. At \(t_{5}\), we have a single connected component, formed by the remaining 2 components born at \(t_{3}\) merging with the component born at \(t_{1}\), corresponding to the multiplicity 2 point \((t_{3},t_{5})\), and all 3
Fig 2: A) _The input data:_ a cell contour and the center of its nucleus marked; the latter serves as the base point for the radial distance function. B) _The radial distance function:_ The complete cell contour forms a graph \(G\). The edges of this graph are measured relative to the cell center by computing the largest Euclidean distance between the center and the endpoints of the edge: the corresponding measure is the radial distance function with respect to the center. Edges whose radial distance function is below a given cutoff value (or ‘time step’), illustrated as concentric circles around the center, define a sub-graph of the whole contour. C) _Graph filtration:_ Examples of subgraphs for five different time steps. The different graphs obtained at increasing values of time form a filtration of the graph \(G\). D) _The persistence diagram_ captures the topological properties of the graph filtration. The points marked as ‘#2’ and ‘#3’ indicate that the corresponding points have multiplicity 2 and 3, respectively, in the persistence diagram.
components born at \(t_{4}\) merge with the original component as well, corresponding to the multiplicity \(3\) point \((t_{4},t_{5})\).
As a multi-set of points, the persistence diagram is
\[\mathcal{D}=\left\{(t_{1},\infty),(t_{3},t_{4}),(t_{3},t_{5}),(t_{3},t_{5}),(t_{ 4},t_{5}),(t_{4},t_{5}),(t_{4},t_{5})\right\},\]
and, since we are only considering the connected components, we call this a _dimension_\(0\) persistence diagram.
As we are using graphs to represent each contour, we can also consider the information captured by the cycles in the subgraph filtration. Each contour is a simple, closed curve in \(\mathbb{R}^{2}\), and hence the corresponding graph \(G\) contains a single cycle. Furthermore, this cycle appears only in the filtration when the _last_ vertex appears. While it is an important descriptor of the _size_ of the contour, it is inefficient to capture this information in a _dimension_\(1\) persistence diagram. Hence, we modify our dimension \(0\) diagram as follows, so that we capture this information: we pair the birth of the essential class in dimension \(0\) with the birth of the essential class in dimension \(1\). In this case, the set of points in the persistence diagram becomes
\[\mathcal{D}=\left\{(t_{1},t_{5}),(t_{3},t_{4}),(t_{3},t_{5}),(t_{3},t_{5}),(t_ {4},t_{5}),(t_{4},t_{5}),(t_{4},t_{5})\right\}.\]
**Remark 3**.: Readers familiar with persistent homology and persistence diagrams will notice that this is a nonstandard modification. Due to the nature of the contours, performing this _essential pairing_ allows us to more efficiently represent and compare the topological descriptors.
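The construction above can be made concrete with a short routine. The following is a minimal sketch (not the authors' implementation) that computes the dimension-0 persistence pairs of the radial sub-level filtration directly on a contour given as an ordered list of 2D points, using a union-find structure and the elder rule, and that applies the essential pairing of Remark 3; the function name and the choice to drop zero-persistence pairs are our own, and dedicated libraries such as GUDHI or Ripser could be used instead.

```python
import numpy as np

def radial_sublevel_persistence(vertices, centre):
    """Dimension-0 persistence pairs of the radial sub-level filtration on a
    closed contour given as ordered 2D points (a single cycle graph).
    The essential component is paired with the birth of the essential
    1-cycle, i.e. the value at which the last edge of the cycle appears."""
    vertices = np.asarray(vertices, dtype=float)
    n = len(vertices)
    f_vert = np.linalg.norm(vertices - np.asarray(centre, dtype=float), axis=1)
    # Edge (i, i+1 mod n) enters the filtration at the max of its endpoint values.
    edges = sorted((max(f_vert[i], f_vert[(i + 1) % n]), i, (i + 1) % n)
                   for i in range(n))

    parent = list(range(n))      # union-find over vertices
    birth = f_vert.copy()        # birth value of the component rooted at each index

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for value, i, j in edges:                    # edges in filtration order
        ri, rj = find(i), find(j)
        if ri == rj:
            # Last edge of the cycle: pair the birth of the essential
            # component with the birth of the essential 1-cycle (Remark 3).
            pairs.append((float(birth[ri]), float(value)))
            continue
        # Elder rule: the younger component (larger birth value) dies here.
        old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
        if birth[young] < value:                 # drop zero-persistence pairs
            pairs.append((float(birth[young]), float(value)))
        parent[young] = old
    return sorted(pairs)
```

On the example of Figure 2, this routine returns the second multiset of Section 2.3, with the essential pair \((t_{1},t_{5})\) in place of \((t_{1},\infty)\).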
### Comparing two persistence diagrams using the Wasserstein distance
A persistence diagram provides a summary of the changes in the connected components as we progress along the sequence of graphs. Let us consider two sequences of graphs
\[\mathfrak{G}^{1}=G_{0}^{1}\to G_{1}^{1}\rightarrow\ldots G_{\alpha_{1}}^{1}\]
and
\[\mathfrak{G}^{2}=G_{0}^{2}\to G_{1}^{2}\rightarrow\ldots G_{\alpha_{2}}^{2},\]
corresponding to two cell contours, with their associated persistence diagrams \(D_{1}=\mathfrak{D}(\mathfrak{G}^{1})\), \(D_{2}=\mathfrak{D}(\mathfrak{G}^{2})\). We define the distance between the cell contours as the distance between \(D_{1}\) and \(D_{2}\), where the distance is the _Wasserstein distance_, defined below.
Imagine that there are \(N\) farms that serve \(N\) markets, and assume balance, that is, that each farm produces enough fruits and vegetables as needed by one market. A company in charge of the distribution of the produce from the farms to the market will take into account the individual cost of transport from any farm to any market to find an 'optimal transportation plan', namely an assignment of farms to markets that leads to a minimal total cost for the transport. The seemingly simple problem can be traced back to the work of Monge in the 1780s [25]. What makes it so interesting is that its solution includes two essential components. First, it defines the assignment between farms and markets, enabling the registration between those two sets. Second, and more relevant to us, it defines a distance between the set of farms and the set of markets, with such distance being referred to as the Monge distance, the Wasserstein
distance, or the earth mover's distance, depending on the field of applications.
Formally, if \(F\) is the set of farms and \(M\) the set of markets, and if we define \(C(i,j)\) the cost of transport between farm \(i\) and market \(j\), the assignment problem refers to finding a bijection \(f\) between \(F\) and \(M\) that minimises
\[U=\sum_{i\in F}C(i,f(i)). \tag{1}\]
Note, \(f\) can be seen as a permutation of \(\{1,\ldots,N\}\). As mentioned above, the optimal \(U_{min}\) is a distance between \(F\) and \(M\). This is the distance we use to compare two cell contours based on their persistence diagram.
As described above, a persistence diagram is defined by a set of points. Let \(S_{1}\) (resp. \(S_{2}\)) be the set of points associated with \(D_{1}\) (resp. \(D_{2}\)):
\[S_{1}=\{x_{1},\ldots,x_{N}\}\] \[S_{2}=\{y_{1},\ldots,y_{N}\}\]
Note that we assume first that the two sets have the same number of points. We define the cost matrix \(C\) to be a power of the Euclidean distance, i.e.,
\[C(x_{i},y_{j})=||x_{i}-y_{j}||^{p}\]
The \(p\)-Wasserstein distance between \(S_{1}\) and \(S_{2}\) is then:
\[W_{p}(S_{1},S_{2})=\left(\min_{f}\sum_{x_{i}\in S_{1}}||x_{i}-f(x_{i})||^{p} \right)^{1/p}\]
The formalism defined above assumes that the two sets of points \(S_{1}\) and \(S_{2}\) considered have the same size, that is, there are as many points in \(D_{1}\) as there are points in \(D_{2}\). There is no reason that this is the case. In the more general case, \(S_{1}\) contains \(N_{1}\) points and \(S_{2}\) contains \(N_{2}\), with \(N_{1}>N_{2}\), without loss of generality. This problem, however, can easily be reduced to the balanced case presented above by adding \(N_{1}-N_{2}\) pseudo, or 'ghost', points to \(S_{2}\) so that the two corresponding sets have the same cardinality. The distance between a point in \(S_{1}\) and one of these pseudo-points can be chosen arbitrarily. One option is to position the "ghost" points on the diagonal of \(D_{2}\).
In the following, we will use the 2-Wasserstein distance to compare two cell contours via their persistence diagrams.
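As an illustration of how this reduces to a standard assignment problem, the sketch below computes the 2-Wasserstein distance between two finite persistence diagrams (finite because the essential classes have already been paired), using the Euclidean ground cost of the definition above and 'ghost' points on the diagonal. The function name is ours, and packages such as persim or GUDHI provide equivalent routines.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(diag1, diag2):
    """2-Wasserstein distance between two finite persistence diagrams, given
    as arrays of (birth, death) points; any point may instead be matched to
    its orthogonal projection onto the diagonal (a 'ghost' point)."""
    d1 = np.atleast_2d(np.asarray(diag1, dtype=float))
    d2 = np.atleast_2d(np.asarray(diag2, dtype=float))
    m, n = len(d1), len(d2)

    # Squared distance of each point to its projection onto the diagonal y = x.
    to_diag1 = (d1[:, 1] - d1[:, 0]) ** 2 / 2.0
    to_diag2 = (d2[:, 1] - d2[:, 0]) ** 2 / 2.0

    cost = np.zeros((m + n, m + n))
    # Real point <-> real point: squared Euclidean distance.
    cost[:m, :n] = ((d1[:, None, :] - d2[None, :, :]) ** 2).sum(axis=2)
    # Real point <-> any ghost: the cost of dropping it onto the diagonal.
    cost[:m, n:] = to_diag1[:, None]
    cost[m:, :n] = to_diag2[None, :]
    # Ghost <-> ghost assignments cost nothing (bottom-right block stays 0).

    rows, cols = linear_sum_assignment(cost)
    return float(np.sqrt(cost[rows, cols].sum()))
```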
## 3 Materials and Methods
### Human Mesenchymal Stem Cells
Adult human mesenchymal stem cells (hMSCs) were purchased from Lonza (catalogue \(\#PT-2501\)) and cultured in low glucose DMEM (Gibco, \(\#1885-023\)) supplemented with 10% FBS (Sigma-Aldrich, Ref. \(F7524\)), and 1% penicillin/streptomycin (Gibco, \(\#15140122\)) in regular tissue culture treated flasks (Greiner Bio-One, \(75cm^{2}\), \(\#658175\)) at \(37^{\circ}\) C and \(5.0\%\) CO\({}_{2}\). Cells were kept subconfluent at low density all the time and passaged and split every two or three days using trypsin incubation of 3 min for detachment after a washing step with PBS (Gibco, \(\#14190144\)). Cells were seeded on ibidi \(\mu\)-Dishes (35 mm, high, ibiTreat, Cat.No: \(\#81156\)) at a density of 500 cells cm\({}^{-2}\) to maintain a sufficient number of isolated cells for observation and grown for 24 hours under identical culture conditions. The cells were then washed once with PBS and
chemically fixed for 5min in a 10% solution of formaldehyde (Sigma-Aldrich, 252549) in PBS. Next, cells were permeabilized with TritonX (Sigma-Aldrich, T 9284) and extensively washed with PBS. Filamentous actin was stained using fluorescent Phalloidin-Atto 550 (ATTO-TEC GmbH, AD \(550-81\)) and the nucleus was visualised using a DNA-intercalating dye (Invitrogen, Hoechst #33342).
### Unbiased Microscopy
The fixed cells were imaged on an inverted fluorescence microscope (Zeiss AxioObserver, Oberkochen, Germany) using a 20x objective (Zeiss, Plan-Neofluar, 440340-9904) and recorded by an sCMOS camera (Andor Zyla, 4.2P USB3.0) using two filter sets (blue (Zeiss Filterset 49) and red (AHF, F46-008)) for the stained nucleus and actin, respectively. For unbiased data acquisition, the samples were inspected using the nucleus channel first, selecting cells that were isolated (no other nucleus in the field of view) and had a healthy-looking nondeformed nucleus. Cells with multiple nuclei as well as any oddly shaped nuclei were excluded to avoid recording cell outlines from abnormal cells. Subsequently, the actin channel of the cell was recorded to complete the data set for each cell. In this way, three individual data sets were recorded from three individual ibidi \(\mu\)-Dishes.
### Image Processing and Contour Generation
We used the FilamentSensor2.0 tool [26] to perform the image processing and extract the contour of each cell. Here, we used the features 'Include Area-Outline' to export the contour from the binarized image of the cells. The _center_ of the cell is obtained from the _center of mass_ from the aligned microscopy image of the nucleus. Here, we thresholded the nucleus in Fiji [27] using the 'Otsu' method, before outlining it and determining the \(x\)- and \(y\)-coordinates of the centre of mass.
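For readers who wish to reproduce the nucleus-centre step outside Fiji, the following sketch performs the equivalent Otsu thresholding and centre-of-mass computation in Python; the package choices and function name are ours, not part of the original pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def nucleus_centre(nucleus_image):
    """Binarise a fluorescence image of the nucleus with Otsu's method and
    return the (x, y) coordinates of the centre of mass of the mask."""
    img = np.asarray(nucleus_image, dtype=float)
    mask = img > threshold_otsu(img)
    cy, cx = ndimage.center_of_mass(mask)   # returned in (row, column) order
    return cx, cy
```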
### Contour Analysis: computing the distance between 2 cells
After extracting the contour from each image and identifying the centre of the nucleus, we convert it to the graph representation \(G\). Recall that every vertex in \(G\) is of degree 2, and \(G\) contains a single cycle. Let \(V=\{v_{i}\}_{i=1}^{n}\) be the set of vertices of \(G\), ordered clockwise around the contour. Then every edge \(e\) of \(G\) is of the form \((v_{i},v_{i+1})\), where \(v_{n+1}=v_{1}\). Before we obtain our sequence of graphs \(\mathfrak{G}\), we _clean_ our graph representation \(G\) of \(C\) by replacing any set of consecutive edges \(\{(v_{i},v_{i+1}),\ldots,(v_{j-1},v_{j})\}\) which are collinear with the single edge \((v_{i},v_{j})\), and removing the vertices \(v_{k}\) for \(i<k<j\).
**Remark 4**.: Consider a contour \(C\), and let \(G\) be the original graph representation and \(G^{\prime}\) the graph after it has been cleaned. As the metrics on the edges of \(G,G^{\prime}\) are defined as the maximum of the values on the 2 vertices, the sequences of graphs \(\mathfrak{G}\) and \(\mathfrak{G}^{\prime}\) generated by these metrics on \(G\) and \(G^{\prime}\) respectively will have different topological features. In particular, connected components may be born _later_, by the removal of vertices that are closer to the base point of the radial distance function. These changes in values are bounded, and hence, by the stability of persistence diagrams [28], the distance between the respective persistence diagrams is also bounded. Although it is possible to generate contours where this cleaning process leads to large bounds on the distance between the persistence diagrams, the geometric features that lead to this are not of concern in our application. Hence, we prioritise computational efficiency and proceed with the cleaned graphs.
Working with the cleaned graph \(G_{X}\) for each cell \(X\), we filter \(G_{X}\) (see Definition 1 and Section 2.3), and obtain a persistence diagram \(D_{X}\) (Definition 2). Then we construct a distance matrix \(M\), using the 2-Wasserstein distance between the persistence diagrams \(D_{X},D_{Y}\) as the distance between two cells \(X,Y\).
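A compact sketch of this pipeline is given below, reusing the `radial_sublevel_persistence` and `wasserstein2` sketches introduced earlier; the collinearity tolerance and function names are our own choices, not part of the original implementation.

```python
import numpy as np
# radial_sublevel_persistence and wasserstein2 are the sketches given earlier.

def clean_contour(vertices, tol=1e-9):
    """Remove every vertex that is collinear with its two neighbours, so that
    runs of collinear edges collapse to a single edge (cf. Remark 4).
    `vertices` is the ordered list of points of a closed contour."""
    vertices = np.asarray(vertices, dtype=float)
    n = len(vertices)
    keep = []
    for i in range(n):
        prev_pt, cur, nxt = vertices[i - 1], vertices[i], vertices[(i + 1) % n]
        # Twice the signed area of the triangle (prev, cur, next); ~0 means collinear.
        cross = (cur[0] - prev_pt[0]) * (nxt[1] - prev_pt[1]) \
              - (cur[1] - prev_pt[1]) * (nxt[0] - prev_pt[0])
        if abs(cross) > tol:
            keep.append(i)
    return vertices[keep]

def distance_matrix(contours, centres):
    """All pairwise 2-Wasserstein distances between cleaned cell contours."""
    diagrams = [radial_sublevel_persistence(clean_contour(c), p)
                for c, p in zip(contours, centres)]
    n = len(diagrams)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            M[i, j] = M[j, i] = wasserstein2(diagrams[i], diagrams[j])
    return M
```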
### Clustering cells based on their contour
Clustering is the task of regrouping cells such that those that belong to the same group, referred to as a cluster, are more similar to each other than to those in other clusters. The similarity between two cells is set to be the 2-Wasserstein distance between the persistence diagrams of their contours (see above). The clustering of the cells is then performed using agglomerative hierarchical clustering analysis, or HCA. This is a bottom-up approach in which each cell starts in its own cluster, and pairs of clusters are merged iteratively until all cells belong to the same cluster. The whole procedure defines a clustering tree. While the distance between two cells is clearly defined above, a key element is to define the distance between two clusters. Since two clusters A and B are sets of elements, the distance between A and B is defined as a function of the pairwise distances between their elements. Four common choices of linkage are:
* _Average linkage_: the distance between two clusters is the arithmetic mean of all the distances between the objects of one and the objects of the other: \[d(A,B)=\sum_{a\in A}\sum_{b\in B}\frac{d(a,b)}{|A||B|}\] where \(|\cdot|\) stands for cardinality. Average linkage, also called UPGMA, is the default linkage for most HCA implementations.
* _Single linkage_: the distance between two clusters is the smallest distance between the objects in one and the objects in the other. \[d(A,B)=\min\{d(a,b),a\in A,b\in B\}\]
* _Complete linkage_: the distance between two clusters is the largest distance between the objects in one and the objects in the other. \[d(A,B)=\max\{d(a,b),a\in A,b\in B\}\]
* _Ward's linkage_ accounts for the variances of the clusters to be compared. For a cluster \(A\), the variance \(SSE(A)\) is defined as: \[SSE(A)=\sum_{a\in A}d(a,m(A))^{2}\] where \(d\) is the underlying distance used to compare two objects and \(m(A)\) is either the centroid (if it can be computed) or the medoid of the cluster (the medoid is the point in \(A\) that has the least total distance to the other points in \(A\)). The Ward distance between two clusters \(A\) and \(B\) is then: \[d(A,B)=SSE(A\bigcup B)-(SSE(A)+SSE(B))\]

The choice of the linkage can have a significant influence on the clustering found by HCA: for example, single linkage only looks locally at cluster distance and as such may lead to elongated clusters, while, conversely, complete linkage will have a tendency to generate more compact clusters. There is no consensus as to which linkage to use for a specific data set; this is, in fact, an active area of research.
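In practice, all four linkage schemes can be run on the same precomputed distance matrix. The sketch below uses SciPy purely for illustration (the analyses in this paper were performed with the package cited in the following paragraph), and the number of clusters is left as a parameter; note that SciPy's Ward update formally assumes Euclidean distances, so applying it to Wasserstein distances is a heuristic.

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_with_linkages(M, n_clusters=4):
    """Hierarchical clustering of a precomputed distance matrix M under the
    four linkage schemes above; returns the linkage tree and a flat cut into
    `n_clusters` clusters for each scheme."""
    condensed = squareform(M, checks=False)   # condensed form expected by linkage()
    results = {}
    for method in ("average", "single", "complete", "ward"):
        Z = linkage(condensed, method=method)
        labels = fcluster(Z, t=n_clusters, criterion="maxclust")
        results[method] = (Z, labels)
    return results
```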
To avoid possible biases associated with the choice of linkage, we will use all four options in our analyses, performing HCA with [29]. However, this requires a way to compare the results of one option with the others. We define our own concept of purity to perform such a comparison, as follows. Let \(C_{1}\) be one cluster identified with HCA with a linkage method \(L_{1}\). It is possible that \(C_{1}\) may not be identified as its own cluster within the tree \(T_{2}\) generated with another linkage method \(L_{2}\). To assess how well \(T_{2}\) recognises \(C_{1}\), we use the following algorithm:
1. We choose first a seed, \(S_{1}\), i.e. an object that belongs to \(C_{1}\). We initialise a list of objects \(O=\{S_{1}\}\).
2. We identify the leaf of \(T_{2}\) corresponding to \(S_{1}\), and add to the list \(O\) the object that has the same parent \(P_{1}\) in \(T_{2}\) as \(S_{1}\).
3. We find the parent \(P_{2}\) of \(P_{1}\) and add to \(O\) all objects that are in the subtree of \(T_{2}\) starting from \(P_{2}\). We set \(P_{1}\gets P_{2}\).
4. We repeat step 3 until \(O\) contains all objects in \(C_{1}\)
If the results with the linkage \(L_{2}\) map exactly to the results with the linkage \(L_{1}\), \(O\) will be equal to \(C_{1}\). However, in general, \(O\) will be bigger because it will include objects that are found by \(L_{2}\) to be similar to objects in \(C_{1}\) that were not identified by \(L_{1}\). The _purity_\(P(C_{1}/L_{2})\) of \(C_{1}\) with respect to \(L_{2}\) is then defined as:
\[P(C_{1}/L_{2})=\frac{N-|O|}{N-|C_{1}|} \tag{2}\]
where \(|\cdot|\) stands for cardinality and \(N\) is the total number of objects. Note that \(P\) is between 0 and 1. The closer \(P\) is to one, the more consistent the two linkage strategies \(L_{1}\) and \(L_{2}\) are with respect to \(C_{1}\).
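A sketch of this purity computation on SciPy linkage trees is given below; the cluster is passed as a set of leaf indices, the tree \(T_{2}\) as a linkage matrix, and the seed-and-grow loop follows steps 1-4 above (the function and variable names are ours).

```python
from scipy.cluster.hierarchy import to_tree

def purity(cluster, Z2, n_leaves, seed=None):
    """Purity of `cluster` (leaf indices of a cluster found with linkage L1)
    with respect to the tree encoded by the linkage matrix Z2 (linkage L2)."""
    cluster = set(cluster)
    seed = next(iter(cluster)) if seed is None else seed

    # Parent of every node in the tree encoded by Z2: row i merges nodes
    # Z2[i, 0] and Z2[i, 1] into the new node with id n_leaves + i.
    parent = {}
    for i, (a, b, _, _) in enumerate(Z2):
        parent[int(a)] = parent[int(b)] = n_leaves + i

    _, nodelist = to_tree(Z2, rd=True)         # nodelist[k] is the node with id k
    node, O = seed, {seed}
    while not cluster.issubset(O):
        node = parent[node]                    # climb to the next ancestor
        O = set(nodelist[node].pre_order())    # all leaves below that ancestor
    return (n_leaves - len(O)) / (n_leaves - len(cluster))
```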
## 4 Results and discussion
With the advent of imaging techniques associated with advanced microscopes, cell biology has become quantitative. It is now common to study even large populations of cells by analysing their morphological features captured in an image. For example, those morphological features may be measured from two populations of the same cell types, with one population treated with chemical or physical constraints, while the other is not treated and serves as a control population. The effects of the treatment are then quantified by measuring changes in the features in the two populations (see, for example, [30, 31, 32, 33]). Identifying which morphological feature is relevant and measuring those features in the images are fields of study by themselves (see [33] for a review). However, there are two other main difficulties that cannot be ignored in such studies. First, as with any experimental techniques, there are possible artefacts coming from the sample itself (dead cells, cells undergoing apoptosis, dividing cells, etc.), the cell-fixing process and subsequent staining, or even the imaging and/or image processing steps of the analysis. Detecting cells that were affected by such artefacts, usually referred to as _outlier cells_, is a time-consuming process if performed manually, especially with large populations of cells, and might sometimes be subjectively influenced by the human experts. Second, the population of cells itself may be heterogeneous (e.g. primary cells collected from a patient), leading to _sub-populations_. In this section, we report how our method for comparing the shapes
of hMSC cells using persistence homology applied to the cell contours can help identify both unusual cell shapes as well as possible sub-populations. hMSC cells are known to exist as heterogeneous populations (see, for example, [34]).
We analysed one set of hMSCs, \(X1\), with the experimental setup and analysis pipeline described in Section 3. The whole procedure and results are discussed in Section 4.1.
### \(X1\)
The set \(X1\) consists of 136 cells. These cells have already been selected based on manual inspection, as described in Section 3.2. To further analyse the homogeneity of this set of cells, we computed all pairwise distances between the cell contours using the persistence homology technique described above. The corresponding distance matrix is visualised as a heat map in Figure 3. The column/row of mostly bright yellow suggests that there is one cell that differs significantly from the others. This cell is shown in Figure 4. Clearly, this cell is oddly shaped: it is long and thin, with three long filopodia, significantly different from the expected shape of an hMSC (see Figure 1 and Figure 7). Such a shape is usually considered an outlier.
We perform clustering of cell contours in X1 using the Wasserstein distance between their associated persistence diagrams, in the presence (Figure 5) and in the absence (Figure 6) of the 'outlier' X1-031 identified above. We used HCA, with four different linkages: average, complete, single, and Ward.
Figure 4: Image of the unusual cell shape (X1-031) identified in Figure 3 (image processed using [36].)
Figure 3: Heat map of the distance matrix for \(X1\). There is a cell that has distinctly higher than average distances to the other cells, indicated by the row/column of mostly bright yellow. Generated with [35].
As expected, cell X1-031 is identified as its own cluster with all four linkages in Figure 5. This cell has a unique shape that differentiates it from other hMSC cells. Although there are many possible reasons for this behaviour, X1-031 is considered an outlier.
Fig 5: Dendrograms for \(X1\), the colours correspond to 4 clusters obtained using average linkage. (A) Average linkage. (B) Complete linkage. (C) Single linkage. (D) Ward linkage. In each of these, there is an outlier, with the corresponding leaf coloured purple. Generated with [37, 38].
The clustering of the set X1 with this outlier removed identifies subgroups among X1. However, those subgroups seem to differ under different choices of the linkage for HCA (Figure 6). This behaviour is not unexpected, as different linkage schemes capture different geometries for the cluster (see Section 3.5). It is common to focus on only one linkage scheme, usually the average linkage, and ignore the others. Our approach is different. We use all four linkages and assess their consistency, as illustrated in Figure 6. We start with the average linkage scheme and cut the associated dendrogram to get four clusters. These four clusters are referenced as A (in red), with \((n=86)\) elements, B (in blue) with \((n=7)\) elements, C (in green, \((n=22)\)), and D (in purple, \((n=24)\)). We then consistently colour the dendrograms for all linkage schemes based on those clusters A, B, C, and D. As expected, there are differences. However, some consistencies are observed. For example, we note that cluster D (in purple) is grouped together across all 4 linkage schemes. To confirm this visual consistency, we computed a purity score (see Section 3.5) of the clusters obtained with the average linkage in all four linkage schemes. The purity score quantifies how 'pure' a group of objects is within a dendrogram. It is computed by first identifying the subtree within the dendrogram that contains all objects within that group. If this subtree only contains this group, it is deemed pure and the purity score is set to 1. If instead this subtree contains other objects, its purity is reduced. When the subtree is the whole tree, the purity score is reduced to 0. The purity scores of clusters A to D are reported in Table 1, while examples of cells for each clusters are shown in Figure 7.
Fig 6: Dendrograms for \(X1\) main, the colours correspond to 4 clusters obtained using average linkage. (A) Average linkage. (B) Complete linkage. (C) Single linkage. (D) Ward linkage. In each of these, there is a consistent sub-population, coloured purple. Generated using [37, 38].
As mentioned above, cluster D (purple) is visually homogeneous within all four linkage schemes: this is confirmed as its purity scores remain equal to 1. Cells in this cluster have compact shapes and a prominent nucleus, as expected from cells that have been plated on glass. The same type of cells was distinguished as a sub-population FC by Haaster _et al._[34]. In contrast, cluster A (red) is much less consistent within the different linkage schemes, with purity scores close to 0 (with the obvious exception of the average linkage). Visually, cells belonging to cluster A are more heterogeneous, with a star-shaped or a triangular shape (first row of Figure 7). This group of cells maps to the sub-population RS identified by Haaster _et al._. Cells belonging to cluster B are significantly more elongated. Their purity score is high with the exception of the single linkage scheme, but this could just be anecdotal as there are only 7 cells in this cluster. They may correspond to elongated, fibroblastic-like, spindle-shaped cells, identified as SS cells by Haaster _et al._. Cells in cluster C are mostly compact, similar to those in cluster D, but usually bigger. The purity scores of cluster C are close to 1, indicating that they form a group with homogeneous shapes. They were likely identified as belonging to the sub-population FC by Haaster _et al._.
\begin{table}
\begin{tabular}{l|c c c c}
 & \multicolumn{4}{c}{Cluster} \\
Linkage & A (red, \(n=86\)) & B (blue, \(n=7\)) & C (green, \(n=22\)) & D (purple, \(n=24\)) \\
\hline
average & 1.0 & 1.0 & 1.0 & 1.0 \\
complete & 0.0 & 1.0 & 0.732 & 1.0 \\
single & 0.021 & 0.0 & 0.056 & 1.0 \\
ward & 0.0 & 1.0 & 0.690 & 1.0 \\
\end{tabular}
\end{table}
Table 1: Purity score of the 4 clusters obtained with the average linkage for \(X1\) main, see Figure 6. The colour and size of each cluster are given in parentheses.
## 5 Conclusion
Cell biologists commonly study in parallel the morphology of cells with the regulation mechanisms that affect this morphology. In the case of stem cells, for example, the shapes they assume when plated on substrates with different rigidities are expected to define morphological descriptors of mechano-directed differentiation. The heterogeneous nature of cell populations is, however, a major difficulty when studying cell shape based on images from digital microscopes. It is common to manually assess first all the images associated with a population of cells under study in order to identify "outliers", i.e. cells with unusual shapes that raise questions on their nature (e.g. these cells could be associated with contamination) or on the presence of experimental artefacts. The aim of the present study was to propose an alternative, automated method to help with this manual assessment. We have developed a new method for analysing cell shapes that is based on three elements:
Fig 7: Example cells from each cluster of the set \(X1\) (after removal of the outlier, see text for details). Those clusters are identified with HCA and average linkage scheme (see Figure 6). All cell images are shown at the same magnification level. Images were processed using [36].
* _A description of cell shapes using persistence homology_. The shape of a cell is defined from its contour and the position of its nucleus. We compute a filtration of the edges defining the contour, using the radial distance to the nucleus as a filter. This filtration is used to define a persistence diagram that serves as a signature of the cell contour.
* _A distance between two cells_. This distance is the Wasserstein distance between the persistence diagrams of their contours.
* _A measure of homogeneity of cell subgroups_. We perform hierarchical clustering on cell shapes using the distance defined above, with four different linkage schemes. We define a purity score for subgroups of cells within the dendrograms associated with those clustering. This purity score reflects homogeneity.
We have tested our method on hMSC cells that are known to be heterogeneous. We have shown that it automatically identifies unusual cells that can then be deemed outlier or not, as well as sub-populations that are consistent with previous analyses of sub-populations of hMSCs [34].
There are many morphometric parameters that could have been included to complement our topological data analysis, such as cell area, aspect ratios, ellipticity, curvature of the contours,... It is our intent to complement our analyses with a more comprehensive set of morphological signatures of cell shapes. In addition, all those parameters, including the persistence diagrams presented in this paper, are computed based on 2D images. Cells are 3D objects and ultimately should be studied as such. The concepts we have introduced in this paper extend to the analyses of 3D surfaces. We will explore this in further studies.
|
2309.04849 | Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect
Representations | We propose EmoDistill, a novel speech emotion recognition (SER) framework
that leverages cross-modal knowledge distillation during training to learn
strong linguistic and prosodic representations of emotion from speech. During
inference, our method only uses a stream of speech signals to perform unimodal
SER thus reducing computation overhead and avoiding run-time transcription and
prosodic feature extraction errors. During training, our method distills
information at both embedding and logit levels from a pair of pre-trained
Prosodic and Linguistic teachers that are fine-tuned for SER. Experiments on
the IEMOCAP benchmark demonstrate that our method outperforms other unimodal
and multimodal techniques by a considerable margin, and achieves
state-of-the-art performance of 77.49% unweighted accuracy and 78.91% weighted
accuracy. Detailed ablation studies demonstrate the impact of each component of
our method. | Debaditya Shome, Ali Etemad | 2023-09-09T17:30:35Z | http://arxiv.org/abs/2309.04849v2 | # Speech Emotion Recognition with Distilled Prosodic and Linguistic Affect Representations
###### Abstract
We propose EmoDistill, a novel speech emotion recognition (SER) framework that leverages cross-modal knowledge distillation during training to learn strong linguistic and prosodic representations of emotion from speech. During inference, our method only uses a stream of speech signals to perform unimodal SER thus reducing computation overhead and avoiding run-time transcription and prosodic feature extraction errors. During training, our method distills information at both embedding and logit levels from a pair of pre-trained Prosodic and Linguistic teachers that are fine-tuned for SER. Experiments on the IEMOCAP benchmark demonstrate that our method outperforms other unimodal and multimodal techniques by a considerable margin, and achieves state-of-the-art performance of \(77.49\)% unweighted accuracy and \(78.91\)% weighted accuracy. Detailed ablation studies demonstrate the impact of each component of our method.
Debaditya Shome, Ali Etemad (Queen's University, Canada)
Index Terms: Speech emotion recognition, knowledge distillation, prosodic features, linguistic features.
## 1 Introduction
Speech Emotion Recognition (SER) is a challenging yet crucial task, with applications spanning a broad spectrum from human-computer interaction to mental health diagnostics. The inherent ambiguity in perceiving emotions and the variability across speakers and languages further amplifies the complexity of SER.
Speech emotion information is present in and can be extracted from two different domains, linguistic and prosodic [1]. The linguistic information includes the semantic aspects of emotion at the word level, while prosodic information includes the melodic aspects such as rhythm, tone, pitch, pauses, etc. Most existing solutions attempt to implicitly learn a combination of the two domains directly from raw speech signals. However, we identify four key problems in this category of approaches as follows.
(_i_) Implicitly learning prosodic information from audio is often less than optimal because the discretization of audio signals during training of leading speech models like HUBERT [1] and Wav2Vec2 [2] can lead to the weakening of important prosodic features.
(_ii_) Direct fine-tuning of existing speech models which were originally trained for Automatic Speech Recognition (ASR), on SER tasks, may not always yield strong performances [3].
(_iii_) Direct use of speech transcripts at _run-time_ can lead to low performances due to transcription errors [4].
(_iv_) Lastly, the use of both audio and linguistic information at run-time requires a multimodal system which can increase computational overhead.
To tackle the problems stated above, we propose EmoDistill, an SER method that learns both prosodic and linguistic information during training, but requires only input speech at run-time. Our method distills information from both logits and embeddings through a pre-trained prosodic teacher alongside a pre-trained linguistic teacher to learn unimodal representations for downstream SER. Experiments demonstrate that our method significantly outperforms prior solutions on the IEMOCAP [5] dataset to achieve state-of-the-art results. Additionally, ablation studies demonstrate the importance of each component of EmoDistill.
In summary, we make the following contributions: (**1**) We introduce EmoDistill, a novel cross-modal Knowledge Distillation (KD) framework for learning unimodal representations from speech that explicitly capture both the linguistic and prosodic aspects of emotions. Unlike multimodal models combining audio and text modalities, EmoDistill doesn't require explicitly transcribed text during inference, thereby reducing the computational overhead and errors that arise from transcription and prosodic feature extraction. (**2**) We empirically evaluate the importance of the ability to capture and distinguish linguistic and prosodic components of emotion in speech through detailed ablation studies. (**3**) Our rigorous evaluation on the IEMOCAP benchmark in a subject-independent setup demonstrates that EmoDistill outperforms previous state-of-the-art methods and achieves \(77.49\)% unweighted accuracy (UA) and \(78.91\)% weighted accuracy (WA).
## 2 Related Work
The recent progress of deep learning has had a considerable impact on the field of SER. Mao _et al._[6] utilized a Convolutional Neural Network (CNN) with Autoencoder-based pre-training for improved SER performance. Luo _et al._[7]
explored the combination of handcrafted features and Convolutional Recurrent Neural Network (CRNN) architecture for SER. Similarly, different variants of CNNs, RNNs or CRNNs have been developed for SER, some of which have been equipped with Attention mechanisms. Recently, transformer-based speech models with self-supervised pre-training have shown promising performance in various downstream tasks including SER. Wang _et al._[8] fine-tuned several variants of HuBERT and Wav2Vec2 for SER, speaker verification, and spoken language understanding tasks. Wagner _et al._[9] analyzed the various factors like fairness, generalization, efficiency and robustness of pre-trained speech models for continuous SER. They found that such pre-trained transformers show better robustness and fairness compared to CNNs.
Multimodal methods that incorporate both speech and text in training and run-time have also been explored for SER. Sun _et al._[10] utilized CNN and CNN-LSTM networks for multimodal SER from speech and text data on the IEMOCAP corpus. Heusser _et al._[11] explored multimodal fusion from BiLSTM-based speech features with text features from a pre-trained XLNet language model. Triantafyllopoulos _et al._[12] studied various combinations of speech features from Multi-stage CNNs and text-features from BERT for SER and demonstrated improved performance. Deschamps _et al._[13] analyzed several multimodal fusion strategies using Wav2Vec2-based speech features and FlauBERT-based text features for SER on an emergency call-center recordings corpus. Ho _et al._[14] proposed an SER method with a multi-level multi-head attention mechanism for fusion of MFCC-based audio features and BERT-based text features.
As discussed in the previous section, fusion-based methods have multiple disadvantages like transcription errors, due to which cross-modal KD is being explored. KD was introduced by Hinton _et al._[15] for model compression, where they utilized only logit-level information. Subsequently, KD was adapted to transfer cross-modal information in low-resource tasks such as SER. Hajavi _et al._[16] used video as privileged information for distilling feature-level knowledge into a unimodal student on speech data, and demonstrated improved performance on speaker recognition and SER. Ren _et al._[17] developed a self-distillation framework for SER aimed at model compression and demonstrated improvements over layer-wise KD.
## 3 Method
The objective of our framework is to train a unimodal speech student model using KD from pre-trained prosodic and linguistic teacher models. The overview of our method is presented in Figure 1. The details of each component are described as follows.
Figure 1: EmoDistill Framework. Our student network is trained using a distillation of logit-level and embedding-level knowledge from frozen linguistic and prosodic teacher networks, along with standard cross-entropy loss. During inference, we only use the student network in a unimodal setup, avoiding computational overhead as well as transcription and prosodic feature extraction errors.

**Linguistic teacher.** We consider a teacher model \(f_{T}^{L}\) with strong language representations and refer to it as the Linguistic teacher. We adopt the pre-trained _BERT-base_[18] model as the backbone for \(f_{T}^{L}\), and perform supervised fine-tuning on the training set of our emotion classification corpus.
**Prosodic teacher.** We consider a teacher model \(f_{T}^{P}\) that takes explicit prosodic features as input, and refer to it as Prosodic teacher. We use eGeMAPs Low-Level Descriptors (LLDs) [19] as prosodic features, which are commonly used in SER literature. We perform supervised fine-tuning of \(f_{T}^{P}\) on the training set of our emotion classification corpus. We adopt a 2D ResNet-based [20] backbone for \(f_{T}^{P}\) which consists of \(4\) residual blocks.
**Student KD.** To facilitate knowledge transfer from our Linguistic and Prosodic teacher models, we follow a teacher-student KD setup and keep the weights of the teachers frozen. We consider a uni-modal speech model \(f_{S}\) as the student, which consists of a pre-trained transformer encoder followed by \(2\) GELU-activated feedforward projection layers for disjoint linguistic and prosodic embeddings. We keep these disjoint in order to allow optimal embedding-level KD from each teacher without interference. These two embeddings are concatenated and passed on to a feed-forward network (FFN) for final output predictions. First, we transfer the logit-level knowledge using traditional KD with temperature-scaled labels [15]. Specifically, we minimize the KL-Divergence \(L_{KL}\) between the predicted logit distributions of teacher and student models, where the objective becomes:
\[\mathcal{L}_{\textit{logits}}=\mathcal{L}_{\textit{KL}}(y_{S}||y_{L})+ \mathcal{L}_{\textit{KL}}(y_{S}||y_{P}). \tag{1}\]
Here, \(y_{S}\) refers to the predictions of the student, while \(y_{L}\) and \(y_{P}\) represent the predictions of the Linguistic and Prosodic teacher models, respectively. In all cases, the predictions \(y\) are obtained by applying a temperature parameter \(\tau\) in the output softmax activation function. In practice, we use different values of \(\tau\) for KD from \(f_{T}^{L}\) and \(f_{T}^{P}\). Let \(z_{c}\) be the output logit for class \(c\), among a total of \(N\) classes. The temperature-scaled prediction \(y_{c}\) is obtained as:
\[y_{c}=\frac{e^{z_{c}/\tau}}{\sum_{k=1}^{N}e^{z_{k}/\tau}}. \tag{2}\]
Next, we use embedding-level KD to transfer knowledge to the student model from the latent space of Linguistic and Prosodic teacher models. Let \(z_{L}\) and \(z_{P}\) denote the embeddings of Linguistic and Prosodic teachers, while \(z_{L}^{{}^{\prime}}\) and \(z_{P}^{{}^{\prime}}\) denote the embeddings of the student model from linguistic and prosodic projection layers respectively. We minimize the negative cosine similarity \(L_{cos}\) among the teacher and student embeddings as follows:
\[\mathcal{L}_{\textit{embeddings}}=\mathcal{L}_{\textit{cos}}(z_{L}^{{}^{ \prime}},z_{L})+\mathcal{L}_{\textit{cos}}(z_{P}^{{}^{\prime}},z_{P}). \tag{3}\]
Given two embeddings \(a\) and \(b\), \(L_{cos}\) can be defined as:
\[\mathcal{L}_{\textit{cos}}(a,b)=-\frac{a}{\left\|a\right\|_{2}}\cdot\frac{b}{\left\|b\right\|_{2}}, \tag{4}\]
where \(\left\|\cdot\right\|_{2}\) represents \(\ell_{2}\)-norm. Finally, the total training loss of EmoDistill becomes:
\[\mathcal{L}_{\textit{EmoDistill}}=\alpha\mathcal{L}_{\textit{logits}}+\beta\mathcal{L}_{\textit{embeddings}}+\gamma\mathcal{L}_{\textit{CE}}, \tag{5}\]
where \(\mathcal{L}_{\textit{CE}}\) refers to the standard cross-entropy loss, and \(\alpha\), \(\beta\), \(\gamma\) are loss coefficients.
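For concreteness, the following is a minimal PyTorch sketch of the combined training loss in Eqs. (1)-(5). The tensor names, batching, and the direction of the KL divergence follow the common distillation convention rather than any implementation detail confirmed here, and the default coefficients simply reuse the values reported later in the implementation details; it is an illustrative sketch, not the authors' code.

```python
import torch.nn.functional as F

def emodistill_loss(student_logits, ling_logits, pros_logits,
                    z_l_student, z_p_student, z_l_teacher, z_p_teacher,
                    labels, tau_l=5.0, tau_p=0.5,
                    alpha=1.0, beta=10.0, gamma=2.0):
    """Combined EmoDistill objective: logit-level KD + embedding-level KD + CE."""
    # Logit-level KD, Eq. (1): KL divergence against each frozen teacher,
    # with temperature-scaled distributions as in Eq. (2).
    log_y_s_l = F.log_softmax(student_logits / tau_l, dim=-1)
    log_y_s_p = F.log_softmax(student_logits / tau_p, dim=-1)
    y_l = F.softmax(ling_logits / tau_l, dim=-1)
    y_p = F.softmax(pros_logits / tau_p, dim=-1)
    l_logits = (F.kl_div(log_y_s_l, y_l, reduction="batchmean") +
                F.kl_div(log_y_s_p, y_p, reduction="batchmean"))

    # Embedding-level KD, Eqs. (3)-(4): negative cosine similarity between the
    # student's disjoint projections and the corresponding teacher embeddings.
    l_embed = (-F.cosine_similarity(z_l_student, z_l_teacher, dim=-1).mean()
               - F.cosine_similarity(z_p_student, z_p_teacher, dim=-1).mean())

    # Total loss, Eq. (5).
    l_ce = F.cross_entropy(student_logits, labels)
    return alpha * l_logits + beta * l_embed + gamma * l_ce
```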
## 4 Experiments
### Dataset
We use the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset for our experiments [5]. IEMOCAP is the most widely used benchmark for SER. The dataset encompasses roughly \(12\) hours of audio-visual content, with an average duration of \(4.5\) seconds for each vocal segment. We only use the audio and text transcriptions in this work. Following prior works, we use 4 categories of emotions: 'neutral', 'angry', 'sad', and 'happy' (merged with the 'excited' class).
### Implementation details
We train all models on \(4\times\) NVIDIA A100 GPUs, using a batch size of 128, except for EmoDistill w/ HuBERT-large, for which we use a batch size of 64 due to computational limitations. We use the AdamW optimizer with a \(\mathrm{CosineAnnealingWarmup}\) learning rate (LR) scheduler starting with a base LR of \(1\times 10^{-4}\). For logit-level KD from the Prosodic teacher, a temperature \(\tau_{P}=0.5\) is chosen, while for the Linguistic teacher we use \(\tau_{L}=5\) (see ablation experiments in Section 4.3). \(\alpha=1\), \(\beta=10\), \(\gamma=2\) are used as loss coefficients. The pre-trained weights of HuBERT-base and HuBERT-large were obtained from TorchAudio. For BERT-base, we use the 'bert-base-uncased' checkpoint from HuggingFace. For extracting eGeMAPS LLDs, we use the opensmile-python toolkit [21].
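As a rough illustration of the prosodic feature extraction step, the snippet below pulls eGeMAPS Low-Level Descriptors from a single waveform with the opensmile-python toolkit; the feature-set name, feature level, and the example file path are assumptions based on the toolkit's documented interface rather than the exact configuration used here.

```python
import opensmile

# eGeMAPS Low-Level Descriptors (LLDs), used as input to the Prosodic teacher.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,               # assumed eGeMAPS variant
    feature_level=opensmile.FeatureLevel.LowLevelDescriptors,
)
llds = smile.process_file("example_utterance.wav")  # hypothetical path; returns a DataFrame
print(llds.shape)                                   # (num_frames, num_LLD_features)
```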
### Results and Discussion
**Performance.** Following prior works, we evaluate EmoDistill on the IEMOCAP benchmark using \(10\)-fold cross-validation in the leave-one-speaker-out scheme. The results are shown in Table 1. It can be clearly seen that EmoDistill significantly outperforms prior works in terms of both WA and UA metrics, with improvements of up to \(7.26\%\) in UA and \(4.99\%\) in WA over the best previous method [22]. Furthermore, we observe that while our method only uses a single modality during inference, it still outperforms prior multimodal works that rely on dedicated components for the text and audio modalities [10, 11, 12, 14].
**Ablation studies.** To understand the impact of each component of EmoDistill, we conduct a systematic ablation study and present the results in Table 2. First, we individually remove \(\mathcal{L}_{\textit{logits}}\) as well as \(\mathcal{L}_{\textit{embedding}}\) and observe a drop of between 1% and 2% in performance in each case. Next, we ablate the model by individually removing the entire Prosodic and Linguistic teachers (\(f_{T}^{P}\) and \(f_{T}^{L}\)). In this experiment, we observe that while the removal of either component degrades performance, the ablation of the Linguistic teacher has a more significant negative impact. We then ablate both teachers together (\(f_{T}^{P}\) and \(f_{T}^{L}\)), essentially only using the HuBERT-base backbone with fine-tuning for SER, and observe a considerable drop in performance. Finally, we remove the student network along with either of the teachers, essentially only using the remaining teacher for inference. We observe here that while both tests result in a considerable drop in performance, the removal of \(f_{S}\) and \(f_{T}^{L}\) together has the highest negative impact, indicating that linguistic information is crucial: prosodic information can serve as complementary knowledge to improve SER but cannot replace linguistic information.
Next, we analyze the impact of the temperature parameter \(\tau\) on the performance. To this end, we remove \(f_{T}^{L}\) and set the prosodic temperature \(\tau_{P}\) to \(0.1\), \(0.5\), \(2\), and \(4\). Similarly, we remove \(f_{T}^{P}\) and set the linguistic temperature parameter \(\tau_{L}\) to \(0.5\), \(2\), \(4\), and \(10\). As shown in Figure 2 (Left), \(\tau_{P}=0.5\) (hard logits) works best and increasing \(\tau_{P}\) causes a strong decline in performance. In the second case, as shown in Figure 2 (Right), we observe that \(\tau_{L}=4\) (soft logits) works best and decreasing \(\tau_{L}\) leads to a strong decline in performance. Although standard logit-level KD methods use soft logits (\(\tau>1\)), we observe that soft logits do not work well for the Prosodic teacher \(f_{T}^{P}\). Our intuition behind this is that since \(f_{T}^{P}\) is a weak teacher (see Table 2), smaller temperature values result in hard logits as per Eq. 2, and therefore improve performance by providing stronger supervision signals through distillation. Finally, we observe that for both teachers, temperatures that are too high or too low lead to a drop in performance.
## 5 Conclusion
We present EmoDistill, a novel cross-modal knowledge distillation framework for learning emotion representations from speech. EmoDistill explicitly captures linguistic and prosodic aspects of emotions in a unimodal inference setup, reducing computational overhead and limitations like transcription and prosodic feature extraction errors. For training our framework, EmoDistill extracts information from both the embedding and logit levels through a pair of pre-trained Prosodic and Linguistic teacher models that have been fine-tuned for SER. Experiments on the commonly used SER benchmark IEMOCAP demonstrate that our method considerably outperforms other state-of-the-art methods, achieving \(77.49\%\) unweighted accuracy (a \(7.26\%\) improvement) and \(78.91\%\) weighted accuracy (a \(4.99\%\) improvement). We demonstrate the importance of each component of our method through detailed ablation experiments.
## Acknowledgement
This work was supported by Mitacs, Vector Institute, and Ingenuity Labs Research Institute.
| **Method** | **Inf. Backbone** | **Modality** | **WA** | **UA** |
|---|---|---|---|---|
| [10] | CNN+LSTM | Multimodal | 61.2 | 56.01 |
| [11] | BiLSTM+XLNet | Multimodal | 71.40 | 68.60 |
| [12] | MFCNN+BERT | Multimodal | - | 72.60 |
| [14] | RNN+BERT | Multimodal | 73.23 | 74.33 |
| [23] | FCNN | Unimodal | 70.23 | 70.76 |
| [24] | TFCNN+DenseCap+ELM | Unimodal | 70.34 | 70.78 |
| [25] | LSTM+Attention | Unimodal | 70.50 | 72.50 |
| [26] | RNN-T | Unimodal | 71.72 | 72.56 |
| [27] | CNN-GRU+SeqCap | Unimodal | 72.73 | 59.71 |
| [28] | Wav2Vec2+CNN+LSTM | Unimodal | 71.64 | 72.70 |
| [22] | TIM-Net | Unimodal | 72.50 | 71.65 |
| Ours | HuBERT-base | Unimodal | 75.16 | 76.12 |
| Ours | HuBERT-large | Unimodal | **77.49** | **78.91** |

Table 1: SER results on IEMOCAP. **Bold** denotes the best results while underline denotes the second-best.
| **Variants** | **WA** | **UA** |
|---|---|---|
| Ours | **75.16** | **76.12** |
| w/o \(\mathcal{L}_{logits}\) | 73.94 (\(\downarrow 1.22\)) | 74.02 (\(\downarrow 2.10\)) |
| w/o \(\mathcal{L}_{embedding}\) | 73.88 (\(\downarrow 1.28\)) | 74.01 (\(\downarrow 2.11\)) |
| w/o \(f_{T}^{P}\) | 74.09 (\(\downarrow 1.07\)) | 72.82 (\(\downarrow 3.30\)) |
| w/o \(f_{T}^{L}\) | 66.01 (\(\downarrow 9.15\)) | 67.27 (\(\downarrow 8.85\)) |
| w/o \(f_{T}^{P}\) and \(f_{T}^{L}\) | 69.92 (\(\downarrow 5.24\)) | 70.17 (\(\downarrow 5.95\)) |
| w/o \(f_{S}\) and \(f_{T}^{L}\) | 49.42 (\(\downarrow 25.74\)) | 50.08 (\(\downarrow 26.04\)) |
| w/o \(f_{S}\) and \(f_{T}^{P}\) | 71.09 (\(\downarrow 4.07\)) | 71.83 (\(\downarrow 4.29\)) |

Table 2: Ablation study demonstrating the impact of key components of EmoDistill. |
2309.14482 | LogGPT: Log Anomaly Detection via GPT | Detecting system anomalies based on log data is important for ensuring the
security and reliability of computer systems. Recently, deep learning models
have been widely used for log anomaly detection. The core idea is to model the
log sequences as natural language and adopt deep sequential models, such as
LSTM or Transformer, to encode the normal patterns in log sequences via
language modeling. However, there is a gap between language modeling and
anomaly detection as the objective of training a sequential model via a
language modeling loss is not directly related to anomaly detection. To fill up
the gap, we propose LogGPT, a novel framework that employs GPT for log anomaly
detection. LogGPT is first trained to predict the next log entry based on the
preceding sequence. To further enhance the performance of LogGPT, we propose a
novel reinforcement learning strategy to finetune the model specifically for
the log anomaly detection task. The experimental results on three datasets show
that LogGPT significantly outperforms existing state-of-the-art approaches. | Xiao Han, Shuhan Yuan, Mohamed Trabelsi | 2023-09-25T19:29:50Z | http://arxiv.org/abs/2309.14482v2 | # LogGPT: Log Anomaly Detection via GPT
###### Abstract
Detecting system anomalies based on log data is important for ensuring the security and reliability of computer systems. Recently, deep learning models have been widely used for log anomaly detection. The core idea is to model the log sequences as natural language and adopt deep sequential models, such as LSTM or Transformer, to encode the normal patterns in log sequences via language modeling. However, there is a gap between language modeling and anomaly detection as the objective of training a sequential model via a language modeling loss is not directly related to anomaly detection. To fill up the gap, we propose LogGPT, a novel framework that employs GPT for log anomaly detection. LogGPT is first trained to predict the next log entry based on the preceding sequence. To further enhance the performance of LogGPT, we propose a novel reinforcement learning strategy to finetune the model specifically for the log anomaly detection task. The experimental results on three datasets show that LogGPT significantly outperforms existing state-of-the-art approaches.
anomaly detection, log data, generative language model
## I Introduction
Effectively detecting abnormal events in online computer systems is critical to maintaining the security and reliability of the systems. Logs, which are a fundamental component of modern computer systems, serve as a critical source of information for system monitoring, debugging, and security auditing as they record the system status, offering valuable insights into system performance and potential issues. Anomalies in log data often signify system faults, security breaches, or operational failures, making their detection a crucial task [1, 2, 3, 4, 5, 6].
However, the task of anomaly detection in log data is challenging due to the nature of high dimensionality, large volume, and complex structure. Machine learning models have been extensively employed for anomaly detection in log data. Traditional models, such as Principal Component Analysis (PCA) [7], Isolation forest [8], and one-class Support Vector Machines (OCSVM) [9] have been widely used. However, these models often require manual feature engineering or assume linear relationships among log entries, which makes them less effective in handling the dynamic nature of log data.
Recently, deep learning models have emerged for log anomaly detection, such as LSTM-based models like DeepLog [1], LogAnomaly [10], and OC4Seq [11], and BERT-based models like LogBERT [2]. One commonly used strategy is to borrow the idea of language modeling in the natural language processing field to capture the sequential pattern of log data. In this paper, we call this group of log anomaly detection models **log language model**-based approaches. Particularly, the log language model is first trained to predict the next or masked log entries given the normal sequences. Then, the anomalies can be detected if the observed log entry is not in the top-K list predicted by the log language model. The rationale is that if a log sequence follows normal patterns, the log language model should be able to predict the next or masked log entries. Therefore, when an observed log entry is not in the top-K list predicted by the log language model, it means that the log entry has a low probability of appearing at this specific position given the context, indicating an abnormality.
Although empirical studies have demonstrated the effectiveness of leveraging language models for log anomaly detection, the current models still face some limitations. The traditional LSTM-based log language models, such as DeepLog, often fail to fully capture long-term dependencies in log sequences. Therefore, the recently developed models usually adopt the Transformer structure [12] to model the long log sequences, such as LogBERT [2]. However, the masked log language model adopted in LogBERT may not be able to capture the natural flow in log sequences. More importantly, there is a gap between log language modeling and anomaly detection. Technically, the log language model is usually trained to correctly predict the next log entry, while the current log anomaly detection models label the anomalies if the observed log entry is not in the Top-K list predicted by the log language model. In other words, there is a gap in the objective between the training phase and the testing phase for log anomaly detection.
Inspired by the training strategy for large language models, to fill up the gap, we introduce LogGPT, a novel framework for log anomaly detection that leverages the Generative Pretrained Transformer (GPT) model. LogGPT still harnesses the power of generative log language models to capture the intricate patterns and dependencies in log data. Specifically, LogGPT is pre-trained to predict the next log entry given the preceding sequence (prompt). More importantly, we further fine-tune LogGPT via reinforcement learning. Specifically, LogGPT employs a novel reward mechanism based on whether the observed log entry is within the Top-K predicted log entries from the log language model. If the observed log entry is found within the Top-K predictions, LogGPT will receive a
positive reward; otherwise, it will receive a negative reward. Reinforced by this reward signal, we expect that for the normal sequences, LogGPT can ensure the log entry is within the Top-K predictions.
The contributions of this paper are threefold. First, we propose LogGPT, a novel framework for anomaly detection in log data, which utilizes the generative log language model to capture the patterns of normal log sequences by training to predict the next log key given the previous sequence. This novel approach effectively addresses the limitations of both traditional machine learning models and deep learning models like DeepLog [1] and LogBERT [2], providing a more robust and effective solution for log anomaly detection. Second, we introduce a Top-K reward metric specifically designed for fine-tuning the log language model for anomaly detection. This reward metric gives a positive reward if the actual log key is in the Top-K predictions, and a negative reward otherwise, thereby guiding the model to focus on the most relevant parts of the log sequence and enhancing the accuracy of anomaly detection. Third, we conduct extensive experiments to validate the effectiveness of LogGPT in detecting anomalies in log data. Experimental results demonstrate that LogGPT outperforms state-of-the-art methods, underscoring its potential as a powerful tool for anomaly detection in log data.
## II Related Work
Log anomaly detection, a critical task for ensuring system security and reliability, has received extensive research. The methods for log anomaly detection can be broadly categorized into two phases: traditional machine learning models and deep learning models.
In the early phase, traditional machine-learning models were the primary tools for log anomaly detection. Models such as Principal Component Analysis (PCA) [7], Isolation Forest [8], and one-class Support Vector Machines (OCSVM) [9] were commonly used. Although these models are capable of identifying outliers in log data, they have several limitations. First, traditional machine learning models usually require manual feature engineering, which is labor-intensive. Furthermore, they struggle to capture the complex sequential patterns in log data.
The advanced deep learning models have significantly improved the performance of log anomaly detection. In particular, Long Short-Term Memory Networks (LSTMs), known for their ability to model sequential data, have proven to be effective for log anomaly detection, such as DeepLog [1] and LogAnomaly [10]. DeepLog functions by predicting the next log key based on the preceding sequence, identifying anomalies when the actual next log key significantly deviates from the prediction. On the other hand, LogAnomaly models a log stream as a natural language sequence and develops template2vec to extract the semantic information hidden in log templates. Therefore, LogAnomaly can detect both sequential and quantitative log anomalies simultaneously. However, these models come with their own set of limitations. A primary challenge with LSTM is that this type of recurrent architecture struggles to encode very long or complex sequences due to its relatively simple structure. This issue is particularly pronounced in log anomaly detection, where the sequences can be quite long and complex.
To address the limitations of LSTM-based models, researchers have turned to the use of Transformer [13], which is a more powerful model to capture the long-term dependencies in the sequences, such as LogBERT [2] or CAT [14]. LogBERT is a self-supervised framework that learns the patterns of normal log sequences based on BERT [13]. Specifically, LogBERT takes normal log sequences with random masks as inputs and is trained to predict the randomly masked log entries. After training, LogBERT can encode the patterns of normal log sequences. One limitation is that the masked log language model may not always capture the natural flow of log sequences in some contexts. Moreover, the performance of LogBERT is sensitive to the mask ratio, a hyperparameter controlling how many tokens will be replaced with MASK tokens during both the training and testing phases. In this work, we propose LogGPT, which leverages the GPT model to learn patterns in normal log sequences by predicting the next log entries in a sequence, and further proposes a novel reinforcement learning mechanism to enhance the performance for anomaly detection.
## III Preliminary
In this section, we provide a detailed overview of two key components for log anomaly detection, log sequence preprocessing and log language model.
### _Log Sequence Preprocessing_
The first step of log anomaly detection is to preprocess the log messages because it is hard to capture the sequential pattern from the raw text-based log messages. The major line of research in log anomaly detection is to first adopt a log parser, such as Drain [15], to extract the template from the log messages, as shown in Figure 1. Each template usually indicates one type of log message, called a log key.

Fig. 1: Log key extraction from HDFS dataset messages via Log Parser. The message with a red/blue underscore indicates the detailed computational event for each log key separately.
After getting the log keys, the sequence of raw log messages can be transformed into a sequence of log keys. In this case, the log keys are similar to the vocabulary in natural language, while the sequence is like a sentence consisting of a sequence of log keys. Therefore, a language model can be leveraged to model the log sequences.
Formally, after preprocessing, the log messages with the same template are represented by a log key \(k\in\mathcal{K}\), where \(\mathcal{K}\) indicates the set of log keys extracted from the log messages. Then, a log sequence is organized as ordered log keys, denoted as \(S=\{k_{1},...,k_{t},...,k_{T}\}\), where \(T\) indicates the length of the log sequence.
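As a rough sketch of this preprocessing step, the code below maps raw HDFS-style log lines to per-session sequences of log keys. The `drain3` package is used as a stand-in for the Drain parser, and both its `TemplateMiner.add_log_message` interface and the block-ID regular expression used to group messages into sessions are illustrative assumptions rather than the exact pipeline used here.

```python
import re
from collections import defaultdict

from drain3 import TemplateMiner  # assumed open-source implementation of Drain

def build_key_sequences(log_lines):
    """Parse raw log messages into per-session sequences of log keys."""
    miner = TemplateMiner()
    sequences = defaultdict(list)              # session id -> ordered list of log keys
    for line in log_lines:
        result = miner.add_log_message(line.strip())
        log_key = result["cluster_id"]         # the mined template id acts as the log key k
        match = re.search(r"blk_-?\d+", line)  # assumed HDFS block id used as the session id
        if match:
            sequences[match.group(0)].append(log_key)
    return sequences                           # each value is a sequence S = {k_1, ..., k_T}
```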
### _Log Language Model_
We use DeepLog [1] to illustrate the concept of the log language model. DeepLog leverages Long Short-Term Memory networks (LSTMs) for log language modeling. The primary objective of DeepLog is to learn a probabilistic model of normal execution from log data and then detect anomalies as significant deviations from normal patterns.
DeepLog is trained on \(\mathcal{D}=\{S^{i}\}_{i=1}^{N}\) consisting of normal log sequences. The LSTM network in DeepLog is trained to predict the next log key in a sequence based on the preceding sequence. Formally, given a sequence of log keys \(S_{1:T}=\{k_{1},...,k_{t},...,k_{T}\}\), where \(k_{t}\) indicates the log key at the \(t\)-th position. DeepLog trains an LSTM to model the conditional probability \(p(k_{t+m+1}|S_{t:t+m})\) for \(t=1,2,...,T-m-1\), where \(m\) indicates the window size. Particularly, DeepLog adopts a sliding window with size \(m\) to split the sequences into a set of small windows and predict the next log key given the previous \(m\) log keys. The LSTM is trained to maximize the likelihood of the next log key given the preceding sequence, which can be formulated as the following objective function:
\[\mathcal{L}(\theta)=-\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T-m-1}\log p(k_{t+m +1}^{i}|S_{t:t+m}^{i}), \tag{1}\]
where \(\theta\) denotes the parameters of LSTM.
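A minimal PyTorch sketch of this objective is given below; the embedding and hidden sizes, the window construction, and the loss call are illustrative assumptions rather than DeepLog's exact configuration.

```python
import torch.nn as nn

class NextKeyLSTM(nn.Module):
    """LSTM that predicts the next log key from a window of m preceding keys."""
    def __init__(self, num_keys, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_keys, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_keys)

    def forward(self, windows):                # windows: (batch, m) integer log keys
        h, _ = self.lstm(self.embed(windows))  # (batch, m, hidden_dim)
        return self.head(h[:, -1])             # logits over the next log key

def sliding_windows(sequence, m):
    """Split one key sequence into (window, next-key) training pairs."""
    return [(sequence[t:t + m], sequence[t + m]) for t in range(len(sequence) - m)]

# Training minimizes the negative log-likelihood of Eq. (1), e.g.:
# loss = nn.CrossEntropyLoss()(model(window_batch), next_key_batch)
```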
During the anomaly detection phase, given a new sequence, DeepLog still splits the sequence into small windows and employs the trained LSTM model to predict the next log key. The LSTM model predicts a probability distribution over all possible log keys in \(\mathcal{K}\), ranking them based on their likelihood of being the next key in the sequence. Then, a sequence will be labeled as abnormal if the observed log key does not appear in the Top-K prediction list multiple times across all sliding windows in that sequence.
The concept of Top-K predictions is introduced to account for the inherent uncertainty and variability in log sequences. Even in normal operations, there can be multiple valid "next" log keys as the systems usually have multiple normal patterns. Therefore, during the anomaly detection phase, instead of predicting a single'most likely' next log key, the model identifies the Top-K most probable next log keys. As long as the observed log key is in the Top-K list, we could consider the sequence normal.
The value of K, a tunable hyperparameter, determines the strictness of the model for anomaly detection. A smaller K results in a stricter model that allows fewer possibilities for the next log key, usually leading to high recall and low precision, while a larger K results in a more flexible model that considers a broader range of log keys as normal, usually resulting in high precision and low recall.
## IV LogGPT
In this section, we introduce LogGPT, a novel log anomaly detection model based on GPT. Similar to DeepLog, LogGPT detects log anomalies by examining whether the observed log key is in the Top-K prediction list. Because GPT is a more powerful structure than the LSTM used by DeepLog, LogGPT does not need to further split the sequence into multiple small windows. Instead, LogGPT is trained to predict the next log key given the previous sequence, which intrinsically captures the long-term dependencies of log sequences. Moreover, besides leveraging the powerful GPT structure, we also propose a novel reinforcement learning strategy to further improve the performance of log anomaly detection.
The design of LogGPT is inspired by the training process of large language models, where the training process consists of two primary stages: pre-training and fine-tuning, as shown in Figure 2.
In the pre-training stage (Figure 2(a)), a generative log language model \(f_{\theta}(\cdot)\) is trained on a corpus of normal log sequences \(\mathcal{D}\), which allows the model to learn the underlying patterns and structures of normal system behavior. After pre-training, LogGPT is capable of generating log sequences based on a given part of the log sequences.

The fine-tuning stage (Figure 2(b)) is designed to further refine the model's ability to distinguish between normal and abnormal log sequences. In this stage, we employ reinforcement learning techniques to finetune the pre-trained LogGPT. Borrowing terminology from large language models, we define a set of prompts \(\mathcal{P}=\{S_{1:t}^{i}\}_{i=1}^{N}\), where \(S_{1:t}^{i}\subseteq S_{1:T}^{i}\) and \(S_{1:T}^{i}\in\mathcal{D}\). These prompts are fed into LogGPT to generate the following sequence \(\hat{S}_{t:T}^{i}\) step by step. We propose a novel reward, called the Top-K metric, to fine-tune LogGPT for anomaly detection.
### _Generative Log Language Model_
LogGPT utilizes GPT-2 [16] for modeling the log sequences, which is based on Transformer decoder [12] that utilizes a self-attention mechanism to capture dependencies between log keys in the log sequence. LogGPT is trained to predict the next log key given the preceding log keys. The objective function for pretraining the LogGPT is defined as follows:
\[\mathcal{L}(\theta)=-\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T-1}\log p(k_{t+1}^{i }|S_{1:t}^{i}), \tag{2}\]
where \(\theta\) denotes the parameters of LogGPT, \(N\) is the number of log sequences and \(T\) is the length of each sequence, \(p(k_{t+1}^{i}|S_{1:t}^{i})\) indicates the probability of log key at the \(t+1\)-th position predicted by LogGPT given the sequence \(S_{1:t}^{i}\).
Specifically, to derive \(p(k_{t+1}^{i}|S_{1:t}^{i})\), the structure of LogGPT can be defined as:
\[\mathbf{h}_{t}^{i} =\textsf{Transformer\_Decoder}(S_{1:t}^{i}) \tag{3a}\] \[p(k_{t+1}^{i}|S_{1:t}^{i}) =\textsf{Softmax}(\mathbf{h}_{t}^{i}\mathbf{W}), \tag{3b}\]
where \(\mathbf{h}_{t}^{i}\in\mathbb{R}^{d}\) indicates the hidden representation derived from the Transformer decoder [12, 16], and \(\mathbf{W}\in\mathbb{R}^{d\times|\mathcal{K}|}\) is the parameter of the language model head that maps the hidden representation to a probability distribution of all log keys in \(\mathcal{K}\).
By training the model to predict the next log key in normal log sequences, LogGPT encodes the normal system behavior. After pre-training, GPT-2 is capable of generating a log sequence \(\hat{S}_{t+1:T}^{i}=\{\hat{k}_{t+1}^{i},...,\hat{k}_{T}^{i}\}\) based on a given part of the log sequence \(S_{1:t}^{i}\). This capability is crucial for the subsequent fine-tuning stage, where the model is further refined to distinguish between normal and anomalous log sequences.
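A minimal sketch of such a generative log language model is shown below, using the HuggingFace GPT-2 implementation over the log-key vocabulary. The configuration mirrors the sizes reported later in the implementation details (6 layers, 6 heads, 60-dimensional embeddings), while the data handling and training-loop details are illustrative assumptions.

```python
from transformers import GPT2Config, GPT2LMHeadModel

def build_log_gpt(num_log_keys, max_len=512):
    """GPT-2 decoder with a language-model head over the log-key vocabulary (Eq. 3)."""
    config = GPT2Config(vocab_size=num_log_keys, n_positions=max_len,
                        n_embd=60, n_layer=6, n_head=6)
    return GPT2LMHeadModel(config)

def pretrain_step(model, optimizer, key_ids):
    """One step of next-log-key prediction; `key_ids` is a (batch, T) tensor of log keys."""
    out = model(input_ids=key_ids, labels=key_ids)  # shifted next-token loss, as in Eq. (2)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```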
### _Reinforcement Learning for Log Anomaly Detection_
In the context of LogGPT, we employ reinforcement learning to fine-tune the pre-trained GPT-2 model for the task of log anomaly detection. The reinforcement learning paradigm is particularly suitable for our task as it allows the model to learn from its predictions and adjust its behavior based on the feedback received, thereby enhancing its ability to detect anomalies. In the context of our framework, we define the following elements.
**State:** The state, denoted as \(\tilde{S}_{1:t}^{i}=S_{1:t}^{i}\), is initially defined as the given part of a log sequence. As the model generates the log sequence \(\hat{S}_{t+1:T}^{i}\) based on the given part, the state evolves dynamically. Specifically, for each step \(j\) where \(t+1\leq j\leq T-1\), the state \(\tilde{S}_{1:j}^{i}\) becomes the concatenation of the given part of the log sequence \(S_{1:t}^{i}\) and the generated part of the log sequence \(\tilde{S}_{t+1:j}^{i}\), denoted as \(\tilde{S}_{1:j}^{i}=\{S_{1:t}^{i},\tilde{S}_{t+1:j}^{i}\}\). The sequence \(\tilde{S}_{1:j}^{i}\) is further transformed to a hidden representation \(\tilde{\mathbf{h}}_{j}^{i}\) by the Transformer decoder shown in Equation 3a.
**Action:** An action is defined as sampling a log key from the K log keys with the highest probabilities predicted by LogGPT, denoted as \(a_{j+1}^{i}\sim\text{Top-K}(p(\hat{k}_{j+1}^{i}|\tilde{S}_{1:j}^{i}))\).
**Policy:** A policy takes the form of LogGPT and is defined by its parameters. Specifically, given the current part of the sequence until the \(j\)-th position, the policy outputs a probability distribution over the action space, represented as \(\pi_{\theta}(a_{j+1}^{i}|\tilde{\mathbf{h}}_{j}^{i})\), where \(\theta\) indicates the parameters of LogGPT.
**Reward:** The reward function provides feedback to the policy based on the quality of its actions. We propose a novel reward function to evaluate the predicted log key for anomaly detection, called the Top-K metric.
At each step, the Top-K metric checks whether the observed next log key is within the Top-K predicted log keys. If this is the case, the model receives a reward of 1; otherwise, it receives a reward of -1. Given a part of log sequence \(\tilde{S}_{1:t}^{i}\), after an action is taken, the reward function is formulated as:
\[r_{j+1}=\begin{cases}1,&\text{if }k_{j+1}^{i}\in\text{Top-K}(p(\hat{k}_{j+1}^ {i}|\tilde{S}_{1:j}^{i}))\\ -1,&\text{if }k_{j+1}^{i}\notin\text{Top-K}(p(\hat{k}_{j+1}^{i}|\tilde{S}_{1:j}^{i})) \end{cases}. \tag{4}\]
Here, \(k_{j+1}^{i}\) refers to the actual next log key, and \(p(\hat{k}_{j+1}^{i}|\tilde{S}_{1:j}^{i})\) denotes the probability distribution predicted by LogGPT over the action space given the current state.
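A small sketch of this reward is given below, assuming the Top-K candidate set is taken directly from the model's predicted distribution over log keys; the tensor shapes and the use of `torch.topk` are illustrative choices.

```python
import torch

def top_k_reward(next_key_probs, observed_key, k):
    """Top-K reward of Eq. (4): +1 if the observed key is among the K most
    probable next keys, and -1 otherwise."""
    top_k_keys = torch.topk(next_key_probs, k).indices
    return 1.0 if observed_key in top_k_keys else -1.0
```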
The Top-K metric promotes better generalization and robustness of LogGPT in anomaly detection. By encouraging the model to predict a set of likely next log keys rather than a single most likely log key, the Top-K metric helps LogGPT learn a more nuanced representation of the normal log patterns. This approach recognizes that log data may contain inherent variability even for the normal log sequences, and a broader range of acceptable candidates can still reflect normal system behavior. The Top-K metric, therefore, enhances the precision of anomaly detection by aligning the model's predictions with the complex nature of log data.

Fig. 2: Framework of LogGPT.
### _Policy Update_
We adopt Proximal Policy Optimization (PPO) [17] for the policy update. PPO is a type of policy gradient method that optimizes the policy directly by maximizing the expected reward and can further maintain the stability of the learning process and prevent harmful updates. The objective function of PPO is defined as follows:
\[J(\theta)=\mathbb{E}_{\pi_{\theta}}\left[\sum_{i=1}^{N}\sum_{j=t}^{T-1}\frac{\pi_{\theta}(a_{j+1}^{i}|\tilde{\mathbf{h}}_{j}^{i})}{\pi_{\theta_{\text{old}}}(a_{j+1}^{i}|\tilde{\mathbf{h}}_{j}^{i})}\,r_{j+1}\right], \tag{5}\]
where \(\pi_{\theta}\) is the new policy, \(\pi_{\theta_{\text{old}}}\) is the old policy, and \(r_{j+1}\) is the reward for an action.
The policy \(\pi_{\theta}\) is updated by performing gradient ascent on the objective function \(J(\theta)\):
\[\theta\leftarrow\theta+\alpha\nabla_{\theta}J(\theta), \tag{6}\]
where \(\alpha\) is the learning rate.
The policy update process is repeated for a number of iterations until the policy converges or a maximum number of iterations is reached. The Top-K metric encourages the model to recognize the inherent variability in normal log data by rewarding predictions that include the actual next log key within a broader set.
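The sketch below illustrates a single ascent step on the ratio-weighted objective of Eq. (5) for one generated continuation. It omits PPO's clipping and value baseline and assumes the Top-K rewards are already computed, so it should be read as a simplified policy-gradient step under those assumptions rather than a complete PPO implementation.

```python
import torch

def policy_update_step(new_log_probs, old_log_probs, rewards, optimizer):
    """One gradient-ascent step on J(theta) from Eq. (5).

    new_log_probs: log pi_theta(a_{j+1} | state) for each generated step (requires grad)
    old_log_probs: log pi_theta_old(...) from the policy that sampled the actions
    rewards:       Top-K rewards r_{j+1}, each +1 or -1
    """
    ratios = torch.exp(new_log_probs - old_log_probs.detach())
    objective = (ratios * rewards).sum()  # ratio-weighted reward to be maximized
    loss = -objective                     # ascend J(theta) by descending -J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```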
### _Anomaly Detection_
After fine-tuning, LogGPT is deployed to detect abnormal log sequences. Given a new log sequence \(S_{1:T}\), LogGPT iteratively predicts the next log key \(k_{t+1}\) given the preceding subsequence \(S_{1:t}\) for \(1\leq t\leq T-1\).
At each position, the model generates a set of Top-K predicted log keys, which represents the K most likely log keys at that position. The actual next log key is then compared to this set. If any observed log key is not in the set of Top-K predicted log keys at its position, the whole log sequence is flagged as anomalous.
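The detection rule can be sketched as follows, assuming the fine-tuned model exposes a next-key distribution at every position (as in the GPT-2 sketch above) and that K is set as a fraction of the unique training log keys, as described in the implementation details; variable names and shapes are illustrative.

```python
import torch

@torch.no_grad()
def is_anomalous(model, sequence, k):
    """Flag a log-key sequence as anomalous if any observed key falls outside
    the Top-K predictions at its position."""
    input_ids = torch.tensor([sequence])           # (1, T) integer log keys
    logits = model(input_ids=input_ids).logits[0]  # (T, |K|) next-key scores per position
    for t in range(len(sequence) - 1):
        top_k = torch.topk(logits[t], k).indices   # Top-K candidates for position t+1
        if sequence[t + 1] not in top_k:
            return True
    return False
```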
## V Experiments
### _Experimental Setup_
**Datasets.** We evaluate LogGPT on three log datasets, namely HDFS, BGL, and Thunderbird. Table I shows the statistics of three datasets. For all the datasets, we randomly select 5000 normal log sequences as the training dataset.
* HDFS (Hadoop Distributed File System) [7]: This dataset is derived from Hadoop-based map-reduce jobs that were run on Amazon EC2 nodes. The anomalies within this dataset are identified through a manual labeling process based on a set of predefined rules. The log sequences are constructed based on the session ID present in each log message, resulting in an average sequence length of 19. The HDFS dataset consists of 575,061 log sequences, out of which 16,838 have been labeled as anomalous.
* BGL (BlueGene/L Supercomputer System) [18]: The BGL dataset originates from a BlueGene/L supercomputer system, located at the Lawrence Livermore National Labs (LLNL). It includes both alert and non-alert messages, with the alert messages being treated as anomalies. Log sequences are formed using a time sliding window of 1 minute, yielding an average sequence length of 58. The BGL dataset contains 36,927 log sequences, with 3,296 of them classified as anomalous.
* Thunderbird [18]: This dataset is collected from another supercomputer system. The dataset used in this study comprises the first 20,000,000 log messages from the original Thunderbird dataset that compose 112,959 log sequences, with 40,920 of them marked as anomalous. Log sequences are created using a time sliding window of 1 minute, leading to an average sequence length of 166.
**Baselines.** We compare LogGPT with a variety of baseline methods, consisting of both traditional machine learning models and deep learning models:
* PCA (Principal Component Analysis) [19]: This technique constructs a counting matrix based on the frequency of log key sequences. It then reduces this matrix into a lower-dimensional space to identify anomalies.
* iForest (Isolation Forest) [8]: iForest is an unsupervised learning algorithm, which also adopts a counting matrix as input. It isolates anomalies instead of profiling normal data points. It represents features as tree structures and anomalies are detected as instances with short average path lengths on the constructed isolation trees.
* OCSVM (One-Class Support Vector Machine) [20]: OCSVM is a variant of the Support Vector Machine algorithm that is designed for anomaly detection tasks [9, 21]. The model is trained on normal data and finds the maximum margin hyperplane that separates the normal data from the origin.
* LogCluster [22]: LogCluster is a density-based log clustering approach that groups similar log messages together. Anomalies are detected as log messages that do not belong to any cluster or belong to small clusters.
* DeepLog [1]: DeepLog is a deep learning-based approach for anomaly detection in log data. It uses a long short-term memory (LSTM) network to model the log sequences and detect anomalies based on the prediction errors.
* LogAnomaly [10]: LogAnomaly models a log stream as a natural language sequence, which can detect both sequential and quantitative log anomalies simultaneously.
| Dataset | # of Unique Log Keys | # of Log Sequences | Avg. Seq. Length | Training Data | Testing Data (Normal) | Testing Data (Anomalous) |
|---|---|---|---|---|---|---|
| HDFS | 48 (15) | 575,061 | 19 | 5,000 | 553,223 | 16,838 |
| BGL | 396 (160) | 36,927 | 58 | 5,000 | 28,871 | 3,296 |
| Thunderbird | 7,703 (904) | 112,959 | 166 | 5,000 | 67,039 | 40,920 |

TABLE I: Statistics of the Datasets. The number in the parentheses indicates the unique log keys in the training set.
* OC4Seq (Multi-Scale One-Class Recurrent Neural Networks) [11]: OC4Seq is designed to detect anomalies in discrete event sequences. Recognizing that an anomalous sequence could be caused by individual events, subsequences of events, or the entire sequence, OC4Seq employs a multi-scale RNN framework to capture different levels of sequential patterns simultaneously.
* LogBERT [2]: LogBERT is a BERT-based architecture to capture the patterns of normal log sequences via a log language model. LogBERT is trained to predict the masked log keys on normal log sequences and detects the abnormal log sequences based on the prediction errors.
* CAT (Content-Aware Transformer) [14]: CAT is a self-attentive encoder-decoder transformer framework designed for anomaly detection in event sequences. It incorporates the semantic information of event content by using a content-awareness layer to generate representations of each event. The encoder learns preamble event sequence representations with content awareness, and the decoder embeds sequences under detection into a latent space where anomalies are distinguishable.
**Implementation Details.** We first employ Drain [15] to parse raw log messages into log keys. For the baseline models, we utilize the Loglizer [23] package to evaluate PCA, OCSVM, iForest, and LogCluster for anomaly detection. DeepLog and LogAnomaly are evaluated using the Deep-loglizer [24] package. For OC4Seq\({}^{1}\), LogBERT\({}^{2}\), and CAT\({}^{3}\), we use the open-source code provided by the authors separately.
Footnote 1: [https://github.com/KnowledgeDiscovery/OC4Seq](https://github.com/KnowledgeDiscovery/OC4Seq)
Footnote 2: [https://github.com/HelenGuoha/logbert](https://github.com/HelenGuoha/logbert)
Footnote 3: [https://github.com/hmichaelrhang/CAT](https://github.com/hmichaelrhang/CAT)
As for LogGPT, we use a GPT model with 6 layers and 6 heads. The dimensions of the embeddings and hidden states are set to 60. The learning rate is set to 1e-4 for the pre-training phase and 1e-6 for the fine-tuning phase. To accommodate different datasets, we set the K in Top-K to 50% of the unique training log keys. This means that during the test phase, if an observed log key is not in the top 50% of the prediction list from the GPT, the sequence will be labeled as an anomaly. This allows us to maintain a high level of flexibility when dealing with datasets of varying sizes and characteristics. The batch size for the pre-training phase is set to 16, and we train the model for 100 epochs. The number of episodes is set to 20 with early stopping criteria to prevent overfitting and ensure efficient training.
### _Experimental Results_
**Performance on Log Anomaly Detection.** Table II illustrates the results and standard deviation of LogGPT and various baselines over 10 runs on the HDFS, BGL, and Thunderbird datasets. The asterisk in the table indicates that LogGPT significantly outperforms the best baseline for each dataset at the 0.05 level, according to the paired t-test.
First, we can observe that PCA, iForest, and OCSVM perform poorly on the HDFS and BGL datasets, as indicated by their low F-1 scores. However, PCA's performance is notably better on the Thunderbird dataset, achieving a high F-1 score. This inconsistency in performance across datasets highlights the sensitivity of PCA to datasets.
LogCluster, specifically designed for log anomaly detection, shows improved performance over other traditional machine learning models, i.e., PCA, iForest, and OCSVM, on the HDFS and BGL datasets but is outperformed by PCA on the Thunderbird dataset. This pattern further emphasizes the importance of dataset-specific characteristics in determining the effectiveness of different methods.
Deep learning-based approaches, such as DeepLog, LogAnomaly, OC4seq, LogBERT, and CAT, outperform traditional methods across all three datasets, which shows the advantages of utilizing deep learning to capture complex patterns in log sequences.
Our proposed model, LogGPT, stands out by consistently achieving the highest F-1 scores across all three datasets, with significant margins over all baselines.
**Ablation Studies.** To investigate the contribution of reinforcement learning (RL) to the performance of LogGPT, we conducted an ablation study, comparing the performance of LogGPT with and without the RL component. The results are summarized in Table III.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{HDFS} & \multicolumn{3}{c|}{BGL} & \multicolumn{3}{c|}{Thunderbird} \\ \cline{2-10} & Precision & Recall & F-1 score & Precision & Recall & F-1 score & Precision & Recall & F-1 score \\ \hline PCA & 0.166\({}_{+0.008}\) & 0.059\({}_{+0.003}\) & 0.087\({}_{+0.002}\) & 0.0117\({}_{+0.023}\) & 0.038\({}_{+0.007}\) & 0.054\({}_{+0.010}\) & 0.953\({}_{+0.004}\) & 0.980\({}_{+0.005}\) & 0.966\({}_{+0.003}\) \\ \hline Forest & 0.043\({}_{+0.010}\) & 0.422\({}_{+0.224}\) & 0.078\({}_{+0.021}\) & 0.491\({}_{+0.394}\) & 0.037\({}_{+0.052}\) & 0.063\({}_{+0.009}\) & 0.338\({}_{+1.28}\) & 0.015\({}_{+0.011}\) & 0.028\({}_{+0.020}\) \\ \hline OCSVM & 0.058\({}_{+0.012}\) & 0.910\({}_{+0.089}\) & 0.108\({}_{+0.021}\) & 0.073\({}_{+0.003}\) & 0.345\({}_{+0.010}\) & 0.121\({}_{+0.004}\) & 0.056\({}_{+0.004}\) & 0.986\({}_{+0.000}\) & 0.706\({}_{+0.003}\) \\ \hline LogCluster & **0.996\({}_{+0.008}\)** & 0.368\({}_{+0.011}\) & 0.335\({}_{+0.010}\) & **0.941\({}_{+0.015}\)** & 0.641\({}_{+0.033}\) & 0.762\({}_{+0.021}\) & **0.977\({}_{+0.005}\)** & 0.291\({}_{+0.063}\) & 0.445\({}_{+0.067}\) \\ \hline DeepLog & 0.793\({}_{+0.092}\) & 0.863\({}_{+0.031}\) & 0.824\({}_{+0.060}\) & 0.792\({}_{+0.048}\) & 0.946\({}_{+0.012}\) & 0.861\({}_{+0.028}\) & 0.864\({}_{+0.005}\) & 0.997\({}_{+0.000}\) & 0.926\({}_{+0.003}\) \\ \hline LogAnomaly & 0.097\({}_{+0.027}\) & 0.863\({}_{+0.031}\) & 0.524\({}_{+0.017}\) & 0.884\({}_{+0.002}\) & 0.850\({}_{+0.000}\) & 0.867\({}_{+0.000}\) & 0.873\({}_{+0.005}\) & 0.986\({}_{+0.000}\) & 0.931\({}_{+0.002}\) \\ \hline OC4Seq & 0.922\({}_{+0.059}\) & 0.758\({}_{+0.227}\) & 0.805\({}_{+0.157}\) & 0.441\({}_{+0.045}\) & 0.352\({}_{+0.044}\) & 0.391\({}_{+0.041}\) & 0.901\({}_{+0.046}\) & 0.823\({}_{+0.232}\) & 0.845\({}_{+0.177}\) \\ \hline LogBERT & 0.754\({}_{+0.142}\) & 0.749\({}_{+0.037}\) & 0.745\({}_{+0.082}\) & 0.917\({}_{+0.006}\) & 0.892\({}_{+0.006}\) & 0.905\({}_{+0.005}\) & 0.962\({}_{+0.019}\) & 0.965\({}_{+0.008}\) & 0.963\({}_{+0.007}\) \\ \hline CAT & 0.102\({}_{+0.022}\) & 0.422\({}_{+0.022}\) & 0.062\({}_{+0.032}\) & 0.061\({}_{+0.034}\) & 0.177\({}_{+0.122}\) & 0.210\({}_{+0.184}\) & 0.190\({}_{+0.184}\) & 0.751\({}_{+0.022}\) & 0.150\({}_{+0.124}\) & 0.807\({}_{+0.129}\) \\ \hline LogGPT & 0.884\({}_{+0.030}\) & **0.921\({}_{+0.066}\)** & **0.901\({}_{+0.038}\)** & **0.940\({}_{+0.010}\)** & **0.977\({}_{+0.018}\)** & **0.958\({}_{+0.011}\)** & **0.973\({}_{+0.004}\)** & **1.000\({}_{+0.000}\)** & **0.986\({}_{+0.002}\)** \\ \hline \end{tabular}
\end{table} TABLE II: Experimental Results on HDFS, BGL, and Thunderbird Datasets.
| Metric | Approach | HDFS | BGL | Thunderbird |
|---|---|---|---|---|
| Precision | LogGPT w/o RL | 0.932\({}_{\pm 0.015}\) | 0.936\({}_{\pm 0.011}\) | 0.971\({}_{\pm 0.004}\) |
| Precision | LogGPT | 0.884\({}_{\pm 0.030}\) | 0.940\({}_{\pm 0.010}\) | 0.973\({}_{\pm 0.004}\) |

TABLE III: Ablation study on the reinforcement learning (RL) component of LogGPT.
First, we notice that on both the HDFS and Thunderbird datasets, LogGPT significantly outperforms LogGPT without the RL component, which demonstrates that the RL component enhances the overall performance of LogGPT for log anomaly detection. In particular, on the HDFS dataset, by fine-tuning the GPT model with the RL reward, the recall achieved by LogGPT improves by a large margin with only a small sacrifice in precision, leading to a substantial improvement in the F-1 score. This also shows that fine-tuning the log language model with the Top-K reward can identify more log anomalies. Meanwhile, on the BGL dataset, we also notice a slight improvement in the F-1 score of LogGPT compared to the variant without the RL component. Another interesting finding is that even LogGPT without the RL component already outperforms all baselines (shown in Table II) on the three datasets, which also shows the advantage of leveraging the GPT model to capture the patterns of log sequences.
**Parameter Analysis: Ratio of Top-K**. LogGPT detects the anomalies by examining whether the observed log key is in the Top-K list predicted by GPT. Therefore, K is an important parameter to determine the anomalies. We first analyze the difference in the performance by tuning K for anomaly detection. By default, K is set as 50% of unique log keys. It means if the next log key falls into the top 50% of unique log keys predicted by GPT, the sequence is normal.
The impact of different top-K ratios on the precision, recall, and F-1 score for the HDFS, BGL, and Thunderbird datasets is illustrated in Figure 3. On both the HDFS and BGL datasets, we have similar observations. As the ratio of log keys treated as normal increases, the recall keeps decreasing once the ratio exceeds a threshold, such as 40% on HDFS and BGL. This happens because, with a large ratio, most of the keys are considered normal, and the recall will therefore be low. On the other hand, if the observed log key is predicted with an extremely low probability at a specific position, this log key is very likely abnormal. Therefore, we observe an increase in precision along with the increase in ratios.
For the Thunderbird dataset, the precision increases as the top-K ratio increases, while the recall remains almost constant, with a slight decrease at higher top-K ratios. The F-1 score increases steadily, reaching a peak at a specific top-K ratio. The reason for this behavior can be attributed to the inherent characteristics of the Thunderbird dataset. It is likely that the normal data within the Thunderbird dataset has high variability, which requires a broader range of acceptable continuations in the log sequences to reduce false positives. As the top-K ratio increases, LogGPT becomes more selective in flagging anomalies, thereby increasing precision by reducing false positives.
Overall, a low top-K ratio tends to lead to high recall but low precision, while a high top-K ratio leads to high precision but potentially lower recall. The optimal top-K ratio varies across datasets, reflecting the unique characteristics of each dataset.

Fig. 3: Impact of the ratio of Top-K log keys.

Fig. 4: Impact of the training size.
**Scalability Analysis: Training Size.** It is well known that deep learning models usually require a sufficient number of training samples. The impact of training size on the performance of log anomaly detection models is critical. By analyzing the F-1 scores of various models across different training sizes, we can gain insights into their effectiveness and efficiency. In this experiment, we compare LogGPT with other deep learning-based baselines, across three datasets by varying the training size. Figure 4 shows the experimental results.
The effect of the training size on the HDFS dataset reveals distinct patterns across different models (shown in Figure 4a). LogGPT demonstrates consistent performance across various training sizes, highlighting its robustness and ability to generalize well. OC4Seq shows a consistent increase in performance with the training size, indicating that it benefits from more extensive training data. DeepLog and LogAnomaly exhibit fluctuations in performance, which may be attributed to the sensitivity to training size. The decline in performance for LogBERT and stability for CAT may reflect limitations in their ability to leverage additional training data without changing other hyper-parameters. The varying behaviors of these models underscore the importance of carefully selecting the training size based on the model's characteristics.
We have similar observations on the BGL and Thunderbird datasets. First, with larger training sizes, the performance of LogGPT, DeepLog, LogAnomaly, and LogBERT keeps improving, which shows that these models can benefit from additional training data. Meanwhile, LogGPT outperforms those baselines in most cases. However, the sharp decline for OC4Seq and the overall downward trend for CAT may indicate overfitting or challenges in generalizing from larger training sets.
Overall, LogGPT can achieve very good performance in three datasets. More training samples can further boost the performance of LogGPT.
## VI Conclusion
In this work, we introduced LogGPT, a novel approach to log anomaly detection that builds upon GPT models, further enhanced by a reinforcement learning strategy. Through modeling log sequences as natural language, LogGPT innovatively adapts GPT for log anomaly detection. More importantly, recognizing the existing gap between language modeling and anomaly detection, LogGPT integrates a fine-tuning process guided by a novel Top-K reward metric for anomaly detection. Extensive experiments conducted across various datasets demonstrated the effectiveness of LogGPT, showcasing significant improvements over existing state-of-the-art methods.
|
2309.06086 | Plasticity-Optimized Complementary Networks for Unsupervised Continual
Learning | Continuous unsupervised representation learning (CURL) research has greatly
benefited from improvements in self-supervised learning (SSL) techniques. As a
result, existing CURL methods using SSL can learn high-quality representations
without any labels, but with a notable performance drop when learning on a
many-tasks data stream. We hypothesize that this is caused by the
regularization losses that are imposed to prevent forgetting, leading to a
suboptimal plasticity-stability trade-off: they either do not adapt fully to
the incoming data (low plasticity), or incur significant forgetting when
allowed to fully adapt to a new SSL pretext-task (low stability). In this work,
we propose to train an expert network that is relieved of the duty of keeping
the previous knowledge and can focus on performing optimally on the new tasks
(optimizing plasticity). In the second phase, we combine this new knowledge
with the previous network in an adaptation-retrospection phase to avoid
forgetting and initialize a new expert with the knowledge of the old network.
We perform several experiments showing that our proposed approach outperforms
other CURL exemplar-free methods in few- and many-task split settings.
Furthermore, we show how to adapt our approach to semi-supervised continual
learning (Semi-SCL) and show that we surpass the accuracy of other
exemplar-free Semi-SCL methods and reach the results of some others that use
exemplars. | Alex Gomez-Villa, Bartlomiej Twardowski, Kai Wang, Joost van de Weijer | 2023-09-12T09:31:34Z | http://arxiv.org/abs/2309.06086v1 | # Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning
###### Abstract
Continuous unsupervised representation learning (CURL) research has greatly benefited from improvements in self-supervised learning (SSL) techniques. As a result, existing CURL methods using SSL can learn high-quality representations without any labels, but with a notable performance drop when learning on a many-tasks data stream. We hypothesize that this is caused by the regularization losses that are imposed to prevent forgetting, leading to a suboptimal plasticity-stability trade-off: they either do not adapt fully to the incoming data (low plasticity), or incur significant forgetting when allowed to fully adapt to a new SSL pretext-task (low stability). In this work, we propose to train an expert network that is relieved of the duty of keeping the previous knowledge and can focus on performing optimally on the new tasks (optimizing plasticity). In the second phase, we combine this new knowledge with the previous network in an adaptation-retrospection phase to avoid forgetting and initialize a new expert with the knowledge of the old network. We perform several experiments showing that our proposed approach outperforms other CURL exemplar-free methods in few- and many-task split settings. Furthermore, we show how to adapt our approach to semi-supervised continual learning (Semi-SCL) and show that we surpass the accuracy of other exemplar-free Semi-SCL methods and reach the results of some others that use exemplars.
## 1 Introduction
Continual learning (CL) designs algorithms that can learn from shifting distributions (non-IID data); generally, this is modeled by learning from a sequence of tasks [14]. The main challenge for these methods is the problem of catastrophic forgetting [41], which is a dramatic drop in performance on previous tasks. Most CL approaches, therefore, need to address the trade-off between acquiring new knowledge (plasticity) and preventing forgetting of previous knowledge (stability). The vast majority of existing methods in continual learning have focused on supervised learning, where the incoming data stream is fully labeled. In this paper, we focus on continual learning on unsupervised data.
Only recently, some works have explored continual learning on unsupervised non-IID data-streams [22, 19]. Motivated by the tremendous progress in unsupervised learning, notably of contrastive learning approaches [11, 61], these works aim to extend such approaches to the continual setting. An additional motivation is the fact that unsupervised learning tends to lead to more generalizable feature representations, since features that are not relevant to the specific discriminative task are not automatically discarded. This can potentially lead to representations that can incorporate new tasks faster without incurring significant amounts of forgetting. PFR [22] uses a projection head after the feature extractor of an SSL framework to predict past representations. The projected representations are encouraged to stay close to past representations; therefore, the current model is encouraged to retain past knowledge. CaSSLe [19] uses a similar strategy as PFR, but the projection head is used after the projector of the SSL approach. Even though these methods obtain satisfactory results, they struggle to adapt to new tasks without jeopardizing the vast knowledge already accumulated by the network. We hypothesize that the regularization imposed by these methods to avoid forgetting hurts the learning process of CURL in the following ways: 1) the SSL component cannot fully adapt to the incoming data (low plasticity); 2) the model will suffer a significant drift (forgetting) when the current model is unable to perform the SSL pretext-task. These effects increase as the number of tasks increases and, consequently, the amount of training data per task decreases.
Complementary learning systems (CLS) theory [40, 31] proposes a computational framework in which the interplay between a fast (episodic memory/specific experience) and
a slow (semantic/general structured) memory system is the core of the mechanism of knowledge consolidation. Several existing CL methods have taken inspiration from CLS as a way to find a good stability-plasticity trade-off (see [44, 45] for a review). The fast learner can quickly adapt to new knowledge, which is then carefully absorbed by the slower learner. DualNet [47] proposes to use a self-supervised method to train the slow, more generic learner, whose weights can be quickly adapted for solving a supervised task with exemplars from the replay buffer. Recently, in [2] the authors proposed to use a CLS-based approach and maintain separate plastic and stable models for online CL with experience replay. However, existing methods that exploit CLS for continual learning have in common that they only consider a supervised learning scenario.
This paper aims to apply complementary learning systems theory to improve continual learning from unsupervised data streams. The existing methods [22, 19] can suffer from sub-optimal stability and plasticity on longer sequences, since they have difficulty adapting to the new knowledge required to address the latest task while maintaining the vast knowledge already learned on earlier tasks. Instead, we propose to train an expert network that is relieved of the duty of keeping the previous knowledge and can focus on performing optimally on new tasks. In a second phase, we combine this new knowledge with the old network in an adaptation-retrospection phase to avoid forgetting. In conclusion, the main contributions of this work are:
* A new exemplar-free continual unsupervised representation learning (CURL) method called _Plasticity-Optimized COmplementary Networks_ (POCON). Existing CURL methods learn new knowledge while imposing regularization to prevent forgetting. Instead, POCON separates the learning of new knowledge from the knowledge integration part. Analysis confirms that this leads to a better stability-plasticity trade-off.
* Extensive experiments confirm that POCON outperforms state-of-the-art CURL on various settings (e.g., a 5-9 % performance gain over CaSSLe on ImageNet100 for a 20-100 task-split). Unlike previous CURL methods, POCON can thrive in low-data regimes (such as small-task incremental learning) and setups without task boundaries. We also demonstrate the application of POCON to semi-supervised continual learning.
* We propose and evaluate a _heterogeneous_ version of POCON, where the main network can have a different network architecture than the expert. This opens up the possibility for interesting applications where a slow/big network can be deployed in a cloud environment, while a fast/small learner can be utilized on an edge device, such as a mobile phone.
## 2 Related work
**Continual Learning and Class Incremental Learning.** Existing continual learning methods can be broadly categorized into three types: replay-based, architecture-based, and regularization-based methods [14, 38]. Replay-based methods either save a small amount of data from previously seen tasks [4, 10] or generate synthetic data with a generative model [56, 64]. The replay data can be used during training together with the current data, such as in iCaRL [49] and LUCIR [25], or to constrain the gradient direction while training, such as in AGEM [9]. Architecture-based methods activate different subsets of network parameters for different tasks by allowing model parameters to grow with the number of tasks. Previous works following this strategy include HAT [52], Piggyback [36], PackNet [37], DER [59] and Ternary Masks [39]. Regularization-based methods add a regularization term derived from knowledge of previous tasks to the training loss. This can be done by either regularizing the weight space, which constrains important parameters [53, 55], or the functional space, which constrains predictions or intermediate features [18, 13, 26]. EWC [29], MAS [1], REWC [34], SI [62], and RWalk [8] constrain the importance of network parameters to prevent forgetting. Methods such as LwF [33], LwM [16], and BiC [57] leverage knowledge distillation to regularize features or predictions. DMC [65] work is more related to POCON as the authors proposed to train the expert network without any regularization for a classification task. However, after that, in a distillation phase an additional auxiliary dataset is used to integrate old and new knowledge.
**Self-supervised representation learning.** In recent years, unsupervised methods based on self-supervision have become dominant in learning representation for computer vision systems. The aim of self-supervised learning is to acquire high-quality image representations without explicit labeling. Initially, these methods addressed some well-defined pretext tasks, such as predicting rotation [21], determining patch position [17], or solving jigsaw puzzles in images [43], and labels for these discriminative pretext tasks can be automatically computed to enable learning of meaningful feature representations of images. Recently, researchers have adapted contrastive methods for use with unlabeled data and have placed more emphasis on instance-level data augmentation to find similar or contrasting samples [5, 7, 11, 23, 61]. These methods heavily rely on stochastic data augmentation [58, 67] to generate sufficient similar examples for learning representations, and negative examples are either randomly sampled or excluded entirely [12].
Self-supervised learning has also been used to improve the learning of a sequence of supervised tasks [61, 66]. Their objective is not to learn from unlabeled data, but rather to use self-supervised learning to further enrich the
feature representation. Similarly, pre-trained models with self-supervision have been used to improve incremental average classification metrics [20] with data augmentation, distillation, and even exemplars. Training self-supervised models directly on class-IL setting without exemplars was also proposed in [22, 19], where both present results that self-supervised learning mitigates the problem of forgetting.
**Complementary learning systems.** There are CLS-based methods that use several networks in addition to a rehearsal memory or pseudo-sample generator. FearNet [28] uses a hippocampal network for recalling new examples, a PFC network for long-term memories, and a third network to select between the PFC or hippocampal networks for a particular instance. In [46], they propose a G-EM network that performs unsupervised learning from spatiotemporal representations and a G-SM network that uses signals from G-EM and class labels to learn from videos incrementally. Closer to our work are DualNet [47] and CLS-ER [2] (previously explained); however, both models are supervised and use exemplars. Table 1 presents a summary of similar CL methods.
## 3 Method
In this section, we describe our approach for continual learning of self-supervised representations, referred to as Plasticity-Optimized COmplementary Networks (POCON), which eliminates the need for memory or replay. Our method is based on the complementary learning system (CLS) framework, and involves an interplay between a fast-expert learner and a slow-main network. Our work is motivated by the recognition that fast adaptation to new data is crucial in constructing more robust representations during continual unsupervised representation learning. Rather than attempting to maintain network stability in its old state through strict or relaxed distillation methods, as suggested in recent works [19, 22], POCON allows the expert network to learn a new task freely without any restrictions. After acquiring the new knowledge, we integrate it into the main network. In turn, the main network serves as a good starting point for the new expert. Before presenting the details of the POCON method, we first provide a brief introduction to the problem of continual self-supervised representation learning.
### Self-supervised representation learning
In recent research on self-supervised learning, the objective is to train a network \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{F}\), which maps input data from \(\mathcal{X}\) to output feature representations in \(\mathcal{F}\). The network is trained using unlabeled input data \(x\) sampled from a distribution \(\mathcal{D}\). The learned feature representation is subsequently used to facilitate various downstream tasks. In this paper, we employ the BarlowTwins approach [61] for self-supervised learning of the representation network \(g_{\theta}\). This approach serves as a common baseline for previous works [19, 22]. However, the proposed method is versatile and can be extended to other self-supervised techniques.
In Fig. 1 (left) the BarlowTwins architecture is presented. Both branches use a projector network \(z\) and share the same parameters, but each branch operates on a different view of the same sample \(x\), created by data augmentation, when computing the empirical cross-correlation loss. For simplicity of notation, we omit the explicit mention of the \(z\) network parameters, since it is not utilized by downstream tasks. The BarlowTwins method eliminates the need for explicit negative samples and achieves comparable performance while maintaining computational efficiency. It does, however, assume that sufficiently diverse samples are available in each mini-batch to estimate the correlations among all samples in it.
The network is trained by minimizing an invariance and a redundancy reduction term in the loss function [61]. Here, different augmented views \(X_{A}\) and \(X_{B}\) of the same data samples \(X\) are taken from the set of data augmentations \(\mathcal{D}^{\star}\). This leads to the loss defined as:
\[\mathcal{L}_{c}=\mathbb{E}_{X_{A},X_{B}\sim\mathcal{D}}\Big{[}\sum_{i}(1- \mathcal{C}_{ii})^{2}+\lambda\sum_{i}\sum_{j\neq i}{\mathcal{C}_{ij}}^{2} \Big{]}, \tag{1}\]
where \(\lambda\) is a positive constant trade-off parameter between both terms, and where \(\mathcal{C}\) is the cross-correlation matrix computed between the representations \(z\) of all samples \(X_{A}\) and \(X_{B}\) in a mini-batch indexed by \(b\):
\[\mathcal{C}_{ij}\ =\ \sum_{b}z_{b,i}^{A}z_{b,j}^{B}/(\sqrt{\sum_{b}{(z_{b,i}^{A} )}^{2}}\sqrt{\sum_{b}{(z_{b,j}^{B})}^{2}}).\]
The cross-correlation matrix \(\mathcal{C}\) contains values ranging from -1.0 (worst) to 1.0 (best) for the correlation between the projector's outputs: \(Z_{A}=z(g_{\phi}(X_{A}))\) and \(Z_{B}=z(g_{\phi}(X_{B}))\). The invariance term of the loss function encourages the diagonal elements to have a value of 1. This ensures that the learned embedding is invariant to the applied data augmentations. Meanwhile, the second term (redundancy reduction) maintains the off-diagonal elements close to zero and decorrelates the outputs of non-related images.
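To make the objective concrete, the sketch below implements Eq. 1 following the cross-correlation definition given above; it is an illustration in PyTorch rather than our training code, and the trade-off value \(\lambda\) and the toy batch are arbitrary placeholders.

```python
import torch

def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    """Barlow Twins objective of Eq. (1) for two batches of projector outputs.

    z_a, z_b: tensors of shape (batch, dim) holding z(g(X_A)) and z(g(X_B)).
    """
    # Cross-correlation matrix C_ij = sum_b z^A_{b,i} z^B_{b,j} / (||z^A_{:,i}|| ||z^B_{:,j}||)
    c = (z_a / (z_a.norm(dim=0, keepdim=True) + 1e-12)).T @ \
        (z_b / (z_b.norm(dim=0, keepdim=True) + 1e-12))
    invariance = (1.0 - torch.diagonal(c)).pow(2).sum()             # push C_ii towards 1
    redundancy = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()    # push C_ij (i != j) towards 0
    return invariance + lambd * redundancy

# Toy usage with random embeddings standing in for the two augmented views.
z_a, z_b = torch.randn(256, 128), torch.randn(256, 128)
print(barlow_twins_loss(z_a, z_b).item())
```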
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Method & _Labels_ & _Exemplars_ & _Regularization_ \\ \hline DMC [65] & ✓ & ✓\({}^{*}\) & ✗ \\ DualNet [47] & ✓ & ✓ & ✓ \\ CLS-ER [2] & ✓ & ✓ & ✓ \\ LUMP [35] & ✗ & ✓ & ✓ \\ \hline PFR [22] & ✗ & ✗ & ✓ \\ CaSSLe [19] & ✗ & ✗ & ✓ \\ POCON (ours) & ✗ & ✗ & ✗ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of SSL-based CURL. Only POCON does not use any regularization during training, which results in higher plasticity for the current task. \({}^{*}\)DMC uses an auxiliary dataset for distillation, instead of exemplars.
### Continual SSL Problem Definition
In this work, we consider a CL scenario in which the feature extractor \(f_{\theta}\) must learn from a set of tasks \(\{1,\dots,T\}\) from different distributions, where each task \(t\) from that set follows the distribution \(\mathcal{D}_{t}\). We would like to find the parameters \(\theta\) of the feature extractor \(f_{\theta}\) that minimize the summed loss over all tasks \(T\):
\[\arg\min_{\theta}\sum_{t=1}^{T}\mathcal{L}_{c}^{t}, \tag{2}\]
where \(\mathcal{L}_{c}^{t}=\mathbb{E}_{X_{A},X_{B}\sim\mathcal{D}^{\star}_{t}}[\mathcal{L}_{c}]\) and \(\mathcal{L}_{c}\) is defined as in Eq. 1. However, finding the right \(\theta\) poses the main problem in continual learning, as the previous data \(D_{1},...,D_{t-1}\) is not available at time \(t\) and Eq. 2 cannot be minimized directly.
### Plasticity-Optimized Complementary Networks
We propose Plasticity-Optimized Complementary Networks (POCON) based on the CLS framework. POCON training is composed of three stages (see Fig. 1 for details): 1) learn expert knowledge from the current task data, 2) integrate the new expert knowledge into the main network, and 3) initialize the new expert from the updated main network. Each stage is explained in detail in the next sections.
**_Stage_ 1: SSL of expert for the current task.**
In this step, we are interested in fully adapting to the input training data of the current task. Hence, a feature extractor \(g_{\theta}^{t}\) is used to learn in a self-supervised way (following Eq. 1) on the data \(D_{t}\). Note that, unlike previous methods (PFR and CaSSLe), we do not constrain our expert in any way during training (such as imposing regularization to prevent forgetting). We allow the expert network to be fully plastic and optimal for learning the representation of the current task.
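A minimal sketch of this stage is given below. It reuses the `barlow_twins_loss` function sketched in Sec. 3.1; the backbone, projector, toy data, noise-based `augment` stand-in and optimizer settings are placeholders, so this only illustrates the unconstrained training loop rather than our exact implementation.

```python
import torch
import torch.nn as nn

def augment(x):
    # Stand-in for the stochastic image augmentations used in Stage 1.
    return x + 0.1 * torch.randn_like(x)

def train_expert(expert, projector, loader, epochs=1, lr=0.01, wd=1e-4):
    """Stage 1: fully plastic SSL of the expert, with no CL regularization."""
    params = list(expert.parameters()) + list(projector.parameters())
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=wd)
    for _ in range(epochs):
        for x in loader:
            z_a = projector(expert(augment(x)))
            z_b = projector(expert(augment(x)))
            loss = barlow_twins_loss(z_a, z_b)  # defined in the previous sketch
            opt.zero_grad()
            loss.backward()
            opt.step()
    return expert

# Toy usage on random tensors standing in for the current task data D_t.
expert = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
projector = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
train_expert(expert, projector, [torch.randn(32, 3, 32, 32) for _ in range(4)])
```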
**_Stage_ 2: Knowledge integration.** Once we absorb the knowledge of the current task in the expert network at _Stage_ 1, it needs to be transferred and accumulated into the main feature extractor \(f_{\theta}\) without forgetting previous tasks (see Fig. 1, stage 2). To do so, we employ an additional _adaptation projector_ \(n:\mathcal{Z}\rightarrow\mathcal{W}\), which maps the embedding space of the main network \(f_{\theta}\) to the embedding space learned in the expert network \(g_{\phi^{t}}\). Then, to avoid forgetting, we use a _retrospection projector_ \(m:\mathcal{Z}\rightarrow\mathcal{Z}\) that maps the embedding space learned on the current task back to the embedding space learned on the previous ones. The final loss function for the knowledge integration stage consists of an adaptation and a retrospection component:
\[\mathcal{L}_{INT}^{t}=\mathbb{E}_{X_{A}\sim\mathcal{D}^{\star}_{t}}\Bigg[\sum_{x_{a}\in X_{A}}\Big(\big\|\,n(g_{\phi^{t}}(x_{a}))-f_{\theta^{t}}(x_{a})\,\big\|+\big\|\,m(f_{\theta^{t}}(x_{a}))-f_{\theta^{t-1}}(x_{a})\,\big\|\Big)\Bigg] \tag{3}\]
where both sources of knowledge, the previous main network \(f_{\theta^{t-1}}\) and the current expert \(g_{\phi^{t}}\), are frozen, and only \(f_{\theta^{t}}\) and the adaptor networks \(n\) and \(m\) are updated by this loss function. The goal is to integrate knowledge using distillation and current task data.
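The sketch below spells out the integration objective in PyTorch, following the form of Eq. 3 (adaptation projector applied to the expert features). The projector heads are stand-ins for the four-layer MLPs used in practice, the loss is averaged rather than summed over the mini-batch, and the expert and previous main network are treated as frozen, as described above; none of the hyper-parameters are those of our experiments.

```python
import torch
import torch.nn as nn

def make_head(dim=128, hidden=256):
    # Stand-in adaptation/retrospection projector (a deeper MLP in practice).
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

def integration_loss(f_t, f_prev, g_expert, n_head, m_head, x):
    """Eq. (3): adaptation towards the frozen expert plus retrospection towards
    the frozen previous main network, both evaluated on current-task data x."""
    with torch.no_grad():              # both knowledge sources stay frozen
        expert_feats = g_expert(x)     # g_{phi^t}(x)
        old_feats = f_prev(x)          # f_{theta^{t-1}}(x)
    feats = f_t(x)                     # trainable main network f_{theta^t}
    adaptation = (n_head(expert_feats) - feats).norm(dim=1).mean()
    retrospection = (m_head(feats) - old_feats).norm(dim=1).mean()
    return adaptation + retrospection
```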
**_Stage_ 3: New expert initialization.** In order to begin the training on the next task, the expert \(g_{\phi}^{t+1}\) must be prepared with the best prior knowledge for learning the
Figure 1: Overview of the POCON method that uses multi-stage training of complementary learning system for unsupervised continual representation learning. Three stages of training allow POCON to maintain fast and efficient adaptation of the fast expert network while maintaining stable representation in a slow learner – the main network. To avoid forgetting, knowledge from the new task expert is integrated (adaptation) with the previous state of the slow network (retrospection). The last stage of the training is dedicated for the best preparation of the new expert for the new task.
new task representation efficiently. In this stage, we need to initialize a new expert (see Fig. 1 ) in the best way. Improper initialization can influence the training epochs in _Stage_1 and make the problem of adaptation for projector \(n\) more difficult in _Stage_2. We looked into two potential initialization setups based on the similarities between the expert and main backbones. The homogeneous setup, where both the main network and the expert network share the same architecture. This is the default setup in our experiments. In addition, we will also consider the heterogeneous setup, where the expert has another architecture than the main network. This allows, for example, to apply smaller networks when per task data is limited, or computation should be performed on edge devices.
_Homogeneous setup (CopyOP):_ In order to begin the training of \(g_{\phi}^{t+1}\) with all the knowledge accumulated up to \(t\), we can simply copy the weights of the main network \(f_{\theta}^{t}\) into \(g_{\phi}^{t+1}\). This operation avoids the recency bias of \(g_{\phi}^{t}\) and provides an excellent initialization point for \(g_{\phi}^{t+1}\) to continue learning. Furthermore, CopyOP makes the problem of the adaptation projector \(n\) easier, since \(g_{\phi}^{t+1}\) will have a representation similar to \(f_{\theta}^{t}\). The main drawback of CopyOP is that it constrains POCON to use the same architecture for the main \(f_{\theta}^{t}\) and the expert \(g_{\phi}^{t+1}\) networks.
_Heterogeneous setup (D2eOP):_ This distillation-based initialization method allows the use of heterogeneous network architectures for \(f_{\theta}^{t}\) and \(g_{\phi}^{t+1}\) in POCON. In order to transfer the knowledge, we propose a projected distillation as in _Stage_2 using a fresh adaptation projector \(n\). Despite being more computationally demanding, this way of transferring knowledge offers one big advantage - different architecture for \(g_{\phi}^{t+1}\) allows the use of a smaller backbone network or even very different ones, e.g. ViT. Using a smaller backbone is useful for low-data regimes (as we will show later) or for devices low in computational power at the time of learning the expert (robotics, edge devices).
The loss function for D2eOP in _Stage_3 is given as:
\[\mathcal{L}^{t}_{D2eOP}=\mathbb{E}_{X_{A}\sim\mathcal{D}^{\star}_{t}}\left[\sum_{x_{a}\in X_{A}}\big\|\,n(g_{\phi}^{t+1}(x_{a}))-f_{\theta^{t}}(x_{a})\,\big\|\right] \tag{4}\]
where \(n\) is a projector that adapts the embedding space of the new expert \(g_{\phi}^{t+1}\) to the previous one from the main network, using the current task data \(D^{t}\). The difference between D2eOP and CopyOP is presented in the right column of Fig. 1.
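The two initialization variants can be sketched as follows: CopyOP is a plain weight copy, while D2eOP distills the frozen main network into a (possibly different) new expert through a fresh adaptation projector, as in Eq. 4. The optimizer settings are placeholders and the loss is again batch-averaged rather than summed.

```python
import copy
import torch

def copy_op(main_net):
    """CopyOP (homogeneous setup): the new expert starts as a copy of f_theta^t."""
    return copy.deepcopy(main_net)

def d2e_op(main_net, new_expert, n_head, loader, epochs=1, lr=0.01):
    """D2eOP (heterogeneous setup): projected distillation of the frozen f_theta^t
    into the new expert on current-task data, following Eq. (4)."""
    params = list(new_expert.parameters()) + list(n_head.parameters())
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x in loader:
            with torch.no_grad():
                target = main_net(x)
            loss = (n_head(new_expert(x)) - target).norm(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return new_expert
```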
Other initialization options exist. We discuss them and provide more results in the Appendix.
### Plasticity-stability trade-off in CURL with SSL
Our work is motivated by two key observations:
* Regularization-based CURL models (PFR and CaSSLe) have implicitly handicapped plasticity. During training, they learn from new task data using SSL while keeping the backbone representations close to those of previous tasks to avoid forgetting. Therefore, the backbone network cannot fully adapt to the current data due to the regularization imposed by the CL method.
* As the number of tasks increases, current CURL models lose stability. In this case, the data distribution changes more abruptly and the backbone is pushed to follow an imperfect estimation of the new distribution restricted by the regularization of the old model (see first observation). Hence, these models are not as plastic as finetuning and do not present stable behavior.
In Fig. 2 we present the stability-plasticity trade-off for the twenty-task partition of CIFAR100 in PFR and our method (POCON). For the plasticity plot, we evaluate the accuracy at the end of each task \(t\) using task \(t\) data for training and evaluation. The stability plots were made by training and evaluating on a fixed task (tasks \(1\), \(8\), and \(15\) in Fig. 2) after the training of each task \(t\).
As Fig. 2\(a)\) shows, POCON has a higher accuracy (is more plastic) in most of the tasks during the whole training, since training the expert in stage 1 is not constrained by any regularization. Moreover, Fig. 2\(b)\), \(c)\), and \(d)\) display how POCON is able to retain previous representations over the whole incremental learning session, where our double distillation (adaptation and retrospection) retains the learned representation correctly.
Figure 2: Plasticity-stability trade-off in POCON and PFR: (a) accuracy of a current task during the continual learning session, (b-d) accuracy of task 1, 8, and 15 in the following incremental steps of learning the representation with POCON and PFR. POCON presents superior stability to PFR at the same time still maintaining better plasticity, with its non-restrictive training of the expert network.
## 4 Experimental Results
### Experimental setup
**Datasets** We use the following datasets: _CIFAR-100_[30], which consists of 100 object classes in 45,000 images for training, 5,000 for validation, and 10,000 for testing; all images are 32\(\times\)32 pixels. _TinyImageNet_, a rescaled subset of 200 ImageNet [15] classes used in [54] and containing 64\(\times\)64 pixel images; each class has 500 training images, 50 validation images, and 50 test images. _ImageNet100_, a subset of one hundred classes from ImageNet [15] that contains \(130\)k images of size 224\(\times\)224 pixels.
**Training procedure** In all experiments, we train ResNet-18 [24] (or ResNet-9 for the heterogeneity experiment) for the expert and main network using SGD with an initial learning rate of \(0.01\) and a weight decay of \(0.0001\) for \(250\) epochs (200 for ImageNet100) in _Stage_1. For _Stage_2, the same optimization procedure as in _Stage_1 is followed, but for \(500\) epochs (400 for ImageNet100).
The data augmentation used in _Stage_1 is the same as in BarlowTwins [11]. Based on the self-supervised distillation ideas of [51, 42], we use a four-layer MLP projector as the adaptation and retrospection projectors, following the architecture of [42].
Downstream classifiers are by default linear and trained with a CE-loss and Adam optimizer with a learning rate \(5e\)-\(2\) on CIFAR-100, and \(3\) on TinyImageNet. We use validation data to implement a patience scheme that lowers the learning rate by a factor of \(0.3\) and \(0.06\) up to three times while training a downstream task classifier. For ImageNet100 we use the same training and evaluation procedure as [19].
**Baseline methods** We only compare to exemplar-free methods and exclude methods that require replay from our comparison1.
Footnote 1: Code available at [https://github.com/alviur/pocon_wav2024](https://github.com/alviur/pocon_wav2024)
_Fine-tuning (FT)_: The network is trained sequentially on each task without access to previous data and with no mitigation of catastrophic forgetting. _Joint_: We perform joint training with fine-tuning on all data which provides an upper bound. Equivalent to having a single-task scenario.
_PFR_[22] and _CaSSLe_[19] with Barlow Twins: We use the code and hyperparameters provided by the authors, in PFR we used \(\lambda=25\) for all the experiments.
In continual semi-supervised learning (section 4.4), we consider the following methods. Regularization-based methods: Learning without Forgetting (LwF) [33] and online Elastic Weight Consolidation (oEWC) [29]; replay-based methods: Experience Replay (ER) [50], iCaRL [49] and GDumb [48]; and continual semi-supervised learning methods: CCIC [6], PAWS [3] and NNCSL [27].
### Continual representation learning
In this experiment, we evaluate all methods in the continual representation learning setting, where each task consist of a distinct set of classes from a single dataset. Splits are prepared similarly to the class incremental learning setting, but without access to labels. Specifically, we split datasets into four, ten, twenty, fifty, and one hundred equal tasks as done in [49]. In each task, we perform SSL (_Stage_1), knowledge integration (_Stage_2), and a new expert initialization (_Stage_3). In the evaluation phase, we train a linear classifier using the learned representation of the main network encoder. Please note that the expert network is never used for evaluation (unless specified). We use all available test data to obtain the overall task-agnostic performance evaluation2. In all our tables we report the accuracy in the last task.
Footnote 2: We use _task-agnostic_ evaluation in this paper to refer to the class-incremental learning evaluation [38] as in CaSSLe and PFR methods
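The evaluation protocol amounts to a linear probe on the frozen main-network features; a compact sketch follows, in which the feature dimension, number of epochs and learning rate are arbitrary illustrative values rather than the settings listed in Sec. 4.1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_eval(backbone, train_loader, test_loader, num_classes, feat_dim=128,
                epochs=10, lr=5e-2):
    """Task-agnostic linear evaluation: the backbone stays frozen and only a
    linear classifier on its features is trained with cross-entropy."""
    backbone.eval()
    clf = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                feats = backbone(x)
            loss = F.cross_entropy(clf(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            correct += (clf(backbone(x)).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```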
**Homogeneous setup.** Table 2 presents the results for all the methods on three commonly used datasets - CIFAR100, TinyImageNet, and ImageNet100. For CIFAR100 the upper
\begin{table}
\begin{tabular}{l|l l l l l} \hline \hline \multicolumn{5}{c}{CIFAR-100 (32x32)} \\ \hline
**Method** & **4 tasks** & **10 tasks** & **20 tasks** & **50 tasks** & **100 tasks** \\ \hline FT & 54.8 & 50.94 & 44.95 & 38.0 & 27.0 \\ CaSSLe & 59.80 & 52.5 & 49.6 & 45.3 & 42.10 \\ PFR & 59.70 & 54.33 & 44.80 & 46.5 & 43.30 \\ POCON & **63.7** & **60.5** & **56.8** & **48.9** & **48.94** \\ \hline Joint & \multicolumn{5}{c}{65.4} \\ \hline \hline \multicolumn{5}{c}{TinyImagenet (64x64)} \\ \hline
**Method** & **4 tasks** & **10 tasks** & **20 tasks** & **50 tasks** & **100 tasks** \\ \hline FT & 41.95 & 36.55 & 32.29 & 22.34 & 2.80 \\ \hline CaSSLe & **46.37** & **41.53** & 38.18 & 28.08 & 25.38 \\ PFR & 42.23 & 39.20 & 31.22 & 25.87 & 21.20 \\ POCON & 40.97 & 41.06 & **41.14** & **37.20** & **30.24** \\ \hline Joint & \multicolumn{5}{c}{50.18} \\ \hline \hline \multicolumn{5}{c}{ImageNet100 (224x224)} \\ \hline
**Method** & **5 tasks** & **10 tasks** & **20 tasks** & **50 tasks** & **100 tasks** \\ \hline FT & 56.10 & 48.13 & 42.73 & 39.64 & 21.03 \\ \hline CaSSLe & **67.56** & 59.78 & 53.92 & 46.64 & 36.44 \\ PFR & 66.12 & 60.46 & 54.84 & 42.18 & 38.34 \\ POCON & 66.30 & **61.36** & **59.32** & **53.50** & **45.40** \\ \hline Joint & \multicolumn{5}{c}{71.06} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy of linear evaluation on equal splits of various datasets and different numbers of tasks with ResNet-18. POCON presents better results than other regularization-based methods and maintains high accuracy even with an increasing number of tasks.
bound of Joint training for a single task is \(65.4\%\). Note that the gap between FT and _Joint_ grows with more tasks, from \(10.6\%\) for four tasks up to \(38.4\%\) for the extreme case of one hundred tasks. POCON is significantly better than CaSSLe and PFR for the different numbers of tasks. In the four-task setting, POCON is only \(1.7\%\) points lower than Joint, while the next best, CaSSLe, reaches \(3.3\%\) points lower accuracy. With an increasing number of tasks, the CaSSLe performance drops faster than PFR, while POCON maintains superior results against the others and presents the lowest decrease in performance.
The results for TinyImagenet and ImageNet100 follow the same procedure as for CIFAR-100, but here we have larger images (64x64 and 224x224) and more classes for TinyImagenet (200). In this case, POCON outperforms the other methods whenever the number of tasks is high: from 20 tasks for TinyImageNet and from 10 tasks for ImageNet100. In the few-task scenario, POCON remains close to the other CL methods due to high data availability. The accuracy gap between POCON and the other methods increases with the number of tasks.
**Heterogeneous setup.** An expert in POCON can use a different network architecture than the main network. That opens the possibility of using a smaller network for the expert whenever this can be beneficial, e.g., when tasks are small, with not enough data to train a large ResNet-18 network, or when the device where the expert network is being trained is not powerful enough (robot, edge). We investigated heterogeneous architecture use in POCON with a smaller network, ResNet-9, which has \(6.5\)M parameters instead of the 11M of ResNet-18. The results are presented in Table 3. The different combinations of POCON use the smaller network either for the expert only, or for both the expert and main networks. With an increasing number of tasks, it becomes more beneficial to use a smaller expert (20 tasks). With even less data per task (50 and 100 tasks), we see improved results when the main network is smaller as well: we gain \(4.6\%\) by changing from ResNet-18 to ResNet-9 for one hundred tasks.
### Task-free setting
To show the ability of POCON to handle varying data streams, we test it in the task-free setting. In this setting, there is no explicit boundary between tasks, the data from one task changes smoothly to the data from the next [63, 32]. This prevents methods from having a fixed point where the network can change and prepare for the new task; the adaptation is ongoing. For instance, when we receive data \(D_{t}\) there is a mix with the data \(D_{t-1}\). At some point, we only get data from \(D_{t}\), but later on, we will get a mix with \(D_{t+1}\). For this experiment, we employ the data partition of [63] for the CIFAR100 dataset with beta equal to \(4\) (please see the Appendix for more details).
Since there is no clear boundary, we cannot perform distillation without losing data from the stream of mini-batches. In this setting, POCON performs _Stage_2 in parallel with _Stage_1. In order to do so, a frozen copy of the expert \(g_{\phi}\) is used for the _Stage_2 while a new expert is learning on the current data. After \(s\) steps, a copy of the expert \(g_{\phi}\) is passed to perform a distillation for \(ds\) steps. Note that we do not store any data; distillation for _Stage_2 and SSL in _Stage_1 is always performed using the current mini-batch data. _Stage_3 is omitted, as the expert network is not changed and initialized, as there is no task switch.
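The schedule above can be summarized by the sketch below, where `step_expert` and `step_integrate` stand for one mini-batch update of Stage 1 and Stage 2 respectively (e.g., built from the losses sketched in Sec. 3). The values of \(s\) and \(ds\) and the snapshotting of the main network as retrospection target are assumptions of this illustration rather than our exact procedure.

```python
import copy

def task_free_pocon(stream, expert, main_net, step_expert, step_integrate, s=500, ds=500):
    """Task-free POCON: Stage 1 and Stage 2 run in parallel on the data stream.
    Every s mini-batches the current expert (and main network) is frozen as a
    snapshot, and the next ds mini-batches also distill that snapshot into the
    main network. No data is stored; only the current mini-batch is used."""
    frozen_expert, prev_main, remaining = None, None, 0
    for step, batch in enumerate(stream):
        step_expert(expert, batch)                      # Stage 1 on the current batch
        if remaining > 0:                               # Stage 2 on the same batch
            step_integrate(main_net, prev_main, frozen_expert, batch)
            remaining -= 1
        if (step + 1) % s == 0:                         # refresh the frozen snapshots
            frozen_expert = copy.deepcopy(expert).eval()
            prev_main = copy.deepcopy(main_net).eval()
            remaining = ds
    return main_net
```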
To compare to other methods, we adapted PFR to work in the task-free setting similarly. In this case, the feature extractor of past data is updated after \(s\) steps (as a copy of the current feature extractor), and the regularization is performed during the whole training as in normal PFR. We also present results with simple fine-tuning (FT).
Fig. 3 presents the results of linear evaluation of the learned representation in continual learning for the ten, twenty, and fifty data partition settings. Only in the very first steps, for 10 and 20 tasks, PFR and FT
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{exemplar} & \multicolumn{3}{c}{CIFAR-100} \\ & & 0.8\% & 5.0\% & 25.0\% \\ \hline \multicolumn{5}{c}{_Exemplar_-_based methods_} \\ \hline ER [50] & ✓ & \(8.2\pm 0.1\) & \(13.7\pm 0.6\) & \(17.1\pm 0.7\) \\ iCaRL [49] & ✓ & \(3.6\pm 0.1\) & \(11.3\pm 0.3\) & \(27.6\pm 0.4\) \\ GDumb [48] & ✓ & \(8.6\pm 0.1\) & \(9.9\pm 0.4\) & \(10.1\pm 0.4\) \\ \hline CCIC [6] (500) & ✓ & \(11.5\pm 0.7\) & \(19.5\pm 0.2\) & \(20.3\pm 0.3\) \\ PAWS [3] (500) & ✓ & \(16.1\pm 0.4\) & \(21.2\pm 0.4\) & \(19.2\pm 0.4\) \\ NNCSL [27] (500) & ✓ & \(\mathbf{27.4}\pm 0.5\) & \(\mathbf{31.4}\pm 0.4\) & \(\mathbf{35.3}\pm 0.3\) \\ \hline \multicolumn{5}{c}{_Exemplar-_free methods_} \\ \hline Fine-tuning & ✗ & \(1.8\pm 0.2\) & \(5.0\pm 0.3\) & \(7.8\pm 0.1\) \\ LwF [33] & ✗ & \(1.6\pm 0.1\) & \(4.5\pm 0.1\) & \(8.0\pm 0.1\) \\ oEWC [29] & ✗ & \(1.4\pm 0.1\) & \(4.7\pm 0.1\) & \(7.8\pm 0.4\) \\ \hline Prototypes & ✗ & \(19.2\pm 0.9\) & \(23.5\pm 0.4\) & \(24.1\pm 0.1\) \\ Prototypes+_SDC_ & ✗ & \(\mathbf{22.7}\pm 0.6\) & \(\mathbf{27.6}\pm 0.4\) & \(\mathbf{28.5}\pm 0.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Semi-supervised continual learning comparison on CIFAR100 dataset. The number between brackets indicates the size of the memory buffer. We highlight the best method in each group with **bold** fonts.
\begin{table}
\begin{tabular}{c c|c c c c} \hline \hline
**Method** & **Arch** & **4 tasks** & **10 tasks** & **20 tasks** & **50 tasks** & **100 tasks** \\ \hline & 58.95 & 55.4 & 50.77 & 49.18 & 40.78 \\ POCON & R18-R18 & **63.7** & **60.5** & 56.8 & 48.9 & 48.94 \\ POCON & R18-R9 & 62.05 & 60.5 & **57.48** & 49.7 & 47.94 \\ POCON & R9-R9 & 58.34 & 58.32 & 56.07 & **53.3** & **51.22** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy of linear evaluation with heterogeneous POCON architectures using ResNet-18 (R18) and ResNet-9 (R9) on split CIFAR-100. With an increasing number of tasks and less data per task, POCON benefits from the smaller ResNet-9 network, first only for the expert (20 tasks) and later (50 tasks) for both networks, expert and main.
accuracy is above POCON. However, that changes significantly in the following steps in favor of POCON, which continues improving the learned representations. Unlike PFR and FT, POCON has a more stable learning curve for all task splits. The improvements over PFR and FT are larger for more tasks, since the regularization used by PFR hurts plasticity while learning new tasks.
### Continual semi-supervised learning
We propose a simple extension of POCON to the semi-supervised setting, where a small percentage of the data is labeled. We train the method in the same fashion as in the unsupervised case, initially ignoring the available labels. After updating \(f^{t}_{\theta}\) until convergence with POCON, we initialize the prototype of each class as the average of the labeled samples. Then we assign all unlabeled data to the nearest prototype center, and we again compute the average of all samples assigned to each prototype. We perform the same procedure after each of the tasks. We call this method _Prototypes_. We also report _Prototypes+SDC_, where we use Semantic Drift Compensation [60] to prevent forgetting - we estimate and compensate for the drift of the learned class prototypes.
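The prototype construction can be written in a few lines; the sketch below mirrors the description above (average the labeled features per class, assign unlabeled features to the nearest prototype, then re-average), assuming NumPy arrays of frozen POCON features and at least one labeled sample per class. The SDC correction is not included.

```python
import numpy as np

def build_prototypes(feats_lab, labels, feats_unlab, num_classes):
    """Class prototypes from the labeled subset, refined with unlabeled data."""
    protos = np.stack([feats_lab[labels == c].mean(axis=0) for c in range(num_classes)])
    # Assign every unlabeled sample to its nearest prototype (Euclidean distance).
    dists = np.linalg.norm(feats_unlab[:, None, :] - protos[None, :, :], axis=2)
    pseudo = dists.argmin(axis=1)
    # Re-average over all samples (labeled + pseudo-labeled) assigned to each class.
    all_feats = np.concatenate([feats_lab, feats_unlab])
    all_labels = np.concatenate([labels, pseudo])
    return np.stack([all_feats[all_labels == c].mean(axis=0) for c in range(num_classes)])

def predict(feats, protos):
    """Nearest-prototype classification of new feature vectors."""
    return np.linalg.norm(feats[:, None, :] - protos[None, :, :], axis=2).argmin(axis=1)
```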
In Table 4 we present the results for continual semi-supervised learning under three levels of supervision, namely with only 0.8%, 5.0% and 25.0% of samples labeled (following the settings of NNCSL [27] and CCIC [6]). Simply applying prototypes with POCON achieves much better results than the other exemplar-free methods (LwF [33] and oEWC [29]). The method outperforms the continual learning methods which only exploit the labeled data (such as ER, iCaRL, and GDumb). Furthermore, it also outperforms the semi-supervised methods CCIC and PAWS. Note that our method, without storing any exemplars, is only outperformed by the recent NNCSL method which requires exemplars and is dedicated to the semi-supervised learning setting.
## 5 Conclusions and Future directions
We proposed a method for exemplar-free continual unsupervised representation learning called _Plasticity-Optimized COmplementary Networks_. POCON trains an expert network that performs optimally on the current data without any regularization (optimizing plasticity). The knowledge is transferred to the main network through distillation while retaining similarity with the old main network to avoid forgetting. Experiments on CIFAR100, TinyImagenet, and ImageNet100 show that our approach outperforms other methods for SSL exemplar-free CL (PFR and CaSSLe), and it is especially good in many-task scenarios.
The POCON method presents several opportunities for improvement. One promising direction is to extend the method to the scenario where multiple experts can be trained in parallel on different tasks, similar to the clients in federated learning. Secondly, heterogeneity of the fast and slow learner can be better investigated - how we can benefit from having different architectures (even mixed ones with transformer-based network).
**Limitations.** Although we show how to squeeze (and retain) the knowledge learned by the SSL method, when the number of samples per task is too low, the knowledge transfer (stages 2 and 3) and the expert training (stage 1) degrade. Note that in this extreme scenario, we are moving towards online continual learning.
**Impact Statement.** Continual learning systems do not require data to be stored. As such, they can contribute to data privacy and reduce vulnerabilities related to data storage. As with all machine learning algorithms, special care should be taken to address biases present in the data (and the data collection process). Continual learning could exacerbate biases in the data because of the task-recency bias, which refers to the problem that continual learning algorithms tend
Figure 3: Task-free setting for CIFAR-100 with different lengths of the incremental learning sequence. Plots present linear evaluation of the learned representation for the blurred 10, 20, and 30 data partitions respectively (like in [63, 32]). POCON presents significantly better results with stable learning accuracy curves.
to be biased towards the last data fed to the algorithm.
**Acknowledgments** We acknowledge the support from the Spanish Government funding for projects PID2022-143257NB-I00, TED2021-132513B-I00 funded by MCIN/AEI/10.13039/501100011033 and by FSE+ and the European Union NextGenerationEU/PRTR, and the CERCA Programme of Generalitat de Catalunya. Bartlomiej Twardowski acknowledges the grant RYC2021-032765-I.
|
2309.16295 | Influence of the porosity pattern on the aerodynamics of a square plate | The evolution of the normal aerodynamic coefficient of 19 configurations of
square plates with various porosity patterns, ranging from solid plate to
homogeneous porous plate, is experimentally characterized. The variation of the
porosity pattern is obtained by partially covering the holes of a commercial
fly-swatter using adhesive tape. Evolution of the normal aerodynamic
coefficient is assessed from the measurement of the angular position of the
porous plate, placed as a freely rotating pendulum swept by a flow in a wind
tunnel. These angular measurements are also supported by PIV measurements of
the structure of the wake. We show that the porosity pattern determines whether
or not an abrupt stall occurs. In particular, the details of the porosity
pattern on the edges of the plate are decisive for the existence of abrupt
stall. | Ariane Gayout, Mickaël Bourgoin, Nicolas Plihon | 2023-09-28T09:46:24Z | http://arxiv.org/abs/2309.16295v1 | # Influence of the porosity pattern on the aerodynamics of a square plate
###### Abstract
The evolution of the normal aerodynamic coefficient of 19 configurations of square plates with various porosity patterns, ranging from solid plate to homogeneous porous plate, is experimentally characterized. The variation of the porosity pattern is obtained by partially covering the holes of a commercial fly-swatter using adhesive tape. Evolution of the normal aerodynamic coefficient is assessed from the measurement of the angular position of the porous plate, placed as a freely rotating pendulum swept by a flow in a wind tunnel. These angular measurements are also supported by PIV measurements of the structure of the wake. We show that the porosity pattern determines whether or not an abrupt stall occurs. In particular, the details of the porosity pattern on the edges of the plate are decisive for the existence of abrupt stall.
Also at: Biomimetic Group, Energy and Sustainability Research Institute Groningen, Faculty of Science and Engineering, University of Groningen, 9747 AG Groningen, The Netherlands
## I Introduction
The omnipresence of porous structures in Nature and in technological applications induces complexity in a wide range of physical problems. In the context of technological applications, the water flow through nets and the drag exerted on net structures is crucial for aquaculture [1]. The use of porous structures and fences has long been proposed as a means of controlling the characteristics of flows, with a number of important applications in aerodynamics or civil engineering [2; 3]. The development of fog harvesters for water supply in arid regions requires a fine understanding of flows in the vicinity of nets [4; 5]. At smaller scales, the efficiency of respiratory masks in reducing the propagation of airborne viruses relies on reducing the amount and spreading distance of aerosols following the propagation of multiphase flows through finely meshed masks [6; 7]. In the context of wind-dispersed plant seeds, the flight of the dandelion involves a porous structure made of a bundle of bristles. It was recently demonstrated that the aerodynamic drag is maximized thanks to a specific structure of the wake, namely a separated vortex ring [8; 9]. Bristled wings are also widespread in Nature and were shown to increase lift at small Reynolds numbers [10; 11].
Fundamental aerodynamic studies of porous materials date back to the pioneering work of G.I. Taylor [12; 13], and largely focused on the influence of porosity on pressure drop and on drag at normal incidence. Castro investigated the influence of the porosity fraction of perforated plates with centimetric holes and showed a drag decrease as the porosity fraction decreases and the absence of vortex shedding for porous fractions above \(\sim 0.2\). Several experimental studies then focused, for instance, on the interaction of periodically arranged jets emerging from porous screens [14] or on the shape of the perforated obstacle [15]. The modelling of the flow behind porous screens at normal incidence also attracted attention [16; 17]. Surprisingly, the effect of the angle of attack on the aerodynamic coefficients and the flow features of porous screens has only recently been studied experimentally and theoretically [18], even though it was shown that porosity strongly influences the trajectory of permeable or porous disks [19], or their stability [20]. In this article, we extend this previous work and experimentally study the aerodynamic coefficients of porous plates with inhomogeneous porosity patterns over a wide range of angles of attack. Our strategy was to systematically vary the porosity pattern by partially covering a porous plate with a large number of holes, and we chose a widely available object that is ideal for this study: a fly-swatter.
Patented in 1900 by Montgomery, the first modern fly-swatter, the "Fly-Killer", was composed of a rectangular wired net [21]. The use of wire-netting was introduced for durability and elasticity but no reference on the aerodynamic advantage of such netting is mentioned in the patent. In the later patents of Gatch in 1927 [22] and Brownson in 1938 [23], improvements on the fly-swatter mostly cover the handle, to facilitate the killing motion. In 1939, Baker patented a different kind of fly-swatter, made of a rubber surface with a few holes, which is supposed to act as a pocket to trap the fly with the elastic recoil from the surface killing the fly without crushing it [24]. This change was motivated by avoiding property damage and traces when battling with and killing a fly. Here again however, no aerodynamic considerations are presented in the patents and it seems that holes were added empirically, to either reduce costs or increase elasticity. Sketches of the above-mentioned fly-swatter are reproduced in Fig. 1.a-d), together with a photograph of the modern plastic model used for the present investigation.
The evolution of the normal aerodynamic coefficient (see next section) when varying the angle of attack and the porosity was obtained by placing the fly-swatter at the bottom of a freely rotating rod, acting as a pendulum, placed in a wind tunnel, following a setting previously investigated [25; 26]. As the wind speed increases, the angular position evolves and is set by the torque balance [25; 26]. When an abrupt stall occurs, the result is a discontinuity in angular position between a drag-dominated branch and a lift-dominated branch, possibly leading to bistability [25; 26]. The transitions between the two branches were shown to be controlled by rare aerodynamic events [26], making this aerodynamic pendulum one of the simplest experimental configurations for studying rare-event statistics.
This article is organized as follows: Sec. II describes the experimental setup. The results are then presented and discussed in Sec. III, with a focus on the differences between a solid and a porous plate in Sec. III.1, and the investigation of 19 different porosity patterns in Sec. III.2 and III.3. Conclusions are then drawn in Sec. IV.
## II Experimental setup
The influence of the porosity pattern on the aerodynamic coefficients has been experimentally evaluated using the plastic fly-swatter shown in Fig. 1.e). The plastic fly-swatter consists in a square of size \(a=10\,\mathrm{cm}\) with square holes of size \(2.4\,\mathrm{mm}\) equally spaced at a distance of \(1.8\,\mathrm{mm}\). Each row and column of the square section contains 22 holes, for a total of 484 holes. A small triangular shape at the top of the fly-swatter, with 26 holes on each side, connects the square section to the fly-swatter holder. In this article, we will focus on the influence of the porosity pattern of the square section on the aerodynamic coefficients of the system, and the holes of the upper triangular part will be left open. The porosity pattern is modified by adding adhesive vinyl tape to block specific holes (see Sec. III.2 for details). The vinyl tape was placed so that only rows or columns of 2-hole width were sealed by one piece of tape, and no tape peeling was observed over the various experiments. The maximum porosity, given as the ratio between the surface of the 484 holes and the surface of the whole square is \(\simeq 30\%\). Of the \(2^{484}\) possible configurations for the partial covering
Figure 1: a) Fly-killer in 1900 from [21]. b) Fly-swatter in 1927 from [22]. c) Fly-swatter in 1938 from [23]. d) Fly-swatter in 1939 from [24]. e) Fly-swatter used in this article.
of the fly-swatter, we decided to select only 19 with left-right symmetry and to focus on the onset of stall when the porosity pattern is modified. For 11 of them, due to a slight curvature of the fly-swatter, two sets of measurements were carried out, one with the curvature facing upstream and the other downstream, as will be discussed in Sec. III.4.
The aerodynamic coefficients of the plates with various porosity patterns are assessed from measurements in a wind tunnel, as sketched in Fig. 2, following the protocol detailed elsewhere [25; 26; 27] and recalled here. The fly-swatter, placed facing the flow, is attached at the extremity of a rod free to rotate around point \(O\). Frictionless rotation of this pendulum is obtained using an air bushing (OAVTB16i04 from OAV Labs), and the angular position \(\theta\) with respect to the vertical is recorded by a contact-less rotary encoder with minimal friction (DS-25, 17-bit digital encoder from Netzer). The distance between the pivot point and the center of the fly-swatter is \(L=17.3\,\mathrm{cm}\). The distance \(l\) between point \(O\) and the center of mass G of the pendulum is computed for each configuration, knowing the mass of the rod and of the fly-swatter and measuring the mass of the added vinyl tape strips. The flow impinging on the porous square exerts an aerodynamic torque \(\Gamma_{\mathrm{aero}}\) at point \(O\), due to both lift and drag, and the equation of motion reads \(J\ddot{\theta}=-mgl\sin\theta+\Gamma_{\mathrm{aero}}\), with \(J\) the moment of inertia of the pendulum. The aerodynamic forces acting on the fly-swatter (see Fig. 2c) are the drag force \(\mathbf{D}=1/2\,\rho U^{2}a^{2}C_{D}(\theta)\mathbf{e_{x}}\) and the lift force \(\mathbf{L}=1/2\,\rho U^{2}a^{2}C_{L}(\theta)\mathbf{e_{z}}\), with \(\rho\) the air density and \(a\) the side of the square plate, and where the drag coefficient \(C_{D}\) and the lift coefficient \(C_{L}\) depend on \(\theta\) [28]. The total aerodynamic torque is expressed as \(1/2\,\rho LU^{2}a^{2}C_{N}(\theta)\), where the normal coefficient \(C_{N}\) is defined as \(C_{D}\cos\theta+C_{L}\sin\theta\) [25]. In this article, we will focus on steady-state regimes, for which the torque induced by the weight of the pendulum balances the aerodynamic torque:
\[mgl\sin(\theta)=\frac{1}{2}\rho U^{2}a^{2}LC_{N}(\theta) \tag{1}\]
We made the choice of ignoring the covering fraction in the area \(a^{2}\) of the fly-swatter used to define the aerodynamic coefficients, as we expect it not to be a simple proportionality factor, due to the aerodynamic coupling of the holes in the array. As mentioned above, the mass \(m\) has been
Figure 2: a) Schematic view of the wind tunnel with the fly-swatter. b) Details of the pendular attachment of the fly-swatter. c) Definition of the aerodynamic forces. d) PIV setup overview. The camera focus is redressed by a Scheimpflug apparatus.
measured for each configuration, since the vinyl tape adds up to 2 g to the fly-swatter when fully covered, and \(l\) was computed accordingly. For each configuration, the mean angle \(\theta\) is measured over at least 15 s for each flow velocity, ensuring statistical convergence of the mean value, and the normal aerodynamic coefficient \(C_{N}(\theta)\) is computed from Eq. 1.
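In practice, Eq. 1 is simply inverted to obtain \(C_{N}\) from the measured mean angle at each wind speed. A short sketch is given below; \(a\) and \(L\) are the geometric values quoted above, the air density is a standard value, and the mass \(m\) and distance \(l\) in the example are placeholders, since they differ for each porosity configuration.

```python
import numpy as np

def normal_coefficient(theta_deg, U, m, l, a=0.10, L=0.173, rho=1.2, g=9.81):
    """Invert Eq. (1): C_N(theta) = 2 m g l sin(theta) / (rho U^2 a^2 L)."""
    theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
    U = np.asarray(U, dtype=float)
    return 2.0 * m * g * l * np.sin(theta) / (rho * U**2 * a**2 * L)

# Example with made-up measurements (angles in degrees, wind speeds in m/s);
# m and l below are illustrative, not the values of a specific configuration.
theta_meas = [20.0, 35.0, 55.0]
U_meas = [4.0, 6.0, 9.0]
print(normal_coefficient(theta_meas, U_meas, m=0.05, l=0.12))
```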
In addition to the recording of the angular position, Particle Image Velocimetry (PIV) has been implemented in the wind tunnel to enable flow visualization in the wake of the fixed fly-swatter. This flow visualization is done in the transverse \((y,z)\) plane. This choice was in particular motivated by the tri-dimensionality of the wake for pendulums of aspect ratio close to \(1\) [29]. The flow structure is obtained from smoke particle imaging. Particles are illuminated by a laser sheet produced by a 5 W blue diode laser through a Powell lens with a \(30^{\circ}\) fan angle, and imaged using a high-speed camera (Phantom v26.40) at a resolution of 2048 by 1952 pixels with a 100 mm lens through a Scheimpflug adaptor. As we chose to visualize the transverse \((v,w)\) flow in the \((y,z)\) plane, the particles only remain in the sheet for a short time, which imposes constraints on the flow velocity (set to 1.7 m s\({}^{-1}\)), the thickness of the laser sheet (4 mm was observed to be an optimal choice) and the framerate (set to 2000 fps). The PIV algorithm uses the open-source software UVMAT [30].
## III Results
### Solid versus homogeneous porous square
Let us first focus on the main differences between a solid square (i.e. all holes have been covered by vinyl tape) and a homogeneous porous square (i.e. the original fly-swatter geometry), before investigating more complex patterns in the following subsections.
Figure 3.a) shows the evolution of the angular position \(\theta\) as a function of the wind velocity \(U\) for the solid square (orange) and the porous square (black). The evolution of the normal aerodynamic coefficient \(C_{N}(\theta)\) as a function of \(\theta\), computed from Eq. 1, is shown in Fig. 3.b). We note that \(\theta(U)\) exhibits abrupt transitions and bistability for the solid plate, which can be readily understood from the evolution of the steady-state aerodynamic coefficients (see detailed discussions in Refs. [25; 26]). For velocities below the bistable region (and angles below \(50^{\circ}\)), the dominant aerodynamic force is the drag force, which corresponds to a nearly constant \(C_{N}\) value. For velocities above the bistable region (and angles above \(53^{\circ}\)), the dominant aerodynamic force is the lift force. The stall angle corresponds to a dramatic decrease of \(C_{N}\) with decreasing angle, namely at around \(52^{\circ}\) for the solid square, and translates into an abrupt transition in the evolution of \(\theta\) as a function of \(U\). We stress that this stall angle is in agreement with the values observed for flat squares by Eiffel (\(\theta_{stall}\simeq 51^{\circ}\)) [31] and Flachsbart (\(\theta_{stall}\simeq 50^{\circ}\)) [28]. Remarkably, the transition is smoother for the homogeneous porous square, and there is no abrupt stall angle. We note two noteworthy features at low angles, for both configurations. The first one is a strong increase of \(C_{N}\) as \(\theta\) is decreased below 10 degrees, which is discussed further in Sec. III.4. The second one is the bump observed on \(C_{N}\) between \(15^{\circ}\) and \(18^{\circ}\), which is attributed to the presence of stall on the holding rod. In the remainder of this article (apart from Sec. III.4), we will thus ignore the evolution of the \(C_{N}\) coefficient below \(18^{\circ}\) and focus on the conditions for which an abrupt stall is observed when varying the porosity pattern (data below \(18^{\circ}\) will be systematically shown as lighter symbols in the remainder of this article).
The PIV measurements of the transverse components of the velocity field in the wake, 10 cm downstream of the center of the fly-swatter, are shown in Fig. 4. For these measurements, the angle \(\theta\) was set to four different constant values (namely \(20^{\circ}\), \(40^{\circ}\), \(60^{\circ}\) and \(80^{\circ}\)), with no free rotation allowed around point \(O\), i.e. this is not a pendulum configuration. In each panel, the flow of the porous plate (the original fly-swatter) is displayed on the left half while the flow of the solid plate (fully covered fly-swatter) is shown on the right half. The two upper rows show the time-averaged structure of the wake, and the two lower rows the amplitude of the fluctuations. The PIV measurements are shown here with the purpose of illustrating the differences in flow features between the wakes created by the solid and porous plates, but no deep quantitative analysis is presented. We note strong differences between both configurations. In the case of the solid plate, there is a clear difference below and above the stall angle (\(\simeq 50^{\circ}\)). Strong trailing vortices (also known as wingtip vortices) with a large downwash at the center are observed in the mean flow above the stall
angle (_i.e._ \(60^{\circ}\) and \(80^{\circ}\)). In contrast, the structure of the mean flow is similar for all inclination angles for the porous plate, a feature shared with the fluctuating part. The maximal level of fluctuations of the solid plate is about three times that of the porous one. The influence of the stall angle for the solid plate is also evident, with very localized fluctuations observed around the vortices above the stall angle.
### Reducing the fraction of porous surface by concentric holes covering
The porosity pattern was first modified by concentric holes covering in two different ways: either from the center towards the edges or vice-versa. Six different configurations, for each concentric direction, are investigated, with five of them being partially porous and obtained by the addition of vinyl tape strips covering a two-hole wide band, as indicated in the side sketches of Fig. 5. When the fraction of porous surface is reduced from the center (left half in Fig. 5), no significant changes are observed on the evolution of \(\theta\) with the velocity \(U\), until covering the last two lines of holes on the edges. In particular, bistability is only observed for the fully taped configuration (see Fig. 5.a). This observation is in contrast with the behavior observed when the porous fraction is reduced when covering the holes from the edges towards the center (right half in Fig. 5). Bistable regimes are observed for several configurations, as soon as the central porous square is smaller than a \(6\times 6\)-hole square (see Fig. 5.b). Almost no difference is observed between the solid case and the case when a \(2\times 2\)-hole square is left open at the center. For bistable configurations, the evolution of the \(C_{N}\) coefficient with \(\theta\) displays a sharp increase at the stall angle (see Fig. 5.c and d).
These results highlight the importance of the porosity of the edges for the existence of a sharp stall: the presence of holes on the edges of the plate prevents the occurrence of a sharp stall. On the contrary, stall can be readily observed in the presence of a significant porous fraction at the center of the plate.
### Triggering stall from the edge porosity pattern
Section III.2 highlighted that bistability in the evolution of \(\theta(U)\) and abrupt stall only appear once the last two lines of holes at the periphery of the plate are covered, in addition to all the inner holes. In order to better understand the influence of edge porosity on the emergence of abrupt stall, seven configurations with a partially covered periphery were tested. Note that some of these configurations break the top-down symmetry, and that, following the convention of Fig. 3, the holding rod of the various configurations is placed at the top of the sketches. Fig. 6 focuses on configurations for which bistability is observed as the fraction of porous surface of the plate is decreased. Configurations that, despite a similar porous fraction, present no bistability are shown
Figure 3: Aerodynamic response of the fly-swatter (black) compared to a square plate (beige): a) evolution of \(\theta\) as a function of \(U\), b) \(C_{N}\) coefficient computed from a) and Eq. 1. The curves are spline interpolations and only serve as guides for the eye.
in Fig. 7.
The two leftmost configurations shown in Fig. 6 do not present any bistability, whereas bistability develops for the four rightmost configurations. When bistability is observed, the range of bistable positions increases with the covering fraction, _i.e._ from left to right. The bistability thus first arises when, in addition to the center of the fly-swatter, the top two rows are fully covered with tape. Covering the upper 4-hole corners on each side seems critical for the onset of bistability, since, when they remain uncovered (as in the second leftmost configuration), no bistability was observed.
The signature of bistable regimes is also readily observed in the evolution of \(C_{N}(\theta)\) (see Fig. 6.b). A local increase of \(C_{N}(\theta)\) is observed for angles between 50 and 70\({}^{\circ}\), seemingly correlated with the stall angle, which separates the lift and the drag branches. More precisely, we note that covering the two upper corners only induces a bump in the \(C_{N}\) coefficient between 57\({}^{\circ}\) and 65\({}^{\circ}\) (third leftmost configuration). This modest increase leads to an inflection point in the \(C_{N}\) coefficient, which is nonetheless sufficient to induce a sharp stall. As the porous fraction decreases further, the stall angle decreases from 62\({}^{\circ}\) to 52\({}^{\circ}\). The lift-dominated regime thus
Figure 4: Wake structure behind the hollow fly-swatter (left side) and fully covered fly-swatter (right side) for 20\({}^{\circ}\), 40\({}^{\circ}\), 60\({}^{\circ}\) and 80\({}^{\circ}\). Upper part: mean velocity – transverse mean \(<v>\) (top) and vertical mean \(<w>\) (bottom). Lower part: velocity fluctuations – transverse fluctuations \(v_{rms}\) (top) and vertical fluctuations \(w_{rms}\) (bottom). For the hollow fly-swatter, the wake structure consists primarily of a trailing-edge vortex and does not change with the angle, apart from a vertical shrinking due to the diminishing vertical projection of the fly-swatter. For the fully covered fly-swatter, the intensity of fluctuations is much higher than for the hollow fly-swatter and the structures are more difficult to identify. Above 60\({}^{\circ}\), a trailing vortex is visible in the upper region of the wake.
occurs over a larger range of angles \(\theta\) (we recall that \(\theta=\pi/2-\alpha\), with \(\alpha\) the angle of attack). Consequently, since bistability is observed around the stall angle, it occurs for angles that decrease as the porous fraction decreases. The range of velocities for which bistability occurs also increases as the porous fraction decreases. Surprisingly though, the span of forbidden angles (which are explored neither in the lift branch nor in the drag branch) remains almost constant, around \(6^{\circ}\). Understanding whether this observation might be linked to the structure of the wake would require an extensive dataset of 3D flow measurements, out of reach of the present study.
No bistability, and thus no stall, was observed for any configuration shown in Fig. 7.a (apart from the solid plate). A specificity of all non-bistable configurations tested thus far is the presence of holes in the upper rows. For all these configurations, the evolution of the \(C_{N}\) coefficient with \(\theta\) (see Fig. 7.b) does not display a strong bump in the range of angles \(\theta\in[45^{\circ},70^{\circ}]\). We stress here that a weak bump is observed for the fifth leftmost configuration, around \(\theta=58^{\circ}\), but it does not trigger bistable regimes. The values of the \(C_{N}\) coefficient observed for the solid plate on the drag branch (\(\theta<45^{\circ}\)) are recovered as soon as the lateral edges of the fly-swatter are covered.
This detailed study of the influence of the porous pattern at the edges of the plate suggests that a necessary condition for the existence of the bistability is the full covering of the upper rows. The configuration that displays bistability with the highest porous fraction is the third leftmost configuration shown in Fig. 6, with only the side and bottom edges left porous. On the other hand, the configuration with the least porosity that presents no sharp stall is the second rightmost in Fig. 7, with only the leading edge being partially hollowed.
Given the \(2^{484}-19\) remaining configurations, _i.e._ more than \(4.9\times 10^{145}\) configurations, we cannot rule out that other configurations exist with a higher porous fraction that still display bistable regimes, or with a lower porous fraction that still show no sharp stall.
Figure 5: Influence of the concentric sealing of holes on the angular equilibrium positions and the \(C_{N}\) coefficient. Top (a,b): evolution of \(\theta\) as a function of \(U\) for different hole-covering configurations. Bottom (c,d): associated \(C_{N}\) coefficient computed using Eq. 1. Left (a,c): concentric covering starting from the center towards the edges. Right (b,d): concentric covering from the edges towards the center. Color code for the configurations: the lighter the color, the more holes sealed. The curves are spline interpolations and thus serve only as a guide for the eye.
### Curvature effects on the \(C_{N}\) coefficient
Let us now briefly discuss a side effect of using a commercial fly-swatter as the initial porous plate: the influence of the weak natural curvature of the plate on its aerodynamic response. This effect became particularly apparent when testing whether the side covered by the tape influences the aerodynamics. The fly-swatter is indeed slightly curved, and curvature can have a strong effect on aerodynamic properties, as observed already nearly a century ago by Flachsbart [28].
Four series of additional experiments were carried out for the concentric covering configurations, in which the face facing the incoming air flow could be either the concave or the convex face, and could be either the face covered by the adhesive tape or the bare one. No influence of the side on which the tape is glued was observed, while the curvature orientation appears to greatly alter the aerodynamics, as shown in Fig. 8.
Let us first discuss the influence of curvature on the evolution of \(\theta(U)\) for the six configurations
Figure 6: Triggering bistability and sharp stall by partially covering the outer rows of holes on the fly-swatter. The color code for the configurations is described at the bottom. a) Evolution of \(\theta\) as a function of \(U\). b) \(C_{N}\) coefficient computed from a) and Eq. 1. The curves are spline interpolations and thus serve only as a guide for the eye.
Figure 7: Partially covering the outer rows of holes on the fly-swatter without achieving bistability. The color code for the configurations is described at the bottom. a) Evolution of \(\theta\) as a function of \(U\). b) \(C_{N}\) coefficient computed from a) and Eq. 1. The curves are spline interpolations and thus serve only as a guide for the eye.
shown in Fig. 8.a) and b). Several differences are noted: 1) the bistable zone is much narrower when the convex side faces upstream (b); 2) for the same flow velocity, for instance \(U=6\,\mathrm{m}\,\mathrm{s}^{-1}\), the angular position is lower when the convex side faces upstream.
These observations lead to large differences in the values of the \(C_{N}\) coefficient due to curvature (see Fig. 8.c and d). On average, for all configurations and all angles, the \(C_{N}\) coefficient is much higher when the concave side faces upstream. A striking observation concerns the sharpness of the stall. For the fully covered fly-swatter, the stall is indeed much weaker when the convex side faces upstream, with the amplitude of the discontinuity reduced by a factor of two. The difference is also striking when the pendulum is close to the vertical position (_i.e._ for \(\theta\) values below \(18^{\circ}\)), for which \(C_{N}\) appears to diverge when the concave side faces upstream (Fig. 8.c), while it decreases to 0 when the convex side faces upstream (Fig. 8.d). This feature is observed for all covering configurations, which supports the conjecture that this is an effect of curvature.
All results presented in Secs. III.1, III.2 and III.4 were obtained with the concave side facing the flow (_i.e._ the configuration of Fig. 8.a and c). Based on the observations for the concentric covering configurations, we expect to observe results similar to those reported in Sec. III.3 when the convex side faces the flow.
## IV Conclusion
By sealing holes on a fly-swatter, we were able to explore the influence of porosity patterning on the aerodynamic coefficients and bistability of pendular porous plates. In spite of the simplicity of the considered system, several converging observations allow us to draw some general conclusions
Figure 8: Influence of curvature on the aerodynamic response of the partially covered fly-swatter. Top (a,b): evolution of \(\theta\) as a function of \(U\) for two curvature configurations with concentric covering starting from the center towards the edges. Bottom (c,d): associated \(C_{N}\) coefficient computed using Eq. 1. Left (a,c): concave side facing upstream. Right (b,d): convex side facing upstream. Color code for the configurations, same as in Fig. 5.a and 5.c. The curves are spline interpolations and thus serve only as a guide for the eye.
regarding the role of certain porosity zones. The existence of a sharp stall leads to bistable regimes as a function of the flow velocity in the pendular configuration, with bistability occurring around the stall angle. For a solid square plate, a sharp stall exists and the pendulum displays bistability. No sharp stall has been observed for the other limit case, for which the square is homogeneously porous, leading to a continuous evolution of the pendulum angle with the flow velocity. In all tested configurations that present a sharp stall, the upper rows are indeed covered and a major part of the holes around the center are also sealed off. Seen in a different light, the bistability of a square plate disappears as soon as holes are opened in the upper rows (i.e. in the immediate vicinity of the leading edge), without impairing the lift production for angles \(\theta>70^{\circ}\). Leading-edge porosity therefore appears as a possibly relevant strategy to dampen stall. Surface porosity close to the leading edge of airfoils has also been observed to reduce the pressure load due to wing-vortex interactions [32]. PIV measurements of the evolution of the wake structure with the angle of attack for the two limit cases (solid and homogeneously porous square) clearly demonstrate that the observations made on the global aerodynamic coefficients are linked to the wake characteristics. A detailed study of the influence of the porosity pattern in the immediate vicinity of the leading edge on the wake structure and dynamics, and of its relation with the existence of a sharp stall, was beyond the scope of the present work, but would represent a useful extension. Another aspect not investigated here is the noise reduction induced by surface porosity. Indeed, porosity at the leading and trailing edges is often associated with noise reduction [33]. The small vortices induced by the pores destabilize the large-scale leading- and trailing-edge vortices, which are responsible to a large extent for aircraft noise. This effect of porosity was, in fact, first observed in Nature [34], and biomimetic considerations brought it to aerospace engineering [35]. Owls are particularly known for their silent flight, and recent studies have shown how the particular structure of their flight feathers enables this feat [36]. The owl feather presents serrations at its leading edge, and sometimes also throughout the inner vane. Serrations are an ultra-thin comb of barbules and increase the porosity of the feather. The comb breaks the two-dimensionality of the leading-edge vortex, which is no longer sustained [37]. Engineering aerodynamic noise generation by fine tuning the surface porosity of objects moving in a flow would also be a possible continuation of this work.
## Acknowledgements
This work was partly supported by Initiative d'Excellence de Lyon (IDEXLYON) of the University of Lyon in the framework of the Programme Investissements d'Avenir (ANR-16- IDEX-0005) Universite de Lyon. The authors would like to thank Samuel Bera for his involvement in the implementation of the PIV measurements.
|
2309.03840 | Generating Minimal Training Sets for Machine Learned Potentials | This letter presents a novel approach for identifying uncorrelated atomic
configurations from extensive data sets with a non-standard neural network
workflow known as random network distillation (RND) for training
machine-learned inter-atomic potentials (MLPs). This method is coupled with a
DFT workflow wherein initial data is generated with cheaper classical methods
before only the minimal subset is passed to a more computationally expensive ab
initio calculation. This benefits training not only by reducing the number of
expensive DFT calculations required but also by providing a pathway to the use
of more accurate quantum mechanical calculations for training. The method's
efficacy is demonstrated by constructing machine-learned inter-atomic
potentials for the molten salts KCl and NaCl. Our RND method allows accurate
models to be fit on minimal data sets, as small as 32 configurations, reducing
the required structures by at least one order of magnitude compared to
alternative methods. | Jan Finkbeiner, Samuel Tovey, Christian Holm | 2023-09-07T16:54:43Z | http://arxiv.org/abs/2309.03840v1 | # Generating Minimal Training Sets for Machine Learned Potentials
###### Abstract
This letter presents a novel approach for identifying uncorrelated atomic configurations from extensive data sets with a non-standard neural network workflow known as random network distillation (RND) for training machine-learned inter-atomic potentials (MLPs). This method is coupled with a DFT workflow wherein initial data is generated with cheaper classical methods before only the minimal subset is passed to a more computationally expensive ab initio calculation. This benefits training not only by reducing the number of expensive DFT calculations required but also by providing a pathway to the use of more accurate quantum mechanical calculations for training. The method's efficacy is demonstrated by constructing machine-learned inter-atomic potentials for the molten salts KCl and NaCl. Our RND method allows accurate models to be fit on minimal data sets, as small as 32 configurations, reducing the required structures by at least one order of magnitude compared to alternative methods.
Data-driven approaches for reconstructing potential energy surfaces have provided scientists with a unique environment for combining two thriving research areas: machine learning and molecular dynamics. These machine learning approaches aim to use data from expensive ab initio calculations such as density functional theory (DFT) to fit a model, which may then be used to perform molecular dynamics (MD) simulations at roughly the speed and on scales of a classical approach while retaining the accuracy of the ab initio computations. The last decade has seen significant advances in the use of machine learning algorithms for the development of these potentials (MLPs), be it Gaussian process regression [1], neural networks [2; 3; 4], or other kernel methods [5; 6]. A fundamental component to fitting these potentials that has recently become an active area of research is how to select data from these ab initio computations so that one minimises the size of training data sets while maximally representing the underlying potential energy surface (PES). Typically, this data selection is made uniformly in time, energy, or local energies if a classical potential is used at the initial data selection stages [7; 8; 9; 10]. In more recent studies, active learning approaches have been implemented to iteratively correct a potential as it ventures into poorly defined areas of configurations space [11]. In some cases, configurations are deliberately constructed, such as in the case of RAG sampling [12] or kernel functions applied to identify unique structures in descriptor space [13]. With our focus continually on the strictly physical properties of configurations, it can sometimes be instructive to look into methods adopted by the broader machine learning community, which ventures far beyond the realm of molecular dynamics simulations. One such approach developed in reinforcement learning is Random Network Distillation or RND [14]. This approach has been used previously to identify unseen regions of target space for a reinforcement learner and ignore those regions the machine learning algorithm is believed to have explored [14]. However, the design of the problem closely mirrors that of selecting data for the development of machine-learned inter-atomic potentials and, therefore, is of interest to the community. RND is a method that utilises the intrinsic bias of a neural network architecture to identify regions of the underlying data manifold that will result in a better model after training [15]. When used for data selection, the goal of RND is to take a large set of data and reduce it to a much smaller but still representative subset on which a model can be trained. The method is built upon two neural networks, the target network: \(f:\mathcal{R}^{M}\rightarrow\mathcal{R}^{N}\) which acts as an embedding operation for the data, and the predictor network: \(g:\mathcal{R}^{M}\rightarrow\mathcal{R}^{N}\) which is trained to predict the output of the target network iteratively. Before the data selection occurs, the RND mechanism must be seeded. To do so, all points in the large data set are passed through each neural network, and a distance metric is used to compute the distance between the representations generated by \(f\) and \(g\) for each point. The point with the greatest distance, \(p_{i}\), is selected and added to the training set, \(\mathcal{T}\). The predictor network, \(g\), is then trained on the representation generated by \(f(p_{i})\). 
This process is continued until a data set of a desired size has been selected.
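The selection loop just described is compact enough to sketch directly; the following PyTorch illustration is only a minimal sketch of the idea, and the network sizes, the Euclidean distance metric, and the number of training epochs per selected point are illustrative assumptions rather than the settings used in this work.

```python
import torch
import torch.nn as nn

def rnd_select(descriptors: torch.Tensor, n_select: int, epochs: int = 50):
    """Select n_select points from a pool via random network distillation.

    descriptors: (n_pool, d) tensor of per-configuration representations.
    Returns the indices of the selected configurations.
    """
    d, out_dim = descriptors.shape[1], 32
    target = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, out_dim))
    predictor = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, out_dim))
    for p in target.parameters():              # the target embedding stays fixed
        p.requires_grad_(False)

    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    selected = []
    for _ in range(n_select):
        with torch.no_grad():
            dist = (target(descriptors) - predictor(descriptors)).norm(dim=1)
            if selected:                       # never pick the same point twice
                dist[selected] = -1.0
        idx = int(dist.argmax())
        selected.append(idx)
        # train the predictor to reproduce the target embedding of the new point
        x = descriptors[idx:idx + 1]
        y = target(x)
        for _ in range(epochs):
            opt.zero_grad()
            loss = ((predictor(x) - y) ** 2).mean()
            loss.backward()
            opt.step()
    return selected
```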
Our work applies RND to selecting a representative subset of atomistic configurations on which a machine-learned potential will be trained. In building the initial data pool from which the subset is selected, a large region of configuration space must be covered so that the chosen training set is informative. One approach is to use classical MD simulations to quickly span the configuration space at a lower accuracy. In this work, MD simulations are performed in systems made up of 100 atoms in a Nose-Hoover chain [16; 17] enforced NPT ensemble using the LAMMPS simulation software [18]. Interactions between the constituent atoms are defined using the Born-Meyer-Huggins-Tosi-Fumi potential [19; 20; 21; 22; 23], parameterized based on literature values [24] and accompanied by PPPM electrostatic corrections [25]. The simulations are run under a temperature ramp from 1100 K to 1700 K to cover the liquid phase of the salts. From this data pool, RND selects representative subsets of varying sizes. For the application of RND, atomic configurations are mapped into a descriptor space using untrained SchNet graph-based representations [3; 26]. These representations are then passed through the target and predictor networks to perform the data-set selection. Once a subset is selected, single-point density functional theory (DFT) calculations are performed on the smaller data sets. These DFT simulations are performed with the CP2K simulation software [27], using the PBE-GGA [28] functional, double-zeta MOLOPT basis sets optimized for dense liquids [29], GTH pseudo-potentials [30], and RVV10 non-local integral corrections [31]. The workflow from classical MD to DFT single-point calculations is outlined in Figure 1. While this classical-to-ab-initio transfer method appears to work in the case of simple liquids, it relies on the similarity of the configuration spaces across these scales. Therefore, it is not a priori valid for more complex systems, and further investigation should be performed in this direction. A benefit of RND as a data-selection method is that it scales only with the number of data points desired in the final data set, as the use of two neural networks introduces a form of memory of what has been seen before, thus avoiding the expensive nature of other descriptor-based selection methods. Furthermore, it separates itself from other descriptor-based methods in that it requires no pre-training of the SchNet representation. Therefore, it is agnostic to the descriptor and imposes little to no bias on the problem.
After selecting the subsets, machine learning models are trained on the ab initio data. This work uses the machine learning framework SchNet [3; 26]. SchNet is a graph neural network (GNN) based architecture that builds representations from atomic coordinates while respecting the symmetries inherent to the system. Models are trained on subsets of varying sizes and compared with more commonly used training data selection methods. Figure 2 outlines the results of the investigation. The figure displays both the RMSE and L4 error calculations for the force predictions of the machine learning models on previously unseen validation data as a function of data-set size for the KCl model (see SI for NaCl plots). In each plot, the colour and shape of the lines correspond to a data-selection method, black circles surrounding a point symbolise that a successful MD simulation was performed using this model, and a black square shows that the simulation failed before 100 ps. Simulation failure is decided by either drift in energy and temperature, artifacts in the radial distribution function computations, or large forces experienced during the run. In the RMSE plots, while it is clear that RND generates models with lower loss values, the differences are not large compared with the other techniques. What is clear is that far more of the RND-trained models can perform MD simulations, as seen in the number of circles along the line. This trend is elucidated in the L4 error plot, where we can see that the RND-trained data sets converge much faster than all other methods to a minimum value. L4 error values
Figure 1: Use of random network distillation to fill a training set \(\mathcal{T}\). In the initial stage, classical MD simulations are used to sample configuration space and build the data pool \(\mathcal{P}\) before the RND architecture is used to select unique configurations and add these to the training data. This training data is then passed through a DFT calculation to label the configurations with energy and forces before training a machine-learned potential.
have the effect of penalising outliers to a greater extent than their RMSE counterparts. The reduction in L4 error suggests that RND can identify maximally separated points, thus reducing the number of outliers in the validation data. This trend persists even when compared with other data selection techniques, which explicitly consider local atomic effects, e.g., force selection and atomic energy selection. Interestingly, the reduction of the L4 error coincides with the successful running of a simulation. This relationship suggests that error metrics that strongly penalise outliers are a good indicator of whether a potential will succeed.
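For concreteness, the two force-error metrics can be evaluated as sketched below; we assume here that the L4 error is defined analogously to the RMSE but with fourth powers and a fourth root, which is what gives outliers their heavier weight, although the exact normalisation used in the letter may differ.

```python
import numpy as np

def force_errors(f_pred: np.ndarray, f_true: np.ndarray):
    """Component-wise force errors for arrays of shape (n_atoms, 3)."""
    diff = f_pred - f_true
    rmse = np.sqrt(np.mean(diff ** 2))
    l4 = np.mean(diff ** 4) ** 0.25   # assumed definition; penalises outliers harder
    return rmse, l4
```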
With successful model fits, the trained potentials can be utilised in scaled-up MD simulations to measure relevant properties. One such thermophysical observable of interest to the community is the density of a liquid at different temperatures. Density is typically challenging for machine-learned potentials to reproduce as it requires a good representation of configuration space in the training data, typically achieved through active learning and accurate ab initio data [32]. NPT simulations are performed using a custom-written SchNet plugin for LAMMPS [18] on scaled-up system sizes of 400 atoms. Densities are computed from 1 ns simulations at several temperatures and plotted against DFT and experimental density values in Figure 3. The DFT values are taken from 10 ps DFT-MD simulations in an NPT ensemble with 400 atoms using the same DFT parameters as in the single-point calculations. We can see that the MLPs accurately reproduce the underlying DFT data with temperature, suggesting that the RND-selected data set of only 32 configurations adequately mapped the configuration space of the salts.
Another important observable in MD simulations is the radial distribution function (RDF), which can be directly related to the DFT data on which the ML model was trained. To generate data for the RDF calculations, NVT simulations are performed at densities fixed to those of the compared experimental values. The MD-Suite post-processing software [34] is then used to compute the RDFs. The MD simulations are run for 1 ns using a Nose-Hoover chain [16; 17] with a coupling constant of 100 fs. To create reference data, 10 ps DFT-MD runs in an NVT ensemble are also
Figure 3: Density of each salt at different temperatures computed with the machine learned potentials trained on 32 configurations (orange crosses), using pure DFT-MD (blue stars) and experimental data (black dots) taken from Ref [33].
Figure 2: The RMSE and L4 loss as a function of the number of training configurations for different data selection algorithms, showing the convergence of the model loss with respect to the number of training configurations used. Circles correspond to those models that could be used to run a stable MD simulation, whereas a square indicates that the potential failed when deployed in a simulation. This labelling indicates how well the training data represents the configuration space.
performed, using the DFT parameters described for the single-point calculations. Figure 4 compares the anion-cation RDF curves for the machine-learned potentials against the reference DFT data. RDFs are shown for two different models trained on different amounts of data. In all cases, the ML potentials accurately reproduced the underlying DFT data.
Finally, the dynamic properties of the salts are assessed in the form of self-diffusion coefficients and ionic conductivity. The trajectories from 1 ns MD studies are used along with the MDSuite software [34] in the computation of the properties. Tables 1 and 2 compare the results computed from the MD simulations with those of the experiment.
We see that for both salts, the self-diffusion coefficients match well with experimental values, suggesting an accurate MLP trained on good ab initio data. Ionic conductivity measurements are also in good agreement with experimental values.
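The self-diffusion coefficients themselves are computed with MDSuite; purely for orientation, a single-species, single-time-origin Einstein-relation estimate from an unwrapped trajectory can be sketched as follows (the array shapes and the linear-fit window are illustrative, not the analysis settings of this work).

```python
import numpy as np

def self_diffusion(positions: np.ndarray, dt: float, fit_start: int = 100):
    """Einstein-relation estimate D = lim_{t->inf} MSD(t) / (6 t).

    positions: unwrapped coordinates of one species, shape (n_frames, n_atoms, 3).
    dt: time between stored frames.
    """
    disp = positions - positions[0]              # displacement from the first frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)   # mean-squared displacement per frame
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t[fit_start:], msd[fit_start:], 1)[0]
    return slope / 6.0
```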
We have demonstrated that random network distillation can be used to identify relevant atomic configurations to train data-driven inter-atomic potentials. We did so by fitting machine-learned potentials on systems of NaCl and KCl using the SchNet framework. Furthermore, our data selection method outperformed several other approaches, including global energy selection, local energy selection, and force-based selection in model convergence. We have performed molecular dynamics simulations on scaled systems of up to 500 ion pairs and for more than 1 ns to validate the ML potentials on more significant length and time scales. The structural and dynamic properties computed from these simulations were shown to reproduce pure ab initio investigations and experimental data adequately. Finally, we showed that RND is capable, without additional active learning, of performing stable NPT simulations and converging to the system density expected from DFT. These results support several conclusions. Random network distillation is an efficient method for identifying unique configurations for training MLPs. Single-point DFT calculations on classically generated configurations are sufficient for producing accurate training data for machine learning models. At least for chemically simple systems, the number of configurations required for an NPT-capable model yielding accurate structures, dynamics, and densities is significantly smaller than previously reported in the literature, resulting in improved training time and reduced computational demand. This minimal training set also provides an avenue for extending the potentials to higher level ab-initio calculations such as coupled cluster [37] or configuration interaction [38] and thereby producing MLPs beyond the accuracy of DFT. Future work should investigate the application of RND to more complex systems and better understand its limitations.
###### Acknowledgements.
The authors acknowledge financial support from the German Funding Agency (Deutsche Forschungsgemeinschaft DFG) under Germany's Excellence Strategy EXC 2075-390740016. This work was supported by SPP 2363-"Utilization and Development of Machine Learning for Molecular Applications - Molecular Machine Learning." Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project-No 497249646
| | \(\sigma_{\mathrm{Sim}}\) | \(\sigma_{\mathrm{Exp}}\) |
| --- | --- | --- |
| NaCl | \(3.885\pm 0.118\) | \(3.954\pm 0.032\) |
| KCl | \(2.779\pm 0.057\) | \(2.517\pm 0.044\) |

Table 2: Ionic conductivity data from the ML potential simulations compared with experimental values taken from Ref. [36].
Figure 4: Comparison of radial distribution functions generated from MD simulations performed in an NVT ensemble using the machine learned potentials and the underlying density functional theory data.
| | Species | D\({}_{\mathrm{sim}}\) | D\({}_{\mathrm{exp}}\) |
| --- | --- | --- | --- |
| NaCl | Na | \(1.118\pm 0.006\) | \(1.052\pm 0.210\) |
| | Cl | \(0.903\pm 0.005\) | \(0.842\pm 0.168\) |
| KCl | K | \(1.052\pm 0.005\) | \(1.005\pm 0.201\) |
| | Cl | \(1.069\pm 0.006\) | \(0.905\pm 0.181\) |

Table 1: Self-diffusion coefficients computed from the ML potential simulations compared with experimental fits from Ref. [35] |
2309.05202 | Graph-Aware Contrasting for Multivariate Time-Series Classification | Contrastive learning, as a self-supervised learning paradigm, becomes popular
for Multivariate Time-Series (MTS) classification. It ensures the consistency
across different views of unlabeled samples and then learns effective
representations for these samples. Existing contrastive learning methods mainly
focus on achieving temporal consistency with temporal augmentation and
contrasting techniques, aiming to preserve temporal patterns against
perturbations for MTS data. However, they overlook spatial consistency that
requires the stability of individual sensors and their correlations. As MTS
data typically originate from multiple sensors, ensuring spatial consistency
becomes essential for the overall performance of contrastive learning on MTS
data. Thus, we propose Graph-Aware Contrasting for spatial consistency across
MTS data. Specifically, we propose graph augmentations including node and edge
augmentations to preserve the stability of sensors and their correlations,
followed by graph contrasting with both node- and graph-level contrasting to
extract robust sensor- and global-level features. We further introduce
multi-window temporal contrasting to ensure temporal consistency in the data
for each sensor. Extensive experiments demonstrate that our proposed method
achieves state-of-the-art performance on various MTS classification tasks. The
code is available at https://github.com/Frank-Wang-oss/TS-GAC. | Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, Xiaoli Li, Lihua Xie, Zhenghua Chen | 2023-09-11T02:35:22Z | http://arxiv.org/abs/2309.05202v3 | # Graph Contextual Contrasting for Multivariate Time Series Classification
###### Abstract
Contrastive learning, as a self-supervised learning paradigm, becomes popular for Multivariate Time-Series (MTS) classification. It ensures the consistency across different views of unlabeled samples and then learns effective representations for these samples. Existing contrastive learning methods mainly focus on achieving temporal consistency with temporal augmentation and contrasting techniques, aiming to preserve temporal patterns against perturbations for MTS data. However, they overlook spatial consistency that requires the stability of individual sensors and their correlations. As MTS data typically originate from multiple sensors, ensuring spatial consistency becomes essential for the overall performance of contrastive learning on MTS data. Thus, we propose Graph Contextual Contrasting (GCC) for spatial consistency across MTS data. Specifically, we propose graph augmentations including node and edge augmentations to preserve the stability of sensors and their correlations, followed by graph contrasting with both node- and graph-level contrasting to extract robust sensor- and global-level features. We further introduce multi-window temporal contrasting to ensure temporal consistency in the data for each sensor. Extensive experiments demonstrate that our proposed GCC achieves state-of-the-art performance on various MTS classification tasks.
1Nanyang Technological University, Singapore
Institute for Infocomm Research, A*STAR, Singapore
Centre for Frontier AI Research, A*STAR, Singapore
{yucheng003, xuyu0014, yang0478, chen0832}@e.ntu.edu.sg, {wumin, xlli}@i2r.a-star.edu.sg, [email protected]
## 1 Introduction
Multivariate Time-Series (MTS) data are widely used in areas such as healthcare and industrial manufacturing for classification tasks, attracting significant research interests. To improve the performance of MTS classification, deep learning has gained popularity for learning effective representations [1, 1, 1, 2, 10]. However, the need for substantial labeled samples poses challenges as large-scale manual labeling is impractical, limiting their applicability to real-world scenarios. To address this challenge, Contrastive Learning (CL) has emerged as a promising approach [2, 1]. By contrasting the different views of unlabeled samples that are commonly generated by augmentations, CL enhances encoder's robustness to perturbations and learns robust and effective representations.
Researchers have recently begun exploring CL for MTS data [1, 13], with a primary focus on achieving temporal consistency by preserving temporal patterns robustly against perturbations. Specifically, temporal augmentations such as jittering or permutation are commonly used to create different views for MTS data. Encoders are then employed to extract temporal features, based on which CL is performed to make the encoders robust to temporal disturbances, thus preserving temporal patterns within MTS data. To further enhance temporal consistency, temporal contrasting is often achieved with a predictive contrastive loss when predicting the future timestamps with the past information [1, 1].
While the current methods have made progress with CL for MTS data, they mainly focused on temporal consistency while ignoring spatial consistency during the CL process. Here, the spatial consistency refers to maintaining the stability of both the individual sensors and the correlations across the different sensors. Specifically, the robustness of MTS data relies on the stability of each individual sensor, i.e., any disturbance in a sensor could have a significant impact on the classification performance of an MTS sample. We take Fig. 1 for illustration. Amplitude disturbances, such as insensitivity, in foot signals can lead to the similar foot amplitude in walking and running actions, potentially causing a classifier to misclassify running as walking. Thus, a robust algorithm should be able to identify disturbances within individual sensors. Moreover, correlations exist between sensors, with certain sensors exhibiting stronger correlations across each other than with others. For example, due to the physical connection between the foot and knee, a foot sensor is more correlated with a knee sensor than a hand sensor. Preserving the robustness of these relative sensor relationships can further help learn robust sensor features [23, 24]. As MTS data typically originate from multiple sensors, it is crucial to ensure the spatial consistency to enhance the overall CL performance on MTS data.
The above discussion motivates us to propose a novel approach called Graph Contextual Contrasting (GCC). To achieve spatial consistency, specific augmentation and contrasting methods tailored for MTS data are designed. We first design graph augmentations, involving node and edge augmentations, to augment MTS data. For node augmentations,
we apply temporal and frequency augmentations [14, 15] to fully augment each sensor, while edge augmentations are introduced to augment sensor correlations, ensuring robustness in the relationships between sensors. By capturing the augmented sensor correlations, Graph Neural Network (GNN) [14, 15] is utilized to update sensor features.
With updated sensor features, we then design graph contrasting which incorporates both node- and graph-level contrasting to learn robust sensor- and global-level features. For node-level contrasting, we create two views using the proposed augmentations and contrast the sensors in different views within each MTS sample to ensure the robustness of each sensor against perturbations. Additionally, we map the sensor features to global features and introduce graph-level contrasting by contrasting MTS samples in different views within each training batch. Furthermore, we achieve temporal consistency for each sensor through temporal contrasting by following prior works [13, 12]. Due to the dynamic nature of sensor correlations in MTS data [14], we propose segmenting a sample into multiple windows, enabling us to incorporate multi-window temporal contrasting which ensures the consistency of temporal patterns within each sensor.
In summary, our contributions are threefold. First, to promote spatial consistency, we propose novel graph augmentations to enhance the quality of augmented views for MTS data. The graph augmentations involve node and edge augmentations, aiming to augment sensors and their correlations, respectively. Second, we design graph contrasting that includes node- and graph-level contrasting, facilitating the learning of robust sensor- and global-level features. We also introduce multi-window temporal contrasting to achieve temporal consistency for each sensor. Third, we conduct extensive experiments on five public MTS datasets, showing that our GCC achieves state-of-the-art performance.
## 2 Related Work
#### Contrastive Learning (CL)
As a self-supervised learning paradigm, CL has gained popularity due to its ability to learn effective features from unlabeled samples by bringing positive pairs closer while pushing negative pairs farther [14, 12]. Augmentations are commonly used to create positive pairs, generating augmented samples from different perspectives. Negative pairs, on the other hand, are created using the remaining samples in the same batch [15] or stored in a memory bank [16]. Contrasting these positive and negative pairs helps encoders become robust to perturbations, ensuring consistency in the learned features, and thus learning robust and effective features from unlabeled data.
Researchers have proven the effectiveness of CL in image tasks [11, 16, 15, 12]. MoCo [16] designed a momentum encoder with a memory bank to store negative samples, achieving desirable performance with limited computational resources. SimCLR [15] adopted larger batches of negative pairs and achieved comparable results to supervised learning. Inspired by SimCLR, MoCo-v2 [15] improved performance with powerful augmentations without requiring large batches. Besides, negative pairs may occupy computation resources, so BYOL [17] and SimSiam [15] learned representations with only positive pairs. Although these methods have achieved decent performance, they are proposed for images. Different from images, MTS data contain both temporal and spatial information from multiple sensors, making traditional image-based augmentation and contrasting methods unsuitable for MTS data.
#### CL for MTS Data
Pioneering works have successfully utilized CL techniques to learn decent representations from unlabeled MTS data, primarily focusing on achieving temporal consistency [18, 12, 13, 14, 15]. Specifically, they augmented MTS data with temporal augmentations such as jittering, cropping, and sub-series, and then conducted CL to ensure encoders robustness to temporal disturbances. Meanwhile, some works [13, 12] also introduced temporal contrasting by summarizing past information for contrasting with future timestamps, further enforcing robustness to perturbations within timestamps.
While these works advanced CL for MTS data by ensuring temporal consistency, they overlook spatial consistency for MTS data. Some recent works proposed to incorporate spatial information, e.g., sensor correlations, into CL frameworks. For example, TAGCN [14] utilized GNN to extract features from sub-series of MTS data and then performed CL. Additionally, TSGCC [14] designed a graph-based method to compute weights between samples for clustering by instance- and clustering-contrasting. However, these methods only utilized GNN to extract spatial information within MTS data, while still overlooking spatial consistency to achieve better CL for MTS data. Although a few recent studies [15, 16] explored channel-wise signal augmentations, graph-level augmentations and contrasting are still under-explored, limiting their ability to achieve robust spatial consistency for MTS data.
To overcome these limitations, we propose GCC, which
Figure 1: Signals from knee and foot for walking and running. Foot is more important for classification due to its large amplitude. (a) During walking, both knee and foot have low frequency and amplitude. (b) During running, both sensors show increased frequency and amplitude. Disturbances in the foot sensor, like insensitivity, may cause running signals to have a similar amplitude to walking signals, which may mislead a classifier and mis-classify running as walking.
incorporates both graph augmentation and graph contrasting techniques to ensure spatial consistency during the CL process for MTS classification.
## 3 Methodology
### Problem Formulation
Given a dataset of unlabeled MTS samples \(\mathcal{X}=\{X_{j}\}_{j=1}^{n}\), each sample \(X_{j}\in\mathbb{R}^{N\times L}\) is collected from \(N\) sensors over \(L\) timestamps. Our objective is to design a contrastive learning scheme that achieves spatial consistency for MTS data, enabling the training of an encoder \(\mathcal{F}\) without relying on labels. This approach allows us to achieve enhanced CL performance and thus extract effective representations \(h_{j}=\mathcal{F}(X_{j})\in\mathbb{R}^{d}\). With \(h_{j}\), we employ a simple classifier, e.g., a multi-layer perceptron, to obtain class probabilities \(y_{j}\in\mathbb{R}^{c}\), where \(c\) represents the number of classes in the classification task. For simplicity, the subscript \(j\) is omitted, and we denote an MTS sample as \(X\).
### Overall Structure
Fig. 2 shows the overall structure of Graph Contextual Contrasting (GCC), which aims to achieve spatial consistency in CL for MTS classification. Specific augmentation and contrasting techniques are tailored for MTS data. For augmentation, we consider node and edge augmentations to augment individual sensors and their correlations, generating weak and strong views for each sample. Node frequency augmentations are applied first, followed by segmenting the augmented samples into multiple windows to account for the dynamic local patterns in MTS data. Node temporal augmentations are utilized within each window, followed by a 1-Dimensional Convolutional Neural Network (1D-CNN) to process these windows. Subsequently, graphs are constructed with each sensor as a node and the sensor correlations as edges. The constructed graphs are further augmented by edge augmentations, and then processed by a GNN-based encoder to learn representations. Next, to achieve spatial consistency, we design graph contrasting including Node-level Contrasting (NC) and Graph-level Contrasting (GC). NC enables the contrasting of sensors within each sample to learn robust sensor-level features, while GC allows the contrasting of samples within each training batch, promoting the learning of robust global-level features. We further introduce Multi-Window Temporal Contrasting (MWTC) to ensure temporal consistency for each sensor, by utilizing past windows in one view to predict the future windows in another view.
### Augmentation
CL learns robust representations by contrasting different views of unlabeled data, which are commonly generated by augmentations. Then, the augmented views from the same data are pulled closer and the views from different data are simultaneously pushed farther for representation learning. Thus, augmentations are critical for CL to learn robust and generalizable representations. To enhance augmentation quality for MTS data, we consider its multi-source nature, i.e., collected from multiple sensors [14]. We argue that augmentations for MTS data should be able to ensure the learning of robust sensor features and sensor correlations. For this purpose, we design node and edge augmentations that augment individual sensors and their correlations respectively. Further, following [13], we generate weak and strong views, i.e., weakly and strongly augmented, for each sample with the augmentations for subsequent contrasting.
**Node Augmentations.** We perform both frequency and temporal augmentations for the nodes (i.e., sensors).
_Frequency augmentations_: We utilize frequency augmentations to augment individual sensors, as the augmentations are widely recognized as effective in augmenting time-series data [14, 15]. This involves transforming the signals of each sensor into the frequency domain and augmenting the extracted frequency features. The augmented frequency features are then transformed back into the temporal domain to obtain augmented signals.
Particularly, we adopt Discrete Wavelet Transform (DWT) [1] to decompose signals into detail and approximation coefficients using high-pass and low-pass filters, representing detailed and general trends within the signals, respectively. To generate weak and strong views, we add Gaussian noise to the detail and approximation coefficients respectively. The augmented frequency features are then transformed back into the temporal domain using inverse DWT (iDWT) to obtain the augmented signals. Mathematically, frequency augmentations are achieved via Eq. (1), where \(\eta_{A,i}\) and \(\eta_{D,i}\) denote the approximation and detail coefficients for the \(i\)-th sensor, and \(\xi\) represents the noise added to coefficients. We denote \(\{X^{w},X^{s}\}\) as the augmented signals in weak and strong views.
\[\eta_{A,i},\eta_{D,i}=DWT(x_{i}), \tag{1}\]
\[\eta_{A,i}^{s}=\eta_{A,i}+\xi,\quad\eta_{D,i}^{w}=\eta_{D,i}+\xi,\]
\[x_{i}^{s}=iDWT(\eta_{A,i}^{s},\eta_{D,i}),\quad x_{i}^{w}=iDWT(\eta_{A,i},\eta_{D,i}^{w}).\]
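A minimal sketch of this frequency augmentation using PyWavelets is shown below; the wavelet family ('db4') and the noise scale are illustrative choices rather than the settings used in the paper.

```python
import numpy as np
import pywt

def frequency_augment(x: np.ndarray, sigma: float = 0.1, wavelet: str = "db4"):
    """Weak/strong frequency augmentations of one sensor signal x (cf. Eq. (1)).

    Weak view: Gaussian noise on the detail coefficients (fine structure).
    Strong view: Gaussian noise on the approximation coefficients (general trend).
    """
    cA, cD = pywt.dwt(x, wavelet)                      # single-level DWT
    x_weak = pywt.idwt(cA, cD + sigma * np.random.randn(*cD.shape), wavelet)
    x_strong = pywt.idwt(cA + sigma * np.random.randn(*cA.shape), cD, wavelet)
    return x_weak[: len(x)], x_strong[: len(x)]        # trim possible padding
```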
_Temporal augmentations_: We further introduce temporal augmentations to augment each sensor due to their importance in augmenting time-series data [10, 13, 12]. Before applying temporal augmentations, we note that MTS data show dynamic properties, i.e., the local patterns of MTS data change over time [13]. To capture such properties, we segment each MTS sample into mini windows. As displayed in Fig. 3, given a window of length \(f\), we segment an MTS sample into \(k=[L/f]\) windows, where \([\,]\) represents truncation. Thus, we obtain \(X^{w}=\{\bar{X}_{t}^{w}\}_{t=1}^{k}\) for the weak view, where \(t\) is the index of the window, and \(\bar{X}_{t}^{w}=\{\bar{x}_{t,i}^{w}\}_{i=1}^{N}\in\mathbb{R}^{N\times f}\) contains the local patterns, including local sensor features and correlations. The windows in the strong view \(\{\bar{X}_{t}^{s}\}_{t=1}^{k}\) are obtained in the same way. If we conducted temporal augmentations before segmentation, it would be difficult to augment each window evenly, so we augment each window after segmentation.
We adopt permutation for temporal augmentations due to its wide application [13, 10] and augment each sensor of each window. After augmentation, we obtain the augmented windows, e.g., \(\{\bar{X}_{t}^{a,w}\}_{t=1}^{k}\) in the weak view, where \(\bar{X}_{t}^{a,w}=\{\bar{x}_{t,i}^{a,w}\}_{i=1}^{N}\). A 1D-CNN is then utilized as an encoder to capture the temporal information between windows Jin et al. (2022), _whose details are attached in our supplementary materials_. With the encoder, we learn updated windows, e.g., \(\{Z_{t}^{w}\}_{t=1}^{k}\) for the weak view, where \(Z_{t}^{w}=\{z_{t,i}^{w}\}_{i=1}^{N}\). Similar notations such as \(\bar{X}_{t}^{a,s}\) and \(Z_{t}^{s}\) apply to the strong view.
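A sketch of the segmentation and per-window permutation (segment-shuffling) augmentation is given below; the number of shuffled sub-segments is an illustrative hyperparameter, not the value used in the paper.

```python
import numpy as np

def segment_and_permute(x: np.ndarray, f: int, n_segments: int = 4) -> np.ndarray:
    """Split a sample x of shape (N, L) into k = L // f windows of shape (N, f)
    and permute sub-segments within each window independently per sensor."""
    N, L = x.shape
    k = L // f
    windows = x[:, : k * f].reshape(N, k, f).transpose(1, 0, 2)   # (k, N, f)
    augmented = np.empty_like(windows)
    for t in range(k):
        for i in range(N):
            pieces = np.array_split(windows[t, i], n_segments)
            order = np.random.permutation(len(pieces))
            augmented[t, i] = np.concatenate([pieces[j] for j in order])
    return augmented
```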
**Edge Augmentations.** The correlations between sensors should remain robust due to their importance for learning sensor features Jia et al. (2020); Zhang et al. (2022). To ensure robust sensor relationships, we begin by constructing graphs whose nodes and edges represent the sensors and the correlations between these sensors, respectively. Augmenting the edges then allows us to augment the relations effectively. For graph construction, we note that correlated sensors should follow similar properties and their features should be similar in the feature space, so we leverage feature similarities to define the sensor correlations. Given the features \(Z_{t}=\{z_{t,i}\}_{i=1}^{N}\in\mathbb{R}^{N\times f}\), we compute the correlation between sensors \(i\) and \(j\) using the dot product of their features, i.e., \(e_{t,ij}=z_{t,i}(z_{t,j})^{T}\). Then, the softmax function is used to restrict the correlations to the range [0,1]. Multiple graphs are built based on the windows of the two views. For the weak view, the graph for the \(t\)-th window is denoted as \(\mathcal{G}_{t}^{w}=(Z_{t}^{w},E_{t}^{w})\), where \(E_{t}^{w}=\{e_{t,ij}^{w}\}_{i,j}^{N}\). Similar graphs \(\mathcal{G}_{t}^{s}\) are obtained for the strong view.
We then introduce edge augmentations to augment the correlations between sensors. A naive approach would be randomly adding noise, replacing, or dropping certain edges for graph augmentation You et al. (2020). However, this method may introduce excessive bias and significantly alter the topological structure within MTS data. Note that GNN updates sensor features based on their correlations with other sensors. Thus, strong correlations ensure more information propagation, making them more crucial than weak correlations. Randomly disturbing these strong correlations can introduce excessive bias. To address this issue, it is necessary to add constraints for the edge augmentation. Thus, we propose retaining the \(s\) strongest correlations (i.e., top-\(s\) correlations) for each sensor and augmenting the remaining correlations by replacing them with random values within the range [0, 1]. This approach allows us to fully augment
Figure 3: The multi-window segmentation to generate multiple windows for one MTS sample.
Figure 2: Overall structure of GCC. (1) Graph augmentations to augment MTS data effectively, generating weak and strong views. The graph augmentations involve node and edge augmentations, where node augmentations include both frequency and temporal augmentations to fully augment sensors. Node frequency augmentations are first applied, followed by segmenting augmented samples into multiple windows by considering the dynamic local patterns in MTS data. Node temporal augmentations are utilized within each window, followed by 1D-CNN to process these windows. Subsequently, graphs are constructed and augmented through edge augmentations, and then processed by GNN. (2) Graph contrasting includes NC and GC to achieve spatial consistency. NC ensures robust sensors by pulling closer corresponding sensors in different views and pushing father different sensors in those views within each sample. GC ensures robust global features by pulling closer corresponding samples in different views and pushing father different samples in those views within each batch. MWTC further achieves temporal consistency for each sensor by summarizing past windows to contrast with future windows in another view.
sensor correlations while preserving the topological information within MTS data as much as possible. Specifically, we retain more strong correlations for graphs in the weak view and fewer strong correlations for graphs in the strong view. The resulting augmented graph for the \(t^{th}\) window in the weak view is denoted as \(\mathcal{G}_{t}^{a,w}=(Z_{t}^{w},E_{t}^{a,w})\), and \(E_{t}^{a,w}\) are augmented sensor correlations. Similarly, \(\mathcal{G}_{t}^{a,s}\) denotes the augmented graph for the strong view.
With the augmented graphs, we adopt a GNN to update sensor features by leveraging the augmented correlations, following conventional works [13, 20]. Particularly, the features for sensor \(i\) in the weak view are updated by a nonlinear function, i.e., \(z_{t,i}^{w}=\sigma(\sum_{j}^{N}z_{t,j}^{w}e_{t,ij}^{a,w}W_{g})\), where \(W_{g}\) are learnable weights. The updated sensor features \(z_{t,i}^{w}\) and \(z_{t,i}^{s}\) in the weak and strong views are then used for subsequent contrasting.
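Putting the graph construction, the top-\(s\) edge augmentation, and a single GNN propagation step together, a minimal PyTorch sketch could look as follows; the hidden dimension, the example sizes, and the choice of ReLU for \(\sigma\) are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def build_and_augment_graph(Z: torch.Tensor, s: int) -> torch.Tensor:
    """Z: (N, f) sensor features of one window.

    Returns a row-softmax adjacency in which only the top-s entries per sensor
    are kept; the remaining (weak) correlations are replaced by random values
    in [0, 1]."""
    E = F.softmax(Z @ Z.t(), dim=-1)                   # e_{t,ij} in [0, 1]
    topk = E.topk(s, dim=-1).indices
    keep = torch.zeros_like(E).scatter_(-1, topk, 1.0).bool()
    return torch.where(keep, E, torch.rand_like(E))    # augment weak correlations

def gnn_update(Z: torch.Tensor, E_aug: torch.Tensor, W_g: torch.Tensor) -> torch.Tensor:
    """One propagation step: z_i <- sigma(sum_j e_ij * z_j * W_g)."""
    return torch.relu(E_aug @ Z @ W_g)

N, f, hidden, s = 9, 16, 16, 3                         # illustrative sizes
Z = torch.randn(N, f)
W_g = torch.randn(f, hidden) * 0.1
Z_new = gnn_update(Z, build_and_augment_graph(Z, s), W_g)
```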
### Contrasting
With the augmentations to generate weak and strong views, we design graph contrasting to achieve spatial consistency and further design MWTC to achieve temporal consistency for each sensor. We begin by presenting MWTC in this section, as it learns high-level sensor features within multi-window for subsequent graph contrasting.
**Multi-Window Temporal Contrasting.** MWTC operates at the sensor level, ensuring temporal consistency for each sensor. We note that the multiple windows of each sensor show temporal dependencies, where future windows are normally affected by and dependent on past windows, and these dependencies can be exploited to keep the windows robust. Inspired by the idea of predictive coding [1] and temporal contrasting [10, 11], we propose to summarize the past windows in one view and contrast them with the future windows in another view. By doing so, we aim to keep the temporal dependencies robust against perturbations to the windows, ensuring that the temporal patterns within MTS data are preserved.
Specifically, we introduce an auto-regressive model \(f_{a}\) to summarize the sensor features in the past \(\bar{k}\) windows, i.e., \(c_{i}^{w}=f_{a}(z_{1,i}^{w},...,z_{\bar{k},i}^{w}|W_{a})\), representing the summarized vector for the \(i\)-th sensor in the weak view. \(c_{i}^{w}\) is then used to predict the future windows, i.e., \(\bar{z}_{\bar{k}+1,i}^{w}=f_{\bar{k}+1}(c_{i}^{w}),...,\bar{z}_{k,i}^{w}=f_{k}(c_{i}^{w})\), where \(f_{\bar{k}+1}(\cdot),...,f_{k}(\cdot)\) are nonlinear functions predicting the \((\bar{k}+1)\)-th,..., \(k\)-th windows. Similar operations are conducted for the strong view. Here, we adopt a transformer model for \(f_{a}\) following [1], the details of which are attached in our supplementary materials. \(\mathcal{L}_{MWTC}^{s\to w}\) in Eq. (2) is the loss using the past windows in the strong view to predict the future windows in the weak view. Here, the predicted window \(\bar{z}_{t,i}^{s}\) should exhibit similarity with its positive pair \(z_{t,i}^{w}\), while being dissimilar to its negative pairs \(z_{v,i}^{w},v\in\hat{\mathcal{V}}_{t,i}\), where \(\hat{\mathcal{V}}_{t,i}\) denotes the set of windows excluding the \(t\)-th window for sensor \(i\).
\[\mathcal{L}_{MWTC}^{s\to w}=\frac{-1}{N(k-\bar{k})}\sum_{i}^{N}\sum_{t= \bar{k}}^{k}log\frac{exp((\bar{z}_{t,i}^{s})^{T}z_{t,i}^{w})}{\sum_{v\in\hat{ \mathcal{V}}_{t,i}}exp((\bar{z}_{t,i}^{s})^{T}z_{v,i}^{w})}. \tag{2}\]
Similarly, we can obtain \(\mathcal{L}_{MWTC}^{w\to s}\) and thus obtain \(\mathcal{L}_{MWTC}=\mathcal{L}_{MWTC}^{s\to w}+\mathcal{L}_{MWTC}^{w\to s}\) for sample \(X\).
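A compact sketch of the cross-view predictive loss of Eq. (2) for one sample is given below; it assumes the summariser and per-step prediction heads have already produced the predicted future windows, and, as is usual for InfoNCE-style implementations, the positive pair is kept in the denominator for brevity, which differs slightly from the strict reading of Eq. (2).

```python
import torch
import torch.nn.functional as F

def mwtc_loss(z_pred: torch.Tensor, z_other: torch.Tensor) -> torch.Tensor:
    """z_pred: (k - k_past, N, d) predicted future windows from one view's summary.
    z_other: (k, N, d) window features of the other view.
    Positives: same window index and sensor; negatives: the sensor's other windows."""
    k, N, d = z_other.shape
    k_past = k - z_pred.shape[0]
    loss = 0.0
    for step in range(z_pred.shape[0]):
        t = k_past + step
        # logits[i, v] = <predicted window t of sensor i, window v of sensor i>
        logits = torch.einsum("id,vid->iv", z_pred[step], z_other)
        target = torch.full((N,), t, dtype=torch.long)
        loss = loss + F.cross_entropy(logits, target)
    return loss / z_pred.shape[0]
```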
**Graph Contrasting.** We propose graph contrasting to achieve spatial consistency, including Node-level Contrasting (NC) and Graph-level Contrasting (GC) to learn robust sensor- and global-level features. NC is achieved by contrasting sensors in different views within each MTS sample, while GC is achieved by contrasting samples within each training batch. Notably, we leverage the vectors \(\{c_{i}\}_{i=1}^{N}\) for graph contrasting, as these vectors represent high-level features obtained by summarizing the sensor-level features across the multiple windows. By utilizing the high-level features, we can achieve more effective graph contrasting.
_Node-level Contrasting_: NC is designed to learn robust sensor-level features. Specifically, it aims to maximize the similarity between corresponding sensors in the two views while minimizing the similarity between different sensors across those views. By doing so, NC encourages the encoder to learn features that are robust against perturbations to each sensor. Eq. (3) presents the node-level contrastive loss, where \(\hat{\mathcal{V}}_{i}\) denotes the set of sensors excluding sensor \(i\). The process is illustrated in the NC part of Fig. 2.
\[\mathcal{L}_{NC}^{s\to w}=-\frac{1}{N}\sum_{i}^{N}log\frac{exp(f_{sim}(c_{i}^{ s},c_{i}^{w})/\tau)}{\sum_{v\in\hat{\mathcal{V}}_{i}}exp(f_{sim}(c_{i}^{s},c_{v}^{w })/\tau)}. \tag{3}\]
Here \(f_{sim}(a,b)\) is a function measuring the similarity of two feature vectors, implemented as the dot product \(a^{T}b\), and \(\tau\) is a temperature parameter. \(\mathcal{L}_{NC}^{s\to w}\) denotes that the sensors in the strong view are contrasted with the positive and negative pairs in the weak view. Similarly, we can obtain \(\mathcal{L}_{NC}^{w\to s}\) and thus obtain \(\mathcal{L}_{NC}=\mathcal{L}_{NC}^{s\to w}+\mathcal{L}_{NC}^{w\to s}\) for sample \(X\).
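A minimal sketch of Eq. (3) in the same PyTorch style follows; the dot-product similarity matches the text, while the temperature value is only a placeholder.

```python
import torch

def node_contrast_loss(c_s, c_w, tau=0.5):
    # c_s, c_w: (N, d) summarized features of the N sensors in the strong / weak view.
    logits = (c_s @ c_w.t()) / tau              # (N, N) pairwise similarities f_sim
    pos = logits.diagonal()                     # same sensor in the two views
    neg = logits.clone()
    neg.fill_diagonal_(float("-inf"))           # negatives: all other sensors of the weak view
    return -(pos - torch.logsumexp(neg, dim=1)).mean()
```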
_Graph-level Contrasting_: GC aims to learn robust global-level features by contrasting samples within each training batch. For subsequent contrasting, we here obtain the global-level features by stacking all sensor features. For the weak view, \(g^{w}=[c_{1}^{w}|...|c_{N}^{w}]\), where \([\,]\) denotes concatenation. Similar operations are conducted for the strong view.
To learn robust global-level features, GC is achieved by maximizing the similarity between the corresponding samples in two views and simultaneously minimizing the similarity between the different samples in those views. Given a batch of \(B\) MTS samples, we have \(2B\) augmented samples from two augmented views. The corresponding samples in two views are treated as positive pairs, and each view of the sample can form \(2B\)-\(2\) negative pairs with the remaining augmented samples. We denote the global-level features of the \(p\)-th augmented samples in weak and strong views within the batch as \(g_{p}^{\{w,s\}}\). Accordingly, the graph-level contrasting is demonstrated as Eq. (4), which denotes that the samples in the strong view are contrasted with the remaining augmented samples in the batch. Here, \(\hat{\mathcal{V}}_{p}\) denotes the set of samples in the batch excluding the \(p\)-th sample.
\[\mathcal{L}_{GC}^{s}=-\frac{1}{B}\sum_{p=1}^{B}log\frac{exp(f_{sim}(g_{p}^{s},g_{p}^{w})/\tau)}{\sum_{v\in\hat{\mathcal{V}}_{p}}exp(f_{sim}(g_{p}^{s},g_{v}^{\{w,s\}})/\tau)}. \tag{4}\]
Similarly, we can obtain \(\mathcal{L}_{GC}^{w}\) for the weak view and thus obtain \(\mathcal{L}_{GC}=\mathcal{L}_{GC}^{s}+\mathcal{L}_{GC}^{w}\).
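As with the other losses, a short PyTorch-style sketch of Eq. (4) is given below; stacking the two views and masking out the anchor and its positive leaves the \(2B\)-\(2\) negatives described above (shapes and temperature are illustrative assumptions).

```python
import torch

def graph_contrast_loss(g_s, g_w, tau=0.5):
    # g_s, g_w: (B, D) global features of the B samples in the strong / weak view.
    B = g_s.size(0)
    feats = torch.cat([g_w, g_s], dim=0)             # the 2B augmented samples
    logits = (g_s @ feats.t()) / tau                 # (B, 2B) similarities
    pos = (g_s * g_w).sum(dim=1) / tau               # corresponding sample in the weak view
    idx = torch.arange(B)
    neg = logits.clone()
    neg[idx, idx] = float("-inf")                    # exclude the positive pair ...
    neg[idx, idx + B] = float("-inf")                # ... and the anchor itself (2B-2 negatives left)
    return -(pos - torch.logsumexp(neg, dim=1)).mean()
```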
Finally, we combine MWTC, NC, and GC to form the final self-supervised loss as Eq. (5), where \(\lambda_{MWTC}\), \(\lambda_{NC}\), and \(\lambda_{GC}\) are hyperparameters that denote relative weights of the losses. Notably, MWTC and NC are both achieved for each MTS sample, so they are denoted as \(\mathcal{L}_{p,MWTC}\) and \(\mathcal{L}_{p,NC}\) for the \(p\)-th sample.
\[\mathcal{L}=\lambda_{MWTC}\sum_{p=1}^{B}\mathcal{L}_{p,MWTC}+\lambda_{NC}\sum_{p=1}^{B}\mathcal{L}_{p,NC}+\lambda_{GC}\mathcal{L}_{GC}. \tag{5}\]
## 4 Experimental Results
**Datasets.** We examine our method on five public MTS datasets for classification, including Human Activity Recognition (HAR) [1], ISRUC [14], and three datasets from the UEA archive following previous works [11], i.e., ArticulatoryWordRecognition (AWR), FingerMovements (FM), and SpokenArabicDigits (SAD). For HAR and ISRUC, we randomly split them into 80% and 20% for training and testing, while for those from the UEA archive, we directly adopt their pre-defined train-test splits. _The statistics of the datasets are in the Appendix_.
**Evaluation.** For evaluation, we follow the standard linear classification scheme used by current methods [10, 12], i.e., train an encoder with only the training data in a self-supervised manner and then train a linear classifier on top of the pre-trained encoder. To evaluate performance, we adopt two metrics, i.e., Accuracy (Accu.) and Macro-averaged F1-Score (MF1) [10, 12]. Besides, to reduce the effect of random initialization, we run every experiment ten times and take the average results for comparisons. Standard deviations are reported to show the robustness of the results.
**Implementation Details.** All methods are run on an NVIDIA GeForce RTX 3080Ti and implemented in PyTorch [13]. We set the batch size to 128 and choose ADAM as the optimizer with a learning rate of 3e-4. We pre-train the model and train the linear classifier for 40 epochs. _More implementation details are in the Appendix_.
### Comparisons with State-of-the-Arts
We compare our method with SOTA methods, including SimCLR [10], CPC [12], TNC [13], TS2Vec [23], TS-TCC [24], MHCCL [11], CaSS [10], and TAGCN [12]. All methods are re-implemented based on their original settings except for the encoders, which are replaced by the same encoder as ours for fair comparisons.
Table 1 shows the comparisons with SOTA methods. From the table, we observe that GCC achieves the best performance on four out of five datasets. In particular, GCC achieves large improvements on HAR and ISRUC, improving accuracy by 1.44% and 3.13%, respectively. In the remaining case, where GCC is second best, the gap to the best result is marginal, i.e., only 0.4% in accuracy. Meanwhile, GCC has smaller variances, indicating that our GCC is more robust and stable.
### Ablation Study
In this section, we evaluate the augmentation and contrasting techniques designed within GCC, which fall into two categories of variants. The first category tests the augmentations, including w/o Aug. (N) and w/o Aug. (E), representing variants without node and edge augmentations, respectively. The second category assesses the effectiveness of the contrastive losses, with variants w/o GC, w/o NC, and w/o MWTC indicating the removal of graph-level contrasting, node-level contrasting, and multi-window temporal contrasting, respectively. Finally, we compare them with the complete GCC.
Table 2 shows the results, where we only present the results on HAR and ISRUC due to limited space. _More results can be found in our supplementary materials_. The experimental results demonstrate the effectiveness of our proposed graph augmentation and contrasting techniques in achieving spatial consistency for MTS data. Specifically, the graph augmentations show significant improvements in learning robust representations. Compared to the variant without node augmentations, our complete GCC achieves improvements of 1.30% and 0.36% on the two datasets. Similarly, compared to the model without edge augmentations, our complete GCC achieves improvements of 0.60% and 0.35% on the two datasets. The improvements indicate the necessity of using graph augmentations for better augmenting MTS data. Meanwhile, the designed contrasting techniques play crucial roles in learning robust representations, and our complete GCC achieves the best performance compared to the variants without any of the contrastive losses. For instance, we see drops of 2.17% and 2.52% by removing GC and drops of 1.98% and 2.93% by removing NC on the two datasets, indicating the effectiveness of graph contrasting in achieving spatial consistency. We further observe drops of 0.67% and 2.90% by removing MWTC on the two datasets, showing the importance of achieving temporal consistency for each sensor. Additionally, we can derive from the results that GCC can still achieve good performance even when only graph contrasting is used, further highlighting the effectiveness of graph contrasting.
Overall, these findings validate the importance of our proposed graph augmentation and contrasting techniques, demonstrating the necessity of achieving spatial consistency when conducting CL for MTS data.
### Sensitivity Analysis
**Hyperparameter Analysis.** We analyze \(\lambda_{MWTC}\), \(\lambda_{GC}\), and \(\lambda_{NC}\) to test their effects. The hyperparameters are trade-offs between various losses, so we choose the values within
**Number of retained edges for edge augmentations.** To effectively augment sensor correlations, we design edge augmentations by retaining the \(s\) strongest correlations, i.e., edges, for each sensor and replacing the remaining correlations with random values. The value of \(s\) is crucial for augmenting sensor relations and thus requires testing. Here, the weak view should have a larger \(s\) for weak augmentation, while the strong view should have a smaller \(s\) for strong augmentation. Meanwhile, each sensor in HAR and ISRUC has 9 and 10 edges, respectively. Thus, we set \(s\) in the weak view within [5, 6, 7, 8, 9] for HAR and add 10 for ISRUC. For the strong view, we set \(s\) within [1, 2, 3, 4, 5] for both datasets. Fig. 5 shows the results on HAR and ISRUC, where the number of retained edges represents the value of \(s\). Taking the results on HAR as an example, we observe that our model performs better when \(s\) in the strong view is set to 2 while keeping \(s\) in the weak view fixed. On the other hand, our model performs better when \(s\) in the weak view is set to 7 or 8 while keeping \(s\) in the strong view fixed. These trends indicate that having fewer retained correlations in the strong view has a positive effect, but the value of \(s\) should not be too small so as to avoid overly distorted correlations. Similarly, having more retained correlations in the weak view is beneficial, but the value of \(s\) should not be too large.
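A minimal sketch of this edge augmentation is shown below; ranking edges by the raw correlation values and drawing the replacement values uniformly at random are assumptions made here for illustration only.

```python
import torch

def augment_edges(adj, s):
    # adj: (N, N) sensor-correlation (adjacency) matrix of one MTS sample.
    # s  : number of retained edges per sensor (a smaller s gives a stronger augmentation).
    N = adj.size(0)
    topk = adj.topk(s, dim=1).indices                # the s strongest edges of every sensor
    keep = torch.zeros_like(adj, dtype=torch.bool)
    keep[torch.arange(N).unsqueeze(1), topk] = True
    noise = torch.rand_like(adj)                     # placeholder random correlation values
    return torch.where(keep, adj, noise)
```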
## 5 Conclusion
We propose Graph Contextual Contrasting (GCC) for MTS classification.
\begin{table}
\begin{tabular}{c|c|c c c c c c c|c} \hline \hline Datasets & Metrics & SimCLR & CPC & TNC & T-TCC & TS2Vec & MHCCL & CaSS & TAGCN & GCC (Ours) \\ \hline \multirow{3}{*}{HAR} & Accu & 89.97\(\pm\)0.46 & 90.35\(\pm\)0.34 & 81.10\(\pm\)1.88 & 91.66\(\pm\)0.42 & 92.78\(\pm\)0.32 & 82.95\(\pm\)0.55 & 82.64\(\pm\)0.31 & 92.83\(\pm\)0.28 & **94.27\(\pm\)0.12** \\ & MF1 & 89.91\(\pm\)0.42 & 90.50\(\pm\)0.34 & 78.24\(\pm\)2.91 & 91.86\(\pm\)0.40 & 92.78\(\pm\)0.33 & 82.70\(\pm\)0.62 & 82.34\(\pm\)0.31 & 92.66\(\pm\)0.29 & **94.07\(\pm\)0.14** \\ \hline \multirow{3}{*}{ISRUC} & Accu & 75.07\(\pm\)0.40 & 80.26\(\pm\)0.34 & 77.69\(\pm\)1.28 & 80.50\(\pm\)0.42 & 76.32\(\pm\)0.48 & 74.71\(\pm\)0.98 & 81.09\(\pm\)0.19 & 77.21\(\pm\)0.21 & **84.22\(\pm\)0.17** \\ & MF1 & 72.60\(\pm\)0.38 & 78.42\(\pm\)0.36 & 64.08\(\pm\)1.60 & 79.12\(\pm\)0.40 & 74.44\(\pm\)0.59 & 72.09\(\pm\)1.23 & 79.73\(\pm\)0.29 & 76.23\(\pm\)0.27 & **83.45\(\pm\)0.23** \\ \hline \multirow{3}{*}{AWR} & Accu & 92.78\(\pm\)0.80 & 94.97\(\pm\)0.32 & 82.60\(\pm\)4.21 & 89.44\(\pm\)0.68 & 98.30\(\pm\)0.09 & 93.00\(\pm\)0.56 & 97.47\(\pm\)0.16 & 97.87\(\pm\)0.27 & **98.33\(\pm\)0.08** \\ & MF1 & 92.69\(\pm\)0.82 & 95.10\(\pm\)0.31 & 77.42\(\pm\)5.34 & 89.51\(\pm\)0.73 & 98.29\(\pm\)0.10 & 93.14\(\pm\)0.75 & 97.46\(\pm\)0.16 & 97.86\(\pm\)0.27 & **98.33\(\pm\)0.07** \\ \hline \multirow{3}{*}{FM} & Accu & 50.52\(\pm\)2.04 & 48.23\(\pm\)2.46 & 48.90\(\pm\)2.42 & 47.40\(\pm\)1.63 & 47.10\(\pm\)4.22 & **52.40\(\pm\)2.28** & 50.00\(\pm\)1.79 & 51.50\(\pm\)1.91 & 52.00\(\pm\)1.54 \\ & MF1 & 47.35\(\pm\)1.97 & 48.15\(\pm\)2.50 & 43.02\(\pm\)5.25 & 47.36\(\pm\)1.64 & 47.03\(\pm\)4.18 & **49.82\(\pm\)3.06** & 35.10\(\pm\)2.01 & 49.52\(\pm\)2.04 & 48.78\(\pm\)0.71 \\ \hline \multirow{3}{*}{SAD} & Accu & 93.72\(\pm\)0.50 & 95.81\(\pm\)0.13 & 90.30\(\pm\)1.36 & 95.20\(\pm\)0.15 & 97.31\(\pm\)0.19 & 95.91\(\pm\)0.56 & 97.44\(\pm\)0.07 & 97.50\(\pm\)0.03 & **97.99\(\pm\)0.05** \\ & MF1 & 93.76\(\pm\)0.50 & 95.82\(\pm\)0.13 & 88.83\(\pm\)1.42 & 95.24\(\pm\)0.15 & 97.31\(\pm\)0.19 & 95.92\(\pm\)0.45 & 97.45\(\pm\)0.07 & 97.52\(\pm\)0.03 & **97.99\(\pm\)0.05** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons with State-of-the-Art methods for different tasks (%)
\begin{table}
\begin{tabular}{c|c|c c|c c c|c} \hline \hline \multicolumn{2}{c|}{\multirow{2}{*}{GCC (Variants)}} & \multicolumn{2}{c|}{Augmentations} & \multicolumn{3}{c|}{Contrasting} & \multirow{2}{*}{Complete} \\ & & w/o Aug. (N) & w/o Aug. (E) & w/o GC & w/o NC & w/o MWTC & \\ \hline \multirow{2}{*}{HAR} & Accu & 92.97\(\pm\)0.23 & 93.67\(\pm\)0.11 & 92.10\(\pm\)0.09 & 92.29\(\pm\)0.27 & 93.60\(\pm\)0.19 & 94.27\(\pm\)0.12 \\ & MF1 & 92.69\(\pm\)0.27 & 93.41\(\pm\)0.12 & 91.76\(\pm\)0.11 & 92.03\(\pm\)0.30 & 93.38\(\pm\)0.21 & 94.07\(\pm\)0.14 \\ \hline \multirow{2}{*}{ISRUC} & Accu & 83.86\(\pm\)0.20 & 83.87\(\pm\)0.18 & 81.70\(\pm\)0.14 & 81.29\(\pm\)0.12 & 81.29\(\pm\)0.34 & 84.22\(\pm\)0.17 \\ & MF1 & 82.88\(\pm\)0.28 & 82.80\(\pm\)0.15 & 80.62\(\pm\)0.13 & 80.11\(\pm\)0.11 & 80.11\(\pm\)0.87 & 83.45\(\pm\)0.23 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study for graph augmentation and graph contrasting (%)
Figure 4: Sensitivity analysis for \(\lambda_{MWTC}\), \(\lambda_{GC}\), and \(\lambda_{NC}\).
Figure 5: Sensitivity analysis for retained edges in views.
To achieve spatial consistency, specific augmentation and contrasting techniques are tailored for MTS data. To better augment MTS data, graph augmentations are proposed, including node and edge augmentations for ensuring the robustness of sensors and their correlations. Besides, graph contrasting is designed, including node- and graph-level contrasting to extract robust sensor- and global-level features. We further introduce multi-window temporal contrasting to ensure temporal consistency for each sensor. Experiments show that GCC achieves SOTA performance in various MTS classification tasks.
|
2310.00219 | Theory of chemically driven pattern formation in phase-separating
liquids and solids | Motivated by recent experimental and theoretical work on the control of phase
separation by (electro-)autocatalytic reactions, we analyze pattern formation
in externally driven phase separating systems described by a generalization of
the Cahn-Hilliard and Allen-Cahn equations combining nonlinear reaction
kinetics with diffusive transport. The theory predicts that phase separation
can be suppressed by driven autoinhibitory reactions when chemically driven at
a sufficiently high reaction rate and low diffusivity, while autocatalytic
reactions enhance phase separation. Analytical stability criteria for
predicting the critical condition of suppressed phase separation based on
linear stability analysis track the history dependence of pattern formation and
agree well with numerical simulations. By including chemo-mechanical coupling
in the model, we extend the theory to solids, where coherency strain alters the
morphology and dynamics of driven phase separation. We apply this model to
lithium iron phosphate nanoparticles and simulate their rate-dependent
electrochemical charging and discharging patterns, paving the way for a
quantitative understanding of the effect of reaction kinetics, diffusion, and
mechanics on the electrochemical performance of energy materials. The theory
may also find applications to microstructure formation in hardening cement
paste, as well as membraneless organelle formation in biological cells by
chemically controlled liquid-liquid phase separation. | Hongbo Zhao, Martin Z. Bazant | 2023-09-30T01:48:32Z | http://arxiv.org/abs/2310.00219v1 | # Theory of chemically driven pattern formation in phase-separating liquids and solids
###### Abstract
Motivated by recent experimental and theoretical work on the control of phase separation by (electro-)autocatalytic reactions, we analyze pattern formation in externally driven phase separating systems described by a generalization of the Cahn-Hilliard and Allen-Cahn equations combining nonlinear reaction kinetics with diffusive transport. The theory predicts that phase separation can be suppressed by driven autoinhibitory reactions when chemically driven at a sufficiently high reaction rate and low diffusivity, while autocatalytic reactions enhance phase separation. Analytical stability criteria for predicting the critical condition of suppressed phase separation based on linear stability analysis track the history dependence of pattern formation and agree well with numerical simulations. By including chemo-mechanical coupling in the model, we extend the theory to solids, where coherency strain alters the morphology and dynamics of driven phase separation. We apply this model to lithium iron phosphate nanoparticles and simulate their rate-dependent electrochemical charging and discharging patterns, paving the way for a quantitative understanding of the effect of reaction kinetics, diffusion, and mechanics on the electrochemical performance of energy materials. The theory may also find applications to microstructure formation in hardening cement paste, as well as membraneless organelle formation in biological cells by chemically controlled liquid-liquid phase separation.
## I Introduction
When a phase-separating mixture is in a thermodynamically unstable state, i.e., the free energy \(G\) is concave with respect to certain variations in the concentration of the constitutive chemical species (\(\delta^{2}G<0\)), the system can find a lower energy state by spontaneously separating into phases with different chemical compositions. The dynamics of this process, known as spinodal decomposition [1; 2], namely the formation of interspersed domains of different phases and their coarsening over time as the free energy of the system continues to decrease through the reduction of the interfacial energy between the phases, has been studied extensively through mean-field theory [2; 3; 4; 5; 6; 7; 8], molecular dynamics simulations [9; 10; 11] and experiments [12; 13; 14].
A well-known mean-field model that describes spinodal decomposition in systems that are dominated by molecular diffusion and satisfy mass conservation is the Cahn-Hilliard equation [15], in which the diffusive flux driven by the gradient of the chemical potential leads to uphill diffusion in the thermodynamically unstable region. On the other hand, the Allen-Cahn equation (A-C) [16; 17] and various stochastic pattern-forming models of Hohenberg and Halperin [18] have been used to describe phase-transforming systems with nonconserved order parameters, whose dynamics follow the direction of gradient descent of the free energy.
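For readers unfamiliar with the non-driven limit, the following minimal one-dimensional sketch integrates the Cahn-Hilliard equation \(\partial_{t}c=\partial_{x}^{2}\mu\) with \(\mu=f^{\prime}(c)-\kappa\,\partial_{x}^{2}c\) and the illustrative double-well \(f(c)=(c^{2}-1)^{2}/4\); all numerical parameters are dimensionless placeholders and are not taken from this work.

```python
import numpy as np

nx, dx = 256, 1.0
kappa, dt, nsteps = 2.0, 0.02, 25000         # illustrative, dimensionless values

def lap(u):                                  # periodic second derivative
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

rng = np.random.default_rng(0)
c = 0.05 * rng.standard_normal(nx)           # small fluctuations about the unstable mixture c = 0

for _ in range(nsteps):
    mu = c**3 - c - kappa * lap(c)           # chemical potential of the double-well free energy
    c = c + dt * lap(mu)                     # conserved (Cahn-Hilliard) dynamics

# c has now coarsened into alternating domains near c = -1 and c = +1
```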
Building on the theory of non-reactive phase separation, there has been a growing interest in the study of phase separation in chemically driven systems, due to its application in a wide range of fields, such as externally driven electrochemical reactions in phase-separating energy materials [19; 20; 21; 22], ATP-driven reactions and cell proliferation related to phase separation in biology[23; 24; 25; 26; 27], and reactive colloidal cluster formation in hardening cement paste [28; 29] and nanoparticle aggregation and gelation [30; 31]. As a result of chemical reactions, the system deviates from the downhill trajectory toward free energy minima. In many works coupling Cahn-Hilliard equation with reactions that break detailed balance, emergent patterns such as suppression of Ostwald ripening, and even dynamic morphologies that resemble protocell division[23; 24; 27; 32] have been observed.
In recent years, a general thermodynamically consistent model of chemical kinetics in phase separating systems has been developed that generalizes Cahn-Hilliard and Allen-Cahn equations for nonlinear reaction kinetics coupled with diffusion-driven phase separation [19]. The model has led to a general theory of thermodynamic stability for driven open systems far from equilibrium, which predicts suppression of phase separation, as well as pattern formation in stable mixtures, driven by chemical reactions [20]. The theory elucidates reaction-driven phase transformations in electrochemistry, especially in lithium-ion batteries where many widely used lithium intercalation materials such as lithium iron phosphate (LFP) nanoparticles and graphite undergo phase transformation [21; 22; 33; 34; 35; 36; 37; 38; 39; 40], along with applications in electrodeposition, adsorption, and hydrogen storage[41; 42; 43; 44]. The stability condition of the model predicts that autocatalytic or autoinhibitory reactions, which satisfy detailed balance, can strongly compete with phase separation, thereby controlling the thermodynamic stability of reactive mixtures. Indeed, experiments have confirmed the predicted role of electro-autocatalysis in the suppres |
2309.13460 | Suppression of stacking order with doping in 1T-TaS$_{2-x}$Se$_x$ | In 1T-TaS$_{2-x}$Se$_x$, the charge density wave (CDW) state features a star
of David lattice that expands across layers as the system becomes commensurate
on cooling. The layers can also order along the c-axis and different stacking
orders have been proposed. Using neutron scattering on powder samples, we
compared the stacking order previously observed in 1T-TaS$_2$ as the system is
doped with Se. While at low temperature, a 13c layer sequence stacking was
observed in TaS$_2$, this type of ordering was not evident with doping. Doping
with Se results in a nearly commensurate state with the Mott state suppressed
which may be linked to the absence of the layer stacking. | Sharon S. Philip, Despina Louca, J. C. Neuefeind, Matthew B. Stone, A. I. Kolesnikov | 2023-09-23T19:07:16Z | http://arxiv.org/abs/2309.13460v1 | # Suppression of stacking order with doping in 1T-TaS\({}_{2-x}\)Se\({}_{x}\)
###### Abstract
In 1T-TaS\({}_{2-2x}\)Se\({}_{2x}\), the charge density wave (CDW) state features a star of David lattice that expands across layers as the system becomes commensurate on cooling. The layers can also order along the c-axis and different stacking orders have been proposed. Using neutron scattering on powder samples, we compared the stacking order previously observed in 1T-TaS\({}_{2}\) as the system is doped with Se. While at low temperature, a 13c layer sequence stacking was observed in TaS\({}_{2}\), this type of ordering was not evident with doping. Doping with Se results in a nearly commensurate state with the Mott state suppressed which may be linked to the absence of the layer stacking.
## I Introduction
Quasi-two dimensional (2D) in nature, transition metal dichalcogenides (TMDs) 1T-MX\({}_{2}\) (M = Ti, Ta and X = S, Se, Te) are prone to electronic instabilities [1]. 1T-Ta(S/Se)\({}_{2}\) exhibits an incredibly rich phase diagram with multiple charge density wave (CDW) transitions emerging as a function of temperature and upon doping. In 1T-TaS\({}_{2-2x}\)Se\({}_{2x}\), macroscopic behaviors such as CDW and superconductivity [2; 3; 4; 5] have been observed, and more recently, a quantum spin liquid (QSL) has been proposed in 1T-TaS\({}_{2}\) as well [6]. In the typical Peierls model for CDW order [7], the instability of the coupled electron-lattice system brings a structural phase transition that is driven by strong electron-phonon coupling [8; 9; 10; 11]. The CDW formation can bring electron localization where displacements along phonon modes lower the total electronic energy by opening up a gap at the Fermi level, E\({}_{F}\)[12; 13]. This scenario, although applicable to simple one-dimensional systems, does not fully describe the case of 1T-TaS\({}_{2-x}\)Se\({}_{x}\) where the CDW behavior is intertwined with the opening of a Mott gap [14]. The origin of the CDW has been highly debated in TMDs [15; 16; 17]. The Fermi surface nesting scenario most often does not apply. Existing models of the CDW order are broadly classified into three types: in one, it involves an excitonic condensation mechanism; in two, it involves a Jahn-Teller-like distortion mechanism; and in three, it involves a hybrid model, a combination of Jahn-Teller and exciton condensation[17].
1T-TaS\({}_{2}\) exhibits a strong CDW instability and electronic localization that leads to several interesting effects. Upon cooling from high temperatures, three main phases form: the high temperature incommensurate CDW (ICDW), the intermediate temperature nearly commensurate CDW (NCCDW) and the low temperature commensurate CDW (CCDW) [18]. The ICDW appears below 540 K on cooling from the high temperature metallic state, with a transition from the \(P\overline{3}m\)1 crystal symmetry shown in Fig. 1(a) to the \(P\overline{3}\) structure shown in Fig. 1(b). This transition leads to displacements of Ta ions that gives rise to the well-known star of David motifs. Upon cooling from the normal, high temperature metallic state, systematic displacements of the transition metal Ta leads to a star of David formation consisting of 13 Ta ions, in-plane. Domains of these formations expand to a commensurate CDW phase on cooling. Distinct from other CDW systems, in 1T-TaS\({}_{2}\), the commensurate CDW state is accompanied by a metal-insulator (MI) transition that has been proposed to arise either from Mott localization or from disorder induced Anderson localization. Important to the MI behavior are the orbital ordering and out of plane correlations, as well as layer stacking order.
In the ICDW, the stars have limited ordering in-plane. Further cooling leads to the ICDW becoming NCCDW at T=350 K, where the \(\sqrt{13}\cdot\sqrt{13}\) structural modulation first appears with a \(12^{o}\) tilt relative to the original ab-plane. An expansion of the star of David motifs occurs in-plane [19]. Below 180 K, the \(\sqrt{13}\cdot\sqrt{13}\) structural modulation persists with a rotation of \(13.9^{o}\) relative to the plane while the CDW becomes commensurate. The steps in the CDW transitions coincide with the kinks observed in the transport [20] as the system goes from the metallic to the insulating state. On the other end of the phase diagram, in 1T-TaSe\({}_{2}\), the CCDW sets in at T \(\approx\) 430 K, and the system shows no MI transition. It remains metallic down to the lowest temperature according to transport data [21]. Between these two ends, superconductivity emerges upon doping that coexists with a broad NCCDW region in the phase diagram [18]. The coexistence of superconductivity with CDW domains has been observed in other TMDs such as in the 2H polytype and in other systems such as the cuprates [22; 23; 24; 25].
The electronic bands appear to undergo a continuous change with decreasing temperature in going though the many transition steps [26; 27; 28]. In the absence of high temperature angle resolved photoemission spectroscopy (ARPES) due to resolution, there is no apparent nesting of the Fermi surface and a CDW gap is not necessarily located at the \(\Gamma\) point. Measurements suggested that the gap appears elsewhere in k-space [7; 29]. The domain-like CDW structures of the NCCDW and ICDW states in 1T-TaS\({}_{2}\) are discommensurate and semi-metallic, but
when the CDW becomes commensurate, the Fermi surface disappears and either a Mott-Hubbard localization or a disorder induced Anderson localization sets in [29]. Across the NCCDW-CCDW boundary, the Fermi surface is continuously reduced. This effect is convoluted by d-electron localization that opens up an energy gap. In the CCDW phase, the gap is fully present, leading to a semiconducting state with about a 200 meV bandgap [7].
In the normal phase above 540 K, the Ta 5d band at the \(\Gamma\) point should be above E\({}_{F}\)[30]. As the system goes through the NCCDW phase, this band becomes visible with ARPES. Upon further cooling to the CCDW state, band folding is observed because of the smaller Brillouin zone between 180 and 160 K, and an abrupt energy shift occurs due to the opening of the energy gap [4]. The loss of the Fermi surface continues with further cooling while the CDW gap continues to grow. The first order transition seen in the transport at 180 K on cooling is most likely due to a Mott-Hubbard localization [31; 32]. On warming, a different behavior is observed where the resistivity exhibits a hysteresis, with its value dropping at 280 K, marking the CCDW-NCCDW transition. This has been attributed to changes in the c-axis stacking order [4; 33].
We report on the nature of the layer stacking order with temperature and doping. Earlier, we observed that the c-axis expands in the CCDW phase of 1T-TaS\({}_{2}\) on warming but drops at the crossover between the CCDW-NCCDW transition [34]. It has been suggested that the localization of the d-electrons that brings the gap in the electronic structure depends on the expansion of the c-axis [31; 35]. This in turn is related to the c-axis stacking order where changes in the interlayer coupling might drive the Mott transition. Neutron diffraction measurements confirmed the presence of 13c stacking order that disappears on warming across the CCDW-NCDW transition in 1T-TaS\({}_{2}\). The appearance of the 13c layer sequence is expected to drive the Mott localization. The 13c stacking sequence was previously suggested in Ref. [36] from X-ray diffraction data down to 80 K. This study extends the data down to 2 K. Moreover, from single crystal measurements, we previously identified a 3c layer stacking as well, that commences in the ICDW state and continues to grow through the NCCDW to CCDW crossover [37] It is possible that both the 3c and 13c coexist at low temperatures in 1T-TaS\({}_{2}\). Our single crystal data only reached 150 K. With doping, the neutron diffraction data clearly indicate that the 13c structure is suppressed. Its signature diffraction peak around 0.6 A is not observed with doping. At the same time, it is not clear what happens to the 3c stacking with doping. Further experiments using single crystals are underway to elucidate the doping dependence of the 3c order.
## II Results and discussion
Shown in Fig.1(a) is the hexagonal crystal structure of the high temperature undistorted lattice. Layers of the transition metal are separated by the chalcogen ion creating a quasi-2D lattice where weak interlayer interactions are expected due to the van der Waals nature of the forces holding the layers together. However, orbitals play an important role in this TMD and out of plane electron correlations lead to layer ordering and a gap in the density of states. The out-of-plane coupling is important to understand the electronic characteristics of these materials where band structure calculations suggested that opening a gap at the \(\Gamma\) point depends on the orbital order and out-of-plane stacking [31; 35]. Also shown in Fig.1(a) is the low temperature crystal structure in the CCDW phase where the high temperature cell has undergone a \(\sqrt{13}\cdot\sqrt{13}\) structural expansion and a rotation of 13.9\({}^{\rm o}\) relative to the primary axis. The star formation is the result of Ta displacements towards the middle Ta ion. The extent to which the star lattice spreads in the ab-plane depends on temperature. The star clusters expand on cooling giving rise to large domains in the CCDW state that are highly ordered, but become disordered on warming, breaking up into domains with star formations separated by regions of undistorted lattice. Three samples were measured using neutron scattering and the diffraction data are shown in Figs. 1(c) and 1(d). Powder samples of TaS\({}_{2}\), TaSSe and TaSe\({}_{2}\) were measured as a function of temperature. At 300 K, several superlattice reflections are indicated that might be due to 3c stacking structure. Similarly, at 2 K, the same superlattice reflections are observed as indicated. However, the intensity of the peaks is too small to be discerned from the powder diffraction data and single crystal experiments will help elucidate their presence.
The reciprocal lattice vector \(\mathbf{Q}\) was calculated using \(\mathbf{Q}=h\mathbf{a}_{0}^{*}+k\mathbf{b}_{0}^{*}+l\mathbf{c}^{*}+m_{1}\mathbf{q}^{1}+m_{2}\mathbf{q}^{2}\), following the formalism introduced in [19]. The modulation wave vectors \(\mathbf{q}^{1}=\sigma_{1}\mathbf{a}_{0}^{*}+\sigma_{2}\mathbf{b}_{0}^{*}\) and \(\mathbf{q}^{2}=-\sigma_{2}\mathbf{a}_{0}^{*}+(\sigma_{1}+\sigma_{2})\mathbf{b}_{0}^{*}\) for the CCDW phase were obtained from the commensurate wave vector parameters \(\sigma_{1}\) and \(\sigma_{2}\). The commensurate modulation wave vector can be written as \(\mathbf{q}_{cdw}=\sigma_{1}\mathbf{a}_{0}^{*}+\sigma_{2}\mathbf{b}_{0}^{*}\), and for the \(\sqrt{13}\cdot\sqrt{13}\) in-plane translation, the values of \(\sigma_{1}\) and \(\sigma_{2}\) are 0.2308 and 0.0769, respectively. In the NCCDW phase, \(\sigma_{1}\) and \(\sigma_{2}\) are 0.2448 and 0.0681, respectively. Shown in Fig. 1(e) is a plot of the diffraction pattern at very small momentum transfers, Q. At 5 K, a superlattice reflection belonging to the 13c ordering is observed in 1T-TaS\({}_{2}\). This reflection arises only from the 13c stacking order and not from the 3c order, even though most of the higher order reflections overlap between the two stacking models, as shown in Fig. 1(f). The calculated positions of the satellite peaks corresponding to the 3\(c\) stacking order (\(\mathbf{c}^{*}=\mathbf{c}_{0}^{*}/3\)) and the 13\(c\) stacking order (\(\mathbf{c}^{*}=\mathbf{c}_{0}^{*}/13\)) are shown. Also shown in Fig. 1(e) are data for TaSSe in the same region of momentum transfer. In the TaSSe
data, the reflection at \(\sim\) 0.6 Å\({}^{-1}\) is notably absent, which indicates that there is no 13c ordering in the superconducting state. A similar measurement was carried out for 1T-TaSe\({}_{2}\) and, even though the sample was not a single phase of 1T, no evidence for the 0.6 Å\({}^{-1}\) reflection was observed. This indicates that stacking order might only be present in 1T-TaS\({}_{2}\).
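For illustration, the in-plane modulation wave vectors quoted above can be evaluated numerically as follows; the hexagonal in-plane lattice constant used here (about 3.36 Å) is an assumed nominal value for 1T-TaS\({}_{2}\) and not a refined parameter of this work.

```python
import numpy as np

a = 3.36                                          # assumed in-plane lattice constant (Angstrom)
# 2D reciprocal basis with a_i . b_j = 2*pi*delta_ij for a1 = a(1,0), a2 = a(-1/2, sqrt(3)/2)
astar = (2 * np.pi / a) * np.array([1.0, 1.0 / np.sqrt(3.0)])
bstar = (2 * np.pi / a) * np.array([0.0, 2.0 / np.sqrt(3.0)])

def q_cdw(sigma1, sigma2):
    """In-plane modulation wave vector q = sigma1*a0* + sigma2*b0* (in 1/Angstrom)."""
    return sigma1 * astar + sigma2 * bstar

for phase, (s1, s2) in [("CCDW", (0.2308, 0.0769)), ("NCCDW", (0.2448, 0.0681))]:
    q = q_cdw(s1, s2)
    print(phase, "|q| = %.3f 1/Angstrom" % np.linalg.norm(q))
```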
The temperature and composition dependence of the Ta and S/Se thermal factors, \(\langle U\rangle^{2}\), lattice constants and unit cell volume are plotted in Fig. 2. As a function of composition, superconducting TaSSe has the largest thermal factor for the Ta ion that continues to increase on warming. Shown in Fig. 2(c) are the thermal factors for S and Se. Fig. 2(b) is a plot of the c/a ratio. In the case of TaSSe and TaSe\({}_{2}\), the c/a ratio is almost constant as a function of temperature which indicates that the unit cell expands uniformly in the a- and c-direction. However, in TaS\({}_{2}\), the ratio drops between 200 and 300 K because of the contraction of the c-lattice constant as previously observed in our earlier study [34] and by others [38]. The contraction of the c-axis corresponds to the transition from the commensurate to the nearly commensurate state. We observed that this transition is coupled to the disappearance of the 13c ordering. Shown in Fig. 2(d) is the unit cell volume for the three compositions as a function of temperature.
Fig. 3 shows the results from the pair density function (PDF) analysis of the Ta displacements from the local structure at 2 K. The local structure is obtained by Fourier transforming the diffraction data shown in Fig. 1, to obtain the pair correlation function, G(r). Fitting of the G(r) with a local model results in the distortions shown in the table of Fig. 3(a). Local Ta distortions are listed for the 12 Ta ions shown in 3(b). The 13th center Ta ion does not move by symmetry. This indicates that even after the transition from the \(P\overline{3}m1\) to the \(P\overline{3}\) symmetry, locally the stars are distorted due to displacements of Ta in the directions shown with the arrows in the star lattice on the right. Moreover, the Ta ions are not all displaced in a symmetric way. This implies that the local trigonal symmetry is broken but that there is, nonetheless, long-range order of the star of David motifs in-plane. Similar distortions were observed in all three compositions with the results listed for 1T-TaS\({}_{2}\).
Figure 1: (**a**) The high-temperature and low temperature crystal structure of 1T-TaX\({}_{2}\). The lattice symmetry is the trigonal \(P\overline{3}m1\) at high temperature which becomes \(P\overline{3}\) at low temperatures. The star is the result of the Ta displacements. (**b**) A plot of the diffraction pattern at low temperatures showing the presence of the 13c superlattice in 1T-TaS\({}_{2}\) although absent in 1T-TaSSe. The calculated peak positions of 1T-TaSSe corresponding to 3c and 13c stacking order is shown at the bottom. (**c**) The neutron powder diffraction data collected at 300 K compared among 1T-TaS\({}_{2}\), 1T-TaSSe and 1T-TaSe\({}_{2}\). All data are fit well using the \(P\overline{3}\) symmetry. The diffraction peaks shift to the left with doping because Se is nominally a larger ion than S. (**d**) The neutron powder diffraction data collected at 2 K are shown. The arrows mark the positions of the CDW superlattice reflections.(**e**) The diffraction data plotted at very low Q indicate a superlattice peak corresponding to the 13c stacking order present in 1T-TaS\({}_{2}\). (**f**) A plot of the expected positions of 13c and 3c stacking order.
Figure 3: (**a**) A list of the Ta distortions obtained from fitting the local atomic structure. The Ta atoms make up the star of David. The center Ta ion does not move by symmetry.
Figure 2: (**a**) The Ta atomic displacement \(\langle U\rangle^{2}\) is shown as a function of temperature for the three compositions. Of the three, superconducting TaSSe shows the largest thermal factors. Shown in (**c**) are the S and Se thermal factors for the three compositions. In (**b**) is a plot of the c/a ratio for the three compositions and in (**d**) is a plot of the unit cell volume. The c/a ratio in 1T-TaS\({}_{2}\) shows a decline on warming past 200 K.
## III Materials and Methods
Powders were prepared using solid-state reaction. The neutron powder diffraction measurements were performed to investigate the structure through the multiple CDW steps. The time-of-flight (TOF) neutron measurements were carried out at the Nanoscale Ordered Materials Diffractometer (NOMAD/BL-1B) and at SEQUOIA (BL-17), a direct geometry spectrometer, at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL) at temperatures ranging from 1.8 to 500 K. The aluminium can was used for SEQUOIA measurements and the empty can data were subtracted from the data. The reason SEQUOIA was used is that it reaches very small momentum transfers, not accessible to NOMAD. The diffraction data from NOMAD were analyzed using the Rietveld refinement to obtain the unit cell parameters characterizing the crystal structure [39], resulting in what is referred to as the average model. The pair density function (PDF) analysis [40; 41] provides information on the local arrangement of atoms in real space without the assumption of periodicity. It was performed on the same neutron diffraction data as the ones used for the Rietveld refinement. NOMAD is a diffractometer with a large bandwidth of momentum transfer \(Q\), and it provides the total structure function \(S(Q)\). The \(S(Q)\) was Fourier transformed into real-space to obtain the \(G(r)\)[42; 43]. The instrument background and empty sample container were subtracted from the \(S(Q)\) and the data were normalized by a vanadium rod. A maximum \(Q\) of 40 A\({}^{-1}\) was used.
## IV Conclusions
The formation of heterolayer stacking can be engineered to enable new behaviors and new properties. For instance, it has been theoretically proposed that stacking of the honeycomb ferromagnet CrI\({}_{3}\) has the potential to give rise to ferroelectricity [44]. Moreover, stacking in moiré superlattices can create polar domains because of local spontaneous polarization [45]. Hexagonal boron nitride was shown to exhibit ferroelectric switching in bilayers, leading to new concepts for functional heterostructures [46]. Similarly, ferromagnetic heterostructures were demonstrated by stacking non-magnetic WS\({}_{2}\) with antiferromagnetic FePS\({}_{3}\). At the interface, the FePS\({}_{3}\) shows ferromagnetism [47].
Layer stacking in homostructure TMDs may be similarly linked to the transport behavior. 1T-TaS\({}_{2}\) has an insulating CDW in contrast to other CDW dichalcogenides that have metallic CDWs. The reason for this is linked to Mott-Hubbard electron-electron correlations. Every star of David contributes one _5d_ electron to a half-filled narrow conduction band. In the 13c layer stacking, there is an odd number of electrons, and in the presence of large Coulomb repulsion acting on the layers, the Mott-Hubbard transition occurs [29]. Density functional theory (DFT) calculations [33] have shown that the insulating phase and the MI transition originate not from the 2D order of the stars of David but from the vertical order. Our results confirm the significance of the interlayer coupling for the insulating property. Interlayer stacking order in the CCDW phase has been verified to correspond to a 13c repeat unit cell. This result contradicts the notion that the stacking is partially disordered in the CCDW state. Hence it is less likely that Anderson localization drives the MI transition. This result also contradicts the bilayer stacking model.
## V Acknowledgements
A portion of this research used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by Oak Ridge National Laboratory. We thank Dr. John Schneeloch (University of Virginia) for valuable inputs on the sample growth and Dr. Utpal Chatterjee for valuable discussions about the ARPES data.
|
2309.10169 | Picard groups of quasi-Frobenius algebras and a question on Frobenius
strongly graded algebras | Our initial aim was to answer the question: does the Frobenius (symmetric)
property transfers from a strongly graded algebra to its homogeneous component
of trivial degree? Related to it, we investigate invertible bimodules and the
Picard group of a finite dimensional quasi-Frobenius algebra $R$. We compute
the Picard group, the automorphism group and the group of outer automorphisms
of a $9$-dimensional quasi-Frobenius algebra which is not Frobenius,
constructed by Nakayama. Using these results and a semitrivial extension
construction, we give an example of a symmetric strongly graded algebra whose
trivial homogeneous component is not even Frobenius. We investigate
associativity of isomorphisms $R^*\ot_RR^*\simeq R$ for quasi-Frobenius
algebras $R$, and we determine the order of the class of the invertible
bimodule $H^*$ in the Picard group of a finite dimensional Hopf algebra $H$. As
an application, we construct new examples of symmetric algebras. | Sorin Dascalescu, Constantin Nastasescu, Laura Nastasescu | 2023-09-18T21:35:39Z | http://arxiv.org/abs/2309.10169v2 | # Picard groups of quasi-Frobenius algebras and a question on Frobenius strongly graded algebras
###### Abstract.
Our initial aim was to answer the question: does the Frobenius (symmetric) property transfers from a strongly graded algebra to its homogeneous component of trivial degree? Related to it, we investigate invertible bimodules and the Picard group of a finite dimensional quasi-Frobenius algebra \(R\). We compute the Picard group, the automorphism group and the group of outer automorphisms of a \(9\)-dimensional quasi-Frobenius algebra which is not Frobenius, constructed by Nakayama. Using these results and a semitrivial extension construction, we give an example of a symmetric strongly graded algebra whose trivial homogeneous component is not even Frobenius. We investigate associativity of isomorphisms \(R^{*}\otimes_{R}R^{*}\simeq R\) for quasi-Frobenius algebras \(R\), and we determine the order of the class of the invertible bimodule \(H^{*}\) in the Picard group of a finite dimensional Hopf algebra \(H\). As an application, we construct new examples of symmetric algebras.
2020 MSC: 16D50, 16D20, 16L60, 16S99, 16T05, 16W50
Key words: quasi-Frobenius algebra, Frobenius algebra, symmetric algebra, invertible bimodule, Picard group, strongly graded algebra, Hopf algebra, Nakayama automorphism.
## 1. Introduction and preliminaries
A finite dimensional algebra \(A\) over a field \(K\) is called Frobenius if \(A\simeq A^{*}\) as left (or equivalently, as right) \(A\)-modules. If \(A\) satisfies the stronger condition that \(A\simeq A^{*}\) as \(A\)-bimodules, then \(A\) is called a symmetric algebra. Frobenius algebras and symmetric algebras occur in algebra, geometry, topology and quantum theory, and they have a rich representation theory, which is relevant for all these branches of mathematics. A general problem is whether a certain ring property transfers from an algebra on which a Hopf algebra (co)acts to the subalgebra of (co)invariants; of special interest is the situation where the (co)action produces a Galois extension. Particular cases of high relevance are: (1) Algebras \(A\) on which a group \(G\) acts as automorphisms, and the transfer of properties to the subalgebra \(A^{G}\) of invariants; (2) Algebras \(A\) graded by a group \(G\), and the transfer of properties to the homogeneous component of trivial degree. In the second case, such an \(A\) is in fact a comodule algebra over the Hopf group algebra \(KG\), and the subalgebra of coinvariants is just the component of trivial degree; moreover, the associated extension is \(KG\)-Hopf-Galois if and only if \(A\) is strongly graded.
Our initial aim was to answer the following.
**Question 1**.: _If \(A=\oplus_{g\in G}A_{g}\) is a strongly \(G\)-graded algebra, where \(G\) is a group with neutral element \(e\), and \(A\) is Frobenius (symmetric), does it follow that the subalgebra \(A_{e}\) is Frobenius (symmetric)?_
There is an interesting alternative way to formulate this question for the Frobenius property. Frobenius algebras in the monoidal category of \(G\)-graded vector spaces were considered in [4], where they were called graded Frobenius algebras. Such objects and a shift version of them
occur in noncommutative geometry, for example as Koszul duals of certain Artin-Schelter regular algebras, and also in the theory of Calabi-Yau algebras. A \(G\)-graded algebra \(A\) is graded Frobenius if \(A\simeq A^{*}\) as graded left \(A\)-modules, where \(A^{*}\) is provided with a standard structure of such an object. Obviously, if \(A\) is graded Frobenius, then it is a Frobenius algebra, while the converse is not true in general. If \(A\) is strongly graded, then \(A\) is graded Frobenius if and only if \(A_{e}\) is Frobenius, see [4, Corollary 4.2]. Thus the question above can be also formulated as: If \(A\) is a strongly graded algebra which is Frobenius, is it necessarily graded Frobenius?
Question 1 cannot be reformulated in a similar way for the symmetric property. As \(KG\) is a cosovereign Hopf algebra with respect to its counit, a concept of symmetric algebra can be defined in its category of corepresentations, i.e., in the monoidal category of \(G\)-graded vector spaces; the resulting objects are called graded symmetric algebras. As expected, \(A\) is graded symmetric if \(A\simeq A^{*}\) as graded \(A\)-bimodules. If \(A\) is strongly graded, then \(A_{e}\) is symmetric whenever \(A\) is graded symmetric, however the converse is not true, see [4, Remark 5.3]. This shows that Question 1 is not equivalent to asking whether a symmetric strongly graded algebra is graded symmetric, nevertheless this other question is also of interest.
The transfer of the Frobenius property from the strongly graded algebra \(A\) to \(A_{e}\) works well under additional conditions, for example if \(A\) is free as a left and as a right \(A_{e}\)-module, in particular if \(A\) is a crossed product of \(A_{e}\) by \(G\), see [5]. If \(A\) is Frobenius, then it is left (and right) self-injective, and then so is \(A_{e}\); this means that \(A_{e}\) is a quasi-Frobenius algebra. Thus a possible example answering Question 1 in the negative should be built on a quasi-Frobenius algebra which is not Frobenius. Moreover, by Dade's Theorem, each homogeneous component of the strongly graded algebra \(A\) is an invertible \(A_{e}\)-bimodule, see [11], suggesting a study of the Picard group \(\operatorname{Pic}(A_{e})\) of \(A_{e}\). In Section 2 we look at invertible bimodules over a finite dimensional quasi-Frobenius algebra \(R\). For such an \(R\), an object of central interest is the linear dual \(R^{*}\) of the regular bimodule \(R\); we show that it is an invertible \(R\)-bimodule. In the case where \(R\) is Frobenius, \(R^{*}\) is isomorphic to a deformation of the regular bimodule \(R\), with the right action modified by the Nakayama automorphism \(\nu\) of \(R\) with respect to a Frobenius form. It follows that the order of the class \([R^{*}]\) of \(R^{*}\) in \(\operatorname{Pic}(R)\) is just the order of the class of \(\nu\) in the group \(\operatorname{Out}(R)\) of outer automorphisms of \(R\). If \(R\) is not Frobenius, then \(R^{*}\) cannot be obtained from \(R\) by deforming the right action by an automorphism, or in other words, \([R^{*}]\) does not lie in the image of \(\operatorname{Out}(R)\), and we show that it lies in the centralizer of \(\operatorname{Out}(R)\). We compute the order of \([R^{*}]\) in \(\operatorname{Pic}(R)\) for: (1) liftings of certain Hopf algebras in the braided category of Yetter-Drinfeld modules, called quantum lines, over the group Hopf algebra of a finite abelian group; (2) certain quotients of quantum planes. This order may be any positive integer, as well it can be infinite. It is known that a finite dimensional Hopf algebra is Frobenius. In this case we prove the following.
**Theorem A.**_Let \(H\) be a finite dimensional Hopf algebra with antipode \(S\). Then the order of \([H^{*}]\) in \(\operatorname{Pic}(H)\) is the least common multiple of the order of the class of \(S^{2}\) in \(\operatorname{Out}(H)\) and the order of the modular element of \(H^{*}\) in the group of grouplike elements of \(H^{*}\)._
As a particular case, one gets a well-known characterization of symmetric finite dimensional Hopf algebras, as those unimodular Hopf algebras such that \(S^{2}\) is inner.
In Section 3 we consider an algebra of dimension 9 which is quasi-Frobenius, but not Frobenius, and we investigate its structure and determine its Picard group. This algebra was introduced by Nakayama in [9] in a matrix presentation, see also [7, Example 16.19.(5)]. We use a different presentation given in [6]. Let \(\mathcal{R}\) be the \(K\)-algebra with basis \(\mathbf{B}=\{E,X_{1},X_{2},Y_{1},Y_{2}\}\cup\{F_{ij}|1\leq i,j\leq 2\}\), and relations
\[E^{2}=E, F_{ij}F_{jr}=F_{ir}\] \[EX_{i}=X_{i}, X_{i}F_{ir}=X_{r}\] \[F_{ij}Y_{j}=Y_{i}, Y_{i}E=Y_{i}\]
for any \(1\leq i,j,r\leq 2\), and any other product of two elements of \(\mathbf{B}\) is zero. We show that any invertible \(\mathcal{R}\)-bimodule is either a deformation of \(\mathcal{R}\) or one of \(\mathcal{R}^{*}\) by an automorphism of \(\mathcal{R}\), and we have an exact sequence
\[1\rightarrow\mathrm{Inn}(\mathcal{R})\rightarrow\mathrm{Aut}(\mathcal{R}) \rightarrow\mathrm{Pic}(\mathcal{R})\to C_{2}\to 1,\]
where \(C_{2}\) is the cyclic group of order \(2\). If \(V\) is an \(\mathcal{R}\)-bimodule, and \(\alpha\) is an automorphism of \(\mathcal{R}\), we denote by \({}_{1}V_{\alpha}\) the bimodule obtained from \(V\) by changing the right action via \(\alpha\). We collect the conclusions of this section in:
**Theorem B.**_There is an isomorphism of \(\mathcal{R}\)-bimodules \(\varphi:\mathcal{R}^{*}\otimes_{\mathcal{R}}\mathcal{R}^{*}\rightarrow \mathcal{R}\), thus \([\mathcal{R}^{*}]\) has order \(2\) in \(\mathrm{Pic}(\mathcal{R})\). An invertible \(\mathcal{R}\)-bimodule is isomorphic either to \({}_{1}\mathcal{R}_{\alpha}\) or to \({}_{1}\mathcal{R}^{*}{}_{\alpha}\) for some \(\alpha\in\mathrm{Aut}(\mathcal{R})\), and \(\mathrm{Pic}(\mathcal{R})\simeq\mathrm{Out}(\mathcal{R})\times C_{2}\)._
In Section 4 we compute the automorphism group \(\mathrm{Aut}(\mathcal{R})\) and the group \(\mathrm{Out}(\mathcal{R})\) of outer automorphisms. For this aim, we use another presentation of \(\mathcal{R}\), given in [6]. Thus \(\mathcal{R}\) is isomorphic to the Morita ring associated with a Morita context connecting the rings \(K\) and \(M_{2}(K)\), where the connecting bimodules are \(K^{2}\) and \(M_{2,1}(K)\) with actions given by the usual matrix multiplication, and such that both Morita maps are zero. Thus \(\mathcal{R}\) is isomorphic as a linear space to the matrix algebra \(M_{3}(K)\), but its multiplication is altered by collapsing the product of the off diagonal blocks. We prove:
**Theorem C.**\(\mathrm{Aut}(\mathcal{R})\) _is isomorphic to a semidirect product \((K^{2}\times M_{2,1}(K))\rtimes(K^{*}\times GL_{2}(K))\), and \(\mathrm{Out}(\mathcal{R})\simeq K^{*}\)._
We explicitly describe the automorphisms and the outer automorphisms. Comparing to the matrix algebra \(M_{3}(K)\), where there are no outer automorphisms, the alteration of the multiplication produces non-trivial outer automorphisms of \(\mathcal{R}\). As a consequence of Theorems B and C, wee see that \(\mathrm{Pic}(\mathcal{R})\simeq K^{*}\times C_{2}\).
In Section 5 we consider an arbitrary finite dimensional algebra \(R\) and a morphism of \(R\)-bimodules \(\psi:R^{*}\otimes_{R}R^{*}\to R\) which is associative, i.e., \(\psi(r^{*}\otimes_{R}s^{*})\leftharpoonup t^{*}=r^{*}\rightharpoonup\psi(s^{*} \otimes_{R}t^{*})\) for any \(r^{*},s^{*},t^{*}\in R^{*}\); here \(\rightharpoonup\) and \(\leftharpoonup\) denote the usual left and right actions of \(R\) on \(R^{*}\). Then we can form the semitrivial extension \(R\rtimes_{\psi}R^{*}\), which is the cartesian product \(R\times R^{*}\) with the usual addition, and multiplication defined by
\[(r,r^{*})(s,s^{*})=(rs+\psi(r^{*}\otimes_{R}s^{*}),(r\rightharpoonup s^{*})+(r ^{*}\leftharpoonup s))\]
for any \(r,s\in R,r^{*},s^{*}\in R^{*}\). It has a structure of a \(C_{2}\)-graded algebra with \(R\) as the homogeneous component of trivial degree. We prove:
**Proposition A.**\(R\rtimes_{\psi}R^{*}\) _is a symmetric algebra._
If \(\psi=0\), we get a well-known construction of Tachikawa, see [7]; in this case \(R\) may be any finite dimensional algebra. If \(\psi\) is an isomorphism, which implies that \(R^{*}\) is invertible
(therefore \(R\) must be quasi-Frobenius), then \(R\rtimes_{\varphi}R^{*}\) is a strongly \(C_{2}\)-graded algebra. Next we show that the isomorphism \(\varphi:\mathcal{R}^{*}\otimes_{\mathcal{R}}\mathcal{R}^{*}\to\mathcal{R}\) constructed in Theorem B is associative, and we conclude that the strongly \(C_{2}\)-graded algebra \(\mathcal{R}\rtimes_{\varphi}\mathcal{R}^{*}\) is symmetric, thus also Frobenius, while its component of trivial degree is not Frobenius. This answers in the negative Question 1, for both Frobenius and symmetric properties. It also answers the other question related to the symmetric property, since \(\mathcal{R}\rtimes_{\varphi}\mathcal{R}^{*}\) is symmetric, but it is not graded symmetric as its homogeneous component of degree \(e\) is not symmetric. Besides producing the large class of symmetric algebras presented above, the semitrivial extension construction is of interest by itself, at least taking into account the wealth of results of interest concerning trivial extensions, i.e., those associated to zero morphisms \(\psi\).
In Section 6 we address the following.
**Question 2.**_If \(R\) is a finite dimensional algebra such that \(R^{*}\otimes_{R}R^{*}\simeq R\) as \(R\)-bimodules, is it true that any isomorphism \(\psi:R^{*}\otimes_{R}R^{*}\to R\) is associative?_
We show that the answer depends only on the algebra, and not on a particular choice of the isomorphism, and we answer in the positive in the Frobenius case, proving the following.
**Proposition B.**_Let \(R\) be a Frobenius algebra such that \([R^{*}]\) has order at most \(2\) in \(\operatorname{Pic}(R)\). Then any isomorphism \(\psi:R^{*}\otimes_{R}R^{*}\to R\) is associative._
As a consequence, for any Frobenius algebra \(R\) such that \([R^{*}]\) has order \(2\) in \(\operatorname{Pic}(R)\), we can construct a semitrivial extension which is a strongly \(C_{2}\)-graded algebra and also symmetric as an algebra, and has \(R\) as the homogeneous component of trivial degree. Thus we give more examples answering in the negative Question 1 for the symmetric property. We present several classes of algebras \(R\) enjoying these properties. Among them, we note that for any finite dimensional unimodular Hopf algebra \(H\), \([H^{*}]\) has order at most \(2\) in \(\operatorname{Pic}(H)\).
We work over a field \(K\). We refer to [7], [8] and [16] for facts related to (quasi)-Frobenius algebras and symmetric algebras, to [11] for results about graded rings, and to [15] for basic notions about Hopf algebras. We recall that if \(G\) is a group with neutral element \(e\), an algebra \(A\) is \(G\)-graded if it has a decomposition \(A=\oplus_{g\in G}A_{g}\) as a direct sum of linear subspaces such that \(A_{g}A_{h}\subset A_{gh}\) for any \(g,h\in G\); in particular, \(A_{e}\) is a subalgebra of \(A\). Such an \(A\) is called strongly graded if \(A_{g}A_{h}=A_{gh}\) for any \(g,h\in G\).
## 2. Quasi-Frobenius algebras and invertible bimodules
We recall from [2] some basic facts concerning invertibile bimodules and the Picard group. Let \(R\) be an algebra over a field \(K\). An \(R\)-bimodule \(P\) is called invertible if it satisfies one of the following equivalent conditions: (1) There exists a bimodule \(Q\) such that \(P\otimes_{R}Q\) and \(Q\otimes_{R}P\) are isomorphic to \(R\) as bimodules; (2) The functor \(P\otimes_{R}-:R-mod\to R-mod\) is an equivalence of categories; (3) \(P\) is a finitely generated projective generator as a left \(R\)-module, and the map \(\omega:R\to\operatorname{End}(_{R}P)\), \(\omega(r)(p)=pr\) for any \(r\in R,p\in P\) is a ring isomorphism.
We keep the usual convention that the multiplication in \(\operatorname{End}(_{R}P)\) is the composition of maps in reverse order, i.e., \(fg=g\circ f\). The set of isomorphism types of invertible \(R\)-bimodules is a group with multiplication defined by \([U]\cdot[V]=[U\otimes_{R}V]\), where \([U]\) denotes the class of the bimodule \(U\) with respect to the isomorphism equivalence relation. This group is called the Picard group of \(R\), and it is denoted by \(\operatorname{Pic}(R)\).
If \(V\) is an \(R\)-bimodule and \(\alpha,\beta\) are elements in the group \(\operatorname{Aut}(R)\) of algebra automorphisms of \(R\), we denote by \({}_{\alpha}V_{\beta}\) the bimodule with the same underlying space as \(V\), and left and right actions defined by \(r*v=\alpha(r)v\) and \(v*r=v\beta(r)\) for any \(v\in V\) and \(r\in R\). The following facts hold for any \(\alpha,\beta,\gamma\in\operatorname{Aut}(R)\). All isomorphisms are of \(R\)-bimodules, and \(1\) denotes the identity morphism.
* \({}_{\gamma\alpha}R_{\gamma\beta}\simeq{}_{\alpha}R_{\beta}\), in particular \({}_{\alpha}R_{\beta}\simeq{}_{1}R_{\alpha^{-1}\beta}\).
* \({}_{1}R_{\alpha}\otimes_{R}{}_{1}R_{\beta}\simeq{}_{1}R_{\alpha\beta}\), thus \({}_{1}R_{\alpha}\) is invertible, and \([{}_{1}R_{\alpha}]^{-1}=[{}_{1}R_{\alpha^{-1}}]\).
* \({}_{1}R_{\alpha}\simeq{}_{1}R_{\beta}\) if and only if \(\alpha\beta^{-1}\) is an inner automorphism of \(R\), i.e., there exists an invertible element \(u\in R\) such that \(\alpha\beta^{-1}(r)=u^{-1}ru\) for any \(r\in R\). Denote by \(\operatorname{Inn}(R)\) the group of inner automorphisms of \(R\). In particular, \({}_{1}R_{\alpha}\simeq R\) if and only if \(\alpha\in\operatorname{Inn}(R)\), thus there is an exact sequence of groups \(0\to\operatorname{Inn}(R)\hookrightarrow\operatorname{Aut}(R)\to\operatorname {Pic}(R)\), the last morphism in the sequence taking \(\alpha\) to \({}_{1}R_{\alpha}\). The factor group \(\operatorname{Aut}(R)/\operatorname{Inn}(R)\), denoted by \(\operatorname{Out}(R)\), is called the group of outer automorphisms of \(R\), and it embeds into \(\operatorname{Pic}(R)\).
* \({}_{\alpha}V_{\beta}\simeq{}_{\alpha}R_{1}\otimes_{R}V\otimes_{R}{}_{1}R_{\beta}\).
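For illustration, the isomorphism in the second item above can be described explicitly: one checks that the map \({}_{1}R_{\alpha}\otimes_{R}{}_{1}R_{\beta}\to{}_{1}R_{\alpha\beta}\), \(r\otimes s\mapsto r\alpha(s)\), is a well-defined morphism of \(R\)-bimodules, with inverse \(r\mapsto r\otimes 1\).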
We will also need the following.
**Proposition 2.1**.: ([2, page 73]) _Let \(U\) and \(V\) be invertible \(R\)-bimodules such that \(U\simeq V\) as left \(R\)-modules. Then there exists \(\alpha\in\operatorname{Aut}(R)\) such that \(U\simeq{}_{1}V_{\alpha}\) as \(R\)-bimodules._
Now let \(V\) be a bimodule over the \(K\)-algebra \(R\). Then the linear dual \(V^{*}=\operatorname{Hom}_{K}(V,K)\) is an \(R\)-bimodule with actions denoted by \(\rightharpoonup\) and \(\leftharpoonup\), given by \((r\rightharpoonup v^{*})(v)=v^{*}(vr)\) and \((v^{*}\leftharpoonup r)(v)=v^{*}(rv)\) for any \(r\in R,v^{*}\in V^{*},v\in V\). One can easily check that \(({}_{\alpha}V_{\beta})^{*}={}_{\beta}(V^{*})_{\alpha}\) for any \(\alpha,\beta\in\operatorname{Aut}(R)\). If \(V\) is finite dimensional, then \((V^{*})^{*}\simeq V\), and this shows that two finite dimensional bimodules \(V\) and \(W\) are isomorphic if and only if so are their duals \(V^{*}\) and \(W^{*}\).
We are interested in a particular bimodule, namely \(R^{*}\), the dual of \(R\). Some immediate consequences of the discussion above are that for any \(\alpha,\beta,\gamma\in\operatorname{Aut}(R)\):
* \({}_{\gamma\alpha}(R^{*})_{\gamma\beta}\simeq{}_{\alpha}(R^{*})_{\beta}\), in particular \({}_{\alpha}(R^{*})_{\beta}\simeq{}_{1}(R^{*})_{\alpha^{-1}\beta}\). Indeed, \({}_{\gamma\alpha}(R^{*})_{\gamma\beta}\simeq({}_{\gamma\beta}R_{\gamma\alpha })^{*}\simeq({}_{\beta}R_{\alpha})^{*}\simeq{}_{\alpha}(R^{*})_{\beta}\).
* If \(R\) has finite dimension, then \({}_{1}(R^{*})_{\alpha}\simeq{}_{1}(R^{*})_{\beta}\) if and only if \(\alpha^{-1}\beta\in\operatorname{Inn}(R)\). Indeed, \({}_{1}(R^{*})_{\alpha}\) and \({}_{1}(R^{*})_{\beta}\) are isomorphic if and only if so are their duals, i.e., \({}_{\alpha}R_{1}\simeq{}_{\beta}R_{1}\), which is the same as \({}_{1}R_{\alpha^{-1}}\simeq{}_{1}R_{\beta^{-1}}\), i.e., \(\alpha^{-1}\beta\in\operatorname{Inn}(R)\). Since \(\operatorname{Inn}(R)\) is a normal subgroup of \(\operatorname{Aut}(R)\), this is also equivalent to \(\beta\alpha^{-1}\in\operatorname{Inn}(R)\).
The following holds for any finite dimensional algebra.
**Proposition 2.2**.: _Let \(R\) be a finite dimensional algebra. Then the map \(\omega:R\to\operatorname{End}({}_{R}R^{*})\) defined by \(\omega(a)(r^{*})=r^{*}\leftharpoonup a\) for any \(r^{*}\in R^{*}\) and \(a\in R\), is an isomorphism of algebras._
Proof.: It is easy to check that \(\omega\) is well defined and it is an algebra morphism.
If \(\omega(a)=0\) for some \(a\), then \(r^{*}\leftharpoonup a=0\) for any \(r^{*}\in R^{*}\), and evaluating at \(1\), we get \(r^{*}(a)=0\). Thus \(a\) must be \(0\), so \(\omega\) is injective.
To check that \(\omega\) is surjective, let \(f\in\operatorname{End}(_{R}R^{*})\), and let \(\theta:R^{*}\to K,\theta(r^{*})=f(r^{*})(1)\). Then \(\theta\in R^{**}\), and as \(R\simeq R^{**}\), there is \(r\in R\) such that \(f(r^{*})(1)=\theta(r^{*})=r^{*}(r)\) for any \(r^{*}\in R^{*}\). Then for any \(r^{*}\in R^{*}\) and \(s\in R\) one has
\[f(r^{*})(s) = f(r^{*})(1\cdot s)\] \[= (s\rightharpoonup f(r^{*}))(1)\] \[= f(s\rightharpoonup r^{*})(1)\] \[= (s\rightharpoonup r^{*})(r)\] \[= r^{*}(rs)\] \[= (r^{*}\leftharpoonup r)(s),\]
showing that \(f(r^{*})=r^{*}\leftharpoonup r\), i.e., \(f=\omega(r)\).
Let \(R\) be a finite dimensional algebra. We recall that \(R\) is called quasi-Frobenius if it is injective as a left (or equivalently, right) \(R\)-module. It is known that \(R\) is quasi-Frobenius if and only if the left \(R\)-modules \(R\) and \(R^{*}\) have the same distinct indecomposable components (possibly occurring with different multiplicities), see [7, Section 16C]. Therefore a Frobenius algebra is always quasi-Frobenius.
**Corollary 2.3**.: _Let \(R\) be a finite dimensional algebra. Then \(R^{*}\) is an invertible \(R\)-bimodule if and only if \(R\) is a quasi-Frobenius algebra._
Proof.: If \(R^{*}\) is an invertible bimodule, then it is projective as a right \(R\)-module, so then its linear dual \((R^{*})^{*}\) is an injective left \(R\)-module. But \((R^{*})^{*}\simeq R\) as left \(R\)-modules, and we get that \(R\) is left selfinjective.
Conversely, assume that \(R\) is quasi-Frobenius. Since \(R\) is an injective right \(R\)-module, we get that \(R^{*}\) is a projective left \(R\)-module. On the other hand, since the left \(R\)-modules \(R\) and \(R^{*}\) have the same distinct indecomposable components, we see that there is an epimorphism \((R^{*})^{n}\to R\) for a large enough positive integer \(n\), thus \(R^{*}\) is a generator as a left \(R\)-module. If we also take into account Proposition 2.2, we get that \(R^{*}\) is invertible.
If \(R\) is Frobenius, then an element \(\lambda\in R^{*}\) such that \(R\rightharpoonup\lambda=R^{*}\), is called a Frobenius form on \(R\); in this case, the map \(a\mapsto a\rightharpoonup\lambda\) is an isomorphism of left \(R\)-modules between \(R\) and \(R^{*}\), and also, the map \(a\mapsto\lambda\leftharpoonup a\) is an isomorphism of right \(R\)-modules from \(R\) to \(R^{*}\). The Nakayama automorphism of \(R\) associated with a Frobenius form \(\lambda\) is the map \(\nu:R\to R\) defined such that for any \(a\in R\), \(\nu(a)\) is the unique element of \(R\) satisfying \(\nu(a)\rightharpoonup\lambda=\lambda\leftharpoonup a\), or equivalently, \(\lambda(ar)=\lambda(r\nu(a))\) for any \(r\in R\); \(\nu\) turns out to be an algebra automorphism. If \(\nu\) and \(\nu^{\prime}\) are Nakayama automorphisms associated with two Frobenius forms, then there exists an invertible element \(u\in R\) such that \(\nu^{\prime}(a)=u^{-1}\nu(a)u\) for any \(a\in R\), thus \(\nu\) and \(\nu^{\prime}\) are equal up to an inner automorphism. It follows that the class of a Nakayama automorphism in \(\operatorname{Out}(R)\) does not depend on the Frobenius form; see [7, Section 16E], [8, Section 2.2] or [16, Chapter IV] for details.
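A standard illustration of these notions: for \(R=M_{n}(K)\), the trace map \(\lambda=\operatorname{tr}\) is a Frobenius form, and since \(\operatorname{tr}(ar)=\operatorname{tr}(ra)\) for all \(a,r\in R\), the associated Nakayama automorphism is the identity; thus \(M_{n}(K)\) is a symmetric algebra.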
If the quasi-Frobenius algebra \(R\) is not Frobenius, \(R^{*}\) is not isomorphic to any \({}_{1}R_{\alpha}\), as \(R^{*}\) is not isomorphic to \(R\) as left \(R\)-modules. In the Frobenius case, we have the following result; it appears in an equivalent formulation in [16, Proposition 3.15].
**Proposition 2.4**.: _Let \(R\) be a Frobenius algebra. Then there exists \(\nu\in\operatorname{Aut}(R)\) such that \(R^{*}\simeq{}_{1}R_{\nu}\) as bimodules. Moreover, any such \(\nu\) is the Nakayama automorphism of \(R\) associated with a Frobenius form. As a consequence, the order of \([R^{*}]\) in \(\operatorname{Pic}(R)\) is equal to the order of the class of \(\nu\) in \(\operatorname{Out}(R)\)._
Proof.: The first part follows directly from Proposition 2.1, since \(R^{*}\simeq R\) as left \(R\)-modules.
Let \(\gamma:{}_{1}R_{\nu}\to R^{*}\) be an isomorphism of bimodules, and let \(\lambda=\gamma(1)\). Then \(R\rightharpoonup\lambda=R^{*}\), so \(\lambda\) is a Frobenius form on \(R\). Then for any \(a,x\in R\)
\[(\lambda\leftharpoonup a)(x) = (\gamma(1)\leftharpoonup a)(x)\] \[= \gamma(1\cdot\nu(a))(x)\] \[= \gamma(\nu(a)\cdot 1)(x)\] \[= (\nu(a)\rightharpoonup\gamma(1))(x)\] \[= (\nu(a)\rightharpoonup\lambda)(x),\]
showing that \(\lambda\leftharpoonup a=\nu(a)\rightharpoonup\lambda\), thus \(\nu\) is the Nakayama automorphism associated with \(\lambda\).
Looking inside the Picard group, the previous Proposition gives a new perspective on the well-known fact that a Frobenius algebra is symmetric if and only if the Nakayama automorphism is inner, see [7, Theorem 16.63]. Indeed, \(R\) is symmetric if and only if \(R^{*}\simeq R\) as bimodules, i.e., \({}_{1}R_{\nu}\simeq R\), and this is equivalent to \(\nu\) being inner.
The following indicates a commutation property of the class of \(R^{*}\) in the Picard group of \(R\).
**Proposition 2.5**.: _Let \(R\) be a quasi-Frobenius finite dimensional algebra, and let \(\alpha\in\operatorname{Aut}(R)\). Then \(R^{*}\otimes_{R}{}_{1}R_{\alpha}\simeq{}_{1}R_{\alpha}\otimes_{R}R^{*}\) as \(R\)-bimodules. Thus the element \([R^{*}]\) of the Picard group \(\operatorname{Pic}(R)\) lies in the centralizer of the image of \(\operatorname{Out}(R)\)._
Proof.: Taking into account the above considerations, we have isomorphisms of \(R\)-bimodules
\[R^{*}\otimes_{R}{}_{1}R_{\alpha}\simeq{}_{1}(R^{*})_{\alpha}\simeq{}_{\alpha^ {-1}}(R^{*})_{1}\simeq{}_{\alpha^{-1}}R_{1}\otimes_{R}R^{*}\simeq{}_{1}R_{ \alpha}\otimes_{R}R^{*}\]
**Corollary 2.6**.: _Let \(R\) be a Frobenius algebra. Then the class of the Nakayama automorphism of \(R\) lies in the centre of \(\operatorname{Out}(R)\)._
If \(R\) is quasi-Frobenius, we are interested in the order of \([R^{*}]\) in the group \(\operatorname{Pic}(R)\). This order is \(1\) if and only if \(R\) is a symmetric algebra. The following examples show that it may be any integer \(\geq 2\) in other quasi-Frobenius algebras, and also it can be infinite.
For the first example, we recall that if \(H\) is a finite dimensional Hopf algebra, then a left integral on \(H\) is an element \(\lambda\in H^{*}\) such that \(h^{*}\lambda=h^{*}(1)\lambda\) for any \(h^{*}\in H^{*}\); the multiplication of \(H^{*}\) is given by the convolution product. Any finite dimensional Hopf algebra \(H\) is a Frobenius algebra, and a non-zero left integral \(\lambda\) on \(H\) is a Frobenius form, see [8, Theorem 12.5].
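For instance, if \(G\) is a finite group and \(H=KG\), then \(H^{*}\) identifies with the algebra of \(K\)-valued functions on \(G\) with pointwise multiplication, and the condition \(h^{*}\lambda=h^{*}(1)\lambda\) forces \(\lambda(g)=0\) for all \(g\neq e\); so the left integrals on \(KG\) are the scalar multiples of the function \(\lambda\) with \(\lambda(e)=1\) and \(\lambda(g)=0\) otherwise, which is the usual Frobenius (in fact symmetric) form on \(KG\).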
**Example 2.7**.: Let \(C\) be a finite abelian group, and let \(C^{*}\) be its character group. We consider certain Hopf algebras in the braided category of Yetter-Drinfeld modules over the group Hopf algebra \(KC\), called quantum lines, and their liftings, obtained by a bosonization construction. We obtain some finite dimensional pointed Hopf algebras with coradical \(KC\), see [1], [3]. There are two classes of such objects.
(**I**) Hopf algebras of the type \(H_{1}(C,n,c,c^{*})\), where \(n\geq 2\) is an integer, \(c\in C\) and \(c^{*}\in C^{*}\), such that \(c^{n}\neq 1\), \((c^{*})^{n}=1\) and \(c^{*}(c)\) is a primitive \(n\)th root of unity. It is generated as an algebra by the Hopf subalgebra \(KC\) and a \((1,c)\)-skewprimitive element \(x\), i.e., the comultiplication works as \(\Delta(x)=c\otimes x+x\otimes 1\) on \(x\), subject to relations \(x^{n}=c^{n}-1\) and \(xg=c^{*}(g)gx\) for any \(g\in C\). Note that the required conditions show that \(c^{*}\) has order \(n\).
(**II**) Hopf algebras of the type \(H_{2}(C,n,c,c^{*})\), where \(n\geq 2\) is an integer, \(c\in C\) and \(c^{*}\in C^{*}\), such that \(c^{*}(c)\) is a primitive \(n\)th root of unity. It is generated as an algebra by the Hopf subalgebra \(KC\) and a \((1,c)\)-skewprimitive element \(x\), subject to relations \(x^{n}=0\) and
\(xg=c^{*}(g)gx\) for any \(g\in C\). We note that in this case the order of \(c^{*}\), which we denote by \(m\), is a multiple of \(n\).
If \(H\) is any of \(H_{1}(C,n,c,c^{*})\) or \(H_{2}(C,n,c,c^{*})\), a linear basis of \(H\) is \(\mathcal{B}=\{gx^{j}|g\in C,0\leq j\leq n-1\}\), thus the dimension of \(H\) is \(n|C|\), and the linear map \(\lambda\in H^{*}\) such that \(\lambda(c^{1-n}x^{n-1})=1\) and \(\lambda\) takes any other element of \(\mathcal{B}\) to \(0\), is a left integral on \(H\), see [3, Proposition 1.17].
If \(g\in C\), then
\[(\lambda\leftharpoonup g)(g^{-1}c^{1-n}x^{n-1})=\lambda(c^{1-n}x^{n-1})=1\]
and \(\lambda\leftharpoonup g\) takes any other element of \(\mathcal{B}\) to \(0\), while
\[(g\rightharpoonup\lambda)(g^{-1}c^{1-n}x^{n-1})=\lambda(g^{-1}c^{1-n}x^{n-1}g)=c^{*}(g)^{n-1}\lambda(c^{1-n}x^{n-1})=c^{*}(g)^{n-1}\]
and \(g\rightharpoonup\lambda\) takes any other element of \(\mathcal{B}\) to \(0\). These show that \(g\rightharpoonup\lambda=c^{*}(g)^{n-1}\lambda\leftharpoonup g\), so the Nakayama automorphism \(\nu\) associated with the Frobenius form \(\lambda\) satisfies \(\nu(g)=c^{*}(g)^{1-n}g\).
On the other hand, if we denote \(\xi=c^{*}(c)\), we have
\[(x\rightharpoonup\lambda)(c^{1-n}x^{n-2})=\lambda(c^{1-n}x^{n-1})=1\]
and \(x\rightharpoonup\lambda\) takes any other element of \(\mathcal{B}\) to \(0\), while
\[(\lambda\leftharpoonup x)(c^{1-n}x^{n-2})=\lambda(xc^{1-n}x^{n-2})=\xi^{1-n}\lambda(c^{1-n}x^{n-1})=\xi^{1-n}\]
and \(\lambda\leftharpoonup x\) takes any other element of \(\mathcal{B}\) to \(0\). Since \(\xi^{n}=1\), we have \(\xi^{1-n}=\xi\), and thus we get \(\nu(x)=\xi x\).
Denote the order of \(c^{*}\) by \(m\); we noticed that \(m=n\) in the case of \(H_{1}(C,n,c,c^{*})\), and \(m=dn\) for some positive integer \(d\) in the case of \(H_{2}(C,n,c,c^{*})\). If \(j\) is a positive integer, then \(\nu^{j}=1\) if and only if \(\xi^{j}=1\) and \(c^{*}(g)^{j(1-n)}=1\) for any \(g\in C\). If the latter condition is satisfied, then \((c^{*})^{j(1-n)}=1\), or equivalently, \(m|j(1-n)\), hence \(n|j(1-n)\), and then \(n|j\), so the condition \(\xi^{j}=1\) is automatically satisfied. Thus the order of \(\nu\) is the least positive integer \(j\) such that \(m|j(1-n)\). For any such \(j\) we have \(n|j\), so \(j=bn\) for some integer \(b\). Then \(m|j(1-n)\) is equivalent to \(d|b(n-1)\), and also to \(\frac{d}{(d,n-1)}|b\cdot\frac{n-1}{(d,n-1)}\). Since \(\frac{d}{(d,n-1)}\) and \(\frac{n-1}{(d,n-1)}\) are relatively prime, the latter condition is equivalent to \(\frac{d}{(d,n-1)}|b\). We conclude that the least such \(b\) is \(\frac{d}{(d,n-1)}\), and the order of \(\nu\) is
\[j=bn=\frac{dn}{(d,n-1)}=\frac{m}{(\frac{m}{n},n-1)}.\]
This shows that for \(H_{1}(C,n,c,c^{*})\), where \(m=n\), the order of \(\nu\) is necessarily \(n\), while for \(H_{2}(C,n,c,c^{*})\), the order may be larger than \(n\), depending on the value of \(m\).
Now we show that for any \(1\leq j<\frac{m}{(\frac{m}{n},n-1)}\), \(\nu^{j}\) is not an inner automorphism. Indeed, if it were, then there would exist an invertible \(u\) such that \(\nu^{j}(r)=u^{-1}ru\) for any \(r\) in the Hopf algebra (which is either \(H_{1}(C,n,c,c^{*})\) or \(H_{2}(C,n,c,c^{*})\)). In particular, for any \(g\in C\), \(c^{*}(g)^{j(1-n)}g=u^{-1}gu\). Applying the counit \(\varepsilon\), one gets \(c^{*}(g)^{j(1-n)}=1\) for any \(g\in C\), so \((c^{*})^{j(1-n)}=1\). Hence \(m|j(1-n)\), and we have seen above that this implies that \(j\) must be at least \(\frac{m}{(\frac{m}{n},n-1)}\), a contradiction.
We conclude that if \(A\) is a Hopf algebra of type \(H_{1}(C,n,c,c^{*})\) or \(H_{2}(C,n,c,c^{*})\), then the order of the Nakayama automorphism \(\nu\) of \(A\) in the group of algebra automorphisms of \(A\), as well as the order of the class of \(\nu\) in \(\operatorname{Out}(A)\) (which is the same as the order of \([A^{*}]\) in \(\operatorname{Pic}(A)\)), is \(\frac{m}{(\frac{m}{n},n-1)}\), where \(m\) is the order of \(c^{*}\) in \(C^{*}\). In the case of \(H_{1}(C,n,c,c^{*})\), where \(m=n\), this order is just \(n\).
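For instance (assuming \(\operatorname{char}K\neq 2\) and that \(K\) contains a primitive \(4\)th root of unity \(i\)), take \(n=2\), \(C=C_{2}\times C_{4}=<a>\times<b>\), \(c=a\), and \(c^{*}\in C^{*}\) with \(c^{*}(a)=-1\) and \(c^{*}(b)=i\). Then \(c^{*}(c)=-1\) is a primitive \(2\)nd root of unity, \(m=4\), and for \(A=H_{2}(C,2,c,c^{*})\) the order of \([A^{*}]\) in \(\operatorname{Pic}(A)\) is \(\frac{m}{(\frac{m}{n},n-1)}=\frac{4}{(2,1)}=4\), strictly larger than \(n\).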
A particular case is when \(C=C_{n}=<c>\) is the cyclic group of order \(n\geq 2\). Then for any linear character \(c^{*}\in C^{*}\) such that \(c^{*}(c)\) is a primitive \(n\)th root of unity, \(H_{2}(C,n,c,c^{*})\) is a Taft
Hopf algebra. For such algebras, the order of the Nakayama automorphism associated with a left integral as a Frobenius form is computed in [16, Example 5.9, page 614].
**Example 2.8**.: Let \(q\) be a non-zero element of a field \(K\), and let \(K_{q}[X,Y]\) be the quantum plane, which is the \(K\)-algebra generated by \(X\) and \(Y\), subject to the relation \(YX=qXY\). Let \(R_{q}=K_{q}[X,Y]/(X^{2},Y^{2})\), which has dimension \(4\), and a basis \(\mathcal{B}=\{1,x,y,xy\}\), where \(x,y\) denote the classes of \(X,Y\) in \(R\). We have \(x^{2}=y^{2}=0\) and \(yx=qxy\). Denote by \(\mathcal{B}^{*}=\{1^{*},x^{*},y^{*},(xy)^{*}\}\) the basis of \(R_{q}^{*}\) dual to \(\mathcal{B}\). Then
\[1\rightharpoonup(xy)^{*}=(xy)^{*},x\rightharpoonup(xy)^{*}=qy^{*},y\rightharpoonup( xy)^{*}=x^{*},(xy)\rightharpoonup(xy)^{*}=1^{*},\]
showing that the linear map from \(R_{q}\) to \(R_{q}^{*}\) which takes \(r\) to \(r\rightharpoonup(xy)^{*}\) is an isomorphism. Thus \(R_{q}\) is a Frobenius algebra and \(\lambda=(xy)^{*}\) is a Frobenius form on \(R_{q}\). Now since \((xy)^{*}\leftharpoonup x=y^{*}\) and \((xy)^{*}\leftharpoonup y=qx^{*}\), the Nakayama automorphism associated with \(\lambda\) is \(\nu\in\operatorname{Aut}(R_{q})\) given by \(\nu(x)=q^{-1}x,\nu(y)=qy\). Then it is clear that the order of \(\nu\) in the automorphism group of \(R_{q}\) is \(n\) if \(q\) is a primitive \(n\)th root of unity in \(K\), and it is infinite when no non-trivial power of \(q\) is \(1\). This fact was observed in [16, Example 10.7, page 417] by using periodic modules with respect to actions of the syzygy and Auslander-Reiten operators.
We show that if \(t\) is a positive integer such that \(q^{t}\neq 1\), then \(\nu^{t}\) is not even an inner automorphism. Indeed, if it were, then \(\nu^{t}(x)=u^{-1}xu\), or \(ux=q^{t}xu\) for some invertible \(u\in R_{q}\). If we write \(u=a1+bx+cy+dxy\) with \(a,b,c,d\in K\), this means that \(ax+qcxy=q^{t}(ax+cxy)\), showing that \(a=0\). But then \(u\) cannot be invertible, since \(xyu=0\), a contradiction.
In conclusion, if \(q\) is not a root of unity, \(\nu\) has infinite order in \(\operatorname{Out}(R_{q})\), and so does \([R_{q}^{*}]\) in \(\operatorname{Pic}(R_{q})\), while if \(q\) is a primitive \(n\)th root of unity, then \([R_{q}^{*}]\) has order \(n\) in \(\operatorname{Pic}(R_{q})\).
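In particular, if \(\operatorname{char}K\neq 2\) and \(q=-1\), then \(R_{-1}\) is a \(4\)-dimensional Frobenius algebra which is not symmetric, and \([R_{-1}^{*}]\) has order exactly \(2\) in \(\operatorname{Pic}(R_{-1})\); thus \(R_{-1}\) satisfies the hypothesis of Proposition B.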
We end this example with the remark that Nakayama and Nesbitt constructed in [10, page 665] a class of examples of Frobenius algebras which are not symmetric, presented in a matrix form. More precisely, in the presentation of [7, Example 16.66], for any non-zero elements \(u,v\in K\), let \(A_{u,v}\) be the subalgebra of \(M_{4}(K)\) consisting of all matrices of the type
\[\left[\begin{array}{cccc}a&b&c&d\\ 0&a&0&uc\\ 0&0&a&vb\\ 0&0&0&a\end{array}\right],\]
where \(a,b,c,d\in K\). Then \(A_{u,v}\) is Frobenius for any \(u,v\in K^{*}\), and it is symmetric if and only if \(u=v\). Moreover, \(A_{u,v}\) has a basis consisting of the elements
\[I_{4},\;x=E_{12}+vE_{34},\;y=E_{13}+uE_{24},\;z=E_{14},\]
where \(E_{ij}\) denote the usual matrix units in \(M_{4}(K)\), and they satisfy the relations
\[x^{2}=0,\;y^{2}=0,\;xy=uz,\;yx=vz.\]
These show that in fact, \(A_{u,v}\) is isomorphic to the quotient \(R_{u^{-1}v}\) of the quantum plane.
If \(H\) is a finite dimensional Hopf algebra, let \(t\in H\) be a non-zero left integral in \(H\), i.e., \(ht=\varepsilon(h)t\) for any \(h\in H\), where \(\varepsilon\) is the counit of \(H\). As the space of left integrals is one-dimensional and \(th\) is a left integral for any \(h\in H\), there is a linear map \(\mathcal{G}:H\to K\) such that \(th=\mathcal{G}(h)t\) for any \(h\in H\). In fact, \(\mathcal{G}\) is an algebra morphism, thus an element of the group \(G(H^{*})\) of grouplikes elements of \(H^{*}\). \(\mathcal{G}\) is called the distinguished group-like element of \(H^{*}\), and also the right modular element of \(H^{*}\).
**Theorem 2.9**.: _Let \(H\) be a finite dimensional Hopf algebra with antipode \(S\) and counit \(\varepsilon\), and let \(\mathcal{G}\) be the modular element in \(H^{*}\). If \(n\) is a positive integer, then \([H^{*}]^{n}=1\) in \(\operatorname{Pic}(H)\) if and only if \(S^{2n}\) is inner and \(\mathcal{G}^{n}=\varepsilon\). As a consequence, the order of \([H^{*}]\) in the Picard group of
_\(H\) is the least common multiple of the order of the class of \(S^{2}\) in \({\rm Out}(H)\) and the order of \({\mathcal{G}}\) in \(G(H^{*})\)._
Proof.: Let \(\lambda\) be a non-zero left integral on \(H\), which is a Frobenius form on \(H\), and let \(\nu\) be the associated Nakayama automorphism. By [14, Theorem 3(a)], in the reformulation of [8, Proposition 12.8], \(\nu(h)=\sum{\mathcal{G}}(h_{2})S^{2}(h_{1})\) for any \(h\in H\). Let \(\ell_{\mathcal{G}}:H\to H\) be the linear map defined by \(\ell_{\mathcal{G}}(h)={\mathcal{G}}\rightharpoonup h=\sum{\mathcal{G}}(h_{2}) h_{1}\). We have \(\nu=S^{2}\ell_{\mathcal{G}}\). We note that \({\mathcal{G}}S^{2}={\mathcal{G}}\). Indeed, it is clear that \({\mathcal{G}}S=S^{*}({\mathcal{G}})={\mathcal{G}}^{-1}\), since the dual map \(S^{*}\) of \(S\) is the antipode of the dual Hopf algebra \(H^{*}\), and it takes a group-like element to its inverse. Now we have
\[(\ell_{\mathcal{G}}S^{2})(h) = {\mathcal{G}}\rightharpoonup S^{2}(h)\] \[= \sum{\mathcal{G}}(S^{2}(h_{2}))S^{2}(h_{1})\] \[= \sum{\mathcal{G}}(h_{2})S^{2}(h_{1})\] \[= S^{2}({\mathcal{G}}\rightharpoonup h)\] \[= (S^{2}\ell_{\mathcal{G}})(h),\]
showing that \(\ell_{\mathcal{G}}S^{2}=S^{2}\ell_{\mathcal{G}}\). Since \(\rightharpoonup\) is a left action, we have \((\ell_{\mathcal{G}})^{n}=\ell_{{\mathcal{G}}^{n}}\) for any positive integer \(n\), and it follows that \(\nu^{n}=S^{2n}\ell_{{\mathcal{G}}^{n}}\). Now if \({\mathcal{G}}^{n}=\varepsilon\) (so that \(\ell_{{\mathcal{G}}^{n}}\) is the identity map of \(H\)) and \(S^{2n}\) is inner, then \(\nu^{n}=S^{2n}\) is inner, so \([H^{*}]^{n}=[_{1}H_{\nu}]^{n}=[_{1}H_{\nu^{n}}]=1\) in \({\rm Pic}(H)\). Conversely, if \([H^{*}]^{n}=1\), then \(\nu^{n}\) is inner. Let \(\nu^{n}(h)=u^{-1}hu\) for some invertible \(u\in H\). Then \(S^{2n}(\ell_{{\mathcal{G}}^{n}}(h))=u^{-1}hu\) for any \(h\in H\), and applying the counit \(\varepsilon\) of \(H\) and using that \(\varepsilon S=\varepsilon\), we obtain \(\varepsilon(\ell_{{\mathcal{G}}^{n}}(h))=\varepsilon(u^{-1})\varepsilon(h)\varepsilon(u)=\varepsilon(h)\). As \(\varepsilon(\ell_{{\mathcal{G}}^{n}}(h))=\varepsilon(\sum{\mathcal{G}}^{n}(h_{2})h_{1})={\mathcal{G}}^{n}(h)\), we get \({\mathcal{G}}^{n}=\varepsilon\). Consequently, \(\nu^{n}=S^{2n}\), so \(S^{2n}\) is inner.
We note that in the particular case where \(n=1\), the previous Theorem says that a finite dimensional Hopf algebra \(H\) is a symmetric algebra if and only if \({\mathcal{G}}=\varepsilon\), i.e., \(H\) is unimodular, and \(S^{2}\) is inner. This is a result of [12], see also [8, Theorem 12.9].
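As an illustration of Theorem 2.9, consider Sweedler's \(4\)-dimensional Hopf algebra \(H\) (which is \(H_{2}(C_{2},2,c,c^{*})\) in the notation of Example 2.7, assuming \(\operatorname{char}K\neq 2\)). Here \(S^{2}\) is inner, being conjugation by the grouplike element \(c\), while \(H\) is not unimodular and \(\mathcal{G}\) has order \(2\) in \(G(H^{*})\); hence \([H^{*}]\) has order \(2\) in \(\operatorname{Pic}(H)\), in agreement with the value \(\frac{m}{(\frac{m}{n},n-1)}=2\) computed in Example 2.7.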
## 3. The structure of \(\mathcal{R}\) and \(\mathcal{R}^{*}\), and the Picard group of \(\mathcal{R}\)
Let \({\mathcal{R}}\) be the \(K\)-algebra presented in the Introduction. It has basis \({\bf B}=\{E,X_{1},X_{2},Y_{1},Y_{2}\}\cup\{F_{ij}|1\leq i,j\leq 2\}\), and relations
\[E^{2}=E, F_{ij}F_{jr}=F_{ir},\] \[EX_{i}=X_{i}, X_{i}F_{ir}=X_{r},\] \[F_{ij}Y_{j}=Y_{i}, Y_{i}E=Y_{i}\]
for any \(1\leq i,j,r\leq 2\), and any other product of two elements of \({\bf B}\) is zero.
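Note that \(E\), \(F_{11}\) and \(F_{22}\) are orthogonal idempotents, and the relations above show that the identity element of \(\mathcal{R}\) is \(1=E+F_{11}+F_{22}\).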
Let
\[{\mathcal{V}}_{1} = <X_{1},F_{11},F_{21}>\] \[{\mathcal{V}}^{\prime}_{1} = <X_{2},F_{12},F_{22}>\] \[{\mathcal{V}}_{2} = <Y_{1},Y_{2},E>.\]
Then \({\mathcal{R}}={\mathcal{V}}_{1}\oplus{\mathcal{V}}^{\prime}_{1}\oplus{\mathcal{V}}_{2}\) is a decomposition of \({\mathcal{R}}\) into a direct sum of indecomposable left \({\mathcal{R}}\)-modules, and \({\mathcal{V}}_{1}\simeq{\mathcal{V}}^{\prime}_{1}\not\simeq{\mathcal{V}}_{2}\). Indeed, right multiplication by \(F_{12}\) is an isomorphism from \({\mathcal{V}}_{1}\) to \({\mathcal{V}}^{\prime}_{1}\), with inverse the right multiplication by \(F_{21}\), while \({\mathcal{V}}_{1}\) and \({\mathcal{V}}_{2}\) are not isomorphic since they have different annihilators.
Similarly, a decomposition of \(\mathcal{R}\) into a direct sum of indecomposable right \(\mathcal{R}\)-modules is \(\mathcal{R}=\mathcal{U}_{1}\oplus\mathcal{U}_{2}\oplus\mathcal{U}_{2}^{\prime}\), with \(\mathcal{U}_{2}\simeq\mathcal{U}_{2}^{\prime}\not\simeq\mathcal{U}_{1}\), where
\[\mathcal{U}_{1} = <E,X_{1},X_{2}>\] \[\mathcal{U}_{2} = <F_{11},F_{12},Y_{1}>\] \[\mathcal{U}_{2}^{\prime} = <F_{21},F_{22},Y_{2}>.\]
**Proposition 3.1**.: _With the notation above, \(\dim_{K}\left(\mathcal{U}_{i}\otimes\mathcal{V}_{j}\right)=1\) for any \(1\leq i,j\leq 2\)._
Proof.: We first look at \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{1}\). We see that
\[\begin{array}{l}E\otimes_{\mathcal{R}}F_{i1}=E^{2}\otimes_{\mathcal{R}}F_{ i1}=E\otimes_{\mathcal{R}}EF_{i1}=0\\ X_{i}\otimes_{\mathcal{R}}X_{1}=X_{i}\otimes_{\mathcal{R}}EX_{1}=X_{i}E \otimes_{\mathcal{R}}X_{1}=0\\ X_{1}\otimes_{\mathcal{R}}F_{21}=X_{1}\otimes_{\mathcal{R}}F_{21}F_{11}=X_{1} F_{21}\otimes_{\mathcal{R}}F_{11}=0\\ X_{2}\otimes_{\mathcal{R}}F_{11}=X_{2}\otimes_{\mathcal{R}}F_{11}F_{11}=X_{2} F_{11}\otimes_{\mathcal{R}}F_{11}=0\\ X_{i}\otimes_{\mathcal{R}}F_{i1}=EX_{i}\otimes_{\mathcal{R}}F_{i1}=E\otimes_{ \mathcal{R}}X_{i}F_{i1}=E\otimes_{\mathcal{R}}X_{1},\end{array}\]
thus \(E\otimes_{\mathcal{R}}X_{1}=X_{1}\otimes_{\mathcal{R}}F_{11}=X_{2}\otimes_{\mathcal{R}}F_{21}\), and this element spans \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{1}\), in particular \(\dim_{K}\left(\mathcal{U}_{1}\otimes\mathcal{V}_{1}\right)\leq 1\).
Now in \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{2}\) we have
\[\begin{array}{l}E\otimes_{\mathcal{R}}Y_{i}=E^{2}\otimes_{\mathcal{R}}Y_{i} =E\otimes_{\mathcal{R}}EY_{i}=0\\ X_{i}\otimes_{\mathcal{R}}Y_{j}=EX_{i}\otimes_{\mathcal{R}}Y_{j}=E\otimes_{ \mathcal{R}}X_{i}Y_{j}=0\\ X_{i}\otimes_{\mathcal{R}}E=X_{i}\otimes_{\mathcal{R}}E^{2}=X_{i}E\otimes_{ \mathcal{R}}E=0,\end{array}\]
showing that \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{2}\) is spanned by \(E\otimes_{\mathcal{R}}E\), so \(\dim_{K}\left(\mathcal{U}_{1}\otimes\mathcal{V}_{2}\right)\leq 1\).
Next, in \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{1}\) one has
\[\begin{array}{l}F_{1i}\otimes_{\mathcal{R}}X_{1}=F_{11}F_{1i}\otimes_{ \mathcal{R}}X_{1}=F_{11}\otimes_{\mathcal{R}}F_{1i}X_{1}=0\\ Y_{1}\otimes_{\mathcal{R}}X_{1}=Y_{1}\otimes_{\mathcal{R}}X_{1}F_{11}=Y_{1}X_{1} \otimes_{\mathcal{R}}F_{11}=0\\ Y_{1}\otimes_{\mathcal{R}}F_{i1}=Y_{1}E\otimes_{\mathcal{R}}F_{i1}=Y_{1} \otimes_{\mathcal{R}}EF_{i1}=0\\ F_{1i}\otimes_{\mathcal{R}}F_{j1}=F_{11}F_{1i}\otimes_{\mathcal{R}}F_{j1}=F_{ 11}\otimes_{\mathcal{R}}F_{1i}F_{j1}=\delta_{ij}F_{11}\otimes_{\mathcal{R}}F_ {11},\end{array}\]
so \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{1}\) is spanned by \(F_{11}\otimes_{\mathcal{R}}F_{11}=F_{12}\otimes_{\mathcal{R}}F_{21}\) and \(\dim_{K}\left(\mathcal{U}_{2}\otimes\mathcal{V}_{1}\right)\leq 1\).
Finally, in \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2}\)
\[\begin{array}{l}Y_{1}\otimes_{\mathcal{R}}Y_{i}=Y_{1}E\otimes_{\mathcal{R}} Y_{i}=Y_{1}\otimes_{\mathcal{R}}EY_{i}=0\\ F_{1i}\otimes_{\mathcal{R}}E=F_{1i}\otimes_{\mathcal{R}}E^{2}=F_{1i}E\otimes_{ \mathcal{R}}E=0\\ F_{1i}\otimes_{\mathcal{R}}Y_{j}=F_{1i}\otimes_{\mathcal{R}}Y_{j}E=F_{1i}Y_{j} \otimes_{\mathcal{R}}E=\delta_{ij}Y_{1}\otimes_{\mathcal{R}}E\end{array}\]
so \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2}\) is spanned by \(Y_{1}\otimes_{\mathcal{R}}E=F_{11}\otimes_{\mathcal{R}}Y_{1}=F_{12}\otimes_{\mathcal{R}}Y_{2}\) and \(\dim_{K}\left(\mathcal{U}_{2}\otimes\mathcal{V}_{2}\right)\leq 1\).
As \(\mathcal{R}_{\mathcal{R}}\simeq\mathcal{U}_{1}\oplus\mathcal{U}_{2}^{2}\) and \({}_{\mathcal{R}}\mathcal{R}\simeq\mathcal{V}_{1}^{2}\oplus\mathcal{V}_{2}\), there are linear isomorphisms
\[\mathcal{R}\simeq\mathcal{R}\otimes_{\mathcal{R}}\mathcal{R}\simeq(\mathcal{U} _{1}\otimes_{\mathcal{R}}\mathcal{V}_{1})^{2}\oplus(\mathcal{U}_{2}\otimes_{ \mathcal{R}}\mathcal{V}_{1})^{4}\oplus(\mathcal{U}_{1}\otimes_{\mathcal{R}} \mathcal{V}_{2})\oplus(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2})^{2},\]
and equating the dimensions we see that we must have
\[\dim_{K}\left(\mathcal{U}_{1}\otimes\mathcal{V}_{1}\right)=\dim_{K}\left( \mathcal{U}_{1}\otimes\mathcal{V}_{2}\right)=\dim_{K}\left(\mathcal{U}_{2} \otimes\mathcal{V}_{1}\right)=\dim_{K}\left(\mathcal{U}_{2}\otimes\mathcal{V}_ {2}\right)=1.\]
**Remark 3.2**.: We retain from the proof of the previous Proposition that the non-zero tensor monomials formed with elements of \(\mathbf{B}\) are: \(E\otimes_{\mathcal{R}}X_{1}=X_{1}\otimes_{\mathcal{R}}F_{11}=X_{2}\otimes_{\mathcal{R}}F_{21}\) in \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{1}\), \(F_{11}\otimes_{\mathcal{R}}F_{11}=F_{12}\otimes_{\mathcal{R}}F_{21}\) in \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{1}\), \(Y_{1}\otimes_{\mathcal{R}}E=F_{11}\otimes_{\mathcal{R}}Y_{1}=F_{12}\otimes_{\mathcal{R}}Y_{2}\) in \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2}\), and \(E\otimes_{\mathcal{R}}E\) in \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{2}\).
Let us look now at \(\mathcal{R}^{*}=Hom_{K}(\mathcal{R},K)\), with the \(\mathcal{R}\)-bimodule structure induced by the one of \(\mathcal{R}\); we denote by \(\rightharpoonup\) and \(\leftharpoonup\) the left and right actions of \(\mathcal{R}\) on \(\mathcal{R}^{*}\). Denote by \(\mathbf{B}^{*}=\{E^{*},F^{*}_{ij},X^{*}_{i},Y^{*}_{j}|1\leq i,j\leq 2\}\) the basis of \(\mathcal{R}^{*}\) dual to \(\mathbf{B}\). On basis elements, the left action of \(\mathcal{R}\) on \(\mathcal{R}^{*}\) is
\[\begin{array}{llll}E\rightharpoonup E^{*}=E^{*},&E\rightharpoonup F^{*}_{ij}=0,&E\rightharpoonup X^{*}_{i}=0,&E\rightharpoonup Y^{*}_{i}=Y^{*}_{i},\\ F_{ij}\rightharpoonup E^{*}=0,&F_{ij}\rightharpoonup F^{*}_{rp}=\delta_{jp}F^{*}_{ri},&F_{ij}\rightharpoonup X^{*}_{r}=\delta_{jr}X^{*}_{i},&F_{ij}\rightharpoonup Y^{*}_{r}=0,\\ X_{i}\rightharpoonup E^{*}=0,&X_{i}\rightharpoonup F^{*}_{rj}=0,&X_{i}\rightharpoonup X^{*}_{j}=\delta_{ij}E^{*},&X_{i}\rightharpoonup Y^{*}_{j}=0,\\ Y_{i}\rightharpoonup E^{*}=0,&Y_{i}\rightharpoonup F^{*}_{rj}=0,&Y_{i}\rightharpoonup X^{*}_{j}=0,&Y_{i}\rightharpoonup Y^{*}_{j}=F^{*}_{ji},\end{array}\]
for any \(1\leq i,j,r,p\leq 2\), while the right action is
\[\begin{array}{llll}E^{*}\leftharpoonup E=E^{*},&F^{*}_{ij}\leftharpoonup E =0,&X^{*}_{i}\leftharpoonup E=X^{*}_{i},&Y^{*}_{i}\leftharpoonup E=0,\\ E^{*}\leftharpoonup F_{ij}=0,&F^{*}_{rp}\leftharpoonup F_{ij}=\delta_{ri}F^ {*}_{jp},&X^{*}_{r}\leftharpoonup F_{ij}=0,&Y^{*}_{r}\leftharpoonup F_{ ij}=\delta_{ri}Y^{*}_{j},\\ E^{*}\leftharpoonup X_{i}=0,&F^{*}_{rj}\leftharpoonup X_{i}=0,&X^{*}_{j} \leftharpoonup X_{i}=F^{*}_{ij},&Y^{*}_{j}\leftharpoonup X_{i}=0,\\ E^{*}\leftharpoonup Y_{i}=0,&F^{*}_{rj}\leftharpoonup Y_{i}=0,&X^{*}_{j} \leftharpoonup Y_{i}=0,&Y^{*}_{j}\leftharpoonup Y_{i}=\delta_{ji}E^{*},\end{array}\]
for any \(1\leq i,j,r,p\leq 2\).
We will identify \(\mathcal{U}^{*}_{1}\) with \(<E^{*},X^{*}_{1},X^{*}_{2}>\) inside \(\mathcal{R}^{*}\), and similarly for the duals of \(\mathcal{U}_{2},\mathcal{U}^{\prime}_{2},\mathcal{V}_{1},\mathcal{V}^{\prime}_ {1},\mathcal{V}_{2}\).
**Lemma 3.3**.: \(\mathcal{U}^{*}_{1}\simeq\mathcal{V}_{1}\) _and \(\mathcal{U}^{*}_{2}\simeq\mathcal{V}_{2}\) as left \(\mathcal{R}\)-modules. Consequently, \(\mathcal{V}^{*}_{1}\simeq\mathcal{U}_{1}\) and \(\mathcal{V}^{*}_{2}\simeq\mathcal{U}_{2}\) as right \(\mathcal{R}\)-modules, \(\mathcal{R}^{*}\simeq\mathcal{V}_{1}\oplus\mathcal{V}^{2}_{2}\) as left \(\mathcal{R}\)-modules and \(\mathcal{R}^{*}\simeq\mathcal{U}^{2}_{1}\oplus\mathcal{U}_{2}\) as right \(\mathcal{R}\)-modules._
Proof.: It follows from the action table above that the linear map taking \(X_{1}\) to \(E^{*}\), \(F_{11}\) to \(X^{*}_{1}\) and \(F_{21}\) to \(X^{*}_{2}\) is an isomorphism of left \(\mathcal{R}\)-modules from \(\mathcal{V}_{1}\) to \(\mathcal{U}^{*}_{1}\). Also, the mapping \(Y_{1}\mapsto F^{*}_{11}\), \(Y_{2}\mapsto F^{*}_{12}\), \(E\mapsto Y^{*}_{1}\) defines an isomorphism \(\mathcal{V}_{2}\simeq\mathcal{U}^{*}_{2}\).
**Proposition 3.4**.: _The space \(\mathcal{R}^{*}\otimes_{\mathcal{R}}\mathcal{R}^{*}\) has dimension 9, with a basis consisting of the elements_
\[\mathcal{E}=Y^{*}_{1}\otimes_{\mathcal{R}}X^{*}_{1},\ \ \mathcal{F}_{ij}=X^{*}_{i}\otimes_{ \mathcal{R}}Y^{*}_{j},\]
\[\mathcal{X}_{i}=E^{*}\otimes_{\mathcal{R}}Y^{*}_{i},\ \ \ \mathcal{Y}_{i}=F^{*}_{1i}\otimes_{ \mathcal{R}}X^{*}_{1},\]
_where \(1\leq i,j\leq 2\). The only non-zero tensor monomials \(u\otimes_{\mathcal{R}}v\) with \(u,v\in\mathbf{B}^{*}\) are_
\[\begin{array}{llll}Y^{*}_{i}\otimes_{\mathcal{R}}X^{*}_{i}=\mathcal{E},&X^{*} _{i}\otimes_{\mathcal{R}}Y^{*}_{j}=\mathcal{F}_{ij},&E^{*}\otimes_{\mathcal{R }}Y^{*}_{i}=\mathcal{X}_{i},\\ Y^{*}_{i}\otimes_{\mathcal{R}}F^{*}_{ji}=\mathcal{X}_{j},&X^{*}_{i}\otimes_{ \mathcal{R}}E^{*}=\mathcal{Y}_{i},&F^{*}_{ij}\otimes_{\mathcal{R}}X^{*}_{i}= \mathcal{Y}_{j},\end{array}\]
_where \(1\leq i,j\leq 2\)._
_Moreover, the linear isomorphism \(\varphi:\mathcal{R}^{*}\otimes_{\mathcal{R}}\mathcal{R}^{*}\to\mathcal{R}\) given by_
\[\varphi(\mathcal{E})=E,\,\varphi(\mathcal{F}_{ij})=F_{ij},\,\varphi(\mathcal{X} _{i})=X_{i},\,\varphi(\mathcal{Y}_{i})=Y_{i}\]
_for any \(1\leq i,j\leq 2\), is an isomorphism of \(\mathcal{R}\)-bimodules._
Proof.: Using Lemma 3.3, we see that
\[\mathcal{R}^{*}\otimes_{\mathcal{R}}\mathcal{R}^{*}\simeq(\mathcal{U}_{1} \otimes_{\mathcal{R}}\mathcal{V}_{1})^{2}\oplus(\mathcal{U}_{1}\otimes_{ \mathcal{R}}\mathcal{V}_{2})^{4}\oplus(\mathcal{U}_{2}\otimes_{\mathcal{R}} \mathcal{V}_{1})\oplus(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2})^{2}\]
as \(K\)-spaces, and using Proposition 3.1 we obtain that \(\mathcal{R}^{*}\otimes_{\mathcal{R}}\mathcal{R}^{*}\) has dimension 9.
For finding the non-zero tensor monomials \(u\otimes_{\mathcal{R}}v\) with \(u,v\in\mathbf{B}^{*}\), we use Remark 3.2, and find the only such monomials are:
\(\bullet\) In \(\mathcal{V}^{*}_{1}\otimes_{\mathcal{R}}\mathcal{U}^{*}_{1}\) (isomorphic to \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{1}\)): \(X^{*}_{1}\otimes_{\mathcal{R}}E^{*}=F^{*}_{11}\otimes_{\mathcal{R}}X^{*}_{1}=F^ {*}_{21}\otimes_{\mathcal{R}}X^{*}_{2}\) (\(=\mathcal{Y}_{1}\))
\(\bullet\) In \((\mathcal{V}^{\prime}_{1})^{*}\otimes_{\mathcal{R}}\mathcal{U}^{*}_{1}\) (isomorphic to \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{1}\)): \(X^{*}_{2}\otimes_{\mathcal{R}}E^{*}=F^{*}_{12}\otimes_{\mathcal{R}}X^{*}_{1}=F^{*}_{22}\otimes_{\mathcal{R}}X^{*}_{2}\) (\(=\mathcal{Y}_{2}\))
* In \(\mathcal{V}_{2}^{*}\otimes_{\mathcal{R}}\mathcal{U}_{1}^{*}\) (isomorphic to \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{1}\)): \(Y_{1}^{*}\otimes_{\mathcal{R}}X_{1}^{*}=Y_{2}^{*}\otimes_{\mathcal{R}}X_{2}^{*}\) (\(=\mathcal{E}\))
* In \(\mathcal{V}_{1}^{*}\otimes_{\mathcal{R}}\mathcal{U}_{2}^{*}\) (isomorphic to \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{2}\)): \(X_{1}^{*}\otimes_{\mathcal{R}}Y_{1}^{*}\) (\(=\mathcal{F}_{11}\))
* In \((\mathcal{V}_{1}^{\prime})^{*}\otimes_{\mathcal{R}}\mathcal{U}_{2}^{*}\) (isomorphic to \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{2}\)): \(X_{2}^{*}\otimes_{\mathcal{R}}Y_{1}^{*}\) (\(=\mathcal{F}_{21}\))
* In \(\mathcal{V}_{2}^{*}\otimes_{\mathcal{R}}\mathcal{U}_{2}^{*}\) (isomorphic to \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2}\)): \(E^{*}\otimes_{\mathcal{R}}Y_{1}^{*}=Y_{1}^{*}\otimes_{\mathcal{R}}F_{11}^{*}= Y_{2}^{*}\otimes_{\mathcal{R}}F_{12}^{*}\) (\(=\mathcal{X}_{1}\))
* In \(\mathcal{V}_{2}^{*}\otimes_{\mathcal{R}}(\mathcal{U}_{2}^{\prime})^{*}\) (isomorphic to \(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2}\)): \(E^{*}\otimes_{\mathcal{R}}Y_{2}^{*}=Y_{1}^{*}\otimes_{\mathcal{R}}F_{21}^{*}= Y_{2}^{*}\otimes_{\mathcal{R}}F_{22}^{*}\) (\(=\mathcal{X}_{2}\))
* In \(\mathcal{V}_{1}^{*}\otimes_{\mathcal{R}}(\mathcal{U}_{2}^{\prime})^{*}\) (isomorphic to \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{2}\)): \(X_{1}^{*}\otimes_{\mathcal{R}}Y_{2}^{*}\) (\(=\mathcal{F}_{12}\))
* In \((\mathcal{V}_{1}^{\prime})^{*}\otimes_{\mathcal{R}}(\mathcal{U}_{2}^{\prime})^{*}\) (isomorphic to \(\mathcal{U}_{1}\otimes_{\mathcal{R}}\mathcal{V}_{2}\)): \(X_{2}^{*}\otimes_{\mathcal{R}}Y_{2}^{*}\) (\(=\mathcal{F}_{22}\))
Collecting all these, the second part of the statement is proved.
Now using the (left and right) action table of \(\mathcal{R}\) on \(\mathcal{R}^{*}\), we find that the left and right actions of \(\mathcal{R}\) on \(\mathcal{R}^{*}\otimes_{\mathcal{R}}\mathcal{R}^{*}\), which we also denote by \(\rightharpoonup\) and \(\leftharpoonup\), are given by
\[\begin{array}{llll}E\rightharpoonup\mathcal{E}=\mathcal{E},&E\rightharpoonup \mathcal{F}_{ij}=0,&E\rightharpoonup\mathcal{X}_{i}=\mathcal{X}_{i},&E \rightharpoonup\mathcal{Y}_{i}=0,\\ F_{ij}\rightharpoonup\mathcal{E}=0,&F_{ij}\rightharpoonup\mathcal{F}_{rp}= \delta_{jr}\mathcal{F}_{ip},&F_{ij}\rightharpoonup\mathcal{X}_{r}=0,&F_{ij} \rightharpoonup\mathcal{Y}_{r}=\delta_{jr}\mathcal{Y}_{i},\\ X_{i}\rightharpoonup\mathcal{E}=0,&X_{i}\rightharpoonup\mathcal{F}_{rj}= \delta_{ir}\mathcal{X}_{j},&X_{i}\rightharpoonup\mathcal{X}_{j}=0,&X_{i} \rightharpoonup\mathcal{Y}_{j}=0,\\ Y_{i}\rightharpoonup\mathcal{E}=\mathcal{Y}_{i},&Y_{i}\rightharpoonup\mathcal{F}_{ rj}=0,&Y_{i}\rightharpoonup\mathcal{X}_{j}=0,&Y_{i}\rightharpoonup \mathcal{Y}_{j}=0,\end{array}\]
for any \(1\leq i,j,r,p\leq 2\), and
\[\begin{array}{llll}\mathcal{E}\leftharpoonup E=\mathcal{E},&\mathcal{F}_{ ij}\leftharpoonup E=0,&\mathcal{X}_{i}\leftharpoonup E=0,&\mathcal{Y}_{i} \leftharpoonup E=\mathcal{Y}_{i},\\ \mathcal{E}\leftharpoonup F_{ij}=0,&\mathcal{F}_{rp}\leftharpoonup F_{ij}= \delta_{pi}\mathcal{F}_{rj},&\mathcal{X}_{r}\leftharpoonup F_{ij}=\delta_{ri} \mathcal{X}_{j},&\mathcal{Y}_{r}\leftharpoonup F_{ij}=0,\\ \mathcal{E}\leftharpoonup X_{i}=\mathcal{X}_{i},&\mathcal{F}_{rj}\leftharpoonup X_{i}= 0,&\mathcal{X}_{j}\leftharpoonup X_{i}=0,&\mathcal{Y}_{j}\leftharpoonup X_{i}= 0,\\ \mathcal{E}\leftharpoonup Y_{i}=0,&\mathcal{F}_{rj}\leftharpoonup Y_{i}= \delta_{ji}\mathcal{Y}_{r},&\mathcal{X}_{j}\leftharpoonup Y_{i}=0,&\mathcal{Y}_{j} \leftharpoonup Y_{i}=0,\end{array}\]
for any \(1\leq i,j,r,p\leq 2\).
The first set of relations shows that \(\varphi\) is a morphism of left \(\mathcal{R}\)-modules, while the second one shows that \(\varphi\) is also a morphism of right \(\mathcal{R}\)-modules.
**Lemma 3.5**.: _Let \(P\) be an invertible \(\mathcal{R}\)-bimodule. Then \(P\) is isomorphic either to \(\mathcal{V}_{1}\oplus\mathcal{V}_{2}^{2}\) or to \(\mathcal{V}_{1}^{2}\oplus\mathcal{V}_{2}\) as a left \(\mathcal{R}\)-module._
Proof.: Let \(Q\) be a bimodule such that \([Q]=[P]^{-1}\) in \(\operatorname{Pic}(\mathcal{R})\). Since \(P\) is a finitely generated projective left module over the finite dimensional algebra \(\mathcal{R}\), it is isomorphic to a finite direct sum of principal indecomposable left \(\mathcal{R}\)-modules, say \(P\simeq\mathcal{V}_{1}^{a}\oplus\mathcal{V}_{2}^{b}\) for some non-negative integers \(a,b\). But \(P\) is a generator as a left \(\mathcal{R}\)-module, so \(\mathcal{R}\) is a direct summand in the left \(\mathcal{R}\)-module \(P^{m}\) for some positive integer \(m\). Thus by the Krull-Schmidt Theorem, both \(a\) and \(b\) are positive. Similarly, \(Q\simeq\mathcal{U}_{1}^{c}\oplus\mathcal{U}_{2}^{d}\) as right \(\mathcal{R}\)-modules for some integers \(c,d>0\). Now there are linear isomorphisms
\[\mathcal{R}\simeq Q\otimes_{\mathcal{R}}P\simeq(\mathcal{U}_{1}\otimes_{ \mathcal{R}}\mathcal{V}_{1})^{ca}\oplus(\mathcal{U}_{1}\otimes_{\mathcal{R}} \mathcal{V}_{2})^{cb}\oplus(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{1}) ^{da}\oplus(\mathcal{U}_{2}\otimes_{\mathcal{R}}\mathcal{V}_{2})^{db}\]
Counting dimensions and using Proposition 3.1, we see that \((c+d)(a+b)=9\). As \(a,b,c,d>0\), we must have \(c+d=a+b=3\), so then either \(a=1\) and \(b=2\), or \(a=2\) and \(b=1\).
**Theorem 3.6**.: _Any invertible \(\mathcal{R}\)-bimodule is isomorphic either to \({}_{1}\mathcal{R}_{\alpha}\) or to \({}_{1}\mathcal{R}^{*}_{\alpha}\) for some \(\alpha\in\operatorname{Aut}(\mathcal{R})\). As a consequence, \(\operatorname{Pic}(\mathcal{R})\simeq\operatorname{Out}(\mathcal{R})\times C_{2}\), where \(C_{2}\) is the cyclic group of order 2._
Proof.: We know that a bimodule of type \({}_{1}\mathcal{R}_{\alpha}\), with \(\alpha\in\operatorname{Aut}(\mathcal{R})\), is invertible; the inverse of \([{}_{1}\mathcal{R}_{\alpha}]\) in \(\operatorname{Pic}(\mathcal{R})\) is \([{}_{1}\mathcal{R}_{\alpha^{-1}}]\). Moreover, \([{}_{1}\mathcal{R}_{\alpha}]\cdot[{}_{1}\mathcal{R}_{\beta}]=[{}_{1}\mathcal{R}_{ \alpha\beta}]\), and \([{}_{1}\mathcal{R}_{\alpha}]\) depends only on the class of \(\alpha\) modulo \(\operatorname{Inn}(\mathcal{R})\).
By Corollary 2.3, \(\mathcal{R}^{*}\) is an invertible \(\mathcal{R}\)-bimodule, and then so is \(\mathcal{R}^{*}\otimes_{\mathcal{R}}{}_{1}\mathcal{R}_{\alpha}\simeq{}_{1}\mathcal{R}^{*}_{\alpha}\). Since \(\mathcal{R}^{*}\otimes_{\mathcal{R}}{}_{1}\mathcal{R}_{\alpha}\simeq{}_{1}\mathcal{R}_{\alpha}\otimes_{\mathcal{R}}\mathcal{R}^{*}\) by Proposition 2.5, it follows
that the subset \(\mathcal{P}\) of \(\mathrm{Pic}(\mathcal{R})\) consisting of all \({}_{1}\mathcal{R}_{\alpha}\) and \({}_{1}\mathcal{R}_{\alpha}^{*}\), with \(\alpha\in\mathrm{Aut}(\mathcal{R})\), is a subgroup isomorphic to \(\mathrm{Out}(\mathcal{R})\times C_{2}\); an isomorphism between \(\mathcal{P}\) and \(\mathrm{Out}(\mathcal{R})\times C_{2}\) takes \({}_{1}\mathcal{R}_{\alpha}\) to \((\hat{\alpha},e)\), and \({}_{1}\mathcal{R}_{\alpha}^{*}\) to \((\hat{\alpha},c)\), where \(C_{2}=<c>\), \(e\) is the neutral element of \(C_{2}\), and \(\hat{\alpha}\) is the class of \(\alpha\) in \(\mathrm{Out}(\mathcal{R})\).
Let \(P\) be an invertible \(\mathcal{R}\)-bimodule. By Lemma 3.5 we see that as a left \(\mathcal{R}\)-module, \(P\) is isomorphic either to \(\mathcal{R}\) or to \(\mathcal{R}^{*}\). Now Proposition 2.1 shows that either \(P\simeq{}_{1}\mathcal{R}_{\alpha}\) or \(P\simeq{}_{1}\mathcal{R}_{\alpha}^{*}\) as \(\mathcal{R}\)-bimodules for some \(\alpha\in\mathrm{Aut}(\mathcal{R})\). We conclude that \(\mathrm{Pic}(\mathcal{R})=\mathcal{P}\), which ends the proof.
## 4. Automorphisms of \(\mathcal{R}\)
The aim of this section is to compute the automorphism group and the group of outer automorphisms of \(\mathcal{R}\). We will use a presentation of \(\mathcal{R}\) given in [6, Remark 4.1], where it is explained that \(\mathcal{R}\) is isomorphic to the Morita ring \(\left[\begin{array}{cc}K&X\\ Y&M_{2}(K)\end{array}\right]\) associated with the Morita context connecting the rings \(K\) and \(M_{2}(K)\), by the bimodules \(X=K^{2}\) and \(Y=M_{2,1}(K)\), with all actions given by the usual matrix multiplication, such that both Morita maps are zero. The multiplication of this Morita ring is given by
\[\left[\begin{array}{cc}\alpha&x\\ y&f\end{array}\right]\left[\begin{array}{cc}\alpha^{\prime}&x^{\prime}\\ y^{\prime}&f^{\prime}\end{array}\right]=\left[\begin{array}{cc}\alpha \alpha^{\prime}&\alpha x^{\prime}+xf^{\prime}\\ \alpha^{\prime}y+fy^{\prime}&ff^{\prime}\end{array}\right]\]
for any \(\alpha,\alpha^{\prime}\in K\), \(f,f^{\prime}\in M_{2}(K)\), \(x,x^{\prime}\in X\) and \(y,y^{\prime}\in Y\). This Morita ring and \(M_{3}(K)\) coincide as \(K\)-vector spaces, but they have different multiplications. An algebra isomorphism between \(\mathcal{R}\) and \(\left[\begin{array}{cc}K&X\\ Y&M_{2}(K)\end{array}\right]\) takes \(E,(X_{i})_{1\leq i\leq 2},(Y_{i})_{1\leq i\leq 2},(F_{ij})_{1\leq i,j\leq 2}\) to the elements in the Morita ring corresponding to the "matrix units" in \(K,X,Y,M_{2}(K)\). Throughout this section, we will identify \(\mathcal{R}\) with \(\left[\begin{array}{cc}K&X\\ Y&M_{2}(K)\end{array}\right]\).
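Explicitly, one convenient choice of such an isomorphism identifies the elements of \(\mathbf{B}\) with
\[E=\left[\begin{array}{cc}1&0\\ 0&0\end{array}\right],\quad X_{i}=\left[\begin{array}{cc}0&e_{i}\\ 0&0\end{array}\right],\quad Y_{i}=\left[\begin{array}{cc}0&0\\ e_{i}^{t}&0\end{array}\right],\quad F_{ij}=\left[\begin{array}{cc}0&0\\ 0&E_{ij}\end{array}\right],\]
where \(e_{1},e_{2}\) denote the standard basis row vectors of \(X=K^{2}\), \(e_{i}^{t}\) the corresponding columns in \(Y=M_{2,1}(K)\), and \(E_{ij}\) the matrix units of \(M_{2}(K)\); one checks directly that these elements satisfy the defining relations of \(\mathcal{R}\) listed in Section 3.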
The multiplicative group \(K^{*}\times GL_{2}(K)\) acts on the additive group \(K^{2}\times M_{2,1}(K)\) by
\[(\lambda,P)\cdot(x_{1},y_{1})=(\lambda x_{1}P^{-1},Py_{1})\]
for any \(\lambda\in K^{*},P\in GL_{2}(K),x_{1}\in K^{2},y_{1}\in M_{2,1}(K)\), so we can form a semidirect product \((K^{2}\times M_{2,1}(K))\rtimes(K^{*}\times GL_{2}(K))\).
For any \(x_{1}\in K^{2},y_{1}\in M_{2,1}(K),\lambda\in K^{*},P\in GL_{2}(K)\) define \(\varphi_{x_{1},y_{1},\lambda,P}:\mathcal{R}\to\mathcal{R}\) by
\[\varphi_{x_{1},y_{1},\lambda,P}(\left[\begin{array}{cc}\alpha&x\\ y&f\end{array}\right])=\left[\begin{array}{cc}\alpha&\alpha x_{1}+\lambda xP ^{-1}-x_{1}PfP^{-1}\\ \alpha y_{1}+Py-PfP^{-1}y_{1}&PfP^{-1}\end{array}\right].\]
**Theorem 4.1**.: \(\varphi_{x_{1},y_{1},\lambda,P}\) _is an algebra automorphism of \(\mathcal{R}\) for any \(x_{1}\in K^{2},y_{1}\in M_{2,1}(K),\lambda\in K^{*},P\in GL_{2}(K)\), and \(\Phi:(K^{2}\times M_{2,1}(K))\rtimes(K^{*}\times GL_{2}(K))\to\mathrm{Aut}( \mathcal{R})\), \(\Phi(x_{1},y_{1},\lambda,P)=\varphi_{x_{1},y_{1},\lambda,P}\) is an isomorphism of groups. An automorphism \(\varphi_{x_{1},y_{1},\lambda,P}\) of \(\mathcal{R}\) is inner if and only if \(\lambda=1\). As a consequence, \(\mathrm{Out}(\mathcal{R})\simeq K^{*}\)._
Proof.: Let \(\varphi\in\mathrm{Aut}(\mathcal{R})\). Since the Jacobson radical of \(\mathcal{R}\) is \(J(\mathcal{R})=\left[\begin{array}{cc}0&X\\ Y&0\end{array}\right]\), \(\varphi\) induces an automorphism \(\tilde{\varphi}\) of the algebra \(\mathcal{R}/J(\mathcal{R})\simeq K\times M_{2}(K)\), thus \(\tilde{\varphi}\) acts as identity on the first position, and as an inner automorphism associated to some \(P\in GL_{2}(K)\) on the second one. Lifting to \(\mathcal{R}\), we see that \(\varphi(\left[\begin{array}{cc}1&0\\ 0&0\end{array}\right])=\left[\begin{array}{cc}1&x_{1}\\ y_{1}&0\end{array}\right]\) for some \(x_{1}\in K^{2}\) and \(y_{1}\in M_{2,1}(K)\), and \(\varphi(\left[\begin{array}{cc}0&0\\ 0&f\end{array}\right])=\left[\begin{array}{cc}0&\mu(f)\\ \omega(f)&PfP^{-1}\end{array}\right]\) for some linear maps \(\mu:M_{2}(K)\to X\) and \(\omega:M_{2}(K)\to Y\).
On the other hand, since \(\varphi(J(\mathcal{R}))\subset J(\mathcal{R})\), we have \(\varphi(\left[\begin{array}{cc}0&x\\ 0&0\end{array}\right])\in\left[\begin{array}{cc}0&X\\ Y&0\end{array}\right]\), so then
\[\varphi(\left[\begin{array}{cc}0&x\\ 0&0\end{array}\right])=\varphi(\left[\begin{array}{cc}1&0\\ 0&0\end{array}\right]\left[\begin{array}{cc}0&x\\ 0&0\end{array}\right])\in\left[\begin{array}{cc}1&x_{1}\\ y_{1}&0\end{array}\right]\left[\begin{array}{cc}0&X\\ Y&0\end{array}\right]\subset\left[\begin{array}{cc}0&X\\ 0&0\end{array}\right].\]
This shows that \(\varphi(\left[\begin{array}{cc}0&x\\ 0&0\end{array}\right])=\left[\begin{array}{cc}0&\theta(x)\\ 0&0\end{array}\right]\) for a linear map \(\theta:X\to X\); thus \(\theta(x)=xA\) for any \(x\in X\), where \(A\in M_{2}(K)\).
Similarly we see that \(\varphi(\left[\begin{array}{cc}0&0\\ y&0\end{array}\right])=\left[\begin{array}{cc}0&0\\ By&0\end{array}\right]\) for any \(y\in Y\), where \(B\in M_{2}(K)\). Thus we obtain that \(\varphi\) must be of the form
\[\varphi(\left[\begin{array}{cc}\alpha&x\\ y&f\end{array}\right])=\left[\begin{array}{cc}\alpha&\alpha x_{1}+xA+\mu(f) \\ \alpha y_{1}+By+\omega(f)&PfP^{-1}\end{array}\right]. \tag{1}\]
By equating the corresponding entries, we see that the matrices \(\varphi(\left[\begin{array}{cc}\alpha&x\\ y&f\end{array}\right]\left[\begin{array}{cc}\alpha^{\prime}&x^{\prime}\\ y^{\prime}&f^{\prime}\end{array}\right])\) and \(\varphi(\left[\begin{array}{cc}\alpha\alpha^{\prime}&\alpha x^{\prime}+xf^{ \prime}\\ y\alpha^{\prime}+fy^{\prime}&ff^{\prime}\end{array}\right])\) are equal if and only if the equations
\[\alpha\mu(f^{\prime})+\alpha x_{1}Pf^{\prime}P^{-1}+xAPf^{\prime}P^{-1}+\mu(f) Pf^{\prime}P^{-1}=xf^{\prime}A+\mu(ff^{\prime}) \tag{2}\]
and
\[\alpha^{\prime}\omega(f)+\alpha^{\prime}PfP^{-1}y_{1}+PfP^{-1}By^{\prime}+PfP ^{-1}\omega(f^{\prime})=Bfy^{\prime}+\omega(ff^{\prime}) \tag{3}\]
are satisfied for any \(\alpha,\alpha^{\prime}\in K\), \(x,x^{\prime}\in K^{2}\), \(y,y^{\prime}\in M_{2,1}(K)\), \(f,f^{\prime}\in M_{2}(K)\). If in equation (2) we take \(f=0\), we get \(\alpha(\mu(f^{\prime})+x_{1}Pf^{\prime}P^{-1})+xAPf^{\prime}P^{-1}-xf^{\prime}A=0\). As this holds for any \(\alpha\in K\), we must have
\[\mu(f^{\prime})+x_{1}Pf^{\prime}P^{-1}=0 \tag{4}\]
and \(x(APf^{\prime}P^{-1}-f^{\prime}A)=0\). As \(x\) runs through \(K^{2}\), we get \(APf^{\prime}P^{-1}-f^{\prime}A=0\), showing that \(APf^{\prime}=f^{\prime}AP\) for any \(f^{\prime}\), so \(AP\in KI_{2}\), or equivalently,
\[A\in KP^{-1}. \tag{5}\]
On the other hand, it is clear that if equations (4) and (5) hold, then (2) is satisfied.
In a similar way, we see that (3) is true if and only if
\[\omega(f)=-PfP^{-1}y_{1} \tag{6}\]
and
\[B\in KP. \tag{7}\]
These show that a map \(\varphi\) of the form given in (1) is a ring morphism if and only if
\[\mu(f)=-x_{1}PfP^{-1},\;\omega(f)=-PfP^{-1}y_{1},\;A\in KP^{-1},\;B\in KP.\]
Thus take \(A=\lambda P^{-1}\) and \(B=\rho P\), with \(\lambda,\rho\in K\); in fact, in order for \(\varphi\) to be injective one needs \(\lambda,\rho\in K^{*}\). For any \(x_{1}\in K^{2},y_{1}\in M_{2,1}(K),\lambda,\rho\in K^{*},P\in GL_{2}(K)\), denote by \(\psi_{x_{1},y_{1},\lambda,\rho,P}:\mathcal{R}\rightarrow\mathcal{R}\) the map defined by
\[\psi_{x_{1},y_{1},\lambda,\rho,P}(\left[\begin{array}{cc}\alpha&x\\ y&f\end{array}\right])=\left[\begin{array}{cc}\alpha&\alpha x_{1}+\lambda xP ^{-1}-x_{1}PfP^{-1}\\ \alpha y_{1}+\rho Py-PfP^{-1}y_{1}&PfP^{-1}\end{array}\right].\]
The considerations above show that \(\psi_{x_{1},y_{1},\lambda,\rho,P}\) is an algebra endomorphism of \(\mathcal{R}\). As it is clearly injective, it is in fact an automorphism of \(\mathcal{R}\). We showed that any automorphism of \(\mathcal{R}\) is one such \(\psi_{x_{1},y_{1},\lambda,\rho,P}\).
A straightforward computation shows that
\[\psi_{x_{1}^{\prime},y_{1}^{\prime},\lambda^{\prime},\rho^{\prime},P^{\prime}} \psi_{x_{1},y_{1},\lambda,\rho,P}=\psi_{x_{1}^{\prime}+\lambda^{\prime}x_{1}(P ^{\prime})^{-1},y_{1}^{\prime}+\rho^{\prime}P^{\prime}y_{1},\lambda^{\prime} \lambda,\rho^{\prime}\rho,P^{\prime}P} \tag{8}\]
Consider the additive group \(A=K^{2}\times M_{2,1}(K)\) and the multiplicative group \(B=K^{*}\times K^{*}\times GL_{2}(K)\). Then \(B\) acts on \(A\) by \((\lambda,\rho,P)\cdot(x_{1},y_{1})=(\lambda x_{1}P^{-1},\rho Py_{1})\), and (8) shows that
\[\Psi:A\rtimes B\to\operatorname{Aut}(\mathcal{R}),\ \Psi(x_{1},y_{1}, \lambda,\rho,P)=\psi_{x_{1},y_{1},\lambda,\rho,P}\]
is a group morphism. We have also seen that \(\Psi\) is surjective. Now \(\psi_{x_{1},y_{1},\lambda,\rho,P}\) is the identity morphism if and only if
\[PfP^{-1}=f,\ \alpha x_{1}+\lambda xP^{-1}-x_{1}PfP^{-1}=x,\ \alpha y_{1}+ \rho Py-PfP^{-1}y_{1}=y\]
for any \(\alpha\in K,x\in K^{2},y\in M_{2,1}(K),f\in M_{2}(K)\). If we take \(\alpha=1,x=0,f=0\) in the second relation, we get \(x_{1}=0\). Hence \(\lambda xP^{-1}=x\) for any \(x\), so \(P=\lambda I_{2}\). Similarly, the third relation shows that \(y_{1}=0\) and \(P=\rho^{-1}I_{2}\). Therefore \(\operatorname{Ker}(\Psi)=0\times B_{0}\), where \(B_{0}=\{(\lambda,\lambda^{-1},\lambda I_{2})|\lambda\in K^{*}\}\). As \(B_{0}\) acts trivially on \(A\), the action of \(B\) induces an action of the factor group \(\frac{B}{B_{0}}\) on \(A\), and then \(\operatorname{Aut}(\mathcal{R})\simeq\frac{A\rtimes B}{0\rtimes B_{0}}\simeq A \rtimes\frac{B}{B_{0}}\). Denoting by \(\overline{b}\) the class of some \(b\in B\) modulo \(B_{0}\), we see that
\[\overline{(\lambda,\rho,P)}=\overline{(\rho^{-1},\rho,\rho^{-1}I_{2})(\lambda\rho,1,\rho P)}=\overline{(\lambda\rho,1,\rho P)},\]
so there is a group isomorphism \(\Gamma:K^{*}\times GL_{2}(K)\to\frac{B}{B_{0}}\) taking \((\lambda,P)\) to \(\overline{(\lambda,1,P)}\). \(\Gamma\) induces an action of \(K^{*}\times GL_{2}(K)\) on \(A\), given by
\[(\lambda,P)\cdot(x_{1},y_{1})=(\lambda,1,P)\cdot(x_{1},y_{1})=(\lambda x_{1}P^ {-1},Py_{1}).\]
We obtain a composition of group isomorphisms
\[\Phi:A\rtimes(K^{*}\times GL_{2}(K))\longrightarrow A\rtimes\frac{B}{B_{0}} \longrightarrow\operatorname{Aut}(\mathcal{R})\]
given by \(\Phi(x_{1},y_{1},\lambda,P)=\psi_{x_{1},y_{1},\lambda,1,P}\). Now we denote \(\psi_{x_{1},y_{1},\lambda,1,P}=\varphi_{x_{1},y_{1},\lambda,P}\) and the first part of the statement is proved.
A direct computation shows that an element \(\left[\begin{array}{cc}\beta&z\\ g&m\end{array}\right]\) of \(\mathcal{R}\) is invertible if and only if \(\beta\neq 0\) and \(m\in GL_{2}(K)\), and in this case its inverse is \(\left[\begin{array}{cc}\beta^{-1}&-\beta^{-1}zm^{-1}\\ -\beta^{-1}m^{-1}g&m^{-1}\end{array}\right]\), and the associated inner automorphism of \(\mathcal{R}\) takes \(\left[\begin{array}{cc}\alpha&x\\ y&f\end{array}\right]\) to
\[\left[\begin{array}{cc}\alpha&\alpha\beta^{-1}z+\beta^{-1}xm-\beta^{-1}zm^{ -1}fm\\ -\alpha m^{-1}g+\beta m^{-1}y+m^{-1}fg&m^{-1}fm\end{array}\right],\]
so it is just \(\psi_{\beta^{-1}z,-m^{-1}g,\beta^{-1},\beta,m^{-1}}\). Hence \(\varphi_{x_{1},y_{1},\lambda,P}=\psi_{x_{1},y_{1},\lambda,1,P}\) is inner if and only if \(\psi_{x_{1},y_{1},\lambda,1,P}=\psi_{\beta^{-1}z,-m^{-1}g,\beta^{-1},\beta,m^{ -1}}\) for some \(\beta\in K^{*},z\in K^{2},g\in M_{2,1}(K),m\in GL_{2}(K)\), and taking into account the description of the kernel of \(\Psi\), this equality is equivalent to \((x_{1},y_{1},\lambda,1,P)=(\beta^{-1}z,-m^{-1}g,\beta^{-1},\beta,m^{-1})(0,0, \rho,\rho^{-1},\rho I_{2})=(\beta^{-1}z,-m^{-1}g,\beta^{-1}\rho,\beta\rho^{-1}, \rho m^{-1})\) for some \(\rho\in K^{*}\). Equating the corresponding positions, we get \(1=\beta\rho^{-1}\), so \(\rho=\beta\), and then \(\lambda=\beta^{-1}\rho=1\)
\(z=\beta x_{1}=\rho x_{1}\), \(m=\rho P^{-1}\) and \(g=-my_{1}=-\rho P^{-1}y_{1}\). We conclude that \(\varphi_{x_{1},y_{1},\lambda,P}\) is inner if and only if \(\lambda=1\), and in this case, by making the choice \(\rho=1\), \(\varphi_{x_{1},y_{1},1,P}\) is the inner automorphism associated with the invertible element \(\left[\begin{array}{cc}1&x_{1}\\ -P^{-1}y_{1}&P^{-1}\end{array}\right]\).
We have obtained that \(\operatorname{Inn}(\mathcal{R})=\Phi(A\rtimes(1\times GL_{2}(K)))\), so then
\[\operatorname{Out}(\mathcal{R})=\frac{\operatorname{Aut}(\mathcal{R})}{ \operatorname{Inn}(\mathcal{R})}\simeq\frac{A\rtimes(K^{*}\times GL_{2}(K))}{ A\rtimes(1\times GL_{2}(K))}\simeq K^{*}.\]
Finally, we note that the outer automorphism corresponding to \(\lambda\in K^{*}\) through the isomorphism \(\operatorname{Out}(\mathcal{R})\simeq K^{*}\) is (the class of) \(\varphi_{0,0,\lambda,I_{2}}\).
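Concretely, the representative \(\varphi_{0,0,\lambda,I_{2}}\) is given by \(\left[\begin{array}{cc}\alpha&x\\ y&f\end{array}\right]\mapsto\left[\begin{array}{cc}\alpha&\lambda x\\ y&f\end{array}\right]\); in terms of the basis \(\mathbf{B}\) of Section 3, it fixes \(E\), the \(Y_{i}\) and the \(F_{ij}\), and multiplies each \(X_{i}\) by \(\lambda\).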
## 5. Semitrivial extensions and a question on Frobenius strongly graded algebras
We first present a general construction. Let \(R\) be a finite dimensional \(K\)-algebra, and let \(R^{*}\) be its linear dual with the usual \(R\)-bimodule structure, and actions denoted by \(\rightharpoonup\) and \(\leftharpoonup\). Let \(\psi:R^{*}\otimes_{R}R^{*}\to R\) be a morphism of \(R\)-bimodules, and denote \(\psi(r^{*}\otimes_{R}s^{*})\) by \([r^{*},s^{*}]\) for any \(r^{*},s^{*}\in R^{*}\). We say that \(\psi\) is associative if \([r^{*},s^{*}]\rightharpoonup t^{*}=r^{*}\leftharpoonup[s^{*},t^{*}]\) for any \(r^{*},s^{*},t^{*}\in R^{*}\); in other words, we have a Morita context \((R,R,R^{*},R^{*},\psi,\psi)\) connecting the rings \(R\) and \(R\), with both bimodules being \(R^{*}\), and both Morita maps equal to \(\psi\). It follows from Morita theory that if \(\psi\) is associative and surjective, then it is an isomorphism of \(R\)-bimodules.
If \(\psi:R^{*}\otimes_{R}R^{*}\to R\) is an associative morphism of \(R\)-bimodules, we consider the semitrivial extension \(R\rtimes_{\psi}R^{*}\), which is the cartesian product \(R\times R^{*}\) with the usual addition, and multiplication defined by
\[(r,r^{*})(s,s^{*})=(rs+[r^{*},s^{*}],(r\rightharpoonup s^{*})+(r^{*} \leftharpoonup s))\]
for any \(r,s\in R,r^{*},s^{*}\in R^{*}\). Then \(R\rtimes_{\psi}R^{*}\) is an algebra with identity element \((1,0)\); this construction was introduced in [13]. Moreover, it is a \(C_{2}\)-graded algebra, where \(C_{2}=<c>\) is a cyclic group of order \(2\), with homogeneous components \((R\rtimes_{\psi}R^{*})_{e}=R\times 0\) and \((R\rtimes_{\psi}R^{*})_{c}=0\times R^{*}\); here \(e\) denotes the neutral element of \(C_{2}\). It is a strongly graded algebra if and only if \(\psi\) is surjective, thus an isomorphism.
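For instance, one checks directly from this multiplication rule that \((1,0)\) is indeed a unit (a small worked verification added here for illustration):
\[(1,0)(s,s^{*})=(s+[0,s^{*}],(1\rightharpoonup s^{*})+(0\leftharpoonup s))=(s,s^{*}),\qquad(r,r^{*})(1,0)=(r+[r^{*},0],(r\rightharpoonup 0)+(r^{*}\leftharpoonup 1))=(r,r^{*}),\]
since \([0,s^{*}]=[r^{*},0]=0\) and \(1\) acts trivially on \(R^{*}\).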
**Proposition 5.1**.: _Let \(R\) be a finite dimensional algebra and let \(\psi:R^{*}\otimes_{R}R^{*}\to R\) be an associative morphism of \(R\)-bimodules. Then \(R\rtimes_{\psi}R^{*}\) is a symmetric algebra._
Proof.: We first note that
\[t^{*}([r^{*},s^{*}])=r^{*}([s^{*},t^{*}])\text{ for any }r^{*},s^{*},t^{*}\in R ^{*}. \tag{9}\]
Indeed, we just evaluate both sides of \([r^{*},s^{*}]\rightharpoonup t^{*}=r^{*}\leftharpoonup[s^{*},t^{*}]\) at \(1\).
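Explicitly, using \((a\rightharpoonup f)(b)=f(ba)\) and \((f\leftharpoonup a)(b)=f(ab)\) for \(a,b\in R\) and \(f\in R^{*}\), evaluation at \(1\) gives
\[([r^{*},s^{*}]\rightharpoonup t^{*})(1)=t^{*}([r^{*},s^{*}])\qquad\text{and}\qquad(r^{*}\leftharpoonup[s^{*},t^{*}])(1)=r^{*}([s^{*},t^{*}]),\]
which is exactly (9).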
Denote \(A=R\rtimes_{\psi}R^{*}\) and define
\[\Phi:A\to A^{*},\;\Phi(r,r^{*})(s,s^{*})=r^{*}(s)+s^{*}(r)\;\;\text{for any }r,s\in R,r^{*},s^{*}\in R^{*}.\]
It is clear that \(\Phi\) is injective. Indeed, if \(\Phi(r,r^{*})=0\), then \(r^{*}(s)=\Phi(r,r^{*})(s,0)=0\) for any \(s\in R\), so \(r^{*}=0\), and \(s^{*}(r)=\Phi(r,r^{*})(0,s^{*})=0\) for any \(s^{*}\in R^{*}\), so \(r=0\). Thus \(\Phi\) is a linear isomorphism. Moreover, if \((x,x^{*}),(r,r^{*}),(s,s^{*})\in A\), then
\[(\Phi((x,x^{*})(r,r^{*})))(s,s^{*}) = (\Phi(xr+[x^{*},r^{*}],(x\rightharpoonup r^{*})+(x^{*}\leftharpoonup r)))(s,s^{*})\] \[= (x\rightharpoonup r^{*})(s)+(x^{*}\leftharpoonup r)(s)+s^{*}(xr+[x^{*},r^{*}])\] \[= r^{*}(sx)+x^{*}(rs)+s^{*}(xr)+s^{*}([x^{*},r^{*}])\] \[= r^{*}(sx)+x^{*}(rs)+s^{*}(xr)+r^{*}([s^{*},x^{*}])\quad\text{(by (9))}.\]
On the other hand,
\[(\Phi(x,x^{*}))((r,r^{*})(s,s^{*})) = (\Phi(x,x^{*}))(rs+[r^{*},s^{*}],(r\rightharpoonup s^{*})+(r^{*}\leftharpoonup s))\] \[= x^{*}(rs+[r^{*},s^{*}])+(r\rightharpoonup s^{*})(x)+(r^{*}\leftharpoonup s)(x)\] \[= x^{*}(rs)+x^{*}([r^{*},s^{*}])+s^{*}(xr)+r^{*}(sx)\] \[= r^{*}(sx)+x^{*}(rs)+s^{*}(xr)+r^{*}([s^{*},x^{*}])\quad\text{(by (9))},\]
so \((\Phi((x,x^{*})(r,r^{*})))(s,s^{*})=(\Phi(x,x^{*}))((r,r^{*})(s,s^{*}))\). Thus the bilinear form \((a,b)\mapsto\Phi(a)(b)\) on \(A\) is associative; it is clearly symmetric, since \(\Phi(r,r^{*})(s,s^{*})=r^{*}(s)+s^{*}(r)=\Phi(s,s^{*})(r,r^{*})\), and it is non-degenerate because \(\Phi\) is bijective. We conclude that \(A\) is a symmetric algebra.
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(r^{*}\) & \(s^{*}\) & \([r^{*},s^{*}]\) & \(t^{*}\) & \([r^{*},s^{*}]\rightharpoonup t^{*}\) \\ \hline
1 & \(Y_{i}^{*}\) & \(X_{i}^{*}\) & \(E\) & \(E^{*}\) & \(E^{*}\) \\ \hline
2 & \(Y_{i}^{*}\) & \(X_{i}^{*}\) & \(E\) & \(Y_{j}^{*}\) & \(Y_{j}^{*}\) \\ \hline
3 & \(X_{i}^{*}\) & \(Y_{j}^{*}\) & \(F_{ij}\) & \(F_{ri}^{*}\) & \(F_{ri}^{*}\) \\ \hline
4 & \(X_{i}^{*}\) & \(Y_{j}^{*}\) & \(F_{ij}\) & \(X_{j}^{*}\) & \(X_{i}^{*}\) \\ \hline
5 & \(E^{*}\) & \(Y_{i}^{*}\) & \(X_{i}\) & \(X_{i}^{*}\) & \(E^{*}\) \\ \hline
6 & \(Y_{i}^{*}\) & \(F_{ji}^{*}\) & \(X_{j}\) & \(X_{j}^{*}\) & \(E^{*}\) \\ \hline
7 & \(X_{i}^{*}\) & \(E^{*}\) & \(Y_{i}\) & \(Y_{j}^{*}\) & \(F_{ij}^{*}\) \\ \hline
8 & \(F_{ij}^{*}\) & \(X_{i}^{*}\) & \(Y_{j}\) & \(Y_{p}^{*}\) & \(F_{pj}^{*}\) \\ \hline \end{tabular}
We proceed similarly to find all triples \((r^{*},s^{*},t^{*})\) with elements in the basis \(\mathbf{B}^{*}\) such that \(r^{*}\leftharpoonup[s^{*},t^{*}]\neq 0\). We make appropriate choices for the indices to make the identifications between the two tables easier.
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(s^{*}\) & \(t^{*}\) & \([s^{*},t^{*}]\) & \(r^{*}\) & \(r^{*}\leftharpoonup[s^{*},t^{*}]\) \\ \hline \(1^{\prime}\) & \(Y_{i}^{*}\) & \(X_{i}^{*}\) & \(E\) & \(E^{*}\) & \(E^{*}\) \\ \hline \(2^{\prime}\) & \(Y_{j}^{*}\) & \(X_{j}^{*}\) & \(E\) & \(X_{i}^{*}\) & \(X_{i}^{*}\) \\ \hline \(3^{\prime}\) & \(X_{i}^{*}\) & \(Y_{p}^{*}\) & \(F_{ip}\) & \(F_{ij}^{*}\) & \(F_{pj}^{*}\) \\ \hline \(4^{\prime}\) & \(X_{i}^{*}\) & \(Y_{j}^{*}\) & \(F_{ij}\) & \(Y_{i}^{*}\) & \(Y_{j}^{*}\) \\ \hline \(5^{\prime}\) & \(E^{*}\) & \(Y_{j}^{*}\) & \(X_{j}\) & \(X_{i}^{*}\) & \(F_{ji}^{*}\) \\ \hline \(6^{\prime}\) & \(Y_{j}^{*}\) & \(F_{ri}^{*}\) & \(X_{r}\) & \(X_{i}^{*}\) & \(F_{ri}^{*}\) \\ \hline \(7^{\prime}\) & \(X_{i}^{*}\) & \(E^{*}\) & \(Y_{i}\) & \(Y_{i}^{*}\) & \(E^{*}\) \\ \hline \(8^{\prime}\) & \(F_{ji}^{*}\) & \(X_{j}^{*}\) & \(Y_{i}\) & \(Y_{i}^{*}\) & \(E^{*}\) \\ \hline \end{tabular}
We see that the two tables indicate the same non-vanishing triples \((r^{*},s^{*},t^{*})\) for the left hand side and the right hand side of the equality we need to prove, and in each case the two sides are indeed equal. The corresponding cases are: \(1=7^{\prime}\), \(2=4^{\prime}\), \(3=6^{\prime}\), \(4=2^{\prime}\), \(5=1^{\prime}\), \(6=8^{\prime}\), \(7=5^{\prime}\) and \(8=3^{\prime}\).
As a consequence, we can construct the semitrivial extension \(A=\mathcal{R}\rtimes_{\varphi}\mathcal{R}^{*}\). This algebra will answer in the negative our initial question presented in the Introduction, for both the symmetric property and the Frobenius property.
**Corollary 5.3**.: _Let \(A=\mathcal{R}\rtimes_{\varphi}\mathcal{R}^{*}\) with the \(C_{2}\)-grading given by \(A_{e}=\mathcal{R}\rtimes 0\) and \(A_{c}=0\times\mathcal{R}^{*}\). Then \(A\) is a strongly graded algebra which is symmetric, whose homogeneous component \(A_{e}\) of trivial degree is not Frobenius._
This example also answers a question posed by the referee of our paper [5]. It was proved in [5, Proposition 2.1] that if \(B\) is a subalgebra of a Frobenius algebra \(A\), such that \(A\) is free as a left \(B\)-module and also as a right \(B\)-module, then \(B\) is Frobenius, too. The question was whether the conclusion remains valid if we only suppose that \(A\) is projective as a left \(B\)-module and as a right \(B\)-module. The example constructed in Corollary 5.3 shows that the answer is negative. Indeed, \(A\) is even symmetric, and it is projective as a left \(A_{e}\)-module and as a right \(A_{e}\)-module, however \(A_{e}\) is not a Frobenius algebra.
## 6. Order 2 elements in Picard groups and associative isomorphisms
We return to an arbitrary finite dimensional algebra \(R\). We have seen in Example 2.7 that if \(R^{*}\) is an invertible \(R\)-bimodule, then \([R^{*}]\) may have order \(>2\) (finite or infinite) in \(\operatorname{Pic}(R)\), thus \(R^{*}\otimes_{R}R^{*}\) may not be isomorphic to \(R\). Now we look at the case where \(R^{*}\otimes_{R}R^{*}\simeq R\) as
bimodules, thus \(R^{*}\) is an invertible bimodule, and its order in the Picard group is at most \(2\); we have seen in Proposition 2.3 that \(R\) is necessarily quasi-Frobenius in this case. In order to construct semitrivial extensions, we are interested in the associativity of isomorphisms \(R^{*}\otimes_{R}R^{*}\simeq R\). This is the content of Question 2 in the Introduction, which asks whether any such isomorphism is associative.
The following shows that the answer to the question depends only on the algebra, and not on a particular choice of the isomorphism.
**Proposition 6.1**.: _If \(R\) is a finite dimensional algebra such that \(R^{*}\otimes_{R}R^{*}\simeq R\) as bimodules and there exists an associative isomorphism \(R^{*}\otimes_{R}R^{*}\to R\), then any other such isomorphism is associative._
Proof.: We first note that if \(\psi,\psi^{\prime}:R^{*}\otimes_{R}R^{*}\to R\) are isomorphisms, then \(\psi^{-1}\psi^{\prime}\) is an automorphism of the bimodule \(R\), so it is the multiplication by a central invertible element \(c\). Therefore \(\psi^{\prime}(y)=c\psi(y)\) for any \(y\in R^{*}\otimes_{R}R^{*}\).
We also see that if \(c\) is a central element of \(R\), then \(c\rightharpoonup r^{*}=r^{*}\leftharpoonup c\) for any \(r^{*}\in R^{*}\). Indeed, \((c\rightharpoonup r^{*})(a)=r^{*}(ac)=r^{*}(ca)=(r^{*}\leftharpoonup c)(a)\) for any \(a\in R\).
Now \(R^{*}\otimes_{R}R^{*}\simeq R\), so \(R^{*}\) is an invertible \(R\)-bimodule and the functor \(R^{*}\otimes_{R}-\) is an equivalence of categories. By Morita theory, see [2, Proposition 3.1, page 60], there exists a strict Morita context \((R,R,R^{*},R^{*},\phi,\phi^{\prime})\), where \(\phi,\phi^{\prime}:R^{*}\otimes_{R}R^{*}\to R\) are isomorphisms of \(R\)-bimodules satisfying \(\phi(r^{*}\otimes_{R}s^{*})\rightharpoonup t^{*}=r^{*}\leftharpoonup\phi^{\prime}(s^{*}\otimes_{R}t^{*})\). But \(\phi^{\prime}=c\phi\) for some central invertible element \(c\in R\). We get that \(\phi(r^{*}\otimes_{R}s^{*})\rightharpoonup t^{*}=c\rightharpoonup(r^{*}\leftharpoonup\phi(s^{*}\otimes_{R}t^{*}))\) for any \(r^{*},s^{*},t^{*}\in R^{*}\), i.e. \(\phi\) is associative up to the central unit \(c\).
We show that any isomorphism \(\psi:R^{*}\otimes_{R}R^{*}\to R\) of bimodules has the same property. Indeed, \(\psi=b\phi\) for some central invertible element \(b\in R\), and then
\[\psi(r^{*}\otimes_{R}s^{*})\rightharpoonup t^{*} = (b\phi(r^{*}\otimes_{R}s^{*}))\rightharpoonup t^{*}\] \[= \phi(r^{*}\otimes_{R}s^{*})\rightharpoonup(b\rightharpoonup t^{*})\] \[= \phi(r^{*}\otimes_{R}s^{*})\rightharpoonup(t^{*}\leftharpoonup b)\] \[= (\phi(r^{*}\otimes_{R}s^{*})\rightharpoonup t^{*})\leftharpoonup b\] \[= (c\rightharpoonup(r^{*}\leftharpoonup\phi(s^{*}\otimes_{R}t^{*}))) \leftharpoonup b\] \[= c\rightharpoonup(r^{*}\leftharpoonup(\phi(s^{*}\otimes_{R}t^{*})b))\] \[= c\rightharpoonup(r^{*}\leftharpoonup\psi(s^{*}\otimes_{R}t^{*}))\]
Now if there is such an isomorphism \(\psi\) which is associative, we get that \(c\rightharpoonup(r^{*}\leftharpoonup\psi(s^{*}\otimes_{R}t^{*}))=r^{*} \leftharpoonup\psi(s^{*}\otimes_{R}t^{*})\) for any \(r^{*},s^{*},t^{*}\in R^{*}\). As \(\psi\) is surjective, this shows that \(c\rightharpoonup r^{*}=r^{*}\) for any \(r^{*}\in R^{*}\), so then \(r^{*}(ac)=r^{*}(a)\) for any \(a\in R\) and \(r^{*}\in R^{*}\). Hence \(ac=a\) for any \(a\), so \(c=1\).
We conclude that any other isomorphism \(R^{*}\otimes_{R}R^{*}\to R\) is associative.
The following answers in the positive our question in the Frobenius case.
**Proposition 6.2**.: _Let \(R\) be a Frobenius algebra such that \(R^{*}\otimes_{R}R^{*}\simeq R\) as bimodules. Then any isomorphism \(\varphi:R^{*}\otimes_{R}R^{*}\to R\) is associative._
Proof.: Let \(\lambda\in R^{*}\) be a Frobenius form and let \(\nu\) be the Nakayama automorphism associated with \(\lambda\). Then \(R^{*}\simeq{}_{1}R_{\nu}\), so \(R^{*}\otimes_{R}R^{*}\simeq{}_{1}R_{\nu}\otimes_{R}{}_{1}R_{\nu}\simeq{}_{1}R_{ \nu^{2}}\). Thus \({}_{1}R_{\nu^{2}}\simeq R\), which shows that \(\nu^{2}\) is inner; let \(\nu^{2}(r)=u^{-1}ru\) for any \(r\in R\), where \(u\) is an invertible element of \(R\).
We have seen in Proposition 2.4 that \(\theta:{}_{1}R_{\nu}\to R^{*}\), \(\theta(r)=r\rightharpoonup\lambda\), is a bimodule isomorphism. It is easy to check that \(\delta:{}_{1}R_{\nu}\otimes_{R}{}_{1}R_{\nu}\to{}_{1}R_{\nu^{2}}\), \(\delta(r\otimes_{R}s)=r\nu(s)\) for any \(r,s\in R\), and \(\omega:{}_{1}R_{\nu^{2}}\to R\), \(\omega(r)=ru^{-1}\) for any \(r\in R\), are both bimodule isomorphisms. Composing these isomorphisms we obtain an \(R\)-bimodule isomorphism \(\psi:R^{*}\otimes_{R}R^{*}\to R\), \(\psi=\omega\delta(\theta^{-1}\otimes\theta^{-1})\). Explicitly,
\[\psi((r\rightharpoonup\lambda)\otimes_{R}(s\rightharpoonup\lambda))=r\nu(s)u^ {-1}\]
for any \(r,s\in R\). Denoting \(\psi(r^{*}\otimes_{R}s^{*})\) by \([r^{*},s^{*}]\), we have
\[[r\rightharpoonup\lambda,s\rightharpoonup\lambda]\rightharpoonup(t \rightharpoonup\lambda) = (r\nu(s)u^{-1})\rightharpoonup(t\rightharpoonup\lambda)\] \[= (r\nu(s)u^{-1}t)\rightharpoonup\lambda\]
and
\[(r\rightharpoonup\lambda)\leftharpoonup[s\rightharpoonup\lambda,t \rightharpoonup\lambda] = (r\rightharpoonup\lambda)\leftharpoonup(s\nu(t)u^{-1})\] \[= r\rightharpoonup(\lambda\leftharpoonup(s\nu(t)u^{-1}))\] \[= r\rightharpoonup(\nu(s)\nu^{2}(t)\nu(u^{-1})\rightharpoonup\lambda)\] \[= (r\nu(s)u^{-1}tu\nu(u)^{-1})\rightharpoonup\lambda,\]
showing that \(\psi\) is associative if and only if \(\nu(u)=u\).
Now for any \(a\in R\)
\[\lambda(au) = \lambda(u\nu(a))\ \ \mbox{(since $\nu$ is the Nakayama automorphism)}\] \[= \lambda(u\nu^{2}(\nu^{-1}(a)))\] \[= \lambda(uu^{-1}\nu^{-1}(a)u)\ \ \mbox{(since $\nu^{2}$ is inner)}\] \[= \lambda(\nu^{-1}(a)u)\] \[= \lambda(ua)\ \ \mbox{(since $\nu$ is the Nakayama automorphism)},\]
showing that \(\lambda(au)=\lambda(ua)\), or equivalently, \(u\rightharpoonup\lambda=\lambda\leftharpoonup u\). Therefore \(\theta(u)=u\rightharpoonup\lambda=\lambda\leftharpoonup u=\nu(u)\rightharpoonup \lambda=\theta(\nu(u))\), so \(\nu(u)=u\), since \(\theta\) is injective. This shows that \(\psi\) is associative, and then so is any isomorphism \(\varphi:R^{*}\otimes_{R}R^{*}\simeq R\).
As a consequence, we obtain a class of examples of strongly graded algebras that are symmetric as algebras, while their homogeneous component of trivial degree is not symmetric. Indeed, we can take a Frobenius algebra \(R\) such that the order of \([R^{*}]\) in \(\operatorname{Pic}(R)\) is \(2\); in other words, the Nakayama automorphism \(\nu\) with respect to a Frobenius form is not inner, but \(\nu^{2}\) is inner. Then there is an isomorphism of \(R\)-bimodules \(\psi:R^{*}\otimes_{R}R^{*}\to R\), and by Proposition 6.2, it is associative. Hence we can form the semitrivial extension \(R\rtimes{}_{\psi}R^{*}\), which is a strongly \(C_{2}\)-graded algebra which is symmetric, and its homogeneous component of trivial degree is isomorphic to \(R\), which is Frobenius, but not symmetric.
We have several classes of Frobenius algebras \(R\) such that \([R^{*}]\) has order \(2\) in \(\operatorname{Pic}(R)\):
(i) A first class follows from Example 2.7. For \(R=H_{1}(C,n,c,c^{*})\), the order of \([R^{*}]\) is \(2\) if and only if \(n=2\). Thus we obtain such an \(R\) if we have a finite abelian group \(C\), an element \(c\in C\)
with \(c^{2}\neq 1\), and a linear character \(c^{*}\in C^{*}\) such that \((c^{*})^{2}=1\) and \(c^{*}(c)=-1\). A particular family of such examples is when we take \(C=<c>\simeq C_{2r}\), where \(r\geq 2\), and \(c^{*}\in C^{*}\) defined by \(c^{*}(c)=-1\), obtaining a Hopf algebra of dimension \(4r\), generated by the grouplike element \(c\) and the \((1,c)\)-skew-primitive element \(x\), subject to relations \(c^{2r}=1,x^{2}=c^{2}-1,xc=-cx\).
(ii) A second class follows from Example 2.7, too. For \(R=H_{2}(C,n,c,c^{*})\), the order of \([R^{*}]\) is \(2\) if and only if \(\frac{m}{(\frac{m}{n},n-1)}=2\), where \(m\) is the order of \(c^{*}\). It is easy to check that this happens if and only if \(m=n=2\). Thus we need a finite abelian group \(C\), a character \(c^{*}\in C^{*}\) such that \((c^{*})^{2}=1\) and an element \(c\in C\) such that \(c^{*}(c)=-1\) (in particular, the order of \(c\) must be even). A particular family of such examples is when we take \(C=<c>\simeq C_{2r}\), where \(r\geq 1\), and \(c^{*}\in C^{*}\) defined by \(c^{*}(c)=-1\), obtaining a Hopf algebra of dimension \(4r\), generated by the grouplike element \(c\) and the \((1,c)\)-skew-primitive element \(x\), subject to relations \(c^{2r}=1,x^{2}=0,xc=-cx\). For \(r=1\) this is just Sweedler's \(4\)-dimensional Hopf algebra.
(iii) Another example is \(R_{-1}=K_{-1}[X,Y]/(X^{2},Y^{2})\) from Example 2.8 for \(q=-1\).
(iv) Let \(H\) be a unimodular finite dimensional Hopf algebra, i.e., the spaces of left integrals and right integrals coincide in \(H\); equivalently, the unimodular element \(\mathcal{G}\) is trivial. By Radford's formula, see [8, Theorem 12.10] or [15, Theorem 10.5.6], \(S^{4}(h)=a^{-1}ha\) for any \(h\in H\), where \(a\) is the modular element of \(H^{*}\) regarded inside \(H\) via the isomorphism \(H\simeq H^{**}\). Thus \(S^{4}\) is inner, and then the order of \(S^{2}\) in \(\operatorname{Out}(H)\) is either \(1\) or \(2\). By Theorem 2.9, in the first case \([H^{*}]\) has order \(1\) in \(\operatorname{Pic}(H)\), and \(H\) is symmetric, while in the second case, \([H^{*}]\) has order \(2\) in \(\operatorname{Pic}(H)\). We conclude that a class of Frobenius algebras as we are looking for is the family of all unimodular finite dimensional Hopf algebras that are not symmetric. A class of such objects was explicitly constructed in [17].
We do not know whether the answer to Question 2 is positive for any finite dimensional quasi-Frobenius algebra.
**Acknowledgement.** The first two authors were supported by a grant of UEFISCDI, project number PN-III-P4-PCE-2021-0282, contract PCE 47/2022.
|
2309.08726 | Using a quantitative assessment of propulsion biomechanics in wheelchair
racing to guide the design of personalized gloves: a case study | This study with a T-52 class wheelchair racing athlete aimed to combine
quantitative biomechanical measurements to the athlete's perception to design
and test different prototypes of a new kind of rigid gloves. Three personalized
rigid gloves with various, fixed wrist extension angles were prototyped and
tested on a treadmill in a biomechanics laboratory. The prototype with 45{\deg}
wrist extension was the athlete's favourite as it reduced his perception of
effort. Biomechanical assessment and user-experience data indicated that his
favourite prototype increased wrist stability throughout the propulsion cycle
while maintaining a very similar propulsion technique to the athlete's prior
soft gloves. Moreover, the inclusion of an innovative attachment system on the
new gloves allowed the athlete to put his gloves on by himself, eliminating the
need for external assistance and thus significantly increasing his autonomy.
This multidisciplinary approach helped to prototype and develop a new rigid
personalized gloves concept and is clearly a promising avenue to tailor
adaptive sports equipment to an athlete's needs. | Félix Chénier, Gerald Parent, Mikaël Leblanc, Colombe Bélaise, Mathieu Andrieux | 2023-09-15T19:24:50Z | http://arxiv.org/abs/2309.08726v2 | **PRE-PRINT**
## Abstract
This study with a T-52 class wheelchair racing athlete aimed to combine quantitative biomechanical measurements to the athlete's perception to design and test different prototypes of a new kind of rigid gloves. Three personalized rigid gloves with various, fixed wrist extension angles were prototyped and tested on a treadmill in a biomechanics laboratory. The prototype with 45\({}^{\circ}\) wrist extension was the athlete's favourite as it reduced his perception of effort. Biomechanical assessment and user-experience data indicated that his favourite prototype increased wrist stability throughout the propulsion cycle while maintaining a very similar propulsion technique to the athlete's prior soft gloves. Moreover, the inclusion of an innovative attachment system on the new gloves allowed the athlete to put his gloves on by himself, eliminating the need for external assistance and thus significantly increasing his autonomy. This multidisciplinary approach helped to prototype and develop a new rigid personalized gloves concept and is clearly a promising avenue to tailor adaptive sports equipment to an athlete's needs.
## Keywords
Wheelchair propulsion, adaptive sports, ergonomic design, autonomy, treadmill, 3D scans, 3D printers
## Introduction
The first biomechanical studies on wheelchair racing were done during the late 1980s, with the main objective being to enhance the athlete's performance (Ridgway et al., 1988; Sanderson & Sommer III, 1985). Since then, radical technological advances have led to major modifications to the wheelchair. For instance, wheels were reduced from four to three, a crown compensator to assist the athlete in negotiating transitions from straights to curves was added, and lighter frames and wheels are now used (Cooper & De Luigi, 2014). The technical optimization of the
wheelchair, the interface and the accessories is in fact a main objective of most biomechanical studies in Paralympic sports (Morrien et al., 2016).
Among the most recent shifts in wheelchair racing optimization is the athletes' choice regarding the gloves used to push the wheels during propulsion. During the last 10 years, most elite wheelchair racers have replaced traditional soft gloves made of padded leather and Velcro with rigid, thermoformed plastics (Rice, 2016). Optimizing the gloves remains a promising avenue, since at every stroke, a large quantity of kinetic energy initially stored in the upper body must be transferred to the pushrim via the gloves, during a very short amount of time (Vanlandewijck et al., 2001). In standard wheelchair propulsion, a large component of the force applied on the pushrims does not contribute to wheelchair motion (Boninger et al., 1999; Robertson et al., 1996; H. W. Wu et al., 1998), and wheelchair racing thus seems to be mechanically inefficient (Chenier et al., 2021). As such, any energy that is lost in deformation, friction or bouncing during this short contact must be avoided to improve propulsion efficiency and thus performance.
Rice et al. (2015) are the only authors to have measured the impact of both types of gloves on performance. While they did observe differences in some kinetic parameters (e.g., braking moment) and spatiotemporal parameters (e.g., cadence, push angle) at submaximal steady state velocities, they did not at maximal intensity. Moreover, their sample was relatively homogeneous, with 9 athletes competing in only two classes (T53 and T54) of the seven allowed classes of wheelchair racing (T32-T34, T51-T54); consequently, we would expect to observe even fewer measurable differences between both types of gloves in a more diverse sample. People with different disabilities and preferences most probably have different needs and therefore different optimal gloves designs. For instance, a person with affected wrist control may benefit from additional wrist support, while a person without wrist mobility impairment may benefit from more freedom to generate more momentum using wrist movement.
Personalizing racing accessories such as racing gloves is more accessible than ever, with recent technologies such as motion capture, force measurement devices and rapid prototyping devices (e.g., 3D printers). However, while 3D motion capture has largely been used to enhance the positioning, seating interface, or propulsion technique in standard wheelchair propulsion (Dellabiancia et al., 2013) and in court sports wheelchair propulsion (B. S. Mason et al., 2013), to our knowledge no recent literature has focused on optimizing the propulsion in wheelchair racing using 3D motion capture. For kinetic parameters, although some instrumented wheels have been built (Chenier et al., 2021; Goosey-Tolfrey et al., 2001; Limroongreungrat et al., 2009; Miyazaki et al., 2020; Rice et al., 2015), an instrumented racing wheel has never been used as a tool to drive personalized design in wheelchair racing.
In this single case study, we present how combining quantitative biomechanical measurements with a qualitative user perception questionnaire helped us to create and test different prototypes of a new kind of rigid gloves that are adapted to the specific needs of a single athlete with Charcot-Marie Tooth disease.
## Context of the study
This project was initiated when a wheelchair racing athlete contacted the research team about the possibility of designing new racing gloves that would be more tailored to his needs. The athlete reported that his current soft gloves were slipping during propulsion and induced numbing of his hands. He also mentioned that before propelling, he needed approximately 10 minutes of assistance to tape his hands and forearms in order to keep his wrist joints as stable as possible.
### Participant
The aforementioned person is a regional parasport athlete, of functional class T-52, male, aged 32, 1.75 m, 57 kg. He has had an IPC (International Paralympic Committee) permanent functional classification since the start of his athletic career. The origin of his disability is a neuropathic disease (Charcot-Marie Tooth, type IIa), which impairs the strength of his distal musculature. He has muscular atrophy in his forearms, hands, thighs, legs and feet, and moderate muscular atrophy in his arms. His muscular strength levels scored 3 on the ASIA scale for the proximal
musculature of the upper and lower limbs, and 0 for the fingers and toes.
He gave his written consent to take part in the study and to publish the results and conclusions. The protocol was developed in conformity with the ethical principles of Cegep de Terrebonne.
### Gloves design
The design of the gloves aimed to 1) stabilize the athlete's wrists in an optimal position for propulsion, and 2) allow him to put his gloves on and attach them independently. The ideation phase and validation models determined that the rigid plastic gloves would cover 2/3 of the forearm's length and circumference and be attached by nylon Velcro bands and g-hooks. Moreover, a mushroom-shaped button on the back of the hand and rings at the ends of the straps were included on the gloves to help the athlete put them on autonomously. We took 3D scans of the participant's hands and forearms and created 3D-printed prototypes using fused filament fabrication with PLA plastic, as shown in Fig. 1b. To determine which fixed wrist extension angle would be optimal, the participant was asked to wear his usual soft gloves, stabilized with tape, as usually done. We measured his wrist extension in this static, unloaded condition, using a goniometer. This gave an extension angle of 45\({}^{\circ}\), which we selected as the fixed extension for one rigid prototype. However, since the athlete's usual gloves are soft, this 45\({}^{\circ}\) angle was expected to vary during propulsion, and therefore we built two other prototypes with lower and higher angles of 30\({}^{\circ}\) and 55\({}^{\circ}\) extension.
## Prototype evaluation
### Instrumentation
**Gloves:** In addition to the three pairs of prototypes, the participant also used his original racing gloves (soft-cushioned, leather coated and strapped at wrist level) to allow for comparisons.
**Wheelchair and wheels:** The participant used his own custom racing wheelchair (modified Invacare TopEnd). A custom force-sensing instrumented wheel based on the wheel described in Chenier et al. (2021) was installed on the right side of the wheelchair. A 14-inch pushrim, equivalent to the one installed on the participant's left wheel, was installed on the instrumented wheel. The instrumented wheel measured the propulsive moment applied by the athlete on the pushrim at an average frequency of 2.7 kHz.
**Treadmill:** A treadmill (H/P Cosmos, Saturn 300/100r) was used at a 1% incline to simulate the friction effect present in real propulsion conditions (B. Mason et al., 2013). A guide attached to the treadmill allowed for safe anteroposterior limitation of the wheelchair's movements.
**Motion capture:** The kinematics of the participant and wheel were acquired using a passive 17-camera optoelectronic system (Optitrack, Motive 2.3.0) at a frequency of 180 Hz. Two rigid bodies of 3 reflective markers were affixed to the participant's right forearm and hand as shown in Fig. 1. Five reflective markers were attached to the racing wheelchair's wheel. The medial and lateral epicondyles and the ulnar and radial styloid processes were digitized using a probe and expressed in relation to the forearm rigid body. The 2\({}^{\text{\tiny{nd}}}\) and 5\({}^{\text{\tiny{th}}}\) metacarpal heads were also probed and expressed in relation to the hand rigid body.
### Tasks
**Speed selection:** The participant was instructed that he would have to propel himself at a high speed in continuous trials of more than one minute. To determine the propulsion speed that would be used during the tests, he was asked to propel using his own soft gloves while the speed of the treadmill was gradually increased, until he was unable to sustain the treadmill speed (at which point the wheelchair was moving backward on the treadmill). This speed was 3.5 m/s. The propulsion speed used for all the following tests was set at 2/3 of this maximum speed, which corresponds to a common training speed of 2.3 m/s.
**Gloves testing:** Each pair of gloves was tested twice, including the athlete's own gloves, in the following order: [0, 0, 2, 2, 1, 1, 3, 3], for a total of eight propulsion trials, with gloves 0 being the athlete's own gloves, and gloves 1, 2 and 3 being the 30\({}^{\circ}\), 45\({}^{\circ}\) and 55\({}^{\circ}\) wrist extension prototypes, respectively. At the beginning of each trial, the participant was stationary
on the treadmill. Then, after gently impacting the instrumented wheel to synchronize the wheel to the motion capture system, the treadmill gradually accelerated up to the predetermined speed of 2.3 m/s in less than 10 seconds. After one minute of acquisition at steady speed, the participant was instructed that the acquisition was completed and that he could stop propelling. He was then asked to rate his perceived level of effort using the 6-20 Borg scale for perceived exertion (Borg, 1982), and to rate his level of satisfaction with the gloves on a 0-10 scale. He was also asked to formulate his overall impression of the gloves that included various aspects like comfort, adjustment, and stability. A minimal pause of 10 minutes was allocated between each trial to recover and limit fatigue.
### Data processing
**Kinematic measurements:** The 3D trajectories of the reflective markers in space were filtered at 6 Hz using a no-lag, 2nd order Butterworth low-pass filter.
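As a minimal illustration of this filtering step (the study's processing was done in Matlab; the snippet below is a hypothetical Python equivalent using the 6 Hz cutoff and 180 Hz sampling rate stated in the text):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_no_lag(signal, cutoff_hz, sample_rate_hz, order=2):
    """Zero-lag (forward-backward) Butterworth low-pass filter."""
    nyquist = 0.5 * sample_rate_hz
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    return filtfilt(b, a, signal)  # filtfilt runs the filter forward and backward, removing phase lag

# Example: smooth a noisy 180 Hz marker trajectory with a 6 Hz cutoff
t = np.arange(0, 5, 1 / 180)
raw = np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)
smooth = lowpass_no_lag(raw, cutoff_hz=6.0, sample_rate_hz=180.0)
```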
Wrist extension angle \(\boldsymbol{\theta}_{\text{wrist}}\) was calculated using the right forearm and right-hand coordinate systems, expressed from the reconstructed bony landmarks in accordance with the recommendations of the International Society of Biomechanics (G. Wu et al., 2005). Wrist extension was defined as the first angle in a sequence of three mobile Euler angles (ZXY).
Hand position angle \(\boldsymbol{\theta}_{\text{hand}}\) was expressed in a fixed wheel hub coordinate system created from the circular motion of the wheel's markers, with the origin being the wheel centre, x being forward and z being normal to the wheel plane, and y being upward and inward due to wheel camber. Hand position angle \(\boldsymbol{\theta}_{\text{hand}}\) was defined as the angle between y and a line from the wheel centre to the hand, with 0\({}^{\circ}\) being the top of the pushrim and 90\({}^{\circ}\) being the pushrim's most forward point (Vanlandewijck et al., 2001).
Wheel rotation angle \(\boldsymbol{\theta}_{\text{wheel}}\) was calculated using the wheel's markers and was expressed in degrees in the wheel plane.
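A schematic version of these angle computations (a sketch with hypothetical variable names; the mobile ZXY Euler sequence and the hub-frame convention follow the definitions above) could look like:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def wrist_extension_deg(R_forearm, R_hand):
    """Wrist extension: first angle of a mobile (intrinsic) ZXY Euler sequence
    between the forearm and hand frames (3x3 rotation matrices, local -> lab)."""
    R_rel = R_forearm.T @ R_hand                      # hand frame expressed in the forearm frame
    z, x, y = Rotation.from_matrix(R_rel).as_euler("ZXY", degrees=True)
    return z

def hand_position_angle_deg(hand_pos, hub_origin, x_axis, y_axis):
    """Hand position around the hub: 0 deg at the top of the pushrim (y axis),
    90 deg at its most forward point (x axis)."""
    v = hand_pos - hub_origin
    return np.degrees(np.arctan2(np.dot(v, x_axis), np.dot(v, y_axis)))
```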
**Kinetic measurements:** The propulsive moment values measured by the instrumented wheel were filtered at 30 Hz using a no-lag, 2nd order Butterworth low-pass filter.
**Cycle segmentation and selection:** Cycles were segmented manually: push phases were defined by the propulsive moment signal being of greater amplitude than the noise floor measured during recovery. The 30 most repeatable cycles were selected based on the similarities of the propulsion moment curves.
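A simple sketch of this segmentation rule (the actual thresholding and the selection of the 30 most repeatable cycles were done manually, based on curve similarity, and are not reproduced here):

```python
import numpy as np

def segment_pushes(moment, noise_floor):
    """Return (start, end) sample indices of push phases, defined as intervals where
    the propulsive moment exceeds the noise floor measured during recovery."""
    is_push = np.asarray(moment) > noise_floor
    edges = np.diff(is_push.astype(int))
    starts = list(np.where(edges == 1)[0] + 1)
    ends = list(np.where(edges == -1)[0] + 1)
    if is_push[0] and ends:          # recording started mid-push: drop the incomplete first push
        ends = ends[1:]
    return list(zip(starts, ends))   # an incomplete trailing push is dropped by zip
```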
**Outcome measures:** The following parameters were calculated and averaged over the 30 most repeatable cycles:
* Temporal parameters: push time (s), recovery time (s), cycle time (s)
* Spatial parameters: start angle (deg, defined as \(\boldsymbol{\theta}_{\text{hand}}\) at hand contact), end angle (deg, defined as \(\boldsymbol{\theta}_{\text{hand}}\) at hand release), push arc (deg, defined as \(\boldsymbol{\theta}_{\text{wheel}}(\text{release})-\boldsymbol{\theta}_{\text{wheel}}(\text{contact})\)).
* Kinetic parameters: mean propulsive moment during push phase (Nm), angular impulse (Nm\(\cdot\)s, defined as mean propulsive moment \(\times\) push time).
The entire data processing was performed using Matlab R2022b (Mathworks).
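Purely as an illustration of how the outcome measures listed above could be computed for one cycle (hypothetical array names; the actual analysis was done in Matlab as noted):

```python
import numpy as np

def cycle_parameters(time, moment, theta_hand, theta_wheel, push_start, push_end, next_push_start):
    """Spatiotemporal and kinetic parameters for one propulsion cycle (sample indices)."""
    push_time = time[push_end] - time[push_start]
    cycle_time = time[next_push_start] - time[push_start]
    recovery_time = cycle_time - push_time
    start_angle = theta_hand[push_start]                        # deg, hand angle at contact
    end_angle = theta_hand[push_end]                            # deg, hand angle at release
    push_arc = theta_wheel[push_end] - theta_wheel[push_start]  # deg of wheel rotation during the push
    mean_moment = np.mean(moment[push_start:push_end])          # Nm, push phase only
    impulse = mean_moment * push_time                           # Nm*s, angular impulse
    return dict(push_time=push_time, recovery_time=recovery_time, cycle_time=cycle_time,
                start_angle=start_angle, end_angle=end_angle, push_arc=push_arc,
                mean_moment=mean_moment, impulse=impulse)
```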
## Results
The athlete's perception of each pair of gloves is shown in Table 1. In terms of user perception, the prototype that gave both the lowest perception of effort (Borg scale of 12, similar to his current gloves) and the highest general rating (5.5, higher than his current gloves) was gloves 2. While the athlete found gloves 2 uncomfortable at first, he got used to them; he felt that his propulsion pattern was healthier for his shoulders, and he liked the increased wrist stability that they provided. He found that the highest wrist stability was attained with gloves 3, but also found that these gloves made it increasingly harder to make good contact with the pushrims. The worst gloves were gloves 1, with which he felt that he touched the pushrims too late in the propulsion cycle.
Figure 2 shows the wrist extension pattern during a propulsion cycle. As expected, the three rigid prototypes were more stable than his usual soft gloves, with
wrist extension ranges of [25\({}^{\circ}\)--31\({}^{\circ}\)] (variation of 6\({}^{\circ}\)) for gloves 1, [41\({}^{\circ}\)--46\({}^{\circ}\)] (variation of 5\({}^{\circ}\)) for gloves 2, and [54\({}^{\circ}\)--56\({}^{\circ}\)] (variation of 2\({}^{\circ}\)) for gloves 3, compared to [21\({}^{\circ}\)--37\({}^{\circ}\)] (variation of 16\({}^{\circ}\)) for gloves 0. However, although the static, unloaded wrist extension with gloves 0 had been measured as 45\({}^{\circ}\), it was much lower (21\({}^{\circ}\) to 37\({}^{\circ}\)) in dynamic, loaded conditions, most probably due to deformation of the gloves. Therefore, the prototype with the wrist angle most similar to the athlete's current gloves was gloves 1.
Figure 3 shows the temporal, spatial and kinetic parameters for each pair of gloves. The largest differences between the three prototypes were in push time, cycle time, start angle, push arc, and impulse. Globally, the prototype with the most similar parameters to the athlete's current gloves was gloves 2.
Figure 4 shows the evolution of the propulsion moment (a) in time from push initiation, and (b) in hand position angle. Although the mean propulsive moments are similar in Figure 3, we observe in Figure 4a that the propulsive moment reaches a higher peak with gloves 1, and that this peak is reached earlier with gloves 1 and 3 than with gloves 0 and 2. Figure 4a highlights the differences in push time, with gloves 1 being the shortest and gloves 0 being the longest. Figure 4b shows the same moment but as a function of hand position angle. It strongly highlights the spatial differences between the gloves, not only in push arc, but also in start and end angles. While peak moment is reached within about 0.07 to 0.13 seconds after impact for every gloves, these instances correspond to very different locations on the pushrim for the different gloves: this peak happens at about 140\({}^{\circ}\) for gloves 0 and 2, at 160\({}^{\circ}\) for gloves 1, and from 100\({}^{\circ}\) to 130\({}^{\circ}\) for gloves 3.
Finally, Fig. 5 shows the trajectory of the hand with the four gloves. Overall, the trajectories were similar between gloves, although the trajectory of the hand had a higher amplitude during the recovery phase with gloves 0.
## Discussion
The objective of the study was to investigate how a quantitative assessment of racing wheelchair biomechanics added to a qualitative user perception questionnaire could enhance our understanding of the user's perception and gain new insights into gloves development.
Since gloves 1 turned out, a posteriori, to have the wrist extension most similar to gloves 0 for most of the cycle, we would have expected the athlete's preference to go toward gloves 1. However, the contrary is true, since gloves 1 produced the least satisfaction and the highest perceived effort among the three prototypes. The athlete felt that his hand contacted the wheel further on the pushrim, which is confirmed in Fig. 3 and Fig. 4. In fact, the start angle appears to be strongly related to the wrist extension angle, with the most extended wrist condition making contact sooner with the pushrim.
The athlete's main comment on his favourite prototype (gloves 2) was that he liked the increased stability of the wrist in comparison to his own gloves (gloves 0). Apart from increased stability, which can effectively be observed compared to gloves 0 in Fig. 2, we believe the reason for this preference may be explained by his ability, using these gloves, to propel using a very similar technique as with gloves 0. Indeed, most parameters of Fig. 3 and Fig. 5 were the most similar to gloves 0: push time, recovery time, cycle time, start angle, and mean propulsive moment. Not only were these parameters similar, but the peak moment production was at the same time after hand contact, as seen in Fig. 4a. Most notably, the peak moment production was also at the same position on the wheel, as seen in Fig. 4b, which may feel more natural for him compared to the other prototypes that shifted the moment production curve greatly in terms of hand position angle.
Although the athlete preferred gloves 2, he felt that gloves 3 provided the best stability, which is coherent with Fig. 2 where the wrist extension varies the least during the propulsion cycle. He also mentioned that he felt that the point of contact was also the most stable. We observe in Fig. 4 that he was able to maintain a high, constant propulsive moment on a longer arc,
which may be related to this feeling of hand contact stability. However, while he found this contact easy at the beginning, he felt tired faster, and this may be related to the hand contact that occurred so early during the propulsion phase. Wheelchair racing propulsion technique implies a transfer of kinetic energy from the trunk and arm movements before contact, to the wheel during the push (Vanlandewijck et al., 2001). Contacting the wheel too soon may have decreased the ability of the athlete to generate kinetic energy with the trunk and arms, and therefore increased the muscular demand from the arms.
As a first iteration, this work that combines traditional design and biomechanical assessment has practical value, because:
1. An initial problem with the athlete's current gloves was solved: he can now put his gloves on by himself, and the gloves slip much less because they are moulded to his hands.
2. It appears that one of the prototypes (gloves 2) allows him to maintain most of his original technique, which is positive since switching to these radically new gloves should not be associated with a loss of performance.
As a follow-up, the athlete mentioned after a few training sessions on the track with his new gloves (gloves 2), that he wished for even better general stability as the gloves were slowly moving on his hands as he propelled on long races. This led to a second iteration with added straps for a better fit on the forearms that he has been using ever since (Fig. 6).
The data acquired during this first assessment can be used to guide the design of the next iteration of gloves, by reinterpreting it from the perspective of pain prevention. We note in Fig. 4a that among the three prototypes, gloves 1 were the ones with the highest moment rate of rise. We also note that although the mean propulsive moment was similar between gloves, the peak is much higher for gloves 1: as per Fig. 4b, the moment was indeed very low over the 120\({}^{\circ}\) to 150\({}^{\circ}\) range, before spiking at 170\({}^{\circ}\). Finally, this propulsion moment happens in the least amount of time, with a push time of 0.19 s vs. 0.31 s for gloves 0, and consequently generates the least impulse, with 1.25 Nm\(\cdot\)s vs. 1.73 Nm\(\cdot\)s for gloves 0. This leads the athlete to increase his cadence, with a cycle time of 0.66 s vs. 0.87 s for gloves 0, to keep up to speed with the treadmill. All these observations go against the recommendations for preservation of upper limb integrity in standard wheelchair propulsion (Consortium for Spinal Cord Medicine, 2005), as they have been correlated with a higher risk of developing shoulder and wrist disorders and pain in standard wheelchair propulsion (Boninger et al., 2005; Mercer et al., 2006; Mulroy et al., 2006). Although we should not directly transfer these recommendations from standard wheelchairs to racing wheelchairs since the technique is so different, it minimally signals that for future iterations of the racing gloves for this athlete, decreasing the wrist extension angle should be done with care by taking these possible risks into account.
A similar study by Costa et al. (2009) aimed to personalize the equipment of an athlete using technological instrumentation. The authors aimed to find the best pushrim diameter for one elite athlete of class T52, also diagnosed with Charcot-Marie Tooth disease. They calculated push time and stroke frequency using a high-speed camera, heart rate using a training heart-rate monitor, and lactate using a portable lactate analyzer, for three pushrim diameters. Interestingly, this was, to the authors' knowledge, the only study to describe the use of technological instrumentation as a method to personalize wheelchair racing equipment. As a matter of fact, 3D kinematic instrumentation has been used in labs before (De Klerk et al., 2022; Lewis et al., 2018; Poulet et al., 2022), but mainly to better understand the principles of wheelchair racing propulsion performance and injury prevention. Experimental prototypes of instrumented pushrims have also been developed (Chenier et al., 2021; Goosey-Tolfrey et al., 2001; Limroongreungrat et al., 2009; Miyazaki et al., 2020; Rice et al., 2015); however, this is the first time these instruments were used together to personalize wheelchair racing equipment.
As a main limitation of our case study, any modification to sports equipment implies adaptation by the athlete, and this type of one-day experiment cannot allow for such adaptation. In the study by Costa et al. (2009), the athlete rotated between the three pushrims
during training for three weeks before the biomechanical test to become accustomed to each. However, using different pushrim diameters is less disruptive than testing completely different gloves designs. Moreover, the neuromuscular and physical condition of the athlete may change with time, and the best gloves at a given time may not be the best a year later. The next logical steps are therefore to continue optimizing the gloves with other iterations of this assessment a few months later, most likely using prototypes with finer differences.
Another limitation is the use of stationary instrumentation instead of collecting data directly on a racing track, which has the potential to interfere with the athlete's own propulsion technique. For instance, although the treadmill slope had been adjusted to generate a similar rolling resistance as described in Mason et al. (2013), the main source of resistance in wheelchair racing at high speed is air drag (Hedrick et al., 1990), which means that propelling on the treadmill may have minimized the propulsive moments needed to reach a similar speed on a racing track. Propelling a standard wheelchair on a treadmill increases the stroke frequency at a given speed (Chenier et al., 2018); it is possible that similar behaviour would be observed for wheelchair racing. Finally, as seen in Fig. 1, the instrumented wheel has a small bump in its centre, to accommodate its force cell. The athlete indicated that he inadvertently touched it with his gloves on some occasions. However, this was sporadic and we do not believe that his propulsion pattern was affected. These limitations were unavoidable in order to measure the 3D kinematics and kinetics of the athlete, and to avoid external sources of bias such as variable weather conditions. They do not limit the results of the comparisons between the four gloves, because all gloves were tested under similar conditions. They may, however, impact the transfer of those measurements to real conditions, and this is why continuous follow-up with the athlete is necessary as he trains on a track with his new gloves.
## Conclusion
In this paper, we presented a method to personalize the design of wheelchair racing equipment, namely the conception of new gloves, that adds quantitative biomechanical assessment to traditional iterative design based on qualitative interactions with the user. Such user-centred, personalized design is important in adaptive sports because the abilities and limitations of different athletes are so unique. In this case study, we created three variants of rigid gloves that would allow the athlete to be autonomous and to overcome wrist mobilizer weakness due to his disease. The combination of kinematic and kinetic instrumentation allowed us to better understand why the user preferred a particular glove prototype, and will be helpful for designing other iterations as the athlete's physical condition and technique change over time.
|
2309.16518 | A Peters cycle at the end of the cosmic ray spectrum? | We investigate the degree to which current ultrahigh energy cosmic ray
observations above the ankle support a common maximum rigidity for all nuclei,
often called a Peters cycle, over alternative scenarios for the cosmic ray
spectra escaping sources. We show that a Peters cycle is not generally
supported by the data when compared with these alternatives. We explore the
observational signatures of non-Peters cycle scenarios, and the opportunities
to explore both ultrahigh energy cosmic ray source conditions, as well as,
physics beyond the Standard model they present. | Marco Stein Muzio, Luis A. Anchordoqui, Michael Unger | 2023-09-28T15:24:53Z | http://arxiv.org/abs/2309.16518v2 | # A Peters cycle at the end of the cosmic ray spectrum?
###### Abstract
We investigate the degree to which current ultrahigh energy cosmic ray observations above the ankle support a common maximum rigidity for all nuclei, often called a Peters cycle, over alternative scenarios for the cosmic ray spectra escaping sources. We show that a Peters cycle is not generally supported by the data when compared with these alternatives. We explore the observational signatures of non-Peters cycle scenarios, and the opportunities to explore both ultrahigh energy cosmic ray source conditions, as well as, physics beyond the Standard model they present.
## I Introduction
One of the most challenging open questions regarding the origin of ultrahigh energy cosmic rays (UHECRs) deals with the relative maximum energies of the spectra escaping the source for different nuclei. A commonly-used simplifying assumption is that the cosmic ray source spectra trace a Peters cycle, in which the maximum cosmic ray energy scales linearly with \(Z\), i.e., with the charge of the UHECR in units of the proton charge [1] (see e.g. [2, 3, 4] for fits of UHECR data with a Peters cycle at the source). A Peters cycle arises naturally for acceleration processes which depend on magnetic fields to confine cosmic rays to the acceleration region. This limits the maximum rigidity of the accelerator to \(R_{\rm max}\lesssim BL\), where \(B\) and \(L\) are the magnetic field strength and size of the accelerating region. This condition, often called the Hillas criterion [5], results in a common maximum rigidity among all nuclei which are accelerated.
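For orientation, the Hillas bound corresponds to a maximum energy of roughly \(E_{\rm max}\approx ZeBLc\sim 10^{18}\,{\rm eV}\times Z\,(B/\mu{\rm G})(L/{\rm kpc})\); the short numerical sketch below is our own illustration of this scaling and is not taken from the text:

```python
# Hillas criterion: E_max ~ Z e B L c for a cosmic ray confined to the accelerator
E_CHARGE = 1.602e-19   # C
C_LIGHT = 2.998e8      # m/s
MICROGAUSS = 1e-10     # T
KPC = 3.086e19         # m

def hillas_emax_eV(Z, B_uG, L_kpc):
    """Maximum energy (eV) for charge Z, magnetic field B in microgauss, size L in kpc."""
    E_joule = Z * E_CHARGE * (B_uG * MICROGAUSS) * (L_kpc * KPC) * C_LIGHT
    return E_joule / E_CHARGE   # ~0.9e18 eV * Z * B[uG] * L[kpc]

print(f"{hillas_emax_eV(Z=26, B_uG=1.0, L_kpc=1.0):.1e} eV")  # iron in a microgauss, kpc-scale region
```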
Current UHECR data, though, is not compatible with a pure Peters cycle from a single source population across the entire spectrum [6, 7] above \(10^{18}\) eV. In particular, current spectrum and composition data require at least one of two possibilities in order to be reconciled. The first possibility is that the UHECR flux below the ankle is dominantly produced by a second class of extragalactic sources (see [7]). However, the number of different source populations and their relative variance cannot be too large [8]. Another possibility is that an alternative scaling of maximum energies is required to explain the full UHECR spectrum and composition data (see, e.g., [9, 10, 11, 12]). Such alternative scalings are a natural assumption for models which consider the possibility that UHECRs escape their source only after suffering significant energy losses. A specific illustration of this in the context of gamma-ray bursts (GRBs) is presented in [13].
However, since the CR escape time naturally decreases with rigidity, it is possible that the highest energy CRs suffer minimal energy losses and escape their sources with a Peters cycle intact. Therefore, here we consider whether the UHECR data above the ankle, in particular above \(10^{18.8}\) eV, can still be reconciled with a pure Peters cycle, rather than one modified by energy losses.
Energy losses are typically parametrized in terms of \(Z\) and the UHECR baryon number \(A\). For example, the energy loss rate of synchrotron and curvature radiative processes scales as \(Z^{4}/A^{2}\) and \(Z^{2}\), respectively, see e.g. [14], leading to different scalings for the maximum energy [15, 16, 17]. For instance, when considering a diffusive acceleration mechanism, synchrotron radiation limits the cosmic ray maximum energy so that it scales as \(E_{\rm max}\propto A^{4}/Z^{4}\). On the other hand, for one-shot acceleration processes, synchrotron radiation constrains the maximum energy to scale as \(E_{\rm max}\propto A^{2}/Z^{3/2}\), whereas curvature radiation yields \(E_{\rm max}\propto A/Z^{1/4}\). Photonuclear interactions with the thermal radiation fields and hadronic interactions with the ambient gas are driven by the scattering cross section, which scales with \(A\). Even consideration of a single particle species at injection can provide a complex multi-species source spectra after propagation through the source environment [9, 12, 18].
The maximum energy of cosmic rays may even be independent of both \(Z\) and \(A\). A specific example of this type of framework is the dark dimension scenario, which naturally addresses the cosmological hierarchy problem by adding one mesoscopic dimension of micron scale [19]. Since within this scenario physics becomes strongly coupled to gravity around \(10^{10}\) GeV, it was recently conjectured that new universal energy losses deep-rooted within the dark dimension could control the cosmic ray maximum energy [19, 20, 21].
In this paper we take a pragmatic approach to investigate whether existing data favor any of the above mentioned scenarios over a pure Peters cycle above the ankle. Using data from the Pierre Auger Observatory, we carry out a statistical analysis to study the degree to which the observed spectrum
and nuclear composition constrain the source emission spectra.
The layout of the paper is as follows. In Sec. II we provide an overview of the data and we introduce the framework for a data-driven analysis to model the maximum cosmic ray energy and constrain the source spectra. In Sec. III we discuss the results for benchmark models. We find that a Peters cycle is not the most favorable model. In Sec. IV we provide an astrophysical interpretation of our results and we discuss possible multimessenger connections and implications. Our conclusions are collected in Sec. V.
## II Model
We adopt the working assumption that all source spectra can be described by
\[\frac{dN(A)}{dE}\propto E^{-\gamma}\,e^{-E/E_{\rm max}^{A}}\,, \tag{1}\]
with
\[E_{\rm max}^{A}=E_{0}\,Z^{\alpha}A^{\beta}, \tag{2}\]
where \(E_{0}\) is the proton maximum energy of the system, and the spectral index is common to all nuclei. Note that the case with \(\alpha=1\) and \(\beta=0\) corresponds to the Peters cycle, whereas the case with \(\alpha=0\) and \(\beta=0\) stands for the case in which the spectrum is dominated by some universal energy loss which is independent of the cosmic ray Lorentz boost.
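A concrete sketch of Eqs. (1)-(2) in code form (the parameter values below are purely illustrative, not fit results):

```python
import numpy as np

def source_spectrum(E, Z, A, gamma, E0, alpha, beta):
    """Unnormalized dN/dE for one nuclear species: power law with a species-dependent cutoff."""
    E_max = E0 * Z**alpha * A**beta      # Eq. (2); alpha=1, beta=0 reproduces a Peters cycle
    return E**(-gamma) * np.exp(-E / E_max)

E = np.logspace(18.0, 20.5, 200)                                                       # eV
peters_fe = source_spectrum(E, Z=26, A=56, gamma=2.0, E0=10**18.5, alpha=1.0, beta=0.0)
uel_fe = source_spectrum(E, Z=26, A=56, gamma=2.0, E0=10**19.5, alpha=0.0, beta=0.0)   # universal-loss case
```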
With this model for the spectra escaping the source we consider five mass groups representing \(p\), He, CNO, Si, and Fe, whose relative abundances are adjusted to obtain the best-fit. The mass groups are propagated through the cosmic microwave background (CMB) and extragalactic background light (EBL) using propagation matrices built from CRPropa3 [22]. The observed spectrum of CRs is fit to the spectrum [23] and composition [24] of the Pierre Auger Observatory (Auger) by minimizing the value of a combined \(\chi^{2}\),
\[\chi^{2}=\sum_{i}\frac{(J_{i}-\hat{J}_{i})^{2}}{\sigma_{J,i}^{2}}+\sum_{i}\frac{(\mu_{i}-\hat{\mu}_{i})^{2}}{\sigma_{\mu,i}^{2}}+\sum_{i}\frac{(V_{i}-\hat{V}_{i})^{2}}{\sigma_{V,i}^{2}}\,, \tag{3}\]
where \(J\) is the UHECR flux and \(\mu\) and \(V\) are the mean and variance of the distribution of the logarithm of cosmic-ray mass number, \(\ln A\). Quantities with a hat are the model prediction and the subscript \(i\) denotes the energy bin. In particular, to interpret the air shower data provided by Auger we must adopt a hadronic interaction model. We consider two hadronic interaction models, Sibyll2.3d [25] and Epos-LHC [26], to interpret the depth of shower maximum, \(X_{\rm max}\), data in terms of \(\ln A\).
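A sketch of the combined goodness-of-fit of Eq. (3), assuming the data and model predictions are already binned in energy, could read:

```python
import numpy as np

def combined_chi2(J, J_hat, sig_J, mu, mu_hat, sig_mu, V, V_hat, sig_V):
    """Combined chi^2 of Eq. (3): UHECR flux J plus mean (mu) and variance (V)
    of ln A, summed over energy bins; hatted arrays are the model prediction."""
    return (np.sum(((J - J_hat) / sig_J) ** 2)
            + np.sum(((mu - mu_hat) / sig_mu) ** 2)
            + np.sum(((V - V_hat) / sig_V) ** 2))
```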
We perform a fit to the Auger data [23; 24] above \(10^{18.8}\) eV so as to directly address the question of whether a pure Peters cycle is preferred above the ankle. At lower energies, we assume that either an additional source population contributes to the spectrum or energy loss processes have significantly distorted the original Peters cycle. Given our free parameters (the proton maximum energy \(E_{0}\), spectral index \(\gamma\), and 4 parameters controlling 5 mass group fractions) this choice of fit range leaves \(N_{\rm dof}=29\). Here we focus on a benchmark set of systematic data shifts: shifting the energy scale by \({\rm dlg}E=+0.1\) and shifting \(\langle X_{\rm max}\rangle\) by \(-1\sigma_{X}\), since these gave the best overall fit of the shifts explored. We consider the sensitivity of our results to other systematic shifts of the data in Appendix A.
## III Results
After optimizing the model parameters for a particular \((\alpha,\beta)\) combination we calculate the relative goodness-of-fit, \(N_{\sigma}\), compared to a Peters cycle in units of sigma. To calculate \(N_{\sigma}\) we apply Wilks' theorem to convert the \(\Delta\chi^{2}\) between an alternative scenario and a Peters cycle to a p-value, given that alternative scenarios have \(\Delta N_{\rm dof}=2\). We calculate \(\Delta\chi^{2}\) as
\[\Delta\chi^{2}\equiv S^{-1}\sqrt{\left|\chi_{\alpha,\beta}^{2}-\chi_{\rm Peters} ^{2}\right|} \tag{4}\]
where \(S=\left[\min(\chi_{\alpha,\beta}^{2},\chi_{\rm Peters}^{2})/N_{\rm dof}\right]^{1/2}\) is a scale factor introduced to enlarge the uncertainties to account for a \(\chi_{\rm min}^{2}/N_{\rm dof}>1\) [27]. Additionally, we assign \(N_{\sigma}\) a sign equal to \({\rm sgn}\left\{\chi_{\alpha,\beta}^{2}-\chi_{\rm Peters}^{2}\right\}\) to encode whether the fit has improved or worsened compared to a Peters cycle. In other words, negative values of \(N_{\sigma}\) represent the statistical significance at which one can reject the null hypothesis of a pure Peters cycle in favor of the alternative scenario \((\alpha,\beta)\).
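One possible implementation of this significance measure is sketched below; the conversion of the scaled \(\Delta\chi^{2}\) into sigmas via a two-sided Gaussian equivalent is an assumption about the exact convention, not taken from the text.

```python
import numpy as np
from scipy import stats

def n_sigma(chi2_ab, chi2_peters, n_dof=29, delta_dof=2):
    """Signed significance of an (alpha, beta) scenario relative to a Peters cycle."""
    scale = np.sqrt(min(chi2_ab, chi2_peters) / n_dof)         # scale factor S
    delta_chi2 = np.sqrt(abs(chi2_ab - chi2_peters)) / scale   # Eq. (4)
    p_value = stats.chi2.sf(delta_chi2, df=delta_dof)          # Wilks' theorem
    sigma = stats.norm.isf(p_value / 2.0)                      # assumed two-sided convention
    return np.sign(chi2_ab - chi2_peters) * sigma
```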
Figure 1 shows \(N_{\sigma}\) for Sibyll2.3d and Epos-LHC using our benchmark systematic shifts for the data. For reference, the Peters cycle (PC) and a number of alternative scenarios are highlighted: a photodisintegration-limited spectrum (PD), a synchrotron-limited diffusion accelerated spectrum (SDA), a synchrotron-limited one-shot accelerated spectrum (S1A), a curvature radiation-limited one-shot accelerated spectrum (C1A), and a universal energy loss spectrum (UEL).
The value of \(\alpha\) and \(\beta\) change the relative position of nuclei in the spectrum emerging from the source (as illustrated in Appendix B). This in turn changes the relative position of mass groups at Earth which affects the quality of fit to the CR composition data. Therefore, one expects that points in the \(\alpha-\beta\) plane with similar ratios of maximum energy between nuclei to produce similar quality fits. In practice this is realized due to the approximate degeneracy between \(A\) and \(Z\), since \(A\simeq 2Z\) for stable nuclei with the exception of protons where \(A=Z\) (though these fall below our fit range in some cases, including the standard Peters cycle scenario). In particular, the ratio between maximum energies for two nuclei \(A\) and \(A^{\prime}\) will be constant along lines of constant \(\alpha+\beta\), since:
\[\frac{E_{\rm max}^{A}}{E_{\rm max}^{A^{\prime}}}=\left(\frac{A}{A^{\prime}}\right)^{\alpha+\beta}\,. \tag{5}\]
By contrast, the ratio between the maximum energies of a nucleus \(A\) and protons will be constant along lines of constant
\((1-\log_{A}2)\alpha+\beta\), since:
\[\frac{E_{\max}^{A}}{E_{\max}^{p}}=2^{-\alpha}A^{\alpha+\beta}. \tag{6}\]
Equations (5) and (6) have two important consequences. First, in the case that the escaping proton flux is not significant relative to other mass groups, different acceleration scenarios fall into families which have a common \(\alpha+\beta\). Within these families, one can expect that the observed UHECR spectrum and composition are nearly indistinguishable despite the fundamentally different processes responsible for them. Second, in the case that a substantial proton population escapes the source, the proton spectrum can peak at arbitrarily high energies compared to the peak energies of nuclei irrespective of the sign of \(\alpha+\beta\). As an example, this means that even for scenarios in the Peters cycle family \(\alpha+\beta=1\), nuclei remain ordered in terms of their peak energy but protons can peak below helium, above iron, or between the two.
As can be seen from Fig. 1, alternative scenarios are favored over a simple Peters cycle. For Epos-LHC, alternative scenarios can achieve \(N_{\sigma}<-4\), indicating that these scenarios could be used to reject the Peters cycle hypothesis with strong statistical significance. Notably, the region with the most negative value of \(N_{\sigma}\) has a slope deviating from \(-1\), indicating that the position of protons relative to heavier nuclei is responsible for the improvement in fit quality. In particular, this region satisfies \(\beta\simeq 0.4-0.8\alpha\) and we can infer that it is the ratio of the maximum proton energy to the maximum energy of \(A\simeq 32\) which drives the fit.
It is clear from Fig. 1 that the global minimum is outside the plotted range (which was driven by the range of \(\alpha\) and \(\beta\) values of the alternative scenarios we considered). To explore how far outside of this range the global minimum lies, we performed a 1-D scan along the line giving the best-fit for Epos-LHC: \(\beta\simeq 0.4-0.8\alpha\). The results of this scan are shown in Fig. 2. Independent of the hadronic interaction model, the best-fit is found for \(\alpha\simeq 6.75\) and \(\beta\simeq-5\). Currently, we are unaware of any scenarios which could produce such an exotic dependence on mass and charge.
Figure 3 shows the best-fit spectrum and composition
Figure 1: Change in quality of fit to the UHECR spectrum and composition relative to a Peters cycle. We consider (a) Sibyll2.3d and (b) Epos-LHC with data shifted by \({\rm dlg}E=+0.1\) and \(-1\sigma_{X}\). The family of scenarios with \(\alpha+\beta=0\) is indicated by the white dashed line. The Peters cycle (PC) and a number of alternative scenarios (green dots) are highlighted: a photodisintegration-limited spectrum (PD), a synchrotron-limited diffusion accelerated spectrum (SDA), a synchrotron-limited one-shot accelerated spectrum (S1A), a curvature radiation-limited one-shot accelerated spectrum (C1A), and a universal energy loss spectrum (UEL).
related observables under the Peters cycle and alternative scenarios highlighted in Fig. 1. The global best-fit (BF) scenario found from the 1-D scan discussed above is also plotted in Fig. 3. While predictions for the UHECR spectrum are very similar between all the scenarios considered, there are differences in their predictions for the UHECR composition. It is worthwhile noting that the models presented here explain nearly all the flux in the two data points below our fit range, requiring a fine-tuned transition to a second, extragalactic source population. This may hint that the UHECR spectrum is best explained by an energy-loss-modified spectrum, because the nucleons created during energy losses in the source environment can populate the flux just below the ankle, see [9].
As is hinted at by Fig. 1 and Eq. (5), the best-fit spectrum and composition for models with similar \(\alpha+\beta\) are nearly indistinguishable from each other if no significant proton component exists in the escaping spectrum. This explains the stark similarity both between a Peters cycle and photodisintegration-limited scenario (\(\alpha+\beta=1\)), as well as, between a synchrotron-limited diffusion accelerated scenario and a universal energy loss scenario (\(\alpha+\beta=0\)). Owing to its having \(\alpha+\beta=0.75\), the curvature radiation-limited one-shot accelerated scenario falls between the Peters cycle/photodisintegration-limited scenario and the synchrotron-limited one-shot accelerated scenario (\(\alpha+\beta=0.5\)). While some of these alternative scenarios give slightly better or worse fits compared to the Peters cycle, Fig. 3 makes clear the difficulty in distinguishing between these scenarios in a statistically significant way.
This pattern is clearly broken by the best-fit scenario, which is most similar to the \(\alpha+\beta=0\) family of scenarios despite its having \(\alpha+\beta\simeq 1.75\). This is due to its significant escaping proton flux and large value of \(\alpha\), so that this proton flux is at high energies relative to nuclei. This can be seen explicitly in Figs. 6f and 7f. While this feature may not be easily accessible through the high-level information provided by the UHECR spectrum, \(\langle X_{\text{max}}\rangle\), and \(\sigma(X_{\text{max}})\) data, its other signatures will be discussed in the following Section.
## IV Signatures of alternative scenarios
Given the difficulty of distinguishing a Peters cycle from alternative scenarios using high-level information like the UHECR spectrum and composition it is worthwhile to explore other signatures which might provide a smoking gun to the dominant process in the universe. In principle, accurate measurements of the spectra of each mass group in the UHECR spectrum would directly access information about \(\alpha\) and \(\beta\), but this level of mass separation is difficult above the ankle since it would require separation between nuclei of a single mass group. If data below the ankle is also included, such a mass separation is possible [28] but it is not possible to distinguish whether the resulting maximum energy scalings are due to an alternative scenario or a superposition of source populations. However, other signals may provide different means to distinguish between a Peters cycle and alternative scenarios.
First, let us consider the case where there is a substantial proton flux escaping the source. For small values of \(\alpha\), this component will peak at low energies relative to nuclei - a factor of \(2^{-\alpha}\) below the energy-per-nucleon of nuclei for the Peters cycle family of scenarios. At such low energies, this proton component likely will be deep into the spectrum's Galactic-to-extragalactic transition. While this lower energy than expected proton component compared to a Peters cycle would be an indication that an alternative scenario is at work, such a signal may be difficult to distinguish from both the protons from photodisintegrated nuclei or a second source population below the ankle.
For large values of \(\alpha\), the proton component will peak at higher than expected energies - a factor of \(2^{\alpha}\) above the energy-per-nucleon of nuclei for the Peters cycle family of scenarios. This would imply that a significant proton component exists in the spectrum, peaking above heavier nuclei, which can result in an increase in \(\sigma(X_{\text{max}})\) at high energies (see Fig. 3). Additionally, for large enough values of \(\alpha\), this proton component will extend beyond the GZK threshold and, therefore, produce a flux of cosmogenic neutrinos at \(\sim\) EeV energies. Good measurements of the proton fraction throughout the spectrum or observation of cosmogenic neutrinos will probe whether such a component exists and measurement of its peak energy will constrain the value of \(\alpha\).
Second, let us consider the case where there is no substantial proton flux escaping the source. In this case, a proton flux will still arrive at Earth due to UHECR photodisintegration interactions with the CMB and EBL. These interactions preserve the energy-per-nucleon of the primary CR, so that:
\[E^{p}_{\text{APD}}=\frac{E^{A}}{A}. \tag{7}\]
This implies that the peak in the spectrum of photodisintegrated protons from primary CRs of mass \(A\) will be given by:
\[E^{p}_{\text{APD,max}}=2^{-\alpha}E_{0}A^{\alpha+\beta-1}. \tag{8}\]
In the case of the Peters cycle family of scenarios, where \(\alpha+\beta=1\), this implies that the spectra of photodisintegrated protons peak at an energy \(2^{-\alpha}E_{0}\) irrespective of their parent CR's mass \(A\). Moreover, the peak energy of any primary proton spectrum will be the same as the photodisintegrated spectrum's up to a factor of \(2^{-\alpha}\). Distinguishing a Peters cycle from alternative scenarios in its family is therefore very difficult and can only be done through this factor of \(2^{-\alpha}\) difference in the peak energies of the primary proton component and photodisintegrated proton component.
However, this is not true for alternative families of scenarios where \(\alpha+\beta\neq 1\), and the ratio between the peaks in the spectra of photodisintegrated protons from CRs of mass \(A\) and mass \(A^{\prime}\) will be given by:
\[\frac{E^{p}_{\text{APD,max}}}{E^{p}_{A^{\prime}\text{PD,max}}}=\left(\frac{A} {A^{\prime}}\right)^{\alpha+\beta-1}. \tag{9}\]
Therefore, a signature of alternative scenarios outside the Peters cycle family is a multi-peaked proton component in the
UHECR spectrum. While it may be difficult to resolve these separate peaks, this signature also implies an extended proton spectrum throughout the entire UHECR spectrum, which is unexpected for scenarios in the Peters cycle family. Such an
Figure 3: Best-fit spectra (top panels) and composition-related observables (bottom panels) assuming a Peters cycle and other alternative scenarios (colored lines). Data points are the Auger 2021 spectrum [23] and 2019 composition data [24] shifted by \(\mathrm{d}\mathrm{lg}E=+0.1\) and \(-1\sigma_{X}\). Predictions for \(\langle X_{\mathrm{max}}\rangle\) and \(\sigma(X_{\mathrm{max}})\) are made assuming (a) Sibyll2.3d and (b) Epos-LHC. Predictions for pure proton and iron compositions for each hadronic interaction model are shown for reference (solid gray lines). Models were fit to the data above \(10^{18.8}\) eV (solid points). Data points below the fit range are shown for reference (open points). The different models are: Peters cycle (PC), a photodisintegration-limited spectrum (PD), a synchrotron-limited diffusion accelerated spectrum (SDA), a synchrotron-limited one-shot accelerated spectrum (S1A), a curvature radiation-limited one-shot accelerated spectrum (C1A), and a universal energy loss spectrum (UEL).
extended proton component may be more easily detected than the individual peak energies of each mass group's spectrum, and can result in larger-than-expected values of \(\sigma(X_{\rm max})\) at high energies.
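As a compact illustration of Eqs. (8) and (9), the short sketch below (with an assumed proton maximum energy \(E_{0}\), chosen only for illustration) shows how the photodisintegrated-proton peaks coincide for the Peters cycle family but spread out otherwise:

```python
def photodis_proton_peak(E0, A, alpha, beta):
    """Peak energy of photodisintegrated protons from primaries of mass A, Eq. (8)."""
    return 2.0 ** (-alpha) * E0 * A ** (alpha + beta - 1.0)

E0 = 1e19  # assumed proton maximum energy in eV, for illustration only
for A in (4, 14, 28, 56):
    peters = photodis_proton_peak(E0, A, alpha=1.0, beta=0.0)  # = E0/2 for every A
    uel = photodis_proton_peak(E0, A, alpha=0.0, beta=0.0)     # = E0/A, one peak per mass
```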
An important caveat, however, is that an alternative scenario to a Peters cycle may be difficult to distinguish from a second UHECR source population which produces a substantial high-energy proton flux (this possibility has been explored in a number of studies, including [29; 30; 31; 12]). Such a source population could mimic alternative scenarios by producing cosmogenic neutrinos or a larger than expected proton fraction through the spectrum. If the fluxes of heavier nuclei produced by this source population are not large enough to be detected, it may be impossible to distinguish between the two.
## V Conclusions
In this study we explored the degree to which observations of the ultrahigh energy cosmic ray spectrum and composition above the ankle favor a Peters cycle over alternative scenarios for spectra escaping sources. Such alternatives are motivated by energy loss and beyond the Standard Model processes which imprint particular scalings between the maximum energy of nuclei and their mass and charge. We find that alternative scenarios explain the UHECR data above the ankle better than a Peters cycle, regardless of the hadronic interaction model or systematic data shifts assumed. This result raises the possibility that a Peters cycle is not realized in any energy range of the observed UHECR spectrum.
We investigated the observational signatures which might be used to further discriminate alternative scenarios from a Peters cycle. These include unexpected scalings of the peak energy of different mass groups at Earth, a substantial GZK neutrino flux, an extended proton component across the spectrum's full energy range, or an unexpected proton component at the highest energies (which could result in a large \(\sigma(X_{\rm max})\) and therefore may be constrained by deep neural network-based \(X_{\rm max}\) measurements [32]). However, some of these signatures are difficult to distinguish from an additional UHECR source population below the ankle or one producing a substantial high-energy proton flux above it.
We emphasize that alternative scenarios to a Peters cycle represent an exciting observational opportunity. In particular, further constraints on these scenarios will not only significantly reduce the theoretical uncertainties in UHECR modeling and open a window to the conditions inside UHECR sources, but they will also enable UHECR data to directly constrain new physics processes.
###### Acknowledgements.
The work of L.A.A. is supported by the U.S. National Science Foundation (NSF Grant PHY-2112527). The research of M.S.M. is supported by the NSF MPS-Ascend Postdoctoral Award #2138121.
|
2309.09438 | Performance Benefit of Aerocapture for the Design Reference Mission Set | Aerocapture is a maneuver which uses aerodynamic drag to slow down a
spacecraft in a single pass through the atmosphere. All planetary orbiters to
date have used propulsive orbit insertion. Aerocapture is a promising
alternative, especially for small satellite missions and missions to the ice
giants. The large {\Delta}V requirement makes it practically impossible for
small satellites to enter low circular orbits. Aerocapture can enable insertion
of low-cost satellites into circular orbits around Mars and Venus. For ice
giant missions, aerocapture can enable orbit insertion from fast arrival
trajectories which are impractical with propulsive insertion. By utilizing the
atmospheric drag to impart the {\Delta}V, aerocapture can offer significant
propellant mass and cost savings for a wide range of planetary missions. The
present study analyzes the performance benefit offered by aerocapture for a set
of design reference missions and their applications to future Solar System
exploration from Venus to Neptune. The estimated performance benefit for
aerocapture in terms of delivered mass increase are: Venus (92%), Earth (108%),
Mars (17%), and Titan (614%), Uranus (35%), and Neptune (43%). At Uranus and
Neptune, aerocapture is a mission enabling technology for orbit insertion from
fast arrival interplanetary trajectories. | Athul Pradeepkumar Girija | 2023-09-18T02:41:49Z | http://arxiv.org/abs/2309.09438v1 | # Performance Benefit of Aerocapture for the Design Reference Mission Set
###### Abstract
Aerocapture is a maneuver which uses aerodynamic drag to slow down a spacecraft in a single pass through the atmosphere. All planetary orbiters to date have used propulsive orbit insertion. Aerocapture is a promising alternative, especially for small satellite missions and missions to the ice giants. The large \(\Delta\)V requirement makes it practically impossible for small satellites to enter low circular orbits. Aerocapture can enable insertion of low-cost satellites into circular orbits around Mars and Venus. For ice giant missions, aerocapture can enable orbit insertion from fast arrival trajectories which are impractical with propulsive insertion. By utilizing the atmospheric drag to impart the \(\Delta\)V, aerocapture can offer significant propellant mass and cost savings for a wide range of planetary missions. The present study analyzes the performance benefit offered by aerocapture for a set of design reference missions and their applications to future Solar System exploration from Venus to Neptune. The estimated performance benefits of aerocapture in terms of delivered mass increase are: Venus (92%), Earth (108%), Mars (17%), Titan (614%), Uranus (35%), and Neptune (43%). At Uranus and Neptune, aerocapture is a mission enabling technology for orbit insertion from fast arrival interplanetary trajectories.
Aerocapture, Performance Benefit, Design Reference Missions
## I Introduction
Aerocapture is a maneuver which uses aerodynamic drag to slow down a spacecraft in a single pass through the atmosphere to achieve nearly fuel-free orbit insertion [1, 2]. To date, all planetary orbiters have used propulsive orbit insertion. However, aerocapture is a promising alternative, especially for small satellite missions and missions to the ice giants [3]. The large \(\Delta\)V requirement makes it practically impossible for small satellites to enter low circular orbits. Aerocapture can enable insertion of low-cost satellites into circular orbits around Mars and Venus [4]. For ice giant missions, aerocapture can enable orbit insertion from fast arrival interplanetary trajectories which are impractical with propulsive insertion due to the prohibitively large \(\Delta\)V [5]. By utilizing the atmospheric drag to impart the \(\Delta\)V, aerocapture can offer significant propellant mass and cost savings for a wide range of missions [6]. The concept of operations for the aerocapture maneuver is shown in Figure 1, for a drag modulation vehicle at Mars. The aero-thermal conditions encountered during the maneuver depend on the destination, and performance benefit is also destination dependent [7, 8]. A recent NASA study underscored the need for design reference missions, as benchmarks for evaluating the benefits of aerocapture at various destinations. The present study uses the Aerocapture Mission Analysis Tool (AMAT) to analyze the performance benefit offered by aerocapture for a set of design reference missions [9, 10].
Figure 1: Aerocapture maneuver concept of operations.
## II Venus
Venus is Earth's closest planetary neighbor and has a thick atmosphere. This dense atmosphere also presents challenging entry conditions, making it unattractive for high-ballistic-coefficient rigid entry vehicles [11, 12]. However, low-ballistic coefficient deployable entry systems such as ADEPT can greatly alleviate these difficulties and present an attractive method for inserting small satellites around Venus, particularly into low circular orbits for which the \(\Delta\)V requirements are substantial. The deployable systems decelerate much higher up in the atmosphere, thus greatly reducing the aero-thermal heating. Recent work has established a design reference mission for inserting small satellites at Venus using drag modulation aerocapture [13]. The reference interplanetary trajectory arrives with a hyperbolic excess speed of 3.5 km/s and aims to insert a small 25 kg spacecraft into a 400 km circular orbit at Venus. The \(\Delta\)V required for the propulsive orbit insertion maneuver is 3533 m/s, which is quite challenging to achieve for a small spacecraft propulsion system. Figure 2 compares the mass fraction delivered to orbit using propulsive insertion and aerocapture. With propulsive insertion, only 25% of the arrival mass can be inserted into orbit, with the remaining 75% being the propulsion system. With aerocapture, about 50% of the arrival mass can be inserted into orbit. This has two important implications for small low-cost spacecraft. Aerocapture allows a 100% increase in delivered mass to orbit compared to propulsive insertion. Conversely, it can allow for smaller and cheaper spacecraft. A 25 kg orbiter at Venus would require launching a 100 kg wet mass spacecraft with propulsive insertion. Aerocapture would only require launching a 50 kg spacecraft, smaller by a factor of 2. Preliminary estimates have indicated that by reducing the required \(\Delta\)V from 3500 m/s (propulsive) to about 30 m/s (aerocapture, with periapsis raise maneuver), the mission cost can be reduced from over $100M to about $20M, a factor of 5, thus enabling a range of low-cost missions to Venus [14]. Examples include small standalone secondary ride-share payloads to Venus orbit, small satellites as part of large New Frontiers or Flagship missions [15], and missions to return atmospheric samples from the Venusian cloud layers [16].
Figure 2: Performance comparison of propulsive insertion and aerocapture at Venus.
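The propulsive mass fractions quoted throughout this paper follow from the rocket equation; the short sketch below reproduces numbers of this order. The specific impulse of 320 s and the propulsion dry-mass factor are illustrative assumptions and are not taken from the mission studies.

```python
import numpy as np

G0 = 9.80665  # standard gravity, m/s^2

def delivered_fraction(delta_v, isp=320.0, dry_mass_factor=0.12):
    """Fraction of the arrival mass delivered to orbit with chemical propulsion.
    isp (s) and dry_mass_factor (propulsion hardware per unit propellant mass)
    are assumed values for illustration."""
    final_over_initial = np.exp(-delta_v / (isp * G0))   # rocket equation
    propellant = 1.0 - final_over_initial
    return final_over_initial - dry_mass_factor * propellant

# Orbit-insertion delta-V values quoted in the text (m/s)
dv = {"Venus": 3533.0, "Earth": 3731.0, "Mars": 2079.0, "Titan": 5832.0,
      "Uranus (slow)": 2667.0, "Neptune (slow)": 2798.0}
fractions = {body: delivered_fraction(v) for body, v in dv.items()}
# e.g. roughly 0.24 for Venus and about 0.06 for Titan, of the order shown in Figs. 2 and 5
```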
## III Earth
Aerocapture at Earth may be relevant for future sample return missions which seek to deliver samples to an orbiting space station rather than to Earth's surface for planetary protection reasons [17]. Aerocapture at Earth has also been studied for various technology demonstration experiments, although none were realized [18, 19]. Figure 3 shows the performance comparison of the delivered mass to a 400 km circular orbit from a trajectory with an excess speed of 3.5 km/s for which the orbit insertion \(\Delta\)V is 3731 m/s. As with Venus, drag modulation aerocapture enables nearly a 100% increase in the mass delivered to orbit. Aerocapture at Earth also has applications to orbital transfer from GTO to LEO without the use of propellant, where it offers comparable performance advantages.
## IV Mars
Figure 4 shows the performance comparison at Mars. The reference design seeks a 400 km circular orbit from an interplanetary trajectory with an excess speed of 2.6 km/s for which the orbit insertion \(\Delta\)V is 2079 m/s. The performance benefit of aerocapture at Mars is considerably less than at Venus or Earth. However, the benign aerothermal environment makes it an ideal candidate for a low-cost technology demonstration mission [20, 21].
Figure 4: Performance comparison of propulsive insertion and aerocapture at Mars.
Figure 3: Performance comparison of propulsive insertion and aerocapture at Earth.
## V Titan
Titan's greatly extended thick atmosphere and the low entry speeds make it an ideal destination for aerocapture. Titan offers the largest aerocapture corridor and the most benign aero-thermal environment of any Solar System destination. Ever since the Cassini-Huygens mission revealed Titan to have a diverse landscape of great scientific interest, there have been numerous proposals for a dedicated Flagship mission to Titan, with a Titan orbiter and a lander [22, 23, 24]. The Dragonfly mission will deliver a relocatable lander to Titan's surface. An orbiter around Titan remains to be accomplished by a future mission [25, 26]. However, getting into orbit around Titan requires a very large \(\Delta\)V, which requires enormous propellant and hence drives up the wet mass and mission cost. This has essentially precluded any New Frontiers or Discovery class mission concepts for a Titan orbiter, and with Europa and Uranus missions being the top priority for Flagships, a Flagship Titan mission is not viable in the near future. Drag modulation aerocapture offers an elegant solution to this challenge. The reference interplanetary trajectory arrives with a hyperbolic excess speed of 7.0 km/s and aims to insert a 2500 kg spacecraft into a 1700 km circular orbit at Titan. The \(\Delta\)V required for the propulsive orbit insertion maneuver is 5832 m/s. This is so prohibitively large that with propulsive insertion only about 6% of the arrival mass can be delivered to orbit around Titan, as shown in Figure 5. With aerocapture, 50% of the arrival mass can be delivered to orbit. This implies aerocapture offers enormous performance benefits for a future Titan orbiter mission, enabling a launch mass that is approximately 8 times smaller than what is possible with chemical propulsive insertion, potentially enabling a Titan orbiter to fit within the New Frontiers cost cap [27]. A spacecraft around Titan in a low circular orbit can study Titan's surface with unprecedented detail using radar, mapping the entire surface at resolutions of 100s of meters and some regions at potentially much higher resolutions. Aerocapture is thus a key enabling technology for a future New Frontiers Titan orbiter mission.
Figure 5: Performance comparison of propulsive insertion and aerocapture at Titan.
## VI. Uranus
The closer of the ice giants at 19 AU, Uranus is the top priority for a Flagship class mission in the next decade [28]. The large heliocentric distance of the ice giants poses significant mission design challenge to get there quickly, but also insert a reasonable payload into orbit. Due to risk considerations such as lack of confidence in the atmosphere models (potentially more perceived risk compared to actual risk), current baseline Uranus mission architectures have not used aerocapture [29, 30]. However, aerocapture has been shown to offer significant benefits for future Uranus missions [31, 32]. Two design reference missions are considered here. The first is a slow arrival (vinf = 10 km/s) trajectory, and the second is a fast arrival trajectory (20 km/s), both targeting a 4000 x 1M km orbit. For the slow arrival trajectory, the orbit insertion \(\Delta\)V is 2667 m/s, and the \(\Delta\)V for the fast arrival trajectory is 8631 m/s. With the slow arrival trajectory, drag modulation aerocapture is chosen as it is better suited compared to lift modulation due to corridor width and heating considerations. With the fast arrival trajectory, lift modulation aerocapture is chosen as it offers more corridor width and can use the HEEET TPS. Figure 6 shows the mass fraction delivered to orbit for the slow and fast arrival trajectories. For the slow arrival trajectory, drag modulation aerocapture is able to deliver about 35% more mass compared to propulsive insertion. For the fast arrival trajectory, the \(\Delta\)V is so high that it is prohibitive for propulsive insertion. However, lift modulation aerocapture with an MSL-like aeroshell is still able to deliver 50% of the arrival mass to orbit. The fast arrival trajectory does present challenges associated with large heat loads in the range of 200-300 kJ/cm2, but HEEET is expected to be able to accommodate such large heat loads within a TPS mass fraction of about 25% [33, 34]. Figure 6 shows the enormous benefit offered by aerocapture for Uranus missions with fast arrival trajectories, enabling significantly shorter time of flight missions, with a reasonable payload mass fraction.
Figure 6: Performance comparison of propulsive insertion and aerocapture at Uranus.
## VII Neptune
The farther of the ice giants, Neptune is a more demanding destination for orbiter missions than Uranus [35]. Even though Uranus and Neptune are both scientifically compelling, the greater mission design challenges associated with Neptune appear to be the primary reason Uranus is preferred for the next Flagship mission. In contrast to Uranus, Neptune also offers the ability to study Triton, a captured Kuiper belt object which may be an active ocean world, up close. Neptune aerocapture has been studied since 2003 using a mid-L/D vehicle to compensate for the large uncertainties [36]. However, it has since become clear that such a vehicle would not be viable, and recent studies have investigated using innovative techniques to leverage low-L/D aeroshells [37, 38, 39]. Two design reference missions are considered here. The first is a slow arrival (v_inf = 10 km/s) trajectory, and the second is a fast arrival trajectory (20 km/s), both targeting a 4000 x 500,000 km orbit which is close to that of Triton. For the slow arrival trajectory, the orbit insertion \(\Delta\)V is 2798 m/s, and the \(\Delta\)V for the fast arrival trajectory is 8452 m/s. Figure 7 shows the mass fraction delivered to orbit for the slow and fast arrival trajectories at Neptune. For the slow arrival trajectory, drag modulation aerocapture is able to deliver about 40% more mass compared to propulsive insertion. For the fast arrival trajectory, the \(\Delta\)V is again so high that it is prohibitive for propulsive insertion. However, lift modulation aerocapture with an MSL-derived aeroshell is still able to deliver 50% of the arrival mass to orbit. As with Uranus, Figure 7 shows the enormous advantage offered by aerocapture for Neptune missions with fast arrival trajectories. For ice giants, aerocapture essentially removes the upper limit on the arrival v_inf of about 12 km/s imposed by propulsive insertion. This opens up an entirely new class of shorter time-of-flight, fast arrival trajectories, making aerocapture an enabling technology for delivering well-instrumented orbiters to Uranus and Neptune with these fast trajectories [40].
Figure 7: Performance comparison of propulsive insertion and aerocapture at Neptune.
## VIII Summary
Figure 8 summarizes the performance benefit of aerocapture across the Solar System destinations for the design reference missions. For Venus and Earth, drag modulation aerocapture provides nearly a 100% increase in delivered mass to a 400 km circular orbit compared to purely propulsive insertion. At Mars, the performance benefit is smaller at about 17%, but still significant. At Titan, aerocapture provides a 600% increase in delivered mass to a 1700 km circular orbit. At Uranus, for the slow arrival trajectories aerocapture provides a 35% increase in delivered mass to a 4000 x 1M km orbit compared to propulsive insertion. At Neptune, for the slow arrival trajectories aerocapture provides a 43% increase in delivered mass to a 4000 x 500,000 km orbit compared to propulsive insertion. At Titan, Uranus, and Neptune, aerocapture is a mission enabling technology for orbit insertion from fast arrival trajectories.
## IX Conclusions
The present study analyzed the performance benefit offered by aerocapture for a set of design reference missions. The estimated performance benefits of aerocapture in terms of delivered mass increase are as follows: Venus (92%), Earth (108%), Mars (17%), Titan (614%), Uranus (35%), and Neptune (43%). At Titan, Uranus, and Neptune, aerocapture is a mission enabling technology for orbit insertion from fast arrival trajectories.
## Data Availability
The results presented in the paper can be reproduced using the open-source Aerocapture Mission Analysis Tool (AMAT) v2.2.22. The data and code used to make the study results will be made available by the author upon request.
Figure 8: Summary of the aerocapture performance comparison. |
2309.11277 | Periodic solution for transport of intense and coupled coasting beams
through quadrupole channels | Imposing defined spinning to a particle beam increases its stability against
perturbations from space charge~[Y.-L.~Cheon et al., Effects of beam spinning
on the fourth-order particle resonance of 3D bunched beams in high-intensity
linear accelerators, Phys. Rev. Accel. \& Beams {\bf 25}, 064002 (2022)]. In
order to fully explore this potential, proper matching of intense coupled beams
along regular lattices is mandatory. Herein, a novel procedure assuring matched
transport is described and benchmarked through simulations. The concept of
matched transport along periodic lattices has been extended from uncoupled
beams to those with considerable coupling between the two transverse degrees of
freedom. For coupled beams, matching means extension of cell-to-cell
periodicity from just transverse envelopes to the coupled beam moments and to
quantities being derived from these. | Chen Xiao, Lars Groening | 2023-09-20T13:02:51Z | http://arxiv.org/abs/2309.11277v1 | # Periodic solution for transport of intense and coupled coasting beams through quadrupole channels
###### Abstract
Imposing defined spinning to a particle beam increases its stability against perturbations from space charge [Y.-L. Cheon et al., Effects of beam spinning on the fourth-order particle resonance of 3D bunched beams in high-intensity linear accelerators, Phys. Rev. Accel. & Beams **25**, 064002 (2022)]. In order to fully explore this potential, proper matching of intense coupled beams along regular lattices is mandatory. Herein, a novel procedure assuring matched transport is described and benchmarked through simulations. The concept of matched transport along periodic lattices has been extended from uncoupled beams to those with considerable coupling between the two transverse degrees of freedom. For coupled beams, matching means extension of cell-to-cell periodicity from just transverse envelopes to the coupled beam moments and to quantities being derived from these.
## I Introduction
Preservation of beam quality is of major concern for acceleration and transport especially of intense hadron beams. This aim is reached at best through provision of smooth and periodic beam envelopes, being so-called matched to the periodicity of the external focusing lattice. The latter is usually composed of a regular arrangement from solenoids or quadrupoles. For the time being, the quality of matching has been evaluated through the periodicity of spatial beam envelopes. This is fully sufficient as long as there is no coupling between the phase space planes (for brevity "planes"), neither in beam properties nor in lattice properties.
For beams without coupling, various matching methods for intense beams have been proposed and realized in operation. First approaches, still applied nowadays, are based on differential rms-envelope equations formulated by F. Sacherer [1; 2]. These assume KV-distributions and calculate space charge forces from homogeneously charged rms-equivalent ellipsoids. The forces are linear and preserve the rms-emittances. Albeit assuming artificial KV-distributions, rms-equivalent matching of real beams has been conducted very successfully during the last decades. It became a state-of-the-art tool in operation of modern intense-beam accelerators, see [3; 4; 5] for instance. Proper periodic solutions are especially relevant for systematic optimization of different lattice properties w.r.t. preservation of beam quality. Usually, the lattice parameter being optimized is its focusing strength, i.e., the imposed phase advance.
Variation of lattice parameters revealed many tools to optimize acceleration of intense beams with given emittances and intensity. Focusing can be accomplished by solenoids or by quadrupoles and systematic comparisons are discussed in [6]. Another way is varying the phase advance along the periodic structure as considered in [7]. Already in the 1960's, different quadrupole focusing schemes as FODO, FOFODODO, and FOFOFODODODO have been analyzed systematically [8]. Recent studies revealed that imposing of spinning to the incoming beam opens another set of free parameters for further optimizing beam quality along periodic lattices [9]. Evidence has been provided that beam stability against perturbations from non-linear space charge forces increases with the amount of imposed spinning. This is in analogy to stability of spinning flying objects as bullets or footballs.
Spinning of beams is a very promising tool to further augment accelerator performance. It requires coupling between planes and thus imposes dedicated efforts for proper matching to periodic lattices. Beam matching with coupling between the horizontal and longitudinal planes has been investigated in [10].
The present work is on the development and demonstration of a method to assure rms-matched transport of intense beams with considerable transverse coupling, an issue being addressed conceptually in [12]. It partially implements the early concept, i.e. tracking of moments, into a procedure to obtain full cell-to-cell four-dimensional (4D)-periodicity. Through simulations it is shown that the lattice periodicity is not just matched by the two transverse envelopes but also by the beam rms-moments that quantify coupling. To this end, an iterative procedure towards the periodic solution is applied. It starts from determining the solution with zero current, using a method that is applied later also to beams with current.
The TRACE-2D code [11] is well suited to provide for a matching beam line between a given initial beam matrix and a desired exit beam matrix even for a full 4D scenario. However, it is an intrinsic property of the periodic-solution-problem, that the initial beam matrix at the entrance of the periodic channel is unknown. Accordingly, this code cannot be applied to the present scenario in a straight forward way.
It is explicitly stated here that providing for a specific design of the matching line itself is beyond the scope of the present work. This paper aims at demonstrating that a 4D-periodic cell-by-cell solution exists and demonstrates its derivation. Detailed definition of the specific matching line is a hard task to be addressed within future work.
The following section briefly introduces basic terms of beam rms-moments transportation through linear lattice elements. Afterwards, the beam line providing spinning, matching, and periodic focusing is introduced. The fourth section is on modeling the periodic channel for beams without and
with current, followed by the description of the procedure to determine the matched solution for intense coupled beams. Finally, benchmarking of the procedure against results obtained from tracking an intense coupled Gaussian beam using a well-established simulation code is presented.
## II Basic concepts of beam second moments transportation
Particle coordinates are denoted by a 4\(\times\)1 column vector \(\vec{r}\left(s\right)\) with elements \(x\left(s\right)\), \(x^{\prime}\left(s\right)\), \(y\left(s\right)\), and \(y^{\prime}\left(s\right)\) with
\[u^{\prime}\left(s\right):=\frac{du\left(s\right)}{ds}\,, \tag{1}\]
defining the derivation of the spatial coordinate \(u\) (refers to either \(x\) or \(y\)) w.r.t. the longitudinal coordinate \(s\). It is assumed that the according transverse velocity \(\beta cu^{\prime}\) is small in comparison to the main propagation velocity \(\beta c\) of the beam along \(s\). Linear transport of particle coordinates from an initial location to a final location is modeled through a linear 4\(\times\)4 matrix equation
\[\left[\vec{r}\left(s\right)\right]_{\text{final}}:=M\cdot\left[\vec{r}\left(s \right)\right]_{\text{initial}}\,. \tag{2}\]
Coupled beams possess ten independent second-order rms-moments. They are summarized within the symmetric beam moments matrix
\[C:=\begin{bmatrix}\left\langle xx\right\rangle&\left\langle xx^{\prime}\right\rangle&\left\langle xy\right\rangle&\left\langle xy^{\prime}\right\rangle\\ \left\langle x^{\prime}x\right\rangle&\left\langle x^{\prime}x^{\prime}\right\rangle&\left\langle x^{\prime}y\right\rangle&\left\langle x^{\prime}y^{\prime}\right\rangle\\ \left\langle yx\right\rangle&\left\langle yx^{\prime}\right\rangle&\left\langle yy\right\rangle&\left\langle yy^{\prime}\right\rangle\\ \left\langle y^{\prime}x\right\rangle&\left\langle y^{\prime}x^{\prime}\right\rangle&\left\langle y^{\prime}y\right\rangle&\left\langle y^{\prime}y^{\prime}\right\rangle\end{bmatrix}\,. \tag{3}\]
Four of its elements quantify beam coupling. Beams are \(x\)-\(y\) coupled if at least one of these elements is different from zero. The projected rms-emittances \(\epsilon_{x}\) and \(\epsilon_{y}\) are defined through the determinants of the two on-diagonal sub-matrices as
\[\epsilon_{u}=\sqrt{\left\langle uu\right\rangle\left\langle u^{\prime}u^{ \prime}\right\rangle-\left\langle uu^{\prime}\right\rangle^{2}}\,, \tag{4}\]
i.e, they do not depend on coupled beam moments. In turn, the two eigen-emittances
\[\epsilon_{1,2}=\frac{1}{2}\sqrt{-\text{tr}\left(CJ\right)^{2}\pm\sqrt{\text{ tr}^{2}\left(CJ\right)^{2}-16\det\left(C\right)}}\,, \tag{5}\]
depend on all beam moments including those with coupling. Any linear transformation \(M\) obeying
\[J=M^{\text{T}}\cdot J\cdot M\,,\hskip 14.226378ptJ:=\begin{bmatrix}0&1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\end{bmatrix}\,, \tag{6}\]
is called symplectic and it preserves the two eigen-emittances. Only if \(M\) does not include any coupling elements will it also preserve the two projected rms-emittances. In case a transformation \(M\) decouples a given beam, the decoupled beam's rms-emittances are equal to the two eigen-emittances, which remained unchanged by \(M\). Coupling can be quantified by the coupling parameter [13]
\[t:=\frac{\epsilon_{x}\epsilon_{y}}{\epsilon_{1}\epsilon_{2}}-1\,,\hskip 14.226378pt \epsilon_{\text{4d}}=\epsilon_{1}\epsilon_{2}\,, \tag{7}\]
and if and only if \(t\) is equal to zero, there is no inter-plane correlation and the projected rms-emittances are equal to the eigen-emittances.
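A small numerical sketch of these definitions (using the coordinate ordering \(x, x^{\prime}, y, y^{\prime}\)) is given below; for an uncoupled beam matrix it returns \(t=0\) and identical projected and eigen-emittances.

```python
import numpy as np

# Symplectic unit matrix J for the coordinate ordering (x, x', y, y')
J = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]], dtype=float)

def projected_emittances(C):
    """Projected rms-emittances of Eq. (4) from the 4x4 beam matrix C."""
    ex = np.sqrt(np.linalg.det(C[:2, :2]))
    ey = np.sqrt(np.linalg.det(C[2:, 2:]))
    return ex, ey

def eigen_emittances(C):
    """Eigen-emittances of Eq. (5)."""
    CJ = C @ J
    tr = np.trace(CJ @ CJ)             # trace of (CJ)^2, negative for physical beams
    root = np.sqrt(tr**2 - 16.0 * np.linalg.det(C))
    return 0.5 * np.sqrt(-tr + root), 0.5 * np.sqrt(-tr - root)

def coupling_parameter(C):
    """Coupling parameter t of Eq. (7); t = 0 if and only if the beam is uncoupled."""
    ex, ey = projected_emittances(C)
    e1, e2 = eigen_emittances(C)
    return ex * ey / (e1 * e2) - 1.0
```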
A simple way to impose spinning on a beam is to pass it through an effective half solenoid. Although half solenoids do not exist due to \(\vec{\nabla}\cdot\vec{B}=0\), their effect can be imposed by particle creation inside the solenoid or by changing the beam charge state inside the solenoid. The first method is applied in Electron-Cyclotron-Resonance (ECR) ion sources [14; 15] and the second method has been proposed in [16; 17] and demonstrated experimentally in [18]. Effective half solenoids have the appealing feature that decoupling afterwards for further transportation is quasi independent of their magnetic field strength [17; 19].
The first part of the transport matrix \(S^{\text{fl}}\) of an effective half solenoid is given by the matrix \(S_{\rightarrow}\) of the main body of the solenoid of effective length \(L\), comprising just the pure longitudinal magnetic field \(B_{s}\)
\[S_{\rightarrow}=\begin{bmatrix}1&\frac{\sin\left(2KL\right)}{2K}&0&\frac{1-\cos\left(2KL\right)}{2K}\\ 0&\cos\left(2KL\right)&0&\sin\left(2KL\right)\\ 0&-\frac{1-\cos\left(2KL\right)}{2K}&1&\frac{\sin\left(2KL\right)}{2K}\\ 0&-\sin\left(2KL\right)&0&\cos\left(2KL\right)\end{bmatrix}\,, \tag{8}\]
with \(K:=B_{s}/\left[2\left(B\rho\right)\right]\) (Larmor wave number) and \(\left(B\rho\right)\) as beam rigidity. \(K\) imposes spinning to the particles through preservation of the canonical angular momentum during transition through the solenoid, i.e.,
\[L_{\theta}=m\gamma\,r\,v_{\theta}+\frac{qB_{s}}{2}r^{2}=\text{const} \tag{9}\]
being also known as Busch's theorem [20]. Assuming an incoming particle without canonical angular momentum in front of the solenoid with
\[v_{\theta}=0\,,\hskip 14.226378ptL_{\theta}=0\,, \tag{10}\]
inside the solenoid \(v_{\theta}\) will be changed to
\[v_{\theta}=-\frac{qB_{s}}{2\gamma m}r\,, \tag{11}\]
being equivalent to introduction of spinning by the solenoid. The extension of Busch's theorem from one single particle to beams is treated in [21].
The second part of \(S^{\text{fl}}\) is from the fringe field matrix \(S_{\downarrow}\) of the solenoid exit
\[S_{\downarrow}=\begin{bmatrix}1&0&0&0\\ 0&1&-K&0\\ 0&0&1&0\\ K&0&0&1\end{bmatrix}\,, \tag{12}\]
and the total matrix of the half solenoid is the product of both matrices
\[S^{h}=S_{\downarrow}\cdot S_{\rightarrow}=\begin{bmatrix}S_{xx}^{h}&S_{xy}^{h}\\ S_{yx}^{h}&S_{yy}^{h}\end{bmatrix}\,. \tag{13}\]
The total transfer matrix \(S\) of a complete solenoid is the product of entrance matrix \(S_{\uparrow}\), main body matrix \(S_{\rightarrow}\), and exit matrix \(S_{\downarrow}\)
\[S=S_{\downarrow}\cdot S_{\rightarrow}\cdot S_{\uparrow},\ \ \ \ \ \ S_{\uparrow}= \begin{bmatrix}1&0&0&0\\ 0&1&K&0\\ 0&0&1&0\\ -K&0&0&1\end{bmatrix}\,. \tag{14}\]
The determinants of the diagonal sub-matrices \(S_{xx}^{h}\) and \(S_{yy}^{h}\) are different from 1.0, hence the projected rms-emittances are changed by \(S^{h}\). Additionally, \(S_{xy}^{h}\) and \(S_{yx}^{h}\) are also different from zero, thus coupling will be imposed on an initially uncoupled beam. Although being non-symplectic, \(S^{h}\) has a determinant of 1.0, preserving the product of the two eigen-emittances.
The sub-matrices of a transport matrix \(Q\) of a regular quadrupole of strength \(k:=\sqrt{G/\left(B\rho\right)}\) and effective length \(l\) are given by
\[Q_{xx}=\begin{bmatrix}\cos\left(kl\right)&\frac{\sin\left(kl\right)}{k}\\ -k\sin\left(kl\right)&\cos\left(kl\right)\end{bmatrix}\,, \tag{15}\]
and
\[Q_{yy}=\begin{bmatrix}\cosh\left(kl\right)&\frac{\sinh\left(kl\right)}{k}\\ k\sinh\left(kl\right)&\cosh\left(kl\right)\end{bmatrix}\,, \tag{16}\]
with \(G\) being the magnetic field gradient of the quadrupole implying \(B_{y}=Gx\) and \(B_{x}=-Gy\). For positive (negative) \(G\), quadrupoles focus in the horizontal (vertical) plane and defocus in the vertical (horizontal) plane. The coupling sub-matrices are zero. The matrix of a drift is
\[D_{xx}=D_{yy}=\begin{bmatrix}1&l\\ 0&1\end{bmatrix}\,, \tag{17}\]
with its coupling sub-matrices being equal to zero. Finally, clockwise rotation of the beam by \(\theta\) around the positive \(s\)-axis is modeled through the symplectic matrix
\[R\left(\theta\right)=\begin{bmatrix}\cos\left(\theta\right)&0&-\sin\left( \theta\right)&0\\ 0&\cos\left(\theta\right)&0&-\sin\left(\theta\right)\\ \sin\left(\theta\right)&0&\cos\left(\theta\right)&0\\ 0&\sin\left(\theta\right)&0&\cos\left(\theta\right)\end{bmatrix}\,. \tag{18}\]
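The hard-edge transfer matrices of Eqs. (8)–(17) can be assembled numerically as follows (coordinate ordering \(x, x^{\prime}, y, y^{\prime}\)); this is a minimal sketch rather than a complete beam-optics library.

```python
import numpy as np

def solenoid_body(K, L):
    """Main body of a solenoid, Eq. (8); K = Bs/(2 B rho) is the Larmor wave number."""
    c, s = np.cos(2 * K * L), np.sin(2 * K * L)
    return np.array([[1, s / (2 * K), 0, (1 - c) / (2 * K)],
                     [0, c, 0, s],
                     [0, -(1 - c) / (2 * K), 1, s / (2 * K)],
                     [0, -s, 0, c]])

def solenoid_fringe(K, entrance=True):
    """Entrance / exit fringe kicks of Eqs. (14) / (12)."""
    k = K if entrance else -K
    return np.array([[1.0, 0, 0, 0],
                     [0, 1.0, k, 0],
                     [0, 0, 1.0, 0],
                     [-k, 0, 0, 1.0]])

def half_solenoid(K, L):
    """Effective half solenoid of Eq. (13): main body followed by the exit fringe."""
    return solenoid_fringe(K, entrance=False) @ solenoid_body(K, L)

def quadrupole(k, l, focusing_x=True):
    """Regular quadrupole, Eqs. (15)-(16); k > 0, set focusing_x=False for a
    vertically focusing quadrupole."""
    c, s = np.cos(k * l), np.sin(k * l)
    ch, sh = np.cosh(k * l), np.sinh(k * l)
    foc = np.array([[c, s / k], [-k * s, c]])
    defoc = np.array([[ch, sh / k], [k * sh, ch]])
    M = np.zeros((4, 4))
    if focusing_x:
        M[:2, :2], M[2:, 2:] = foc, defoc
    else:
        M[:2, :2], M[2:, 2:] = defoc, foc
    return M

def drift(l):
    """Field-free drift of length l, Eq. (17)."""
    M = np.eye(4)
    M[0, 1] = M[2, 3] = l
    return M
```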
## III Beam line for coupling, matching, and transportation
The beam line being used to determine the periodic solution of an intense coupled beam along a periodic channel is sketched systematically in Fig. 1. It comprises three sections, starting with an effective half solenoid being followed by a matching section. This section transports the beam parameters from the solenoid exit to the entrance of the periodic channel. These two sections include coupling elements. The third section is a periodic sequence of non-coupling regular quadrupoles.
At the beginning of the beam line, an uncoupled beam is assumed with beam sigma-matrix
\[C\left(s_{0}\right)=\begin{bmatrix}C_{xx}&O\\ O&C_{yy}\end{bmatrix}\,, \tag{19}\]
\[C_{xx}=\epsilon_{x}\begin{bmatrix}\beta_{x}&-\alpha_{x}\\ -\alpha_{x}&\frac{1+\alpha_{x}^{2}}{B_{x}}\end{bmatrix}\,,\ C_{yy}=\epsilon_{y} \begin{bmatrix}\beta_{y}&-\alpha_{y}\\ -\alpha_{y}&\frac{1+\alpha_{x}^{2}}{B_{y}}\end{bmatrix}\,, \tag{20}\]
with \(\beta_{u}=\langle uu\rangle/\epsilon_{u}\) and \(\alpha_{u}=-\langle uu^{\prime}\rangle/\epsilon_{u}\). The beam matrix at the beginning of the matching section is
\[C\left(s_{1}\right)=S^{h}\cdot C\left(s_{0}\right)\cdot\left(S^{h}\right)^{ \mathrm{T}}\,. \tag{21}\]
The matching section is modeled through the symplectic and coupling matrix \(\mathfrak{R}\) and hence
\[C\left(s_{2}\right)=\mathfrak{R}\cdot C\left(s_{1}\right)\cdot\mathfrak{R}^{ \mathrm{T}}\,, \tag{22}\]
is the beam matrix at the entrance to the quadrupole channel. Figure 2 depicts one cell of the quadrupole FODO channel.
Figure 2: One cell of the periodic quadrupole channel (cell length \(\ell\) = 0.8 m). Focusing and defocusing quadrupoles have gradients of \(G=\pm 1.0\) T/m.
Figure 1: The beam line comprises three parts: (I) effective half solenoid; (II) matching section; (III) regular quadrupole doublet section (twelve cells). Space charge effects are not considered along the first two sections (see text).
Its transport matrix is a product of five single matrices
\[\mathfrak{T}=Q_{f}^{h}\cdot D\cdot Q_{d}\cdot D\cdot Q_{f}^{h}\,,\ \ \ \ \ \ Q_{f}=Q_{f}^{h}\cdot Q_{f}^{h}\,. \tag{23}\]
At the entrance to the beam line at \(s_{0}\), an uncoupled proton beam with an energy of 150 keV/u is assumed. The type of channel corresponds to a common scheme of focusing. Protons at this energy are provided by many sources around the world. The rigidity allows applying solenoid field strengths that are reasonably low while still providing a considerable amount of coupling. Beam Twiss parameters are set to \(\varepsilon_{x}=\varepsilon_{y}=69.90\) mm mrad, \(\beta_{x}=\beta_{y}=2\) m/rad, \(\alpha_{x}=0.250\), and \(\alpha_{y}=-0.275\). If the length of the half solenoid is set to 0.25 m, the values of eigen-emittances and projected emittances at the exit of the half solenoid (position \(s_{1}\)) are determined by the solenoid field as shown in Fig. 3.
After transport through this half solenoid the beam matrix (in units of mm and mrad) is
\[C\left(s_{1}\right)=\begin{bmatrix}+133.6&-8.578&+2.021&+124.9\\ -8.578&+139.5&-124.9&-31.08\\ +2.021&-124.9&+151.4&+28.22\\ +124.9&-31.08&+28.22&+154.1\end{bmatrix}\,, \tag{24}\]
In order to obtain a periodic solution for this coupled beam, the details of the matching section are not required, as seen in the following. However, it is modeled by a transport matrix including 16 elements
\[\mathfrak{R}\left(m_{1},m_{2},\ldots,m_{16}\right)=\begin{bmatrix}m_{1}&m_{2} &m_{3}&m_{4}\\ m_{5}&m_{6}&m_{7}&m_{8}\\ m_{9}&m_{10}&m_{11}&m_{12}\\ m_{13}&m_{14}&m_{15}&m_{16}\end{bmatrix}\,. \tag{25}\]
Although initially being unknown, the 16 elements must provide for \(\det\left(\mathfrak{R}\right)=1.0\) and that \(\mathfrak{R}\) is symplectic according to Eq. (6). For brevity, the set of \(m_{1},m_{2},\ldots,m_{16}\) shall be denoted by \(\mathfrak{K}\). Although the detailed layout of the matching section is beyond the scope of this paper, a conceptual approach is sketched in Appendix C.
## IV Modeling of periodic channel
For zero current, the effective focusing forces are given solely by the external lattice. The actual beam shape has no influence on them and therefore the periodic solution even for coupled beams may be found analytically. For intense beams instead, defocusing space charge forces depend on the beam shape and orientation in real space. Actually, they depend also on the spatial distribution. However, since modeling of space charge forces using rms-equivalent KV-distributions proofed to work very well for matching purposes, this approach is followed here as well.
In the following, an iterative method is described to determine the periodic solution for zero current. At first glance, it seems more complicated than a straightforward analytical approach. However, it has the advantage of being easily applicable to obtain the periodic solution even with current.
### beam with zero current
The periodic solution meets the condition
\[C\left(s_{2}\right)=\mathfrak{S}\cdot C\left(s_{2}\right)\cdot\mathfrak{S}^{ \mathrm{T}}=C\left(s_{2}+\ell\right)\,, \tag{26}\]
where \(\ell\) is the length of one cell and the transport matrix from the exit of the solenoid \(s_{1}\) to the exit of the first cell is
\[\mathcal{U}\left(\mathfrak{K}\right)=\mathfrak{S}\cdot\mathfrak{R}\left( \mathfrak{K}\right)\,, \tag{27}\]
where \(\mathfrak{S}\) is fully known from the cell of the quadrupole channel (see Fig. 2).
From first principles, neither the periodic solution nor the elements \(\mathfrak{K}\) that provide the corresponding matching from the exit of the solenoid \(s_{1}\) to the entrance of the channel \(s_{2}\) are known. The iterative procedure to finally obtain both starts with a guessed initial set \(\mathfrak{K}^{i}\) that just meets the condition that \(\mathfrak{R}\left(\mathfrak{K}^{i}\right)\) is symplectic with \(\det\left[\mathfrak{R}\left(\mathfrak{K}^{i}\right)\right]=1.0\). This guess will most likely not meet the condition of the periodic solution, i.e.,
\[\mathfrak{R}\left(\mathfrak{K}^{i}\right)\cdot C\left(s_{1}\right)\cdot \mathfrak{R}^{\mathrm{T}}\left(\mathfrak{K}^{i}\right)\neq\mathcal{U}\left( \mathfrak{K}^{i}\right)\cdot C\left(s_{1}\right)\cdot\mathcal{U}^{\mathrm{T}} \left(\mathfrak{K}^{i}\right)\,, \tag{28}\]
hence the beam matrix in front of the channel is different from the one behind the first cell (see details in appendix A).
With the MATHCAD [22] routine _Minerr_, a set of matching matrix elements \(\mathfrak{K}^{0}\) for zero beam current can be found such that the symplectic condition and \(\det\left[\mathfrak{R}\left(\mathfrak{K}^{0}\right)\right]=1.0\) are met exactly while also providing periodicity. The routine is dedicated to solving an under-determined system of equations with a defined set of boundary conditions.
\[\mathfrak{R}\left(\mathfrak{K}^{0}\right)\cdot C\left(s_{1}\right)\cdot \mathfrak{R}^{\mathrm{T}}\left(\mathfrak{K}^{0}\right)=\mathcal{U}\left( \mathfrak{K}^{0}\right)\cdot C\left(s_{1}\right)\cdot\mathcal{U}^{\mathrm{T}} \left(\mathfrak{K}^{0}\right)\,. \tag{29}\]
Figure 3: Projected rms-emittances (red), eigen-emittances (blue), and square root of the 4d-emittance (green) at the exit of the half solenoid. If the solenoid field is off, \(\varepsilon_{x}=\varepsilon_{y}=\varepsilon_{1}=\varepsilon_{2}=69.90\) mm mrad. If the solenoid field is \(B_{s}=0.1\) T, \(\varepsilon_{x}=136.3\), \(\varepsilon_{y}=150.1\), \(\varepsilon_{1}=268.0\), and \(\varepsilon_{2}=18.23\) mm mrad with a coupling factor of \(t=3.186\).
With \(\mathfrak{K}^{0}\) being determined, the periodic beam matrix at the beginning of the channel has been calculated as
\[C^{0}\left(s_{2}\right)=\begin{bmatrix}+158.1&+0.000&-76.88&+95.30\\ +0.000&+97.93&-27.65&-164.1\\ -76.88&-27.65&+56.66&+0.000\\ +95.30&-164.1&+0.000&+438.9\end{bmatrix}\,, \tag{30}\]
and it is equal to \(C^{0}\left(s_{2}+\ell\right)\). As for the case of an uncoupled beam, the periodic solution of the coupled beam features \(\alpha_{x,y}=0\) as expected from the symmetry of the regular cell of the channel. However, the corresponding coupling parameters from combinations of other planes are different from zero due to inter-plane coupling. The zero current transport matrix of one cell (in units of m and rad) is
\[\mathfrak{S}\left(\mathfrak{K}^{0}\right)=\begin{bmatrix}+0.321&+1.203&+0.000&+0.000\\ -0.745&+0.321&+0.000&+0.000\\ +0.000&+0.000&+0.321&+0.349\\ +0.000&+0.000&-2.569&+0.321\end{bmatrix}\,, \tag{31}\]
and evaluation of its sub-traces delivers the zero current phase advance of \(\mu_{0}=71.26^{\circ}\). The zero current transport matrix \(\mathfrak{S}\left(\mathfrak{K}^{0}\right)\) is independent of the initial beam matrix \(C^{0}\left(s_{2}\right)\) and is determined only by the lattice of the quadrupole channel.
### beam with high current
For KV-beams, the electric self-field caused by space charge can be calculated analytically as done by Sacherer [1] for uncoupled beams, i.e., for upright ellipses. In case of coupling, the ellipse is generally tilted as drawn in Fig. 4. Here, the space charge forces are first calculated within the tilted frame. In a second step, these forces are projected into the upright laboratory frame and applied to the beam. They are equivalent to a defocusing quadrupole kick in both planes. The strengths are not equal along both planes, but the resulting 4D-transformation is linear and symplectic. Hence, it will be modeled by another 4\(\times\)4 transport matrix \(\varkappa\).
The ellipse is described by its two semi-axes \(a_{1}\) and \(a_{2}\) and by the rotation angle \(\Theta\) of \(a_{1}\) w.r.t. the \(x\)-axis. Its rms-area is given by
\[A_{xy}=\sqrt{\left\langle xx\right\rangle\left\langle yy\right\rangle-\left \langle xy\right\rangle^{2}}=a_{1}a_{2}\,. \tag{32}\]
The above ellipse parameters are calculated from the beam second moments through
\[\beta_{xy}=\frac{\left\langle xx\right\rangle}{A_{xy}}\,,\ \ \ \ \ \alpha_{xy}=-\frac{\left\langle xy \right\rangle}{A_{xy}}\,, \tag{33}\]
\[\Theta=\frac{1}{2}\arctan\frac{-2\alpha_{xy}}{\beta_{xy}-\frac{1+\alpha_{xy}^{2}}{\beta_{xy}}}\,,\ \ \ h=\frac{\beta_{xy}}{2}+\frac{1+\alpha_{xy}^{2}}{2\beta_{xy}}\,, \tag{34}\]
and
\[a_{1,2}=\sqrt{\frac{A_{xy}}{2}}\left(\sqrt{h+1}\pm\sqrt{h-1}\right)\,. \tag{35}\]
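As a small sketch of Eqs. (32)-(35), the ellipse parameters can be computed directly from the three real-space second moments; the quadrant-safe `arctan2` is used here in place of the plain arctangent of Eq. (34), which is an implementation choice rather than part of the original formulation.

```python
import numpy as np

def ellipse_parameters(xx, yy, xy):
    """Tilted-ellipse parameters from the beam second moments <xx>, <yy>, <xy>."""
    A_xy = np.sqrt(xx * yy - xy**2)                      # rms-area, Eq. (32)
    beta_xy = xx / A_xy                                  # Eq. (33)
    alpha_xy = -xy / A_xy
    theta = 0.5 * np.arctan2(-2.0 * alpha_xy,
                             beta_xy - (1.0 + alpha_xy**2) / beta_xy)   # Eq. (34)
    h = 0.5 * beta_xy + (1.0 + alpha_xy**2) / (2.0 * beta_xy)
    a1 = np.sqrt(A_xy / 2.0) * (np.sqrt(h + 1.0) + np.sqrt(h - 1.0))    # Eq. (35)
    a2 = np.sqrt(A_xy / 2.0) * (np.sqrt(h + 1.0) - np.sqrt(h - 1.0))
    return A_xy, theta, a1, a2
```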
The transport matrix \(\varkappa\) is calculated from the ellipse geometric parameters and the general beam parameters as
\[\varkappa=R^{-1}\left(\Theta\right)\cdot\varkappa^{*}\cdot R\left(\Theta\right)\,, \tag{36}\]
where \(\varkappa^{*}\) is the matrix in the tilted ellipse frame. It reads
\[\varkappa_{1,2}^{*}=\begin{bmatrix}1&0\\ \kappa_{1,2}\delta s&1\end{bmatrix}\,,\ \ \ \ \varkappa^{*}=\begin{bmatrix} \varkappa_{1}^{*}&O\\ O&\varkappa_{2}^{*}\end{bmatrix}\,, \tag{37}\]
with \(\delta s\) being the step size along \(s\) between two space charge kicks. \(\kappa_{1,2}\) are the respective kick strengths along each semi-axis and are given by
\[\kappa_{1}=\frac{\kappa_{\rm sc}}{2a_{1}\left(a_{1}+a_{2}\right)}\,,\ \ \ \ \ \kappa_{2}=\frac{\kappa_{\rm sc}}{2a_{2}\left(a_{1}+a_{2}\right)}\,, \tag{38}\]
from the generalized beam perveance
\[\kappa_{\rm sc}=\frac{qI}{2\pi\epsilon_{0}m\left(\gamma\beta c\right)^{3}}\,, \tag{39}\]
with \(q\) as particle charge, \(I\) as beam current, and \(\beta\) and \(\gamma\) as relativistic factors.
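A minimal sketch of how the kick matrix of Eqs. (36)-(38) could be assembled is given below; the explicit \(4\times 4\) form of the rotation \(R(\Theta)\) acting on \((x,x^{\prime},y,y^{\prime})\) is an assumption, since it is not written out in this section.

```python
import numpy as np

def rotation_4d(theta):
    """Rotate both (x, y) and (x', y') by theta, assuming (x, x', y, y') ordering."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0,   c, 0.0,   s],
                     [ -s, 0.0,   c, 0.0],
                     [0.0,  -s, 0.0,   c]])

def space_charge_kick(a1, a2, theta, k_sc, ds):
    """Linear space charge kick in the laboratory frame, Eqs. (36)-(38)."""
    k1 = k_sc / (2.0 * a1 * (a1 + a2))       # kick strength along the first semi-axis
    k2 = k_sc / (2.0 * a2 * (a1 + a2))       # kick strength along the second semi-axis
    kick_tilted = np.identity(4)             # block-diagonal thin kick, Eq. (37)
    kick_tilted[1, 0] = k1 * ds
    kick_tilted[3, 2] = k2 * ds
    R = rotation_4d(theta)
    return np.linalg.inv(R) @ kick_tilted @ R   # projection into the lab frame, Eq. (36)
```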
With these prerequisites, any beam line of (skewed) quadrupoles transporting a coupled intense beam is modeled through a sequence of symplectic linear transport matrices. Quadrupoles and drifts are sub-divided into many slices each, and transport through them is performed as a sequence of transports along the slice length \(\delta s\) without space charge, each followed by execution of the space charge kick with \(\varkappa\). This method has been implemented in many codes. For uncoupled beams, the PARMILA code [23] for instance uses it to design periodic lattices and to evaluate their performances. Here it shall serve to obtain cell-by-cell periodic solutions for intense coupled beams; a sketch of such an rms-tracking loop is given below.
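The following sketch illustrates the slice-by-slice rms-tracking described above; it reuses `ellipse_parameters` and `space_charge_kick` from the previous sketches, and the list `slices` of symplectic slice matrices of length \(\delta s\) is assumed to be provided by the lattice description.

```python
import numpy as np

def track_cell(C, slices, ds, k_sc):
    """rms-track the 4x4 beam matrix C through one cell and accumulate the cell matrix."""
    S_cell = np.identity(4)
    for M_slice in slices:
        C = M_slice @ C @ M_slice.T                  # slice transport without space charge
        A_xy, theta, a1, a2 = ellipse_parameters(C[0, 0], C[2, 2], C[0, 2])
        K = space_charge_kick(a1, a2, theta, k_sc, ds)
        C = K @ C @ K.T                              # space charge kick afterwards
        S_cell = K @ M_slice @ S_cell
    return C, S_cell
```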
Figure 4: Ellipse of an \(x\)-\(y\) coupled beam in real space. \(A_{xy}\) is the rms-area of the beam, see Eq. (32). Parameters \(\alpha_{xy}\) and \(\beta_{xy}\) are its equivalent Twiss parameters defining the ellipse orientation and aspect ratio in real space. The \(x\), \(y\), and \(s\) unit vectors of the Cartesian coordinate system follow the right-hand rule.
## V Periodic solution with space charge and coupling
Solutions of the beam matrix along the periodic channel are considered periodic if the equation
\[C\left(s_{2}\right)\approx C\left(s_{2}+\ell\right) \tag{40}\]
is fulfilled to very good approximation. Subsection IV.1 presented such a solution \(C^{0}\left(s_{2}\right)\) for zero current. This solution will not hold once the beam current is switched on. This is due to the dependence of the cell transport matrix \(\mathfrak{S}\) on the beam current and on the beam Twiss parameters at the entrance to the channel, as shown in subsection IV.2.
In order to find a solution that holds even with current, another iterative procedure is applied. It uses the method of determining a matching setting \(\Re\) presented in section IV.1. Additionally, it iterates between obtaining the periodic transport matrix from tracking and using this matrix to re-adapt the matching.
The iterative procedure starts from the beam moments matrix \(C\left(s_{1}\right)\) behind the solenoid, which is transported through the matching line \(\Re\left(\Re^{0}\right)\) obtained for zero current. The resulting beam matrix at the entrance to the channel
\[C^{0}\left(s_{2}\right)=\Re\left(\Re^{0}\right)\cdot C\left(s_{1}\right)\cdot \Re^{\mathrm{T}}\left(\Re^{0}\right)\,, \tag{41}\]
is then tracked with high current (10 mA) through one cell. Accordingly, the total transport matrix of the cell \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right)\) is a result of the tracking procedure described in subsection IV.2. \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right)\) depends on the current \(I\) and on the spatial beam parameters at the entrance of the channel. The \(4\times 4\) elements of \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right)\) are stored for further use. Most likely, \(C^{0}\left(s_{2}\right)\) does not meet the condition of the periodic solution with current, i.e,
\[C^{0}\left(s_{2}\right)\neq\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right) \cdot\Re\left(\Re^{0}\right)\cdot C\left(s_{1}\right)\cdot\Re^{\mathrm{T}} \left(\Re^{0}\right)\cdot\mathfrak{S}_{\mathrm{sc}}^{\mathrm{T}}\left(\Re^{0 }\right)\,. \tag{42}\]
However, the cell matrix \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right)\) is used to re-adapt the matching setting such that a new matching \(\Re^{1}\) is found which provides for equal beam matrices before and after transport through the cell matrix \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right)\)
\[C^{1}\left(s_{2}\right)=\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right)\cdot \Re\left(\Re^{1}\right)\cdot C\left(s_{1}\right)\cdot\Re^{\mathrm{T}}\left( \Re^{1}\right)\cdot\mathfrak{S}_{\mathrm{sc}}^{\mathrm{T}}\left(\Re^{0} \right)\,, \tag{43}\]
emphasizing that the above equation uses the stored elements of \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{0}\right)\).
This new matching \(\Re^{1}\) delivers the beam matrix \(C^{1}\left(s_{2}\right)\) in front of the channel. It is now re-tracked with current through the cell as described in subsection IV.2. The tracking will provide a new cell matrix \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{1}\right)\). Again its \(4\times 4\) elements are stored to re-adapt the matching to a setting \(\Re^{2}\) meeting the periodic solution assuming the new matrix \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{1}\right)\) along the channel
\[C^{2}\left(s_{2}\right)=\mathfrak{S}_{\mathrm{sc}}\left(\Re^{1}\right)\cdot \Re\left(\Re^{2}\right)\cdot C\left(s_{1}\right)\cdot\Re^{\mathrm{T}}\left( \Re^{2}\right)\cdot\mathfrak{S}_{\mathrm{sc}}^{\mathrm{T}}\left(\Re^{1} \right)\,. \tag{44}\]
This in turn provides a new beam matrix \(C^{2}\left(s_{2}\right)\) in front of the channel, which changes the transport matrix of the cell to \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{2}\right)\). Continuing this procedure finally converges, i.e., the changes from \(\Re^{n-1}\) to \(\Re^{n}\) become very small and finally negligible. Accordingly, after a sufficient number of iterations \(j\), the periodic condition is fulfilled through
\[C^{j}\left(s_{2}\right)\approx\mathfrak{S}_{\mathrm{sc}}\left(\Re^{j}\right) \cdot\Re\left(\Re^{j}\right)\cdot C\left(s_{1}\right)\cdot\Re^{\mathrm{T}} \left(\Re^{j}\right)\cdot\mathfrak{S}_{\mathrm{sc}}^{\mathrm{T}}\left(\Re^{j} \right)\,. \tag{45}\]
The matrix \(C^{j}\left(s_{2}\right)\) contains the periodic beam moments at the entrance to the channel and \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{j}\right)\) is the periodic transport matrix of the cell including current and coupling. Since all \(\mathfrak{S}_{\mathrm{sc}}\left(\Re^{n}\right)\) are products of symplectic slice matrices, all matrices \(C^{n}\left(s_{2}\right)\) have the same eigen-emittances.
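A compact sketch of this track-and-re-match iteration is given below; it reuses `find_matching` and `track_cell` from the earlier sketches, and the tolerance, iteration limit, and function names are illustrative choices rather than part of the original procedure.

```python
import numpy as np

def periodic_matching(C_s1, S0, slices, ds, k_sc, m_init, tol=1e-6, max_iter=20):
    """Iterate matching and rms-tracking with current until Eq. (45) is met."""
    R = find_matching(m_init, S0, C_s1)                  # zero-current matching, Eq. (29)
    _, S_sc = track_cell(R @ C_s1 @ R.T, slices, ds, k_sc)
    for _ in range(max_iter):
        R_new = find_matching(R, S_sc, C_s1)             # re-adapt the matching, Eq. (43)
        _, S_new = track_cell(R_new @ C_s1 @ R_new.T, slices, ds, k_sc)
        if np.max(np.abs(R_new - R)) < tol:              # changes become negligible
            return R_new, S_new
        R, S_sc = R_new, S_new
    return R, S_sc
```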
In case of the example presented here, sufficient convergence has been reached at \(j=4\) and the corresponding beam matrix (in units of mm and mrad) is
\[C^{4}\left(s_{2}\right)=\begin{bmatrix}+153.0&-0.004&-86.70&+0.006\\ -0.004&+85.92&-0.004&-170.2\\ -86.70&-0.004&+68.44&+0.019\\ +0.006&-170.2&+0.019&+431.3\end{bmatrix}\,, \tag{46}\]
with \(\varepsilon_{x}=114.6\), \(\varepsilon_{y}=171.8\), \(\varepsilon_{1}=268.0\), and \(\varepsilon_{2}=18.23\) mm mrad with coupling factor of \(t=3.031\). The corresponding output beam matrix is
\[C^{4}\left(s_{2}+\ell\right)=\begin{bmatrix}+153.3&+0.110&-86.72&+0.660\\ +0.110&+85.71&-0.215&-170.2\\ -86.72&-0.215&+68.30&-0.131\\ +0.660&-170.2&-0.131&+432.2\end{bmatrix}\,, \tag{47}\]
the according transport matrix along the channel (one cell) is determined as
\[\mathfrak{S}_{\mathrm{sc}}\left(\Re^{4}\right)=\begin{bmatrix}+0.476&+1.263&+0.1 26&+0.022\\ -0.611&+0.476&+0.128&+0.038\\ +0.038&+0.022&+0.440&+0.374\\ +0.128&+0.126&-2.148&+0.441\end{bmatrix}\,, \tag{48}\]
with corresponding phase advances of \(\mu_{x}=61.59^{\circ}\) and \(\mu_{y}=63.87^{\circ}\), respectively. Accordingly, the averaged transverse phase advance depression w.r.t. the zero-current case is 12.0%. Figure 5 compares the six 2D-projections of the 4D-phase space ellipses \(C^{4}\left(s_{2}\right)\) and \(C^{4}\left(s_{2}+\ell\right)\) in front of and behind the cell. It reveals that cell-to-cell periodicity has been achieved for all ten rms-moments of the beam matrix.
The corresponding rms-moments along a channel comprising two cells are plotted in Fig. 6. It has been shown that cell-to-cell periodicity of an intense coupled coasting beam can be achieved under the assumption of a KV-distribution. Introduction of coupling artificially increases the projected transverse emittances. However, this growth is not intrinsic since it can be removed afterwards by decoupling. For instance, dispersive sections are part of many beam lines, albeit they come along with horizontal emittance growth. Figure 7 plots the behavior of the 4d-rms-emittance, eigen-emittances, and projected rms-emittances along the periodic channel. However, mitigation of the growth of projected emittances is not the aim of the presented study; its aim is the provision of a fully 4d-periodic solution for intense and coupled beams. In case the solenoid field is off, all emittances remain constant (\(\sqrt{\varepsilon_{1}\varepsilon_{2}}=69.90\) mm mrad) along the periodic channel.
In the following section, the previous results shall be benchmarked with a beam featuring a Gaussian distribution. Corresponding comparisons have been done extensively for uncoupled beams during the last decades. This shall be done here for the present example to validate the method for coupled beams.
## VI Benchmarking
Benchmarking has been done with the BEAMPATH code [24] using a Gaussian-type beam. The initial distribution of \(2\times 10^{4}\) particles is rms-equivalent to the second beam moments matrix \(C^{4}\left(s_{2}\right)\) from Eq. (46).
Tracking has been done using 10 mA and sixty-six cells of the periodic channel. Figure 8 shows the transverse rms-beam sizes along the quadrupole channel obtained from the tracking method described in subsection IV.2 and extracted from the simulations with BEAMPATH.
Both rms-beam sizes, from rms-tracking a KV-distribution and from simulating a Gaussian beam, reveal a high degree of matching to the lattice periodicity. The KV-based rms-beam size is perfectly regular and the Gaussian rms-beam size shows slight fluctuations around it. Some deviations are to be expected, since space charge forces, especially at the outer parts of the beam, are different for KV and Gaussian distributions. The matching proved to work very well even for the Gaussian beam and a large number of cells.
## VII Conclusion
It has been shown that cell-to-cell 4D-matching can be achieved for a coupled beam with considerable space charge forces. This has been accomplished by rms-tracking of coupled beams with KV-distribution combined with a dedicated iterative procedure of tracking and re-matching. Benchmarking with an initial Gaussian distribution along a channel with a large cell number revealed that the method works very well. Hence, it provides a tool for systematic investigations of intense, coupled beam transport along periodic lattices. One special application is imposing a well-defined spinning on beams being transported along such lattices, as drift tube linacs for instance.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A Transfer matrices of matching section
For zero current beam injection into the channel, the initial transfer matrix of the matching section has been assumed randomly as
\[\mathfrak{R}\left(\mathfrak{R}^{i}\right)=\begin{bmatrix}+0.603&-0.157&+0.001& -0.001\\ +0.320&+1.555&+0.001&+0.501\\ +0.004&-0.024&+0.978&-1.686\\ +0.039&+0.143&+0.000&+1.082\end{bmatrix}\,, \tag{11}\]
satisfying Eq. (28). Applying the matching method (routine _Minerr_ of the MATHCAD) delivers
\[\mathfrak{R}\left(\mathfrak{R}^{0}\right)=\begin{bmatrix}+0.770&+0.402&-0.522& -0.513\\ -0.304&+0.527&+0.444&-0.470\\ +0.352&-0.232&+0.427&-0.193\\ +0.637&+0.924&+0.252&+1.121\end{bmatrix}\,, \tag{12}\]
satisfying Eq. (29).
For high current injection, \(\mathfrak{R}\left(\mathfrak{R}^{0}\right)\) has been used as initial transfer matrix and the optimization routine has been applied again giving
\[\mathfrak{R}\left(\mathfrak{R}^{4}\right)=\begin{bmatrix}+0.687&+0.430&-0.400& -0.794\\ -0.334&+0.371&+0.560&-0.393\\ +0.360&-0.351&+0.379&-0.237\\ +0.913&+0.782&+0.263&+0.886\end{bmatrix}\,, \tag{13}\]
satisfying Eq. (45).
Figure 5: Projected rms-ellipses of the beam second moments matrix at the entrance (blue) and exit (red) of the first cell of the periodic channel for a coupled proton beam with 10 mA.
The field strength of the half solenoid has been used to control the coupling parameter. If the solenoid field is off, the eigen-emittances and projected emittances are equal to each other. Applying the identical matching routine, the transport matrices of the matching section for zero and for high current injections are obtained as
\[\mathfrak{R}_{\circ}\left(\mathfrak{R}^{0}\right)=\begin{bmatrix}+0.431&+1.049&-0.1 84&+0.975\\ -0.399&+0.550&-0.353&-0.001\\ +0.242&+0.230&-0.038&-0.637\\ -0.255&+1.180&+0.972&-0.934\end{bmatrix}\,, \tag{10}\]
\[\mathfrak{R}_{\circ}\left(\mathfrak{R}^{4}\right)=\begin{bmatrix}+0.468&+1.093&- 0.194&+1.028\\ -0.370&+0.539&-0.333&-0.001\\ +0.258&+0.260&-0.022&-0.706\\ -0.252&+1.076&+0.907&-0.786\end{bmatrix}\,. \tag{11}\]
Figure 8: Transverse rms-beam sizes of a coupled 10 mA proton beam along a regular FODO quadrupole channel (twelve cells) as obtained from rms-tracking (blue) and extracted from BEAMPATH particle-tracking simulations (red).
Figure 6: The ten independent rms-moments along the regular quadrupole channel (two cells) for a coupled proton beam with 10 mA. Left: rms-moments \(\langle xx\rangle\), \(\langle yy\rangle\), and \(\langle xy\rangle\) (red, blue, and green dots); Middle: rms-moments \(\langle xx^{\prime}\rangle\), \(\langle yy^{\prime}\rangle\), \(\langle xy^{\prime}\rangle\), and \(\langle x^{\prime}y\rangle\) (red, blue, green, and magenta dots); Right: rms-moments \(\langle x^{\prime}x^{\prime}\rangle\), \(\langle y^{\prime}y^{\prime}\rangle\), and \(\langle x^{\prime}y^{\prime}\rangle\) (red, blue, and green dots).
Figure 7: Projected rms-emittances (red), eigen-emittances (blue), and square root of 4d-rms-emittance (green) of a coupled 10 mA proton beam along a regular FODO quadrupole channel (twelve cells).
## Appendix B Uncoupled beam through the channel
If the solenoid field is set to zero (treated as drift), the beam matrix at position \(s_{1}\) is written as
\[C_{\circ}\left(s_{1}\right)=\begin{bmatrix}+133.4&-8.192&+0.000&+0.000\\ -8.192&+37.14&+0.000&+0.000\\ +0.000&+0.000&+151.8&+28.62\\ +0.000&+0.000&+28.62&+37.60\end{bmatrix}\,. \tag{10}\]
Together with the transport matrix \(\mathfrak{R}_{\circ}\left(\mathfrak{R}^{4}\right)\), the corresponding beam matrices \(C_{\circ}^{4}\left(s_{2}\right)\) and \(C_{\circ}^{4}\left(s_{2}+\ell\right)\) are determined as
\[C_{\circ}^{4}\left(s_{2}\right)=\begin{bmatrix}+99.21&+0.011&+0.000&+0.000\\ +0.011&+49.25&+0.000&+0.000\\ +0.000&+0.000&+29.95&+0.009\\ +0.000&+0.000&+0.009&+163.2\end{bmatrix} \tag{11}\]
and
\[C_{\circ}^{4}\left(s_{2}+\ell\right)=\begin{bmatrix}+99.22&+0.013&+0.000&+0.000 \\ +0.013&+49.25&+0.000&+0.000\\ +0.000&+0.000&+29.95&+0.012\\ +0.000&+0.000&+0.012&+163.2\end{bmatrix}\,, \tag{12}\]
with \(\varepsilon_{x}=\varepsilon_{y}=69.90\) mm mrad for both of them. The transport matrix along the channel (one cell) is determined as
\[\mathfrak{T}_{\mathrm{sc}}^{\circ}\left(\mathfrak{R}^{4}\right)=\begin{bmatrix} +0.463&+1.258&+0.000&+0.000\\ -0.642&+0.463&+0.000&+0.000\\ +0.000&+0.000&+0.463&+0.380\\ +0.000&+0.000&-2.069&+0.463\end{bmatrix}\,, \tag{13}\]
with phase advances of \(\mu_{x}^{\diamond}=62.41^{\circ}\) and \(\mu_{y}^{\diamond}=62.42^{\circ}\). The rms-beam sizes along the channel are plotted in Fig. 9.
## Appendix C Preliminary design of matching section
As mentioned previously, detailed provision of the 4D-matching beam line with space charge is a hard task that is beyond the scope of this paper. However, this section shall sketch a conceptual approach to obtain a corresponding layout. It is drawn schematically in Fig. 10 and it comprises three sections.
Sections \(a\) and \(c\) each comprise five rotated quadrupoles being separated by solenoids. Within these sections the beam is coupled. The section \(b\) in between comprises just four regular quadrupoles, and the beam along this section is fully decoupled.
The provision of the full matching beam line starts with determination of the settings of section \(c\). It uses the known periodic solution with space charge at the beginning of the periodic channel at position \(s_{2}\). Its according beam moments matrix \(C^{4}\left(s_{2}\right)\) is transported backwards to position \(s_{2}^{*}\). This backward transportation is done such that the resulting beam is fully decoupled at \(s_{2}^{*}\). The required settings are denoted as \(\S^{c}\) and they comprise the quadrupole strengths, rotation angles, and solenoid strengths. These parameters are obtained through an appropriate numerical routine (_Minimize_ of MATHCAD for instance). The according backward transport matrix is denoted as \(\mathfrak{R}_{c}^{-1}\).
Within the second step, the settings of section \(a\) are determined numerically in order to decouple the beam delivered at the effective half solenoid's exit at position \(s_{1}\). The according transport matrix is denoted as \(\mathfrak{R}_{a}\) and it provides for a decoupled beam at position \(s_{1}^{*}\). Its settings are summarized as \(\S^{a}\).
Finally, the matching beam line is completed by an appropriate section \(b\) modeled by the transport matrix \(\mathfrak{R}_{b}\), that just provides for the matching between the two uncoupled beam matrices at \(s_{1}^{*}\) and \(s_{2}^{*}\). The transport matrix of the complete matching line hence reads as
\[\mathfrak{R}=\mathfrak{R}_{c}\cdot\mathfrak{R}_{b}\cdot\mathfrak{R}_{a}\,. \tag{14}\]
The maximum strength of the regular (rotated) quadrupoles is about 1 T/m.
Figure 10: Conceptual matching beam line including rotated quadrupoles and solenoids. Strengths and effective lengths of the solenoid are set to \(B_{s}=0.1\) T and \(L=0.25\) m. The drift length between solenoids and rotated quadrupoles is 0.05 m.
Figure 9: Transverse rms-beam sizes of a uncoupled 10 mA proton beam along a regular FODO quadrupole channel (twelve cells) as obtained from rms-tracking (blue) and extracted from BEAMPATH particle-tracking simulations (red).
In the following, the individual transport matrices are stated explicitly
\[\mathfrak{R}_{a}=\begin{bmatrix}+0.804&+0.615&-0.797&+0.810\\ +0.375&+0.874&-0.989&+0.343\\ -1.078&-1.421&-1.518&+0.940\\ +1.760&+1.832&+2.018&-1.562\end{bmatrix}, \tag{30}\]
\[\mathfrak{R}_{b}=\begin{bmatrix}-2.182&+1.230&+0.000&+0.000\\ +0.551&-0.769&+0.000&+0.000\\ +0.000&+0.000&+2.131&+1.047\\ +0.000&+0.000&+1.658&+1.283\end{bmatrix}, \tag{31}\]
\[\mathfrak{R}_{c}=\begin{bmatrix}-0.048&+1.237&+0.469&+1.417\\ -0.296&-0.398&-0.238&+0.591\\ +0.034&-0.857&+0.204&+0.616\\ +0.681&+0.916&-0.344&+0.853\end{bmatrix}. \tag{32}\]
Corresponding transverse rms-beam sizes from positions \(s_{0}\) to \(s_{2}+\ell\) are shown in Fig. 11.
|
2301.13524 | Quantum contextual bandits and recommender systems for quantum data | We study a recommender system for quantum data using the linear contextual
bandit framework. In each round, a learner receives an observable (the context)
and has to recommend from a finite set of unknown quantum states (the actions)
which one to measure. The learner has the goal of maximizing the reward in each
round, that is the outcome of the measurement on the unknown state. Using this
model we formulate the low energy quantum state recommendation problem where
the context is a Hamiltonian and the goal is to recommend the state with the
lowest energy. For this task, we study two families of contexts: the Ising
model and a generalized cluster model. We observe that if we interpret the
actions as different phases of the models then the recommendation is done by
classifying the correct phase of the given Hamiltonian and the strategy can be
interpreted as an online quantum phase classifier. | Shrigyan Brahmachari, Josep Lumbreras, Marco Tomamichel | 2023-01-31T10:17:53Z | http://arxiv.org/abs/2301.13524v1 | # Quantum contextual bandits and
###### Abstract
We study a recommender system for quantum data using the linear contextual bandit framework. In each round, a learner receives an observable (the context) and has to recommend from a finite set of unknown quantum states (the actions) which one to measure. The learner has the goal of maximizing the reward in each round, that is the outcome of the measurement on the unknown state. Using this model we formulate the low energy quantum state recommendation problem where the context is a Hamiltonian and the goal is to recommend the state with the lowest energy. For this task, we study two families of contexts: the Ising model and a generalized cluster model. We observe that if we interpret the actions as different phases of the models then the recommendation is done by classifying the correct phase of the given Hamiltonian and the strategy can be interpreted as an online quantum phase classifier.
## I Introduction
Recommender systems are a class of online reinforcement learning algorithms that interact sequentially with an environment suggesting relevant items to a user. During the last decade, there has been an increasing interest in online recommendation techniques due to the importance of advertisement recommendation for e-commerce websites or the rise of movies and music streaming platforms [1; 2]. Among different settings for recommender systems, in this work, we focus on the contextual bandit framework applied to the recommendation of quantum data. The contextual bandit problem is a variant of the multi-armed bandit problem where a learner at each round receives a context and given a set of arms (also called actions) has to decide the best action using the context information. After selecting an action the learner will receive a reward and for the next rounds, they will use the previous information of contexts and rewards in order to make their future choices. As in the classical multi-armed bandit problem, the learner has to find a balance between exploration and exploitation; exploration refers to trying different actions in order to eventually learn the ones with the highest reward and exploitation refers to selecting the actions that apparently will give the highest reward immediately. For a comprehensive review of bandit algorithms, we refer to the book by Lattimore and Szepesvari [3]. Some real-life applications [4] of bandits include clinical trials [5], dynamic pricing [6], advertisement recommendation [7] or online recommender systems [8; 9]. As an example, in [8] a news article recommender system was considered where the context is given by the user features, the actions are the articles to recommend and the reward is modeled as a binary outcome indicating whether or not the user clicks on the recommended article.
Quantum algorithms for the classical multi-armed bandit problem have been studied for the settings of best-arm identification [10; 11], exploration-exploitation with stochastic environments [12] (uncorrelated and linear correlated actions) and adversarial environments [13]. Also, a quantum neural network approach was considered in [14] for a simple best-arm identification problem. A quantum algorithm for a classical recommender system was considered in [15] claiming an exponential speedup over known classical algorithms but later in [16] it was proven that the price of the speedup comes from the assumptions of the quantum state preparation part and argued that under related classical assumptions a classical algorithm can also achieve the speedup. There are other more general reinforcement learning frameworks beyond bandits where actions affect the rewards in the long term such as Markov decision process. The quantum generalization of this framework has been considered in [17; 18], and although our model of study falls into their class we can derive more concrete results since we study a specific setting.
We are interested in studying a recommender system for quantum data that is modeled by a set of
unknown quantum processes- which is called the _environment_, and a set of tasks to perform using these quantum processes- which is called a _context set_. A learner interacts sequentially with the environment receiving at each round a task from the context set and then choosing the best quantum process to perform this task. For example, we could model the environments as a set of noisy quantum computers, the context set as a set of different quantum algorithms, and then at each round, the learner is given a quantum algorithm to run and their goal is to recommend the best quantum computer to do this task. We note that this model exemplifies the bandit exploration-exploitation trade-off since the learner has to try (explore) the different quantum computers in order to decide the best one but at the same time has to choose the best one (exploitation) to perform the task. This trade-off is interesting in a practical scenario because it captures settings where online decisions are important and or they have some associated cost that makes the learner always try to perform optimally. In our example, one could think that using a quantum computer costs money for the learner, so at each stage, they always want to select the ones that will output the best solutions.
In our work, we extend the setting considered in [19] where they studied the exploration-exploitation trade-off of learning properties of quantum states. In our model, the environment is a set of unknown quantum states, the context set is a (finite or infinite) set of observables and at each round the learner receives an observable and has to perform a measurement on one of the unknown quantum states (the recommendation) aiming to maximize its outcome. We define this problem as the _quantum contextual bandit_ (QCB) and we note that it falls into the class of linear contextual bandits [20; 21; 22]. The QCB is the basic framework where we formulate our recommender system for quantum data. We use as a figure of merit the regret, which is the cumulative sum of the difference between the expected outcome of the best and selected action at each round. Finding a strategy that minimizes regret implies finding the mentioned balance between exploration and exploitation of the different actions. As a concrete recommendation task captured by the QCB model, we consider the _low energy quantum state recommendation problem_. In this problem, at each round, the learner receives a quantum Hamiltonian and has to recommend from the environment the state with the lowest energy. The ground state preparation problem is an important ingredient of NISQ algorithms [23] and our model could be useful in order to implement an online recommendation algorithm that helps the learner choose the best ansatz for their energy minimization task when they have multiple problems to solve. One of the advantages of using bandit algorithms for this task is that they do not need to
Figure 1: Sketch of a recommender system for quantum data. The learner receives sequentially quantum contexts and feed them to the classical processing system. The context is also fed to the measurement system. The classical processing system uses the information about the context to pick one of the quantum processes (no information regarding these processes are known besides from measurements). The chosen quantum process is applied to the measurement system, and the measurement outcome is fed to the classical processing and is added to the cumulative reward.
reconstruct the whole \(d\)-dimensional state, just the relevant part for the recommendation which depends on the structure of the context set. In order to do that we combine a Gram-Schmidt procedure with classical linear bandit strategies. This allows our algorithm to store low-dimensional approximations of the unknown quantum states without prior knowledge of the context set. We also perform some numerical studies of the scaling of the regret for the cases where the context set is an Ising model and a generalized cluster model studied in [24]. For these models, we propose unknown actions for the algorithm corresponding to ground states located at different phases, and then for each context received by the algorithm we associate each action with a different phase and we reproduce a ground state phase diagram. We observe that the recommendation of the algorithm is done approximately by classifying the different phases of the studied models and we are able to clearly distinguish them in the phase diagram.
The rest of the paper is organized as follows: in Section II, we establish the mathematical model for the quantum contextual bandit, and then define the notation used throughout the paper; in the next section, Section III we prove the lower bound on a performance metric (expected regret, which we define in Section II) over all possible algorithms. In Section IV we review the linear Upper Confidence Bound algorithm. In Section V we describe the low-energy recommendation system and adapt the LinUCB algorithm to this setting. We illustrate the efficiency of the algorithm through simulations of different context sets.
## II The model
First, we introduce some notation in order to define our model and present our results. We define \([T]=\{1,...,T\}\) for \(T\in\mathbb{N}\). Let \(\mathcal{S}_{d}=\{\rho\in\mathbb{C}^{d\times d}:\rho\geq 0\wedge\mathrm{Tr}(\rho)=1\}\) denote the set of positive semi-definite operators with unit trace, i.e., _quantum states_ that act on a \(d\)-dimensional Hilbert space \(\mathbb{C}^{d}\). Moreover, _observables_ are Hermitian operators acting on \(\mathbb{C}^{d}\), collected in the set \(\mathcal{O}_{d}=\{O\in\mathbb{C}^{d\times d}:O^{\dagger}=O\}\). We denote real \(d\)-dimensional column vectors as \(\mathbf{v}\) and the inner product of two of them \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{d}\) as \(\mathbf{u}^{\top}\mathbf{v}\), where \(\mathbf{u}^{\top}\) denotes the transpose of \(\mathbf{u}\), which is a row vector. We use \(\|\cdot\|_{2}\) in order to denote the 2-norm of a real vector. For an \(n\)-qubit system with Hilbert space dimension \(d=2^{n}\), we denote by \(X_{i},Y_{i}\), and \(Z_{i}\) the \(x,y,z\) Pauli operators acting on the \(i\)-th qubit (\(1\leq i\leq n\)). A Pauli observable can be expressed as the \(n\)-fold tensor product of the \(2\times 2\) Pauli matrices, i.e., it is an element of the set \(\{I,X,Y,Z\}^{\otimes n}\setminus\{I^{\otimes n}\}\). Note that there are \(4^{n}-1\) such observables, they are mutually orthogonal, and therefore they form a basis, which we refer to as the _Pauli basis_ henceforth.
The definition of our model takes some of the conventions used for the multi-armed quantum bandit (MAQB) problem [19].
**Definition 1** (Quantum contextual bandit).: Let \(d\in\mathbb{N}\). A \(d\)-dimensional _quantum contextual bandit_ is given by a set of observables \(\mathcal{C}=\{O_{c}\}_{c\in\Omega_{c}}\subseteq\mathcal{O}_{d}\) that we call the _context set_, \((\Omega_{\mathcal{C}},\Sigma_{\mathcal{C}})\) is a measurable space and \(\Sigma_{\mathcal{C}}\) is a \(\sigma\)-algebra of subsets of \(\Omega_{\mathcal{C}}\). The bandit is in an _environment_, a finite set of quantum states \(\gamma=\{\rho_{1},\rho_{2},\cdots,\rho_{k}\}\subset\mathcal{S}_{d}\), that it is unknown. The quantum contextual bandit problem is characterized by the tuple \((\mathcal{C},\gamma)\).
Given the environment \(\gamma\) such that \(|\gamma|=k\) we define the _action set_\(\mathcal{A}=\{1,...,k\}\) as the set of indices that label the quantum states \(\rho_{i}\in\gamma\) in the environment. For every observable \(O_{c}\in\mathcal{C}\) the spectral decomposition is given by
\[O_{c}=\sum_{i=1}^{d_{c}}\lambda_{c,i}\Pi_{c,i}, \tag{1}\]
where \(\lambda_{c,i}\in\mathbb{R}\) denote the \(d_{c}\leq d\) distinct eigenvalues of \(O_{c}\) and \(\Pi_{c,i}\) are the orthogonal projectors on the respective eigenspaces. For each action \(a\in\mathcal{A}\) we define the reward distribution with outcome \(R\in\mathbb{R}\) as the conditional probability distribution associated of performing a measurement using \(O_{c}\) on \(\rho_{a}\) given by Born's rule
\[Pr\left[R=r|A=a,O=O_{c}\right]=P_{\rho_{a}}(r|a,c)=\begin{cases} \mathrm{Tr}(\rho_{a}\Pi_{c,i})\text{ if }r=\lambda_{c,i},\\ 0\text{ else.}\end{cases} \tag{2}\]
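As a small illustration of the reward model of Eq. (2), the following sketch samples a measurement outcome of an observable on a state; the function and variable names are illustrative, and degenerate eigenvalues are handled only in the simplest way (they contribute several entries with the same outcome value).

```python
import numpy as np

def sample_reward(rho, O, rng=None):
    """Sample an eigenvalue of O with the Born probabilities of Eq. (2)."""
    rng = np.random.default_rng() if rng is None else rng
    evals, evecs = np.linalg.eigh(O)                     # spectral decomposition, Eq. (1)
    # Born probabilities <v_i| rho |v_i> for each eigenvector v_i (columns of evecs).
    probs = np.real(np.einsum('ji,jk,ki->i', evecs.conj(), rho, evecs))
    probs = np.clip(probs, 0.0, None)
    probs = probs / probs.sum()
    return rng.choice(evals, p=probs)
```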
With the above definitions, we can explain the learning process. The learner interacts sequentially with the QCB over \(T\) rounds such that for every round \(t\in[T]\):
1. The learner receives a context \(O_{c_{t}}\in\mathcal{C}\) from some (possibly unknown) probability measure \(P_{\mathcal{C}}:\Sigma_{\mathcal{C}}\rightarrow[0,1]\) over the set \(\Omega_{\mathcal{C}}\).
2. Using the previous information of received contexts, actions played, and observed rewards the learner chooses an action \(A_{t}\in\mathcal{A}\).
3. The learner uses the context \(O_{c_{t}}\) and performs a measurement on the unknown quantum state \(\rho_{A_{t}}\) and receives a reward \(R_{t}\) sampled according to the probability distribution (2).
We use the index \(c_{t}\in[m]\) to denote the observable \(O_{c_{t}}\) received at round \(t\in[T]\). The strategy of the learner is given by a set of (conditional) probability distributions \(\pi=\{\pi_{t}\}_{t\in\mathbb{N}}\) (policy) on the action index set \([k]\) of the form
\[\pi_{t}(a_{t}|a_{1},r_{1},c_{1},...,a_{t-1},r_{t-1},c_{t-1},c_{t}), \tag{3}\]
defined for all valid combinations of actions, rewards, and contexts \((a_{1},r_{1},c_{1},...,a_{t-1},r_{t-1},c_{t-1})\) up to time \(t-1\). Then, if we run the policy \(\pi\) on the environment \(\gamma\) over \(T\in\mathbb{N}\) rounds, we can define a joint probability distribution over the set of actions, rewards, and contexts as
\[P_{\gamma,\mathcal{C},\pi}(a_{1},R_{1},C_{1},...,a_{T},R_{T},C_{T})=\int_{C_{T}}\int_{R_{T}}\ldots\int_{C_{1}}\int_{R_{1}}\prod_{t=1}^{T}\pi_{t}(a_{t}|a_{1},r_{1},c_{1},...,a_{t-1},r_{t-1},c_{t-1},c_{t})\times\] \[\times P_{\mathcal{C}}(dc_{1})P_{\rho_{a_{1}}}(dr_{1}|a_{1},c_{1})\cdots P_{\mathcal{C}}(dc_{T})P_{\rho_{a_{T}}}(dr_{T}|a_{T},c_{T}). \tag{4}\]
Thus, the conditioned expected value of reward \(R_{t}\) is given by
\[\mathbb{E}_{\gamma,\mathcal{C},\pi}[R_{t}|A_{t}=a,O_{c_{t}}=O_{c}]=\mathrm{Tr }(\rho_{a}O_{c}), \tag{5}\]
where \(\mathbb{E}_{\gamma,\mathcal{C},\pi}\) denotes the expectation value over the probability distribution (4). The goal of the learner is to maximize its expected cumulative reward \(\sum_{t=1}^{T}\mathbb{E}_{\gamma,\mathcal{C},\pi}\left[R_{t}\right]\) or, equivalently, to minimize the _cumulative expected regret_
\[\mathrm{Regret}_{T}^{\gamma,\mathcal{C},\pi}=\sum_{t=1}^{T}\mathbb{E}_{\gamma,\mathcal{C},\pi}\left[\max_{\rho_{i}\in\gamma}\mathrm{Tr}(\rho_{i}O_{c_{t}})- R_{t}\right]. \tag{6}\]
For a given action \(a\in\mathcal{A}\) and context \(O_{c}\in\mathcal{C}\) the _sub-optimality gap_ is defined as
\[\Delta_{a,O_{c}}=\max_{i\in\mathcal{A}}\mathrm{Tr}(\rho_{i}O_{c})-\mathrm{Tr }(\rho_{a}O_{c}). \tag{7}\]
Note that the learner could try to learn the distribution of contexts \(P_{\mathcal{C}}\), however, this will not make a difference in minimizing the regret. The strategy of the learner has to be able to learn the relevant part of the unknown states \(\{\rho_{a}\}_{a=1}^{k}\) that depend on the context set and at the same time balance the tradeoff between exploration and exploitation. We note that it is straightforward to generalize the above setting to continuous sets of contexts \(\mathcal{C}\). In order to do that we need a well-defined probability distribution \(P_{\mathcal{C}}(O)dO\) over the context set \(\mathcal{C}\).
## III Lower Bound
In this section, we derive a lower bound for the cumulative expected regret by finding a QCB that is hard to learn for any strategy. Our regret lower bound proof for the QCB model relies on a reduction to a classical multi-armed stochastic bandit given in Theorem 5.1 in [25]. Now we briefly review the multi-armed stochastic bandit problem.
The _multi-armed stochastic bandit_ problem is defined by a discrete set of probability distributions \(\nu=(P_{a}:a\in[k])\) that is called the environment and \(\mu_{i}\) is the mean of the probability distribution \(P_{i}\) for \(i\in[k]\). The learner interacts sequentially with the bandit selecting at each round \(t\in[T]\) an action \(a\in[k]\) and sampling a reward \(R_{t}\) distributed accordingly to \(P_{a}\). The expected cumulative regret is defined as
\[\mathrm{Regret}_{T}^{\nu,\pi}=\sum_{t=1}^{T}\max_{a\in[k]}\mu_{a}-\mathbb{E}_{ \nu,\pi}[R_{t}], \tag{8}\]
where \(\pi\) and \(\mathbb{E}_{\nu,\pi}\) are both defined analogously from the definitions of the previous section accordingly to this model. It is important to remark that in this setting the actions are independent, meaning that when the learner samples from one action then it cannot use this information to learn about other actions.
Using the above model we describe the multi-armed stochastic bandit studied in Theorem 5.1 in [25]. The bandit is constructed by defining an environment \(\nu=(P_{a}:a\in[k])\) for \(k\geq 2\) such that \(P_{a}\) are Bernoulli distributions for all \(a\in[k]\) with outcomes \(\{l_{1},l_{2}\}\). Then we set the distributions as follows: we choose an index \(i\in[k]\) uniformly at random and assign \(P_{i}(R=l_{1})=\frac{1+\Delta}{2}\) for some \(\Delta>0\) and \(P_{a}(R=l_{1})=\frac{1}{2}\) for \(a\neq i\). Thus, there is a unique best action corresponding to \(a=i\). Then, choosing \(\Delta=\epsilon\sqrt{\frac{k}{T}}\) for some small positive constant \(\epsilon\), for \(T\geq k\) the expected regret for any strategy will scale as
\[\mathrm{Regret}_{T}^{\nu,\pi}=\Omega(\sqrt{kT}). \tag{9}\]
**Theorem 2**.: _Consider a quantum contextual bandit with underlying dimension \(d=2^{n}\) and \(n\in\mathbb{N}\), context size \(c\geq 1\) and \(k\geq 2\) actions. Then, for any strategy \(\pi\), there exists a context set \(\mathcal{C}\), \(|\mathcal{C}|=c\), a probability distribution over the context set \(\mathcal{C}\)\(P_{\mathcal{C}}\) and an environment \(\gamma\in\mathcal{S}_{d}\) such that for the QCB defined by \((\mathcal{C},\gamma)\) the expected cumulative regret will scale as_
\[\mathrm{Regret}_{T}^{\gamma,\mathcal{C},\pi}=\Omega\left(\sqrt{kT}\cdot\min \big{\{}d,\sqrt{c}\big{\}}\right), \tag{10}\]
_for \(T\geq k\min\{c,d^{2}\}\)._
Proof.: We use a similar technique to [20, 22] in order to analyze the regret by dividing the problem into subsets of independent rounds. We start dividing the \(T\) rounds in \(c^{\prime}=\min\{c,d^{2}-1\}\) groups of \(T^{\prime}=\lfloor\frac{T}{c^{\prime}}\rfloor\) elements. We say that time step \(t\) belongs to group \(s\) if \(\lfloor\frac{t}{T^{\prime}}\rfloor=s\). We construct a context set \(\mathcal{C}\) by picking a set of \(c^{\prime}\) distinct Pauli observables (which is possible since the maximum number of independent Pauli observables is \(d^{2}-1\geq c^{\prime}\)), so \(\mathcal{C}=\{\sigma_{i}\}_{i=1}^{c^{\prime}}\). Recall that a Pauli observable is a \(n\)-fold tensor product of the 2 \(\times\) 2 Pauli matrices, thus the reward will be a binary outcome \(r_{t}\in\{-1,1\}\). Then, the context distribution works as follows: at each group \(s\) of rounds the learner will receive a different context \(\sigma_{s}\in\mathcal{C}\), so at group \(s\) the learner only receives \(\sigma_{s}\).
We want to build an environment such that for each group of rounds \(s\in[c^{\prime}]\) all probability distributions are uniform except one that is slightly perturbed. We associate each Pauli observable \(\sigma_{i}\) to one unique action \(a\in[k]\), and we do this association uniformly at random (each action can be associated with more than 1 Pauli observable). Then each action \(a\in[k]\) will have \(\{\sigma_{a,1},...,\sigma_{a,n_{a}}\}\) associated Pauli observables and we can construct the following environment \(\gamma=\{\rho_{a}\}_{a=1}^{k}\) where
\[\rho_{a}=\frac{I}{d}+\sum_{j=1}^{n_{a}}\frac{\Delta}{d}\sigma_{a,j}, \tag{11}\]
\(n_{a}\in\big{\{}0,1,...,d^{2}-1\big{\}}\), \(\sum_{a=1}^{k}n_{a}=c^{\prime}\) and \(\Delta\) is some positive constant. For every group \(s\in[c^{\prime}]\) the learner will receive a fixed context \(\sigma_{s}\in\mathcal{C}\) and there will be a unique action \(a^{\prime}\) with \(P_{\rho_{a^{\prime}}}(1|A_{t}=a^{\prime},s)=\frac{1}{2}+\frac{\Delta}{2}\) (probability of obtaining \(+1\)) and the rest \(a\neq a^{\prime}\) will have \(P_{\rho_{a}}(1|a,s)=\frac{1}{2}\) (uniform distributions). Thus, using that the contexts are independent (\(\mathrm{Tr}(\sigma_{i}\sigma_{j})=0\) for \(i\neq j\)) we can apply (9) independently to every group \(s\) and we obtain a regret lower bound \(\Omega(\sqrt{T^{\prime}k})=\Omega(\sqrt{\frac{Tk}{c^{\prime}}})\). Note that in order to apply (9) we need \(T^{\prime}\geq k\) or equivalently \(T\geq c^{\prime}k\). Thus, summing over all the \(c^{\prime}\) groups we obtain that the total regret scales as
\[\mathrm{Regret}_{T}^{\gamma,\mathcal{C},\pi}=\Omega\left(c^{\prime}\sqrt{\frac{ Tk}{c^{\prime}}}\right)=\Omega\left(\sqrt{kT}\cdot\min\big{\{}d,\sqrt{c} \big{\}}\right). \tag{12}\]
Algorithm
In this section, we review the linear model of multi-armed stochastic bandits and one of the main classical strategies that can be used to minimize regret in this model and also in the QCB model.
### Linear disjoint single context bandits and QCB
The classical setting that matches our problem is commonly referred to as linear contextual bandits [22] although it has received other names depending on the specific setting such as linear disjoint model [8] or associative bandits [21]. The setting that we are interested in uses discrete action sets, and optimal algorithms are based on upper confidence bounds (UCB). While these algorithms use the "principle of optimism in the face of uncertainty", there are other approaches like a Thompson sampling [26] algorithm, but they are not optimal for discrete action sets. We use the contextual linear disjoint bandit model from [8] where each action \(a\in[k]\) has an associated unknown parameter \(\theta_{a}\in\mathbb{R}^{d}\) and at each round \(t\) the learner receives a context vector \(\mathbf{c}_{t,a}\in\mathbb{R}^{d}\) for each action. Then, after selecting an action \(a\in[k]\), the sampled reward is
\[R_{t}=\mathbf{\theta}_{a}^{\top}\mathbf{c}_{t,a}+\eta_{t}, \tag{13}\]
where \(\eta_{t}\) is some bounded subgaussian [27] noise such that \(\mathbb{E}[R_{t}|A_{t}=a]=\mathbf{\theta}_{a}\cdot\mathbf{c}_{t,a}\).
In order to map the above setting to the \(d\)-dimensional QCB model \((\gamma,\mathcal{C})\) it suffices to consider a vector parametrization (similarly done for the MAQB [19]). We choose a set \(\{\sigma_{i}\}_{i=1}^{d^{2}}\) of independent Hermitian matrices and parametrize any \(\rho_{a}\in\gamma\) and \(O_{l}\in\mathcal{C}\) as
\[\rho_{a}=\sum_{i=1}^{d^{2}}\theta_{a,i}\sigma_{i},\quad O_{l}=\sum_{i=1}^{d^{ 2}}c_{l,i}\sigma_{i}, \tag{14}\]
where \(\theta_{a,i}=\mathrm{Tr}(\rho_{a}\sigma_{i})\) and \(c_{l,i}=\mathrm{Tr}(O_{l}\sigma_{i})\) and we define the vectors \(\mathbf{\theta}_{a}=(\theta_{a,i})_{i=1}^{d^{2}}\in\mathbb{R}^{d^{2}}\) and \(\mathbf{c}_{l}=(c_{l,i})_{i=1}^{d^{2}}\in\mathbb{R}^{d^{2}}\). Then we note that for the QCB model the rewards will be given by (13) with the restriction that, since we only receive one observable at each round, the context vector is constant among all actions. Thus, in our model, the rewards have the following expression
\[R_{t}=\mathbf{\theta}_{a}^{\top}\mathbf{c}_{t}+\eta_{t}. \tag{15}\]
We denote this classical model as _linear disjoint single context bandits_. In order to make clear when the classical real vectors parametrize an action \(\rho_{a}\in\gamma\) or a context \(O_{l}\in\mathcal{C}\) as in (14), we will use the notation \(\mathbf{\theta}_{\rho_{a}}\) and \(\mathbf{c}_{O_{l}}\) with respect to the standard Pauli basis.
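For small systems, the parametrization (14) can be computed by brute force; the sketch below uses the orthonormal basis \(\sigma_{i}=P_{i}/\sqrt{2^{n}}\) built from Pauli strings \(P_{i}\) (a normalization convention left implicit in (14)) and scales exponentially with the number of qubits, so it is meant only as an illustration.

```python
import numpy as np
from itertools import product

P2 = {'I': np.eye(2), 'X': np.array([[0, 1], [1, 0]]),
      'Y': np.array([[0, -1j], [1j, 0]]), 'Z': np.array([[1, 0], [0, -1]])}

def pauli_vector(A, n):
    """Coefficients Tr(A sigma_i) of Eq. (14) in the orthonormal n-qubit Pauli basis."""
    d = 2 ** n
    coeffs = []
    for labels in product('IXYZ', repeat=n):
        P = np.array([[1.0]])
        for l in labels:
            P = np.kron(P, P2[l])
        coeffs.append(np.real(np.trace(A @ P)) / np.sqrt(d))
    return np.array(coeffs)
```

With this convention, the expected reward (5) is simply the Euclidean inner product \(\mathbf{\theta}_{\rho_{a}}^{\top}\mathbf{c}_{O_{l}}\) of the two coefficient vectors.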
### Linear Upper Confidence Bound algorithm
Now we discuss the main strategy for the linear disjoint single context model (15) that is the LinUCB (linear upper confidence bound) algorithm [28; 29; 21; 30; 22]. We describe the procedure of LinUCB for selecting an action and we leave for the next section a complete description of the algorithm for the QCB setting.
At each time step \(t\), given the previous rewards \(R_{1},...,R_{t-1}\in\mathbb{R}\), selected actions \(a_{1},...,a_{t-1}\in[k]\) and observed contexts \(\mathbf{c}_{1},...,\mathbf{c}_{t}\in\mathbb{R}^{d}\) the LinUCB algorithm builds the _regularized least squares estimator_ for each unknown parameter \(\mathbf{\theta}_{a}\) that have the following expression
\[\tilde{\mathbf{\theta}}_{t,a}=V_{t,a}^{-1}\sum_{s=1}^{t-1}R_{s}\mathbf{c}_{s}\mathbb{I}\{a_{s}=a\}, \tag{16}\]
where \(V_{t,a}=I+\sum_{s=1}^{t-1}\mathbf{c}_{s}\mathbf{c}_{s}^{\top}\mathbb{I}\{a_{s}=a\}\). Then LinUCB selects the following action according to
\[a_{t+1}=\operatorname*{argmax}_{a\in[k]}\tilde{\mathbf{\theta}}_{t,a}^{\top}\mathbf{c}_{t}+\alpha\sqrt{\mathbf{c}_{t}^{\top}V_{t,a}^{-1}\mathbf{c}_{t}}, \tag{17}\]
where \(\alpha>0\) is a constant that controls the width of the confidence region on the direction of \(\mathbf{c}_{t,a}\). The idea behind this selection is to use an overestimate of the unknown expected value using an upper confidence bound. This is the principle behind UCB[31] which is the main algorithm that gives rise to this class of optimistic strategies. The value of the constant \(\alpha\) is chosen depending on the structure of the action set. In the next section, we will discuss the appropriate choice of \(\alpha\) for our setting. We note that this strategy can also be applied in an adversarial approach where the context is chosen by an adversary instead of sampled from some probability distribution.
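A minimal sketch of this update-and-select step, written here with generic NumPy helpers rather than the paper's later Algorithm 2, is the following; `V` and `b` are assumed to be per-action lists initialized to the identity matrix and the zero vector, respectively.

```python
import numpy as np

def linucb_choose(theta_hat, V, c, alpha):
    """Optimistic action selection of Eq. (17): estimated reward plus exploration bonus."""
    scores = [theta_hat[a] @ c + alpha * np.sqrt(c @ np.linalg.solve(V[a], c))
              for a in range(len(V))]
    return int(np.argmax(scores))

def linucb_update(V, b, a, c, r):
    """Update the statistics of the played action a and recompute the estimate of Eq. (16)."""
    V[a] = V[a] + np.outer(c, c)
    b[a] = b[a] + r * c
    return np.linalg.solve(V[a], b[a])
```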
The above procedure is shown to be sufficient for practical applications [8] but the algorithms achieving the optimal regret bound are SupLinRel[21] and BaseLinUCB[22]. They use a phase elimination technique that consists of playing at each round only actions that are highly rewarding, but still the main subroutine for selecting the actions is LinUCB. This technique is not the most practical for applications but it was introduced in order to derive rigorous regret upper bounds. For these strategies, if we apply them to a \(d\)-dimensional QCB bandit \((\gamma,\mathcal{C})\), they achieve the almost optimal regret bound of
\[\mathrm{Regret}_{T}^{\gamma,\mathcal{C},\pi}=O\left(d\sqrt{kT\ln^{3}(T^{2} \log(T))}\right). \tag{18}\]
The above bound comes from [22] and it is adapted to our setting [32] using the vector parametrization (14). Their regret analysis works under the normalization assumptions \(\|\mathbf{\theta}\|_{2}\leq 1\), \(\|\mathbf{c}_{t}\|_{2}\leq 1\) and the choice of \(\alpha=\sqrt{\frac{1}{2}\ln(2T^{2}k)}\). We note that it matches our lower bound (10) except for the logarithmic terms.
## V Low energy quantum state recommender system
In this section, we describe how the QCB framework can be adapted for a recommender system for low-energy quantum states. We consider a setting where the learner is given optimization problems in an online fashion and is able to encode these problems into Hamiltonians and also has access to a set of unknown preparations of (mixed) quantum states that they want to use in order to solve these optimization problems. The task is broken into several rounds; at every round, they receive an optimization problem and are required to choose the state that they will use for that problem. As a recommendation rule, we use the state with the lowest energy with respect to the Hamiltonian where the optimization problem is encoded. We denote this problem as the _low energy quantum state recommendation problem_. We note that our model focuses on the recommendation following the mentioned rule. After selecting the state the learner will use it for the optimization problem (for example the initial ansatz state of a variational quantum eigensolver), but that is a separate task. When a learner chooses an action, they must perform an energy measurement using the given Hamiltonian on the state corresponding to the chosen action. Then the measurement outcomes are used to model rewards, and their objective is to maximize the expected cumulative reward, i.e, the expectation on the sum of the measurement outcomes over all the rounds played. These measurements can be done fairly simply. Any Hamiltonian can be written as a linear combination of Pauli observables. Now by measuring each of these Pauli observables (since these measurements are conceivable, [33]), and taking the appropriately weighted sum of the measurement outcomes, we can simulate such a measurement. The QCB framework naturally lends itself to this model, where the Hamiltonians are the contexts, and the set of states that can be prepared reliably serve as the actions.
In this paper we study some important families of Hamiltonians -- specifically, the Ising and a generalized cluster model from [24], which are linear combinations of Pauli observables with nearest-neighbor interactions and for \(n\) qubits can be written as
\[H_{\mathrm{ising}}(h)=\sum_{i=1}^{n}(Z_{i}Z_{i+1}+hX_{i}), \tag{19}\] \[H_{\mathrm{cluster}}(j_{1},j_{2})=\sum_{i=1}^{n}(Z_{i}-j_{1}X_{i}X_{i+1}-j_{2}X_{i-1}Z_{i}X_{i+1}), \tag{20}\]
where \(h,j_{1},j_{2}\in\mathbb{R}\). In the Ising model, \(h\) corresponds to the external magnetic field. Specifically, we consider QCB with the following context sets
\[\mathcal{C}_{\text{Ising}}=\left\{H_{\text{ising}}(h):h\in\mathbb{R}\right\},\quad \mathcal{C}_{\text{cluster}}=\left\{H_{\text{cluster}}(j_{1},j_{2}):j_{1},j_{2 }\in\mathbb{R}\right\}. \tag{21}\]
Important families of Hamiltonians like the models discussed above show translation-invariance and are spanned by Pauli observables showing nearest-neighbor interactions, and as a result, span a low dimensional subspace. We illustrate the scheme described above through the example of the Ising Model contexts. The Pauli observables that need to be measured are \(\{X_{i}\}_{i\in[n]}\) and \(\{Z_{i}Z_{i+1}\}_{i\in[n]}\). These observables have 2 possible measurement outcomes, -1 and 1, and by the reward distribution of a Pauli observable \(M\) given by Born's rule (2) on a quantum state \(\rho\), the reward can be modeled as
\[R_{M,\rho}=2\text{Bern}\left(\frac{\text{Tr}(M\rho)+1}{2}\right)-1, \tag{22}\]
where \(\text{Bern}(x)\in\{0,1\}\) is a random variable with Bernoulli distribution with mean \(x\in[0,1]\). By performing such a measurement for all the Pauli observables and adding the rewards, the reward for \(\mathcal{C}_{\text{Ising}}\) is
\[R_{\text{Ising}}=-h\sum_{M\in X_{i},i\in[n]}R_{M,\rho}-\sum_{M^{\prime}\in Z_{ i}Z_{i+1},i\in[n]}R_{M^{\prime},\rho}, \tag{23}\]
where we took the negative of the sum of the measurements because we are interested in a recommender system for the lowest energy state. A similar formulation applies to the QCB with generalized cluster Hamiltonian contexts.
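The reward model of Eqs. (22)-(23) is easy to simulate once the relevant Pauli expectation values on the chosen state are known; the sketch below assumes that `exp_X[i]` \(=\mathrm{Tr}(X_{i}\rho)\) and `exp_ZZ[i]` \(=\mathrm{Tr}(Z_{i}Z_{i+1}\rho)\) have been precomputed, and the function names are illustrative.

```python
import numpy as np

def ising_reward(h, exp_X, exp_ZZ, rng=None):
    """Sample the reward of Eq. (23) for an Ising context with field strength h."""
    rng = np.random.default_rng() if rng is None else rng

    def pauli_outcome(mean):            # two-outcome measurement of Eq. (22)
        return 2 * rng.binomial(1, (mean + 1.0) / 2.0) - 1

    r_x = sum(pauli_outcome(m) for m in exp_X)
    r_zz = sum(pauli_outcome(m) for m in exp_ZZ)
    return -h * r_x - r_zz              # negative sum: low energy gives high reward
```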
In the rest of this section, we illustrate a modified LinUCB algorithm for the QCB setting. Then we implement this recommender system where the contexts are Hamiltonians belonging to the Ising and a generalized cluster models (21), and demonstrate our numerical analysis of the performance of the algorithm by studying the expected regret. We also demonstrate that depending on the action set, the algorithm is able to approximately identify the phases of the context Hamiltonians.
### Gram-Schmidt method
Similarly to the task of shadow tomography [34] and classical shadows [35], we do not need to reconstruct the full quantum states since the algorithm only has to predict the trace between the contexts and the unknown quantum states. Thus, the LinUCB algorithm only has to store the relevant part of the estimators for this computation. As the measurement statistics depend only on the coefficients corresponding to the Pauli observables spanning the observables in the context set, only those Pauli observables in the expansion of the estimators are relevant. This means that our algorithm can operate in a space with a smaller dimension than the entire space spanned by \(n\) qubits, which has a dimension that is exponential in the number of qubits.
In order to exploit this property to improve the space complexity of the LinUCB algorithm, we use the Gram-Schmidt procedure in the following way. At any round, a basis for the vector parameterizations (as shown in (14)) of all the previously received contexts is stored. If the incoming vector parameterization of the context is not spanned by this basis, the component of the vector orthogonal to the space spanned by this set is found by a Gram-Schmidt orthonormalization-like process, and this component is added to the set, after normalization. Therefore, at any round, there will be a list of orthonormal vectors that span the subspace of all the vector parameterizations of the contexts received so far, and the size of the list will be equal to the dimension of the subspace, which we call _effective dimension_, i.e,
\[d_{\text{eff},t}=\dim\left(\mathrm{span}\{O_{c_{1}},\ldots,O_{c_{t}}\}\right). \tag{24}\]
From now on we will omit the subscript for the time step \(t\) and simply denote the effective dimension as \(d_{\text{eff}}\). Instead of feeding the context vectors directly, for any incoming context vector, we construct \(d_{\text{eff}}\)-dimensional vectors, whose \(i^{th}\) term is the inner product of the context vector and the \(i^{th}\) basis vector. In case the incoming vector is not spanned by the basis, we first update the list by a Gram-Schmidt
procedure (which will result in an addition of another orthonormal vector to the list, and an increase in \(d_{\text{eff}}\) by 1), and then construct a \(d_{\text{eff}}\)-dimensional vector as described before. This vector is fed to the \(\mathsf{LinUCB}\) algorithm. The Gram-Schmidt procedure is stated in Algorithm 1 and the modified \(\mathsf{LinUCB}\) algorithm is stated explicitly in Algorithm 2. The efficiency of this method is well illustrated in the case where all the contexts are local Hamiltonians. As an example, we discuss the case of generalised cluster Hamiltonians. Note that the space complexity of the standard QCB framework is \(O(kd^{2})\), where \(k\) is the number of actions and \(d\) is the dimension of the vector parameterizations of the contexts. In the standard \(\mathsf{LinUCB}\) technique, the context vectors \(\mathbf{c_{t}},t\in[T]\) would be \(4^{n}\)-dimensional, where \(n\) is the number of qubits the Hamiltonian acts on, in which case the space complexity of the algorithm is \(O(k4^{2n})\). In our studies, the contexts are Ising Hamiltonians and generalised cluster Hamiltonians (21), with \(d_{\text{eff}}\leq 2\) and \(d_{\text{eff}}\leq 3\) respectively. Since the vectors fed into the modified \(\mathsf{LinUCB}\) are \(d_{\text{eff}}\)-dimensional, the space complexity is \(O(kd_{\text{eff}}^{2})\), i.e., \(O(4k)\) and \(O(9k)\) respectively.
```
Input \([\mathbf{c},\{V_{a}\}_{a\in\mathcal{A}},\{\mathbf{b}_{a}\}_{a\in\mathcal{A}},\text{Cbasis}]\)
\(\mathbf{v}_{\mathbf{c}}\leftarrow(\,)\)  (empty coordinate vector of \(\mathbf{c}\) in the current basis)
for \(\mathbf{v}\) in Cbasis do
  \(\mathbf{v}_{\mathbf{c}}\leftarrow\mathbf{v}_{\mathbf{c}}\oplus(\mathbf{v}^{\top}\mathbf{c})\)
  \(\mathbf{c}\leftarrow\mathbf{c}-(\mathbf{v}^{\top}\mathbf{c})\mathbf{v}\)
endfor
if \(\mathbf{c}\neq\mathbf{0}\) then
  \(\mathbf{v}_{\mathbf{c}}\leftarrow\mathbf{v}_{\mathbf{c}}\oplus\|\mathbf{c}\|_{2}\)
  Add \(\mathbf{c}/\|\mathbf{c}\|_{2}\) to Cbasis
  for \(a=1,2,\ldots,K\) do
    Set \(V_{a}\leftarrow V_{a}\oplus I_{1}\), \(\mathbf{b}_{a}\leftarrow\mathbf{b}_{a}\oplus\mathbf{0}_{1}\)
  endfor
endif
Return \([\mathbf{c}^{\prime}=\mathbf{v}_{\mathbf{c}},\{V_{a}\}_{a\in\mathcal{A}},\{\mathbf{b}_{a}\}_{a\in\mathcal{A}},\text{Cbasis}]\)
```
**Algorithm 1** Gram-Schmidt Algorithm (\(\text{Gram}(\mathbf{c},V_{a},\mathbf{b}_{a},\text{Cbasis})\))
```
1:Input \(\alpha\in\mathbb{R}\)
2:Set \(\text{Cbasis}=[\;]\)
3:Set \(V_{a}=\mathbf{1},\mathbf{b}_{a}=\mathbf{0},\forall a\in\mathcal{A}\)
4:for\(t=1,2,\ldots\)do
5:\([\mathbf{c}^{\prime},\{V_{a}\}_{a\in\mathcal{A}},\{\mathbf{b}_{a}\}_{a\in\mathcal{A}},\text{Cbasis}]\leftarrow\text{Gram}(\mathbf{c}_{O_{t}},\{V_{a}\}_{a\in\mathcal{A}},\{\mathbf{b}_{a}\}_{a\in\mathcal{A}},\text{Cbasis})\)
6:for \(a\in\mathcal{A}\) do
7:\(\tilde{\mathbf{\theta}}_{\rho_{a}}\leftarrow V_{a}^{-1}\mathbf{b}_{a}\)
8:\(p_{t,a}\leftarrow\tilde{\mathbf{\theta}}_{\rho_{a}}^{\top}\mathbf{c}^{\prime}_{O_{t}}+\alpha\sqrt{\mathbf{c}^{\prime\top}_{O_{t}}V_{a}^{-1}\mathbf{c}^{\prime}_{O_{t}}}\)
9:endfor
10:Choose action \(\mathbf{a}_{t}=\text{argmax}_{a\in\mathcal{A}}\,p_{t,a}\);
11:Measure state \(\rho_{a_{t}}\) with \(O_{c_{t}}\) and observe reward \(R_{O_{t}}\)
12:Set \(V_{a_{t}}\gets V_{a_{t}}+\mathbf{c}^{\prime}{}_{O_{t}}\mathbf{c}^{\prime} {}_{O_{t}}^{\top}\)
13:Set \(\mathbf{b}_{a_{t}}\gets b_{a_{t}}+R_{O_{t}}\mathbf{c}^{\prime}{}_{O_{t}}\)
14:endfor
```
**Algorithm 2**\(\mathsf{LinUCB}\) with Gram-Schmidt
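For concreteness, the Gram-Schmidt bookkeeping of Algorithm 1 can be sketched in Python as follows; the function maintains the running orthonormal basis and returns the \(d_{\text{eff}}\)-dimensional coordinate vector that is fed to \(\mathsf{LinUCB}\) (the names are illustrative, and the padding of \(V_{a}\) and \(\mathbf{b}_{a}\) when \(d_{\text{eff}}\) grows is only indicated in a comment):

```python
import numpy as np

def gram_reduce(c, basis, tol=1e-10):
    """Project the context vector c onto the running orthonormal basis (cf. Algorithm 1).

    Returns the d_eff-dimensional coordinate vector fed to LinUCB and the
    (possibly extended) basis.  When the basis grows, the per-action matrices
    V_a and vectors b_a have to be padded with an identity row/column and a
    zero entry, which is omitted here.
    """
    c = np.asarray(c, dtype=float)
    coords = [float(v @ c) for v in basis]
    residual = c.copy()
    for co, v in zip(coords, basis):
        residual -= co * v
    norm = np.linalg.norm(residual)
    if norm > tol:                      # c is not yet spanned: extend the basis
        basis.append(residual / norm)
        coords.append(norm)             # d_eff grows by one
    return np.array(coords), basis
```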
### Phase classifier
In order to implement the numerical simulations we need to choose the environments for the QCB with context sets \(\mathcal{C}_{\text{Ising}}\) and \(\mathcal{C}_{\text{cluster}}\). Elements of both context sets are parameterized by tunable parameters. We study the performance of the recommender system by choosing a context probability distribution that is uniform on these parameters. We then choose the actions as ground states of Hamiltonians corresponding to the limiting cases (in terms of the parameters) of these models. To study the performance of our strategy beyond the expected regret (6), we also want to observe how the actions are chosen. For every action, we maintain a set containing all the Hamiltonians for which that action was chosen. We observed that almost all the elements in each of these sets belonged to the same phase of the Hamiltonian models.
In order to study the performance of the algorithm in this respect, we define the _classifier regret_ as
\[\text{ClassifierRegret}_{T}^{\gamma,\mathcal{C},\pi}=\sum_{t=0}^{T-1}\mathbb{I }\left[a_{t}\neq a_{\text{optimal},t}\right], \tag{25}\]
where \(a_{\text{optimal},t}=\text{argmax}_{a\in[k]}\operatorname{Tr}\left(O_{t}\rho_{a}\right)\), and \(O_{t}\in\mathcal{C}\) is the context observable received in the \(t^{\text{th}}\) round. Note that the above classifier regret is not guaranteed to be sublinear, as the expected regret is (18) for the \(\mathsf{LinUCB}\) strategy. This can be understood intuitively: consider a scenario where the bandit picks an action with a small sub-optimality gap (7); the expected regret then increases by only a very small amount, while the classifier regret still increases by one unit, since all misclassifications contribute equally to it. These are, however, theoretical worst-case scenarios, and the classifier regret is useful to study the performance of the algorithm in practice in our settings.
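As an illustrative sketch, both regrets can be accumulated alongside each other in a simulated run, assuming access to the exact values \(\operatorname{Tr}(O_{t}\rho_{a})\), which is only possible in simulation (the names are illustrative):

```python
import numpy as np

def regrets(chosen_actions, expvals):
    """Expected regret and classifier regret (Eq. (25)) of a simulated run.

    expvals[t, a] plays the role of Tr(O_t rho_a); chosen_actions[t] is the
    index of the action picked in round t.
    """
    expvals = np.asarray(expvals, dtype=float)
    chosen = np.asarray(chosen_actions, dtype=int)
    rounds = np.arange(len(chosen))
    best = expvals.argmax(axis=1)                               # a_optimal,t
    expected_regret = float((expvals.max(axis=1) - expvals[rounds, chosen]).sum())
    classifier_regret = int((chosen != best).sum())             # Eq. (25)
    return expected_regret, classifier_regret
```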
### Numerical simulations
Before we move into the specific cases, we note the importance of the choice of \(\alpha\) in Algorithm 2. While the theoretical analysis of the \(\mathsf{LinUCB}\) algorithm depends on the choice of \(\alpha\), in practice one can tune this value to observe a better performance. We primarily use the \(\alpha\) described in [3] (Chapter 19) given by
\[\alpha_{t}=m+\sqrt{2\log\left(\frac{1}{\delta}\right)+d\log\left(1+\frac{tL^{ 2}}{d}\right)}. \tag{26}\]
Here, \(L\) and \(m\) are upper bounds on the 2-norm of the action vectors and unknown parameter respectively, \(d\) is the dimension and \(\delta\) is once more a probability of failure.
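For reference, (26) can be computed with a one-line helper (the function name and default values are illustrative placeholders):

```python
import math

def alpha_t(t, d, L=1.0, m=1.0, delta=0.01):
    """Confidence width of Eq. (26); L and m bound the 2-norms of the action
    vectors and of the unknown parameter, delta is the failure probability."""
    return m + math.sqrt(2 * math.log(1 / delta) + d * math.log(1 + t * L ** 2 / d))
```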
Finally, while we study the performance of our algorithm in our simulations with estimates of expected regret and expected classifier regret, it is important to note that in an experimental setup the learner will only be able to measure the cumulative reward at every round. However, since these are simulations, we are able to study the regret as well, as these are standard metrics to gauge the performance of the algorithms. In the next subsection we discuss our simulations of the QCB bandit \((\gamma,\mathcal{C}_{\text{cluster}})\) model; the QCB bandit \((\gamma,\mathcal{C}_{\text{Ising}})\) is discussed in Appendix VI.
#### iii.3.1 Generalised Cluster Model
We study the performance of the recommender system for the QCB bandit \((\gamma,\mathcal{C}_{\text{cluster}})\), where the generalised cluster Hamiltonians [24] act on 10 qubits and 100 qubits respectively. We observe that the performance of the algorithm is not affected by the number of qubits, as the effective dimension of the context set remains unchanged, i.e., \(d_{\text{eff}}=3\). We study the expected regret and classifier regret for these two cases, and illustrate the system's performance in finding the phases of the generalised cluster Hamiltonians. This model was also studied in [36], where a quantum convolutional neural network was designed to classify quantum states across phase transitions. We chose 5 actions corresponding to approximate ground states of Hamiltonians that are the limiting cases of the generalised cluster model, i.e., generalised cluster Hamiltonians with parameters \(j_{1},j_{2}\) in (20), \(j_{1},j_{2}\rightarrow\{0,0\},\{0,\infty\},\{\infty,0\},\{0,-\infty\}\) and \(\{-\infty,0\}\). Note that these methods of approximating ground states are used only for simulation purposes.
Initially a steep growth in regret is observed, followed by a sudden slowdown. Looking closely at the plot below, we find that the regret indeed continues to grow, albeit at a slower pace. This can be explained by observing that the sub-optimality gap of the second-best action is quite small in comparison to the sub-optimality gaps of the rest of the actions. Initially the \(\mathsf{LinUCB}\) algorithm does not have enough information about the unknown parameters and has to play all actions, resulting in an exploration phase. However, at some point the bandit recognizes the "bad" actions, and plays either the best action or the action with a small sub-optimality gap most of the time - this is when the bandit has begun to balance
exploration and exploitation. This is illustrated by observing the growth of the regret before and after the first 50 rounds in the insets of Fig. 2.
In the beginning of this subsection, we mentioned that the recommendation system picks the same action for context Hamiltonians belonging to the same phase. We illustrate this in Figure 3. In the scatter plot, when a context generalised cluster Hamiltonian is received, a dot is plotted with the x-axis and y-axis coordinates corresponding to its parameters \(j_{1},j_{2}\) respectively. Depending on the action picked by the algorithm, we associate a color to the dot. The resulting plot is similar (though not identical) to the phase diagram of the generalised cluster Hamiltonian.
Figure 3: These plots illustrate how the recommender system identifies the phases of the generalised cluster Hamiltonian. The x- and y-axes represent the coupling coefficients of the generalised cluster Hamiltonian received as context. As in the Ising model simulations, we associate a color to each action. For any context \(H_{\rm cluster}(j_{1},j_{2})\) corresponding to any of the T rounds, one of these actions is picked by the algorithm. We plot the corresponding colored dot (blue for the ground state of \(H_{\rm cluster}(-\infty,0)\), orange for \(H_{\rm cluster}(0,\infty)\), red for \(H_{\rm cluster}(\infty,0)\), green for \(H_{\rm cluster}(0,-\infty)\) and purple for \(H_{\rm cluster}(0,0)\)) at the appropriate coordinates, for rounds that follow after the bandit has "learned" the actions, i.e., after the growth in regret has slowed down.
Figure 2: Plots of regret and classifier regret for the QCB bandit (\(\gamma,\mathcal{C}\)), where the Hamiltonians in \(\mathcal{C}\) are a specific form of generalised cluster models acting on 10 and 100 qubits respectively. The performance is not very different since \(d_{\rm eff}=3\) (24) in both cases. The action set is chosen to be approximate ground states of some generalised cluster Hamiltonians.
## Outlook
This work describes the first steps for recommending quantum data by implementing the bandit framework in a rigorous fashion for practical scenarios. We provide a recommender system based on the theory of linear contextual bandits and show that the upper and lower-bounds on the expected regret are tight except for logarithmic factors. We also demonstrate its efficiency in practice through simulations. Later, we show how such a system could also be used to recognise phases of Hamiltonians.
We restricted our attention to a model where the expected rewards follow a linear function in terms of the context and the unknown states. While the low energy quantum state recommendation problem uses the outcome of the measurement as a reward, one could think of other recommendation tasks with more complicated reward functions. Non-linear rewards have been studied in the bandit literature and go by the name of structured bandits [37; 38; 39]. This model could be a natural extension of the QCB for other recommendation tasks where the rewards are not in one-to-one correspondence with measurement outcomes. Going back to the general model, the environment is modeled by a set of unknown quantum processes, which in the QCB model we assumed to be a set of stationary unknown quantum states. In a more general scenario, we can consider environments that change with time due to some Hamiltonian evolution or noise interaction with an external environment. In the bandit literature, non-stationary environments were first considered in [40; 41], where each action was associated with a Markov chain, and in the restless bandit model [42], where the Markov chain associated with each action evolves with time. More recently, [43] studied a contextual bandit model with non-stationary environments. We expect that recommender systems for quantum data can also be extended to similar settings.
**Acknowledgements:** This research is supported by the National Research Foundation, Singapore and A*STAR under its CQT Bridging Grant and the Quantum Engineering Programme grant NRF2021-QEP2-02-P05.
|
2309.08040 | Gradient based Grasp Pose Optimization on a NeRF that Approximates Grasp
Success | Current robotic grasping methods often rely on estimating the pose of the
target object, explicitly predicting grasp poses, or implicitly estimating
grasp success probabilities. In this work, we propose a novel approach that
directly maps gripper poses to their corresponding grasp success values,
without considering objectness. Specifically, we leverage a Neural Radiance
Field (NeRF) architecture to learn a scene representation and use it to train a
grasp success estimator that maps each pose in the robot's task space to a
grasp success value. We employ this learned estimator to tune its inputs, i.e.,
grasp poses, by gradient-based optimization to obtain successful grasp poses.
Contrary to other NeRF-based methods which enhance existing grasp pose
estimation approaches by relying on NeRF's rendering capabilities or directly
estimate grasp poses in a discretized space using NeRF's scene representation
capabilities, our approach uniquely sidesteps both the need for rendering and
the limitation of discretization. We demonstrate the effectiveness of our
approach on four simulated 3DoF (Degree of Freedom) robotic grasping tasks and
show that it can generalize to novel objects. Our best model achieves an
average translation error of 3mm from valid grasp poses. This work opens the
door for future research to apply our approach to higher DoF grasps and
real-world scenarios. | Gergely Sóti, Björn Hein, Christian Wurll | 2023-09-14T22:00:18Z | http://arxiv.org/abs/2309.08040v1 | # Gradient based Grasp Pose Optimization on a NeRF that Approximates Grasp Success
###### Abstract
Current robotic grasping methods often rely on estimating the pose of the target object, explicitly predicting grasp poses, or implicitly estimating grasp success probabilities. In this work, we propose a novel approach that directly maps gripper poses to their corresponding grasp success values, without considering objectness. Specifically, we leverage a Neural Radiance Field (NeRF) architecture to learn a scene representation and use it to train a grasp success estimator that maps each pose in the robot's task space to a grasp success value. We employ this learned estimator to tune its inputs, i.e., grasp poses, by gradient-based optimization to obtain successful grasp poses. Contrary to other NeRF-based methods which enhance existing grasp pose estimation approaches by relying on NeRF's rendering capabilities or directly estimate grasp poses in a discretized space using NeRF's scene representation capabilities, our approach uniquely sidesteps both the need for rendering and the limitation of discretization. We demonstrate the effectiveness of our approach on four simulated 3DoF (Degree of Freedom) robotic grasping tasks and show that it can generalize to novel objects. Our best model achieves an average translation error of 3mm from valid grasp poses. This work opens the door for future research to apply our approach to higher DoF grasps and real-world scenarios.
Keywords:robotic grasping, neural scene representation, transfer learning
## 1 Introduction
Research in robotic grasping has explored various approaches such as analytic and data-driven, model-based and model-free, supervised, self supervised and reinforcement learning methods. These methods can be based on different types of sensor data, such as RGB or depth images, and can be designed for different types of grippers [10].
Most of these methods are based on object pose estimation, directly estimate a grasp pose or implicitly map grasp poses to their probability of success. However, if we observe ourselves while grasping an object, we might notice that we intuitively adjust our hand position to increase the chances of a successful grasp and to achieve a good grasp position ultimately. This suggests that the process of grasping can be modeled as an optimization problem that optimizes the pose of our hands to maximize the probability of a successful grasp.
In this work, we introduce a novel approach to robotic grasping. Leveraging VisionNeRF [12], a learned neural network model capable of capturing a 3D scene representation, we create a model that estimates the success of a grasp given a candidate pose. Unlike other - including NeRF-based - grasping methods which directly estimate grasp poses, our approach stands out by formulating grasp pose estimation as a continuous optimization problem. The goal is to maximize the likelihood of successful grasping through gradient-based optimization. We show the efficiency of our proposed approach on four simulated 3DoF robotic grasping tasks. We summarize our contributions as follows:
* We propose a method to explicitly map grasp candidates to their corresponding grasp success value.
* We show the efficacy of applying transfer learning to a trained VisionNeRF to obtain this explicit mapping.
* We propose a novel approach to find valid grasp poses by applying gradient based optimization on the learned grasp success estimator.
## 2 Related Work
### Data-driven Robotic Grasping
In recent years, data-driven methods have become the state of the art in the context of robotic object handling. Keypoint detection or dense descriptor-based methods are effective at learning successful grasp poses and can even generalize to object categories, but they often require a large amount of object-specific labeled data to achieve good performance [15, 11, 13, 18, 5]. End-to-end learning models that directly learn to map the robot's raw sensor input to a desired output offer a promising alternative in unstructured environments [19, 23, 28, 1, 3, 6, 9]. Most of these models directly propose suitable grasp candidates, or estimate the success probability of grasp poses and rank them. These latter models implicitly map grasp poses to success probability, limiting their ability to optimize grasp poses to iterative methods [20] that sample, evaluate, and re-sample grasp candidates to find better solutions. In contrast, our proposed method explicitly maps grasp poses to grasp success using a neural network, making it differentiable and enabling gradient-based optimization to refine the grasp pose.
### NeRFs and NeRF-based Robotic Grasping
Recently, differentiable scene representations, such as Neural Radiance Fields [17], have been increasingly used in the field of robotics, for grasping among
other applications. A NeRF maps a 5-degree-of-freedom (5DoF) pose to an RGB color vector and a so-called density. Color and density are then combined along camera rays via volumetric rendering in order to render novel views for scenes. Various extensions of the NeRFs have been developed for different applications, such as NeuS [21] for surface reconstruction or NeRF-W [16] on unconstrained photo collections of famous landmarks. Plenoctrees [26] have been proposed for fast rendering with NeRFs. PixelNeRF [27] and VisionNeRF [12] overcome the need for training a NeRF for each scene, by generalizing over multiple scenes given sparse observations.
Inverse Neural Radiance Fields [25] perform camera pose estimation by inverting a trained NeRF. Starting from an initial camera pose estimate, it uses gradient based optimization to minimize the residual between pixels rendered from an already-trained NeRF and pixels in an observed image. To estimate the 6DoF camera pose, iNeRF casts rays from the camera's perspective and samples points along them, to finally apply volumetric rendering to get the pixel values and thus the residual. This requires querying the NeRF with different 5DoF poses multiple times. In our method, we use a similar approach, but since we are only interested in estimating 3DoF poses (5DoF with a fixed direction), we can simply use the NeRF's output at 5DoF poses as an objective.
There are several successful methods that utilize variants of NeRFs for robotic grasping. Dex-NeRF [7] uses a NeRF-based model to render high-quality depth images of a scene, which are then fed to DexNet [14] to compute robust grasp poses. Evo-NeRF [8] is a similar method, but instead of focusing on improving the depth rendering, the grasp planner network is trained to perform well on the NeRF-rendered depth maps, and a different NeRF implementation is utilized to significantly improve training times. GraspNeRF [2] utilizes a multiview NeRF-based approach to estimate a truncated signed distance field in voxels to predict successful grasps. Another approach [24] uses a NeRF to learn dense object descriptors from visual observations, which are then used to track keypoints on objects and calculate grasp poses.
### Our Contribution
While these methods demonstrate the effectiveness of utilizing NeRFs in robotic grasping, they typically enhance existing grasp pose estimation techniques with NeRFs' rendering capabilities or directly estimate grasp poses in a discretized space using NeRFs. However, these methods do not fully exploit the potential of NeRFs for continuous optimization of the grasp pose.
Our method uniquely employs NeRFs to explicitly represent the mapping of grasp poses to grasp success probability. This approach enables a gradient-based optimization method to find optimal grasp poses, providing more fine-grained control over the optimization process. Furthermore, our explicit mapping of grasp poses to grasp success offers a natural representation of the problem, where the gradient directly depicts rigid transformations leading to more successful poses. We believe our method addresses the gaps in the current state of the art and introduces a fresh perspective to the field of robotic grasping.
## 3 Grasp Success Approximation and Optimization
Given an RGB observation of a tabletop scene, the goal of the proposed method is to detect 5-DoF grasps (e.g. with a suction cup) consisting of a position and a direction vector. We assume the camera intrinsics and extrinsics are known for the image. We formulate the 5-DoF grasp detection as an optimization problem that maximizes grasp success probability over gripper poses. We approximate the function that maps 5-DoF grasp poses \(g=(x,d)\in\mathbf{G}\), with \(x\) position and \(d\) direction, to their probability of success by the neural network \(\mathbf{\Theta}\). Since neural networks are differentiable, we can solve the problem
\[\max_{g\in\mathbf{G}}\mathbf{\Theta}(g,o) \tag{1}\]
by gradient based optimization methods, where \(o=(c,K,RT)\) is an observation containing a camera image with known intrinsics and extrinsics.
In this section we first describe the architecture of \(\mathbf{\Theta}\) and how we train it, then we describe the gradient based optimization. Note that although this formulation is valid for 5DoF grasps, we constrain ourselves to 3DoF grasps (position only, with fixed direction) in the evaluation.
### Grasp Success Approximation
NeRFs excel in novel view synthesis and are increasingly being applied in various other tasks that require some sort of scene representation. By using volumetric rendering to compute the loss during training, NeRFs are forced to learn how to consistently represent 3D scenes. In this paper, we demonstrate the potential of this representation for grasp success estimation.
#### 3.1.1 Architecture
In our approach we use a VisionNeRF [12], a generalized implementation of NeRFs capable of representing multiple scenes by conditioning on observed inputs. To achieve this, a Vision Transformer (ViT) [4] and a Convolutional Neural Network (CNN) are combined to extract global and local features from the input observation, the source image, which are then used to inform the color and density estimator. We denote this combination as \(\mathbf{\Omega}\). While standard NeRFs map a 5DoF pose \((x,d)\) (corresponding to a 3D point in the scene and the camera's perspective) to an RGB color vector and density, VisionNeRFs require an additional input: a single camera image \(c\) of the scene with known intrinsics \(K\) and extrinsics \(RT\).
The NeRF architecture consists of a sequence of ResNetMLP blocks (see [12], for more details) denoted as \(\mathbf{\Phi}\), and a final fully connected layer that outputs the color and density of the 3D point \(x\) given an observation direction \(d\). To inform this color and density estimator, the feature vector from \(\mathbf{\Omega}(c)\) at the projected position of the 3D point onto the camera image \(\pi_{K,RT}(x)\) is concatenated with encoded \(x\) and \(d\) vectors. We use a positional encoding typical for NeRFs:
\[\gamma(p)=(sin(2^{0}\pi p),cos(2^{0}\pi p),...,sin(2^{M-1}\pi p),cos(2^{M-1} \pi p)) \tag{2}\]
with \(M\) the number of frequency phases. Note, VisionNeRF only applies position encoding to the position vector and not the direction vector, but we also use it on the direction vector, just like the original NeRF implementation. This concatenated vector of \(\gamma(x)\), \(\gamma(d)\) and \(\mathbf{\Omega}(c)[\pi_{K,RT}(x)]\) is then fed into \(\mathbf{\Phi}\) and passed to the final fully connected layer. The output colors and densities of multiple points along a camera ray are then integrated using volumetric rendering to obtain pixel values, thus rendering the target image, as shown in Fig. 1.
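For reference, a minimal NumPy sketch of the encoding (2), applied componentwise to a position or direction vector, reads as follows (the choice \(M=10\) and the exact ordering of the sine and cosine features are illustrative):

```python
import numpy as np

def positional_encoding(p, M=10):
    """Eq. (2) applied componentwise to a position or direction vector p.

    Returns sin/cos features at M frequency octaves; the interleaving of the
    sine and cosine terms differs trivially from the ordering in Eq. (2).
    """
    p = np.asarray(p, dtype=float).ravel()
    angles = np.outer(p, (2.0 ** np.arange(M)) * np.pi)   # shape (len(p), M)
    return np.concatenate([np.sin(angles).ravel(), np.cos(angles).ravel()])
```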
To leverage the learned representation, we propose an extension to the VisionNeRF architecture. One potential issue is that the output of \(\mathbf{\Phi}\), which is primarily trained to approximate color and density, can be biased towards these features. To address this, we introduce skip connections after each ResNetMLP block in \(\mathbf{\Phi}\). By concatenating these skip connections with the output of \(\mathbf{\Phi}\), we create the input for our grasp success estimator module, denoted as \(\mathbf{\Psi}\). \(\mathbf{\Psi}\) consists of two ResNetMLP blocks and a final fully connected layer, which outputs a grasp success score. Fig. 1 shows the architecture of our model, but for visualization purposes we only depict the position and the direction of our grasp candidate \(g=(x,d)\) in the image. In reality we propagate four 5DoF poses through the network simultaneously, all along \(d\) and centered around \(x\) with 2.5mm spacing. We sum up the output of the grasp success estimator for these poses to obtain the final grasp success of the \(g\).
Figure 1: The structure of our proposed architecture: a VisionNeRF that estimates color and density for volumetric rendering and a grasp success estimator. Both process 5DoF poses \((x,d)\) and are informed by the extracted features from the camera image \(c\) that correspond to \(x\)
With these we can define the objective function of the optimization problem:
\[\mathbf{\Theta}(g,o)=\mathbf{\Psi}(\mathbf{\Phi}(\gamma(x),\gamma(d),\mathbf{ \Omega}(c)[\pi_{K,RT}(x)])) \tag{3}\]
with grasp candidate \(g=(x,d)\) and observation \(o=(c,K,RT)\).
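Schematically, and assuming the trained modules are available as callables Phi and Psi together with the precomputed feature map of \(\mathbf{\Omega}(c)\), a pixel-projection function pi and the encoding gamma (all of these names are placeholders rather than a fixed API), the objective (3) with the four-pose summation described above could be evaluated as in the following Python sketch; in practice this evaluation runs inside an automatic-differentiation framework so that gradients with respect to \(x\) are available:

```python
import numpy as np

def grasp_success(x, d, feature_map, K, RT, Phi, Psi, pi, gamma, spacing=2.5e-3):
    """Evaluate the objective Theta(g, o) of Eq. (3) for a grasp candidate g = (x, d).

    feature_map is the output of Omega(c); pi projects a 3D point into the image;
    Phi and Psi are the network modules.  The scores of the four poses spaced
    2.5 mm along d and centred on x are summed, as described above.
    """
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    offsets = (np.arange(4) - 1.5) * spacing          # symmetric spacing around x
    score = 0.0
    for t in offsets:
        xi = np.asarray(x, dtype=float) + t * d
        feat = feature_map[pi(xi, K, RT)]             # image feature at the projected pixel
        score += Psi(Phi(gamma(xi), gamma(d), feat))  # per-block skip connections omitted here
    return score
```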
#### 3.1.2 Training
To get the model to learn to represent the scene, we initially train \(\mathbf{\Omega}\) and \(\mathbf{\Phi}\) for novel view synthesis via volumetric rendering. The ViT in \(\mathbf{\Omega}\) is initialized with pretrained weights from [22]. For training we use the Adam optimizer with a warmup learning rate schedule. The learning rate is increased from 0 to 1e-4 in 10000 steps for \(\mathbf{\Omega}\), and similarly for \(\mathbf{\Phi}\) the learning rate is increased from 0 to 1e-5 in 10000 steps.
After training we apply transfer learning to the VisionNeRF by freezing the weights of \(\mathbf{\Omega}\) and \(\mathbf{\Phi}\) and training only \(\mathbf{\Psi}\) to obtain the complete grasp success estimator \(\mathbf{\Theta}\). A categorical cross-entropy loss is used, with one successful grasp pose \(g\) for an observation \(o\) as a positive example labeled as 1 and multiple poses randomly sampled from the workspace as negative examples labeled as 0. To obtain a valid grasp pose, we sample a position \(x\) on the top surface of the (prismatic) object - the optimal site for a suction gripper. We set the direction \(d\) perpendicular to this surface. We use the Adam optimizer with learning rate 1e-4 and sample 2047 negative samples.
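As an illustrative PyTorch-style sketch (with hypothetical helper names), the loss over one positive pose and the 2047 sampled negatives amounts to a softmax cross-entropy over the 2048 candidate scores:

```python
import torch
import torch.nn.functional as F

def grasp_success_loss(theta, observation, positive_pose, negative_poses):
    """Categorical cross-entropy over one valid grasp pose and the sampled negatives.

    theta(g, observation) is the scalar grasp success score of Eq. (3) with the
    VisionNeRF backbone frozen; index 0 is the positive example labeled as 1.
    """
    poses = [positive_pose] + list(negative_poses)                   # 1 positive + 2047 negatives
    scores = torch.stack([theta(g, observation) for g in poses])     # shape (2048,)
    target = torch.zeros(1, dtype=torch.long, device=scores.device)  # class 0 = the valid grasp
    return F.cross_entropy(scores.unsqueeze(0), target)
```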
As baseline, we use the same architecture, but instead of pretraining \(\mathbf{\Omega}\) and \(\mathbf{\Phi}\), we only load the ViT pretrained weights. We then train \(\mathbf{\Omega}\), \(\mathbf{\Phi}\) and \(\mathbf{\Psi}\) jointly, with the same configurations as described above.
### Gradient based Optimization
To solve the optimization problem, we use a gradient-based optimization method. We apply the Adam optimizer with a decaying learning rate starting at 0.05 with decay rate 0.8 after each step to minimize the objective function \(-\mathbf{\Theta}(g)\) over \(g\in\mathbf{G}\) grasp poses, thus maximizing the estimated grasp success. We initialize the optimization process with \(2^{13}\) random poses as grasp candidates, and the optimization is allowed to run for a maximum of 16 iterations.
Since we constrain ourselves to 3DoF poses only, we fix the direction \(d\) and only optimize \(x\). This constraint is also applied while training the grasp optimizer by only sampling negative examples with the same direction.
In the context of grasping, the gradient used for optimization corresponds to rigid transformations of the gripper that lead to more successful grasp poses.
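A PyTorch-style sketch of this optimization loop is given below; here theta(x, observation) is assumed to be a batched, differentiable wrapper around the learned success estimator with the grasp direction held fixed, and lo, hi bound the workspace (all names are illustrative):

```python
import torch

def optimize_grasps(theta, observation, lo, hi,
                    n_candidates=2 ** 13, steps=16, lr=0.05, decay=0.8):
    """Gradient-based grasp pose optimization: maximize the estimated success by minimizing -theta.

    theta(x, observation) is assumed to return one success score per row of x,
    with the grasp direction fixed (3DoF case); lo and hi bound the workspace.
    """
    x = lo + (hi - lo) * torch.rand(n_candidates, 3)
    x.requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=decay)  # decay rate 0.8 per step
    for _ in range(steps):
        opt.zero_grad()
        (-theta(x, observation).sum()).backward()   # ascend on the estimated grasp success
        opt.step()
        sched.step()
    with torch.no_grad():
        scores = theta(x, observation)
    best = scores.argsort(descending=True)[:5]      # keep the five most promising candidates
    return x.detach()[best], scores[best]
```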
## 4 Experiments
We use a simulated tabletop environment to evaluate the performance of the proposed approach on 3DoF robotic grasping tasks. There are three fixed-pose cameras in the environment that provide camera images as observations. To measure the accuracy of the grasping predictions, we computed the translation
error, which represents the distance between a predicted grasp position and the nearest valid grasp position. Our approach enables the simultaneous optimization of multiple grasp candidates by maximizing their predicted success rate \(\mathbf{\Theta}(g,o)\). We evaluated its performance using two different metrics:
* **best-success**: the translation error of the grasp with the highest predicted success rate
* **lowest-from-5**: the lowest translation error among the five grasp candidates with the highest predicted success rates, which can be roughly understood as: if a grasp fails, we can try the next best candidate (a short sketch computing both metrics follows this list)
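For concreteness, a small sketch (with illustrative names) that computes both metrics from the predicted positions ranked by estimated success and the set of valid grasp positions:

```python
import numpy as np

def translation_error_metrics(ranked_predictions, valid_positions):
    """best-success and lowest-from-5 translation errors.

    ranked_predictions: predicted grasp positions sorted by decreasing predicted
    success (shape (n, 3)); valid_positions: valid grasp positions (shape (m, 3)).
    """
    preds = np.asarray(ranked_predictions, dtype=float)
    valid = np.asarray(valid_positions, dtype=float)
    dists = np.linalg.norm(preds[:, None, :] - valid[None, :, :], axis=-1).min(axis=1)
    return dists[0], dists[:5].min()   # best-success, lowest-from-5
```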
For a task, we spawn objects from one of the following sets of objects (see Figure 2):
* **single object**: red cross (0.05)
* **multi object**\(\mathbf{A}\): red cross (0.05), green square (0.07), yellow rectangle (0.015), dark blue L-shape (0.03), orange T-shape (0.09)
* **multi object**\(\mathbf{B}\): red cross (0.05), turquoise square (0.08), green long rectangle (0.02), white U-shape (0.06), dark blue double-L-shape (0.03)
* **multi object**\(\mathbf{C}\): blue rectangle (0.04), yellow L-shape (0.02), orange T-shape (0.07), purple block-ring (0.05)
All objects are prismatic and are characterized by their bases, heights (in meters) and colors. The set multi object A contains objects similar to some of the other multi object sets, while multi object B and multi object C contain mainly different objects.
We define two tasks in which objects need to be grasped:
* **single object grasp**: the object of the single object set is spawned in a random position in the workspace
* **multi object grasp**: five objects are sampled from a multi object set and spawned in random non-overlapping positions in the workspace; the objects need to be removed successively one by one resulting in 5 different scenes for a complete episode
In our experiment we use three different backbones:
* **no-NeRF**: the baseline without pretraining the VisionNeRF
* **single-NeRF**: a VisionNeRF trained on 100 scenes of the single object grasp task, corresponding to 100 complete episodes of the task
Figure 2: The different object sets used during training and evaluation.
* **multi-NeRF**: a VisionNeRF trained on 500 scenes of the multi object grasp task with objects from the multi object A set, corresponding to 100 complete episodes of the task
Both single-NeRF and multi-NeRF are trained for 8000 epochs with batch-size 1. Source and target camera images are sampled from the three fixed-pose camera observations. In contrast, NeRFs are generally trained using many different views which is not realistic in real world setups.
With all three backbones, we train two grasp success estimator modules:
* **single-grasp**: trained on 100 scenes of the single object grasp task, corresponding to 100 complete demonstrations of the task
* **multi-grasp**: trained on 100 scenes of the multi object grasp task with objects from the multi object B set, corresponding to 20 complete demonstrations of the task,
resulting in six models overall. All grasp estimator modules are trained for 250 epochs. The models are evaluated on four tasks:
* **single-object-task**: 50 scenes of the single object grasp task using the single object object set, corresponding to 50 complete episodes
* **multi-object-A-task**: 50 scenes of the multi object task using objects from the multi object A object set, corresponding to 10 complete episodes; note that these objects were also used for training the multi-NeRF module
* **multi-object-B-task**: 50 scenes of the multi object task using objects from the multi object B object set, corresponding to 10 complete episodes; note that these objects were also used for training the multi-grasp module
* **multi-object-C-task**: 50 scenes of the multi object task using objects from the multi object C object set corresponding to 10 complete episodes
For each task we obtain three observations \(o_{1}\), \(o_{2}\) and \(o_{3}\), one for each camera. For \(\mathbf{\Theta}(g,o)\) however, we only need one observation, thus we define two optimization objectives:
* **1-view**: \(\mathbf{\Theta}(g,o_{1})\)
* **3-views**: \(\sum_{i\in[1,2,3]}\mathbf{\Theta}(g,o_{i})\)
We record the best five grasp candidates with the highest estimated grasp success score after 8, 12 and 16 optimization steps for evaluation.
## 5 Results and Discussion
### Qualitative Analysis of Architecture Modules
#### 5.1.1 VisionNeRF
As described above, we trained our VisionNeRF for novel view synthesis using only three perspectives. This leads to a strong bias towards these perspectives when rendering new perspectives given a camera image from a known perspective. When rendering perspectives that were used during training, the quality of the image is far superior to that for other perspectives; however, the representation of the objects in the workspace remains mostly consistent, as shown in Fig. 3.
#### 5.1.2 Grasp success estimator and grasp pose optimization
To ensure that our grasp pose estimation method, which involves gradient-based optimization, is accurate, the learned grasp success estimation function must correctly map 3D (or 5D) space to grasp success. Ideally, the function should assign higher success estimates to points closer to valid grasp positions. Although our learned grasp success estimator has some local maxima that do not correspond to valid grasp poses, the global maxima do, as shown in Fig. 4.
Of course, gradient-based optimization methods are prone to getting stuck in local optima. We overcome this problem by initializing the optimization method with many initial grasp candidates, as described in 3.2, and evaluating the grasp candidates that have the highest grasp success estimation at the end of the optimization. Fig. 5 shows the successively improved poses of a grasp candidate during optimization and the estimated grasp success progression of the grasp candidates with the highest estimated grasp success at the end of the optimization. This also suggests that the gradient does indeed correspond to a rigid transformation that moves the gripper towards better grasp poses.
Figure 4: A visualization of the grasp success estimation: the discretized workspace of an instance of the single-grasp-task (left) is mapped to its 3-views optimization objective \(\sum_{i\in[1,2,3]}\boldsymbol{\Theta}(g,o_{i})\) (middle). On the right, only the points with the highest success estimation values are shown, also corresponding to the object’s position in the workspace.
Figure 3: VisionNeRF rendering of new perspectives given a source image with known perspective. The left- and right-most renderings belong to perspectives that were used at training and produce better quality images. For the other perspectives, the static objects in the scene (ground and robot) seems to fall apart, but the objects are rendered consistently even if they were occluded in the ground truth images.
### Robotic Grasping Performance Evaluation
Using 3-views instead of 1-view as the optimization objective reduces the errors of our approach by an average of over 60% across all backbone and grasp success estimator combinations, as shown in Fig. 6, for both the best-success and lowest-from-5 metrics. Furthermore, for both optimization objectives, the architectures with a pretrained NeRF outperform the models that did not make use of transfer learning, while models with multi-NeRF mostly even outperform models with single-NeRF. A significant exception is observable when models using multi-grasp are evaluated with the best-success metric, where single-NeRF architectures do not outperform models without a pretrained NeRF backbone. This suggests that using a single view in the objective does not depict reality as reliably as using three views.
In the single-object-task (Fig. 6), architectures that combine a single-NeRF with a single-grasp model can slightly outperform their multi-NeRF counterparts, which is however reasonable, as both the single-NeRF and the single-grasp models were trained on the same object set that is used in this task.
We observe similar behaviour in the results of multi-grasp models with different backbones on the different multi-object-tasks (Fig. 7, right): models using a NeRF backbone mostly outperform models without a pretrained NeRF backbone, and multi-NeRF outperforms single-NeRF in most cases. Additionally, all models perform best on the multi-object-B-task, which is again reasonable, as the multi-grasp models were trained on the same object set. In case of the multi-object-A-task, the model using multi-NeRF clearly outperforms the other models, which is most likely due to the fact that the multi-NeRF was also trained on the multi object A object set, thus it likely extracts the most descriptive features from scenes with these objects. In the multi-object-C-task the no-NeRF end-to-end model and the single-NeRF model perform similarly and are still outperformed by the multi-NeRF architecture.
When we examine single-grasp models on the same tasks (Fig. 7, left), only the multi-object-A-task shows the same pattern regarding backbone configuration. For the multi-object-B-task, the architecture without a pretrained NeRF
Figure 5: Grasp pose improvement during optimization (left) and the estimated grasp success progression of the five grasp candidates that have the highest estimated grasp success at the end of the optimization (right).
outperforms the single-NeRF architecture, while the model with a multi-NeRF backbone still outperforms both. In the case of the multi-object-C-task, however, the end-to-end architecture delivered the best results.
On average, our models achieve their best performance after 16 optimization steps, though only slightly better than after 12 steps. Table 1 summarizes the average errors (in mm) for all models after 16 optimization steps. Considering the best-success metric, for the single-object-task the single-NeRF with a single-grasp module performs best, although it is worth noting that both multi-NeRF based models have a less than 0.7 mm larger average error. In case of all multi-object-tasks, the multi-NeRF and multi-grasp model shows the best performance, albeit in the multi-object-B-task only slightly better than the other models with a multi-grasp module.
The lowest-from-5 metric models the case when we also consider retrying a failed grasp. As Table 1 demonstrates, the results show similar trends, though not as consistent as for the best-success metric. There is one major outlier: in the case of the multi-object-C-task, the end-to-end model without a pretrained NeRF backbone combined with a single-grasp module outperforms all other combinations.
Overall, our results show that it is beneficial to apply transfer learning to a pretrained VisionNeRF model to obtain a model that explicitly maps grasp poses to grasp success. Furthermore, the results suggest that if a VisionNeRF was trained on multiple objects instead of one, thus learning a more descriptive representation of the scene, the obtained grasp success estimator is also better.
Figure 6: Error distribution of different model configurations on the single-object-task after 8, 12, and 16 optimization steps using 1-view and 3-views as optimization objective for the metrics best-success and lowest-from-5.
While single-grasp models are not able to generalize very well to novel objects, the multi-grasp model, which was trained on the multi object B object set, performed reasonably well on objects from the multi object A set, containing partly similar objects, and also on objects from the multi object C set, containing objects of different shapes and colors. Considering all tasks, the best model is the combination of the multi-NeRF and the multi-grasp model, achieving an average error of 3mm.
## 6 Limitations and Future Work
The VisionNeRFs we train use only three camera perspectives, leading to distorted rendering for other perspectives. This shows that the learned scene representation is far from perfect. Our method would most likely benefit from a NeRF trained with many perspectives. This, however, also points to a major limitation, as such a model would take a huge effort to obtain in a real-world
Figure 7: Best-success error distribution of different model configurations on all multi-object-tasks after 8, 12, and 16 optimization steps using 3-views as optimization objective.
scenario. A possible future work is to investigate the creation of such a model by applying sim-to-real transfer learning methods, thus reducing the amount of real-world data required.
Another strong limitation is that our experiments consider only 3DoF grasps and only in simulation. Exploring 5DoF and 6DoF grasps, especially in real-world experiments, is crucial for a successful model architecture and is thus also within the scope of possible future work.
For training the grasp estimation models, we used 100 demonstrations from each task. On one hand, this does not seem like much, considering that deep learning architectures usually require an exceptionally large body of training data; on the other hand, for a real-world application it would be beneficial to reduce the number of demonstrations to a minimum while retaining the robustness of the method.
## 7 Conclusion
In this work, we propose a unique approach to robotic grasping, employing transfer learning on a trained VisionNeRF to explicitly map grasp poses to their corresponding grasp success. We further applied a gradient-based optimization method on this learned mapping to refine the poses of grasp candidates and thereby attain successful grasp poses. We demonstrated the efficacy of our method on four simulated 3DoF robotic grasping tasks, and showed its ability to generalize to novel objects.
A clear direction for future work is the extension of our method to 5DoF and 6DoF grasps and its application to real-world tasks. The methodology we propose here is not limited to robotic grasping, but can be extended to estimate
**best-success** (average error in mm)

| Task | single-grasp / no-NeRF | single-grasp / single-NeRF | single-grasp / multi-NeRF | multi-grasp / no-NeRF | multi-grasp / single-NeRF | multi-grasp / multi-NeRF |
|---|---|---|---|---|---|---|
| so | 9.39 | **2.94** | 3.17 | 17.50 | 5.68 | 3.61 |
| mo-A | 38.46 | 28.43 | 18.81 | 26.47 | 10.73 | **8.46** |
| mo-B | 41.98 | 63.30 | 13.34 | 3.43 | 3.59 | **3.41** |
| mo-C | 10.67 | 36.50 | 29.98 | 12.86 | 13.09 | **9.33** |

**lowest-from-5** (average error in mm)

| Task | single-grasp / no-NeRF | single-grasp / single-NeRF | single-grasp / multi-NeRF | multi-grasp / no-NeRF | multi-grasp / single-NeRF | multi-grasp / multi-NeRF |
|---|---|---|---|---|---|---|
| so | 2.70 | 1.05 | **0.91** | 4.87 | 2.16 | 1.31 |
| mo-A | 22.22 | 22.87 | 13.67 | 13.63 | 5.75 | **3.40** |
| mo-B | 28.45 | 28.21 | 9.38 | 1.16 | **1.09** | 1.29 |
| mo-C | **3.53** | 24.87 | 25.10 | 4.66 | 7.44 | 5.93 |

Table 1: Average errors of all models using the 3-views optimization objective, in mm, according to the best-success and lowest-from-5 metrics. The single-object-task is denoted as so and the different multi-object-tasks are denoted as mo-X, with X referring to the object set they were defined on. The best value per task and metric is shown in bold.
the success of other types of robotic manipulation or interaction. An intriguing prospect for future development of our work could involve integrating additional criteria, tailored to specific tasks, into the optimization objective. One such criterion could be language conditioning, which could provide a foundation for robots to handle tasks of greater complexity.
|